\section{Comparison of the Topologies}\label{sec:comparative}
In this section, the topologies presented in Sections \ref{sec:projective} and \ref{sec:topologies} are compared in terms of the cost model presented in Section \ref{sec:model}. The section is divided into three subsections. The first considers the complete picture of all the networks with diameters from 1 to 6; it also includes other topologies, such as the dragonfly~\cite{Kim_dgfly_ISCA}, the 3D Hamming graph and the hypercube, as useful references. The second subsection focuses on a detailed comparison between projective networks and Slim Fly. Finally, the third subsection considers different implementations for two specific numbers of compute nodes: 10,000 and 25,000.
\subsection{General Comparison}\label{subsec:comparison}
\begin{table}
\begin{center}\scriptsize
\begin{tabular}{|lccc|}
\hline
\rule{0pt}{2.5ex}Graph & $k$ & $\lim_{N\rightarrow \infty}\bar k$ & $\lim u$ \\
\hline
Complete graph $K_N$ & 1 & 1 & 1 \\
\hline
Turán($N$,$r$) & 2 & $1+\frac{1}{r}$ & 1 \\
Complete bipartite graph $K_{n,n}$ & 2 & 1.5 & 1 \\
Hamming graph 2D & 2 & 2 & 1 \\
Demi-projective network $\overline{G}_q$ & 2 & 2 & 1 \\
Slim Fly MMS for $q=4w+\varepsilon$ & 2 & 2 & $8/9$ \\
\hline
Projective network $G_q$ & 3 & 2.5 & 1 \\
Dragonfly & 3 & 3 & 1 \\
Delorme's graph on quadrangles & 3 & 3 & 1 \\
Hamming graph 3D & 3 & 3 & 1 \\
\hline
Incidence graph of generalized quadrangles & 4 & 3.5 & 1 \\
Delorme's graph on hexagons & 5 & 5 & 1 \\
Incidence graph of generalized hexagons & 6 & 5.5 & 1 \\
Random graph with $N$ vertices & \hbox to 0pt{\hss$\sim\frac{\log(N)}{\log{\Delta}}$\ \hss} & $\sim\frac{\log(N)}{\log{\Delta}}$ & $\approx$0.8\\
\hline
Hypercube $C_2^n$ & $n$ & $n/2$ & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Topological parameters of optimal topologies and some references.}
\label{tbl:topologies}
\end{table}
\begin{table*}
\begin{center}\scriptsize
\renewcommand\arraystretch{1.1}
\begin{tabular}{|lccccc|}
\hline
Graph & $T$ & $R$ & $N$ & $\Delta$ & $\Delta_0$ \\
\hline
Complete graph $K_N$ & $N^2$ & $2N-1$ & $N$ & $N-1$ & $N$ \\
\hline
Turán($N$,$r$) & $N^2\frac{r-1}{r+1}$ & $N\frac{(r-1)(2r+1)}{r(r+1)}$& $N$ & $N\frac{r-1}{r}$ & $N\frac{r-1}{r+1}$ \\
Complete bipartite graph $K_{n,n}$ & $4n^2/3$ & $5n/3$ & $2n$ & $n$ & $2n/3$ \\
Hamming graph 2D of side $n$ & $n^3$ & $3n-2$ & $n^2$ & $2(n-1)$ & $n$ \\
Demi-projective network $\overline{G}_q$ & $q^3/2+q^2+q+1/2$ & $3(q+1)/2$ & $q^2+q+1$ & $q+1$ & $(q+1)/2$ \\
Slim Fly MMS for $q=4w+\varepsilon$ & $4/9q^2(3q-\varepsilon)$ & $13/18(3q-\varepsilon)$ & $2q^2$ & $(3q-\varepsilon)/2$ & $2/9(3q-\varepsilon)$ \\
\hline
Projective network $G_q$ & $4/5(q^3+2q^2+2q+1)$ & $7(q+1)/5$ & $2(q^2+q+1)$ & $q+1$ & $2(q+1)/5$ \\
Dragonfly with $h$ global links per router & $4h^4+2h^2$ & $4h-1$ & $4h^3+2h$ & $3h-1$ & $h$ \\
Delorme's graph on generalized quadrangles & $(q+1)^2(q^2+1)/3$ & $4/3(q+1)$ & $q^3+q^2+q+1$ & $q+1$ & $(q+1)/3$ \\
Hamming graph 3D of side $n$ & $n^4$ & $4n-3$ & $n^3$ & $3(n-1)$ & $n$ \\
\hline
Incidence graph of generalized quadrangles & $4/7(q+1)^2(q^2+1)$ & $9/7(q+1)$ & $2(q^3+q^2+q+1)$& $q+1$ & $2(q+1)/7$ \\
Delorme's graph on generalized hexagons & $1/5(q^4+q^2+1)(q+1)^2$ & $6/5(q+1)$ & $q^5+\dotsb+q+1$& $q+1$ & $(q+1)/5$ \\
Incidence graph of generalized hexagons & $4/11(q^4+q^2+1)(q+1)^2$ & $13/11(q+1)$ & $2(q^5+\dotsb+q+1)$& $q+1$ & $2(q+1)/11$ \\
Random graph with $N$ vertices and degree $\Delta$ & $\Delta\log(\Delta)N/\log(N)$ & $\Delta(1+\frac{\log \Delta}{\log N})$ & $N$ & $\Delta$ & $\sim \frac{\Delta\log{\Delta}}{\log{N}}$ \\
\hline
Hypercube $C_2^n$ & $2^{n+1}$ & $n+2$ & $2^n$ & $n$ & 2 \\
\hline
\end{tabular}
\end{center}
\caption{Structural parameters of optimal known topologies and some references.}
\label{tbl:structural}
\end{table*}
\foreach \R in {64}
{
\begin{figure*}[ht]
\begin{center}
\newdimen\crossx
\includegraphics{cost_by_terminals_R=64.pdf}
\end{center}
\caption{The measure of cost $\bar k/u$ in realizations of topologies with a given number of compute nodes using routers with maximum radix \R.}
\label{fig:cost}
\end{figure*}
}
\begin{figure*}[ht]
\begin{center}
\newdimen\crossx
\includegraphics{map.pdf}
\end{center}
\caption{Scalability of the different topologies.}
\label{fig:map}
\end{figure*}
Table~\ref{tbl:topologies} summarizes the fundamental parameters of the graphs presented in Section \ref{sec:topologies}: the diameter and the limit values of average distance and utilization.
Table~\ref{tbl:structural} contains the parameters relevant to a network implementing the topology.
Both tables present these values for the optimal graphs, for other graphs which are close to optimal, and for reference graphs such as the hypercube.
Figure~\ref{fig:cost} illustrates the cost of networks implementing different topologies using routers with at most 64 ports.
Other values of $R$ give similar plots.
The thick black curve is the average distance corresponding to an ideal generalized Moore graph with $u=1$ (cf.\ Equation~\eqref{eq:final}), which is a lower bound for the values of the other curves.
Every other curve corresponds to a topology, built for all feasible radixes up to 64.
We have tried to keep $\Delta_0$ a natural number, but sometimes this condition has been relaxed to avoid under- or over-subscription, which would distort the figure.
The ordinate axis shows the value $\bar k/u$ which, according to Equation~\eqref{eq:cost}, is a measure of the cost associated with the topology. Thus, the curves that attain the bound correspond to the optimal topologies: the complete graph, the Turán graphs, the 2D Hamming graph, demi-PN, PN and Delorme's graph $P_3(\mathbb F_q)$. Note that $P_3(\mathbb F_q)$
intersects the curve only in the limit. Moreover, it only exists when $\Delta-1$ is an odd power of 2, which means that there are only two points in the range $R\leq 64$.
The MMS graph does not attain the bound because of its asymmetry; as seen in previous sections, the MMS has $u=8/9$ in the limit. Hence, its curve is about $9/8$ times that of the demi-PN.
For greater average distances the dragonflies scale very well, although without attaining the bound. As can be observed, the 3D Hamming graph is completely superseded by the dragonfly.
Figure~\ref{fig:map} indicates which topologies are realizable for a given number of terminals $T$ and available router radix $R$. The solid lines are sorted by average distance; hence, the optimal topology for a desired $(R,T)$ point is the solid line immediately above it.
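As a concrete instance of the curves in Figure~\ref{fig:cost}, the following Python sketch estimates, from the formulas of Table~\ref{tbl:structural}, the largest demi-PN and PN realizable with routers of radix at most 64. The prime-power search helper is our own addition, and no rounding of $\Delta_0$ is applied, so the terminal counts are approximate.

```python
# Rough check of the scalability formulas in Table "tbl:structural",
# assuming routers of maximum radix R = 64.
def is_prime_power(n):
    """True if n = p^e for some prime p and exponent e >= 1 (our own helper)."""
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1     # pure power of its smallest prime factor
        p += 1
    return True               # n itself is prime

R = 64

# demi-PN: radix R = 3(q+1)/2 and T = (q^3 + 2q^2 + 2q + 1)/2
q_demi = max(q for q in range(2, 100)
             if is_prime_power(q) and 3 * (q + 1) / 2 <= R)
T_demi = (q_demi**3 + 2 * q_demi**2 + 2 * q_demi + 1) / 2   # q = 41

# PN: radix R = 7(q+1)/5 and T = 4/5 (q^3 + 2q^2 + 2q + 1)
q_pn = max(q for q in range(2, 100)
           if is_prime_power(q) and 7 * (q + 1) / 5 <= R)
T_pn = 4 / 5 * (q_pn**3 + 2 * q_pn**2 + 2 * q_pn + 1)       # q = 43
```

For $R=64$ this gives $q=41$ (about 36,000 terminals) for the demi-PN and $q=43$ (about 67,000 terminals) for the PN, consistent with the scalability differences discussed above.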
\subsection{Projective Networks vs Slim Fly}\label{subsec:discu}
This subsection explains in more detail the advantages of PN and demi-PN with respect to the SF MMS in the design of new large-scale interconnection networks. It will be shown that link utilization is an important parameter of the network cost model. For this explanation, Figure~\ref{fig:cost_Brown} will be used, which shows both curves $\bar{k}$ and $\frac{\bar{k}}{u}$ for the three topologies PN, demi-PN and SF MMS. Note that for PN both curves coincide, since the graphs $G_q$ are symmetric, as proved in Theorem~\ref{thm:symmetry}.
Clearly, if only average distance is considered, the smallest cost is given by the SF MMS. However, its maximum size is a factor $\frac{8}{9}$ of the largest possible one, which is attained by the demi-PN construction. The reader should notice that the abscissa axis is logarithmic, so this difference appears smaller in the figure than it is. Moreover, once link utilization is included in the network cost model, for more than $1000$ compute nodes the demi-PN emerges as the best alternative both in scalability and cost.
Finally, PN is an alternative for scaling to a larger number of compute nodes, reaching almost $10^5$ compute nodes at minimum cost.
\begin{figure}[]
\begin{center}
\includegraphics[]{MMS_BER64}
\end{center}
\caption{The measure of cost $\bar k/u$ and $\bar k$ given a number of terminals for SF~MMS, PN and demi-PN. Using routers with maximum radix 64.}
\label{fig:cost_Brown}
\end{figure}
\subsection{Cases of Use}\label{subsec:cases_of_use}
To exemplify the use of the topologies, this subsection presents different specific networks connecting a given number of compute nodes.
Two approximate network sizes have been selected: 10,000 compute nodes and 25,000 compute nodes.
Even for the small case of $T\approx 10,000$, the complete graph would require a router radix of about $R\approx 200$, which is currently unrealistic.
Hence, the topologies to be considered will be the Hamming graph, the demi-PN, the SF MMS, the PN and the dragonfly.
Tables~\ref{tbl:10000} and \ref{tbl:25000} show the network parameters for each of the selected topologies in the small and large cases, respectively.
The calculations assume that nodes are arranged into fully electrical groups and cables outside them are optical.
These groups are as close as possible to 500 compute nodes, while trying to maximize the number of connections inside a group.
An electrical group size marked with asterisk in the tables indicates the size for most electrical groups, with a few smaller groups.
For a fair comparison, we have employed the cost models from~\cite{Besta} using speeds of 40~Gbps, avoiding the extra costs of 100G routers and cables which are still in their market introduction stage.
An average intra-rack distance of 1m is assumed, which yields a price of 0.985\$/Gbps for the average electrical cable.
The average length of the optical inter-rack cables is approximated as the average distance of a mesh of the same dimensions, plus 2m of overhead.
In the 10,000-node case this yields an average cost per optical cable of 7.7432\$/Gbps, and in the 25,000-node case, of 7.9178\$/Gbps.
The cost per router is modelled as $350.4 R-892.3$ \$/router.
The only power considered is that consumed by the SerDes, approximated as 2.8 watts per port.
Tables~\ref{tbl:10000} and ~\ref{tbl:25000} show cost and power per node for the topologies studied. The lowest cost and power are obtained in both cases with a 2D Hamming graph. However, its required switches exceed the current limit of 48 available ports, so it could only be built with either slower links or using multi-chip switches with higher latency, as discussed in Section~\ref{sec:model}. Next, we consider designs realizable with full speed and a single switch chip per router.
With $T\approx 10,000$ nodes, the demi-PN provides the lowest cost and power, 1\% and 7\% lower, respectively, than SF MMS. For $T\approx 25,000$, a diameter-3 network is required when using switches of up to 48 ports. In this case, the PN provides the lowest power, 10.9\% less than the dragonfly. A layout of a projective network requires more optical cables than SF MMS or the dragonfly, so in this case the cost of the dragonfly is 2.6\% lower because of its reduced number of optical cables.
Note that, for an all-optical system such as PERCS \cite{Arimilli}, projective networks provide significantly better power and cost per node than the alternatives in the tables.
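These per-node figures can be reproduced directly from the stated model. The following Python sketch recomputes the demi-PN(27) column of Table~\ref{tbl:10000}; the tiny residual with respect to the tabulated 1282.59\,\$ comes from rounding in the average cable lengths.

```python
# Reproducing the demi-PN(27) column of Table "tbl:10000" from the cost model
# stated above: 40 Gbps links, 0.985 $/Gbps electrical and 7.7432 $/Gbps
# optical cables, routers priced at 350.4 R - 892.3 $, and 2.8 W per port.
GBPS = 40
T, R, N = 10598, 42, 757            # terminals, router radix, routers
electrical, optical = 556, 10028    # cable counts from the table

router_cost = N * (350.4 * R - 892.3)
cable_cost = (electrical * 0.985 + optical * 7.7432) * GBPS
cost_per_node = (router_cost + cable_cost) / T    # ~1282.6 $ per node

power_per_node = 2.8 * R * N / T                  # ~8.40 W per node
```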
\begin{table*}
\begin{center}
\begin{tabular}{|c|ccccc|}
\hline
Topology & Hamming $K_{22}^2$ & demi-PN(27) & SF MMS(19) & PN(23) & dragonfly(7)\\
\hline
T & 10648 & 10598 & 9386 & 9954 & 9702\\
\textbf{R} & \textbf{64} & \textbf{42} & \textbf{42} & \textbf{33} & \textbf{27}\\
N & 484 & 757 & 722 & 1106 & 1386\\
$\Delta_0$ & 22 & 14 & 13 & 9 & 7\\
subscription & 1.002 & 0.999 & 0.991 & 0.921 & 0.994\\
Size of electrical group & 484 & 504* & 494 & 396* & 490*\\
Number of groups & 22 & 22 & 19 & 26 & 20\\
Electrical cables & 5082 & 556 & 3971 & 1907 & 8926\\
Optical cables & 5082 & 10028 & 6498 & 11365 & 4514\\
\hline
\textbf{Cost per node (\$)} & \textbf{1145.41} &\textbf{1282.59}&\textbf{1294.51} & \textbf{1546.83} & \textbf{1404.42}\\
\textbf{Power per node (W)} & \textbf{8.15} & \textbf{8.40} & \textbf{9.05} & \textbf{10.27} & \textbf{10.80}\\
\hline
\end{tabular}
\end{center}
\caption{Example networks with about 10,000 compute nodes and electrical groups of about 500 nodes.}
\label{tbl:10000}
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{|c|ccccc|}
\hline
Topology & Hamming $K_{29}^2$ & demi-PN(37) & SF MMS(27) & PN(31) & dragonfly(9)\\
\hline
T & 24389 & 26733 & 26244 & 25818 & 26406\\
\bf R & \bf 85 & \bf 57 & \bf 59 & \bf 45 & \bf 35\\
N & 841 & 1407 & 1458 & 1986 & 2934\\
$\Delta_0$ & 29 & 19 & 18 & 13 & 9\\
subscription & 1.001 & 0.999 & 0.976 & 1.003 & 0.996\\
Size of electrical group & 435* & 532* & 486 & 520* & 486*\\
Number of groups & 58 & 51 & 54 & 51 & 55\\
Electrical cables & 5684 & 620 & 10935 & 3381 & 25101\\
Optical cables & 17864 & 26094 & 18954 & 28395 & 13041\\
\hline
\bf Cost per node (\$) & \bf 1237.43 & \bf 1314.29 & \bf 1344.11 & \bf 1497.77 & \bf 1457.39\\
\bf Power per node (W) & \bf 8.21 & \bf 8.40 & \bf 9.18 & \bf 9.70 & \bf 10.89\\
\hline
\end{tabular}
\end{center}
\caption{Example networks with about 25,000 compute nodes and electrical groups of about 500 nodes.}
\label{tbl:25000}
\end{table*}
\section{Conclusions}\label{sec:conclusions}
Projective networks have been proposed in this paper for large systems using direct networks. These networks are built using incidence graphs of projective planes. Our proposal relies on a coarse-grain cost model based on minimizing the average distance of the network while maintaining a uniform link utilization. The optimal networks under this cost model are those generalized Moore graphs which have uniform link utilization and, in particular, those that are symmetric. Through a complete study of all the currently known families of generalized Moore graphs, the optimal network under this cost model can be chosen for a given router radix and number of compute nodes. In particular, projective networks have been proved to be a feasible alternative to the recently proposed Slim Fly. Finally, a first approach to the case of indirect networks has been considered. Our cost model has been adapted to this situation only for diameter-2 networks, since a general model for any diameter seems unfeasible. As has been shown, the optimal indirect networks for this case are the two-level Orthogonal Fat Trees, which can also be obtained by means of incidence graphs of projective planes.
\section{Indirect Networks}\label{sec:indirect}
Previous sections of this paper have studied direct networks, giving general bounds on the number of nodes of optimal topologies. Moreover, topologies that are close to these bounds have also been studied. However, indirect topologies are popular in industry. For example, Clos networks are in widespread use, since roughly half of the current supercomputers on the Top 500 list employ them \cite{top500}. Hence, this section explores how the cost model presented in this paper can be adapted to indirect networks. Moreover, the cost-optimal diameter-2 indirect network, which is the Two-Level Orthogonal Fat Tree~\cite{Valerio}, can also be obtained using the incidence graph of a projective plane. Hence, this section also illustrates how the graph-theoretical models previously used to obtain optimal direct networks can be applied when dealing with indirect networks.
An \textsl{indirect network} has two types of routers, since a router may or may not host compute nodes. Therefore, there are \textsl{spine routers}, which are connected only to other routers, and \textsl{leaf routers}, which are also connected to compute nodes. Typically, all routers use the same hardware, so it can be assumed that every router has the same radix $R$. In addition, it will be assumed that all leaf routers have the same number $\Delta_0$ of attached compute nodes. Therefore, the graph defined by the routers has two kinds of vertices: leaf vertices of degree $\Delta$ and spine vertices of degree $R$, which clearly implies that it cannot be vertex-transitive. Note that the relation $R=\Delta+\Delta_0$ considered for direct networks still holds in the case of indirect networks. In the following, the number of leaf routers will be denoted by $L$ and the number of spine routers by $S$. Thus, the total number of routers will be $N=L+S$.
When considering the graph model to study indirect networks, the main difference with the direct case lies in the computation of the diameter and the average distance.
In this case, the distances of interest are those between leaves, so a large distance between some leaf and some spine router becomes irrelevant.
Thus, instead of the diameter, the maximum distance among leaves is considered; and instead of the average distance, the average distance between leaves, still denoted by $\bar k$.
In the remainder of the section it will be shown how the graph theoretical techniques presented in previous sections can be used to infer indirect network topologies with good properties.
A first example considers the indirect topology presented by Fujitsu in~\cite{Fujitsu}. This topology, denoted Multi-layer Full-Mesh (MLFM), can be obtained from the incidence graph of a complete graph $K_n$. To explain this construction, let us refer to Figure~\ref{fig:Fujitsu}, where the network is constructed using the incidence graph of $K_4$. Figure~\ref{fig:Fujitsu}~a) shows a standard representation of the incidence graph of $K_4$. The square-shaped vertices are the vertices of the complete graph and the circle-shaped ones represent the incidence. For example, since there is an edge joining vertices $a$ and $b$ in $K_4$, vertex $A$ is adjacent to both in the incidence graph. Figure~\ref{fig:Fujitsu}~b) shows a different representation of this graph, where the vertices on the bottom are the vertices of $K_4$ and the vertices on the top are its edges. Thus, the upper vertices correspond to spine routers and the bottom vertices to leaf routers. Finally, in order to equalize the radix of the routers, leaves are replicated and compute nodes are added, as represented in Figure~\ref{fig:Fujitsu}~c). In general, such a configuration can be obtained from any $K_n$, thus obtaining an indirect network topology with $\binom {n} {2}$ spine routers and $n(n-1)$ leaf routers, each one connected to $n-1$ compute nodes. Therefore, $\Delta = n-1$, $\Delta _0 = n-1$ and $R = 2 \Delta$. However, as will be shown next, this topology is far from being cost-optimal among all the indirect topologies of diameter 2.
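The MLFM construction can be checked with a short program. The following Python sketch (our own illustration, not code from~\cite{Fujitsu}) builds the connectivity from the incidence graph of $K_n$ and verifies the parameters just derived.

```python
# MLFM sketch: spines are the edges of K_n; each vertex of K_n is replicated
# into n-1 leaf routers, each attached to the spines incident to that vertex.
from itertools import combinations

n = 4
spines = list(combinations(range(n), 2))                    # C(n,2) spines
leaves = [(v, i) for v in range(n) for i in range(n - 1)]   # n(n-1) leaves

def adjacent(spine, leaf):
    """A leaf replica of vertex v links to every spine (edge) containing v."""
    return leaf[0] in spine

# spine radix: 2 endpoints times n-1 replicas each
assert all(sum(adjacent(s, l) for l in leaves) == 2 * (n - 1) for s in spines)
# leaf uplinks: Delta = n-1 (plus Delta_0 = n-1 compute nodes, so R = 2*Delta)
assert all(sum(adjacent(s, l) for s in spines) == n - 1 for l in leaves)
# any two leaves share a spine, hence are at distance at most 2
for l1, l2 in combinations(leaves, 2):
    assert any(adjacent(s, l1) and adjacent(s, l2) for s in spines)
```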
\begin{figure}
\begin{center}
\begin{tikzpicture}[
up node/.style={draw,fill=green,circle},
bottom node/.style={draw,fill=yellow},
]
\begin{scope}[shift={(2,0)}]
\node[bottom node] at (0,3) (vertexa) {a};
\node[bottom node] at (0,0) (vertexb) {b};
\node[bottom node] at (3,0) (vertexc) {c};
\node[bottom node] at (3,3) (vertexd) {d};
\draw (vertexa) --node[up node]{A} (vertexb);
\draw (vertexa) --node[up node,pos=0.3]{B} (vertexc);
\draw (vertexa) --node[up node]{C} (vertexd);
\draw (vertexb) --node[up node]{D} (vertexc);
\draw (vertexb) --node[up node,pos=0.3]{E} (vertexd);
\draw (vertexc) --node[up node]{F} (vertexd);
\node at (-.5,0) {a)};
\end{scope}
\begin{scope}[shift={(0,-3)}]
\foreach \a/\av in {A/0,B/1,C/2,D/3,E/4,F/5}
{
\path (\av*1.6,2) node[up node] (up\a) {\a};
}
\foreach \b/\bv in {a/0,b/1,c/2,d/3}
{
\pgfmathsetmacro\x{.7+\bv*2.2}
\path (\x,0) node[bottom node] (bottom\b) {\b};
}
\foreach \a/\b in {A/a,A/b,B/a,B/c,C/a,C/d,D/b,D/c,E/b,E/d,F/c,F/d}
{
\draw (up\a) -- (bottom\b);
}
\node at (0,0) {b)};
\end{scope}
\begin{scope}[shift={(0,-6)}]
\foreach \a/\av in {A/0,B/1,C/2,D/3,E/4,F/5}
{
\path (\av*1.6,2) node[up node] (up\a) {\a};
}
\foreach \b/\bv in {a/0,b/1,c/2,d/3}
\foreach \i in {1,2,3}
{
\pgfmathsetmacro\x{\bv*2.2+(\i-1)*0.7+0.2}
\path (\x,0) node[bottom node] (bottom\b\i) {\b\i};
\foreach \angle in {-15,0,15}
{
\draw (bottom\b\i) -- +(\angle-90:.75) coordinate (x);
\draw[fill=white] (x) circle (2pt);
}
}
\foreach \a/\b in {A/a,A/b,B/a,B/c,C/a,C/d,D/b,D/c,E/b,E/d,F/c,F/d}
\foreach \i in {1,2,3}
{
\draw (up\a) -- (bottom\b\i);
}
\node at (-.3,0) {c)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Incidence graph of $K_4$ and the Fujitsu network.}
\label{fig:Fujitsu}
\end{figure}
An analysis for cost and power optimization like the one done in Section~\ref{sec:model} would be desirable. Unfortunately, it is unfeasible due to, among other reasons, the difficulty of calculating Moore bounds on irregular graphs.
Nevertheless, it is possible to infer a similar formula when it is assumed that the maximum distance between leaf routers is 2, as in the previous case of the Multi-layer Full-Mesh. For this purpose, let us consider that there might be links from a leaf router to another leaf router\footnote{Links between spines are possible only for diameter $k \geq 3$.}. Therefore, let $\delta$ denote the number of links from a leaf router to other leaf routers, which is again assumed to be constant.
Note that $\delta=\Delta$ in direct topologies and $\delta=0$ in fully indirect topologies, but there are some intermediate topologies.
Now, since the maximum distance between leaf routers is 2, each of the $R$ links of a spine router must go to leaf routers. Thus, counting the links between leaf routers and spine routers, the following expression is obtained:
$$L(\Delta-\delta)=SR.$$
Now, the maximum number of leaves in a graph whose maximum distance between leaves is 2 can be expressed in terms of $(\delta, \Delta, R)$ as follows:
\begin{equation}\label{eq:moore_indirect}
L\leq 1+\delta^2+(\Delta-\delta)(R-1).
\end{equation}
Note that this is a Moore bound calculation but only considering leaf vertices. Also, if $\delta=\Delta$ then it becomes the original Moore bound $M(\Delta,2)$ presented in Equation~\eqref{eq:moore}.
The optimal value for the number of compute nodes is obtained when $$\Delta_0=\frac{u}{\bar k}(2\Delta-\delta),$$ which generalizes Equation~\eqref{eq:nodes}. Now, the cost per compute node is, analogously to Equation~\eqref{eq:cost},
$$\frac{\#\text{ports}}{\#\text{compute nodes}}=\frac{NR}{L\Delta_0}
=\frac{R+\Delta-\delta}{\Delta_0}=1+\frac{\bar k}{u}.$$
This surprisingly implies that the cost per node does not depend on $\delta$. Hence, the most interesting value of $\delta$ is the one giving the best scalability, since it provides the maximum number of compute nodes for the same cost. The maximum of Equation~\eqref{eq:moore_indirect} is obtained when $\delta=0$, which is the typical situation in indirect networks. That is, $$L\leq 1+\Delta(R-1).$$ There already exists a topology, the \textsl{Orthogonal Fat Tree} (OFT) presented in \cite{Valerio}, that asymptotically attains this bound for $\bar k= 2$. This was experimentally confirmed in \cite{Kathareios}. Next, a construction different from the one given in that work is presented, illustrating how OFTs can also be obtained from finite projective planes.
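The independence of the cost from $\delta$ can be verified numerically. The following Python sketch uses toy parameters of our own choosing, fixing $\bar k=2$ and $u=1$, so that every configuration should cost exactly $1+\bar k/u=3$ ports per compute node.

```python
# Check that the cost per node (#ports / #compute nodes) does not depend on
# delta. With k_bar = 2 and u = 1 we get Delta_0 = (2*Delta - delta)/2, and
# the cost should always equal 1 + k_bar/u = 3.
Delta, L = 10, 1000                       # arbitrary; L cancels out

for delta in range(0, Delta + 1, 2):      # even delta keeps Delta_0 integral
    Delta_0 = (2 * Delta - delta) / 2
    R = Delta + Delta_0
    S = L * (Delta - delta) / R           # from L(Delta - delta) = S R
    cost = (L + S) * R / (L * Delta_0)    # ports per compute node
    assert abs(cost - 3.0) < 1e-9

# For a fixed radix, the leaf bound of Equation (eq:moore_indirect) is
# maximized at delta = 0, so fully indirect networks scale best.
R_fixed = 2 * Delta
bounds = [1 + d ** 2 + (Delta - d) * (R_fixed - 1) for d in range(Delta + 1)]
assert max(bounds) == bounds[0]
```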
OFTs were constructed in \cite{Valerio} using orthogonal Latin squares. As the author already remarked in that paper, there is an intimate relation between orthogonal Latin squares and finite projective planes: there are $n-1$ mutually orthogonal $n$-by-$n$ Latin squares if and only if there is a finite projective plane of order $n$~\cite{Colbourn}. Therefore, in the following definition, OFTs are built directly using projective spaces instead of manipulating mutually orthogonal Latin squares.
\begin{definition} Let $q$ be a power of a prime number. Let $\hat G_q = (V, E)$ be the graph with vertex set
$$V=\{ (s,P) \mid s\in\{0,1,2\},\ P\in P_2(\mathbb F_q) \}$$
and edge set
$$E=\bigl\{ \{(0,P),(1,L)\},\{(1,P),(2,L)\} \mid P\perp L \bigr\}.$$
Thus, $\hat G_q$ is said to be the \textsl{orthogonal fat tree} of $P_2(\mathbb F_q)$.
\end{definition}
In an OFT network, vertices $(1,P)$ correspond to spine routers and the rest to leaf routers. As an example, let us consider Figure~\ref{fig:OFT}, in which black circles represent routers and white circles compute nodes.
As can be seen, the routers are displayed in three columns of $q^2+q+1 = 7$ routers, since the total number of routers is $N=3(q^2+q+1) = 21$. The column in the middle corresponds to spine routers and the other two to leaf routers. It can also be seen that $\Delta=\Delta_0=q+1$ and $T=2(q+1)(q^2+q+1)$. Indirect networks are no longer vertex-transitive, since there exist two different kinds of vertices (spine and leaf). However, the OFT is edge-transitive, so the utilization is exactly $u=1$. The average distance between leaves is exactly $\bar k=2$, since for any two leaves the minimal path connecting them has length 2. Note that for each leaf there are several spine routers at distance 3. Finally, it is worth noting that two $G_q$ projective networks are embedded in any $\hat{G}_q$, thus connecting these two different topologies. Moreover, it can be seen that this network has the same cost as the demi-PN and almost the same scalability as the PN, since $T_{PN} = 0.29 R^3$ and $T_{OFT}= 0.25 R^3$.
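The definition is easy to instantiate. The following Python sketch builds $\hat G_2$ over $P_2(\mathbb F_2)$, representing each point by a nonzero vector of $\mathbb F_2^3$ with $P\perp L$ meaning a vanishing dot product, and checks the properties stated above: 21 routers, leaf degree $q+1=3$, spine degree $2(q+1)=6$, and every pair of leaves at distance exactly 2.

```python
# Building the orthogonal fat tree \hat G_2 from its definition.
from itertools import product
from collections import deque

points = [p for p in product((0, 1), repeat=3) if any(p)]   # 7 points, q = 2

def perp(p, l):
    """P is perpendicular to L iff their dot product vanishes over F_2."""
    return sum(a * b for a, b in zip(p, l)) % 2 == 0

vertices = [(s, p) for s in (0, 1, 2) for p in points]      # 21 routers
adj = {v: [] for v in vertices}
for p in points:
    for l in points:
        if perp(p, l):
            for a, b in (((0, p), (1, l)), ((1, p), (2, l))):
                adj[a].append(b)
                adj[b].append(a)

# leaves have q+1 = 3 network links; spines have 2(q+1) = 6
assert all(len(adj[(s, p)]) == 3 for s in (0, 2) for p in points)
assert all(len(adj[(1, p)]) == 6 for p in points)

def dist(src, dst):
    """Breadth-first-search distance between two routers."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        v, d = frontier.popleft()
        if v == dst:
            return d
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))

leaves = [(s, p) for s in (0, 2) for p in points]
assert all(dist(a, b) == 2 for a in leaves for b in leaves if a != b)
```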
Finally, let us consider two cases of use similar to the ones developed in Subsection \ref{subsec:cases_of_use}, but for indirect networks. Table \ref{table:OFT_case_of_use} presents the cost and power per node of OFT and MLFM networks with about 10,000 and 25,000 compute nodes. A typical layout of indirect networks has no electrical groups, so every cable has been considered optical in the calculations. The MLFM results are similar to the demi-PN, with slightly higher power. With respect to the OFT, on the one hand its scalability is slightly lower than that of the PN, since with a slightly greater router radix it connects almost the same number of terminals. On the other hand, the OFT has the same cost and power per node as the demi-PN.
\begin{table}
\begin{center}\scriptsize
\begin{tabular}{|c|cc|cc|}
\hline
Topology & MLFM 22 & MLFM 30 & OFT 16 & OFT 23\\
\hline
T & 9702 & 25230 & 9282 & 26544\\
\bf R & \bf 42 & \bf 58 & \bf 34 & \bf 48\\
N & 693 & 1305 & 819 & 1659\\
$\Delta_0$ & 21 & 29 & 17 & 24\\
cables & 9702 & 25230 & 9282 & 26544\\
\hline
\bf Cost per node &\bf 1297.18 &\bf 1321.76 & \bf 1282.19 & \bf 1312.14\\
\bf Watts per node &\bf 8.4 &\bf 8.4 & \bf 8.4 & \bf 8.4\\
\hline
\end{tabular}
\end{center}
\caption{Example Multi-Layer Full-Mesh and OFT networks with about 10,000 and 25,000 compute nodes.}
\label{table:OFT_case_of_use}
\end{table}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\foreach \a in {0,1}
\foreach \b in {0,1}
\foreach \c in {0,1}
{
\pgfmathtruncatemacro\heigh{\a*4+\b*2+\c}
\ifthenelse{\heigh=0}{}
{
\path[fill] (0,\heigh) node[anchor=south] {(0,(\a,\b,\c))} circle (2pt) coordinate (left \a \b \c)
++(2,0) coordinate (center \a \b \c) circle (2pt) node [anchor=south] {(1,(\a,\b,\c))}
++(2,0) coordinate (right \a \b \c) circle (2pt) node [anchor=south] {(2,(\a,\b,\c))}
;
\foreach \angle in {-10,0,10}
{
\draw (left \a \b \c) -- +(180+\angle:.75) circle (2pt);
\draw (right \a \b \c) -- +(\angle:.75) circle (2pt);
}
}
}
\foreach \la in {0,1}
\foreach \lb in {0,1}
\foreach \lc in {0,1}
\foreach \ra in {0,1}
\foreach \rb in {0,1}
\foreach \rc in {0,1}
{
\pgfmathtruncatemacro\goodl{\la==1 || \lb==1 || \lc==1}
\pgfmathtruncatemacro\goodr{\ra==1 || \rb==1 || \rc==1}
\pgfmathtruncatemacro\dot{mod(\la*\ra + \lb*\rb +\lc*\rc,2)}
\pgfmathtruncatemacro\good{\goodl && \goodr && \dot==0}
\ifthenelse{\good=1}
{
\draw (left \la \lb \lc) -- (center \ra \rb \rc) -- (right \la \lb \lc);
}
}
\end{tikzpicture}
\end{center}
\caption{Orthogonal Fat Tree $\hat G _2$}
\label{fig:OFT}
\end{figure}
\section{Introduction}\label{sec:intro}
One current trend in the research on the design of Exascale systems is to greatly increase the number of compute nodes. The cost and power of the network of these large systems is significant, which urges the optimization of these parameters. Specifically, the problem is how to interconnect a collection of compute nodes using a given router model with as small a cost and power consumption as possible. If the interconnection network is modelled by a graph, where nodes represent the routers and edges the links connecting them, the Moore bound can be very useful. The present paper deals with graphs attaining or approaching the generalized Moore bound~\cite{Sampels} while minimizing cost and power consumption.
Graph theory has dealt with very interesting topologies that have not yet been adopted as interconnection networks. One paradigmatic example is that of Moore graphs~\cite{Miller}. Hoffman and Singleton provided in~\cite{Hoffman} a few examples of regular graphs of degree $\Delta$ and diameter $k$ having the maximum number of vertices; namely, for $k=2$ and $\Delta=2, 3, 7$, and for $k=3$ and $\Delta=2$. They denoted such graphs \textsl{Moore graphs}, as they attain the upper bound on their number of nodes, solving for these cases the ($\Delta$-$k$)-problem posed by E.~F. Moore. Such graphs are optimal for interconnection networks, as they simultaneously minimize the maximum and average transmission delays among nodes.
In these interconnection networks, traffic is frequently uniform; when it is not, it can be randomized using Valiant routing~\cite{Valiant_ACM}. Under uniform traffic, maximum throughput depends on the network average distance $\bar k$ rather than on the diameter $k$. This
promotes the search for generalized Moore graphs~\cite{Sampels}, which have minimum average distance for a given degree.
This minimum is attained when, from a given node, the maximum number of nodes is reachable at each distance lower than the diameter, with the remaining nodes at distance $k$.
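For reference, the bound these graphs attain counts at most $1+\Delta\sum_{i=0}^{k-1}(\Delta-1)^i$ vertices in a $\Delta$-regular graph of diameter $k$; a minimal Python sketch, with the classical attaining cases from above noted in comments:

```python
# The Moore bound for a Delta-regular graph of diameter k.
def moore_bound(delta, k):
    return 1 + delta * sum((delta - 1) ** i for i in range(k))

# The cases solved by Hoffman and Singleton, all attained by known graphs:
assert moore_bound(2, 2) == 5    # the 5-cycle
assert moore_bound(3, 2) == 10   # the Petersen graph
assert moore_bound(7, 2) == 50   # the Hoffman-Singleton graph
assert moore_bound(2, 3) == 7    # the 7-cycle
```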
As will be shown in this paper, Moore and some generalized Moore graphs also minimize cost. If it is assumed that network cost is dominated by the number of employed ports (especially SerDes, as will be shown next), minimizing graph average distance not only maximizes throughput but can also minimize investment and operating expenses. Nevertheless, it is important to highlight that highly symmetric graphs are always preferable, as they do not exhibit bottlenecks that can compromise performance under uniform traffic. This paper shows examples of such topologies based on incidence graphs of projective planes and compares them with competitive alternatives. Incidence graphs of finite projective planes~\cite{Brown},~\cite{Erdos} have been used to attain the Moore bound, and not only mathematicians have paid attention to these discrete structures. In fact, Valerio \latin{et al.}~\cite{Valerio} already used them to define Orthogonal Fat Trees (OFT), which are highly scalable, cost-optimal indirect networks. Brahme \latin{et al.}~\cite{Brahme} propose other topologies for direct HPC cluster networks. As shown in this paper, these can also be defined using projective planes, although the authors use perfect difference sets for their definition. In this paper it is shown how incidence graphs of finite projective planes are suitable topologies for both direct and indirect networks for HPC systems.
Recently, three strongly related papers have been published. We summarize next their main achievements and bring to light how the results introduced in our paper improve them. In~\cite{Rumley2015}, the authors propose a methodology based on minimizing average distance to identify optimal topologies for Exascale systems. Therefore, topologies close to the generalized Moore bound are searched for. To this end, several compositions (Cartesian graph products in general) of known topologies are explored. However, this analysis includes neither the symmetry nor the link utilization of the topologies and, therefore, the comparison may not reflect actual network performance.
In~\cite{Besta} the Slim Fly (SF) network is proposed. This topology provides very high scalability for diameter 2, approaching the Moore bound.
However, SF is neither symmetric nor well-balanced. Therefore, the number of compute nodes per router must be adjusted in order to give full bisection bandwidth. Moreover, this lack of symmetry makes SFs more costly than projective networks with the same diameter, which also provide higher scalability.
Finally, in~\cite{Kathareios} several diameter 2 topologies are studied, namely Stacked Single-Path Tree, Multi-layer Full-Mesh, Slim Fly and Two-Level Orthogonal Fat Tree. The authors present experimental results which conclude that the Slim Fly and the OFT are the best direct and indirect topologies respectively. The present paper proves that topologies with diameter other than 2 such as projective networks are also interesting. Furthermore, a more accessible construction of the OFT and its relation with other topologies is given.
The rest of the paper is organized as follows. Section~\ref{sec:model} describes the cost model
assumed in this paper. An expression based on average distance and link utilization which upper bounds the cost is obtained. As it will be shown, maximizing the number of terminals while maintaining the average distance and link utilization will be the target, which will be related to the generalized Moore bound. In Section~\ref{sec:projective} \textsl{Projective Networks} are introduced,
defined using incidence graphs of projective planes, which have the smallest average distance for their size and a high degree of symmetry.
In Section~\ref{sec:topologies} a thorough analysis of how graph theoreticians have approached the generalized Moore bound for diameters 1--6 is done. This allows us to present a complete comparison, in terms of our power/cost model, of all these topologies in Section \ref{sec:comparative}, with special emphasis on the diameter 2 case.
In Section~\ref{sec:indirect} the case for indirect networks is considered. The cost model is adapted for indirect networks of diameter 2. As it will be shown, optimal topologies can also be obtained with our methodology to derive projective networks. Finally, in Section~\ref{sec:conclusions} the main achievements of the paper are summarized.
\section{Power and cost optimization}\label{sec:model}
The interconnection network constitutes a significant fraction of the cost of a High Performance Computing (HPC) or datacenter system, both in terms of installation
and operation,
with the latter mainly dominated by energy costs. This section introduces a coarse-grain generic cost model based on the network average distance and the average link utilization. This cost model will be employed to compare different topologies in the following sections.
A network should provide the required bandwidth to its collection of compute nodes with minimal latency, while scaling to the required size. Measures of interest are throughput and average latency under uniform traffic. This uniform traffic pattern not only determines the topological properties of the network, but also appears in multiple workloads (such as data-intensive applications or in many collective primitives) and determines the worst-case performance when using routing randomization~\cite{Valiant_ACM}.
An important figure in the deployment of a network is the number of ports in each router chip, also called the router radix. This number is a technological constraint, and current 100~Gbps designs typically only support 32 to 48 ports~\cite{Broadcom2015,MellanoxSB7790-2015,Derradji2015,OmniPath2015}. Different configurations of these switches, or alternative designs~\cite{Chrysos}, provide more than a hundred ports but at lower speeds, typically 25~Gbps. Larger non-blocking routers are built employing multiple routing chips, at the cost of an increased complexity and at least triple switching latency~\cite{MellanoxCS7500-2015,OmniPathDirector2015}.
Thus, our goal will be to build a network for $T$ computing nodes using routers of radix $R$, able to manage uniform traffic at full-bisection bandwidth and minimizing its cost.
Therefore, the use of the expression \textsl{optimal network} throughout this document refers to this optimization problem.
Let us next consider such requirements in more detail.
For simplicity, all links are assumed to have the same transmission rate, not only links between routers but also links from computing nodes.
The notation used throughout the paper is presented in Table~\ref{tbl:notation}. $\Delta$ is employed to refer to the degree of a graph $G$; when $G$ is a $\Delta$-regular graph, $2|E(G)|=N\Delta$. Similarly, $\Delta_0$ is generally the same for all routers; in such a case the router radix is $R=\Delta+\Delta_0$ and the number of compute nodes is $T=N\Delta_0$.
\begin{table}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{|c|l|}
\hline
Parameter & Definition \\
\hline
$T$ & Number of compute nodes or terminals. \\
$R$ & Router radix (number of ports). \\
$G(V,E)$ & Graph whose vertices $V$ represent the routers\\
& and its edges $E$ the connection between routers.\\
$N=|V|$ & Number of routers.\\
$\Delta$ & Maximum degree of $G$. \\
$\Delta_0$ & Number of compute nodes attached to every router.\\
$k$ & Diameter of $G$.\\
$\bar k$ & Average distance of $G$.\\
$a$ & Load accepted by each router in saturation.\\
$u$ & Average utilization of links.\\
\hline
\end{tabular}
}\vskip 1em minus .5em
\caption{Notation used in the paper.}
\label{tbl:notation}
\end{table}
\subsection{Network Dimensioning and Cost Model}\label{sect:cost-models}
In this subsection a generic cost model for both the power and the hardware required by the network is introduced. This cost depends not only on the average distance of the topology, but also on the average utilization of the network links. Previous works such as \cite{Besta, Rumley2015} do not consider network utilization in their calculations, which leads to suboptimal results.
First, the number of compute nodes $\Delta_0$ which can be serviced per router is estimated. To this end, \textsl{ideal routers} with minimal routing and a uniform traffic pattern will be assumed.
As the load $a$ increases, the saturation point is reached when some network link is in use every cycle.
When this happens, the network links will have an average utilization $u\in(0,1]$. If $u=1$ then $G$ is said to be \firstuse{well-balanced}. $G$ being \firstuse{edge-transitive} is a sufficient, but not necessary, condition for it to be well-balanced~\cite{Camarero_thesis}.
If the load injected per cycle per router at saturation is $a$,\footnote{All routers are assumed to inject approximately the same load at saturation.} then the average utilization $u$ is
$$u=\frac{\text{load}}{\#\text{links}}=\frac{aN\bar k}{2|E(G)|}=\frac{a\bar k}{\Delta}.$$
The load in terms of the utilization is $a=\Delta\dfrac{u}{\bar k}$.
Therefore, the number of compute nodes per router $\Delta_0$ which can be serviced without reaching the saturation point is:
\begin{equation}\label{eq:nodes}
\Delta_0\leq \Delta\frac{u}{\bar k}.
\end{equation}
Ideally, the equality should hold. If Equation (\ref{eq:nodes}) does not hold, the network is said to be \firstuse{oversubscribed}, and does not provide full bisection bandwidth under uniform traffic. Conversely, for $\Delta_0$ lower than the equality value, the network is oversized for the number of compute nodes connected.
Now, a generic estimation for the network cost per computing node $C_{node}$ is considered, which is also particularized to economic or power terms ($C_{node-\$}$ and $C_{node-W}$ in \$ and Watts, respectively).
A generic average cost $c_i$ per injection port, $c_t$ per transit port, and $c_r$ per router are assumed. The resultant cost per compute node is
$$C_{node} = \frac{N}{T}\cdot(c_i \Delta_0+c_t \Delta+c_r ) = \frac{c_i N\Delta_0+c_t N\Delta+c_r N}{T}.$$
Considering the equality value in Equation (\ref{eq:nodes}), $T=N\Delta_0$ and $R=\Delta+\Delta_0$, it results:
\begin{equation}\label{eq:cost}
C_{node} = c_i + c_t\frac{\bar k}{u} + c_r \frac{1+\bar k/u}{R}.
\end{equation}
For the installation cost $C_{node-\$}$, router and transit links comprise the largest amounts. The router cost is roughly proportional to the number of ports, so it contributes a large amount to $c_i, c_t$ and a small amount to $c_r$~\cite{Besta}. Regarding links, as network speed increases optics are expected to displace copper for even shorter distances, including both intra-rack and on-board communications~\cite{Doany2014}.
When all network links are active optical cables, their cost is largely independent of their length, since it is dominated by the optical transceivers at the ends. This leads to $c_i=c_t \gg c_r$, with $c_i=c_t$ approximately constant.
Therefore, the largest component of the installation cost in Equation (\ref{eq:cost}) will be determined by the router ports, $C_{node-\$} \approx c_t(1+\frac{\bar k}{u})$. A more detailed analysis considering different types of cables is presented in Section~\ref{sec:comparative}.
For the energy cost $C_{node-W}$, the most significant part is the router SerDes (which implies large $c_i, c_t$ and small $c_r$); for example, the router design in~\cite{Chrysos} dedicates 87\% of its power to SerDes. Again, this leads to the same result as for the installation cost, concluding that the best cost is obtained using
topologies that minimize $\frac{\bar k}{u}$.
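To make the model concrete, Equation \eqref{eq:cost} can be evaluated directly. The following Python sketch is ours and merely illustrative; the function name and parameter values are not part of the original analysis:

```python
# Hedged sketch: per-node cost from the cost equation,
# C_node = c_i + c_t * kbar/u + c_r * (1 + kbar/u) / R.
def cost_per_node(c_i, c_t, c_r, kbar, u, R):
    return c_i + c_t * kbar / u + c_r * (1 + kbar / u) / R

# With port-dominated costs (c_i = c_t, c_r ~ 0) the value reduces to
# c_t * (1 + kbar/u), so minimizing kbar/u minimizes the cost.
print(cost_per_node(1.0, 1.0, 0.0, 2.0, 1.0, 48))
```

Note that lowering the average distance $\bar k$ or raising the utilization $u$ decreases the value, which is precisely the optimization target used in the remainder of the paper.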
\subsection{Moore Bounds}
In this subsection, the limits of the network size and its cost will be studied. This will be done by considering the Moore bound for the relation between the diameter and the network size, and the generalized Moore bound for the relation between the average distance and the network size, both for a given degree.
Section~\ref{sect:cost-models} concludes that cost depends linearly on $(1+\bar k/u)$.
This expression is minimized in the complete graph $K_N$, which is symmetric---hence $u=1$---and has minimum average distance $\bar k=1$.
However, the complete graph has $\Delta_0=N$, $R=2N-1$ and $T=N^2=\left(\frac{R+1}{2}\right)^2$.
With a radix $R=48$ the number of compute nodes would be only $T \approx 576$ nodes.
The Moore Bound~\cite{Miller} establishes that for a given diameter $k$ the maximum network size is bounded by:
\begin{equation}\label{eq:moore}
N\leq M(\Delta,k)=\frac{\Delta(\Delta-1)^k-2}{\Delta-2}.
\end{equation}
This bound is obtained by assuming the following distance distribution---the number $W(t)$ of vertices at distance $t$ from any chosen vertex:
$$
W(t)=\begin{cases}
1 &\text{if $t=0$}\\
\Delta(\Delta-1)^{t-1} &\text{otherwise.}
\end{cases}
$$
Therefore, the average distance of a Moore graph is
$$\bar k=\frac{\sum_{t=1}^ktW(t)}{N-1}=\frac{\sum_{t=1}^kt\Delta(\Delta-1)^{t-1}}{N-1}.$$
Then, it is straightforward that $\lim_{\Delta\rightarrow\infty}\bar k=k$.
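As a quick numerical check (a sketch of ours), the closed form of Equation \eqref{eq:moore} agrees with the sum $1+\Delta\sum_{t=0}^{k-1}(\Delta-1)^t$ and is attained by the Moore graphs mentioned in the introduction:

```python
# Hedged sketch: Moore bound M(delta, k) for delta > 2.
def moore_bound(delta, k):
    # Closed form of 1 + delta * sum_{t=0}^{k-1} (delta - 1)**t.
    return (delta * (delta - 1)**k - 2) // (delta - 2)

# Diameter-2 Moore graphs: Petersen (delta = 3) has 10 vertices and
# Hoffman-Singleton (delta = 7) has 50.
print(moore_bound(3, 2), moore_bound(7, 2))
```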
There are good families of graphs approaching the Moore bound for low diameter, but they are restricted to very specific values of the number of nodes. Additionally, as derived from Equation (\ref{eq:cost}), the most important factor to minimize cost is the average distance $\bar k$, not the network diameter.
\firstuse{Generalized Moore graphs}~\cite{Sampels} reach the minimum average distance for a given router radix and number of vertices $N$. This is attained when there are the maximum amount of reachable nodes up to distance $k-1$, with the remaining nodes being at distance $k$. That is, with the following distance distribution:
$$
W(t)=\begin{cases}
1 &\text{if $t=0$}\\
\Delta(\Delta-1)^{t-1} &\text{if $1\leq t\leq k-1$}\\
N-M(\Delta,k-1) &\text{if $t=k$.}
\end{cases}
$$
With this generalization, the average distance can be approximated---for large $\Delta$---as
\begin{equation}\label{eq:average}
\bar k\approx k-\frac{\Delta^{k-1}}{N}.
\end{equation}
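The quality of this approximation can be checked numerically; the following sketch of ours computes the exact average distance from the distance distribution above (the parameter values are illustrative only):

```python
# Hedged sketch: exact vs. approximate average distance of a
# generalized Moore graph with N vertices and degree delta.
def gen_moore_avg_distance(delta, k, N):
    W = [1] + [delta * (delta - 1)**(t - 1) for t in range(1, k)]
    W.append(N - sum(W))               # remaining vertices sit at distance k
    return sum(t * w for t, w in enumerate(W)) / (N - 1)

delta, k, N = 32, 3, 20000             # illustrative values
exact = gen_moore_avg_distance(delta, k, N)
approx = k - delta**(k - 1) / N        # the approximation above
print(exact, approx)
```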
The generalized Moore bound determines the minimal average distance $\bar k$ (hence cost, given a well-balanced topology) for a given number of nodes $T$ and router radix $R$. Next, an expression relating these values and the diameter $k$ is obtained.
Following Equation~\eqref{eq:nodes}, the number of compute nodes per router is $\Delta_0=\Delta/\bar k=(R-\Delta_0)/\bar k$. Thus, $R=\Delta_0\bar k+\Delta_0=\Delta_0(1+\bar k)$ and
$$\Delta_0=\frac{R}{\bar k+1}.$$
The degree is
$$\Delta=R-\Delta_0=R\left(1-\frac{1}{\bar k+1}\right)=R\frac{\bar k}{\bar k+1}.$$
The number of routers is
$$N=\frac{T}{\Delta_0}=\frac{T}{R}(\bar k+1).$$
The difference $k-\bar k$ can be approximated (using Equation (\ref{eq:average})) by
$$k-\bar k\approx
\frac{\Delta^{k-1}}{N}
=\frac{\left(R\frac{\bar k}{\bar k+1}\right)^{k-1}}{\frac{T}{R}(\bar k+1)}
=\frac{R^k}{T} \frac{\bar k^{k-1}}{(\bar k+1)^k}.
$$
Reordering terms, the following relation is obtained:
\begin{equation}\label{eq:final}
T \approx \frac{R^k\bar k^{k-1}}{(k-\bar k)(\bar k+1)^k}
\end{equation}
This equation is used later as an upper bound for the number of compute nodes in direct topologies.
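The bound can be explored with a short sketch of ours (the radix and average distances below are illustrative only), showing how the achievable system size grows with the diameter:

```python
# Hedged sketch: upper bound on terminals,
# T ~ R^k * kbar^(k-1) / ((k - kbar) * (kbar + 1)^k).
def max_terminals(R, k, kbar):
    return R**k * kbar**(k - 1) / ((k - kbar) * (kbar + 1)**k)

# Radix-48 routers; kbar slightly below each diameter, as in Moore-like graphs.
for k, kbar in ((2, 1.8), (3, 2.7)):
    print(k, round(max_terminals(48, k, kbar)))
```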
\section{Projective Networks: A Topology Based on Incidence Graphs of Finite Projective Planes}\label{sec:projective}
As argued in previous section, average distance and average link utilization are the target parameters to design optimal cost topologies.
In this section incidence graphs of projective planes are proposed to define network topologies attaining almost optimal values of these parameters. In Subsection~\ref{subsec:incidence} incidence graphs of finite projective planes are defined, which constitute a family of symmetric graphs with diameter 3 and average distance equal to $2.5$ in the limit. In Subsection~\ref{subsec:brown} such graphs are modified in such a way that their diameter and average distance both become 2. However, they are no longer symmetric although their link utilization equals 1 in the limit. These two families of graphs are used to define \textsl{Projective Networks} which, as it will be shown in Subsection~\ref{subsec:discu}, result in a competitive alternative to the recently proposed Slim Fly network \cite{Besta}. Thus, in this section the methodology proposed in the paper is validated by a specific example.
\subsection{Incidence Graph of Finite Projective Planes}\label{subsec:incidence}
A family of graphs with an average distance tending to $2.5$ can be obtained as the incidence graph of finite projective planes. Next, an algorithmic description of these graphs is given, although a more geometrical approach is considered in Example \ref{ex:Fano}. Since these graphs are defined in terms of finite projective planes, let us first introduce this concept.
Let $q$ be any power of a prime number. A canonical set of representatives of the finite projective plane over the field with $q$ elements, $\mathbb F_q$, is
$$P_2(\mathbb F_q)=\{(1,x,y),(0,1,x),(0,0,1)\mid x,y\in \mathbb F_q\}.$$
\begin{remark} By a straightforward counting argument, it can be proved that $P_2(\mathbb F_q)$ has $q^2+q+1$ elements. \end{remark}
Two points $X, Y\in P_2(\mathbb F_q)$ are said to be \firstuse{orthogonal} (written $X\perp Y$) if their scalar product is zero. The space $P_2(\mathbb F_q)$ also contains $q^2+q+1$ lines of exactly $q+1$ points each.
Every line is represented by its dual point in the projective plane. A line $L$ is incident to a point $P$ if and only if $P$ is orthogonal to the dual point of $L$. This fact allows the following definition.
\begin{definition} Let $q$ be a power of a prime number. Let $G_q = (V, E)$ be the graph with vertex set
$$V=\{ (s,P) \mid s\in\{0,1\},\ P\in P_2(\mathbb F_q) \}$$
and edge set
$$E=\bigl\{ \{(0,P),(1,L)\} \mid P\perp L,\ P,L\in P_2(\mathbb F_q) \bigr\}.$$
Thus, $G_q$ is said to be the \textsl{incidence graph of the finite projective plane} $P_2(\mathbb F_q)$.
\end{definition}
\begin{remark}
Incidence graphs, also called \textsl{Levi graphs}, can be applied to any incidence structure~\cite{Gross}.
Note that $G_q$ is the Levi graph with a finite projective plane as the incidence structure.
\end{remark}
It is clear that $G_q$ has $2q^2+2q+2$ vertices. Let us consider the following example to better understand this construction.
\begin{example}\label{ex:Fano} Let us consider the graph $G_2$. In Figure~\ref{fig:LeviPG} two different structures are represented. On the left side, a typical graphical representation of $P_2(\mathbb F_2)$, or the Fano plane, is shown. In this representation, both the 7 points and the 7 lines of the Fano plane are labeled with their homogeneous coordinates. Note that the point 100 is incident to the line \emph{001} since the scalar product of their coordinates is zero. On the right side of the figure, a graphical representation of the incidence graph of the Fano plane, denoted by $G_2$, is shown. There are two kinds of vertices, which are the points and the lines of the Fano plane. Now, two vertices are adjacent if the corresponding point and line are incident. Therefore, since point 100 is incident to line \emph{001} as we have seen before, in the graph there is an edge making them adjacent vertices. As can be seen, every vertex has degree 3 and there are minimal paths of lengths 1, 2 or 3.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[
point/.style={anchor=south west,font=\scriptsize},
line/.style={font=\scriptsize\em},
]
\fill (0,0) circle (3pt) node[point] {111}
(30:1) circle (3pt) node[point] {011}
(90:2) circle (3pt) node[point] {001}
(150:1) circle (3pt) node[point] {101}
(210:2) circle (3pt) node[point] {100}
(270:1) circle (3pt) node[point] {110}
(330:2) circle (3pt) node[point] {010};
\draw (0,0) circle (1) (100:1) node[line] {111}
(90:2) -- node[line,pos=0.3]{100} (330:2)
(210:2) -- node[line,pos=0.3] {010} (90:2)
(210:2) -- node[line,pos=0.3] {001} (330:2)
(30:1) -- node[line,pos=0.55] {011} (210:2)
(150:1) -- node[line,pos=0.55] {101} (330:2)
(270:1) -- node[line,pos=0.55]{110} (90:2);
\end{scope}
\begin{scope}[shift={(4,-3)}]
\foreach \a in {0,1}
\foreach \b in {0,1}
\foreach \c in {0,1}
{
\pgfmathtruncatemacro\heigh{\a*4+\b*2+\c}
\ifthenelse{\heigh=0}{}
{
\path[fill] (0,\heigh) node[anchor=east] {(\a,\b,\c)} circle (2pt) coordinate (left \a \b \c) ++(1,0) coordinate (right \a \b \c) circle (2pt) node [anchor=west] {\em(\a,\b,\c)};
}
}
\draw (0,8) node[anchor=east] {points} ++(1,0) node [anchor=west] {lines};
\foreach \la in {0,1}
\foreach \lb in {0,1}
\foreach \lc in {0,1}
\foreach \ra in {0,1}
\foreach \rb in {0,1}
\foreach \rc in {0,1}
{
\pgfmathtruncatemacro\goodl{\la==1 || \lb==1 || \lc==1}
\pgfmathtruncatemacro\goodr{\ra==1 || \rb==1 || \rc==1}
\pgfmathtruncatemacro\dot{mod(\la*\ra + \lb*\rb +\lc*\rc,2)}
\pgfmathtruncatemacro\good{\goodl && \goodr && \dot==0}
\ifthenelse{\good=1}
{
\draw (left \la \lb \lc) -- (right \ra \rb \rc);
}{}
}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Left: the projective plane $P_2(\mathbb F_2)$, also known as the Fano plane. Right: the incidence graph $G_2$, also known as Heawood graph.}
\label{fig:LeviPG}
\end{figure}
\end{example}
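The construction of the definition and example above can be reproduced for prime $q$ with a few lines of code (a sketch of ours; prime powers would require full finite-field arithmetic):

```python
# Hedged sketch: canonical points of P2(F_q) and the edges of G_q, prime q.
def projective_points(q):
    return ([(1, x, y) for x in range(q) for y in range(q)]
            + [(0, 1, x) for x in range(q)] + [(0, 0, 1)])

def incidence_edges(q):
    # Edge {(0,P), (1,L)} whenever the scalar product P.L = 0 (mod q).
    pts = projective_points(q)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b)) % q
    return [((0, P), (1, L)) for P in pts for L in pts if dot(P, L) == 0]

# q = 2 yields the Heawood graph of the example: 14 vertices of degree 3.
print(len(projective_points(2)), len(incidence_edges(2)))
```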
It is known that for any two different points $X,Y\in P_2(\mathbb F_q)$ there is a unique $Z\in P_2(\mathbb F_q)$ such that $X\perp Z$ and $Z\perp Y$.
This implies that the vertices $(0,X)$, which are half of the vertices of $G_q$, are at distance at most 2 from $(0,(1,1,1))$, and the other half are at distance at most 3.
$P_2(\mathbb F_q)$ also satisfies that there are $q+1$ orthogonal points to any given one. Thus, in general $G_q$ is a bipartite graph of degree $\Delta=q+1$ with distance distribution
$$W(t)=\begin{cases}
1 &\text{if $t=0$}\\
q+1 &\text{if $t=1$}\\
q^2+q &\text{if $t=2$}\\
q^2 &\text{if $t=3$.}\\
\end{cases}$$
As a consequence, the average distance of $G_q$ is
$$\bar k=
\frac{5q^2+3q+1}{2q^2+2q+1}
=2.5-\frac{2q+1.5}{2q^2+2q+1}.$$
Thus, the limit of $\overline{k}$ is 2.5 and its diameter is $k=3$. Moreover, it can be proved that $G_q$ is symmetric, which gives the optimal average link utilization.
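These counts can be verified by breadth-first search on small instances; the sketch below is ours and handles prime $q$ only:

```python
from collections import deque

# Hedged sketch: adjacency lists of the incidence graph G_q, prime q.
def G_q(q):
    pts = ([(1, x, y) for x in range(q) for y in range(q)]
           + [(0, 1, x) for x in range(q)] + [(0, 0, 1)])
    dot = lambda a, b: sum(x * y for x, y in zip(a, b)) % q
    adj = {(s, P): [] for s in (0, 1) for P in pts}
    for P in pts:
        for L in pts:
            if dot(P, L) == 0:
                adj[(0, P)].append((1, L))
                adj[(1, L)].append((0, P))
    return adj

def bfs_distances(adj, src):
    dist, queue = {src: 0}, deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

q = 3
dist = bfs_distances(G_q(q), (0, (0, 0, 1)))
W = [sum(1 for d in dist.values() if d == t) for t in range(4)]
print(W)   # distance distribution [1, q+1, q^2+q, q^2]
```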
\begin{theorem}\label{thm:symmetry} $G_q$ is symmetric.
\end{theorem}
\begin{proof}
For any invertible matrix $M \in \mathcal{M}_3(\mathbb F_q)$, the application that maps the point $P$ to the point $MP$ is an automorphism of the projective plane $P_2(\mathbb F_q)$, since it maps subspaces to subspaces.
As they preserve the incidence relation, they are also automorphisms of $G_q$.
Now, in order to prove both vertex-transitivity and edge-transitivity, let us prove that for any vertices $(0,P)$, $(1,L)$, $(0,P')$ and $(1,L')$ with $(0,P)$ adjacent to $(1,L)$ and $(0,P')$ adjacent to $(1,L')$ there is a graph automorphism that maps $(0,P)$ into $(0,P')$ and $(1,L)$ into $(1,L')$.
This is equivalent to finding an automorphism $\varphi$ of $P_2(\mathbb F_q)$ that maps the point $P$ into $P'$ and the line $L$ into $L'$.
Let $Q$ be any other point in the line $L$ and $Q'$ any other point in the line $L'$.
By linear algebra there is an invertible matrix $M$ such that $M[P,Q]=[P',Q']$. The induced automorphism is the one desired.
To complete the vertex-transitivity note that mapping $(s,P)$ into $(1-s,P)$ is a graph automorphism.
\end{proof}
An interesting case of $G_q$ graphs is the one in which $q = p^2$ is a square, where $p$ is a power of a prime. In this case, the projective plane $P_2(\mathbb{F}_{p^2})$ can be partitioned into $p^2-p+1$ subplanes $P_2(\mathbb{F}_{p})$~\cite{Hirschfeld}. This implies that $G_{p^2}$ can be partitioned into $p^2-p+1$ graphs isomorphic to $G_p$, which leads to a straightforward layout of the network. Figure~\ref{fig:G_4} shows the partitioning of $G_4$ as an example. In this figure, global links are represented with red dashed lines and local links with solid black lines. The local links induce $3 = 2^2-2+1$ subgraphs isomorphic to $G_2$. The label of the vertices refers to the field isomorphism given by $\mathbb F_4\cong \frac{\mathbb F_2[x]}{(x^2+x+1)}.$ Note that the number of global links is almost the square of the number of local links.
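The labelling in Figure~\ref{fig:G_4} joins point $p$ to line $p+d \pmod{21}$ for $d$ in the local offsets $\{0,12,18\}$ and the global offsets $\{10,17\}$. These five offsets form a perfect difference set modulo $21$, which can be checked directly (a sketch of ours):

```python
# Offsets used in the figure: {0, 12, 18} local plus {10, 17} global.
D = {0, 10, 12, 17, 18}
n = 21                                  # q^2 + q + 1 for q = 4
diffs = sorted((a - b) % n for a in D for b in D if a != b)
# Perfect difference set: every nonzero residue appears exactly once.
print(diffs == list(range(1, n)))
```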
\begin{figure*}
\begin{center}
\begin{tikzpicture}[every node/.style={draw}]
\expandafter\def\csname point0\endcsname {(0,(1,0,0))}
\expandafter\def\csname point1\endcsname {(0,(0,0,1))}
\expandafter\def\csname point2\endcsname {(0,(0,1,x+1))}
\expandafter\def\csname point3\endcsname {(0,(1,x+1,x+1))}
\expandafter\def\csname point4\endcsname {(0,(1,1,1))}
\expandafter\def\csname point5\endcsname {(0,(1,1,0))}
\expandafter\def\csname point6\endcsname {(0,(1,0,x+1))}
\expandafter\def\csname point7\endcsname {(0,(0,1,0))}
\expandafter\def\csname point8\endcsname {(0,(1,0,1))}
\expandafter\def\csname point9\endcsname {(0,(0,1,1))}
\expandafter\def\csname point10\endcsname{(0,(1,1,x))}
\expandafter\def\csname point11\endcsname{(0,(1,x,x))}
\expandafter\def\csname point12\endcsname{(0,(1,1,x+1))}
\expandafter\def\csname point13\endcsname{(0,(1,x+1,1))}
\expandafter\def\csname point14\endcsname{(0,(1,x,x+1))}
\expandafter\def\csname point15\endcsname{(0,(1,x,1))}
\expandafter\def\csname point16\endcsname{(0,(1,x+1,x))}
\expandafter\def\csname point17\endcsname{(0,(1,x+1,0))}
\expandafter\def\csname point18\endcsname{(0,(1,0,x))}
\expandafter\def\csname point19\endcsname{(0,(0,1,x))}
\expandafter\def\csname point20\endcsname{(0,(1,x,0))}
\expandafter\def\csname line0\endcsname {(1,(0,1,1))}
\expandafter\def\csname line1\endcsname {(1,(1,1,0))}
\expandafter\def\csname line2\endcsname {(1,(1,1,x))}
\expandafter\def\csname line3\endcsname {(1,(1,0,x))}
\expandafter\def\csname line4\endcsname {(1,(1,0,1))}
\expandafter\def\csname line5\endcsname {(1,(1,1,1))}
\expandafter\def\csname line6\endcsname {(1,(1,x,x))}
\expandafter\def\csname line7\endcsname {(1,(1,0,x+1))}
\expandafter\def\csname line8\endcsname {(1,(1,x,1))}
\expandafter\def\csname line9\endcsname {(1,(1,x+1,x+1))}
\expandafter\def\csname line10\endcsname {(1,(0,1,x+1))}
\expandafter\def\csname line11\endcsname {(1,(1,x+1,0))}
\expandafter\def\csname line12\endcsname {(1,(0,1,x))}
\expandafter\def\csname line13\endcsname {(1,(1,x,0))}
\expandafter\def\csname line14\endcsname {(1,(1,x,x+1))}
\expandafter\def\csname line15\endcsname {(1,(1,1,x+1))}
\expandafter\def\csname line16\endcsname {(1,(1,x+1,x))}
\expandafter\def\csname line17\endcsname {(1,(0,0,1))}
\expandafter\def\csname line18\endcsname {(1,(0,1,0))}
\expandafter\def\csname line19\endcsname {(1,(1,0,0))}
\expandafter\def\csname line20\endcsname {(1,(1,x+1,1))}
\foreach \i in {0,...,20}
{
\pgfmathsetmacro\subplanepos{int(\i/3)}
\pgfmathsetmacro\subplane{int(mod(\i,3))}
\node (point\i) at (6.5*\subplane,-\subplanepos*1.1) {\csname point\i\endcsname};
\node (line\i) at (6.5*\subplane+2.5,-\subplanepos*1.1) {\csname line\i\endcsname};
}
\foreach \p in {0,...,20}
\foreach \inc in {0,12,18}
{
\pgfmathtruncatemacro\l{mod(\p+\inc,21)}
\draw (point\p) -- (line\l);
}
\foreach \p in {0,...,20}
\foreach \inc in {10,17}
{
\pgfmathtruncatemacro\l{mod(\p+\inc,21)}
\draw[red,dashed] (point\p) -- (line\l);
}
\end{tikzpicture}
\end{center}
\caption{A layout for $G_4$ based on subplanes of $P_2(\mathbb{F}_4)$.}
\label{fig:G_4}
\end{figure*}
\subsection{Modified Incidence Graph of Finite Projective Planes}\label{subsec:brown}
In the previous graph $G_q$, each vertex $(0, P)$ can be identified with its pair $(1, P)$, for every $P \in P_2(\mathbb{F}_q)$, giving a graph of diameter 2 very close to the Moore bound. Independently and simultaneously, Brown in \cite{Brown} and Erd\H{o}s \latin{et al.} in \cite{Erdos_hungarian} defined this graph, which is introduced next. Interestingly, Brahme \latin{et al.} have recently and unknowingly reinvented these graphs with a different construction, and in \cite{Brahme} they already propose them for HPC clusters. However, in this paper the following definition will be considered as the network topology model.
\begin{definition}\label{def:Brown} Let $q$ be a power of a prime number. Let $\overline{G}_q = (V, E)$ be the graph with vertex set $$V=P_2(\mathbb{F}_q)$$ and set of adjacencies $$E=\{ \{P,L\} \mid P\perp L,\ P \neq L,\ P,L\in P_2(\mathbb F_q)\}.$$
\end{definition}
Clearly, $\overline{G}_q $ has $q^2+q+1$ vertices. Now, since $P_2(\mathbb{F}_q)$ contains $q+1$ points $X$ such that $X\perp X$, this graph
is a non-regular graph with degrees $q$ and $q+1$. Hence, its number of vertices is $N=q^2+q+1=\Delta^2-\Delta+1$, where $\Delta=q+1$ is the maximum degree. Note that this expression is very close to the Moore bound $M(\Delta,2)=\Delta^2+1$. In the next example it is shown how $\overline{G}_2$ is obtained from $G_2$.
\begin{example}
In Figure \ref{fig:Brown} the graph $\overline{G}_2$ is represented. Note that this is the modified incidence graph obtained from $G_2 ,$ which was considered in Example \ref{ex:Fano}. Therefore, vertex 111, which is obtained identifying point and line 111 in $G_2$, is adjacent to 110, since point and line 110 were adjacent in $G_2$ to 111.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[
vertex/.style={anchor=south west,font=\scriptsize},
]
\fill (0,0) circle (2pt) node[vertex] {111}
(30:1) circle (2pt) node[vertex] {011}
(30:2) circle (2pt) node[vertex] {100}
(150:1) circle (2pt) node[vertex] {101}
(150:2) circle (2pt) node[vertex] {010}
(270:1) circle (2pt) node[vertex] {110}
(270:2) circle (2pt) node[vertex] {001}
;
\draw (0,0) circle (2)
(0,0) -- (30:2)
(0,0) -- (150:2)
(0,0) -- (270:2)
;
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Modified incidence graph $\overline{G}_2$.}
\label{fig:Brown}
\end{figure}
\end{example}
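For prime $q$ the graph $\overline{G}_q$ can be built explicitly (a sketch of ours), confirming the vertex count and the two vertex degrees stated above:

```python
# Hedged sketch: adjacency of the modified incidence graph, prime q.
def brown_graph(q):
    pts = ([(1, x, y) for x in range(q) for y in range(q)]
           + [(0, 1, x) for x in range(q)] + [(0, 0, 1)])
    dot = lambda a, b: sum(x * y for x, y in zip(a, b)) % q
    return {P: [L for L in pts if L != P and dot(P, L) == 0] for P in pts}

adj = brown_graph(5)
print(len(adj), sorted({len(nb) for nb in adj.values()}))
```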
\begin{lemma} For each pair of vertices of $\overline{G}_q$ there is a unique minimum path.
\end{lemma}
\begin{proof} Let $P, Q$ be two vertices in $\overline{G}_q$. If $P$ and $Q$ are adjacent, there is trivially a unique edge joining them. On the contrary, if
they are not adjacent, their vector product is adjacent to both of them, which gives a minimum path between them. If any other minimum path existed, the two paths would form a square in the graph, which is not possible.
The nonexistence of a square can be proved as follows. Let the points $P$, $Q$ be adjacent to the points $X$ and $Y$. Let $C$ be the intersection point of the lines $PQ$ and $XY$. Point $C$ is adjacent to $P$ and $Q$, since it is a linear combination of $X$ and $Y$. In the same way, it is adjacent to $X$ and $Y$. Furthermore, $C$ is adjacent to all the points in the lines $PQ$ and $XY$, and hence to all the points in the plane, which contradicts the maximum degree being $q+1$.
\end{proof}
\begin{theorem} The link utilization of $\overline{G}_q$ is $u = \frac{2q^2+q+1}{2q(q+1)}.$
\end{theorem}
\begin{proof} The vector product of two adjacent vertices of degrees $q$ and $q+1$ is the vertex of degree $q$ itself, since a self-orthogonal vertex is a common neighbour of the pair. It follows that there is no pair of adjacent vertices of degree $q$, since each of them should then be their vector product. Thus, there are two types of edges: edges with endpoint degrees $q$--$(q+1)$ and edges with endpoint degrees $(q+1)$--$(q+1)$. The remainder of the proof consists of counting the amount of traffic over these links and their number.
First, let us consider edges of type $q$--$(q+1)$. Thus, let us denote by $X$ the vertex of degree $q$ and by $Y$ the vertex of degree $q+1$. There are $q+1$ vertices of degree $q$ and for each of these vertices there are $q$ edges, all of this type. Therefore, there are $q(q+1)$ edges of this type.
The traffic traversing the arc from $X$ to $Y$ is composed of: 1 path from $X$ to $Y$, $q-1$ paths from neighbours of $X$ to $Y$, and $q$ paths from $X$ to neighbours of $Y$; which gives a total of $2q$ paths.
Next, let us consider edges of type $(q+1)$--$(q+1)$. Let us denote the endpoints $X$ and $Y$.
The total number of edges in $\overline{G}_q$ is $\frac{q(q+1)+(q+1)q^2}{2}=\frac{q(q+1)^2}{2}.$
The number of edges of this type is then \begin{multline*}
\frac{q(q+1)^2}{2}-q(q+1)=q\frac{(q^2+2q+1)-(2q+2)}{2}\\
=\frac{q(q^2-1)}{2}.
\end{multline*}
The vertices $X$ and $Y$ have a common neighbour $X\times Y$, whose traffic does not go through this edge. Thus, the traffic from $X$ to $Y$ is due to: 1 path from $X$ to $Y$, $q-1$ paths from neighbors of $X$ to $Y$, and $q-1$ paths from $X$ to neighbours of $Y$; which constitute a total of $2q-1$ paths.
The maximum load is therefore on $q$--$(q+1)$ links. The average use of the links can be calculated as follows: $$\frac{(2q)(q(q+1))+(2q-1)\frac{q(q^2-1)}{2}}{\frac{q(q+1)^2}{2}}=\frac{2q^2+q+1}{q+1}.$$
Finally, the average link utilization at the saturation point is equal to the average use divided by the maximum use, that is, $$u=\frac{\frac{2q^2+q+1}{q+1}}{2q}=\frac{2q^2+q+1}{2q(q+1)}.$$
\end{proof}
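Since minimum paths are unique, the utilization can also be measured by routing all source--destination pairs explicitly; the following sketch of ours (for prime $q$) reproduces the closed form of the theorem:

```python
# Hedged sketch: empirical link utilization of the modified incidence graph.
def brown_utilization(q):
    pts = ([(1, x, y) for x in range(q) for y in range(q)]
           + [(0, 1, x) for x in range(q)] + [(0, 0, 1)])
    dot = lambda a, b: sum(x * y for x, y in zip(a, b)) % q
    def canon(v):                    # scale so the first nonzero entry is 1
        inv = pow(next(c for c in v if c), q - 2, q)
        return tuple(c * inv % q for c in v)
    def cross(a, b):                 # unique common neighbour of a and b
        return canon(((a[1]*b[2] - a[2]*b[1]) % q,
                      (a[2]*b[0] - a[0]*b[2]) % q,
                      (a[0]*b[1] - a[1]*b[0]) % q))
    load = {}
    for P in pts:
        for Q in pts:
            if P == Q:
                continue
            if dot(P, Q) == 0:       # adjacent: one hop
                arcs = [(P, Q)]
            else:                    # two hops through the vector product
                M = cross(P, Q)
                arcs = [(P, M), (M, Q)]
            for arc in arcs:
                load[arc] = load.get(arc, 0) + 1
    return (sum(load.values()) / len(load)) / max(load.values())

q = 5
print(brown_utilization(q), (2*q*q + q + 1) / (2*q*(q + 1)))
```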
\begin{notation}
Previous families of graphs constitute the topological models of Projective Networks. We will refer to PN when the graph $G_q$ is considered, and to demi-PN when the graph $\overline{G}_q$ is selected.
\end{notation}
\section{Topologies Near the Moore Bound}\label{sec:topologies}
As stated in previous sections, our aim is to find topologies that are optimal according to Equations \eqref{eq:cost} and \eqref{eq:final}. That is, for a given $\bar k$ and $R$, the goal is to find well-balanced topologies with the maximum number of terminals $T$. Thus, in Subsection~\ref{subsec:peques}, topologies with small average distance are considered, that is, $\overline{k} \leq 2$. The MMS graph has been proposed for interconnection networks under the name of Slim Fly and for this reason it is analyzed in depth in Subsection \ref{subsec:slimfly}. Although the MMS graph is a generalized Moore graph with diameter 2 and $\overline{k}$ tending to 2, its link utilization converges to $8/9$, so it does not reach the bound in Equation~\eqref{eq:final}. In Subsection \ref{subsec:grandes} some other projective constructions with a greater average distance than the ones presented in Section \ref{sec:projective} are summarized. In Subsection \ref{subsec:random} random graphs are considered, since they are close to the Moore bound.
\subsection{Topologies with Small Average Distance}\label{subsec:peques}
In this subsection, graph constructions approaching the generalized Moore bound with average distance between 1 and 2 are considered. Straightforwardly, the only graphs with $\bar k=1$ are the complete graphs,
which are indeed Moore graphs. As stated in the previous section, complete graphs are the optimal topologies as long as routers with enough radix are available. There are many other generalized Moore graphs with $\bar k$ between 1 and 2, for example: the Turán graph, the Paley graph and the Hamming graph of dimension 2, which are described next. Some small examples are shown in Figure~\ref{fig:peques}.
The \firstuse{Turán graph}~\cite{Chartrand} Turán($n$,$r$) is a complete multipartite graph on $n$ vertices.
Let $s_1,\dotsc,s_r$ be a partition of $\{1,\dotsc,n\}$ into $r$ subsets of cardinality $\lfloor n/r\rfloor$ or $\lceil n/r \rceil$. Then, two vertices are connected if and only if they belong to different subsets.
Note that the Turán graph contains the complete bipartite graph as a special case:
$$\text{Turán}(2n,2)\cong K_{n,n}.$$ In the limit, the Turán graph has average distance $\lim_{N\rightarrow\infty}\bar k=1+\frac{1}{r}$, which for $r=2,3,4,5,6,\dots$ gives $1.5, 1.\bar 3, 1.25, 1.2, 1.1\bar 6, \dots$
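As an illustration (our sketch, not from the cited reference), the average distance can be checked by breadth-first search on a concrete instance; for Turán($16$,$4$) the exact value is $18/15=1.2$, already near the limit $1+\frac{1}{4}$.

```python
from collections import deque

def turan_graph(n, r):
    """Adjacency lists of Turán(n, r); classes assigned round-robin (equal-size case)."""
    cls = [v % r for v in range(n)]
    return [[u for u in range(n) if u != v and cls[u] != cls[v]]
            for v in range(n)]

def avg_distance(adj):
    """Mean shortest-path length over ordered pairs of distinct vertices."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

print(avg_distance(turan_graph(16, 4)))  # 1.2
```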
The \firstuse{Paley graph}~\cite{Bollobas} is a graph with $\lim_{N\rightarrow\infty}\bar k=1.5$, very similar to the complete bipartite graph. Let $q$ be a prime power satisfying $q\equiv 1 \pmod 4$. Then, the Paley graph $\text{Paley}(q)$ is the graph whose vertices are the elements of the finite field of $q$ elements, $\mathbb F_q$. Two vertices $a,b\in\mathbb F_q$ are connected in $\text{Paley}(q)$ if the difference $a-b$ has a square root in $\mathbb F_q$, \latin{i.e.,} if there is $x\in\mathbb F_q$ such that $a-b=x^2$. A notable property of this graph is that it is \firstuse{self-complementary}: it is isomorphic to its complement, the graph that connects two vertices exactly when they are not connected in the Paley graph. The Paley graph will appear again later as a subgraph of the MMS graph (yet to be introduced).
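These properties are easy to verify computationally; the following sketch (ours) builds Paley(13) and checks the degree, diameter 2, and self-complementarity, using that multiplication by a non-residue (here 2) maps edges onto non-edges.

```python
def paley_graph(q):
    """Neighbour sets of Paley(q) for a prime q with q % 4 == 1."""
    residues = {(x * x) % q for x in range(1, q)}  # nonzero quadratic residues
    return [{b for b in range(q) if b != a and (a - b) % q in residues}
            for a in range(q)]

q = 13
adj = paley_graph(q)
assert all(len(nb) == (q - 1) // 2 for nb in adj)        # degree 6
# Diameter 2: non-adjacent vertices share a common neighbour.
assert all(b in adj[a] or adj[a] & adj[b]
           for a in range(q) for b in range(q) if a != b)
# Self-complementarity: x -> 2x maps edges to non-edges (2 is a non-residue mod 13).
assert all((2 * b) % q not in adj[(2 * a) % q]
           for a in range(q) for b in adj[a])
```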
The \firstuse{Hamming graph}~\cite{Mulder} of side $n$ and dimension 2 is defined as the Cartesian graph product of two complete graphs, $K_n\square K_n$. It is called a Hamming graph since two vertices are adjacent if their Hamming distance is 1. In the recent networking literature it is known as the flattened butterfly~\cite{Kim_flat_ISCA}; other names the Hamming graph has received are rook's graph, generalized hypercube~\cite{Bhuyan} and K-cube~\cite{LaForge}.
It has diameter $k=2$, average distance $\bar k=2-\frac{2}{n+1}$ and size $N=n^2=\Delta^2/4+\Delta+1$, so it is asymptotically a factor $1/4$ away from the Moore bound. Nevertheless, it is a generalized Moore graph, which may seem paradoxical; but it can be seen that, although the average distance tends to 2 as in a Moore graph, it is always smaller.
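A brute-force check of these claims (our sketch) on $K_5\square K_5$ follows; distances in this graph are Hamming distances, so the diameter is 2 and the average distance over distinct ordered pairs works out to $2-\frac{2}{n+1}$.

```python
from collections import deque

def hamming2(n):
    """Adjacency lists of the Cartesian product of two K_n; vertices are pairs (a, b)."""
    verts = [(a, b) for a in range(n) for b in range(n)]
    index = {v: i for i, v in enumerate(verts)}
    adj = [[] for _ in verts]
    for a, b in verts:
        i = index[(a, b)]
        adj[i] = [index[(c, b)] for c in range(n) if c != a] + \
                 [index[(a, c)] for c in range(n) if c != b]
    return adj

n = 5
adj = hamming2(n)
delta = 2 * (n - 1)
assert all(len(nb) == delta for nb in adj)
assert len(adj) == delta ** 2 // 4 + delta + 1      # N = Delta^2/4 + Delta + 1
ecc, total = [], 0
for s in range(len(adj)):
    dist = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    ecc.append(max(dist.values()))
    total += sum(dist.values())
assert max(ecc) == 2                                 # diameter 2
assert abs(total / (len(adj) * (len(adj) - 1)) - (2 - 2 / (n + 1))) < 1e-12
```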
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[x=.5cm,y=.5cm]
\foreach \a in {0,...,3}
{
\begin{scope}[rotate=90*\a]
\foreach \b in {0,...,3}
{
\fill (\b-1.5,2.5) circle (1pt) coordinate (point\a\b);
}
\end{scope}
}
\foreach \a in {0,...,3}
\foreach \b in {0,...,3}
\foreach \c in {1,...,3}
\foreach \d in {0,...,3}
{
\pgfmathtruncatemacro\na{mod(\a+\c,4)}
\draw (point\a\b) -- (point\na\d);
}
\node at (0,-2cm) {Turán(16,4)};
\end{scope}
\begin{scope}[xshift=3cm]
\foreach \a in {0,...,12}
{
\fill (\a*360/13:1) circle (1pt) coordinate (point\a);
}
\foreach \hop in {1,3,4}
\foreach \a in {0,...,12}
{
\pgfmathtruncatemacro\na{mod(\a+\hop,13)}
\draw (point\a) -- (point\na);
}
\node at (0,-2cm) {Paley(13)};
\end{scope}
\begin{scope}[xshift=5.75cm,x=.5cm,y=.5cm]
\foreach \a in {0,...,4}
\foreach \b in {0,...,4}
{
\fill (\a-2,\b-2) circle (1pt) coordinate (point\a\b);
}
\foreach \a in {0,...,4}
\foreach \b in {0,...,4}
\foreach \hop in {1,...,4}
{
\pgfmathtruncatemacro\na{mod(\a+\hop,5)}
\pgfmathtruncatemacro\cond{\a<\na}
\ifthenelse{\cond=1}
{
\draw (point\a\b) edge[out=-10,in=-170] (point\na\b);
}{}
}
\foreach \a in {0,...,4}
\foreach \b in {0,...,4}
\foreach \hop in {1,...,4}
{
\pgfmathtruncatemacro\na{mod(\b+\hop,5)}
\pgfmathtruncatemacro\cond{\b<\na}
\ifthenelse{\cond=1}
{
\draw (point\a\b) edge[out=100,in=-100] (point\a\na);
}{}
}
\node at (0,-2cm) {Hamming(5,5)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The Turán graph, the Paley graph and the Hamming graph.}
\label{fig:peques}
\end{figure}
\subsection{Slim Fly}\label{subsec:slimfly}
Slim Fly is the name given by Besta and Hoefler~\cite{Besta} to network topologies based on the McKay--Miller--\v{S}ir\'{a}\v{n} (MMS) graphs~\cite{MMS}. The MMS graphs are a family of graphs of diameter 2 asymptotically reaching $\frac{8}{9}$ of the number of vertices given by the Moore bound. When degree $\Delta=7$ is considered, the MMS graph coincides with the Hoffman--Singleton graph~\cite{Hoffman}, which is a Moore graph. Thus, for small numbers of vertices it is a very good option, although it gets slightly worse for larger ones. Figure~\ref{fig:MMS_convergence_N} shows how the number of vertices of the MMS graph converges to $\frac{8}{9}$ of the cardinality given by the Moore bound for $k=2$. Note that the graph attaining the value 1 on the ordinate axis is the Hoffman--Singleton graph, which is a Moore graph.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{semilogxaxis}[
domain=39:1e7,
enlargelimits=false,
xmajorgrids=true,
ymajorgrids=true,
xminorgrids=true,
yminorgrids=true,
minor y tick num=1,
ytick={0.7,0.8,0.888888888888888,0.9,1.0},
yticklabels={0.7,0.8,8/9,{\tiny 0.9},1},
minor grid style={dashed,very thin, color=blue!15},
major grid style={very thin, color=black!30},
xlabel={$T$, number of compute nodes to be connected},
ylabel={$\frac{N}{\text{Moore}}$, size relative to the Moore bound},
legend style={at={(0.00,1.01)},anchor=south west,font=\scriptsize},legend columns=5,legend cell align=left,
legend image post style={every path={nomorepostactions}},
]
\addplot[purple,mark=o,mark size=.5pt] coordinates { (38.57142857, .6923076923) (90.40000000, .8648648649) (175., 1.) (474.5263158, .8032786885) (705.4545454, .8827586207) (1001.160000, .9529411765) (1818.903226, .8344827586) (2991.756757, .9337016575) (5556.869565, .8873483536) (6658.795918, .9233226837) (9280.981818, .8574821853) (16422.68657, .8629690049) (21070.20548, .9124087591) (26520.83544, .8668252081) (32838.57647, .9091891892) (40087.42857, .8696832579) (44081.02128, .8885032538) (68060.65138, .9048248513) (92538.35537, .9032778076) (1.067178740*10^5, .8750591576) (1.392782446*10^5, .8762395875) (1.577870966*10^5, .9009380863) (1.995821338*10^5, .9000320410) (2.751780229*10^5, .8788184802) (3.040735414*10^5, .8985752234) (3.511026526*10^5, .8887924487) (4.027467638*10^5, .8800235248) (4.791578010*10^5, .8805240175) (5.207439862*10^5, .8969870392) (6.597932085*10^5, .8813726875) (7.111198382*10^5, .8961890452) (7.650415790*10^5, .8817355689) (9.430174679*10^5, .8955342001) (1.220532875*10^6, .8949871588) (1.377677246*10^6, .8947460749) (1.461070098*10^6, .8831266128) (1.637817135*10^6, .8833423347) (1.731299320*10^6, .8943168988) (1.928801024*10^6, .8941250613) (2.367745764*10^6, .8937793786) (2.610212802*10^6, .8936231055) (2.737418987*10^6, .8842168741) (2.802543246*10^6, .8888647769) (3.004096690*10^6, .8843597011) (3.435736580*10^6, .8932089659) (3.588305431*10^6, .8846206676) (4.419196357*10^6, .8928614518) (4.599431876*10^6, .8849602174) (5.169405098*10^6, .8926592547) (5.784622320*10^6, .8852497251) (6.220756024*10^6, .8853369734) (6.446811543*10^6, .8923918138) (6.915219913*10^6, .8923109031) (7.659535802*10^6, .8855753020) (7.919017987*10^6, .8921597996) (9.304635580*10^6, .8857836591) (9.599848432*10^6, .8919566102) (1.020887547*10^7, .8918943764) (1.052281765*10^7, .8859085926) };
\end{semilogxaxis}
\end{tikzpicture}
\end{center}
\caption{Convergence on the number of vertices in the MMS graph to $\frac{8}{9}$ of Moore bound for diameter 2.}
\label{fig:MMS_convergence_N}
\end{figure}
Let us now give a schematic definition of this graph based on the ideas in \cite{Hafner}. Let $q$ be a prime power other than 2.
Then, for some $\varepsilon\in\{-1,0,1\}$, $q\equiv \varepsilon\pmod 4$.
As $q$ is a prime power there is a (unique) finite field of $q$ elements, which is denoted by $\mathbb F_q$.
The set of vertices is defined as
$$V(\mathrm{MMS}(q))=\{(s,x,y) \mid s\in\{0,1\},\ x,y\in\mathbb F_q\}.$$
Thus, MMS($q$) is a graph with $2q^2$ vertices.
In order to define the set of adjacencies a \textsl{primitive element} $\xi\in\mathbb F_q$ has to be found, that is, an element $\xi$ satisfying $\{\xi^i\mid i\in\mathbb Z\}=\mathbb F_q\setminus \{0\}$. This implies that $\xi^{q-1}=1$. Now, let us first define the sets
$$X_0=\begin{cases}
\{1,\xi^2,\dotsc,\xi^{q-3}\} &\text{ if $\varepsilon=1$,}\\
\{1,\xi^2,\dotsc,\xi^{\frac{q-1}{2}},\xi^{\frac{q+1}{2}},\dotsc,\xi^{q-2}\} &\text{ if $\varepsilon=-1$,}\\
\{1,\xi^2,\dotsc,\xi^{q-2}\} &\text{ if $\varepsilon=0$,}\\
\end{cases}$$
and $X_1=\xi X_0$.
Later it will be used that $|X_0|=\frac{q-\varepsilon}{2}$, $X_0\cup X_1=\mathbb F_q\setminus \{0\}$ and
$$X_0\cap X_1=\begin{cases}
\emptyset &\text{ if $\varepsilon=1$,}\\
\{1,-1\} &\text{ if $\varepsilon=-1$,}\\
\{1\} &\text{ if $\varepsilon=0$.}\\
\end{cases}$$
The adjacencies are defined as follows:
\begin{enumerate}
\item $(s,x,y_1)$ is adjacent to $(s,x,y_2)$ for all $s\in\{0,1\}$, $x,y_1,y_2\in\mathbb F_q$ such that $y_1-y_2\in X_s$.
\item $(0,x_1,y_1)$ is adjacent to $(1,x_2,y_2)$ for all $x_1,x_2,y_1,y_2\in\mathbb F_q$ such that $y_1-y_2=x_2x_1$.
\end{enumerate}
Thus, each vertex has $|X_0|$ incident edges by the first item and $q$ incident edges by the second item. Therefore, the degree of MMS($q$) is $\Delta=\frac{3q-\varepsilon}{2}$.
For convenience, let us call the edges given by item 1) \textsl{local edges} and the edges given by item 2) \textsl{global edges}.
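For the symmetric case $\varepsilon=1$ this definition is easy to implement directly. The sketch below (ours, restricted to prime $q$) builds MMS(5) and confirms that it is the Hoffman--Singleton graph: $2q^2=50$ vertices, degree 7, and diameter 2, hence a Moore graph since $N=\Delta^2+1$.

```python
from collections import deque

def mms(q, xi):
    """MMS(q) for a prime q with q % 4 == 1 (so eps = 1); xi must be a
    primitive element of F_q.  Returns neighbour sets keyed by (s, x, y)."""
    X = [None, None]
    X[0] = {pow(xi, i, q) for i in range(0, q - 1, 2)}   # X_0: even powers of xi
    X[1] = {(xi * e) % q for e in X[0]}                  # X_1 = xi * X_0
    verts = [(s, x, y) for s in (0, 1) for x in range(q) for y in range(q)]
    adj = {v: set() for v in verts}
    for s, x, y1 in verts:                               # local edges
        for y2 in range(q):
            if (y1 - y2) % q in X[s]:
                adj[(s, x, y1)].add((s, x, y2))
    for x1 in range(q):                                  # global edges
        for y1 in range(q):
            for x2 in range(q):
                y2 = (y1 - x1 * x2) % q                  # y1 - y2 = x2 * x1
                adj[(0, x1, y1)].add((1, x2, y2))
                adj[(1, x2, y2)].add((0, x1, y1))
    return adj

adj = mms(5, 2)                      # 2 is a primitive element of F_5
assert len(adj) == 50
assert all(len(nb) == 7 for nb in adj.values())
# Diameter 2, so MMS(5) is the Moore graph of degree 7: N = 7^2 + 1.
for start in adj:
    dist, queue = {start: 0}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    assert len(dist) == 50 and max(dist.values()) <= 2
```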
The MMS graph has diameter 2. Let us study the minimal paths to prove this and, further, to count the use of local and global edges. The possible routes between two vertices could be \emph{ll}, \emph{lg}, \emph{gl} or \emph{gg}, where \emph{l} denotes a local edge and \emph{g} a global edge.
Let $(s_1,x_1,y_1)$ be the origin vertex and $(s_2,x_2,y_2)$ the destination.
If $s_1=s_2$ and $x_1=x_2$ then the minimum routes are \emph{ll}; this is the same as in the Paley graph. Half of the vertices $(s_1,x_1,y_m)$ can be used as the middle vertex.
If $s_1=s_2$ but $x_1\neq x_2$ then the minimum route is \emph{gg} with some middle vertex $(1-s_1,x_m,y_m)$.
The adjacency exists if $y_1-y_m=(1-2s_1)x_mx_1$ and $y_2-y_m=(1-2s_1)x_2x_m$.
Hence, the vertex in the middle is unique and can be calculated by $x_m=(1-2s_1)(y_1-y_2)/(x_1-x_2)$ and $y_m=y_1-(1-2s_1)x_mx_1$.
If $s_1=1-s_2=s$ then the minimum routes will be half of the time \emph{lg} and the other half \emph{gl}.
The equations for a middle vertex $(s,x_1,y_m)$ are $y_m=y_2+(1-2s)x_1x_2$ and $z=y_1-y_2-(1-2s)x_1x_2\in X_s$, while for a middle vertex $(1-s,x_2,y_m)$ they are $y_m=y_1-(1-2s)x_1x_2$ and $z=y_1-y_2-(1-2s)x_1x_2\in X_{1-s}$. Thus, routing is performed by computing $z=y_1-y_2-(1-2s)x_1x_2$. If $z=0$ there is a global edge from the origin to the destination; otherwise, as $X_s\cup X_{1-s}=\mathbb F_q\setminus \{0\}$, either $z\in X_s$ or $z\in X_{1-s}$. If $z\in X_s$ use the middle vertex $(s,x_1,y_m)$ and if $z\in X_{1-s}$ use the middle vertex $(1-s,x_2,y_m)$. The uniqueness depends, therefore, on $X_s\cap X_{1-s}$; if $\varepsilon=1$ then it is always the empty set and the route is unique; otherwise there are some pairs for which there are two minimal paths.
In summary, the number of \emph{gg} routes is asymptotically the sum of the number of \emph{lg} routes and \emph{gl} routes. Thus, 3 global link traversals occur per local link traversal.
The analysis in \cite{Besta} does not consider the link utilization and concludes that $\Delta_0=\frac{\Delta}{2}$ terminals per router are required for full use of the network. As studied in Section~\ref{sec:model}, this would be true if all links accepted the same load. However, this is not the case in the MMS graph, as shown next. As proved above, the number of global links is about 2 times the number of local links, but the aggregate load over the global links is about 3 times the aggregate load over the local links. Thus, each global link receives about $3/2$ of the load received by a local link. Hence, saturation is reached when global links receive load 1 and local links receive load $2/3$.
Then, the link utilization is $u=\frac{2}{3}\cdot 1+\frac{1}{3}\cdot \frac{2}{3}=\frac{8}{9}$.\footnote{The value $8/9$ is the same as the quotient of its number of vertices to the Moore bound. This is a coincidence; it does not hold for the great majority of graphs.}
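The counting above can be assembled into a closed form (our reconstruction for the case $\varepsilon=1$, under the simplifying assumption that load is uniform within each link class): there are $q^2(q-1)/2$ local and $q^3$ global links, carrying $2(2q+3)$ and $2(3q-2)$ shortest-path traversals per link, respectively.

```python
from fractions import Fraction

def mms_utilization(q):
    """Average link utilization of MMS(q) at saturation (eps = 1 case),
    reconstructed from the route-type counting in the text."""
    n_local = Fraction(q * q * (q - 1), 2)   # 2q columns, each (q-1)/2-regular on q vertices
    n_global = Fraction(q ** 3)              # q global edges per vertex with s = 0
    load_local = 2 * (2 * q + 3)             # traversals per local link
    load_global = 2 * (3 * q - 2)            # traversals per global link
    peak = max(load_local, load_global)
    return (n_local * load_local + n_global * load_global) \
        / (peak * (n_local + n_global))

print(mms_utilization(5))            # 1: the Hoffman-Singleton graph
print(float(mms_utilization(13)))    # ~0.93172, matching the figure
```

For $q=5$ both link classes carry the same load, recovering $u=1$, and as $q$ grows the value tends to $8/9$.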
Figure~\ref{fig:MMS_convergence_u} shows this convergence of the link utilization to $\frac{8}{9}$. Again, note that this is an asymptotic behaviour; for the case $q=5$---the Hoffman--Singleton graph---all links receive the same load and the utilization is $u=1$, since it is a symmetric graph. The situation is a little worse if $\varepsilon \neq 1$: there are non-unique minimal paths and, if the routing is deterministic, a few links are used exclusively for messages between their endpoints.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{semilogxaxis}[
domain=39:1e7,
enlargelimits=false,
xmajorgrids=true,
ymajorgrids=true,
xminorgrids=true,
yminorgrids=true,
minor y tick num=1,
ytick={0.888888888888888,0.9,0.95,1.0},
yticklabels={8/9,0.9,0.95,1},
minor grid style={dashed,very thin, color=blue!15},
major grid style={very thin, color=black!30},
xlabel={$T$, number of compute nodes to be connected},
ylabel={$u$, average link utilization},
legend style={at={(0.00,1.01)},anchor=south west,font=\scriptsize},legend columns=5,legend cell align=left,
legend image post style={every path={nomorepostactions}},
]
\addplot[purple,mark=o,mark size=.5pt] coordinates { (38.57142857, .8571428571) (90.40000000, .9416666667) (175., 1.) (474.5263158, .8803827751) (705.4545454, .9185606061) (1001.160000, .9507692308) (1818.903226, .8842504744) (2991.756757, .9317211949) (5556.869565, .9044384058) (6658.795918, .9216326531) (9280.981818, .8865203762) (16422.68657, .8869936034) (21070.20548, .9111440207) (26520.83544, .8873108984) (32838.57647, .9080711354) (40087.42857, .8875379939) (44081.02128, .8968306738) (68060.65138, .9039199333) (92538.35537, .9024522422) (1.067178740*10^5, .8879466990) (1.392782446*10^5, .8880332354) (1.577870966*10^5, .9002361833) (1.995821338*10^5, .8993791825) (2.751780229*10^5, .8882182986) (3.040735414*10^5, .8980025499) (3.511026526*10^5, .8929002193) (4.027467638*10^5, .8883029006) (4.791578010*10^5, .8883376888) (5.207439862*10^5, .8965036148) (6.597932085*10^5, .8883962096) (7.111198382*10^5, .8957511745) (7.650415790*10^5, .8884210526) (9.430174679*10^5, .8951340616) (1.220532875*10^6, .8946187806) (1.377677246*10^6, .8943917626) (1.461070098*10^6, .8885152884) (1.637817135*10^6, .8885297611) (1.731299320*10^6, .8939877301) (1.928801024*10^6, .8938071743) (2.367745764*10^6, .8934818873) (2.610212802*10^6, .8933348626) (2.737418987*10^6, .8885880451) (2.802543246*10^6, .8909045048) (3.004096690*10^6, .8885975048) (3.435736580*10^6, .8929453158) (3.588305431*10^6, .8886147461) (4.419196357*10^6, .8926185318) (4.599431876*10^6, .8886370962) (5.169405098*10^6, .8924284353) (5.784622320*10^6, .8886560784) (6.220756024*10^6, .8886617857) (6.446811543*10^6, .8921770438) (6.915219913*10^6, .8921009985) (7.659535802*10^6, .8886773443) (7.919017987*10^6, .8919589935) (9.304635580*10^6, .8886909084) (9.599848432*10^6, .8917680641) (1.020887547*10^7, .8917095911) (1.052281765*10^7, .8886990248)};
\end{semilogxaxis}
\end{tikzpicture}
\end{center}
\caption{Convergence on the link utilization in the MMS graph to $\frac{8}{9}$.}
\label{fig:MMS_convergence_u}
\end{figure}
\subsection{Projective Networks of Higher Average Distance} \label{subsec:grandes}
In Section~\ref{sec:projective} two projective networks of average distances 2 and 2.5 were presented. There are also graphs based on projective spaces which attain the bounds for greater average distances. In this subsection they are enumerated. They are not described in great detail, since such amounts of terminal nodes are beyond the horizon of current network topologies.
The incidence graph over a generalized quadrangle or hexagon, instead of the projective plane, results in a generalized Moore graph with average distance tending to 3.5 and 5.5, respectively~\cite{Exoo}. As happens with $G_q$, generalized quadrangles and hexagons exist whenever $q$ is a prime power. Their number of vertices is twice the number of points in their spaces, respectively $P_3(\mathbb F_q)$ and $P_5(\mathbb F_q)$.
Furthermore, these graphs allow for a modification similar to $\overline{G}_q$, as was proved by Delorme~\cite{Delorme_diametro3}. In the case of quadrangles the resulting average distance tends to 3, and for hexagons it tends to 5. In both cases, the number of vertices is asymptotically close to the Moore bound. However, $q$ must be an odd power of 2; hence, they exist only for a very reduced set of sizes. Otherwise, Delorme's graph on quadrangles, that is, the modified incidence graph on the quadrangles over $P_3(\mathbb F_q)$, would have been a very good alternative to the current dragonfly topology.
These graphs are denoted as \firstuse{Delorme's graph} in the remainder of the paper. By default this notation will refer to the construction using generalized quadrangles, unless specified otherwise.
\subsection{Random Graphs}\label{subsec:random}
\firstuse{Random graphs}~\cite{Bollobas} have been proposed for interconnection networks of datacenters~\cite{Jellyfish} and HPC~\cite{Koibuchi_random}. Since not many generalized Moore graphs are known, random graphs might constitute an alternative when specific constructions are not available. There are three main models to define a random graph with $N$ vertices. Each of these models requires a different additional parameter: a probability $p$ of each edge (the \textsl{binomial model}), a total number of edges $M$ (the \textsl{uniform model}), or a constant degree $\Delta$. Although they are very similar when $\Delta=p(N-1)=2M/N$, the three models are pairwise different. Nevertheless, for all our purposes the approximations work equally well regardless of which model is chosen. The average distance is approximately $\bar k\approx \frac{\log T}{\log R}-1$, which is close to the Moore bound for all $\bar k$, although worse than the values for specific known constructions. Thus, random graphs could be used if there is no appropriate construction for the desired size. Furthermore, the link utilization in random graphs is a delicate aspect. If all terminals generate the same amount of traffic, then experimentally we have obtained a utilization of $u\approx 0.8$ (depending on the model), lower than for all the other topologies considered in this paper.
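The distance estimate is easy to reproduce. The following sketch (ours; the parameters are illustrative) samples a binomial random graph with $N=1000$ routers and mean degree 10 and estimates its average distance, which lands near $\frac{\log N}{\log \Delta}=3$.

```python
import random
from collections import deque

def binomial_graph(n, p, seed=1):
    """G(n, p): each possible edge appears independently with probability p."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def avg_distance(adj):
    """Mean distance over reachable ordered pairs of distinct vertices."""
    total, pairs = 0, 0
    for s in range(len(adj)):
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

n, mean_degree = 1000, 10
adj = binomial_graph(n, mean_degree / (n - 1))
print(avg_distance(adj))   # close to log(1000)/log(10) = 3
```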
\section{Introduction}
\maketitle
In insurance risk theory, the claim arrivals are modeled by a
compound Poisson process. The total claim up to time $t$ is given
by
\begin{equation}\label{eq:claim}
X_t=X_0+ \sum_{k=1}^{N_t}Y_k, \quad t \geq 0,
\end{equation}
where the number of claims up to time $t$, $N_t$, is a Poisson
process with intensity $\lambda_0$. The claim size process
$(Y_k)_{k \in \mathbb{N}}$ is assumed to consist of independent
and identically distributed $\mathbb{R}^{d}$-valued random variables with
distribution function $\nu_0$. In order to compensate for the
liabilities the insurance company has to pay out, it collects
premiums at such a rate that it has a fair chance of survival.
In this paper, we will study the model in (\ref{eq:claim}) with
two types of regime shift. At time $\theta^a$ the intensity of the
Poisson process changes from $\lambda_0$ to $\lambda_1$, and at
time $\theta^b$, the distribution of the claim size changes from
$\nu_0$ to $\nu_1$. (These measures are assumed to be absolutely
continuous with respect to each other.) Both $\theta^a$ and
$\theta^b$ are unknown at time 0, and they are unobservable. It is
in the insurance company's interest to detect \emph{the change
time or the disorder time} $\theta \triangleq \theta^a \wedge
\theta^b =\min\{\theta^a,\theta^b\}$ as soon as possible and to
re-evaluate a new fair value for premiums in order to keep the
profit level the same.
We assume that the times of regime shift are independent of each
other and that they have an exponential prior distribution
\[
\quad \P\{\theta^i>t\}=
(1-\pi^i)e^{-\lambda^i t}, \quad i \in \{a,b\}, \quad t\geq 0,
\]
for $\lambda^i>0$. At time $\theta$, we do not know the intensity for sure: it is either $\lambda_0$ (if the change occurred in the distribution of the claim size) or $\lambda_1$ (if the change occurred in the intensity). In fact, at time $\theta$ the value of the intensity changes from $\lambda_0$ to the random variable $\Lambda$, where
\begin{equation}\label{eq:Lambda}
\Lambda =
\begin{cases}
\lambda_1 & \text{with probability} \quad
\frac{\lambda^a}{\lambda^a+\lambda^b}
\\ \lambda_0 & \text{with probability} \quad\frac{\lambda^b}{\lambda^a+\lambda^b}.
\end{cases}
\end{equation}
At time $\theta$ the distribution of the claim size changes from
$\nu_0$ to $\nu$, where
\begin{equation}\label{eq:nu-convex}
\nu=\frac{\lambda^a}{\lambda^a+\lambda^b} \nu_0+
\frac{\lambda^b}{\lambda^a+\lambda^b} \nu_1.
\end{equation}
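The reduction of the two shift times to the single pair $(\theta,\Lambda)$ rests on two standard facts: $\theta=\theta^a\wedge\theta^b$ is exponentially distributed with rate $\lambda^a+\lambda^b$ (on $\{\theta>0\}$), and the intensity shift arrives first with probability $\frac{\lambda^a}{\lambda^a+\lambda^b}$, matching (\ref{eq:Lambda}). A small Monte Carlo sketch of this (ours; the rates are illustrative and $\pi^a=\pi^b=0$ is assumed):

```python
import random

random.seed(0)
la, lb = 0.7, 0.3          # illustrative rates lambda^a and lambda^b
n = 200_000
theta_sum, first_is_a = 0.0, 0
for _ in range(n):
    ta = random.expovariate(la)    # theta^a
    tb = random.expovariate(lb)    # theta^b
    theta_sum += min(ta, tb)       # theta = theta^a /\ theta^b
    first_is_a += ta < tb          # the intensity shift came first

print(theta_sum / n)       # ~ 1/(la + lb) = 1.0
print(first_is_a / n)      # ~ la/(la + lb) = 0.7
```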
Now consider a related, more general problem in which at the
disorder time $\theta$ the compound process introduced in
(\ref{eq:claim}) changes its intensity from $\mu \in \mathbb{R}_+$
to a random variable $\Lambda$ (at first we will allow the
distribution of this random variable to be as general as possible)
and the distribution of the claim sizes changes from $\beta_0$ to
$\beta_1$ (these two measures are assumed to be absolutely
continuous with respect to each other). The distribution of
$\theta$ is given by
\begin{equation}\label{eq:dist-theta}
\P\{\theta=0\}=\pi, \quad \P\{\theta>t| \theta>0\}=e^{-\lambda t},
\,t \geq 0.
\end{equation}
The random variables $\Lambda$ and $\theta$ are independent.
In this more general problem the aim is to detect the unknown and
unobservable time $\theta$ as quickly as possible given the
observations from the incoming claims. More precisely, we would
like to find a stopping time $\tau$ of the observation process
that minimizes the penalty function
\begin{equation}\label{eq:penalty}
R_{\tau}(\pi) \triangleq \P\{\tau<\theta\}+ c\,\mathbb{E}\left[(\tau-\theta)^{+}\right],
\end{equation}
which is the sum of the false alarm frequency
$\mathbb{P}\{\tau<\theta\}$ and the expected cost $c\,
\mathbb{E}\left[(\tau-\theta)^{+}\right]$ of the detection delay.
We are interested in solving this more general problem for three
reasons. First, setting $\pi=0$, $\lambda=\lambda^a+\lambda^b$,
$\mu=\lambda_0$, $\beta_0=\nu_0$ and $\beta_1=\nu$, and the
distribution of $\Lambda$ to be the Bernoulli distribution in
(\ref{eq:Lambda}) we see that solving this more general problem
also leads to a solution of the main problem introduced in the
second paragraph. Second, in the general problem if we set
$\Lambda$ to be a constant, then we obtain a version of the main
problem in which the rate change and change of the distribution of
the claim sizes occur simultaneously. This case was analyzed by
\cite{ds} and \cite{gapeev}. Finally, the more general problem
represents a situation in which the insurance company has only
some a priori information about the post disorder rate $\lambda_1$,
but the company cannot pin $\lambda_1$ down to a constant because
it might only have very few claims after the regime change occurs.
In fact, the company wants to detect the regime change as soon as
possible, so there is not really any time to collect data to
estimate $\lambda_1$. This change detection problem when the
underlying process $X$ is a (simple) Poisson process was recently
analyzed by \cite{bdk05}. This corresponds to setting
$\beta_0=\nu_0$ and $\beta_1=\nu_0$ in the current setting.
The compound/simple Poisson disorder problem is one of the rare
instances in which a stochastic control problem with partial
information can be handled. The (simple) Poisson disorder problem
with linear penalty for delay was partially solved by
\cite{galchuk71}, \cite{davis76} and \cite{wdavis}. This problem
later was solved by \cite{MR2003i:60071}. \cite{BD03} solved the
simple Poisson disorder problem for exponential penalty for delay,
and \cite{BD04} solved the standard Poisson disorder problem.
These results were recently extended by \cite{ds} (using the
results developed in \cite{bdk05}) and \cite{gapeev} for compound
Poisson processes. On the other hand \cite{bdk05} solved the
simple Poisson disorder problem when the post disorder rate is a
random variable and \cite{bs} solved this problem for the case
with a Phase-type disorder distribution.
We will first show that our problem is equivalent to an optimal
stopping problem for a Markovian sufficient statistic. As in
\cite{bdk05}, it turns out that the sufficient statistic is finite
dimensional if the distribution of the random variable $\Lambda$
is discrete with finitely many atoms. We will study the case of
a binary distribution in more detail. In particular, we will
analyze the case when the post-disorder rate only goes up. We are
able to show that the intuition that a decision maker would sound
the alarm only at the times when it observes an arrival does not in
general hold; see Remark~\ref{rem:only-at-jump-times}. This
intuition becomes relevant only when $\lambda$ and $c$ are small
enough, i.e. when the disorder intensity and delay penalty are
small. By performing a sample path analysis we are able to find
the optimal stopping time exactly for most of the range of
parameters. For the rest of the parameter range we provide upper
and lower bounds on the optimal stopping time. To show the
existence of the optimal stopping time for the cases when we
cannot determine it exactly, we make use of the characterization
of the value function of the optimal stopping problem as the fixed
point of a functional operator, as in \cite{bdk05}. We use this
approach since the free boundary problems associated with our
problem turn out to be quite difficult to manage, as they involve
integro-differential equations and the failure of the smooth fit
principle is expected. This characterization can be used to
calculate the value function through an iterative procedure. From
this characterization we are able to infer that the free
boundaries are decreasing convex curves located at the corner of
$\mathbb{R}_+^2$. Using our sample path analysis, we are able to determine
a certain subset of the free boundary exactly.
The rest of the paper is organized as follows. In
Section~\ref{sec:reference-prob}, we give a more precise
probabilistic description of the disorder problem and introduce a
reference probability measure $\P_0$ under which the observations
are coming from a compound Poisson process whose jump distribution
does not change over time. In Section~\ref{sec:mss}, we show that
the disorder problem can be transformed into an optimal stopping
problem for a Markovian sufficient statistic. The Markovian
sufficient statistic may not be finite dimensional and we show in
this section that it is finite dimensional when the distribution
of the post disorder rate has finitely many atoms. In
Section~\ref{sec:Bernoulli}, we find the autonomous sufficient
statistic for any Bernoulli distribution. Also we set up an
optimal stopping problem for a Bernoulli sufficient statistic
when the post disorder rate can only move up. Section~\ref{sec:bounds} contains some of our main results,
in which, by performing a sample path analysis, we either find the optimal stopping time exactly or provide upper and lower bounds.
We also show that
the optimal stopping time is finite $\mathbb{P}_0$-almost surely.
Section~\ref{sec:optimal-stopping-time} provides a useful
characterization of the value function as a limit of a sequence of
other value functions. Since the proofs of the results in this
section are similar to the ones in \cite{bdk05}, we omit them,
except for the result in which we show that the optimal stopping time
we constructed is the smallest optimal stopping time, and a few
others that we prefer to keep for the reader's convenience.
\section{A Reference Probability Measure}\label{sec:reference-prob}
We will first introduce a reference probability measure $\P_0$
under which the observations have a simpler form, namely they come
from a compound Poisson process whose rate and jump distribution
do not change over time. Next, we will construct the model that we
briefly described in the introduction in the paragraph before
(\ref{eq:dist-theta}).
Let us start with a probability space $(\Omega, \mathcal{F},
\P_0)$ and consider a standard Poisson process $N=\{N_t: t \geq
0\}$ with rate $\mu$; independent and identically distributed
strictly positive random variables $Y_1, Y_2,...$ with a common
distribution $\beta_0$ on $\mathbb{R}^d$ independent of the
Poisson process; a random variable $\theta$ independent of the
previously described stochastic elements on this probability space
whose distribution is given by
\begin{equation}
\P_0\{\theta=0\}=\pi, \quad \P_0\{\theta>t| \theta>0\}=e^{-\lambda
t}, \,t \geq 0;
\end{equation}
a random variable $\Lambda$ independent of the other stochastic
elements whose distribution is $\gamma(\cdot)$. This distribution
charges only the positive real numbers. We will assume that
\begin{equation}\label{eq:moment}
m^{(k)} \triangleq \int_{\mathbb{R}_{+}} (v-\mu)^{k}
\gamma(dv)<\infty, \quad k \in \mathbb{N}_0.
\end{equation}
Let the process $X=\{X_t: t \geq 0\}$
be the compound Poisson process defined as in (\ref{eq:claim}) and
$\mathbb{F}=\{\mathcal{F}_t\}_{t \geq 0}$ be the natural
filtration of $X$. We will also define an initial enlargement of
$\mathbb{F}$, $\mathbb{G}=\{\mathcal{G}_t\}_{t \geq 0}$ by setting
$\mathcal{G}_t \triangleq \mathcal{F}_t \vee \sigma\{\theta,
\Lambda\}$. $\mathcal{G}_t$ is the information available to a
\emph{genie} at time $t$ that also observes the realizations of
the disorder time $\theta$ and post-disorder rate $\Lambda$. Let
$\beta_1 (\cdot)$ be a probability measure on $\mathbb{R}^d$ which
is absolutely continuous with respect to $\beta_0 (\cdot)$. We
will denote by $r$ the Radon-Nikodym derivative
\begin{equation}
r(y) \triangleq \frac{d \beta_1}{d \beta_0} (y), \quad y \in
\mathbb{R}^d.
\end{equation}
The process
\begin{equation}\label{eq:Z}
Z_t \triangleq \frac{L_t}{L_{\theta}}1_{\{\theta \leq t\}} +
1_{\{\theta>t\}}, \quad t \geq 0,
\end{equation}
is a $\mathbb{G}$-martingale where
\begin{equation}
L_t \triangleq e^{-(\Lambda-\mu)t}
\prod_{k=1}^{N_t}\left[\frac{\Lambda}{\mu} r(Y_k)\right].
\end{equation}
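For each fixed $t$, $L_t$ has unit $\P_0$-expectation conditionally on $\Lambda$, which is what makes $Z$ a martingale. A quick Monte Carlo sketch of this fact (ours; it takes $r\equiv 1$, i.e.\ no change in the jump distribution, and a fixed value of $\Lambda$, so that $L_t=e^{-(\Lambda-\mu)t}(\Lambda/\mu)^{N_t}$):

```python
import math
import random

random.seed(0)
mu, lam, t = 1.0, 2.0, 1.0    # pre-change rate, a fixed value of Lambda, horizon
n = 200_000
total = 0.0
for _ in range(n):
    # Under P_0 the arrivals form a Poisson process of rate mu; sample N_t
    # by accumulating exponential inter-arrival times up to the horizon t.
    arrivals, s = 0, random.expovariate(mu)
    while s <= t:
        arrivals += 1
        s += random.expovariate(mu)
    total += math.exp(-(lam - mu) * t) * (lam / mu) ** arrivals

print(total / n)   # ~ 1 = E_0[L_t]
```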
The positive martingale $Z$ defines a new probability measure
$\mathbb{P}$ on every $(\Omega, \mathcal{G}_t)$, $t \geq 0$ by
\begin{equation}
\frac{d \mathbb{P}}{d \mathbb{P}_0}\bigg|_{\mathcal{G}_t}= Z_t,
\quad t \geq 0.
\end{equation}
Note that since $Z_0=1$, $\mathbb{P}$ and $\mathbb{P}_0$ agree on
$\mathcal{G}_0=\sigma\{\theta, \Lambda\}$, i.e.\ the random
variables $\theta$ and $\Lambda$ are independent and have the same
distribution under both $\mathbb{P}$ and $\mathbb{P}_0$. On the
other hand using the Girsanov Theorem for jump processes (see e.g
\cite{cont}, \cite{ds}) we conclude that the process $X$ is a
$(\mathbb{P}, \mathbb{G})$-compound Poisson process whose arrival
rate $\mu$ and jump distribution $\beta_0$ changes at time
$\theta$ to $\Lambda$ and $\beta_1$, respectively. In other words,
on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, we
have exactly the model posited in the Introduction section in the
paragraph between (\ref{eq:nu-convex}) and (\ref{eq:dist-theta}).
\section{Markovian Sufficient Statistics}\label{sec:mss}
In this section, we will show that the stopping problem posed in
(\ref{eq:penalty}) can be formulated as an optimal stopping
problem for a Markovian sufficient statistic, which is in general
infinite dimensional. In the following sections we will see that
depending on the structure of the prior of $\Lambda$ the
sufficient statistic can be finite dimensional.
Let us denote all the $\mathbb{F}$-stopping times by $\mathcal{S}$
and introduce the $\mathbb{F}$-adapted processes
\begin{equation}\label{eq:defn-phik}
\Pi_t \triangleq \mathbb{P}\{\theta \leq t| \mathcal{F}_t\}, \quad
\text{and} \quad \Phi_t^{(k)} \triangleq
\frac{\mathbb{E}\left[(\Lambda-\mu)^{k}1_{\{\theta \leq
t\}}|\mathcal{F}_t\right]}{1-\Pi_t}, \quad k\in \mathbb{N}_0, \; t\geq
0.
\end{equation}
$\Pi_t$ is the a posteriori probability process: the updated
probability that the disorder has happened at or before time $t$,
given all the information up to time $t$. $\Phi^{(k)}$ can be read
as an \emph{odds-ratio process}; in fact,
$\Phi_t^{(0)}=\frac{\Pi_t}{1-\Pi_t}$.
Using Proposition 2.1 in \cite{BD04} we can write the Bayes error
in (\ref{eq:penalty}) as
\begin{equation}\label{eq:new-penalty}
R_{\tau}(\pi)=1-\pi+
c(1-\pi)\mathbb{E}_0\left[\int_0^{\tau}e^{-\lambda
t}\left(\Phi_t^{(0)}-\frac{\lambda}{c}\right)dt\right], \quad \tau
\in \mathcal{S},
\end{equation}
where the expectation $\mathbb{E}_0$ is taken under the reference
probability measure $\mathbb{P}_0$. As we can see from
(\ref{eq:new-penalty}), finding an optimal stopping time for the
quickest detection problem would be considerably easier if the
process $\Phi^{(0)}$ is Markovian and its natural filtration
coincides with the filtration generated by the observations. In
that case we would just have to solve a one-dimensional optimal
stopping problem. This is not true, however, unless $\Lambda$
takes only one possible value. The following lemma shows that
the whole sequence $\{\Phi^{(k)}\}_{k \in \mathbb{N}}$ is a Markovian
sufficient statistic for our detection problem. This result also
will help us develop sufficient conditions under which a finite
dimensional sufficient statistic exists.
\begin{lemma}\label{lem:dyn-of-Phi}
Let $m^{(k)}$ be as in (\ref{eq:moment}). Then the dynamics of
$\Phi^{(k)}$ can be written as
\begin{equation}\label{eq:dyn-Phi}
d\Phi^{(k)}_t=(\lambda(m^{(k)}+\Phi_t^{(k)})-\Phi_t^{(k+1)})dt+\Phi_{t-}^{(k)}\int_{y
\in \mathbb{R}^{d}}(r(y)-1)p(dt dy)+
\Phi_{t-}^{(k+1)}\frac{1}{\mu}\int_{y \in \mathbb{R}^{d}}r(y)p(dt
dy),
\end{equation}
with $\Phi^{(k)}_0=\frac{\pi}{1-\pi}m^{(k)}$, in which $p$ is the
point process defined by
\begin{equation}\label{eq:defn-p}
p((0,t]\times A) \triangleq \sum_{k=1}^{\infty}1_{\{\sigma_k \leq
t\}}1_{\{Y_k \in A\}}, \quad \quad t \geq 0, \, A \in
\mathcal{B}(\mathbb{R}^d).
\end{equation}
\end{lemma}
\begin{proof}
Using Bayes' formula, and the independence of the stochastic
elements $\theta$, $\Lambda$ and $X$ we can write
\begin{equation}\label{eq:phik-ik-vk}
\Phi_t^{(k)}=\frac{\mathbb{E}_0\left[(\Lambda-\mu)^{k}Z_t 1_{\{\theta \leq t
\}}|\mathcal{F}_t\right]}{(1-\Pi_t) \mathbb{E}_0[Z_t|\mathcal{F}_t]} =
U_t^{(k)}+V_t^{(k)},
\end{equation}
in which
\begin{equation}\label{eq:defn-u}
U_t^{(k)} \triangleq \frac{\pi}{1-\pi}e^{\lambda t}
\int_{\mathbb{R}_+} (\nu-\mu)^k L^{\nu}_{t} \gamma(d \nu), \quad
\text{and}
\end{equation}
\begin{equation}\label{eq:defn-v}
V_t^{(k)}\triangleq \int_{0}^{t} \int_{\mathbb{R}_{+}}\lambda
e^{\lambda(t-u)} \frac{L^{\nu}_t}{L^{\nu}_u}(\nu-\mu)^{k}\gamma(d
\nu)du.
\end{equation}
Here we have used the notation
\begin{equation}\label{eq:defn-L-t}
L^{\nu}_t \triangleq e^{-(\nu-\mu)t}
\prod_{k=1}^{N_t}\left[\frac{\nu}{\mu} r(Y_k)\right], \quad \nu
\in \mathbb{R}_+.
\end{equation}
To derive (\ref{eq:phik-ik-vk}) we have used (\ref{eq:Z}),
(\ref{eq:defn-phik}) and the identity
\[
1-\Pi_t=\frac{(1-\pi)e^{-\lambda t}}{\mathbb{E}_0[Z_t|\mathcal{F}_t]},
\]
which we can derive using the independence of $\theta$ and $X$
under $\P_0$.
The process $L^{\nu}$ is the unique locally bounded solution of
the equation (see e.g. \cite{elliott})
\begin{equation}\label{eq:dynamics-of-L}
dL^{\nu}_t=L^{\nu}_{t -}\left[-(\nu-\mu)dt+\int_{y \in
\mathbb{R}^d}\left(\frac{\nu}{\mu}r(y)-1\right)p(dtdy)\right],
\end{equation}
with $L^{\nu}_0=1$. Using (\ref{eq:dynamics-of-L}) and the change of
variable formula it is easy to obtain
\begin{equation}
dU_t^{(k)}=(\lambda U_t^{(k)}-U_t^{(k+1)})dt+ \int_{y \in
\mathbb{R}^d}\left((r(y)-1)U_t^{(k)}+
\frac{r(y)}{\mu}U_t^{(k+1)}\right)p(dtdy),
\end{equation}
with $U^{(k)}_0=\frac{\pi}{1-\pi}m^{(k)}$, and
\begin{equation}
dV_t^{(k)}= (\lambda m^{(k)}-V_t^{(k+1)}+\lambda V_t^{(k)})dt +
\int_{y \in \mathbb{R}^d} \left((r(y)-1)V_t^{(k)}+
\frac{r(y)}{\mu}V_t^{(k+1)} \right)p(dtdy),
\end{equation}
with $V^{(k)}_0=0$. Now (\ref{eq:dyn-Phi}) follows from
(\ref{eq:phik-ik-vk}).
\end{proof}
Lemma~\ref{lem:dyn-of-Phi} shows that 1) $\Phi^{(0)}$ is not a
Markov process, and 2) the sequence $\{\Phi^{(k)}\}_{k \in
\mathbb{N}}$ has the Markov property and its natural filtration
is the same as $\mathbb{F}$. The following corollary gives a
sufficient condition that the distribution of the post-disorder
rate $\gamma$ must satisfy in order for the sufficient statistic
to be finite dimensional.
\begin{cor}\label{cor:finite-dimensional}
If $\gamma$ is a discrete distribution with only $k$ atoms then
$\{\Phi^{(0)},\Phi^{(1)}, \cdots, \Phi^{(k-1)}\}$ is a
$k$-dimensional Markovian sufficient statistic.
\end{cor}
\begin{proof}
This follows from the same line of arguments used in the proof of
Corollary 3.3 in \cite{bdk05}. We give the proof here not only for
the reader's convenience but also because the notation we
introduce will be used later. Let us denote by $\nu_1, \cdots, \nu_k$
the atoms of the distribution $\gamma$ and define
\begin{equation}
p(v) \triangleq \prod_{i=1}^{k}(v-\nu_i+\mu)\equiv
v^{k}+\sum_{i=0}^{k-1}c_i v^i, \quad v \in \mathbb{R},
\end{equation}
for some suitable constants $c_0,\ldots,c_{k-1}$. Observe that
\[
p(\Lambda-\mu)=(\Lambda-\mu)^{k}+\sum_{i=0}^{k-1}c_i(\Lambda-\mu)^{i}=0,
\quad \text{a.s.}
\]
The last identity together with (\ref{eq:defn-phik}) implies that
\begin{equation}\label{eq:k-dimensional}
\Phi_{t}^{(k)}+\sum_{i=0}^{k-1}c_i \Phi_t^{(i)}=0, \quad \text{$\P$-a.s.}
\end{equation}
Now, it can be seen from the form of the penalty function in
(\ref{eq:new-penalty}) and the dynamics in (\ref{eq:dyn-Phi}) that
$\{\Phi^{(0)},\Phi^{(1)}, \cdots, \Phi^{(k-1)}\}$ is a
$k$-dimensional Markovian sufficient statistic.
\end{proof}
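Although it is not needed for the proof, the identity behind (\ref{eq:k-dimensional}) is easy to confirm numerically: expanding $p(v)=\prod_{i=1}^{k}(v-\nu_i+\mu)$ gives coefficients $c_0,\ldots,c_{k-1}$ that annihilate the moments $m^{(j)}$ of any distribution supported on the atoms, since $\mathbb{E}[p(\Lambda-\mu)]=0$. A minimal Python sketch (the rate $\mu$, the atoms and the weights below are hypothetical):

```python
def poly_from_roots(roots):
    # Coefficients of p(v) = prod_i (v - r_i), lowest degree first;
    # the leading coefficient (degree k) is 1.
    coeffs = [1.0]
    for r in roots:
        new = [0.0] * (len(coeffs) + 1)
        for i, ci in enumerate(coeffs):
            new[i + 1] += ci       # v * q(v)
            new[i] -= r * ci       # -r * q(v)
        coeffs = new
    return coeffs

mu = 3.0                           # hypothetical pre-disorder rate
atoms = [4.0, 5.5, 7.0]            # hypothetical atoms nu_i of gamma
weights = [0.5, 0.3, 0.2]          # hypothetical masses gamma({nu_i})
k = len(atoms)

c = poly_from_roots([nu - mu for nu in atoms])    # c[k] == 1

def m(j):
    # moment m^{(j)} = E[(Lambda - mu)^j] for the discrete prior
    return sum(w * (nu - mu) ** j for w, nu in zip(weights, atoms))

# E[p(Lambda - mu)] = 0 gives m^{(k)} + sum_i c_i m^{(i)} = 0,
# the moment analogue of (eq:k-dimensional)
residual = m(k) + sum(c[i] * m(i) for i in range(k))
assert abs(residual) < 1e-9
```

The same coefficients $c_i$ are the ones that reduce $\Phi^{(k)}$ to a linear combination of $\Phi^{(0)},\ldots,\Phi^{(k-1)}$.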
In the remainder of the paper we will assume that the
post-disorder rate $\Lambda$ has a Bernoulli distribution.
\section{Post-Disorder Rate with Bernoulli Distribution}\label{sec:Bernoulli}
In this section we will assume that the random variable $\Lambda$
takes either the value $\mu_1>0$ or $\mu_2>0$, i.e.
$\gamma(\{\mu_1,\mu_2\})=1$. From (\ref{eq:k-dimensional}) it
follows that $\Phi^{(2)}=(\mu_1+\mu_2-2
\mu)\Phi^{(1)}-(\mu_1-\mu)(\mu_2-\mu)\Phi^{(0)}$. According to
Lemma~\ref{lem:dyn-of-Phi}, the pair $(\Phi^{(0)},\Phi^{(1)})$
satisfies
\begin{equation}
\begin{split}
&
d\Phi^{(0)}_t=(\lambda(1+\Phi_t^{(0)})-\Phi_t^{(1)})dt+\Phi_{t-}^{(0)}\int_{y
\in \mathbb{R}^{d}}(r(y)-1)p(dt dy)+
\Phi_{t-}^{(1)}\frac{1}{\mu}\int_{y \in \mathbb{R}^{d}}r(y)p(dt
dy)
\\&d\Phi^{(1)}_t=(\lambda m^{(1)}+(\lambda-(\mu_1+\mu_2-2
\mu))\Phi^{(1)}_t+(\mu_1-\mu)(\mu_2-\mu)\Phi^{(0)}_t)dt
\\& +\Phi_{t-}^{(1)}\int_{y \in \mathbb{R}^{d}}(r(y)-1)p(dt dy)+
((\mu_1+\mu_2-2
\mu)\Phi^{(1)}_{t-}-(\mu_1-\mu)(\mu_2-\mu)\Phi_{t-}^{(0)})\frac{1}{\mu}\int_{y
\in \mathbb{R}^{d}}r(y)p(dt dy)
\end{split}
\end{equation}
with initial conditions $\Phi^{(0)}_0=\frac{\pi}{1-\pi}$ and
$\Phi^{(1)}_0=\frac{\pi}{1-\pi}m^{(1)}$.
Instead of the sufficient statistic
$(\Phi^{(0)},\Phi^{(1)})$, it will be more convenient to work with
\begin{equation}\label{eq:sufficient-stats}
\tilde{\Phi}^{(0)}_t \triangleq \frac{\P\left\{\Lambda=\mu_1, \,
\theta \leq t|\mathcal{F}_t\right\}}{\P\{\theta>t|\mathcal{F}_t\}}
\quad \text{and} \quad \tilde{\Phi}^{(1)}_t \triangleq
\frac{\P\left\{\Lambda=\mu_2,\, \theta \leq
t|\mathcal{F}_t\right\}}{\P\{\theta>t|\mathcal{F}_t\}}.
\end{equation}
In fact, the following one-to-one relationship between the two
pairs holds:
\begin{equation}
\tilde{\Phi}_t^{(0)}=\frac{(\mu_2-\mu)\Phi_t^{(0)}-\Phi^{(1)}_t}{\mu_2-\mu_1}\,
\quad \text{and} \quad
\tilde{\Phi}_t^{(1)}=\frac{(\mu_1-\mu)\Phi_t^{(0)}-\Phi^{(1)}_t}{\mu_1-\mu_2}\,
.
\end{equation}
The dynamics of this new sufficient statistic are autonomous as
can be seen from
\begin{equation}\label{eq:autonomous}
\begin{split}
d \tilde{\Phi}^{(0)}_{t}&=
\left\{\frac{\lambda(\mu_2-\mu-m^{(1)})}{\mu_2-\mu_1}+(\lambda-\mu_1+\mu)
\tilde{\Phi}_t^{(0)} \right\}dt+ \tilde{\Phi}_{t-}^{(0)}\int_{y
\in \mathbb{R}^d}\left[\left(1+\frac{\mu_1-\mu}{\mu
}\right)r(y)-1\right]p(dt dy)
\\ d \tilde{\Phi}^{(1)}_{t}&=
\left\{\frac{\lambda(\mu_1-\mu-m^{(1)})}{\mu_1-\mu_2}+(\lambda-\mu_2+\mu)
\tilde{\Phi}_t^{(1)} \right\}dt+ \tilde{\Phi}_{t-}^{(1)}\int_{y
\in \mathbb{R}^d}\left[\left(1+\frac{\mu_2-\mu}{\mu
}\right)r(y)-1\right]p(dt dy)
\end{split}
\end{equation}
When the number of atoms of the distribution $\gamma$ is more than
two, we expect that sufficient statistics defined similarly will
also be autonomous.
The sufficient statistic we introduced in
(\ref{eq:sufficient-stats}) has a natural interpretation and is
similar in flavor to particle filters: these are the normalized
probabilities that are assigned to each atom $\mu_i$ and these are
updated continuously between the times of the observations, since
not having an observation in fact reveals some information about
the intensity of the underlying Poisson process. Indeed from
(\ref{eq:autonomous}) we observe that the sufficient statistic
$(\tilde{\Phi}^{(0)},\tilde{\Phi}^{(1)})$ solves an ordinary
differential equation between the observations, and the terms that
involve the counting process $p$ are inactive. When there is an
observation, these normalized probabilities jump depending on the
jump size of the observation. We will see that the optimal alarm
mechanism is to sound the alarm as soon as the sufficient
statistic touches or jumps above a convex and decreasing curve in
$\mathbb{R}_{+}^2$, provided the sufficient statistic starts below
this curve; otherwise it is optimal to sound the alarm immediately.
Since the jump distribution also changes at the time of disorder,
not only the timing of the observations but also the magnitude of
the observations is informative. Therefore, it is reasonable to
expect that we are able to construct a more acute alarm in this
case than the case in which the observations are coming from a
simple Poisson process where the jump size does not carry any
information.
In the case when the post-disorder rate can go either up or down
by one unit, i.e., $\mu_1=\mu-1$ and $\mu_2=\mu+1$, the dynamics
in (\ref{eq:autonomous}) become
\begin{equation}\label{eq:above-below}
\begin{split}
d \tilde{\Phi}^{(0)}_{t}&=
\left\{\frac{\lambda(1-m)}{2}+(\lambda+1) \tilde{\Phi}_t^{(0)}
\right\}dt+ \tilde{\Phi}_{t-}^{(0)}\int_{y \in
\mathbb{R}^d}\left[\left(1-\frac{1}{\mu }\right)r(y)-1\right]p(dt
dy)
\\ d \tilde{\Phi}^{(1)}_{t}&=
\left\{\frac{\lambda(1+m)}{2}+(\lambda-1) \tilde{\Phi}_t^{(1)}
\right\}dt+ \tilde{\Phi}_{t-}^{(1)}\int_{y \in
\mathbb{R}^d}\left[\left(1+\frac{1}{\mu }\right)r(y)-1\right]p(dt
dy),
\end{split}
\end{equation}
in which $m=m^{(1)}=\P\{\Lambda=\mu+1\}-\P\{\Lambda=\mu-1\} \in
[-1,1]$. Observe that when an arrival comes,
$\tilde{\Phi}^{(0)}_{t}$ jumps down and $\tilde{\Phi}^{(1)}$ jumps
up. If $m \in (-1,1)$, then $\tilde{\Phi}^{(0)}_{t}$ is
always increasing between the observations; $\tilde{\Phi}^{(1)}$,
on the other hand, can be increasing or mean reverting depending
on the value of $\lambda$. Note that the values $m=-1$ or $m=1$
correspond to the degenerate cases in which the post-disorder rate
is known and the sufficient statistic becomes one-dimensional.
On the other hand, in the case when the post-disorder rate can
only go up, by one or two units, i.e., $\mu_1=\mu+1$ and
$\mu_2=\mu+2$, the dynamics in (\ref{eq:autonomous}) become
\begin{equation}\label{eq:up-up}
\begin{split}
d \tilde{\Phi}^{(0)}_{t}&= \left\{\lambda(2-m)+(\lambda-1)
\tilde{\Phi}_t^{(0)} \right\}dt+ \tilde{\Phi}_{t-}^{(0)}\int_{y
\in \mathbb{R}^d}\left[\left(1+\frac{1}{\mu
}\right)r(y)-1\right]p(dt dy),
\\ d \tilde{\Phi}^{(1)}_{t}&=
\left\{\lambda(m-1)+(\lambda-2) \tilde{\Phi}_t^{(1)} \right\}dt+
\tilde{\Phi}_{t-}^{(1)}\int_{y \in
\mathbb{R}^d}\left[\left(1+\frac{2}{\mu }\right)r(y)-1\right]p(dt
dy),
\end{split}
\end{equation}
in which $m=2 \P\{\Lambda=\mu+2\}+\P\{\Lambda=\mu+1\} \in [1,2]$.
Here the initial conditions are
$\tilde{\Phi}^{(0)}_{0}=(2-m)\frac{\pi}{1-\pi}$ and
$\tilde{\Phi}^{(1)}_{0}=(m-1)\frac{\pi}{1-\pi}$. We will assume
that $m \in (1,2)$ as otherwise the problem degenerates into a
one-dimensional one. In the next section we will see that the
intuition that a decision maker would sound the alarm only at the
times when it observes an arrival does not in general hold; see
Remark~\ref{rem:only-at-jump-times}. This intuition becomes
relevant only when $\lambda$ and $c$ are small enough, i.e., when
the disorder intensity and delay penalty are small. If $\lambda
\geq 2$ then both $\tilde{\Phi}^{(0)}_{t}$ and
$\tilde{\Phi}^{(1)}_{t}$ increase between the jumps, because the
rate of disorder is high enough despite the fact that there have
been no arrivals. When $\lambda \in [1,2)$,
$\tilde{\Phi}^{(0)}_{t}$ increases between the jumps and
$\tilde{\Phi}^{(1)}_{t}$ is mean reverting. When $\lambda \in
(0,1)$, both $\tilde{\Phi}^{(0)}_{t}$ and $\tilde{\Phi}^{(1)}_{t}$
have mean reverting paths between arrivals. Since the
post-disorder arrival rate can only move up, both
$\tilde{\Phi}^{(0)}_{t}$ and $\tilde{\Phi}^{(1)}_{t}$ have an
upward jump when there is an observation.
In the remainder of the paper we analyze the case when the
sufficient statistic is of the form (\ref{eq:up-up}). Note that in
this case the penalty function in (\ref{eq:new-penalty}) becomes
\begin{equation}\label{eq:penalty-up-up}
R_{\tau}(\pi)=1-\pi+
c(1-\pi)\mathbb{E}_0\left[\int_0^{\tau}e^{-\lambda
t}\left(\tilde{\Phi}_t^{(0)}+\tilde{\Phi}_t^{(1)}-\frac{\lambda}{c}\right)dt\right],
\quad \tau \in \mathcal{S}.
\end{equation}
Let us define
\begin{equation}\label{eq:explicit-x-and-y}
\begin{split}
x(t,x_0)& \triangleq
\begin{cases}
-\frac{\lambda(2-m)}{\lambda-1}+e^{(\lambda-1)t}\left[x_0+\frac{\lambda(2-m)}{\lambda-1}\right],
& \lambda \neq 1, \\ x_0+(2-m)t, & \lambda=1,
\end{cases} \quad \text{and}
\\ y(t,y_0) & \triangleq
\begin{cases}
-\frac{\lambda(m-1)}{\lambda-2}+
e^{(\lambda-2)t}\left[y_0+\frac{\lambda(m-1)}{\lambda-2}\right] &
\lambda \neq 2,
\\ y_0+2(m-1)t & \lambda=2.
\end{cases}
\end{split}
\end{equation}
Note that $x$ and $y$ satisfy the semigroup property, i.e., for
every $t \in \mathbb{R}$ and $s \in \mathbb{R}$,
\begin{equation}\label{eq:semi-group}
x(t+s,x_0)=x(s,x(t,x_0)) \quad \text{and} \quad
y(t+s,y_0)=y(s,y(t,y_0)).
\end{equation}
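As a sanity check (not part of the argument), the flows in (\ref{eq:explicit-x-and-y}) and the semigroup property (\ref{eq:semi-group}) can be verified numerically; the values of $\lambda$ and $m$ below are hypothetical, with $\lambda \neq 1,2$ so that the exponential branches apply. Note that (\ref{eq:semi-group}) holds for negative times as well, which is what allows ``running the time backwards'' in the proofs of Section~\ref{sec:bounds}.

```python
import math

lam, m = 1.5, 1.4   # hypothetical parameters, lam != 1, 2 and m in (1, 2)

def x(t, x0):
    # flow of the first coordinate between arrivals (lambda != 1 branch)
    a = lam * (2 - m) / (lam - 1)
    return -a + math.exp((lam - 1) * t) * (x0 + a)

def y(t, y0):
    # flow of the second coordinate between arrivals (lambda != 2 branch)
    b = lam * (m - 1) / (lam - 2)
    return -b + math.exp((lam - 2) * t) * (y0 + b)

# The flows solve dx/dt = lam*(2-m) + (lam-1)*x and dy/dt = lam*(m-1) + (lam-2)*y
h, t0, x0, y0 = 1e-6, 0.7, 0.3, 0.2
dx = (x(t0 + h, x0) - x(t0 - h, x0)) / (2 * h)
assert abs(dx - (lam * (2 - m) + (lam - 1) * x(t0, x0))) < 1e-5
dy = (y(t0 + h, y0) - y(t0 - h, y0)) / (2 * h)
assert abs(dy - (lam * (m - 1) + (lam - 2) * y(t0, y0))) < 1e-5

# Semigroup property (eq:semi-group), valid also for negative times
for t, s in [(0.4, 0.9), (1.3, -0.5)]:
    assert abs(x(t + s, x0) - x(s, x(t, x0))) < 1e-9
    assert abs(y(t + s, y0) - y(s, y(t, y0))) < 1e-9
```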
Let us denote by $\sigma_n$ the jump times of the process $X$.
Then we get
\begin{equation}\label{eq:paths-at-the-jumps}
\begin{split}
&
\tilde{\Phi}^{(0)}_{t}=x(t-\sigma_n,\tilde{\Phi}^{(0)}_{\sigma_n})
\quad \text{and} \quad
\tilde{\Phi}^{(1)}_{t}=y(t-\sigma_n,\tilde{\Phi}^{(1)}_{\sigma_n}),
\quad \sigma_n \leq t <\sigma_{n+1},
\\ & \tilde{\Phi}^{(0)}_{\sigma_n}=
\left(1+\frac{1}{\mu}\right)r(Y_n) \tilde{\Phi}^{(0)}_{\sigma_n-}
\quad \text{and} \quad \tilde{\Phi}^{(1)}_{\sigma_n}=
\left(1+\frac{2}{\mu}\right)r(Y_n)\tilde{\Phi}^{(1)}_{\sigma_n-}
\quad n \in \mathbb{N}_0.
\end{split}
\end{equation}
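The recursions in (\ref{eq:paths-at-the-jumps}) translate directly into a simulation scheme: flow deterministically between arrivals and apply a multiplicative jump at each arrival. The Python sketch below is a minimal illustration under $\P_0$ (arrivals at rate $\mu$), with hypothetical parameters and, for simplicity, $r \equiv 1$; with $\lambda \geq 2$ both coordinates increase along the path, in line with the discussion following (\ref{eq:up-up}).

```python
import math, random

random.seed(0)
lam, m, mu = 2.5, 1.4, 3.0      # hypothetical; lam >= 2 and m in (1, 2)
pi0 = 0.1                       # hypothetical prior probability P{theta = 0}

def x(t, x0):
    a = lam * (2 - m) / (lam - 1)
    return -a + math.exp((lam - 1) * t) * (x0 + a)

def y(t, y0):
    b = lam * (m - 1) / (lam - 2)
    return -b + math.exp((lam - 2) * t) * (y0 + b)

# initial conditions stated below (eq:up-up)
phi0 = (2 - m) * pi0 / (1 - pi0)
phi1 = (m - 1) * pi0 / (1 - pi0)
path = [(phi0, phi1)]

T, t = 5.0, 0.0
next_jump = random.expovariate(mu)          # inter-arrival times Exp(mu) under P_0
while t < T:
    dt = min(0.05, next_jump - t, T - t)
    phi0, phi1 = x(dt, phi0), y(dt, phi1)   # deterministic flow between jumps
    t += dt
    if next_jump - t < 1e-12 and t < T:     # an arrival: multiplicative jump, r == 1
        phi0 *= 1 + 1 / mu
        phi1 *= 1 + 2 / mu
        next_jump = t + random.expovariate(mu)
    path.append((phi0, phi1))

# with lam >= 2, flows and jumps both move the statistic upward
assert all(p[0] <= q[0] and p[1] <= q[1] for p, q in zip(path, path[1:]))
```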
The minimum of the Bayes risk in (\ref{eq:penalty-up-up}) is given
by
\begin{equation}
U(\pi)=\inf_{\tau \in
\mathcal{S}}R_{\tau}(\pi)=(1-\pi)+c(1-\pi)V\left((2-m)\frac{\pi}{1-\pi},(m-1)\frac{\pi}{1-\pi}\right),
\end{equation}
in which $V$ is defined as the value function of the optimal
stopping problem for a two-dimensional Markov process
\begin{equation}\label{eq:value-function}
V(\phi_0,\phi_1) \triangleq \inf_{\tau \in
\mathcal{S}}\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda t}
g\left( \tilde{\Phi}_t\right)dt\right], \quad \quad \tilde{\Phi}_t
\triangleq (\tilde{\Phi}^{(0)}_{t},\tilde{\Phi}^{(1)}_{t}),
\end{equation}
with a running cost function
\begin{equation}
g(\phi_0,\phi_1)=\phi_0+\phi_1-\frac{\lambda}{c}.
\end{equation}
Here, $\mathbb{E}_0^{\phi_0,\phi_1}$ is the conditional $\P_0$
expectation given that $\tilde{\Phi}^{(0)}_{0}=\phi_0$ and
$\tilde{\Phi}^{(1)}_{0}=\phi_1$.
\section{Upper and Lower Bounds on the Optimal Stopping
Time}\label{sec:bounds}
Unlike in optimal stopping problems for It\^{o} diffusions, by
analyzing the sample path behavior of the piecewise-deterministic
Markov process $\tilde{\Phi} \triangleq (\tilde{\Phi}^{(0)},
\tilde{\Phi}^{(1)})$ we are able to determine the optimal stopping
time for most parameter values. For the remaining parameter values
we provide a lower bound and an upper bound on the optimal
stopping time.
All the results in this section assume that an optimal stopping
time exists and it is given by
\begin{equation}\label{optst}
\tau^{*}(\phi_0,\phi_1) \triangleq \inf\{t \geq 0: V(\widetilde{\Phi}_t)=0,\,
\widetilde{\Phi}_0=(\phi_0,\phi_1)\}.
\end{equation}
In Section~\ref{sec:optimal-stopping-time}, we verify that this
assumption in fact holds. With (\ref{optst}) we will call the
regions
\begin{equation}
\mathbf{\Gamma} \triangleq \{(\phi_0,\phi_1)\in \mathbb{R}^2_+:
V(\phi_0,\phi_1)=0\}, \quad \mathbf{C} \triangleq \mathbb{R}^2_+ \backslash
\mathbf{\Gamma},
\end{equation}
the \emph{optimal stopping region} and the \emph{continuation
region}, respectively. Let us start this section with a simple
observation.
\begin{lemma}
Let us define
\begin{equation}\label{defn:tau-l}
\tau^{l} \triangleq \inf\{t\geq 0: \tilde{\Phi}^{(0)}_t +
\tilde{\Phi}^{(1)}_t \geq \lambda/c\}.
\end{equation}
If there is an optimal stopping time $\tau^*$ for the problem in
(\ref{eq:value-function}), then $\tau^* \geq \tau^l$.
\end{lemma}
\begin{proof}
Let $\tau \in \mathcal{S}$ be any stopping rule. Then
\begin{equation}
\begin{split}
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau \vee \tau^l}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right] &=
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right]+\mathbb{E}_0^{\phi_0,\phi_1}\left[1_{\{\tau^l>\tau\}}\int_{\tau}^{\tau^l}e^{-\lambda
t }g(\tilde{\Phi}_t)dt\right]
\\ & \leq \mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right], \quad (\phi_0,\phi_1) \in
\mathbb{R}_+^2.
\end{split}
\end{equation}
Here $\tau \vee \tau^l=\max\{\tau, \tau^l\}$; the inequality holds
because $g(\tilde{\Phi}_t)<0$ for $t<\tau^l$. Hence replacing any
stopping rule $\tau$ by $\tau \vee \tau^l$ does not increase the
cost, which gives $\tau^* \geq \tau^l$.
\end{proof}
When the rate of disorder or the constant $c$ in
(\ref{eq:penalty}) is large enough, the lower bound $\tau^l$ is in
fact optimal, as the following proposition shows; i.e., the free
boundary of the two-dimensional optimal stopping problem in
(\ref{eq:value-function}) can be determined completely. This is a
very special instance of a multi-dimensional optimal stopping
problem in which an explicit determination of the free boundary is
possible.
\begin{proposition}\label{prop:explicit-solution}
If (i) $\lambda \geq 2$, or, (ii) $\lambda \in [1,2)$ and $c \geq
2-\lambda$, or, (iii) $\lambda \in (0,1)$ and $c \geq \max
\left(2-\lambda,1-\lambda\right)$, then the stopping rule $\tau^l$
of (\ref{defn:tau-l}) is optimal for the problem in
(\ref{eq:value-function}).
\end{proposition}
\begin{proof}
(i) Let us first consider the case $\lambda \geq 2$. It is clear
from the dynamics of the sufficient statistic in (\ref{eq:up-up})
that the sample paths of $\tilde{\Phi}^{(0)}_t$ and
$\tilde{\Phi}^{(1)}_t$ are increasing functions of time. Therefore
the process $\tilde{\Phi}$ does not return to the region
$\{(\phi_0,\phi_1) \in \mathbb{R}^2_+: \phi_0+\phi_1 \leq
\lambda/c\}$. Thus for every stopping time $\tau \in \mathcal{S}$
\begin{equation}
\begin{split}
&\mathbb{E}_0^{\phi_0,\phi_1} \left[\int_0^{\tau}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right] \geq
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau \vee \tau^l}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right] \\ &=
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{ \tau^l}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right]+ \mathbb{E}_0^{\phi_0,\phi_1}\left[1_{\{\tau
\geq \tau^l\}}\int_{\tau^l}^{ \tau}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right] \geq
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{ \tau^l}e^{-\lambda t
}g(\tilde{\Phi}_t)dt\right]
\end{split}
\end{equation}
(ii) If $\lambda \in [1,2)$ then any sample path of
$\tilde{\Phi}^{(0)}$ is still an increasing function of $t$, but
the same is not true anymore for the sample paths of
$\tilde{\Phi}^{(1)}$. The paths of $\widetilde{\Phi}^{(1)}$ increase with jumps;
between the jumps the paths are mean reverting to the level
$\lambda(m-1)/(2-\lambda)$. However, since the processes $\widetilde{\Phi}^{(0)}$ and
$\widetilde{\Phi}^{(1)}$ can only increase by jumps we have that
\begin{equation}
\widetilde{\Phi}^{(0)}_t \geq x(t,\phi_0) \quad \text{and} \quad \widetilde{\Phi}^{(1)}_t \geq
y(t,\phi_1), \quad t \geq 0.
\end{equation}
Therefore
\begin{equation}\label{eq:V-vs-deterministic}
V(\phi_0,\phi_1) \geq \inf_{\tau \in \mathcal{S}
}\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_{0}^{\tau}e^{-\lambda t}
\left(x(t,\phi_0)+y(t,\phi_1)-\frac{\lambda}{c}\right) dt\right].
\end{equation}
Clearly, if for some $(\phi_0,\phi_1)$ the right-hand side of
(\ref{eq:V-vs-deterministic}) is zero, then $V(\phi_0,\phi_1)=0$,
since we also know that $V \leq 0$. This can be used to find a
superset of the continuation region. However, as we shall see
shortly, this superset coincides with the \emph{advantageous region}
\begin{equation}\label{defn:advantageous}
\mathbb{C}_0 \triangleq \{(\phi_0,\phi_1) \in \mathbb{R}_+^2:
\phi_0+\phi_1 \leq \lambda/c\}.
\end{equation}
Observe that it is not optimal to stop before $\tilde{\Phi}$
leaves the region $\mathbb{C}_0$.
Let us take a look at the derivative of the integrand on the
right-hand side of (\ref{eq:V-vs-deterministic}),
\begin{equation}\label{eq:derivative-sum}
\frac{d}{dt}[x(t,\phi_0)+y(t,\phi_1)]=(\lambda-1)x(t,\phi_0)+(\lambda-2)y(t,\phi_1)+\lambda.
\end{equation}
The right-hand side of (\ref{eq:derivative-sum}) vanishes if the
curve $t \rightarrow (x(t,\phi_0), y(t,\phi_1))$ meets the line
\begin{equation}\label{eq:line}
l: (\lambda-1)x+(\lambda-2)y+\lambda=0.
\end{equation}
Note that since $\lambda \in [1,2)$, the $y$-intercept
$\frac{\lambda}{2-\lambda}$ of the line satisfies
$\frac{\lambda}{2-\lambda} \geq \lambda$. Since $l$ is
increasing and $c \geq 2-\lambda$, the intersection of $l$ with
the set $\mathbb{C}_0$ is empty. Observe also that every $t
\rightarrow (x(t,\phi_0),y(t,\phi_1))$ starting at
$(\phi_0,\phi_1)$ is decreasing and the derivative in
(\ref{eq:derivative-sum}) is increasing. Therefore, $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$ meets the line $l$ at most once for any
$(\phi_0,\phi_1) \in \mathbb{R}_{+}$.
\begin{equation}\label{eq:text}
\begin{split}
&\text{Furthermore, if $t \rightarrow (x(t,\phi_0),y(t,\phi_1))$
meets $l$ at $t_{l}=t_{l}(\phi_0,\phi_1)$, then the function} \\
&\text{$t \rightarrow x(t,\phi_0)+y(t,\phi_1)$ is decreasing on
$[0,t_l]$ and increasing on $[t_l,\infty)$. If $t \rightarrow
(x(t,\phi_0),$} \\
&\text{$y(t,\phi_1))$ does not intersect $l$, then the function
$t \rightarrow x(t,\phi_0)+y(t,\phi_1)$ is increasing on
$[0,\infty)$.}
\end{split}
\end{equation}
Since the line $l$ does not meet the region $\mathbb{C}_0$, for
every $(\phi_0,\phi_1) \in l$ we have $\phi_0+\phi_1 \geq
\lambda/c$. Now (\ref{eq:text}) implies that
$x(t,\phi_0)+y(t,\phi_1)-\frac{\lambda}{c}>0$ for $(\phi_0,\phi_1)
\in \mathbb{R}_{+} ^2-\mathbb{C}_0$ and $t \geq 0$. This implies
that the righthand side of (\ref{eq:V-vs-deterministic}) is zero,
which in turn implies that $V(\phi_0,\phi_1)=0$ for all
$(\phi_0,\phi_1) \in \mathbb{R}_{+} ^2-\mathbb{C}_0$.
(iii) If $\lambda \in (0,1)$, then both of the paths of
$x(t,\phi_0)$ and $y(t,\phi_1)$ are mean reverting. Because of our
assumption on $c$ the line $l$ in (\ref{eq:line}) does not
intersect with $\mathbb{C}_0$ and lies entirely above this region.
Let us denote the region between $l$ and $\mathbb{C}_0$ by
\begin{equation}
\text{Sh}\triangleq\{(\phi_0,\phi_1) \in \mathbb{R}_{+}^2:
\phi_0+\phi_1-\lambda/c>0, (\lambda-1)\phi_0+ (\lambda-2)
\phi_1+\lambda< 0\}.
\end{equation}
From (\ref{eq:derivative-sum}) it follows that
$x(t,\phi_0)+y(t,\phi_1)>\lambda/c$ if $(\phi_0,\phi_1) \in
\text{Sh}$. Therefore, the path $t\rightarrow
(x(t,\phi_0),y(t,\phi_1))$ never enters the region $\mathbb{C}_0$
if $(\phi_0,\phi_1) \notin \mathbb{C}_0$. Hence the right-hand
side of (\ref{eq:V-vs-deterministic}) is zero, which in turn
implies that $V(\phi_0,\phi_1)=0$ for any $(\phi_0,\phi_1) \in
\mathbb{R}^{2}_+-\mathbb{C}_0$.
\end{proof}
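The geometric fact driving cases (ii) and (iii), namely that for $c \geq 2-\lambda$ the line $l$ of (\ref{eq:line}) stays outside the advantageous region $\mathbb{C}_0$, can also be checked numerically. A small sketch with hypothetical parameter pairs (note that $\max(2-\lambda,1-\lambda)=2-\lambda$):

```python
# (lam, c) pairs: hypothetical instances of cases (ii) and (iii)
for lam, c in [(1.5, 0.6), (0.7, 1.5)]:
    assert c >= 2 - lam                       # = max(2-lam, 1-lam)
    # points of l in R_+^2: y = ((lam-1)*x + lam) / (2-lam)
    sums = []
    for i in range(5001):
        xv = i * 0.001
        yv = ((lam - 1) * xv + lam) / (2 - lam)
        if yv >= 0:
            sums.append(xv + yv)
    # l misses C0 = {x + y <= lam/c}: every point of l has x + y >= lam/c;
    # the minimum of x + y over l is the y-intercept lam/(2-lam)
    assert min(sums) >= lam / c - 1e-9
    assert abs(min(sums) - lam / (2 - lam)) < 1e-9
```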
\begin{proposition}\label{prop:lambda-1-2}
Assume $\lambda \in [1,2)$ and $c \in (0,2-\lambda)$. Let us
define
\begin{equation}
D \triangleq \{(\phi_0,\phi_1) \in \mathbb{R}_{+}^2: \phi_0 \leq
\phi_0^*,\,\, \phi_0+\phi_1 \leq \xi\} \cup \{(\phi_0,\phi_1) \in
\mathbb{R}_{+}^2: \phi_0 >\phi_0^*,\,\,\phi_0+\phi_1 \leq
\lambda/c\},
\end{equation}
in which $(\phi^*_0,\phi^*_1) \triangleq
(\lambda(-1+(2-\lambda)/c),\lambda(1+(\lambda-1)/c))$ and
\[
\xi=y\left(-t^*,\lambda\left(\frac{\lambda-1}{c}+1\right)\right),
\quad \text{where} \quad x\left(-t^*,\lambda\left(\frac{
2-\lambda}{c}-1\right)\right)=0.
\]
Then the region $D$ is a superset of the continuation region;
that is, it is optimal to stop at every point outside $D$.
\end{proposition}
\begin{proof}
Let us note that (\ref{eq:V-vs-deterministic}) implies that
\begin{equation}\label{eq:V-geq-deterministic}
V(\phi_0,\phi_1) \geq \inf_{t \in [0,\infty]
}\left[\int_{0}^{t}e^{-\lambda s}
\left(x(s,\phi_0)+y(s,\phi_1)-\frac{\lambda}{c}\right) ds\right].
\end{equation}
Because of the assumption on $c$ the line in (\ref{eq:line})
intersects the region $\mathbb{C}_0$ defined in
(\ref{defn:advantageous}). Note that $l$ and the boundary
$x+y-\lambda/c=0$ of the region $\mathbb{C}_0$ intersect at
$(\phi^*_0,\phi^*_1)$. By running the time ``backwards'', we can
find $\xi$ and $t^*$ such that
\begin{equation}
(0,\xi)=(x(-t^*,\phi_0^*),y(-t^*,\phi_1^*)).
\end{equation}
By the semi-group property (see (\ref{eq:semi-group})), we have
\[
x(t^*,0)=x(t^*,x(-t^*,\phi_0^*))=x(t^*+(-t^*),\phi_0^*)=x(0,\phi_0^*)=\phi_0^*,
\]
and,
\[
y(t^*,\xi)=y(t^*,x(-t^*,\phi_1^*))=y(t^*+(-t^*),\phi_1^*)=y(0,\phi_1^*)=\phi_1^*.
\]
So, the curve $t \rightarrow (x(t,0),y(t,\xi))$, $t \geq 0$, meets
the line $l$ at $(\phi_0^*,\phi_1^*)$, and $t_l$ in (\ref{eq:text})
equals $t^*$. This implies that
\[
x(t,0)+y(t,\xi) \geq x(t^*,0)+y(t^*,\xi)=
\phi_0^*+\phi_1^*=\frac{\lambda}{c},
\]
and in particular $\xi \geq \lambda/c$. Now we will show that when
$\lambda$ and $c$ are chosen as in the statement of the
proposition it is optimal to stop outside the region $D$.
The curve $t \rightarrow (x(t,0),y(t,\xi))$ divides
$\mathbb{R}_{+}^2$ into two connected components containing
$\mathbb{C}_0$ and the region
\begin{equation}
M \triangleq (\mathbb{R}_{+}^2-D) \cap \{(x,y) \in
\mathbb{R}_{+}^2: (\lambda-1)x+(\lambda-2)y +\lambda<0 \}
\end{equation}
respectively. Every curve $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$ starting at
$(\phi_0,\phi_1) \in M$ will stay in $M$, since from the
semi-group property (\ref{eq:semi-group}) it follows that two
distinct curves $t\rightarrow (x(t,\phi^a_0),y(t,\phi_1^a))$ and
$t\rightarrow (x(t,\phi^b_0),y(t,\phi_1^b))$ do not intersect.
Therefore, $t \rightarrow (x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$,
$(\phi_0,\phi_1) \in M$ intersects the line $l$ in (\ref{eq:line})
away from $\mathbb{C}_0$ and (\ref{eq:text}) implies that
$x(t,\phi_0)+y(t,\phi_1) > \lambda/c$ for any $(\phi_0,\phi_1) \in
M$. Now, from (\ref{eq:V-geq-deterministic}) we conclude that
$V=0$, since the infimum on the right-hand side is equal to 0 by
the arguments above and we already know that $V \leq 0$.
On the other hand, if $(\phi_0,\phi_1) \in (\mathbb{R}_{+}^2-D)
\cap \{(x,y) \in \mathbb{R}_{+}^2: (\lambda-1)x+(\lambda-2)y
+\lambda \geq 0 \}$, the curve $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$ does not intersect the line
$l$; therefore, the function $t \rightarrow
x(t,\phi_0)+y(t,\phi_1)$ is increasing and
\[
x(t,\phi_0)+y(t,\phi_1)>x(0,\phi_0)+y(0,\phi_1)\geq \phi_0+\phi_1
\geq \xi \geq \frac{\lambda}{c}, \quad 0<t<\infty.
\]
Again, the infimum on the right-hand side of
(\ref{eq:V-geq-deterministic}) is equal to zero, which implies
that $V=0$.
\end{proof}
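All ingredients of Proposition~\ref{prop:lambda-1-2} are explicitly computable, which makes the construction easy to verify. The Python sketch below uses hypothetical values $\lambda=1.5$, $m=1.4$ and $c=0.3$ (so that $\lambda \in [1,2)$ and $c \in (0,2-\lambda)$), solves $x(-t^*,\phi_0^*)=0$ in closed form, and checks the semigroup identities used in the proof.

```python
import math

lam, m, c = 1.5, 1.4, 0.3      # hypothetical; lam in [1,2), c in (0, 2-lam)

def x(t, x0):
    a = lam * (2 - m) / (lam - 1)
    return -a + math.exp((lam - 1) * t) * (x0 + a)

def y(t, y0):
    b = lam * (m - 1) / (lam - 2)
    return -b + math.exp((lam - 2) * t) * (y0 + b)

# intersection of the line l with the boundary of C0
phi0s = lam * (-1 + (2 - lam) / c)
phi1s = lam * (1 + (lam - 1) / c)
assert abs(phi0s + phi1s - lam / c) < 1e-9                 # on the boundary of C0
assert abs((lam - 1) * phi0s + (lam - 2) * phi1s + lam) < 1e-9   # on l

# solve x(-t*, phi0*) = 0 for t* in closed form (lam != 1 branch)
a = lam * (2 - m) / (lam - 1)
t_star = math.log((phi0s + a) / a) / (lam - 1)
assert t_star > 0
xi = y(-t_star, phi1s)

# semigroup consequences used in the proof: x(t*,0) = phi0*, y(t*,xi) = phi1*, xi >= lam/c
assert abs(x(t_star, 0.0) - phi0s) < 1e-9
assert abs(y(t_star, xi) - phi1s) < 1e-9
assert xi >= lam / c
```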
\begin{remark}\label{rem:only-at-jump-times}
If $\lambda \in [1,2)$ and $c \in (0,2-\lambda)$, then the
following line segment is a subset of the free boundary
\begin{equation}
H \triangleq \left\{(\phi_0,\phi_1) \in
\mathbb{R}_+^2:\phi_0+\phi_1 -\frac{\lambda}{c} = 0,\,\,\, \phi_1
\leq \phi^*_1 \right\}.
\end{equation}
This set is in fact the entrance boundary of the stopping region
(i.e., the boundary through which the path $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$ enters the stopping region).
\end{remark}
\begin{proposition}\label{prop:stop-at-jumps}
Assume that $\lambda \in (0,1)$ and that $0< c \leq
\frac{(2-\lambda)(1-\lambda)}{3-\lambda-m}$. If furthermore $c
\geq 2 \frac{1-\lambda}{3-m}$, then
\begin{equation}
P \triangleq \left\{(\phi_0,\phi_1) \in \mathbb{R}_+^2:
\phi_0+\frac{1}{2}\phi_1+\frac{3}{2}-\frac{1}{2}m-\frac{1}{c}\geq
0, \,\,\, \phi_0+\phi_1 -\frac{\lambda}{c} \geq 0 \right\},
\end{equation}
is a subset of the optimal stopping region.
If, on the other hand, $0<c<2 \frac{1-\lambda}{3-m}$, then the
first time $(\widetilde{\Phi}^{(0)},\widetilde{\Phi}^{(1)})$
reaches the set
\begin{equation}
R \triangleq \left\{(\phi_0,\phi_1) \in \mathbb{R}_+^2:
\phi_0+\frac{1}{2}\phi_1+\frac{3}{2}-\frac{1}{2}m-\frac{1}{c}\geq
0\right\},
\end{equation}
is an upper bound on the
optimal stopping time.
\end{proposition}
\begin{proof}
When $0< c \leq \frac{(2-\lambda)(1-\lambda)}{3-\lambda-m}$, then
the line $l \cap \mathbb{R}_+^2$ lies entirely in $\mathbb{C}_0$. The paths $t
\rightarrow (x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$, that do not originate
in $\mathbb{C}_0$ enter into this region through the boundary
$\{(\phi_0,\phi_1) \in \mathbb{R}_{+}^2: \phi_0+\phi_1=\lambda/c\}$ and
once they cross into $\mathbb{C}_0$ they never leave it again,
since $x(t,\phi_0)+y(t,\phi_1)<\phi_0+\phi_1<\lambda/c$ for any
point $(\phi_0,\phi_1) \in \mathbb{C}_0 \cap \{(\phi_0,\phi_1) \in
\mathbb{R}_{+}^2: (\lambda-1)\phi_0+ (\lambda-2)\phi_1+\lambda<0\}$, which
follows from (\ref{eq:derivative-sum}). Therefore the infimum on
the right-hand side of (\ref{eq:V-geq-deterministic}) is attained
by either $t=0$ or $t=\infty$ if $(\phi_0,\phi_1) \in
\mathbb{R}^{2}_+-\mathbb{C}_0$. Either one never stops, pays a
penalty for being outside $\mathbb{C}_0$ for a while and then
enjoys being in this region ad infinitum, or stops immediately
because the cost of the initial penalty is deterrent enough. Since
\begin{equation}
\int_{0}^{\infty}e^{-\lambda
t}\left(x(t,\phi_0)+y(t,\phi_1)-\frac{\lambda}{c}\right)dt=\phi_0+\frac{1}{2}\phi_1+\frac{3}{2}-\frac{1}{2}m-
\frac{1}{c},
\end{equation}
the infimum on the right-hand side of
(\ref{eq:V-geq-deterministic}) is attained by $t=0$ if
$(\phi_0,\phi_1) \in P$, which in turn implies that
$V(\phi_0,\phi_1)=0$ for any $(\phi_0,\phi_1) \in P$. Observe
that if $0<c<2 \frac{1-\lambda}{3-m}$, then $P=R$.
\end{proof}
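The closed-form value of the discounted integral used in the proof (with the discount factor $e^{-\lambda t}$ of (\ref{eq:V-geq-deterministic})) can be confirmed by direct quadrature. A minimal sketch with hypothetical parameters satisfying $\lambda \in (0,1)$ and $2\frac{1-\lambda}{3-m} \leq c \leq \frac{(2-\lambda)(1-\lambda)}{3-\lambda-m}$:

```python
import math

lam, m, c = 0.5, 1.6, 0.75     # hypothetical; lam in (0,1), c in the admissible range
assert 2 * (1 - lam) / (3 - m) <= c <= (2 - lam) * (1 - lam) / (3 - lam - m)
phi0, phi1 = 1.3, 0.7          # hypothetical starting point

def x(t, x0):
    a = lam * (2 - m) / (lam - 1)
    return -a + math.exp((lam - 1) * t) * (x0 + a)

def y(t, y0):
    b = lam * (m - 1) / (lam - 2)
    return -b + math.exp((lam - 2) * t) * (y0 + b)

def f(t):
    # discounted running cost along the deterministic flow
    return math.exp(-lam * t) * (x(t, phi0) + y(t, phi1) - lam / c)

# trapezoidal quadrature on [0, T]; the integrand decays like exp(-lam*t)
T, n = 60.0, 120_000
h = T / n
integral = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(T))

closed_form = phi0 + 0.5 * phi1 + 1.5 - 0.5 * m - 1.0 / c
assert abs(integral - closed_form) < 1e-3
```

Since the closed form is positive at this starting point, $(\phi_0,\phi_1)$ lies in the set $P$ and immediate stopping is optimal there, consistent with the proposition.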
\begin{remark}
Observe that if $\lambda \in (0,1)$ and $2 \frac{1-\lambda}{3-m}
\leq c \leq \frac{(2-\lambda)(1-\lambda)}{3-\lambda-m}$, then the
following line segment is a subset of the free boundary:
\begin{equation}
F \triangleq \left\{(\phi_0,\phi_1) \in
\mathbb{R}_+^2:\phi_0+\phi_1 -\frac{\lambda}{c} = 0,\,\,\, \phi_1
\leq 2 \left(-\frac{1-\lambda}{c}+\frac{3-m}{2}\right) \right\}.
\end{equation}
This region is in the \emph{exit boundary} of the stopping region
(i.e., the boundary through which the path $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$ exits from the stopping region).
\end{remark}
\begin{proposition}\label{prop:sec5-last}
Assume that $\lambda \in (0,1)$ and that
\begin{equation}\label{eq:c-range}
\frac{(2-\lambda)(1-\lambda)}{3-\lambda-m} < c < \max
\left(2-\lambda,1-\lambda \right).
\end{equation}
Then the region $D$ defined in Proposition~\ref{prop:lambda-1-2}
is a superset of the optimal stopping region.
\end{proposition}
\begin{proof}
From the assumption on the parameters $\lambda$ and $c$ it follows
that the mean reversion level $M=
\left(\frac{\lambda(2-m)}{1-\lambda},\frac{\lambda(m-1)}{2-\lambda}\right)$
of the path $t \rightarrow (x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$,
is in the region $[0,\lambda/c]\times[0,\lambda/c]
-\mathbb{C}_0$. Also, one can easily check that $M \in l$, in
which $l$ is as in (\ref{eq:line}). Line $l$ and the boundary of
the region $\mathbb{C}_0$ intersect at $(\phi_0^*,\phi_1^*)$.
Because $c>(2-\lambda)(1-\lambda)/(3-\lambda-m)$, the equation
$x(t,0)=\phi_0^*$ (viewed as an equation in the $t$-variable) has a
positive solution $t^*$, and $y_0=y(-t^*,\phi_1^*)>0$. The rest of the
proof follows by using the same arguments as in the proof of
Proposition~\ref{prop:lambda-1-2}.
\end{proof}
\begin{remark}
Observe that if $\lambda \in (0,1)$ and $c$ satisfies
(\ref{eq:c-range}), then the following line segment is a subset of
the free boundary:
\begin{equation}
A \triangleq \left\{(\phi_0,\phi_1) \in
\mathbb{R}_+^2:\phi_0+\phi_1 -\frac{\lambda}{c} = 0,\,\,\, \phi_1
\leq \lambda \left(1-\frac{1-\lambda}{c}\right) \right\}.
\end{equation}
Moreover, this set is a subset of the entrance boundary of the
stopping region.
\end{remark}
\begin{remark}
If the assumptions of Proposition~\ref{prop:stop-at-jumps} are
satisfied, then it is optimal to sound the alarm only at the arrival
times of the observations. This corresponds to the case when the
mean reversion level of the paths $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$ is inside the advantageous region
$\mathbb{C}_0$, which is defined in (\ref{defn:advantageous}).
Otherwise, since the paths of the sufficient statistic, $t
\rightarrow \widetilde{\Phi}_t$, may reach the stopping region continuously or
via jumps, it might be optimal to declare the alarm between two
observations.
\end{remark}
We will close this section by proving that the optimal stopping
time $\tau^*$ is finite almost surely.
\begin{proposition}
Let $\eta$ be a positive number such that
the region $\{(\phi_0,\phi_1): \phi_0+\phi_1 \geq \eta\}$ is a
subset of the stopping region. (The existence of $\eta$ is
guaranteed by
Propositions~\ref{prop:explicit-solution}--\ref{prop:sec5-last}).
Let us denote the hitting time of this region by $\tau^u$. Then
$\mathbb{E}^{\phi_0,\phi_1}_0[\tau^u] \leq \eta(2+1/\mu)$. This implies that $\tau^*$ is
finite $\P_0$ almost surely.
\end{proposition}
\begin{proof}
Since the compensator of $p(dt\,dy)$ (defined in (\ref{eq:defn-p}))
is equal to $\mu \beta_0(dy)\,dt$, we can write the dynamics of $\widetilde{\Phi}^{(0)}$
in (\ref{eq:up-up}) as
\begin{equation}
\begin{split}
\widetilde{\Phi}^{(0)}_{t \wedge \tau^u}&=\widetilde{\Phi}^{(0)}_0+\int_{0}^{t \wedge \tau^u}
\left\{\lambda(2-m)+(\lambda-1)
\tilde{\Phi}_s^{(0)} \right\}ds+ \int_0^{t \wedge \tau^u}\mu
\tilde{\Phi}_{s-}^{(0)}\int_{y \in
\mathbb{R}^d}\left[\left(1+\frac{1}{\mu
}\right)r(y)-1\right]\beta_0 (dy)ds
\\& +\int_0^{t \wedge \tau^u}\tilde{\Phi}_{s-}^{(0)}\int_{y
\in \mathbb{R}^d}\left[\left(1+\frac{1}{\mu
}\right)r(y)-1\right]q(ds dy)
\\&=\widetilde{\Phi}^{(0)}_0+\int_{0}^{t \wedge \tau^u}
\left\{\lambda(2-m)+\lambda
\tilde{\Phi}_s^{(0)} \right\}ds+\int_0^{t \wedge
\tau^u}\tilde{\Phi}_{s-}^{(0)}\int_{y \in
\mathbb{R}^d}\left[\left(1+\frac{1}{\mu }\right)r(y)-1\right]q(ds
dy),
\end{split}
\end{equation}
in which $q(dt\,dy) \triangleq p(dt\,dy)- \mu \beta_0(dy)\,dt$. Here, we
have used the fact that $\int_{y \in \mathbb{R}^d}r(y)\beta_0(dy)=1$.
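The second equality can be checked term by term: using $\int_{\mathbb{R}^d}r(y)\beta_0(dy)=1$, the compensator term simplifies as
\begin{equation*}
(\lambda-1)\,\phi+\mu\,\phi\int_{\mathbb{R}^d}\left[\left(1+\frac{1}{\mu}\right)r(y)-1\right]\beta_0(dy)
=(\lambda-1)\,\phi+\mu\,\phi\left[\left(1+\frac{1}{\mu}\right)-1\right]=\lambda\,\phi,
\end{equation*}
which turns the drift coefficient $(\lambda-1)\tilde{\Phi}^{(0)}$ in the first line into $\lambda\tilde{\Phi}^{(0)}$ in the second.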
The integral with respect to $q(dt dy)$ is an $\mathbb{F}$
martingale under the measure $\mathbb{P}_0$, since
\[
\begin{split}
\mathbb{E}^{\phi_0,\phi_1}_0\left[ \int_0^{t \wedge \tau^u}\mu
\tilde{\Phi}_{s-}^{(0)}\int_{y \in
\mathbb{R}^d}\left|\left(1+\frac{1}{\mu
}\right)r(y)-1\right|\beta_0 (dy)ds \right] &\leq
\mathbb{E}^{\phi_0,\phi_1}_0\left[\int_0^{t \wedge
\tau^u}\left(2+\frac{1}{\mu}\right)\widetilde{\Phi}^{(0)}_{s-}ds\right]
\\& \leq t \left(2+\frac{1}{\mu}\right)\eta.
\end{split}
\]
Therefore
\[
\mathbb{E}^{\phi_0,\phi_1}_0\left[\widetilde{\Phi}^{(0)}_{t \wedge
\tau^u}\right]=\phi_0+\mathbb{E}^{\phi_0,\phi_1}_0\left[\int_{0}^{t \wedge \tau^u}
\left\{\lambda(2-m)+\lambda
\tilde{\Phi}_s^{(0)} \right\}ds\right] \geq
\lambda(2-m)\mathbb{E}^{\phi_0,\phi_1}_0\left[t \wedge \tau^u\right].
\]
On the other hand,
\[
\widetilde{\Phi}^{(0)}_{t \wedge \tau^u} \leq \max\left(\eta,
\left(1+\frac{1}{\mu}\right)r(Y_{N_{t \wedge \tau^u}})\widetilde{\Phi}^{(0)}_{t
\wedge \tau^u-}\right)\leq
\eta\left(1+\left(1+\frac{1}{\mu}\right)r(Y_{N_{t \wedge
\tau^u}})\right),
\]
almost surely; therefore
\[
\begin{split}
\mathbb{E}^{\phi_0,\phi_1}_0\left[t \wedge \tau^u\right] \leq \frac{1}{\lambda(2-m)}
\mathbb{E}^{\phi_0,\phi_1}_0\left[\widetilde{\Phi}^{(0)}_{t \wedge \tau^u}\right] &\leq
\mathbb{E}^{\phi_0,\phi_1}_0\left[\eta\left(1+\left(1+\frac{1}{\mu}\right)r(Y_{1})\right)\right]
\\&=\eta \left(2+\frac{1}{\mu}\right).
\end{split}
\]
Here, the last equality uses
$\mathbb{E}^{\phi_0,\phi_1}_0\left[r(Y_{1})\right]=\int_{\mathbb{R}^d}r(y)\beta_0(dy)=1$.
The result follows after an application of the monotone
convergence theorem.
\end{proof}
In what follows we will consider the cases in which the parameters
do not satisfy the hypothesis of
Proposition~\ref{prop:explicit-solution} and construct a sequence
of functions iteratively, using an appropriately defined
functional operator, that converges to the value function
exponentially fast.
\section{Optimal Stopping Time and Properties of the Value Function and the Stopping Boundary}\label{sec:optimal-stopping-time}
The usual starting point to calculate the value function
in (\ref{eq:value-function}) and find the optimal stopping time
would be to try to characterize the value function as the unique
solution of the free boundary problem
\begin{equation}\label{eq:free-boundary}
\min\{(\mathcal{A}-\lambda)v(\varphi)+g(\varphi),-v(\varphi)\}=0,
\end{equation}
in which $\mathcal{A}$ is the infinitesimal generator
of the Markov process $(\tilde{\Phi}^{(0)},\tilde{\Phi}^{(1)})$
and whose action on a test function $f$ is given by
\begin{equation}
\begin{split}
\mathcal{A}f(\phi_0,\phi_1)&=\frac{\partial f}{\partial
\phi_0}(\phi_0,\phi_1)\left[\lambda(2-m)+(\lambda-1)
\phi_0\right]+\frac{\partial f}{\partial
\phi_1}(\phi_0,\phi_1)\left[\lambda(m-1)+(\lambda-2) \phi_1\right]
\\&+ \mu \int_{y \in
\mathbb{R}^d}\left[f\left(\left(1+\frac{1}{\mu}\right)r(y)\phi_0,
\left(1+\frac{2}{\mu}\right)r(y)\phi_1
\right)-f(\phi_0,\phi_1)\right] \beta_0(dy).
\end{split}
\end{equation}
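As a consistency check, applying $\mathcal{A}$ to the coordinate function $f(\phi_0,\phi_1)=\phi_0$ and using $\int_{\mathbb{R}^d}r(y)\beta_0(dy)=1$ gives
\begin{equation*}
\mathcal{A}f(\phi_0,\phi_1)=\lambda(2-m)+(\lambda-1)\phi_0+\mu\,\phi_0\left[\left(1+\frac{1}{\mu}\right)-1\right]=\lambda(2-m)+\lambda\,\phi_0,
\end{equation*}
which matches the drift of $\widetilde{\Phi}^{(0)}$ obtained from the compensated dynamics in the previous section.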
The solution of the free boundary problem (\ref{eq:free-boundary})
may be identified by using certain boundary conditions (the smooth
fit principle). The smooth fit is expected to fail for
(\ref{eq:free-boundary}) at the exit boundary of the stopping
region. See e.g. \cite{BD04}, \cite{BD03} for failure of the
smooth fit principle when the infinitesimal generator
$\mathcal{A}$ is a differential delay operator (these papers
consider one dimensional free boundary problems). Instead of the
characterization of the value function as a solution of
quasi-variational inequalities, we will use a new characterization
of the value function of the optimal stopping problem in
(\ref{eq:value-function}). Specifically, we will construct a
sequence of functions iteratively, using an appropriately defined
functional operator, that converges to the value function
exponentially fast. This will let us show that $\tau^*$ in
(\ref{optst}) is the optimal stopping time. We will also be able
to show the concavity of the value function and the convexity of
the free boundary.
\subsection{Optimal Stopping with Time Horizon
$\sigma_n$}
In this section, we will approximate the value function $V$ with a
sequence of optimal stopping problems. Let us denote
\begin{equation}\label{defnofVn}
V_{n}(\phi_0,\phi_1) \triangleq \inf_{\tau \in
\mathcal{S}}\mathbb{E}^{\phi_0,\phi_1}_0 \left[\int_0^{\tau \wedge
\sigma_n }e^{-\lambda t}g\left(\widetilde{\Phi}^{(0)}_t,\widetilde{\Phi}^{(1)}_t\right)dt \right]
\end{equation}
where $(\phi_0,\phi_1) \in \mathbb{R}^2_+$, $n \in \mathbb{N}$,
and $\sigma_n$ is the $n^{\text{th}}$ jump time of the process
$X$. Since $(\sigma_n)_{n \geq 1}$ is an almost surely increasing
sequence, $(V_{n})_{n \geq 1}$ is decreasing and satisfies
$-1/c<V_{n}<0$; therefore $\lim_{n}V_{n}$ exists. It is also immediate
that $V_n \geq V$. In fact we can say more about the limit of the
sequence $(V_{n})_{n \geq 1}$ as the next proposition illustrates.
\begin{proposition}\label{expofast}
$V_{n}(\phi_0,\phi_1)$ converges to $V$ uniformly in
$(\phi_0,\phi_1)\in \mathbb{R}_{+}^{2}$. In fact the rate of
convergence is exponential, as the following inequality shows:
\begin{equation}
\frac{1}{c}\left(\frac{\mu}{\mu+\lambda}\right)^n \geq
V_{n}(\phi_0,\phi_1)-V(\phi_0,\phi_1) \geq 0.
\end{equation}
\end{proposition}
\begin{proof}
For any stopping time $\tau \in \mathcal{S}$, we can write
\begin{equation}\label{eq:bndd}
\begin{split}
\mathbb{E}^{\phi_0,\phi_1}_0 \left[\int_0^{\tau}e^{-\lambda
t}g\left(\tilde{\Phi}_t\right)dt \right] =\mathbb{E}^{\phi_0,\phi_1}_0
\left[\int_0^{\tau \wedge \sigma_n }e^{-\lambda
t}g\left(\tilde{\Phi}_t\right)dt \right] +
\mathbb{E}^{\phi_0,\phi_1}_0\left[1_{\{\tau \geq \sigma_n\}}
\int_{\sigma_n}^{\tau} e^{-\lambda t} g(\tilde{\Phi}_t)dt\right]
\end{split}
\end{equation}
The first term on the right-hand-side of (\ref{eq:bndd}) is
greater than $V_n$. Since $g(\cdot,\cdot) \geq -\lambda/c$ we can show
that the second term is greater than
\begin{equation}
-\frac{\lambda}{c}\mathbb{E}^{\phi_0,\phi_1}_0\left[1_{\{\tau \geq
\sigma_n\}}\int_{\sigma_n}^{\tau}e^{-\lambda s}ds\right] \geq
-\frac{1}{c} \mathbb{E}^{\phi_0,\phi_1}_0\left[e^{-\lambda
\sigma_n}\right] \geq -\frac{1}{c}\left(\frac{\mu}{\lambda+ \mu
}\right)^n.
\end{equation}
To show the last inequality we have used the fact that $\sigma_n$
is a sum of $n$ independent and identically distributed
exponential random variables with rate $\mu$ (i.e. $\sigma_n$ has
the Erlang distribution).
\end{proof}
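The Erlang identity used in the last step, $\mathbb{E}_0[e^{-\lambda \sigma_n}]=\left(\mu/(\lambda+\mu)\right)^n$, is elementary to confirm numerically. The following Monte Carlo check is included only as a sanity check; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# sigma_n is a sum of n i.i.d. Exp(mu) inter-arrival times, i.e. Erlang(n, mu);
# its Laplace transform at lambda factorizes into (mu / (mu + lambda))^n.
rng = np.random.default_rng(0)
lam, mu, n = 0.5, 2.0, 5            # illustrative values, not from the paper
samples = 200_000
sigma_n = rng.exponential(scale=1.0 / mu, size=(samples, n)).sum(axis=1)
mc = np.exp(-lam * sigma_n).mean()  # Monte Carlo estimate of E_0[e^{-lam * sigma_n}]
exact = (mu / (mu + lam)) ** n      # closed form: (2 / 2.5)^5 = 0.32768
```

With these values the estimate agrees with the closed form to within Monte Carlo error.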
Next, we will show that $V_n$ can be determined using an iterative
algorithm. To this end we introduce the following operators acting
on bounded Borel functions $f: \mathbb{R}_{+}^2 \rightarrow
\mathbb{R}$
\begin{align}
J f(t,\phi_0,\phi_1) & \triangleq
\mathbb{E}^{\phi_0,\phi_1}_{0}\bigg[\int_0^{t \wedge \sigma_1}e^{-\lambda
s} g(\widetilde{\Phi}^{(0)}_s,\widetilde{\Phi}^{(1)}_s)ds +1_{\{t \geq \sigma_1\}}e^{-\lambda \sigma_1}
f(\widetilde{\Phi}^{(0)}_{\sigma_1},\widetilde{\Phi}^{(1)}_{\sigma_1}) \bigg],\,\, t \in[0,\infty],
\label{defnJ} \\J_{t}f(\phi_0,\phi_1) & \triangleq \inf_{s \in [t,\infty]} J
f(s,\phi_0,\phi_1), \quad t \in [0,\infty].
\end{align}
Recall that under $\mathbb{P}_0$, $\sigma_1$ (the first time an
observation arrives) has the exponential distribution with rate
$\mu$. Using Fubini's theorem we can write (\ref{defnJ}) as
\begin{equation}\label{eq:fubini}
Jf(t,\phi_0,\phi_1)=\int_0^{t}e^{-(\lambda+\mu)s}\left(g+\mu \cdot
S f\right)(x(s,\phi_0),y(s,\phi_1))ds, \quad t \in[0,\infty],
\end{equation}
in which $x$ and $y$ are the functions defined in
(\ref{eq:explicit-x-and-y}) and $S$ is the linear operator
\begin{equation}\label{eq:S}
S f(\phi_0,\phi_1)=
\int_{\mathbb{R}^d}f\left(\left(1+\frac{1}{\mu}\right)r(y)\phi_0
,\left(1+\frac{2}{\mu}\right)r(y)\phi_1\right) \beta_0(dy).
\end{equation}
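Equation (\ref{eq:fubini}) follows by conditioning on $\sigma_1$. Since $\mathbb{P}_0(\sigma_1>s)=e^{-\mu s}$, $\sigma_1$ has density $\mu e^{-\mu s}$, and the process follows the deterministic flow $(x(\cdot,\phi_0),y(\cdot,\phi_1))$ before the first arrival,
\begin{equation*}
\mathbb{E}^{\phi_0,\phi_1}_{0}\left[\int_0^{t\wedge\sigma_1}e^{-\lambda s}g(\widetilde{\Phi}^{(0)}_s,\widetilde{\Phi}^{(1)}_s)ds\right]
=\int_0^{t}e^{-(\lambda+\mu)s}\,g(x(s,\phi_0),y(s,\phi_1))\,ds,
\end{equation*}
while, using the jump mechanism encoded in (\ref{eq:S}),
\begin{equation*}
\mathbb{E}^{\phi_0,\phi_1}_{0}\left[1_{\{t \geq \sigma_1\}}e^{-\lambda \sigma_1}f(\widetilde{\Phi}^{(0)}_{\sigma_1},\widetilde{\Phi}^{(1)}_{\sigma_1})\right]
=\int_0^{t}\mu e^{-(\lambda+\mu)s}\,Sf(x(s,\phi_0),y(s,\phi_1))\,ds.
\end{equation*}
Adding the two expressions yields (\ref{eq:fubini}).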
Below we list a few useful properties of the operator $J_0$.
\begin{lemma}\label{proofJ}
For every bounded Borel function $f:\mathbb{R}_{+}^2 \rightarrow
\mathbb{R}$, the mapping $J_0 f$ is bounded. If $f$ is a concave
function, then $J_0 f$ is also a concave function. If $f_1 \leq
f_2$ are real-valued bounded Borel functions, then $J_0 f_1 \leq
J_0 f_2$. That is, the operator $J_0$ preserves boundedness,
concavity and ordering.
\end{lemma}
\begin{proof}
Let us define $\|f\| \triangleq \sup_{(\phi_0,\phi_1) \in
\mathbb{R}_{+}^2}|f(\phi_0,\phi_1)|<\infty$. Since $g(\cdot,\cdot) \geq
g(0,0)=-\lambda/c$ and $\|S(f)\| \leq \|f\|$ we can write
(\ref{eq:fubini}) as
\[
Jf(t,\phi_0,\phi_1) \geq -\left(\frac{\lambda}{c}+\mu
\|f\|\right)\int_{0}^{\infty}e^{-(\lambda+\mu)s}ds =
-\left(\frac{\lambda}{c}+\mu \|f\|\right)\frac{1}{\lambda+\mu}.
\]
Since we also have $J_{0}f(\phi_0,\phi_1) \leq
Jf(0,\phi_0,\phi_1)=0$, we obtain
\begin{equation}\label{eq:norm}
-\left(\frac{\lambda}{c}+\mu \|f\|\right)\frac{1}{\lambda+\mu}
\leq J_{0}f(\phi_0,\phi_1) \leq 0,
\end{equation}
which proves the first
assertion.
The second assertion follows since $S(f)(\cdot,\cdot)$ defined in
(\ref{eq:S}) is concave if $f$ is concave, and the functions
$\phi_0 \rightarrow x(t,\phi_0)$ and $\phi_1 \rightarrow
y(t,\phi_1)$ are linear for every $t \geq 0$. The preservation of
ordering follows immediately from (\ref{eq:fubini}).
\end{proof}
\begin{cor}\label{defnofvn}
Let us define $v_n:\mathbb{R}_+^2 \rightarrow \mathbb{R}$ by
\begin{equation}\label{definevn}
v_0=0 \quad \text{and $v_n=J_0 v_{n-1}$}.
\end{equation}
Then, for every $n \in
\mathbb{N}$, $v_n$ is bounded and concave, and $-1/c \leq v_{n+1}
\leq v_n \leq 0$. Therefore $v=\lim_{n \rightarrow \infty} v_n$
exists, and is bounded and concave. Both $v_n$ and $v$ are
continuous (not only in the interior of $\mathbb{R}_{+}^2$), they
are increasing in each of their arguments, and their left and
right partial derivatives are bounded on every compact subset of
$\mathbb{R}_{+}^2$.
\end{cor}
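The iteration (\ref{definevn}) lends itself to a direct numerical implementation. The sketch below is illustrative only: the running cost $g(\phi_0,\phi_1)=\phi_0+\phi_1-\lambda/c$ (chosen so that $g(0,0)=-\lambda/c$) and the two-point distribution for the likelihood ratio $r(Y)$ with $\mathbb{E}_0[r(Y)]=1$ are assumptions filled in for illustration, since the explicit expressions are defined elsewhere in the paper; the flows are the solutions of $\dot x=\lambda(2-m)+(\lambda-1)x$ and $\dot y=\lambda(m-1)+(\lambda-2)y$ read off from the drift of $\mathcal{A}$.

```python
import numpy as np

# Illustrative parameters (not from the paper) with lambda < 1 and 1 < m < 2.
lam, mu, m, c = 0.5, 1.0, 1.5, 1.0
a = lam * (2 - m) / (1 - lam)      # mean-reversion level of x(., phi_0)
b = lam * (m - 1) / (2 - lam)      # mean-reversion level of y(., phi_1)

grid = np.linspace(0.0, 4.0, 41)   # state grid for (phi_0, phi_1)
h = grid[1] - grid[0]
s = np.linspace(0.0, 8.0, 401)     # time grid approximating the inf over t
ds = s[1] - s[0]
r_vals = np.array([0.5, 1.5])      # assumed two-point law for r(Y), mean one

def g(p0, p1):                     # assumed running cost, g(0,0) = -lam/c
    return p0 + p1 - lam / c

def interp2(fv, q0, q1):
    """Bilinear interpolation of fv (values on grid x grid) at points (q0, q1)."""
    i = np.clip(((q0 - grid[0]) / h).astype(int), 0, grid.size - 2)
    j = np.clip(((q1 - grid[0]) / h).astype(int), 0, grid.size - 2)
    t0, t1 = (q0 - grid[i]) / h, (q1 - grid[j]) / h
    return ((1 - t0) * (1 - t1) * fv[i, j] + t0 * (1 - t1) * fv[i + 1, j]
            + (1 - t0) * t1 * fv[i, j + 1] + t0 * t1 * fv[i + 1, j + 1])

def S(fv, p0, p1):                 # the jump operator of (eq:S), truncated to the grid
    out = 0.0
    for r in r_vals:
        q0 = np.clip((1 + 1 / mu) * r * p0, grid[0], grid[-1])
        q1 = np.clip((1 + 2 / mu) * r * p1, grid[0], grid[-1])
        out = out + interp2(fv, q0, q1)
    return out / len(r_vals)

def J0(fv):                        # one step of the iteration v_n = J_0 v_{n-1}
    v = np.zeros_like(fv)
    for i, p0 in enumerate(grid):
        for j, p1 in enumerate(grid):
            xs = a + (p0 - a) * np.exp((lam - 1) * s)   # deterministic flows
            ys = b + (p1 - b) * np.exp((lam - 2) * s)
            integrand = np.exp(-(lam + mu) * s) * (g(xs, ys) + mu * S(fv, xs, ys))
            cum = np.concatenate(
                [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * ds)])
            v[i, j] = cum.min()    # infimum over t; t = 0 contributes 0
    return v

v1 = J0(np.zeros((grid.size, grid.size)))
v2 = J0(v1)
```

On this toy model one can check the structural properties established above: $-1/c \leq v_2 \leq v_1 \leq 0$ pointwise, and the zero set $\{v_n=0\}$ shrinks as $n$ grows, mirroring the increasing boundaries $\gamma_n$.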
\begin{proposition}\label{thm:epsilon-optimal}
For every $n \in \mathbb{N}$, $v_n$ defined in
Corollary~\ref{defnofvn} is equal to $V_n$ of (\ref{defnofVn}).
For $\varepsilon>0$, let us denote
\begin{equation}\label{rneps}
r^{\varepsilon}_{n}(\phi_0,\phi_1) \triangleq \inf \{t\in (0,\infty]: J
v_n(t,(\phi_0,\phi_1)) \leq J_0 v_n(\phi_0,\phi_1)+\varepsilon \}.
\end{equation}
Let us also define a sequence of stopping times by $S^{\varepsilon}_1
\triangleq r_{0}^{\varepsilon}(\widetilde{\Phi}) \wedge \sigma_1$ and
\begin{equation}\label{Sne}
S^{\varepsilon}_{n+1} \triangleq
\begin{cases}
r^{\varepsilon/2}_n(\widetilde{\Phi}) & \text{if $\sigma_1 \geq r^{\varepsilon/2}_{n}(\widetilde{\Phi})$}
\\ \sigma_1+S_n^{\varepsilon/2} \circ \theta_{\sigma_1}& \text{otherwise}.
\end{cases}
\end{equation}
Here $\theta_s$ is the shift operator on $\Omega$, i.e., $X_{t}
\circ \theta_s=X_{s+t}$. Then $S^{\varepsilon}_n$ is $\varepsilon$ optimal,
i.e.,
\begin{equation}\label{vnSne}
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{S^{\varepsilon}_n}e^{-\lambda
t}g(\widetilde{\Phi}_t)dt \right] \leq v_{n}(\phi_0,\phi_1)+\varepsilon.
\end{equation}
\end{proposition}
\subsection{Optimal Stopping Time}\label{optstotimesect}
\begin{proposition}\label{conoptst}
$\tau^{*}$ defined in (\ref{optst}) is the smallest optimal stopping
time for (\ref{eq:value-function}).
\end{proposition}
We will divide the proof of
this proposition into several lemmas. The following lemma shows that
if there exists an optimal stopping time it is necessarily greater
than or equal to $\tau^*$.
\begin{lemma}\label{gthants}
\begin{equation}
V(\phi_0,\phi_1)=\inf_{\tau \geq \tau^{*}}
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_{0}^{\tau}e^{-\lambda s }
g(\widetilde{\Phi}_s)ds\right].
\end{equation}
\end{lemma}
\begin{proposition}
\label{prop:equality-of-v-and-V} We have $v(\phi_0, \phi_1)=
V(\phi_0,\phi_1)$ for every $(\phi_0,\phi_1)\in \mathbb{R}^2_+$. Moreover,
$V$ is the largest nonpositive solution $U$ of the equation
$U=J_0U$.
\end{proposition}
As an immediate corollary to Lemma~\ref{proofJ},
Proposition~\ref{prop:equality-of-v-and-V} and
Propositions~\ref{prop:explicit-solution}--\ref{prop:sec5-last},
which construct bounds on the optimal stopping region, we can
state the following:
\begin{cor}
Let us define the stopping regions
\begin{equation}
\mathbf{\Gamma}_n \triangleq \{(\phi_0,\phi_1)\in \mathbb{R}^2_+:
v_n(\phi_0,\phi_1)=0\}, \qquad \mathbf{C}_n \triangleq \mathbb{R}^2_+ \backslash
\mathbf{\Gamma}_n, \quad n\in \mathbb{N},
\end{equation}
and recall that
\begin{equation}
\mathbf{\Gamma} = \{(\phi_0,\phi_1)\in \mathbb{R}^2_+:
v(\phi_0,\phi_1)=0\}, \quad \mathbf{C} \triangleq \mathbb{R}^2_+ \backslash \mathbf{\Gamma}.
\end{equation}
There are decreasing, convex and continuous mappings
$\gamma_n:\mathbb{R}_{+} \rightarrow \mathbb{R}_{+}$, $n \in \mathbb{N}$, and
$\gamma:\mathbb{R}_{+} \rightarrow \mathbb{R}_{+}$ such that
\begin{equation}
\mathbf{\Gamma}_n=\{(\phi_0,\phi_1)\in \mathbb{R}_{+}^2: \phi_1 \geq
\gamma_n(\phi_0)\},\, n \in \mathbb{N}, \quad \text{and}\quad
\mathbf{\Gamma}=\{(\phi_0,\phi_1)\in \mathbb{R}_{+}^2: \phi_1 \geq \gamma(\phi_0)\}.
\end{equation}
The sequence $\{\gamma_n(\phi_0)\}_{n\in \mathbb{N}}$ is increasing and
$\gamma(\phi_0) = \lim \uparrow \gamma_n(\phi_0)$ for every
$\phi_0\in \mathbb{R}_+$. If there are paths $t \rightarrow
(x(t,\phi_0),y(t,\phi_1))$, $t \geq 0$, $(\phi_0,\phi_1) \in \mathbb{C}_0$,
that exit $\mathbb{C}_0$, then there exists $\xi \in
[0,\lambda/c)$ (the value of $\xi$ depends on the parameter
values) such that
$\gamma_n(\phi_0)=\gamma(\phi_0)=\lambda/c-\phi_0$ for $\phi_0
\geq \xi$, i.e., the free boundary coincides with the boundary of
the region $\mathbb{C}_0$ defined in (\ref{defn:advantageous}). In
fact if (i) $\lambda \geq 2$, or, (ii) $\lambda \in [1,2)$ and $c
\geq 2-\lambda$, or, (iii) $\lambda \in (0,1)$ and $c \geq \max
\left(2-\lambda,1-\lambda\right)$ then $\xi=0$. If (iv) $\lambda
\in [1,2)$ and $c \in (0,2-\lambda)$, or (v) $\lambda \in (0,1)$ and
$(2-\lambda)(1-\lambda)/(3-\lambda-m) < c < \max
\left(2-\lambda,1-\lambda \right)$, then
$\xi=\lambda(-1+(2-\lambda)/c)$. If on the other hand, $\lambda
\in (0,1)$ and $2 (1-\lambda)/(3-m) \leq c \leq
(2-\lambda)(1-\lambda)/(3-\lambda-m)$, then
$\xi=(2-\lambda)/c+m-3$.
\end{cor}
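The two expressions for $\xi$ match at the common endpoint $c^*=(2-\lambda)(1-\lambda)/(3-\lambda-m)$ of the parameter regimes: substituting $c^*$, so that $(2-\lambda)/c^*=(3-\lambda-m)/(1-\lambda)$, gives
\begin{equation*}
\lambda\left(-1+\frac{2-\lambda}{c^*}\right)=\lambda\,\frac{2-m}{1-\lambda},
\qquad
\frac{2-\lambda}{c^*}+m-3=\frac{3-\lambda-m+(m-3)(1-\lambda)}{1-\lambda}=\frac{\lambda(2-m)}{1-\lambda},
\end{equation*}
so the threshold $\xi$ depends continuously on $c$ across the regimes.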
\begin{lemma}\label{lem:lem delay-equation}
Let $f:\mathbb{R}^2_+ \rightarrow \mathbb{R}$ be a bounded function. For every
$t\in \mathbb{R}_+$ and $(\phi_0,\phi_1)\in \mathbb{R}^2_+$,
\begin{align}
\label{lem:delay-equation} J_t f (\phi_0,\phi_1) =
Jf(t,(\phi_0,\phi_1)) + e^{-(\lambda+\mu)t}\, J_0 f
\big(x(t,\phi_0),y(t,\phi_1)\big).
\end{align}
\end{lemma}
\begin{remark} \normalfont
\label{rem:right-continuity-of-V-Phi_t}
Since $V$ is bounded, and $V=J_0V$ by
Proposition~\ref{prop:equality-of-v-and-V}, we have
\begin{align}
\label{eq:delay-equation-for-V} J_t V (\phi_0,\phi_1) =
JV(t,(\phi_0,\phi_1)) + e^{-(\lambda+\mu)t}\, V
\big(x(t,\phi_0),y(t,\phi_1)\big), \quad t\in \mathbb{R}_+,
\end{align}
for every $(\phi_0,\phi_1)\in \mathbb{R}^2_+$.
Let us define the $\mathbb{F}$-stopping times
\begin{align}
\label{eq:epsilon-optimal-stopping-time}
U_{\varepsilon} \triangleq \inf\{t\ge 0: V(\widetilde{{\Phi}}_t)\ge
-\varepsilon\}, \qquad \varepsilon \ge 0.
\end{align}
By the right-continuity of the mapping $t \rightarrow V(\widetilde{{\Phi}}_t)$, we have
\begin{align}
\label{eq:the-value-at-the-hitting-time}
V\big(\widetilde{{\Phi}}_{U_{\varepsilon}}\big) \ge -\varepsilon \quad
\text{on the
event}\quad \left\{U_{\varepsilon} <\infty\right\}.
\end{align}
\end{remark}
\begin{proposition}
\label{prop:martingale} Let $ M_t \triangleq e^{-\lambda t}
V(\widetilde{{\Phi}}_t) + \int^{t}_0 e^{-\lambda s} g(\widetilde{{\Phi}}_s) ds$, $t\ge 0$.
For every $n\in \mathbb{N}$, $\varepsilon\ge 0$, and $(\phi_0,\phi_1)\in
\mathbb{R}^2_+$, we have $\mathbb{E}^{\phi_0,\phi_1}_0 [M_0] =
\mathbb{E}^{\phi_0,\phi_1}_0 [M_{U_{\varepsilon}\land \sigma_n}]$, i.e.,
\begin{align}
\label{eq:martingale}
V(\phi_0,\phi_1) = \mathbb{E}^{\phi_0,\phi_1}_0 \left[ e^{-\lambda
(U_{\varepsilon}\land \sigma_n)}V(\widetilde{{\Phi}}_{U_{\varepsilon}\land
\sigma_n}) + \int^{U_{\varepsilon}\land \sigma_n}_0 e^{-\lambda
s} g(\widetilde{{\Phi}}_s) ds \right].
\end{align}
\end{proposition}
\noindent \textbf{Proof of Proposition~\ref{conoptst}.} First we
will show that $\tau^*$ is an optimal stopping time. It is enough
to show that for every $\varepsilon \ge 0$, the stopping time
$U_{\varepsilon}$
in (\ref{eq:epsilon-optimal-stopping-time}) is an
$\varepsilon$-optimal stopping time for the optimal stopping problem
(\ref{eq:value-function}), i.e.,
\begin{align*}
\mathbb{E}^{\phi_0,\phi_1}_0 \left[\int^{U_{\varepsilon}}_0 e^{-\lambda
s} g(\widetilde{{\Phi}}_s) ds \right] \le V(\phi_0,\phi_1) + \varepsilon,
\quad \text{for every}\quad (\phi_0,\phi_1)\in \mathbb{R}^2_+.
\end{align*}
Note that the sequence of random variables
\begin{align*}
\int^{U_{\varepsilon}\land \sigma_{n}}_0 e^{-\lambda s} g(\widetilde{{\Phi}}_s) ds +
e^{-\lambda (U_{\varepsilon}\land \sigma_n)}
V(\widetilde{{\Phi}}_{U_{\varepsilon}\land \sigma_n}) \ge - \int^{\infty}_{0}
e^{-\lambda s}\, \frac{\lambda}{c}\, ds -\frac{1}{c}= - \frac{2}{c}
\end{align*}
is bounded from below. By (\ref{eq:martingale}) and Fatou's Lemma,
we have
\begin{align*}
V(\phi_0,\phi_1) &= \liminf_{n\rightarrow \infty}
\mathbb{E}^{\phi_0,\phi_1}_0 \left[ \int^{U_{\varepsilon}\land \sigma_n}_0
e^{-\lambda s} g(\widetilde{{\Phi}}_s) ds + e^{-\lambda (U_{\varepsilon}\land
\sigma_n)}V(\widetilde{{\Phi}}_{U_{\varepsilon}\land \sigma_n})\right] \\
&\ge \mathbb{E}^{\phi_0,\phi_1}_0 \left[\liminf_{n\rightarrow \infty}\left(
\int^{U_{\varepsilon}\land \sigma_n}_0 e^{-\lambda s}
g(\widetilde{{\Phi}}_s) ds + e^{-\lambda (U_{\varepsilon}\land
\sigma_n)}V(\widetilde{{\Phi}}_{U_{\varepsilon}\land \sigma_n}) \right)
\right] \\
&= \mathbb{E}^{\phi_0,\phi_1}_0 \left[\int^{U_{\varepsilon}}_0 e^{-\lambda
s} g(\widetilde{{\Phi}}_s) ds + 1_{\{U_{\varepsilon}<\infty\}} e^{-\lambda
U_{\varepsilon}} V(\widetilde{{\Phi}}_{U_{\varepsilon}})\right] \\
&\ge \mathbb{E}^{\phi_0,\phi_1}_0 \left[\int^{U_{\varepsilon}}_0 e^{-\lambda
s} g(\widetilde{{\Phi}}_s) ds \right] - \varepsilon \;
\mathbb{E}^{\phi_0,\phi_1}_0 \left[ 1_{\{U_{\varepsilon}<\infty\}} e^{-\lambda
U_{\varepsilon}} \right] \quad \text{by
(\ref{eq:the-value-at-the-hitting-time})} \\
&\ge \mathbb{E}^{\phi_0,\phi_1}_0 \left[\int^{U_{\varepsilon}}_0 e^{-\lambda
s} g(\widetilde{{\Phi}}_s) ds \right] - \varepsilon
\end{align*}
for every $(\phi_0,\phi_1)\in \mathbb{R}^2_+$. This shows that
$U_{\varepsilon}$ is an $\varepsilon$-optimal stopping time.
Now we will show that $\tau^*$ is the smallest optimal stopping
time. Let us define
\begin{equation}
\tilde{\tau}\triangleq
\begin{cases}
\tau, & \text{if $\tau \geq \tau^{*}$},
\\ \tau+\tau^* \circ \theta_{\tau}, & \text{if $\tau<\tau^*$}.
\end{cases}
\end{equation}
Then the stopping time $\tilde{\tau}$ satisfies
\begin{equation}\label{strngmarkv}
\begin{split}
\mathbb{E}_{0}^{\phi_0,\phi_1}\left[\int_0^{\tilde{\tau}}e^{-\lambda s}
g(\widetilde{\Phi}_s)ds \right]
&=\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda s}g(\widetilde{\Phi}_s)ds+\int_{\tau}^{\tilde{\tau}}e^{-\lambda s} g(\widetilde{\Phi}_s)ds\right]
\\&=\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda s}g(\widetilde{\Phi}_s)ds+e^{-\lambda \tau} \int_0^{\tau^*\circ \theta_{\tau}}
e^{-\lambda s} g(\widetilde{\Phi}_{s+\tau})ds\right]
\\ &=\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda s}g(\widetilde{\Phi}_s)ds+ e^{-\lambda \tau}V(\widetilde{\Phi}_{\tau})\right]
\\ &\leq
\mathbb{E}_0^{\phi_0,\phi_1}\left[\int_0^{\tau}e^{-\lambda
s}g(\widetilde{\Phi}_s)ds\right].
\end{split}
\end{equation}
Here the third equality follows from the strong Markov property of
the process $\widetilde{\Phi}$. Now the proof immediately follows.
\hfill $\square$
\subsection{Structure of the Optimal Stopping Times}
Finally, let us describe here the structure of the
optimal stopping times. For this purpose we will need the
following lemma.
\begin{lemma}
\label{cor:r_n-as-an-exit-time} Let
\begin{align}
\label{eq:r_n} r_n (\phi_0,\phi_1) = \inf\left\{s\in(0,\infty]:
Jv_n
\big(s,(\phi_0,\phi_1)\big)= J_0 v_n (\phi_0,\phi_1)\right\}
\end{align}
be the same as $r^{\varepsilon}_n(\phi_0,\phi_1)$ in
Proposition~\ref{thm:epsilon-optimal} with $\varepsilon = 0$. Then
\begin{align}
\label{eq:r_n-as-an-exit-time} r_n (\phi_0,\phi_1)=
\inf\left\{t>0:
v_{n+1}\big(x(t,\phi_0),y(t,\phi_1)\big)=0 \right\}
\qquad (\inf \emptyset \equiv \infty).
\end{align}
\end{lemma}
\begin{proof} Let us fix $(\phi_0,\phi_1)\in \mathbb{R}^2_+$, and denote
$r_n(\phi_0,\phi_1)$ by $r_n$. We have
$Jv_n(r_n,(\phi_0,\phi_1))= J_0 v_n
(\phi_0,\phi_1)=J_{r_n}v_n(\phi_0,\phi_1)$.
Suppose first that $r_n <\infty$. Since $J_0v_{n}= v_{n+1}$, taking
$t=r_n$ and $f=v_n$ in (\ref{lem:delay-equation}) gives
\begin{align*}
Jv_n(r_n,(\phi_0,\phi_1))
= J_{r_n} v_n (\phi_0,\phi_1) =
Jv_n(r_n,(\phi_0,\phi_1)) + e^{-(\lambda+\mu)r_n}
v_{n+1}(x(r_n,\phi_0),y(r_n,\phi_1)).
\end{align*}
Therefore, $v_{n+1}(x(r_n,\phi_0),y(r_n,\phi_1))=0$.
If $0< t < r_n$, then $Jv_n (t,(\phi_0,\phi_1))> J_0 v_n
(\phi_0,\phi_1) = J_{r_n}v_n(\phi_0,\phi_1) =
J_{t}v_n(\phi_0,\phi_1)$ since $u \mapsto J_u v_n (\phi_0,\phi_1)$
is nondecreasing. Taking $t\in (0,r_n)$ and $f=v_n$ in
(\ref{lem:delay-equation}) implies
\begin{align*}
J_{0} v_n (\phi_0,\phi_1) = J_{t} v_n (\phi_0,\phi_1)=
Jv_n(t,(\phi_0,\phi_1)) + e^{-(\lambda+\mu)t}
v_{n+1}(x(t,\phi_0),y(t,\phi_1)).
\end{align*}
Therefore, $v_{n+1}(x(t,\phi_0),y(t,\phi_1))<0$ for every $t\in
(0,r_n)$, and (\ref{eq:r_n-as-an-exit-time}) follows.
Suppose now that $r_n = \infty$. Then we have
$v_{n+1}(x(t,\phi_0),y(t,\phi_1))<0$ for every $t\in (0,\infty)$ by
the same argument in the last paragraph above. Hence, $\{t>0:
v_{n+1}(x(t,\phi_0),y(t,\phi_1))=0\} = \emptyset$, and
(\ref{eq:r_n-as-an-exit-time}) still holds.
\end{proof}
By Proposition~\ref{conoptst}, the set $\mathbf{\Gamma}$ is the
\emph{optimal stopping region} for the optimal stopping problem
(\ref{eq:value-function}). Namely, stopping at the first hitting
time $U_0 = \inf\{t\in \mathbb{R}_+: \widetilde{{\Phi}}_t \in \mathbf{\Gamma}\}$ of the
process $\widetilde{{\Phi}}=(\widetilde{\Phi}^{(0)},\widetilde{\Phi}^{(1)})$ to the set $\mathbf{\Gamma}$ is optimal
for (\ref{eq:value-function}).
Similarly, we shall call each set $\mathbf{\Gamma}_n$, $n\in \mathbb{N}$ a
\emph{stopping region} for the family of the optimal stopping
problems in (\ref{defnofVn}). However, unlike the case above, we
need the first $n$ stopping regions, $\mathbf{\Gamma}_1,\ldots,\mathbf{\Gamma}_n$,
in order to describe an optimal stopping time for the optimal
stopping problem in (\ref{defnofVn}) (the optimal stopping times
are not hitting times of a certain set). Using
Lemma~\ref{cor:r_n-as-an-exit-time}, the optimal stopping time
$S_n \equiv S^0_n$ in Proposition~\ref{thm:epsilon-optimal} for
$V_n$ of (\ref{defnofVn}) may be described as follows: Stop if the
process $\widetilde{{\Phi}}$ hits $\mathbf{\Gamma}_n$ before $X$ jumps. If $X$ jumps
before $\widetilde{{\Phi}}$ reaches $\mathbf{\Gamma}_n$, then wait, and stop if
$\widetilde{{\Phi}}$ hits $\mathbf{\Gamma}_{n-1}$ before the next jump of $X$, and so
on. If the rule is not met before the $(n-1)$st jump of $X$, then stop
at the earliest of the hitting time of $\mathbf{\Gamma}_1$ and the next
jump time of $X$.
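The recursive rule just described can be phrased as a small algorithm. The sketch below is a deliberately simplified toy: the two-dimensional statistic is replaced by a one-dimensional proxy that drifts upward at unit speed between jumps, the boundary of each $\mathbf{\Gamma}_k$ is collapsed to a scalar threshold, and the jump of the statistic at arrival times is ignored; none of these simplifications come from the paper.

```python
import math

def n_stage_stopping_time(s0, thresholds, jump_times):
    """Toy version of the n-stage rule: aim first at the smallest region
    Gamma_n (thresholds[-1]); after each jump of X fall back to the next,
    larger region; at the last stage stop at the earlier of hitting
    Gamma_1 (thresholds[0]) and the next jump time.  Between jumps the
    proxy statistic moves as s(t) = s0 + t."""
    stage = len(thresholds) - 1                   # index n-1 plays Gamma_n
    t, s_now = 0.0, s0
    jumps = iter(list(jump_times) + [math.inf])
    next_jump = next(jumps)
    while True:
        hit = t + max(thresholds[stage] - s_now, 0.0)  # hitting time of threshold
        if stage == 0:                            # last stage: Gamma_1 vs next jump
            return min(hit, next_jump)
        if hit <= next_jump:                      # reached Gamma_{stage+1} first
            return hit
        s_now += next_jump - t                    # a jump arrives first: advance...
        t = next_jump
        stage -= 1                                # ...and fall back one region
        next_jump = next(jumps)
```

For instance, with thresholds $(0.5, 1.0)$ and jump times $(0.3, 2.0)$, the rule first aims at level $1.0$, falls back to $0.5$ after the jump at time $0.3$, and stops at time $0.5$.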
\section{Conclusion}\label{sec:conclusion}
We have solved a change detection problem for a compound Poisson
process in which the intensity and the jump size distribution change at the
same time, with the intensity changing to a random variable with a
known distribution. This problem becomes an optimal stopping
problem for a Markovian sufficient statistic. We have analyzed a
special case of this problem, in which the rate of the arrivals
moves up to one of two possible values, and the Markovian
sufficient statistic is two-dimensional, in more detail. We have
shown that the intuition that a decision maker would sound the alarm
only at the times when it observes an arrival does not in general
hold, see Remark~\ref{rem:only-at-jump-times}. This intuition
becomes relevant only when the disorder intensity and delay
penalty are small. Performing a sample path analysis we have been
able to find the optimal stopping time exactly for most of the
range of parameters, and tight upper and lower bounds for the rest
of the parameter range. This work has applications in insurance
risk, in which the subject Poisson process can be viewed as the
claim arrivals process for an insurance company.
\section{Introduction}
\subsection{Motivation}
Recent progress in the joint program on quantum information and holography has uncovered striking connections between entanglement and spacetime. Arguably, the most exciting discovery in this context, and the one which ignited most of the research in this field, was the proposal of Ryu and Takayanagi that relates the entanglement entropy of a region $A$ in the boundary to the area of a minimal codimension-two bulk surface $\gamma_A$ \cite{Ryu:2006bv},
\begin{equation}\label{RyuT}
S_A={\frac1{4G_N}}\underset{\gamma_A\sim A}{\min}\left[\text{area}(\gamma_A)\right]\,.
\end{equation}
This formula was further generalized to a fully covariant setting in \cite{Hubeny:2007xt}
and proved formally in \cite{Lewkowycz:2013nqa,Dong:2016hjy} using the known entries of the holographic dictionary. The RT prescription (\ref{RyuT}) and its covariant version generalize in an elegant way the well-known Bekenstein-Hawking formula for black hole entropy and provide a natural way to interpret it directly in terms of a microscopic CFT description. Given its elegance and simplicity, entanglement entropy became a robust tool to investigate fundamental aspects in holography, ranging from the problem of bulk reconstruction \cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Myers:2014jia,Headrick:2014eia,Czech:2014ppa,Czech:2015qta,Dong:2016eik,Czech:2016xec,Cao:2016mst,Espindola:2017jil,Espindola:2018ozt,Roy:2018ehv,Faulkner:2018faa,Czech:2019hdd,Balasubramanian:2018uus,Bao:2019bib,Cao:2020uvb}, to the emergence and dynamics of spacetime \cite{Lashkari:2013koa,Faulkner:2013ica,Swingle:2014uza,Caceres:2016xjz,Czech:2016tqr,Faulkner:2017tkh,Haehl:2017sot,Rosso:2020zkk}.
Recently, Freedman and Headrick proposed an alternative way to compute entanglement entropy that does not rely on bulk surfaces, but instead, is phrased in terms of a specific flow maximization problem \cite{Freedman:2016zud}. More specifically, the new prescription states that
\begin{equation}\label{BitThreadReform}
S_A=\frac1{4G_N} \max_{v \in {\cal F}}\int_{A}\sqrt{h} \, n_\mu v^\mu\,, \qquad {\cal F}\equiv\{v \, \vert\, \nabla_\mu v^\mu=0,\, \abs{v}\leq 1\}\,,
\end{equation}
and can be shown to be equivalent to the RT formula through the continuous version of the max flow-min cut theorem of network theory. The maximization above is an example of a convex optimization program and, hence, the equivalence between (\ref{RyuT}) and (\ref{BitThreadReform}) can also be proved using techniques borrowed from convex optimization \cite{Headrick:2017ucz}. Soon after this paper appeared, it was realized that various geometric problems could likewise be translated to the realm of convex optimization leading to interesting new results \cite{Headrick:2018dlw,Headrick:2018ncs}. The connection with convex optimization has also helped uncover various properties of entanglement entropy from the bit thread perspective \cite{Cui:2018dyq}, as well as some generalizations and applications to other entanglement related quantities \cite{Chen:2018ywy,Harper:2018sdd,Du:2019emy,Bao:2019wcf,Harper:2019lff,Agon:2019qgh,Du:2019vwh}. A complementary approach that departs from the realm of convex optimization was put forward in \cite{Hubeny:2018bri,Agon:2018lwq} and studies aspects of bit threads and entanglement by considering explicit constructions of max flows. This is the line of work that we will mostly follow in this paper.
There is one crucial distinction between the two prescriptions to compute entanglement entropy that we believe deserves further investigation: while the minimal surface $\gamma_A$ is in most cases unique, the solution to the max flow problem $v$ is highly degenerate. More specifically, it can be shown that $v$ is uniquely determined only at the bulk bottle-neck $\gamma_A$, but is highly non-unique away from it. This non-uniqueness raises the question:
\vspace{2mm}
\noindent\fbox{
\parbox{0.975\textwidth}{
\centering
\emph{Out of the infinitely many thread configurations that could be associated with a boundary region, is there any meaningful separation or classification that could be associated with states of special ``entanglement classes'' in the dual field theory?}
}%
}
\vspace{2mm}
$\quad$\\
Intuitively, it would seem that this large degeneracy could indeed be associated to a choice of microstate (or a particular class of microstates) that give rise to the same amount of entanglement between the region $A$ and its complement,\footnote{The standard lore asserts that states with (semi)-classical bulk duals can only encode bipartite and perfect-tensor type entanglement, but no other form of multipartite entanglement (see however \cite{Akers:2019gcv}). Hence, the class of microstates that we have access to would be a reduced subset of the most general class of CFT microstates.} however, a precise version of this statement is not settled yet. On the other hand, one can try to exploit this non-uniqueness to gain new insights on the gravity side. The utility of the non-uniqueness property stems from the observation that, if a version of this statement is true (even if we do not know it yet), then a particular solution to the max flow problem $v$ could potentially carry more information than the minimal surface $\gamma_A$ itself: it could encode in detail how the local correlations between the degrees of freedom in the region $A$ and in its complement are distributed for a particular choice of microstate. If so, then, one could imagine that specific questions related to bulk reconstruction and the emergence of spacetime could be answered in a more efficient way by properly selecting a class of configurations/states adapted to the specific problem at hand.
In this paper we will give some steps in this direction. Specifically, our main objective is to understand how the program of \emph{gravitation from entanglement} \cite{Lashkari:2013koa,Faulkner:2013ica,Swingle:2014uza,Caceres:2016xjz,Czech:2016tqr,Faulkner:2017tkh,Haehl:2017sot,Rosso:2020zkk} unfolds in the language of bit threads and to explore an alternative way of metric reconstruction based on this framework. The particular questions that we want to address are the following:
\begin{itemize}
\item How are the metric and Einstein's equations encoded in generic thread configurations?
\item Can bulk locality be manifest in particular constructions?
\item Is it possible to reconstruct the bulk geometry from a max flow solution?
\item If so, how does the method compare to the ones based on RT surfaces?
\end{itemize}
Following \cite{Lashkari:2013koa,Faulkner:2013ica,Swingle:2014uza,Caceres:2016xjz,Czech:2016tqr,Faulkner:2017tkh,Haehl:2017sot,Rosso:2020zkk}, we begin by considering these questions in a perturbative setting in which we study small deformations continuously connected to a reference state. An important motivation for such a continuous construction comes from the study of the phase transition of RT surfaces that happens for disjoint regions as one varies their separation. It is known that, close to the phase transition, the RT surface can change from a connected to a disconnected configuration. Such jumps pose a puzzle for a potential quantum information interpretation of the RT surfaces from the bulk perspective, which is solved in the language of bit threads by imposing the additional property of being continuous across phase transitions \cite{Freedman:2016zud}. Continuity is, then, a desirable feature of bit threads under continuous deformations.
Before we study the above questions, let us review some of the features of the standard methods of metric reconstruction using RT surfaces \cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Myers:2014jia,Headrick:2014eia,Czech:2014ppa,Czech:2015qta,Dong:2016eik,Czech:2016xec,Cao:2016mst,Espindola:2017jil,Espindola:2018ozt,Roy:2018ehv,Faulkner:2018faa,Czech:2019hdd,Balasubramanian:2018uus,Bao:2019bib,Cao:2020uvb}, and explain potential advantages of studying this problem with bit threads. While there are other methods for bulk reconstruction, e.g. \cite{Engelhardt:2016wgb,Trevino:2017mik,Hernandez-Cuenca:2020ppu}, our comments and comparisons refer only to approaches that make explicit use of RT surfaces. Quite generally, if one hopes for a reconstruction of the metric everywhere in the bulk one must start with a sufficiently dense set of extremal surfaces that probe the full manifold $M$. This is in fact possible in some simple cases, at least for the subset of $M$ that can be foliated by boundary anchored extremal surfaces. For static $(2+1)-$dimensional bulk geometries this was achieved in \cite{Czech:2014ppa} starting from the full set of extremal surfaces associated with \emph{all} CFT intervals, and using ideas from hole-ography \cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Myers:2014jia,Headrick:2014eia}. More recently, it was shown that the same ideas could be extended to the time-dependent case in \cite{Czech:2019hdd} and to higher dimensions \cite{Balasubramanian:2018uus}, here by focusing on the subset of extremal surfaces associated with spherical regions (or topologically equivalent, in the approach of \cite{Bao:2019bib}).
The problem of metric reconstruction using bit threads has a major advantage over the ones described above: it does not rely on the ability of the manifold $M$ to admit foliations by boundary anchored extremal surfaces. In fact, threads can probe regions in the bulk that extremal surfaces cannot, such as entanglement shadows in the vicinity of (spherical) black hole horizons \cite{Agon:2018lwq}. It is important to point out that bulk shadows do not appear exclusively in cases where gravity is strong; one simple counterexample is the metric of a conical deficit geometry, which arises by the backreaction of a point particle in AdS \cite{Balasubramanian:2014sra}. Consequently, formulating the problem of metric reconstruction in the language of bit threads, even for the simpler case of perturbative states, is interesting in its own right. In particular, it will shed new light on the issue of emergence of spacetime from entanglement entropy \cite{VanRaamsdonk:2009ar,VanRaamsdonk:2010pw}, without resorting to other measures of entanglement such as \emph{entwinement} \cite{Balasubramanian:2014sra}.
Another important difference with respect to the problem of metric reconstruction using extremal surfaces is that the latter requires as a starting point the knowledge of a dense set of surfaces that probe the bulk geometry. While we can do the same in the language of bit threads, i.e., start from a \emph{dense set} of thread configurations, the fact that one single solution to the max flow problem already probes the full bulk geometry presents us with an interesting possibility: we can start from a \emph{finite set} of thread configurations, containing one, or possibly only a few solutions of the max flow problem. We will consider both approaches in this paper, and show that the explicit reconstruction is possible in both cases. In the remaining part of the introduction we will provide a quick guide to help navigate our paper and enumerate the most important findings of each section.
\subsection{Road map and summary}
We begin in section \ref{sec2} with a short discussion of various topics that we constantly refer to in our paper. Most of this material is
a review of previous works, covering known results about perturbations around AdS and the calculation of entanglement entropy,
both in the language of extremal surfaces and bit threads. We also include a short analysis of bit threads in perturbative excited states in
subsection \ref{pertstatesBT} which is new. The main message of this analysis is that, to leading order in the perturbation,
it is consistent to use the prescription (\ref{BitThreadReform}) on a constant-$t$ slice, even if the perturbation includes time dependence.
In section \ref{PBT} we study simple explicit realizations of max flows for bulk geometries that are perturbatively close to pure AdS. We begin in section \ref{generalities} by discussing some general properties about these max flows: the boundary condition at the minimal surface and how this condition is sufficient to encode the first law of entanglement entropy. We then proceed to study two particular constructions in subsections \ref{F1} and \ref{lsc}, respectively. The first method that we consider is a generalization of the geodesic method developed in \cite{Agon:2018lwq}. This method assumes a particular set of integral curves as a starting point, which we take to be the family of space-like geodesics that intersect normally the minimal surface. Given this assumption, one then determines the norm by imposing the divergenceless condition, implemented through the Gauss's law. We show that this construction works both for geodesics of the unperturbed and perturbed geometries under some mild assumptions. In subsection \ref{lsc} we study a slightly more general method. Here, our starting point is to propose a family of level set functions for the flow and then determine its norm based on the divergenceless condition, now implemented directly by solving a differential equation. The flows constructed via this method are a generalization of the maximally packed flows presented in \cite{Hubeny:2018bri,Agon:2018lwq}, where the level set functions are now arbitrary (not necessarily a nested set of minimal surfaces). This method is therefore fully non-perturbative and easily adapted to any boundary entangling region.
Importantly, both constructions presented in section \ref{PBT} assume as an input a solution to Einstein's equations in the bulk. Given an explicit metric one can determine the norm of the vector field from the divergenceless condition, which requires an integration from the minimal surface (where the norm is known) to the points of interest. Such integration generically introduces a nonlocal dependence on the background metric which renders these methods unsuitable for addressing questions of bulk reconstruction. However, this also suggests a way around it. More specifically, since the nonlocality is introduced in both cases through the implementation of the divergenceless constraint, it suggests that a construction that implements this condition in a background-independent way would be free of such nonlocalities, which is possible if one poses the question in the language of differential forms.
Motivated by the above observation we start subsection \ref{ssec:diff} by rewriting the bit thread framework in the language of differential forms. We study in detail the case of perturbative states and show, in subsections \ref{sec:IWgeneral} and \ref{5.2}, that the Iyer-Wald formalism provides a candidate for a thread perturbation which is explicitly local in the metric and furthermore connects the closedness condition with the linearized Einstein's equations.
Further, we explore the problem of metric reconstruction in subsection \ref{metric-reconstruction} and show that it can be cast in terms of the inversion of a particular differential operator. We provide explicit inversion formulas for the case of spherical entangling regions in two distinct scenarios: $i$) assuming knowledge of a dense set of forms parametrized by their radii and centers and $ii$) assuming knowledge of a finite set with a minimal number of forms. The second approach turns out to be very powerful; for instance, it suffices to have a single form to provide a full solution for the bulk metric in asymptotically AdS$_3$ and AdS$_4$ spaces, which we construct explicitly. We also show that the problem is well-posed in higher dimensions, starting with a carefully selected finite set. We end the section with a detailed analysis of how to recover the time components of the metric via boosts and translations of the space-like hypersurface on which the threads are defined, and a thorough discussion on how to generalize the reconstruction problem to higher orders in the perturbation.
\section{Preliminaries\label{sec2}}
In this section, we will start with a brief discussion of a number of topics that will be essential throughout the paper. We include this discussion for completeness. However, since most of this material is a review, it can be safely skipped by the cognoscenti.
\subsection{Linear perturbations around AdS}
Let us start by reviewing basic properties of linear perturbations around empty AdS. In Fefferman-Graham coordinates, any asymptotically AdS metric can be written as
\begin{equation}\label{FG-PT}
ds^2=\frac{1}{z^2}\(\eta_{\mu\nu}dx^\mu dx^\nu+dz^2\)+\delta g_{\mu\nu}(x^\sigma,z)dx^\mu dx^\nu\,,\qquad \delta g_{\mu\nu}(x^\sigma,z)\equiv z^{d-2} H_{\mu\nu}(x^\sigma,z)\,,
\end{equation}
where $x^\sigma$ are boundary coordinates and $z$ is the holographic coordinate. For concreteness we have assumed a Minkowski boundary geometry. With this parametrization, one can extract the expectation value of the stress-energy tensor
from the asymptotic form of the perturbation,
\begin{equation}\label{T=H}
\langle T_{\mu\nu}(x^\sigma) \rangle = \f{d}{16\pi G_N}H_{\mu\nu}(x^\sigma,z=0)\,.
\end{equation}
Plugging the above ansatz into the vacuum Einstein equations,
\begin{equation}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-\frac{d(d-1)}{2}g_{\mu\nu}=0\,,
\end{equation}
we obtain the following expressions for the $zz$, $z\mu$ and $\mu\nu$ components \cite{Lashkari:2013koa}:
\begin{equation}\label{perturbedeqns}
H^\mu\,\!_\mu=0\,,\qquad \partial_\mu H^{\mu\nu}=0\,,\qquad \frac{1}{z^{d+1}}\partial_z\(z^{d+1}\partial_z H_{\mu\nu} \)+\Box H_{\mu\nu}=0\,,
\end{equation}
respectively, where the box operator is the standard Laplace operator in Minkowski space, i.e., $\Box\equiv \partial_\mu\partial^\mu$. Alternatively, one can write down the perturbation as follows:
\begin{equation}\label{deltaGprop}
\delta g_{\mu\nu}(x,z)=\int d^dy\, G(y-x,z)T_{\mu\nu}(y)\,,
\end{equation}
where $G(x,z)$ is the Green's function of the graviton in empty AdS,
\begin{equation}
G(x,z)=\frac{16\pi G_N}{d}2^{d/2}\Gamma[d/2+1]\int \frac{d^dp}{(2\pi)^d}\theta(-p^2)\frac{z^{d/2}}{p^{d/2}}J_{d/2}(|p|z)e^{-ip\cdot x},\quad\,\,\, |p|\equiv\sqrt{|p_\mu p^\mu|}\,.
\end{equation}
A somewhat useful expression can be obtained by expanding $\delta g_{\mu\nu}$ in powers of $z$ \cite{Blanco:2013joa}:
\begin{equation}\label{pertg}
\delta g_{\mu\nu}=\frac{16\pi G_N}{d}z^{d-2}\sum_{n=0}^\infty z^{2n}T^{(n)}_{\mu\nu}\,.
\end{equation}
The strategy is to use the linear Einstein equations order by order in $z$ to determine $T^{(n)}_{\mu\nu}$ for $n >0$ in terms of the expectation value of the stress-energy tensor $T^{(0)}_{\mu\nu}$. A simple calculation shows that the $zz$ and $z\mu$ equations imply
\begin{equation}
T^{(n)}\,\!^\mu\,\!_\mu=0\,,\qquad \partial_\mu T^{(n)}\,\!^{\mu\nu}=0\,,
\end{equation}
so $T^{(n)}_{\mu\nu}$ is traceless and conserved for all $n$. Finally, the $\mu\nu$ equations imply that
\begin{equation}\label{Tnexp}
T^{(n)}_{\mu\nu}=-\frac{\Box T^{(n-1)}_{\mu\nu}}{2n(d+2n)}=\frac{(-1)^n \Gamma[d/2+1]}{2^{2n}n!\Gamma[d/2+n+1]}\Box^n T^{(0)}_{\mu\nu}\,.
\end{equation}
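As a quick consistency check, the closed-form coefficients $c_n\equiv(-1)^n\,\Gamma[d/2+1]/\big(2^{2n}\,n!\,\Gamma[d/2+n+1]\big)$ indeed satisfy the recursion, since
\begin{equation}
\frac{c_n}{c_{n-1}}=-\frac{2^{2(n-1)}\,(n-1)!\,\Gamma[d/2+n]}{2^{2n}\,n!\,\Gamma[d/2+n+1]}=-\frac{1}{4n\(d/2+n\)}=-\frac{1}{2n(d+2n)}\,.
\end{equation}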
It is convenient to go to momentum space,
\begin{equation}
T^{(0)}_{\mu\nu}(x)=\int \frac{d^dp}{(2\pi)^d}\, e^{-ip\cdot x} T_{\mu\nu}(p)\,,
\end{equation}
where we can perform the sum
\begin{equation}
\sum_{n=0}^\infty\[\frac{1}{n!\Gamma[d/2+n+1]}\(\frac{|p|z}{2}\)^{2n+d/2}\]=I_{d/2}(|p|z)\,,
\end{equation}
if $p$ is space-like. For time-like momenta $p$ it gives instead $J_{d/2}(|p|z)$, recovering (\ref{deltaGprop}).
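The resummation above can also be verified numerically. The following snippet (a sketch of ours, with the illustrative sample values $d=3$ and $|p|z=13/10$) compares a truncated partial sum of the series against $I_{d/2}$ using sympy:

```python
import sympy as sp

# Numerical sanity check of the series identity
#   sum_{n>=0} (x/2)^{2n+d/2} / (n! Gamma(d/2+n+1)) = I_{d/2}(x),
# for the sample values d = 3 and x = |p| z = 13/10 (illustrative choices).
d = 3
x = sp.Rational(13, 10)
nu = sp.Rational(d, 2)

# truncated partial sum (the terms decay factorially, so 20 terms suffice)
partial = sum((x / 2)**(2*n + nu) / (sp.factorial(n) * sp.gamma(nu + n + 1))
              for n in range(20))

bessel = sp.besseli(nu, x)
diff = abs(float(partial) - float(bessel))
print(diff < 1e-12)  # True
```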
\subsubsection*{Perturbations in three-dimensional geometries\label{sec:2dpert}}
For $d=2$ there is a crucial simplification: the last term in the last equation of (\ref{perturbedeqns}) is absent! The reason is that for $d=2$, the first two equations (vanishing of the trace and conservation equations) imply that $\Box H_{\mu\nu}=0$. With this simplification, the last equation implies that
\begin{equation}
\partial_z H_{\mu\nu}=\frac{C_{\mu\nu}}{z^{d+1}}\,,
\end{equation}
where $\partial_z C_{\mu\nu}=0$. Moreover, since $H_{\mu\nu}$ must be finite at $z=0$, the only possibility is that $C_{\mu\nu}=0$. This means that only the $n=0$ term in (\ref{pertg}) survives, while all other higher order terms vanish. This can also be seen from the recursive formula (\ref{Tnexp}): for $d=2$ we have that $\Box T^{(0)}_{\mu\nu}=0$, therefore all $n\geq1$ terms vanish!
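For completeness, let us spell out why tracelessness and conservation force $\Box H_{\mu\nu}=0$ when $d=2$. Tracelessness with respect to $\eta_{\mu\nu}={\rm diag}(-1,1)$ sets $H_{tt}=H_{xx}\equiv h$, and writing $H_{tx}\equiv k$, the conservation equations $\partial_\mu H^{\mu\nu}=0$ reduce to
\begin{equation}
\partial_t h=\partial_x k\,,\qquad \partial_t k=\partial_x h\qquad\Longrightarrow\qquad \partial_t^2 h=\partial_t\partial_x k=\partial_x^2 h\,,
\end{equation}
so $\Box h=0$, and the same manipulation gives $\Box k=0$, i.e., $\Box H_{\mu\nu}=0$ as claimed.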
The above analysis implies that, to linear order in the perturbation, the general solution for the metric in $d=2$ is given by:
\begin{equation}\label{deltag2}
\delta g_{\mu\nu}(x^\sigma,z)=\frac{16\pi G_N}{d}T_{\mu\nu}(x^\sigma)\,.
\end{equation}
Since the stress tensor should be traceless and conserved, the general form it can take is the sum of right-moving and left-moving waves,
\begin{equation}
T_{\mu\nu}(t,x)=f(t-x)\left(
\begin{array}{cc}
1 & -1 \\
-1 & 1 \\
\end{array}
\right)+g(t+x)\left(
\begin{array}{cc}
1 & 1 \\
1 & 1 \\
\end{array}
\right)\,.
\end{equation}
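As a cross-check (a short symbolic sketch of ours, taking mostly-plus signature for the boundary metric), one can verify that this ansatz is traceless and conserved for arbitrary profiles $f$ and $g$:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')
g = sp.Function('g')

# stress tensor components from the wave ansatz, ordering (t, x)
Ttt = f(t - x) + g(t + x)
Ttx = -f(t - x) + g(t + x)
Txx = f(t - x) + g(t + x)

# mostly-plus signature: d^mu T_{mu nu} = -d_t T_{t nu} + d_x T_{x nu}
c0 = sp.simplify(-sp.diff(Ttt, t) + sp.diff(Ttx, x))   # nu = t component
c1 = sp.simplify(-sp.diff(Ttx, t) + sp.diff(Txx, x))   # nu = x component
trace = sp.simplify(-Ttt + Txx)                        # eta^{mu nu} T_{mu nu}

print(c0, c1, trace)  # 0 0 0
```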
Specific examples can be obtained by specifying the profiles of $f(t-x)$ and $g(t+x)$. In Appendix \ref{app:examples} we will explore in detail the case corresponding to a local quench state.
\subsection{Linear corrections to entanglement entropy}
Entanglement entropy can be computed via the RT formula \cite{Ryu:2006bv},
\begin{equation}\label{hrt}
S_A = \frac{1}{4 G_N}\underset{\gamma_A \sim A}{\min} \left[ \text{area} \left(\gamma_A \right)\right] \,,
\end{equation}
or its covariant HRT version \cite{Hubeny:2007xt}, where the minimality condition is replaced by extremality,
\begin{equation}
S_A = \frac{1}{4 G_N}\underset{\gamma_A \sim A}{\text{ext}} \left[ \text{area} \left(\gamma_A \right)\right] \,.
\end{equation}
We are interested in computing the leading correction to entanglement entropy, assuming that the geometry is a small perturbation over AdS.
At linear order in the expansion parameter, $\lambda$, entanglement entropy can in principle receive two types of contributions. To see this we can expand the area functional $\mathcal{L}$ and embedding functions $\phi(\xi^i)$ parametrizing the codimension-two surface $\gamma_A$ as follows,
\begin{equation}
\begin{split}
\mathcal{L}[\phi(\xi^i)]&=\mathcal{L}^{(0)}[\phi(\xi^i)]+\lambda\mathcal{L}^{(1)}[\phi(\xi^i)]+\mathcal{O}(\lambda^2)\,,\\
\phi(\xi^i)&=\phi^{(0)}(\xi^i)+\lambda \phi^{(1)}(\xi^i)+\mathcal{O}(\lambda^2)\,,\label{pertexps}
\end{split}
\end{equation}
where $\xi^i$ are coordinates describing the surface. Thus, on the one hand, we have corrections due to the change in the geometry, while on the other hand, we have corrections to the surface itself.
However, after evaluating $\mathcal{L}[\phi(\xi^i)]$ on-shell, only one term survives at linear order in $\lambda$,
\begin{equation}\label{arealambdaonshell}
\begin{split}
&\delta S_A=\lambda\int d^{d-1}\xi\,\mathcal{L}^{(1)}[\phi^{(0)}(\xi^i)]+\lambda\int d^{d-1}\xi\, \phi^{(1)}(\xi^i)\left[\cancel{\frac{d}{d\xi^i}\frac{\partial\mathcal{L}^{(0)}}{\partial \phi'(\xi^i)}-\frac{\partial\mathcal{L}^{(0)}}{\partial \phi(\xi^i)}}\right]_{\phi^{(0)}}\!\!\!+\mathcal{O}(\lambda^2)\,.
\end{split}
\end{equation}
This means that at this order, the embedding $\phi(\xi^i)$ can be taken to be the (unperturbed) embedding in pure AdS. This is a useful property, because there are many exact solutions for the embedding functions of various regions in empty AdS. For our purposes, it will suffice to recall the explicit embedding for spheres in empty AdS in Poincar\'e coordinates,
\begin{equation}
r^2+z^2=R^2\,.
\end{equation}
We will make use of this expression in later sections when we discuss concrete realizations of perturbative bit threads.
\subsection{Bit threads in dynamical scenarios}
The original formulation of bit threads \cite{Freedman:2016zud} is equivalent to the (non-covariant) RT formula \cite{Ryu:2006bv}, equation (\ref{hrt}), so it only applies to situations with time reflection symmetry (e.g. spatial regions in static spacetimes). In this section we will explain one way to extend this prescription to fully dynamical cases and show that the formulation of \cite{Freedman:2016zud} extends straightforwardly to the case of perturbative excited states.
One way to include time dependence is by using the maximin reformulation of HRT \cite{Wall:2012uf}. To do so, we pick a particular Cauchy surface $\Sigma$ that contains the boundary of the region, $\partial A$, perform the area minimization on it, and then maximize over all possible $\Sigma$. We can then use the standard bit thread prescription for each Cauchy surface $\Sigma$ by maximizing the flux through the boundary region $\Sigma \cap \mathcal{D}[A]$\footnote{$\mathcal{D}[A]$ is the boundary domain of dependence of region $A$.} and then maximizing over all Cauchy surfaces:
\begin{equation}\label{HRTBitThreadReform}
S_A={\frac1{4G_N}}\max_{\Sigma \supset \partial A}\underset{\underset{\gamma_A\subset\Sigma}{\gamma_A\sim A}}{\min}\left[\text{area}(\gamma_A)\right]\quad \iff \quad S_A=\frac1{4G_N}\max_{\Sigma \supset \partial A} \max_{v \in {\cal F}_\Sigma}\int_{\Sigma \cap \mathcal{D}[A]}\!\!\!\!\!\!\!\!\!\!\!\!\! \sqrt{h} \, n_\mu v^\mu\,,
\end{equation}
where
\begin{equation}
{\cal F}_\Sigma\equiv\{v\in \mathfrak{X}(\Sigma) \, \vert\, \nabla_\mu v^\mu=0,\, \abs{v}\leq 1\}\,.
\end{equation}
Here, $\mathfrak{X}(\Sigma)$ is the space of vector fields on $\Sigma$. We note that this formula was recently studied in the context of the membrane theory \cite{Agon:2019qgh}. There also exists a fully covariant bit thread version of the correspondence \cite{Headrick:toappear}, but we will not use it in this paper.
A solution to the maximin prescription given by the left-hand side of (\ref{HRTBitThreadReform}) consists of a codimension-two surface $\gamma_A$ that solves the two optimization steps. Such a solution would naturally be accompanied by a specific choice of a codimension-one hypersurface $\Sigma$ on which $\gamma_A$ is a minimal surface. However, in \cite{Wall:2012uf} it was shown that such $\Sigma$ is highly non-unique away from the maximin surface $\gamma_A$. This fact was used in \cite{Wall:2012uf} to argue that one could pick a particular $\Sigma$ that simultaneously contains the maximin surfaces of various disjoint boundary regions required to prove the strong subadditivity property of holographic entanglement entropy. Below, we will use this freedom to argue that to first order in a general time-dependent perturbation of a static metric, one can always choose $\Sigma$ to be the constant-$t$ hypersurface associated with the unperturbed metric $\Sigma_0$, or, more generally, any space-like surface $\Sigma_\lambda$ that is perturbatively close to it and passes through $\gamma_A$.
\subsubsection{The case of perturbative excited states\label{pertstatesBT}}
Even though the choice of $\Sigma$ is highly non-unique, it can be shown that not every slice that passes through $\gamma_A$ is a good one. The reason is that $\gamma_A$ is not necessarily minimal on every such slice $\Sigma$. To see this, consider a null congruence shot out from $\gamma_A$. The surface $\gamma_A$ is extremal, hence, its expansion vanishes: $\theta=0$. However, the Raychaudhuri equation implies that $d\theta/d\lambda<0$ \cite{Wall:2012uf}. This means that in this case $\gamma_A$ is a local maximum of area rather than a minimum and, by continuity, the same should hold for space-like surfaces $\Sigma$ that are close enough to the null congruence. In the left panel of Figure \ref{fig:btslices} we give an example to illustrate this fact. Notice that on such a slice $\Sigma$ the minimal area surface $\tilde{\gamma}_A$ is not the same as the extremal surface $\gamma_A$. Therefore, finding a max flow in $\Sigma$ is not equivalent to computing the entanglement entropy of region $A$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=5cm]{BTSlices0.pdf}
\hspace*{2cm}
\includegraphics[width=5cm]{BTSlices.pdf}
\setlength{\unitlength}{1cm}
\begin{picture}(0,0)
\put(-12.55,5.35){\scriptsize$\partial M$}
\put(-11.85,3.8){\scriptsize$\mathcal{D}[A]$}
\put(-9.55,3.2){\scriptsize$\gamma_A$}
\put(-10.5,2.3){\scriptsize$\tilde{\gamma}_A$}
\put(-7.6,5.5){\scriptsize$\Sigma$}
\put(-7.6,3.8){\scriptsize$\Sigma_0$}
\put(-5.3,5.35){\scriptsize$\partial M$}
\put(-4.6,3.8){\scriptsize$\mathcal{D}[A]$}
\put(-2.3,3.2){\scriptsize$\gamma_A$}
\put(-0.35,4.25){\scriptsize$\Sigma_\lambda$}
\put(-0.35,3.8){\scriptsize$\Sigma_0$}
\end{picture}
\end{center}
\vspace{-0.5cm}
\caption{\small A solution to the maximin problem $\gamma_A$ is naturally accompanied by a specific choice of a codimension-one slice $\Sigma$ on which $\gamma_A$ is a minimal area surface. Such a slice is highly non-unique; however, not all slices that pass through $\gamma_A$ are allowed. Left: in this example $\Sigma$ is perturbatively close to the null congruence shot out from $\gamma_A$. In this case the minimal area surface $\tilde{\gamma}_A$ (orange curve) does not coincide with $\gamma_A$ (red curve). Right: for perturbative excited states, it can be shown that $\gamma_A$ is a minimal area surface on any slice $\Sigma_\lambda$ that is perturbatively close to $\Sigma_0$. This means that we can pick any of these surfaces, and in particular $\Sigma_0$, to construct relevant bit thread configurations.}
\label{fig:btslices}
\end{figure}
For the case of perturbative excited states, a natural candidate for a good Cauchy slice would be a slice $\Sigma_\lambda$ that is perturbatively close to the $t=t_0$ hypersurface associated with the unperturbed metric, $\Sigma_0$. We can parametrize such a slice as $t=t_0+\lambda\, \delta t(z,\vec{x})$, with the constraint that $\delta t(z,\vec{x})$ must vanish at $\gamma_A$. The question here is whether we can find surfaces on $\Sigma_\lambda$ that are homologous to $A$ but have smaller area than $\gamma_A$ at order $\lambda$. Supposing there are such surfaces, we denote by $\tilde{\gamma}^\lambda_A$ the one with the minimal area. However, we know that $\gamma_A$ is a minimal area surface in the unperturbed background; therefore, by continuity we know that $\tilde{\gamma}_A^\lambda\to \gamma_A$ as $\lambda\to0$. Without loss of generality we can then parametrize such a surface with embedding functions as in (\ref{pertexps}). On the other hand, the calculation in (\ref{arealambdaonshell}) shows that corrections to the embedding do not affect the area at linear order. This means that $\text{area}(\tilde{\gamma}^\lambda_A)=\text{area}(\gamma_A)+\mathcal{O}(\lambda^2)$, so we can conclude that $\gamma_A$ \emph{is} a minimal area surface on any $\Sigma_\lambda$ perturbatively close to $\Sigma_0$. We illustrate this result in the right panel of Figure \ref{fig:btslices}. This also implies that on any of these surfaces, and in particular on $\Sigma_0$, the solution to the max flow problem computes the entanglement entropy of region $A$, and hence all of them are equally good for the construction of bit thread configurations.
\section{Simple realizations of perturbative bit threads \label{PBT}}
Given the enormous simplification that happens at $\mathcal{O}(\lambda)$ from the point of view of the HRT prescription,
we would like to study and understand the general properties of perturbative thread configurations based on the constructions developed in \cite{Agon:2018lwq}.
We will start by stating simple constraints that the $\mathcal{O}(\lambda)$ HRT surfaces induce on general bit threads, and then proceed with the specific constructions. We will show that these methods lead to thread configurations that successfully encode general properties of the CFT state and the bulk geometry, such as the first law of entanglement entropy and its relation to the (linearized) Einstein's equations, albeit in a highly nonlocal form.
Along the way, we will state the precise problem of metric reconstruction that we look to solve and enumerate the challenges that these simple constructions face, leading to a quest for a new method that exploits
bulk locality in a more explicit way.
\subsection{Generalities\label{generalities}}
Let us begin by considering empty AdS$_{d+1}$ in spherical coordinates. The geometry of a constant-$t$ slice $\Sigma$ is given by
\begin{equation}
ds_{\Sigma}^2=\frac{1}{z^2}\(dr^2+r^2d \Omega^2_{d-2}+dz^2\)\,.
\end{equation}
The minimal surface $\gamma_A$ for a ball of radius $R$ is given implicitly by
\begin{equation}\label{emb2d}
r^2+z^2=R^2\,,
\end{equation}
and its outward-pointing unit normal vector $\hat{n}$ at a point $(r,z)$ on the minimal surface is:
\begin{equation}
\hat{n}^{a}=\frac{z}{R}\(r,z\)\,.
\end{equation}
For simplicity we have omitted the angular coordinates, since both the minimal surface and the state are invariant under rotations.
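Indeed, with the slice metric above one readily checks that $\hat{n}$ has unit norm on the minimal surface:
\begin{equation}
|\hat{n}|^2=\frac{1}{z^2}\cdot\frac{z^2}{R^2}\(r^2+z^2\)=\frac{r^2+z^2}{R^2}=1\,,
\end{equation}
where we used (\ref{emb2d}) in the last step.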
A simple realization of a vector field/thread configuration, $v=|v|\hat{\tau}$, based on geodesics is given by (see \cite{Agon:2018lwq} for details)
\begin{equation}\label{Vec1-d}
v^a=\(\frac{2Rz}{\sqrt{(R^2+r^2+z^2)^2-4R^2r^2}}\)^{d}\(\frac{r z}{R}\, , \frac{R^2-r^2+z^2}{2R} \)\,,
\end{equation}
with
\begin{eqnarray}
&&|v|=\(\frac{2Rz}{\sqrt{(R^2+r^2+z^2)^2-4R^2r^2}}\)^{d-1}\!\!\!\!\!\!\,,\label{Vec23-d}\\
&&\hat{\tau}^a =\frac{2Rz}{\sqrt{(R^2+r^2+z^2)^2-4R^2r^2}}\( \frac{r z}{R}\, ,\, \frac{R^2-r^2+z^2}{2R}\)\,.
\end{eqnarray}
As a check, notice that this vector field $(i)$ satisfies the divergenceless condition $\nabla \cdot v=0$ and $(ii)$ is equal to $\hat{n}$ at the location of the minimal surface $v|_{\gamma_A}=\hat{n}$. Combining these two, it immediately follows that the flux along any bulk surface $\Gamma_A$ homologous to $A$ (not necessarily the minimal surface $\gamma_A$) yields the entanglement entropy of the ball (in units of $4G_N$),
\begin{equation}
S_A=\frac{1}{4G_N}\int v \cdot dS_{\Gamma_A}= \frac{1}{4G_N}\int \hat{n} \cdot dS_{\gamma_A} = \frac{1}{4 G_N}{\rm min} \left[ \text{area} \left(\gamma_A \right)\right]\,.
\end{equation}
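Both properties $(i)$ and $(ii)$ can be checked symbolically. The following sketch (ours, specializing to $d=2$, i.e., a two-dimensional constant-$t$ slice of AdS$_3$ with coordinates $(r,z)$) verifies the divergenceless condition and the unit norm on the minimal surface:

```python
import sympy as sp

# Symbolic check of the geodesic flow (Vec1-d) for d = 2, i.e. on a
# constant-t slice of AdS_3 with metric ds^2 = (dr^2 + dz^2)/z^2.
r, z, R = sp.symbols('r z R', positive=True)

Xi2 = (R**2 + r**2 + z**2)**2 - 4*R**2*r**2   # the square root in (Vec1-d), squared
pref = (2*R*z)**2 / Xi2                       # (2 R z / Xi)^d with d = 2
vr = pref * r * z / R                         # v^r
vz = pref * (R**2 - r**2 + z**2) / (2*R)      # v^z

# (i) divergenceless: nabla.v = z^2 [ d_r(v^r/z^2) + d_z(v^z/z^2) ], since sqrt(g) = 1/z^2
div = sp.simplify(z**2 * (sp.diff(vr / z**2, r) + sp.diff(vz / z**2, z)))
print(div)  # 0

# (ii) unit norm on the minimal surface r^2 + z^2 = R^2: |v|^2 = (v^r^2 + v^z^2)/z^2
norm2 = (vr**2 + vz**2) / z**2
norm2_on_rt = sp.simplify(norm2.subs(z, sp.sqrt(R**2 - r**2)))
print(norm2_on_rt)  # 1
```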
We emphasize that while the minimal surface $\gamma_A$ is in most cases \emph{unique}, the choice of vector field $v$ is highly \emph{non-unique}; it is uniquely determined only at the bottle-neck $\gamma_A$.
Next, we would like to find the perturbed vector field in a perturbatively excited state, i.e., a state with bulk metric $g^{\lambda}_{\mu\nu}=g_{\mu\nu}+\lambda \delta g_{\mu\nu}+\mathcal{O}(\lambda^2)$ (satisfying Einstein's equations):
\begin{equation}\label{eq:vlambda}
v_\lambda=v+\lambda \delta v+\mathcal{O}(\lambda^2)\,,
\end{equation}
at linear order in $\lambda$. While the perturbation in the vector field $\delta v$ is on its own highly non-unique, any consistent realization must satisfy some nontrivial properties, including the first law of entanglement entropy in the CFT and the linearized Einstein's equations in the bulk. The problem that we want to address is the following:
\vspace{2mm}
\noindent\fbox{
\parbox{0.975\textwidth}{
\centering
\emph{Given a consistent thread configuration for an excited state $v_\lambda$, is it possible to reconstruct locally the bulk geometry at the same order in the perturbation?}
}%
}
\vspace{2mm}
A couple of comments are in order. First, note that we are focusing on excited states. While it is true that the same question would make sense even in the vacuum state, we recall that the bulk metric in this case is fixed by symmetries, rendering the problem exceptionally simple. Second, the non-uniqueness of $v_\lambda$ for a given metric indicates that the correspondence is not one-to-one. Even if we isolate a family of thread configurations that follow from the same bulk metric, the way they encode this information may be non-unique and, generically, highly nonlocal. In the following, we will identify basic constraints that generic realizations of $v_\lambda$ must satisfy and then, study how the particular constructions of \cite{Agon:2018lwq} encode the information about the bulk metric.
\subsubsection{Boundary conditions for the perturbed threads}
In order to find a solution $v=|v|\hat{\tau}$ for a thread configuration, we need to solve the divergenceless condition $\nabla\cdot v=0$ subject to the norm bound $|v|\leq1$. One way to proceed is to use the fact that the norm bound is saturated $|v|=1$ at the bottle-neck $\gamma_A$. In other words, we need to impose that at the minimal surface $\gamma_A$, $v$ is equal to its unit normal,
\begin{equation}\label{bcRT}
v^a_\lambda|_{\gamma_A}=\hat{n}^a\,.
\end{equation}
Notice that this does not uniquely determine the vector field everywhere in the bulk; intuitively, the ambiguity of the thread configuration away from $\gamma_A$ corresponds to a choice of microstate in the dual CFT, such that all the macroscopic properties of the system are satisfied, including the entanglement entropy $S_A$.
Let us now determine what (\ref{bcRT}) looks like in the perturbed geometry. Fortunately, at linear order in the perturbation the RT surfaces are unchanged and we can use this to our advantage. This implies that at this order, the change in the normal vector is only induced by the change in the geometry. To see this, consider
the metric on a constant-$t$ slice $\Sigma$ of the perturbed geometry\footnote{The indices $(a,b)$ here run over the space coordinates $x^a=\{x^i,z\}$. }
\begin{equation}
ds^{2}_{\Sigma}=g^{\lambda}_{ab}dx^adx^b=(g_{ab}+\lambda\delta g_{ab})dx^adx^b\,,
\end{equation}
where
\begin{equation}
g_{ab}=\frac{1}{z^2}\left(
\begin{array}{cc}
\delta_{ij} & 0 \\
0 & 1 \\
\end{array}
\right)\,,\qquad \delta g_{ab}=z^{d-2}\left(
\begin{array}{cc}
H_{ij} & 0 \\
0 & 0 \\
\end{array}
\right)\,.
\end{equation}
We will keep the $\lambda$'s explicitly throughout our calculations as a bookkeeping device (to count the order of the perturbations), but at the end we will set it to unity. Also, for future reference, we give an explicit expression for the inverse metric at linear order in $\lambda$,
\begin{equation}
g_\lambda^{ab}=g^{ab}+\lambda\delta g^{ab}\,,
\end{equation}
where:
\begin{equation}
g^{ab}=z^2\left(
\begin{array}{cc}
\delta_{ij} & 0 \\
0 & 1 \\
\end{array}
\right)\,,\qquad \delta g^{ab}=-z^{d+2}\left(
\begin{array}{cc}
H_{ij} & 0 \\
0 & 0 \\
\end{array}
\right)\,.
\end{equation}
As explained in the previous section, the embedding function (\ref{emb2d}) is not corrected at this order. Therefore its normal covector $\hat{n}_a$ remains the same, up to an overall constant $N$,
\begin{equation}
\hat{n}_a=\frac{N}{Rz}x_a\,.
\end{equation}
Ensuring that $\hat{n}$ is properly normalized to one, we find that at linear order in $\lambda$:
\begin{equation}
N=1+\frac{\lambda z^2}{2R^2}\delta g_{ab}x^ax^b\,.
\end{equation}
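As a quick consistency check, one can verify with a computer algebra system that this $N$ indeed normalizes $\hat{n}$ at linear order on $\gamma_A$. The sketch below specializes to $d=2$, where $\delta g_{xx}=H$ and $\delta g^{xx}=-z^4H$ (with $H\equiv H_{xx}$ treated as a constant at the evaluation point); \texttt{sympy} is assumed to be available:

```python
import sympy as sp

x, z, R, H, lam = sp.symbols('x z R H lam', positive=True)

# d=2 data: delta g_{xx} = H (lower indices), so delta g^{xx} = -z^4 H,
# obtained by raising indices with g^{ab} = z^2 delta^{ab}
N = 1 + lam*z**2*(H*x**2)/(2*R**2)            # proposed normalization factor
# |n|^2 = g_lambda^{ab} n_a n_b, with n_a = (N/(R z)) x_a and x_a = (x, z)
norm2 = (N/(R*z))**2 * (z**2*(x**2 + z**2) - lam*z**4*H*x**2)

# restrict to the minimal surface x^2 + z^2 = R^2
norm2 = norm2.subs(z, sp.sqrt(R**2 - x**2))
order0 = sp.simplify(norm2.subs(lam, 0))
order1 = sp.simplify(sp.diff(norm2, lam).subs(lam, 0))
```

The output confirms $|\hat{n}|^2=1+\mathcal{O}(\lambda^2)$ on the minimal surface, as required.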
Finally, raising the index with the inverse metric we find that
\begin{eqnarray}
\hat{n}^a&=&\frac{1}{Rz}\left(N g^{ab}x_b+\lambda \delta g^{ab} x_b\right)\,,\nonumber \\
&=&\frac{z}{R}x^a+\lambda\left(\frac{z\delta g_{cd}x^cx^d g^{ab}}{2R^3}+\frac{\delta g^{ab}}{Rz}\right)x_b\,.
\end{eqnarray}
For example, in $d=2$, we find that:
\begin{equation}
\hat{n}^a=\frac{z}{R}\(x,z\)-\frac{\lambda x z^3 H(t,x)}{2 R^3}\left(x^2+2 z^2,-x z\right)\,, \quad H(t,x)\equiv H_{xx}(t,x)\,.
\end{equation}
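One can also check symbolically that this $d=2$ expression follows from the general formula above once restricted to the minimal surface $x^2+z^2=R^2$ (a sketch, assuming \texttt{sympy}):

```python
import sympy as sp

x, z, R, H, lam = sp.symbols('x z R H lam', positive=True)

xvec = sp.Matrix([x, z])                      # coordinate tuple (x^a and x_a)
g_inv = z**2 * sp.eye(2)                      # g^{ab}
dg_low = sp.Matrix([[H, 0], [0, 0]])          # delta g_{ab} (d=2: z^{d-2} = 1)
dg_up = -z**4 * dg_low                        # delta g^{ab} = -g^{ac} dg_{cd} g^{db}

# general formula for the corrected unit normal with upper index
contr = (xvec.T * dg_low * xvec)[0]           # delta g_{cd} x^c x^d
n_general = (z/R)*xvec + lam*((z*contr/(2*R**3))*g_inv + dg_up/(R*z))*xvec

# explicit d=2 expression quoted in the text
n_explicit = (z/R)*sp.Matrix([x, z]) \
    - lam*x*z**3*H/(2*R**3)*sp.Matrix([x**2 + 2*z**2, -x*z])

diff = (n_general - n_explicit).subs(z, sp.sqrt(R**2 - x**2))
diff = diff.applyfunc(sp.simplify)            # zero vector on gamma_A
```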
For $d\geq3$ one obtains similar but more long-winded expressions, which for the sake of simplicity we will not transcribe here.
Finally, from (\ref{bcRT}) we find that at linear order in $\lambda$, our boundary condition at the bottle-neck $\gamma_A$ is:
\begin{equation}\label{bcVector}\boxed{
v^a_\lambda|_{\gamma_A}=\frac{z}{R}x^a+\lambda\left(\frac{z\delta g_{cd}x^cx^d g^{ab}}{2R^3}+\frac{\delta g^{ab}}{Rz}\right)x_b\,.}
\end{equation}
We emphasize that this condition does not uniquely determine $v_\lambda$ in the bulk, especially in regions far away from $\gamma_A$ where $v_\lambda$ is highly non-unique.
\subsubsection{First law of entanglement entropy}
Since $v_\lambda$ is divergenceless, the flux across any bulk surface homologous to $A$ is constant. Hence, the boundary condition (\ref{bcVector}) should be enough to
demonstrate the first law of entanglement, provided we pick $\gamma_A$ itself as our homologous region.
To illustrate this, we can perform a simple analysis in $d=2$ dimensions. The area element $dS_{\gamma_A}$ in this case is given by
\begin{eqnarray}
dS_{\gamma_A}=\hat{n} ds_{\gamma_A}&=&\hat{n} \frac{dx}{z(x)}\sqrt{1+\lambda z^2 H_{xx}(t,x,z(x))+\mathcal{O}(\lambda^2)+z'(x)^2}\,,\\
&=&\hat{n} dx \[\frac{R}{R^2-x^2}+\lambda\frac{ (R^2-x^2) }{2 R}H_{xx}\(t,x,z(x)\)+\mathcal{O}(\lambda^2)\]\,.
\end{eqnarray}
The order $\mathcal{O}(\lambda)$ term gives the change in entanglement entropy,
\begin{equation}
\delta S_A=\frac{1}{4G_N}\int \hat{n} \cdot dS_{\gamma_A}=\frac{1}{4G_N}\int dx\,\frac{ (R^2-x^2) }{2 R}H_{xx}\(t,x,z(x)\)\,.
\end{equation}
Finally, according to (\ref{pertg}) we can expand $H_{xx}(t,x,z(x))$ as
\begin{equation}
H_{xx}(t,x,z(x))=8\pi G_N \sum_{n=0}^\infty z(x)^{2n}T^{(n)}_{xx}(t,x)\,.
\end{equation}
However, as emphasized in the previous section, for $d=2$ only the $n=0$ survives. By the traceless condition we know that $T^{(0)}_{xx}(t,x)=T^{(0)}_{00}(t,x)=T_{00}(t,x)$, so we arrive at the first law of entanglement entropy with the right modular Hamiltonian in 2d \cite{Faulkner:2013ica}
\begin{equation}\label{1stlawEE}
\delta S_A=2\pi \int_{-R}^R dx\,\frac{ (R^2-x^2) }{2 R}T_{00}(t,x)\,.
\end{equation}
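As a simple sanity check of (\ref{1stlawEE}), a spatially constant energy density $T_{00}=\varepsilon$ (a hypothetical profile, chosen purely for illustration) gives $\delta S_A=4\pi\varepsilon R^2/3$:

```python
import sympy as sp

x, R, eps = sp.symbols('x R varepsilon', positive=True)

# first law in d=2 with a constant energy density T_00 = eps
dS = 2*sp.pi*sp.integrate((R**2 - x**2)/(2*R)*eps, (x, -R, R))
dS = sp.simplify(dS)                          # = 4*pi*eps*R**2/3
```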
For $d>2$ the proof is slightly more complicated, but it can be shown by working out the above expansions in momentum space, and resumming the resulting series. We refer the reader to \cite{Blanco:2013joa} for a detailed analysis in these higher dimensional cases.
The crucial insight here is that \emph{any} divergenceless vector field satisfying (\ref{bcVector}) will automatically encode the first law of entanglement entropy,
which for arbitrary dimensions takes the form
\begin{equation}
\delta S_A=2\pi \int_{-R}^R d^{d-1}x\,\frac{ (R^2-r^2) }{2 R}T_{00}(x^\sigma)\,,\qquad r^2\equiv\sum_{i=1}^{d-1} x_i^2\,.
\end{equation}
Since the first law of entanglement entropy has been shown to be equivalent to the bulk Einstein's equations at the linear level \cite{Faulkner:2013ica}, then \emph{all} consistent thread configurations should also encode them in some form. It remains to be seen how the Einstein's equations are encoded in specific thread configurations, and how easy it would be to recover the metric from particular constructions.
\subsection{Method 1: Geodesic bit threads \label{F1}}
Following \cite{Agon:2018lwq}, we will now present simple methods to construct explicit thread configurations satisfying the boundary condition (\ref{bcVector}) for perturbative excited states. The first method consists of picking a family of integral curves with good properties, and then fixing the norm by ensuring that Gauss's law is satisfied everywhere. In the following, we will describe this construction in some detail and study how the information of the bulk metric is encoded in the resulting thread configuration.
\subsubsection{Integral curves\label{sec:form1-intc}}
A good family of integral curves must satisfy the following properties:
\begin{enumerate}
\item They must be orthogonal to the minimal surface $\gamma_A$.
\item They must be continuous and not self-intersecting.
\item They must start and end at the boundary, or possibly at a bulk horizon.
\end{enumerate}
Given a family with these properties, it is then straightforward to construct a divergenceless vector field with the desired boundary condition. There is a small caveat here, however: one can only check whether the norm bound $|v|\leq 1$ is satisfied \emph{a posteriori}.
One crucial result of \cite{Agon:2018lwq} is that a thread construction based on space-like geodesics automatically satisfies the norm bound, provided that the metric background satisfies some simple geometric properties. This conclusion followed from a systematic analysis of geodesic foliations of an arbitrary Riemannian geometry, so it must also hold true for the case in consideration, i.e., for geometries dual to perturbative excited states. Therefore, our first candidate for the family of integral curves will be the space-like geodesics of the perturbed background.
\paragraph{Corrected geodesics:}
Let us consider the $d=2$ and $d>2$ cases separately. In \cite{Agon:2018lwq} it was shown that space-like geodesics in an arbitrary $(2+1)$-dimensional $(d=2)$ background lead to a vector field satisfying the norm bound $|v|\leq1$, provided that the Ricci scalar
on a constant-$t$ slice (a Riemannian submanifold) is negative everywhere, i.e.
\begin{equation}\label{critd2}
R<0\,.
\end{equation}
We can check that this condition is indeed satisfied for the perturbative states that we are considering.
Working in coordinates adapted to the geodesics, and using the same notation of \cite{Agon:2018lwq}, we will write the bulk metric as follows:
\begin{equation}
ds^2\equiv G_{\mu\nu}dx^{\mu}dx^{\nu}=-\psi(\lambda,x) dt^2+d\lambda^2+\gamma(\lambda,x) dx^2\,,
\end{equation}
where $x$ labels different points along the minimal surface and $\lambda$ is an affine parameter that runs along geodesics orthogonal to it.\footnote{This coordinate system does not need to foliate the full manifold; points that are not covered by these coordinates have by definition a vanishing vector field $v=0$.} The above metric is a solution of Einstein's equations:\footnote{We have set $8\pi G_N=1$ for simplicity.}
\begin{equation}\label{EEQ-CC}
\mathcal{R}_{\mu \nu}-\frac{1}{2} \mathcal{R} G_{\mu \nu}+\Lambda G_{\mu \nu}=\mathcal{T}_{\mu\nu}\,,
\end{equation}
where $\mathcal{T}_{\mu\nu}$ is the bulk energy momentum tensor.
A quick calculation shows that the induced Ricci on a constant-$t$ slice is:
\begin{equation}
R=\frac{2}{\psi(\lambda,x)}(\mathcal{T}_{00}(\lambda,x)+\Lambda \psi(\lambda,x))\,,
\end{equation}
hence, for negative cosmological constant $\Lambda<0$, we have that $R<0$ if and only if the local energy density is bounded from above:
\begin{equation}\label{boundE}
\varepsilon(\lambda,x)\equiv -\mathcal{T}^0_{\,\,\;\;0}(\lambda,x)<-\Lambda\,.
\end{equation}
Since the perturbations that we are considering are all vacuum solutions, i.e., we have $\mathcal{T}_{\mu\nu}=0$, we conclude that the corrected geodesics can indeed be taken as a good family of integral curves.
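In the vacuum case this is easy to check explicitly: with $\mathcal{T}_{\mu\nu}=0$ and $\Lambda=-1$ (unit AdS radius, in units $8\pi G_N=1$), the formula above gives $R=2\Lambda=-2$, which is indeed the scalar curvature of the constant-$t$ slice of pure AdS$_3$, i.e., the hyperbolic plane. A \texttt{sympy} sketch computing this curvature from scratch:

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)
coords = [x, z]
n = 2

# constant-t slice of pure AdS_3 (L = 1): the hyperbolic plane
g = sp.Matrix([[1/z**2, 0], [0, 1/z**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac} + Gamma Gamma terms
def ricci(b, c):
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][a][c], coords[b])
        for d in range(n):
            expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][b][d]*Gamma[d][a][c]
    return sp.simplify(expr)

R_scalar = sp.simplify(sum(ginv[b, c]*ricci(b, c)
                           for b in range(n) for c in range(n)))  # = -2
```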
For spheres in higher dimensional spaces ($d>2$) the situation is a bit more complicated. Assuming that the state is invariant under rotations, we can pick a plane that intersects the origin and find the geodesics within this plane. Then we foliate the full spacetime by surfaces of revolution generated by rotating such geodesics along all possible angles. With this construction, the bulk metric can be written as
\begin{eqnarray}\label{metricsph}
ds^2=-\psi(\lambda,x) dt^2+d\lambda^2+\gamma(\lambda,r)dr^2+e^{2\tau(\lambda,r)}d\Omega_k^2\,,\qquad ds_2^2\equiv d\lambda^2+\gamma(\lambda,r)dr^2\,.
\end{eqnarray}
After some algebra, one finds that the criterion (\ref{critd2}) generalizes to \cite{Agon:2018lwq}
\begin{eqnarray}\label{R2cond}
R_2<2k\left[\partial^2_\lambda\tau+\partial_\lambda\tau(k+\partial_\lambda\log\gamma)\right]\,,
\end{eqnarray}
where $R_2$ is the induced Ricci on the auxiliary 2-dimensional metric defined in (\ref{metricsph}). On a pure AdS background, one finds that $R_2=-2$, while the terms on the right hand side of (\ref{R2cond}) are strictly positive. This means that there is a finite gap, or in other words, that the bound is $\mathcal{O}(1)$ far from saturation. On the other hand, linear perturbations of the metric would lead to corrections on both sides of the equation but these corrections can only be of order $\mathcal{O}(\lambda)$. This means that for sufficiently small $\lambda$, the condition (\ref{R2cond}) will still hold true, regardless of the fluctuations. Similar arguments could be made for metrics that are perturbatively close to AdS but are not rotationally invariant, however, the analysis would be certainly more complicated. In these situations one would need to find corrected geodesics within infinitely many planes intersecting the origin and repeat the above steps. But, again, since the pure AdS case is far from saturating (\ref{R2cond}), the analysis at linear order would only lead to corrections of order $\mathcal{O}(\lambda)$, meaning that the bound would always be satisfied for sufficiently small $\lambda$.
The above arguments show that the $\mathcal{O}(\lambda)$ geodesics are good candidates for integral curves for any number of dimensions.
There is a slight technical problem, however: it is practically impossible to obtain closed expressions for the corrected geodesics in a \emph{generic} perturbed background. In practice, rather than working with the corrected geodesics, it is more convenient to propose an alternative family of integral curves. In the following we will explore this possibility in more detail.
\paragraph{Uncorrected geodesics:}
The corrected geodesics are far from saturating the bound (\ref{critd2}) in $d=2$ or, more generally, (\ref{R2cond}) in higher dimensions. Therefore, it is clear that a continuous family of curves that are perturbatively close to them will similarly do the job. The most natural and simplest candidate for this are the \emph{uncorrected} space-like geodesics.
To illustrate this point we will consider the $d=2$ case, where we can make a precise analytic statement. In this case, the minimal surface (\ref{emb2d}) is given implicitly by
\begin{equation}\label{minimalsurf}
z_m^2+x_m^2=R^2\,.
\end{equation}
We have added subscripts `$m$' to point out that these coordinate points are on $\gamma_A$. The geodesics in pure AdS are given by semicircles anchored at the boundary. These semicircles form a two-parameter family of curves and are defined implicitly by
\begin{eqnarray}\label{geodesic}
(x-x_s)^2+z^2=R_s^2\,,
\end{eqnarray}
where $x_s$ is the center of the circle and $R_s$ its radius. The tangent vector with unit norm at an arbitrary point is given by
\begin{eqnarray}\label{tau}
\hat{\tau}^a=\left(\frac{z}{R_s}-\frac{\lambda z^5 H(t,x)}{2 R_s^3}\right)\(z , x_s-x\)\,,
\end{eqnarray}
where $H(t,x)\equiv H_{xx}(t,x)$. As expected, the tangent vector still points in the same direction but its normalization is corrected at leading order in the perturbation.
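One can verify this normalization directly: to linear order in $\lambda$, the vector (\ref{tau}) has unit norm with respect to the perturbed metric at every point of the semicircle. A \texttt{sympy} sketch (with $H$ treated as a constant at the evaluation point):

```python
import sympy as sp

x, z, xs, Rs, R, H, lam = sp.symbols('x z x_s R_s R H lam', positive=True)

# perturbed d=2 slice metric: g_xx = 1/z^2 + lam*H, g_zz = 1/z^2
g = sp.Matrix([[1/z**2 + lam*H, 0], [0, 1/z**2]])

# uncorrected geodesic (x - x_s)^2 + z^2 = R_s^2; tangent direction (z, x_s - x)
tau = (z/Rs - lam*z**5*H/(2*Rs**3)) * sp.Matrix([z, xs - x])

norm2 = sp.expand((tau.T * g * tau)[0])
# restrict to the geodesic, z^2 = R_s^2 - (x - x_s)^2
norm2 = norm2.subs(z, sp.sqrt(Rs**2 - (x - xs)**2))
order0 = sp.simplify(norm2.subs(lam, 0))
order1 = sp.simplify(sp.diff(norm2, lam).subs(lam, 0))
```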
Since the integral curves must be orthogonal to the minimal surface, we must enforce that $\hat{\tau}|_{\gamma_A}=v_\lambda|_{\gamma_A}$, where the latter is given in (\ref{bcVector}). At order $\mathcal{O}(\lambda)$, this requirement leads to\footnote{With these definitions $R_s$ can take negative values. We can take an absolute value of $x_m$ in the denominator of (\ref{80}) to make $R_s$ positive. However, allowing $R_s$ to take any value will be useful below, in the definitions of $x_a$ and $x_{\bar{a}}$.}
\begin{eqnarray}\label{80}
&&R_s(x_m)=\frac{R \sqrt{R^2-x_m^2}}{x_m} \left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}\right]\,,\\
&&x_s(x_m)=\frac{R^2}{x_m}\left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}\right]\,.\label{81}
\end{eqnarray}
In order to arrive at these expressions we have made use of the equation (\ref{minimalsurf}) to eliminate $z_m$. Finally, plugging (\ref{80})-(\ref{81}) into (\ref{geodesic}) we obtain an implicit expression for the family of geodesics orthogonal to $\gamma_A$, parametrized by the point $x_m\in[-R,R]$ on the minimal surface.
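A symbolic check that (\ref{80})-(\ref{81}) do the advertised job: evaluating the unit tangent (\ref{tau}) at the point $(x_m,z_m)$ of the minimal surface reproduces the boundary value (\ref{bcVector}) at order $\mathcal{O}(\lambda)$. A \texttt{sympy} sketch, with $H$ shorthand for $H(t,x_m)$:

```python
import sympy as sp

xm, R, H, lam = sp.symbols('x_m R H lam', positive=True)
zm = sp.sqrt(R**2 - xm**2)

corr = 1 + lam*(R**2 - xm**2)**2*H/R**2
Rs = R*sp.sqrt(R**2 - xm**2)/xm * corr        # eq. (80)
xs = R**2/xm * corr                           # eq. (81)

# unit tangent of the geodesic, evaluated at (x_m, z_m) on gamma_A
tau = (zm/Rs - lam*zm**5*H/(2*Rs**3)) * sp.Matrix([zm, xs - xm])

# boundary value of the flow at the bottle-neck, eq. (bcVector) in d=2
v_bc = (zm/R)*sp.Matrix([xm, zm]) \
    - lam*xm*zm**3*H/(2*R**3)*sp.Matrix([xm**2 + 2*zm**2, -xm*zm])

diff = (tau - v_bc).applyfunc(
    lambda e: sp.simplify(sp.series(e, lam, 0, 2).removeO()))
```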
Next, we need to check if the proposed integral curves are properly nested \cite{Agon:2018lwq}. In order to check this, we find the point $x_a$ at which they intersect $A$,\footnote{If we insist that $R_s\geq0$, these definitions for $x_a$ and $x_{\bar{a}}$ would only be valid for $x_s\geq0$, while for $x_s\leq0$ one should interchange the two.}
\begin{eqnarray}\label{r0}
x_a=x_s-R_s=\frac{R}{x_m}\(R-\sqrt{R^2-x_m^2} \)\left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}\right]\,,
\end{eqnarray}
and the dual point $x_{\bar{a}}$ at which the curves intersect $\bar{A}$,
\begin{eqnarray}\label{barr0}
x_{\bar{a}}=x_s+R_s=\frac{R}{x_m}\(R+\sqrt{R^2-x_m^2} \)\left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}\right]\,.
\end{eqnarray}
One can check that self-intersection is avoided if and only if $dx_a/dx_m>0$ and $dx_{\bar{a}}/dx_m<0$. A quick calculation leads to
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{dx_a}{dx_m}&=&\frac{R^2}{x_m^2} \frac{(R-\sqrt{R^2-x_m^2})}{\sqrt{R^2-x_m^2}}\times\nonumber\\
&&\quad\left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}+\frac{\lambda x_m\sqrt{R^2-x_m^2}}{R^3}\frac{d}{dx_m}\left((R^2-x_m^2)^2 H(t,x_m)\right)\right],
\end{eqnarray}
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{dx_{\bar{a}}}{dx_m}&=&-\frac{R^2}{x_m^2} \frac{(R+\sqrt{R^2-x_m^2})}{\sqrt{R^2-x_m^2}}\times\nonumber\\
&&\quad\left[1+\frac{\lambda (R^2-x_m^2)^2 H(t,x_m)}{R^2}+\frac{\lambda x_m\sqrt{R^2-x_m^2}}{R^3}\frac{d}{dx_m}\left((R^2-x_m^2)^2 H(t,x_m)\right)\right].
\end{eqnarray}
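The derivative $dx_a/dx_m$ quoted above can be reproduced by direct differentiation of (\ref{r0}); the check for $dx_{\bar{a}}/dx_m$ is completely analogous. A \texttt{sympy} sketch:

```python
import sympy as sp

xm, R, lam = sp.symbols('x_m R lam', positive=True)
H = sp.Function('H')(xm)                      # H(t, x_m) at fixed t
zm = sp.sqrt(R**2 - xm**2)

xa = (R/xm)*(R - zm)*(1 + lam*(R**2 - xm**2)**2*H/R**2)   # eq. (r0)

# derivative quoted in the text
pref = (R**2/xm**2)*(R - zm)/zm
stated = pref*(1 + lam*(R**2 - xm**2)**2*H/R**2
               + lam*xm*zm/R**3*sp.diff((R**2 - xm**2)**2*H, xm))

check = sp.simplify(sp.diff(xa, xm) - stated)             # vanishes
```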
One can check that at order $\mathcal{O}(1)$ both conditions are satisfied, i.e., $dx_a/dx_m>0$ and $dx_{\bar{a}}/dx_m<0$. At linear order in the perturbation, we get a term that does not have a definite sign (the last term in the square brackets), but one can always choose a small enough $\lambda$ such that these inequalities are still satisfied. As an example, let us consider a plane wave, $H(t,x)=\epsilon \sin[\omega(t-x)]$.\footnote{In this example even the second term in the square brackets can have a negative sign, but this problem goes away when one imposes energy conditions. The last term, however, will still be indefinite after imposing energy conditions.} The last term in the square brackets can become order $\mathcal{O}(1)$ if the frequency $\omega$ is large enough. To prevent this from happening, one must take $\lambda\ll1/(\epsilon\, \omega R^3)$. If the background is decomposed in Fourier modes, then the maximum frequency will be the relevant one, and the above condition is replaced by
\begin{equation}
\lambda\ll\frac{1}{\epsilon\, \omega_{\text{max}} R^3}\,.
\end{equation}
This means that for smooth functions we can always find a small $\lambda$ that satisfies the conditions. For sharply peaked functions this might not be the case, since the Fourier spectrum could contain arbitrarily high frequency modes. We will therefore restrict our attention to states with smooth stress energy tensor. Notice that this is not an important restriction. In CFT language, a state with a sharply peaked stress energy tensor will not be perturbatively close to the vacuum, and hence, the gravity dual would have important higher order contributions that we have ignored in the approximation of linearized gravity.
\subsubsection{Magnitude}
Given a set of integral curves the next step is to find the appropriate norm of the vector field $|v_\lambda|$. We will denote by $X(x_{m},\xi)$ the proposed family of curves; $x_m$ labels points on the minimal surface and $\xi$ is a parameter that runs along the curve. As explained above, the curves $X(x_{m},\xi)$ can be the uncorrected geodesics. The parameter $\xi$ can be taken as the proper length from the given point to the minimal surface.
Following \cite{Agon:2018lwq}, we now fix the norm by implementing a version of Gauss's law for an infinitesimal cylinder enclosing each curve.\footnote{Alternatively, we could fix the norm by solving the first order differential equation for $|v_\lambda|$ resulting from the divergenceless condition, subject to the appropriate boundary condition at $\gamma_A$. This would be completely equivalent to the Gauss's law method described here, since the latter condition is the differential form of Gauss's law. However, since we have the explicit form of the integral lines, the Gauss's law turns out to be more convenient in this case, providing a final answer in closed form, as shown below in equation (\ref{magnitudeV}).} More specifically, we impose that the flux through an infinitesimal area element $\delta A$ transverse to one of the threads is constant,
\begin{eqnarray}\label{divcond}
\int_{\delta A} |v_\lambda| \sqrt{h |_{\lambda}} d^{d-1}x={\rm constant}\,,
\end{eqnarray}
where $h_{ab}=g_{ab}-\hat{\tau}_a\hat{\tau}_b$ is the projection of the metric on a plane orthogonal to the integral curve. Using the fact that at the minimal surface $|v_\lambda(x_m,\xi_m)|=1$, and letting $\delta A\to0$, we arrive at the following expression for the norm
\begin{eqnarray}\label{magnitudeV}
|v_\lambda(x_{m},\xi)| =\frac{\sqrt{h_\lambda(x_{m},\xi_{m}) }}{\sqrt{h_\lambda(x_{m},\xi)}}\,,
\end{eqnarray}
where $\xi_{m}$ is the parameter at which the curve intersects the minimal surface $\gamma_A$. Notice that we do not need to verify whether the norm bound $|v_\lambda|\leq1$ is satisfied everywhere. This is already guaranteed given our choice of integral curves and the argument based on the negativity of the scalar curvature presented in Section \ref{sec:form1-intc}. Reference \cite{Agon:2018lwq} provides various explicit examples of geodesic flows constructed with this method, including the case of spherical regions in empty AdS, given in equation (\ref{Vec1-d}). In Appendix \ref{app:examples} we complement this study by constructing a new explicit example, now for the case of the specific perturbative excited state corresponding to a local quench.
We can now inquire about how the bulk metric and the Einstein's equations are encoded in this particular construction. Unfortunately, at this level we can already see that such information is encoded in the vector field $v_\lambda$ in a highly nonlocal fashion. On one hand, one needs to solve for the geodesics in the unperturbed background subject to a boundary condition that depends on a particular metric perturbation. And, on the other hand, the magnitude of the vector is found by transporting the boundary condition along the geodesic, ensuring that the vector field is divergenceless. This process is inherently nonlocal; in particular, the final result for $|v_\lambda|$ exhibits an explicit bilocal dependence on the metric perturbation, since it must be evaluated at the points labeled by $\xi_m$ and $\xi$. The latter parameter, in particular, encodes the proper distance between the point in consideration and the minimal surface $\gamma_A$, which is nonlocal information on its own. These observations imply that it would be rather difficult to invert the problem and recover the metric from the resulting thread configuration. Similarly, the same remarks apply for the Einstein's equations: even though they are assumed as a starting point for this construction (the perturbations we consider are on-shell), they are ultimately encoded nonlocally in the resulting thread configuration.
\subsection{Method 2: Level set construction \label{lsc}}
The second method of constructing thread configurations consists of starting with a specific family of level set hypersurfaces and then building up a vector field
that is orthogonal to them and, of course, divergenceless. This is a slight generalization of a method initially proposed in \cite{Agon:2018lwq}, as we will see below. In the following, we will spell out the details of the general construction for arbitrary metrics, and then specialize to the case of perturbative excited states, where the construction simplifies drastically.
\subsubsection{General metrics}
We begin by proposing a family of level set surfaces with the following properties:
\begin{enumerate}
\item They must contain the minimal surface $\gamma_A$ as one of its members.
\item They must be continuous and not self-intersecting.
\item They must not include closed bulk surfaces.
\end{enumerate}
Given a family with these properties, it is then straightforward to construct a divergenceless vector field with the desired boundary condition. We can understand this as follows: given a family of level set hypersurfaces, one can first generate the corresponding integral lines by imposing that they must be orthogonal to each member of the family. Having the integral lines, then, the problem reduces to that of section (\ref{F1}) so we could follow the steps outlined there. This means that, in general, we can only check whether the norm bound is satisfied \emph{a posteriori}. There is, however, one clever exception to the rule. We can ensure that $|v|\leq 1$ is satisfied everywhere by construction if we impose the following extra condition on the level set surfaces:
\begin{enumerate}
\setcounter{enumi}{3}
\item They must be homologous to $A$.
\end{enumerate}
If this condition is satisfied, then, the max flow-min cut theorem guarantees that $|v|$ will be maximal at the bottle-neck $\gamma_A$. Since $|v|_{\gamma_A}=1$ then, this implies that $|v|\leq 1$ at any other member of the family. Notice that condition 4 is not a strict requirement, but a useful one. In fact, simple examples of vector fields generated by level sets that are \emph{not} homologous to $A$ are the \emph{maximally packed flows} constructed in \cite{Agon:2018lwq}. In that construction
the level set surfaces were picked as a family of nested minimal surfaces, containing $\gamma_A$ as one of its members. The motivation there was to find a flow with maximal norm $|v|=1$ in a given codimension-one region of the bulk, which was possible due to the nesting property of bit threads \cite{Freedman:2016zud,Headrick:2017ucz}.\footnote{Maximally packed flows also satisfy the norm bound by construction. If one picks level sets that are not homologous to $A$ and are not minimal surfaces, then indeed, the norm bound should be checked \emph{a posteriori}.} For the purposes of this paper, however, we are not interested in the above requirement, so we can explore other possibilities. In the remaining part of this section we will in fact assume that condition 4 is satisfied, so we do not have to deal with the norm bound.
Let us now describe in detail the construction from level sets. To begin with, we need an efficient way to specify our level set hypersurfaces. In practice, we can do so by picking an appropriate scalar function $\varphi(x^i)$ such that the $\varphi=$ constant surfaces give us our desired level sets. We can then write the following equation for $v_\lambda$:
\begin{equation}\label{v:levelsets}
v=\Upsilon(x^i)\nabla \varphi(x^i)\,.
\end{equation}
At first glance, \eqref{v:levelsets} seems more general than a gradient flow, but in fact it is not. In principle one could always redefine the scalar function $\varphi \to \tilde{\varphi}=\int^{\varphi}\Upsilon(\psi)d \psi $ and therefore simply write $v=\nabla \tilde{\varphi}$. However, the function $\tilde{\varphi}$ would not only encode information about the level sets, but also about the norm, so it would be extremely difficult to guess a good function that gives us our desired level sets \emph{and} that also satisfies the divergenceless condition $\nabla^2\tilde{\varphi}=0$. In practice, then, it is much easier to start with (\ref{v:levelsets}) and determine $\Upsilon(x^i)$ through the divergenceless condition. We emphasize that, in this scenario, the specific values of $\varphi$ do not have a particular meaning and are in particular \emph{not} related to the norm of $v$. The field $\varphi$ here only determines the unit vector in $\vec{\tau}=v/|v|$, through
\begin{equation}
\vec{\tau}=\frac{\nabla{\varphi}}{|\nabla{\varphi}|}\,.
\end{equation}
One crucial observation that follows from the definition (\ref{v:levelsets}) is that the covector $v_a$ (i.e. $v$ with lower index) only depends on the metric $g_{ab}$ through $\Upsilon(x^i)$. To make this point self-evident, and in a form which is partially ``independent'' of the metric, we can write
\begin{equation}\label{cov:levelsets}
v_a=\Upsilon(\varphi,g)\partial_a \varphi\,.
\end{equation}
We will exploit this observation below, for the case of perturbative states. For now, let us notice that the boundary condition at the minimal surface implies:
\begin{eqnarray}
\Upsilon^2(\varphi, g) g^{a b} \partial_a \varphi \partial_b \varphi \Big|_{\gamma_A}=1\,,
\end{eqnarray}
or, equivalently,
\begin{eqnarray}\label{eq:bcvsets}
\Upsilon(\varphi, g)\Big|_{\gamma_A}=\frac{1}{| \partial \varphi|_g}\bigg|_{\gamma_A}, \qquad\qquad| \partial \varphi|_g\equiv \sqrt{g^{ab} \partial_a \varphi \partial_b \varphi}\,.
\end{eqnarray}
All we have left is to determine $\Upsilon$ away from the minimal surface, which can be done by imposing the divergenceless condition. Here we have two options: $i)$ we can use Gauss's law as we did in Section \ref{F1} or $ii)$ we can directly attempt to solve $\nabla\cdot v=0$, which should give us a first order differential equation for $\Upsilon$. As mentioned in Section \ref{F1}, the two methods are completely equivalent, since Gauss's law is the integral form of the divergenceless condition. However, since we do not have explicit expressions for the integral lines, then, the first option turns out to be more complicated in this case.\footnote{We could get the integral lines $X(x_m,\xi)$ in terms of the field $\varphi$ and its derivatives, but in order to do so we would need to solve a first order differential equation, which would by itself have the same level of complexity as solving directly the divergenceless condition.} We therefore proceed by deriving a differential equation for $\Upsilon$ directly from $\nabla\cdot v=0$. Plugging (\ref{cov:levelsets}) into this condition and massaging the equation leads to:
\begin{equation}\label{eq:PsiPDE}
(\nabla\varphi)\cdot(\nabla\Upsilon)+(\nabla^2\varphi) \Upsilon=0\,.
\end{equation}
As advertised, this is a first order differential equation for $\Upsilon$ in terms of the scalar field $\varphi$ and the background metric $g$. Solving this equation subject to the boundary condition (\ref{eq:bcvsets}) would then give a unique solution for the vector field $v$.
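Equation (\ref{eq:PsiPDE}) is simply the Leibniz rule for the covariant divergence, $\nabla\cdot(\Upsilon\nabla\varphi)=\nabla\varphi\cdot\nabla\Upsilon+\Upsilon\nabla^2\varphi$. A \texttt{sympy} sketch verifying this identity for a generic diagonal metric on a 2d slice (the diagonal form is an assumption made only to keep the example short):

```python
import sympy as sp

x, z = sp.symbols('x z')
phi = sp.Function('varphi')(x, z)
Ups = sp.Function('Upsilon')(x, z)

# a generic diagonal 2d metric on the slice
g1 = sp.Function('g1')(x, z)
g2 = sp.Function('g2')(x, z)
sqrtg = sp.sqrt(g1*g2)
coords = [x, z]
ginv = [1/g1, 1/g2]

# covariant divergence of v^a = g^{ab} Upsilon d_b phi
v_up = [ginv[a]*Ups*sp.diff(phi, coords[a]) for a in range(2)]
div_v = sum(sp.diff(sqrtg*v_up[a], coords[a]) for a in range(2))/sqrtg

# right-hand side: (grad phi).(grad Upsilon) + (Lap phi) Upsilon
grad_dot = sum(ginv[a]*sp.diff(phi, coords[a])*sp.diff(Ups, coords[a])
               for a in range(2))
lap_phi = sum(sp.diff(sqrtg*ginv[a]*sp.diff(phi, coords[a]), coords[a])
              for a in range(2))/sqrtg

check = sp.simplify(div_v - (grad_dot + lap_phi*Ups))     # vanishes
```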
\subsubsection{Perturbative excited states}
The above construction simplifies drastically for the case of perturbative excited states. In the following we will specialize to this situation and study in detail how the information about the bulk perturbation is encoded in the resulting thread configuration.
For a metric of the form $g^\lambda_{ab}=g_{ab}+\lambda \delta g_{ab}$ we are only interested in obtaining the vector field $v_\lambda$ (\ref{eq:vlambda}) to linear order in the perturbation around the zeroth order solution. Since the minimal surface $\gamma_A$ does not change at linear order in $\lambda$, a simple choice for the level sets consistent with all requirements would be to pick the same surfaces as for the unperturbed geometry. In this case we have that:
\begin{equation}
v^\lambda_a=v_a+\lambda\delta v_a=\Upsilon_\lambda(\varphi,g_\lambda)\partial_a \varphi\,,\qquad \Upsilon_\lambda(\varphi,g_\lambda)=\Upsilon(\varphi,g)+\lambda \delta\Upsilon(\varphi,g_\lambda)\,.
\end{equation}
In other words, with this choice of level sets, only the function $\Upsilon(x^i)$ gets corrected at linear order in $\lambda$, so the first correction of the vector field $\delta v_a$ turns out to be proportional to the zeroth order solution,
\begin{equation}\label{eq:deltavaPsi}
\delta v_a= \delta\Upsilon(\varphi,g_\lambda)\partial_a \varphi= \Psi(\varphi,g_\lambda)v_a\,,\qquad \Psi(\varphi,g_\lambda)\equiv\frac{\delta\Upsilon(\varphi,g_\lambda)}{\Upsilon(\varphi,g)}\,.
\end{equation}
The function $\Psi$ is determined at the minimal surface by the boundary condition $|v_\lambda|=1$. Expanding at linear order, we obtain
\begin{eqnarray}\label{norm}
g^{ab}_\lambda v^\lambda_{a}v^\lambda_{b}\Big|_{\gamma_A}=g^{ab} v_{a}v_{b}+\lambda\left(2g^{a b}v_{a} \delta v_{b}+\delta g^{ab} v_{a }v_{b}\right)\Big|_{\gamma_A}=1\,.
\end{eqnarray}
Since the zeroth order term is already normalized to one, the terms inside the parenthesis must vanish. Using (\ref{eq:deltavaPsi}) we arrive at:
\begin{eqnarray}\label{bc-a}
\Psi(\varphi,g_\lambda)|_{\gamma_A}=-\frac{1}{2}\delta g^{ab} v_a v_b=\frac{1}{2}\delta g_{ab} v^a v^b\,.
\end{eqnarray}
In the last equality we have used the fact that $\delta (\delta^a_{\,\,\, b})=\delta\(g^{ab}g_{bc}\)=\delta g^{ab}g_{bc}+g^{ab}\delta g_{bc} =0$. We did this because it would be particularly convenient to have an expression for the boundary condition of $\Psi(\varphi,g_\lambda)$ in terms of the background $v$ with upper index.
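Both the equality of the two expressions in (\ref{bc-a}) and the fact that $\Psi=-\frac{1}{2}\delta g^{ab}v_av_b$ solves the $\mathcal{O}(\lambda)$ normalization condition (\ref{norm}) can be checked on a toy example with explicit matrices (chosen arbitrarily, purely for illustration):

```python
import sympy as sp

# toy 2x2 example: background metric, perturbation, and a unit covector
g = sp.Matrix([[2, 1], [1, 3]])               # g_{ab} (any symmetric, invertible)
dg = sp.Matrix([[1, -2], [-2, 4]])            # delta g_{ab} (any symmetric)
ginv = g.inv()
dg_up = -ginv*dg*ginv                         # delta g^{ab} = -g^{ac} dg_{cd} g^{db}

w = sp.Matrix([1, 1])
v_low = w / sp.sqrt((w.T*ginv*w)[0])          # v_a with g^{ab} v_a v_b = 1
v_up = ginv*v_low                             # v^a

lhs = -sp.Rational(1, 2)*(v_low.T*dg_up*v_low)[0]   # -(1/2) dg^{ab} v_a v_b
rhs = sp.Rational(1, 2)*(v_up.T*dg*v_up)[0]         # +(1/2) dg_{ab} v^a v^b

# O(lam) normalization condition with delta v_a = Psi v_a:
Psi = lhs
resid = sp.simplify(2*(v_low.T*ginv*(Psi*v_low))[0]
                    + (v_low.T*dg_up*v_low)[0])      # vanishes
```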
Next we would like to determine the function $\Psi$ away from the minimal surface, which can be done by imposing the divergenceless condition. Again, we proceed by deriving a differential equation for $\Psi$ akin to (\ref{eq:PsiPDE}). In order to do so, first notice that
\begin{eqnarray}
v^a_\lambda =v^a +\lambda \delta v^a= (g^{a b} +\lambda \delta g^{ab} )(v_b+\lambda \delta v_b)\,,
\end{eqnarray}
so\footnote{Notice that $\delta v_a$ is proportional to $v_a$ but $\delta v^a$ is \emph{not} proportional to $v^a$. This is why we have mostly worked with covectors in this section.}
\begin{equation}\label{deltavaup}
\delta v^a = g^{ab}\delta v_b+\delta g^{ab}v_b=\Psi g^{ab}v_b+\delta g^{ab}v_b=\Psi v^a-g^{ ab}\delta g_{bc} v^c\,.
\end{equation}
Taking the divergence of $v_\lambda$ and using the fact that $\nabla\cdot v=0$ (at zeroth order), we obtain:
\begin{equation}\label{eq:DivCOnPer}
\nabla_\lambda\cdot v_\lambda=\frac{1}{\sqrt{g_\lambda}}\partial_a\left(\sqrt{g_\lambda}\, v_\lambda^a\right)=\frac{\sqrt{g}}{\sqrt{g_\lambda}}(\cancel{\nabla\cdot v})+\frac{\lambda}{\sqrt{g_\lambda}}\partial_a\left[\delta\({\sqrt{g_\lambda}}\)v^a +\sqrt{g}\,\delta v^a\right]=0\,.
\end{equation}
Taking the explicit variation of $\sqrt{g_\lambda}$ and using (\ref{deltavaup}) we obtain:
\begin{eqnarray}
\partial_a\( \tfrac 12 \sqrt{g} \, g^{bc} \delta g_{bc} v^a + \Psi \sqrt{g}\,v^a- \sqrt{g}\,g^{a b}\delta g_{b c} v^c\)=0\,,
\end{eqnarray}
or, equivalently,
\begin{eqnarray}\label{eq:DEPsi}
v \cdot \nabla \Psi+\nabla_a (\delta g^{a b} v_b)+\tfrac{1}{2} v\cdot \nabla ( \delta g ) =0\,,
\end{eqnarray}
where $\delta g\equiv g^{ab} \delta g_{ab}$. In summary, given a background metric $g_{ab}$ and a solution to the max flow problem $v^a$, one can always solve the problem of maximizing the flux in a state where the metric $g_{ab}^\lambda$ is perturbatively close to the original one. Assuming that the level set surfaces remain the same in the perturbed geometry, the solution for the perturbation of $v$ is given by equation (\ref{deltavaup}), which is determined in terms of a scalar function $\Psi$ and the metric perturbation $\delta g_{ab}$. This function can be obtained by solving the first order differential equation (\ref{eq:DEPsi}) subject to the boundary condition (\ref{bc-a}).
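The mechanics of this construction can be verified symbolically in a toy setting: a flat two-dimensional background with constant unit flow $v^a=(1,0)$, whose level sets are the lines $x=\text{const}$. The functions $a$, $b$, $c$ below parametrize an arbitrary metric perturbation and, together with the particular antiderivative chosen for $\Psi$ (the integration constant is set to zero for simplicity), are our own conventions rather than the text's. A minimal sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.Function('a')(x, y)
b = sp.Function('b')(x, y)
c = sp.Function('c')(x, y)

# Background: flat 2D metric, unit flow v^a = (1, 0); level sets x = const.
# Metric perturbation delta g_{ab} = [[a, b], [b, c]]; at linear order
# delta g^{ab} = -delta g_{ab} since g_{ab} is the identity.

# Eq. (eq:DEPsi): d_x Psi + d_a(delta g^{ab} v_b) + (1/2) d_x(tr delta g) = 0
dPsi_dx = sp.diff(a, x) + sp.diff(b, y) - sp.Rational(1, 2)*sp.diff(a + c, x)
Psi = sp.integrate(dPsi_dx, x)

# Eq. (deltavaup): delta v^a = Psi v^a - g^{ab} delta g_{bc} v^c
dv = sp.Matrix([Psi - a, -b])

# First-order divergence of v_lambda with the perturbed metric:
#   d_a[ (1/2)(tr delta g) v^a + delta v^a ]  should vanish
div1 = sp.diff(sp.Rational(1, 2)*(a + c) + dv[0], x) + sp.diff(dv[1], y)
print(sp.simplify(div1))  # expect 0
```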
In retrospect, the only non-trivial input required for this kind of construction is the choice of background vector field $v$, which is in turn used as a seed for the perturbed solution $v_\lambda$. Specializing to spherical regions, one simple choice would be to pick $v$ as a geodesic flow, which is known in closed form if the background metric is empty AdS. This background $v$ is given explicitly in equation (\ref{Vec1-d}). It is easy to check that the level sets of this vector field are all homologous to $A$, as shown in Figure \ref{fig:contours}. Since this construction assumes that the level sets are kept fixed, any perturbative solution $v_\lambda$ built up from this background field $v$ will automatically respect the norm bound $|v_\lambda|\leq1$. In Appendix \ref{app:examples} we present an explicit example of such perturbative solutions, for the case of a local quench.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.4]{Contours.pdf}$\,\,$\includegraphics[scale=0.4]{Contours2.pdf}
\begin{picture}(0,0)
\put(-100,-5){{\tiny $x/R$}}
\put(-183,75.5){{\tiny $z/R$}}
\put(-99,47){{\tiny $\gamma_A$}}
\put(-19,146){{\tiny $|v|$}}
\end{picture}
\vspace{0mm}
\caption{Contour plot for the magnitude $|v|$ of the geodesic flow given in (\ref{Vec1-d}), in $d=2$ dimensions (i.e., empty AdS$_3$). The contours correspond to the level set surfaces of $v$, which are all homologous to $A$ and, in particular, include $\gamma_A$ as one of its members. This implies that this vector field $v$ can indeed be used as a seed to generate a good solution $v_\lambda$ in a perturbative excited state.}\label{fig:contours}
\end{figure}
Finally, we can comment on how the metric perturbation and the Einstein's equations are encoded in this particular construction. Although the explicit use of the metric is reduced in comparison to the construction via integral curves, the last step in the level sets method introduces the same level of nonlocality. In particular, the way we fixed the scalar field $\Psi$ was by solving the divergenceless condition (\ref{eq:DivCOnPer}). Even though this equation is local, the nontrivial boundary condition (\ref{bc-a}) introduces nonlocalities in the solutions, because the equation effectively transports information from $\gamma_A$ to other regions in the bulk. From the Gauss's law perspective the situation is perhaps easier to understand. In that case, the final answer for $v_\lambda$ exhibits an explicit bilocal dependence on the metric perturbation, through its magnitude (\ref{magnitudeV}). The way we solve for $\Psi$ in this formalism is completely equivalent to that case, because Gauss's law is nothing but the integral form of the divergenceless condition. Hence, even though this construction is particularly efficient for building up perturbative solutions $v_\lambda$, it ultimately contains the same kind of nonlocalities as the construction via integral curves, and the inversion problem to recover the bulk metric and the Einstein's equations is equally difficult in both constructions.
\section{Bit threads and bulk locality\label{SecForms}}
The simple perturbative realizations of bit threads of the previous section highlight the need for a bit thread construction that does not make explicit use of the metric.
Fortunately, we know how to reformulate this formalism in a framework that makes background independence explicit: using the language of differential forms. The equivalence between divergenceless vector fields $v$ and closed $(d-1)-$forms $\bm w$ was already emphasized in \cite{Freedman:2016zud} and was used in \cite{Headrick:2017ucz} to efficiently deal with some subtleties of the max flow problem for null intervals.\footnote{We also point out that a reformulation of the Ryu-Takayanagi prescription in terms of calibrations (closed forms) was worked out independently in \cite{Bakhmatov:2017ihw}.} In this section we will first break down this equivalence in detail, giving explicit formulas that translate various relevant expressions between the two languages. We then argue that the Iyer-Wald formalism provides us with a particular realization of the perturbed thread configuration $\delta\bm w$ that makes explicit use of bulk locality. In particular, we show that the linearized Einstein's equations are explicitly encoded in this construction through the closedness condition, i.e., $d \delta\bm w=0$. We exploit this unique property of the Iyer-Wald construction to tackle the question of metric reconstruction and show that this problem can be phrased in terms of the inversion of a particular differential operator. Finally, we carry out the explicit inversion at linear order and discuss how to generalize our results to higher orders in the perturbation.
\subsection{Bit threads in the language of differential forms \label{ssec:diff}}
In the presence of a metric $g_{ab}$, the explicit map between \emph{flows}, i.e., divergenceless vector fields $v$ and closed $(d-1)-$forms $\bm w$, is given by
\begin{eqnarray}\label{hodgestar}
v^a =g^{a b}(\star \bm w)_{b}\,,
\end{eqnarray}
where $\star \bm w$ represents the Hodge star dual of $\bm w$, defined via
\begin{eqnarray}
(\star \bm w)_{b}\equiv\frac{1}{(d-1)!} \sqrt{g}\, w^{a_1\ldots a_{d-1}}\varepsilon_{a_1\ldots a_{d-1} b}\,.
\end{eqnarray}
In the above formula $\varepsilon_{a_1\ldots a_{d}}$ represents the totally antisymmetric Levi-Civita symbol, with sign convention $ \varepsilon_{i_1 \ldots i_{d-1}z}=1$. Furthermore, the indices of $\star\bm w$ are raised with the Riemannian metric $g_{ab}$, and its determinant is denoted by $g$. At this point we can already notice an important difference between the two objects, namely that, while the notion of a flow requires a background metric, $\bm w$ can be defined independently of $g_{ab}$. This will play a crucial role below, specifically, when we address the problem of metric reconstruction.
Let us carry on with our analysis. The inverse of the map (\ref{hodgestar}) can be stated in terms of the natural volume form $\bm \epsilon$, given by
\begin{eqnarray}
\bm \epsilon=\frac{1}{d!}\, \epsilon_{a_1 \ldots a_{d}} dx^{a_1}\wedge \cdots \wedge dx^{a_{d}}\,,
\end{eqnarray}
where $ \epsilon_{a_1 \ldots a_{d}}$ is proportional to $\varepsilon_{a_1\ldots a_d}$ and normalized such that $ \epsilon_{i_1 \ldots i_{d-1}z}=\sqrt{g}$. In terms of $\bm \epsilon$, the $(d-1)-$form $\bm w$ is given by
\begin{eqnarray}\label{flow-forms-1}
{\bm w}= \frac{1}{(d-1)!}\epsilon_{a_1 \ldots a_{d-1}b} \, v^b \,dx^{a_1}\wedge \cdots \wedge dx^{a_{d-1}}\,,
\end{eqnarray}
or in components,
\begin{eqnarray} \label{flow-forms-2}
{w}_{a_1 \ldots a_{d-1}}= \epsilon_{a_1 \ldots a_{d-1}b} v^b\,.
\end{eqnarray}
Following standard manipulations one can relate the divergence of $v^a$ with the exterior derivative of $\bm w$.\footnote{See e.g. Appendix B.2 of \cite{Wald:1984rg} for an explicit derivation of various identities that we use in this section.} Explicitly, taking the exterior derivative of equation (\ref{flow-forms-1}) leads to
\begin{eqnarray} \label{flows-forms}
d {\bm w}=\(\nabla_a v^a \) {\bm \epsilon}\,.
\end{eqnarray}
This formula shows explicitly the anticipated fact that divergenceless vector fields, or ``\emph{flows}'', are mapped to closed $(d-1)-$forms. The precise relation between the two is given by (\ref{flow-forms-1}).
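The identity (\ref{flows-forms}) can be checked component by component in the simplest setting, flat $\mathbb{R}^3$ with $\sqrt{g}=1$, where it reduces to the familiar vector-calculus statement that the exterior derivative of the 2-form dual to $v$ reproduces its divergence. The sketch below (function names $v_i$ are ours) builds $w_{ab}=\varepsilon_{abc}v^c$ as in (\ref{flow-forms-2}) and verifies the claim symbolically:

```python
import sympy as sp
from itertools import product

x, y, z = sp.symbols('x y z')
X = (x, y, z)
v = [sp.Function(f'v{i}')(x, y, z) for i in range(3)]

# w_{ab} = eps_{abc} v^c  (flat R^3, sqrt(g) = 1), eq. (flow-forms-2)
w = [[sum(sp.LeviCivita(a, b, c) * v[c] for c in range(3))
      for b in range(3)] for a in range(3)]

# (dw)_{abc} = 3 d_[a w_{bc}] = d_a w_{bc} - d_b w_{ac} + d_c w_{ab}
def dw(a, b, c):
    return (sp.diff(w[b][c], X[a]) - sp.diff(w[a][c], X[b])
            + sp.diff(w[a][b], X[c]))

div_v = sum(sp.diff(v[a], X[a]) for a in range(3))

# eq. (flows-forms): dw = (div v) * eps, component by component
checks = [sp.simplify(dw(a, b, c) - div_v * sp.LeviCivita(a, b, c))
          for a, b, c in product(range(3), repeat=3)]
print(all(ch == 0 for ch in checks))  # expect True
```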
Now, it is well known that $k-$forms have well defined integrals over $k-$dimensional hypersurfaces. Therefore it is convenient to write down an explicit formula for the restriction of $\bm w$ on a codimension-one surface $\Gamma$ in terms of intrinsic geometric quantities of that surface. Such a formula can be derived using the fact that the
volume $d-$form $\bm \epsilon$ induces a volume $(d-1)-$form $\tilde{\bm \epsilon}$ on $\Gamma$ via
\begin{eqnarray}\label{inducedeps}
\epsilon_{a_1\ldots a_{d-1}b}=d\, \tilde{\epsilon}_{[a_1\ldots a_{d-1}}n_{b]}\,,
\end{eqnarray}
where $n$ is the unit normal to the surface. Contracting the last index of (\ref{inducedeps}) with $v$ and using (\ref{flow-forms-2}) leads to an explicit expression for the form $\bm w$ evaluated at an arbitrary codimension-one surface $\Gamma$, with local unit normal $n$, in terms of the $(d-1)-$form $\tilde{\bm \epsilon}$
\begin{eqnarray} \label{boundaryw}
\bm w|_{\,\!_\Gamma}=(n_a v^a) \tilde{\bm \epsilon}\,.
\end{eqnarray}
Next, consider Gauss's theorem applied to the divergenceless vector field $v^a$, in a bulk region $N$ with $\partial N= A\cup \(-m\)$, where $m$ is a surface homologous to $A$ ($m\sim A$):
\begin{eqnarray}
\int_{N}\nabla_a v^a \bm{\epsilon}=\int_{\partial N} \(n_a v^a\) \tilde{\bm \epsilon}=\int_{A} \(n_a v^a\) \tilde{\bm \epsilon}-\int_{m} \(n_a v^a\) \tilde{\bm \epsilon}=0\,.
\end{eqnarray}
This leads to the homology condition
\begin{eqnarray}\label{homo-cond}
\int_{A}\(n_a v^a\) \tilde{\bm \epsilon}=\int_{m\sim A}\!\!\!\!\!\!\(n_a v^a\) \tilde{\bm \epsilon}\,.
\end{eqnarray}
This result is equivalently derived in the language of forms, using Stokes' theorem:
\begin{eqnarray}
\int_{N} d {\bm w}=\int_{\partial N} {\bm w}=\int_{ A} {\bm w}-\int_{m} {\bm w}=0\,.
\end{eqnarray}
This leads to
\begin{eqnarray}
\int_{A} {\bm w}=\int_{m\sim A}\!\!\!\!\!\! {\bm w}\,,
\end{eqnarray}
which is equivalent to (\ref{homo-cond}), given (\ref{boundaryw}).
With the ingredients described above, we are now in a position to translate the max flow-min cut theorem to the language of differential forms. First, we have
\begin{eqnarray}\label{eq:boundw}
\int_{m} \bm w = \int_m \(n_a v^a\) \tilde{\bm \epsilon}\leq \int_m \tilde{\bm \epsilon}\,.
\end{eqnarray}
The inequality here comes from the standard norm bound, $|v|\leq1$, which in terms of forms can be rewritten as
\begin{eqnarray}\label{w-norm1}
\frac{1}{(d-1)!}g^{a_1 b_1}\cdots g^{a_{d-1} b_{d-1}} w_{a_1 \ldots a_{d-1}}w_{b_1 \ldots b_{d-1}} \leq 1\,.
\end{eqnarray}
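In low dimensions the equivalence between the bound (\ref{w-norm1}) and the standard norm bound $|v|\leq1$ can be verified directly, since the map (\ref{flow-forms-2}) is an isometry between the two inner products. The sketch below checks this for $d=2$ with a generic two-dimensional Riemannian metric (symbol names are our own):

```python
import sympy as sp

g11, g12, g22 = sp.symbols('g11 g12 g22')
v1, v2 = sp.symbols('v1 v2')

g = sp.Matrix([[g11, g12], [g12, g22]])   # generic 2D Riemannian metric
ginv = g.inv()
sqrtg = sp.sqrt(g.det())
v = sp.Matrix([v1, v2])

# w_a = eps_{ab} v^b with eps_{ab} = sqrt(g) * LeviCivita(a, b)   (d = 2)
eps = sqrtg * sp.Matrix([[0, 1], [-1, 0]])
w = eps * v                                # lower-index components w_a

# <w, w>_g = g^{ab} w_a w_b  should equal  |v|^2 = g_{ab} v^a v^b
norm_w = (w.T * ginv * w)[0, 0]
norm_v = (v.T * g * v)[0, 0]
print(sp.simplify(norm_w - norm_v))  # expect 0
```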
In short, equation (\ref{eq:boundw}) implies that, locally, the form $\bm w$ evaluated on any codimension-one hypersurface is bounded by the natural volume form $\tilde{\bm \epsilon}$ defined on it. The max flow-min cut theorem then implies that
\begin{eqnarray}\label{eq:MFMC}
\underset{\bm w \in \bm W}{\rm max} \int_{A} \bm w =\underset{m \sim A}{\rm min } \int_m \tilde{\bm \epsilon} \,,
\end{eqnarray}
where $\bm W$ is the set of closed forms obeying the bound (\ref{w-norm1}). This means that at the bottle-neck $\gamma_A$, an optimal bit thread form $\bm w^*$ should be equal to the volume form $\tilde{\bm \epsilon}$, i.e.,
\begin{eqnarray}\label{formmA}
{\bm w^*}|_{\gamma_A}=\tilde{\bm \epsilon}|_{\gamma_A}\,.
\end{eqnarray}
Finally, combining with the RT formula for entanglement entropy, (\ref{eq:MFMC}) becomes
\begin{eqnarray}
S_A=\frac{1}{4G_N}\,\underset{\bm w \in \bm W}{\rm max} \,\, \int_{A} \bm w\,,
\end{eqnarray}
which is the differential form version of the max-flow formula (\ref{BitThreadReform}).
There are many situations in which one might want to define the threads in terms of forms $\bm w$ instead of vector fields $v$. In particular, this reformulation will prove extremely useful for the problem at hand, namely, for the study of perturbations around a given background and the corresponding solutions to the flow maximization problem.
\subsubsection{The case of linear perturbations}
Having understood how the bit threads formalism translates into the language of differential forms, it is now time to go back to our original problem.
We will assume that the following data is given: a background metric $g_{ab}$ on a manifold $M$ with boundary $\partial M$, and an optimal flow $v$ that maximizes the flux through a boundary region $A$. Using (\ref{flow-forms-2}), this also implies the knowledge of an optimal closed form $\bm w$. In the following, we will consider the max flux problem in geometries that are perturbatively close to $g_{ab}$, i.e., $g^{\lambda}_{ab}=g_{ab}+\lambda\delta g_{ab}$. We will denote a solution to the problem as $\bm w_\lambda$, where $\bm w_\lambda=\bm w+\lambda\delta \bm w$.
First, notice that the closedness condition implies
\begin{eqnarray}\label{ddelta}
d\( \bm w+\lambda\delta \bm w \)=0 \qquad \to \qquad d\( \delta \bm w \)=0\,.
\end{eqnarray}
We can also use the fact that the minimal surface $\gamma_A$ does not change at first order in the perturbation, so $\gamma_A^\lambda=\gamma_A$. Since this is
a bottle-neck for the flow, both $v$ and $\bm w$ are fixed at its location. In particular, from (\ref{formmA}) it follows that
\begin{eqnarray}\label{deltaboundary}
\(\bm w+\lambda \delta \bm w\)|_{\gamma_A}=\(\tilde{\bm \epsilon }+\lambda \delta \tilde{\bm \epsilon }\)\qquad \to\qquad \delta \bm w |_{\gamma_A}=\delta \tilde{\bm \epsilon }\,.
\end{eqnarray}
Then, given a max flow $\bm w$ for the unperturbed geometry, we are set to find a closed $(d-1)-$form $\delta \bm w$ that satisfies the boundary condition
(\ref{deltaboundary}) and is such that the norm bound constraint (\ref{w-norm1}) holds everywhere in the bulk for the sum $\bm w_\lambda=\bm w +\lambda\delta \bm w$.
For simplicity, let us introduce the following notation for the inner product
\begin{eqnarray}
\langle \bm w , \tilde{\bm w}\rangle_g=\frac{1}{(d-1)!}g^{a_1 b_1}\cdots g^{a_{d-1} b_{d-1}} w_{a_1 \ldots a_{d-1}}\tilde{w}_{b_1 \ldots b_{d-1}}\,,
\end{eqnarray}
and for its first order variation with respect to the metric
\begin{eqnarray}
\langle \bm w , \tilde{\bm w}\rangle_{\delta g}=\frac{1}{(d-1)!}\delta(g^{a_1 b_1}\ldots g^{a_{d-1} b_{d-1}}) w_{a_1 \ldots a_{d-1}}\tilde{w}_{b_1 \ldots b_{d-1}}\,.
\end{eqnarray}
With these notations, the norm bound (\ref{w-norm1}) at first order in $\lambda$ is given by
\begin{eqnarray}\label{P-norm-bound}
\langle \bm w , \bm w\rangle_g+\lambda\[2\langle \bm w , \delta \bm w\rangle_g +\langle \bm w , \bm w\rangle_{\delta g}\] \leq 1\,,
\end{eqnarray}
which looks more difficult to implement than its vector field counterpart. From (\ref{P-norm-bound}) it is clear that the norm bound will typically depend on $\bm w$, so a priori it seems unlikely that a generic $\delta \bm w$ obeying (\ref{ddelta}) and (\ref{deltaboundary}) could satisfy (\ref{P-norm-bound}) independently of $\bm w$. The task becomes even more intractable if one requires $\delta \bm w$ to be given in terms of a linear local functional of $\delta g_{ab}$ and its covariant derivatives $\nabla_{(a_1}\cdots \nabla_{a_n)}\delta g_{ab}$ (see however \cite{Wald:2005nz}).
In the remaining part of this section we will show that, despite the above remarks, the Iyer-Wald formalism provides a concrete realization of such perturbed form.
\subsection{Iyer-Wald formalism and Einstein's equations\label{sec:IWgeneral}}
One of the crucial breakthroughs in the joint program of holography and quantum information is that the first law of entanglement entropy, together with the Ryu-Takayanagi formula, implies the linearized Einstein's equations in the bulk. This was originally proven using Hamiltonian perturbation theory \cite{Lashkari:2013koa}.
In a beautiful paper \cite{Faulkner:2013ica}, it was further shown that it is possible to make this connection more explicit by the proper implementation of Noether's charge formalism in the bulk, also known as the Iyer-Wald formalism. In this new language, the problem of linearized perturbations is cast in terms of differential forms, a more natural and elegant approach that bridges the CFT and bulk quantities in an efficient way. In this section we will briefly review the basic ingredients of \cite{Faulkner:2013ica}, making the connection between entanglement entropy and Einstein's equations manifest. Later in Section \ref{5.2} we will show that the Iyer-Wald formalism provides
us with a canonical choice for the differential form $\delta\bm w$ that solves the max flux problem in a perturbed geometry. As a byproduct, we will show that such a canonical form will automatically encode (locally) the linearized Einstein's equations in the bulk which, in turn, will prove useful for the problem of metric reconstruction.
Let us first state the problem that \cite{Lashkari:2013koa} sought to solve and then discuss the approach of \cite{Faulkner:2013ica}. In general quantum field theories (holographic or not),
for small perturbations over a reference state, $\rho=\rho^{(0)}+ \lambda \delta\rho$, entanglement entropy satisfies the first law
\begin{equation}\label{eq:1stlaw}
\delta S_A=\delta\langle H_A\rangle\,,
\end{equation}
where $\langle \bullet \rangle$ represents the expectation value of the operator in the respective quantum state and $H_A$ is the so-called modular Hamiltonian. By definition, this operator is related to the reduced density matrix $\rho_A=\text{tr}_{A^c}[\rho]$ through
\begin{equation}\label{defmodularH}
\rho_A=\frac{e^{-H_A}}{\text{tr}[e^{-H_A}]}\,.
\end{equation}
However, there are very few cases for which (\ref{defmodularH}) can be explicitly inverted to obtain $H_A$ in closed form. The most famous example is the case where $A$ is half-space, say $x_1>0$, and $\rho$ corresponds to the vacuum state of the QFT. In this case \cite{Bisognano:1975ih,Unruh:1976db}
\begin{equation}\label{modular1}
H_A=2\pi\int_A x_1 \, T_{00}(t,\vec{x}) \, d^{d-1}x\,.
\end{equation}
For generic CFTs, this setup can be conformally mapped to the case where
$A$ is a ball of radius $R$, centered at an arbitrary point $\vec{x}=\vec{x}_0$, in which case \cite{Hislop:1981uh,Casini:2011kv}
\begin{equation}\label{modular2}
H_A=2\pi\int_A \frac{R^2-(\vec{x}-\vec{x}_0)^2}{2R} T_{00}(t,\vec{x})\,d^{d-1}x\,.
\end{equation}
On the other hand, the left-hand side of (\ref{eq:1stlaw}) is computed via the Ryu-Takayanagi formula in the bulk. For ball-shaped regions in pure AdS, or small perturbations around it, the RT surface $\gamma_A$ is a hemisphere of radius $R$ extending into the extra dimension, centered at $\vec{x}=\vec{x}_0$ and $z=0$. The Ryu-Takayanagi formula then adopts the form
\begin{eqnarray}\label{RT-Ball}
\delta S_A=\frac{1}{4G_N}\int_{\gamma_A} \delta \tilde{\bm \epsilon}\,,
\end{eqnarray}
where $\delta\tilde{\bm \epsilon}$ is the variation of the natural volume form on the surface $\gamma_A$. A further ingredient is the relation between the
expectation value of the boundary stress tensor, appearing in the right-hand side of (\ref{eq:1stlaw}), and the fluctuations of the bulk metric $\delta g_{\mu\nu}$. In the Fefferman-Graham gauge, where the latter is given by (\ref{FG-PT}), the former can be identified as the first subleading (normalizable) mode in a near boundary expansion (\ref{T=H}). Taking into account that the boundary field theory is conformal and that the stress tensor is conserved, this identification imposes non-trivial boundary conditions for the metric fluctuations $H_{\mu\nu}$,
\begin{eqnarray}
\begin{cases}
\displaystyle \langle T^\mu_{\,\,\, \mu}(x)\rangle =0 & \displaystyle \qquad \to \qquad H^\mu_{\,\,\, \mu}(x,z=0)=0\,,\\[3ex]
\displaystyle \partial_\mu \langle T^{\mu \nu} (x) \rangle =0 & \displaystyle \qquad \to \qquad \partial_\mu H^{\mu \nu}(x,z=0)=0\,.
\end{cases}
\end{eqnarray}
Using the above, the right-hand side of (\ref{eq:1stlaw}) becomes
\begin{eqnarray}\label{H-mod}
\delta \langle H_A \rangle &=&2\pi \int_{A} d^{d-1}x \frac{R^2-|\vec{x}-\vec{x}_0|^2}{2R}\delta \langle T_{tt}(t_0,\vec{x})\rangle \,, \nonumber \\
& =&\frac{d}{16 G_N R} \int_{A} d^{d-1}x \(R^2-|\vec{x}-\vec{x}_0|^2\) H^i_{\,\,\, i}(t_0,\vec{x},z=0)\,.
\end{eqnarray}
Similarly, evaluating the left-hand side of (\ref{eq:1stlaw}) using (\ref{RT-Ball}) yields
\begin{eqnarray}\label{Ent-H}
\delta S_A&=&\frac{1}{4G_N}\int_{\gamma_A} \delta \tilde{\bm \epsilon} \nonumber \\
&=&\frac{1}{8 G_N R} \int_{\gamma_A} d^{d-1}x\( R^2 \delta^{ij} - (x^i-x^i_0)(x^j-x^j_0)\)H_{i j} (t_0, \vec{x}, z)\,.
\end{eqnarray}
The first law (\ref{eq:1stlaw}), together with (\ref{H-mod}) and (\ref{Ent-H}), then establishes a relation between integral functionals of $H_{\mu \nu}$ on $A$ and on $\gamma_A$. It turns out that this functional dependence in turn implies the Einstein's equations for $H_{\mu\nu}$, linearized around pure AdS. This was shown originally in \cite{Lashkari:2013koa} by a direct comparison between the two sides of (\ref{eq:1stlaw}).
From the gravitational perspective, (\ref{eq:1stlaw}) was then proven to be equivalent to the generalized first law of black hole thermodynamics applied to the bifurcate Killing horizon of Rindler AdS \cite{Faulkner:2013ica}. This was made explicit by a clever implementation of Noether's theorem in the bulk, using a formalism developed two decades earlier by Iyer and Wald \cite{Iyer:1994ys}. In order to apply this formalism to the problem at hand, the key observation was that the RT surface for ball-shaped regions in pure AdS, or perturbations around it, coincides with the bifurcate horizon of the timelike Killing vector
\begin{eqnarray}
\xi=-\frac{2\pi}{R}\(t-t_0\)[z\partial_z+(x^i-x^i_0)\partial_i]+\frac{\pi}{R}[R^2-z^2-(t-t_0)^2-(\vec{x}-\vec{x}_0)^2]\partial_t\,,
\end{eqnarray}
with respect to which notions of energy and entropy can be defined. In fact, a specific conformal transformation (known as the CHM map \cite{Casini:2011kv}) maps the interior of the Rindler wedge associated with $A$ to the exterior of a hyperbolic black hole in AdS, where the Killing vector $\xi$ coincides with the generator of time translations.
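Since the whole construction hinges on $\xi$ being an exact Killing vector of the unperturbed geometry, it is a useful (if elementary) exercise to check the Killing equation symbolically. A minimal sympy sketch for AdS$_3$, i.e., $d=2$ in Poincar\'e coordinates with the AdS radius set to one (the coordinate labels and the componentwise implementation are our own conventions):

```python
import sympy as sp

t, x, z = sp.symbols('t x z', real=True)
R, t0, x0 = sp.symbols('R t0 x0', real=True)
X = (t, x, z)

# Pure AdS3 in Poincare coordinates, L = 1:  ds^2 = (-dt^2 + dx^2 + dz^2)/z^2
g = sp.diag(-1, 1, 1) / z**2

# The vector xi from the text, specialized to d = 2 (one transverse direction)
xi = [sp.pi/R * (R**2 - z**2 - (t - t0)**2 - (x - x0)**2),
      -2*sp.pi/R * (t - t0) * (x - x0),
      -2*sp.pi/R * (t - t0) * z]

# Killing equation via the Lie derivative:
#   (L_xi g)_{ab} = xi^c d_c g_{ab} + g_{cb} d_a xi^c + g_{ac} d_b xi^c
def lie_g(a, b):
    val = sum(xi[c] * sp.diff(g[a, b], X[c]) for c in range(3))
    val += sum(g[c, b] * sp.diff(xi[c], X[a]) for c in range(3))
    val += sum(g[a, c] * sp.diff(xi[c], X[b]) for c in range(3))
    return sp.simplify(val)

print(all(lie_g(a, b) == 0 for a in range(3) for b in range(3)))  # expect True
```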
Following Iyer and Wald \cite{Iyer:1994ys,Iyer:1995kg,Wald:2005nz}, one then investigates Noether's theorem for the Killing symmetry generated by $\xi$. This leads to the definition of a $(d-1)-$form
\begin{eqnarray}\label{chi}
{ \bm \chi}=-\frac{1}{16 \pi G_N} \left[ \delta (\nabla^A \xi^B {\bm \epsilon}_{AB} )+\xi^B {\bm \epsilon}_{AB}(\nabla_C h^{AC}+\nabla^A h^C_{\,\, C})\right]\,,
\end{eqnarray}
where $h_{AB}=z^{d-2}H_{A B}$ and ${\bm \epsilon}_{AB}$ is the volume $(d-1)-$form
\begin{eqnarray}
{\bm \epsilon}_{AB}=\frac{1}{(d-1)!}\epsilon_{A B C_3 \cdots C_{d+1}}dx^{C_3} \wedge\cdots \wedge dx^{C_{d+1}}\,,
\end{eqnarray}
with $\epsilon_{z t i_1 \cdots i_{d-1}}=\sqrt{-G}$. As noted in \cite{Faulkner:2013ica}, the form ${\bm \chi}$ satisfies the following properties
\begin{eqnarray}\label{chi-eqs}
\int_{\gamma_A} {\bm \chi} =\delta S_A\,, \qquad \int_{A} {\bm \chi}=\delta \langle H_A\rangle\,,
\end{eqnarray}
which can be more easily verified by evaluating (\ref{chi}) on a Cauchy hypersurface $\Sigma$, containing both $\gamma_A$ and $A$. For instance, taking $\Sigma$ to be the $t=t_0$ slice, one obtains the $(d-1)-$form
\begin{eqnarray}\label{chiSigma}
{ \bm \chi}|_{\,\!_\Sigma}\equiv\tilde{\bm {\chi}} &=&\frac{z^d}{16 \pi G_N} \Bigg\{{\bm \epsilon}^t_{\,\,z}\left[ \(\frac{2\pi z}{R}+\frac{d}{z} \xi^t+\xi^t\partial^z \) H^i_{\,\,i} \right] +\nonumber \\ \label{chi-Sigma}
&& + {\bm \epsilon}^t_{\,\, i} \left[ \(\frac{2\pi (x^i-x^i_0)}{R}+\xi^t\partial^i \) H^j_{\,\,j} -\(\frac{2\pi (x^j-x^j_0)}{R}+\xi^t\partial^j\) H^i_{\,\,j} \right] \Bigg\}\,,
\end{eqnarray}
from which both equations in (\ref{chi-eqs}) follow trivially. The key point here is that ${ \bm \chi}$ is closed provided that the bulk equations of motion are satisfied. For instance, in the constant-$t$ Cauchy slice used above one finds
\begin{eqnarray}\label{closeness}
d \tilde{\bm {\chi}}=-2\xi^t \delta E^g_{tt} \, {\bm \epsilon}^t\,,
\end{eqnarray}
where $\delta E^g_{tt}$ is the $tt$ component of the linearized Einstein's equations, and ${\bm \epsilon}^t$ is the induced volume form on $\Sigma$. Similarly, other components of the Einstein's equations are obtained by specializing to different Cauchy slices. Thus, provided that the metric perturbations satisfy these equations, the form $\bm \chi$ is closed and Stokes' theorem implies the equality between the left and right equations in (\ref{chi-eqs}). This equivalence also applies in the converse direction, given the nonlocal form of (\ref{chi-eqs}) and the arbitrariness of $R$ and $\vec{x}_0$. This concludes the proof of the statement that they were after, namely, that \emph{for theories where the Ryu-Takayanagi formula computes entanglement entropy, the first law of entanglement entropy in the CFT is equivalent to the Einstein's equations in the bulk, linearized over empty AdS.}
\subsection{Method 3: Canonical bit threads from Iyer-Wald \label{5.2}}
Taking into account the nice properties of $\bm \chi$ defined via the Iyer-Wald formalism, here we propose that this form, specialized to a spacelike hypersurface $\Sigma$ containing both $\gamma_A$ and $A$, can be taken as a \emph{canonical} candidate for the perturbed thread $(d-1)-$form\footnote{See \cite{Oh:2017pkr} for a previous attempt at deriving a bit thread configuration from Iyer-Wald.}
\begin{eqnarray} \label{w-chi}
\tilde{\bm{\chi}}=\frac{1}{4G_N}{\delta \bm w}\,.
\end{eqnarray}
Given the integral properties of this form, it is straightforward to check that the flux through any surface homologous to $A$ yields the change of the entanglement entropy in the perturbed state, $\delta S_A$, as expected. Furthermore, this construction fully exploits the property of bulk locality, in particular, connecting the required closedness of ${\delta \bm w}$ with the linearized Einstein's equations via (\ref{closeness}). We will see below that this property will play a very important role for the problem of bulk reconstruction.
It only remains to be checked whether the norm bound constraint (\ref{w-norm1}) is satisfied at the desired order (\ref{P-norm-bound}) everywhere in the bulk. This condition will depend on the background form $\bm w$ and thus might not hold in general. However, for our purposes it will suffice to find \emph{one} $\bm w$ such that the combination $\bm w_\lambda=\bm w + \lambda \delta \bm w$ respects the bound for any perturbation. We will devote the remaining part of this section to checking that this is indeed possible.
To begin with, we note that the norm constraint in the form (\ref{P-norm-bound}) is slightly more complicated than its equivalent in terms of vectors. Hence, we will first translate the form (\ref{w-chi}) into the language of flows and then check the condition in terms of the latter. For this purpose, we will need an explicit expression relating $\delta v$ and $\delta \bm w$ in the presence of a perturbed metric $g^\lambda_{ab}=g_{ab}+\lambda\delta g_{ab}$. In terms of the Levi-Civita symbol $\varepsilon$, the variation of (\ref{flow-forms-2}) reads
\begin{eqnarray}\label{chi-v}
\lambda \delta {w}_{a_1 \ldots a_{d-1}}&=& \varepsilon_{b a_1 \ldots a_{d-1}} \delta( \sqrt{g_\lambda}\, v_\lambda^b)=\lambda \varepsilon_{b a_1 \ldots a_{d-1}} \sqrt{g} \left(\delta v^{b}+\tfrac{1}{2} g^{cd}\delta g_{cd} v^{b}\right)\,.
\end{eqnarray}
It is convenient to define a new vector field $\delta v_{\Phi}^a$, given by
\begin{eqnarray}\label{deltaVPhi}
\delta v_{\Phi}^a\equiv \frac{\delta\( \sqrt{g_\lambda} \, v_\lambda^a\)}{\sqrt{g}}\bigg|_{\lambda\to1}=\delta v^{a}+\tfrac{1}{2} g^{b c}\delta g_{b c} v^{a}\,,
\end{eqnarray}
which is divergenceless with respect to the unperturbed metric $g_{ab}$, i.e.,
\begin{equation}
\nabla\cdot \delta v_\Phi=\frac{1}{\sqrt{g}}\partial_a(\sqrt{g}\,\delta v_\Phi^a)=0\,,
\end{equation}
and is related to $\delta {\bm w}$ via its Hodge dual (again, with metric $g_{ab}$),
\begin{eqnarray}\label{hodgestar-pert}
\delta v_{\Phi}^a =g^{a b}(\star \delta \bm w)_{b}\,.
\end{eqnarray}
The subindex $\Phi$ here highlights the fact that the flux of this vector field across any bulk surface homologous to $A$, computed with the original metric, equals the change in the entanglement entropy $\delta S_A$. We emphasize that this vector field should \emph{not} be thought of as the variation of the flow $v$, but just as an auxiliary object. However, given a $\delta v_{\Phi}^a$ obtained from (\ref{hodgestar-pert}), we can easily recover the \emph{true} variation of the flow $\delta v^a$ from (\ref{deltaVPhi}). In the Fefferman-Graham gauge (\ref{FG-PT}), the metric perturbation takes the form $\delta g_{ij}=z^{d-2} H_{ij}$ (with $\delta g_{zz}=\delta g_{zi}=0$) and $\delta v^a$ reads
\begin{eqnarray}\label{separation}
\delta v^a=\delta v_{\Phi}^{a}-\tfrac{1}{2}z^d H^i_{\,\,\, i} v^{a}\,,
\end{eqnarray}
where $H^{i}_{\,\,\, i}=\delta^{ij}H_{ij}$. Thus, $\delta v^a$ depends not only on $\delta \bm w$ but also on the background flow $v^a$. In fact, the extra piece in (\ref{separation}) is precisely what is needed such that the divergence of $v_\lambda^a$ taken with the full metric $g^\lambda_{ab}$ vanishes at the desired order,
\begin{eqnarray}
\nabla_\lambda\cdot v_\lambda=\frac{1}{\sqrt{g_\lambda}}\partial_a(\sqrt{g_\lambda} v_\lambda^a)&=&\frac{\sqrt{g}}{\sqrt{g_\lambda}}(\cancel{\nabla\cdot v})+\frac{\lambda}{\sqrt{g_\lambda}}\partial_a[\delta(\sqrt{g_\lambda})v^a+\sqrt{g}\delta v^a]\,,\nonumber\\
&=&\frac{\lambda}{\sqrt{g_\lambda}}\partial_a[\cancel{\sqrt{g}\,\tfrac{1}{2}z^d H^i_{\,\,\, i} v^{a}}+\sqrt{g}(\delta v_{\Phi}^{a}-\cancel{\tfrac{1}{2}z^d H^i_{\,\,\, i} v^{a}})]\,,\nonumber\\
&=&\lambda\frac{\sqrt{g}}{\sqrt{g_\lambda}}(\nabla\cdot \delta v_{\Phi})=0\,.
\end{eqnarray}
Next, we need to make a choice for the background flow $v$ in order to get an explicit $\delta v^a$ and test the norm bound $|v_\lambda|\leq1$. Since the background $v$ should already respect the bound $|v|\leq1$, it is clear that $v_\lambda$ can only exceed this bound by an amount of order $\mathcal{O}(\lambda)$. This can indeed be the case for bulk points that saturate the bound at leading order $|v|=1$ (e.g., at the bottle-neck $\gamma_A$), or in their vicinity. On the other hand, points that are parametrically far from saturating the bound at leading order are safe, in the sense that we can always take $\lambda$ to be arbitrarily small such that $|v_\lambda|=|v|+\mathcal{O}(\lambda)\leq1$.
Given the above discussion, we should ideally pick a background flow $v$ such that its magnitude decays rapidly away from the minimal surface $\gamma_A$. Fortunately, we already know good examples of flows respecting this property, e.g., the so-called ``geodesic flows'' \cite{Agon:2018lwq}, which for spheres in empty AdS take the form (\ref{Vec1-d}). In the following we will take these background solutions and verify that the norm bound is satisfied at the desired order in the perturbation. First, notice that from (\ref{Vec1-d}), and using (\ref{hodgestar-pert})-(\ref{separation}), it immediately follows that
\begin{eqnarray}
\delta v_{\Phi}&=&\frac{z^{d+1}}{4\pi} \Big\{ \left[ \(\frac{2\pi z}{R}+\frac{d}{z} \xi^t+\xi^t\partial^z \) H^i_{\,\,i} \right] \partial_z +\nonumber \\
&& + \left[\(\frac{2\pi (x^i-x^i_0)}{R}+\xi^t\partial^i \) H^j_{\,\,j} -\(\frac{2\pi (x^j-x^j_0)}{R}+\xi^t\partial^j\) H^i_{\,\,j} \right] \partial_i\Big\}\,,
\end{eqnarray}
and
\begin{eqnarray}\label{eq:deltav_IW}
\delta v&=&\frac{z^{d+1}}{4\pi} \Big\{ \left[ \(\frac{2\pi z}{R}+\frac{d}{z} \xi^t+\xi^t\partial^z -\frac{2\pi}{z} v^z\) H^i_{\,\,i} \right] \partial_z -\(\frac{2\pi}{z} v^i \)H^j_{\,\,j} \partial_i \nonumber \\
&& + \left[\(\frac{2\pi (x^i-x^i_0)}{R}+\xi^t\partial^i \) H^j_{\,\,j} -\(\frac{2\pi (x^j-x^j_0)}{R}+\xi^t\partial^j\) H^i_{\,\,j} \right] \partial_i\Big\}\,.
\end{eqnarray}
By construction, $v_\lambda=v+\lambda\delta v$ saturates the norm bound on $\gamma_A$ since the form $\bm w_\lambda=\bm w+\lambda\delta \bm w$ from which it is derived obeys the appropriate boundary condition for a max flow (\ref{formmA}).
We can check this explicitly: at $\gamma_A$ we have that $\xi^t=0$, so\footnote{A brief comment is in order. The expression for $\delta v_a|_{\gamma_A}$ in (\ref{deltav}) does not agree with the expected boundary condition at the bulk bottle-neck (\ref{bcVector}).
The explanation of this mismatch is simple: the difference between the two vector fields is proportional to a vector that is tangential to the minimal surface $\delta v^a_{T} =\(\delta^a_{\,\,\, b} - v^a v_b \) \delta v^b$ so its first order contribution to the norm constraint vanishes, $g_{ab}v^a \delta v^b_{T}=0$, because $\delta v_{T}$ is orthogonal to $v$. Therefore, $\delta v_a|_{\gamma_A}$ in (\ref{deltav}) is equally good as (\ref{bcVector}) to our order of approximation.}
\begin{eqnarray}\label{deltav}
v|_{\gamma_A}=\frac{z}{R}\[(x^i-x_0^i)\partial_i + z\partial_z \]\,,\qquad \delta v|_{\gamma_A}=-\frac{z^{d+1}}{2R}\( x^j-x_0^j \) H^i_{\,\,\, j} \,\, \partial_i\,.
\end{eqnarray}
This leads to the expected saturation at first order in $\lambda$,
\begin{eqnarray}\label{norm-pert}
g_{ab}^{\lambda}v^a_\lambda v^b_\lambda\Big|_{\gamma_A}=g_{ab}v^av^b+\lambda(\cancel{\delta g_{ab}v^av^b+2g_{ab}v^a\delta v^b})\Big|_{\gamma_A}=1\,.
\end{eqnarray}
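This cancellation can be verified symbolically. The sketch below is a minimal check (assuming $d=3$, the flat-sliced AdS metric $g_{ij}=\delta_{ij}/z^2$ on the constant-$t$ slice, and a generic constant symmetric $H_{ij}$ at a point of $\gamma_A$) that $\delta g_{ab}v^av^b+2g_{ab}v^a\delta v^b$ vanishes identically for the fields in (\ref{deltav}):

```python
import sympy as sp

z, R = sp.symbols('z R', positive=True)
dx1, dx2, H11, H12, H22 = sp.symbols('dx1 dx2 H11 H12 H22', real=True)
d = 3  # sample dimension; i, j run over the 2 boundary spatial directions

dx = sp.Matrix([dx1, dx2])               # dx^i = x^i - x0^i
H = sp.Matrix([[H11, H12], [H12, H22]])  # generic symmetric perturbation

# spatial components of the background flow and its variation on gamma_A
v = z/R * dx                             # v^i; v^z drops out since delta v^z = 0
dv = -(z**(d + 1)/(2*R)) * (H * dx)      # delta v^i from eq. (deltav)

# first-order piece of g^lambda_ab v^a v^b, with g_ij = delta_ij / z^2
dg_vv = z**(d - 2) * (v.T * H * v)[0]    # delta g_ij v^i v^j
g_v_dv = 2 * (v.T * dv)[0] / z**2        # 2 g_ij v^i delta v^j

first_order = sp.simplify(dg_vv + g_v_dv)  # vanishes identically
```

Note that the two terms cancel pointwise, without using the defining relation $z^2+|\Delta\vec{x}|^2=R^2$ of $\gamma_A$.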
Away from $\gamma_A$ the norm bound is not guaranteed to hold, but since $|v|$ decays as a power law, it would suffice to study $|v_\lambda|$ in a neighborhood of $\gamma_A$. In order to see this in detail, we note that the level sets of the background flow $v$ (depicted in Figure \ref{fig:contours}) have the form
\begin{eqnarray}\label{surface-ls}
(z+\Delta)^2+|\vec{x}-\vec{x}_0|^2=R^2+\Delta^2\,,\qquad z\geq0\,,
\end{eqnarray}
where $\Delta\in\mathbb{R}$, i.e., spheres with radius $\sqrt{R^2+\Delta^2}$, centered at $(\vec{x}_c=\vec{x}_0,z_c=-\Delta)$.
It can be checked that, in the vicinity of the minimal surface ($\Delta^2 \ll R^2$)
\begin{eqnarray}
|v|\approx 1-\frac{ (d-1)\Delta^2}{2R^2}\,.
\end{eqnarray}
Since $d>1$, we have $|v|<1$ for any $\Delta\neq0$, as expected. Now, we want to check whether the norm bound is still satisfied for $v_\lambda$ at the leading order in the perturbation. More precisely, what we really want is to check that for a fixed $\lambda$, $|v_\lambda|\leq1$ at linear order in $\lambda$ for an arbitrary $\Delta$. A short calculation shows that
\begin{equation}\label{norm-const-A}
|v_\lambda|=1-\frac{ (d-1) \Delta^2}{2R^2}+\lambda\left(\tfrac{1}{2}\delta g_{ab} v^av^b +g_{ab}v^a\delta v^b\right) \overset{?}{\leq} 1\,,
\end{equation}
which after some algebra can be put in the following form:
\begin{eqnarray}\label{norm-constraint-B}
\lambda\frac{\Delta z^{d+1}}{R^2}\((x^i-x_0^i)\partial_i H^j_{\,\, j} -(x^i-x_0^i)\partial^j H_{ij}+ z\partial_z H^j_{\,\, j}+(d-1) H^j_{\,\, j}\) \overset{?}{\leq} \frac{ (d-1) \Delta^2}{R^2 }\,.
\end{eqnarray}
The expression in parentheses on the left-hand side of (\ref{norm-constraint-B}), or (\ref{norm-const-A}), can in fact be non-negative, which implies that the above inequality will not hold for arbitrarily small $\Delta$. Nevertheless, it is interesting to estimate the order of magnitude of the potential violation of the norm bound constraint as it could still be consistent with our order of approximation. From (\ref{norm-constraint-B}) it follows that the norm bound can be violated provided that
\begin{equation}\label{eq:order}
\frac{\Delta}{R}\lesssim\mathcal{O}(\lambda)\,.
\end{equation}
Plugging (\ref{eq:order}) back into (\ref{norm-const-A}), we observe that this would only lead to a violation of order $\mathcal{O}(\lambda^2)$. Since all of our analysis is at linear order in $\lambda$, we can safely ignore this issue. In other words, up to our order of approximation the norm bound is \emph{not} violated and this means that the ``canonical'' thread configuration constructed from the Iyer-Wald formalism satisfies all the defining properties for a max flow. We relegate to Appendix \ref{app:examples} the study of an explicit example of these canonical thread configurations.
Finally, we can comment on how the information about the metric perturbation and Einstein's equations is encoded in this particular thread construction. First, notice that the variation of the flow $\delta v$ constructed from Iyer-Wald fully exploits the property of bulk locality. This is evident, since for this particular construction the divergenceless condition, $\nabla_\lambda \cdot v_\lambda=0$, maps directly to Einstein's equations, which are defined locally in the bulk. Moreover, the fact that the $\delta v$ constructed here (\ref{eq:deltav_IW}) can be written in terms of a linear local functional of $\delta g_{ab}$ and its derivatives presents us with an interesting possibility: we can use the information of this canonical solution to invert the problem and recover the bulk metric from it! We will explore this problem in more detail in the next subsection, and comment on the implications and possible generalizations to the full non-linear regime.
\subsection{Metric reconstruction\label{metric-reconstruction}}
\subsubsection{Explicit reconstruction at linear order\label{explicit-reconstruction}}
Our bit thread construction based on differential forms makes explicit use of the property of bulk locality, hence, it should be possible to invert the problem and recover the metric for a generic linear excitation of the boundary quantum state. In this section we will study this problem in detail. More specifically, we will consider a manifold $M$ with boundary $\partial M$ and a set of forms $\delta \bm w$ that encode the local pattern of entanglement of boundary regions. We will assume the knowledge of the zeroth order ---pure AdS--- metric $g_{\mu\nu}$, which is otherwise fixed by conformal symmetry (i.e. kinematics), and set up the problem of how to reconstruct the metric perturbations $\delta g_{\mu\nu}$ from the above data.
Our starting point is the knowledge of the change in the entanglement structure of the CFT, which in this case is encoded in the set of $(d-1)-$forms $\delta \bm w$. We emphasize that these canonical forms can be \emph{uniquely} specified solely from CFT data. Given a perturbative excited state in the CFT, one can first evaluate the expectation value of the stress-energy tensor $T_{\mu\nu}$ and thus the modular Hamiltonian $H_A$ associated with a spherical region $A$. This information can then be used as a boundary condition for $\delta \bm w$ on $A\subset\partial M$. For instance, specializing to a constant-$t$ slice $\Sigma$, this yields\footnote{Notice that a choice of boundary condition on $A$ is equivalent to picking a specific \emph{entanglement contour} in the dual CFT. We emphasize that this corresponds to focusing on a particular class of microstates with a given local entanglement pattern. Although this boundary condition is in general non-unique, (\ref{bcIyerWald}) is \emph{the} boundary condition singled out by the Iyer-Wald construction.}
\begin{equation}
\delta \bm w|_{A}=\frac{4\pi G_N}{R}\(R^2-|\vec{x}-\vec{x}_0|^2\)\langle T_{00}\rangle\, \bar{{ \bm \epsilon}}\,,
\end{equation}
where $\bar{{ \bm \epsilon}}$ is the natural volume form in the boundary CFT. In fact, we can analytically continue this form to the whole boundary $\partial M$, so that\footnote{More covariantly, on a general Cauchy slice $\Sigma'$ containing $\partial A$, the boundary condition would be
$$
\delta \bm w|_{\partial M}=4G_N \,N^\mu\zeta_A^\nu \langle T_{\mu\nu}\rangle\, \bar{{ \bm \epsilon}}\,,
$$
where $N^\mu$ is a future-pointing unit normal vector, and $\zeta_A$ is the conformal Killing vector that generates $\mathcal{D}[A]$,
$$
\zeta_A=\frac{\pi}{R}\[(R^2-(t-t_0)^2-|\vec{x}-\vec{x}_0|^2)\partial_t+2(t-t_0)(x^i-x_0^i)\partial_i\]\,.
$$
Upon conformally mapping the causal development of the region $\mathcal{D}[A]$ to the hyperbolic cylinder $H^{d-1}\times R_\tau$, it can be shown that $\zeta_A$ coincides with the time-like Killing vector $2\pi R\, \partial_\tau$.}
\begin{equation}\label{bcIyerWald}
\delta \bm w|_{\partial M}=\frac{4\pi G_N}{R}\(R^2-|\vec{x}-\vec{x}_0|^2\)\langle T_{00}\rangle\, \bar{{ \bm \epsilon}}\,.
\end{equation}
One way to see that this is consistent would be to conformally map the interior of the sphere to the exterior. Upon implementing this transformation one finds the same functional form for the modular Hamiltonian but integrated along $\vec{x}\in A^c$, hence, providing a boundary condition also at $A^c=\partial M\setminus A$.
With the above boundary condition, the full $(d-1)-$form in the interior of the manifold $M$ is then uniquely determined if we assume \emph{bulk locality} \cite{Wald:2005nz}. To see this, notice that the Iyer-Wald construction yields a form $\delta \bm w$ such that $d\delta \bm w=0$ on-shell, which is a local condition. If we want to maintain this condition, then, the only ambiguity in $\delta \bm w$ would be the addition of a term $\delta \bm w \to \delta \bm w +d \bm C$ where $\bm C$ is a $(d-2)-$form such that $d \bm C$ vanishes on $\partial M$. This is of course a gauge redundancy, which we fix by working in Fefferman-Graham coordinates. Therefore, the boundary condition (\ref{bcIyerWald}) together with the condition of bulk locality are enough to uniquely specify the full $(d-1)-$form on $M$.
Before proceeding with the specifics of this analysis, let us first quickly review how the problem of metric reconstruction is normally addressed. In the usual HRT story, given a background metric $g_{\mu\nu}$, the change in the entanglement entropy of a region $A$ at first order in the perturbation $\delta g_{\mu\nu}$ is given by\footnote{The analysis in terms of extremal surfaces can be done for general states, not necessarily perturbative. Here we are discussing only this simpler case to highlight an important difference with our approach.}
\begin{eqnarray}
\delta S_A=\int_{\gamma_A} \delta \sqrt{h} \,d^{d-2}x\,,\qquad \delta \sqrt{h} =\frac{1}{2}\sqrt{h} h^{ij} \delta h_{ij}\,.
\end{eqnarray}
This means that $\delta S_A$ encodes information about the first order change in the trace of the induced metric $h_{ij}$ over the extremal surface $\gamma_A$. On the other hand, the induced metric on $\gamma_A$ depends on the bulk metric as well as the explicit embedding of $\gamma_A$ in the geometry. Therefore, by cleverly considering different boundary regions with extremal surfaces intersecting at a bulk point, one could access the various components of the bulk metric at the given bulk point and hence derive an inversion formula for the metric perturbation $\delta g_{\mu\nu}$.
As is evident from the previous paragraph, the problem of metric reconstruction by extremal surfaces heavily relies on the possibility of foliating the full manifold $M$ with boundary-anchored extremal surfaces. In particular, we would necessarily need to start from a dense family of surfaces that pass through all (reachable) bulk points multiple times. While we can do this in the language of bit threads, i.e., start from a \emph{dense set} of thread configurations, the fact that a single solution to the max flow problem already probes the full bulk geometry presents us with an interesting possibility: we can start from a \emph{finite set} of thread configurations, containing one, or possibly only a few solutions of the max flow problem. The minimal number of thread configurations needed in such a set will generally depend on symmetry considerations as well as the number of dimensions, as will be discussed below. For the time being, let us summarize the two approaches to metric reconstruction that we can explore. For simplicity, we will frame the discussion by focusing on a constant-$t$ slice $\Sigma$, so that we will aim to recover the spatial components of the metric $\delta g_{ij}$. The $\delta g_{tt}$ and $\delta g_{ti}$ components can be recovered in a similar way, by choosing appropriate boosted slices $\Sigma'$, as we will explain at the end of the section.
The two methods that we will explore are:
\vspace{-3mm}
\begin{itemize}
\item \emph{Reconstruction from a dense set of thread configurations.} Here we will assume knowledge of $\delta \bm w$ for \emph{all} spheres in the CFT, with arbitrary radius $R$ and center point $\vec{x}_0$.\vspace{-3mm}
\item \emph{Reconstruction from a minimal set of thread configurations.} Here we will assume knowledge of $\delta \bm w$ for \emph{a few} spheres with radius $R^{(i)}$, and center point $\vec{x}_0^{(i)}$, with $i=1,\ldots,n$. The precise value of $n$ will be fixed so that the inversion problem is well defined.
\end{itemize}
\vspace{-3mm}
We will now discuss these two methods in detail.
\subsubsection*{Reconstruction from a dense set of thread configurations}
Given the infinite set of $(d-1)-$forms $\delta \bm w$ encoding the \emph{canonical} entanglement pattern of spheres of arbitrary radius $R$ and center point at $\vec{x}_0$, on a constant-$t$ slice $\Sigma$, our goal is to extract the components of the perturbed metric $\delta g_{ij}$. We recall that, in the presence of a metric, the set of $(d-1)-$forms $\delta \bm w$
defines a set of covector fields $\delta w_a(R,\vec{x}_0,z,\vec{x})$, instead of just the numbers $\delta S(R,x_0)$, so it is clear that in this framework we have infinitely more information than in the standard setup using extremal surfaces. Hence, we can expect to be able to reconstruct the metric in a more straightforward way.
If the full metric is given in the Fefferman-Graham gauge (\ref{FG-PT}),
the components of the metric perturbation take the form $\delta g_{ij}=z^{d-2} H_{ij}$ (with $\delta g_{zz}=\delta g_{zi}=0$).
In this gauge, the components of the covectors $\delta w_a$ can be related to the metric perturbations as follows:
\begin{eqnarray}\label{wz}
\delta w_z(R,\vec{x}_0,z,\vec{x})& =&\frac{z}{4\pi}\(\frac{2\pi z}{R}+\frac{d}{z} \xi^t+\xi^t\partial^z \) H^i_{\,\,i}\,, \\ \label{wi}
\delta w_i(R,\vec{x}_0,z,\vec{x}) &= &\frac{z}{4\pi}\left[ \(\frac{2\pi (x^i-x^i_0)}{R}+\xi^t\partial^i \) H^j_{\,\,j} -\(\frac{2\pi (x^j-x^j_0)}{R}+\xi^t\partial^j\) H^i_{\,\,j} \right],
\end{eqnarray}
where
\begin{equation}
\xi^t=\frac{\pi}{R}\(R^2-z^2-|\vec{x}-\vec{x}_0|^2\)\,.
\end{equation}
These equations can be easily inverted using the dependence on $R$ and $\vec{x}_0$ of (\ref{wz}) and (\ref{wi}). In fact, there are infinitely many ways to invert these equations.
The simplest way is to get rid of the derivative terms, so that we obtain a system of algebraic equations. However, we have several ways to accomplish this. Below we will discuss two different options.
The first option is by evaluating both sides of (\ref{wz}) and (\ref{wi}) on the set of parameters $(R,\vec{x}_0)$ that satisfy $\xi^t(R,\vec{x}_0)=0$, i.e.,
\begin{eqnarray}\label{xit0}
R^2=z^2+|\vec{x}-\vec{x}_0|^2\,.
\end{eqnarray}
Notice that the requirement given by (\ref{xit0}) means that our reconstruction is limited to the points that are accessible via extremal surfaces. This means that this option is, in a sense, analogous to the metric reconstruction via the HRT prescription and does not exploit the full reach of the bit threads. We will continue for now and then explain an alternative that does not impose this limitation. Let us first analyze (\ref{wz}). From this equation, we can immediately find an algebraic expression that gives the perturbed trace $H^i_{\,\,i}(z,\vec{x})$,
\begin{eqnarray}\label{inv-Hii}
H^i_{\,\,i}(z,\vec{x})=\frac{2 R}{z^2}\delta w_z\(R,\vec{x}_0; z,\vec{x}\)\Bigg|_{\xi^t(R,\vec{x}_0)=0}\,.
\end{eqnarray}
In fact, we can extract the trace $H^i_{\,\,i}$ at a point $(z,\vec{x})$ from (\ref{inv-Hii}) using any single covector with parameters $(R,\vec{x}_0)$ such that (\ref{xit0}) is satisfied. This is an example of the non-uniqueness of the inversion formulas. Notice that equation (\ref{inv-Hii}) provides the solution to the full inversion problem for $d=2$, in which case (\ref{wi}) is identically zero. In fact, for $d=2$ the only component of the metric perturbation that we need to solve for corresponds to $H_{xx}(z,x)$, which equals the trace (\ref{inv-Hii}). In higher dimensions, we can use (\ref{wi}) in addition to (\ref{wz}), and proceed in a similar way to extract the information of the individual components of the perturbed metric $H^i_{\,\,j}(z,\vec{x})$. In order to do that, first replace the solution for the trace (\ref{inv-Hii}) in equation (\ref{wi}), so that the latter equation becomes:
\begin{eqnarray}\label{wixit0}
\delta w_i \Big|_{\xi^t=0}&= &\frac{(x^i-x^i_0)}{z} \delta w_z \Big|_{\xi^t=0} -\frac{z}{2 R}\(x^j-x^j_0\)H^i_{\,\,j} \,.
\end{eqnarray}
Further, for a given $j$ and within the set of allowed parameters, we can take $x_0^j\neq x^j$ and $x_0^k=x^k$ for $k\neq j$. This leads to the following solutions for the diagonal and non-diagonal components of the perturbation:
\begin{eqnarray}
\(\textrm{no sum over}\,\, j\)\quad H^j_{\,\, j}(z,\vec{x})&=&-\frac{2R}{z(x^j-x_0^j)}\, \delta w_j \Big|_{\xi^t=0}+\frac{2R}{z^2}\delta w_z \Big|_{\xi^t=0}\,, \\
H^i_{\,\, j}(z,\vec{x})&=&-\frac{2R}{z(x^j-x_0^j)}\, \delta w_i \Big|_{\xi^t=0}\,,
\end{eqnarray}
which provides the solution to the full inversion problem for $d> 2$.
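These inversion formulas are straightforward to test numerically. The sketch below (assuming $d=3$ and a hypothetical perturbation $H_{ij}$ at one bulk point) builds $\delta w_z$ and $\delta w_i$ from (\ref{wz})-(\ref{wi}) with $\xi^t=0$ imposed, displaces the center along $y$ only, and recovers the $H_{yy}$ and $H_{xy}$ components:

```python
import math

# hypothetical d = 3 perturbation at one bulk point (indices run over x, y)
H = [[0.7, -0.3], [-0.3, 1.1]]
z = 1.0
trH = H[0][0] + H[1][1]

def flow_data(dx):
    """delta w_z, delta w_i from (wz)-(wi) with xi^t = 0, i.e. R^2 = z^2 + |dx|^2."""
    R = math.sqrt(z**2 + dx[0]**2 + dx[1]**2)
    wz = z**2 * trH / (2*R)
    wi = [z/(2*R) * (dx[i]*trH - sum(H[i][j]*dx[j] for j in range(2)))
          for i in range(2)]
    return R, wz, wi

# displace the center along y only: x0^y != x^y while x0^x = x^x  (here j = y)
dy = 1.0
R, wz, wi = flow_data([0.0, dy])
H_yy = -2*R/(z*dy) * wi[1] + 2*R/z**2 * wz   # diagonal inversion formula
H_xy = -2*R/(z*dy) * wi[0]                   # off-diagonal inversion formula
# H_yy -> 1.1 and H_xy -> -0.3, the values seeded above
```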
As mentioned above, the method outlined in the previous paragraph is limited to the points that are accessible via extremal surfaces.
An alternative way of inverting the system of equations in a less restrictive setting is the following. First notice the relations
\begin{eqnarray}
\frac{\partial \xi^t }{ \partial x_0^k}=\frac{2\pi (x^k-x_0^k)}{R}\,, \qquad \frac{\partial \xi^t }{ \partial x_0^k}\bigg|_{x_0^k=x^k}\!\!\!=0\,.
\end{eqnarray}
Using the above, one finds that
\begin{eqnarray}\label{wiz-derivatives}
\frac{\partial \delta w_z }{ \partial x_0^k}\bigg|_{x_0^k=x^k}\!\!\!\!\!\!&=& 0\,,\nonumber \\
\frac{\partial \delta w_i }{ \partial x_0^k}\bigg|_{x_0^k=x^k}\!\!\!\!\!\!&=&-\frac{z}{2R}\delta^i_k\, H^{j}_{\,\, j}+\frac{z}{2R}\delta^j_k\, H^{i}_{\,\, j}\,,
\end{eqnarray}
from which one can easily invert the diagonal and off-diagonal components of the perturbation for $d>2$.\footnote{Notice that for $d=2$ equation (\ref{wi}) is identically zero, so equations (\ref{wiz-derivatives}) are trivial.} More explicitly, we obtain that
\begin{eqnarray} \label{rep-indices}
\(\textrm{no sum over}\,\, j\)\quad H^{j}_{\,\, j}(z,x)&=&\frac{2R}{z}\(\frac{\partial \delta w_j }{ \partial x_0^j}\bigg|_{x_0^j=x^j}\!\!\!-\frac{1}{d-2}\sum_{i} \frac{\partial \delta w_i }{ \partial x_0^i}\bigg|_{x_0^i=x^i}\! \)\,, \\ \label{other-Hij}
H^{i}_{\,\, j}(z,x)&=&\frac{2R}{z}\frac{\partial \delta w_i }{ \partial x_0^j}\bigg|_{x_0^j=x^j}\!\!\! ,
\end{eqnarray}
which gives the solution to the full inversion problem for $d> 2$.
On the other hand, for $d=2$ we can simply use equation (\ref{inv-Hii}) or, alternatively, the second reconstruction method that we explain below. Notice that summing over $j$ in equation (\ref{rep-indices}) leads to a different expression for the trace of the perturbation than (\ref{inv-Hii}). This represents another example of the non-uniqueness in the inversion equations.
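The derivative relations (\ref{wiz-derivatives}) can also be checked symbolically. The following sketch (assuming $d=3$, a generic symmetric $H_{ij}(x,y)$, and flat-index raising so that $\partial^i=\partial_i$) differentiates (\ref{wi}) with respect to $x_0^k$ and compares with the right-hand side of (\ref{wiz-derivatives}):

```python
import sympy as sp

x, y, z, R = sp.symbols('x y z R', positive=True)
x0, y0 = sp.symbols('x0 y0', real=True)

# generic symmetric H_ij(x, y) at fixed z (only x0-derivatives are taken below)
H11, H12, H22 = [sp.Function(n)(x, y) for n in ('H11', 'H12', 'H22')]
H = sp.Matrix([[H11, H12], [H12, H22]])
trH = H11 + H22
xi = sp.pi/R * (R**2 - z**2 - (x - x0)**2 - (y - y0)**2)  # xi^t

var, x0v, dxv = [x, y], [x0, y0], [x - x0, y - y0]

def dw(i):
    """delta w_i from (wi), with flat-index raising so that d^i = d_i."""
    out = (2*sp.pi*dxv[i]/R)*trH + xi*sp.diff(trH, var[i])
    for j in range(2):
        out -= (2*sp.pi*dxv[j]/R)*H[i, j] + xi*sp.diff(H[i, j], var[j])
    return z/(4*sp.pi) * out

residuals = []
for i in range(2):
    for k in range(2):
        lhs = sp.diff(dw(i), x0v[k]).subs(x0v[k], var[k])
        rhs = -(z/(2*R))*(1 if i == k else 0)*trH + (z/(2*R))*H[i, k]
        residuals.append(sp.simplify(lhs - rhs))
# all residuals vanish, confirming (wiz-derivatives)
```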
\subsubsection*{Reconstruction from a minimal set of thread configurations}
One may wonder whether the extended nature of the entanglement pattern information present in a single form $\delta {\bm w}(R,\vec{x}_0;z,\vec{x})$ for fixed $(R,\vec{x}_0)$, or a finite number of forms $n$, could suffice to recover the components of the metric perturbations $\delta g_{ij}$. Naively, the number of unknown variables that we need to solve for is $d(d-1)/2$, which is the number of symmetric components of $H_{ij}$ ($\{i,j\}$ run over $d-1$ values). On the other hand, the number of equations that we would have at our disposal is given by $nd$, i.e., $d$ components of a single covector $\delta w_a$, times the total number of forms $n\in\mathbb{N}$. Assuming they are all linearly independent of each other, we have that for a fixed $d$, the minimum number of forms $\bar{n}$ that we would need is
\begin{equation}\label{minimaln}
\bar{n}=\left\lceil\frac{d-1}{2}\right\rceil\,,
\end{equation}
where $\lceil\bullet\rceil$ represents the ceiling function (i.e., the smallest integer greater than or equal to its argument). For $d=2$ and $d=3$ ($3d$ and $4d$ gravity respectively) we obtain $\bar{n}=1$, which means that we can hope to recover the full metric from a single form. In higher dimensions the problem would be underdetermined so we would need to increase the number of forms, albeit to a finite number set by $d$. In the following, we will focus on the cases for which $\bar{n}=1$ but we will come back to the higher dimensional cases at the end of the section.
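The counting behind (\ref{minimaln}) is easy to reproduce: a brute-force search for the smallest $n$ such that $nd$ equations cover the $d(d-1)/2$ unknowns matches the ceiling formula (assuming, as in the text, that all equations are linearly independent):

```python
import math

def min_forms(d):
    """Smallest n such that n*d equations cover the d(d-1)/2 components of H_ij,
    assuming all equations are linearly independent."""
    unknowns = d*(d - 1)//2
    n = 1
    while n*d < unknowns:
        n += 1
    return n

# brute force agrees with the closed form ceil((d-1)/2) over a range of dimensions
agrees = all(min_forms(d) == math.ceil((d - 1)/2) for d in range(2, 12))
```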
We start with equation (\ref{wz}) and notice that its right-hand side can be rewritten as
\begin{eqnarray}\label{eq:inversion}
\delta w_z(R,\vec{x}_0,z,\vec{x})=\frac{(\xi^t)^2}{4\pi z^{d-1}}\partial_z\left(\frac{z^d H^i_{\,\, i}}{\xi^t}\right)\,.
\end{eqnarray}
This equation can, in principle, be easily inverted for the trace $H^i_{\,\, i}$. However, we must proceed with some care because $\xi^t$ can vanish at multiple points in the bulk. We refer the reader to appendix \ref{app:inversion} for a detailed analysis of this equation, and we will simply state the final result here.
The analysis is naturally split for bulk points $\vec{x}\in A^c$ and $\vec{x}\in A$ ($\forall z$). In the former case, $\xi^t$ never vanishes in the bulk, so the analysis is particularly simple. In this case we obtain
\begin{eqnarray} \label{Hii-4p}
H^i_{\,\, i} (z,\vec{x})&=&4R(z_*^2-z^2) \, \int_{0}^1 d\lambda \, \frac{ \lambda^{d-1}\delta w_z(\lambda z,\vec{x})}{[z_*^2-(\lambda z)^2]^2}\,,\qquad z_*^2\equiv R^2-|\vec{x}-\vec{x}_0|^2\,.
\end{eqnarray}
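As a numerical sanity check of (\ref{Hii-4p}), one can seed a hypothetical trace profile, compute the corresponding $\delta w_z$ from (\ref{eq:inversion}) in closed form, and confirm that the integral returns the profile. The sketch below assumes $d=3$, a point with $\vec{x}\in A^c$ (so $z_*^2<0$ and the integrand is regular), and the profile $H^i_{\,\,i}=z^2$, for which (\ref{eq:inversion}) gives $\delta w_z=z^2(5z_*^2-3z^2)/(4R)$:

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson rule; adequate for this smooth integrand."""
    h = (b - a)/n
    return h/3 * sum((1 if k in (0, n) else 4 if k % 2 else 2)*f(a + k*h)
                     for k in range(n + 1))

d, R, rho, z = 3, 1.0, 1.5, 0.7
s = R**2 - rho**2                        # z_*^2, negative since rho > R (x in A^c)
# hypothetical trace profile H^i_i(z) = z^2; (eq:inversion) then gives, in closed form,
dwz = lambda zz: zz**2*(5*s - 3*zz**2)/(4*R)
H_rec = 4*R*(s - z**2) * simpson(lambda lam: lam**(d - 1)*dwz(lam*z)/(s - (lam*z)**2)**2,
                                 0.0, 1.0)
# H_rec reproduces the seeded profile z^2 = 0.49
```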
Equation (\ref{Hii-4p}) is also valid for $\vec{x}\in A$ provided that $z^2< z_{*}^2$. For $z\geq z_*$ the integrand in (\ref{Hii-4p}) has a double pole at $\lambda_*=z_*/z$ (with $\lambda_*\in[0,1]$). This divergence can be removed by an appropriate regularization, e.g., using the principal value prescription. However, given the simplicity of our problem we can find the answer directly from (\ref{eq:inversion}) by taking one of the end points of the integral to be arbitrarily close to the zero locus of $\xi^t$. After a series of careful manipulations explained in appendix \ref{app:inversion} we arrive at a formula that is valid in the region $\vec{x}\in A$ and for all $z$:
\begin{eqnarray}\label{Hii-9p}
\!\!\!\!\!\!\!\!H^{i}_{\,\,i}(z,\vec{x})=\frac{2R z_{*}^{d-4}\delta w_z(z_{*},\vec{x})}{z^{d-2}}+4R(z_{*}^2-z^2)\! \int_{0}^1\! d\lambda\frac{\lambda [\lambda^{d-2} \delta w_z(\lambda z,\vec{x})-\lambda_{*}^{d-2} \delta w_z(z_{*},\vec{x})]}{[z_{*}^2-(\lambda z)^2]^2}.
\end{eqnarray}
This new integral seems to still have a single pole at $\lambda=\lambda_{*}$. However, a close inspection shows that this term is proportional to
\begin{equation}
(d-2) \delta w_z(z_*,\vec{x})+z_* \partial_z\delta w_z(z_*,\vec{x})=0\,,
\end{equation}
which can be checked from (\ref{wz}). Therefore, the integral is manifestly finite. We note that, indeed, the naive principal value regularization of (\ref{Hii-4p}) results in (\ref{Hii-9p}), and likewise there are many ways to derive (\ref{Hii-9p}) from (\ref{Hii-4p}). Perhaps the simplest way to do it is by slightly changing the integration contour:
\begin{eqnarray} \label{Hii-4b}
H^i_{\,\, i} (z,\vec{x})=4R(z_*^2-z^2) \int_{i\epsilon}^{1+i\epsilon}\!\!\! d\lambda \, \frac{ \lambda^{d-1}\delta w_z(\lambda z,\vec{x})}{[z_*^2-(\lambda z)^2]^2}\,,
\end{eqnarray}
where $\epsilon\in\mathbb{R}$, and then letting $\epsilon\to0$. It can be shown that this prescription is consistent with both (\ref{Hii-4p}) and (\ref{Hii-9p}) so it is valid everywhere in the bulk.
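The pole-cancellation identity $(d-2)\,\delta w_z(z_*,\vec{x})+z_*\partial_z\delta w_z(z_*,\vec{x})=0$ quoted above can be verified symbolically from (\ref{wz}). The following minimal sketch works at fixed $\vec{x}$ (with $\rho=|\vec{x}-\vec{x}_0|$), for a generic trace profile $h(z)$ and symbolic $d$, assuming flat-index raising so that $\partial^z=\partial_z$:

```python
import sympy as sp

z, R, rho, d = sp.symbols('z R rho d', positive=True)
h = sp.Function('h')(z)                      # generic trace profile H^i_i at fixed x
xi = sp.pi/R * (R**2 - z**2 - rho**2)        # xi^t, with rho = |x - x0| held fixed
zs = sp.sqrt(R**2 - rho**2)                  # z_*, the bulk zero of xi^t (x in A)

# delta w_z from (wz); indices are raised with the flat metric, so d^z = d_z
dwz = z/(4*sp.pi) * (2*sp.pi*z/R*h + d/z*xi*h + xi*sp.diff(h, z))
identity = sp.simplify(sp.expand((d - 2)*dwz + z*sp.diff(dwz, z)).subs(z, zs))
# identity == 0 for any profile h, so the residual single pole cancels
```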
Notice that in the above formulas we have not specified the number of dimensions $d$. This means that given the knowledge of a single $\delta \bm w(z,\vec{x})$ one can always find an inversion formula for the trace of the perturbation, given explicitly by (\ref{Hii-4p})-(\ref{Hii-9p}) or its equivalent (\ref{Hii-4b}). We will now recover the other components of the metric for the cases $d=2$ and $d=3$.
\begin{itemize}
\item \emph{The $d=2$ case:}
\end{itemize}
The inversion problem in $d=2$ is exceptionally simple because in this case there is only one metric component to solve for. Therefore, the trace of the perturbation provides the full solution to the problem, i.e., $H^i_{\,\, i}(z,x)=H_{xx}(z,x)$. Nevertheless, we will analyze this case in some detail and check that our formulas for the trace are consistent with the expected results.
First, notice that in this case there are further simplifications that considerably reduce our problem: equation (\ref{wi}) vanishes exactly so $\delta w_{x}=0$ everywhere in the bulk. The closedness relation $d \delta {\bm w}=0$ then implies $\partial_z \delta w_z=0$, so $\delta w_z(z,x)=\delta w_z(0,x)$. Using this fact, and applying the formula (\ref{Hii-4b}) which is valid everywhere in the bulk, we obtain
\begin{eqnarray}\label{Hxx2d}
\!\!\!\!\!\!\!\!\!\!\!H_{x x} (z,x)=4R \delta w_z(0,x)(z_*^2-z^2) \!\left.\int_{i \epsilon}^{1+i \epsilon}\!\!\! d\lambda \frac{ \lambda}{[z_*^2-(\lambda z)^2]^2}\right|_{\epsilon\to0}\!\!\!\!=\frac{2R \delta w_z(0 ,x)}{z_*^2}=H_{x x} (0,x).
\end{eqnarray}
This is precisely what is expected from the analysis of Section \ref{sec:2dpert}, specifically from equation (\ref{deltag2}).
For $x\in A^c$, (\ref{Hii-4p}) coincides explicitly with (\ref{Hii-4b}) so the integral above is the same.
For $x\in A$ we can alternatively use (\ref{Hii-9p}). Notice that for $d=2$ the integrand in (\ref{Hii-9p}) is identically zero since $\delta w_z(z,\vec{x})$ is constant
so, in this case, the full result is given by the first term in (\ref{Hii-9p}). Indeed, this term coincides with (\ref{Hxx2d}), as expected.
Again, since for $d=2$ this is the only component of the perturbed metric that we need to solve for, then, equation (\ref{Hxx2d}) completes the inversion problem.
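The elementary integral in (\ref{Hxx2d}) can be confirmed symbolically. In the sketch below, $w_0$ stands for the constant $\delta w_z(0,x)$, and working with the antiderivative directly sidesteps any convergence assumptions on the parameters:

```python
import sympy as sp

lam, z, zstar, R, w0 = sp.symbols('lambda z zstar R w0', positive=True)

# d = 2 integrand of (Hxx2d) with constant delta w_z = w0
F = sp.integrate(lam/(zstar**2 - (lam*z)**2)**2, lam)  # antiderivative in lambda
I = F.subs(lam, 1) - F.subs(lam, 0)
H_xx = sp.simplify(4*R*w0*(zstar**2 - z**2)*I)
residual = sp.simplify(H_xx - 2*R*w0/zstar**2)   # vanishes: H_xx = 2 R w0 / z_*^2
```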
\begin{itemize}
\item \emph{The $d=3$ case:}
\end{itemize}
To solve the inversion problem in $d=3$ we need equations (\ref{wi}), in addition to (\ref{wz}). These equations involve derivatives with respect to the spatial coordinates $\partial_{i}$ so, as a system of first order differential equations, we would need information about $H_{ij}$ at some fixed $x_i$ in order to have a well defined boundary value problem. We will deal with the choice of such boundary conditions below, but for now, let us explain how to set up the inversion problem.
First, since equation (\ref{Hii-4b}) already provides the solution for the trace part of the metric, we can replace this back into (\ref{wi}) and solve for the remaining metric components. We find it convenient to write the fluctuations as
\begin{equation}\label{Hij-3d}
H_{ij}=\left(
\begin{array}{cc}
\frac{h}2+\phi & \chi \\
\chi & \frac{h}2-\phi \\
\end{array}
\right)\,,
\end{equation}
so that $h=H^i_{\,\,i}$ gives the trace while $\phi$ and $\chi$ are the two fields that we still need to solve for. Defining $x_1\equiv x$ and $x_2\equiv y$,
equations (\ref{wi}) then take the form:
\begin{eqnarray}
\delta w_x& = &\frac{z}{4\pi}\left[ \(\frac{2\pi (x-x_0)}{R}+\xi^t\partial_x \) \(\frac{h}2-\phi\) -\(\frac{2\pi (y-y_0)}{R}+\xi^t\partial_y\) \chi \right], \label{Hij-3d-2} \\
\delta w_y&= &\frac{z}{4\pi}\left[ \(\frac{2\pi (y-y_0)}{R}+\xi^t\partial_y \) \(\frac{h}2+\phi\) -\(\frac{2\pi (x-x_0)}{R}+\xi^t\partial_x\) \chi \right].\label{Hij-3d-2b}
\end{eqnarray}
Thus, we have a system of two coupled first-order partial differential equations. It is possible to decouple these equations by taking one further derivative and appropriately combining the two equations. To do so, we first rewrite (\ref{Hij-3d-2})-(\ref{Hij-3d-2b}) as:
\begin{eqnarray}\label{constrains-phi-xi}
\partial_x\( \frac{\phi}{\xi^t}\) + \partial_y\( \frac{\chi}{\xi^t}\)&=&\delta\Omega_x\,,\qquad \delta\Omega_x\equiv \partial_x\(\frac{h}{2\xi^t}\)-\frac{4\pi}{z(\xi^t)^2}\delta w_x\,,\label{phichi}\\
-\partial_y\( \frac{\phi}{\xi^t}\)+ \partial_x\( \frac{\chi}{\xi^t}\)&=&\delta\Omega_y\,,\qquad\delta\Omega_y \equiv \partial_y\(\frac{h}{2\xi^t}\)-\frac{4\pi}{z(\xi^t)^2}\delta w_y\,.\label{phichib}
\end{eqnarray}
Equations (\ref{phichi}) and (\ref{phichib}) can now be combined as a pair of Poisson's equations:
\begin{eqnarray}
\(\partial_x^2+\partial_y^2\)\( \frac{\phi}{\xi^t}\)&=&\rho_\phi\,,\qquad \rho_\phi\equiv \partial_x\delta\Omega_x-\partial_y\delta\Omega_y\,,\\
\(\partial_x^2+\partial_y^2\)\( \frac{\chi}{\xi^t}\)&=&\rho_\chi\,,\qquad \rho_\chi\equiv\partial_x\delta\Omega_y+\partial_y\delta\Omega_x\,,
\end{eqnarray}
where $\rho_\phi$ and $\rho_\chi$ act as sources for $\phi$ and $\chi$, respectively. These equations can be solved using standard Green's function methods. Assuming knowledge of the solutions on a closed surface $\partial \mathcal{V}$, one can formally write the solution in the interior of the surface $\mathcal{V}$ as follows:
\begin{eqnarray}\label{solution1}
\frac{\phi(\vec{x})}{\xi^t(\vec{x})}&=&\!\int_{\mathcal{V'}}\!\! G(\vec{x},\vec{x}')\rho_\phi(\vec{x}')dV'+\!\int_{\partial \mathcal{V'}}\[ \frac{\phi(x')}{\xi^t(x')}\nabla^{'} G(\vec{x},\vec{x}')-G(\vec{x},\vec{x}')\nabla^{'} \frac{\phi(x')}{\xi^t(x')}\]\cdot dS'\,, \\
\frac{\chi(\vec{x})}{\xi^t(\vec{x})}&=&\!\int_{\mathcal{V'}}\!\!G(\vec{x},\vec{x}')\rho_\chi(\vec{x}')dV'+\!\int_{\partial \mathcal{V'}}\[ \frac{\chi(x')}{\xi^t(x')}\nabla^{'} G(\vec{x},\vec{x}')-G(\vec{x},\vec{x}')\nabla^{'} \frac{\chi(x')}{\xi^t(x')}\]\cdot dS'\,,\label{solution2}
\end{eqnarray}
where $G(\vec{x},\vec{x}')$ is the Green's function for the Laplace operator in 2d, given by
\begin{eqnarray}\label{Green2d}
G(\vec{x},\vec{x}')=\frac{1}{2\pi}\log|\vec{x}-\vec{x}'| +c_0\,.
\end{eqnarray}
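As a quick check that (\ref{Green2d}) is the correct kernel, one can verify symbolically that $G$ is harmonic away from the source point and that the flux of $\nabla G$ through a circle enclosing the source is one:

```python
import sympy as sp

x, y, xp, yp = sp.symbols('x y xp yp', real=True)
G = sp.log(sp.sqrt((x - xp)**2 + (y - yp)**2))/(2*sp.pi)

# harmonic away from the source point...
lap = sp.simplify(sp.diff(G, x, 2) + sp.diff(G, y, 2))

# ...with unit flux of grad G through any circle centered on the source
r, th = sp.symbols('r theta', positive=True)
flux = sp.integrate(sp.diff(sp.log(r)/(2*sp.pi), r) * r, (th, 0, 2*sp.pi))
```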
Next, we need to impose appropriate boundary conditions. To do so, let us start by discussing the region $z>R$ for which $\xi^t<0$ so that there will be no subtleties with poles in the integrals. We will consider a closed surface $\partial \mathcal{V'}$ at infinity, so that $dS'=\hat{r}\, r'd\theta'$, with $r'=|\vec{x}'|\to\infty$. We note that, normally, the integral over $\partial \mathcal{V'}$ in 2d would give a finite contribution, assuming that the fields that we solve for are finite at infinity. This is because $\nabla^{'} G(\vec{x},\vec{x}') \cdot dS'\to$ constant at large $r'$. However, in our particular problem, we need to solve for the combination $\phi/\xi^t$, or $\chi/\xi^t$, which decay (at least) as $1/r'^2$ at large $r'$.\footnote{To see this, notice that both $\phi$ and $\chi$ are components of the perturbation $H_{ij}$, which can generally be written as (\ref{pertg}), i.e., $H_{ij}\sim \sum z^{2n}T^{(n)}_{ij}$. The leading order term gives the CFT stress-energy tensor, which scales as $T^{(0)}_{ij}\sim1/{r^{d-2}}\sim1/r$ for perturbations of compact support, or $T^{(0)}_{ij}\sim$ constant otherwise (e.g. for plane waves). For $n>0$, equation (\ref{Tnexp}) tells us that $T^{(n)}_{ij}\sim\Box^n T^{(0)}_{ij}$, so these terms decay faster at infinity. Hence, $H_{ij}$ scales like $T^{(0)}_{ij}$ and $\phi/\xi^t\sim1/r^2$ and $\chi/\xi^t\sim1/r^2$ at large $r$ (in the worst case scenario).} This means that the surface integrals in (\ref{solution1}) and (\ref{solution2}) vanish, regardless of the specific values of $\phi$ and $\chi$ at $r\to\infty$, and therefore,
\begin{eqnarray}\label{solution1b}
\phi(\vec{x})&=&\frac{\xi^t(\vec{x})}{2\pi}\int_{\mathcal{V'}} \log|\vec{x}-\vec{x}'| \rho_\phi(\vec{x}')dV'\,, \\
\chi(\vec{x})&=&\frac{\xi^t(\vec{x})}{2\pi}\int_{\mathcal{V'}}\log|\vec{x}-\vec{x}'| \rho_\chi(\vec{x}')dV'\,,\label{solution2b}
\end{eqnarray}
where the boundary conditions once again force $c_0=0$.\footnote{Notice that if $c_0\neq0$, both $\phi\sim r^2\to\infty$ and $\chi\sim r^2\to\infty$ at large $r$, which would be unphysical.} For the region $z<R$, we note that $\xi^t$ can become zero at multiple points in the bulk, so we would need to deal with potential divergences of the integrands. Following the calculation of the trace (explained in detail in Appendix \ref{app:inversion}), we can expect to get rid of these non-physical divergences by a simple regularization procedure. One easy way to do this is by adding a small imaginary part to $z$,
\begin{eqnarray}\label{solution1c}
\phi(\vec{x})&=&\frac{\xi^t(\vec{x})}{2\pi}\int_{\mathcal{V'}} \log|\vec{x}-\vec{x}'| \rho_\phi(\vec{x}')dV'\bigg|_{z\to z+i\epsilon}\,, \\
\chi(\vec{x})&=&\frac{\xi^t(\vec{x})}{2\pi}\int_{\mathcal{V'}}\log|\vec{x}-\vec{x}'| \rho_\chi(\vec{x}')dV'\bigg|_{z\to z+i\epsilon}\,,\label{solution2c}
\end{eqnarray}
and at the end of the calculation let $\epsilon\to0$. The integration region should now be free of singularities, so these formulas apply for all values of $z$. Hence, together with the trace formula (\ref{Hii-4b}), they provide a complete solution of the reconstruction problem in $d=3$.
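As a sanity check, the regularized log-kernel integrals can be evaluated numerically. The following is a minimal sketch, not part of the derivation: the discretized source, evaluation point, and value of $\xi^t$ are illustrative assumptions, and the shift $z\to z+i\epsilon$ implements the regularization of (\ref{solution1c})-(\ref{solution2c}).

```python
import numpy as np

# Hedged numerical sketch of eqs. (solution1c)-(solution2c):
#   phi(x) = xi^t(x)/(2*pi) * sum_j log|x - x'_j| * rho(x'_j) * dV,
# with z -> z + i*eps regularizing points where xi^t or |x - x'| vanish.
# Source points, density, and xi^t below are illustrative assumptions.

def log_kernel_solution(src_pts, rho, eval_pt, xi_t, eps=1e-8, dV=1.0):
    z, x = eval_pt
    zp, xp = src_pts[:, 0], src_pts[:, 1]
    dist2 = (z + 1j * eps - zp) ** 2 + (x - xp) ** 2   # complex-shifted |x - x'|^2
    kernel = 0.5 * np.log(dist2)                       # log|x - x'| = (1/2) log(|x - x'|^2)
    return (xi_t / (2.0 * np.pi)) * np.sum(kernel * rho) * dV  # take Re(...) as eps -> 0

# Single unit source at the origin: phi = xi^t * log(r) / (2*pi), with r = 5 here.
src = np.array([[0.0, 0.0]])
phi = log_kernel_solution(src, np.array([1.0]), (3.0, 4.0), xi_t=1.0)
```

For a point source the sum reduces to the Green's function itself, so the real part of `phi` approaches $\log r/(2\pi)$ as $\epsilon\to0$.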
\begin{itemize}
\item \emph{Higher dimensional cases:}
\end{itemize}
As discussed above, by comparing the number of independent components of the perturbed metric $H_{ij}$ against the number of equations that we obtain from a single form $\delta {\bm w}$, one can conclude that the minimum number of forms $\bar{n}$ that are in principle required to invert the problem is given by (\ref{minimaln}). However, this does \emph{not} imply that any set of $\bar{n}$ forms should lead to a well defined inversion problem since, depending on the choice, one could end up with a set of equations that are not completely linearly independent. In the following we will spell out the precise conditions that we must impose on a minimal set of forms and give a concrete example of how these conditions can be satisfied.
First, notice that for a spherical region, a single form $\delta {\bm w}$ is parametrized by $d$ real numbers $\mathcal{P}=\{R, x_0^1,\cdots ,x_0^{d-1}\}$. Moreover, one can easily check that for each choice of such numbers, the $d$ components of the form $\delta w_a$, with $a\in\{z,1, \ldots, d-1\}$, are all linearly independent since each component involves different sets of metric components $H_{ij}$. Therefore, the task at hand reduces to finding a convenient set of parameters $\mathcal{P}_k$, with $k\in \{1\,,\dots\,, \bar{n}\}$, such that the individual components of each form are linearly independent across the set. More concretely, if we label the $\bar{n}$ forms as $\delta \bm w^{k}$, then we would need to impose that for a fixed $a$ the set $\{\delta w_a^{1}\,,\dots\,, \delta w_a^{\bar{n}} \}$ must be linearly independent.
In order to visualize explicitly the dependence of each component $\delta w_a$ on the parameters $\{R, x_0^1,\cdots ,x_0^{d-1}\}$, we rewrite equations (\ref{wz}) and (\ref{wi}) as follows:
\begin{eqnarray}\label{wz-i}
\delta w_z&=&\(\frac{R^2-|\vec{x}_0|^2}{4R}\)\(d\, H^i_{\,\,i}+z\partial_zH^i_{\,\,i} \)+\frac{x_0^l}{2R}\( d\, x_l H^i_{\,\,i} +x_l z\partial_zH^i_{\,\,i}\)\nonumber \\ &&\, \qquad-\frac{1}{4R}\Big[\((d-2)z^2+|\vec{x}|^2 \)H^i_{\,\,i}+ z\(z^2+|\vec{x}|^2\)\partial_zH^i_{\,\,i}\Big]\,, \\ \label{wi-i}
\delta w_i&= &\(\frac{R^2-|\vec{x}_0|^2}{4R}\)\[z\big(\partial^i\, H^j_{\,\,j}-\partial^j\,H^i_{\,\,j}\big)\]+\frac{x_0^l}{2R}\[ z\, x_l\big(\partial^i\, H^j_{\,\,j}-\partial^j\,H^i_{\,\,j}\big)-z\big(\delta^i_{\,\,l}H^j_{\,\,j}-\delta^j_{\,\,l}H^i_{\,\,j}\big)\] \nonumber \\
&&\, \qquad-\frac{1}{4R}\[\(z^2+|\vec{x}|^2\)\big(\partial^i\, H^j_{\,\,j}-\partial^j\,H^i_{\,\,j}\big)+2z\big(x^iH^j_{\,\,j}-x^jH^i_{\,\,j}\big)\]\,.
\end{eqnarray}
In this way, we can express each component of the form as a linear combination of $(d+1)$ linearly independent functions
\begin{equation}
\delta w_a(R,\vec{x}_0;z,\vec{x})=\sum_{l=1}^{d+1} \alpha_l(R,\vec{x}_0)F^l_a(z,\vec{x})\,,
\end{equation}
with coefficients $\alpha_l$ given by
\begin{eqnarray}\label{coeffs-2}
\{\alpha_1\,, \alpha_2\,, \dots\,, \alpha_{d+1}\}=\left\{\(\frac{R^2-|\vec{x}_0|^2}{4R}\)\,, \frac{x_0^1}{2R}\,, \dots\,,\frac{x_0^{d-1}}{2R} \,, \frac{1}{4R}\right\}\,.
\end{eqnarray}
Note, however, that by choosing a set of parameters $\mathcal{P}$ we can only specify up to $d$ of the above coefficients, while one of them will necessarily be determined in terms of the others. This is in fact not a problem. If we consider the set of forms $\delta \bm w^{k}$ and repeat the above analysis, we find now that
\begin{equation}\label{sumkl}
\delta w^{k}_a(R^{k},\vec{x}_0^{\,k};z,\vec{x})=\sum_{l=1}^{d+1} \alpha^k_l(R^{k},\vec{x}_0^{\,k})F^l_a(z,\vec{x})\,,
\end{equation}
where $k\in\{1,\ldots,\bar{n}\}$. We note that the number of coefficients $\alpha^k_l$ that we can fix for each $k$ is larger than the total number of forms that we have at our disposal, i.e., $d>\left\lceil\frac{d-1}{2}\right\rceil$. Therefore, we still have a lot of freedom in the choice of parameters $\mathcal{P}_k$ to be able to make the set $\{\delta w_a^{1}\,,\dots\,, \delta w_a^{\bar{n}} \}$ linearly independent. One way to achieve this is by focusing only on a subspace of forms
obtained by varying $\bar{n}$ out of the $d$ free coefficients of each $\delta w_a^{k}$, which we denote as $\beta^k_l$, while keeping the rest fixed. More explicitly, we can split the sum in (\ref{sumkl}) as
\begin{equation}
\delta w^{(k)}_a=\sum_{l}\beta^k_l\,F^l_a\,+\sum_{l}
\tilde{\beta}^k_l\,F^l_a\,,
\end{equation}
where $\beta^k_l$ is now an $\bar{n}\times \bar{n}$ matrix containing the coefficients that we vary, and $\tilde{\beta}^k_l$ denotes the ones that we keep fixed. The condition for the above linear independence is then given by the non-vanishing of the determinant of the matrix $\beta^{k}_{l}$. For instance, we could take
\begin{eqnarray}\label{cut-alpha}
\beta^{k}_{l}= \left\{\frac{(x_0^{1})^k}{2R^{k}}, \ldots,\frac{(x^{\bar{n}-1}_0)^k}{2R^{k}} , \frac{1}{4R^{k}}\right\}\,,
\end{eqnarray}
and vary the parameters $\{R^k,(x_0^{i})^k\}$ for $i\in\{1,\ldots,\bar{n}-1\}$ such that $\text{det}(\beta^{k}_{l})\neq0$ while keeping
$(x_0^{i})^k$ fixed for $i\in\{\bar{n},\ldots,d-1\}$. More generally, notice that there can be many other possible choices. On one hand, the choice of the subset $\beta^{k}_{l}\subset\alpha^{k}_{l}$ can be arbitrary and, on the other hand, the remaining free parameters can be chosen at random; they need not be kept fixed.
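The non-degeneracy condition $\det(\beta^{k}_{l})\neq0$ is easy to test numerically. The snippet below is a sketch under illustrative assumptions (the dimension $d$, parameter ranges, and random seed are not fixed by the text): it builds the matrix in (\ref{cut-alpha}) from $\bar{n}$ randomly drawn sphere parameters and checks that its determinant is generically nonzero.

```python
import numpy as np

# Sketch of the linear-independence check: build beta^k_l as in (cut-alpha),
#   row k = { x0^1/(2R), ..., x0^{nbar-1}/(2R), 1/(4R) } at parameters (R^k, x0^k),
# for random sphere parameters, and verify det(beta) != 0.
# The dimension d and parameter ranges are illustrative assumptions.

rng = np.random.default_rng(0)
d = 7
nbar = -(-(d - 1) // 2)                              # nbar = ceil((d-1)/2) minimal forms
R = rng.uniform(1.0, 2.0, size=nbar)                 # radii R^k
x0 = rng.uniform(-1.0, 1.0, size=(nbar, nbar - 1))   # varied centers (x0^i)^k

beta = np.empty((nbar, nbar))
beta[:, :-1] = x0 / (2.0 * R[:, None])               # columns x0^l / (2R)
beta[:, -1] = 1.0 / (4.0 * R)                        # last column 1 / (4R)
```

Generic parameter choices yield a nonsingular matrix; degenerate choices (e.g. two coincident spheres) would make two rows equal and the determinant vanish.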
\subsubsection*{General time slices: recovering the full bulk metric}
Let us now go back to the problem of recovering the full bulk metric $\delta g_{\mu\nu}$. First, notice that when we picked a constant-$t$ slice $\Sigma$,
we were able to recover the components of the metric tangent to it, namely, $\delta g_{ij}$. This means that we still need
to find the time components, $\delta g_{tt}$ and $\delta g_{ti}$, in order to complete the reconstruction problem. Below, we present
a simple algorithm to recover these extra metric components.
From the bulk point of view it is easy to see that $\delta g_{tt}$ and $\delta g_{ti}$ are, in fact, constrained from the equations of motion (\ref{perturbedeqns}). Specifically,
they can be determined from the $zz$ and $z\mu$ components of Einstein's equations,
\begin{equation}
\delta g^{\mu}_{\,\,\,\mu}=0\,, \qquad \partial_\mu \delta g^{\mu\nu}=0\,.
\end{equation}
These equations imply that $\delta g_{tt}$ must equal the spatial trace $\delta g_{tt}=\delta g^i_{\,\,i}$, while $\delta g_{ti}$ can be determined from a first order equation $\partial_t \delta g_{ti} = \partial_j \delta g_{ij}$. This can be easily implemented in practice. However, the problem is that these particular bulk equations of motion are not known from the CFT perspective, so their use cannot be justified. Indeed, once the surface $\Sigma$ is chosen to be a constant-$t$ slice, the closedness condition $d \delta\bm w=0$ \emph{only} encodes the $tt$ component of the Einstein's equations, as explained at the end of Section \ref{sec:IWgeneral}.
One simple solution to this problem is to pick a more general time slice $\Sigma'$ and repeat the reconstruction analysis outlined above. For simplicity, we will pick here a boosted slice parametrized by coordinates $(t',x_i',z')$, but a similar analysis can be implemented from more general choices of $\Sigma'$. We will denote the boosted coordinate $x_i=x$ and label the coordinates orthogonal to it as $x_j=y_j$ for $j\neq i$. With this notation, a boost with rapidity $\beta$ (which we can take to be arbitrary) is given by the bulk transformations
\begin{eqnarray}
t&=&t' \cosh\beta +x' \sinh \beta\,,\label{boost1}\\
x&=&x' \cosh\beta +t' \sinh \beta\,,\\
y_j&=&y_j'\,,\\
z&=&z'\,.\label{boost4}
\end{eqnarray}
We can now perform the reconstruction analysis in this new slice, and recover the components $\delta g_{i'\!j'}$ on $\Sigma'$. Indeed, a quick calculation shows that the components that we can recover are
\begin{eqnarray}
\delta g_{x'x'}|_{\beta}&=& \sinh^2\beta \,\delta g_{tt}+ 2 \sinh \beta \cosh \beta\, \delta g_{tx} + \cosh^2\beta \, \delta g_{xx}\,,\\
\delta g_{x'y_i'}|_{\beta}&=& \sinh\beta \,\delta g_{ty_i}+ \cosh \beta\, \delta g_{xy_i}\,,\\
\delta g_{y_i'y_j'}|_{\beta}&=& \delta g_{y_iy_j}\,.
\end{eqnarray}
If this analysis is done for at least two distinct nonzero values of the rapidity, $\beta_1\neq\beta_2$ with $\beta_{1,2}\neq0$, then it is clear that we will have enough information to recover the extra components $\delta g_{tt}$ and $\delta g_{tx}$ from linear combinations of $\delta g_{i'\!j'}|_{\beta_i}$.
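The linear algebra behind this step can be sketched explicitly. Assuming $\delta g_{xx}$ has already been recovered on the unboosted slice, the $\delta g_{x'x'}|_{\beta}$ data for two nonzero rapidities gives a $2\times2$ linear system for $(\delta g_{tt},\delta g_{tx})$. All numerical values below are made-up illustrations.

```python
import numpy as np

# Sketch: recover (dg_tt, dg_tx) from dg_{x'x'}|_{beta_1,2}, with dg_xx known
# from the beta = 0 slice. Metric values and rapidities are illustrative.

def recover_time_components(b1, b2, g_xx, g_b1, g_b2):
    # Rows: dg_{x'x'}|_b = sinh^2(b) dg_tt + 2 sinh(b)cosh(b) dg_tx + cosh^2(b) dg_xx
    A = np.array([[np.sinh(b1) ** 2, 2.0 * np.sinh(b1) * np.cosh(b1)],
                  [np.sinh(b2) ** 2, 2.0 * np.sinh(b2) * np.cosh(b2)]])
    rhs = np.array([g_b1 - np.cosh(b1) ** 2 * g_xx,
                    g_b2 - np.cosh(b2) ** 2 * g_xx])
    return np.linalg.solve(A, rhs)  # (dg_tt, dg_tx)

# Round trip: synthesize boosted data from known components, then invert.
g_tt, g_tx, g_xx = 0.3, -0.1, 0.7

def boosted(b):
    return (np.sinh(b) ** 2 * g_tt + 2.0 * np.sinh(b) * np.cosh(b) * g_tx
            + np.cosh(b) ** 2 * g_xx)

g_tt_rec, g_tx_rec = recover_time_components(0.4, 0.9, g_xx,
                                             boosted(0.4), boosted(0.9))
```

The $2\times2$ system is nonsingular precisely when $\beta_1\neq\beta_2$ and both are nonzero, matching the condition stated above.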
This completes the reconstruction problem for the locus of all points in the intersection between the original slice $\Sigma$ and the new slices $\Sigma'$, as shown in the left panel of Figure \ref{fig:timemetric}. In particular, from the transformations (\ref{boost1})-(\ref{boost4}), it follows that the constant-$t'$
slices parametrized by $\beta$ intersect the surface $\Sigma$ at the line
\begin{equation}
x=0\,,
\end{equation}
depicted in red. If we want to reconstruct other points of $\Sigma$, then, we can generalize the boost transformations (\ref{boost1})-(\ref{boost4}) to include a translation in $x$, so that
\begin{eqnarray}
t&=&t'' \cosh\beta +x'' \sinh \beta\,,\label{boost1b}\\
x&=&x'' \cosh\beta +t'' \sinh \beta+\sigma\,,\\
y_j&=&y_j''\,,\\
z&=&z''\,.\label{boost4b}
\end{eqnarray}
The new slices $\Sigma''$, depicted in the right panel of Figure \ref{fig:timemetric}, intersect the original slice $\Sigma$ at
\begin{equation}
x=\sigma\,,
\end{equation}
so it is clear that, if we perform the reconstruction problem for general slices $\Sigma''$ with arbitrary $\beta$ and $\sigma$, we will have enough
information to reconstruct the full metric in $\Sigma$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=5cm]{TimeComp1.pdf}
\hspace*{2cm}
\includegraphics[width=5cm]{TimeComp2.pdf}
\setlength{\unitlength}{1cm}
\begin{picture}(0,0)
\put(-12.43,5.35){\scriptsize$\partial M$}
\put(-8.73,2.6){\scriptsize$\Sigma$}
\put(-9.98,2.85){\scriptsize$\delta g_{ij}$}
\put(-9.98,2.25){\scriptsize$\delta g_{i'\!j'}|_{\beta_2}$}
\put(-9.98,1.65){\scriptsize$\delta g_{i'\!j'}|_{\beta_1}$}
\put(-7.43,4.7){\scriptsize$\Sigma'|_{\beta_1}$}
\put(-7.43,4.2){\scriptsize$\Sigma'|_{\beta_2}$}
\put(-5.2,5.35){\scriptsize$\partial M$}
\put(-1.5,2.6){\scriptsize$\Sigma$}
\put(-0.2,4.5){\scriptsize$\Sigma''|_{\beta,\sigma_1}$}
\put(-0.2,4.2){\scriptsize$\Sigma''|_{\beta,\sigma_2}$}
\put(-0.2,3.9){\scriptsize$\Sigma''|_{\beta,\sigma_3}$}
\put(-0.3,3.5){\scriptsize$x=\sigma_3$}
\put(-0.5,3.15){\scriptsize$x=\sigma_2$}
\put(-0.7,2.8){\scriptsize$x=\sigma_1$}
\end{picture}
\end{center}
\vspace{-0.2cm}
\caption{\small Algorithm to reconstruct the full bulk metric, including its time components $\delta g_{tt}$ and $\delta g_{ti}$. In the left panel we show the components of the metric that can be recovered on a generic boosted slice $\Sigma'|_{\beta}$. Combining the data obtained from multiple boosted slices it is possible to recover the full metric on the locus of points in the intersection between the original slice $\Sigma$ and the new slices $\Sigma'$, depicted in red. In the right panel we show new slices $\Sigma''|_{\beta,\sigma}$ that are obtained by a simple translation of the original boosted slices $\Sigma'$. The extra data obtained from the inversion problem in these new slices is sufficient to recover all components of the metric in the full slice $\Sigma$.}
\label{fig:timemetric}
\end{figure}
Finally, note that the above algorithm has a trivial extension to generic choices of $\Sigma$. Thus, picking a family of slices $\Sigma$ that foliates the full manifold $M$, and repeating the same analysis for each $\Sigma$ gives a complete solution to the reconstruction problem in $M$.
\subsubsection{Going beyond linear order\label{beyond}}
In the previous section we showed that the problem of metric reconstruction can be carried out explicitly at linear order for the case of perturbative excited states. This was accomplished using the canonical bit thread construction based on differential forms. But, can this methodology be generalized to the non-linear regime?
To answer this question, we will start with a brief review of our findings and then discuss how the different aspects of our proposal can be generalized. To set up
the reconstruction problem, we assumed knowledge of a set of canonical forms that codify the entanglement pattern of subregions in the dual CFT. We recall that these canonical forms $\delta \bm w$ can be uniquely specified from CFT data. Specifically, they are completely fixed by the boundary condition on $\partial M$ (\ref{bcIyerWald}), which is given in terms of the one-point function of the CFT stress-energy tensor $\langle T_{\mu\nu}\rangle$, and the requirement of bulk locality. In the presence of a metric, we found that one of these forms specifies a covector field that can be schematically written as
\begin{equation}\label{metricreconst}
(\star \delta \bm w)_{a}=\mathcal{F}_a^{bc}\,\delta g_{bc}\,,
\end{equation}
where $\mathcal{F}_a^{bc}$ is a specific linear differential operator. In low-dimensional cases, the equations resulting from a single form provide enough data to invert the problem, so that
\begin{equation}
\delta g_{ab}=[\mathcal{F}^{-1}]^c_{ab}\,(\star \delta \bm w)_{c}\,.
\end{equation}
This kind of inversion works for $d=2$ and $d=3$, i.e., in AdS$_{3}$ and AdS$_{4}$, respectively. For higher dimensional cases the problem becomes underdetermined, but it can be easily generalized by starting from a set of differential forms $\delta \bm W=\{\delta \bm w^1,\ldots,\delta \bm w^n\}$, that encode the entanglement pattern of a \emph{family} of subregions in the CFT. In this case, we find that
\begin{equation}
(\star \delta \bm w^i)_{a}=(\mathcal{F}^i)_a^{bc}\,\delta g_{bc}\,,
\end{equation}
so, for large enough $n$ one can always invert the system as
\begin{equation}
\delta g_{ab}=[\mathcal{F}^{-1}_i]^c_{ab}\,(\star \delta \bm w^i)_{c}\,.
\end{equation}
The optimal value of $n$ depends on the number of dimensions, and is given by (\ref{minimaln}). Of course, the larger
the value of $n$, the more information we have at our disposal, and the easier the inversion problem becomes. In fact, in the limit $n\to\infty$, or
when the set of differential forms is dense enough, there is even enough freedom to turn the inversion problem into a simple algebraic system of equations, as shown explicitly in Section \ref{explicit-reconstruction}.
Let us now comment on what to expect in the non-linear regime. To start with, we can treat the problem perturbatively by extending the above results to higher orders in the perturbation.
In this case, the perturbation of the metric can be given as an expansion\footnote{As shown in \cite{Lashkari:2015hha}, when going to higher orders in $\lambda$ it is convenient to work in the so-called Hollands-Wald gauge \cite{Hollands:2012sf} so that the coordinate location of $\gamma_A$ is fixed and $\mathcal{L}_{\xi_A}(g^\lambda_{\mu\nu})|_{\gamma_A}=0$. In this gauge the argument presented in Section \ref{pertstatesBT} about the choice of $\Sigma$ can be extended to higher orders in $\lambda$.}
\begin{equation}\label{eq:metrichigher}
g^\lambda_{\mu\nu}= g_{\mu\nu}+\lambda\delta g_{\mu\nu}^\lambda\,,\qquad \delta g_{\mu\nu}^\lambda\equiv\delta^{(1)} g_{\mu\nu}+\lambda\delta^{(2)} g_{\mu\nu}+\lambda^2\delta^{(3)} g_{\mu\nu}+\cdots\,,
\end{equation}
and, similarly, any solution to the max-flow problem
\begin{equation}\label{eq:formhigher}
\bm w_\lambda=\bm w+\lambda\delta \bm w_\lambda\,,\qquad \delta \bm w_\lambda\equiv \delta^{(1)} \bm w+ \lambda \delta^{(2)} \bm w+ \lambda^2 \delta^{(3)} \bm w+\cdots\,.
\end{equation}
On general grounds we expect that, in the presence of the metric (\ref{eq:metrichigher}), the change in the form $\delta \bm w_\lambda$ should follow an equation similar to (\ref{metricreconst}), i.e.,
\begin{equation}
(\star \delta \bm w_\lambda)_{a}=\tilde{\mathcal{F}}_a^{bc}\,\delta g_{bc}^\lambda\,,
\end{equation}
but now with $\tilde{\mathcal{F}}_a^{bc}$ being a higher order differential operator. For example, at second order in $\lambda$ we expect a second order differential operator, and similarly for higher orders. This introduces the standard non-uniqueness problem for the inversion of a non-linear operator. However, this issue can be circumvented by solving the reconstruction problem recursively in $\lambda$. To see this, notice that the different terms in (\ref{eq:formhigher}) should depend on
the different metric perturbations and their derivatives as follows:
\begin{equation}\label{eq:formcomponents}
\delta^{(i)}\bm w=\delta^{(i)}\bm w(\delta^{(j)}g_{\mu\nu},\nabla^{k} \delta^{(j)} g_{\mu\nu})\,,
\end{equation}
for $j\in\{1,\ldots,i\}$ and $k\in\{1,\ldots,1+i-j\}$. In other words, for a given value of $i$, in the right-hand side of (\ref{eq:formcomponents}) we expect up to $i^{\text{th}}$ derivatives of $\delta^{(1)} g_{\mu\nu}$ but only $1^{\text{st}}$ derivatives of $\delta^{(i)} g_{\mu\nu}$. Thus, if we solve for the metric at first order in $\lambda$, then we can reformulate the problem of bulk reconstruction at second order as a linear problem above the $g_{\mu\nu}+\lambda\delta^{(1)} g_{\mu\nu}$ solution. This also generalizes to higher orders in $\lambda$, so that the inversion problem at a given order can be cast as a linear problem on top of the solution at the previous order.
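This recursive structure can be illustrated with a toy model: take $w(g)=Lg+Q(g,g)$, with $L$ linear and invertible and $Q$ bilinear; then each order in $\lambda$ reduces to a linear solve once the lower orders are known. Everything below (the operators and the data) is an arbitrary assumption for illustration, not the actual operator $\tilde{\mathcal{F}}$.

```python
import numpy as np

# Toy illustration of the order-by-order inversion: w(g) = L g + Q(g, g).
# At O(lambda) the problem is linear; at O(lambda^2) it is linear again
# once g1 is known. L, Q, and the data are arbitrary assumptions.

rng = np.random.default_rng(1)
n = 4
L = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible linear operator
B = 0.05 * rng.standard_normal((n, n, n))           # bilinear "nonlinearity"

def Q(a, b):
    return np.einsum('ijk,j,k->i', B, a, b)

# Synthesize data from a known expansion g = lam*g1 + lam^2*g2 + ...
g1_true = rng.standard_normal(n)
g2_true = rng.standard_normal(n)
w1 = L @ g1_true                         # O(lambda) data
w2 = L @ g2_true + Q(g1_true, g1_true)   # O(lambda^2) data

# Recursive inversion: each order is a linear solve given the lower orders.
g1 = np.linalg.solve(L, w1)
g2 = np.linalg.solve(L, w2 - Q(g1, g1))
```

The key point mirrored here is that the quadratic term at second order involves only the already-known first-order solution, so no nonlinear equation is ever solved.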
We should also comment on how the boundary condition (\ref{bcIyerWald}) generalizes to higher orders in $\lambda$. At linear level, we saw that it is fully specified by the one-point function of the CFT stress-energy tensor $\langle T_{\mu \nu}\rangle$. However, at higher orders we would need to specify further data, e.g. \cite{Beach:2016ocq}. A useful case study would be to consider the reconstruction problem at second order in $\lambda$, which was already worked out in \cite{Faulkner:2017tkh} in the framework of extremal surfaces. This paper generalized the Iyer-Wald construction, and thus \cite{Faulkner:2013ica}, to include second order variations in the metric, hence their construction can be used to obtain canonical thread configurations at second order in $\lambda$. More specifically, \cite{Faulkner:2017tkh} focused on a class of CFT states which are expected to have a classical gravity description, and are defined by adding sources for primary operators to the Euclidean path integral defining the vacuum state \cite{Botta-Cantcheff:2015sav,Christodoulou:2016nej,Marolf:2017kvq},
\begin{equation}\label{cftState}
|\psi_\lambda\rangle=T e^{-\int_{-\infty}^0dt_Ed^{d-1}x\phi^{(0)}_\alpha\mathcal{O}_\alpha}|0\rangle\,.
\end{equation}
Among other things, their calculation led to a closed expression for the entanglement entropy in these CFT states at second order in the sources:\footnote{In their calculation, they expressed the second order terms in terms of the first order sources. After that, they related the sources at first order to the one-point functions at first order via a map that depends on the CFT two-point functions (which are fixed by conformal symmetry).}
\begin{eqnarray}\label{bc:secondorder}
\!\!\!\!\!\!\!\!\!\!\!\!S_A(\langle T\rangle,\langle\mathcal{O}\rangle)&=&a^{*}S_A^{(0)}+\lambda\int K^{(1)}_A(x)\langle T(x)\rangle+\lambda^2\int\!\!\!\int K^{(2)}_A(x_1,x_2)\langle\mathcal{O}_\alpha(x_1)\rangle\langle\mathcal{O}_\alpha(x_2)\rangle\nonumber\\
&&+\frac{\lambda^2}{C_T}\int\!\!\!\int \tilde{K}^{(2)}_A(x_1,x_2)\langle T(x_1) \rangle\langle T(x_2)\rangle\,,
\end{eqnarray}
where $K^{(1)}_A(x)$, $K^{(2)}_A(x_1,x_2)$ and $\tilde{K}^{(2)}_A(x_1,x_2)$ are some specific kernels. A few comments are in order. First, notice that at this order the entanglement entropy depends on the specific CFT only through the central charges $a^{*}$ and $C_T$. In fact, for theories that are dual to Einstein gravity, they must be proportional to each other, i.e., $a^{*}\propto C_T$, so the answer depends only on one CFT parameter. Second, (\ref{bc:secondorder}) correctly encodes the first law of entanglement $\delta S_A=\langle H_A\rangle$ at linear order in $\lambda$ but, in addition, it also contains information about the relative entropy in the excited state $\delta^{(2)}S(\rho_A||\rho_A^{(0)})$, to second order in $\lambda$. Finally, note that equation (\ref{bc:secondorder}) can now be used to specify a canonical boundary condition for $\delta \bm w_\lambda$ in $\partial M$,
\begin{eqnarray}
\delta \bm w_\lambda(x)|_{\partial M}&=&4G_N\bigg[K^{(1)}_A(x)\langle T(x)\rangle+\lambda\int K^{(2)}_A(x,x')\langle\mathcal{O}_\alpha(x)\rangle\langle\mathcal{O}_\alpha(x')\rangle\nonumber\\
&&\qquad+\frac{\lambda}{C_T}\int \tilde{K}^{(2)}_A(x,x')\langle T(x) \rangle\langle T(x')\rangle\bigg]\bar{{ \bm \epsilon}}\,,
\end{eqnarray}
and so it should suffice to uniquely specify the full $(d-1)$-form on $M$ by further requiring bulk locality. This means that at second order in $\lambda$, we also need the one-point functions of all primary operators $\langle\mathcal{O}_\alpha\rangle$, in addition to the stress-energy tensor. From the bulk perspective, \cite{Faulkner:2017tkh} showed that the closedness of the Iyer-Wald form $\bm \chi$, related to $\delta \bm w_\lambda$ through (\ref{w-chi}), now encodes the following data: at first order one recovers (\ref{closeness}), which specialized to an arbitrary slice $\Sigma$ leads to the linearized Einstein's equations $E_{ab}^{(1)}=0$. At second order, one obtains
\begin{equation}
E_{ab}^{(2)}\equiv (E_{ab}^{(2)})_{\text{grav}}-\frac{1}{2}T_{ab}^{(2)}=0\,,
\end{equation}
where $(E_{ab}^{(2)})_{\text{grav}}$ is the second order Einstein tensor and $T_{ab}^{(2)}$ is the second order stress-energy tensor associated with the bulk fields $\phi_\alpha$ dual to the CFT operators $\mathcal{O}_\alpha$. Thus, \emph{for theories with $a^{*}=C_T$, a CFT state of the form (\ref{cftState}) secretly encodes Einstein's equations (at least up to second order in the perturbation) with matter determined by the CFT one-point functions.} Therefore, on general grounds we can expect the inversion problem using the canonical bit thread prescription to be well defined at second order in $\lambda$.
One can also go to higher orders in perturbation theory; however, it is clear that one would ultimately need infinitely more CFT data in the full non-linear regime, rendering the problem intractable. To see this, notice that to the $i^{\text{th}}$ order in the perturbation, the relative entropy $\delta^{(i)}S(\rho_A||\rho_A^{(0)})$ will generically involve $i$-point functions of the primary operators $\mathcal{O}_\alpha$, so at the full non-linear level we would need to specify \emph{all} correlation functions of the theory. Here we have two options to proceed. One could try to pursue the reconstruction problem in a theory with a \emph{known} gravity dual, e.g., $\mathcal{N}=4$ SYM, and with some hard work, one should be able to recover the metric order by order in the perturbation. Alternatively, one could start with a generic CFT and try to constrain the structure of the CFT $i$-point functions such that the calculations match with the gravity side. Indeed, such constraints must start appearing for $i\geq4$, since these correlation functions are not fixed by conformal invariance.
Another possibility would be to consider the full reconstruction problem without resorting to perturbation theory.\footnote{A version of this idea appeared originally in \cite{Freedman:2016zud}.} For example, given a CFT with a known holographic dual, one can start by computing entanglement contours \cite{Chen_2014,Kudler-Flam:2019oru} for several regions $A_i$ in a state of the form (\ref{cftState}). These contours can then be used as boundary conditions in $\partial M$ for a set of closed bulk forms $\bm w^i$ that encode the entanglement pattern of the individual regions. Via the closedness condition, $d\bm w^i=0$, and assuming further input such as bulk locality, one should then be able to reconstruct particular realizations of the set of forms $\bm w^i$ on $M$ that solve the max flow problem in the bulk. If this set is sufficiently dense, then one could set up the problem of metric reconstruction as a particular optimization problem. More specifically, given a set of such $(d-1)$-forms $\bm W=\{ \bm w^i\}$, the metric $g_{ab}$ should emerge as the minimal positive definite symmetric $(0,2)$ tensor for which the norm bound constraint
\begin{eqnarray}\label{w-norm2}
\frac{1}{(d-1)!}g^{a_1 b_1}\cdots g^{a_{d-1} b_{d-1}} w_{a_1 \cdots a_{d-1}}w_{b_1 \cdots b_{d-1}} \leq 1
\end{eqnarray}
holds for all the elements $\bm w^i$ of the fundamental set $\bm W$. It would be interesting to develop a more precise algorithm based on this optimization problem and understand how the Einstein's equations would emerge upon its implementation.
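As a small illustration of the constraint itself, the norm bound can be checked pointwise for a candidate inverse metric against a set of covectors (the simplest case of one-index forms, i.e., a two-dimensional bulk). All data below is illustrative.

```python
import numpy as np

# Sketch of the pointwise norm bound (w-norm2) for one-forms (two bulk
# dimensions): check g^{ab} w_a w_b <= 1 for every element of the set W.
# The candidate inverse metric and covectors are illustrative assumptions.

def satisfies_norm_bound(ginv, W, tol=1e-12):
    """W: array of shape (n_forms, dim) holding covector components w^i_a."""
    norms2 = np.einsum('ia,ab,ib->i', W, ginv, W)   # |w^i|_g^2 for each form
    return bool(np.all(norms2 <= 1.0 + tol))

ginv = np.eye(2)                           # flat candidate (inverse) metric
W_ok = np.array([[1.0, 0.0], [0.6, 0.8]])  # unit-norm covectors: bound saturated
W_bad = np.array([[2.0, 0.0]])             # norm 2 > 1: bound violated
```

In the envisioned optimization one would minimize over positive definite candidates subject to this check holding for the whole fundamental set $\bm W$.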
\section{Conclusions and outlook}
In this paper we developed a new framework for metric reconstruction based on the bit threads reformulation of entanglement entropy. Our work can be divided roughly into two main parts. In Section \ref{PBT} we explored simple constructions of perturbative thread configurations based on the general methods originally developed in \cite{Agon:2018lwq} but expanded in this paper in various ways. We explored in detail two particular constructions, one that starts by specifying the class of integral curves and a second one that assumes a specific family of level set surfaces. We showed that both methods are efficient and can be easily implemented for the case of perturbative excited states, as we discuss in detail in the concrete example of a local quench in Appendix \ref{app:examples}. However, we realized that both constructions encode the information about the bulk metric in a highly nonlocal way. This implies that these realizations are not particularly useful to tackle the question of metric reconstruction and highlights the necessity of reformulating bit threads in a language that makes background independence manifest.
Motivated by the above results, we started Section \ref{SecForms} by reformulating the bit threads framework in terms of differential forms. We gave general formulas
that translate the relevant equations of the standard description in terms of flows into this new language and studied in detail the case of perturbative excited states. We pointed out that the Iyer-Wald formalism provides us with a canonical choice for the perturbed thread configuration that makes explicit use of bulk locality. More explicitly, we showed that the Iyer-Wald construction yields a particular solution to the max flow problem in the bulk that can be uniquely determined from CFT data, and encodes the Einstein's equations in the bulk through its closedness condition. Assuming that a set of such forms is given, we then showed that the problem of metric reconstruction is equivalent to the inversion of a particular differential operator. We gave explicit inversion formulas for the case of 2d and 3d CFTs, and argued that the problem is also well posed in higher dimensions. Finally, we discussed the generalization of our results to higher orders in the perturbation and its relation to the full Einstein's equations.
There are some open questions related to our work which we think are worth exploring:
\begin{itemize}
\item \vspace{-0.2cm}\emph{Explicit inversion at higher orders:} In Section \ref{beyond} we have sketched out how to generalize the metric inversion
problem to higher orders in $\lambda$. We believe that this would be fairly straightforward to second order in the perturbation if one uses \cite{Faulkner:2017tkh} as a starting point, while it would be illuminating to have explicit inversion formulas for the differential operator at this order. Generalizing this story to higher orders should be possible but may require some extra work. To some extent, this study would be even more rewarding since it could yield non-trivial constraints on the space of theories with classical gravity duals, specifically on the structure of their correlation functions. It would also be interesting to work out a more precise algorithm for metric reconstruction at the full non-linear level following the discussion at the end of Section \ref{beyond} and, in particular, understand how the Einstein's equations would emerge in this context.
\item \vspace{-0.2cm}\emph{More general states and entangling regions:} In our work we have considered perturbations around the vacuum state and focused on spherical regions in the boundary theory. While it is true that in this setting one could in principle recover Einstein's equations systematically (order by order) and hence the bulk geometry, one could relax these two points, with the latter one being arguably the easiest of the two (at least for regions with local modular Hamiltonians). We note that substantial progress on generalizing the former using the Iyer-Wald formalism appeared in \cite{Lewkowycz:2018sgn}. This paper also argued that doing the linear analysis for arbitrary states and shapes of the entangling region is sufficient to capture the full non-linear Einstein's equations in the bulk. It would be interesting to relax these conditions in our method of bulk reconstruction using bit threads, and try to make these statements a bit more precise in our context. It would also be interesting to explicitly start with a state for which the bulk geometry has entanglement shadows, and understand how our approach encodes information about these regions.
\item \vspace{-0.2cm}\emph{Covariant bulk reconstruction:} In this work we have incorporated time-dependence by combining the maximin prescription introduced in \cite{Wall:2012uf} and the non-covariant formulation of bit threads \cite{Freedman:2016zud}. As explained in Section \ref{pertstatesBT}, this was possible in virtue of various crucial simplifications of the perturbative setting that we consider. However, it should be possible to pose the same question in the fully covariant formulation of bit threads
\cite{Headrick:toappear}. We believe that in this case, the full modular flow in the bulk should play a role, and could serve as a guide for constructing canonical bit thread configurations in other special cases. Related to this, it would be interesting to ask the question about bulk reconstruction in time-dependent situations that are not easily accessible to the perturbative setting we consider here, e.g., completely far-from-equilibrium states that could lead to black hole formation in the bulk.
\item \vspace{-0.2cm}\emph{Higher derivative theories:} Finally, we believe it would be worthwhile to explicitly generalize our method of bulk reconstruction to the case of higher derivative theories in the bulk. We point out that the program of \emph{gravitation from entanglement} using extremal surfaces has been worked out in detail for these theories in \cite{Haehl:2017sot}, using the Iyer-Wald formalism both to first and second order in the state deformations around the vacuum. Likewise, the bit thread reformulation of entanglement entropy has already been generalized to the case of higher derivative gravities in \cite{Harper:2018sdd}, incorporating corrections to the local norm bound that depend on the specific theory. It would be interesting to see how our formulas are corrected if we turn on these extra gravity couplings.
\end{itemize}
\vspace{-0.2cm}We hope to come back to some of these points in the near future.
\section*{Acknowledgements}
It is a pleasure to thank Ning Bao, Horacio Casini, Jan de Boer, Matthew Headrick, Martin Ro\v{c}ek, Andrea Russo, Andrew Svesko and Zach Weller-Davies for useful discussions and comments on the manuscript. CAA is supported by the National Science Foundation (NSF) Grant No. PHY-1915093. EC is supported by the NSF Grants No. PHY-1620610 and No. PHY-1820712. JFP is supported by the Simons Foundation through \emph{It from Qubit: Simons Collaboration on Quantum Fields, Gravity, and Information}.
\section{Introduction}
The parquet approximation (or generalized ladder approximation)
was introduced by Landau, Abrikosov and Khalatnikov in their
famous consideration of the high energy behavior in quantum
electrodynamics \cite{LAK}. Later on, this approximation was used
for various models of quantum field theory \cite{TM}.
The results of the parquet approximation
for weak coupling are in agreement with the renormalization group
approach \cite{BSh}. Let us recall that the original aim of Landau
et al. was to develop a non-perturbative method in quantum field theory.
The parquet approximation leads to a closed system of integral equations
which are meaningful not only for small but also for large coupling constants.
In this sense the method of the parquet approximation is a
non-perturbative one.
There are arguments \cite{LAK,TM} that this approximation is able to catch
the asymptotic behavior in quantum field theory at small distances even
in the strong coupling regime. However, there is a serious problem here,
see \cite{BSh}, because it is very difficult to estimate the error
of this approximation in realistic models at strong coupling, since
we have little information about the true behaviour of the model in this regime.
One can control the strong coupling regime only for some simple models such as
vector models \cite{AA} and low dimensional matrix
models \cite{BIPZ} in the large $N$ limit.
The aim of this letter is to investigate the parquet approximation for
matrix models in the large $N$ limit.
By using numerical calculations we demonstrate that the parquet
approximation proves to be a surprisingly good approximation for $N\times
N$ matrix models in the large $N$ limit. To this end we compare the numbers
of parquet diagrams with suitable weights with the total number of diagrams
with the same weights in the large $N$ limit. As the weight of diagrams we
take the corresponding powers of $N^{-1}$ and of the coupling constants. It
turns out that in the large $N$ limit these two numbers, regarded as functions
of the coupling constants and of the number of external lines, almost coincide
for all values of the coupling constants, including the strong coupling
limit.
The behavior of the weighted number of all diagrams in the large $N$ limit
is well known \cite{BIPZ}. It is given by the large $N$ limit of
the corresponding zero-dimensional matrix model. We find the weighted
numbers of parquet diagrams for large
$N$ matrix theories with the quartic and
cubic interactions. We find these numbers as solutions of
closed sets of equations for parquet correlation functions in the leading
order of the $N^{-1}$ expansion.
These equations are simpler than
the parquet equations in \cite{TM} since in our
equations $u$-channel terms do not contribute.
As is known, the large $N$ limit of QCD enables us to understand
qualitatively certain striking phenomenological features of strong
interactions \cite{tH}-\cite{AS}. To perform analytical
investigations one needs to compute the sum of all planar diagrams.
At present this problem has been solved only in a few simple models
such as zero- and one-dimensional matrix models and two-dimensional QCD.
It was suggested \cite{Witten}
that in the large $N$ limit the dynamics is dominated by a
master field. The problem of finding the master
field has been discussed in many works \cite {Haan}-\cite {Gopak}.
Recently the
problem of constructing the master field
was solved in \cite{AV}.
It was shown that the master field satisfies the standard equations of
relativistic quantum field theory but fields are quantized according to a
new rule. An explicit solution of these equations is a rather
non-trivial problem and it seems reasonable to develop approximate
schemes to study the master field.
There have been several attempts
at an approximate treatment of the planar theory \cite {Sl}-\cite{Fer}.
In particular, there were approximations that count
only a part of the planar
diagrams. In \cite{AAV,AZ} a so-called half-planar
approximation to the planar theory was considered.
The distinguished feature of this
approximation is that one can write down a closed set of Schwinger-Dyson-like
equations for a finite number of Green's functions.
The half-planar approximation gives
a good agreement with the known exact answers for the large $N$ models
for a rather wide range of the coupling constant \cite{AZ}.
However, this approximation does not reproduce the correct
strong coupling limit
of exact planar Green's functions. A recursive scheme which could improve
the half-planar approximation
was proposed in \cite{Ar96}. It turns out that this scheme is very close
to the approximation adopted in this paper.
The paper is organized as follows.
In Section 2 we give a general definition of
the planar parquet approximation.
In Sections 3 and 4 we compare the planar parquet and
the planar solutions
of zero-dimensional matrix models.
We find a surprising coincidence of these solutions
over the whole range of the coupling
constant, to within 0.1 percent. Moreover, we find that the planar parquet
solutions for the quartic and cubic interactions have phase transition
points which also exist in the planar solutions of the corresponding
one-matrix models. The values of the phase transition points in the planar
parquet and the planar approximations
turn out to be very close.
\section{ Planar Parquet Equations}
In this section we define the planar parquet approximation
for the $d$-dimensional matrix model with
the cubic and quartic interaction terms,
\begin{equation}
\label{Action}
S= \int d^dx Tr [ \frac{1}{2}
\partial M \cdot \partial M +
\frac{1}{2} m^2 M^2
-\frac{\lambda}{3 N^{1/2}} M^3 +
\frac{g}{4 N} M^4].
\end{equation}
Here $M=(M_{ij}(x)),~$ $i,j=1,...,N~$ is a Hermitian matrix field.
The planar Green's functions are defined as $N\to \infty$ limit
of invariant correlation functions of products of matrices
\begin{equation}
\label{Greens}
\Pi_n(x_1,...,x_n)=\lim_{N \to
\infty}\frac{1}{N^{1+n/2}}\int DM Tr( M(x_1)...
M(x_n))\exp (-S) .
\end{equation}
They satisfy the planar Schwinger-Dyson equations \cite{Haan,Ar81}
\begin {eqnarray}
(-\bigtriangleup +m^{2})_{x_{1}}\Pi_{n}(x_{1},...,x_{n})-
\lambda \Pi_{n+1}(x_{1},x_{1},x_{2},...,x_{n})+
g \Pi_{n+2}(x_{1},x_{1},x_{1},x_{2},...,x_{n})-
\nonumber
\\
\sum _{m=2}^{n}\delta (x_{1}-x_{m})\Pi_{m-2}(x_{2},...,x_{m-1})
\Pi_{n-m}(x_{m+1},...,x_{n})=0.~~~~~~~~~~~~~~~~~~
\label{PSDE}
\end {eqnarray}
These equations look almost like the Schwinger-Dyson equations for a
scalar theory, but there is an essential difference in the form
of the Schwinger terms. Planar Green's functions are
symmetric only under cyclic permutations of arguments.
We define the planar parquet approximation
as an approximate perturbative
solution of equations (\ref{PSDE}) that takes into
account only a part of the full series in the coupling constants.
This part is
specified by the requirement that all three- and four-point vertex parts (1PI
parts) are composed from two-particle reducible (2PR) diagrams. Note that the
perturbative definition is meaningful since the perturbative solution of
the planar Schwinger-Dyson equations converges at least for asymptotically
free interactions. This is the case
for the interaction (\ref{Action}) if we average the signs of the couplings
in a suitable way, as well as for $SU(\infty)$ QCD.
More precisely, the planar parquet approximation
can be defined using the notion of a so-called skeleton expansion.
The skeleton expansion contains only a subset of all
planar diagrams, namely those diagrams that
contain no propagator, three-vertex or four-vertex insertions.
Following 't Hooft \cite{tH1}
we call the two-, three- and four-point functions
the "basic Green's functions". In four-dimensional
space-time, for the interaction (\ref{Action})
as well as for Yang-Mills theory, it is precisely these vertex functions that
contain divergences.
To get the planar parquet approximation
one has to take the set of skeleton diagrams and
replace the bare propagator and bare three- and four-vertices
by the parquet propagator and the corresponding vertex functions.
The parquet basic Green's functions are defined as solutions of the set
of equations that is presented below. The propagator is defined
as a solution of the exact Schwinger-Dyson equation
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(145.00,17.60)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(38.00,15.00){\makebox(0,0)[cc]{$=$}}
\put(60.00,15.00){\line(-1,0){12.00}}
\put(133.00,15.00){\circle*{5.20}}
\put(115.00,15.00){\line(1,0){8.00}}
\put(90.00,15.00){\circle*{5.20}}
\put(72.00,15.00){\line(1,0){8.00}}
\put(66.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(108.00,15.00){\makebox(0,0)[cc]{$+$}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{122.00}{15.00}{1}{145.00}{15.00}{2}
\emline{122.00}{15.00}{3}{133.00}{16.67}{4}
\emline{133.00}{13.33}{5}{123.00}{15.00}{6}
\emline{30.00}{15.00}{7}{17.00}{15.00}{8}
\emline{79.00}{15.00}{9}{90.00}{16.67}{10}
\emline{90.00}{13.33}{11}{80.00}{15.00}{12}
\emline{90.00}{15.00}{13}{102.00}{15.00}{14}
\end{picture}
\vspace{-20mm}
\begin{equation}
\label{SDD}
\end{equation}
(all contributions of tadpole diagrams are dropped).
Here
\vspace{10mm}
\unitlength=1mm
\begin{picture}(112.00,15.00)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(90.00,15.00){\line(1,0){15.00}}
\put(47.00,15.00){\makebox(0,0)[lc]{$=~~~D(p)~,$}}
\put(32.67,12.00){\makebox(0,0)[cc]{$_p$}}
\put(97.67,12.00){\makebox(0,0)[cc]{$_p$}}
\put(112.00,15.00){\makebox(0,0)[lc]{$=~~~\frac{1}{p^2+m^2}$}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{25.00}{15.00}{1}{40.00}{15.00}{2}
\end{picture}
\vspace{-20mm}
$$~$$
are the full (within the parquet approximation) and the bare
propagators, respectively,
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(107.00,20.33)
\put(35.00,15.00){\circle*{5.20}}
\put(35.00,15.00){\line(-1,0){5.67}}
\put(35.00,15.00){\line(1,1){4.00}}
\put(35.00,15.00){\line(1,-1){4.00}}
\put(29.00,13.00){\makebox(0,0)[cc]{$_p$}}
\put(40.00,17.67){\makebox(0,0)[cc]{$_q$}}
\put(48.00,15.00){\makebox(0,0)[lc]{$=~~~\Gamma_3 (p,q)~,$}}
\put(95.00,15.00){\circle*{5.20}}
\put(91.00,19.00){\line(1,-1){8.00}}
\put(99.00,19.00){\line(-1,-1){8.00}}
\put(91.67,9.67){\makebox(0,0)[cc]{$_p$}}
\put(92.67,20.33){\makebox(0,0)[cc]{$_q$}}
\put(100.00,18.00){\makebox(0,0)[cc]{$_k$}}
\put(107.00,15.00){\makebox(0,0)[lc]{$=~~~\Gamma _4 (p,q,k)$}}
\end{picture}
\vspace{-20mm}
$$~$$
are three- and four-point vertices.
The three-point parquet vertex
is defined as a solution of the following equation
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(142.00,21.67)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(63.00,13.33){\line(0,1){3.33}}
\put(18.00,15.00){\circle*{5.20}}
\put(13.00,15.00){\line(1,0){5.00}}
\put(21.67,18.67){\line(-1,-1){3.67}}
\put(18.00,15.00){\line(1,-1){3.67}}
\put(105.00,20.00){\circle*{3.33}}
\put(105.00,10.00){\circle*{3.33}}
\put(105.00,20.00){\line(1,0){3.67}}
\put(105.00,10.00){\line(1,0){3.67}}
\put(95.00,15.00){\line(-1,0){4.00}}
\put(29.33,15.00){\makebox(0,0)[cc]{$=$}}
\put(42.00,15.00){\line(6,5){4.00}}
\put(42.00,15.00){\line(6,-5){4.00}}
\put(38.00,15.00){\line(1,0){4.00}}
\put(63.00,15.00){\circle*{5.20}}
\put(63.00,15.00){\line(-1,0){4.67}}
\put(75.00,19.00){\line(1,0){4.00}}
\put(75.00,11.00){\line(1,0){4.00}}
\put(73.00,9.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(125.00,15.00){\circle*{5.20}}
\put(125.00,15.00){\line(-1,0){4.67}}
\put(10.33,15.00){\makebox(0,0)[cc]{$_p$}}
\put(21.00,21.00){\makebox(0,0)[cc]{$_q$}}
\put(20.33,10.33){\makebox(0,0)[cc]{$_{-p-q}$}}
\put(66.67,21.00){\makebox(0,0)[cc]{$_l$}}
\put(66.67,11.00){\makebox(0,0)[cc]{$_{l-p}$}}
\put(81.33,20.33){\makebox(0,0)[cc]{$_q$}}
\put(88.00,17.00){\makebox(0,0)[cc]{$_p$}}
\put(83.67,15.00){\makebox(0,0)[cc]{$+$}}
\put(111.00,21.67){\makebox(0,0)[cc]{$_q$}}
\put(113.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(117.33,16.67){\makebox(0,0)[cc]{$_p$}}
\put(130.00,19.33){\makebox(0,0)[cc]{$_l$}}
\put(142.00,18.00){\makebox(0,0)[cc]{$_q$}}
\put(130.00,11.33){\makebox(0,0)[cc]{$_{l-p}$}}
\put(140.00,17.00){\line(-2,-1){4.00}}
\put(136.00,15.00){\line(2,-1){4.00}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{125.00}{16.67}{1}{136.00}{15.00}{2}
\emline{136.00}{15.00}{3}{125.00}{13.33}{4}
\emline{95.00}{15.00}{5}{105.00}{20.00}{6}
\emline{105.00}{20.00}{7}{105.00}{10.00}{8}
\emline{105.00}{10.00}{9}{95.00}{15.00}{10}
\emline{75.00}{19.00}{11}{63.00}{16.67}{12}
\emline{63.00}{16.67}{13}{63.00}{13.33}{14}
\emline{63.00}{13.33}{15}{75.00}{11.00}{16}
\put(50.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(57.00,17.00){\makebox(0,0)[cc]{$_p$}}
\put(96.00,15.00){\circle*{3.33}}
\put(98.67,19.33){\makebox(0,0)[cc]{$_l$}}
\put(99.00,11.00){\makebox(0,0)[cc]{$_{l-p}$}}
\end{picture}
\vspace{-20mm}
\begin{equation}
\label{MG3}
\end{equation}
Equation (\ref{MG3}) means that any 1PI 3-point
diagram (except the bare one) can be decomposed into
a three-point and a four-point subdiagram connected by two internal lines,
such that the four-point subdiagram is 2PI with respect to the two lines
that connect the four- and three-point subdiagrams.
It is evident that such a four-point subdiagram can be
either the bare vertex, a "dumb-bell" diagram, or
a $V$-vertex.
Here the filled vertical box $V$ (horizontal box $H$)
\vspace{5mm}
\unitlength=1mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(110.00,22.00)
\put(30.00,13.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(95.00,9.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(32.00,11.00){\line(0,1){8.00}}
\put(40.00,19.00){\line(0,-1){8.00}}
\put(93.00,19.00){\line(1,0){8.00}}
\put(93.00,11.00){\line(1,0){8.00}}
\put(32.00,9.00){\makebox(0,0)[cc]{$_p$}}
\put(32.00,22.00){\makebox(0,0)[cc]{$_q$}}
\put(40.00,22.00){\makebox(0,0)[cc]{$_k$}}
\put(91.00,11.00){\makebox(0,0)[cc]{$_p$}}
\put(91.00,19.00){\makebox(0,0)[cc]{$_q$}}
\put(103.00,19.00){\makebox(0,0)[cc]{$_k$}}
\put(50.00,15.00){\makebox(0,0)[lc]{$=~~~H(p,q,k)~,$}}
\put(110.00,15.00){\makebox(0,0)[lc]{$=~~~V(p,q,k)~$}}
\end{picture}
\vspace{-20mm}
$$~$$
is the part of the four-point
vertex function that is 2PR in the $t$-channel
($s$-channel) and is not 2PR in the $s$-channel ($t$-channel).
The two-particle reducibility in the $s$-channel
means that the $H$-vertex can be represented
in the following way
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(127.00,46.67)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(48.00,15.33){\line(0,1){3.33}}
\put(117.00,17.00){\line(0,1){1.67}}
\put(117.00,15.00){\line(0,1){2.00}}
\put(48.00,17.00){\line(0,-1){1.67}}
\put(25.00,39.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(34.00,11.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(48.00,17.00){\circle*{5.20}}
\put(32.00,13.00){\line(1,0){3.00}}
\put(35.00,21.00){\line(-1,0){3.00}}
\put(48.00,18.67){\line(1,0){4.00}}
\put(48.00,15.33){\line(1,0){4.00}}
\put(27.00,45.00){\line(0,-1){8.00}}
\put(35.00,37.00){\line(0,1){8.00}}
\put(105.00,12.00){\circle*{3.33}}
\put(105.00,22.00){\circle*{3.33}}
\put(117.00,17.00){\circle*{5.20}}
\put(105.00,22.00){\line(-1,0){3.67}}
\put(101.33,12.00){\line(1,0){3.67}}
\put(117.00,18.67){\line(1,0){4.33}}
\put(117.00,15.33){\line(1,0){4.33}}
\put(68.00,11.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(70.00,21.00){\line(-1,0){4.00}}
\put(70.00,13.00){\line(-1,0){4.00}}
\put(83.00,13.00){\circle*{3.33}}
\put(83.00,21.00){\circle*{3.33}}
\put(45.00,41.00){\makebox(0,0)[cc]{$=$}}
\put(48.00,18.67){\line(0,-1){3.33}}
\put(75.00,41.00){\circle*{5.20}}
\put(75.00,42.67){\line(1,0){4.00}}
\put(75.00,39.33){\line(1,0){4.00}}
\put(113.00,45.00){\circle*{3.33}}
\put(113.00,37.00){\circle*{3.33}}
\put(113.00,37.00){\line(1,0){3.67}}
\put(113.00,45.00){\line(1,0){3.67}}
\put(89.00,41.00){\makebox(0,0)[cc]{$+$}}
\put(127.00,41.00){\makebox(0,0)[cc]{$+$}}
\put(26.00,35.00){\makebox(0,0)[cc]{$_p$}}
\put(25.33,46.67){\makebox(0,0)[cc]{$_q$}}
\put(36.33,46.67){\makebox(0,0)[cc]{$_k$}}
\put(36.33,35.00){\makebox(0,0)[cc]{$_{-p-q-k}$}}
\put(56.33,42.67){\makebox(0,0)[cc]{$_q$}}
\put(56.33,38.67){\makebox(0,0)[cc]{$_p$}}
\put(66.67,45.00){\makebox(0,0)[cc]{$_l$}}
\put(81.67,42.67){\makebox(0,0)[cc]{$_k$}}
\put(96.00,38.67){\makebox(0,0)[cc]{$_p$}}
\put(96.00,43.33){\makebox(0,0)[cc]{$_q$}}
\put(105.00,45.67){\makebox(0,0)[cc]{$_l$}}
\put(119.33,45.00){\makebox(0,0)[cc]{$_k$}}
\put(124.67,18.67){\makebox(0,0)[cc]{$_k$}}
\put(112.00,22.33){\makebox(0,0)[cc]{$_l$}}
\put(98.00,22.00){\makebox(0,0)[cc]{$_q$}}
\put(98.00,12.00){\makebox(0,0)[cc]{$_p$}}
\put(93.00,17.00){\makebox(0,0)[cc]{$+$}}
\put(90.00,21.00){\makebox(0,0)[cc]{$_k$}}
\put(76.67,24.00){\makebox(0,0)[cc]{$_l$}}
\put(62.00,21.00){\makebox(0,0)[cc]{$_q$}}
\put(62.00,13.00){\makebox(0,0)[cc]{$_p$}}
\put(58.33,17.00){\makebox(0,0)[cc]{$+$}}
\put(54.00,18.67){\makebox(0,0)[cc]{$_k$}}
\put(43.00,22.00){\makebox(0,0)[cc]{$_l$}}
\put(28.33,21.00){\makebox(0,0)[cc]{$_q$}}
\put(28.00,13.00){\makebox(0,0)[cc]{$_p$}}
\put(102.00,41.00){\line(-2,1){4.00}}
\put(98.00,39.00){\line(2,1){4.00}}
\put(63.00,41.00){\line(-2,1){4.00}}
\put(63.00,41.00){\line(-2,-1){4.00}}
\put(83.00,13.00){\line(1,0){4.00}}
\put(83.00,21.00){\line(1,0){4.00}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{36.00}{21.00}{1}{48.00}{18.67}{2}
\emline{48.00}{15.33}{3}{36.00}{13.00}{4}
\emline{70.00}{21.00}{5}{83.00}{21.00}{6}
\emline{83.00}{21.00}{7}{83.00}{13.00}{8}
\emline{83.00}{13.00}{9}{70.00}{13.00}{10}
\emline{117.00}{18.67}{11}{105.00}{22.00}{12}
\emline{105.00}{22.00}{13}{105.00}{12.00}{14}
\emline{105.00}{12.00}{15}{117.00}{15.33}{16}
\emline{102.00}{41.00}{17}{113.00}{45.00}{18}
\emline{113.00}{45.00}{19}{113.00}{37.00}{20}
\emline{113.00}{37.00}{21}{102.00}{41.00}{22}
\emline{63.00}{41.00}{23}{75.00}{42.67}{24}
\emline{75.00}{42.67}{25}{75.00}{39.33}{26}
\emline{75.00}{39.33}{27}{63.00}{41.00}{28}
\put(112.00,12.00){\makebox(0,0)[cc]{$_{p+q-l}$}}
\put(77.00,11.00){\makebox(0,0)[cc]{$_{p+q-l}$}}
\put(43.00,12.00){\makebox(0,0)[cc]{$_{p+q-l}$}}
\put(68.00,38.00){\makebox(0,0)[cc]{$_{p+q-l}$}}
\put(106.00,37.00){\makebox(0,0)[cc]{$_{p+q-l}$}}
\end{picture}
\vspace{-20mm}
\begin{equation}
\label{MH}
\end{equation}
Equation (\ref{MH}) describes the structure
of $H$-type diagrams. It means that any $H$-diagram can
be decomposed into two four-point subdiagrams connected by
two internal lines so that at least one of
these subdiagrams is 2PI in the $s$-channel.
A similar relation holds for the $V$-vertex.
The vertices $V$ and $H$ are related by the cyclic permutation of
external points
\begin{equation}
\label{HV}
V(p,q,k)=H(q,k,-p-q-k).
\end{equation}
The parquet approximation for the four-vertex
is defined by
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(140.00,21.00)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(23.00,15.00){\circle*{5.20}}
\put(65.00,13.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(100.00,9.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(89.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(106.00,19.00){\line(-1,0){8.00}}
\put(98.00,11.00){\line(1,0){8.00}}
\put(75.00,11.00){\line(0,1){8.00}}
\put(67.00,19.00){\line(0,-1){8.00}}
\put(20.00,18.00){\line(1,-1){6.00}}
\put(26.00,18.00){\line(-1,-1){6.00}}
\put(35.00,15.00){\makebox(0,0)[cc]{$=$}}
\put(125.00,11.00){\circle*{3.33}}
\put(125.00,19.00){\circle*{3.33}}
\put(133.00,19.00){\circle*{3.33}}
\put(133.00,11.00){\circle*{3.33}}
\put(133.00,11.00){\line(1,0){3.67}}
\put(133.00,19.00){\line(1,0){3.67}}
\put(125.00,19.00){\line(-1,0){3.67}}
\put(121.33,11.00){\line(1,0){3.67}}
\put(114.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(44.00,12.00){\line(1,1){6.00}}
\put(44.00,18.00){\line(1,-1){6.00}}
\put(118.00,11.00){\makebox(0,0)[cc]{$_p$}}
\put(118.00,19.00){\makebox(0,0)[cc]{$_q$}}
\put(140.00,19.00){\makebox(0,0)[cc]{$_k$}}
\put(129.00,20.67){\makebox(0,0)[cc]{$_{l+q}$}}
\put(123.00,15.67){\makebox(0,0)[cc]{$_l$}}
\put(129.00,9.00){\makebox(0,0)[cc]{$_{l-p}$}}
\put(139.00,15.33){\makebox(0,0)[cc]{$_{l+q+k}$}}
\put(94.67,11.00){\makebox(0,0)[cc]{$_p$}}
\put(94.67,19.00){\makebox(0,0)[cc]{$_q$}}
\put(108.67,19.00){\makebox(0,0)[cc]{$_k$}}
\put(77.33,20.00){\makebox(0,0)[cc]{$_k$}}
\put(68.67,20.33){\makebox(0,0)[cc]{$_q$}}
\put(65.67,9.67){\makebox(0,0)[cc]{$_p$}}
\put(18.00,10.33){\makebox(0,0)[cc]{$_p$}}
\put(18.00,19.00){\makebox(0,0)[cc]{$_q$}}
\put(27.00,19.00){\makebox(0,0)[cc]{$_k$}}
\put(27.33,10.33){\makebox(0,0)[cc]{$_{-p-q-k}$}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{125.00}{19.00}{1}{133.00}{19.00}{2}
\emline{133.00}{19.00}{3}{133.00}{11.00}{4}
\emline{133.00}{11.00}{5}{125.00}{11.00}{6}
\emline{125.00}{11.00}{7}{125.00}{19.00}{8}
\put(57.00,15.00){\makebox(0,0)[cc]{$+$}}
\end{picture}
\vspace{-20mm}
\begin{equation}
\label{MG4}
\end{equation}
Equation (\ref{MG4})
states that the 1PI contributions to the
4-point Green's function $\Gamma_4$ are given by the sum of the bare
4-point vertex, the horizontal and vertical vertex functions,
and diagrams that are 2PR in both the $s$- and $t$-channels.
Equations (\ref{MG3})-(\ref{MG4})
together with the Schwinger-Dyson equation for propagator
(\ref{SDD})
form the closed system of non-linear integral equations
for the functions $D(p)$, $\Gamma _3(p,q)$,
$\Gamma _4 (p,q,k)$,
$ H(p,q,k)$ and
$ V(p,q,k)$.
These are the equations that we call the planar parquet equations
for the basic functions.
Note that the planar parquet equations are different from
the usual parquet equations \cite{TM} since
in the planar approximation so-called $u$-channel terms do not contribute.
\section{Weighted Number of Planar Parquet Diagrams for Quartic Interaction}
In this section we examine the
weighted number of planar parquet diagrams
in the matrix theory with quartic interaction. This means that the
dependence on the momentum variables in (\ref{SDD})-(\ref{MG4})
must be neglected. In this case equations (\ref{SDD})-(\ref{MG4})
reduce
to the following system of algebraic equations
\begin{equation}
\label{4D}
D=1-2 g D^2 - g D^4 \Gamma_4,
\end{equation}
\begin{equation}
\label{4G4}
\Gamma_4 =-g +H+V,
\end{equation}
\begin{equation}
\label{4H}
H=-g D^2 \Gamma_4 +V D^2 \Gamma_4,
\end{equation}
\begin{equation}
\label{4V}
V=-g D^2 \Gamma_4 +H D^2 \Gamma_4.
\end{equation}
The graphical representation of equations (\ref{4D})-(\ref{4V}) is
given in Figure 1.
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(150.00,66.30)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(75.00,0.00){\makebox(0,0)[cc]{Figure 1. Graphical representation of
equations (\ref{4D})-(\ref{4H}). }} \put(124.00,17.00){\line(0,-1){1.67}}
\put(35.00,15.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(80.00,17.00){\circle*{5.20}}
\put(110.00,11.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(124.00,17.00){\circle*{5.20}}
\put(96.00,17.00){\makebox(0,0)[cc]{$+$}}
\put(58.00,17.00){\makebox(0,0)[cc]{$=$}}
\put(40.00,40.00){\circle*{5.20}}
\put(85.00,38.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(120.00,34.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(109.00,40.00){\makebox(0,0)[cc]{$+$}}
\put(75.00,40.00){\makebox(0,0)[cc]{$+$}}
\put(53.00,40.00){\makebox(0,0)[cc]{$=$}}
\put(75.33,64.00){\circle{4.00}}
\put(102.00,60.00){\circle{4.00}}
\put(75.33,64.00){\circle{4.20}}
\put(102.00,60.00){\circle{4.20}}
\put(75.33,64.00){\circle{4.40}}
\put(102.00,60.00){\circle{4.40}}
\put(75.33,64.00){\circle{4.60}}
\put(102.00,60.00){\circle{4.60}}
\put(114.00,62.00){\makebox(0,0)[cc]{$+$}}
\put(88.33,62.00){\makebox(0,0)[cc]{$+$}}
\put(62.00,62.00){\makebox(0,0)[cc]{$+$}}
\put(38.00,62.00){\makebox(0,0)[cc]{$=$}}
\put(126.00,44.00){\line(-1,0){8.00}}
\put(118.00,36.00){\line(1,0){8.00}}
\put(95.00,36.00){\line(0,1){8.00}}
\put(87.00,44.00){\line(0,-1){8.00}}
\put(37.00,43.00){\line(1,-1){6.00}}
\put(43.00,43.00){\line(-1,-1){6.00}}
\put(108.00,13.00){\line(1,0){3.00}}
\put(111.00,21.00){\line(-1,0){3.00}}
\put(124.00,18.67){\line(1,0){4.00}}
\put(124.00,15.33){\line(1,0){4.00}}
\put(81.33,18.67){\line(1,0){2.67}}
\put(84.00,15.33){\line(-1,0){2.67}}
\put(37.00,21.00){\line(0,-1){8.00}}
\put(45.00,13.00){\line(0,1){8.00}}
\put(94.00,62.00){\line(1,0){8.00}}
\put(56.00,62.00){\line(-1,0){12.00}}
\put(68.00,62.00){\line(1,0){7.33}}
\put(71.00,17.00){\line(-3,1){4.00}}
\put(71.00,17.00){\line(-3,-1){4.00}}
\put(62.67,41.67){\line(4,-3){4.67}}
\put(138.00,62.00){\circle*{5.20}}
\put(67.33,41.67){\line(-4,-3){4.67}}
\put(120.00,62.00){\line(1,0){8.00}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{71.00}{17.00}{1}{80.00}{18.67}{2}
\emline{80.00}{15.33}{3}{71.00}{17.00}{4}
\emline{112.00}{21.00}{5}{124.00}{18.67}{6}
\emline{124.00}{15.33}{7}{112.33}{13.00}{8}
\emline{127.00}{62.00}{9}{150.00}{62.00}{10}
\emline{127.00}{62.00}{11}{138.00}{63.67}{12}
\emline{138.00}{60.33}{13}{128.00}{62.00}{14}
\emline{109.00}{62.00}{15}{102.00}{62.00}{16}
\emline{83.00}{62.00}{17}{75.33}{62.00}{18}
\emline{30.00}{62.00}{19}{17.00}{62.00}{20}
\end{picture}
\vspace{5mm}
The four equations (\ref{4D})-(\ref{4V}) contain four unknown functions
$D(g),~\Gamma_4 (g),~H(g),~V(g)$ and can be solved explicitly.
Excluding $\Gamma _4$, $H$ and $V$ from
(\ref{4D})-(\ref{4V}) we get
\begin{equation}
\label{4eq1}
g^3 D^6+g^2 D^5
+5 g^2 D^4+ 5 g D^3 + (1-5 g) D^2 - 2 D +1=0.
\end{equation}
In the second line of Table 1 below we present
numerical solutions of (\ref{4eq1})
for different values of the coupling constant
$g$. The corresponding $\Gamma_4$ is presented in the fourth line
of the table.
\vspace{10mm}
Table 1.
\begin{tabular}{llllllll}
\hline
$g$ & $0.001$ & $0.01$ & $0.1$ & $1$ & $10$ & $100$ & $1000$
\\
\hline
$D$
& $0.9980$
& $0.9808$
& $0.8576$
& $0.5161$
& $0.2129$
& $0.07372$
& $0.02400$
\\
$D^{(pl)}$
& $0.9980$
& $0.9808$
& $0.8576$
& $0.5161$
& $0.2130$
& $0.07374$
& $0.02401$
\\
\hline
$\Gamma _4$
& $-9.980 \cdot 10^{-4}$
& $-9.813 \cdot 10^{-3}$
& $-0.08786$
& $-0.6896$
& $-5.823$
& $-54.38$
& $-531.3$
\\
$\Gamma _4^{(pl)}$
& $-9.980 \cdot 10^{-4}$
& $-9.813 \cdot 10^{-3}$
& $-0.08786$
& $-0.6900$
& $-5.835$
& $-54.55$
& $-533.1$
\\
\hline
\end{tabular}
\vspace{5mm}
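The entries in the second and fourth lines of Table 1 can be reproduced with a short numerical check. The sketch below (our own illustration, not part of the original text) solves (\ref{4eq1}) with NumPy and recovers $\Gamma_4$ by inverting (\ref{4D}); identifying the physical branch with the largest real root in $(0,1]$ for $g>0$ is our assumption, justified by continuity from $D=1$ at $g=0$.

```python
import numpy as np

def parquet_D(g):
    """Physical root of the parquet equation (4eq1):
    g^3 D^6 + g^2 D^5 + 5 g^2 D^4 + 5 g D^3 + (1 - 5g) D^2 - 2 D + 1 = 0.
    Branch selection (our assumption, valid for g > 0): the largest real
    root in (0, 1], i.e. the root continuously connected to D = 1 at g = 0."""
    roots = np.roots([g**3, g**2, 5 * g**2, 5 * g, 1 - 5 * g, -2.0, 1.0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[(real > 0) & (real <= 1)].max()

def parquet_Gamma4(g):
    """Four-vertex obtained by inverting (4D): D = 1 - 2 g D^2 - g D^4 Gamma_4."""
    D = parquet_D(g)
    return (1.0 - D - 2.0 * g * D**2) / (g * D**4)

for g in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(g, parquet_D(g), parquet_Gamma4(g))
```

For $g=1$ this gives $D\approx 0.5161$ and $\Gamma_4\approx -0.690$, in agreement with Table 1.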
It is instructive to compare these results
with the exact weighted numbers of the planar diagrams.
These numbers are solutions of the corresponding planar
theory.
The zero-dimensional planar Green's functions $\Pi _n$ for the quartic
interaction satisfy the Schwinger-Dyson equations
\begin{equation}
\label{0SDE}
\Pi_n=-g\Pi_{n+2} +\sum_{m=2}^{n}\Pi_{m-2} \Pi_{n-m},
\end{equation}
that are zero-dimensional analogs of (\ref{PSDE}).
These equations admit the following solution
\begin {equation}
\label {EqPi4}
\Pi _4=g^{-1}(1-D^{(pl)}),
\end {equation}
where the planar propagator $D^{(pl)} \equiv
\Pi_2 $ satisfies the equation
\begin{equation}
\label{eqDpl}
27 g^2 D^{(pl)~2} +(1+18g) D^{(pl)} -1-16g=0.
\end{equation}
The numerical values for the planar propagator $D^{(pl)}$ and
the four-point vertex
$\Gamma _4^ {(pl)}$ $=$ $(D^{(pl)})^{-4}[\Pi _4-2(D^{(pl)})^2]$
are also presented in Table 1.
We see that the numerical values of the Green's functions
in the planar parquet approximation and in the planar approximation
for equal values of the coupling constant are strikingly close: the
difference is less than 0.1 percent over the whole range of $g$.
It is also instructive to compare the asymptotic behaviour of the Green's functions in the planar
parquet approximation with that in the planar approximation.
From equation (\ref{4eq1}) we have the following
asymptotic behavior of $D(g)$
in the strong coupling regime
\begin{equation}
\label{4aD1}
D(g) \sim \alpha_{par} g^{-1/2},
\end{equation}
where $\alpha_{par}$ is the solution of an equation
\begin{equation}
\label{4aD2}
\alpha_{par} ^6+
5 \alpha_{par} ^4-
5 \alpha_{par}^2 +1=0.
\end{equation}
The numerical solution of equation (\ref{4aD2}) is
\begin{equation}
\label{4aD3}
\alpha_{par}=0.76948~.
\end{equation}
The asymptotic behavior of the planar propagator
$D^{(pl)}$ follows from equation (\ref{eqDpl})
\begin{equation}
\label{4aplD1}
D^{(pl)}(g) \sim \alpha_{pl} g^{-1/2},
\end{equation}
where
\begin{equation}
\label{4aplD3}
\alpha_{pl}=\frac{4\sqrt{3}}{9}=0.76980~.
\end{equation}
The agreement between (\ref{4aD3}) and (\ref{4aplD3}) is more than
convincing.
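Both asymptotic coefficients can be verified numerically; the following sketch (our own cross-check, not from the original text) extracts $\alpha_{par}$ from (\ref{4aD2}) via the substitution $x=\alpha^2$ and compares $\alpha_{pl}$ with the large-$g$ behaviour of the planar propagator obtained from (\ref{eqDpl}).

```python
import numpy as np

def D_planar(g):
    """Planar propagator from the quadratic equation (eqDpl):
    27 g^2 D^2 + (1 + 18 g) D - (1 + 16 g) = 0, positive branch."""
    a, b, c = 27 * g**2, 1 + 18 * g, -(1 + 16 * g)
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

# alpha_par: strong-coupling coefficient of the parquet propagator,
# root of a^6 + 5 a^4 - 5 a^2 + 1 = 0 (eq. 4aD2); with x = a^2 this
# becomes the cubic x^3 + 5 x^2 - 5 x + 1 = 0.  Branch choice (our
# assumption): the largest positive root, matching D ~ alpha_par / sqrt(g).
x = np.roots([1.0, 5.0, -5.0, 1.0])
x = x[np.abs(x.imag) < 1e-9].real
alpha_par = np.sqrt(x[x > 0].max())

alpha_pl = 4 * np.sqrt(3) / 9   # closed form (4aplD3)
print(alpha_par, alpha_pl, D_planar(1e12) * 1e6)
```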
We also compare the propagator and the four-vertex function
in the parquet approximation with the exact results for negative
values of the coupling constant (see Table 2).
\vspace{5mm}
Table 2.
\begin{tabular}{lllllllll}
\hline
$g$ & $-0.001$ & $-0.01$ & $-0.05$ & $-0.08$ &$-0.0833$&
$-0.0834$ & $-0.0864$
& $-0.0865$
\\
\hline
$D$ & $1.0020$ & $1.0210$ & $1.1332$ & $1.2959$ & $~$ & $~$ &
$1.3889$ & $~~-$
\\
$D^{(pl)}$ & $1.0020$ & $1.0210$ & $1.1332$ & $1.2963$ &
$1.333$ & $~~-$ & $~$ & $~$
\\
\hline
$\Gamma _4$ & $0.0010$
& $0.0102$
& $0.05806$
& $0.1207$
&$~$
&$~$
& $0.1728$
& $~~-$
\\
$\Gamma _4^{(pl)}$ & $0.0010$
& $0.0102$
& $0.05806$
& $0.1214$
& $0.1403$
& $~~-$
& $~$
& $~$
\\
\hline
\end{tabular}
\vspace{10mm}
Remarkably, the solution of the parquet equation has a phase
transition at $g=g_0$, $-0.0865<g_0<-0.0864$, which is related to a
root branch point of the solution. In the planar approximation the
phase transition is caused by a square-root branch point and
occurs at $g_0=-1/12=-0.0833(3)$.
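The location of the parquet branch point quoted above can be bracketed numerically. The sketch below (our own check) follows the physical root of (\ref{4eq1}) by continuation from $g=0$ toward negative $g$ and records where it collides with another real root and moves into the complex plane.

```python
import numpy as np

def roots_at(g):
    """All roots of the parquet polynomial (4eq1) at coupling g."""
    return np.roots([g**3, g**2, 5 * g**2, 5 * g, 1 - 5 * g, -2.0, 1.0])

# Follow the physical branch (D = 1 at g = 0) toward negative coupling;
# at the branch point two real roots merge and move into the complex plane.
D, g_crit = 1.0, None
for g in np.linspace(-1e-4, -0.09, 4000):
    r = roots_at(g)
    r = r[np.argmin(np.abs(r - D))]   # root nearest the previous value of D
    if abs(r.imag) > 1e-7:            # left the real axis: we passed g_0
        g_crit = g
        break
    D = r.real
print("branch point bracketed near g_0 =", g_crit)
```

Just before the collision the propagator exceeds its free value, $D\approx 1.39$, consistent with the last entry of Table 2.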
\section{Weighted Numbers of Planar Parquet Diagrams for
Cubic Interaction}
In this section we
consider the weighted numbers
of planar parquet diagrams for the matrix model
with the cubic interaction.
In this case the planar parquet equations (\ref{SDD})-(\ref{MG4})
reduce to the form
\begin{equation}
\label{D}
D=1+ \lambda D^3 \Gamma_3,
\end{equation}
\begin{equation}
\label{parG3}
\Gamma_3 = \lambda +D^2 V \Gamma_4 + D^3 \Gamma _3^3,
\end{equation}
\begin{equation}
\label{parG}
\Gamma _4 = H + V +D^4 \Gamma_3^4,
\end{equation}
\begin{equation}
\label{parH}
H=D^2 \Gamma_4 V+
D^3 \Gamma_3^2 V+
D^3 \Gamma_4 \Gamma_3^2,
\end{equation}
\begin{equation}
\label{parV}
V=D^2 \Gamma_4 H+
D^3 \Gamma_3^2 H+
D^3 \Gamma_4 \Gamma_3^2.
\end{equation}
A graphical representation of these equations
is given in Figure 2.
\vspace{5mm}
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(137.00,90.60)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(68.00,17.00){\line(0,-1){1.67}}
\put(25.00,15.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(54.00,11.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(68.00,17.00){\circle*{5.20}}
\put(33.00,40.00){\circle*{5.20}}
\put(55.00,38.00){\rule{12.00\unitlength}{4.00\unitlength}}
\put(90.00,34.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(79.00,40.00){\makebox(0,0)[cc]{$+$}}
\put(96.00,44.00){\line(-1,0){8.00}}
\put(88.00,36.00){\line(1,0){8.00}}
\put(65.00,36.00){\line(0,1){8.00}}
\put(57.00,44.00){\line(0,-1){8.00}}
\put(30.00,43.00){\line(1,-1){6.00}}
\put(36.00,43.00){\line(-1,-1){6.00}}
\put(52.00,13.00){\line(1,0){3.00}}
\put(55.00,21.00){\line(-1,0){3.00}}
\put(68.00,18.67){\line(1,0){4.00}}
\put(68.00,15.33){\line(1,0){4.00}}
\put(27.00,21.00){\line(0,-1){8.00}}
\put(35.00,13.00){\line(0,1){8.00}}
\put(33.00,65.00){\circle*{5.20}}
\put(28.00,65.00){\line(1,0){5.00}}
\put(36.67,68.67){\line(-1,-1){3.67}}
\put(33.00,65.00){\line(1,-1){3.67}}
\put(125.00,70.00){\circle*{3.33}}
\put(125.00,60.00){\circle*{3.33}}
\put(88.00,12.00){\circle*{3.33}}
\put(88.00,22.00){\circle*{3.33}}
\put(100.00,17.00){\circle*{5.20}}
\put(88.00,22.00){\line(-1,0){3.67}}
\put(84.33,12.00){\line(1,0){3.67}}
\put(100.00,18.67){\line(1,0){4.33}}
\put(100.00,15.33){\line(1,0){4.33}}
\put(118.00,11.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(120.00,21.00){\line(-1,0){4.00}}
\put(120.00,13.00){\line(-1,0){4.00}}
\put(133.00,13.00){\circle*{3.33}}
\put(133.00,21.00){\circle*{3.33}}
\put(110.00,17.00){\makebox(0,0)[cc]{$+$}}
\put(78.00,17.00){\makebox(0,0)[cc]{$+$}}
\put(45.00,17.00){\makebox(0,0)[cc]{$=$}}
\put(45.00,40.00){\makebox(0,0)[cc]{$=$}}
\put(115.00,36.00){\circle*{3.33}}
\put(115.00,44.00){\circle*{3.33}}
\put(123.00,44.00){\circle*{3.33}}
\put(123.00,36.00){\circle*{3.33}}
\put(123.00,36.00){\line(1,0){3.67}}
\put(123.00,44.00){\line(1,0){3.67}}
\put(115.00,44.00){\line(-1,0){3.67}}
\put(111.33,36.00){\line(1,0){3.67}}
\put(125.00,70.00){\line(1,0){3.67}}
\put(125.00,60.00){\line(1,0){3.67}}
\put(104.00,40.00){\makebox(0,0)[cc]{$+$}}
\put(115.00,65.00){\line(-1,0){4.00}}
\put(65.00,88.00){\line(1,0){12.00}}
\put(112.00,88.00){\circle*{5.20}}
\put(100.00,88.00){\line(-1,0){10.00}}
\put(83.67,88.00){\makebox(0,0)[cc]{$+$}}
\put(58.00,88.00){\makebox(0,0)[cc]{$=$}}
\put(103.00,65.00){\makebox(0,0)[cc]{$+$}}
\put(44.33,65.00){\makebox(0,0)[cc]{$=$}}
\put(57.00,65.00){\line(6,5){4.00}}
\put(57.00,65.00){\line(6,-5){4.00}}
\put(53.00,65.00){\line(1,0){4.00}}
\put(112.00,90.00){\line(0,-1){3.67}}
\put(68.00,18.67){\line(0,-1){3.33}}
\put(67.00,65.00){\makebox(0,0)[cc]{$+$}}
\put(133.00,13.00){\line(1,0){4.00}}
\put(133.00,21.00){\line(1,0){4.00}}
\put(116.00,65.00){\circle*{3.33}}
\put(80.00,65.00){\circle*{5.20}}
\put(91.00,59.00){\rule{4.00\unitlength}{12.00\unitlength}}
\put(93.00,69.00){\line(1,0){4.00}}
\put(93.00,61.00){\line(1,0){4.00}}
\put(80.00,65.00){\line(-1,0){5.67}}
\put(75.00,0.00){\makebox(0,0)[cc]{Figure 2. Graphical representation of
equations (\ref{D})-(\ref{parH}).}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{56.00}{21.00}{1}{68.00}{18.67}{2}
\emline{68.00}{18.67}{3}{68.00}{15.33}{4}
\emline{68.00}{15.33}{5}{56.00}{13.00}{6}
\emline{88.00}{12.00}{7}{88.00}{22.00}{8}
\emline{88.00}{22.00}{9}{100.00}{18.67}{10}
\emline{100.00}{15.33}{11}{88.00}{12.00}{12}
\emline{120.00}{21.00}{13}{133.00}{21.00}{14}
\emline{133.00}{21.00}{15}{133.00}{13.00}{16}
\emline{133.00}{13.00}{17}{120.00}{13.00}{18}
\emline{123.00}{44.00}{19}{123.00}{36.00}{20}
\emline{123.00}{36.00}{21}{115.00}{36.00}{22}
\emline{115.00}{36.00}{23}{115.00}{44.00}{24}
\emline{115.00}{44.00}{25}{123.00}{44.00}{26}
\emline{115.00}{65.00}{27}{125.00}{60.00}{28}
\emline{125.00}{60.00}{29}{125.00}{70.00}{30}
\emline{125.00}{70.00}{31}{115.00}{65.00}{32}
\emline{100.00}{88.00}{33}{112.00}{89.67}{34}
\emline{112.00}{86.33}{35}{101.00}{88.00}{36}
\emline{51.00}{88.00}{37}{35.00}{88.00}{38}
\emline{80.00}{66.67}{39}{93.00}{69.00}{40}
\emline{93.00}{69.00}{41}{93.00}{61.00}{42}
\emline{93.00}{61.00}{43}{80.00}{63.33}{44}
\emline{112.00}{88.00}{45}{123.00}{88.00}{46}
\end{picture}
\vspace{5mm}
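The closed system (\ref{D})-(\ref{parV}) is also convenient for direct numerical solution. The sketch below (plain Python; the coupling $\lambda=0.1$ and the simultaneous fixed-point update are illustrative choices on our part) iterates the five equations from the free-theory starting point:

```python
lam = 0.1  # illustrative coupling, small enough for convergence

# Start from the free theory: D = 1, Gamma_3 = lambda, the rest zero.
D, G3, G4, H, V = 1.0, lam, 0.0, 0.0, 0.0
for _ in range(200):
    H_new = D**2 * G4 * V + D**3 * G3**2 * V + D**3 * G4 * G3**2   # eq. (parH)
    V_new = D**2 * G4 * H + D**3 * G3**2 * H + D**3 * G4 * G3**2   # eq. (parV)
    G4_new = H + V + D**4 * G3**4                                  # eq. (parG)
    G3_new = lam + D**2 * V * G4 + D**3 * G3**3                    # eq. (parG3)
    D_new = 1.0 + lam * D**3 * G3                                  # eq. (D)
    D, G3, G4, H, V = D_new, G3_new, G4_new, H_new, V_new
```

The fixed point gives $D\approx 1.0104$ and $\Gamma_3\approx 0.10106$, in agreement with the $\lambda=0.1$ column of Table 3; by the symmetry between (\ref{parH}) and (\ref{parV}), $H=V$ at every iteration.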
Note that
using equations (\ref{parG})-(\ref{parV})
one can rewrite the equation for $\Gamma_3$ in the form
\begin{equation}
\label{parG3a}
\Gamma_3= \lambda + \lambda D^2 \Gamma_4
+ \lambda D^3 \Gamma_3^2 ,
\end{equation}
or graphically
\unitlength=1.00mm
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\begin{picture}(128.67,21.66)
\special{em:linewidth 0.4pt}
\linethickness{0.4pt}
\put(33.00,15.00){\circle*{5.20}}
\put(28.00,15.00){\line(1,0){5.00}}
\put(36.67,18.67){\line(-1,-1){3.67}}
\put(33.00,15.00){\line(1,-1){3.67}}
\put(90.00,15.00){\circle*{5.20}}
\put(94.00,16.67){\line(-1,0){4.00}}
\put(90.00,16.67){\line(0,-1){3.33}}
\put(90.00,13.33){\line(1,0){4.00}}
\put(125.00,20.00){\circle*{3.33}}
\put(125.00,10.00){\circle*{3.33}}
\put(125.00,20.00){\line(1,0){3.67}}
\put(125.00,10.00){\line(1,0){3.67}}
\put(78.00,15.00){\line(-1,0){4.00}}
\put(115.00,15.00){\line(-1,0){4.00}}
\put(103.00,15.00){\makebox(0,0)[cc]{$+$}}
\put(44.33,15.00){\makebox(0,0)[cc]{$=$}}
\put(57.00,15.00){\line(6,5){4.00}}
\put(57.00,15.00){\line(6,-5){4.00}}
\put(53.00,15.00){\line(1,0){4.00}}
\put(90.00,16.67){\line(0,-1){3.33}}
\put(67.00,15.00){\makebox(0,0)[cc]{$+$}}
\special{em:linewidth 1.2pt}
\linethickness{1.2pt}
\emline{115.00}{15.00}{1}{125.00}{10.00}{2}
\emline{125.00}{10.00}{3}{125.00}{20.00}{4}
\emline{125.00}{20.00}{5}{115.00}{15.00}{6}
\emline{78.00}{15.00}{7}{90.00}{16.67}{8}
\emline{90.00}{13.33}{9}{79.00}{15.00}{10}
\end{picture}
\vspace{-20mm}
$$~$$
Equation (\ref{parG3a})
is just the Schwinger-Dyson equation for the 1PI three-point
Green's function $\Gamma _3$.
Eliminating $\Gamma_3$, $\Gamma _4$, $H$, and $V$
from equations (\ref{D})-(\ref{parV}), we get the equation for $D$
\begin{equation}
\label{eqD}
2 \lambda ^6 D^9-3 \lambda ^4 D^6(D-1)+
\lambda ^2 D^4 (D-1)^2 -(D-1)^5=0.
\end{equation}
Let us compare the solution of the planar parquet equations
(\ref{D})-(\ref{parV}) with the solution of the planar theory.
The planar 2-point and 1PI 3-point Green's function are
given by formulas \cite{BIPZ}:
\begin{equation}
\label{plD}
D^{(pl)}=\frac{1-3\tau}{(1-2\tau )^2},~~~
\Gamma_3^{(pl)}= \lambda \frac{(1-2\tau)^2(1-4 \tau)}{(1-3 \tau)^3},
\end{equation}
where $\tau$ satisfies the relation
\begin{equation}
\label{tau}
\lambda ^2=\tau (1-2 \tau)^2.
\end{equation}
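For comparison, the planar formulas (\ref{plD})-(\ref{tau}) can be evaluated in the same way. A sketch (plain Python; bisection for the perturbative branch $\tau\in[0,0.05]$ at the illustrative value $\lambda=0.1$, where the map $\tau\mapsto\tau(1-2\tau)^2$ is increasing):

```python
lam = 0.1  # illustrative coupling

# Solve lambda^2 = tau (1 - 2 tau)^2, eq. (tau), on the perturbative
# branch; the left-hand side is increasing in tau for tau < 1/6.
def g(tau):
    return tau * (1.0 - 2.0 * tau)**2 - lam**2

lo, hi = 0.0, 0.05
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
tau = 0.5 * (lo + hi)

# Planar Green's functions from eq. (plD).
D_pl = (1.0 - 3.0 * tau) / (1.0 - 2.0 * tau)**2
G3_pl = lam * (1.0 - 2.0 * tau)**2 * (1.0 - 4.0 * tau) / (1.0 - 3.0 * tau)**3
```

This gives $D^{(pl)}\approx 1.0104$ and $\Gamma_3^{(pl)}\approx 0.10106$ at $\lambda=0.1$, matching the planar rows of Table 3.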
The results of numerical solutions of the planar parquet equations
(\ref{D})-(\ref{parV}) as well as equations
(\ref{plD})-(\ref{tau}) for
planar Green's functions
are presented in Tables 3 and 4.
\vspace{5mm}
Table 3. Perturbative solution.
\begin{tabular}{llllllllll}
\hline
$ \lambda $ & $0.001$ & $0.01$ & $0.1$ & $0.2$ & $0.27$
& $0.272$ & $0.273$
&$0.279$ & $0.280$
\\
\hline
$D$ & $1.00$ & $1.00$ & $1.0104$ & $1.0486$ &
$1.1197$ & $~$ & $~$ & $1.1458$ & $~~ - $
\\
$D^{(pl)}$
& $1.00$
& $1.00$
& $1.0104$
& $1.0486$
& $1.1200$
& $1.1245$
& $~~-$
& $~$ & $~$
\\
\hline
$\Gamma _3$ & $ 0.0010$ & $0.010$ & $0.10106$
& $0.21084$ &
$0.31577$ & $~$ & $~$ & $0.34720$ & $~~-$
\\
$\Gamma _3^{(pl)}$
& $0.0010$
& $0.010$
& $0.10106$
& $0.21084$
& $0.31644$
& $0.32202$
& $~~-$
& $~$ & $~$
\\
\hline
\end{tabular}
\vspace{5mm}
Table 4. Nonperturbative solution.
\begin{tabular}{llllllll}
\hline
$ \lambda $ & $0.001$ & $0.01$ & $0.1$ & $1$ & $10$ & $100$ & $1000$
\\
\hline
$D$
& $-26783$
& $-864$
& $-30.78$
& $-1.96$
& $-0.286$
& $-0.0565$
& $-0.0119$
\\
$D^{(pl)}$
& $-2.51 \cdot 10^5$
& $-2640$
& $-39.6$
& $-2.00$
& $-0.287$
& $-0.0566$
& $-0.0120$
\\
\hline
$\Gamma _3$
& $1.39 \cdot 10^{-6}$
& $1.34 \cdot 10^{-4}$
& $0.0109$
& $0.391$
& $5.48$
& $58.6$
& $594$
\\
$\Gamma _3^{(pl)}$
& $1.58 \cdot 10^{-8}$
& $1.43 \cdot 10^{-5}$
& $0.00653$
& $0.375$
& $5.42$
& $58.2$
& $590$
\\
\hline
\end{tabular}
\vspace{5mm}
Table 3 corresponds to a perturbative solution.
When $ \lambda =0$ we have
$D=1$ and $\Gamma_3=0$, which corresponds to the free theory. There is a phase
transition point
$ \lambda =\lambda_0$ where $0.279< \lambda_0<0.280 $ for the planar
parquet approximation and $\lambda_0 =\sqrt{2} / ( 3\sqrt{3} )$ $=$
$0.2722 $ for the planar approximation.
Table 4 corresponds to another branch, which is a
nonperturbative solution.
From (\ref{eqD}) we find
the following asymptotic behavior for $D$ in the strong coupling regime
\begin{equation}
\label{asD}
D(g) \sim c_{par} \lambda ^{-2/3},
\end{equation}
where $c_{par}$ satisfies the equation
\begin{equation}
\label{c}
2 c_{par}^9 +3 c_{par}^6 +1=0.
\end{equation}
The numerical solution of equation (\ref{c}) is
\begin{equation}
\label{cc}
c_{par}=-1.18823~.
\end{equation}
The asymptotic behavior of the planar propagator is found from (\ref{plD})
and (\ref{tau}) and has the form:
\begin{equation}
\label{plasD}
D^{(pl)}( \lambda )
\sim c_{pl} \lambda ^{-2/3}
\end{equation}
with
\begin{equation}
\label{plcc}
c_{pl}=-\frac{3(4)^{1/3}}{4}=-1.19055~.
\end{equation}
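The closeness of (\ref{cc}) and (\ref{plcc}) can be verified directly: substituting $u=c_{par}^3$ in (\ref{c}) gives the cubic $2u^3+3u^2+1=0$, which has a single real root. A plain-Python sketch:

```python
# Substituting u = c^3 in eq. (c) gives 2 u^3 + 3 u^2 + 1 = 0.
def h(u):
    return 2.0 * u**3 + 3.0 * u**2 + 1.0

# The cubic has exactly one real root, bracketed by [-2, -1]:
# h(-2) = -3 < 0 < h(-1) = 2.
lo, hi = -2.0, -1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
u = 0.5 * (lo + hi)
c_par = -((-u) ** (1.0 / 3.0))          # real cube root, eq. (cc)

c_pl = -3.0 * 4.0 ** (1.0 / 3.0) / 4.0  # closed form from eq. (plcc)
```

The two coefficients differ by about $2\cdot 10^{-3}$.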
\section{Conclusion}
In conclusion, in this paper we found that the planar parquet
approximation gives an unexpectedly good agreement with the known exact
results for large $N$ matrix models.
The parquet approximation in quantum field theory
is one of few known approximations
which admits a non-perturbative closed set of equations
for finite Green's functions.
The results of this paper mean that the class of parquet diagrams
is rather comprehensive among all
planar diagrams of matrix theories. This makes
an investigation of the planar parquet approximation for QCD reasonable;
this will be the subject of our further work.
$$~$$
{\bf ACKNOWLEDGMENT}
\vspace{5mm}
Both authors are supported by RFFR
grant 96-01-00608.
We are grate\-ful to
G.E.Aru\-tyu\-nov, P.B.Med\-ve\-dev and I.Vo\-lo\-vich for use\-ful
dis\-cussions.
$$~$$
\section{Introduction}
The price formation process in financial markets involves equating supply and demand for securities over time for arriving investors with heterogeneous trading preferences. In present day markets, large investors act on their underlying trading preferences, sometimes called {\em parent demands}, by splitting their trading into dynamic sequences of smaller orders, called {\em child orders} (see O'Hara (2015)) to minimize their price impact. Since the parent demands driving child-order trading are private information, investors use information from arriving child orders to form inferences over time about the dynamically evolving fundamental state of the market in terms of imbalances in the underlying aggregate parent demands. In particular, investors form forecasts of future trading demand imbalances and the associated pressure on future market-clearing prices and incorporate this information in their current child orders. Given the widespread prevalence of optimized order-splitting of parent orders into flows of child orders, dynamic learning about aggregate parent demands is a critical part of market dynamics.
This paper is the first to provide an analytically tractable equilibrium model of dynamic learning and trading with parent trading demands. We consider a continuous-time model with high-frequency trading at times $t\in[0,1]$ over short time-horizons with $[0,1]$ being a day or an hour. Trading occurs between price-sensitive optimizing traders with two different types of parent trading targets: One group has fixed individual targets, and the other group wants to track a stochastically evolving target over time. Since parent targets are initially not public, information about parent demand imbalances is partially revealed through market-clearing stock prices. Our analysis models the learning process and determines the endogenous investor holdings and equilibrium stock-price process.
Our main results are:
\begin{itemize}
\item We construct and solve two different equilibrium models: A simpler price-impact equilibrium and a subgame perfect Nash financial-market equilibrium. In the subgame perfect Nash equilibrium, price impact is endogenous. In addition, we find that these two equilibria are numerically very similar.
\item A practical application of our model is that we can compute total trading costs for investors given the effects of dynamic learning and front-running by other investors.
\item Our model replaces the exogenous price-elastic residual demand used in both Brunnermeier and Pedersen (2005) and Carlin, Lobo, and Viswanathan (2007) with endogenous demands coming from profit-maximizing strategic Brownian motion trackers. We find that this change leads to a combination of liquidity provision and front-running.
\end{itemize}
Our paper advances several strands of research on market microstructure. First, dynamic learning and trading have been extensively studied in the context of markets with strategic investors with long-lived asymmetric information, most notably in models based on Kyle (1985). However, equilibrium trading, learning, and pricing in markets with optimized dynamic order-splitting by large uninformed investors are less well understood. In our model, all price effects are due to price pressure to equate supply and demand rather than adverse selection. Second, Choi, Larsen, and Seppi (2019) construct an equilibrium with optimized dynamic trading and learning in a market with a strategic rebalancer with an end-of-day trading target and an informed investor, who trades on private long-lived asset-payoff information. By filtering the order flow over time, the rebalancer learns about the underlying asset payoff, the informed investor learns about the rebalancer's trading target, and market makers learn about both when setting prices. That earlier paper provides a characterization result for equilibrium and gives numerical examples but does not have an existence proof nor analytic solutions. In contrast, our model is solved analytically and gives the equilibrium in closed form. Third, Brunnermeier and Pedersen (2005) and Carlin, Lobo, and Viswanathan (2007) show how dynamic rebalancing by a large investor can lead to predatory trading. However, these papers abstract from the learning problem by assuming the parent trading needs are publicly observable. They also make an ad hoc assumption about the price sensitivity of the residual market-maker trading demand in the form of exogenous price-elastic noise traders. In contrast, our model assumes the underlying parent trading demands are private information. In addition, our prices are rationally set with no ad hoc residual demand function. 
Fourth, a large body of research models optimal order-splitting strategies for a single strategic investor given an exogenous pricing rule with no learning about latent trading demands of other investors (see, e.g., Almgren and Chriss (1999, 2000), Almgren (2003), and Schied and Sch\"oneborn (2009)). In contrast, we solve for optimal trades and equilibrium pricing jointly. Van Kervel, Kwan, and Westerholm (2020) solve for optimal trading strategies for two dynamic rebalancers with learning over time about each other's latent trading demands. This leads to predictions about the effect of aggregate parent demand on individual investor child orders, which are then verified empirically. However, they assume an ad hoc linear pricing rule, and there are no existence proofs or analytic solutions. In contrast, our pricing rule is endogenously determined in equilibrium, and we solve our model analytically. In addition, trading in our model is a combination of front-running along with trading demand accommodation (as in van Kervel, Kwan, and Westerholm (2020)).
Our analysis also uses a modeling approach from the asset-pricing literature for non-dividend paying stocks that makes the mathematics of our model tractable. The simplification involves finding sufficient conditions for equilibrium price drifts that clear the market without determining the levels of market-clearing prices as discounted future cash flows. The monograph Karatzas and Shreve (1998) describes this approach. Atmaz and Basak (2021) show that non-dividend paying stocks are relevant for asset pricing. However, models using the non-dividend paying stock approach are new in the mainstream microstructure literature. G\^arleanu and Pedersen (2016), Bouchard, Fukasawa, Herdegen, and Muhle-Karbe (2018), and Noh and Weston (2020) use the zero-dividend stock approach to model prices given exogenous transaction costs. Our model uses this approach with endogenous price impact.
\section{Model}
We model equilibrium trading, learning, and pricing in a market with a risky stock and a riskless bank account over a short time horizon $[0,1]$ (e.g., a trading day). For simplicity, the net supply of both the stock and bank account are set to zero. Since the time horizon is short, the risk-free interest rate on the bank account is set to zero. The stock differs from the bank account in two ways: First, the investors have individual parent demands for the stock. Second, the stock valuation is stochastic over time. In particular, we can view stock valuation as the sum of two components. One component is a fundamental valuation of a stream of future dividends absent price pressure from trading targets. The second component is the incremental valuation impact of parent trading demand imbalances on prices such that equilibrium investor stock holdings clear the market. This price pressure component is also random. It is the price pressure component that is the focus of our analysis. For simplicity, our analysis treats these two components as being orthogonal. Moreover, since our focus is on equilibrium price pressure, we ignore the dividend valuation component. Thus, hereafter, when we refer to the ``stock price'', this is shorthand, for brevity, for the ``price pressure valuation component of stock prices.'' However, in a more complicated model, a separate fundamental dividend valuation component can be added to our stock price pressure valuation to get the full stock price.
Two different groups of traders trade in our equilibrium model.
\begin{itemize}
\item[(i)] Price-sensitive rebalancers. Rebalancer $i\in\{1,...,M\}$ maximizes her expected profit subject to a parent trading target $\tilde{a}_i$ where $\tilde{a}_i$ is private information for $i$. The targets $(\tilde{a}_1,...,\tilde{a}_M)$ are assumed homogeneously distributed $\tilde{a}_i \sim \mathcal{N}(0,\sigma_{\tilde{a}}^2)$ for all rebalancers $i\in\{1,...,M\}$ with identical zero means and standard deviations $\sigma_{\tilde{a}}$. The aggregate target is
\begin{align}\label{aS}
\tilde{a}_\Sigma := \sum_{i=1}^M \tilde{a}_i.
\end{align}
Rebalancer $i$'s control is her stock holdings, which are denoted by $(\theta_{i,t})_{t\in[0,1]}$ for $i\in\{1,...,M\}$. For simplicity, the initial endowed holdings of both the bank account and the stock are normalized to zero for all rebalancers.
When $\tilde{a}_i= 0$, rebalancer $i$ is called a ``high-frequency" liquidity provider. Because $\tilde{a}_i$ is private information for $i$, other traders $k$, $k\neq i$, do not know whether rebalancer $i$ has an active latent trading demand or is a pure liquidity provider.
\item[(ii)] Price-sensitive trackers. Trackers $j\in\{M+1,...,M+\bar{M}\}$ all want to track a Brownian motion process $w_t$ over time $t\in[0,1]$, where their dynamic target $w_t$ is modeled by the exogenously given process
\begin{align}\label{w_t}
w_t := w_0 + w^\circ_t,\quad t\in(0,1],
\end{align}
where the initial target is $w_0 \sim \sN(0,\sigma^2_{w_0})$, the drift is zero, and $(w^\circ_t)_{t\in[0,1]}$ is a standard Brownian motion (i.e., $w^\circ_t$ starts at zero, has a zero drift, and a unit volatility).\footnote{Adding a volatility coefficient $\sigma_w$ in front of $w^\circ_t$ in \eqref{w_t} does not increase model flexibility because --- as we shall see --- the stock volatility $\gamma$ is a free model parameter and $\gamma$ and $\sigma_w$ would play identical roles. Moreover, our model can be extended to include a drift term $\mu_w t$ in \eqref{w_t}.} While trackers observe the same $w_t$ at time $t\in[0,1]$, rebalancers do not and instead filter $w_t$ over time $t\in[0,1]$. Tracker $j$'s control is her stock holdings, which are denoted by $(\theta_{j,t})_{t\in[0,1]}$ for $j\in\{M+1,...,M+\bar{M}\}$. Their initial stock and money market holdings are also normalized to zero.
We assume that the random variables $(\tilde{a}_1,...,\tilde{a}_M)$, $w_0$, and $(w^\circ_t)_{t\in[0,1]}$ are independent.
\end{itemize}
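The primitives above are straightforward to simulate. The sketch below (Python with NumPy; the parameter values $M=5$, $\sigma_{\tilde{a}}=1$, and $\sigma_{w_0}=1$ are illustrative choices on our part, not calibrated values) draws the targets and confirms the implied moments $\mathbb{V}[\tilde{a}_\Sigma]=M\sigma^2_{\tilde{a}}$ and $\mathbb{V}[w_1]=\sigma^2_{w_0}+1$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma_a, sigma_w0 = 5, 1.0, 1.0   # illustrative, not calibrated
n_paths = 200_000

# Rebalancer targets: i.i.d. N(0, sigma_a^2), private to each rebalancer.
a = rng.normal(0.0, sigma_a, size=(n_paths, M))
a_sigma = a.sum(axis=1)              # aggregate target, eq. (aS)

# Tracker target at t = 1: w_1 = w_0 + w^o_1 as in eq. (w_t), with the
# Brownian increment independent of the rebalancer targets.
w0 = rng.normal(0.0, sigma_w0, size=n_paths)
w1 = w0 + rng.normal(0.0, 1.0, size=n_paths)

var_a_sigma = a_sigma.var()          # should be close to M sigma_a^2 = 5
var_w1 = w1.var()                    # should be close to sigma_w0^2 + 1 = 2
```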
Randomness in the stock prices in our model (i.e., in the price pressure valuation effect) comes from learning about the traders' parent targets (which are initially private information of the individual rebalancers and the trackers) and from random changes over time in the trackers' target $w_t$. As we shall see, trackers will be able to infer the aggregate target $\tilde{a}_\Sigma$ in \eqref{aS} from the initial stock price, and so trackers have no need to filter the rebalancers' individual targets $(\tilde{a}_1, ..., \tilde{a}_M)$. The situation is different for rebalancer $i$, who can only observe her own target $\tilde{a}_i$ and past and current stock prices. Because these observations are insufficient to observe $\tilde{a}_\Sigma$ and $w_t$ separately, rebalancer $i$ filters based on $\tilde{a}_i$ and on past and current stock price observations to learn about the underlying latent parent demands $\tilde{a}_\Sigma$ and $w_t$.
In the following, index $k$ denotes any generic trader, index $i$ denotes a rebalancer, and index $j$ denotes a tracker. This allows us to express the stock-market clearing condition as
\begin{align}\label{xSigmainitial}
0= \sum_{k=1}^{M+\bar{M}} \theta_{k,t}= \sum_{i=1}^M \theta_{i,t}+\sum_{j=M+1}^{M+\bar{M}} \theta_{j,t},\quad t\in[0,1].
\end{align}
\subsection{Individual maximization problems}
This section introduces the individual maximization problems and the corresponding state processes needed to describe them. In the next sections, we will be using different filtrations depending on the application. We denote by $\mathcal{F}_{k,t}$ a generic filtration for trader $k\in \{1,...,M+\bar{M}\}$. As an example, $\mathcal{F}_{i,t}$ and $\mathcal{F}_{j,t}$ could denote
\begin{align}\label{Rfiltration}
\begin{split}
&\sigma(\tilde{a}_i,S_{i,u})_{u\in[0,t]},\quad t\in[0,1],\quad i\in\{1,...,M\},\\
&\sigma(w_u,S_{j,u})_{u\in[0,t]},\quad t\in[0,1],\quad j\in \{M+1,...,M+\bar{M}\},
\end{split}
\end{align}
where $(S_{i,t})_{t\in[0,1]}$ and $(S_{j,t})_{t\in[0,1]}$ denote perceived stock-price processes for a rebalancer $i$ and a tracker $j$. Several different perceived stock-price processes are specified in the next sections.
A generic trader's optimal stock holdings are determined in terms of a trade-off between expected terminal wealth $X_{k,1}$ and a penalty for deviations of their holdings $\theta_{k,t}$ over time from their parent target $\tilde{a}_i$ (rebalancers) or Brownian motion $w_t$ (trackers). An exogenous continuous (deterministic) function $\kappa:[0,1]\to(0,\infty]$ models the severity of the target penalty.\footnote{Our analysis can be extended to allow for different penalty functions for the two groups of traders.} The traders' objectives are
\begin{align}\label{Rproblem}
\begin{split}
&\sup_{\theta_{i,t}\in\mathcal{F}_{i,t}} \mathbb{E}\Big[ X_{i,1} - \int_0^1 \kappa(t)(\tilde{a}_i-\theta_{i,t})^2dt\Big|\,\sigma(\tilde{a}_i,S_{i,0})\Big],\quad i\in\{1,...,M\},\\
&\sup_{\theta_{j,t}\in \mathcal{F}_{j,t}} \mathbb{E}\Big[ X_{j,1} - \int_0^1 \kappa(t)(w_t-\theta_{j,t})^2dt\Big|\,\sigma(w_0,S_{j,0})\Big],\quad j\in\{M+1,...,M+\bar{M}\}.
\end{split}
\end{align}
Since traders' endowed stock holdings are normalized to zero, $\tilde{a}_i$ is the ideal holdings for rebalancer $i$ and $w_t$ is the ideal holdings for tracker $j$. The suprema in \eqref{Rproblem} are taken over progressively measurable holding processes $\theta_{i,t}$ and $\theta_{j,t}$ with respect to traders' filtrations $\mathcal{F}_{i,t}$ and $\mathcal{F}_{j,t}$ that are square integrable (to rule out doubling strategies)
\begin{align}\label{squareint}
\mathbb{E}\left[ \int_0^1 \theta_{k,t}^2 dt \right]<\infty,\quad k\in\{1,...,M+\bar{M}\}.
\end{align}Terminal wealth $X_{k,1}$ in \eqref{Rproblem} is generated by trader $k$'s perceived wealth process
\begin{align}\label{Xit}
\begin{split}
dX_{k,t} &:= \theta_{k,t}dS_{k,t},\quad X_{k,0} := 0,\quad k\in\{1,...,M+\bar{M}\},
\end{split}
\end{align}
which is affected by $k$'s holdings $\theta_{k,t}$ both directly and also indirectly via the impact of $k$'s holdings on an associated perceived stock-price process $S_{k,t}$. As a result of the price impact of $\theta_{k,t}$ on $S_{k,t}$, trader $k$'s holdings $\theta_{k,t}$ are price sensitive. In \eqref{Xit}, the zero initial wealth $X_{k,0}=0$ is because trader $k$'s initial endowed money market and stock holdings are normalized to zero. Given the objectives in \eqref{Rproblem}, trading reflects a combination of motives: Investors seek to have stock holdings close to their own targets $\tilde{a}_i$ and $w_t$, but they also seek to increase their expected terminal wealth by trading on price pressure from other investors trading on their targets. Thus, traders demand liquidity (to come close to their targets), supply liquidity for markets to clear (by being willing to deviate from their targets, given appropriate price incentives, so that other traders can trade towards theirs), and front-run predictable future price pressure.
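The trade-off between target tracking and trading on price pressure can be illustrated in a deliberately simplified one-period version of \eqref{Rproblem} (a toy sketch, not the model itself): with a known constant drift $\mu$, no price impact, and penalty $\kappa$, a trader maximizes $\theta\mu-\kappa(a-\theta)^2$, whose first-order condition gives $\theta^\star=a+\mu/(2\kappa)$, i.e., the target holding tilted by the expected price drift.

```python
# One-period toy objective: theta * mu - kappa * (a - theta)^2.
# Illustrative numbers; mu, kappa, a are NOT the paper's parameters.
mu, kappa, a = 0.3, 2.0, 1.0

def objective(theta):
    return theta * mu - kappa * (a - theta)**2

# First-order condition mu + 2 kappa (a - theta) = 0 gives the optimum.
theta_star = a + mu / (2.0 * kappa)

# A crude grid search over theta confirms the analytic optimum.
grid = [i / 10000.0 for i in range(-20000, 40000)]
theta_grid = max(grid, key=objective)
```

The grid search confirms the analytic optimum; in the full model the drift and price impact are endogenous and the optimum is dynamic.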
\subsection{State processes} \label{sec:states}
Before considering specific stock-price perceptions in Sections \ref{sec:PI} and \ref{sec:eq1} below, we describe a set of conjectured state processes $(Y_t,\eta_t,q_{i,t},w_{i,t})$ for rebalancer $i\in\{1,...,M\}$. These processes are all endogenous in equilibrium. However, in constructing the equilibrium, it is convenient to describe their informational properties first, before showing how they arise in equilibrium. The processes $(Y_t,\eta_t)$ are public in the sense that they are adapted to $\mathcal{F}_{k,t}$ for all traders $k\in\{1,...,M+\bar{M}\}$. Furthermore, $\eta_t$ will be adapted to $\sigma(Y_u)_{u\in[0,t]}$. The state processes $(q_{i,t},w_{i,t})$ are specific to rebalancer $i$: they are only guaranteed to be adapted to $i$'s filtration $\mathcal{F}_{i,t}$, and they are not adapted to trader $k$'s filtration for $k\neq i$. There is a significant informational difference between trackers and rebalancers. Each tracker $j\in\{M+1,...,M+\bar{M}\}$ can directly observe $w_t$ in \eqref{w_t} and --- as we shall see --- can therefore infer the aggregate target $\tilde{a}_\Sigma$ in \eqref{aS} from the initial stock price. In contrast, rebalancer $i \in \{1,...,M\}$ learns about $w_t$ and $\tilde{a}_\Sigma$ using dynamic filtering.
The state process $Y_t$ in our model has the form
\begin{align}\label{Z}
Y_t:=w_t - B(t)\tilde{a}_\Sigma,\quad t\in[0,1],
\end{align}
where the aggregate target $\tilde{a}_\Sigma$ is defined in \eqref{aS} and $B:[0,1]\to\mathbb{R}$ is a smooth deterministic function of time that is endogenously determined in equilibrium. The process $Y_t$ is not directly observable for the rebalancers, but Lemma \ref{lemma_infer} below shows that $Y_t$ can be inferred from stock prices.
Because rebalancer $i\in\{1,...,M\}$ also knows her own target $\tilde{a}_i$, by observing $Y_t$ over time $t\in[0,1]$, she equivalently observes
\begin{align}\label{Zi}
\begin{split}
Y_{i,t}:&= Y_t + B(t)\tilde{a}_i\\
&= w_t - B(t)(\tilde{a}_\Sigma-\tilde{a}_i).
\end{split}
\end{align}
Unlike $Y_t$ in \eqref{Z}, the process $Y_{i,t}$ is independent of rebalancer $i$'s private trading target $\tilde{a}_i$ and satisfies
\begin{align}\label{RfiltrationQQQ}
\sigma(\tilde{a}_i,Y_u)_{u\in[0,t]} = \sigma(\tilde{a}_i,Y_{i,u})_{u\in[0,t]},\quad t\in[0,1].
\end{align}
For a continuously differentiable function $B:[0,1]\to\mathbb{R}$, we define the processes
\begin{align}\label{dwit}
\begin{split}
q_{i,t} &:= \mathbb{E}\left[\tilde{a}_\Sigma -\tilde{a}_i\,\Big|\, \sigma(Y_{i,u})_{u\in[0,t]}\right],\\
dw_{i,t}&:=dw_t-B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big)dt,\quad w_{i,0} := Y_{i,0},
\end{split}
\end{align}
for $t\in[0,1]$. In addition, the function $\Sigma(t)$ denotes the remaining variance
\begin{align}
\Sigma(t):= \mathbb{V}[\tilde{a}_\Sigma -\tilde{a}_i -q_{i,t}]=\mathbb{E}[(\tilde{a}_\Sigma -\tilde{a}_i -q_{i,t})^2],\quad t\in[0,1],
\end{align}
where the second equality follows from the zero-mean assumptions.
The following result is a special case of the Kalman-Bucy result from filtering theory.
\begin{lemma}[Kalman-Bucy] \label{lemKB} For a continuously differentiable function $B:[0,1]\to\mathbb{R}$, the process $w_{i,t}$ is independent of $\tilde{a}_i$, is a Brownian motion, and satisfies (modulo $\mathbb{P}$ null sets)
\begin{align}\label{RfiltrationQ}
\sigma(\tilde{a}_i,Y_{i,u})_{u\in[0,t]}=\sigma(\tilde{a}_i,w_{i,u})_{u\in[0,t]},\quad t\in[0,1].
\end{align}
$\hfill\diamondsuit$
\end{lemma}
Our equilibrium construction combines the stock-market clearing condition \eqref{xSigmainitial} with the following decomposition result:
\begin{lemma}\label{lem:decomp} Let $B:[0,1]\to \mathbb{R}$ be a continuously differentiable function. Then, the decomposition
\begin{align}\label{SUM1}
\sum_{i=1}^Mq_{i,t} = \eta_t + A(t) \tilde{a}_\Sigma,\quad t\in[0,1],
\end{align}
holds with the process $\eta_t$ being adapted to $\sigma(Y_u)_{u\in[0,t]}$ with $Y_t$ in \eqref{Z} and
\begin{align}\label{dY2E}
\begin{split}
A'(t)&= - \big(B'(t)\big)^2\Sigma(t)\big(A(t) +1\big),\quad A(0)=-\tfrac{(M-1)B(0)^2\sigma^2_{\tilde{a}}}{\sigma^2_{w_0} +B(0)^2(M-1)\sigma^2_{\tilde{a}}},\\
\Sigma(t)&= \frac{1}{\frac{1}{\mathbb{V}[\tilde{a}_\Sigma -\tilde{a}_i -q_{i,0}]}+\int_0^t \big(B'(u)\big)^2du},\\
d\eta_t & = - \big(B'(t)\big)^2\Sigma(t)\eta_tdt- MB'(t)\Sigma(t)dY_t,\quad \eta_0 =-\tfrac{M(M-1)B(0)\sigma^2_{\tilde{a}}}{\sigma^2_{w_0} +B(0)^2(M-1)\sigma^2_{\tilde{a}}}Y_0.
\end{split}
\end{align}
$\hfill\diamondsuit$
\end{lemma}
\noindent Because the targets $(\tilde{a}_1,...,\tilde{a}_M)$ are assumed independent and homogeneously distributed $\sN(0,\sigma^2_{\tilde{a}})$, the initial variance $\Sigma(0)=\mathbb{E}[(\tilde{a}_\Sigma -\tilde{a}_i -q_{i,0})^2]$ is identical across all rebalancers $i\in \{1,...,M\}$. This property and the formula for $\Sigma(t)$ in \eqref{dY2E} imply that $\Sigma(t)$ is also independent of index $i\in \{1,...,M\}$ for all $t\in[0,1]$.
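The deterministic coefficients in Lemma \ref{lem:decomp} are easy to evaluate numerically. In the sketch below (Python; the linear loading $B(t)=-0.5+0.3t$ and the parameters $M=5$, $\sigma_{\tilde{a}}=\sigma_{w_0}=1$ are purely hypothetical choices, since $B$ is endogenous in equilibrium), $\Sigma(t)$ is taken from its closed form in \eqref{dY2E} and $A(t)$ is integrated with an Euler scheme:

```python
# Hypothetical inputs: B(t) is endogenous in equilibrium; a linear
# loading B(t) = B0 + B1 t is used here purely for illustration.
M, sigma_a, sigma_w0 = 5, 1.0, 1.0
B0, B1 = -0.5, 0.3
Bp = lambda t: B1                     # B'(t)

# Initial conditions from the lemma (Gaussian projection at t = 0).
Va = (M - 1) * sigma_a**2             # prior variance of a_Sigma - a_i
Sigma0 = Va * sigma_w0**2 / (sigma_w0**2 + B0**2 * Va)
A = -(M - 1) * B0**2 * sigma_a**2 / (sigma_w0**2 + B0**2 * Va)
A_init = A

# Euler scheme for A'(t) = -B'(t)^2 Sigma(t) (A(t) + 1), with
# Sigma(t) = 1 / (1/Sigma(0) + int_0^t B'(u)^2 du) in closed form.
dt, t, I = 1e-4, 0.0, 0.0             # I accumulates int_0^t B'(u)^2 du
while t < 1.0 - 1e-12:
    Sigma = 1.0 / (1.0 / Sigma0 + I)
    A += -Bp(t)**2 * Sigma * (A + 1.0) * dt
    I += Bp(t)**2 * dt
    t += dt
```

Note that \eqref{dY2E} implies $d\ln(A(t)+1)=d\ln\Sigma(t)$, so $(A(t)+1)/\Sigma(t)$ is constant; with these inputs $A(1)=-1+(A(0)+1)\Sigma(1)/\Sigma(0)\approx -0.5763$, which the Euler output matches.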
\section{Price-impact equilibrium}\label{sec:PI}
Investor perceptions of the impact of their trading on stock prices are a key part of the optimizations in \eqref{Rproblem} and the resulting market equilibrium. We consider two specifications of investor stock-price perceptions. This section presents a simpler model in which price impact is exogenous. This approach is analogous to the exogenous price impact used in, e.g., van Kervel, Kwan, and Westerholm (2020). We then solve for the endogenous stock-price process that clears the market (and also satisfies some weak consistency conditions) and the associated optimized investor holding processes. Section \ref{sec:eq1} presents a richer model of endogenous price impact in which investor price perceptions and price impacts are endogenized in a subgame perfect Nash financial-market equilibrium.
\subsection{Stock-price perceptions}
Rebalancers optimize \eqref{Rproblem} with respect to perceived stock-price processes of the form
\begin{align}\label{Sit}
\begin{split}
dS^f_{i,t} &:= \Big\{f_0(t)Y_t +f_1(t)\tilde{a}_i +f_2(t)q_{i,t}+f_3(t)\eta_t+ \alpha\theta_{i,t}\Big\}dt+ \gamma dw_{i,t}, \\
S^f_{i,0}&:=Y_0,\quad i\in\{1,...,M\},
\end{split}
\end{align}
where $f_0,f_1,f_2,f_3:[0,1]\to\mathbb{R}$ are continuous (deterministic) functions of time $t\in[0,1]$ and $(\alpha,\gamma)$ are constants. In particular, the drift in \eqref{Sit} is perceived by rebalancers to be sensitive to the two aggregate state variables $Y_t$ and $\eta_t$. Consistent with intuition, we will see in equilibrium that $f_0(t)$ and $f_3(t)$ are negative. The other coefficients describe the perceived impact of rebalancer $i$ on the stock-price drift. The ``$f$'' superscript indicates that the perceived price $S^f_{i,t}$ is defined with respect to particular coefficient functions $f$ in \eqref{Sit}. Theorem \ref{thm_PI} below endogenously determines $(f_0,f_1,f_2,f_3)$ in equilibrium. The innovations in the rebalancers' perceived stock prices $dw_{i,t}$ come from new information rebalancer $i$ learns over time about the underlying parent demand state variable $Y_t$, which has both a direct effect on the future stock-price drift and an additional indirect effect via its effect on $\eta_t$ since $\eta_t$ is adapted to $\sigma(Y_u)_{u\in[0,t]}$ from Lemma \ref{lem:decomp}. The technical goal of our analysis --- given rebalancer stock-price perceptions of the form in \eqref{Sit} with an aggregate demand state variable $Y_t$ process of the form in \eqref{Z} (and the associated $\eta_t$ process) --- is to construct a function $B(t)$ that gives an equilibrium.
Our modeling approach for price pressure follows the zero-dividend stock valuation literature (see, e.g., Karatzas and Shreve (1998)) in that we model perceived and equilibrium price drifts rather than price levels. In particular, the price pressure is not the valuation of future dividends. Instead, it is a price component needed to clear the market given investors' trading demands.\footnote{Our model features asymmetric information and learning about parent demands. However, there is no asymmetric information related to future stock cashflows.} One consequence of this modeling approach is that, in \eqref{Sit}, the stock's volatility and initial value are model input parameters. For simplicity, we set the volatility to be a constant $\gamma> 0$ (i.e., increased parent demand $w_t$ increases prices) and the initial price to be $Y_0$ in \eqref{Sit}. However, many other choices would work equally well (e.g., $\gamma(t)$ or $g(Y_0)$). The price-impact parameter $\alpha$ is also an exogenous model input. The exogenous parameters $(\alpha,\gamma)$ can be found by calibrating model output to empirical data. A competitive market is a special case with $\alpha:=0$, whereas the empirically relevant case is $\alpha<0$ such that buy (sell) orders decrease (increase) the future stock price drifts.
The next result shows that $w_{i,t}$ is rebalancer $i$'s innovations process in the sense that
$w_{i,t}$ is a Brownian motion relative to $i$'s filtration defined with perceived stock prices $S^f_{i,t}$ in \eqref{Sit} and such that $S^f_{i,t}$ and $w_{i,t}$ generate the same information.
\begin{lemma}\label{lemma_infer} Let $f_0,f_1,f_2,f_3:[0,1]\to\mathbb{R}$ be continuous functions and let $B:[0,1]\to \mathbb{R}$ be a continuously differentiable function. For a rebalancer $i\in\{1,...,M\}$, let $\theta_{i,t}$ satisfy \eqref{squareint} and be progressively measurable with respect to $\mathcal{F}_{i,t}:=\sigma(\tilde{a}_i,S^f_{i,u})_{u\in[0,t]}$ with $S^f_{i,t}$ defined in \eqref{Sit} and $Y_t$ defined in \eqref{Z}. Then, modulo $\mathbb{P}$-null sets, we have
\begin{align}\label{filt1}
\sigma(\tilde{a}_i, w_{i,u})_{u\in[0,t]} = \sigma(\tilde{a}_i,S^f_{i,u})_{u\in[0,t]} ,\quad t\in[0,1],\quad i\in\{1,...,M\}.
\end{align}
$\hfill\diamondsuit$
\end{lemma}
\noindent Thus, given a path of perceived prices generated by a process $S^f_{i,t}$ of the form in \eqref{Sit} and her personal target $\tilde{a}_i$, rebalancer $i$ can infer the path of $w_{i,t}$. Furthermore, given the path $w_{i,t}$, rebalancer $i$ can infer $Y_{i,t}$ using \eqref{RfiltrationQ} and, thus, can infer $Y_t$ from \eqref{RfiltrationQQQ}. Consequently, rebalancer $i$ can infer $(q_{i,t},\eta_t)$ where we recall from Lemma \ref{lem:decomp} that $\eta_t$ is adapted to $\sigma(Y_t)_{t\in[0,1]}$.
Trackers have different information in that they observe $w_t$ directly and can infer $\tilde{a}_\Sigma$ from the initial stock price. Therefore, their perceived stock prices differ from those of the rebalancers. In our model, the stock-price process perceived by tracker $j\in \{M+1,...,M+\bar{M}\}$ takes the form:
\begin{align}\label{New2}
\begin{split}
dS_{j,t} :&= \Big\{\bar{f}_3(t)\eta_t+\bar{f}_4(t)\tilde{a}_\Sigma+\bar{f}_5(t)w_t+ \alpha\theta_{j,t}\Big\}dt + \gamma dw_t,\\
S_{j,0}:&=Y_0,
\end{split}
\end{align}
where $\bar{f}_3,\bar{f}_4,\bar{f}_5:[0,1]\to\mathbb{R}$ are continuous (deterministic) functions and $\alpha$ is a constant.\footnote{Our model can be extended to allow for a different price-impact coefficient $\alpha$ for the trackers.} Theorem \ref{thm_PI} below endogenously determines $(\bar{f}_3,\bar{f}_4,\bar{f}_5)$ in equilibrium, whereas $(\alpha,\gamma)$ are exogenous model inputs. Again, $\alpha:=0$ is the special case of a competitive market.
An important difference between rebalancer and tracker perceived prices in \eqref{Sit} and \eqref{New2} is that rebalancer price dynamics are based on the innovations $dw_{i,t}$ (i.e., what rebalancers learn from the state variable $Y_t$), whereas tracker price dynamics are based on $dw_t$ (i.e., the trackers' target). Reconciling the price perceptions of rebalancers and trackers will impose important restrictions on equilibrium price perceptions and holdings and will rely on the relation between $dw_{i,t}$ and $dw_t$ in \eqref{dwit}.
The proof of Lemma \ref{PI_Le} shows that pointwise quadratic maximization gives the maximizers for \eqref{Rproblem} for rebalancers and trackers for arbitrary $f$ functions.
\begin{lemma} \label{PI_Le} Let $f_0,f_1,f_2,f_3,\bar{f}_3,\bar{f}_4,\bar{f}_5:[0,1]\to \mathbb{R}$ and $\kappa:[0,1]\to (0,\infty]$ be continuous functions, let $B:[0,1]\to \mathbb{R}$ be continuously differentiable, let $\alpha \le 0$,
and let the perceived stock-price process in the wealth dynamics \eqref{Xit} be as in \eqref{Sit} and \eqref{New2}. Then, for $\mathcal{F}_{i,t}:=\sigma(\tilde{a}_i,S^f_{i,u})_{u\in[0,t]}$ and $\mathcal{F}_{j,t}:=\sigma(w_u,S^f_{j,u})_{u\in[0,t]}$, provided the holding processes
\begin{align}\label{Y00PI}
\begin{split}
\hat{\theta}_{i,t} &:= \frac{f_0(t)}{2 (\kappa (t)-\alpha )}Y_t+\frac{f_1(t)+2 \kappa (t)}{2(\kappa(t)- \alpha)}\tilde{a}_i+\frac{f_2(t)}{2 (\kappa (t)-\alpha )}q_{i,t}+\frac{f_3(t)}{2 (\kappa (t)-\alpha )}\eta_t,\\
\hat{\theta}_{j,t}&:= \frac{\bar{f}_3(t)}{2 (\kappa (t)-\alpha )}\eta_t+\frac{\bar{f}_5(t)+2 \kappa (t)}{2(\kappa(t)- \alpha)}w_t+\frac{\bar{f}_4(t)}{2 (\kappa (t)-\alpha )}\tilde{a}_\Sigma,
\end{split}
\end{align}
satisfy \eqref{squareint}, the traders' maximizers for \eqref{Rproblem} are $\hat{\theta}_{i,t} $ for rebalancer $i\in\{1,...,M\}$ and $\hat{\theta}_{j,t}$ for tracker $j\in \{M+1,...,M+\bar{M}\}$.
$\hfill\diamondsuit$
\end{lemma}
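The pointwise maximization behind Lemma \ref{PI_Le} can be illustrated symbolically. The quadratic running payoff below is an assumed stand-in for the integrand of the objective \eqref{Rproblem} (which is not reproduced in this section): perceived wealth growth $\theta(D+\alpha\theta)$ minus a quadratic tracking penalty $\kappa(\theta-\tilde{a}_i)^2$. Its first-order condition reproduces the rebalancer holdings in \eqref{Y00PI}:

```python
import sympy as sp

# Placeholders for the time-t values of the model quantities (the running payoff
# below is an assumed linear-quadratic stand-in, not the objective itself).
theta, alpha, kappa = sp.symbols('theta alpha kappa', real=True)
f0, f1, f2, f3 = sp.symbols('f0 f1 f2 f3', real=True)
Y, a_i, q, eta = sp.symbols('Y a_i q eta', real=True)

D = f0*Y + f1*a_i + f2*q + f3*eta            # perceived drift excluding alpha*theta
payoff = theta*(D + alpha*theta) - kappa*(theta - a_i)**2

# First-order condition; the payoff is strictly concave in theta when alpha <= 0 < kappa.
theta_hat = sp.solve(sp.diff(payoff, theta), theta)[0]
target = (f0*Y + (f1 + 2*kappa)*a_i + f2*q + f3*eta) / (2*(kappa - alpha))
assert sp.simplify(theta_hat - target) == 0
```

The same computation with drift coefficients $(\bar{f}_3,\bar{f}_4,\bar{f}_5)$ and penalty $\kappa(\theta-w_t)^2$ yields the tracker holdings $\hat{\theta}_{j,t}$ in \eqref{Y00PI}.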
To summarize, stock-price perceptions play two interconnected roles in our model. First, rebalancers and trackers solve their optimization problems in \eqref{Rproblem} based on their perceptions in \eqref{Sit} and \eqref{New2} for how hypothetical orders $\theta_{i,t}$ and $\theta_{j,t}$ affect price dynamics. Second, investor stock-price perceptions affect how they learn from observed prices. In particular, Lemma \ref{lemma_infer} shows that rebalancers use their stock-price perceptions of prices in \eqref{Sit} to infer the aggregate demand state variable $Y_t$ based on past and current stock prices. In other words, dynamic learning by rebalancers depends critically on their stock-price perceptions. Trackers also use their stock-price perceptions in \eqref{New2} to infer the aggregate parent demand $\tilde{a}_\Sigma$ from the initial price at time $t=0$. However, thereafter, there is no additional learning from prices by the trackers since they directly observe changes in their target $w_t$.
\subsection{Equilibrium}\label{sec:PIeq}
The notion of equilibrium in our first construction is relatively simple, being based just on market clearing and consistency of investor price perceptions.
\begin{definition}\label{PI_eq} Deterministic functions of time $f_0,f_1,f_2,f_3,\bar{f}_3,\bar{f}_4,\bar{f}_5:[0,1]\to\mathbb{R}$ constitute a \emph{price-impact equilibrium} if:
\begin{enumerate}
\item[(i)] Maximizers $\hat{\theta}_{k,t}$ for \eqref{Rproblem} exist for traders $k \in \{1,...,M+\bar{M}\}$ given the stock-price perceptions \eqref{Sit} and \eqref{New2} for filtrations $\mathcal{F}_{i,t}:=\sigma(\tilde{a}_i,S^f_{i,u})_{u\in[0,t]}$ and $\mathcal{F}_{j,t}:=\sigma(w_u,S^f_{j,u})_{u\in[0,t]}$.
\item[(ii)] Inserting trader $k$'s maximizer $\hat{\theta}_{k,t}$ into the perceived stock-price processes \eqref{Sit} and \eqref{New2} produces identical stock-price processes across all traders $k\in\{1,...,M+\bar{M}\}$. This common equilibrium stock-price process is denoted by $\hat{S}_t$.
\item[(iii)] The money and stock markets clear.
$\hfill\diamondsuit$
\end{enumerate}
\end{definition}
\noindent Definition \ref{PI_eq} places only minimal restrictions on the perceived stock-price coefficient functions in \eqref{Sit} and \eqref{New2}: markets must clear and result in consistent perceived stock-price processes when all investors use their equilibrium strategies. Section \ref{sec:eq1} below considers a subgame perfect Nash extension of our basic model that imposes more restrictions on allowable stock-price perceptions, such as market clearing for off-equilibrium holdings.
In equilibrium, Definition \ref{PI_eq}(ii) requires that rebalancers and trackers perceive identical stock-price dynamics when using their equilibrium holdings. However, rebalancers and trackers have different information (i.e., rebalancers form imperfect inferences about $w_t$ and $\tilde{a}_\Sigma$, whereas trackers observe $w_t$ directly and infer $\tilde{a}_\Sigma$ at time 0). The resolution of this apparent paradox lies in investors' different information sets: While traders agree on $d\hat{S}_t$, they disagree on how to decompose $d\hat{S}_t$ into drift and volatility components. Because the trackers observe $w_t$, they can use $dw_t$ in their decomposition of $d\hat{S}_t$. However, $w_t$ is not adapted to the rebalancers' filtrations and therefore cannot be used in their $d\hat{S}_t$ decompositions. Instead, rebalancers use their innovations processes $dw_{i,t}$ when decomposing $d\hat{S}_t$ into drift and volatility. By replacing $dw_{i,t}$ in $dS^f_{i,t}$ in \eqref{Sit} with the decomposition of $dw_{i,t}$ in terms of $dw_t$ from \eqref{dwit}, we can rewrite $dS^f_{i,t}$ in \eqref{Sit} as
\begin{align}\label{Y32PI}
\begin{split}
dS^f_{i,t} &= \Big\{f_0(t)Y_t +f_1(t)\tilde{a}_i +f_2(t)q_{i,t}+f_3(t) \eta_t +\alpha \theta_{i,t}\\
&\quad -B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big) \gamma \Big\}dt+ \gamma dw_t,\quad i\in\{1,...,M\}.
\end{split}
\end{align}
Therefore, to ensure identical equilibrium stock-price perceptions for all traders $k\in\{1,...,M+\bar{M}\}$, it suffices to match the drift of $dS_{j,t}$ in \eqref{New2} for $j\in\{M+1,...,M+\bar{M}\}$ with the drift of $dS^f_{i,t}$ in \eqref{Y32PI} for the equilibrium holdings $\theta_{i,t}:= \hat{\theta}_{i,t}$, $i\in\{1,...,M\}$. This produces the following requirement:
\begin{align}\label{driftAPI}
\begin{split}
&f_0(t)Y_t +f_1(t)\tilde{a}_i +f_2(t)q_{i,t}+f_3(t) \eta_t +\alpha \hat{\theta}_{i,t} -B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big)\gamma\\
&=\bar{f}_3(t)\eta_t+\bar{f}_4(t)\tilde{a}_\Sigma+\bar{f}_5(t)w_t+ \alpha\hat{\theta}_{j,t},
\end{split}
\end{align}
for all rebalancers $i \in\{1,...,M\}$ and all trackers $j\in \{M+1,...,M+\bar{M}\}$. We note that the right-hand side of \eqref{driftAPI} does not depend on the rebalancer index $i$. Matching up coefficients in front of $(\tilde{a}_i,\tilde{a}_{\Sigma},q_{i,t},\eta_t,w_t)$ in \eqref{driftAPI} using $\hat{\theta}_{i,t}$ and $\hat{\theta}_{j,t}$ in \eqref{Y00PI} produces a system of equations that gives the coefficient functions \eqref{fs} in Appendix \ref{sec:formulas}.
Our equilibrium existence result is based on the following technical lemma. It guarantees the existence of a solution to an autonomous system of coupled ODEs. The exogenous price-impact coefficient $\alpha$ plays no role in this result.
\begin{lemma}\label{PI_Lemma}
Let $\kappa:[0,1]\to[0,\infty]$ be a continuous and integrable function (i.e., $\int_0^1 \kappa(t)dt <\infty$). For an initial constant $B(0) \in \mathbb{R}$, the coupled ODEs
\begin{align}\label{derivatives0aPI}
\begin{split}
B'(t)&= \frac{2 \kappa (t) (\bar{M} B(t)+1)}{\gamma (A(t)+\bar{M}+1)},\\
A'(t)&= - \big(B'(t)\big)^2\Sigma(t)\big(A(t) +1\big),\quad A(0)=-\tfrac{(M-1)B(0)^2\sigma^2_{\tilde{a}}}{\sigma^2_{w_0} +B(0)^2(M-1)\sigma^2_{\tilde{a}}},\\
\Sigma'(t) &= -\big(B'(t)\big)^2\Sigma(t)^2,
\quad \Sigma(0) =\tfrac{(M-1) \sigma_{\tilde{a}}^2 \sigma_{w_0}^2}{B(0)^2 (M-1) \sigma_{\tilde{a}}^2+\sigma_{w_0}^2},
\end{split}
\end{align}
have unique solutions with $\Sigma(t) \ge 0$, $\Sigma(t)$ decreasing, $A(t) \in [-1,0]$, $A(t)$ decreasing for $t\in[0,1]$, and $B(t),B'(t)<0$ when $\bar{M}B(0) +1< 0$.
$\hfill\diamondsuit$
\end{lemma}
\noindent The ODEs for $A(t)$ and $\Sigma(t)$ in \eqref{derivatives0aPI} are consistent with the expressions in \eqref{dY2E}.
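The qualitative properties asserted in Lemma \ref{PI_Lemma} can be checked numerically by integrating \eqref{derivatives0aPI} with an explicit Euler scheme. All parameter values below are hypothetical and uncalibrated; they are chosen only so that $\bar{M}B(0)+1<0$:

```python
# Explicit Euler pass over the coupled ODEs (derivatives0aPI) on [0, 1].
# Hypothetical, uncalibrated parameters with Mbar*B(0) + 1 < 0.
M, Mbar, gamma, B0 = 4, 2, 1.0, -1.0
sigma_a2, sigma_w02 = 1.0, 1.0
kappa = lambda t: 1.0                  # constant penalty function kappa(t)

# Initial conditions stated in the lemma.
A = -(M - 1) * B0**2 * sigma_a2 / (sigma_w02 + B0**2 * (M - 1) * sigma_a2)
Sigma = (M - 1) * sigma_a2 * sigma_w02 / (B0**2 * (M - 1) * sigma_a2 + sigma_w02)
B = B0

dt, n = 1e-4, 10_000
path = []
for k in range(n):
    t = k * dt
    Bp = 2 * kappa(t) * (Mbar * B + 1) / (gamma * (A + Mbar + 1))
    A += dt * (-(Bp**2) * Sigma * (A + 1))
    Sigma += dt * (-(Bp**2) * Sigma**2)
    B += dt * Bp
    path.append((B, A, Sigma))
```

Along the computed path, $\Sigma(t)$ stays nonnegative and decreasing, $A(t)$ stays in $[-1,0]$ and decreasing, and $B(t)$ stays negative, as the lemma asserts.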
The following theoretical result gives the price-impact equilibrium in terms of the ODEs \eqref{derivatives0aPI}. In this theorem, the price-impact parameter $\alpha$, volatility $\gamma$, and initial value $B(0)$ are free parameters. The intuition for $B(0)$ being free is discussed after our equilibrium construction in Theorem \ref{thm_PI}.
\begin{theorem}\label{thm_PI} Let $\kappa:[0,1]\to (0,\infty)$ be continuous, let the functions $(B,A,\Sigma)$ be as in Lemma \ref{PI_Lemma}, and let $\alpha\le0$. Then, we have:
\begin{itemize}
\item[(i)] A price-impact equilibrium exists and is given by the price-perception functions \eqref{fs} in Appendix \ref{sec:formulas}.
\item[(ii)] Equilibrium holdings $\hat{\theta}_{i,t}$ for rebalancer $i$ and $\hat{\theta}_{j,t}$ for tracker $j$ are
\begin{align}\label{Y0000PI}
\begin{split}
\hat{\theta}_{i,t} &=\tfrac{\gamma B'(t)-2 \kappa (t)}{\alpha -2 \kappa (t)}\tilde{a}_i +\tfrac{\gamma B'(t)}{\alpha -2 \kappa (t)}q_{i,t}
\\
&-\tfrac{\gamma B'(t)}{(M+\bar{M}) (\alpha -2 \kappa (t))}\eta_t+\tfrac{2 \bar{M} \kappa (t)}{(M+\bar{M}) (\alpha -2 \kappa (t))}Y_t,\quad i\in\{1,...,M\},
\\
\hat{\theta}_{j,t} &=-\tfrac{\gamma B'(t)}{(M+\bar{M}) (\alpha -2 \kappa (t))}\eta_t
-\tfrac{2 M \kappa (t)}{(M+\bar{M}) (\alpha -2 \kappa (t))}w_t
\\
&+\tfrac{\gamma (A(t)-M+1) B'(t)-2 \kappa (t)}{(M+\bar{M}) (2 \kappa (t)-\alpha)}\tilde{a}_\Sigma,\quad j\in\{M+1,...,M+\bar{M}\}.
\end{split}
\end{align}
\item[(iii)] The equilibrium stock-price process has dynamics
\begin{align}\label{S_PI}
\begin{split}
d\hat{S}_t &:=\Big\{\tfrac{\gamma B'(t)}{M+\bar{M}}\eta_t-\tfrac{2 \bar{M} \kappa (t)}{M+\bar{M}}w_t +\tfrac{\gamma (A(t)-M+1) B'(t)-2 \kappa (t)}{M+\bar{M}}\tilde{a}_\Sigma\Big\}dt + \gamma dw_t,\\
\hat{S}_0 &:= w_0 - B(0)\tilde{a}_\Sigma.
\end{split}
\end{align}
$\hfill\diamondsuit$
\end{itemize}
\end{theorem}
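As a consistency check on Theorem \ref{thm_PI}, the holdings \eqref{Y0000PI} can be aggregated symbolically. The computation below makes assumptions where this section does not restate the definitions: that $Y_t=w_t-B(t)\tilde{a}_\Sigma$ (consistent with $\hat{S}_0=w_0-B(0)\tilde{a}_\Sigma=Y_0$ in \eqref{S_PI}), that $\sum_{i=1}^M q_{i,t}=\eta_t+A(t)\tilde{a}_\Sigma$ as in Lemma \ref{lem:decomp}, and that stock-market clearing means aggregate holdings sum to zero. Under these assumptions, the $B'$ equation in \eqref{derivatives0aPI} makes the aggregate of \eqref{Y0000PI} vanish identically:

```python
import sympy as sp

# Assumptions (hypothetical, inferred from the text): Y = w - B*a_Sigma,
# sum_i q_{i,t} = eta + A*a_Sigma, and clearing = zero aggregate holdings.
g, kap, alpha, A, B = sp.symbols('gamma kappa alpha A B', real=True)
M, Mb = sp.symbols('M Mbar', positive=True)
a_S, w, eta = sp.symbols('a_Sigma w eta', real=True)

Bp = 2*kap*(Mb*B + 1) / (g*(A + Mb + 1))   # B'(t) from the ODE system
Y = w - B*a_S                               # assumed form of the state variable Y_t
den = alpha - 2*kap

# Aggregate rebalancer holdings: sum of theta_hat_i over i = 1..M.
sum_reb = ((g*Bp - 2*kap)/den)*a_S + (g*Bp/den)*(eta + A*a_S) \
          - M*(g*Bp/((M + Mb)*den))*eta + M*(2*Mb*kap/((M + Mb)*den))*Y

# Aggregate tracker holdings: sum of theta_hat_j over j = M+1..M+Mbar.
sum_tr = Mb*(-(g*Bp/((M + Mb)*den))*eta - (2*M*kap/((M + Mb)*den))*w
             + ((g*(A - M + 1)*Bp - 2*kap)/((M + Mb)*(2*kap - alpha)))*a_S)

aggregate = sp.cancel(sp.together(sum_reb + sum_tr))
assert aggregate == 0
```

The cancellation of the $\tilde{a}_\Sigma$ terms uses exactly the relation $\gamma B'(t)(A(t)+\bar{M}+1)=2\kappa(t)(\bar{M}B(t)+1)$ from \eqref{derivatives0aPI}; the $\eta_t$ and $w_t$ terms cancel coefficient by coefficient.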
Several observations follow from Theorem \ref{thm_PI}:
\begin{enumerate}
\item Lemma \ref{lemma_infer} ensures that rebalancer $i$ can infer her innovations process $w_{i,t}$ from perceived prices $S^f_{i,t}$ and $\tilde{a}_i$, but rebalancer $i$ cannot infer the tracker target $w_t$ from the equilibrium prices $\hat{S}_t$ in \eqref{S_PI}. This is because the aggregate target $\tilde{a}_\Sigma$ also appears in the drift of $d\hat{S}_t$ and $\tilde{a}_\Sigma$ is not observed by individual rebalancers.
\item The equilibrium holdings \eqref{Y0000PI} follow from inserting \eqref{fs} into \eqref{Y00PI}. Thus, the holdings in \eqref{Y0000PI} are expressed in terms of the investors' state variables. However, the state variables are not orthogonal to each other. For example, $(q_{i,t},\eta_t,Y_t)$ all depend on past stock prices, which, in turn, depend on $(\tilde{a}_\Sigma,w_t)$. This affects the interpretation of model comparative statics.
\item Because the exogenous price-impact coefficient $\alpha$ does not appear in the ODEs \eqref{derivatives0aPI}, $\alpha$ is irrelevant for the equilibrium stock-price dynamics
\eqref{S_PI}. However, $\alpha$ does affect the equilibrium holdings in \eqref{Y0000PI}.
\item The stock-price volatility $\gamma$ affects the stock-price drift and holdings via its impact on $B(t)$ in \eqref{derivatives0aPI} and, thus, on \eqref{fs}.
\item The equilibrium stock-price dynamics \eqref{S_PI} depend on $w_t$ and $\tilde{a}_\Sigma$. Because $(w_t,\tilde{a}_\Sigma)$ is adapted to the trackers' filtrations, the trackers' equilibrium stock-price dynamics are those in \eqref{S_PI}. However, rebalancers have different information, so, with respect to their filtrations $\mathcal{F}_{i,t}$, their equilibrium stock-price dynamics differ from \eqref{S_PI}. To derive the rebalancers' drift and volatility, we use \eqref{dwit} to replace $dw_t$ in \eqref{S_PI} with $dw_{i,t}$ plus a drift term, which gives rebalancer $i$'s equilibrium stock-price dynamics
\begin{align}
\begin{split}
d\hat{S}_t &=\Big\{\tfrac{\gamma B'(t)}{M+\bar{M}}\eta_t-\tfrac{2 \bar{M} \kappa (t)}{M+\bar{M}}w_t +\tfrac{\gamma (A(t)-M+1) B'(t)-2 \kappa (t)}{M+\bar{M}}\tilde{a}_\Sigma \\
&\quad +B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big) \gamma \Big\}dt+ \gamma dw_{i,t},\quad i\in\{1,...,M\}.
\end{split}
\end{align}
This gives rebalancer $i$'s equilibrium stock-price drift with respect to $\mathcal{F}_{i,t}$
\begin{align}
\begin{split}
-\gamma B'(t)\big( \tilde{a}_i + q_{i,t}\big) + \tfrac{\gamma B'(t)}{M+\bar{M}}\eta_t-\tfrac{2 \bar{M} \kappa (t)}{M+\bar{M}}Y_t,\quad i\in\{1,...,M\}.
\end{split}
\end{align}
Thus, rebalancers perceive the same equilibrium price process but perceive a different drift and volatility compared to the trackers.\footnote{Rebalancers and trackers both start with private information so their filtrations are not nested. However, in equilibrium, stock-price dynamics depend on $w_t$ and $\tilde{a}_\Sigma$. Because the trackers infer $\tilde{a}_\Sigma$ from $S_{j,0}=Y_0$, they have no need to filter at later times. On the other hand, rebalancer $i$ only has noisy dynamic predictions $\mathbb{E}[\tilde{a}_\Sigma|\mathcal{F}_{i,t}] = q_{i,t}+\tilde{a}_i$ of the aggregate parent imbalance $\tilde{a}_\Sigma$ given their inferences based on their individual parent targets $\tilde{a}_i$ and stock prices.}
\item For arbitrary holdings $\theta_{i,t}$, rebalancer $i$'s perceived stock-price drift in \eqref{Sit} can be decomposed as
\begin{align}\label{decom1}
\begin{split}
&f_0(t)Y_t +f_1(t)\tilde{a}_i +f_2(t)q_{i,t}+f_3(t)\eta_t+ \alpha\theta_{i,t}\\
&= -\gamma B'(t)\big( \tilde{a}_i + q_{i,t}\big) + \tfrac{\gamma B'(t)}{M+\bar{M}}\eta_t-\tfrac{2 \bar{M} \kappa (t)}{M+\bar{M}}Y_t +\alpha(\theta_{i,t} - \hat\theta_{i,t}),
\end{split}
\end{align}
where we have used the formulas for $(f_0,f_1,f_2,f_3)$ in \eqref{fs} in Appendix \ref{sec:formulas}.
Likewise, for arbitrary holdings $\theta_{j,t}$, tracker $j$'s perceived stock-price drift in \eqref{New2} is
\begin{align}\label{decom2}
\begin{split}
&\bar{f}_3(t)\eta_t+\bar{f}_4(t)\tilde{a}_\Sigma+\bar{f}_5(t)w_t+ \alpha\theta_{j,t}\\
&=\tfrac{\gamma B'(t)}{M+\bar{M}}\eta_t-\tfrac{2 \bar{M} \kappa (t)}{M+\bar{M}}w_t +\tfrac{\gamma (A(t)-M+1) B'(t)-2 \kappa (t)}{M+\bar{M}}\tilde{a}_\Sigma+\alpha(\theta_{j,t} - \hat\theta_{j,t}),
\end{split}
\end{align}
where we have used the formulas for $(\bar{f}_3,\bar{f}_4,\bar{f}_5)$ in \eqref{fs} in Appendix \ref{sec:formulas}.
Thus, investors' off-equilibrium drifts differ from their equilibrium drifts due to the differences $\theta_{k,t}-\hat{\theta}_{k,t}$ between their off-equilibrium and equilibrium holdings.\footnote{Eqs. \eqref{decom1} and \eqref{decom2} are similar to Eq. (3.14) in Choi, Larsen, and Seppi (2021).} Such continuity of perceived drifts in holdings is a reasonable property of investor perceptions. The representation of the perceived rebalancer drift in \eqref{decom1} relative to $\hat\theta_{i,t}$ from \eqref{Y0000PI} also explains the presence of the rebalancer-specific terms $(\tilde{a}_i,q_{i,t})$ in the rebalancers' perceptions in \eqref{Sit}.
\end{enumerate}
The function $B(t)$ from \eqref{derivatives0aPI} is key both in constructing the equilibrium and for interpreting the equilibrium price and holding processes. First, there is the issue that the initial value $B(0)$ is a free input in Theorem \ref{thm_PI}. The intuition is that our model determines equilibrium price drifts, but not price levels. As can be seen in \eqref{S_PI}, $B(0)$ controls the initial price level in our model. Second, the relation between $B(t)$ and price levels allows us to impose additional structure on $B(t)$. In particular, $w_t$ and $\tilde{a}_\Sigma$ represent different types of demand imbalances. Thus, if $B(t) < 0$, then $Y_t$ in \eqref{Z} plays the role of an aggregate demand state variable. How the two component quantities $w_t$ and $\tilde{a}_\Sigma$ are mixed in the aggregate demand state variable $Y_t$ is different given the two components' different informational dynamics (i.e., $\tilde{a}_\Sigma$ is fixed after time 0 while $w_t$ changes randomly over time) and the different impacts on investor demands (i.e., each rebalancer only knows their personal $\tilde{a}_i$ component of $\tilde{a}_\Sigma$ where other rebalancers' targets do not affect investor $i$'s parent demand whereas $w_t$ affects both an individual tracker's parent demands and is also information about other trackers' parent demands). It seems reasonable that the sign of the impact of $w_t$ and $\tilde{a}_\Sigma$ on the price level should be the same, which imposes the additional restriction that $B(t) < 0$. From Lemma \ref{PI_Lemma}, a sufficient condition for $B(t) < 0$ for all $t \in [0,1]$ is ${\bar M}B(0) + 1<0$.\footnote{ This sufficient condition follows because the denominator in \eqref{derivatives0aPI} is positive given that $A(t) \in [-1,0]$ so that the numerator in \eqref{derivatives0aPI} determines the sign of $B'(t)$. }
With the economically reasonable parametric restriction that $B'(t) < 0$ and given that $\alpha \leq 0$ so that $\alpha - 2 \kappa(t) < 0$, we can sign the impact of various quantities in the model on holdings and prices, which leads to the following comparative statics:
\begin{enumerate}
\item The equilibrium holdings $\hat{\theta}_{i,t}$ of rebalancers are positively related to their parent targets $\tilde{a}_i$. This is intuitive because rebalancers want holdings close to $\tilde{a}_i$. Rebalancer holdings $\hat{\theta}_{i,t}$ are also negatively related to the aggregate demand imbalance state variable $Y_t$. The fact that $\hat{\theta}_{i,t}$ is decreasing in $Y_t$ is consistent with the theoretical results and empirical evidence in van Kervel, Kwan, and Westerholm (2020) that investors buy less when there is a positive parent demand imbalance for other investors in the market. However, the impact of $q_{i,t}$ on $\hat \theta_{i,t}$ is positive. The intuition is that when rebalancer $i$ expects the other remaining rebalancers (given $i$'s ability to filter using her private target information $\tilde{a}_i$) to have a net positive parent demand imbalance $\mathbb{E}[\tilde{a}_\Sigma - \tilde{a}_i |\mathcal{F}_{i,t}]$ from \eqref{dwit}, she buys at time $t$ to front-run the resulting anticipated future price pressure.
\item Tracker $j$'s holdings $\hat \theta_{j,t}$ are increasing in $w_t$ (which reflects both her own parent demand and also information about the parent demands of other trackers). Tracker holdings $\hat\theta_{j,t}$ are also decreasing in information $\eta_t$ about imbalances in rebalancers' aggregate parent demand expectations. The effect of $\eta_t$ is consistent with the liquidity-provision result and empirical evidence in van Kervel, Kwan, and Westerholm (2020). However, the impact of $\tilde{a}_\Sigma$ is ambiguous in \eqref{Y0000PI}, and numerical calculations in Section \ref{sec:num} show that the sign is positive. This is again consistent with front-running future predicted price pressure due to the tracker's superior information about aggregated latent parent demand imbalances.
\item The equilibrium price drift in \eqref{S_PI} is decreasing in the tracker parent demand $w_t$. However, the impact of $\tilde{a}_\Sigma$ in the price drift is again ambiguous, which is related to information about $\tilde{a}_\Sigma$ being useful in forecasting future price pressure.
\end{enumerate}
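The signs discussed above can be spot-checked directly from the holding coefficients in \eqref{Y0000PI}. The sketch below draws random parameter values satisfying $B'(t)<0$, $\alpha\le 0$, and $\kappa(t)>0$ and verifies each asserted sign:

```python
import random

random.seed(0)
# Comparative-statics signs of the holding coefficients in (Y0000PI)
# under B'(t) < 0, alpha <= 0, and kappa(t) > 0.
for _ in range(1000):
    Bp = -random.uniform(0.01, 2.0)       # B'(t) < 0
    alpha = -random.uniform(0.0, 1.0)     # alpha <= 0
    kap = random.uniform(0.01, 2.0)       # kappa(t) > 0
    gamma = random.uniform(0.01, 2.0)
    M, Mbar = random.randint(1, 10), random.randint(1, 10)
    den = alpha - 2 * kap                 # always negative
    assert (gamma * Bp - 2 * kap) / den > 0          # rebalancer coef. on a_i: positive
    assert gamma * Bp / den > 0                      # rebalancer coef. on q_{i,t}: positive
    assert 2 * Mbar * kap / ((M + Mbar) * den) < 0   # rebalancer coef. on Y_t: negative
    assert -2 * M * kap / ((M + Mbar) * den) > 0     # tracker coef. on w_t: positive
    assert -gamma * Bp / ((M + Mbar) * den) < 0      # tracker coef. on eta_t: negative
```

The coefficient on $\tilde{a}_\Sigma$ in the tracker holdings is deliberately omitted: its sign depends on $A(t)$ and is ambiguous, as noted above.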
\subsection{Tractability and model structure}
This section outlines the key model components that make our model tractable. First, we assume all traders seek to maximize their individual objectives in \eqref{Rproblem}. Linear-quadratic objectives have been used extensively in the literature because of their tractability. Such objectives have been used in, e.g., Kyle (1985), Brunnermeier and Pedersen (2005), and Carlin, Lobo, and Viswanathan (2007).\newpage
Second, our stock does not pay dividends, which means that only the stock drift can be endogenously determined in equilibrium. Models with non-dividend paying stocks have been used extensively in the literature. The monograph Karatzas and Shreve (1998) gives an overview.\footnote{Similar to a money market account, a non-dividend paying stock is a \emph{financial asset} in the sense that holding one stock at time $t=1$ gives one unit of consumption at $t=1$. Likewise, being short one stock at $t=1$ means the trader provides one unit of consumption at $t=1$. Both the money market account and the non-dividend paying stock have exogenous initial prices and volatilities. It is customary for the money market account's initial price to be one and its volatility to be zero. For the non-dividend paying stock, we set the initial price to be $Y_0$, its volatility to be a positive constant $\gamma$, and determine endogenously the drift. } In particular, non-dividend paying stock models have been used for short horizon models like ours where consumption only takes place at the terminal time.\footnote{There are also models with long-lived non-dividend paying stocks; see, for example, Atmaz and Basak (2021), who write: ``For example, Hartzmark and Solomon (2013) find that over the long-sample of 1927-2011, the average proportion of no-dividend stocks is around 35\% and accounts for 21.3\% of the aggregate US stock market capitalization. Similarly, by taking into account of rising share repurchase programs since the mid-1980ies, Boudoukh et al. (2007) report that over the 1984-2003 period, the average proportion of no-dividend stocks is 64\% and no-payout stocks, i.e., no dividends or no share repurchases, is 51\% with the relative market capitalizations of 16.4\% and 14.2\%, respectively."} The rebalancers' dynamic learning produces forward-running filtering equations, and by considering a non-dividend paying stock we circumvent having additional backward-running equations.
Equilibrium models with both forward and backward-running equations include Kyle (1985), Foster and Viswanathan (1994, 1996), Back, Cao, and Willard (2000), and Choi, Larsen, and Seppi (2019).
Third, instead of exogenous noise traders, we use optimizing trackers. Grossman and Stiglitz (1980) and Kyle (1985) are standard references that use an exogenous Gaussian stock supply. Gaussian noise traders are also used in the predatory trading models in Brunnermeier and Pedersen (2005) and Carlin, Lobo, and Viswanathan (2007). In our setting, we could eliminate trackers by setting $\bar{M}:=0$ and replace the stock-market clearing condition \eqref{xSigmainitial} by using $w_t$ to model the exogenous stock supply as in
\begin{align}\label{xSigmainitialA}
w_t= \sum_{i=1}^{M} \theta_{i,t},\quad t\in[0,1].
\end{align}
Including noise traders as in \eqref{xSigmainitialA} in the model would be tractable in the price-impact equilibrium. However, surprisingly, exogenous noise traders complicate constructing a Nash equilibrium with dynamic learning, whereas --- as we show in Section \ref{sec:eq1} --- optimizing trackers and market learning in \eqref{xSigmainitial} produce a subgame perfect Nash financial-market equilibrium in closed form. The models in Sannikov and Skrzypacz (2016) and Choi, Larsen, and Seppi (2021) have optimizing trackers but no dynamic learning.
Fourth, the linear-quadratic objectives \eqref{Rproblem} allow us to solve for the optimal holdings in
Lemma \ref{PI_Le} using quadratic pointwise optimization. Thus, dynamic programming and pointwise optimization are equivalent here in the price-impact equilibrium in that they produce the same optimal response holdings.
\section{Nash equilibrium}\label{sec:eq1}
This section builds on the analysis in Section \ref{sec:PI} by endogenizing stock-price perceptions and price impact. In particular, we model the impact of on-equilibrium and hypothetical off-equilibrium investor holdings on market-clearing stock prices based on investor perceptions of how other investors in the market perceive prices and on other investors' resulting optimal response functions to an investor's on-equilibrium and off-equilibrium holdings.
A subgame perfect market-clearing Nash equilibrium involves describing how each trader $k_0$ (who might be a rebalancer $i_0$ or a tracker $j_0$ with their different information sets) perceives market-clearing stock prices given stock-price perceptions for other traders $k \neq k_0$ (where $k$ can be rebalancers $i$ or trackers $j$). In our Nash model, a generic trader $k_0$ perceives that other rebalancers and trackers have stock-price perceptions of the form
\begin{align}\label{Sit3a}
\begin{split}
dS^Z_{i,t} &:= \Big\{Z_t +\mu_1(t)\tilde{a}_i+\mu_2(t)q_{i,t}+\mu_3(t)\eta_t + \alpha\theta_{i,t}\Big\}dt+ \gamma dW_{i,t}, \\
S^Z_{i,0}&:=Z_0,\quad i\in\{1,...,M\},\\
dS^Z_{j,t} &:= \Big\{Z_t +\bar{\mu}_4(t)\tilde{a}_\Sigma+\bar{\mu}_5(t)w_t+ \alpha\theta_{j,t}\Big\}dt + \gamma dW_{j,t},\\
S^Z_{j,0}&:=Z_0,\quad j\in\{M+1,...,M+\bar{M}\},
\end{split}
\end{align}
where $W_{k,t}$ is a Brownian motion for each trader $k\in\{1,...,M+\bar{M}\}$ and $Z_t$ is an arbitrary It\^o process. The ``$Z$'' superscript in \eqref{Sit3a} indicates that the perceived stock prices are defined with respect to a particular It\^o process $Z_t$ (i.e., a process given as a sum of drift and volatility). We use the market-clearing condition \eqref{xSigmainitial} to construct two such It\^o processes in \eqref{Y02i} and \eqref{Y0jj} below. These $Z_t$ processes differ from $Y_t$ in \eqref{Sit} and \eqref{New2} in that we use $Z_t$ to capture the effect of arbitrary off-equilibrium stock holdings by trader $k_0$ on market-clearing prices given optimal responses by other investors $k$, $k\neq k_0$. We then go on to determine the deterministic functions $(\mu_1,\mu_2,\mu_3,\bar{\mu}_4, \bar{\mu}_5)$ in equilibrium in Theorem \ref{thm_Main} below.
The major difference between the price-impact equilibrium in Section \ref{sec:PI} and the following Nash equilibrium analysis lies in the traders' stock-price perceptions. In the price-impact equilibrium, the forms of the stock-price perceptions \eqref{Sit} and \eqref{New2} were conjectured with no additional justification beyond them leading to equilibrium existence. In contrast, for a subgame perfect Nash equilibrium, these perceptions must be such that:
\begin{itemize}
\item[(i)] Trader $k_0$'s own stock-price perceptions must be consistent with market-clearing for any off-equilibrium holdings $\theta_{k_0,t}$ used by $k_0$, when other traders' holding responses are optimal given the stock-price dynamics that $k_0$ perceives other traders to have. This off-equilibrium market-clearing requirement can be found in, e.g., Vayanos (1999).
\item [(ii)] Trader $k_0$'s equilibrium holdings are found by solving her optimization problem using her own market-clearing stock-price dynamics from (i).
\item[(iii)] Any trader's optimizer from (i) must be consistent with that trader's equilibrium holdings in (ii).
\end{itemize}
\subsection{Optimal off-equilibrium responses}\label{sec_offeq}
Lemma \ref{response} gives trader $k$'s optimal response to an arbitrary It\^o process $Z_t$ and is the Nash equilibrium analogue of Lemma \ref{PI_Le}.
\begin{lemma}[Optimal responses to $Z_t$] \label{response}Let $\mu_1,\mu_2,\mu_3,\bar{\mu}_4,\bar{\mu}_5:[0,1]\to \mathbb{R}$ and $\kappa:[0,1]\to (0,\infty]$ be continuous functions, let $\alpha\le0$, let $(Z_t)_{t\in[0,1]}$ be an It\^o process, and let the perceived stock-price process in the wealth dynamics \eqref{Xit} be as in \eqref{Sit3a}. Then, $Z_t$ is adapted to both $\mathcal{F}_{i,t}:=\sigma(\tilde{a}_i,Y_u,W_{i,u},S^Z_{i,u})_{u\in[0,t]}$ and $\mathcal{F}_{j,t}:=\sigma(\tilde{a}_\Sigma,w_u,Y_u,W_{j,u},S^Z_{j,u})_{u\in[0,t]}$ and, provided
\begin{align}\label{Y00fct}
\begin{split}
\theta^Z_{i,t} &:= -\tfrac{1}{2 \alpha -2 \kappa (t)}Z_t-\tfrac{2 \kappa (t)+\mu_1(t)}{2 \alpha -2 \kappa (t)}\tilde{a}_i-\tfrac{\mu_2(t)}{2 \alpha -2 \kappa (t)}q_{i,t}-\tfrac{\mu_3(t)}{2 \alpha -2 \kappa (t)}\eta_t,\\
\theta^Z_{j,t}&:= -\tfrac{1}{2 \alpha-2 \kappa(t)}Z_t-\tfrac{2 \kappa(t)+\bar{\mu}_5(t)}{2 \alpha-2 \kappa(t)}w_t -\tfrac{\bar{\mu}_4(t)}{2 \alpha-2 \kappa(t)}\tilde{a}_\Sigma,
\end{split}
\end{align}
satisfy \eqref{squareint}, the traders' maximizers for \eqref{Rproblem} are $\theta^Z_{i,t} $ for rebalancer $i\in\{1,...,M\}$ and $\theta^Z_{j,t}$ for tracker $j\in \{M+1,...,M+\bar{M}\}$.
$\hfill\diamondsuit$
\end{lemma}
\noindent Similar to Lemma \ref{PI_Le}, Lemma \ref{response} is proven using pointwise quadratic maximization. Unlike $Y_t$ in Lemma \ref{PI_Le}, there is no Markov structure imposed on $Z_t$ in Lemma \ref{response}, which makes dynamic programming inapplicable. Therefore, the simplicity of the linear-quadratic objectives in \eqref{Rproblem} is crucial for the proof of the optimality of $\theta^Z_{i,t}$ and $\theta^Z_{j,t}$ in \eqref{Y00fct}.
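To illustrate the pointwise quadratic maximization behind Lemma \ref{response}, the following minimal Python sketch checks the rebalancer formula in \eqref{Y00fct} at a single time point: writing $D := Z + \mu_1\tilde{a}_i + \mu_2 q_{i,t} + \mu_3\eta_t$, the pointwise objective $\theta(D+\alpha\theta) - \kappa(\tilde{a}_i-\theta)^2$ is concave in $\theta$ (since $\alpha\le 0 < \kappa$), and its maximizer is $-(D+2\kappa \tilde{a}_i)/(2\alpha-2\kappa)$, matching the lemma. All numerical values below are illustrative placeholders, not calibrated model inputs.

```python
# Pointwise check that the concave quadratic
#   f(theta) = theta * (D + alpha * theta) - kappa * (a - theta)**2,
# with drift level D = Z + mu1*a + mu2*q + mu3*eta, is maximized at
#   theta_star = -(D + 2*kappa*a) / (2*alpha - 2*kappa),
# which is the rebalancer formula in the lemma. Values are placeholders.
alpha, kappa = -0.1, 1.0           # alpha <= 0 < kappa  =>  f is concave
mu1, mu2, mu3 = 0.3, -0.2, 0.5     # placeholder coefficient values
Z, a, q, eta = 0.7, 1.0, 0.4, -0.6

D = Z + mu1 * a + mu2 * q + mu3 * eta
theta_star = -(D + 2.0 * kappa * a) / (2.0 * alpha - 2.0 * kappa)

def f(theta):
    return theta * (D + alpha * theta) - kappa * (a - theta) ** 2

# First-order condition holds, and theta_star beats a grid of alternatives.
h = 1e-6
foc = (f(theta_star + h) - f(theta_star - h)) / (2 * h)
assert abs(foc) < 1e-6
assert all(f(theta_star) >= f(theta_star + d) for d in (-1.0, -0.1, 0.1, 1.0))
```

The grid comparison confirms concavity, and the first-order condition pins down the same coefficient loadings on $(Z_t,\tilde{a}_i,q_{i,t},\eta_t)$ as \eqref{Y00fct}.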
\subsection{Market-clearing stock-price perceptions}\label{sec_priceperceptions}
Investor $k_0$ does not care per se about other investors' stock-price perceptions except to the extent that other investors' optimal holdings given their perceptions affect the market-clearing stock prices at which $k_0$ trades. Thus, when solving for trader $k_0$'s individual equilibrium holdings, we require $k_0$'s perceived stock-price process $S^\nu_{k_0,t}$ to clear the stock market for arbitrary holdings $\theta_{k_0,t}$. We assume that a given trader $k_0\in\{1,...,M+\bar{M}\}$ perceives that other traders $k\neq k_0$ perceive the stock-price processes in \eqref{Sit3a}. Hence, trader $k_0$ perceives that other traders $k$ optimally hold $\theta^Z_{k,t}$ in \eqref{Y00fct} shares of stock. Given this, we then find market-clearing $Z_{k_0,t}$ processes associated with arbitrary hypothetical holdings $\theta_{k_0,t}$ for trader $k_0$.
First, consider a trader $k_0$ who is a rebalancer $k_0:= i_0\in\{1,...,M\}$. We construct a process $(Z_{i_0,t})_{t\in[0,1]}$ such that the stock market clears in the sense
\begin{align}\label{Y0}
\begin{split}
0&= \sum_{i=1, i\neq i_0}^M \theta^{Z_{i_0}}_{i,t}+\sum_{j=M+1}^{M+\bar{M}}\theta^{Z_{i_0}}_{j,t} + \theta_{i_0,t},\quad t\in[0,1],
\end{split}
\end{align}
where $\theta_{i_0,t}$ denotes an arbitrary stock-holdings process for rebalancer $i_0$ and other investors' responses $\theta^{Z_{i_0}}_{k,t}$ are from \eqref{Y00fct} for $Z_t := Z_{i_0,t}$. Clearly, any solution $Z_{i_0,t}$ of \eqref{Y0} is specific for rebalancer $i_0$. To describe one particular solution, we consider a specific continuously differentiable function $B:[0,1]\to\mathbb{R}$ satisfying
\begin{align}\label{B}
B(t)=-\frac{A(t) \mu_2(t)+\bar{M} \bar{\mu}_4(t)+2 \kappa (t)+\mu_1(t)}{2 \bar{M} \kappa (t)+\bar{M} \bar{\mu}_5(t)},
\end{align}
where $A(t)$ is as in \eqref{dY2E}. Because $A(t)$ in \eqref{dY2E} depends on $B(t)$, Eq. \eqref{B} is a fixed point requirement for $B(t)$. Below, we show that the coupled ODEs \eqref{derivatives0a} characterize $(A,B)$ in \eqref{B}, and we give conditions ensuring that \eqref{derivatives0a} has a solution. Given a solution $B(t)$ to \eqref{B}, we can define $Y_t:=w_t - B(t)\tilde{a}_\Sigma$ as in \eqref{Z}, which allows us to express a solution of \eqref{Y0} as\footnote{The specific $B(t)$ function in \eqref{B} lets us combine $w_t$ and $\tilde{a}_\Sigma$ terms from \eqref{Y0} into the $Y_t$ term in \eqref{Y02i} using $Y_t = w_t - B(t)\tilde{a}_\Sigma$ from \eqref{Z}.}
\begin{align}\label{Y02i}
\begin{split}
Z_{i_0,t}&:=\tfrac{2 (\alpha -\kappa (t))}{M+\bar{M}-1}\theta_{i_0,t} +\tfrac{2\kappa(t)+\mu_1(t)}{M+\bar{M}-1}\tilde{a}_{i_0}+\tfrac{\mu_2(t)}{M+\bar{M}-1}q_{i_0,t}\\
&-\tfrac{(M-1) \mu_3(t)+\mu_2(t)}{M+\bar{M}-1}\eta_t
-\tfrac{\bar{M} (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}Y_t,\quad t\in[0,1].
\end{split}
\end{align}
The process $Z_{i_0,t}$ in \eqref{Y02i} captures the impact of arbitrary holdings $\theta_{i_0,t}$ by rebalancer $i_0$ on market-clearing stock prices given $i_0$'s perceptions of how other traders optimally respond using $\theta_{k,t}^{Z_{i_0}}$.
We then describe rebalancer $i_0$'s own stock-price perceptions for $i_0\in\{1,...,M\}$. Rebalancer $i_0$ filters based on her own target $\tilde{a}_i$ and on observations of past and current perceived market-clearing stock prices $(S^\nu_{i_0,u})_{u\in[0,t]}$ defined by
\begin{align}\label{New22i}
\begin{split}
dS^\nu_{i_0,t} &:= \Big\{\nu_0(t)Z_{i_0,t} +\nu_1(t)\tilde{a}_{i_0} +\nu_2(t)q_{i_0,t}+\nu_3(t)\eta_t+ \alpha\theta_{i_0,t}\Big\}dt + \gamma dw_{i_0,t},\\
S^\nu_{i_0,0}&:=Y_0,\quad i_0\in\{1,...,M\},
\end{split}
\end{align}
where $(\tilde{a}_{i_0},\theta_{i_0,t})$ are known and $(Z_{i_0,t},q_{i_0,t},\eta_t)$ are inferred by rebalancer $i_0$. The ``$\nu$'' superscript in \eqref{New22i} indicates that the perceived prices are defined with respect to a particular set of deterministic functions $(\nu_0,\nu_1,\nu_2,\nu_3)$, which we endogenously determine in Theorem \ref{thm_Main} below. More specifically, by observing $\tilde{a}_{i_0}$ and $(S^\nu_{i_0,u})_{u\in[0,t]}$ defined in \eqref{New22i}, rebalancer $i_0$ infers $Y_t:=w_t - B(t)\tilde{a}_\Sigma$ from \eqref{Z} using the Volterra argument behind Lemma \ref{lemma_infer}. To see this, we insert \eqref{Y02i} into \eqref{New22i} to produce rebalancer $i_0$'s perceived market-clearing stock-price dynamics
\begin{align}\label{New22ia}
\begin{split}
dS^\nu_{i_0,t} &= \Big\{\Big(\tfrac{\nu_0(t) (2 \kappa (t)+\mu_1(t))}{M+\bar{M}-1}+\nu_1(t)\Big)\tilde{a}_{i_0} +\big(\tfrac{\mu_2(t) \nu_0(t)}{M+\bar{M}-1}+\nu_2(t)\big)q_{i_0,t}\\
&+\big(\nu_3(t)-\tfrac{\nu_0(t) ((M-1) \mu_3(t)+\mu_2(t))}{M+\bar{M}-1}\big)\eta_t-\tfrac{\bar{M} \nu_0(t) (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}Y_t\\
&+\big(\alpha +\tfrac{2 \nu_0(t) (\alpha -\kappa (t))}{M+\bar{M}-1}\big)\theta_{i_0,t}\Big\}dt + \gamma dw_{i_0,t}.
\end{split}
\end{align}
Because the expressions multiplying $(\tilde{a}_{i_0},q_{i_0,t},\eta_t,Y_t,\theta_{i_0,t})$ in \eqref{New22ia} are continuous (deterministic) functions of time $t\in[0,1]$, Lemma \ref{lemma_infer} applies and shows that by observing $\tilde{a}_{i_0}$ and $(S^\nu_{i_0,u})_{u\in[0,t]}$ in \eqref{New22ia} over time $t\in[0,1]$, rebalancer $i_0$ can infer $w_{i_0,t}$. Subsequently, rebalancer $i_0$ can use \eqref{RfiltrationQQQ} and \eqref{RfiltrationQ} to also infer $Y_t$ over time $t\in[0,1]$.
Next, consider an investor $k_0$ who is tracker $k_0:=j_0\in\{M+1,...,M+\bar{M}\}$. For arbitrary off-equilibrium holdings $\theta_{j_0,t}$, the market-clearing solution $Z_{j_0,t}$ from
\begin{align}\label{Y0jj}
\begin{split}
0&= \sum_{i=1}^M \theta^{Z_{j_0}}_{i,t}+\sum_{j=M+1, j\neq j_0}^{M+\bar{M}}\theta^{Z_{j_0}}_{j,t} + \theta_{j_0,t},\quad t\in[0,1],
\end{split}
\end{align}
is given by
\begin{align}\label{Y02}
\begin{split}
Z_{j_0,t}&:=\tfrac{2 (\alpha -\kappa (t))}{M+\bar{M}-1}\theta_{j_0,t}-\tfrac{M \mu_3(t)+\mu_2(t)}{M+\bar{M}-1}\eta_t -\tfrac{(\bar{M}-1) (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}w_t\\
&\;-\tfrac{A(t) \mu_2(t)+(\bar{M}-1) \bar{\mu}_4(t)+2 \kappa (t)+\mu_1(t)}{M+\bar{M}-1}\tilde{a}_\Sigma.
\end{split}
\end{align}
Once again, $Z_{j_0,t}$ captures tracker $j_0$'s perceptions of the impact of her holdings $\theta_{j_0,t}$ on market-clearing stock prices given $j_0$'s perceptions of other investors' responses $\theta_{k,t}^{Z_{j_0}}$ to $\theta_{j_0,t}$.
Tracker $j_0$'s perceived market-clearing stock-price process is
\begin{align}\label{New22}
\begin{split}
dS^\nu_{j_0,t} &:= \Big\{Z_{j_0,t} +\bar{\nu}_3(t)\eta_t+\bar{\nu}_4(t)\tilde{a}_\Sigma+\bar{\nu}_5(t)w_t+ \alpha\theta_{j_0,t}\Big\}dt + \gamma dw_t,\\
S^\nu_{j_0,0}&:=Y_0,\quad j_0\in \{M+1,...,M+\bar{M}\},
\end{split}
\end{align}
where $\bar{\nu}_3,\bar{\nu}_4,\bar{\nu}_5:[0,1]\to \mathbb{R}$ are deterministic functions of time (endogenously determined in Theorem \ref{thm_Main} below). Inserting \eqref{Y02} into \eqref{New22} gives tracker $j_0$'s perceived market-clearing stock-price dynamics
\begin{align}\label{New22a}
\begin{split}
dS^\nu_{j_0,t} &= \Big\{\Big(\bar{\nu}_3(t)-\tfrac{M \mu_3(t)+\mu_2(t)}{M+\bar{M}-1}\Big)\eta_t\\
&+\Big(\bar{\nu}_5(t)-\tfrac{(\bar{M}-1) (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}\Big)w_t \\
&+\Big(\bar{\nu}_4(t)-\tfrac{A(t) \mu_2(t)+(\bar{M}-1) \bar{\mu}_4(t)+2 \kappa (t)+\mu_1(t)}{M+\bar{M}-1}\Big) \tilde{a}_\Sigma\\
&+\tfrac{\alpha (M+\bar{M}+1)-2 \kappa (t)}{M+\bar{M}-1}\theta_{j_0,t}\Big\}dt + \gamma dw_t.
\end{split}
\end{align}
We note that tracker $j_0$'s perceived market-clearing stock-price dynamics $dS^\nu_{j_0,t}$ in \eqref{New22a} are driven by the exogenous Brownian motion $w_t$ from \eqref{w_t} whereas rebalancer $i_0$'s prices $dS^\nu_{i_0,t}$ in \eqref{New22ia} are driven by $i_0$'s innovations process $dw_{i_0,t}$ from \eqref{dwit}. This is due to the different information sets of rebalancers and trackers.
Unlike the price-impact equilibrium in Theorem \ref{thm_PI}, we see from \eqref{New22ia} and \eqref{New22a} that, even if the direct price impacts vanish in the sense $\alpha := 0$ in \eqref{New22i} and \eqref{New22}, the remaining net price impacts $\frac{2 \nu_0(t) \kappa (t)}{1-M-\bar{M}}$ and $\frac{2 \kappa (t)}{1-M-\bar{M}}$ of $\theta_{i,t}$ and $\theta_{j,t}$ are nonzero. This is because price pressure in \eqref{New22ia} and \eqref{New22a} clears the stock market for arbitrary holdings $\theta_{i,t}$ and $\theta_{j,t}$.
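The vanishing-$\alpha$ reduction is elementary algebra on the $\theta$-coefficients in \eqref{New22ia} and \eqref{New22a}; a numerical spot check with placeholder parameter values:

```python
# Check that with alpha = 0 the theta-coefficients in the perceived
# market-clearing dynamics reduce to the stated net price impacts
# 2*nu0*kappa/(1-M-Mbar) and 2*kappa/(1-M-Mbar).
# The values of M, Mbar, nu0, kappa below are illustrative placeholders.
M, Mbar = 10, 10
nu0, kappa = 0.8, 1.0
alpha = 0.0

# Rebalancer coefficient: alpha + 2*nu0*(alpha - kappa)/(M + Mbar - 1)
rebal = alpha + 2 * nu0 * (alpha - kappa) / (M + Mbar - 1)
# Tracker coefficient: (alpha*(M + Mbar + 1) - 2*kappa)/(M + Mbar - 1)
track = (alpha * (M + Mbar + 1) - 2 * kappa) / (M + Mbar - 1)

assert abs(rebal - 2 * nu0 * kappa / (1 - M - Mbar)) < 1e-12
assert abs(track - 2 * kappa / (1 - M - Mbar)) < 1e-12
```

Both coefficients are strictly negative whenever $\kappa(t)>0$ and $\nu_0(t)>0$, so the net price impact does not vanish even when $\alpha:=0$.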
The next result gives the optimal holdings $\theta^*_{k,t}$ for all traders $k_0:=k\in\{1,...,M+\bar{M}\}$ given their perceptions of market-clearing stock prices in \eqref{New22ia} and \eqref{New22a}. While both $\theta^*_{k,t}$ and the optimal response holdings $\theta^{Z}_{k,t}$ in \eqref{Y00fct} maximize \eqref{Rproblem}, they differ because they are based on different perceived stock-price processes. On one hand, the optimal responses $\theta^{Z}_{k,t}$ in \eqref{Y00fct} are based on the stock-price perceptions in \eqref{Sit3a}. On the other hand, the optimizer $\theta^*_{k,t}$ is based on the market-clearing stock-price perceptions in \eqref{New22ia} and \eqref{New22a}.
\begin{lemma}[Trader $k$'s maximizer for market-clearing stock-price perceptions] \label{lemma_eqholdings}Let $\nu_0,\nu_1$, $\nu_2,\nu_3,\bar{\nu}_3,\bar{\nu}_4,\bar{\nu}_5:[0,1]\to \mathbb{R}$ and $\kappa:[0,1]\to (0,\infty]$ be continuous functions with $\nu_0>0$ and assume $\alpha \le0$. Let the perceived market-clearing stock-price processes in the wealth dynamics \eqref{Xit} be given by \eqref{New22ia} and \eqref{New22a} with corresponding filtrations $\mathcal{F}_{i,t}:= \sigma(\tilde{a}_i,S^\nu_{i,u})_{u\in[0,t]}$ and $\mathcal{F}_{j,t}:= \sigma(w_u,S^\nu_{j,u})_{u\in[0,t]}$ for $ i\in \{1,...,M\}$ and $j\in \{M+1,...,M+\bar{M}\}$. Then, provided the holding processes
\begin{align}\label{Y000}
\begin{split}
\theta_{i,t}^* :&=-\tfrac{2 \kappa (t) (M+\bar{M}+\nu_0(t)-1)+(M+\bar{M}-1) \nu_1(t)+\mu_1(t) \nu_0(t)}{2 (\alpha -\kappa (t)) (M+\bar{M}+2 \nu_0(t)-1)}\tilde{a}_i\\
&-\tfrac{(M+\bar{M}-1) \nu_2(t)+\mu_2(t) \nu_0(t)}{2 (\alpha -\kappa (t)) (M+\bar{M}+2 \nu_0(t)-1)}q_{i,t}\\
&+ \tfrac{\nu_0(t) ((M-1) \mu_3(t)+\mu_2(t))-(M+\bar{M}-1) \nu_3(t)}{2 (\alpha -\kappa (t)) (M+\bar{M}+2 \nu_0(t)-1)}\eta_t\\
&+\tfrac{\bar{M} \nu_0(t) (2 \kappa (t)+\bar{\mu}_5(t))}{2 (\alpha -\kappa (t)) (M+\bar{M}+2 \nu_0(t)-1)}Y_t,\\
\theta_{j,t}^* :&= \tfrac{-(M+\bar{M}-1) \bar{\nu}_3(t)+M \mu_3(t)+\mu_2(t)}{2 (M+\bar{M}+1) (\alpha -\kappa (t))}\eta_t \\
&+\tfrac{-(M+\bar{M}-1) \bar{\nu}_5(t)-2 M \kappa (t)+(\bar{M}-1) \bar{\mu}_5(t)}{2 (M+\bar{M}+1) (\alpha -\kappa (t))}w_t\\
&+\tfrac{A(t) \mu_2(t)-(M+\bar{M}-1) \bar{\nu}_4(t)+(\bar{M}-1) \bar{\mu}_4(t)+2 \kappa (t)+\mu_1(t)}{2 (M+\bar{M}+1) (\alpha -\kappa (t))}\tilde{a}_\Sigma,
\end{split}
\end{align}
satisfy \eqref{squareint}, the traders' maximizers for \eqref{Rproblem} are $\theta_{i,t}^*$ for rebalancer $i\in\{1,...,M\}$ and $\theta_{j,t}^*$ for tracker $j\in \{M+1,...,M+\bar{M}\}$.
$\hfill\diamondsuit$
\end{lemma}
From Lemma \ref{lemma_eqholdings}, we note that a generic rebalancer $i_0$ has filtration $\sigma(\tilde{a}_{i_0},S^\nu_{i_0,u})_{u\in[0,t]}$ whereas she perceives that other rebalancers $i\neq i_0$ have filtrations $\sigma(\tilde{a}_i,Y_u,W_{i,u},S^Z_{i,u})_{u\in[0,t]}$ as in Lemma \ref{response}. Because these are $i_0$'s off-equilibrium perceptions, this is allowable as long as they are consistent with $i$'s equilibrium holdings. We require this consistency in Definition \ref{Nash_eq}(iii) below. We also note from Lemma \ref{response} that rebalancer $i$ can infer $Z_{i_0,t}$ in \eqref{Y02i}. In turn, this allows rebalancer $i$ to infer the process
\begin{align}\label{extrainfer}
\tfrac{2 (\alpha -\kappa (t))}{M+\bar{M}-1}\theta_{i_0,t} +\tfrac{2\kappa(t)+\mu_1(t)}{M+\bar{M}-1}\tilde{a}_{i_0}+\tfrac{\mu_2(t)}{M+\bar{M}-1}q_{i_0,t}.
\end{align}
However, knowing \eqref{extrainfer} is insufficient for rebalancer $i$ to infer rebalancer $i_0$'s private target $\tilde{a}_{i_0}$.
\subsection{Equilibrium}\label{sec:eq}
\begin{definition}\label{Nash_eq} Deterministic functions of time $\mu_1,\mu_2,\mu_3,\bar{\mu}_4,\bar{\mu}_5,\nu_0,\nu_1,\nu_2,\nu_3,\bar{\nu}_4,\bar{\nu}_5:[0,1]\to\mathbb{R}$ constitute a \emph{subgame perfect Nash financial-market equilibrium} if:
\begin{enumerate}
\item[(i)] For $k \in \{1,...,M+\bar{M}\}$, trader $k$'s maximizer $\theta^*_{k,t}$ for \eqref{Rproblem} exists given the market-clearing stock-price perceptions \eqref{New22ia} and \eqref{New22a}.
\item[(ii)] For $k\in\{1,...,M+\bar{M}\}$, inserting trader $k$'s maximizer $\theta^*_{k,t}$ into the perceived market-clearing stock-price processes \eqref{New22ia} and \eqref{New22a} produces identical stock-price processes across all traders. This common equilibrium stock-price process is denoted by $S^*_t$.
\item[(iii)] Optimizers and equilibrium holdings must be consistent in the sense that trader $k$'s perceived response to trader $k_0$'s maximizer $\theta^*_{k_0,t}$ is trader $k$'s maximizer $\theta^*_{k,t}$.
\item[(iv)] The money and stock markets clear.
$\hfill\diamondsuit$
\end{enumerate}
\end{definition}
The identical stock-price requirement in Definition \ref{Nash_eq}(ii) is similar to \eqref{Y32PI}. We see from the rebalancers' perceptions \eqref{New22i} that both the drifts and the martingale terms have $i$ dependence. We replace $dw_{i,t}$ in $dS^\nu_{i,t}$ in \eqref{New22i} with the decomposition of $dw_{i,t}$ in terms of $dw_t$ from \eqref{dwit} and rewrite $dS^\nu_{i,t}$ in \eqref{New22i} as
\begin{align}\label{Y32}
\begin{split}
dS^\nu_{i,t} &= \Big\{\nu_0(t)Z_{i,t} +\nu_1(t)\tilde{a}_i +\nu_2(t)q_{i,t}+\nu_3(t) \eta_t +\alpha \theta_{i,t}\\
&\quad -B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big) \gamma \Big\}dt+ \gamma dw_t,\quad i\in\{1,...,M\}.
\end{split}
\end{align}
Therefore, to ensure identical equilibrium stock-price perceptions for all traders $k\in\{1,...,M+\bar{M}\}$, it suffices to match the drift of $dS^\nu_{j,t}$ in \eqref{New22} for $j\in\{M+1,...,M+\bar{M}\}$ with the drift of $dS^\nu_{i,t}$ in \eqref{Y32} evaluated at the optimal holdings $\theta_{i,t}:= \theta^*_{i,t}$ for $i\in\{1,...,M\}$. This produces the requirement (the right-hand side of \eqref{driftA} does not depend on the rebalancer index $i$)
\begin{align}\label{driftA}
\begin{split}
&\nu_0(t)Z^*_{i,t} +\nu_1(t)\tilde{a}_i +\nu_2(t)q_{i,t}+\nu_3(t) \eta_t +\alpha \theta^*_{i,t} -B'(t)\big(\tilde{a}_\Sigma-\tilde{a}_i - q_{i,t} \big)\gamma\\
&=\bar{\nu}_3(t)\eta_t+\bar{\nu}_4(t)\tilde{a}_\Sigma+\bar{\nu}_5(t)w_t+ \alpha\theta^*_{j,t},
\end{split}
\end{align}
for all rebalancers $i \in\{1,...,M\}$ and all trackers $j\in \{M+1,...,M+\bar{M}\}$. In \eqref{driftA}, the process $Z_{i,t}^*$ is \eqref{Y02i} evaluated at $\theta_{i,t}:= \theta^*_{i,t}$, and $Z_{j,t}^*$ is \eqref{Y02} evaluated at $\theta_{j,t}:= \theta^*_{j,t}$ so that:
\begin{align}\label{Y02star}
\begin{split}
Z^*_{i,t}&:=\tfrac{2 (\alpha -\kappa (t))}{M+\bar{M}-1}\theta^*_{i,t} + \tfrac{2\kappa(t)+\mu_1(t)}{M+\bar{M}-1}\tilde{a}_{i}+\tfrac{\mu_2(t)}{M+\bar{M}-1}q_{i,t}\\
&\;-\tfrac{(M-1) \mu_3(t)+\mu_2(t)}{M+\bar{M}-1}\eta_t
-\tfrac{\bar{M} (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}Y_t,\\
Z^*_{j,t}&:=\tfrac{2 (\alpha -\kappa (t))}{M+\bar{M}-1}\theta^*_{j,t}-\tfrac{M \mu_3(t)+\mu_2(t)}{M+\bar{M}-1}\eta_t \\
&\;-\tfrac{(\bar{M}-1) (2 \kappa (t)+\bar{\mu}_5(t))}{M+\bar{M}-1}w_t-\tfrac{A(t) \mu_2(t)+(\bar{M}-1) \bar{\mu}_4(t)+2 \kappa (t)+\mu_1(t)}{M+\bar{M}-1}\tilde{a}_\Sigma,
\end{split}
\end{align}
for rebalancers $ i\in\{1,...,M\}$ and trackers $j\in\{M+1,...,M+\bar{M}\}$.
As for the consistency requirement in Definition \ref{Nash_eq}(iii), we first fix a rebalancer $i_0\in \{1,...,M\}$. We require that the response holdings in \eqref{Y00fct} are consistent with $\theta^*_{i_0,t}$ in the sense that
\begin{align}\label{Y00fctbb}
\begin{split}
\theta^*_{i,t} &=-\tfrac{1}{2 \alpha -2 \kappa (t)}Z^*_{i_0,t}-\tfrac{2 \kappa (t)+\mu_1(t)}{2 \alpha -2 \kappa (t)}\tilde{a}_i-\tfrac{\mu_2(t)}{2 \alpha -2 \kappa (t)}q_{i,t}-\tfrac{\mu_3(t)}{2 \alpha -2 \kappa (t)}\eta_t,\\
\theta^*_{j,t}&= -\tfrac{1}{2 \alpha-2 \kappa(t)}Z^*_{i_0,t}-\tfrac{2 \kappa(t)+\bar{\mu}_5(t)}{2 \alpha-2 \kappa(t)}w_t -\tfrac{\bar{\mu}_4(t)}{2 \alpha-2 \kappa(t)}\tilde{a}_\Sigma,
\end{split}
\end{align}
for rebalancers $i\in\{1,...,M\}\setminus \{i_0\}$ and trackers $j\in \{M+1,...,M+\bar{M}\}$. Second, we fix a tracker $j_0\in \{M+1,...,M+\bar{M}\}$ and require that the response holdings in \eqref{Y00fct} must be consistent with $\theta^*_{j_0,t}$ in the sense that
\begin{align}\label{Y00fctbb2}
\begin{split}
\theta^*_{i,t} &=-\tfrac{1}{2 \alpha -2 \kappa (t)}Z^*_{j_0,t}-\tfrac{2 \kappa (t)+\mu_1(t)}{2 \alpha -2 \kappa (t)}\tilde{a}_i-\tfrac{\mu_2(t)}{2 \alpha -2 \kappa (t)}q_{i,t}-\tfrac{\mu_3(t)}{2 \alpha -2 \kappa (t)}\eta_t,\\
\theta^*_{j,t}&=-\tfrac{1}{2 \alpha-2 \kappa(t)}Z^*_{j_0,t}-\tfrac{2 \kappa(t)+\bar{\mu}_5(t)}{2 \alpha-2 \kappa(t)}w_t -\tfrac{\bar{\mu}_4(t)}{2 \alpha-2 \kappa(t)}\tilde{a}_\Sigma,
\end{split}
\end{align}
for rebalancers $i\in\{1,...,M\}$ and trackers $j\in \{M+1,...,M+\bar{M}\}\setminus \{j_0\}$.
Similar to the price-impact equilibrium, our Nash equilibrium existence result is based on a technical lemma, which guarantees the existence of a solution to an autonomous system of coupled ODEs.
\begin{lemma}\label{main_Lemma}
Let $\kappa:[0,1]\to(0,\infty]$ be a continuous and integrable function (i.e., $\int_0^1 \kappa(t)dt <\infty$), let $M+\bar{M}>2$, and let $\alpha \le0$. For a constant $B(0) \in \mathbb{R}$, the coupled ODEs
\begin{footnotesize}
\begin{align}\label{derivatives0a}
\begin{split}
B'(t)
&=\frac{\begin{array}{l}\Big\{2 \kappa (t) \Big(\bar{M} B(t) (M+\bar{M}-1) \big(\alpha (M+\bar{M})-2 (M+\bar{M}-1) \kappa (t)\big)\\
+(M+\bar{M}-2)\big(\alpha (M+\bar{M}+1)-2 (M+\bar{M}) \kappa (t)\big)\Big)\Big\}
\end{array}}{\begin{array}{l}
\Big\{ \gamma \Big(A(t) (M+\bar{M}-2) \big(\alpha (M+\bar{M}+1)-2 (M+\bar{M}) \kappa (t)\big)\\
+\alpha \big((M^2+M-1) \bar{M}+M^2+2 M \bar{M}^2-M+\bar{M}^3-2\big)\\
-2 \left((M^2-1) \bar{M}+(2 M-1) \bar{M}^2+(M-2) M+\bar{M}^3\right) \kappa (t)\Big)\Big\}
\end{array}},\\
A'(t)&= - \big(B'(t)\big)^2\Sigma(t)\big(A(t)+1\big),\quad A(0)=-\frac{(M-1)B(0)^2\sigma^2_{\tilde{a}}}{\sigma^2_{w_0} +B(0)^2(M-1)\sigma^2_{\tilde{a}}},\\
\Sigma'(t) &= -\big(B'(t)\big)^2\Sigma(t)^2,
\quad \Sigma(0) =\frac{(M-1) \sigma_{\tilde{a}}^2 \sigma_{w_0}^2}{B(0)^2 (M-1) \sigma_{\tilde{a}}^2+\sigma_{w_0}^2},
\end{split}
\end{align}
\end{footnotesize}have unique solutions with $\Sigma(t) \ge 0$, $\Sigma(t)$ decreasing, $A(t) \in [-1,0]$, and $A(t)$ decreasing for $t\in[0,1]$.
$\hfill\diamondsuit$
\end{lemma}
\noindent The affine ODE for $B(t)$ in \eqref{derivatives0a} is more complicated than the corresponding affine ODE in \eqref{derivatives0aPI} because the Nash equilibrium has the additional fixed point requirement in \eqref{B} that is absent in the price-impact equilibrium. However, both ODEs for $B(t)$ are affine.
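The monotonicity properties in Lemma \ref{main_Lemma} can be read off directly from the last two ODEs in \eqref{derivatives0a}: separating variables in $\Sigma' = -(B')^2\Sigma^2$ gives
\begin{align*}
\frac{1}{\Sigma(t)} = \frac{1}{\Sigma(0)} + \int_0^t B'(u)^2\,du,
\end{align*}
so $\Sigma(t)\in(0,\Sigma(0)]$ is decreasing, and integrating $A' = -(B')^2\Sigma(A+1)$ gives
\begin{align*}
1+A(t) = \big(1+A(0)\big)e^{-\int_0^t B'(u)^2\Sigma(u)\,du},\qquad 1+A(0)=\frac{\sigma_{w_0}^2}{\sigma_{w_0}^2+B(0)^2(M-1)\sigma^2_{\tilde{a}}}\in(0,1],
\end{align*}
so $A(t)\in[-1,0]$ is decreasing.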
Our main theoretical result gives a Nash equilibrium in terms of the ODEs \eqref{derivatives0a}. In this theorem, the price-impact parameter $\alpha$, volatility $\gamma$, and initial value $B(0)$ are free parameters.
\begin{theorem}\label{thm_Main} Let $\kappa:[0,1]\to (0,\infty)$ be continuous, and let the functions $(B,A,\Sigma)$ be as in Lemma \ref{main_Lemma}, let $M+\bar{M}>2$, and let $\alpha\le0$. Then, we have:
\begin{itemize}
\item[(i)] A subgame perfect Nash financial-market equilibrium exists and is given by the functions in \eqref{nus} in Appendix \ref{sec:formulas}.
\item[(ii)] Equilibrium holdings are
\begin{footnotesize}
\begin{align}
\theta_{i,t}^* &:=
-\frac{(M+\bar{M}-2) \left(2 \kappa (t)-\gamma B'(t)\right)}{\alpha (M+\bar{M})-2 (M+\bar{M}-1) \kappa (t)}\tilde{a}_i\nonumber\\
& +\frac{\gamma (M+\bar{M}-2) B'(t)}{\alpha (M+\bar{M})-2 (M+\bar{M}-1) \kappa (t)}q_{i,t}\nonumber
\\
&-\frac{\begin{array}{l}\Big\{\gamma (M+\bar{M}-2)^2 B'(t) (\alpha (M+\bar{M}+1)-2 (M+\bar{M}) \kappa (t))\Big\}
\end{array}}{\begin{array}{l}
\Big\{(\alpha (M+\bar{M})-2 (M+\bar{M}-1) \kappa (t)) \big(\alpha \big((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}\\
+(M-2) M (M+1)+\bar{M}^3\big)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)\big)\Big\}
\end{array}}\eta_t\nonumber\\
&+\frac{\begin{array}{l}\Big\{2 \bar{M} (M+\bar{M}-2) (M+\bar{M}-1) \kappa (t)\Big\}
\end{array}}{\begin{array}{l}
\Big\{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)\\
-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)\Big\}
\end{array}
}
Y_t,\label{Y0000}
\end{align}
\end{footnotesize}
\vspace{-0.5cm}
\begin{align*}
\begin{split}
\theta_{j,t}^* :&=
-\tfrac{\gamma (M+\bar{M}-2) (M+\bar{M}-1) B'(t)}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}\eta_t
\\
&-\tfrac{2 M (M+\bar{M}-2) (M+\bar{M}-1) \kappa (t)}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}w_t
\\
&+\tfrac{(M+\bar{M}-2) (M+\bar{M}-1) \left(\gamma (-A(t)+M-1) B'(t)+2 \kappa (t)\right)}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}
\tilde{a}_\Sigma,
\end{split}
\end{align*}
for rebalancers $i\in \{1,...,M\}$ and trackers $ j\in\{M+1,...,M+\bar{M}\}$.
\item[(iii)] The equilibrium stock-price process has dynamics
\begin{align}
dS^*_t &:=\Big\{\tfrac{\gamma (M+\bar{M}-2) B'(t) (\alpha (M+\bar{M}+1)-2 (M+\bar{M}) \kappa (t))}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}\eta_t\nonumber\\
&-\tfrac{2 \bar{M} (M+\bar{M}-1) \kappa (t) (\alpha (M+\bar{M})-2 (M+\bar{M}-1) \kappa (t))}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}w_t \nonumber\\
&-\tfrac{(M+\bar{M}-2) (\alpha (M+\bar{M}+1)-2 (M+\bar{M}) \kappa (t)) \left(\gamma (-A(t)+M-1) B'(t)+2 \kappa (t)\right)}{\alpha \left((3 M-1) \bar{M}^2+M (3 M-2) \bar{M}+(M-2) M (M+1)+\bar{M}^3\right)-2 \left((M+\bar{M}-2) (M+\bar{M})^2+\bar{M}\right) \kappa (t)}
\tilde{a}_\Sigma\Big\}dt \nonumber\\
&+ \gamma dw_t,\nonumber\\
S^*_0 &:= w_0 - B(0)\tilde{a}_\Sigma.\label{dhatS}
\end{align}
$\hfill\diamondsuit$
\end{itemize}
\end{theorem}
The following observations follow from Theorem \ref{thm_Main}:
\begin{enumerate}
\item The logic for the initial value $B(0)$ being a free input parameter is the same as in the price-impact equilibrium.
\item The price-impact parameter $\alpha$ and stock-price volatility $\gamma$ affect the stock-price drift and holdings via their impact on $B(t)$ in \eqref{derivatives0a}. The dependence on $\alpha$ differs from the price-impact equilibrium because $\alpha$ affects the perceived optimal responses in \eqref{Y00fct}.
\item Similar to \eqref{decom1} and \eqref{decom2}, for an arbitrary trader $k_0 \in \{1,...,M+\bar{M}\}$ and arbitrary holdings $\theta_{k_0,t}$, the optimal responses in \eqref{Y00fct} can be decomposed as
\begin{align}\label{decomp3}
\begin{split}
\theta^{Z_{k_0}}_{i,t} &= \theta^*_{i,t} -\frac{1}{M+\bar{M}-1} (\theta_{k_0,t}-\theta^*_{k_0,t}),\quad i \in \{1,...,M\},\\
\theta^{Z_{k_0}}_{j,t}&=\theta^*_{j,t} -\frac{1}{M+\bar{M}-1} (\theta_{k_0,t}-\theta^*_{k_0,t}),\quad j\in \{M+1,...,M+\bar{M}\},
\end{split}
\end{align}
where the equilibrium holdings $(\theta^*_{i,t}, \theta^*_{j,t},\theta^*_{k_0,t})$ are in \eqref{Y0000}.\footnote{This is similar to Eq. (2.16) in Chen, Choi, Larsen, and Seppi (2021).}
\item The subgame perfect Nash financial-market equilibrium is attractive because of its reasonable off-equilibrium market-clearing beliefs. However, although much of the mathematical structure is similar, the expressions for the equilibrium price and holding coefficients are algebraically more complex. Nonetheless, our numerical results in Section \ref{sec:num} show that the differences between the price-impact equilibrium and the subgame perfect Nash financial-market equilibrium are quantitatively small. This, in turn, suggests that the economic logic from the price-impact equilibrium carries over to the Nash equilibrium.
\end{enumerate}
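As a sanity check on the decomposition \eqref{decomp3}: summing the responses over $k\neq k_0$ and adding $\theta_{k_0,t}$ recovers the sum of the equilibrium holdings, which is zero by market clearing, so the perceived responses re-clear the market for any deviation. A pointwise numerical sketch with placeholder holdings:

```python
import random

random.seed(0)
M, Mbar = 10, 10
N = M + Mbar

# Placeholder equilibrium holdings, forced to clear (sum to zero).
theta_star = [random.gauss(0.0, 1.0) for _ in range(N)]
shift = sum(theta_star) / N
theta_star = [x - shift for x in theta_star]

k0 = 3           # deviating trader (arbitrary index)
theta_k0 = 2.5   # arbitrary off-equilibrium holding for trader k0

# Responses from the decomposition: each non-deviating trader absorbs an
# equal 1/(N-1) share of the deviation theta_k0 - theta_star[k0].
responses = [theta_star[k] - (theta_k0 - theta_star[k0]) / (N - 1)
             for k in range(N) if k != k0]

# Market clearing holds for the off-equilibrium holdings.
assert abs(sum(responses) + theta_k0) < 1e-9
```

The cancellation is exact: the $N-1$ equal shares of the deviation sum to $\theta_{k_0,t}-\theta^*_{k_0,t}$, which offsets the deviation itself.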
\section{Numerics}\label{sec:num}
Our price-impact and subgame perfect Nash equilibria are straightforward to compute numerically because prices and holdings are available in closed form given the solutions to the associated coupled ODEs in \eqref{derivatives0aPI} and \eqref{derivatives0a}. To illustrate our models, we compute them for several different parameterizations. In these parameterizations, there are $M := 10$ rebalancers and $\bar M := 10$ trackers. We assume the penalty function $\kappa(t)$ is constant at 1 over the trading day. The target volatilities are normalized to $\sigma_{\tilde{a}} = \sigma_{w_0} = 1$. We consider two initial values: $B(0) = -1$ (consistent with our negative $B(t)$ restriction) and $B(0) = 0$ as a reference point. We also consider two values of the price-volatility parameter, $\gamma \in \{0.5,1\}$, and a price-impact parameter $\alpha:=-0.1$.
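To illustrate that the equilibria are straightforward to compute, the sketch below integrates the coupled ODEs \eqref{derivatives0a} with a simple explicit Euler scheme at the baseline parameters ($M=\bar{M}=10$, $\kappa\equiv 1$, $\gamma=1$, $\alpha=-0.1$, $\sigma_{\tilde{a}}=\sigma_{w_0}=1$, $B(0)=-1$); the step count is an illustrative choice, and a production implementation would use an adaptive ODE solver.

```python
# Explicit Euler scheme for the coupled (B, A, Sigma) ODEs in the Nash
# equilibrium, with kappa(t) := 1 constant as in the baseline example.
M, Mbar = 10, 10
kappa, gamma, alpha = 1.0, 1.0, -0.1
sig_a, sig_w0 = 1.0, 1.0
B0 = -1.0
n_steps = 100_000          # illustrative step count, not from the paper
dt = 1.0 / n_steps

S = M + Mbar
B = B0
A = -(M - 1) * B0**2 * sig_a**2 / (sig_w0**2 + B0**2 * (M - 1) * sig_a**2)
Sig0 = (M - 1) * sig_a**2 * sig_w0**2 / (B0**2 * (M - 1) * sig_a**2 + sig_w0**2)
Sig = Sig0

def B_prime(B, A):
    # Ratio from the displayed affine ODE for B(t).
    num = 2 * kappa * (Mbar * B * (S - 1) * (alpha * S - 2 * (S - 1) * kappa)
                       + (S - 2) * (alpha * (S + 1) - 2 * S * kappa))
    den = gamma * (A * (S - 2) * (alpha * (S + 1) - 2 * S * kappa)
                   + alpha * ((M**2 + M - 1) * Mbar + M**2 + 2 * M * Mbar**2
                              - M + Mbar**3 - 2)
                   - 2 * ((M**2 - 1) * Mbar + (2 * M - 1) * Mbar**2
                          + (M - 2) * M + Mbar**3) * kappa)
    return num / den

for _ in range(n_steps):
    dB = B_prime(B, A)
    B += dB * dt
    A += -dB**2 * Sig * (A + 1) * dt
    Sig += -dB**2 * Sig**2 * dt

# Qualitative conclusions of the lemma hold at t = 1:
assert 0.0 < Sig <= Sig0      # Sigma nonnegative and decreasing
assert -1.0 <= A <= 0.0       # A stays in [-1, 0]
assert B < B0                 # B decreases in this parameterization
```

The terminal assertions match the qualitative properties guaranteed by Lemma \ref{main_Lemma}; the resulting $(B,A,\Sigma)$ paths feed directly into the closed-form price and holding coefficients.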
For the two equilibria, Figure \ref{figholdings} shows the coefficients for the equilibrium holdings for rebalancers and trackers, and Figure \ref{figstock} shows the coefficients for the equilibrium prices. Figure \ref{figabS} in Appendix \ref{sec_graphs} shows the solutions to the coupled ODEs \eqref{derivatives0aPI} and \eqref{derivatives0a}.
In each figure, there are pairs of lines for the two different equilibria. Interestingly, the quantitative differences between the price-impact and subgame perfect Nash equilibria are quite small despite the additional mathematical complexity of the subgame perfect Nash equilibrium. Recall that the two equilibria differ in the additional restriction in the subgame perfect Nash equilibrium that investors' off-equilibrium price perceptions given hypothetical holdings must still clear the market given perceived optimal order responses based on the associated perceptions of other investors. Thus, it appears that in-equilibrium market clearing has a much larger effect on equilibrium prices than the requirement of off-equilibrium market clearing.
The signs of the various price and holding coefficients in the $B(t) < 0$ case for the price-impact equilibrium are consistent with the analytic signing results in Section \ref{sec:PI}. In addition, we see that the loading on $\tilde{a}_\Sigma$ in the tracker holdings is positive, as noted in Section \ref{sec:PI}, which implies front-running on predictable future price pressure. The similarity of the numerical results for the two equilibria suggests that the intuitions for the signs of the various coefficients in the price-impact equilibrium carry over to the subgame perfect Nash financial-market equilibrium.
\begin{figure}[!h]
\begin{center}
\caption{Plots of coefficient loadings in the price-impact equilibrium holdings $\hat{\theta}_{k,t}$ in \eqref{Y0000PI} and in the Nash equilibrium holdings $\theta^*_{k,t}$ in \eqref{Y0000}. The exogenous model parameters are $\gamma:=1, \;\sigma_{w_0} := \sigma_{\tilde{a}}:=1, M:=\bar{M}:=10$, $\kappa(t):=1$, $B(0):=-1$, and $\alpha:=-0.1$. }\ \\
\begin{footnotesize}
$\begin{array}{cc}
\includegraphics[width=6cm, height=4.5cm]{Fig2A.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig2B.pdf}
\\
\text{{\bf 1A:} $\tilde{a}_i$ coefficient }&\text{{\bf 1B:} $q_{i,t}$ coefficient}\\
\hat{\theta}_{i,t}\;\text{(-----)},\; \;\theta^*_{i,t}\; (- -)& \hat{\theta}_{i,t}\;\text{(-----)},\; \;\theta^*_{i,t}\; (- -)\\
\includegraphics[width=6cm, height=4.5cm]{Fig2C.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig2D.pdf}
\\
\text{{\bf 1C:} $\eta_t$ coefficient }&\text{{\bf 1D:} $w_t$ coefficient}\\
\hat{\theta}_{i,t} \;\text{(-----)}, \ \hat{\theta}_{j,t}\; (- -), \; \theta^*_{i,t}\; (-\cdot-),\;\theta^*_{j,t}\;(-\cdot\cdot-)& \hat{\theta}_{i,t} \;\text{(-----)}, \ \hat{\theta}_{j,t}\; (- -), \; \theta^*_{i,t}\; (-\cdot-),\;\theta^*_{j,t}\;(-\cdot\cdot-)\\
\includegraphics[width=6cm, height=4.5cm]{Fig2E.pdf} &\\
\text{{\bf 1E:} $\tilde{a}_\Sigma$ coefficient }&\\
\hat{\theta}_{i,t} \;\text{(-----)}, \ \hat{\theta}_{j,t}\; (- -), \; \theta^*_{i,t}\; (-\cdot-),\;\theta^*_{j,t}\;(-\cdot\cdot-)&
\end{array}$
\end{footnotesize}
\label{figholdings}
\end{center}
\end{figure}
\newpage
\begin{figure}[!h]
\begin{center}
\caption{Plots of coefficient loadings in price-impact equilibrium stock-price dynamics $d\hat{S}_t$ in \eqref{S_PI} and Nash equilibrium stock-price dynamics $dS^*_t$ in \eqref{dhatS} for $B(0)\in \{-1,0\}$. Plots 2A, 2C, and 2E use $ \gamma:=0.5$ and Plots 2B, 2D, and 2F use $ \gamma:=1$. The exogenous model parameters are $\sigma_{w_0} := \sigma_{\tilde{a}}:=1, M:=\bar{M}:=10, \;\alpha :=-0.1$, and $\kappa(t):=1$ for $t\in[0,1]$. }\ \\
\begin{footnotesize}
$\begin{array}{cc}
\includegraphics[width=6cm, height=4.5cm]{Fig3A.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig3B.pdf}
\\
\text{{\bf 2A:} $\eta_t$ coefficient, $\gamma:=0.5$, }&\text{{\bf 2B:} $\eta_t$ coefficient, $\gamma:=1$, }\\
\text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-), & \text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-),\\
\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).&\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).\\
\includegraphics[width=6cm, height=4.5cm]{Fig3C.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig3D.pdf}
\\
\text{{\bf 2C:} $w_t$ coefficient, $\gamma:=0.5$, }&\text{{\bf 2D:} $w_t$ coefficient, $\gamma:=1$, }\\
\text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-), & \text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-),\\
\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).&\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).\\
\includegraphics[width=6cm, height=4.5cm]{Fig3E.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig3F.pdf}
\\
\text{{\bf 2E:} $\tilde{a}_\Sigma$ coefficient, $\gamma:=0.5$, }&\text{{\bf 2F:} $\tilde{a}_\Sigma$ coefficient, $\gamma:=1$, }\\
\text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-), & \text{$B(0):=-1$: } \hat{S}_{t} \;\text{(-----)}, S^*_{t}\; (-\cdot-),\\
\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).&\text{$B(0):=0$: } \; \hat{S}_{t}\; (- -), \;S^*_{t}\;(-\cdot\cdot-).\\
\end{array}$
\end{footnotesize}
\label{figstock}
\end{center}
\end{figure}
\ \newpage
\section{Measuring execution costs}
This section gives a measure of a rebalancer's costs of rebalancing from zero endowed shares at time $t=0$ given a target $\tilde{a}_i$. We present the measure in the price-impact equilibrium in Section \ref{sec:PI} (the Nash analogue is logically similar and produces similar numerics). In the price-impact equilibrium, rebalancer $i$'s value function is
\begin{align}\label{Rproblema}
\begin{split}
J(\tilde{a}_i,0,\eta_0,Y_0,q_{i,0}):=& \mathbb{E}\Big[ \int_0^1 \hat \theta_{i,t}d\hat S_t - \int_0^1 \kappa(t)(\tilde{a}_i-\hat \theta_{i,t})^2dt\Big|\,\mathcal{F}_{i,0}\Big],
\end{split}
\end{align}
where $\hat \theta_{i,t}$ denotes rebalancer $i$'s holdings in \eqref{Y0000PI} and $\mathcal{F}_{i,t}:=\sigma(\tilde{a}_i,S^f_{i,u})_{u\in[0,t]}$ where the $f$ coefficient functions are as in \eqref{fs} for $i\in\{1,...,M\}$. We seek a value function $J= J(\tilde{a}_i,s,q,Y,q_i)$ such that the process
\begin{align}\label{Rproblemc}
\begin{split}
J(\tilde{a}_i,s,\eta_s,Y_s,q_{i,s})&+ \int_0^s \Big\{\hat\theta_{i,t}\Big(f_0(t)Y_t +f_1(t)\tilde{a}_i +f_2(t)q_{i,t}+f_3(t)\eta_t+\alpha\hat\theta_{i,t}\Big)\\
& - \kappa(t)(\tilde{a}_i-\hat\theta_{i,t})^2\Big\}dt,\quad s\in[0,1],
\end{split}
\end{align}
is a martingale with respect to $\mathcal{F}_{i,t}$. Because rebalancer $i$'s objective in \eqref{Rproblem} is linear-quadratic, the value function $J$ is quadratic in the state processes. Thus, $J$ can be written as
\begin{align}\label{Rproblemd}
\begin{split}
J(\tilde{a}_i,s,\eta,Y,q_i)&= J_0(s) + J_{\eta}(s) \eta + J_Y(s) Y+ J_{q_i}(s) q_i+ J_{\eta\eta}(s)\eta^2 \\
& + J_{\eta Y}(s) \eta Y + J_{YY}(s) Y^2+J_{q_iq_i}(s) q^2_i+ J_{q_i\eta}(s)q_i\eta+J_{q_iY}(s) q_iY,
\end{split}
\end{align}
for deterministic functions of time $(J_0, J_\eta, J_Y, J_{q_i},J_{\eta\eta},J_{\eta Y}, J_{YY},J_{q_iq_i},J_{q_i\eta},J_{q_iY})$. These functions are given by a coupled set of ODEs with zero terminal conditions (we omit the ODEs for brevity). In \eqref{Rproblemd}, the dummy variables $(\eta,Y,q_i)$ are real numbers and $s\in[0,1]$.
To quantify the costs associated with rebalancer $i$'s trading target $\tilde{a}_i$, the quadratic mapping RC (Rebalancing Costs) defined by
\begin{align}\label{def_RC}
\begin{split}
\text{RC}(\tilde{a}_i):=J(0,0,\eta,Y,q_{i})-J(\tilde{a}_i,0,\eta,Y,q_{i}),
\end{split}
\end{align}
measures the change in profit (i.e., the change in the value function) associated with a non-zero target $\tilde{a}_i$.
Figure \ref{fig23} plots the rebalancer's value function $J$ for different target values $\tilde{a}_i$ and different model parameterizations. When the target $\tilde{a}_i$ is close to zero, the rebalancers become high-frequency liquidity providers. Their value function in this case is positive due to expected profit from liquidity provision and price-pressure front-running. As the target moves away from zero, the rebalancer starts to incur larger holding penalties that can eventually drive the rebalancer's value function negative. Interestingly, the impact of the stock-price volatility parameter $\gamma$ on the rebalancer's value function can be positive or negative. Liquidity-providing rebalancers are better off with a small $\gamma$, whereas rebalancers with large rebalancing targets are better off when $\gamma$ is large.
The rebalancing cost RC in \eqref{def_RC} for a target $\tilde{a}_i$ is computed as the difference between the value function evaluated at $\tilde{a}_i = 0$ and the value function evaluated at $\tilde{a}_i$. Since $J$ is highest at $\tilde{a}_i = 0$, RC is positive.
\begin{figure}[!h]
\begin{center}
\caption{Plots of the rebalancers' value function $J$ for various values of $(\gamma,\sigma_{w_0})$. The exogenous model parameters are $ \sigma_{\tilde{a}}:=1, M:=\bar{M}:=10, \;\alpha :=-0.1,\; B(0):=-1,\;\kappa(t):=1$ for $t\in[0,1]$, and $w_0:= B(0)(\tilde{a}_\Sigma-\tilde{a}_i)$. }\ \\
\begin{footnotesize}
$\begin{array}{c c}
\includegraphics[width=6cm, height=4.5cm]{Fig6A.pdf} &\includegraphics[width=6cm, height=4.5cm]{Fig6B.pdf}
\\
\text{{\bf 3A:} $\gamma:=1\;\text{(-----)},\; \gamma:=0.5\; (- -),\;\sigma_{w_0} :=1 $}&\text{{\bf 3B:} $\gamma:=1\;\text{(-----)},\; \gamma:=0.5\; (- -),\;\sigma_{w_0} :=0.1 $}\\
\end{array}$
\end{footnotesize}
\label{fig23}
\end{center}
\end{figure}
\section{Conclusion}
This paper presents the first analytically tractable model of dynamic learning about parent trading demand imbalances with optimized order-splitting. In particular, we provide closed-form expressions for prices and stock holdings in terms of solutions to systems of coupled ODEs in both price-impact and Nash equilibria. We then show that trading in our models reflects a combination of reaching investors' own trading targets, liquidity provision so that markets can clear, and front-running based on predictions of future price pressure.
There are many interesting directions for future research based on our analysis. First, replacing the zero-dividend stock approach with valuation based on a terminal payoff would be a significant technical step. Second, the model could be enriched by allowing for investor heterogeneity in the form of different penalty functions $\kappa(t)$ and by having multiple tracker targets (which would weaken the trackers' informational advantage). Third, it would be interesting to investigate if other off-equilibrium refinements have larger equilibrium effects. Fourth, incorporating risk-aversion into the investors' objectives would be interesting too. For example, how can Lemma \ref{response} be extended if the objectives in \eqref{Rproblem} are changed to exponential utilities?
\newpage
\section{Introduction}
Huddled around television sets or with ears clung to radio receivers, people all around the world heard Neil Armstrong utter the words: ``[t]hat's one small step for man; one giant leap for mankind.'' This landmark event took place on July 21, 1969. The date denotes the day on which the landing took place, even though one could convincingly argue that this day is part of a sequence of events leading up to this historic moment. In the days before the landing, newspapers published articles that counted down to the event and added commentary, fueling anticipation in public discourse. On a longer time scale, the moon landing was part of a larger event: the space race, a competition between the United States and the Soviet Union for technological dominance. This distinction calls to mind Fernand Braudel's famous description of events as ``surface disturbances, crests of foam that the tides of history carry on their strong backs~\cite{braudel_1995}.''
Events, such as the moon landing, are essential for our experience of history. We do not perceive time as is but through our experience of change in which events demarcate historical temporality. We rely on events to structure the world around us, as individuals and as societies~\cite{wagner-pacificiWhatEvent2017}. William H. Sewell, Jr. describes an event as ``an occurrence that is remarkable in some way - one that is widely noted and commented on by contemporaries~\cite{sewell_historical_1996}.''
In the book \textit{What is an Event?}, Wagner-Pacifici uses 9/11 as a key example to theorize about the form and flow of events. She points out that historians have been preoccupied with bounding events in time and space, while she emphasizes ``the \textit{ongoingness} of events.'' As an event unfolds, it disrupts the historical flow while the public tries to make sense of what is happening. Afterward, the public reflects on these events and sets out to integrate these events into a historical narrative. As events gain traction, they transform how we experience historical time.
Understanding how historical temporality differs from natural temporality is crucial for ``understanding how history has shaped the identity of modern society and culture''~\cite{koselleck_futures_2004}. History is a process of both remembering and forgetting events and their relations. For contemporaries, the moon landing was a singular event unlike any other; yet, canonized history knows more than one of these singular events. Was the moon landing really as impactful at the time, or has the wheel of time strengthened its position in our collective memories?
As Wagner-Pacifici points out, events cannot always be tied to exact dates, even though historical events are often connected to specific dates, such as the moon landing, the fall of the Berlin Wall, or winning a European soccer final. Rather than only departing from specific dates in a top-down manner, can we also detect the unfolding of events and their impact on the historical flow in a more data-driven, bottom-up manner? This paper sets out to answer this question by analyzing the relationship between events and the historical flow, represented by the information presented on the front pages of newspapers.
In our case, we model the ways events impacted language use. More specifically, we examine disruptions in the information flow of news on front pages. For example, events can disrupt the flow of the news by decreasing the amount of novel information presented on the front pages. In the run-up to an event, an increasing focus of the public's eye might be reflected in the increasing uniformity of discourse. Alternatively, an event could have a sudden impact while retaining the public's attention for an extended period. One could hypothesize different archetypical forms of events. In what follows, we try to establish universal motifs, or event flows, from the data itself. The three central questions to this paper are: (1) Do events impact historical flow, as represented by front pages in newspapers? (2) Can we cluster events based on the way they impacted the flow of information? (3) Can we use these clusters to query for events? We call these clusters, event flows, as they represent generalized manners in which events have impacted historical flow.\footnote{Data and code supporting this paper have been made available at \url{https://doi.org/10.5281/zenodo.5509949} (data) and \url{https://github.com/melvinwevers/event-flow} (code).}
A recent special forum in the journal \textit{History and Theory} clearly describes the long-standing historiographical debate on the concept of the event~\cite{jungTimesEventIntroduction2021}. One of the main challenges in history is to combine theoretical work on events with empirical studies of the temporality of events and their relationship to collective memory. The authors claim that systematic analyses of the temporal nature of events, which could shed light on an event's identity, are largely unexplored. This paper offers a computational method that contributes to this effort to better understand how events and their temporal structure have affected public discourse and, by extension, collective memory.
\section{Related Work}
Previous studies have shown that word usage in newspapers is sensitive to the dynamics of socio-cultural events~\cite{guldi_measures_2019, van_eijnatten_eurocentric_2019, daems_workers_2019}. Furthermore, the co-occurrence of words in newspaper reporting has been shown to capture thematic development accurately~\cite{newman_probabilistic_2006}, and, when modeled dynamically, is indicative of the evolution of cultural values and biases~\cite{van_eijnatten_eurocentric_2019, paul2019bursty, wevers_using_2019}. Methods from complexity science, such as Adaptive Fractal Analysis, have been used to identify distinct domains of newspaper content based on temporal patterns in word use (e.g., advertisements and articles)~\cite{wevers_tracking_2020} and to discriminate between different classes of catastrophic events that display class-specific fractal signatures in, among other things, word usage in newspapers~\cite{gao_culturomics_2012}.
Several studies have shown that measures of (relative) entropy can detect fundamental conceptual differences between distinct periods~\cite{guldi_measures_2019, degaetano-ortliebUsingRelativeEntropy2018, kestemont_mining_2014}, concurrent ideological movements (e.g. progressive and conservative politics) \cite{barron_individuals_2018, bos_quantifying_2016}, and even, the development of ideational factors (e.g., creative expression) in writing with a serial structure~\cite{murdock_exploration_2015, nielbo_automated_2019, nielbo_curious_2019}. More specifically, a set of methodologically related studies have applied windowed relative entropy to thematic text representations to generate signals that capture information \emph{novelty} as a reliable content difference from the past and \emph{resonance} as the degree to which future information conforms to said novelty~\cite{barron_individuals_2018, murdock_exploration_2015}. Two recent studies have found that successful social media content shows a strong association between novelty and resonance~\cite{nielbo_trend_2021}, and that variation in the novelty-resonance association can predict significant change points in historical data~\cite{vrangbaek_composition_2021}.
Our paper builds upon this work and will adapt the windowed relative entropy approach to a method that we call Jump Entropy. This method allows us to examine how events have impacted the flow of information in and between newspapers. We compare time series between newspapers and events, using Dynamic Time Warping Barycenter Averaging (DBA)~\cite{petitjean2011global}.
\section{Data}
Front pages function as the pulse of the nation, displaying current and pressing events at specific time points. Figure~\ref{fig:frontpage_example}, for example, depicts the front page in \textit{Algemeen Handelsblad} published on the day after the moon landing, which took place on a Sunday. In big, bold letters, we read: ``Walking on the Moon.''\footnote{Translated from the Dutch phrase ``Wandelen op de maan''.} Multiple articles on this event feature on this front page. In addition to the text, we see three images documenting this historic moment. For this study, we only looked into the textual content---captured by optical character recognition (OCR)---and not at the images.
The data consists of the textual content represented on the front pages of ten Dutch national and regional newspapers published between 1950 and 1995 (See Table~\ref{tab:data-overview} for details).\footnote{It is important to note that not all newspapers run for the entire period.}
\begin{figure}
\centering
\includegraphics[width=.75\linewidth]{figures/img1.jpg}
\caption{Front page of \textit{Algemeen Handelsblad} on July 21, 1969}
\label{fig:frontpage_example}
\end{figure}
\begin{table}[]
\centering
\caption{Overview of newspapers in dataset}
\begin{tabular}{@{}lll@{}}
\toprule
Newspaper & Period & Type \\ \midrule
Algemeen Handelsblad (AH) & 1950-1969 & national \\
De Tijd (DT) & 1950-1974 & national \\
Leeuwarder Courant (LC) & 1950-1994 & regional \\
Limburgs Dagblad (LD) & 1950-1989 & regional \\
Nieuwe Rotterdamsche Courant (NRC) & 1950-1994 & national \\
Parool (PA) & 1950-1995 & regional \\
Telegraaf (TG) & 1950-1994 & national \\
Trouw (TR) & 1950-1995 & national \\
Volkskrant (VK) & 1950-1995 & national \\
Vrije Volk (VV) & 1950-1990 & national \\
\bottomrule
\end{tabular}
\label{tab:data-overview}
\end{table}
In preprocessing the data, which is imperfect due to flaws in the OCR technology, we removed stop words, punctuation, digits, and words shorter than three or longer than seventeen characters. We lemmatized the text using the NLP toolkit SpaCy.\footnote{https://spacy.io/} Next, we used Latent Dirichlet Allocation (LDA) with collapsed Gibbs sampling to train a topic model of the data.\footnote{Using topic coherence, the optimal number of topics ($k$) centered on 100. Going above or slightly below this number did not impact the results. However, when too few topics are selected, the matrix becomes too sparse, which makes it difficult to detect shifts in entropy.}
The input document for topic modeling consisted of a concatenation of all the articles on one single front page. This yields a matrix per newspaper of $P(topic_k|document_d)$ or $\theta$, in this case $document_d$ refers to a front page on a specific date and $topic_k$ holds the probability distribution of topics over documents. These ten matrices functioned as input for the calculation of the Jump Entropy.
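The token filtering described above can be sketched as follows. The stop-word set below is an illustrative stand-in, and the lemmatization step (done with SpaCy in the study) is omitted:

```python
# Sketch of the front-page token filtering described above. The stop-word
# set is an illustrative stand-in; lemmatization (SpaCy in the study) is
# omitted for brevity, and the regex keeps plain alphabetic tokens only.
import re

STOP_WORDS = {"de", "het", "een", "en", "van"}  # illustrative subset

def preprocess(text, min_len=3, max_len=17):
    # lowercase and keep alphabetic tokens (drops punctuation and digits)
    tokens = re.findall(r"[a-z]+", text.lower())
    # drop stop words and words shorter than min_len or longer than max_len
    return [t for t in tokens
            if min_len <= len(t) <= max_len and t not in STOP_WORDS]

print(preprocess("De maan-landing van 21 juli 1969!"))  # → ['maan', 'landing', 'juli']
```

The filtered tokens per front page would then be concatenated into one document per date before topic modeling.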
In addition to the newspaper data, we constructed a list of sixty events for 1950-1995, using historical subject-matter knowledge combined with Wikipedia.\footnote{See Appendix \ref{section:Appendix_A} for an overview of these events.} This list includes global and national events.
\section{Method}
\paragraph{Jump Entropy} To measure the flow of information between front pages, we propose an adapted version of the approach introduced by~\cite{barron_individuals_2018}. Barron et al.~\cite{barron_individuals_2018} measured the amount of novelty (how unexpected a document is, given previous documents) and transience (the degree to which patterns in documents fade or persist in future documents). They calculate this using varying window sizes, i.e., comparing the novelty of a document against the average relative entropy over a varying number of surrounding documents. Relative entropy is a divergence measure that captures the amount of ``surprise'' between two probability distributions, where (in this case) the reader learns to expect one distribution, $\vec{p}$, and then encounters a second, say $\vec{q}$. These probability distributions are captured in $\theta$, i.e., the topic distributions from one time point compared to another. In our case, this would be between front pages in one newspaper.
Calculating novelty and transience using this windowed approach assumes that information accumulates in a continuous flow. This approach is quite sensitive to outliers, especially for shorter time windows. Also, due to the cyclical nature of events (e.g. seasonal or annual events), or the cascading, ripple effect in which an event might have impacted newspaper discourse, taking a continuous window might flatten out these effects.
To better capture the effect of an event on different time scales and trace ripple effects in public discourse, we adapted their approach. We introduce Jump Entropy, an approach that replaces the shifting window with jumps of different sizes. Rather than moving through the set linearly, we compare sets of front pages that are separated by a given distance. This distance between the two sets is expressed by $J$, the jump size. While using a fixed range of documents (14 days, $t-7$ and $t+7$), we vary the jump size ($J$) and calculate the JSD between a set of front pages around the focal point $t$ and front pages around a focal point either in the past (negative jump size) or the future (positive jump size).\footnote{We also experimented with shorter time windows, but this adds noise to the signal.}
While \cite{barron_individuals_2018} compare one front page with a range of front pages, this method compares two ranges of front pages separated by a jump. Put differently, we measure the average entropy for a range of documents and then jump into the past or future and compare this range to a similar range in this period. This approach allows us to measure the amount of ``surprise'' between the focal set to a set in the past or the future; as such, we can spot re-use of themes or recurring debates. Compared to the windowed approach, this method is less sensitive to outliers. We can find cyclical patterns, i.e., which period in the past or future is most similar to the focal period.
In addition to adding jumps, we also used a different metric than~\cite{barron_individuals_2018}. Rather than using Kullback-Leibler divergence (KLD), we used Jensen-Shannon divergence (JSD), a less well-known formulation of relative entropy. JSD has several favorable properties when dealing with cultural information that is not produced in a strictly one-directional fashion. While newspapers are published day by day, the information represented in the papers is not necessarily produced in a one-directional fashion. Articles might have been written earlier, or authors might reflect back on earlier events. We contend that JSD better reflects these assumptions. First and foremost, JSD is symmetric, ensuring that $JSD(P|Q) = JSD(Q|P)$ for probability distributions $P$ and $Q$. Second, as a smooth version of KLD, JSD is well-behaved when $P$ and $Q$ are small. Finally, the square root of JSD is a proper distance metric that can, for example, be used for clustering probability distributions. A disadvantage of JSD compared to KLD is that it is computationally more costly. However, this additional cost does not significantly impact the current study.
We model the difference between articles $s^{(j)}$ and $s^{(k)}$ as their relative entropy:
\begin{equation}
JSD (s^{(j)} \mid s^{(k)}) = \frac{1}{2} D (s^{(j)} \mid M) + \frac{1}{2} D (s^{(k)} \mid M)\label{eq:4}
\end{equation}
\noindent with $M = \frac{1}{2} (s^{(j)} + s^{(k)})$ and $D$ is the Kullback-Leibler divergence:
\begin{equation}
D (s^{(j)} \mid s^{(k)}) = \sum_{i = 1}^{K} s_i^{(j)} \times \log_2 \frac{s_i^{(j)}}{s_i^{(k)}}\label{eq:5}
\end{equation}
We calculated the average relative entropy between a range ($t$) of topic distributions ($s$) at moment $i$ ($s^{i+t}$) and the same range of documents at moment $j$ ($s^{j+t}$). $t$ ranged from -14 to 14, and the jump size ($J$) ranged between -1500 and 1500 with steps of 15:
\begin{equation}
\mathcal{J}_{J}(i) = \frac{1}{w} \sum_{j=1}^{w} D \big(s^{(i)} \mid s^{(i + vj)}\big)
\label{eq:jump_entropy}
\end{equation}
\noindent where $v = -1$ for $t < 0$ and $v = 1$ otherwise, $D$ is the distance measure (in this case $JSD$), $w$ is the window size, $J$ is the set of jumps of size $w$, and $t$ is the time point (`direction') at which $\mathcal{J}_{J}(i)$ is computed.
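A minimal sketch of equations \eqref{eq:4}--\eqref{eq:5} and the jump comparison, assuming `theta` is a date-ordered array whose rows are per-front-page topic distributions (the variable names and window handling are ours, not the study's exact code):

```python
# Jensen-Shannon divergence (in bits) and a windowed jump comparison,
# assuming each row of `theta` is one front page's topic distribution.
import numpy as np

def kld(p, q):
    # Kullback-Leibler divergence; terms with p_i == 0 contribute zero
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q):
    # symmetric and bounded in [0, 1] when using log base 2
    m = 0.5 * (p + q)
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)

def jump_entropy(theta, i, jump, w=14):
    # average JSD between the w pages after focal day i and the w pages
    # after day i + jump (a negative jump looks into the past)
    return float(np.mean([jsd(theta[i + j], theta[i + jump + j])
                          for j in range(w)]))
```

For identical topic distributions the divergence is zero, and `jsd(p, q)` equals `jsd(q, p)`, which is the symmetry property motivating the switch from KLD.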
After calculating the jump entropies for a newspaper, we can use them to visualize event flows. Figure~\ref{fig:event_flows_vk} shows the event flows for eight random events in the newspaper \textit{De Volkskrant}. In each panel, the x-axis shows the jump size and the y-axis the relative entropy. The center of the x-axis (0) indicates the date of the event; to the left we see jumps into the past and to the right jumps into the future. This graph captures the flow of information leading up to and after the event.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/img2.png}
\caption{Event Flows of \textit{De Volkskrant}, window size of 50, rolling mean of 5 (orange line). The center of the x-axis reflects the date of the event; to the left we see jumps into the past, and to the right jumps into the future. This graph captures the flow of information leading up to and after the event. On the y-axis, we capture the amount of new information. A lower score means that front pages are more similar; thus, a line going down means an increasing focus on a topic.}
\label{fig:event_flows_vk}
\end{figure}
\paragraph{Comparing Event Flows}
To group events within and between newspapers in an unsupervised manner requires a method to cluster dynamic processes and compute archetypical (averaged) representations of these time series. Dynamic-Time Warping Barycenter Averaging (DBA) is an ideal solution for exactly that. DBA is based on Dynamic Time Warping (DTW), a technique for optimally aligning time series and flexibly capturing similarities inside the series \cite{petitjean2011global}. As such, DTW accounts for non-linear variations in the time series, i.e., fluctuations do not need to occur at the same time steps~\cite{rakthanmanon2013addressing}. This makes DTW a better distance metric for clustering than traditional Euclidean distance metrics, which have been found to be an inaccurate measure for clustering~\cite{liao2005clustering,petitjean2011global}.
In principle, DTW allows us to align and compare events between newspapers. However, as pointed out by \cite{petitjean2011global}, while DTW is one of the most used similarity measures for time series, it cannot reliably be combined with well-known clustering algorithms such as k-means, which require an averaging step; without an averaging method, DTW-based clustering is restricted to K-medoids. DBA offers an extension of DTW to compute a consensus representation for a set of time series~\cite{petitjean2011global}. This allows us to calculate the average event flow for one event using data from ten newspapers. Figure~\ref{fig:event_flow_moon_landing} gives an example of this process using DBA and a smoothened version of DTW (soft-DTW) using a soft minimum. \cite{petitjean2011global} show that DBA can be used as input for k-means clustering of time series.
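Under our reading of \cite{petitjean2011global}, DBA can be sketched as follows for univariate series with an absolute-difference local cost (a simplification; the study's exact implementation and toolkit are not specified here):

```python
# Sketch of DTW Barycenter Averaging (DBA): repeatedly align every series
# to the current barycenter along its optimal DTW warping path, then set
# each barycenter point to the mean of the series points aligned to it.
import numpy as np

def dtw_path(a, b):
    # dynamic-programming DTW with backtracking of the optimal path
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

def dba(series, n_iter=10):
    bary = np.mean(series, axis=0)  # start from the pointwise mean
    for _ in range(n_iter):
        buckets = [[] for _ in bary]
        for s in series:
            for bi, si in dtw_path(bary, s):
                buckets[bi].append(s[si])
        bary = np.array([np.mean(b) for b in buckets])
    return bary
```

Applied to the ten per-newspaper event flows for one event, `dba` yields the consensus (average) event flow used in the figures.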
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/img3.png}
\caption{Dynamic Time Warping Barycenter Averaging (DBA) of the time series of the Moon Landing. The red line shows the average time series, while the colored lines show individual time series per newspaper. On the left, we apply the default DBA method, on the right we see Soft-DTW, which uses a differentiable loss function to find the barycenter. The x-axis indicates the window before and after the event (centered at 0).}
\label{fig:event_flow_moon_landing}
\end{figure}
Rather than using k-means clustering, we applied agglomerative clustering. This approach has two main advantages over k-means clustering. First, the method is more explainable; we can inspect how clusters are created, how they are distributed over the dataset, and which clusters are more similar than others. Second, agglomerative clustering led to better separation of the clusters than k-means clustering (see~Figure~\ref{fig:clustering_projections}).
We clustered using the following steps:
\begin{enumerate}
\item Applying a window size of 28.\footnote{There were four to five clusters for all window sizes between five and fifty. We settled for 28 days for interpretative reasons, as it corresponds to four weeks, or approximately a month of front pages.}
\item Time series were z-normalized.
\item Calculate pairwise DTW distance between the events, acquiring a distance matrix.
\item Project the distance matrix into two dimensions using UMAP (Uniform Manifold Approximation and Projection).
\item Grid search through clustering parameters (number of clusters, clustering method), aiming for a high Silhouette score. Additional sanity checks of cluster coherence were taken using the UMAP projection.
\item After the grid search, Euclidean distance was picked as the clustering metric, while UPGMA (unweighted pair group method with arithmetic mean), also known as average linkage, was picked as the linkage criterion.
\item Calculate an archetypical time series using DBA for each found cluster.
\end{enumerate}
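The clustering steps above can be sketched as follows (the UMAP projection and grid search are omitted; the DTW here is a plain dynamic-programming distance, not necessarily the study's exact implementation):

```python
# Pairwise DTW distances between z-normalized event flows, followed by
# average-linkage (UPGMA) agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    # classic dynamic-programming DTW distance for univariate series
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_event_flows(series, n_clusters=5):
    z = [(s - s.mean()) / s.std() for s in series]   # step 2: z-normalize
    n = len(z)
    dist = np.zeros((n, n))                          # step 3: distance matrix
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(z[i], z[j])
    Z = linkage(squareform(dist), method="average")  # step 6: UPGMA linkage
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Each cluster's member series can then be averaged with DBA to obtain the archetypical event flow (step 7).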
\begin{figure}
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{figures/img4.png}
\caption{K-means clustering}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{figures/img5.png}
\caption{Agglomerative clustering}
\label{fig:sub2}
\end{subfigure}
\caption{UMAP projection of clusters learned by k-means (left) and agglomerative clustering (right).}
\label{fig:clustering_projections}
\end{figure}
\noindent Using the described methods, we executed the following steps:
\begin{itemize}
\item compare similarities and differences between events across newspapers
\item establish archetypical event flows using agglomerative clustering
\item use an averaged event flow to query for similar events
\end{itemize}
\section{Results}
In what follows, we will first check whether newspapers differ in their event flows. This step helps us establish for which events there was consensus among newspapers and which newspapers deviated in their reporting on a particular event. Rather than focusing only on our selection of events, we also use a list of random dates as a baseline.
\paragraph{Newspaper difference using random dates}
We selected the event flows with a jump size of thirty, i.e., thirty days into the future and thirty into the past, for 1,000 random dates between 1950 and 1995 from all the included newspapers. After z-normalizing the time series for every date, we calculated the average event flow per date using DBA. Next, we calculated the distance from each newspaper to each date's average event flow using DTW. This distance to the mean shows us which newspapers deviated the most from the average for that date. In Figure~\ref{fig:newspaper_difference}, we see the distance from the mean per newspaper, grouped per decade.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/img6.png}
\caption{Distance per newspaper to average time series for a random set of 1,000 dates. Mean-aggregated by decade with CI at 95\%.}
\label{fig:newspaper_difference}
\end{figure}
From Figure~\ref{fig:newspaper_difference}, we can gauge that the regional newspaper \textit{Leeuwarder Courant (LC)} and the national newspaper \textit{De Telegraaf} deviated the most from the mean, with the latter diverging considerably over the course of these fifty years. This confirms what we knew about the country's most popular newspaper's ideological course, which moved to the right in this period~\cite{hoevenConcentratieKritischeAutonomie2019}. Also, the changing course of the \textit{Leeuwarder Courant} dovetails with the merger of the newspaper with another regional newspaper, \textit{Friese Koerier}~\cite{broersmaNieusteTydingenLeeuwarder2002}. It might be that this merger pushed the newspaper toward the average of the Dutch newspaper landscape.
\paragraph{Newspaper differences using selected events}
In addition to calculating the difference between papers for random dates, we used our list of events. For each event, we calculated an average event flow using DBA. Next, we calculated the distance from each event per newspaper to the average event flow. From this, we learn for which events the event flows in newspapers were most similar, and for which events newspapers diverged.\footnote{Since \textit{Algemeen Handelsblad} and \textit{De Tijd} only appeared for a small subset of the period, we excluded these two newspapers.}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/img7.png}
\caption{Distance to average event time series. The newspapers and events are sorted based on their mean distance. Dark refers to a shorter distance, while light refers to a longer distance. White indicates missing values. For legibility, we only included the top 10 events closest to the mean and every second event from the remainder of the list.}
\label{fig:newspaper_event_difference}
\end{figure}
Figure~\ref{fig:newspaper_event_difference} shows that the top five events on which the newspapers reported uniformly were: the Suez crisis in 1956, the 1973 oil crisis, the Nigerian civil war (1967-1970), the fall of Saigon (April 30, 1975), and the moon landing (July 21, 1969). We also see that \textit{NRC Handelsblad}, \textit{Het Parool}, \textit{Het Vrije Volk}, and \textit{De Volkskrant} were most closely aligned in terms of their event flows, with \textit{De Telegraaf} and \textit{Leeuwarder Courant}, again, being the outliers. Here we also clearly see how \textit{De Telegraaf} behaved quite distinctly compared to the other newspapers. Especially on Middle Eastern affairs, such as the Yom Kippur War (1973) and the Iran hostage crisis (1979-1981), the other papers were very much in line, with \textit{De Telegraaf} being the exception.
\paragraph{Archetypical time series}
Using DBA in combination with agglomerative clustering, we looked for clusters of event flows in our data. We excluded \textit{De Telegraaf} because of its deviant behavior. Using a window size of 28, we used the 58 events for the nine remaining newspapers as input. Using Silhouette analysis and cluster separation in the UMAP projection, we determined that the optimal number of clusters was closest to five. Figure~\ref{fig:event_flow_clusters} shows the average event flows within these five clusters.
From this clustering, we learn that events impacted the news in five characteristic ways. These patterns capture how this impact unfolded over time and help us to understand how events impacted the flow of information in the news, and by extension, how events impacted our historical temporality. The five clusters can be described as follows:
\begin{itemize}
\item \textbf{Cluster 1}: The downward slope before the event indicates a growing focus on an event, with a slow release indicating persisting, albeit abating focus on the topic after the event.
\item \textbf{Cluster 2}: The downward slope before the event indicates a growing focus on an event, with a flat line after the event indicative of a persistent focus on a topic after the event. Compared to Cluster 1, the event's impact is more sudden, and it captured the public's attention for a longer period.
\item \textbf{Cluster 3}: A noisy pattern that indicates no clear anticipation and a quick release after the event. Events with this signature might have occurred in periods with a quick news cycle, i.e., many news events rapidly superseding each other.
\item \textbf{Cluster 4}: Stable entropy, indicated by a lack of slope, suggests an increasing focus on a topic in the days before an event. The slope after the event indicates a release of focus after the event. This cluster is the mirror version of Cluster 2 and, to a lesser extent, Cluster 1.
\item \textbf{Cluster 5}: This cluster is most similar to cluster 4, albeit more balanced. There is growing anticipation and a release after the event. These event characteristics are indicative of events, such as the Moon Landing, that capture the public's attention in the days before \emph{and} after an event.
\end{itemize}
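The silhouette analysis used to select the number of clusters can be sketched directly from its definition. A minimal numpy implementation, under the simplifying assumptions that event flows are flattened into fixed-length vectors and that distances are Euclidean rather than DTW-based:

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette coefficient, implemented from its definition.
    X: (n, d) array of flattened event-flow series; labels: (n,) ints."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = []
    for i in range(n):
        same = (labels == labels[i])
        same[i] = False
        if not same.any():            # singleton cluster: silhouette is 0
            scores.append(0.0)
            continue
        a = D[i, same].mean()         # mean intra-cluster distance
        b = min(D[i, labels == c].mean()
                for c in set(labels) if c != labels[i])  # nearest other cluster
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# two well-separated toy "clusters" of flattened event flows
X = np.vstack([np.zeros((5, 3)), 10 * np.ones((5, 3))])
labels = np.array([0] * 5 + [1] * 5)
score = silhouette_score(X, labels)   # close to 1 for this separation
```

Scanning candidate cluster counts and keeping the one with the highest mean score is the selection criterion referred to above.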
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/img8.png}
\caption{Clustering dendrogram (top) with per-cluster archetypical time series (bottom). DBA time series are indicated by bold lines. Underlying event flows used for the calculation are shown in thin lines. Selected events, window size of 28.}
\label{fig:event_flow_clusters}
\end{figure}
\paragraph{Querying for Events}
One of the applications of the cluster-based approach is that we can use the average event flow of a cluster (indicated by bold lines), to query for front pages that exhibit a similar pattern. This allows us to search for all the front pages in a particular newspaper that exhibit a sudden focus on a topic, as expressed by Cluster 5. Alternatively, we could also take a specific event, for example, the Oil Crisis in the 1970s, and look for similar events.
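A minimal sketch of such a query, assuming the newspaper signal is a 1-D entropy series and using Euclidean distance in place of DTW; the planted template and positions are toy data:

```python
import numpy as np

def query_by_archetype(signal, archetype, top_k=3):
    """Return start indices of the top_k windows of `signal` closest to
    `archetype`. The paper's archetypes are DBA averages of event flows,
    but any 1-D template works here."""
    w = len(archetype)
    dists = np.array([np.linalg.norm(signal[i:i + w] - archetype)
                      for i in range(len(signal) - w + 1)])
    return [int(i) for i in np.argsort(dists)[:top_k]]

# toy signal with one planted "event" shape at position 20
signal = np.zeros(50)
template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
signal[20:25] = template
hits = query_by_archetype(signal, template)
```

Replacing `template` with the bold average flow of, e.g., Cluster 5 turns this into the front-page query described above.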
\section{Conclusion}
We have presented an adaptation to the method introduced in ~\cite{barron_individuals_2018}, which allows us to capture how events impacted newspaper discourse, and by extension, reveal how the public's eye was drawn to specific events. We have shown how this method can be used to compare how newspapers responded to events and characterize events based on their impact on newspaper discourse.
The interaction between newspapers and the outside world is complex. Nonetheless, we managed to characterize ways in which front pages responded to world events. We can use these characterizations to define archetypical time series that can be used to query newspaper data to locate similar events. In this study, we have shown that there were events that impacted the news even though they are not remembered as having an impact, or vice versa. In future work, we will examine how these disjunctions between the public's memory of events and their impact on the news relate to the canonization of historical events.
Also, we found that some noteworthy events displayed no clear signal (cluster 3). For example, the Challenger space shuttle accident on January 28, 1986, or the coup in Ethiopia on December 13, 1960, did not elicit a clear response in the newspapers. For now, we can only speculate about why these events did not impact the information flow on the front pages of Dutch newspapers. One possibility is that the events did not capture public attention. Alternatively, it could be that the event was discussed in a more specialized section or that the general public only identified an event as newsworthy well after it occurred.
Closer examination shows that the earthquakes in Chile in May 1960 followed the event flow displayed in Cluster 1, which might seem surprising. However, in this case, there was also a summit with world leaders taking place that increasingly captured the public's attention. The earthquake disrupted this trend and suddenly introduced a new topic, thereby increasing the entropy.
This example also highlights one of the shortcomings of this approach. Events can overlap each other, move away from the front pages, and after a turn of events they might return to the front again. This movement throughout the papers is not yet captured with this approach. Future work will examine the relationships between topics on the front pages and how they propagated throughout the newspaper. Retention, for instance, could also be expressed by more in-depth reflections on the events in dedicated newspaper sections.
\section{Acknowledgments}
This study was part of NeiC's Nordic Digital Humanities Laboratory project (DeiC-AU1-L-000001), executed on the DeiC Type-1 HPC cluster. We acknowledge The National Library of the Netherlands (KB) for making their newspaper data available. Also, we express our gratitude to Simon DeDeo for his input during the early stages of this paper.
\section{Introduction}
We consider methods for variational inequality problems where only a \emph{random perturbation} of the operator is available. In such problems, we have a closed convex set $X\subset\mathbb{R}^d$, a distribution $\mathbf{P}$ over a sample space $\Xi$ and a measurable random operator $F:\Xi\times X\rightarrow\mathbb{R}^d$. We then define the expected operator
\begin{equation}\label{equation:expected:valued:objective}
T(x):=\mathbf{P} F(\cdot,x):=\int_\Xi F(\xi,x)\dist\mathbf{P}(\xi),\quad (x\in X),
\end{equation}
assuming it is well defined over $X$. It will be convenient to consider a common probability space $\Omega$ on which a probability measure $\mathbb{P}$ and the corresponding expectation $\mathbb{E}$ are defined. Precisely, from now on we set a random variable $\xi:\Omega\rightarrow \Xi$ with distribution $\mathbf{P}$ so that $\mathbf{P}(A)=\mathbb{P}(\xi\in A)$ and $\mathbf{P} g=\mathbb{E}[g(\xi)]$ for any measurable $A\subseteq\Xi$ and integrable random variable $g:\Xi\rightarrow\mathbb{R}$.\footnote{We will sometimes use $\xi\in\Xi$ to denote a point in the sample space if no confusion arises.} Assuming \eqref{equation:expected:valued:objective}, the \emph{stochastic variational inequality} problem (SVI), denoted as VI$(T,X),$ is the problem of finding a $x^*\in X$ such that
\begin{eqnarray}\label{problem:SVI:intro}
\langle T(x^*),x-x^*\rangle\ge0,\quad\forall x\in X.
\end{eqnarray}
The solution set of \eqref{equation:expected:valued:objective}-\eqref{problem:SVI:intro} will be denoted by $X^*$.
An important property of SVIs is that they generalize \emph{stochastic optimization} (SP) in the sense that they include many stochastic variational problems for which $T$ is \emph{not} integrable. Indeed, if $T=\nabla f$ for some smooth function $f:X\rightarrow\mathbb{R}$ satisfying $f=\mathbb{E}[G(\xi,\cdot)]$ for some measurable function $G:\Xi\times X\rightarrow\mathbb{R}$, then VI$(T,X)$ is the first order necessary condition of the SP problem $\min_X f$. Additionally, both problems are equivalent if $f$ is convex. Notable examples of SVIs which are not related to SPs are the \emph{stochastic saddle-point problem} and the \emph{stochastic Nash equilibrium} problem. From another perspective, SVIs also generalize \emph{stochastic systems of equations} in the sense that they include geometric \emph{constraints} related to optimality conditions. Indeed, if $X=\mathbb{R}^d$ then the problem $T(x)=0$ is equivalent to VI$(T,\mathbb{R}^d)$. See e.g. \cite{facchinei:pang2003,jud:nem:tauvel2011}.
The challenging aspect of SVIs, when compared to deterministic variational inequalities, is that the expectation \eqref{equation:expected:valued:objective} cannot be evaluated.\footnote{Typical reasons are: a sample space with high dimension requiring Monte Carlo evaluation, no knowledge of the distribution $\mathbf{P}$ or, even worse, no knowledge of a closed form for $F$.} However, a practical assumption is that the decision maker has access to samples drawn from the distribution $\mathbf{P}$. Under this assumption, a popular methodology to solve \eqref{equation:expected:valued:objective}-\eqref{problem:SVI:intro} is the \emph{Stochastic Approximation} (SA) method. In this approach, the samples are accessed in an interior and online fashion: a deterministic version of an algorithm is chosen and a fresh independent identically distributed (i.i.d.) sample is used whenever the algorithm requires operator estimation at the current or previous iterates \cite{nem:jud:lan:shapiro2009}. In this setting, the mechanism to access $F$ via samples of $\mathbf{P}$ is usually named a \emph{stochastic oracle} (SO). Precisely, given an input $x\in X$ and an i.i.d. sample $\{\xi_j\}$ drawn from $\mathbf{P}$ (also independent of $x$), the SO outputs an unbiased sequence $\{F(\xi_j,x)\}$, that is, satisfying $\mathbb{E}[F(\xi_j,x)]=T(x)$ for all $j$. A different methodology is the \emph{Sample Average Approximation} (SAA) method where an external and offline sample is acquired to approximate the SVI \cite{shapiro:dent:rus2009}. The approximated problem is then solved by a deterministic algorithm of preferred choice.
The SA methodology was first proposed by Robbins and Monro in the seminal paper \cite{robbins:monro1951} for the problem $\min_{x\in\mathbb{R}^d}\{f(x):=\mathbb{E}[G(\xi,x)]\}$ for a random smooth convex function $G:\Xi\times\mathbb{R}^d\rightarrow\mathbb{R}$, that is, \eqref{equation:expected:valued:objective}-\eqref{problem:SVI:intro} with $F(\xi,\cdot):=\nabla G(\xi,\cdot)$. Their method takes the form
\begin{equation}\label{equation:stochastic:gradient}
x^{k+1}:=x^k-\alpha_k\nabla G(\xi^k,x^k),
\end{equation}
given an i.i.d. sample sequence $\{\xi^k\}$ and positive stepsize sequence $\{\alpha_k\}$. This was the first instance of the now popular \emph{stochastic gradient method}. This methodology was then extensively explored in numerous works spanning the communities of statistics and stochastic approximation, stochastic optimization and machine learning (see e.g. \cite{bach:moulines2011, bottou:curtis:nocedal2016} and references therein). See also \cite{kushner:yin2003} for other problems where the SA procedure is relevant (such as online optimization, repeated games, queueing theory, signal processing, and control theory). More recently, the SA methodology was also analyzed for SVIs e.g. in \cite{jiang:xu2008,jud:nem:tauvel2011,koshal:nedic:shanbhag2013, yousefian:nedic:shanbhag2014,kannan:shanbhag2014,chen:lan:ouyang2017,
wang:bertsekas2015,iusem:jofre:thompson2015,iusem:jofre:oliveira:thompson2017,
balamurugan:bach2016,bianchi2016}. We refer also to \cite{hiriart-urruty1976}.
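As a toy illustration of scheme \eqref{equation:stochastic:gradient}, consider the hypothetical instance $G(\xi,x)=\frac{1}{2}(x-\xi)^2$ with $\xi\sim N(\mu,1)$, so that $\nabla G(\xi,x)=x-\xi$ and the unique minimizer of $f$ is $x^*=\mu$. With the small stepsize policy $\alpha_k=1/k$, the iteration reduces exactly to the running sample mean; the constants below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 3.0   # hypothetical: x* = E[xi] = mu

def robbins_monro(x0, n_iter=20000):
    x = x0
    for k in range(1, n_iter + 1):
        xi = rng.normal(mu, 1.0)      # fresh i.i.d. sample at every iteration
        alpha = 1.0 / k               # SSP: sum alpha_k = inf, sum alpha_k^2 < inf
        x = x - alpha * (x - xi)      # x^{k+1} = x^k - alpha_k * grad G(xi^k, x^k)
    return x

x_hat = robbins_monro(0.0)            # converges to mu almost surely
```

The vanishing stepsizes are doing double duty here: they drive the fixed-point iteration and simultaneously average out the oracle noise, which is precisely the tension discussed below.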
The estimation in SA methods is measured by the \emph{oracle error}. This is the map $\epsilon:\Xi\times X\rightarrow\mathbb{R}^d$ defined by
\begin{equation}
\epsilon(\xi,x):=F(\xi,x)-T(x),\quad\quad(\xi\in\Xi,x\in X).\label{equation:oracle:error}
\end{equation}
For $p\ge2$, the \emph{oracle error's $p$-moment} function is defined by
\begin{equation}
\sigma_p(x):=\sqrt[p]{\mathbb{E}\left[\Vert\epsilon(\xi,x)\Vert^p\right]}\quad\quad(x\in X).\label{equation:oracle:error:variance}
\end{equation}
In the deterministic case, assumptions on the operator $T$ provide local surrogate models to establish the convergence of methods which solve VI$(T,X)$. In order to define and analyze SA methods, assumptions on the variance $\sigma(\cdot)^2:=\sigma_2(\cdot)^2$ (or even higher order moments) are as important as assumptions on $T$. This is because local surrogate models also need the estimation of $T$ from the SO. In that respect, we will consider Lemma \ref{lemma:holder:continuity:mean:std:dev} which is a consequence of the following assumption.
\begin{assumption}[Heavy-tailed H\"older continuous operators]\label{assumption:holder:continuity}
Consider definition \eqref{equation:expected:valued:objective}. There exist $\delta\in(0,1]$ and nonnegative random variable $\mathsf{L}:\Xi\rightarrow\mathbb{R}_+$ such that, for almost every $\xi\in\Xi$, $\mathsf{L}(\xi)\ge1$ and, for all $x,y\in X$,
$$
\Vert F(\xi,x)-F(\xi,y)\Vert\le\mathsf{L}(\xi)\Vert x-y\Vert^\delta.
$$
Define $\mathsf{a}:=1$ if $X$ is compact and $\mathsf{a}:=2$ for a general $X$. We assume there exist $x_*\in X$ and $p\ge2$ such that $\mathbf{P}\left[\Vert F(\cdot,x_*)\Vert^{\mathsf{a}p}\right]<\infty$ and $\mathbf{P}\left[\mathsf{L}(\cdot)^{\mathsf{a}p}\right]<\infty$. We define $L:=\mathbf{P}\mathsf{L}(\cdot)$ and $L_q:=\sqrt[q]{\mathbf{P}[\mathsf{L}(\cdot)^{q}]}+L$ for any $q>0$.
\end{assumption}
\begin{lemma}[H\"older continuity of the mean and the standard deviation]\label{lemma:holder:continuity:mean:std:dev}
Consider definitions \eqref{equation:oracle:error}-\eqref{equation:oracle:error:variance}, suppose Assumption \ref{assumption:holder:continuity} holds and take $q\in[p,2p]$ such that the integrability conditions of Assumption \ref{assumption:holder:continuity} are satisfied. Then $T$ is $(L,\delta)$-H\"older continuous on $X$ and $\sigma_q(\cdot)$ is $(L_q,\delta)$-H\"older continuous\footnote{We say $T$ is $(L,\delta)$-H\"older continuous if $\Vert T(x)-T(y)\Vert\le L\Vert x-y\Vert^\delta$ for all $x,y\in X$.} on $X$ with respect to the norm $\Vert\cdot\Vert$.
\end{lemma}
See the Appendix for a simple proof. In this work we shall only consider the Euclidean norm $\Vert\cdot\Vert$. Assumption \ref{assumption:holder:continuity} is standard in stochastic optimization \cite{shapiro:dent:rus2009}. It is much less standard in the literature of SA methods where, typically, a \emph{uniform bound on the oracle's variance} is assumed, i.e., the existence of some $\sigma>0$ such that
$
\sup_{x\in X}\sigma(x)^2\le\sigma^2.
$
Unless the stochastic error in \eqref{equation:oracle:error} is \emph{independent} of\footnote{This is the case of the additive noise model which is a reasonable assumption in many problem instances.} $x\in X$, such a \emph{global} uniform bound implicitly assumes $X$ is compact. Moreover, even if such a bound holds, it does not provide sharp complexity estimates since typically $\sigma(x^*)^2\ll\sigma^2$ for $x^*\in X^*$ (see Example 3.9 in \cite{iusem:jofre:oliveira:thompson2017}). Assumption \ref{assumption:holder:continuity} does not require compactness of $X$ (including unconstrained quadratic SPs and affine SVIs with a \emph{random} matrix). Moreover, we will show that our convergence bounds depend only on the \emph{local} variances $\sigma(x^*)^2$, or the corresponding fourth moments, at solutions $x^*\in X^*$ (see Theorem \ref{thm:rate:convergence} and Section \ref{section:conclusion}). See e.g. \cite{bach2014} where adaptive methods are proposed to exploit \emph{local} strong convexity modulus in stochastic optimization.
From a practical point of view, our statistical analysis will be built upon the standard assumption of an \emph{unbiased oracle with i.i.d. sampling} (UO). In the rest of the paper, it will be convenient to define the following quantities associated to an i.i.d. sample $\xi^N:=\{\xi_j\}_{j=1}^N$ drawn from $\mathbf{P}$. Recall definitions \eqref{equation:expected:valued:objective} and \eqref{equation:oracle:error}. We define the \emph{empirical mean operator} and the \emph{oracle's empirical mean error} associated to $\xi^N$, respectively, by
\begin{eqnarray}
\widehat F(\xi^N,x):=\frac{1}{N}\sum_{j=1}^NF(\xi_j,x),\quad\quad\widehat\epsilon(\xi^N,x):=\frac{1}{N}\sum_{j=1}^N\epsilon(\xi_j,x),\quad\quad (x\in X).\label{equation:empirical:mean:operator:&:error}
\end{eqnarray}
\subsection{Related work, proposed methods and contributions}\label{section:related:proposed:work}
The performance of first-order methods for optimization and variational inequalities strongly depends on the \emph{stepsize sequence}. As an example, given a smooth convex function $f:\mathbb{R}^d\rightarrow\mathbb{R}$, a classical method to solve $\min_{\mathbb{R}^d}f$ is the gradient method $x^{k+1}:=x^k-\alpha_k\nabla f(x^k)$, where $\{\alpha_k\}$ is a positive stepsize sequence. One choice of stepsizes that guarantees its convergence is the \emph{small stepsize policy} (SSP): any stepsize sequence satisfying $\sum_k\alpha_k=\infty$ and $\sum_k\alpha_k^2<\infty$, a typical choice being $\alpha_k=\mathcal{O}(k^{-1})$. If $L$ is the Lipschitz constant of $\nabla f(\cdot)$, the \emph{constant stepsize policy} (CSP) $\alpha_k=\mathcal{O}(\frac{1}{L})$ has a provable accelerated convergence rate in comparison to the SSP since its stepsize sequence does not vanish. However, the latter has the advantage of \emph{not requiring an estimate of $L$} and, in this sense, it is a more \emph{robust} and practical policy since $L$ is rarely known. A significant improvement is the use of \emph{line search schemes} which build endogenous \emph{adaptive} stepsizes bounded away from zero at the expense of a few more gradient evaluations. As an example, given iterate $x^k$, Armijo's line search \cite{armijo1966} defines the stepsize $\alpha_k$ as the maximum $\alpha\in\{\theta^\ell\hat\alpha:\ell\in\{0\}\cup\mathbb{N}\}$ such that
\begin{equation}\label{equation:armijo:rule}
f(x^k(\alpha))-f(x^k)\le\lambda\langle\nabla f(x^k),x^k(\alpha)-x^k\rangle,
\end{equation}
where $\hat\alpha\in(0,1]$, $\theta,\lambda\in(0,1)$ are exogenous parameters and, for all $\alpha>0$, $x^k(\alpha):=x^k-\alpha\nabla f(x^k)$. The next iterate is then defined by $x^{k+1}:=x^k(\alpha_k)$. This policy enjoys the accelerated convergence of the CSP and the robustness of the SSP, i.e., it does not require knowledge of $L$. Many variants of line search schemes were developed and extended to include other variational problems. For variational inequalities, two notable ones are the line search schemes of Khobotov \cite{khobotov1987} and of Iusem-Svaiter \cite{iusem:svaiter1997}.
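Armijo's rule \eqref{equation:armijo:rule} can be sketched as follows. This is a toy instance (a quadratic whose Lipschitz constant $L$ is, deliberately, never given to the method); the parameter values $\hat\alpha$, $\theta$, $\lambda$ are illustrative:

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha_hat=1.0, theta=0.5, lam=1e-4, max_backtracks=60):
    """One step of Armijo's rule: take the largest alpha in {theta^l * alpha_hat}
    with f(x(alpha)) - f(x) <= lam * <grad f(x), x(alpha) - x>,
    where x(alpha) = x - alpha * grad f(x)."""
    g = grad_f(x)
    alpha = alpha_hat
    for _ in range(max_backtracks):
        x_new = x - alpha * g
        if f(x_new) - f(x) <= lam * (g @ (x_new - x)):
            return x_new
        alpha *= theta                # backtrack
    return x - alpha * g

# hypothetical smooth convex instance; L is unknown to the method
L = 10.0
f = lambda x: 0.5 * L * (x @ x)
grad_f = lambda x: L * x

x = np.array([1.0, -2.0])
for _ in range(100):
    x = armijo_step(f, grad_f, x)     # converges to the minimizer 0
```

The accepted stepsize settles at a value of order $1/L$ without $L$ ever being supplied, which is exactly the robustness property emphasized in the text.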
As expected, the stepsize policy is also decisive for the performance of SA methods. In the seminal work \cite{robbins:monro1951} and in later developments, it is shown that the SSP is sufficient for the convergence of the SG method \eqref{equation:stochastic:gradient}. The nontrivial aspect here is that the SSP has to deal not only with the convergence of the sequence but also with progressively \emph{reducing the variance} of the oracle's error trajectory $\{\nabla G(\xi^k,x^k)-\nabla f(x^k)\}_{k\in\mathbb{N}}$. Such additional challenge in SA methods is still an active research subject and it has received a burst of interest in the last decade motivated by large scale statistical machine learning applications.\footnote{In this challenging setting, theoretical and practical experience has shown that first order methods are competitive and, sometimes, the best known methods.}
The performance of a SA method can be measured by its \emph{iteration} and \emph{oracle complexities} given a tolerance $\epsilon>0$ with respect to a suitable metric. The first is the total number of iterations, a measure for the optimization error, while the second is the total number of samples and oracle calls, a measure for the estimation error. As an example, statistical lower bounds \cite{agarwal:barlett:ravikumar:wainwright2012} show that the class of smooth convex functions has an optimal oracle complexity of $\mathcal{O}(\epsilon^{-2})$ in terms of the optimality gap. A fundamental improvement with respect to estimation error was Polyak-Ruppert's \emph{iterate averaging} scheme \cite{polyak1991,polyak:juditsky1992,ruppert1988,nem:rudin1978}. This scheme replaces the SSP by \emph{longer stepsizes} $\alpha_k=\mathcal{O}(k^{-\frac{1}{2}})$ with a subsequent \emph{final average} of the iterates using the stepsizes as weights (this is sometimes called \emph{ergodic average}). \emph{If one oracle call per iteration is postulated}, such scheme obtains a convergence rate of $\mathcal{O}(k^{-\frac{1}{2}})$ with optimal iteration and oracle complexities of $\mathcal{O}(\epsilon^{-2})$ on the class of smooth convex functions. This is also the size of the final ergodic average, a measure of the additional \emph{averaging effort} implicitly required in iterate averaging schemes. Such methods, hence, are efficient in terms of oracle complexity. Iterate averaging was then extensively explored (see e.g. \cite{juditsky:nazin:tsybakov:vayatis2005,juditsky:rigollet:tsybakov2008,nesterov:vial2008,nesterov2009,%
nem:jud:lan:shapiro2009,xiao2010,jud:nem:tauvel2011}). The important work \cite{nem:jud:lan:shapiro2009} studies the \emph{robustness} of iterate averaging in SA methods and shows that such schemes can outperform the SAA approach on relevant convex problems. On the strongly convex class, \cite{bach:moulines2011} gives a detailed non-asymptotic \emph{robust} analysis of Polyak-Ruppert averaging scheme. It theoretically and numerically justifies the importance of iterate averaging in handling the oracle's error variance.
Although iterate averaging methods obtain optimal oracle complexity, a remaining question is whether \emph{improved iteration complexity} with (near) \emph{optimal oracle complexity} can be achieved. In this sense, a recent and rapidly growing line of research proposes SA methods with \emph{variance reduction} using \emph{more than one oracle call per iteration} to alleviate the role of the stepsize in reducing variance. Two representative examples include \emph{gradient aggregation methods} and \emph{dynamic sampling methods} (see \cite{bottou:curtis:nocedal2016}, Section 5). These methods can use a \emph{constant stepsize policy} and thus obtain an accelerated rate of convergence when compared to the iterate averaging scheme. Designed for finitely supported distributions with bounded data, gradient aggregation methods reduce the variance by combining in a specific manner occasional exact computation (or storage) of gradients and occasional iterate averaging (or randomization schemes) with frequent gradient sampling. See e.g. \cite{bottou:curtis:nocedal2016} and references therein. Designed to solve problems with an arbitrary distribution and online data acquisition (as is the case in many stochastic and simulation optimization problems based on Monte Carlo methods), dynamic sampling methods reduce variance by estimating the gradient via an \emph{empirical average} associated to a sample whose size (\emph{mini-batch}) is increased at every iteration. See e.g. \cite{deng:ferris2009,byrd:chiny:nocedal:wu2012,%
friedlander:schmidt2013,shanbhag:blanchet2015,
ghadimi:lan2016,iusem:jofre:oliveira:thompson2017} and references therein. However, an essential point is whether such increased computational effort per iteration is worthwhile. A nice fact is that current gradient aggregation and dynamic sampling methods achieve, up to constants, the order of the deterministic optimal iteration complexity with the \emph{same} (near) optimal oracle complexity and averaging effort of standard iterate averaging schemes. \emph{In this sense}, gradient aggregation and dynamic sampling methods can be a more efficient option than iterate averaging.
We now comment on the main purpose of this work. All variance reduction SA methods mentioned above still use a constant stepsize policy $\alpha_k=\mathcal{O}(\frac{1}{L})$ \emph{assuming knowledge of the Lipschitz constant}. Hence, although they improve the convergence of SA methods, iterate averaging with $\alpha_k=\mathcal{O}(k^{-\frac{1}{2}})$ is still a more robust policy when $L$ or other needed parameters are unknown or poorly estimated \cite{nem:jud:lan:shapiro2009, bach:moulines2011}. In this setting, current variance reduction methods may be impractical. An important question is: can \emph{faster rates of convergence} with (near) \emph{optimal oracle complexity} be accomplished by \emph{robust variance reduction} methods? By robust variance reduction we mean the use of adaptive schemes that avoid exogenous estimation of $L$ and produce a stepsize sequence bounded away from zero. Motivated by line search schemes in deterministic methods, our aim is to propose line search schemes for a class of dynamic sampled SA methods (DS-SA). In this work we focus on SVIs and pursue an improved complexity analysis of stochastic optimization problems in future research. To the best of our knowledge, line search schemes for SVIs are currently nonexistent. Even for stochastic optimization, considering that Robbins-Monro's seminal work was published in 1951, it seems that only very few existing works treat adaptive stepsize search schemes for SA methods with \emph{stepsizes bounded away from zero} \cite{maclaurin:duvenaud:adams2015, mahsereci:hennig2017, schaul:zhang:lecun2013, masse:ollivier2015, tan:ma:dai:qian2016, wardi1990, krejic:jerinkic2015, krejic:luzanin:nikolovski:stojkovska2015, krejic:luzanin:ovcin:stojkovska2015}. Still, some of them only suggest a scheme without a provable convergence theory \cite{maclaurin:duvenaud:adams2015,mahsereci:hennig2017}.
For those which do guarantee convergence, some still require knowledge of the Lipschitz constant or use a small stepsize policy \cite{schaul:zhang:lecun2013, masse:ollivier2015,tan:ma:dai:qian2016} and, hence, are not robust variance reduction methods. Finally, the analyses in \cite{wardi1990, krejic:jerinkic2015, krejic:luzanin:nikolovski:stojkovska2015, krejic:luzanin:ovcin:stojkovska2015} are too restrictive since they require much more than the standard assumption of an UO used in stochastic approximation and do not give complexity estimates. In all the mentioned works, uniform boundedness assumptions are made (either on the set or on the oracle's variance) and no convergence rates or oracle complexities are given (hence their efficiency cannot be compared to the SSP). Differently, our oracle assumptions are standard and, in this sense, our proposals are also novel for stochastic optimization problems viewed as particular cases of SVIs. Moreover, we provide rates of convergence and oracle complexity and do not assume uniform boundedness.
Finally, before presenting our methods and results, it will be very instructive to briefly discuss why the analysis of line search schemes in SA methods are considerably \emph{different} and intrinsically \emph{more difficult} than in the deterministic case. This may explain the absence of a satisfying
convergence theory of SA methods with line search schemes which: (1) do not use knowledge of the Lipschitz constant, (2) obtain stepsizes bounded away from zero and (3) only assume an UO. Since \cite{robbins:monro1951, robbins:siegmund1971}, it is well known that the analysis of SA methods strongly relies on \emph{martingale processes}. From a generic perspective, such martingale-like property is obtained by:
\begin{itemize}
\item[(i)] \textsf{Optimization process}: the deterministic iterative algorithm satisfies a fixed-point contraction or Lyapunov principle.\footnote{This is usually obtained by properties like convexity of the objective, smoothness of a nonconvex objective and monotonicity or nonexpansion of an operator.}
\item[(ii)] \textsf{Estimation process}: standard stochastic approximation consists in using a \emph{fresh i.i.d. sample} update at every iteration.
\item[(iii)] \textsf{Exogenous stepsize policies}: for instance, the SSP $\alpha_k=\mathcal{O}(k^{-1})$, longer stepsizes $\alpha_k=\mathcal{O}(k^{-\frac{1}{2}})$ with iterate averaging and the CSP $\alpha_k=\mathcal{O}(\frac{1}{L})$. We also include adaptive vanishing stepsizes which achieve better tuned constants but still require exogenous parameters (see e.g. \cite{yousefian:nedic:shanbhag2016}).
\end{itemize}
As an example, consider the stochastic gradient method \eqref{equation:stochastic:gradient} with stepsizes satisfying $0<\sup_k\alpha_k<\frac{1}{2L}$. Given a solution $x^*$, it is possible to show that
$$
\Vert x^{k+1}-x^*\Vert^2\le\Vert x^{k}-x^*\Vert^2-\left(\frac{1}{2}-L\alpha_k\right)\frac{\alpha_k^2}{2}\Vert\nabla f(x^k)\Vert^2+2\alpha_k\langle\epsilon^k,x^*-x^k\rangle+\alpha_k^2\Vert\epsilon^k\Vert^2,
$$
where $\epsilon^k:=\nabla G(\xi^k,x^k)-\nabla f(x^k)$ is the oracle error at the $k$-th iterate. If $\mathcal{F}_k:=\sigma(\xi^0,\ldots,\xi^{k-1})$ denotes the $\sigma$-algebra encoding the information up to the iteration $k$, then the above relation and the fact that $\{\xi^i\}_{i=0}^\infty$ is an i.i.d. sequence imply that the iterates' error sequence $\{\Vert x^k-x^*\Vert^2\}$ defines a ``perturbed'' \emph{supermartingale} sequence adapted to $\{\mathcal{F}_k\}$ (see Section \ref{section:preliminaries}, Theorem \ref{thm:rob}).\footnote{Robbins and Monro \cite{robbins:monro1951} called this an ``almost'' supermartingale sequence. In the classical terminology from the deterministic optimization community, this would correspond to a stochastically adapted version of quasi-F\'ejer sequences.} This sequence is defined over the \emph{iteration time-scale} and accounts for the optimization error. On the other hand, the oracle's error sequence $\{\epsilon^k\}$ defines an \emph{exact martingale difference} adapted to $\{\mathcal{F}_k\}$, i.e., $\mathbb{E}[\epsilon^k|\mathcal{F}_k]=0$ for all $k$. This sequence is defined over the \emph{estimation time-scale} and accounts for the gradient estimation error.
If one considers \emph{adaptive endogenous stepsizes} and uses variance reduction, a natural choice would be a SA version of Armijo's rule \eqref{equation:armijo:rule}: choose $\alpha_k$ as the maximum $\alpha\in\{\theta^\ell\hat\alpha:\ell\in\{0\}\cup\mathbb{N}\}$ such that
\begin{equation}\label{equation:armijo:DS-SA}
\widehat G\left(\xi^k,x^k(\alpha)\right)-\widehat G(\xi^k,x^k)\le\lambda\left\langle\nabla \widehat G(\xi^k,x^k),x^k(\alpha)-x^k\right\rangle,
\end{equation}
where $\hat\alpha\in(0,1]$, $\theta,\lambda\in(0,1)$, $\xi^k:=\{\xi^k_j\}_{j=1}^{N_k}$ is an i.i.d. sample from $\mathbf{P}$ such that $N_k\rightarrow\infty$ and, for all $\alpha>0$, $x^k(\alpha):=x^k-\alpha\nabla\widehat G(\xi^k,x^k)$. In the above, $\widehat G(\xi^k,x^k)$ and $\nabla \widehat G(\xi^k,x^k)$ denote, respectively, the empirical averages of $G(\cdot,x^k)$ and $\nabla G(\cdot,x^k)$ with respect to the sample $\xi^k$. The challenging aspect of the above scheme is highlighted:
\begin{quote}
\textsf{(A):} DS-SA \emph{line search schemes intrinsically introduce nonmartingale-like dependencies even when using i.i.d. sampling.
}
\end{quote}
To see this, first note that the \emph{backtracking} scheme \eqref{equation:armijo:DS-SA} examines the variation of $\widehat G(\xi^k,\cdot)$ along a discrete path $\alpha\mapsto x^k(\alpha)$ so that the chosen stepsize $\alpha_k$ and accepted iterate $x^{k+1}:=x^{k}(\alpha_k)$ are both measurable functions of $(\xi^k,x^k)$. Second, by using the contraction principle produced by \eqref{equation:armijo:DS-SA}, we are forced to estimate the oracle error $\widehat\epsilon(\xi^k,x^{k+1})=\widehat G(\xi^k,x^{k+1})- f(x^{k+1})$ which \emph{is not a martingale difference}: it is a measurable function of the \emph{coupled} variables $\xi^k$ and $x^{k+1}$ due to backtracking. Even when $\xi^k$ is an i.i.d. sample of $\mathbf{P}$, this coupling is inevitable and, hence, the desired convergence
\begin{equation}\label{equation:postulated:SLLN}
\lim_{k\rightarrow\infty}\widehat\epsilon(\xi^k,x^{k+1})=\lim_{k\rightarrow\infty}\sum_{j=1}^{N_k}\frac{G(\xi^k_j,x^{k+1})-f(x^{k+1})}{N_k}=0,
\end{equation}
either in the almost sure sense or in distribution, \emph{does not} follow from the standard Strong Law of Large Numbers or the Central Limit Theorem: the above sum \emph{is not a sum of independent random variables}. The nontrivial aspect here is that an SA method with line search has \emph{two} statistical estimation processes: the gradient estimation of item (ii) above \emph{and} the Lipschitz constant estimation replacing (iii). In this sense, DS-SA methods with line search schemes are statistically different from standard SA methods. We finally remark that in all the works \cite{wardi1990,krejic:luzanin:ovcin:stojkovska2015,krejic:jerinkic2015,
krejic:luzanin:nikolovski:stojkovska2015} the convergence \eqref{equation:postulated:SLLN} is \emph{postulated}, putting aside the challenging aspect in \textsf{(A)}. Thus, their assumptions go far beyond the usual assumption of an UO. Errors of the type $\widehat{\epsilon}(\xi^k,x^{k+1})$ will be referred to as \emph{correlated errors}.
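To make the coupling in \textsf{(A)} concrete, the following sketch runs the backtracking rule \eqref{equation:armijo:DS-SA} on a toy SP with $G(\xi,y)=\frac{1}{2}\Vert y-\xi\Vert^2$ and $\mathbf{P}=N(0,I)$; all numeric parameters and the sample-size schedule are hypothetical choices for illustration. Note that the accepted stepsize and the next iterate are measurable functions of the \emph{same} sample used to test the Armijo condition.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, theta, alpha_hat = 0.9, 0.5, 1.0
x = np.array([5.0, -3.0])
for k in range(50):
    N_k = 10 * (k + 1)                       # dynamic sampling: N_k -> infinity
    xi = rng.normal(size=(N_k, 2))           # i.i.d. sample from P = N(0, I)
    # G(xi, y) = 0.5*||y - xi||^2, so f(y) = 0.5*||y||^2 + const and x* = 0
    g_hat = lambda y: 0.5 * np.mean(np.sum((y - xi)**2, axis=1))
    grad = x - xi.mean(axis=0)               # gradient of the empirical average
    alpha = alpha_hat
    while True:                              # backtracking on the SAME sample xi:
        x_new = x - alpha * grad             # alpha_k and x^{k+1} are coupled to xi^k
        if g_hat(x_new) - g_hat(x) <= lam * grad @ (x_new - x):
            break
        alpha *= theta
    x = x_new
# x approaches the solution x* = 0 as the sample sizes grow
```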
In this work, we propose \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} for Lipschitz continuous operators and \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} for general H\"older continuous operators to solve SVIs via the SA methodology. These methods use dynamic sampling and line search schemes to cope with the absence of the Lipschitz constant or the parameters of H\"older continuity. Our contributions are summarized in the following.
\begin{algorithm}
\caption{DS-SA-extragradient method with a DS-SA line search scheme}\label{algorithm:DSSA:extragradient}
\begin{algorithmic}[1]
\scriptsize
\STATE INITIALIZATION: Choose the initial iterate $x^0\in\mathbb{R}^d$, parameters
$\hat\alpha,\theta\in(0,1]$ and $\lambda\in\left(0,\frac{1}{\sqrt{6}}\right)$ and the sample rate $\{N_k\}$.
\STATE ITERATIVE STEP: Given iterate $x^k$, generate sample $\xi^k:=\{\xi^k_{j}\}_{j\in [N_k]}$ from $\mathbf{P}$ and compute
\begin{equation}\label{equation:empirical:average:DSSA:extragradient}
\widehat F(\xi^k,x^k):=N_k^{-1}\sum_{j=1}^{N_k}F(\xi_j^k,x^k).
\end{equation}
If $x^k=\Pi\left[x^k-\hat\alpha\widehat F(\xi^k,x^k)\right]$ stop. Otherwise,
\textsf{LINE SEARCH RULE}: define $\alpha_k$ as the
maximum $\alpha\in\{\theta^\ell\hat\alpha:\ell\in\{0\}\cup\mathbb{N}\}$ such that
\begin{equation}\label{algo:armijo:rule}
\alpha\left\Vert\widehat F\left(\xi^k,z^k(\alpha)\right)-\widehat F\left(\xi^k,x^k\right)\right\Vert
\le\lambda\Vert z^k(\alpha)-x^k\Vert,
\end{equation}
where, for all $\alpha>0$, compute
$
z^k(\alpha):=\Pi\left[x^k-\alpha\widehat F(\xi^k,x^k)\right]
$
and
$
\widehat F\left(\xi^k,z^k(\alpha)\right):=N_k^{-1}\sum_{j=1}^{N_k}F(\xi_j^k,z^k(\alpha)).
$
Generate sample $\eta^k:=\{\eta^k_{j}\}_{j\in [N_k]}$ from $\mathbf{P}$ and set
\begin{eqnarray}
z^k&=&\Pi\left[x^k-\alpha_k\widehat F(\xi^k,x^k)\right],\label{algo:extragradient:armijo1}\\
x^{k+1}&=&\Pi\left[x^k-\alpha_k\widehat F(\eta^k,z^k)\right].\label{algo:extragradient:armijo2}
\end{eqnarray}
\end{algorithmic}
\end{algorithm}
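A minimal numerical sketch of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} on a hypothetical monotone affine operator $F(\xi,y)=Ay+b+\xi$ over a box is given below; the matrix, box, noise level and parameters are illustrative choices, not prescribed by the method.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0], [-2.0, 1.0]])       # monotone: symmetric part is I
b = np.array([1.0, -1.0])
proj = lambda y: np.clip(y, -10.0, 10.0)      # projection onto the box X = [-10,10]^2
F_hat = lambda xi, y: A @ y + b + xi.mean(axis=0)   # empirical average of F(xi, y)

lam, theta, alpha_hat = 0.35, 0.5, 1.0        # lam < 1/sqrt(6)
x = np.array([8.0, 8.0])
for k in range(60):
    N_k = 20 * (k + 1)                        # dynamic sample rate
    xi = rng.normal(scale=0.1, size=(N_k, 2))
    Fx = F_hat(xi, x)
    alpha = alpha_hat
    while True:                               # line search rule of the algorithm
        z = proj(x - alpha * Fx)
        if alpha * np.linalg.norm(F_hat(xi, z) - Fx) <= lam * np.linalg.norm(z - x):
            break
        alpha *= theta
    eta = rng.normal(scale=0.1, size=(N_k, 2))  # fresh sample for the second step
    x = proj(x - alpha * F_hat(eta, z))
# x approximates the solution of VI(T, X) with T(y) = A y + b, i.e. x* = -A^{-1} b
```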
\begin{algorithm}
\caption{DS-SA-hyperplane method}\label{algorithm:DSSA:hyperplane}
\begin{algorithmic}[1]
\scriptsize
\STATE INITIALIZATION: Choose the initial iterate $x^0\in\mathbb{R}^d$, parameters
$\tilde\beta\ge\hat\beta>0$, $\hat\alpha\in(0,1]$ and $\lambda,\theta\in(0,1)$, the
step sequence $\{\beta_k\}\subset[\hat\beta,\tilde\beta]$ and the sample rate $\{N_k\}$.
\STATE ITERATIVE STEP: Given iterate $x^k$, generate
sample $\xi^k:=\{\xi^k_{j}\}_{j=1}^{N_{k}}$ from $\mathbf{P}$ and compute
$
\widehat F(\xi^k,x^k):=N_k^{-1}\sum_{j=1}^{N_k}F(\xi_j^k,x^k).
$
If $x^k=\Pi\left[x^k-\beta_k\widehat F(\xi^k,x^k)\right]$ stop. Otherwise,
\textsf{LINE SEARCH RULE:} define $\alpha_k$ as the maximum $\alpha\in\{\theta^\ell\hat\alpha:\ell\in\{0\}\cup\mathbb{N}\}$ such that
\begin{equation}\label{algo:armijo:rule2}
\left\langle\widehat F\left(\xi^k,\bar z^k(\alpha)\right),x^k-\Pi(g^k)\right\rangle
\ge\frac{\lambda}{\beta_k}\Vert x^k-\Pi(g^k)\Vert^2,
\end{equation}
where $g^k:=x^k-\beta_k\widehat F(\xi^k,x^k)$ and for all $\alpha>0$, compute
$
\overline z^k(\alpha):=\alpha\Pi(g^k)+(1-\alpha)x^k
$
and
$
\widehat F\left(\xi^k,\overline z^k(\alpha)\right):=N_k^{-1}\sum_{j=1}^{N_k}F(\xi_j^k,\overline z^k(\alpha)).
$
Set
\begin{eqnarray}
z^k&:=&\alpha_k\Pi\left[x^k-\beta_k\widehat F(\xi^k,x^k)\right]+(1-\alpha_k)x^k,\label{algo:hyperplane1}\\
x^{k+1}&:=&\Pi\left[x^k-\gamma_k\widehat F(\xi^k,z^k)\right],\label{algo:hyperplane2}
\end{eqnarray}
with
$
\gamma_k:=\left\langle\widehat F(\xi^k,z^k),x^k-z^k\right\rangle\cdot\Vert\widehat F(\xi^k,z^k)\Vert^{-2}.
$
\end{algorithmic}
\end{algorithm}
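Analogously, a minimal sketch of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} on a hypothetical operator $F(\xi,y)=y-\xi$ with $\xi$ centered at the solution follows; the target, box, constant step $\beta_k\equiv1$ and remaining parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
x_target = np.array([2.0, -1.0])              # solution of T(y) = y - x_target
proj = lambda y: np.clip(y, -10.0, 10.0)      # X = [-10,10]^2
F_hat = lambda xi, y: y - xi.mean(axis=0)     # empirical average of F(xi, y) = y - xi

beta, lam, theta, alpha_hat = 1.0, 0.4, 0.5, 1.0
x = np.array([9.0, 9.0])
for k in range(40):
    N_k = 20 * (k + 1)
    xi = rng.normal(loc=x_target, scale=0.1, size=(N_k, 2))
    g = x - beta * F_hat(xi, x)
    if np.linalg.norm(x - proj(g)) < 1e-12:
        break                                 # x solves the empirical VI
    alpha = alpha_hat
    while True:                               # line search rule of the algorithm
        z_bar = alpha * proj(g) + (1 - alpha) * x
        if F_hat(xi, z_bar) @ (x - proj(g)) >= (lam / beta) * np.linalg.norm(x - proj(g))**2:
            break
        alpha *= theta
    z = alpha * proj(g) + (1 - alpha) * x
    Fz = F_hat(xi, z)
    gamma = (Fz @ (x - z)) / (Fz @ Fz)        # step onto the separating hyperplane
    x = proj(x - gamma * Fz)
# x approximates x_target
```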
(i) \emph{Robust variance reduction with efficient oracle complexity and multiplicative noise}: To the best of our knowledge, \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} is the first provably \emph{robust} variance reduced SA method, either for SVIs or SPs, with improved iteration complexity and near optimal oracle complexity. This means that we obtain, up to constants, an optimal iteration complexity of $\mathcal{O}(\epsilon^{-1})$ and a near optimal oracle complexity of $\mathcal{O}(\epsilon^{-2})$ (up to log factors on $\epsilon$ and $L$) in the large sample setting for SVIs with Lipschitz continuous operators without a priori knowledge of the Lipschitz constant $L$. Previous nonrobust variance reduction methods use the policy $\alpha_k=\mathcal{O}(\frac{1}{L})$ and obtain, up to constants, the same complexities \cite{iusem:jofre:oliveira:thompson2017} but require an exogenous estimate of $L$. Such an estimate is often unavailable in practice. Moreover, even when such an estimate exists, convergence can be slow if the estimate is of poor quality. On the other hand, previous robust methods use vanishing stepsizes with the poorer iteration complexity of $\mathcal{O}(\epsilon^{-2})$ in the case of ill-conditioned problems \cite{nem:jud:lan:shapiro2009}. Concerning line search schemes, none exist for SVIs, and our results appear to be new even for SPs (seen as a particular SVI): all current methods either still use knowledge of $L$ and other parameters, use the small stepsize policy (and, hence, have a slower iteration complexity) or postulate \eqref{equation:postulated:SLLN} without giving complexity estimates \cite{wardi1990,krejic:luzanin:ovcin:stojkovska2015,krejic:jerinkic2015,
krejic:luzanin:nikolovski:stojkovska2015}. Condition \eqref{equation:postulated:SLLN} is much stronger than the standard assumption of an UO, a sufficient assumption for our analysis. Differently from previous robust methods for bounded ill-conditioned problems \cite{nem:jud:lan:shapiro2009}, we require only Assumption \ref{assumption:holder:continuity} (an oracle with multiplicative noise). In this aggressive but practical setting, the oracle's variance is not uniformly upper bounded if $X$ is unbounded. Our bounds are local in the sense that they depend on the variance at solutions, the Lipschitz constant and the initial iterates (but not on the diameter of $X$ nor on a global variance upper bound). Compared to nonrobust variance reduced methods \cite{byrd:chiny:nocedal:wu2012,ghadimi:lan2016,iusem:jofre:oliveira:thompson2017}, a price to pay in our estimates for not having an exogenous estimate of $L$ is that the oracle complexity of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} has an additional factor of $\ln (L)\mathcal{O}(d)$. We note, however, that this upper bound is tight in comparison to the sample complexity of the general SAA estimator, an estimator which does not assume extra information on the problem (see e.g. Theorem 5.18 in \cite{shapiro:dent:rus2009}). We refer to Theorem \ref{thm:rate:convergence}, Corollary \ref{cor:oracle:complexity} and Section \ref{section:conclusion}.\footnote{Our complexities hold for the quadratic natural residual or the D-gap function (see Section \ref{section:preliminaries}). If $X$ is compact, our method achieves the same complexities, up to constants, in terms of the dual-gap function (see e.g. \cite{nem:jud:lan:shapiro2009, chen:lan:ouyang2017}).}
(ii) \emph{Complexity estimates of SA methods via a local empirical process theory}: as mentioned before, DS-SA line search schemes intrinsically introduce nonmartingale-like processes. Going beyond the standard martingale techniques used in SA methods with exogenous stepsize policies, we use a novel analysis based on advanced techniques from \emph{Empirical Process Theory} \cite{boucheron:lugosi:massart2013,panchenko2003} to analyze the correlated errors introduced in stochastically approximated line search schemes. Very importantly, we \emph{do not} postulate significantly narrower oracle assumptions such as \eqref{equation:postulated:SLLN} used in \cite{wardi1990,krejic:luzanin:ovcin:stojkovska2015,krejic:jerinkic2015,
krejic:luzanin:nikolovski:stojkovska2015}. We refer the reader to Section \ref{section:empirical:process:theory:DSSA} for a detailed description. This is the most sensitive part of our work and our cornerstone tool. Our analysis also sets the ground for potential generalizations to other robust algorithms based on the SA methodology\footnote{Possibly requiring nontrivial adaptations.}. In a nutshell, our approach is to \emph{locally} decouple the dependency in the correlated error up to the control of an empirical process over a suitable ball centered at the current iterate. The intuition here is that the iterate generated after the line search scheme, although highly dependent on the fresh i.i.d. sample, lies in a ball whose radius depends on previous information and on a martingale difference error. We refer to Section \ref{section:empirical:process:theory:DSSA} for further details. The statistical preliminaries are carefully presented.
Besides items (i)-(ii) above, another contribution is the proof of convergence of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}. Our main interest in this algorithm is that, differently than \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} whose convergence holds for Lipschitz continuous operators, \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} converges for arbitrary H\"older continuous operators without any knowledge of the exponent $\delta$ and the H\"older modulus.
In Section \ref{section:preliminaries} we give some preliminaries. Section \ref{section:empirical:process:theory:DSSA} develops a general empirical process theory which is later applied in the convergence theory of \textsf{Algorithms \ref{algorithm:DSSA:extragradient}} and \textsf{\ref{algorithm:DSSA:hyperplane}}. The convergence theory of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} is presented in Section \ref{section:algorithm:extragradient:DSSA} while the convergence theory of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} is presented in Section \ref{section:algorithm:hyperplane:DSSA}. Section \ref{section:conclusion} concludes with some discussions concerning \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}. Some lemmas are proved in the Appendix.
\section{Preliminaries and notation}\label{section:preliminaries}
For $x,y\in\mathbb{R}^d$, we denote by $\langle x,y\rangle$ the standard inner product, and by $\Vert x\Vert=\sqrt{\langle x,x\rangle}$ the corresponding Euclidean norm. Given $C\subset\mathbb{R}^d$ and $x\in\mathbb{R}^d$, we use the notation $\dist(x,C):=\inf\{\Vert x-y\Vert:y\in C\}$ and $\diam(C):=\sup\{\Vert x-y\Vert:x,y\in C\}$. For a closed and convex set $C\subset\mathbb{R}^d$, we use the notation $\Pi_{C}(x):=\argmin_{y\in C}\Vert y-x\Vert^2$ for $x\in\mathbb{R}^d$. Given $H:\mathbb{R}^d\rightarrow\mathbb{R}^d$, S$(H,C)$ denotes the solution set of VI$(H,C)$. The following properties of the projection operator are well known (see e.g. \cite{facchinei:pang2003, iusem:svaiter1997} and \cite{chen:lan:ouyang2017}, Proposition 4.1).
\begin{lemma}\label{lemma:proj}
Take a closed and convex set $C\subset\mathbb{R}^d$.
\begin{itemize}
\item[i)] Let $v\in\mathbb{R}^d$ and $x\in C$ with $z:=\Pi_C[x-v]$. Then, for all $u\in C$,
$
2\langle v,z-u\rangle\le\Vert x-u\Vert^2-\Vert z-u\Vert^2-\Vert z-x\Vert^2.
$
\item[ii)] For all $x\in\mathbb{R}^d, y\in C$,
$
\Vert \Pi_{C}(x)-y\Vert^2+\Vert \Pi_{C}(x)-x\Vert^2\le\Vert x-y\Vert^2.
$
\item[iii)]For all $x,y\in\mathbb{R}^d$,
$
\Vert \Pi_{C}(x)-\Pi_{C}(y)\Vert\le\Vert x-y\Vert.
$
\item[iv)]Given $H:\mathbb{R}^d\rightarrow\mathbb{R}^d$, $\mbox{\emph{S}}(H,C)=\{x\in\mathbb{R}^d:x=\Pi_{C}[x-H(x)]\}$.
\item[v)] For all $x\in C,y\in\mathbb{R}^d$,
$
\langle x-y,x-\Pi_C(y)\rangle\ge\Vert x-\Pi_C(y)\Vert^2.
$
\end{itemize}
\end{lemma}
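Items (ii) and (iii) of Lemma \ref{lemma:proj} can be checked numerically for the box $C=[-1,1]^3$ (an illustrative choice); this is only a sanity check, not a proof.

```python
import numpy as np

rng = np.random.default_rng(4)
proj = lambda v: np.clip(v, -1.0, 1.0)   # projection onto the box C = [-1,1]^3
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # item (iii): nonexpansiveness of the projection
    assert np.linalg.norm(proj(x) - proj(y)) <= np.linalg.norm(x - y) + 1e-12
    # item (ii): with the second point taken inside C
    yc = proj(y)
    lhs = np.linalg.norm(proj(x) - yc)**2 + np.linalg.norm(proj(x) - x)**2
    assert lhs <= np.linalg.norm(x - yc)**2 + 1e-9
```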
For $X$ as in \eqref{problem:SVI:intro}, we use the notation $\Pi:=\Pi_X$. Given an operator $H:\mathbb{R}^d\rightarrow\mathbb{R}^d$, for any $x\in\mathbb{R}^{n}$ and $\alpha>0$, the \emph{natural residual function} associated to VI$(H,X)$ is
$$
r_\alpha(H;x):=\left\Vert x-\Pi\left[x-\alpha H(x)\right]\right\Vert,\quad\quad (x\in X).
$$
It is a metric equivalent to the D-gap function (see \cite{facchinei:pang2003}, Theorems 10.2.3 and 10.3.3 and Proposition 10.3.7). For $T$ as in \eqref{problem:SVI:intro}, we use the notation $r_\alpha:=r_\alpha(T;\cdot)$. For $\alpha=1$, we define $r(H;\cdot):=r_1(H;\cdot)$ and $r:=r_1$. We shall need the following lemma (see \cite{facchinei:pang2003}, Proposition 10.3.6).
\begin{lemma}\label{lemma:residual:decrease}
Given $x\in\mathbb{R}^d$, the function $(0,\infty)\ni\alpha\mapsto \frac{r_\alpha(H;x)}{\alpha}$ is non-increasing.
\end{lemma}
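Lemma \ref{lemma:residual:decrease} can be illustrated numerically on a hypothetical affine operator over the box $X=[0,1]^2$ (all numbers below are illustrative choices):

```python
import numpy as np

proj = lambda y: np.clip(y, 0.0, 1.0)    # X = [0,1]^2
H = lambda y: np.array([[2.0, 1.0], [1.0, 3.0]]) @ y - np.array([1.0, 2.0])
# natural residual r_alpha(H; x) = ||x - proj(x - alpha H(x))||
r = lambda alpha, x: np.linalg.norm(x - proj(x - alpha * H(x)))

x = np.array([0.9, 0.1])
alphas = [0.1, 0.5, 1.0, 2.0, 5.0]
vals = [r(a, x) / a for a in alphas]     # the map alpha -> r_alpha / alpha
# vals is non-increasing, as stated in the lemma
```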
Given sequences $\{x^k\}$ and $\{y^k\}$, we use the notation $x^k=\mathcal{O}(y^k)$ or $\Vert x^k\Vert\lesssim\Vert y^k\Vert$
to mean that there exists a constant $C>0$ such that $\Vert x^k\Vert\le C\Vert y^k\Vert$ for all $k$. The notation $\Vert x^k\Vert\sim\Vert y^k\Vert$ means that $\Vert x^k\Vert\lesssim\Vert y^k\Vert$ and $\Vert y^k\Vert\lesssim\Vert x^k\Vert$. Given a $\sigma$-algebra $\mathcal{F}$ and a random variable $\xi$, we denote by $\mathbb{E}[\xi]$, $\mathbb{E}[\xi|\mathcal{F}]$, and $\mathbb{V}[\xi]$, the expectation, conditional expectation and
variance, respectively. Given $p\ge1$, $\Lpnorm{\xi}$ is the $\mathcal{L}^p$-norm of $\xi$ and
$
\Lpnorm{\xi\,|\mathcal{F}}:=\sqrt[p]{\mathbb{E}\left[|\xi|^p\,|\mathcal{F}\right]}
$
is the $\mathcal{L}^p$-norm of $\xi$ conditional on $\mathcal{F}$. We denote by $\sigma(\xi_1,\ldots,\xi_k)$ the $\sigma$-algebra generated by the random variables $\{\xi_i\}_{i=1}^k$ and $\mathbb{E}[\cdot|\xi_1,\ldots,\xi_k]:=\mathbb{E}[\cdot|\sigma(\xi_1,\ldots,\xi_k)]$. We write $\xi\in\mathcal{F}$ for ``$\xi$ is $\mathcal{F}$-measurable'', $\xi\perp\perp\mathcal{F}$ for ``$\xi$ is independent of $\mathcal{F}$'' and $\mathsf{1}_A$ for the characteristic function of a set $A\in\mathcal{F}$. Given $x,y\in\mathbb{R}$, $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$, $x\vee y:=\max\{x,y\}$ and $x\wedge y:=\min\{x,y\}$. $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ and, for $m\in\mathbb{N}$, we use the notation $[m]=\{1,\ldots,m\}$. $|\mathcal{V}|$ denotes the cardinality of a set $\mathcal{V}$, $\mathbb{B}$ denotes the Euclidean unit ball and $\mathbb{B}[x,r]$ denotes the Euclidean ball with center $x$ and radius $r>0$.
As in other stochastic approximation methods, a fundamental tool to be used is the following Convergence Theorem of Robbins and Siegmund \cite{robbins:siegmund1971} for perturbed nonnegative supermartingales.
\begin{theorem}\label{thm:rob}
Let $\{y_k\},\{u_k\}, \{a_k\}, \{b_k\}$ be sequences of non-negative random variables, adapted to the filtration $\{\mathcal{F}_k\}$, such that almost surely (a.s.) $\sum a_k<\infty$, $\sum b_k<\infty$ and for all $k\in\mathbb{N}$,
$
\mathbb{E}\big[y_{k+1}\big| \mathcal{F}_k\big]\le(1+a_k)y_k-u_k+b_k.
$
Then a.s. $\{y_k\}$ converges and $\sum u_k<\infty$.
\end{theorem}
\section{An empirical process theory for DS-SA line search schemes}\label{section:empirical:process:theory:DSSA}
As mentioned in Section \ref{section:related:proposed:work}, if $L$ in Assumption \ref{assumption:holder:continuity} is known, then the analysis of SA methods with the CSP can exploit the fact that the oracle errors define a \emph{martingale difference}. This type of error can be controlled in a relatively straightforward way (see Lemma \ref{lemma:decay:empirical:error} in Section \ref{section:proof:theorem}). The main objective of this section is to prove the following theorem. This will be the most sensitive part of our analysis and the cornerstone tool to handle the \emph{nonmartingale-like} oracle errors obtained when DS-SA line search schemes are used to estimate an unknown $L$ (see \textsf{(A)} in Section \ref{section:related:proposed:work} and the comments following it).
\begin{theorem}[Local bound for the $\mathcal{L}^p$-norm of the correlated error in DS-SA line search schemes]\label{thm:variance:error:with:line:search}
Consider the \emph{SVI} given by \eqref{equation:expected:valued:objective}-\eqref{problem:SVI:intro} with solution set $X^*$. Let $\xi^N:=\{\xi_j\}_{j=1}^N$ be an i.i.d. sample drawn from $\mathbf{P}$ and let $\alpha_N:\Xi\rightarrow[0,\hat\alpha]$ be a random variable for some $0<\hat\alpha\le1$. Suppose that Assumption \ref{assumption:holder:continuity} holds, recall definitions \eqref{equation:oracle:error}-\eqref{equation:empirical:mean:operator:&:error} and define $\delta_1:=0$ if $\delta=1$ and $\delta_1:=1$ if $\delta\in(0,1)$.
Given $(\alpha,x)\in[0,\hat\alpha]\times X$, we define
$$
z\left(\xi^N;\alpha,x\right):=\Pi\left[x-\alpha\widehat F\left(\xi^N,x\right)\right],
$$
and $\overline z_\beta(\xi^N;\alpha,x):=\alpha z(\xi^N;\beta,x)+(1-\alpha)x$, given $\beta>0$. Then the following holds:
\begin{itemize}
\item[(i)] There exist positive constants $\{\mathsf{c}_i\}_{i=1}^4$ (depending on $d$, $\delta$, $p$ and $L_{2p}\hat\alpha$) such that, for any $x\in X$ and $x^*\in X^*$,
\begin{eqnarray*}
\Lpnorm{\left\Vert\widehat\epsilon\left(\xi^N, z(\xi^N;\alpha_N,x)\right)\right\Vert}&\le &\frac{\mathsf{c}_1\sigma_{2p}(x^*)+\overline{L}_{2p}\left[\delta_1\vee\Vert x-x^*\Vert^\delta\right]}{\sqrt{N}},
\end{eqnarray*}
where $\overline{L}_{2p}:=\mathsf{c}_2L_2+\mathsf{c}_3L_p+\mathsf{c}_4L_{2p}$.
\item[(ii)] If $X$ is compact, there exist positive constants $\mathsf{d}_2$ and $C_p$ (depending on $d$, $\delta$ and $p$) such that, for any $x\in X$ and $x^*\in X^*$,
\begin{eqnarray*}
\Lpnorm{\left\Vert\widehat\epsilon\left(\xi^N, z(\xi^N;\alpha_N,x)\right)\right\Vert}&\le &\frac{C_p\sigma_{p}(x^*)+L_{p}^*\diam(X)^\delta}{\sqrt{N}},
\end{eqnarray*}
where ${L}_{p}^*:=\mathsf{d}_2L_2+pL_p$.
\end{itemize}
Up to universal constants, the same bounds above hold for
$\Lpnorm{\left\Vert\widehat\epsilon\left(\xi^N, \overline z_\beta(\xi^N;\alpha_N,x)\right)\right\Vert}$.
\end{theorem}
For further detail on the constants of Theorem \ref{thm:variance:error:with:line:search}, see Remark \ref{rem:constants:thm:correlated:error} in Section \ref{section:proof:theorem}. To prove Theorem \ref{thm:variance:error:with:line:search}, we will crucially require intermediate results which rely on a branch of statistics called \emph{Empirical Process Theory}. Let $\{X_j\}_{j=1}^N$ be a sequence of \emph{independent} stochastic processes $X_j:=(X_{j,t})_{t\in\mathcal{T}}$ indexed by a countable set $\mathcal{T}$ with real-valued random components $X_{j,t}$. The associated \emph{empirical process} (EP) is the stochastic process $\mathcal{T}\ni t\mapsto Z_t:=\sum_{j=1}^NX_{j,t}$. An essential quantity in this theory is $Z:=\sup_{t\in\mathcal{T}}Z_t$. If $\mathcal{T}=\{t\}$, then $Z$ is simply a sum of independent random variables. Otherwise, $Z$ is a much more complicated object. To understand $Z$, it is important to bound its expectation and variance. EPs arise in many different settings in mathematical statistics \cite{boucheron:lugosi:massart2013}.
We apply EP theory as a novel way to successfully analyze stochastically approximated line search schemes. Referring to \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} and Theorem \ref{thm:variance:error:with:line:search}, we have $z^k=z(\xi^k;\alpha_k,x^k)$ and must control the correlated error $\widehat\epsilon(\xi^k,z(\xi^k;\alpha_k,x^k))$. Our strategy is to construct an EP that \emph{locally decouples} the dependence in $\widehat\epsilon(\xi^k,z^k)$ between $\xi^k$ and $z^k$ at the $k$-th iteration.\footnote{Recall that such dependence is produced by the need to evaluate $\widehat F(\xi^k,\cdot)$ along the path $\alpha\mapsto z^k(\alpha)$ in order to choose the stepsize $\alpha_k$. Analogous observations hold for \eqref{algo:armijo:rule2}: $z^k=\overline z_{\beta_k}(\xi^k;\alpha_k,x^k)$.} The intuition behind our decoupling technique is that, although $z^k$ is a function of $(\xi^k,x^k)$, $z^k$ \emph{lies in a ball $\mathbb{B}_k$ centered at any given $x^*\in X^*$ with radius of $\mathcal{O}(\Vert x^k-x^*\Vert+\Vert\widehat\epsilon(\xi^k,x^k)\Vert)$}. Based on this fact and the fact that, by i.i.d. sampling, $\xi^k\perp\perp x^k$, we can decouple $\xi^k$ and $z^k$ using the following guidelines:
\begin{itemize}
\item[(i)] we \emph{condition} on the past information $\mathcal{F}_k$, noting that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$,
\item[(ii)] we then control an \emph{EP} indexed by the ball $\mathbb{B}_k$,
\item[(iii)] we further note that in item (ii) we must also control $\widehat\epsilon(\xi^k,x^k)$ which affects the radius of the ball $\mathbb{B}_k$. Nevertheless, since $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$, $\widehat\epsilon(\xi^k,x^k)$ is a \emph{martingale difference} and, hence, easier to estimate.
\end{itemize}
The developed theory is presented in consecutive sections. The statistical preliminaries used outside the proofs are carefully introduced so as to make the presentation as self-contained as possible. We refer to the excellent book \cite{boucheron:lugosi:massart2013} by S. Boucheron, G. Lugosi and P. Massart, a standard reference in the area. A global outline is as follows. Typically, if $Z:=\sup_{t\in\mathcal{T}}Z_t$ for a stochastic process $(Z_t)_{t\in\mathcal{T}}$, an upper bound on $\mathbb{E}[Z]$ is derived under a suitable tail property on the increments of $(Z_t)_{t\in\mathcal{T}}$ and chaining arguments \cite{dudley1967}. In Section \ref{section:L2:norm}, we derive instead an upper bound on $\Lnorm{Z}\ge\mathbb{E}[Z]$ in Lemma \ref{lemma:lnorm:process}. The main reason to do so is that we assume \emph{heavy-tailed} random operators satisfying Assumption \ref{assumption:holder:continuity}. As a consequence, we will work with the \emph{square} of \emph{sub-Gaussian} random variables (see Definition \ref{def:subGaussian}). In Section \ref{section:Lp:norm}, we apply Lemma \ref{lemma:lnorm:process} derived in Section \ref{section:L2:norm} to obtain the general Lemma \ref{lemma:error:decay:empirical:process}. This lemma provides a \emph{uniform bound over a ball} on the $\mathcal{L}^p$-norm of \emph{empirical error increments of heavy-tailed H\"older continuous operators}, the main stochastic object in this work. \emph{Self-normalization} (see \cite{panchenko2003} and Theorem \ref{thm:panchenko}), variance bounds (Theorem \ref{thm:moment:emp:lugosi}) and a simple decoupling argument based on H\"older's inequality are also needed for that purpose. Finally, the proof of Theorem \ref{thm:variance:error:with:line:search} is given in Section \ref{section:proof:theorem}.
It relies on Lemma \ref{lemma:error:decay:empirical:process}, the Burkholder-Davis-Gundy's moment inequality for martingales in Hilbert spaces \cite{burkholder:davis:gundy1972,marinelli:rockner2016} and the ideas of items (i)-(iii) above.
\subsection{The $\mathcal{L}^2$-norm of suprema of sub-Gaussian processes}\label{section:L2:norm}
In order to bound the expectation or the $\mathcal{L}^2$-norm of $\sup_{t\in\mathcal{T}}Z_t$ for a stochastic process $(Z_t)_{t\in\mathcal{T}}$, it is important to understand the tail behavior of its increments $(Z_t-Z_{t'})_{(t,t')\in\mathcal{T}\times\mathcal{T}}$. We will thus need the definitions of \emph{sub-Gaussian} and \emph{sub-Gamma} random variables.
\begin{definition}[sub-Gaussian and sub-Gamma random variables]\label{def:subGaussian}
A random variable $Y\in\mathbb{R}$ is called \emph{sub-Gaussian with variance factor $\sigma^2>0$} if, for all $s\in\mathbb{R}$,
$
\ln\mathbb{E}\left[e^{sY}\right]\le\frac{\sigma^2s^2}{2}.
$
A random variable $Y\in\mathbb{R}$ is called \emph{sub-Gamma on the right tail with variance factor $\sigma^2>0$ and scale parameter $c>0$} if, for all $0<s<\frac{1}{c}$,
$
\ln\mathbb{E}\left[e^{sY}\right]\le\frac{\sigma^2s^2}{2(1-cs)}.
$
\end{definition}
Hence, a random variable $Y$ is sub-Gaussian if $Y$ and $-Y$ are sub-Gamma on the right tail with scale parameter $c=0$. In order to compute $\mathcal{L}^2$-norms under heavier tails, we will also need the following result, which establishes that the centered \emph{square} of a sub-Gaussian random variable is sub-Gamma on the right tail. It follows, e.g., as a corollary of Theorem 2.1 and Remark 2.3 in \cite{hsu:kakade:zhang2012} in the one dimensional setting.
\begin{theorem}[Square of sub-Gaussian random variables]\label{thm:quad:form}
Suppose that $Y\in\mathbb{R}$ is a sub-Gaussian random variable with variance factor $\sigma^2$. Then,
for all $0\le s<\frac{1}{2\sigma^2}$,
$
\ln\mathbb{E}\left[e^{sY^2}\right]\le\sigma^2s+\frac{\sigma^4s^2}{1-2\sigma^2s}.
$
\end{theorem}
One celebrated technique to understand $\sup_{t\in\mathcal{T}}Z_t$ for a stochastic process $(Z_t)_{t\in\mathcal{T}}$ is the so-called \emph{chaining method} (see e.g. \cite{dudley1967}). This consists in approximating $\mathcal{T}$ by an increasing chain of finer discrete subsets. In this quest, the ``complexity'' of the index set $\mathcal{T}$ plays an important role. This is formalized in the next definition.
\begin{definition}[Metric entropy]\label{definition:metric:entropy}
Let $(\mathcal{T},d)$ be a totally bounded metric space. Given $\theta>0$, a $\theta$\emph{-net} for $\mathcal{T}$ is a finite set $\mathcal{T}_\theta\subset\mathcal{T}$ of maximal cardinality $N(\theta,\mathcal{T})$ such that for all $s,t\in\mathcal{T}_\theta$ with $s\neq t$, one has $\dist(s,t)>\theta$. The $\theta$\emph{-entropy number} is $H(\theta,\mathcal{T}):=\ln N(\theta,\mathcal{T})$. The function $H(\cdot,\mathcal{T})$ is called the \emph{metric entropy} of $\mathcal{T}$.
\end{definition}
In particular, for all $t\in\mathcal{T}$, there is $s\in\mathcal{T}_\theta$ such that $\dist(s,t)\le\theta$. Note that the metric entropy is a nonincreasing real-valued function. The next lemma establishes the metric entropy of the Euclidean unit ball $\mathbb{B}$ of $\mathbb{R}^d$ (see Lemma 13.11 of \cite{boucheron:lugosi:massart2013}).
\begin{lemma}[Metric entropy of Euclidean balls]\label{lemma:entropy}
Let $\mathbb{B}$ be the Euclidean unit ball of $\mathbb{R}^d$. For all $\theta\in(0,1]$,
$
H(\theta,\mathbb{B})\le d\ln\left(1+\frac{1}{\theta}\right).
$
\end{lemma}
Hence, the ``complexity'' of $\mathbb{B}$ is proportional to $d$, an effect perceived in high-dimensional problems. However, note that $H(\theta,\mathbb{B})$ grows slowly when the discretization precision $\theta$ diminishes. This is a key property in order for the chaining method to work.
Before proving the main Lemma \ref{lemma:lnorm:process} in this section, we state one more needed preliminary result. It bounds the expectation of the maximum of a \emph{finite} number of sub-Gamma random variables (see, e.g., Corollary 2.6 of \cite{boucheron:lugosi:massart2013}). It is an essential lemma while using discretization arguments.
\begin{lemma}[Expectation of maxima of sub-Gamma random variables]\label{lemma:maximal:inequality}
Let $\{Y_i\}_{i=1}^N$ be real-valued sub-Gamma random variables on the right tail with variance factor $\sigma^2>0$ and scale parameter $c>0$. Then
$$
\mathbb{E}\left[\max_{i=1,\ldots,N}Y_i\right]\le \sqrt{2\sigma^2\ln N}+c\ln N.
$$
\end{lemma}
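Lemma \ref{lemma:maximal:inequality} can be sanity-checked by Monte Carlo for standard Gaussians, which are sub-Gamma on the right tail with $c=0$; the sample sizes below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, N, reps = 1.0, 1000, 2000
# N(0, sigma^2) variables are sub-Gamma on the right tail with c = 0
samples = rng.normal(scale=sigma, size=(reps, N))
emp_mean_max = samples.max(axis=1).mean()     # Monte Carlo estimate of E[max_i Y_i]
bound = np.sqrt(2 * sigma**2 * np.log(N))     # sqrt(2 sigma^2 ln N) + c ln N with c = 0
# emp_mean_max is below the bound of the lemma
```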
\begin{lemma}[$\mathcal{L}^2$-norm of suprema of sub-Gaussian processes]\label{lemma:lnorm:process}
Let $(\mathcal{T},d)$ be a totally bounded metric space and $\theta:=\sup_{t\in\mathcal{T}}\dist(t,t_0)$ for some $t_0\in\mathcal{T}$. Suppose $(Z_t)_{t\in\mathcal{T}}$ is a continuous stochastic process for which there exist $a,v>0$ and $\delta\in(0,1]$ such that, for all $t,t'\in\mathcal{T}$ and all $\lambda>0$,
\begin{equation}\label{lemma:lnorm:process:eq0}
\ln\mathbb{E}[\exp\{\lambda(Z_t-Z_{t'})\}]\le a\dist(t,t')^\delta\lambda+\frac{v\dist(t,t')^{2\delta}\lambda^{2}}{2}.
\end{equation}
Then
$$
\Lnorm{\sup_{t\in\mathcal{T}}Z_t-Z_{t_0}}\le
(3\theta)^\delta\sqrt{2(a^2+v)}\left[\frac{1}{2^\delta-1}+\sum_{i=1}^\infty\frac{\sqrt[4]{8H\left(\theta 2^{-i},\mathcal{T}\right)}+2\sqrt{H\left(\theta 2^{-i},\mathcal{T}\right)}}{2^{i\delta}}\right].
$$
\end{lemma}
\begin{proof}
We first note that the continuity of $t\mapsto Z_t$ and the separability of $\mathcal{T}$ imply that, for any continuous function $f$, $\sup_{t\in\mathcal{T}}f(Z_t)$ is measurable since it equals $\sup_{t\in\mathcal{T}'}f(Z_t)$ for a countable dense subset $\mathcal{T}'$ of $\mathcal{T}$.
Set $\mathcal{T}_0:=\{t_0\}$. Given $i\in\mathbb{N}$, we set $\theta_i:=\theta2^{-i}$ and denote by $\mathcal{T}_i$ a $\theta_i$-net for $\mathcal{T}$ with maximal cardinality $N(\theta_i,\mathcal{T})$. We also denote by $\Pi_i:\mathcal{T}\rightarrow\mathcal{T}_i$ the metric projection associated to $\dist$, that is, for any $t\in\mathcal{T}$, $\Pi_i(t)\in\argmin_{t'\in\mathcal{T}_i}\dist(t,t')$. By the definition of a net, we have that, for all $t\in\mathcal{T}$ and $i\in\mathbb{N}$,
$
\dist(t,\Pi_i(t))\le\theta_i.
$
By the triangle inequality, this implies that for all $t\in\mathcal{T}$ and $i\in\mathbb{N}$,
\begin{equation}\label{lemma:lnorm:process:eq4}
\dist(\Pi_i(t),\Pi_{i+1}(t))\le\theta_i+\theta_{i+1}=3\theta_{i+1}.
\end{equation}
For any $t\in\mathcal{T}$, $\lim_{i\rightarrow\infty}\Pi_i(t)=t$ and $\Pi_0(t)=t_0$ imply that
\begin{equation*}
Z_t=Z_{t_0}+\sum_{i=0}^\infty(Z_{\Pi_{i+1}(t)}-Z_{\Pi_i(t)}).
\end{equation*}
In the following, we denote $\Delta_i(t):=Z_{\Pi_{i+1}(t)}-Z_{\Pi_i(t)}$ for all $i\in\mathbb{N}_0$ and $t\in\mathcal{T}$. The above equality implies that
$
(Z_t-Z_{t_0})^2=\sum_{i=0}^\infty\sum_{k=0}^\infty\Delta_i(t)\Delta_k(t).
$
Hence,
\begin{eqnarray}
\mathbb{E}\left[\sup_{t\in\mathcal{T}}(Z_t-Z_{t_0})^2\right]&\le &
\sum_{i=0}^\infty\sum_{k=0}^\infty\mathbb{E}\left[\sup_{t\in\mathcal{T}}\left\{\Delta_i(t)\Delta_k(t)\right\}\right]\nonumber\\
&\le &\sum_{i=0}^\infty\sum_{k=0}^\infty\Lnorm{\sup_{t\in\mathcal{T}}|\Delta_i(t)|}\cdot\Lnorm{\sup_{t\in\mathcal{T}}|\Delta_k(t)|}\nonumber\\
&=&\left[\sum_{i=0}^\infty\Lnorm{\sup_{t\in\mathcal{T}}|\Delta_i(t)|}\right]^2,\label{lemma:lnorm:process:eq2}
\end{eqnarray}
using H\"older's inequality in the second inequality.
Fix $i\in\mathbb{N}$. Since $N(\theta_{i},\mathcal{T})\le N(\theta_{i+1},\mathcal{T})$, we have that
\begin{equation}\label{lemma:lnorm:process:eq3}
|\{(\Pi_i(t),\Pi_{i+1}(t)):t\in\mathcal{T}\}|\le N(\theta_{i+1},\mathcal{T})^2
=e^{2H(\theta_{i+1},\mathcal{T})}.
\end{equation}
Relations \eqref{lemma:lnorm:process:eq0} and \eqref{lemma:lnorm:process:eq4} imply that, for all $t\in\mathcal{T}$,
\begin{eqnarray*}
\ln\mathbb{E}\left[e^{\lambda\Delta_i(t)}\right]\le a\dist\left(\Pi_i(t),\Pi_{i+1}(t)\right)^{\delta}\lambda+\frac{v\dist\left(\Pi_i(t),\Pi_{i+1}(t)\right)^{2\delta}\lambda^2}{2}
\le a_i\lambda+\frac{v_i\lambda^2}{2},
\end{eqnarray*}
where we have defined $a_i:=a(3\theta_{i+1})^\delta$ and $v_i:=v(3\theta_{i+1})^{2\delta}$. The above relation implies that, for all $t\in\mathcal{T}$, $\Delta_i(t)-a_i$ is sub-Gaussian with variance factor $v_i$. This, Theorem \ref{thm:quad:form}, the bound $\Delta_i(t)^2\le2[\Delta_i(t)-a_i]^2+2a_i^2$ and the change of variables $\lambda\mapsto2\lambda$ imply that, for all $t\in\mathcal{T}$ and $0<\lambda<\frac{1}{4v_i}$,
\begin{equation}\label{lemma:lnorm:process:eq5}
\ln\mathbb{E}\left[e^{\lambda\Delta_i(t)^2}\right]\le 2(a_i^2+v_i)\lambda+\frac{4v_i^2\lambda^2}{(1-4v_i\lambda)},
\end{equation}
that is, for all $t\in\mathcal{T}$, $\Delta_i(t)^2-2(a_i^2+v_i)$ is sub-Gamma on the right tail with variance factor $8v_i^2$ and scale parameter $4v_i$. Relations \eqref{lemma:lnorm:process:eq3}-\eqref{lemma:lnorm:process:eq5}
and Lemma \ref{lemma:maximal:inequality} imply further that
\begin{eqnarray*}
\mathbb{E}\left[\sup_{t\in\mathcal{T}}\Delta_i(t)^2\right]&\le &
2(a_i^2+v_i)+\sqrt{2\cdot8v_i^2\cdot2H(\theta_{i+1},\mathcal{T})}
+4v_i\cdot2H(\theta_{i+1},\mathcal{T})\\
&\le &2\cdot9^\delta(a^2+v)\left[\theta_{i+1}^{2\delta}+\theta_{i+1}^{2\delta}\sqrt{8H(\theta_{i+1},\mathcal{T})}+4\theta_{i+1}^{2\delta} H(\theta_{i+1},\mathcal{T})\right].
\end{eqnarray*}
Taking the square root in the above relation we get
\begin{equation}\label{lemma:lnorm:process:eq6}
\Lnorm{\sup_{t\in\mathcal{T}}|\Delta_i(t)|}\le
3^\delta\sqrt{2(a^2+v)}\left[\theta_{i+1}^{\delta}+\theta_{i+1}^{\delta}\sqrt[4]{8H(\theta_{i+1},\mathcal{T})}+2\theta_{i+1}^{\delta}\sqrt{H(\theta_{i+1},\mathcal{T})}\right].
\end{equation}
We now take the square root in \eqref{lemma:lnorm:process:eq2} and use \eqref{lemma:lnorm:process:eq6}, valid for any $i\in\mathbb{N}$, obtaining
\begin{eqnarray*}
\Lnorm{\sup_{t\in\mathcal{T}}Z_t-Z_{t_0}}&\le &
3^\delta\sqrt{2(a^2+v)}\left[\sum_{i=1}^\infty\theta_{i}^{\delta}+\sum_{i=1}^\infty\theta_{i}^{\delta}\sqrt[4]{8H(\theta_{i},\mathcal{T})}+2\sum_{i=1}^\infty\theta_{i}^{\delta}\sqrt{H(\theta_{i},\mathcal{T})}\right].
\end{eqnarray*}
To finish the proof, we use $\theta_i=\theta 2^{-i}$ and $\sum_{i=1}^\infty\theta_{i}^{\delta}=\frac{\theta^\delta}{2^\delta-1}$ in the above inequality.
\end{proof}
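As a sanity check (not needed in the sequel), for $\delta=1$ and $a=0$ the conclusion of Lemma \ref{lemma:lnorm:process} reduces, up to a universal constant, to
$$
\Lnorm{\sup_{t\in\mathcal{T}}Z_t-Z_{t_0}}\lesssim\theta\sqrt{v}\left[1+\sum_{i=1}^\infty 2^{-i}\sqrt{H\left(\theta 2^{-i},\mathcal{T}\right)}\right],
$$
a discrete analogue of Dudley's entropy-integral bound for sub-Gaussian processes; the general statement trades the H\"older exponent $\delta$ against the geometric decay of the net radii.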
\subsection{Heavy-tailed H\"older continuous operators: self-normalization and $\mathcal{L}^q$-norms of suprema of EPs}\label{section:Lp:norm}
We will now focus on bounds of EPs associated to sums of the form $x\mapsto\sum_{j=1}^N\frac{F(\xi_j,x)-T(x)}{N}$, where $\{\xi_j\}_{j=1}^N$ is an i.i.d. sample of $\mathbf{P}$ and $F:\Xi\times X\rightarrow\mathbb{R}^d$ satisfies Assumption \ref{assumption:holder:continuity}. The main result proved in this section is Lemma \ref{lemma:error:decay:empirical:process}. Its proof will need Lemma \ref{lemma:lnorm:process} and the following theorem (see Theorem 15.14 in \cite{boucheron:lugosi:massart2013}).
\begin{theorem}[$\mathcal{L}^q$-norm for suprema of EPs]\label{thm:moment:emp:lugosi}
Let $\{X_j\}_{j=1}^N$ be an independent sequence of stochastic processes $X_j:=(X_{j,t})_{t\in\mathcal{T}}$ indexed by a countable set $\mathcal{T}$ with real-valued random components $X_{j,t}$ such that $\mathbb{E}[X_{j,t}]=0$ and $\mathbb{E}[X_{j,t}^2]<\infty$ for all $t\in\mathcal{T}$ and $j\in[N]$. Define $Z:=\sup_{t\in\mathcal{T}}\left|\sum_{j=1}^NX_{j,t}\right|$ and
\begin{eqnarray*}
M:=\max_{j\in[N]}\sup_{t\in\mathcal{T}}|X_{j,t}|,\quad\quad
\widehat\sigma^2 :=\sup_{t\in\mathcal{T}}\sum_{j=1}^N\mathbb{E}\left[X_{j,t}^2\right].
\end{eqnarray*}
Set $\kappa:=\frac{\sqrt{e}}{2(\sqrt{e}-1)}<1.271$. Then, for all $q\ge2$,
$$
\Lqnorm{Z}\le2\mathbb{E}[Z]+2\sqrt{2\kappa q}\widehat\sigma+4\sqrt{\kappa q}\Lnorm{M}+20\kappa q\Lqnorm{M}.
$$
\end{theorem}
In order to cope with a heavy-tailed $\mathsf{L}(\xi)$ in Assumption \ref{assumption:holder:continuity}, we will need Theorem \ref{thm:panchenko}, a result due to Panchenko (see Theorem 1 in \cite{panchenko2003} or Theorem 12.3 in \cite{boucheron:lugosi:massart2013}). It establishes a sub-Gaussian tail for the deviation of an EP around its mean after a proper \emph{normalization} with respect to a \emph{random} quantity $V$. In our set-up, the standard H\"older continuity assumption turns out to be sufficient to estimate this quantity.
\begin{theorem}[Panchenko's inequality for self-normalized EPs]\label{thm:panchenko}
Consider a countable family $\mathcal{G}$ of measurable functions $f:\Xi\rightarrow\mathbb{R}$ such that $\mathbf{P} f(\cdot)^2<\infty$. Let $\{\xi_j\}_{j=1}^N$ and $\{\eta_j\}_{j=1}^N$ be i.i.d. samples of $\mathbf{P}$ independent of each other. Set
$$
Y:=\sup_{f\in\mathcal{G}}\sum_{j=1}^Nf(\xi_j),\quad\mbox{ and }\quad V:=\mathbb{E}\left\{\sup_{f\in\mathcal{G}}\sum_{j=1}^N\left[f(\xi_j)-f(\eta_j)\right]^2\Bigg|\xi_1,\ldots,\xi_N\right\}.
$$
Then there exists a universal constant $\mathsf{c}>0$ such that, for all $t>0$,
$$
\mathbb{P}\left\{Y-\mathbb{E}[Y]\ge\mathsf{c}\sqrt{V(1+t)}\right\}\bigvee\mathbb{P}\left\{Y-\mathbb{E}[Y]\le -\mathsf{c}\sqrt{V(1+t)}\right\}\le e^{-t}.
$$
\end{theorem}
Finally, before proving Lemma \ref{lemma:error:decay:empirical:process}, we will need Theorem \ref{thm:characterization:subgaussian}, which is a standard tail characterization of sub-Gaussian random variables. Theorem 2.1 in \cite{boucheron:lugosi:massart2013} gives a proof for the case $\mathbb{E}[\tilde Y]=0$. The adaptation to the general case is immediate using the facts that $\mathbb{E}[e^{-t\tilde Y}]\ge e^{-t\mathbb{E}[\tilde Y]}$ by Jensen's inequality, the integral formula $\mathbb{E}[\tilde Y]\le\mathbb{E}[|\tilde Y|]=\int_0^\infty\mathbb{P}(|\tilde Y|>t)\,\mathrm{d}t$ and $\int_0^\infty e^{-\frac{t^2}{2}}\,\mathrm{d}t=\sqrt{\frac{\pi}{2}}$.
\begin{theorem}[Tail characterization of sub-Gaussian random variables]\label{thm:characterization:subgaussian}
If $\tilde Y\in\mathbb{R}$ is a random variable such that, for some $v>0$ and for all $t>0$,
$$
\mathbb{P}\left\{\tilde Y\ge\sqrt{2vt}\right\}\bigvee\mathbb{P}\left\{\tilde Y\le -\sqrt{2vt}\right\}\le e^{-t},
$$
then, for all $t>0$, we have
$
\ln\mathbb{E}\left[e^{t\tilde Y}\right]\le \sqrt{\frac{v\pi}{2}}\,t+8vt^2.
$
\end{theorem}
We now prove the main lemma of this section. It uses Lemma \ref{lemma:lnorm:process} and Theorems \ref{thm:moment:emp:lugosi}-\ref{thm:characterization:subgaussian}.
\begin{lemma}[Local uniform bound for the $\mathcal{L}^p$-norm of empirical error increments]\label{lemma:error:decay:empirical:process}
Consider definition \eqref{equation:expected:valued:objective} and let $\xi^N:=\{\xi_j\}_{j=1}^N$ be an i.i.d. sample from $\mathbf{P}$. Suppose that Assumption \ref{assumption:holder:continuity} holds and recall definitions \eqref{equation:oracle:error}-\eqref{equation:empirical:mean:operator:&:error}. Given $x_*\in X$ and $R>0$, we define
\begin{equation}
Z:=\sup_{x\in \mathbb{B}[x_*,R]\cap X}\left\Vert\widehat\epsilon(\xi^N,x)-\widehat\epsilon(\xi^N,x_*)\right\Vert.\label{def:empirical:process:statement}
\end{equation}
Then
$$
\Lpnorm{Z}\lesssim\left[\frac{3^\delta\sqrt{d}L_2}{\sqrt{\delta}\left(\sqrt{2}^\delta-1\right)}+\sqrt{p}L_2+pL_p\right]\frac{R^\delta}{\sqrt{N}}.
$$
\end{lemma}
\begin{proof}
A first step is to rewrite $Z$ as the supremum of a suitable EP and use Theorem \ref{thm:moment:emp:lugosi}. In the following, we define the set $\mathbb{B}_X:=\{u\in\mathbb{B}:x_*+Ru\in X\}$ for $x_*\in X$ and $R>0$ as stated in the theorem. Note that
\begin{eqnarray}
Z&=&\sup_{u\in \mathbb{B}_X}\frac{1}{N}\left\Vert\sum_{j=1}^N\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*)\right\Vert\nonumber\\
&=&\sup_{u\in \mathbb{B}_X}\frac{1}{N}\sup_{y\in \mathbb{B}}\left\langle\sum_{j=1}^N\epsilon(\xi_j,x_*+Ru)
-\epsilon(\xi_j,x_*),y\right\rangle\label{def:empirical:process}\\
&=&\sup_{(u,y)\in \mathbb{B}_X\times\mathbb{B}}\frac{1}{N}\sum_{j=1}^N\left\langle\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*),y\right\rangle,\nonumber
\end{eqnarray}
where the second equality uses the fact that $\Vert\cdot\Vert=\sup_{y\in \mathbb{B}}\langle y,\cdot\rangle$. Next, we define the index set $\mathcal{T}:=\mathbb{B}_X\times\mathbb{B}$ and, for every $j\in[N]$ and $t:=(u,y)\in\mathbb{B}_X\times\mathbb{B}$, we define the random variables
\begin{eqnarray}
X_{j,t}&:=&\frac{1}{N}\left\langle\epsilon(\xi_j,x_*+Ru)-
\epsilon(\xi_j,x_*),y\right\rangle,\label{def:empirical:process:xjt}\\
\tilde Z_t&:=&\sum_{j=1}^NX_{j,t}.\label{def:empirical:process:zt}
\end{eqnarray}
From Assumption \ref{assumption:holder:continuity}, it is not difficult to show that, for every $j\in[N]$, the process $\mathcal{T}\ni t\mapsto X_{j,t}$ is H\"older continuous with respect to the metric
\begin{equation}\label{equation:metric:process}
\dist(t,t'):=\Vert u-u'\Vert+\Vert y-y'\Vert.
\end{equation}
This fact, the separability of $\mathcal{T}$ and \eqref{def:empirical:process} imply that $(\tilde Z_t)_{t\in\mathcal{T}}$ is a continuous process and $Z=\sup_{t\in\mathcal{T}_0}\tilde Z_t=\sup_{t\in\mathcal{T}_0}\left|\tilde Z_t\right|$ is measurable, where $\mathcal{T}_0$ is a dense countable subset of $\mathcal{T}$. Hence, we may assume next that $\mathcal{T}$ is countable without loss of generality. Our next objective is to use Theorem \ref{thm:moment:emp:lugosi}, bounding $\Lpnorm{Z}$ in terms of $\mathbb{E}[Z]$, $M$ and $\widehat\sigma^2$.
\textsf{\textbf{PART 1} (An upper bound on $\mathbb{E}[Z]$)}: To bound $\mathbb{E}[Z]$ we will need Lemma \ref{lemma:lnorm:process} and Theorems \ref{thm:panchenko}-\ref{thm:characterization:subgaussian}. At this point, let us fix $t=(u,y)\in\mathcal{T}$ and $t'=(u',y')\in\mathcal{T}$ and define the measurable function
$$
f(\cdot):=\frac{1}{N}\left\langle\epsilon(\cdot,x_*+Ru)-
\epsilon(\cdot,x_*),y\right\rangle-\frac{1}{N}\left\langle\epsilon(\cdot,x_*+Ru')-
\epsilon(\cdot,x_*),y'\right\rangle.
$$
We have that $\mathbf{P} f(\cdot)^2<\infty$ since $\Lnorm{\Vert F(\xi,\cdot)\Vert}<\infty$ on $X$ (Assumption \ref{assumption:holder:continuity}). By construction and \eqref{def:empirical:process:xjt}-\eqref{def:empirical:process:zt}, we have $f(\xi_j)=X_{j,t}-X_{j,t'}$ for all $j\in[N]$ and $\tilde Z_t-\tilde Z_{t'}=\sum_{j=1}^Nf(\xi_j)$. Note also that $\mathbb{E}\left[\sum_{j=1}^Nf(\xi_j)\right]=0$, using \eqref{equation:expected:valued:objective}, \eqref{equation:oracle:error} and that $\{\xi_j\}_{j\in[N]}$ is an i.i.d. sample of $\mathbf{P}$.
The previous observations allow us to claim Theorem \ref{thm:panchenko} with $\mathcal{G}:=\{f\}$ and $Y:=\sum_{j=1}^Nf(\xi_j)$. Precisely, if $\{\eta_j\}_{j=1}^N$ is an i.i.d. sample from $\mathbf{P}$ which is independent of $\{\xi_j\}_{j=1}^N$, then Theorem \ref{thm:panchenko} and $\mathbb{E}\left[\sum_{j=1}^Nf(\xi_j)\right]=0$ imply that, for all $\lambda>0$,
\begin{equation}\label{thm:moment:inequality:ez:eq1}
\mathbb{P}\left\{\sum_{j=1}^Nf(\xi_j)\ge \mathsf{c}\sqrt{V(1+\lambda)}\right\}\bigvee\mathbb{P}\left\{\sum_{j=1}^Nf(\xi_j)\le-\mathsf{c}\sqrt{V(1+\lambda)}\right\}\le e^{-\lambda},
\end{equation}
for some universal constant $\mathsf{c}>0$ and
$$
V:=\mathbb{E}\left[\sum_{j=1}^N\left[f(\xi_j)-f(\eta_j)\right]^2\Bigg|\xi_1,\ldots,\xi_N\right].
$$
We will now give an upper bound on $V$. Given $\xi\in\Xi$, \eqref{equation:expected:valued:objective}, \eqref{equation:oracle:error} and the H\"older continuity of $F(\xi,\cdot)$ and $T$ (Assumption \ref{assumption:holder:continuity} and Lemma \ref{lemma:holder:continuity:mean:std:dev}) imply that $\epsilon(\xi,\cdot)$ is $(\mathsf{L}(\xi)+L, \delta)$-H\"older continuous on $X$. This, the definition of $f$ and the facts that $y,y',u,u'\in\mathbb{B}$ and $x_*+Ru,x_*+Ru'\in X$ imply that, for all $j\in[N]$ and $\Delta f_j:=N\left|f(\xi_j)-f(\eta_j)\right|$,
\begin{eqnarray*}
\Delta f_j&\le &\left|\left\langle\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*)-\epsilon(\eta_j,x_*+Ru)+\epsilon(\eta_j,x_*),y-y'\right\rangle\right|\\
&+&\left|\left\langle\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*+Ru')-\epsilon(\eta_j,x_*+Ru)+\epsilon(\eta_j,x_*+Ru'),y'\right\rangle\right|\\
&\le &\left[\mathsf{L}(\xi_j)+\mathsf{L}(\eta_j)+2L\right]R^\delta\left[\Vert y-y'\Vert+\Vert u-u'\Vert^\delta\right]\\
&\le &\left[\mathsf{L}(\xi_j)+\mathsf{L}(\eta_j)+2L\right]R^\delta2^{1-\delta}\left[\Vert y-y'\Vert^{\frac{1}{\delta}}+\Vert u-u'\Vert\right]^\delta\\
&\le &\left[\mathsf{L}(\xi_j)+\mathsf{L}(\eta_j)+2L\right]R^\delta2^{(1-\delta)}\left[\Vert y-y'\Vert+\Vert u-u'\Vert\right]^\delta,
\end{eqnarray*}
where we used concavity of $\mathbb{R}_+\ni x\mapsto x^\delta$ in the third inequality and the fact that $\Vert y-y'\Vert^{\frac{1}{\delta}}\le2^{\frac{(1-\delta)}{\delta}}\Vert y-y'\Vert$ for $y,y'\in\mathbb{B}$ in the last inequality. We take squares in the above inequality, use the relation $(\sum_{i=1}^3a_i)^2\le3\sum_{i=1}^3a_i^2$ and the definitions of $V$ and \eqref{equation:metric:process}. We thus obtain
\begin{eqnarray}
V&\le &\frac{3\cdot4^{1-\delta}R^{2\delta}\dist(t,t')^{2\delta}}{N}\left\{\sum_{j=1}^N\frac{\mathsf{L}(\xi_j)^2}{N}+\sum_{j=1}^N\frac{\mathbb{E}\left[\mathsf{L}(\eta_j)^2|\xi_1,\ldots,\xi_N\right]}{N}+4L^2\right\}\nonumber\\
&=&\frac{3\cdot4^{1-\delta}R^{2\delta}\dist(t,t')^{2\delta}W_N^2}{N},
\label{thm:moment:inequality:ez:eq2}
\end{eqnarray}
where we have defined
\begin{equation}\label{thm:moment:inequality:ez:WN}
W_N:=\sqrt{\frac{1}{N}\sum_{j=1}^N\mathsf{L}(\xi_j)^2+\Lnorm{\mathsf{L}(\xi)}^2+4L^2},
\end{equation}
and used that $\{\eta_j\}_{j\in[N]}$ is an i.i.d. sample of $\mathbf{P}$ independent of $\{\xi_j\}_{j\in[N]}$.
Set $\tilde Y:=\frac{\tilde Z_t-\tilde Z_{t'}}{W_N}-\frac{\sqrt{3}\mathsf{c}2^{1-\delta}R^{\delta}\dist(t,t')^{\delta}}{\sqrt{N}}$. Relations \eqref{thm:moment:inequality:ez:eq1}-\eqref{thm:moment:inequality:ez:eq2} and $\sum_{j=1}^Nf(\xi_j)=\tilde Z_t-\tilde Z_{t'}$, together with $\sqrt{1+\lambda}\le1+\sqrt{\lambda}$ for $\lambda>0$, imply that
\begin{equation*}
\mathbb{P}\left\{\tilde Y\ge\frac{\sqrt{3}\mathsf{c}2^{1-\delta}R^{\delta}\dist(t,t')^{\delta}}{\sqrt{N}}\sqrt{\lambda}\right\}\bigvee\mathbb{P}\left\{\tilde Y\le-\frac{\sqrt{3}\mathsf{c}2^{1-\delta}R^{\delta}\dist(t,t')^{\delta}}{\sqrt{N}}\sqrt{\lambda}\right\}\le e^{-\lambda}.
\end{equation*}
The above relation and Theorem \ref{thm:characterization:subgaussian} imply that for some universal constants $C_1,C_2>0$ and for all $\lambda>0$,
\begin{equation}\label{thm:moment:inequality:ez:eq4}
\ln\mathbb{E}\left[\exp\left\{\frac{(\tilde Z_t-\tilde Z_{t'})}{W_N}\lambda\right\}\right]\le\frac{C_12^{1-\delta}R^{\delta}\dist(t,t')^{\delta}}{\sqrt{N}}\lambda+\frac{C_2^24^{1-\delta}R^{2\delta}\dist(t,t')^{2\delta}}{2N}\lambda^2.
\end{equation}
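In more detail, the tail bound above states that $\tilde Y$ is sub-Gaussian in the sense of Theorem \ref{thm:characterization:subgaussian} with variance factor $v_0:=\frac{3\mathsf{c}^24^{1-\delta}R^{2\delta}\dist(t,t')^{2\delta}}{2N}$, since $\frac{\sqrt{3}\mathsf{c}2^{1-\delta}R^{\delta}\dist(t,t')^{\delta}}{\sqrt{N}}\sqrt{\lambda}=\sqrt{2v_0\lambda}$. Adding back the deterministic shift $\sqrt{2v_0}\,\lambda$ to the resulting bound on $\ln\mathbb{E}\left[e^{\lambda\tilde Y}\right]$ shows that one may take, for instance, $C_1:=\sqrt{3/2}\left(\sqrt{2}+\sqrt{\pi/2}\right)\mathsf{c}$ and $C_2:=2\sqrt{6}\,\mathsf{c}$ in \eqref{thm:moment:inequality:ez:eq4}.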
We now observe that \eqref{thm:moment:inequality:ez:eq4} holds for any $t,t'\in\mathcal{T}$. Inequality \eqref{thm:moment:inequality:ez:eq4} and Lemma \ref{lemma:lnorm:process} with $(\mathcal{T},\dist)$ as defined in \eqref{equation:metric:process}, the continuous process $\mathcal{T}\ni t\mapsto Z_t:=\frac{\tilde Z_t}{W_N}$, $t_0:=(0,0)$, $\theta:=\sup_{t\in\mathcal{T}}\dist(t,0)\le2$, $a:=\frac{C_1 2^{1-\delta}R^{\delta}}{\sqrt{N}}$ and $v:=\frac{C_2^2 4^{1-\delta}R^{2\delta}}{N}$ imply that
\begin{equation}\label{thm:moment:inequality:ez:eq4'}
\Lnorm{\sup_{t\in\mathcal{T}}Z_t}\le
\frac{\sqrt{2}C 2^{1-\delta}(6R)^\delta}{\sqrt{N}}\left[\frac{1}{2^\delta-1}+\sum_{i=1}^\infty\frac{\sqrt[4]{8H\left(2^{-i+1},\mathcal{T}\right)}+2\sqrt{H\left(2^{-i+1},\mathcal{T}\right)}}{2^{i\delta}}\right],
\end{equation}
where we defined $C=\sqrt{C_1^2+C_2^2}$ and used the fact that $Z_{t_0}=\frac{\tilde Z_{t_0}}{W_N}=0$. From Lemma \ref{lemma:entropy} and the fact that, for any $\theta>0$, $H(\theta,\mathbb{B}_X\times\mathbb{B})\le H(\theta,\mathbb{B}_X)+H(\theta,\mathbb{B})\le2H(\theta,\mathbb{B})$, we also have that
\begin{eqnarray}
\sum_{i=1}^\infty\frac{\sqrt[4]{8H\left(2^{-i+1},\mathcal{T}\right)}+2\sqrt{H\left(2^{-i+1},\mathcal{T}\right)}}{2^{i\delta}}&\lesssim &\sqrt{d}\sum_{i=1}^\infty\frac{\sqrt{\ln(1+2^{i+1})}}{2^{i\delta}}\nonumber\\
&\lesssim &\sqrt{d}\sum_{i=1}^\infty\frac{\sqrt{i+1}}{2^{i\delta}}\lesssim
\frac{\sqrt{d/\delta}}{2^{\frac{\delta}{2}}-1},
\end{eqnarray}
where we used the facts that $\ln(1+x)\le x$, $\sqrt{i+1}\le\frac{2^{\frac{i\delta}{2}}}{\sqrt{\delta}\ln 2}$ and\footnote{The previous fact can be derived from the inequality $2^x\ge1+(\ln2)x$.} $\sum_{i=1}^\infty2^{-\frac{i\delta}{2}}=\frac{1}{2^{\frac{\delta}{2}}-1}$.
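To spell out the second fact: the inequality $2^x\ge1+(\ln 2)x$ with $x=i\delta$ gives $2^{i\delta}\ge1+(\ln 2)i\delta\ge(\ln 2)\delta(i+1)$, using $(\ln 2)\delta\le1$; taking square roots and using $\sqrt{\ln 2}\ge\ln 2$ then yields $\sqrt{i+1}\le\frac{2^{\frac{i\delta}{2}}}{\sqrt{\delta}\ln 2}$.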
H\"older's inequality implies that
\begin{equation}\label{thm:moment:inequality:ez:eq5}
\mathbb{E}[Z]=\mathbb{E}\left[\sup_{t\in\mathcal{T}}|\tilde Z_t|\right]
=\mathbb{E}\left[\sup_{t\in\mathcal{T}}\left|Z_t\right|\cdot W_N\right]
\le\Lnorm{\sup_{t\in\mathcal{T}}\left|Z_t\right|}\cdot\Lnorm{W_N}.
\end{equation}
Since $\{\xi_j\}_{j\in[N]}$ is an i.i.d. sample from $\mathbf{P}$, we also obtain from \eqref{thm:moment:inequality:ez:WN} that $\Lnorm{W_N}\lesssim \Lnorm{\mathsf{L}(\xi)}+L=L_2$. Finally, this, relations \eqref{thm:moment:inequality:ez:eq4'}-\eqref{thm:moment:inequality:ez:eq5} and the facts that $2^{1-\delta} 6^\delta=2\cdot3^\delta$ and $2^{\delta}-1\ge2^{\frac{\delta}{2}}-1$ imply that
\begin{eqnarray}
\mathbb{E}[Z]\lesssim\frac{\sqrt{d}(3R)^\delta L_2}{\left(2^{\frac{\delta}{2}}-1\right)\sqrt{\delta N}}.\label{lemma:error:decay:empirical:process:EZ}
\end{eqnarray}
\textsf{\textbf{PART 2} (An upper bound on $M$ and $\widehat\sigma^2$)}: From the definition of $\widehat\sigma^2$ in Theorem \ref{thm:moment:emp:lugosi} and \eqref{def:empirical:process:xjt}, we get
\begin{eqnarray}\label{lemma:error:decay:empirical:process:sigma}
\widehat\sigma &=&\sqrt{\sup_{(u,y)\in\mathcal{T}}\frac{1}{N^2}\sum_{j=1}^N\mathbb{E}\left[\left\langle\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*),y\right\rangle^2\right]}\nonumber\\
&\le &\sqrt{\frac{1}{N}\sup_{(u,y)\in\mathcal{T}}\mathbb{E}\left[\sum_{j=1}^N\frac{(\mathsf{L}(\xi_j)+L)^2}{N}R^{2\delta}\Vert u\Vert^{2\delta}\Vert y\Vert^2\right]}\nonumber\\
&\le &\frac{R^{\delta}(\Lnorm{\mathsf{L}(\xi)}+L)}{\sqrt{N}},
\end{eqnarray}
where we used the fact that $\Vert\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*)\Vert\le[\mathsf{L}(\xi_j)+L]R^\delta$ for $u\in\mathbb{B}_X$ (Assumption \ref{assumption:holder:continuity} and Lemma \ref{lemma:holder:continuity:mean:std:dev}) in the first inequality and the fact that $\{\xi_j\}_{j\in[N]}$ is an i.i.d. sample of $\mathbf{P}$ in the last inequality.
From the definition of $M$ in Theorem \ref{thm:moment:emp:lugosi} and \eqref{def:empirical:process:xjt}, we get
\begin{eqnarray*}
\Lpnorm{M}^p&=&\mathbb{E}\left[\left(\max_{j\in[N]}\sup_{t\in\mathcal{T}}|X_{j,t}|\right)^p\right]
=\mathbb{E}\left[\max_{j\in[N]}\sup_{t\in\mathcal{T}}|X_{j,t}|^p\right]\nonumber\\
&\le &\frac{1}{N^p}\sum_{j=1}^N\mathbb{E}\left[\sup_{t\in\mathcal{T}}\left|\left\langle\epsilon(\xi_j,x_*+Ru)
-\epsilon(\xi_j,x_*),y\right\rangle\right|^p\right]\nonumber\\
&\le &\frac{1}{N^{p-1}}\sup_{(u,y)\in\mathcal{T}}\mathbb{E}\left[\sum_{j=1}^N\frac{(\mathsf{L}(\xi_j)+L)^p}{N}R^{p\delta}\Vert u\Vert^{p\delta}\Vert y\Vert^p\right]\nonumber\\
&\le &\frac{R^{p\delta}\Lpnorm{\mathsf{L}(\xi)+L}^p}{N^{p-1}},
\end{eqnarray*}
where, again, we used the fact that $\Vert\epsilon(\xi_j,x_*+Ru)-\epsilon(\xi_j,x_*)\Vert\le[\mathsf{L}(\xi_j)+L]R^\delta$ for $u\in\mathbb{B}_X$ in the second inequality and the fact that $\{\xi_j\}_{j\in[N]}$ is an i.i.d. sample of $\mathbf{P}$ in the last inequality. We take the $p$-th root in the above inequality and note that for $p\ge2$ we have $N^{\frac{p-1}{p}}\ge \sqrt{N}$, obtaining
\begin{equation}\label{lemma:error:decay:empirical:process:m}
\Lpnorm{M}\le\frac{(\Lpnorm{\mathsf{L}(\xi)}+L) R^{\delta}}{\sqrt{N}}.
\end{equation}
From \eqref{lemma:error:decay:empirical:process:EZ}-\eqref{lemma:error:decay:empirical:process:m} and definitions of $L_2$ and $L_p$ in Assumption \ref{assumption:holder:continuity}, we obtain the required claim.
\end{proof}
\subsection{The proof of Theorem \ref{thm:variance:error:with:line:search}}\label{section:proof:theorem}
With the theory developed in Sections \ref{section:L2:norm}-\ref{section:Lp:norm}, we are now ready to prove Theorem \ref{thm:variance:error:with:line:search}. We shall use Lemma \ref{lemma:error:decay:empirical:process} and follow the ideas of items (i)-(iii) presented in the introduction of Section \ref{section:empirical:process:theory:DSSA}. We will also need Lemma \ref{lemma:decay:empirical:error} below, which controls the oracle's empirical error. This error is easier to control than the oracle's correlated error, since it defines a martingale difference sequence. The proof of that lemma uses Assumption \ref{assumption:holder:continuity} and a version of the Burkholder-Davis-Gundy inequality in Hilbert spaces (see \cite{burkholder:davis:gundy1972,marinelli:rockner2016}).
\begin{theorem}[Burkholder-Davis-Gundy inequality in $\mathbb{R}^d$]\label{thm:BDG}
Let $\Vert\cdot\Vert$ be the Euclidean norm in $\mathbb{R}^d$. Then, for all $q\ge2$, there exists $C_q>0$ such that for any vector-valued martingale $\{y_j\}_{j=0}^N$ adapted to a filtration $\{\mathcal{G}_j\}_{j=0}^N$ with $y_0=0$, it holds that
$$
\Lqnorm{\sup_{j\leq N}\Vert y_j\Vert}\leq C_q\,\Lqnorm{\sqrt{\sum_{j=1}^N\Vert y_j-y_{j-1}\Vert^2}}\le C_q\,\sqrt{\sum_{j=1}^N\,\Lqnorm{\Vert y_j-y_{j-1}\Vert}^2}.
$$
\end{theorem}
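For completeness, the second inequality in Theorem \ref{thm:BDG} is the triangle inequality for the $\mathcal{L}^{q/2}$-norm (valid since $q\ge2$): writing $d_j:=y_j-y_{j-1}$,
$$
\Lqnorm{\sqrt{\sum_{j=1}^N\Vert d_j\Vert^2}}^2=\left(\mathbb{E}\left[\Big(\sum_{j=1}^N\Vert d_j\Vert^2\Big)^{\frac{q}{2}}\right]\right)^{\frac{2}{q}}\le\sum_{j=1}^N\left(\mathbb{E}\left[\Vert d_j\Vert^{q}\right]\right)^{\frac{2}{q}}=\sum_{j=1}^N\Lqnorm{\Vert d_j\Vert}^2.
$$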
\begin{lemma}[Local bound for the $\mathcal{L}^q$-norm of the empirical error]\label{lemma:decay:empirical:error}
Consider definition \eqref{equation:expected:valued:objective} and let $\xi^N:=\{\xi_j\}_{j=1}^N$ be an i.i.d. sample from $\mathbf{P}$. Suppose that Assumption \ref{assumption:holder:continuity} holds and take $q\in[p,2p]$ such that the integrability conditions of Assumption \ref{assumption:holder:continuity} are satisfied. Recall definitions in \eqref{equation:oracle:error}-\eqref{equation:empirical:mean:operator:&:error} and definition of $C_q$ in Theorem \ref{thm:BDG}. Set $C_2:=1$ if $q=p=2$. Then, for any $x,x_*\in X$,
$$
\Lqnorm{\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert}\le C_q\frac{\sigma_q(x_*)
+L_q\Vert x-x_*\Vert^\delta}{\sqrt{N}}.
$$
\end{lemma}
\begin{proof}
We define the $\mathbb{R}^d$-valued process $\{y_t\}_{t=0}^N$ by $y_0=0$ and $y_t:=\sum_{j=1}^t\frac{\epsilon(\xi_j,x)}{N}$ for $t\in[N]$ and the filtration $\mathcal{G}_t:=\sigma(y_0,\ldots,y_t)$ for $t\in\{0\}\cup[N]$. Since $\{\xi_j\}_{j=1}^N$ is an i.i.d. sample of $\mathbf{P}$, $\{y_t,\mathcal{G}_t\}_{t=0}^N$ is a $\mathbb{R}^d$-valued martingale whose increments satisfy
\begin{equation*}
\Lqnorm{\Vert y_t-y_{t-1}\Vert} = \Lqnorm{\frac{\Vert\epsilon(\xi,x)\Vert}{N}}\le
\frac{\Lqnorm{\Vert\epsilon(\xi,x_*)\Vert}+L_q\Vert x-x_*\Vert^\delta}{N},
\end{equation*}
using that $\Lqnorm{\Vert\epsilon(\xi,\cdot)\Vert}$ is H\"older continuous with modulus $L_q=\Lqnorm{\mathsf{L}(\xi)}+L$ and exponent $\delta$ (Lemma \ref{lemma:holder:continuity:mean:std:dev}) in the inequality. The required claim follows from the above relation, Theorem \ref{thm:BDG} and $\widehat\epsilon(\xi^N,x)=y_N$. We note that if $q=2$, then the linearity of the expectation, the Pythagorean identity (valid for the Euclidean norm) and independence imply the sharper equality $\Lnorm{\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert}=\frac{\Lnorm{\Vert\epsilon(\xi,x)\Vert}}{\sqrt{N}}$. This fact and Lemma \ref{lemma:holder:continuity:mean:std:dev} imply the claim of the lemma with $C_2=1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:variance:error:with:line:search}]
We fix $x\in X$ and $x^*\in X^*$ as stated in the theorem and set $z^N:=z(\xi^N;\alpha_N,x)$ and $\overline z^N:=\overline z(\xi^N;\alpha_N,x)$. In the following, we only give a proof for $\widehat\epsilon(\xi^N,z^N)$. The proof for $\widehat\epsilon(\xi^N,\overline z^N)$ requires only minor changes. For reasons to be shown in the following, it will be convenient to define
$\Delta(x,x^*):=\Vert x-x^*\Vert\vee\Vert x-x^*\Vert^\delta$ and, for any $s>0$, $\mathsf{R}(s):=(1+L\hat\alpha)\Delta(x,x^*)+\hat\alpha s$ and the ball $\mathbb{B}(s):=\mathbb{B}[x^*,\mathsf{R}(s)]$.
Example 14.29 of \cite{rockafellar:wets1998} and Assumption \ref{assumption:holder:continuity} imply that the map $\Xi\times X\ni(\omega,x)\mapsto\Vert\widehat\epsilon(\xi^N(\omega),x)\Vert$ is a \emph{normal integrand}, that is,
$$
\omega\mapsto\epi\left\Vert\widehat\epsilon(\xi^N(\omega),\cdot)\right\Vert:=\{(x,y)\in X\times\mathbb{R}:\left\Vert\widehat\epsilon(\xi^N(\omega),x)\right\Vert\le y\}
$$
is a set-valued measurable function. This fact and Theorem 14.37 in \cite{rockafellar:wets1998} imply further that, for any measurable function $\epsilon:\Omega\rightarrow[0,\infty)$ and $R>0$,
\begin{eqnarray}
\omega\mapsto\sup_{x'\in \mathbb{B}(\epsilon(\omega))\cap X}\left\Vert\widehat\epsilon(\xi^N(\omega),x')\right\Vert \quad\mbox{ and }\quad
\omega\mapsto\sup_{x'\in \mathbb{B}[x^*,R]\cap X}\left\Vert\widehat\epsilon(\xi^N(\omega),x')\right\Vert\label{equation:measurability:issue}
\end{eqnarray}
are measurable functions.
We first prove item (ii) for the easier case when $X$ is compact. We set $R:=\diam(X)$ and note that $z^N\in\mathbb{B}[x^*,R]\cap X$. This and \eqref{equation:measurability:issue} imply that
\begin{eqnarray*}
\Lpnorm{\Vert\widehat\epsilon(\xi^N,z^N)\Vert}&\le &\Lpnorm{\sup_{x'\in\mathbb{B}[x^*,R]\cap X}\Vert\widehat\epsilon(\xi^N,x')\Vert}\nonumber\\
&\le &\Lpnorm{\sup_{x'\in\mathbb{B}[x^*,R]\cap X}\Vert\widehat\epsilon(\xi^N,x')-\widehat\epsilon(\xi^N,x^*)\Vert}+\Lpnorm{\Vert\widehat\epsilon(\xi^N,x^*)\Vert}\nonumber\\
&\le &c\left[\frac{3^\delta\sqrt{d}L_2}{\sqrt{\delta}\left(\sqrt{2}^\delta-1\right)}+\sqrt{p}L_2+pL_p\right]\frac{\diam(X)^\delta}{\sqrt{N}}+\frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}},
\end{eqnarray*}
for some universal constant $c>0$, where we used Lemmas \ref{lemma:error:decay:empirical:process} and \ref{lemma:decay:empirical:error} with $q=p$ in the last inequality. The above inequality and definition \eqref{equation:oracle:error:variance} prove item (ii).
We now prove item (i) in the case $X$ may be unbounded. Given $\alpha\in[0,\hat\alpha]$, Lemma \ref{lemma:proj}(iv) implies that $x^*=\Pi[x^*-\alpha T(x^*)]$. Taking into account this fact, Lemma \ref{lemma:proj}(iii) and definitions of $z(\xi^N;\alpha,x)$, \eqref{equation:oracle:error} and \eqref{equation:empirical:mean:operator:&:error}, we get that, for any $\alpha\in[0,\hat\alpha]$,
\begin{eqnarray}
\left\|x^*-z(\xi^N;\alpha,x)\right\|&=&\,\left\Vert\Pi\left[x^*-\alpha T(x^*)\right]- \Pi\left[x-\alpha\left(T(x)+\widehat\epsilon(\xi^N,x)\right)\right]\right\Vert\nonumber\\
&\leq & \,\|x^*-x\|+\alpha\| T(x)-T(x^*)\| + \alpha\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert\nonumber\\
&\leq & (1+L\hat\alpha)\left[\Vert x-x^*\Vert\vee\Vert x-x^*\Vert^\delta\right]+\hat\alpha\,\left\|\widehat\epsilon(\xi^N,x)\right\|,\label{lemma:error:decay:emp:eq1}
\end{eqnarray}
where, in the last inequality, we used H\"older continuity of $T$ (Lemma \ref{lemma:holder:continuity:mean:std:dev}).
In the sequel we define the quantities
\begin{equation}\label{lemma:error:decay:emp:s*:eN}
s_*:=L_{2p}\Delta(x,x^*)\quad\mbox{ and }\quad\epsilon_N:=\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert.
\end{equation}
Setting $\alpha:=\alpha_N$ in \eqref{lemma:error:decay:emp:eq1}, we have that\footnote{Note that from $\alpha_N\in[0,1]$ and convexity of $X$ and $\mathbb{B}(\epsilon_N)$, we also have that $\overline z^N\in \mathbb{B}(\epsilon_N)\cap X$.} $z^N\in \mathbb{B}(\epsilon_N)\cap X$. We now make the following decomposition
\begin{eqnarray}
\Lpnorm{\left\Vert\widehat\epsilon(\xi^N,z^N)\right\Vert}\le I_1+I_2,\label{lemma:error:decay:emp:eq2}
\end{eqnarray}
using the definitions
\begin{eqnarray*}
I_1:=\Lpnorm{\left\Vert\widehat\epsilon(\xi^N,z^N)\right\Vert\mathsf{1}_{\{\epsilon_N\le s_*\}}}\quad\mbox{ and }\quad
I_2:=\Lpnorm{\left\Vert\widehat\epsilon(\xi^N,z^N)\right\Vert\mathsf{1}_{\{\epsilon_N>s_*\}}}.
\end{eqnarray*}
\textsf{\textbf{PART 1} (Upper bound on $I_1$):} From the fact that $z^N\in\mathbb{B}(\epsilon_N)\cap X$ and \eqref{equation:measurability:issue}, we may bound $I_1$ by
\begin{eqnarray*}
I_1&=&\Lpnorm{\Vert\widehat\epsilon(\xi^N,z^N)\Vert\mathsf{1}_{\{\epsilon_N\le s_*\}}}\nonumber\\
&\le &\Lpnorm{\sup_{x'\in \mathbb{B}(s_*)\cap X}\Vert\widehat\epsilon(\xi^N,x')\Vert}\nonumber\\
&\le &\Lpnorm{\sup_{x'\in \mathbb{B}(s_*)\cap X}\Vert\widehat\epsilon(\xi^N,x')-\widehat\epsilon(\xi^N,x^*)\Vert}+\Lpnorm{\Vert\widehat\epsilon(\xi^N,x^*)\Vert}\nonumber\\
&\le &c\left[\frac{3^\delta\sqrt{d}L_2}{\sqrt{\delta}\left(\sqrt{2}^\delta-1\right)}+\sqrt{p}L_2+pL_p\right]\frac{\mathsf{R}(s_*)^\delta}{\sqrt{N}}+\frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}},
\end{eqnarray*}
where we used Lemmas \ref{lemma:error:decay:empirical:process} and \ref{lemma:decay:empirical:error} with $q=p$ in the last inequality. Using the fact that $\mathsf{R}(s_*)=\left(1+L\hat\alpha+L_{2p}\hat\alpha\right)\Delta(x,x^*)$ and setting $c_\delta:=\frac{c3^\delta}{\sqrt{\delta}(\sqrt{2}^\delta-1)}$, we get from the above chain of inequalities that
\begin{equation}\label{lemma:error:decay:emp:i1:eq3}
I_1\le\left[\left(c_\delta\sqrt{d}+c\sqrt{p}\right)L_2+cpL_p\right]C_{\mathsf{L}\hat\alpha,p}^\delta\frac{\Delta(x,x^*)^\delta}{\sqrt{N}}+\frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}},
\end{equation}
with $C_{\mathsf{L}\hat\alpha,p}:=1+L\hat\alpha+L_{2p}\hat\alpha$.
\textsf{\textbf{PART 2} (Upper bound on $I_2$):} Defining $\widehat L_N:=N^{-1}\sum_{j=1}^N\mathsf{L}(\xi_j)$, we note that
\begin{eqnarray*}
\left\Vert\widehat\epsilon(\xi^N,z^N)\right\Vert &\le &\left\Vert\widehat\epsilon(\xi^N,z^N)-\widehat\epsilon(\xi^N,x^*)\right\Vert+\left\Vert\widehat\epsilon(\xi^N,x^*)\right\Vert\nonumber\\
&\le &\left\Vert\frac{1}{N}\sum_{j=1}^N\left[F(\xi_j,z^N)-F(\xi_j,x^*)\right]\right\Vert+
\left\Vert T(z^N)-T(x^*)\right\Vert+\left\Vert\widehat\epsilon(\xi^N,x^*)\right\Vert\nonumber\\
&\le &\left(\widehat L_N+L\right)\left\Vert z^N-x^*\right\Vert^\delta+\left\Vert\widehat\epsilon(\xi^N,x^*)\right\Vert\nonumber\\
&\le &\left(\widehat L_N+L\right)(1+L\hat\alpha)\Delta(x,x^*)+\hat\alpha\left(\widehat L_N+L\right)\epsilon_N+\epsilon_N^*,
\end{eqnarray*}
using Assumption \ref{assumption:holder:continuity} and Lemma \ref{lemma:holder:continuity:mean:std:dev} in the third inequality and
\eqref{lemma:error:decay:emp:eq1} with $\alpha:=\alpha_N$, \eqref{lemma:error:decay:emp:s*:eN} and the definition $\epsilon_N^*:=\left\Vert\widehat\epsilon(\xi^N,x^*)\right\Vert$ in the last inequality. The inequality above and definition of $I_2$ imply that
\begin{eqnarray}
I_2&=&\Lpnorm{\left\Vert\widehat\epsilon(\xi^N,z^N)\right\Vert\mathsf{1}_{\{\epsilon_N>s_*\}}}\nonumber\\
&\le &(1+L\hat\alpha)\Delta(x,x^*)\Lpnorm{\left(\widehat L_N+L\right)\mathsf{1}_{\{\epsilon_N>s_*\}}}%
+\hat\alpha\Lpnorm{\left(\widehat L_N+L\right)\epsilon_N}
+\Lpnorm{\epsilon_N^*}\nonumber\\
&\le & (1+L\hat\alpha)\Delta(x,x^*)\Ldpnorm{\widehat L_N+L}\Ldpnorm{\mathsf{1}_{\{\epsilon_N>s_*\}}}
+\hat\alpha\Ldpnorm{\widehat L_N+L}\Ldpnorm{\epsilon_N}
+\Lpnorm{\epsilon_N^*},\label{lemma:error:decay:emp:eq4}
\end{eqnarray}
where we used H\"older's inequality.
With respect to the last term in the rightmost expression of \eqref{lemma:error:decay:emp:eq4},
we have, in view of Lemma \ref{lemma:decay:empirical:error} with $q=p$,
\begin{eqnarray}
\Lpnorm{\epsilon_N^*}=\Lpnorm{\Vert\widehat\epsilon(\xi^N,x^*)\Vert}\le \frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}}.\label{lemma:error:decay:emp:eq5}
\end{eqnarray}
Concerning the second term in the rightmost expression of \eqref{lemma:error:decay:emp:eq4},
Lemma \ref{lemma:decay:empirical:error} with $q=2p$ implies that
\begin{equation}
\Ldpnorm{\epsilon_N}=\Ldpnorm{\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert}\le C_{2p}\frac{\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}+L_{2p}\Vert x-x^*\Vert^\delta}{\sqrt{N}}.
\label{lemma:error:decay:emp:eq6}
\end{equation}
From Markov's inequality and \eqref{lemma:error:decay:emp:eq6} we obtain
\begin{eqnarray}
\Ldpnorm{\mathsf{1}_{\{\epsilon_N>s_*\}}}&=&\sqrt[2p]{\mathbb{E}\left[\mathsf{1}_{\left\{\epsilon_N>s_*\right\}}\right]}=\sqrt[2p]{\mathbb{P}\left(\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert>s_*\right)}\nonumber\\
&\le &\sqrt[2p]{\frac{\mathbb{E}\left[\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert^{2p}\right]}{s_*^{2p}}}\nonumber\\
&=&\frac{\Ldpnorm{\left\Vert\widehat\epsilon(\xi^N,x)\right\Vert}}{s_*}\nonumber\\
&\le &C_{2p}\frac{\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}+L_{2p}\Vert x-x^*\Vert^\delta}{s_*\sqrt{N}}.\label{lemma:error:decay:emp:eq7}
\end{eqnarray}
The convexity of $t\mapsto t^{2p}$ and the fact that $\{\xi_j\}_{j\in[N]}$ is an i.i.d. sample of $\mathbf{P}$ imply that $\Ldpnorm{\widehat L_N+L}\le\Ldpnorm{\mathsf{L}(\xi)}+L=L_{2p}$. Using this fact and putting together relations \eqref{lemma:error:decay:emp:eq4}-\eqref{lemma:error:decay:emp:eq7} we get
\begin{eqnarray}
I_2&\le & (1+L\hat\alpha)\frac{\Delta(x,x^*)L_{2p}}{s_*}
C_{2p}\frac{\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}+L_{2p}\Vert x-x^*\Vert^\delta}{\sqrt{N}}
\nonumber\\
&&+L_{2p}\hat\alpha C_{2p}\frac{\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}+L_{2p}\Vert x-x^*\Vert^\delta}{\sqrt{N}}
+\frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}}\nonumber\\
&=&C_{2p}\left(1+L\hat\alpha+L_{2p}\hat\alpha\right)\frac{\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}}+\frac{C_{p}\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}}{\sqrt{N}}\nonumber\\
&&+C_{2p}\left(1+L\hat\alpha+L_{2p}\hat\alpha\right)\frac{L_{2p}\Vert x-x^*\Vert^\delta}{\sqrt{N}},\label{lemma:error:decay:emp:eq8}
\end{eqnarray}
where we used the fact that\footnote{Note that $[\Delta(x,x^*)\Vert x-x^*\Vert^\delta]^2\lesssim\Vert x-x^*\Vert^{4\delta}$ with $4\delta>2$ in the Lipschitz continuous case. The geometry of projection methods implies the derivation of a recursion in terms of $\{\Vert x^k-x^*\Vert^2\}$. It is then crucial for the convergence analysis that follows that we can choose a $s_*$ that balances the bounds $\mathsf{R}(s_*)^\delta\lesssim\Vert x-x^*\Vert^{\beta_1}$ in $I_1$ and $\frac{\Delta(x,x^*)}{s_*}\Vert x-x^*\Vert^\delta\lesssim\Vert x-x^*\Vert^{\beta_2}$ in $I_2$ with $\beta_1,\beta_2\in(0,1]$.} $s_*=L_{2p}\Delta(x,x^*)$.
Relations \eqref{lemma:error:decay:emp:eq2}-\eqref{lemma:error:decay:emp:i1:eq3} and \eqref{lemma:error:decay:emp:eq8}, definition \eqref{equation:oracle:error:variance} and the facts that $\Delta(x,x^*)^\delta\le\delta_1\vee\Vert x-x^*\Vert^\delta$ and $\Lpnorm{\Vert\epsilon(\xi,x^*)\Vert}\le\Ldpnorm{\Vert\epsilon(\xi,x^*)\Vert}$ prove item (i).
\end{proof}
\begin{remark}[Constants]\label{rem:constants:thm:correlated:error}
In Theorem \ref{thm:variance:error:with:line:search}, the constants satisfy
\begin{eqnarray*}
&&\mathsf{c}_1:=2C_p+C_{2p}C_{\mathsf{L}\hat\alpha,p},\quad\quad \mathsf{c}_3\lesssim pC_{\mathsf{L}\hat\alpha,p}^\delta,\quad\quad\mathsf{c}_4:= C_{2p} C_{\mathsf{L}\hat\alpha,p},\\
&&\mathsf{c}_2\lesssim\left[\frac{3^\delta\sqrt{d}}{\sqrt{\delta}\left(\sqrt{2}^\delta-1\right)}+\sqrt{p}\right]C_{\mathsf{L}\hat\alpha,p}^\delta,\quad\quad
\mathsf{d}_2\lesssim \left[\frac{3^\delta\sqrt{d}}{\sqrt{\delta}\left(\sqrt{2}^\delta-1\right)}+\sqrt{p}\right],
\end{eqnarray*}
where $C_{\mathsf{L}\hat\alpha,p}:=1+2L\hat\alpha+\Ldpnorm{\mathsf{L}(\xi)}\hat\alpha$ and $C_p$ and $C_{2p}$ are defined in Lemma \ref{lemma:decay:empirical:error}.
\end{remark}
\section{Analysis of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} for Lipschitz continuous operators}\label{section:algorithm:extragradient:DSSA}
We next state the additional assumptions needed for the convergence analysis of our algorithms. In this section we always assume that $\delta=1$ in Assumption \ref{assumption:holder:continuity}; for brevity, we will not mention this again.
\begin{assumption}[Consistency]\label{assump:existence}
The solution set $X^*$ of VI$(T,X)$ is non-empty.
\end{assumption}
\begin{assumption}[Pseudo-monotonicity]\label{assump:monotonicity}
We assume that $T:X\rightarrow\mathbb{R}^d$ as defined in \eqref{equation:expected:valued:objective} is pseudo-monotone\footnote{Pseudo-monotonicity generalizes monotonicity: $\langle T(z),z-x\rangle\ge\langle T(x),z-x\rangle$ for all $x,z\in X$. Recall that the gradient of a smooth convex function is monotone and the gradient of a quotient of a positive smooth convex function with a positive smooth concave function is pseudo-monotone.}: for all $z,x\in X$,
$
\langle T(x),z-x\rangle\ge 0\Longrightarrow\langle T(z),z-x\rangle\ge 0.
$
\end{assumption}
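As a quick numerical illustration of the footnote above (not part of the analysis; the affine operator, dimension and sample size are arbitrary choices), the following sketch checks on random pairs that a monotone operator $T(x)=Ax$ with $A$ positive semidefinite satisfies the pseudo-monotonicity implication:

```python
import numpy as np

# Numerical illustration (not part of the analysis) of the footnote above:
# a monotone operator T(x) = A x with A positive semidefinite is in
# particular pseudo-monotone, i.e. <T(x), z - x> >= 0 implies
# <T(z), z - x> >= 0.  The matrix and sample size are arbitrary choices.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T  # positive semidefinite => T is monotone

def T(x):
    return A @ x

violations = 0
for _ in range(10_000):
    x, z = rng.standard_normal(3), rng.standard_normal(3)
    if T(x) @ (z - x) >= 0 and T(z) @ (z - x) < -1e-9:
        violations += 1

print(violations)  # 0
```

No violation can occur here since $\langle T(z),z-x\rangle=\langle T(x),z-x\rangle+(z-x)^\top A(z-x)\ge\langle T(x),z-x\rangle$.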
\begin{assumption}[I.I.D. sampling]\label{assump:iid:sampling}
In \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}, the sequences $\{\xi^k_j:k\in\mathbb{N}_0,j\in[N_k]\}$ and $\{\eta^k_j:k\in\mathbb{N}_0,j\in[N_k]\}$ are i.i.d. samples drawn from $\mathbf{P}$ independent of each other. Moreover, $\sum_{k=0}^\infty N_k^{-1}<\infty$.
\end{assumption}
Concerning \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}, we shall study the stochastic process $\{x^k\}$ with respect to the filtrations
$$
\mathcal{F}_k=\sigma(x^0,\xi^0,\ldots,\xi^{k-1},
\eta^0,\ldots,\eta^{k-1}),\quad
\widehat\mathcal{F}_k=\sigma(x^0,\xi^0,\ldots,\xi^k,
\eta^0,\ldots,\eta^{k-1}).
$$
Recalling \eqref{equation:oracle:error}, \eqref{equation:empirical:mean:operator:&:error} and \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}, we will define the following oracle errors:
\begin{eqnarray}
\epsilon^k_1:=\widehat\epsilon(\xi^k,x^k),\quad\quad\quad\epsilon^k_2:=\widehat\epsilon(\eta^k,z^k),\quad\quad\quad\epsilon^k_3:=\widehat\epsilon(\xi^k,z^k). \label{equation:oracle:errors:DSSA:extragradient}
\end{eqnarray}
Their relations to $\mathcal{F}_k$ and $\widehat\mathcal{F}_k$ will be essential in the convergence analysis. By definition of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} and Assumption \ref{assump:iid:sampling}, we have that $z^k\in\widehat{\mathcal{F}}_k$ and $\eta^k\perp\perp\widehat{\mathcal{F}}_k$. These facts imply that, with respect to the \emph{sampling} time scale, the process $[N_k]\ni t\mapsto N_k^{-1}\sum_{j=1}^{t}\epsilon(\eta^k_j,z^k)$ defines a martingale difference adapted to the filtration $\sigma(\widehat{\mathcal{F}}_k,\eta^k_1,\ldots,\eta^k_t)$ with final element $\epsilon^k_2$. With respect to the \emph{iteration} time scale, the same facts imply that the process $k\mapsto\epsilon^k_2$ defines a martingale difference adapted to the filtration $\widehat{\mathcal{F}}_{k+1}$. Similar observations hold for the processes $[N_k]\ni t\mapsto N_k^{-1}\sum_{j=1}^{t}\epsilon(\xi^k_j,x^k)$ and $k\mapsto\epsilon^k_1$ using the facts that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$. The line search scheme \eqref{algo:armijo:rule} introduces the error $\epsilon^k_3$. It does not have the previous martingale-like properties above due to the coupling between $\xi^k$ and $z^k$. This will be resolved by applying Theorem \ref{thm:variance:error:with:line:search} with $\xi^N:=\xi^k$, $x:=x^k$ and $\alpha_N:=\alpha_k$, noting that $z^k=z(\xi^k;\alpha_k,x^k)$ and $x^k\in\mathcal{F}_k$. It is also important to note that the stepsize $\alpha_k$ is a random variable satisfying $\alpha_k\notin\mathcal{F}_k$ and $\alpha_k\in\widehat{\mathcal{F}}_k$.
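For concreteness, the following is a minimal, hedged sketch of one way to implement the iteration just described; the toy oracle $F(\xi,x)=x+\xi$ (so that $T(x)=x$ and $X^*=\{0\}$ on $X=\mathbb{R}^d_+$), the batch schedule and all constants are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Hedged sketch (illustration only, with toy choices) of the iteration
# described above: at iteration k, draw two independent batches xi^k and
# eta^k of size N_k, run the backtracking line search on the xi^k batch,
# then take the extragradient step on the eta^k batch.
# Toy oracle: F(xi, x) = x + xi with E[xi] = 0, so T(x) = x and, on
# X = R^d_+, the unique solution is x* = 0.
rng = np.random.default_rng(2)
d = 3

def F_hat(xi_batch, x):
    # empirical mean operator: N_k^{-1} * sum_j F(xi_j, x)
    return x + xi_batch.mean(axis=0)

proj = lambda y: np.maximum(y, 0.0)  # projection onto X = R^d_+

x = np.array([2.0, 1.0, 3.0])
alpha_hat, lam, theta = 1.0, 0.4, 0.5
for k in range(30):
    N_k = 10 * (k + 2)                      # growing batches: sum 1/N_k < inf
    xi = rng.standard_normal((N_k, d))
    eta = rng.standard_normal((N_k, d))
    Fx = F_hat(xi, x)
    alpha = alpha_hat                       # DS-SA line search on the xi-batch
    while True:
        z = proj(x - alpha * Fx)
        if alpha * np.linalg.norm(F_hat(xi, z) - Fx) <= lam * np.linalg.norm(z - x):
            break
        alpha *= theta
    x = proj(x - alpha * F_hat(eta, z))     # extragradient step on eta-batch

print(np.linalg.norm(x))  # iterates approach x* = 0
```

Note that the line search and the step on $z^k$ reuse the same batch $\xi^k$ (the correlated error $\epsilon^k_3$), while the final step uses the independent batch $\eta^k$ (the martingale-difference error $\epsilon^k_2$).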
\subsection{Convergence analysis}
We first show that the line search \eqref{algo:armijo:rule} in \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} is well defined.
\begin{lemma}[Good definition of the line search]
Consider Assumption \ref{assumption:holder:continuity}. Then the line search \eqref{algo:armijo:rule} in \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}} terminates after a finite number $\ell_k$ of iterations.
\end{lemma}
\begin{proof}
Set $\gamma_\ell:=\theta^{-\ell}\hat\alpha$ and $H_k:=\widehat F(\xi^k,\cdot)$. Assuming by contradiction that the line search \eqref{algo:armijo:rule} does not terminate after a finite number of iterations, for every $\ell\in\mathbb{N}_0$,
\begin{equation*}
\left\Vert\widehat F\left(\xi^k,z^k(\gamma_\ell)\right)-\widehat F\left(\xi^k,x^k\right)\right\Vert>\lambda \frac{r_{\gamma_\ell}(H_k;x^k)}{\gamma_\ell}\ge\lambda\cdot r(H_k;x^k),
\end{equation*}
using definition of $r_{\alpha}(H_k;\cdot)$ in Section \ref{section:preliminaries}, the fact that $\gamma_\ell\in(0,1]$ and Lemma \ref{lemma:residual:decrease} in the last inequality. The contradiction follows by letting $\ell\rightarrow\infty$ in the above inequality and invoking the continuity of $\widehat F(\xi^k,\cdot)$, resulting from Assumption \ref{assumption:holder:continuity}, the fact that $\lim_{\ell\rightarrow\infty}z^k(\gamma_\ell)=x^k$, which follows from the continuity of $\Pi$, and the fact that $r(H_k;x^k)>0$, which follows from the definition of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}.
\end{proof}
The next lemma shows that the DS-SA line search scheme \eqref{algo:armijo:rule} either chooses the initial stepsize $\hat\alpha$ or it is an \emph{unbiased} stochastic oracle for a \emph{lower bound} of the Lipschitz constant $L=\mathbb{E}[\mathsf{L}(\xi)]$ (using the \emph{same samples} generated by the operator's stochastic oracle). Precisely, if $\hat\alpha$ is not chosen, then $\mathcal{O}(\alpha_k^{-1})$ (with an explicit constant) is a.s. a lower bound for
$
\widehat L_k:=\frac{1}{N_k}\sum_{j=1}^{N_k}\mathsf{L}(\xi^k_j).
$
\begin{lemma}[Unbiased lower estimation of the Lipschitz constant]\label{lemma:step:lower}
Consider Assumptions \ref{assumption:holder:continuity} and \ref{assump:iid:sampling} and define $\widehat L_k:=\frac{1}{N_k}\sum_{j=1}^{N_k}\mathsf{L}(\xi^k_j)$. Then, if \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}} does not stop at iteration $k+1$, a.s.
$
\alpha_k\ge\left(\frac{\lambda\theta}{\widehat L_k}\right)\wedge\hat\alpha.
$
Moreover,
$
\Lnorm{\alpha_k\big|\mathcal{F}_k}\cdot\Lnorm{\mathsf{L}(\xi)}\ge(\lambda\theta)\wedge\hat\alpha.
$
\end{lemma}
\begin{proof}
If $\hat\alpha$ satisfies \eqref{algo:armijo:rule}, then $\alpha_k=\hat\alpha$. Otherwise, we have
\begin{equation}\label{lemma:step:lower:eq1}
\theta^{-1}\alpha_k\left\Vert\widehat F\left(\xi^k,z^k(\theta^{-1}\alpha_k)\right)-
\widehat F(\xi^k,x^k)\right\Vert>\lambda\left\Vert z^k\left(\theta^{-1}\alpha_k\right)-x^k\right\Vert.
\end{equation}
Assumption \ref{assumption:holder:continuity} and definition of $\widehat F(\xi^k,\cdot)$ in \eqref{equation:empirical:mean:operator:&:error} imply that
\begin{equation}\label{lemma:step:lower:eq2}
\left\Vert\widehat F\left(\xi^k,z^k(\theta^{-1}\alpha_k)\right)-\widehat F(\xi^k,x^k)\right\Vert\le
\widehat L_k\left\Vert z^k\left(\theta^{-1}\alpha_k\right)-x^k\right\Vert.
\end{equation}
The fact that
$z^k\left(\theta^{-1}\alpha_k\right)\neq x^k$ (since the method did not stop at iteration $k+1$) and \eqref{lemma:step:lower:eq1}-\eqref{lemma:step:lower:eq2} imply that $\alpha_k\ge\frac{\lambda\theta}{\widehat L_k}$. We have thus proved the first statement.
Since a.s. $\mathsf{L}(\xi)\ge1$, we also have a.s. $\widehat{L}_k\alpha_k\ge(\lambda\theta)\wedge\hat\alpha$. The second statement follows from this fact and
\begin{eqnarray*}
(\lambda\theta)\wedge\hat\alpha &\le &\mathbb{E}\left[\alpha_k\widehat L_k\Big|\mathcal{F}_k\right]\\
&\le & \Lnorm{\alpha_k\big|\mathcal{F}_k}\cdot\Lnorm{\widehat L_k\Big|\mathcal{F}_k}\\
&=&\Lnorm{\alpha_k\big|\mathcal{F}_k}\sqrt{\mathbb{E}\left[\left(\frac{1}{N_k}\sum_{j=1}^{N_k}\mathsf{L}(\xi^k_j)\right)^2\Bigg|\mathcal{F}_k\right]}\\
&\le &\Lnorm{\alpha_k\big|\mathcal{F}_k}\sqrt{\frac{1}{N_k}\sum_{j=1}^{N_k}\mathbb{E}\left[\mathsf{L}(\xi^k_j)^2\Big|\mathcal{F}_k\right]}=\Lnorm{\alpha_k\big|\mathcal{F}_k}\cdot\Lnorm{\mathsf{L}(\xi)},
\end{eqnarray*}
using H\"older's inequality
in the second inequality,
the convexity of $t\mapsto t^2$
in the third inequality
and the fact
that $\xi^k$ is an i.i.d. sample of $\mathbf{P}$ with $\xi^k\perp\perp\mathcal{F}_k$ in the last equality.
\end{proof}
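A numerical sanity check of the lower bound in the lemma above, in a toy deterministic setting (a single deterministic "sample", so that $\widehat L_k$ equals the Lipschitz constant exactly; the operator and the constants $\lambda,\theta,\hat\alpha$ are illustrative choices):

```python
import numpy as np

# Illustrative check (toy, deterministic setting) of the stepsize lower
# bound alpha_k >= (lambda * theta / L_hat_k) /\ alpha_hat from the lemma
# above.  Here A^T A = 5 I, so F is Lipschitz with L = sqrt(5) and the
# empirical estimate L_hat equals L exactly.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
F = lambda x: A @ x
proj = lambda y: np.maximum(y, 0.0)       # projection onto X = R^2_+

lam, theta, alpha_hat = 0.5, 0.5, 1.0
x = np.array([1.0, 2.0])
Fx = F(x)
alpha = alpha_hat
while True:                               # backtracking line search
    z = proj(x - alpha * Fx)
    if alpha * np.linalg.norm(F(z) - Fx) <= lam * np.linalg.norm(z - x):
        break
    alpha *= theta

L_hat = np.sqrt(5.0)
print(alpha)                                          # 0.125
print(alpha >= min(lam * theta / L_hat, alpha_hat))   # True
```

Here the accepted stepsize $0.125$ indeed exceeds $\lambda\theta/\widehat L=0.25/\sqrt{5}\approx 0.112$, as the lemma predicts.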
Recall \eqref{equation:oracle:errors:DSSA:extragradient}. We define, for $k\in\mathbb{N}_0$ and for $x^*\in X^*$,
\begin{eqnarray}
\Delta A_{k}&:=&(1-6\lambda^2)\hat\alpha^2\Vert\epsilon^k_1\Vert^2+
6\hat\alpha^2\Vert\epsilon^k_2\Vert^2+6\hat\alpha^2\Vert\epsilon^k_3\Vert^2,\label{def:A:armijo}\\
\Delta M_{k}(x^*)&:=&2\alpha_k\langle x^*-z^k,\epsilon^k_2\rangle.\label{def:M:armijo}
\end{eqnarray}
Lemma \ref{lemma:recursion:armijo} and Proposition \ref{prop:A:armijo} stated in the following are proved in the Appendix.
\begin{lemma}[A recursive error bound for \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}]\label{lemma:recursion:armijo}
Consider Assumptions \ref{assumption:holder:continuity} and \ref{assump:existence}-\ref{assump:monotonicity}. If \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}} does not stop at iteration $k+1$ then, for all $x^*\in X^*$,
\begin{eqnarray*}
\Vert x^{k+1}-x^*\Vert^2\le \Vert x^k-x^*\Vert^2-\frac{(1-6\lambda^2)\alpha_k^2}{2} r(x^k)^2+\Delta M_{k}(x^*)+\Delta A_k.
\end{eqnarray*}
\end{lemma}
\begin{proposition}[Bounds on the oracle's errors]\label{prop:A:armijo}
Consider Assumptions \ref{assumption:holder:continuity}, \ref{assump:existence} and \ref{assump:iid:sampling}. Recall definitions in \eqref{equation:oracle:error:variance}, \eqref{def:A:armijo}, Theorem \ref{thm:variance:error:with:line:search} and Lemma \ref{lemma:decay:empirical:error}. Then there exist positive constants $\mathsf{C}_{p}$ and $\mathsf{\overline C}_{p}$ (depending only on $d$, $p$, $\mathsf{L}({\xi})\hat\alpha$ and $\{N_k\}$) such that, if the method does not stop at iteration $k+1$ we have, for all $x^*\in X^*$,
\begin{eqnarray*}
\Lpddnorm{\Delta A_k|\mathcal{F}_k}&\le &\frac{\mathsf{C}_{p}\left[\hat\alpha\sigma_{\mathsf{a}p}(x^*)\right]^2+\mathsf{\overline{C}}_{p}\left(\hat\alpha\widetilde L_p\right)^2D_k^2}{N_k}.
\end{eqnarray*}
In the above, for compact $X$, we have $\widetilde L_p:=(C_pL_p)\vee L_p^*$ and $D_k:\equiv\diam(X)$. For a general $X$, we have $\widetilde L_p:=\overline{L}_{2p}$ and $D_k:=\Vert x^k-x^*\Vert$.
\end{proposition}
See Remark \ref{rem:constants:A:armijo} in the Appendix for details on the constants above. In the following convergence analysis, we set $p=2$ (see Remark \ref{rem:lp:boundedness} for the interest in higher moments).
\begin{proposition}[Stochastic quasi-Fej\'er property]\label{prop:fejer:armijo}
Consider Assumptions \ref{assumption:holder:continuity} and \ref{assump:existence}-\ref{assump:iid:sampling} and definitions in Proposition \ref{prop:A:armijo} with $p=2$. Set
$\nu:=\frac{(1-6\lambda^2)\left[(\lambda\theta)\wedge\hat\alpha\right]^2}{2\Lnorm{\mathsf{L}(\xi)}^2}$. If \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}} does not stop at iteration $k+1$ then, for all $x^*\in X^*$,
\begin{eqnarray*}
\mathbb{E}\left[\|x^{k+1}-x^*\|^2|\mathcal{F}_k\right]&\le &\|x^{k}-x^*\|^2 -\nu\cdot r(x^k)^2
+\frac{\mathsf{C}_{2}\left[\hat\alpha\sigma_{2\mathsf{a}}(x^*)\right]^2}{N_k}+\frac{\mathsf{\overline{C}}_{2}\left(\hat\alpha\widetilde L_2\right)^2}{N_k}D_k^2.
\end{eqnarray*}
\end{proposition}
\begin{proof}
We first show that $\{\Delta M_k(x^*),\mathcal{F}_k\}$ defines a martingale difference even if $\alpha_k\notin\mathcal{F}_k$. Indeed, the facts that $z^k\in\widehat{\mathcal{F}}_k$ and $\eta^k\perp\perp\widehat{\mathcal{F}}_k$ imply that $\mathbb{E}[\epsilon^k_2|\widehat{\mathcal{F}}_k]=0$, where $\epsilon^k_2$ is defined in \eqref{equation:oracle:errors:DSSA:extragradient}. This fact, $z^k\in\widehat\mathcal{F}_k$ and $\alpha_k\in\widehat{\mathcal{F}}_k$ imply that $\mathbb{E}[\Delta M_k(x^*)|\widehat{\mathcal{F}}_k]=0$. Using this and the fact that $\mathbb{E}[\mathbb{E}[\cdot|\widehat{\mathcal{F}}_k]|\mathcal{F}_k]=\mathbb{E}[\cdot|\mathcal{F}_k]$, we finally conclude that $\mathbb{E}[\Delta M_k(x^*)|\mathcal{F}_k]=0$ as claimed. The recursion in the statement follows immediately from this fact, relation $\mathbb{E}\left[\alpha_k^2\big|\mathcal{F}_k\right]\ge\frac{[(\lambda\theta)\wedge\hat\alpha]^2}{\Lnorm{\mathsf{L}(\xi)}^2}$ in Lemma \ref{lemma:step:lower}, Lemma \ref{lemma:recursion:armijo} and Proposition \ref{prop:A:armijo} with $p=2$, after we take $\mathbb{E}[\cdot|\mathcal{F}_k]$ in Lemma \ref{lemma:recursion:armijo} and use the fact that $x^k\in\mathcal{F}_k$.
\end{proof}
We now proceed to establish the asymptotic convergence of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}.
\begin{theorem}[Asymptotic convergence]\label{thm:convergence:armijo}
Under Assumptions \ref{assumption:holder:continuity} and \ref{assump:existence}-\ref{assump:iid:sampling}, either \emph{\textsf{Algorithm \ref{algorithm:DSSA:extragradient}}} stops at iteration $k+1$, in which case $x^k$ is a solution of \emph{VI}$(T,X)$, or it generates an infinite sequence $\{x^k\}$ that is a.s. bounded,
$
\lim_{k\rightarrow\infty}\dist(x^k,X^*)=0,
$
and
$
r(x^k)
$
converges to $0$ almost surely and in $\mathcal{L}^2$. In particular, a.s. every cluster point of $\{x^k\}$ belongs to $X^*$.
\end{theorem}
\begin{proof}
If \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} stops at iteration $k$, then
$
x^k=\Pi[x^k-\hat\alpha\widehat F(\xi^k,x^k)].
$
From this fact and Lemma \ref{lemma:proj}(iv) we get, for all $x\in X$,
\begin{equation}\label{thm:convergence:armijo:eq0}
\langle \widehat F(\xi^k,x^k),x-x^k\rangle\ge0.
\end{equation}
From the facts that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$, Assumption \ref{assump:iid:sampling} and \eqref{equation:expected:valued:objective}, we have that
$\mathbb{E}\left[\widehat F(\xi^k,x^k)|\mathcal{F}_k\right]=T(x^k)$. Using this result and the fact that $x^k\in\mathcal{F}_k$, we take $\mathbb{E}[\cdot|\mathcal{F}_k]$ in \eqref{thm:convergence:armijo:eq0} and obtain, for all $x\in X$, $\langle T(x^k),x-x^k\rangle\ge0$. Hence $x^k\in X^*$.
Suppose now that \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} generates an infinite sequence.
Take some $x^*\in X^*$. Taking into account $\sum_k N_k^{-1}<\infty$, Proposition \ref{prop:fejer:armijo} for a general $X$ ($\mathsf{a}:=2$) and the fact that $x^k\in\mathcal{F}_k$, we apply Theorem \ref{thm:rob} with $y_k:=\Vert x^k-x^*\Vert^2$, $a_k:=\frac{\mathsf{\overline{C}}_{2}(\hat\alpha\overline{L}_{4})^2}{{N}_k}$, $b_k:=\frac{\mathsf{C}_{2}[\hat\alpha\sigma_4(x^*)]^2}{{N}_k}$ and $u_k:=\frac{(1-6\lambda^2)[(\lambda\theta)\wedge\hat\alpha]^2}{2\Lnorm{\mathsf{L}(\xi)}^2}r(x^k)^2$, in order to conclude that a.s. $\{\Vert x^k-x^*\Vert^2\}$ converges and $\sum_{k=0}^\infty r(x^k)^2<\infty$. In particular, a.s. $\{x^k\}$ is bounded and
\begin{equation}\label{thm:convergence:armijo:eq4}
0 = \lim_{k\rightarrow\infty}r(x^k)^2
=\lim_{k\rightarrow\infty}\left\Vert x^k-\Pi\left[x^k-T(x^k)\right]\right\Vert^2.
\end{equation}
The fact that $\lim_{k\rightarrow\infty}\mathbb{E}[r(x^k)^2]=0$ is proved in a similar way, taking expectation in the recursion of Proposition \ref{prop:fejer:armijo}.
Relation \eqref{thm:convergence:armijo:eq4}
and the continuity of $T$ (Lemma \ref{lemma:holder:continuity:mean:std:dev}) and $\Pi$ (Lemma \ref{lemma:proj}(iii)) imply that a.s. every cluster point $\bar x$ of $\{x^k\}$ satisfies
$
0=\bar x-\Pi\left[\bar x-T(\bar x)\right].
$
From Lemma \ref{lemma:proj}(iv), we conclude that $\bar x\in X^*$. A.s. the boundedness
of $\{x^k\}$ and the fact that every cluster point of $\{x^k\}$
belongs to $X^*$ imply that $\lim_{k\rightarrow\infty}\dist(x^k,X^*)=0$.
\end{proof}
\subsection{Convergence rate and oracle complexity}
As mentioned in the Introduction, we allow $X$ to be unbounded and the SO may not have a uniformly bounded variance over $X$. In this setting, it is not possible to infer a priori the boundedness of the sequence $\left\{\Lnorm{\Vert x^k\Vert}\right\}$ (i.e., $\mathcal{L}^2$-boundedness of the iterates). In this section, we obtain such $\mathcal{L}^2$-boundedness when using DS-SA schemes. This will be essential to obtain complexity estimates.
\begin{proposition}[$\mathcal{L}^2$-boundedness of the iterates: unbounded case]\label{prop:l2:boundedness}
Let Assumptions \ref{assumption:holder:continuity} and \ref{assump:existence}-\ref{assump:iid:sampling} hold and recall definitions in \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}}, \eqref{equation:oracle:error}-\eqref{equation:oracle:error:variance}, Theorem \ref{thm:variance:error:with:line:search} and Proposition \ref{prop:A:armijo} with $p=2$. Let $x^*\in X^*$ and choose $k_0:=k_0(\overline{\mathsf{C}}_2,\hat\alpha\overline{L}_4)\in\mathbb{N} $ and $\phi\in(0,1)$ such that
\begin{equation}\label{prop:l2:boundedness:k0}
\sum_{i\ge k_0}^\infty\frac{1}{N_i}\le\frac{\phi}{\overline{\mathsf{C}}_2\left(\hat\alpha\overline{L}_4\right)^2}.
\end{equation}
Then
$
\sup_{k\ge k_0}\Lnorm{\Vert x^k-x^*\Vert}^2<\frac{\Lnorm{\Vert x^{k_0}-x^*\Vert}^2+\frac{\phi\mathsf{C}_2\sigma_4(x^*)^2}{\overline{\mathsf{C}}_2\overline{L}_4^2}}{1-\phi}.
$
\end{proposition}
\begin{proof}
In the following, we set $d_i:=\Vert x^i-x^*\Vert^2$ for $i\in\mathbb{N}_0$. Let $k>k_0$ in $\mathbb{N}_0$ with $k_0$ as stated in \eqref{prop:l2:boundedness:k0}. Note that such $k_0$ always exists since $\sum_kN_k^{-1}<\infty$ by Assumption \ref{assump:iid:sampling}. Consider the recursion of Proposition \ref{prop:fejer:armijo} for the case $X$ is unbounded ($\mathsf{a}:=2$). We take the expectation, use $\mathbb{E}[\mathbb{E}[\cdot|\mathcal{F}_i]]=\mathbb{E}[\cdot]$ and drop the negative term in the right hand side. We then sum recursively the obtained inequality from $i:=k_0$ to $i:=k-1$, obtaining
\begin{equation}\label{prop:l2:boundedness:eq1}
\Lnorm{d_k}^2\le\Lnorm{d_{k_0}}^2+\mathsf{\overline{C}}_2\left(\hat\alpha\overline{L}_4\right)^2\sum_{i=k_0}^{k-1}\frac{\Lnorm{d_i}^2}{N_i}+\mathsf{C}_2\left[\hat\alpha\sigma_4(x^*)\right]^2\sum_{i=k_0}^{k-1}\frac{1}{N_i}.
\end{equation}
For any $a>0$, we define the stopping time
$
\tau_a:=\{k\ge k_0:\Lnorm{d_k}>a\}.
$
From \eqref{prop:l2:boundedness:k0}-\eqref{prop:l2:boundedness:eq1} and definition of $\tau_a$, we have that, for any $a>0$ such that $\tau_a<\infty$,
\begin{eqnarray*}
a^2<\Lnorm{d_{\tau_a}}^2&\le &\Lnorm{d_{k_0}}^2+\mathsf{\overline{C}}_2\left(\hat\alpha\overline{L}_4\right)^2\sum_{i=k_0}^{\tau_a-1}\frac{\Lnorm{d_i}^2}{N_i}+\mathsf{C}_2\left[\hat\alpha\sigma_4(x^*)\right]^2\sum_{i=k_0}^{\tau_a-1}\frac{1}{N_i}\\
&<&\Lnorm{d_{k_0}}^2+\phi a^2+\frac{\phi\mathsf{C}_2\sigma_4(x^*)^2}{\overline{\mathsf{C}}_2\overline{L}_4^2},
\end{eqnarray*}
and hence,
$
a^2<\frac{\Lnorm{d_{k_0}}^2+\frac{\phi\mathsf{C}_2\sigma_4(x^*)^2}{\overline{\mathsf{C}}_2\overline{L}_4^2}}{1-\phi}=:B,
$
where we used that $\phi\in(0,1)$. By the definition of $\tau_a$ for any $a>0$, the argument above shows that any threshold $a^2$ eventually exceeded by the sequence $\{\Lnorm{d_k}^2\}_{k\ge k_0}$ is bounded above by $B$. Hence $\{\Lnorm{d_k}^2\}_{k\ge k_0}$ is bounded by $B$ and satisfies the statement of the proposition.
\end{proof}
We now obtain a rate of convergence.
\begin{theorem}[Rate of convergence]\label{thm:rate:convergence}
Consider Assumptions \ref{assumption:holder:continuity} and \ref{assump:existence}-\ref{assump:iid:sampling} and recall definitions in \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}}, \eqref{equation:oracle:error}-\eqref{equation:oracle:error:variance} and Proposition \ref{prop:A:armijo} with $p=2$. Set
\begin{equation}\label{thm:rate:convergence:Nk}
N_k:=N\left\lceil(k+\mu)(\ln(k+\mu))^{1+b}\right\rceil,
\end{equation}
for any $N\in\mathbb{N}$, $b>0$ and $\mu>2$. Then Theorem \ref{thm:convergence:armijo} holds and the sequence $\{x^k\}$ generated by \textsf{\emph{Algorithm \ref{algorithm:DSSA:extragradient}}} is bounded in $\mathcal{L}^2$. Moreover, for any $x^*\in X^*$, if $\mathsf{J}>0$ is such that
$
\sup_{k\ge0}{\Lnorm{\Vert x^k-x^*\Vert}^2}\le\mathsf{J},
$
the following bound holds for all $k\in\mathbb{N}_0$:
\begin{eqnarray*}
\min_{i=0,\ldots,k}\mathbb{E}\left[r(x^i)^2\right]
\le\frac{\left\{\frac{2\Lnorm{\mathsf{L}(\xi)}^2}{(1-6\lambda^2)[(\lambda\theta)\wedge\hat\alpha]^2}\right\}}{k+1}\left\{\Vert x^0-x^*\Vert^2+\frac{\mathsf{C}_2[\hat\alpha\sigma_{2\mathsf{a}}(x^*)]^2+\mathsf{\overline{C}}_2\left(\hat\alpha\widetilde L_2\right)^2\mathsf{J}}{Nb[\ln(\mu-1)]^b}\right\}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
Clearly, $\{N_k\}$ satisfies Assumption \ref{assump:iid:sampling} and, hence, Theorem \ref{thm:convergence:armijo} and Proposition \ref{prop:l2:boundedness} hold. In particular, $\{x^k\}$ is bounded in $\mathcal{L}^2$. Let $x^*\in X^*$ and let $\mathsf{J}$ be as stated in the theorem. Hence, $\sup_k\mathbb{E}[D_k^2]\le\mathsf{J}$. In the recursion of Proposition \ref{prop:fejer:armijo}, we take the expectation, use $\mathbb{E}[\mathbb{E}[\cdot|\mathcal{F}_i]]=\mathbb{E}[\cdot]$ and sum recursively the obtained inequality from $i:=0$ to $i:=k$. We then obtain
$$
\frac{(1-6\lambda^2)[(\lambda\theta)\wedge\hat\alpha]^2}{2\Lnorm{\mathsf{L}(\xi)}^2}\sum_{i=0}^k\mathbb{E}\left[r(x^i)^2\right]
\le\Vert x^0-x^*\Vert^2+\left\{\mathsf{C}_2[\hat\alpha\sigma_{2\mathsf{a}}(x^*)]^2+\mathsf{\overline{C}}_2\left(\hat\alpha\widetilde L_2\right)^2\mathsf{J}\right\}\mathsf{S}_k,
$$
where $\mathsf{S}_k:=\sum_{i=0}^kN_i^{-1}$. The proof of the statement follows from the above inequality, the bound
\begin{eqnarray*}
\mathsf{S}_k\le\sum_{i=0}^\infty\frac{1}{N_i}\le\int_{-1}^\infty\frac{\mathrm{d}t}{N(t+\mu)[\ln(t+\mu)]^{1+b}}=\frac{1}{Nb[\ln(\mu-1)]^b},
\end{eqnarray*}
and $\min_{i=0,\ldots,k}\mathbb{E}\left[r(x^i)^2\right]\le\frac{1}{k+1}\sum_{i=0}^k\mathbb{E}\left[r(x^i)^2\right]$.
\end{proof}
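The integral bound on $\mathsf{S}_k$ used at the end of the proof above can be confirmed numerically; in this sketch the particular values of $N$, $b$ and $\mu$ are arbitrary valid choices:

```python
import math

# Numerical check (illustration) of the sampling policy
# N_k = N * ceil((k + mu) * (ln(k + mu))^(1 + b)) and of the bound
# S_k = sum_{i<=k} 1/N_i <= 1 / (N * b * (ln(mu - 1))^b)
# used at the end of the proof above.  N, b, mu are arbitrary valid choices.
N, b, mu = 1, 1.0, 3.0

def N_k(k):
    return N * math.ceil((k + mu) * math.log(k + mu) ** (1 + b))

S = sum(1.0 / N_k(k) for k in range(100_000))
bound = 1.0 / (N * b * math.log(mu - 1) ** b)
print(S <= bound)  # True
```

The partial sum stays below $1/(Nb[\ln(\mu-1)]^b)$ since the summand is dominated by the decreasing integrand.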
A near optimal oracle complexity is guaranteed in the next corollary of Theorem \ref{thm:rate:convergence}.
\begin{corollary}[Iteration and oracle complexities]\label{cor:oracle:complexity}
Let the assumptions of Theorem \ref{thm:rate:convergence} hold and set $N:=\mathcal{O}(d)$. Given $\epsilon>0$, \emph{\textsf{Algorithm \ref{algorithm:DSSA:extragradient}}} achieves the tolerance
$$
\min_{i=0,\ldots,K}\mathbb{E}[r(x^i)^2]\le\epsilon,
$$
after $K=b^{-1}\mathcal{O}(\epsilon^{-1})$ iterations and, almost surely, with an oracle complexity
$\sum_{i=0}^K(1+\ell_i)N_i$
bounded above by
$$
b^{-2}\cdot\log_{\frac{1}{\theta}}\left(\frac{\hat\alpha\max_{i=0,\ldots,K}\widehat{L}_i}{(\lambda\theta)\wedge\hat\alpha}\right)\cdot\left[\ln\left(b^{-1}\epsilon^{-1}\right)\right]^{1+b}\cdot\mathcal{O}(d\epsilon^{-2}),
$$
where $\ell_k$ is the number of oracle calls used in the line search scheme \eqref{algo:armijo:rule} at iteration $k$ and $\widehat{L}_k$ is defined in Lemma \ref{lemma:step:lower}.
Moreover, the mean oracle complexity satisfies the same upper bound above with $\max_{i=0,\ldots,K}\widehat{L}_i$ replaced by $L$.
\end{corollary}
\begin{proof}
We recall the definitions in Assumption \ref{assumption:holder:continuity}, Theorem \ref{thm:variance:error:with:line:search}, Lemma \ref{lemma:decay:empirical:error}, Remark \ref{rem:constants:thm:correlated:error}, Proposition \ref{prop:A:armijo} and Remark \ref{rem:constants:A:armijo}. The definitions of $\widetilde L_2$, $\overline{L}_4$, $L_2^*$, $\mathsf{c}_2$ and $\mathsf{d}_2$ (which depend on $d$) and Theorem \ref{thm:rate:convergence} imply that, up to a constant $B>0$, for every $k\in\mathbb{N}$, $\min_{i=0,\ldots,k}\mathbb{E}[r(x^i)^2]\le Bd(Nbk)^{-1}$. Given $\epsilon>0$, let $K$ be the least natural number such that $Bd(NbK)^{-1}\le\epsilon$. Then $K=\mathcal{O}(dN^{-1}b^{-1}\epsilon^{-1})$, the total number of oracle calls is
\begin{eqnarray}
\sum_{i=0}^K(1+\ell_i)N_i&\lesssim &\left(\max_{i=0,\ldots,K}\ell_i\right)\sum_{i=1}^K N i(\ln i)^{1+b}\lesssim\left(\max_{i=0,\ldots,K}\ell_i\right)N K^2(\ln K)^{1+b}\nonumber\\
&\lesssim & \left(\max_{i=0,\ldots,K}\ell_i\right)N^{-1}d^2 b^{-2}\epsilon^{-2}\left[\ln\left(dN^{-1}b^{-1}\epsilon^{-1}\right)\right]^{1+b},\label{cor:oracle:complexity:eq1}
\end{eqnarray}
and $\min_{i=0,\ldots,K}\mathbb{E}[r(x^i)^2]\le\epsilon$. Lemma \ref{lemma:step:lower} implies that $\ell_k\le\log_{\frac{1}{\theta}}\left(\frac{\hat\alpha\widehat{L}_k}{(\lambda\theta)\wedge\hat\alpha}\right)$. This fact, \eqref{cor:oracle:complexity:eq1} and $N=\mathcal{O}(d)$ imply the claimed bound on $\sum_{i=0}^K(1+\ell_i)N_i$. The concavity of $t\mapsto\log_{\frac{1}{\theta}}t$ and Jensen's inequality imply
\begin{eqnarray*}
\mathbb{E}[\ell_k]\le\mathbb{E}\left[\log_{\frac{1}{\theta}}\left(\frac{\hat\alpha\widehat{L}_k}{(\lambda\theta)\wedge\hat\alpha}\right)\right]\le \log_{\frac{1}{\theta}}\left(\frac{\hat\alpha L}{(\lambda\theta)\wedge\hat\alpha}\right),
\end{eqnarray*}
where we used that $\mathbb{E}[\widehat{L}_k]=L$ by definitions of $\widehat{L}_k$ and $L$ and Assumption \ref{assump:iid:sampling}. The above relation, \eqref{cor:oracle:complexity:eq1} and $N:=\mathcal{O}(d)$ imply the claimed bound on the mean oracle complexity $\sum_{i=0}^K(1+\mathbb{E}[\ell_i])N_i$.
\end{proof}
\begin{remark}[Linear memory budget per operation]
Recall that $N:=\mathcal{O}(d)$ in Corollary \ref{cor:oracle:complexity}. This policy requires the computation of the sum \eqref{equation:empirical:average:DSSA:extragradient} of size $N_k\sim dk$ (up to logs) of $d$-dimensional vectors at iteration $k$. For large $d$, such computation is still cheap in terms of memory budget \emph{per operation}: the sum \eqref{equation:empirical:average:DSSA:extragradient} can be computed serially in $k$ steps, each one requiring the storage of just two $d$-dimensional vectors. Hence, it requires memory of $\mathcal{O}(d)$ per operation. It can also be easily parallelized.
\end{remark}
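The $\mathcal{O}(d)$ memory-per-operation point made in the remark above can be sketched as follows; the toy oracle $F(\xi,x)=x+\xi$ is an illustrative assumption:

```python
import numpy as np

# Sketch of the O(d) memory-per-operation point made in the remark above:
# the empirical mean can be accumulated serially over the N_k samples,
# storing only a running sum and the current sample (two d-dimensional
# vectors), instead of materializing all N_k vectors at once.
rng = np.random.default_rng(1)
d, N_k = 5, 1_000
x = rng.standard_normal(d)

def F(xi, x):
    return x + xi  # toy oracle with E[F(xi, x)] = x, i.e. T(x) = x

acc = np.zeros(d)                 # running sum: one d-vector of state
for _ in range(N_k):              # serial pass over the batch
    xi = rng.standard_normal(d)   # current sample: the second d-vector
    acc += F(xi, x)
F_hat = acc / N_k

print(np.linalg.norm(F_hat - x) < 0.2)  # True: close to T(x) = x
```

The loop also parallelizes trivially, since the partial sums over disjoint sub-batches can be computed independently and added at the end.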
\begin{remark}[Radius estimate for unbounded $X$]
By Proposition \ref{prop:l2:boundedness}, the constant $\mathsf{J}$ in Theorem \ref{thm:rate:convergence} can be estimated by
\begin{equation}\label{equation:J}
\mathsf{J}\le\frac{\max_{k=0,\ldots,k_0}\Lnorm{\Vert x^k-x^*\Vert}^2+\frac{\phi\mathsf{C}_2\sigma_4(x^*)^2}{\mathsf{\overline{C}}_2\overline L_4^2}}{1-\phi}\lesssim\max_{k=0,\ldots,k_0}\Lnorm{\Vert x^k-x^*\Vert}^2+\frac{\sigma_4(x^*)^2}{\Lqrtnorm{\mathsf{L}(\xi)}^2},
\end{equation}
using the fact that $1-\phi\in(0,1)$ and the constant definitions in Assumption \ref{assumption:holder:continuity}, Theorem \ref{thm:variance:error:with:line:search}, Lemma \ref{lemma:decay:empirical:error} and Remarks \ref{rem:constants:thm:correlated:error} and \ref{rem:constants:A:armijo} with $p=2$. From \eqref{prop:l2:boundedness:k0} and \eqref{thm:rate:convergence:Nk}, $k_0$ in \eqref{equation:J} can be estimated by
\begin{equation}\label{thm:rate:convergence:k0}
k_0:=\left\lceil\exp\left\{\sqrt[b]{\frac{\mathsf{\overline{C}}_2\left(\hat\alpha\overline{L}_4\right)^2}{\phi b N}}\right\}-\mu+1\right\rceil.
\end{equation}
As discussed in Section \ref{section:conclusion}, the exponential dependence in \eqref{thm:rate:convergence:k0} is not a serious issue. Nevertheless, it can be improved to $k_0\lesssim\sqrt[a]{\mathsf{\overline{C}}_2\left(\hat\alpha\overline{L}_4\right)^2/(\phi b N)}-\mu$ if the sampling policy is taken as $N_k\sim N(k+\mu)^{1+a}(\ln(k+\mu))^{1+b}$ for some $a,b>0$. This comes at the expense of an oracle complexity of $[\ln(\epsilon^{-1})]^{1+b}\mathcal{O}(\epsilon^{-(2+a)})$, which is polynomially near optimal (instead of logarithmically as in Corollary \ref{cor:oracle:complexity}).
\end{remark}
\begin{remark}[Boundedness in $\mathcal{L}^p$]\label{rem:lp:boundedness}
Adapting the proofs of Propositions \ref{prop:A:armijo} and \ref{prop:l2:boundedness}, it is possible to prove, in case $X$ is unbounded, that \emph{the sequence $\{x^k\}$ is $\mathcal{L}^p$-bounded} for any given $p\ge4$ satisfying Assumption \ref{assumption:holder:continuity}. This is a significant statistical stability property. The proof requires exploiting that $\Delta M_k(x^*)$ in \eqref{def:M:armijo} is a martingale difference.\footnote{The nonmartingale-like dependency is present only in $\Delta A_k$ via the error $\epsilon^k_3$ in \eqref{equation:oracle:errors:DSSA:extragradient}.}
\end{remark}
\section{Analysis of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} for H\"older continuous operators}\label{section:algorithm:hyperplane:DSSA}
With respect to \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}, we will set $y^k:=x^k-\gamma_k\widehat F(\xi^k,z^k)$ and study the stochastic process $\{x^k\}$ with respect to the filtration
$$
\mathcal{F}_k=\sigma(x^0,\xi^0,\ldots,\xi^{k-1}).
$$
We will replace Assumption \ref{assump:iid:sampling} by the following one.
\begin{assumption}[I.I.D. sampling]\label{assump:iid:sampling:hyperplane}
In \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}, the sequence $\{\xi^k_j:k\in\mathbb{N}_0,j\in[N_k]\}$ is an i.i.d. sample drawn from $\mathbf{P}$ and $\sum_{k=0}^\infty N_k^{-\frac{1}{2}}<\infty$.
\end{assumption}
We also define the oracle errors:
\begin{eqnarray}
\bar\epsilon^k_{1} &:=&\widehat F(\xi^k,x^k)-T(x^k),\label{algo:noise:hyper:sample1}\\
\bar\epsilon^k_{2} &:=&\widehat F(\xi^k,z^k)-T(z^k),\label{algo:noise:hyper:sample2}\\
\bar\epsilon^k_{3} &:=&\widehat F(\xi^k,\widehat z^k)-T(z^k),\label{algo:noise:hyper:sample3}
\end{eqnarray}
where $\widehat z^k:=\bar z^k(\theta^{-1}\alpha_k)$ (see line search \eqref{algo:armijo:rule2} for the definition of $\bar z^k(\alpha)$). We remark that $\overline{\epsilon}^k_2$ and $\overline{\epsilon}^k_3$ are correlated errors in the sense that $z^k$ and $\widehat{z}^k$ depend on $\xi^k$. In the setting of Theorem \ref{thm:variance:error:with:line:search}, this means that $z^k=\overline z_{\beta_k}(\xi^k;\alpha_k,x^k)$ and $\widehat z^k=\overline z_{\beta_k}(\xi^k;\theta^{-1}\alpha_k,x^k)$. We start by showing that the line search \eqref{algo:armijo:rule2} in \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} is well defined.
\begin{lemma}[Good definition of the line search]\label{lemma:armijo:hyper:def}
Consider Assumption \ref{assumption:holder:continuity}. Then
\begin{itemize}
\item[i)] The line search \eqref{algo:armijo:rule2} in \emph{\textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}} terminates after a finite number of iterations.
\item[ii)] If \textsf{\emph{Algorithm \ref{algorithm:DSSA:hyperplane}}} does not stop at iteration $k+1$, then
$
\left\langle\widehat F(\xi^k,z^k),x^k-z^k\right\rangle>0.
$
In particular, $\gamma_k>0$ in \eqref{algo:hyperplane2}.
\end{itemize}
\end{lemma}
\begin{proof}
Item (ii) is a direct consequence of (i).
We prove next item (i). Assume by contradiction that for every $\ell\in\mathbb{N}_0$,
$$
\left\langle \beta_k\widehat F\Big(\xi^k,\bar z^k\left(\theta^{-\ell}\widehat\alpha\right)\Big),x^k-\Pi(g^k)\right\rangle<\lambda\Vert x^k-\Pi(g^k)\Vert^2.
$$
We let $\ell\rightarrow\infty$ above and by continuity of $\widehat F(\xi^k,\cdot)$, resulting from
Assumption \ref{assumption:holder:continuity}, we obtain
$$
\lambda\Vert x^k-\Pi(g^k)\Vert^2\ge\langle x^k-g^k,x^k-\Pi(g^k)\rangle\ge\Vert x^k-\Pi(g^k)\Vert^2,
$$
using the definition $g^k=x^k-\beta_k\widehat F(\xi^k,x^k)$ in the first inequality and Lemma \ref{lemma:proj}(v) in the last inequality. Since $x^k\neq\Pi(g^k)$ by the definition of the method, we obtain $\lambda\ge1$, a contradiction.
\end{proof}
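For illustration, the backtracking rule \eqref{algo:armijo:rule2} just shown to be well defined can be sketched numerically as follows. This is a hedged sketch under the identification $\bar z^k(\alpha)=\alpha\Pi(g^k)+(1-\alpha)x^k$ used in the analysis; the deterministic \texttt{F\_hat} stands in for the sampled map $\widehat F(\xi^k,\cdot)$, and all names and default parameters are illustrative, not the paper's prescriptions.

```python
import numpy as np

def armijo_backtracking(F_hat, x, pi_g, beta, alpha_hat=1.0, theta=2.0,
                        lam=0.5, max_iter=60):
    """Accept the largest alpha = alpha_hat / theta**l such that
    <beta * F_hat(z(alpha)), x - pi_g> >= lam * ||x - pi_g||^2,
    where z(alpha) = alpha * pi_g + (1 - alpha) * x.

    pi_g plays the role of Pi(g^k); F_hat is an illustrative stand-in
    for the sampled operator hat F(xi^k, .).
    """
    w = x - pi_g
    rhs = lam * np.dot(w, w)
    alpha = alpha_hat
    for _ in range(max_iter):
        z = alpha * pi_g + (1.0 - alpha) * x
        if beta * np.dot(F_hat(z), w) >= rhs:
            return alpha, z
        alpha /= theta  # backtrack
    raise RuntimeError("line search did not terminate "
                       "(ruled out under the lemma's assumptions)")
```

By Lemma \ref{lemma:armijo:hyper:def}, under continuity of the sampled operator the loop terminates after finitely many halvings.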
The following lemma is also proved in the Appendix.
\begin{lemma}\label{lemma:recursion:hyper}
Consider Assumptions \ref{assump:existence}-\ref{assump:monotonicity} and \eqref{algo:noise:hyper:sample2}.
Suppose that \textsf{\emph{Algorithm \ref{algorithm:DSSA:hyperplane}}} does not stop at iteration $k+1$. Then, for all $x^*\in X^*$,
\begin{equation*}
\Vert x^{k+1}-x^*\Vert^2\le\Vert x^{k}-x^*\Vert^2-\Vert y^k-x^k\Vert^2+2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle.
\end{equation*}
\end{lemma}
We now aim at controlling the error term $\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle$. This term is not a martingale difference, since $z^k$ depends on $\xi^k$. We shall need the following lemma.
\begin{lemma}\label{lemma:gamma}
Suppose that \textsf{\emph{Algorithm \ref{algorithm:DSSA:hyperplane}}} does not stop at iteration $k+1$. Then
\begin{equation}
0<\gamma_k<\frac{\alpha_k\beta_k}{\lambda}\le\frac{\widehat\alpha\beta_k}{\lambda}.
\end{equation}
\end{lemma}
\begin{proof}
We only need to prove the second inequality.
The line search \eqref{algo:armijo:rule2} and the fact that $x^k-z^k=\alpha_k(x^k-\Pi(g^k))$ imply that
\begin{equation}\label{lemma:gamma:eq1}
\langle \widehat F(\xi^k,z^k),x^k-z^k\rangle\ge\frac{\lambda}{\alpha_k\beta_k}\Vert
x^k-z^k\Vert^2.
\end{equation}
From \eqref{lemma:gamma:eq1} and the definition of $\gamma_k$ we get
\begin{eqnarray}\label{lemma:gamma:eq2}
\gamma_k =\frac{\langle \widehat F(\xi^k,z^k),x^k-z^k\rangle}{\Vert\widehat F(\xi^k,z^k)\Vert^2}
> \frac{\lambda}{\alpha_k\beta_k}\frac{\Vert x^k-z^k\Vert^2}{\Vert \widehat F(\xi^k,z^k)\Vert^2},
\end{eqnarray}
while the definition of $\gamma_k$ gives
\begin{eqnarray}\label{lemma:gamma:eq3}
\gamma_k =\frac{\langle \widehat F(\xi^k,z^k),x^k-z^k\rangle}{\Vert \widehat F(\xi^k,z^k)\Vert^2}
\le \frac{\Vert\widehat F(\xi^k,z^k)\Vert\Vert x^k-z^k\Vert}{\Vert \widehat F(\xi^k,z^k)\Vert^2}
= \frac{\Vert x^k-z^k\Vert}{\Vert \widehat F(\xi^k,z^k)\Vert},
\end{eqnarray}
using the Cauchy-Schwarz inequality. Inequalities \eqref{lemma:gamma:eq2}-\eqref{lemma:gamma:eq3} imply
the claim.
\end{proof}
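The stepsize $\gamma_k$ bounded in Lemma \ref{lemma:gamma} defines the relaxed step $y^k=x^k-\gamma_k\widehat F(\xi^k,z^k)$, i.e., the projection of $x^k$ onto the hyperplane $\{u:\langle\widehat F(\xi^k,z^k),u-z^k\rangle=0\}$ separating $x^k$ from the solution set. A minimal sketch with illustrative names, using a fixed vector as a deterministic stand-in for the sampled operator value:

```python
import numpy as np

def hyperplane_step(F_zk, x, z):
    """Compute gamma = <F(z), x - z> / ||F(z)||^2 and y = x - gamma * F(z).

    y is the orthogonal projection of x onto the hyperplane
    {u : <F(z), u - z> = 0}; F_zk is the (stand-in) value hat F(xi^k, z^k).
    """
    gamma = np.dot(F_zk, x - z) / np.dot(F_zk, F_zk)
    y = x - gamma * F_zk
    return gamma, y
```

By construction $\langle\widehat F(\xi^k,z^k),y^k-z^k\rangle=0$, and Lemma \ref{lemma:gamma} guarantees $0<\gamma_k<\hat\alpha\beta_k/\lambda$ whenever the method has not stopped.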
\begin{lemma}[Error decay]\label{lemma:error:decay:hyper}
Consider Assumptions \ref{assumption:holder:continuity}, \ref{assump:existence} and \ref{assump:iid:sampling:hyperplane} and \eqref{algo:noise:hyper:sample2}. Suppose that \emph{\textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}} does not stop at iteration $k+1$. Then, for all $x^*\in X^*$,
\begin{eqnarray*}
\Lpddnorm{\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle\big|\mathcal{F}_k}\lesssim\frac{1+\Vert x^k-x^*\Vert^2}{\sqrt{N_k}}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
We denote $\widetilde z^k:=\Pi(g^k)$, so that
\begin{eqnarray}\label{lemma:error:decay:hyper:eq1}
x^*-z^k=\alpha_k(x^*-\widetilde z^k)+(1-\alpha_k)(x^*-x^k),
\end{eqnarray}
using the fact that $x^*=\alpha_kx^*+(1-\alpha_k)x^*$. In view of
\eqref{lemma:error:decay:hyper:eq1},
we have
\begin{eqnarray}\label{lemma:error:decay:hyper:eq2}
\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle
&=&\gamma_k\alpha_k\langle\bar\epsilon^k_2,x^*-\widetilde z^k\rangle +
\gamma_k(1-\alpha_k)\langle\bar\epsilon^k_2,x^*-x^k\rangle\nonumber\\
&\le &\frac{\tilde\beta}{\lambda}\Vert\bar\epsilon^k_2\Vert\left(\Vert x^*-\widetilde z^k\Vert+\Vert x^*-x^k\Vert\right),
\end{eqnarray}
using the Cauchy-Schwarz inequality, Lemma \ref{lemma:gamma}, and the facts that
$0<\alpha_k\le\hat\alpha\le1$ and $0<\beta_k\le\tilde\beta$.
Since $x^*\in X^*$, by Lemma \ref{lemma:proj}(iv), we use the fact that $x^*=\Pi[x^*-\beta_k T(x^*)]$ and the definitions of $\tilde z^k$, $g^k$ and $\bar\epsilon^k_1$ in order to obtain
\begin{eqnarray}\label{lemma:error:decay:hyper:eq3}
\Vert\tilde z^k-x^*\Vert &=& \Vert \Pi[x^k-\beta_k (T(x^k)+\bar\epsilon^k_1)]-\Pi[x^*-\beta_k T(x^*)]\Vert\nonumber\\
&\le &\Vert x^k-x^* +\beta_k(T(x^*)-T(x^k))-\beta_k\bar\epsilon^k_1\Vert\nonumber\\
&\le &\Vert x^k-x^*\Vert+\tilde\beta L\Vert x^k-x^*\Vert^\delta+\tilde\beta\Vert\bar\epsilon^k_1\Vert,
\end{eqnarray}
using Lemma \ref{lemma:proj}(iii) in the first inequality, and the fact that $0<\beta_k\le\tilde\beta$ together with Lemma \ref{lemma:holder:continuity:mean:std:dev} in the last inequality.
Using \eqref{lemma:error:decay:hyper:eq2}-\eqref{lemma:error:decay:hyper:eq3} and the fact that
$\Vert x^k-x^*\Vert^\delta\le 1+\Vert x^k-x^*\Vert$, we take $\Lpddnorm{\cdot|\mathcal{F}_k}$ and get
\begin{eqnarray}
\Lpddnorm{\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle\big|\mathcal{F}_k}\le
\left[\tilde\beta L+(2+\tilde\beta L)\Vert x^k-x^*\Vert\right]\frac{\tilde\beta}{\lambda}\Lpddnorm{\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k}
+\frac{\tilde\beta^2}{\lambda}\Lpddnorm{\Vert\bar\epsilon^k_1\Vert\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k},\nonumber\\
\label{lemma:error:decay:hyper:eq4}
\end{eqnarray}
using the fact that $x^k\in\mathcal{F}_k$. By Lemma \ref{lemma:decay:empirical:error} with $q=p$ and the facts that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$, we get
\begin{equation}\label{lemma:error:decay:hyper:eq5}
\Lpnorm{\Vert\bar\epsilon^k_1\Vert\big|\mathcal{F}_k}\le C_{p}\frac{\sigma_p(x^*)+L_p+L_p\Vert x^k-x^*\Vert}{\sqrt{N_k}},
\end{equation}
where we used the fact that $\Vert x^k-x^*\Vert^\delta\le1+\Vert x^k-x^*\Vert$. By Theorem \ref{thm:variance:error:with:line:search}, \eqref{algo:noise:hyper:sample2} and the facts that $z^k=\overline{z}_{\beta_k}(\xi^k;\alpha_k,x^k)$, $x^k\in\mathcal{F}_k$ and $\alpha_k\in(0,1]$, we get
\begin{equation}\label{lemma:error:decay:hyper:eq6}
\Lpddnorm{\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k}\le\Lpnorm{\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k}\lesssim \frac{\sigma_{2p}(x^*)+\Vert x^k-x^*\Vert}{\sqrt{N_k}},
\end{equation}
where we used the fact that $\delta\vee\Vert x^k-x^*\Vert^\delta\le1+\Vert x^k-x^*\Vert$. Invoking H\"older's inequality, we also get
\begin{equation}\label{lemma:error:decay:hyper:eq7}
\Lpddnorm{\Vert\bar\epsilon^k_1\Vert\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k}\le
\Lpnorm{\Vert\bar\epsilon^k_1\Vert\big|\mathcal{F}_k}\cdot\Lpnorm{\Vert\bar\epsilon^k_2\Vert\big|\mathcal{F}_k}.
\end{equation}
Relations \eqref{lemma:error:decay:hyper:eq4}-\eqref{lemma:error:decay:hyper:eq7} prove the claim.
\end{proof}
\begin{proposition}[Stochastic quasi-Fej\'er property]\label{prop:fejer:hyper}
Consider Assumptions \ref{assumption:holder:continuity}, \ref{assump:existence}-\ref{assump:monotonicity} and \ref{assump:iid:sampling:hyperplane}. Assume that \emph{\textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}} generates an infinite sequence $\{x^k\}$. Then
\begin{itemize}
\item[(i)] For all $x^*\in X^*$, there exists $c(x^*)\ge1$ such that, for all $k\in\mathbb{N}$,
\begin{eqnarray*}
\mathbb{E}\big[\Vert x^{k+1}-x^*\Vert^2\big|\mathcal{F}_k\big]\le \Vert x^{k}-x^*\Vert^2-\mathbb{E}\big[\Vert y^k-x^k\Vert^2\big|\mathcal{F}_k\big]+c(x^*)\frac{1+\Vert x^k-x^*\Vert^2}{\sqrt{N_k}}.
\end{eqnarray*}
\item[(ii)] A.s.
$\{\Vert x^k-x^*\Vert\}$ and $\{\dist(x^k,X^*)\}$ converge
for all $x^*\in X^*$.
In particular, $\{x^k\}$ is a.s.-bounded.
\item[(iii)] A.s. if a cluster point of $\{x^k\}$ belongs to $X^*$ then $\lim_{k\rightarrow\infty}\dist(x^k,X^*)=0$.
\end{itemize}
\end{proposition}
\begin{proof}
i) It is an immediate consequence of Lemmas \ref{lemma:recursion:hyper}, \ref{lemma:error:decay:hyper} and the fact that $x^k\in\mathcal{F}_k$, after taking $\mathbb{E}[\cdot|\mathcal{F}_k]$ in Lemma \ref{lemma:recursion:hyper}.
ii) Set
$
\mathsf{c}_k(x^*):=\frac{c(x^*)}{\sqrt{N_k}}.
$
From (i), for all $k\in\mathbb{N}_0$,
\begin{equation}\label{prop:fejer:hyper:eq1}
\mathbb{E}\big[\Vert x^{k+1}-x^*\Vert^2\big|\mathcal{F}_k\big]\le\left[1+\mathsf{c}_k(x^*)\right]\Vert x^k-x^*\Vert^2+\mathsf{c}_k(x^*).
\end{equation}
By Assumption \ref{assump:iid:sampling:hyperplane}, we have $\sum_k\mathsf{c}_k(x^*)<\infty$.
Hence, from \eqref{prop:fejer:hyper:eq1} and Theorem \ref{thm:rob} we conclude that a.s. $\{\Vert x^k-x^*\Vert\}$
converges and, in particular, $\{x^k\}$ is bounded.
Set $\bar x^k:=\Pi_{X^*}(x^k)$. Relation \eqref{prop:fejer:hyper:eq1} and the fact that $x^k\in\mathcal{F}_k$ imply
\begin{equation}\label{eebb}
\mathbb{E}\big[\dist(x^{k+1},X^*)^2\big|\mathcal{F}_k\big]\le\left[1+\mathsf{c}_k(\bar x^k)\right]\dist(x^{k},X^*)^2+\mathsf{c}_k(\bar x^k).
\end{equation}
The boundedness of $\{\bar x^k\}$ and Assumption \ref{assump:iid:sampling:hyperplane} imply that a.s. $\sum_k\mathsf{c}_k(\bar x^k)<\infty$. Hence, Theorem \ref{thm:rob} and \eqref{eebb} imply that $\{\dist(x^k,X^*)\}$ a.s.-converges.
iii) Suppose that a.s. there exists $\bar x\in X^*$ and a subsequence $\{k_\ell\}$ such that
$\lim_{\ell\rightarrow\infty}\Vert x^{k_\ell}-\bar x\Vert=0$. Clearly,
$\dist(x^{k_\ell},X^*)\le\Vert x^{k_\ell}-\bar x\Vert$ a.s., and therefore it follows that
$\lim_{\ell\rightarrow\infty}\dist(x^{k_\ell},X^*)=0$.
By (ii), $\{\dist(x^{k},X^*)\}$ a.s.-converges and hence $\lim_{k\rightarrow\infty}\dist(x^{k},X^*)=0$.
\end{proof}
We now prove asymptotic convergence of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}.
\begin{theorem}[Asymptotic convergence]\label{thm:convergence:hyper}
Under Assumptions \ref{assumption:holder:continuity}, \ref{assump:existence}-\ref{assump:monotonicity} and \ref{assump:iid:sampling:hyperplane}, either \emph{\textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}} stops at iteration $k+1$, in which case $x^k$ is a solution of \emph{VI}$(T,X)$, or it generates an infinite sequence $\{x^k\}$ that a.s. is bounded and such that $\lim_{k\rightarrow\infty}\dist(x^k,X^*)=0$. In particular, a.s. every cluster point of $\{x^k\}$ belongs to $X^*$.
\end{theorem}
\begin{proof}
If \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}} stops at iteration $k$, then
$
x^k=\Pi[x^k-\beta_k\widehat F(\xi^k,x^k)].
$
From this fact and Lemma \ref{lemma:proj}(iv) we have
\begin{equation}\label{thm:convergence:hyper:eq1}
\langle \widehat F(\xi^k,x^k),x-x^k\rangle\ge0,\quad\quad\forall x\in X.
\end{equation}
From Assumption \ref{assump:iid:sampling:hyperplane}, \eqref{equation:expected:valued:objective} and the facts that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$, we
get $\mathbb{E}[\widehat F(\xi^k,x^k)|\mathcal{F}_k]=T(x^k)$. Using this equality and the fact that $x^k\in\mathcal{F}_k$,
we take $\mathbb{E}[\cdot|\mathcal{F}_k]$ in \eqref{thm:convergence:hyper:eq1} and obtain
$\langle T(x^k),x-x^k\rangle\ge0$, for all $x\in X$. Hence $x^k\in X^*$.
We now suppose that the sequence $\{x^k\}$ is infinite. By Proposition \ref{prop:fejer:hyper}(iii),
it is sufficient to show that a.s. the bounded sequence $\{x^k\}$ has a cluster point in $X^*$. Choose any $x^*\in X^*$. As in Proposition \ref{prop:fejer:hyper}, set
$
\mathsf{c}_k(x^*):=\frac{c(x^*)}{\sqrt{N_k}}.
$
Using the property that $\mathbb{E}[\mathbb{E}[\cdot|\mathcal{F}_k]]=\mathbb{E}[\cdot]$, we take the expectation in Proposition \ref{prop:fejer:hyper}(i),
and get, for all $k\in\mathbb{N}_0$,
\begin{equation}\label{eecc}
\mathbb{E}\left[\Vert x^{k+1}-x^*\Vert^2\right]\le\left[1+\mathsf{c}_k(x^*)\right]\mathbb{E}\left[\Vert x^{k}-x^*\Vert^2\right]-
\mathbb{E}\left[\Vert y^k-x^k\Vert^2\right]+\mathsf{c}_k(x^*).
\end{equation}
From the fact that $\sum_k\mathsf{c}_k(x^*)<\infty$ (Assumption \ref{assump:iid:sampling:hyperplane}), \eqref{eecc} and Theorem \ref{thm:rob} we conclude that
\begin{equation}\label{thm:convergence:hyper:eq2}
\sum_{k=0}^\infty\mathbb{E}\left[\Vert y^k-x^k\Vert^2\right]<\infty,
\end{equation}
and that $\left\{\mathbb{E}\left[\Vert x^k-x^*\Vert^2\right]\right\}$ converges.
In particular, $\left\{\mathbb{E}\left[\Vert x^k-x^*\Vert^2\right]\right\}$ is a bounded sequence.
By the definition of \textsf{Algorithm \ref{algorithm:DSSA:hyperplane}}, we have that
$
\Vert y^k-x^k\Vert^2=\langle T(z^{k})+\bar\epsilon^{k}_2,x^{k}-z^{k}\rangle^2
\Vert T(z^{k})+\bar\epsilon^{k}_2\Vert^{-2}.
$
Hence, from \eqref{thm:convergence:hyper:eq2} we get
\begin{equation}\label{thm:convergence:hyper:eq3}
\lim_{k\rightarrow\infty}\mathbb{E}\Bigg[\frac{\langle T(z^{k})+\bar\epsilon^{k}_2,x^{k}-z^{k}\rangle^2}{\Vert T(z^{k})+\bar\epsilon^{k}_2\Vert^2}\Bigg]=0.
\end{equation}
From the definitions of $\{\bar\epsilon^k_1,\bar\epsilon^k_2,\bar\epsilon^k_3\}$ in \eqref{algo:noise:hyper:sample1}-\eqref{algo:noise:hyper:sample3},
Lemma \ref{lemma:decay:empirical:error} with $q=p=2$, Theorem \ref{thm:variance:error:with:line:search}(i) and the facts that $z^k=\overline z_{\beta_k}(\xi^k;\alpha_k,x^k)$ and $\widehat z^k=\overline z_{\beta_k}(\xi^k;\theta^{-1}\alpha_k,x^k)$, the property that $\mathbb{E}[\mathbb{E}[\cdot|\mathcal{F}_k]]=\mathbb{E}[\cdot]$ and the boundedness of $\left\{\mathbb{E}\left[\Vert x^k-x^*\Vert^2\right]\right\}$, we get
$$
\mathbb{E}\left[\Vert\bar\epsilon^k_s\Vert^2\right]\lesssim\frac{\sup_{k\in\mathbb{N}_0}\mathbb{E}\left[\Vert x^k-x^*\Vert^2\right]+1}{N_k},
$$
for $s\in\{1,2,3\}$ and all $k\in\mathbb{N}_0$. Since $\lim_{k\rightarrow\infty}N_k^{-1}=0$ (Assumption \ref{assump:iid:sampling:hyperplane}), we have in particular that, for $s\in\{1,2,3\}$,
\begin{equation}\label{thm:convergence:hyper:eq4}
\lim_{k\rightarrow\infty}\mathbb{E}[\Vert\bar\epsilon^k_s\Vert^2]=0.
\end{equation}
Since $\mathcal{L}^2$-convergence implies a.s.-convergence along a subsequence,
from \eqref{thm:convergence:hyper:eq3}-\eqref{thm:convergence:hyper:eq4}, we may take a (deterministic)
subsequence $\{k_\ell\}_{\ell=1}^\infty$ such that a.s. for $s\in\{1,2,3\}$,
\begin{eqnarray}
\lim_{\ell\rightarrow\infty}\frac{\alpha_{k_\ell}\langle T(z^{k_\ell})+\bar\epsilon^{k_\ell}_2,x^{k_\ell}-\Pi(g^{k_\ell})\rangle}{\Vert T(z^{k_\ell})+\bar\epsilon^{k_\ell}_2\Vert}=0,\label{thm:convergence:hyper:eq5}\\
\lim_{\ell\rightarrow\infty}\bar\epsilon^{k_\ell}_s=0,\label{thm:convergence:hyper:eq6}
\end{eqnarray}
using the fact that $x^k-z^k=\alpha_k[x^k-\Pi(g^k)]$. Since $\beta_k\in[\hat\beta,\tilde\beta]$ with $\hat\beta>0$,
we may refine $\{k_\ell\}$ if necessary so that, for some $\beta>0$,
\begin{equation}\label{thm:convergence:hyper:eq7}
\lim_{\ell\rightarrow\infty}\beta_{k_\ell}=\beta.
\end{equation}
From Proposition \ref{prop:fejer:hyper}(ii), the a.s.-boundedness of the sequence $\{x^{k_\ell}\}$
implies that, on a set $\Omega_1$ of total probability, there exists a (random) subsequence
$\mathfrak{N}\subset\{k_{\ell}\}_{\ell=1}^\infty$ such that
\begin{equation}\label{thm:convergence:hyper:eq8}
\lim_{k\in\mathfrak{N}}x^{k}=x^*,
\end{equation}
for some (random) $x^*\in\mathbb{R}^d$. Using the fact that $g^k=x^k-\beta_k[T(x^k)+\bar\epsilon^k_1]$,
\eqref{thm:convergence:hyper:eq6}-\eqref{thm:convergence:hyper:eq8} and the continuity of $T$ and $\Pi$,
for the event $\Omega_1$, we have
\begin{equation}\label{thm:convergence:hyper:eq9}
g^*:=\lim_{k\in\mathfrak{N}}g^k=x^*-\beta T(x^*).
\end{equation}
Also, for the event $\Omega_1$, from the definition of $z^k$ in \eqref{algo:hyperplane1}, the fact that
$\alpha_k\in(0,1]$, \eqref{thm:convergence:hyper:eq6} and \eqref{thm:convergence:hyper:eq8}-\eqref{thm:convergence:hyper:eq9},
we get that $\{T(z^k)+\bar\epsilon^k_2\}_{k\in\mathfrak{N}}$ is bounded so that, in view of \eqref{thm:convergence:hyper:eq5}, we obtain
\begin{equation}\label{thm:convergence:hyper:eq10}
\lim_{k\in\mathfrak{N}}\alpha_{k}\langle T(z^{k})+\bar\epsilon^{k}_2,x^{k}-\Pi(g^{k})\rangle=0.
\end{equation}
We now consider two cases for the event $\Omega_1$.
\textbf{\textsf{Case (i)}}: $\lim_{k\in\mathfrak{N}}\alpha_{k}\neq 0$. In this case, we may refine $\mathfrak{N}$
if necessary, and find some (random) $\bar\alpha >0$ such that $\alpha_k\ge\bar\alpha$
for all $k\in\mathfrak{N}$. It follows from \eqref{thm:convergence:hyper:eq10} that on $\Omega_1$,
\begin{equation}\label{thm:convergence:hyper:eq11}
\lim_{k\in\mathfrak{N}}\langle T(z^{k})+\bar\epsilon^{k}_2,x^{k}-\Pi(g^{k})\rangle
= 0.
\end{equation}
From \eqref{algo:armijo:rule2}-\eqref{algo:hyperplane1}, we get
\begin{equation}\label{thm:convergence:hyper:eq12}
\langle T(z^{k})+\bar\epsilon^{k}_2,x^{k}-\Pi(g^{k})\rangle\ge
\frac{\lambda}{\beta_{k}}\Vert x^{k}-\Pi(g^{k})\Vert^2
\ge\frac{\lambda}{\tilde\beta}\Vert x^{k}-\Pi(g^{k})\Vert^2
\end{equation}
for all $k$.
Relations \eqref{thm:convergence:hyper:eq11}-\eqref{thm:convergence:hyper:eq12} imply that, on $\Omega_1$,
\begin{equation}\label{thm:convergence:hyper:eq13}
0=\lim_{k\in\mathfrak{N}}\Vert x^{k}-\Pi(g^{k})\Vert.
\end{equation}
From \eqref{thm:convergence:hyper:eq8}-\eqref{thm:convergence:hyper:eq9}, we take limits in \eqref{thm:convergence:hyper:eq13} and obtain, by continuity of $\Pi$,
$$
0=\Vert x^*-\Pi[x^*-\beta T(x^*)]\Vert.
$$
Therefore, $x^* = \Pi[x^*-\beta T(x^*)]$,
so that $x^*\in X^*$ by Lemma \ref{lemma:proj}(iv).
\textbf{\textsf{Case (ii)}}: $\lim_{k\in\mathfrak{N}}\alpha_{k}=0$. In this case we have
\begin{equation}\label{thm:convergence:hyper:eq14}
\lim_{{k\in\mathfrak{N}}}\theta^{-1}\alpha_k=0.
\end{equation}
Since $\widehat z^k:=\theta^{-1}\alpha_k\Pi(g^k)+(1-\theta^{-1}\alpha_k)x^k$ and $\{g^k\}_{{k\in\mathfrak{N}}}$ is bounded, we get from \eqref{thm:convergence:hyper:eq8} and \eqref{thm:convergence:hyper:eq14} that
\begin{equation}\label{thm:convergence:hyper:eq15}
\lim_{k\in\mathfrak{N}}\widehat z^{k}=x^*.
\end{equation}
Observe that, by the definition of the line search rule \eqref{algo:armijo:rule2} and \eqref{algo:noise:hyper:sample3}, we have
\begin{equation}\label{thm:convergence:hyper:eq16}
\langle T(\widehat z^k)+\bar\epsilon^k_3,x^k-\Pi(g^k)\rangle <
\frac{\lambda}{\beta_k}\Vert x^k-\Pi(g^k)\Vert^2,
\end{equation}
for all $k\in\mathbb{N}_0$.
We take the limit in \eqref{thm:convergence:hyper:eq16} along $\mathfrak{N}$ and obtain,
using the continuity of $T$ and $\Pi$ and relations \eqref{thm:convergence:hyper:eq6}-\eqref{thm:convergence:hyper:eq9} and \eqref{thm:convergence:hyper:eq15}, that
\begin{equation}\label{thm:convergence:hyper:eq17}
\langle T(x^*),x^*- \Pi(g^*)\rangle\le
\frac{\lambda}{\beta}\Vert x^*-\Pi(g^*)\Vert^2.
\end{equation}
Since the sequence $\{x^k\}$ is feasible and $X$ is closed, the limit point $x^*$ belongs to $X$.
Thus, from \eqref{thm:convergence:hyper:eq17} and Lemma \ref{lemma:proj}(v), we get that, on $\Omega_1$,
\begin{equation}
\label{thm:convergence:hyper:eq18}
\lambda\Vert x^*-\Pi(g^*)\Vert^2\ge\beta\langle T(x^*),x^* -\Pi(g^*)\rangle
=\langle x^*-g^*,x^*-\Pi(g^*)\rangle
\ge\Vert x^*-\Pi(g^*)\Vert^2.
\end{equation}
Since $\lambda\in (0,1)$,
\eqref{thm:convergence:hyper:eq18}
implies that
$\Vert x^*-\Pi(g^*)\Vert =0$. Hence, in view of \eqref{thm:convergence:hyper:eq9}, we have
$x^* = \Pi(x^* - \beta T(x^*))$.
By Lemma \ref{lemma:proj}(iv), we conclude that $x^*\in X^*$.
We have proved that on the event $\Omega_1$ of total probability, both in case (i) and in case (ii),
$\{x^k\}$ has a cluster point which solves VI($T$,$X$). The claim follows from Proposition \ref{prop:fejer:hyper}(iii).
\end{proof}
\section{Discussion on the complexity constants of \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}}\label{section:conclusion}
Suppose the oracle is \emph{exact}. In that case, \textsf{Algorithm \ref{algorithm:DSSA:extragradient}} would have essentially the same rate estimates, up to universal constants and a factor of $\mathcal{O}(\ln L)$ in the oracle complexity, whether a line search scheme is used or a CSP with a known Lipschitz constant (LC) is used. The reason is that the Lipschitz continuity is only related to the \emph{smoothness class} of the operator. The situation is different when the oracle is \emph{stochastic}: the Lipschitz continuity also quantifies the \emph{spread of the oracle's error variance}.\footnote{This is true both for the martingale difference errors $\{\epsilon^k_i\}_{i=1,2}$ and for the correlated error $\epsilon^k_3$ in \eqref{equation:oracle:errors:DSSA:extragradient}. The Lipschitz continuity in the analysis of $\epsilon^k_3$ is crucial in our chaining and self-normalization arguments of Lemmas \ref{lemma:lnorm:process} and \ref{lemma:error:decay:empirical:process}.} Consequently, the lack of knowledge of the LC is much more demanding in the stochastic case. It is instructive to compare the complexity constants when the LC is known or not. In the following, we recall the rate of convergence of Theorem \ref{thm:rate:convergence} and the constants defined in Assumption \ref{assumption:holder:continuity}, Theorem \ref{thm:variance:error:with:line:search}, Lemma \ref{lemma:decay:empirical:error}, Remarks \ref{rem:constants:thm:correlated:error} and \ref{rem:constants:A:armijo} and Proposition \ref{prop:A:armijo} with $p=2$.
Suppose first the LC is known. This was already considered in \cite{iusem:jofre:oliveira:thompson2017} under a more general condition than Assumption \ref{assumption:holder:continuity}. However, it leads to weaker complexity constants as argued in the following.\footnote{See Assumption 3.8 in \cite{iusem:jofre:oliveira:thompson2017}. Differently from Lemma \ref{lemma:holder:continuity:mean:std:dev}, it allows the multiplicative noise to depend on the reference point $x^*\in X^*$.} It is possible to show that if the stronger but fairly general condition of Lemma \ref{lemma:holder:continuity:mean:std:dev} holds and $\hat\alpha=\mathcal{O}(\frac{1}{L_2})$, then the rate statement of Theorem \ref{thm:rate:convergence} and the estimates \eqref{equation:J}-\eqref{thm:rate:convergence:k0} are valid when we replace $\sigma_4(x^*)$ by $\sigma_2(x^*)$, $\overline{L}_4$ by $L_2$ and\footnote{Up to universal constants, $\mathsf{C}_2$ and $\mathsf{\overline{C}}_2$ are unchanged.} the coefficient $(1-6\lambda^2)[(\lambda\theta)\wedge\hat\alpha]$ by a term of order $1-\mathcal{O}(1)(\hat\alpha L_2)^2$. Since $\hat\alpha L_2\lesssim1$, we also have $\mathsf{C}_2\lesssim1$ and $\mathsf{\overline {C}}_2\lesssim1$. Assuming $L_2$ is known, we obtain a property not satisfied by the estimates in \cite{iusem:jofre:oliveira:thompson2017}: $k_0$ in \eqref{thm:rate:convergence:k0} is \emph{independent of the oracle's error variances $\{\sigma_2(x)^2\}_{x\in X}$ over $X$} and there exist $b$, $N$, $\mu$ and a policy $\hat\alpha=\mathcal{O}(\frac{1}{L_2})$ such that $k_0=0$. It is then possible to obtain the rate
\begin{eqnarray}
\min_{i=0,\ldots,k}\mathbb{E}\left[r(x^i)^2\right]
\lesssim\frac{L_2^2\Vert x^0-x^*\Vert^2+\sigma_2(x^*)^2}{k},\label{equation:rate:known:LC}
\end{eqnarray}
which depends only on the \emph{local} variance $\sigma_2(x^*)^2$ and the \emph{initial} iterate $x^0$. This can be seen as a \emph{variance localization property}. We note that the above rate is sharper than those obtained in \cite{iusem:jofre:oliveira:thompson2017} (see Section 3.4.1 therein).\footnote{In \cite{iusem:jofre:oliveira:thompson2017}, given $x^*\in X^*$, the rate is of the order of $\sigma(x^*)^4\cdot\max_{0\le i\le k_0(x^*)}\mathbb{E}[\Vert x^i-x^*\Vert^2]$, where $k_0(x^*)\in\mathbb{N}_0$ depends on $\sigma(x^*)$. See Assumption 3.8 in \cite{iusem:jofre:oliveira:thompson2017} for the definition of $\sigma(x^*)$.}
Consider now the more challenging regime when the LC is unknown. As expected, the constants in the rate of Theorem \ref{thm:rate:convergence} are less sharp than the ones in \eqref{equation:rate:known:LC}. First, \eqref{equation:rate:known:LC} is not explicitly dependent on the dimension $d$. In terms of dimension, the rate in Theorem \ref{thm:rate:convergence} is of order $\mathcal{O}(\frac{d}{N})$ and, thus, it is valid in the large sample regime $N:=\mathcal{O}(d)$. This is a manifestation of our need to treat correlated errors when using a line search scheme. Such a scheme is an inner statistical estimator for the LC. Second, if we set $\mathsf{M}:=(\hat\alpha\Lqrtnorm{\mathsf{L}(\xi)})^2$, then the constants in the rate of Theorem \ref{thm:rate:convergence} satisfy $\mathsf{C}_2\lesssim \frac{\mathsf{M}}{N}$, $\mathsf{\overline{C}}_2\lesssim\textsf{M}$ and $\frac{(\hat\alpha\widetilde L_2)^2\mathsf J}{N}\lesssim\mathsf{M}^2\mathsf{J}$, for a general\footnote{The given order of dependence on $\mathsf{M}$ for an unbounded $X$ is an artifact of our proof techniques. We believe a sharper dependence can be obtained via more sophisticated concentration inequalities (instead of moment inequalities).} $X$ and $\mathsf{C}_2\lesssim1$, $\mathsf{\overline{C}}_2\lesssim1$ and $\frac{(\hat\alpha\widetilde L_2)^2\mathsf{J}}{N}\lesssim \mathsf{M}\diam(X)^2$, for a compact $X$. Observe that a line search scheme can only estimate a \emph{lower bound} for $\Lqrtnorm{\mathsf{L}(\xi)}$. For a large $\hat\alpha$, the lack of an \emph{upper bound} leads to a rate with larger constants when compared to \eqref{equation:rate:known:LC}. This is a manifestation of our lack of information on the LC. Note that robust methods are expected to have nonoptimal constants since the endogenous parameters are unknown \cite{nem:jud:lan:shapiro2009}. Third, note that \eqref{equation:rate:known:LC} only depends on the \emph{initial} iterate $x^0$. 
This is possible since $k_0$ in \eqref{thm:rate:convergence:k0} can be calibrated using the knowledge of the LC. For an unknown LC and for an unbounded $X$, $k_0$ depends on $\mathsf{M}$ but it is still \emph{independent of the oracle's error moments $\{\sigma_4(x)\}_{x\in X}$ over $X$}. Differently from \eqref{equation:rate:known:LC}, for a large $\hat\alpha$ (implying a larger value for $\mathsf{M}$), the rate in Theorem \ref{thm:rate:convergence} will depend on $D_{k_0}^2(x^*):=\max_{k=0,\ldots,k_0}\mathbb{E}[\Vert x^k-x^*\Vert^2]$ for a possibly large $k_0$. Although not as sharp as \eqref{equation:rate:known:LC}, the resulting rate estimate for a large $k_0$ is not a limiting issue. It is still in accordance with, and in fact generalizes, previous estimates which rely on compactness of $X$ (see e.g. \cite{nem:jud:lan:shapiro2009}): for a compact $X$, we have $\max_{k=0,\ldots,k_0}\mathbb{E}[\Vert x^k-x^*\Vert^2]\le\diam(X)^2$.
\section*{Appendix}
\begin{proof}[Proof of Lemma \ref{lemma:holder:continuity:mean:std:dev}]
By Jensen's inequality and Assumption \ref{assumption:holder:continuity} we get
\begin{eqnarray*}
\Vert T(x)-T(x_*)\Vert\le\mathbb{E}\left[\Vert F(\xi,x)-F(\xi,x_*)\Vert\right]
\le\mathbb{E}[\mathsf{L}(\xi)]\Vert x-x_*\Vert^\delta.
\end{eqnarray*}
Using this fact and definition \eqref{equation:oracle:error}, we get
\begin{eqnarray*}
\Lqnorm{\Vert\epsilon(\xi,x)\Vert}&\le &\Lqnorm{\Vert F(\xi,x)-F(\xi,x_*)\Vert}+\Lqnorm{\Vert F(\xi,x_*)-T(x_*)\Vert}+\Lqnorm{\Vert T(x)-T(x_*)\Vert}\nonumber\\
&\le &\Lqnorm{\mathsf{L}(\xi)\Vert x-x_*\Vert^\delta}+\Lqnorm{\Vert\epsilon(\xi,x_*)\Vert}+L\Vert x-x_*\Vert^\delta\nonumber\\
&=&\Lqnorm{\Vert\epsilon(\xi,x_*)\Vert}+\left(\Lqnorm{\mathsf{L}(\xi)}+L\right)\Vert x-x_*\Vert^\delta,
\end{eqnarray*}
where we used the triangle inequality for $\Vert\cdot\Vert$ and Minkowski's inequality for $\Lqnorm{\cdot}$. The claim is proved from the above fact, \eqref{equation:oracle:error:variance} and $L_q=\Lqnorm{\mathsf{L}(\xi)}+L$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:recursion:armijo}]
By \eqref{algo:extragradient:armijo1}-\eqref{algo:extragradient:armijo2}, we invoke twice Lemma \ref{lemma:proj}(i) with $v:=\alpha_k\widehat F(\xi^k,x^k)$, $x:=x^k$ and $z:=z^k$ and with $v:=\alpha_k\widehat F(\eta^k,z^k)$, $x:=x^k$ and $z:=x^{k+1}$, obtaining, for all $x\in X$,
\begin{eqnarray}
2\langle\alpha_k\widehat F(\xi^k,x^k),z^k-x\rangle &\le &\Vert x^k-x\Vert^2-\Vert z^k-x\Vert^2-\Vert z^k-x^k\Vert^2,\label{equation:lemma:recursion:second:eq1}\\
2\langle\alpha_k\widehat F(\eta^k,z^k),x^{k+1}-x\rangle &\le &\Vert x^k-x\Vert^2-\Vert x^{k+1}-x\Vert^2-\Vert x^{k+1}-x^k\Vert^2.\label{equation:lemma:recursion:second:eq2}
\end{eqnarray}
We now set $x:=x^{k+1}$ in \eqref{equation:lemma:recursion:second:eq1} and sum the obtained relation with \eqref{equation:lemma:recursion:second:eq2} eliminating $\Vert x^k-x^{k+1}\Vert^2$. We thus get, for all $x\in X$,
\begin{eqnarray*}
\mathsf{I}&:=&2\langle\alpha_k\widehat F(\xi^k,x^k),z^k-x^{k+1}\rangle+2\langle\alpha_k\widehat F(\eta^k,z^k),x^{k+1}-x\rangle\\
&\le &\Vert x^k-x\Vert^2-\Vert x^{k+1}-x\Vert^2-\Vert z^k-x^{k+1}\Vert^2-\Vert z^k-x^k\Vert^2.
\end{eqnarray*}
Using definitions \eqref{equation:oracle:error}, \eqref{equation:empirical:mean:operator:&:error} and \eqref{equation:oracle:errors:DSSA:extragradient}, we have
\begin{eqnarray*}
\mathsf{I}&=&2\alpha_k\langle\widehat F(\xi^k,x^k)-\widehat F(\eta^k,z^k),z^k-x^{k+1}\rangle+2\langle\alpha_k\widehat F(\eta^k,z^k),z^k-x\rangle\\
&=&2\alpha_k\langle\widehat F(\xi^k,x^k)-\widehat F(\eta^k,z^k),z^k-x^{k+1}\rangle+2\alpha_k\langle T(z^k),z^k-x\rangle+2\alpha_k\langle\epsilon^k_2,z^k-x\rangle.
\end{eqnarray*}
The two previous relations imply that, for all $x\in X$,
\begin{eqnarray}
2\alpha_k\langle T(z^k),z^k-x\rangle &\le &2\alpha_k\langle\widehat F(\eta^k,z^k)-\widehat F(\xi^k,x^k),z^k-x^{k+1}\rangle+2\alpha_k\langle\epsilon^k_2,x-z^k\rangle\nonumber\\
&&+\Vert x^k-x\Vert^2-\Vert x^{k+1}-x\Vert^2-\Vert z^k-x^{k+1}\Vert^2-\Vert z^k-x^k\Vert^2\nonumber\\
&\le &2\alpha_k\Vert\widehat F(\eta^k,z^k)-\widehat F(\xi^k,x^k)\Vert\Vert z^k-x^{k+1}\Vert
+2\alpha_k\langle\epsilon^k_2,x-z^k\rangle\nonumber\\
&&+\Vert x^k-x\Vert^2-\Vert x^{k+1}-x\Vert^2-\Vert z^k-x^{k+1}\Vert^2-\Vert z^k-x^k\Vert^2\nonumber\\
&\le &2\alpha_k^2\Vert\widehat F(\eta^k,z^k)-\widehat F(\xi^k,x^k)\Vert^2+2\alpha_k\langle\epsilon^k_2,x-z^k\rangle\nonumber\\
&&+\Vert x^k-x\Vert^2-\Vert x^{k+1}-x\Vert^2-\Vert z^k-x^k\Vert^2,\label{equation:lemma:recursion:second:eq3}
\end{eqnarray}
where we used the Cauchy-Schwarz inequality in the second inequality and Lemma \ref{lemma:proj}(iii) with \eqref{algo:extragradient:armijo1}-\eqref{algo:extragradient:armijo2} in the third inequality.
Concerning the first term in the rightmost expression in \eqref{equation:lemma:recursion:second:eq3}, we have
\begin{eqnarray}
\alpha_k^2\Vert\widehat F(\eta^k,z^k)-\widehat F(\xi^k,x^k)\Vert^2 &\le &
3\alpha_k^2\Vert\widehat F(\xi^k,z^k)-\widehat F(\xi^k,x^k)\Vert^2\nonumber\\
&+&3\alpha_k^2\Vert\widehat F(\eta^k,z^k)-T(z^k)\Vert^2+3\alpha_k^2\Vert\widehat F(\xi^k,z^k)-T(z^k)\Vert^2\nonumber\\
&\le &3\lambda^2\Vert z^k-x^k\Vert^2+3\hat\alpha^2\Vert\epsilon^k_2\Vert^2+3\hat\alpha^2\Vert\epsilon^k_3\Vert^2,\label{equation:lemma:recursion:second:eq4}
\end{eqnarray}
using the triangle inequality and the fact that $(\sum_{i=1}^3a_i)^2\le3\sum_{i=1}^3a_i^2$ in the first inequality and the line search \eqref{algo:armijo:rule} and definitions in \eqref{equation:oracle:error}, \eqref{equation:empirical:mean:operator:&:error} and \eqref{equation:oracle:errors:DSSA:extragradient} in the last inequality.
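The elementary bound $(\sum_{i=1}^{3}a_i)^2\le3\sum_{i=1}^{3}a_i^2$ invoked above (a special case of the Cauchy-Schwarz inequality) is easy to spot-check numerically; the following Python sketch is illustrative only and plays no role in the proof:

```python
import random

def bound_holds(a):
    # (a1 + a2 + a3)^2 <= 3 * (a1^2 + a2^2 + a3^2),
    # a special case of the Cauchy-Schwarz inequality
    return sum(a) ** 2 <= 3 * sum(x * x for x in a) + 1e-9

random.seed(0)
triples = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(10_000)]
assert all(bound_holds(t) for t in triples)
print("bound verified on", len(triples), "random triples")
```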
From $z^k=\Pi[x^k-\alpha_k(T(x^k)+\epsilon^k_1)]$ and Lemma \ref{lemma:residual:decrease} with $\alpha_k\in(0,1]$, we also have
\begin{eqnarray}
\alpha_k^2 r(x^k)^2&\le & r_{\alpha_k}(x^k)^2\nonumber\\
&=&\Vert x^k-\Pi[x^k-\alpha_kT(x^k)]\Vert^2\nonumber\\
&\le &2\Vert x^k-z^k\Vert^2+2\Vert\Pi[x^k-\alpha_k(T(x^k)+\epsilon^k_1)]-\Pi[x^k-\alpha_k T(x^k)]\Vert^2\nonumber\\
&\le &2\Vert x^k-z^k\Vert^2+2\hat\alpha^2\Vert\epsilon^k_1\Vert^2\label{equation:lemma:recursion:second:eq5},
\end{eqnarray}
where we used Lemma \ref{lemma:proj}(iii) in the second inequality. The claim is proved using relations \eqref{equation:lemma:recursion:second:eq3}-\eqref{equation:lemma:recursion:second:eq5} with $x:=x^*$, for a given $x^*\in X^*$, definitions \eqref{def:A:armijo}-\eqref{def:M:armijo} and the facts that $0<1-6\lambda^2<1$ (see \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}) and $\langle T(z^k),z^k-x^*\rangle\ge0$, which follows from $\langle T(x^*),z^k-x^*\rangle\ge0$ (since $x^*\in X^*$) and Assumption \ref{assump:monotonicity}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:A:armijo}]
First, we obtain a bound on $\Vert z^k-x^*\Vert$ similar to \eqref{lemma:error:decay:emp:eq1} in the proof of Theorem \ref{thm:variance:error:with:line:search} and then take $\Lpnorm{\cdot|\mathcal{F}_k}$. Indeed, using the facts that $z^k=z(\xi^k;\alpha_k,x^k)$, $\epsilon^k_1=\widehat\epsilon(\xi^k,x^k)$ and $x^k\in\mathcal{F}_k$, we obtain
\begin{equation}\label{prop:A:armijo:eq1}
\Lpnorm{\Vert z^k-x^*\Vert|\mathcal{F}_k}\le(1+L\hat\alpha)\Vert x^k-x^*\Vert+\hat\alpha\Lpnorm{\Vert\epsilon^k_1\Vert|\mathcal{F}_k}.
\end{equation}
Lemma \ref{lemma:decay:empirical:error} with $q=p$, \eqref{equation:oracle:errors:DSSA:extragradient} and the facts that $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$ imply that
\begin{equation}\label{prop:A:armijo:eq2}
\Lpnorm{\Vert\epsilon^k_1\Vert|\mathcal{F}_k}\le C_p\frac{\sigma_p(x^*)+L_p\Vert x^k-x^*\Vert}{\sqrt{N_k}}.
\end{equation}
Lemma \ref{lemma:decay:empirical:error} with $q=p$, \eqref{equation:oracle:errors:DSSA:extragradient} and the facts that $z^k\in\widehat{\mathcal{F}}_k$, $\eta^k\perp\perp\widehat{\mathcal{F}}_k$ and $\Lpnorm{\Lpnorm{\cdot|\widehat{\mathcal{F}}_k}|\mathcal{F}_k}=\Lpnorm{\cdot|\mathcal{F}_k}$ imply that
\begin{equation}\label{prop:A:armijo:eq3}
\Lpnorm{\Vert\epsilon^k_2\Vert|\mathcal{F}_k}=\Lpnorm{\Lpnorm{\Vert\epsilon^k_2\Vert\big|\widehat{\mathcal{F}}_k}\Big|\mathcal{F}_k}\le C_p\frac{\sigma_p(x^*)+L_p\Lpnorm{\Vert z^k-x^*\Vert|\mathcal{F}_k}}{\sqrt{N_k}}.
\end{equation}
Finally, Theorem \ref{thm:variance:error:with:line:search}(i), \eqref{equation:oracle:errors:DSSA:extragradient}, Assumption \ref{assump:iid:sampling}, $0<\alpha_k\le\hat\alpha\le1$ and the facts that $z^k=z(\xi^k;\alpha_k,x^k)$, $x^k\in\mathcal{F}_k$ and $\xi^k\perp\perp\mathcal{F}_k$ imply that
\begin{equation}\label{prop:A:armijo:eq4}
\Lpnorm{\Vert\epsilon^k_3\Vert|\mathcal{F}_k}=\Lpnorm{\left\Vert\widehat\epsilon\left(\xi^k,z(\xi^k;\alpha_k,x^k)\right)\right\Vert\big|\mathcal{F}_k}\le \frac{\mathsf{c}_1\sigma_{2p}(x^*)+\overline{L}_{2p}\Vert x^k-x^*\Vert}{\sqrt{N_k}}.
\end{equation}
The required claim is proved by putting together relations \eqref{def:A:armijo}, \eqref{prop:A:armijo:eq1}-\eqref{prop:A:armijo:eq4} and using the facts that $\Lpddnorm{a^2|\mathcal{F}_k}=\Lpnorm{a|\mathcal{F}_k}^2$, $(a+b)^2\le2a^2+2b^2$, $\overline{L}_{2p}>L_pC_p$, $\mathsf{c}_1>C_p$ (as defined in Assumption \ref{assumption:holder:continuity}, Theorem \ref{thm:variance:error:with:line:search}, Lemma \ref{lemma:decay:empirical:error} and Remark \ref{rem:constants:thm:correlated:error}) and $\sigma_{2p}(x^*)\ge\sigma_p(x^*)$.
The proof for the case $X$ is compact is analogous but replacing \eqref{prop:A:armijo:eq1} by the facts that $\Vert x^k-x^*\Vert\le\diam(X)$ and $\Vert z^k-x^*\Vert\le\diam (X)$ and replacing \eqref{prop:A:armijo:eq4} by the bound of Theorem \ref{thm:variance:error:with:line:search}(ii).
\end{proof}
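All of the bounds \eqref{prop:A:armijo:eq2}-\eqref{prop:A:armijo:eq4} decay at the rate $N_k^{-1/2}$ typical of empirical averages of i.i.d. samples. The following toy Python illustration of this rate uses a scalar uniform ``oracle'' chosen purely for illustration; it is not the operator $F$ of the paper:

```python
import random
import statistics

def mean_abs_error(N, trials=2000, seed=1):
    # average |empirical mean - true mean| over many independent runs,
    # for N i.i.d. Uniform(-1, 1) samples (the true mean is 0)
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        s = sum(rng.uniform(-1.0, 1.0) for _ in range(N))
        errs.append(abs(s / N))
    return statistics.mean(errs)

# quadrupling the sample size should roughly halve the error
ratio = mean_abs_error(100) / mean_abs_error(400)
print(round(ratio, 2))
```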
\begin{remark}[Constants of Proposition \ref{prop:A:armijo}]\label{rem:constants:A:armijo}
Recall definitions in Assumption \ref{assumption:holder:continuity}, \textsf{Algorithm \ref{algorithm:DSSA:extragradient}}, Theorem \ref{thm:variance:error:with:line:search}, Lemma \ref{lemma:decay:empirical:error} and Remark \ref{rem:constants:thm:correlated:error}. Let $\mathsf{G}_{p}:=\sup_k\frac{C_pL_p\hat\alpha}{\sqrt{N_k}}$. The constants in Proposition \ref{prop:A:armijo} are given, for a general $X$, by\footnote{For simplicity we do not explore the decay with $N_k^{-1}$ in the mentioned constants.}
\begin{eqnarray*}
\mathsf{C}_{p}:=2\mathsf{c}_1^2\left[6\left(1+\mathsf{G}_{p}\right)^2+7-6\lambda^2\right],\quad\quad
\mathsf{\overline C}_{p}:=2\left[6\left(1+L\hat\alpha+\mathsf{G}_{p}\right)^2+7-6\lambda^2\right].
\end{eqnarray*}
For a compact $X$, the constants are $\mathsf{C}_p:=(26-12\lambda^2)C_p^2$ and $\mathsf{\overline{C}}_p:=26-12\lambda^2$.
\end{remark}
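For concreteness, the constants of the remark can be evaluated numerically; the parameter values below ($\mathsf{c}_1$, $\mathsf{G}_p$, $L\hat\alpha$, $\lambda$, $C_p$) are hypothetical placeholders, not values derived in the paper:

```python
def constants_general(c1, G_p, L_alpha_hat, lam):
    # C_p    = 2 c1^2 [6 (1 + G_p)^2 + 7 - 6 lam^2]            (general X)
    # Cbar_p = 2 [6 (1 + L*alpha_hat + G_p)^2 + 7 - 6 lam^2]
    C_p = 2.0 * c1 ** 2 * (6.0 * (1.0 + G_p) ** 2 + 7.0 - 6.0 * lam ** 2)
    Cbar_p = 2.0 * (6.0 * (1.0 + L_alpha_hat + G_p) ** 2 + 7.0 - 6.0 * lam ** 2)
    return C_p, Cbar_p

def constants_compact(C_p_lemma, lam):
    # C_p = (26 - 12 lam^2) C_p^2,  Cbar_p = 26 - 12 lam^2     (compact X)
    return (26.0 - 12.0 * lam ** 2) * C_p_lemma ** 2, 26.0 - 12.0 * lam ** 2

# hypothetical parameter values, for illustration only
print(constants_general(c1=1.5, G_p=0.1, L_alpha_hat=0.5, lam=0.2))
print(constants_compact(C_p_lemma=1.2, lam=0.2))
```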
\begin{proof}[Proof of Lemma \ref{lemma:recursion:hyper}]
By Lemma \ref{lemma:armijo:hyper:def}(ii), we have that $\gamma_k>0$. Thus
\begin{eqnarray}
\Vert x^{k+1}-x^*\Vert^2 & = & \Vert \Pi(y^k)-x^*\Vert^2\nonumber\\
&\le & \Vert y^k-x^*\Vert^2-\Vert y^k-\Pi(y^k)\Vert^2\nonumber\\
&\le & \Vert y^k-x^*\Vert^2\nonumber\\
&=& \Vert (x^k-x^*)-\gamma_k(T(z^k)+\bar\epsilon^k_2)\Vert^2\nonumber\\
&=& \Vert x^k-x^*\Vert^2+\gamma_k^2\Vert T(z^k)+\bar\epsilon^k_2\Vert^2
-2\gamma_k\langle T(z^k)+\bar\epsilon^k_2,x^k-x^*\rangle,\label{lemma:recursion:hyper:eq1}
\end{eqnarray}
using Lemma \ref{lemma:proj}(ii) in the first inequality.
Concerning the last term in the rightmost expression of \eqref{lemma:recursion:hyper:eq1}, we have
\begin{eqnarray}
-2\gamma_k\langle T(z^k)+\bar\epsilon^k_2,x^k-x^*\rangle
&=& -2\gamma_k\langle T(z^k)+\bar\epsilon^k_2,x^k-z^k\rangle+\nonumber\\
&&2\gamma_k\langle T(z^k),x^*-z^k\rangle+2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle
\nonumber\\
&=& -2\gamma_k(\gamma_k\Vert T(z^k)+\bar\epsilon^k_2\Vert^2)\nonumber\\
&&+2\gamma_k\langle T(z^k),x^*-z^k\rangle+2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle\nonumber\\
&\le & -2\gamma_k^2\Vert T(z^k)+\bar\epsilon^k_2\Vert^2+2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle,\label{lemma:recursion:hyper:eq2}
\end{eqnarray}
using the definition of $\gamma_k$ in the second equality, and, in the inequality, the facts that $\gamma_k>0$ and $\langle T(z^k),x^*-z^k\rangle\le0$ (which follows from the pseudo-monotonicity of $T$ and the facts that $x^*\in X^*$ and $z^k\in X$).
Combining \eqref{lemma:recursion:hyper:eq1}-\eqref{lemma:recursion:hyper:eq2} we get
\begin{eqnarray}
\Vert x^{k+1}-x^*\Vert^2
&\le & \Vert x^k-x^*\Vert^2+\gamma_k^2\Vert T(z^k)+\bar\epsilon^k_2\Vert^2
-2\gamma_k^2\Vert T(z^k)+\bar\epsilon^k_2\Vert^2+
2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle\nonumber\\
&=& \Vert x^k-x^*\Vert^2-\gamma_k^2\Vert T(z^k)+\bar\epsilon^k_2\Vert^2+ 2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle\nonumber\\
&=& \Vert x^k-x^*\Vert^2-\Vert y^k-x^k\Vert^2+ 2\gamma_k\langle\bar\epsilon^k_2,x^*-z^k\rangle,\label{lemma:recursion:hyper:eq3}
\end{eqnarray}
using the fact that $\Vert y^k-x^k\Vert=\gamma_k\Vert T(z^k)+\bar\epsilon^k_2\Vert$ (which follows from the definition of $\gamma_k$) in the last equality.
\end{proof}
\section{Introduction}
Superstring theories are theories without ultraviolet divergences.
They contain both gravitational and gauge interactions as low
energy limits\cite{1, 2}. Thus they offer a possible solution to
the problem of unifying all of the fundamental interactions in a
consistent quantum theory. In string theory, gravitons are
massless states of closed strings and gauge particles are massless
states of open strings. To study the relations between gravity and
gauge field, we should explore the relations between closed and
open strings. The duality between open and closed strings\cite{3,
4, 5, 6, 7, 8} also motivates us to explore the relations between
closed and open strings.
The simplest relation is that any excited mode of a free closed string,
$\left|N_L, N_R\right>\otimes \left|p\right>$, can be factorized into
left- and right-moving open string excited modes:
\begin{equation}
\left|N_L\right>\otimes\left|N_R\right>\otimes\left|p\right>.
\end{equation}
However, when we consider the interactions among strings, there are
nontrivial relations between closed and open string amplitudes. The
first nontrivial relation was given by Kawai, Lewellen and
Tye\cite{9}. They express an amplitude for $N$ closed strings on the
sphere ($S_2$) by the following equation\footnote{We use
$\epsilon_{\alpha\beta}$ to denote all the polarization tensors for
convenience. $\alpha$ corresponds to the left indices and $\beta$
corresponds to the right indices. If there are open strings on the
boundary of $D_2$, we use $\epsilon_{\alpha\beta\gamma}$ to denote
all the polarization tensors for convenience. $\gamma$ corresponds to
the indices of the open strings.}:
\begin{equation}\label{KLT relations}
\mathscr{A}_{S_2}^{(N)}=\epsilon_{\alpha\beta}\mathscr{A}_{S_2}^{(N)\alpha\beta}=\left(\frac{i}{2}\right)^{N-3}\kappa^{N-2}\epsilon_{\alpha\beta}
\sum\limits_{P,
P'}\mathscr{M}^{(N)\alpha}(P)\mathscr{\bar{M}}^{(N)\beta}(P')e^{i\pi
F(P, P')}.
\end{equation}
Here $\mathscr{A}_{S_2}^{(N)}$ is the amplitude for $N$ closed
strings on $S_2$ and $\mathscr{A}_{S_2}^{(N)\alpha\beta}$ is the
closed string amplitude without polarization tensors.
$\mathscr{M}^{(N)\alpha}(P)$ and
$\mathscr{\bar{M}}^{(N)\beta}(P')$ are the open string partial
amplitudes on $D_2$ corresponding to the left- and right-moving
sectors respectively. They are dependent on the orderings of the
external legs. If we sum over the orderings $P$ and $P'$, we get
the total amplitudes $\sum\limits_P\mathscr{M}^{(N)\alpha}(P)$ and
$\sum\limits_{P'}\mathscr{\bar{M}}^{(N)\beta}(P')$ for the left-
and the right-moving open strings respectively. Then we can see,
except for a phase factor, a closed string amplitude on $S_2$ can
be factorized by two open string tree amplitudes corresponding to
the left- and right-moving sectors(see fig. \ref{fig1}.(a)).
There is no interaction between left- and right-moving open
strings. Any closed string polarization tensor has left and right
indices, which correspond to the left- and the right-moving modes
respectively. The left and right indices of polarization tensors
must contract with the indices in the amplitude for left- and
right-moving open strings respectively. The phase factor is
entirely independent of which open and closed string theories we
are considering. It only depends on $P$ and $P'$. Contour
deformations can be used to reduce the number of the terms in
eq.\eqref{KLT relations}. The number of the terms can be reduced
to
\begin{equation}\label{KLT term reduction1}
(N-3)!(\frac{1}{2}(N-3))!(\frac{1}{2}(N-3))!, \text{$N$ odd},
\end{equation}
and
\begin{equation}\label{KLT term reduction2}
(N-3)!(\frac{1}{2}(N-2))!(\frac{1}{2}(N-4))!, \text{$N$ even}.
\end{equation}
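As a quick sanity check, the reduced term counts \eqref{KLT term reduction1} and \eqref{KLT term reduction2} can be tabulated with a short Python snippet (illustrative only):

```python
from math import factorial

def klt_terms(N):
    # number of terms after the contour-deformation reduction:
    #   (N-3)! ((N-3)/2)! ((N-3)/2)!  for N odd,
    #   (N-3)! ((N-2)/2)! ((N-4)/2)!  for N even
    if N % 2 == 1:
        h = (N - 3) // 2
        return factorial(N - 3) * factorial(h) ** 2
    return factorial(N - 3) * factorial((N - 2) // 2) * factorial((N - 4) // 2)

for N in range(4, 9):
    print(N, klt_terms(N))
# 4 -> 1, 5 -> 2, 6 -> 12, 7 -> 96, 8 -> 1440
```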
\newline
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.7\textwidth]{relations.eps}
\end{center}
\caption{(a) A closed string amplitude on $S_2$ can be factorized into
two open string tree amplitudes corresponding to the left- and
right-sectors. (b) A closed string amplitude on $D_2$ can be given
by connecting the open string world-sheets for the two sectors with
a time reverse in the right-moving sector. (c) A closed string
amplitude on $RP_2$ can be given by connecting the open string
world-sheets for the two sectors with a time reverse and a twist in
the right-moving sector.} \label{fig1}
\end{figure}
\newline
In the low energy limits, the massive modes decouple. Only
massless states are left. Then KLT relations can be used to
factorize the amplitudes for gravitons into products of two
amplitudes for gauge particles. Gauge theory has a better
ultraviolet behavior than gravity. Then KLT relations can be used
to investigate the ultraviolet properties of gravity. Research
with KLT relations supports that $N=8$ supergravity may be
finite\cite{11, 12, 13, 14, 15, 16}. However, a question arises:
Do KLT factorization relations hold for any gravity amplitude? In
string theory, to calculate the S-matrix, we should sum over all
the topologies of world-sheets. $S_2$ is just the most simple
topology. If we consider other topologies, we should reconsider
the relations between closed and open strings. Then the question
becomes: Do the factorization relations hold for any topology?
Earlier works\cite{17, 25, 26, 27} have given some insights into the
relations on the disk ($D_2$). In \cite{17}, some examples of the
relations on $D_2$ are given. In \cite{25, 26, 27}, the simplest
processes of D-brane and closed string interactions are discussed. In
the paper by Garousi and Myers\cite{25}, they found that the
two-point scattering amplitudes of closed strings from a D-brane in
Type II theory are identical to the four-point open string
amplitudes upon a certain identification between the momenta and
polarizations. In \cite{26, 27}, the amplitude for one closed string
and two open strings attached to a D-brane is calculated. They
showed that this amplitude is also identical to the four-point
open string amplitude. In these examples, the KLT factorization
relations do not hold. This implies that the KLT factorization
relations may not hold for general amplitudes on $D_2$. The
amplitudes on the real projective plane ($RP_2$) have structures
similar to open string amplitudes\cite{30}. In fact, both $D_2$ and $RP_2$
can be obtained from a sphere with a $\mathbb{Z}_2$ identification.
Then the KLT factorization relations may also not hold in the $RP_2$
case.
In this paper, we consider the general amplitudes on $D_2$ and
$RP_2$. These two cases contribute to the higher-order tree
amplitude\cite{1} for closed strings. We find that the factorization
relations\eqref{KLT relations} do not hold on $D_2$ and $RP_2$. The
amplitudes with closed strings on $D_2$ and $RP_2$ can not be
factorized by the left- and the right-moving open string amplitudes.
The amplitudes satisfy new relations. Particularly, an amplitude for
$N$ closed strings on $D_2$ can be given by an amplitude for $2N$
open strings:
\begin{equation}\label{A_(D_2)^(N,0)}
\mathscr{A}_{D_2}^{(N)}=\epsilon_{\alpha\beta}\mathscr{A}_{D_2}^{(N)\alpha\beta}=\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}\epsilon_{\alpha\beta}\sum\limits_P\mathscr{M}^{(2N)\alpha\beta}(P)e^{i\pi\Theta(P)}.
\end{equation}
In this equation, $\mathscr{M}^{(2N)\alpha\beta}(P)$ is the tree
amplitude for $2N$ open strings. $N$ open strings come from the
left-moving sector and the other $N$ open strings come from the
right-moving sector. The left- and the right-moving sectors are not
independent of each other. The two sectors are connected into a
single sector. Then the left indices contract with the right
indices. The reason is that the left-(right-)moving waves must be
reflected at the boundary of $D_2$ and then become
right-(left-)moving waves. Then the interactions between the
left-(right-)moving waves and their reflected waves become
interactions between the two sectors. If there are open strings on
the boundary of $D_2$, the left- and the right-moving sectors of
closed strings also interact with the open strings, so an
amplitude for $N$ closed strings and $M$ open strings on $D_2$ can
be given by a tree amplitude for $2N+M$ open strings except for a
phase factor:
\begin{equation}
\label{A_(D_2)^(N,M)} \mathscr{A}_{D_2}^{(N,
M)}=\epsilon_{\alpha\beta\gamma}\mathscr{A}_{D_2}^{(N,
M)\alpha\beta\gamma}=\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}g^M\epsilon_{\alpha\beta\gamma}
\sum\limits_P\mathscr{M}^{(2N,M)\alpha\beta\gamma}(P)e^{i\pi\Theta'(P)}.
\end{equation}
The amplitudes on $RP_2$ can also be expressed through a single
open string amplitude:
\begin{equation}\label{A_RP2^(N)}
\mathscr{A}_{RP_2}^{(N)}=\epsilon_{\alpha\beta}\mathscr{A}_{RP_2}^{(N)\alpha\beta}=-\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}\epsilon_{\alpha\beta}\sum\limits_P\mathscr{M}^{(2N)\alpha\beta}(P)e^{i\pi\Theta(P)}.
\end{equation}
In this case, there is a crosscap rather than a boundary. However,
the left-(right-) moving waves are also reflected at the crosscap
and turn into the right-(left-)moving waves. Then there are also
interactions between left- and right-moving sectors of closed
strings. The two sectors are connected into one single sector again.
The phase factors in \eqref{A_(D_2)^(N,M)} and \eqref{A_RP2^(N)} are
complicated in concrete calculations. By considering the contour
deformations, the number of the terms can be reduced\cite{9, 29}. Note
that the relations on $D_2$ are the same as those on $RP_2$
except for a minus sign. In a theory containing both $D_2$ and $RP_2$,
the two amplitudes cancel out. Under a T-duality, the relation
\eqref{A_(D_2)^(N,M)} gives the amplitude for $N$ closed strings and
$M$ open strings attached to a D-brane by pure open string
amplitudes, while the relation \eqref{A_RP2^(N)} gives the amplitude
for $N$ closed strings scattering from an O-plane by pure open
string amplitudes. In this case, the amplitudes on $D_2$ and $RP_2$
cannot cancel out.
An important fact used in this paper is that the amplitudes
with closed strings are invariant under conformal transformations in
each single sector. This allows us to transform the form of the
interactions between the left- and right-moving sectors. After an
appropriate transformation in one sector, the interactions between
the left- and the right-moving sectors have the same form as the
interactions between open strings in a single sector. Then we can
treat the two sectors of $N$ closed strings as a single sector with
$2N$ open strings.
In the low energy limit of an unoriented open string theory, the
amplitudes for $N$ closed strings on $D_2$, $RP_2$ and $S_2$
contribute to the tree amplitudes for $N$ gravitons. In this case,
we can not only use KLT factorization relations on $S_2$ but also
use the relations on $D_2$ and $RP_2$ to calculate the tree
amplitudes for gravitons. The amplitudes for $N$ closed strings and
$M$ open strings on $D_2$ become tree amplitudes for $N$ gravitons
and $M$ gauge particles. Then the gauge-gravity interactions can be
given by pure gauge interactions.
The structure of this paper is as follows. In section \ref{relations
on D_2} we will consider the correlation functions and the
amplitudes on $D_2$. We will show KLT factorization relations do
not hold on $D_2$. We will give the relations between closed string
amplitudes on $D_2$ and open string tree amplitudes. We will also
give the relations in the case of $N$ closed strings and $M$ open
strings inserted on $D_2$. In section \ref{relations on RP_2} we
will consider $RP_2$. We will show KLT factorization relations do
not hold on $RP_2$. The relations between amplitudes on $RP_2$ and
open string amplitudes will be given in this section. Our conclusion
will be given in section \ref{conclusion}.
\section{Relations between amplitudes on $D_2$ and open
string tree amplitudes} \label{relations on D_2} In this section, we
will show that the correlation functions on $D_2$ cannot be factorized
into the left- and the right-moving sectors. The two sectors are
connected together. Then we will give the relations between
amplitudes on $D_2$\cite{1, 2, 18, 19, 22} and open string tree
amplitudes\cite{1, 2, 22, 23, 24, 25, 26, 27, 28}.
In string theory, the vertex operator for any closed string can be
written as
\begin{equation}
\mathscr{V}(\omega,
\bar{\omega})=\mathscr{V}_L(\omega)\tilde{\mathscr{V}}_R(\bar{\omega})\mathscr{V}_0(\omega,
\bar{\omega}),
\end{equation}
where $\omega=\tau+i\sigma$. $\mathscr{V}_L$ and $\mathscr{V}_R$
are nonzero modes of open string vertex operators. They correspond
to the left- and the right-moving sectors. $\mathscr{V}_0(\omega,
\bar{\omega})$ corresponds to the zero modes. Thus, the closed
string vertex operators can be factorized by two open string
vertex operators corresponding to the left- and the right-moving
sectors (except for the zero modes).
\newline
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{2.eps}
\end{center}
\caption{Only the annihilation modes are reflected at the boundary
of $D_2$.} \label{fig2}
\end{figure}
\newline
Now we consider the correlation function of vertex operators. On
$S_2$ the left- and the right-moving waves are independent of each
other. Then a correlation function on $S_2$ can be factorized into
the left- and the right-moving sectors\cite{1, 2}. However, when
we add a boundary to $S_2$, we get $D_2$. The left- and the
right-moving waves must be reflected at the boundary of $D_2$. The
reflection waves of the left-moving waves are in the right-moving
sector and the reflection waves of the right-moving waves are in
the left-moving sector. Waves must interact with their reflected
waves, so there must be interactions between the two sectors. To
see this, we should use the boundary state\cite{20, 21} to give
the correlation functions on $D_2$. The correlation function for $N$
closed strings on $D_2$ is
\begin{equation}\label{D2 correlation}
\left<0\mid\mathscr V_N(\omega,
\tilde{\omega})...\mathscr{V}_1(\omega, \tilde{\omega})\mid
B\right>,
\end{equation}
where $\mid B\rangle\equiv B\mid 0\rangle$ is the boundary state
for $D_2$. In this paper, for convenience, we use the bosonized
vertex operators\footnote{Here $\phi_i(z) (i=1...5)$ and
$\tilde{\phi}_i(\bar{z}) (i=1...5)$ are bosonic fields. They are
used to bosonize holomorphic and antiholomorphic fermionic fields
and spinor fields. $\phi_6(z)$ and $\tilde{\phi}_6(\bar{z})$ are
used to bosonize the holomorphic and antiholomorphic
superconformal ghost respectively. $\epsilon$ and $\bar{\epsilon}$
correspond to the components of polarization tensors contracting
with bosonic fields $\partial X$ and $\bar{\partial}X$
respectively. $\varepsilon$ and $\bar{\varepsilon}$ correspond to
the components contracting with $\partial\phi$ and
$\bar{\partial}\tilde{\phi}$ respectively. We pick up the pieces
multilinear in $\epsilon$, $\bar{\epsilon}$ and $\varepsilon$,
$\bar{\varepsilon}$, then replace these polarization vectors by
the polarization tensor of the vertex operator.
$\lambda_i'=i\lambda_i$ and
$\tilde{\lambda}_i'=i\tilde{\lambda}_i$ (i=1...5) are vectors in the
weight lattice\cite{18, 19} of the left- and right-moving sectors
respectively. $q$ and $\tilde{q}$ are the $\gamma$ ghost number in
the left- and right-moving sectors respectively. We use $\circ$ to
denote the inner product in the five dimensional weight space and
use $\cdot$ to denote the inner product in the space-time.
\\Physical vertices containing higher derivatives can be transformed
into vertices with only first derivatives. In fact, we can integrate
by parts to reduce the order of the derivatives. After the
integrals on the world-sheet, the surface terms vanish.
Redefining the polarization tensor, the vertices then turn into those
containing only first derivatives.}
\begin{equation}\label{bosonized vertex operator}
\begin{split}
\mathscr{V}(\omega,\bar{\omega})=&:\exp{(q\phi_6+\tilde{q}\tilde{\phi}_6)}
\\&\exp{(i\lambda\circ\phi+i\sum\limits_{i=1}^{m}\varepsilon^i\circ\partial\phi_i
+i\tilde{\lambda}\circ\tilde{\phi}+i\sum\limits_{i=1}^{\tilde{m}}\bar{\varepsilon}^i\circ\bar{\partial}\tilde{\phi_i}
)}
\\&\exp{(ik\cdot
X+i\sum\limits_{i=1}^n\epsilon^i\cdot\partial
X+i\sum\limits_{j=1}^{\tilde{n}}\bar{\epsilon}^j\cdot\bar{\partial}X)}(\omega,\bar{\omega}):|_{multilinear}
\end{split}
\end{equation}
With the definition of normal ordering, we have
\begin{equation}
\mathscr{V}(\omega,\bar{\omega})=\mathscr{V}^{(+)}_L(\omega)\mathscr{V}^{(-)}_L(\omega)\tilde{\mathscr{V}}^{(+)}_R(\bar{\omega})\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega})\mathscr{V}_0(\omega,
\bar{\omega}),
\end{equation}
where $(+)$ and $(-)$ correspond to the creation modes and the
annihilation modes respectively. In $\mathscr{V}_0$, we consider
$x$ as a creation operator and $p$ as an annihilation operator. Then in
the normal ordering, $x$ must be on the left of $p$.
The
bosonized boundary operator is\footnote{Here, we only consider
Neumann boundary condition for convenience. The case with Dirichlet
boundary conditions has similar relations.}
\begin{equation}
\begin{split}
B&=\exp{(\sum\limits_{n=1}^\infty
a_n^{\dagger}\cdot\tilde{a}_{n}^{\dagger})}\otimes\exp{(\sum\limits_{n=1}^\infty
b_n^{\dagger}\circ\tilde{b}_{n}^{\dagger})}\otimes\exp{(\sum\limits_{n=1}^\infty
c_n^{\dagger}\tilde{c}_{n}^{\dagger})},
\end{split}
\end{equation}
where $a^{\dag}$ and $\tilde{a}^{\dag}$ are creation modes of $X$,
$b^{\dag}$ and $\tilde{b}^{\dag}$ are creation modes of $\phi_i$
and $\tilde{\phi_i}$ respectively, $c^{\dag}$ and
$\tilde{c}^{\dag}$ are creation modes of $\phi_6$ and
$\tilde{\phi}_6$ respectively. To get the correlation function on
$D_2$ we substitute the bosonized vertex operators and the
bosonized boundary operators into \eqref{D2 correlation}.
We can move the boundary operator $B$ to the left of all the
vertex operators. Then use the creation operators in $B$ to
annihilate the state $\langle 0 \mid$. Because $B$ is constructed
by creation operators, it commutes with the creation modes and the
zero modes of the vertex operators and does not commute with the
annihilation modes of the vertex operators. It means only the
annihilation modes are reflected at the boundary(see fig.
\ref{fig2}). When we move $B$ to the left of the annihilation
modes of the vertex operators $\mathscr{V}^{(-)}_L(\omega)$ and
$\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega})$, the ``images'' of the
annihilation modes $\tilde{\mathscr{V}}^{(+)}_L(-\omega)$ and
$\mathscr{V}^{(+)}_R(-\bar{\omega})$ are created respectively.
Though $\tilde{\mathscr{V}}^{(+)}_L(-\omega)$ depends on
$\omega$, it is constructed by $\tilde{a}^{\dag}$,
$\tilde{b}^{\dag}$ and $\tilde{c}^{\dag}$. It must interact with
operators constructed by $\tilde{a}$, $\tilde{b}$ and $\tilde{c}$.
In a similar way, $\mathscr{V}^{(+)}_R(-\bar{\omega})$ must
interact with operators constructed by $a$, $b$ and $c$. Then the
correlation function can be factorized as
\begin{equation}\label{correlation function on D2}
\begin{split}
&\left<\mathscr{V}^{(+)}_L(\omega_N)\mathscr{V}^{(-)}_L(\omega_N)\mathscr{V}^{(+)}_R(-\bar{\omega}_N)...\mathscr{V}^{(+)}_L(\omega_1)\mathscr{V}^{(-)}_L(\omega_1)\mathscr{V}^{(+)}_R(-\bar{\omega}_1)\right>
\\\times &\left<\tilde{\mathscr{V}}^{(+)}_R(\bar{\omega}_N)\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega}_N)\tilde{\mathscr{V}}^{(+)}_L(-\omega_N)...\tilde{\mathscr{V}}^{(+)}_R(\bar{\omega}_1)\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega}_1)\tilde{\mathscr{V}}^{(+)}_L(-\omega_1)\right>
\\\times &\left<\mathscr{V}_0(\omega_N, \tilde{\omega}_N)...\mathscr{V}_0(\omega_1,
\tilde{\omega}_1)\right>.
\end{split}
\end{equation}
Here, the first correlation function contains only operators
constructed by $a$, $b$, $c$ and $a^{\dag}$, $b^{\dag}$,
$c^{\dag}$; the second correlation function contains only operators
constructed by $\tilde{a}$, $\tilde{b}$, $\tilde{c}$ and
$\tilde{a}^{\dag}$, $\tilde{b}^{\dag}$, $\tilde{c}^{\dag}$; the
third correlation function contains only operators constructed by
the zero modes. Though the nonzero modes are factorized into two
correlation functions, both of them contain the interactions
between left- and the right-moving sectors. Actually, in the first
correlation function, if we move $\mathscr{V}^{(-)}_L(\omega_i)$
to the right of $\mathscr{V}^{(+)}_L(\omega_j)$, we get the
interaction in the left-moving sector and if we move
$\mathscr{V}^{(-)}_L(\omega_i)$ to the right of
$\mathscr{V}^{(+)}_R(-\bar{\omega}_j)$, we get the interactions
between the two sectors. In the same way, the second correlation
function contains interactions in the right-moving sector and the
interactions between the left- and right-moving sectors. The
interactions between the two sectors are just the interactions
between vertex operators and their images. Thus the correlation
function on $D_2$ cannot be factorized into the two sectors;
the interactions between the two sectors connect them together.
To get the relations between amplitudes, we should calculate the
correlation function, then integrate over the fundamental region
and divide the integrals by the volume of the conformal Killing
group\cite{1, 2, 22, 23}. For convenience, we use the $z$
coordinate instead of the $\omega$ coordinate. They are connected by a
conformal transformation:
\begin{equation}
z=e^{\omega}.
\end{equation}
Then the amplitude for $N$ closed strings on $D_2$ becomes
\begin{align}\label{AD2superstring}
\nonumber\mathscr{A}_{D_2}^{(N,0)}&=\kappa^{N-1}\int_{|z|<1}
\prod\limits_{i=1}^Nd^2z_i\frac{|1-z_o\bar{z_o}|^2}{2\pi d^2z_o}
\\&\nonumber\times\prod\limits_{s>r}(z_s-z_r)^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\lambda_s-q_rq_s}(\bar{z}_r-\bar{z}_s)^{\frac{\alpha'}{2}k_r\cdot
k_s+\tilde{\lambda}_r\circ\tilde{\lambda}_s-\tilde{q}_r\tilde{q}_s}
\prod\limits_{r,
s}(1-(z_r\bar{z}_s)^{-1})^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\tilde{\lambda}_s-q_r\tilde{q}_s}
\\&\nonumber\times \exp{\sum\limits_{r=1}^N\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{\tilde{n}_r}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\bar{\epsilon}_r^{(j)}
-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{\tilde{m}_r}\varepsilon_r^{(i)}\circ\bar{\varepsilon}_r^{(j)}\right)(1-|z_r|^2)^{-2}}
\\&\times \exp{\sum\limits_{s>r}\left[\left(\sum\limits_{i=1}^{\tilde{n}_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\bar{\epsilon}_r^{(i)}\cdot\epsilon_s^{(j)}
-\sum\limits_{i=1}^{\tilde{m}_r}\sum\limits_{j=1}^{m_s}\bar{\varepsilon}_r^{(i)}\circ\varepsilon_s^{(j)}\right)(1-\bar{z}_rz_s)^{-2}+c.c.\right]}
\\&\nonumber\times\exp{\left[-\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(z_s-z_r)^{-2}+c.c.
\right]}
\\&\nonumber\times \exp{\sum\limits_{r\neq s}\left[\left(\sum\limits_{i=1}^{n_s}\left(-\frac{\alpha'}{2}\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)((z_r-z_s)^{-1}+(\bar{z_r}^{-1}-z_s)^{-1})+c.c.\right]}
\\&\nonumber\times
\exp{\sum\limits_{r=1}^N\left[\left(\left(-\frac{\alpha'}{2}\right)k_r\cdot\sum\limits_{i=1}^{n_r}\epsilon_r^{(i)}
-\lambda_r\circ\sum\limits_{i=1}^{m_r}\varepsilon_r^{(i)}\right)((\bar{z_r}^{-1}-z_r)^{-1}+{z_r}^{-1})+c.c.\right]}|_{multilinear},
\end{align}
where we have
$\sum\limits_{r=1}^N\lambda_r=\sum\limits_{r=1}^N\tilde{\lambda}_r=0$,
$\sum\limits_{r=1}^Nk_r=0$ and
$\sum\limits_{r=1}^N(q_r+\tilde{q}_r)=-2$, corresponding to the
conservation of fermion number, the conservation of momentum and the
fact that the background superghost number is $-2$. $\frac{2\pi
d^2z_o}{|1-z_o\bar{z_o}|^2}$ is the volume element of the
CKG\footnote{To divide the amplitude by the volume of the CKG, we can
fix three real coordinates. We can also fix two real coordinates or
one complex coordinate, and then divide the amplitude by the volume of
the remaining one-parameter subgroup. The two methods are equivalent.
Here, we use the second method to fix $z_1=z_o$.}; it can be used to fix
one complex coordinate.
An integral over the fundamental region $|z|<1$ is equal to an
integral over the other fundamental region $|z|>1$. So we can use
${\left(\frac{1}{2}\right)}^{N-1}\int\limits_{\mathbb{C}}\prod\limits^N_{i=1}d^2z_i$
instead of the integrals over the unit disk. For any
$z_r=x_r+iy_r$, the $z_r$ integral can be written as
$\int\limits_{-\infty}^{\infty}dx_r\int\limits_{-\infty}^{\infty}dy_r$.
We then follow the same steps as in~\cite{9}: we rotate the contours of the $y$ integrals from the real axis to the pure imaginary axis. The fixed
point should be transformed simultaneously to guarantee
conformal invariance. Define the new variables:
\begin{equation}\label{complex to real}
\begin{split}
\xi_1=\xi_o=x_o+iy_o &, \eta_1=\eta_o=x_o-iy_o, \\\xi_r\equiv
x_r+iy_r &, \eta_r\equiv x_r-iy_r\text{ }(r>1).
\end{split}
\end{equation}
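To see why the rotated integrals are real, write $y_r=iu_r$ with $u_r$ real after the contour rotation; then
\begin{equation}
\xi_r=x_r+iy_r=x_r-u_r, \qquad \eta_r=x_r-iy_r=x_r+u_r,
\end{equation}
so $\xi_r$ and $\eta_r$ become two independent real variables rather than complex conjugates of each other.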
Then the integrals become real integrals:
\begin{align}\label{AD2real integrals}
\nonumber\mathscr{A}_{D_2}^{(N,0)}&=\kappa^{N-1}\left(\frac{1}{2}\right)^{N-1}\int
\prod\limits_{i=1}^Nd\xi_id\eta_i\frac{|1-\xi_o\eta_o|^2}{2\pi
d\xi_od\eta_o}
\\&\nonumber\times\prod\limits_{s>r}(\xi_s-\xi_r)^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\lambda_s-q_rq_s}(\eta_r-\eta_s)^{\frac{\alpha'}{2}k_r\cdot
k_s+\tilde{\lambda}_r\circ\tilde{\lambda}_s-\tilde{q}_r\tilde{q}_s}
\prod\limits_{r,
s}(1-(\xi_r\eta_s)^{-1})^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\tilde{\lambda}_s-q_r\tilde{q}_s}
\\&\nonumber\times \exp{\sum\limits_{r=1}^N\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{\tilde{n}_r}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\bar{\epsilon}_r^{(j)}
-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{\tilde{m}_r}\varepsilon_r^{(i)}\circ\bar{\varepsilon}_r^{(j)}\right)(1-\xi_r\eta_r)^{-2}}
\\&\times \exp{\sum\limits_{s>r}\left[\left(\sum\limits_{i=1}^{\tilde{n}_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\bar{\epsilon}_r^{(i)}\cdot\epsilon_s^{(j)}
-\sum\limits_{i=1}^{\tilde{m}_r}\sum\limits_{j=1}^{m_s}\bar{\varepsilon}_r^{(i)}\circ\varepsilon_s^{(j)}\right)(1-\eta_r\xi_s)^{-2}+c.c.\right]}
\\&\nonumber\times\exp{\left[-\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(\xi_s-\xi_r)^{-2}+c.c.
\right]}
\\&\nonumber\times \exp{\sum\limits_{r\neq s}\left[\left(\sum\limits_{i=1}^{n_s}\left(-\frac{\alpha'}{2}\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)((\xi_r-\xi_s)^{-1}+(\eta_r^{-1}-\xi_s)^{-1})+c.c.\right]}
\\&\nonumber\times
\exp{\sum\limits_{r=1}^N\left[\left(\left(-\frac{\alpha'}{2}\right)k_r\cdot\sum\limits_{i=1}^{n_r}\epsilon_r^{(i)}
-\lambda_r\circ\sum\limits_{i=1}^{m_r}\varepsilon_r^{(i)}\right)((\eta_r^{-1}-\xi_r)^{-1}+{\xi_r}^{-1})+c.c.\right]}|_{multilinear},
\end{align} The real variables $\xi_r$ correspond to the
left-moving sector and the $\eta_r$ correspond to the right-moving
sector.
An open string tree amplitude for $M$ bosonized vertices has the
form
\begin{equation}\label{open string amplitude}
\begin{split}
\mathscr{M}_{D_2}^{(M)}&=(g)^{M-2}\int \prod\limits_{i=1}^{M}d
x_i\frac{|x_a-x_b||x_b-x_c||x_c-x_a|}{d x_ad x_bd
x_c}\prod\limits_{s>r}|x_s-x_r|^{2\alpha'k_r\cdot
k_s}(x_s-x_r)^{\lambda_r\circ\lambda_s-q_rq_s}
\\&\times\exp{\left[\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(2\alpha'\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}+\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(x_s-x_r)^{-2}
\right]}
\\&\times \exp{\left[\sum\limits_{r\neq s}\left(\sum\limits_{i=1}^{n_s}\left(-2\alpha'\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)(x_r-x_s)^{-1}\right]}|_{multilinear},
\end{split}
\end{equation}
where $g$ is the coupling constant for open strings; it can be
related to the closed string coupling constant by $\kappa\sim g^2$.
By comparing \eqref{AD2real integrals} with the open string amplitude \eqref{open string amplitude}, we can see that the interactions in one sector
can be considered as interactions between open strings. The
interactions between the left- and right-moving sectors look like
those between open strings inserted at $\xi_r$ and
$(\eta_s)^{-1}$. $\eta_s$ can be considered as the coordinates of
the right-moving open string. Then in the $(\tau, \sigma)$
coordinates, ${\eta_s}^{-1}=e^{-\tau}$ can be considered as a time
reversal in the right-moving sector. Thus the interactions between
the two sectors can be regarded as interactions between left- and
right-moving open strings with a time reversal in the right-moving
sector (see fig.~\ref{fig1}(b)). The amplitude \eqref{AD2real
integrals} can then be considered as an amplitude for $2N$ open
strings. $N$ of them correspond to the left-moving sector and the
other $N$ correspond to the right-moving sector. In the
amplitude we have a time reversal in the right-moving sector.
From fig.~\ref{fig1}(b), we can see that if we reverse the time in
the right-moving sector, we will get an open string tree
amplitude. In fact, we can replace all the ${\eta_r}^{-1}$ by
$\eta_r$. By using the mass-shell condition~\cite{18, 19}, which is
determined by the conformal invariance in one sector, the
interactions between the two sectors as well as the interactions
in one sector become those between open strings.
Define
\begin{equation}\label{variable redefine}
\xi_{r+N}\equiv\eta_r, k_{r+N}\equiv k_r,
\tilde{\lambda}_{r+N}\equiv\lambda_r,
\bar{\epsilon}_{r+N}\equiv\epsilon_r,
\bar{\varepsilon}_{r+N}\equiv\varepsilon_r.
\end{equation}
After the simultaneous transformations, the volume of the CKG becomes
$\frac{1}{2\pi}\int\frac{d\xi_od\eta_o}{|\xi_o-\eta_o|^2}$. The
fixed points become $\xi_1=\xi_o$ and $\xi_{1+N}=\xi_o$. The
conformal Killing volume has another form,
$\int\frac{dx_adx_bdx_c}{|x_a-x_b||x_b-x_c||x_c-x_a|}$, which can be
used to fix three real variables. We reset the fixed points at:
\begin{equation}
\xi_1=x_a=0, \xi_2=x_b=1, \xi_{2N}=x_c=\infty.
\end{equation}
The amplitude for $N$ closed strings on $D_2$ then becomes
\begin{equation}\label{AD_2N1without phase factor}
\begin{split}
\mathscr{A}_{D_2}^{(N,0)}&=\kappa^{N-1}{\left(\frac{i}{4}\right)}^{N-1}\int
\prod\limits_{i=1}^{2N}d\xi_i\frac{|\xi_a-\xi_b||\xi_b-\xi_c||\xi_c-\xi_a|}{d\xi_ad\xi_bd\xi_c}\prod\limits_{s>r}(\xi_s-\xi_r)^{\frac{\alpha'}{2}k_r\cdot
k_s}(\xi_s-\xi_r)^{\lambda_r\circ\lambda_s-q_rq_s}
\\&\times\exp{\left[\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(2\alpha'\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}+\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(\xi_s-\xi_r)^{-2}
\right]}
\\&\times \exp{\left[\sum\limits_{r\neq s}\left(\sum\limits_{i=1}^{n_s}\left(-2\alpha'\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)(\xi_r-\xi_s)^{-1}\right]}|_{multilinear}e^{i\pi\Theta(P)}
.
\end{split}
\end{equation}
After taking an appropriate phase factor out, we get
\begin{equation}\label{AD_2N1}
\begin{split}
\mathscr{A}_{D_2}^{(N,0)}&=\kappa^{N-1}{\left(\frac{i}{4}\right)}^{N-1}\int
\prod\limits_{i=1}^{2N}d\xi_i\frac{|\xi_a-\xi_b||\xi_b-\xi_c||\xi_c-\xi_a|}{d\xi_ad\xi_bd\xi_c}\prod\limits_{s>r}|\xi_s-\xi_r|^{\frac{\alpha'}{2}k_r\cdot
k_s}(\xi_s-\xi_r)^{\lambda_r\circ\lambda_s-q_rq_s}
\\&\times\exp{\left[\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(2\alpha'\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}+\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(\xi_s-\xi_r)^{-2}
\right]}
\\&\times \exp{\left[\sum\limits_{r\neq s}\left(\sum\limits_{i=1}^{n_s}\left(-2\alpha'\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)(\xi_r-\xi_s)^{-1}\right]}|_{multilinear}e^{i\pi\Theta(P)}
,
\end{split}
\end{equation}
where we have absorbed a factor $\frac{1}{2}$ into each $\epsilon$.
$\Theta(P)$ is defined as
\begin{equation}
\Theta(P)=\sum\limits_{s>r}2\alpha'k'_s\cdot
k'_r\theta(\xi_s-\xi_r),
\end{equation}
where $k'^{\mu}_r=\frac{1}{2}k^{\mu}_r$ is the momentum of the open
string and
\begin{align}
\theta(\xi_s-\xi_r)=
\begin{cases}
0, & \xi_s>\xi_r \\
1, & \xi_s<\xi_r
\end{cases}.
\end{align}
From \eqref{open string amplitude} and
\eqref{AD_2N1} we can see that amplitudes for $N$ closed strings on $D_2$
can be given by a single open string tree amplitude for $2N$ open strings,
up to a phase factor. The phase factor is caused by taking the
absolute value of $(\xi_s-\xi_r)$ in
$(\xi_s-\xi_r)^{\frac{\alpha'}{2}k_r\cdot k_s}$; it guarantees
that the integrals are taken on the correct branch cut. It depends only on
the orderings of the open strings. For a certain ordering $P$, the
phase factor decouples from the integrals. So we can break the
integrals into pieces, take the multilinear terms in $\epsilon$,
$\bar{\epsilon}$, $\varepsilon$ and $\bar{\varepsilon}$, and replace the
polarization vectors by the polarization tensors of closed strings.
Then we get the relation between closed string amplitudes and
partial amplitudes for open strings on $D_2$:
\begin{equation}\label{D2relations}
\mathscr{A}_{D_2}^{(N,0)}=\kappa^{N-1}\epsilon_{\alpha\beta}\mathscr{A}_{D_2}^{(N)\alpha\beta}=\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}\epsilon_{\alpha\beta}\sum\limits_P\mathscr{M}^{(2N)\alpha\beta}(P)e^{i\pi\Theta(P)},
\end{equation}
where $\mathscr{M}$ is the open string amplitude without the
coupling constant $g$, and we sum over all the orderings $P$ of the
open strings.
If there are open strings on the boundary of $D_2$, we can insert
the open string vertices into the amplitude \eqref{AD_2N1}. Because
\eqref{AD_2N1} is already an amplitude for open strings on the real
axis, up to a phase factor, we just increase the number of open
strings on the boundary of $D_2$ and adjust the phase factor to
keep the integrals on the correct branch cuts. The phase factor should
be adjusted because we must consider the interactions between closed
and open strings. Then we have
\begin{equation}\label{(N, M)D2relations}
\mathscr{A}_{D_2}^{(N,
M)}=\epsilon_{\alpha\beta\gamma}\mathscr{A}_{D_2}^{(N,
M)\alpha\beta\gamma}=\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}g^M\epsilon_{\alpha\beta\gamma}
\sum\limits_P\mathscr{M}^{(2N,M)\alpha\beta\gamma}(P)e^{i\pi\Theta'(P)},
\end{equation}
where we have defined the coordinates of the left-moving open
strings are $\xi_1,...,\xi_N$, those of right-moving open strings
are $\xi_{1+N+M},...,\xi_{2N+M}$ and the coordinates of other open
strings are $\xi_{1+N},...,\xi_{M+N}$.
\begin{equation}
\Theta'(P)=\sum\limits_{s>r}2\alpha'k'_s\cdot
k'_r\theta'(\xi_s-\xi_r),
\end{equation}
where the $k'_r$ are the momenta of the open strings. If
$\xi_s>\xi_r$, then $\theta'(\xi_s-\xi_r)=0$; if $\xi_s<\xi_r$ but
$N<s,r<N+M+1$, then $\theta'(\xi_s-\xi_r)=0$; otherwise
$\theta'(\xi_s-\xi_r)=1$.
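This piecewise definition can be written compactly as
\begin{equation}
\theta'(\xi_s-\xi_r)=
\begin{cases}
0, & \xi_s>\xi_r,\\
0, & \xi_s<\xi_r \text{ and } N<r,s<N+M+1,\\
1, & \text{otherwise}.
\end{cases}
\end{equation}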
This relation can also be derived by choosing the fundamental
region as the upper half-plane and then repeating steps similar to
those in the case of $N$ closed strings on $D_2$. We can see that if $M=0$,
\eqref{(N, M)D2relations} gives the relation for $N$ closed strings on
$D_2$ \eqref{D2relations}, and if $N=0$ it gives the open string tree
amplitude \eqref{open string amplitude}.
By comparing the relations \eqref{D2relations} with the KLT
factorization relations \eqref{KLT relations}, we can see that in
\eqref{KLT relations} the left- and right-moving sectors are
independent of each other, while in \eqref{D2relations} they are not.
The interactions connect the two open
string amplitudes into a single one. Because the interactions
between the two sectors are just the open string interactions, the
amplitudes for $N$ closed strings can then be given by tree
amplitudes for $2N$ open strings.
We can understand the relations on $D_2$ as follows: any closed string can be
split into two open strings, each of which carries half of the
momentum of the closed string. Move the open strings corresponding
to the two sectors of the closed strings onto the boundary of $D_2$.
Then an amplitude for $N$ closed strings and $M$ open strings on
$D_2$ is given by an amplitude for $2N+M$ open strings.
In \eqref{(N, M)D2relations}, the $D_2$ amplitudes have been given
by a sum of open string partial amplitudes with $2N+M$ external
legs. We have to sum over all the orderings of the
open strings in this relation. However, as in the case of
$S_2$~\cite{9}, the contour treatment~\cite{29} can reduce the number
of terms in this relation. The main point of the treatment
of~\cite{29} is that there are relations among open string partial
amplitudes, so that any open string partial amplitude can be expressed
in terms of a minimal basis. All $M$-point open string partial
amplitudes can be expressed in terms of a minimal basis of $(M-3)!$
independent partial amplitudes. Then for the $(N, M)$ case, the
amplitude can be given by $(2N+M-3)!$ open string partial
amplitudes.
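As a simple check of this counting: for two closed and no open strings, $(N, M)=(2, 0)$, the relation involves $2N+M=4$ open string legs and the minimal basis contains $(4-3)!=1$ partial amplitude, while for $(N, M)=(2, 2)$ we get
\begin{equation}
(2N+M-3)!=(4+2-3)!=3!=6
\end{equation}
independent partial amplitudes.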
If we consider the interactions between open strings attached to a
D$p$-brane and closed strings, we should make appropriate replacements
in the right-moving sector. For example, if the external legs are
gravitons, we just need to replace the momenta
$\frac{1}{2}k^{\mu}_r$ corresponding to the right-moving sector by
$\frac{1}{2}D^{\mu}_{\nu}\cdot k^{\nu}_r$ and replace the
polarization tensor $\epsilon_{\mu\nu}$ by
$\epsilon_{\mu\lambda}D^{\lambda}_{\nu}$ in the relation \eqref{(N,
M)D2relations}~\cite{25, 26, 27}, where $D^{\mu}_{\nu}$ is defined as
\begin{equation}
D^{\mu}_{\nu}=\left(
\begin{array}{cccccc}
1 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ }\\
\text{ } & \ddots & \text{ } & \text{ } & \text{ } & \text{ } \\
\text{ } & \text{ } & 1 & \text{ } & \text{ } & \text{ }\\
\text{ } & \text{ } & \text{ } & -1 & \text{ } & \text{ }\\
\text{ } & \text{ } & \text{ } & \text{ } & \ddots &\text{ }\\
\text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1
\end{array}
\right).
\end{equation}
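In the standard convention for a D$p$-brane (which we assume here), the block of $+1$ entries acts on the $p+1$ Neumann directions and the block of $-1$ entries on the remaining Dirichlet directions, so that $D$ squares to the identity:
\begin{equation}
D^{\mu}_{\ \lambda}D^{\lambda}_{\ \nu}=\delta^{\mu}_{\nu}.
\end{equation}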
This relation reveals that the amplitudes for $N$ closed strings
and $M$ open strings on a D$p$-brane can be given by $(2N+M)$-point
open string partial amplitudes. Though in \cite{25, 26, 27} the $(2,
0)$ amplitude and the $(1, 2)$ amplitude are four-point open string
amplitudes upon a certain identification between the momenta and
polarizations, in the general case there is a phase factor in the
relations.
In the low energy
limit of an open string theory, gravitons are closed string states
and gauge particles are open string states. Then in this case, the
KLT factorization relations do not hold. We should use one amplitude
for $2N$ gauge particles, instead of the product of two amplitudes
for $N$ gauge particles, to give an amplitude for $N$ gravitons.
\section{Relations between amplitudes on $RP_2$ and open
string tree amplitudes} \label{relations on RP_2} In this section,
we will explore the amplitudes on $RP_2$~\cite{1, 2, 28}. We first
show that the correlation functions on $RP_2$ cannot be factorized
into the left- and right-moving sectors: the two sectors are
connected together. Then we will give the relations between
amplitudes on $RP_2$ and tree amplitudes for open strings.
$RP_2$ is an unoriented surface; it can be obtained by identifying
the diametrically opposite points on $S_2$, and can be considered
as a sphere with a crosscap. With this identification, the waves must
be reflected at the crosscap. The reflections of the
left-moving waves are in the right-moving sector and the
reflections of the right-moving waves are in the left-moving
sector. The waves must interact with their reflections, thus
the two sectors must interact with each other. This is similar
to the case of $D_2$.
\newline
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{3.eps}
\end{center}
\caption{Only the annihilation modes are reflected at the crosscap
of $RP_2$.} \label{fig3}
\end{figure}
\newline
Specifically, the correlation function on $RP_2$ is given as
\begin{equation}\label{RP2 correlation}
\left<0\mid\mathscr{V}_N(\omega_N,
\tilde{\omega}_N)...\mathscr{V}_1(\omega_1, \tilde{\omega}_1)\mid
C\right>,
\end{equation}
where $\mid C\rangle=C\mid0\rangle$ is the boundary state for
$RP_2$~\cite{20, 21}. The bosonized boundary operator $C$ is
\begin{equation}
C=\exp{(\sum\limits_{n=1}^\infty
(-1)^na_n^{\dagger}\cdot\tilde{a}_{n}^{\dagger})}\left|0\right>_{X}
\otimes\exp{(\sum\limits_{n=1}^\infty
(-1)^nb_n^{\dagger}\circ\tilde{b}_{n}^{\dagger})}\left|0\right>_{\phi}
\otimes\exp{(\sum\limits_{n=1}^\infty
(-1)^nc_n^{\dagger}\tilde{c}_{n}^{\dagger})}\left|0\right>_{\phi_6}.
\end{equation}
In this case we can see that the image point of $\omega$ is
$-\bar{\omega}+i\pi$. When we move $C$ to the left of a vertex
operator, it commutes with the creation modes and the zero modes of
the vertex operator. It does not commute with the annihilation
modes $\mathscr{V}^{(-)}_L(\omega)$ and
$\tilde{\mathscr{V}^{(-)}_R}(\bar{\omega})$. This means that only the
annihilation modes can be reflected at the crosscap (see
fig.~\ref{fig3}). After moving the boundary operator to the left of
the annihilation modes $\mathscr{V}^{(-)}_L(\omega)$ and
$\tilde{\mathscr{V}^{(-)}_R}(\bar{\omega})$, the images
$\tilde{\mathscr{V}}^{(+)}_L(-\omega-i\pi)$ and
$\mathscr{V}^{(+)}_R(-\bar{\omega}+i\pi)$ are created
respectively. We move the boundary operator to the left of all the
vertex operators, and then use the creation operators in the boundary
operator to annihilate the state $\langle0\mid$. The correlation
becomes
\begin{equation}\label{correlation function on RP2}
\begin{split}
&\left<\mathscr{V}^{(+)}_L(\omega_N)\mathscr{V}^{(-)}_L(\omega_N)\mathscr{V}^{(+)}_R(-\bar{\omega}_N+i\pi)...\mathscr{V}^{(+)}_L(\omega_1)\mathscr{V}^{(-)}_L(\omega_1)\mathscr{V}^{(+)}_R(-\bar{\omega}_1+i\pi)\right>
\\\times &\left<\tilde{\mathscr{V}}^{(+)}_R(\bar{\omega}_N)\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega}_N)\tilde{\mathscr{V}}^{(+)}_L(-\omega_N-i\pi)...\tilde{\mathscr{V}}^{(+)}_R(\bar{\omega}_1)\tilde{\mathscr{V}}^{(-)}_R(\bar{\omega}_1)\tilde{\mathscr{V}}^{(+)}_L(-\omega_1-i\pi)\right>
\\\times &\left<\mathscr{V}_0(\omega_N, \tilde{\omega}_N)...\mathscr{V}_0(\omega_1,
\tilde{\omega}_1)\right>.
\end{split}
\end{equation}
As in the case of $D_2$, the first correlation function in
\eqref{correlation function on RP2} only contains $a$, $b$, $c$ and
$a^{\dag}$, $b^{\dag}$, $c^{\dag}$. When we move the left-moving
modes of a vertex operator $\mathscr{V}^{(-)}_L(\omega_r)$ to the
right of the operator $\mathscr{V}^{(+)}_L(\omega_s)$, we get the
interaction in the left-moving sector. When we move
$\mathscr{V}^{(-)}_L(\omega_r)$ to the right of
$\mathscr{V}^{(+)}_R(-\bar{\omega}_s+i\pi)$, we get the
interaction between the left- and right-moving sectors. In the
same way, the second correlation function in \eqref{correlation
function on RP2} gives the interactions in the right-moving sector
and those between the two sectors. Thus the correlation function
cannot be factorized into the two sectors; the interactions connect
the two sectors together.
Now we consider the amplitude for $N$ closed strings on $RP_2$. We
calculate the correlation function, integrate over the fundamental
region and divide the integrals by the volume of the CKG~\cite{1,
2, 22, 23} on $RP_2$. As we have done in the case of $D_2$, we
also extend the integral region to the complex plane, rotate the
contours of the $y$ integrals to the pure imaginary axis and redefine
the integral variables. The amplitude for $N$ closed strings on
$RP_2$ can be given as
\begin{align}\label{ARP2superstring}
\nonumber\mathscr{A}_{RP_2}^{(N)}&=\kappa^{N-1}\left(\frac{1}{2}\right)^{N-1}\int
\prod\limits_{i=1}^Nd\xi_id\eta_i\frac{|1+\xi_o\eta_o|^2}{2\pi
d\xi_od\eta_o}
\\&\nonumber\times\prod\limits_{s>r}(\xi_s-\xi_r)^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\lambda_s-q_rq_s}(\eta_r-\eta_s)^{\frac{\alpha'}{2}k_r\cdot
k_s+\tilde{\lambda}_r\circ\tilde{\lambda}_s-\tilde{q}_r\tilde{q}_s}
\prod\limits_{r,
s}(1+(\xi_r\eta_s)^{-1})^{\frac{\alpha'}{2}k_r\cdot
k_s+\lambda_r\circ\tilde{\lambda}_s-q_r\tilde{q}_s}
\\&\nonumber\times \exp{\sum\limits_{r=1}^N\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{\tilde{n}_r}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\bar{\epsilon}_r^{(j)}
-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{\tilde{m}_r}\varepsilon_r^{(i)}\circ\bar{\varepsilon}_r^{(j)}\right)(1+\xi_r\eta_r)^{-2}}
\\&\times \exp{\sum\limits_{s>r}\left[\left(\sum\limits_{i=1}^{\tilde{n}_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\bar{\epsilon}_r^{(i)}\cdot\epsilon_s^{(j)}
-\sum\limits_{i=1}^{\tilde{m}_r}\sum\limits_{j=1}^{m_s}\bar{\varepsilon}_r^{(i)}\circ\varepsilon_s^{(j)}\right)(1+\eta_r\xi_s)^{-2}+c.c.\right]}
\\&\nonumber\times\exp{\left[-\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(-\frac{\alpha'}{2}\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}-\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(\xi_s-\xi_r)^{-2}+c.c.
\right]}
\\&\nonumber\times \exp{\sum\limits_{r\neq s}\left[\left(\sum\limits_{i=1}^{n_s}\left(-\frac{\alpha'}{2}\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)((\xi_r-\xi_s)^{-1}+(-\eta_r^{-1}-\xi_s)^{-1})+c.c.\right]}
\\&\nonumber\times
\exp{\sum\limits_{r=1}^N\left[\left(\left(-\frac{\alpha'}{2}\right)k_r\cdot\sum\limits_{i=1}^{n_r}\epsilon_r^{(i)}
-\lambda_r\circ\sum\limits_{i=1}^{m_r}\varepsilon_r^{(i)}\right)((-\eta_r^{-1}-\xi_r)^{-1}+{\xi_r}^{-1})+c.c.\right]}|_{multilinear}.
\end{align}
Then the amplitude has been given by real integrals. The
interactions in one sector are the open string interactions. The
interactions between the left- and right-moving sectors can be
considered as interactions between open strings inserted at
$\xi_r$ and $(-\eta_s)^{-1}$. $(-\eta_s)^{-1}$ can be
considered as a time reversal and a twist in the right-moving
sector. Then the interactions between the left- and right-moving
sectors can be regarded as interactions between left- and
right-moving open strings with a time reversal and a twist in the
right-moving sector (see fig.~\ref{fig1}(c)).
From fig.~\ref{fig1}(c), we can see that if we twist the right-moving
sector and reverse its time, we will get
an open string tree amplitude. In fact, we can replace all the
$\eta_r$ by $-\frac{1}{\eta_r}$. Then by using the mass-shell
condition, the interactions between the two different sectors as
well as those in one sector become the interactions between open strings.
Redefine the variables in the right-moving sector by
Eq.~\eqref{variable redefine}.
The amplitude on $RP_2$ then becomes
\begin{equation}\label{ARP_2N1}
\begin{split}
\mathscr{A}_{RP_2}^{(N)}&=-{\left(\frac{i}{4}\right)}^{N-1}\kappa^{N-1}\int
\prod\limits_{i=1}^{2N}d\xi_i\frac{|\xi_a-\xi_b||\xi_b-\xi_c||\xi_c-\xi_a|}{d\xi_ad\xi_bd\xi_c}\prod\limits_{s>r}|\xi_s-\xi_r|^{\frac{\alpha'}{2}k_r\cdot
k_s}(\xi_s-\xi_r)^{\lambda_r\circ\lambda_s-q_rq_s}
\\&\times\exp{\left[\sum\limits_{s>r}\left(\sum\limits_{i=1}^{n_r}\sum\limits_{j=1}^{n_s}\left(2\alpha'\right)\epsilon_r^{(i)}\cdot\epsilon_s^{(j)}+\sum\limits_{i=1}^{m_r}\sum\limits_{j=1}^{m_s}\varepsilon_r^{(i)}\circ\varepsilon_s^{(j)}\right)(\xi_s-\xi_r)^{-2}
\right]}
\\&\times \exp{\left[\sum\limits_{r\neq s}\left(\sum\limits_{i=1}^{n_s}\left(-2\alpha'\right)k_r\cdot\epsilon_s^{(i)}
-\sum\limits_{i=1}^{m_s}\lambda_r\circ\varepsilon_s^{(i)}\right)(\xi_r-\xi_s)^{-1}\right]}|_{multilinear}e^{i\pi\Theta(P)}
.
\end{split}
\end{equation}
This amplitude differs from the $D_2$ amplitude by a factor of $-1$,
which is caused by the difference between the measures of the CKG on
$RP_2$ and $D_2$. When we change the topology, this $-1$ appears.
The phase factor only depends on the ordering of the open strings.
We can break the integrals into pieces as in the case of $D_2$ and
keep the multilinear terms in the polarization tensors. Then
Eq.~\eqref{ARP_2N1} becomes
\begin{equation}\label{RP2relations}
\mathscr{A}_{RP_2}^{(N)}=\epsilon_{\alpha\beta}\mathscr{A}_{RP_2}^{(N)\alpha\beta}=-\left(\frac{i}{4}\right)^{N-1}\kappa^{N-1}\epsilon_{\alpha\beta}\sum\limits_P\mathscr{M}^{(2N)\alpha\beta}(P)e^{i\pi\Theta(P)}.
\end{equation}
As in the case of $D_2$, the KLT factorization relations \eqref{KLT
relations} do not hold on $RP_2$. The left- and right-moving
sectors are again not independent of each other; the interactions
between the two sectors connect them into a single sector. Since the
interactions between the two sectors are just those between open
strings, the two open string tree amplitudes of the $S_2$ case are
connected into one amplitude for open strings. In the
relation \eqref{RP2relations}, we also sum over all the orderings of
the external legs of the open strings. By using the same method as in
\cite{29}, the relations on $RP_2$ for $N$ closed strings can be
reduced to $(2N-3)!$ terms.
From the relations \eqref{D2relations} and \eqref{RP2relations} we
can see that the amplitudes on $D_2$ and $RP_2$ with the same external
closed string states are equal up to a factor of $-1$. In fact,
after we transform the complex variables into real ones, the image
of a point $\xi_r$ in the left-moving sector becomes
$\frac{1}{\eta_r}$ on $D_2$ and $-\frac{1}{\eta_r}$ on $RP_2$. The
minus sign means a twist in the right-moving sector. After this twist,
the amplitude on $RP_2$ becomes that on $D_2$ except for a factor of
$-1$. Then if we consider a theory containing both $D_2$ and $RP_2$,
the amplitudes with the same external states cancel out. However, if we
consider T-duality, the interactions on $D_2$ become interactions
between closed strings and a D-brane, while the interactions on $RP_2$
become interactions between closed strings and an O-plane. As we have
seen in section \ref{relations on D_2}, we should make appropriate
replacements of the momenta and polarizations in the right-moving
sector to give the relations in the $D_2$ case. Under T-duality, we
also need to replace the vertex operators on $RP_2$ by new
ones~\cite{30}. We take the massless NS-NS vertex operator as an
example. The vertex operator after T-duality becomes
\begin{equation}\label{RP_2 vertex}
\begin{split}
&\mathscr{V}^{RP_2}(\epsilon, k, z, \bar{z}) \\&=\frac{1}{2}\left(
\epsilon_{\mu\nu}:\mathscr{V}^\mu_{\alpha}(k,
z)::\tilde{\mathscr{V}}^{\nu}_{\beta}(k,
\bar{z}):+(D\cdot\epsilon^T\cdot
D)_{\mu\nu}:\mathscr{V}^{\mu}_{\alpha}(k\cdot D,
z)::\tilde{\mathscr{V}}^{\nu}_{\beta}(k\cdot D, \bar{z}):\right).
\end{split}
\end{equation}
Here the $\mathscr{V}^{\mu}_{\alpha}(k, z)$ in the $0$ and $-1$
pictures are
\begin{equation}
\begin{split}
\mathscr{V}^{\mu}_{-1}(k, z)&=e^{-\phi(z)}\psi^{\mu}(z)e^{ik\cdot
X(z)}
\\\mathscr{V}^{\mu}_0(k, z)&=(\partial X^{\mu}(z)+ik\cdot
\psi(z)\psi^{\mu}(z))e^{ik\cdot X}.
\end{split}
\end{equation}
The second term in \eqref{RP_2 vertex} can also be obtained by
replacing $\epsilon_{\mu\nu}$ and $k^{\mu}$ in the original vertex
by $(D\cdot\epsilon^T\cdot D)_{\mu\nu}$ and $(k\cdot D)^{\mu}$
respectively. Now we consider the amplitudes for $N$ NS-NS strings.
Each vertex operator \eqref{RP_2 vertex} has two terms, and each term
can be considered as a vertex operator on $RP_2$ under an appropriate
replacement. The amplitude is then given by $2^N$ terms, each of which
can be obtained from the amplitude before T-duality by appropriate
replacements. Then each term can again be given by partial amplitudes for
$2N$ open strings. So the $RP_2$ relation gives the amplitudes
for closed strings scattering from an O-plane in terms of open string
amplitudes. Generally, under T-duality, the $D_2$ amplitudes cannot
be canceled by the $RP_2$ amplitudes~\cite{30}.
In the low energy limit of an unoriented string theory, the
amplitudes for closed strings on $RP_2$ contribute to the amplitudes
for gravitons. Then, as in the case of $D_2$, the KLT factorization
relations do not hold: the amplitudes for $N$ gravitons
cannot be factorized into two amplitudes for $N$ gauge particles.
They can be given by an amplitude for $2N$ gauge particles.
\section{Conclusion}
\label{conclusion} In this paper, we investigated the relations
between closed and open strings on $D_2$ and $RP_2$. We have shown
that the KLT factorization relations do not hold for these two
topologies: the closed string amplitudes cannot be factorized into
tree amplitudes for left- and right-moving open strings. Instead,
the two sectors are connected into a single sector, and the
amplitudes with closed strings in these two cases can be given by
amplitudes in this single sector. The number of terms in the relations
on $D_2$ and $RP_2$ can be reduced by contour deformations.
Under the T-duality, the relations on $D_2$ and $RP_2$ give the
amplitudes between closed strings scattering from D-brane and
O-plane respectively by open string partial amplitudes.
In the low energy limits of these two cases, we cannot use the KLT
relations to factorize amplitudes for gravitons into products of two
amplitudes for gauge particles. Interactions between the ``left-''
and ``right-''moving gauge fields connect the two amplitudes
into one. Then a graviton amplitude in these two cases can be given
by one amplitude for both left- and right-moving gauge particles.
The relations for other topologies have not
been given here. However, we expect that there are also relations between
closed and open string amplitudes in those cases. If there are more
boundaries and crosscaps on the world-sheet, the boundaries and
crosscaps also connect the left- and right-moving sectors, so in
these cases the KLT factorization relations do not hold either.
\section*{Acknowledgement}
We would like to thank C.Cao, J.L.Li, Y.Q.Wang and Y.Xiao for
helpful discussions. We would like to thank the referee for many
helpful suggestions. The work is supported in part by the NNSF of
China Grant No. 90503009, No. 10775116, and 973 Program Grant No.
2005CB724508.
\subsection{Estimation of flight $CO_2${} equivalent}
Most of the $CO_2${} equivalent emissions during a flight ($E_{flight}$) are associated with the combustion of fuel, whose quantity depends on the category of aircraft, the flying distance and the different phases of the flight.
Flights are usually split between short haul and long haul aircraft with distinct consumption patterns.
The different phases of a flight can be described as Landing and Take Off (LTO) or Climb Cruise Descent (CCD).
The EMEP/EEA air pollutant emission inventory guidebook~\cite{eeaaviation1, eeaaviation2,eeaaviation3} provides for each type of aircraft the quantity of fuel burnt during LTO and CCD -- as well as the quantities of pollutant emitted on each phase.
To estimate the average fuel consumed per flying km -- across all acceptable aircraft -- ICAODATA provides the total distance flown by each aircraft as well as its fuel consumption.
This makes it possible to derive a weighted average fuel consumption as a function of the flying distance, $Fuel(d)$ in kg, with $d$ the flying distance in km.
Note that $Fuel$ includes LTO.
The $CO_2${} resulting from the combustion of 1 kg of fuel is $e_{CO_2}$ = 3.15 kg / kg of burnt fuel.
The impact of other non-$CO_2${} pollutants affecting the earth's radiative balance -- such as nitrogen oxides ($NO_x$) -- is estimated through a Radiative Forcing Index ($RFI$) factor applied to the $CO_2${} emissions; \cite{rfi} recommends using $RFI$ = 2.
Note that the factor measures the effect of $NO_x$ and not the quantity.
In fact $NO_x$ and $CO_2${} have significant differences and in particular act on different time scales.
Beyond combustion $CO_2${}, one also needs to consider an indirect source of emissions: the $CO_2${} emissions associated with fuel PreProcessing ($PP$), which is set to 0.54 kg / kg of burnt fuel.
The flying distance between two airports considers the round shape of the earth -- using the Great Circle Distance -- as well as some extra Distance Correction ($DC$) due to inefficiency of the traffic control, weather conditions, and holding patterns.
As a result $CO_2${} equivalent emissions for a given flying distance $x = d + DC$ with $d$ the distance between the two airports can be expressed as:
\begin{equation}
E_{flight} ( x ) = Fuel(x) \times ( e_{CO_2}. RFI + PP )
\end{equation}
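As an illustration, the formula above can be sketched in Python. The linear fuel model below (a fixed LTO share plus a per-km burn) is an illustrative assumption of ours, not the EMEP/EEA calibrated curve; only the constants $e_{CO_2}$, $RFI$ and $PP$ come from the text.

```python
# Sketch of E_flight(x) = Fuel(x) * (e_CO2 * RFI + PP).
# The fuel model parameters (LTO overhead, per-km burn) are illustrative
# assumptions, not the EMEP/EEA calibrated values.
E_CO2 = 3.15  # kg CO2 per kg of burnt fuel
RFI = 2.0     # radiative forcing index recommended by [rfi]
PP = 0.54     # kg CO2 per kg of fuel for preprocessing

def fuel(x_km, lto_kg=850.0, burn_kg_per_km=3.0):
    """Illustrative fuel model: LTO overhead plus cruise burn (assumed values)."""
    return lto_kg + burn_kg_per_km * x_km

def e_flight(x_km):
    """CO2-equivalent emissions (kg) for a flying distance x_km."""
    return fuel(x_km) * (E_CO2 * RFI + PP)
```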
\subsection{$CO_2${} equivalent per passenger}
The $CO_2${} emissions per passenger $E_{flier}$ are estimated from $E_{flight}$ by considering the fraction of the load associated with passengers, that is $1 - CARGO_{ld}$, with $CARGO_{ld}$ representing the cargo load.
This fraction of emissions is shared between the effective passengers, weighted by the cabin class $W_{cabin}$, which is equivalent to occupying a certain number of economy seats.
The effective number of passengers is determined by the total seat capacity $SEAT_{T}$ -- which depends on the aircraft type and can be found in ICAODATA -- multiplied by the passenger load factor $PSG_{ld}$ published by ICAO.
As passengers and cargo both drive the demand for the construction of an airport or a plane, these emissions are expressed on a per passenger basis.
The aircraft life cycle emissions ($AIRCRAFT_{lc}$) are expressed per passenger and per flying km, and the infrastructures are modeled by a constant ($INFRA$)~\cite{lc}\cite{ecoinvent}.
As a result, the emissions per passenger are expressed as:
\begin{equation}
\begin{aligned}
E_{flier} (x) = & E_{flight}(x) ( 1 - CARGO_{ld} ) \frac{ W_{cabin} }{ SEAT_{T} \times PSG_{ld} } \\
& + AIRCRAFT_{lc}. x + INFRA
\end{aligned}
\end{equation}
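A sketch of $E_{flier}$ follows; all default parameter values below are illustrative assumptions of ours, not the ICAO or myclimate figures.

```python
# Sketch of the per-passenger formula E_flier.
# Every default value is an illustrative assumption.
def e_flier(e_flight_kg, x_km,
            cargo_ld=0.74,      # cargo load fraction (assumed long-haul value)
            w_cabin=1.0,        # cabin-class weight (economy = 1)
            seat_t=300,         # total seats of the aircraft type (assumed)
            psg_ld=0.82,        # passenger load factor (assumed)
            aircraft_lc=0.0049, # kg CO2 / passenger-km, aircraft life cycle (assumed)
            infra=11.68):       # kg CO2 / passenger, infrastructure (assumed)
    """Per-passenger CO2-equivalent emissions (kg) for a flight of x_km."""
    share = e_flight_kg * (1 - cargo_ld) * w_cabin / (seat_t * psg_ld)
    return share + aircraft_lc * x_km + infra
```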
\subsection{ 'myclimate' versus 'goclimate'}
This section compares 'goclimate2019'~\cite{goclimate} as published on 2019-04-08 with 'myclimate2019'~\cite{myclimate} computation as published on 2019-08-13.
\texttt{CO2eq}{} implements 'myclimate2019' directly, and relies on the service provided by GO Climate for 'goclimate2019'.
As 'goclimate2019' references the latest version of 'myclimate' -- in our case 'myclimate2019' -- we assume that the service synchronizes its principles with the latest version published by 'myclimate'.
The 'myclimate' and 'goclimate' methodologies mostly differ in the estimation of the distance correction ($DC$) and the cargo load ($CARGO_{ld}$).
'myclimate' considers a constant value $DC$ = 95 km, while 'goclimate' sets $DC$ to 50 km, 100 km and 125 km for flying distances respectively lower than 550 km, between 550 km and 5500 km, and greater than 5500 km.
In the case of the IETF, where a significant number of flights are transcontinental, $DC$ is increased by between 5\% and 31\%.
This is likely to increase the flying distance used by 'goclimate' and so the $CO_2${} equivalent emissions.
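The 'goclimate' distance-correction rule can be sketched as follows; the function name is ours, and the handling of the exact 5500 km boundary is an assumption since the text leaves it ambiguous.

```python
# Sketch of the 'goclimate' distance-correction rule described above.
# Treating exactly 5500 km as the middle band is our assumption.
def goclimate_dc(d_km):
    """Distance correction (km) as a function of flying distance d_km."""
    if d_km < 550:
        return 50
    if d_km <= 5500:
        return 100
    return 125

MYCLIMATE_DC = 95  # 'myclimate' uses a constant correction
```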
In addition, 'myclimate' estimates the cargo load ($CARGO_{ld}$) on a mass basis which is respectively 93\% for short haul and 74\% for long haul.
On the other hand, 'goclimate' estimates the cargo load on a monetary basis to $CARGO_{ld}=95.1\%$.
While 'goclimate' and 'myclimate' use ICAO as the source of information for the average number of seats and the passenger load ($PSG_{ld}$), 'goclimate' uses respectively ICAODATA~\cite{icaodata} 2012 and ICAO~\cite{icaoeco} 2012 while 'myclimate' respectively uses ICAODATA 2019 and ICAO 2018.
Further analysis may be needed to check whether this has an impact.
\subsection{$CO_2${} Emissions and Climate Change}
\label{sec:ietf-co2}
This section estimates the amount of $CO_2${} equivalent generated by IETF meetings over time and compares the average of $CO_2${} equivalent emissions per participant to the average $CO_2${} emissions per capita of various countries.
Then, different scenarios that apply to general aviation -- each associated with a specific increase of the global temperature -- are applied to the flights associated with IETF meetings using different meeting frequencies.
\subsubsection{IETF attendee $CO_2${} equivalent versus countries' $CO_2${} emissions per capita}
Figure~\ref{fig:presence} depicts the evolution since IETF~72 in Dublin of the $CO_2${} emissions equivalent associated with air traffic, based on estimated -- but real -- flight itineraries.
For each flight the $CO_2${} equivalent is estimated according to the 'myclimate' and 'goclimate' methodologies -- both described and compared in section~\ref{sec:co2eq}.
Attendees are then clustered according to their type of presence ('on-site', 'remote' or 'not-arrived').
While all attendees are assigned a flight, the effective $CO_2${} emissions of the meeting are represented by 'on-site' participants only.
The effective amount of $CO_2${} equivalent emissions for IETF meetings is quite stable, between 2.5 and 3 Gg, from IETF~72 in Dublin to IETF~93 in Prague.
During this period, meetings in North America tend to provide a slightly lower amount of emissions -- but not always and IETF~91 in Honolulu is an outlier with significantly more emissions.
From IETF~94 in Tokyo to IETF~106 in Vancouver, the amount of $CO_2${} equivalent emissions presents a slight decrease with peaks associated to Asian locations.
From IETF~107 in Vancouver to IETF~112 in Madrid, meetings were entirely virtual with no 'on-site' participation.
The average effective $CO_2${} equivalent emissions from IETF~72 to IETF~106 is estimated to be 2.5 Gg by myclimate and 3.2 Gg with goclimate which corresponds respectively to an average of 2.2 and 2.7 tonnes per attendee.
Figures~\ref{fig:percapita-map} and~\ref{fig:percapita-chart} compare attending 1, 2 and 3 IETF meetings a year to the annual $CO_2${} emissions per capita provided by~\cite{owidco2andothergreenhousegasemissions}~\cite{gcp-2021}~\cite{essd-2021-386}.
The $CO_2${} equivalent emissions associated with attending 3 IETF meetings a year correspond to the emissions per capita of Germany, Finland, Poland or Belgium.
These countries emit more $CO_2${} than the average European country, mostly due to the use of coal to generate their energy.
The $CO_2${} equivalent emissions associated with attending 2 IETF meetings a year correspond to the per capita emissions of European countries such as Greece, Italy or the UK.
The $CO_2${} equivalent emissions associated with attending a single IETF meeting per year correspond to the emissions per capita of countries such as Venezuela or Mauritius.
\begin{figure}[hbt]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-presence-cluster-nbr-15-co2eq-myclimate.pdf}
\caption{'myclimate'}
\label{fig:presence-myclimate}
\end{subfigure}
\\%\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-presence-cluster-nbr-15-co2eq-goclimate.pdf}
\caption{'goclimate'}
\label{fig:presence-goclimate}
\end{subfigure}
\caption{Total $CO_2${} emissions per presence type that is for 'on-site', 'remote' and 'not-arrived' attendees}
\label{fig:presence}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\columnwidth]{./fig/co-emissions-per-capita.pdf}
\caption{World Map view}
\label{fig:percapita-map}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/co-emissions-per-capita-chart.pdf}
\caption{ $CO_2${} emissions of IETF participants compared with $CO_2${} emissions per capita -- representing 116 countries out of 229.}
\label{fig:percapita-chart}
\end{subfigure}
\caption{ $CO_2${} emissions are evaluated from the burning of fossil fuels for energy and cement production. Land use change is not included. $CO_2${} are measured on a production basis, meaning they do not adjust for emissions embedded in traded goods. Data and world Map are provided by Our World in Data based on the Global Carbon Project~\cite{owidco2andothergreenhousegasemissions} }
\label{fig:percapita}
\end{figure*}
\subsubsection{Comparing IETF air flight traffic with envisioned scenario for aviation}
\cite{aviation-2021} suggests that aviation will contribute 0.1~\degree C of warming by 2050 if pre-COVID-19 aviation growth resumes.
It further analyses 4 types of scenarios for the future of aviation, with their respective responsibility and contribution to the temperature increase in 2050.
These scenarios assume a post-COVID-19 recovery growth until 2024 followed by a post 2024 growth.
The 'no pandemic' scenario considers that no pandemic occurred, with air travel growing by 3\% per year as it has since 1970.
This scenario results in aviation being responsible for raising temperature by 0.1~\degree C.
The 'back to normal' scenario considers a post-COVID-19 recovery growth of 16\% per year followed by 3\% per year thereafter.
This scenario results in aviation being responsible for raising the temperature by 0.09~\degree C in 2050.
More importantly, it shows that the abrupt, forced decrease of flights during COVID-19 has very little long-term impact.
The 'zero long term growth' scenario assumes a post-COVID-19 recovery growth of 13\% followed by 0\% growth.
This scenario is responsible for raising the temperature by 0.06~\degree C.
Finally, the 'long term decline' scenario assumes a post-COVID-19 recovery growth of 10\% followed by a -2.5\% growth, which ends up with air traffic decreased by 50\% compared to 2019 -- that is, the level during the pandemic.
While these scenarios apply to the whole air traffic, Figure~\ref{fig:aviation2021} applies them to the IETF meetings, assuming the same number of participants during the meetings and considering for 2021 only a 45\% decrease over 2019 -- as opposed to the 100\% decrease that has been observed with IETF meetings being fully virtual.
Another adaptation is that, unlike aviation growth, the number of meetings is not expected to exceed 3 meetings per year.
The dash lines show fractions of meetings which may be useful for further studies considering hybrid meetings, that is when a significant fraction of the attendees are 'remote'.
However, this is left for further analysis.
Applying the air traffic scenarios to the IETF related air traffic shows that the scenarios ignoring the effort needed to fight climate change ('no pandemic' and 'back to normal') result in 3 meetings a year for the IETF, while the other scenarios ('zero long term growth' and 'long term decline') result in respectively 2 or 1 IETF meetings a year.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./fig/meeting-strategies.pdf}
\caption{Application of the scenarios described by~\cite{aviation-2021} to IETF meetings}
\label{fig:aviation2021}
\end{figure}
In 2015 nations agreed to limit global warming well below 2~\degree C.
Current forecasts based on Nationally Determined Contributions (NDC) establish that we are heading toward 2.4~\degree C.
The IPCC Working Group I contribution to the Sixth Assessment Report AR6-WG1~\cite{ar6-wg1} insisted that every fraction of a degree of increase matters and that major efforts are needed to reach the achievable 1.5~\degree C.
In such a context, it seems inappropriate to maintain a rate of 3 IETF meetings a year, by which participants produce as many emissions as citizens of European countries that use coal to generate their energy.
A more sustainable approach is needed; the 2015 Paris Agreement target of reducing emissions by 45\% as well as a sustainable scenario for aviation suggest limiting the IETF to 1 meeting a year, together with a substantial effort to improve the remote participation experience.
\subsection{Limiting Air Flights Connections to Limit Virus Exposure}
\label{sec:ietf-segments}
While the COVID-19 pandemic took us by surprise, it is likely that pandemic frequency will increase and that more severe pandemics are to come -- especially as the root cause of pandemics is the anthropogenic destruction of biodiversity~\cite{ipbes-2020}.
While work remains to be done to evaluate the exact role airports play in the spreading of a pandemic, it remains plausible that limiting the number of transit airports reduces the risk of infection and consequently the spread of the pandemic.
Figure~\ref{fig:segments} depicts for each IETF the number of flight segments for each attendee -- including 'on-site', 'remote' and 'not-arrived' -- and Table~\ref{tab:segments} orders the IETF meetings according to the average number of segments per attendee.
It appears that places like Tokyo, Los Angeles and Beijing are the destinations that minimize flight connections.
Of course, such findings require additional analysis: for example, refining the role of airports in the spread of a pandemic, narrowing the total number of connections down to the number of connections in international airports, and considering the duration of the connection and the time to retrieve luggage, among other things.
It should also be considered that the departure location has been estimated as the capital of the attendee's country, which may introduce a bias for large countries.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-attendee-cluster-key-flight-segment-number-cluster-nbr-15.pdf}
\caption{Grouping participants by the number of their associated flight segments.}
\label{fig:segments}
\end{figure}
\begin{table}[htb]
\begin{tabular}{|p{2.3cm}llp{1.5cm}|}
\hline
IETF Meeting & City & Country & Connections \\
\hline
77 & Los Angeles & US & 2.4 \\
94 & Tokyo & JP & 2.8 \\
79 & Beijing & CN & 2.8 \\
98 & Chicago & US & 3.2 \\
83 & Paris & FR & 3.4 \\
89, 101 & London & GB & 3.5 \\
92 & Dallas/Fort W & US & 3.5 \\
74 & San Francisco & US & 3.5 \\
86 & Orlando & US & 3.5 \\
85 & Atlanta & US & 3.6 \\
78 & Brussels & BE & 3.6 \\
76 & Osaka & JP & 3.6 \\
111 & San Francisco & US & 3.7 \\
97 & Seoul & KR & 3.8 \\
96, 87& Berlin & DE & 3.9 \\
75 & Stockholm & SE & 3.9 \\
72 & Dublin & IE & 4 \\
90 & Toronto & CA & 4.0 \\
108, 112& Madrid & ES & 4.1 \\
103, 109 & Bangkok & TH & 4.1, 4.3 \\
81, 102, 105 & Montreal & CA & 4.1 \\
100, 106 & Singapore & SG & 4.2, 4.3 \\
95 & Buenos Aires & AR & 4.3 \\
82 & Taipei & TW & 4.3 \\
73 & Minneapolis & US & 4.4 \\
84, 88, 107 & Vancouver & CA & 4.4, 4.5, 4.6 \\
91 & Honolulu & US & 4.5 \\
80, 93, 99, 104, 110 & Prague & CZ & 4.5, 4.6, 4.7 \\
\hline
\end{tabular}
\caption{Average flight connections per attendee for each IETF meeting, ordered by increasing value}
\label{tab:segments}
\end{table}
\subsection{Measuring Growth, Diversity and Transparency}
\label{sec:ietf-others}
$CO_2${} equivalent emissions of an attendee can be seen as a metric that measures participation by attributing a cost to a given participation.
The cost in question is obviously an environmental cost, but it also combines travel distance, travel expenses (air flight and hotel) as well as other costs such as time commitment to attend the IETF meeting.
Overall, the sum of the attendee costs may reflect the value of the meeting and, by extension, the value associated with a certain type of participation.
More precisely, the global cost associated with the participants could reflect the worth of an IETF meeting, and the cost associated with 'on-site' participation (respectively 'remote', 'not-arrived') reflects the worth -- and share -- associated with each type of participation.
The $CO_2${} metric is very similar to the attendee count metric; however, the attendee count reflects an attendee decision, while the cost estimation weights each attendee by their cost.
More specifically, it gives more weight to distant participants, for whom participation has a higher cost.
It may also assign a zero cost to attendees from the host country when the IETF meeting is held in that country's capital.
Overall, this tends to underweight attendees who participate locally in an IETF meeting.
Note that we are not trying to defend the $CO_2${} metric as opposed to the number of participants.
Instead we are considering this metric as possibly providing a new angle that may be interesting.
Figure~\ref{fig:presence} as well as Figure~\ref{fig:presence-person} respectively depict the $CO_2${} emissions and the number of attendees per type of meeting participation.
All three figures tend to show a similar trend: 'on-site' attendance is slowly declining and 'remote' participation is growing. However, such trends are more visible with the $CO_2${} metric than with the attendee count metric.
Asian and South American (Buenos Aires) locations present peaks with the $CO_2${} metric for both 'on-site' and 'remote' attendees while such peaks are less evident with the number of attendees metric.
One possible way to interpret these peaks is that attendees are heavily located in North America and Europe.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-attendee-cluster-key-presence-cluster-nbr-15.pdf}
\caption{Number of attendees clustered by presence type: 'on-site', 'remote' and 'not-arrived'. See equivalent figure with $CO_2${} in Figure~\ref{fig:presence} }
\label{fig:presence-person}
\end{figure}
Figure~\ref{fig:country} clusters $CO_2${} emissions as well as the number of attendees per country.
Both metrics show a large representation from the US compared to the other countries.
On the other hand, Asia is well represented with China, Japan and Korea being the second, third and tenth most represented countries and overall the Asian region seems to be represented similarly to Europe.
As countries present a huge difference in terms of population a representation in terms of region might be useful.
However, from the country representation, it can be inferred that the African, Middle Eastern and South American regions are under-represented.
Figure~\ref{fig:country} also shows that the overwhelming majority of the attendees mostly represent 15 countries, which seems to indicate that an outreach strategy should not be limited to regions but may consider a finer granularity such as countries.
\begin{figure}
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-country-cluster-nbr-15-co2eq-myclimate.pdf}
\caption{$CO_2${} estimated with 'myclimate'}
\label{fig:country-myclimate}
\end{subfigure}
\\%\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-country-cluster-nbr-15-co2eq-goclimate.pdf}
\caption{$CO_2${} estimated with 'goclimate'}
\label{fig:country-goclimate}
\end{subfigure}
\\%\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-attendee-cluster-key-country-cluster-nbr-15.pdf}
\caption{number of attendees}
\label{fig:country-attendee}
\end{subfigure}
\caption{$CO_2${} emissions and number of attendees clustered by countries. Only the 15 most represented countries are represented. }
\label{fig:country}
\end{figure}
Figure~\ref{fig:organization} clusters the costs and attendee numbers per organization.
The positive aspect is that the most represented organizations do represent less than 50\% of the global attendance.
On the other hand, further investigations are needed to understand the full ecosystem, that is whether organizations labeled as 'Others' are independent organizations as opposed to working for other declared organizations.
Similarly, the most represented organization is the one labeled 'Not Provided', which indicates that the organization field has not been filled in by the attendee.
Further investigation is also needed here to clarify the reasons this field is omitted.
\begin{figure}
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-organization-cluster-nbr-15-co2eq-myclimate.pdf}
\caption{$CO_2${} estimated with 'myclimate'}
\label{fig:organization-myclimate}
\end{subfigure}
\\%\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-cluster-key-organization-cluster-nbr-15-co2eq-goclimate.pdf}
\caption{$CO_2${} estimated with 'goclimate'}
\label{fig:organization-goclimate}
\end{subfigure}
\\%\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-attendee-cluster-key-organization-cluster-nbr-15.pdf}
\caption{number of attendees}
\label{fig:organization-attendee}
\end{subfigure}
\caption{$CO_2${} emissions and number of attendees clustered by organization. Only the 15 most represented companies are represented. }
\label{fig:organization}
\end{figure}
\subsection{Overview}
Currently, \texttt{CO2eq}~\cite{co2eq} estimates $CO_2${} equivalent emissions associated with meetings.
The \texttt{Meeting} class takes as input a list of attendees as well as the meeting location.
At minimum an attendee is represented by a location (e.g. country), but can also be associated with other criteria such as organization or type of presence (e.g. 'on-site', 'remote', ...).
These criteria can be used to cluster attendees according to the different values of these criteria.
Each value can be associated with an amount of $CO_2${} equivalent emissions or the number of attendees.
The \texttt{CityDB} class is responsible for associating an airport to a location.
The $CO_2${} equivalent emissions of a flight are estimated according to a \texttt{mode} (i.e. 'distance' or 'flight') and a methodology \texttt{co2eq} (i.e. 'myclimate2019'~\cite{myclimate} or 'goclimate2019'~\cite{goclimate}).
The 'distance' mode is solely based on the distance between the city of the meeting and the city of the attendee.
The resulting $CO_2${} equivalent corresponds to a direct flight between these two cities -- thus ignoring detours as well as the takeoff and landing operations associated with multi-segment flights.
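The great-circle distance underlying the 'distance' mode can be computed with the standard haversine formula; the helper below is our own illustration, not the actual \texttt{CO2eq}{} code.

```python
import math

# Great-circle (haversine) distance sketch for the 'distance' mode.
# This is an illustration, not the actual CO2eq implementation.
def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Distance in km between two (lat, lon) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```

For instance, Paris to Dublin comes out at roughly 780 km, to which a distance correction $DC$ would then be added.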
The 'flight' mode, in contrast, considers a real flight between the two cities, potentially with multiple segments.
The \texttt{FlightDB} class returns such flights by requesting the Amadeus 'Flight Offer Search' API~\cite{flightsearchoffer} that returns all available matching flights.
The \texttt{AmadeusOffersSearchResponse} class is responsible for parsing that response and selecting a plausible flight.
The \texttt{Flight} class estimates the $CO_2${} equivalent of the flight by considering each segment as an individual flight.
\texttt{CO2eq-v0.0.1}{} implements two methodologies to compute the $CO_2${} equivalent, 'myclimate2019' and 'goclimate2019' -- as detailed in Section~\ref{sec:co2eq}.
\texttt{Flight} directly implements the 'myclimate' methodology while 'goclimate' is implemented by requesting a GO Climate Neutral service.
In addition to the computation of $CO_2${} for a single meeting, the class \texttt{MeetingList} visualizes the evolution of $CO_2${} equivalent emissions across various meetings.
The \texttt{IETFMeeting} and \texttt{IETFMeetingList} classes extend the \texttt{Meeting} and \texttt{MeetingList} classes, mostly to retrieve, parse and clean up the attendee list from the IETF web site.
An IETF attendee is represented as a dictionary with the following keys: 'organization', 'presence', 'country'.
An additional element, 'flight\_segments', which indicates the number of segments of the associated flight, is computed on the fly.
Attendees can be partitioned according to these keys, and for each possible value, it is possible to estimate the number of attendees or the $CO_2${} equivalent emission.
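Such a partitioning can be sketched as follows; the attendee records mirror the keys listed above, but the function and sample data are our own illustration, not the \texttt{CO2eq}{} API.

```python
from collections import defaultdict

# Sketch of clustering attendees by a criterion such as 'country' or
# 'presence'. The records mirror the keys described in the text; the
# function itself is illustrative, not the CO2eq API.
def cluster_count(attendees, key):
    """Number of attendees for each value of `key`."""
    counts = defaultdict(int)
    for a in attendees:
        counts[a.get(key, 'Not Provided')] += 1
    return dict(counts)

attendees = [
    {'organization': 'ACME', 'presence': 'on-site', 'country': 'FR'},
    {'organization': 'ACME', 'presence': 'remote', 'country': 'US'},
    {'presence': 'on-site', 'country': 'FR'},  # missing organization field
]
```

Summing a per-attendee $CO_2${} estimate instead of incrementing by one yields the emission-based clustering shown in the figures.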
The \texttt{IETFMeetingList} class -- as opposed to taking a list of \texttt{Meeting} objects -- takes the list of all IETF meetings -- set as a global variable -- and instantiates \texttt{IETFMeeting} objects when these meetings have not yet been created.
In addition, it performs the necessary adjustment (size, title, labels, ...) to plot a relevant figure.
Figure~\ref{fig:ietf100} provides an example of the estimation provided by \texttt{CO2eq}{} for a single meeting.
For more examples of estimation provided for a list of meetings, please see section~\ref{sec:ietf} or ~\cite{co2eq.io} for an exhaustive and up to date list of \texttt{CO2eq}{} outputs.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./fig/co2eq--mode-flight-distance-cluster-key-country-cluster-nbr-15-co2eq-myclimate-goclimate.pdf}
\caption{Computation of $CO_2${} equivalent emissions for IETF~100 with a representation of $CO_2${} emissions clustered per country and estimated in kg. IETF~100 had a total number of 1618 attendees ('remote', 'not-arrived', 'on-site'). }
\label{fig:ietf100}
\end{figure}
\subsection{Design and Performance }
\texttt{CO2eq}{} is implemented in Python 3.8 as execution time is not especially crucial.
We briefly evaluate the performance using cProfile~\cite{cprofiler}, as it does not require any changes to the code, by estimating $CO_2${} for IETF~100 with all necessary information cached.
As represented in Figure~\ref{fig:cprofiler}, the total computation takes 523.941 seconds, with 464.239 seconds associated with the ourairports module and 35 seconds associated with the read\_all function of the jcache module invoked by the \texttt{CityDB} class.
We suspect the \texttt{OurAirports} class from the ourairports module performs searches within a list, and this for every airport of every segment.
An \texttt{AirportDB} class should inherit from \texttt{OurAirports} and implement dictionary-based search.
Currently \texttt{CityDB} still uses a list of IATA cities, but we also expect this class to undergo some major redesign -- see section~\ref{sec:evolution}.
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{./fig/profile-crop.pdf}
\caption{Profiling the computation of IETF 100 with cProfiler}
\label{fig:cprofiler}
\end{figure}
We have not performed an extended analysis over \texttt{CO2eq}{}, as performance is not the primary purpose.
However, we have favored the use of dictionaries over lists to speed up searches.
The drawback is that lists enable searching with multiple search keys while dictionaries have a single entry key.
This is especially true for flight offers that are retrieved using multiple parameters such as origin, destination, dates, classes.
In order to provide some flexibility for the search, we use a primary key -- in our case (origin, destination) -- which refers to a list of possible entries, thus reducing the size of the list to search.
We also limit the size of the cached objects: only the latest resulting flight and its input parameters are cached.
When the primary key matches but the secondary parameters do not match the cached object, a new search is performed.
The search first checks whether a new flight can be derived from the list of offers stored in a file origin-destination.tar.gz.
If the file cannot be found or the flight offers present in the file do not match the criteria, a new request is sent to the Amadeus 'Flight Offer Search' service.
The flight response is derived, cached, and the additional offers are added to origin-destination.tar.gz.
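The caching flow can be sketched as follows; the in-memory dictionary and all names are our assumptions, and the fallback `search_fn` stands in for the archive lookup followed by the Amadeus request.

```python
# Sketch of the two-level cache described above: a primary key
# (origin, destination) maps to the latest cached flight together with
# the secondary search parameters it was retrieved with. All names are
# assumptions; the actual code also persists offers to a tar.gz file.
cache = {}

def lookup(origin, dest, params, search_fn):
    """Return the cached flight when the secondary parameters match, else search."""
    entry = cache.get((origin, dest))
    if entry is not None and entry['params'] == params:
        return entry['flight']
    flight = search_fn(origin, dest, params)  # e.g. local archive, then Amadeus
    cache[(origin, dest)] = {'params': params, 'flight': flight}
    return flight
```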
\subsection{Evolution}
\label{sec:evolution}
Most of the foreseen evolution for \texttt{CO2eq}{} is driven by increasing the ability 1) to automatically and transparently handle various types of locations and usages as well as 2) to extend the $CO_2${} evaluations.
Estimation of $CO_2${} requires a departure point and a destination point.
In the 'distance' mode, the inter-city distance can be computed directly, but in the 'flight' mode, departure and destination are airports.
Both cities and airports are represented by IATA codes.
The Amadeus Flight Offer Search takes an IATA city code as input -- as well as an IATA airport code -- and we use it to convert an IATA city code into the appropriate IATA airport code.
While in many cases the IATA city code and the IATA airport code are the same, this is not always the case, as some large cities have multiple airports -- PAR for Paris is associated with the airports CDG and ORY.
As a result, a central task of \texttt{CO2eq}{} is to translate an attendee location into a city IATA code or an airport IATA code.
In the case of the IETF, the attendee location is an ISO-3166 - alpha2 country code~\cite{iso-3166}.
There are currently 249 country codes that make it possible to assign -- even manually -- a country to an airport.
The general strategy we adopted is to derive the capital city name (a string) from the country, find the IATA city object with that capital name, and thus derive the associated IATA city code.
The binding of the ISO-3166 country code to the IATA city code results from matching the capital name -- a string -- between two databases:
one database that provides the capital for a country code and one database that contains the list of IATA cities.
The match is possible if the country code is effectively recognized as a country code by both CountryInfo~\cite{countryinfo} and ISO-3166~\cite{iso-3166}, and if the capital name returned by CountryInfo corresponds to a name associated with an IATA city.
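This capital-name matching can be sketched with two toy databases standing in for CountryInfo and the IATA city list; all entries and names below are illustrative.

```python
# Toy sketch of the country-code-to-IATA-city binding via capital names.
# Both mini-databases are illustrative stand-ins for CountryInfo and
# the IATA city list.
CAPITALS = {'FR': 'Paris', 'IE': 'Dublin', 'JP': 'Tokyo'}
IATA_CITIES = {'Paris': 'PAR', 'Dublin': 'DUB', 'Tokyo': 'TYO'}

def country_to_iata_city(cc):
    """Return the IATA city code for a country code, or None when unmatched."""
    capital = CAPITALS.get(cc)
    if capital is None:
        return None
    return IATA_CITIES.get(capital)
```

A `None` result corresponds to the mismatch cases discussed next, which currently require manual adjustments.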
We experienced issues with ISO-3166: the country codes 'RS' (Serbia), 'ME' (Montenegro) and 'MM' (Myanmar) were not referenced by the python module iso3166.
We should probably update the module.
In addition, some other countries may not have official capitals -- for example Palestine.
For other countries, the capital is only administrative and does not represent the hub of the nation well, which could result in flight search errors.
For example, for Australia we switch Canberra to Sydney; for the US we split the main cities randomly between WAS (Washington) and LAX (Los Angeles), and so on.
For other countries such as Andorra or many territories, the main city is not within the country itself. In some cases, such as Andorra, the closest main city (Toulouse) is not even the capital of the other country (France).
In other cases, the capital name provided by CountryInfo does not match the one in the IATA city list, due to a different spelling or because the name is associated with the territory instead of the capital.
Finally, in some cases, the airport provided was not able to offer flights, in which case we needed to approximate the location to another.
Overall, the ISO-3166 country code to IATA city binding currently requires some manual adjustments, and we would like to provide a more robust approach, especially as the current approach would probably not scale to more diverse locations such as cities -- which is our intention.
One foreseen alternative is to use geographic coordinates in combination with airport popularity or size in terms of passengers per year, or a specific Amadeus service such as Airport \& City Search~\cite{airportandcitysearch}.
On the one hand, we expect \texttt{CO2eq}{} to continue to be extended to take into account every meeting's specificities; on the other hand, we also expect \texttt{CO2eq}{} to provide an easy way to be used by default.
We are thinking of defining a common input JSON format for meeting attendees and meeting parameters -- especially for meeting lists -- to make the use of \texttt{CO2eq}{} easy via a web interface.
We would like to extend the $CO_2${} estimation and include additional measurements, such as ICAO~\cite{icao} or updated models from myclimate and Go Climate.
We also expect to complete \texttt{CO2eq}{} by including $CO_2${} emissions associated with hotels and meeting venues -- starting with~\cite{eia, epa, ghgp, ghgp-online} -- as well as other transports.
We would also like to be able to compute the $CO_2${} associated with video conferences to better estimate the gains provided by remote meetings.
Finally, we would like to extend \texttt{CO2eq}{} to usages other than meetings, which might be achieved by using a more generic data model -- at least internally, as we do not want the specific case of estimation for meetings to reflect such complexity.
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Flight $CO_2${} Emission Estimation }
\label{sec:co2eq}
\input{co2eq}
\section{\texttt{CO2eq}{} Overview }
\label{sec:impl}
\input{impl}
\section{Case Study: $CO_2${} emission analysis for IETF meetings}
\label{sec:ietf}
\input{ietf}
\section{Conclusions}
\label{sec:concl}
\input{conclusion}
\section*{Acknowledgments}
We would like to thank Marie-Jose Montpetit for her feedback and for the suggestion to consider the number of flight connections as a potential means to provide safer travel. We also thank Makan Pourzandi and Pernilla Bergmark for their support and for providing future directions.
\ifthenelse{\boolean{acm}}{
\bibliographystyle{ACM-Reference-Format}
}{
\bibliographystyle{IEEEtranS}
}
\section{Introduction}
The physics of phase transitions and phase stability of alloys is
often couched in terms of
statistical mechanics models on the generalized
(long-range pair and multibody interactions) Ising
lattice and computed most accurately with Monte Carlo
(MC) methods \cite{Binderbook}. These calculations are time consuming
and are thus usually restricted to coarse grids of chemical
potential and temperature. Further, MC simulations do not directly give
the values
of important thermodynamic variables such as entropy and free energy,
since these quantities cannot be written in terms of ensemble averages.
Instead, they are obtained laboriously
by integration of thermodynamic relations from a known starting point.
To remedy this situation, one often uses the less accurate
Molecular Field methods,
most notably the Cluster Variation Method (CVM) of Kikuchi \cite{Kikuchi}.
Despite its great simplicity, the CVM reproduces
many of the features of phase diagrams obtained by the
many-orders-of-magnitude more computer expensive MC method:
For the fcc nearest-neighbor antiferromagnetic Ising
Hamiltonian with coupling constant $J$ and
zero chemical field $h$=0,
the transition temperature $T_c$ \cite{note}
from MC simulations is 1.74 \cite{tc}
while CVM in the Tetrahedron (Tetrahedron-Octahedron)
approximation gives 1.89 (1.81).
This and other successes of the CVM \cite{Kikuchi} are
even more surprising in light of the finding
that the CVM correlation functions (the thermal
average of products of the Ising spin variables)
differ considerably from the exact MC values:
For example, in the nearest-neighbor
fcc antiferromagnetic Ising model, the MC pair
correlation functions \cite{note.mc} at
$T=1.9$ and $h=0$ are $-0.208,\,\,0.254,\,\,0.036,\,\,0.076$ for the first
to fourth neighbors, respectively,
while the tetrahedron-CVM first neighbor
correlation function is
$-0.188$
and the tetrahedron-octahedron-CVM first and second neighbor
correlation functions are
$-0.198$ and $+0.198$.
Thus, CVM correlation functions
are substantially closer to zero (i.e., more ``random'')
than the exact values.
The error in tetrahedron-CVM first neighbor
correlation function leads to a $\sim$10$\%$ error in
both energy and entropy relative to MC (see below).
However, despite such systematic discrepancies (of $\sim$10\% or
less) in reproducing
correlation functions, the CVM seems to describe well thermodynamic
properties (e.g., free energies) which depend on these
very correlation functions. The subject of this paper is precisely
these types of errors in CVM energy, entropy, and free energy
relative to MC. We make four points:
(i) We show that a reason for the success of the CVM
in describing the {\em free energy}
is an interesting cancellation of errors:
The closer-to-zero CVM correlation functions imply greater
randomness and hence an {\em overestimation} of
the internal energy compared to MC.
However, the more random CVM correlations
also lead to a larger entropy, and hence to an
{\em underestimation} of the $-TS$ term.
Thus, the error in internal energy is of opposite sign to the
error in the $-TS$ term, so these two errors partially cancel in
the free energy. This cancellation of errors is due to
the fact that the CVM free energy expression may be obtained
from a variational argument, \cite{Kikuchi} but not $E$ or
$S$ individually.
(ii) Our analysis gives insight into the successes and failures
of various approaches that attempt to improve CVM by
``borrowing'' certain quantities from MC. Indeed,
for some applications, one may require accuracies and flexibilities
beyond those provided by the CVM, and so there is a desire in the field
to find accurate ``hybrid'' methods combining the simplicity of the CVM
with the accuracy and flexibility of MC.
A natural possibility is to use \cite{Chris}
the correlation functions $\Pi_{\rm MC}$
(or cluster probabilities) obtained
from MC simulations in the
CVM expression for free energy
$F_{\rm CVM}(\Pi_{\rm MC})$
in the hope of obtaining a more
accurate free energy.
We demonstrate that these methods \cite{Chris} are
unlikely to succeed, as these
approximations do not benefit from the cancellation of errors noted
above.
(iii) We use the ``Entropic Monte Carlo'' (EMC) method
of Lee, \cite{Lee} which provides a method for determining the
entropy as a function of any state variable. We apply EMC
to the case of the CVM entropy as the state variable,
and demonstrate that $S_{\rm EMC}(S_{\rm CVM})$
provides a means for critically
evaluating the errors in CVM entropy.
The calculation of
$S_{\rm EMC}[S_{\rm CVM}(\Pi_{\rm MC})]$
further shows that this functional accurately
describes $S_{\rm MC}(\Pi_{\rm MC})$
and thus the Monte Carlo free energy.
However, this approach is computationally intensive.
Finally, inspired by the EMC philosophy,
(iv) We develop a functional
$\tilde{S}[S_{\rm CVM}(\Pi_{\rm MC})]$
that reproduces the exact
$S_{\rm MC}(\Pi_{\rm MC})$
very closely, and is computationally much less
expensive than either MC thermodynamic integration or EMC.
This functional permits one to borrow from MC calculations
the correlation functions (or cluster probabilities),
evaluate the ensuing CVM entropy $S_{\rm CVM}(\Pi_{\rm MC})$,
and thus obtain the nearly exact entropy
$\tilde{S}[S_{\rm CVM}(\Pi_{\rm MC})] \approx
S_{\rm MC}(\Pi_{\rm MC})$ and energy
$E(\Pi_{\rm MC})$.
This approach thus combines the accuracy of MC with
the computational simplicity of the CVM.
\section{Methods}
All of the calculations described in this paper will be
tests of the various methods on the fcc nearest-neighbor
antiferromagnetic Ising model.
\subsection{CVM Quantities}
We first briefly review our notation. Let $\sigma $ denote a
configuration (``microstate'') of Ising spins ($\pm 1$) on a lattice.
Consider a cluster (``figure'') $f$ with $k_f$ lattice points. The spin
variable, which
takes on the value $\hat{S}_i(\sigma) = -1 (+1)$ if there is an A (B)
atom at site $i$ of the figure,
depends on the configuration $\sigma$ of spins in the lattice.
Consider now all the clusters $Rf$
obtained from the cluster $f$ by the symmetry operations $R$ of the space
group of the lattice. In the CVM, we define the correlation function $\bar{%
\Pi}_f\left( \sigma \right) $ for the cluster $f$ in the configuration
$\sigma$ as the product of spin variables over the sites of $f$,
averaged over all the figures obtained from $f$ by the space group operations $R$:
\begin{equation}
\bar{\Pi}_f\left( \sigma \right) =
\frac{\sum_R \hat{S}_1 \hat{S}_2 ... \hat{S}_{k_f}}{\sum_R1}.
\label{pi}
\end{equation}
The CVM treats the correlation functions $\{\bar{\Pi}_f\}$ as thermodynamic
variables. For a cluster $f$ of $k_f$ sites there are $2^{k_f}$ arrangements
of spins $\pm 1$ at its sites. Each arrangement $j$ has a cluster
probability $\rho _j^{f}$ which is linearly dependent on the correlation
function values for all subclusters of $f$.
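As a concrete illustration of Eq. (\ref{pi}) for a nearest-neighbor pair figure, the following sketch evaluates the symmetry-averaged spin product on a periodic square lattice (a simplification of the fcc case; the lattice, size, and random configuration are ours for illustration only):

```python
import numpy as np

def pair_correlation(s):
    """Eq. (1) for the nearest-neighbor pair figure on a periodic square
    lattice: the mean of S_i * S_j over all NN bonds (the average over
    symmetry-equivalent figures reduces here to an average over bonds)."""
    right = s * np.roll(s, -1, axis=1)   # horizontal bonds
    down = s * np.roll(s, -1, axis=0)    # vertical bonds
    return (right.mean() + down.mean()) / 2.0

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))      # one microstate sigma

print(pair_correlation(np.ones((8, 8))))      # ferromagnetic order: 1.0
print(round(pair_correlation(spins), 3))      # random sigma: close to 0
```

For a perfectly ordered (checkerboard) configuration the same function returns $-1$, the antiferromagnetic limit.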
The correlation functions $\{\bar{\Pi}_f\}$ (or equivalently, cluster probabilities)
are determined by minimizing the free energy,
composed of the CVM internal energy
\begin{equation}
\label{energy}
E_{\rm CVM} = \sum_{f\subseteq F} D_f J_f \langle \bar{\Pi}_f \rangle
\end{equation}
and the CVM entropy
\begin{equation}
\label{scvm}
S_{\rm CVM}=-k\sum_{f\subseteq F}B^{f}\sum_j\rho _j^{f}\ln (\rho _j^{f})
\end{equation}
both written as a sum over all the subclusters of the maximum $F$. In the CVM
entropy expression, one also sums
over the arrangements of spins at the sites
of each subcluster.
The Barker coefficients
$B^{f}$ can be obtained from purely group theoretical
arguments. \cite{Barker,Morita}
Unless otherwise noted, all CVM calculations described in this
paper are for the fcc tetrahedron approximation.
In order to examine the errors involved in the CVM, we first
compute the accurate energy, entropy, and free energy from
Monte Carlo simulations of the nearest-neighbor
anti-ferromagnetic Ising model at $h$=0.
\subsection{Monte Carlo Quantities}
A Monte Carlo cell of 1728 sites
was used with 10$^6$ Monte Carlo steps per site at each
temperature. Although finite-size effects were not taken
into account, the calculated heat capacity showed a sharp peak
at $T=1.77$, within $\sim$1-2\% of
the most precise values for the transition temperature
given in the literature $\sim$1.74-1.75. \cite{tc}
The energy is given directly from MC, while the entropy is obtained
by thermodynamic integration down from
infinite temperature:
\begin{equation}
S\left( T\right) =k\ln 2+\frac{E\left( T\right) }{T}-\int_{0}^{\frac{1}{T}%
}E\left( T \right) \; d(1/T)
\end{equation}
(The entropy was also obtained by integrating the heat capacity down
in temperature; however, this method was found to be less efficient
in that it required a finer grid of temperatures near the transition
for equal accuracy.)
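The thermodynamic integration above can be sketched numerically. The toy model used here for validation (a single spin in a field $h$, with $E(\beta)=-h\tanh\beta h$ and a closed-form entropy) is our choice, standing in for the MC energies on a fine temperature grid:

```python
import numpy as np

def entropy_by_integration(betas, energies):
    """S(beta) = ln 2 + beta*E(beta) - integral_0^beta E dbeta'  (k_B = 1).

    `betas` must start at 0 (infinite temperature); the trapezoidal rule
    stands in for integration over the fine MC temperature grid.
    """
    increments = 0.5 * (energies[1:] + energies[:-1]) * np.diff(betas)
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    return np.log(2.0) + betas * energies - integral

# Validate against the exactly solvable toy model: one spin in a field h,
# E(beta) = -h tanh(beta h),  S(beta) = ln(2 cosh(beta h)) - beta h tanh(beta h).
h = 1.0
betas = np.linspace(0.0, 2.0, 2001)
E = -h * np.tanh(betas * h)
S = entropy_by_integration(betas, E)
S_exact = np.log(2 * np.cosh(betas * h)) - betas * h * np.tanh(betas * h)
assert np.max(np.abs(S - S_exact)) < 1e-5   # trapezoid error on this grid
```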
The correlation functions $\Pi_{\rm MC}$ for a figure $f$
were obtained by taking the thermal average (over the
10$^6$ Monte Carlo steps) of the
product of Ising spin variables over the sites ${1,2,...,k_f}$
of all symmetry-equivalent figures $f$
[i.e., the thermal average of Eq. (\ref{pi})].
\section{Results}
\subsection{Analysis of CVM errors vis-a-vis MC simulations}
\label{errors}
\begin{figure}[tb]
\hbox to \hsize{\epsfxsize=0.80\hsize\hfil\epsfbox{fig.1.eps}\hfil}
\nobreak\bigskip
\caption{Energy, entropy, and free energy as a function of
temperature for the nearest-neighbor
anti-ferromagnetic Ising model.
All quantities are given in dimensionless units:
$k_BT/J$ for temperature,
energies are given normalized by $J$,
and entropies are given normalized by $k_B$.
(a) Results obtained from
Monte Carlo simulations and thermodynamic integration.
(b) Errors in standard CVM (disordered phase symmetry)
compared to Monte Carlo.
We have used only the CVM entropy expression with the
disordered phase symmetry. Thus, in (b), differences with
Monte Carlo for temperatures below the CVM transition
($T$=1.89) are overestimated and hence are
shown as dashed lines (see text).
}
\label{mc}
\end{figure}
The energy, entropy, and free energy obtained from Monte Carlo
simulations are shown in Fig. \ref{mc}a. The first-order transition
at $T \simeq 1.77$ is evident from the discontinuity in energy and
entropy. We have also computed the energy, entropy, and
free energy predicted by CVM (in the tetrahedron approximation).
By comparing these CVM results with the ``exact'' Monte Carlo
results in Fig. \ref{mc}a, we may ascertain the errors in
thermodynamic functions of the CVM. The {\em differences}
$\delta E$, $\delta (-TS)$, and $\delta F$ between
the respective
CVM and Monte Carlo functions are shown in Fig. \ref{mc}b.
In our CVM calculations,
we have used only the CVM entropy expression with the
disordered phase symmetry. Thus, in Fig. \ref{mc}b differences with
Monte Carlo for temperatures below the CVM transition
($T$=1.89) are shown as dashed lines. Obviously,
practitioners of CVM would correctly impose a lower symmetry
on the entropy expression below the transition to the
ordered phase and would not use the disordered phase
symmetry in this temperature range.
The reason we use the CVM disordered phase symmetry
down to low temperature is due to our wish to combine
CVM methods with MC, which as we describe in the next
section, is problematic when using ordered expressions
for CVM entropy.
As indicated in the Introduction, the CVM correlation
functions $\{\bar{\Pi}_f\}$ are closer to zero than
the MC values. By Eq. (\ref{energy}), the CVM internal
energy is less negative
relative to Monte Carlo (thus, $\delta E>0$ in Fig. \ref{mc}b),
demonstrating that
the energetic effect of short-range order in CVM is
underestimated, and hence the CVM internal energy is ``more random''
than that of Monte Carlo.
The entropy of CVM
is overestimated ($\delta (-TS)<0$) relative
to Monte Carlo, again indicating a more random solution than
Monte Carlo. However, the error in the CVM {\em free energy}
$\delta F$
is simply the sum of the errors $\delta E + \delta (-TS)$. Since
$\delta E$ and $\delta (-TS)$
have opposite sign, they partially cancel, and
give an error in free energy which is considerably
smaller in magnitude than
either the error in energy or in entropy. Thus, owing to the
variational nature of the CVM, \cite{Kikuchi} the free energy
of CVM is more accurate than one might expect from considering
either the energy or entropy alone.
\subsection{Using MC correlation functions in CVM calculations:
Absence of cancellation of errors}
\label{fcvm.pimc}
\begin{figure}[tb]
\hbox to \hsize{\epsfxsize=0.80\hsize\hfil\epsfbox{fig.2.eps}\hfil}
\nobreak\bigskip
\caption{Error in
free energy as a function of
temperature for the nearest-neighbor
anti-ferromagnetic Ising model obtained
using the ``$F_{\rm CVM}(\Pi_{\rm MC})$'' method.
All quantities are given in dimensionless units:
$k_BT/J$ for temperature,
energies are given normalized by $J$,
and entropies are given normalized by $k_B$.
In the ``$F_{\rm CVM}(\Pi_{\rm MC})$''
method, MC correlation functions and
cluster probabilities are used
in the CVM expressions for energy and entropy,
respectively.
We have used only the CVM entropy expression with the
disordered phase symmetry. Thus, differences with
Monte Carlo for temperatures below the CVM transition
($T$=1.89) are overestimated and hence are
shown as dashed lines (see text).
Note that in this method,
the energy is precisely that of Monte
Carlo, thus the error in $-TS$ is also the error in
free energy.
}
\label{cvm.mc}
\end{figure}
Our foregoing discussion sheds light on a hybrid method which,
at first sight, might seem to combine the accuracy of Monte Carlo
with the simplicity of CVM. In this method, one uses the
correlation functions $\{\bar{\Pi}_f\}$
(or equivalently, the cluster probabilities $\{\rho _j^{f}\}$)
of MC simulations
in the expressions for CVM entropy [Eq. (\ref{scvm})]
and energy [Eq. (\ref{energy})].
We refer to this method
as the ``$F_{\rm CVM}(\Pi_{\rm MC})$'' method.
This method would, of course,
require one to perform a Monte Carlo simulation for each
composition and temperature of interest; however, one
could, in principle, obtain the entropy at each point
from a {\em single} Monte
Carlo simulation (i.e., one composition and one temperature)
rather than a {\em series} of Monte Carlo calculations which would be
required for thermodynamic integration of the entropy.
Since the Monte Carlo correlation functions are
used in this method in Eq. (\ref{energy}), there
is no error in energy ($\delta E=0$). Thus, the error in
free energy, shown in Fig. \ref{cvm.mc},
is equal to the error in entropy:
$\delta F = \delta (-TS)$.
Since there is no error in energy in this method, there is
no cancellation of errors. Hence, {\em even though
the exact Monte Carlo correlation functions are used
in the ``$F_{\rm CVM}(\Pi_{\rm MC})$'' method, it
produces less accurate free energies than standard
CVM}: For example, at $T$=1.92, the free energies
as given by Monte Carlo, ``$F_{\rm CVM}(\Pi_{\rm MC})$'',
and CVM are $-2.109$, $-2.050$, and $-2.067$, respectively.
(For comparison, CVM in the Tetrahedron-Octahedron
approximation gives $F$ = -2.094 for this temperature.)
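The free-energy errors implied by the numbers just quoted can be tabulated directly (values copied from the text, in units of $J$):

```python
# Free energies at T = 1.92 for the fcc NN antiferromagnet, from the text.
F_MC = -2.109
F = {
    "F_CVM(Pi_MC)": -2.050,
    "CVM (tetrahedron)": -2.067,
    "CVM (tetrahedron-octahedron)": -2.094,
}

errors = {name: round(f - F_MC, 3) for name, f in F.items()}
print(errors)
# Despite using the exact MC correlation functions, the hybrid method's
# error (0.059) exceeds that of plain tetrahedron CVM (0.042), because
# no cancellation between dE and d(-TS) occurs.
assert errors["F_CVM(Pi_MC)"] > errors["CVM (tetrahedron)"] \
       > errors["CVM (tetrahedron-octahedron)"]
```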
\begin{figure}[tb]
\hbox to \hsize{\epsfxsize=0.80\hsize\hfil\epsfbox{fig.3.eps}\hfil}
\nobreak\bigskip
\caption{Entropy versus temperature for the
nearest-neighbor anti-ferromagnetic Ising model.
The open squares (connected by a solid line) are
the result of Monte Carlo simulations, the
solid line is standard CVM, the long dashed line is
the CVM entropy expression evaluated with the
Monte Carlo cluster probabilities, the
thin short dashed line is the result of the entropic
Monte Carlo calculations, and
the thick short dashed line is the simple correction
to the CVM, the ``modified CVM''.}
\label{entropy}
\end{figure}
Other disadvantages of the $F_{\rm CVM}(\Pi_{\rm MC})$
approach are illustrated in
Fig. \ref{entropy}, showing a comparison of the
entropies as a function
of temperature as calculated by MC ($S_{\rm MC}$),
by standard CVM [$S_{\rm CVM}(\Pi_{\rm CVM})$],
and by the $F_{\rm CVM}(\Pi_{\rm MC})$ method
[$S_{\rm CVM}(\Pi_{\rm MC})$].
(The other two curves of Fig. \ref{entropy} are discussed in
Sections \ref{emc} and \ref{mcvm} below.)
One can again see that
(i) standard CVM (solid line)
overestimates the entropy at high temperatures relative
to Monte Carlo (open squares),
(ii) the CVM entropy of the disordered phase
is not applicable at low temperatures,
and
(iii) the $F_{\rm CVM}(\Pi_{\rm MC})$ method underestimates the entropy
at high temperatures, and at low temperatures this entropy
takes on the unphysical values
$S_{\rm CVM}(\Pi_{\rm MC})<0$.
These unphysical values are a result of the fact that
the expression for the {\em disordered} $S_{CVM}$ allows
negative values for many atomic configurations.
For instance, in
the {\em fcc} lattice, the simplest ordered configurations such as
$L1_0,\,\,L1_1,\,\,L1_2$ have negative $S_{CVM}$ values.
(These negative values of CVM entropy
are likely to persist no matter what sized maximal cluster
is used, and thus will always lead to difficulties with the
$F_{\rm CVM}(\Pi_{\rm MC})$ method at lower temperatures,
since the CVM entropy will incorrectly tend to negative values
rather than zero.)
Of course, one might argue that these ordered configurations
only possess negative CVM entropy when evaluated with the
CVM expression for the disordered phase,
whereas when evaluated with the
CVM expressions for the corresponding ordered phases,
they will have non-negative
entropies. However, this illuminates another potential
problem with the $F_{\rm CVM}(\Pi_{\rm MC})$ method:
In Monte Carlo simulations,
the presence of anti-phase boundaries and finite-sized domains
of long-range order preclude one from unambiguously
defining the distinction between sublattices and extent of
long-range order present in the simulation.
However, an {\em ordered} CVM expression for the entropy
is written in terms of correlation functions and
cluster probabilities for all the symmetry-distinct
figures for the symmetry of the long-range ordered
phase.
Thus, the CVM ordered entropy expressions
presuppose the domains of long-range order are
infinite in size, and hence the distinction of various
sublattices in the ordered phase is unambiguous.
Thus, using the ``$F_{\rm CVM}(\Pi_{\rm MC})$'' method
with ordered CVM entropy expressions is not
practical: one does not know from the MC simulations
precisely how to divide the simulation cell into
sublattices of long-range order, and hence one does
not even know which ordered
CVM expression to use.
Nor does one
know at what temperature to change the symmetry of
the CVM to the ordered entropy expression.
Thus, an ideal method combining Monte Carlo and CVM would only
use a {\em single} expression (e.g., the disordered CVM expression)
for the entropy at both low and
high temperatures.
We next describe such a method called ``Entropic
Monte Carlo'' (EMC).
\subsection{The entropic Monte Carlo method: A critical
evaluation of CVM entropy}
\label{emc}
Lee \cite{Lee} has shown a practical way to determine the entropy of a Monte
Carlo cell as a function of any state variable. We call this method
Entropic Monte Carlo (EMC). Though in his paper Lee
applies the EMC method
to the case of the energy as state variable in a quantized
system, here we describe instead $S_{\rm EMC}$ in terms
of the state variable
$S_{CVM}(\sigma)$ which is a continuous, not quantized, variable.
Our strategy will be to calculate $S_{\rm EMC}(S_{CVM})$
by the method of Lee, and then insert $S_{CVM}(\Pi_{\rm MC})$
into this expression, giving $S_{\rm EMC}[S_{CVM}(\Pi_{\rm MC})]$
which we write as $S_{\rm EMC}(\Pi_{\rm MC})$. We will show that
this function reproduces very well $S_{\rm MC}$.
First, we describe how $S_{\rm EMC}(S_{CVM})$ is calculated:
The EMC method is a self-consistent process in which each iteration
is made from a
series of Monte Carlo sweeps where the driving ``energy'' $E(\sigma)$
of the Monte Carlo equations is not the true energy contained in the sample
but an approximation to the {\em entropy}:
\begin{equation}
E(\sigma) =E[S_{CVM}(\sigma)]
\label{driving}
\end{equation}
which depends on the configuration $\sigma $ through the
function $E[S_{CVM}] $, whose argument is the CVM entropy (per site)
calculated with Eq.(\ref{scvm})
for the cluster probabilities $\rho _j^{f}\left( \sigma \right) $ of the
configuration $\sigma $. The function $E\left[ S_{CVM}\right] $ is assumed
to be monotonic.
The EMC dynamics are given by the detailed balance condition
\begin{equation}
\exp \left[ -E\left( \sigma _i\right) \right] W(i\rightarrow j)=\exp \left[
-E\left( \sigma _j\right) \right] W(j\rightarrow i)
\end{equation}
where $W$ are transition rates and
$E(\sigma)$ is given by Eq. (\ref{driving}).
After many MC sweeps of the full lattice, one
obtains a histogram
\begin{equation}
H\left( \bar{S}_{CVM}\right) =Xd\left( \bar{S}_{CVM}\right) \exp \left[
-E\left( \bar{S}_{CVM}\right) \right]
\end{equation}
where $d\left( \bar{S}_{CVM}\right) $ is the number of configurations with a
given value $\bar{S}_{CVM}$ of the CVM entropy (degeneracy in $S_{CVM}$) and
$X$ is a constant of proportionality. We distinguish $\bar{S}_{CVM}$, which
is a numerical argument attaining a certain value, from $S_{CVM}$, which is a
function of both the microstate [through Eq.(\ref{scvm})] and the cluster
probabilities $\rho _j^{f}\left( \sigma \right)$.
The density of
states function of the CVM entropy $S_{CVM}$ is given by
\begin{eqnarray}
\label{D}
D(\bar{S}_{CVM}) &=&\sum_\sigma d\left( \bar{S}_{CVM}\right) \delta \left(
\bar{S}_{CVM}-S_{CVM}\left( \sigma \right) \right) \nonumber \\
&=&(1/X)\sum_\sigma \exp \left( E\left( \bar{S}_{CVM}\right) \right) H\left(
\bar{S}_{CVM}\right) \times \nonumber \\
&& \delta \left( \bar{S}_{CVM}-S_{CVM}\left( \sigma
\right) \right)
\end{eqnarray}
and the entropy $S_{EMC}=S(\bar{S}_{CVM})$ per site is defined as
\begin{equation}
\exp [NS(\bar{S}_{CVM})]=\int_{-\infty }^{\bar{S}_{CVM}}D(\xi )d\xi
\label{defSS}
\end{equation}
\noindent where $N$ is the number of sites in the MC cell.
From Eqs.(\ref{D}) and Eq.(\ref{defSS}), we may obtain the difference between
the entropy at two different values $\bar{S}_{CVM}$.
\begin{eqnarray}
S(&&\bar{S}_{CVM}^{(2)})-S(\bar{S}_{CVM}^{(1)})= \nonumber \\
& &\frac{1}{N}\ln [
\sum_{\sigma ,\,\,S_{CVM}(\sigma )<\bar{S}_{CVM}^{(2)}}
\exp(NE[S_{CVM}(\sigma)]) \times \nonumber \\
& &H[S_{CVM}(\sigma)]] \nonumber \\
& &-\frac{1}{N}\ln [\sum_{\sigma ,\,\,S_{CVM}(\sigma) <
\bar{S}_{CVM}^{(1)}}\exp(NE[S_{CVM}(\sigma)]) \times \nonumber \\
& &H[S_{CVM}(\sigma)]]
\label{basic}
\end{eqnarray}
which is the basic equation used to determine the entropy from the EMC runs.
On the right-hand side of Eq. (\ref{basic}), the sums are over
the microstates $\sigma$ obtained in the EMC sweeps whose CVM entropy
$S_{CVM}(\sigma)$ are smaller than $\bar{S}_{CVM}^{(2)}$ or
$\bar{S}_{CVM}^{(1)}$.
As pointed out by Lee \cite{Lee}, the entropy determination becomes
especially simple when the interaction $E(S_{CVM})$ is such that the
histogram $H\left[ \bar{S}_{CVM}\right] $ is uniformly distributed and
has little dependence on $\bar{S}_{CVM}$. In this case, Eq. (\ref{basic})
becomes
\begin{eqnarray}
S(&&\bar{S}_{CVM}^{(2)})-S(\bar{S}_{CVM}^{(1)}) = \nonumber \\
&& \frac 1N\ln [\sum_{\sigma ,\,\,S_{CVM}(\sigma) <\bar{S}_{CVM}^{(2)}}
\exp(NE[S_{CVM}(\sigma)])] \nonumber \\
&&-\frac 1N\ln [\sum_{\sigma ,\,\,S_{CVM}(\sigma) <\bar{S}_{CVM}^{(1)}}
\exp(NE[S_{CVM}(\sigma)])].
\label{basic.simple}
\end{eqnarray}
Because of the factor of $N$, the
exponential in Eq. (\ref{basic.simple})
is a rapidly increasing function of $S_{CVM}$, and hence
only the extremes contribute significantly to the sums, or
\begin{equation}
S(\bar{S}_{CVM}^{(2)})-S(\bar{S}_{CVM}^{(1)})\simeq \frac 1NE(\bar{S}%
_{CVM}^{(2)})-\frac 1NE(\bar{S}_{CVM}^{(1)}) \label{E}
\end{equation}
This equation
then suggests the self-consistent procedure for determining the
entropy:
From a crude estimate of $S\left( S_{CVM}\right) $,
we use Eq. (\ref{E}) to obtain the interaction
$E\left( S_{CVM}\right)$ with which we make EMC runs, with which we
recalculate $S\left( S_{CVM}\right) $ from the basic Eq. (\ref{basic}). This
process is taken to self-consistency.
When self-consistency is reached,
(i) the histogram
$H[S_{CVM}]$ (the number of microstates in each small
range of $S_{CVM}$ obtained in a series of MC sweeps) is nearly
constant, independent of the value of $S_{CVM}$ and
(ii) the driving ``energy'' which is the
approximation to the entropy [Eq. (\ref{driving})],
becomes equal to the entropy calculated from
the density of states [Eq. (\ref{defSS})],
or in other words, the EMC entropy becomes exact.
An important aspect of the EMC is that the
calculated entropy functional form of $S\left( S_{CVM}\right) $ {\em does not
depend on any particular Ising Hamiltonian} (so long as the important
correlations are contained within the CVM maximal cluster),
because the role of the ``energy'' driving the EMC calculations
is played in Eq. (\ref{basic}) by the entropy itself.
\begin{figure}[tb]
\hbox to \hsize{\epsfxsize=0.80\hsize\hfil\epsfbox{fig.4.eps}\hfil}
\nobreak\bigskip
\caption{Entropic Monte Carlo results for
$S_{\rm EMC} = S(S_{\rm CVM})$.
Filled circles are the EMC calculations, and
the solid line is the EMC-inspired entropy
functional $\tilde{S}(S_{\rm CVM})$. The
dashed line is line of unit slope simply to guide
the eye. EMC was performed for
a cell of 12$^3$=1728 sites, using the CVM tetrahedron
expression for the disordered entropy. Note that
many configurations correspond to negative CVM
entropy, with the most
negative (for all configurations with $\leq$16 atoms
per cell) being the $L1_0$ configuration.}
\label{old.friend}
\end{figure}
Figure \ref{old.friend} shows a typical result $S_{\rm EMC}(S_{\rm CVM})$
of the EMC calculations,
using a MC cell with $N=12^3=1728$ sites.
We also performed EMC calculations with
different MC cells, with the results being slightly different for the larger
negative values of $S_{CVM}$. The negative values of $S_{CVM}$ correspond to
configurations of atoms with higher symmetry, usually associated with
smaller repeat units, thus explaining why the curve depends to some
extent on the size and shape of the MC cell for the negative values of
$S_{CVM}$: For instance, for an EMC cell with an odd number of sites (e.g.,
$11^3$), one could never obtain the stoichiometric
configuration $L1_{0}$ with its large negative CVM entropy. In fact, simple
high-symmetry configurations such as $L1_{0},\,L1_{1},\,L1_{2}$ all have
negative values of $S_{CVM}$. Examining the CVM entropy for all
configurations with up to 16 atoms per cell, \cite{file}
we found the configurations with the
most negative CVM entropy
had very small unit cells. The largest negative
CVM entropy occurs for
$L1_{0}$ for which $S_{CVM}\left( L1_{0}\right) =-1.34\ln 2$.
The results $S_{\rm EMC}[S_{\rm CVM}(\Pi_{\rm MC})]$
of EMC are shown in Fig. \ref{entropy}, where they are
contrasted with the results of Monte Carlo,
CVM, and ``$F_{\rm CVM}(\Pi_{\rm MC})$''. \cite{note.emc}
By comparing $S_{\rm EMC}(\Pi_{\rm MC})$ and
$S_{\rm CVM}(\Pi_{\rm CVM})$ with $S_{\rm MC}$,
we see that
the EMC and CVM entropies are equally accurate
at high temperatures.
Remarkably however,
the EMC method also
produces extremely accurate entropies at
{\em low temperatures}, in qualitative contrast with
the ``$F_{\rm CVM}(\Pi_{\rm MC})$''
method. Thus, even though
one only uses a single disordered expression for
the CVM entropy in the EMC calculations, the EMC
reproduces both high temperature (disordered)
and low temperature (ordered)
entropy values, with no need
to change the CVM entropy expression at any point.
Since the internal energy in EMC is exact, this
method does not benefit from the cancellation of errors
noted in Sec. \ref{errors} for the CVM; it does not
need to. Instead, it is accurate because its
{\em individual terms} ($E$ and $-TS$) are accurate.
The EMC, like standard Monte Carlo, can be a computationally laborious
procedure. However, our EMC calculations of $S\left( S_{CVM}\right)$
suggest a very simple functional $\tilde{S}(S_{\rm CVM})$
which is appealing
because the correction {\em does not require one to perform an EMC
calculation}. We next describe this simple correction.
\subsection{An EMC-inspired new entropy functional}
\label{mcvm}
While $S_{\rm CVM}(\Pi_{\rm MC})$ can be inaccurate,
$S_{\rm EMC}[S_{\rm CVM}(\Pi_{\rm MC})]$ is
accurate but computationally expensive. Thus we will now
develop a new functional $\tilde{S}[S_{\rm CVM}(\Pi_{\rm MC})]$
which is both accurate and inexpensive.
The EMC results of Fig. \ref{old.friend}
permit one to guess the behavior of the ``exact'' entropy $S\left(
S_{CVM}\right) $ (in the limit of $N\rightarrow \infty $)
as a function of the CVM entropy
obtained from a ``good'' maximal cluster (e.g., the tetrahedron or the
tetrahedron-octahedron).
This ``true'' entropy function $S(S_{CVM})$ should have the following properties:
(i) The most positive $S_{\rm CVM}(x)$ entropy $S_{\rm CVM}^{\rm MAX}(x)$
should correspond to the exact entropy for this case, i.e., the ideal
mixing entropy:
$S(S_{\rm CVM}^{\rm MAX})=S^0(x)= -k_B[x\ln x + (1-x)\ln (1-x)]$.
(ii) The slope of $S(S_{CVM})$ at the maximum value of $S_{\rm CVM}^{\rm MAX}=S^0$
should be unity because for nearly random configurations
the CVM approaches the exact result:
$\frac{dS}{dS_{CVM}}|_{S^0}=1$.
(iii) The most negative value of the CVM
entropy should correspond to zero ``true'' entropy.
Thus, $S(S_{\rm CVM}^{\rm MIN})=0$.
The configuration with most negative CVM entropy can be
found by examining all configurations up to some maximum unit-cell size,
as described in Ref. \onlinecite{file}.
For instance, for the tetrahedron CVM,
$L1_{0}$ has the most negative CVM entropy
[$S_{\rm CVM}^{\rm MIN}=S_{CVM}(L1_0) =-1.34\ln 2$].
This point is indicated in Fig. \ref{old.friend} by a square.
(iv) The function $S\left( S_{CVM}\right) $ should increase
monotonically with $S_{CVM}$ as one can see from Eq. (\ref{basic}).
Also, $S\left( S_{CVM}\right) $ has a
positive curvature [due to the exponent of $N$ in the
right-hand side of Eq. (\ref{basic})].
We select a simple functional form for $S\left( S_{CVM}\right) $
which satisfies (i)-(iv) above (but otherwise possesses
no special physical meaning). However, use of this
simple form will provide a means of evaluating energies
and entropies which (a) is computationally much more efficient
than either MC thermodynamic integration or EMC, (b) possesses
MC accuracy, and (c) may be extended to use any maximal cluster
of the CVM.
The functional form we choose for the approximation
$\tilde{S}(S_{\rm CVM})$
to the true $S\left( S_{CVM}\right) $ which satisfies the
four properties (i)-(iv) above is:
\begin{equation}
\tilde{S}(S_{\rm CVM})=
(S^0-\frac{S^0}{\alpha})+
\frac{S^0}{\alpha}
\exp[\alpha(\frac{S_{\rm CVM}}{S^0}-1)]
\label{MCVM}
\end{equation}
where $\alpha$ is the solution of
\begin{equation}
0=(S^0-\frac{S^0}{\alpha})+
\frac{S^0}{\alpha}
\exp[\alpha(\frac{S_{\rm CVM}^{\rm MIN}}{S^0}-1)].
\label{MCVM.alpha}
\end{equation}
In the case of the tetrahedron CVM, $\alpha =0.86917$.
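Equation (\ref{MCVM.alpha}) is a one-parameter transcendental equation, so $\alpha$ is easily obtained numerically. The following sketch (an illustration, not part of the original analysis) takes $k_B=1$ and the equiatomic case $S^0=\ln 2$, uses the quoted $S_{\rm CVM}^{\rm MIN}=-1.34\ln 2$, recovers $\alpha\approx 0.869$ by bisection, and checks properties (i) and (iii) of $\tilde{S}$:

```python
import math

# Assumptions of this sketch: k_B = 1 and equiatomic composition (x = 1/2),
# so the ideal mixing entropy is S^0 = ln 2; the most negative tetrahedron-CVM
# entropy is that of L1_0, S_CVM^MIN = -1.34 ln 2 (values quoted in the text).
S0 = math.log(2.0)
S_min = -1.34 * math.log(2.0)

def constraint(a):
    # Right-hand side of Eq. (MCVM.alpha); its root determines alpha.
    return (S0 - S0 / a) + (S0 / a) * math.exp(a * (S_min / S0 - 1.0))

# Bisection on [0.5, 1.5]; the constraint changes sign on this bracket.
lo, hi = 0.5, 1.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if constraint(lo) * constraint(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
alpha = 0.5 * (lo + hi)  # ~0.8692 for the tetrahedron CVM

def S_tilde(S_cvm):
    # Eq. (MCVM): the corrected entropy functional.
    return (S0 - S0 / alpha) + (S0 / alpha) * math.exp(alpha * (S_cvm / S0 - 1.0))

# Property (i): S_tilde(S^0) = S^0; property (iii): S_tilde(S_CVM^MIN) = 0.
print(alpha, S_tilde(S0) - S0, S_tilde(S_min))
```

Property (ii), unit slope at $S^0$, holds analytically: $d\tilde{S}/dS_{\rm CVM}=\exp[\alpha(S_{\rm CVM}/S^0-1)]$, which equals one at $S_{\rm CVM}=S^0$.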
The function in Eq. (\ref{MCVM.alpha}) depends only
on a single parameter $S_{\rm CVM}^{\rm MIN}$ which can be easily
estimated from an enumeration of small-unit-cell configurations \cite{file}
{\em using any maximal cluster of the
CVM method}. (In contrast, an EMC calculation as we described in
Section \ref{emc} is only practical for the tetrahedron CVM approximation.)
While the negative $S_{CVM}$ configurations, which correspond to
highly symmetric arrangements of atoms, have no meaning in the standard CVM
procedure (since in standard CVM one would use a different expression
for CVM entropy to describe ordered phases),
the main merit of a correction such as
Eqs. (\ref{MCVM})-(\ref{MCVM.alpha}) is to
restore these highly ordered configurations into a single CVM
expression by attributing a
non-negative entropy to them. Naturally this correction, when used together with
the Monte Carlo correlation functions, will be especially important near the
transition when the ordered configurations begin to be important.
To test these ideas, we calculated
$\tilde{S}[S_{\rm CVM}(\Pi_{\rm MC})] \equiv \tilde{S}(\Pi_{\rm MC})$,
where $\tilde{S}$ is given by Eq. (\ref{MCVM}), and the CVM
is executed within the tetrahedron approximation.
One sees (Fig. \ref{entropy})
that this approach presents a remarkable improvement over the
$F_{\rm CVM}(\Pi_{\rm MC})$ method in all temperature ranges,
especially below the transition temperature where
$S_{\rm CVM}(\Pi_{\rm MC})$ is negative.
Also, the simple functional
represented by $\tilde{S}(\Pi_{\rm MC})$ effectively
retains all of the improvements over $F_{\rm CVM}(\Pi_{\rm MC})$
that were obtained by the full EMC calculation. In fact, for high
temperatures, the $\tilde{S}(S_{\rm CVM})$ approach is even closer to the
exact Monte Carlo results than the EMC calculations on which it
was based! This fact can be understood by examining the EMC calculations
in Fig. \ref{old.friend}: Figure \ref{old.friend}
shows that the EMC calculations
using a cell of 1728 sites do not reproduce property (ii) above,
$\frac{dS}{dS_{CVM}}|_{S^0}=1$. In fact, the slope of the EMC
curve in Fig. \ref{old.friend} is only about 0.82 at the maximum
value of the entropy. EMC simulations with even smaller cells were
typically found to possess even smaller slopes. Presumably, for larger
EMC simulations one would approach the correct slope of unity.
The slope of EMC being smaller than unity means that as one comes down
from infinite temperature, or $S_{CVM} = \ln 2$, the EMC entropy
maintains a larger value than it should. This explains why
$S_{EMC} > S_{MC}$ for temperatures above the critical temperature.
Because the $\tilde{S}(S_{\rm CVM})$ approach
was constructed to obey the requirement
$\frac{dS}{dS_{CVM}}|_{S^0}=1$, it corrects the error in the
slope of EMC caused by the finite-sized simulation cell, and hence
improves the entropy above the transition.
The reason that the derivative of EMC is less than one for small
cell sizes is due to the negative $S_{CVM}$ configurations, which
are typically high-symmetry, small-unit-cell configurations. Thus,
these negative $S_{CVM}$ states are represented more in small EMC
cells relative to configurations with large-unit-cells and low symmetry.
If the density of states in Eq. (\ref{defSS}) becomes artificially
large for the negative region of $S_{CVM}$ (due to small EMC cells),
it will have to be compensated by an artificially small density
of states in the region of positive $S_{CVM}$ [since the integral
in Eq. (\ref{defSS}) is constrained by the fact that it must be
$2^N$ at $S_{CVM} = \ln 2$.] Thus, the integral will grow more
slowly than it should for CVM entropies approaching $\ln 2$, and
hence the slope will be less than one.
\section{Summary}
The main accomplishment of this paper is to suggest a simple
functional $\tilde{S}(S_{\rm CVM})$ [Eqs. (\ref{MCVM})-(\ref{MCVM.alpha})]
that improves the CVM entropy.
The development was
based on insights gained from our analysis of the CVM free energy
(which showed cancellation of energetic {\em vs.} entropic errors),
and from the EMC philosophy \cite{Lee} permitting one to express
the true entropy as a functional of an approximate, but deterministic
entropy. The new functional $\tilde{S}(S_{\rm CVM})$ can be used
in future applications either with CVM alone
[simply by replacing the CVM
entropy with Eq. (\ref{MCVM}) in any existing CVM program], or
with a combination of CVM and $\Pi_{\rm MC}$ (using
$\tilde{S}[S_{\rm CVM}(\Pi_{\rm MC})]$
as described in this paper).
\begin{center}
{\bf Acknowledgements}
\end{center}
LGF acknowledges support from NREL during his visit, where
much of this work was carried out.
Work at NREL was supported by the Office of Energy Research
(OER) [Division of Materials Science of the Office of Basic Energy
Sciences (BES)], U. S. Department of Energy, under contract No.
DE-AC36-83CH10093.
\section{Introduction}
\label{sec:intro}
The mechanism by which the most massive stars form, and whether there is an upper limit to the mass of star that this mechanism can produce, has been a problem in astrophysics since the pioneering works of \citet{larson71a} and \citet{kahn74a}. These authors focused on the physical mechanisms that might inhibit accretion onto stars as they accreted interstellar matter, and we will return to this topic below. However, a more modern approach to the problem of very massive stars requires placing them in the context of a broader theory of the stellar initial mass function (IMF).
The IMF is characterized by a peak in the range $0.1-1$ ${M_\odot}$, and a powerlaw tail at higher masses of the form $dn/d\ln m\propto m^\Gamma$ with $\Gamma\approx -1.35$ \citep[and references therein]{bastian10a}. However, the mass to which this simple powerlaw extends is not very well-determined. It is not possible to measure the IMF for field stars to very high masses due to uncertainties in star formation histories and the limited number of very massive stars available in the field. Measurements of the high-mass end of the IMF in young clusters must target very massive systems in order to achieve strong statistical significance, and such clusters are rare and thus distant. This creates significant problems with confusion. The limited studies that are available suggest that a powerlaw with $\Gamma \approx -1.35$ remains a reasonable description of the IMF out to masses of $\sim 100$ ${M_\odot}$ or more \citep[e.g.][]{massey98b, kim06b, espinoza09a}. However, it is by no means implausible that there are hidden features lurking in the IMF at the highest masses. Indeed, some analyses of the IMF have claimed to detect an upper cutoff (see the Chapter by F.~Martins in this volume for a critical review).
This observational question of whether the most massive stars are simply the ``tip of the iceberg" of the normal IMF, or whether they represent a fundamentally distinct population, animates the theoretical question about how such stars form. The two dominant models for how massive stars form are formation by accretion of interstellar material, i.e.~the same mechanism by which stars of low mass form, and formation by collisions between lower mass stars, which would represent a very different formation mechanism from the bulk of the stellar population.\footnote{Mergers between two members of a tight binary as a result of the growth of stellar radii during main sequence or post-main sequence evolution, or as a result of secular interactions in hierarchical triples, is a third possible mechanism by which massive stars can and probably do gain mass \citep{sana12a, de-mink13a, moeckel13a}. However, I do not discuss this possibility further, because it provides at most a factor of two increase in stellar mass.} In the remainder of this Chapter, I review each of these models in turn (Sections \ref{sec:accretion} and \ref{sec:collision}), pointing out its strengths, weaknesses, and areas of incompleteness. I then discuss the observable predictions made by each of these models, and which might be used to discriminate between them (Section \ref{sec:discrimination}). Finally, I summarize and return to the question first raised by \citet{larson71a} and \citet{kahn74a}: is there an upper mass limit for star formation, and if so, why (Section \ref{sec:masslimit})?
\section{The Formation of Very Massive Stars by Accretion}
\label{sec:accretion}
The great majority of stars form via the collapse of cold, gravitationally-unstable, molecular gas, and the subsequent accretion of cold gas onto the protostellar seeds that the collapse produces \citep[and references therein]{mckee07a}. There are numerous competing models for the origin of the observed $\Gamma\approx -1.35$ slope \citep[e.g.][]{bonnell01b, padoan02a, padoan07a, hennebelle08b, hennebelle09a, hennebelle13a, krumholz11c, krumholz12b, hopkins12d}, but in essentially all of these models, the massive end of this tail is populated by stars forming in rare, high-density regions that provide at least the potential for large mass reservoirs to be accreted onto protostellar seeds at high rates. Some but not all of these models identify the regions that give rise to massive stars with observed ``cores'': compact ($\sim 0.01$ pc), dense ($>10^5$ molecules cm$^{-3}$) regions of gas, the largest of which can have masses large enough to form very massive stars \citep[e.g.,][]{beuther04b, beuther05b, bontemps10a}. None of these models predict that there is an upper limit to the masses of either cores or stars, and there is no observational evidence of a truncation either. Thus, it would seem that there is no barrier in terms of mass supply to the formation of very massive stars via the same accretion processes that give rise to the remainder of the IMF. However, the fact that there is a large supply of mass available does not guarantee that it can actually accrete onto a single object and thereby produce a very massive star. There are four major challenges to getting the available interstellar mass into a star: fragmentation, radiation pressure, photoionization, and stellar winds. I discuss each of these challenges in turn in the remainder of this Section.
\subsection{Fragmentation}
The first challenge, fragmentation, can be stated very simply. When gravitationally-unstable media collapse, they tend to produce objects with a characteristic mass comparable to the Jeans mass,
\begin{equation}
\label{eq:jeansmass}
M_J = \frac{\pi}{6}\frac{c_s^3}{\sqrt{G^3\rho}} = 0.5 \left(\frac{T}{10\mbox{ K}}\right)^{3/2} \left(\frac{n}{10^4\mbox{ cm}^{-3}}\right)^{1/2} M_\odot,
\end{equation}
where $c_s$ is the sound speed, $\rho$ is the gas density, $T$ is the gas temperature, and $n$ is the gas number density. The temperature and density values to which I have scaled in the above equation are typical values in star-forming regions. Clearly, a massive star is an object whose mass is far larger than the Jeans mass of the interstellar gas from which it is forming. Why, then, does this gas not fragment into numerous small stars rather than forming a single large one? Indeed, hydrodynamic simulations of the collapse of compact, massive regions such as the observed massive cores show that they generally fail to produce massive stars \citep{dobbs05a}, and larger-scale simulations of star cluster formation appear to produce mass functions that are better described by truncated powerlaws than pure powerlaws \citep[e.g.][]{maschberger10a}, and where the formation of the most massive stars is limited by ``fragmentation-induced starvation" \citep{peters10b, girichidis12a}.
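The numerical prefactor in equation \ref{eq:jeansmass} is straightforward to reproduce. The following sketch (an illustration in CGS units, assuming standard values for the physical constants and the mean particle mass $\mu=3.9\times 10^{-24}$ g quoted later in this Chapter) evaluates $M_J$ at the fiducial conditions:

```python
import math

# CGS constants (standard values, assumed for this illustration).
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16    # Boltzmann constant [erg K^-1]
mu = 3.9e-24       # mean particle mass for molecular gas [g]
M_sun = 1.989e33   # solar mass [g]

def jeans_mass(T, n):
    """M_J = (pi/6) c_s^3 / sqrt(G^3 rho), with c_s = sqrt(k_B T / mu)."""
    c_s = math.sqrt(k_B * T / mu)  # isothermal sound speed [cm s^-1]
    rho = mu * n                   # mass density [g cm^-3]
    return (math.pi / 6.0) * c_s**3 / math.sqrt(G**3 * rho)

# Fiducial star-forming conditions: T = 10 K, n = 1e4 cm^-3.
print(jeans_mass(10.0, 1e4) / M_sun)  # ~0.5 M_sun, as in the text
```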
While these results might seem to present a serious challenge to the idea that massive stars form by accretion, they are mostly based on simulations that include no physics other than hydrodynamics and gravity. More recent simulations including a wider range of physical processes suggest that the fragmentation problem is much less severe than was once believed. Fragmentation is reduced by two primary effects: radiation feedback and magnetic fields.
Radiation feedback works to reduce fragmentation by heating the gas, raising its pressure and thus its Jeans mass (cf.~equation \ref{eq:jeansmass}). Although massive stars can of course produce a tremendous amount of heating, the more important effect from the standpoint of suppressing fragmentation is the early feedback provided by low mass stars, whose luminosities are dominated by accretion rather than internal energy generation. \citet{krumholz06b} first pointed out the importance of this effect, showing that even a $1$ ${M_\odot}$ star accreting at the relatively high rates expected in the dense regions where massive stars form could radiate strongly enough to raise the gas temperature by a factor of a few at distances of $\approx 1000$ AU. Since the minimum fragment mass is roughly the Jeans mass, and this varies as temperature to the $3/2$ power (equation \ref{eq:jeansmass}), this effect raises the minimum mass required for gas to fragment by a factor of $\approx 10$. Subsequent radiation-hydrodynamic simulations by a number of authors \citep{krumholz07a, krumholz10a, krumholz11c, bate09a, bate12a, offner09a} have confirmed that radiation feedback dramatically suppresses fragmentation compared to the results obtained in purely hydrodynamic models. \citet{krumholz08a} argue that this effect will efficiently suppress fragmentation in regions of high column density, allowing massive stars to form without their masses being limited by fragmentation. In contrast, \citet{peters10b} find that fragmentation limits the growth of massive stars even when heating by direct stellar photons is included, but their simulations do not include the dust-reprocessed radiation field that is likely more important for regulating fragmentation, and are limited to regions of much lower density than the typical environment of massive star formation.
Magnetic fields limit fragmentation in two ways. First, they remove angular momentum. In a collapsing cloud, the densest regions collapse fastest, and as the gas falls inward it attempts to rotate faster and faster in order to conserve angular momentum. When the collapsing gas is threaded by a magnetic field, however, the resulting differential rotation between inner collapsing regions and outer ones that have not yet begun to collapse twists the magnetic field lines. The twist produces a magnetic tension force that transfer angular momentum from the inner to the outer regions, a process known as magnetic braking. Formally, for an axisymmetric flow, one can show \citep[e.g.,][]{stahler05a} that the time rate of change of the angular momentum of a fluid element at a distance $\varpi$ from the rotation axis due to magnetic forces is given by
\begin{equation}
\frac{\partial J}{\partial t} = \frac{1}{4\pi} \left[B_\varpi \frac{\partial}{\partial \varpi} (\varpi B_\phi) + \varpi B_z \frac{\partial}{\partial z} B_\phi\right]
\end{equation}
where $\mathbf{B}$ is the magnetic field, and we have used cylindrical coordinates such that the components of $\mathbf{B}$ are $(B_\varpi, B_\phi, B_z)$. Thus in general if the toroidal ($\phi$) component of the magnetic field varies with either radial or vertical position, and the field also has a poloidal ($\varpi$ or $z$) component, there will be a magnetic torque that alters the angular momentum of the gas. For the types of magnetic field configurations produced by collapse, the net effect is to transport angular momentum outward. This process inhibits the formation of rotationally-flattened structures (e.g.~accretion disks). This is significant from the standpoint of fragmentation, because rotational flattening raises the density of the gas as it approaches the star, and dense, rotationally-flattened structures are vulnerable to the Toomre instability (see below), in which the self-gravity of a flattened rotating structure overcomes support from thermal pressure and angular momentum, leading to fragmentation and collapse.
Second, magnetic fields provide extra pressure support that prevents regions from collapsing unless their magnetic flux to mass ratios are below a critical value
\begin{equation}
\left(\frac{\Phi}{M}\right)_{\rm crit} = (4\pi^2 G)^{1/2}.
\end{equation}
Regions with masses small enough such that $\Phi/M < (\Phi/M)_{\rm crit}$ are said to be magnetically sub-critical, meaning that they do not have enough mass to overcome magnetic pressure support and collapse. Observations indicate that star-forming cores, over a wide range of size and density scales, tend to have flux to mass ratios that are roughly uniformly distributed from 0 up to $(\Phi/M)_{\rm crit}$ \citep[and references therein]{crutcher12a}. Thus the median core is magnetically supercritical, and is able to collapse despite magnetic support. However, gravity overcomes magnetic support only by a factor of $\sim 2$. If the flux to mass ratio is at all non-uniform, this implies that there may be significant amounts of mass contained in regions that are too magnetized to collapse. Simulations of massive protostellar cores by \citet{hennebelle11a} find that, for realistic levels of magnetization, the number of fragments is reduced by a factor of $\sim 2$ compared to a purely hydrodynamic calculation.
More recently, \citet{commercon11c} and \citet{myers13a} have studied the collapse of massive cores using both radiative feedback and magnetic fields, and the effects amplify one another. At early times, the extra magnetic braking provided by magnetic fields removes angular momentum and channels material to the center faster. This tends to raise the accretion rate and thus the luminosity, making radiative heating more effective. Moreover, radiative and magnetic suppression of fragmentation are complementary in that they operate in different regions. Radiation suppresses fragmentation within $\approx 1000$ AU of a forming star, as found by \citet{krumholz06b} and subsequent radiation-hydrodynamic simulations, but becomes ineffective at larger radii. However, the regions more than $\approx 1000$ AU from a forming star are precisely those that are most likely to be magnetically sub-critical, and thus magnetic fields are able to suppress fragmentation in these regions. Because each mechanism operates where the other is weakest, the combination of the two reduces fragmentation much more efficiently than one might naively guess. Figure \ref{fig:myers13} shows an illustration of this effect: a simulation with magnetic fields and radiation shows almost no fragmentation, while ones with either alone both experience some fragmentation, though still less than in a purely hydrodynamic case. Based on these simulations, \citet{myers13a} conclude that compact, dense regions such as the observed massive cores are likely to form single star systems, rather than fragment strongly.
While this would seem to settle the question of whether fragmentation might limit stellar masses, it is worth noting that there is one final possible fragmentation mechanism that has not yet been evaluated via simulations. \citet{kratter06a} point out that the disks around massive stars are likely to be gravitationally-unstable. Gravitational stability for a pressure-supported disk can be characterized by the \citet{toomre64a} parameter,
\begin{equation}
Q = \frac{\kappa_{\rm ep} c_s}{\pi G \Sigma},
\end{equation}
where $\kappa_{\rm ep}$ is the epicyclic frequency (equal to the angular frequency of the orbit for a Keplerian disk), $c_s$ is the gas sound speed, and $\Sigma$ is the gas surface density. Values of $Q<1$ indicate instability of the disk to axisymmetric perturbations, and non-axisymmetric perturbations begin to appear at $Q\approx 1-2$. Depending on the properties of the disk, these instabilities can run away and cause the disk to fragment into point masses. For a steady disk with dimensionless \citet{shakura73a} viscosity $\alpha$, the accretion rate through the disk is \citep[e.g.,][]{kratter10a}
\begin{equation}
\dot{M} = \frac{3\alpha c_s^3}{GQ} = 1.5\times 10^{-4} \frac{\alpha}{Q}\left(\frac{T}{100\mbox{ K}}\right)^{3/2}\,M_\odot\mbox{ yr}^{-1},
\end{equation}
where the numerical evaluation for the sound speed uses $c_s=\sqrt{k_B T/\mu}$ and the mean particle mass $\mu=3.9\times 10^{-24}$ g, appropriate for fully molecular gas of standard cosmic composition. Local instabilities such as the magnetorotational instability in the disk cannot produce $\alpha > 1$, and the disk cannot be gravitationally stable if $Q<1$, so the accretion rate through a gravitationally-stable disk where angular momentum is transported primarily by local instabilities is strictly limited from above. Accretion rates of $10^{-4}$ ${M_\odot}$ yr$^{-1}$ in such disks are possible only if the temperature is $\approx 100$ K.
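The coefficient $1.5\times 10^{-4}$ in the accretion rate above can be checked directly (again an illustrative sketch in CGS units, using the stated mean particle mass $\mu=3.9\times 10^{-24}$ g):

```python
import math

# CGS constants (standard values, assumed for this illustration).
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16    # Boltzmann constant [erg K^-1]
mu = 3.9e-24       # mean particle mass, fully molecular gas [g]
M_sun = 1.989e33   # solar mass [g]
yr = 3.156e7       # year [s]

def mdot_disk(alpha, Q, T):
    """Steady-disk rate Mdot = 3 alpha c_s^3 / (G Q), in M_sun per yr."""
    c_s = math.sqrt(k_B * T / mu)  # sound speed [cm s^-1]
    return 3.0 * alpha * c_s**3 / (G * Q) * yr / M_sun

print(mdot_disk(1.0, 1.0, 100.0))  # ~1.5e-4, matching the quoted coefficient
```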
This means that there is a race between stellar heating and accretion. Forming a very massive star via accretion in a time less than its main sequence lifetime of a few Myr requires extremely high accretion rates -- $\sim 10^{-3}$ ${M_\odot}$ yr$^{-1}$ for a $>100$ $M_\odot$ star. However, such high accretion rates tend to be more than a disk can process without going unstable and fragmenting, unless radiation from the central star can heat the disk up, allowing it to transport mass more quickly while remaining stable. This process of heating to allow more mass through has a limit, however: once the temperature required to stabilize the disk exceeds the dust sublimation temperature, it will not be easy to heat the disk further, and this may result in an instability so violent that the disk fragments entirely, halting further accretion. \citet{kratter06a} estimate that this could limit stellar masses to $\sim 120$ $M_\odot$. Simulations thus far have not probed this possibility, as no 3D simulations have reached such high stellar masses. However, we caution that \citeauthor{kratter06a}'s scenario did not consider the effects of magnetic fields, which limit the disk radius and help stabilize it against fragmentation, or the effects of molecular opacity in the gas, which can provide coupling to the stellar radiation field and a means to heat the disk at temperatures above the dust sublimation temperature \citep{kuiper13a}.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=0.8]{myers13}}
\caption{
Column densities from simulations of the collapse of massive protostellar cores \citep{myers13a}. The left column (BR) shows simulations including both magnetic fields and radiative feedback. The middle column (MI) uses magnetic fields but no radiation, while the right column (HR) uses radiation but has no magnetic field. Rows show snapshots at uniformly-spaced times, from the initial state to 0.6 core free-fall times. The region shown is the central 5000 AU around the most massive star. Colors show column density, and black circles show stars, with the size of the circle indicating the stellar mass. The initial magnetic field is oriented vertically in this projection. See \citet{myers13a} for more details.
\label{fig:myers13}
}
\end{figure}
\subsection{Radiation Pressure}
The second potential difficulty in forming massive stars via accretion is the radiation pressure problem, first pointed out by \citet{larson71a} and \citet{kahn74a}. The problem can be understood very simply: the inward gravitational force per unit mass exerted by a star of mass $M$ and luminosity $L$ on circumstellar material with specific opacity $\kappa$ located at a distance $r$ is $f_{\rm grav} = G M/r^2$, while the outward radiative force $f_{\rm rad} = \kappa L/(4\pi r^2 c)$. Since the radial dependence is the same, the net force will be inward only if
\begin{equation}
\frac{L}{M} < \frac{4\pi G c}{\kappa} = 2500\left(\frac{\kappa}{5\mbox{ cm}^2\mbox{ g}^{-1}}\right)^{-1} \,\frac{L_\odot}{M_\odot}.
\end{equation}
All stars above $\sim 20$ ${M_\odot}$ have $L/M > 2500$ $L_\odot/M_\odot$, so the question naturally arises: why doesn't radiation pressure expel circumstellar material and prevent stars from growing to masses substantially larger than $\sim 20$ ${M_\odot}$?
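Evaluating the critical $L/M$ with standard CGS constants is a one-line exercise (an illustrative sketch; the few-percent difference from the quoted 2500 simply reflects rounding of the constants):

```python
import math

# CGS constants and solar units (standard values, assumed here).
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10      # speed of light [cm s^-1]
L_sun = 3.846e33  # solar luminosity [erg s^-1]
M_sun = 1.989e33  # solar mass [g]

def critical_L_over_M(kappa):
    """Critical L/M = 4 pi G c / kappa, converted to L_sun / M_sun."""
    return 4.0 * math.pi * G * c / kappa * (M_sun / L_sun)

print(critical_L_over_M(5.0))  # ~2600 L_sun/M_sun for kappa = 5 cm^2 g^-1
```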
The choice of opacity $\kappa$ to use in evaluating this limit is somewhat subtle, because the dominant opacity source for circumstellar material will be dust that is mixed with the gas, which provides a highly non-gray opacity that will vary with position as starlight passes through the dust and is reprocessed by absorption and re-emission. Thus there is no single value of $\kappa$ that can be used in the equation above, and for an accurate result one must first compute the radiation field that results from the interaction of stellar photons with circumstellar dust, and then ask how the resulting radiation force compares to gravity. Nonetheless, detailed one-dimensional calculations by \citet{wolfire86a, wolfire87a}, \citet{preibisch95a}, and \citet{suttner99a}, including effects such as grain growth and drift relative to the gas, confirm that radiation pressure is sufficient to halt accretion onto massive protostars at masses of $\sim 20$ ${M_\odot}$ for Milky Way dust abundances.
However, spherical symmetry is likely to be a very poor assumption for this problem, and a number of authors point out that relaxing it might reduce or eliminate the radiation pressure problem. The central idea behind these models is that the optically thick circumstellar matter around a rapidly-accreting protostar is capable of reshaping the radiation field emitted by the star, and making it non-spherical. If the radiation can be beamed, then the radiation force can be weaker than gravity over a significant solid angle even if the mean radiation force averaged over $4\pi$ sr is stronger than gravity. This beaming could be accomplished by a disk \citep{nakano89a, nakano95a, jijina96a} or an outflow cavity \citep{krumholz05a}, or by any other non-spherical feature that might be found in the flow.
The first radiation-hydrodynamic simulations in two dimensions found that beaming by the disk was indeed effective at channeling radiation away from an accreting star \citep{yorke95a, yorke99a, yorke02a}, but that nevertheless the radiation field was able to reverse the accretion flow and prevent formation of stars larger than $\sim 40$ ${M_\odot}$. The first three-dimensional radiation-hydrodynamic simulation, on the other hand, found that there was no flow reversal, and that mass was able to accrete essentially without limit \citep{krumholz09c}. Figure \ref{fig:krumholz09} shows a snapshot from this simulation. The key physical process uncovered in these simulations was radiation Rayleigh-Taylor instability (RRTI): the configuration of a radiation field attempting to hold up a dense, accreting gas is unstable to the development of fingers of high optical depth material that channel matter down toward the star, while radiation preferentially escapes through low optical depth chimneys that contain little matter. While the instability was first discovered numerically, subsequently \citet{jacquet11a} and \citet{jiang13a} performed analytic stability calculations that allowed them to derive the linear stability condition and linear growth rate for RRTI.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=0.25]{vrender_edge}}
\centerline{\includegraphics[scale=0.25]{vrender_face}}
\caption{
Volume renderings of the density field in the central 4000 AU of a simulation of the formation of a massive binary system including radiation pressure feedback \citep{krumholz09c}. The top image shows the edge-on view of the disk, while the bottom image shows the face-on view. At the time shown in these images, the simulation contains a $41.5+29.2$ $M_\odot$ binary, each with its own disk, and with the two disks embedded in a circumbinary disk. The filamentary structure above and below the disk is created by radiation Rayleigh-Taylor instability, and consists of dense filaments carrying mass onto the disk.
\label{fig:krumholz09}
}
\end{figure}
This picture was somewhat complicated by the work of \citet{kuiper10b, kuiper11a, kuiper12a}, who pointed out that the numerical method used by \citet{krumholz09c}, while it provided a correct treatment of the dust-reprocessed radiation field, did not properly include the radiation force produced by the direct stellar radiation field. When \citeauthor{kuiper12a}~include this effect, they find that the extra acceleration provided to the circumstellar matter is such that gas tends to be ejected from a protostellar core before the RRTI has time to become non-linear. While there is no reason to doubt that the result is correct in the case of an initially-laminar protostellar core as considered by \citeauthor{kuiper12a}, it is unclear how general this result is, since any pre-existing density structure in the core will ``jump-start'' the growth of the instability and allow it to become non-linear in far less time. Such pre-existing density structures are inevitable given that the regions of massive star formation are highly turbulent \citep[e.g.][]{shirley03a}, and even in the absence of turbulence, gravitational instabilities in the accretion disk will tend to produce large density contrasts and possibly binary systems \citep{kratter10a}.\footnote{Although \citeauthor{kuiper12a}'s simulations are three-dimensional, they cannot model either turbulence or disk fragmentation, because the numerical method they use for radiation transport can only handle a single star whose location is fixed at the origin of their spherical grid.}
While there is debate about the role of RRTI, there is little debate about the bottom line: radiation pressure does not actually halt accretion. \citet{kuiper11a} and \citet{kuiper13a} find that, even though radiation pressure is able to eject matter in their simulations, it is unable to eject the accretion disk, and thus that accretion can continue onto stars up to essentially arbitrary masses. Similarly, \citet{cunningham11a}, confirming the hypothesis of \citet{krumholz05a}, show that a protostellar outflow cavity produced by a massive star provides an efficient mechanism for radiation to escape, allowing accretion to continue essentially without any limit due to radiation pressure. Indeed, the presence of an outflow cavity removes the need for RRTI to occur, as it provides a pre-existing low-optical depth chimney through which radiation escapes. Thus, the consensus of modern radiation-hydrodynamic simulations of massive star formation is that radiation pressure does not represent a serious barrier to the formation of stars up to essentially arbitrary masses.
\subsection{Ionization Feedback}
The third potential problem in forming very massive stars is photoionization: galactic molecular clouds generally have escape speeds below 10 km s$^{-1}$ \citep[e.g.,][]{heyer09a}, the sound speed in $\sim 10^4$ K photoionized gas. As a result, if the gas in a star-forming region becomes ionized, the gas pressure may drive a thermal wind that will choke off accretion. This process is thought to be a major factor in limiting the star formation efficiency of giant molecular clouds \citep[e.g.,][]{whitworth79a, matzner02a, krumholz06d}. However, it is much less clear whether photoionization can limit the formation of individual massive stars. The key argument on this point was first made by \citet{walmsley95a}, who noted that an accretion flow onto a massive star can sharply limit the radial extent of an H~\textsc{ii} region. This is simply a matter of the ionizing photon budget: the higher the mass inflow rate, the higher the density of matter around the star, and thus the higher the recombination rate and the smaller the Str\"omgren radius. If the radius of the ionized region is small enough that the escape speed from its outer edge is $>10$ km s$^{-1}$, then photoionized gas will not be able to flow away in a wind. For an accretion flow in free-fall onto a star, \citet{walmsley95a} showed that the escape speed from the edge of the ionized region will exceed the ionized gas sound speed $c_i$ if the accretion rate satisfies
\begin{equation}
\dot{M}_* > \left[\frac{8\pi \mu_{\rm H}^2 G M_* S}{2.2\alpha_B \ln(v_{\rm esc,*}/c_i)}\right]^{1/2} = 4\times 10^{-5} \left(\frac{M_*}{100\,M_\odot}\right)^{1/2} \left(\frac{S}{10^{49}\mbox{ s}^{-1}}\right)^{1/2} \,M_\odot\mbox{ yr}^{-1},
\end{equation}
where $M_*$ is the stellar mass, $S$ is the star's ionizing luminosity (photons per unit time), $\mu_{\rm H}=2.34\times 10^{-24}$ g is the mean mass per H nucleus, $\alpha_B\approx 2.6\times 10^{-13}$ cm$^3$ s$^{-1}$ is the case B recombination coefficient, and $v_{\rm esc,*}$ is the escape speed from the stellar surface. The factor of $2.2$ in the denominator arises from the assumption that He is singly ionized. The numerical evaluation uses $v_{\rm esc,*}=1000$ km s$^{-1}$ and $c_i=10$ km s$^{-1}$, but the numerical result is only logarithmically-sensitive to these parameters. Thus an accretion rate of $\sim 10^{-4}$ ${M_\odot}$ yr$^{-1}$ is sufficient to allow continuing accretion onto even an early O star. Given the dense, compact environments in which massive stars appear to form, such high accretion rates are entirely expected \citep{mckee03a}. \citet{keto03a} extended \citeauthor{walmsley95a}'s result by deriving a full solution for an inflow plus ionization front in spherical symmetry, and reached the same qualitative conclusion. \citet{keto06a}, \citet{klaassen07a}, and \citet{keto08a} provide direct observational evidence for accretion in photoionized regions.
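The numerical evaluation above is straightforward to verify directly. The following is a minimal sketch in cgs units using the fiducial values quoted in the text (the physical constants are standard values, not taken from the cited references):

```python
import math

# Physical constants and fiducial values (cgs), as quoted in the text
mu_H   = 2.34e-24          # mean mass per H nucleus [g]
G      = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
alphaB = 2.6e-13           # case B recombination coefficient [cm^3 s^-1]
Msun   = 1.989e33          # solar mass [g]
yr     = 3.156e7           # year [s]

Mstar  = 100 * Msun        # stellar mass
S      = 1e49              # ionizing photon emission rate [s^-1]
v_esc  = 1000e5            # escape speed from stellar surface [cm s^-1]
c_i    = 10e5              # ionized-gas sound speed [cm s^-1]

# Critical accretion rate for the ionized region to remain trapped
mdot = math.sqrt(8 * math.pi * mu_H**2 * G * Mstar * S /
                 (2.2 * alphaB * math.log(v_esc / c_i)))  # [g s^-1]
mdot_msun_yr = mdot * yr / Msun
print(f"critical accretion rate = {mdot_msun_yr:.1e} Msun/yr")  # ~4e-5
```

The printed value reproduces the $4\times 10^{-5}$ $M_\odot$ yr$^{-1}$ normalization in the equation above.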
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=0.5]{dale05}}
\caption{
Column density from a simulation of the formation of a massive star cluster including photoionization feedback \citep{dale05a}. The central star begins the simulation with a mass of $\approx 30$ $M_\odot$, but continues to grow over the course of the simulation, reaching $>100$ $M_\odot$ by the end. White dots are stars.
\label{fig:dale}
}
\end{figure}
This argument makes clear that whether photoionization can limit accretion onto massive stars depends critically on the interplay between the initial conditions, which determine the accretion rate, and stellar evolution, which determines the ionizing luminosity. If the accretion rate drops low enough, and the ionizing flux is high enough, then the ionized region will extend out to the point where photoionized gas can escape and accretion will be choked off. The geometry of the flow matters as well. \citet{keto07a} considered rotating infall, and showed that this may result in a configuration where the ionized region blows out in the polar direction, but accretion continues uninhibited through a denser equatorial disk that self-shields against the ionizing photons. In three dimensions turbulent structure may play an analogous role. \citet{dale05a} and \citet{peters10a, peters11a} have simulated the formation of massive stars and star clusters including photoionization feedback, and they find that photoionization is generally unable to disrupt accretion flows. In the simulations, accretion tends to be highly aspherical, proceeding through disks and filaments, as illustrated in Figure \ref{fig:dale}. Because these structures are dense, they have very high recombination rates and thus are resistant to being photoionized. The structure that tends to result in these simulations is that there are low-density ionized regions where material is escaping, but that the majority of the mass is contained in dense filaments where it continues to accrete. As a result, in \citeauthor{dale05a}'s simulations accretion is able to continue to masses of several hundred $M_\odot$, while the most massive star in \citeauthor{peters10a}'s simulations reaches a mass of $\sim 70$ $M_\odot$ without photoionization halting accretion.
There has been considerably more work on whether ionization can halt accretion in the context of the formation of the first stars. \citet{mckee08a} developed an analytic model for several forms of feedback, and argued that photoionizing radiation will blow out the polar regions of a rotating accretion flow around an accreting star once its mass reaches $\sim 50-100$ $M_\odot$, and that it will thereafter go on to photoevaporate the disk. This process will halt accretion at a mass of $\sim 150$ $M_\odot$. \citet{hosokawa11a} conducted 2D simulations and obtained results qualitatively consistent with \citeauthor{mckee08a}'s model, but with an even lower limiting mass of $\sim 40-50$ $M_\odot$. Similar limiting masses were obtained from three-dimensional simulations by \citet{stacy12a} and \citet{susa13a}, and in a 2D simulation of metal-free star formation in gas that was externally ionized before collapsing (so-called population III.2 star formation), \citet{hosokawa12a} found an even lower limiting mass of 20 $M_\odot$.
The fairly low limiting masses found in the simulations of primordial star formation appear to be in some tension with the results of the numerical simulations of present-day star formation. At first one might think that the presence or absence of dust opacity provides an obvious explanation for the difference, but it is not clear if this is the case. Even at Solar metallicity, most ionizing photons are absorbed by hydrogen atoms and not dust grains (see the Appendix of \citealt{krumholz09d} for a discussion of why this is), so dust is responsible for removing only a small fraction of ionizing photons. Similarly, primordial H~\textsc{ii} regions have somewhat higher temperatures (due to the lack of metal line cooling) and metal-free stars have somewhat higher ionizing luminosities (due to the lack of metal opacity in the stellar atmosphere), but these differences are modest, and it is not obvious that they can account for the discrepancy.
A more promising explanation has to do with the initial conditions, which determine the accretion rate and geometry of the inflow. In an isolated star-forming core, which is the initial condition employed in the primordial calculations, once the photoionized region escapes from the vicinity of the star it can choke off further accretion onto the disk, leaving the disk subject to photoevaporation. However, this does not appear to happen in a flow that is continuously fed by large amounts of mass supplied from larger, $\sim 1$ pc scales, as occurs in the present-day star formation simulations. This mass supply into the filaments and disks appears to shield them against photoevaporation. If the initial conditions are the key difference, then for the case of present-day star formation this suggests that the mass limit imposed by photoionization is likely to depend on the large-scale environment, though exactly which environmental properties are important remains uncertain.
Finally, as a caveat, it is important to note that the treatments of ionizing radiative transfer used in the codes for simulating both present-day and primordial star formation are based on a simple ray-trace using the ``on-the-spot" approximation. In this approximation, one treats ionizing photons produced by recombinations in the ionized gas as having a mean free path of zero, so that photons produced by a recombination to the ground state are re-absorbed on the spot rather than propagating a finite distance. Thus the diffuse radiation field produced by recombinations is ignored, and shadowing is too perfect. This is potentially problematic for the treatment of accretion disks, as the photoevaporation of disks is probably dominated by the diffuse photons produced in the photoionized atmosphere above the disk, rather than by direct stellar radiation \citep{hollenbach94a, mckee08a}. It is therefore unclear if the numerical results are reliable. The question of whether photoionization might limit stellar masses thus remains an only partially-solved problem.
\subsection{Stellar Winds}
A final potential challenge for the formation of massive stars by accretion has received far less theoretical attention: stellar winds. Once the surface temperatures of stars exceed $\sim 2.5\times 10^4$ K, they begin to accelerate fast, radiatively-driven winds \citep{leitherer92a, vink00a, vink01a}. Zero-age main sequence stars reach this temperature at a mass of $\sim 40$ $M_\odot$, and stars of this mass have such short Kelvin-Helmholtz timescales that, even if they are rapidly accreting, their radii and surface temperatures during formation are likely to be close to their ZAMS values \citep{hosokawa09a}. The momentum carried by these winds is about half that of the stellar radiation field \citep{kudritzki99a, richer00a, repolust04a}, and so if the direct stellar radiation field cannot stop accretion then the momentum carried by stellar winds will not either.
However, winds might yet be important, because the wind launch velocity is quite large, $\sim 1000$ km s$^{-1}$. As a result, when the winds shock against the dense accretion flow, their post-shock temperature can be $>10^7$ K, well past the peak of the cooling curve \citep{castor75a, weaver77a}, and as a result the gas will stay hot rather than cooling radiatively. Should it become trapped, this hot gas could exert a pressure that is far greater than what would be suggested by its launch momentum -- in effect, it could convert from a momentum-driven flow to an energy-driven one \citep[cf.][]{dekel13a}. If this were to happen, it is possible that the stellar wind gas might be able to interfere with accretion.
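Both of these order-of-magnitude claims are easy to check with back-of-the-envelope arithmetic. In the sketch below, the strong-shock temperature follows directly from the quoted wind speed; the wind-to-radiation momentum ratio, by contrast, uses an illustrative mass-loss rate and luminosity for an early O star that are assumptions of this example, not values from the cited references:

```python
# Post-shock temperature of a ~1000 km/s wind shocking against the
# accretion flow: T = (3/16) mu m_H v^2 / k_B for a strong adiabatic shock.
k_B  = 1.381e-16   # Boltzmann constant [erg K^-1]
m_H  = 1.673e-24   # hydrogen mass [g]
mu   = 0.6         # mean molecular weight of ionized gas (assumed)
v_w  = 1000e5      # wind speed [cm s^-1]

T_shock = 3.0 / 16.0 * mu * m_H * v_w**2 / k_B
print(f"post-shock temperature ~ {T_shock:.1e} K")   # > 1e7 K

# Wind vs. radiation momentum flux, Mdot_w v_w vs. L/c, for an
# illustrative early O star (Mdot_w and L are assumed, typical values)
Msun, yr = 1.989e33, 3.156e7
Lsun, c  = 3.846e33, 2.998e10
mdot_w = 1e-5 * Msun / yr    # assumed wind mass-loss rate [g s^-1]
L      = 1e6 * Lsun          # assumed luminosity [erg s^-1]
ratio  = (mdot_w * v_w) / (L / c)
print(f"wind/radiation momentum ratio ~ {ratio:.2f}")  # of order one-half
```

The shock temperature indeed lands above $10^7$ K, past the peak of the cooling curve, and for these illustrative stellar parameters the wind momentum flux is roughly half of $L/c$, consistent with the observationally-calibrated figure quoted above.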
There has been a great deal of work on the interaction of post-shock stellar wind gas with the ISM on the scale of star clusters \citep[e.g.,][]{tenorio-tagle07a, dale08a, rogers13a}. This work suggests that the wind gas tends, much like radiation, to leak out through openings in the surrounding dense gas rather than becoming trapped and building up a large pressure. Indeed, on the cluster scale observations appear to confirm that the pressure exerted by the hot gas is subdominant \citep{lopez11a}. However, there is no comparable work on the scale of individual stars, and it is therefore possible that the situation there could be different. Moreover, even when the wind gas does escape on cluster scales, as it flows past the colder, denser material it tends to entrain and carry off some of it. Again, the question of whether this might happen to the accretion flows around individual protostars has yet to be addressed. Given the lack of theoretical or observational attention, the best that can be said for now is that, if the interaction between stellar winds and dense gas on small scales resembles that seen on larger scales, stellar winds are unlikely to set significant limits on the masses to which stars can grow by accretion.
\section{The Formation of Very Massive Stars by Collision}
\label{sec:collision}
The discussion in the previous section indicates that there is no strong argument against the idea that very massive stars form via the same accretion mechanisms that give rise to stars of lower masses. However, it is also possible for very massive stars to form through an entirely different channel: collisions between lower mass stars. The central challenge for forming massive stars via collisions is the very small cross-sectional area of stars compared to typical interstellar separations, and the relatively short times allowed for collisions by the lifetimes of massive stars. Very massive stars are found routinely in clusters with central densities $\sim 10^4$ pc$^{-3}$ \citep[e.g.][]{hillenbrand98a}, and the highest observed central densities in young clusters are $\sim 10^5$ pc$^{-3}$ \citep[their Figure 9]{portegies-zwart10a}, with the possible exception of R136 \citep{selman13a}. If gravitational focusing is significant in enhancing collision rates (usually the case for young clusters), the mean time between collisions in a cluster consisting of stars of number density $n$ and velocity dispersion $\sigma$, each with mass $m$ and radius $r$, is \citep{binney87a}
\begin{equation}
t_{\rm coll} = 7.1 n_4^{-2} \sigma_1 r_0^{-1} m_0^{-1} \mbox{ Myr},
\end{equation}
where $n_4$ is $n$ in units of $10^4$ stars pc$^{-3}$, $\sigma_1$ is $\sigma$ in units of 10 km s$^{-1}$, $r_0 = r/R_\odot$, and $m_0 = m/M_\odot$. Thus under observed cluster conditions, we expect $<1$ collision between 1 $M_\odot$ stars to occur within the $\sim 4$ Myr lifetime of a massive star. Collision rates for stars more massive than the mean require a bit more care to calculate, but even under the most optimistic assumptions, production of very massive stars via collisions requires that clusters reach stellar densities much higher than the $\sim 10^4$ pc$^{-3}$ seen in young clusters. This dense phase must be short-lived, since it is not observed. Models for the production of massive stars via collision therefore consist largely of proposals for how to produce such a short-lived, very dense phase. In this section I examine the collisional formation model. In Sections \ref{ssec:gasaccretion} and \ref{ssec:nbody} I describe two possible scenarios by which collisions could occur, and I discuss the mechanics of the collisions themselves, and the role of stellar evolution in mediating collisions, in Section \ref{ssec:stellarevol}.
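The scaling can be evaluated directly at observed cluster conditions (a minimal sketch; the normalization is simply the 7.1 Myr coefficient from the equation above):

```python
def t_coll_myr(n4=1.0, sigma1=1.0, r0=1.0, m0=1.0):
    """Mean time between collisions [Myr] from the scaling in the text:
    n4 = n / (1e4 stars pc^-3), sigma1 = sigma / (10 km/s),
    r0 = r / R_sun, m0 = m / M_sun."""
    return 7.1 * n4**-2 * sigma1 / (r0 * m0)

# At the routinely observed central density of 1e4 stars pc^-3, the
# collision time for solar-type stars exceeds the ~4 Myr lifetime of a
# massive star, so <1 collision is expected:
print(t_coll_myr(n4=1.0))   # 7.1 Myr
# The steep density dependence means a modestly lower density makes
# collisions hopeless: at n = 1e3 stars pc^-3,
print(t_coll_myr(n4=0.1))   # 710 Myr
```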
\subsection{Gas Accretion-Driven Collision Models}
\label{ssec:gasaccretion}
The first class of proposed mechanisms to raise the density high enough to allow collisional growth consists of processes that occur during the formation of a star cluster when it is still gas-rich. In a gas-rich cluster, stars can accrete gas, and this process is dissipative: it reduces the total gas plus stellar kinetic energy of the system, with the lost energy going into radiation from accretion shocks on the surfaces of protostars, and from Mach cones created by supersonic motion of stars through the gas. To see why this should lead to an increase in density, it is helpful to invoke the virial theorem. For a system where gravity, thermal pressure, and ram pressure are the only significant forces, the Lagrangian virial theorem states that \citep{chandrasekhar53a, mestel56a}
\begin{equation}
\label{eq:virial}
\frac{1}{2}\ddot{I} = 2\mathcal{T} + \mathcal{W},
\end{equation}
where
\begin{eqnarray}
I & =& \int r^2\, dm \\
\mathcal{T} & = & \int \left(\frac{3}{2}P + \frac{1}{2}\rho v^2\right)\, dV\\
\mathcal{W} & = & -\int \rho \mathbf{r}\cdot\nabla \Phi\, dV
\end{eqnarray}
are the moment of inertia, the total kinetic plus thermal energy, and the gravitational binding energy, respectively\footnote{The functional form of $\mathcal{W}$ is independent of whether or not there is an external gravitational field, but one can only identify the quantity $\mathcal{W}$ as the gravitational self-energy if the field is entirely due to self-gravity, with no external contribution.}. The quantity $\Phi$ is the gravitational potential. If there are significant forces on the surface of the region, or significant magnetic forces, additional terms will be present, but for the moment we will ignore them.
The process of shock dissipation reduces $\mathcal{T}$ while leaving $\mathcal{W}$ unchanged, so the right-hand side becomes negative, and, on average, the system will tend to accelerate inward to smaller radii. This infall converts gravitational binding energy to kinetic energy, so both $\mathcal{T}$ and $-\mathcal{W}$ rise by the same amount. Because of the factor of $2$ in front of $\mathcal{T}$ in equation (\ref{eq:virial}), this tends to push the right-hand side back toward zero: the system is re-virializing, but at a smaller radius, higher density, and larger kinetic and (in absolute value) binding energy. However, this new equilibrium will last only as long as shocks do not keep decreasing $\mathcal{T}$. If shocks continue to happen, this will drive a continuous decrease in radius, and a continuous rise in density of both gas and stars. \citet{bonnell98a} proposed the first version of this model, and argued that it could drive stellar densities to $\sim 10^8$ pc$^{-3}$, at which point collisions would become common and massive stars could build up in this manner. The required density can be lowered significantly if all massive stars are in primordial hard binaries \citep{bonnell05a}, but even for such a configuration a significant rise in stellar density compared to observed conditions is required.
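This contraction can be made quantitative with a schematic bit of energy bookkeeping (a sketch, assuming the system contracts homologously so that $\mathcal{W} = -aGM^2/R$ for some fixed structural constant $a$). In virial equilibrium, $2\mathcal{T}+\mathcal{W}=0$, so the total energy is $E = \mathcal{T}+\mathcal{W} = \mathcal{W}/2 = -aGM^2/2R$. If shocks radiate away an energy $\Delta E > 0$ and the system re-virializes at a new radius $R'$, then
\begin{equation}
-\frac{aGM^2}{2R'} = -\frac{aGM^2}{2R} - \Delta E
\qquad\Longrightarrow\qquad
\frac{R'}{R} = \left(1 + \frac{2R\,\Delta E}{aGM^2}\right)^{-1} < 1,
\end{equation}
so continued dissipation drives continued contraction, with the mean density of gas and stars rising as $(R/R')^3$.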
\citeauthor{bonnell98a}~suggested that contraction would halt only when gas was removed by feedback from the forming massive stars. This halts contraction because, once the gas is removed, there is no longer a dissipation mechanism available to reduce $\mathcal{T}$. However, \citet{clarke08a} subsequently realized that at sufficiently high density two-body relaxation would become faster than dissipation, and this would halt further shrinkage. In terms of the virial theorem, the increase in $-\mathcal{W}$ required to increase $\mathcal{T}$ and balance the dissipation starts to come from stars forming tight binaries rather than from overall shrinkage of the system. The maximum density that can be reached therefore depends on the total cluster mass, in such a manner as to prevent collisions from becoming significant in clusters substantially smaller than $\sim 10^4$ $M_\odot$. It is important to note that this excludes the Orion Nebula Cluster, which contains a star of $\approx 38$ $M_\odot$ \citep{kraus09b}, suggesting that stars of at least this mass can form via non-collisional processes.
These conclusions were based on analytic models, but more recently \citet{moeckel11a} and \citet{baumgardt11a} conducted N-body simulations including analytic prescriptions for the effects of gas accretion.\footnote{These authors did not include the effect of gas drag due to Mach cones, which for Bondi-Hoyle accretion flows is actually a factor of several larger than the change in stellar momentum due to accretion \citep{ruffert94b}, but this is probably not the most serious limitation of the simulations.} In these models, the gas is treated as a fixed potential that is reduced as the stars gain mass, eventually disappearing entirely when a prescribed amount has been accreted; this sets the limit on the duration of the gas-dominated phase. Figure \ref{fig:moeckel} shows an example output from one of these simulations. As anticipated by \citet{clarke08a}, in these models the stars sink to the center until they form a stellar-dominated region in which two-body relaxation inhibits further contraction, though these regions can also undergo core collapse (see the next section). They both find that, as a result of this limitation, stellar collisions during the gas-dominated phase are negligible unless the initial conditions are already very compact or massive, with half-mass radii of $\sim 0.1$ pc or less and/or masses of $\sim 10^4$ $M_\odot$ or more.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=1]{moeckel11a}}
\centerline{\includegraphics[scale=1.5]{moeckel11b}}
\caption{
Example results from the N-body plus accretion simulations of \citet{moeckel11a}. The top set of four panels shows the radially-averaged stellar density profile as a function of time in the simulations (black lines), together with the mass profile corresponding to the imposed gas potential (gray lines). The bottom panel shows the growth history of some of the most massive stars in the simulations. Points indicate stellar masses and the times when those stars first appear, and the convergence of two lines indicates a merger between two stars that yields a more massive star. Black points indicate stars that survive to the end of the simulation, while gray points indicate stars that merge before the end of the simulation.
\label{fig:moeckel}
}
\end{figure}
The requirement for very high initial densities creates significant tension with observations. \citet{moeckel11a} find that even the Arches cluster is insufficiently dense to have produced stellar collisions up to this point, despite the fact that it contains numerous very massive stars. Similarly, \citet{baumgardt11a} find that, once the gas potential is removed and clusters re-virialize, those that began their evolution at densities high enough to induce significant numbers of collisions end up far too compact in comparison to observed open clusters, including those containing very massive stars. As a result of these findings, both sets of authors tentatively conclude that stellar collisions during the gas-dominated phase cannot be the dominant route to the formation of very high mass stars, though they cannot rule out the possibility that such collisions occur in rare circumstances.
Before relying on these conclusions too heavily, it is important to understand the limitations of these calculations. Undoubtedly the largest one is the simple prescription used to treat the gas. In these models, the shape of the gas potential (though not its depth) is fixed, the accretion rates onto stars are fixed and independent of stellar mass or position, and the final star formation efficiency is also fixed. Obviously none of these assumptions are fully realistic. In particular, depending on the effectiveness of stellar feedback, the gas potential might either shrink and promote increases in stellar density, or the gas potential might vary violently in time as gas is pushed around by stellar feedback, pumping energy into the stars and preventing contraction -- indeed, the latter is seen to occur in at least some simulations that do treat the gas \citep{li06b, nakamura07a, wang10a}. It is unclear how the conclusions might change if these phenomena were treated more realistically.
\subsection{Gas-Free Collision Models}
\label{ssec:nbody}
The second class of models for inducing growth of very massive stars via collisions takes place in the context of gas-free clusters. These mechanisms, and their potential role in young massive clusters, were recently reviewed by \citet{portegies-zwart10a}, and I refer readers there for further details. The advantage of this approach compared to the gas-driven one is that the time available for collisions is longer, but the disadvantage is the lack of gas drag as a mechanism for raising the density.
Clusters of equal-mass stars are unstable to spontaneous segregation into a contracting core and an expanding envelope, in which the negative heat capacity of the system drives a continuous transfer of energy out of the core and thus ever-higher densities \citep{lynden-bell68a}. In a cluster containing a spectrum of masses, contraction of the core is further enhanced by the tendency of the stars to mass-segregate, with the core consisting of more massive, dynamically-cool stars, and the envelope consisting of low-mass, dynamically-hot ones \citep{spitzer69a, gurkan04a}. While there is no doubt that these processes operate, it is uncertain whether they are fast enough to produce collisions within the $\sim 4$ Myr lifetime of the most massive stars. \citet{portegies-zwart99a} conclude based on N-body simulations that collisions will occur before the massive stars die only if the central density starts at $\sim 10^7$ stars pc$^{-3}$ or more. In this case, the mergers themselves are dissipative and will trigger further core contraction, leading to runaway formation of a single very massive object. As in the case of gas-driven collisions, the existence of a large population of primordial hard binaries can somewhat reduce the density required to initiate this cascade \citep{portegies-zwart02a}. Even so, the initial densities required in the simplest gas-free collision models would preclude the possibility of very massive stars forming by collisions except perhaps in R136. Models in which a significant fraction of very massive stars form via collision therefore generally posit a set of initial conditions that significantly increases the collision rate.
One way to shorten the time required for core collapse and the onset of collisions is to consider a cluster with primordial mass segregation, meaning that the cluster is mass-segregated even before the gas-free evolution begins \citep[e.g.][]{ardi08a, goswami12a, banerjee12a, banerjee12b}. Such a starting configuration reduces the time required for core collapse to begin because it provides both a higher density and a higher mean stellar mass (and thus a lower relaxation time) in the cluster center. Depending on the degree of mass segregation, the reduction in time to the onset of core collapse and collisions can be $\sim 1-2$ Myr, a non-trivial fraction of the lifetime of a very massive star, and simulations using sufficiently mass-segregated initial conditions generally find that collisions become common before massive stars end their lives.
The extent to which star clusters actually are primordially mass-segregated is unclear. Observations generally show at least some degree of mass segregation in present-day clusters, but the amount varies widely. At the low-mass end of clusters containing massive stars, in the Orion Nebula Cluster the Trapezium stars are all at the cluster center, but there is no observed mass segregation for any stars except these \citep{hillenbrand98a, huff06a}. In NGC 3603 \citep{pang13a} and R136 \citep{andersen09a}, the cluster is segregated throughout so that the mass function is flatter at small radii, but more massive stars are more segregated than less massive ones. However, all of these clusters are $\sim 1-3$ Myr old, so it is entirely possible that the segregation we see now is a product of dynamical evolution during this time, not primordial segregation -- indeed, \citet{pang13a} argue that the segregation they observe in NGC 3603 is more consistent with dynamical evolution from a weakly-segregated or unsegregated initial state than with primordial segregation. Unfortunately answering this question fully would require that observations probe the gas-enshrouded phase, which is possible only in the infrared, where low resolution creates severe difficulties with confusion. Indeed, confusion is a serious worry for measurements of mass segregation even in optically-revealed clusters \citep{ascenso09a}.
Another way to accelerate the dynamical evolution of star clusters is to begin from unrelaxed initial conditions. Both observations \citep{furesz08a, tobin09a} and simulations \citep{offner09b} of star clusters that are still gas-embedded show that the stars are subvirial with respect to the gas, and such cold conditions lead to more rapid dynamical evolution than virialized initial conditions \citep{allison09a}. Larger star clusters may also be assembled via the mergers of several smaller clusters within the potential provided by a massive gas cloud \citep{bonnell03a, mcmillan07a, fujii12a}. These smaller clusters, since they have smaller numbers of stars, also have smaller time-scales for core collapse. \citet{allison09a} find that substructured initial conditions accelerate mass segregation, but it is unclear whether they do so enough to accelerate collisions. \citet{fujii13a} find that the extent to which the formation of a large cluster out of multiple star clusters influences collisions depends on the ratio of the assembly time to the core collapse time of the initial subclusters. If the subclusters undergo core collapse before merging, then they may have a few internal collisions, but there are no collisions in the merged cluster, and collisional growth of stars is negligible overall. On the other hand, if core collapse does not occur in the subclusters before they merge, the evolution is similar to that of a cluster that formed as a single entity.
In summary, collisions during the gas-free phase are unlikely to contribute significantly to the growth of very massive stars if star clusters are born virialized and non-segregated, but in reality neither of these assumptions is likely to be exactly true. The viability of collisional formation models then turns sensitively on the extent to which these assumptions are violated, and this question is unfortunately poorly constrained by observations. Hydrodynamic simulations of the formation of massive star clusters that include the gas-dominated phase may be helpful in addressing this question, but to be credible these simulations will need to include feedback processes such as stellar winds, photoionization, and radiation pressure that are presently omitted from most models.
\subsection{Stellar Evolution and Massive Star Mergers}
\label{ssec:stellarevol}
One important subtlety for models of the growth of massive stars via mergers is that the outcome depends not just on N-body processes, but also on the physics of stellar collisions, and on the structure and subsequent evolution of stellar merger products. Both questions have been the subject of considerable study in the context of mergers between low-mass stars leading to the production of blue stragglers, but only a few authors have conducted similar simulations for mergers involving massive stars. Mergers involving massive stars (particularly very massive ones) may be substantially different than those involving low-mass stars because of the importance of radiation pressure for massive stars. As stellar mass increases, the increasing dominance of radiation pressure brings the structure close to that of an $n=3$ polytrope, which is very weakly bound. Moreover, radiative forces may be non-negligible during the collision itself. For example, just as radiation pressure may be capable of inhibiting accretion, it may be capable of ejecting material that is flung off stellar surfaces during a collision, thereby increasing mass loss during the collision.
One quantity of interest from stellar merger simulations is the amount of prompt mass loss during the collision itself. Models for collisional growth generally assume that the mass loss is negligible, thus maximizing the collisional growth rate. \citet{freitag05a}, \citet{suzuki07a}, and \citet{glebbeek13a} all find in their simulations that losses are indeed small, with at most $\sim 10\%$ of the initial mass being ejected for realistic collision velocities. In three-body mergers produced when an intruder enters a tight binary system, the loss can be as large as $\sim 25\%$ \citep{gaburov10a}. However, a very important limitation of these simulations is that they do not include any radiative transfer, and treat radiation pressure as simply an extra term in the equation of state, with the radiation pressure determined by the matter temperature, which in turn is determined by hydrodynamic considerations. This is likely to be a very poor approximation for the material that is flung outward from a collision, where illumination from the central merged object will dominate the thermodynamics, as it does in the case of accretion onto massive stars. In the approximation used by the existing merger simulations, it is impossible for radiation pressure to eject matter, and thus the $\sim 5-10\%$ mass loss found in the simulations simply represents the mass of material that is raised to escape velocities during the collision itself. This figure should therefore be thought of as representing a lower limit. There is a clear need to reinvestigate this problem using a true radiation-hydrodynamics code. If the mass loss has been underestimated, collisional growth will be harder than is currently supposed.
A second quantity of interest from merger simulations is the radius of the merger product. When stars merge, shocks during the collision raise the entropy of the stellar material, so that when hydrostatic equilibrium is re-established a few days after the merger, the resulting star will initially be very bloated compared to main sequence stars of the same mass, and will gradually shrink over a Kelvin-Helmholtz timescale \citep{dale06a, suzuki07a}. Building up very massive stars via collisions likely requires multiple mergers, and at the very high densities required, the interval between mergers may be smaller than the KH timescale, so that the growing stars will have enlarged radii. Whether this will enhance or reduce the rate of collisional growth is unclear. \citet{suzuki07a} point out that the enhanced radii of the merger products make them bigger targets that are more likely to collide with other stars. On the other hand, \citet{dale06a} note that the envelopes of the post-merger stars are even more weakly bound than those of massive main sequence stars, and as a result such collisions may actually erode the envelope rather than add to it, ultimately limiting collisional growth. Which effect dominates is unclear, as no merger simulations involving such swollen stars have been reported in the literature.
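The claim that the interval between mergers may be shorter than the Kelvin-Helmholtz timescale can be illustrated with a back-of-the-envelope estimate, $t_{\rm KH} \sim GM^2/(RL)$. The stellar parameters in the sketch below are illustrative assumptions for a bloated $\sim 100$ $M_\odot$ merger product, not values taken from the simulations cited above:

```python
# Rough Kelvin-Helmholtz timescale t_KH ~ G M^2 / (R L) for a massive
# merger product. M, R, and L below are illustrative assumptions for a
# swollen ~100 Msun star, not values from any specific model.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30    # solar mass, kg
RSUN = 6.957e8     # solar radius, m
LSUN = 3.828e26    # solar luminosity, W
YR = 3.156e7       # year, s

M = 100 * MSUN     # assumed merger-product mass
R = 30 * RSUN      # assumed (bloated) radius
L = 1e6 * LSUN     # assumed luminosity

# ~1e4 yr: if mergers are spaced more closely than this, the star has
# not yet contracted back to the main sequence when the next one occurs
t_kh_yr = G * M**2 / (R * L) / YR
```

For these assumed parameters the estimate is of order $10^4$ yr, short compared to cluster evolution timescales but potentially long compared to the merger interval in the densest environments.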
A final consideration for collisional growth models in the gas-free phase, where the timescales involved may be several Myr, is mass loss via stellar winds. At masses below $\sim 100$ $M_\odot$, wind mass loss rates are considerable, but are unlikely to be able to counteract the effects of collisional growth if the density is high enough for runaway merging to begin. However, little is known about wind mass loss rates at still higher masses, and there are good empirical arguments that they might be orders of magnitude larger \citep{belkus07a}. N-body simulations using these enhanced winds find that they remove mass from stars faster than collisions can add it, yielding only very transient growth to large masses, followed by rapid shrinkage back to $\sim 100$ $M_\odot$ \citep{yungelson08a, vanbeveren09a, glebbeek09a}. Figure \ref{fig:vanbeveren} provides an example. Moreover, the winds might remove mass so efficiently that they reduce the gravitational potential energy of the system fast enough to offset the loss of kinetic energy that occurs during mergers, halting the collisional cascade completely.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=1]{vanbeveren09}}
\caption{
Results from three simulations of massive stellar mergers driven by gas-free core collapse by \citet{vanbeveren09a}, reprinted by permission. The blue line shows a calculation using fairly modest wind mass losses, similar to those adopted by \citet{portegies-zwart99a}. The black line shows a calculation with identical initial conditions but using a wind prescription taken from \citet{belkus07a} for Solar metallicity stars, and the red line is the same but using a metallicity of 5\% of Solar.
\label{fig:vanbeveren}
}
\end{figure}
\section{Observational Consequences and Tests}
\label{sec:discrimination}
Having reviewed the various models for the origins of very massive stars, I now turn to the question of their predictions for observable quantities, and how these might be used to test the models. One can roughly divide these predictions into those that apply on the scale of star clusters, and those that apply on the scale of individual star systems.
\subsection{The Shape of the Stellar Mass Function}
On the cluster scale, one obvious difference between collisional and accretion-based models of massive star formation is their predictions for the form of the stellar mass function at the massive end -- note that I refer here to the present-day mass function (PDMF) rather than the initial mass function (IMF), since in gas-free collision models very massive stars are absent in the IMF and only appear later due to collisions. As discussed above, in the case where massive stars form by the same accretion processes that produce low-mass stars, one naturally expects that very massive stars should simply be a smooth continuation of the Salpeter mass function seen at lower masses. The situation is very different for collisional models, where very massive stars form via a fundamentally different process than low mass ones. Not surprisingly, this different formation mechanism gives rise to a feature in the stellar mass function.
For the gas-driven collision case, both \citet{moeckel11a} and \citet{baumgardt11a} find that the typical outcome of collisions is one or two objects whose masses are much greater than those of any other object, and a corresponding depletion of objects slightly less massive than the dominant one or two. Figure \ref{fig:baumgardt11} shows an example result from \citet{baumgardt11a}. As the plot shows, collisions that yield very massive stars of several hundred $M_\odot$ tend to produce an overall mass function in which the range from $\sim 10-100$ $M_\odot$ is actually significantly under-populated relative to the Salpeter slope found at lower masses, while the one or two most massive objects that are formed by collision represent a significant over-population relative to Salpeter. Unfortunately the authors of models in which collisional growth occurs during the gas-free phase have generally not reported the full mass functions produced in their simulations, but given that the mechanism for assembling the very massive stars is essentially the same as in the gas-driven models -- runaway collisions that agglomerate many massive stars into one or two supermassive ones -- it seems likely that these models would predict a similar functional form for the mass function.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=1]{baumgardt11}}
\caption{
Mass functions of stars in an N-body simulation of gas-driven stellar collisions by \citet{baumgardt11a}. The histograms are mass functions obtained 10 Myr after the beginning of the simulation, and the number of stars and initial half-mass radius used in each of the simulations are as indicated in the legend. The straight dashed line is the Salpeter mass function. Note that the simulations all predict a turndown in the mass function relative to Salpeter at masses from $\sim 10-100$ $M_\odot$.
\label{fig:baumgardt11}
}
\end{figure}
At present there is no observational evidence for mass functions of this form. \citet{massey98b} report that the mass function in R136 is well-approximated by a single powerlaw with a Salpeter-like slope from $3-120$ $M_\odot$, and \citet{andersen09a} report a continuous powerlaw slope over an even wider range of mass, with no evidence for a turn-down in the vicinity of 10 $M_\odot$. Similarly, \citet{espinoza09a} examine the Arches cluster and report that the mass function above 10 $M_\odot$ is well-described by a powerlaw with a slope $\Gamma=-1.1\pm 0.2$, consistent within the errors with the Salpeter value $\Gamma = -1.35$. There are significant systematic uncertainties on these values, arising mostly from the challenge of assigning masses to stars based on photometry, but it is important to note that a mass function of the form predicted by the collisional simulations should be apparent even from the \textit{luminosity} function, independent of the mapping between luminosity and mass. Due to confusion, even luminosity functions can be difficult to measure in the cores of clusters dense enough to be candidates for collisions, but this data should improve significantly in the era of 30m-class telescopes. Observations with these facilities should be able to settle the question of whether the mass and luminosity functions in cluster cores show the characteristic signature of a depletion from $\sim 10-100$ $M_\odot$ coupled to one or two stars at a few hundred $M_\odot$ that is predicted by collisional models.
\subsection{Environmental-Dependence of the Stellar Mass Function}
A second possible discriminant between accretion and collision as mechanisms for the formation of very massive stars is the way the stellar mass function depends on the large-scale properties of the cluster. As noted above, both gas-free and gas-driven collision models require very high stellar densities (even in the present-day state) and very high cluster masses; \citet{baumgardt11a} argue that clusters where gas-driven collisions occur all end up too compact compared to observed ones, and \citet{moeckel11a} argue that the Arches is not dense enough to be able to produce significant collisions. In contrast, accretion models either predict that the stellar IMF will be independent of environment, or that massive stars will be biased to regions of high surface density \citep{krumholz08a, krumholz10a}. Accretion models do not predict that there should be an upper limit to stellar masses that is a function of either cluster mass or central stellar density.
This is a somewhat weaker test than the functional form of the stellar mass function, simply because the model predictions are somewhat less clear, but it may nonetheless provide a valuable complement. The challenge here will be obtaining a sample large enough to see if there is a statistically-significant correlation between the presence or absence of stars above some mass and properties of the environment like cluster mass or density. A major complication is that one expects a correlation between maximum stellar mass and cluster size simply due to size of sample effects. Observations must therefore remove the size of sample effect statistically, and search for a small correlation that might remain once the size of sample effect is removed. Some authors have claimed to detect such a correlation already in Galactic clusters \citep{weidner10a, weidner13a}, while others have reported the absence of any such correlation in extra-Galactic environments \citep{calzetti10b, fumagalli11a, andrews13a}. Given the poorly-understood selection issues associated with the Galactic sample (which is culled from the literature, rather than produced by a single survey), it seems likely that the extragalactic results based on uniform samples are more reliable, but the issue remains disputed.
An observation of a very massive star formed in relative isolation would also be strong proof that collisions are not required to make such stars, though it would not rule out the possibility that some stars form that way. There are several candidates for isolated stars with masses above $\sim 20$ $M_\odot$ \citep[and in some cases as much as $100$ $M_\odot$;][]{bressert12a, oey13a}, which appear unlikely to be runaways because they have small radial velocities and no bow shocks indicating large transverse motions. However, there remains the possibility that these are ``slow runaways'' that were ejected very early and thus managed to reach fairly large distances from the cluster despite their fairly small space velocities \citep{banerjee12a}.
\subsection{Companions to Massive Stars}
The properties of massive star companions provide a final potential test for distinguishing accretion-based and collisional formation models. It is well known that massive stars are much more likely than stars of lower mass to be members of multiple systems. \citet{mason09a} report a companion fraction of $75\%$ for O star primaries in Milky Way star clusters\footnote{O stars outside clusters are likely to have been dynamically ejected from the cluster where they were born, and in the process stripped of companions.}, while \citet{sana13a} find a companion fraction of 50\% for O stars in the Tarantula Nebula in the Large Magellanic Cloud. \citet{sana12a} estimate that 70\% of O stars have a companion close enough that they will exchange mass with it at some point during their main sequence or post-main sequence evolution, and that $1/3$ of O stars have a companion so close that they will merge\footnote{Mergers and mass transfer may also be significant during pre-main sequence evolution -- see \citet{krumholz07c}.}. From the standpoint of formation theories, a high binary fraction is expected regardless of whether massive stars are formed via accretion \citep[e.g.][]{kratter08a, kratter10a, krumholz12b} or collisions \citep[e.g.][]{portegies-zwart99a, bonnell05a}. However, much less is known about the prevalence of low-mass companions to massive stars, or to tight massive binaries, and the statistics of low-mass companions to massive stars provide another potential test of formation models.
Accretion-based models predict that, in addition to their massive companions, massive stars are also very likely to have low-mass companions at separations of $\sim 100-1000$ AU \citep{kratter06a, kratter08a, kratter10a, krumholz12b}. The authors of collisional models have not thus far published detailed predictions for massive binary properties, but these should be trivial to obtain from the simulations already run, and it seems likely that the dense dynamical environment required for collisions would strip any low-mass, distant companions from massive stars. Thus observations capable of probing large mass ratios at intermediate separations might be a valuable test of massive star formation models.
This range is unfortunately relatively hard to probe via observations, as the expected radial velocity shifts are too small to be measured against the broad lines of a massive star, and the large contrast ratio makes direct imaging difficult. Observations using high contrast techniques like speckle imaging \citep{mason09a}, adaptive optics \citep{sana10a}, and lucky imaging \citep{maiz-apellaniz10a} are starting to push into this range, but still have some distance to go. Consider a primary massive star of mass $M_p$ with a companion of mass $q M_p$ (with $q<1$) in a circular orbit with semi-major axis $a$. The system is a distance $d$ from the Sun. Spectroscopic surveys are generally limited in their ability to detect companions by the velocity semi-amplitude $v_{\rm lim}$ to which they are sensitive, which is generally $\sim 5-10$ km s$^{-1}$ depending on the linewidths of the primary star \citep[e.g.][]{kiminki07a, kobulnicky07a}. The companion is detectable only if
\begin{equation}
\label{eq:aspec}
a < \left(\frac{q^2}{q+1}\right)\frac{G M_p}{v_{\rm lim}^2} \approx 5.3 \left(\frac{q}{0.1}\right)^2 \left(\frac{M_p}{60\,M_\odot}\right)\left(\frac{v_{\rm lim}}{10\mbox{ km s}^{-1}}\right)^{-2}\mbox{ AU}.
\end{equation}
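As a quick numerical check of equation (\ref{eq:aspec}), the sketch below evaluates the full expression for the fiducial values used in the text ($q=0.1$, $M_p=60$ $M_\odot$, $v_{\rm lim}=10$ km s$^{-1}$); keeping the $1/(q+1)$ factor gives $\approx 4.8$ AU, consistent with the quoted $\approx 5.3$ AU once that small-$q$ factor is dropped:

```python
# Numerical check of the spectroscopic detectability limit, eq. (aspec).
# Fiducial values from the text: q = 0.1, M_p = 60 Msun, v_lim = 10 km/s.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30   # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def a_max_spec(q, m_p_msun, v_lim_kms):
    """Maximum semi-major axis (in AU) at which a companion of mass
    ratio q on a circular orbit is detectable spectroscopically."""
    m_p = m_p_msun * MSUN
    v = v_lim_kms * 1e3
    return (q**2 / (q + 1)) * G * m_p / v**2 / AU

a = a_max_spec(0.1, 60.0, 10.0)  # ~4.8 AU with the full 1/(q+1) factor;
                                 # dropping it (small-q limit) gives ~5.3 AU
```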
Imaging surveys are limited by the contrast they can achieve. For example, \citet{sana10a} estimate that their detection threshold is a contrast of $\Delta K_s \approx \Delta K_{s,0} (r-0\farcs{24})^{1/3}$, where $\Delta K_{s,0} = 6$ mag, $r$ is the angular separation in arcsec, and $\Delta K_s$ is the contrast in the $K_s$ band. Given a mass-magnitude relationship $K_s(M)$, a companion will be detectable if
\begin{equation}
\label{eq:aao}
|K_s(M_p) - K_s(q M_p)| < \Delta K_{s,0} \left[2.06\times 10^5\left(\frac{\mbox{arcsec}}{\mbox{rad}}\right)\frac{a}{d}-0\farcs{24}\right]^{1/3}.
\end{equation}
Figure \ref{fig:binary_limits} shows the ranges of $q$ and $a$ over which companions to massive stars are detectable given these sensitivities.
\begin{figure}[t]
\sidecaption
\centerline{\includegraphics[scale=1]{binary_limits}}
\caption{
Estimated detectability of companions to massive stars as a function of mass ratio $q$ and semi-major axis $a$ using spectroscopic surveys (blue hashed region), adaptive optics imaging surveys (red dashed region), and using a next-generation instrument like GPI (gray region). These sensitivities are calculated for a hypothetical primary of mass $M_p = 60$ $M_\odot$ at a distance $d=2$ kpc. The limit for spectroscopy is computed using equation (\ref{eq:aspec}) assuming a velocity semi-amplitude limit $v_{\rm lim} = 5$ km s$^{-1}$. The limit for adaptive optics is computed from equation (\ref{eq:aao}) using the Padova mass-magnitude relations for ZAMS stars \citep{marigo08a, girardi10a}. The GPI limit shown is the physical size corresponding to the $0\farcs{11}$ size of the GPI occulting stop in J band.
\label{fig:binary_limits}
}
\end{figure}
The next generation of high-contrast systems designed for planet imaging, such as the Gemini Planet Imager (GPI) and the Spectro-Polarimetric High-contrast Exoplanet Research instrument (SPHERE), should push much farther and be able to detect even very low mass companions to massive stars. Indeed, the contrast ratios these instruments can achieve are such that, outside of their occulting stops, they should be sensitive to companions to O stars down to the hydrogen burning limit. Figure \ref{fig:binary_limits} shows an estimate of the sensitivity region for GPI. Observations using these instruments should provide a definitive census of massive star companions at high mass ratio and intermediate separation. This is likely to prove a powerful constraint on formation models.
\section{Conclusions and Summary: Does Star Formation Have an Upper Mass Limit?}
\label{sec:masslimit}
Having discussed the two main formation scenarios, I now return to the question of whether star formation has a mass limit. To review, there is at present no really convincing evidence that any mechanism is capable of halting the growth of stars by accretion. The classical mechanism for limiting stellar masses is radiation pressure, but non-spherical accretion, produced by some combination of disks, outflow cavities, and instabilities, appears to defeat this limit. Similarly, the problem of gas fragmenting too strongly to form massive stars appears to be solved by a combination of radiative heating and magnetic fields, though the possibility that disk fragmentation might at some point limit stellar masses remains. Photoionization and stellar winds are somewhat more promising as mechanisms that might limit stars' growth, but these remain at best possibilities. There are no real analytic models applicable to present-day (as opposed to primordial) star formation that suggest what limits these mechanisms might impose, and there are no simulations demonstrating that either of these processes is capable of terminating accretion. A fair description of the state of the field a decade ago might have been that the presumption was in favor of feedback limiting massive star formation, and that the burden of proof was on those trying to show that feedback could be overcome. The last decade of work has reversed that situation, with all tests thus far performed showing that accretion is very difficult to reverse. This does not prove that no mechanism can limit stellar masses, but does mean that such a limit would need to be demonstrated.
For collisions, the question is not whether but where they can create very massive stars. There is no doubt that collisions and collisional growth will occur if the conditions are dense enough, and the only question is the frequency with which such dense conditions are created in nature, which in turn will determine the contribution of the collisional formation channel to the overall population of very massive stars. No presently-observed star cluster has a density high enough for collisions to be likely, but it is possible that these clusters experienced a very dense phase during which collisions occurred. This could have been either an early gas-dominated phase or a later phase of core collapse aided by primordial mass segregation and high levels of primordial substructure. However, the threshold density required to achieve significant collisional growth depends on details of massive star mergers and winds that are poorly understood. Even for favorable assumptions about these uncertain parameters, it is not clear that the observed present-day properties of massive star clusters can be reconciled with an evolutionary history in which they were once dense enough to have produced collisions.
There are a number of observational tests that may be able to settle the question of which of these mechanisms is the dominant route to the formation of the most massive stars. Accretion models predict that massive stars are simply the tip of the iceberg of normal star formation, so that the high end of the stellar mass function is continuous and does not depend radically on the environment, and that massive stars are likely to have low-mass as well as high-mass companions. Although the observable consequences of the collisional formation models have received somewhat less attention, such models appear to predict quite different results: there should be a large gap in stellar mass functions separating the bulk of the accretion-formed stellar population from the few collisionally-formed stars, and this feature should appear only in the most massive and densest clusters. It seems likely that these collisionally-formed stars will lack low-mass companions. It should be possible to perform most or all of these observational tests with the coming generation of telescopes and instruments, which will provide higher angular resolution and contrast sensitivity than have previously been possible.
\begin{acknowledgement}
I thank all the authors who provided figures for this review: H.~Baumgardt, J.~Dale, N.~Moeckel, A.~C.~Myers, and D.~Vanbeveren. During the writing of this review I was supported by NSF CAREER grant AST-0955300, NASA ATP grant NNX13AB84G, and NASA TCAN grant NNX14AB52G. I also thank the Aspen Center for Physics, which is supported by NSF Grant PHY-1066293, for hospitality during the writing of this review.
\end{acknowledgement}
\bibliographystyle{apj.bst}
\section{Introduction} \label{intro}
\begin{figure}[!t]
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.90\columnwidth]{multismc}
\end{subfigure}
\\
\begin{subfigure}
\centering
\includegraphics[width=0.90\columnwidth]{cluster}
\end{subfigure}
\\
\vspace{-0.025\columnwidth}
\begin{subfigure}
\centering
\includegraphics[width=0.93\columnwidth]{params}
\end{subfigure}
\caption{(a) An overview of the SMC network for scalable ConvNet execution, (b) block diagram of one SMC instance highlighting the NeuroCluster platform along with the baseline system parameters.}
\label{fig:sch}
\end{figure}
Today, brain-inspired computing (BIC) is successfully used in a wide variety of applications such as surveillance, robotics, industrial, medical, and entertainment systems. Recently, several research programs have been launched by major industrial players (e.g. Facebook, IBM, Google, Microsoft), pushing towards deploying services based on brain-inspired machine-learning (ML) to their customers \cite{GOOGLENET-PAPER}\cite{DEEPFACE}\cite{MICROSOFTCOCO}. These companies are interested in running such algorithms on powerful compute clusters in large data centers.
Convolutional neural networks (ConvNets) are known as the SoA ML algorithms specialized at BIC \cite{CONVNET-TAXONOMY}.
ConvNets process raw data directly, combining the classical models of feature extraction and classification into a single algorithm. Their key advantages over traditional multilayer perceptrons (MLPs) are local connectivity and weight sharing:
Each neuron is connected only to a local region of the previous layer (or the input volume) called its receptive field \cite{VGGNETWORKS}. This is beneficial for dealing with high-dimensional inputs such as images. %
Moreover, weight sharing dramatically reduces the number of parameters that need to be stored.
ConvNets are not limited to image processing; they can also be applied to other workloads such as audio and video \cite{CONVNET-VIDEO}, and even RFID-based activity recognition \cite{RFID}. %
Also, function approximation in scientific workloads \cite{Fathom} %
is another important target for ConvNets, motivating the need for a highly scalable and energy-efficient execution platform for them. In addition, recurrent networks (RNN) have been recently utilized for Deep Learning (DL) and implemented on scalable network-on-chips \cite{TPDS-SPIKING}\cite{TPDS-RNN}. These networks have a great potential for solving time-dependent pattern recognition problems because of their inherent dynamic representations. All these emerging DL models can be future targets for our PIM proposal, yet, in this paper, we focus on ConvNets for image and video.
A diverse range of ConvNet implementations exist today from standard software libraries running on general-purpose platforms \cite{CAFFE}\cite{CUDNN} to application-specific FPGA \cite{FPGA15}\cite{NEUFLOW}\cite{CAFFEINE}, ASIC \cite{EIE}\cite{DIANNAO}\cite{ORIGAMI}\cite{ShiDianNao}, and even initial explorations on near memory computing \cite{AMD}\cite{NEUROCUBE}\cite{PRIME}\cite{TETRIS}.
Even though ConvNets are computation-intensive workloads and extremely high energy-efficiencies have been previously reported for their ASIC implementations \cite{ORIGAMI}\cite{ShiDianNao}\cite{DIANNAO},
the scalability and energy-efficiency of modern ConvNets are ultimately bound by the main memory where their parameters and channels need to be stored (see \autoref{related-conv}). This makes them interesting candidates for near-memory computation, offering them plenty of bandwidth at a lower cost and without much buffering compared to off-chip accelerators, due to lower memory access latency (a consequence of Little's law\footnote{Little's law ($L = \lambda W$) states that in a stable memory system, the long-term average buffer size ($L$) is equal to the long-term average effective bandwidth ($\lambda$) multiplied by the average memory access time ($W$).} \cite{ERFANARCS16}).
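The buffering argument from Little's law can be made concrete with a toy calculation; the bandwidth and latency figures below are illustrative assumptions, not measurements of any platform:

```python
# Toy illustration of Little's law (L = lambda * W): the average number
# of bytes that must be in flight (i.e. buffered) to sustain a target
# bandwidth at a given memory access latency. The bandwidth and latency
# figures below are illustrative, not measurements of any platform.
def littles_law_buffer(bandwidth_bytes_per_s, latency_s):
    return bandwidth_bytes_per_s * latency_s

near = littles_law_buffer(100e9, 50e-9)    # near-memory: ~5 KB in flight
far = littles_law_buffer(100e9, 150e-9)    # off-chip:   ~15 KB in flight
```

A 3x lower access latency translates directly into a 3x smaller buffer needed to sustain the same bandwidth, which is the cost advantage claimed above.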
Heterogeneous three-dimensional (3D) integration is helping mitigate the well-known memory-wall problem \cite{ERFANTVLSI16}. %
Through-silicon-via (TSV) technology is reaching commercial maturity among memory manufacturers \cite{HMCSTANDARD}\cite{3DFLASH}, who use it to build memory cubes made of vertically stacked thinned memory dies, in packages with a smaller footprint and lower power than traditional multichip modules while achieving higher capacity. On the other hand, this new context provides an opportunity for revisiting near-memory computation to further close the gap between processors and memories \cite{AMD}\cite{NEUROCUBE}. %
This approach promises significant energy savings by avoiding energy waste in the path from processors to memories.
In 2013, an industrial consortium backed by several major semiconductor companies standardized the hybrid memory cube (HMC) \cite{HMCSTANDARD} as a modular and abstracted 3D memory stack of multiple DRAM dies placed over a logic base (LoB) die, providing a high-speed serial interface to the external world.
More recently, a fully backward compatible extension to the standard HMC called the smart memory cube (SMC) was introduced in \cite{ERFANTVLSI16} along with a flexible programming-model \cite{ERFANARCS16}, augmenting the LoB die with generic PIM capabilities. %
In this paper, we propose a scalable, flexible, and energy-efficient platform targeting large-scale execution of deep ConvNets with growing memory footprints and computation requirements. Our proposal increases the total LoB die area of a standard HMC by only 8\% and achieves 240\,GFLOPS on average for complete execution of full-featured ConvNets within a power-budget of 2.5\,W. An energy efficiency of 22.5\,GFLOPS/W is achieved in the whole 3D stack (consuming 11\,W in total), which is 3.5X better than the best GPU implementations in similar technologies. %
We also demonstrate that using a flexible tiling mechanism along with a scalable computation paradigm it is possible to efficiently utilize this platform beyond 90\% of its roofline \cite{ROOFLINE} limit, and scale its performance to 955\,GFLOPS with a network of four SMCs. We have adopted the cycle-accurate SMC model previously developed in \cite{ERFANTVLSI16} along with the generic software stack provided in \cite{ERFANARCS16}, and built a Register-Transfer-Level (RTL) model for our DL framework, along with the required software layers.
Our main contributions can be summarized as follows: I) Using near memory computation for large-scale acceleration of deep ConvNets with large memory footprints, requiring the use of DRAM; II) Proposing the NeuroStream coprocessors as an alternative to vector-processing, providing a flexible form of parallel execution without the need for fine-grained synchronization; III) Presenting a flexible tiling mechanism and a scalable computation paradigm for ConvNets, achieving more than 90\% roofline utilization; IV) A low-cost and energy-efficient implementation of this solution based on a standard HMC device, scalable to a network of multiple HMCs.
This paper is organized as follows. Background and related work are presented in \autoref{related}. Our architectural design methodology, computation paradigm, and programming model are explained in \autoref{arch}, \autoref{comp-model}, and \autoref{prog-model}, respectively. Experimental results are in \autoref{exp}. Conclusions and future directions are explained in \autoref{con}.
\section{Background and Related Work} \label{related}
A brief introduction to ConvNets is presented in \autoref{background}. The evolution of modern ConvNets and their uprising implementation challenges are explained in \autoref{related-conv}. The existing implementations for them are compared with this work in \autoref{related-impl}.
\subsection{Convolutional Neural Networks} \label{background}
ConvNets are typically built by repeated concatenation of five classes of layers: convolutional (CONV), activation (ACT), pooling (POOL), fully-connected (FC), and classification (CLASS) \cite{DLBOOK}.
CONV is the core building block of a ConvNet, doing most of the computational heavy lifting for feature extraction. It essentially consists of multiply-and-accumulate (MAC) operations, as shown below \cite{DLBOOK}:
\begin{small}
\begin{align}
&y_{o}^{l}(i,j) = b_o^{l} + \sum_{c \in C_{i}} \sum_{(a,b) \in K} k_{o,c}^l(b,a) x_{c}^l(j-b,i-a)\nonumber
\end{align}
\end{small}
where $o$ indexes the output channels ($C_{o}^l$), $c$ indexes the input channels ($C_{i}^l$), and $K$ denotes the convolution kernels (a.k.a. filters).
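As a concrete illustration, the CONV operation above can be written as a direct nested loop (a minimal Python sketch with illustrative dimensions; like most DL frameworks, it implements cross-correlation, and border handling is simplified to `valid' mode):

```python
def conv_layer(x, k, bias, C_i, C_o, H, W, K):
    """Direct 'valid' convolution of a C_i x H x W input volume with
    C_o x C_i x K x K kernels, producing a C_o x (H-K+1) x (W-K+1) output."""
    H_o, W_o = H - K + 1, W - K + 1
    y = [[[bias[o] for _ in range(W_o)] for _ in range(H_o)] for o in range(C_o)]
    for o in range(C_o):                      # output channels
        for c in range(C_i):                  # input channels
            for i in range(H_o):              # output rows
                for j in range(W_o):          # output columns
                    for a in range(K):        # kernel rows
                        for b in range(K):    # kernel columns
                            y[o][i][j] += k[o][c][a][b] * x[c][i + a][j + b]
    return y
```

The innermost statement is exactly the MAC that dominates ConvNet workloads; in the architecture described later, these MACs are offloaded to the NST coprocessors.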
After each CONV layer, a non-linear activation function (e.g. \textit{sigmoid}, \textit{tanh}, or \textit{ReLU} \cite{DLBOOK}) is applied to the output $y$ of each individual neuron. This non-linearity gives neural networks (NNs) superior classification and learning capabilities over linear classifiers and allows them to solve non-trivial problems.
\textit{Sigmoid} and \textit{tanh} come from traditional multilayer perceptrons (MLPs), and their need for computing exponential functions makes them unsuitable as the main activation function \cite{DLBOOK}. In modern feed-forward NNs the common recommendation is to use the rectified linear unit (\textit{ReLU}), defined by $g(z)=\max\{0,z\}$. Applying this function to the output of a linear transformation yields a piecewise-linear function. For this reason, it preserves many of the properties that make linear models easy to generalize and optimize with gradient-based methods \cite{DLBOOK}.
It is common to periodically insert a POOL layer in-between successive CONV layers. Its function is to progressively reduce the size of the volume (e.g. by calculating the maximum value of every $k{\times}k$ region). This is to reduce the amount of parameters and computation in the network and to control over-fitting \cite{DLBOOK}.
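A minimal sketch of the ACT and POOL stages described above (plain Python, single channel, non-overlapping pooling windows; the function names are illustrative):

```python
def relu(z):
    """ReLU activation: g(z) = max(0, z)."""
    return max(0.0, z)

def max_pool(x, k):
    """Non-overlapping k x k max-pooling of one channel (a list of rows),
    shrinking each spatial dimension by a factor of k."""
    H, W = len(x), len(x[0])
    return [[max(x[i + di][j + dj] for di in range(k) for dj in range(k))
             for j in range(0, W - k + 1, k)]
            for i in range(0, H - k + 1, k)]
```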
In the final layers, multiple FC layers and one CLASS layer perform the final classification and transform the results into several classes. FC layers have full connectivity and work similarly to MLPs.
The CLASS layer converts the outputs of the network to categorical distributions. A widely used classifier is the SoftMax function. Compared to the rest of the network, its computational complexity is usually small \cite{CONVNET-TAXONOMY}\cite{LUKAS-DAC15}.
The first layer connects the network to the input volume which can be an image, a video frame, or a signal, depending on the application (a 3-channel R,G,B image for instance). Each layer $l$ transforms the input volume $(X_i, Y_i, C_i)$ into an output volume $(X_o, Y_o, C_o)$. This terminology is used throughout this paper and will be further elaborated in \autoref{4dtile}.
\subsection{Implementation Challenges of Modern ConvNets} \label{related-conv}
\begin{table}[!t]
\centering
\caption{Storage requirement (MB) in the SoA ConvNets.}
\includegraphics[width=0.85\columnwidth]{ht9}
\label{tbl:storage}
\end{table}
ConvNets have been evolving rapidly in the past years, from small networks of only a few layers (e.g. LeNet-5 \cite{DLBOOK}) to over a hundred \cite{RESNET-PAPER} and a thousand \cite{RESNET1K} layers, and from a few kilobytes of coefficients (a.k.a. weights) to multiple megabytes \cite{GOOGLENET-PAPER}\cite{VGGNETWORKS}\cite{RESNET-PAPER}.
Also, traditional ConvNets were only applicable to small 32x32 images, while SoA ConvNets accept 224x224 inputs, and this size is expected to grow \cite{DLBOOK}.
\autoref{tbl:storage} shows an estimation for the storage requirements (in MB) of top-performing ConvNets, assuming layer-by-layer execution. AlexNet \cite{DLBOOK} is the 2012 winner of the ILSVRC challenge \cite{LSVRC}. VGG networks \cite{VGGNETWORKS} and GoogLeNet \cite{GOOGLENET-PAPER} were the winners of different categories in 2014, and ResNet \cite{RESNET-PAPER} was the most recent winner of this challenge in 2015. ResNet1K with 1001 layers \cite{RESNET1K} is omitted from our study because its training loss and validation error (for the ImageNet database \cite{LSVRC}) are not yet lower than its previous versions. Instead in this paper, ResNet-152 is extended to larger networks (accepting {250K}/{1M}/{2M}/{4M}-pixel images shown in \autoref{tbl:storage}) to further investigate the scalability of our approach and its applicability to beyond High-Definition (HD) image resolutions. ResNet is chosen for this purpose because it is more challenging to accelerate than the other networks (See \autoref{perf}).
It can be clearly seen that the typical on-chip (L1, L2) storage in the memory hierarchy (caches or SRAM-based scratchpad memories) cannot accommodate even a single layer of these ConvNets, as the required storage per layer ranges from 6\,MB to over 300\,MB. In addition, the assumption that all coefficients can be stored on-chip (\cite{EIE}\cite{DIANNAO}\cite{NEUFLOW-ASIC}) is not valid anymore, since an additional 14\,$\sim$\,280\,MB is required to accommodate the coefficients. Overall, 16$\sim$580\,MB is needed for layer-by-layer execution, demonstrating that DRAM is necessary as the main storage for deep ConvNets and motivating computation near main memory. %
A similar observation was recently made in \cite{NEUROCUBE}.
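The per-layer figures of \autoref{tbl:storage} can be approximated with a simple back-of-the-envelope model (an illustrative Python sketch, assuming FP32 elements and layer-by-layer execution; the function name and example dimensions are hypothetical):

```python
def layer_storage_mb(C_i, H_i, W_i, C_o, H_o, W_o, K, bytes_per_elem=4):
    """Storage (MB) to hold one layer's input + output feature maps and its
    K x K coefficients, assuming FP32 elements and layer-by-layer execution."""
    feature_maps = (C_i * H_i * W_i + C_o * H_o * W_o) * bytes_per_elem
    coefficients = C_o * C_i * K * K * bytes_per_elem
    return (feature_maps + coefficients) / 2**20
```

For instance, a 3x3 layer with 64 input and 64 output channels on 224x224 feature maps already needs roughly 25\,MB, in line with the observation that even a single layer exceeds typical on-chip storage.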
Another point is that the straightforward topology of the traditional ConvNets such as LeNet-5 has recently evolved to more complex topologies such as Deep Residual Learning in ResNet \cite{RESNET-PAPER} and the Inception Model (Network in Network) in GoogLeNet \cite{GOOGLENET-PAPER}. This makes application specific implementations less practical and highlights the need for flexible and programmable platforms.
Also, unlike traditional ConvNets with very large and efficient convolution filters (a.k.a. feature maps) of over 10x10 inputs, modern ConvNets tend to have very small filters (e.g. 3x3 in VGG and 1x1 in GoogLeNet and ResNet). It can be easily verified that the Operational Intensity (OI)\footnote{Operational Intensity (OI), a.k.a. Computation to Communication Ratio, is a measure of computational efficiency defined in the roofline-model \cite{ROOFLINE} as the number of computations divided by the total transferred data (bytes).} \label{DEFINITON-OI}
decreases as the convolution filters shrink. This can negatively impact computation, energy, and bandwidth efficiency (See \autoref{exp}). In this paper, we design a scalable PIM platform capable of running very deep networks with large input volumes and arbitrary filter sizes.
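To see why OI drops with the filter size, consider an idealized model in which every input/output element and coefficient crosses the memory interface exactly once (a hedged Python sketch; the same-size output and the count of 2 FLOPs per MAC are simplifying assumptions):

```python
def conv_oi(C_i, H, W, C_o, K, bytes_per_elem=4):
    """Ideal operational intensity (FLOP/byte) of a conv layer, assuming each
    input/output element and coefficient is transferred once, the output has
    the same spatial size as the input, and each MAC counts as 2 FLOPs."""
    flops = 2 * C_o * H * W * C_i * K * K
    bytes_moved = (C_i * H * W + C_o * H * W + C_o * C_i * K * K) * bytes_per_elem
    return flops / bytes_moved
```

With, e.g., 64 input/output channels on 56x56 maps, a 3x3 filter yields roughly 8x the OI of a 1x1 filter: compute grows with $K^2$ while the traffic is dominated by the feature maps.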
Lastly, different tiling methods for ConvNets have been previously proposed \cite{FPGA15}\cite{CAFFEINE} %
for FPGA implementations, in \cite{DIANNAO} for a neuromorphic accelerator, and in \cite{NVE} for a Very Long Instruction Word (VLIW) architecture. In \cite{NVE} a tile-strip mechanism is proposed to improve locality and inter-tile data reuse for ConvNets with large filters. In \cite{CAFFEINE} a row-major data layout has been proposed to improve DRAM's bandwidth efficiency and reduce bank conflicts in FPGA's BRAM banks.
Also, tile-aware memory layouts have been previously proven effective for multi-core \cite{SVD} and GPU implementations \cite{TPDS-GPUTRANSPOSE} of linear algebra algorithms, directly affecting their cache performance, bandwidth efficiency, and the degree of parallelism.
In this paper, we introduce a general and flexible form called 4D-tiling (\autoref{4dtile}) allowing for optimization of performance and energy efficiency under given constraints such as on-die SPM and DRAM bandwidth usage. Our proposed mechanism reduces the communication overheads among the clusters and uses the DRAM interface more efficiently by merging DMA transfers into larger chunks.
Throughout this paper, we use single-precision floating-point (FP32) arithmetic to be able to flexibly target large-scale DL in the high-performance computing domain. The wide dynamic range offered by this representation improves programmability and allows for implementing a wider range of algorithms, as well as, training and backpropagation, since they usually require higher precision and dynamic range \cite{DLBOOK}.
We use the notion of GFLOPS (giga floating-point operations per second) to report the achieved FP32 performance, along with GOPS (giga operations per second) for integer/fixed-point performance in \autoref{related-impl}.
\subsection{SoA ConvNet Implementations}\label{related-impl}
A glance at the SoA highlights two main directions:
(I) Application-specific architectures based on ASIC/FPGAs \cite{NEUFLOW-ASIC}\cite{ORIGAMI}\cite{DIANNAO}\cite{FPGA15}\cite{CAFFEINE}\cite{EIE}\cite{EYERISS};
(II) Software implementations on programmable general-purpose platforms such as CPUs and GPUs \cite{LUKAS-DAC15}\cite{CAFFE-GPU}\cite{FPGA15}\cite{CAMBRICON}.
ASIC ConvNet implementations achieve impressive energy efficiency and performance:
DianNao \cite{DIANNAO} obtains 450\,GOPS at 0.5\,W with a neuromorphic architecture using 16b fixed-point arithmetic in 65nm technology. Later, it has been extended to 1250\,GOPS within a similar power budget in \cite{ShiDianNao}.
The limiting assumption in this work is that the whole ConvNet (coefficients + the largest intermediate layer of LeNet-5) fits inside the on-chip SRAM ($\sim$256kB). As we showed above, this assumption is not valid anymore for modern ConvNets. Also, they use a small input image size (32x32) with very large convolution filters (e.g. 18x18, 7x7), which is unrealistic for modern ConvNets, as explained before.
In EIE \cite{EIE} coefficients are compressed by pruning and weight-sharing, achieving 100\,GOPS at 625\,mW in 45nm technology, with the main drawback of storing 84M coefficients on-chip, resulting in an area of over 40$mm^{2}$.
Eyeriss \cite{EYERISS} presents a reconfigurable ConvNet accelerator mainly focusing on reducing data movement by an approach called ``row-stationary'' computation, in which kernel coefficients are loaded once and reused several times. Eyeriss achieves around 70\,GOPS at 278\,mW for AlexNet, but when scaling to VGG16, their performance drops to 20\,GOPS within the same power budget. In \cite{TETRIS} it is shown that memory is the main bottleneck of Eyeriss, limiting its scalability and energy efficiency when used with larger networks and images.
Origami \cite{ORIGAMI} achieves 145\,GOPS at 0.5\,W, using 12b fixed-point implementation (65nm-UMC technology at 1.2V, with 40kB of storage), being scalable to 800\,GOPS/W at 0.8V.
The main issue with these works is their lack of flexibility and scalability to large inputs and modern ConvNets. Also, the assumption that a significant part of the ConvNet can be stored on-chip is not valid anymore, and shrinking filter dimensions can significantly hurt the reported performance and efficiency numbers (18x18 filters in \cite{DIANNAO}, 10x10 in \cite{NEUFLOW-ASIC}, 7x7 in \cite{NEUROCUBE}, and 6x6 in \cite{ORIGAMI}), due to the significantly reduced OI.
In this paper, we propose a flexible solution supporting a wide range of ConvNets with different network, kernel, and image dimensions.
FPGA platforms provide higher flexibility compared to ASIC implementations but lower energy/area efficiency due to the usage of reconfigurable routing switches and logic blocks.
In \cite{FPGA15}, ConvNet models are synthesized to Xilinx Virtex7-485T using high-level synthesis. 61\,GFLOPS is achieved (FP32) at 18\,W (3.4\,GFLOPS/W). In \cite{NEUFLOW-ASIC} the NeuFlow data-flow vision processor has been prototyped on Xilinx Virtex-6 VLX240T and 147\,GOPS @ 10\,W (14.7\,GOPS/W) is achieved.
Caffeine \cite{CAFFEINE} presents a flexible hardware/software co-design library to efficiently accelerate ConvNets on FPGAs. It achieves 166\,GOPS @ 25\,W (6.6\,GOPS/W) on Xilinx KU060 and 8.5\,GOPS/W on Xilinx VX690T with 16b fixed-point arithmetic. In comparison with CPU/GPU platforms, low-cost FPGAs have limited memory bandwidth, which is also highly sensitive to memory-access burst lengths, requiring a more careful design for efficient bandwidth usage. High-end FPGAs offer larger bandwidths thanks to their larger number of high-speed IOs. The problem is that these IOs are very general (because of the reconfigurability requirements) and therefore very expensive in area and power \cite{CAFFEINE}.
Our proposal achieves higher energy-efficiency thanks to near memory computation and having optimized DMA interfaces to DRAM with a novel tiling scheme.
In addition, the higher bandwidth available to our solution translates into lower programming effort (according to the roofline model \cite{ROOFLINE}) and reasonable performance, even for applications not super-optimized to use the available bandwidth efficiently.
General-purpose GPU platforms, on the other hand, are able to flexibly execute different deep NNs \cite{CAFFE-GPU}\cite{LUKAS-DAC15}\cite{FPGA15} without the limitations of application specific architectures.
Fast and user-friendly frameworks such as CAFFE \cite{CAFFE} and cuDNN \cite{CUDNN} are publicly available which also provide facilities to efficiently train deep NNs.
In \cite{CAFFE-GPU} over 500\,GFLOPS has been reported for executing the CAFFE models based on cuDNN on NVIDIA Tesla K40 with default settings. By turning off error correction and boosting the clock speed they reach 1092\,GFLOPS @ 235\,W (4.6\,GFLOPS/W). GeForce GTX 770 achieves 2.6\,GFLOPS/W using the same framework \cite{CAFFE-GPU}.
Mobile GPUs achieve similar energy efficiencies at lower power budgets. 54\,GFLOPS for less than 30\,W is reported in \cite{NEUFLOW-ASIC} for NVIDIA GT335M, and in \cite{LUKAS-DAC15} 84\,GFLOPS for 11\,W is reported for NVIDIA Tegra K1.
More recently NVIDIA \cite{NVIDIA-WP17} has reported promising energy and performance improvement for its high-end GPU accelerator Tesla P100 in 16nm technology and with a new framework called TensorRT which is 1.5X more efficient than CAFFE. For inference with GoogLeNet, ResNet-50, and AlexNet, 20, 23.9, and 35\,GFLOPS/W are reported, respectively.
It is worth noting that Tesla P100 is an expensive high-end accelerator costing more than \$9K, while our PIM solution can be integrated within existing systems with HMC devices at almost no additional cost, in the same package structure, and within the same power budget. Moreover, an HMC module itself costs less than \$1.4K, and this is expected to decrease as its market grows. %
CPU implementations achieve lower energy efficiency for execution of ConvNets with standard frameworks. In \cite{FPGA15}, 12.8\,GFLOPS at 95\,W has been reported for Intel Xeon CPU E5-2430 (@2.20GHz) with 15MB cache and 16 threads. In \cite{LUKAS-DAC15}, 35\,GFLOPS at 230\,W has been reported for Intel Xeon E5-1620v2.
In \cite{CAMBRICON} a domain-specific instruction set architecture (ISA) is designed for the widely used NN models by identifying the common operations in them. They show higher flexibility compared to \cite{DIANNAO} by being able to model 9 classes of NNs. The size of the studied networks, however, is extremely small compared to the ones studied in our paper.
Another common approach is to augment a RISC processor with Single-Instruction-on-Multiple-Data (SIMD) extensions. %
Commercial platforms such as TI AccelerationPAC, CEVA-XM4, Synopsys DesignWare EV5x, and Movidius Fathom %
follow this trend.
Performance and efficiency characterizations of these platforms are not publicly available; nevertheless, SIMD extensions require more programming effort to be utilized efficiently, and their register-file bottleneck limits their scalability \cite{CAMBRICON}. In this paper, we follow a different approach based on many scalar coprocessors working in parallel on a shared memory, as described in \autoref{arch}.
On the other hand, Google's TensorFlow platform \cite{TENSORFLOW} maps large-scale ML problems to several machines and computation devices, including multi-core CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs).
Nervana, also, has built a scalable ML platform \cite{NERVANA} with their own implementation of TPUs, and a library called Neon to support cloud computation with different back-ends. Apache Spark features a library called MLlib \cite{MLLIB} targeting scalable practical ML. No performance or efficiency data is publicly available for these platforms. Lastly, HCL2 \cite{TPDS-HANDOOP} motivates designing a heterogeneous programming system based on map-reduce for ML applications supporting CAFFE \cite{CAFFE} representations.
The study of the ConvNets in a near-memory context has been done in \cite{AMD}\cite{NEUROCUBE}\cite{PRIME}\cite{TETRIS}.
In \cite{AMD} the authors assume that the whole internal bandwidth of the HMC (320\,GB/s) is available to PIM. They reach a performance of 160\,GFLOPS (lower than our solution) for AlexNet and VGG inside each cube, but the details of their PIM design are not exposed. Moreover, only normalized execution time is reported rather than performance efficiency, and the analysis of power and area is left as future work.
In \cite{NEUROCUBE} a data-driven computing model is proposed using finite-state machines (FSMs) near each HMC vault controller, preprogrammed to generate DRAM addresses for the ConvNet under execution (16b fixed-point). Their study, however, is limited to a small ConvNet with 6 layers, and scaling their approach to modern ConvNets seems difficult. They achieve 132\,GOPS @ 13\,W, with an energy efficiency (10\,GOPS/W) lower than that of our work. The LoB die in NeuroCube consumes 3.4\,W, mainly due to the presence of data caches, on-chip storage for weights, and network-on-chip routers with packet encapsulation in their accelerator design.
Tetris \cite{TETRIS} is a scalable NN accelerator based on HMC. It uses the ``row-stationary'' computation paradigm proposed in \cite{EYERISS} with fixed-point computation and scales it to multiple NN engines, each associated with a DRAM vault. Tetris requires an area of 3.5$mm^2$ per vault in the 45nm technology, which can be scaled to 21$mm^2$ in 28nm technology. From the relative results reported in \cite{TETRIS}, its performance can be estimated as 159\,GOPS with an average power consumption of 6.9\,W. Both the energy and area efficiency of Tetris are lower than those of our work.
Finally, in \cite{PRIME}, ConvNet execution in ReRAM-based non-volatile memory is investigated, with different design decisions due to the drastically different memory technology used. The relative performance and energy numbers reported in this work make a direct comparison difficult; nevertheless, a thorough survey of techniques for using these memories in comparison with DRAM is presented in \cite{TPDS-NVSURVEY}.
In this paper, we have observed that for modern ConvNets with shrinking kernels, coefficient reuse is becoming less practical and approaches such as row-stationary are not that beneficial anymore. For this reason, we use a completely different approach focusing on parallelism rather than coefficient reuse.
To summarize, three main assumptions motivate our proposed computation paradigm and tiling mechanism: a) Focusing on synchronization-free parallelism rather than coefficient reuse; b) Limiting the on-chip storage available to the PIM cluster; c) Supporting very large input images (up to 32Mega-pixels). We will demonstrate that our scalable and flexible ConvNet acceleration platform provides higher energy efficiency compared to the best FPGA and GPU implementations in similar technologies at a fraction of their system cost.
\section{System Architecture} \label{arch}
ConvNets are, by nature, computation-demanding algorithms. One forward pass of VGG19, for example, requires around 20 billion MAC operations, over 100K operations per pixel. Maintaining even a frame rate of 10 frames per second requires over 200\,GFLOPS.
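The arithmetic behind this requirement is straightforward (counting each fused MAC as a single operation; the numbers are the approximate figures quoted above):

```python
macs_per_pass = 20e9      # ~20 billion MACs for one VGG19 forward pass
frame_rate = 10           # target frames per second
required_gflops = macs_per_pass * frame_rate / 1e9
# -> 200.0, i.e. at least 200 GFLOPS of sustained throughput
```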
In theory, ConvNets can reach extremely high OI ratios (discussed in \autoref{related-conv}), as they reuse data efficiently. However, due to the very large memory footprints of deep ConvNets, their performance and energy efficiency is ultimately constrained by the main DRAM storage and off-chip communication. As we will show throughout this paper, in a near-memory context some of these constraints can be relaxed, providing the possibility to improve energy efficiency and programmability.
\autoref{neurocluster} describes the design of our many-core PIM platform.
\subsection{NeuroCluster} \label{neurocluster}
NeuroCluster (Illustrated in \autoref{fig:sch}b) is a flexible general purpose clustered many-core platform, designed based on energy-efficient RISC-V processing-elements (PEs) \cite{RISCV} and NeuroStream (NST) coprocessors (described in \autoref{neurostream}), all grouped in tightly-coupled clusters. Each cluster consists of four PEs and eight NSTs, with each PE being responsible for programming and coordinating two of the NSTs. This configuration is found to be optimal in the explorations presented in \autoref{exp}.
The PEs are augmented with a light-weight Memory Management Unit (MMU) along with a small sized Translation Look-aside Buffer (TLB) providing zero-copy virtual pointer sharing from the host to NeuroCluster (More information in \autoref{prog-model}).
Caches and prefetchers provide a higher level of abstraction with little explicit control, and are better suited to host-side accelerators \cite{ERFANARCS16}. Instead, scratchpad memories (SPMs) and DMA engines are used here, together with a simple and efficient computation paradigm, to boost energy efficiency \cite{Rossi2016}\cite{ERFANARCS16}\cite{NVE}. Caches also introduce several coherence and consistency concerns and are less area- and energy-efficient than SPMs \cite{ERFANARCS16}. %
Each cluster features a DMA engine capable of performing bulk data transfers between the DRAM vaults and the SPM inside that cluster. It supports up to 32 outstanding transactions and accepts virtual address ranges without any alignment or size restrictions. The NST coprocessors, on the other hand, have limited visibility only to the cluster's SPM with no concerns about address translations and DMA transfers.
This mechanism allows for simple and efficient computation while maintaining the benefits of virtual memory support \cite{ERFANARCS16}.
Each PE is a light-weight RISC-V based processor with 4 pipeline stages and in-order execution (without branch prediction, predication, or multiple issue) for energy-efficient operation \cite{RISCV}. RTL models of these cores have been adopted from \cite{PULP}.
1\,kB of private instruction-cache (4-way set associative) is available to each core. %
An in-depth exploration of different instruction-cache choices (including size, associativity, and shared/private organizations) was previously performed in \cite{IGOR-ICACHE}, demonstrating that this organization supports not only larger data-sets (e.g. ConvNets) but also larger code, as long as the main computing loops (kernels) fit in the caches.
The SPM inside each cluster is word-level-interleaved (WLI) with multiple banks accessible through the cluster-interconnect.
The cluster-interconnect has been designed based on the logarithmic-interconnect proposed in \cite{ERFANTVLSI14} to provide low-latency all-to-all connectivity inside the clusters. Also, the AXI-4 based global-interconnect, connecting the clusters, follows the same architecture as the SMC-Interconnect \cite{ERFANTVLSI16} to achieve a very high bandwidth.
\subsection{NeuroStream} \label{neurostream}
NeuroStream (NST) is a streaming coprocessor designed based on two observations: (I) Modern ConvNets tend to have very small convolution filters, making coefficient reuse less practical (previously discussed in \autoref{related-conv}). (II) The most demanding operation in ConvNets is MAC \cite{LUKAS-DAC15}.
Therefore, unlike conventional SIMD coprocessors (e.g. ARM NEON), NST works directly on the shared multi-bank SPM without many internal registers (just one accumulator). This feature, along with its dedicated hardware address generators, allows it to perform arbitrary computations efficiently and directly on the SPM. This removes the register-file bottleneck present in SIMD architectures and allows NST to achieve a performance close to 1 MAC/cycle. Moreover, each NST can be treated as a scalar coprocessor working independently, yet several NSTs can be instantiated inside a cluster to achieve scalable parallelism without the need for fine-grained synchronization among them. This way, NSTs are easier to program than SIMD units, and they offer more flexibility in terms of the size, shape, and stride of the computations.
In total, 128 instances of NST, clocked at a moderate speed of 1\,GHz, sum up to 256\,GFLOPS of raw performance in the NeuroCluster.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{nst}
\caption{Architecture of the NeuroStream (NST) floating-point coprocessors.}
\label{fig:nst}
\end{figure}
\autoref{fig:nst} illustrates the block diagram of NST, composed of the main controller, three hardware-loops (HWL), two Address Generation Units (AGUs), and an FP32 datapath (FPU) compatible with the IEEE-754 standard.
The main-controller is responsible for receiving the commands from the processor and issuing them to the datapath. A parametric-depth first-in-first-out (FIFO) command-queue is implemented to hide the programming latencies. Also, the control interface is memory-mapped, making it possible for the NSTs to easily communicate with other processor micro-architectures (e.g. ARM).
NSTs follow a nonblocking data-flow computation paradigm, and information flows in them as tokens. The main controller is, therefore, responsible for issuing enough transactions (2 in each cycle in case of MAC) towards the SPM and filling up the operand FIFOs to keep the FPU busy almost every cycle.
The hardware-loops are programmable FSMs capable of modeling up to three nested-loops in hardware. The AGUs can be programmed to generate complex strided SPM access patterns (See \autoref{prog-nst}).
By having two direct ports to the cluster-interconnect, each NST can fetch two operands (typically one coefficient and one data) in a single-cycle and perform an operation on them.
NST supports strided convolution, max-pooling, ReLU-activation, along with some basic utilities for backpropagation and training. Apart from these tasks, it can also be used for generic computations such as dot product, matrix multiplication, linear transformations, and weighted sum/average. Even single FP32 operations (e.g. add, multiply) are supported for generality. More than 14 commands in three categories are implemented: streaming (e.g. \textit{STREAM\_MAC}, \textit{STREAM\_SUM}, \textit{STREAM\_MAX}), single (e.g. \textit{SINGLE\_ADD}, \textit{SINGLE\_MUL}), and memory commands (for configuration and memory transfers to/from the accumulator). \autoref{prog-nst} describes how NSTs can be programmed to do various computations.
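To illustrate how the three hardware loops and two AGUs cooperate in a streaming command, the following Python sketch emulates a \textit{STREAM\_MAC}-style operation over a shared SPM (the configuration tuple layout is hypothetical, not the actual NST register interface):

```python
def stream_mac(spm, acc, loops, agu0, agu1):
    """Emulate an NST-like STREAM_MAC: three nested hardware loops drive two
    address generators; each iteration fetches two SPM operands and MACs
    them into the single accumulator.
    loops = (n0, n1, n2); each AGU is (base, stride0, stride1, stride2)."""
    n0, n1, n2 = loops
    for i0 in range(n0):
        for i1 in range(n1):
            for i2 in range(n2):
                a0 = agu0[0] + i0 * agu0[1] + i1 * agu0[2] + i2 * agu0[3]
                a1 = agu1[0] + i0 * agu1[1] + i1 * agu1[2] + i2 * agu1[3]
                acc += spm[a0] * spm[a1]  # ideally one MAC per cycle
    return acc
```

A dot product, a row of a matrix multiplication, or a strided convolution differ only in the loop counts and AGU strides, which is what makes a single scalar coprocessor flexible across these kernels.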
\section{Computation Model} \label{comp-model}
When a ConvNet such as GoogLeNet is selected for execution over our PIM system, first it is tiled using the 4D-tiling mechanism described in \autoref{4dtile}. This procedure prepares it for parallel execution over the clusters, and optimally partitions it to achieve the highest efficiency under given constraints such as on-die SPM and DRAM bandwidth usage.
Next, all coefficients are loaded in SMC's DRAM and an additional space is reserved there for the intermediate results of the largest layer (shown previously in \autoref{tbl:storage}). The input volume (e.g. the image or video frame) is loaded into this area before each run. The actual execution takes place layer-by-layer, each layer being parallelized over 16 clusters. Each cluster executes one 4D-tile at a time with all its NSTs working cooperatively to compute its final result inside the cluster's SPM. Only at the end of each layer, the clusters are synchronized. %
A more detailed description follows in \autoref{4dtile} and \autoref{mapping}.
\subsection{4D-Tiling Mechanism} \label{4dtile}
\begin{figure}[!t] %
\centering
\includegraphics[width=0.85\columnwidth]{tile4d}
\caption{(a) Illustration of a general ConvNet, (b) a 4D-tile, (c) row-major data layout and tile-overlapping, (d) partial computation of tiles, and (e,f) the underlying DRAM storage of one augmented-tile.}
\label{fig:tile4d}
\end{figure}
A 4D-tile (illustrated in \autoref{fig:tile4d}a,b) is a subset of the input volume (called Input-tile) and output volume (Output-tile) of a convolutional layer (\textit{l}) identified by the ($T_{Xi}^{(l)}$, $T_{Yi}^{(l)}$, $T_{Ci}^{(l)}$, $T_{Co}^{(l)}$) tuple. $T_{Xi}^{(l)}$ and $T_{Yi}^{(l)}$ are the tile width and height of the input volume of layer ${l}$, and $T_{Ci}^{(l)}$ and $T_{Co}^{(l)}$ are the numbers of input and output channels to the tile. The output dimensions of each tile are calculated directly from input width and height, filter dimensions, striding, and zero-padding parameters.
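The output dimensions mentioned above follow the standard convolution arithmetic (a small sketch; the function name is ours):

```python
def conv_out_dim(n_in, k, stride=1, pad=0):
    """Output width/height of a convolution layer:
    floor((n_in + 2*pad - k) / stride) + 1."""
    return (n_in + 2 * pad - k) // stride + 1

# e.g. the output tile width of a tile with input width T_Xi:
#   T_Xo = conv_out_dim(T_Xi, K, stride, pad)
```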
4D-tiles have three main features essential for near-memory acceleration of deep ConvNets:
\textbf{Row-major data layout:} With a conventional tile-oblivious layout, data is fragmented in DRAM, so several DMA transfers are required to fetch one tile. Even a DMA engine with striding capabilities does not help with the inefficiency caused by opening a DRAM row under a closed-page policy \cite{HMCSTANDARD} and reading only parts of it in strides.
To address this problem, we modify the underlying storage of the intermediate layers in DRAM to a row-major form (illustrated in \autoref{fig:tile4d}c,e). This way with a single large DMA transfer request, the whole tile can be fetched by the processing cluster. This improves DRAM's read performance which can be exploited as described below. The implications of this mechanism on DMA write and its overheads will be explained later in this section.
\textbf{Tile overlapping:} When the convolution filters are larger than 1$\times$1, the borders of each tile's adjacent tiles must be fetched from DRAM as well. Even assuming that the bandwidth overhead of these overlapping regions can be tolerated by a proper choice of tile dimensions, the storage impact on the row-major data placement in DRAM is not trivial, and fragmented DMA transfers would be required to fetch the overlaps.
This problem can be solved by storing the overlapping regions in the DRAM once per each tile. This means storing the ``augmented-tiles'' (shown in \autoref{fig:tile4d}c) instead of ``raw-tiles'' inside DRAM in a row-major form, at the cost of increased DRAM storage and bandwidth. When reading from DRAM, a complete tile (including all overlapping regions required to compute the convolution in its borders) can be fetched using a single DMA transfer request. But, when writing the results back to the DRAM some care should be taken to convert the raw output tile to an augmented-tile for the next layer (explained below).
The average increases in DRAM bandwidth and storage incurred by this mechanism were found to be less than 10\% and 3\%, respectively. Also, on average around 200\,MB of DRAM was used, with a maximum of 580\,MB for ResNet with 4M-pixel images. %
\textbf{Partial Computations:} Tiling of channels ($T_{Ci}^{(l)}$ and $T_{Co}^{(l)}$) requires maintaining partial computations, as more than one input tile contributes to the result of each output tile. Assuming that one input tile and one output tile can fit in each cluster's SPM, we perform the following steps to compute each output tile:
Tile $M$ (see \autoref{fig:tile4d}d) and the related filter coefficients ($K_{MQ}$) are fetched from the DRAM. Then, $Q = Q + M*K_{MQ}$ is computed inside the SPM ($Q$ containing partial sums of the output channels). Next, tile $N$ and $K_{NQ}$ are fetched from the DRAM, and $Q = Q + N*K_{NQ}$ is computed, and so forth. After all input tiles have been read once, activation and pooling are performed directly on the output tile $Q$ (again inside the SPM), and then $Q$ is written back to the DRAM by the associated PE. This mechanism reduces DRAM's write bandwidth and shifts the pressure to read bandwidth: data is written back only once after several DRAM reads, and the reduction operations (pooling, strided convolution) further decrease the number of DRAM writes relative to reads.
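The per-output-tile schedule above can be summarized as follows (a scalar Python stand-in, where each `tile' and `filter' is reduced to a single number and the DMA/NST operations are implied by the loop structure):

```python
def compute_output_tile(input_tiles, filters, activate, pool):
    """Accumulate partial sums over all input tiles contributing to one
    output tile Q, then apply activation and pooling in place.
    Scalar stand-ins: each 'tile' here is just a number."""
    Q = 0.0
    for M, K_MQ in zip(input_tiles, filters):  # Q = Q + M * K_MQ per input tile
        Q += M * K_MQ                          # fetched by DMA, MACed by NSTs
    return pool(activate(Q))                   # ACT and POOL directly in SPM
```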
In all experiments of this paper, DRAM's write bandwidth was found to be less than 4\% of the read bandwidth. This suits our row-major data layout, requiring DRAM writes to be off the execution critical path.
It is important to explain how the raw output tile of one layer ($l$) is converted to an augmented tile for the next layer ($l+1$), given that data cannot be ``magically'' reorganized in the DRAM. Looking at $T_0$ in \autoref{fig:tile4d}e, we can see that it has 4 regions ($raw$, $A$, $B$, $C$). The $raw$ region of $T_0^{l+1}$ is written to DRAM using multiple fragmented DMA writes when $T_0^{l}$ is computed in SPM. This is shown in \autoref{fig:tile4d}f. The $A$, $B$, and $C$ regions of $T_0^{l+1}$ are written to DRAM after $T_1^{l}$, $T_3^{l}$, and $T_4^{l}$ are computed, respectively, using small DMA chunks shown in \autoref{fig:tile4d}f. Zero-padding is also properly handled at this stage for the corner tiles. Since DRAM writes are off the critical path, we can afford to perform these conversions, without incurring significant overheads.
Another key point is that the raw-tile width and height of consecutive layers must be equal (for a consistent row-major data layout) unless there is a strided convolution \cite{DLBOOK} or pooling stage between them, in which case the tile dimensions shrink. This way, as we move forward through the ConvNet layers, the tile width and height ($T_{Xi}^{(l)}$, $T_{Yi}^{(l)}$) tend to shrink. To avoid a negative impact on computation and SPM usage efficiency, we need to increase $T_{Co}^{(l)}$ or $T_{Ci}^{(l)}$. This completely changes the shape and number of tiles in each layer and affects everything from synchronization overheads to the efficiency of the computing loops and DRAM bandwidth.
This highlights the need for a flexible computing cluster to support a wide range of tile dimensions.
\subsection{Mapping Tiles to Clusters} \label{mapping}
Since there is no data overlap among augmented-tiles (except possibly for some filter coefficients), each cluster can execute one tile at a time. This minimizes communication among the clusters. Also, tiling information is prepared off-line (only once) and is stored in a list accessible by all clusters in DRAM.
The master PE (the first PE in each cluster) consults this list to obtain the required information (e.g. address in DRAM, size, and filter coefficients) for the next tile. Then it issues a DMA read to fetch the new tile.
Each cluster works based on ping-pong buffering to hide the setup and DMA transfer latencies. While one tile is being computed by the NSTs in the cluster, another tile is fetched by the master PE and tiling information is prepared for it.
This procedure continues until all tiles in a layer are finished. At this point, all clusters are synchronized before proceeding with the next layer.
Inside each cluster the master PE partitions the tile among the NSTs in the order of $T_{Xo}^{(l)}$, $T_{Yo}^{(l)}$, and $T_{Co}^{(l)}$ dimensions first. This is to ensure that each output is written exactly by one NST, and to remove synchronization requirements among the NSTs.
If still more NSTs are remaining (e.g. for small corner tiles), $T_{Ci}^{(l)}$ is used for tile partitioning, posing some synchronization overheads to the PEs. Therefore, corner tiles (with smaller dimensions) and arbitrarily sized-tiles are properly handled in this scheme.
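The partitioning order above can be sketched as follows (a hypothetical helper; the function and variable names are ours, not the actual runtime's API):

```python
import math

# Sketch of how the master PE partitions one tile's output points among
# the NSTs: the (x, y, co) dimensions are unrolled first, and jobs are
# issued in batches of n_nst so that each output point is written by
# exactly one NST (no inter-NST synchronization needed).
def partition_tile(t_xo, t_yo, t_co, n_nst):
    jobs = [(x, y, c)
            for c in range(t_co)
            for y in range(t_yo)
            for x in range(t_xo)]
    n_batches = math.ceil(len(jobs) / n_nst)
    return [jobs[b * n_nst:(b + 1) * n_nst] for b in range(n_batches)]
```

When the job count is not a multiple of the NST count (e.g., small corner tiles), the last batch is simply shorter, leaving some NSTs idle for that batch.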
Thanks to this tile-mapping mechanism, NSTs can work independently without worrying about getting synchronized with each other. Any required synchronization is handled by the RISC-V PEs, through hardware primitives devised for this purpose.
Given that ($X_{o} \times Y_{o} \times K_{x} \times K_{y} \times C_{i} \times C_{o}$) MAC operations need to be done in each layer, 4D-tiling can be viewed as a schedule (in time and space) of this computation to the available resources in NeuroCluster.
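For reference, the per-layer MAC count follows directly from the product above:

```python
# Total MAC operations required by one convolutional layer, following the
# (Xo x Yo x Kx x Ky x Ci x Co) product given in the text.
def layer_macs(x_o, y_o, k_x, k_y, c_i, c_o):
    return x_o * y_o * k_x * k_y * c_i * c_o
```

For example, a 3x3 convolution producing a 56x56 output over 64 input and 64 output channels requires layer\_macs(56, 56, 3, 3, 64, 64) = 115,605,504 MACs, all of which 4D-tiling schedules across the clusters and NSTs.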
Overall, the computation inside each SMC is done in a self-contained manner, without synchronizing with the host processors. The user only offloads a ConvNet task to the SMC, and the rest of the computation happens completely inside the cube. The serial-links are turned-off when not required to save energy. The performance and energy advantages of this scheme are studied in \autoref{multismc}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{conv-pattern}
\caption{(a) A convolution kernel to be performed on a 4D-tile, (b) a typical memory access pattern generated by this kernel.}
\label{fig:conv-pattern}
\end{figure}
\section{Programming Model} \label{prog-model}
Previously in \cite{ERFANARCS16}, a complete software stack (API, Device Driver) had been developed for a single-processor PIM device residing on the SMC, exposing it to the user-level applications. This software stack is available online for reuse and modification \cite{SMCSIM}.
An optimized memory virtualization scheme was developed in \cite{ERFANARCS16}, as well, for zero-copy data sharing between host and PIM, allowing PIM to directly access user-space virtual memory without costly memory copies.
In this paper, we have adopted this software stack and extended it to support NeuroCluster, a parallel-processing platform rather than a single core. It has also been amended to support DL primitives and offloading of ConvNet tasks. The memory virtualization scheme has been adopted from \cite{ERFANARCS16} as well.
As a demonstrative example, suppose that the user application wants to execute GoogLeNet on PIM for an image already stored in DRAM. After initializing PIM's API, it uses this API to offload the precompiled computation kernels, including the computing loops for the ConvNet layers (e.g., CONV, ACT, and POOL), to NeuroCluster.
This procedure is done only once. Next, the pointer to the image is passed to the API, and a special table called slice-table (a generalized form of page-table) is built for the data structures, by the driver, and stored in DRAM. The user then triggers the actual execution through the API and waits for the task to complete. The RISC-V cores work directly on the virtual memory and consult the slice-table whenever a miss occurs in their TLB.
The offloading overheads have been previously shown to be negligible in \cite{ERFANARCS16}. Also, in case of having several video frames instead of images, the same pointers can be reused in double/multi-buffering modes to avoid the need for rebuilding the slice-table upon every execution. More details on the software stack and the memory virtualization scheme can be found in \cite{ERFANARCS16}.
\autoref{prog-nst} describes how NSTs are programmed by the PEs to perform the tasks related to inference in ConvNets. \autoref{training} presents the implications of supporting training.
\subsection{Inference with NSTs} \label{prog-nst}
\autoref{fig:conv-pattern}a illustrates a convolution kernel to be performed on a 4D-tile. The data-structure \textit{tileinfo} contains the required partitioning information for the given tile among the NSTs. When the number of total jobs ($T_{Xo} \times T_{Yo} \times T_{Co}$) is more than $N_{NST}$, the jobs are broken into several batches (\textit{NUM\_BATCHES}). The flexibility provided by \textit{tileinfo} allows us to reduce the number of convolution loops from 6 down to 4. \textit{filters} and \textit{tile\_in} are the two data structures accessed in every iteration of the inner loop. Typical memory access patterns for this kernel are plotted in \autoref{fig:conv-pattern}b. These patterns are fairly regular; therefore, the NSTs can easily generate them as well.
It is enough to program the configuration registers of an NST with the starting address and the three step values illustrated in \autoref{fig:conv-pattern}b ($S_0$, $S_1$, and $S_2$), and then issue a \textit{STREAM\_MAC} command to it. This way, the three inner loops of \autoref{fig:conv-pattern}a can be replaced by execution in hardware. This is illustrated in \autoref{fig:stream-mac}a. The latency overheads of these commands are hidden by having multiple NSTs and by filling up their command queues with multiple commands. %
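The address stream produced for such a command can be modeled as three nested hardware loops over the programmed step values (a behavioral sketch; the parameter names are illustrative, not the actual register names):

```python
# Behavioral model of 3-level address generation behind STREAM_MAC:
# an inner stride s0 iterated n0 times, wrapped by strides s1 and s2.
def stream_addresses(base, steps, counts):
    s0, s1, s2 = steps
    n0, n1, n2 = counts
    return [base + i2 * s2 + i1 * s1 + i0 * s0
            for i2 in range(n2)      # outermost hardware loop
            for i1 in range(n1)
            for i0 in range(n0)]     # innermost hardware loop
```

For instance, with base 0x100, steps (1, 8, 64), and counts (2, 2, 1), the generated stream is 0x100, 0x101, 0x108, 0x109 -- two neighboring pixels in two consecutive rows of a tile whose row pitch is 8 words.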
The implementation of \textit{STREAM\_MAC} inside the NSTs is depicted in \autoref{fig:stream-mac}b. This is hard-coded in the main controller of the NSTs and is executed efficiently, without losing any cycles (See \autoref{perf} for results).
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{stream-mac}
\caption{(a) Using NSTs to accelerate the loop shown in \autoref{fig:conv-pattern}a, (b) the pseudo-code for implementation of \textit{STREAM\_MAC} inside the NSTs.}
\label{fig:stream-mac}
\end{figure}
Similarly, each NST is able to perform ReLU activation on arbitrary tiles using \textit{STREAM\_MAX} command devised for this purpose on the same set of state machines and hardware blocks. For the sake of generality, \textit{STREAM\_SUM}, \textit{STREAM\_SCALE}, \textit{STREAM\_SHIFT}, and \textit{STREAM\_MIN} are implemented, as well.
Another widely used operation in ConvNets is pooling \cite{CONVNET-TAXONOMY}. NST supports max-pooling \cite{POOLING} through the \textit{STREAM\_MAXPL} command. Thanks to the flexibility of the AGUs and HWLs, arbitrary tiles with different strides are supported. %
Finally, FC layers can also be implemented using a set of STREAM\_MAC commands, similar to the CONV layers. The CLASS layer, however, is executed on the PEs in the current implementation using the \textit{SoftFloat} library \cite{SOFTFLOAT}.
\subsection{Implications of Training} \label{training}
Backpropagation is the prevalent method for training NNs including ConvNets \cite{DLBOOK}. Given a set of training sample inputs, first a forward propagation is executed layer-by-layer, then using an optimization algorithm, such as gradient descent (GD) \cite{CONVNET-TAXONOMY}, the coefficients (weights) are updated backwards so that the network learns that sample. A modern training algorithm based on GD has three phases \cite{DLBOOK}: (1) Forward Pass, (2) Gradient calculation and routing, and (3) Weight update.
In step (1), the selected input (e.g. an image) is fed to the network and the outputs of all layers including the value of the loss function are calculated.
This is similar to a normal inference pass, except that additional information about the current operating point (e.g., max-pool decisions) in all layers has to be stored, such that it can be retrieved later on for gradient calculation.
This can be easily handled by our platform because plenty of DRAM is available to the NeuroClusters through a high-bandwidth and low-latency 3D interface. For example, ResNet-152 requires 211\,MB for its coefficients and a total of 161\,MB for all its layers. This aggregates to a total of 372\,MB of DRAM storage. %
Another difference with inference is that the POOL layer must keep track of which inputs were maximal in the pooling operation. This is called \textit{argmax}, and since only simple comparisons are involved, we use the RISC-V cores for it in this implementation.
In step (2), starting from the final stage of the network (the classification layer), the gradients of the loss function are calculated with respect to the inputs ($D_X$) and to the weights ($D_W$) and propagated backwards towards the input layers. For the FC layers,
$D_X = W^T \cdot D_Y$ and $D_W = D_Y \cdot X^T$, and for the CONV layers, $D_X = D_Y * W^T$ and $D_W = X * {D_Y}^T$; these can be calculated entirely on the NSTs using a series of STREAM\_MAC operations ($D_Y$ is the output gradient of each layer, which is propagated backward to the input $X$, and $T$ stands for matrix transpose).
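As a minimal reference sketch (the standard dense formulation for $Y = W \cdot X$, not the NST microcode itself), the FC-layer gradients reduce to the dot products that map naturally onto \textit{STREAM\_MAC}:

```python
# Standard FC backward pass for Y = W.X (dense reference sketch):
# dX = W^T . dY (dot products) and dW = dY . X^T (outer product).
def fc_backward(W, X, dY):
    rows, cols = len(W), len(W[0])
    dX = [sum(W[r][c] * dY[r] for r in range(rows)) for c in range(cols)]
    dW = [[dY[r] * X[c] for c in range(cols)] for r in range(rows)]
    return dX, dW
```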
The ACT layer only propagates the gradients back ($D_{Xi} = X_i \geq 0 ?\,D_{Yi} : 0$). This operation is not currently supported by the NSTs, but since only comparisons with zero are involved, the integer datapath of the RISC-V cores is used. The POOL layer similarly propagates the gradients back with a matrix scatter operation \cite{DLBOOK}, populating a sparse matrix without performing any actual computation. Again, this operation is implemented on the RISC-V cores in the current version.
SoftMax (CLASS) is calculated similarly to the forward-pass on the RISC-V cores.
Finally, in step (3), the weights are updated with either fixed or adaptive step sizes ($\alpha$ or $\alpha_i$, respectively): $W_i = W_i - \alpha(dW_i + \lambda W_i)$. This procedure is repeated iteratively for all variations of the GD algorithm (e.g., Stochastic GD, Batch GD) \cite{DLBOOK}. A fixed-step implementation of this formula is currently supported by the NSTs, while adaptive steps need to be calculated by the PEs, once per backward pass. An estimation of the performance of training on SMC is presented in \autoref{perf}.
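The fixed-step update supported by the NSTs amounts to an elementwise streaming operation; a minimal sketch:

```python
# Fixed-step weight update with L2 regularization, streamed elementwise:
# W_i = W_i - alpha * (dW_i + lambda * W_i)
def update_weights(W, dW, alpha, lam):
    return [w - alpha * (dw + lam * w) for w, dw in zip(W, dW)]
```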
\section{Experimental Results} \label{exp}
Our baseline system is composed of a memory-centric network \cite{MEMCENTRIC} of four SMC devices based on a mesh topology. Each SMC hosts a NeuroCluster with 16 clusters on its LoB die, with each cluster having 4 RISC-V cores (each with a 1\,kB private instruction cache), 8 NSTs, a DMA engine, and 128\,kB of SPM. This configuration was found through several simulations to achieve a good balance of performance and efficiency. The total available DRAM is 1\,GB in 4 stacked dies, with DRAM banks of 32\,MB and a closed-page policy \cite{ERFANTVLSI16}. Low-interleaved addressing is implemented as the HMC's default addressing scheme \cite{HMCSTANDARD}. A summary of these parameters is also listed on page \pageref{fig:sch}.
A fully functional and cycle-accurate (CA) RTL model of the NeuroCluster has been modeled in SystemVerilog, with the components adopted and reconfigured from \cite{PULP}.
This model along with a previously developed cycle-accurate model of the SMC \cite{ERFANTVLSI16} allows us to analyze the performance of tiled execution over a single SMC device considering the programming overheads.
Silicon area and power consumption are also extracted from these models using topographical logic synthesis (See \autoref{area-power}). %
In addition, an epoch-based in-house simulator is developed (modeling the SMC network shown on page \pageref{fig:sch}) to estimate the performance and power consumption of executing full ConvNets on large images, based on the data obtained from the CA simulations.
This strategy allows us to obtain both reasonable accuracy and very high simulation speed.
Our simulation platform supports CAFFE \cite{CAFFE} representation of the SoA ConvNets. For every layer of the ConvNets under study, the optimum tile dimensions are found based on performance, energy efficiency, available SPM size, and required DRAM bandwidth. This procedure requires multiple simulations with different combinations of parameters and is only done once per each ConvNet (at the beginning). Optimally sized tiles can then be used in later simulations with different images.
Four serial-link controllers on LoB are modeled to consume up to 10\,W of power under the highest traffic pressure \cite{ERFANARCS16}\cite{HMC}. This 10\,W power budget can be shared between the serial-link controllers and NeuroCluster; for example, turning one link off frees a 2.5\,W budget for NeuroCluster, allowing it to operate in the ``shadow'' of a powered-down serial link.
Performance is studied in \autoref{perf}. Detailed energy consumption and silicon area results are presented in \autoref{area-power}. Finally, the overall results of the multi-SMC network are presented in \autoref{multismc}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\columnwidth]{g2}
\caption{The effect of SPM's banking-factor on the performance efficiency of a single cluster, executing tiled convolutions with 1x1, 2x2, and 3x3 filters over average tiles of the studied ConvNets.}
\label{fig:g2}
\end{figure}
\subsection{Performance of Single SMC} \label{perf}
The average performance efficiency (actual/peak performance) of a single cluster measured in CA simulation is illustrated in \autoref{fig:g2}, where the cluster is executing tiled convolution on its NSTs with 1x1, 2x2, and 3x3 filters over tiles with average dimensions of the studied ConvNets, listed in \autoref{fig:ht10}.
We define performance efficiency ($PEF$) as follows:
\begin{small}
\[ PEF = \frac{\#MACs_{total}}{\#Cycles \times N_{NST}} \times 100\% \]
\end{small}
$\#MACs_{total}$ is the total number of MACs performed by all NSTs, and $\#Cycles$ is the total number of execution cycles. $PEF$ indicates how efficiently the NSTs are utilized.
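A direct transcription of this definition, with a hypothetical numeric example:

```python
# Performance efficiency: percentage of peak MAC throughput achieved by
# the NSTs (100% means every NST retires one MAC per cycle).
def pef(total_macs, cycles, n_nst):
    return 100.0 * total_macs / (cycles * n_nst)
```

For example, 8 NSTs completing 800 MACs in 125 cycles give pef(800, 125, 8) = 80.0\%.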
The banking-factor\footnote{The banking-factor is the ratio between the number of SPM banks and the number of master ports (from the NSTs).
In WLI memories, this parameter directly affects the rate of bank-conflicts inside the SPM and has a critical impact on the clock frequency and area of the cluster interconnect. More information: \cite{ERFANIET}.} (BF) of the SPM is changed from 1/4 to 4 (i.e., from 4 to 64 banks). On average, BF=2 yields an efficiency of over 93\% for the execution of a single tile. This is why the baseline clusters shown in \autoref{fig:sch} have 32 SPM banks each, in a WLI organization.
Another point is that a traditional bank-level interleaved (BLI) SPM needs to be explicitly managed and partitioned by software, and its performance is highly dependent on the tile dimensions. A significant number of bank-conflicts can occur if it is not properly managed. Also, with BLI, only half of the banks can be used for computation (because of ping-pong buffering), further reducing the bandwidth.
In this paper, we use WLI because of its flexibility and high parallelism regardless of the tile dimensions. %
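The difference between the two organizations comes down to the bank-index function (a sketch assuming word-granularity addressing; parameter names are ours):

```python
# Bank selection for a word address 'addr' across n_banks SPM banks.
def wli_bank(addr, n_banks):
    # word-level interleaving: consecutive words hit consecutive banks,
    # so a dense stream spreads evenly across all banks
    return addr % n_banks

def bli_bank(addr, n_banks, words_per_bank):
    # bank-level interleaving: each bank owns a contiguous region, so
    # software must place data carefully to avoid conflicts
    return (addr // words_per_bank) % n_banks
```

Under WLI, a dense stream of consecutive words spreads across all banks regardless of the tile dimensions, which is why its performance is insensitive to tile shape.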
One last point to observe in \autoref{fig:g2} is that the execution efficiency decreases as the convolution filters shrink (3x3\,\textgreater\,2x2\,\textgreater\,1x1). This is a common trend in modern ConvNets and is investigated later in this section.
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{ht10}
\caption{Roofline plot for execution of ConvNets over a single SMC, %
and the actual tile dimension statistics for different ConvNets.}
\label{fig:ht10}
\end{figure}
\autoref{fig:ht10} illustrates the roofline plot \cite{ROOFLINE} for complete execution of the ConvNets listed in \autoref{tbl:storage} on a single SMC, along with the tile dimension statistics, where OI denotes operational intensity (previously defined in \autoref{related-conv}).
For each ConvNet, different OI ratios are achieved by altering the tile channel ratios of all layers ($R_{TCL}$ = $T_{Co}^{(l)}$/$T_{Ci}^{(l)}$, directly proportional to OI). In this experiment, $T_X$ and $T_Y$ of the channels are kept constant.
The left axis shows the achievable GFLOPS, and the right axis shows the average DRAM bandwidth.
The synthetic extensions to ResNet (250K$\sim$1M) have been omitted from this plot as they behaved similarly to the rest of the ResNet group.
This plot highlights the importance of proper tile sizing for the delivered performance, as different ConvNets have different optimal points. Also, increasing $R_{TCL}$ too much can negatively impact performance, because the initialization overhead of the NSTs is highly dependent on this parameter, especially for ResNet with its high percentage of 1x1 filters (as explained later in this section).
We also observed that the bandwidth demand varies widely across different ConvNet layers.
For this reason, we have dedicated 3 high-performance AXI ports (each delivering 32GB/sec) to connect the cluster to the main SMC interconnect (See \autoref{fig:sch}). This is to smoothly adapt to different bandwidth demands with minimal impact on the delivered performance.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\columnwidth]{ht7-8}
\caption{(a) Performance comparison of different ConvNets on a single SMC device, and (b) breakdown of different overheads contributing to performance loss.}
\label{fig:ht7-8}
\end{figure}
Having found the optimum tile dimensions, \autoref{fig:ht7-8}a depicts the overall performance (GFLOPS) and total execution time for the studied ConvNets on a single cube. Among the studied ConvNets from the SoA, VGG networks have the highest execution time due to their higher requirement for MAC operations, while GoogLeNet and AlexNet are the fastest ones executing in less than 12\,ms. GoogLeNet has a very low computation requirement (less than 2\,GMAC for each forward pass) compared to the other modern networks (ResNet and VGG) mainly due to the use of strided convolution in the beginning layers.
It can be further seen that the ResNet group achieves the lowest performance efficiency. This can be associated with higher percentage of SPM conflicts illustrated in \autoref{fig:ht7-8}b.
In this figure, four main sources of performance loss are identified as: $T_L$: Loop overheads, $T_S$: Cluster Synchronization, $T_B$: Bandwidth limit, $T_C$: SPM Conflicts.
In the NeuroCluster architecture, the RISC-V cores are responsible for tile preparation, loop initialization, DMA setup, and synchronization with other clusters, while the NSTs are responsible for the actual computation on the SPM ($T_U$: Useful Computation). For this reason, the RISC-V cores account for ($T_L + T_S$) overhead cycles, while the NSTs account for ($T_C + T_B + T_U$) of the total execution time. Overall, for all studied ConvNets, less than 6\% of the total execution time was spent on the RISC-V cores; the rest was spent on the NSTs, either for useful computation or waiting for the memory system. Tile preparation and DMA setup are also handled by the RISC-V PEs; nevertheless, they are overlapped with execution on the NSTs, so they are hidden and do not contribute to the total execution time.
It is also worth noting that among the overheads shown in \autoref{fig:ht7-8}b, only the loop overheads are caused by the proposed tiling mechanism, and they account for less than 5\% of the performance loss.
To gain further insights from this plot, a breakdown of total execution time is depicted in \autoref{fig:ht5ht11}a versus the size of the convolution filters. %
As can be seen, a significant portion of the execution time (over 45\%) of the ResNet group is spent in 1x1 filters, and as we saw previously in \autoref{fig:g2}, 1x1 filters cause more SPM conflicts than larger filters. Moreover, for 1x1 filters the 3 convolution loops illustrated in \autoref{fig:conv-pattern}a collapse into a single loop iterating over $T_{Ci}$. This increases the relative overhead of NST initialization (shown in \autoref{fig:ht7-8}b).
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{ht5ht11}
\caption{(a) Breakdown of total execution time versus the size of the convolution filters. (b) Execution overheads (left-axis), and execution-time per pixel versus image size for ResNet-based networks (right-axis).}
\label{fig:ht5ht11}
\end{figure}
To demonstrate the scalability of the proposed PIM platform, the size of the input images is increased from 250K-pixels to 32M-pixels, and execution-time per pixel with execution overheads for ResNet-based synthetic networks are plotted in \autoref{fig:ht5ht11}b.
This plot clearly shows that the execution overheads do not increase even for very large images, and execution-time per pixel only increases slightly due to the increased number of layers.
This proves the effectiveness of the proposed 4D-tiling mechanism and the efficient computation paradigm based on ping-pong buffering to hide the latencies. %
To summarize, each SMC instance is capable of processing 126, 83, 34, 16, 11, 8, and 6 frames (each frame being 220$\times$220$\times$3 Bytes) per second for AlexNet, GoogLeNet, ResNet50, ResNet101, ResNet152, VGG16, and VGG19, respectively, with an average performance of 240\,GFLOPS. This performance is scalable to larger images, as described.
Lastly, an estimation of the training performance on SMC can be obtained by answering two questions: (I) How much additional computation does each training stage need compared to its corresponding stage in inference? (II) How much efficiency is lost due to the extra use of the RISC-V cores for computation?
For the forward pass, the only extra operation is the \textit{argmax} function in the POOL layer. For gradient routing, the FC and CONV layers require additional computation (on the NSTs), while ACT and CLASS need the same amount as inference. POOL also implements a different operation (on the RISC-V cores), as described in \autoref{training}. Finally, the weight-update phase works solely on the FC and CONV layers, and its overhead was found to be less than 5\% of the total execution time.
We have chosen GoogLeNet as a representative of the future ConvNets with strided convolutions, shrinking kernel coefficients, and complex topologies \cite{DLBOOK}.
For GoogLeNet, we have estimated the execution time of each kernel relative to its corresponding kernel in inference. We have then scaled the inference execution times with these results. \autoref{tbl:training} summarizes these estimates, where ``Training (Best)'' indicates the estimated execution time provided that the NSTs implement the additional required functions such as \textit{argmax} and vector multiply. This is not achievable in the current version and is planned as future work.
``Training (Current)'' is the estimated execution time with the current platform. It can be seen that one training pass takes 3.6X longer than one inference pass (almost 3X more in the best case).
This is reasonable and consistent with our previous measurements on GPUs \cite{LUKAS-DAC15}. Also, the amount of efficiency loss due to using RISC-Vs for part of the computation is 17\%. Overall, the training performance can be estimated as 197\,GFLOPS for GoogLeNet.
\begin{table}[!t]
\centering
\caption{Execution-time of training (ms) compared with inference for different layers of GoogLeNet.}
\includegraphics[width=0.75\columnwidth]{training}
\label{tbl:training}
\end{table}
\subsection{Silicon Area and Power Efficiency} \label{area-power}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{g1-nst-cluster}
\caption{(a) Area and (b) power breakdown inside one instance of NeuroStream, (c) Area and (d) power breakdown for one of the clusters shown in \autoref{fig:sch}, where the number of PEs has been changed from 2 to 8.}
\label{fig:g1-nst-cluster}
\end{figure}
A complete processing cluster was synthesized using Synopsys Design Compiler (J-2014.09-SP4) in the topographical mode in 28nm FDSOI technology of STMicroelectronics (1.0$V$, SS, 125$^{\circ}C$, Low Threshold Voltage Transistors), achieving a clock frequency of 1\,GHz.
The critical-path was inside the NST blocks where MAC is computed. Our current implementation uses discrete DesignWare IP components in IEEE compatible mode. This creates a timing critical loop which cannot be pipelined. As a future work, we plan to switch to fused carry-save MAC with a fast carry-save adder in the loop.
Power consumption was extracted using Synopsys Primetime (H-2013.06-SP2) at 25$^{\circ}C$, TT, 1.0$V$. The CA cluster model runs the tiled-convolution illustrated in \autoref{fig:conv-pattern}a on typical tiles (listed in \autoref{fig:ht10}) by offloading them to NSTs similar to \autoref{fig:stream-mac}a. Switching activity is then recorded and fed to Primetime for power extraction.
For the vault controllers and the SMC controller, previously developed models from \cite{ERFANTVLSI16} and \cite{ERFANARCS16} were used (all in 28\,nm FDSOI); the serial-link area and energy were estimated based on \cite{HMC}\cite{SERDES}.
5000 TSVs \cite{EXASCALE} with a pitch of 48\,$\mu m$ $\times$ 55\,$\mu m$ \cite{HBM} were used to estimate the TSV matrix area, with energy modeled from \cite{HMC}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\columnwidth]{ht131918}
\caption{(a) Effect of the SPM size per cluster (b) and the number of total clusters, on energy and area efficiency of the SMC. (c) Effect of technology scaling of the LoB die on the energy efficiency of a complete SMC device. Voltage is scaled to achieve different operating frequencies.}
\label{fig:ht131918}
\end{figure}
\autoref{fig:g1-nst-cluster}a,b illustrates the power and area breakdown inside one NST instance.
As expected, over two-thirds of the total power and area is dedicated to the streaming FPUs,
while the controller accounts for only 26\% of the total area and 13\% of the power consumption.
These simplified FSMs allow for a significant energy reduction compared to full-featured processors such as RISC-V (see below).
\autoref{fig:g1-nst-cluster}c,d illustrates the area and power breakdowns for a complete cluster, where the number of PEs is varied. Decreasing the number of RISC-V cores from 8 to 4 gives average power and area reductions of 20\% and 22\%, respectively, while going from 4 to 2 yields smaller reductions of 10\% and 13\%.
Given that fewer RISC-V cores increase the programming overheads (each core needs to manage more NSTs), we choose to have 4 cores per cluster. For the NSTs, 8 instances were found to be optimal; beyond that, the delay and area of the cluster interconnect limit the achievable frequency.
The optimal SPM size in terms of energy-efficiency was found to be 128kB per cluster. This is shown in \autoref{fig:ht131918}a, yielding 22.5 GFLOPS/W and 2.7\,GFLOPS/W/mm$^2$.
Thanks to the proposed tiling scheme, this choice does not affect the size of the supported ConvNets and images, and very large networks like \textit{4M} can be easily executed on this platform, regardless of the SPM size, as long as they are properly tiled.
Also, we found that deploying 16 clusters leads to a well-balanced design, as shown in \autoref{fig:ht131918}b. More clusters negatively impact the area-efficiency, yield diminishing energy returns, and can require major modifications to the standard HMC stack structure. This is explained at the end of this section.
The choice of a moderate operating frequency in our design is justified through the experiment shown in \autoref{fig:ht131918}c for different technology choices. The energy efficiency of a complete SMC device executing different ConvNets is estimated for the 28nm to 10nm FDSOI technologies at various operating points.
Interestingly, a moderate clock frequency of 1.2\,GHz achieves the highest efficiency, and increasing the clock speed beyond that is not beneficial. This is mainly due to the communication bound (DRAM latency and bandwidth) limiting the achievable performance. This choice also relieves us of thermal concerns: as in \cite{NEUROCUBE}, a 3D thermal simulation of the HMC with 4 stacked DRAM dies shows that a processing cluster clocked at up to 5\,GHz can be embedded in its LoB die, increasing the temperature to at most 76$^{\circ}C$, which is below the thermal limits of the HMC \cite{HMCSTANDARD}.
Power consumption is a secondary concern as well, because up to a 10\,W budget is available in the LoB die by turning off the serial links, and NeuroCluster consumes only 2.2\,W.
\begin{figure}[!t]
\centering
\includegraphics[width=0.88\columnwidth]{ht34}
\caption{(a) Total system power for NeuroCluster placed on the host side, (b) power breakdown inside NeuroCluster. (c) Silicon area of the whole LoB die and (d) one NeuroCluster.}
\label{fig:ht34}
\end{figure}
\autoref{fig:ht34}a depicts the different contributors to power consumption in the baseline configuration. \textit{Cube Power} represents the total power consumed inside a single SMC, averaged over the execution of all ConvNets under study. As can be seen, 11\,W is consumed inside the cube on average, of which the NeuroCluster is responsible for only 2.2\,W, while the rest (around 75\%) is consumed inside the DRAM dies, mostly due to refresh operations. For this reason, the average power does not vary much across the studied ConvNets (less than 10\%).
The power consumed in NeuroCluster is dominated by the SPM (51\%) and the NSTs (16\%), while the RISC-V cores only consume 14\% (See \autoref{fig:ht34}b). This highlights the efficiency of this architecture, minimizing the unnecessary power consumed in control-flow and dedicating it to the computations.
Each NST instance consumes an average of 2.7\,mW, while a RISC-V core consumes 2.2\,mW just for programming and coordinating the NSTs.
Floating-point arithmetic is responsible for only a small portion (less than 3\%) of the total power in our platform, so any improvement to it (e.g. reduced precision) is expected to yield only a marginal power reduction in the overall system.
For this reason, we have kept FP32 for generality and flexibility in supporting different workloads and focused on optimizing the rest of the system (especially the DRAM interface).
Overall, an energy efficiency of 22.5\,GFLOPS/W is achieved inside one SMC for the execution of complete ConvNets (NeuroCluster itself achieves 117\,GFLOPS/W).
One interesting observation is that if we place the NeuroCluster accelerator on the host side (behind the SMC controller and the serial links) rather than inside the SMC, while maintaining exactly the same computation paradigm, the total execution time and the power consumed in the NeuroCluster itself do not change much; on average, however, system power increases by 10.2\,W. This power is consumed in the SMC controller and the serial links, suggesting that computing inside the SMC gives an average energy reduction of 48\% compared to a similar host-side accelerator (a 1.7X improvement in system-level energy efficiency).
Another downside of the host-side accelerators is that they require more buffering to deal with the higher memory access latency and to maintain a constant bandwidth (Little's law).
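The buffering requirement follows directly from Little's law: the data in flight must cover bandwidth times round-trip latency. A minimal sketch, with illustrative latency figures (assumptions for illustration, not measurements from this work):

```python
def min_buffer_bytes(bandwidth_Bps, latency_s):
    """Little's law: bytes in flight needed to sustain full bandwidth."""
    return bandwidth_Bps * latency_s

link_bw = 16e9          # 16 GB/s serial link bandwidth
near_latency = 50e-9    # assumed in-cube DRAM access latency, ~50 ns
far_latency = 500e-9    # assumed host-side latency through controller + links

print(min_buffer_bytes(link_bw, near_latency))  # 800.0 bytes
print(min_buffer_bytes(link_bw, far_latency))   # 8000.0 bytes
```

Under these assumed latencies, the host-side accelerator needs roughly an order of magnitude more buffering to keep the same bandwidth.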
Finally, looking at the silicon area results in \autoref{fig:ht34}c,d, we can see that the NeuroCluster (8.3\,$mm^{2}$) occupies around 8\% of the area in the LoB die, with the SPM (55\% of NeuroCluster), the RISC-V cores (16.5\%), and the NSTs (16\%) being its main contributors. The total area of the LoB die was estimated as 99\,$mm^{2}$.
It is worth mentioning that the LoB die of HMC is already larger than its DRAM dies \cite{HMC}, and it is occupied by four serial link controllers ($\sim55mm^2$), 32 vault controllers ($\sim25 mm^2$), and the TSV matrix ($\sim13mm^2$), with almost no free space available in it. Any major addition to this die requires modification of the 3D stack structure and power delivery network. This is why we have tried to keep the area increase to a maximum of 3\% in each dimension.
The optimal parameters found in this section were listed in \autoref{intro} and used in all experiments.
\subsection{The Multi-SMC Network} \label{multismc}
This section presents the estimated performance and energy efficiency for the SMC network previously shown on page \pageref{fig:sch}.
Four SMC devices are connected to each other in a mesh topology, with an HD camera recording 8\,M-pixel raw images.
ResNet has been chosen as the most difficult ConvNet to accelerate among the studied ones, due to its significant percentage of 1x1 and 3x3 kernels (more than 95\%) and its very large number of layers (more than 100).
The camera sends the images to the memory cubes over the highlighted links in \autoref{fig:sch}a, and each SMC executes ResNet on one complete frame, independently from the other cubes. This ensures minimum communication among the cubes and allows for turning off the serial-links for a large portion of the time. Each SMC has a copy of the ConvNet coefficients inside its DRAM dies, and the coefficients have been preloaded once at the beginning.
The host system-on-chip (SoC) is only responsible for coordination and receiving the results. It does not send or receive data at a high bandwidth, yet we keep its serial link (Link0) always active, to make sure it can manage the other devices through that link.
The other serial links, however, are turned on only when there is data to send over them, and are then turned off again.
Considering a typical bandwidth of 16\,GB/sec for each serial link, and the power-state transition times obtained from the HMC specifications V2.1 \cite{HMCSTANDARD} [Active to Sleep: $600ns\,(t_{SME})$,
Sleep to Power-Down: $150 \mu s\,(t_{SD})$,
and Power-Down to Active: $50 \mu s$], the total power consumed in the SMC network can be estimated as 42.8\,W.
The camera sends images to the cubes in a ping-pong fashion: while an SMC is working on one image, the camera sends another image to its DRAM. This is easily achievable because there is plenty of space available inside each SMC.
Our SMC network achieves 955\,GFLOPS at 42.8\,W. %
Moreover, this architecture is scalable to a larger number of nodes because the average bandwidth demand over the serial links is not large (in the order of 10\,MB/sec per image). Therefore it is possible to turn on a link, transfer the image at a higher bandwidth, and turn it off, periodically. This asynchronous and on-demand mechanism allows us to achieve a scalable performance with high energy efficiency.
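The quoted $\sim$10\,MB/sec average link demand is consistent with simple arithmetic on the frame traffic. A sketch, assuming 8-bit raw pixels and roughly one ResNet inference per 0.8\,s per cube (both figures are our own illustrative assumptions):

```python
pixels = 8e6            # 8 M-pixel camera frames
bytes_per_pixel = 1     # assumption: 8-bit raw pixels
fps_per_cube = 1.25     # assumption: one inference per 0.8 s per SMC

avg_bw = pixels * bytes_per_pixel * fps_per_cube  # bytes/s sent to one cube
link_bw = 16e9                                    # 16 GB/s peak serial link
duty_cycle = avg_bw / link_bw                     # fraction of time link must be on

print(avg_bw / 1e6)     # ~10 MB/s average demand
print(duty_cycle)       # ~0.0006: the link can stay off >99.9% of the time
```

This tiny duty cycle is what makes the on-demand link activation scheme effective.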
\begin{figure}[!t]
\centering
\includegraphics[width=0.88\columnwidth]{ht56}
\caption{Energy and performance efficiency of the SoA ConvNet implementations on GPUs with standard frameworks.}
\label{fig:ht56}
\end{figure}
To extend our comparison with the SoA GPUs, \autoref{fig:ht56} presents the energy and performance efficiency of some of the most recent GPU ConvNet implementations with standard frameworks.
The measurements on Tesla P100 have been performed in \cite{NVIDIA-WP17}. Geforce GTX780 and NVIDIA Tegra K1 have been directly used and measured in our group \cite{LUKAS-DAC15}. Tegra X1 has been used in \cite{NVIDIA-WP15}.
Measurements on Tesla K20, K40, Geforce GTX Titan, and GTX 770 have been performed by the Berkeley AI Lab \cite{CAFFE-GPU}. Finally, Geforce GTX480 and NVIDIA GT335m have been used in \cite{NEUFLOW-ASIC}.
A couple of points are interesting to observe in this plot. We saw previously on page \pageref{fig:ht7-8} that the SMC achieves an almost uniform performance for all studied ConvNets (more than 90\% efficiency), while the results of Tesla P100 show more variation. This is mainly thanks to the hardware implementation of nested loops in the NeuroStream computation paradigm, and to the availability of a large amount of DRAM at low latency in the SMC.
Furthermore, the SMC achieves about a 3.5X improvement compared with GPUs in similar technologies (e.g. GTX780, Tegra K1, Tesla K40). Compared to Tegra X1 in 20nm technology it achieves 2.7X, and compared to Tesla P100 (16nm) it achieves, on average, a similar energy efficiency for GoogLeNet and ResNet. For AlexNet, however, Tesla P100 achieves 1.5X better efficiency than our solution.
Another glance at the above plot reveals that ConvNets cannot easily exploit the maximum throughput offered by the GPUs. We can observe that even on the most powerful GPUs, the utilization of computing resources does not exceed 55\%, which is directly reflected in a lower energy efficiency.
In fact, they are optimized to perform general matrix multiplication (GEMM) operations, and ConvNets should be transformed into these forms for execution on the GPU platforms \cite{LUKAS-DAC15}. However, for modern ConvNets with growing memory footprints and non-uniform layers it is becoming more difficult (and wasteful in terms of memory footprint and bandwidth) to transform them into GEMM formats. This is in contrast with our proposal which executes ConvNets with more than 90\% performance efficiency for all studied ConvNets (See \autoref{perf}).
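The memory-footprint cost of this GEMM lowering (commonly called im2col) can be made concrete with a short sketch; the layer dimensions below are illustrative, not taken from a specific network in this paper:

```python
def im2col_blowup(C, H, W, K, stride=1, pad=0):
    """Ratio of im2col matrix elements to input tensor elements."""
    H_out = (H + 2 * pad - K) // stride + 1
    W_out = (W + 2 * pad - K) // stride + 1
    original = C * H * W                      # input feature map elements
    lowered = (C * K * K) * (H_out * W_out)   # lowered GEMM operand elements
    return lowered / original

# With 'same' padding, each input element is replicated roughly K*K times:
print(im2col_blowup(C=64, H=56, W=56, K=3, pad=1))  # 9.0
```

A 3x3 convolution thus inflates its input roughly ninefold in memory, which illustrates why the transformation becomes wasteful for deep, memory-hungry networks.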
Also, the aggregate bandwidth available in the multi-SMC scenario is much higher than what can be delivered to the host processors and accelerators. This makes our solution more scalable in comparison with high-performance FPGAs and GPUs.
\section{Conclusions and Ongoing Work} \label{con}
In this paper, we proposed a scalable and energy-efficient PIM system based on a network of multiple SMC devices.
Each SMC is augmented with a flexible clustered many-core called NeuroCluster, capable of executing deep ConvNets with growing memory footprints and computation requirements.
To this end, NeuroStream (a streaming FP32 coprocessor) is proposed, along with an efficient tiling mechanism and a scalable computation paradigm.
Our proposal increases the LoB die area of the standard HMC only by 8\%, and achieves an average performance of 240\,GFLOPS for complete execution of full-featured modern ConvNets within a power-budget of 2.5\,W.
An energy efficiency of 22.5\,GFLOPS/W is achieved in a single SMC (consuming 11\,W in total), which is 3.5X better than the best GPU implementations in similar technologies. The performance was shown to scale with a network of SMCs.
It is worth stressing that our platform allows for offloading the ConvNet tasks completely into the memory cubes at a minor system power cost. This implies that the compute logic in the host SoC is free to deal with other workloads. Also, the cost increase with respect to a baseline HMC system would be negligible. Therefore, essential ConvNet acceleration is provided at a very small system cost.
Ongoing research efforts include silicon implementation of the NeuroCluster with 5 clusters, parallel implementation of training on this architecture, and pushing further to achieve higher performance and efficiency inside the cubes (e.g. more advanced refresh and power management schemes to reduce the power in unused DRAM pages).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Recently, there has been great progress in the study of the correlation functions on the M2-branes.
The most important breakthrough is, of course, the proposal \cite{ABJM,HLLLP2,ABJ} that the world-volume theory of $\min(N_1,N_2)$ M2-branes and $|N_2-N_1|$ fractional M2-branes on the orbifold ${\mathbb C}^4/{\mathbb Z}_k$ is described by the ${\mathcal N}=6$ supersymmetric Chern-Simons theory with the gauge group U$(N_1)_k\times$U$(N_2)_{-k}$ and two pairs of bifundamental matters.
Then, due to the localization technique \cite{P,KWY}, the partition function and the vacuum expectation values of the half-BPS Wilson loop operator on $S^3$, which are originally defined by an infinite-dimensional path integral, are reduced to finite-dimensional matrix integrations.
It is interesting to observe that the matrix model has a hidden structure of the gauge symmetry in the supergroup U$(N_1|N_2)$ \cite{GW,DT,MPtop}.
Another interesting progress is the study of this matrix model.
After the study of the large $N$ limit in the 't Hooft expansion \cite{DMP1,HKPT,DMP2}, where the $N^{3/2}$ degrees of freedom of the M2-branes were reproduced, it was found that all of the perturbative corrections in the large $N$ limit sum up to the Airy function \cite{FHM,KEK}.
These studies further lead beautifully to an unexpected description of the Fermi gas formalism \cite{MP} where the partition function is reexpressed as that of a non-interacting Fermi gas system with a non-trivial one-particle Hamiltonian and the Chern-Simons level $k$ is identified as the Planck constant.
In the Fermi gas formalism, the behavior of the Airy function was reproduced in a few line computations, which immediately indicates the importance of the formalism.
The Fermi gas formalism was further used to study the non-perturbative effects in this matrix model with the WKB expansion \cite{MP,CM} and the exact values \cite{HMO1,PY,HMO2,HMO3}.
Finally, after combining these studies, it was proposed that the partition function \cite{HMMO} and the one-point function of the half-BPS Wilson loop operator \cite{HHMO} are respectively given by the free energy of the closed topological string theory and the open topological string theory on local ${\mathbb P}^1\times{\mathbb P}^1$.
The proposal was originally made for the case of equal ranks $N_2=N_1$ and later turned out to be valid in the rank deformation $N_2\ne N_1$ \cite{MM,HO} by generalizing the Fermi gas formalism.
(See \cite{PTEP,Marino} for reviews.)
One of the generalizations is called open string formalism \cite{HHMO,MM} and the other is called closed string formalism \cite{AHS,H,HO,Hosp,MS2,MN5,KM}.
There are several natural questions related to these developments.
First, since so far we have only considered the partition function or the one-point function of the half-BPS Wilson loop operator, it is natural to ask whether we can generalize our analysis to more general correlation functions.
On one hand, in general, when two half-BPS Wilson loop operators preserve completely different supersymmetries, the correlation function as a whole does not preserve any supersymmetry at all, which prevents us from applying many techniques.
Especially since the localization technique for supersymmetric correlation functions does not work, our correlation function does not reduce to a matrix model any more.
On the other hand, it is obvious that, for two identical half-BPS Wilson loops with only representations being possibly different, the two-point function reduces to a matrix model with the product of two characters.
Then, due to the Littlewood-Richardson rule, the products of two characters can be decomposed trivially into a linear combination of characters.
Hence, the correlation function with more than one insertion is not a new quantity but reduces to the one-point functions.
We hope to study two-point functions with a non-trivial and at the same time tractable structure.
Secondly, it was known that the partition function and the one-point functions enjoy many non-trivial relations, such as the Wronskian identity \cite{GHM2}, the open-closed duality \cite{HaOk,KM}, the Giambelli identity \cite{HHMO,MM,MaMo}, the Jacobi-Trudi identity \cite{FM} and so on.
It would be great if we can introduce a larger framework to combine all of the identities.
Thirdly, in the open string formalism, the one-point function of the half-BPS Wilson loop is given by a minor determinant of an infinite-dimensional matrix which contains two ingredients $H$ and $K$ (see \eqref{oneptFG}).
One of them $H$ is exactly the same as the one-point function of the half-BPS Wilson loop in the hook representation.
Although a combination of the other ingredients $K$ plays an important role especially in studying the partition function with the rank deformation \cite{MM,MNN}, the interpretation for a single component of it is missing.
Fourthly, in discussing the integrable structure of the ABJM matrix model \cite{MaMo,FM}, the starting point is the open string formalism \cite{MM}, where, unlike the indices of $H$, one of the indices of $K$ is always negative and appears consecutively.
It is natural to ask whether, in studying other correlation functions, we encounter a totally general minor determinant.
Fifthly, in studying the one-point function of the $1/6$-BPS Wilson loop operators \cite{KMSS,O6}, we encounter an imaginary contribution which cannot be regarded as a simple phase factor as that of the half-BPS Wilson loops.
Since we do not have much experience in the correlation functions containing imaginary parts, the study is difficult in general.
We hope to have more tractable examples of correlation functions with imaginary parts.
It turns out that all of these dissatisfactions can be alleviated by considering a certain type of two-point functions in the ABJM matrix model.
Namely, instead of introducing the characters both by $s_Y(e^\mu|e^\nu)$ and $s_Z(e^\mu|e^\nu)$, we invert the ``charge'' of one Wilson loop operator, and introduce a two-point function with the insertions $s_Y(e^\mu|e^\nu)$ and $s_Z(e^{-\mu}|e^{-\nu})$.
Of course it is natural to ask whether our capricious inversion of charges is physically relevant.
Especially we would like to know whether we can insert two different half-BPS Wilson loop operators in the ABJM theory preserving the total supersymmetries, and after applying the localization techniques, whether the insertion of these operators results in the two-point function in the matrix model we have introduced.
Although we do not have a concrete analysis to justify our expectation, we believe that this is possible due to the following arguments.
Since the scalar fields come into the half-BPS Wilson loop operator with the norm of the coordinates \cite{DT}, we believe that we can simultaneously reverse the sign of the scalar fields in the Wilson loop and the orientation of the loop to preserve the supersymmetries.
We hope that, after applying the localization techniques, the correlation function with these two insertions results in the two-point function we have defined.
Also, as we explain later, from the viewpoint of the Fermi gas formalism of the matrix model, we can still construct an open string formalism for the insertion of two characters with the opposite charges in a parallel manner as the one-point function in \cite{MM}.
The final result for the Fermi gas formalism of the two-point function is reminiscent of the representation of the supergroup U$(N_1|N_2)$ which is characterized by the so-called composite Young diagram \cite{Moens} combining two Young diagrams in the opposite directions.
Our result that the two-point function fully respects the hidden structure of the supergroup may suggest the naturalness of the definition and their origin in the ABJM theory.
Note also that this Fermi gas formalism of the two-point function generalizes that of the one-point function and therefore provides a larger framework.
After the introduction of the matrix model with two insertions of the opposite charges, we continue to study this two-point function.
Although it looks non-trivial at the beginning, after a long numerical analysis of the matrix model, we have found that the two-point function with the opposite charges is directly related by complex conjugation to the two-point function with the same charges, which is in turn related to the one-point functions by the Littlewood-Richardson rule.
We shall refer to this relation as a conjugate relation.
We also find an interesting relation for the imaginary part.
We find that, after the removal of the main phase factor, the imaginary part of the two-point function in the representations $Y$ and $Z$ reduces to a sum of the one-point functions in the representation $X$ whose box number is less than the sum of the box numbers of $Y$ and $Z$ by two.
Since we are studying the imaginary part of the two-point function and the result reduces to simpler quantities, this property may remind us of an interference between two insertions $Y$ and $Z$.
We shall refer to this relation as a descent relation.
Since the two-point function is related to the one-point function, on occasion of many numerical data, we also revisit the one-point function.
Although so far only the so-called diagonal BPS indices, which correspond to the case of equal ranks $N_2=N_1$, have been identified, with the various numerical data in the rank deformation we can investigate how the diagonal BPS indices are split by the degree difference.
Interestingly, we have found an asymmetry of the BPS indices in exchanging the two degrees $(d_+,d_-)$.
This paper is organized as follows.
In the next section, we introduce the two-point function.
After establishing the Fermi gas formalism to study the two-point function, we proceed to studying it and find a few relations to the one-point functions.
In section \ref{onepoint}, we revisit the one-point function and investigate how the diagonal BPS indices are split.
Finally we conclude in section \ref{conclusion}.
In appendix \ref{determinant} we collect a few determinant formulas necessary for constructing the Fermi gas formalism of the two-point function and in appendix \ref{lowest} we compute the non-vanishing two-point function in the lowest rank.
In appendix \ref{superpose} we list a few data to study the conjugate relation, while appendix \ref{interfere} is devoted to the descent relation.
After presenting some relations and formulas for the one-point function in appendix \ref{onept} and appendix \ref{pert1pt}, in appendix \ref{np1pt} we list our exact expression of the non-perturbative part of the one-point function to study the split of the diagonal BPS indices.
\section{Two-point function}
In this section we shall introduce the two-point function in the ABJM matrix model, establish the Fermi gas formalism for it and study its property.
\subsection{Definition}\label{definition}
The one-point function of the half-BPS Wilson loop operator in the ABJM theory is reduced to a matrix model, after applying the localization technique for the supersymmetric theories \cite{P,KWY}\footnote{We follow the phase factor introduced in \cite{DMP1}.
This phase factor simplifies later formulas.},
\begin{align}
\langle s_Y\rangle_k(N_1,N_2)
&=i^{-\frac{1}{2}(N_1^2-N_2^2)}
\int\frac{D^{N_1}\mu}{N_1!}\frac{D^{N_2}\nu}{N_2!}
\frac{\prod_{m<m'}^{N_1}(2\sinh\frac{\mu_m-\mu_{m'}}{2})^2
\prod^{N_2}_{n<n'}(2\sinh\frac{\nu_n-\nu_{n'}}{2})^2}
{\prod_m^{N_1}\prod_n^{N_2}(2\cosh\frac{\mu_m-\nu_n}{2})^2}\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\times
s_Y(e^\mu|e^\nu),
\end{align}
with
\begin{align}
D\mu=\frac{d\mu}{2\pi}e^{\frac{ik}{4\pi}\mu^2},\quad
D^{N_1}\mu=\prod_{m=1}^{N_1}D\mu_m,\quad
D\nu=\frac{d\nu}{2\pi}e^{-\frac{ik}{4\pi}\nu^2},\quad
D^{N_2}\nu=\prod_{n=1}^{N_2}D\nu_n.
\label{DmuDnu}
\end{align}
Here it was known that the hyperbolic functions can be regarded as a hyperbolic deformation of the invariant measure for the supergroup U$(N_1|N_2)$ and the Fresnel exponential factor can be regarded as the supertrace (see \cite{PTEP} for a review).
Also the character $s_Y(e^\mu|e^\nu)$ is the super Schur polynomial, the character of the supergroup U$(N_1|N_2)$, and the arguments are the abbreviation, $s_Y(e^\mu|e^\nu)=s_Y(e^{\mu_1},e^{\mu_2},\cdots,e^{\mu_{N_1}}|e^{\nu_1},e^{\nu_2},\cdots,e^{\nu_{N_2}})$.
Although we do not have a rigorous localization analysis for two-point functions of the half-BPS Wilson loop operators in the ABJM theory, it is interesting to ask how we can define a two-point function naturally at the level of the matrix model.
If we simply insert another character $s_{Z}(e^\mu|e^\nu)$ in addition to the original one $s_{Y}(e^\mu|e^\nu)$ with the same arguments, the multiplication can be computed by the Littlewood-Richardson rule
\begin{align}
s_{Y}(e^\mu|e^\nu)s_{Z}(e^\mu|e^\nu)=\sum_{X}N^X_{YZ}s_{X}(e^\mu|e^\nu),
\end{align}
and the result reduces to the one-point function trivially.
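As a minimal example of this reduction (the coefficients $N^X_{YZ}$ for the super Schur polynomials are the ordinary Littlewood-Richardson coefficients), the product of two single-box characters decomposes as
\begin{align}
s_{[1]}(e^\mu|e^\nu)s_{[1]}(e^\mu|e^\nu)=s_{[2]}(e^\mu|e^\nu)+s_{[1,1]}(e^\mu|e^\nu),
\end{align}
so the two-point function with two equal-charge single-box insertions is simply the sum of the one-point functions $\langle s_{[2]}\rangle_k+\langle s_{[1,1]}\rangle_k$.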
Here let us consider the insertion of $s_{Z}(e^{-\mu}|e^{-\nu})=s_Z(e^{-\mu_1},e^{-\mu_2},\cdots,e^{-\mu_{N_1}}|e^{-\nu_1},e^{-\nu_2},\cdots,e^{-\nu_{N_2}})$ with the opposite ``charges''.
The insertion of the character with the opposite charges is partially motivated by the study of the Hopf links in the Chern-Simons matrix model \cite{AKMV}.\footnote{See also \cite{Kimura} for discussions on the similarity.}
The Chern-Simons theory is a topological theory, where the topological invariant of knots can be regarded as the correlation function of the Wilson loop operators \cite{WCS} and expressed as the Chern-Simons matrix model \cite{MCS}.
Especially for the Hopf links the matrix model is constructed by gluing two wave functions on the solid tori with the Wilson loop inside.
In the gluing process we effectively invert the charges of one of the Wilson loops.
After these discussions, let us define the two-point function in the ABJM matrix model as
\begin{align}
\langle s_Y\bar s_Z\rangle_k(N_1,N_2)
&=i^{-\frac{1}{2}(N_1^2-N_2^2)}
\int\frac{D^{N_1}\mu}{N_1!}\frac{D^{N_2}\nu}{N_2!}
\frac{\prod_{m<m'}^{N_1}(2\sinh\frac{\mu_m-\mu_{m'}}{2})^2
\prod^{N_2}_{n<n'}(2\sinh\frac{\nu_n-\nu_{n'}}{2})^2}
{\prod_m^{N_1}\prod_n^{N_2}(2\cosh\frac{\mu_m-\nu_n}{2})^2}\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\times
s_Y(e^\mu|e^\nu)s_Z(e^{-\mu}|e^{-\nu}),
\label{two}
\end{align}
and the matrix model in the grand canonical ensemble as\footnote{It is important to match the power of $z$ with one of the ranks for both positive and negative $M$ \cite{MNN}.}
\begin{align}
\langle s_Y\bar s_Z\rangle^\text{GC}_{k,M}(z)
=\sum_{N=\max(0,-M)}^\infty z^N\langle s_Y\bar s_Z\rangle_k(N,N+M).
\label{gc}
\end{align}
Although our definition of the two-point function is not based on a concrete physical argument from the localization techniques, we shall see in the next subsection that this is in fact a nice definition which naturally incorporates the structure of the representation of the supergroup U$(N_1|N_2)$.
Before proceeding to the study of the two-point function, here let us shortly comment on their symmetries and relations.
First, we note that the two-point functions satisfy
\begin{align}
\langle s_Z\bar s_Y\rangle_k(N_1,N_2)&=\langle s_Y\bar s_Z\rangle_k(N_1,N_2),\nonumber\\
\langle s_{Y^\text{T}}\bar s_{Z^\text{T}}\rangle_k(N_2,N_1)
&=[\langle s_Y\bar s_Z\rangle_k(N_1,N_2)]^*,
\end{align}
which can be respectively proved by inverting the signs of the integration variables $\mu$ and $\nu$ and by inverting the integration variables $\mu$ and $\nu$ and using the transposition relation $s_{Y^\text{T}}(y|x)=s_Y(x|y)$.
In terms of the grand canonical ensemble, the two relations read
\begin{align}
\langle s_Z\bar s_Y\rangle^\text{GC}_{k,M}(z)
&=\langle s_Y\bar s_Z\rangle^\text{GC}_{k,M}(z),
\nonumber\\
\langle s_{Y^\text{T}}\bar s_{Z^\text{T}}\rangle^\text{GC}_{k,-M}(z)
&=z^M[\langle s_Y\bar s_Z\rangle^\text{GC}_{k,M}(z)]^*,
\end{align}
where the complex conjugate does not apply to $z$, $z^*=z$.
Secondly, when one of the Wilson loops is trivial, the two-point function reduces to the one-point function
\begin{align}
\langle s_\varnothing\bar s_Y\rangle_k(N_1,N_2)
&=\langle\bar s_Y\rangle_k(N_1,N_2)
=\langle s_Y\rangle_k(N_1,N_2)
=\langle s_Y\bar s_\varnothing\rangle_k(N_1,N_2).
\end{align}
\subsection{Fermi gas formalism}
In this subsection we construct the Fermi gas formalism to study this matrix model.
Although it was previously only noted that the case of $M=N_2-N_1<0$ can be studied by taking the complex conjugate, here we study the cases of $M\ge 0$ and $M\le 0$ separately and point out that they are connected smoothly at $M=0$.
In both cases, the resulting Fermi gas formalism is schematically summarized by the expression
\begin{align}
\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}\Xi_k(w)\det\begin{pmatrix}
{\cal H}_k^{(\widetilde a|\widetilde l)}(w)
\end{pmatrix}_{(\widetilde a,\widetilde l)\in\widetilde A\times\widetilde L},
\label{FG}
\end{align}
where $w$ is related to $z$ by $w=(-i)^Mz$.
\begin{figure}[!ht]
\centering\includegraphics[scale=0.6,angle=-90]{young.eps}\\[6pt]
$Y$\hspace{60mm}$Y'$
\caption{The shifted Frobenius notation for $Y=[5,4,2,1]$ (left) and $Y'=[4,2,2,1,1]$ (right).
For $Y$ we adopt the original shifted Frobenius notation $(a_1,a_2|l_1,l_2,l_3,l_4)_{M=2}=(\frac{5}{2},\frac{1}{2}|\frac{11}{2},\frac{7}{2},\frac{3}{2},\frac{1}{2})_{M=2}$ defined in \eqref{al}.
For $Y'$ we invert the signs and the roles of arms and legs $(-l'_1|{-a'_1},-a'_2,-a'_3)_{M=2}=(\frac{3}{2}|\frac{13}{2},\frac{7}{2},\frac{1}{2})_{M=2}$ as in \eqref{la}.
}
\label{young}
\end{figure}
Before explaining various quantities, let us first explain the notation of the Young diagram and the structure of the determinant in \eqref{FG}.
See figure \ref{young} for examples.
Here we denote the Young diagrams $Y$ by the so-called $M$-shifted Frobenius notation
\begin{align}
Y=(a_1,a_2,\cdots,a_R|l_1,l_2,\cdots,l_{M+R})_M,\quad
R=\max\{i|a_i>0\}=\max\{j|l_j>0\}-M.
\label{MshiftedY}
\end{align}
This is obtained by listing the horizontal lengths and the vertical lengths from the diagonal line shifted by $M$ upward/rightward (or $|M|$ downward/leftward when $M<0$) to the boundary of the Young diagram as arm lengths and leg lengths, which are also expressed as
\begin{align}
a_i=\lambda_i-i+\frac{1}{2}-M,\quad l_j=\lambda^\text{T}_j-j+\frac{1}{2}+M,
\label{al}
\end{align}
with the standard notation of the Young diagram by listing the non-vanishing numbers of horizontal boxes $Y=[\lambda_1,\lambda_2,\cdots]$ or dually $Y^\text{T}=[\lambda^\text{T}_1,\lambda^\text{T}_2,\cdots]$.
The Young diagram $Y'$ is also denoted by the $M$-shifted Frobenius notation, though we invert the signs and the roles of the arm lengths and the leg lengths
\begin{align}
Y'=(-l'_1,\cdots,-l'_{R'}|{-a'_1},\cdots,-a'_{M+R'})_M,\quad
R'=\max\{j|l'_j<0\}=\max\{i|a'_i<0\}-M,
\label{Y'al}
\end{align}
with
\begin{align}
-l'_j=\lambda'_j-j+\frac{1}{2}-M,\quad
-a'_i=\lambda'^{\text{T}}_i-i+\frac{1}{2}+M,
\label{la}
\end{align}
for $Y'=[\lambda'_1,\lambda'_2,\cdots]$.
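As a cross-check of \eqref{al} and \eqref{la}, the shifted lengths are easy to tabulate programmatically. The following sketch (our own helper, not part of any standard package) reproduces the examples of figure \ref{young}:

```python
from fractions import Fraction

def shifted_frobenius(lam, M):
    """Positive arm/leg lengths of the M-shifted Frobenius notation."""
    width = lam[0] if lam else 0
    lamT = [sum(1 for row in lam if row >= j) for j in range(1, width + 1)]
    lamT += [0] * max(M, 0)   # empty columns can still give positive legs
    rows = list(lam) + [0] * max(-M, 0)   # likewise empty rows when M < 0
    half = Fraction(1, 2)
    arms = [Fraction(r) - i + half - M for i, r in enumerate(rows, 1)]
    legs = [Fraction(c) - j + half + M for j, c in enumerate(lamT, 1)]
    return [a for a in arms if a > 0], [l for l in legs if l > 0]

# Y = [5,4,2,1], M = 2  ->  (5/2, 1/2 | 11/2, 7/2, 3/2, 1/2)_{M=2}
print(shifted_frobenius([5, 4, 2, 1], 2))
# Applied to Y' = [4,2,2,1,1], the same formulas give the inverted data
# (-l' | -a') = (3/2 | 13/2, 7/2, 1/2)_{M=2}
print(shifted_frobenius([4, 2, 2, 1, 1], 2))
```

Note that the number of positive legs always exceeds the number of positive arms by exactly $M$, as required by \eqref{MshiftedY}.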
The column indices and the row indices appearing in the determinant in \eqref{FG} are
\begin{align}
\widetilde A=\{a'_{R'+M},a'_{R'+M-1},\cdots,a'_1,a_1,a_2,\cdots,a_R\},\quad
\widetilde L=\{l'_{R'},l'_{R'-1},\cdots,l'_1,l_1,l_2,\cdots,l_{M+R}\},
\label{MshiftedYZ}
\end{align}
respectively.
Note that, compared with \eqref{Y'al}, in \eqref{MshiftedYZ} the signs and the roles of the arm lengths and the leg lengths in the Young diagram $Y'$ are exchanged to combine with those in $Y$.
This is why we have introduced the notation in \eqref{Y'al}.
Pictorially, this exchange is clearly encoded by reversing one of the Young diagrams $Y'$.
See figure \ref{composite} for an explanation for the example of the two Young diagrams $Y$ and $Y'$ given in figure \ref{young}.
The diagram combining two Young diagrams in the opposite directions is called the composite Young diagram and appears naturally in the study of the representations of the supergroup U$(N_1|N_2)$ \cite{Moens}.
We shall refer to the order of the arm lengths and the leg lengths in \eqref{MshiftedY} as the standard order for a single Young diagram and the order in \eqref{MshiftedYZ} as the standard order for a composite Young diagram.
\begin{figure}[!ht]
\centering\includegraphics[scale=0.6,angle=-90]{composite.eps}
\caption{After reversing the order of arms and legs in $Y'$, the arm and leg lengths in the Fermi gas formalism \eqref{FG} are given by $(a'_3,a'_2,a'_1,a_1,a_2|l'_1,l_1,l_2,l_3,l_4)=(-\frac{1}{2},-\frac{7}{2},-\frac{13}{2},\frac{5}{2},\frac{1}{2}|{-\frac{3}{2}},\frac{11}{2},\frac{7}{2},\frac{3}{2},\frac{1}{2})$ as in \eqref{MshiftedYZ}.}
\label{composite}
\end{figure}
Although at first sight the matrix $\bigl({\cal H}_k^{(\widetilde a|\widetilde l)}(w)\bigr)_{\widetilde A\times\widetilde L}$ in \eqref{FG} consists of four blocks with positive/negative arm/leg lengths respectively, they reduce essentially to two blocks.
If we refer to the matrix elements as $H_k^{(a|l)}$ and $K_k^{(a'|l)}$ when the leg length is positive, the matrix elements for the negative leg length are also given in terms of $H_k^{(a|l)}$ and $K_k^{(a'|l)}$ by the complex conjugate as
\begin{align}
\Bigl({\cal H}_k^{(\widetilde a|\widetilde l)}(w)\Bigr)_{\widetilde A\times\widetilde L}
=\begin{pmatrix}
\bigl[H_k^{(-a'|-l')}(w)\bigr]^*_{(R'+M)\times R'}&
\bigl[K_k^{(a'|l)}(w)\bigr]_{(R'+M)\times(M+R)}\\
\bigl[-wK_k^{(-a|-l')}(w)\bigr]^*_{R\times R'}&
\bigl[H_k^{(a|l)}(w)\bigr]_{R\times(M+R)}
\end{pmatrix},
\label{calH}
\end{align}
where the complex conjugate does not apply to $w$, $w^*=w$.
Of course, as in the left-hand side, the indices on the right-hand side are also given in the standard order of the composite Young diagram \eqref{MshiftedYZ}.
Various quantities are schematically defined as
\begin{align}
\Xi_k(w)&=\Det(1+wPQ),\nonumber\\
K_k^{(a'|l)}(w)&=F_l(1+wQP)^{-1}F_{a'},\nonumber\\
H_k^{(a|l)}(w)&=wF_l(1+wQP)^{-1}QE_a,
\label{ingredients}
\end{align}
with
\begin{align}
P(\mu,\nu)=\frac{1}{2\cosh\frac{\mu-\nu}{2}},\quad
Q(\nu,\mu)=\frac{1}{2\cosh\frac{\nu-\mu}{2}},\quad
E_p(\mu)=e^{p\mu},\quad
F_q(\nu)=e^{q\nu}.
\end{align}
Here we regard $P(\mu,\nu)$, $Q(\nu,\mu)$ as matrices and $E_p(\mu)$, $F_q(\nu)$ as vectors.
When multiplying by contracting the ``indices'' $\mu$ and $\nu$ we utilize the integration measure \eqref{DmuDnu}.
The determinant $\Det$ used in defining $\Xi_k(w)$ in \eqref{ingredients} is the Fredholm determinant on the function space.
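As an illustration of how such Fredholm determinants can be evaluated in practice, the following sketch approximates $\Det(1+wQP)$ by Nystr\"om discretization: the integration is replaced by a weighted sum on a finite grid, so the Fredholm determinant becomes an ordinary determinant. The weight \texttt{rho} below is a toy trace-class stand-in for the measure $D\mu$ of \eqref{DmuDnu} (defined earlier in the paper), so the numbers are illustrative only; all function names are ours, not the paper's.

```python
import math

def det(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    n = len(mat)
    a = [row[:] for row in mat]
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def xi_nystrom(w, n, cutoff=20.0):
    """Nystrom approximation of Det(1 + w QP) on [-cutoff, cutoff].

    P(mu, nu) = Q(nu, mu) = 1/(2 cosh((mu - nu)/2)); the weight rho below
    is a TOY trace-class measure standing in for the actual D-mu.
    """
    h = 2.0 * cutoff / n
    mu = [-cutoff + (i + 0.5) * h for i in range(n)]  # midpoint grid
    rho = [h / (4.0 * math.pi * math.cosh(m / 2.0)) for m in mu]
    P = lambda x, y: 1.0 / (2.0 * math.cosh((x - y) / 2.0))
    # composite kernel (QP)(nu_i, nu_j) = sum_m rho_m Q(nu_i, mu_m) P(mu_m, nu_j)
    QP = [[sum(rho[m] * P(mu[i], mu[m]) * P(mu[m], mu[j]) for m in range(n))
           for j in range(n)] for i in range(n)]
    # Fredholm determinant -> ordinary determinant of I + w * QP * diag(rho)
    return det([[(1.0 if i == j else 0.0) + w * QP[i][j] * rho[j]
                 for j in range(n)] for i in range(n)])
```

Since the discretized operator is positive semidefinite, the approximation always exceeds $1$ and converges quickly as the grid is refined.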
Note that, for the partition function or the one-point function, the expression \eqref{FG} reduces to those found in \cite{MP,HHMO,MM}.
In particular, for the grand canonical partition function of equal ranks $N_2=N_1$ or $M=0$ \cite{MP}, we have
$\langle 1\rangle^\text{GC}_{k,M=0}(z)
=\Xi_k(w)$, which gives a physical interpretation to $\Xi_k(w)$.
For the one-point function of the Wilson loop operator in the hook representation for the case of equal ranks $N_2=N_1$ \cite{HHMO}, we find
$\langle s_{(a|l)}\rangle^\text{GC}_{k,M=0}(z)
=\Xi_k(w)H_k^{(a|l)}(w)$.
Hence, after the normalization, $H_k^{(a|l)}(w)$ is clearly interpreted as the one-point function in the hook representation.
For the grand canonical partition function with the rank deformation $N_2>N_1$ or $M>0$ \cite{MM}, we find
$\langle 1\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}\Xi_k(w)\det[K_k^{(a'|l)}(w)]_{M\times M}$.
Though the determinant is important in computing the grand canonical partition function with the rank deformation, the interpretation of the single component of $K_k^{(a'|l)}(w)$ is not clear.
After our introduction of the two-point function, the situation is improved.
We can consider, for example, the two-point function
\begin{align}
\langle s_{(\frac{1}{2}|l)}\bar s_{(\frac{1}{2}|-a')}\rangle^\text{GC}_{k,M=1}(z)
=i^{\frac{1}{2}}\Xi_k(w)K_k^{(a'-1|l+1)}(w),
\end{align}
which gives the interpretation for the single component of $K_k^{(a'|l)}(w)$.
Also note that, for the general one-point function of the Wilson loop operator with the rank deformation $M>0$, we find \cite{MM}
\begin{align}
\langle s_Y\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}\Xi_k(w)\det\begin{pmatrix}
\bigl[K_k^{(a'_i|l_j)}(w)\bigr]_{M\times(M+R)}\\
\bigl[H_k^{(a_i|l_j)}(w)\bigr]_{R\times(M+R)}
\end{pmatrix}.
\label{oneptFG}
\end{align}
Regarding the determinant as a minor determinant of an infinite-dimensional matrix, the matrix element with negative leg lengths never appears.
Besides, although the indices $a_i$ and $l_j$ depend on the shape of the Young diagram and can take general values for different Young diagrams, the indices $a'_i$ always range consecutively within
\begin{align}
a'_i\in\Bigl\{-\frac{1}{2},-\frac{3}{2},\cdots,-M+\frac{1}{2}\Bigr\}.
\label{contarms}
\end{align}
The situation is not improved much even if we include the case of $M<0$.
This may imply that the one-point function is not the most general quantity to study and there is a natural generalization of it.
After obtaining the Fermi gas formalism for the general two-point function, the matrix elements with both of the lengths in $(\widetilde a|\widetilde l)$ being negative can participate and the indices $a'$ and $l'$ no longer have to range consecutively.
\subsection{Derivation}\label{derivation}
In this subsection, we shall give a derivation of the Fermi gas formalism for the two-point function \eqref{FG} from the definition \eqref{two}.
The basic techniques for the derivation already appeared in the derivation for the one-point function \cite{MM}.
We first introduce two determinant formulas: one is used to express the integration measure as a determinant, while the other is used to express the super Schur polynomial by replacing this determinant by another.
Then we can combine these two determinants by the continuous Cauchy-Binet formula.
The only new ingredient for the two-point function is to repeat these techniques twice.
\subsubsection{$M\ge 0$}
We first explain the Fermi gas formalism for the case of $N_2\ge N_1$ carefully.
Namely, we set $N_1=N$ and $N_2=N+M$ with $M\ge 0$.
For the integration measure, we introduce the combination of the Vandermonde determinant and the Cauchy determinant
\begin{align}
\frac{\prod_{m<m'}^{N_1}(x_m-x_{m'})
\prod_{n<n'}^{N_2}(y_n-y_{n'})}
{\prod_{m=1}^{N_1}\prod_{n=1}^{N_2}(x_m+y_n)}
=(-1)^{N_1(N_2-N_1)}
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{x_m+y_n}\biggr]
_{(m,n)\in Z_{1}\times Z_{2}}\\
\Bigl[y_n^{\overline l-\frac{1}{2}}\Bigr]
_{(\overline l,n)\in\overline L\times Z_{2}}
\end{pmatrix},
\label{Z}
\end{align}
with $Z_1=\{1,2,\cdots,N_1\}$, $Z_2=\{1,2,\cdots,N_2\}$ and
$\overline L=\{M-\frac{1}{2},M-\frac{3}{2},\cdots,\frac{1}{2}\}$ appearing in the determinant in this order.
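The identity \eqref{Z} can be checked exactly for small ranks with rational arithmetic. The following sketch (helper names are ours) evaluates both sides, with the rows of the determinant ordered as in \eqref{Z}: the Cauchy block on top and the power rows $y_n^{\overline l-\frac{1}{2}}$, i.e. $y_n^{M-1},\dots,y_n^0$, below.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Exact determinant by Leibniz expansion (small matrices only)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def lhs(x, y):
    """Vandermonde-times-Cauchy combination on the left-hand side."""
    num = Fraction(1)
    for m in range(len(x)):
        for mp in range(m + 1, len(x)):
            num *= x[m] - x[mp]
    for n in range(len(y)):
        for np_ in range(n + 1, len(y)):
            num *= y[n] - y[np_]
    den = Fraction(1)
    for xm in x:
        for yn in y:
            den *= xm + yn
    return num / den

def rhs(x, y):
    """Determinant side: Cauchy block on top, powers y^(M-1), ..., y^0 below."""
    n1, n2 = len(x), len(y)
    M = n2 - n1
    rows = [[Fraction(1) / (xm + yn) for yn in y] for xm in x]
    rows += [[yn ** p for yn in y] for p in range(M - 1, -1, -1)]
    return Fraction(-1) ** (n1 * M) * det(rows)
```

For instance, with $x=(1,2)$ and $y=(3,5,7)$ both sides evaluate to $1/3780$; for $M=0$ the identity reduces to the usual Cauchy determinant formula.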
For the super Schur polynomial, we utilize the determinant formula \cite{MVdJ}
\begin{align}
s_Y(x|y)
&=(-1)^R\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{x_m+y_n}\biggr]
_{(m,n)\in Z_{1}\times Z_{2}}&
\Bigl[x_m^{a-\frac{1}{2}}\Bigr]
_{(m,a)\in Z_{1}\times A}\\
\Bigl[y_n^{l-\frac{1}{2}}\Bigr]
_{(l,n)\in L\times Z_{2}}&
[0]_{L\times A}\end{pmatrix}\nonumber\\
&\qquad\qquad\bigg/
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{x_m+y_n}\biggr]
_{(m,n)\in Z_{1}\times Z_{2}}\\
\Bigl[y_n^{\overline l-\frac{1}{2}}\Bigr]
_{(\overline l,n)\in\overline L\times Z_{2}}\end{pmatrix},
\label{Y}
\end{align}
where $A=\{a_1,a_2,\cdots,a_R\}$ and $L=\{l_1,l_2,\cdots,l_{M+R}\}$ are the sets of the arm and leg lengths in the shifted Frobenius notation in the standard order \eqref{MshiftedY}.
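For concreteness, the half-integer Frobenius coordinates can be read off from a partition; the sketch below assumes the standard $M=0$ convention $a_i=\lambda_i-i+\frac{1}{2}$, $l_j=\lambda^\text{T}_j-j+\frac{1}{2}$ (consistent with $|Y|=\sum_i(a_i+l_i)$; the $M$-shifted case \eqref{MshiftedY} is analogous). The function names are ours.

```python
from fractions import Fraction

def transpose(lam):
    """Conjugate partition lambda^T."""
    return [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []

def frobenius(lam):
    """Half-integer Frobenius coordinates (a_1, ..., a_R | l_1, ..., l_R).

    Assumes the M = 0 convention a_i = lam_i - i + 1/2 and
    l_j = lam^T_j - j + 1/2, with R the length of the main diagonal.
    """
    lamT = transpose(lam)
    R = sum(1 for i, part in enumerate(lam) if part > i)
    a = [Fraction(2 * (lam[i] - i) - 1, 2) for i in range(R)]
    l = [Fraction(2 * (lamT[j] - j) - 1, 2) for j in range(R)]
    return a, l
```

For example, the partition $(3,1)$ gives $(a|l)=(\frac{5}{2}|\frac{3}{2})$ with $a+l=4=|Y|$.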
Since we have the square in the integration measure and two super Schur functions in the two-point function \eqref{two}, we need a copy of the previous two determinants.
For later convenience, it is easier to obtain them by substituting $x\to x^{-1}$, $y\to y^{-1}$, replacing $Y$ by $Y'$ (namely $\overline l\to -\overline a$, $a\to -l'$, $l\to -a'$ as in \eqref{Y'al}) and transposing the matrix.
Then, we find
\begin{align}
&(-1)^{\frac{1}{2}N_1(N_1-1)+\frac{1}{2}N_2(N_2-1)}
\frac{\prod_{m<m'}^{N_1}(x_{m'}^{-1}-x_m^{-1})
\prod_{n<n'}^{N_2}(y_{n'}^{-1}-y_n^{-1})}
{\prod_{m=1}^{N_1}\prod_{n=1}^{N_2}(x_m^{-1}+y_n^{-1})}\nonumber\\
&=(-1)^{N_1(N_2-N_1)}
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{y_n^{-1}+x_m^{-1}}\biggr]
_{(n,m)\in Z_{2}\times Z_{1}}&
\Bigl[y_n^{\overline a+\frac{1}{2}}\Bigr]
_{(n,\overline a)\in Z_{2}\times\overline A}
\end{pmatrix},
\label{Zbar}
\end{align}
with $\overline A=\{-(M-\frac{1}{2}),-(M-\frac{3}{2}),\cdots,-\frac{1}{2}\}$ and
\begin{align}
s_{Y'}(x^{-1}|y^{-1})
&=(-1)^{R'}\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{y_n^{-1}+x_m^{-1}}\biggr]
_{(n,m)\in Z_{2}\times Z_{1}}&
\Bigl[y_n^{a'+\frac{1}{2}}\Bigr]
_{(n,a')\in Z_{2}\times A'}\\
\Bigl[x_m^{l'+\frac{1}{2}}\Bigr]
_{(l',m)\in L'\times Z_{1}}&
[0]_{L'\times A'}\end{pmatrix}\nonumber\\
&\qquad\bigg/
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{y_n^{-1}+x_m^{-1}}\biggr]
_{(n,m)\in Z_{2}\times Z_{1}}&
\Bigl[y_n^{\overline a+\frac{1}{2}}\Bigr]
_{(n,\overline a)\in Z_{2}\times\overline A}\end{pmatrix},
\label{Ybar}
\end{align}
where $-L'=\{-l'_1,-l'_2,\cdots,-l'_{R'}\}$ and $-A'=\{-a'_1,-a'_2,\cdots,-a'_{M+R'}\}$ are respectively the sets of the {\it arm} and {\it leg} lengths for the single Young diagram $Y'$.
See figure \ref{young} again to avoid confusion.
After substituting the four determinant formulas \eqref{Z}, \eqref{Y}, \eqref{Zbar}, \eqref{Ybar} with $x_m=e^{\mu_m}$, $y_n=e^{\nu_n}$ and multiplying them all together, finally we find that the two-point function is given by
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle_{k}(N,N+M)
=i^{-\frac{1}{2}(N_1^2-N_2^2)}(-1)^{\frac{1}{2}N_1(N_1-1)+\frac{1}{2}N_2(N_2-1)+R+R'}
\int\frac{D^{N_1}\mu}{N_1!}\frac{D^{N_2}\nu}{N_2!}\nonumber\\
&\times\det\begin{pmatrix}
[P(\mu,\nu)]_{N\times(N+M)}
&[E_a(\mu)]_{N\times R}\\
[F_l(\nu)]_{(M+R)\times(N+M)}
&[0]_{(M+R)\times R}
\end{pmatrix}
\det\begin{pmatrix}
[Q(\nu,\mu)]_{(N+M)\times N}
&[F_{a'}(\nu)]_{(N+M)\times(M+R')}\\
[E_{l'}(\mu)]_{R'\times N}
&[0]_{R'\times(M+R')}
\end{pmatrix},
\label{detdet}
\end{align}
with
\begin{align}
&P(\mu,\nu)=\frac{1}{2\cosh\frac{\mu-\nu}{2}},\quad
E_a(\mu)=e^{a\mu},\quad
F_l(\nu)=e^{l\nu},\nonumber\\
&Q(\nu,\mu)=\frac{1}{2\cosh\frac{\nu-\mu}{2}},\quad
F_{a'}(\nu)=e^{a'\nu},\quad
E_{l'}(\mu)=e^{l'\mu}.
\end{align}
Now using the continuous Cauchy-Binet formula, {\it Formula 1} given in appendix \ref{determinant}, we can combine two determinants into
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle_{k,M}(N)
=i^{MN+\frac{1}{2}M^2}(-1)^{MN+\frac{1}{2}M(M-1)+R+R'+RR'}
\int\frac{D^N\mu}{N!}\nonumber\\
&\quad\times\det\begin{pmatrix}
[(P\circ Q)(\mu,\mu)]_{N\times N}
&[(P\circ F_{a'})(\mu)]_{N\times(M+R')}
&[E_a(\mu)]_{N\times R}\\
[(F_l\circ Q)(\mu)]_{(M+R)\times N}
&[(F_l\circ F_{a'})]_{(M+R)\times(M+R')}
&[0]_{(M+R)\times R}\\
[E_{l'}(\mu)]_{R'\times N}
&[0]_{R'\times(M+R')}
&[0]_{R'\times R}
\end{pmatrix},
\end{align}
with $\circ$ denoting the contraction with $D\nu$ \eqref{DmuDnu}.
Using {\it Formula 2}, we can express the two-point function in the grand canonical ensemble defined in \eqref{gc} as
\begin{align}
\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}(-1)^{\frac{1}{2}M(M-1)+R+R'+RR'}
\Det\begin{pmatrix}1+wP\circ Q&wP\circ F_{a'}&wE_a\\
F_l\circ Q&F_l\circ F_{a'}&0\\E_{l'}&0&0\end{pmatrix}.
\end{align}
Here we have introduced $w=(-i)^Mz$ to take care of the sign factor proportional to $N$.
The determinant $\Det$ is a combination of the Fredholm determinant for the first block of rows and columns and the usual determinant for the remaining blocks.
Using {\it Formula 3} we can further rewrite it as
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}(-1)^{\frac{1}{2}M(M-1)+R+R'+RR'}
\Det(1+wQP)\nonumber\\
&\quad\times
\det\begin{pmatrix}[F_l(1+wQP)^{-1}F_{a'}]_{(M+R)\times(M+R')}&
[-wF_l(1+wQP)^{-1}QE_a]_{(M+R)\times R}\\
[-wE_{l'}(1+wPQ)^{-1}PF_{a'}]_{R'\times(M+R')}&
[-wE_{l'}(1+wPQ)^{-1}E_a]_{R'\times R}
\end{pmatrix},
\end{align}
where we have dropped $\circ$ and understand the matrix multiplications by the integrations $D\mu$ and $D\nu$ tacitly.
Now it is very interesting to observe that the arm and leg lengths are those appearing in the composite Young diagram in figure \ref{composite}.
To reduce the expression, we first multiply the second column block and the second row block by $-1$, which produces the sign factor $(-1)^{R+R'}$.
After that we exchange the first and second row block and rearrange the arm and leg lengths to the standard order of the composite Young diagram.
Due to the exchange of the rows and columns, we encounter the extra sign factors
\begin{align}
(-1)^{(M+R)R'+\frac{1}{2}R'(R'-1)+\frac{1}{2}(M+R')(M+R'-1)}=(-1)^{\frac{1}{2}M(M-1)+RR'},
\end{align}
cancelling parts of the sign factor.
Finally, the two-point function in the grand canonical ensemble is given by
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(z)
=i^{\frac{1}{2}M^2}\Det(1+wQP)\nonumber\\
&\quad\times\det\begin{pmatrix}[wE_{l'}(1+w PQ)^{-1}PF_{a'}]_{R'\times(R'+M)}&
[-wE_{l'}(1+wPQ)^{-1}E_a]_{R'\times R}\\
[F_l(1+wQP)^{-1}F_{a'}]_{(M+R)\times(R'+M)}&
[wF_l(1+wQP)^{-1}QE_a]_{(M+R)\times R}\end{pmatrix}.
\label{fermi}
\end{align}
After transposing the determinant in this expression and using
\begin{align}
&wE_{l'}(1+wPQ)^{-1}PF_{a'}=wE_{-l'}(1+wPQ)^{-1}PF_{-a'}=[wF_{-l'}(1+wQP)^{-1}QE_{-a'}]^*,
\nonumber\\
&{-wE_{l'}(1+wPQ)^{-1}E_{a}}=-wE_{-l'}(1+wPQ)^{-1}E_{-a}=[-wF_{-l'}(1+wQP)^{-1}F_{-a}]^*,
\end{align}
we obtain \eqref{FG}.
Note that the complex conjugate can be realized effectively by exchanging the matrices $(P,Q)$ and the vectors $(E,F)$ simultaneously.
\subsubsection{$M\le 0$}
The construction of the Fermi gas formalism for the case of $M\le 0$ does not change much.
Instead of keeping the same notation with $M\le 0$, let us introduce the notation $M=-\bar M$, $N_1=N=\bar N+\bar M$, $N_2=N+M=\bar N$ which is more intuitive.
This time instead of \eqref{Z} and \eqref{Zbar} we use
\begin{align}
&\frac{\prod_{m<m'}^{N_1}(x_m-x_{m'})
\prod_{n<n'}^{N_2}(y_n-y_{n'})}
{\prod_{m=1}^{N_1}\prod_{n=1}^{N_2}(x_m+y_n)}\nonumber\\
&\quad=(-1)^{N_2(N_1-N_2)}
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{x_m+y_n}\biggr]
_{(m,n)\in Z_{1}\times Z_{2}}&
\Bigl[x_m^{\overline a-\frac{1}{2}}\Bigr]
_{(m,\overline a)\in Z_{1}\times\overline A}
\end{pmatrix},
\label{ZMneg}
\end{align}
with $\overline A=\{\bar M-\frac{1}{2},\bar M-\frac{3}{2},\cdots,\frac{1}{2}\}$ and
\begin{align}
&(-1)^{\frac{1}{2}N_1(N_1-1)+\frac{1}{2}N_2(N_2-1)}
\frac{\prod_{m<m'}^{N_1}(x_{m'}^{-1}-x_m^{-1})
\prod_{n<n'}^{N_2}(y_{n'}^{-1}-y_n^{-1})}
{\prod_{m=1}^{N_1}\prod_{n=1}^{N_2}(x_m^{-1}+y_n^{-1})}\nonumber\\
&=(-1)^{N_2(N_1-N_2)}
\det\begin{pmatrix}\biggl[\displaystyle\frac{1}{y_n^{-1}+x_m^{-1}}\biggr]
_{(n,m)\in Z_{2}\times Z_{1}}\\
\Bigl[x_m^{\overline l+\frac{1}{2}}\Bigr]
_{(\overline l,m)\in\overline L\times Z_{1}}
\end{pmatrix},
\label{ZbarMneg}
\end{align}
with $\overline L=\{-(\bar M-\frac{1}{2}),-(\bar M-\frac{3}{2}),\cdots,-\frac{1}{2}\}$ and change the denominators of \eqref{Y} and \eqref{Ybar} accordingly.
Also for the $M$-shifted Frobenius notation of the Young diagram, we introduce
\begin{align}
Y&=(a_1,\cdots,a_{\bar M+\bar R}|l_1,\cdots,l_{\bar R})_{-\bar M},\quad
\bar R=\max\{i|a_i>0\}-\bar M=\max\{j|l_j>0\},\nonumber\\
Y'&=(-l'_1,\cdots,-l'_{\bar M+\bar R'}|-a'_1,\cdots,-a'_{\bar R'})_{-\bar M},\quad
\bar R'=\max\{j|l'_j<0\}-\bar M=\max\{i|a'_i<0\},
\end{align}
where we note that $R$ and $\bar R$ are related by $R=\bar M+\bar R$, $M+R=\bar R$ and $R'$ and $\bar R'$ are related by $R'=\bar M+\bar R'$, $M+R'=\bar R'$.
Then we arrive at the same expression as in \eqref{detdet}.
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle_{k}(\bar N+\bar M,\bar N)
=i^{-\frac{1}{2}(N_1^2-N_2^2)}(-1)^{\frac{1}{2}N_1(N_1-1)+\frac{1}{2}N_2(N_2-1)+\bar R+\bar R'}
\int\frac{D^{N_1}\mu}{N_1!}\frac{D^{N_2}\nu}{N_2!}\nonumber\\
&\times\det\begin{pmatrix}
[P(\mu,\nu)]_{(\bar N+\bar M)\times\bar N}
&[E_a(\mu)]_{(\bar N+\bar M)\times(\bar M+\bar R)}\\
[F_l(\nu)]_{\bar R\times\bar N}
&[0]_{\bar R\times(\bar M+\bar R)}
\end{pmatrix}
\det\begin{pmatrix}
[Q(\nu,\mu)]_{\bar N\times(\bar N+\bar M)}
&[F_{a'}(\nu)]_{\bar N\times\bar R'}\\
[E_{l'}(\mu)]_{(\bar M+\bar R')\times(\bar N+\bar M)}
&[0]_{(\bar M+\bar R')\times\bar R'}
\end{pmatrix}.
\end{align}
This time, since we have more $\mu$ variables than $\nu$ variables, we shall perform the integration $D\mu$ first and then move to the grand canonical ensemble \eqref{gc}
\begin{align}
\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,-\bar M}(z)
=\sum_{\bar N=0}^\infty z^{\bar N+\bar M}\langle s_Y\bar s_{Y'}\rangle_{k}(\bar N+\bar M,\bar N).
\end{align}
Effectively we can exchange the two determinants in the integrand and proceed in a parallel manner.
Finally, we find
\begin{align}
&\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,-\bar M}(z)
=i^{\frac{1}{2}\bar M^2}
(-1)^{\frac{1}{2}\bar M(\bar M-1)+\bar R+\bar R'+\bar R\bar R'}
\Det(1+w QP)\nonumber\\
&\quad\times
(-w)^{\bar M}\det\begin{pmatrix}
[E_{l'}(1+wPQ)^{-1}E_a]_{(\bar M+\bar R')\times(\bar M+\bar R)}&
[-wE_{l'}(1+wPQ)^{-1}PF_{a'}]_{(\bar M+\bar R')\times\bar R'}\\
[-wF_l(1+wQP)^{-1}QE_a]_{\bar R\times(\bar M+\bar R)}&
[-wF_l(1+wQP)^{-1}F_{a'}]_{\bar R\times\bar R'}
\end{pmatrix}.
\end{align}
After changing the rows and the columns suitably and transposing the determinant, we arrive at the same expression as \eqref{fermi}.
\subsection{Phase factor}\label{secphase}
After establishing the Fermi gas formalism for the two-point function in the ABJM matrix model in the previous subsection, we can start the computation.
We first note that we can compute the lowest component of the grand canonical two-point function in the expansion of $z$ (in other words, the non-vanishing canonical two-point function $\langle s_Y\bar s_Z\rangle_k(N,N+M)$ of the lowest rank $N$) and present the results in terms of the Young diagrams $Y$ and $Y'$.
The result is given in appendix \ref{lowest}.
For higher orders we need to perform the residue integration order by order for each ingredient \eqref{ingredients} appearing in \eqref{FG} and \eqref{calH}.
Fortunately, the computation of each part already appeared previously.
For $\Xi_k(w)$, the computation was already performed in the first studies of the exact values of the partition function in \cite{HMO1,PY,HMO2}.
The techniques of rewriting the multiplications among matrices into subsequent multiplications by matrices on a vector were also reviewed in \cite{PTEP}.
For $H_k^{(a|l)}(w)$, the computation was given in the study of the one-point function of the Wilson loop in \cite{HHMO}, where it was found that the computation is convergent for $2(a+l)<k$.
For $K_k^{(a'|l)}(w)$, the computation was given in the study of the partition function with the rank deformation $N_2\ne N_1$ in \cite{MM}, where the convergence is valid for $2|a'|<k$ and $2l<k$.
Using these results of the computation, we can compute the two-point function without difficulty.
For simplicity in discussing the numerical results, we always consider the case of $M\ge 0$.
We have computed the two-point function $\langle s_Y\bar s_Z\rangle_{k}(N,N+M)$ for $2\le|Y|+|Z|\le 5$ and $k=3,4,6,8,12$ up to $N=N_\text{max}$ with $(k,N_\text{max})=(3,7),(4,13),(6,8),(8,4),(12,5)$ for $M$ within the range of convergence.
As a preliminary study of the result, we start with the phase factor.
As known in \cite{HHMO,MM}, the phase dependence of the partition function and the one-point function is rather trivial.
In particular, with the phase factor $i^{-\frac{1}{2}(N_1^2-N_2^2)}$ included in the definition of \eqref{two} \cite{DMP1}, the phase of the partition function $e^{i\Theta^\varnothing_{k,M}}$ and that of the one-point function $e^{i\Theta_{k,M}^Y}$ defined by
\begin{align}
\frac{\langle 1\rangle_{k}(N,N+M)}{|\langle 1\rangle_{k}(N,N+M)|}
=e^{i\Theta^\varnothing_{k,M}},\quad
\frac{\langle s_Y\rangle_{k}(N,N+M)}{|\langle s_Y\rangle_{k}(N,N+M)|}
=e^{i\Theta_{k,M}^Y},
\end{align}
are given by
\begin{align}
\Theta_{k,M}^Y=\theta_{k,M}+\theta_{k,M}^Y,\quad
\theta_{k,M}=-\frac{\pi}{k}\frac{M^3-M}{6},\quad
\theta_{k,M}^Y=\frac{\pi}{k}(2c^Y-M|Y|),
\label{1ptphase}
\end{align}
independent of the rank $N$.
Here $|Y|$ is the total box number of the Young diagram $Y$ and $c^Y$ is the sum of the contents for the Young diagram $Y$,
\begin{align}
|Y|=\sum_{(i,j)\in Y}1,\quad
c^Y=\sum_{(i,j)\in Y}(j-i).
\label{contents}
\end{align}
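A quick way to evaluate the phases \eqref{1ptphase} is to compute $|Y|$ and the content sum $c^Y$ \eqref{contents} directly from the partition; a minimal sketch (function names are ours):

```python
from math import pi

def boxes(lam):
    """|Y|: total number of boxes of the Young diagram."""
    return sum(lam)

def content_sum(lam):
    """c^Y = sum over boxes (i,j) of (j - i), rows i and columns j 1-based."""
    return sum(j - i for i, row in enumerate(lam, start=1)
               for j in range(1, row + 1))

def theta_frame(k, M):
    """theta_{k,M} = -(pi/k)(M^3 - M)/6."""
    return -pi / k * (M**3 - M) / 6

def theta_Y(k, M, lam):
    """theta^Y_{k,M} = (pi/k)(2 c^Y - M |Y|)."""
    return pi / k * (2 * content_sum(lam) - M * boxes(lam))
```

For instance, \texttt{theta\_Y(6, 1, [1])} reproduces $\theta^{\yng(1)}_{k=6,M=1}=-\pi/6$.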
Here we stress that, with the phase $i^{-\frac{1}{2}(N_1^2-N_2^2)}$ included, all of the remaining phases in \eqref{1ptphase} are proportional to $k^{-1}$ and cannot be removed simply by changing rows or columns.
The phase factor of the two-point function is more complicated.
Nevertheless, after plotting the phase, we have found that, in the large $N$ limit, the phase of the two-point function approaches exponentially the sum of those of the two one-point functions with separated insertions,
\begin{align}
\frac{\langle s_Y\bar s_Z\rangle_{k}(N,N+M)}{|\langle s_Y\bar s_Z\rangle_{k}(N,N+M)|}
\to e^{i\Theta^{Y,Z}_{k,M}},\quad
\Theta^{Y,Z}_{k,M}=\theta_{k,M}+\theta_{k,M}^Y+\theta_{k,M}^Z.
\label{twophase}
\end{align}
As an example, in figure \ref{BBphase} we plot the phases of the two-point function $\langle s_{\yng(1)}\bar s_{\yng(1)}\rangle_{k=6}(N,N+M)$ and show how the phases approach $\Theta^{\yng(1),\yng(1)}_{k=6,M}=\theta_{k=6,M}+2\theta_{k=6,M}^{\yng(1)}$ for $M=0,1,2$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{k6M0.eps}\includegraphics[scale=0.6]{k6M1.eps}\includegraphics[scale=0.6]{k6M2.eps}\\
$M=0$\qquad\qquad\qquad\qquad\qquad$M=1$\qquad\qquad\qquad\qquad\qquad$M=2$
\end{center}
\caption{The phases of the two-point function $\langle s_{\Box}\bar s_{\Box}\rangle_{k=6}(N,N+M)$ (red dots) and the phases $\Theta^{\Box,\Box}_{k=6,M}=\theta_{k=6,M}+2\theta_{k=6,M}^{\Box}$ (blue lines) for $M=0$ (left), $M=1$ (center), $M=2$ (right).
The phases approach $\Theta^{\Box,\Box}_{k=6,M}$ in the large $N$ limit for $M=1,2$, while the phase vanishes identically for $M=0$.
}
\label{BBphase}
\end{figure}
It is known \cite{WCS} that the phase factor is interpreted as the framing factor from the field-theoretical viewpoint.
Here $\theta_{k,M}$ is the framing factor of the manifold while $\theta_{k,M}^Y$ is the framing factor of the Wilson loop.
Hereafter we often consider the two-point function with this framing factor removed
\begin{align}
e^{-i\Theta_{k,M}^{Y,Z}}\langle s_Y\bar s_Z\rangle_{k}(N,N+M).
\end{align}
\subsection{Perturbative part}\label{pertpart}
It turns out that, as in the case of the one-point function, the result for the two-point function is summarized cleanly in the grand canonical ensemble.
To present the result we define the chemical potential $\mu$ from the fugacity $z$ by $\mu=\log z$ and study the perturbation theory in large $\mu$ in this subsection.
We first conjecture that, for the one-point function of the half-BPS Wilson loop operator in an arbitrary representation, the perturbative part is given by
\begin{align}
\frac{\langle s_Y\rangle^\text{GC}_{k,M}(z)}{\langle 1\rangle^\text{GC}_{k,M}(z)}
\bigg|^\text{pert}
=\frac{e^{i\theta^Y_{k,M}}e^{\frac{2|Y|}{k}\mu}}
{\prod_{(i,j)\in Y}2\sin\frac{2\pi h(i,j)}{k}},
\label{1ptpert}
\end{align}
where $h(i,j)$ at $(i,j)\in Y$ is the hook length
\begin{align}
h(i,j)=\lambda_i+\lambda^\text{T}_j-i-j+1.
\end{align}
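The hook lengths, and hence the denominator of \eqref{1ptpert}, follow directly from the partition and its transpose; a minimal sketch (function names are ours):

```python
import math

def transpose(lam):
    """Conjugate partition lambda^T."""
    return [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []

def hook(lam, i, j):
    """Hook length h(i,j) = lam_i + lam^T_j - i - j + 1 (1-based indices)."""
    return lam[i - 1] + transpose(lam)[j - 1] - i - j + 1

def pert_denominator(lam, k):
    """Denominator prod_{(i,j) in Y} 2 sin(2 pi h(i,j)/k) of the conjecture."""
    p = 1.0
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            p *= 2.0 * math.sin(2.0 * math.pi * hook(lam, i, j) / k)
    return p
```

For example, the hook lengths of $(2,1)$ are $\{3,1,1\}$, and for the fundamental representation at $k=6$ the denominator is $2\sin\frac{2\pi}{6}=\sqrt{3}$.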
There are many consistency checks for this expression.
First, this expression is consistent with the numerical analysis in \cite{HaOk}, where it was found that for many one-point functions in the hook representation, the perturbative part is given as
\begin{align}
\frac{\langle s_{(a|l)}\rangle^\text{GC}_{k,M}(z)}{\langle 1\rangle^\text{GC}_{k,M}(z)}
\bigg|^\text{pert}
=\frac{e^{\frac{\pi i}{k}((a-\frac{M}{2})^2-(l+\frac{M}{2})^2)}e^{\frac{2|Y|}{k}\mu}}
{2\sin\frac{2\pi(a+l)}{k}
\prod_{m=1}^{a-\frac{1}{2}}2\sin\frac{2\pi m}{k}
\prod_{n=1}^{l-\frac{1}{2}}2\sin\frac{2\pi n}{k}}.
\label{1ptperthook}
\end{align}
Our perturbative expression \eqref{1ptpert} reduces to \eqref{1ptperthook} for the hook representation.
Secondly, this expression is also consistent with the Giambelli identity proved in \cite{HHMO,MaMo,FM}
\begin{align}
\frac{\langle s_{(a_1,a_2,\cdots,a_R|l_1,l_2,\cdots,l_R)}\rangle^\text{GC}_{k,M}(z)}
{\langle 1\rangle^\text{GC}_{k,M}(z)}
=\det\biggl(\frac{\langle s_{(a_i|l_j)}\rangle^\text{GC}_{k,M}(z)}
{\langle 1\rangle^\text{GC}_{k,M}(z)}\biggr)
_{\begin{subarray}{c}1\le i\le R\\1\le j\le R\end{subarray}},
\label{Giambelliid}
\end{align}
which reduces the expression in an arbitrary representation to that in the hook representation.
In appendix \ref{giambellic} we prove that the perturbative truncation of the one-point function \eqref{1ptpert} satisfies the Giambelli identity.
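The identity \eqref{Giambelliid} mirrors the classical Giambelli identity for the Schur polynomials themselves. For example, $Y=(2,2)$, with half-integer Frobenius coordinates $(\frac{3}{2},\frac{1}{2}|\frac{3}{2},\frac{1}{2})$, satisfies $s_{(2,2)}=s_{(2,1)}s_{(1)}-s_{(2)}s_{(1,1)}$, which the following sketch verifies numerically via the bialternant formula (helper names are ours):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Exact determinant by Leibniz expansion (small matrices only)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def schur(lam, x):
    """Schur polynomial s_lam(x) via the bialternant formula."""
    n = len(x)
    lam = list(lam) + [0] * (n - len(lam))
    num = det([[x[i] ** (lam[j] + n - 1 - j) for j in range(n)]
               for i in range(n)])
    den = det([[x[i] ** (n - 1 - j) for j in range(n)] for i in range(n)])
    return num / den
```

Evaluating at any distinct rational points, e.g. $x=(2,3,5)$, confirms the identity exactly.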
Thirdly, after assuming the correspondence with the open topological string theory \cite{HHMO} and the identification of the variables (see \eqref{Vhat} later), we also see that the expression of the open topological string free energy gives \eqref{1ptpert} when expanded for various representations $Y$.
The expansion is given in appendix \ref{freeopen}.
For the perturbative part of the two-point function, from the numerical studies, we find that it is the same as the product of the two one-point functions with separated insertions,
\begin{align}
\frac{\langle s_Y\bar s_Z\rangle^\text{GC}_{k,M}(z)}{\langle 1\rangle^\text{GC}_{k,M}(z)}
\bigg|^\text{pert}
=\frac{\langle s_Y\rangle^\text{GC}_{k,M}(z)}{\langle 1\rangle^\text{GC}_{k,M}(z)}
\bigg|^\text{pert}\times
\frac{\langle\bar s_Z\rangle^\text{GC}_{k,M}(z)}{\langle 1\rangle^\text{GC}_{k,M}(z)}
\bigg|^\text{pert}.
\end{align}
\subsection{Conjugate relation}\label{superposerel}
A direct study of the non-perturbative part of the two-point function is more complicated.
Nevertheless, we can separate the two-point function into the real part and the imaginary part and investigate the large $\mu$ expansion in the grand canonical ensemble as in the one-point function.
We have performed this analysis for a few two-point functions.
After seeing in the previous two subsections that the phase factor and the perturbative part of the two-point function split into the product of those of the two one-point functions with separated insertions, we may expect a more direct relation to the one-point functions even for the non-perturbative part.
In our analysis we have found two relations to the one-point functions.
We shall present one in this subsection and the other in the next subsection, and discuss their possible interpretations.
The first relation is
\begin{align}
e^{-i\Theta_{k,M}^{Y,Z}}\langle s_Y\bar s_Z\rangle_{k}(N,N+M)
=\bigl[e^{-i\Theta_{k,M}^{Y,Z}}\langle s_Ys_Z\rangle_{k}(N,N+M)\bigr]^*.
\label{superposeid}
\end{align}
Namely, our two-point function with the main phase factor removed is equal to the complex conjugate of the trivial two-point function constructed from the same two characters, and can be decomposed into one-point functions following the Littlewood-Richardson rule.
From our original viewpoint of the definition of the non-trivial two-point function \eqref{two}, there is no sign that we can superpose the two insertions.
Nevertheless, after a long analysis of the residue computations, we have found that these two insertions can actually be superposed on each other, so that the two-point function essentially reduces to one-point functions.
In the following, we shall refer to this relation as the conjugate relation.
There are several ways to express this relation.
For example, if we do not want our result to contain the phase factors, we can express the result as
\begin{align}
\frac{\langle s_Y\bar s_Z\rangle_{k}(N_1,N_2)\cdot\langle 1\rangle_{k}(N_1,N_2)}
{\langle s_Y\rangle_{k}(N_1,N_2)\cdot\langle\bar s_Z\rangle_{k}(N_1,N_2)}
=\biggl[\frac{\langle s_Ys_Z\rangle_{k}(N_1,N_2)\cdot\langle 1\rangle_{k}(N_1,N_2)}
{\langle s_Y\rangle_{k}(N_1,N_2)\cdot\langle s_Z\rangle_{k}(N_1,N_2)}\biggr]^*.
\end{align}
Also, if we expand the two characters by the Littlewood-Richardson rule $s_Ys_Z=\sum_XN_{YZ}^Xs_X$, it is given by
\begin{align}
e^{-2i\Theta_{k,M}^{Y,Z}}\langle s_Y\bar s_Z\rangle_{k}(N,N+M)
=\sum_XN_{YZ}^X
e^{-2i\Theta_{k,M}^{X}}\langle s_X\rangle_{k}(N,N+M).
\label{2ptfrom1pt}
\end{align}
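The Littlewood-Richardson expansions entering this relation can be checked on small examples, e.g. $s_\Box s_\Box=s_{\yng(2)}+s_{\yng(1,1)}$ and $s_{\yng(2,1)}s_\Box=s_{\yng(3,1)}+s_{\yng(2,2)}+s_{\yng(2,1,1)}$. A numerical sketch using the Jacobi-Trudi formula $s_\lambda=\det(h_{\lambda_i-i+j})$ (our own helper code, not the paper's):

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations

def det(m):
    """Exact determinant by Leibniz expansion (small matrices only)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def h(k, x):
    """Complete homogeneous symmetric polynomial h_k(x)."""
    if k < 0:
        return Fraction(0)
    total = Fraction(0)
    for comb in combinations_with_replacement(x, k):
        term = Fraction(1)
        for v in comb:
            term *= v
        total += term
    return total

def schur(lam, x):
    """Schur polynomial via Jacobi-Trudi: s_lam = det(h_{lam_i - i + j})."""
    n = len(lam)
    return det([[h(lam[i] - (i + 1) + (j + 1), x) for j in range(n)]
                for i in range(n)])
```

Evaluating at $x=(1,2,3)$ confirms both product expansions exactly.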
Although this relation is natural from the viewpoint of our studies of the phase factor and the perturbative part, it is a highly non-trivial relation, considering that it continues to be valid for all of the non-perturbative corrections.
In fact, at present we cannot prove this relation by rigorous arguments.
We have checked this relation for all the cases of $2\le |Y|+|Z|\le 5$ and $k=3,4,6,8,12$ where the integrations are convergent.
In particular, we have listed some data for the case of $\langle s_\Box\bar s_\Box\rangle_{k=6}(N,N+M)$ with $M=0,1,2$ in appendix \ref{superpose} to convince the readers of the validity.
We note that, although all the ingredients in the two-point function $\langle s_\Box\bar s_\Box\rangle_{k=6}(N,N+M)$ and the one-point function $\langle s_{\yng(2)}\rangle_{k=6}(N,N+M)$ are convergent for $M=0,1,2$, those in the one-point function $\langle s_{\yng(1,1)}\rangle_{k=6}(N,N+M)$ are convergent only for $M=0,1$.
The ranges of the convergence do not necessarily coincide for both sides of \eqref{2ptfrom1pt}.
This relation may reflect the topological aspects of the ABJM matrix model.
Although the original ABJM theory is not topological, after applying the localization techniques for the one-point function of the half-BPS Wilson loop, the result is known to reduce to the matrix model and relate to the topological string theory.
Our definition of the two-point function is not obtained from the localization techniques and is not related to topological theories at first sight.
However, since this definition respects all of the symmetries the one-point function has, it may still relate to the topological string theory.
Therefore, we expect that the relation we have found is another sign of the deep relation between the ABJM matrix model and the topological string theory, though the full interpretation is unclear to us.
\subsection{Descent relation}\label{interfererel}
As we have explained in section \ref{secphase}, in the large $N$ limit, the phase of the two-point function $\langle s_Y\bar s_Z\rangle_k(N,N+M)$ approaches $\Theta^{Y,Z}_{k,M}$ exponentially.
This implies that the imaginary part of the two-point function with the main phase factor removed, $\im e^{-i\Theta^{Y,Z}_{k,M}}\langle s_Y\bar s_Z\rangle_k(N,N+M)$, has a simpler structure.
In this subsection, we observe an interesting relation for the imaginary part of the two-point function.
In the study of the simplest two-point function $\langle s_\Box\bar s_\Box\rangle_k(N,N+M)$ with both insertions in the fundamental representation $\Box$, we find that, after removing the main phase, the imaginary part of the two-point function simply reduces to the partition function
\begin{align}
\im\Bigl[e^{-i\Theta_{k,M}^{\Box,\Box}}\langle s_\Box\bar s_\Box\rangle_k(N,N+M)\Bigr]
=\biggl(\sin\frac{2\pi M}{k}\biggr)e^{-i\Theta_{k,M}^{\varnothing}}\langle 1\rangle_k(N,N+M).
\end{align}
It turns out that this relation is a special case of a more general relation between the imaginary part of the two-point function and the one-point functions.
Our conjecture is that there exists a set of Laurent polynomials $Q_{YZ}^X$ of $q=e^{-\frac{4\pi i}{k}}$ depending only on the Young diagrams so that the relation
\begin{align}
\im\Bigl[e^{-i\Theta_{k,M}^{Y,Z}}\langle s_Y\bar s_Z\rangle_{k}(N,N+M)\Bigr]
=\im\Bigl[e^{-i\Theta_{k,M}^{Y,Z}}\sum_{|X|=|Y|+|Z|-2}Q_{YZ}^X\langle s_X\rangle_{k}(N,N+M)\Bigr],
\label{imaginary}
\end{align}
holds, where the sum is taken over the Young diagrams $X$ whose box number $|X|$ is smaller than $|Y|+|Z|$ by $2$.
Later we will denote this relation simply by
\begin{align}
\langle s_Y\bar s_Z\rangle\sim\sum_XQ_{YZ}^X\langle s_X\rangle
\quad\mod e^{i\Theta^{Y,Z}}{\mathbb R},
\label{generalinterfere}
\end{align}
or, when there is no confusion, we often drop mod $e^{i\Theta^{Y,Z}}{\mathbb R}$ since the phase is clear for the two-point function.
For the case of $Z=\Box$, we have explicitly found a simple set of the coefficients $Q_{Y\Box}^X$ for the relation \eqref{imaginary}.
Namely, $Q^X_{Y\Box}$ is non-vanishing only for $X=Y_\bullet$, with $Y_\bullet$ denoting a Young diagram obtained by removing one box from $Y$ such that the result is still a valid Young diagram, and $Q_{Y\Box}^{Y_\bullet}=1$.
In other words, the relation simplifies to
\begin{align}
\langle s_Y\bar s_\Box\rangle
\sim\sum_{Y_\bullet}\langle s_{Y_\bullet}\rangle.
\label{generalbox}
\end{align}
In particular, when $Y=(a|l)$ is the hook representation, the relation is
\begin{align}
\langle s_{(a|l)}\bar s_\Box\rangle
\sim\langle s_{(a|l-1)}\rangle+\langle s_{(a-1|l)}\rangle.
\label{hookbox}
\end{align}
Also, for the case of $Y=(a|\frac{1}{2})$ and $Z=\yng(2)$, the relation is
\begin{align}
\langle s_{(a|\frac{1}{2})}\bar s_{\yng(2)}\rangle
\sim q^{a-\frac{1}{2}}\langle s_{(a|\frac{1}{2})}\rangle
+q^{-1}\langle s_{(a-1|\frac{3}{2})}\rangle,
\label{hooksym}
\end{align}
while for the case of $Y=(\frac{1}{2}|l)$ and $Z=(a|\frac{1}{2})$, the relation is
\begin{align}
\langle s_{(\frac{1}{2}|l)}\bar s_{(a|\frac{1}{2})}\rangle
\sim\langle s_{(a|l-1)}\rangle+\langle s_{(a-1|l)}\rangle.
\label{antisymsym}
\end{align}
In fact \eqref{hookbox} and \eqref{antisymsym} are equivalent if we assume the conjugate relation \eqref{superposeid} in the previous subsection.
We list the first few concrete relations in appendix \ref{interference}.
For the cases when the total box number is less than five, the relations are always special cases of our list in \eqref{generalbox}, \eqref{hooksym}, \eqref{antisymsym}, though for the cases with more boxes there appear relations which do not belong to the cases we have discussed.
Nevertheless, our conjecture is that they still satisfy the general expression \eqref{imaginary} with a suitable choice of the Laurent polynomials $Q_{YZ}^X$.
Note that we do not claim that the set of the coefficients $Q_{YZ}^X$ is unique due to the non-trivial relations for the imaginary part among the one-point functions with the same box number
\begin{align}
\im\Bigl[e^{-i\Theta^{(n)}_{k,M}}\sum_{|X|=n}Q^X\langle s_X\rangle_k(N,N+M)\Bigr]=0,
\label{ambiX}
\end{align}
with
\begin{align}
\Theta^{(n)}_{k,M}=\theta_{k,M}-(n+2)\frac{\pi M}{k},
\end{align}
or, in our abbreviated notation,
\begin{align}
\sum_{|X|=n}Q^X\langle s_X\rangle\sim 0\quad\mod e^{i\Theta^{(n)}}{\mathbb R}.
\label{gen1pt}
\end{align}
The set of the coefficients $Q_{YZ}^X$ is determined only up to these ambiguities and we have chosen one representative set to express the relation.
Nevertheless, we stress that, once $Q_{YZ}^X$ is chosen up to the ambiguities, the same set of $Q_{YZ}^X$ is valid for any $N$ and $M$.
For example, for the case of the one-point functions with three boxes, we find a non-trivial relation for the one-point functions
\begin{align}
\im\Bigl[
e^{-i\Theta^{(3)}_{k,M}}
(q\langle s_{\yng(3)}\rangle_k(N,N+M)
-\langle s_{\yng(2,1)}\rangle_k(N,N+M)
+q^{-1}\langle s_{\yng(1,1,1)}\rangle_k(N,N+M))\Bigr]=0.
\label{onepointthree}
\end{align}
Note that we are free to multiply the relation \eqref{onepointthree} by
\begin{align}
[n]_q=\frac{q^{\frac{n}{2}}-q^{-\frac{n}{2}}}{q^{\frac{1}{2}}-q^{-\frac{1}{2}}},
\label{qnumber}
\end{align}
since $[n]_q\in{\mathbb R}$, which is sometimes necessary for discussing the representative choice of $Q^X_{YZ}$.
In the notation \eqref{gen1pt}, the relation for the one-point functions can be expressed as
\begin{align}
q\langle s_{\yng(3)}\rangle-\langle s_{\yng(2,1)}\rangle+q^{-1}\langle s_{\yng(1,1,1)}\rangle
\sim 0\quad\mod e^{i\Theta^{(3)}}{\mathbb R}.
\label{onerel3}
\end{align}
For the case of the one-point functions with more than three boxes, there are more similar relations.
We list the first few ambiguities in appendix \ref{ambiguity}.
Our descent relations in \eqref{generalbox}, \eqref{hooksym}, \eqref{antisymsym} and appendix \ref{interference} are given up to these ambiguities.
We have many checks for the relation \eqref{imaginary} (and \eqref{ambiX} as well).
Since we have computed the exact values of the two-point function up to a certain rank $N=N_\text{max}$, we can substitute the values to the relations to check the validity.
Also in appendix \ref{lowest} we have computed the non-vanishing two-point function of the lowest rank (or the lowest component in the grand canonical ensemble) and we check the relation \eqref{imaginary} for the lowest component in appendix \ref{lowcheck}.
In any case, our conjecture passes all of the consistency checks.
Since the lowest component is determined for any values of $M$, we can proceed one step further by asking which set of the Laurent polynomials $Q_{YZ}^X$ satisfies the relation for the lowest component.
Surprisingly, we find that actually the lowest component gives a large enough number of the constraints so that as long as these constraints are satisfied the relation on the imaginary part also holds for the higher components.
In fact this is true for all of the relations given in appendix \ref{interference}.
The interpretation of the relation \eqref{imaginary} is again obscure.
Since we are considering the imaginary part of the two-point function, this may relate to the interference between the two insertions.
One may also expect that the relation is related to the orientifold projection \cite{MS1,Hosp,Oosp,MS2,MN5}.
On one hand, the restriction to the imaginary part can be regarded as a projection.
On the other hand, the resulting condition for the non-vanishing values of $Q_{YZ}^X$, $|X|=|Y|+|Z|-2$, is reminiscent of the reduction of the unitary groups to the orthogonal groups or the symplectic groups (due to the invariant tensor $\delta_{ab}$ and $J_{ab}$ respectively).
An alternative attempt for the interpretation is given in appendix \ref{heisenberg}, where the reduction by two boxes is interpreted by the derivative, in the analogy of the symplectic structure of the Heisenberg algebra $[\widehat q,\widehat p]=i\hbar$, which reduces the total number of the coordinate/momentum operators by two.
Since the relation reduces the total box number, we shall refer to this relation as the descent relation.
\subsection{Implications to one-point functions}\label{oneptrel}
In the previous two subsections, we have found two relations among the two-point functions and the one-point functions.
The conjugate relation reduces our non-trivial two-point function to the trivial two-point function which can be reexpanded into the one-point functions by the Littlewood-Richardson rule.
The descent relation reduces the two-point function to the one-point functions more directly by concentrating on the imaginary part.
Although originally the relations are naturally given in terms of the two-point function, after eliminating the two-point function, we find a relation purely among the one-point functions with the box numbers differing by two.
For example, for the case of two boxes, the relation is
\begin{align}
\sin\frac{2\pi}{k}|\langle s_{\yng(2)}\rangle_k(N_1,N_2)|
-\sin\frac{2\pi}{k}|\langle s_{\yng(1,1)}\rangle_k(N_1,N_2)|
+\sin\frac{2\pi M}{k}|\langle 1\rangle_k(N_1,N_2)|=0.
\end{align}
We shall abbreviate this relation as
\begin{align}
\sigma_1|\langle s_{\yng(2)}\rangle|-\sigma_1|\langle s_{\yng(1,1)}\rangle|
+\sigma_M|\langle 1\rangle|=0,
\end{align}
by introducing $\sigma_n=\sin(\frac{2\pi}{k}n)$.
We can alternatively write down the expression using the $q$-number \eqref{qnumber} with $[n]_q=\sigma_n/\sigma_1$.
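As a quick numerical sanity check (a sketch; the identification $q=e^{\frac{4\pi i}{k}}$ is our assumption, chosen precisely so that $[n]_q=\sigma_n/\sigma_1$ holds), the following Python snippet verifies that the $q$-number \eqref{qnumber} is real on the unit circle and reproduces the ratio of sines:

```python
import cmath, math

def q_number(n, k):
    """[n]_q with q = exp(4*pi*i/k) (assumed identification), computed as
    (q^{n/2} - q^{-n/2}) / (q^{1/2} - q^{-1/2})."""
    qh = cmath.exp(2j * math.pi / k)   # q^{1/2}
    return (qh ** n - qh ** (-n)) / (qh - qh ** -1)

def sigma(n, k):
    """sigma_n = sin(2*pi*n/k)."""
    return math.sin(2 * math.pi * n / k)

k = 7  # any level with sigma_1 != 0
for n in range(1, 6):
    val = q_number(n, k)
    assert abs(val.imag) < 1e-9                                # [n]_q is real
    assert abs(val.real - sigma(n, k) / sigma(1, k)) < 1e-9    # [n]_q = sigma_n / sigma_1
```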
Similar relations also hold for more boxes.
We summarize the relations for the box number up to five in appendix \ref{onept}.
Beyond four boxes the relations are subject to the ambiguities \eqref{onepointthree} as we have discussed.
These relations are given purely with the one-point functions and some of them reproduce the relations known previously.
We could have pointed out these relations without introducing the two-point function.
However, with the two simple relations given in the previous two subsections, we find it more natural to discuss in a larger framework with the two-point function.
\section{Topological strings}\label{onepoint}
In the previous section we have defined the non-trivial two-point function, studied them carefully and found that the two-point function relates to the one-point function directly.
From these studies we are convinced of the importance of the two-point function we have defined, since it provides a natural framework to unify many aspects of the one-point function.
Here, taking advantage of the various numerical data, let us revisit the one-point functions and uncover some of their fine structure.
Among others, we find that the Gopakumar-Vafa invariants are asymmetric in the exchange of the two degrees, which is not very common in our experience.
We first briefly explain the correspondence between the matrix model and the topological string theory before going into the analysis of the Gopakumar-Vafa invariants.
Although most of the correspondences are well-known, we try to shed some light by presenting the cases of closed strings and open strings in a parallel manner.
\subsection{Partition function and closed topological strings}
From the definition of the matrix model in the grand canonical ensemble \eqref{gc}, it is not difficult to observe that the function $\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(e^\mu)$ is invariant under the shift of $\mu$ by $2\pi i$.
For this reason, we define the {\it reduced} correlation function by
\begin{align}
\langle s_Y\bar s_{Y'}\rangle^\text{GC}_{k,M}(e^\mu)
=\sum_{n=-\infty}^\infty
\langle\!\langle s_Y\bar s_{Y'}\rangle\!\rangle^\text{GC}_{k,M}(\mu+2\pi in).
\end{align}
Then it was known that, if we further redefine the chemical potential $\mu$ into $\mu_\text{eff}$, the reduced grand canonical partition function can be described by the free energy of closed topological strings \cite{HMMO}
\begin{align}
\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}(\mu)=e^{F^\text{cl}({\bm T})}.
\label{Fcl}
\end{align}
Here, besides the perturbative part $F^\text{pert}({\bm T})$, the free energy
\begin{align}
F^\text{cl}({\bm T})=F^\text{pert}({\bm T})+F^\text{WS}({\bm T})+F^\text{MB}({\bm T}),
\end{align}
is separated into the worldsheet instanton part $F^\text{WS}({\bm T})$ and the membrane instanton part $F^\text{MB}({\bm T})$ given by $(s_\text{L/R}=2j_\text{L/R}+1)$
\begin{align}
F^\text{WS}({\bm T})
&=\sum_{\bm d}\sum_{j_\text{L},j_\text{R}}N^{\bm d}_{j_\text{L},j_\text{R}}
\sum_{n=1}^\infty\frac{(-1)^{(s_\text{L}+s_\text{R}-1)n}s_\text{R}\sin 2\pi g_\text{s}ns_\text{L}}
{n(2\sin\pi g_\text{s}n)^2\sin 2\pi g_\text{s}n}e^{-n{\bm d}\cdot{\bm T}},\nonumber\\
F^\text{MB}({\bm T})
&=\sum_{\bm d}\sum_{j_\text{L},j_\text{R}}N^{\bm d}_{j_\text{L},j_\text{R}}
\sum_{n=1}^\infty\frac{\partial}{\partial g_\text{s}}
\biggl[g_\text{s}
\frac{-\sin\frac{\pi n}{g_\text{s}}s_\text{L}\sin\frac{\pi n}{g_\text{s}}s_\text{R}}
{4\pi n^2(\sin\frac{\pi n}{g_\text{s}})^3}e^{-n\frac{{\bm d}\cdot{\bm T}}{g_\text{s}}}\biggr],
\end{align}
with the identification
\begin{align}
T_\pm=\frac{4\mu_\text{eff}}{k}\pm\pi i\biggl(1-\frac{2M}{k}\biggr),\quad
g_\text{s}=\frac{2}{k}.
\label{tgs}
\end{align}
For integral $k$ and $M$ the effective chemical potential is explicitly given by
\begin{align}
\mu_\text{eff}=\begin{cases}\displaystyle\mu-2(-1)^{\frac{k}{2}-M}e^{-2\mu}
{}_4F_3\biggl(1,1,\frac{3}{2},\frac{3}{2};2,2,2;16(-1)^{\frac{k}{2}-M}e^{-2\mu}\biggr),
&k:\text{even},\\
\displaystyle\mu+e^{-4\mu}
{}_4F_3\biggl(1,1,\frac{3}{2},\frac{3}{2};2,2,2;-16e^{-4\mu}\biggr),
&k:\text{odd}.
\end{cases}
\label{integermu}
\end{align}
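Since the hypergeometric series converges rapidly at large $\mu$, the expansion \eqref{integermu} can be checked with a hand-rolled truncated ${}_4F_3$ series (a pure-Python sketch for the odd-$k$ case; the truncation order and test values are our choices), confirming that $\mu_\text{eff}-\mu$ is exponentially small:

```python
import math

def hyp4f3(a, b, z, terms=60):
    """Truncated series for 4F3(a1..a4; b1..b3; z); valid for small |z|."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff * z ** n
        num = 1.0
        for ai in a:
            num *= ai + n
        den = 1.0
        for bi in b:
            den *= bi + n
        coeff *= num / (den * (n + 1))   # Pochhammer-ratio recursion
    return total

def mu_eff_odd_k(mu):
    """mu_eff for odd k: mu + e^{-4mu} 4F3(1,1,3/2,3/2; 2,2,2; -16 e^{-4mu})."""
    z = -16 * math.exp(-4 * mu)
    return mu + math.exp(-4 * mu) * hyp4f3((1, 1, 1.5, 1.5), (2, 2, 2), z)

mu = 3.0
correction = mu_eff_odd_k(mu) - mu
# leading instanton correction is e^{-4 mu} (1 + O(e^{-4 mu}))
assert abs(correction - math.exp(-4 * mu)) < 10 * math.exp(-8 * mu)
```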
When the BPS index $N^{\bm d}_{j_\text{L},j_\text{R}}$ is non-vanishing only for $s_\text{L}+s_\text{R}-1\equiv 0$ mod $2$, if we ignore the non-perturbative membrane instanton part $F^\text{MB}({\bm T})$, we can rewrite the worldsheet instanton part $F^\text{WS}({\bm T})$ as
\begin{align}
F^\text{WS}({\bm T})=\sum_{\bm d}\sum_{g=0}^\infty\sum_{n=1}^\infty
n^{\bm d}_g\frac{(2i\sin\pi g_\text{s}n)^{2g-2}}{n}e^{-n{\bm d}\cdot{\bm T}},
\end{align}
where we have introduced the Gopakumar-Vafa invariants $n^{\bm d}_g$ by
\begin{align}
\sum_{j_\text{L},j_\text{R}}N^{\bm d}_{j_\text{L},j_\text{R}}
\frac{s_\text{R}\sin 2\pi g_\text{s}ns_\text{L}}{(2\sin\pi g_\text{s}n)^2\sin 2\pi g_\text{s}n}
=\sum_{g=0}^\infty n^{\bm d}_g(2i\sin\pi g_\text{s}n)^{2g-2}.
\end{align}
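To illustrate how the invariants $n^{\bm d}_g$ are extracted from this identity, consider the toy case of a single BPS state with $(j_\text{L},j_\text{R})=(0,0)$, i.e. $N^{\bm d}_{0,0}=1$ (our hypothetical input). Sampling both sides at two generic values of $x=\pi g_\text{s}n$ and solving the resulting linear system for $(n^{\bm d}_0,n^{\bm d}_1)$ yields $n^{\bm d}_0=-1$ and $n^{\bm d}_1=0$, with the sign conventions of the identity above:

```python
import math

def lhs(x):
    """Left-hand side for (j_L, j_R) = (0, 0), i.e. s_L = s_R = 1, at x = pi*g_s*n."""
    return math.sin(2 * x) / ((2 * math.sin(x)) ** 2 * math.sin(2 * x))

def basis(g, x):
    """(2i sin x)^{2g-2} = (-4 sin^2 x)^{g-1}, real for integer g."""
    return (-4 * math.sin(x) ** 2) ** (g - 1)

# sample at two generic points and solve the 2x2 system for (n_0, n_1)
x1, x2 = 0.3, 0.7
a11, a12 = basis(0, x1), basis(1, x1)
a21, a22 = basis(0, x2), basis(1, x2)
b1, b2 = lhs(x1), lhs(x2)
det = a11 * a22 - a12 * a21
n0 = (b1 * a22 - a12 * b2) / det
n1 = (a11 * b2 - b1 * a21) / det
assert abs(n0 - (-1)) < 1e-9 and abs(n1) < 1e-9
```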
The free energy of closed topological strings was known to enjoy the multi-covering property.
Namely, if we define the multi-covering component as $(Q_\pm=e^{-T_\pm})$
\begin{align}
A_{k}({\bm Q})=\sum_{\bm d}\sum_{g=0}^\infty n^{\bm d}_g
\biggl(2i\sin\frac{2\pi}{k}\biggr)^{2g-2}
{\bm Q}^{\bm d},
\end{align}
or more explicitly
\begin{align}
A_{k,M}(Q)=\sum_{\bm d}\sum_{g=0}^\infty n^{\bm d}_g
\biggl(2i\sin\frac{2\pi}{k}\biggr)^{2g-2}
e^{(d_+-d_-)\frac{2\pi iM}{k}}Q^{d_++d_-},
\end{align}
after introducing $Q=-e^{-\frac{4\mu_\text{eff}}{k}}$, the free energy can be expressed as
\begin{align}
F^\text{WS}({\bm T})=\sum_{n=1}^\infty\frac{1}{n}A_{\frac{k}{n},M}(Q^n).
\end{align}
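The multi-covering sum can be undone by M\"obius inversion, $A(Q)=\sum_n\frac{\mu(n)}{n}F(Q^n)$. The following Python sketch illustrates this coefficientwise in a simplified toy setting where the $k\to\frac{k}{n}$ shift in $A_{\frac{k}{n},M}$ is ignored (our simplification) and the single-cover coefficients are random hypothetical numbers:

```python
import random

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, m = 1, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0    # square factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

D = 12
random.seed(0)
a = [0.0] + [random.uniform(-1, 1) for _ in range(D)]   # a[d], d = 1..D

# forward: F(Q) = sum_n A(Q^n)/n, i.e. f[m] = sum_{n | m} a[m/n] / n
f = [0.0] * (D + 1)
for d in range(1, D + 1):
    for n in range(1, D // d + 1):
        f[n * d] += a[d] / n

# inverse: A(Q) = sum_n mu(n)/n F(Q^n) recovers the single-cover coefficients
a_rec = [0.0] * (D + 1)
for d in range(1, D + 1):
    for n in range(1, D // d + 1):
        a_rec[n * d] += mobius(n) / n * f[d]

for d in range(1, D + 1):
    assert abs(a_rec[d] - a[d]) < 1e-12
```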
Higher powers of $Q$ in $F^\text{WS}({\bm T})$ consist both of the Gopakumar-Vafa invariants of higher degrees and the Gopakumar-Vafa invariants of lower degrees.
Physically, by regarding $n$ as the winding number, this is interpreted as the statement that genuine states of multiple degrees without windings and states of degree one wound multiple times contribute in the same way.
\subsection{One-point functions and open topological strings}
The normalized one-point function in the grand canonical ensemble is given by the free energy of open topological strings
\begin{align}
\sum_Y\frac{\langle\!\langle s_Y\rangle\!\rangle^\text{GC}_{k,M}(\mu)}
{\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}(\mu)}\tr_Y(V)
=e^{F^\text{op}({\bm T},\widehat V)},
\label{mattop}
\end{align}
where the free energy of open topological strings is given by
\begin{align}
F^\text{op}({\bm T},\widehat V)
=\sum_{\bm d}\sum_{g=0}^\infty\sum_{h=1}^\infty\sum_{\bm\ell}\sum_{n=1}^\infty
n^{\bm d,\bm\ell}_{g}\frac{(2i\sin\pi g_\text{s}n)^{2g-2}}{n}
\frac{1}{h!}
\prod_{j=1}^h\biggl(\frac{2i\sin\pi g_\text{s}n\ell_j}{\ell_j}\tr\widehat V^{n\ell_j}\biggr)
e^{-n{\bm d}\cdot{\bm T}},
\label{openfree}
\end{align}
with the identification
\begin{align}
\widehat V=Q_+^{-\frac{1}{2}}V,
\label{Vhat}
\end{align}
along with those in the partition function $Q_\pm=e^{-T_\pm}$, \eqref{tgs}, \eqref{integermu}.
Note that the relation in \cite{GKM,HHMO}
\begin{align}
\frac{\langle\!\langle e^{\sum_{n=1}^\infty
\frac{1}{n}\Str U^n\tr V^n}\rangle\!\rangle^\text{GC}_{k,M}(\mu)}
{\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}(\mu)}
=e^{F^\text{op}({\bm T},\widehat V)},
\end{align}
with $U=\diag(e^{\mu_1},\cdots,e^{\mu_N}|{-e^{\nu_1}},\cdots,-e^{\nu_{N+M}})$ is rewritten into \eqref{mattop} with the help of the orthogonal relation
\begin{align}
\frac{\prod_{j,k}(1+y_jz_k)}{\prod_{i,k}(1-x_iz_k)}=\sum_Ys_Y(x|y)s_Y(z),
\end{align}
where $s_Y(x|y)$ is the super Schur polynomial while $s_Y(z)$ is the ordinary Schur polynomial.
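As an illustration (a sketch with two bosonic variables $x_1,x_2$, one fermionic variable $y$ and a single $z$, our simplifying choice), the orthogonality relation can be checked order by order in $z$: with a single $z$ only one-row diagrams survive, since $s_Y(z)=0$ otherwise, and $s_{(n)}(x|y)=h_n(x)+h_{n-1}(x)\,e_1(y)$:

```python
def conv(a, b):
    """Multiply two truncated power series in z (lists of coefficients)."""
    out = [0.0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < len(out):
                out[i + j] += ai * bj
    return out

def h(n, xs):
    """Complete homogeneous symmetric polynomial h_n(xs), by recursion on variables."""
    if n == 0:
        return 1.0
    if not xs:
        return 0.0
    # h_n(x1..xm) = sum_{j=0}^{n} x1^j h_{n-j}(x2..xm)
    return sum(xs[0] ** j * h(n - j, xs[1:]) for j in range(n + 1))

order = 6
x1, x2, y = 0.3, -0.5, 0.7
geo1 = [x1 ** n for n in range(order + 1)]   # 1/(1 - x1 z)
geo2 = [x2 ** n for n in range(order + 1)]   # 1/(1 - x2 z)
lhs = conv(conv(geo1, geo2), [1.0, y] + [0.0] * (order - 1))   # times (1 + y z)
for n in range(order + 1):
    # one-row super Schur: s_(n)(x|y) = h_n(x) + h_{n-1}(x) e_1(y), e_j(y)=0 for j>1
    rhs = h(n, (x1, x2)) + (h(n - 1, (x1, x2)) * y if n >= 1 else 0.0)
    assert abs(lhs[n] - rhs) < 1e-12
```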
The identification \eqref{Vhat} without the fractional branes $M=0$ was pointed out in \cite{GKM} with $\widehat V=Q^{-\frac{1}{2}}V$.
Here we generalize the identification to the cases with $M\ne 0$ by replacing $Q$ with only one of the K\"ahler parameters $Q_+=e^{-T_+}$.
In appendix \ref{freeopen}, by carefully studying the phase factor as well, we check that the identification \eqref{Vhat} is consistent with the correspondence between the open topological string theory \eqref{mattop} and the perturbative part of the one-point function in the ABJM matrix model \eqref{1ptpert} in section \ref{pertpart}.
As in the case of closed topological strings, we can express the free energy as
\begin{align}
F^\text{op}({\bm T},\widehat V)=\sum_{h=1}^\infty\sum_{\bm\ell}\sum_{n=1}^\infty
\frac{1}{n}A^{\bm\ell}_{\frac{k}{n},M}(Q^n)\frac{1}{h!}\prod_{j=1}^h\frac{\tr V^{n\ell_j}}{\ell_j},
\label{multicoverF}
\end{align}
if we define the multi-covering component suitably.
We first define the multi-covering component as
\begin{align}
A_k^{\bm\ell}(Q_+,Q_-)=\sum_{\bm d}\sum_{g=0}^\infty n^{\bm d,\bm\ell}_g
\biggl(2i\sin\frac{2\pi}{k}\biggr)^{2g-2}\biggl(\prod_{j=1}^h2i\sin\frac{2\pi l_j}{k}\biggr)
(Q_+^{-\frac{1}{2}})^{|{\bm\ell}|-2d_+}(Q_-^{-\frac{1}{2}})^{-2d_-},
\end{align}
with $(Q_+^{-\frac{1}{2}},Q_-^{-\frac{1}{2}})=(ie^{\frac{2\mu_\text{eff}}{k}}e^{-\frac{\pi iM}{k}},-ie^{\frac{2\mu_\text{eff}}{k}}e^{\frac{\pi iM}{k}})$, and observe that we can safely change the sign of $Q_-^{-\frac{1}{2}}=-ie^{\frac{2\mu_\text{eff}}{k}}e^{\frac{\pi iM}{k}}$ into $Q_-^{-\frac{1}{2}}=ie^{\frac{2\mu_\text{eff}}{k}}e^{\frac{\pi iM}{k}}$ since only even powers appear.
This means that we can rewrite the multi-covering component into
\begin{align}
A_{k,M}^{\bm\ell}(Q)=\sum_{\bm d}\sum_{g=0}^\infty n^{\bm d,\bm\ell}_g
\biggl(2i\sin\frac{2\pi}{k}\biggr)^{2g-2}\biggl(\prod_{j=1}^h2i\sin\frac{2\pi l_j}{k}\biggr)
(e^{-\frac{\pi i M}{k}})^{|{\bm\ell}|-2(d_+-d_-)}
(Q^{-\frac{1}{2}})^{|{\bm\ell}|-2(d_++d_-)},
\end{align}
with $Q^{-\frac{1}{2}}=ie^{\frac{2\mu_\text{eff}}{k}}$.
Note that, although the free energy of open topological strings is clearly given in the power sum basis, the one-point function in the ABJM matrix model is more naturally given in the basis of the Schur function with the universal phase \eqref{1ptphase}.
To relate these two theories, as the first step, let us see how the multi-covering structure is given in the basis of the Schur function.
For this purpose we expand
\begin{align}
F^\text{op}({\bm T},\widehat V)=A^{(1)}_{1}\tr V
+\frac{A^{(1)}_{2}}{2}\tr V^2+\cdots
+A^{(1,1)}_{1}\frac{(\tr V)^2}{2}+\cdots+A^{(2)}_{1}\frac{\tr V^2}{2}+\cdots,
\end{align}
with the abbreviation $A^{\bm\ell}_{n}=A^{\bm\ell}_{\frac{k}{n},M}(Q^n)$ for \eqref{multicoverF}.
Then, we find that the exponentiation can be expanded as
\begin{align}
e^{F^\text{op}({\bm T},\widehat V)}=\sum_Y\widetilde A^Y\tr_Y(V),
\end{align}
where each coefficient is
\begin{align}
&\widetilde A^{\yng(1)}=A^{(1)}_{1},\nonumber\\
&\widetilde A^{\yng(2)}=\biggl[\frac{1}{2}A^{(1,1)}_{1}+\frac{1}{2}A^{(2)}_{1}\biggr]
+\biggl[\frac{1}{2}(A^{(1)}_{1})^2+\frac{1}{2}A^{(1)}_{2}\biggr],\nonumber\\
&\widetilde A^{\yng(1,1)}=\biggl[\frac{1}{2}A^{(1,1)}_{1}-\frac{1}{2}A^{(2)}_{1}\biggr]
+\biggl[\frac{1}{2}(A^{(1)}_{1})^2-\frac{1}{2}A^{(1)}_{2}\biggr].
\end{align}
For $\widetilde A^{\yng(2)}$ and $\widetilde A^{\yng(1,1)}$ it is natural to interpret the terms in the first bracket as the genuine states without windings and the terms in the second bracket as the states coming from the double windings of $\widetilde A^{\yng(1)}$.
This computation continues to higher degrees.
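The change of basis behind these coefficients can be verified numerically. The following Python sketch exponentiates a quadratic truncation of $F^\text{op}$ with hypothetical numerical values for the $A^{\bm\ell}_n$ (our placeholders) and converts to the Schur basis via $p_1^2=s_{(2)}+s_{(1,1)}$ and $p_2=s_{(2)}-s_{(1,1)}$, where $p_1=\tr V$ and $p_2=\tr V^2$:

```python
from math import factorial

def mul(P, Q, maxdeg=2):
    """Multiply polynomials in p1, p2; key (i, j) means p1^i p2^j of degree i + 2j."""
    out = {}
    for (i1, j1), c1 in P.items():
        for (i2, j2), c2 in Q.items():
            i, j = i1 + i2, j1 + j2
            if i + 2 * j <= maxdeg:
                out[(i, j)] = out.get((i, j), 0.0) + c1 * c2
    return out

def add(P, Q, s=1.0):
    out = dict(P)
    for key, c in Q.items():
        out[key] = out.get(key, 0.0) + s * c
    return out

# hypothetical numerical values of the multi-covering components
A1_1, A1_2, A11_1, A2_1 = 0.3, -0.7, 0.45, 0.9
# F = A1_1 p1 + (A1_2/2) p2 + (A11_1/2) p1^2 + (A2_1/2) p2, truncated at two boxes
F = {(1, 0): A1_1, (0, 1): A1_2 / 2 + A2_1 / 2, (2, 0): A11_1 / 2}
expF, Fm = {(0, 0): 1.0}, {(0, 0): 1.0}
for m in range(1, 3):                       # e^F = sum_m F^m / m!, up to degree 2
    Fm = mul(Fm, F)
    expF = add(expF, Fm, 1.0 / factorial(m))

# Schur basis: coefficient of s_(2) is c(p1^2) + c(p2), of s_(1,1) is c(p1^2) - c(p2)
A_sym = expF.get((2, 0), 0.0) + expF.get((0, 1), 0.0)
A_asym = expF.get((2, 0), 0.0) - expF.get((0, 1), 0.0)
assert abs(A_sym - ((A11_1 + A2_1) / 2 + (A1_1 ** 2 + A1_2) / 2)) < 1e-12
assert abs(A_asym - ((A11_1 - A2_1) / 2 + (A1_1 ** 2 - A1_2) / 2)) < 1e-12
```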
Comparing the proposals for the partition function and the one-point function from the viewpoint of the tau function of the integrable system, we find a close similarity.
It was observed in \cite{BGT} that the partition function of the spectral curve (generalizing the ABJM matrix model) corresponds to the tau function of the $q$-Painleve equation and it was proved in \cite{MaMo,FM} that the normalized one-point function in the ABJM matrix model satisfies the Giambelli and Jacobi-Trudi identities, which are shared with the expansion coefficients of the tau function of the soliton theory \cite{Sato,MJD,AKLTZ}.
These two proposals seem compatible with each other by regarding $e^{F^\text{cl}}$ and $e^{F^\text{op}}$ as the ``closed'' and ``open'' tau functions respectively.
Also if we combine the proposals \eqref{Fcl} and \eqref{mattop}, we find
\begin{align}
\sum_Y\langle\!\langle s_Y\rangle\!\rangle^\text{GC}_{k,M}(\mu)\tr_Y(V)
=e^{F^\text{cl}({\bm T})+F^\text{op}({\bm T},\widehat V)}.
\end{align}
This relation combining closed topological strings and open topological strings may make the open-closed duality between the ABJM matrix model \cite{HaOk,KM} and the topological strings \cite{GO1,GO2} clearer.
\subsection{BPS indices}
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$1$&$1$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$3$&$5$&$7$&$9$\\
\cline{1-6}
$2$&$0$&$5$&$35$&$135$\\
\cline{1-5}
$3$&$0$&$7$&$135$\\
\cline{1-4}
$4$&$0$&$9$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$8$&$72$\\
\cline{1-5}
$3$&$0$&$0$&$72$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$0$&$11$\\
\cline{1-5}
$3$&$0$&$0$&$11$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}\\
$n^{{\bm d},{\bm\ell}=(1)}_{g=0}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(1)}_{g=1}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(1)}_{g=2}$
\caption{The Gopakumar-Vafa invariants $n^{{\bm d},{\bm\ell}}_g$ for $\bm\ell=(1)$.
Each column and each row denote the specific values of $d_+$ and $d_-$ respectively.}
\label{Y1}
\end{center}
\end{table}
After the review of the correspondence, let us turn to the study of the Gopakumar-Vafa invariants in the open topological string theory.
Although in \cite{HHMO} it was found that the one-point function of the half-BPS Wilson loop is described by the diagonal BPS indices identified in \cite{GKM}, it was not known how the diagonal BPS indices are split.
In particular, since some of the diagonal BPS indices given in \cite{GKM} are odd and hence can only be decomposed asymmetrically in the degree difference, it is interesting to see how they are split.
With the abundant numerical data, here we read off the non-perturbative one-point function in the grand canonical ensemble in appendix \ref{np1pt} and identify the split for the first few BPS indices.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$2$&$4$&$6$&$8$\\
\cline{1-6}
$2$&$0$&$4$&$36$&$160$\\
\cline{1-5}
$3$&$0$&$6$&$160$\\
\cline{1-4}
$4$&$0$&$8$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$7$&$74$\\
\cline{1-5}
$3$&$0$&$0$&$74$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$0$&$10$\\
\cline{1-5}
$3$&$0$&$0$&$10$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}\\
$n^{{\bm d},{\bm\ell}=(1,1)}_{g=0}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(1,1)}_{g=1}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(1,1)}_{g=2}$
\caption{The Gopakumar-Vafa invariants $n^{{\bm d},{\bm\ell}}_g$ for $\bm\ell=(1,1)$.
The asymmetry of the Gopakumar-Vafa invariants in the degrees appears in $n^{(d_+,d_-)=(0,1),{\bm\ell}=(1,1)}_{g=0}=1$.}
\label{Y11}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$2$&$4$&$6$&$8$\\
\cline{1-6}
$2$&$0$&$4$&$24$&$96$\\
\cline{1-5}
$3$&$0$&$6$&$96$\\
\cline{1-4}
$4$&$0$&$8$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$7$&$56$\\
\cline{1-5}
$3$&$0$&$0$&$56$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$&$4$&$5$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$0$&$0$&$0$&$0$&$0$\\
\cline{1-6}
$2$&$0$&$0$&$0$&$10$\\
\cline{1-5}
$3$&$0$&$0$&$10$\\
\cline{1-4}
$4$&$0$&$0$\\
\cline{1-3}
$5$&$0$\\
\cline{1-2}
\end{tabular}\\
$n^{{\bm d},{\bm\ell}=(2)}_{g=0}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(2)}_{g=1}$\hspace{36mm}
$n^{{\bm d},{\bm\ell}=(2)}_{g=2}$
\caption{The Gopakumar-Vafa invariants $n^{{\bm d},{\bm\ell}}_g$ for $\bm\ell=(2)$.
The asymmetry of the Gopakumar-Vafa invariants in the degrees appears in $n^{(d_+,d_-)=(0,1),{\bm\ell}=(2)}_{g=0}=1$.}
\label{Y2}
\end{center}
\end{table}
The results are given in tables \ref{Y1}, \ref{Y11}, \ref{Y2} for ${\bm\ell}=(1)$, ${\bm\ell}=(1,1)$, ${\bm\ell}=(2)$ respectively up to $d_++d_-=5$ and table \ref{3boxes} for ${\bm\ell}=(1,1,1)$, ${\bm\ell}=(2,1)$, ${\bm\ell}=(3)$ up to $d_++d_-=3$.
It is clear that the Gopakumar-Vafa invariants for ${\bm\ell}=(1,1)$ and ${\bm\ell}=(2)$ at $g=0$ are split asymmetrically as $n^{(d_+,d_-)=(0,1),{\bm\ell}=(1,1)}_{g=0}=n^{(d_+,d_-)=(0,1),{\bm\ell}=(2)}_{g=0}=1$ and the asymmetry is even larger for $|{\bm\ell}|=3$ in table \ref{3boxes}.
In determining the BPS indices for $|{\bm\ell}|=2$, we assume the symmetry in exchanging the degrees at higher orders of $Q$.
This is because the one-point function in the grand canonical ensemble in appendix \ref{np1pt} is identical for ${\bm\ell}=(1,1)$ and ${\bm\ell}=(2)$ beyond the order of $Q^2$.
Although we only have data with $k=6,8,12$, let us assume this is the case for any $k$ and $M$.
This implies that $\langle\!\langle(\Str U)^2\rangle\!\rangle^\text{GC}_{k,M}/\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}=(A^{(1,1)}_1+(A^{(1)}_1)^2)/2$ and $\langle\!\langle\Str U^2\rangle\!\rangle^\text{GC}_{k,M}/\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}=(A^{(2)}_1+A^{(1)}_2)/2$ are real and pure imaginary respectively.
Combining with the fact that $\langle\!\langle\Str U\rangle\!\rangle^\text{GC}_{k,M}/\langle\!\langle 1\rangle\!\rangle^\text{GC}_{k,M}=A^{(1)}_1$ is also pure imaginary, we find that $A^{(1,1)}$ and $A^{(2)}$ are themselves real and pure imaginary, which implies the symmetry in exchanging the degrees.
From the geometric viewpoint of local ${\mathbb P}^1\times{\mathbb P}^1$ the two K\"ahler parameters corresponding to the sizes of ${\mathbb P}^1$ should be symmetric under the exchange.
It is only after we include the Wilson loop insertion, which singles out one ${\mathbb P}^1$, that the split of the diagonal BPS indices happens.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$2$&$3$\\
\cline{1-4}
$2$&$1$&$6$\\
\cline{1-3}
$3$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$2$&$3$\\
\cline{1-4}
$2$&$1$&$6$\\
\cline{1-3}
$3$&$0$\\
\cline{1-2}
\end{tabular}
\begin{tabular}{|c||c|c|c|c|}
\hline
&$0$&$1$&$2$&$3$\\
\hline\hline
$0$&$0$&$0$&$0$&$0$\\
\hline
$1$&$1$&$2$&$3$\\
\cline{1-4}
$2$&$1$&$6$\\
\cline{1-3}
$3$&$0$\\
\cline{1-2}
\end{tabular}\\
$n^{{\bm d},{\bm\ell}=(1,1,1)}_{g=0}$\hspace{18mm}
$n^{{\bm d},{\bm\ell}=(2,1)}_{g=0}$\hspace{18mm}
$n^{{\bm d},{\bm\ell}=(3)}_{g=0}$
\caption{The Gopakumar-Vafa invariants $n^{{\bm d},{\bm\ell}}_g$ for $\bm\ell=(1,1,1)$, $\bm\ell=(2,1)$ and $\bm\ell=(3)$ respectively.}
\label{3boxes}
\end{center}
\end{table}
\section{Conclusion and discussions}\label{conclusion}
In this paper, we have introduced the two-point function and studied it numerically.
Though we have defined the two-point function so that it does not decompose into the one-point functions trivially, after our full analysis we have found two unexpected relations to the one-point functions.
One of them relates the non-trivial two-point function to the trivial one which further reduces to a combination of the one-point functions through the Littlewood-Richardson rule.
The other relates the imaginary part of the two-point function to the one-point functions in the representation with the box number smaller than the total number by two.
We have also revisited the one-point function and identified how the diagonal BPS indices split by the degree difference asymmetrically.
Apparently there are many questions related to our result.
The first most important one would be the physical origin of the two-point function.
In this paper we have defined the two-point function with two characters of the opposite charges in the matrix model.
It is of course desirable to understand how the two-point function arises from the correlation function in the ABJM theory.
We would like to identify it as the two-point function of the physical Wilson loops in the ABJM theory and derive our two-point function from the localization techniques.
Although we have the general expression for the conjugate relation in section \ref{superposerel}, the general expression for the descent relation in section \ref{interfererel} is missing, where we have only proposed some relations in the main text and in appendix \ref{interference}.
Also, the physical interpretation of the two relations we have found is unclear.
We can imagine that the two relations reflect the topological nature and the orientifold or symplectic nature of the ABJM matrix model.
Especially in appendix \ref{heisenberg}, we make a proposal on the relation between the two-point function with the main phase removed $\im e^{-i\Theta^{Y,Z}_{k,M}}\langle s_Y\bar s_Z\rangle^\text{GC}_{k,M}$ and a bracket between $s_Y$ and $s_Z$.
We hope to elaborate on this interpretation.
It is surprising to us that the two-point function turns out to closely relate to the representation theory of the supergroup U$(N_1|N_2)$ in the Fermi gas formalism.
It is, however, still unclear what role the composite Young diagram appearing in the Fermi gas formalism plays in the representation theory.
Our definition of the non-trivial two-point functions in section \ref{definition} has a direct generalization to other ${\cal N}=4$ superconformal Chern-Simons matrix models of type $\widehat A$ \cite{HM,MN1,MN2,MN3,HHO,MNN,MNY}.
Namely, for the circular quiver gauge group $\prod_{i}$U$(N_i)$ with an even number of nodes, the Fermi gas formalism works naturally when the arguments of the inserted supersymmetric Schur functions appear reversely for the adjacent nodes as in $s_{Y_i}(x^{(N_i)}|x^{(N_{i+1})})$ and $s_{Y_{i+1}}((x^{-1})^{(N_{i+1})}|(x^{-1})^{(N_{i+2})})$.
In terms of the original gauge theory, we expect that, for the multiple insertion of the Wilson loops in the ${\cal N}=4$ superconformal Chern-Simons theories, the correlation functions preserve half of the supersymmetries only when the charges of the two adjacent loop insertions are reverse.
We hope to study this fact following the discussions in \cite{OWZ,CDT,GLMPS,BGLMPS,LMPZ}.
\section{Introduction}
High-dimensional data arise frequently in many fields of the contemporary science.
In addition, it is common that the sample size is small relative
to the dimensionality of the data. Such intrinsically complex data structure introduces new challenges in
statistical analysis and inference, and requires innovative methods and theories \cite{fan_lv:2008, hall_marron_neeman:2005}.
In this context, we focus on the regression problem, which plays an important role in understanding the relationship between the response variable and the predictors.
Conventionally, the probability density function (p.d.f.) of the predictor vector is assumed to be non-degenerate. In this case, variable selection and dimension reduction are fundamental issues and have been extensively studied
\cite{fan_li:2001, fan_peng:2004, zhang_jiang:2010, fan_lv:2008, fan_song:2010, li_liang:2008, xia:2007, xia:2008}.
However, these problems remain difficult in the nonparametric regression setting, because commonly the models are built in the ambient space and
the curse of dimensionality is a serious issue \cite{lafferty_wasserman:2008, fan_feng_song:2011,zhu_li:2011}.
Recently, it has been noticed that, in practice, the predictor vector often takes on values in a lower-dimensional, nonlinear manifold.
More specifically, in the cryo Electron Microscopy problem \cite{frank:2006}, the images are located on the $3$-dimensional manifold $SO(3)$; in the radar signal example the data can be modeled as being sampled from the Grassmannian manifold \cite{chikuse:2003}; natural images are argued to be lying on a Klein bottle \cite{carlsson_Tigran:2008}; the general manifold model for image and signal analysis is considered in \cite{Gabriel:2009}; and spherical, circular and oriental data are distributed on special types of manifolds \cite{mardia_jupp:2000}; to name but a few.
Based on the manifold assumption, in the past few years,
numerous papers have been devoted to learning the manifold, or more generally the underlying
structure \cite{coifman_lafon:2006, lerman_zhang:2010, vdm}, and a few have addressed regression on manifolds \cite{pelletier:2006, bickel_li:2007, aswani_bickel:2011}.
In the manifold learning literature, the Nadaraya-Watson kernel regression estimator has been used to construct an estimator of the Laplace-Beltrami operator of the manifold; however, to avoid the boundary blowup problem, Neumann's boundary condition is required \cite{coifman_lafon:2006}. When the $p$-dimensional predictor is non-degenerate in $\mathbb{R}^p$, it is well known that the asymptotic bias of the traditional local linear regression (LLR) in the Euclidean setup is related to the Laplacian of the regression function and that it alleviates the boundary effect \cite{ruppert_wand:1994}. Thus, it is interesting to see if these properties still hold for some properly constructed LLR in the manifold setup, as it will enable us to obtain a new estimator for the Laplace-Beltrami operator of the manifold with a different boundary condition.
Besides, when the predictors are concentrated on a manifold, regression models that take its rich geometric structure into account are intuitively appealing.
In \cite{pelletier:2006,loubes_pelletier:2008} the kernel regression estimator is constructed directly on the manifold, using the true geodesic distance both in determining the nearest neighbors and in constructing the kernel weights.
Another approach is to employ the usual LLR in the ambient space $\mathbb{R}^p$ with regularization imposed on the coefficients in the directions perpendicular to a tangent plane estimate \cite{aswani_bickel:2011}.
However, there are several interesting and important issues left unsolved.
First, although the idea of constructing kernel estimators on the manifold in \cite{pelletier:2006,loubes_pelletier:2008} is appealing, it is unrealistic to make use of the geodesic distance. It is non-trivial to construct LLR on the manifold without knowing the manifold structure.
Second, it remains unknown whether the methods in \cite{aswani_bickel:2011} alleviate the boundary effect, and it is not obvious whether the asymptotic biases have any connections with the Laplace-Beltrami operator of the manifold.
Third, when $p$ is large, fitting LLR in $\mathbb{R}^p$ as in \cite{aswani_bickel:2011} can be computationally expensive even if regularization has been imposed.
Fourth, in \cite{aswani_bickel:2011} the bandwidth used in the tangent plane estimation is the same as the one employed in the LLR. It is unclear if we can benefit from using different bandwidths in these two steps.
Fifth, the quantity ``exterior derivative $d_xf|_{x_0}$'' in \cite[(4.5)]{aswani_bickel:2011} is subtle and the details are missing.
Furthermore, the topology of the embedded manifold, in particular, the condition number \cite{NSW}, is another important issue that needs to be taken care of.
Motivated by the above observations, in this paper, we explore further the Riemannian geometric structure of the manifold, in particular the tangent bundle structure, and construct the LLR directly on an estimate of the tangent plane to the manifold, without knowing the geodesic distance and manifold structure.
Specifically, we first estimate the intrinsic dimension $d$, and deal with the condition number issue when determining the nearest neighbors using the Euclidean distance. Subsequently, we obtain an estimate of the embedded tangent plane based on
local principal
component analysis (PCA). Finally, we construct the LLR on the tangent plane estimate using the coordinates of the nearest neighbors with respect to the orthonormal basis.
We call our approach the Manifold Adaptive Local Linear Estimator for the Regression (MALLER).
In addition, we suggest a procedure for selecting the bandwidth in the regression step that can handle heteroscedastic errors, which arise often in practice. A consequence of the proposed MALLER is an estimator for the gradient and the Laplace-Beltrami operator of the manifold.
Throughout this paper the dimension $p$ is kept as a fixed number and we assume the predictors are observed without any noise. Thus, if the sample size $n$ is large enough compared to the intrinsic dimension $d$, the tangent plane can be estimated accurately so that the dimensionality of the data can be reduced from $p$ to $d$. Under this circumstance, the first consequence is a much more computationally efficient scheme when $p$ is large and $p\gg d$, since all the computations in the regression step depend only on $d$. Another consequence is the ability to handle the practical situations where $n$ is less than $p$,
in which case no sparsity conditions like those in \cite{aswani_bickel:2011} are needed for MALLER to work. The isomap face data analysis illustrates these points.
We provide detailed theoretical justification of the convergence of MALLER by carefully analyzing the curvature, non-uniform sampling and boundary effects. In particular, the MALLER and gradient estimators achieve the respective optimal rates of convergence pertaining to nonparametric regression on $d$-dimensional manifolds.
In addition, the subtle relationship between the bandwidth used in the tangent plane estimation and the one used in the LLR is made explicit: it is crucial that the former should be of a smaller order than the latter, otherwise larger biases are introduced in the LLR on the tangent plane estimate and in the Laplace-Beltrami estimator mentioned below. This issue is particularly important when estimating the Laplace-Beltrami operator.
Moreover,
MALLER enjoys both the automatic boundary correction and the design adaptive properties possessed by the LLR in the $\mathbb{R}^d$ setup \cite{ruppert_wand:1994}. These properties have strong implications in manifold learning.
In particular, if the manifold has a smooth boundary, the Laplace-Beltrami operator estimated by our method MALLER is different from the one estimated by employing the Nadaraya-Watson kernel method, in the sense that the two are under different boundary conditions.
Since the main focus of this paper is regression on manifolds, further theoretical properties and applications of the new estimator of the Laplace-Beltrami operator are left as a future work.
The rest of this paper is organized as follows.
The proposed MALLER algorithm and a bandwidth selection procedure are introduced in Sections \ref{algorithm} and \ref{bandwidth} respectively. Asymptotic results for the conditional mean squared errors of MALLER and the gradient estimator in both the interior and boundary of the manifold are given in Section \ref{theory}. In Section \ref{numerics} we examine finite sample performance of MALLER and compare it with those of \cite{aswani_bickel:2011} through one simulation study and application to the isomap face dataset, and we demonstrate the efficacy of our gradient estimator via a simulated example. Section \ref{diffusionmap} gives a brief introduction of the diffusion map framework and discusses application of MALLER to estimating the Laplace-Beltrami operator of the manifold.
In Section \ref{discussion}, besides addressing the relationship between MALLER and the NEDE algorithm in \cite[(4.6)]{aswani_bickel:2011}, we discuss various
related open questions and future directions in both regression on manifolds and manifold learning. Proofs of the theoretical results can be found in the Supplementary, which also contains a brief introduction to the exterior derivative, covariant derivative and gradient of a function on the manifold.
\section{Model and Estimation Procedure}\label{algorithm}
Let $Y$ denote the scalar response variable and let $X$ be a $p$-dimensional random vector.
Assume that the distribution of $X$ is concentrated on a $d$-dimensional compact, smooth Riemannian manifold $\text{M}$ embedded in $\mathbb{R}^p$ via $\iota:\text{M}\hookrightarrow \mathbb{R}^p$, where $\text{M}$ may have boundary.
We consider the following regression model
\begin{equation}\label{model1}
Y=m(\iota^{-1}(X))+\sigma(\iota^{-1}(X))\,\epsilon,
\end{equation}
where $\epsilon$ is a random error independent of $X$ with $\mathbb{E} (\epsilon)=0$ and $\operatorname{Var}(\epsilon)=1$, and both the regression function $m$ and the conditional variance function $\sigma^2$ are defined on $\text{M}$.
Let $\{(X_l,Y_l)\}_{l=1}^n$ denote a random sample observed from model (\ref{model1})
with $\mathcal{X}:=\{X_l\}_{l=1}^n$ being sampled from $X$.
Then, given $x\in\text{M}$, the problem is to estimate nonparametrically
$m({x})$, and its higher order covariant derivatives at ${x}$ if $m$ is smooth enough, based on $\{(X_l,Y_l)\}_{l=1}^n$. Here, $x$ may or may not belong to $\mathcal{X}$.
For the sake of clarity, we should distinguish between the point $x\in\iota(\text{M})$ and the point $\iota^{-1}(x)\in \text{M}$. However, to simplify the notation, for the rest of this paper we use the same symbol $x$ to denote $x\in\iota(\text{M})$ or $\iota^{-1}(x)\in\text{M}$ and use $X$ to denote $X\in\iota(\text{M})$ or $\iota^{-1}(X)\in\text{M}$ unless there is any ambiguity in the context. In addition, throughout this paper we assume that the sample size $n\gg d$
and $X$ is not contaminated by error.
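Model (\ref{model1}) is easy to instantiate on a toy manifold. Below is a minimal sketch with $\text{M}=S^1$ embedded in $\mathbb{R}^2$; the particular choices of $m$ and $\sigma$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy instance of model (1): M = S^1, iota(theta) = (cos theta, sin theta).
# The regression function m and noise level sigma below are illustrative.
rng = np.random.default_rng(0)
n = 400
theta = rng.uniform(0.0, 2.0 * np.pi, n)      # intrinsic coordinate on M
X = np.c_[np.cos(theta), np.sin(theta)]       # predictors living on iota(M)
m = np.sin(2.0 * theta)                       # regression function on M
sigma = 0.1 + 0.05 * np.cos(theta)            # heteroscedastic noise level
eps = rng.standard_normal(n)                  # E eps = 0, Var eps = 1
Y = m + sigma * eps                           # responses
```

Note that $\sigma$ stays strictly positive, and the predictors $X$ lie exactly on the unit circle, so their p.d.f. is degenerate as a function on $\mathbb{R}^2$.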
In the following subsections we discuss the steps in the MALLER algorithm: (1) estimating the intrinsic dimension $d$ of the manifold, (2) determining the true nearest neighbors of $x$ on $\text{M}$ using the Euclidean distance, (3) estimating the embedded tangent plane by local PCA, and (4) constructing the LLR on the embedded tangent plane estimate.
Before going into the details, the MALLER algorithm is summarized below.
\noindent {\bf The MALLER Algorithm:}
\begin{enumerate}
\item Calculate the MLE intrinsic dimension estimate $\hat{d}$ in \cite{levina_bickel:2005}, and treat it as $d$.
\item For the given $x$, $h_{\text{pca}}$ and $h$ determine $\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}}$ and $\mathcal{N}^{\text{true}}_{x,h}$, the two sets of estimates of the true nearest neighbors of $x$ on $\text{M}$ within a Euclidean ball of radius $\sqrt{h_{\text{pca}}}$ and $\sqrt{h}$ respectively, which are defined by (\ref{estimate:neighbors}).
\item Employ the local PCA based on the points in $\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}}$ to get an orthonormal basis $\{U_k(x)\}_{k=1}^d$ for the embedded tangent plane estimate at $x$, thus obtaining $\{\boldsymbol{x}_{l}\}_{l=1}^n$, the coordinates of the projections of $\{X_l-x\}_{l=1}^n$ onto the affine space spanned by $\{U_k(x)\}_{k=1}^d$ with respect to this basis. See Section \ref{section:pca} for the details.
\item For given kernel $K$ and bandwidth $h$, obtain $\hat{\boldsymbol{\beta}}_x$ by the LLR (\ref{functionalmfd}) based on $\left \{\boldsymbol{x}_{l}: X_l \in \mathcal{N}^{\text{true}}_{x,h}\right\}$. Then we can compute the regression, embedded gradient and covariant derivative estimators defined in (\ref{algo:estimator:mhat}), (\ref{algo:estimator:CovDeri}) and (\ref{algo:estimator:Dmhat}) respectively.
\end{enumerate}
\subsection{Intrinsic dimension estimation}
Given the manifold assumption, in general the intrinsic dimension $d$ of the manifold $\text{M}$ is unknown a priori and needs to be estimated based on the sample $\mathcal{X}$. There exist many methods for estimating the intrinsic dimension
and we have picked the maximum likelihood estimation (MLE) method introduced in \cite{levina_bickel:2005} to estimate $d$; we denote the estimated dimension by $\hat{d}$. Since $d\ll n$, we assume the estimated dimension $\hat{d}$ is correct and hence will not distinguish between $d$ and $\hat{d}$.
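A compact version of the Levina--Bickel MLE can be sketched as follows; the normalization by $k-1$ follows the original MLE formula, and averaging the per-point estimates over the sample is an illustrative choice.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=10):
    """Levina-Bickel MLE of the intrinsic dimension d.

    Per-point estimate: [ (1/(k-1)) sum_{j<k} log(T_k/T_j) ]^{-1},
    where T_j is the Euclidean distance to the j-th nearest neighbor;
    the per-point estimates are then averaged.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    T = np.sort(D, axis=1)[:, 1:k + 1]          # T_1, ..., T_k per point
    logs = np.log(T[:, -1:] / T[:, :-1])        # log(T_k / T_j), j < k
    d_hat = (k - 1) / logs.sum(axis=1)
    return float(d_hat.mean())
```

For points on a $2$-dimensional plane embedded in $\mathbb{R}^5$, the estimate comes out near $2$ (with the mild upward bias known for this normalization).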
\subsection{Determining the nearest neighbors}
Numerically determining the neighbors of $x\in\text{M}$ using the Euclidean distance is problematic due to the embedding structure of the manifold, that is, the condition number of the embedded manifold \cite{NSW}.
The reach of $\text{M}$ is defined as the largest number $\tau\geq0$ so that for every $0 \leq r<\tau$, the open normal bundle of $\text{M}$ of radius $r$ is still embedded in $\mathbb{R}^p$. Since $\text{M}$ is assumed to be compact, we know $\tau>0$. The quantity $1/\tau$ is referred to as the ``condition number'' of $\text{M}$ \cite{NSW}. For the given $x\in\text{M}$ and any $\delta>0$, denote respectively the set of Euclidean $\sqrt{\delta}$-neighbors of $x$ from $\mathcal{X}$ and the set of geodesic $\sqrt{\delta}$-neighbors of $x$ from $\mathcal{X}$ as
$$
\mathcal{N}^{\mathbb{R}^p}_{x,\delta} = \big\{X_j\in \mathcal{X}: \|X_j-x\|_{\mathbb{R}^p} < \sqrt{\delta} \big\}\mbox{ and } \mathcal{N}^\text{M}_{x,\delta} = \big\{X_j\in\mathcal{X}: d(X_j,x) < \sqrt{\delta} \big\},
$$
where $d(\cdot,\cdot)$ is the geodesic distance.
When $\delta$ is small enough, it is shown in Lemma \ref{lemma4} in the Supplementary that $\mathcal{N}^{\mathbb{R}^p}_{x,\delta}$ is roughly the same as $\mathcal{N}^\text{M}_{x,\delta}$, which is the main fact rendering the whole algorithm feasible. However, when $\sqrt{\delta}$ exceeds $2\tau$, $\mathcal{N}^\text{M}_{x,\delta}$ might be a strict subset of $\mathcal{N}^{\mathbb{R}^p}_{x,\delta}$. See Figure \ref{fig:condition}.
This fact, combined with the lack of a priori knowledge of $\text{M}$, in particular of the geodesic distance and the condition number $1/\tau$, leads to the problem.
Since the manifold structure is our main concern, we need to learn $\mathcal{N}^\text{M}_{x,\delta}$. The problem is thus reduced to determining which points in $\mathcal{N}^{\mathbb{R}^p}_{x,\delta}$ are in $\mathcal{N}^\text{M}_{x,\delta}$ and which are not.
To cope with this problem, we apply the ``self-tuning spectral clustering'' algorithm \cite{Zelnik-Manor_Perona:2004} to the set $\mathcal{N}^{\mathbb{R}^p}_{x,\delta}$. We denote
\begin{equation}\label{estimate:neighbors}
\mathcal{N}^{\text{true}}_{x,\delta}:=\big\{X_j\in \mathcal{N}^{\mathbb{R}^p}_{x,\delta}: X_j\mbox{ is in the same cluster as }x \big\}.
\end{equation}
Then, according to Lemma \ref{lemma4} in the Supplementary, $\mathcal{N}^{\text{true}}_{x,\delta}$ is an accurate estimate of $\mathcal{N}^{\text{M}}_{x,\delta}$.
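The clustering step can be mimicked without the full self-tuning spectral clustering machinery. The sketch below is a simpler stand-in: inside the Euclidean ball it keeps the connected component of $x$ in a symmetric $k$-NN graph, which likewise separates geodesic neighbors from ``shortcut'' neighbors whenever nearby sheets of the manifold are farther apart than the within-sheet sampling scale.

```python
import numpy as np

def true_neighbors(X, x, delta, k=5):
    """Estimate N^true_{x,delta}: the geodesic sqrt(delta)-neighbors
    of x among its Euclidean sqrt(delta)-neighbors.

    Stand-in for the self-tuning spectral clustering step: inside the
    Euclidean ball, keep the connected component of x in a symmetric
    k-NN graph, so points on a nearby but geodesically distant sheet
    are discarded.  Returns indices into X.
    """
    ball = np.where(np.linalg.norm(X - x, axis=1) < np.sqrt(delta))[0]
    P = np.vstack([x[None, :], X[ball]])               # node 0 is x itself
    m = len(P)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]             # k nearest neighbors
    A = np.zeros((m, m), dtype=bool)
    for i in range(m):
        A[i, nn[i]] = True
    A |= A.T                                           # symmetrize the graph
    seen, frontier = {0}, [0]                          # graph search from x
    while frontier:
        i = frontier.pop()
        for j in np.where(A[i])[0]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return np.array([idx for node, idx in enumerate(ball, start=1)
                     if node in seen], dtype=int)
```

On two parallel one-dimensional sheets at Euclidean distance $0.2$ (as in Figure \ref{fig:condition}), only the sheet containing $x$ survives the filtering.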
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=0.7\textwidth]{condition.png}}
\end{center}
\vspace{-0.6cm}
\caption{\small Condition number. A $1$-dim manifold $\text{M}$ (blue curve) is embedded in $\mathbb{R}^p$ with the condition number $1/\tau$. For the fixed $x\in\text{M}$, the black circle is of radius $\sqrt{\delta}$ and is centered at $x$. The Euclidean $\sqrt{\delta}$-neighbors of $x$, $\mathcal{N}^{\mathbb{R}^p}_{x,\delta}$, consists of both the red and green crosses. However, the geodesic $\sqrt{\delta}$-neighbors (true neighbors) of $x$, $\mathcal{N}^{\text{M}}_{x,\delta}$, consists of only the red crosses but not the green crosses.
}
\label{fig:condition}
\vspace{-0.3cm}
\end{figure}
\subsection{Embedded tangent plane estimation} \label{section:pca}
Write the tangent plane of the manifold at ${x}\in\text{M}$ as $T_{{x}}\text{M}$. Denote by $\iota_*$ the total differential of $\iota$
and by $\iota_*T_{{x}}\text{M}$ the embedded tangent plane in $\mathbb{R}^p$. Note that $\iota_*T_{{x}}\text{M}$ is a $d$-dimensional affine space inside $\mathbb{R}^p$ which is tangential to $\text{M}$ at $x$.
Next, we find an orthonormal basis of an approximation to the embedded tangent plane $\iota_*T_{{x}}\text{M}$.
Fix $h_{\text{pca}}> 0$. Assume that there are $N_x$ points in $\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}}$ and rewrite them as
$\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}} = \{X_{x_1},\ldots,X_{x_{N_x}} \}.$
Let
$$
\Sigma_x=\frac{1}{n}\sum_{l=1}^{N_x}\big(X_{x_l}-\mu_x\big)\big(X_{x_l}-\mu_x\big)^T
$$
be the sample covariance matrix of $\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}}$, where $\mu_x$ is the sample mean of $\mathcal{N}^{\text{true}}_{x,h_{\text{pca}}}$.
Denote by $\{U_k(x)\}_{k=1}^d$ the eigenvectors corresponding to the $d$ largest eigenvalues of $\Sigma_x$, where $U_k(x)$ is a $p\times 1$ unit length column vector and $d$ is the dimension of the manifold $\text{M}$,
and define a $p\times d$ matrix
\begin{equation}\label{algorithm:BX}
B_x:=\big[
\begin{array}{ccc}
U_{1}(x) & \ldots & U_{d}(x)
\end{array}
\big].
\end{equation}
Let $\boldsymbol{x}_{l}=(\boldsymbol{x}_{l,1},~\ldots,~\boldsymbol{x}_{l,d})^T:=B^T_x(X_{l}-x)$, for $l=1,\ldots,n$.
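The local-PCA step can be sketched as follows; this minimal version assumes the points inside the $\sqrt{h_{\text{pca}}}$-ball are already true neighbors (i.e., the clustering step has been applied or is unnecessary).

```python
import numpy as np

def tangent_basis(X, x, h_pca, d):
    """Local-PCA estimate of an orthonormal basis {U_k(x)} of the
    embedded tangent plane at x, plus the tangent coordinates of all
    sample points (assumes the ball contains only true neighbors)."""
    nbrs = X[np.linalg.norm(X - x, axis=1) < np.sqrt(h_pca)]
    mu = nbrs.mean(axis=0)
    # 1/n normalization as in the text; any positive scaling leaves
    # the eigenvectors unchanged.
    Sigma = (nbrs - mu).T @ (nbrs - mu) / X.shape[0]
    vals, vecs = np.linalg.eigh(Sigma)       # eigenvalues in ascending order
    B = vecs[:, ::-1][:, :d]                 # p x d: top-d eigenvectors
    coords = (X - x) @ B                     # rows are B_x^T (X_l - x)
    return B, coords
```

Near the north pole of $S^2\subset\mathbb{R}^3$, the recovered basis is orthonormal and nearly orthogonal to the normal direction $e_3$.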
\subsection{Local linear regression on the tangent plane }
Choose a kernel function $K:[0,\infty]\to \mathbb{R}$ so that $K|_{[0,1]}\in C^1([0,1])$ and $K|_{(1,\infty]}=0$, and a
bandwidth $h>0$. Notice that $h$ is different from $h_{\text{pca}}$.
We solve the regression problem (\ref{model1}) at ${x}$ by considering the following local linear least squares fit on the estimated tangent plane:
\begin{equation}\label{functionalmfd}
\hat{\boldsymbol{\beta}}_x=\operatornamewithlimits{argmin}_{\boldsymbol{\beta}\in\mathbb{R}^{d+1}} \sum_{l=1}^n\Big(Y_{l}-\beta_0-\sum_{k=1}^d\beta_k\boldsymbol{x}_{l,k}\Big)^2\mbox{I}_{\mathcal{N}^{\text{true}}_{x,h}}(X_l)K_h(X_{l},x),
\end{equation}
where $\boldsymbol{\beta}=(\beta_0,\beta_1,\ldots,\beta_d)^T$, $K_h(X_{l},x):=h^{-d/2}K\big(\|X_{l}-x\|_{\mathbb{R}^p}\big/\sqrt{h}\big)$, and $\mbox{I}$ is the indicator function.
Denote
\begin{equation}\label{def:Yandm}
\boldsymbol{Y}=\left(Y_{1},\ldots,Y_{n}\right)^T\quad\mbox{and}\quad\boldsymbol{m}=\left(m(\iota^{-1}(X_{1})),\ldots,m(\iota^{-1}(X_{n}))\right)^T.
\end{equation}
Denote by $\mathbb{X}_x$ the $n\times (d+1)$ design matrix related to $x$:
\begin{equation}\label{design}
\mathbb{X}_x=\bigg[
\begin{array}{ccc}
1 & \dots & 1 \\
\boldsymbol{x}_{1} & \dots & \boldsymbol{x}_{n} \\
\end{array}
\bigg]^T,
\end{equation}
and $\mathbb{W}_{x}$ the kernel weight matrix:
\begin{equation}\label{weighted}
\mathbb{W}_{x}=\text{diag}\left(K_h(X_{1},x)\mbox{I}_{\mathcal{N}^{\text{true}}_{x,h}}(X_1),\ldots,K_h(X_{n},x)\mbox{I}_{\mathcal{N}^{\text{true}}_{x,h}}(X_n)\right),
\end{equation}
which is a diagonal matrix of size $n \times n$. Then (\ref{functionalmfd}) can be written as
\begin{equation}\label{functional2}
\hat{\boldsymbol{\beta}}_x=\operatornamewithlimits{argmin}_{\boldsymbol{\beta}\in\mathbb{R}^{d+1}} (\boldsymbol{Y}-\mathbb{X}_x\beta)^T\mathbb{W}_{x}(\boldsymbol{Y}-\mathbb{X}_x\beta).
\end{equation}
It is straightforward to show that the minimizer in (\ref{functional2}) is
$$
\hat{\boldsymbol{\beta}}_x=(\mathbb{X}_x^T\mathbb{W}_{x}\mathbb{X}_x)^{-1}\mathbb{X}^T_x\mathbb{W}_{x}\boldsymbol{Y}
$$
if $(\mathbb{X}^T_x\mathbb{W}_{x}\mathbb{X}_x)^{-1}$ exists. The invertibility of $\mathbb{X}^T_x\mathbb{W}_{x}\mathbb{X}_x$ will be shown in the Supplementary.
The MALLER estimator of $m({x})$ is given by
\begin{equation}\label{algo:estimator:mhat}
\hat{m}({x},h):=\boldsymbol{v}_1^T\hat{\boldsymbol{\beta}}_x=\boldsymbol{v}_1^T(\mathbb{X}_x^T\mathbb{W}_{x}\mathbb{X}_x)^{-1}\mathbb{X}^T_x\mathbb{W}_{x}\boldsymbol{Y},
\end{equation}
where $\boldsymbol{v}_k\in\mathbb{R}^{d+1}$ is a $(d+1)\times 1$ unit vector with the $k$-th entry being 1. If the interest is to estimate the embedded gradient of $m$ at ${x}$, the following estimator is considered:
\begin{equation}\label{algo:estimator:CovDeri}
\widehat{\iota_*\mbox{\tt{grad}} m({x})}:=\sum_{i=1}^d\widehat{\nabla_{\partial_i(x)}m}({x},h)U_i(x),
\end{equation}
where $\mbox{\tt{grad}}$ denotes the gradient,
\begin{equation}\label{algo:estimator:Dmhat}
\widehat{\nabla_{\partial_i({x})}m}({x},h):=\boldsymbol{v}_{i+1}^T\hat{\boldsymbol{\beta}}_x,
\end{equation}
and $\{\partial_i({x})\}_{i=1}^d$ is the orthonormal basis of $T_{{x}}\text{M}$ closest to the estimated orthonormal basis $\{U_k(x)\}_{k=1}^d$ in the sense described in Lemma \ref{lemma6} in the Supplementary.
We mention that the gradient on the manifold is closely related to the covariant derivative and the exterior derivative. The relationship between these quantities is summarized in the Supplementary.
From (\ref{design}) and (\ref{functional2}) we can see that the key ingredient in the estimators (\ref{algo:estimator:mhat}), (\ref{algo:estimator:CovDeri}) and (\ref{algo:estimator:Dmhat}) is finding the coordinates of a given point with respect to a chosen basis and locally approximating the regression function by a linear function of those coordinates.
A consequence of this fact is dimension reduction. Indeed, since $d$ may be much smaller than $p$, having obtained $\{\boldsymbol{x}_{l}\}_{l=1}^n$, locally at $x$ we convert the $p$-dimensional regression problem to a $d$-dimensional one, by paying the price of additional sampling error coming from the tangent plane approximation and the curvature of the manifold. Nonetheless, it is shown in Section \ref{theory} and Section \ref{numerics}
that the effect of this extra sampling error on the MALLER is negligible and does not contribute to the leading term in the estimation error, provided that
$h_{\text{pca}}$ is smaller than $h$.
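In code, steps 3--4 reduce to one weighted least squares solve on the tangent coordinates. A minimal sketch, using the Epanechnikov kernel as an illustrative choice; the indicator $\mbox{I}_{\mathcal{N}^{\text{true}}_{x,h}}$ is assumed to have been applied by passing only the filtered neighbors.

```python
import numpy as np

def llr_on_tangent(Y, coords, dists, h,
                   kernel=lambda u: np.where(u <= 1.0, 0.75 * (1.0 - u**2), 0.0)):
    """Local linear fit on tangent-plane coordinates.

    Y      : (n,) responses of the retained neighbors
    coords : (n, d) coordinates bold-x_l = B_x^T (X_l - x)
    dists  : (n,) Euclidean distances ||X_l - x||
    Returns (beta_0, beta_1..beta_d): the regression estimate m-hat(x)
    and the derivative coefficients in the basis {U_k(x)}.
    """
    n, d = coords.shape
    w = kernel(dists / np.sqrt(h)) * h ** (-d / 2.0)   # K_h(X_l, x)
    Xd = np.hstack([np.ones((n, 1)), coords])          # design matrix X_x
    A = Xd.T * w                                       # X_x^T W_x
    beta = np.linalg.solve(A @ Xd, A @ Y)              # (X^T W X)^{-1} X^T W Y
    return beta[0], beta[1:]
```

On a flat toy manifold (a line in $\mathbb{R}^2$) the fit is exact for a linear $m$, which also illustrates the design-adaptive property of the LLR.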
\section{Bandwidth Selection}\label{bandwidth}
Selection of the local PCA bandwidth $h_{\text{pca}}$ is less critical than the choice of the regression bandwidth $h$, since, as discussed in Section \ref{theory}, $h_{\text{pca}}$ should be smaller than $h$ and of a smaller order than the optimal order of $h$. We refer to \cite{vdm} for the selection of $h_{\text{pca}}$.
Suppose that for a given choice of $h_{\text{pca}}$, the tangent plane estimate has been obtained. The aim is to find the value of $h$ that minimizes the asymptotic conditional MSE of the MALLER, which is provided in (\ref{thm:interior:cond_mse}).
When the random errors are homoscedastic, the modified generalized cross-validation (mGCV) suggested in \cite{bickel_li:2007} can be used.
Specifically, let $\mathcal{H}_{\text{mGCV}}=\{\lambda_1,\ldots, \lambda_B\}$ be a set of candidate bandwidths, where $\lambda_i>0$, $i=1,\ldots,B$, and $B\in\mathbb{N}$,
and for each point $x$ we choose a block of data points $\{(X_{j}, Y_{j})\}_{j\in \mathcal{J}}$. For each $h\in \mathcal{H}_{\text{mGCV}}$, define the mGCV of $h$ by
$$
\text{mGCV}(h)=\Big(1+2\text{atr}_{\mathcal{J}}(h)\Big)\frac{1}{n_1}\sum_{j\in\mathcal{J}} \Big(Y_j-\hat{m}(X_j,h)\Big)^2,
$$
where
$
\text{atr}_{\mathcal{J}}(h):=\frac{1}{n_1}\sum_{j\in\mathcal{J}}\boldsymbol{v}_1^T(\mathbb{X}^T_{X_j}\mathbb{W}_{X_j}\mathbb{X}_{X_j})^{-1}\boldsymbol{v}_1h^{-d/2}K(0),
$
$n_1$ is the number of points in $\mathcal{J}$,
and $\hat{m}(X_j,h)$ is the MALLER (\ref{algo:estimator:mhat}) of $m(X_j)$ based on bandwidth $h$.
Then $h_{\text{mGCV},\hat{m}}$ is chosen as the value of $h$ in $\mathcal{H}_{\text{mGCV}}$ which minimizes $\text{mGCV}(h)$.
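The mGCV criterion can be transcribed directly; the sketch below uses illustrative simplifications ($d=1$, the block $\mathcal{J}$ taken to be the whole sample, Epanechnikov kernel).

```python
import numpy as np

def mgcv_select(X, Y, bandwidths):
    """Modified GCV for a 1-d local linear smoother: a stand-in with
    the whole sample as the block J and the Epanechnikov kernel."""
    n = len(X)
    best_score, best_h = np.inf, None
    for h in bandwidths:
        resid2 = np.empty(n)
        tr = 0.0
        for j in range(n):
            u = np.abs(X - X[j]) / np.sqrt(h)
            w = np.where(u <= 1.0, 0.75 * (1.0 - u**2), 0.0) / np.sqrt(h)  # K_h, d = 1
            Xd = np.c_[np.ones(n), X - X[j]]
            A = Xd.T * w
            M = np.linalg.inv(A @ Xd)             # (X^T W X)^{-1}
            beta = M @ (A @ Y)
            resid2[j] = (Y[j] - beta[0])**2
            tr += M[0, 0] * 0.75 / np.sqrt(h)     # v_1^T M v_1 * h^{-d/2} K(0)
        score = (1.0 + 2.0 * tr / n) * resid2.mean()
        if score < best_score:
            best_score, best_h = score, h
    return best_h
```

For noiseless data the residual term dominates the penalty, so the criterion picks the smallest candidate bandwidth, as expected.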
In the presence of heteroscedastic random errors, we adopt the following additional step to deal with the bandwidth selection problem. Note that the optimal bandwidth has to balance between the conditional bias and the conditional variance, which depends on $\sigma^2(x)$.
Thus, with the pilot mGCV bandwidth $h_{\text{mGCV},\hat{m}}$ we get the first estimate of $m(X_l)$ by the MALLER, denoted as $\hat{m}(X_l,h_{\text{mGCV},\hat{m}})$, $l=1,\ldots,n$, and we apply the method suggested in \cite{chen_cheng_peng:2009} to estimate $\sigma^2(x)$. We choose this method since the random error $\epsilon$ might have a heavy-tailed distribution. Defining the residuals as
$$
\hat{r}_l:=\Big(Y_l-\hat{m}(X_l,h_{\text{mGCV},\hat{m}})\Big)^2,\, l=1,\ldots,n,
$$
we solve the following minimization problem
$$
(\hat{\alpha}_0(x),\hat{\boldsymbol{\alpha}}(x))=\operatornamewithlimits{argmin}_{\alpha_0\in\mathbb{R},\boldsymbol{\alpha}\in\mathbb{R}^d} \sum_{X_l\in\mathcal{N}^{\text{true}}_{x,h_{\text{mGCV},\hat{r}}}}\hspace{-20pt}\big( \log(\hat{r}_l+1/n)-\alpha_0-\boldsymbol{\alpha}^TB_x^T(X_l-x)\big)^2K_{h_{\text{mGCV},\hat{r}}}(X_l,x),
$$
where $h_{\text{mGCV},\hat{r}}$ is the bandwidth determined by minimizing the mGCV upon the data set $\{(X_l, \log(\hat{r}_l+1/n))\}_{l=1}^n$. The estimated value of $\sigma^2(x)$ is then defined as
$$
\hat{\sigma}^2({x}):=e^{\hat{\alpha}_{0}(x)}\bigg[\frac{1}{n}\sum^n_{l=1}\hat{r}_le^{-\hat{\alpha}_{0}(x)}\bigg]^{-1}.
$$
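The variance-function step can be sketched by implementing the displayed formulas verbatim in a one-dimensional toy version, where scalar predictors play the role of the tangent coordinates $B_x^T(X_l-x)$ and the Epanechnikov kernel is an illustrative choice.

```python
import numpy as np

def sigma2_hat(X1d, r_hat, x, h):
    """Local variance estimate from squared residuals r_hat: local
    linear fit to log(r_hat + 1/n) followed by the multiplicative
    correction factor, exactly as displayed in the text (1-d sketch)."""
    n = len(X1d)
    z = np.log(r_hat + 1.0 / n)
    u = np.abs(X1d - x) / np.sqrt(h)
    w = np.where(u <= 1.0, 0.75 * (1.0 - u**2), 0.0)   # kernel weights
    Xd = np.c_[np.ones(n), X1d - x]
    A = Xd.T * w
    alpha = np.linalg.solve(A @ Xd, A @ z)             # (alpha_0, alpha_1)
    a0 = alpha[0]
    # sigma^2-hat = exp(a0) * [ (1/n) sum r_l exp(-a0) ]^{-1}
    return float(np.exp(a0) / np.mean(r_hat * np.exp(-a0)))
```

In the homoscedastic case with constant squared residuals, the estimate recovers the residual level up to the $1/n$ offset.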
Finally we select the bandwidth for MALLER given in (\ref{algo:estimator:mhat}) at ${x}\in\text{M}$. Denote the optimal bandwidth at ${x}$ as $h_{\text{opt}}(x)$. Fix a set of candidate bandwidths $\mathcal{H}_{\text{opt}}=\{\lambda_1,\ldots, \lambda_B\}$, which may be different from $\mathcal{H}_{\text{mGCV}}$, where $B\in\mathbb{N}$ and $\lambda_i>0$, $i=1,\ldots,B$. For each $h\in\mathcal{H}_{\text{opt}}$, estimate the conditional bias and the conditional variance of $\hat{m}({x},h)$ respectively by
$$
\hat{b}({x},h) = 2[\hat{m}({x},h)-\hat{m}({x},h/2)],
$$
which is based on the asymptotic bias expression given in (\ref{rslt:interior:mfd}) of the Supplementary
and (\ref{col:boundary:bias}), and
$$
\hat{v}({x},h) = \boldsymbol{v}_1^T(\mathbb{X}_x^T\mathbb{W}_{x}\mathbb{X}_x)^{-1}\mathbb{X}_x^T\mathbb{W}_{x}\hat{\mathfrak{S}}_x\mathbb{W}_{x}\mathbb{X}_x(\mathbb{X}_x^T\mathbb{W}_{x}\mathbb{X}_x)^{-1}\boldsymbol{v}_1,
$$
which is based on the finite sample variance expression given in (\ref{mfd:interior:var}) of the Supplementary, where $\hat{\mathfrak{S}}_x$ is an $n\times n$ diagonal matrix
$\hat{\mathfrak{S}}_x=\text{diag}\{\hat{\sigma}^2(X_1),\ldots, \hat{\sigma}^2(X_n)\}$.
The conditional MSE of $\hat{m}({x},h)$ is then estimated by
$$
\widehat{\text{MSE}}({x},h):=\hat{b}({x},h)^2+\hat{v}({x},h).
$$
The value of $h\in\mathcal{H}_{\text{opt}}$, denoted as $\hat{h}_{\text{opt}}(x)$, which minimizes $\widehat{\text{MSE}}({x},h)$ is then used to approximate $h_{\text{opt}}(x)$.
With $\hat{h}_{\text{opt}}(x)$, we can evaluate $\hat{m}({x}, \hat{h}_{\text{opt}}(x))$.
We do not claim the optimality of the bandwidth selection in this algorithm. For example, when the point ${x}$ is near the boundary of the manifold, the bandwidth should be chosen differently. We choose this bandwidth selection scheme since it is commonly used and easy to implement \cite{ruppert:1997, fan_gijbels:1996}. Further study of the bandwidth selection problem in the manifold setup is important and open, but beyond the scope of this paper.
\section{Theory}\label{theory}
Before stating the main theorems describing the behaviors of the proposed MALLER given in Section \ref{algorithm}, we set up more notation.
Recall the assumption in Section 2 that $\text{M}$ is a $d$-dimensional compact smooth Riemannian manifold embedded in $\mathbb{R}^p$ via $\iota$.
Let the metric $g$ on $\text{M}$ be the one induced from the canonical metric of the ambient space $\mathbb{R}^p$.
The exponential map at ${x}\in\text{M}$ is denoted as $\exp_{{x}}$. Denote by $d({x},{y})$ the distance between ${x},{y}\in\text{M}$.
The volume form on $\text{M}$ induced from $g$ is denoted as $\textup{d} V$.
Given $\delta\geq 0$, denote the set of points within distance $\delta$ of the boundary $\partial\text{M}$ as
\begin{equation}\label{conditions:statement:Csqrthx}
\text{M}_{\delta}=\big\{{x}\in\text{M}:~\min_{{y}\in\partial\text{M}}d({x},{y})\leq \delta\big\}.
\end{equation}
When $\delta>0$ is small enough, we denote the geodesic ball with radius $\delta$ and center ${x}\in\text{M}$ as $B^\text{M}_{\delta}({x})$. Denote $B^{\mathbb{R}^q}_{\delta}(x)$ as the ball in $\mathbb{R}^q$, $q\in\mathbb{N}$, with radius $\delta$ and center $x\in\mathbb{R}^q$ and $S^{q-1}$ as the standard $(q-1)$-sphere embedded in $\mathbb{R}^q$ with the induced metric. Define
\begin{equation}\label{approximate_geodesic_ball}
\tilde{B}^{\text{M}}_\delta({x}):=\iota^{-1}\left(B^{\mathbb{R}^p}_\delta(x)\cap \iota(\text{M})\right)\subset \text{M},
\end{equation}
which is an approximation of the geodesic ball $B^\text{M}_\delta({x})$. Denote by $\nabla$ the Levi-Civita connection, $\Delta$ the Laplace-Beltrami operator and $\text{Hess}$ the Hessian operator of $(\text{M},g)$. Denote by $\mbox{Ric}$ the Ricci curvature of $(\text{M},g)$. The second fundamental form of the embedding $\iota$ at ${x}$ is denoted by $\textup{II}_{{x}}$.
\subsection{Assumptions}
Let the random vector $X:\Omega \rightarrow \mathbb{R}^p$ be a measurable function with respect to the probability space $(\Omega,\mathcal{F},P)$.
In this paragraph we make the role of $\iota$ explicit, distinguishing between $x\in\text{M}$ and $\iota(x)\in\iota(\text{M})$. Suppose the range of $X$ is supported on $\iota(\text{M})$. In this case, the p.d.f. of $X$ is not well-defined as a function on $\mathbb{R}^p$ if the intrinsic dimension $d$ of $\text{M}$ is less than $p$. To properly define the p.d.f. of $X$, let $\tilde{\mathcal{B}}$ be the Borel sigma algebra of $\iota(\text{M})$, and denote by $\tilde{P}_X$ the probability measure of $X$, defined on $\tilde{\mathcal{B}}$, induced from $P$. Assume that $\tilde{P}_{X}$ is absolutely continuous with respect to the volume measure on $\iota(\text{M})$, that is, $\textup{d} \tilde{P}_{X}(x)=f(\iota^{-1}({x}))\iota_*\textup{d} V(x)$, where $f\in C^2(\text{M})$. Thus, for an integrable function $\zeta:\iota(\text{M})\rightarrow \mathbb{R}$, we have
\begin{eqnarray}
\hspace{-8pt}&&\hspace{-8pt}\mathbb{E} \zeta(X)=\int_{\Omega}\zeta(X(\omega))\textup{d} P(\omega)=\int_{\iota(\text{M})} \zeta(x)\textup{d} \tilde{P}_{X}(x)\nonumber\\
&=&\int_{\text{M}} \zeta(x) f(\iota^{-1}({x})) \iota_*\textup{d} V(x)
=\int_{\text{M}} \zeta(\iota(y)) f(y) \textup{d} V(y),\label{definition:expectation:manifold}
\end{eqnarray}
where the second equality follows from the fact that $\tilde{P}_X$ is the induced probability measure, and the last one comes from the change of variable $x=\iota(y)$.
In this sense we interpret $f$ as {\it the p.d.f. of $X$ on $\text{M}$}.
The kernel function $K:[0,\infty]\rightarrow \mathbb{R}$ used in the proposed MALLER is assumed to be compactly supported in $[0,1]$ so that $K|_{[0,1]}\in C^1([0,1])$. Denote
$$
\mu_{i,j}:=\int_{B^{\mathbb{R}^d}_1(0)} K^i(\|u\|_{\mathbb{R}^d})\|u\|_{\mathbb{R}^d}^j\textup{d} u
$$
and we normalize $K$ so that $\mu_{1,0}=1$.
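These moments are elementary integrals, and can be checked numerically. The sketch below uses the Epanechnikov kernel $K(u)=\frac{3}{4}(1-u^2)$ on $[0,1]$ (an illustrative kernel satisfying the stated conditions) in dimension $d=1$, where $B_1(0)$ is the interval $[-1,1]$.

```python
import numpy as np

# Numerical check of mu_{i,j} = int_{B_1(0)} K^i(|u|) |u|^j du for d = 1
# with the Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [0, 1].
u = np.linspace(-1.0, 1.0, 200001)
du = u[1] - u[0]
K = 0.75 * (1.0 - u**2)

def mu(i, j):
    # Riemann-sum approximation of the moment integral over [-1, 1]
    return float(np.sum(K**i * np.abs(u)**j) * du)
```

For this kernel, $\mu_{1,0}=1$ (the required normalization), $\mu_{1,2}=1/5$ and $\mu_{2,0}=3/5$; the latter two are the constants entering the interior MSE expansion below.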
Note that we can also consider more general kernel functions; for example, any $C^1(\mathbb{R})$ function with a suitable decay property can be chosen. More general bandwidths, such as a positive definite symmetric bandwidth matrix $H$ as considered in \cite{ruppert_wand:1994}, can also be used. Since the analysis under these more general conditions is the same up to extra error terms, we focus on the above setup to keep the analysis clear.
We make the following assumptions in the analysis.
\begin{itemize}
\item[(A1)] $h\to 0$ and $nh^{d/2}\to \infty$ as $n\rightarrow \infty$.
\item[(A2)] $f$ belongs to $C^2(\text{M})$ and satisfies
\begin{equation}\label{conditions:statement:fcond}
0<\inf_{{x}\in\text{M}} f({x})\leq \sup_{{x}\in\text{M}}f({x})<\infty.
\end{equation}
\item[(A3)] For every given $h>0$ and every point ${x}\in\text{M}_{\sqrt{h}}$, the set
$B^\text{M}_{\sqrt{h}}({x})\cap \text{M}$
has non-empty interior.
The purpose of this assumption is to avoid potential degeneracy near the boundary.
\item[(A4)]
Assume that $h_{\text{pca}}^{1/2}< \min(2\tau,\text{inj}(\text{M}))$ and $h^{1/2}< \min(2\tau,\text{inj}(\text{M}))$, where $\text{inj}(\text{M})$ is the injectivity radius of $\text{M}$ and $1/\tau$ is the condition number of $\text{M}$ \cite{NSW}; see the subsection on determining the nearest neighbors for the precise definition of $\tau$.
\end{itemize}
\subsection{Main Theory}
We state our main theorems here and postpone the proofs to the Supplementary.
\begin{thm}\label{thm:interior}
Suppose $h_{\text{pca}}\asymp n^{-2/(d+1)}$ and $h\geq h_{\text{pca}}$. When ${x}\in\text{M}\backslash\text{M}_{\sqrt{h}}$, the conditional mean square error (MSE) of the estimator $\hat{m}({x},h)$ is
\begin{equation}\label{thm:interior:cond_mse}
\begin{split}
&\text{MSE}\{\hat{m}({x},h)| \mathcal{X}\}
=h^2\frac{\mu_{1,2}^2}{4d^2}(\Delta m({x}))^2+\frac{1}{nh^{d/2}}\frac{\mu_{2,0}\sigma^2({x})}{f({x})}\\
&\qquad\qquad+O(h^3+h^2h_{\text{pca}}^{3/4})+O_p\Big(\frac{1}{n^{1/2}h^{d/4-2}}+\frac{1}{nh^{d/2-1}}+\frac{1}{n^{3/2}h^{3d/4}}\Big).
\end{split}
\end{equation}
\end{thm}
Next, we consider the case when ${x}$ is close to the boundary. To ease the notation, for ${x}\in\text{M}_{\sqrt{h}}$ and $h>0$, define a $(d+1)\times(d+1)$ matrix $\nu_{i,x}$:
\begin{equation}
\nu_{i,x}:=\,\left[\begin{array}{cc}
\nu_{i,x,11} & \nu_{i,x,12}\\
\nu^T_{i,x,12} & \nu_{i,x,22}
\end{array}
\right]
:=\,\left[\begin{array}{cc}
\int_{\frac{1}{\sqrt{h}}\mathfrak{D}({x})}K^i(\|u\|)\textup{d} u & \int_{\frac{1}{\sqrt{h}}\mathfrak{D}({x})}K^i(\|u\|)u^T\textup{d} u\\
\int_{\frac{1}{\sqrt{h}}\mathfrak{D}({x})}K^i(\|u\|)u\textup{d} u & \int_{\frac{1}{\sqrt{h}}\mathfrak{D}({x})}K^i(\|u\|)uu^T\textup{d} u
\end{array}
\right],\label{thm1:statement:nuixdef}
\end{equation}
where for $i=1,2$, $\nu_{i,x,11}\in\mathbb{R}$, $\nu_{i,x,12}$ is a $1\times d$ matrix, $\nu_{i,x,22}$ is a $d\times d$ matrix and
\begin{eqnarray}
\mathfrak{D}({x}):= \exp_{{x}}^{-1}(B^\text{M}_{\sqrt{h}}({x})\cap \text{M})\subset T_{{x}}\text{M}.\label{thm1:statement:frakDdef}
\end{eqnarray}
We also define
\begin{equation}\label{thm2:C}
C:=\bigg[\begin{array}{cc}
1 & 0\\
0 & h^{\frac{1}{2}}I_{d}
\end{array}\bigg].
\end{equation}
Here, $I_{k}$ denotes the $k\times k$ identity matrix for any $k\in\mathbb{N}$.
\begin{thm}\label{thm:boundary}
Suppose ${x}\in\text{M}_{\sqrt{h}}$, $h_{\text{pca}}\asymp n^{-2/(d+1)}$ and $h\geq h_{\text{pca}}$.
The conditional MSE of the estimator $\hat{m}({x},h)$ is
\begin{eqnarray}
&&\hspace{-24pt}\text{MSE}\{\hat{m}({x},h)| \mathcal{X}\}= \frac{h^2}{4}\frac{[\mbox{tr} \big(\text{Hess} m({x})\nu_{1,x,22}\big)]^2}{\nu^2_{1,x,11}}+\frac{\boldsymbol{v}^T_1\nu_{1,x}^{-1}\nu_{2,x}\nu_{1,x}^{-1}\boldsymbol{v}_1}{nh^{\frac{d}{2}}} \frac{\sigma^2({x})}{f({x})}\label{thm:bdry:cond_mse}\\
&&\hspace{-32pt}+O_p\Big(h_{\text{pca}}^{3/4}h^{3/2}+h_{\text{pca}}^{1/2}h^2\Big)+O_p\Big(\frac{1}{n^{1/2}h^{d/4-2}}+\frac{1}{nh^{d/2-1/2}}+\frac{1}{n^{3/2}h^{3d/4}}\Big)\nonumber
\end{eqnarray}
\end{thm}
Notice that in both Theorem \ref{thm:interior} and \ref{thm:boundary}, the minimum of the conditional MSE is achieved when $h\asymp n^{-2/(d+4)}$, which is strictly larger than $h_{\text{pca}}$.
\begin{col}\label{col:smoothbdry}
Suppose $\partial\text{M}$ is smooth, ${x}\in\text{M}_{\sqrt{h}}$, $h_{\text{pca}}\asymp n^{-2/(d+1)}$ and $h\geq h_{\text{pca}}$. Then the conditional bias of $\hat{m}({x},h)$ is asymptotically a linear combination of the second order covariant derivative of $m$:
\begin{equation}\label{col:boundary:bias}
\mathbb{E}\{\hat{m}({x},h)-m({x})| \mathcal{X}\}=\frac{h}{2}\sum_{k=1}^dc_k({x})\nabla^2_{\partial_k,\partial_k}m({x})+O_p(h^{\frac{1}{2}}h_{\text{pca}}^{3/4}+hh_{\text{pca}}^{1/2})+O_p\Big(\frac{1}{n^{\frac{1}{2}}h^{\frac{d}{4}-1}}\Big),
\end{equation}
where $\{\partial_k\}_{k=1}^d$ is a normal coordinate determined in Lemma \ref{lemma6} of the Supplementary and $c_k({x})$ is uniformly bounded for all $k=1,\ldots,d$.
\end{col}
Recall that when the p.d.f. of the random vector $X$ is well-defined on $\mathbb{R}^p$,
denoted as $f$, so that $\text{supp} f$ satisfies some weak conditions, it is shown in \cite{ruppert_wand:1994} that the conventional LLR is unbiased up to the second order term even when $x$ is close to the boundary. Additionally, the LLR is design adaptive, that is, the asymptotic bias does not depend on $f$. These properties render the LLR popular in applications. In the degenerate case, i.e. when $X$ lies on the manifold $\text{M}$, we can see from the proofs of Theorem \ref{thm:interior} and Theorem \ref{thm:boundary} that MALLER also possesses these nice properties. These properties of MALLER have important implications from the manifold learning viewpoint, which will be discussed in Section \ref{diffusionmap}.
\subsection{Gradient and Covariant Derivative Estimate}
When the p.d.f. $f$ of $X$ is non-degenerate on $\mathbb{R}^p$, it is well known that the traditional LLR provides an estimate of the gradient of $m$ \cite{ruppert_wand:1994, fan_gijbels:1996}. In the manifold setup, the notion of differentiation is generalized naturally to the ``covariant derivative'', and hence the gradient if the manifold is Riemannian. A brief introduction of the notion of covariant derivative, gradient, exterior derivative and their relationship is provided in the Supplementary \ref{appendix:background}. In this subsection, we show that MALLER provides an estimate of the covariant derivative of $m$.
\begin{thm}\label{thm:interior:cov}
Suppose ${x}\in\text{M}\backslash\text{M}_{\sqrt{h}}$, $h_{\text{pca}}\asymp n^{-2/(d+1)}$ and $h\geq h_{\text{pca}}$. The conditional MSE for the estimator $\widehat{\nabla_{\partial_i({x})}m}({x},h)$ given in (\ref{algo:estimator:Dmhat}) is
\begin{equation}
\begin{split}
&\text{MSE}\{\widehat{\nabla_{\partial_i({x})}m}({x},h)|\mathcal{X}\}= h^2\Bigg[\frac{\mu_{1,2}}{d}\frac{\nabla_{\partial_i} f({x})}{f({x})}\Delta m({x})-\frac{\mu_{1,2}d\int_{S^{d-1}}\theta^T\text{Hess} m({x})\theta \theta \nabla_\theta f({x})\textup{d} \theta}{|S^{d-1}|f({x})}\Bigg]^2\nonumber\\
&+\frac{1}{nh^{\frac{d}{2}+1}}\frac{d\mu_{2,2}\sigma^2({x})f({x})}{\mu^2_{1,2}}+O_p(h^{\frac{5}{2}}+h^{\frac{3}{2}}h_{\text{pca}}^{\frac{3}{4}})+O_p\Big(\frac{1}{n^{\frac{1}{2}}h^{\frac{d}{4}-\frac{3}{2}}}+\frac{1}{nh^{\frac{d}{2}}}+\frac{1}{n^{\frac{3}{2}}h^{\frac{3d}{4}+1}}\Big),\nonumber
\end{split}
\end{equation}
where $\{\partial_i({x})\}_{i=1}^d$ is an orthonormal basis of $T_{{x}}\text{M}$ described in Lemma \ref{lemma6} of the Supplementary.
\end{thm}
\begin{thm}\label{thm:boundary:cov}
Suppose ${x}\in\text{M}_{\sqrt{h}}$, $h_{\text{pca}}\asymp n^{-2/(d+1)}$ and $h\geq h_{\text{pca}}$. The conditional MSE for the estimator $\widehat{\nabla_{\partial_i({x})}m}({x},h)$ given in (\ref{algo:estimator:Dmhat}) is
\begin{eqnarray}
\lefteqn{\text{MSE}\{\widehat{\nabla_{\partial_i({x})}m}({x},h)| \mathcal{X}\}=h\bigg(\frac{\boldsymbol{v}^T_{i+1}\nu^{-1}_{1,x}}{2}\int_{\frac{1}{\sqrt{h}}\mathfrak{D}({x})}K(\|u\|)u^T\text{Hess} m({x})u\left[\begin{array}{c}
1\\ u
\end{array}\right]\textup{d} u\bigg)^2} \nonumber\\
&&\hspace{-20pt}+\frac{\boldsymbol{v}^T_{i+1}\nu_{1,x}^{-1}\nu_{2,x}\nu^{-1}_{1,x}\boldsymbol{v}_{i+1}}{nh^{\frac{d}{2}+1}}\frac{\sigma^2({x})}{f({x})}+O_p\Big(h^{\frac{1}{2}}h_{\text{pca}}^{\frac{3}{4}}+hh_{\text{pca}}^{\frac{1}{2}}\Big)+O_p\Big(\frac{1}{n^{\frac{1}{2}}h^{\frac{d}{4}-\frac{3}{2}}}+\frac{1}{nh^{\frac{d}{2}+\frac{1}{2}}}+\frac{1}{n^{\frac{3}{2}}h^{\frac{3d}{4}}}\Big),\nonumber
\end{eqnarray}
where $\{\partial_i({x})\}_{i=1}^d$ is an orthonormal basis of $T_{{x}}\text{M}$ described in Lemma \ref{lemma6} of the Supplementary.
\end{thm}
Based on Theorems \ref{thm:interior:cov} and \ref{thm:boundary:cov} and Section \ref{appendix:background} of the Supplementary, we know that the estimator (\ref{algo:estimator:CovDeri}) can indeed be used to estimate the embedded gradient of $m$. Since the application of the estimate of the gradient is not the focus of this paper, we refer the readers to \cite{coifman_lafon:2006,Mukherjee:2010}.
\section{Numerical Examples}\label{numerics}
To demonstrate the applicability of the proposed algorithm MALLER, we tested it on a series of simulations and a real dataset, and compared it with the nonparametric exterior derivative estimator (NEDE), the nonparametric adaptive lasso exterior derivative estimator (NALEDE), and their ``large $p$, small $n$'' counterparts (NEDEP and NALEDEP) proposed in \cite{aswani_bickel:2011}, for which the codes are provided by the authors of \cite{aswani_bickel:2011}\footnote{\url{http://www.eecs.berkeley.edu/~aaswani/EDE_Code.zip}}. The code implementing MALLER is available on the authors' homepage\footnote{\url{http://www.math.princeton.edu/~hauwu/regression.zip}}.
All the observed values of the predictors in both the training dataset and the testing dataset are normalized by $x^0_l :=(x_l-\hat{\mu})/s$, $l=1,\ldots,n+10$, where $\hat{\mu}$ is the sample mean of $\{x_l\}_{l=1}^{n}$ and $s=\max_{i,j=1,\ldots,n}\|x_i-x_j\|_{\mathbb{R}^p}$. To simplify the notation we write $x_l$ instead of $x^0_l$ in the sequel. In step 1 of our algorithm, we used the MLE dimension estimation code provided by the authors of \cite{levina_bickel:2005}\footnote{\url{ http://www.stat.lsa.umich.edu/~elevina/mledim.m}} to evaluate the intrinsic dimension of the manifold. In step 2, we used the code provided by the authors of \cite{Zelnik-Manor_Perona:2004}\footnote{\url{http://www.vision.caltech.edu/lihi/Demos/SelfTuningClustering.html}}. In step 3, we chose $h_{\text{pca}}=0.015$. In the bandwidth selection step, for each regressant, we applied the bandwidth selection procedure given in Section \ref{bandwidth} to $21$ logarithmically equi-spaced candidate bandwidths in the interval $[0.01, 0.1]$ when $d=1$ and $[0.01, h_d]$ when $d>1$, where
\begin{equation}\label{choice:hd}
h_d=\frac{1}{4}\bigg( \frac{d\Gamma(d/2)}{\sqrt{\pi}\Gamma\left((d+1)/2\right)} \bigg)^{2/d}(0.1)^{1/d}.
\end{equation}
This choice of $h_d$ is motivated by the following facts. Fix $d>1$. The volume of $S^d$ is $|S^d|=\frac{2\pi^{\frac{d+1}{2}}}{\Gamma(\frac{d+1}{2})}$, where $\Gamma$ is the Gamma function, and the volume of a geodesic ball of radius $0<\delta(d)\ll1$ centered at ${x}\in S^d$, denoted as $B^{S^d}_{\delta(d)}({x})$, is approximately $\frac{\delta(d)^{d}|S^{d-1}|}{d}=\frac{2\pi^{d/2}\delta(d)^{d}}{d\Gamma(d/2)}$. Thus, the ratio of the volume of $B^{S^d}_{\delta(d)}({x})$ to $|S^{d}|$ is $r(d,\delta(d))=\frac{\delta(d)^{d}\Gamma((d+1)/2)}{\sqrt{\pi}d\Gamma(d/2)}$. Suppose $\delta(d)=\delta\ll 1$ for all $d$, then $r(d,\delta)$ gets smaller as $d$ increases. That is, if the number of data points sampled from $S^d$ is the same and $\delta(d)$ is fixed for all $d$, the number of data points located in $B^{S^d}_{\delta(d)}({x})$ decreases to zero exponentially. This fact plays a role in the numerics, especially in the bandwidth selection problem, since in practice the number of neighboring points is not controllable. We thus choose the largest bandwidth $h_d$ by solving
$\frac{(2\sqrt{h_d})^{d}\Gamma((d+1)/2)}{\sqrt{\pi}d\Gamma(d/2)} = r(1,0.1)=\frac{\sqrt{0.1}}{\pi}$,
which leads to (\ref{choice:hd}).
We emphasize that this scheme for setting the candidate bandwidths is not optimal for general $d$-dimensional manifolds; its optimization is out of the scope of this paper. The kernel function $K$ used in step 4 of our MALLER algorithm was taken as $K(u)=\exp(-7u^2)\mbox{I}_{[0,1]}(u)$.
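For reference, the endpoint $h_d$ of (\ref{choice:hd}) and the resulting grid of candidate bandwidths can be sketched as follows (a minimal Python sketch; the function names are ours, and the $d=1$ branch simply uses the interval $[0.01,0.1]$ stated above):

```python
import math

import numpy as np


def h_d(d):
    """Upper endpoint h_d of the candidate-bandwidth interval, eq. (choice:hd)."""
    if d == 1:
        return 0.1  # for d = 1 the interval [0.01, 0.1] is used directly
    c = d * math.gamma(d / 2) / (math.sqrt(math.pi) * math.gamma((d + 1) / 2))
    return 0.25 * c ** (2 / d) * 0.1 ** (1 / d)


def candidate_bandwidths(d, n_cand=21):
    """n_cand logarithmically equi-spaced candidate bandwidths in [0.01, h_d]."""
    return np.logspace(math.log10(0.01), math.log10(h_d(d)), n_cand)
```

For example, $h_2=\sqrt{0.1}/\pi\approx 0.1007$, so the candidate interval barely grows when going from $d=1$ to $d=2$.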
In Sections \ref{simulation2} -- \ref{isomapface} we report the root average square estimation error (RASE) to measure the accuracy of different estimators
$$
\text{ RASE}=\sqrt{\frac{1}{10} \sum_{i=n+1}^{n+10} \big|\hat{m}(x_i)-m(x_i)\big|^2 },
$$
where $\hat{m}(x_i)$ is the estimate produced by each estimator at the testing point $x_i$.
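The RASE criterion itself is straightforward to compute; a short sketch (with hypothetical arrays `preds` and `truth` holding the values $\hat{m}(x_i)$ and $m(x_i)$ at the testing points):

```python
import numpy as np


def rase(preds, truth):
    """Root average square error over the testing points."""
    preds = np.asarray(preds, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(np.abs(preds - truth) ** 2)))
```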
We ran our simulations and data analysis on a computer with $96$GB of RAM and two Intel Xeon X5570 CPUs, each with four cores running at $2.93$GHz. No parallel computation was implemented.
\subsection{Simulated data: regression on the Klein bottle}\label{simulation2}
Consider the 2-dimensional closed and smooth manifold, the Klein bottle, embedded in $\mathbb{R}^4$, which is parametrized by $\phi_{\text{Klein}}:[0,2\pi)\times[0,2\pi)\to\mathbb{R}^4$ so that
$$
(u,v)\stackrel{\phi_{\text{Klein}}}{\mapsto}\big((2\cos v+1)\cos u,\,(2\cos v+1)\sin u,\,2\sin v\cos(u/2),\,2\sin v\sin(u/2)\big).
$$
We sampled $n=1500$ or $1000$ points uniformly from $[0,2\pi)\times[0,2\pi)$, denoted as $\{(U_l,V_l)\}_{l=1}^n$, and then obtained the corresponding $n$ observations $\{X_l\}_{l=1}^n$ on the predictors $X$ by the parametrization $\phi_{\text{Klein}}$. Notice that the uniform sampling design on $[0,2\pi)\times[0,2\pi)$ corresponds to a non-uniform sampling design on the Klein bottle.
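A minimal sketch of this sampling scheme (function names are ours, not part of the original implementation):

```python
import numpy as np


def phi_klein(u, v):
    """Embed (u, v) in [0, 2*pi)^2 into R^4 via the Klein bottle map."""
    return np.stack([
        (2 * np.cos(v) + 1) * np.cos(u),
        (2 * np.cos(v) + 1) * np.sin(u),
        2 * np.sin(v) * np.cos(u / 2),
        2 * np.sin(v) * np.sin(u / 2),
    ], axis=-1)


def sample_klein(n, rng=None):
    """Draw n predictors X_l = phi_Klein(U_l, V_l), (U_l, V_l) uniform."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 2 * np.pi, size=n)
    v = rng.uniform(0.0, 2 * np.pi, size=n)
    return phi_klein(u, v)
```

Because the parameters $(U,V)$ are uniform while the Jacobian of $\phi_{\text{Klein}}$ varies, the induced density on the embedded Klein bottle is non-uniform, as noted above.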
To generate the responses $\{Y_l\}_{l=1}^n$ corresponding to $\{X_l\}_{l=1}^n$, note that the mapping $\phi_{\text{Klein}}$ is 1-1 and onto, so any $(u,v)$ in $[0,2\pi)\times[0,2\pi)$ can be written as
$(u,v)=\phi_{\text{Klein}}^{-1}(x)$ for some $x$ in the embedded Klein bottle. So, consider the following regression model on the Klein bottle:
$$
Y:= m(X)+\sigma(X)\,\epsilon,
$$
where
\begin{eqnarray}
&&m(X) := 7\sin(4 U) + 5\cos(2 V)^2 + 6\exp\{-32( (U-\pi)^2+(V-\pi)^2 )\},\nonumber\\
&&\sigma(X) := \sigma_0(1+0.1\cos(U)+0.1\sin(V)),\nonumber
\end{eqnarray}
$\epsilon\sim\mathcal{N}(0,1)$ is independent of $X$, and
$\sigma_0$ is the noise level (in $Y$) which determines the signal-to-noise ratio
\begin{equation*}
\text{snrdb}:=10\log_{10}\Big(\frac{\operatorname{Var}{Y}}{\sigma_0^2}\Big).
\end{equation*}
Furthermore, let
\[
W=X+\sigma_X\eta,
\]
where $\sigma_X\geq 0$, and $\eta$ is a bivariate normal random vector with zero mean and identity covariance matrix, independent of $X$ and $\epsilon$.
Consider estimating $m(X)$ based on observations on $(W,Y)$. In this case, $W=X$ and $X$ is observed without error when $\sigma_X=0$, and $W$ is $X$ contaminated with error when $\sigma_X>0$.
In the simulations, we took $\sigma_X=0$ or $0.2$ and $\text{snrdb}=5$ or $2$.
For each simulated sample, we drew $n$ observations $\{(W_i,Y_i)\}_{i=1}^n$ to form the training dataset. Then, independent of the training sample, we sampled randomly $10$ points $\{W_i\}_{i=n+1}^{n+10}$ as the regressants and tried to estimate the values of $m$ at $\{X_{n+j}\}_{j=1}^{10}$ based on $\{(W_i,Y_i)\}_{i=1}^{n}$.
We evaluated the performance of each estimator by computing the average and standard deviation of its RASE's over $200$ realizations. The estimated dimension by the MLE intrinsic dimension estimator was $2$ for all of the $200$ realizations, as expected. The results of all the estimators and their computation time
are listed in Table \ref{table:Klein} and Table \ref{table:Klein:err}, from which we can draw the following conclusions. When there is no error-in-variable, i.e. $\sigma_X=0$, MALLER outperforms the four competitors in all of the cases, with significantly smaller RASE average and similar RASE standard deviation.
MALLER also performs well when there is error in the predictors.
The fact that the computation time for MALLER is longer than that for the other four estimators can be explained as follows. Besides the sample size $n$, the computation
time for the estimators in \cite{aswani_bickel:2011} also depends on the ambient space dimension $p$, which is $4$ in this example. On the other hand, in addition to $n$, the computation time for MALLER also depends on the estimated intrinsic dimension $d$, which is $2$ in this example. This fundamental difference between MALLER and those in
\cite{aswani_bickel:2011} will become apparent when $p$ increases and
$p\gg d$, as in the Isomap face example discussed in Section \ref{isomapface}.
\scriptsize
\begin{table}[h]
\begin{center}
{\small
\begin{tabular}{| c | c | c | c | c | }
\hline
\multirow{3}{*}{} & \multicolumn{4}{|c|}{Klein bottle, $\sigma_X=0$, RASE. } \\
\cline{2-5}
& \multicolumn{2}{|c|}{$n=1500$} & \multicolumn{2}{|c|}{$n=1000$}\\
\cline{2-5}
& $\text{snrdb}=5$ & $\text{snrdb}=2$ & $\text{snrdb}=5$ & $\text{snrdb}=2$ \\
\hline
MALLER & $1.8675 \pm 0.5222$ & $2.3818 \pm 0.666$ & $2.3255 \pm 0.5999$ & $2.7454 \pm 0.9151$\\
NEDE & $2.552 \pm 0.5581$ & $2.9382 \pm 0.631$ & $3.4209 \pm 0.6535$ & $3.6469 \pm 0.6793$ \\
NALEDE & $2.5519 \pm 0.5581$ & $2.9417 \pm 0.6331$ & $3.4288 \pm 0.6522$ & $3.6523 \pm 0.6798$ \\
NEDEP & $2.5514 \pm 0.558$ & $2.9371 \pm 0.6313$ & $3.4212 \pm 0.6534$ & $3.6469 \pm 0.6787$ \\
NALEDEP & $2.5511 \pm 0.5583$ & $2.9406 \pm 0.6335$ & $3.429 \pm 0.6524$ & $3.6528 \pm 0.6791$ \\
\hline
\multirow{3}{*}{} & \multicolumn{4}{|c|}{Klein bottle, the computation time. } \\
\hline
MALLER & $76.9222 \pm 29.0305$ & $68.114 \pm 22.3079$ & $32.9121 \pm 10.191$ & $32.7163 \pm 11.3034$ \\
NEDE & $6.0438 \pm 0.1573$ & $6.0416 \pm 0.1709$ & $5.569 \pm 0.1514$ & $5.5878 \pm 0.152$ \\
NALEDE & $11.6054 \pm 0.289$ & $11.5148 \pm 0.2853$ & $10.5719 \pm 0.266$ & $10.5617 \pm 0.265$ \\
NEDEP & $11.4768 \pm 0.2978$ & $11.4656 \pm 0.3199$ & $10.5246 \pm 0.2875$ & $10.5576 \pm 0.2896$ \\
NALEDEP & $17.1086 \pm 0.4276$ & $17.0057 \pm 0.4317$ & $15.5967 \pm 0.4015$ & $15.601 \pm 0.4025$ \\
\hline
\end{tabular}
}
\end{center}
\vspace{0pt}
\caption{\small Regression on the Klein bottle without error in the predictors. The averages and standard deviations, over $200$ realizations, of RASE and the computation time (in seconds) for different estimators tested on different configurations.}\label{table:Klein}
\end{table}
\normalsize
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c | c | c | c | }
\hline
\multirow{3}{*}{} & \multicolumn{4}{|c|}{Klein bottle, $\sigma_X=0.2$, RASE. } \\
\cline{2-5}
& \multicolumn{2}{|c|}{$n=1500$} & \multicolumn{2}{|c|}{$n=1000$}\\
\cline{2-5}
& $\text{snrdb}=5$ & $\text{snrdb}=2$ & $\text{snrdb}=5$ & $\text{snrdb}=2$ \\
\hline
MALLER & $3.9227 \pm 0.6898$ & $4.02 \pm 0.7214$ & $3.9514 \pm 0.6785$ & $4.0512 \pm 0.6932$\\
NEDE & $3.9754 \pm 0.6508$ & $4.1225 \pm 0.6255$ & $4.1697 \pm 0.6599$ & $4.2845 \pm 0.6483$ \\
NALEDE & $3.9759 \pm 0.6509$ & $4.131 \pm 0.6252$ & $4.1702 \pm 0.6612$ & $4.2848 \pm 0.6494$ \\
NEDEP & $3.9759 \pm 0.652$ & $4.122 \pm 0.6264$ & $4.1708 \pm 0.6601$ & $4.2848 \pm 0.6479$ \\
NALEDEP & $3.9767 \pm 0.6518$ & $4.1227 \pm 0.626$ & $4.171 \pm 0.6619$ & $4.2851 \pm 0.6492$ \\
\hline
\end{tabular}
\end{center}
\caption{\small Regression on the Klein bottle with error in the predictors. The averages and standard deviations over $200$ realizations of RASE for different estimators tested on different configurations.}\label{table:Klein:err}
\end{table}
\subsection{Real data: Isomap face data}\label{isomapface}
We further tested our algorithm on the Isomap face dataset \cite{Tenenbaum_deSilva_Langford:2000}\footnote{\url{http://isomap.stanford.edu/datasets.html}}. The dataset consists of $698$ $64\times 64$ images, denoted as $\{I^{64}_l\}_{l=1}^{698}$, parametrized by three variables: the horizontal orientation, the vertical orientation, and the illumination direction. Thus, the data were sampled from a 3-dimensional manifold embedded in $\mathbb{R}^{64\times 64}$.
When we view each image as a point in $\mathbb{R}^{64\times 64}$, the ambient space dimension $p=64\times 64$ is large, so in \cite{aswani_bickel:2011} the authors suggested rescaling the images from $64\times 64$ to $7\times7$ pixels in size. Denote the resized images of size $k\times k$ as $\{I^{k}_l\}_{l=1}^{698}$, where $k=1,\ldots,64$. We performed $200$ replications of the following experiment, which is suggested in \cite{aswani_bickel:2011}. Fix $k=7$. We randomly split $\{I^{7}_l\}_{l=1}^{698}$ into a training set consisting of $688$ images and a testing set consisting of $10$ images. The horizontal orientations of the images in the testing set were then estimated based on the training set. Table \ref{table:isomap}, which summarizes the results, shows that MALLER improves on the existing methods substantially in the sense of reduced RASE average and standard deviation. We mention that NEDEP and NALEDEP behave worse than NEDE and NALEDE due to the frequent occurrence of blowup in the iteration, and the reported results are the best ones among several trials we carried out.
\begin{table}[h]
\begin{center}
{\small
\begin{tabular}{| c | c | c | c | c |}
\hline
& \multicolumn{2}{|c|}{Isomap face database, $k=7$}\\
\cline{2-3}
& RASE & computation time \\
\hline
MALLER & $1.2168 \pm 0.8131$ & $131.5847 \pm 17.5136$ \\
NEDE & $1.7852 \pm 1.2122$ & $34.4606 \pm 4.5847$ \\
NALEDE & $1.7759 \pm 1.1995$ & $170.7088 \pm 28.8193$ \\
NEDEP & $1.8685 \pm 1.2413$ & $53.7212 \pm 8.3594$ \\
NALEDEP & $2.8095 \pm 3.6525$ & $187.3745 \pm 31.2623$ \\
\hline
\end{tabular}
}
\end{center}
\caption{\small The averages and standard deviations, over $200$ replications, of RASE and computation time in seconds for different estimators tested on the resized Isomap face data $\{I^{7}_l\}_{l=1}^{698}$.}\label{table:isomap}
\end{table}
Next, we carried out another $200$ replications of the same experiment but with $k=14, 21$, or $28$. The MLE intrinsic dimension estimate was $3$ in all the replications when $k=7, 14$ or $21$, and was $4$ in all the replications when $k=28$. The results are given in Table \ref{table:isomap:2}. We mention that when $k=14, 21$ or $28$, the methods in \cite{aswani_bickel:2011} took too long to compute, and the experiment could not be finished within a reasonable time frame, so we decided not to include them in the comparison. When $k=7,8,\ldots,16$, the estimated times (averaged over $3$ realizations) to finish one replication for the methods in \cite{aswani_bickel:2011} are plotted in Figure \ref{fig:isomap:time}, which shows clearly the dependence of these methods on the ambient space dimension $k\times k$.
\begin{table}[h]
\begin{center}
{\small
\begin{tabular}{|c | c | c | c | }
\hline
& $k=14$ & $k=21$ & $k=28$ \\
\cline{2-4}
& \multicolumn{3}{|c|}{Isomap face database, RASE} \\
\hline
MALLER & $0.9865\pm 0.5473$ & $1.0259\pm 0.5098$ & $0.9369\pm 0.7403$\\
\hline
& \multicolumn{3}{|c|}{Isomap face database, computation time}\\
\hline
MALLER & $108.3796\pm 12.0145$ & $148.9841\pm 20.0436$ & $164.3576\pm 28.8329$ \\
\hline
\end{tabular}
}
\end{center}
\caption{\small The averages and standard deviations over $200$ replications of RASE and computation time in seconds for MALLER tested on the resized Isomap face data $\{I^{k}_l\}_{l=1}^{698}$, $k=14,21,28$. }\label{table:isomap:2}
\end{table}
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=0.5\textwidth]{face_time.png}
}
\end{center}
\vspace{-20pt}
\caption{\small The running time for MALLER, NEDE, NALEDE, NEDEP and NALEDEP when $k=7,8,\ldots, 16$. The $y$-axis is in the natural log scale.}\label{fig:isomap:time}
\end{figure}
Note, from Table \ref{table:isomap} and Table \ref{table:isomap:2}, that when $k$ changes from $14$ to $7$ the RASE average of MALLER increases noticeably, and it decreases when $k$ changes from $21$ to $28$. Some partial explanations follow.
It is clear that resizing the images from $64\times 64$ pixels to $k\times k$ pixels for a smaller value of $k$ causes a reduction of the resolution of the images. Taking $k=1$, the extremal case, as an example, the images $\{I^1_l\}_{l=1}^{698}$ are scalar values distributed in $\mathbb{R}$, and obviously the topological structures of $\{I^1_l\}_{l=1}^{698}$ are totally different from that of the original images.
This fact indicates that over-resizing the images leads to the distortion of the topology, which
partially explains the increase of the RASE of MALLER when $k$ changes from $14$ to $7$.
Further, the fact that the RASE average dropped again when $k$ changes from $21$ to $28$ may be explained by the fact that, as the estimated intrinsic dimension increased from $3$ to $4$, the extra dimension helps to reduce the estimation error introduced by the complex geometric structure when the resolution is high.
We emphasize that the above explanations for the RASE average fluctuation need to be quantified with further analysis, which is out of the scope of this paper and will be reported in a future work.
In conclusion, the Isomap face database example shows the strength of MALLER: once the number of observations $n$ is large enough compared with the intrinsic dimension $d$ of the manifold, which may be small compared with the dimension $p$ of the ambient space, our method provides improvement over existing estimators from both the viewpoints of the prediction error and computation time.
\subsection{Gradient and Covariant Derivative Estimation}
We tested our estimator $\widehat{\iota_*\mbox{\tt{grad}}m}(x)$, given in (\ref{algo:estimator:CovDeri}), on the $2$-dimensional torus $\mathbb{T}$ embedded in $\mathbb{R}^3$ via $\iota$, which is parametrized by, except for a set of measure zero,
\begin{equation}
\phi:(u,v)\mapsto \left( (2+\cos(v))\cos(u), (2+\cos(v))\sin(u), \sin(v) \right),
\end{equation}
where $(u,v)\in I:=(0,2\pi)\times (0,2\pi)$. Consider model (\ref{model1}), where $X=\phi(U,V)$ and the regression function $m:\mathbb{T}\to \mathbb{R}$ is given by $$m(\phi(u,v))=\cos(u)\sin(4v+1),$$
$\epsilon\sim \mathcal{N}(0,1)$ and $\sigma(\iota^{-1}(X)) = \sigma_0(1+0.1\cos(U)+0.1\sin(V))$ with $\sigma_0$ chosen so that snrdb$=5$ or $40$. A direct calculation leads to
\begin{equation}\label{simulation:torus:gradient}
\iota_*\mbox{\tt{grad}}m(\phi(u,v))=\left(
\begin{array}{c}
\frac{\sin^2(u)\sin(4v+1)}{2+\cos(v)}-4\cos^2(u)\sin(v)\cos(4v+1)\\
-\frac{\sin(u)\cos(u)\sin(4v+1)}{2+\cos(v)}-4\sin(u)\cos(u)\sin(v)\cos(4v+1)\\
4\cos(u)\cos(v)\cos(4v+1)
\end{array}
\right).
\end{equation}
The detailed calculation of (\ref{simulation:torus:gradient}) can be found in the Supplementary.
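As a sanity check on such gradient computations, one can derive the embedded gradient directly from the metric and verify it numerically: for this parametrization $\phi_u\perp\phi_v$, with $\|\phi_u\|^2=(2+\cos v)^2$ and $\|\phi_v\|^2=1$, so $\iota_*\mbox{\tt{grad}}m=(\partial_u m/\|\phi_u\|^2)\phi_u+(\partial_v m)\phi_v$. The result must be tangent to the torus and its inner products with $\phi_u$, $\phi_v$ must recover $\partial_u m$, $\partial_v m$. A sketch of this check (our own code, not the paper's implementation):

```python
import numpy as np


def grad_m(u, v):
    """Embedded gradient of m(phi(u, v)) = cos(u) sin(4v+1) on the torus.

    Uses the orthogonality of phi_u and phi_v: |phi_u|^2 = (2 + cos v)^2
    and |phi_v|^2 = 1, so grad m = (m_u / |phi_u|^2) phi_u + m_v phi_v.
    """
    m_u = -np.sin(u) * np.sin(4 * v + 1)
    m_v = 4 * np.cos(u) * np.cos(4 * v + 1)
    phi_u = np.array([-(2 + np.cos(v)) * np.sin(u),
                      (2 + np.cos(v)) * np.cos(u), 0.0])
    phi_v = np.array([-np.sin(v) * np.cos(u),
                      -np.sin(v) * np.sin(u), np.cos(v)])
    return m_u / (2 + np.cos(v)) ** 2 * phi_u + m_v * phi_v
```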
We sampled $6000$ points $\{(U_i,V_i)\}_{i=1}^{6000}$ uniformly from $I$ and then generated $\{(X_i,Y_i)\}_{i=1}^{6000}$ according to the above model. Notice that this sampling scheme is non-uniform on the torus. Then we randomly picked $3000$ points $\{X_i=\phi(U_i,V_i)\}_{i=6001}^{9000}$ as the testing sample, and computed the gradient estimates $\{\widehat{\iota_*\mbox{\tt{grad}} m}(X_i)\}_{i=6001}^{9000}$ based on the training sample $\{(X_i,Y_i)\}_{i=1}^{6000}$. The estimates are visually demonstrated in Figure \ref{fig:gradient}, together with the ground truth (\ref{simulation:torus:gradient}) for comparison.
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=0.46\textwidth]{gradient1.png}
}
\subfigure{
\includegraphics[width=0.46\textwidth]{gradient2.png}
}
\end{center}
\caption{\small Gradient estimates. Left: snrdb=$40$dB; Right: snrdb=$5$dB. The blue circles are the portion of the testing sample $\{(u_i,v_i)\}_{i=6001}^{9000}$ such that $|v_i|<1$ and $u_i>2$, the red arrows are $\iota_*\mbox{\tt{grad}}m(\phi(u_i,v_i))$ and the black arrows are $\widehat{\iota_*\mbox{\tt{grad}} m}(\phi(u_i,v_i))$.}\label{fig:gradient}
\end{figure}
\section{Implications to Manifold Learning}\label{diffusionmap}
Another branch of approaches to high-dimensional, massive data analysis is the family of graph-based algorithms such as locally linear embedding (LLE) \cite{Roweis_Saul:2000}, ISOMAP \cite{Tenenbaum_deSilva_Langford:2000}, Hessian LLE \cite{donoho_grimes:2003}, the Laplacian eigenmap \cite{belkin_niyogi:2003}, local tangent space alignment \cite{Zhang_Zha:2004}, diffusion maps \cite{coifman_lafon:2006}, and vector diffusion maps \cite{vdm}. In addition to preserving the nonlinearity of the data structure, one advantage of these approaches is their adaptivity to the data; that is, the model imposed on the data is relatively weak, so the information revealed by the analysis is less distorted by model mis-specification. These advantages render the graph-based algorithms attractive and popular in data analysis.
When the data are assumed to be sampled from a compact and smooth $d$-dimensional manifold $\text{M}$, the key step of these methods is the learning of the intrinsic geometric quantities, for example, the Hessian operator \cite{donoho_grimes:2003}, the Laplace-Beltrami operator \cite{belkin_niyogi:2003,coifman_lafon:2006} or the connection Laplacian \cite{vdm}. What we are concerned with in this section is the estimation of the Laplace-Beltrami operator $\Delta$ of $\text{M}$, considered in the diffusion map framework \cite{coifman_lafon:2006}, via MALLER.
We refer the readers to this literature for further discussions and references.
Throughout this section, we make use of the same assumptions and notation as in Sections
\ref{algorithm} and \ref{theory}.
We start with discussing the relationship between the diffusion map framework and generalizing the Nadaraya-Watson kernel regression method to the manifold setup.
Suppose $\text{M}$ is compact, smooth and without boundary. Fix a bandwidth $h>0$. First we define an $n\times n$ weight matrix $W$ and an $n\times n$ diagonal matrix $D$ by
\begin{equation}\label{W0}
W(i,j)=K\left(\frac{\|X_i-X_j\|_{\mathbb{R}^p}}{\sqrt{h}}\right)\quad\mbox{and}\quad
D(i,i) = \sum_{j=1}^{n} W(i,j).
\end{equation}
Then $A:=D^{-1}W$ can be interpreted as a Markov transition matrix of a discrete random walk over the sample points $\{X_i\}_{i=1}^n$, where the transition probability in a single step from the sample point $X_i$ to the sample point $X_j$ is given by $A(i,j)$.
Note that $A$ can be used to generalize the Nadaraya-Watson kernel method originally defined for nonparametric regression on $\mathbb{R}^p$ to the manifold $\text{M}$ setup. Indeed, given the regression model (\ref{model1}), define this generalized Nadaraya-Watson estimator $\hat{m}_{NW}$ of $m$ at $X_i$ as
\[
\hat{m}_{NW}(X_i,h) := (A\boldsymbol{Y})(i)=\frac{\sum_{j=1}^nK\left(\frac{\|X_i-X_j\|_{\mathbb{R}^p}}{\sqrt{h}}\right)Y_j}{\sum_{j=1}^nK\left(\frac{\|X_i-X_j\|_{\mathbb{R}^p}}{\sqrt{h}}\right)}, \, i=1,\ldots,n,
\]
i.e. take $A$ as the smoothing matrix of $\hat{m}_{NW}(\cdot, h)$.
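The construction of $W$, $D$ and the smoothing matrix $A$ can be written out directly. A minimal sketch (assuming, for illustration, the kernel $K(u)=e^{-7u^2}\mbox{I}_{[0,1]}(u)$ used in the numerical section; `X` is a hypothetical $n\times p$ array of sample points):

```python
import numpy as np


def kernel(u):
    """K(u) = exp(-7 u^2) on [0, 1], zero outside (compactly supported)."""
    return np.exp(-7.0 * u ** 2) * (u <= 1.0)


def nw_smoother(X, h):
    """Markov smoothing matrix A = D^{-1} W over the sample points."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    W = kernel(dist / np.sqrt(h))       # W(i, j) of (W0)
    D = W.sum(axis=1)                   # D(i, i) = sum_j W(i, j)
    return W / D[:, None]               # A(i, j); each row sums to 1


def nw_estimate(X, Y, h):
    """Generalized Nadaraya-Watson fit m_hat_NW(X_i, h) = (A Y)(i)."""
    return nw_smoother(X, h) @ np.asarray(Y, dtype=float)
```

Since $K(0)=1$, every diagonal entry of $W$ is positive, so the row sums $D(i,i)$ never vanish and $A$ is a well-defined Markov matrix.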
Clearly the conditional expectation of the estimator $\hat{m}_{NW}(X_i,h)$ becomes
\begin{eqnarray}
&&\mathbb{E}\big\{\hat{m}_{NW}(X_i,h)\big|\mathcal{X}\big\}
= (A\boldsymbol{m})(i)=\frac{\sum_{j=1}^nK\left(\frac{\|X_i-X_j\|_{\mathbb{R}^p}}{\sqrt{h}}\right)m({X}_j)}{\sum_{j=1}^nK\left(\frac{\|X_i-X_j\|_{\mathbb{R}^p}}{\sqrt{h}}\right)},\label{NWconvergence}
\end{eqnarray}
where $\boldsymbol{m}$ is defined in (\ref{def:Yandm}). When $m\in C^3(\text{M})$ and ${X}_i\notin \text{M}_{\sqrt{h}}$, the asymptotic expansion of (\ref{NWconvergence}) has been shown in \cite{coifman_lafon:2006,hein_audibert_luxburg:2005, singer:2006}. Indeed, we have, as $n\to \infty$,
\begin{eqnarray}
(A\boldsymbol{m})(i)= m({X}_i)+h\frac{\mu_{1,2}}{2d}\bigg(\Delta m({X}_i)+2 \frac{m({X}_i)\Delta f({X}_i)}{f({X}_i)}\bigg)+ O(h^2)+O_p\Big(\frac{1}{n^{\frac{1}{2}}h^{\frac{d}{4}-\frac{1}{2}}}\Big).\nonumber
\end{eqnarray}
Note that in \cite{coifman_lafon:2006} the kernel is normalized so that $\mu_{1,0}=1$ and $\mu_{1,2}/d=2$.
When $f$ is constant, the second order conditional bias term contains information about the Laplace-Beltrami operator of $(\text{M},g)$. This fact, however, is in general ignored when the focus is the nonparametric regression problem. On the contrary, since knowledge of the Laplace-Beltrami operator leads to abundant information about the manifold, in \cite{coifman_lafon:2006} the matrix $L_0:=h^{-1}(D^{-1}W-I_n)$ and its relationship with the Laplace-Beltrami operator are extensively studied, and the eigenvectors of $A$ are used to define the diffusion map.
When $f$ is not constant,
the $f$-dependence is removed by the following normalization \cite{coifman_lafon:2006}. Define a $n\times n$ weight matrix $W_1$ and a $n\times n$ diagonal matrix $D_1$ by
\begin{equation}
\label{Walpha}
W_1 = D^{-1}WD^{-1},\quad\mbox{and}\quad D_1(i,i) = \sum_{j=1}^n W_1(i,j)
\end{equation}
where $W$ and $D$ are defined in (\ref{W0}),
and
\[
L_1=h^{-1}\big(D_1^{-1}W_1-I_n\big).
\]
When $n\to\infty$, it is shown in \cite{coifman_lafon:2006} that for any $m\in C^3(\text{M})$ the matrix $L_1$ satisfies the following convergence:
\begin{equation}\label{remark:dm:L1m}
(L_{1}\boldsymbol{m})(i) = \frac{\mu_{1,2}}{2d}\Delta m(X_i) + O(h) + O_p\Big(\frac{1}{n^{1/2}h^{d/4+1/2}}\Big).
\end{equation}
Notice that the effect of the normalization (\ref{Walpha}) is to cancel out the effect of the non-uniformity of $f$ on the matrix $L_0$. We remark that the matrix $D_1^{-1}W_1$ can thus be used as the smoothing matrix of a new estimator of $m$ which is design adaptive.
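A sketch of this normalization, taking the kernel matrix $W$ of (\ref{W0}) as input (our own code; note that by construction the rows of $L_1$ sum to zero, i.e. $L_1$ annihilates constants, as a discrete Laplacian should):

```python
import numpy as np


def normalized_laplacian(W, h):
    """L_1 = h^{-1} (D_1^{-1} W_1 - I) with W_1 = D^{-1} W D^{-1}.

    W is the n x n kernel matrix of (W0); the extra normalization
    cancels the effect of a non-uniform sampling density f.
    """
    n = W.shape[0]
    D = W.sum(axis=1)
    W1 = W / np.outer(D, D)        # D^{-1} W D^{-1}
    D1 = W1.sum(axis=1)
    A1 = W1 / D1[:, None]          # D_1^{-1} W_1, a Markov matrix
    return (A1 - np.eye(n)) / h
```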
If we view the Nadaraya-Watson kernel method on $\mathbb{R}^p$ as local zero-order polynomial regression, the LLR on $\mathbb{R}^p$ can be viewed as its first-order companion, which takes the local slope into account \cite{ruppert_wand:1994}. We discuss its generalization to the regression-on-manifold setup extensively in Section \ref{algorithm}, its large sample behavior in Section \ref{theory}, and demonstrate its numerical performance in Section \ref{numerics}. Recall that the conditional bias of MALLER, given in (\ref{rslt:interior:mfd}) of the Supplementary, depends on the Laplace-Beltrami operator:
\[
\mathbb{E}\{\hat{m}(X,h)-m(X)| \mathcal{X}\}=h\frac{\mu_{1,2}}{2d}\Delta m(X)+O(h^2+hh_{\text{pca}}^{3/4})+O_p\Big(\frac{1}{n^{1/2}h^{d/4-1}}\Big).
\]
This fact leads us to build up an alternative matrix to approximate the Laplace-Beltrami operator.
Fix $h>0$ and consider the following $n\times n$ matrix
\begin{equation}\label{section5:Ap}
A_p=\left[\begin{array}{c}
\boldsymbol{v}_1^T(\mathbb{X}^T_{X_1}\mathbb{W}_{X_1}\mathbb{X}_{X_1})^{-1}\mathbb{X}_{X_1}^T\mathbb{W}_{X_1}\\
\vdots\\
\boldsymbol{v}_1^T(\mathbb{X}^T_{X_n}\mathbb{W}_{X_n}\mathbb{X}_{X_n})^{-1}\mathbb{X}_{X_n}^T\mathbb{W}_{X_n}
\end{array}
\right],
\end{equation}
where the $i$-th entry is defined by (\ref{design}), (\ref{weighted}), and (\ref{algo:estimator:mhat}). Note that $A_p$ is the smoothing matrix of MALLER, that is,
$A_p\boldsymbol{Y}=\big(\hat{m}(X_1,h),\ldots,\hat{m}(X_n,h)\big)^T$ from (\ref{algo:estimator:mhat}).
Using this smoothing matrix and defining
\[
L_p = h^{-1}\big(A_p-I_{n}\big),
\]
for any $m\in C^3(\text{M})$, we directly have
\begin{equation}\label{section5:newDelta}
(L_p\boldsymbol{m})(i)=\frac{\mu_{1,2}}{2d}\Delta m(X_i)+O(h+h_{\text{pca}}^{3/4})+O_p\Big(\frac{1}{n^{1/2}h^{d/4}}\Big).
\end{equation}
Thus the matrix $L_p$ can be used to construct an estimator of the Laplace-Beltrami operator $\Delta$. Notice that we do not need an extra step to handle the non-constant p.d.f. issue here because the design adaptive property of $\hat{m}(X,h)$ ensures that the leading term on the right-hand side of (\ref{section5:newDelta}) is independent of $f$. With the estimator $L_p$ of $\Delta$, massive data analysis can be carried out in the same way as in the diffusion map framework whenever the manifold assumption is reasonable. We remark that knowledge of the non-constant p.d.f. is useful in some problems. For example, in \cite{coifman_lafon:2006,nadler_lafon_coifman:2006} the authors showed a strong connection between the non-constant p.d.f. and the Fokker-Planck operator, which is useful in the low-dimensional representation of stochastic systems.
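As an illustration of this construction, the following minimal sketch (our own, not the authors' code; the function name \texttt{lp\_matrix}, the Gaussian weight choice and all parameter values are illustrative assumptions) builds the smoothing matrix $A_p$ of a local linear smoother on the estimated tangent line for points on the unit circle ($d=1$) and forms $L_p=h^{-1}(A_p-I_n)$. Since local linear regression reproduces constants exactly, $L_p$ annihilates the constant function, which the final check confirms.

```python
import numpy as np

def lp_matrix(X, h, h_pca):
    # Sketch of A_p and L_p = (A_p - I_n)/h for a 1-d manifold (d = 1)
    # embedded in R^p.  Gaussian weights exp(-||.||^2 / h) stand in for a
    # generic kernel; this is an illustrative choice, not the paper's.
    n = X.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        diff = X - X[i]
        # local PCA with bandwidth h_pca: the leading direction of the
        # weighted covariance estimates the tangent line at X_i
        w_pca = np.exp(-np.sum(diff**2, axis=1) / h_pca)
        u = np.linalg.svd((diff * w_pca[:, None]).T @ diff)[0][:, 0]
        t = diff @ u                          # tangent coordinates
        w = np.exp(-np.sum(diff**2, axis=1) / h)
        D = np.column_stack([np.ones(n), t])  # local linear design matrix
        # row i of A_p is v_1^T (D^T W D)^{-1} D^T W
        A[i] = np.linalg.solve(D.T @ (w[:, None] * D), D.T * w)[0]
    return (A - np.eye(n)) / h

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])  # samples from S^1
L = lp_matrix(X, h=0.05, h_pca=0.02)
# local linear regression reproduces constants, so L_p annihilates them
print(np.abs(L @ np.ones(200)).max())  # essentially zero
```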
In Figure \ref{fig:Sk:ev}, some numerical results of estimating the Laplace-Beltrami operator $\Delta$ of $\text{M}$ by this new method are demonstrated. We sampled $1000$, $2000$ and $4000$ points uniformly from $S^2$, $S^3$ and $S^4$ embedded in $\mathbb{R}^{3}$, $\mathbb{R}^{4}$ and $\mathbb{R}^{5}$ respectively, and built the matrix $L_p$ from the sample points with $h=0.1$. It is well known that the $l$-th eigenvalue of the Laplace-Beltrami operator of $S^k$ is $-l(l+k-1)$ with multiplicity ${k+l \choose k}-{k+l-2 \choose k}$, where ${ \cdot \choose \cdot}$ denotes the binomial coefficient. The results in Figure \ref{fig:Sk:ev} show that the new estimator for the Laplace-Beltrami operator agrees with this fact numerically.
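The closed-form spectrum quoted above is easy to tabulate. The following snippet (the function name \texttt{sphere\_spectrum} is our own) lists the first eigenvalue levels and multiplicities for $S^2$, $S^3$ and $S^4$, matching the values reported in the caption of Figure \ref{fig:Sk:ev}.

```python
from math import comb

def sphere_spectrum(k, num_levels):
    # l-th eigenvalue -l(l+k-1) of the Laplace-Beltrami operator on S^k,
    # with multiplicity C(k+l, k) - C(k+l-2, k)
    out = []
    for l in range(num_levels):
        mult = comb(k + l, k) - (comb(k + l - 2, k) if k + l - 2 >= 0 else 0)
        out.append((-l * (l + k - 1), mult))
    return out

for k in (2, 3, 4):
    print(f"S^{k}:", sphere_spectrum(k, 4))
# S^2: [(0, 1), (-2, 3), (-6, 5), (-12, 7)]
# S^3: [(0, 1), (-3, 4), (-8, 9), (-15, 16)]
# S^4: [(0, 1), (-4, 5), (-10, 14), (-18, 30)]
```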
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=0.31\textwidth]{s2ev.png}}
\subfigure{
\includegraphics[width=0.31\textwidth]{s3ev.png}}
\subfigure{
\includegraphics[width=0.31\textwidth]{s4ev.png}}
\end{center}
\vspace{-0.6cm}
\caption{\small From left to right: bar plots of the first $30$ eigenvalues of $L_p$ when the data points were sampled uniformly from $S^2$, $S^3$ and $S^4$. Note that the first few eigenvalues of $\Delta$ are $0,-2,-6,-12$ for $S^2$, $0,-3,-8,-15$ for $S^3$ and $0,-4,-10,-18$ for $S^4$, and the multiplicities of the first few eigenvalues of $\Delta$ are $1,3,5,7$ for $S^2$, $1,4,9,16$ for $S^3$ and $1,5,14,30$ for $S^4$. This fact is well reflected in the corresponding spectrum of $L_p$.}\label{fig:Sk:ev}
\vspace{-0.1cm}
\end{figure}
Up to now there are two ways to estimate the Laplace-Beltrami operator: one is based on generalizing the Nadaraya-Watson kernel method to the manifold setup as suggested by (\ref{remark:dm:L1m}) and studied in \cite{coifman_lafon:2006}, and the other is based on MALLER, which generalizes the LLR to the manifold setup, as suggested by (\ref{section5:newDelta}). The difference between these two approaches is most obvious when the manifold has smooth boundary.
Suppose $\text{M}$ is compact, smooth and its boundary $\partial\text{M}$ is non-empty and smooth. When $X_i\in \text{M}_{\sqrt{h}}$, the asymptotic behavior of $D_1^{-1}W_1$ has been shown in the proof of Proposition 10 of \cite{coifman_lafon:2006}:
\begin{equation}\label{dm:bdry:blowup}
(D^{-1}_1W_1\boldsymbol{m})(i)=m(X_0)+\sqrt{h}C_1\partial_\nu m(X_0)+O(h)+O_p\Big(\frac{1}{n^{1/2}h^{d/4-1/2}}\Big),
\end{equation}
where $C_1=O(1)$, $X_0\in\partial\text{M}$ is the point on the boundary $\partial\text{M}$ closest to $X_i$, and $\nu$ is the normal direction at $X_0$. If the $\sqrt{h}$-order term is non-zero, the estimator $(L_{1}\boldsymbol{m})(i)$ in (\ref{remark:dm:L1m}) blows up when $h\to 0$. To avoid this blowup and to get an estimate of the Laplace-Beltrami operator on $\text{M}$, the Neumann boundary condition $\frac{\partial m}{\partial\nu}=0$ is necessary.
Thus, solving the eigenvalue problem of $L_1$ is a discrete approximation to solving the eigenvalue problem of the Laplace-Beltrami operator with the Neumann boundary condition.
The situation is totally different for the proposed estimator $L_p$. The asymptotic behavior of the conditional bias of MALLER at $X_i\in \text{M}_{\sqrt{h}}$ provided in Corollary \ref{col:smoothbdry}
leads to
\begin{equation}\label{manifold_learning:new_estimator}
(L_p\boldsymbol{m})(i)=\frac{1}{2}\sum_{k=1}^dc_k(X_i)\nabla^2_{\partial_k,\partial_k}m(X_i)+O_p(h^{-1/2}h_{\text{pca}}^{3/4}+h_{\text{pca}}^{1/2})+O_p\Big(\frac{1}{n^{1/2}h^{d/4}}\Big).
\end{equation}
Thus, we know that when ${X}_i$ is near the boundary, the estimator $L_p$ does not blow up when $h\to 0$, and a different boundary condition can be imposed.
Notice that the importance of using different bandwidths in the tangent plane estimation and in the LLR on the tangent plane becomes clear from (\ref{section5:newDelta}) and (\ref{manifold_learning:new_estimator}). Indeed, if we take $h_{\text{pca}}<h$ then it follows from (\ref{section5:newDelta}) (resp. (\ref{manifold_learning:new_estimator})) that the first order error of the estimator for the Laplace-Beltrami operator inside the manifold is smaller than the order $h^{3/4}$ (resp. $h^{1/4}$).
In Figure \ref{fig:section5:bdry}, we demonstrate the eigenvectors of the estimator $L_p$ for the Laplace-Beltrami operator of a manifold with boundary. Specifically, we sampled $2000$ points $\{X_l\}_{l=1}^{2000}$ uniformly from the interval $[0,1]$ embedded in $\mathbb{R}$, and evaluated the eigenvectors of $L_p$ built on $\{X_l\}_{l=1}^{2000}$. Notice that the eigenvectors shown in Figure \ref{fig:section5:bdry}, except for the first one, cannot occur if the Laplace-Beltrami operator satisfies the Neumann condition. A study of the boundary conditions suitable for the estimator $L_p$ is beyond the scope of this paper, and we leave it for future work.
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=0.95\textwidth]{halfcirc.png}
}
\end{center}
\caption{\small From left to right: the first four eigenvectors of $L_p$ and the first $10$ eigenvalues of $L_p$ when sampling from $[0,1]$. The first two eigenvalues are zero. Notice that the second, third and fourth eigenvectors cannot occur if the Laplace-Beltrami operator satisfies the Neumann condition.}\label{fig:section5:bdry}
\end{figure}
\bigskip
\section{Discussions} \label{discussion}
When the $p$-dimensional predictor vector $X$ has some $d$-dimensional manifold structure, we obtain MALLER by constructing the traditional LLR on the estimated embedded tangent plane, which is of dimension $d$ instead of $p$. Consequently, both the estimation accuracy and computational speed depend only on $d$ but not on $p$. Keeping $p, d, n$ as fixed numbers, this feature is particularly advantageous when $d\ll n< p$, as is shown in the Isomap face database example in the numerical section.
We mention that the fact that MALLER works in this case hinges on the capability of estimating the tangent plane. Since our model is noise-free in the predictors, this capability is explained by the theoretical findings in \cite{kaslovsky_meyer:2011} and \cite{nadler:2008}.
In \cite{nadler:2008}, the spike model is studied and the recovery of the subspace spanned by the response vectors is guaranteed even if $p\geq n$, when there is no noise \cite[(2.13)]{nadler:2008}. Under the manifold setup, locally the manifold model behaves like the Euclidean space, so it is expected to have similar results as those in \cite{nadler:2008}, which is shown in \cite{kaslovsky_meyer:2011}.
Furthermore, we emphasize that, while in \cite{aswani_bickel:2011} this case is modeled as the large-$p$-small-$n$ problem, where $p$ grows with $n$, and sparsity conditions and thresholding are employed, here we treat $p$ as a fixed number and exploit the fact that $n$ is larger than $d$.
\subsection{The Relationship with NEDE}
MALLER is not the first LLR regression scheme proposed to adapt to the manifold structure. NEDE, given in \cite{aswani_bickel:2011}, is a manifold-adaptive LLR constructed in the $p$-dimensional ambient space with regularization imposed on the directions perpendicular to the estimated embedded tangent plane.
At first glance, MALLER seems to be a special case of NEDE obtained by taking $\lambda_n=\infty$ in \cite[(4.6)]{aswani_bickel:2011}. However, there are several distinct differences between the two methods. In this section we follow the notation used in \cite{aswani_bickel:2011}.
First, when $\lambda_n=\infty$ for all $n$, although $\tilde{\beta}$ in \cite[(4.6)]{aswani_bickel:2011} is forced to be located on the estimated embedded tangent plane, the NEDE algorithm still runs in the ambient space and the minimization problem in \cite[(4.6)]{aswani_bickel:2011} becomes ill-posed. Indeed, the solution of \cite[(4.6)]{aswani_bickel:2011} depends on the inverse of the matrix $\hat{C}_n+\lambda_n\hat{P}_n/nh^{d+2}$, which is unstable to compute when $\lambda_n=\infty$. This numerical instability of NEDE when $\lambda_n=\infty$ can also be demonstrated numerically.
As an illustration, we ran NEDE with $\lambda_n=e^{100}$ (within the machine precision) on the Isomap face database with the images downsized to $7\times 7$ pixels. The optimal value of $d$ chosen by the NEDE algorithm was then close to $49=7\times 7=p$ ($48.325\pm 1.3019$ over $100$ replications) due to the degeneracy of $\hat{C}_n+\lambda_n\hat{P}_n/nh^{d+2}$, and the final RASE was $12.3684 \pm 6.1161$ (over $100$ replications), roughly ten times the RASE of MALLER. Even when we set $d=3$ and $\lambda_n=e^{100}$ in the NEDE algorithm and tested it on the same $7\times 7$-pixel images, the final RASE was still $10.5829 \pm 6.0986$ over $100$ replications.
Second, even if NEDE \cite[(4.6)]{aswani_bickel:2011} is stable to solve when $\lambda_n=\infty$, the bandwidth selection problem in NEDE still depends on $p$, which leads to different results compared with MALLER. Specifically, the selected bandwidth would be larger and hence the bias is increased.
Third, in NEDE the bandwidth used in the tangent plane estimation is taken to be the same as the one used in the LLR estimation, while in MALLER we estimate the tangent plane using a different bandwidth $h_{\text{pca}}$ which by the asymptotic analysis should be taken to be smaller than the bandwidth $h$ in the LLR step. Thus, the tangent plane estimate obtained by NEDE is different from that obtained by MALLER. Since this estimation error does not contribute to the leading bias term, the difference is not significant in the regression problem. However, if we would like to have a better estimator of the Laplace-Beltrami operator, this error becomes significant, as is shown in Section \ref{diffusionmap}.
In conclusion, MALLER is different from NEDE, both theoretically and numerically, even if the parameter $\lambda_n$ in NEDE is set to $\infty$. The key features that render the two algorithms different are those mentioned above, not the more sophisticated method MALLER uses to select the bandwidth in the LLR.
\subsection{Future Directions}
To sum up, several issues are left open and of interest for future research:
\begin{enumerate}
\item Like in any smoothing methods, bandwidth selection is crucial for the proposed MALLER.
Our bandwidth selection procedure is built on balancing between estimates of the conditional bias and variance.
Although this approach worked well in our numerical studies, there is still room for improvement.
\item
We include in our algorithm a clustering tool to alleviate numerical problems caused by the condition number, without having to estimate the condition number itself. This is not the ultimate solution; ideally one would estimate the condition number and then use that information in the subsequent steps.
\item In this paper we consider the case where the predictor vector
is directly observable. In some situations, the predictor vector itself is subject to noise, and
the tangent plane and regression estimation steps have to be adjusted accordingly.
This is closely related to the deconvolution and measurement error problems in the literature, in the Euclidean setup.
\item In MALLER, the dimensionality is reduced to the intrinsic structure of the predictors. The dimensionality may be further reduced by taking into account the relationship between the response and the predictors \cite{xia:2007, xia:2008}.
\item
The smoothing matrix of MALLER is shown to be useful for estimating the Laplace-Beltrami operator with a boundary condition different from the Neumann condition; it is worthwhile to investigate such a new set of tools for manifold learning further.
\item In applications, the response itself may be multivariate
as well. The case when the responses are positive-definite matrices and the predictor vector is non-degenerate in $\mathbb{R}^p$ was considered by \cite{zhu:2009}. It is interesting to investigate the case when both the response and the predictor vector have manifold structures.
\end{enumerate}
\bibliographystyle{plain}
\section{Introduction and Main Result}
Let $(\mathcal{X},\mu)$ and $(\mathcal{Y},\nu)$ be Polish probability spaces and $\Pi(\mu,\nu)$ the set of all couplings; i.e., probability measures~$\pi$ on~$\mathcal{X}\times\mathcal{Y}$ with first marginal~$\mu$ and second marginal~$\nu$. Moreover, let $c:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}$ be continuous with
\begin{equation}\label{eq:cIntegrable}
\int c(x,y)\,\mu(dx)\nu(dy)<\infty.
\end{equation}
Given a constant~$\varepsilon>0$, the entropic optimal transport (EOT) problem is
\begin{equation}\label{eq:EOT}
I_{\varepsilon} :=\inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathcal{X}\times\mathcal{Y}} c(x,y) \, \pi(dx,dy) + \varepsilon H(\pi|\mu\otimes\nu),
\end{equation}
where $H(\,\cdot\,|\mu\otimes\nu)$ denotes relative entropy with respect to the product measure,
$$
H(\pi|\mu\otimes\nu):=\begin{cases}
\int \log (\frac{d\pi}{d(\mu\otimes\nu)}) \,d\pi, & \pi\ll \mu\otimes\nu,\\
\infty, & \pi\not\ll \mu\otimes\nu.
\end{cases}
$$
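For discrete marginals, the relative entropy above reduces to a finite sum. The following sketch (the function name \texttt{relative\_entropy} is our own) computes it, returning infinity when absolute continuity with respect to the product measure fails.

```python
import numpy as np

def relative_entropy(pi, mu, nu):
    # H(pi | mu x nu) for discrete measures given as weight arrays;
    # returns inf when pi is not absolutely continuous w.r.t. mu x nu
    prod = np.outer(mu, nu)
    if np.any((pi > 0) & (prod == 0)):
        return np.inf
    mask = pi > 0
    return float(np.sum(pi[mask] * np.log(pi[mask] / prod[mask])))

# the product coupling itself has zero entropy relative to mu x nu
mu = np.array([0.5, 0.5]); nu = np.array([0.25, 0.75])
print(relative_entropy(np.outer(mu, nu), mu, nu))  # 0.0
```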
For $\varepsilon=0$ we recover the Monge--Kantorovich optimal transport problem, and~\eqref{eq:EOT} can be seen as its entropic regularization with parameter~$\varepsilon>0$. The minimization~\eqref{eq:EOT} admits a unique solution $\pi_{\varepsilon}\in\Pi(\mu,\nu)$; moreover, $\pi_{\varepsilon}\sim \mu\otimes\nu$ and its density is of the form
\begin{equation}\label{eq:densityFormIntro}
\frac{d\pi_{\varepsilon}}{d(\mu\otimes\nu)}(x,y) = \exp \left(\frac{f_{\varepsilon}(x) +g_{\varepsilon}(y) - c(x,y)}{\varepsilon}\right)
\end{equation}
for two measurable functions $f_{\varepsilon}: \mathcal{X}\to\mathbb{R}$ and $g_{\varepsilon}: \mathcal{Y}\to\mathbb{R}$. We call these functions the \emph{Schr\"odinger potentials}. They are unique up to normalization: any constant can be added to $f_{\varepsilon}$ and subtracted from $g_{\varepsilon}$. The integrability~\eqref{eq:cIntegrable} of~$c$ implies that $f_{\varepsilon}\in L^{1}(\mu)$ and $g_{\varepsilon}\in L^{1}(\nu)$, and we enforce the symmetric normalization
\begin{equation}\label{eq:normalizationIntro}
\int f_\epsilon(x)\,\mu(dx)=\int g_\epsilon(y)\,\nu(dy)
\end{equation}
to have uniqueness of the potentials in all that follows. We mention that~$\pi_{\varepsilon}$ can be characterized as the unique coupling $\pi\in\Pi(\mu,\nu)$ whose density is of the form~\eqref{eq:densityFormIntro}. See, for instance, \cite[Statements~3.6, 3.15, 3.19, 3.38]{FollmerGantert.97} for existence and uniqueness, or~\cite{Nutz.20} for a simple derivation including integrability under~\eqref{eq:cIntegrable}. These results build heavily on~\cite{BorweinLewis.92, BorweinLewisNussbaum.94, Csiszar.75, RuschendorfThomsen.97}, among others.
Rewriting the minimization~\eqref{eq:EOT}, the coupling $\pi_{\varepsilon}$ can be interpreted as the so-called static Schr\"odinger bridge
\begin{equation}\label{eq:bridgeIntro}
\pi_{\varepsilon}=\argmin_{\pi\in \Pi(\mu,\nu)} H(\pi|R)
\end{equation}
for the reference probability $dR\propto e^{-c/\varepsilon}d(\mu\otimes\nu)$ which elucidates~\eqref{eq:densityFormIntro} as the factorization property~$\frac{d\pi_{\varepsilon}}{dR}(x,y)= e^{f_{\varepsilon}(x)/\varepsilon}e^{g_{\varepsilon}(y)/\varepsilon}=:F(x)G(y)$.\footnote{We mention that \cite{Leonard.14} uses the term Schr\"odinger potentials for $f_{\varepsilon}/\varepsilon,g_{\varepsilon}/\varepsilon$ in the Schr\"odinger bridge context, as is natural when no parameter~$\varepsilon$ is present. On the other hand, calling $f_{\varepsilon},g_{\varepsilon}$ potentials is more convenient in our setting, well motivated by the connection with Kantorovich potentials in Theorem~\ref{thm:1}, and consistent with the terminology in~\cite{GigliTamanini.21}.}
A closely related, more analytic way to characterize the potentials are the Schr\"odinger equations. Writing also $C(x,y)=e^{-c(x,y)/\varepsilon}$, the fact that $\pi_{\varepsilon}$ of~\eqref{eq:densityFormIntro} is in $\Pi(\mu,\nu)$ implies that $(F,G)$ solves the coupled equations
\begin{equation}\label{eq:SchrodingerEqnsIntro}
F(x)^{-1}= \int G(y) C(x,y)\, \nu(dy) \quad \mu\mbox{-a.s.}, \quad G(y)^{-1}= \int F(x) C(x,y)\, \mu(dx) \quad \nu\mbox{-a.s.}
\end{equation}
Conversely, we can use any solution $(F,G)$ to define a coupling with density of the form~\eqref{eq:densityFormIntro}. This coupling must coincide with $\pi_{\varepsilon}$ by the aforementioned uniqueness, and then $(\varepsilon\log F,\varepsilon\log G)$ must be our Schr\"odinger potentials $(f_{\varepsilon},g_{\varepsilon})$, up to normalization. We refer to~\cite{Beurling.60,RuschendorfThomsen.93} and the references therein for more on Schr\"odinger equations, and to~\cite{Follmer.88,Leonard.14} for extensive surveys on Schr\"odinger bridges.
Yet another way to introduce the potentials is to consider the dual problem of~\eqref{eq:EOT} in the sense of convex analysis,
\begin{align}
\begin{split}\label{eq:dualEOT}
S_\epsilon
:=\sup_{f\in L^1(\mu), g \in L^1(\nu)} \bigg(\int f(x)\, \mu(dx)&+\int g(y) \,\nu(dy) \\
&-\epsilon \int e^{\frac{f(x)+g(y)-c(x,y)}{\epsilon}}\, \mu(dx)\nu(dy)+\epsilon\bigg).
\end{split}
\end{align}
Then $(f_{\varepsilon},g_{\varepsilon})$ is the unique solution of~\eqref{eq:dualEOT} with the normalization~\eqref{eq:normalizationIntro}. Indeed, direct arguments show the weak duality $S_{\varepsilon}\leq I_{\varepsilon}$. To see that equality is attained by~$(f_{\varepsilon},g_{\varepsilon})$ and~$\pi_{\varepsilon}$, we plug in~\eqref{eq:densityFormIntro} and use $\pi_{\varepsilon}(\mathcal{X}\times\mathcal{Y})=1$ to find that $S_{\varepsilon}\geq \int f_\epsilon(x)\,\mu(dx)+\int g_\epsilon(y)\,\nu(dy)\geq I_{\varepsilon}$. Uniqueness holds by strict concavity. See \cite{PennanenPerkkio.19} and the references therein for a convex analysis perspective including~\eqref{eq:dualEOT}.
We are interested in the relation of $(f_{\varepsilon}, g_{\varepsilon})$ to solutions of the dual Monge--Kantorovich problem,
\begin{align}
\label{eq:dualOT}
S_0 :=\sup_{f\in L^1(\mu),\, g \in L^1(\nu),\, f\oplus g\leq c} \bigg(\int f(x)\, \mu(dx) + \int g(y) \,\nu(dy) \bigg),
\end{align}
where $(f\oplus g)(x,y):= f(x)+g(y)$.
It is well known that $S_{0}=I_{0}$ and that a solution $(f_{0},g_{0})$ exists \cite[Theorem~5.10, Remark~5.14]{Villani.09}. (In fact, Theorem~\ref{thm:1}
below yields another proof as a by-product.) There is the same ambiguity as above, and to streamline terminology, we call $(f_{0},g_{0})$ Kantorovich potentials if they satisfy the normalization~\eqref{eq:normalizationIntro} for $\varepsilon=0$.
As~\eqref{eq:dualOT} lacks the strict concavity of~\eqref{eq:dualEOT}, multiple Kantorovich potentials may exist even after normalization, for instance when both marginals are discrete.
Nevertheless, uniqueness of Kantorovich potentials is known to hold for most problems of interest to us, especially when~$c$ is differentiable and at least one marginal support is connected. See for instance \cite[Appendix~B]{BerntonGhosalNutz.21} for sufficient conditions.
Much of the enormous recent interest in entropic optimal transport stems from the success of Sinkhorn's algorithm in high-dimensional problems, enabling data-rich applications in areas like machine learning or image processing. Popularized in this context by~\cite{Cuturi.13}, Sinkhorn's algorithm computes the Schr\"odinger potentials~$(f_{\varepsilon},g_{\varepsilon})$ by alternating projections. From a computational point of view, the Monge--Kantorovich problem is significantly harder than the entropic one; see \cite{PeyreCuturi.19} for a recent survey and numerous references. It is therefore natural to investigate $(f_{\varepsilon},g_{\varepsilon})$ as~$\varepsilon\to0$ to approximate Kantorovich potentials.
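A minimal discrete Sinkhorn sketch (our own, not a reference implementation; for small $\varepsilon$ one would iterate in the log domain for numerical stability) alternates the two conjugacy relations and then applies the symmetric normalization~\eqref{eq:normalizationIntro}; the name \texttt{sinkhorn} and all parameter values are illustrative assumptions.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=300):
    # Alternate the two Schrodinger equations (conjugacy relations) for
    # discrete marginals mu, nu and cost matrix C.  Naive sketch: the
    # exponentials can under/overflow for small eps.
    f = np.zeros(len(mu))
    for _ in range(n_iter):
        g = -eps * np.log(np.exp((f[:, None] - C) / eps).T @ mu)
        f = -eps * np.log(np.exp((g[None, :] - C) / eps) @ nu)
    t = 0.5 * (g @ nu - f @ mu)   # symmetric normalization: f@mu == g@nu
    f, g = f + t, g - t
    pi = mu[:, None] * nu[None, :] * np.exp((f[:, None] + g[None, :] - C) / eps)
    return f, g, pi

rng = np.random.default_rng(0)
mu = np.full(5, 0.2); nu = np.full(5, 0.2)
C = rng.random((5, 5))
f, g, pi = sinkhorn(mu, nu, C, eps=0.5)
# both marginal errors are tiny after convergence
print(np.abs(pi.sum(axis=1) - mu).max(), np.abs(pi.sum(axis=0) - nu).max())
```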
On the primal side, weak compactness of $\Pi(\mu,\nu)$ immediately implies that $(\pi_{\varepsilon})$ admits cluster points as~$\varepsilon\to0$. Moreover, any cluster point is an optimal transport, so that if uniqueness is known for the solution~$\pi_{0}$ of the limiting optimal transport problem, then $\pi_{\varepsilon}\to\pi_{0}$. See \cite{CarlierDuvalPeyreSchmitzer.17,Leonard.12} for proofs by Gamma convergence, or~\cite{BerntonGhosalNutz.21} for a geometric proof assuming only continuity of~$c$. Our aim is to establish a comparable result on the dual side. Here, compactness is not obvious (unless $\mu,\nu$ are compactly supported). Our main result provides strong compactness in~$L^{1}$ for $(f_{\varepsilon})$ and $(g_{\varepsilon})$ as $\varepsilon\to0$, and moreover, that cluster points are Kantorovich potentials. In most cases of interest, the latter are unique, so that the whole sequence converges.
\begin{theorem}\label{thm:1}
Let $(f_{\varepsilon},g_{\varepsilon})$ be the unique Schr\"odinger potentials for $\varepsilon>0$.
\begin{itemize}
\item[(a)] Given $\varepsilon_{n}\to0$, there is a subsequence $(\varepsilon_{k})$ such that $f_{\varepsilon_{k}}$ converges in $L^{1}(\mu)$ and $g_{\varepsilon_{k}}$ converges in $L^{1}(\nu)$.
\item[(b)] If $\lim_{n} f_{\varepsilon_{n}} = f$ $\mu$-a.s.\ and $\lim_{n} g_{\varepsilon_{n}} = g$ $\nu$-a.s.\ for $\varepsilon_{n}\to0$, then $(f,g)$ are Kantorovich potentials and the convergence also holds in $L^{1}$.
\end{itemize}
If the Kantorovich potentials $(f_{0},g_{0})$ for~\eqref{eq:dualOT} are unique, it follows that $\lim_{\varepsilon} f_{\varepsilon} = f_{0}$ in $L^{1}(\mu)$ and $\lim_{\varepsilon} g_{\varepsilon} = g_{0}$ in $L^{1}(\nu)$.
\end{theorem}
Applications of interest for Theorem~\ref{thm:1} include costs $c(x,y)=\|x-y\|^{2}$ on~$\mathcal{X}=\mathcal{Y}=\mathbb{R}^{d}$ with unbounded marginal supports as in~\cite{MenaWeed.19}; here~$c$ is continuous but not uniformly continuous.
Theorem~\ref{thm:1} simplifies substantially in the case of compactly supported marginals. More generally, if~$c$ is uniformly continuous, the functions $f_{\varepsilon},g_{\varepsilon}$ inherit its modulus of continuity (uniformly in~$\varepsilon$) and then uniform convergence on compact subsets along a subsequence follows from the Arzel\`a--Ascoli theorem; cf.\ Proposition~\ref{pr:unifContCase}. A result along those lines is contained in~\cite[Section~5]{GigliTamanini.21} in the particular case of quadratic cost and compact marginals. We emphasize that~\cite{GigliTamanini.21} analyzes the more complex dynamic problem of approximating $W_{2}$ geodesics with entropic interpolation; the present static setting would correspond only to its marginals at times~$t=0,1$.
Given the results of~\cite{GigliTamanini.21}, one may conjecture that Theorem~\ref{thm:1} can be extended to interpolations and intermediate times~$t\in(0,1)$.
When $\mathcal{X},\mathcal{Y}$ are finite sets, optimal transport is a finite-dimensional linear programming problem. For such problems, a detailed convergence analysis of entropic regularization is presented in~\cite{CominettiSanMartin.94}. In particular, convergence holds even when Kantorovich potentials are not unique.
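The finite-dimensional case can be illustrated directly. In the symmetric toy problem below (our own hedged sketch), the optimal coupling is the diagonal one with zero cost, and by symmetry the normalized entropic potentials are constant vectors of size roughly $(\varepsilon/2)\log 2$, so they converge to the Kantorovich potentials $f_0=g_0=0$ even though normalized Kantorovich potentials are not unique for this example.

```python
import numpy as np

mu = np.array([0.5, 0.5]); nu = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # optimal transport: diagonal, cost 0

sup_norms = []
for eps in (0.5, 0.1, 0.02):
    f = np.zeros(2)
    for _ in range(200):                  # Sinkhorn / conjugacy iterations
        g = -eps * np.log(np.exp((f[:, None] - C) / eps).T @ mu)
        f = -eps * np.log(np.exp((g[None, :] - C) / eps) @ nu)
    t = 0.5 * (g @ nu - f @ mu)           # symmetric normalization
    f, g = f + t, g - t
    sup_norms.append(max(np.abs(f).max(), np.abs(g).max()))
print(sup_norms)  # decreasing toward 0, roughly (eps/2) * log 2
```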
Theorem~\ref{thm:1} can be related to the large deviations principle (LDP) of~\cite{BerntonGhosalNutz.21} which describes the convergence of $(\pi_{\varepsilon})$ on the primal side (cf.\ Section~\ref{se:LDP} for a detailed discussion). On compact spaces, convergence of potentials is equivalent to the validity of an LDP whose rate function includes the limiting Kantorovich potentials. On the other hand, neither result implies the other in general, and we see the results and methods as complementary. Indeed, the ``easier'' inequality for the present dual approach corresponds to the more delicate one in the primal approach, and vice versa. See also \cite{ConfortiTamanini.19,Pal.19} for expansions of the entropic transport cost as $\varepsilon\to0$, which are related to the speed of convergence of $(\pi_{\varepsilon})$. Finally, we mention~\cite{Berman.20}, studying the convergence of the discrete Sinkhorn algorithm to an optimal transport potential in the joint limit when $\varepsilon_{n}\to0$ and the marginals $\mu,\nu$ are approximated by discretizations $\mu_{n},\nu_{n}$ satisfying a certain density property.
Beyond the aforementioned special cases and connections, Theorem~\ref{thm:1} is novel, to the best of our knowledge.
Two extensions of Theorem~\ref{thm:1} are obtained in the body of the text. The first one replaces~$c$ in~\eqref{eq:EOT} by a cost function $c_{\varepsilon}$ that may depend on~$\varepsilon$ and converges to the continuous cost~$c$ of the Monge--Kantorovich problem as~$\varepsilon\to0$. This extension demonstrates the stability of the convergence in Theorem~\ref{thm:1}. In addition, it may be a natural result from the perspective of Schr\"odinger bridges (see~\cite{Leonard.14}): the corresponding reference measures $R_{\varepsilon}$ in~\eqref{eq:bridgeIntro} are those with large deviations rate~$c$. The second extension replaces the two marginals $(\mu,\nu)$ by any (finite) number of marginals. The resulting ``multimarginal'' optimal transport problem has become a focus of attention as the primary tool to analyze Wasserstein barycenters in the sense of~\cite{AguehCarlier.11}.
Its entropic regularization again admits a version of Sinkhorn's algorithm; see~\cite{Carlier.21} for a very recent analysis showing linear convergence and further references.
The techniques developed in the proof of Theorem~\ref{thm:1} are quite versatile and extend to the multimarginal setting without effort.
The remainder of this paper is organized as follows. Section~\ref{sec:auxiliaryResults} collects auxiliary results for the proof of Theorem~\ref{thm:1}, which is carried out in Section~\ref{se:ProofOfMainRes} and followed by the specialization to uniformly continuous costs. The relation with the LDP is the subject of Section~\ref{se:LDP}. In Section~\ref{se:varyingCost} we present the extension to costs~$c_{\varepsilon}$ that vary with~$\varepsilon$, and Section~\ref{sec:multi} concludes with the multimarginal case.
\section{Auxiliary Results}\label{sec:auxiliaryResults}
In this section we collect a number of auxiliary results for the proof of Theorem~\ref{thm:1}. Anticipating the generalization in Section~\ref{se:varyingCost}, we remark that the statements and proofs in this section hold for any measurable (but not necessarily continuous) cost function $c:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}$ that is integrable in the sense of~\eqref{eq:cIntegrable}; further regularity is only required in Lemma~\ref{lem:lusin}, where the condition is stated explicitly.
Let $\varepsilon>0$. We recall the Schr\"odinger potentials $f_{\varepsilon}\in L^{1}(\mu)$ and $g_{\varepsilon}\in L^{1}(\nu)$ from the Introduction and in particular the normalization
\begin{equation}\label{eq:normalization}
\int f_\epsilon(x)\,\mu(dx)=\int g_\epsilon(y)\,\nu(dy)=S_\epsilon/2\ge 0.
\end{equation}
The fact that $\pi_{\varepsilon}$ of~\eqref{eq:densityFormIntro} is a probability measure with marginals~$\mu$ and~$\nu$ implies
\begin{align}\label{eq:one}
\int e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\, \nu(dy)=1\quad \mu\mbox{-a.s.}, \quad \int e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\, \mu(dx)=1 \quad \nu\mbox{-a.s.}
\end{align}
and hence the Schr\"odinger equations
\begin{align}
\begin{split}\label{eq:definition}
f_\epsilon(x)&= -\epsilon\log \int e^{\frac{g_\epsilon(y)-c(x,y)}{\epsilon}}\, \nu(dy) \quad \mu\mbox{-a.s.},\\
g_\epsilon(y) &= -\epsilon\log \int e^{\frac{f_\epsilon(x)-c(x,y)}{\epsilon}} \, \mu(dx) \quad \nu\mbox{-a.s.}
\end{split}
\end{align}
By choosing versions of $f_\varepsilon,g_\varepsilon$ we may and will assume that these conjugacy relations hold everywhere on~$\mathcal{X}\times\mathcal{Y}$. In particular, this provides canonical extensions of $f_\varepsilon,g_\varepsilon$ to the whole marginal space. The conjugacy relations can also be used to obtain a priori estimates, as has been previously exploited in~\cite{Carlier.21,DiMarinoGerolin.20}, among others.
\begin{lemma}\label{lem:1}
For all $x\in \mathcal{X}$ and $y \in \mathcal{Y}$, we have
\begin{align*}
\inf_{y\in \mathcal{Y}} \big[c(x,y)-g_\epsilon(y)\big] \le f_\epsilon(x)\le \int c(x,y)\, \nu(dy),\\
\inf_{x\in \mathcal{X}} \big[c(x,y)-f_\epsilon(x)\big]\le g_\epsilon(y)\le \int c(x,y)\, \mu(dx).
\end{align*}
\end{lemma}
\begin{proof}
Using~\eqref{eq:definition}, Jensen's inequality and~\eqref{eq:normalization}, %
\begin{align}\label{eq:bounded_from_above}
\begin{split}
f_\epsilon(x)&=-\epsilon \log \int e^{\frac{g_\epsilon(y)-c(x,y)}{\epsilon}}\,\nu(dy)\\
&\le \int \left[ -g_\epsilon(y)+c(x,y)\right]\, \nu(dy) \le \int c(x,y)\, \nu(dy),
\end{split}
\end{align}
which is the upper bound. For the lower bound we note that by~\eqref{eq:definition},
\begin{align*}
f_\epsilon(x)%
&\ge -\epsilon \log \int e^{\frac{\sup_{y\in \mathcal{Y}} [g_\epsilon(y)-c(x,y)]}{\epsilon}} \, \nu(dy)\\
&= -\sup_{y\in \mathcal{Y}} \big[g_\epsilon(y)-c(x,y)\big]=\inf_{y\in \mathcal{Y}} \big[c(x,y)-g_\epsilon(y)\big].
\end{align*}
The proof for $g_\epsilon$ is symmetric.
\end{proof}
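The bounds of Lemma~\ref{lem:1} are easy to verify numerically on a discrete example. The following sketch (our own; not part of the paper) computes the potentials by Sinkhorn iterations under the normalization~\eqref{eq:normalization} and checks both inequalities up to a small tolerance; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
mu = np.full(n, 1 / n); nu = np.full(n, 1 / n)
C = rng.random((n, n))                    # nonnegative cost
eps = 0.5

f = np.zeros(n)
for _ in range(300):                      # Sinkhorn / conjugacy iterations
    g = -eps * np.log(np.exp((f[:, None] - C) / eps).T @ mu)
    f = -eps * np.log(np.exp((g[None, :] - C) / eps) @ nu)
t = 0.5 * (g @ nu - f @ mu)               # symmetric normalization
f, g = f + t, g - t

# upper and lower bounds for f and g, up to a small numerical tolerance
assert np.all(f <= C @ nu + 1e-8)
assert np.all(f >= (C - g[None, :]).min(axis=1) - 1e-8)
assert np.all(g <= C.T @ mu + 1e-8)
assert np.all(g >= (C - f[:, None]).min(axis=0) - 1e-8)
print("bounds verified")
```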
Let $(M,d)$ be a metric space. A function $\omega:\mathbb{R}_{+}\to\mathbb{R}_{+}$ is a modulus of continuity if it is continuous at~$0$ with $\omega(0)=0$. More generally, we call $\omega:M\times \mathbb{R}_{+}\to\mathbb{R}_{+}$ a modulus of continuity if $\omega(x,\cdot)$ has those properties for each $x\in M$. A function $F: M\to\mathbb{R}$ is $\omega$-continuous if it admits the modulus of continuity $\omega(x,\cdot)$ at $x\in M$; that is, $|F(x)-F(x')|\le \omega(x,d(x,x'))$ for all $x,x'\in M$. To avoid ambiguity, we say that $F$ is uniformly $\omega$-continuous if $\omega$ can be chosen independent of~$x$. The following generalization of the Arzel\`a--Ascoli theorem will be used to construct limits of $f_{\varepsilon}$ and $g_{\varepsilon}$.
\begin{lemma}\label{lem:aa1}
Let $(M,d)$ be a separable metric space and let $(F_n)$ be (arbitrary) functions on $M$ which are pointwise bounded and satisfy
\begin{align}\label{eq:equicontinuity}
|F_n(x_1)-F_n(x_2)|\le \omega(x_1,d(x_1,x_2))+h_n, \quad x_1, x_2\in M
\end{align}
for some modulus of continuity~$\omega:M\times \mathbb{R}_{+}\to\mathbb{R}_{+}$ and a sequence $h_{n}\to0$ of constants. Then after passing to a subsequence, $(F_n)$ converges uniformly on compact subsets to an $\omega$-continuous function $F: M\to\mathbb{R}$.
\end{lemma}
\begin{proof}
Let $D\subset M$ be a countable dense set, fix $\delta>0$ and choose $n_0\in \mathbb{N}$ such that $h_n\le \delta/6$ for all $n\ge n_0$. As $(F_n)$ is pointwise bounded, a diagonal argument yields a subsequence, still denoted $(F_n)$, converging pointwise on~$D$. In particular, for every $x\in D$ there exists $n(x)$ such that
\begin{align}\label{eq:aa1}
|F_n(x)-F_m(x)|\le \delta/3, \quad m,n\ge n(x).
\end{align}
For ${x_1}\in D$, \eqref{eq:equicontinuity} yields an open neighborhood $O_{x_1}$ with
\begin{align}\label{eq:aa2}
|F_n(x_1)-F_n(x_2)|\le \omega(x_1,d(x_1,x_2))+h_n\le \delta/6+\delta/6=\delta/3, \quad x_2\in O_{x_1},
\end{align}
for all $n\ge n_0$. Let $K\subset M$ be compact and $D'\subseteq D$ a finite set such that $\bigcup_{x'\in D'} O_{x'}$ covers $K$. Set $n_{1}:= \max_{x'\in D'} n(x')\vee n_0$; then, as any $x\in K$ is contained in an open neighborhood $O_{x'}$ of some $x'\in D'$, we obtain from \eqref{eq:aa1} and \eqref{eq:aa2} that
\begin{align*}
|F_n(x)-F_m(x)|\le |F_n(x)-F_n(x')|+|F_n(x')-F_m(x')|+|F_m(x')-F_m(x)|\le \delta,
\end{align*}
for all $x\in K$ and $m,n\ge n_{1}$. Thus $(F_n)$ has a limit~$F$, uniformly on compacts. Passing to the limit in~\eqref{eq:equicontinuity} shows that $F$ is $\omega$-continuous.
\end{proof}
Recall that $c:\mathcal{X}\times\mathcal{Y}\to \mathbb{R}_{+}$ is continuous. If $\mathcal{Y}_{\mathrm{cpt}}\subset \mathcal{Y}$ is compact and $\omega(x,r):=\sup_{y \in \mathcal{Y}_{\mathrm{cpt}}, d(x,x')\leq r} |c(x,y)-c(x',y)|$, then $\omega$ is a modulus of continuity in the above sense. %
That motivates the following estimates.
\begin{lemma}\label{lem:general}
Fix $\delta\in (0,1)$ and $\epsilon>0$. There exist compact sets $\mathcal{X}_{\mathrm{cpt}}\subseteq \mathcal{X}, \mathcal{Y}_{\mathrm{cpt}}\subseteq \mathcal{Y}$ and measurable sets $A_{\varepsilon}\subseteq \mathcal{X}_{\mathrm{cpt}}$, $B_{\varepsilon}\subseteq \mathcal{Y}_{\mathrm{cpt}}$ with $\mu(A_{\varepsilon}),\nu(B_{\varepsilon}) \ge 1-\delta$ such that
\begin{align*}
\left|f_\epsilon(x_1)-f_\epsilon(x_2)\right|&\le \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}} \left|c(x_1, y)-c(x_2,y)\right| -\epsilon\log(1-\delta) \quad\mbox{for} \quad x_1,x_2\in A_{\varepsilon},\\[.5em]
\left|g_\epsilon(y_1)-g_\epsilon(y_2)\right|&\le \sup_{x\in A_{\varepsilon}} \left|c(x, y_1)-c(x,y_2)\right| -\epsilon\log(1-\delta)\\
&\le \sup_{x\in \mathcal{X}_{\mathrm{cpt}}} \left|c(x, y_1)-c(x,y_2)\right| -\epsilon\log(1-\delta) \quad\mbox{for} \quad y_1, y_2\in B_{\varepsilon}.
\end{align*}
\end{lemma}
\begin{proof}
Fix $\kappa\in (0,\delta)$, to be determined later.
Choose compacts $\mathcal{X}_{\mathrm{cpt}}$ and $\mathcal{Y}_{\mathrm{cpt}}$ with
$\mu(\mathcal{X}_{\mathrm{cpt}})\ge 1-\kappa^2/2$ and $\nu(\mathcal{Y}_{\mathrm{cpt}})\ge 1-\kappa^2/2$,
then $\pi_{\epsilon}\in \Pi(\mu,\nu)$ implies
\begin{align}\label{eq:tightness}
\pi_{\epsilon}(\mathcal{X}_{\mathrm{cpt}} \times \mathcal{Y}_{\mathrm{cpt}})\ge 1-\kappa^2.
\end{align}
Consider the set $$A_{\varepsilon}=\left\{x\in \mathcal{X}_{\mathrm{cpt}}:\ \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\,\nu(dy) \ge 1-\kappa\right\};$$
we claim that its complement $A_{\varepsilon}^c=\mathcal{X}\setminus A_{\varepsilon}$ satisfies
\begin{align}\label{eq:markov}
p_\epsilon:=\mu\left(A_{\varepsilon}^c\right)\le \kappa.
\end{align}
Indeed, \eqref{eq:one} yields
\begin{align}\label{eq:exponential}
\int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\,\nu(dy)\le \int e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\,\nu(dy)=1
\end{align}
and thus
\begin{align*}
1-\kappa^2 &\stackrel{\eqref{eq:tightness}}{\le}\pi_{\epsilon}(\mathcal{X}_{\mathrm{cpt}} \times \mathcal{Y}_{\mathrm{cpt}})= \int_{\mathcal{X}_{\mathrm{cpt}}}\int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}} \,\nu(dy)\mu(dx)\\
&\le \int_{A_{\varepsilon}^c}\int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}} \,\nu(dy)\mu(dx)\\
&\quad +\int_{A_{\varepsilon}}\int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}} \,\nu(dy)\mu(dx)\\
&\stackrel{\eqref{eq:exponential}}{\le} (1-\kappa)p_\epsilon+(1-p_\epsilon)=1-p_\epsilon\kappa,
\end{align*}
which implies~\eqref{eq:markov}. Next, we observe from the definition of $A_{\varepsilon}$ and \eqref{eq:exponential} that for $x\in A_{\varepsilon}$,
\begin{align}\label{eq:ineq1}
\begin{split}
-\epsilon\left( \log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x,y)} {\epsilon}}\,\nu(dy)-\log(1-\kappa)\right)&\le f_\epsilon(x)\\
&\le
-\epsilon\log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x,y)} {\epsilon}}\,\nu(dy).
\end{split}
\end{align}
Let $x_1, x_2\in A_{\varepsilon}$ and assume without loss of generality that $f_\epsilon(x_1)\ge f_\epsilon(x_2)$. Then
\begin{align}\label{eq:longest}
\begin{split}
\left|f_\epsilon(x_1)-f_\epsilon(x_2)\right|&\stackrel{\eqref{eq:ineq1}}{\le} \epsilon \left(\log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x_2,y)}{\epsilon}}\, \nu(dy)-\log(1-\kappa)\right)\\
&\quad -\epsilon \log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) \\
&=\epsilon \log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{c(x_1,y)-c(x_2,y)+g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) -\epsilon\log(1-\kappa)\\
&\qquad-\epsilon \log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy)\\
&\le \epsilon \log \left( e^{\frac{\sup_{y\in \mathcal{Y}_{\mathrm{cpt}}} \left|c(x_1, y)-c(x_2,y)\right| }{\epsilon}} \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) \right) \\
&\qquad-\epsilon\log(1-\kappa)-\epsilon \log \int_{\mathcal{Y}_{\mathrm{cpt}}} e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy)\\
&=\sup_{y\in \mathcal{Y}_{\mathrm{cpt}}} \left|c(x_1, y)-c(x_2,y)\right| -\epsilon\log(1-\kappa).
\end{split}
\end{align}
This concludes the proof of the first estimate in the lemma.
Turning to the second, note that by \eqref{eq:tightness}, \eqref{eq:markov} and the definition of $A_{\varepsilon}$,
\begin{align}\label{eq:nextround}
\begin{split}
\pi_{\epsilon}(A_{\varepsilon} \times \mathcal{Y}_{\mathrm{cpt}} )
&\ge \pi_{\epsilon}(\mathcal{X}_{\mathrm{cpt}} \times \mathcal{Y}_{\mathrm{cpt}} )-\pi_{\epsilon}\big( (\mathcal{X}\setminus A_{\varepsilon}) \times \mathcal{Y}_{\mathrm{cpt}} \big)\\
&\ge 1-\kappa^2-\int_{A_{\varepsilon}^c} \int_{\mathcal{Y}_{\text{cpt}}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}} \,\nu(dy)\,\mu(dx)\\
&\ge 1-\kappa^2-\kappa(1-\kappa)
=1-\kappa=1-\delta^2,
\end{split}
\end{align}
where we chose $\kappa:=\delta^2$ (ensuring $\kappa \in (0,\delta)$, in particular). Define \begin{align*}
B_{\varepsilon}=\left\{y\in \mathcal{Y}_{\mathrm{cpt}}:\ \int_{A_{\varepsilon}} e^{\frac{f_\epsilon(x)+g_\epsilon(y)-c(x,y)}{\epsilon}}\,\mu(dx) \ge 1-\delta\right\}.
\end{align*}
Arguing as for~\eqref{eq:markov} and~\eqref{eq:ineq1}, now using \eqref{eq:nextround} instead of~\eqref{eq:tightness}, we see that $\nu(B_{\varepsilon}^{c}) \le \delta$ and that for $y\in B_{\varepsilon}$,
\begin{align*}%
\begin{split}
-\epsilon\left( \log \int_{A_{\varepsilon}} e^{\frac{f_\epsilon(x)-c(x,y)} {\epsilon}}\,\mu(dx)-\log(1-\delta)\right)&\le g_\epsilon(y)\\
&\le
-\epsilon\log \int_{A_{\varepsilon}} e^{\frac{f_\epsilon(x)-c(x,y)} {\epsilon}}\,\mu(dx).
\end{split}
\end{align*}
We conclude the proof by arguing as in~\eqref{eq:longest} but with $f_\epsilon,\kappa$ replaced by $g_\epsilon,\delta$.
\end{proof}
The following extension lemma is a variation on Kirszbraun's theorem. Recall that a pseudometric $\tilde{d}$ is defined like a metric except that $\tilde{d}(x,y)=0$ need not imply $x=y$.
\begin{lemma}\label{lem:extension}
Let $(M,\tilde{d})$ be a pseudometric space and $A\subseteq M$.
Let $F:A\to \mathbb{R}$ satisfy
\begin{align}\label{eq:continuity3}
|F(x_1)-F(x_2)|\le \tilde{d}(x_1,x_2)+\gamma ,\quad x_1,x_2\in A,
\end{align}
for some $\gamma>0$. Then the function $\tilde{F}:M\to \mathbb{R}$ defined by
\begin{align*}
\tilde{F}(x):=\inf_{x'\in A} \left[F(x')+\tilde{d}(x,x')+\gamma\mathbf{1}_{\{x'\neq x\}}\right], \quad x\in M
\end{align*}
satisfies $\tilde{F}=F$ on $A$ and
\begin{align}\label{eq:extension}
|\tilde{F}(x_1)-\tilde{F}(x_2)|\le \tilde{d}(x_1,x_2)+\gamma, \quad x_1,x_2\in M.
\end{align}
\end{lemma}
\begin{proof}
Fix $x\in A$. For $x\neq x'\in A$ we have by \eqref{eq:continuity3} that
\begin{align*}
F(x')+\tilde{d}(x,x')+\gamma &\ge F(x)-\tilde{d}(x,x')-\gamma+\tilde{d}(x,x')+\gamma= F(x).
\end{align*}
It follows that $\tilde{F}(x)=F(x)$, showing the first claim. Fix $\kappa>0$ and let $x_1,x_2\in M$. By the definition of $\tilde{F}(x_2)$ there exists $x'\in A$ such that
$\tilde{F}(x_2)\ge F(x')+\tilde{d}(x_2,x')-\kappa$, and now the definition of $\tilde{F}(x_1)$ yields
\begin{align*}
\tilde{F}(x_1)-\tilde{F}(x_2)&\le F(x')+\tilde{d}(x_1,x')+\gamma-F(x')-\tilde{d}(x_2,x')+\kappa\\
&=\tilde{d}(x_1,x')-\tilde{d}(x_2,x')+\gamma+\kappa
\le \tilde{d}(x_1,x_2)+\gamma+\kappa.
\end{align*}
As $\kappa>0$ was arbitrary, \eqref{eq:extension} follows.
\end{proof}
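The extension of Lemma~\ref{lem:extension} is explicit, so it can be checked directly on a finite example. In the following sketch (our illustration, not from the paper), $M$ is a finite set of reals with the usual metric, and $F$ is built on a subset $A$ as a $1$-Lipschitz function perturbed by noise of size $\gamma/2$, so that the hypothesis~\eqref{eq:continuity3} holds with $\tilde{d}(x_1,x_2)=|x_1-x_2|$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(-3, 3, size=40)               # finite "space" M
A = M[rng.choice(40, size=15, replace=False)]  # subset A of M
gamma = 0.2
# 1-Lipschitz base plus noise bounded by gamma/2, so that
# |F(x1) - F(x2)| <= |x1 - x2| + gamma on A, as in (eq:continuity3).
F = np.sin(A) + rng.uniform(-gamma / 2, gamma / 2, size=A.size)

def F_tilde(x):
    """Extension from the lemma: inf over A of F + distance + gamma*(x' != x)."""
    return min(F[k] + abs(x - A[k]) + (gamma if A[k] != x else 0.0)
               for k in range(A.size))

# F_tilde agrees with F on A ...
assert all(abs(F_tilde(A[k]) - F[k]) < 1e-12 for k in range(A.size))
# ... and satisfies the same continuity estimate (eq:extension) on all of M.
assert all(abs(F_tilde(x1) - F_tilde(x2)) <= abs(x1 - x2) + gamma + 1e-12
           for x1 in M for x2 in M)
```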
The next two lemmas show that limits of $f_{\varepsilon},g_{\varepsilon}$ must be Kantorovich potentials.
\begin{lemma}\label{lem:as}
Let $\varepsilon_{n}\to0$ and suppose that the corresponding potentials $f_{\varepsilon_{n}},g_{\varepsilon_{n}}$ converge a.s. Then the limits $f:=\lim_{n} f_{\varepsilon_{n}}$ and $g:=\lim_{n} g_{\varepsilon_{n}}$ satisfy
\begin{align*}
f(x)+g(y) \le c(x,y) \qquad \mu\otimes\nu\text{-a.s.}
\end{align*}
\end{lemma}
\begin{proof}
Let $\delta>0$. Passing to a subsequence if necessary, we may assume that $\sum_{n=1}^\infty e^{-\delta/\epsilon_n}<\infty$.
Define
\begin{align*}
A_{\delta,n}=\left\{(x,y) \ : \ f_{\varepsilon_{n}}(x)+g_{\varepsilon_{n}}(y)-c(x,y) \ge \delta \right\},
\end{align*}
then
\begin{align*}
\begin{split}
1&\stackrel{\eqref{eq:one}}{=}\int e^{\frac{f_{\varepsilon_{n}}(x)+g_{\varepsilon_{n}}(y)-c(x,y)}{\epsilon_n}}\, \mu(dx)\nu(dy) \ge \int_{A_{\delta,n}} e^{\frac{f_{\varepsilon_{n}}(x)+g_{\varepsilon_{n}}(y)-c(x,y)}{\epsilon_n}}\, \mu(dx)\nu(dy) \\
&\ge e^{\delta/\epsilon_n} (\mu\otimes \nu)(A_{\delta,n})
\end{split}
\end{align*}
yields $(\mu\otimes\nu)(A_{\delta,n})\le e^{-\delta/\epsilon_n}$ and thus $\sum_{n} (\mu\otimes\nu)(A_{\delta,n})<\infty$.
The Borel--Cantelli lemma now shows that $(\mu\otimes\nu)(\limsup_{n} A_{\delta,n})=0$ and hence
\begin{align*}
f(x)+g(y) \le c(x,y)+\delta \qquad \mu\otimes\nu\text{-a.s.}
\end{align*}
As $\delta>0$ was arbitrary, the claim follows.
\end{proof}
\begin{lemma}\label{lem:lusin}
Let $c$ be upper semicontinuous and $f,g$ measurable functions with
\begin{align*}
f(x)+g(y) \le c(x,y)\quad \mu\otimes\nu\text{-a.s.}
\end{align*}
Then there exist versions $\tilde{f}=f$ $\mu$-a.s.\ and $\tilde{g}=g$ $\nu$-a.s.\ such that
$$
\tilde{f}(x)+\tilde{g}(y)\le c(x,y) \quad \text{for all}\quad(x,y)\in \mathcal{X}\times \mathcal{Y}.
$$
\end{lemma}
\begin{proof}
Suppose first that $f,g$ are continuous. If $f(x)+g(y) > c(x,y)$ for some $(x,y)\in \mathcal{X}\times \mathcal{Y}$, the same inequality holds on a neighborhood $B_{r}(x)\times B_{r}(y)$, which then must be $\mu\otimes\nu$-null by the assumption. That is, $(x,y)\notin \spt\mu\times\spt\nu$. In conclusion, we can set $\tilde f=f$ on $\spt\mu$ and $\tilde f=-\infty$ outside $\spt\mu$, and similarly for~$\tilde g$.
In general, Lusin's theorem yields an increasing sequence of closed sets $A_{n}\subset\mathcal{X}$ such that $f|_{A_{n}}$ is continuous and $\mu(A_{n}^{c})\leq 1/n$. Let $\mu_{n}=\mu|_{A_{n}}$ and $A'_{n}=\spt \mu_{n}$. Defining analogously $B'_{n}\subset \mathcal{Y}$, the above argument shows that $f(x)+g(y) \leq c(x,y)$ on $A'_{n}\times B'_{n}$. The same inequality then holds on the product of $\cup_{n}A'_{n}$ and $\cup_{n}B'_{n}$, and these sets have full measure. It remains to set $\tilde f=f$ on $\cup_{n}A'_{n}$ and $\tilde f=-\infty$ on the complement, and similarly for~$\tilde g$.
\end{proof}
\section{Proof of the Main Result}\label{se:ProofOfMainRes}
We can now report the proof of Theorem~\ref{thm:1}. To simplify the notation, let us agree that an index~$n$ always refers to an object associated with $\varepsilon=\varepsilon_{n}$; for instance, $f_{n}=f_{\varepsilon_{n}}$ and $g_{n}=g_{\varepsilon_{n}}$. Moreover, subsequences are not relabeled.
Steps 1--5 below establish the a.s.\ convergence of $f_n$ and $g_{n}$ along a subsequence. The final Step~6 shows that a.s.\ convergence also implies $L^{1}$-convergence, and that limits are Kantorovich potentials. %
\begin{proof}[Proof of Theorem \ref{thm:1}]
Let $\varepsilon_{n}\to0$. In addition, we fix a strictly decreasing sequence $\delta_{k}\to 0$ with $\delta_{k}< 1/2$.\\[-.7em]
\textit{Step 1.}
For each $k, n\in \mathbb{N}$, Lemma \ref{lem:general} yields sets $$A_{n}(\delta_k)\subseteq \mathcal{X}_{\mathrm{cpt}}(\delta_k)\subseteq \mathcal{X}\quad \text{and}\quad B_{n}(\delta_k)\subseteq \mathcal{Y}_{\mathrm{cpt}}(\delta_k)\subseteq \mathcal{Y}$$ such that $$\mu(A_{n}(\delta_k))\ge 1-\delta_k\quad \text{and}\quad \nu (B_{n}(\delta_k))\ge 1-\delta_k$$%
as well as
\begin{align}\label{eq:continuity}
\begin{split}
\left|f_n(x_1)-f_n(x_2)\right|&\le \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k)} \left|c(x_1, y)-c(x_2,y)\right| -\epsilon_n\log(1-\delta_k)\\
&\le \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k)} \left|c(x_1, y)-c(x_2,y)\right| +\epsilon_n\log(2)
\end{split}
\end{align}
for $x_1,x_2\in A_{n}(\delta_k)$ and similarly
\begin{align*}
\left|g_n(y_1)-g_n(y_2)\right|
&\le \sup_{x\in \mathcal{X}_{\mathrm{cpt}}(\delta_k)} \left|c(x, y_1)-c(x,y_2)\right| +\epsilon_n\log(2)
\end{align*}
for $y_1, y_2\in B_{n}(\delta_k)$. For each $n$, we can assume that the sequences $(\mathcal{X}_{\mathrm{cpt}}(\delta_k))_{k}$ and $(\mathcal{Y}_{\mathrm{cpt}}(\delta_k))_{k}$ are increasing, and consequently also that $(A_{n}(\delta_k))_{k}$ and $(B_{n}(\delta_k))_{k}$ are increasing.\\[-.7em]
\textit{Step 2.} Define
\begin{align*}
\tilde{d}_k(x_1,x_2):= \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k)} \left|c(x_1, y)-c(x_2,y)\right|.
\end{align*}
It is elementary to verify that $\tilde{d}_k$ is a pseudometric on~$\mathcal{X}$.
Using~\eqref{eq:continuity} and Lemma~\ref{lem:extension} with $\gamma=\epsilon_n\log(2)$, there exists an extension $f^k_n$ satisfying $f^k_n=f_n$ on $A_{n}(\delta_{k})$ and
\begin{align}\label{eq:extension1}
\left|f^k_n(x_1)-f^k_n(x_2)\right|\le \tilde{d}_k(x_1,x_2) +\epsilon_n\log(2), \quad x_1,x_2\in \mathcal{X}.
\end{align}
Similarly, there exists an extension $g^k_n$ for $g_{n}$ with an analogous property.\\[-.7em]
\textit{Step 3.} We now vary $n$, while still keeping $k$ fixed, and our aim is to construct a subsequential limit $f^k=\lim_{n\to \infty} f_n^k$ $\mu$-a.s. We first argue that $(f^k_n)_{n\in \mathbb{N}}$ is pointwise bounded from above. Indeed, after taking another subsequence if necessary, there exists $x_0\in \spt\mu$ such that $x_0\in A_{n}(\delta_k)$ for all $n$ and $f_n(x_0)\leq \int c(x_0,y)\,\nu(dy)<\infty$; cf.\ Lemma~\ref{lem:1}. Thus $f_n(x_0)^{+}$ is bounded uniformly in~$n$. On the other hand,
$
\int f_n^+(x)\,\mu(dx)\le \int c(x,y)\,\nu(dy)\mu(dx)<\infty,
$
and as $\int f_n(x)\,\mu(dx)\ge 0$ by~\eqref{eq:normalization}, it follows that $\int f_n^-(x)\,\mu(dx)$ is bounded. In view of~\eqref{eq:continuity}, we obtain that $f_n(x_0)^{-}$ is bounded uniformly in~$n$. This shows that $f_n(x_0)$ is bounded, and then so is $f^{k}_n(x_0)$. By~\eqref{eq:extension1}, it follows that $f^{k}_n(x)$ is bounded uniformly in~$n$, for any $x\in\mathcal{X}$, as claimed. %
Define
$$
\omega_{k}(x,r) = \sup_{d(x,x')\leq r} \tilde{d}_{k}(x,x') \equiv \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k), d(x,x')\leq r} \left|c(x, y)-c(x',y)\right|.
$$
Clearly $\tilde{d}_{k}(x_1,x_2) \le \omega_{k}(x_{1},d(x_1,x_2))$, and $\omega_{k}$ is a modulus of continuity as noted above. In particular, the conditions of Lemma~\ref{lem:aa1} are satisfied for the sequence $(f^k_n)_{n\in\mathbb{N}}$ with $\omega:=\omega_{k}$ and $h_n:=\epsilon_n\log(2)$. After passing to a subsequence, we thus obtain an $\omega_{k}$-continuous function $f^k$ such that $f^k_n\to f^{k}$ uniformly on compact subsets. After passing to another subsequence, we similarly obtain a limit~$g^{k}$ for~$g^k_n$.
Recall that for fixed $n$, the sets $A_{n}(\delta_k)$ are increasing in~$k$, and $\cup_{k}A_{n}(\delta_k)$ has full $\mu$-measure. As a consequence, $f^k_n=f^{k'}_n=f_n$ on $A_{n}(\delta_{k})$ for all $k'\ge k$, and a diagonal argument yields a subsequence along which $\lim_{n\to \infty} f_n^k= f^k$ $\mu$-a.s.\ for all~$k$. Similarly for $g_n^k$, and we may assume in what follows that $\lim_{n\to \infty} f_n^k= f^k$ $\mu$-a.s.\ and $\lim_{n\to \infty} g_n^k= g^k$ $\nu$-a.s.\ for all $k$.\\[-.7em]
\textit{Step 4.} In this step we show that $(f^k)$ converges $\mu$-a.s., after passing to a subsequence. Fix $\gamma>0$ and choose $k_0$ such that $\delta_{k_0}\le \gamma$. For all $k,k'\ge k_0$ and all~$n$, we have
\begin{align}\label{eq:triangle}
|f^k(x)-f^{k'}(x)| &\le |f^k(x)-f_n^k(x)|+|f_n^k(x)-f_n^{k'}(x)|+|f_n^{k'}(x)-f^{k'}(x)|.
\end{align}
Recalling also $f^k_n=f^{k'}_n=f_n$ on $A_{n}(\delta_{k_0})$ and
$\mu( (A_{n}(\delta_{k_0}))^c)\le \delta_{k_0}\le \gamma$,
we deduce
\begin{align*}
\int [ |f^k(x)&-f^{k'}(x)|\wedge 1]\,\mu(dx)\\
& \le \int_{A_{n}(\delta_{k_0})} \Big( [|f^k(x)-f_n^k(x)|\wedge 1]
+ [|f_n^{k'}(x)-f^{k'}(x)|\wedge 1] \Big) \,\mu(dx)+\gamma.
\end{align*}
Sending $n\to \infty$ and using the result of Step~3, dominated convergence allows us to conclude that
$\int |f^k(x)-f^{k'}(x)|\wedge 1 \,\mu(dx)\le \gamma;$
that is, $(f^k)$ is Cauchy in $\mu$-probability. In particular, there exists a limit~$f$ in $\mu$-probability, and after taking a subsequence, the limit also holds $\mu$-a.s. Similarly, $\lim_{k}g^k =g$ $\nu$-a.s. \\[-.7em]
\textit{Step 5.} Next, we show that the potentials $f_{n},g_{n}$ converge a.s.\ to the same limits $f,g$, after taking another subsequence. Given $\gamma>0$, Step~4 implies that for a.e.\ $x\in \mathcal{X}$ there exists $k_{0}(x)$ such that
$|f^{k}(x)-f(x)|\le \gamma/3$ and $\delta_k\le \gamma$
for~$k\geq k_{0}(x)$.
As $\lim_{n} f_{n}^{k}=f^{k}$ $\mu$-a.s., it follows for $k\geq k_{0}(x)$ and for~$n$ sufficiently large that
\begin{align*}
|f_n(x)-f(x)|&\le |f_n(x)-f_n^k(x)|+|f_n^k(x)-f^k(x)|+|f^k(x)-f(x)|\\
&\le |f_n(x)-f_n^k(x)|+|f_n^k(x)-f^k(x)|+\gamma/3\\
&\le |f_n(x)-f_n^k(x)|+\gamma/2.
\end{align*}
Recalling that $f_n(x)=f_n^k$ on $A_{n}(\delta_k)$, we conclude
$$
\limsup_{n\to \infty} \mu \left(\left\{ x:\ |f_n(x)-f(x)|\ge \gamma\right\}\right)\le\limsup_{n\to \infty} \mu\left(\left(A_{n}(\delta_k)\right)^c\right)\le \delta_k\le \gamma;
$$
that is, $f_n \to f$ in $\mu$-probability. Taking another subsequence, we have $\lim_{n} f_n =f$ $\mu$-a.s. Similarly, we obtain $\lim_{n}g_n=g$.
Lemmas~\ref{lem:as} and~\ref{lem:lusin}
show that after modifying $f,g$ on nullsets, we have
\begin{align}\label{eq:inequality}
f(x)+g(y)\le c(x,y), \quad (x,y)\in \mathcal{X}\times\mathcal{Y}.
\end{align}
\vspace{.3em}
\textit{Step 6.}
Let $C_{1}(x):=\int c(x,y)\,\nu(dy)$ and $C_{2}(y):=\int c(x,y)\,\mu(dx)$. In view of Lemma~\ref{lem:1} we have
\begin{equation}\label{eq:upperBoundInt}
f_n,f \leq C_{1} \in L^{1}(\mu), \qquad g_n,g \leq C_{2} \in L^{1}(\nu).
\end{equation}
Using also $H(\pi|\mu\otimes\nu)\ge 0$, the duality $I_{\varepsilon}=S_{\varepsilon}$ from the Introduction, Fatou's lemma and \eqref{eq:inequality}, we obtain
\begin{align*}
\inf_{\pi\in \Pi(\mu,\nu)} \int c(x,y)\,\pi(dx,dy)&\le \lim_{n\to \infty} \inf_{\pi\in \Pi(\mu,\nu)} \left[ \int c(x,y)\,\pi(dx,dy) +\epsilon_n H(\pi | \mu\otimes\nu)\right]\\
&=\lim_{n\to \infty} \left(\int f_n(x)\, \mu(dx)+\int g_n(y) \,\nu(dy)\right)\\
&\le \int \limsup_{n\to \infty} f_n(x)\, \mu(dx)+\int \limsup_{n\to \infty} g_n(y) \,\nu(dy)\\
&=\int f(x)\, \mu(dx)+\int g(y) \,\nu(dy)\\
&=\inf_{\pi\in \Pi(\mu,\nu)} \int [f(x) + g(y)]\,\pi(dx,dy)\\
&\le \inf_{\pi\in \Pi(\mu,\nu)} \int c(x,y)\,\pi(dx,dy).
\end{align*}
In particular, $\lim_{\varepsilon} S_{\varepsilon}=\int f(x)\, \mu(dx)+\int g(y) \,\nu(dy) = S_{0}$. Using again~\eqref{eq:upperBoundInt}, Fatou's lemma then also shows that
$$S_{0}/2 = \lim_{\varepsilon\to0} S_{\varepsilon}/2 = \lim_{n\to\infty} \int f_n(x)\, \mu(dx) \leq \int f(x)\, \mu(dx)$$
and similarly $S_{0}/2 \leq \int g(y)\, \nu(dy)$. We conclude that $$\int f(x)\, \mu(dx)=\int g(y)\, \nu(dy)=S_{0}/2$$
and hence the separate convergence
$$
\lim_{n\to\infty} \int f_n(x)\, \mu(dx) = \int f(x)\, \mu(dx), \quad \lim_{n\to\infty} \int g_n(y)\, \nu(dy) = \int g(y)\, \nu(dy).
$$
In view of~\eqref{eq:upperBoundInt} and the a.s.\ convergence $f_{n}\to f$, applying Scheff\'e's lemma to the nonpositive sequence $f_{n}-C_{1}$ allows us to conclude that $f_{n}\to f$ in $L^{1}(\mu)$. Similarly, $g_{n}\to g$ in $L^{1}(\nu)$.
\end{proof}
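The convergence $S_{\varepsilon}\to S_{0}$ established in Step~6 can be observed numerically. In the sketch below (our illustration; Sinkhorn iteration, uniform marginals and the chosen values of $\varepsilon$ are assumptions), the marginals are uniform on $n$ points, so that $S_0$ reduces to an assignment problem, and at the numerically converged Sinkhorn fixed point the dual value $\int f_\epsilon\,d\mu+\int g_\epsilon\,d\nu$ equals $S_\epsilon$ by the duality quoted in Step~6.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n = 5
c = rng.random((n, n))
mu = nu = np.full(n, 1 / n)

# Exact value S_0: with uniform marginals, the unregularized problem
# reduces to an assignment problem (Birkhoff--von Neumann).
rows, cols = linear_sum_assignment(c)
S0 = c[rows, cols].sum() / n

def dual_value(eps, iters=5000):
    """mu @ f + nu @ g = S_eps at the Sinkhorn fixed point."""
    f, g = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        f = -eps * np.log(np.exp((g[None, :] - c) / eps) @ nu)
        g = -eps * np.log(mu @ np.exp((f[:, None] - c) / eps))
    return mu @ f + nu @ g

vals = [dual_value(e) for e in (0.5, 0.2, 0.1)]
assert vals[0] >= vals[1] >= vals[2] >= S0 - 1e-8   # S_eps decreases to S_0
assert vals[2] <= S0 + 0.1 * np.log(n) + 1e-6       # S_eps <= S_0 + eps*log(n)
```

The last bound follows by inserting the optimal assignment coupling $\pi_0$ into $S_\epsilon$, since $H(\pi_0|\mu\otimes\nu)=\log n$ in this example.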
The proof of Theorem~\ref{thm:1} simplifies substantially if $c$ is uniformly continuous (and in particular if~$\mathcal{X}$ and~$\mathcal{Y}$ are compact). Moreover, the conclusion is stronger in this case: the almost-sure convergence of $f_{n}\to f$ and $g_{n}\to g$ can be replaced by uniform convergence on compact subsets. For the remainder of this section, let $\omega: \mathbb{R}_{+}\to\mathbb{R}_{+}$ be a modulus of continuity as defined before Lemma~\ref{lem:aa1}.
\begin{lemma}\label{le:unifCont}
Suppose that $c$ is uniformly $\omega$-continuous in both variables.
Then the potentials $f_\epsilon,g_\epsilon$ %
are uniformly $\omega$-continuous, for any $\varepsilon>0$.
\end{lemma}
\begin{proof}
Let $x_1, x_2\in \mathcal{X}$ satisfy $f_\epsilon(x_1)\ge f_\epsilon(x_2)$. Then
\begin{align*}
| & f_\epsilon(x_1)-f_\epsilon(x_2) |\\
&=\epsilon \log \int e^{\frac{g_\epsilon(y)-c(x_2,y)}{\epsilon}}\, \nu(dy)-\epsilon \log \int e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) \\
&=\epsilon \log \int e^{\frac{c(x_1,y)-c(x_2,y)+g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) -\epsilon \log \int e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy)\\
&\le \epsilon \log \left( e^{\frac{\sup_{y\in \mathcal{Y}} |c(x_1,y)-c(x_2,y)|}{\epsilon}} \int e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy) \right)
-\epsilon \log \int e^{\frac{g_\epsilon(y)-c(x_1,y)}{\epsilon}}\, \nu(dy)\\
&=\sup_{y\in \mathcal{Y}} |c(x_1,y)-c(x_2,y)| \leq \omega(d(x_{1},x_{2})).
\end{align*}
The case $f_\epsilon(x_1)\le f_\epsilon(x_2)$ follows by symmetry and the proof for $g_\epsilon$ is analogous.
\end{proof}
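Lemma~\ref{le:unifCont} only uses that $f_\epsilon$ is given by~\eqref{eq:definition} in terms of some $g_\epsilon$, so the inherited modulus can be checked numerically even before the Sinkhorn iteration has converged. In the following sketch (our illustration, with discrete uniform marginals as an assumption), the cost $c(x,y)=|x-y|$ is uniformly $1$-Lipschitz in each variable, and the computed potentials are verified to be $1$-Lipschitz as well.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, eps = 8, 9, 0.3
x = np.sort(rng.uniform(0.0, 2.0, nx))
y = np.sort(rng.uniform(0.0, 2.0, ny))
c = np.abs(x[:, None] - y[None, :])   # uniformly 1-Lipschitz in each variable
mu = np.full(nx, 1 / nx)
nu = np.full(ny, 1 / ny)

f, g = np.zeros(nx), np.zeros(ny)
for _ in range(2000):
    f = -eps * np.log(np.exp((g[None, :] - c) / eps) @ nu)
    g = -eps * np.log(mu @ np.exp((f[:, None] - c) / eps))

# f_eps and g_eps inherit the modulus of c: both are 1-Lipschitz.
assert np.all(np.abs(f[:, None] - f[None, :])
              <= np.abs(x[:, None] - x[None, :]) + 1e-10)
assert np.all(np.abs(g[:, None] - g[None, :])
              <= np.abs(y[:, None] - y[None, :]) + 1e-10)
```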
\begin{proposition} \label{pr:unifContCase}
Let $c$ be uniformly $\omega$-continuous in both variables and $\epsilon_n\to0$. After passing to a subsequence, $f_{\varepsilon_{n}}\to f$ and $g_{\varepsilon_{n}}\to g$ uniformly on compact subsets, for some uniformly $\omega$-continuous Kantorovich potentials $f$ and $g$.
\end{proposition}
\begin{proof}
The functions $(f_n)$ are $\omega$-equicontinuous by Lemma~\ref{le:unifCont}, hence $(f_n)$ is pointwise bounded as soon as it is bounded at one point $x\in\mathcal{X}$. By Lemma~\ref{lem:1}, $f_n(x)\le \int c(x,y)\,\nu(dy)<\infty$ for $\mu$-a.e.\ $x$, so that $(f_n^{+})$ is pointwise bounded. On the other hand,
$
\int f_n^+(x)\,\mu(dx)\le \int c(x,y)\,\nu(dy)\mu(dx)<\infty,
$
and as $\int f_n(x)\,\mu(dx)\ge 0$ by~\eqref{eq:normalization}, it follows that $\int f_n^-(x)\,\mu(dx)$ is bounded. By equicontinuity, it follows that $(f_n^-)$ must be bounded at any point $x\in \spt\mu$, and then at all points. Similarly for $(g_{n})$, and now the claimed convergence to some uniformly $\omega$-continuous functions $f,g$ follows from the Arzel\`a--Ascoli theorem. To see that $f,g$ are Kantorovich potentials, we argue as in Step~6 of the proof of Theorem~\ref{thm:1}.
\end{proof}
\section{Relation to a Large Deviations Principle}\label{se:LDP}
In this section we discuss the connection between convergence of potentials (Theorem~\ref{thm:1}) and a large deviations principle (LDP) along the lines proposed in \cite[Theorem~1.1]{BerntonGhosalNutz.21}.
Roughly speaking, the LDP describes the exponential rate of decay of $\pi_{\varepsilon}(E)$ for a set $E$ outside the support of $\pi_{0}:=\lim_{\varepsilon\to0} \pi_{\varepsilon}$, whereas the convergence %
of potentials yields the exponential rate of decay of the density $\pi_{\varepsilon}/d(\mu\otimes\nu)$ at points outside of $\{f+g =c\}$. Clearly, these statements are closely related, and as seen below, they are equivalent if $\mathcal{X},\mathcal{Y}$ are compact. In the general case, however, neither result implies the other in an obvious way.
Throughout this section, we fix a sequence $\varepsilon_{n}\to0$ and set $(f_{n},g_{n}):=(f_{\varepsilon_{n}},g_{\varepsilon_{n}})$, as in Section \ref{se:ProofOfMainRes}. Given a measurable function $I$ on $\mathcal{X}\times\mathcal{Y}$, we denote by $\essinf I$ the essential infimum wrt.\ $\mu\otimes\nu$, defined as
$
\essinf I = \inf \{\alpha\in\mathbb{R}:\, (\mu\otimes\nu)\{I < \alpha\}>0\}.
$
\begin{proposition}\label{pr:LDPfromConv}
Suppose that $f:=\lim_{n}f_{n}$ exists in $\mu$-probability and $g:=\lim_{n}g_{n}$ exists in $\nu$-probability. Define $I(x,y):= c(x,y)-f(x)-g(y)$, then for any measurable set $E\subset \mathcal{X}\times\mathcal{Y}$,
\begin{align}\label{eq:LDPfromConv}
\liminf_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(E) &\ge - \essinf_{(x,y)\in E} I(x,y).
\end{align}
If the convergence of $(f_{n},g_{n})$ is a.s.\ uniform on~$E$; i.e.,
$$
\|(f_{n},g_{n})\mathbf{1}_{E}-(f,g)\mathbf{1}_{E}\|_{L^\infty(\mu\otimes\nu)} \to 0,
$$
then~$E$ also satisfies the matching bound
\begin{align}\label{eq:LDPfromConv2}
\limsup_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(E) &\le - \essinf_{(x,y)\in E} I(x,y).
\end{align}
If $\mathcal{X},\mathcal{Y}$ are compact, that is the case for all sets~$E$.
\end{proposition}
\begin{proof}
Let $E\subset \mathcal{X}\times\mathcal{Y}$ be measurable,
$
\alpha:= - \essinf_{(x,y)\in E} I(x,y)
$
and $\gamma>0$. By the definition of $\alpha$, the set
\begin{align*}
E^{\gamma}:= \{ (x,y)\in E:\ f(x)+g(y)-c(x,y) \ge \alpha-2\gamma\}
\end{align*}
has positive measure; set $\beta:=(\mu \otimes \nu)(E^\gamma)/2>0$. In view of the assumed convergence of $(f_{n},g_{n})$, there exists $n_{0}$ such that
$(\mu\otimes\nu)\{|(f_{n},g_{n})-(f,g)|>\gamma\} \leq \beta$ for $n\geq n_{0}$, so that
$$
E^{\gamma}_{n}:= \{(x,y)\in E^{\gamma}:\, f_{n}(x)+g_{n}(y)-c(x,y) \ge \alpha-\gamma\}
$$
satisfies $(\mu \otimes \nu)(E^{\gamma}_{n})\ge \beta$ for all $n\geq n_{0}$. Thus
\begin{align*}
\pi_{\epsilon_n} (E)&\ge \pi_{\epsilon_n} (E^{\gamma}_{n})
= \int_{E^{\gamma}_{n}} e^{\frac{f_n(x)+g_n(y)-c(x,y)}{\epsilon_n}} \,\mu(dx)\,\nu(dy)
\ge \beta e^{\frac{\alpha-\gamma}{\epsilon_n}}
\end{align*}
for $n\geq n_{0}$ and then $\liminf_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n} (E) \ge\alpha-\gamma$. As $\gamma>0$ was arbitrary, the lower bound~\eqref{eq:LDPfromConv} follows.
Turning to the second claim, note that
\begin{align*}
\epsilon_n \log \pi_{\epsilon_n}(E)
&=\epsilon_n \log\left( \int_E e^{\frac{f_n(x)+g_n(y)-c(x,y)}{\epsilon_n}} \,\mu(dx)\,\nu(dy)\right) \\
&\le \esssup_{(x,y)\in E} \left( f_n(x)+g_n(y)-c(x,y)\right).
\end{align*}
If $\|(f_{n},g_{n})\mathbf{1}_{E}-(f,g)\mathbf{1}_{E}\|_{\infty}\to0$, it readily follows that
\begin{align*}
\lim_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(E) &\le - \essinf_{(x,y)\in E} I(x,y),
\end{align*}
as desired. If $\mathcal{X},\mathcal{Y}$ are compact, Proposition~\ref{pr:unifContCase} and the assumed convergence of the potentials in probability imply that $\|(f_{n},g_{n})-(f,g)\|_{\infty}\to0$ (without taking a subsequence), so that the above applies to any measurable set~$E$.
\end{proof}
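In the discrete case, the mechanism behind both bounds is the exact identity $\epsilon \log \pi_{\epsilon}(\{(i,j)\}) = f_\epsilon(i)+g_\epsilon(j)-c(i,j)+\epsilon\log(\mu_i\nu_j)$, so that the decay rate of the density is governed by $I=c-f-g$. The following sketch (our illustration; Sinkhorn iteration and uniform marginals are assumptions) checks this identity together with the marginal constraint $\pi_\epsilon\in\Pi(\mu,\nu)$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps = 5, 0.2
c = rng.random((n, n))
mu = nu = np.full(n, 1 / n)

f, g = np.zeros(n), np.zeros(n)
for _ in range(5000):
    f = -eps * np.log(np.exp((g[None, :] - c) / eps) @ nu)
    g = -eps * np.log(mu @ np.exp((f[:, None] - c) / eps))

# Density form of pi_eps with respect to mu x nu.
pi = np.exp((f[:, None] + g[None, :] - c) / eps) * mu[:, None] * nu[None, :]
assert np.allclose(pi.sum(axis=1), mu, atol=1e-8)   # pi_eps in Pi(mu, nu)
# eps * log pi_eps({(i,j)}) = f(i) + g(j) - c(i,j) + eps * log(mu_i * nu_j):
lhs = eps * np.log(pi)
rhs = f[:, None] + g[None, :] - c + eps * np.log(mu[:, None] * nu[None, :])
assert np.allclose(lhs, rhs)
```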
\begin{remark}
(a) In Proposition~\ref{pr:LDPfromConv} the rate is stated through an essential infimum, consistent with the fact that~$E$ can be irregular and $f,g$ are considered as determined only up to nullsets. In many situations it is known that Kantorovich potentials admit a continuous version, for instance by $c$-concavity. If moreover~$E$ is suitably regular (e.g., open and contained in $\spt \mu \times\spt \nu$), the essential infimum can be written as an infimum.
(b) In the case of compactly supported marginals, an alternative proof of Proposition~\ref{pr:LDPfromConv} can be given using Bryc's inverse to Varadhan's Integral Lemma; cf.\ \cite[Theorem~4.4.2]{DemboZeitouni.10}. That proof, however, is longer than the direct argument given above. In connection with classical large deviations theory, we note that the sequence $(\pi_{\varepsilon_{n}})$ fails to be exponentially tight whenever the marginals are not compactly supported: exponential tightness implies, in particular, that any limit $\pi_{0}$ is compactly supported, but as $\pi_{0}\in\Pi(\mu,\nu)$, the same then follows for $\mu,\nu$.
\end{remark}
If the Kantorovich potentials $(f,g)$ are unique, Theorem~\ref{thm:1} implies that the first condition in Proposition~\ref{pr:LDPfromConv} is satisfied.
Bounds similar to~\eqref{eq:LDPfromConv} and~\eqref{eq:LDPfromConv2} are stated in \cite[Theorem~1.1]{BerntonGhosalNutz.21} for open and compact sets, respectively. While weak convergence $\pi_{\varepsilon_{n}}\to\pi_{0}$ of the couplings is assumed there, that result avoids any conditions on $\mathcal{X},\mathcal{Y}$, the integrability of~$c$, or even the finiteness of the value $I_{\varepsilon}$ in~\eqref{eq:EOT}. Such a setting does seem to be outside the scope of the methods used here.
In general, if convergence of potentials is not known a priori, Proposition~\ref{pr:LDPfromConv} implies non-matching bounds by maximizing or minimizing over all potentials as follows. Given a family $(I_{\lambda})$ of measurable functions, $I^{*}:=\esssup_{\lambda} I_{\lambda}$ denotes the essential supremum wrt.\ $\mu\otimes\nu$ in the sense of probability theory.\footnote{I.e., $I^{*}$ is the (a.s.\ unique) measurable function satisfying $I^{*}\geq I_{\lambda}$ a.s.\ for all~$\lambda$ and $I^{*}\leq J$ a.s.\ for any~$J$ satisfying $J\geq I_{\lambda}$ a.s.\ for all~$\lambda$. In other words, $I^{*}$ is the supremum in the lattice of measurable functions equipped with the a.s.\ order.} Similarly, $\essinf_{\lambda} I_{\lambda}$ is the essential infimum.
\begin{corollary}\label{co:LDPlower}
Define $I^{*}:= \esssup_{f,g} I_{f,g}$, where $I_{f,g}(x,y):= c(x,y)-f(x)-g(y)$ and the supremum is taken over all Kantorovich potentials $(f,g)$. Similarly, define $I_{*}:= \essinf_{f,g} I_{f,g}$. Then
\begin{align}\label{eq:LDPfromConvCor}
\liminf_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(E) &\ge - \essinf_{(x,y)\in E} I^{*}(x,y)
\end{align}
for any measurable set $E\subset \mathcal{X}\times\mathcal{Y}$. If $\mathcal{X},\mathcal{Y}$ are compact, then also
\begin{align}\label{eq:LDPfromConv2Cor}
\limsup_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(E) &\le - \essinf_{(x,y)\in E} I_{*}(x,y).
\end{align}
\end{corollary}
\begin{proof}
Passing to a subsequence, we may assume that the $\liminf$ on the left-hand side is a limit. After passing to another subsequence, Theorem~\ref{thm:1} yields that the Schr\"odinger potentials $(f_{n},g_{n})$ converge in~$L^{1}$ to some Kantorovich potentials~$(f,g)$, and then Proposition~\ref{pr:LDPfromConv} applies to $(f,g)$. As $\essinf_{(x,y)\in E} I_{f,g}(x,y) \leq \essinf_{(x,y)\in E} I^{*}(x,y)$, the lower bound~\eqref{eq:LDPfromConvCor} follows. The proof of~\eqref{eq:LDPfromConv2Cor} is analogous.
\end{proof}
\begin{remark}
The lower bound~\eqref{eq:LDPfromConvCor} is quite general, and seems to be novel. Except in the case of uniqueness for the Kantorovich potentials, no analogue is stated in~\cite{BerntonGhosalNutz.21}. On the other hand, the upper bound~\eqref{eq:LDPfromConv2Cor} is similar to the bound in~\cite[Theorem~1.1\,(a)]{BerntonGhosalNutz.21}. The latter is stated under the condition that $\pi_{\varepsilon_{n}}$ converges but without any conditions on $\mathcal{X},\mathcal{Y}$.
\end{remark}
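For readers who wish to experiment with these objects numerically, the following sketch computes discrete Schr\"odinger potentials by log-domain Sinkhorn iterations. This is our own illustration, not part of the text: the function names and the mean-shifting used to enforce the symmetric normalization are choices made here, and the scheme is only meant for small discrete marginals.

```python
import math

def logsumexp(vals):
    """Numerically stable log(sum(exp(v))) for a list of floats."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def schrodinger_potentials(C, mu, nu, eps, iters=500):
    """Log-domain Sinkhorn iterations for the discrete entropic problem
    min <C, pi> + eps * H(pi | mu x nu); returns potentials (f, g),
    shifted so that sum_i mu_i f_i = sum_j nu_j g_j (symmetric
    normalization)."""
    f = [0.0] * len(mu)
    g = [0.0] * len(nu)
    for _ in range(iters):
        for i in range(len(mu)):
            f[i] = -eps * logsumexp([math.log(nu[j]) + (g[j] - C[i][j]) / eps
                                     for j in range(len(nu))])
        for j in range(len(nu)):
            g[j] = -eps * logsumexp([math.log(mu[i]) + (f[i] - C[i][j]) / eps
                                     for i in range(len(mu))])
    # shift so that the two weighted means coincide
    a = 0.5 * (sum(m * x for m, x in zip(mu, f))
               - sum(n * y for n, y in zip(nu, g)))
    return [x - a for x in f], [y + a for y in g]
```

For instance, with $C=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ and uniform marginals, the computed potentials are constant and of order $(\varepsilon/2)\log 2$, shrinking toward the Kantorovich potentials $f=g=0$ as $\varepsilon\to0$.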
The next result is a partial converse to Proposition~\ref{pr:LDPfromConv}. It suggests that if an LDP holds, then the Schr\"odinger potentials must converge (without passing to a subsequence) and the rate function must be determined by the limiting potentials. We prove this in the compact case via Varadhan's Integral Lemma, but we conjecture that the assertion remains valid in greater generality.
\begin{proposition}\label{pr:LDPtoPotentialConv}
Let $\mathcal{X},\mathcal{Y}$ be compact and suppose the assertion of the LDP \cite[Theorem~1.1]{BerntonGhosalNutz.21} holds for some function~$I:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}$; that is,
\begin{align}\label{eq:ldp1}
\limsup_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(C) &\le -\inf_{(x,y)\in C} I(x,y) \quad\mbox{for $C\subset\mathcal{X}\times\mathcal{Y}$ compact},\\
\liminf_{n\to \infty} \epsilon_n \log \pi_{\epsilon_n}(U) &\ge -\inf_{(x,y)\in U} I(x,y) \quad\mbox{for $U\subset \spt \mu \times\spt \nu$ open} \label{eq:ldp2}.
\end{align}
Then
$$
I(x,y) = c(x,y) - f(x)-g(y), \quad (x,y)\in \spt \mu \times\spt \nu
$$
for some Kantorovich potentials $(f,g)$, and
$$
f=\lim_{n\to \infty} f_{n} \quad \mbox{uniformly on }\spt \mu, \qquad g=\lim_{n\to \infty} g_{n} \quad \mbox{uniformly on }\spt \nu.
$$
\end{proposition}
\begin{proof}
As $\mathcal{X}\times\mathcal{Y}$ is compact, $c$ is uniformly continuous and then $f_{n},g_{n}$ are uniformly equicontinuous; cf.\ Lemma~\ref{le:unifCont}.
Fix $(x_0,y_0)\in \spt \mu \times\spt \nu$. Equicontinuity implies that given $\gamma>0$ there exist $r,n_{0}>0$ such that for all $n\geq n_{0}$,
$$
| I(x_0,y_0)+f_n(x_0)+g_n(y_0)-c(x_0,y_0) - J_{n}(r) |\le \gamma \quad\mbox{for}
$$
$$
J_{n}(r) := \epsilon_n \log \bigg( \int_{B_{r}(x_0,y_0)} e^{\frac{I(x,y)+f_n(x)+g_n(y)-c(x,y)}{\epsilon_n}} \,\mu(dx)\,\nu(dy)\bigg).
$$
To show $\lim_{n\to \infty} [f_n(x_{0})+g_n(y_{0})]=c(x_{0},y_{0})-I(x_{0},y_{0})$, it therefore suffices to prove for all $r>0$ that
\begin{align}\label{eq:proofLDPtoPotentialConv}
\lim_{n\to \infty} J_{n}(r)=0.
\end{align}
Next, we argue that~$I$ must be continuous. Indeed, after passing to a subsequence, Proposition~\ref{pr:LDPfromConv} shows that $I$ must be of the form $I=c-\tilde{f}-\tilde{g}$ on $\spt \mu \times\spt \nu$, for some (necessarily uniformly continuous) Kantorovich potentials $(\tilde{f},\tilde{g})$.
Moreover, we may assume that $\mathcal{X}=\spt\mu$ and $\mathcal{Y}=\spt\nu$, by shrinking the marginal spaces if necessary.
In brief, the LDP \eqref{eq:ldp1}, \eqref{eq:ldp2} then holds for all closed sets~$C$ and open sets~$U$ in $\mathcal{X}\times\mathcal{Y}$ with the ``good'' rate function $I$. In this context, Varadhan's Integral Lemma \cite[Theorem~4.3.1]{DemboZeitouni.10} states that
\begin{align}\label{eq:VaradhanLemma}
\lim_{n\to \infty} \epsilon_n \log \left( \int e^{\frac{\phi(x,y)}{\epsilon_n}} \pi_{\epsilon_n}(dx,dy) \right)= \sup_{(x,y)\in \mathcal{X}\times \mathcal{Y}} (\phi(x,y)-I(x,y))
\end{align}
for any continuous function $\phi:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$ that satisfies the moment condition
\begin{align*}
\limsup_{n\to \infty} \epsilon_n \log \left( \int e^{\frac{\gamma \phi(x,y)}{\epsilon_n}} \pi_{\epsilon_n}(dx,dy) \right)<\infty
\end{align*}
for some $\gamma>1$.
As the continuous function~$I$ is bounded on the compact space $\mathcal{X}\times\mathcal{Y}$, this holds in particular for $\phi:=I$, for any~$\gamma>1$. Let $(x_0,y_0)\in \mathcal{X}\times\mathcal{Y}$ and $r>0$. Using~\eqref{eq:VaradhanLemma} for $\phi=I$,
\begin{align*}
\limsup_{n\to \infty} J_{n}(r)
&\le \limsup_{n\to \infty} \epsilon_n \log \left( \int e^{\frac{I(x,y)}{\epsilon_n}} \pi_{\epsilon_n}(dx,dy)\right)\\
&=\sup_{(x,y)\in \mathcal{X}\times \mathcal{Y}} (I(x,y)-I(x,y))=0.
\end{align*}
To show the converse inequality, consider a bounded continuous function $\phi$ with
$$
\phi(x_0,y_0) = I(x_0,y_0), \qquad \phi \leq I \mbox{ on } B_{r}(x_0,y_0),\qquad \phi = -1 \mbox{ on } B^{c}_{r}(x_0,y_0).
$$
Then
\begin{align*}
\int_{B_{r}(x_0,y_0)} e^{\frac{I(x,y)+f_n(x)+g_n(y)-c(x,y)}{\epsilon_n}} \,\mu(dx)\,\nu(dy)
&\geq \int_{B_{r}(x_0,y_0)} e^{\frac{\phi(x,y)}{\epsilon_n}} \,\pi_{\varepsilon_n}(dx,dy) \\
&\geq \int e^{\frac{\phi(x,y)}{\epsilon_n}} \,\pi_{\varepsilon_n}(dx,dy) - e^{\frac{-1}{\epsilon_n}}
\end{align*}
and thus
\begin{align*}
\liminf_{n\to \infty} J_{n}(r)
&\ge \liminf_{n\to \infty} \epsilon_n \log \left( \int e^{\frac{\phi(x,y)}{\epsilon_n}} \,\pi_{\varepsilon_n}(dx,dy) \right) \\
&=\sup_{(x,y)\in \mathcal{X}\times \mathcal{Y}} (\phi(x,y)-I(x,y))=0,
\end{align*}
where we have used~\eqref{eq:VaradhanLemma}. This completes the proof of~\eqref{eq:proofLDPtoPotentialConv} and thus shows that $\lim_{n\to \infty} [f_n(x_{0})+g_n(y_{0})]=c(x_{0},y_{0})-I(x_{0},y_{0})$ for $(x_0,y_0)\in \spt \mu \times\spt \nu$. In view of the uniform equicontinuity, the convergence is even uniform on that set.
On the other hand, we have already shown in Proposition~\ref{pr:unifContCase} that $f_{n}\to f$ and $g_{n}\to g$ uniformly, after passing to a subsequence, for some Kantorovich potentials $f,g$. Thus $c-I=f+g$ on $\spt \mu \times\spt \nu$. It remains to argue that the original sequences $f_{n},g_{n}$ converge to $f,g$. Indeed, the rectangular form of $S:=\spt \mu \times\spt \nu$ implies that if $f(x)+g(y)=\tilde f(x)+\tilde g(y)$ on $S$, then $\tilde f=f+a$ and $\tilde g=g-a$ for some $a\in\mathbb{R}$. Recalling our symmetric normalization for potentials, the claim follows.
\end{proof}
\section{Varying Costs}\label{se:varyingCost}
In this section we extend Theorem~\ref{thm:1} to cost functions that vary with $\varepsilon$. The continuous cost $c$ will be used for the limiting Monge--Kantorovich transport problem, as before. In addition, we introduce a family of cost functions $c_{\varepsilon}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}$ for the regularized problems with~$\varepsilon>0$. These functions are merely required to be measurable.
On the one hand, we are interested in the stability of Theorem~\ref{thm:1} with respect to the cost function. On the other hand, this section is motivated by the large deviations perspective on Schr\"odinger bridges; cf.\ \cite{Leonard.14}. Recall that
\begin{equation}\label{eq:bridge}
\pi_{\varepsilon}=\argmin_{\pi\in \Pi(\mu,\nu)} H(\pi|R_{\varepsilon}) \quad \mbox{for} \quad \frac{dR_{\varepsilon}}{d(\mu\otimes\nu)} = \alpha_{\varepsilon} e^{-c/\varepsilon}
\end{equation}
where $\alpha_{\varepsilon}$ is the normalizing constant. Theorem~\ref{thm:1} and its counterparts in Section~\ref{se:LDP} can be interpreted as consequences of the large deviations of $(R_{\varepsilon})$ as~$\varepsilon\to0$, whose rate is the function~$c$. More generally, this rate function is shared by arbitrary measures $(R'_{\varepsilon})$ with $-\varepsilon \log \frac{dR'_{\varepsilon}}{d(\mu\otimes\nu)} \to c$, and one may wonder if they give rise to a similar result.
This convergence is equivalent to setting $\frac{dR'_{\varepsilon}}{d(\mu\otimes\nu)}=\alpha'_{\varepsilon}e^{-c_{\varepsilon}/\varepsilon}$ for some function $c_{\varepsilon}$ with~$c_{\varepsilon}\to c$, and returning to the language of entropic optimal transport, it corresponds to the cost~$c_{\varepsilon}$ under consideration.
In what follows, we assume a common bound
\begin{equation}\label{eq:majorant}
c_{\varepsilon} \leq \bar c \quad\mbox{for all}\quad \varepsilon>0
\end{equation}
for some function $\bar{c}(x,y) = \bar{c}_{1}(x)+ \bar{c}_{2}(y)$ with $\bar{c}_{1}\in L^{1}(\mu)$ and $\bar{c}_{2}\in L^{1}(\nu)$, and that
\begin{equation}\label{eq:cConv}
c_{\varepsilon} \to c \quad \mbox{uniformly on compact subsets as $\varepsilon\to0$.}
\end{equation}
The modified entropic optimal transport problem then reads
\begin{equation}\label{eq:EOTvar}
I_{\varepsilon} :=\inf_{\pi\in\Pi(\mu,\nu)} \int_{\mathcal{X}\times\mathcal{Y}} c_{\varepsilon}(x,y) \, \pi(dx,dy) + \varepsilon H(\pi|\mu\otimes\nu).
\end{equation}
As before, it has a unique solution $\pi_{\varepsilon}$, and we introduce the Schr\"odinger potentials through the formula
\begin{equation}\label{eq:densityFormVar}
\frac{d\pi_{\varepsilon}}{d(\mu\otimes\nu)}(x,y) = \exp \left(\frac{f_{\varepsilon}(x) +g_{\varepsilon}(y) - c_{\varepsilon}(x,y)}{\varepsilon}\right)
\end{equation}
and the symmetric normalization~\eqref{eq:normalizationIntro}. The Monge--Kantorovich problem and its potentials are still based on the continuous cost $c$. While not required for the regularized problem with $\varepsilon>0$, continuity of costs is important for $\varepsilon=0$, including for the validity of~Theorem~\ref{thm:1} (see Example~\ref{ex:noConv}).
\begin{proposition}\label{pr:varyingCost}
Let~\eqref{eq:majorant}, \eqref{eq:cConv} hold. Then the assertion of Theorem~\ref{thm:1} extends to the setting~\eqref{eq:EOTvar}, \eqref{eq:densityFormVar} of variable costs $(c_{\varepsilon})$.
\end{proposition}
\begin{proof}
We only indicate the necessary changes to the proof of Theorem~\ref{thm:1}. First of all, we recall that the auxiliary results in Section~\ref{sec:auxiliaryResults} did not require continuity. Next, we go through the steps in Section~\ref{se:ProofOfMainRes}.
\emph{Step~1.} We change~\eqref{eq:continuity} to
\begin{align*}%
\begin{split}
\left|f_n(x_1)-f_n(x_2)\right|&\le \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k)} \left|c_{n}(x_1, y)-c_{n}(x_2,y)\right| -\epsilon_n\log(1-\delta_k)\\
&\le \sup_{y\in \mathcal{Y}_{\mathrm{cpt}}(\delta_k)} \left|c(x_1, y)-c(x_2,y)\right| +\epsilon_n\log(2) + \eta_{n,k}
\end{split}
\end{align*}
where, due to the uniform convergence of $c_{\varepsilon}$ on the compact set $\mathcal{X}_{\mathrm{cpt}}(\delta_k)\times\mathcal{Y}_{\mathrm{cpt}}(\delta_k)$, the constant $\eta_{n,k}$ satisfies $\lim_{n}\eta_{n,k}=0$ (for fixed $k$). The subsequent display for $g_{n}$ is changed analogously.
\emph{Step~2.} Instead of~\eqref{eq:extension1} we now have
\begin{align*}%
\left|f^k_n(x_1)-f^k_n(x_2)\right|\le \tilde{d}_k(x_1,x_2) +\epsilon_n\log(2) + \eta_{n,k}, \quad x_1,x_2\in \mathcal{X}.
\end{align*}
\emph{Step~3.} In the arguments for the pointwise boundedness, simply replace~$c$ by~$\bar c$. In the application of Lemma~\ref{lem:aa1}, replace $h_n:=\epsilon_n\log(2)$ by $h_{n,k}:=\epsilon_n\log(2)+ \eta_{n,k}$. Note that the dependence on~$k$ does not cause any difficulty, as $k$ is fixed and $\lim_{n}h_{n,k}=0$ holds for each~$k$.
\emph{Steps~4,5.} No changes are necessary in these steps; note that~\eqref{eq:inequality} is based solely on the limiting cost function $c$ which is still assumed to be continuous.
\emph{Step 6.}
Define $C_{1}(x):=\int \bar{c}(x,y)\,\nu(dy)=\bar{c}_{1}(x) + \|\bar{c}_{2}\|_{L^{1}(\nu)}$ and similarly $C_{2}(y):=\bar{c}_{2}(y) + \|\bar{c}_{1}\|_{L^{1}(\mu)}$. Then we again have~\eqref{eq:upperBoundInt}. For the subsequent display, we now need to argue that
\begin{align}\label{eq:MKlimit}
\inf_{\pi\in \Pi(\mu,\nu)} \int c(x,y)\,\pi(dx,dy)&\le \lim_{n\to \infty} \inf_{\pi\in \Pi(\mu,\nu)} \int c_{n}(x,y)\,\pi(dx,dy).
\end{align}
Indeed, given $\gamma>0$, we can find a compact set $K=K_{1}\times K_{2}\subset \mathcal{X}\times\mathcal{Y}$ with
$$
\int_{K^{c}} \bar{c}(x,y)\,\pi(dx,dy) \leq \int_{K_{1}^{c}} \bar{c}_{1}(x)\,\mu(dx) + \int_{K_{2}^{c}} \bar{c}_{2}(y)\,\nu(dy) <\gamma.
$$
As $c_{n}\to c$ uniformly on~$K$, we also have $|c-c_{n}|\leq \gamma$ on~$K$ for $n\geq n_{0}$. Thus
\begin{align*}
\left|\inf_{\pi\in \Pi(\mu,\nu)} \int c \,d\pi - \inf_{\pi\in \Pi(\mu,\nu)} \int c_{n} \,d\pi \right|
&\leq \sup_{\pi\in \Pi(\mu,\nu)} \int |c-c_{n}| \,d\pi \\
&\leq \sup_{\pi\in \Pi(\mu,\nu)} \int_{K }|c-c_{n}| \,d\pi + \int_{K^{c}} \bar{c} \,d\pi
\leq 2\gamma
\end{align*}
for $n\geq n_{0}$. This implies~\eqref{eq:MKlimit}, even with equality, and the remainder of the proof holds as stated without further changes.
\end{proof}
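The key estimate in the last display, $|\inf_{\pi}\int c \,d\pi - \inf_{\pi}\int c_{n} \,d\pi| \le \sup_{\pi}\int |c-c_{n}| \,d\pi$, can be checked on small discrete examples. The sketch below is our own illustration, not part of the text: it uses uniform marginals on $n$ points, where by Birkhoff's theorem the Monge--Kantorovich infimum is attained at a permutation matrix, so brute-force enumeration is exact.

```python
from itertools import permutations

def ot_value_uniform(C):
    """Monge-Kantorovich value for uniform marginals on n points; by
    Birkhoff's theorem the infimum is attained at a permutation matrix,
    so a brute-force search over permutations is exact."""
    n = len(C)
    return min(sum(C[i][p[i]] for i in range(n))
               for p in permutations(range(n))) / n

# a fixed cost and a perturbed version c_n with sup-norm distance delta
C = [[0.0, 1.0, 4.0], [1.0, 0.0, 1.0], [4.0, 1.0, 0.0]]
delta = 0.1
Cn = [[cij + delta * ((i + j + 1) % 2) for j, cij in enumerate(row)]
      for i, row in enumerate(C)]
gap = abs(ot_value_uniform(Cn) - ot_value_uniform(C))  # bounded by delta
```

Here the bound is attained with equality, since the perturbation hits exactly the diagonal entries selected by the optimal (identity) permutation.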
The following simple example shows that continuity of~$c$ is important for the validity of Theorem~\ref{thm:1}.
\begin{example}\label{ex:noConv}
Let $\mu=\nu$ be uniform on $\mathcal{X}=\mathcal{Y}=[0,1]$ and $c(x,y)=\mathbf{1}_{x\neq y}$. Then the Schr\"odinger potentials are $f_{\varepsilon}=g_{\varepsilon}\equiv 1/2$ for all~$\varepsilon>0$ but the (unique) Kantorovich potentials are $f_{0}=g_{0}\equiv0$.
\end{example}
To put the example in a broader context, note that the entropic optimal transport problem~\eqref{eq:EOT} with $\varepsilon>0$ remains unchanged if the cost function is altered on a $\mu\otimes\nu$-nullset, whereas the Monge--Kantorovich problem may very well change. If~$c$ is measurable and $\hat{c}$ is a continuous function with $\hat c=c$ $\mu\otimes\nu$-a.s., Theorem~\ref{thm:1} thus implies that the entropic problem~\eqref{eq:EOT} with cost~$c$ converges to the Monge--Kantorovich problem with cost~$\hat c$ as~$\varepsilon\to0$. Example~\ref{ex:noConv} is a particular case with $c(x,y)=\mathbf{1}_{x\neq y}$ and $\hat c\equiv 1$. For more general cost functions, one may conjecture that~\eqref{eq:EOT} converges to some form of upper envelope of the Monge--Kantorovich problem; we leave this question for future research.
\section{Multimarginal Optimal Transport}\label{sec:multi}
Instead of two marginals $\mu$ and $\nu$, we can generalize to an arbitrary number $N\in \mathbb{N}$ of marginals. Consider Polish probability spaces $(\mathcal{X}_{i},\mu_{i})$ for $i=1,\dots,N$ and let
$$
\boldsymbol\mu(dx_{1},\dots,dx_{N}):= \mu_1(dx_1) \otimes \cdots \otimes\mu_N(dx_N) %
$$
denote the product measure. Moreover, let $c:\mathcal{X}_1\times \cdots\times \mathcal{X}_N \to \mathbb{R}_+$ be continuous with
$
\int c \,d\boldsymbol\mu <\infty.
$
The entropic optimal transport problem generalizes directly to couplings $\pi\in\Pi(\mu_{1},\dots,\mu_{N})$,
\begin{equation}\label{eq:EOTmulti}
I_{\varepsilon} :=\inf_{\pi\in\Pi(\mu_{1},\dots,\mu_{N})} \int c \, d\pi + \varepsilon H(\pi|\boldsymbol\mu),
\end{equation}
and has a unique solution $\pi_{\varepsilon}$ given by
\begin{equation}\label{eq:densityFormMulti}
\frac{d\pi_{\varepsilon}}{d\boldsymbol\mu}(x_{1},\dots,x_{N}) = \exp \left(\frac{f^{1}_{\varepsilon}(x_{1}) + \cdots + f^{N}_{\varepsilon}(x_{N}) - c(x_{1},\dots,x_{N})}{\varepsilon}\right)
\end{equation}
with $f^{i}_{\varepsilon}\in L^{1}(\mu_{i})$. For $\varepsilon=0$, we again recover the multimarginal optimal transport problem, whose dual now reads
\begin{align}\label{eq:dualOTmulti}
S_0 :=\sup_{f^{i}\in L^1(\mu_{i}),\, \sum_{i}f^{i}(x_{i}) \leq c(x_{1},\dots,x_{N})} \, \sum_{i=1}^{N} \int f^{i}(x_{i})\, \mu_{i}(dx_{i}).
\end{align}
We again normalize all potentials symmetrically.
Extending Theorem~\ref{thm:1}, we have the following result.
\begin{theorem}\label{thm:multi}
Let $(f^{1}_{\varepsilon},\dots,f^{N}_{\varepsilon})$ be the unique Schr\"odinger potentials for $\varepsilon>0$.
\begin{itemize}
\item[(a)] Given $\varepsilon_{n}\to0$, there is a subsequence $(\varepsilon_{k})$ such that $f^{i}_{\varepsilon_{k}}$ converges in $L^{1}(\mu_{i})$, for all $i=1,\dots,N$.
\item[(b)] If $\lim_{n} f^{i}_{\varepsilon_{n}} = f^{i}$ $\mu_{i}$-a.s.\ for all $i=1,\dots,N$, then $(f^{1},\dots,f^{N})$ are Kantorovich potentials and the convergence also holds in $L^{1}(\mu_{i})$.
\end{itemize}
If the Kantorovich potentials $(f^{1}_{0},\dots,f^{N}_{0})$ for~\eqref{eq:dualOTmulti} are unique, then it follows that $\lim_{\varepsilon\to0} f^{i}_{\varepsilon} = f^{i}_{0}$ in $L^{1}(\mu_{i})$ for $i=1,\dots,N$.
\end{theorem}
\begin{proof}
The arguments are exactly the same as in the proof of Theorem~\ref{thm:1}, and therefore omitted.
\end{proof}
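The fixed-point equations implied by~\eqref{eq:densityFormMulti} can be iterated numerically in the multimarginal setting as well. The following sketch is our own illustration (function names and the brute-force summation over the remaining coordinates are chosen for readability, not scalability): each potential $f^{i}$ is updated so that the $i$-th marginal of the resulting density matches $\mu_{i}$.

```python
import math
from itertools import product

def logsumexp(vals):
    """Numerically stable log(sum(exp(v)))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def multimarginal_potentials(c, mus, eps, iters=200):
    """Coordinate-wise Sinkhorn updates: potential f^i is set so that the
    i-th marginal of pi = exp((sum_j f^j(x_j) - c(x)) / eps) * prod_j mu_j
    matches mu_i exactly right after its update.  `c` maps an index tuple
    to a cost value; `mus` is a list of weight lists."""
    N = len(mus)
    fs = [[0.0] * len(mu) for mu in mus]
    for _ in range(iters):
        for i in range(N):
            other_axes = [range(len(mus[j])) for j in range(N) if j != i]
            for xi in range(len(mus[i])):
                terms = []
                for rest in product(*other_axes):
                    idx = list(rest[:i]) + [xi] + list(rest[i:])
                    s = sum(fs[j][idx[j]] for j in range(N) if j != i)
                    w = sum(math.log(mus[j][idx[j]])
                            for j in range(N) if j != i)
                    terms.append(w + (s - c(tuple(idx))) / eps)
                fs[i][xi] = -eps * logsumexp(terms)
    return fs
```

A quick sanity check is to reconstruct the density from the returned potentials and verify that its marginals agree with the prescribed $\mu_{i}$ up to the iteration tolerance.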
\newcommand{\dummy}[1]{}
\subsection{GAP Overview}
\label{gap}
The GAP contact tracing approach \cite{ag2020exposure:crypto} is based on frequently-changing random pseudonyms, so-called \emph{Rolling Proximity Identifiers (RPI)}. An overview of the approach is shown in Fig.~\ref{fig:GAP-overview}. Each app generates these RPIs from a \emph{Temporary Exposure Key (TEK)} (formerly known as \emph{Daily Tracing Key (DTK)} in version 1.0 of the GAP specification) and beacons them into their surroundings using BLE. Apps on other devices in close proximity can observe these RPIs and store them locally as a record of contact with the device beaconing the RPI. This dataset also includes additional metadata like the received signal strength.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{gfx/GAP-overview-TEK.pdf}
\caption{Overview of the GAP contact tracing approach}
\label{fig:GAP-overview}
\end{figure}
Should a user test positive for SARS-CoV-2, they can decide to use an app to upload their TEKs of the last $x$ days (currently $x=14$) to a central server. The server accumulates the received TEKs of infected persons and offers them for download by other users' apps.
Apps in devices of participating users regularly check the server for updates and download any new TEKs. Each app then uses the downloaded TEKs to calculate the corresponding RPI pseudonyms used by the infected persons' apps in the recent past. The operating system / corresponding system service then compares these infected persons' RPIs to the RPIs stored locally on the device. If matching RPIs are found, the metadata, e.g., signal strength or duration of the encounters, related to these matching RPIs are used to calculate a risk score that is used to determine whether a warning should be displayed to the user or not.
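To make the download-and-match flow concrete, the following sketch mimics the client side. One caveat: the actual GAP specification derives a Rolling Proximity Identifier Key from the TEK via HKDF and AES-encrypts the 10-minute interval number; the `derive_rpis` helper below is a simplified hash-based stand-in of our own construction, used only to illustrate how downloaded TEKs are expanded into RPIs and intersected with the pseudonyms recorded locally.

```python
import hashlib

INTERVALS_PER_DAY = 144  # RPIs rotate every 10 minutes

def derive_rpis(tek: bytes, day_start_interval: int) -> set:
    """Stand-in derivation (NOT the real spec, which uses HKDF + AES-128):
    hash TEK || interval number to obtain 16-byte pseudonyms."""
    return {
        hashlib.sha256(tek + i.to_bytes(4, "little")).digest()[:16]
        for i in range(day_start_interval,
                       day_start_interval + INTERVALS_PER_DAY)
    }

def find_matches(published_teks, observed_rpis):
    """Expand each downloaded (TEK, day) pair into its 144 RPIs and
    intersect with the pseudonyms stored locally on the device."""
    matches = {}
    for tek, day_start in published_teks:
        hits = derive_rpis(tek, day_start) & observed_rpis
        if hits:
            matches[(tek, day_start)] = hits
    return matches
```

In the real system, the metadata attached to the matching RPIs (signal strength, encounter duration) would then feed the risk-score calculation mentioned above.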
\subsection{Privacy Attack: Profiling Infected Persons}
\label{sec:profiling-attack}
\subsubsection{Goal and System Setup}
The goal of our experiment is to show that it is practically possible to profile the movement and activities of infected users after they upload their TEKs.
Based on TEKs, other participating apps can derive the corresponding RPIs that the infected user's app has beaconed out in the recent past.
Note that since all TEKs uploaded by infected persons can be downloaded by anyone, the RPIs are essentially public information.
To conduct the attack, we deployed BLE sniffers to capture RPIs at six selected sensitive places in downtown Darmstadt, Germany, as listed in Table~\ref{tab:list-location}. The locations of these places are shown in Fig.~\ref{fig:profiling-attack-deploy}. We used commodity smartphones as sniffers, which can capture BLE signals at a distance of up to 6 meters; with a special Bluetooth antenna, it was possible to capture signals at a much greater distance. The BLE sniffers capture the RPIs of any users moving through or spending time at the places mentioned above. In our experiment, two tracing app users simulated two particular paths.
\begin{table}[ht]
\centering
\caption{List of locations with deployed BLE sniffers}
\label{tab:list-location}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ll}
Location & Description \\\hline
A & A residential area \\
B & City hall \\
C & Police station \\
D & Clinic and pharmacy \\
E & Outside a pub \\
F & Outside a head shop and a sports gambling bookmaker
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=.85\columnwidth]{gfx/profiling-attack-deploy.pdf}
\caption{An example of observation points}
\label{fig:profiling-attack-deploy}
\end{figure}
Since the official GAP API can currently only be used by governmental health institutions \cite{AppleExposureAddendum}, we implemented a GAP tracing app simulator, following the RPI generation procedure laid out in the GAP cryptography API specification~\cite{ag2020exposure:crypto}.
\subsubsection{Experimental Results}
\label{sec:profiling-experiments}
A sample of the RPI measurements captured at different observation points (marked A to F) is shown in Fig.~\ref{fig:profiling-attack-capture}. The captured data looks entirely random, and it is not obvious which RPIs could be associated with individual users. However, when we simulate the case that one of the users is tested positive for SARS-CoV-2 and uploads the TEKs that were used to derive the corresponding RPIs, a completely different picture emerges, as shown in Fig.~\ref{fig:profiling-attack-link}. It is evident that by matching the RPIs of User 1 with the RPIs captured at different locations, e.g., locations $B$ and $E$, we know exactly which locations User 1 has visited and when User 1 arrived at and left each location.
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\columnwidth]{gfx/profiling-attack-capture.pdf}
\caption{RPI measurements captured at location B and E}
\label{fig:profiling-attack-capture}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{gfx/profiling-attack-link.pdf}
\caption{Profiling User 1 movements}
\label{fig:profiling-attack-link}
\label{fig:profiling-attack-measurement}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.85\columnwidth]{gfx/profiling-attack-route.pdf}
\caption{Movement profile of two infected users (User 1 in blue and User 2 in green) based on the observation points.}
\label{fig:profiling-attack-route}
\end{figure}
Moreover, if we sort the locations that the users have visited in chronological order, we see that we can track the movements of each of the test users, as shown in Fig.~\ref{fig:profiling-attack-route}. Let (User $i$, $X$) denote the presence of User $i$ at location $X$. The sequence of observations of User 1 was as follows: (User 1, $A$) in a residential area, then (User 1, $D$) near a clinic and a pharmacy, (User 1, $C$) near the police station, (User 1, $B$) at Darmstadt city hall, and (User 1, $E$) near a pub, before concluding the round at the starting point with (User 1, $A$), the residential area mentioned above. This may indicate that the user lives in this area. A similar tracing of locations is possible for User 2, who was first observed at (User 2, $B$), the city hall, after which the next observation (User 2, $E$) happened near the pub, and the final observation (User 2, $F$) was near a head shop and a sports gambling bookmaker.
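The linking step just described is computationally trivial. The following sketch (a minimal illustration of our own, with hypothetical record fields) reconstructs such a movement profile from sniffer records once the RPI set derivable from an infected user's uploaded TEKs is known.

```python
def movement_profile(captures, user_rpis):
    """Given sniffer records (timestamp, location, rpi) and the set of RPIs
    derivable from an uploaded TEK, return the user's visited locations in
    chronological order, collapsing consecutive sightings at one place into
    a single (location, first_seen, last_seen) entry."""
    sightings = sorted((t, loc) for (t, loc, rpi) in captures
                       if rpi in user_rpis)
    profile = []
    for t, loc in sightings:
        if profile and profile[-1][0] == loc:
            profile[-1] = (loc, profile[-1][1], t)  # extend current stay
        else:
            profile.append((loc, t, t))  # new location entered
    return profile
```

Records whose RPIs do not match the target user (such as `"x9"` in the test below) are simply ignored, which is exactly what makes the uploaded TEKs the key that unlocks the profile.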
With these observations about users and the associated timestamps, a significant amount of information can be gathered.
Since we know where users are, at which time, and how long they spend at each observed place, it is possible to aggregate relevant information from the users to potentially de-anonymize them.
For example, our experiment indicates that User 1 lives in the residential area near location $A$ and may have health and legal concerns, as suggested by the visits to the clinic and the police station.
User 2 might be involved with the municipal administration and seems to like products available in a head shop or at a sports bookmaker.
Moreover, since both users left location $B$ at about the same time, arrived at the pub (location $E$) at the same time, spent time there, and also left the pub at the same time, it is likely that these two users have a social relationship.
We have conducted a series of experiments of different complexity.
Our experiments demonstrate the power that the adversary gains by having access to RPI data of individual users. Since the TEKs change every 24 hours, traceability across longer time frames would initially not seem to be possible. However, since infected users upload TEKs of 14 days and since typical travel patterns of individual users show marked similarities even between different days (e.g., the typical commute pattern between home and workplace), it is possible to link and track at least some infected users for time periods significantly longer than the validity periods of individual TEKs (up to 14 days). This clearly will reveal even more personal information and activities of the targeted users and provide ample opportunities for using potentially available additional public information to de-anonymize the users in question. Moreover, de-anonymization becomes easier if the adversary has access to additional information about social relationships of users, e.g., the social graph of an online social network (OSN). This graph can be used to identify the infected users and their social contacts by comparing the social graph of the infected users obtained by the profiling attack to the OSN social graph~\cite{ji2015USENIXSecGraph, Radaelli2018SurveillanceNetworkedAge}.\footnote{Interestingly, a similar weakness was observed and criticized \cite{dp3tAgainsPepppt} for the centralized tracing app, called PEPP-PT \cite{Peppptsp}, that enables the central server to build the social graph of infected individuals!}
\textit{Experiments with the Corona-Warn-App:} We also conducted an experiment to confirm that the profiling attack is also applicable to the official German \emph{Corona-Warn-App} released on June 16, 2020. We captured RPIs from the \emph{Corona-Warn-App} without problems. We could also re-beacon these RPIs in our wormhole attack (cf. Section \ref{sec:relay-attack}).
\subsection{Case Study}
We consider a case in which an attacker seeks to identify persons infected with SARS-CoV-2 in Darmstadt, Germany. The attack would work best when useful side information is available, e.g., information about addresses of persons working in a particular place. For example, office addresses of the employees of the City of Darmstadt are available through the www.darmstadt.de website.
To capture coarse-grained movements of persons in Darmstadt, strategically-placed sensing stations need to be positioned in the city area. However, it is not necessary to cover the full $122 \; km^2$ of the city area: even placing only one sensing station every $100$ meters, i.e., $10 \times 10$ stations per $km^2$, would require the impractical number of $122 * 10 * 10 = 12,200$ sensing stations. Useful profiles can, however, be produced with far fewer sensing stations.
\subsubsection{Persons Using Public Transport}
There are $42 \; km$ of tram lines in Darmstadt servicing $75$ tram stops, depicted in Fig.~\ref{fig:rmvtraffic}. Out of these, $54$ stops are in the city area of Darmstadt. The tram tracks run in a star shape around Luisenplatz on approximately six distinct axes. By placing $3-4$ sensing stations along each of these axes, one would be able to observe all tram passengers and determine from which direction they approach the Darmstadt city center or in which direction they leave. This would require around $25$ sensing stations.
There are approximately $150$ bus stops in the city area of Darmstadt. Bus lines move along approximately $20$ major bus routes. The bus routes are laid out such that they feed traffic to one of the major traffic centers. By placing $3-4$ sensing stations on each of these major bus routes, one could monitor the movements of the majority of passengers approaching these traffic centers. This would require $60-80$ strategically-placed sensing stations.
There are $9$ railway stops in Darmstadt servicing commuters arriving from farther away, shown as the blue railway icon in Fig.~\ref{fig:rmvtraffic}.
Sensing stations needed to monitor movements of persons at railway stations will probably vary a lot, since the size of the stations is quite different; the main railway station is by far the largest.
Sensing all persons entering and leaving the main railway station would require about $10-20$ sensing stations. Other train stations would probably require fewer; $2-5$ stations each would be sufficient. Therefore, about $60$ sensing stations would be required to monitor persons entering or leaving Darmstadt by rail.
\begin{comment}
\begin{figure}[ht]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{gfx/darmstadtrmvmap}
\caption{RMV map showing train, tram, and bus lines, as well as bus stops and train stations}
\label{fig:rmvtraffic}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.5\linewidth]{gfx/darmstadttrafficmap}
\caption{OpenStreetMap showing main roads used by car and railways}
\label{fig:cartraffic}
\end{subfigure}
\caption{Maps of Darmstadt showing main transport routes}
\label{fig:trafficmaps}
\end{figure}
\end{comment}
\begin{figure}[t]
\begin{subfigure}[c]{0.49\columnwidth}
\centering
\includegraphics[height=1.2\columnwidth]{gfx/darmstadtrmvmap}
\subcaption{Train, tram, and bus lines}
\label{fig:rmvtraffic}
\end{subfigure}
\begin{subfigure}[c]{0.49\columnwidth}
\centering
\includegraphics[height=1.2\columnwidth]{gfx/darmstadttrafficmap}
\subcaption{Main roads (cars \& railways)}
\label{fig:cartraffic}
\end{subfigure}
\caption{Maps of Darmstadt showing main transport routes}
\label{fig:trafficmaps}
\end{figure}
\subsubsection{Persons Using Private Transport}
There are five major and seven minor streets that persons typically use to enter Darmstadt by car. The streets are ordered in a star shape around the city center, as shown in Fig.~\ref{fig:cartraffic}. In addition, there are $12$ other streets inside the city for cross-connections between the major streets and individual city blocks. All of these streets are regulated using traffic lights forcing car drivers to stop several times while crossing the city area. By placing $4-5$ sensing stations (per direction) along the major streets, it would be possible to monitor persons entering Darmstadt by car. A similar number of sensing stations would be necessary to cover movements inside the city on the major streets.
Thus, by using $200-250$ sensing stations, an adversary could have coarse-grained monitoring capability over all major street connections in Darmstadt.
There are a few places in Darmstadt that are heavily frequented and used by persons for changing connections of public transport.
These places include Luisenplatz, Schloß, Hauptbahnhof, Nordbahnhof, Willy-Brandt-Platz, and Eberstadt Wartehalle; the busiest place is Luisenplatz.
To have meaningful coverage over Luisenplatz, one would need to place $2-3$ sensing stations near all of the $6$ public transport stops on Luisenplatz, i.e., around 15 sensing stations.
Similarly, monitoring the area around Hauptbahnhof would require placement of $1-2$ sensing stations at the 10 public transport stops in its vicinity, requiring around $20$ sensing stations in total.
Each of the other aforementioned connection points would likely require around 5 sensing stations.
All in all, around $50$ sensing stations would be required.
\subsubsection{Summary}
\begin{table}[t!]
\caption{Estimate of how many sensing stations are needed to track persons using different kinds of transportation}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lr}
\textbf{Location} & \textbf{Sensing stations} \\ \hline
Trams & 25 \\
Buses & 60 - 80 \\
Railways & 60 \\
Cars & 200 - 250 \\
Pedestrians & 50 \\ \hline
\textbf{Total} & 395 - 465
\end{tabular}
\label{tab:trackingtransport}
\end{table}
To generate coarse-grained movement profiles of persons moving in and out of Darmstadt, an adversary would roughly require the sensing capabilities shown in Table~\ref{tab:trackingtransport}.
This shows that by strategically placing a fairly modest ($<500$) number of sensing stations, an adversary would be able to monitor a large fraction of all traffic inside a major city of about $160,000$ inhabitants.
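The totals in Table~\ref{tab:trackingtransport} are simple range sums over the per-mode estimates; as a sanity check (our own arithmetic, mirroring the table):

```python
# (low, high) sensing-station estimates per transport mode, as in the table
estimates = {
    "trams": (25, 25),
    "buses": (60, 80),
    "railways": (60, 60),
    "cars": (200, 250),
    "pedestrians": (50, 50),
}
low = sum(lo for lo, _ in estimates.values())
high = sum(hi for _, hi in estimates.values())
# yields the table's total range of 395 - 465 stations
```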
The impact of the attack on a population level primarily depends on two factors: (i) the prevalence of the disease in the targeted area in the form of new daily infections, and (ii) the number of persons uploading their infection status via a contact tracing app.
Consequently, in untargeted attacks, the adversary will focus the attack on areas where daily infection rates are high to ensure maximal impact, assuming that many people in these areas will upload their infection status via a contact tracing app.
For targeted attacks, in which the adversary targets a particular organization such as a business corporation, the population-level incidence plays a lesser role, since the adversary's target setting is different: the focus is on obtaining detailed reconnaissance about target persons within the organization, so the overall number of persons affected by the attack is of secondary importance.
\subsection{Security Attack: Infection Alarms Going Viral}
\label{sec:relay-attack}
A wormhole attack is a particular type of relay attack.
Here, an attacker records messages at one physical location of a network, forwards them through a network tunnel to another physical location, and re-transmits them there as if they had been sent at this location in the first place~\cite{hu2006wormhole}.
\subsubsection{Goal and System Setup}
The goal of the attack is to combine two or more physical locations into one large logical location.
If highly frequented physical locations, such as train stations, shopping malls etc.,
are logically linked to each other,
an infected person can (falsely) be observed to have been in contact with other persons at remote locations.
This can lead to a significant number of false alarms for people who might not have been in contact with anyone who is infected.
Furthermore, people might refuse to use the app, since the false positive rate seems to be too high, i.e., not helpful in contact tracing, but rather causing unnecessary work and confusion.
Fig.~\ref{fig:covid_wormhole} shows the basic setting to perform a wormhole attack.
The attacker uses (at least) two BLE enabled wormhole devices in different locations, each of them in physical proximity to mobile devices of potential victims, and records the BLE messages broadcast by the users' mobile devices at each location.
The recorded BLE messages are then transferred, e.g., via an Internet link, in both directions between the wormhole devices of the attacker and are then re-broadcast to all mobile devices of users in the near vicinity of both locations of the wormhole devices.
Thus, these BLE messages seem to come from a local 1-hop neighbor.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{gfx/Covid_Wormhole.pdf}
\caption{Wormhole attack setup to relay BLE beacons}
\label{fig:covid_wormhole}
\end{figure}
GAP RPIs are only valid for a limited time interval.
An attacker
performing a wormhole attack
can add an expiration date to each received BLE beacon message before transferring it through the Internet link to other wormhole devices.
These wormhole devices can then re-broadcast the same BLE message over and over again until the expiration date is met.
While the used bandwidth of the Internet wormhole is kept to a minimum, the efficiency of the wormhole devices is maximized, since a single BLE message from a wormhole device can be resent multiple times to victims' devices for as long as the BLE beacon is valid.
While RPIs are valid
by design
for about 10 minutes, the specification grants a +/- two-hour tolerance time window between when RPIs should have been broadcast and when they are actually observed. This means that an attacker can replay each captured RPI for at least two hours in order to generate potential false positives.
Since BLE beacons are relatively small in size and the Internet provides fast communication, relaying is achieved in a matter of milliseconds.
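The relay-and-expire logic described above can be sketched in a few lines (a minimal in-memory stand-in for illustration; an actual wormhole device additionally needs BLE hardware and an Internet tunnel, and the class and method names are ours):

```python
from collections import deque

class WormholeRelay:
    """Minimal stand-in for one wormhole device: queues captured BLE
    payloads with an expiration date and rebroadcasts them until expiry."""

    def __init__(self, ttl_seconds=7200):  # ~2 h GAP tolerance window
        self.ttl = ttl_seconds
        self.queue = deque()               # entries: (payload, expires_at)

    def capture(self, payload, now):
        # Tag the captured payload with an expiration date before it is
        # forwarded through the tunnel (represented here by the queue).
        self.queue.append((payload, now + self.ttl))

    def rebroadcast(self, now):
        # Drop expired beacons, then re-send all remaining ones.
        self.queue = deque((p, exp) for p, exp in self.queue if exp > now)
        return [p for p, _ in self.queue]
```

A single captured beacon is thus re-sent on every rebroadcast cycle until its expiration date is met, which keeps the tunnel bandwidth minimal.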
\subsubsection{Experimental Results}
The GAP API may only be used by developers or testers authorized by governmental health institutions, since special permission from either Google or Apple is required to use their GAP (\textit{Exposure Notification}) API.
To validate that a wormhole attack was successful, we need to verify that the BLE beacons sent by a smartphone A to a wormhole device and transmitted over the network to another wormhole device are finally accepted by another smartphone B.
\paragraph{\emph{\textit{Experiment 1: DP-3T}}}
Since GAP is heavily inspired by the approach of the DP-3T group,
DP-3T can be considered a substitute for GAP to demonstrate the success of a wormhole attack.
The original DP-3T implementation (later named \textit{prestandard} DP-3T\footnote{\url{https://github.com/DP-3T/dp3t-sdk-ios/releases/tag/prestandard}}) was available before the GAP API made it into Android and iOS.
In this \textit{prestandard} version, DP-3T generates TEKs as well as handles and stores RPIs as part of the app.
The operating system (iOS or Android) is in charge of handling the lower-level BLE communication, and the app provides the payloads required for BLE communication.
We built a multi-location wormhole for forwarding and rebroadcasting BLE messages based on off-the-shelf Raspberry Pis with integrated BLE and an Internet uplink, using our PIMOD tool \cite{hoechst2020pimod}.
These Raspberry Pis were connected using a central MQTT server as a back-end system for distributing the received BLE messages between each wormhole device.
One of these Raspberry Pis functioned as a mobile node by using a battery pack and a mobile phone network uplink.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{gfx/Bidirectional_Covid_Wormhole_Marburg.png}
\caption{Wormhole attack in the city of Marburg}
\label{fig:covid_wormhole_marburg}
\end{figure}
Our evaluation wormhole connects several physical locations in the cities of Marburg, Gießen, and Darmstadt, Germany.
Part of this setup within the city of Marburg is shown in Fig.~\ref{fig:covid_wormhole_marburg}.
Each Raspberry Pi receives and records all BLE messages sent by surrounding users' iOS and Android smartphones and sends them to the MQTT wormhole server.
All other connected wormhole Raspberry Pis receive a copy of these BLE beacons from the MQTT wormhole server and rebroadcast them using their integrated BLE hardware.
Each wormhole device is sender and receiver at the same time.
Thus, this setup works in a bi- or multi-directional fashion.
In our tests, we used several iOS and Android smartphones with
the corresponding implementation of
the DP-3T prestandard \textit{SampleApp} at multiple physical locations in the three German cities mentioned above.
Our tests showed that the DP-3T prestandard \textit{SampleApp} is vulnerable to our wormhole attack.
For example, we successfully established a logical contact between smartphones that were 40 kilometers apart in two cities without a real-world contact between the users of these smartphones.
The logical contact between the smartphones was generated without any actions by the users required on their smartphones and without any physical interaction between the two individuals.
\begin{lstlisting}[
style=log,
caption={Raspberry Pi with our wormhole implementation},
label={lst:wormpi},
float=h,
captionpos=b,
]
Jun 09 20:45:13 wormpi-mr wormhole[472]: [provider ] [INFO] [in ] [7E:09:47:A6:EE:7F] [Dp3t_ScanRequest] fd68
Jun 09 20:45:13 wormpi-mr wormhole[472]: [wormhole-out] [INFO] [7E:09:47:A6:EE:7F] [Dp3t_ScanRequest] fd68
Jun 09 20:45:13 wormpi-mr wormhole[472]: [wormhole-in ] [INFO] [5A:A2:81:40:7A:B3] [Dp3t_ScanResponse] fd68 6d:72:34:32:30:80:1d:62:d7:c9:ff:d0:71:a3:37:b0
Jun 09 20:45:13 wormpi-mr wormhole[472]: [provider ] [INFO] [out] [5A:A2:81:40:7A:B3] [Dp3t_ScanResponse] fd68 6d:72:34:32:30:80:1d:62:d7:c9:ff:d0:71:a3:37:b0
\end{lstlisting}
In Listing~\ref{lst:wormpi}, an excerpt of the log of a running wormhole device, here called \textit{wormpi-mr}, is shown.
The software on the wormhole device consists of a BLE controller and a beacon distribution task, called "provider".
The DP-3T prestandard \textit{SampleApp} used the BLE UUID \texttt{fd68}, which is correctly identified as \texttt{Dp3t\_ScanRequest} in the case of an empty payload and \texttt{Dp3t\_ScanResponse} when the RPI is included.
In Lines 1-2, another wormhole device has submitted a \texttt{ScanRequest} beacon to the provider, indicated by "in", that is then broadcast by \textit{wormpi-mr's} BLE controller ("wormhole-out").
In Lines 3-4, a response is received by \textit{wormpi-mr's} BLE controller as "wormhole-in", which is then sent over the provider ("out") to all other wormhole devices.
\begin{figure}[htb]
\begin{subfigure}[c]{0.49\columnwidth}
\centering
\includegraphics[height=1.75\columnwidth]{gfx/21-46-android-ble-handshake.png}
\subcaption{Android phone in Marburg}
\label{fig:sampleapp-screenshots:android}
\end{subfigure}
\begin{subfigure}[c]{0.49\columnwidth}
\centering
\includegraphics[height=1.75\columnwidth]{gfx/21-46-ios-recv.png}
\subcaption{iOS phone in Gie\ss{}en}
\label{fig:sampleapp-screenshots:ios}
\end{subfigure}
\caption{DP-3T prestandard \textit{SampleApp} instances with confirmed beacons transmitted through the wormhole "wormpi"}
\label{fig:sampleapp-screenshots}
\end{figure}
In Fig.~\ref{fig:sampleapp-screenshots}, two screenshots of running DP-3T prestandard \textit{SampleApp} instances on Android (Fig. \ref{fig:sampleapp-screenshots:android}) and iOS (Fig. \ref{fig:sampleapp-screenshots:ios}) are shown.
The experiments indicate that the execution of a wormhole attack was successful.
The Android implementation on a smartphone located in Marburg (Fig.~\ref{fig:sampleapp-screenshots:android})
displays a handshake with the MAC address of the wormhole device (indicated by the rectangles in red), which in this experiment is the hardware MAC address of the used Raspberry Pi (abbreviated due to privacy reasons).
The iOS implementation on a smartphone located in Gie\ss{}en (Fig.~\ref{fig:sampleapp-screenshots:ios}) is less verbose, but also confirms receiving a beacon with the manually set ephemeral ID of "mr42" (i.e., the smartphone in Marburg; indicated by the rectangle in red), even though the smartphone is not in physical proximity of another smartphone running the DP-3T prestandard \textit{SampleApp}.
\begin{lstlisting}[
style=log,
keywordstyle=\color{black},
caption={Exposure notification confirming a received RPI},
label={lst:logcatWrite},
float=h,
captionpos=b,
]
I/ExposureNotification: Scan device 6B:12:D2:1B:13:B5, type=1, id=31680EBB671454E1D7B03B2E96B98328, raw_rssi=-79, calibrated_rssi=-77, meta=919BAEA1, minutes_since_last_scan=1594815319 [CONTEXT service_id=236 ]
I/ExposureNotification: BleDatabaseWriter.writeBleSighting, id=31680EBB671454E1D7B03B2E96B98328 [CONTEXT service_id=236 ]
\end{lstlisting}
\paragraph{\emph{\textit{Experiment 2: German Corona-Warn-App}}}
We also validated our results using the Android version of the official German \textit{Corona-Warn-App} released on June 16, 2020.
As shown in Listing \ref{lst:logcatWrite}, the GAP of the \textit{Corona-Warn-App} stores RPIs transmitted using the wormhole.
Since we could not get permission to access the GAP API,
we used a TEK from the official server of the \emph{Corona-Warn-App}, derived multiple RPIs and injected these into the wormhole. Since the derived RPIs do not contain the \textit{Associated Encrypted Metadata} (AEM) that would normally be broadcast with an RPI, we had to derive the AEM, too.
\begin{comment}
\begin{equation}\label{rpik_i_calculation}
RPIK_{i}\leftarrow HKDF(tek_{i},NULL,UTF8(\textrm{"EN-RPIK"}),16)
\end{equation}
\begin{equation}\label{rpi_j_calculation}
RPI_{i,j} \leftarrow AES_{128}(RPIK_{i},\textrm{PaddedData}_{j})
\end{equation}
\begin{center}
with \textit{PaddedData} containing constant data and the $ENIntervalNumber(j)$
\end{center}
\begin{equation}\label{aemk_i_calculation}
AEMK_{i}\leftarrow HKDF(tek_{i},NULL,UTF8(\textrm{"EN-AEMK"}),16)
\end{equation}
\begin{equation}\label{aem_i_j_calculation}
\textrm{Associated Encrypted Metadata}_{i,j} \leftarrow AES_{128}-CTR(AEMK_{i},RPI_{i,j},\textrm{Metadata})
\end{equation}
\end{comment}
To validate that our RPI derivation is correct,
we used Frida\footnote{\url{https://frida.re/}} on a rooted Google Pixel 3 smartphone to extract all TEKs stored on this device. Using a known TEK together with numerous RPIs and their corresponding time slots, we could validate that our RPI derivation works correctly. Additionally, this approach allowed us to decrypt the AEM for the RPIs. This non-encrypted metadata was then used to generate valid AEM for the derived RPI.
These RPIs are only valid for the time of the initial creation, therefore we had to change the system time of the receiving device. Otherwise, the device would not be able to match the keys against the uploaded TEK. Changing the system time is only necessary for our validation purposes.
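The key-schedule part of this derivation can be sketched as follows, based on the published Exposure Notification cryptography scheme (the final AES-128 encryption steps for RPI and AEM require an external crypto library and are only indicated in comments; the function names are ours):

```python
import hashlib
import hmac
import struct

def hkdf_sha256(ikm, info, length=16):
    # RFC 5869 HKDF with a NULL salt, as used by the GAP key schedule;
    # a single expand round suffices for length <= 32.
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def rpik(tek):
    return hkdf_sha256(tek, b"EN-RPIK")  # Rolling Proximity Identifier Key

def aemk(tek):
    return hkdf_sha256(tek, b"EN-AEMK")  # Associated Encrypted Metadata Key

def padded_data(en_interval_number):
    # "EN-RPI" (6 bytes) || 6 zero bytes || ENIN as uint32 little-endian
    return b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", en_interval_number)

# RPI = AES-128(RPIK, PaddedData) and AEM = AES-128-CTR(AEMK, RPI, Metadata)
# require an AES implementation (e.g., pycryptodome) and are omitted here.
```

Deriving one RPI per 10-minute interval from a known TEK in this way reproduces the identifiers a phone would broadcast during that TEK's validity period.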
\begin{comment}
\begin{lstlisting}[
style=log,
keywordstyle=\color{black},
caption={Pair of known TEK and RPI},
label={lst:extracted},
float=h,
captionpos=b,
]
TemporaryExposureKey<keyData: ffaa8692f97228de2307aa112325041d, rollingStartIntervalNumber: Wed Jul 08 02:00:00 GMT+02:00 2020, transmissionRiskLevel: 0, rollingPeriod: 144>
---
2020-07-08 16:45:32.767 2489-2489/? I/ExposureNotification: getCurrentRollingProximityId: current/latestEnIntervalNumber=2657032/0 [CONTEXT service_id=236 ]
2020-07-08 16:45:32.849 2489-2489/? I/ExposureNotification: getCurrentRollingProximityId: generated a new RollingProximityId=8A7867EDF2B4F241C405E21C80F0247F [CONTEXT service_id=236 ]
\end{lstlisting}
\end{comment}
\begin{lstlisting}[
style=log,
keywordstyle=\color{black},
caption={Automated generation of valid RPIs},
label={lst:rpigeneration},
float=h,
captionpos=b,
]
D/BackendManager: [ TEK: fd3df1b125a21a28f1d7746fd5a46538 ] encrypted Metadata for 9386bead6a0212d6205c665db64ccfe4 = a4e4489c @ Time/Date(Tue Jul 07 00:00:00 GMT+02:00 2020 | 2656788) - Full BLE-Payload: 93:86:be:ad:6a:02:12:d6:20:5c:66:5d:b6:4c:cf:e4:a4:e4:48:9c
D/BackendManager: [ TEK: fd3df1b125a21a28f1d7746fd5a46538 ] encrypted Metadata for 3b65333a5383d8c4d6344672a14963de = 3d167031 @ Time/Date(Tue Jul 07 00:10:00 GMT+02:00 2020 | 2656789) - Full BLE-Payload: 3b:65:33:3a:53:83:d8:c4:d6:34:46:72:a1:49:63:de:3d:16:70:31
\end{lstlisting}
The payload for an exposure notification (e.g., see Listing \ref{lst:rpigeneration}) was then injected into the wormhole. On the receiving side, the official \textit{Corona-Warn-App} was installed on a device, and its system time was set to the corresponding interval of the RPI. During the approximately 15 minutes this experiment took, the GAP API received and stored exposures several times (similar to Listing \ref{lst:logcatWrite}).
Afterwards, the receiving device was set back to the correct time and communication with the server containing the TEKs of known infected app users was re-enabled.
Since the \textit{Corona-Warn-App} does not submit the TEK for the same day to the GAP API twice and the list of keys is signed, we had to ensure that the app could not receive the TEKs for the specific date before the experiment finished.
Since the app checks for an existing Internet connection,
we used a proxy to block requests to the server during our test. In this way, the test for an existing Internet connection succeeded, but the app could not retrieve TEKs.
\begin{figure}[htb]
\centering
\includegraphics[height=0.98\columnwidth]{gfx/CWA_wormhole_exposure.png}
\caption{Official German \textit{Corona-Warn-App} with a positive exposure transmitted through the wormhole}
\label{fig:cwa-screenshots}
\end{figure}
As shown in Figure \ref{fig:cwa-screenshots}, the \textit{Corona-Warn-App} reports a single exposure. The low risk level shown is due to the low \textit{Transmission Risk Level} of the chosen TEK and the metadata of the transmitted RPI. Although the transmitted RPI should not have been valid for regular devices at the time of the broadcast, these values were chosen on purpose to reduce the impact for people who might have been present in the surrounding area.
It should be noted that the described steps are only necessary to validate that the \textit{Corona-Warn-App} is indeed vulnerable to our wormhole attack. The attack itself does not require any modification of software running on the device.
\subsection{Technical Limitations}
\label{subsecteclimit}
The GAP distributes beacons using the newer BLE standard~\cite{BluetoothCore52},
allowing physical transmission speeds of up to 1 Mbps.
To allow more robust transmissions, the physical layer also offers representations with 2 and 8 symbols, resulting in transmission speeds of 500 kbps and 125 kbps, respectively, which are not discussed here for the sake of simplicity.
The payload of the GAP Exposure Notification service has a combined size of 26 bytes \cite{AppleExposureBluetoothSpecification}.
A GAP beacon with a size of 26 bytes is sent via an undirected advertising event, resulting in an advertisement size of 39 bytes and a packet data unit size of 47 bytes~\cite{BluetoothCore52}.
With 1 Mbps, a single advertisement with a size of 47 bytes (= 376 bits) results in an on-air time of $376 \mu{}s$.
In addition, an inter-frame space of $150 \mu{}s$ is required after each advertisement.
Hence, a theoretical maximum rate of $10^6\,\mu{}s \,/\, \frac{376\,\mu{}s + 150\,\mu{}s}{\textrm{packet}} \approx 1{,}901\;\textrm{packets/s}$ can be sent using BLE 4.0 advertisements according to the GAP specification.
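This timing arithmetic can be verified directly (values taken from the text above):

```python
PDU_BYTES = 47  # advertising packet data unit carrying the GAP beacon
IFS_US = 150    # mandatory inter-frame space after each advertisement

on_air_us = PDU_BYTES * 8                # at 1 Mbps, 1 bit = 1 us -> 376 us
max_rate = 10**6 / (on_air_us + IFS_US)  # packets per second
print(int(max_rate))                     # -> 1901
```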
In a real world setting, there are several factors that significantly reduce the theoretical maximum rate, such as:
\begin{itemize}
\item BLE advertisements are sent using three BLE channels; receivers need to hop between these channels;
\item connection intervals forced by the device vendor;
\item distance between receiver and sender; BLE has a transmission power of 10 mW (i.e., a distance of up to 40 meters), and Class 1 BLE devices have up to 100 mW (i.e., a distance of up to 100 meters);
\item interferences and collisions.
\end{itemize}
To evaluate the impact of these factors, we set up a test environment consisting of: HackRF (sender, repeater), Raspberry Pi (receiver), Eve PowerPlug (BLE interference, distance about 2 meters), Ubiquiti AP nanoHD (WiFi interference on 2.4 GHz, distance less than 2 meters, 100\% load and transmission power). In this test environment, we received only about 4.3\% of the theoretical maximum of BLE advertisements per second (i.e., 82 BLE advertisements/s) using a consumer grade BLE receiver with factory default settings while sending on a single BLE advertisement channel. We also discovered that most of our tested devices submit BLE advertisement packets only once every two seconds. Furthermore, most BLE devices only accept packets for a short period of time every few minutes.
In an indoor test with an active interfering WiFi hotspot and several interfering BLE devices, the maximum distance was reduced to below 10 meters with direct line of sight between sender and receiver. However, using a signal repeater, we were able to increase the distance up to 50 meters.
\subsection{Attack Scenario: Opportunistic Linking}
\label{subsecoplink}
To increase the probability of a successful wormhole attack, an attacker can
\begin{enumerate}
\item increase the number of collected BLE advertisements by selecting a highly frequented area with a high acceptance rate of the particular contact tracing app, and by increasing the number of deployed wormhole devices;
\item increase the probability that one of the relayed RPIs belongs to a person who will be tested positive
for COVID-19
and uploads his or her TEKs by selecting an area with a high probability of infected persons.
\end{enumerate}
\subsubsection{Selecting a highly frequented area for RPI collection} We conducted experiments at the central train station in Frankfurt (Main), Germany, on the 1st of August 2020.
We moved around the train station with a OnePlus 7 Pro smartphone while changing trains, and waited in the main hall while collecting GAP BLE advertisements using the RaMBLE\footnote{\url{https://play.google.com/store/apps/details?id=com.contextis.android.BLEScanner}} app.
Since people moved around at the distinct locations where we collected GAP BLE advertisements, we argue that the number of unique RPIs is roughly equal to the number of distinct users.
During the first run, 549 unique RPIs were collected in 00:25:49 h, i.e., 21.26 collected BLE advertisements per minute.
During the second run, 142 unique RPIs were collected in 00:04:40 h, i.e., 30.43 collected BLE advertisements per minute.
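These per-minute collection rates follow directly from the raw counts and durations:

```python
def rate_per_minute(unique_rpis, hours, minutes, seconds):
    # Unique RPIs divided by the collection duration in minutes.
    return unique_rpis / (hours * 60 + minutes + seconds / 60)

run1 = rate_per_minute(549, 0, 25, 49)  # first run:  ~21.3 RPIs/min
run2 = rate_per_minute(142, 0, 4, 40)   # second run: ~30.4 RPIs/min
```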
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=.75\columnwidth]{gfx/ffm-beacon-collection.pdf}
\caption{Number of unique RPIs collected at Frankfurt central train station.}
\label
{fig:ffm-beacon-collection}
\end{figure}
\end{comment}
In less populated environments, (a) inside an isolated examination room of the pulmology ward of the University Hospital of Heidelberg (only open to emergency patients and medical staff)
and (b) while driving in a car on the German highway A5 from Gie\ss{}en to Mannheim, we collected 95 (300) unique RPIs in 01:32:00 h (01:51:00 h) (i.e., 1.02 (2.70) BLE advertisements per minute).
To estimate the impact of our wormhole attack, we assume that there are
(i) 5.1 reported infections among 100,000 persons per week (i.e., the average value for Germany published on August 1, 2020) and
(ii) 30.43 collected unique BLE advertisements from distinct smartphones per minute by each wormhole receiver (according to our tests in the Frankfurt central train station).
First, we address the question of how many RPIs would be required to receive an average of one positive RPI.
Since infected users upload their TEKs of the last 14 days, the doubled weekly incidence value is a suitable estimator.
Hence, with assumption (i) one out of $1 / (5.1 / 100,000 / 7 * 14) \approx 9,804$ received RPIs will be positive.
The average validity period of a received RPI can be estimated by halving the general validity period of 10 minutes, since some RPIs will be received just after creation, while others will have almost expired.
To get access to one valid RPI at any given point in time, $9,804$ RPIs $/$ $30.43$ RPIs per minute per device $/$ $5$ minutes $\approx$ $65$ wormhole devices would be required.
By changing assumption (i) to 45.4 reported infections among 100,000 persons per week (i.e., the average value for Germany published for week 42 of 2020), the corresponding number of required wormhole devices at distinct locations drops down to about $1 / (45.4 / 100,000 / 7 * 14) / 30.43 / 5 \approx 8$.
The relatively low numbers of wormhole devices required for an attacker suggests that the attack can be carried out without much effort.
However, in the case of the German \emph{Corona-Warn-App}, the calculated risk score is set to zero if the encounter period with an infected person is shorter than 10 minutes.
Hence, a successful attack would require an attacker to observe potentially infected people for a period of at least 10 minutes.
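The required number of wormhole devices as a function of the weekly incidence and the collection rate can be computed as follows (function and parameter names are ours, for illustration):

```python
import math

def wormhole_devices(weekly_incidence_per_100k, rpis_per_minute,
                     mean_validity_min=5):
    # Infected users upload the TEKs of the last 14 days, so the doubled
    # weekly incidence estimates the fraction of positive RPIs.
    positive_fraction = weekly_incidence_per_100k / 100_000 / 7 * 14
    rpis_per_positive = 1 / positive_fraction
    return rpis_per_positive / rpis_per_minute / mean_validity_min

print(math.ceil(wormhole_devices(5.1, 30.43)))   # -> 65 (Germany, Aug 2020)
print(math.ceil(wormhole_devices(45.4, 30.43)))  # -> 8  (Germany, week 42)
```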
\subsubsection{Selecting an area with a high probability of infected persons}
We assume that a location with a high probability of infected persons can be used by the attacker, e.g., the COVID-19 Testing Center near Frankfurt (Main), Germany, performing a current maximum of 300 tests per hour.
Let us further assume that 3.62\% of the tests are positive (i.e., valid for Germany in week 42 of 2020),
and 9.84\% of the infected persons share their infection status (based on the calculations of submitted TEKs in relation to the overall reported infections in Germany (6,473 of 65,410 in weeks 41 and 42 of 2020)).
Based on these assumptions, an attacker would be able to observe an average of 10.86 infected people per hour,
of which 1.07 upload their infection status through the app.
To generate a high risk warning, an attacker would select a test center where an individual can be observed for more than 10 minutes.
In the middle of October 2020, Germany was a relatively low risk country, but in other countries with a higher test-positive rate, these calculations turn out differently.
For example, in Mexico in the middle of October 2020, 41.0\% of the tests were positive\footnote{\url{https://ourworldindata.org/coronavirus/country/mexico?country=~MEX} (accessed October 14, 2020)}.
Using this rate (hypothetically) in the calculation for the test center in Germany, 123 infected persons could be observed per hour, of which 12.10 persons would upload their infection status, i.e.,
a positive RPI roughly every 5 minutes is obtained.
Since each RPI remains valid for a period of 120 minutes, an attacker can repeat RPIs even when the infected person is not within reach of the wormhole anymore.
If an attacked smartphone receives these RPIs over 10 minutes, the GAP would register
$1.07 / 60 * 120 = 2.14$ infected persons in Germany and $12.10 / 60 * 120 = 24.20$
infected persons in Mexico in close proximity and would probably trigger high risk warnings.
Apparently, the impact of the attack can be limited by shrinking the 2-hour validity period of RPIs.
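The estimates in this subsection follow from a simple calculation (assumed inputs as stated above; the 2-hour replay window corresponds to the RPI validity tolerance, and the function name is ours):

```python
def exposure_estimate(tests_per_hour, positive_rate, upload_share,
                      replay_hours=2):
    infected = tests_per_hour * positive_rate  # infected persons per hour
    uploaders = infected * upload_share        # of which upload their TEKs
    registered = uploaders * replay_hours      # encounters registered by GAP
    return infected, uploaders, registered

# Germany, week 42 of 2020: ~10.86 infected/h, ~1.07 uploaders/h, ~2.14 registered
germany = exposure_estimate(300, 0.0362, 0.0984)
# Hypothetical Mexican positive rate: ~123 infected/h, ~12.10, ~24.20
mexico = exposure_estimate(300, 0.41, 0.0984)
```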
\subsection{Attack Scenario: Targeted Attack}
In this scenario, an attacker has the possibility to submit his or her own TEKs at will. Depending on the local implementation of the GAP, this can require the acquisition of a valid TAN for uploading TEKs to the central governmental back-end server. The wormhole will solely be used as a publishing device.
We will focus on the German \textit{Corona-Warn-App} as an example, for which a TAN can be obtained in different ways.
Our team had contact with a supplier of TANs in a dark web underground community; we therefore assume that there is a ``market'' for TAN keys in exchange for money.
It is also possible to request a valid TAN by uploading a (forged) diagnostic report directly in the \textit{Corona-Warn-App} itself, which will be issued after a manual check by the hotline phone support team of the app.
Based on our collected real world data (see Section \ref{subsecoplink}), we can broadcast a positive BLE advertisement to about 30.43 mobile devices per minute $\times$ 60 minutes $\approx$ 1,825 mobile user devices per hour per wormhole device.
Currently, the last 14 days of exposure are considered for a warning by the GAP.
In this case, a single wormhole device would be able to submit $1,825 * 14 * 12 = 306,600$ registered, positive RPIs to other mobile user devices during daytime (12 hours) for 14 days.
In the case of the German \emph{Corona-Warn-App}, RPIs over a period of 10 minutes are required to trigger a high risk warning.
Thus, only the subset of people successfully attacked for more than 10 minutes, at close range, and during the days of high infectiousness will get a high risk warning.
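The reach estimate for a single wormhole device reduces to the following arithmetic:

```python
# Values from our measurements: 30.43 devices reached per minute,
# 12 daytime hours per day, and a 14-day exposure window.
devices_per_hour = int(30.43 * 60)           # ~1,825 devices per hour
rpis_two_weeks = devices_per_hour * 12 * 14  # -> 306,600
```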
\section{Introduction}
\input{1_intro}
\section{GAP Overview}
\input{2_gap}
\section{Mind the Privacy GAP}
\input{3_profiling_attack}
\section{Mind the Security GAP}
\input{4_relay_attack}
\section{Conclusion}
\input{5_conclusion}
\section*{Acknowledgment}
This research work has been funded by the Deutsche Forschungsgemeinschaft (DFG) – SFB 1119 – 236615297.
\section{Introduction}
Galactic and cosmological observations indicate that if gravitational laws are dictated by general relativity, a large fraction of
the nonrelativistic matter in our Universe is in the form of particles having negligible interaction with electromagnetism, baryons and themselves,
and with negligible initial velocity dispersion.
The existence of these particles has been demonstrated through their gravitational effects on the largest (galactic to cosmological) scales of the Universe.
Collectively, they are successfully modeled as cold dark matter (CDM), a crucial component of the \ensuremath{\Lambda}CDM~ concordance model.
Although a plethora of concrete DM models have been proposed \cite{Bertone:2004pz}, dedicated direct and indirect astrophysical searches have yielded
no convincing evidence for a DM particle so far.
The strongest exclusion limits in the mass vs cross-section plane
using direct detection through nuclear recoil come from the Xenon1T experiment~\cite{Aprile:2019jmx,Aprile:2019xxb,Aprile:2018dbl,Aprile:2019dbj}.
Meanwhile, possible signals of DM annihilation resulting in the positron excess detected by the Alpha Magnetic Spectrometer (AMS) instrument~\cite{AMS02-positron_excess}
are in conflict with the Planck collaboration \cite{Aghanim:2018eyx} observations of the Cosmic Microwave Background (CMB) anisotropies,
the latter being sensitive to energy injection in the intergalactic medium through such annihilations. Indeed, the AMS positron excess may be
explained by conventional astrophysical mechanisms~\cite{Ahlers:2009ae,Mertsch:2014poa}.
This lack of nongravitational evidence necessitates further testing of the CDM paradigm.
Taking a more agnostic approach with this in mind,
we test possible departures from CDM using the phenomenologically motivated generalized dark matter (GDM) model~\cite{Hu1998a}.
GDM compactly parametrizes the DM properties encapsulated by pressure and viscosity using three parametric functions: the background equation
of state (EoS) $w(a)$ of DM, sound speed $c^2_s(a,k)$ and the viscosity $\ensuremath{c^2_{\rm vis}}(a,k)$, where $a$ is the scale factor and $k$ the wave number of the linearized
GDM fluid fluctuations.
In \cite{Hu1998a} it was shown that the expansion history and consequently the CMB anisotropies angular power spectrum is particularly sensitive to these parameters.
Moreover, when $w$ is a constant, \cite{Hu1998a} uncovers a degeneracy between $w$ and $\ensuremath{\omega^{(0)}_g}$, the dimensionless DM density today.
An extensive investigation of the model was presented in \cite{KoppSkordisThomas2016} where its possible connection to more fundamental theories was established,
particularly to $K$-essence scalar fields, a rich internally coupled dark sector (e.g. dark matter coupled to dark radiation), thermodynamics and effective field theories.
Furthermore, \cite{KoppSkordisThomas2016} analysed
an exact solution of the perturbed Einstein equations in a flat GDM-dominated universe uncovering a degeneracy between
a constant sound speed and constant viscosity. Specifically, the effective perturbative parameter relevant for the CMB is $c_s^2 + \frac{8}{15} \ensuremath{c^2_{\rm vis}}$
and in order to break this degeneracy, different types of observations are necessary.
Constraints on constant GDM parameters were placed previously by \cite{Muller2005, CalabreseMigliaccioPaganoEtal2009, KumarXu2012, XuChang2013}
using a variety of datasets. The latest
constraints on constant GDM parameters were reported in \cite{ThomasKoppSkordis2016} and \cite{KunzNesserisSawicki2016} using CMB data from the Planck satellite setting
a limit on constant $|w| \lesssim 10^{-3}$ and $c_s^2, \ensuremath{c^2_{\rm vis}} \lesssim 10^{-6}$. Significant improvements on the perturbative
parameters $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ were obtained in \cite{KunzNesserisSawicki2016} and \cite{ThomasKoppMarkovic2019} through the
inclusion of the late-time clustering data. Using late-time clustering data, however, is prone to introducing systematic modeling errors
due to the nonlinearities inherent in the processing of these datasets. Thus, to test that the improvement in $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ is robust,
\cite{ThomasKoppMarkovic2019} designed a nonlinear extension of the GDM model based on the ``warm and fuzzy'' dark matter halo model, which incorporates certain nonlinear phenomena.
Joint constraints on the sum of neutrino masses and constant GDM parameters were obtained in \cite{KumarNunesYadav2019, ThomasKoppMarkovic2019}.
A time-varying equation of state $w$ was considered in \cite{KoppSkordisThomasEtal2018} by piecewise parametrizing $w(a)$ in $8$ redshift bins,
while both $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ were assumed to be zero. There, the most general time-evolution of the DM equation of state was tested,
yet no evidence for DM properties beyond CDM was found.
Interestingly, while the data allow $w$ to be appreciably larger than zero in the late universe,
between matter-radiation equality and CMB recombination $|w|$ is $\lesssim 10^{-3}$ and thus DM must behave very closely to CDM during that time~\cite{KoppSkordisThomasEtal2018}.
Although a wealth of more constraining data exists, by sticking to observables
pertaining to linear perturbations and the Friedmann-Lema\^itre-Robertson-Walker (FLRW) background, one reduces systematic uncertainties and modeling errors
(on the nonlinear scales) to a minimum. This ensures that any potential detection of nonzero GDM parameters can be convincingly interpreted as a detection
of DM properties. We refer however to \cite{KunzNesserisSawicki2016, TutusausLamineBlanchard2018, ThomasKoppMarkovic2019}
for potential applications to nonlinear scales.
In this article, we present the most exhaustive parameter search to date, allowing all three GDM parametric functions $w(a)$, $c^2_s(a)$ and $\ensuremath{c^2_{\rm vis}}(a)$
to have a sufficiently general time dependence. This time dependence is modeled by binning $w(a)$ in 8 and
$c^2_s$ and $\ensuremath{c^2_{\rm vis}}$ in 9 scale factor bins, totalling 26 new parameters beyond CDM. We use the same datasets as our previous study which had a time-dependent $w(a)$ but
zero $c^2_s(a)$ and $\ensuremath{c^2_{\rm vis}}(a)$~\cite{KoppSkordisThomasEtal2018}; this allows us to perform a detailed comparison of the effects of the new enlarged parameter space
corresponding to $c^2_s(a)$ and $\ensuremath{c^2_{\rm vis}}(a)$ with respect to \cite{KoppSkordisThomasEtal2018}.
The structure of the article is as follows. We give a brief summary of the GDM model, describe our binning strategy
and present the various models and submodels that we study in Sec.~\ref{sec:model}. In Sec.~\ref{sec:methods} we present our methodology, including numerical solutions,
the datasets and sampling method used which allowed exploration of the very high-dimensional parameter space and a discussion of our choice of priors.
Our results are presented in Sec.~\ref{sec:results}, specifically constraints on the DM EoS and abundance, constraints on the sound speed and viscosity, degeneracies
and a special submodel where all three functions are set to be equal. We discuss the physical aspects and implications of our results in Sec.~\ref{sec:discussion}, particularly,
the tight constraint of the GDM comoving density perturbation in the early universe
and how some GDM models may alleviate the Hubble tension.
We conclude in Sec.~\ref{sec:conclusion}.
The reader may find useful the three appendices. In Appendix~\ref{app:sigma8w0Degeneracy} we derive an expression for the growth index
in a $\Lambda w$DM (i.e. GDM with constant EoS and zero sound speed and viscosity) and discuss the Integrated Sachs-Wolfe (ISW) effect.
We describe our publicly available suite of codes used here for sampling the parameter space and visualizing the results in Appendix~\ref{app:ECLAIR}.
A complete list of constraints for various choices of datasets, parametrization choices and priors as well as correlation matrices can be found in Appendix~\ref{app:bigtables}.
\section{The model}
\label{sec:model}
\subsection{Evolution equations}
We consider a flat FLRW background with only scalar perturbations, see \cite{KoppSkordisThomas2016} for more details and notation.
The GDM background density $\ensuremath{\bar{\rho}}_g$ and pressure $\ensuremath{\bar{P}}_g$ evolve according to the conservation law
\begin{align} \label{GDMconservation}
\dot{\ensuremath{\bar{\rho}}}_g = - 3 H (1+w) \ensuremath{\bar{\rho}}_g\,, \qquad \ensuremath{\bar{P}}_g=w \ensuremath{\bar{\rho}}_g\,,
\end{align}
where $H =\frac{\dot a}{a}$ is the Hubble parameter, satisfying the Friedmann equation, and the overdot denotes derivatives with respect to cosmic time $t$.
The parametric function $w(a)$ can be freely specified and contains the CDM model ($\ensuremath{\bar{\rho}}_g = \ensuremath{\bar{\rho}}_c$) as the special case $w=0$.
The GDM model has two further free parametric functions, the speed of sound, $c_s^2(a,k)$, and the (shear) viscosity, $\ensuremath{c^2_{\rm vis}}(a,k)$, both of which are zero in the case of CDM.
The synchronous gauge metric perturbed around a flat FLRW background is given by
\begin{multline}
ds^2 = -\, dt^2 +a^2 \Bigg[ \Big(1+\frac{1}{3} h\Big)\, \delta_{ij} + (\ensuremath{\vec{\nabla}}_i \ensuremath{\vec{\nabla}}_j - \frac{1}{3} \gamma_{ij} \ensuremath{\vec{\nabla}}^2) \nu \Bigg] dx^i dx^j \,,
\label{def_perturbed_FLRW_metric}
\end{multline}
where $\ensuremath{\vec{\nabla}}_i$ is the covariant derivative compatible with the Euclidean metric $\gamma_{ij}$ and only scalar modes (in this gauge $h$ and $\nu$) are considered.
Switching to $k$-space, the general GDM fluid equations for the density contrast $\delta_g$ and velocity perturbation $\theta_g$ are given by
\begin{subequations} \label{GDMperts}
\begin{equation}
\dot{\delta}_g = 3 H \left( w \delta_g - \Pi_g\right) - (1+w) \left[ \frac{k^2}{a}\theta_g+ \frac{1}{2} \dot{h} \right]
\label{fluid_delta_equation}
\end{equation}
\begin{equation}
a \dot{\theta}_g = -(1 -3 c_{a}^2) aH \theta_g
+ \frac{\Pi_g}{1+w}
- \frac{2}{3} k^2 \Sigma_g \,.
\label{fluid_theta_equation}
\end{equation}
with $ c_a^2 = \dot{\ensuremath{\bar{P}}}_g/\dot{\ensuremath{\bar{\rho}}}_g$ the adiabatic sound speed. While the above equations are generically valid for any conserved fluid, the following special choice of closure equations defines the GDM model \cite{Hu1998a}
\begin{align}
\Pi_g &= c_s^2 \delta_g+3 (1+w) ( c_s^2 - c_a^2 ) aH \theta_g \\
\dot{\Sigma}_g &= - 3 H \Sigma_g+ \frac{4}{1+w} \ensuremath{c^2_{\rm vis}} (\frac{\theta_g}{a} - \frac{1}{2}\dot{\nu})\,.
\label{ShearGDMeom}
\end{align}
\end{subequations}
The first equation is a perturbative EoS for the pressure perturbation $\Pi_g \equiv (P_g- \ensuremath{\bar{P}}_g)/ \ensuremath{\bar{\rho}}_g $ and the second equation is an evolution equation for the scalar part $\Sigma_g$ of the traceless part of the GDM stress tensor $T_g^i{}_j$. We refer the interested reader to \cite{KoppSkordisThomas2016} for further discussions of the theoretical motivation,
physical interpretation and notation.
The EoS $w$ is expected to be uncorrelated with the two perturbative parameters $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ during the era of matter domination,
as shown in \cite{ThomasKoppSkordis2016}. However, as we show below, these parameters become correlated during the era of radiation domination, when
adiabatic initial conditions are considered.
\subsection{Smooth bin parametrization}
In order to constrain the three purely time-dependent GDM parametric functions $w(a)$, $c^2_s(a)$ and $\ensuremath{c^2_{\rm vis}}(a)$ in a way that is sufficiently general but still feasible,
we restricted the variation of these functions to $N=9$ scale factor bins. As our goal is to explore the allowed behavior of dark matter with as few restrictions as possible, having fewer bins would unnecessarily restrict the phenomenological freedom of the model.
The bin edges were chosen to be
\begin{align}
\tilde a_0 &=1 \notag\\
\tilde a_{1\leq i\leq N-1} &= 10^{- i \Delta_{\ln a}}~~\mathrm{with}~~\Delta_{\ln a}=0.5\\
\tilde a_{N} &= 0 \notag
\end{align}
so that $f(a)$ (here denoting any of $w$, $c^2_{s}$, $\ensuremath{c^2_{\rm vis}}$) has piecewise constant values between them, that is,
\begin{equation} \label{sharpbins}
f(a)= \sum_{i=0}^{N-1} f_i \Theta(a-\tilde a_{i+1}) \Theta(\tilde a_{i}-a)\,,
\end{equation}
where the $f_i$ coefficients comprise $N$ free parameters and $\Theta$ is the Heaviside step function.
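For concreteness, the binning of Eq.~\eqref{sharpbins} can be sketched in a few lines of Python. This is an illustrative snippet rather than part of our released code, and the names (\texttt{A\_EDGES}, \texttt{f\_sharp}) are chosen here purely for exposition:

```python
N = 9                  # number of scale-factor bins
DELTA_LOG10A = 0.5     # bin width in log10(a), i.e. edges at 10^(-i/2)

# Bin edges: a~_0 = 1, a~_i = 10^(-i*0.5) for 1 <= i <= N-1, a~_N = 0
A_EDGES = [1.0] + [10.0 ** (-i * DELTA_LOG10A) for i in range(1, N)] + [0.0]

def f_sharp(a, f_vals):
    """Piecewise-constant f(a) of Eq. (sharpbins):
    bin i covers a~_{i+1} < a <= a~_i and carries the value f_i."""
    for i in range(N):
        if A_EDGES[i + 1] < a <= A_EDGES[i]:
            return f_vals[i]
    raise ValueError("a must lie in (0, 1]")
```

For instance, \texttt{f\_sharp(1.0, w\_vals)} returns the value in the present-day bin, while the deepest bin extends all the way to $a \to 0$.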
In the case of the $w(a)$ function, the discontinuity at the bin edges $\tilde a_{1\leq i\leq N-1}$ implies $c^2_{a,i}(\tilde a_i)= \pm \infty$ for the
adiabatic sound speed.
In order to test whether our conclusions depend on this discontinuity, we regularized the transitions by a
lognormal smoothing of Eq.~\eqref{sharpbins} with width $\sigma_{\ln a}$ (and assuming $\sigma_{\ln a} \ll \Delta_{\ln a} $), leading to
\begin{align} \label{smoothbins}
f(a) &= \sum_{i=0}^{N-2} \tilde f_i(a) \,\Theta(a- a_{i+1}) \Theta( a_{i}-a)\\
\tilde f_i(a) &= \frac{f_i-f_{i+1} }{2} \erf\left( \frac{\ln (a/\tilde a_{i+1})}{\sigma_{\ln a}}\right)+\frac{ f_i+f_{i+1}}{2}
\,, \notag
\end{align}
with corresponding logarithmic bin centers\footnote{For convenience we defined the bin centers for the first and last bin separately.
Any definition of bin center is acceptable if it is several multiples of $\sigma_{\ln a}$ away from the transition times $\ln \tilde a_{1\leq i\leq N-2}$.}
\begin{align}
a_0&=1 \notag \\
a_{1\leq i\leq N-2 } &=\sqrt{ \tilde a_{i} \tilde a_{i+1}} \\
a_{N-1} &= 10^{-\Delta_{\ln a}} \tilde a_{N-1} \notag \,.
\end{align}
We set $\sigma_{\ln a}=0.1 \Delta_{\ln a}$, a sufficiently small choice in order to avoid introducing unwanted physical effects,
but sufficiently wide to study potential differences with respect to setting $\sigma_{\ln a}=0$, corresponding to Eq.~\eqref{sharpbins}. Both choices produced the same constraints.
See Fig.\,\ref{PixelExplanationSmall} for a visual representation of Eq.~\eqref{smoothbins}.
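The smoothed parametrization of Eq.~\eqref{smoothbins} can likewise be sketched. Again, this is an illustration rather than our released implementation; it follows the text in taking $\sigma_{\ln a}=0.1\,\Delta_{\ln a}$ (treated here as a width in $\ln a$, as in the argument of the error function):

```python
import math

N = 9
SIGMA_LNA = 0.05   # sigma_ln a = 0.1 * Delta_ln a, taken in ln(a) units

# Edges a~_i and logarithmic bin centers a_i (first/last defined separately)
EDGES = [1.0] + [10.0 ** (-0.5 * i) for i in range(1, N)] + [0.0]
CENTERS = ([1.0]
           + [math.sqrt(EDGES[i] * EDGES[i + 1]) for i in range(1, N - 1)]
           + [10.0 ** (-0.5) * EDGES[N - 1]])

def f_smooth(a, f_vals):
    """Smoothed binning of Eq. (smoothbins): an erf transition of width
    SIGMA_LNA around each interior edge, evaluated between bin centers."""
    if a > CENTERS[0]:
        return f_vals[0]
    for i in range(N - 1):
        if CENTERS[i + 1] < a <= CENTERS[i]:
            t = math.erf(math.log(a / EDGES[i + 1]) / SIGMA_LNA)
            return (0.5 * (f_vals[i] - f_vals[i + 1]) * t
                    + 0.5 * (f_vals[i] + f_vals[i + 1]))
    return f_vals[N - 1]   # below the last bin center: constant

```

Well away from the edges this reproduces the sharp values $f_i$, and exactly at an interior edge $\tilde a_{i+1}$ it returns the midpoint $(f_i+f_{i+1})/2$, as in Fig.\,\ref{PixelExplanationSmall}.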
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.44 \textwidth]{PixelExplanationSmall}
\end{center}
\caption{ Dashed lines show the first two components of the sum in Eq.~\eqref{smoothbins} in an arbitrary case where $f_1<f_2<f_0$.
The black thin line shows the first three components of the sum in Eq.~\eqref{sharpbins}. The width of the grey bands corresponds to $\sigma_{\ln a}$.
}
\label{PixelExplanationSmall}
\end{figure}
\subsection{Definition of models and submodels}
We list here the different types of (sub)models that we used for our study of GDM.
\paragraph{Model ``{\it var-wc}'':}
The most general GDM model based on our parametrization
with all 26 GDM parameters included is denoted by ``{\it var-wc}''. In addition to this model, we consider and study separately the three nested submodels below.
We note that as discussed in~\cite{KoppSkordisThomasEtal2018}, a degeneracy between $w$ and $\Lambda$ is present in the late universe.
With this in mind, the last two $w$-bins were merged by setting $w_0=w_1$ in this model, as well as its submodel {\it var-w}.
\paragraph{Submodel ``{\it var-w}'':}
The submodel obtained by setting $c_s^2=\ensuremath{c^2_{\rm vis}}=0$ while keeping the 8 $w_i$s free, is denoted by ``{\it var-w}''
and has been previously studied in~\cite{KoppSkordisThomasEtal2018}. It describes a GDM fluid that only modifies the background evolution of the Universe,
but maintains the geodesic motion of GDM fluid elements.
We include this model here for reference purposes, as the present paper is a direct generalization and logical
continuation of \cite{KoppSkordisThomasEtal2018}. The inclusion of this model allows us to check to what extent
the previously obtained {\it var-w}~constraints are recovered within the encompassing {\it var-wc}~model after marginalization
over the 18 additional $c^2_{s, i}$ and $\ensuremath{c^2_{{\rm vis},i}}$ parameters.
The bins $w_0$ and $w_1$ were once again joined together.
\paragraph{Submodel ``{\it var-c}'':}
Using the same reasoning as for the {\it var-w}~model,
we also study the complementary submodel ``{\it var-c}'' defined by $w=0$ with all of the 18 $c^2_{s, i}$ and $\ensuremath{c^2_{{\rm vis},i}}$ parameters left free.
\paragraph{Submodel ``{\it var-w=c}'':}
Finally, we also consider the submodel with the restriction $w_i =c^2_{s, i} = \ensuremath{c^2_{{\rm vis},i}}$.
This model is interesting due to its close relation with a number of well-motivated collisionless DM scenarios. Two examples are the case of warm DM
and the case of CDM when the effects of unresolved nonlinear small-scale physics are incorporated using
the effective field theory of large-scale structure \cite[EFTofLSS,][]{BaumannNicolisSenatoreEtal2012,CarrascoHertzbergSenatore2012,CarrollLeichenauerPollack2013,ForemanSenatore2015}.
In this case we let $w_0$ and $w_1$ be mutually independent since late universe constraints on $w$ are driven by the $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ functions,
as we explain in Sec.\,\ref{varweqc}.
\begin{table}[t!]
\begin{center}
\begin{tabular}{|l |l |l |l |}
\hline
Model & Additional & Restrictions & No.\,of additional \\
& parameters & & parameters \\
\hline
\hline
{\it var-wc} & $w_i, c^2_{s, i}, \ensuremath{c^2_{{\rm vis},i}}$ & $w_0=w_1$ & 26 $(8+2\times9)$ \\
\hline
{\it var-w} & $w_i$ & $w_0=w_1$ & 8 \, (9-1)\\
& & $c^2_{s}= \ensuremath{c^2_{\rm vis}}=0$ & \\
\hline
{\it var-c} & $c^2_{s, i}$, $\ensuremath{c^2_{{\rm vis},i}}$ & $w=0$ & 18 $(2\times9)$ \\
\hline
{\it var-w=c} & $c^2_{s, i} $ & $w=c^2_{s} = \ensuremath{c^2_{\rm vis}}$ & 9 \, $(1\times9)$ \\
\hline \hline
\end{tabular}
\end{center}
\caption{List of the GDM models studied in this work. }
\label{tab:modelnames}
\end{table}
Table~\ref{tab:modelnames} shows a summary of all GDM models considered in this paper.
It is also convenient to define a dimensionless scaled GDM density
\begin{equation} \label{Defomegag}
\omega_g \equiv a^3 \ensuremath{\bar{\rho}}_g\,\frac{8 \pi G}{3\times (100\, \rm{km/s/Mpc})^2},
\end{equation}
in order to facilitate interpreting constraints on $w_i$.
When $w=0$, $\omega_g$ is equal to the conventional (constant) dimensionless CDM density $\omega_c$.
The function $\omega_g$ is in general time dependent but fully determined by the $N+1$ parameters $\ensuremath{\omega^{(0)}_g}$ and $w_i$.
We use the notation $\omega_g^{(i)}=\omega_g(a_i)$, and similarly for other functions that already carry a subscript, so that the present day DM abundance
is $\ensuremath{\omega^{(0)}_g} = \omega_g(a_0)$. For functions without a subscript we write instead $w_i=w(a_i)$.
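Because $w$ is piecewise constant, $\omega_g(a)$ follows in closed form from $d\ln\omega_g/d\ln a = -3w$, integrating bin by bin from $a$ to $1$. A minimal illustrative sketch (not our released code; bin edges as defined above):

```python
import math

def omega_g(a, omega_g0, w_vals, edges):
    """Scaled GDM density omega_g(a) for piecewise-constant w(a), from
    d ln(omega_g)/d ln a = -3 w  =>  omega_g(a) = omega_g0 * exp(3 * I),
    where I is the integral of w d ln a' from a to 1, summed bin by bin."""
    integral = 0.0
    for i, (hi, lo) in enumerate(zip(edges[:-1], edges[1:])):
        if a >= hi:          # this bin lies entirely below a: done
            break
        lower = max(a, lo)   # clip the bin at a
        integral += w_vals[i] * math.log(hi / lower)
        if lower == a:       # reached the bin containing a
            break
    return omega_g0 * math.exp(3.0 * integral)
```

As a check, $w=0$ in all bins recovers the constant CDM density, while a constant $w$ in every bin gives $\omega_g(a) = \ensuremath{\omega^{(0)}_g}\, a^{-3w}$.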
\section{Methodology}
\label{sec:methods}
\subsection{Numerical solutions}\label{sec:numerics}
In order to perform our analysis we implemented the GDM fluid equations \eqref{GDMconservation} and \eqref{GDMperts}
in the Cosmic Linear Anisotropy Solving System (CLASS) code~\cite{Lesgourgues2011}.
CLASS numerically solves the Boltzmann equation for each relevant component coupled to the Einstein equations and calculates the CMB
and matter power spectra given a set of model parameters.
Our modification of the CLASS code adds an additional GDM component based on the dark energy fluid
(with free equation of state and sound speed) implemented by the original authors~\cite{LesgourguesTram2011}, which we further improved to allow for nonzero viscosity.
Our modification of CLASS makes it easy to define as many bins as necessary for all three GDM functions $\{w,c_s^2,\ensuremath{c^2_{\rm vis}}\}$ through the standard CLASS interface
and to set the amplitudes for each of these functions in each bin.
Alongside this work, our code is made publicly available\footnote{\url{https://github.com/s-ilic/gdm_class_public} } with instructions on how to use it.
We also independently modified a different Boltzmann code~\cite[DASh,][]{KaplighatKnoxSkordis2002}
to include the full GDM parametrization. We performed a full comparison between the codes in the case of constant GDM parameters,
including the background evolution, perturbation evolution,
the CMB angular power spectra, matter power spectrum and lensing potential. The numerical difference of the two codes in the case of the GDM model is similar to the corresponding
difference in the case of $\Lambda$CDM, within $\sim0.1\%$. This level
of agreement holds for all quantities in both the synchronous gauge and the conformal Newtonian gauges.
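The figure of merit used above is simply the largest fractional deviation between corresponding outputs of the two codes; a generic helper of the kind one might use (illustrative, with hypothetical names) is:

```python
def max_frac_diff(spec_a, spec_b):
    """Maximum fractional difference between two spectra sampled at the
    same points -- a simple figure of merit for cross-code agreement."""
    return max(abs(x - y) / abs(y)
               for x, y in zip(spec_a, spec_b) if y != 0.0)
```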
\subsection{Datasets and sampling technique}
Our constraints are obtained using the same datasets as in \cite{KoppSkordisThomasEtal2018}. Specifically we used the Planck 2015 data release \cite{PlanckCollaborationXI2015} of
the CMB anisotropies power spectra, composed of the low-$\ell$ T/E/B likelihood and the
full TT/TE/EE high-$\ell$ likelihood with its complete set of nuisance parameters.
The combination of these likelihoods is thereafter referred to as Planck power spectra (PPS).
We also selectively added the HST key project prior on $H_0$~\cite{RiessMacriCasertanoEtal2011},
BAO measurements from the 6dF Galaxy Survey~\cite{BeutlerBlakeCollessEtAl2011} and the Baryon Oscillation Spectroscopic Survey of the Sloan Digital Sky Survey~\cite{AndersonAubourgBaileyEtal2014}, and the
Planck 2015 CMB lensing likelihood (respectively referred to as HST, BAO and Lens). Although more recent cosmological datasets are available \citep[see e.g.][]{Aghanim:2018eyx},
keeping to the ones mentioned above allows us to perform a robust comparison with our previous work \cite{KoppSkordisThomasEtal2018} where $c_s^2=\ensuremath{c^2_{\rm vis}}=0$ in order to elucidate
the effects of the new enlarged parameter space.
Our total cosmological parameter set (not including the Planck likelihood nuisance parameters)
\begin{equation}
(\omega_b, \ensuremath{\omega^{(0)}_g}, H_0,n_s, \tau, \ln 10^{10} A_s, w_i, c^2_{s,i}, \ensuremath{c^2_{{\rm vis},i}})
\end{equation}
consists of 6 $\Lambda$CDM parameters and 8 values $w_i$, 9 values $c^2_{s,i}$ and 9 values $\ensuremath{c^2_{{\rm vis},i}}$.
We assumed adiabatic initial conditions, described in~\cite{KoppSkordisThomas2016}.
We investigated the constraints on our selection of GDM (sub)models coming from our choice of datasets using
a standard Markov Chain Monte Carlo (MCMC) approach.
For this purpose we used ECLAIR, a publicly available\footnote{\url{https://github.com/s-ilic/ECLAIR}} suite of codes
that uses the numerical output from CLASS, combined with likelihoods of state-of-the-art datasets, and efficient sampling methods.
To sample the parameter space we used the Goodman-Weare affine-invariant ensemble sampling technique~\cite{GoodmanWeare2010} via our ECLAIR framework
which internally uses the technique's Python implementation \texttt{emcee} \cite{emcee}. The convergence of the MCMC chains was assessed using graphical
and numerical tools included in the ECLAIR code package (see Appendix~\ref{app:ECLAIR} for more details).
The ECLAIR suite was also used to find the point in parameter space corresponding to the maximum likelihood of each model.
The resulting chains were used to determine the marginalized posterior distributions of the parameters using the publicly available code \texttt{getdist} \cite{Lewis2019}.
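The Goodman-Weare stretch move underlying \texttt{emcee} is compact enough to sketch. The serial single-walker update below illustrates the algorithm (in \texttt{emcee} the ensemble is updated in two parallel halves); it is not the ECLAIR implementation:

```python
import math
import random

def stretch_move_sweep(walkers, log_prob, a=2.0, rng=random):
    """One sweep of the Goodman-Weare affine-invariant stretch move.
    walkers: list of parameter lists; log_prob: target log-density."""
    n, dim = len(walkers), len(walkers[0])
    for k in range(n):
        j = rng.randrange(n - 1)
        if j >= k:
            j += 1                        # complementary walker, j != k
        # stretch factor z ~ g(z) propto 1/sqrt(z) on [1/a, a]
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        prop = [wj + z * (wk - wj)
                for wj, wk in zip(walkers[j], walkers[k])]
        # accept with probability min(1, z^(dim-1) p(prop)/p(walkers[k]))
        log_ratio = ((dim - 1) * math.log(z)
                     + log_prob(prop) - log_prob(walkers[k]))
        if math.log(rng.random() + 1e-300) < log_ratio:
            walkers[k] = prop
    return walkers
```

The $z^{\,d-1}$ factor in the acceptance probability is what makes the move affine invariant~\cite{GoodmanWeare2010}; running the sweep on a simple Gaussian target reproduces its moments after burn-in.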
\subsection{Priors}\label{sec:priors}
We set uniform priors as specified in Table~\ref{tab:priors} unless otherwise stated. We used the same priors on Planck nuisance parameters
and the same neutrino treatment as in \cite{KoppSkordisThomasEtal2018, ThomasKoppSkordis2016}.
The helium fraction was set to $Y_{\rm He}=0.24667$~\cite{PlanckCollaborationXIII2015}. We have checked that letting $Y_{\rm He}$ be
an additional free parameter in our MCMC analysis does not affect our results and conclusions.
We set flat priors on standard cosmological parameters as well as the GDM parameters (see Table~\ref{tab:priors}).\footnote{Throughout $H_{\rm 0}$ is in units of km s${}^{-1}$ Mpc${}^{-1}$, and $H^{-1}_{\rm eq}$ and $r^{\rm drag}_s$ are in units of Mpc.}
The choice of priors is always a sensitive issue in any type of Bayesian analysis. This is particularly true
in our case, as many of our parameters have a physical lower bound -- namely, the sound speed and viscosity need to be non-negative at all times.
These bounds form a (multidimensional) ``corner'' which due to volume effects becomes highly disfavored during the MCMC exploration regardless
of whether the data favor this region of the parameter space or not. This situation is particularly problematic in our case because this corner
corresponds to the standard CDM paradigm (zero sound speed and viscosity) and could be, thus, erroneously excluded by our MCMC analysis.
Moreover, due to correlations with all other cosmological parameters, the corresponding marginalized posteriors of the set of sound speed and viscosity parameters
might also be affected.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.48 \textwidth]{varwovertime_Four}
\includegraphics[width=0.48 \textwidth]{varwovertimerepar_Four}
\end{center}
\vspace{-0.3cm}
\caption{
Shown are the 99\% credible regions on the $w_i$ parameters parametrizing the EoS of DM.
The large ticks on the $a$-axis specify the bin boundaries.
The line styles correspond to different datasets and models specified in the legend.
The insets zoom into the region enclosing $w=0$ and have the same ticks on the $a$-axis. \emph{Left}:
We show the credible regions for the {\it var-wc}~and {\it var-w}~models when PPS and PPS+Lens+BAO dataset combinations are used.
In the {\it var-wc}~model, the \ensuremath{\Lambda}CDM~ model (thin black solid line $w=0$) lies outside the 99\% credible region for bins 8, 7 and 6 when PPS+Lens+BAO (yellow shaded regions;
darker shades correspond to $95\%$ and $68\%$) was used, whereas when PPS (black full line) was used only bin 7 is marginally inconsistent with \ensuremath{\Lambda}CDM~\!\!.
The {\it var-w}~ model (red dashed and red dotted) is, however, consistent with \ensuremath{\Lambda}CDM~ for all datasets.
\emph{Right}: Comparing flat and nonflat priors in the {\it var-wc}~model with the dataset PPS+Lens+BAO combination.
The yellow shaded region corresponds to 99\% (darker shades as on the left) credible regions.
For the nonflat priors (green full line) only bin 7 does not include $w=0$.
The best fit model is shown as a thick black line and deviates significantly from the mean
in the early universe. In bin 8 (leftmost bin), the best fit model lies at the lower edge of the lower 99\% credible region of
the flat prior case (lower boundary of the yellow region), while it is well contained for the nonflat prior (full green).
}
\label{varwovertime_Four}
\end{figure*}
In order to alleviate these effects and test the
sensitivity of our constraints on our choice of priors, we also used nonflat priors for $c^2_{s,i}$ and $\ensuremath{c^2_{{\rm vis},i}}$, keeping the priors on the other parameters unchanged.
For this test, we used flat priors on the combinations
\begin{align} \label{cp2anddDef}
c^2_{+,i} &\equiv c^2_{s,i}+\frac{8}{15} \ensuremath{c^2_{{\rm vis},i}}
\\
\ensuremath{b}_{i} &\equiv \frac{15c^2_{s,i} }{15c^2_{s,i}+ 8 \ensuremath{c^2_{{\rm vis},i}} }
\notag
\end{align}
which results in nonflat priors for the $c^2_{s,i}$ and $\ensuremath{c^2_{{\rm vis},i}}$ set of parameters since the measure transforms as
\begin{equation}
dc^2_{+,i} d\ensuremath{b}_{i} \propto \frac{dc^2_{s,i} d\ensuremath{c^2_{{\rm vis},i}}}{c^{2}_{+,i}}.
\label{cp2dmeasure}
\end{equation}
We refer to these priors
as ``nonflat priors'' when discussing the {\it var-wc}~and {\it var-c}~models.
These priors are physically motivated. During GDM domination the scale below which the gravitational potential decays
is determined by $c_{+} \eta$, where $\eta$ is the conformal time~\cite{ThomasKoppSkordis2016}.
The $\ensuremath{b}$ parameter interpolates linearly between the two extremes of $100\%$ sound speed or $100\%$ viscosity contribution to $c_{+}^2$.
After the leading-order effect determined by $c_{+}$ sets in, the quantity $\ensuremath{b}= c_s^2/c_+^2$ results in subleading effects, given a fixed $c_+^2$.
Thus, it seems natural to assume flat priors on the $\{c^2_{+,i}, \ensuremath{b}_i\}$ set of parameters
rather than on $c^2_{s,i}$ and $\ensuremath{c^2_{{\rm vis},i}}$. Flat priors on $c^2_{+}$ and $\ensuremath{b}$ translate
then into priors on $c^2_{s}$ and $\ensuremath{c^2_{\rm vis}}$ which peak at the values $c^2_{s}=0$ and $\ensuremath{c^2_{\rm vis}} =0$, as implied by \eqref{cp2dmeasure}.
Hence, we call these ``nonflat priors''.
When viewed in the $c^2_{s}$-$\ensuremath{c^2_{\rm vis}}$ plane, these nonflat priors give more weight to the CDM ``corner'' $c^2_{s}= \ensuremath{c^2_{\rm vis}} =0$ and thus are expected
to lead to tighter constraints on $c^2_{s}$ and $\ensuremath{c^2_{\rm vis}}$.
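The change of variables in Eq.~\eqref{cp2anddDef} is easy to make concrete: sampling uniformly in $(c^2_{+}, \ensuremath{b})$ and mapping back via $c_s^2 = \ensuremath{b}\,c_+^2$ and $\ensuremath{c^2_{\rm vis}} = \tfrac{15}{8}(1-\ensuremath{b})\,c_+^2$ produces exactly the nonflat prior of Eq.~\eqref{cp2dmeasure}. An illustrative sketch (names hypothetical, not our analysis code):

```python
import random

def sample_nonflat_prior(n, cplus2_max=1.0, seed=0):
    """Draw (c_s^2, c_vis^2) pairs implied by flat priors on (c_+^2, b),
    inverting Eq. (cp2anddDef):
        c_s^2  = b * c_+^2
        c_vis^2 = (15/8) * (1 - b) * c_+^2."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        cplus2 = rng.uniform(0.0, cplus2_max)  # flat in c_+^2
        b = rng.uniform(0.0, 1.0)              # flat in b
        samples.append((b * cplus2, 15.0 / 8.0 * (1.0 - b) * cplus2))
    return samples
```

By construction each draw satisfies $c_s^2 + \tfrac{8}{15}\ensuremath{c^2_{\rm vis}} = c_+^2$, and the resulting density in the $c^2_{s}$-$\ensuremath{c^2_{\rm vis}}$ plane piles up toward the CDM corner, in line with the measure in Eq.~\eqref{cp2dmeasure}.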
\begin{table}
\begin{center}
\begin{tabular}{|l |l |l |}
\hline
Parameter & Prior & Model\\
\hline \hline
$\omega_b$ & [0., 1.] & all\\
$\ensuremath{\omega^{(0)}_g}$ & [0., 1.] & all\\
$H_0$ & [45., 90.] & all\\
$\ln(10^{10}A_{s })$ & [2., 4.] & all\\
$n_s$ & [0.8, 1.2] & all\\
$\tau_{\rm reio}$ & [0.01, 0.8] & all\\
\hline
$w_i$ & [-1., 1.] & var-w \& var-wc\\
\hline
$c^2_{s, i}$ & [0., 1.] & var-c \,\& var-wc\\
$\ensuremath{c^2_{{\rm vis},i}}$ & [0., 1.] & var-c \,\& var-wc\\
\hline \hline
\end{tabular}
\end{center}
\caption{List of free cosmological parameters and priors. }
\label{tab:priors}
\end{table}
\section{Results}
\label{sec:results}
The main results of this work are (i) the constraints on the time dependence of the DM EoS $w(a)$ and abundance $\omega_g(a)$ from the {\it var-wc}~model shown in Figs.\,\ref{varwovertime_Four}
and \ref{rhoovertime} and (ii) the constraints on $c^2_s(a)$ and $\ensuremath{c^2_{\rm vis}}(a)$ from {\it var-wc}~and {\it var-c}~shown in Fig. \ref{cs2cv2covertime}.
For comparison, we also show the constraints on the {\it var-w}~model discussed previously in~\cite{KoppSkordisThomasEtal2018}.
Interestingly, comparing the best (i.e. lowest) $\chi^2$ for the three GDM models to the corresponding $\chi^2$ in $\Lambda$CDM, i.e. $\Delta \chi^2_{\rm GDM} \equiv \chi^2_{\Lambda\mathrm{CDM}} - \chi^2_{\rm GDM}$,
we find $\Delta \chi^2_{{\it var-wc}} \simeq \Delta \chi^2_{{\it var-w}} \simeq 8$ and $\Delta \chi^2_{{\it var-c}} \simeq 0$.
To elaborate, adding the 8 new $w_i$ parameters improves the fit only marginally ($\Delta \chi^2 \simeq 8$).
However, adding the 18 new parameters for sound speed and viscosity
yields virtually no improvement to the fit whether added by themselves ({\it var-c}~submodel) or within the full {\it var-wc}~model.
We note that since we do not expect our numerous new GDM parameters to be physical, it makes little sense to apply model selection criteria to our GDM models.
The list of the 68\% and 95\% credible regions of the 1D-posteriors as well as best-fit values of the {\it var-wc}~and {\it var-c}~models for all parameters and datasets
may be found in Appendix~\ref{app:bigtables}. In the following sections we discuss the constraints in detail.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.48 \textwidth]{rhoovertimeComb_PPS_PPS+BAO+lens}
\includegraphics[width=0.48 \textwidth]{rhoovertimeCombrepar_PPS_PPS+BAO+lens}
\end{center}
\vspace{-0.3cm}
\caption{Shown are the 95\% credible regions on the scaled DM abundance $\omega_g(a)$ with same color scheme as in Fig.\,\ref{varwovertime_Four}. The grey band corresponds to the \ensuremath{\Lambda}CDM~ constraint for PPS+Lens+BAO.
}
\label{rhoovertime}
\end{figure*}
\subsection{Constraints on DM EoS and abundance: {\it var-wc}~and {\it var-w}}
\subsubsection{Equation of state, $w(a)$}
In Fig.\,\ref{varwovertime_Four} we show constraints on $w$ contrasting several models ({\it var-w},\,{\it var-wc}), datasets (PPS, PPS+Lens+BAO)
and priors (flat and nonflat priors for sound speed and viscosity). Quite interestingly, we observe on the left panel and
in the case of the {\it var-wc}~model that \ensuremath{\Lambda}CDM~ lies outside the 99\% credible region in the earliest universe bins ($i=8$, 7, and 6) when the PPS+Lens+BAO dataset was used (yellow shaded region).
In contrast, when the same dataset was used to constrain the {\it var-w}~model (i.e. with $c^2_s=\ensuremath{c^2_{\rm vis}}=0$) the credible regions of $w_i$
are consistent with zero, and thus \ensuremath{\Lambda}CDM~\!\!, in all of the bins (red dotted lines).
Consider now the right panel of Fig.\,\ref{varwovertime_Four} which singles out the {\it var-wc}~model constrained with the PPS+Lens+BAO dataset combination
-- the most discrepant with \ensuremath{\Lambda}CDM~ -- on the left panel.
There we display the impact of using different priors on constraining this model: the flat priors on $c_{s,i}^2$ and $\ensuremath{c^2_{{\rm vis},i}}$ versus the nonflat priors on the same parameters,
the latter corresponding to flat priors on the parameters defined by \eqref{cp2anddDef}. We see that using the nonflat priors
makes the early universe credible regions shift significantly (green lines) so that all bins, except the 7th bin, become consistent with \ensuremath{\Lambda}CDM~\!\!,
although even the 7th bin's tension with \ensuremath{\Lambda}CDM~ is reduced to $\sim 3\sigma$.
Furthermore, the best fit model lies close to \ensuremath{\Lambda}CDM~ in bins 6, 7 and 8 and so we cannot decisively claim any nonzero detection of $w$.
We consider now the {\it var-wc}~versus the {\it var-w}~model. Even in the late universe (rightmost) bins where the priors on $c_{s,i}^2$ and $\ensuremath{c^2_{{\rm vis},i}}$ do not have profound impact
(see right panel of Fig.\,\ref{varwovertime_Four}), marginalization over $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ in the {\it var-wc}~model shifts the credible regions
significantly (see the left panel of Fig.\,\ref{varwovertime_Four} and compare the red dotted lines with the yellow shaded region).
These differences between the two models at late times are present also when the PPS dataset is used (contrasting red lines versus black dashed line on the left of Fig.\,\ref{varwovertime_Four}).
Clearly then, marginalization over $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ in the {\it var-wc}~model does not lead to the same constraints on $w$ as simply setting $c_s^2=\ensuremath{c^2_{\rm vis}}=0$ (the {\it var-w}~model).
We discuss in more detail the origin of the differences between these two models in
the late and early universe in Sec.\,\ref{sec:discussion}.
We find the strongest constraints on $w$ between $a_6$ and $a_5$, which enclose the matter-radiation equality $a_{\rm eq} \simeq 3\times10^{-4}$.
In other bins the constraints on $w$ weaken significantly. Adding
the BAO or HST dataset has only a minor effect on {\it var-w}~constraints and only
tightens limits in the rightmost bin.
Contrary to the case of the {\it const-w}~model \cite[where $w$ is a constant throughout the evolution of the universe and $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ are zero,][]{ThomasKoppSkordis2016}, and the {\it var-w}~model \cite{KoppSkordisThomasEtal2018},
adding CMB lensing in the {\it var-wc}~model significantly shifts the constraints on $w_8$ away from \ensuremath{\Lambda}CDM~\!\!. This is seen by
comparing the yellow shaded with the solid black region in the left panel of Fig.\,\ref{varwovertime_Four}.
The reasons for this will be discussed in detail in Sec.\,\ref{sec:discussion}.
\subsubsection{Dark matter abundance, $\omega_g(a)$}
The derived parameter $\omega_g(a)$, shown in Fig.\,\ref{rhoovertime}, provides a better intuition on the meaning of the constraints on $w_i$.
The $\omega_g(a)$ parameter is constructed via analytically integrating \eqref{GDMconservation} given a set of $w_i$ and $\ensuremath{\omega^{(0)}_g}$.
Figure~\ref{rhoovertime} shows the same model and dataset combinations as Fig.\,\ref{varwovertime_Four}.
Paradoxically, having free sound speed and viscosity in the {\it var-wc}~model tightens the posterior of $\ensuremath{\omega^{(0)}_g}$. This explains why,
after marginalizing over $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$, the $w_0$ constraints improve compared to the {\it var-w}~submodel. The improvement of the $\ensuremath{\omega^{(0)}_g}$ posterior in
{\it var-wc}~compared to {\it var-w}~is discussed in Sec.\,\ref{sec:discussion}.
The most striking difference with the $w(a)$-constraints is the persistent offset of $\omega_g$ between the {\it var-wc}~and \ensuremath{\Lambda}CDM~ models in the early universe, i.e. for
all $a<10^{-2}$. During the best-constrained period in $a$, around the time of matter-radiation equality $a_{\rm eq} \simeq 3\times10^{-4}$,
we found $\omega^{\rm eq}_g = 0.1236^{+0.0044}_{-0.0041}$ for the $95\%$ credible interval in the case of the {\it var-wc}~model when the PPS+Lens+BAO dataset combination was used.
For comparison, we obtain $\omega^{\rm eq}_c = 0.1184^{+0.0021}_{-0.0020}$ in \ensuremath{\Lambda}CDM~ with the same dataset combination.
The same offset is present when only the PPS dataset is used, as
seen in the inset of the left panel of Fig.\,\ref{rhoovertime} (black lines), whereas the {\it var-w}~model (red dotted line) leads to virtually the same value for $\omega^{\rm eq}_g$ as $\Lambda$CDM
for both dataset combinations.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.49 \textwidth]{cs2overtime_Six.pdf}
\includegraphics[width=0.49 \textwidth]{cv2overtime_Six.pdf}
\includegraphics[width=0.49 \textwidth]{cs2overtimerepar.pdf}
\includegraphics[width=0.49 \textwidth]{cv2overtimerepar.pdf}
\end{center}
\vspace{-0.3cm}
\caption{The upper panels display the $95\%$ credible regions of $c_s^2$ (left) and $\ensuremath{c^2_{\rm vis}}$ (right) when PPS alone and the PPS+Lens+BAO dataset combination was used
for constraining the {\it var-c}~and {\it var-wc}~models. The constraints on constant $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ parameters from~\cite{ThomasKoppSkordis2016} are superimposed in grey.
The lower panels display the $95\%$ credible regions of $c_s^2$ (left) and $\ensuremath{c^2_{\rm vis}}$ (right) when the PPS+Lens+BAO dataset combination was used,
showing the effect of having nonflat priors as well as the best-fit model. In all panels, the darker yellow shading displays the $68\%$ confidence regions.
Note that the vertical axis is logarithmic, which makes apparent how drastically the late-universe data outperform the early-universe data in constraining power.}
\label{cs2cv2covertime}
\label{cs2cv2covertimerepar}
\end{figure*}
The right panel in Fig.\,\ref{rhoovertime} focuses on the impact of priors on $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$. The green lines display the $95\%$ credible regions
obtained when using nonflat priors on $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ that give more weight to $\Lambda$CDM, see \eqref{cp2anddDef}.
We see that the flat prior credible region (yellow shaded) is very similar to the nonflat prior region (green lines), suggesting that the offset of $\omega_g$ in the early
universe is not caused by the choice of priors. The black line shows the best fit model obtained through maximization of the log-likelihood.
The best fit model, which does not depend on priors, also stays above the \ensuremath{\Lambda}CDM~ $95\%$ credible region (grey band), favoring higher values of $\omega_g$ in the pre-recombination era.
\subsection{Constraints on sound speed and viscosity: {\it var-wc}~and {\it var-c}}
\subsubsection{Constraints on $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$}
We now turn to the constraints on the perturbative GDM parameters $c_{s,i}^2$ and $\ensuremath{c^2_{{\rm vis},i}}$ with $0 \leq i \leq 9$.
In the upper panel of Fig. \ref{cs2cv2covertime} we compare the {\it var-wc}~and {\it var-c}~models (for which $w=0$) for each of the dataset combinations, PPS and PPS+Lens+BAO.
We also display the constraints on constant $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$, labeled as ``const'', found previously in \cite{ThomasKoppSkordis2016}
using the same respective dataset combinations (dashed grey and dot-dashed grey lines).
We see from the upper panel of Fig. \ref{cs2cv2covertime} that CMB alone (PPS) constrains the $c_{s,i}^2$ and $\ensuremath{c^2_{{\rm vis},i}}$ in all redshift bins.
The best constraints, nearly as good as for the constant parameter case (``const''), are achieved in the bin $i=1$ for which $0.1<a<10^{-0.5}$ (redshift $2.2\lesssim z<9$),
where the gravitational lensing of the CMB is most efficient \cite[see e.g.][]{Manzotti2017}.
This secondary anisotropy, which smoothes the amplitude of the peaks and troughs of the CMB spectra without changing their location,
most strongly constrains the perturbative GDM parameters.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.39 \textwidth]{rectangle_standard_with_omegaeq_-Paper.pdf}
\includegraphics[width=0.37 \textwidth]{rectangle_cs20w0sig8_v2_-Paper.pdf}
\raisebox{0.1cm}{\includegraphics[width=0.22 \textwidth]{rectangle_cs28w8_-Paper.pdf}}
\end{center}
\vspace{-0.3cm}
\caption{
The 2D~contours of the $68\%$ and $95\%$ credible regions of various parameter combinations in the set $\{\omega^{(0)}_g,\omega^{{\rm eq}}_g,H_0,\sigma_8,w_0,w_8,A_s,c^2_{s,0},c^2_{s,8},\cvisI{8}\}$ when the PPS+Lens+BAO dataset combination was used,
showing the effect of nonflat priors. Displayed are the {\it var-wc}~model with flat priors (yellow shades) and nonflat priors (green lines),
the {\it var-c}~model with flat priors (blue dashed lines) and nonflat priors (purple dot-dashed lines),
the {\it var-w}~model (red dotted lines) and \ensuremath{\Lambda}CDM~ (grey shades).
The best-fit points are indicated by a black plus for the {\it var-wc}~and purple cross for the {\it var-c}~model.}
\label{2D_contours}
\end{figure*}
The earliest parameters $c_{s,8}^2$ and $\cvisI{8}$ are mostly constrained, however, through the primary CMB anisotropies.
This may be inferred by comparing the yellow and black regions of the top two panels of Fig.~\ref{cs2cv2covertime},
or the PPS and PPS+Lens columns
(blue numbers)
of Table~\ref{tab:alldatasets1D} (see Appendix~\ref{app:bigtables})
which indicates that CMB lensing (Lens dataset) indeed improves all constraints by a factor of 2--4, except for $c_{s,8}^2$ and $\cvisI{8}$.
Note that this is a much bigger improvement than for the constant parameters, where BAO+Lens reduced the upper limits by a factor less than 2~\cite{ThomasKoppSkordis2016}.
The effect of using different priors, flat versus nonflat (see Sec.~\ref{sec:priors}), is depicted in the lower panel of
Fig.\,\ref{cs2cv2covertimerepar}. There, only the PPS+Lens+BAO dataset combination is chosen. We also show the best-fit {\it var-c}~and {\it var-wc}~models which are prior-independent.
Since the nonflat prior favors the $\Lambda$CDM corner in the $c_s^2$-$\ensuremath{c^2_{\rm vis}}$ plane, we expect tighter constraints in the nonflat prior case. This is what is observed
in the lower panels of Fig.\,\ref{cs2cv2covertimerepar}.\footnote{Note that there are only upper limits on the perturbative GDM parameters, and their best-fit values are always significantly closer to zero than their upper limits.}
\subsubsection{Constraints on the $c_+^2$ and $\ensuremath{b}$ combinations}
In Table~\ref{tab:constraints_cp2} (see Appendix~\ref{app:bigtables}) we summarize the constraints on $c_{+,i}^2$ in the nonflat prior case.
The shape of the upper limits on $c_{+,i}^2$ follows those of $c^2_{s,i}$ (and of $c^2_{{\rm vis}, i}$) shown in the lower panels of Fig.\,\ref{cs2cv2covertimerepar},
but are about a factor of 3 larger. This is partly a consequence of error propagation and partly an effect of the nonflat priors.
We remind the reader that ``nonflat'' priors refer to \emph{flat priors} on $c_{+,i}^2$ and $\ensuremath{b}_i$ as described in detail in Sec.~\ref{sec:priors} and specifically \eqref{cp2anddDef}.
Thus constraints on $c^2_{s,i}$ and $\ensuremath{c^2_{\rm vis}}$ are stronger than those on $c_{+,i}^2$.
Interpreting the constraints on $c_{+,i}^2$ by imposing flat priors on $c_{s,i}^2$ and $\ensuremath{c^2_{{\rm vis},i}}$ is difficult since even a uniform 2D-posterior
in the $c_{s,i}^2$-$\ensuremath{c^2_{{\rm vis},i}}$ quadrant would lead to a peak at nonzero $c_{+,i}^2$ for the 1D-posterior of $c_{+,i}^2$.
This may be understood through \eqref{cp2dmeasure} which implies that the 1D-marginalized prior for $c_{+,i}^2$ is proportional to $c_{+,i}^2$.
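This prior-volume effect is easy to verify numerically. The following sketch (our own illustration, with hypothetical unit prior ranges) draws from flat priors on $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ and histograms the induced 1D prior on $c_+^2=c_s^2+\tfrac{8}{15}\ensuremath{c^2_{\rm vis}}$, whose density indeed rises linearly near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# flat (hypothetical, unit-range) priors on the two perturbative parameters
cs2 = rng.uniform(0.0, 1.0, n)        # c_s^2
cv2 = rng.uniform(0.0, 1.0, n)        # c_vis^2
cp2 = cs2 + (8.0 / 15.0) * cv2        # the c_+^2 combination

# induced 1D prior on c_+^2: the density near zero rises linearly in c_+^2,
# so equal-width bins carry mass in the ratio 1 : 3 : 5
hist, _ = np.histogram(cp2, bins=[0.0, 0.1, 0.2, 0.3])
print(hist[1] / hist[0], hist[2] / hist[0])  # close to 3 and 5
```

The linear rise follows because, for $c_+^2$ below the smaller prior range, the set of $(c_s^2,\ensuremath{c^2_{\rm vis}})$ pairs mapping to a given $c_+^2$ is a line segment whose length grows linearly with $c_+^2$.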
The parameter $0<\ensuremath{b}<1$ remains largely unconstrained at present. It is expected, however, that structure formation data which include smaller scales could provide constraints on $\ensuremath{b}$,
or put differently, break the degeneracy between $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ \cite{ThomasKoppMarkovic2019}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.45 \textwidth]{var-weqc-PPS+lens+BAO.pdf}
\includegraphics[width=0.44 \textwidth]{rhoovertime_weqc_PPS+BAO+lens.pdf}
\end{center}
\vspace{-0.3cm}
\caption{{\it Left:} Shown are the $99\%$ credible regions (shaded blue) of the $w_i=c_{s,i}^2=\ensuremath{c^2_{{\rm vis},i}}$ model when the dataset combination PPS+Lens+BAO was used.
The blue dot-dashed line is the constraint from \cite{KunzNesserisSawicki2016} while
the dotted line is the rough expectation from the EFTofLSS \cite[see][]{KoppSkordisThomas2016}.
{\it Right:} $95\%$ credible regions on $\omega_g(a)$ contrasted with those of $\Lambda$CDM in grey.
}
\label{weqcovertime}
\end{figure*}
\subsection{Degeneracies and shifts in the credible regions}
In Fig.\,\ref{2D_contours} we display several 2D-posteriors for the models considered when the PPS+Lens+BAO dataset combination was used.
As evidenced by this figure the best-fit parameters of the models containing $c^2_{s,i}$ and $\ensuremath{c^2_{{\rm vis},i}}$ (that is {\it var-wc}~and {\it var-c})
lie significantly outside their credible regions.
This is a consequence of the location of the peak of our likelihood (i.e. the best-fit point) being close to the border of our prior volume, which itself has a nontrivial shape.
The best-fit point of the {\it var-wc}~model is marked with a black plus sign and the corresponding one for the {\it var-c}~submodel with a purple cross.
Choosing nonflat priors quite generally reduces the distance between the best-fit points and the credibility contours, confirming our suspicion that effects
coming from the choice of priors may be responsible.
We find that the best-fit parameters lie within the credible regions of the corresponding nested models with $c^2_{s,i}=\ensuremath{c^2_{{\rm vis},i}}=0$,
consistent with the fact that adding the perturbative GDM parameters does not increase the goodness of fit.
Specifically, the best-fit point of {\it var-wc}\ lies inside the $68\%$ credible region of {\it var-w},
and similarly the best-fit point of {\it var-c}\ lies inside the $68\%$ credible region of $\Lambda$CDM.
As was already observed when constraints on constant GDM parameters were obtained in an earlier work \cite{ThomasKoppSkordis2016},
there is a degeneracy between the present day values $\ensuremath{\omega^{(0)}_g}$, $H_0$ and $w_0$.
This degeneracy persists also in the {\it var-wc}~model.
Even more interesting are the various shifts of the credible region contours of the {\it var-wc}~model (yellow shades and green lines)
with respect to either the {\it var-w}~(red dotted) or the \ensuremath{\Lambda}CDM~ models.
For some parameter combinations, a shift is also seen between the {\it var-c}~(blue dashed and purple dot-dashed) and the {\it var-w}/\ensuremath{\Lambda}CDM~ models, e.g. in the $\{\omega^{(0)}_g,\sigma_8\}$-plane,
while for others no such shift is observed.
Thus these shifts occur through the interplay between $w$ and $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$.
The most important shift is the one occurring in the $\omega_g^{\rm eq}$-$H_0$ and $\ensuremath{\omega^{(0)}_g}$-$H_0$ planes, both cases involving $H_0$.
This shift allows the Hubble constant to be pushed to a higher value today, $H_0 = 69.3^{+3.3}_{-3.0}$, than the $\Lambda$CDM
value $H_0 = 67.89^{+0.93}_{-0.93}$, while keeping $\ensuremath{\omega^{(0)}_g}$ centered closer to the $\Lambda$CDM value than in the case of the {\it var-w}~model.
The physical mechanism for the increased value of $H_0$ is related to the increased value
of $\omega_g^{\rm eq}$ and is discussed below in Sec.\,\ref{sec:discussion}.
Another shift occurs in the clustering strength at late times, $\sigma_8$, and at early times, $A_s$
seen in the middle panel of Fig.\,\ref{2D_contours}. While the presence of
non-negative $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ in the {\it var-c}~model is sufficient to shift $\sigma_8$ toward smaller values compared to $\Lambda$CDM,
in combination with letting $w$ free in the {\it var-wc}~model, the $\sigma_8$ parameter shifts to even smaller values, while $A_s$ increases, see the middle panel of Fig.\,\ref{2D_contours}.
The increase in $A_s$ is a consequence of the $w_8$-$c_{s,8}^2$ degeneracy seen in the right panel of Fig.\,\ref{2D_contours}.
As we explain in Sec.\,\ref{sec:discussion} the $w_8$-$c_{s,8}^2$ degeneracy
is caused by properties of the adiabatic initial conditions in the GDM model.
This degeneracy, in combination with the positivity of $c_{s,8}^2$, is also responsible for the increase in $\omega_g^{\rm eq}$
and all the other shifts discussed above.
It is also worth pointing out that the usual degeneracy between $c_{s}^2$ and $\ensuremath{c^2_{\rm vis}}$ which keeps the combination $c_{+}^2=c_{s}^2 + \tfrac{8}{15} \ensuremath{c^2_{\rm vis}}$ constant
is broken in bin 8 for the case of the {\it var-wc}~model, as seen on the lower right panel of Fig.\,\ref{2D_contours}.
This is because of the dependence of the adiabatic initial conditions on $w_8,c_{s,8}^2$ and $\cvisI{8}$. The adiabatic initial conditions
may be found in \cite{KoppSkordisThomas2016}, however, \eqref{delta_g_LS} below shows how this effect works.
This degeneracy is restored in the later bins, which is reflected in the red, and thus positive, diagonal in the $c_{s,i}^2$-$\ensuremath{c^2_{{\rm vis},i}}$
correlation matrix displayed in Fig.\,\ref{corrmatComparison} (see Appendix~\ref{app:bigtables}).
\subsection{Constraints on the submodel {\it var-w=c}} \label{varweqc}
The final submodel of {\it var-wc}~we study is the case where $w_i=c_{s,i}^2=\ensuremath{c^2_{{\rm vis},i}}$, denoted by {\it var-w=c}. This case is interesting because freely streaming matter satisfies
this condition either exactly (in the case of ultrarelativistic radiation) or approximately. One example of the latter
is the warm dark matter (WDM) model. A second example of an approximate $w_i=c_{s,i}^2=\ensuremath{c^2_{{\rm vis},i}}$ relation is the CDM model on linear scales
with the inclusion of backreaction terms coming from integrating out nonlinear scales as in the effective field theory of large-scale structure \cite[EFTofLSS,][]{BaumannNicolisSenatoreEtal2012,CarrascoHertzbergSenatore2012,CarrollLeichenauerPollack2013,ForemanSenatore2015}.
In the left panel of Fig.\,\ref{weqcovertime} we show our constraints on the $w_i=c_{s,i}^2=\ensuremath{c^2_{{\rm vis},i}}$ submodel in shaded blue. We superimpose
the WDM constraints from \cite{KunzNesserisSawicki2016} (dot-dashed line) where approximately $w=c^2_s=\ensuremath{c^2_{\rm vis}} =(\frac{1}{3} + \frac{c_{s,0}^2}{a^{2}})^{-1}$
and also a rough estimate for the EFTofLSS (black dashed line) discussed in \cite{KoppSkordisThomas2016}.
In the right panel we show the derived parameter $\omega_g(a)$. Comparing to {\it var-wc}~ and other submodels in Fig.\,\ref{rhoovertime}, it is clear that the DM abundance in this submodel
is much more tightly constrained. This is due to the much tighter constraints on $w_i$, which in the late universe are
driven by the upper limits on $c_{s,i}^2=\ensuremath{c^2_{{\rm vis},i}}$; these cannot be larger than the upper limits on $\{w_i$, $c_{s,i}^2$, $\ensuremath{c^2_{{\rm vis},i}}\}$
in the {\it var-wc}~model, so that they approximately follow those of $c_{+,i}^2$ in Table~\ref{tab:constraints_cp2}.
The constraints on standard cosmological parameters in the {\it var-w=c}~model are as tight as in $\Lambda$CDM,
with all 2D-posteriors overlapping except for $\sigma_8$ caused by the diminished DM growth due to the non-negative sound speed and viscosity.
The constraint on the Hubble constant, $H_0=67.60^{+0.96}_{-0.93}$ (95\% C.L.), is virtually the same as the corresponding $\Lambda$CDM value of $H_0=67.89^{+0.93}_{-0.93}$.
A recurring theme throughout our results is the improvement of the constraints on $c^2_s$ and $c^2_\text{vis}$ due to the CMB lensing spectrum, which is most significant for bin 1 (as shown by Fig.~\ref{cs2cv2covertimerepar}). The lensing spectrum used in this work will be substantially improved by upcoming CMB experiments, such as Simons Observatory \cite[][see Fig.~6 in that work for the expected improvement in the noise compared to the data used in this paper]{Ade:2018sbj}. This improvement from upcoming surveys is particularly significant given the EFTofLSS model line in Fig.~\ref{weqcovertime}. This model has no free amplitude parameter that can be used to shift the model prediction up or down, and the upper limit we have achieved in bin 1 is only a factor of a few above the model. Given that this bin benefits the most by the inclusion of the lensing spectrum, a detection of an EFTofLSS-type GDM signal is likely with the improved lensing spectrum from upcoming experiments.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.32 \textwidth]{k-modes-Deltafld_color_w8_k_0p085invMpc-varwcheck5}
\includegraphics[width=0.32 \textwidth]{k-modes-Deltafld_color_w8_k_0p085invMpc-B5}
\includegraphics[width=0.32 \textwidth]{k-modes-Deltafld_color_w8_k_0p085invMpc-B1}
\end{center}
\vspace{-0.7cm}
\caption{
Samples from an MCMC chain, showing the evolution of the GDM comoving density perturbation $\ensuremath{{\Delta}}_g(a)$ at a fixed scale $k=0.085\,\mathrm{Mpc}^{-1}$.
In the lower set of panels we show the relative difference compared to the $\Lambda$CDM best-fit in order to make the effects described in the text more visible.
Just as in the case of $\omega_g(a)$, $\ensuremath{{\Delta}}_g(a)$ also squeezes to a small allowed range around the time of matter-radiation equality $a_{\rm eq}$.
This is facilitated by a spread at early times due to the $w$-dependence of the time evolution of superhorizon modes during the radiation era.
On the \emph{left} we show the case of the {\it var-w}~model which is to be contrasted with the {\it var-wc}~model in the \emph{middle} panel,
both according to the PPS+Lens+BAO dataset combination.
The degeneracy between $c^2_{s,8}$ and $w_8$ in the adiabatic mode solution during that time shifts
the distribution to larger and positive values of $w_8$ as compared to {\it var-w}.
The \emph{right} panel shows the evolution of $\ensuremath{{\Delta}}_g(a)$ in {\it var-wc}~model according to PPS alone.
Observe that the best fit (dotted) is not affected by this shift but it remains close to the \ensuremath{\Lambda}CDM~\!\! case (dashed).
}
\label{Deltaovertime}
\end{figure*}
\section{Discussion and Implications}
\label{sec:discussion}
In this section we discuss the physical mechanism underlying our most interesting results, namely the increase of $\omega_g$ in the early universe,
leading to an increase of $H_0$ for the {\it var-wc}~model.
Let us first examine the origin of the increase of $\omega_g^{\rm eq}$, the value of $\omega_g$ at matter-radiation equality, in the {\it var-wc}~model compared to either {\it var-w}~or $\Lambda$CDM.
Increasing $\omega_g^{\rm eq}$ is ultimately tied to {\it var-wc}~favoring a higher EoS $w$ in the early universe.
To exemplify, the heights of the CMB peaks severely constrain the evolution of the potential $\Phi$ between radiation and matter eras.
Specifically, $\Phi$ can only evolve within a narrow band of allowed values, otherwise it would cause either too much or too little
early ISW and acoustic driving in the CMB. These two effects are
controlled by $a_{{\rm eq}}$ which in turn translates to a small range of allowed values for $w_{\rm eq}$ (leading to the $w_{\rm eq}$-$\omega_g^{\rm eq}$ degeneracy).
Now, during the matter era $\Phi$ is closely linked to the GDM comoving density perturbation $\ensuremath{{\Delta}}_g = \delta_g + 3 a H (1+w) \theta_g$,
hence, this narrow band of values for $\Phi$
translates to an equivalent range in $\Delta_g$. However, during the radiation era $\Phi$ is sourced by $\Delta_{{\rm radiation}}$ and thus the link to $\Delta_g$ is broken.
Therefore, the data select trajectories for $\Delta_g$ which may start within a fairly wide range of initial values but
subsequently all squeeze within a narrow range of values around the time of matter-radiation equality.
This is precisely what is observed in Fig.\,\ref{Deltaovertime}.
There we plot the evolution of a single $k$-mode ($k= 0.085\,{\rm Mpc}^{-1}$) of $\ensuremath{{\Delta}}_g$ for a representative sample of our Markov chain Monte Carlo runs,
color-coded by their $w_8$ value. The effect just described is clearly visible in all panels, but less so in the left panel, which displays the $\ensuremath{{\Delta}}_g$ evolution in the
{\it var-w}~submodel. This behavior is similar to that of the GDM abundance $\omega_g$, which squeezes within a narrow range of values around $a_{\rm eq}$ in Fig.\,\ref{rhoovertime}.
Thus, the CMB constrains both the DM abundance and the amplitude of DM perturbations most strongly around $a_{\rm eq}$.
Although the squeeze in $\ensuremath{{\Delta}}_g$ is present in both models, in the {\it var-wc}~model it is more pronounced.
The reason is that the evolution of $\ensuremath{{\Delta}}_g$ is affected by both $w$ and $c_s^2$. Meanwhile,
during the radiation dominated era, the GDM density perturbation in the synchronous gauge evolves on large scales ($k \eta \ll 1$) as
\begin{equation}
\delta_g = \zeta_{\rm ini} \left( -\frac{1}{4} + \frac{3c^2_s-5w }{8} \right) (k \eta)^2,
\label{delta_g_LS}
\end{equation}
when adiabatic initial conditions specified by the initial curvature perturbation $\zeta_{\rm ini}$ are set~\cite{KoppSkordisThomas2016}.
Therefore, the GDM parameters $w_8$ and $c^2_{s,8}$ affect the initial amplitude of the GDM density perturbations and consequently the initial $\ensuremath{{\Delta}}_g$. While $w_8$ takes
both positive and negative values, $c^2_{s,8}$ must be non-negative. The two compensate each other only for $w_8>0$, as implied by \eqref{delta_g_LS},
leading to the degeneracy shown in the right panel of Fig.\,\ref{2D_contours}. However, the slope of the degeneracy is not what one would expect from
\eqref{delta_g_LS} under the naive assumption $\ensuremath{{\Delta}}^{\rm ini}_g = \ensuremath{{\Delta}}_{\rm cdm}^{\rm ini}$, because both $w_8$ and $c^2_{s,8}$ affect the subsequent evolution of $\ensuremath{{\Delta}}_g$ into the squeezed region around $a_{\rm eq}$.
Nevertheless, this degeneracy combined with the squeezed region around $a_{\rm eq}$ drive the posteriors of both $w_8$ and $c^2_{s,8}$ to more positive values.
Due to the $w$-$\omega_g$ degeneracy discussed first in~\cite{Hu1998a} and recently demonstrated in~\cite{ThomasKoppSkordis2016},
once $w_8$ shifts to positive values it then leads to an increase of the early universe abundance of DM $\omega^{(8)}_g$ and consequently $\omega_g^{\rm eq}$.
Before discussing the increase of $H_0$ in the {\it var-wc}~model, we briefly comment on a few minor points.
First, we showed in Sec.\,\ref{sec:results} that
marginalizing over $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ in the {\it var-wc}~model does not exactly lead to the same posteriors as in the {\it var-w}~submodel
where $c_s^2=\ensuremath{c^2_{\rm vis}}=0$. This is a consequence of our discussion of the $w_8$ and $c^2_{s,8}$ degeneracy combined with the squeeze in $\ensuremath{{\Delta}}_g$ at equality.
Second, adding the Lens dataset widens the 1D-posteriors of $c^2_{s,8}$, compared to PPS alone, a
somewhat counter-intuitive result also inferred from Table~\ref{tab:alldatasets1D}. In the right of Fig.\,\ref{Deltaovertime}
we plot the $\ensuremath{{\Delta}}_g$ samples using PPS alone, showing a distribution of curves shifted toward smaller $ \ensuremath{{\Delta}}_g/ \ensuremath{{\Delta}}_{\rm cdm} -1 $ in the late universe.
This is the result of diminished constraining power on DM clustering at that time and
since growth is an integrated effect, the drop of $ \ensuremath{{\Delta}}_g/ \ensuremath{{\Delta}}_{\rm cdm} -1 $ then requires a more CDM-like growth in the early universe,
thus favoring smaller values of $c^2_{s,8}$.
Lastly, since $c^2_{s,8}$ and $w_8$ are correlated,
adding the Lens dataset favors larger values of $w_8$ and explains why this dataset affects significantly the constraints on $w_8$ in the {\it var-wc}~model.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.47 \textwidth]{3D_H0_rsdrag_omegaeq_-Paper.pdf}
\qquad \includegraphics[width=0.37 \textwidth]{3D_Heq_aeq_omegaeq_-Paper.pdf}
\end{center}
\vspace{-0.3cm}
\caption{{\it Left: } We show the 68\% and 95\% credible regions for various datasets and models used to constrain $H_0$ and $r_s^{\rm drag}$.
The SH0ES and the BAO+SNe credible regions are adopted from \cite{KnoxMillea2019}.
{\it Right: } We show the 68\% and 95\% credible regions of the $H_{\rm eq}$ and $a_{\rm eq}$ plane,
demonstrating that $\omega_g^{\rm eq}$ uniquely determines $H_{\rm eq}$ and $a_{\rm eq}$.
}
\label{rsdragH0plane}
\end{figure*}
We now discuss the reasons for the increase of the mean of $H_0$ in the {\it var-wc}~model, making $H_0$ more consistent with supernovae estimates of its value.
As was explained in \cite{KoppSkordisThomasEtal2018}, there is strong degeneracy between $\ensuremath{\omega^{(0)}_g}$, $w_0$ and $H_0$ in the {\it var-w}~model,
since these late universe parameters determine a large fraction of the angular diameter distance $d^*_A$ to the last scattering surface.
This degeneracy is still observed to be present in the {\it var-wc}~model. However, as can be seen in the left panel of Fig.\,\ref{2D_contours},
the contours in the $\ensuremath{\omega^{(0)}_g}-H_0$ plane shift to larger values of $H_0$ and smaller values of $\ensuremath{\omega^{(0)}_g}$ along the degeneracy direction,
compared to the {\it var-w}~case. This happens because increasing $w_0$ allows easier structure growth
in the late universe (see $w_0$-$\sigma_8$ degeneracy in Fig.\,\ref{2D_contours} and App.\,\ref{app:sigma8w0Degeneracy}).
In the {\it var-wc}~model the perturbative GDM parameters shift $\sigma_8$ to small values
and the more negative values of $w_0$ become disfavored by CMB lensing, so that overall $w_0$ tends to be more positive.
This tighter and more positive distribution of $w_0$ then leads to a tighter distribution of $\ensuremath{\omega^{(0)}_g}$ with smaller mean (compared to {\it var-w}).
Therefore a larger $H_0$ compared to {\it var-w}~is required to get the right $d^*_A$.
However, the mean of $\ensuremath{\omega^{(0)}_g}$ in the {\it var-wc}~model is not too different from its values in $\Lambda$CDM and hence, there is more to this story.
The intrinsic size of the sound horizon at the end of recombination, or rather at the baryon drag epoch, $r_s^{\rm drag}$, is different in the {\it var-wc}~case.
As was pointed out in \cite{KnoxMillea2019}, an increase in $r_s^{\rm drag}$ is one of the most natural ways
for increasing $H_0$ as inferred from early universe data and in turn relaxing the $H_0$-tension.
In Fig.\,\ref{rsdragH0plane} we reproduce Fig.\,1 of \cite{KnoxMillea2019}, showing two model-independent constraints in the $H_0$-$r_s^{\rm drag}$ plane
from distance ladder-calibrated supernovae (SH0ES) and supernovae-calibrated angular diameter distance measurements of $r_s^{\rm drag}$ (BAO+SNe).
We replaced the high-$\ell$ and low-$\ell$ $\Lambda$CDM constraints which were part of the original figure
by several GDM constraints on $H_0$ and $r_s^{\rm drag}$ coming from the PPS+Lens+BAO dataset combination.
We show the $68\%$ and $95\%$ credible regions for the {\it var-wc}~model (yellow curves), $\Lambda$CDM (grey) and {\it var-w}~(red dotted),
and additionally a set of samples color-coded according to $\omega_g^{\rm eq}$ using the {\it var-wc}~model.
While the original Fig.\,1 of \cite{KnoxMillea2019} revealed that changing $\ensuremath{\omega^{(0)}_g}$ cannot reconcile SH0ES and BAO+SNe,
it is clear from our Fig.\,\ref{rsdragH0plane} that an independent increase of $\omega_g^{\rm eq}$ can move the contours to larger $r_s^{\rm drag}$ and larger $H_0$
toward a region where SH0ES and BAO+SNe overlap.
As is clear from the right panel of Fig.\,\ref{rsdragH0plane}, $\omega_g^{\rm eq}$
is uniquely tied to $H_{\rm eq}$ and $a_{\rm eq}$, such that $r_s^{\rm drag}$ is larger due to an increase
in the prerecombination Hubble parameter and earlier matter-radiation equality (while keeping $\ensuremath{\omega^{(0)}_g}$ at a smaller value).
Thus, an increased $\omega_g^{\rm eq}$, whose origin we already explained above, is the source of an increased $H_0$ in the {\it var-wc}~model.
We note in passing that although the {\it var-w=c}~submodel allows, in principle, for $\omega_g^{\rm eq} > \ensuremath{\omega^{(0)}_g}$, the data do not favor such a shift.
We see in Fig.\,\ref{weqcovertime} that $\omega^{\rm eq}_g$ is very close to $\ensuremath{\omega^{(0)}_g}$, which in turn is very close to the $\Lambda$CDM value.
This results in a low $H_0$ value which is comparable to that of $\Lambda$CDM, so that a {\it var-w=c}~type model cannot resolve the $H_0$ tension.
Our analysis indicates that a designed GDM model with only a couple of parameters beyond $\Lambda$CDM (rather than 26) may be
able to address and further investigate the $H_0$ and $\sigma_8$ tensions.
Our models also further elucidate the mechanism by which decaying dark matter,
see e.g. \cite{Buen-AbadSchmaltzLesgourguesEtal2018,Bringmann2018,Vattis2019}, can increase the CMB inference of $H_0$ and decrease $\sigma_8$.
Such a model naturally implements a positive $w_8$ and $c_{s,8}^2$ when decaying DM and dark radiation are collectively modeled as a single GDM fluid.
Finally, we note that in our analysis we assumed that dark energy is a cosmological constant. Letting the dark energy EoS vary in redshift could, in principle, lead to degeneracies with
$w$ (dark matter EoS). The case of late DE was investigated in \cite{TutusausEtal2016}, where the degeneracy was solved by using both early- and late-time observables of the perturbations -- for
instance, CMB data combined with a forecast for galaxy clustering data. It is also possible to have an early dark energy (EDE) component (see e.g. \cite{KarwalKamionkowski2016,
HillMcDonoughToomeyEtal2020}). If the EDE is uncoupled from the DM and has negligible perturbations, or if the EDE is tightly coupled to the DM, the combined EDE+DM fluid can be well described
by a single GDM fluid \cite{KoppSkordisThomas2016}, so that one could interpret our constraints on GDM parameters as constraints on such mixture scenarios. An EDE component which is not
tightly coupled to the DM is unlikely to be the cause of the early-time marginal effects discussed in this section, since for those effects to appear a varying DM sound speed is necessary, and in
addition, they do not occur when only the EoS is varied.
\section{Conclusion}
\label{sec:conclusion}
We have presented the most exhaustive parameter search of DM properties to date, using the generalized dark matter model (GDM).
We allowed all three GDM parametric functions to have a fairly general time dependence by binning
$w$ in 8 and $c^2_s$ and $\ensuremath{c^2_{\rm vis}}$ in 9 scale factor bins, that is, $26$ new parameters beyond $\Lambda$CDM in total.
We found no convincing evidence for any of these parameters to be nonzero. We expect that merging some bins will tighten the constraints for each bin; however,
we do not expect that significant non-CDM behavior would emerge, as even in the extreme case of constant GDM parameters this does not happen. Thus, our result should be seen as
depicting the most general time-variation of DM properties allowed by the data.
We analyzed four nested models: {\it var-wc}~(all 26 parameters free), {\it var-c}~(setting $w_i=0$), {\it var-w}~(setting $c^2_s=\ensuremath{c^2_{\rm vis}}=0$; previously studied in \cite{KoppSkordisThomasEtal2018})
and {\it var-w}=c where the constraint $w=c^2_s=\ensuremath{c^2_{\rm vis}}$ was imposed.
Our strongest constraints on $w$ are in the early universe around matter-radiation equality, while the strongest constraints on $c^2_s$ and $\ensuremath{c^2_{\rm vis}}$ are
between redshifts $2$ and $9$, where the constraining power of CMB lensing peaks.
Our analysis was performed using flat and nonflat priors for the perturbative GDM parameters,
in order to ensure robustness of the results. Indeed, while three early-universe bins showed significant shifts of $w$ away from zero in the {\it var-wc}~model when
flat priors were used, these shifts became less significant with nonflat priors.
Having a varying $w$ improved the fits marginally while letting $c^2_s$ and $\ensuremath{c^2_{\rm vis}}$ be free led to virtually no improvement.
However, $c^2_s$ and $\ensuremath{c^2_{\rm vis}}$ introduced some rather interesting features.
We observed a number of interesting shifts in the 2D~posteriors between the {\it var-w}~and {\it var-wc}~models. Specifically, the {\it var-wc}~model shifts the DM abundance around equality, \ensuremath{\omega^{\rm eq}_g}, to higher values,
while the present-day abundance \ensuremath{\omega^{(0)}_g}~decreases and the Hubble constant $H_0$ increases compared to {\it var-w}~or to $\Lambda$CDM.
Interestingly, $\sigma_8$ also shifts to lower values in the {\it var-wc}~model. These shifts indicate the potential of GDM to alleviate the $H_0$ and $\sigma_8$ tensions driven by
early and late universe data. In particular, an \emph{a-posteriori} constructed GDM model with only a couple of more parameters than $\Lambda$CDM may be
favored over the latter while simultaneously addressing these tensions.
Our present analysis paves the way for further work in this direction using more recent cosmological datasets \cite[e.g.][]{Aghanim:2018eyx}, as well as investigating possible
$k^2$-dependencies in the $c_s^2$ and $\ensuremath{c^2_{\rm vis}}$ parametric functions.
Finally, we reiterate our assertion that upcoming CMB experiments, such as the Simons Observatory \cite{Ade:2018sbj}, will substantially
improve the CMB lensing constraining power. This improvement is expected to have a profound impact on further constraining, and quite possibly detecting,
DM properties in the late universe, specifically $c^2_s$ and $\ensuremath{c^2_{\rm vis}}$.
We predict that a detection of an EFTofLSS type GDM signal is likely with the improved lensing spectrum from upcoming experiments.
\begin{acknowledgements}
The research leading to these results
has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement no. 617656 ``Theories
and Models of the Dark Sector: Dark Matter, Dark Energy and Gravity''. The Primary Investigator is C. Skordis.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sec:intro}
Suppose $\G$ is a group and $\phi: \G\rightarrow \G$ is an endomorphism.
Two elements $x,y\in \G$ are
$\phi$-{\em conjugate} or {\em twisted conjugate,}
if and only if there exists an element $g \in \G$ such that
$$
y=g x \phi(g^{-1}).
$$
The corresponding classes are called \emph{Reidemeister} or
{\em twisted conjugacy} classes.
The number $R(\phi)$ of them is called the {\em Reidemeister number}
of $\phi$.
The study of Reidemeister numbers is an important problem related to Topological Dynamics, Number Theory and Representation Theory (see \cite{FelshB}). One of the main problems in the field is to prove or disprove the so-called TBFT (a conjecture about the twisted Burnside-Frobenius theory (or theorem)), which has numerous important consequences for the Reidemeister zeta function and
for other problems in Topological Dynamics (see a more extended discussion in \cite{FeTrZi20}). Namely, the problem is to identify $R(\f)$ (when $R(\f)<\infty$) in a natural way with the number of fixed points of the induced map $\widehat{\f}$ of an appropriate dual object. In the initial formulation of the conjecture \cite{FelHill}, the dual object was the unitary dual $\widehat{\G}$ and
$\widehat{\f}:[\rho]\mapsto [\rho\circ \f]$. The TBFT conjecture was proved in many cases, but failed for an example in \cite{FelTroVer}, which led to the new formulation TBFT$_f$, where $\widehat{\G}$ was replaced by its finite-dimensional part,
which is evidently invariant under $\widehat{\f}$. This is the version which we will study in this paper for a class of groups.
In \cite{FeTrZi20} an example of a group that has neither TBFT nor TBFT$_f$ was presented.
The most general proved cases of TBFT$_f$ are the case of polycyclic-by-finite groups \cite{polyc}
and the case of nilpotent torsion-free groups of finite Pr\"ufer rank \cite{FelTroDicjtomy2021RJMP}.
Another important problem in the field is to localize the class of groups, where one can consider the TBFT conjecture, i.e. where
automorphisms with $R(\f)<\infty$ do exist. The opposite case is called the $R_\infty$ property.
It has some topological consequences itself (see e.g. \cite{GoWon09Crelle}).
A part of recent results about Reidemeister classes and $R_\infty$ can be found in
\cite{FelLeonTro,Romankov,BardNasyNes2013,DekimpeGoncalves2014BLMS,FelTroJGT,TroLamp,TroitskyWBranch2019RJMP,Nasybullov2020TMNA}
(see also an overview in \cite{FelNasy2016JGT}).
We consider the following restricted wreath product $G\wr \Z^k = \Sigma \rtimes_\a \Z^k$, where $G$ is a finite Abelian group,
$\Sigma$ denotes $\oplus_{x\in \Z^k} G_x$, and $\a(x)(g_y) =g_{x+y}$. Here $g_x$ is $g \in G \cong G_x$.
The $R_\infty$ property was completely studied for $k=1$ in \cite{gowon1}, for $G=\Z_p$ with a prime $p$ and arbitrary $k$
in \cite{TroLamp}, for $G=\Z_m$ and arbitrary $k$ in \cite{Fraiman}. In all these cases the TBFT$_f$ was proved.
The complexity of the study increases drastically when we move from $k=1$ to $k>1$,
because $\Z$ has only one non-trivial automorphism in contrast with $\Z^k$.
The groups under consideration can be viewed as generalized lamplighter groups.
For a generalization of the lamplighter group in other directions, the twisted conjugacy was considered in \cite{TabackWong2011},
\cite{SteinTabackWong2015}, and other papers.
In the present paper, we prove (Theorem \ref{teo:cases}) that the groups under consideration do not have the $R_\infty$ property in the following three cases:
\begin{enumerate}[1)]
\item all prime-power components of $G$ for $2$ and $3$ have multiplicity at least $2$;
\item there is no prime-power components for $2$ and $k$ is even;
\item all prime-power components of $G$ for $2$ have multiplicity at least $2$ and $k=4s$ for some $s$.
\end{enumerate}
To prove this, we construct corresponding examples, and all of them have finite order. This motivates us to prove the TBFT$_f$ for all groups of the form $G\wr \Z^k$ and their automorphisms of finite order (Corollary \ref{cor:main_TBFT}).
The proof is based on a description of Reidemeister classes of $\f$ as cylindrical sets (Theorem \ref{teo:main_for_TBFT}).
\textbf{Acknowledgment.} The work was supported by the Foundation for the Advancement of Theoretical
Physics and Mathematics ``BASIS''.
\section{Preliminaries}
We start from some general statements about Reidemeister classes of extensions.
Suppose a normal subgroup $H$ of $G$ is invariant under an automorphism $\f:G\to G$
and $p:G\to G/H$ is the natural projection.
Then $\f$ induces automorphisms $\f':H\to H$ and $\widetilde{\f}:G/H \to G/H$.
\begin{dfn}
\rm Denote $C(\f):=\{g\in G \colon \f(g)=g\}$, i.e. $C(\f)$ is the subgroup of $G$, formed by $\f$-fixed elements.
\end{dfn}
We will use the notation $\tau_g(x)=gxg^{-1}$ for an inner automorphism as well as for its restriction on a normal subgroup.
The following important properties were obtained in \cite{FelHill,go:nil1}, see also \cite{polyc,GoWon09Crelle}.
\begin{teo}\label{teo:extensions}
For $G$, $H$, $\f$, $\f'$, and $\widetilde{\f}$ as above, we have the following.
\begin{itemize}
\item[1.] \emph{Surjectivity:} the projection $G\to G/H$ maps Reidemeister classes of $\f$ onto Reidemeister classes of $\widetilde{\f}$, in particular $R(\widetilde{\f})\le R(\f)$;
\item[2.] \emph{Estimation by fixed elements:} if $|C(\widetilde{\f})|=n$, then $R(\f')\le R(\f)\cdot n$;
\item[3.] \emph{Fixed elements-free case:} if $C(\widetilde{\f})=\{e\}$, then each Reidemeister class of $\f'$ is an intersection of the appropriate Reidemeister class of $\f$ and $H$;
\item[4.] \emph{Summation:} if $C(\widetilde{\f})=\{e\}$, then
$R(\f)=\sum_{j=1}^R R(\tau_{g_j} \circ \f')$, where $g_1,\dots, g_R$ are some elements of $G$ such that
$p(g_1),\dots,p(g_R)$ are representatives of all Reidemeister classes of $\widetilde{\f}$, $R=R(\widetilde{\f})$.
\end{itemize}
\end{teo}
Also we will need the following statement from \cite{Jabara} (Lemma 4 and step (2) in the proof of Theorem A'):
\begin{lem}\label{lem:Jab_fin_ord}
Suppose $\G$ is a residually finite group and $\f:\G\to\G$ is an automorphism with $R(\f)<\infty$. Then
$|C(\f)|<\infty$.
\end{lem}
One can find in \cite{Jabara} an estimate of $|C(\f)|$, but we will not use it.
Passing to a semidirect product $\Sigma \rtimes_\a \Z^k $, we have by \cite{Curran2008} that a couple of automorphisms $\f':\Sigma\to\Sigma$
and $\overline{\f}: \Sigma \rtimes_\a \Z^k /\Sigma \cong \Z^k \to \Z^k \cong \Sigma \rtimes_\a \Z^k /\Sigma$ define an automorphism
$\f$ of $\Sigma \rtimes_\a \Z^k$ (not unique) if and only if
\begin{equation}\label{eq:maineq}
\f'(\a(m)(h))=\a(\ov{\f}(m))(\f'(h)),\qquad h\in\Sigma,\quad m\in \Z^k.
\end{equation}
Since $\Sigma$ is abelian, by \cite[p.~207]{Curran2008} the mapping $\f_1$ defined as $\f'$ on $\Sigma$ and by $\overline{\f}$ on $\Z^k\subset \Sigma \rtimes \Z^k $ is still an automorphism. Moreover, from the following commutative diagrams
$$
\xymatrix{0 \ar[r]& \Sigma \ar[r] \ar[d]_{\f'}& \Sigma \rtimes \Z^k \ar[r] \ar@/_/[d]_{\f} \ar@/^/[d]^{\f_1} & \Z^k \ar[r] \ar[d]^{\overline{\f}}& 0\\
0 \ar[r]& \Sigma \ar[r] & \Sigma \rtimes \Z^k \ar[r] & \Z^k \ar[r] & 0}
$$
we have $R(\f)=R(\f_1)$. Indeed, if $R(\overline{\f})=\infty$ then $R(\f)=R(\f_1)=\infty$. If $R(\overline{\f})<\infty$ then $C(\overline{\f})=\{0\}$
and by Theorem \ref{teo:extensions}
$$
R(\f)=\sum_{\mbox{\scriptsize representatives }m\in \Z^k\mbox{\scriptsize of Reidemeister classes of }\overline{\f}} R(\t_m \circ \f') = R(\f_1).
$$
So, without loss of generality in the $R_\infty$ questions (not in Section \ref{sec:tbft}) we will assume
\begin{equation}\label{eq:restric_on_sub}
\Z^k \subset A\wr \Z^k \mbox{ is $\f$-invariant and } \f|_{\Z^k}=\overline{\f}.
\end{equation}
This was discussed briefly in \cite[Lemma 3.5]{gowon1} in a particular case.
\begin{lem}\label{lem:R_needed}
An automorphism $\f: G\wr \Z^k \to G \wr \Z^k$ has $R(\f)<\infty$ if and only if
$R(\overline{\f})< \infty$ and $R(\t_m \circ \f')<\infty$ for any $m \in \Z^k$ (in fact, it is sufficient to verify this for representatives
of Reidemeister classes of $\overline{\f}$).
\end{lem}
\begin{proof}
Suppose $R(\f)<\infty$.
By Theorem \ref{teo:extensions}, we have $R(\overline{\f})<\infty$. Then by Lemma \ref{lem:Jab_fin_ord}, we obtain
$|C(\overline{\f})|<\infty$ (in fact, $|C(\overline{\f})|=1$, because the fixed elements of an automorphism of $\Z^k$ form a subgroup, which is finite only when it is trivial). So, by Theorem \ref{teo:extensions}, $R(\f')<\infty$. Considering $\t_z \circ \f$, which has $R(\t_z \circ \f)=R(\f)<\infty$, instead of $\f$, we obtain in the same way that $R(\t_z \circ \f')<\infty$.
Conversely, having $|C(\overline{\f})|=1$, one can apply the summation formula from Theorem \ref{teo:extensions}.
\end{proof}
\begin{lem}\label{lem:how_to_define}
Suppose $\overline{\f}:\Z^k \to \Z^k$ and $F:G\to G$ are automorphisms.
Then $\f'$ defined by
\begin{equation}\label{eq:how_to_def}
\f'(a_0)=(Fa)_0,\qquad \f'(a_x)=(Fa)_{\overline{\f}(x)}
\end{equation}
satisfies (\ref{eq:maineq}) and so defines an automorphism of $G\wr \Z^k$.
Evidently the subgroups $\oplus G_x$, where $x$ runs over an orbit of $\overline{\f}$, are $\f'$-invariant summands of $\Sigma$.
\end{lem}
\begin{proof}
It is sufficient to prove (\ref{eq:maineq}) on generating elements of the form $a_x$. Then for any $z\in \Z^k$,
$$
\f'(\a(z) a_x)=\f'(a_{x+z})= (Fa)_{\overline{\f}(x+z)}=\a(\overline{\f}(z)) (Fa)_{\overline{\f}(x)}=\a(\overline{\f}(z)) \f'(a_x)
$$
and (\ref{eq:maineq}) is fulfilled. The first equality in (\ref{eq:how_to_def}) is in fact a particular case of the second one.
\end{proof}
It is not difficult to prove (see \cite{FelshtynHill1993CM}) that,
for $\overline{\f}:\Z^k\to \Z^k$ defined by a matrix $M$, one has
\begin{equation}\label{eq:FHZk}
R(\overline{\f})=\# \Coker (\Id -\overline{\f})=|\det(E-M)|,
\end{equation}
if $R(\overline{\f})<\infty$, and $|\det(E-M)|=0$ otherwise.
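As a quick numerical sanity check of (\ref{eq:FHZk}) (our illustration, not part of the original argument), the following Python sketch computes $|\det(E-M)|$ for two simple automorphisms of $\Z^k$; the automorphism $-\Id$ used in Case 1 below gives $R=2^k$, while the identity has $\det(E-M)=0$ and hence infinite Reidemeister number:

```python
import numpy as np

# Reidemeister number of an automorphism of Z^k given by an integer matrix M:
# |det(E - M)| when nonzero, infinite otherwise.
def reidemeister_number(M):
    d = round(np.linalg.det(np.eye(M.shape[0]) - M))
    return abs(d) if d != 0 else float("inf")

k = 3
print(reidemeister_number(-np.eye(k)))   # -Id on Z^3: det(2E) = 2^3 = 8
print(reidemeister_number(np.eye(k)))    # identity: det(E - E) = 0, so inf
```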
\section{Some classes of wreath products without $R_\infty$ property}
\begin{teo}\label{teo:cases}
Suppose the prime-power decomposition of $G$ is $\oplus_i (\Z_{(p_i)^{r_i}})^{d_i}$. Then under each of the following
conditions the corresponding wreath product $G\wr \Z^k$ admits an automorphism $\f$ with $R(\f)<\infty$, i.e. does not have the property $R_\infty$:
\begin{description}
\item[Case 1)] for all $p_i=2$ and $p_i=3$, we have $d_i\ge 2$ (and $d_i$ is arbitrary for primes $p_i>3$);
\item[Case 2)] there is no $p_i=2$ and $k$ is even;
\item[Case 3)] for all $p_i=2$, we have $d_i\ge 2$ and $k=4s$ for some $s$.
\end{description}
\end{teo}
\begin{proof}
In each of these cases we will take an automorphism $\overline{\f}:\Z^k \to \Z^k$ with $R(\overline{\f})<\infty$ (in fact, of finite order)
and define $\f':\Sigma\to \Sigma$ with appropriate properties in accordance with Lemmas \ref{lem:R_needed} and \ref{lem:how_to_define}.
\textbf{Case 1).} In this case we can take $\overline{\f}=-\Id: \Z^k \to \Z^k$ and construct $\f$ similarly to \cite{gowon1}. More specifically, note that $R(\overline{\f})=2^k$
and
define $\f':\Sigma \to \Sigma$ in the following way.
The subgroups $G_x \oplus G_{-x}$ will be invariant subgroups of $\f'$ and we define
$$
\f': G_x \oplus G_{-x} \to G_x \oplus G_{-x}\mbox{ as }
\begin{pmatrix}
0 & \Psi\\
\Psi & 0
\end{pmatrix},
$$
where $\Psi:G\to G$ is defined as a direct sum of blocks of the following types:
\begin{equation}\label{eq:F2F3}
F_2 = \begin{pmatrix}
0 & 1\\
1 & 1
\end{pmatrix} : (\Z_q)^2 \to (\Z_q)^2,
\quad
F_3 = \begin{pmatrix}
0 & 0 & 1 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix} : (\Z_q)^3 \to (\Z_q)^3,
\end{equation}
where $q$ are some $(p_i)^{r_i}$ and for each summand $\left(\Z_{(p_i)^{r_i}}\right)^{d_i}$ of $G$ ($d_i \ge 2$, $p_i=2$ or $p_i=3$) we have $s$ summands $F_2$, if $d_i=2s$, or $s-1$ summands $F_2$ and one summand $F_3$, if $d_i=2s+1$.
For the remaining summands (i.e. for $p_i>3$) we do not need to group summands in the above way and we can consider $F_1:\Z_q\to \Z_q$,
$1 \mapsto m(q)$
where $q=(p_i)^{r_i}$. This $m=m(q)$ should be taken in such a way that
\begin{equation}\label{eq:condit_on_m}
m^2 \mbox{ and } 1-m^2 \mbox{ are invertible in }\Z_q.
\end{equation}
This can be done for $p_i>3$: one can take $m=2$ (and it is impossible for $p_i=2$ or $3$).
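Condition (\ref{eq:condit_on_m}) is elementary to test numerically; the following Python sketch (our illustration, not part of the proof) checks that $m=2$ works for prime powers $q$ with $p>3$ and fails for $p=2$ or $3$:

```python
from math import gcd

# An element is invertible in Z_q iff it is coprime to q, so the condition
# "m^2 and 1 - m^2 invertible in Z_q" becomes two gcd checks.
def condition_holds(m, q):
    return gcd(m * m % q, q) == 1 and gcd((1 - m * m) % q, q) == 1

# m = 2 works for prime powers q = p^r with p > 3 (m^2 = 4, 1 - m^2 = -3)
for q in [5, 7, 25, 49, 11**3]:
    assert condition_holds(2, q)
# ...and fails for p = 2 or 3, as noted in the text
assert not condition_holds(2, 4)   # m^2 = 4 is not invertible mod 4
assert not condition_holds(2, 3)   # 1 - m^2 = -3 = 0 mod 3
```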
By Lemma \ref{lem:how_to_define}, we defined an automorphism $\f$ of $G\wr \Z^k$ in this way
(one may assume (\ref{eq:restric_on_sub}) to have a unique $\f$).
We claim that $R(\t_{z_i} \circ \f')=R(\a(z_i) \circ \f')=1$, $i=1,\dots, 2^k$. Consequently, by Theorem \ref{teo:extensions}, $R(\f)=R(\overline{\f})=2^k<\infty$.
So we need to prove that $\Id_\Sigma-\a(z_i) \circ \f'$ is an epimorphism, because, for Abelian groups, this is evidently the same
as $R(\a(z_i) \circ \f')=1$.
This homomorphism has a decomposition of
$\Sigma$ into invariant subgroups $G_{x}\oplus G_{-x +z_i}$,
because $\a(z_i): G_{-x} \to G_{-x +z_i}$, $\f': G_{-x +z_i} \to G_{x-z_i}$ and $\a(z_i): G_{x-z_i} \to G_{x}$.
Note that the subgroups $G_{x}$ and $G_{-x +z_i}$ coincide if $z_i=2x$ (this corresponds to the case of $G_0$ for $\f'$). Thus it is sufficient to verify the surjectivity for each $G_{x}\oplus G_{-x +z_i}$ and for the exceptional case. Passing to summands of $G$, it is sufficient to verify the surjectivity of
$$
\begin{pmatrix}
-E & F_2\\
F_2& -E
\end{pmatrix}, \quad
\begin{pmatrix}
-E & F_3\\
F_3& -E
\end{pmatrix}
\mbox{ and }
\begin{pmatrix}
-E & F_1\\
F_1& -E
\end{pmatrix} = \begin{pmatrix}
-1 & m\\
m & -1
\end{pmatrix}.
$$
The first two are isomorphisms with the explicit inverses
$$
\left(
\begin{array}{cccc}
-1 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 0 & -1 & 1 \\
0 & 1 & 1 & 0 \\
\end{array}
\right), \qquad
\left(
\begin{array}{cccccc}
-2 & 0 & 1 & 1 & 1 & -1 \\
0 & -1 & 1 & 1 & 0 & 0 \\
1 & 1 & -1 & -1 & 0 & 1 \\
1 & 1 & -1 & -2 & 0 & 1 \\
1 & 0 & 0 & 0 & -1 & 1 \\
-1 & 0 & 1 & 1 & 1 & -1 \\
\end{array}
\right).
$$
For the third one the invertibility follows from (\ref{eq:condit_on_m}).
For the exceptional case we formally do not need to verify the surjectivity, because it can add only a finite number to $R(\f')$,
but we wish to prove our stronger claim (this will be helpful for the TBFT). So we have to prove that
$$
F_2-E, \qquad F_3-E, \qquad m-1
$$
are epimorphisms. This can be done immediately: $\det(F_2-E)=1 \mod 2$, $\det(F_3-E)=1 \mod 2$, and $1-m^2 = (1-m)(1+m)$.
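These matrix identities are easy to verify mechanically; the following Python sketch (ours, merely a check of the displayed matrices) confirms the two explicit inverses and the invertibility of $F_2-E$ and $F_3-E$ modulo $2$:

```python
import numpy as np

F2 = np.array([[0, 1], [1, 1]])
F3 = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])

def block_mat(F):
    # the matrix [[-E, F], [F, -E]] acting on G_x + G_{-x}
    E = np.eye(F.shape[0], dtype=int)
    return np.block([[-E, F], [F, -E]])

inv2 = np.array([[-1, 1, 1, 0],
                 [ 1, 0, 0, 1],
                 [ 1, 0, -1, 1],
                 [ 0, 1, 1, 0]])
inv3 = np.array([[-2, 0, 1, 1, 1, -1],
                 [ 0, -1, 1, 1, 0, 0],
                 [ 1, 1, -1, -1, 0, 1],
                 [ 1, 1, -1, -2, 0, 1],
                 [ 1, 0, 0, 0, -1, 1],
                 [-1, 0, 1, 1, 1, -1]])

# the displayed matrices are indeed integer inverses
assert (block_mat(F2) @ inv2 == np.eye(4, dtype=int)).all()
assert (block_mat(F3) @ inv3 == np.eye(6, dtype=int)).all()

# the exceptional case: F2 - E and F3 - E have odd determinant,
# hence are invertible mod 2 (and mod any 2^i)
assert round(np.linalg.det(F2 - np.eye(2, dtype=int))) % 2 == 1
assert round(np.linalg.det(F3 - np.eye(3, dtype=int))) % 2 == 1
```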
\textbf{Case 2).} Now consider the case of even $k=2t$ and $G$ without a $2$-subgroup.
In this case the construction starts as in \cite{TroLamp}:
we take $\overline{\f}:\Z^{2t} \to \Z^{2t}$ to be the direct sum of $t$ copies of
$$
\Z^2 \to \Z^2,\quad \begin{pmatrix}
u\\ v
\end{pmatrix} \mapsto M \begin{pmatrix}
u\\ v
\end{pmatrix}, \qquad M=\begin{pmatrix}
0 & 1\\
-1 & -1
\end{pmatrix}.
$$
Then $M$ generates a subgroup of $GL(2,\Z)$, which is isomorphic to $\Z_3$ (see \cite[p.~179]{Newman1972book}).
All orbits of $M$ have length $3$ (except the trivial one) and the corresponding Reidemeister number is $\det (E-M)=3$.
Similarly for $\overline{\f}$: the length of any orbit is $3$ (except the zero orbit) and $R(\overline{\f})=3^t$.
Also
\begin{equation}\label{eq:sum_deg_M}
M^2 + M +E= \begin{pmatrix}
-1 & -1\\
1 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 1\\
-1 & -1
\end{pmatrix}+ \begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}= \begin{pmatrix}
0 & 0\\
0 & 0
\end{pmatrix}.
\end{equation}
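The stated properties of $M$ (order $3$, Reidemeister number $3$, and the relation $M^2+M+E=0$) can be checked directly; a short Python verification (our illustration):

```python
import numpy as np

M = np.array([[0, 1], [-1, -1]])
E = np.eye(2, dtype=int)

assert (np.linalg.matrix_power(M, 3) == E).all()   # M has order 3 in GL(2,Z)
assert (M @ M + M + E == 0).all()                  # the relation M^2 + M + E = 0
assert round(np.linalg.det(E - M)) == 3            # Reidemeister number det(E-M) = 3
```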
Now define $\f'$ as a direct sum of actions for $\Z_q$, $q=(p_i)^{r_i}$, $p_i \ge 3$.
For $p_i \ge 3$ choose $m=m_i$ such that
\begin{equation}\label{eq:condit_on_m_3}
m^3 \mbox{ and } 1-m^3 \mbox{ are invertible in }\Z_q.
\end{equation}
This can be done for $p_i\ge 3$: one can take $m=3$ for $p_i=7$ and $m=2$ in the remaining cases (and it is impossible for $p_i=2$).
Define $\f'(a_0)=(m a)_0$ and
$\f'(a_x)=(m a)_{\overline{\f}(x)}$, where $a \in \Z_q \subset G$. So, the corresponding subgroup $\oplus_{g \in \Z^k}(\Z_q)_g \subset \Sigma$ is $\f'$-invariant and decomposed into infinitely many invariant summands
$(\Z_q)_g \oplus (\Z_q)_{\overline{\f}(g)}\oplus (\Z_q)_{\overline{\f}^2(g)}$
isomorphic to $(\Z_q)^3$ (over generic orbits of $\overline{\f}$) and one summand $(\Z_q)_0$ (over the trivial orbit).
Then the corresponding restrictions of $\f'$ and $1-\f'$ can be written as multiplication by
$$
\begin{pmatrix}
0 & 0 & m \\
m & 0 & 0\\
0 & m & 0
\end{pmatrix}, \quad
\begin{pmatrix}
1 & 0 & -m \\
-m & 1 & 0\\
0 & -m & 1
\end{pmatrix}, \quad \mbox{and }
m, \quad 1-m,
$$
respectively. The three-dimensional mappings are isomorphisms by (\ref{eq:condit_on_m_3}). Since an element $\ell$ is not invertible in $\Z_{(p_i)^{r_i}}$ if and only if $\ell = u\cdot p_i$, the invertibility of the one-dimensional mappings follows from
(\ref{eq:condit_on_m_3}) and the factorization $1-m^3=(1-m)(1+m+m^2)$. (This construction gives a more explicit presentation of a part of the proof
of \cite[Theorem 4.1]{TroLamp}.)
For $\t_z \circ \f'$ we have
$$
(\t_z \circ \f') (g_x) = (m g)_{\overline{\f}(x)+z}, \quad
(\t_z \circ \f') (g_{\overline{\f}(x)+z})=(mg)_{\overline{\f}^2(x)+\overline{\f}z+z},
$$
$$
(\t_z \circ \f')g_{\overline{\f}^2(x)+\overline{\f}z+z}=(mg)_{\overline{\f}^3(x)+\overline{\f}^2z+\overline{\f}z+z}=(mg)_x,
$$
because $\overline{\f}^3(x)=x$ and $\overline{\f}^2z+\overline{\f}z+z=0$
by (\ref{eq:sum_deg_M}). So $\t_z \circ \f'$ has the same matrices as $\f'$, but on new invariant summands
$(\Z_q)_x \oplus (\Z_q)_{\overline{\f}(x)+z}\oplus (\Z_q)_{\overline{\f}^2(x)+\overline{\f}z+z}$.
Similarly for the exceptional orbit.
This completes the proof of this case.
\textbf{Case 3):} when $d_i >1$ for $p_i=2$ and $k = 4s$.
Using the cyclotomic polynomial we can define (similarly to the above $M$) an element of order 5 in $GL(4,\Z)$
$$
M_4=
\begin{pmatrix}
0 & 0 & 0 & -1 \\
1 & 0 & 0 & -1 \\
0 & 1 & 0 & -1\\
0& 0& 1 & -1
\end{pmatrix}
$$
(see e.g. \cite{KuzmPavl2002} for an elementary introduction).
For any $k= 4s$, let $M\in GL(k,\Z)$ be the direct sum of $s$ copies of $M_4$.
Let $\overline{\f}:\Z^k \to \Z^k$ be defined by $M$. One can calculate
$$
\det(M_4-E)=5, \qquad \det(M-E)=5^s.
$$
Hence, by (\ref{eq:FHZk}), $R(\overline{\f})=5^s <\infty$.
The length of any non-trivial orbit is $5$, hence an \emph{odd} number.
Similarly to $M$, one can verify that
\begin{equation}\label{eq:4M40}
(M_4)^4 +(M_4)^3+ (M_4)^2+ M_4 + E=0.
\end{equation}
This can be also deduced from the fact that the characteristic polynomial of the ``companion matrix'' of a polynomial $p$ is just $p$.
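The properties of $M_4$ used here (order $5$, $\det(M_4-E)=5$, and relation (\ref{eq:4M40})) can likewise be verified numerically; a Python sketch (our illustration):

```python
import numpy as np

M4 = np.array([[0, 0, 0, -1],
               [1, 0, 0, -1],
               [0, 1, 0, -1],
               [0, 0, 1, -1]])
E = np.eye(4, dtype=int)

assert (np.linalg.matrix_power(M4, 5) == E).all()    # M4 has order 5 in GL(4,Z)
S = sum(np.linalg.matrix_power(M4, j) for j in range(5))
assert (S == 0).all()                                # M4^4 + M4^3 + M4^2 + M4 + E = 0
assert round(np.linalg.det(M4 - E)) == 5             # hence R = 5^s for s copies
```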
For $p$-power components $\Z_{p^i}$ with $p>2$, we define
$\f'$ (as above) by $a_0 \mapsto (p-1) a_0$. Then, for an orbit $u, \overline{\f} u, \dots, \overline{\f}^\gamma u$, we need to verify (for finiteness of $R(\f')$) that $(p-1)^\gamma$ as a homomorphism $\Z_{p^i} \to \Z_{p^i}$ has no non-trivial fixed elements, i.e. $ (p-1)^\gamma \not\equiv 1 \mod p$. This is fulfilled because, for an odd $\gamma$, $(p-1)^\gamma -1 \equiv -2 \not\equiv 0 \mod p$.
For $2$-power components $\Z_{2^i}\oplus \Z_{2^i}$, we define
$\f'$ by $a_0 \mapsto F_2 a_0$ (as in (\ref{eq:F2F3})). Then, for an orbit $u, \overline{\f} u, \dots, \overline{\f}^\gamma u$, we need to verify that $(F_2)^\gamma$ as a homomorphism $\Z_{2^i}\oplus \Z_{2^i}\to \Z_{2^i}\oplus \Z_{2^i}$ has no non-trivial fixed elements.
Here we need not only the fact that $\gamma$ is odd, but its specific value $\g=5$.
In particular, it is not divisible by $3$, the order of $F_2$ modulo $2$. Hence $(F_2)^\gamma=(F_2)^5=(F_2)^2=
\begin{pmatrix}
1& 1\\
1 & 0
\end{pmatrix}
\mod 2$.
It has no non-trivial fixed elements $\mod 2^i$ for any $i$.
For $2$-power components $\Z_{2^i}\oplus \Z_{2^i}\oplus \Z_{2^i}$, we define
$\f'$ by $a_0 \mapsto F_3 a_0$ (as in (\ref{eq:F2F3})). Then, for an orbit of $\overline{\f}$ of length $\gamma$, we need to verify that $(F_3)^\gamma$ as a homomorphism $\Z_{2^i}\oplus \Z_{2^i}\oplus \Z_{2^i}\to \Z_{2^i}\oplus \Z_{2^i}\oplus \Z_{2^i}$ has no non-trivial fixed elements. One can verify, for $i=1$, i.e. for $2^i=2$, that the order of $F_3$ is relatively prime with $5$, namely it is equal to $7$. Moreover, $(F_3)^j$, $j=1,\dots,6$, has no non-trivial fixed elements.
The absence of non-trivial fixed elements is equivalent to $\det((F_3)^j-E) \not\equiv 0 \mod 2$. Then
$\det((F_3)^j-E) \not\equiv 0 \mod 2^i$. Hence, for $i>1$ these automorphisms still have no non-trivial fixed elements.
The elements $(F_3)^{7u}$, $u=1,2,\dots$, typically are not $E$ $\mod 2^i$, but in any case $7u \ne 5$, for any $u$. In fact, we are interested only in properties of $(F_3)^5$.
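The facts about $F_3$ modulo $2$ used above, namely that its order is $7$ and that its proper powers have no nontrivial fixed elements, can be checked by direct computation; a short Python sketch (ours):

```python
import numpy as np

F3 = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
E = np.eye(3, dtype=int)

# The order of F3 mod 2 is 7 (coprime to the orbit length 5)...
powers = [np.linalg.matrix_power(F3, j) % 2 for j in range(1, 8)]
assert (powers[6] == E).all()
assert all(not (P == E).all() for P in powers[:6])

# ...and (F3)^j - E has odd determinant for j = 1..6, i.e. no nontrivial
# fixed elements mod 2^i for any i; in particular this holds for j = 5.
for P in powers[:6]:
    assert round(np.linalg.det(P - E)) % 2 == 1
```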
Collecting together these homomorphisms defined on the summands, we obtain, as in the first two cases, $\f'$ with the desired properties. It remains only to verify the surjectivity of $\t_z\circ \f'$. This can be done quite similarly to the end of Case 2) with the help of
(\ref{eq:4M40}).
\end{proof}
\section{Twisted Burnside-Frobenius Theorem}\label{sec:tbft}
\begin{teo}\label{teo:main_for_TBFT}
Suppose that $\f$ is an automorphism of the restricted wreath product $G\wr \Z^k=\oplus_{m\in \Z^k} G_m \rtimes_\a \Z^k$, where $G$ is a finite abelian group.
Suppose that $\f$ is of finite order. Then $R(\f')$ is $1$ or $\infty$.
\end{teo}
\begin{cor}\label{cor:main_TBFT}
In particular, $\f$ has the TBFT$_f$ property.
\end{cor}
\begin{proof}[Proof of Corollary]
By Lemma \ref{lem:R_needed}, $R(\f)<\infty$ implies
$R(\f')<\infty$. Hence, by Theorem \ref{teo:main_for_TBFT}, $R(\f')=1$.
Considering $\t_z \circ \f$ instead of $\f$ from the very beginning, we see that $R(\t_z \circ \f')=1$, for any $z\in \Z^k$.
Thus, by Theorem \ref{teo:extensions}, Reidemeister classes $\{g\}_\f$ of $\f$ are pull-backs of Reidemeister classes
$\{z\}_{\overline{\f}} $ of $\overline{\f}$ under the natural projection $\pi:G\wr \Z^k \to \Z^k$, i.e. $\{g\}_\f=\pi^{-1}(\{\pi(g)\}_{\overline{\f}})$. So, if classes of $\overline{\f}$ are separated by an epimorphism $f: \Z^k \to F$ onto a finite group $F$, then classes of $\f$ are separated by $f\circ\pi$. It remains to use the equivalence between TBFT$_f$ and
separability of Reidemeister classes in the case of finite Reidemeister number (see \cite{polyc} and
\cite{FelLuchTro}).
\end{proof}
\begin{rmk}
\rm
In particular, this covers all automorphisms which were considered in \cite{TroLamp}. Indeed, it was proved there that all orbits are finite and
their length is equal to the length of the orbits of $\overline{\f}$. But the structure of $\Z^k$ implies that $\overline{\f}$ has finite order (consider the generators). Hence, $\f'$ and $\f$ are of finite order.
\end{rmk}
\begin{proof}[Proof of Theorem]
Suppose $R(\f')>1$. Then there exists an element $\sigma\in \Sigma$ such that
$\sigma\not \in \Im(\Id-\f')$. Moreover, $\sigma\not \in \Im(\Id-\f'_\sigma)$,
where $\f'_\sigma$ is the restriction of $\f'$ onto the $\f'$-invariant subgroup $\Sigma_\sigma$ generated by $\sigma$. In particular, $R(\f'_\sigma)>1$. By the supposition $\Sigma_\sigma$ is a finite group with generators $\sigma,\f'(\sigma),\dots,(\f')^s(\sigma)$ for some $s$. Hence, $\f'_\sigma$ has
a nontrivial fixed element $\sigma_0$, $\f'_\sigma(\sigma_0)=\sigma_0$ and
$\sigma_0\ne 0$. For an element $m\in\Z^k$ consider the orbit
$$
\a(m)\sigma_0,\quad \f'(\a(m)\sigma_0)=\a(\overline{\f}(m))\sigma_0,\qquad
(\f')^t(\a(m)\sigma_0)=\a(\overline{\f}^t(m))\sigma_0,\qquad
\overline{\f}^{t+1}(m)=m.
$$
Then $(\f')^{t+1}(\a(m)\sigma_0)=\a(m)\sigma_0$. Passing from $m$ to $nm$,
$n\in \Z$, $m\in \Z^k$, if necessary, we can assume that the supports in $\Z^k$ of
$\sigma_0$, $\a(\overline{\f}^j(nm))\sigma_0$, $j=0,\dots,t$, do not intersect.
Then $\sum_{j=0}^t \a(\overline{\f}^j(nm))\sigma_0$ is a fixed element of
$\f'$, which is distinct from $0$ and $\sigma_0$.
Increasing $n$ ``in sufficiently large steps'' we obtain infinitely many distinct fixed elements in the same way.
Then by Lemma \ref{lem:Jab_fin_ord}, $R(\f')=\infty$.
\end{proof}
\def\cprime{$'$}
\def\dbar{\leavevmode\hbox to 0pt{\hskip.2ex \accent"16\hss}d}
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\section{The Fat Tails Problem}
Tetlock et al. (2022) \cite{tetlock2022false}, in their criticism of claims by a paper in this journal titled ``On single point forecasts for fat-tailed variables'' (Taleb et al., 2022 \cite{taleb2020single}), insist that discriminating between a binary probability and a continuous distribution is a false dichotomy and that binary probabilities derived from expert forecasting tournaments can provide information on tail risk; they add some claims about a collaboration with the first author and a ``challenge''.
We apologize for not answering most of their points, as these are already amply covered in two papers in this very journal, including the one they are criticizing, \cite{taleb2020single}, and the more formal \cite{taleb2020differences}. Alas, ``probability'' is a mathematical concept that does not easily accommodate verbal discussions and requires a formal treatment, which necessitates precise definitions.
At the gist of what we referred to as ``the so-called masquerade problem'' is the following conflation; we simplify from \cite{taleb2020differences}, using a continuous distribution for ease of exposition:
Let $K \in \mathbb{R}^+$ be a threshold, $f(.)$ a density function for a random variable $X \in \mathbb{R}^+$ , $P_K=\mathbb{P}(X>K) \in [0,1]$ the probability of exceeding it, and $g(x)$: $\mathbb{R}^+ \to \mathbb{R}$, an impact function. Let $G_K$ be a partial expectation of $g(.)$ for the function of realizations of $X$ above $K$:
$$G_K=\int_K^{\infty } g(x) f(x) \, \mathrm{d}x,$$
and for clarity let us write the survival function (that is, the complementary cumulative distribution function, or complementary CDF) at $K$:
$$ P_K=\int_K^{\infty } f(x) \, \mathrm{d}x.$$
\begin{figure}[h]
\includegraphics[width=\columnwidth]{tetlockgraph1.pdf}
\caption{The inverse survival function (complementary quantile function) corresponding to a complementary CDF ($P_K$) is unbounded, while the complementary CDF itself is bounded; it is extremely concave for tail probabilities and compounds the estimation errors on $P_K$. Simply put, the transformation reverses the sign of the second derivative and compounds the error. We show how errors on $P_K$ translate into larger and larger values of $K$, possibly infinite.}\label{g1}
\end{figure}
The error comes from conflating the properties of $G_K$ with those of $P_K$, often associating $P_K$ with some constant representing the presumed impact associated with the threshold $K$.
The intuition of the difference can be shown as follows: assuming $g(x)=x$, for $X$ a
random variable with finite first moment, we have, focusing on the positive domain, generalizing the Tail Probability Expectation Formula,
\begin{equation}
G_K= \underbrace{K \;P_K}_{\text{Prob times impact at threshold}}+ \underbrace{\int_K^\infty P_x\; dx}_{\text{additional term}}, \label{identity}
\end{equation}
with a second term that can dominate the first, particularly under heavy-tailed distributions.
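To make identity (\ref{identity}) concrete, consider a Pareto distribution with density $f(x)=\alpha x^{-\alpha-1}$ on $[1,\infty)$, for which $P_x = x^{-\alpha}$ and both terms have closed forms. The following Python sketch (our illustration, with arbitrary example values of $\alpha$ and $K$) verifies the identity and shows the ``additional term'' dominating as the tail becomes heavier:

```python
# Sanity check of the identity G_K = K*P_K + int_K^inf P_x dx for a Pareto
# density f(x) = a*x**(-a-1) on [1, inf), where P_x = x**(-a).
def terms(a, K):
    G_K = a / (a - 1) * K ** (1 - a)      # closed form of int_K^inf x f(x) dx
    first = K * K ** (-a)                 # "prob times impact": K * P_K
    second = K ** (1 - a) / (a - 1)       # "additional term": int_K^inf P_x dx
    assert abs(G_K - (first + second)) < 1e-12
    return first, second

# Heavy tail (a near 1): the additional term dominates...
first, second = terms(1.2, 10.0)
print(second / first)   # ratio 1/(a-1), about 5 here
# ...while for a thinner tail (a = 3) it does not
first, second = terms(3.0, 10.0)
print(second / first)   # about 0.5
```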
As explained in the two referenced papers, $P_K$ \textit{as a random variable}, being bounded between $0$ and $1$, necessarily has thin tails, with finite mean, variance, and all higher moments. By the probability integral transform, its unconditional probability distribution is the Uniform $\mathcal{U}(0,1)$ --and its conditional distribution is usually treated, particularly in the Bayesian literature, as a Beta (a special case of which is the Uniform); the Beta distribution can accommodate practically all shapes of doubly bounded unimodal distributions. A sum of such bets rapidly becomes Gaussian.
\begin{remark}[Moments]
$P_K$ corresponds to the zeroth moment and is always thin-tailed. $G_K$ maps to higher moments, up to the infinite moment (i.e. the extremum) dealt with in extreme value theory.\footnote{Consider the expectation of the $p^{th}$ moment $\mathbbm{E}(x^p)$. $P_K$ corresponds to $p=0$ and the expected maximum to $\lim_{p \to \infty}\mathbbm{E}(x^p)$. In general, risk management concerns extrema, another point of divergence with Tetlock et al.}
\end{remark}
As discussed extensively in Taleb (1997), reflecting the author's experiences as a derivatives market-maker \cite{taleb1997dynamic}, one fails to "hedge" the other in practice (and, of course, in theory). For instance, a rise in skewness of the distribution will tend to increase one side ($G_K$) while decreasing the other ($P_K$): simply, the number of realizations above $K$ drops, but their impact becomes larger.\footnote{Tetlock et al states that Taleb and Tetlock (2013) claims that the two methods are "complementary". Our understanding of the latter paper is that it says the exact opposite: there is no such complementarity for fat-tailed variables, which is what this discussion about "masquerade" is about. This is the reason why the first author (Taleb) refused to be involved in the IARPA forecasting exercise.}
And if one does not hedge the other, then being "good at predicting $P_K$" provides \textit{no} information on $G_K$.
\begin{remark}[Probability Classes]
If $P_K$ and $G_K$ are not in the same probability class, that is, while $P_K$ is always thin tailed, if $G_K$ is not thin-tailed, then one cannot be a \textit{practical} proxy for the other.
\end{remark}
The field of extreme value theory was designed to deal with the issue. Basically, "probability" is not a tangible object like a tomato; it is, mathematically, a kernel inside an integral (or a summation), inseparable from other integrands, and one should avoid drowning it with verbalism. This point applies no matter the probability interpretation (Bayesian, frequentist, propensional, or subjectivist). Furthermore, science is not about precise measurements of exceedance probability, but about understanding properties in comprehensive and useful ways. As explained in the paper criticized by Tetlock et al (2022), one does not handle pandemics via forecasting tournaments and reliance on champions for single point forecasts, but by getting full distributional properties, particularly the shape in the tails. Decisions must be informed by the shape of the total distribution, and some understanding of the dynamics involved in generating such a distribution (multiplicative effects cause thick tails while additive ones tend to produce benign stochastic outcomes).
Just as warning is not predicting, understanding distribution classes and tail properties is not forecasting. Furthermore, the language of "false positive", while useful in medicine and similar applications based on signal, is not useful in risk and insurance, which are based on distributional considerations.
\begin{remark} [Gambling]
It is worth noting that binary options on financial instruments (that is, the trading of $P_K$ or $1-P_K$) proved of little economic value and are not considered an investment in the U.S. and the European Union; they are banned by most corresponding regulators, as they are considered gambling devices. The European Securities and Markets Authority (ESMA) has disallowed retail dealing in binary options.
These binaries were also traded at Lloyds, until banned by U.K. legislation with the 1909 Marine Insurance Act.
\end{remark}
\textbf{Note on "dichotomies":} Tetlock et al (2022) seem to mix the "dichotomy" between binary and full payoffs with another distinction, namely that probability estimates help to flag tail risk. Our representation can accommodate both with the function $g(.)$, which, as mentioned earlier, can reflect the infinite moment, i.e. the extremum.
\section{A New Result}
At the Global Uncertainty Reading Group discussion around Tetlock et al (2022), on Dec 1, 2022, a new useful result emerged, which we find worth communicating\footnote{This result can be useful in financial risk management, particularly the mapping between "VaR" (Value at Risk, which maps to $K$ for a set probability $P_K$ of losses above that threshold) and expected shortfall, "CVaR" (which maps to $G_K$, that is, includes the impact of losses).}.
\begin{remark}[Events are not defined]
A well known problem with heavy tails is that, at the core, in that class of distributions, "events" are not defined verbally: a "war" can have 200 or a million casualties, so it does not have a quantitative meaning. Furthermore, under estimation error, setting a precise threshold no longer maps to a precise probability, and vice versa.
\end{remark}
As illustrated in Fig. \ref{g1}, the error in the evaluation of the probability $P_K$ can translate into an explosive value for the corresponding $K$; the more fat-tailed the distribution, the more explosive the corresponding value.
There is no space for a general proof, so we shall provide one for any distribution that ends (for large values) with Paretan tails, which is the standard case. Let us assume the probability $P_K$ follows a Beta distribution $\beta(a,b)$ (both parameters $>0$; as mentioned, the unconditional Uniform corresponds to $a=b=1$).
\begin{equation*}
f_{P_K}(p)= \frac{p^{a-1} (1-p)^{b-1}}{B(a,b)},
\end{equation*}
$0<p<1$, where $B(.,.)$ is the standard Beta function.
The mean and variance will be $M_{P_K}= \frac{a}{a+b}$, $V_{P_K}=\frac{a b}{(a+b)^2 (a+b+1)}$.
Now assume the underlying distribution for $X$, where $X>K$, lies in the strong Pareto basin, meaning $P(X>K) = L^{\alpha} K^{-\alpha}$, where $\alpha$ is the tail index and $L$ a scaling constant --this is general for large values of $X$ under all fat-tailed distributions.
The inverse complementary CDF (the inverse survival function) can be expressed as:
$ K=L \left(P_K\right)^{-1/\alpha }\text{ if }0\leq P_K\leq 1$.
If the probability $P_K$, taken as a random variable --that is, $1-\mathrm{CDF}$, one minus the cumulative distribution function-- follows a Beta distribution $\beta(a,b)$, then $K$, the corresponding inverse complementary CDF, has for density:
\begin{equation}
f_K(k)= \frac{\alpha \, k^{-a \alpha -1} L^{a \alpha } \left(1-k^{-\alpha } L^{\alpha }\right)^{b-1}}{B(a,b)}
,\end{equation}
with mean
$$M_K=\frac{L \, \Gamma \left(a-\frac{1}{\alpha }\right) \Gamma (a+b)}{\Gamma (a) \Gamma
\left(a+b-\frac{1}{\alpha }\right)},$$
and variance
$$V_K=\frac{L^2 \,\Gamma (a+b) \left(\frac{\Gamma (a)
\Gamma \left(a-\frac{2}{\alpha }\right)}{\Gamma \left(a+b-\frac{2}{\alpha
}\right)}-\frac{\Gamma \left(a-\frac{1}{\alpha }\right)^2 \Gamma (a+b)}{\Gamma
\left(a+b-\frac{1}{\alpha }\right)^2}\right)}{\Gamma (a)^2}.$$
The proof is done via the standard Jacobian Method for the transformation of probability distributions.
As we can see, the first moment exists only if $\alpha>\frac{1}{a}$ and $\alpha>\frac{1}{a+b}$; the second moment exists only if $\alpha>\frac{2}{a}$ and $\alpha>\frac{2}{a+b}$; more generally, the $n^{th}$ moment exists only if $\alpha>\frac{n}{a}$ and $\alpha>\frac{n}{a+b}$.
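The transformation can also be simulated directly: draw thin-tailed Beta errors for $P_K$ and map them through $K = L\,P_K^{-1/\alpha}$. The sketch below uses illustrative parameter values of our own choosing and checks the sample mean against the closed-form $M_K$ above; for $\alpha<\frac{1}{a}$ the same mapping yields an infinite mean, and the sample mean never settles.

```python
import numpy as np
from math import gamma

# Sketch of the transformation behind the density of K: if the estimated
# exceedance probability P_K ~ Beta(a, b) and the tail is Paretan, then
# K = L * P_K**(-1/alpha).  Parameter choices below are illustrative.
rng = np.random.default_rng(7)
a, b, L, alpha = 2.0, 2.0, 1.0, 2.0   # alpha > 1/a, so the mean of K exists

p = rng.beta(a, b, size=1_000_000)    # thin-tailed errors on the probability
k = L * p ** (-1.0 / alpha)           # corresponding threshold values

# Closed-form mean M_K from the text
M_K = L * gamma(a - 1 / alpha) * gamma(a + b) / (gamma(a) * gamma(a + b - 1 / alpha))

print(k.mean(), M_K)   # small, controlled error on p; finite mean for K here

# For alpha < 1/a the same transformation has an infinite mean: the sample
# mean of k keeps growing with the number of draws.
```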
\begin{remark}[Error Propagation]
While the error on the probability can be small and controlled, the error on the corresponding quantity under consideration can be infinite.
\end{remark}
We note that the tail index $\alpha$ for pandemics (as addressed at length in the paper criticized by Tetlock et al. (2022)) is well below $1$. The same applies to wars, which means that when it comes to conflicts, forecasts for tail events are not compatible with probability theory.
\section{A Rather Unscientific "Challenge"}
\begin{quotation}
"We challenge Taleb et al. (2022) to be equally transparent about the performance of their tail-risk hedging strategies --a controversial topic (Brown, 2020)."
\end{quotation}
We are surprised to see such a remark coming from professional evidence-based researchers: requesting the single track record of a tail hedging program as a back-up for a claim about the mathematical inadequacy of using a binary forecast for, say, Covid-19 or similar events under fat-tailed distributions. Said tail-hedging strategy consists in capturing the \textit{difference} between idiosyncratically selected option prices in the market and subsequent market jumps. (Incidentally, Professor Tetlock appears to conflate tail events --which take place in the tails of any distribution-- with the fat-tailedness attribute of statistical distributions.) And, on top, such a request is made to the author of an entire book, \textit{Fooled by Randomness}, about the futility of such claims. To repeat the famous disparagement by the economist Jagdish Bhagwati of the claim by the speculator George Soros that he "falsified the random walk" \cite{bhagwati87}, we find it highly unscientific to use a single track record to make any general claim --this bizarre demand on the part of professional researchers is no different from anecdotal claims ($n=1$) used by medical charlatans. In addition, trading records are not like points in soccer games, particularly when they can hide tail risks.\footnote{Since Tetlock et al (2022) uncritically cites a web \textit{opinion} article by Aaron Brown, we would like to debunk the claim in it that the performance record is \textit{not} available: not being a retail product, it has been continuously available to what the Securities and Exchange Commission (S.E.C.) defines as \textit{qualified} investors, not Twitter activists, and Mr. Brown (who by his own disclosure had a severe conflict of interest) violated, willingly or unwillingly, journalistic standards. For it turned out that Brown never did the fact checking, and never asked to see the \textit{audited} returns. We also note that only dimensionless returns are to be compared.}
So we prefer the more robust challenge which, under these circumstances, becomes fair. We believe that academia is about the search for knowledge and understanding of the world, not a commercial enterprise. The same holds for societal risk management, which is about the public good and does not issue precise point forecasts (recall that Taleb et al. (2022), as its title indicates, is against single point prediction in some domains). So the burden is on forecasters to forecast. And we fail to see how a practical project with remarkable forecasting skills could work for government but not for the private sector. The superforecasting project is just about such betting. So, as much as we would have preferred to leave the question unspoken, here we are obliged to put it in the terms of the old adage: "if they claim to be so good at forecasting, and their forecasts are actionable and related to reality, why aren't they so rich?" --in other words, why do they depend on taxpayer funds and, possibly, tax deductible (that is, charitable) contributions to finance such forecasts?
\section{Conclusion: Some More Evidence Required}
Finally, can Tetlock et al. prove that better estimates of $P_K$ can provide \textit{real} benefits to decision makers?
In addition to the problems in the financial domain mentioned above, we completely fail to see the link in insurance --and in event risk in general. In our experience as risk and insurance practitioners, decision makers are usually not well equipped to deal with probabilistic information (compounding the difficulty in translating probabilistic information into practical effects)\footnote{There is the other problem that payoffs are in expectation space, not in frequency space. For instance, hedge funds with the best track records turned out to be the most vulnerable to tail events in 2008, see \cite{taleb2020statisticalbook}.}. For instance, in a military context, if we refine the estimate of a South China Sea conflict from, e.g., 15\% to 17\%, can Tetlock et al prove that this makes a difference in any practical situation?
Also, one is allowed to wonder why the superforecasting project is not applied to sports and election forecasts, where 1) $P_K$ applies, 2) it is compatible with probability theory, 3) it provides repeatable tests with overly abundant data and, centrally, 4) it is "bankable" (that is, translatable into dollars and cents).
We conclude with the following recommendation: in future work, it would be helpful if Tetlock et al. provided more rigorous backing of their claims about the link between $P_K$ and $G_K$. For, as it stands, we see neither theoretical nor practical benefits to that "superforecasting" enterprise.
\bibliographystyle{IEEEtran}
\section{Introduction}
As 3D object detection gets more popular and new datasets are published \cite{data:matter,data:apple,data:google},
evaluation metrics gain in importance. The most common one is Intersection over Union (IoU). It is well known from object detection on two-dimensional data such as images.
Existing implementations of its three-dimensional counterpart usually neglect one or more degrees of freedom. Examples are implementations that work with axis-aligned bounding boxes or only consider a rotation around the z-axis \cite{paper:3.5DIoU}. This oversimplifies real-world problems, as objects can generally rotate in any direction.\\
To the best of our knowledge, although the strategy of computing IoU has already been mathematically generalized \cite{paper:generalization}, we provide and derive the first closed-form analytic solution for the case of 3D bounding boxes with full degree of freedom.\\
We further derive an analytic solution for the volume-to-volume distance (v2v) of two 3D bounding boxes. The metric v2v is defined as the shortest distance between the hull of one volume and the hull of another volume. Both metrics are visualized in Fig. \ref{fig:metrics}.\\
For both metrics we provide the first open source implementation as a standalone python function, as well as an extension to the Open3D library \cite{famew:o3d} and a ROS-node \cite{famew:ros}.\\
This paper is structured as follows. First, we discuss related work. In Section 3, we mathematically define bounding boxes and review the existing metrics. Afterwards, the solution for the volumetric IoU is stated and briefly compared to its point-based counterpart. Subsequently, v2v is presented and a combined positive continuous metric called Bounding Box Disparity (BBD) is proposed.
\begin{figure}%
\centering
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=0.96\linewidth]{pics/IoU.png}
\caption{}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=0.96\linewidth]{pics/V2V.png}
\caption{}
\label{fig:sub2}
\end{subfigure}
\vspace{-0.8em}
\caption{Similarity metrics for two 3D bounding boxes. \\$\; $(a) Intersection over Union (IoU) \\(b) volume-to-volume distance (v2v)}
\label{fig:metrics}
\vspace{-1.em}
\end{figure}
\section{Related work}
Calculating the intersection of two three-dimensional volumes is a common task in rendering, CAD pipelines and deep learning \cite{paper:hands}. In open source frameworks such as blender~\cite{famew:blender} it is possible to use an accurate solver. In practice, fast numerical solutions are used in order to speed up the computation, with the disadvantage of losing accuracy. Instead of using cuboids directly, the frameworks usually have in common that only triangular-mesh representations are allowed for boolean operations such as intersection.\\
Hence, with an additional step of meshifying bounding boxes one can define a volumetric intersection. This adds unnecessary computation of multiple equations and conversions. For reference purposes, we also provide a blender-based implementation of this method.
When used in custom code this adds a big dependency, since blender and its API are developed for graphical usage with complex scenes rather than for evaluating basic linear equations.\\
The Open3D framework, which is often used in the context of machine learning, has neither an implementation of an exact intersection for meshes nor a solution for oriented bounding boxes.\\
Benchmarks for 3D object detection such as \cite{data:standford} often rely on evaluation metrics that use axis aligned bounding boxes or bounding boxes with only a rotation around the z-axis \cite{data:sunrgbd, data:scannet}. Other benchmarks like \cite{data:kitti} use a combination of 2D IoU and an additional orientation value.
To the best of our knowledge there are no full-degree of freedom bounding box benchmarks published for the 3D case yet.
\begin{figure}
\centering
\includegraphics[trim=0cm 12.06cm 26.4cm 0.1cm, clip, width=0.55\linewidth]{pics/bb.pdf}
\caption{Bounding box definitions.}
\vspace{-.6em}
\label{fig:bb_def}
\vspace{-.6em}
\end{figure}
\section{Methods}
\subsection{Definition of a Bounding Box}
First, we have to define a bounding box $i$, shown in Fig.~\ref{fig:bb_def}. It can be represented by a transformation matrix $T^i$, consisting of its rotation $R^i$, position $p^i$ and dimension $d^i$
\begin{equation}
T^i = \left[ \begin{array}{c|c}
\scalebox{1.2}{$d^i R^i$} & \scalebox{1.2}{$p^i$} \\
\hline
0\;0\;0 & 1
\end{array} \right],
\end{equation}
where the position corresponds to the cuboid center. Its corners $C_{1-8}^i$ can then be defined as follows:
\begin{equation}
C_{1-8}^i = T^i u_{1-8}
\end{equation}
with $u_{1-8}$ being defined as the corners of the unit cube centered around the origin of the coordinate system.
\begin{equation}
\scalebox{0.7}{%
$u_{1-8} = \begin{bmatrix}
-0.5 & 0.5 & -0.5 & -0.5 & 0.5 & -0.5 & 0.5 & 0.5 \\
-0.5 & -0.5 & 0.5 & -0.5 & 0.5 & 0.5 & -0.5 & 0.5 \\
-0.5 & -0.5 & -0.5 & 0.5 & 0.5 & 0.5 & 0.5 & -0.5
\end{bmatrix}$
}
\end{equation}
One can further define the edges $e_{1-12}^i$ and faces $f_{1-6}^i$ of a cube as lists of connected corners, whose elements correspond to column indices in the matrix of corners.
\begin{align}
e_{1-12} = &[[1, 2], [2, 8], [3, 8], [1, 3], [4, 7], [7, 5], \nonumber \\
& [6, 5], [4, 6], [1, 4], [2, 7], [8, 5], [3, 6]] \\
f_{1-6} = &[[1, 2, 3], [1, 2, 4], [1, 3, 4], \nonumber\\
& [5, 6, 7], [5, 6, 8], [5, 7, 8]]
\end{align}
Lastly, the normal vectors $n_{1-6}^i$ of the faces are defined by the rotation $R^i$. Each column $r_{j}^i$ corresponds to two face normals~-~once in the positive and once in the negative direction.
\begin{align}
R^i &= [r_1^i,r_2^i,r_3^i] \nonumber \\
n_1^i &= - r_3^i \qquad n_2^i = - r_2^i \qquad n_3^i = - r_1^i \\
n_4^i &= + r_3^i \qquad n_5^i = + r_2^i \qquad n_6^i = + r_1^i \nonumber
\end{align}
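The definitions above can be sketched in a few lines. The helper names below are ours, not from a particular library; the corner construction follows $C^i = T^i u$ and the normals follow the column convention just stated.

```python
import numpy as np

# Minimal sketch of the bounding-box definition: corners C^i = T^i u and
# face normals n^i taken from the columns of the rotation matrix.
def box_corners(d, R, p):
    """Columns of the returned 3x8 array are the corners C_{1-8}."""
    u = 0.5 * np.array([[-1,  1, -1, -1,  1, -1,  1,  1],
                        [-1, -1,  1, -1,  1,  1, -1,  1],
                        [-1, -1, -1,  1,  1,  1,  1, -1]], dtype=float)
    T = np.eye(4)
    T[:3, :3] = R @ np.diag(d)                  # the d^i R^i block
    T[:3, 3] = p
    u_h = np.vstack([u, np.ones((1, 8))])       # homogeneous coordinates
    return (T @ u_h)[:3]

def box_normals(R):
    """Face normals n_{1-6}: -r3, -r2, -r1, +r3, +r2, +r1."""
    return np.hstack([-R[:, [2, 1, 0]], R[:, [2, 1, 0]]])
```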
\subsection{Metrics}
There are several metrics to consider when comparing two bounding boxes (BB) \cite{paper:metrics,paper:metrics2}:
\begin{itemize}[leftmargin=1em]
\itemsep-0.5em
\item absolute/quadratic difference in position
\item absolute/quadratic difference in size
\item rotation (can affect difference in size \cite{paper:orient}), i.e.:
\vspace{-0.5em}
\subitem - angular-difference of euler angles
\vspace{-0.5em}
\subitem - quaternion-distance
\vspace{-0.5em}
\subitem - distance of rotation matrices
\item IoU of point-cloud points within the BBs \cite{paper:pointsIoU}
\item volumetric IoU
\item volume-to-volume distance
\end{itemize}
All of the above are part of our open source implementation. Only the last two are discussed in the following.
\subsection{Volumetric IoU}
\subsubsection{Points of interest}
We now define the possible corner points $POI$ of the intersection. There are four types of candidates:
\begin{itemize}[leftmargin=1em]
\itemsep-0.5em
\item Corners of the first cuboid laying inside the second cuboid
\item Corners of the second cuboid laying inside the first cuboid
\item Intersections of an edge of the first cuboid with a plane of the second cuboid
\item Intersections of an edge of the second cuboid with a plane of the first cuboid
\end{itemize}
In order to compute them, we formulate equations for each line and plane corresponding to the edges and faces of both cubes.
The lines can simply be defined as
\begin{equation}
L_{k}^i \left(t_l\right) = e_{k,1}^i + \underbrace{\left( e_{k,2}^i - e_{k,1}^i\right)}_{\substack{m_k^i}} t_l
\end{equation}
with $m_k^i$ describing the line's direction.
The planes, in turn, can be defined by three corner points.
\begin{align}
P_{k}^i \left(t_p\right) = f_{k,1}^i + \underbrace{\begin{bmatrix}
f_{k,2}^i - f_{k,1}^i & f_{k,3}^i - f_{k,1}^i
\end{bmatrix}}_{\substack{N_k^i}} \begin{bmatrix}
t_{p,1} \\
t_{p,2}
\end{bmatrix}
\label{eq:plane}
\end{align}
The $3\times2$-matrix $N_k^i$ can also be expressed by the two in-plane directions perpendicular to the face normal $n_k^i$, scaled according to $d^i$.\\
For all possible line-plane combinations of the two cuboids, we get $144$ equations, each yielding a possible point of interest. The number of actual points depends on how many lines are parallel to the planes or lie within them, which in turn depends on how the cubes are rotated relative to each other.\\
Next, we calculate the corners $C^1,C^2$ of both cuboids and add them to the list of points.
\subsubsection{Checking Validity of the Points}
Now every point in the solution list has to be checked for whether it is a valid corner point of the intersection. For this, the point has to be part of both cubes.\\
This can be done by transforming the points into the coordinate system of $T^1$.
\begin{equation}
POI = {T^1}^{-1} POI
\end{equation}
The corner points of the first cube $C_1$ now correspond to the ones of a unit cube $u_{1-8}$. Thus, every point has to be checked if its coordinates are smaller than $0.5$ and greater than $-0.5$.
\begin{equation}
valid = (POI < 0.5)\, \& \,(POI > -0.5)
\label{eq:validity}
\end{equation}
After pruning the invalid points from the list, we transform everything into the coordinate system of the second cube $T^2$.
\begin{equation}
POI = {T^2}^{-1} T^1 POI
\end{equation}
Repeating the validity check from equation \ref{eq:validity} leaves us with all relevant points.
\subsubsection{Calculating the Volume}
We now pick every valid point and construct the convex hull of this set of points, which corresponds to the intersection of the cuboids. If the hull cannot be constructed, because no point is valid or the points all lie in a plane, then there is no intersection, thus $IoU=0$ holds. \\
In all other cases, the volume of the hull $V_I$ can be computed by summation of the signed volumes of tetrahedrons \cite{math:signedV}. \\
As a last step we calculate the volume of the union $V_U$
\begin{equation}
V_U = d_{1}^1 d_{2}^1 d_{3}^1 + d_{1}^2 d_{2}^2 d_{3}^2 - V_I
\end{equation}
such that $IoU$ is defined as
\begin{equation}
IoU = \frac{V_I}{V_U}.
\end{equation}
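The full pipeline --candidate points, validity check, and hull volume-- can be sketched as follows. This is a simplified illustration with helper names of our own: it uses the point-normal form of the face planes (equivalent to the parametric form above) and relaxes the validity check with a small tolerance so that boundary points such as edge-face intersections are retained.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Unit-cube corners (columns) and the edge list from the text (0-indexed).
U = 0.5 * np.array([[-1,  1, -1, -1,  1, -1,  1,  1],
                    [-1, -1,  1, -1,  1,  1, -1,  1],
                    [-1, -1, -1,  1,  1,  1,  1, -1]], float)
EDGES = [(0, 1), (1, 7), (2, 7), (0, 2), (3, 6), (6, 4),
         (5, 4), (3, 5), (0, 3), (1, 6), (7, 4), (2, 5)]

def corners(d, R, p):
    return R @ np.diag(d) @ U + np.asarray(p, float)[:, None]

def inside(pts, d, R, p, tol=1e-9):
    """Map points into unit-cube coordinates; accept boundary points."""
    local = np.diag(1.0 / np.asarray(d, float)) @ R.T @ (pts - np.asarray(p, float)[:, None])
    return np.all(np.abs(local) <= 0.5 + tol, axis=0)

def edge_face_points(dA, RA, pA, dB, RB, pB):
    """Intersections of edges of box A with the face planes of box B."""
    dB, pB = np.asarray(dB, float), np.asarray(pB, float)
    CA, pts = corners(dA, RA, pA), []
    for j in range(3):
        n = RB[:, j]
        for s in (-0.5, 0.5):
            f0 = pB + s * dB[j] * n             # point on the face plane
            for i0, i1 in EDGES:
                q0, m = CA[:, i0], CA[:, i1] - CA[:, i0]
                denom = n @ m
                if abs(denom) > 1e-12:          # skip edges parallel to plane
                    t = (n @ (f0 - q0)) / denom
                    if 0.0 <= t <= 1.0:
                        pts.append(q0 + t * m)
    return np.array(pts).reshape(-1, 3)

def iou_3d(dA, RA, pA, dB, RB, pB):
    P = np.vstack([corners(dA, RA, pA).T, corners(dB, RB, pB).T,
                   edge_face_points(dA, RA, pA, dB, RB, pB),
                   edge_face_points(dB, RB, pB, dA, RA, pA)]).T
    keep = inside(P, dA, RA, pA) & inside(P, dB, RB, pB)
    try:
        V_I = ConvexHull(P[:, keep].T).volume   # raises if no 3D hull exists
    except Exception:
        return 0.0                              # no volumetric intersection
    V_U = np.prod(dA) + np.prod(dB) - V_I
    return V_I / V_U
```

For two axis-aligned unit cubes offset by $(0.5, 0.5, 0.5)$ the intersection volume is $0.125$, giving $IoU = 0.125 / 1.875 = 1/15$.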
\begin{figure}
\centering
\includegraphics[trim=0.55cm 7.22cm 12.68cm 0.6cm, clip, width=1.\linewidth]{pics/example.pdf}
\caption{Object detection example with underlying point cloud.}
\label{fig:example}
\vspace{-0.8em}
\end{figure}
\subsection{Comparison of point and volume based IoU}
In Fig~\ref{fig:example} a typical example of an object leaning against a wall is shown. The underlying point cloud is constructed
by a lidar scan. Hence, most of the points of the object lie on its front. With the shown estimation of an object detector, this results in a relatively high intersection over union of $0.47$ when computed based on the points in the cloud, in comparison to $0.07$ for the semantically more meaningful volumetric counterpart.
\vspace{-0.5em}
\subsection{Volume-to-Volume Distance}
\subsubsection{Point-Pairs of Interest}
Similar to calculating the volumetric IoU, point-pairs of interest (PPOIs) can be defined, one of which gives the shortest distance $d_s$ between two bounding boxes $T^{1}$, $T^{2}$ such that
\begin{align}
d_s = \textit{norm} ( P^{1}_s-P^{2}_s) < \textit{norm} ( P^{1}_i-P^{2}_j),&\\
i \in \textit{hull}(T^1)\, \&\, j \in \textit{hull}(T^2)& \nonumber
\end{align}
where $P^{1}_s$, $P^{2}_s$ denote the two points between which the shortest distance is measured. Notice that the surface of a bounding box is an infinite, non-discrete set of points; however, only a discrete set of points can be candidates for the pair $P_s$.\\
The relevant point-pairs, shown in Fig~\ref{fig:cases}, are as follows
\begin{itemize}[leftmargin=1em]
\itemsep-0.4em
\item Corners of the first cuboid with their rectangular projection on the faces of the second cuboid \hyperref[fig:c-f]{(a)}
\item Corners of the second cuboid with their rectangular projection on the faces of the first cuboid
\item Corners of the first cuboid with their rectangular projection on the edges of the second cuboid \hyperref[fig:c-e]{(b)}
\item Corners of the second cuboid with their rectangular projection on the edges of the first cuboid
\item The distance defined by two edges, each of one cube \hyperref[fig:e-e]{(c)}
\item The distance defined by two corners, each of one cube \hyperref[fig:c-c]{(d)}
\end{itemize}
\begin{figure}[b]%
\centering
\begin{subfigure}{.25\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=.9\linewidth]{pics/c-f.png}
\caption{}
\label{fig:c-f}
\end{subfigure}%
\begin{subfigure}{.25\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=.9\linewidth]{pics/c-e.png}
\caption{}
\label{fig:c-e}
\end{subfigure}%
\begin{subfigure}{.25\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=.9\linewidth]{pics/e-e.png}
\caption{}
\label{fig:e-e}
\end{subfigure}%
\begin{subfigure}{.25\linewidth}
\centering
\includegraphics[trim=375 200 270 250, clip, width=.9\linewidth]{pics/c-c.png}
\caption{}
\label{fig:c-c}
\end{subfigure}
\caption{The different cases of v2v.}
\label{fig:cases}
\vspace{-0.7em}
\end{figure}
In order to compute the point-pairs and the corresponding distances, we formulate equations for each point-face, point-edge, edge-edge projections.
\subsubsection{Point-Plane Projection}
The shortest distance $d$ between a point and a plane is given by its projection. In case of corners $l$ of one cuboid $i$ and the faces $k$ of another cuboid $j$, it is defined by
\begin{align}
    v &= C_{l}^i - f_{k, 1}^j \\
    d &= {n_{k}^j}^T v \\
    p_{\textit{prj}} &= C_{l}^i - n_{k}^j \, d
\end{align}
In order to check if the projected point $p_{\textit{prj}}$ is within the face $f_k^j$ of the cube $j$ one can calculate the parameters $t_{p,1}$ and $t_{p,2}$ of its plane-function $P_k^j$. This can be done by calculating the pseudo-inverse of $N_k^j$.
\begin{equation}
\begin{bmatrix}
t_{p,1} \\
t_{p,2}
\end{bmatrix} = \left({N_k^j}^T N_k^j \right)^{-1} {N_k^j}^T \; v = {N_k^j}^+ \; v
\end{equation}
Only if both parameters are between $0$ and $1$ is the projected point valid.
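A minimal sketch of this projection (helper names are ours), using NumPy's pseudo-inverse for the in-face parameters:

```python
import numpy as np

# Point-to-face projection: the face is described by one of its corners f0,
# the 3x2 matrix N of in-plane edge vectors, and the unit normal n.
def project_point_to_face(C, f0, N, n):
    v = C - f0
    dist = n @ v                       # signed point-plane distance
    p_prj = C - dist * n               # foot of the perpendicular
    t = np.linalg.pinv(N) @ v          # plane parameters via N^+
    valid = bool(np.all((t >= 0.0) & (t <= 1.0)))
    return p_prj, abs(dist), valid
```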
\subsubsection{Point-Line projection}
The shortest distance $d$ between a point and a line is given by its projection. In case of corners $l$ of one cuboid $i$ and the edges $k$ of another cuboid $j$, it is defined by
\begin{align}
t &= \frac{\left(C_{l}^i - l_{k, 0}^j\right)^T m_{k}^j}{{m_{k}^j}^2} \\
p_{\textit{prj}} &= L_{k}^j(t)
\end{align}
Only if the parameter $t$ has a value between $0$ and $1$ is the projected point valid.
The line-line projection presented next is only defined for non-parallel lines, which in our case is sufficient: for parallel edges, the shortest distance is already given by the point-to-point or point-to-edge distance.
\subsubsection{Line-Line projection}
The shortest distance $d$ between a line and another line is given by its projection. In case of edges $k$ of one cuboid $i$ and the edges $l$ of another cuboid $j$, it is defined by
\begin{align}
v &= e_{k,1}^i - e_{l,1}^j \\
det &= {m_{k}^i}^2 \, {m_{l}^j}^2 - \left({m_{k}^i}^T m_{l}^j\right)^2 \\
\nonumber \\
t^i &= \frac{- {m_{l}^j}^2 \left({m_{k}^i}^T v\right) + \left({m_{l}^j}^T v\right) \left({m_{k}^i}^T m_{l}^j\right)}{det}
\end{align}
\begin{align}
t^j &= \frac{{m_{k}^i}^2 \left({m_{l}^j}^T v\right) - \left({m_{k}^i}^T v\right) \left({m_{k}^i}^T m_{l}^j\right)}{det}\\
\nonumber \\
p_{\textit{prj}}^i &= L_{k}^i(t^i) \\
p_{\textit{prj}}^j &= L_{l}^j(t^j)
\end{align}
The formulas for the parameters $t^i,t^j$ can be derived by minimizing $\left(L_{k}^i(t^i)-L_{l}^j(t^j)\right)^2$.
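The closest-point computation for two non-parallel lines can be sketched as follows (helper names are ours; parallel lines are skipped, as in the text):

```python
import numpy as np

# Closest points of two lines q0 + m * t.  Parallel lines (det ~ 0) are
# excluded; for them the point/edge cases already give the distance.
def line_line_closest(q0a, ma, q0b, mb):
    v = q0a - q0b
    det = (ma @ ma) * (mb @ mb) - (ma @ mb) ** 2
    if abs(det) < 1e-12:
        return None                                 # parallel lines
    ti = (-(mb @ mb) * (ma @ v) + (mb @ v) * (ma @ mb)) / det
    tj = ((ma @ ma) * (mb @ v) - (ma @ v) * (ma @ mb)) / det
    return q0a + ti * ma, q0b + tj * mb, ti, tj
```

For two skew segments crossing at right angles one unit apart, both parameters come out at $0.5$ and the distance is $1$.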
\subsubsection{Shortest Distance}
Every combination results in a maximum of $496$ point-pairs of interest. Since all functions consist only of linear equations, efficient methods of parallel computing can be applied. As a last step, one must sort the list of candidates according to a vector norm, such as the L2 norm of the point-pair difference. Using L2 has the advantage that the values are already byproducts of the previously introduced point-to-plane projection. The smallest value over all PPOIs then corresponds to the shortest volume-to-volume distance.
\subsection{Bounding Box Disparity}
As a last metric we introduce the bounding box disparity (BBD), a combination of IoU and v2v. Since the Intersection over Union can only rank the similarity of two bounding boxes when they overlap, it cannot distinguish whether two non-overlapping bounding boxes are close together or far apart. Hence, we suggest combining IoU and v2v in the following way
\begin{equation}
BBD = 1-IoU + v2v
\end{equation}
such that a continuous positive metric for the (dis-)similarity of two bounding boxes can be calculated. IoU can take values between $0$ and $1$, where $1$ corresponds to a total match and $0$ to no overlap. v2v, on the other hand, is $0$ as long as there is overlap and increases with further distance/mismatch of the bounding boxes. This results in a scalar field $BBD$ that first increases quickly and then linearly.
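A minimal illustration of the combination (plain floats here; the IoU and v2v values would come from the computations above):

```python
# Bounding Box Disparity: 0 for a perfect match, approaching 1 as the
# overlap vanishes, then growing linearly with the v2v distance.
def bbd(iou, v2v):
    return 1.0 - iou + v2v
```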
\section{Conclusion}
With this paper we publish an open source library\footnote{\url{https://github.com/M-G-A/3D-Metrics}} of multiple 3D metrics for object detection, including the first analytic solution, and its implementation, for the volumetric Intersection over Union and the volume-to-volume distance of two bounding boxes with full degrees of freedom. Further, we introduce the combined metric Bounding Box Disparity. In future work, this could be extended such that rotation and size differences are also considered for non-overlapping bounding boxes.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}
Quantum mechanics is one of the fundamental theories describing the dynamical evolution and the wave behavior of matter, particularly with respect to its structure, allowing an in-depth analysis of the behavior of a particle in the microscopic medium. Quantum theory had a significant beginning around 1926, when Erwin Schrödinger formulated the equation that would become popularly known under his name \cite{Schr,Griffiths}. In 1927, Heisenberg deduced an uncertainty relation for measurements of the position and momentum of a particle in a microscopic medium, thus implying that the physically acceptable solutions of the Schrödinger equation are probabilistic in nature \cite{Griffiths}.
The mechanics proposed by Erwin Schrödinger faced a new challenge with the theoretical advances in solid-state physics, where particles appear that behave unusually, with an effective mass that depends on the position \cite{Roos,Bastard,Weisbuch}. Since then, the number of researchers interested in working with quantum systems whose particle has a position-dependent mass (PDM) has increased. Their interest comes from the breadth of this approach, which can describe problems such as impurities in crystals \cite{Luttinger,Nabu}, the study of electronic properties of semiconductor heterostructures \cite{Bastard,Pourali,Kasapoglu}, applications related to the hermiticity of the Hamiltonian operator \cite{Mustafa}, to atomic and molecular physics \cite{Sever}, to supersymmetry \cite{Plastino}, to relativistic \cite{Almeida,Almeida1,Almeida2} and non-relativistic problems \cite{Cunha,Dong,Dong1}, and to theoretical studies of the Fisher information \cite{Falaye,Macedo2015,Lima2021} and Shannon entropy \cite{Lima2021,Sun2015,Dong2016,Navarro}.
Parallel to the advance of quantum theory for describing a particle whose mass depends on the position, information theories emerged. This line of work began around 1948, when the mathematician and engineer Claude Shannon proposed one of its central concepts, entropy \cite{Shannon}, defined as a measure of how well information propagates from a source to a receiver, i.e., it is associated with the amount of ``information'' obtained with as little interference as possible \cite{Kripp,Zhou,Grasselli,Amigo}. Because this theory is built on a probability density, it can be applied to a quantum process, where the Shannon entropies $S_{x}$ and $S_{p}$ measure, respectively, the uncertainty of the particle location in position space and in momentum space \cite{Lima,Lima1}.
Another theory of great importance for communication, which predates Shannon's, is Fisher information \cite{Fisher}. Interestingly, this framework also works with a probability density, and so it can be applied to a quantum system \cite{Shi,Arenas,Ikot}. In this same context, Fisher information is also intrinsically related to the uncertainty of a measurement; in other words, it is a way of measuring the amount of information that a given observable carries about a given parameter with an associated intrinsic probability \cite{Shi,Arenas,Ikot}.
Thus, both the study of systems with position-dependent mass and information theory are of great interest in physics. However, few studies have been carried out involving both subjects. Therefore, in this work, we study the Fisher information and the Shannon entropy for a position-dependent mass system with a hyperbolic barrier potential.
The work is organized as follows: In section \ref{sec1}, we present some fundamental concepts of the one-dimensional PDM problem with the BenDaniel-Duke ordering. In addition, we find the analytical solutions of the Schrödinger-type equation for a solitonic mass distribution subjected to the barrier potential $V(x)=V_1 \coth^2(x)+V_2\mathrm{csch}^2(x)$. In section \ref{sec2}, we present the basic concepts of Shannon entropy and Fisher information and apply them to our confined particle system. Finally, in section \ref{sec3}, we make the final remarks and discuss our results.
\section{Solution of Schrödinger problem for a position-dependent mass}
\label{sec1}
To solve a problem with position-dependent mass, it is not enough to simply substitute the mass profile into the Schrödinger equation, since a problem arises regarding the hermiticity of the kinetic energy operator $\hat{T}$. The best way to solve this problem is to use a symmetrized operator. Currently, there are several orderings of the kinetic energy based on a symmetrized operator that guarantees that the Hamiltonian operator is Hermitian. The five most used orderings are BenDaniel-Duke \cite{BenDaniel}, Gora-Willian \cite{GoraW}, Zhu-Kroemer \cite{ZhuKroemer}, Li-Kuhn \cite{LiKuhn} and, most recently, Mustafa-Mazharimousavi \cite{Mustafa}. It is interesting to note that in 1983 Von Roos \cite{Roos} proposed a generalization of the symmetrized kinetic energy operator in the form
\begin{eqnarray}\label{1}
\hat{T}=\frac{1}{4}\bigg[m^{\alpha}(\vec{r}) \hat{p}m^{\beta}(\vec{r})\hat{p}m^{\gamma}(\vec{r})+m^{\gamma}(\vec{r})\hat{p}m^{\beta}(\vec{r})\hat{p}m^{\alpha}(\vec{r})\bigg],
\end{eqnarray}
where $\alpha$, $\beta$, $\gamma$ are constants, named the Von Roos ordering (or ambiguity) parameters. These parameters must satisfy the relation $\alpha+\beta+\gamma=-1$ \cite{Roos}.
Currently, the ordering of kinetic energy that draws the most attention is that of BenDaniel-Duke, for its simplicity and significant results \cite{Cunha,Falaye,Sun2015,Dong2016,Navarro}. The kinetic energy operator proposed by BenDaniel-Duke is \cite{BenDaniel}
\begin{eqnarray}
\hat{T}=\frac{1}{2m(x)}\hat{p}^{2}+\frac{i\hbar }{2}\frac{m'(x)}{m^2(x)}\hat{p},
\end{eqnarray}
where $m'(x)=dm(x)/dx$. Note that this operator is recovered from equation (\ref{1}) by setting $\alpha=\gamma=0$ and $\beta=-1$, obeying the relation between the Von Roos parameters \cite{Roos}.
Then, with the BenDaniel-Duke ordering for kinetic energy, we have the Schrödinger-type equation for a particle with position-dependent mass, subject to any potential $V(x)$ \cite{Cunha,BenDaniel}
\begin{eqnarray}\label{2}
\Big[\frac{1}{2m(x)}\hat{p}^{2}+\frac{i\hbar }{2}\frac{m'(x)}{m^2(x)}\hat{p}+V(x)\Big]\psi(x)=E\psi(x),
\end{eqnarray}
where $\psi(x)$ is the stationary wave function and $V(x)$ is the potential that defines the system.
\subsection{Solitonic mass profile}
The mass function $m(x)$ specifies how the particle mass changes with position; this mass profile is therefore essential in defining the equation that describes the motion of the particle. We propose a solitonic mass profile, described in the form \cite{Cunha}
\begin{equation}\label{mass}
m(x)=m_{0}\mathrm{sech}^{2}(ax).
\end{equation}
where $m_{0}$ is the mass at the origin, $m(0)=m_{0}$, and $a$
is the parameter that controls the width of the mass distribution. This mass profile is interesting because when $x\rightarrow\infty$ the mass distribution satisfies $m(x)\rightarrow0$, as seen in figure \ref{fig1}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=6cm]{massa.pdf}
\end{tabular}
\end{center}
\caption{Behavior of solitonic mass distribution with $m_{0}$ constant.
\label{fig1}}
\end{figure}
We make this choice because it is an appropriate representative of a solitonic distribution (soliton-like mass) found in various models of condensed matter and low-energy nuclear physics \cite{Cunha,Bagchi}. Solitons are structures that arise in non-linear theories. These structures are interesting because they have finite energy and keep their form unchanged when interacting with another soliton \cite{Heeger,Kartashov,Rajaraman}.
The mass distribution $m(x)$ can also be represented in k-space \cite{Lima2021}, in the form
\begin{eqnarray}
m(k)=\sqrt{\frac{\pi}{2}}\frac{m_{0}k}{a^{2}}\mathrm{csch}\Big(\frac{k\pi}{2a}\Big).
\end{eqnarray}
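As a quick sanity check, the $k$-space expression can be compared against a direct numerical Fourier transform of $m(x)$ under the unitary convention $m(k)=(2\pi)^{-1/2}\int m(x)e^{-ikx}dx$; since $m(x)$ is even, the cosine transform suffices. The following sketch is an illustrative verification (the quadrature parameters and the test values of $k$, $m_0$, $a$ are arbitrary choices, not part of the derivation):

```python
import math

def m_x(x, m0=1.0, a=1.0):
    # Solitonic mass profile m(x) = m0 * sech^2(a x)
    return m0 / math.cosh(a * x) ** 2

def m_k_closed(k, m0=1.0, a=1.0):
    # Closed-form transform: sqrt(pi/2) * (m0 k / a^2) * csch(k pi / (2 a))
    return (math.sqrt(math.pi / 2.0) * m0 * k
            / (a ** 2 * math.sinh(k * math.pi / (2.0 * a))))

def m_k_numeric(k, m0=1.0, a=1.0, L=30.0, n=20001):
    # (1/sqrt(2 pi)) * trapezoidal integral of m(x) cos(k x) over [-L, L];
    # the integrand decays exponentially, so the truncation error is negligible
    h = 2.0 * L / (n - 1)
    total = 0.0
    for j in range(n):
        w = 0.5 if j in (0, n - 1) else 1.0
        x = -L + j * h
        total += w * m_x(x, m0, a) * math.cos(k * x)
    return total * h / math.sqrt(2.0 * math.pi)

for k in (0.5, 1.0, 2.0):
    print(k, m_k_closed(k), m_k_numeric(k))  # the two columns agree
```

The same agreement holds for other positive values of $m_0$ and $a$, which supports the closed form above.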
In Fig.~\ref{fig2} we show the behavior of the mass distribution in the reciprocal space.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=6cm]{mk.pdf}
\end{tabular}
\end{center}
\caption{Behavior of the mass distribution in $k$-space with $m_{0}$ constant.
\label{fig2}}
\end{figure}
The mass profile represented in $k$-space gives us the dispersion energy \cite{Lima2021},
\begin{eqnarray}
E(k)=\sqrt{\frac{2}{\pi}}\frac{a^{2}\hbar^{2}}{m_{0}}\Big[{k^{2}} _{2}F_{1}\Big(\frac{1}{2}, \frac{3}{2}; \frac{3}{2}; \frac{k^{2}}{4} \Big)-\cosh(k)\Big].
\end{eqnarray}
Substituting Eq.(\ref{mass}) into Eq.(\ref{2}), we have \cite{Cunha}
\begin{eqnarray}\label{3}
\Big\{\frac{d^2}{dx^2}+2 \tanh(x)\frac{d}{dx}+\frac{2m_{0}}{a^2\hbar^2}[E-V(x)]\mathrm{sech}^2(x)\Big\}\psi(x)=0,
\end{eqnarray}
where we have rescaled $ax\rightarrow x$ for simplicity. With equation (\ref{3}), it is possible to analyze the behavior of a one-dimensional particle whose mass depends on the position.
\subsection{Solution to a hyperbolic barrier potential}
As our interest is to study the behavior of a quantum-mechanical system with PDM in a barrier potential, we consider the hyperbolic potential as follows
\begin{eqnarray}\label{4}
V(x)=V_1\coth^2(x)+V_2 \mathrm{csch}^2(x),
\end{eqnarray}
where $V_1$ and $V_2$ are parameters that define the barrier potential. We plot the behavior of the potential (\ref{4}) in figure \ref{fig3}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=6cm]{potencial2.pdf}
\end{tabular}
\end{center}
\caption{Representation of the hyperbolic potential $V(x)$ for the positive $V_1$ and $V_2$ values.
\label{fig3}}
\end{figure}
Even with a null potential $V(x)=0$ (which would identify a free particle), the particle is not entirely free because, as we saw in the previous section, the mass profile (\ref{mass}) acts as a kind of confining potential \cite{Cunha}. Thus, with the potential (\ref{4}), we introduce a new interaction, so that the particle is no longer confined at the origin.
To find the solutions of the Schrödinger equation (\ref{3}), we do
\begin{equation}
\psi(x)=\cosh^v(x)\phi(x),
\end{equation}
where $v$ is an arbitrary parameter, to give us
\begin{eqnarray}\label{5}
\frac{d^2}{dx^2}\phi(x)+2(v+1)\tanh(x)\frac{d}{dx}\phi(x)+ \Big\{v(v+2)\tanh^2(x)+\Big[v+ \frac{2m_0}{a^2\hbar^2}[E-V(x)]\Big]\mathrm{sech}^2(x)\Big\}\phi(x)=0.
\end{eqnarray}
For simplicity, we make the transformation $x \rightarrow z$, in the form
\begin{equation}
\cos(z)=\mathrm{sech}(x),
\end{equation}
where $x \in (-\infty,\infty) \rightarrow z \in (-\pi/2,\pi/2)$. Thus, we obtain Eq.(\ref{5}) in terms of the variable $z$,
\begin{eqnarray}
\frac{d^2}{dz^2}\phi(z)+(1+2v)\tan(z)\frac{d}{dz}\phi(z)+\Big\{v+v(v+2)\tan^2(z)+\frac{2m_0}{a^2\hbar^2}[E-V(z)]\Big\}\phi(z)=0.
\end{eqnarray}
Choosing $v=-1/2$ eliminates the first derivative in $\phi$, which results in \cite{Cunha}
\begin{eqnarray}\label{6}
-\frac{d^2}{dz^2}\phi(z)+\Big[\frac{1}{2}+\frac{3}{4}\tan^2(z)+\tilde{V}(z)\Big]\phi(z)=\varepsilon\phi(z),
\end{eqnarray}
where we define
\begin{eqnarray}
\tilde{V}(z)=\frac{2m_0}{a^2\hbar^2}V(z)\qquad \mathrm{and} \qquad \varepsilon=\frac{2m_0}{a^2\hbar^2}E.
\end{eqnarray}
Eq.(\ref{6}) allows us to find symmetric and antisymmetric solutions, as long as $\tilde{V}(z)$, i.e., $V(x)$, is symmetric \cite{Cunha}. Therefore, equation (\ref{6}) is equivalent to a regular stationary Schrödinger-type equation, with a constant mass $m_0$ and an effective confining potential of the type
\begin{eqnarray}\label{7}
\mathcal{V}_{eff}(z)={\frac{1}{2}}+{\frac{3}{4}}\tan^2(z)+\tilde{V}(z),
\end{eqnarray}
where the dynamics is restricted to the interval $z=(-\pi/2,\pi/2)$, subject to the boundary conditions $\phi(z=\pm\pi/2)=0$. As we can see in Eq.(\ref{7}), even though the potential is null, we will still have an effective potential confining the particle. For our hyperbolic potential (\ref{4}), we have $V(z)= (V_1 +V_2)\mathrm{csc}^2 (z)-V_2$, and by equation (\ref{7}) the effective potential is
\begin{eqnarray}
\mathcal{V}_{eff}(z)={\frac{1}{2}}+{\frac{3}{4}}\tan^2(z)+ \frac{2m_0}{a^2\hbar^2}\bigg[ (V_1 +V_2)\mathrm{csc}^2 (z)-V_2 \bigg],
\end{eqnarray}
which is represented in figure \ref{fig4}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=6cm]{potencial3.pdf}
\end{tabular}
\end{center}
\caption{Representation of effective potential $\mathcal{V}_{eff}(z)$ for the positive $V_1$ and $V_2$ values.
\label{fig4}}
\end{figure}
Because the potential is singular at the origin, the solutions of Eq.(\ref{6}) are extremely difficult to find. We can write the solutions in terms of $\psi(x)$, which are
\begin{eqnarray}\label{8}
\psi^1 (x)&=& C_1\Big[\frac{\mathrm{coth}(a x)^{-\frac{1}{2}(1+\varsigma)}}{\sinh^2(a x)}\Big]
{_2F}_1\Big(1+\frac{1}{4}(\vartheta-\varsigma), 1-\frac{1}{4}(\vartheta+\varsigma),1-\frac{1}{2}\varsigma;\mathrm{coth}^2(a x)\Big),\\
\psi^2 (x)&=& C_2\Big[\frac{\mathrm{coth}(a x)^{-\frac{1}{2}(1-\varsigma)}}{\sinh^2(a x)}\Big]
{_2F}_1\Big(1+\frac{1}{4}(\vartheta+\varsigma), 1-\frac{1}{4}(\vartheta-\varsigma),1+\frac{1}{2}\varsigma;\mathrm{coth}^2(a x)\Big),
\end{eqnarray}
where
\begin{eqnarray}
\vartheta&=&\sqrt{ 4(V_1 +V_2) +1},\nonumber\\
\varsigma&=& \sqrt{1+4\Big[V_1+V_2-\Big(\frac{2n\varrho-4n^2+2\varrho-8n-4}{\kappa^2}\Big)\Big]},\nonumber\\
\varrho&=&\sqrt{4\kappa^2(V_1+V_2)+1},
\end{eqnarray}
with $\kappa^2=2m_0/a^2\hbar^2$. Only the solution $\psi^1(x)$ is physically acceptable because, for the wave functions to be physically acceptable, their normalization must be preserved; the wave function $\psi^2(x)$ diverges at the origin of the system. From equation (\ref{8}) we note that the energy is quantized, with energy levels
\begin{eqnarray}
E_n=\Big[\frac{2\varrho(n-1)-4(n^2+2n+1)}{\kappa^2}\Big]-V_1,\qquad \mathrm{with} \qquad n=0,1,2,...
\end{eqnarray}
For a better analysis of equation (\ref{8}), we set $V_1=V_2=\kappa=1$, since they are positive constants, reducing Eq.(\ref{8}) to
\begin{eqnarray}
\psi^1_n= C^n_1\Big[\frac{\tanh(a x)^{2n+1}}{\sinh^2(a x)}\Big]
{_2F}_1\Big(\frac{3}{2}-n, -n,\frac{1}{2}-2n;\mathrm{coth}^2(a x)\Big),
\end{eqnarray}
with $C^n_1$ determined by normalization of the system.
The normalized solutions for the three lowest energy states are
\begin{eqnarray}\label{solu}
\psi_0^1 (x)&=&\sqrt{\frac{35a}{4}}\tanh^2(a x) \mathrm{sech}^{2}(ax),\nonumber\\
\psi_1^1 (x)&=&\sqrt{\frac{6237a}{32}}\Big[\frac{4\cosh^2(ax)}{9}-1\Big]\tanh^2(a x) \mathrm{sech}^{4}(ax),\nonumber\\
\psi_2^1 (x)&=&\sqrt{\frac{920205a}{256}}\Big[\frac{24\cosh^4(ax)-132\cosh^2(ax)}{143}+1\Big]\tanh^2(a x) \mathrm{sech}^{6}(ax).
\end{eqnarray}
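These normalization constants can be spot-checked numerically. The sketch below is a verification added for illustration (the value $a=2$ and the quadrature parameters are arbitrary choices); it confirms that $\psi^1_0$ and $\psi^1_1$ are unit-normalized and mutually orthogonal:

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def psi0(x, a):
    # psi_0(x) = sqrt(35 a / 4) tanh^2(a x) sech^2(a x)
    return math.sqrt(35.0 * a / 4.0) * math.tanh(a * x) ** 2 * sech(a * x) ** 2

def psi1(x, a):
    # psi_1(x) = sqrt(6237 a / 32) [4 cosh^2(a x)/9 - 1] tanh^2(a x) sech^4(a x)
    return (math.sqrt(6237.0 * a / 32.0)
            * (4.0 * math.cosh(a * x) ** 2 / 9.0 - 1.0)
            * math.tanh(a * x) ** 2 * sech(a * x) ** 4)

def integrate(f, L=25.0, n=40001):
    # Composite trapezoidal rule on [-L, L]; the integrands decay exponentially
    h = 2.0 * L / (n - 1)
    s = 0.5 * (f(-L) + f(L))
    for j in range(1, n - 1):
        s += f(-L + j * h)
    return s * h

a = 2.0
print(integrate(lambda x: psi0(x, a) ** 2))          # ≈ 1 (normalization)
print(integrate(lambda x: psi1(x, a) ** 2))          # ≈ 1 (normalization)
print(integrate(lambda x: psi0(x, a) * psi1(x, a)))  # ≈ 0 (orthogonality)
```

The orthogonality also follows analytically: the cross term reduces to $\int\tanh^4(4\,\mathrm{sech}^4/9-\mathrm{sech}^6)\,dx$, which vanishes.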
The behavior of these wave functions can be observed by analyzing their probability densities $\rho(x)=\vert\psi(x)\vert^2$, as represented in figure \ref{fig5}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{psh1.pdf} \\
(a) \\
\includegraphics[height=5cm]{psh2.pdf}
\includegraphics[height=5cm]{psh3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of probability densities. (a) $n=0$. (b) $n=1$. (c) $n=2$.
\label{fig5}}
\end{figure}
\section{Information Theories}
\label{sec2}
With the advancement of communication technology, studies of theories involving the transmission of information have increased \cite{Nalewajski,Nagaoka,Wang,Lian,Falaye0}. The interest in these theories comes from the fact that they can be applied to quantum systems \cite{Rothstein,Zou}; some of these studies are linked to modern quantum communication \cite{Falaye,Serrano}, computing, and Density Functional Theory (DFT) \cite{Burke}. The main information-theoretic quantities that can be applied to a quantum system are the Shannon entropy \cite{Gadre1985} and the Fisher information \cite{Frieden1}.
\subsection{Shannon's Theory}
The initial concept of entropy in Thermodynamics refers to a measure of the irreversibility of a physical system \cite{Callen}, or a measure associated with its degree of disorder \cite{Callen,Pathria}. Following this same line of reasoning, Claude E. Shannon, in his 1948 work entitled \textit{``A mathematical theory of communication''} \cite{Shannon}, described for the first time entropy as an element of the theory of information and communication. The Shannon entropy is a quantity that measures the uncertainty in a given probability distribution.
The interesting thing about this theory is that it is related to the probability density of an information system through the so-called entropic densities \cite{Shannon}, which can be related to the probability density found in quantum systems, $\vert\psi(x)\vert^{2}$ \cite{Navarro,Born,Hirschmann,Beckner}. With these definitions, the entropic densities with respect to the position, $\rho^n_s(x)$, and the momentum, $\rho^n_s(p)$, can be represented as \cite{Shannon}
\begin{equation}
{\rho^n_s(x)=|\psi(x)|^2 \ln|\psi(x)|^2},
\end{equation}
\begin{equation}
{\rho^n_s(p)=|\phi(p)|^2 \ln|\phi(p)|^2}.
\end{equation}
Thus, for a probability density of a continuous system, the Shannon entropy can be defined for the position space $S^n_{x}$ and for the momentum space $S^n_{p}$ \cite{Shannon}
\begin{equation}
S^n_{x}=-\int_{-\infty}^{\infty}\vert\psi_{n}(x)\vert^{2}\ln\vert\psi_{n}(x)\vert^{2}dx,
\end{equation}
\begin{equation}
S^n_{p}=-\int_{-\infty}^{\infty}\vert\Phi_{n}(p)\vert^{2}\ln\vert\Phi_{n}(p)\vert^{2}dp.
\end{equation}
Beckner, Bialynicki-Birula, and Mycielski in 1975 obtained the relation of entropic uncertainty related to position and momentum, which became known as BBM uncertainty \cite{Beckner,Bialy}. The uncertainty relation of BBM is given by \cite{Beckner,Bialy}
\begin{equation}
S^n_{x}+S^n_{p}\geq D(1+\ln\pi),
\end{equation}
where $D$ represents the spatial dimension of the system.
\subsubsection{Shannon's Entropy Analysis}
We apply Shannon's Entropy to analyze solutions to our one-particle problem
with position-dependent mass in a hyperbolic potential. To start with, it is interesting to observe the entropy behavior of the wave functions (\ref{solu}), we do this by analyzing their entropic densities $\rho_s(x)=|\psi(x)|^2 \ln|\psi(x)|^2 $, represented in the figure \ref{fig6}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{deh1.pdf} \\
(a) \\
\includegraphics[height=5cm]{deh2.pdf}
\includegraphics[height=5cm]{deh3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of the entropy densities $\rho_s(x)$. (a) $n = 0$. (b) $n = 1$. (c) $n = 2$.
\label{fig6}}
\end{figure}
With the help of the Fourier Transform, we obtain the eigenfunctions for the momentum space, in the three lowest energy states of the system
\begin{eqnarray}\label{solup}
\phi_0^1 (p)&=&\sqrt{\frac{35a\pi}{8}}\frac{p}{6a^4}(-2a^2+p^2)\mathrm{csch}\Big(\frac{p\pi}{2a}\Big),\nonumber\\
\phi_1^1 (p)&=&\sqrt{\frac{156237a\pi}{64}}\frac{p}{1080a^6} (16a^4-80a^2p^2+9p^4) \mathrm{csch}\Big(\frac{p\pi}{2a}\Big),\nonumber\\
\phi_2^1 (p)&=&\sqrt{\frac{920205a\pi}{412}}\frac{p}{720720a^8} (6528a^6-12152a^4p^2+3542a^2p^4-143p^6) \mathrm{csch}\Big(\frac{p\pi}{2a}\Big).
\end{eqnarray}
In Fig.~\ref{fig7} we plot the probability densities in momentum space, $\rho(p)=|\phi(p)|^2$. The entropic densities $\rho_s(p)=|\phi(p)|^2 \ln|\phi(p)|^2$ are analyzed in figure \ref{fig8}.
Table \ref{tab1} presents the numerical study of the Shannon entropy for the eigenfunctions in position and momentum space.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{dh1.pdf} \\
(a) \\
\includegraphics[height=5cm]{dh2.pdf}
\includegraphics[height=5cm]{dh3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of probability densities in momentum space. (a) $n = 0$. (b) $n = 1$. (c) $n = 2$.
\label{fig7}}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{dph1.pdf} \\
(a) \\
\includegraphics[height=5cm]{dph2.pdf}
\includegraphics[height=5cm]{dph3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of the entropy densities in the momentum space $\rho_s(p)$. (a) $n = 0$. (b) $n = 1$. (c) $n = 2$.
\label{fig8}}
\end{figure}
\begin{table}[h]
\centering
\caption{ Numerical results of the Shannon entropy.}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $a$ & $S_{x}$ & $S_{p}$ & $S_{x}+S_{p}$ & $1+\ln\pi$ \\ \hline
$0$ & $2$ & $\frac{1}{105}[638-105\ln(280)]= 0.441401$ & $2.14029$ & $2.58169$ & $2.1447$\\
 & $4$ & $\frac{1}{105}[638-105\ln(560)]= -0.251746$ & $2.83343$ & $2.58169$ & $2.1447$ \\
 & $6$ & $\frac{1}{105}[638-105\ln(840)]= -0.657211$ & $3.2389$ & $2.58169$ & $2.1447$ \\ \hline
$1$ & $2$ & $0.582545$ & $2.7803$ & $3.36285$ & $2.1447$ \\
& $4$ & $-0.110602$ & $3.47345$ & $3.36285$ & $2.1447$ \\
& $6$ & $-0.516067$ & $3.87891$ & $3.36285$ & $2.1447$ \\ \hline
$2$ & $2$ & $0.647182$ & $3.14459$ & $3.79177$ & $2.1447$ \\
& $4$ & $-0.0459656$ & $3.83773$ & $3.79177$ & $2.1447$ \\
& $6$ & $-0.451431$ & $4.24322$ & $3.79177$ & $2.1447$\\ \hline
\end{tabular}
\label{tab1}
\end{table}
We can notice that the Shannon entropy in position space, $S_x$, tends to decrease as the parameter $a$ that defines the width of the mass profile increases, while the entropy in momentum space, $S_p$, tends to increase proportionally. Thus, the quantity $S_{x}+S_{p}$ is invariant with respect to the parameter $a$ that defines the width of the spatial distribution of the solitonic mass. Figure \ref{fig9} depicts the behavior of $S_x$ and $S_p$ for the three lowest energy levels of the system.
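The tabulated ground-state entries can be reproduced by direct quadrature. The sketch below is a numerical check added for illustration (the choice $a=2$ and the integration windows are arbitrary); it evaluates $S_x$ and $S_p$ for $n=0$ and verifies the BBM bound, with the closed form $S_x = 638/105 - \ln(140a)$ reproducing the entries $\ln(280)$, $\ln(560)$, $\ln(840)$ for $a=2,4,6$:

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def rho_x(x, a):
    # Position-space density |psi_0(x)|^2
    return (35.0 * a / 4.0) * math.tanh(a * x) ** 4 * sech(a * x) ** 4

def rho_p(p, a):
    # Momentum-space density |phi_0(p)|^2; the factor p*csch(pi p/(2a)) is
    # continued to its finite limit 2a/pi at p = 0
    g = 2.0 * a / math.pi if abs(p) < 1e-9 else p / math.sinh(math.pi * p / (2.0 * a))
    amp = math.sqrt(35.0 * a * math.pi / 8.0) * (p * p - 2.0 * a * a) * g / (6.0 * a ** 4)
    return amp * amp

def shannon(rho, L, n=48001):
    # S = -∫ rho ln(rho) by the trapezoidal rule, with 0*ln(0) taken as 0
    h = 2.0 * L / (n - 1)
    s = 0.0
    for j in range(n):
        r = rho(-L + j * h)
        if r > 0.0:
            w = 0.5 if j in (0, n - 1) else 1.0
            s -= w * r * math.log(r) * h
    return s

a = 2.0
Sx = shannon(lambda x: rho_x(x, a), L=25.0)
Sp = shannon(lambda p: rho_p(p, a), L=60.0)
print(Sx, Sp, Sx + Sp)  # S_x ≈ 0.441401, and the sum exceeds 1 + ln(pi) ≈ 2.1447
```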
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{h1.pdf}
\includegraphics[height=5cm]{h2.pdf}\\
(a) \hspace{6 cm}(b)
\end{tabular}
\end{center}
\caption{Plots of the Shannon entropy as function of the width of the mass distribution. (a) In position space. (b) In momentum space.
\label{fig9}}
\end{figure}
\subsection{Fisher's Theory}
In his 1925 work entitled \textit{``Theory of statistical estimation''} \cite{Fisher}, Ronald A. Fisher introduced what is now known as the Fisher information, a way of measuring the amount of information that an observable random variable carries about an unknown parameter \cite{Fisher}.
Fisher information has drawn a lot of attention in several areas of computing, physics, and engineering, due to its numerous applications. For example, using Fisher's minimum information principle, one can obtain the equations of non-relativistic quantum mechanics \cite{Frieden1,Reginatto}, the time-independent Kohn-Sham equations, and the time-dependent Euler equation \cite{Naje}. Another application that has gained supporters in recent years is the study of Fisher information for position-dependent mass cases \cite{Falaye,Lima2021}, owing to the wide applicability of the position-dependent mass concept in the quantum context \cite{Almeida,Almeida1,Almeida2}.
Very similarly to Shannon's theory, this theory is related to the probability density of an information system, and it can be related to the probability density found in quantum systems, $\vert\Psi(x)\vert^{2}$. The information densities linked to Fisher's theory are defined for the position, $\rho^n_F(x)$, and the momentum, $\rho^n_F(p)$, as \cite{Falaye,Fisher}
\begin{eqnarray}
\rho^n_F(x)&=&\vert\Psi^n(x)\vert^{2}\bigg[\frac{d}{dx}\ln\vert\Psi^n(x)\vert^2\bigg]^2,\\
\rho^n_F(p)&=&\vert\Phi^n(p)\vert^{2}\bigg[\frac{d}{dp}\ln\vert\Phi^n(p)\vert^2\bigg]^2.
\end{eqnarray}
Thus, for a probability density of a continuous system, Fisher's information can be defined for the position space $F^n_{x}$ and for the momentum space $F^n_{p}$ \cite{Fisher}
\begin{eqnarray}
F^n_{x}&=&\int_{-\infty}^{\infty}\vert\Psi^n(x)\vert^2\bigg[\frac{d}{dx}\ln\vert \Psi^n(x)\vert^2\bigg]^{2} dx > 0,\label{f11}\\
F^n_{p}&=&\int_{-\infty}^{\infty}\vert\Phi^n(p)\vert^2\bigg[\frac{d}{dp}\ln\vert\Phi^n(p)\vert^2\bigg]^{2} dp > 0\label{f12}.
\end{eqnarray}
For convenience, we can rewrite equation (\ref{f11}) in such a way that \cite{Falaye},
\begin{eqnarray}
F_{x}^{n}=4\int_{-\infty}^{\infty}\Psi_{n}^{'}(x)\Psi_{n}^{*'}(x)dx+\int_{-\infty}^{\infty}\bigg[\frac{\Psi_{n}^{'}(x)}{\Psi_{n}(x)}-\frac{\Psi_{n}^{*'}(x)}{\Psi_{n}^{*}(x)}\bigg]^{2}\vert\Psi_{n}(x)\vert^2 dx>0,
\end{eqnarray}
with the prime ($'$) denoting the derivative of the wave function with respect to the variable $x$. Similarly, we can rewrite equation (\ref{f12}) as
\begin{eqnarray}
F_{p}^{n}=4\int_{-\infty}^{\infty}\dot{\Phi}_{n}(p)\dot{\Phi}_{n}^{*}(p)dp+\int_{-\infty}^{\infty}\bigg[\frac{\dot{\Phi}_{n}(p)}{\Phi_{n}(p)}-\frac{\dot{\Phi}_{n}^{*}(p)}{\Phi_{n}^{*}(p)}\bigg]^{2}\vert\Phi_{n}(p)\vert^2 dp>0,
\end{eqnarray}
where the dot (\ $\dot{}$\ ) represents the derivative with respect to the independent variable $p$.
\subsubsection{Fisher's information Analysis}
We apply the Fisher information to analyze the solutions of our one-particle problem
with position-dependent mass in a hyperbolic barrier potential. To start with, it is interesting to observe the behavior of the information densities $\rho_F$, both in position space (\ref{solu}) and in momentum space (\ref{solup}). In figure \ref{fig10} we plot the information densities in position space, and in figure \ref{fig11} those in momentum space. Table \ref{tab2} presents the numerical study of the Fisher information for the eigenfunctions in position and momentum space, together with the standard deviations of position and momentum for these eigenfunctions,
\begin{eqnarray}
\sigma_{x}^{2}&=&\langle x^{2}\rangle-\langle x\rangle^{2},\nonumber\\
\sigma_{p}^{2}&=&\langle p^{2}\rangle-\langle p\rangle^{2},
\end{eqnarray}
where $\langle x\rangle$, $\langle x^{2}\rangle$, $\langle p\rangle$ and $\langle p^{2}\rangle$ are the respective expectation values of the observables $x$, $x^{2}$, $p$ and $p^{2}$ \cite{Griffiths,Cohen}. It is interesting to note that for all three energy states $n=0,1,2$, we obtain
\begin{equation}
\langle p\rangle=0 \qquad \mathrm{and} \qquad \langle x\rangle=0.
\end{equation}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{dfh1.pdf}\\
(a)\\
\includegraphics[height=5cm]{dfh2.pdf}
\includegraphics[height=5cm]{dfh3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of information densities in position space $\rho_F(x)$. (a) $n = 0$. (b) $n = 1$. (c) $n = 2$.
\label{fig10}}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{dff1.pdf}\\
(a)\\
\includegraphics[height=5cm]{dff2.pdf}
\includegraphics[height=5cm]{dff3.pdf}\\
(b) \hspace{6 cm}(c)
\end{tabular}
\end{center}
\caption{Behavior of information densities in momentum space $\rho_F(p)$. (a) $n = 0$. (b) $n = 1$. (c) $n = 2$.
\label{fig11}}
\end{figure}
\begin{table}[h]
\centering
\caption{{ Numerical results of the Fisher information.}}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & $a$ & $\langle x^2\rangle$ & $\langle p^2\rangle$ & $\Delta x$ & $\Delta p$ & $\Delta x \Delta p$ & $F_x$ & $F_p$\\
\hline
$0$ & $2$ & $0.302839 $&$ 8.88889$ & $0.550308$ & $2.98142$ & $1.6407$ & $35.5556$ & $1.21136$\\
& $4$ & $0.0757097 $& $35.55568$ & $0.275154$ & $5.96285$ &$ 1.6407 $& $142.222$ &$0.302839$ \\
& $6$ & $0.0336488 $& $80.0$ & $0.183436$ & $8.94427$ & $1.6407 $& $320.0$ & $0.134595$\\ \hline
$1$ & $2$ &$0.3752$ & $34.188$ & $0.612536$ & $5.84705$ & $3.58153 $& $136.752$ &$ 1.5008$\\
& $4$ & $0.0938 $& $136.752$ & $0.306268$ & $11.6941$ & $3.58153 $& $547.009$ &$0.3752$ \\
& $6$ & $0.0416889 $& $307.692$ & $0.204179$ & $17.5412$ & $3.58153 $& $1230.77$ &$0.166756$ \\ \hline
$2$ & $2$ &$0.418019$ &$75.5113$ & $0.646544$ & $8.68972$ & $5.61829$ & $302.045$ &$1.67208$ \\
& $4$ & $0.104505$ & $302.045$ & $0.323272$ & $17.3794$ &$ 5.61829 $& $1208.18$ &$0.418019$ \\
& $6$ & $0.0464466$ & $679.602$ & $0.215515$ & $26.0692$ &$5.61829 $& $2718.41$ &$0.185786$ \\
\hline
\end{tabular}
\label{tab2}
\end{table}
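The $F_x$ column can likewise be cross-checked. For a real wavefunction the bracketed phase term vanishes, so $F_x$ reduces to $4\int_{-\infty}^{\infty}[\psi_n'(x)]^2dx$. The sketch below is a numerical verification added for illustration (the value $a=2$ and the quadrature parameters are arbitrary); it evaluates $F_x$ and $\langle x^2\rangle$ for the ground state and checks the bound $F_x \geq 1/(\Delta_x)^2$:

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def psi0(x, a):
    # Ground state psi_0(x) = sqrt(35 a / 4) tanh^2(a x) sech^2(a x)
    return math.sqrt(35.0 * a / 4.0) * math.tanh(a * x) ** 2 * sech(a * x) ** 2

def dpsi0(x, a):
    # Analytic derivative: 2 a N tanh(a x) sech^2(a x) [sech^2(a x) - tanh^2(a x)]
    t, s = math.tanh(a * x), sech(a * x)
    return math.sqrt(35.0 * a / 4.0) * 2.0 * a * t * s * s * (s * s - t * t)

def integrate(f, L=25.0, n=40001):
    # Composite trapezoidal rule on [-L, L]; the integrands decay exponentially
    h = 2.0 * L / (n - 1)
    s = 0.5 * (f(-L) + f(L))
    for j in range(1, n - 1):
        s += f(-L + j * h)
    return s * h

a = 2.0
Fx = 4.0 * integrate(lambda x: dpsi0(x, a) ** 2)   # F_x for a real wavefunction
x2 = integrate(lambda x: x * x * psi0(x, a) ** 2)  # <x^2>; <x> = 0 by parity
print(Fx, x2)  # Fx ≈ 35.5556, in agreement with the first row of the table
```

The quadrature reproduces $F_x = 80a^2/9$, consistent with the tabulated values $35.5556$, $142.222$ and $320.0$ for $a=2,4,6$.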
We note the existence of a ``propagation of information'': in position space the Fisher information increases proportionally to $a^{2}$, where $a$ is the parameter that controls the width of the mass distribution, whereas in momentum space the information decreases as $a^{-2}$. We can observe this behavior more clearly in figure \ref{fig12}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=5cm]{if2.pdf}
\includegraphics[height=5cm]{if1.pdf}\\
(a) \hspace{6 cm}(b)
\end{tabular}
\end{center}
\caption{Plots of the Fisher information as function of the width of the mass distribution. (a) In position space. (b) In momentum space.
\label{fig12}}
\end{figure}
We also observe, from the values of the uncertainties $\Delta_x$ and $\Delta_p$, that the Heisenberg inequality is obeyed, which gives the relations
\begin{equation}
F_{x}\geq \frac{1}{(\Delta_x)^2}\qquad \mathrm{and} \qquad
F_{p}\geq \frac{1}{(\Delta_p)^2}.
\end{equation}
In this way, we can write the Heisenberg uncertainty principle as
\begin{equation}
\sigma_{x}\sigma_{p}\geq\frac{1}{(F_{x}F_{p})^{\frac{1}{2}}}\geq\frac{\hbar}{2},
\end{equation}
that is
\begin{equation}
F_{x}F_{p}\geq4\hbar^{-2}.
\end{equation}
\section{Final remarks}
\label{sec3}
In this work, we presented the solutions of the Schrödinger equation for a solitonic mass distribution with the kinetic energy operator ordered by BenDaniel-Duke, subjected to the barrier potential $V(x)=V_1\coth^2(x)+V_2 \mathrm{csch}^2(x)$. We found the eigenfunctions and the corresponding quantized energies. With this, it was possible to observe that the complete set of solutions describing this physical system is given by the well-known Gauss hypergeometric functions. We also presented, through the Fourier transform, the eigenfunctions in momentum space.
With the analytical solutions of the system studied, we calculated the Shannon entropy for the first energy levels, both in position space, $S_x$, and in momentum space, $S_p$. We conclude that the Shannon entropy tends to decrease with the width parameter of the mass distribution in position space, while it tends to increase in momentum space, so that the sum $S_{x}+S_{p}$ is constant regardless of the width of the soliton. We also note that the BBM relation is satisfied in all analyzed cases.
Finally, with the help of the solutions found, we calculated the Fisher information for the first energy levels, both in position space, $F_x$, and in momentum space, $F_p$. We observe a behavior contrary to that found for the Shannon entropy: the Fisher information tends to increase in position space, whereas it decreases in momentum space. We also observe that $\Delta_x\Delta_p$ is constant with respect to the width of the mass profile. With this, we conclude that the uncertainty of the measurements of the observables tends to be minimal in position space and maximal in momentum space, thus respecting the Heisenberg uncertainty principle. We also obtain that the Fisher information is related to the uncertainties of position and momentum, that is, $F_{x}F_{p}\geq4\hbar^{-2}$. Therefore, we can conclude that the more localized the mass distribution, the more information is transmitted.
\section*{Acknowledgments}
The author thanks the Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior (CAPES) for financial support. The author also thanks C. A. S. Almeida and F. C. E. Lima for important discussions, and is grateful to M. S. Cunha for valuable discussions and an important contribution to the progress of this project.
\chapter{$G$-Sets and $G$-Graphs}
\section{Permutation Groups}
\begin{definition}[Permutation group]
A \emph{permutation group} is a triple $(G, \Omega, \rho)$
where $G$ is a group, $\Omega$ is a set and $\rho$
is a homomorphism:
$$ \rho : G \to \Aut(\Omega). $$
We say that $G$ {\it acts on} $\Omega$ as a group of permutations.
The action is said to be \emph{faithful} if $\ker(\rho) = \{1\}$.
\end{definition}
We will usually neglect to mention the homomorphism $\rho$ explicitly,
and speak of the permutation group $(G, \Omega)$.
We write $\alpha^{\rho(g)}$ or just $\alpha^g$ to indicate
the action of a permutation $g \in G$ on a point $\alpha \in \Omega$.
This allows us to compose permutations as
$\alpha^{(gh)} = (\alpha^{g})^{h}$
rather than $(gh)(\alpha) = h(g(\alpha))$.
If $\Delta$ is a subset of $\Omega$ and $g$ is an element of $G$, then we write
$\Delta^g = \{ \alpha^g : \alpha \in \Delta \}$
to indicate the image of $\Delta$ under the action of $g$.
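This right-action convention can be demonstrated concretely. In the following small sketch (an illustrative example, not part of the text above), permutations of $\{0,\dots,4\}$ are written as tuples where `g[i]` is the image of the point `i`, and we verify $\alpha^{(gh)} = (\alpha^{g})^{h}$:

```python
# Permutations on {0,...,4} as tuples: g[i] is the image of the point i.
g = (1, 2, 0, 3, 4)   # the 3-cycle (0 1 2)
h = (0, 1, 2, 4, 3)   # the transposition (3 4)

def act(alpha, g):
    # alpha^g: the image of the point alpha under g
    return g[alpha]

def compose(g, h):
    # Right-action product gh, defined so that alpha^(gh) = (alpha^g)^h
    return tuple(h[g[i]] for i in range(len(g)))

gh = compose(g, h)
for alpha in range(5):
    assert act(alpha, gh) == act(act(alpha, g), h)
print(gh)  # (1, 2, 0, 4, 3), i.e. the product (0 1 2)(3 4)
```

Note that composing left-to-right is exactly what makes the exponent notation associative: $\alpha^{(gh)k} = ((\alpha^g)^h)^k$.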
\hfill \break
Sometimes we will call $(G,\Omega)$ a \emph{representation}
of $G$ as a group of permutations of $\Omega$, rather than a permutation group.
If $|\Omega| = n$ then we say that $(G,\Omega)$ is a permutation
representation of $G$ of degree $n$.
\begin{definition}[Permutation Equivalence]
Two permutation groups $(G_1, \Omega_1)$ and
$(G_2, \Omega_2)$ are said to be \emph{permutation equivalent}
if there is an isomorphism $\varphi : G_1 \to G_2$
and a bijection $\eta : \Omega_1 \to \Omega_2$
such that for every $g \in G_1$ the following diagram commutes:
\centerline{
\xymatrix{
\Omega_1 \ar[r]^{g} \ar[d]_{\eta} & \Omega_1 \ar[d]^{\eta} \\
\Omega_2 \ar[r]_{\varphi(g)} & \Omega_2
}
}
\end{definition}
Permutation equivalence is an equivalence relation
on the set of permutation representations of given group.
If we relax the condition that $\eta$ is bijective
we obtain a {\it permutation homomorphism}.
\begin{definition}[Transitive Permutation Group]
A permutation group $(G, \Omega)$ is said to be \emph{transitive}
if for any $\alpha, \beta \in \Omega$ there exist a $ g \in G $
such that $\alpha^g = \beta$.
\end{definition}
\begin{definition}[Orbit]
If $(G, \Omega)$ is a permutation group and $\alpha \in \Omega$
then the \emph{orbit of $\alpha$ under the action of $G$} is:
$$ \alpha^G = \{ \alpha^x : x \in G \} $$
\end{definition}
For a transitive permutation group $\alpha^G = \Omega$.
Otherwise $\Omega$ may be partitioned into orbits:
$ \Omega = \coprod_{i = 1}^{n} \Omega_i $
such that for each $i$, the permutation group $(G,{\Omega_i})$ is transitive.
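The orbit decomposition is easy to compute by machine. The following standalone Python sketch (our illustration, not part of the text; the helper names `compose`, `generate` and `orbits` are ours) represents a permutation $p$ as a tuple with $p[i] = i^p$, generates a group from a set of generators, and partitions the domain into orbits.

```python
def compose(g, h):
    # Right-action convention: i^(gh) = (i^g)^h.
    return tuple(h[g[i]] for i in range(len(g)))

def generate(gens):
    # Close the generators under composition to obtain the whole group.
    group = {tuple(range(len(gens[0])))}
    frontier = set(group)
    while frontier:
        frontier = {compose(p, g) for p in frontier for g in gens} - group
        group |= frontier
    return group

def orbits(group, n):
    # Partition {0, ..., n-1} into the orbits alpha^G = {alpha^x : x in G}.
    seen, parts = set(), []
    for a in range(n):
        if a not in seen:
            orb = frozenset(p[a] for p in group)
            parts.append(orb)
            seen |= orb
    return parts

# The group generated by the permutation (0 1 2)(3 4) acting on {0,...,5}:
G = generate([(1, 2, 0, 4, 3, 5)])
print(orbits(G, 6))  # the orbits {0,1,2}, {3,4} and {5}
```

Restricting the group to any single orbit gives a transitive permutation group, exactly as observed above.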
\section{Graphs}
The most general definition of a {\it graph} is a pair of sets $(V, A)$
with $A \subseteq V \times V$.
The elements of $V$ are referred to as the {\it vertices} of the graph
and the elements of $A$ are referred to as {\it arcs}.
A {\it simple graph} is a graph such that:
$$ (\alpha,\beta) \in A \Longleftrightarrow (\beta, \alpha) \in A
\mbox{ for all } \alpha, \beta \in V.$$
A graph that is not simple is said to be {\it directed}.
The term ``digraph'' is also frequently used.
An {\it edge} of a graph is an unordered pair $\{\alpha,\beta\}$
where $(\alpha,\beta)$ is an arc and $\alpha \neq \beta$.
A {\it loop} is an arc of the form $(\alpha,\alpha)$.
In this paper we will be mostly interested
in simple graphs without loops
and shall use the term ``graph'' to mean
``simple graph without loops''. We give a formal definition for ease of reference.
\begin{definition} [Graph]
A {\it graph} is a pair of sets $(V,A)$ such that:
$$A \subseteq (V \times V) \diagdown \{ (\alpha,\alpha) : \alpha \in V \}$$
and:
$$(\alpha,\beta) \in A \Longleftrightarrow (\beta,\alpha) \in A
\mbox{ for all } \alpha, \beta \in V.$$
\end{definition}
When more than one graph is being discussed, we shall write
$\Gamma = (V \Gamma, A \Gamma)$ or $\Sigma = (V \Sigma, A \Sigma)$
so that it is clear to which graph each set of vertices and arcs belongs.
If $\alpha$ is a vertex of some graph $\Gamma$,
then the {\it neighbourhood} of $\alpha$ in $\Gamma$
is the set $\Gamma(\alpha) = \{ \beta \in V \Gamma : (\alpha,\beta) \in A \Gamma \}$.
If $\alpha$ is a vertex of the graph $\Sigma$
then the neighbourhood of $\alpha$ in $\Sigma$ is denoted by $\Sigma(\alpha)$.
We write $E \Gamma$ (or $E \Sigma$) to denote the set of edges of $\Gamma$ (or $\Sigma$)
and $N \Gamma$ (or $N \Sigma$) to denote the set of neighbourhoods.
\begin{definition} [Automorphism of a Graph]
An automorphism of a graph $\Gamma$ is a bijection
$\varphi : V \Gamma \to V \Gamma$ such that:
$$ (\alpha,\beta) \in A \Gamma \Longleftrightarrow
(\varphi(\alpha),\varphi(\beta)) \in A \Gamma \mbox{ for all } \alpha,\beta \in V \Gamma.$$
\end{definition}
The automorphisms of a graph form a group which we denote by $\Aut(\Gamma)$.
If $\rho : G \to \Aut(\Gamma)$ is a homomorphism
then $(G,V \Gamma,\rho)$ is a permutation group
and we say that $G$ acts on $\Gamma$ as a group of automorphisms.
\begin{definition} [Vertex-transitive Graph]
A vertex transitive graph is a triple $(G,\Gamma,\rho)$
where $\rho : G \to \Aut(\Gamma)$ is a homomorphism
and $(G, V \Gamma, \rho)$ is transitive.
\end{definition}
As with permutation groups, we will often neglect to mention $\rho$
explicitly. We shall also sometimes say that $\Gamma$ is a $G$-vertex
transitive graph rather than $(G,\Gamma)$ is a vertex transitive graph.
If we say simply that $\Gamma$ is a vertex transitive graph
then we mean that $(\Aut(\Gamma),\Gamma)$
is a vertex transitive graph.
\section{Local Properties}
\begin{definition}[Stabilizer]
Let $(G, \Omega)$ be a transitive permutation group,
and let $\alpha$ be any point of $\Omega$.
The \emph{point stabilizer} of $\alpha$ is:
$$ G_\alpha = \{ x \in G : \alpha^x = \alpha \} $$
\end{definition}
\begin{lemma}
$G_{\alpha^g} = g^{-1} G_{\alpha} g$
for any $g \in G$.
\begin{proof}
Suppose that $x \in G$ is such that $(\alpha^g)^x = \alpha^g$.
It follows that $\alpha^{gxg^{-1}} = \alpha$
and so $gxg^{-1} \in G_\alpha$. That is $x \in g^{-1}G_\alpha g$.
Conversely suppose that $x \in g^{-1}G_\alpha g$.
Then $gxg^{-1} \in G_\alpha$ and so $\alpha^{gxg^{-1}} = \alpha$.
That is $\alpha^{gx} = \alpha$, so $x \in G_{\alpha^g}$.
\end{proof}
\end{lemma}
Since $(G,\Omega)$ is transitive, every element of $\Omega$
may be expressed in the form $\alpha^g$ for some $g \in G$.
Thus lemma 1 tells us that the point stabilizers of a transitive
representation of $G$ form a family of conjugate subgroups of $G$.
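The conjugacy $G_{\alpha^g} = g^{-1} G_\alpha g$ of the lemma can be checked directly on a small group. The sketch below (ours, not from the text; `compose` and `inverse` are our own helper names) takes $S_3$ in its natural transitive action on $\{0,1,2\}$ and verifies the identity for one choice of $g$.

```python
from itertools import permutations

def compose(g, h):
    # Right-action convention: i^(gh) = (i^g)^h.
    return tuple(h[g[i]] for i in range(len(g)))

def inverse(g):
    inv = [0] * len(g)
    for i, j in enumerate(g):
        inv[j] = i
    return tuple(inv)

# S_3 in its natural transitive action on {0, 1, 2}.
G = [tuple(p) for p in permutations(range(3))]

def stabilizer(alpha):
    return {x for x in G if x[alpha] == alpha}

g = (1, 0, 2)   # the transposition interchanging 0 and 1
beta = g[0]     # beta = 0^g = 1

# g^{-1} G_0 g, computed element by element
conjugate = {compose(compose(inverse(g), x), g) for x in stabilizer(0)}
print(conjugate == stabilizer(beta))  # True
```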
If $G$ acts on $\Gamma$ as a group of automorphisms
and $G_\alpha$ is the stabilizer of the vertex $\alpha$
then $(G_\alpha, \Gamma(\alpha))$ is a permutation group.
\begin{proposition}
If $(G, \Gamma)$ is a vertex transitive graph
then for all $\alpha, \beta \in V \Gamma$
the permutation groups $(G_\alpha, \Gamma(\alpha))$ and
$(G_\beta, \Gamma(\beta))$ are permutation equivalent.
\begin{proof}
Suppose that $\beta = \alpha^g$.
Let $\eta : \Gamma(\alpha) \to \Gamma(\beta) $ be given by: $ \gamma \mapsto \gamma^g $,
and let $\varphi : G_\alpha \to G_\beta $
be given by: $ x \mapsto g^{-1} x g $
then:
\begin{eqnarray*}
\eta(\gamma^x) & = & \gamma^{xg} \\
& = & \gamma^{gg^{-1}xg} \\
& = & (\gamma^g)^{\varphi(x)} \\
& = & \eta(\gamma)^{\varphi(x)}
\end{eqnarray*}
for any $\gamma \in \Gamma(\alpha)$ and any $x \in G_\alpha$.
Thus the pair of maps $(\eta,\varphi)$ establish the desired
permutation equivalence.
\end{proof}
\end{proposition}
If $\Gamma$ is $G$-vertex transitive,
then for any permutation group theoretic property $\cP$
we say that $\Gamma$ is {\it $G$-locally $\cP$}
if $(G_\alpha, \Gamma(\alpha))$ has property $\cP$.
\section{Symmetric Graphs}
\begin{definition} [Symmetric Graph]
A symmetric graph is a triple $(G,\Gamma,\rho)$
where $G$ is a group, $\Gamma$ is a graph
and $\rho$ is a homomorphism:
$$\rho : G \to \Aut(\Gamma)$$
such that $\Gamma$ is $G$-vertex transitive and $G$-locally transitive.
\end{definition}
Again, we won't always mention $\rho$ explicitly
and will speak of the symmetric graph $(G,\Gamma)$, or sometimes
the $G$-symmetric graph $\Gamma$.
\begin{definition} [s-arc]
An $s$-arc of a graph $\Gamma$ is a sequence of vertices $(v_0, v_1, \ldots, v_s)$
such that $v_i$ is adjacent to $v_{i+1}$ for each $i$, and $v_{i-1} \neq v_{i+1}$ for each $i$.
The set of $s$-arcs of $\Gamma$ is denoted by $\Arc_s(\Gamma)$.
\end{definition}
If $G$ acts on $\Gamma$ as a group of automorphisms,
then $G$ also acts on $\Arc_s(\Gamma)$ in a natural way:
$$ (v_0, v_1, \ldots, v_s)^g = (v_0^g, v_1^g, \ldots, v_s^g) $$
The graph $\Gamma$ is said to be $(G,s)$-arc transitive if $(G, \Arc_s(\Gamma))$
is transitive.
Historically there has been much interest in highly arc-transitive graphs,
that is graphs that are $s$-arc transitive for large $s$.
One of the oldest results in the area is the theorem by Tutte \cite{TUTTE1,TUTTE2},
proved using combinatorial arguments,
that there are no $s$-arc transitive graphs
of valency $3$ for $s > 5$.
More recently Weiss \cite{WEISS} was able to show, with the aid
of the classification of finite simple groups
that there are no $s$-arc transitive graphs,
of any valency, for $s > 7$.
Symmetric graphs have historically been characterized by their arc transitivity
rather than their local transitivity.
A $(G,0)$-arc transitive graph is just a $G$-vertex transitive graph.
The next theorem shows that $(G,1)$-arc transitive graphs
without isolated vertices are symmetric graphs.
\begin{proposition}
If $(G, \Gamma)$ is a symmetric graph, then $\Gamma$ is $(G,1)$-arc transitive.
Conversely, if $\Gamma$ contains no isolated vertices and is $(G,1)$-arc transitive,
then $(G, \Gamma)$ is a symmetric graph.
\begin{proof}
Suppose that $(G, \Gamma)$ is a symmetric graph, then
$G$ acts on $A \Gamma$ in the obvious way:
$(\alpha,\beta)^x = (\alpha^x, \beta^x)$.
Let $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$
be any two arcs.
By the $G$-vertex transitivity of $\Gamma$ we can find a $g \in G$ such that
$\alpha_1^g = \alpha_2$.
Since $\beta_1 \in \Gamma(\alpha_1)$ and $g$ is an automorphism,
we must have $\beta_1^g \in \Gamma(\alpha_1^g) = \Gamma(\alpha_2)$.
By $G$-local transitivity we can find $h \in G_{\alpha_2}$
such that $(\beta_1^g)^h = \beta_2$.
It follows that
$(\alpha_1, \beta_1)^{gh} = (\alpha_1^{gh}, \beta_1^{gh})
= (\alpha_2, \beta_2)$.
Thus $G$ acts transitively on the arcs of $\Gamma$.
Now suppose that $\Gamma$ has no isolated vertices and $(G,A \Gamma)$
is transitive. For any two vertices $\alpha_1$ and $\alpha_2$,
there is some arc beginning at $\alpha_1$, say $(\alpha_1,\beta_1)$
and some arc beginning at $\alpha_2$, say $(\alpha_2, \beta_2)$.
By arc transitivity we can find some $g \in G$ carrying
$(\alpha_1, \beta_1)$ to $(\alpha_2, \beta_2)$,
and this $g$ must carry $\alpha_1$ to $\alpha_2$.
So $\Gamma$ is $G$-vertex transitive.
Suppose $\gamma_1$ and $\gamma_2$ are both elements of $\Gamma(\alpha)$.
Then $(\alpha,\gamma_1), (\alpha,\gamma_2) \in A \Gamma$.
By arc transitivity we can find a $g \in G$ such that
$(\alpha, \gamma_1)^g = (\alpha, \gamma_2)$
and this $g$ must carry $\gamma_1$ to $\gamma_2$.
Thus $\Gamma$ is $G$-locally transitive.
The result follows.
\end{proof}
\end{proposition}
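The equivalence just proved can be checked by machine on a small example. The Python sketch below (our illustration, not from the text) verifies that the dihedral group of the 4-cycle, which is vertex and locally transitive, acts transitively on the eight arcs of the cycle.

```python
def compose(g, h):
    # Right-action convention: i^(gh) = (i^g)^h.
    return tuple(h[g[i]] for i in range(len(g)))

def generate(gens):
    # Close the generators under composition.
    group = {tuple(range(len(gens[0])))}
    frontier = set(group)
    while frontier:
        frontier = {compose(p, g) for p in frontier for g in gens} - group
        group |= frontier
    return group

# The dihedral group of the 4-cycle: one rotation and one reflection.
D4 = generate([(1, 2, 3, 0), (0, 3, 2, 1)])

# The arcs of the 4-cycle graph on vertices {0, 1, 2, 3}.
arcs = {(i, (i + 1) % 4) for i in range(4)} | {((i + 1) % 4, i) for i in range(4)}

# The orbit of the single arc (0, 1) under (u, v)^g = (u^g, v^g).
orbit = {(g[0], g[1]) for g in D4}
print(orbit == arcs)  # True: the action is transitive on arcs
```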
\section{Orbital Graphs}
If $(G,\Omega)$ is a transitive permutation group,
then $G$ has a natural action on $\Omega \times \Omega$
given by:
$$ (\alpha,\beta)^g = (\alpha^g, \beta^g) $$
Although $(G,\Omega)$ is transitive, $(G,\Omega \times \Omega)$
need not be.
The orbits of $G$ on $\Omega \times \Omega$ are called {\it orbitals}.
\begin{lemma}
If $(G,\Omega)$ is a transitive permutation group
and $\alpha \in \Omega$
then there is a natural correspondence between the orbitals of $(G,\Omega)$
and the orbits of $(G_\alpha,\Omega)$.
\begin{proof}
Let $\Delta$ be any orbital of $(G,\Omega)$
and let $(\gamma,\delta)$ be any element of $\Delta$.
Let $g \in G$ be such that $\gamma^g = \alpha$.
Since $\Delta$ is an orbital, it follows that
$(\alpha,\delta^g) \in \Delta$.
Thus every orbital of $(G,\Omega)$ contains an element
of the form $(\alpha,\beta)$.
Let $\eta : \Omega \to \Omega \times \Omega$ be the map given by
$\beta \mapsto (\alpha, \beta)$.
Suppose that $\beta_1$ and $\beta_2$
lie in the same orbit of $(G_\alpha,\Omega)$.
Then there is some $g \in G_\alpha$ such that $\beta_1^g = \beta_2$.
It follows that $(\alpha,\beta_1)^g = (\alpha,\beta_2)$
and so $(\alpha,\beta_1)$ and $(\alpha,\beta_2)$ lie in the same
orbital of $(G,\Omega)$.
Conversely suppose that $(\alpha,\beta_1)$ and $(\alpha,\beta_2)$
lie in the same orbital of $(G,\Omega)$.
Then there is some $g \in G$ such that $(\alpha,\beta_1)^g = (\alpha,\beta_2)$.
That is $\alpha^g = \alpha$ and $\beta_1^g = \beta_2$.
It follows that $g \in G_\alpha$ and $\beta_1$ and $\beta_2$
lie in the same orbit of $(G_\alpha, \Omega)$.
Thus the map $\eta$ induces a bijection from the orbits of $(G_\alpha,\Omega)$
to the orbitals of $(G,\Omega)$.
\end{proof}
\end{lemma}
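The correspondence of the lemma can be observed concretely. In the Python sketch below (ours, not from the text) the dihedral group on $\{0,1,2,3\}$ has three orbitals (the diagonal, the adjacent pairs, and the antipodal pairs), matching the three orbits of a point stabilizer.

```python
def compose(g, h):
    # Right-action convention: i^(gh) = (i^g)^h.
    return tuple(h[g[i]] for i in range(len(g)))

def generate(gens):
    group = {tuple(range(len(gens[0])))}
    frontier = set(group)
    while frontier:
        frontier = {compose(p, g) for p in frontier for g in gens} - group
        group |= frontier
    return group

n = 4
G = generate([(1, 2, 3, 0), (0, 3, 2, 1)])  # dihedral group on {0,...,3}

# Orbitals: the orbits of G on ordered pairs, (a, b)^g = (a^g, b^g).
pairs = {(a, b) for a in range(n) for b in range(n)}
orbitals = set()
while pairs:
    a, b = next(iter(pairs))
    delta = frozenset((g[a], g[b]) for g in G)
    orbitals.add(delta)
    pairs -= delta

# Suborbits: the orbits of the point stabilizer G_0 on {0,...,n-1}.
G0 = [g for g in G if g[0] == 0]
suborbits = {frozenset(g[a] for g in G0) for a in range(n)}

print(len(orbitals), len(suborbits))  # 3 3
```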
The orbital $\{ (\alpha,\alpha) : \alpha \in \Omega \}$
is given a special name: it is called the {\it diagonal} orbital.
The {\it rank} of a permutation group is the number of orbitals.
An orbital $\Delta$ is said to be {\it self-paired} if
$$ (\alpha, \beta) \in \Delta \Longleftrightarrow
(\beta, \alpha) \in \Delta $$
\begin{definition} [Orbital Graph]
If $(G,\Omega)$ is a transitive permutation group and
$\Delta$ is a self-paired orbital
then the {\it orbital graph} $\orb_\Delta(G, \Omega)$
is the graph with vertex set $\Omega$ and arc set $\Delta$.
\end{definition}
Note that if $\Delta$ is taken to be the diagonal orbital,
then the resulting graph has loops, and so is not actually
a graph by our definition. Generally we assume that $\Delta$
is not the diagonal orbital.
\begin{proposition}
Orbital graphs are symmetric, and every symmetric graph is an orbital graph.
\begin{proof}
Suppose that $\Gamma = \orb_\Delta(G,\Omega)$
for some transitive permutation group
$(G,\Omega)$ and some self-paired orbital $\Delta$.
Since $V \Gamma = \Omega$ and $(G,\Omega)$ is transitive,
$\Gamma$ is $G$-vertex transitive.
Since $A \Gamma = \Delta$ and $\Delta$ is an orbital,
$\Gamma$ is $G$-arc transitive.
It follows that $\Gamma$ is symmetric.
Now suppose that $\Gamma$ is a $G$-symmetric graph.
Since $\Gamma$ is $G$-vertex transitive
$(G, V \Gamma)$ is a transitive permutation group.
Since $A \Gamma \subseteq V \Gamma \times V \Gamma$
and $\Gamma$ is $G$-arc transitive, $A \Gamma$
is an orbital of $(G,V \Gamma)$.
It follows that $\Gamma = \orb_{A \Gamma}(G,V \Gamma)$.
\end{proof}
\end{proposition}
Relaxing the condition that $\Delta$ must be self-paired
leads to a symmetric digraph.
\section{Imprimitive Symmetric Graphs}
Recall that if $\Delta$ is a subset of $\Omega$
and $g$ is an element of $G$, then $\Delta^g$ denotes
the image of $\Delta$ under the action of $g$.
\begin{definition}[Block of Imprimitivity]
If $(G,\Omega)$ is a transitive permutation group
then a subset $\Delta$ of $\Omega$
is said to be a \emph{block of imprimitivity} if for every $x \in G$
either $\Delta^x = \Delta$ or $\Delta^x \cap \Delta = \emptyset$.
\end{definition}
\begin{definition}[$G$-invariant partition]
If $(G, \Omega)$ is a transitive permutation group,
then a partition $\cB$ of $\Omega$ is said to be \emph{$G$-invariant}
if for each $\Delta \in {\cB}$ and each $x \in G$
we have $\Delta^x \in {\cB}$.
That is, $\cB$ admits $G$ as a group of permutations in a natural way.
\end{definition}
\begin{proposition}
If $(G,\Omega)$ is a transitive permutation group
and $\Delta$ is a block of imprimitivity
then $\cB = \{ \Delta^g : g \in G \}$ is a $G$-invariant
partition.
\begin{proof}
Since $(G,\Omega)$ is transitive,
every $\alpha \in \Omega$ is contained in $\Delta^g$
for some $g \in G$. Thus $\bigcup_{g \in G} \Delta^g = \Omega$.
If $\Delta^{g_1} \cap \Delta^{g_2} \neq \emptyset$
for some $g_1, g_2 \in G$
then $\Delta \cap \Delta^{g_2 g_1^{-1}} \neq \emptyset$.
Since $\Delta$ is a block of imprimitivity,
this implies that $\Delta = \Delta^{g_2 g_1^{-1}}$
and so $\Delta^{g_1} = \Delta^{g_2}$.
Thus $\cB$ is a partition of $\Omega$.
For any $\Delta^g \in \cB$ and any $x \in G$
$(\Delta^g)^x = \Delta^{gx} \in \cB$.
Thus $\cB$ is a $G$-invariant partition of $\Omega$.
\end{proof}
\end{proposition}
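For a concrete instance of this proposition, consider the cyclic group $C_6$ acting on $\mathbb{Z}_6$ by translation. The Python sketch below (our example in additive notation, not taken from the text) checks that $\Delta = \{0,3\}$ is a block of imprimitivity and that its images form a $G$-invariant partition.

```python
n = 6  # the cyclic group C_6 acting on Z_6 by translation

def translate(block, k):
    return frozenset((a + k) % n for a in block)

delta = frozenset({0, 3})  # a candidate block of imprimitivity

# Every translate of delta either equals delta or is disjoint from it,
images = {translate(delta, k) for k in range(n)}
assert all(img == delta or not (img & delta) for img in images)

# and the translates form a G-invariant partition of {0,...,5}.
print(sorted(sorted(b) for b in images))  # [[0, 3], [1, 4], [2, 5]]
```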
\begin{proposition}
If $(G,\Omega)$ is a transitive permutation group
and $\cB$ is a $G$-invariant partition of $\Omega$,
then each $B$ in $\cB$ is a block of imprimitivity.
\begin{proof}
Since $\cB$ is $G$-invariant, $B^g \in \cB$ for any
$g \in G$. Since $\cB$ is a partition, if $B^g \neq B$
then $B^g \cap B = \emptyset$.
The result follows.
\end{proof}
\end{proposition}
\begin{proposition}
The map $\pi : \Omega \to \cB$ which sends each point of $\Omega$
to the block of $\cB$ containing it
induces a surjective permutation homomorphism from $(G, \Omega)$
to $(G,\cB)$.
\begin{proof}
We must show that $\pi(\alpha^x) = \pi(\alpha)^x$
for any $\alpha \in \Omega$ and any $x \in G$.
By the previous proposition $\pi(\alpha)$ is a block of imprimitivity.
Let $\Delta = \pi(\alpha)$.
Since $\alpha \in \Delta$ we must have $\alpha^x \in \Delta^x$.
That is, $\pi(\alpha^x) = \Delta^x = \pi(\alpha)^x$.
\end{proof}
\end{proposition}
For any $B \in \cB$ we refer to the set $\pi^{-1}(B)$ as the
{\it fiber} of the homomorphism $\pi$ at $B$.
The fibers of a permutation homomorphism are blocks of imprimitivity.
Conversely, each block of imprimitivity $\Delta$
gives rise to the $G$-invariant partition:
$\cB = \{ \Delta^x : x \in G \} $
and thus also to a homomorphism from $(G,\Omega)$ to $(G,\cB)$.
For every permutation group both $\{\alpha\}$ and $\Omega$ are
trivially blocks of imprimitivity. A permutation group is \emph{primitive}
if it admits no nontrivial blocks of imprimitivity.
\begin{definition} [Imprimitive Symmetric Graph]
A symmetric graph $(G,\Gamma)$ is said to be {\it imprimitive} if
the induced permutation group, $(G, V \Gamma)$, is imprimitive.
\end{definition}
\begin{definition} [Quotient Graph]
Suppose that $(G,\Gamma)$ is an imprimitive symmetric graph
with $\cB$ a nontrivial $G$-invariant partition of the vertex set $V \Gamma$.
Define the {\it quotient graph} $\Gamma_\cB$ of $(G,\Gamma)$ with respect to $\cB$
to be the graph with vertex set $\cB$, and an arc $(B,C)$,
whenever there is some $\alpha \in B$
and some $\beta \in C$
such that $(\alpha, \beta )$ is an arc of $\Gamma$.
\end{definition}
\begin{proposition}
$(G, \Gamma_\cB)$ is a symmetric graph.
\begin{proof}
We show that $G$ acts transitively on the arcs of $\Gamma_\cB$.
Let $(B,C)$ and $(D,E)$ be any two arcs of $\Gamma_\cB$.
By the definition of the quotient graph,
there must be some $\alpha \in \pi^{-1}(B)$
and $\beta \in \pi^{-1}(C)$
such that $(\alpha, \beta) \in A \Gamma$.
Similarly, there must be some $\gamma \in \pi^{-1}(D)$
and $\delta \in \pi^{-1}(E)$
such that $(\gamma, \delta) \in A \Gamma$.
By the symmetry of $(G, \Gamma)$, we can find an element $g \in G$
which carries $(\alpha,\beta)$ to $(\gamma,\delta)$.
Since $\pi$ is a $G$-homomorphism, it follows that:
\begin{eqnarray*}
(B,C)^g & = & (\pi(\alpha),\pi(\beta))^g \\
& = & (\pi(\alpha)^g,\pi(\beta)^g) \\
& = & (\pi(\alpha^g),\pi(\beta^g)) \\
& = & (\pi(\gamma),\pi(\delta)) \\
& = & (D,E).
\end{eqnarray*}
\end{proof}
\end{proposition}
Suppose that some arc of $\Gamma$
has both its endpoints in the same fiber of $\cB$,
that is, there exists some $(\alpha,\beta) \in A \Gamma$
such that $\pi(\alpha) = \pi(\beta)$.
Since the fibers are blocks of imprimitivity,
it follows that $\pi(\alpha^g) = \pi(\beta^g)$ for each $g \in G$.
The arc transitivity of $\Gamma$ then implies that
every arc of $\Gamma$ has both its endpoints in the same fiber.
In this case $\Gamma_\cB$ is the empty graph (no edges)
with one vertex per connected component of $\Gamma$.
This case is not very interesting;
to exclude it we say that the quotient is {\it nontrivial}
if it has valency at least one.
By the above discussion, $\Gamma_\cB$ is nontrivial
if and only if each of the fibers of $\cB$ is an
{\it independent set} of $\Gamma$.
For the remainder of this paper we shall always assume
that the quotient of a $G$-symmetric graph homomorphism
is nontrivial, even if we neglect to state this explicitly.
\chapter{Coset Graphs}
In this chapter we first show that for any group $G$,
the transitive representations of $G$
together with $G$-homomorphisms between them
form a lattice isomorphic to a quotient of
the subgroup lattice of $G$.
Next we describe a construction of Sabidussi's
for vertex transitive graphs, and give a
group theoretic characterization
of symmetric graphs.
Finally we consider quotients
of symmetric graphs from a group theoretic perspective.
\section{Transitive Permutation Groups}
\begin{definition}[Core of a subgroup]
If $H$ is a subgroup of $G$, then the \emph{core} of $H$ in $G$ is:
$$\Core_G(H) = \bigcap_{x \in G} x^{-1} H x $$
\end{definition}
\begin{proposition} If $H$ is a subgroup of $G$,
then the core of $H$ in $G$ is a normal subgroup of $G$.
\begin{proof}
Since $xg$ runs over all the elements of $G$ as $x$ does, we have:
\begin{eqnarray*}
g^{-1}\Core_G(H)g & = & \bigcap_{x \in G} (xg)^{-1} H (xg) \\
& = & \bigcap_{x \in G} x^{-1} H x
\end{eqnarray*}
for any $g \in G$.
\end{proof}
\end{proposition}
\begin{definition} [Coset Representation]
For $H$ a subgroup of $G$,
let $\Cos_G(H)$ denote the right cosets of $H$ in $G$.
We may define an action of $G$ on $\Cos_G(H)$ by:
$$ (Ha)^x = Hax $$
This is indeed an action, since:
\begin{eqnarray*}
{(Ha)^x}^y & = & (Hax)^y \\
& = & (Haxy) \\
& = & (Ha)^{xy}
\end{eqnarray*}
for any ``$Ha$'' a coset of $H$ in $G$, and any $x,y \in G$.
We call a permutation group of the form $(G,\Cos_G(H))$ a
{\it coset representation} of $G$.
\end{definition}
\begin{example}[Right Regular Representation]
For any group $G$, the \emph{right regular representation}
of $G$ is the permutation group $(G,G)$ with the action given by:
$$ g^{h} = gh. $$
This is a special case of a coset representation
where $H$ is the trivial group $\{ 1 \}$.
\end{example}
\begin{proposition}
For any pair of groups $H \leq G$, the action of $G$
on the cosets of $H$ is transitive with kernel $\Core_G(H)$.
\begin{proof}
Let $Ha_1$ and $Ha_2$ be any two cosets of $H$ in $G$,
then:
$$(Ha_1)^{a_1^{-1}a_2} = Ha_2$$
so the action is transitive.
The stabilizer of the point ``$H$'' is $H$.
By Lemma 1, Chapter 1,
the stabilizer of the point ``$Ha$'' is $a^{-1}Ha$.
If $g \in G$ is such that $g$ stabilizes every coset of $H$ in $G$,
then we must have $g \in a^{-1}Ha$ for each $a \in G$.
That is $g \in \Core_G(H)$.
\end{proof}
\end{proposition}
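Both claims of this proposition are easy to verify computationally. The Python sketch below (ours, not from the text) builds the right cosets of an order-2 subgroup of $S_3$, checks that the action $(Ha)^x = Hax$ is transitive, and computes the kernel; since $H$ is not normal here, its core, and hence the kernel, is trivial.

```python
from itertools import permutations

def compose(g, h):
    # Right-action convention: i^(gh) = (i^g)^h.
    return tuple(h[g[i]] for i in range(len(g)))

G = [tuple(p) for p in permutations(range(3))]  # the symmetric group S_3
H = {(0, 1, 2), (0, 2, 1)}                      # a non-normal subgroup of order 2

# The right cosets Ha of H in G.
cosets = {frozenset(compose(h, a) for h in H) for a in G}

def act(coset, x):
    # The coset action (Ha)^x = Hax.
    return frozenset(compose(c, x) for c in coset)

# Transitivity: the orbit of the coset "H" is the whole of Cos_G(H).
start = frozenset(H)
assert {act(start, x) for x in G} == cosets

# The kernel is the core of H; here H is not normal, so it is trivial.
kernel = {x for x in G if all(act(c, x) == c for c in cosets)}
print(kernel)  # only the identity permutation (0, 1, 2)
```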
The next proposition tells us that every transitive representation
of a group $G$ is permutation equivalent to some coset representation.
\begin{proposition}
If $(G,\Omega)$ is a transitive permutation group
and $\alpha$ is some point of $\Omega$, then
$(G, \Omega)$ is permutation equivalent to $(G, \Cos_G(G_\alpha))$.
\begin{proof}
For each $\beta \in \Omega$, consider:
$$ S_\beta = \{ x \in G : \alpha^x = \beta \} $$
If $\beta = \alpha^g$ then it's not hard to see
that $S_{\beta}$ is a right coset of $G_{\alpha}$:
$$ S_{\beta} = G_{\alpha} g $$
Define $\eta : \Omega \to \Cos_G(G_\alpha)$ by $\eta(\beta) = S_\beta$
and take $\varphi : G \to G$ to be the identity.
We have, for any $\beta \in \Omega$ and any $x \in G$,
if $\beta = \alpha^g$ then:
\begin{eqnarray*}
\eta(\beta^x) & = & S_{\beta^x} \\
& = & S_{\alpha^{gx}} \\
& = & G_\alpha gx \\
& = & (G_\alpha g)^{\varphi(x)} \\
& = & \eta(\alpha^g)^{\varphi(x)} \\
& = & \eta(\beta)^{\varphi(x)}
\end{eqnarray*}
Thus $\eta$ and $\varphi$ establish the desired permutation equivalence.
\end{proof}
\end{proposition}
\section{Imprimitive Permutation Groups}
\begin{definition}[Setwise Stabilizer]
If $(G,\Omega)$ is a transitive permutation group and $\Delta$
is a subset of $\Omega$ then the {\it setwise stabilizer} of $\Delta$ is:
$$G_{\Delta} = \{ x \in G : \Delta^x = \Delta \}.$$
\end{definition}
\begin{lemma}
If $(G,\Omega)$ is a transitive permutation group and $\Delta$
is a block of imprimitivity, then
$G_{\alpha} \leq G_{\Delta} \leq G$
for any $\alpha \in \Delta$.
\begin{proof}
Since $\alpha \in \Delta$,
for any $g \in G$ we have $\alpha^g \in \Delta^g$.
Since $\Delta$ is a block of imprimitivity,
if $\alpha^g = \alpha$, then $\Delta^g \cap \Delta \neq \emptyset$
so $\Delta^g = \Delta$. That is $G_\alpha \leq G_\Delta$.
\end{proof}
\end{lemma}
\begin{proposition}
If $(G,\Omega)$ is a transitive permutation group and $\Delta$
is a block of imprimitivity, then $(G_{\Delta},\Delta)$
is a transitive permutation group.
\begin{proof}
Suppose that $\alpha, \beta \in \Delta$.
Since $(G,\Omega)$ is transitive, there is some $g \in G$
such that $\alpha^g = \beta$.
Since $\alpha^g \in \Delta^g$ and $\beta \in \Delta$,
we must have $\Delta^g \cap \Delta \neq \emptyset$.
But $\Delta$ is a block of imprimitivity,
so this implies that $\Delta^g = \Delta$.
That is $g \in G_{\Delta}$,
so $G_{\Delta}$ acts transitively on $\Delta$.
\end{proof}
\end{proposition}
\begin{proposition}
Suppose that $(G,\Omega)$ is a permutation group and $\alpha \in \Omega$.
For any subgroup $H$ such that $G_\alpha \leq H \leq G$,
the set $\alpha^H = \{\alpha^h : h \in H \}$ is a block of imprimitivity.
\begin{proof}
Let $\Delta = \alpha^H$.
If $\Delta^g \cap \Delta \neq \emptyset $ for some $g \in G$
then there must be some $ x,y \in H $ such that
$\alpha^{xg} = \alpha^y$.
It follows that $ g \in x^{-1} G_\alpha y \subseteq H $
and $ \Delta^g = \Delta $.
That is $ \Delta^g \cap \Delta \neq \emptyset \Rightarrow \Delta^g = \Delta$,
so $\alpha^H$ is a block of imprimitivity.
\end{proof}
\end{proposition}
\begin{lemma}
If $(G, \Omega)$ is a transitive permutation group
and $\alpha$ is some point of $\Omega$,
then $G_{\alpha^H} = H$
for any $H$ such that $G_\alpha \leq H \leq G$.
\begin{proof}
Let $\Delta = \alpha^H$.
Clearly $H \leq G_{\Delta}$.
To show that $H = G_\Delta$ it suffices to show that $G_\alpha$
has the same index in $H$ as in $G_\Delta$.
Since $\alpha^H$ is an orbit of $H$ on $\Omega$,
the permutation group $(H,\alpha^H)$ is transitive.
Since $G_\alpha \leq H$,
the stabilizer of the point $\alpha$ in $(H,\alpha^H)$
is $H \cap G_\alpha = G_\alpha$.
Thus by the orbit stabilizer theorem we have
$|\Delta| = [H : G_\alpha]$.
On the other hand, by the preceding propositions,
$\Delta$ is a block of imprimitivity
and $(G_{\Delta}, \Delta)$ is a transitive permutation group.
By lemma 3 we have $G_\alpha \leq G_\Delta$
so the stabilizer of the point $\alpha$ in $(G_\Delta, \Delta)$
is $G_\Delta \cap G_\alpha = G_\alpha$.
So again by the orbit stabilizer theorem we have
$|\Delta| = [G_{\Delta}: G_{\alpha}]$.
The result follows.
\end{proof}
\end{lemma}
\begin{lemma}
If $(G,\Omega)$ is a transitive permutation group,
$\Delta$ is a block of imprimitivity,
and $\alpha$ is any point of $\Delta$,
then $\alpha^{G_{\Delta}} = \Delta$.
\begin{proof}
Clearly $\alpha^{G_{\Delta}} \subseteq \Delta$.
Let $H = G_{\Delta}$.
We have $G_{\alpha} \leq H \leq G$
and $(H,\alpha^H)$ is a transitive permutation group
equivalent to $(H,\Cos_H(G_\alpha))$.
Since $(G_{\Delta}, \Delta)$ is also a transitive permutation group
equivalent to $(H, \Cos_H(G_\alpha))$
we must have $|\Delta| = |\alpha^H|$.
Since $\alpha^H \subseteq \Delta$, this implies that $\alpha^H = \Delta$.
\end{proof}
\end{lemma}
Let $(\cS, \leq)$ be the set of subgroups of $G$ containing $G_\alpha$,
partially ordered by the subgroup relation.
Let $(\cP, \subseteq)$ be the set of blocks of imprimitivity containing $\alpha$,
partially ordered by the subset relation.
\begin{proposition}
$(\cS, \leq)$ is order isomorphic to $(\cP, \subseteq)$.
\begin{proof}
Let $\Phi : \cP \to \cS$ be given by
$\Phi(\Delta) = G_{\Delta}$
and let $\Psi : \cS \to \cP$ be given by
$\Psi(H) = \alpha^H$.
Making use of the two preceding lemmas,
for any $H \in \cS$ we have:
$$ \Phi \circ \Psi(H) = \Phi(\alpha^H) = G_{\alpha^H} = H $$
and for any $\Delta \in \cP$ we have:
$$ \Psi \circ \Phi(\Delta) = \Psi (G_{\Delta}) = \alpha^{G_{\Delta}} = \Delta $$
so $\Phi$ and $\Psi$ are inverses.
To see that $\Phi$ is order preserving note that:
$$ G_{\Delta_1} \leq G_{\Delta_2}
\Longleftrightarrow \alpha^{G_{\Delta_1}} \subseteq \alpha^{G_{\Delta_2}}
\Longleftrightarrow \Delta_1 \subseteq \Delta_2 $$
\end{proof}
\end{proposition}
\begin{proposition}
Let $(G,\Omega_1)$ and $(G,\Omega_2)$ be two transitive representations
of $G$. For any $\alpha \in \Omega_1$, $\beta \in \Omega_2$,
the permutation groups $(G,\Omega_1)$ and $(G,\Omega_2)$
are permutation equivalent if and only if there is an automorphism of $G$
carrying $G_\alpha$ to $G_\beta$.
\begin{proof}
Suppose that $(G, \Omega_1)$ and $(G, \Omega_2)$ are permutation equivalent.
Let $\varphi: G \to G$ and $\eta : \Omega_1 \to \Omega_2$ be the maps
establishing the equivalence.
Let $\gamma = \eta(\alpha)$.
Clearly $G_\gamma = \varphi(G_\alpha)$.
By transitivity there must be some $g \in G$ such that $\beta = \gamma^g$.
Let $\vartheta : G \to G$ be the map $x \mapsto g^{-1} x g$.
We have:
\begin{eqnarray*}
G_\beta & = & g^{-1}G_\gamma g \\
& = & \vartheta(G_\gamma) \\
& = & \vartheta \circ \varphi (G_\alpha)
\end{eqnarray*}
So $\vartheta \circ \varphi$ carries $G_\alpha$ to $G_\beta$.
For the other direction, let $\psi$ be the map carrying $G_\alpha$
to $G_\beta$. By proposition 1,
$(G, \Omega_1)$ is permutation equivalent to $(G, \Cos_G(G_\alpha))$
and $(G, \Omega_2)$ is permutation equivalent to $(G, \Cos_G(G_\beta))$.
Take $\varphi = \psi$ and let
$\eta: \Cos_G(G_\alpha) \to \Cos_G(G_\beta)$ be given by
$(G_\alpha) g \mapsto (G_\beta) \psi(g)$.
For any $x \in G$ we have:
\begin{eqnarray*}
\eta({(G_\alpha g)}^x) & = & \eta((G_\alpha) gx) \\
& = & (G_\beta) \psi(gx) \\
& = & {(G_\beta) \psi(g)}^{\psi(x)} \\
& = & \eta((G_\alpha) g)^{\varphi(x)}
\end{eqnarray*}
thus $\varphi$ and $\eta$ establish a permutation equivalence between
$(G, \Cos_G(G_\alpha))$ and $(G, \Cos_G(G_\beta))$, and hence between
$(G, \Omega_1)$ and $(G, \Omega_2)$.
\end{proof}
\end{proposition}
Define an equivalence relation on the set of subgroups of $G$
as follows: $H_1 \sim H_2$ if and only if there is an automorphism of $G$
carrying $H_1$ to $H_2$.
Let $\cZ$ denote the equivalence classes of this relation.
Define a partial order on $\cZ$
by $[H_1] \leq [H_2]$ if and only if there is some $\tilde{H_1} \in [H_1]$ and some
$\tilde{H_2} \in [H_2]$ such that $\tilde{H_1} \leq \tilde{H_2}$.
The previous result tells us that, up to permutation equivalence,
the essential information about the transitive representations of $G$
is contained in $(\cZ, \leq)$. In particular all the transitive representations
of $G$ are to be found as quotients of the right regular representation.
\section{Sabidussi's Construction}
We saw in the last section that
every transitive permutation group is equivalent to
one of the form $(G,\Cos_G(H))$.
Thus, up to isomorphism, transitive permutation groups are
uniquely determined by pairs of groups $(G,H)$
with $H$ a subgroup of $G$.
In this section and the next
we shall see that, up to isomorphism,
a symmetric graph is uniquely
determined by a triple $(G,H,a)$
where $H$ is a subgroup of $G$
and $a \in G \backslash H$ is an involution.
Given a triple $(G,H,a)$, the idea is to construct
a graph whose vertices are the cosets of $H$ in $G$.
We call such a graph a {\it coset graph}.
The idea of a coset graph originally goes back to Sabidussi
who was studying vertex transitive graphs.
Sabidussi's construction was essentially
a generalization of the {\it Cayley graph} construction.
\begin{definition}[Cayley Graph]
Given a group $G$ and a subset $D \subseteq G$ the {\it Cayley graph},
$\Cay(G,D)$ is the directed graph with vertices the elements of $G$
and arc set $\{(x,y) : xy^{-1} \in D \}$.
\end{definition}
A Cayley Graph will contain no loops,
provided that $1 \not\in D$.
A Cayley Graph is simple if and only if
the set $D$ is closed under taking inverses,
and connected if and only if $D$ is a generating set for $G$.
The vertices are the elements of the group $G$,
and the action of $G$ on the vertices is permutation equivalent
to the right regular representation of $G$.
Not every vertex transitive graph is a Cayley Graph.
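These properties are easy to check for a concrete Cayley graph. The Python sketch below (our example in additive notation for the cyclic group $\mathbb{Z}_6$, not taken from the text) builds $\Cay(\mathbb{Z}_6, \{1,5\})$ and confirms that it is the 6-cycle.

```python
# Cay(Z_6, D) in additive notation: arc (x, y) whenever x - y lies in D.
n = 6
D = {1, 5}  # inverse-closed (5 = -1 mod 6), generates Z_6, omits 0

arcs = {(x, y) for x in range(n) for y in range(n) if (x - y) % n in D}

# Inverse-closed D gives a simple graph; 0 not in D gives no loops.
assert all((y, x) in arcs for (x, y) in arcs)
assert all((x, x) not in arcs for x in range(n))

# Every vertex has exactly the two neighbours x - 1 and x + 1: a 6-cycle.
neighbours = {x: {y for (u, y) in arcs if u == x} for x in range(n)}
print(neighbours[0])  # {1, 5}
```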
Sabidussi's idea was to generalize Cayley's construction by considering the
action of $G$ on the cosets of some non-trivial subgroup $H$.
\begin{definition}[Sabidussi Graph]
Given a group $G$, a subgroup $H$, and a set $D \subseteq G$,
the Sabidussi graph, $\Sab(G,H,D)$,
is the directed graph with vertex set $\Cos_G(H)$
and arc set $\{(Hx,Hy) : xy^{-1} \in D\}$.
\end{definition}
A Sabidussi graph will contain no loops
provided that $D \cap H = \emptyset$.
A Sabidussi Graph is simple if and only if
$D$ is closed under taking inverses
and connected if and only if $D \cup H$ generates $G$.
The vertices of the Sabidussi graph are
the cosets of the subgroup $H$ in $G$,
and the action of $G$ on the vertices is $(G,\Cos_G(H))$.
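The construction can be illustrated in additive notation. In the Python sketch below (our example, not from the text) we take $G = \mathbb{Z}_6$ and $H = \{0,3\}$; the connection set $D$ is chosen disjoint from $H$ and with $H + D = D$, so that the arc rule does not depend on the choice of coset representatives.

```python
# An additive analogue of Sab(G, H, D) with G = Z_6 and H = {0, 3}.
n = 6
H = {0, 3}
D = {1, 2, 4, 5}  # avoids H, inverse-closed, and H + D = D

def coset(x):
    return frozenset((x + h) % n for h in H)

vertices = {coset(x) for x in range(n)}

# Arc (H + x, H + y) whenever x - y lies in D.
arcs = {(coset(x), coset(y)) for x in range(n) for y in range(n)
        if (x - y) % n in D}

# Three cosets, each adjacent to the other two: the graph is K_3.
print(len(vertices))  # 3
```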
\begin{proposition}[Sabidussi]
Sabidussi Graphs are vertex transitive.
Every vertex transitive graph is isomorphic to a Sabidussi Graph.
\begin{proof}
Let $\Gamma = \Sab(G,H,D)$ be any Sabidussi graph
(simple and without loops).
The group $G$ has a natural transitive action on the vertices of $\Gamma$
given by $Hx^g = Hxg$.
To prove that $\Gamma$ is $G$-vertex transitive,
we must check that this action preserves the adjacency structure of $\Gamma$.
Suppose that $(Hx,Hy)$ is an arc of $\Gamma$,
so $xy^{-1} \in D$.
Since $xg(yg)^{-1} = xgg^{-1}y^{-1} = xy^{-1} \in D$
for any $g$,
it follows immediately that $(Hx,Hy)^g = (Hxg,Hyg)$
is also an arc of $\Gamma$.
This proves the first part.
Now, let $\Gamma$ be any $G$-vertex transitive graph.
It follows that $(G, V \Gamma)$ is permutation
equivalent to $(G, \cos_G(H))$ for some $H < G$.
Let $\eta : V \Gamma \to \cos_G(H)$ be the map establishing
the permutation equivalence, and let $\mu = \eta^{-1}$.
Let $N$ be the neighbourhood of the vertex $\mu(H)$, and let
$$D = \Big( \bigcup_{\alpha \in N} \eta(\alpha) \Big) \setminus H.$$
We will show that $\Gamma \cong \Sab(G,H,D)$.
The map $\eta : V \Gamma \to \cos_G(H)$
establishes a permutation isomorphism between the
vertices of $\Gamma$ and the vertices of $\Sab(G,H,D)$.
Let $(\alpha, \beta)$ be any arc of $\Gamma$.
By $G$-vertex transitivity, we can find some $g \in G$
such that $\alpha^g = \mu(H)$ and $\beta^g \in N$.
Suppose that $\eta(\alpha) = Hx$ and $\eta(\beta) = Hy$.
We must check that $xy^{-1} \in D$.
Since $\eta$ is a permutation isomorphism,
it follows that $\eta(\alpha^g) = Hxg$
and $\eta(\beta^g) = Hyg$.
Since $\beta^g \in N$ we must have
$xg(yg)^{-1} = xgg^{-1}y^{-1} = xy^{-1}\in D$.
The result follows.
\end{proof}
\end{proposition}
\section{A Group Theoretic Characterization of Symmetric Graphs}
Since symmetric graphs are vertex transitive,
every symmetric graph is a Sabidussi graph.
However, not every vertex transitive graph is symmetric,
so we expect there to be some extra conditions on
the subset $D$ in the symmetric case.
Recall from Chapter 1 that every symmetric graph
is an orbital graph.
In the first section of this chapter, we saw that if $H < G$
is the stabilizer of some point in $\Omega$, then $(G,\Omega)$
is permutation equivalent to $(G,\cos_G(H))$.
We shall see in this section that the orbitals of $(G,\Omega)$
actually correspond to {\it double cosets} of $H$ in $G$.
\begin{definition}
If $H$ is a subgroup of $G$, then a {\it double coset} of $H$ in $G$
is a subset of $G$ of the form
$HxH = \{ h_1 x h_2 : h_1, h_2 \in H \}$
for some $x \in G$.
\end{definition}
\begin{lemma}
Each double coset of $H$ in $G$ is a union of right cosets of $H$.
The double cosets of $H$ in $G$ form a partition of $G$.
\begin{proof}
The first part is obvious since $HxH = \bigcup_{h \in H} Hxh$
for any $x \in G$.
For each element $g \in G$ it is clear that $g \in HgH$
so $\bigcup_{x \in G} HxH = G$.
If $HxH \bigcap HyH \neq \emptyset$
then $h_1 x h_2 = h_3 y h_4 $ for some $h_1,h_2,h_3,h_4 \in H$.
It follows that $y = h_1^{-1} h_3 x h_4 h_2^{-1}$
and $HyH = H h_1^{-1} h_3 x h_4 h_2^{-1} H = HxH$.
This shows that any two double cosets are equal or disjoint,
so the double cosets partition $G$ as claimed.
\end{proof}
\end{lemma}
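For a small group both claims of the lemma can be confirmed by brute force. The following Python sketch (an illustration only) computes the double cosets of the subgroup generated by one transposition in the symmetric group on three letters:

```python
from itertools import permutations

# Double cosets of H = <(0 1)> in the symmetric group on {0, 1, 2}.
# compose(p, q) applies p first and then q; only associativity matters here.
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(q[p[i]] for i in range(3))
H = [(0, 1, 2), (1, 0, 2)]

double_coset = lambda x: frozenset(
    compose(compose(h1, x), h2) for h1 in H for h2 in H)
cosets = {double_coset(x) for x in S3}

# Each double coset has size a multiple of |H| (it is a union of right cosets) ...
assert all(len(c) % len(H) == 0 for c in cosets)
# ... and the double cosets partition the group: sizes add up, union is all of S3.
assert sum(len(c) for c in cosets) == len(S3)
assert set().union(*cosets) == set(S3)
# Here there are exactly two double cosets: H itself and its complement.
assert sorted(len(c) for c in cosets) == [2, 4]
```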
\begin{proposition}[Lorimer]
For any permutation group $(G,\Omega)$ with point stabilizer $H$,
the orbitals are in natural bijection with the double cosets of $H$ in $G$.
The self-paired orbitals correspond to those
double cosets which contain an involution.
\begin{proof}
From the first section in this chapter we know that $(G,\Omega)$
is permutation equivalent to $(G,\cos_G(H))$.
Recall from chapter 1 that the orbitals of a permutation group
are in bijection with the orbits of the point stabilizer.
We must show that each orbit of $(H,\cos_G(H))$
corresponds to a double coset of $H$ in $G$.
Let $\kappa$ be the map $Hx \mapsto HxH$.
Suppose that $Hx$ and $Hy$ are in the same orbit
of $(H,\cos_G(H))$.
Then there is some $h \in H$ such that $Hxh = Hy$.
It follows that $y \in Hxh \subset HxH$.
That is $HyH = HxH$ and so $\kappa(Hx) = \kappa(Hy)$.
Conversely, suppose that $\kappa(Hx) = \kappa(Hy)$
for some pair of cosets $Hx$ and $Hy$.
Then $x \in HyH$ and so $x = h_1 y h_2$ for some
$h_1, h_2 \in H$.
It follows that $Hx = Hyh_2$,
that is $Hx$ and $Hy$ lie in the same orbit of
$(H,\cos_G(H))$.
Since $\kappa$ is surjective,
this proves the first part.
Suppose that $a \in G$ is such that $a^2 = 1$ and $a \not\in H$.
Since $(H, Ha)^a = (Ha, Ha^2) = (Ha, H)$, it follows that
the orbital $\Delta$ of $(G,\cos_G(H))$
containing $(H,Ha)$ is self-paired.
Clearly $a \in HaH = \kappa(Ha)$,
so the double coset associated with $\Delta$ contains an involution.
Conversely, if the double coset $HxH$ contains
an involution $a$, then $HxH = HaH = \kappa(Ha)$.
Thus $HxH$ is associated with the orbital containing
$(H,Ha)$ which is self-paired.
This proves the second part.
\end{proof}
\end{proposition}
\begin{theorem} [Lorimer] For any pair of groups $H \leq G$ and any $a \in G$
such that $a \not\in H$ and $a^2 = 1$, the Sabidussi graph
$\Sab(G,H,HaH)$ is $G$-symmetric. Furthermore every symmetric graph
is of this form for some $G$, $H$ and $a$.
\begin{proof}
Let $\Gamma = \Sab(G,H,HaH)$.
By proposition ?? $\Gamma$ is $G$-vertex transitive.
Since $HaH$ is an orbit of the action of $H$
on $\Cos_G(H)$, the graph $\Gamma$ is also $G$-locally transitive.
Suppose that $\Gamma$ is a $G$-symmetric graph.
Let $\alpha$ be any vertex, and let $H = G_\alpha$.
By proposition ??, the permutation group $(G, V \Gamma)$
is equivalent to $(G, \Cos_G(H))$.
Let $\eta : V \Gamma \to \Cos_G(H))$ be any map
inducing this equivalence.
Let $\beta$ be any neighbour of $\alpha$ in $\Gamma$
and let $a \in G$ be such that $(\alpha,\beta)^a = (\beta,\alpha)$.
Clearly $a$ is an involution and $\eta(\beta) = Ha$.
Let $\Gamma' = \Sab(G,H,HaH)$.
The map $\eta$ induces a permutation equivalence
between the vertices of $\Gamma$ and the vertices of $\Gamma'$.
To show that $\Gamma$ is isomorphic to $\Gamma'$
we must show that $\eta$ preserves adjacency.
Let $(\omega, \delta)$ be any arc of $\Gamma$.
Let $g \in G$ be such that $\omega^g = \alpha$.
Then $\delta^g$ is some neighbour of $\alpha$.
Thus there exists $h \in H$ such that $\delta^{gh} = \beta$.
Therefore $\eta(\delta^{gh}) = \eta(\beta) = Ha$.
Since $\eta$ is a permutation equivalence,
$\eta(\delta^{gh}) = \eta(\delta)^{gh}$,
so $\eta(\delta) = Ha^{{(gh)}^{-1}} = Hah^{-1}g^{-1}$.
Simmilarly, $\eta(\omega^g) = \eta(\omega)^g$
so $\eta(\omega) = H^{g^{-1}} = Hg^{-1}$.
Now $ah^{-1}g^{-1}g = ah^{-1} \in HaH$
so $(\eta(\omega),\eta(\delta))$ is an arc of $\Gamma'$.
The result follows.
\end{proof}
\end{theorem}
\begin{lemma}
$|HxH| = |H|^2/|x^{-1}Hx \cap H|$.
\begin{proof}
The mapping $HxH \to x^{-1}HxH$ given by $h_1 x h_2 \mapsto x^{-1} h_1 x h_2$
is bijective, so $|HxH| = |x^{-1}HxH|$.
But by the rule for the product of two subgroups,
$|x^{-1}HxH| = |x^{-1}Hx||H|/|x^{-1}Hx \cap H|$.
Of course $|x^{-1}Hx| = |H|$, and the result follows.
\end{proof}
\end{lemma}
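The counting formula can likewise be checked by brute force. The Python sketch below (an illustration only; the choice of composition convention does not affect the count) verifies it for every $x$ in the symmetric group on three letters with $H$ generated by a transposition:

```python
from itertools import permutations

# Brute-force check of |HxH| = |H|^2 / |x^{-1}Hx cap H| for every x in S3,
# with H = <(0 1)>.  compose(p, q) applies p first, then q.
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(q[p[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
H = [(0, 1, 2), (1, 0, 2)]

checked = 0
for x in S3:
    HxH = {compose(compose(h1, x), h2) for h1 in H for h2 in H}
    conj = {compose(compose(inv(x), h), x) for h in H}    # x^{-1} H x
    assert len(HxH) == len(H) ** 2 // len(conj & set(H))
    checked += 1
assert checked == len(S3)
```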
\begin{proposition}
If $\Gamma$ is a $G$-symmetric graph isomorphic to the Sabidussi
graph $\Sab(G,H,HaH)$ then the stabilizer of an arc of $\Gamma$
is isomorphic to $a^{-1}Ha \cap H$ and the valency of $\Gamma$
is $|H|/|a^{-1}Ha \cap H|$.
\begin{proof}
Consider the arc $(H,Ha)$. The stabilizer of this arc is
the intersection between the stabilizer of the vertex ``$H$''
and the stabilizer of the vertex ``$Ha$''.
But the stabilizer of the vertex ``$H$'' is just $H$
and the stabilizer of the vertex ``$Ha$'' is $a^{-1}Ha$.
So the stabilizer of the arc is $a^{-1}Ha \cap H$.
Consider the action of $H$ on the neighbours
of the vertex ``$H$''.
Since $\Gamma$ is symmetric, this action is transitive.
If ``$Ha$'' is any neighbour of ``$H$'',
then the action of $H$ on the neighbours of ``$H$''
is permutation equivalent to the action of $H$
on the cosets of the stabilizer of the arc $(H,Ha)$.
Since the stabilizer of the arc $(H,Ha)$ is $a^{-1}Ha \cap H$
it follows that the valency of $\Gamma$ is
$[H : a^{-1}Ha \cap H] = |H|/|a^{-1}Ha \cap H|$.
\end{proof}
\end{proposition}
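Lorimer's theorem and the proposition above can be checked concretely for the smallest non-trivial example. The Python sketch below (an illustration only) builds $\Sab(G,H,HaH)$ with $G$ the symmetric group on three letters, $H$ generated by one transposition and $a$ another transposition; the result is the triangle, with valency $|H|/|a^{-1}Ha \cap H| = 2$:

```python
from itertools import permutations

# Sab(G, H, HaH) for G = S3, H = <(0 1)>, a = (1 2).
# compose(p, q) applies p first, then q; since HaH is a double coset the arc
# condition x*y^{-1} in HaH does not depend on the chosen coset representatives.
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(q[p[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
H = [(0, 1, 2), (1, 0, 2)]
a = (0, 2, 1)                                  # an involution outside H
assert compose(a, a) == (0, 1, 2) and a not in H

HaH = {compose(compose(h1, a), h2) for h1 in H for h2 in H}
coset = lambda x: frozenset(compose(h, x) for h in H)       # right coset Hx
vertices = {coset(x) for x in S3}
arcs = {(C, Cp) for C in vertices for Cp in vertices
        if compose(next(iter(C)), inv(next(iter(Cp)))) in HaH}

# The arc stabilizer is a^{-1}Ha cap H, and the valency is its index in H.
stab = {compose(compose(inv(a), h), a) for h in H} & set(H)
valency = len(H) // len(stab)
assert all(sum(1 for (C, Cp) in arcs if C == X) == valency for X in vertices)
assert (len(vertices), valency) == (3, 2)      # the graph is the triangle
```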
In some sense the valency of a symmetric graph
measures the extent to which a subgroup fails to be normal.
\section{Quotient Graphs}
In the first section of this Chapter,
we determined, for any two transitive permutation groups
$(G, \Omega_1)$ and $(G,\Omega_2)$,
the conditions under which there exists a
$G$-homomorphism from $(G,\Omega_1)$ to $(G,\Omega_2)$.
We saw that if $(G,\Omega_1)$
is permutation equivalent to $(G,\cos_G(H))$
for some $H < G$,
and $(G,\Omega_2)$ is permutation equivalent
to $(G,\cos_G(K))$ for some $K < G$,
then there is a $G$-homomorphism from $(G,\Omega_1)$
to $(G,\Omega_2)$ if and only if $H < K < G$.
In this section we give analogous conditions
under which there exists a $G$-homomorphism
from a $G$-symmetric graph $\Gamma$
to a $G$-symmetric graph $\Sigma$.
\begin{theorem} [Lorimer]
If $\Gamma$ is a $G$-symmetric graph isomorphic to $\Sab(G,H,HaH)$
and $\Sigma$ is a quotient of $\Gamma$,
then $\Sigma$ is isomorphic to $\Sab(G,K,KaK)$ for some $H < K < G$.
\begin{proof}
Let $\eta : V \Gamma \to \Cos_G(H)$ be the map inducing
the isomorphism between $\Gamma$ and $\Sab(G,H,HaH)$.
Let $\pi : V \Gamma \to V \Sigma$ be the map
inducing the homomorphism between $\Gamma$ and $\Sigma$.
Let $\alpha = \eta^{-1}(H)$,
let $B = \pi(\alpha)$ and let $\Delta = \pi^{-1}(B)$.
By proposition ?? $\Delta$ is a block of imprimitivity.
If $K = G_{\Delta}$ then the permutation group $(G, V \Sigma)$
is equivalent to $(G, \cos_G(K))$.
Let $\mu : V \Sigma \to \cos_G(K)$ be any map
inducing the equivalence.
If $\Sigma' = \Sab(G,K,KaK)$ then $\mu$ induces
a permutation equivalence between the vertices of $\Sigma$
and the vertices of $\Sigma'$.
We must check that $\mu$ preserves adjacency; the argument is essentially the same as in the proof of Lorimer's theorem above.
\end{proof}
\end{theorem}
If $a \in K$ then the valency of the quotient is one.
\chapter{The Extension Problem}
We would like to understand
how a symmetric graph $(G, \Gamma)$ can be ``unfolded''
into a larger imprimitive symmetric graph $(\widetilde{G}, \widetilde{\Gamma})$
admitting the original graph as a quotient.
In particular we'd like to understand how pairs of graphs
$(\Gamma, \widetilde{\Gamma})$ where $\widetilde{\Gamma}$
is an ``extension'' of $\Gamma$ are related combinatorially.
That is, we'd like to be able to describe the structure
of the graph $\widetilde{\Gamma}$
in terms of the structure of the graph $\Gamma$.
The quotient of a symmetric graph contains considerably less information
than the original graph.
Gardiner and Praeger \cite{GPL1993} observed that
some of the information that is lost
may be recovered from the induced bipartite graph
between adjacent blocks of the partition,
and from a combinatorial design induced on the blocks themselves.
In this chapter we describe Gardiner and Praeger's observations
as well as some of the questions it raises.
\section{Induced Bipartite graph}
Let $\Gamma$ be an imprimitive $G$-symmetric graph,
with $\cB$ a non-trivial $G$-invariant partition of the vertices.
As always, we assume that the quotient $\Gamma_\cB$ has valency at least one,
so the blocks of $\cB$ are independent sets.
For any arc $(B,C)$ of the quotient,
the subgraph of $\Gamma$ induced by $B \cup C$ must be bipartite --
possibly containing some isolated vertices.
If we restrict ourselves to the subgraph induced by
$(\Gamma(C) \cap B) \cup (\Gamma(B) \cap C)$
then we obtain a bipartite graph with no isolated vertices.
We call this graph the {\it induced bipartite graph} of $(B,C)$,
and denote it by $\Gamma[B,C]$.
\begin{proposition}
The induced bipartite graph $\Gamma[B,C]$ is $G_{B \cup C}$-symmetric.
\begin{proof}
Let $(\alpha,\beta)$ and $(\gamma,\delta)$ be any two arcs of $\Gamma[B,C]$.
Without loss of generality we may assume that $\alpha \in B$ and $\beta \in C$.
By the $G$-symmetry of $\Gamma$ we can find $g \in G$ such that
$(\alpha,\beta)^g = (\gamma,\delta)$.
Since $\Gamma[B,C]$ is bipartite, either $\gamma \in B$ and $\delta \in C$
in which case $B^g = B$ and $C^g = C$,
or $\gamma \in C$, $\delta \in B$, in which case $B^g = C$ and $C^g = B$.
Either way, $g \in G_{B \cup C}$ and the result follows.
\end{proof}
\end{proposition}
\begin{proposition}
For any two arcs in the quotient, the induced bipartite graphs are isomorphic.
\begin{proof}
Let $(B,C)$ and $(D,E)$ be any two arcs of $\Gamma_\cB$.
Since $\Gamma_\cB$ is $G$-symmetric we can find some $g \in G$
such that $(B,C)^g = (D,E)$. Clearly
$$(\Gamma(C) \cap B)^g = (\Gamma(C^g) \cap B^g) = (\Gamma(E) \cap D),$$
and
$$(\Gamma(B) \cap C)^g = (\Gamma(B^g) \cap C^g) = (\Gamma(D) \cap E).$$
So $g$ induces a bijection
$$(\Gamma(C) \cap B) \cup (\Gamma(B) \cap C)
\to (\Gamma(E) \cap D) \cup (\Gamma(D) \cap E).$$
Since $g$ is an automorphism, adjacency is preserved and we have an isomorphism
from $\Gamma[B,C]$ to $\Gamma[D,E]$.
\end{proof}
\end{proposition}
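A concrete example may help. The $6$-cycle with its antipodal partition is an imprimitive symmetric graph whose quotient is the triangle; the Python sketch below (an illustration only) checks that each induced bipartite graph between adjacent blocks is a perfect matching on $2+2$ vertices, so they are indeed pairwise isomorphic:

```python
# The 6-cycle with antipodal blocks {0,3}, {1,4}, {2,5}: its quotient is the
# triangle, and each induced bipartite graph Gamma[B,C] is a perfect matching.
edges = {frozenset({v, (v + 1) % 6}) for v in range(6)}
partition = [frozenset({0, 3}), frozenset({1, 4}), frozenset({2, 5})]

def induced(B, C):
    # Edges of the cycle with one end in B and the other in C.
    return {e for e in edges if len(e & B) == 1 and len(e & C) == 1}

cross = [induced(B, C) for B in partition for C in partition if B != C]
assert all(len(g) == 2 for g in cross)                 # two edges each ...
assert all(len(set().union(*g)) == 4 for g in cross)   # ... on 4 distinct vertices
```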
When passing from a symmetric graph to its quotient
we preserve the adjacency structure of the blocks,
and discard the exact details of which vertices are connected to which others.
Some of the information that has been lost is recoverable from the induced bipartite graph.
But the induced bipartite graph reveals only the {\it local} connectivity.
To reconstruct the original graph from its quotient we need to know
how these induced bipartite graphs are ``glued together''.
Gardiner and Praeger observed that some of this global information about
the way the induced bipartite graphs fit together is captured in a combinatorial design
induced on each of the blocks.
Before we can describe this design we need a little background.
\section{Combinatorial Designs}
\begin{definition}
An {\it incidence structure} is a triple $(P,B,I)$
where $P$, and $B$ are sets, usually refered to as
{\it points} and {\it blocks} respectively,
and $I$ is an {\it incidence relation} $I \subseteq P \times B$.
\end{definition}
It is often convenient to visualize an incidence structure
as a bipartite graph.
Take the points as vertices of one side of
the bipartition and the blocks as vertices of the other.
Draw an edge between the vertex corresponding to a point $p$
and the vertex corresponding to a block $b$ if and only if
$(p,b) \in I$.
We will be interested in incidence structures
satisfying strong regularity and symmetry conditions.
\begin{definition}
A $(v,k,\lambda)$-design is an incidence structure such that:
\begin{enumerate}
\item There are $v$ points in total
\item Each block is incident with exactly $k$ points
\item Each point is incident with exactly $\lambda$ blocks
\end{enumerate}
\end{definition}
When visualizing an incidence structure as a bipartite graph,
the extra regularity conditions of a design correspond
to the condition that
any two vertices in the same bipartite half of the graph
must have the same valency.
Note that our definition here of a ``design''
corresponds to what is usually referred to as a $1$-design
or ``tactical configuration'' in the literature.
There is a more general definition of a $t$-design
of which the definition of a $1$-design is a special case.
We will not be needing this more general definition.
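A standard example of a design in the above sense is the Fano plane. The following Python sketch (an illustration only, with the lines written out explicitly) checks the three conditions of the definition, here with $v = 7$, $k = 3$ and $\lambda = 3$ in the convention above:

```python
# The Fano plane: 7 points, 7 lines of 3 points each, every point on 3 lines.
points = range(1, 8)
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

assert len(set(points)) == 7                                  # v = 7
assert all(len(b) == 3 for b in blocks)                       # k = 3
assert all(sum(p in b for b in blocks) == 3 for p in points)  # lambda = 3
```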
The incident point-block pairs of a design are usually referred to as {\it flags}.
For a design $\cD$ we will use the notation
$P_\cD$, $B_\cD$ and $F_\cD$ to denote the points,
blocks and flags of $\cD$ respectively.
Define the {\it trace} of a block $b$ to be the set
$T(b) = \{ p \in P : (p,b) \in I \}$.
Similarly, define the {\it trace} of a point $p$
to be the set $T(p) = \{b \in B : (p,b) \in I \}$.
\begin{proposition}
In a design, the number of blocks with the same trace is a constant
that is independent of the choice of block.
\end{proposition}
We denote this constant by $m$ and call it the
{\it multiplicity} of the design.
If distinct blocks have distinct traces,
then we may identify the blocks with their traces,
and take $B$ to be a subset of the power set of $P$.
Otherwise we say that the design contains {\it repeated blocks}.
\begin{proposition}
If a design contains repeated blocks,
then the incidence structure obtained by identifying
blocks with the same trace is again a design.
\end{proposition}
\begin{definition}
If $\cD = (P_\cD, B_\cD, I)$ is a $1$-design then the {\it dual}
of $\cD$ is the design $ \cD^{*} = (B_\cD, P_\cD, I^{*})$
where $(b,p) \in I^{*}$ if and only if $(p,b) \in I$.
\end{definition}
\begin{proposition}
The dual of a $1$-design is a $1$-design.
\end{proposition}
\section{Flag Transitive Designs}
An automorphism of a design $\cD$ is a pair $(\eta, \mu)$ of bijections
$\eta : P_\cD \to P_\cD$ and $\mu : B_\cD \to B_\cD$
with the property that $(p,b)$ is a flag of $\cD$
if and only if $(\eta(p), \mu(b))$ is a flag.
The automorphisms of a design form a group.
As with sets and graphs, we say that a group $G$ acts on a design $\cD$
as a group of automorphisms if there is some homomorphism $\rho : G \to \Aut(\cD)$.
We don't require this homomorphism to be injective, that is we don't
require that $G$ acts {\it faithfully} on $\cD$.
If the group $G$ acts on the design $\cD$ as a group of automorphisms
then we get three induced permutation groups, namely:
\begin{enumerate}
\item $(G, P_\cD)$ -- The induced permutation group on the points.
\item $(G, B_\cD)$ -- The induced permutation group on the blocks.
\item $(G, F_\cD)$ -- The induced permutation group on the flags.
\end{enumerate}
We will be interested in highly symmetric designs.
In particular, we will be interested in pairs $(G, \cD)$
where $G$ acts on $\cD$ in such a way that the induced
permutation group on the flags $(G, F_\cD)$ is transitive.
We will either call the pair $(G, \cD)$ a flag-transitive design,
or we call the design $\cD$ a $G$-flag transitive design.
Many interesting groups arise naturally as flag-transitive
automorphism groups of designs, including a number
of the sporadic simple groups.
\begin{proposition}
If $(G,\cD)$ is a flag-transitive design
then the induced permutation groups $(G,P_\cD)$ and $(G,B_\cD)$
are both transitive.
\end{proposition}
\begin{proposition}
If $(G,\cD)$ is a flag-transitive design and $H_1 < G$
is the stabilizer of some point $p$ then the
induced permutation group $(H_1, T(p))$ is transitive.
Similarly, if $H_2 < G$ is the stabilizer of some block $b$
then the induced permutation group $(H_2, T(b))$ is transitive.
\end{proposition}
If $\cD_1$ and $\cD_2$ are two $G$-flag transitive designs,
then a {\it $G$-design homomorphism} from $\cD_1$ to $\cD_2$
is a pair of maps $\rho = (\rho_P, \rho_B)$
such that:
\begin{enumerate}
\item $\rho_P : P_{\cD_1} \to P_{\cD_2}$ induces a permutation homomorphism between
$(G,P_{\cD_1})$ and $(G,P_{\cD_2})$.
\item $\rho_B : B_{\cD_1} \to B_{\cD_2}$ induces a permutation homomorphism between
$(G,B_{\cD_1})$ and $(G,B_{\cD_2})$.
\item $(p,b) \in I_1$ if and only if $(\rho_P(p), \rho_B(b)) \in I_2$.
\end{enumerate}
When both $\rho_P$ and $\rho_B$ are bijections we have a {\it $G$-design isomorphism}.
When confusion is unlikely to result,
we will sometimes write $\rho(p)$ instead of $\rho_P(p)$ when $p$ is a point
and $\rho(b)$ instead of $\rho_B(b)$ when $b$ is a block.
\begin{proposition}
If $\rho$ is a $G$-design homomorphism from $\cD_1$ to $\cD_2$
then $\rho$ induces a $G$-permutation homomorphism from $F_{\cD_1}$ to $F_{\cD_2}$.
\end{proposition}
\section{$G$-symmetric designs}
In this section we show how $G$-symmetric designs
may be viewed as a special kind of flag-transitive design.
We call them symmetric designs here,
though the term ``square design'' is arguably more appropriate.
\begin{definition}
A $G$-flag transitive design $\cD$ is {\it self-dual}
if there exists a $G$-isomorphism $\rho = (\rho_P, \rho_B)$
between $\cD$ and $\cD^{*}$.
The $G$-isomorphism $\rho$ is called a {\it duality} of $\cD$.
\end{definition}
\begin{definition}
A {\it $G$-symmetric design} is a self-dual $G$-flag transitive design $\cD$
which admits a duality $\rho = (\rho_P, \rho_B)$ with the property that
$\rho_B \circ \rho_P = id_P$ and $\rho_P \circ \rho_B = id_B$.
In this case the $G$-isomorphism $\rho$ is called a {\it polarity}
of $\cD$.
\end{definition}
Observe that if $\cD$ is a $G$-symmetric design with the property
that $(p, \rho(p))$ is a flag for some $p \in P_\cD$,
then since $\rho$ is a $G$-isomorphism it follows
that the point-stabilizer of $\cD$ is isomorphic to the flag-stabilizer of $\cD$.
Thus each point is incident with exactly one block
and vice versa.
We consider such $G$-symmetric designs to be ``degenerate'',
and unless an explicit statement to the contrary is given shall
take the expression ``$G$-symmetric design''
to mean ``non-degenerate $G$-symmetric design''.
In a sense that will become clear a little later,
these ``degenerate'' $G$-symmetric
designs correspond to the ``degenerate'' orbital graph $\orb_\Delta(G,\Omega)$
which could be formed from the permutation group $(G, \Omega)$
by taking $\Delta$ to be the diagonal orbit,
and also to the ``degenerate'' Sabidussi graph $\sab(G,H,a)$
which could be formed by taking $a \in H$.
\hfill \break
We shall see that each $G$-symmetric graph
gives rise in a natural way to a $G$-symmetric design
and conversely a $G$-symmetric design together
with a ``marked'' polarity give rise to a $G$-symmetric graph.
We shall also see that in some cases, by choosing a different
polarity it is possible to construct two non-isomorphic $G$-symmetric
graphs from the same $G$-symmetric design.
For any graph $\Gamma$, let $N \Gamma = \{ \Gamma(v) : v \in V \Gamma \}$
denote the set of {\it neighbourhoods} of $\Gamma$.
\begin{proposition}
If \/ $\Gamma$ is a $G$-symmetric graph,
then the incidence structure $\cD(\Gamma) = (V \Gamma, N \Gamma,I)$
where $(v,n) \in I$ if and only if $v \in n$ is a $G$-symmetric design.
\begin{proof}
The $G$-arc transitivity of $\Gamma$ is sufficient to ensure
that $\cD(\Gamma)$ is a $G$-flag transitive design.
To see that it is in fact
a $G$-symmetric design we must exhibit a polarity.
Let $\rho_P : P \to B$ be given by $v \mapsto \Gamma(v)$.
The map $\rho_P$ is clearly bijective, and since $G$ acts on $\Gamma$
as a group of automorphisms we have:
\begin{eqnarray*}
\rho_P(v^g) & = & \Gamma(v^g) \\
& = & \Gamma(v)^g \\
& = & \rho_P(v)^g.
\end{eqnarray*}
So $\rho_P$ induces a permutation isomorphism.
Now, take $\rho_B = \rho_P^{-1}$.
We must check that the pair $\rho = (\rho_P, \rho_B)$ preserves
the incidence structure of the design.
Suppose that $(v,\Gamma(w))$ is a flag of $\cD$, so $v \in \Gamma(w)$.
Since $\Gamma$ is a simple graph it follows immediately that $w \in \Gamma(v)$
and so $(w, \Gamma(v))$ is also a flag of $\cD$.
That is $(\Gamma(v), w)$ is a flag of $\cD^{*}$.
But
\begin{eqnarray*}
\rho((v, \Gamma(w))) & = & (\rho_P(v), \rho_B(\Gamma(w))) \\
& = & (\Gamma(v), w).
\end{eqnarray*}
So we are done.
\end{proof}
\end{proposition}
\begin{proposition}
If $\cD$ is a (non-degenerate) $G$-symmetric design with ``marked'' polarity $\rho$
then the graph $\Gamma(\cD, \rho)$ with vertex set $P_\cD$
and arc set $\{ (p,q) : (q,\rho_P(p)) \in I_\cD \}$ is $G$-symmetric.
\begin{proof}
We must first check that $\Gamma(\cD, \rho)$ is well-defined.
The non-degeneracy of $\cD$ ensures that there are no loops.
If $(p,q)$ is an arc of $\Gamma(\cD, \rho)$
then $(q, \rho_P(p))$ is a flag of $\cD$.
Since $\rho$ is an isomorphism,
if $(q, \rho_P(p))$ is a flag of $\cD$
then $(\rho_P(q), \rho_B \circ \rho_P(p)) = (\rho(q), p)$
is a flag of $\cD^{*}$.
It follows that $(p,\rho(q))$ is a flag of $\cD$,
and thus $(q,p)$ is an arc of $\Gamma(\cD, \rho)$.
So $\Gamma(\cD, \rho)$ is simple.
By proposition [?] $G$ acts transitively on the points of $\cD$
so $\Gamma(\cD, \rho)$ is $G$-vertex transitive.
Suppose that $(p,q_1)$ and $(p,q_2)$ are two distinct arcs of
$\Gamma(\cD, \rho)$. Then we have $q_1, q_2 \in T(\rho(p))$.
By proposition [?]
the stabilizer of $\rho(p)$ acts transitively on $T(\rho(p))$.
Thus we can find some $g \in G$ which fixes $\rho(p)$ and
carries $q_1$ to $q_2$.
Since $\rho$ is a $G$-isomorphism, if $g$ fixes $\rho(p)$
it also fixes $p$, thus $g$ carries the arc $(p,q_1)$
to the arc $(p,q_2)$. So $\Gamma(\cD, \rho)$ is $G$-locally
transitive. The result follows.
\end{proof}
\end{proposition}
\begin{proposition}
For any design $\cD$ and any polarity $\rho$ the design $\cD(\Gamma(\cD,\rho))$
is isomorphic to $\cD$.
\begin{proof}
later
\end{proof}
\end{proposition}
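The three propositions above can be observed concretely on the $5$-cycle, which is symmetric under the dihedral group of order $10$. The Python sketch below (an illustration only) builds the neighbourhood design, checks the design conditions and the polarity $v \mapsto \Gamma(v)$, and confirms that re-forming the graph from the design and polarity recovers the cycle:

```python
# The neighbourhood design D(Gamma) of the 5-cycle.
V = range(5)
nbhd = {v: frozenset({(v - 1) % 5, (v + 1) % 5}) for v in V}

# Each block has 2 points and each point lies in 2 blocks: a (5, 2, 2)-design.
assert all(len(n) == 2 for n in nbhd.values())
assert all(sum(v in n for n in nbhd.values()) == 2 for v in V)
# The polarity v -> Gamma(v) respects incidence because the graph is simple.
assert all((v in nbhd[w]) == (w in nbhd[v]) for v in V for w in V)

# Re-forming the graph: (p, q) is an arc iff q is incident with rho(p) = Gamma(p);
# this recovers the original 5-cycle.
arcs = {(p, q) for p in V for q in V if q in nbhd[p]}
assert arcs == {(v, (v + 1) % 5) for v in V} | {(v, (v - 1) % 5) for v in V}
```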
\section{``cross-section'' of a graph homomorphism}
Suppose that $\Gamma$ is an imprimitive $G$-symmetric graph
with $\cB$ a non-trivial $G$-invariant partition of the vertices.
Assume further that the quotient $\Gamma_\cB$ has valency
at least one so that the blocks of $\cB$ are independent sets.
For any vertex $\alpha$ of $\Gamma$, let $B(\alpha)$ denote
the block of $\cB$ containing $\alpha$.
Let $\Gamma(\alpha) = \{ \beta \in V \Gamma : (\alpha, \beta) \in A \Gamma \}$
denote the neighbours of $\alpha$ in $\Gamma$.
Let $\Gamma_\cB(B) = \{ C \in \cB : (B,C) \in A \Gamma_\cB \}$
denote the neighbours of $B$ in the quotient.
For any $\alpha \in B$, let
$\Gamma_\cB(\alpha) = \{ B(\beta) : \beta \in \Gamma(\alpha) \}$
denote the blocks of $\cB$ containing the neighbours of $\alpha$ in $\Gamma$.
Construct a design with point set $B$, block set $\Gamma_\cB(B)$
and an incidence relation $ I \subseteq B \times \Gamma_\cB(B)$
defined by $(\alpha, C) \in I$ if and only if $C \in \Gamma_\cB(\alpha)$.
\begin{proposition}
The incidence structure $\cD(B) = (B, \Gamma_\cB(B), I)$ is an $H$-flag transitive
$(v,k,\lambda)$ design, where:
\begin{eqnarray*}
v & := & |B|, \\
k & := & |\Gamma(C) \cap B|, \\
\lambda & := & |\Gamma_\cB(\alpha)|, \\
\end{eqnarray*}
and $H$ is the stabilizer of the block $B \in \cB$.
\end{proposition}
The design $\cD(B)$ gives, in a sense, a ``cross-section''
of the quotient homomorphism $\Gamma \mapsto \Gamma_\cB$.
\section{Reconstruction problem}
Gardiner and Praeger observed that if $\Gamma$ is a $G$-symmetric graph
and $\cB$ is a non-trivial $G$-invariant partition of the vertices,
then the graph $\Gamma$ ``decomposes''
into the triple $(\Gamma_\cB, \Gamma[B,C], \cD(B))$.
We may now ask to what extent the combinatorial structure of $\Gamma$
is determined by the triple $(\Gamma_\cB, \Gamma[B,C], \cD(B))$?
Do these three components contain sufficient information
to reconstruct $\Gamma$ uniquely?
If not, what extra information is required?
Suppose we are given a symmetric graph $\Lambda$,
a symmetric bipartite graph $\Sigma$ and a flag transitive design
$\cD$ without any particular groups acting upon them.
Let us say that a symmetric graph $\Gamma$ is a
``product'' of $(\Lambda, \Sigma, \cD)$ if there is a group $G$
acting on $\Gamma$ and a $G$-invariant partition $\cB$ of
the vertices of $\Gamma$
such that $\Gamma$ decomposes into the triple
$(\Lambda, \Sigma, \cD)$.
What are the necessary and sufficient
conditions under which these three components $(\Lambda, \Sigma, \cD)$
can be ``glued together'' into some ``product'' $\Gamma$?
If the necessary conditions are satisfied,
could there be more than one way to ``glue together'' a given
triple $(\Lambda, \Sigma, \cD)$ into an imprimitive symmetric graph
$\Gamma$?
\chapter*{Introduction}
A graph is a combinatorial object that captures abstractly
the idea of a relationship amongst the elements in a set.
Associated with every combinatorial object is a group of symmetries,
or an \emph{automorphism group}.
An automorphism is, loosely speaking, a structure preserving map from the object to itself.
A homomorphism maps a complex object onto a simpler one
in such a way that certain features of the original object are preserved
while others are lost.
In this paper we study a family of highly symmetric graphs
using a mixture of group theoretic and combinatorial techniques.
The graphs we study have the property that locally
they ``look the same'' at every vertex,
while globally they are rich in structure.
In particular we look at homomorphic images,
or quotients, of symmetric graphs.
We would like to understand how the combinatorial structure of a
symmetric graph is related to that of its quotients.
When passing from a graph to its quotient, information is lost.
For any given symmetric graph there are, in fact, infinitely many
larger symmetric graphs which admit the given graph as a quotient.
Where is this information being lost to?
How is it possible to take a symmetric graph and ``unfold''
it into a larger symmetric graph which
admits the original as a quotient?
What extra information is needed?
\hfill \break
In Chapter 1, we introduce the basic notions from the theory of
permutation groups necessary to give the definition of
a symmetric graph, an imprimitive symmetric graph
and the quotient of an imprimitive symmetric graph.
In Chapter 2 we introduce the idea of coset spaces and coset graphs.
We see that, in some sense,
symmetric graphs capture combinatorially
the way that a subgroup sits inside a larger group.
In Chapter 3 we look at the ``extension problem''
for symmetric graphs and describe a ``geometric approach''
to the problem suggested by Gardiner and Praeger.
In Chapter 4 we look at a number of methods
for constructing symmetric graphs with a given quotient.
\section{Description of the problem}
We would like to understand the ways in which a symmetric graph
$(G, \Gamma)$ can be ``unfolded'' into a larger imprimitive symmetric graph
$(\widetilde{G}, \widetilde{\Gamma})$ which admits the original graph as a quotient.
On a group theoretic level, if we knew the subgroup structure of $G$
and we knew all the groups $\widetilde{G}$ which have $G$ as a
composition factor then in a sense we would ``know'' all the possible unfoldings.
What would still be lacking is a {\it combinatorial} understanding
of how the structure of the new graph is related to that of the original.
The {\it three-arc graph construction} gives a nice description
of the graph $\widetilde{\Gamma}$ in terms of the graph $\Gamma$
in the special case where $\Gamma \cong \sab(G,H,a)$,
$\widetilde{\Gamma} \cong \sab(G,K,a)$ and $K \cong a^{-1}Ha \cap H$.
That is, the special case where the stabilizer of an arc of $\Gamma$
is isomorphic to the stabilizer of a vertex of $\widetilde{\Gamma}$.
One may ask whether there are other {\it group theoretic} assumptions
which may be imposed on a pair of symmetric graphs $(\Gamma, \widetilde{\Gamma})$
which will yield a nice description of $\widetilde{\Gamma}$ in terms of $\Gamma$.
I have considered the case where $\Gamma \cong \sab(G,H,a)$,
$\widetilde{\Gamma} \cong \sab(G,K,a)$
and there exists a normal subgroup $N$ of $G$ such that
$G$ is a semidirect product of $N$ by $H$.
Here are some rough notes.
\section{Labelling technique}
Every $g \in G$ has a unique expression of the form $g = (h,n)$
where $h \in H$ and $n \in N$.
Since the elements of $N$ form a transversal of the cosets of $H$ in $G$,
the vertices of $\Gamma$ may be labelled by elements of $N$,
and in fact form a group.
The action of $G$ on the vertices of $\Gamma$ is given by
$m^{(h,n)} = h^{-1}mhn$.
Let us take $\cD$ to be the design induced on the fiber of the
vertex of $\Gamma$ which is labelled by the identity of $N$.
The points of $\widetilde{\Gamma}$ may be labelled by ordered pairs of the form
$(x, m)$ where $x$ is a coset of $K$ and $m \in N$.
The action of $G$ on the points of $\widetilde{\Gamma}$ is then given by:
$(x,m)^{(h,n)} = (x^h, h^{-1}mhn)$.
This is indeed an action, since:
\begin{eqnarray*}
{(x,m)^{(h_1,n_1)}}^{(h_2,n_2)} & = & (x^{h_1}, h_{1}^{-1} m h_1 n_1)^{(h_2,n_2)} \\
& = & (x^{h_1 h_2}, h_{2}^{-1}h_1^{-1} m h_1 n_1 h_2 n_2) \\
& = & (x^{h_1 h_2}, h_{2}^{-1}h_1^{-1} m h_1 h_2 h_2^{-1} n_1 h_2 n_2) \\
& = & (x,m)^{(h_1 h_2, h_2^{-1} n_1 h_2 n_2)} \\
& = & (x,m)^{(h_1,n_1)(h_2,n_2)} \\
\end{eqnarray*}
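The same verification can be carried out numerically for the smallest non-trivial example, writing the symmetric group on three letters as a semidirect product of the cyclic group of order $3$ by the cyclic group of order $2$, where conjugation by the involution inverts the normal subgroup. The following Python sketch (an illustration only) checks the action axiom for all choices:

```python
from itertools import product

# S3 as N x| H with N = Z_3 normal and H = Z_2, conjugation inverting N:
# h^{-1} m h = (-1)^h * m.  The labelled action is m^{(h,n)} = h^{-1} m h * n.
act = lambda m, h, n: ((-1) ** h * m + n) % 3
# Group law (h1, n1)(h2, n2) = (h1 h2, h2^{-1} n1 h2 * n2), as derived above.
mult = lambda g1, g2: ((g1[0] + g2[0]) % 2,
                       ((-1) ** g2[0] * g1[1] + g2[1]) % 3)

G = list(product(range(2), range(3)))
for m in range(3):
    for g1 in G:
        for g2 in G:
            # Action axiom: acting by g1 then g2 equals acting by g1*g2.
            assert act(act(m, *g1), *g2) == act(m, *mult(g1, g2))
```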
\section{Self-paired orbital on the flags $\cD$}
I will show that there is a self-paired orbital on the flags
of $\cD$ which can be used to reconstruct $\widetilde{\Gamma}$
from $\Gamma$.
Suppose that $((x,p), (y,b))$ is an arc of $\widetilde{\Gamma}$.
Then $(p,b)$ is an arc of the quotient $\Gamma$.
Making use of the automorphism $(1,p^{-1})$ of $\widetilde{\Gamma}$
we have that $((x,p), (y,b))^{(1,p^{-1})} = ((x,1), (y,bp^{-1}))$
is also an arc of $\widetilde{\Gamma}$,
and so $(x,bp^{-1})$ is a flag of $\cD$.
Since $\widetilde{\Gamma}$ is assumed to be a simple graph,
if $((x,p), (y,b))$ is an arc of $\widetilde{\Gamma}$
then so is $((y,b),(x,p))$.
Making use of the automorphism $(1,b^{-1})$ we have that
$((y,b),(x,p))^{(1,b^{-1})} = ((y,1), (x,pb^{-1}))$
is an arc of $\widetilde{\Gamma}$,
and so $(y, pb^{-1})$ is a flag of $\cD$.
What we have done is ``decomposed'' the arc $((x,p), (y,b))$
of $\widetilde{\Gamma}$ into the arc $(p,b)$ of $\Gamma$
together with the pair of flags $((x,bp^{-1}),(y, pb^{-1}))$
of $\cD$.
Let $\Delta$ be the {\it orbital} on the flags of $\cD$
which contains the pair $((x,bp^{-1}),(y, pb^{-1}))$.
That is, $\Delta = \{ ((x,bp^{-1}),(y, pb^{-1}))^h : h \in H \}$.
\hfill \break
Suppose we choose a different arc $((w,q),(z,d))$ of $\widetilde{\Gamma}$.
Then by arguments identical to those above $(q,d)$ is an arc of $\Gamma$
and $(w,dq^{-1})$ and $(z,qd^{-1})$ are flags of $\cD$.
Since $\widetilde{\Gamma}$ is $G$-arc transitive, we can find some $g = (h,n)$
such that $((x,p),(y,b))^{(h,n)} = ((w,q),(z,d))$.
That is:
\begin{eqnarray*}
w & = & x^h \\
z & = & y^h \\
q & = & h^{-1}phn \\
d & = & h^{-1}bhn \\
\end{eqnarray*}
So:
\begin{eqnarray*}
(w,dq^{-1}) & = & (x^h, h^{-1}bhnn^{-1}h^{-1}p^{-1}h) \\
& = & (x^h, h^{-1}bp^{-1}h) \\
& = & (x,bp^{-1})^h.
\end{eqnarray*}
Similarly:
\begin{eqnarray*}
(z,qd^{-1}) & = & (y^h, h^{-1}phnn^{-1}h^{-1}b^{-1}h) \\
& = & (y^h, h^{-1}pb^{-1}h) \\
& = & (y,pb^{-1})^h.
\end{eqnarray*}
\newpage
It follows immediately that $((w,dq^{-1}),(z,qd^{-1})) \in \Delta$.
Thus, any arc of $\widetilde{\Gamma}$ can be ``decomposed''
into an arc of $\Gamma$ together with a pair of flags of $\cD$
contained in the orbital $\Delta$.
\hfill \break
I will now show that this orbital $\Delta$ on the flags of $\cD$
is self-paired.
Since $\widetilde{\Gamma}$ is simple,
there must be some $g = (h,n)$ which ``flips'' the arc
$((x,p), (y,b))$. That is, there must be some $g = (h,n)$
such that $((x,p), (y,b))^{(h,n)} = ((y,b), (x,p))$.
By an argument identical to the one on the previous page,
we must have that
\begin{eqnarray*}
(y,pb^{-1}) & = & (x,bp^{-1})^h \\
(x,bp^{-1}) & = & (y,pb^{-1})^h
\end{eqnarray*}
\hfill \break
This shows that the orbital $\Delta$ is self-paired.
\section{Reconstruction}
Given $\Gamma$ and $\cD$ together with a self paired orbital $\Delta$
on the flags of $\cD$ it should be possible to reconstruct $\widetilde{\Gamma}$.
Let $N(1)$ denote the neighbourhood of the vertex ``$1$'' of $\Gamma$
and let $P$ and $B$ denote the points and blocks respectively
of the design $\cD$.
Let $\eta : N(1) \to B$ be some map establishing a permutation isomorphism
between the permutation groups $(H,N(1))$ and $(H,B)$.
The graph $\widetilde{\Gamma}$ can be described as the graph
with vertex set $P \times V \Gamma$
and arc set $((x,v), (y,w))$ such that
\begin{enumerate}
\item $(v,w)$ is an arc of $\Gamma$
\item $(x,\eta(vw^{-1}))$ is a flag of $\cD$
\item $(y, \eta(wv^{-1}))$ is a flag of $\cD$
\item $((x,\eta(vw^{-1})),(y, \eta(wv^{-1}))) \in \Delta$
\end{enumerate}
\hfill \break
The action of $G$ on $P \times V \Gamma$ is that given in section 2.
Since $\cD$ is $H$-flag transitive,
for any pair of vertices $(x,p)$ and $(y,q)$,
we can find some $h \in H$ such that $x^h = y$.
Now:
\begin{eqnarray*}
(x,p)^{(h,h^{-1}p^{-1}hq)} & = & (x^h, h^{-1}phh^{-1}p^{-1}hq) \\
& = & (y,q).
\end{eqnarray*}
So this action is transitive.
\hfill \break
We need to check that this action is well defined on the arcs.
Suppose that $((x,p),(y,q))$ is any arc and $(h,n)$ is
any element of $G$.
We need to check that
$((x,p),(y,q))^{(h,n)}$ is also an arc.
Since $((x,p),(y,q))$ is an arc of $\widetilde{\Gamma}$
we know that $(p,q)$ is an arc of $\Gamma$
and that:
$$((x,\eta(pq^{-1})), (y,\eta(qp^{-1}))) \in \Delta$$
\hfill \break
The arc $((x,p),(y,q))^{(h,n)}$ of $\widetilde{\Gamma}$
decomposes into the arc $(p,q)^{(h,n)}$ of $\Gamma$
and the pair of flags: $$((x,\eta(pq^{-1}))^h, (y,\eta(qp^{-1}))^h) \in \Delta$$
So the action is well-defined on arcs.
\hfill \break
Finally we need to check that this action is transitive on the arcs.
If $((x,p),(y,b))$ and $((w,q),(z,d))$ are any two arcs of $\widetilde{\Gamma}$
then we must have:
$$((x,\eta(pb^{-1})),(y,\eta(bp^{-1}))) \in \Delta$$
and:
$$((w,\eta(qd^{-1})),(z,\eta(dq^{-1}))) \in \Delta.$$
\hfill \break
So there must be some $h \in H$ such that:
$$((x,\eta(pb^{-1})),(y,\eta(bp^{-1})))^h = ((w,\eta(qd^{-1})),(z,\eta(dq^{-1}))).$$
\hfill \break
Since $\eta$ is a permutation isomorphism,
$\eta(bp^{-1})^h = \eta(dq^{-1})$ implies that
$\eta(h^{-1}bp^{-1}h) = \eta(dq^{-1})$, which implies that
$h^{-1}bp^{-1}h = dq^{-1}$.
Now we have:
\begin{eqnarray*}
(x,p)^{(h,h^{-1}p^{-1}hq)} & = & (x^h, h^{-1}phh^{-1}p^{-1}hq) \\
& = & (w,q)
\end{eqnarray*}
And:
\begin{eqnarray*}
(y,b)^{(h,h^{-1}p^{-1}hq)} & = & (y^h, h^{-1}bhh^{-1}p^{-1}hq) \\
& = & (z, h^{-1}bp^{-1}hq) \\
& = & (z, dq^{-1}q) \\
& = & (z,d) \\
\end{eqnarray*}
Thus the action is transitive on the arcs as claimed.
\newpage
\section{Another Special Case}
Suppose that $\Gamma = \sab(G,H,a)$.
Let $\overline{H} = a^{-1}Ha \cap H$,
so the local permutation group induced at each vertex of $\Gamma$
is equivalent to $(H, \cos(\overline{H}))$.
Suppose further that there is some $K$ such that $\overline{H} < K < H$
and $a \not\in K$. Then the graph $\widetilde{\Gamma} = \sab(G,K,a)$
is an extension of $\Gamma$.
Let $\overline{K} = a^{-1}Ka \cap K$.
Since $K < H$ we must also have $a^{-1}Ka < a^{-1}Ha$
so $\overline{K} \leq \overline{H}$.
Since $\overline{H} < K$ we must also have
$a^{-1}\overline{H}a < a^{-1}Ka$
but since $a$ is an involution $a^{-1}\overline{H}a = \overline{H}$
so we have $\overline{H} \leq \overline{K}$.
That is $\overline{H} = \overline{K}$
and the local permutation group induced at each vertex of
$\widetilde{\Gamma}$ is $(K, \cos(\overline{H}))$.
The pair $(\Gamma,\widetilde{\Gamma})$ satisfy the property that
{\it globally} $\widetilde{\Gamma}$ is imprimitive,
admitting $\Gamma$ as quotient, but {\it locally}
$\Gamma$ is imprimitive, and the local permutation group of
$\widetilde{\Gamma}$ is a quotient of the local permutation
group of $\Gamma$. We wish to understand how $\Gamma$
and $\widetilde{\Gamma}$ are related {\it structurally},
and ideally find some method of constructing $\widetilde{\Gamma}$
from $\Gamma$.
As a preliminary observation, let $n = [G:H]$ be the number of vertices of $\Gamma$,
let $v = [K:\overline{H}]$ be the valency of $\widetilde{\Gamma}$
and let $r = [H:K]$.
The graph $\widetilde{\Gamma}$ has $[G:K] = [G:H][H:K] = nr$
vertices. That is, $r$ times as many vertices as $\Gamma$.
The valency of $\Gamma$ is $[H:\overline{H}] = [H:K][K:\overline{H}] = rv$.
That is $r$ times the valency of $\widetilde{\Gamma}$.
Since the number of edges in a graph is equal to half the
valency times the number of vertices, it follows that
both $\Gamma$ and $\widetilde{\Gamma}$ have the same number of edges.
\hfill \break
The action of $G$ on the arcs of $\Gamma$ is permutation equivalent
to $(G, \cos(\overline{H}))$.
Since $\Gamma$ is simple, each arc $(\alpha,\beta)$ has a {\it pair}
namely $(\beta,\alpha)$.
The function $\varphi : A \Gamma \to A \Gamma$
which sends each arc to its pair is an involution.
It also preserves the action of $G$.
Now, since $\overline{H} < H < G$ the action of $G$
on the arcs of $\Gamma$ is imprimitive.
That is, there is some $G$-invariant partition $\cB$
of $A \Gamma$ such that the action of $G$ on $\cB$
is permutation equivalent to $(G, \cos(H))$.
In fact, this is the partition $\cB = \{ B(\alpha) : \alpha \in V \Gamma \}$
where $B(\alpha)$ is the set of all arcs whose initial vertex is $\alpha$.
Let $\pi : A \Gamma \to \cB$ be the map which sends
each arc to the block of $\cB$ containing it.
We may define a new graph $\Sigma$ with vertex set
$\cB$ where $p$ is adjacent to $b$ if and only if
there is some $x \in \pi^{-1}(p)$ and some $y \in \pi^{-1}(b)$
such that $y = \varphi(x)$.
This new graph $\Sigma$ is in fact isomorphic to $\Gamma$.
The action of $G$ on the arcs of $\Gamma$ is permutation
isomorphic to the action of $G$ on the arcs of $\widetilde{\Gamma}$.
Both actions are of the form $(G, \cos(\overline{H}))$.
Since $\overline{H} < K < G$ there exists some $G$-invariant partition
$\mathfrak{B}$ of $A \Gamma$ such that the action of $G$ on $\mathfrak{B}$
is permutation equivalent to $(G, \cos(K))$.
The partition $\mathfrak{B}$ is a refinement of the partition $\cB$.
Let $\lambda : A \Gamma \to \mathfrak{B}$ be the map which sends
each arc to the block of $\mathfrak{B}$ containing it.
We may define a new graph $\widetilde{\Sigma}$ with vertex set
$\mathfrak{B}$ where $p$ is adjacent to $b$ if and only if
there is some $x \in \lambda^{-1}(p)$ and some $y \in \lambda^{-1}(b)$
such that $y = \varphi(x)$.
This new graph $\widetilde{\Sigma}$ is in fact isomorphic to $\widetilde{\Gamma}$.
Thus we have a method of constructing $\widetilde{\Gamma}$ from $\Gamma$.
\section{$G$-symmetric designs}
In this Chapter we shall define a new category $G$-Design.
The objects of this category are $G$-symmetric designs
and the morphisms are $G$-design homomorphisms.
We shall exhibit a ``forgetful functor'' from $G$-graph
to $G$-design and rephrase some of the questions we have
been asking about $G$-symmetric graphs into questions
about $G$-symmetric designs.
In particular we shall describe a ``decomposition''
for $G$-symmetric designs that is analogous to the
decomposition given by Gardiner and Praeger for $G$-symmetric graphs.
\begin{definition}
A $G$-flag transitive design $\cD$ is {\it self-dual}
if there exists a $G$-isomorphism $\rho = (\rho_P, \rho_B)$
between $\cD$ and $\cD^{*}$.
The $G$-isomorphism $\rho$ is called a {\it duality} of $\cD$.
\end{definition}
\begin{definition}
A {\it $G$-symmetric design} is a self-dual $G$-flag transitive design $\cD$
which admits a duality $\rho = (\rho_P, \rho_B)$ with the property that
$\rho_B \circ \rho_P = id_P$ and $\rho_P \circ \rho_B = id_B$.
In this case the $G$-isomorphism $\rho$ is called a {\it polarity}
of $\cD$.
\end{definition}
Observe that if $\cD$ is a $G$-symmetric design with the property
that $(p, \rho(p))$ is a flag for some $p \in P_\cD$,
then since $\rho$ is a $G$-isomorphism it follows
that the point-stabilizer of $\cD$ is isomorphic to the flag-stabilizer of $\cD$.
Thus each point is incident with exactly one block
and vice versa.
We consider such $G$-symmetric designs to be ``degenerate'',
and unless an explicit statement to the contrary is given shall
take the expression ``$G$-symmetric design''
to mean ``non-degenerate $G$-symmetric design''.
In a sense that will become clear a little later,
these ``degenerate'' $G$-symmetric
designs correspond to the ``degenerate'' orbital graph $\orb_\Delta(G,\Omega)$
which could be formed from the permutation group $(G, \Omega)$
by taking $\Delta$ to be the diagonal orbit,
and also to the ``degenerate'' Sabidussi graph $\sab(G,H,a)$
which could be formed by taking $a \in H$.
\hfill \break
We shall see that each $G$-symmetric graph
gives rise in a natural way to a $G$-symmetric design
and conversely a $G$-symmetric design together
with a ``marked'' polarity give rise to a $G$-symmetric graph.
We shall also see that in some cases, by choosing a different
polarity it is possible to construct two non-isomorphic $G$-symmetric
graphs from the same $G$-symmetric design.
For any graph $\Gamma$, let $N \Gamma = \{ \Gamma(v) : v \in V \Gamma \}$
denote the set of {\it neighbourhoods} of $\Gamma$.
\begin{proposition}
If \/ $\Gamma$ is a $G$-symmetric graph,
then the incidence structure $\cD(\Gamma) = (V \Gamma, N \Gamma,I)$
where $(v,n) \in I$ if and only if $v \in n$ is a $G$-symmetric design.
\begin{proof}
The $G$-arc transitivity of $\Gamma$ is sufficient to ensure
that $\cD(\Gamma)$ is a $G$-flag transitive design.
To see that it is in fact
a $G$-symmetric design we must exhibit a polarity.
Let $\rho_P : P \to B$ be given by $v \mapsto \Gamma(v)$.
The map $\rho_P$ is clearly bijective, and since $G$ acts on $\Gamma$
as a group of automorphisms we have:
\begin{eqnarray*}
\rho_P(v^g) & = & \Gamma(v^g) \\
& = & \Gamma(v)^g \\
& = & \rho_P(v)^g.
\end{eqnarray*}
So $\rho_P$ induces a permutation isomorphism.
Now, take $\rho_B = \rho_P^{-1}$.
We must check that the pair $\rho = (\rho_P, \rho_B)$ preserves
the incidence structure of the design.
Suppose that $(v,\Gamma(w))$ is a flag of $\cD$, so $v \in \Gamma(w)$.
Since $\Gamma$ is a simple graph it follows immediately that $w \in \Gamma(v)$
and so $(w, \Gamma(v))$ is also a flag of $\cD$.
That is $(\Gamma(v), w)$ is a flag of $\cD^{*}$.
But
\begin{eqnarray*}
\rho((v, \Gamma(w))) & = & (\rho_P(v), \rho_B(\Gamma(w))) \\
& = & (\Gamma(v), w).
\end{eqnarray*}
So we are done.
\end{proof}
\end{proposition}
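The proposition can be illustrated on a small concrete instance. The sketch below uses $\Gamma = C_5$ (our toy choice, not from the text): it builds the neighbourhood design $\cD(\Gamma)$ and checks that $v \mapsto \Gamma(v)$ is a bijection onto the blocks which swaps flags with flags of the dual.

```python
# Toy instance (our choice): Gamma = C_5, the 5-cycle.
V = range(5)

def nbhd(v):
    """Gamma(v): the neighbourhood of v in C_5."""
    return frozenset({(v - 1) % 5, (v + 1) % 5})

blocks = {nbhd(v) for v in V}
flags = {(v, b) for v in V for b in blocks if v in b}

# rho_P : v -> Gamma(v) is a bijection onto the blocks ...
bijective = len(blocks) == 5
# ... and (v, Gamma(w)) is a flag iff (w, Gamma(v)) is, as in the proof.
polarity = all((v in nbhd(w)) == (w in nbhd(v)) for v in V for w in V)
print(bijective, polarity, len(flags))  # expect: True True 10
```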
\begin{proposition}
If $\cD$ is a (non-degenerate) $G$-symmetric design with ``marked'' polarity $\rho$
then the graph $\Gamma(\cD, \rho)$ with vertex set $P_\cD$
and arc set $\{ (p,q) : (q,\rho_P(p)) \in I_\cD \}$ is $G$-symmetric.
\begin{proof}
We must first check that $\Gamma(\cD, \rho)$ is well-defined.
The non-degeneracy of $\cD$ ensures that there are no loops.
If $(p,q)$ is an arc of $\Gamma(\cD, \rho)$
then $(q, \rho_P(p))$ is a flag of $\cD$.
Since $\rho$ is an isomorphism,
if $(q, \rho_P(p))$ is a flag of $\cD$
then $(\rho_P(q), \rho_B \circ \rho_P(p)) = (\rho_P(q), p)$
is a flag of $\cD^{*}$.
It follows that $(p,\rho_P(q))$ is a flag of $\cD$,
and thus $(q,p)$ is an arc of $\Gamma(\cD, \rho)$.
So $\Gamma(\cD, \rho)$ is simple.
By proposition [?] $G$ acts transitively on the points of $\cD$
so $\Gamma(\cD, \rho)$ is $G$-vertex transitive.
Suppose that $(p,q_1)$ and $(p,q_2)$ are two distinct arcs of
$\Gamma(\cD, \rho)$. Then we have $q_1, q_2 \in T(\rho(p))$.
By proposition [?]
the stabilizer of $\rho(p)$ acts transitively on $T(\rho(p))$.
Thus we can find some $g \in G$ which fixes $\rho(p)$ and
carries $q_1$ to $q_2$.
Since $\rho$ is a $G$-isomorphism, if $g$ fixes $\rho(p)$
it also fixes $p$, thus $g$ carries the arc $(p,q_1)$
to the arc $(p,q_2)$. So $\Gamma(\cD, \rho)$ is $G$-locally
transitive. The result follows.
\end{proof}
\end{proposition}
\begin{proposition}
For any design $\cD$ and any polarity $\rho$ the design $\cD(\Gamma(\cD,\rho))$
is isomorphic to $\cD$.
\begin{proof}
later
\end{proof}
\end{proposition}
\begin{definition}
If $\cD_1$ and $\cD_2$ are two $G$-symmetric designs
then a $G$-design homomorphism from $\cD_1$ to $\cD_2$
is a pair of permutation homomorphisms $\eta_P : P_1 \to P_2$ and $\eta_B : B_1 \to B_2$
satisfying $(p,b) \in I_1$ implies $(\eta_P(p),\eta_B(b)) \in I_2$.
\end{definition}
\begin{proposition}
If $\cD_1$ and $\cD_2$ are two $G$-symmetric designs
then a $G$-design homomorphism from $\cD_1$ to $\cD_2$
induces a $G$-permutation homomorphism from the flags of $\cD_1$
to the flags of $\cD_2$.
\end{proposition}
What I want to say is that $G$-symmetric designs
together with $G$-design homomorphisms form a category
(well, kind of more of a poset really).
I also want to say that the category $G$-Graph
``projects down'' onto the category $G$-Design.
$F : G$-Graph $\to G$-Design.
If there's a $G$-graph homomorphism from $x$ to $y$
then there's a $G$-design homomorphism from $F(x)$ to $F(y)$
\section{Kernel of a design homomorphism}
Gardiner and Praeger showed that whenever we have
a $G$-symmetric graph homomorphism $\Gamma \mapsto \Sigma$
we get an induced ``cross-sectional'' design
on each of the fibers of the kernel.
Here I'll show that you get exactly the same thing
whenever you have $G$-symmetric design homomorphism.
So, with an abuse of notation, let $\pi : \cD \to \cQ$
be a $G$-symmetric design homomorphism.
For any point $q \in P_\cQ$
consider the {\it fiber} over $q$,
that is the set $F(q) = \pi^{-1}(q) = \{ p \in P_\cD : \pi(p) = q \}$.
The ``induced kernel design'' on the fiber of $q$
is $\cK(q) = (F(q), T(q), I_q)$
where $(p,d) \in I_q$ if and only if there is some $b \in \pi^{-1}(d)$
such that $(p,b) \in I_\cD$.
I think that if $\Gamma \mapsto \Sigma$ is a $G$-symmetric graph
homomorphism and $\cD$ is the ``GP-cross-section'' of the map
then if $F(\Gamma) \mapsto F(\Sigma)$ is the $G$-symmetric design
homomorphism then the
\chapter{Some Constructions}
The problem of relating the structure of $\Gamma$
to that of the triple $(\Gamma_\cB, \Gamma[B,C], \cD(B))$
is very difficult when $\Gamma$ is taken to be
an arbitrary imprimitive symmetric graph.
The approach which has been taken by researchers in the area
is to first impose additional assumptions
on one or more of the components of the decomposition,
then study the subfamily of imprimitive symmetric graphs
admitting a decomposition which satisfies these additional assumptions.
The special case where the parameters of the ``kernel design'' $\cD(B)$
satisfy $$v = k-1 \geq 2$$
was studied by Li, Praeger and Zhou \cite{ZCP2000}.
It was found that for this special case,
if the design $\cD(B)$ contains no repeated blocks,
then there exists an elegant combinatorial method
for constructing $\Gamma$ from $\Gamma_\cB$.
This is the {\it three-arc graph construction} which will
be described in the next section.
In a later paper Zhou \cite{ZDM2002} showed that
the three-arc graph construction actually applies
to a wider family of triples $(G, \Gamma, \cB)$
than those originally studied.
In particular the construction may be used for any triple $(G, \Gamma, \cB)$
satisfying the following condition:
\begin{condition}
The induced actions of $G_B$ on $B$ and $\Gamma_\cB(B)$
are permutationally equivalent with respect to some bijection
$\rho : B \to \Gamma_\cB(B)$.
\end{condition}
On a group theoretic level this is quite a natural condition to impose.
Suppose that the triple $(G,\Gamma, \cB)$ satisfies the above condition
and that $\Gamma \cong \Sab(G,K,a)$
and $\Gamma_\cB \cong \Sab(G,H,a)$.
If $B$ is a block of $\cB$ then
by proposition [?] the action of $G_B$ on $B$
is permutation equivalent to $(H, \cos(K))$.
The action of $G_B$ on $\Gamma_\cB(B)$
is just the ``local action'' of $\Gamma_\cB$
and so the permutation group $(G_B, \Gamma_\cB(B))$
is permutation equivalent to $(H, \cos(a^{-1}Ha \cap H))$.
If the condition above holds,
then we must have $K \cong a^{-1}Ha \cap H$.
This means that the stabilizer of a point of $\Gamma$ is isomorphic
to the stabilizer of an arc of $\Gamma_\cB$.
This strong ``coupling'' between the points of $\Gamma$
and the arcs $\Gamma_\cB$ allows us to construct
$\Gamma$ from $\Gamma_\cB$ in a straightforward manner.
\section{Three-Arc Graphs}
The three-arc graph construction was first introduced
by Li, Praeger and Zhou in \cite{ZCP2000}.
Let $\Sigma$ be any $G$-symmetric graph.
Recall that an {\it $s$-arc} of $\Sigma$ is a sequence
$(\alpha_0, \alpha_1, \dots, \alpha_s)$ of vertices in $\Sigma$ such that
$\alpha_i,\alpha_{i+1}$ are adjacent in $\Sigma$ and $\alpha_{i-1} \neq \alpha_{i+1}$
for each $i$. The set of $s$-arcs of $\Sigma$ is denoted by $\Arc_s(\Sigma)$.
Consider the induced action of $G$ on $\Arc_3(\Sigma)$ given by:
$$(\sigma_0,\sigma_1,\sigma_2,\sigma_3)^g = (\sigma_0^g,\sigma_1^g,\sigma_2^g,\sigma_3^g).$$
This action is, in general, intransitive. We may however partition the set $\Arc_3(\Sigma)$
into {\it orbits} on which $G$ acts transitively. For such an orbit $\Delta$, let
$\Delta^{\circ} = \{ (\sigma_3, \sigma_2, \sigma_1, \sigma_0) :
(\sigma_0, \sigma_1, \sigma_2, \sigma_3) \in \Delta \}$
denote the {\it pair} of $\Delta$. It is not hard to check that
$\Delta^{\circ}$ is again an orbit of $G$ on $\Arc_3(\Sigma)$.
If $\Delta = \Delta^{\circ}$ then $\Delta$ is said to be {\it self paired}.
\begin{definition}
\label{threearc}
Given a $G$-symmetric graph $\Sigma$ and a self-paired orbit $\Delta$ on
$\Arc_3(\Sigma)$, the {\it three-arc graph}
$\Gamma = \Arc_{\Delta}(\Sigma)$ is the graph with vertex set $A \Sigma$
and arc set $\{ ((\sigma,\tau), (\sigma',\tau')) : (\tau,\sigma,\sigma',\tau') \in \Delta \}$.
\end{definition}
Note that the requirement that $\Delta$ is self-paired
ensures that the resulting graph is simple.
\begin{proposition}
With $G$, $\Sigma$ and $\Delta$ as in definition \ref{threearc},
the three-arc graph $\Gamma = \Arc_{\Delta}(\Sigma)$
is $G$-symmetric.
\begin{proof} Immediate from the construction.
\end{proof}
\end{proposition}
For each vertex $\sigma$ of $\Sigma$, let
$B(\sigma) = \{ (\sigma, \tau) : \tau \mbox{ is a neighbour of } \sigma \}$
be the set of arcs of $\Sigma$ with initial vertex $\sigma$.
Clearly $\cB = \{ B(\sigma) : \sigma \mbox{ is a vertex of } \Sigma \}$
is a partition of $A \Sigma$.
For any $g \in G$ we have
\begin{eqnarray*}
B(\sigma)^g & = & \{ (\sigma, \tau)^g : \tau \mbox{ is a neighbour of } \sigma \} \\
& = & \{ (\sigma^g, \tau^g) : \tau \mbox{ is a neighbour of } \sigma \} \\
& = & \{ (\sigma^g, \tau) : \tau \mbox{ is a neighbour of } \sigma^g \} \\
& = & B(\sigma^g).
\end{eqnarray*}
Thus $\cB = \{ B(\sigma) : \sigma \in V \Sigma \}$
is in fact a $G$-invariant partition of $V \Gamma$.
\begin{proposition}
\label{three-arc quotient}
With $\Gamma = \Arc_{\Delta}(\Sigma)$ and $\cB$ as defined above,
The quotient graph $\Gamma_\cB$ is isomorphic to $\Sigma$.
\begin{proof}
The map $\sigma \mapsto B(\sigma)$ identifies the vertices
of $\Sigma$ with the vertices of $\Gamma_\cB$.
We must show that
$(\sigma, \sigma')$ is an arc of $\Sigma$ if and only if
$(B(\sigma), B(\sigma'))$ is an arc of $\Gamma_\cB$.
If $(B(\sigma), B(\sigma'))$ is an arc of $\Gamma_\cB$
then for some $\tau, \tau' \in V \Sigma$
the arcs $(\sigma, \tau)$ and $(\sigma',\tau')$ are adjacent in $\Gamma$.
It follows that
$(\tau, \sigma, \sigma',\tau')$ is a three-arc of $\Sigma$
and so $\sigma$ is adjacent to $\sigma'$ in $\Sigma$.
For the other direction,
let $(\alpha, \beta, \gamma, \delta)$ be any three-arc
of $\Sigma$ contained in $\Delta$.
If $(\sigma, \sigma')$ is any arc of $\Sigma$,
then by the $G$-arc-transitivity of $\Sigma$
there exists a $g \in G$ such that $(\beta,\gamma)^g = (\sigma,\sigma')$.
Since $\Delta$ is an orbit of $G$ on the three-arcs of $\Sigma$,
it follows that
$(\alpha,\beta,\gamma,\delta)^g = (\alpha^g, \sigma, \sigma', \delta^g)$
is contained in $\Delta$.
Thus the arc $(\sigma, \alpha^g)$ is adjacent to
the arc $(\sigma', \delta^g)$ in $\Gamma$
and so $B(\sigma)$ is adjacent to $B(\sigma')$ in $\Gamma_\cB$.
\end{proof}
\end{proposition}
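Definition \ref{threearc} and Proposition \ref{three-arc quotient} can be checked computationally on a small case. In the sketch below the choice $\Sigma = K_4$ with $G = \Aut(K_4) = S_4$ is ours, purely for illustration: both orbits of $G$ on the 3-arcs of $K_4$ turn out to be self-paired, and taking $\Delta$ to be the orbit of 3-arcs on four distinct vertices, the quotient of the three-arc graph over the blocks $B(\sigma)$ recovers $K_4$.

```python
from itertools import permutations

# Toy instance (our choice): Sigma = K_4, G = Aut(K_4) = S_4.
V = range(4)
edges = {(u, v) for u in V for v in V if u != v}   # the arcs of K_4

def image(g, x):
    return tuple(g[v] for v in x)

G = [g for g in permutations(V)
     if all(image(g, a) in edges for a in edges)]   # all of S_4 here

# 3-arcs: edge-walks (a,b,c,d) with no immediate backtracking.
three_arcs = [(a, b, c, d)
              for (a, b) in edges for c in V for d in V
              if (b, c) in edges and (c, d) in edges
              and a != c and b != d]

orbits = {frozenset(image(g, t) for g in G) for t in three_arcs}
self_paired = [D for D in orbits if all(t[::-1] in D for t in D)]

# Delta: the self-paired orbit of 3-arcs on four distinct vertices.
Delta = next(D for D in self_paired if all(len(set(t)) == 4 for t in D))

# Three-arc graph: (s,t) ~ (s2,t2) iff (t,s,s2,t2) lies in Delta.
adj = {((s, t), (s2, t2))
       for (s, t) in edges for (s2, t2) in edges
       if (t, s, s2, t2) in Delta}

# Quotient over the blocks B(sigma): B(s) ~ B(s2) iff some vertex of
# Gamma in B(s) is adjacent to one in B(s2).
quotient = {(s, s2) for ((s, t), (s2, t2)) in adj}
print(quotient == edges)  # expect: True, the quotient is K_4 again
```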
\begin{proposition}
If $(G,\Gamma, \cB)$ is such that $\Gamma$ is a three-arc graph
of $\Gamma_\cB$, then for any $B \in \cB$
the permutation groups $(G_B, B)$ and $(G_B, \Gamma_\cB(B))$
are permutation equivalent.
\begin{proof}
Each $B \in \cB$ is of the form $B(\sigma)$ for some
$\sigma$ a vertex of $\Gamma_\cB$.
The vertices of $\Gamma$ contained in $B(\sigma)$
are precisely the arcs of $\Gamma_\cB$
of the form $(\sigma, \tau)$.
Let $\rho : B(\sigma) \to \Gamma_\cB(B(\sigma))$ be given by
$(\sigma,\tau) \mapsto B(\tau)$.
Clearly $\rho$ is a bijection.
If $g$ is any element of the stabilizer of $B(\sigma)$
then we have:
\begin{eqnarray*}
\rho((\sigma,\tau)^g) & = & \rho((\sigma^g,\tau^g)) \\
& = & \rho((\sigma,\tau^g)) \\
& = & B(\tau^g) \\
& = & B(\tau)^g.
\end{eqnarray*}
Thus $\rho$ induces a permutation equivalence between the permutation groups
$(G_{B(\sigma)}, B(\sigma))$ and $(G_{B(\sigma)}, \Gamma_\cB(B(\sigma)))$.
\end{proof}
\end{proposition}
\begin{proposition}
If $(G, \Gamma, \cB)$ is such that
$(G_B, B)$ is permutation equivalent to $(G_B, \Gamma_\cB(B))$
for some $B \in \cB$, then the action of $G$
on the vertices of $\Gamma$ is permutation equivalent
to the action of $G$ on the arcs of $\Gamma_\cB$.
\begin{proof}
Let $\rho : B \to \Gamma_\cB(B)$ be any bijection
inducing a permutation equivalence between
$(G_B,B)$ and $(G_B, \Gamma_\cB(B))$
and let $\pi : V \Gamma \to \cB$ be the map which sends
each vertex of $\Gamma$ to the block of $\cB$ containing it.
Fix a vertex $\alpha$ of $\Gamma$.
Since $\Gamma$ is $G$-vertex transitive, every vertex of $\Gamma$
may be written in the form $\alpha^g$ for some $g \in G$.
Let $\mu : V \Gamma \to A \Gamma_{\cB}$ be given by
$ \alpha^g \mapsto (\pi(\alpha)^g, \rho(\alpha)^g) $.
It is not too hard to see that $\mu$ is a bijection.
For any $\beta = \alpha^g$ and any $x \in G$
we have:
\begin{eqnarray*}
\mu(\beta^x) & = & \mu(\alpha^{gx}) \\
& = & (\pi(\alpha)^{gx}, \rho(\alpha)^{gx}) \\
& = & (\pi(\alpha^g)^x, \rho(\alpha^g)^x) \\
& = & (\pi(\beta)^x, \rho(\beta)^x) \\
& = & (\pi(\beta), \rho(\beta))^x \\
& = & \mu(\beta)^x
\end{eqnarray*}
Thus $\mu$ establishes a permutation isomorphism between
$(G,V \Gamma)$ and $(G, A \Gamma_\cB)$.
\end{proof}
\end{proposition}
We may make use of the map $\mu$ to ``label'' the vertices
of $\Gamma$ by the arcs of $\Gamma_\cB$.
For any arc $(B,C)$ of $\Gamma_{\cB}$,
let $v_{BC} = \mu^{-1}((B,C))$ denote
the vertex of $\Gamma$ which is mapped to $(B,C)$ by $\mu$.
\begin{proposition}
\label{labelling}
Some stuff about how $G$ acts on labelled vertices
and how the initial block contains the vertex.
\end{proposition}
\begin{proposition}
\label{PE implies 3-arc}
Provided that $\Gamma$ has valency at least two,
for each arc $(v_{BC}, v_{DE})$ of $\Gamma$,
$(C,B,D,E)$ is a 3-arc of $\Gamma_\cB$.
\begin{proof}
Suppose that $v_{BC}$ were adjacent to $v_{CB}$.
Since $\val(\Gamma) \geq 2$, the vertex $v_{CB}$
is adjacent to some other vertex $v_{B_1C_1}$
distinct from $v_{BC}$.
By the $G$-symmetry of $\Gamma$ there exists a $g \in G$
such that $(v_{BC},v_{CB})^g = (v_{B_1C_1},v_{CB})$.
By the previous proposition, this implies that
$B = B^g = B_1$ and $C = C^g = C_1$, so $v_{BC} = v_{B_1C_1}$.
A contradiction. Thus $v_{BC}$ is not adjacent to $v_{CB}$.
Suppose now that $v_{BC}$ were adjacent to $v_{CE}$
for some $E \neq B$.
Since $\val(\Gamma) \geq 2$ we can find another vertex $v_{C_1E_1}$
distinct from $v_{CE}$ and adjacent to $v_{BC}$.
By the $G$-symmetry of $\Gamma$ we can find some $g_1 \in G$
such that $(v_{BC},v_{CE})^{g_1} = (v_{BC},v_{C_1E_1})$.
By the previous proposition, this implies that $C = C^{g_1} = C_1$.
We can also find a $g_2 \in G$
such that $(v_{BC},v_{CE})^{g_2} = (v_{C_1E_1}, v_{BC})$
and so
$B = C^{g_2} = E_1$. Combining these gives $v_{C_1E_1} = v_{CB}$,
but $v_{BC}$ is not adjacent to $v_{CB}$
so again we have a contradiction.
We know now that if $(v_{BC}, v_{DE})$ is an arc of $\Gamma$,
then $D \neq C$. A similar argument shows that $B \neq E$,
so $B,C,D,E$ are distinct vertices with $B$ adjacent to $C$
and $D$ adjacent to $E$. Since, by the previous proposition
$B(v_{BC}) = B$ and $B(v_{DE}) = D$, we have $B$ adjacent to $D$
and so $C,B,D,E$ is a 3-arc of $\Gamma_\cB$ as claimed.
\end{proof}
\end{proposition}
Concluding remarks.
\section{Covering Graphs}
The three-arc graph construction allows us, under certain conditions,
to ``unfold'' a $G$-symmetric graph into a larger, {\it imprimitive}
$G$-symmetric graph, admitting the original graph as a quotient.
The {\it covering graph} construction \cite{B1974} is similar
in this respect. It also, under certain conditions,
allows us to ``unfold'' a $G$-symmetric graph into a larger one.
It differs from the three-arc graph construction, however,
in that it requires a simultaneous ``unfolding''
of the group $G$.
Recall that given two groups $N$ and $G$ and a homomorphism
$$\rho : G \to \Aut(N)$$
we may form the {\it semidirect product} of $N$ by $G$.
This is the group $\widetilde{G} = N \rtimes_\rho G$,
whose elements are the ordered pairs:
$$\{ (n,g) : n \in N, g \in G \}$$
with multiplication given by:
$$(n_1, g_1)(n_2,g_2) = (n_1^{\rho(g_2)}n_2, g_1g_2).$$
The functions $i_1 : N \to \widetilde{G}$ and $i_2 : G \to \widetilde{G}$
given by $ n \mapsto (n,1) $ and $ g \mapsto (1,g) $ respectively give
natural embeddings of $N$ and $G$ into $\widetilde{G}$.
Identifying $N$ with its image under $i_1$,
the semidirect product has the property that $N$ is normal in $\widetilde{G}$
and $\widetilde{G}/N \cong G$.
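This recipe can be exercised on a tiny example. In the sketch below the choices $N = \mathbb{Z}_3$, $G = \mathbb{Z}_2$, with $\rho$ sending the involution to inversion on $N$, are ours; the resulting semidirect product is isomorphic to $S_3$. The code applies $\rho(g_2)$ to $n_1$ in the product rule, the convention under which the rule is associative, and checks associativity and the normality of the embedded copy of $N$.

```python
from itertools import product

# Toy instance (our choice): N = Z_3, G = Z_2, rho(1) = inversion.
N = [0, 1, 2]
Q = [0, 1]

def rho(g):
    """rho : G -> Aut(N), written as a right action n^{rho(g)}."""
    return (lambda n: (-n) % 3) if g else (lambda n: n)

def mul(x, y):
    """(n1,g1)(n2,g2) = (n1^{rho(g2)} n2, g1 g2)."""
    (n1, g1), (n2, g2) = x, y
    return ((rho(g2)(n1) + n2) % 3, (g1 + g2) % 2)

def inv(x):
    n, g = x
    return ((-rho(g)(n)) % 3, g)   # g is its own inverse in Z_2

Gt = list(product(N, Q))           # the six elements of the product

assoc = all(mul(mul(a, b), c) == mul(a, mul(b, c))
            for a in Gt for b in Gt for c in Gt)

Nemb = {(n, 0) for n in N}         # the image of i_1 : N -> G~
normal = all(mul(mul(inv(x), m), x) in Nemb
             for m in Nemb for x in Gt)
print(assoc, normal)  # expect: True True
```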
\begin{definition} [$N$-chain]
Suppose that $\Gamma$ is a $G$-symmetric graph and $N$ is a group.
An $N$-chain is a function $\phi : A \Gamma \to N$
satisfying $\phi((v,u)) = \phi((u,v))^{-1}$ for all $(u,v) \in A \Gamma$.
\end{definition}
\begin{definition} [Compatible $N$-chain]
Suppose that $\Gamma$ is a $G$-symmetric graph, $N$ is a group
and $\rho : G \to \Aut(N)$ is a homomorphism.
An $N$-chain $\phi$ is said to be {\it compatible} with $\rho$
if for every $g \in G$ the following diagram commutes:
\centerline{
\xymatrix{
A \Gamma \ar[d]_{g} \ar[r]^{\phi} & N \ar[d]^{\rho(g)} \\
A \Gamma \ar[r]_{\phi} & N
}}
\end{definition}
\begin{definition}[Biggs Cover]
Suppose that $\Gamma$ is a $G$-symmetric graph, $N$ is a group,
$\rho : G \to \Aut(N)$ is a homomorphism and $\phi$ is a compatible $N$-chain.
The Biggs Cover $\widetilde{\Gamma}(N,\rho,\phi)$
of $\Gamma$ with respect to $\phi$ is the graph
with vertex set:
$\{ (n,v) : n \in N, v \in V \Gamma \}$
and arc set:
$ \{ ((n_1,v_1),(n_2,v_2)) : (v_1,v_2) \in A \Gamma, n_2 = \phi((v_1,v_2))\, n_1 \}$
\end{definition}
Note that the condition $\phi((v,u)) = \phi((u,v))^{-1}$ ensures that the resulting
graph is simple.
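Before proving the symmetry of the cover, it may help to record a small example; the facts quoted here are classical:

```latex
\begin{example}[A Biggs cover of the tetrahedron]
Let $\Gamma = K_4$ with $G = S_4$ acting on it, and let
$N = \mathbb{Z}_2 = \{1,z\}$.
Since $\Aut(N)$ is trivial, $\rho$ must be trivial, and the compatibility
condition then forces $\phi$ to be constant on the (single) $G$-orbit of arcs.
The choice $\phi \equiv 1$ yields two disjoint copies of $\Gamma$,
while the choice $\phi \equiv z$ yields the canonical bipartite double cover
of $K_4$, which is isomorphic to the cube $Q_3$.
In either case the quotient with respect to the fibers is $K_4$.
\end{example}
```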
\begin{proposition}
The covering graph $\widetilde{\Gamma}(N,\rho,\phi)$ is $(N \rtimes_\rho G)$-symmetric.
\begin{proof}
Define an action of $N \rtimes_\rho G$ on $\widetilde{\Gamma}$ by:
$$(n,v)^{(\eta, g)} = (n^{\rho(g)} \eta,v^g).$$
This action is well-defined since:
\begin{eqnarray*}
(n,v)^{(\eta_1, g_1)(\eta_2, g_2)}
& = & (n^{\rho(g_1)} \eta_1 ,v^{g_1})^{(\eta_2, g_2)} \\
& = & (n^{\rho(g_1) \rho(g_2)} \eta_1^{\rho(g_2)} \eta_2 , v^{g_1 g_2}) \\
& = & (n,v)^{(\eta_1^{\rho(g_2)} \eta_2, g_1 g_2)}
\end{eqnarray*}
We must check that it preserves the adjacency structure of $\widetilde{\Gamma}$.
Observe that, by the compatibility of $\rho$ and $\phi$,
if $n_2 = \phi(v_1,v_2)\, n_1$ then for any $g \in G$:
\begin{eqnarray*}
\phi({v_1}^g, {v_2}^g)\, {n_1}^{\rho(g)}
& = & \phi(v_1, v_2)^{\rho(g)}\, {n_1}^{\rho(g)} \\
& = & (\phi(v_1, v_2)\, n_1)^{\rho(g)} \\
& = & {n_2}^{\rho(g)}
\end{eqnarray*}
Hence, for any $(\eta,g) \in N \rtimes_\rho G$, the images
$({n_1}^{\rho(g)} \eta, {v_1}^g)$ and $({n_2}^{\rho(g)} \eta, {v_2}^g)$
of a pair of adjacent vertices again satisfy the adjacency condition.
Also, since $G$ is a group of automorphisms of $\Gamma$, we know that
$(v_1,v_2) \in A \Gamma$ if and only if $({v_1}^g,{v_2}^g) \in A \Gamma$.
Thus the action defined above does indeed preserve adjacency.
\hfill \break
For any $(n_1,v_1), (n_2,v_2) \in V \widetilde{\Gamma}$,
by the $G$-symmetry of $\Gamma$ we can find a $g \in G$
such that ${v_1}^g = {v_2}$.
Let $\eta = ({n_1}^{\rho(g)})^{-1} n_2$.
We have:
\begin{eqnarray*}
(n_1,v_1)^{(\eta,g)} & = & ({n_1}^{\rho(g)} \eta,{v_1}^g) \\
& = & ( {n_1}^{\rho(g)} ({n_1}^{\rho(g)})^{-1} n_2,v_2) \\
& = & (n_2,v_2).
\end{eqnarray*}
Thus the action is vertex transitive.
\hfill \break
Suppose that $(n_1,v_1)$ and $(n_2,v_2)$ are both adjacent to $(n,v)$,
so $n_1 = \phi(v,v_1)\, n$ and $n_2 = \phi(v,v_2)\, n$.
By the $G$-symmetry of $\Gamma$ we can find a $g \in G_v$ such that
$v_1^g = v_2$.
Let $\eta = (n^{-1})^{\rho(g)}n$. We have:
\begin{eqnarray*}
(n,v)^{(\eta,g)} & = & (n^{\rho(g)} \eta, v^g) \\
& = & (n^{\rho(g)} (n^{-1})^{\rho(g)}n, v) \\
& = & (n,v)
\end{eqnarray*}
and:
\begin{eqnarray*}
(n_1,v_1)^{(\eta,g)} & = & ({n_1}^{\rho(g)} \eta, {v_1}^g) \\
& = & ({n_1}^{\rho(g)} (n^{-1})^{\rho(g)}n, v_2) \\
& = & ({(n_1 n^{-1})}^{\rho(g)} n, v_2) \\
& = & (\phi(v,v_1)^{\rho(g)} n, v_2) \\
& = & (\phi(v,v_2) n, v_2) \\
& = & (n_2 n^{-1} n, v_2) \\
& = & (n_2, v_2) \\
\end{eqnarray*}
Thus the action is locally transitive. The result follows.
\end{proof}
\end{proposition}
In fact, a stronger result than this holds.
If $\Gamma$ is $(G,s)$-arc transitive
then $\widetilde{\Gamma}$ is $(\widetilde{G},s)$-arc transitive,
where $\widetilde{G} = N \rtimes_\rho G$.
The proof is by induction and may be found in \cite{B1974}.
This construction was originally used by Conway
to produce an infinite family of $5$-arc transitive graphs.
\hfill \break
For each $v \in V \Gamma$, let $B(v) = \{ (n,v) : n \in N \}$.
For any $(\eta, g) \in \widetilde{G}$ we have:
\begin{eqnarray*}
B(v)^{(\eta,g)} & = & \{ (n,v)^{(\eta,g)} : n \in N \} \\
& = & \{ (n^{\rho(g)} \eta,v^g) : n \in N \} \\
& = & \{ (n, v^g) : n \in N \} \\
& = & B(v^g).
\end{eqnarray*}
Thus the partition:
$\cB = \{ B(v) : v \in V \Gamma \}$
is $\widetilde{G}$-invariant.
\begin{proposition}
The quotient of the Biggs cover $\widetilde{\Gamma}$
with respect to the partition $\cB$
is isomorphic to the original graph $\Gamma$.
\begin{proof}
The map $\eta : V \Gamma \to V \widetilde{\Gamma}_\cB$ given by
$v \mapsto B(v)$ establishes a bijection between the vertices of $\Gamma$
and the vertices of $\widetilde{\Gamma}_\cB$.
If $(u,v)$ is an arc of $\Gamma$, then $((1,u),(\phi((u,v)),v))$
is an arc of $\widetilde{\Gamma}$, and so $(B(u), B(v))$
is an arc of $\widetilde{\Gamma}_\cB$.
If $(u,v)$ is not an arc of $\Gamma$, then for all $n_1, n_2 \in N$
$((n_1, u),(n_2, v))$ is not an arc of $\widetilde{\Gamma}$
and so $(B(u), B(v))$ is not an arc of $\widetilde{\Gamma}_\cB$.
Thus $\eta$ establishes an isomorphism between
$\Gamma$ and $\widetilde{\Gamma}_\cB$.
\end{proof}
\end{proposition}
Let $(B,C)$ be any arc of $\widetilde{\Gamma}$.
By the above $B = B(u)$ and $C = B(v)$ for some $(u,v) \in A \Gamma$.
It is not hard to see that each $(n,u) \in B$
has a unique neighbour in $C$, namely $(\phi(u,v)\, n,v)$.
Thus the induced bipartite graph $\widetilde{\Gamma}[B,C]$ is a matching.
It follows immediately that the valency of $\widetilde{\Gamma}$
is the same as the valency of $\Gamma$.
\section{Group Theoretic Analysis of the Covering Graph Construction}
The Biggs covering graph construction encompasses
all pairs of graphs $\Gamma(G, H, a)$ and $\Gamma(\widetilde{G}, H, \widetilde{a})$
where $\widetilde{G}$ is a semidirect product of $N$ by $G$ for some $N$.
The local permutation groups of $\Gamma$ and $\widetilde{\Gamma}$
are the same, and $N$ acts regularly on the fibers.
\section{Subgraph Extension}
I have been thinking about the idea of a ``combinatorial''
description of the extension of a graph in terms of its quotient.
That is, a description of the adjacency structure of $\widetilde{\Gamma}$
directly in terms of the adjacency structure of $\Gamma$.
This section is a sketch of an idea I had, that perhaps
there is a connection between the subgraph structures
of different $G$-symmetric graphs for the same $G$.
\begin{definition} [Subgraph]
Suppose that $\Gamma = (V,A)$ is a graph.
If $W$ is a subset of $V$
and $B$ is a subset of $(W \times W) \cap A$
then $\Upsilon = (W,B)$ is a {\it subgraph} of $\Gamma$.
We write $\Upsilon < \Gamma$ and allow for the possibility
that $\Upsilon$ is a directed graph.
\end{definition}
\begin{definition} [Stabilizer of a Subgraph]
Suppose that $(G, \Gamma)$ is a symmetric graph and
$\Upsilon$ is a directed subgraph. The {\it stabilizer}
of $\Upsilon$ in $(G, \Gamma)$ is $\Aut(\Upsilon) \cap G$.
It is denoted by $G_{\Upsilon}$.
\end{definition}
\begin{definition} [Subgraph Graph]
Suppose that $(G, \Gamma)$ is a symmetric graph,
$\Upsilon$ is a directed subgraph of $\Gamma$
and $a$ is an involution in $G$ which fixes an arc of $\Gamma$.
The subgraph graph $\sub(\Gamma, \Upsilon, a)$
of $\Gamma$ with respect to $\Upsilon$ and $a$
is the graph with vertices $V = \{ \Upsilon^g : g \in G \}$
and arcs $A = \{ (\Upsilon \cup \Upsilon^a)^g : g \in G \}$.
\end{definition}
\begin{proposition}
Subgraph graphs are symmetric, with point stabilizer $G_{\Upsilon}$.
\begin{proof}
Obvious.
\end{proof}
\end{proposition}
\begin{example} [cube from tetrahedron]
Let $\Gamma$ be the tetrahedron with $S_4$ acting on it.
Let $\Upsilon$ be any directed three-cycle.
Let $a$ be any involution fixing an arc of $\Upsilon$,
then the subgraph graph $\sub(\Gamma,\Upsilon,a)$
is isomorphic to the cube.
\end{example}
\begin{definition} [Symmetric subgraph]
Suppose that $(G,\Gamma)$ is a symmetric graph.
The digraph $\Upsilon < \Gamma$ is a {\it symmetric subgraph}
of $\Gamma$ if $(G_\Upsilon, \Upsilon)$ is a symmetric graph.
\end{definition}
\begin{question}
If $(G, \Gamma)$ is a symmetric graph, for which subgroups $K$ of $G$
does there exist a subgraph $\Upsilon$ of $\Gamma$ such that
$G_\Upsilon = K$?
For which subgroups $K$ of $G$
does there exist a {\it symmetric subgraph} $\Upsilon$ of $\Gamma$ such that
$G_\Upsilon = K$?
\end{question}
\begin{question}
Which extensions of a given $G$-symmetric graph $\Gamma$
are isomorphic to subgraph graphs of $\Gamma$?
Which are isomorphic to subgraph graphs of $\Gamma$
with respect to some symmetric subgraph (digraph) of $\Gamma$?
\end{question}
\section{Quotient Graphs}
In the first section of this Chapter,
we determined, for any two transitive permutation groups
$(G, \Omega_1)$ and $(G,\Omega_2)$,
the conditions under which there exists a
$G$-homomorphism from $(G,\Omega_1)$ to $(G,\Omega_2)$.
We saw that if $(G,\Omega_1)$
is permutation equivalent to $(G,\cos_G(H))$
for some $H < G$,
and $(G,\Omega_2)$ is permutation equivalent
to $(G,\cos_G(K))$ for some $K < G$,
then there is a $G$-homomorphism from $(G,\Omega_1)$
to $(G,\Omega_2)$ if and only if $H < K < G$.
In this section we give analogous conditions
under which there exists a $G$-homomorphism
from a $G$-symmetric graph $\Gamma$
to a $G$-symmetric graph $\Sigma$.
\begin{theorem} [Lorimer]
If $\Gamma$ is a $G$-symmetric graph isomorphic to $\sab(G,H,HaH)$
and $\Sigma$ is a quotient of $\Gamma$,
then $\Sigma$ is isomorphic to $\sab(G,K,KaK)$ for some $H < K < G$
with $a \not\in K$.
\begin{proof}
Suppose that $\pi : \Gamma \to \Sigma$ is a $G$-graph homomorphism.
Since $\Gamma$ is $G$-isomorphic to $\sab(G,H,HaH)$,
the action of $G$ on the vertices of $\Gamma$ is permutation
equivalent to $(G,\cos_G(H))$.
Since $\pi$ induces a homomorphism from $V \Gamma$ to $V \Sigma$,
by [where?] $(G,V \Sigma)$
must be permutation equivalent to $(G,\cos_G(K))$
for some $K$ with $H < K < G$.
What about the $a$?
\end{proof}
\end{theorem}
\section{Extension Problem}
For a given symmetric graph $\Gamma$, let $\grp(\Gamma)$ denote
the set of subgroups of $\Aut(\Gamma)$
which act symmetrically on $\Gamma$.
For a given group $G$, let $\ext(G)$ denote the set of groups $\widetilde{G}$
such that $\widetilde{G}/N \cong G$ for some $N$ normal in $\widetilde{G}$.
Clearly $\Gamma$ is $\widetilde{G}$-symmetric if and only if
$\widetilde{G} \in \ext(G)$ for some $G \in \grp(\Gamma)$.
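A minimal standard example of the set $\ext(G)$, which also illustrates the split/non-split distinction appearing later:

```latex
\begin{example}
For $G = \mathbb{Z}_2$, both $\mathbb{Z}_4$ and
$\mathbb{Z}_2 \times \mathbb{Z}_2$ belong to $\ext(G)$:
each has a normal subgroup $N \cong \mathbb{Z}_2$ with quotient
isomorphic to $\mathbb{Z}_2$.
Only the second is a semidirect (indeed direct) product of $N$ by $G$;
the extension $\mathbb{Z}_4$ of $N$ by $G$ is non-split.
\end{example}
```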
\begin{definition}
The symmetric graph $(\widetilde{G},\widetilde{\Gamma})$
is said to be an {\it extension} of the symmetric graph $(G,\Gamma)$
if $\widetilde{G} \in \ext(G)$
and there exists a $\widetilde{G}$-homomorphism
from $\widetilde{\Gamma}$ to $\Gamma$.
\end{definition}
The ``extension problem'' for symmetric graphs
is the problem of finding all the extensions
of a given symmetric graph $(G,\Gamma)$.
On a group theoretic level, if we knew the subgroup structure of $G$
and we knew all the groups $\widetilde{G}$ which admit $G$ as a
quotient then in a sense we would ``know'' all the possible
extensions of $(G,\Gamma)$.
\begin{proposition}
If $\Gamma$ is $G$-isomorphic to $\sab(G,H,HaH)$
then the faithful extensions of $\Gamma$
are the graphs of the form $\sab(G,K,KaK)$
where $K < H$ and $KaK \subset HaH$.
\end{proposition}
\begin{proposition}
For any $\widetilde{G} \in \ext(G)$, let $N \triangleleft \widetilde{G}$
be such that $\widetilde{G}/N \cong G$
and let $\pi : \widetilde{G} \to G$ be the natural projection.
The $\widetilde{G}$-symmetric extensions of $\Gamma$
are the graphs of the form
$\sab(\widetilde{G},R,R\widetilde{a}R)$
where $\pi(R) < H$ and $\pi(\widetilde{a}) \in HaH$.
\end{proposition}
From a combinatorial perspective,
if $(\widetilde{G},\widetilde{\Gamma})$
is an extension of $(G,\Gamma)$
then we would like to understand
how the structure of $\widetilde{\Gamma}$
is related to the structure of $\Gamma$.
Ideally, we'd like to be able to describe the adjacency
structure of $\widetilde{\Gamma}$ in terms of the
adjacency structure of $\Gamma$.
\section{Unfaithful Extensions}
Recall from Chapter 1 that in our definition of a symmetric
graph, we do not require the action of the group
to be faithful on the vertices of the graph.
Suppose that $\Gamma$ is a $G$-symmetric graph
isomorphic to $\sab(G,H,HaH)$.
The vertices of $\Gamma$ are the cosets of $H$ in $G$
and the action of $G$ on the vertices of $\Gamma$
is the permutation group $(G,\cos_G(H))$.
By proposition [?] the {\it kernel} of this action is:
$$\core_G(H) = \bigcap_{g \in G} g^{-1}Hg.$$
If this kernel is non-trivial then the action of $G$
on the vertices of $\Gamma$ is unfaithful.
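The following standard example exhibits a non-trivial core and the resulting unfaithful coset action:

```latex
\begin{example}
Let $\wt{G} = S_4 \times \mathbb{Z}_2$ and let
$\wt{H} = S_3 \times \mathbb{Z}_2$, where $S_3$ is the stabilizer of a
point in the natural action of $S_4$.
Since $S_4$ acts faithfully on the four cosets of $S_3$,
we have $\core_{S_4}(S_3) = \{1\}$, and hence
$\core_{\wt{G}}(\wt{H}) = \{1\} \times \mathbb{Z}_2$.
The action of $\wt{G}$ on the four cosets of $\wt{H}$ is therefore
unfaithful, with kernel $\{1\} \times \mathbb{Z}_2$.
\end{example}
```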
\begin{proposition}
For any pair of groups $\wt{G}$ and $\wt{H}$ with $\wt{H} < \wt{G}$,
and any $\wt{a} \in \wt{G}$ with $\wt{a}^2 = 1$ and $\wt{a} \not\in \wt{H}$,
the graphs
$\sab(\wt{G},\wt{H},\wt{H}\wt{a}\wt{H})$ and $\sab(G, H, HaH)$
are isomorphic where:
\begin{eqnarray*}
N & = &\core_{\wt{G}}(\wt{H}) \\
G & = & \wt{G}/N \\
H & = & \wt{H}/N
\end{eqnarray*}
and $a = \pi(\wt{a})$ where $\pi : \wt{G} \to G$ is the natural projection.
\begin{proof}
Let $\wt{\Gamma} = \sab(\wt{G},\wt{H},\wt{H}\wt{a}\wt{H})$
and let $\Gamma = \sab(G,H,HaH)$.
The vertices of $\wt{\Gamma}$ are the right cosets of $\wt{H}$ in $\wt{G}$.
The vertices of $\Gamma$ are the right cosets of $H/N$ in $G/N$.
Now,
\begin{eqnarray*}
[\wt{G}/N : \wt{H}/N] & = & (|\wt{G}|/|N|)/(|\wt{H}|/|N|) \\
& = & |\wt{G}|/|\wt{H}| \\
& = & [\wt{G}:\wt{H}]
\end{eqnarray*}
so both $\wt{\Gamma}$ and $\Gamma$ have the same number of vertices.
Let $\pi : \wt{G} \to G$ be the natural projection.
Since $N$ is normal in $\wt{H}$, we have $\pi(\wt{H}) = H$.
Now,
\begin{eqnarray*}
H\pi(x) = H\pi(y) & \Leftrightarrow & \pi(xy^{-1}) \in H \\
& \Leftrightarrow & xy^{-1} \in \wt{H} \\
& \Leftrightarrow & \wt{H}x = \wt{H}y \\
\end{eqnarray*}
so $\pi$ actually induces a bijection between the cosets of $\wt{H}$
in $\wt{G}$ and the cosets of $H$ in $G$.
To see that it preserves adjacency,
suppose that $(\wt{H}x,\wt{H}y)$ is any arc of $\wt{\Gamma}$,
and so $xy^{-1} \in \wt{H}\wt{a}\wt{H}$.
It follows that $\pi(xy^{-1}) \in HaH$
and so $(Hx,Hy)$ is an arc of $\Gamma$.
The result follows.
\end{proof}
\end{proposition}
Suppose that $\wt{\Gamma}$ is a symmetric graph
isomorphic to $\sab(\wt{G},\wt{H},\wt{H}\wt{a}\wt{H})$.
Suppose further that $\wt{G}$ acts faithfully on the vertices of $\wt{\Gamma}$.
Let $\Gamma$ be any quotient of $\wt{\Gamma}$,
so that, by proposition [?], $\Gamma$ is isomorphic to
$\sab(\wt{G},\wt{K},\wt{K}\wt{a}\wt{K})$ for some $\wt{K}$
such that $\wt{H} < \wt{K} < \wt{G}$ and $\wt{a} \not\in \wt{K}$.
Although $\wt{G}$ acts faithfully on the vertices of $\wt{\Gamma}$
it does not follow that $\wt{G}$ acts faithfully
on the vertices of the quotient $\Gamma$.
\begin{definition}
If $(G,\Gamma)$ is a symmetric graph with $G$ acting faithfully on the vertices
of the $\Gamma$, then an {\it unfaithful} extension of $(G,\Gamma)$
is an extension of the form $(\widetilde{G},\widetilde{\Gamma})$
where $\widetilde{G} \neq G$.
\end{definition}
\begin{proposition}
Suppose that $(G,\Gamma)$ is a symmetric graph with
$\Gamma \cong \sab(G,H,HaH)$,
and that $(\wt{G},\wt{\Gamma})$ is an unfaithful extension
of $(G,\Gamma)$ with
$\wt{\Gamma} \cong
\sab(\wt{G},\wt{H},\wt{H}\wt{a}\wt{H})$.
If $\pi : \wt{G} \to G$ is the natural projection, then
$\pi(\wt{H}) < H$ and $\pi(\wt{a}) \in HaH$.
\begin{proof}
Let $N = \ker(\pi)$.
Since $\Gamma$ is a quotient of $\wt{\Gamma}$
we must have
$\Gamma \cong \sab(\wt{G},\wt{K},\wt{K}\wt{a}\wt{K})$
for some $\wt{H} < \wt{K} < \wt{G}$.
Since $\Gamma \cong \sab(G,H,HaH)$
we must have $\core_{\wt{G}}(\wt{K}) = N$
and $\pi(\wt{K}) = H$.
Since $\wt{H} < \wt{K}$ it follows that $\pi(\wt{H}) < \pi(\wt{K}) = H$.
What about the $a$?
\end{proof}
\end{proposition}
\section{Covers and Multicovers}
\begin{definition}
The graph $\widetilde{\Gamma}$ is said to be a {\it cover}
of the graph $\Gamma$ if there exists a graph homomorphism
$\pi : \widetilde{\Gamma} \to \Gamma$ with the property that
for any arc $(B,C)$ of $\Gamma$, and any $v \in \pi^{-1}(B)$,
there is a unique $w \in \pi^{-1}(C)$ such that $(v,w)$
is an arc of $\widetilde{\Gamma}$.
\end{definition}
\begin{definition}
The graph $\widetilde{\Gamma}$ is said to be a {\it multicover}
of the graph $\Gamma$ if there exists a graph homomorphism
$\pi : \widetilde{\Gamma} \to \Gamma$ with the property that
for any arc $(B,C)$ of $\Gamma$, and any $v \in \pi^{-1}(B)$,
there exists some $w \in \pi^{-1}(C)$
(not necessarily unique) such that $(v,w)$
is an arc of $\widetilde{\Gamma}$.
\end{definition}
Clearly a cover is a special case of a multicover.
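A small standard example separating the two notions:

```latex
\begin{example}
The complete bipartite graph $K_{n,n}$ is a multicover of $K_2$:
the map sending each side of the bipartition to one of the two vertices
of $K_2$ is a graph homomorphism, and every vertex of one fiber has
exactly $n$ neighbours in the other fiber.
It is a cover only when $n = 1$.
\end{example}
```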
We shall see in this section
that if $(\widetilde{G},\widetilde{\Gamma})$
is an unfaithful extension of $(G,\Gamma)$ then
$\widetilde{\Gamma}$ is a multicover of $\Gamma$.
\begin{proposition}
Let $(\widetilde{G},\widetilde{\Gamma})$ be an unfaithful
extension of $(G, \Gamma)$ and let $N$ be a normal subgroup of
$\widetilde{G}$
such that $\widetilde{G}/N \cong G$.
If $B$ is any vertex of $\Gamma$ then $N$ acts transitively on
the fiber $\pi^{-1}(B)$.
\begin{proof}
Let $\Delta = \pi^{-1}(B)$.
By proposition [?], $\Delta$ is a block of imprimitivity
of $V \widetilde{\Gamma}$.
Suppose that $\Gamma \cong \sab(G,H,HaH)$ and
$\widetilde{\Gamma} \cong
\sab(\widetilde{G},\widetilde{H},\widetilde{H}\widetilde{a}\widetilde{H})$.
Then the setwise stabilizer of $\Delta$ in $\widetilde{G}$ is $\widetilde{H}N$, and the action
of $\widetilde{H}N$ on the fiber is permutation equivalent to
$(\widetilde{H}N,\cos_{\widetilde{H}N}(\widetilde{H}))$.
Since every right coset of $\widetilde{H}$ in $\widetilde{H}N$ contains
an element of $N$, the subgroup $N$ acts transitively on the fiber.
\end{proof}
\end{proposition}
\newpage
\section{Combinatorial Perspective}
In this chapter we look at a number of well-known
``combinatorial'' methods for constructing
extensions of symmetric graphs. We first describe the constructions,
and then analyze them group theoretically.
In the next chapter we describe a ``geometrical'' approach
to the extension problem for symmetric graphs which was
proposed by Gardiner and Praeger.
\section{Direct Product and Lexicographic Product}
There are a number of ways of forming ``products''
of graphs. See for example [?].
\begin{definition}
For any two graphs $\Gamma$ and $\Sigma$,
the {\it direct product} of $\Gamma$ and $\Sigma$
is the graph with vertex set $V \Gamma \times V \Sigma$
where $(v,x)$ is adjacent to $(w,y)$ if and only
if $v$ is adjacent to $w$ in $\Gamma$
and $x$ is adjacent to $y$ in $\Sigma$.
\end{definition}
\begin{proposition}
If $\Gamma$ and $\Sigma$ are both symmetric graphs
then the direct product of $\Gamma$ and $\Sigma$
is symmetric.
\end{proposition}
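A small example connecting the direct product with covers; the identification with the cube is a classical fact:

```latex
\begin{example}
Let $\Gamma = K_4$ and let $\Sigma = K_2$.
The direct product of $\Gamma$ and $\Sigma$ has vertex set
$V K_4 \times V K_2$, with $(v,x)$ adjacent to $(w,y)$ exactly when
$v \neq w$ and $x \neq y$; that is, it is $K_{4,4}$ with a perfect
matching removed.
This graph is the canonical bipartite double cover of $K_4$,
and it is isomorphic to the cube $Q_3$.
\end{example}
```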
\begin{definition}
For any two graphs $\Gamma$ and $\Sigma$,
the lexicographic product of $\Gamma$ by $\Sigma$
is the graph with vertex set $V \Gamma \times V \Sigma$
where $(v,x)$ is adjacent to $(w,y)$
if and only if $v$ is adjacent to $w$ in $\Gamma$
or $v = w$ and $x$ is adjacent to $y$ in $\Sigma$.
\end{definition}
\begin{proposition}
If $\Gamma$ is any symmetric graph and $\Sigma$ is the empty
graph on $n$ vertices, then the lexicographic product
of $\Gamma$ by $\Sigma$ is symmetric.
\end{proposition}
\section{Group Theoretic Analysis of the Biggs Covering Graph Construction}
The group $\widetilde{G}$ acts unfaithfully on $\cB$ with kernel
$$\widetilde{N} = \{ (n,1) : n \in N \} \cong N.$$
If $|N| = n$ then $|\widetilde{G}| = n |G|$,
and since $\widetilde{\Gamma}$ has exactly $n$ times as many vertices as $\Gamma$,
for the vertex $\widetilde{\alpha} = (1,\alpha)$ of $\widetilde{\Gamma}$ we have
$[\widetilde{G} : \widetilde{G}_{\widetilde{\alpha}}] = n [G : G_\alpha]$.
It follows that
$ |\widetilde{G}_{\widetilde{\alpha}}| = |G_\alpha|$.
Let $\pi : \widetilde{G} \to G$ be the natural projection.
Clearly $|\pi(\widetilde{G}_{\widetilde{\alpha}})| \leq |\widetilde{G}_{\widetilde{\alpha}}|$.
Since $G_\alpha \leq \pi(\widetilde{G}_{\widetilde{\alpha}})$,
we must have $G_\alpha = \pi(\widetilde{G}_{\widetilde{\alpha}})$
and $|\pi(\widetilde{G}_{\widetilde{\alpha}})| = |\widetilde{G}_{\widetilde{\alpha}}|$,
so $\widetilde{G}_{\widetilde{\alpha}} \cong \pi(\widetilde{G}_{\widetilde{\alpha}}) \cong G_\alpha$.
In fact, the permutation groups
$(G_\alpha, \Gamma(\alpha))$ and
$(\widetilde{G}_{\widetilde{\alpha}}, \widetilde{\Gamma}(\widetilde{\alpha}))$
are equivalent via the bijection
$\eta : \Gamma(\alpha) \to \widetilde{\Gamma}(\widetilde{\alpha})$
which sends $\beta$ to the unique neighbour of $\widetilde{\alpha}$ in $B(\beta)$.
Probably a much easier way to say this.
If $\Gamma$ is isomorphic to the coset graph $\Gamma(G, H, a)$
then since the local actions of $\Gamma$ and $\widetilde{\Gamma}$
are the same, $\widetilde{\Gamma}$ must be isomorphic to the coset graph
$\Gamma(\widetilde{G}, H, \widetilde{a})$ where $\widetilde{a}$
is such that $\pi(\widetilde{a}) = a$. That is
$\widetilde{a} \in \{ (n,a) : n \in N \}$.
Thus for any $N$ with $|N| = n$ there are exactly $n$ possible Biggs covers of $\Gamma$.
Looking closely at the compatibility condition between
$\rho$ and $\phi$ we see that, since $G$ acts transitively
on the arcs of $\Gamma$, the map $\phi$ is completely
determined by where it sends the arc $(v_{H}, v_{Ha})$.
Choosing where to send this arc is essentially equivalent to
choosing which preimage of $a$ we want.
Requiring that $\phi((v_{H}, v_{Ha})) = n$ is essentially
the same as determining that $\widetilde{a} = (n,a)$.
I think that the Biggs covering graph construction encompasses
all pairs of graphs $\Gamma(G, H, a)$ and $\Gamma(\widetilde{G}, H, \widetilde{a})$
where $\widetilde{G}$ is a semidirect product of $N$ by $G$ for some $N$.
In the more general case where $\widetilde{G}$ is an extension of $N$ by $G$
but not a split extension $\Gamma(\widetilde{G}, H, \widetilde{a})$
should be a cover of $\Gamma(G, H, a)$ in the sense that the induced
bipartite graph is a matching, but the Biggs covering graph construction
cannot be used to construct $\Gamma(\widetilde{G}, H, \widetilde{a})$
from $\Gamma(G, H, a)$.
\section{Generalization of the Biggs Covering Graph Construction}
The Biggs covering graph construction applies to the case
where $\widetilde{G}$ is a semidirect product
of $N$ by $G$ and $H < \widetilde{G}$ is such that $H \cap N = \{1\}$.
The graph $\widetilde{\Gamma}$ is isomorphic to
$\sab(\widetilde{G},H,HaH)$
and the graph $\Gamma$ is isomorphic
to $\sab(\widetilde{G},\widetilde{H},\widetilde{H}a\widetilde{H})$
where $\widetilde{H} = HN$.
In this section we consider a generalization of the
Biggs covering graph construction in which the condition
that $H \cap N = \{1\}$ is relaxed.
We find that in this case $\widetilde{\Gamma}$ is a
{\it multicover} of $\Gamma$.
\begin{lemma}
If $H$ is a subgroup of $G$ and $N$ is a normal subgroup of $G$
then $H \cap N$ is a normal subgroup of $H$.
\end{lemma}
Let $M = N / (H \cap N)$ and let $\lambda : N \to M$ be the natural projection.
| -138,918.692583
|
[
-1.7890625,
1.609375
] | 40.95952
|
[
-3.44921875,
0.59326171875,
-1.57421875,
-6.0390625,
-0.67138671875,
7.78515625
] |
[
0.93701171875,
7.38671875,
0.98974609375,
6.6796875
] | 935
| 18,251
|
[
-2.947265625,
3.62109375
] | 35.39562
|
[
-5.28515625,
-3.076171875,
-4.234375,
-2.25390625,
1.30859375,
11.109375
] | 0.833572
| 18.695207
| 9.10635
| 1.212659
|
[
3.4915637969970703
] | -88,418.074323
| 4.594598
| -137,099.464161
| 0.190803
| 5.631464
|
[
-2.072265625,
-2.935546875,
-3.22265625,
-5.125,
2.021484375,
11.4765625
] |
[
-5.5546875,
-1.5966796875,
-1.4130859375,
-0.8046875,
2.865234375,
3.41796875
] | |
BkiUfjDxK4tBVhat6MXr
|
\section{Introduction}
\label{sec:intro}
The Standard Model (SM) provides a successful framework to describe three out of four known fundamental forces of nature, i.e. the electromagnetic, nuclear strong and weak interactions. However, it does not account for the number of fermion generations and lacks a natural explanation for the tremendous hierarchy in the fermion sector, which is extended over a range of 13 orders of magnitude from the light active neutrino mass scale up to the top quark mass. Moreover, there is no assertion for the smallness of the quark mixing angles, which is in contrast with the sizable values of two of the three leptonic mixing angles. This set of issues is the so called flavor problem which, among others, motivates the construction of models where the SM particle content and symmetries are enlarged. One way to tackle the flavor problem is offered in SM extensions that assume the existence of discrete flavor symmetries\footnote{For more details about flavor symmetry groups see, for example \cite{King:2013eh,Altarelli:2010gt,Ishimori:2010au,King:2015aea}}. These scenarios feature relations in the Yukawa sector and typically predict correlations between the observed fermion mixing patterns as well as the fermion mass relations.
With the use of flavor symmetries it can be suggested that the existing number of fermion families is because they transform as components of a three-dimensional irreducible representation (irrep) of a non-Abelian discrete group, such as $A_4$. Another option is to have the heaviest fermion transforming as one-dimensional irrep and the lighter ones being the components of a doublet under the symmetry group. The smaller non-Abelian discrete groups, containing one- and two-dimentional irreps, are \cite{Ishimori:2010au}: $S_{3}$, $Q_{4}$, $D_4$ and $Q_6$\footnote{Pioneer works using these symmetries to tackle the flavor problem can be found in \cite{Gerard:1982mm,Frampton:1994rk,Grimus:2003kq,Kubo:2003iw,Kubo:2003pd,Babu:2004tn,Kajiyama:2005rk,Lovrekovic:2012bz}}. Both assumptions point to the idea on how to account for the three generations of quarks and leptons but not for their (very strong) mass hierarchies.
For this reason, apart of the flavor symmetry, the distribution of the fermion mass spectrum could suggest the existence of new particles, resulting in phenomenologically richer setups. One can add more scalars to the SM with non-zero vacuum expectation values (vevs) whose contributions to the fermion masses or to different mass matrix elements are restricted by the additional symmetry. Similar to the role of a $\mathcal{Z}_2$ symmetry in a 2-Higgs doublet model (2HDM) \cite{Branco:2011iw}. One fashionable approach to explain the fermion mass and mixing hierarchies is by using the Froggatt–Nielsen (FN) mechanism~\cite{Froggatt:1978nt}, where vector-like fermions (VLFs) are introduced to the SM and transform under a new $\mathrm{U(1)_F}$ symmetry which is spontaneously broken by the vevs of $\mathrm{SU(2)}$ scalar singlets (flavons). The new energy scale is much bigger than the electroweak one as well as the VLFs are heavier than the SM ones, then all the new fields are effectively integrated out.
Here, we present a framework that combines the previous ideas. We consider a multiscalar model with the flavor symmetry $\mathcal{Q}_{6}\times\mathcal{Z}_{2}$, where the $\mathcal{Z}_{2}$ symmetry assigns one Higgs doublet to the up-type fermion sector and the other to the down-type one; both scalars are $\mathcal{Q}_{6}$ singlets. In contrast, fermions transform as ($doublet$+$singlet$) under $\mathcal{Q}_6$, preventing the Yukawa interaction between light fermions and the $\mathrm{SU(2)}$ scalar doublets. Therefore, one Higgs doublet furnishes the top quark with a nonzero mass, whereas the other one generates the bottom quark and tau masses. In order to generate the masses of the light fermions through a seesaw (or FN) mechanism, we introduce VLFs and three different flavons with non-trivial transformations under the flavor symmetry. We also introduce right-handed (RH) Majorana neutrinos to generate the small neutrino masses via a type-I seesaw mechanism.
In contrast to the FN mechanism, in our model the VLFs are not decoupled. For this reason, we study processes involving VLFs that are within the reach of the Large Hadron Collider (LHC). We perform collider studies for vector-like leptons (VLLs) and vector-like quarks (VLQs), focusing on pair-production channels for both cases, while for VLLs single-production topologies are also included. Furthermore, it is well known that the experimentally measured muon anomalous magnetic moment deviates from the SM prediction. The long-standing discrepancy of the muon $(g-2)$ with the SM was first observed by the E821 experiment at Brookhaven National Laboratory~\cite{Bennett:2006fi} and has recently been confirmed by the Muon $g-2$ experiment at Fermilab \cite{Abi:2021gix}. Hence, we also show how the muon $(g-2)$ anomaly is accommodated within our theory through the predicted VLLs.
The layout of the remainder of the paper is as follows. In Sec.~\ref{sec:model} we describe the model, i.e. we provide the invariant Yukawa Lagrangian, the scalar potential and the particle mass spectrum. Afterwards, in Sec.~\ref{section:Muon_g2}, the consequences of our model for the muon anomalous magnetic moment are analyzed. In Sec.~\ref{section:VLLs_collider} we detail the methodology for the collider analysis of the VLFs, for both the quark and lepton counterparts, with the numerical results showcased in Sec.~\ref{section:Results}. We conclude in Sec.~\ref{sec:conclusions}.
\section{Model Description}
\label{sec:model}
We propose a model where the SM gauge group is extended with a global flavor symmetry
group, i.e. the complete symmetry is $\mathrm{SU(3)_C \times SU(2)_L \times U(1)_Y} \times \mathcal{Q}_{6}\times \mathcal{Z}_{2}$. This theory adds to the SM particle content a second $\mathrm{SU(2)}$ scalar doublet, three flavon fields, two RH neutrinos, a flavor doublet of VLQs and a flavor doublet of VLLs. The charge assignments of the particle content under the flavor group are shown in Tables~\ref{tab:model} and \ref{tab:model2}.
\begin{table}[t!]
\begin{tabular}{|c|cc|cc|cc|cc|cc|cc|}
\hline
& $H_{1}$ & $H_{2}$ & $Q_{L_D}$ & $Q_{L_3}$ & $u_{R_D}$ & ${u}_{R_3}$ & $d_{R_D}$ & ${d}_{R_3}$ & $L_{L_D}$ & $ L_{L_3} $ & ${\ell }_{R_D}$ & $\ell_{R_3}$ \\ \hline\hline
$SU(2)_L$ & \bf{2} & \bf{2} & \bf{2} & \bf{2} & \bf{1} & \bf{1} & \bf{1} & \bf{1}& \bf{2} & \bf{2} & \bf{1}& \bf{1}\\
$U(1)_Y$ & 1/2 & 1/2 & 1/6 & 1/6 & 2/3 & 2/3 & -1/3 & -1/3& -1/2 & -1/2 & -1 & -1\\
$\mathcal{Q}_{6}$ & $\mathbf{1}_{+-}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{++}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{++}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{+-}$ \\
$\mathcal{Z}_{2}$ & $-1$ & $+1$ & $+1$ & $+1$ & $-1$ & $-1$ & $+1$ & $+1$ & $+1$ & $+1$ & $+1$ & $+1$ \\ \hline
\end{tabular}%
\caption{Charge assignments of the SU(2) scalar doublets and SM fermions
under the symmetry, $\mathcal{Q}_{6}\times \mathcal{Z}_{2}$.
We have arranged the $\mathcal{Q}_{6}$ doublets as, $Q_{L_D}\equiv (Q_{L_1},Q_{L_2})^T$, $u_{R_D}\equiv (u_{R_1},{u}_{R_2})^T$, $d_{R_D}\equiv ({d}_{R_1},{d}_{R_2})^T$,
$L_{L_D}\equiv (L_{L_1},L_{L_2})^T$ and ${\ell}_{R_D}\equiv ({\ell }_{R_1},{\ell}_{R_2})^T$.
}
\label{tab:model}
\end{table}
\begin{table}[t!]
{\small
\begin{tabular}{|c|ccc|cc|cc|cc|cc|}
\hline
& $\sigma _{1}$ & $\sigma _{2}$ & $\xi $ & $N_{R_1}$ & $N_{R_2}$ & $T_{L}$
& $T_{R}$ & $ B_{L}$ & $B_{R}$ & $E_{L}$ & $E_{R} $ \\ \hline\hline
$SU(2)_L$ & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1}& \bf{1} & \bf{1} & \bf{1} \\
$U(1)_Y$ & 0 & 0 & 0 & 0 & 0 & 2/3 & 2/3 & -1/3& -1/3 & -1 & -1 \\
$\mathcal{Q}_{6}$ & $\mathbf{1}_{++}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}_{2}$ & $\mathbf{1}_{+-}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}_{1}$ & $\mathbf{2}_{1}$ & $\mathbf{2}_{1}$ & $\mathbf{2}_{1}$ & $\mathbf{2}_{1}$ & $\mathbf{2}_{1}$ \\
$\mathcal{Z}_{2}$ & $-1$ & $-1$ & $-1$ & $+1$ & $+1$ & $+1$ & $-1$ & $-1$ & $+1$ & $-1$ & $+1$ \\ \hline
\end{tabular}%
}
\caption{Assignments of the singlet scalars and exotic fermions under the $%
\mathcal{Q}_{6}$ flavor symmetry irreps. For convenience we have not
included a subindex $D$ for $\mathcal{Q}_{6}$ doublets.}
\label{tab:model2}
\end{table}
Given the matter content in our theory, the invariant Yukawa Lagrangian is formed by the contribution from each sector as,
%
\begin{equation}
\mathcal{L}_{Y}= \mathcal{L}_{u} + \mathcal{L}_{d} + \mathcal{L}_{\ell} + \mathcal{L}_{\nu}
\end{equation}
where
\begin{equation}
\mathcal{L}_{u}=
y_{u3}\overline{Q}_{L_3}u_{R_3}\widetilde{H}_{1}
+y_{T}\overline{Q}_{D} T_{R} \widetilde{H}_{1}
+y_{T1}\overline{T}_{L}T_{R}\sigma _{1}
+y_{T2}\overline{T}_{L} u_{R_D} \sigma_{2}
+y_{T3}\overline{T}_{L} u_{R_3} \xi
+y_{T4}\overline{T}_{L}T_{R}\xi + \mathrm{H.c.},
\label{Lyu}
\end{equation}
%
\begin{equation}
\mathcal{L}_d =
y_{d3}\overline{Q}_{L_3}d_{R_3}H_{2}
+y_{B}\overline{Q}_{D}B_{R} H_{2}
+y_{B1}\overline{B}_{L}B_{R}\sigma _{1}
+y_{B2}\overline{B}_{L}d_{R_D}\sigma_{2}
+y_{B3}\overline{B}_{L}d_{R_3} \xi
+y_{B4}\overline{B}_{L}B_{R}\xi +\mathrm{H.c.},
\label{Lyd}
\end{equation}
%
\begin{equation}
\mathcal{L}_{\ell}=
y_{\ell3}\overline{L}_{L_3}\ell _{R_3}H_{2}
+y_{E}\overline{L}_{L_D}E_{R}H_{2}
+y_{E1}\overline{E}_{L}E_{R}\sigma _{1}
+y_{E2}\overline{E}_{L}\ell _{R_D}\sigma _{2}
+y_{E3}\overline{E}_{L}\ell_{R_3}\xi
+y_{E4}\overline{E}_{L}E_{R}\xi + \mathrm{H.c.},
\label{Lyl}
\end{equation}
%
and
%
\begin{equation}
\mathcal{L}_{\nu}=\sum_{i=1}^{2}\frac{1}{\Lambda }
\left(
y_{\nu _{i}}\overline{L}_{L_3}N_{R_i}\widetilde{H}_{1}\sigma_{1}
+y'_{\nu _{i}}\overline{L}_{L_D}N_{R_i}\widetilde{H}_{1}\xi
\right)
+\sum_{i=1}^{2}M_{R_{i}}N_{R_i}\overline{N_{R_i}^{C}}+ \mathrm{H.c.}
\label{Lynu}
\end{equation}
with $\widetilde{H}_a=i\tau_2 H^*_a$, where $\tau_2$ is the second Pauli matrix.
We have defined the $\mathcal{Q}_{6}$ doublets as, $Q_{L_D}\equiv (Q_{L_1},Q_{L_2})^T$, $u_{R_D}\equiv (u_{R_1},{u}_{R_2})^T$, $d_{R_D}\equiv ({d}_{R_1},{d}_{R_2})^T$, $L_{L_D}\equiv (L_{L_1},L_{L_2})^T$, ${\ell}_{R_D}\equiv ({\ell }_{R_1},{\ell}_{R_2})^T$, $T_{L,R}\equiv (T_{L_1,R_1},T_{L_2,R_2})^T$
and $B_{L,R}\equiv (B_{L_1,R_1},B_{L_2,R_2})^T$. Using the multiplication rules of $\mathcal{Q}_{6}$ given in Appendix~\ref{app}, the above Yukawa interactions can be rewritten as follows
\begin{eqnarray}
\mathcal{L}_{u}&=&
y_{u3} \overline{Q}_{L_3}u_{R_3}\widetilde{H}_{1}
+y_{T}
\left( \overline{Q}_{L_1}T_{R_1}-\overline{Q}_{L_2}T_{R_2}\right)\widetilde{H}_{1}
+y_{T1}\left( \overline{T}_{L_1}T_{R_2}-\overline{T}_{L_2}T_{R_1}\right)\sigma_{1}\notag\\
&+&y_{T2}
\left( \overline{T}_{L_1}u_{R_1}-\overline{T}_{L_2}u_{R_2}\right)\sigma _{2}
+y_{T3}\left( \overline{T}_{L_1}\xi_{1}-\overline{T}_{L_2}\xi _{2}\right) u_{R_3}
+y_{T4}\left( \overline{T}_{L_1}T_{R_1}\xi_{2}-\overline{T}_{L_2}T_{R_2}\xi _{1}\right) + \mathrm{H.c.},
\label{Lyu2}
\end{eqnarray}
\begin{eqnarray}
\mathcal{L}_{d}&=&
y_{d3}\overline{Q}_{L_3}d_{R_3}H_{2}
+y_{B}\left( \overline{Q}_{L_1}B_{R_1}-\overline{Q}_{L_2} B_{R_2}\right)H_{2}
+y_{B1}\left( \overline{B}_{L_1}B_{R_2}-\overline{B}_{L_2}B_{R_1}\right)\sigma _{1}\notag\\
&+&y_{B2}\left( \overline{B}_{L_1}d_{R_1}-\overline{B}_{L_2}d_{R_2}\right)\sigma _{2}
+y_{B3}\left( \overline{B}_{L_1}\xi _{1}{-}\overline{B}_{L_2}\xi _{2}\right) d_{R_3}
+y_{B4}\left( \overline{B}_{L_1}B_{R_1}\xi _{2}-\overline{B}_{L_2}B_{R_2}\xi _{1}\right)
+\mathrm{H.c.}, \label{Lyd2}
\end{eqnarray}
\begin{eqnarray}
\mathcal{L}_{\ell} &=&
y_{\ell3}\overline{L}_{L_3}\ell _{R_3}H_{2}
+y_{E}\left( \overline{L}_{L_1}E_{R_1}-\overline{L}_{L_2}E_{R_2}\right) H_{2}
+y_{E1}\left( \overline{E}_{L_1}E_{R_2}-\overline{E}_{L_2}E_{R_1}\right)\sigma _{1} \notag\\
&+&y_{E2}\left( \overline{E}_{L_1}\ell _{R_1}-\overline{E}_{L_2}\ell _{R_2}\right)\sigma _{2}
+ y_{E3}\left( \overline{E}_{L_1}\xi_{1}{-}\overline{E}_{L_2}\xi _{2}\right)
\ell_{R_3}
+y_{E4}\left( \overline{E}_{L_1}E_{R_1}\xi _{2}-\overline{E}_{L_2}E_{R_2}\xi _{1}\right) + \mathrm{H.c.}, \label{Lyl2}
\end{eqnarray}
\begin{equation}
\mathcal{L}_{\nu}=\sum_{i=1}^{2}\frac{1}{\Lambda }\left[
y_{\nu _{i}}\overline{L}_{L_3}N_{R_i}\widetilde{H}_{1}\sigma_{1}
+y'_{\nu _{i}}\left( \overline{L}_{L_1}N_{R_i}\widetilde{H}_{1}\xi _{1}-\overline{L}_{L_2}N_{R_i}\widetilde{H}_{1}\xi _{2}\right) %
\right] +\sum_{i=1}^{2}M_{R_{i}}N_{R_i}\overline{N_{R_i}^{C}}+\mathrm{H.c.}
\label{Lynu2}
\end{equation}
In addition, the invariant scalar potential reads
\begin{equation}
V=V_{\text{2HDM}}+V(\text{$H_{1}$,$H_{2}$,flavons})
\label{ec:pot1}
\end{equation}
where the first term corresponds to the 2HDM potential. The second term
in eq.~(\ref{ec:pot1}) contains the contributions from the flavon fields, $\sigma_1$, $\sigma_2$ and $\xi$, i.e. the interactions among them and with the $\mathrm{SU(2)}$ scalar doublets. Explicitly,
\begin{eqnarray}
V_{\text{2HDM}}&=& -\mu _{1}^{2}\left( H_{1}^{\dagger }H_{1}\right) -\mu _{2}^{2}\left(
H_{2}^{\dagger}H_{2}\right)
+\frac{\lambda_{1}}{2}\left(H_{1}^{\dagger }H_{1}\right) ^{2} \notag\\
&+& \frac{\lambda _{2}}{2}\left(H_{2}^{\dagger }H_{2}\right) ^{2}+\lambda _{3}\left(
H_{1}^{\dagger }H_{1}\right) \left( H_{2}^{\dagger }H_{2}\right)
+\lambda_{4}\left(H_{1}^{\dagger }H_{2}\right) \left(H_{2}^{\dagger
}H_{1}\right) +\frac{\lambda _{5}}{2}\left[\left(H_{1}^{\dagger
}H_{2}\right)^2+\mathrm{H.c.}\right]
\label{ec:pot2hdm}
\end{eqnarray}
where $H_i=\left(\Phi^{+}_{i}, \Phi^{0}_{i}\right)^{T}$ with $i=1,2$ and
\begin{eqnarray} \label{eq:potential}
V(\text{$H_{1}$,$H_{2}$,flavons}) &=&
-\mu _{3}^{2}\sigma_1^{*}\sigma_1
-\mu_{4}^{2}\sigma_2^{*}\sigma_2
-\mu_{5}^{2}\xi_1^{*}\xi_1
-\mu_{6}^2\xi_2^{*}\xi_2
-\mu_{7}^{2}(\xi_1^{*}\xi_2+\mathrm{H.c.})
-\mu _{8} \left(\sigma_1 H_{1}^{\dagger }H_{2}+ \mathrm{H.c.} \right)
+\lambda_{6} \left(H_{1}^{\dagger }H_{1}\right) (\sigma_1 ^{*}\sigma_1)\notag\\
&+&\lambda_{7}\left(H_{2}^{\dagger }H_{2}\right) (\sigma_1 ^{*}\sigma_1)
+\lambda _{8}\left(H_{1}^{\dagger }H_{1}\right) (\sigma_2 ^{*}\sigma_2)
+\lambda_{9}\left(H_{2}^{\dagger }H_{2}\right) (\sigma_2 ^{*}\sigma_2)
+\frac{\lambda _{10}}{2}(\sigma_1 ^{*}\sigma_1)^{2}
+\frac{\lambda _{11}}{2}(\sigma_2 ^{*}\sigma_2)^2 \notag\\
&+&\lambda_{12}(\sigma_1 ^{*}\sigma_1)(\sigma_2^{*}\sigma_2)
+\lambda^{^{\prime }}_{12}(\sigma_1 ^{*}\sigma_2)(\sigma_2^{*}\sigma_1)
+\lambda^{^{\prime \prime }}_{12}\left[(\sigma_1^{*}\sigma_2)^2+\mathrm{H.c.}\right]
+\frac{\lambda_{13}}{2}\left( \xi^*\xi \right)_{\mathbf{1}_{--}}\left(\xi^*\xi \right)_{\mathbf{1}_{--}},
\end{eqnarray}
The $\mu_{5,6,7}$ terms in the last equation softly break the $\mathcal{Q}_6$ symmetry\footnote{$\xi$ does not mix with the other scalars because of the $\mathcal{Q}_6$ symmetry. We have, for real $\xi$, $(\xi^2)_{++}=\xi_1 \xi_2 - \xi_2 \xi_1=0$ and, if complex, $(\xi^*\xi)_{++}+\mathrm{H.c.}=0$.} and prevent the appearance of either Goldstone or tachyonic fields. Since the flavons are real fields with no complex charge assignment, the $\lambda_{12}$, $\lambda^{\prime}_{12}$ and $\lambda^{\prime\prime}_{12}$ terms are equivalent. Then,
eq.~(\ref{eq:potential}) can be rewritten (discarding redundant terms) as follows,
\begin{eqnarray}
V(\text{$H_{1}$,$H_{2}$,flavons}) &=&
-\mu _{3}^{2}\sigma_1^{2}-\mu_{4}^{2}\sigma_2^{2} -\mu_{5}^{2}\xi_1^{2}-\mu_{6}^2\xi_2^{2}-\mu_{7}^{2}\xi_1\xi_2
-\mu _{8} \left(\sigma_1 H_{1}^{\dagger }H_{2}+ \mathrm{H.c.} \right)
+\lambda_6 \left(H_{1}^{\dagger }H_{1}\right) \sigma_1 ^{2}
+\lambda_{7}\left(H_{2}^{\dagger }H_{2}\right) \sigma_1 ^{2} \notag \\
&+&\lambda_{8}\left(H_{1}^{\dagger }H_{1}\right)\sigma_2 ^{2}
+\lambda_{9}\left(H_{2}^{\dagger }H_{2}\right)\sigma_2 ^{2}
+\frac{\lambda_{10}}{2}\sigma_1^{4}
+\frac{\lambda _{11}}{2}\sigma_2^4
+\lambda_{12}\sigma_1 ^{2}\sigma_2 ^{2}
+\frac{\lambda _{13}}{2}\left(2 \xi_1\xi_2 \right)^2,
\label{ec:potflavon}
\end{eqnarray}
All these scalars contribute to the symmetry breaking: they acquire nonzero vevs
and are shifted as follows
\begin{equation}
\Phi_i^0=\frac{1}{\sqrt{2}}\left(v_i+\varphi_{R_i}+i \varphi_{I_i}\right),\qquad
\sigma_i=\frac{1}{\sqrt{2}}\left(v_{\sigma_i}+\sigma_{R_i}\right)
\qquad\text{and}\qquad
\xi_i=\frac{1}{\sqrt{2}}\left(v_{\xi_i}+\xi_{R_i}\right),
\end{equation}
where $i=1,2$, the $SU(2)$ scalar vevs satisfy $v^2_1 +v^2_2 = v_{\rm EW}^2$ and $v_{\rm EW}\equiv246\,\text{GeV}$.
\subsection*{Fermion mass spectrum}
\label{sec:fermionmasses}
After the spontaneous breaking of
the $\mathrm{SU(3)_C\times SU(2)_L \times U(1)_Y \times \mathcal{Q}_{6}\times \mathcal{Z}_{2}}$ symmetry,
using eqs.(\ref{Lyu}), (\ref{Lyd}) and (\ref{Lyl}), we get $5\times5$ fermion mass matrices,
\begin{eqnarray}
M_{f}&=&\frac{1}{\sqrt{2}}\left(
\begin{array}{ccccc}
0 & 0 & 0 & y_{F} v_{H_{1}} & 0 \\
0 & 0 & 0 & 0 & -y_{F}v_{H_{1}} \\
0 & 0 & y_{f3}v_{H_{1}} & 0 & 0 \\
y_{F2}v_{\sigma _{2}} & 0 & y_{F3}v_{\xi _{1}} & y_{F4}v_{\xi _{2}} & y_{F1}v_{\sigma _{1}} \\
0 & -y_{F2}v_{\sigma _{2}} & -y_{F3}v_{\xi _{2}} & -y_{F1}v_{\sigma _{1}} &
-y_{F4}v_{\xi _{1}}%
\end{array}%
\right) =\left(
\begin{array}{cc}
C_{f}& A_{f}\\
B_{f}& M_{F}%
\end{array}%
\right) , \label{MF1}
\end{eqnarray}
%
where the subindices are $f=u,d,\ell$ and $F=T,B,E$. The block matrices in the previous equation are defined as
%
\begin{equation}
C_{f}=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & M_{f_{33}}%
\end{array}%
\right), \ \
A_{f}=\left(
\begin{array}{cc}
M_{f_{14}} & 0 \\
0 & M_{f_{25}} \\
0 & 0%
\end{array}%
\right), \ \
B_{f}=\left(
\begin{array}{ccc}
M_{f_{41}} & 0 & M_{f_{43}} \\
0 & M_{f_{52}} & M_{f_{53}}%
\end{array}%
\right),
\label{Mblocks}
\end{equation}
%
and
%
\begin{equation}
M_{F} =\left(
\begin{array}{cc}
M_{f_{44}} & M_{f_{45}} \\
M_{f_{54}} & M_{f_{55}}%
\end{array}%
\right)
\label{Mblocks2}
\end{equation}
The mass matrices in eq.(\ref{MF1})
are diagonalized via the following bi-unitary transformation:
\begin{equation}
\left(U_{L}^{f}\right)^{\dagger }M_{f}U_{R}^{f}=\text{diag}\left( m_{f_1},m_{f_2},m_{f_3},m_{F_1},m_{F_2}\right),
\label{matrixdiagonalization}
\end{equation}
where $f=u,d,\ell$ and $F=T,B,E$.
One can notice, from eq.~(\ref{Mblocks}), that only the third fermion family
gets its mass through the Yukawa interaction with one of the two Higgs doublets.
That is, the top quark gets tree-level mass from its Yukawa interaction with $H_{1}$,
whereas the bottom quark and tau lepton obtain their masses from their Yukawa
interactions with the second Higgs doublet $H_{2}$. We will assume that
the symmetry breaking and the masses of the VLFs
are around the TeV scale. Thus, the mass matrices for the SM charged fermions,
resulting from a seesaw-like (or FN-like) mechanism, are given by
\begin{eqnarray}
\widetilde{M}_{f} &=&C_{f}-A_{f}M_{F}^{-1}B_{f}=
\frac{M_{f_{14}}}{M_{f_{45}}^{2}+M_{f_{55}}M_{f_{44}}}\left(
\begin{array}{ccc}
-M_{f_{55}} M_{f_{41}} & -M_{f_{45}} M_{f_{41}} & - M_{f_{55}}M_{f_{43}}+M_{f_{45}}M_{f_{53}} \\
M_{f_{45}} M_{f_{41}} & -M_{f_{44}} M_{f_{41}} & M_{f_{45}}M_{f_{43}}+M_{f_{44}}M_{f_{53}} \\
0 & 0 & M_{f_{33}}%
\end{array}%
\right)
\end{eqnarray}
%
where $f=u,d,\ell$, $F=T,B,E$, and we have used $M_{f_{25}}=-M_{f_{14}}$, $M_{f_{54}}=-M_{f_{45}}$
and $M_{f_{52}}=-M_{f_{41}}$.
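To make the block structure concrete, the following Python sketch (with arbitrary placeholder entries, not fitted values) builds the matrices of eqs.~(\ref{Mblocks}) and (\ref{Mblocks2}) with the $\mathcal{Q}_6$ relations $M_{f_{25}}=-M_{f_{14}}$, $M_{f_{52}}=-M_{f_{41}}$, $M_{f_{54}}=-M_{f_{45}}$, and checks two entries of $C_f-A_fM_F^{-1}B_f$ against the closed form:

```python
import numpy as np

# Placeholder inputs (arbitrary values, in TeV-like units) illustrating
# the block seesaw of eq. (MF1); they are NOT the model's fitted parameters.
M14, M41, M43, M53 = 0.17, 0.9, 0.4, -0.3
M33 = 0.17                     # third-family tree-level mass term
M44, M45, M55 = 1.2, 0.8, -1.1 # heavy vector-like block entries

C = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, M33]])
A = np.array([[M14, 0.0], [0.0, -M14], [0.0, 0.0]])      # M25 = -M14
B = np.array([[M41, 0.0, M43], [0.0, -M41, M53]])        # M52 = -M41
MF = np.array([[M44, M45], [-M45, M55]])                 # M54 = -M45

Mtilde = C - A @ np.linalg.inv(MF) @ B   # seesaw-like light mass matrix

# The (1,1) and (2,2) entries reproduce the closed form quoted in the text,
# with overall factor M14/(M45^2 + M55*M44).
det = M45**2 + M55 * M44
assert np.isclose(Mtilde[0, 0], -M14 * M55 * M41 / det)
assert np.isclose(Mtilde[1, 1], -M14 * M44 * M41 / det)
```

Since the third row of $A_f$ vanishes, the (3,3) entry receives no seesaw correction, consistent with the third family obtaining its mass directly from the Higgs doublets.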
Furthermore, from the neutrino Yukawa interactions, we obtain a $5\times5$ neutrino mass
matrix given by
\begin{equation}
M_{\nu }=\left(
\begin{array}{cc}
0_{3\times 3} & M_{D_\nu} \\
M_{D_\nu}^{T} & M_{R}%
\end{array}%
\right) ,
\end{equation}
where $M_{D_\nu}$ is the Dirac mass matrix and $M_{R}$ is the Majorana mass matrix for
RH neutrinos. These matrices are
\begin{equation}
M_{D_\nu}=\left(
\begin{array}{cc}
A_{\nu } & C_{\nu } \\
-\tilde{A}_{\nu } & -\tilde{C}_{\nu } \\
B_{\nu } & D_{\nu }%
\end{array}%
\right) ,\quad \quad M_{R}=\left(
\begin{array}{cc}
M_{R_{1}} & 0 \\
0 & M_{R_{2}}%
\end{array}%
\right), \label{Mnu}
\end{equation}
with $A_\nu = y'_{\nu _{1}}v_1 v_{\xi_1}/2\Lambda$, $\tilde{A}_\nu = y'_{\nu _{1}}v_1 v_{\xi_2}/2\Lambda$, $C_\nu = y'_{\nu _{2}}v_1 v_{\xi_1}/2\Lambda$, $\tilde{C}_\nu = y'_{\nu_{2}}v_1 v_{\xi_2}/2\Lambda$, $B_\nu = y_{\nu _{1}}v_1 v_{\sigma_1}/2\Lambda$ and $D_\nu = y_{\nu _{2}}v_1 v_{\sigma_1}/2\Lambda$.
Assuming that the right-handed Majorana neutrinos have masses much larger
than the electroweak symmetry breaking scale $v_{\rm EW}=246$ GeV, the type I seesaw
mechanism can be implemented to generate the tiny masses of the light active
neutrinos. The resulting mass matrix for light active neutrinos takes the
form
\begin{equation}\label{Mnulow}
\begin{aligned}
&\widetilde{M}_{\nu } = M_{D_\nu}M_{R}^{-1}M_{D_\nu}^{T}
\end{aligned}
\end{equation}
Then, the light active neutrino masses are given by
\begin{equation}
\label{eq:neutrino_masses}
\begin{aligned}
m_{\nu_1} = 0, \quad \quad m_{\nu_2,\nu_3} = \frac{\kappa \pm \kappa'}{2M_{R_1} M_{R_2}},
\end{aligned}
\end{equation}
where we have defined
\begin{equation}\label{eq:kappas_defs}
\begin{aligned}
&\kappa = \tilde{C}_\nu^2 M_{R_1} + C_\nu^2 M_{R_1} + D_\nu^2 M_{R_1} + \tilde{A}_\nu^2 M_{R_2} + A_\nu^2 M_{R_2} + B_\nu^2 M_{R_2}, \\
&\kappa'^2 = \kappa^2 - 4 M_{R_1} M_{R_2}\left[ \left( A_\nu \tilde{C}_\nu - \tilde{A}_\nu C_\nu \right)^2 + \left( A_\nu D_\nu - B_\nu C_\nu \right)^2 + \left( B_\nu \tilde{C}_\nu - \tilde{A}_\nu D_\nu \right)^2 \right].
\end{aligned}
\end{equation}
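The spectrum can be cross-checked numerically. Since $\widetilde{M}_\nu$ is a sum of two rank-one pieces built from the columns of $M_{D_\nu}$, it has rank two, and the nonzero masses follow from the standard $2\times2$ trace/determinant formula $m=(\kappa\pm\kappa')/(2M_{R_1}M_{R_2})$. The Python sketch below (placeholder inputs in arbitrary units) verifies this, as well as the vanishing of $m_{\nu_1}$:

```python
import numpy as np

# Placeholder Dirac entries and RH Majorana masses (arbitrary units);
# they only illustrate the structure of eqs. (Mnu) and (Mnulow).
A, At, C, Ct, B, D = 0.6, 0.4, 0.3, 0.5, 0.7, 0.2
M1, M2 = 1.0e3, 2.0e3

MD = np.array([[A, C], [-At, -Ct], [B, D]])
MR = np.diag([M1, M2])
Mnu = MD @ np.linalg.inv(MR) @ MD.T      # type-I seesaw, eq. (Mnulow)

masses = np.sort(np.abs(np.linalg.eigvalsh(Mnu)))
assert masses[0] < 1e-12                 # rank-2 matrix: m_nu1 = 0

# Closed-form nonzero masses: m = (kappa +/- kappa') / (2 M1 M2),
# with kappa' the square root of kappa^2 minus 4 M1 M2 times the
# Gram determinant of the two Dirac columns (Lagrange identity).
kappa = M2 * (A**2 + At**2 + B**2) + M1 * (C**2 + Ct**2 + D**2)
cross = (A*Ct - At*C)**2 + (A*D - B*C)**2 + (B*Ct - At*D)**2
kprime = np.sqrt(kappa**2 - 4 * M1 * M2 * cross)
m2, m3 = (kappa - kprime) / (2*M1*M2), (kappa + kprime) / (2*M1*M2)
assert np.allclose([m2, m3], masses[1:])
```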
\subsection*{Scalar mass spectrum}
Using eqs.~(\ref{ec:pot2hdm}) and (\ref{ec:potflavon}) and solving the tadpole equations,
the CP-even squared mass matrix becomes,
\begin{eqnarray}
M^2_{\text{CP-even}}= \left(
\begin{array}{cc}
[M^2]_{4\times4} & 0_{4\times2} \\
0_{2\times4} & [\bar{M}^2]_{2\times2}%
\end{array}
\right)
\end{eqnarray}
where
\begin{eqnarray} \label{eq:m013}
M^2= \left(
\begin{array}{cccc}
\lambda _1 v_1^2 & \left(\lambda _3+\lambda _4+\lambda _5\right) v_1 v_2 &
\lambda _6 v_1 v_{\sigma_1} & \lambda _8 v_1 v_{\sigma_2} \\
\left(\lambda _3+\lambda _4+\lambda _5\right) v_1 v_2 & \lambda _2 v_2^2 &
\lambda _7 v_2 v_{\sigma_1} & \lambda _9 v_2 v_{\sigma_2} \\
\lambda _6 v_1 v_{\sigma_1} & \lambda _7 v_2 v_{\sigma_1} & \lambda _{10}
v_{\sigma_1}^2 & \lambda _{12} v_{\sigma_1} v_{\sigma_2} \\
\lambda _8 v_1 v_{\sigma_2} & \lambda _9 v_2 v_{\sigma_2} & \lambda _{12}
v_{\sigma_1} v_{\sigma_2} & \lambda _{11} v_{\sigma_2}^2 \\
\end{array}
\right)
\end{eqnarray}
and
\begin{eqnarray} \label{eq:m022}
\bar{M}^2= \left(
\begin{array}{cc}
\frac{\mu_7^2 v_{\xi_2}}{2 v_{\xi_1}} & 2 \lambda _{13} v_{\xi_1} v_{\xi_2}-\frac{\mu_7^2}{2} \\
2 \lambda _{13} v_{\xi_1} v_{\xi_2}-\frac{\mu_7^2}{2} & \frac{\mu_7^2 v_{\xi_1}}{2 v_{\xi_2}}\\
\end{array}
\right).
\end{eqnarray}
From the last equation we find that consistency requires $\left\langle \xi_i\right\rangle\neq0$. Notice that
the flavon fields decouple when $\left\langle \sigma_i\right\rangle \gg v_{\rm EW}$
or simply if one takes $\lambda_{6,7,8,9} \ll 1$. In this case, the CP-even components of the
$\mathrm{SU(2)}$ scalar doublets do not mix with the scalar singlets and their masses are obtained
by diagonalizing the matrix
\begin{eqnarray} \label{eq:m012}
M^2_{\text{2HDM}}\sim \left(
\begin{array}{cc}
\lambda _1 v_1^2 & -\mu^2_{12}+\left(\lambda _3+\lambda _4+\lambda _5\right)
v_1 v_2 \\
-\mu^2_{12}+\left(\lambda _3+\lambda _4+\lambda _5\right) v_1 v_2 & \lambda
_2 v_2^2 \\
\end{array}
\right)
\end{eqnarray}
The masses of the CP-odd and the charged scalar are, respectively,
\begin{equation} \label{eq:m2A2}
m_A^2=\frac{\mu_{12}^2}{v_1 v_2}-\lambda _5 \left(v_1^2+v_2^2\right),
\end{equation}
and
\begin{equation} \label{eq:m2ch2}
m_{H^\pm}^2=\frac{\mu_{12}^2}{v_1 v_2}-\frac{\left(\lambda_{4}+\lambda_{5}\right)}{2}\left(v_1^2+v_2^2\right).
\end{equation}
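Returning to the $\xi$ sector, the role of the soft-breaking term $\mu_7$ in eq.~(\ref{eq:m022}) can be illustrated numerically. The Python sketch below (placeholder inputs, not fitted values) shows that for $\mu_7=0$ the matrix $\bar{M}^2$ has one negative eigenvalue, i.e. a tachyonic direction, while a suitable $\mu_7^2$ renders both squared masses positive:

```python
import numpy as np

# Placeholder inputs (GeV-like units) for the xi-sector mass matrix of
# eq. (m022); chosen only to illustrate the role of the soft term mu_7.
lam13, vx1, vx2 = 0.1, 500.0, 400.0

def Mbar2(mu7sq):
    """CP-even squared mass matrix of the xi sector, eq. (m022)."""
    off = 2 * lam13 * vx1 * vx2 - mu7sq / 2
    return np.array([[mu7sq * vx2 / (2 * vx1), off],
                     [off, mu7sq * vx1 / (2 * vx2)]])

# Without the soft term, one eigenvalue is negative (tachyonic direction).
eig0 = np.linalg.eigvalsh(Mbar2(0.0))
assert eig0[0] < 0 < eig0[1]

# A suitable mu_7^2 makes both squared masses positive.
eig1 = np.linalg.eigvalsh(Mbar2(1.0e5))
```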
Before concluding this section, let us briefly mention
that even though the $SU(2)$ scalar $H_1$ couples only to the SM up-type sector and $H_2$ to the down-type sector, the Glashow--Weinberg--Paschos theorem \cite{Glashow:1976nt,Paschos:1976ay} does not apply in this case, i.e. FCNCs might appear at tree level due to the mixing between SM and vector-like fermions. However,
we expect these FCNCs to be under control, since they will be proportional to the small mixing angles\footnote{For instance, we have numerically checked that the mixing angles between SM and vector-like quarks are at most of the order of $10^{-3}$, which is sufficiently small to suppress FCNCs induced by these mixings.} and further suppressed by the square of the heavy non-SM scalar masses. A thorough analysis in this regard is beyond the scope of this paper.
\section{Muon anomalous magnetic moment\label{section:Muon_g2}}
In this section we discuss the implications of our model for the muon anomalous magnetic moment. It is worth mentioning that the Yukawa interactions, $-y_E\overline{L}_{L_2}E_{R_2} H_{2}$ and
$-y_{E2}\overline{E}_{L_2}\ell_{R_2} \sigma_{2}$ in eq.~(\ref{Lyl2}), as well as the quartic scalar interaction $\lambda _{9}( H_{2}^{\dagger }H_{2}) (\sigma _{2}^{\ast }\sigma
_{2})$ in eq.~(\ref{ec:potflavon}), provide the dominant contributions to the muon anomalous magnetic moment. These contributions to $(g-2)_{\mu}$ arise from one-loop diagrams which involve the exchange of an electrically neutral CP-even scalar and the VLL $E_{2}$.
To simplify our analysis, we consider a benchmark close to the decoupling limit,
in which the two heavy CP-even physical scalars $S_{1}^{0}$ and $S_{2}^{0}$
are mainly composed of two orthogonal combinations of $\varphi_{R_2}$ and $\sigma_{R_2}$.
In this benchmark, the muon anomalous magnetic moment takes the form:
\begin{equation}
\Delta a_{\mu}\simeq \frac{y_{E}y_{E2}m_{\mu }^{2}}{8\pi ^{2}}\left[ J\left(
m_{E_{2}},m_{S_{1}^{0}}\right) -J\left( m_{E_{2}},m_{S_{2}^{0}}\right) %
\right] \sin \theta \cos \theta ,
\end{equation}
where $S_{1}^{0}\simeq \cos \theta \,\sigma_{R_2} +\sin \theta \,\varphi_{R_2}$,
$S_{2}^{0}\simeq -\sin \theta \,\sigma_{R_2} +\cos \theta \,\varphi_{R_2}$, and
$m_{E_{2}}$ is the mass of the VLL $E_{2}$. Furthermore, the
loop function $J\left( m_{E},m_{S}\right)$ has the following form \cite{Diaz:2002uk,Jegerlehner:2009ry,Kelso:2014qka,Lindner:2016bgg}
\begin{equation}
J\left( m_{E},m_{S}\right) =\int_{0}^{1} dx\frac{x^{2}\left( 1-x+\frac{m_{E}}{%
m_{\mu }}\right) }{m_{\mu }^{2}x^{2}+\left( m_{E}^{2}-m_{\mu }^{2}\right)
x+m_{S}^{2}\left( 1-x\right) }.
\end{equation}
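For orientation, $J$ and the resulting $\Delta a_\mu$ can be evaluated numerically. The Python sketch below (not the analysis code of this work; $m_{E_2}=500$ GeV is an illustrative choice within the benchmark $\theta=\pi/4$, $y_E=y_{E2}=0.2$, $M_{S_1}=1.5$ TeV, $M_{S_2}=2$ TeV) uses simple midpoint quadrature:

```python
from math import pi, sin, cos

m_mu = 0.10566  # muon mass in GeV

def J(mE, mS, n=20000):
    """Loop function above, integrated over the Feynman parameter x."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n  # midpoint rule
        num = x**2 * (1.0 - x + mE / m_mu)
        den = m_mu**2 * x**2 + (mE**2 - m_mu**2) * x + mS**2 * (1.0 - x)
        total += num / den
    return total / n

def delta_a_mu(yE, yE2, mE, mS1, mS2, theta=pi / 4):
    """One-loop Delta a_mu from the CP-even scalar / VLL exchange."""
    return (yE * yE2 * m_mu**2 / (8 * pi**2)
            * (J(mE, mS1) - J(mE, mS2)) * sin(theta) * cos(theta))

# Illustrative point: mE = 500 GeV, mS1 = 1500 GeV, mS2 = 2000 GeV.
da = delta_a_mu(0.2, 0.2, 500.0, 1500.0, 2000.0)
```

Since $J$ decreases with $m_S$, the difference $J(m_{E},m_{S_1^0})-J(m_{E},m_{S_2^0})$ is positive for $m_{S_1^0}<m_{S_2^0}$, giving a positive $\Delta a_\mu$ of roughly the same order as the measured anomaly.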
It is worth mentioning that in this model there exist other BSM contributions to the muon anomalous magnetic moment,
but they turn out to be subleading. For instance, the loop contributions mediated by heavy neutrinos and the $W$ gauge boson are strongly suppressed by the quadratic power of both the very tiny active-sterile neutrino mixing angle and the small effective Dirac neutrino Yukawa coupling. Notice that the smallness of this coupling is due to the fact that the neutrino Yukawa interactions are dimension-5 operators, eq.~(\ref{Lynu2}). Therefore, the Dirac neutrino Yukawas also suppress one-loop diagrams where neutrinos and electrically charged Higgs bosons are exchanged. Furthermore, following Refs.~\cite{Raby:2017igl,CarcamoHernandez:2019ydc,Dermisek:2020cod}, one can estimate that the $\left(g-2\right)_{\mu}$ contribution involving the mediation of the $Z$ gauge boson and heavy vector-like leptons is $\Delta a^{(Z)}_{\mu}\sim\frac{m_{\mu}m_E}{8\pi^{2}m_Z^{2}}\theta^2G_{loop}$. Hence, $\Delta a^{(Z)}_{\mu}\sim\mathcal{O}\left(10^{-11}\right)$ for $200$ GeV charged vector-like lepton masses and a SM-heavy vector-like lepton mixing angle satisfying $\theta\sim\mathcal{O}\left(10^{-3}\right)$.
Fig.~\ref{gminus2muonvsmE} shows the muon anomalous magnetic moment as a function of the VLL mass $M_{E_{2}}$. The solid horizontal lines correspond to the current upper and lower experimental bounds for the muon anomalous magnetic moment which are set by \cite{Abi:2021gix}
\begin{equation}
\label{eq:exp_values_g2}
\begin{aligned}
&(\Delta a_{\mu })_{\exp } =\left( 2.51\pm 0.59\right) \times
10^{-9}.
\end{aligned}
\end{equation}
In our numerical analysis we have considered the benchmark point $\theta =\pi/4$, $M_{S_{1}}=1.5$ TeV and $M_{S_{2}}=2$ TeV. Furthermore, we have set $y_{E}=y_{E2}=0.2$ and $y_{E}=y_{E2}=0.3$ for the black and blue curves, respectively. The mass of the charged exotic lepton $E_{2}$ has been varied in the range $0.2$ TeV $\leqslant M_{E_{2}}\leqslant 2$ TeV. Fig.~\ref{gminus2muonvsmE} shows that our model can successfully accommodate the current experimental anomaly $\Delta a_{\mu }$ within the considered mass range. One can see that $\Delta a_{\mu }$ requires vector-like states below the TeV scale. For this reason, in what follows, we will analyze how likely it is that these VLLs can be observed in upcoming LHC searches.
\begin{figure}[h]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\hspace{-6em}
\includegraphics[width=0.75\textwidth]{Diagrams/plotgminus2muon.pdf}
\caption{Muon anomalous magnetic moment as a function of the charged exotic lepton mass $M_{E_{2}}$. The black and blue curves correspond to $y_{E}=y_{E2}=0.2$ and $y_{E}=y_{E2}=0.3$, respectively. The horizontal magenta and orange lines correspond to the $2\sigma$ upper and lower bounds for the muon anomalous magnetic moment, respectively.}
\label{gminus2muonvsmE}
\end{figure}
\section{Exotic fermionic signatures: Analysis setup}\label{section:VLLs_collider}
As shown in Sec.~\ref{sec:model}, two generations of VLLs and four VLQs are present in the model. Of the VLQs, two are of the up-type whereas the other two are of the down-type. Such states have masses at the TeV scale, within the range of future collider runs at the LHC. In this section, we focus on a discussion of potential signatures characteristic of these particles as well as on the numerical techniques that we propose to probe them. The analysis is boosted via the implementation of neural networks (NNs), whose hyperparameters are optimized through the use of genetic algorithms, based on previous work by some of the authors \cite{Freitas:2020ttd}. We focus on pair-production topologies for both VLQs and VLLs, while single production is considered only for the VLLs.
The model is implemented at the Lagrangian level in \texttt{SARAH} \cite{Staub:2013tta}, from which we generate the relevant \texttt{UFO} \cite{Degrande:2011ua} Python code that interfaces with Monte Carlo simulators. In particular, we employ \texttt{MadGraph} (MG5) \cite{Alwall:2014hca} for the simulation of particle collisions at parton level for both signal and background topologies. We add hadronization and showering effects with \texttt{Pythia8} \cite{Sjostrand:2014zea} and use \texttt{Delphes} \cite{deFavereau:2013fsa} for fast detector simulation. Angular and kinematic distributions are extracted from this last step with the help of \texttt{ROOT} \cite{Brun:1997pa} and are used as inputs to the NNs for signal/background separation and the computation of statistical significance. All parton-level events are generated for proton-proton collisions at the LHC, for a centre-of-mass energy $\sqrt{s} = 14$ TeV and with the \texttt{nn23lo1} parton distribution function, which automatically fixes the strong coupling, $\alpha_s$, and its evolution. We generate 250\,000 events for each individual topology (background and signal). We employ the MLM matching scheme \cite{Hoche:2006ph} for topologies with at least two jets in the final state.
VLLs have long been motivated by various SM extensions and Grand Unified Theory (GUT) frameworks (see, e.g., \cite{Raby:2017igl,Garcia:2015sfa,Bhattacherjee:2017cxh}) and, as shown in Sec.~\ref{section:Muon_g2}, are important in addressing the muon $(g-2)$ anomaly, whose relevance has recently come to the forefront of new physics explorations \cite{Abi:2021gix}\footnote{For further constraints on VLLs we refer the reader to \cite{Crivellin:2020ebi}.}. Despite the strong theoretical motivations, very limited collider searches have been performed so far, with the most stringent constraints coming from CMS \cite{Sirunyan:2019ofn} for doublet VLLs that strongly couple to the tau lepton. Older searches at LEP \cite{Achard:2001qw} constrain these exotic states to be heavier than $101.2~\mathrm{GeV}$. Therefore, there is still plenty of parameter space left to be explored and, as such, phenomenological studies like this one may help pinpoint regions of the model parameter space to look for at collider experiments.
For the VLL search, we consider topologies identical to some of those studied in our previous work \cite{Freitas:2020ttd}. These include pair production in the t-channel via vector-boson fusion (VBF) processes (see Fig.~\ref{fig:VBF-events}), characterized by two light jets in the forward region originating from the colliding protons. We also include contributions from pair production via the exchange of a virtual photon or a $Z^0$ boson, which we refer to as ``ZA'' in what follows (see Fig.~\ref{fig:ZA-events}). Both topologies are characterized by two leptons and large missing transverse energy (MET) in the final state. The single-production diagram is characterized by a single lepton and large MET in the final state (see Fig.~\ref{fig:VLBSM-events}), which we dub the ``VLBSM topology''.
\begin{figure*}[h!]
\centering
\captionsetup{justification=raggedright}
\subfloat[]{{\includegraphics[width=0.28\textwidth]{Diagrams/VBF_channel.pdf} }}
\subfloat[]{{\includegraphics[width=0.28\textwidth]{Diagrams/VBF_channel_1.pdf} }} \\
\caption{Leading-order Feynman diagrams for the VBF topologies. Original quarks from the colliding protons are indicated as $q$ and $\bar{q}$, while $E_2$ represents the lightest VLL. Besides the forward jets, we have purely leptonic final states originating from $W^\pm$ decays, with one anti-muon, $\mu^+$, and an electron, $e^-$, as well as their associated neutrinos. $\nu_\ell$ denotes the SM neutrinos.
\label{fig:VBF-events}}
\end{figure*}
\begin{figure}[h!]
\centering
\captionsetup{justification=raggedright}
\includegraphics[width=0.34\textwidth]{Diagrams/JL_channel.pdf}
\caption{Leading-order Feynman diagram for the ZA topologies. The same nomenclature as in Fig.~\ref{fig:VBF-events} applies here.}
\label{fig:ZA-events}
\end{figure}
\begin{figure}[h!]
\centering
\captionsetup{justification=raggedright}
\includegraphics[width=0.34\textwidth]{Diagrams/VLBSM_channel.pdf}
\caption{Leading-order Feynman diagrams for the VLBSM topologies. The same nomenclature as in Fig.~\ref{fig:VBF-events} applies here.}
\label{fig:VLBSM-events}
\end{figure}
For these processes, we consider the main irreducible backgrounds as follows:
\begin{enumerate}
\item For ZA topologies, main backgrounds include top quark pair production, $t\bar{t}$, with two b-jets and fully leptonic decays for the W bosons. $t\bar{t}$ plus $Z^0$ production is also considered, with the $Z^0$ decaying in the fully invisible channel (two neutrinos) or into two leptons;
\item For VBF topologies, diboson $W^+W^-$ production is considered, with subsequent decays into leptons. We also take into account $t\bar{t}$ pair production plus one or two jets, that is, the tops decay into leptons and are accompanied by one or two light jets;
\item For VLBSM topologies, we consider all production channels with a single lepton in the final state, that is, $p p \rightarrow \ell \nu_\ell$, with up to two light jets.
\end{enumerate}
To maximize the signal region and to reduce the main irreducible backgrounds, specific kinematic cuts are imposed in \texttt{ROOT}. In particular, we consider
\begin{enumerate}
\item For VBF and ZA topologies, we require at least two lepton candidates with opposite flavour and opposite charge: one anti-muon originating from $W^+$ decays and one electron candidate originating from $W^-$. For VLBSM, we require only one lepton, an electron.
\item Common to all topologies, we impose kinematic constraints on the final charged-lepton states, with $p_{T} > 25$ GeV and $|\eta| \leq 2.5$. A minimum MET is also required, with $\mathrm{MET} > 15$ GeV.
\item For jet reconstruction, we use the Cambridge/Aachen algorithm \cite{CMS:2009lxa} with cone radius $\Delta R = 1.0$ and kinematic constraints $p_T > 35$ GeV and $|\eta| \leq 2.5$. For jets originating from bottom quarks, we consider a tight working point with 90\% b-tagging efficiency.
\end{enumerate}
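The lepton and MET pre-selection above can be sketched as a simple event filter. The event structure below (a dictionary of lepton candidates plus MET) is purely illustrative, since the actual analysis operates on \texttt{ROOT} trees:

```python
# Minimal sketch of the VBF/ZA pre-selection described above. The event
# structure (a dict of lepton candidates plus MET) is hypothetical; the
# real analysis runs over ROOT trees.
def passes_vll_preselection(event):
    """Require one e- and one mu+ with pT > 25 GeV and |eta| <= 2.5,
    plus MET > 15 GeV."""
    leptons = [l for l in event["leptons"]
               if l["pt"] > 25.0 and abs(l["eta"]) <= 2.5]
    has_electron = any(l["pdg"] == 11 for l in leptons)    # e-
    has_antimuon = any(l["pdg"] == -13 for l in leptons)   # mu+
    return has_electron and has_antimuon and event["met"] > 15.0

event = {"leptons": [{"pdg": 11, "pt": 40.0, "eta": 1.1},
                     {"pdg": -13, "pt": 60.0, "eta": -0.4}],
         "met": 85.0}
print(passes_vll_preselection(event))  # True
```

For the VLBSM channel, the same filter would instead require a single electron candidate.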
The reconstruction procedure via the use of invisible particles in the final states follows the same approach as thoroughly described in \cite{Freitas:2020ttd}. Dimension-full and dimension-less variables are extracted from the final states to train our Deep Learning models in different reference frames, including the laboratory frame, the $W$-boson frames and the $\bar{E}_2E_2$ frame. All chosen observables for the double-production topologies are shown in Table~\ref{tab:vars_Zp}, while in Table~\ref{tab:vars_Zp_VLBSm} we present the variables used for the VLBSM topology.
\begin{table*}[ht!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.75\textwidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
& Dimension-full & \multicolumn{2}{|c|}{Dimensionless} \\
\hline
\hline
\makecell{Lab. \\
frame} & \makecell{$p_T(e^-)$, $p_T(\mu^+)$,$p_T(E_2)$ \\
$p_{T}(\bar{E}_2)$, $M({E_2})$, $M(\bar{E}_2)$ \\
$M_T(W^-)$, $M_T(W^+)$, $p_T(W^+)$, \\
$p_T(W^-)$, MET} &
\makecell{
$\cos(\theta_{\bar{\nu}_e e})$, $\cos(\theta_{\bar{\nu}_\mu \mu^+})$, \\
$\cos(\theta_{W^- W^+})$, \\ $\cos(\Delta \phi)$,
$\cos(\Delta \theta)$, \\ $\eta(e^-)$, $\eta({\mu^+})$, $\eta({E_2})$, $\eta(\bar{E}_2)$ \\ $\eta(W^+)$, $\eta(W^-)$} &
\makecell{$\Delta R(e, \bar{\nu_e})$, $\Delta R(\mu^+, \nu_{\mu^+})$}\\
\hline
\makecell{\\[-0.5em]$W^-$ \\
frame} & \makecell{$p_T(e^-)$, $p_T(E_2)$}& \makecell{
$\cos(\theta_{\bar{\nu}_e e})$, \\
$\eta(e^-)$, $\eta({E_2})$} &
\makecell{}\\
\hline
\makecell{\\[-0.5em]$W^+$ \\
frame} &\makecell{$p_T(\mu^+)$, $p_T(\bar{E}_2)$} & \makecell{
$\cos(\theta_{\nu_\mu \mu^+})$, \\
$\eta(\mu^+)$, $\eta(\bar{E}_2)$} &
\makecell{}\\
\hline
\makecell{\\[-0.5em] $E_2\bar{E_2}$ \\
frame} &\makecell{} & \makecell{
$\cos(\Delta \Phi)$,
$\cos(\Delta \Theta)$} &
\makecell{}\\
\hline
\hline
\end{tabular}}
\caption{Angular and kinematic observables selected for the study of the pair-production topologies, for different frames of reference: the laboratory frame (first row), the $W^-$ and $W^+$ frames (second and third rows, respectively) and the vector-like frame $E_2\bar{E}_2$ (last row). $\theta_{i,j}$ denotes the angle between different particles, either in the final state or reconstructed. In the $E_2\bar{E}_2$ frame, the angles $\Delta\Phi$ and $\Delta\Theta$ correspond to the azimuthal and polar angles formed by the decay planes of the two $W$ bosons (see \cite{Freitas:2020ttd}).}
\label{tab:vars_Zp}
\end{table*}
\begin{table*}[ht!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\begin{tabular}{|c|c|c|}
\hline
& Dimension-full & {Dimensionless} \\
\hline
\hline
\makecell{Lab. \\
frame} & \makecell{$p_T(e^-)$,$M_T(W^-)$, \\
$p_T(W^-)$, MET} &
\makecell{
$\cos(\theta_{e^-})$, $\cos(\theta_{\bar{\nu}_e e^-})$, \\
$\cos(\theta_{W^-})$, $\eta(e^-)$, $\eta(W^-)$, $\phi(e^{-})$} \\
\hline
\hline
\end{tabular}
\caption{Angular and kinematic observables selected to study the single-production channel (VLBSM). We compute observables in the laboratory frame. The same nomenclature as in Table~\ref{tab:vars_Zp} for angles applies here.}
\label{tab:vars_Zp_VLBSm}
\end{table*}
A similar analysis is also built for the VLQs. The prediction of VLQs is not exclusive to the model under consideration. In fact, the existence of such states has been predicted by a series of distinct models in previous literature, such as $\mathrm{E_6}$-inspired string and GUT models \cite{Hebbar:2016gab,HEWETT1989193} or other extensions of the SM \cite{Benbrik:2015fyz,Hernandez:2021uxx}. However, unlike for VLLs, from an experimental point of view there is also a good history of searches, in particular at the LHC (see, for example, \cite{Aaboud:2018ifs,Aaboud:2018zpr,Sirunyan:2019sza}), where the current constraints restrict the VLQ masses to lie between 690 GeV and 1.85 TeV (for current constraints, as of March 2021, see the summary plots in \cite{ATLAS_twiki_VLQs}). See also \cite{Roy:2020fqf,Araque:2015cna}, where possible interpretations of current VLQ searches at the LHC are undertaken, and \cite{Romao:2019dvs,Romao:2020ocr} for a discussion of Deep-Learning-based methods that can be applied in direct VLQ searches. Naturally, the constraints depend heavily on the assumptions considered, mainly when it comes to couplings to the SM states. In this regard, the majority of searches focus on dominant couplings with the top quark, where the primary decay channel $\text{VLQ} \rightarrow t(b)W$ is studied, characterized by a final b-jet and two charged leptons. Of course, such an assumption is not the most general one, as there is no reason for mixings with other SM quarks not to exist. For the purpose of this work, the proposed search topology is focused on a channel with light jets as final states, as seen in Fig.~\ref{fig:VLQ-events}.
\begin{figure}[h!]
\centering
\captionsetup{justification=raggedright}
\includegraphics[width=0.34\textwidth]{Diagrams/VLQ_channel.pdf}
\caption{Leading-order Feynman diagram for the VLQ pair-production via gluon-gluon fusion. $T_1$ represents the lightest up-type VLQ and $u/\bar{u}$ indicate light up-type quarks (up or charm quarks). This diagram provides a larger contribution than that with $T_1 u Z^0$ vertices. We refer to Appendix \ref{app:feynman} for further details.}\label{fig:VLQ-events}
\end{figure}
Identical cuts to the VLL scenario are imposed, with two main differences. The more obvious one is that we now require at least four lepton candidates (an anti-muon/muon pair and a positron/electron pair) and at least two light jets. We also raise the minimum transverse momentum for the jet candidates to $p_T > 50$ GeV. The reason for this change is that we plan to probe higher masses ($m > 1.8$ TeV), and therefore the final jets emerging from VLQs will be highly boosted and energetic when compared to SM processes, helping to reduce the number of relevant backgrounds. Without any missing energy, both VLQs can be more easily reconstructed from the leptons and light jets. For irreducible backgrounds, we consider the same $t\bar{t} + Z^0$ background as for the ZA topology, with the $Z^0$ decaying into two charged leptons. We also include all production channels with the same final states, $p p \rightarrow e^+ e^- \mu^+\mu^- j j$, where $j$ is a light jet. Such a process includes the main diboson production backgrounds. Similarly, dimension-full and dimension-less variables from final and reconstructed states are used for the NNs' training in three distinct reference frames: the laboratory frame, the $j_1 + \gamma$ frame and the $j_2 + \gamma$ frame, where we define $j_1$ as the leading jet (greatest $p_T$) and $j_2$ as the sub-leading jet. All distributions are indicated in Table~\ref{tab:vars_VLQ}.
\begin{table*}[ht!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
& Dimension-full & \multicolumn{2}{|c|}{Dimensionless} \\
\hline
\hline
\makecell{Lab. \\
frame} & \makecell{$M(e^+,e^-)$, $M(\mu^+,\mu^-)$, $M(e^-,\mu^-)$, \\[0.2em] $M(j_1,j_2)$, $p_T(e^-)$, $p_T(e^+)$, $p_T(\mu^+)$, \\[0.2em] $p_T(\mu^-)$, $p_T(j_n)$, $M(e^+,e^-,j_n)$,\\[0.2em] $M(\mu^+,\mu^-,j_n)$} &
\makecell{$\eta(e^-)$, $\phi(e^-)$, $\eta(e^+)$, $\phi(e^+)$, \\[0.2em] $\eta(\mu^-)$, $\phi(\mu^-)$, $\eta(\mu^+)$, $\phi(\mu^+)$, \\[0.2em] $\eta(j_n)$, $\phi(j_n)$, \\[0.2em] $\cos(\theta_{e^+ e^-})$, $\cos(\theta_{\mu^+ \mu^-})$, $\cos(\theta_{j_1 j_2})$,\\[0.2em] $\cos(\theta_{e^- \mu^-})$, $\cos(\theta_{e^- \mu^+})$, $\cos(\theta_{e^- j_n})$,\\[0.2em] $\cos(\theta_{e^+ j_n})$, $\cos(\theta_{\mu^- j_n})$, $\cos(\theta_{\mu^+ j_n})$, \\[0.2em] $\Delta\phi(e^+,e^-)$, $\Delta\phi(e^-,j_n)$, $\Delta\phi(e^+,j_n)$, \\[0.2em] $\Delta\phi(\mu^+,\mu^-)$, $\Delta\phi(\mu^-,j_n)$, $\Delta\phi(\mu^+,j_n)$, \\[0.2em] $\Delta\phi(e^-,\mu^-)$, $\Delta\phi(e^-,\mu^+)$, $\Delta\phi(e^+,\mu^-)$, \\[0.2em] $\Delta\phi(e^+,\mu^+)$} &
\makecell{$\Delta R(e^+, e^-)$, $\Delta R(e^-, j_n)$, $\Delta R(e^+, j_n)$ \\[0.2em] $\Delta R(\mu^+, \mu^-)$, $\Delta R(\mu^-, j_n)$, $\Delta R(\mu^+, j_n)$, \\[0.2em] $\Delta R(e^-,\mu^-)$, $\Delta R(e^-,\mu^+)$ , $\Delta R(e^+,\mu^-)$, \\[0.2em] $\Delta R(e^+,\mu^+)$}\\
\hline
\makecell{\\[-0.5em]$j_1 + \gamma$ \\
frame} & \makecell{}& \makecell{
$\cos(\theta_{\mu^- j_1})$, $\cos(\theta_{\mu^+ j_1})$, $\cos(\Delta\Phi_1)$} &
\makecell{}\\
\hline
\makecell{\\[-0.5em]$j_2 + \gamma$ \\
frame} &\makecell{} & \makecell{
$\cos(\theta_{\mu^- j_2})$, $\cos(\theta_{\mu^+ j_2})$, $\cos(\Delta\Phi_2)$} &
\makecell{}\\
\hline
\hline
\end{tabular}}
\caption{Angular and kinematic observables selected for the study of VLQ pair-production, for different frames of reference: the laboratory frame (first row) and the $j_1 + \gamma$ and $j_2 + \gamma$ frames (second and third rows, respectively). The same nomenclature as in Table~\ref{tab:vars_Zp} for angles applies here. $\Delta\Phi_{1,2}$ corresponds to the azimuthal angle formed by the decay planes of the two virtual photons. To simplify the notation, we define $j_n = j_1, j_2$.}
\label{tab:vars_VLQ}
\end{table*}
For both VLQs and VLLs, the main objective is to combine the kinematic distributions into multi-dimensional distributions that are then fed into the NN, whose job is to solve a classification task, that is, to distinguish between background and signal events so that we can evaluate the statistical significance. To obtain optimal results, one must take special care with the employed architecture, the number of layers and nodes, the activation functions, etc. The selection of these parameters is often based on arbitrary decision-making, by choosing the set of hyperparameters that have been shown to guarantee the best results in earlier analyses. Such a method cannot be generalised to other applications.
For the purpose of this work, we implement the same techniques as in \cite{Freitas:2020ttd}, via a genetic algorithm that chooses the parameters that best improve a given significance. A diagrammatic representation of the algorithm can be seen in Fig.~\ref{fig:EVO-algo}. The entire process starts by providing a list of possible input parameters from which the algorithm chooses. From this list, the algorithm picks, in a random fashion, a series of parameters from which it builds an arbitrary number of NNs. Once all the networks have been constructed, we train them for a fixed number of epochs. From the trained networks, we choose the top networks, that is, the ones that best maximize a given metric. It is from this point that the evolutionary part of the algorithm kicks in. The idea is inspired by the process of natural selection. From the best networks, we create Father-Mother pairs, where 50\% of the father's traits and 50\% of the mother's traits are used to construct new NNs, which we dub ``daughters''. We also impose a probability of mutation, $\mathcal{P}(M)$, meaning that after the daughters have been built, each of their traits has a non-zero probability of changing to another, leading to the creation of ``mutated daughters''. We then train these new daughter networks, and the loop repeats for a given number of generations. At the end, we select the network with the best performance for a given metric, in our case the Asimov significance.
\begin{figure}[th!]
\centering
\tikzstyle{Rectangulo} = [draw, rectangle, fill=black!30, text width=6em, text centered, minimum height=2em]
\tikzstyle{Diamante} = [draw, diamond, fill=black!30, text width=6em, text badly centered, inner sep=0pt]
\tikzstyle{Linha} = [draw, -latex']
\resizebox{0.32\textwidth}{!}{\begin{tikzpicture}[node distance = 1.5cm, auto]
\node [Rectangulo, rounded corners] (step1) {\scriptsize Initiate NN models};
\node [Diamante, below of=step1,node distance=3.5cm] (step2) {\scriptsize Top 5 selection: Act as F+M};
\node [Rectangulo, rounded corners, below of=step2, node distance=3.5cm] (step3) {\scriptsize Select best model};
\node [Rectangulo, rounded corners, above left of=step2, node distance=2.0cm, above left=0.4cm, left=0.8cm] (step4) {\scriptsize New NN models ``Daughters''};
\node [Rectangulo, rounded corners, below left of=step2, node distance=2.0cm, below left=0.4cm, left=0.8cm] (step5) {\scriptsize ``Mutated'' NN models \\ ``Daughters''};
\path [Linha] (step1) -- node [left] {\scriptsize 200 epochs} (step2);
\path [Linha] (step2) -- node [left] {\scriptsize After 5 gen.} (step3);
\path [Linha] (step2) -- node [above=0.50cm, right=-0.65cm, rotate=-26] {\scriptsize F+M traits} (step4);
\path [Linha] (step4) -- node [left, rotate=90, below=-0.3cm, left=-0.9cm] {\scriptsize $\mathcal{P}(M) = 20\%$}(step5);
\path [Linha] (step5) node [above=0.90cm , right=1.50cm, rotate=26] {\scriptsize train} -- (step2);
\path [Linha] (step5) node [below=-0.40cm, right=1.50cm, rotate=26] {\scriptsize daughters} -- (step2);
\end{tikzpicture}}
\caption[]{Diagram of the iterations involved in the evolutionary algorithm as used in this work.}
\label{fig:EVO-algo}
\end{figure}
We choose the same set of hyperparameters as in our previous work \cite{Freitas:2020ttd}, which we detail as follows:
\begin{itemize}
\item Number of hidden layers: 1 to 5;
\item Number of nodes per layers: 256, 512, 1024 or 2048;
\item Initialisers: '\texttt{normal}', '\texttt{he normal}' and '\texttt{he uniform}';
\item \texttt{L2} regulariser, with penalties $1\times 10^{-3}$, $1\times 10^{-5}$ or $1\times 10^{-7}$;
\item Activation functions: '\texttt{ReLU}', '\texttt{eLU}', '\texttt{tanh}' and '\texttt{sigmoid}';
\item Optimisers: '\texttt{Adam}', '\texttt{sgd}', '\texttt{AdaMax}' and '\texttt{NAdam}'.
\end{itemize}
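The selection-crossover-mutation loop described above can be sketched in a few lines of Python. The sketch below is schematic: the population size is a placeholder, and the fitness function passed to it is a toy stand-in, since in the actual analysis each genome parametrizes a trained Keras network scored by its Asimov significance:

```python
import random

# Hyperparameter space taken from the list above (names are indicative).
SPACE = {
    "layers": [1, 2, 3, 4, 5],
    "nodes": [256, 512, 1024, 2048],
    "init": ["normal", "he_normal", "he_uniform"],
    "l2": [1e-3, 1e-5, 1e-7],
    "activation": ["relu", "elu", "tanh", "sigmoid"],
    "optimizer": ["adam", "sgd", "adamax", "nadam"],
}

def random_genome():
    return {key: random.choice(options) for key, options in SPACE.items()}

def crossover(father, mother):
    # Each trait is inherited from the father or the mother with equal
    # probability (the 50/50 rule described in the text).
    return {key: random.choice([father[key], mother[key]]) for key in SPACE}

def mutate(genome, p_mut=0.2):
    # With probability P(M) = 20%, a trait is resampled from the space.
    return {key: random.choice(SPACE[key]) if random.random() < p_mut else value
            for key, value in genome.items()}

def evolve(fitness, population=20, top=5, generations=5):
    pool = [random_genome() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        parents = pool[:top]               # top networks act as F+M
        daughters = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(population - top)]
        pool = parents + daughters
    return max(pool, key=fitness)
```

In the actual analysis, `fitness` would train the network encoded by a genome for 200 epochs and return the inverse of the Asimov loss.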
The networks are built in \texttt{Keras} \cite{chollet2015keras} with \texttt{TensorFlow 2.0} \cite{Abadi:2016kic} as back-end. Some architectural choices remain fixed during the evolution of the genetic algorithm. Namely, the final output layer works as a prediction layer where we funnel the data into a vector with dimension $\text{dim}N = 1 + N_b$, with $N_b$ the number of backgrounds. Each entry corresponds to the probability of the event being a signal or a background. For example, consider a vector $(S,B1,B2) = (0.98, 0.01, 0.01)$. An output of this form indicates that the network classifies this event as signal with probability $\mathcal{P}_S = 0.98$, whereas the backgrounds have probabilities $\mathcal{P}_{B_{1,2}} = 0.01$. The inputs of the networks are also identical, with normalized and balanced distributions of kinematic data. Normalizing the data is important for training due to the potentially high variability in the numerical values. By normalizing the datasets, we mitigate numerical errors when computing gradients during backpropagation, which in turn allows for faster learning and improved performance \cite{589532,LeCun2012}. The datasets are also balanced, as the imposition of cuts on the kinematics of the final states reduces the number of entries for both the background and signal classes. Such an unbalanced nature may lead to over-fitting problems and poor generalization to validation datasets. We use the \texttt{SMOTE} algorithm \cite{Chawla_2002} to balance the data, by oversampling the minority classes. We also employ a cyclic learning rate during the training, with an initial value of 0.01 and a maximum allowed value of 0.1. A fixed batch size of 32544 is considered.
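The two fixed ingredients, column-wise input normalization and the interpretation of the $1 + N_b$ prediction vector, can be illustrated with a short numpy sketch (the helper names are ours, not part of the actual pipeline):

```python
import numpy as np

def standardize(x):
    """Column-wise standardization of the kinematic inputs, used to
    stabilize the gradients during backpropagation."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def classify(probabilities):
    """Interpret the output vector of dimension 1 + N_b: entry 0 is the
    signal probability, the remaining entries are the backgrounds."""
    labels = ["signal"] + [f"B{i}" for i in range(1, len(probabilities))]
    return labels[int(np.argmax(probabilities))]

print(classify(np.array([0.98, 0.01, 0.01])))  # signal
```

The example vector $(0.98, 0.01, 0.01)$ reproduces the classification discussed in the text.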
In this work we are interested in models that provide us with the best significance. As such, we employ the Asimov significance, modified to work as the loss function. In our case we define our loss as $1/(\mathcal{Z_A} + \epsilon)$, which we minimize\footnote{This methodology was first proposed by Adam Elwood and Dirk Krücker in \cite{Elwood:2018qsr}.}, with
\begin{equation}
\label{eq:Asimov_sig}
\mathcal{Z_A} = \Bigg[2\Bigg((s + b)\ln\Bigg(\frac{(s+b)(b+\sigma_b^2)}{b^2 + (s+b)\sigma_b^2}\Bigg) -\frac{b^2}{\sigma_b^2}\ln\Bigg(1+\frac{\sigma_b^2 s}{b(b+\sigma_b^2)}\Bigg)\Bigg)\Bigg]^{1/2},
\end{equation}
where $s$ is the number of signal events, $b$ the number of background events and $\sigma_b^2$ is the variance of the background events. Note that in the limit of large backgrounds, $\mathcal{Z}_A \approx s/\sqrt{b}$.
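This expression translates directly into a few lines of Python. The sketch below implements the standard Asimov form and verifies the quoted $s/\sqrt{b}$ limit numerically:

```python
import math

def asimov_significance(s, b, sigma_b):
    """Asimov significance Z_A for s signal events, b background events
    and absolute background uncertainty sigma_b (variance sigma_b**2)."""
    var = sigma_b ** 2
    t1 = (s + b) * math.log((s + b) * (b + var) / (b * b + (s + b) * var))
    # log1p avoids precision loss when the argument is tiny
    t2 = (b * b / var) * math.log1p(var * s / (b * (b + var)))
    return math.sqrt(2.0 * (t1 - t2))

# For large background and small relative uncertainty, Z_A -> s/sqrt(b):
s, b = 10.0, 10000.0
print(asimov_significance(s, b, sigma_b=1.0))  # ~0.1 = s/sqrt(b)
```

With a 1\% systematic error ($\sigma_b = 0.01\,b$) this function reproduces the $\mathcal{Z}_A$ metric used below; lowering $\sigma_b$ to $10^{-3}\,b$ gives the more lenient variant.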
\section{Exotic fermionic signatures: Results}\label{section:Results}
In this section we discuss the results obtained for collider signatures of exotic fermions in the context of the $\mathcal{Q}_6$ flavored multiscalar model under consideration. Due to their distinct nature, the mass range that the LHC can probe for the model's VLLs differs from that of the VLQs. In particular, and based on the current exclusion bounds, we focus on the following two sets,
\begin{equation}\label{eq:mass_ranges}
m_{E_2}\in [200,800] \hphantom{.}\mathrm{GeV}, \quad m_{T_1}\in [2.2,4] \hphantom{.}\mathrm{TeV},
\end{equation}
where the mass range chosen for $m_{E_2}$ was based on the $\Delta a_{\mu }$ analysis performed above. Note that both the VLL and VLQ decay widths are automatically computed by \texttt{MadGraph} for each mass.
In all studied signal events the internal vertices are gauge-interacting, always involving couplings with the SM vector bosons ($Z^0$, $\gamma$, $W^\pm$ or $g$). These do indeed provide the dominant contributions and their strength is well known. However, the fermion-mixing effects must be considered and typically provide a suppression factor. This is what happens in flavor non-diagonal scenarios as is the case of the model that we study in this article. For the purpose of this work, we assume \textit{vector-to-chiral} fermion mixing of order $\mathcal{O}(10^{-2}-10^{-3})$ according to the structures specified in section~\ref{sec:model}, which in turn corresponds to the benchmark point $y_E = y_{E_2} =0.2$ shown in Fig.~\ref{gminus2muonvsmE}. For example, the lepton mixture is relevant when probing the $E_2\nu_\ell W$ vertex via an extended Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing. In particular, we fix the PMNS matrix in such a way that the block that mixes the charged chiral leptons and SM-like neutrinos is phenomenologically consistent \cite{Zyla:2020zbs}. The same is done in the quark sector with an extended version of the Cabibbo-Kobayashi-Maskawa (CKM) matrix.
Using the cuts detailed in the previous section we can estimate the production cross-section for each signal/background topology. For the VLL channel, fixing $m_{E_2} = 200~\mathrm{GeV}$ as well as the mixing matrix in eq.~(\ref{matrixdiagonalization}):
\begin{equation}\label{eq:lepton_mixing}
U^e_L = \begin{bmatrix}
-0.999997 & 0.00122671 & -0.00196369 & -2.41219\times 10^{-6} & -9.55935\times 10^{-6} \\
0.00123261 & 0.999993 & -0.00300585 & -0.00200382 & 1.1783\times 10^{-8} \\
0.00195999 &-0.00300821 & -0.999994 & 0.0000294637 & 9.03987\times 10^{-8} \\ -1.17835\times 10^{-8} & 0.0080039 & 0.0000234357 & 0.999997 & 0.00123275 \\
9.55951\times 10^{-6} & 2.4701\times 10^{-6} & -4.27716\times 10^{-8} & 0.00123275 & -0.999999
\end{bmatrix}
\end{equation}
we have obtained,
\begin{equation}\label{eq:production_xsec}\nonumber
\begin{aligned}
&\text{VLBSM signal:}\quad \sigma = 1.32\times 10^{-4} \hphantom{.}\mathrm{fb};\\
&\text{ZA signal:}\quad \sigma = 6.77\times 10^{-4} \hphantom{.}\mathrm{fb};\\
&\text{VBF signal:}\quad \sigma = 1.77\times 10^{-4} \hphantom{.}\mathrm{fb};\\
&pp\rightarrow e^-\bar{\nu}_e:\quad \sigma = 1.96\times 10^{6} \hphantom{.}\mathrm{fb}; \\
&pp\rightarrow e^-\bar{\nu}_e (j,jj):\quad \sigma = 8.02\times 10^{5} \hphantom{.}\mathrm{fb};\\
&t\bar{t}:\quad \sigma = 1.21\times 10^{3} \hphantom{.}\mathrm{fb};\\
&t\bar{t} (j,jj):\quad \sigma = 2.39\times 10^{3} \hphantom{.}\mathrm{fb};\\
&W^+W^-:\quad \sigma = 1.63\times 10^{2} \hphantom{.}\mathrm{fb};\\
&t\bar{t}Z^0(e^-e^+):\quad \sigma = 0.18 \hphantom{.}\mathrm{fb};\\
&t\bar{t}Z^0(\bar{\nu}_\ell\nu_\ell):\quad \sigma = 0.20 \hphantom{.}\mathrm{fb}
\end{aligned}
\end{equation}
and, as one notices, all backgrounds sit well above the expected cross sections for the signal events. On the other hand, VLQ pair-production reveals the opposite behaviour. Considering the scenario where $m_{T_1} = 2.2$ TeV and the mixing matrix in eq. (\ref{matrixdiagonalization}):
\begin{equation}\label{eq:quark_mixing}
U^u_L = \begin{bmatrix}
-1. & 0.000177353 & -0.000313111 & -4.77474\times10^{-8} & -9.09229\times10^{-6} \\
0.0003595 & -0.99997 & 0.005294 & -0.0034525 & 3.26836\times10^{-9} \\
-0.00001530 & 0.005294 & 0.999968 & 0.003161 & -1.07557\times10^{-7} \\ -9.09229\times10^{-6} & -9.64642\times10^{-7} & -6.00768\times10^{-7} & 0.00039356 & 1.\\
3.2693\times10^{-9} & -0.00026828 & -0.0016711 & 0.999995 & -0.0003935
\end{bmatrix},
\end{equation}
the cross sections for this analysis read as,
\begin{equation}\label{eq:production_xsec_VLQs}\nonumber
\begin{aligned}
& \text{VLQ signal:}\quad \sigma = 5.09 \hphantom{.}\mathrm{fb};\\
&pp\rightarrow e^-e^+\mu^+\mu^-j j:\quad \sigma = 0.16\hphantom{.}\mathrm{fb}; \\
&t\bar{t}Z^0(\mu^+\mu^-):\quad \sigma = 1.23\times 10^{-2} \hphantom{.}\mathrm{fb}\\
\end{aligned}
\end{equation}
where we note that for the $t\bar{t}Z^0(\mu^+\mu^-)$ background, the electron and positron originate from $W$ decays. As one can clearly see, the signal production cross-section sits above the main irreducible backgrounds. This remains true up until $m_{T_1} \sim 3.8$ TeV, after which the suppression coming from the VLQ mass and its decay width becomes large enough to yield an increasingly smaller cross-section. The noticeable difference between the VLQ and VLL sectors can be mainly attributed to the chosen collider and the couplings present in the production channels. Proton-proton collisions heavily favour pair-production of colored particles, which is further enhanced via the strong coupling in the triple-gluon $ggg$ and $g\bar{T}_1T_1$ vertices, as opposed to the weak gauge coupling present in all VLL pair-production processes. We show in Fig.~\ref{fig:Xsecs_VLLs_VLQs} both the VLL and VLQ production cross-sections in terms of the exotic fermion masses for each of the studied processes.
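For orientation, these cross sections translate into raw expected yields $N = \sigma \mathcal{L}$ at the end of the high-luminosity program, before any neural-network selection:

```python
# Raw expected yields N = sigma * L for the VLQ analysis at the
# high-luminosity LHC, using the cross sections quoted above for
# m_T1 = 2.2 TeV (no selection efficiencies applied).
luminosity = 3000.0  # integrated luminosity in fb^-1
cross_sections_fb = {
    "VLQ signal": 5.09,
    "pp -> e+ e- mu+ mu- j j": 0.16,
    "ttbar Z0 (mu+ mu-)": 1.23e-2,
}
yields = {name: xs * luminosity for name, xs in cross_sections_fb.items()}
for name, n in yields.items():
    print(f"{name}: {n:.0f} expected events")
```

The signal alone yields $\mathcal{O}(10^4)$ raw events against a few hundred background events, which underlies the large VLQ significances reported below.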
\begin{figure*}[h!]
\centering
\captionsetup{justification=raggedright}
\subfloat[VLLs]{{\includegraphics[width=0.46\textwidth]{VLLs-VLQs-Collider/Xsec_vs_VLL.pdf} }}
\subfloat[VLQs]{{\includegraphics[width=0.46\textwidth]{VLLs-VLQs-Collider/Xsec_vs_VLQ.pdf} }} \\
\caption{The production cross-section as a function of the VLL mass (left panel) and the VLQ mass (right panel).
\label{fig:Xsecs_VLLs_VLQs}}
\end{figure*}
As mentioned in the previous section, the main goal is to compute the statistical significance of a hypothetical discovery. As such, we need to extract the relevant kinematic information that helps separate the signal from the background. Let us first dedicate our attention to VLL production, focusing on a scenario where $m_{E_2} = 200~\mathrm{GeV}$. We show in Appendix~\ref{app2} the distributions in the laboratory frame in Figs.~\ref{fig:VBF_vars_LabFrame}, \ref{fig:ZA_vars_LabFrame} and \ref{fig:VLBSM_vars_LabFrame}. In these figures, the angular distributions in the $\bar{E}_2E_2$ frame are also presented. The remaining distributions, in the $W$ frame of reference, are shown in Fig.~\ref{fig:Vars_BoostedFrame}. Common to all topologies, the angular distributions are dominated by the imposed backgrounds, since the signal events do not exhibit major qualitative differences in the $\cos(\theta)$ distributions. On the other hand, $\Delta R$ distributions are particularly interesting in this regard, as signal topologies typically feature a peak around $\Delta R \sim 1$, as opposed to the backgrounds, which show a flatter structure peaking at higher values, $\Delta R \sim 1-2$. Kinematic information, such as pseudo-rapidity, offers additional discrimination, as the signal events are characterized by a strong peak at $\eta=0$, while backgrounds typically either possess a double-peak structure (see the $\eta$ distributions for reconstructed particles such as $E_2$ and $W^+$ in Fig.~\ref{fig:VBF_vars_LabFrame}) or a more uniform distribution over the allowed pseudo-rapidity range (see the $\eta$ plots for the electron and the anti-muon in Fig.~\ref{fig:VBF_vars_LabFrame}). This is true for VBF topologies, but not for ZA and VLBSM events (see Figs.~\ref{fig:ZA_vars_LabFrame} and \ref{fig:VLBSM_vars_LabFrame}). For instance, the pseudo-rapidity distributions for ZA and VLBSM typically follow the same characteristics as the backgrounds.
Therefore, they do not offer the same discriminating power. Transverse momentum distributions, for both topologies, do not provide greater discriminating power either. However, MET distributions supply some key differences, such as longer tails at large MET values, i.e.~$\mathrm{MET} > 100~\mathrm{GeV}$. In fact, MET distributions are rather relevant, as this observable is representative of the signal events that we propose, in particular for the ZA/VBF cases where the final state contains four neutrinos. For the VLBSM topologies this no longer applies: even though the final state contains three neutrinos, the MET distributions have the same shape as the SM backgrounds (see the middle panel in the second row of Fig.~\ref{fig:VLBSM_vars_LabFrame}).
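The $\Delta R$ separation entering these distributions is the standard angular distance in the $(\eta, \phi)$ plane; a minimal implementation, with the azimuthal difference wrapped into $[-\pi, \pi]$:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(Delta eta^2 + Delta phi^2),
    with Delta phi wrapped into [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

# Back-to-back particles at central rapidity: Delta R ~ pi
print(delta_r(0.0, 0.0, 0.0, math.pi))
```

The wrapping step matters: without it, two particles at $\phi = \pm 3$ would appear separated by $\Delta\phi = 6$ rather than $2\pi - 6 \approx 0.28$.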
Turning our attention to the quark sector, all relevant distributions are shown in Fig.~\ref{fig:VLQ_vars}, where we fix the VLQ mass at $m_{T_1}=2.2~\mathrm{TeV}$. For VLQ production, a substantial number of variables were used; as such, only a representative subset is shown. As opposed to the VLLs, the $\cos(\theta)$ distributions do offer discriminating power over the backgrounds. For the pairs of particles $(e^+,j_1)$, $(e^-,j_2)$ and $(\mu^+,j_2)$, the cosine distributions of signal events preferentially peak at $\cos(\theta) = -1$, i.e.~at $\theta = \pi$, implying that the outgoing particles are produced back-to-back. This contrasts with the studied backgrounds, where a significant portion of events feature highly collinear particles, i.e.~$\cos(\theta) = 1 \Leftrightarrow \theta = 0$. Other angular variables such as $\Delta \phi$ offer further distinction, with a typical double-peak structure of signal events at $|\Delta\phi| \geq 2$, whereas the corresponding backgrounds tend to populate the region near $\Delta \phi = 0$. The transverse momentum distributions of the final leptons are also relevant, as one notices from the bottom rows of Fig.~\ref{fig:VLQ_vars}. In particular, signal events tend to populate regions of phase-space with larger momentum ($p_T > 300$ GeV), in stark contrast with the SM backgrounds, which preferentially populate regions of much lower $p_T$ values.
A proper analysis requires a combination of the various kinematic variables into a single multi-dimensional distribution that the NN uses as input. This allows one to find the regions of parameter space that best enhance the signal region while minimizing the background effects. More importantly, it also provides us with the ability to compute the statistical significance as well as to determine which mass regions can be excluded within the studied model. For a complete analysis we calculate the significance focusing on three distinct statistical metrics, with varying degrees of conservatism, in order to provide the most rigorous and realistic scenarios for further investigation.
We then consider the following:
\begin{itemize}
\item $\mathcal{Z}_A:$ The Asimov significance, as it is defined in Eq.~\eqref{eq:Asimov_sig}, assuming 1\% systematic errors;
\item $\mathcal{Z}(<1\%):$ An adapted version of the Asimov significance, as it is defined in Eq.~\eqref{eq:Asimov_sig}. For this case we assume a much lower systematic uncertainty, in particular $10^{-3}$. As such, this is the most lenient metric and typically offers the highest values for the significance;
\item $s/\sqrt{s+b}:$ A more traditional metric, which is the limiting scenario of the Asimov significance when $s \ll b$.
\end{itemize}
With this in mind, we apply the genetic algorithm as described in the previous section and calculate the aforementioned significance metrics in terms of the score that the NN assigns to each event. For the scenarios that have so far been discussed, $m_{E_2} = 200$ GeV and $m_{T_1} = 2.2$ TeV, our results for the VLL significance in each channel can be seen in Fig.~\ref{fig:ACC-Sig-plots} of Appendix~\ref{app:sig_plots_DL}. Notice that the significance is calculated under the assumption of the high-luminosity LHC, with an integrated luminosity of $\mathcal{L} = 3000$ $\mathrm{fb^{-1}}$.
Let us first focus on the VLL sector. Since each topology constitutes an independent event sample, we can safely combine their individual significances as
\begin{equation}\label{eq:combined_significance}
\sigma_C = \sigma_{\mathrm{VBF}} + \sigma_{\mathrm{VLBSM}} + \sigma_{\mathrm{ZA}}.
\end{equation}
With this in mind, we notice that the highest combined significance is obtained for the $\mathcal{Z}(<1\%)$ metric, in particular $\sigma_C = 3.78\sigma$. The dominant contributions to this value are those of the VLBSM and VBF channels, with $\sigma_{\mathrm{VLBSM}} = 1.76\sigma$ and $\sigma_{\mathrm{VBF}} = 1.03\sigma$, respectively. Equally interesting is the $s/\sqrt{s+b}$ metric, for which the combined significance is slightly smaller, $\sigma_C = 2.56\sigma$, with the VLBSM topology providing the highest contribution, $1.13\sigma$. While these results do not permit a confident exclusion in this mass range, they are sufficiently large to merit further inspection should a hypothetical anomaly appear at the LHC experiments. If this turns out to be the case, we argue that, if the next generation of colliders beyond the LHC is still far from becoming operational, a continuation of the high-luminosity program must be put on the table, as we further discuss below.
As expected, the most conservative metric, $\mathcal{Z}_A$, shows the lowest significance values, with $\sigma_C = 0.032\sigma$. Therefore, within the context of the model under consideration, we cannot safely exclude VLLs up to masses of $200~\mathrm{GeV}$. In fact, singlet VLLs do not couple directly to $W$ bosons as doublet VLLs do (see \cite{Freitas:2020ttd} for a previous study). Instead, such interactions with $W$ bosons, relevant for the three signal processes under consideration, are indirectly induced via off-diagonal Yukawa interactions and become suppressed by a factor of the mass of the VLL itself \cite{Kumar:2015tna}. This implies that the larger the VLL mass, the smaller the interaction strength with $W$ bosons and, thus, the smaller the cross section and significance.
We show in Fig.~\ref{fig:Sig_plots_luminosity} the dependence of the significance as a function of the luminosity for a VLL mass of $200~\mathrm{GeV}$.
\begin{figure*}[ht!]
\captionsetup{justification=raggedright}
\subfloat[$\mathcal{Z}_A$ significance]{{\includegraphics[width=0.50\textwidth]{VLLs-VLQs-Collider/Significance_vs_luminosity_Asimov_200GeV.pdf} }}
\subfloat[$\mathcal{Z}(<1\%)$ significance]{{\includegraphics[width=0.50\textwidth]{VLLs-VLQs-Collider/Significance_vs_luminosity_Z1perc_200GeV.pdf} }} \\
\subfloat[$s/\sqrt{s+b}$]{{\includegraphics[width=0.50\textwidth]{VLLs-VLQs-Collider/Significance_vs_luminosity_sqrtsb_200GeV.pdf} }}\\
\caption{Statistical significance as a function of the integrated luminosity in $\mathrm{fb^{-1}}$ for the three different statistics adopted in the current analysis and for a fixed VLL mass, $m_{E_2} = 200$ GeV. The $x$ axis is on logarithmic scale. In (a) we showcase the Asimov significance, in (b) -- the adapted Asimov significance, and in (c) -- the $s/\sqrt{s+b}$ metric. The colours represent the distinct signal processes under consideration. In particular, the green curve is representative of VBF events, the red curve indicates ZA topologies, while the blue curve refers to VLBSM single production events. The dashed curves indicate that the considered values of the luminosity are beyond the LHC operation regime.
\label{fig:Sig_plots_luminosity}}
\end{figure*}
In particular, we see that by the end of the LHC program, i.e.~$\mathcal{L} = 3000~\mathrm{fb^{-1}}$, a combined significance of $\sigma_C = 3.78\sigma$ can be achieved for the $\mathcal{Z}(<1\%)$ metric. Alternatively, if the standard $s/\sqrt{s+b}$ measure is considered, a $\sigma_C = 2.56\sigma$ anomaly could be observed. In either scenario, we argue that, if a new generation of colliders succeeding the LHC remains decades away from the beginning of operations, such an anomaly (or any other excess) may justify a continuation of the high-luminosity runs.
As an example, we show in all panels of Fig.~\ref{fig:Sig_plots_luminosity} the continuation of the significance curves for higher luminosities choosing $\mathcal{L} = 6000~\mathrm{fb^{-1}}$ and $\mathcal{L} = 9000~\mathrm{fb^{-1}}$ as merely indicative values. In particular, we see that a signal confirmation or exclusion would be realizable with $\sigma_C = 6.54\sigma$ for the $\mathcal{Z}(<1\%)$ metric, at $\mathcal{L} = 9000~\mathrm{fb^{-1}}$, and $\sigma_C = 5.34\sigma$ at $\mathcal{L} = 6000~\mathrm{fb^{-1}}$. Note that we use dashed lines in the continuation of the significance curves for the region beyond $\mathcal{L} = 3000~\mathrm{fb}^{-1}$ to indicate that such a regime is beyond the planned LHC operation program.
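Since both signal and background yields grow linearly with $\mathcal{L}$, the significances scale approximately as $\sqrt{\mathcal{L}}$, and the quoted extrapolations can be reproduced with this simple rule (values taken from the text; the agreement is only approximate because $\mathcal{Z}(<1\%)$ is not exactly of the $s/\sqrt{s+b}$ form):

```python
import math

# Naive sqrt-luminosity extrapolation of a statistical significance:
# s and b both scale linearly with L, so s/sqrt(s+b) scales as sqrt(L).
def extrapolate(sigma0, L0, L):
    return sigma0 * math.sqrt(L / L0)

sigma_3000 = 3.78  # combined Z(<1%) significance at L = 3000 fb^-1
print(extrapolate(sigma_3000, 3000, 6000))  # ~5.35 (text quotes 5.34)
print(extrapolate(sigma_3000, 3000, 9000))  # ~6.55 (text quotes 6.54)
```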
\begin{table}[h!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.9\textwidth}{!}{\begin{tabular}{c||cccc||cccc||cccc||}
\multirow{2}{*}{Mass of VLL} & \multicolumn{4}{c||}{$s/\sqrt{s+b}$} & \multicolumn{4}{c||}{$\mathcal{Z}(<1\%)$} & \multicolumn{4}{c||}{$\mathcal{Z}_{A}$} \\ \cline{2-13}
& ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ \\ \hline
$200$ GeV & \multicolumn{1}{c|}{$0.70$} & \multicolumn{1}{c|}{$0.73$} & \multicolumn{1}{c|}{$1.13$} &\multicolumn{1}{c||}{\textbf{2.56}} &\multicolumn{1}{c|}{$0.99$} & \multicolumn{1}{c|}{$1.03$} & \multicolumn{1}{c|}{$1.76$} &\multicolumn{1}{c||}{\textbf{3.78}}& \multicolumn{1}{c|}{$0.02$} & \multicolumn{1}{c|}{$0.011$} & \multicolumn{1}{c|}{$0.0022$} & \multicolumn{1}{c||}{\textbf{0.033}} \\
$300$ GeV & \multicolumn{1}{c|}{$0.37$} & \multicolumn{1}{c|}{$0.38$} & \multicolumn{1}{c|}{$0.59$} &\multicolumn{1}{c||}{\textbf{1.34}}& \multicolumn{1}{c|}{$0.57$} & \multicolumn{1}{c|}{$0.54$} & \multicolumn{1}{c|}{$0.91$} &\multicolumn{1}{c||}{\textbf{2.02}}& \multicolumn{1}{c|}{$0.018$} & \multicolumn{1}{c|}{$3.08\times 10^{-5}$} & \multicolumn{1}{c|}{$0.0012$} &\multicolumn{1}{c||}{\textbf{0.019}}\\
$400$ GeV & \multicolumn{1}{c|}{$0.25$} & \multicolumn{1}{c|}{$0.23$} & \multicolumn{1}{c|}{$0.38$}
&\multicolumn{1}{c||}{\textbf{0.86}}& \multicolumn{1}{c|}{$0.36$} & \multicolumn{1}{c|}{$0.32$} & \multicolumn{1}{c|}{$0.55$}
&\multicolumn{1}{c||}{\textbf{1.23}} & \multicolumn{1}{c|}{$0.0086$} & \multicolumn{1}{c|}{$0.0077$} & \multicolumn{1}{c|}{$0.0022$} &\multicolumn{1}{c||}{\textbf{0.019}}\\
$500$ GeV & \multicolumn{1}{c|}{$0.19$} & \multicolumn{1}{c|}{$0.15$} & \multicolumn{1}{c|}{$0.30$} & \multicolumn{1}{c||}{\textbf{0.64}} & \multicolumn{1}{c|}{$0.30$} & \multicolumn{1}{c|}{$0.21$} & \multicolumn{1}{c|}{$0.43$} & \multicolumn{1}{c||}{\textbf{0.94}}& \multicolumn{1}{c|}{$0.0079$} & \multicolumn{1}{c|}{$0.0037$} & \multicolumn{1}{c|}{$0.0020$}& \multicolumn{1}{c||}{\textbf{0.014}} \\
$600$ GeV & \multicolumn{1}{c|}{$0.15$} & \multicolumn{1}{c|}{$0.13$} & \multicolumn{1}{c|}{$0.19$} & \multicolumn{1}{c||}{\textbf{0.47}} & \multicolumn{1}{c|}{$0.20$} & \multicolumn{1}{c|}{$0.15$} & \multicolumn{1}{c|}{$0.32$} & \multicolumn{1}{c||}{\textbf{0.67}}& \multicolumn{1}{c|}{$0.00023$} & \multicolumn{1}{c|}{$0.0022$} & \multicolumn{1}{c|}{$0.0013$} & \multicolumn{1}{c||}{\textbf{0.0037}}\\
$700$ GeV & \multicolumn{1}{c|}{$0.0024$} & \multicolumn{1}{c|}{$0.069$} & \multicolumn{1}{c|}{$0.095$} & \multicolumn{1}{c||}{\textbf{0.17}} & \multicolumn{1}{c|}{$0.091$} & \multicolumn{1}{c|}{$0.098$} & \multicolumn{1}{c|}{$0.11$} & \multicolumn{1}{c||}{\textbf{0.30}}& \multicolumn{1}{c|}{$5.40\times 10^{-5}$} & \multicolumn{1}{c|}{$0.0015$} & \multicolumn{1}{c|}{$0.0016$} & \multicolumn{1}{c||}{\textbf{0.0032}} \\
$800$ GeV & \multicolumn{1}{c|}{$0.0020$} & \multicolumn{1}{c|}{$3.8849\times 10^{-6}$} & \multicolumn{1}{c|}{$0.055$} & \multicolumn{1}{c||}{\textbf{0.057}} & \multicolumn{1}{c|}{$0.0034$} & \multicolumn{1}{c|}{$0.071$} & \multicolumn{1}{c|}{$0.077$} &
\multicolumn{1}{c||}{\textbf{0.15}} & \multicolumn{1}{c|}{$2.78\times 10^{-5}$} & \multicolumn{1}{c|}{$2.20 \times 10^{-5}$} & \multicolumn{1}{c|}{$0.0015$} & \multicolumn{1}{c||}{\textbf{0.0015}} \\
\end{tabular}}
\caption{Signal significance for the lightest VLL. The computation follows an evolutive algorithm that maximizes the Asimov significance metric. All significances are computed for $\mathcal{L} = 3000$ fb$^{-1}$ with centre-of-mass energy of $\sqrt{s}=14$ TeV. $\sigma_C$ is the combined significance as defined in \eqref{eq:combined_significance}.}\label{tab:Evolve_Asimov_table}
\end{table}
\begin{table}[h!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.9\textwidth}{!}{\begin{tabular}{c||cccc||cccc||cccc||}
\multirow{2}{*}{Mass of VLL} & \multicolumn{4}{c||}{$s/\sqrt{s+b}$} & \multicolumn{4}{c||}{$\mathcal{Z}(<1\%)$} & \multicolumn{4}{c||}{$\mathcal{Z}_{A}$} \\ \cline{2-13}
& ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ \\ \hline
$200$ GeV & \multicolumn{1}{c|}{$1.22$} & \multicolumn{1}{c|}{$1.26$} & \multicolumn{1}{c|}{$1.95$} &\multicolumn{1}{c||}{\textbf{4.43}} &\multicolumn{1}{c|}{$1.71$} & \multicolumn{1}{c|}{$1.79$} & \multicolumn{1}{c|}{$3.04$} &\multicolumn{1}{c||}{\textbf{6.54}}& \multicolumn{1}{c|}{$0.034$} & \multicolumn{1}{c|}{$0.016$} & \multicolumn{1}{c|}{$0.0032$} & \multicolumn{1}{c||}{\textbf{0.053}} \\
$300$ GeV & \multicolumn{1}{c|}{$0.63$} & \multicolumn{1}{c|}{$0.66$} & \multicolumn{1}{c|}{$1.03$} &\multicolumn{1}{c||}{\textbf{2.32}}& \multicolumn{1}{c|}{$0.99$} & \multicolumn{1}{c|}{$0.93$} & \multicolumn{1}{c|}{$1.57$} &\multicolumn{1}{c||}{\textbf{3.49}}& \multicolumn{1}{c|}{$0.030$} & \multicolumn{1}{c|}{$3.73 \times 10^{-5}$} & \multicolumn{1}{c|}{$0.0025$} &\multicolumn{1}{c||}{\textbf{0.033}}\\
$400$ GeV & \multicolumn{1}{c|}{$0.44$} & \multicolumn{1}{c|}{$0.39$} & \multicolumn{1}{c|}{$0.65$}
&\multicolumn{1}{c||}{\textbf{1.48}}& \multicolumn{1}{c|}{$0.62$} & \multicolumn{1}{c|}{$0.56$} & \multicolumn{1}{c|}{$0.95$}
&\multicolumn{1}{c||}{\textbf{2.13}} & \multicolumn{1}{c|}{$0.015$} & \multicolumn{1}{c|}{$0.013$} & \multicolumn{1}{c|}{$0.0036$} &\multicolumn{1}{c||}{\textbf{0.032}}\\
$500$ GeV & \multicolumn{1}{c|}{$0.34$} & \multicolumn{1}{c|}{$0.25$} & \multicolumn{1}{c|}{$0.52$} & \multicolumn{1}{c||}{\textbf{1.11}} & \multicolumn{1}{c|}{$0.35$} & \multicolumn{1}{c|}{$0.36$} & \multicolumn{1}{c|}{$0.52$} & \multicolumn{1}{c||}{\textbf{1.13}}& \multicolumn{1}{c|}{$0.0079$} & \multicolumn{1}{c|}{$0.0065$} & \multicolumn{1}{c|}{$0.0035$}& \multicolumn{1}{c||}{\textbf{0.0179}} \\
$600$ GeV & \multicolumn{1}{c|}{$0.23$} & \multicolumn{1}{c|}{$0.20$} & \multicolumn{1}{c|}{$0.33$} & \multicolumn{1}{c||}{\textbf{0.76}} & \multicolumn{1}{c|}{$0.23$} & \multicolumn{1}{c|}{$0.29$} & \multicolumn{1}{c|}{$0.40$} & \multicolumn{1}{c||}{\textbf{0.92}}& \multicolumn{1}{c|}{$0.0039$} & \multicolumn{1}{c|}{$0.0064$} & \multicolumn{1}{c|}{$0.0037$} & \multicolumn{1}{c||}{\textbf{0.014}}\\
$700$ GeV & \multicolumn{1}{c|}{$0.10$} & \multicolumn{1}{c|}{$0.12$} & \multicolumn{1}{c|}{$0.20$} & \multicolumn{1}{c||}{\textbf{0.42}} & \multicolumn{1}{c|}{$0.13$} & \multicolumn{1}{c|}{$0.17$} & \multicolumn{1}{c|}{$0.23$} & \multicolumn{1}{c||}{\textbf{0.53}}& \multicolumn{1}{c|}{$0.00027$} & \multicolumn{1}{c|}{$0.0026$} & \multicolumn{1}{c|}{$0.0030$} & \multicolumn{1}{c||}{\textbf{0.00587}} \\
$800$ GeV & \multicolumn{1}{c|}{$0.08$} & \multicolumn{1}{c|}{$6.73\times 10^{-6}$} & \multicolumn{1}{c|}{$0.095$} & \multicolumn{1}{c||}{\textbf{0.18}} & \multicolumn{1}{c|}{$0.07$} & \multicolumn{1}{c|}{$0.12$} & \multicolumn{1}{c|}{$0.13$} &
\multicolumn{1}{c||}{\textbf{0.32}} & \multicolumn{1}{c|}{$0.00015$} & \multicolumn{1}{c|}{$3.72\times 10^{-5}$} & \multicolumn{1}{c|}{$0.0028$} & \multicolumn{1}{c||}{\textbf{0.0030}} \\
\end{tabular}}
\caption{Signal significance for the lightest VLL. The computation follows an evolutive algorithm that maximizes the Asimov significance metric. All significances are computed for $\mathcal{L} = 9000$ fb$^{-1}$ with centre-of-mass energy of $\sqrt{s}=14$ TeV. $\sigma_C$ is the combined significance as defined in \eqref{eq:combined_significance}.}\label{tab:Evolve_Asimov_table_2}
\end{table}
\begin{table}[h!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.9\textwidth}{!}{\begin{tabular}{c||cccc||cccc||cccc||}
\multirow{2}{*}{Mass of VLL} & \multicolumn{4}{c||}{$s/\sqrt{s+b}$} & \multicolumn{4}{c||}{$\mathcal{Z}(<1\%)$} & \multicolumn{4}{c||}{$\mathcal{Z}_{A}$} \\ \cline{2-13}
& ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ & ZA & VBF & VLBSM &$\sigma_\mathrm{C}$ \\ \hline
$500$ GeV, $\sqrt{s}=28$ TeV & \multicolumn{1}{c|}{$0.49$} & \multicolumn{1}{c|}{$0.35$} & \multicolumn{1}{c|}{$0.36$} &\multicolumn{1}{c||}{\textbf{1.20}} &\multicolumn{1}{c|}{$0.70$} & \multicolumn{1}{c|}{$0.50$} & \multicolumn{1}{c|}{$0.72$} &\multicolumn{1}{c||}{\textbf{1.92}}& \multicolumn{1}{c|}{$0.032$} & \multicolumn{1}{c|}{$0.022$} & \multicolumn{1}{c|}{$0.0022$} & \multicolumn{1}{c||}{\textbf{0.0562}}
\\
$500$ GeV, $\sqrt{s}=14$ TeV & \multicolumn{1}{c|}{$0.19$} & \multicolumn{1}{c|}{$0.15$} & \multicolumn{1}{c|}{$0.30$} & \multicolumn{1}{c||}{\textbf{0.86}} & \multicolumn{1}{c|}{$0.36$} & \multicolumn{1}{c|}{$0.32$} & \multicolumn{1}{c|}{$0.55$} & \multicolumn{1}{c||}{\textbf{1.23}}& \multicolumn{1}{c|}{$0.0086$} & \multicolumn{1}{c|}{$0.0077$} & \multicolumn{1}{c|}{$0.0022$}& \multicolumn{1}{c||}{\textbf{0.019}} \\
\end{tabular}}
\caption{Signal significance for the lightest VLL. The computation follows an evolutive algorithm that maximizes the Asimov significance metric. All significances are computed for $\mathcal{L} = 3000$ fb$^{-1}$ for a fixed mass $m_{E_2}=500$ GeV. In the first row, computations are performed for the centre-of-mass energy of $\sqrt{s}=28$ TeV, while the case of $\sqrt{s} = 14$ TeV is in the second row. $\sigma_C$ is the combined significance as defined in \eqref{eq:combined_significance}.}\label{tab:Evolve_Asimov_table_3}
\end{table}
To further complement our analysis, we also perform a scan over different VLL masses, summarizing our results in Tab.~\ref{tab:Evolve_Asimov_table} for $3000$ $\mathrm{fb^{-1}}$ of integrated luminosity. For completeness, and in order to understand how much would be gained by prolonging the LHC high-luminosity runs, we show in Tab.~\ref{tab:Evolve_Asimov_table_2} the same results for an integrated luminosity of $9000$ $\mathrm{fb^{-1}}$.
First, in Table \ref{tab:Evolve_Asimov_table} we obtain significances of the order of $2\sigma$ up to a mass of 300 GeV, whereas in Table \ref{tab:Evolve_Asimov_table_2} the same significance is achieved for a mass of 400 GeV. It is also interesting to consider the effect of a collider at a higher centre-of-mass energy. For this scenario, we focus our attention on a particular point, $m_{E_2}=500$ GeV and a centre-of-mass energy $\sqrt{s} = 28$ TeV. The results are shown in Tab.~\ref{tab:Evolve_Asimov_table_3}. As expected, the discovery significance increases with the centre-of-mass energy. For example, the combined significance of the $s/\sqrt{s+b}$ metric increases by a factor of $1.4$. Similarly, the other metrics also improve, by factors of $1.6$ and about $3$ for the $\mathcal{Z}(<1\%)$ and $\mathcal{Z}_A$ metrics, respectively. We also note that, for the $s/\sqrt{s+b}$ and $\mathcal{Z}_A$ measures of the VLBSM topology, only small variations in the statistical significance are obtained, with $\mathcal{Z}_A$ showing no deviation. These small fluctuations may be associated with the stochastic nature of the tested NNs.
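The improvement factors quoted above follow directly from the combined significances in Tab.~\ref{tab:Evolve_Asimov_table_3}; a quick check:

```python
# Improvement factors from sqrt(s) = 14 TeV to 28 TeV
# (combined significances, m_E2 = 500 GeV, L = 3000 fb^-1).
sigma_14 = {"s/sqrt(s+b)": 0.86, "Z(<1%)": 1.23}
sigma_28 = {"s/sqrt(s+b)": 1.20, "Z(<1%)": 1.92}

factors = {m: sigma_28[m] / sigma_14[m] for m in sigma_14}
print(factors)  # ratios round to 1.4 and 1.6, respectively
```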
A similar analysis was performed for VLQs pair-produced via gluon-gluon fusion. In Fig.~\ref{fig:ACC-Sig-plots_1} of appendix \ref{app:sig_plots_DL} we show the significance in terms of the NN score, where a massive significance increase is immediately noticeable in comparison to the VLL case. This is mostly due to a far larger cross section which, for $\mathcal{L}=3000~\mathrm{fb^{-1}}$, promptly results in a large Asimov significance, i.e.~$\mathcal{Z}_A = 257.53\sigma$, as well as equally large values for the other two metrics, $s/\sqrt{s+b} = 121.71\sigma$ and $\mathcal{Z}(<1\%) = 275.76\sigma$. As we show in Fig.~\ref{fig:VLQ_lum_sig}, the statistical significance for the VLQ searches is such that we can probe a large range of masses from $2.2~\mathrm{TeV}$ up to about $4.0~\mathrm{TeV}$.
\begin{figure}[h!]
\centering
\captionsetup{justification=raggedright}
\includegraphics[width=0.63\textwidth]{VLLs-VLQs-Collider/sig_plot.pdf}
\caption{Statistical significance as a function of the integrated luminosity for the three statistical metrics under consideration. We fix the lightest VLQ mass as $m_{\mathrm{T}} = 2.2~\mathrm{TeV}$. The green curve represents the $\mathcal{Z}(<1\%)$ metric, the red curve indicates the $s/\sqrt{s+b}$ one, while the blue curve shows our results for the Asimov significance $\mathcal{Z}_A$.}
\label{fig:VLQ_lum_sig}
\end{figure}
The corresponding numerical results can be found in Tab.~\ref{tab:VLQs}. From this, we note that for a high-luminosity run, $\mathcal{L}=3000~\mathrm{fb^{-1}}$, one can exclude, or claim a discovery of, VLQ masses up to $3.8~\mathrm{TeV}$ by over $5$ standard deviations. More interestingly, the VLQ sector of the model under consideration can already be probed at the forthcoming LHC Run-III, $\mathcal{L}=300~\mathrm{fb^{-1}}$, for VLQ masses of around $3.4~\mathrm{TeV}$.
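For reference, in the absence of systematic uncertainties the Asimov metric reduces to the standard counting-experiment expression $\mathcal{Z}_A = \sqrt{2\,[(s+b)\ln(1+s/b)-s]}$; a minimal sketch comparing it with $s/\sqrt{s+b}$ for purely illustrative (hypothetical) yields, not taken from our analysis:

```python
import math

def z_asimov(s, b):
    """Median discovery significance of a counting experiment
    (Asimov approximation, no systematic uncertainty)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def z_naive(s, b):
    return s / math.sqrt(s + b)

# Hypothetical yields for illustration only:
s, b = 10.0, 100.0
print(z_asimov(s, b))  # ~0.984
print(z_naive(s, b))   # ~0.953
```
As in the tables above, the Asimov estimate is slightly larger than the naive $s/\sqrt{s+b}$ one for comparable yields.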
\begin{table}[h!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.85\textwidth}{!}{\begin{tabular}{c||ccc||ccc||ccc||}
\multirow{2}{*}{Mass of VLQ} & \multicolumn{3}{c||}{$300$ $\mathrm{fb^{-1}}$} & \multicolumn{3}{c||}{$1000$ $\mathrm{fb^{-1}}$} & \multicolumn{3}{c||}{$3000$ $\mathrm{fb^{-1}}$} \\ \cline{2-10}
& $s/\sqrt{s+b}$ & $\mathcal{Z}(<1\%)$ & $\mathcal{Z}_A$ & $s/\sqrt{s+b}$ & $\mathcal{Z}(<1\%)$ & $\mathcal{Z}_A$ & $s/\sqrt{s+b}$ & $\mathcal{Z}(<1\%)$ & $\mathcal{Z}_A$ \\ \hline
$2.2$ TeV & \multicolumn{1}{c|}{$38.49$} & \multicolumn{1}{c|}{$87.19$} & \multicolumn{1}{c||}{$87.91$} &\multicolumn{1}{c|}{$70.27$} & \multicolumn{1}{c|}{$159.51$} & \multicolumn{1}{c||}{$155.98$}& \multicolumn{1}{c|}{$121.71$} & \multicolumn{1}{c|}{$275.76$} & \multicolumn{1}{c||}{$257.53$} \\
$2.4$ TeV & \multicolumn{1}{c|}{$31.22$} & \multicolumn{1}{c|}{$67.13$} & \multicolumn{1}{c||}{$67.96$} &\multicolumn{1}{c|}{$57.00$} & \multicolumn{1}{c|}{$122.70$} & \multicolumn{1}{c||}{$121.85$}& \multicolumn{1}{c|}{$98.73$} & \multicolumn{1}{c|}{$212.29$} & \multicolumn{1}{c||}{$202.30$} \\
$2.6$ TeV & \multicolumn{1}{c|}{$24.94$} & \multicolumn{1}{c|}{$50.01$} & \multicolumn{1}{c||}{$49.84$} &\multicolumn{1}{c|}{$45.53$} & \multicolumn{1}{c|}{$91.34$} & \multicolumn{1}{c||}{$89.28$}& \multicolumn{1}{c|}{$78.87$} & \multicolumn{1}{c|}{$158.17$} & \multicolumn{1}{c||}{$147.40$} \\
$2.8$ TeV & \multicolumn{1}{c|}{$19.51$} & \multicolumn{1}{c|}{$36.10$} & \multicolumn{1}{c||}{$35.94$} &\multicolumn{1}{c|}{$35.62$} & \multicolumn{1}{c|}{$65.92$} & \multicolumn{1}{c||}{$64.62$}& \multicolumn{1}{c|}{$61.70$} & \multicolumn{1}{c|}{$114.16$} & \multicolumn{1}{c||}{$107.57$} \\
$3.0$ TeV & \multicolumn{1}{c|}{$14.97$} & \multicolumn{1}{c|}{$25.33$} & \multicolumn{1}{c||}{$25.22$} &\multicolumn{1}{c|}{$27.33$} & \multicolumn{1}{c|}{$27.33$} & \multicolumn{1}{c||}{$45.48$}& \multicolumn{1}{c|}{$47.34$} & \multicolumn{1}{c|}{$80.10$} & \multicolumn{1}{c||}{$76.23$} \\
$3.2$ TeV & \multicolumn{1}{c|}{$11.22$} & \multicolumn{1}{c|}{$17.28$} & \multicolumn{1}{c||}{$17.21$} &\multicolumn{1}{c|}{$20.49$} & \multicolumn{1}{c|}{$31.55$} & \multicolumn{1}{c||}{$31.11$}& \multicolumn{1}{c|}{$35.49$} & \multicolumn{1}{c|}{$54.65$} & \multicolumn{1}{c||}{$52.42$} \\
$3.4$ TeV & \multicolumn{1}{c|}{$8.16$} & \multicolumn{1}{c|}{$11.43$} & \multicolumn{1}{c||}{$11.39$} &\multicolumn{1}{c|}{$14.89$} & \multicolumn{1}{c|}{$20.87$} & \multicolumn{1}{c||}{$20.62$}& \multicolumn{1}{c|}{$25.79$} & \multicolumn{1}{c|}{$36.15$} & \multicolumn{1}{c||}{$34.88$} \\
$3.6$ TeV & \multicolumn{1}{c|}{$4.51$} & \multicolumn{1}{c|}{$4.91$} & \multicolumn{1}{c||}{$4.98$} &\multicolumn{1}{c|}{$8.52$} & \multicolumn{1}{c|}{$8.10$} & \multicolumn{1}{c||}{$9.11$}& \multicolumn{1}{c|}{$16.32$} & \multicolumn{1}{c|}{$17.10$} & \multicolumn{1}{c||}{$15.43$} \\
$3.8$ TeV & \multicolumn{1}{c|}{$1.20$} & \multicolumn{1}{c|}{$3.25$} & \multicolumn{1}{c||}{$2.93$} &\multicolumn{1}{c|}{$3.00$} & \multicolumn{1}{c|}{$7.11$} & \multicolumn{1}{c||}{$5.02$}& \multicolumn{1}{c|}{$6.01$} & \multicolumn{1}{c|}{$8.13$} & \multicolumn{1}{c||}{$10.05$} \\
$4.0$ TeV & \multicolumn{1}{c|}{$0.44$} & \multicolumn{1}{c|}{$0.66$} & \multicolumn{1}{c||}{$0.47$} &\multicolumn{1}{c|}{$1.01$} & \multicolumn{1}{c|}{$1.33$} & \multicolumn{1}{c||}{$1.28$}& \multicolumn{1}{c|}{$2.20$} & \multicolumn{1}{c|}{$2.51$} & \multicolumn{1}{c||}{$1.91$} \\
\end{tabular}}
\caption{Signal significance for the lightest VLQ-pair production. The computation follows a genetic algorithm that maximizes the Asimov significance metric. All significances are computed for proton-proton collisions at the centre-of-mass energy of $\sqrt{s} = 14$ TeV.}
\label{tab:VLQs}
\end{table}
\begin{table}[htb!]
\centering
\captionsetup{justification=raggedright,singlelinecheck=false}
\resizebox{0.60\textwidth}{!}{\begin{tabular}{c|c c c c c c}
$\mathrm{M_{VLQ}}=2.2~\mathrm{TeV}$ & $5\%$ & $10\%$ & $20\%$ & $40\%$ & $80\%$ & $96\%$\\[0.1CM] \hline
$\mathrm{sys}: 1\%$ & $58.39\sigma$ & $56.32\sigma$ & $52.02\sigma$ & $42.74\sigma$ & $19.33\sigma$ & $5.21\sigma$ \\[0.05CM]
$\mathrm{sys}: 10\%$ & $43.14\sigma$ & $41.80\sigma$ & $39.00\sigma$ & $32.83\sigma$ & $16.13\sigma$ & $4.70\sigma$
\end{tabular}}
\caption{Asimov significance, $\mathcal{Z}_A$, for VLQ pair-production. The last six columns show the Asimov significance assuming a suppression of $5\%$ to $96\%$ in the signal cross section coming from unaccounted effects. In the first row a systematic uncertainty in $\mathcal{Z_A}$ of 1\% is considered, whereas in the second row we assume a systematic uncertainty of 10\%. We consider an integrated luminosity of $\mathcal{L} = 139~\mathrm{fb^{-1}}$, corresponding to the data acquired after the LHC Run-II.}
\label{tab:Xsecs}
\end{table}
It is relevant to mention that the significance obtained at low luminosity, in particular for Run-II data, may naively suggest that all such scenarios are excluded. However, one must note that direct searches at the LHC have mostly focused on VLQ decays to third-generation quarks, so far not considering channels with light jets and di-leptons as we propose in this article, which makes a direct comparison of our results with available data unsuitable. With this in mind, the key point of our analysis relies on the fact that the process in Fig.~\ref{fig:VLQ-events} results in a cross section significantly larger than that of the corresponding irreducible backgrounds, thus yielding a large significance for this particular channel. In fact, the evolutive algorithm employed in this work was engineered to find neural-network models that further enhance such a discovery (or exclusion) significance. Since the architectures obtained with this methodology can easily find regions in the feature phase space, in our case the kinematic and angular observables described in tables \ref{tab:vars_Zp} and \ref{tab:vars_VLQ}, the separation between signal and background can be maximized, as shown in Fig.~\ref{fig:ACC-Sig-plots_1}(a). For completeness, we show in Tab.~\ref{tab:Xsecs} how the significance drops with a decreasing cross section and with increasing systematic uncertainties. While our calculation has already been subject to detector effects with \texttt{Delphes}, as well as to systematic uncertainties in the definition of the Asimov metric, one can take a conservative approach and consider that further unaccounted effects impose a larger suppression of the cross section than the one obtained from the \texttt{Delphes} output. It is remarkable that even for a suppression factor of $96\%$, a $2.2~\mathrm{TeV}$ VLQ can still be probed with a statistical significance of $5.21\sigma$ with current LHC Run-II data.
If we further increase the systematic uncertainties up to $10\%$, we obtain a worst-case lower bound on the discovery (or exclusion) significance of $4.7$ standard deviations.
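The erosion of $\mathcal{Z}_A$ with growing systematics seen in Tab.~\ref{tab:Xsecs} can be illustrated with the widely used variant of the Asimov formula that includes a Gaussian background uncertainty $\sigma_b$ (a sketch with assumed, hypothetical yields; this is not necessarily the exact definition used in our fits):

```python
import math

def z_asimov_sys(s, b, sigma_b):
    """Asimov significance with a Gaussian uncertainty sigma_b on the
    background; reduces to sqrt(2[(s+b)ln(1+s/b)-s]) as sigma_b -> 0."""
    if sigma_b == 0.0:
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
    v = sigma_b**2
    t1 = (s + b) * math.log((s + b) * (b + v) / (b * b + (s + b) * v))
    t2 = (b * b / v) * math.log(1.0 + v * s / (b * (b + v)))
    return math.sqrt(2.0 * (t1 - t2))

# Hypothetical yields: a larger systematic lowers the significance.
s, b = 50.0, 100.0
for frac in (0.0, 0.01, 0.10):
    print(frac, z_asimov_sys(s, b, frac * b))
```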
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have proposed a novel model where the SM gauge symmetry is enlarged by the $\mathcal{Q}_6 \times \mathcal{Z}_2$ discrete group. Within this framework, new exotic VLFs emerge, both of quark and lepton type, as well as RH Majorana neutrinos. Furthermore, the scalar sector is enlarged by the inclusion of new doublet and singlet scalar fields. We show that tree-level masses for the third-generation fermions (top and tau), due to interactions with the doublet scalars ($H_1$ and $H_2$), are generated after the spontaneous breaking of the electroweak gauge and the flavour $\mathcal{Q}_6 \times \mathcal{Z}_2$ symmetries. The remaining SM charged fermions gain their masses via a universal seesaw mechanism mediated by VLFs. The tiny masses of the light active neutrinos arise from a tree-level type-I seesaw mechanism mediated by heavy right-handed Majorana neutrinos.
Due to sizeable couplings between the exotic VLL and the new scalar fields, contributions to the anomalous magnetic moment of the muon are generated. For certain benchmark scenarios for the couplings, we demonstrate that the model can successfully accommodate the measured muon $(g-2)$ anomaly. More specifically, considering the two benchmark scenarios $y_{E}=y_{E2}=0.2$ and $y_{E}=y_{E2}=0.3$ we can explain the observed muon $(g-2)$ anomaly within 2$\sigma$ error bars for masses of the $E_2$ between 200 GeV and 2 TeV.
Phenomenological studies, in the context of collider physics at the LHC, are conducted for both VLLs and their quark counterparts. For this purpose, we employ genetic algorithms to optimize the construction of neural networks, whose objective is to maximize the statistical significance of a hypothetical discovery of these particles at future experiments. For VLLs, we consider double-production channels, either via production of a $Z^0$ boson or virtual photon, or via vector-boson fusion, as well as the single-production channel. Using kinematic information of the final states, we determine that VLLs with masses above $200~\mathrm{GeV}$ cannot be excluded with more than five standard deviations at the high-luminosity phase of the LHC, $\mathcal{L} = 3000$ $\mathrm{fb}^{-1}$. Assuming a hypothetical extension towards $\mathcal{L} = 9000$ $\mathrm{fb}^{-1}$, one can exclude the lightest VLL with masses up to approximately $200~\mathrm{GeV}$. We also determine the impact of an increased centre-of-mass energy at future colliders. For a mass of $m_{E_2} = 500$ GeV, we show that the combined significance improves when moving from $\sqrt{s} = 14$ TeV to $\sqrt{s} = 28$ TeV. Specifically, the significance increases from $0.86\sigma$ to $1.20\sigma$ for $s/\sqrt{s+b}$, from $1.23\sigma$ to $1.92\sigma$ for $\mathcal{Z}(<1\%)$, and from $0.019\sigma$ to $0.0562\sigma$ for $\mathcal{Z}_A$. A similar analysis is made for VLQs, focusing on double production via strong-interaction channels, which is characterized by four leptons and two light jets in the final state. We find that VLQ masses up to $3.8~\mathrm{TeV}$ can be excluded at a luminosity of $\mathcal{L} = 3000$ $\mathrm{fb^{-1}}$, and up to $3.4~\mathrm{TeV}$ for $\mathcal{L} = 300$ $\mathrm{fb^{-1}}$.
To the best of our knowledge, the VLQ production channel proposed in this article has so far not been adopted in direct searches by experimental collaborations and should be considered both with currently available data as well as with future data to be collected in forthcoming LHC runs.
\section*{Acknowledgments}
\noindent
The authors acknowledge Ulises Saldaña Salazar for fruitful discussions at early stages of this work. The authors are also very grateful to Martin Hirsch for providing very useful comments and suggestions. A.E.C.H. acknowledges support by FONDECYT (Chile) under grant
No.~1210378, Milenio-ANID-ICN2019\_044 and ANID PIA/APOYO AFB180002. The work of C.B. was supported by FONDECYT grant No. 11201240. R.P.~is supported in part by the Swedish Research Council grant, contract number 2016-05996, as well as by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 668679). J.G., F.F.F., and A.P.M. are supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and
Technology (FCT - Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020. A.P.M., F.F.F. and J.G. are supported by the project PTDC/FIS-PAR/31000/2017. A.P.M.~is also supported by national funds (OE), through FCT, I.P., in the scope of the framework contract foreseen in the numbers 4, 5 and 6 of the article 23, of the Decree-Law 57/2016, of August 29, changed by Law 57/2017, of July 19. The authors would also like to acknowledge the FCT Advanced Computing Project for providing computational resources via the project CPCA/A00/7395/2020. This work was partially produced with the support of INCD funded by FCT and FEDER under the project 01/SAICT/2016 nº 022153. J.G. is also directly funded by FCT through the doctoral program grant with the reference 2021.04527.BD.
C.B. and R.P. would like to thank the members of the UTFSM particle physics group in Valpara\'iso for their hospitality during their visit, where part of this work was done.
\section{1. Expansion near the transition}
Taking derivatives of Eq. \eqref{SS} in the Letter we obtain
\begin{equation}
\Sigma''(\mu) = \partial_\mu \xi_* + I_2(\mu)
\end{equation}
Let us define for convenience
\begin{equation}\label{defIp}
I_{pq}(\mu,y^2) := \int_k \frac{(\mu - t \Delta(k))^q}{((\mu - t \Delta(k))^2 + y^2)^{p/2}}
\quad , \quad I_p(\mu) := \int_k \frac{1}{(\mu - t \Delta(k))^p}
\end{equation}
The equations \eqref{solu} of the text which determine $y$ and $\xi^*$ read for $\mu<\mu_c$
\begin{eqnarray}
&& \xi_* = I_{21}(\mu,y^2) \\
&& 1 = I_{20}(\mu,y^2)
\end{eqnarray}
We will need the following relations which easily follow from the definition (from now on we suppress the arguments $\mu,y^2$ of all $I_{pq}$ integrals)
\begin{eqnarray}
&& \partial_\mu I_{20}=- 2 I_{41} \quad , \quad \partial_{y^2} I_{20}=- I_{40} \\
&& \partial_\mu I_{21}=I_{20}- 2 I_{42} \quad , \quad \partial_{y^2} I_{21}=- I_{41}
\end{eqnarray}
Taking a derivative of the second saddle-point (SP) equation we obtain
\begin{equation}
\frac{d y^2}{d \mu} = - \frac{\partial_\mu I_{20}}{\partial_{y^2} I_{20}} =
-2 \frac{I_{41}}{I_{40}}
\end{equation}
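This relation can be checked numerically. A minimal sketch in $d=3$ with $\Delta(k)=-k^2$ and $t=1$ (an assumed toy normalization, for which $\int_k = \frac{1}{2\pi^2}\int_0^\infty dk\,k^2$ and $\mu_c = 1/(64\pi^2)$), comparing $-2 I_{41}/I_{40}$ with a finite-difference derivative of the root $y^2(\mu)$ of the second SP equation:

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy normalization: int_k = (1/2 pi^2) int_0^inf dk k^2, Delta(k) = -k^2, t = 1.
PREF = 1.0 / (2.0 * math.pi**2)

def I_pq(p, q, mu, y2):
    """The integrals I_{pq}(mu, y^2) of Eq. (defIp) for Delta(k) = -k^2."""
    f = lambda k: k**2 * (mu + k**2)**q / ((mu + k**2)**2 + y2)**(p / 2)
    return PREF * quad(f, 0.0, math.inf)[0]

mu_c = 1.0 / (64.0 * math.pi**2)  # root of I_2(mu_c) = 1 in this model

def y2_of_mu(mu):
    """Root y^2 of the saddle-point condition 1 = I_20(mu, y^2)."""
    return brentq(lambda y2: I_pq(2, 0, mu, y2) - 1.0, 0.0, 1.0,
                  xtol=1e-18, rtol=1e-14)

mu = 0.5 * mu_c
y2 = y2_of_mu(mu)
analytic = -2.0 * I_pq(4, 1, mu, y2) / I_pq(4, 0, mu, y2)
h = 1e-3 * mu
numeric = (y2_of_mu(mu + h) - y2_of_mu(mu - h)) / (2.0 * h)
print(analytic, numeric)  # the two expressions for dy^2/dmu agree
```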
The derivative of the first SP equation gives
\begin{equation}
\partial_\mu \xi_* = \partial_\mu I_{21} + \frac{d y^2}{d \mu} \partial_{y^2} I_{21}
= I_{20}- 2 I_{42} + 2 \frac{I_{41}^2}{I_{40}}
\end{equation}
which is exact for arbitrary $\mu<\mu_c$. Taking the limit
$\mu \to \mu_c^-$ we obtain, since $y(\mu_c)=0$
\begin{equation}
\partial_\mu \xi_* = - I_2(\mu_c) + 2 \frac{I_3(\mu_c)^2}{I_4(\mu_c)}
\end{equation}
Using that $I_2(\mu_c)=1$ we obtain
\begin{equation}
\Sigma''(\mu_c)= 2 \frac{I_3(\mu_c)^2}{I_4(\mu_c)}
\end{equation}
As $\Sigma(\mu_c)=\Sigma'(\mu_c)=0$ this immediately implies (\ref{expansion0}).\\
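As a consistency check, in the $d=3$ continuum model with $\Delta(k)=-k^2$ and $t=1$ (assumed toy normalization) the integrals are elementary, $I_p(\mu)\propto \mu^{3/2-p}$, and one finds $2 I_3(\mu_c)^2/I_4(\mu_c) = I_2(\mu_c) = 1$; a short numerical verification:

```python
import math
from scipy.integrate import quad

# d = 3 continuum model, Delta(k) = -k^2, t = 1 (toy normalization):
# I_p(mu) = (1/2 pi^2) int_0^inf dk k^2 / (mu + k^2)^p.
def I_p(p, mu):
    f = lambda k: k**2 / (mu + k**2)**p
    return quad(f, 0.0, math.inf)[0] / (2.0 * math.pi**2)

mu_c = 1.0 / (64.0 * math.pi**2)  # root of I_2(mu_c) = 1 in this model
assert abs(I_p(2, mu_c) - 1.0) < 1e-6

curvature = 2.0 * I_p(3, mu_c)**2 / I_p(4, mu_c)
print(curvature)  # Sigma''(mu_c); equals 1 in this toy model
```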
\noindent {\bf Complexity of minima for $\mu\le \mu_c$.}
Substituting $\xi_* \to \xi_e = - \lambda_e^- - \mu$ into \eqref{sig1} we get
\begin{eqnarray} \label{Sst}
&& \Sigma_{\rm st}(\mu) = - \frac{1}{2} [ ( \mu_c + \int_k \frac{1}{ \mu_c - t \Delta(k)} - \mu)^2
\\
&& - ( \int_k \frac{1}{ \mu_c - t \Delta(k)})^2] - \int_k [ \ln (\mu - t \Delta(k)) - \ln(\mu_c - t \Delta(k)) ] \nonumber
\end{eqnarray}
where we have subtracted the value $\Sigma_{\rm st}(\mu_c)=0$, which allows us
to eliminate the constant $f(- \lambda_e^-)$. Finally, by differentiating over the parameter $\mu$ it is
easy to show that
\[
\int_k [ \ln (\mu - t \Delta(k)) - \ln(\mu_c - t \Delta(k)) ]=-\int_{\mu}^{\mu_c}I_1(\tilde{\mu})d\tilde{\mu}.
\]
Expanding the square and reordering, we obtain the formula
\eqref{SstA} in the text.
Let us investigate the critical behavior of the complexity of stable equilibria.
Taking derivatives of \eqref{SstA} w.r.t.~$\mu$, and using the definition \eqref{refs}
of $\mu_c$ in the second line, we find
\begin{eqnarray}
&& \Sigma'_{\rm st}( \mu) = - (\mu - \mu_c - I_1(\mu_c)) - I_1(\mu) \Rightarrow
\Sigma'_{\rm st}( \mu_c) = 0 \nonumber \\
&& \Sigma''_{\rm st}( \mu) = - 1 + I_2(\mu) \Rightarrow
\Sigma''_{\rm st}( \mu_c) = 0 \\
&& \Sigma'''_{\rm st}(\mu) = - 2 I_3(\mu) \Rightarrow
\Sigma'''_{\rm st}(\mu_c) = - 2 I_3(\mu_c) \nonumber
\end{eqnarray}
\section{2. Explicit formulas for the complexity in the continuum model}
\noindent{\bf Total complexity.} Here we analyze the equations \eqref{SS2} and \eqref{ymu} which
determine $\Sigma(\mu)$ as a function of $\mu$ in the complex phase $\mu<\mu_c$.
Let us consider the continuum model in dimension $d$, with $\Delta(k)=- k^2$. We restrict to
$d<4$.
We assume (and check later) that the momentum integrals are convergent for $k \in \mathbb{R}^d$.
Upon scaling $k=\sqrt{\tilde \mu/t} p$ and $y=\tilde \mu x$ and employing spherical coordinates it turns out to be useful to introduce the following functions:
\begin{equation} \label{deff}
f_d(x)=C_d \int_{0}^{\infty} \frac{dq \, q^{\frac{d-2}{2}}}{(1+q)^2 + x^2}
\quad , \quad g_d(x)=C_d x^2 \int_{0}^{\infty} \frac{dq \, q^{\frac{d-2}{2}}}{[(1+q)^2 + x^2](1+q)}
\end{equation}
where $C_d= \frac{S_d}{2 (2 \pi)^d} = \frac{1}{2^d\pi^{d/2}\Gamma(d/2)}$, with $S_d$ standing for the
area of the hypersphere in dimension $d$. With help of the introduced notations (\ref{deff}) the equation (\ref{ymu}) takes the form
\begin{equation} \label{complexitycont1}
1 = t^{-d/2}\mu^{\frac{d-4}{2}} f_d\left(\frac{y}{\mu}\right)
\end{equation}
which, in particular, implies that the Larkin mass satisfies the relation $t^{d/2}\mu_c^{\frac{4-d}{2}} = f_d(0)$.
Solving (\ref{complexitycont1}) by functional inverse as $y(\mu)=\mu\,f_d^{-1}\left(t^{d/2}\mu^{\frac{4-d}{2}}\right)$
allows us to write the complexity (\ref{SS2}) explicitly as
\begin{equation} \label{complexitycont2}
\Sigma(\mu) = t^{-d/2}\int_{\mu}^{\mu_c} \frac{d\tilde \mu}{\tilde \mu^{\frac{2-d}{2}}} g_d(f_d^{-1}(\tilde \mu^{\frac{4-d}{2}}t^{d/2}))
\end{equation}
Further changing $\tilde \mu=\left(xt^{-d/2}\right)^{\frac{2}{4-d}}$ and $\mu=\mu_c(1-\delta)$ the above can be presented in the form
\begin{equation} \label{complexitycont3}
\Sigma(\delta) = \frac{2}{4-d} t^{-2d/(4-d)}
\int^{f_d(0)}_{f_d(0)(1-\delta)^{\frac{4-d}{2}}} dx{x^{\frac{2(d-2)}{4-d}}} g_d\left[f_d^{-1}(x)\right]
\end{equation}
implying, in particular
\begin{equation} \label{complexitycont3a}
\frac{d}{d\delta}\Sigma(\delta) = t^{-2d/(4-d)} f_d(0)^{\frac{d}{4-d}}(1-\delta)^{\frac{d-2}{2}}
g_d\left[f_d^{-1}\left(f_d(0)(1-\delta)^{\frac{4-d}{2}}\right)\right]
\end{equation}
On the other hand, setting $\delta\to 1$ is equivalent to $\mu\to 0$. The zero mass limit can be easily
found from (\ref{complexitycont3}). After substituting $z= f^{-1}_d(x)$ and
taking into account $f_d(x\to \infty)=0$ one gets
\begin{equation} \label{zeromd}
\Sigma(\mu=0) = \sigma_d t^{-2d/(4-d)} \quad , \quad
\sigma_d = \frac{2}{4-d}
\int_{0}^{+\infty} dz |f'_d(z)| {f_d(z)^{\frac{2(d-2)}{4-d}}} g_d(z)
\end{equation}
{\bf Behavior near $\mu_c$}. Let us define, for $p > d/2$
\begin{equation} \label{Itilde}
\tilde I_p := C_d\,\int_{0}^{\infty} \frac{q^{\frac{d}{2}-1}\, dq}{(1+q)^p}
= \frac{1}{2^{d} \pi ^{d/2}} \frac{ \Gamma
\left(p-\frac{d}{2}\right)}{\Gamma (p)}\,.
\end{equation}
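As a quick numerical sanity check (an illustrative sketch, not part of the derivation), the closed form \eqref{Itilde} can be compared against direct quadrature of the defining integral; the substitution $q=u/(1-u)$ maps the half-line to the unit interval:

```python
import math

def C_d(d):
    # C_d = S_d / (2 (2 pi)^d) = 1 / (2^d pi^(d/2) Gamma(d/2))
    return 1.0 / (2**d * math.pi ** (d / 2) * math.gamma(d / 2))

def I_tilde_numeric(p, d, n=200000):
    # midpoint rule for C_d * int_0^inf q^(d/2-1) / (1+q)^p dq,
    # after substituting q = u/(1-u), dq = du/(1-u)^2
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        q = u / (1.0 - u)
        total += q ** (d / 2 - 1) / (1.0 + q) ** p / (1.0 - u) ** 2
    return C_d(d) * total / n

def I_tilde_closed(p, d):
    # Gamma(p - d/2) / (2^d pi^(d/2) Gamma(p))
    return math.gamma(p - d / 2) / (2**d * math.pi ** (d / 2) * math.gamma(p))
```

For instance, in $d=2$ one recovers $\tilde I_2 = 1/(4\pi)$, consistent with $t\mu_c=1/(4\pi)$ used below.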
Then expanding in (\ref{deff}) as
\[
f_d(x\ll 1)=f_d(0)-x^2\tilde{I}_4 +o(x^2), \quad g_d(x\ll 1)=x^2 \tilde{I}_3+o(x^2) \quad , \quad f_d(0)=\tilde I_2
\]
it is easy to deduce from (\ref{complexitycont1}) that, to leading order in $\delta\ll 1$, one has
\[
f_d^{-1}\left(f_d(0)(1-\delta)^{\frac{4-d}{2}}\right)\simeq A\, \delta^{1/2},
\]
where the coefficient $A$ is given by
\begin{equation} \label{A}
A^2=\frac{f_d(0)(4-d)}{2\tilde I_4}.
\end{equation}
Substituting all these expressions into (\ref{complexitycont3}) gives, to leading order,
\begin{equation} \label{complexitycontleading1}
\frac{d}{d\delta}\Sigma(\delta) \simeq t^{-2d/(4-d)} f_d(0)^{\frac{d}{4-d}}A^2\tilde{I}_3\delta
= \frac{4-d}{2}t^{-2d/(4-d)} f_d(0)^{\frac{4}{4-d}}\frac{\tilde{I}_3}{\tilde{I}_4}\, \delta\,
\end{equation}
and further using $\mu_c = \left(f_d(0)/t^{d/2}\right)^{\frac{2}{4-d}}$ and $\frac{\tilde{I}_3}{\tilde{I}_4}=\frac{6}{6-d}$ we see that
\[
\frac{d}{d\delta}\Sigma(\delta\ll 1) \approx 3\frac{4-d}{6-d}\mu_c^2 \, \delta
\]
As $\Sigma(\delta=0)=0$ this finally implies that for the continuum model with $\Delta(k)=-k^2$ the complexity close to the threshold is given by
\begin{equation}\label{expansion0cont}
\Sigma(\delta\ll 1) \approx \frac{3}{2}\,\frac{4-d}{6-d}\mu_c^2 \, \delta^2
\end{equation}
This fully agrees with the general expression (\ref{expansion0}). Indeed, it is easy to see that in the continuum limit
the integrals $I_p$ defined in (\ref{defIp}) are related to $\tilde{I}_p$ as
\begin{equation} \label{Ipcont}
I_p(\mu_c)=\tilde{I}_p\mu_c^{\frac{d}{2}-p} t^{-d/2}
\end{equation}
so that again using $\mu_c^{\frac{4-d}{2}}\,t^{d/2} = \tilde{I}_2$ we see
\[
\frac{I_3^2(\mu_c)}{I_4(\mu_c)}=\frac{\tilde I_3^2}{\tilde I_4} \mu_c^{\frac{d}{2}-2}t^{-d/2}= \frac{\tilde I_3^2}{\tilde I_4\tilde I_2}
=\frac{3}{2}\,\frac{4-d}{6-d}
\]
exactly as expected. \\
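This identity can be confirmed numerically (an illustrative sketch; the $d$-independent prefactor $2^{-d}\pi^{-d/2}$ of $\tilde I_p$ cancels in the ratio, so only the Gamma-function part matters):

```python
from math import gamma

def curvature_ratio(d):
    # tilde_I_3^2 / (tilde_I_2 * tilde_I_4), using tilde_I_p ∝ Gamma(p - d/2) / Gamma(p)
    I = lambda p: gamma(p - d / 2) / gamma(p)
    return I(3) ** 2 / (I(2) * I(4))
```

For example, in $d=2$ this gives $3/4$, in agreement with the coefficient $\frac{3}{2}\,\frac{4-d}{6-d}$.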
\noindent{\bf Complexity of minima.} Let us evaluate the
complexity of minima \eqref{SstA} for the continuum model, $\Delta(k)=-k^2$.
Although each factor $I_1$ is a divergent integral (and would require a UV cutoff)
for $d \geq 2$, the difference
\begin{equation}
I_1(\tilde \mu) - I_1(\mu_c) = \int_{\tilde \mu}^{\mu_c} d\rho \, I_2(\rho)
= t^{-d/2} \int_{\tilde \mu}^{\mu_c} d\rho \, \rho^{\frac{d}{2}-2}
= t^{-d/2} \frac{2 \tilde I_2}{d-2} ( \mu_c^{\frac{d-2}{2}} - \tilde \mu^{\frac{d-2}{2}} )
\end{equation}
is convergent for any $\tilde \mu \geq 0$ for $d<4$. Inserting this into \eqref{SstA},
and remembering that $t^{-d/2} \mu_c^{\frac{d}{2}-2} \tilde I_2=1$,
we obtain, upon integrating once more, the complexity of minima in the form
\begin{equation}\label{mincompcontinuum}
\Sigma_{st}(\mu<\mu_c)/\mu_c^2=-\frac{1}{2}\left(1-\frac{\mu}{\mu_c}\right)^2-\frac{2}{2-d}\left(1-\frac{\mu}{\mu_c}\right)
+\frac{4}{d(2-d)}\left[1-\left(\frac{\mu}{\mu_c}\right)^{d/2}\right]
\end{equation}
which upon substituting $\mu/\mu_c=1-\delta$ gives \eqref{mincompcontinuumA} in the text.
This expression is valid for $d<4$ and has a finite limit for $d=2$ as given in the text. \\
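As a numerical cross-check of \eqref{mincompcontinuum} (an illustrative sketch, with the arbitrary choice $d=3$), finite differences confirm that $\Sigma_{st}$ and its first two derivatives vanish at $\mu=\mu_c$, while the cubic coefficient is consistent with $\Sigma'''_{\rm st}(\mu_c)=-2I_3(\mu_c)$ computed above, i.e. $\Sigma_{st}/\mu_c^2\simeq\frac{4-d}{12}\,\delta^3$ for $\delta\ll1$:

```python
def sigma_st(m, d):
    # Sigma_st(mu)/mu_c^2 as a function of m = mu/mu_c, for d < 4, d != 2
    return (-0.5 * (1 - m) ** 2
            - 2 / (2 - d) * (1 - m)
            + 4 / (d * (2 - d)) * (1 - m ** (d / 2)))

d, h = 3, 1e-4
# central finite differences at m = 1 (i.e. mu = mu_c)
first = (sigma_st(1 + h, d) - sigma_st(1 - h, d)) / (2 * h)
second = (sigma_st(1 + h, d) - 2 * sigma_st(1, d) + sigma_st(1 - h, d)) / h**2
```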
\noindent {\bf Results in two dimensions, $d=2$}.
In this case the functions $f_2(x)$ and $g_2(x)$ can be found explicitly as:
\begin{eqnarray}
&& f_2(x) = \frac{1}{4 \pi x} \left( \frac{\pi}{2} - \tan ^{-1}( \frac{1}{x} ) \right) = \frac{\tan ^{-1} x}{4 \pi x}
=
\frac{1}{4 \pi }-\frac{x^2}{12 \pi }+\frac{x^4}{20 \pi
}+O\left(x^5\right) \\
&& g_2(x) = \frac{\log(1+x^2)}{8 \pi}
\end{eqnarray}
We now use Eq. \eqref{complexitycont2} which reads in $d=2$
\begin{equation}\label{intcomp}
\Sigma(\mu) = \frac{1}{t} \int_{\mu}^{\mu_c} d\tilde \mu \, g_2(f_2^{-1}(\tilde \mu t))
= - \frac{1}{t^2} \int_{0}^{Z(\mu)} dz f'_2(z) \, g_2(z)
\end{equation}
where we have introduced $z$ as the solution to $t \tilde \mu=f_2(z)$,
$Z=Z(\mu)$ as the solution to $t \mu=f_2(Z)$, and used that $t \mu_c = f_2(0)=\frac{1}{4 \pi}$.
The integral (\ref{intcomp}) can be easily evaluated by parts yielding
for the complexity an explicit parametric system where $Z$ must be eliminated
\begin{eqnarray}
&& \Sigma = - \frac{\tan ^{-1}(Z) \left(\log \left(Z^2+1\right)-Z
\tan ^{-1}(Z)\right)}{32 \pi ^2 t^2 Z} \\
&& \mu=t^{-1}f_2(Z) = \frac{\tan ^{-1} Z}{4 \pi t Z}
\end{eqnarray}
which can be further written as
\begin{eqnarray}
\Sigma = \frac{\mu}{8 \pi t} \left(4 \pi t \mu Z^2 - \log(1 + Z^2) \right) \quad , \quad
\mu=\frac{\tan ^{-1} Z}{4 \pi t Z} \quad , \quad \mu_c= \frac{1}{4 \pi t}
\end{eqnarray}
or equivalently
\begin{eqnarray}
\frac{\Sigma}{\mu_c^2} = \frac{\mu}{2 \mu_c} \left(\frac{\mu}{\mu_c} Z^2 - \log(1 + Z^2) \right) \quad , \quad
\frac{\mu}{\mu_c}=\frac{\tan ^{-1} Z}{Z}
\end{eqnarray}
In particular, we obtain the series expansion
close to the transition when $\mu = \mu_c (1- \delta)$ with $\delta \ll 1$
\begin{eqnarray}
\Sigma(\delta\ll 1) = \mu_c^2\left[\frac{3 \delta ^2}{4}+\frac{3 \delta ^3}{20}+\frac{117
\delta ^4}{1400}+\frac{351 \delta
^5}{7000}+O\left(\delta ^6\right)\right] \quad ,
\end{eqnarray}
where the first term agrees with the general result (\ref{expansion0cont}).
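Both the parametric solution and this expansion can be cross-checked numerically (an illustrative sketch; $Z$ is the parameter introduced above):

```python
from math import atan, log, pi

def sigma_over_muc2(Z):
    # parametric solution in d = 2, with m = mu/mu_c = arctan(Z)/Z
    m = atan(Z) / Z
    return 0.5 * m * (m * Z**2 - log(1 + Z**2))

def sigma_series(delta):
    # small-delta expansion near the transition, mu = mu_c (1 - delta)
    return (3 * delta**2 / 4 + 3 * delta**3 / 20
            + 117 * delta**4 / 1400 + 351 * delta**5 / 7000)
```

In the limit $Z\to\infty$ (i.e. $\mu\to 0$) the parametric form approaches $\pi^2/8$, reproducing $\Sigma(\mu=0)|_{d=2}=\mu_c^2\,\pi^2/8=1/(128 t^2)$.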
The limit $\mu \to 0$ corresponds to $Z \to +\infty$ and we obtain the small $\mu$ expansion
\begin{equation}
\frac{\Sigma}{\mu_c^2} = \frac{\pi ^2}{8}+\frac{\mu}{\mu_c} \left(\log \left(
\frac{2 \mu}{\pi \mu_c} \right)-1\right)+\frac{2 \mu^2}{\pi ^2 \mu_c^2}-\frac{2 \left(\pi ^2-12\right) \mu^3}{3 \pi ^4 \mu_c^3}+O\left((\frac{\mu}{\mu_c})^4\right) \quad , \quad \mu_c= \frac{1}{4 \pi t}
\end{equation}
In particular, we find the finite value in $d=2$
\begin{equation}
\Sigma(\mu=0)|_{d=2}=\frac{1}{128 t^2}
\end{equation}
\noindent {\bf Results in dimension one, $d=1$}.
For the continuum model in $d=1$ the complexity \eqref{complexitycont3}
can be calculated inserting
\begin{eqnarray}
&& f_1(x) = \frac{i}{4 x} ( \frac{1}{\sqrt{1+ i x}} - \frac{1}{\sqrt{1- i x}} ) \\
&& g_1(x) = - \frac{1}{4} ( \frac{1}{\sqrt{1+ i x}} + \frac{1}{\sqrt{1- i x}} -2 )
\end{eqnarray}
Since we did not find a simpler expression in $d=1$, we give here only a
numerical evaluation for zero mass, from \eqref{zeromd}:
\begin{equation}
\Sigma(\mu=0)|_{d=1} \approx 0.375 \, t^{-2/3}
\end{equation}
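This estimate is easily reproduced by direct numerical integration of \eqref{zeromd} (an illustrative sketch: $f_1'$ is evaluated by central differences, and the substitution $z=(u/(1-u))^2$ maps the half-line to the unit interval while taming the slow $z^{-3/2}$ decay of the integrand):

```python
import cmath

def f1(z):
    # f_1(z) = (i/4z)(1/sqrt(1+iz) - 1/sqrt(1-iz)); real and even in z
    if z == 0.0:
        return 0.25  # limiting value f_1(0) = tilde I_2 in d = 1
    return ((1j / (4 * z)) * (1 / cmath.sqrt(1 + 1j * z)
                              - 1 / cmath.sqrt(1 - 1j * z))).real

def g1(z):
    # g_1(z) = -(1/4)(1/sqrt(1+iz) + 1/sqrt(1-iz) - 2); real and even in z
    return (-0.25 * (1 / cmath.sqrt(1 + 1j * z)
                     + 1 / cmath.sqrt(1 - 1j * z) - 2)).real

def sigma_d1(n=20000, h=1e-6):
    # sigma_1 = (2/3) int_0^inf dz |f_1'(z)| f_1(z)^(-2/3) g_1(z),
    # integrated by the midpoint rule after substituting z = (u/(1-u))^2
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        z = (u / (1 - u)) ** 2
        dz = 2 * u / (1 - u) ** 3
        fp = (f1(z + h) - f1(z - h)) / (2 * h)  # central difference for f_1'
        total += abs(fp) * f1(z) ** (-2 / 3) * g1(z) * dz
    return (2 / 3) * total / n
```

Multiplying the result by $3^{-1/3}$ also reproduces the value $C_{\infty,d=1}=0.260..$ quoted below.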
\section{3. Larkin length}
There are several conventions to define the Larkin length $L_c$, and they simply differ by some constant prefactors in
the weak disorder limit. Let us consider here the continuum model $\Delta(k)=-k^2$.
If we stick to the definition $L_c = (\kappa^2/R''''(0))^{1/3}$ given for $N=1$, $d=1$ in \cite{FLRT}, the correspondence is that
$\kappa$ there equals $t_0$ here, and $R(u)$ there is $R(u)=B(u^2)$, which, in particular, gives the relation between the derivatives: $R''''(u)=12B''(u^2)+48u^2B'''(u^2)+16u^4B''''(u^2)$, hence $R''''(0)=12B''(0)$. To remain consistent with
the convention in \cite{FLRT}, we then define for the case of general $N,d$
\begin{equation} \label{defLc2}
L_c := \left(\frac{t_0^2}{12 B''(0)}\right)^{\frac{1}{4-d}} = (t^2/3)^{\frac{1}{4-d}}
\end{equation}
where we recalled that $t=t_0/2\sqrt{B''(0)}$.
In general we expect, for the complexity defined in the large $L$ limit
\begin{equation} \label{s11}
\Sigma(\mu=0) = C_{N,d} L_c^{-d}
\end{equation}
where $C_{N,d}$ is a constant prefactor. In \cite{FLRT} it was numerically found that
$C_{1,1}\approx 0.46$. Here we show that in the large $N$ limit \eqref{s11}
indeed holds with
\begin{equation}
\lim_{N \to +\infty} C_{N,d} = C_{\infty,d} = \sigma_d 3^{- \frac{d}{4-d}}
\end{equation}
where the last equality is obtained by comparing \eqref{defLc2}, \eqref{s11} and the result \eqref{zeromd} for $\Sigma(\mu=0)$
where the constant $\sigma_d$ was defined. We thus obtain, for different dimensions:
\begin{equation}
C_{\infty,d=1} = 0.260.. \quad , \quad C_{\infty,d=2} = 0.00260
\end{equation} \\
{\bf Universal ratio.}
Finally it is interesting to consider the dimensionless ratio $\frac{\Sigma_{st}(\mu)}{\Sigma(\mu)}$ for
$\mu<\mu_c$.
It vanishes linearly near $\mu=\mu_c$, whereas at $\mu=0$, using the relation $\mu_c=(\tilde I_2 t^{-d/2})^{\frac{2}{4-d}}$, its value for the continuum model is a universal number (in $[0,1]$) depending only on $d$:
\begin{equation}
\frac{\Sigma_{st}(\mu=0)}{\Sigma(\mu=0)} = \frac{\frac{4-d}{2d} \mu_c^2}{ \sigma_d t^{-2 d/(4-d)}}
= \frac{4-d}{2d \sigma_d} (\tilde I_2)^{\frac{4}{4-d}}
\end{equation}
where $\sigma_d$ is defined in \eqref{zeromd}. This number is $0.63..$ for $d=1$ and $0.405..$ for $d=2$.
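These numbers follow directly from the formula above (a numerical sketch: $\sigma_2=1/128$ is exact, from the $d=2$ result $\Sigma(\mu=0)=1/(128 t^2)$, while for $d=1$ we use the numerical estimate $\sigma_1\approx 0.375$):

```python
from math import gamma, pi

def universal_ratio(d, sigma_d):
    # (4-d)/(2 d sigma_d) * (tilde I_2)^(4/(4-d)),
    # with tilde I_2 = Gamma(2 - d/2) / (2^d pi^(d/2))
    I2 = gamma(2 - d / 2) / (2**d * pi ** (d / 2))
    return (4 - d) / (2 * d * sigma_d) * I2 ** (4 / (4 - d))
```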
\section{4. Limit $q\to 0$ and resolvent}
Here we check that the real part of the mean resolvent of the Hessian is correctly predicted
by our theory. Let us consider the Hessian ${\cal K}^0[{\bf u}] \to {\cal K}[{\bf u}]$ around
configuration ${\bf u}$ defined in the text in \eqref{Hessian0} (we recall that in our units
$\mu_0 \to \mu$, $t_0 \to t$ and $J^2=4 B''(0) \to 1$). It was also defined in
Eq.~(3) of \cite{UsHess}, together with the Green function (Eqs.~(8,9,23) there)
\begin{equation}
{\cal G}(x,y;\lambda,{\bf u}) = \frac{1}{N} \sum_{i=1}^N \left(\frac{1}{\lambda - {\cal K}[{\bf u}]}\right)_{xi,yi}
\end{equation}
and with the mean resolvent
\begin{equation}
{\cal G}(\lambda,{\bf u}) = \frac{1}{L^d} \sum_x \overline{ {\cal G}(x,x;\lambda,{\bf u}) }
\end{equation}
where here ${\bf u}$ is a fixed typical configuration, which can be chosen to be ${\bf u}={\bf 0}$.
In that case the block matrix covariance structure is recalled in (6-7) there and is related to the one
of Wegner orbital models, and in the continuum limit in $x$, to matrix Anderson models.
The mean resolvent was calculated in \cite{UsHess} (in agreement with earlier results by Pastur)
with the result that it is the solution of the self-consistent equation (for $i p$)
\begin{equation} \label{pastur}
{\cal G}(\lambda,{\bf 0}) = i p \quad , \quad i p = \int_k \frac{1}{\lambda - \mu + t \Delta(k) - i p}
\end{equation}
On the other hand, the following equality allows an independent calculation of the mean resolvent
\begin{equation} \label{der0}
\frac{1}{N L^d} \partial_q|_{q=0} \partial_\mu \overline{|\det {\cal K}[{\bf 0}]|^q}
= \frac{1}{N L^d} \partial_\mu \overline{ \log |\det {\cal K}[{\bf 0}]| } = \frac{1}{N L^d} \partial_\mu
{\rm Re} \overline{ {\rm Tr} \log {\cal K}[{\bf 0}] }
= \frac{1}{N L^d} {\rm Re} \overline{ {\rm Tr} {\cal K}[{\bf 0}]^{-1} } = - {\rm Re} {\cal G}(\lambda=0,{\bf 0})
\end{equation}
Now we can evaluate $\overline{|\det {\cal K}[{\bf 0}]|^q}$, for any $q$, by a simple generalization of
the calculation in this paper, which corresponds to the particular case $q=1$. The case of general $q$ will be detailed and analyzed in a forthcoming publication \cite{inprep}; here we just sketch the calculation in the limit $q \to 0$. It is easy to see that, under the same assumptions as for $q=1$,
\begin{equation} \label{start2n}
\overline{|\det {\cal K}[{\bf 0}]|^q}|_{N \gg 1} \sim \prod_x \int_{\mathbb{R}}
\frac{d\xi(x)}{\sqrt{2 \pi/N}} e^{- N S[\xi] }
\quad , \quad
S[\xi]= \sum_x \frac{1}{2} \xi(x)^2 - \frac{q}{N} \left\langle {\rm Tr} \log | K + X + \mu I | \right\rangle_{\rm GOE's}
\end{equation}
The natural saddle point at large $N$ is $\xi(x)=\xi^*_q$, where $\xi^*_q$ solves the
equation
\begin{equation} \label{sp0n}
\xi^*_q = q f'(\xi^*_q + \mu) \quad , \quad f(\xi) := \int d\lambda \ln|\lambda+\xi| \, \rho_{K}(\lambda)
\end{equation}
From this we obtain
\begin{equation}
\frac{1}{N L^d} \partial_\mu \log \overline{|\det {\cal K}[{\bf 0}]|^q} = \partial_\mu \left( - \frac{1}{2} (\xi_q^*)^2 + q f(\xi_q^* + \mu) \right) = q f'(\xi^*_q + \mu) = \xi^*_q
\end{equation}
Hence, taking a derivative w.r.t.\ $q$ and using \eqref{der0}, we find that this method yields the
mean resolvent as
\begin{equation} \label{resres}
{\rm Re} \, {\cal G}(\lambda=0,{\bf 0}) = - \partial_q|_{q=0} \xi^*_q := - G
\end{equation}
where $G$ is by definition the leading-order coefficient
in the small-$q$ expansion $\xi^*_q = q G + O(q^2)$.
Now it is easy to see that \eqref{sp2} in the text generalizes into
\begin{eqnarray} \label{sp2n}
\xi_q^* = - q {\rm Re} [ i r_{- \xi_q^*- \mu + i 0^+} ]
\end{eqnarray}
which, in the limit $q \to 0$ becomes
\begin{eqnarray} \label{22}
G = - {\rm Re} [ i r_{- \mu + i 0^+} ]
\end{eqnarray}
From \eqref{sc} in the text we see that $i r_{- \mu}$ satisfies
\begin{equation} \label{scn}
i r_{-\mu} = \int_k \frac{1}{- \mu + t \Delta(k) - i r_{-\mu}}
\end{equation}
Comparing with \eqref{pastur} we see that $i r_{-\mu}= i p$,
hence \eqref{22} implies $G = - {\rm Re} (i p)$, so that
\eqref{resres} indeed reproduces the correct result for the real
part of the mean resolvent ${\rm Re} \, {\cal G}(\lambda=0,{\bf 0})$.
\section{Introduction}
\label{sec:introduction}
From cellular structures to organisms and populations, biological systems are governed by principles of self-organisation.
The intricate cycles of autocatalytic reactions that constitute cell metabolism, the highly orchestrated processes of nucleic acid transcription and translation, the replication and segregation of chromosomes, the cytoskeletal assemblies and rearrangements that mechanically drive important cellular processes like cell division and cell motility, the morphogenesis of complex tissue from a single fertilised egg -
all of these processes rely on the generation of structures and gradients based on molecular self-organisation.
Frequently, the assembly and maintenance of these structures is accompanied by spatial and temporal protein patterning.
What are the principles underlying self-organising processes that result in protein patterns?
Though the term `self-organisation' is frequently employed, as it is here, in the context of complex systems, it needs to be emphasised that there is no generally accepted theory of self-organisation that explains how internal molecular processes are able to coordinate the interactions between a system's components such that order and structure emerge.
The field which has arguably contributed most to a deeper understanding of emergent phenomena is `nonlinear dynamics', especially with concepts such as `catastrophes' \cite{Thom:1983}, `Turing instabilities' \cite{Turing:1952}, and `nonlinear attractors' \cite{Guckenheimer:2013}.
However, although pattern formation and its underlying concepts have found their way into textbooks \cite{Cross_Greenside:Book}, we are far from answering the above question in a comprehensive and convincing way.
This chapter will highlight some of the recent progress in the field, but also address some of the fascinating questions that remain open.
In contrast to the conventional representation of pattern-forming systems in classical texts, our exposition will be closely tied to the analysis of quantitative models for specific biological systems.
At first, this might appear to involve a loss of generality.
However, as we will see, only by studying the actual physical processes that give rise to what we call self-organisation will we be able to uncover its key features in the first place.
These key aspects can then be generalised again by identifying the corresponding processes in other systems.
Here, we will mainly, but not exclusively, focus on a model for Min protein dynamics, a system of self-organising proteins that is essential for cell division in the bacterium \textit{Escherichia coli}.
The Min system offers an ideal combination of a broad and rich phenomenology with accessibility to theoretical and experimental analyses on a quantitative level.
As we will see, a major finding from the study of the Min system is the role of mass-conserved interactions and of system geometry in the understanding of self-organised pattern formation.
\section{Intracellular protein patterns}
\label{sec:intracellular_protein_patterns}
The formation of protein patterns and the localisation of protein clusters is a fundamental prerequisite for many important processes in bacterial cells.
Examples include Min oscillations that guide the positioning of the Z-ring to midcell in \textit{Escherichia coli}, the localisation of chemotactic signalling arrays and the positioning of flagella, as well as chromosome and plasmid segregation.
In all these examples, experimental evidence supports mechanisms based on reaction-diffusion dynamics.
Moreover, the central elements of the biochemical reaction circuits driving these processes are P-loop NTPases.
These proteins are able to switch from an NTP-bound `active' form that preferentially binds to an intracellular interface (membrane or nucleoid) to an inactive, freely diffusing, NDP-bound form in the cytosol.
Interestingly, these types of pattern-forming mechanisms are not restricted to prokaryotic cells, but are found in eukaryotic cells as well.
An important example is cell polarisation, an essential developmental process that defines symmetry axes or selects directions of growth.
Signalling molecules accumulate in a restricted region of the inner surface of a cell's plasma membrane where they initiate further downstream processes.
For example, in the yeast \textit{Saccharomyces cerevisiae}, cell polarisation determines the position of a new growth or bud site.
The central polarity regulator responsible for this process is Cdc42, a small GTPase of the Rho family \cite{Wedlich-Soldner_etal:2003}.
Similarly, cell polarity plays an important role in proper stem cell division \cite{Florian_Geiger:2010} and in plant growth processes such as pollen tube or root hair development \cite{Molendijk_etal:2001, Gu_etal:2003}.
Another intriguing example of self-organised polarisation occurs in the \textit{Caenorhabditis elegans} zygote through the action of mutually antagonistic, so called partitioning-defective (PAR) proteins \cite{Goehring_etal:2011}.
Moreover, the crucial role of protein pattern formation in animal cell cytokinesis is highlighted by cortical waves of Rho activity and F-actin polymerization, recently observed in frog and starfish oocytes and embryos \cite{Bement_etal:2015}.
Yet another system where protein patterns play an important role is the transport of motor proteins along cytoskeletal filaments.
We will not elaborate on this system in this review, but would like to note that pattern formation in these systems is based on similar principles as for the other systems.
For instance, microtubules are highly dynamic cytoskeletal filaments, which continually assemble and disassemble through the addition and removal of tubulin heterodimers at their ends~\cite{Desai_Mitchison:2003}.
It was recently shown that traffic jams of molecular motors on microtubules play a key regulatory mechanism for the length control of microtubules \cite{Varga2006, Varga2009, Reese_etal:2011, Melbinger_etal:2012, Reese2014}.
\subsection{MinCDE oscillations in \textit{E. coli}}
\label{sec:min_oscillations}
Proteins of the Min system in the rod-shaped bacterium \textit{E. coli} show pole-to-pole oscillations \cite{Raskin_deBoer:1999a, Raskin_deBoer:1999b, Hu_Lutkenhaus:1999, Lutkenhaus:2007}. A combination of genetic, biochemical, and cell biological studies has identified the following key features of the underlying interaction network:
(1) The ATPase MinD, in its ATP-bound dimeric form, cooperatively binds to the cytoplasmic membrane \cite{Szeto_etal:2002, Hu_Lutkenhaus:2003, Lackner_etal:2003,Mileykovskaya_etal:2003}, and forms a complex with MinC that inhibits Z-ring formation \cite{Hu_etal:1999}.
(2) MinD then recruits its ATPase Activating Protein (AAP) MinE to the membrane, triggering MinD's ATPase activity and thereby stimulating detachment of MinD from the membrane in its monomeric form \cite{Hu_Lutkenhaus:2001}.
(3) Subsequently, MinD undergoes nucleotide exchange in the cytosol and rebinds to the membrane \cite{Hu_etal:2002}.
(4) Notably, MinE's interaction with MinD converts it from a latent to an active form, by exposing a sequestered MinD--interaction region as well as a cryptic membrane targeting sequence \cite{Park_etal:2011, Shih_etal:2011}.
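The logic of steps (1)--(4) can be caricatured by a minimal mass-conserved reaction--diffusion scheme: a cytosolic density $c$ and a membrane-bound density $m$ exchange through attachment (with a cooperative feedback term) and detachment, while diffusing at very different rates. The sketch below is purely illustrative (the rates and the quadratic feedback term are assumptions, not measured Min parameters); its one structural guarantee, built into the update rule, is conservation of the total protein number, the key feature of such networks highlighted above.

```python
import numpy as np

def laplacian_noflux(f, dx):
    # 1D Laplacian with zero-flux (Neumann) boundaries: reflect edge values
    lap = np.empty_like(f)
    lap[1:-1] = f[2:] - 2 * f[1:-1] + f[:-2]
    lap[0] = f[1] - f[0]
    lap[-1] = f[-2] - f[-1]
    return lap / dx**2

def step(c, m, dt, dx, Dc=10.0, Dm=0.1, kon=0.1, kfb=1.0, koff=0.5):
    # cytosol -> membrane attachment (with cooperative feedback ~ m^2) and detachment;
    # the exchange terms appear with opposite signs, so total protein is conserved
    att = (kon + kfb * m**2) * c
    det = koff * m
    c_new = c + dt * (Dc * laplacian_noflux(c, dx) - att + det)
    m_new = m + dt * (Dm * laplacian_noflux(m, dx) + att - det)
    return c_new, m_new
```

With no-flux boundaries, both the diffusion stencil and the exchange terms conserve the spatially summed density exactly, reflecting the fact that the dynamics only shuttles proteins between cytosol and membrane.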
\begin{figure}[b]
\includegraphics[width=\linewidth]{exp_min_kymograph}
\caption{\textbf{Oscillatory patterns of Min proteins \textit{in vivo}.} \textit{Left:} Time-averaged MinD fluorescence intensity profile along the red rectangle shown in the kymograph. \textit{Middle:} Kymograph of pole-to-pole oscillations of MinD and MinE in cells of normal length (shorter than $5 \, \mu$m). \textit{Right:} Micrographs of GFP-MinD and MinE-GFP \textit{in vivo}. Adapted from Ref. \protect\cite{Loose_etal:2011_review}. }
\label{fig:exp_min_kymograph}
\end{figure}
All of these biochemical features give us highly valuable molecular information, but in themselves they do not suffice to explain the emergent phenomenon of Min oscillations.
There are basically two unknowns.
First, the detailed dynamic processes underlying, for example, cooperative membrane binding of MinD, as well as the MinE conformational switch are poorly understood on a mechanistic molecular level.
At present, one can only speculate on them based on structural data. For example, Hill coefficients have been measured for MinD ATP $(\sim 2)$ and ADP $(\sim 1)$ \cite{Mileykovskaya_etal:2003}, indicating that recruitment may be associated with dimerisation.
Secondly, and perhaps even more importantly, even if all the details of the molecular processes were known, one would still not know which process is responsible, and to what degree, for any specific macroscopic property of the dynamic Min pattern.
Furthermore, how these processes are affected by changing protein expression levels and cell geometry is unclear, \textit{a priori}.
Both of these obstacles represent major challenges for the field, and can be overcome only by a combined experimental and theoretical approach.
The main biological function of Min oscillations is to regulate formation and positioning of the Z-ring \cite{Lutkenhaus:2007}, comprised of curved, overlapping FtsZ filaments, which interact with a range of accessory proteins that together make up the cytokinetic machinery \cite{Lutkenhaus:2012}.
The pole-to-pole oscillations of the MinD-ATP/MinC complex result in a time-averaged density profile of MinC that is highest at the cell poles and lowest at midcell.
Since MinC acts as an antagonist of FtsZ assembly, Min oscillations inhibit Z-ring formation at the poles and restrict it to midcell \cite{Hu_etal:1999}.
How self-organisation into the Z-ring occurs remains unknown and is subject to extensive research \cite{Loose_Mitchison:2014, Denk_etal:2016, Ramirez_etal:2016}.
\begin{widetext}
\begin{SCfigure}
\includegraphics[width=0.7\linewidth]{Cdc42-model-by-Ben}
\caption{\textbf{Reaction network of the Cdc 42 system in yeast} with a guanine exchange factor (Cdc24) and GAPs controlling the hydrolytic activity of Cdc42. The polarisation relies on activation of Cdc42 through a Bem1-Cdc24-Cla4 complex and on extraction of Cdc42 from membranes by the GDI Rdi1.}
\label{fig:cell_polarity}
\end{SCfigure}
\end{widetext}
\subsection{Cell polarity in yeast}
\label{sec:cell_polarity}
Polarity establishment in budding yeast relies on crosstalk between feedback loops, one based on the actin cytoskeleton, the other on a reaction-diffusion system \cite{Wedlich-Soldner_etal:2003}.
Both are regulated by the Rho-type GTPase Cdc42. To fulfil its functions, it must constantly cycle between a GTP-bound (active) and a GDP-bound (inactive) state.
In budding yeast, activation of Cdc42 is controlled by a single guanine nucleotide exchange factor (GEF), Cdc24, and the hydrolytic activity of Cdc42 is promoted by several GTPase-activating proteins (GAPs).
In addition, Cdc42 is extracted from membranes by a single Rho-guanine nucleotide dissociation inhibitor (GDI), Rdi1 \cite{Bi_Park:2012}; see Fig. \ref{fig:cell_polarity} for the biochemical network.
Initially two independent feedback loops were identified: one based on the actin cytoskeleton and one based on a reaction-diffusion system that \textit{in vivo} depends on the scaffold protein Bem1 \cite{Bi_Park:2012}.
A combined experimental and theoretical study has shown that a combination of actin- and GDI-dependent recycling of the GTPase Cdc42 is required to achieve rapid, robust and focused polarisation \cite{Freisinger_etal:2013}.
However, there are still many open issues on the detailed interplay between these two mechanisms.
The GDI-mediated polarisation in itself is reasonably well understood.
Theoretical models differ in how they describe the recruitment of the GEF (Cdc24) towards active Cdc42 on the membrane \cite{Goryachev_Pokhilko:2008, Klunder_etal:2013}.
Experimental data~\cite{Freisinger_etal:2013} support a reaction network where recruitment of Cdc24 is mediated by Bem1 (Fig.~\ref{fig:cell_polarity}): Cytosolic Bem1 is targeted to the membrane by interaction with active Cdc42 or other Cdc42-GTP-bound proteins such as Cla4 and subsequent binding of Bem1 to the membrane~\cite{Bose_etal:2001, Butty_etal:2002, Kozubowski_etal:2008}. Once bound to the membrane it recruits the Cdc24 from the cytosol to the membrane \cite{Bose_etal:2001, Butty_etal:2002}.
Membrane-bound Cdc24 then enhances both the attachment and activation of cytosolic Cdc42-GDP to the membrane and the nucleotide-exchange of membrane-bound Cdc42-GDP \cite{Freisinger_etal:2013, Klunder_etal:2013}.
A mathematical model \cite{Klunder_etal:2013} based on this reaction scheme accurately predicts phenotypes associated with changes in Cdc42 activity and recycling, and suggests design principles for polarity establishment through coupling of two feedback loops.
Recently, there has even been evidence for a third feedback loop~\cite{Bendezu_etal:2015}.
In a recent \textit{in vivo} study the essential component Bem1 was deleted from the reaction-diffusion feedback loop \cite{Laan_etal:2015}.
Interestingly, after the mutant was allowed to evolve for about $1,000$ generations, a line was recovered that had regained the ability to polarise, despite the absence of Bem1.
Moreover, the newly evolved network had actually lost more components, resulting in a simpler reaction-diffusion system.
The structure of this minimal network has yet to be identified \cite{Brauns_etal:unpubl}.
\subsection{Protein pattern formation in animal cell polarisation and cytokinesis}
\label{sec:other_system}
As we have seen for the Min system in \textit{E. coli} and Cdc42 in budding yeast, protein patterns are an elegant way to convey intracellular positional information.
Thus, it is not surprising that more complex organisms also employ protein pattern formation to control essential processes including cell polarisation, cytokinesis, embryogenesis and development.
An animal's body plan is typically specified during embryogenesis. In this context, the establishment and stable maintenance of cell polarity is a fundamental feature of developmental programs.
So-called partitioning defective (PAR) proteins are key molecular players that promote symmetry breaking and establish intracellular polarity in diverse animal cells \cite{Goldstein_Macara:2007}. Here, we focus on the PAR network in the nematode worm \textit{C. elegans}, as this system has been particularly well studied.
\textit{C. elegans} PAR proteins are required for asymmetric cell division of the zygote, which they achieve by generating two distinct and complementary membrane domains with the aid of actomyosin flows \cite{Goehring_etal:2011, Munro_etal:2004}.
Several ``design principles'' of the PAR network have been established by a combination of experiments and theory \cite{Goehring:2014}.
A core feature of PAR polarity is the mutual antagonism between anterior and posterior PAR components (Fig. \ref{fig:PAR}), which preferentially accumulate on the anterior and posterior halves of the membrane respectively while being excluded from the opposite half.
The maintenance of this polarity is highly dynamic and involves mobility of PAR proteins in the cytosol, their cross-inhibition via phosphorylation as well as additional feedback loops \cite{Goehring:2014}.
Importantly, the mutual antagonism in the PAR network relies on reversible switching of PAR proteins between ``inactive'', rapidly diffusing cytosolic and ``active'', slowly diffusing membrane-bound states \cite{Goehring:2014}, one of the key features of the pattern-forming protein networks discussed in this chapter.
\begin{widetext}
\begin{SCfigure}
\includegraphics[width=0.65\linewidth]{Par_polarity_figure}
\caption{\textbf{Cell polarisation in the \textit{C. elegans} embryo.} A reaction-diffusion network of mutually antagonistic anterior and posterior PAR proteins, switching between ``active'' membrane-bound and ``inactive'' cytosolic states, sustains opposing membrane domains in the \textit{C. elegans} embryo. Anterior and posterior PAR components are shown in red and blue, respectively. Adapted from Ref.~\protect\cite{Goehring_Grill:2013}, copyright 2012 with permission from Elsevier, and Ref.~\protect\cite{Goehring_etal:2011} with permission from AAAS.
}
\label{fig:PAR}
\end{SCfigure}
\end{widetext}
\begin{figure}[b]
\includegraphics[width=\linewidth]{cortical_waves_figure}
\caption{\textbf{Cortical waves of Rho activity and F-actin polymerisation involved in animal cell cytokinesis.}
\textbf{A}, Possible scheme of interactions underlying wave formation. Inactive GDP-bound Rho (RD) binds to the membrane, where it is activated to GTP-bound Rho (RT) via nucleotide exchange in an autocatalytic, GEF-dependent manner. Subsequently, the theoretical model assumes that coupled F-actin polymerisation (F) exerts a negative feedback on Rho activity converting it back into its inactive form \protect\cite{Bement_etal:2015}.
\textbf{B}, Fluorescence image of cortical waves of Rho (malachite) and F-actin (copper) in an Ect2-overexpressing starfish oocyte. Adapted from Ref.~\protect\cite{Bement_etal:2015} by permission from Macmillan Publishers Ltd: Nature Cell Biology \protect\cite{Bement_etal:2015}, copyright 2015.}
\label{fig:starfish}
\end{figure}
Another intriguing example of protein pattern formation occurs during animal cell cytokinesis.
This process involves the small GTPase Rho, whose localised activation directs assembly of the cytokinetic machinery, consisting of F-actin and myosin-2, in the equatorial cortex \cite{Green_etal:2012}.
Recently, cortical waves of Rho activity and F-actin polymerisation were discovered in frog and echinoderm oocytes and embryos \cite{Bement_etal:2015}.
These protein patterns exhibited excitable dynamics and were proposed to emerge through a reaction-diffusion mechanism involving positive feedback during Rho activation and delayed negative feedback exerted by F-actin (Fig. \ref{fig:starfish}). In this view, Rho attaches to the plasma membrane in its inactive GDP-bound form.
On the membrane, Rho is then converted to its GTP-bound active form in an autocatalytic manner, dependent on the Rho GEF Ect-2. Subsequently, F-actin is assumed to mediate a negative feedback on Rho, converting it back to its inactive form \cite{Bement_etal:2015}.
Remarkably, this reaction-diffusion network shares many similarities with our previous examples, such as reversible protein attachment to a lipid membrane, switching between different NTP-bound states and coupling of feedback loops.
\subsection{The switch paradigm}
\label{sec:switch_paradigm}
The molecular mechanisms underlying the spatio-temporal organisation of cellular components in bacteria are frequently linked to P-loop ATPases such as ParA and MinD \cite{Gerdes_etal:2010, Lutkenhaus:2012, Bange_Sinning:2013}.
ParA and MinD proteins belong to a family of proteins known as the ParA/MinD superfamily of P-loop ATPases \cite{Lutkenhaus:2012}.
Both are known to form self-organised dynamic patterns at cellular interfaces, ParA on the nucleoid and MinD on the cell membrane.
The nucleotide state of these ATPases determines their subcellular localisation: While the ATP-bound form dimerises and binds to the appropriate surface, the ADP-bound form is usually a monomer with a significantly reduced affinity for surface binding that freely diffuses in the cell.
Importantly, both ParA and MinD have a partner protein (ParB and MinE, respectively) that stimulates their ATPase activity and causes them to detach from their respective surfaces.
Moreover, there is a delay due to nucleotide exchange between the release of the ADP-bound form from the surface and its subsequent rebinding in the dimeric ATP-bound form.
These interactions enable proteins to cycle between surface-bound and cytosolic states, depending on the phosphorylation state of their bound nucleotide.
The surface-bound state is typically associated with spatially localised function (e.g.\/ the downstream regulation of other proteins on the surface), whereas the cytosolic state enables spatial redistribution and formation of surface bound patterns of these proteins.
Despite the striking similarities on a molecular level, the biological functions of ParA and MinD differ significantly.
The Min system directs the placement of the division site at midcell by inhibiting the assembly of FtsZ into a ring-like structure (Z-ring) close to the cell poles.
In contrast, ParA is involved in chromosome and plasmid segregation. Several other ParA-like proteins have been identified that are also important for the correct localisation of large cellular structures at the cell poles, at midcell or along the cell length \cite{Lutkenhaus:2012}.
One of these is PomZ in \textit{M. xanthus}.
PomZ is part of a protein system that -- like the Min system -- is important for Z-ring formation. However, in contrast to the Min system, the Pom system positively regulates the formation of the FtsZ ring at midcell \cite{Treuner-Lange_Sogaard-Andersen:2014}.
Apart from the cell division and the chromosome partitioning machineries, there are various other multiprotein complexes that are positioned by self-organising processes based on P-loop NTPases.
For example, the GTPase FlhF and the ATPase FlhG constitute a regulatory circuit essential for defining the distribution of flagella in bacterial cells \cite{Bange_Sinning:2013, Schuhmacher_etal:2015}.
\section{Mass-conserving reaction-diffusion systems}
\label{sec:MaRD}
All of the examples of intracellular pattern-forming systems discussed in the previous section share some common features.
They are reaction-diffusion systems in confined intracellular space, where proteins cycle between the cytosol and the cell membrane \cite{halatek_brauns_frey:2018}.
On the time scale on which these patterns form, net change in the levels of the proteins involved is negligible and thus the \textit{copy number within each protein species is conserved}.
The reactions correspond to transitions of each protein species between a finite number of different states (membrane-bound, cytosolic, active, inactive, etc.), and these states play different functional roles in the corresponding biochemical circuit.
For example, only membrane-bound MinD induces positive and negative feedback by recruiting MinD and MinE from the cytosol to the membrane.
Hence, the protein dynamics can be understood as a reaction-diffusion system where diffusion takes place in different spatial domains (membrane and cytosol), and where reactions are sequences of state changes induced by protein-nucleotide, protein-protein, and protein-membrane interactions.
\textit{Mass-conserving} dynamics is the generic case for intracellular dynamics.
Because the production of proteins is a resource-intensive process, any mechanism that utilises production and degradation as pattern forming mechanisms would be highly inefficient and wasteful\footnote{Of course, such a process would also be limited by the duration of protein synthesis.}.
This excludes activator-inhibitor mechanisms \cite{Segel_Jackson:1972}, since they are based on the interplay between autocatalytic production of a (slow diffusing) activator and its degradation by a (fast diffusing) inhibitor.
Though such a mechanism is frequently invoked as a paradigm in biological pattern formation \cite{Kondo_Miura:2010}, it is actually irreconcilable with the fundamental physical processes on which intracellular pattern formation is based \cite{halatek_brauns_frey:2018}.
This in turn implies that the study of biological systems should reveal hitherto unknown mechanisms for pattern formation. Recent research shows that this is indeed the case \cite{halatek_frey:2018}. In particular, explicitly accounting for mass conservation yields the total protein densities as system control parameters. As we will see, these are crucial for the theoretical understanding of the experimentally observed phenomena.
\subsection{Cellular geometry: membrane and cytosol}
Figure \ref{fig:cell_geometry} illustrates the geometry of a rod-shaped prokaryotic cell.
It is comprised of three main compartments: the cell membrane, the cytosol, and the nucleoid.
There are two major facts that are relevant for intracellular pattern formation.
First, the diffusion constants in the cytosol and on the cell membrane are vastly different. For example, currently accepted values for Min proteins in \textit{E. coli} are of the order of $D_c \,{\approx}\, 10 \, \mu$m$^2$/s, and $D_m \,{\approx}\, 0.01 \, \mu$m$^2$/s, respectively.
Second, due to the rod-like shape, the ratio of cytosolic volume to membrane area differs markedly between polar and midcell regions.
Beyond this local variation of volume to surface ratio, the overall ratio of cytosol volume to membrane area depends on the shape of the cell.
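To put the disparity of the two diffusion constants in quantitative perspective, the following back-of-the-envelope script compares the implied diffusive time scales; the cell length of $4\,\mu$m is an assumed, typical value, not a quantity taken from the cited measurements.

```python
# Order-of-magnitude comparison of diffusive time scales in a rod-shaped
# bacterium. D_c and D_m are the values quoted in the text for Min proteins
# in E. coli; the cell length L is an assumed, typical value.
L = 4.0      # cell length (micrometres, assumed)
D_c = 10.0   # cytosolic diffusion constant (um^2/s)
D_m = 0.01   # membrane diffusion constant (um^2/s)

# 1D diffusive traversal time over a distance L: t ~ L^2 / (2 D)
t_cytosol = L**2 / (2.0 * D_c)    # -> 0.8 s
t_membrane = L**2 / (2.0 * D_m)   # -> 800 s

print(t_cytosol, t_membrane)
```

A cytosolic protein thus explores the entire cell in well under a second, whereas a membrane-bound protein is effectively immobile on that time scale; this separation of scales is central to the gradient arguments that follow.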
\begin{figure}[h!]
\includegraphics[width=0.7\linewidth]{cell_geometry}
\caption{\textbf{Schematic representation of the geometry of a rod-shaped bacterial cell.} There are three main compartments: cell membrane, cytosol, and nucleoid. The diffusion constants in these compartments will, in general, be different.
}
\label{fig:cell_geometry}
\end{figure}
\subsection{Reaction-diffusion equations for the Min system}
\label{sec:MaRD_Min}
The biochemical reactions of the Min system outlined in section \ref{sec:min_oscillations} are summarised in Fig.~\ref{fig:min_de_network}.
In the following we will refer to this scheme as the \textit{skeleton network}, as it accounts only for those molecular interactions that are (presently) believed to be essential for Min protein phenomenology.
For a quantitative analysis, this skeleton biochemical network has to be translated into a mathematical model~\cite{Huang_etal:2003, Halatek_Frey:2012}.
\begin{figure}[b]
\includegraphics[width=0.75\linewidth]{min_de_network}
\caption{\textbf{Skeleton MinCDE network:} Cytosolic MinD-ATP (T) attaches to the membrane, and recruits MinD-ATP and MinE (E) from the cytosol. Recruitment of MinE leads to the formation of MinDE complexes. MinE in the MinDE complexes stimulates ATP hydrolysis by MinD and thereby triggers detachment and dissociation of membrane-bound MinDE complexes into cytosolic MinD-ADP (D) and MinE.
}
\label{fig:min_de_network}
\end{figure}
We denote the volume concentrations of MinE, MinD-ADP, and MinD-ATP in the cytosol by $c_{E}^{}$, $c_{DD}^{}$, and $c_{DT}^{}$.
Since the only reaction that takes place in the cytosol is reactivation of cytosolic MinD-ADP by nucleotide exchange (with rate $\lambda$) to MinD-ATP, the ensuing reaction-diffusion equations read:
\begin{subequations}
\begin{align}
\partial_{t}c_{DD}^{}
&= D_{c}\nabla^{2} c_{DD}^{} -
\lambda \, c_{DD}^{} \, ,
\label{eq:de1} \\
\partial_{t}c_{DT}^{}
&= D_{c}\nabla^{2}c_{DT}^{} +
\lambda \, c_{DD}^{} \, ,
\label{eq:de2}\\
\partial_{t}c_{E}^{}
&= D_{c}\nabla^{2}c_{E}^{} \, ,
\label{eq:de3}
\end{align}
\label{eq:RD_cytosol}
\end{subequations}
The diffusion coefficients are typically distinct for each protein configuration; for simplicity, we distinguish only between cytosolic ($D_c$) and membrane-bound ($D_m$) states.
Only the active form of MinD, $c_{DT}$, can attach to the membrane, either spontaneously with a rate $k_D$ or facilitated by MinD-ATP already bound to the membrane (recruitment) with a rate $k_{dD}^{} m_{d}^{}$, where $m_{d}^{}$ denotes the areal density of MinD-ATP on the membrane.
Overall then, the reaction term reads $R_{D}^+ = (k_{D}^{} + k_{dD}^{} \, m_{d}^{}) \, \tilde c_{DT}^{}$, where the tilde on the cytosolic concentration of MinD-ATP indicates that the value must be taken in the immediate vicinity of the membrane.
Membrane bound MinD-ATP can also recruit cytosolic MinE to the membrane and thereby form MinDE complexes.
The corresponding reaction term reads $R_{E}^+ = k_{dE}^{} \, m_{d}^{} \, \tilde c_{E}^{}$.
Finally, MinE in the MinDE complexes stimulates ATP hydrolysis by MinD and hence facilitates detachment and decay of membrane bound MinDE complexes into cytosolic MinD-ADP and MinE, $c_{E}^{}$, with rate $k_{de}^{}$.
This process is described by the reaction term $R_{DE}^- = k_{de}^{} \, m_{de}^{}$ where $m_{de}^{}$ denotes the areal density of MinDE complexes on the membrane.
Taken together, the reaction-diffusion equations on the membrane read
\begin{subequations}
\begin{align}
\partial_{t}m_{d}^{}
&= D_{m}\nabla_{m}^{2}m_{d}^{} +
R_{D}^+ (m_{d}^{}, \tilde c_{DT}^{}) -
R_{E}^+(m_{d}^{}, \tilde c_{E}^{}),
\label{eq:de4}\\
\partial_{t}m_{de}^{}
&= D_{m}\nabla_{m}^{2}m_{de}^{} +
R_{E}^+(m_{d}^{}, \tilde c_{E}^{}) -
R_{DE}^-(m_{de}^{}) \, ,
\label{eq:de5}
\end{align}
\label{eq:RD_membrane}
\end{subequations}
where the index $m$ denotes the Laplacian for membrane diffusion.
These two sets of reaction-diffusion equations, Eq.~\ref{eq:RD_cytosol} and Eq.~\ref{eq:RD_membrane}, are complemented by nonlinear reactive boundary conditions at the membrane surface that guarantee local particle number conservation.
In other words, the chemical reactions involving both membrane-bound and cytosolic proteins equal the diffusive flux onto $(-)$ and off $(+)$ the membrane (the index $\perp$ denoting the outward normal vector at the boundary):
\begin{subequations}
\begin{align}
\left. D_{c}\nabla_{\perp} c_{DD}^{}\right|_{m}
& = + R_{DE}^-(m_{de}^{})
\, ,
\label{eq:bc1}\\
\left. D_{c}\nabla_{\perp} c_{DT}^{}\right|_{m}
& = - R_{D}^+ (m_{d}^{}, \tilde c_{DT}^{}) \, ,
\label{eq:bc2}\\
\left. D_{c}\nabla_{\perp} c_{E}^{}\right|_{m}
& = + R_{DE}^-(m_{de}^{}) -
R_{E}^+(m_{d}^{}, \tilde c_{E}^{})
\, .
\label{eq:bc3}
\end{align}
\label{eq:RD_boundary}
\end{subequations}
For example, Eq.~\ref{eq:bc1} states that detachment of MinD-ADP following hydrolysis on the membrane is balanced by gradients of MinD-ADP in the cytosol.
In general, any exchange of proteins between the membrane and cytosol leads to diffusive fluxes and thereby to protein gradients in the cytosol since the membrane effectively acts as a sink or source of proteins.
These gradients are essential for understanding the mechanisms underlying intracellular pattern formation, and preclude a naive interpretation of the cytosol as a spatially uniform reservoir.
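The magnitude of these cytosolic gradients can be estimated from the model itself. At steady state, Eq.~\ref{eq:de1} gives $D_c \nabla^2 c_{DD} = \lambda c_{DD}$, so the MinD-ADP profile decays away from the membrane over a penetration depth $\ell = \sqrt{D_c/\lambda}$. The short script below evaluates this length, using the diffusion constant quoted above and the nucleotide exchange rate $\lambda = 5\,s^{-1}$ employed later in the text.

```python
import math

# Penetration depth of the cytosolic MinD-ADP gradient: detached MinD-ADP
# diffuses with D_c while being reactivated at rate lambda, so the
# steady-state profile decays as exp(-x / ell) with ell = sqrt(D_c / lambda).
D_c = 10.0   # cytosolic diffusion constant (um^2/s), quoted in the text
lam = 5.0    # nucleotide exchange rate (1/s), value used later in the text

ell = math.sqrt(D_c / lam)
print(ell)   # ~1.4 um, comparable to the cell size
```

Since $\ell \approx 1.4\,\mu$m is comparable to the length of an \textit{E. coli} cell, the cytosol indeed cannot be treated as a well-mixed reservoir.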
For the model to be complete, one needs to know the values of all of the reaction rates.
However, the estimation and choice of system parameters is a highly nontrivial problem.
Nonlinear systems are generically very sensitive to parameter changes, whereas biological function has to be sufficiently robust against variations in the kinetic rates and diffusion coefficients (e.g.\/ caused by temperature changes).
In addition, only rarely are the system parameters known quantitatively from experiments.
For the Min system only the diffusion coefficients have been measured and estimates for the nucleotide exchange rate $\lambda$ \cite{Meacci_etal:2006} and the Min protein densities exist \cite{Shih_etal:2002}.
However, a theoretical investigation of the skeleton model by means of linear stability analysis and numerical simulations was able to identify parameter regimes where the experimentally observed patterns are formed \cite{Halatek_Frey:2012}.
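To make the structure of Eqs.~\ref{eq:RD_cytosol}--\ref{eq:RD_boundary} concrete, the sketch below integrates a one-dimensional caricature of the skeleton model: the cytosol is an interval, and the two cell poles are zero-dimensional membrane patches at its ends. All kinetic rates and initial densities are illustrative guesses, not the fitted parameters of Ref.~\cite{Halatek_Frey:2012}; the point of the sketch is that the reactive boundary conditions implement particle exchange rather than sources or sinks, so the total copy numbers of MinD and MinE are conserved exactly.

```python
import numpy as np

# Minimal 1D caricature of the skeleton Min model: cytosolic fields on an
# interval, membrane densities m_d, m_de at the two poles. All parameter
# values are illustrative, not fitted.
L, N = 4.0, 80
dx = L / N
D_c = 10.0                      # cytosolic diffusion (um^2/s)
lam = 5.0                       # nucleotide exchange (1/s)
k_D, k_dD, k_dE, k_de = 0.1, 0.1, 0.4, 0.7   # illustrative rates
dt = 0.1 * dx**2 / D_c          # well below the explicit-Euler limit

c_DD = np.full(N, 1.0)          # cytosolic MinD-ADP
c_DT = np.full(N, 1.0)          # cytosolic MinD-ATP
c_E = np.full(N, 0.5)           # cytosolic MinE
m_d = np.array([0.3, 0.0])      # membrane MinD-ATP at [left, right] pole
m_de = np.array([0.0, 0.0])     # membrane MinDE complexes
edge = np.array([0, N - 1])     # grid cells adjacent to the poles

def lap(c):
    """Conservative reflecting (no-flux) Laplacian."""
    cp = np.concatenate(([c[0]], c, [c[-1]]))
    return (cp[2:] - 2.0 * cp[1:-1] + cp[:-2]) / dx**2

def total_D():
    return (c_DD + c_DT).sum() * dx + (m_d + m_de).sum()

def total_E():
    return c_E.sum() * dx + m_de.sum()

mD0, mE0 = total_D(), total_E()

for _ in range(2000):
    # reactive boundary terms, cf. Eqs. (bc1)-(bc3)
    R_Dp = (k_D + k_dD * m_d) * c_DT[edge]   # MinD attachment + recruitment
    R_Ep = k_dE * m_d * c_E[edge]            # MinE recruitment -> MinDE
    R_DEm = k_de * m_de                      # stimulated detachment

    new_DD = c_DD + dt * (D_c * lap(c_DD) - lam * c_DD)
    new_DT = c_DT + dt * (D_c * lap(c_DT) + lam * c_DD)
    new_E = c_E + dt * D_c * lap(c_E)
    # membrane exchange enters the boundary cells (areal -> volume: / dx)
    new_DD[edge] += dt * R_DEm / dx
    new_DT[edge] -= dt * R_Dp / dx
    new_E[edge] += dt * (R_DEm - R_Ep) / dx
    m_d += dt * (R_Dp - R_Ep)
    m_de += dt * (R_Ep - R_DEm)
    c_DD, c_DT, c_E = new_DD, new_DT, new_E

print(abs(total_D() - mD0), abs(total_E() - mE0))  # both at round-off level
```

In the oscillatory parameter regime identified in the cited analysis, schemes of this type produce pole-to-pole oscillations; here we only verify exact mass conservation, which must hold for any parameter choice.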
\subsection{Basic mechanisms underlying Min oscillations in \textit{E. coli} cells}
From the analysis of the skeleton model~\cite{Halatek_Frey:2012}, quantified by the reaction-diffusion equations in the previous section, one can now learn how Min proteins self-organize to give rise to pole-to-pole oscillations \textit{in vivo}.
The basic theme of the protein dynamics is the cycling of proteins between the membrane and the cytosol. This cycling is driven by the antagonistic roles of MinD and MinE:
Membrane-bound active MinD facilitates flux of MinD and MinE from the cytosol to the membrane (recruitment).
This accumulation of proteins at the membrane is counteracted by MinE's stimulation of MinD's ATPase activity, which triggers detachment of both MinD and MinE.
In concert with redistribution of proteins through cytosolic diffusion, spatio-temporal patterns may emerge on the membrane.
However, the formation of pole-to-pole oscillations is by no means generic in the context of the above reaction scheme.\footnote{In general, a given reaction-diffusion equation can generate a plethora of spatio-temporal patterns, as is well known from classical equations like the complex Ginzburg-Landau equation \cite{Aranson_Kramer:2002} or the Gray-Scott equation \cite{Gray_Scott:1983, Gray_Scott:1984, Gray_Scott:1985, Pearson:1993, Lee_etal:1993}. Conversely, a given pattern can be produced by a vast variety of mathematical equations. Hence, one must be careful to avoid falling into the trap: ``Cum hoc ergo propter hoc'' (correlation does not imply causation).}
In general, there are conditions on the values of the reaction rates, as well as on the relative abundances of the proteins which have to be met.
An exhaustive parameter scan for model equations Eq.~\ref{eq:RD_cytosol}, \ref{eq:RD_membrane}, and \ref{eq:RD_boundary} has shown that, for spatial patterns to emerge in the skeleton model, MinE needs to be recruited faster to the membrane-bound protein layer than MinD, while being lower in total particle number \cite{Halatek_Frey:2012}:
\begin{equation}
k_{dD} < k_{dE} \, , \quad
N_{E} < N_{D} \, .
\end{equation}
These conditions give rise to the formation and separation of MinD and MinDE domains, the \textit{polar zone} and \textit{MinE ring}, as the two basic emergent structures of pole-to-pole oscillations.
\begin{widetext}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{invivo_mechanism_scheme}
\caption{\textbf{Key mechanism underlying Min oscillations.} \textbf{A,} Locally sequestrated MinE constitutes the MinE ring, which moves toward the left pole through local cycling. Detaching MinD rebinds predominantly at the left pole and initiates formation of a weak polar zone at the right end. The delay in reattachment caused by the need for nucleotide exchange is indicated by dashed lines. \textbf{B,} MinE depletes the old polar zone of MinD, until only MinDE complexes are left, then reassembles at the rim of the new polar zone, formed by redistributed MinD. Adapted from Ref.~\cite{Halatek_Frey:2012} under the CC BY 4.0 license.
}
\label{fig:min_oscillation_mechanism}
\end{figure}
\end{widetext}
As illustrated in Fig.~\ref{fig:min_oscillation_mechanism}, this is (heuristically) understood as follows \cite{Halatek_Frey:2012}.
The higher particle number of MinD enables complete sequestration of MinE in membrane-bound MinDE complexes, while leaving a fraction of MinD available to initiate a new polar zone.\footnote{It should be noted that the condition on the particle numbers mainly serves to emphasise the sequestration mechanism. In order for MinD to accumulate in polar zones the action of MinE must be disabled, and specifying that there are fewer MinE particles permits them to be spatially confined. Outside of this zone MinD can accumulate on the membrane. It has been speculated \cite{Halatek_Frey:2012} that other mechanisms, such as transient MinE membrane binding, might provide alternative ways to transiently disable the action of MinE, removing the requirement from the particle numbers. The exact mechanism needs to be investigated in future experiments as well as in the framework of theoretical models.}
Given a sufficiently high MinD membrane concentration and MinE recruitment rate $k_{dE}$, detaching MinE rebinds immediately, forming the prominent MinE ring.
Continuous MinE cycling locally depletes the membrane of MinD, leading to a slow poleward progression of the MinE ring along the gradient of membrane-bound MinD, whereupon a fraction of detaching MinD initiates a weak polar zone in the opposite cell half, see Fig.~\ref{fig:min_oscillation_mechanism}A.
The new polar zone grows due to steady redistribution of MinD, while most MinE remains sequestrated in the old polar zone until the remaining MinD molecules are converted into MinDE complexes, see Fig.~\ref{fig:min_oscillation_mechanism}B.
Once this state is reached, the Min proteins rapidly detach, dissociate and diffuse through the cytosol and rapidly reattach at the new polar zone, leaving behind a region of high MinDE/MinD ratio, where immediate reformation of polar zones is inhibited.
Due to the faster recruitment of MinE, the MinE ring reassembles at the rim of the new polar zone, which provides the crucial separation of MinD and MinDE maxima, i.e.\/ a polar zone and a MinE ring.
There is one element of the above argument which needs further consideration: The sequestration of MinE is transient, and hence the system is oscillatory, only if detaching MinD gradually leaks from the old to the new polar zone. But, how is this process established and regulated?
Leakage from the old polar zone is determined by the balance between two opposing factors: the ATPase cycle of MinD, and the propensity of cytosolic MinD to bind to the membrane. MinE stimulates ATPase activity of MinD and thereby initiates detachment of ADP-bound MinD.
The inactive MinD cannot reattach to the membrane until it is reactivated by nucleotide exchange.
This delay implies that the zone near the membrane is depleted of active MinD, i.e.\/ active MinD has time to diffuse further away from the membrane into the cytosol.
Taken together, these factors effectively suppress immediate reattachment of MinD and promote its leakage from the polar zone: The slower the nucleotide exchange the more particles leak from polar zones.
This is counteracted by MinD recruitment: The stronger the recruitment, the ``stickier'' the membrane and hence the fewer particles leak from polar zones.
Clear evidence for this reasoning comes from the slowing down of the oscillation with increasing nucleotide exchange and MinD recruitment rates, depicted in Fig.~\ref{fig:min_oscillation_period}A.
Numerical simulation of the reaction-diffusion equations, Eqs.~\ref{eq:RD_cytosol}--\ref{eq:RD_boundary}, reveals further functional characteristics of Min oscillations.
For nucleotide exchange rates $\lambda = 5 \, s^{-1}$, close to the experimentally determined lower bound of $3 \, s^{-1}$, reaccumulation of the polar zone always starts in the opposite cell half, and the recruitment rate $k_{dD}$ of MinD regulates how fast the new polar zone grows towards the old one (Fig. \ref{fig:min_oscillation_period}B).
Notably, at $k_{dD} = 0.1 \, \mu$m$^2$/s in Fig.~\ref{fig:min_oscillation_period}B, the redistribution of MinD from the old to the new polar zone is highly canalised, i.e.\/ the total MinD flux is directed towards the opposite cell half immediately after the polar zones start to shrink (Fig.~\ref{fig:min_oscillation_period}B). This implies that growth and depletion of polar zones are synchronised. This is also reflected in the characteristic triangular shape observed in MinD kymographs~\cite{Loose_etal:2011_review}, where new polar zones start growing towards midcell while old polar zones shrink towards the cell pole (Fig.~\ref{fig:min_oscillation_period}B).
Although most of the Min protein patterns (like stripe patterns) observed in filamentous mutant \textit{E. coli} have no biological function, the theory is able to account for their occurrence. This argues strongly that they too arise from the mechanism that optimises the spatial profile of pole-to-pole oscillations for midcell localisation. In other words, the rich phenomenology in mutant cells appears to be a byproduct of the evolutionary optimisation of the wild-type dynamics.
\newpage
\begin{widetext}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{Min_invivo_cana_data}
\caption{\textbf {Canalised MinD transfer and regulation of spatial MinD reattachment by MinD recruitment.} \textbf{A}, Temporal period of Min oscillations as a function of the MinD recruitment rate $k^{}_{dD}$, and nucleotide exchange rate $\lambda$ in cells of $4 \, \mu$m length. With instantaneous nucleotide exchange, oscillations only exist at low MinD recruitment rates (grey). Beyond this threshold the nucleotide exchange and recruitment rates become control parameters for the spatial distribution of MinD reattachment. At high but finite nucleotide exchange rates the oscillation period increases with the MinD recruitment rate, as MinD reassembles in front of the polar zone. At low nucleotide exchange rates the oscillation period decreases with MinD recruitment, as the pole-to-pole particle transfer becomes canalised between the two cell halves. \textbf{B}, Kymographs for $\lambda=5s^{-1}$ showing the total MinD membrane density, $m^{}_d+m^{}_{de}$, and MinD flux $J^{}_D = D^{}_{D}\nabla_{\perp}(c^{}_{DT}+c^{}_{DD})|_{m}$ on (blue) and off (red) the membrane, for a set of increasing MinD recruitment rates $k^{}_{dD}$. MinD reaccumulates at the opposite cell pole while the old pole is still present. Increasing MinD recruitment accelerates the growth of new polar zones towards midcell and synchronises depletion and formation of polar zones at opposite cell ends by canalising the MinD flux from old to new polar zones. Adapted from Ref.~\cite{Halatek_Frey:2012} under the CC BY 4.0 license.
}
\label{fig:min_oscillation_period}
\end{figure}
\end{widetext}
\subsection{Cell geometry and pattern formation}
To ensure robustly symmetrical cell division, one would expect Min patterns to scale with cell size and shape, at least within the biologically relevant range.
Indeed, recent experiments using `cell-sculpting' techniques \cite{Wu_etal:2015} have shown that longitudinal pole-to-pole oscillations are highly stable in cells with widths below $3\mu$m, and lengths in the range of $3-6 \, \mu$m.
Interestingly, however, outside of this range of cell geometries, Min proteins show diverse oscillation patterns, including longitudinal, diagonal, rotational, striped, and even transverse modes \cite{Raskin_deBoer:1999b, Shih_etal:2005, Wu_etal:2016, Corbin_etal:2002, Touhami_etal:2006, Varma_etal:2008, Maennik_etal:2012, Wu_etal:2015}.
What is the origin of the simultaneous robustness of Min oscillations inside the biologically relevant regime and the bewildering diversity of patterns and multistability outside of it? In what sense are these seemingly contradictory features two faces of the same coin?
To answer these questions one has to address how and to what extent the existence and stability of different patterns is affected by a cell's geometry, and which specific biomolecular processes in the Min reaction circuit control how the system adapts to cell geometry.
This has recently been achieved by a combination of numerical studies, based on the reaction-diffusion model discussed in section \ref{sec:MaRD}, and experimental studies, in which the geometry of \textit{E. coli} bacteria was systematically varied \cite{Wu_etal:2016}.
There are basically two types of randomness that may affect the process of pattern selection, or transitions between patterns if multiple stable patterns are possible.
First, the inherent randomness of any chemical reaction may cause stochastic transitions between patterns.
Though such stochastic effects are possible in principle \cite{Fange_Elf:2006}, given the large copy number of Min proteins, they are unlikely to be the major source for transitions between patterns; factors like heterogeneities and asymmetries are expected to be far more important. Second, there are many different factors which cause realistic cellular systems to be asymmetric or heterogeneous.
For example, the membrane affinity of MinD depends on the lipid composition, which in turn is sensitive to membrane curvature. Hence, small asymmetries of the cell shape translate to variations of MinD membrane attachment.
While these asymmetries and heterogeneities are intrinsic to ensembles of cells, they need to be specifically emulated in numerical simulations. A natural choice is to impose gradients in the MinD attachment rate, inclined at all possible angles to the long axis of the cell.
The magnitude of these gradients must be sufficiently large to significantly affect the pattern selection process, but at the same time small enough not to cause any asymmetry in the final stable pattern.
A relative magnitude of variation in the range of $20 \%$ (well below the natural variability of MinD affinity to different lipids \cite{Mileykovskaya_etal:2003, Renner_Weibel:2012}) fulfills these requirements.
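As an illustration of how such a perturbation template can be set up, the snippet below constructs a shallow linear gradient in the MinD attachment rate, inclined at a given angle to the long axis of a rectangular domain. The $20\%$ relative magnitude follows the text; the grid size, base rate $k_0$, and the specific linear functional form are illustrative assumptions.

```python
import numpy as np

# Shallow attachment-rate template: a linear gradient of relative magnitude
# `rel`, inclined at `theta_deg` to the long (x) axis. k0 and the grid
# resolution are arbitrary illustration values.
def attachment_template(nx=60, ny=20, k0=0.1, rel=0.20, theta_deg=30.0):
    th = np.deg2rad(theta_deg)
    x, y = np.meshgrid(np.linspace(0.0, 1.0, nx),
                       np.linspace(0.0, 1.0, ny), indexing="ij")
    # projection onto the gradient direction, rescaled to [-1/2, 1/2]
    s = x * np.cos(th) + y * np.sin(th)
    s = (s - s.min()) / (s.max() - s.min()) - 0.5
    return k0 * (1.0 + rel * s)

kD = attachment_template()
print(kD.min(), kD.max())   # 20% total variation around k0
```

Sampling `theta_deg` from 0 to 90 degrees then yields the ensemble of perturbation directions over which the final patterns are histogrammed in the text.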
Figure \ref{fig:multistability_theory} shows histograms of the final stable patterns obtained by sampling over all directions of the gradient, as a function of cell width and length, and of the MinD recruitment rate \cite{Wu_etal:2016}.
For a recruitment rate fixed to the value that facilitates canalised transfer, $k^{}_{dD} = 0.1$, the following observations are of note.
(i) As cell length is increased, striped oscillations become more frequent.
(ii) The fraction of oscillatory striped patterns tends to decrease in favour of transverse patterns as the cell width increases, indicating that cell width, and not cell length, is the main determinant for the onset of transverse modes.
\begin{widetext}
\begin{figure}[bt]
\includegraphics[width=\linewidth]{invivo_multistabiility_data.pdf}
\caption{\textbf{Basins of attraction predicted from systematic perturbations of patterns with shallow attachment gradients.}
\textbf{A,} Relative distribution of the final patterns (indicated on the right) observed after sampling all alignment angles of the MinD attachment template from 0 to 90 degrees. The MinD recruitment rate was set to a constant value $k^{}_{dD} = 0.1$. The data shows the increase in the incidence of multistability as the cell size is increased beyond minimal values for cell length and cell width.
\textbf{B,} Fractions of the final patterns in cells of 9- and 10-$\mu$m length after sampling all alignment angles of the MinD attachment template from 0 to 90 degrees. The data shows that increasing the MinD recruitment rate facilitates multistability. Adapted from Ref.~\protect\cite{Wu_etal:2016} under the CC BY 4.0 license.
}
\label{fig:multistability_theory}
\end{figure}
\end{widetext}
Both observations are remarkably consistent with experimental data based on random sampling of live \textit{E. coli} cells after they have reached a defined shape \cite{Wu_etal:2015}.
Numerical simulations allow us to go beyond the analysis of cell geometry, and investigate the effect of MinD recruitment rate, see Fig.\ref{fig:multistability_theory}B.
In narrow cells with widths ranging from $1 \, \mu$m to $3 \, \mu$m, one observes that the fraction of stripes increases with the MinD recruitment rate \cite{Halatek_Frey:2012, Wu_etal:2016}.
In contrast, for cells that reach a width of 5 $\mu$m, stripe patterns are absent below some threshold MinD recruitment rate.
With increasing MinD recruitment rate, transverse patterns appear first and increase in frequency, while the fraction of striped patterns takes on a constant value.
There are several conclusions one can draw from these observations.
The most obvious one is that multistability in Min patterns is not determined by either kinetic parameters or cell geometry alone, but originates from the interdependence between these two factors.
In addition, increasing the size of a Turing-unstable system does not in itself facilitate the existence of multiple stable patterns \footnote{This is surprising, because in the literature Turing instabilities are generically associated with the existence of a \textit{characteristic} (or intrinsic) wavelength. This is evidently not the case here.}.
This is clearly evident from the observation that the emergence of a pole-to-pole oscillation in a short cell does not generically imply the existence of a stable striped oscillation with a characteristic wavelength in a long filamentous cell \cite{Halatek_Frey:2012}.
Instead, the emergence of a characteristic length scale (which becomes manifest in striped oscillations) is restricted to a specific regime of kinetic parameters, where growth and depletion of spatially separated polar zones become synchronised such that multiple, spatially separated polar zones can be maintained simultaneously (``canalised transfer'' regime) \cite{Halatek_Frey:2012}.
A key element among the prerequisites that permit this regime to be reached is that the degree of nonlinearity in the kinetics of the system (MinD cooperativity) must be particularly strong.
Notably, the same mechanism that enables striped oscillations in filamentous cells also facilitates transverse oscillations in wide cells.
These findings hint at an exciting connection between multistability, the ability of patterns to sense and adapt to changes in system geometry, and the existence of an intrinsic length scale in the underlying reaction-diffusion dynamics.
Remarkably -- and contrary to the treatments in the classical literature -- the existence of an intrinsic length scale is not generic for a Turing instability per se.
One example is the aforementioned selection of pole-to-pole patterns in arbitrarily long cells where MinD recruitment is weak.
In this case, irrespective of the critical wavenumber of the Turing instability, the final pattern is always a single wave travelling from pole to pole.
The selection of a single polar zone is also characteristic in the context of cell polarity \cite{Klunder_etal:2013, Otsuji_etal:2007}, where it has been ascribed to the finite protein reservoir and a winner-takes-all mechanism.
It will be an interesting task for further research to elucidate the general requirements for the emergence of an intrinsic length scale in mass-conserved reaction-diffusion systems.
\subsection{Principles of adaptation to geometry in reaction-diffusion systems}
How does the geometry of a cell affect the formation of spatio-temporal patterns? This question may be rephrased in more mathematical terms as follows: What are the inherent features of a reaction-diffusion system in confined geometry that promote or impede the adaptation of the ensuing patterns to the size and shape of that confining space\footnote{In 1966 Mark Kac published an article entitled ``Can one hear the shape of a drum?''\cite{Kac:1966}. As the dynamics (frequency spectrum) of an elastic membrane whose boundary is clamped are described by the Helmholtz equation $\nabla^2 u + \sigma u =0$ with the Dirichlet boundary condition $u \mid_{\partial \Omega} = 0$, this amounts to asking how strongly the eigenvalues $\sigma$ depend on the shape of the domain boundary. Here we ask a much more intricate question, as the dynamics of pattern-forming systems are nonlinear and we would like to know the nonlinear attractor for a given shape and size of a cell.}? In previous sections, we have seen two recurrent themes: nucleotide exchange and positive feedback through recruitment. To elucidate the roles of these two factors, in this section we briefly review recent results \cite{Thalmeier_etal:2016} for a minimal pattern-forming system comprising only a single NTPase.
\begin{figure}[b]
\centering
\includegraphics[width=0.65\linewidth]{one_ntpase_model}
\caption{
The NTPase can bind to the membrane in both of its states with \textit{attachment rate} $k_+$, or cooperatively with corresponding \textit{recruitment rates} $k_{mD}$ for $D$ and $k_{mT}$ for $T$. NTP \textit{hydrolysis} by $T$ triggers \textit{detachment} with rate $k_-$, converting membrane-bound $T$ into cytosolic $D$. Membrane-bound $D$ is also spontaneously released to the cytosol with \textit{detachment rate} $k_-$. Cytosolic $D$ undergoes \textit{nucleotide exchange} with a rate $\lambda$.
}
\label{fig:one_ntpase_model}
\end{figure}
As illustrated in Fig.\ref{fig:one_ntpase_model}, the NTPase cycles between an NDP-bound inactive ($D$) and an NTP-bound active state ($T$).
Both protein species are able to bind to the membrane spontaneously; for simplicity we take the rates to be identical and given by $k_+$.
In addition to direct membrane attachment, each protein species may also bind cooperatively to the membrane, with corresponding recruitment rates $k_{mD}$ for the inactive and $k_{mT}$ for the active protein species. Detachment of the membrane-bound species is asymmetric: while the inactive species is simply released to the cytosol with \textit{detachment rate} $k_-$, detachment of the active species is triggered by NTP hydrolysis, which converts it into cytosolic inactive $D$; again, for simplicity, we assume the corresponding detachment rates to be equal and given by $k_-$. Reactivation of cytosolic inactive $D$ through nucleotide exchange occurs at rate $\lambda$. Both protein forms are allowed to diffuse freely in the cytosol and on the membrane with diffusion constants $D_c$ and $D_m$, respectively.
Denoting the concentrations of $D$ and $T$ in the cytosol by $c_{D}^{}$ and $c_{T}^{}$ and by $m_{D}^{}$ and $m_{T}^{}$ on the membrane, respectively, the reaction-diffusion equations read
\begin{subequations}
\begin{align}
\partial_t c_{T}^{}
&= \, D_c \, \Delta \, c_{T}^{}
+ \lambda \, c_{D}^{}
\, , \\
\partial_t c_{D}^{}
&= \, D_c \, \Delta \, c_{D}^{}
- \lambda \, c_{D}^{}
\, , \\
\partial_t m_{T}^{}
&= \, D_m \, \Delta_m \, m_{T}^{}
+ (k_+ \, \tilde c_{T}^{} - k_- \, m_{T}^{})
+ \, k^{}_{mT} \,
m_{T}^{} \, \tilde c_{T}^{}
\, , \\
\partial_t m_{D}^{}
&= \, D_m \, \Delta_m \, m_{D}^{} \,
+ (k_+ \, \tilde c_{D}^{} \, - k_- \, m_{D}^{})
+ k^{\phantom{}}_{mD} \,
m_{D}^{} \; \tilde c_{D}^{}
\, .
\end{align}
\label{eq:one_ntpase_MaRD}
\end{subequations}
As before, reactive and diffusive fluxes balance at the membrane-cytosol boundary
\begin{subequations}
\begin{align}
D_c \, \nabla_{\perp} c_{T}^{}{\mid_m}
&= - (k_+ \, + \, k^{}_{mT} \,
m_{T}^{}) \, \tilde c_{T}^{}
\, , \\
D_c \, \nabla_{\perp} c_{D}^{}{\mid_m}
&= - (k_+ \, + \, k^{}_{mD} \,
m_{D}^{}) \, \tilde c_{D}^{} + k_- \, (m_{D}^{} + m_{T}^{})
\, .
\end{align}
\label{eq:one_ntpase_bc}
\end{subequations}
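The structure of Eqs.\ref{eq:one_ntpase_MaRD}--\ref{eq:one_ntpase_bc} can be made concrete with a minimal one-dimensional finite-volume sketch: a single membrane at $x = 0$, a reflecting far boundary, and scalar membrane densities. All parameter values below are illustrative choices in arbitrary units, not measured rates.

```python
import numpy as np

# 1D finite-volume sketch of the single-NTPase model with one membrane at
# x = 0; all parameters are illustrative (arbitrary units), not fitted values.
L, N = 5.0, 50                     # cytosol depth, number of grid cells
dx = L / N
Dc, lam = 10.0, 6.0                # cytosolic diffusion, nucleotide exchange
kp, km = 0.1, 1.0                  # attachment / detachment rate
kmD, kmT = 0.05, 0.02              # recruitment rates for D and T

cD, cT = np.ones(N), np.zeros(N)   # cytosolic profiles (start: all inactive D)
mD = mT = 0.0                      # membrane densities (scalars in 1D)

def lap(c):
    """Reflecting-end discrete Laplacian; membrane flux is added separately."""
    out = np.empty_like(c)
    out[1:-1] = c[:-2] - 2.0 * c[1:-1] + c[2:]
    out[0], out[-1] = c[1] - c[0], c[-2] - c[-1]
    return out / dx**2

dt = 0.2 * dx**2 / Dc              # stable explicit time step
mass0 = dx * (cD + cT).sum() + mD + mT
for _ in range(int(10.0 / dt)):    # integrate to t = 10
    aT = (kp + kmT * mT) * cT[0]   # attachment of T (incl. recruitment)
    aD = (kp + kmD * mD) * cD[0]   # attachment of D (incl. recruitment)
    det = km * (mD + mT)           # all detachment returns cytosolic D
    cT_n = cT + dt * (Dc * lap(cT) + lam * cD)
    cD_n = cD + dt * (Dc * lap(cD) - lam * cD)
    cT_n[0] -= dt * aT / dx        # reactive boundary fluxes at the membrane
    cD_n[0] += dt * (det - aD) / dx
    mT += dt * (aT - km * mT)      # hydrolysis-triggered detachment of T
    mD += dt * (aD - km * mD)      # spontaneous detachment of D
    cT, cD = cT_n, cD_n

mass = dx * (cD + cT).sum() + mD + mT   # conserved total protein number
```

Because attachment, detachment, and nucleotide exchange only shuttle proteins between compartments, the total protein number is conserved to floating-point accuracy, and the relaxed profiles show cytosolic $D$ enriched at the membrane and $T$ depleted there -- the one-dimensional analogue of the source-degradation mechanism discussed in the following.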
Solving this set of equations numerically in elliptical geometry reveals a series of striking features (Fig.\ref{fig:one_ntpase_polarity}): (i) In elongated cells the protein density on the membrane and in the cytosol is \textit{always} inhomogeneous, and reflects the local cell geometry. (ii) There are two distinct types of patterns: membrane-bound proteins either accumulate at midcell or form a bipolar pattern with high densities at both cell poles. (iii) The protein gradients scale with the size of the cell, i.e.\/ fully adapt to the geometry of the cell.
The type of polarity of these patterns is quantified by the ratio of the density of membrane-bound proteins located at the cell poles to that at midcell: ${\cal P} = m_\text{pole}/m_\text{midcell}$.
Accumulation occurs either at the cell pole or at midcell depending on the value of the preferential recruitment parameter ${\cal R} = (k^{\phantom{}}_{mD}{-}k^{}_{mT})/(k^{\phantom{}}_{mD}{+}k^{}_{mT})$:
One finds that proteins accumulate at the cell poles (${\cal P} > 1$) if there is a preference for cooperative binding of $D$ (${\cal R} > 0$).
Moreover, the polarity ${\cal P}$ of this bipolar pattern becomes more pronounced with increasing ${\cal R}$.
In contrast, when cooperative binding favours $T$ (${\cal R} < 0$), proteins accumulate at midcell (${\cal P} < 1$).
Thus, the sign of the recruitment preference ${\cal R}$ for a protein in a particular nucleotide state controls the type, while its magnitude determines the amplitude of the pattern.
With increasing eccentricity of the ellipse, the respective pattern becomes more sharply defined; for a spherical geometry the pattern vanishes.
In summary, cell geometry controls the definition of the pattern, and the preference for membrane recruitment of a certain nucleotide state determines both the location on the cell membrane where the proteins accumulate and how pronounced this accumulation becomes.
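The sign convention for ${\cal R}$ and the resulting pattern type can be summarised in a short schematic helper (hypothetical code, introduced purely to fix the conventions stated above):

```python
def recruitment_preference(k_mD, k_mT):
    """R = (k_mD - k_mT) / (k_mD + k_mT): the preferential recruitment parameter."""
    return (k_mD - k_mT) / (k_mD + k_mT)

def pattern_type(R):
    """Sign of R sets the pattern type; its magnitude sets how pronounced it is."""
    if R > 0:
        return "bipolar"   # preferential recruitment of D: poles, P > 1
    if R < 0:
        return "midcell"   # preferential recruitment of T: midcell, P < 1
    return "flat"          # balanced recruitment: polarity vanishes
```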
\begin{widetext}
\begin{figure}[tb]
\centering
\includegraphics[width=1.02\linewidth]{one_ntpase_polarity}
\caption{
Membrane-bound proteins accumulate either at midcell (left) or form a bipolar pattern with high protein densities at the cell poles (right). The left and right plots show the normalised membrane density of the protein (blue curve) and the corresponding geometry of the cell (grey ellipse).
The membrane density of the protein is divided by its minimum concentration (left: $113 \mu$m$^{-1}$, right: $100 \mu$m$^{-1}$) such that the minimum of the normalised density is $1$.
The polarity ${\cal P} = m_\text{pole} / m_\text{midcell}$ (colour bar in plot is logarithmically spaced) of the pattern strongly depends on cell geometry and preference ${\cal R} = ( k^{}_{mD} - k^{}_{mT} ) / ( k^{}_{mD} + k^{}_{mT} )$ for the recruitment of a certain nucleotide state (middle);
the length of the short axis is fixed at $l = 1 \,\mu$m, and we have used $k^{}_{mD}{+}k^{}_{mT} = 0.1 \, \mu$m/s. While for large ${\cal R}$ (preferential recruitment of $D$) the proteins form a bipolar pattern on the membrane, for small ${\cal R}$ (preferential recruitment of $T$) the membrane-bound proteins accumulate at midcell. If the recruitment processes are balanced (${\cal R} = 0$), the pattern is flat and polarity vanishes. The cell geometry determines how pronounced a pattern becomes: the more elongated the ellipse, the more sharply defined the pattern, which vanishes completely when the ellipse becomes a circle.
Reprinted from Ref.~\cite{Thalmeier_etal:2016} with permission from PNAS.
}
\label{fig:one_ntpase_polarity}
\end{figure}
\end{widetext}
\begin{figure}[!t]
\includegraphics[width=1.0\linewidth]{one_ntpase_cytosolic_gradients.pdf}
\caption{ \textbf{Membrane affinity controls, and recruitment amplifies, adaptation to geometry.} The cells used for the numerical studies have a length of $L = 5 \, \mu$m and a width of $l = 1 \, \mu$m.
\textbf{A,} Even when recruitment is turned off, $T$ and $D$ form inhomogeneous density profiles in the cytosol. $D$ accumulates close to the poles and is depleted at mid-cell. In contrast, $T$ exhibits high concentration at mid-cell and a low concentration at the poles. The attachment and detachment rates are set to $1 \, \mu$m/s and $1$s$^{-1}$, respectively, which gives a penetration depth $\ell_\lambda \approx 1.6\, \mu$m.
\textbf{B,} Illustration of the source-degradation mechanism for the spatial segregation of cytosolic $D$ and $T$. Since all proteins that detach from the membrane are in an NDP-bound state and subsequently undergo nucleotide exchange, the range of $D$ in the cytosol is limited to a penetration depth $\ell_\lambda$ (dashed lines); here $\ell_\lambda = 0.35 \, \mu$m. At the poles this reaction volume receives input from opposing faces of the membrane, resulting in an accumulation of cytosolic $D$ (dark red). The magnitude of this accumulation depends on the penetration depth. The polarity ${\cal P}_\text{NDP} = m^\text{pole}_d/m^\text{mid-cell}_d$ of membrane-bound $D$, plotted as a function of $\ell_\lambda$, shows a maximum at $\ell_\lambda \approx 0.35 \, \mu$m and vanishes in the limits of large as well as small penetration depths.
Reprinted from Ref.~\cite{Thalmeier_etal:2016} with permission from PNAS.
}
\label{fig:one_ntpase_cytosolic_gradients}
\end{figure}
What is the origin of these polar patterns and their features? To answer this question in the clearest possible way, it is instructive to consider the limiting case where positive feedback effects on recruitment are absent and the dynamics are hence fully linear. Then, Eqs.\ref{eq:one_ntpase_MaRD}--\ref{eq:one_ntpase_bc} imply that both the total concentration of proteins on the membrane, $m = m_{D}^{} + m_{T}^{}$, and in the cytosol, $c = c_{D}^{} + c_{T}^{}$, are spatially uniform if the detailed balance condition $k_+ \, \tilde c = k_- \, m$ holds for the exchange of proteins between the cytosol and the membrane.
This uniformity in total protein density, however, does not imply uniformity in the densities of the active and inactive protein species, either on the cell membrane or in the cytosol!
The origin of this effect is purely geometrical, and it is linked to the finite time required for nucleotide exchange in the cytosol.
Heuristically, this can be seen as follows (Fig. \ref{fig:one_ntpase_cytosolic_gradients}A). As only inactive proteins $D$ are released from the membrane, they act as a source of cytosolic proteins.
In the cytosol they are then reactivated through nucleotide exchange, which is effectively equivalent to depleting the cytoplasmic compartment of inactive proteins.
This in turn implies the formation of a gradient of inactive proteins and a corresponding, oppositely oriented gradient of active proteins as one moves away from the membrane into the cytosol.
As is known from standard source-degradation processes, the ensuing density profile for $D$ in the cytosol is exponential, with the decay length being set by $\ell_\lambda = \sqrt{D_c/\lambda}$.
Due to membrane curvature these reaction volumes overlap close to the cell poles (Fig. \ref{fig:one_ntpase_cytosolic_gradients}B, bottom), which implies an accumulation of $D$ at the cell poles.
The effect becomes stronger with increasing membrane curvature. Moreover, there is an optimal value for the penetration depth $\ell_\lambda$, roughly equal to a third of the length $l$ of the short cell axis, that maximises accumulation of $D$ at the cell poles (Fig.\ref{fig:one_ntpase_cytosolic_gradients}B, top). As $\ell_\lambda$ becomes larger than $l$, the effect weakens, because the reaction volumes from opposite sides of the membrane also overlap at mid-cell. In the limit where $\ell_\lambda$ is much smaller than the radius of membrane curvature at the poles, the overlap vanishes, and with it the accumulation of $D$ at the poles.
More generally, these heuristic arguments imply that the local ratio of the reaction volume for nucleotide exchange to the available membrane surface is the factor that explains the dependence of the protein distribution on cell geometry.
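The length scale invoked in this argument is easily quantified. The following snippet uses illustrative values for $D_c$ and $\lambda$, not measured rates:

```python
import math

def penetration_depth(D_c, lam):
    """Decay length l = sqrt(D_c / lam) of the cytosolic source-degradation
    profile c_D(z) ~ exp(-z / l) away from the membrane."""
    return math.sqrt(D_c / lam)

# Illustrative numbers: D_c = 16 um^2/s, nucleotide exchange rate lam = 6 /s
ell = penetration_depth(16.0, 6.0)   # roughly 1.6 um
```

With the heuristic above, polar accumulation of $D$ is maximised when this length is roughly a third of the short cell axis; for a $1\,\mu$m wide cell one would thus tune $\lambda$ (or $D_c$) such that $\ell_\lambda \approx 0.35\,\mu$m.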
\section{\textit{In vitro} reconstitution and theoretical analysis of Min protein pattern formation}
A key step towards understanding pattern-formation mechanisms in biological systems is the identification of the essential functional modules that facilitate the formation of certain patterns.
In living systems, such an identification is strongly impeded by the vast amount of potentially interacting and, therefore, interdependent components.
A common strategy for tackling the complexity of biological systems is mathematical modelling, which has been discussed in the previous section of this chapter.
While mathematical analysis is able to identify possible mechanisms of pattern formation, it is also based on a priori assumptions about the biological system under consideration.
However, these assumptions need to be tested by suitable experiments.
Ideally, a conclusive comparison between theory and experiment requires the ability to isolate the essential players of the pattern forming dynamics and reconstitute them in a minimal system lacking any other potential interactions and allowing for precise control of parameters, such as protein concentrations or geometric boundaries.
A major breakthrough in this regard was the successful \textit{in vitro} reconstitution of Min protein patterns in a lipid bilayer assay \cite{Loose_etal:2008}.
These experiments demonstrated that a flat lipid bilayer surface coupled to a cytosolic solution containing only MinD, MinE, and ATP is sufficient for the formation of membrane bound Min protein patterns.
However, the patterns observed in reconstituted (\textit{in vitro}) systems significantly differed from the intracellular patterns found \textit{in vivo} (Fig.~\ref{fig:invivo_vs_invitro_pattern}).
While the majority of patterns found \textit{in vivo} can be viewed as standing waves with a wavelength matching the cell length, the patterns on the flat membrane are travelling and spiral waves with wavelengths one order of magnitude greater than the typical length of \textit{E. coli}.
\begin{figure}[t]
\includegraphics[width=\linewidth]{invivo_invitro_scheme.pdf}
\caption{\textbf{Min protein patterns \textit{in vivo} vs \textit{in vitro}}. Schematic depiction of the phenomenology observed in experiments when the system geometry is changed. For small systems the patterns in reconstituted systems \cite{Caspi_Dekker:2016} are similar to intracellular dynamics \cite{Wu_etal:2015}, showing pole-to-pole oscillations (with different length scales) in both cases. However, as the system length and width are increased, patterns appear that are not normally seen \textit{in vivo}.}
\label{fig:invivo_vs_invitro_pattern}
\end{figure}
\subsection{A kaleidoscope of \textit{in vitro} patterns}
The successful reconstitution of Min protein patterns on flat lipid bilayers stimulated a plethora of \textit{in vitro} experiments that studied Min protein dynamics under various circumstances and revealed a true kaleidoscope of patterns (Fig.~\ref{fig:in_vitro_pattern}).
On flat lipid bilayers one observed spiral and travelling wave patterns, and a varying degree of spatial coherence sometimes verging on chemical turbulence \cite{Loose_etal:2011}.
Other experiments constrained the Min protein dynamics geometrically to small membrane patches \cite{Schweizer_etal:2012}, semi-open PDMS grooves with varying lipid composition \cite{Zieske_Schwille:2013}, lipid-interfaced droplets \cite{Zieske_etal:2016}, and bilayer coated three-dimensional chambers of various shapes and sizes \cite{Caspi_Dekker:2016}.
Strikingly, the observed patterns show a very broad range of characteristics and varying degrees of sensitivity to the geometry of the enclosing membrane.
Other experiments were performed in large, laterally extended flow cell devices with a flat lipid bilayer of varying lipid composition attached at the bottom \cite{Ivanov_Mizuuchi:2010}.
These experiments showed that Min protein patterns are formed even when there is hydrodynamic flow in the cytosol.
Furthermore, these experiments revealed the capability of Min protein dynamics to form exotic patterns sharing characteristics of travelling waves and stationary patterns alike \cite{Ivanov_Mizuuchi:2010}.
Despite these intensive experimental efforts, a quantitative reconstitution of Min protein patterns observed \textit{in vivo} has not been achieved.
Instead a broad range of different patterns has been found, all of which exhibit wavelengths that are several times larger than that of the \textit{in vivo} pattern.
The pole-to-pole patterns that are observed in (semi-)confined compartments \cite{Zieske_Schwille:2014, Caspi_Dekker:2016} most closely resemble those seen \textit{in vivo}.
Interestingly, this resemblance is limited to geometries with dimensions below the typical wavelength of the pattern.
In these systems the characteristic pole-to-pole oscillation is observed \textit{in vivo} as well as \textit{in vitro}.
If the length and width of the confined system are increased, the reconstituted \textit{in vitro} experiments \cite{Caspi_Dekker:2016} predominantly show travelling and spiral wave patterns, whereas \textit{in vivo} experiments show longitudinal and transversal standing waves \cite{Wu_etal:2015, Wu_etal:2016}.
This suggests that the underlying mechanisms (dynamic instabilities) are actually not the same\footnote{We note that travelling wave patterns have also been observed \textit{in vivo} \cite{Bonny_etal:2013}, albeit only upon massive over-expression of MinD and MinE, leading to highly elevated intracellular protein densities and pathological phenomenology \cite{Sliusarenko_etal:2011} relative to the wild type. While the exact protein densities in the experiments have not been measured, this observation is consistent with the observation of travelling waves in fully confined compartments, where the protein densities inside microfluidic chambers were also elevated \cite{Caspi_Dekker:2016}.
For further discussion of the effect of protein densities we refer the reader to section \protect\ref{sec:polychotomy}.}.
While longitudinal and transversal standing waves have also been observed in semi-confined PDMS grooves of specific sizes \cite{Zieske_Schwille:2014}, the patterns became chaotic in these experiments when the system size was increased \cite{Zieske_Schwille:2014}.
Given these ambiguous results, how can we reconcile the kaleidoscope of \textit{in vitro} patterns and the range of \textit{in vivo} patterns?
In the following, we discuss how theory can shed some light on these bewildering results.
As we will see, a key problem with the interpretation of recent \textit{in vitro} reconstitution experiments and their comparison to \textit{in vivo} dynamics lies in the lack of the ceteris paribus condition, i.e.\/ conditions where only one control parameter is varied while the rest are held constant.
Achieving quantitative control over all parameters will be the key goal for future experiments.
\begin{widetext}
\begin{figure}[t]
\includegraphics[width=1.05\textwidth]{invitro_figure}
\caption{
\textbf{Min patterns \textit{in vitro}.}
\textbf{A,} Spiral- and travelling-wave patterns observed on flat lipid bilayers. From Ref.~\cite{Loose_etal:2008}. Reprinted with permission from AAAS.
\textbf{B,} Pole-to-pole oscillations in semi-confined PDMS grooves. Reprinted with permission from Ref.~\cite{Zieske_Schwille:2013}, copyright 2013 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim, Germany.
\textbf{C,} Standing waves, travelling waves, and spiral waves observed in fully confined microfluidic chambers with different lateral dimensions. Adapted from Ref.~\protect \cite{Caspi_Dekker:2016} under the CC BY 4.0 license.
\textbf{D,} Exotic Min protein patterns on flat lipid bilayers in large laterally extended flow cells showing different phenomenology depending on the distance to the outlet and inlet of the flow cell device. Reprinted from Ref.~\protect\cite{Vecchiarelli_etal:2016} with permission from PNAS.
}
\label{fig:in_vitro_pattern}
\end{figure}
\end{widetext}
\subsection{The polychotomy of Min protein patterns}
\label{sec:polychotomy}
All experimental evidence supports the assumption that the Min system can be understood as a reaction-diffusion system driven by nonlinear (cooperative) protein interactions.
Therefore, we can expect that Min protein dynamics will share generic features of such nonlinear systems.
In particular, as is well known in the field of nonlinear dynamics, even very simple models can produce a broad variety of patterns~\cite{Gray_Scott:1983, Gray_Scott:1984, Gray_Scott:1985, Pearson:1993, Lee_etal:1993}.
Moreover, which patterns are observed depends on the parameters of the system.
In the classical mathematical theory these parameters are the coefficients of the (non-)linear interactions (representing the ``kinetics''), as well as the diffusion coefficients.
Diffusion coefficients (in the cytosol) have been measured \textit{in vivo} \cite{Meacci_etal:2006} and \textit{in vitro} \cite{Loose_etal:2008, Loose_etal:2011}, and they can be controlled experimentally by the addition of crowding agents \cite{Schweizer_etal:2012, Caspi_Dekker:2016}.
Kinetic parameters of the Min system are much more difficult to measure and to control.
However, diffusion coefficients and kinetic rates are not the only control parameters.
Most of the classical literature in nonlinear dynamics neither accounts for system geometry nor for the mass-conserving nature of bio-molecular interactions.
This might explain why it is often overlooked that system geometry as well as protein densities can be key control parameters of the system's dynamics.
The effect of changes in these parameters is not necessarily restricted to changes in the length- and time-scales of the dynamics (e.g. wavelength, wave speed, and oscillation period), but can also induce qualitative changes and transitions between patterns.
One clear difference between the reconstituted Min system on flat lipid bilayers and the intracellular system in \textit{E. coli} is the vastly increased ratio of cytosolic volume to membrane surface in the \textit{in vitro} system, where the height of the system is of the order of millimetres, compared to micrometres in the living system.
A recent theoretical analysis \cite{halatek_frey:2018} has shown that increasing this volume--to--surface ratio leads to an increased wavelength of the pattern.
This prediction agrees with the experimental observation of a reduced wavelength of the Min protein patterns in fully confined geometries \cite{Caspi_Dekker:2016} that mimic the \textit{in vivo} membrane-to-cytosol ratio more closely than does the flat lipid bilayer.
Strikingly, even when cytosolic diffusion was reduced to \textit{in vivo} levels, these experiments still showed a $3$- to $4$-fold increased wavelength in confined compartments compared to the intracellular patterns -- emphasising an apparent dichotomy between patterns observed \textit{in vivo} and \textit{in vitro}.
However, the surface--to--volume ratio is not the only difference between the intracellular and the reconstituted Min systems. Another is the particle number or effective density of MinD as well as MinE.
At first glance there is no apparent difference between the protein concentrations \textit{in vivo} and \textit{in vitro}, since the concentrations in all reconstituted systems are adjusted to the intracellular concentrations which are about $1 \, \mu$M for MinD and MinE.
However, it is important to note that these are the average cytosolic densities with no proteins attached to the membrane.
Since all cytosolic proteins are able to bind to the membrane\footnote{Either directly, or by complex formation as for MinDE complexes.}, the total number of cytosolic proteins determines the upper bound for the maximal membrane densities.
Hence, even if the average cytosolic densities in the reconstituted system are identical to typical intracellular concentrations, the crucial control parameter is the ratio of cytosolic volume to membrane surface.
\textit{In vivo}, a cytosolic density of about $1 \mu$M yields a number of proteins that can easily be absorbed by the membrane and still remain up to two orders of magnitude below the saturation limit.\footnote{Assuming a cylindrical geometry for simplicity, the volume to surface ratio is $\sim r/2$, i.e.\/ well below $1 \,\mu$m for typical cell radii $r$.}
However, in the reconstituted system with a flat lipid bilayer, the volume to surface ratio is given by the bulk height $h$.
For a typical bulk height of the order of millimetres, less than 1\% of all proteins can bind to the membrane before saturation due to volume exclusion sets in.
As a consequence, the protein densities at and on the membrane are highly increased in the reconstituted system compared to the situation \textit{in vivo}, despite the average cytosolic densities being identical.
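This back-of-the-envelope argument can be made explicit. In the following sketch, the membrane saturation density is an assumed order-of-magnitude figure for close packing, and both geometries are idealised:

```python
AVOGADRO = 6.022e23

# 1 uM expressed in proteins per um^3 (1 litre = 1e15 um^3)
c_bulk = 1e-6 * AVOGADRO / 1e15          # about 602 proteins / um^3

sat = 1.0e5       # assumed membrane saturation density, proteins / um^2

# In vivo: cylindrical cell of radius r = 0.5 um -> volume/surface ~ r/2
reservoir_vivo = c_bulk * 0.25           # cytosolic proteins per um^2 membrane
frac_vivo = reservoir_vivo / sat         # membrane load if all bound: << 1

# In vitro flat bilayer: volume/surface = bulk height h ~ 1 mm
reservoir_vitro = c_bulk * 1.0e3         # proteins per um^2 of membrane
bound_fraction_vitro = sat / reservoir_vitro   # largest share of the reservoir
                                               # the membrane can possibly hold
```

With these rough numbers, the in vivo membrane stays far below saturation even if every protein binds, whereas in vitro the membrane saturates while only a small fraction of the reservoir is depleted; the precise percentages depend on the assumed saturation density.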
Note that the densities of membrane-bound proteins are directly involved in the recruitment process which represents the only intrinsically nonlinear interaction in the Min system (cf.\/ section \ref{sec:MaRD_Min}).
As such, one can expect that changes in the average protein densities on the membrane affect the system dynamics in a significant way.
Indeed, estimates of the concentration on the flat lipid bilayer show that the density across a wave profile is about two orders of magnitude higher than the typical protein densities on the intracellular membrane \cite{Loose_etal:2011}.
The same can be assumed to be the case for reconstituted Min oscillations in semi-open PDMS grooves \cite{Zieske_Schwille:2013, Zieske_Schwille:2014}, since the dynamics are initialised with a high cytosolic column above the grooves which is only removed after the onset of pattern formation (and therefore membrane accumulation).
Elevated protein densities were also found for the reconstituted Min patterns in confined chambers \cite{Caspi_Dekker:2016} since these are based on a microfluidic device.
As proteins accumulate on the membrane while the flow is still active, the density at the inlet is merely a lower bound for the actual protein densities in the individual chambers.
Measurements of the protein fluorescence inside the confined chambers after careful calibration show that the total densities of MinD and MinE and the MinE/MinD ratios are increased and are broadly distributed \cite{Caspi_Dekker:2016}.
A similar result can be expected for Min protein dynamics in large, laterally extended flow cells where diverse wave patterns are observed \cite{Ivanov_Mizuuchi:2010, Vecchiarelli_etal:2016}.
To put these findings from the \textit{in vitro} reconstitution of Min protein patterns in the context of the theoretical framework, the broad variation of volume to surface ratios, total protein numbers, and MinE/MinD density ratios is a crucial aspect to consider (cf. \cite{Halatek_Frey:2014}).
The theoretical analysis of the skeleton model, Eqs.\ref{eq:RD_cytosol}--\ref{eq:RD_boundary}, has shown that all these quantities are key control parameters for the system dynamics.
An increase in any of these values (total density, density ratio, volume/surface ratio) can lead to a Turing- or Hopf-instability \cite{halatek_frey:2018}.
In the latter case, each point on the membrane can be considered to be an individual chemical oscillator, and the laterally extended system a field of diffusively coupled oscillators \cite{halatek_frey:2018}.
Such dynamics describe a broad class of systems well documented in the classic nonlinear dynamics literature \cite{Aranson_Kramer:2002}.
Key characteristics of oscillatory media are spiral and travelling patterns, as well as various manifestations of chemical turbulence.
All these phenomena can be observed in the reconstituted Min system \cite{denk_etal:2018}.
From this point of view, the observed dichotomy rather appears as a polychotomy, not only between \textit{in vivo} and \textit{in vitro}, but between the many different experimental setups.
Its origin lies in the broad distribution of control parameters and emphasises the diversity of Min protein dynamics on a phenomenological and mechanistic level.
\section{Discussion and outlook}
As outlined in this chapter, the recent focus on the quantitative study of pattern formation in biological systems has led to conceptually new approaches in theory and experiments.
Among the important milestones are the inclusion of cell geometry and an explicit distinction between cell membrane and cytosolic volume in theoretical models, as well as the identification of particle numbers and cell geometry as major control parameters of the self-organisation processes that lead to pattern formation.
While these efforts enabled the quantitative study of biological pattern formation within the theoretical framework of nonlinear dynamics, experimental advances in \textit{in vitro} reconstitution opened new ways to probe, study, and design protein pattern formation as well as controlled minimal systems.
Due to its simplicity, the \textit{E. coli} Min system has been the subject of intensive theoretical and experimental investigation, establishing it as a paradigm for protein pattern formation.
In contrast, the eukaryotic systems discussed here remain far less well understood.
In part, this is due to a higher degree of complexity and redundancy in these systems.
For example, PAR networks involve several different molecular players in the anterior and posterior PAR components respectively, and also interact with dynamic cytoskeletal structures and physical triggers \cite{Goehring:2014}.
Accordingly, the \textit{in vitro} reconstitution of eukaryotic pattern-forming systems is typically more challenging compared to bacterial systems.
Yet, efforts to experimentally reconstitute even basic aspects of such pattern-forming systems \textit{in vitro} could substantially enhance our understanding of their underlying mechanisms via control and perturbation of the experimental conditions.
For the Min system, several key questions remain to be answered. Central is the experimental control over system parameters that gives rise to the multitude of observed patterns.
Future research may reveal additional chemical states of MinD as well as MinE or additional chemical reactions that refine the hitherto identified skeleton network.
While this will affect the number of chemical components and reaction terms one has to take into account in the mathematical model, it does not change the overall structure of the set of reaction-diffusion equations:
(1) Fast cytosolic diffusion is coupled to slow membrane dynamics by chemical reactions that conserve protein number.
(2) Nucleotide exchange in the cytosol implies that active MinD is spatially separated from the reactive membrane. As a consequence, the cytosol serves as a repository for active MinD.
(3) MinD and MinE remain the only conserved species. The sum of individual components of each species, regardless of the number of components, will always be a conserved quantity.
Open questions relating to molecular details of Min protein interaction concern the roles of membrane binding and conformational state switching of MinE \cite{Park_etal:2011}.
Only a combined approach, in which the theoretical model is constrained and supported by unambiguous experimental data, has the potential to truly relate molecular ``design'' features of Min proteins to defined roles in pattern formation.
In summary, protein pattern formation plays key roles in many essential biological processes from bacteria to animals, including cell polarisation and division.
Combined theoretical and experimental approaches have established important principles of pattern-forming protein systems.
Perhaps the most crucial feature that has emerged from these research efforts is the identification of the cytosol as a depot. This depot enables the system to store proteins and redistribute them throughout the system.
Cytosolic diffusion is the key process that detects the local shape of the membrane, and it is this explicit dependence on geometry that is imprinted on membrane-bound protein patterns.
\acknowledgements
We thank Fridtjof Brauns, Yaron Caspi, Cees Dekker, Jonas Denk, and Fabai Wu for helpful discussions.
This research was supported by the German Excellence Initiative via the program ``NanoSystems Initiative Munich'' (NIM), and the Deutsche Forschungsgemeinschaft (DFG) via project A09 and B02 within the Collaborative Research Center (SFB 1032) ``Nanoagents for spatio-temporal control of molecular and cellular reactions''.
SK is supported by a DFG fellowship through QBM.
\section{Introduction}\label{sec-1}
An $R^{n}$ valued $\alpha$-permanental random variable $X=(X_{1},\ldots, X_{n})$ is a random variable with Laplace transform
\begin{equation}
E\(e^{-\sum_{i=1}^{n}s_{i}X_{i}}\)
= \frac{1}{ |I+KS|^{ \alpha}}, \label{int.1}
\end{equation}
where $K$ is an $n\times n$ matrix and $S $ is an $n\times n$ diagonal matrix with diagonal entries $(s_{1},\ldots,s_{n})$.
We refer to $K$ as a kernel of $X$. But note that $K$ is not unique. For example, if $K$ satisfies (\ref{int.1}) so does
$\Lambda K\Lambda^{-1}$ for any $\Lambda\in {\cal D}_{n,+}$, the set of $n\times n$ diagonal matrices with strictly positive diagonal entries.
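Indeed, since $\Lambda$ and $S$ are both diagonal they commute, so
\begin{equation}
|I+\Lambda K\Lambda^{-1}S|=|\Lambda (I+K\Lambda^{-1}S\Lambda )\Lambda^{-1}|=|I+K\Lambda^{-1}\Lambda S|=|I+KS|.
\end{equation}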
Let ${\cal K}(X)$ denote the set of all kernels that determine $X $ by (\ref{int.1}).
We are particularly interested in $\alpha$-permanental random variables $X$ for which ${\cal K}(X)$ does not contain any symmetric kernels. (We explain at the end of this section why we are interested in such processes and kernels.)
If ${\cal K}(X)$ contains a symmetric matrix we say that $X$ is determined by a symmetric matrix or kernel and that any $K\subset {\cal K}(X)$ is equivalent to a symmetric matrix, or is symmetrizable. It follows from (\ref{int.1}) that a kernel $K$ is equivalent to a symmetric matrix if and only if there exists an $n\times n$ symmetric matrix $Q$ such that
\begin{equation}
|I+KS| = |I+QS| \quad\mbox{for all $S\in {\cal D}_{n,+}$}\label{1.3qqr}.
\end{equation}
An $\alpha$-permanental process $\{X_{t},t\in T\}$ is a stochastic process that has finite dimensional distributions that are $\alpha$-permanental random variables. An $\alpha$-permanental process is determined by a kernel $\{K(s,t),s,t\in T\}$ with the property that for all distinct $t_{1},\ldots,t_{n}$ in $T$, $\{K(t_{i},t_{j}),i,j\in [1,n]\}$ is the kernel of the $\alpha$-permanental random variable $(X_{t_{1}},\ldots,X_{t_{n}})$.
\medskip \noindent {\bf Definition }We say that an $\alpha$-permanental process $\{X_{t},t\in T\}$ with kernel $\{K(s,t),s,t\in T\}$ is determined by a symmetric kernel if for all $n\ge 1$ and distinct $t_{1},\ldots,t_{n}$ in $T$, $\{K(t_{i},t_{j}),i,j\in [1,n]\}$ is symmetrizable. When this is the case we also say that $\{K(s,t),s,t\in T\}$ is symmetrizable.
(In what follows we always take $|T|\ge 3$.)
\medskip The next theorem is \cite[Theorem 1.9]{MRall}. It shows that we can modify a very large class of symmetric potentials so that they are no longer symmetric but are still kernels of permanental processes.
\bt
\label{theo-borelN}
Let $S$ be a locally compact set with a countable base.
Let $X\!=\!
(\Omega, {\cal F}_{t}, X_t,\theta_{t},P^x
)$ be a transient symmetric Borel right process with state space $S$ and continuous strictly positive potential densities $u(x,y)$ with respect to some $\sigma$-finite measure $m$ on $S$.
Then for any finite excessive function $f$ of $X$ and $\alpha>0$,
\begin{equation}
\widetilde u^{f}(x,y)= u(x,y) +f(y),\qquad x,y\in S,\label{1.10mm}
\end{equation}
is the kernel of an $\alpha$-permanental process.
\end{theorem}
A function $f$ is said to be excessive for $X$ if $ E^{x}\(f(X_{t})\)\uparrow f(x)$ as $t\to 0$ for all $x\in S$.
It is easy to check that for any positive measurable function $h$, \begin{equation}
f(x)=\int u(x,y) h(y) \,dm(y)=E^{ x} \( \int_{0}^{ \infty}h\( X_{t}\)\,dt\)\label{potdef}
\end{equation}
is excessive for $X$.
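Indeed, by the Markov property and Fubini's theorem,
\begin{equation}
E^{x}\( f(X_{t})\) =E^{x}\( \int_{0}^{\infty}h\( X_{t+s}\)\,ds\) =E^{x}\( \int_{t}^{\infty}h\( X_{s}\)\,ds\) ,
\end{equation}
which increases to $f(x)$ as $t\downarrow 0$ by monotone convergence, since $h\ge 0$.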
Such a function $f$ is called a potential function for $X$.
\medskip Unless the function $f$ in (\ref{1.10mm}) is constant,
$\{\widetilde u^{f}(x,y);x,y\in S\}$ is not symmetric. We now show that, generally, we can choose $f $ so that $\{\widetilde u^{f}(x,y);x,y\in S\}$ is also not equivalent to a symmetric matrix. The next two theorems show how restricted the symmetric matrix $\{ u(x,y);x,y\in S\}$ must be for $\{\widetilde u^{f}(x,y);x,y\in S\}$ to be symmetrizable for all potential functions $f$.
\medskip We use $\ell_{1}^{+}$ to denote strictly positive sequences in $\ell_{1}$.
\bt\label{theo-borelNS}
Let $X\!=\!
(\Omega, {\cal F}_{t}, X_t,\theta_{t},P^x
)$ be a transient symmetric Borel right process with state space $ T\subseteq \mathbb N$, and potential $U=\{U_{j,k}\}_{j,k\in T}$. Then
\begin{itemize}
\item [(i)] Either
\begin{equation}
U_{j,k} =\Lambda_{j}\delta_{j,k}+ d, \qquad j,k\in T,\label{1.5nn}
\end{equation}
where $\Lambda_{j}\ge 0$ and $d\ge 0$,
\item [(ii)] or we can find a potential function $f=Uh$, with $h \in \ell^{+}_{1}$, such that
\begin{equation}
\widetilde U_{j,k}^{f}:= U_{j,k}+ f_{k},\qquad j,k\in T,\label{1.6nn}
\end{equation}
is not symmetrizable.
\end{itemize}
\end{theorem}
When we consider limit theorems for infinite sequences of permanental random variables $\{Y(k), k\in \mathbb N\}$ with kernel $V=\{v(j,k), j,k\in \mathbb N\}$ it is not enough to know that $V$ is not symmetrizable since we are only concerned with
the permanental variables generated by $V(n)=\{v(j,k), j,k\ge n \}$ as $n\to \infty$. We would like to know that $V(n)$ is not symmetrizable for large $n$. We say that the kernel $V$ is asymptotically symmetrizable if there exists an $n_{0}$ such that $V(n)$ is symmetrizable for all $n\ge n_{0}$. We can modify Theorem \ref{theo-borelNS} to handle this case also.
\bt\label{theo-borelNSmm}
Let $X\!=\!
(\Omega, {\cal F}_{t}, X_t,\theta_{t},P^x
)$ be a transient symmetric Borel right process with state space $ \mathbb N$, and potential $U=\{U_{j,k}\}_{j,k\in \mathbb N}$. Then
\begin{itemize}
\item [(i)] Either there exists an $n_{0}$ such that
\begin{equation}
U_{j,k} =\Lambda_{j}\delta_{j,k}+ d,\qquad \forall j,k\ge n_{0},\label{3.3mma}
\end{equation}
where $\Lambda_{j}\ge 0$ and $d\ge 0$,
\item [(ii)] or we can find a potential function $f=Uh$, with $h \in \ell^{+}_{1}$, such that
\begin{equation}
\widetilde U_{j,k}^{f}:= U_{j,k}+ f_{k},\qquad j,k\in \mathbb N,
\end{equation}
is not asymptotically symmetrizable.
\end{itemize}
\end{theorem}
The next theorem shows that when the state space of a transient symmetric Borel right process has a limit point, then under reasonable conditions on the potential densities that determine the process, the process is not determined by a kernel that is asymptotically symmetrizable.
\begin{theorem}\label{theo-1.4} Let $S'=\{x_{0},x_{1},\ldots\}$ be a countable set with a single limit point $x_{0}$. Let $\overline X$ be a transient symmetric Borel right process with state space $ S'$, and continuous strictly positive potential densities $u:=\{u(x,y), x,y\in S'\}$ such that $u(y, x_{0})<u(x_{0},x_{0})$ for all $y\neq x_{0}$. Then we can find a potential function $f=uh$, with $h \in \ell ^{+}_{1}$, that is continuous at $x_{0}$, and is such that,
\begin{equation}
\widetilde u^{f}(x,y)= u(x,y)+f(y),\qquad x,y\in S',\label{1.9mm}
\end{equation}
is not asymptotically symmetrizable.
\end{theorem}
Theorems \ref{theo-borelNS}--\ref{theo-1.4} show that generally there exists an excessive function $f$ for $X$ which gives a kernel for an $\alpha$-permanental process that is not determined by a symmetric matrix. However, in specific examples we deal with specific functions $f$ and want to know that the kernels determined by these functions are not symmetrizable. With some additional structure on the symmetric matrix $u(x,y)$ in (\ref{1.10mm}) we can show that $\widetilde u^{f}(x,y)$ in (\ref{1.10mm}) is not asymptotically symmetrizable.
\begin{lemma}\label{lem-1.1mm} In the notation of (\ref{1.10mm}), let $u=\{u(j,k); j,k\in \mathbb N \}$ be a symmetric Toeplitz matrix, with at least two different off diagonal elements, and set $v(|j-k|)=u(j,k)$. Let\begin{itemize}
\item[(i)]\begin{equation}
\widetilde u^{f}(j,k)= v(|j-k|) +f(k),\qquad j,k\in \mathbb N ,\label{1.10a}
\end{equation}
where $f$ is a strictly monotone potential for $u$. Then $\{\widetilde u^{f}(j,k);j,k\in \mathbb N\}$ is not asymptotically symmetrizable.
\item[(ii)]Let
\begin{equation}
\widetilde v^{f}(s_{j},s_{k})= s_{j}\wedge s_{k} +f(s_{k}),\qquad {j}, {k}\in \mathbb N,\label{1.10b}
\end{equation}
where $f$ is a strictly monotone potential for $\{s_{j}\wedge s_{k}; {j}, {k}\in \mathbb N\}$. Then for any triple of distinct values $ s_{j}, s_{k},s_{l} $,
\begin{equation}
\{ \widetilde v^{f}(s_{p},s_{q})\}_{p,q=j,k,l}\, ,\label{1.10bb}
\end{equation}
is not symmetrizable. In particular $\{ \widetilde v^{f}(s_{j},s_{k});j,k\in \mathbb N\}$ is not asymptotically symmetrizable. \end{itemize}
\end{lemma}
We can use this lemma to show that certain $\alpha$-permanental processes, studied in \cite{MRall}, are not determined by kernels that are asymptotically symmetrizable. When $S$ is an interval on the real line we say that $ \{u(x,y);x,y\in S\}$ is not asymptotically symmetrizable at $x_{0}\in S$, if we can find a sequence $\{x_{k}\}$ in $S$ such that $\lim_{k\to\infty}x_{k}=x_{0}$, and
$ \{u(x_{j},x_{k});j,k\in \mathbb N\}$ is not asymptotically symmetrizable.
\begin{example}\label{ex-1.1} {\rm
In \cite[Example 1.3]{MRall} we obtain a limit theorem for the asymptotic behavior of the sample paths at 0 of $\alpha$-permanental processes with the kernel,
\begin{equation}
\widehat u^{f}(s,t)=e^{-\la |s-t|}+f(t),\qquad s,t\in [0,1],\label{1.33j}
\end{equation}
where $f=q+t^{\beta}$, $\beta>2$, and $q\ge q_{0}(\beta)$, a constant depending on $\beta$. We show in Section \ref{sec-4} that $ \widehat u^{f}(s,t)$ is not asymptotically symmetrizable at any $s_{0}\in S$.
Similarly
\begin{equation}
\overline u^{f}(j,k)=e^{-\la |j-k|}+f(k),\qquad j,k\in \mathbb N ,\label{1.33k}
\end{equation}
is not asymptotically symmetrizable.
}\end{example}
\begin{example}\label{ex-1.2} {\rm
In \cite[Example 1.4]{MRall} we obtain limit theorems for the asymptotic behavior of the sample paths at zero and infinity of $\alpha$-permanental processes with the kernel,
\begin{equation}
\widetilde v^{f}(s,t)=s \wedge t+f(t),\qquad s,t\ge 0,\label{1.39j}
\end{equation}
where $f$ is a concave strictly increasing function. We show in Section \ref{sec-4} that for any $s_{0}\in R^{+}$ and any sequence of distinct values $\{s_{k}\} $ such that $\lim_{k\to\infty}s_{k}=s_{0}$, $ \widetilde v^{f}(s_{j},s_{k})$ is not asymptotically symmetrizable.
In addition,
\begin{equation}
\overline v^{f}(j,k)=j \wedge k+f(k),\qquad j,k\in \mathbb N ,\label{1.39k}
\end{equation}
is not asymptotically symmetrizable.
}\end{example}
We explain why we are particularly interested in $\alpha$-permanental processes determined by kernels $K$ that are not equivalent to a symmetric matrix. When $\{u(s,t);s,t\in {\cal T}\}$ is symmetric and is a kernel that determines $\alpha$-permanental processes, $Y_{\alpha}=\{Y_{\alpha}(t),t\in {\cal T}\}$, then
\begin{equation}
Y_{1/2} \stackrel{law}{=} \{G^{2}(t)/2,t\in {\cal T}\},
\end{equation}
where $G=\{G (t) ,t\in {\cal T}\}$ is a mean zero Gaussian process with covariance $u(s,t)$.
If $\alpha=m/n$ for integers $m$ and $n$,
\begin{equation}
Y_{m/n} \stackrel{law}{=} \sum_{j=1}^{m}\sum_{k=1}^{n}Y_{1/(2n)}^{(j,k)},
\end{equation}
where $Y_{1/(2n)}^{(j,k)}$ are independent copies of $Y_{1/(2n)} $. Therefore, in some sense, $Y_{m/n}$, is only a modification of the Gaussian process $G$.
This is not true when the kernel of $\alpha$-permanental processes is not symmetrizable. In this case we get a new class of processes. These are the processes that we find particularly interesting.
\medskip To study permanental processes with kernels that are not equivalent to a symmetric matrix our first step is to characterize those kernels that are equivalent to a symmetric matrix. This is done in Section \ref{sec-2}. In Section \ref{sec-3} we give the proofs of Theorems \ref{theo-borelNS}--\ref{theo-1.4}. In Section \ref{sec-4} we give the proof of Lemma \ref{lem-1.1mm} and details about Examples \ref{ex-1.1} and \ref{ex-1.2}.
\section{Kernels that are equivalent to a symmetric matrix }\label{sec-2}
Let $M$ be an $n\times n$ matrix. For ${\cal I}\subseteq [1,\ldots,n]$ we define $M_{{\cal I}}$ to be the $|{\cal I} | \times |{\cal I} |$ matrix $\{M_{p,q}\}_{ p,q\in {\cal I}}$. (Recall that ${\cal D}_{n,+}$ is the set of all $n\times n$ diagonal matrices with strictly positive diagonal elements.)
\begin{lemma}\label{lem-1.1n} Let $K$ be an $n\times n$ matrix and assume that
\begin{equation}
|I+KS| = |I+QS| \quad\mbox{for all $S\in {\cal D}_{n,+}$}. \label{1.3qq}
\end{equation}
Then for all ${\cal I}\subseteq [1,\ldots,n]$
\begin{equation}
|K _{{\cal I}} | = |Q _{{\cal I}} |.\label{1.3qq5}
\end{equation}
In particular
\begin{equation}
|K | = |Q | \label{1.3qq7}
\end{equation}
and
\begin{equation}
K_{j, j} =Q_{j,j} \quad\mbox{for all}\quad j=1,\ldots,n.\label{1.3qq2}
\end{equation}
Furthermore, if $Q$ is symmetric, then
\begin{equation}
|Q_{j,k}|=(K_{j,k}K_{k,j})^{1/2}\quad\mbox{for all}\quad j,k =1,\ldots,n \label{1.5w}
\end{equation}
and
for all distinct $i_{1},i_{2},i_{3}\in [1,\ldots,n]$
\begin{equation}
K_{i_{1},i_{2}} K_{i_{2},i_{3}} K_{i_{3},i_{1}} =K_{i_{1},i_{3}} K_{i_{2},i_{1}} K_{i_{3},i_{2}}.\label{1.7w}
\end{equation}
\end{lemma}
\noindent{\bf Proof $\,$ } Denote the diagonal elements of $S$ by $\{s_{i}\}_{i=1}^{n} $. Let $s_{i}\to 0$ for all $i\in {\cal I}^{c}$ in (\ref{1.3qq}) to get
\begin{equation}
|I+K_{{\cal I}}S| = |I+Q_{{\cal I}}S| \quad\mbox{for all $S\in {\cal D}_{|{\cal I}|,+}$}. \label{1.3qq1}
\end{equation}
Multiply both sides of (\ref{1.3qq1}) by $|S^{-1}|$ and
let the diagonal components of $S$ go to infinity to get (\ref{1.3qq5}). The relationships in (\ref{1.3qq7}) and (\ref{1.3qq2}) are simply examples of
(\ref{1.3qq5}).
Let ${\cal I}=\{i,j\}$. It follows from (\ref{1.3qq5}) that
\begin{equation}
K_{i,i}K_{j,j}-K_{i,j}K_{j,i}=Q_{i,i}Q_{j,j}-Q^{2}_{i,j},\label{1.3qq6}
\end{equation}
which by (\ref{1.3qq2}) implies that $K_{i,j}K_{j,i}=Q^{2}_{i,j}$. This gives (\ref{1.5w}).
Finally, let ${\cal I}=\{i_{1},i_{2},i_{3}\}$ and take the determinants $ |K_{{\cal I}}| $ and $ |Q_{{\cal I}}|$.
It follows from (\ref{1.3qq5}), (\ref{1.3qq2}) and (\ref{1.5w}) that
\begin{eqnarray}
&& K_{i_{1},i_{2}} K_{i_{2},i_{3}} K_{i_{3},i_{1}} +K_{i_{1},i_{3}} K_{i_{2},i_{1}}K_{i_{3},i_{2}}\nonumber \\
&&\hspace{1in}
=Q_{i_{1},i_{2}} Q_{i_{2},i_{3}} Q_{i_{3},i_{1}} +Q_{i_{1},i_{3}} Q_{i_{2},i_{1}} Q_{i_{3},i_{2}}\nonumber\\
&&\hspace{1in} =2Q_{i_{1},i_{2}} Q_{i_{2},i_{3}} Q_{i_{3},i_{1}}.
\end{eqnarray}
By (\ref{1.5w}) this is equal to
\begin{equation}
\pm 2(K_{i_{1},i_{2}} K_{i_{2},i_{3}} K_{i_{3},i_{1}} K_{i_{1},i_{3}} K_{i_{2},i_{1}}K_{i_{3},i_{2}})^{1/2} .\label{}
\end{equation}
Set \begin{equation}
x=K_{i_{1},i_{2}} K_{i_{2},i_{3}} K_{i_{3},i_{1}}\quad\mbox{and}\quad y=K_{i_{1},i_{3}} K_{i_{2},i_{1}}K_{i_{3},i_{2}}. \label{}
\end{equation}
Then we have
\begin{equation}
x+y=\pm 2\sqrt{xy}. \label{234}
\end{equation}
It is clear from this that $x$ and $y$ have the same sign. If they are both positive, we have
\begin{equation}
x+y= 2\sqrt{xy}, \label{235}
\end{equation}
that is, $(\sqrt{x}-\sqrt{y})^{2}=0$, which gives (\ref{1.7w}).
On the other hand, if $x $ and $y$ are both negative, (\ref{234}) implies that
\begin{equation}
(-x)+(-y)= 2\sqrt{(-x)(-y)}, \label{236}
\end{equation}
which also gives (\ref{1.7w}).{\hfill $\square$ \bigskip}
\begin{remark} {\rm Even when $K$ is the kernel of $\alpha$-permanental processes we must have absolute values on the left-hand sides of (\ref{1.5w}). This is because when (\ref{1.3qq}) holds it also holds when $|I+ QS |$ is replaced by $|I+{\cal V} Q{\cal V} S|$ for any signature matrix ${\cal V}$. (A signature matrix is a diagonal matrix with diagonal entries $\pm 1$.) So the symmetric matrix $Q$ need not be the kernel of $\alpha$-permanental processes. On the other hand, by \cite[Lemma 4.2]{EK}, we can find a symmetric matrix $\widetilde Q$ that is the kernel of $\alpha$-permanental processes such that (\ref{1.3qq}) holds with $Q$ replaced by $\widetilde Q$ and we have $ \widetilde Q_{j,k} = (K_{j,k}K_{k,j})^{1/2}$. }\end{remark}
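As a concrete illustration of how Lemma \ref{lem-1.1n} is used, consider the matrix
\begin{equation}
K=\left (
\begin{array}{ ccc }
1 &1& 2 \\
2 & 1&1 \\
1 &2& 1
\end{array}\right ),
\end{equation}
for which $K_{1,2} K_{2,3} K_{3,1}=1$ while $K_{1,3} K_{2,1} K_{3,2}=8$. Since (\ref{1.7w}) fails, $K$ is not equivalent to a symmetric matrix.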
\section{Proofs of Theorems \ref{theo-borelNS}--\ref{theo-1.4}}\label{sec-3}
We begin with a simple observation that lies at the heart of the proofs of Theorems \ref{theo-borelNS} and \ref{theo-borelNSmm}.
For $y\in R^{n}$ we use $B_{\delta}(y)$ to denote a Euclidean ball of radius $\delta$ centered at $y$.
\begin{lemma}\label{lem-3.1mm} Let $W=\{w _{j,k}; j,k=1,2,3\}$ be a positive symmetric matrix such that $w _{j,k}\leq w_{j,j}\wedge w _{k,k}$. For any $x=(x_{1},x_{2},x_{3})$ let $\widetilde W^{x}$ be a $3\times 3$ matrix defined by
\begin{equation}
\widetilde W^{x}_{j,k}=w _{j,k}+x_{k},\qquad j,k=1,2,3.
\end{equation}
Suppose that $\widetilde W^{x}$ is symmetrizable for all $x\in B_{\delta}(x_{0})$, for some $x_{0}\in R^{3}$ and $\delta>0$. Then, necessarily,
\begin{equation}
w _{j,k}=\Lambda_{j}\delta_{j,k}+ d, \qquad j,k=1,2,3,\label{3.2mm}
\end{equation}
where $\Lambda_{j}\ge 0$ and $d\ge 0$.
\end{lemma}
\noindent{\bf Proof $\,$ } It follows from Lemma \ref{lem-1.1n} that for all $x\in B_{\delta}(x_{0})$
\begin{equation} \( w_{1,2}+x_{2}\) \( w_{2,3}+x_{3}\) \( w_{3,1}+x_{1}\) = \( w_{1,3}+x_{3}\) \( w_{2,1}+x_{1}\)
\( w_{3,2}+x_{2}\). \label{3.3mm}
\end{equation}
We differentiate each side of (\ref{3.3mm}) with respect to $x_{1}$ and $x_{2}$ in $B_{\delta}(x_{0})$ and see that
\begin{equation}
w_{2,3}+{x}_{3} =w_{1,3}+x_{3}.
\end{equation}
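In detail, each side of (\ref{3.3mm}) is a polynomial in $(x_{1},x_{2},x_{3})$, and
\begin{eqnarray}
\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}\( w_{1,2}+x_{2}\) \( w_{2,3}+x_{3}\) \( w_{3,1}+x_{1}\)&=&w_{2,3}+x_{3},\nonumber\\
\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}\( w_{1,3}+x_{3}\) \( w_{2,1}+x_{1}\) \( w_{3,2}+x_{2}\)&=&w_{1,3}+x_{3}.\nonumber
\end{eqnarray}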
Therefore, we must have $ w_{2,3} =w_{1,3}$.
Differentiating twice more with respect to $x_{1}$ and $x_{3}$, and $x_{2}$ and $x_{3}$, we
see that if (\ref{3.3mm}) holds for all $x\in B_{\delta}\( x_{0}\)$ then
\begin{equation}
w_{2 ,3} = w_{1 ,3},\quad w_{1,2} = w_{3,2} ,\quad\mbox{and}\quad w_{3,1} = w_{2,1}. \label{}
\end{equation}
This implies that for some $(d_{1},d_{2},d_{3})$
\begin{equation} W= \left (
\begin{array}{ c cccc }
w_{1,1} &d_{2}& d_{3 } \\
d_{1} & w_{2,2} & d_{3} \\
d_{1} &d_{2}& w_{3,3} \end{array}\right ).\label{nsz.4}
\end{equation}
Furthermore, since $W$ is symmetric, we must have $d_{1}=d_{2}=d_{3}$.
Set $d=d_{i}$, $i=1,2,3$. Then, since $w_{i,i}\ge w_{i,j}$, $i,j=1,2,3$, we can write $w_{i,i}=\la_{i}+d$ for some $\la_{i}\geq 0$, $i=1,2,3$. This shows that (\ref{3.2mm}) holds.{\hfill $\square$ \bigskip}
In using Lemma \ref{lem-3.1mm} we often consider $3\times 3$ principal submatrices of a larger matrix. Consider the matrix $\{W(x,y)\}_{x,y\in S}$, for some index set $S$. Let $\{x_{1},x_{2},x_{3}\}\subset S$. Consistent with the notation introduced at the beginning of Section \ref{sec-2} we note that
\begin{equation}
W_{\{x_{1},x_{2},x_{3}\}}=\{ W_{x_{j},x_{k} }\}_{j,k=1}^{3}.
\end{equation}
We also use $1_{n}$ to denote an $n\times n$ matrix with all its elements equal to 1.
\medskip \noindent{\bf Proof of Theorem \ref{theo-borelNS}.}
If $(i)$ holds then
\begin{equation}
\widetilde U^{f}:= \Lambda+1_{|T|}G,
\end{equation}
where $G$ is a $|T|\times |T|$ diagonal matrix with entries $f_{1}+d,f_{2}+d,\ldots $. Let
${\cal I}$ be any finite subset of $T$. Obviously,
\begin{equation}
\(\widetilde U^{f}\)_{{\cal I}}= \Lambda_{{\cal I}}+1_{|{\cal I}|}G_{{\cal I}}.
\end{equation}
Since
\begin{equation}
G_{{\cal I}}^{1/2} \(\Lambda_{{\cal I}}+1_{|{\cal I}|}G_{{\cal I}}\)G_{{\cal I}}^{-1/2}= \Lambda_{{\cal I}}+G_{{\cal I}}^{1/2}1_{|{\cal I}|}G_{{\cal I}}^{1/2},
\end{equation}
and $\Lambda_{{\cal I}}+G^{1/2}_{{\cal I}}1_{|{\cal I}|}G^{1/2}_{{\cal I}}$ is symmetric, we see that $ \widetilde U^{f}$ is symmetrizable. This shows that if $(i)$ holds then $(ii)$ does not hold.
Suppose that $(i)$ does not hold. We show that in this case
we can find a triple $\{t_{1}, t_{2}, t_{3}\}$ such that $U_{\{t_{1}, t_{2}, t_{3}\}}$
does not have all its off diagonal elements equal.
Since $(i)$ does not hold
there are two off diagonal elements of $U$ that are not equal, say $U_{l,m}=a$
and $U_{p,q}=b$. Suppose that none of the indices $l,m,p,q$ are equal. The kernel of $(X_{l},X_{m},X_{p})$ has the form
\begin{equation} U_{\{l,m,p\}} = \left (
\begin{array}{ c cccc }
\,\cdot\, &a& \,\cdot\, \\
a & \,\cdot\,&\,\cdot\, \\
\,\cdot\, &\,\cdot\,& \,\cdot\,
\end{array}\right ),\label{3.26x}
\end{equation}
where we use $\,\cdot\,$ when we don't know the value of the entry. If any of the off diagonal terms of $U_{\{l,m,p\}}$ are not equal to $a$ we are done.
Assume then that all the off diagonal terms of $U_{\{l,m,p\}}$ are equal. This implies, in particular, that $(U_{\{l,m,p\}})_{m,p}=(U_{\{l,m,p\}})_{p,m}=a$. Therefore, $U_{\{m,p,q\}}$ has the form,
\begin{equation} U_{\{m,p,q\}}:= \left (
\begin{array}{ c cccc }
\,\cdot\, &a& \,\cdot\, \\
a & \,\cdot\,&b \\
\,\cdot\, &b& \,\cdot\,
\end{array}\right ).\label{3.26}
\end{equation}
Therefore, if none of the indices $l,m,p,q$ are equal we see that there exists a triple $\{t_{1}, t_{2}, t_{3}\}$ such that $U_{\{t_{1}, t_{2}, t_{3}\}}$ does not have all its off diagonal elements equal.
If $l=p$ the argument is simpler, because in this case
\begin{equation} U_{\{l,m,q\}} = \left (
\begin{array}{ c cccc }
\,\cdot\, &a& b \\
a & \,\cdot\,&\,\cdot\, \\
b &\,\cdot\, & \,\cdot\,
\end{array}\right ).\label{3.27}
\end{equation}
If $m=q$ the kernel of $(X_{l},X_{p}, X_{m})$ is
\begin{equation}\(
\begin{array}{ c cccc }
\,\cdot\, &\,\cdot\,& a \\
\,\cdot\, & \,\cdot\,&b \\
a &b & \,\cdot\,
\end{array}\right ).\label{3.2ww}
\end{equation}
Using the fact that $U$ is symmetric we see that cases when $l=q$ or $m=p$ are included in the above.
This shows that when $(i)$ does not hold we can find a triple $\{t_{1}, t_{2}, t_{3}\}$ such that $U_{\{t_{1}, t_{2}, t_{3}\}}$ does not have all its off diagonal elements equal. We now show that in this case $(ii)$ holds, that is, we can find a potential $f$ for which (\ref{1.6nn}) is not symmetrizable.
For convenience we rearrange the indices so that $\{t_{1}, t_{2}, t_{3}\}=\{1, 2, 3\}$. We take any $h^{*}\in \ell_{1}^{+}$ and consider the potential $f^{*}=U h^{*}$. If $\widetilde U^{f^{*}}_{\{1,2,3\}}:= \{U_{j,k}+f^{*}_{k}\}_{j,k=1}^{3}$ is not symmetrizable, we are done. That is, $(ii)$ holds with $f=f^{*}$. However, it is possible that $U_{\{1,2,3\}}$ is not of the form (\ref{3.2mm}) but
\begin{equation} \( U_{1,2}+f^{*}_{2}\) \( U_{2,3}+f^{*}_{3}\) \( U_{3,1}+f^{*}_{1}\) = \( U_{1,3}+f^{*}_{3}\)
\( U_{2,1}+f^{*}_{1}\)\( U_{3,2}+f^{*}_{2}\). \label{3.3k}
\end{equation}
(See (\ref{3.3mm})). Nevertheless, since $U_{\{1,2,3\}}$ is not of the form (\ref{3.2mm}), it follows from Lemma \ref{lem-3.1mm} that for all $\delta>0$
there exists an $(f_{1},f_{2},f_{3})\in B_{\delta}(f^{*}_{1},f^{*}_{2},f^{*}_{3})$ such that $\{U_{j,k}+f _{k}\}_{j,k=1}^{3}$ is not symmetrizable.
(Here we use the facts that a symmetric potential density $U_{j,k}$ is always positive and satisfies $U_{j,k} \leq U_{j,j}\wedge U_{k,k}$, see \cite[(13.2)]{book}.)
Note that $U_{\{1,2,3\}}$ is invertible. (See e.g., \cite[Lemma A.1]{MRnec}.) Therefore, we can find $c_{1},c_{2},c_{3}$ such that
\begin{equation}
f_{j}= f_{j}^{*}+ \sum_{k=1}^{3} U_{j,k} c_{k},\qquad j=1,2,3. \label{3.14mm}
\end{equation}
Now, set $h=h^{*}+c$, where $c=(c_{1},c_{2},c_{3},0,0,\ldots)$, i.e., all the components of $c$ except for the first three are equal to 0 and set $f=Uh$. The components $f_{1},f_{2},f_{3}$ are given by (\ref{3.14mm}). Furthermore, we can choose $\delta$ sufficiently small so that for $(f_{1},f_{2},f_{3})\in B_{\delta}(f^{*}_{1},f^{*}_{2},f^{*}_{3})$, $c_{1},c_{2},c_{3}$ are small enough so that $h_{1}$, $h_{2}$, and $h_{3}$ are strictly greater than 0, which, of course, implies that $h\in \ell_{1}^{+}$, (defined just prior to Theorem \ref{theo-borelNS}). Therefore, $(ii)$ holds with this potential $f$. {\hfill $\square$ \bigskip}
In Theorem \ref{theo-borelNS} it is obvious that if $(i)$ does not hold then there are functions $f$ for which (\ref{1.6nn}) is not symmetrizable. What was a little difficult was to show that $f=(f_{1},f_{2},\ldots)$, is a potential for $X$. We have the same problem in the proof of Theorem \ref{theo-borelNSmm} but it is much more complicated. If we start with a potential $f^{*}=Uh^{*}$, to show that $\widetilde U^{f}$ is not asymptotically symmetrizable, we may need to modify an infinite number of the components of $f^{*}$ and still end up with a potential $f$. The next lemma is the key to doing this.
\bl\label{lem-borelNSmm}
Let $X\!=\!
(\Omega, {\cal F}_{t}, X_t,\theta_{t},P^x
)$ be a transient symmetric Borel right process with state space $ \mathbb N$, and potential $U=\{U_{j,k}\}_{j,k\in \mathbb N}$. Then we can find a potential function $f=Uh$, with $h \in \ell^{+}_{1}$, such that for all $\alpha>0$,
\begin{equation}
\widetilde U^{f}_{j,k}= U_{j,k} +f_{ k},\qquad j,k\in\mathbb N,\label{1.10}
\end{equation}
is the kernel of an $\alpha$-permanental sequence.
Moreover, for $I_{l}=\{3l+1,3l+2,3l+3\}$, the following dichotomy holds for each $l\ge 0$:\begin{itemize}
\item [(i)] Either $\widetilde U^{f}_{I_l}$ is not symmetrizable,
\item [(ii)] or
\begin{equation}
U_{I_l}=\Lambda+d 1_{3},\label{nsz.00}
\end{equation}
where $\Lambda \in D_{3,+}$ and $d\geq 0$.
\end{itemize}
\end{lemma}
\noindent{\bf Proof $\,$ } Let $\{i_{l,j}=3l+j\}_{l\ge 0,j\in \{1,2,3\}}$. For $f=\{f_{k}\}_{k=1}^{\infty}$ define,
\begin{eqnarray}
F_{l}( f )&=&F_{l}( f_{i_{l,1}}, f_{i_{l,2}}, f_{i_{l,3}})\label{nsz.2}\\
&=& ( U_{i_{l,1},i_{l,2}}+f_{i_{l,2}})( U_{i_{l,2}, i_{l,3}} +f_{i_{l,3}})( U_{i_{l,3},i_{l,1}}+ f_{i_{l,1}}) \nonumber\\
&&\hspace{.1in} -( U_{i_{l,1},i_{l,3}}+f_{i_{l,3}})( U_{i_{l,3},i_{l,2}}+f_{i_{l,2}})(U_{i_{l,2}, i_{l,1}}+ f_{i_{l,1}}). \nonumber
\end{eqnarray}
We note that when $U_{I_l}$ is given by (\ref{nsz.00}), then for any sequence $\{f_{i_{l,1}}, f_{i_{l,2}}, f_{i_{l,3}}\}$, $ F_{l}( f)=0$ and $\widetilde U^{f}_{I_l}$ is symmetrizable. The first assertion in the previous sentence follows because
all the off diagonal terms $\{U_{i_{l,j},i_{l,k}}\}_{j\ne k}$ are equal to $d$. The second
is proved in the first paragraph of the proof of Theorem \ref{theo-borelNS}. On the other hand, it follows from Lemma \ref{lem-1.1n} that if $ F_{l}( f)\neq 0$ then $\widetilde U^{f}_{I_l}$ is not symmetrizable.
Therefore, to prove this lemma it suffices to find an $h \in \ell^{+}_{1}$ for which the potential function $f=Uh$ satisfies the following dichotomy for each $l\ge 0$:
\begin{equation}
\mbox{Either
$F_{l}( f )\neq 0 \quad $ or $\quad U_{I_l}$ has the form (\ref{nsz.00}).}\label{3.8mm}
\end{equation}
To find $h$ we take any function $h^{*}\in \ell_{1}^{+}$ and define successively $h^{(n)}\in \ell^{+}_{1}$, $n\ge -1$, such that $h^{(-1)} =h^{\ast} $ and
\begin{equation}
h^{(n+1)}_{j}=h^{(n)}_{j},\hspace{.2 in} \forall j\notin I_{n}, \quad\mbox{and}\quad 0<{1 \over 2} h_{j}^{\ast}\leq h^{(n)}_{j}\leq 2h_{j}^{\ast},\quad j\ge 1\label{nsz.01},
\end{equation}
and such that $f^{(n)}:=Uh^{(n)}$ satisfies,
\begin{equation} |F_{l}( f^{(n+1)} )- F_{l}( f^{(n)} )|\leq \displaystyle\frac{|F_{l}( f^{(l+1)} ) |}{ 2^{n+2}},\quad n\ge l+1 .\label{nsz.02}
\end{equation}
As we point out just below (\ref{nsz.2}), if $U_{I_l}$ is of the form (\ref{nsz.00}), (\ref{nsz.02}) is satisfied trivially since $ F_{l}( f)=0$ for all $f$. However, when $U_{I_l}$ is not of the form (\ref{nsz.00})
we also require that $h^{(l+1)}$ is such that
\begin{equation}
F_{l}( f^{(l+1)} )\neq 0.\label{nsz.02m}
\end{equation}
(The actual construction of $\{h^{(n)}; n\ge -1\}$ is given later in this proof.)
By (\ref{nsz.01}), $\|h^{(n)} - h^{(m)}\|_{1}\leq 2\sum_{j=m}^{n} h_{j}^{*} $ for any $n>m$, hence $h=\lim_{n\to \infty}h^{(n)}$ exists in $\ell_{1}^{+}$.
We set $f=Uh$ and note that
\begin{equation}
|f_{j}-f_{j}^{(n)}|=|(U(h-h^{(n)}))_{j}|\le U_{j,j}\|h-h^{(n)}\|_{1}\label{3.12mm}.
\end{equation}
Here we use the property pointed out in the proof of Theorem \ref{theo-borelNS} that
$
U_{i,j}\le U_{i,i}\wedge U_{j,j} .\label{3.13m}
$
It follows from (\ref{3.12mm}) that
$f_{j}=\lim_{n\to \infty}f_{j}^{(n)}$ for each $j\ge 1$ and consequently, by (\ref{nsz.02}),
\begin{equation}
|F_{l}( f )- F_{l}( f^{(l+1)} )|\le \sum_{k=l+1}^{\infty} |F_{l}( f ^{(k+1)})-F_{l}( f ^{(k)} )| \leq {|F_{l}( f^{(l+1)} ) | \over 2 }.\label{3.13mm}
\end{equation}
We see from this that when $U_{I_l}$ is not of the form (\ref{nsz.00}), it follows from (\ref{nsz.02m}) and (\ref{3.13mm}) that $
F_{l}( f )\neq 0.$ This implies that (\ref{3.8mm}) holds.
\medskip We now describe how the $h^{(j)}$, $j=0,1,\ldots$ are chosen.
Assume that $h^{(-1)},\ldots, h^{(n)}$ have been chosen. We choose $h^{(n+1)}$ as follows:
If either $F_{n}( f^{(n)})\neq 0$ or $U_{I_{n}}$ has the form (\ref{nsz.00}), we set $h^{(n+1)}=h^{(n)}$.
Assume then that $F_{n}( f^{(n)} )= 0$.
If
$U _{I_{n}} $ does not have the form of (\ref{nsz.00}), it follows from the proof of Lemma \ref{lem-3.1mm} that for all $\epsilon_{p}\downarrow 0$, there exists a
$(g_{1,p},g_{2,p},g_{3,p})\in B_{\epsilon_{p}}(f^{(n)}_{i_{n,1}},f^{(n)}_{i_{n,2}},f^{(n)}_{i_{n,3}})$ such that
$F_{n}( g_{1,p},g_{2,p},g_{3,p} )\ne 0$. We choose $f^{(n+1)}=f^{(n)}$ for all indices except $i_{n,1},i_{n,2},i_{n,3}$ and $f^{(n+1)}_{ i_{n,1}},f^{(n+1)}_{ i_{n,2}},f^{(n+1)}_{ i_{n,3}}$ to be equal to one of these triples $(g_{1,p},g_{2,p},g_{3,p})$. This gives (\ref{nsz.02m}) for $l=n$. Since $\epsilon_{p}\downarrow 0$ we can take $f^{(n+1)}$ arbitrarily close to $f^{(n)}$ so that it satisfies (\ref{nsz.02}).
As in the proof of Theorem \ref{theo-borelNS} we can solve the equation
\begin{equation}
f^{(n+1)}_{i_{n,j}}= f_{i_{n,j}}^{(n )}+ \sum_{k=1}^{3}U_{i_{n,j},i_{n,k}} c_{i_{n,k}},\qquad j=1,2,3. \label{3.14mmq}
\end{equation}
for $ c_{i_{n,1}},c_{i_{n,2}}, c_{i_{n,3}} $.
To obtain $h^{(n+1)}$ we set $h^{(n+1)}_{q}=h^{(n)}_{q}$ for all $q\notin I_{n}$ and for $q\in I_{n}$ we take
\begin{equation}
h^{(n+1)}_{q}=h^{(n)}_{q}+c^{(n)}_{q},\label{8.15mm}
\end{equation}
where $c^{(n)}_{q}$ has all its components equal to zero except for the three components $ c_{i_{n,1}},c_{i_{n,2}}, c_{i_{n,3}} $. By taking $\epsilon_{p}$ sufficiently small we can choose $ c_{i_{n,1}},c_{i_{n,2}}, c_{i_{n,3}} $ so that the third statement in (\ref{nsz.01}) holds.
We set $ f^{(n+1)}=U h^{(n+1)} $ and note that this is consistent with (\ref{3.14mmq}). {\hfill $\square$ \bigskip}
\noindent{\bf Proof of Theorem \ref{theo-borelNSmm} } It is clear from Theorem \ref{theo-borelNS} that if $(i)$ holds then $U$ is asymptotically symmetrizable, because in this case $\{U_{t_{i},t_{j}} \}_{i,j=1}^{k} $ is symmetrizable for all distinct $t_{1},\ldots ,t_{k}$ greater than or equal to $n_{0}$, for all $k$.
Suppose that $(i)$ does not hold. Then, as in the proof of Theorem \ref{theo-borelNS}, we can find a sequence $\{n_{k};k\in \mathbb N\}$ such that $n_{k}\to \infty$ and a sequence of triples $3n_{k}< { t_{k,1}}, { t_{k,2}}, { t_{k,3}}\le 3n_{k+1}$, such that $U_{\{ t_{k,1}, t_{k,2}, t_{k,3}\}}$ does not have all of its off-diagonal elements equal. We interchange the indices ${t_{k,1}}, { t_{k,2}}, { t_{k,3}}$ with the indices in $I_{n_{k}}$ (see Lemma \ref{lem-borelNSmm}). We can now use Lemma \ref{lem-borelNSmm} to show that $(ii)$ holds. {\hfill $\square$ \bigskip}
\noindent{\bf Proof of Theorem \ref{theo-1.4} } Let $S'=\{x_{0},x_{1},x_{2},\ldots \}$ with $\lim_{k\to \infty}x_{k}=x_{0}$. Assume that for some integer $n_{0}$
\begin{equation}
u( x_{j},x_{k} )=\Lambda_ {j}\delta_{x_{j},x_{k}}+ d,\qquad \forall j,k\ge n_{0}.\label{3.28mm}
\end{equation}
Then $ u( x_{j},x_{j} ) =\Lambda_ {j} + d$, and since, by hypothesis, $u(x,y)$ is continuous,
\begin{equation}
\lim_{j\to\infty}u( x_{j},x_{j} ) =u( x_{0},x_{0} ) , \label{unif}
\end{equation}
which implies that the limit $\Lambda_ {0 }:=\lim_{j\to \infty}\Lambda_ {j }$ must exist and
\begin{equation}
u( x_{0},x_{0} )=\Lambda_ {0 }+d.\label{3.28gh}
\end{equation}
It also follows from (\ref{3.28mm}) that $ u( x_{j},x_{k} ) =d$ for all $n_{0}\leq j<k$. In addition, since $\lim_{k\to\infty} u( x_{j},x_{k} )= u( x_{j},x_{0} )$,
we see that for all $j\ge n_{0} $,
\begin{equation}
u( x_{j},x_{0} )=d.\label{3.28gh2}
\end{equation}
Comparing the last two displays we get that for all $j\ge n_{0} $,
\begin{equation}
u( x_{0},x_{0} )- u( x_{j},x_{0} )=\Lambda_ {0 }.\label{3.28gh3}
\end{equation}
This contradicts (\ref{3.28mm}),
because the assumption that $ u( x_{0},x_{0} )>u( x_{j},x_{0} )$
implies that $\Lambda_ {0 }>0$, whereas the assumption that $u$ is continuous and (\ref{3.28gh3}) implies that $\Lambda_ {0 }=0$.
Since (\ref{3.28mm}) does not hold for any integer $n_{0}$,
(\ref{1.9mm}) follows from Theorem \ref{theo-borelNSmm}. The fact that $f$ is continuous at $x_{0}$ follows from the Dominated Convergence Theorem since $\lim_{j,k\to\infty} u( x_{j},x_{k} )=u(x_{0},x_{0})$ implies that $\{u(x,y);x,y\in S'\}$ is uniformly bounded.
{\hfill $\square$ \bigskip}
\section{Proof of Lemma \ref{lem-1.1mm} and Examples \ref{ex-1.1} and \ref{ex-1.2}} \label{sec-4}
\noindent{\bf Proof of Lemma \ref{lem-1.1mm} }
(i) Let $m_{1},m_{2},m_{3}$ be increasing integers such that $m_{2}-m_{1}=m_{3}-m_{2}$ and $u(m_{2}-m_{1}) \ne u(m_{3}-m_{1}) $ and consider the $3\times 3$ Toeplitz matrix
\begin{equation}
\left (
\begin{array}{ ccc}
u(0) +f(m_{1})& u(m_{2}-m_{1}) +f(m_{2}) & u(m_{3}-m_{1}) +f(m_{3} ) \\
u(m_{2}-m_{1}) +f(m_{1})& u(0) +f(m_{2}) & u(m_{2}-m_{1}) +f(m_{3} ) \\
u(m_{3}-m_{1}) +f(m_{1})& u(m_{2}-m_{1}) +f(m_{2}) &u(0) +f(m_{3} ) \end{array}\right )\label{pp}.
\end{equation}
By Lemma \ref{lem-1.1n}, if $\{\widetilde u^{f}(j,k);j,k\in \mathbb N\}$ is symmetrizable we must have
\begin{eqnarray}
&&\hspace{-.3in} ( u(m_{2}-m_{1})+f(m_{2}) )(u(m_{2}-m_{1})+f(m_{3} ) ) ( u(m_{3}-m_{1})+f(m_{1}))\label{4.2mm}\\\nonumber &&\quad\hspace{-.3in}=(u(m_{3}-m_{1})+f(m_{3} ) )( u(m_{2}-m_{1})+f(m_{1}))( u(m_{2}-m_{1})+f(m_{2}) ).
\end{eqnarray}
Note that we can cancel the term $u(m_{2}-m_{1})+f(m_{2})$ from each side of (\ref{4.2mm}) and rearrange it to get
\begin{equation}
(u(m_{2}-m_{1})- u(m_{3}-m_{1})) ( f(m_{1})-f(m_{3})) =0. \end{equation}
This is not possible because $u(m_{2}-m_{1}) \ne u(m_{3}-m_{1 }) $ and $f(m_{1})\ne f(m_{3})$.
Since this holds for all $m_{1},m_{2},m_{3}$ satisfying the conditions above we see that Lemma \ref{lem-1.1mm} $(i)$ holds.
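The cancellation leading from (\ref{4.2mm}) to the displayed factorization can also be checked numerically. The sketch below (variable and function names are ours, introduced only for illustration) evaluates both sides of (\ref{4.2mm}) with $a=u(m_{2}-m_{1})$, $b=u(m_{3}-m_{1})$, and confirms that their difference equals $(a+f(m_{2}))(a-b)(f(m_{1})-f(m_{3}))$:

```python
def sym_defect(a, b, f1, f2, f3):
    """Difference of the two sides of the symmetrizability condition
    for the 3x3 Toeplitz matrix: a = u(m2-m1), b = u(m3-m1), fj = f(mj).
    Symmetrizability requires this difference to vanish."""
    lhs = (a + f2) * (a + f3) * (b + f1)
    rhs = (b + f3) * (a + f1) * (a + f2)
    return lhs - rhs
```

The defect vanishes identically when $a=b$ or $f(m_{1})=f(m_{3})$, and only then, which is exactly the dichotomy used in the proof.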
\medskip (ii) Consider $s_{j}\wedge s_{k}$ at the three different values, $s_{j_{1}},s_{j_{2}},s_{j_{3}}$, and the matrix
\begin{equation}
\left (
\begin{array}{ ccc}
s_{j_{1}}+f(s_{j_{1}})&s_{j_{1}}+f(s_{j_{2}}) &s_{j_{1}}+f(s_{j_{3}} ) \\
s_{j_{1}}+f(s_{j_{1}})& s_{j_{2}}+f(s_{j_{2}}) & s_{j_{2}}+f(s_{j_{3}} ) \\
s_{j_{1}}+f(s_{j_{1}})& s_{j_{2}}+f(s_{j_{2}}) & s_{j_{3}}+f(s_{j_{3}} ) \end{array}\right )\label{jj}.
\end{equation}
By Lemma \ref{lem-1.1n}, if $ \widetilde v^{f}_{s_{j_{1}},s_{j_{2}},s_{j_{3}}}$ is symmetrizable we must have
\begin{equation} ( s_{j_{1}}+f(s_{j_{2}}) )(s_{j_{2}}+f(s_{j_{3}} ) ) ( s_{j_{1}}+f(s_{j_{1}})) =( s_{j_{1}}+f(s_{j_{3}} ) )( s_{j_{1}}+f(s_{j_{1}}))( s_{j_{2}}+f(s_{j_{2}}) )
\end{equation}
or, equivalently,
\begin{equation}
(s_{j_{1}}- s_{j_{2}}) ( f(s_{j_{3}})-f(s_{j_{2}})) =0. \end{equation}
Since $s_{j_{1}}\ne s_{j_{2}}$ and $f(s_{j_{3}})\ne f(s_{j_{2}})$ this is not possible. Therefore, $ \widetilde v^{f}_{s_{j_{1}},s_{j_{2}},s_{j_{3}}}$ is not symmetrizable. {\hfill $\square$ \bigskip}
{\bf Proof of Example \ref{ex-1.1} } {\rm Let $s_{0}\in S$. We choose a sequence $s_{j}\to s_{0}$ with the property that it contains a subsequence $\{s_{j_{k}}\}$, $s_{j_{k}}\to s_{0}$, such that
\begin{equation}
s_{j_{3k+1} }- s_{j_{3k} }= s_{j_{3k+2}}- s_{j_{3k+1} }=a_{k}, \qquad k\ge 1.
\end{equation}
The kernel of
the $3\times 3$ matrix
\begin{equation}
\widehat u^{f}(s_{j_{3k +p}},s_{j_{3k +q}}),\qquad p,q=0,1,2, \label{4.8mm}
\end{equation}
is
\begin{equation}
\left (
\begin{array}{ ccc}
1 +f(s_{j_{3k} })& e^{-\la a_{k}} +f(s_{j_{3k+1 }}) & e^{-\la 2a_{k}} +f(s_{j_{3k+2} } ) \\
e^{-\la a_{k}} +f(s_{j_{3k} })&1 +f(s_{j_{3k+1} }) & e^{-\la a_{k}} +f(s_{j_{3k+2} } ) \\
e^{-\la 2a_{k}} +f(s_{j_{3k} })& e^{-\la a_{k}} +f(s_{j_{3k+1} }) &1 +f(s_{j_{3k+2} } ) \end{array}\right )\label{ppq},
\end{equation}
similar to (\ref{pp}). Therefore, following the proof of Lemma \ref{lem-1.1mm}, we see that the kernel in (\ref{4.8mm}) is not symmetrizable.
Since this holds along the subsequence $\{s_{j_{k}}\}$, $s_{j_{k}}\to s_{0}$, we see that $ \{ \widehat u^{f}(s,t); s,t\in S\}$ is not asymptotically symmetrizable at $s_{0} $.
The result in (\ref{1.33k}) is proved similarly. {\hfill $\square$ \bigskip}
\noindent {\bf Proof of Example \ref{ex-1.2} } The proof of Example \ref{ex-1.2} is similar to the proof of Example \ref{ex-1.1} but even simpler. This is because for all distinct values, $s_{j_{1}},s_{j_{2}},s_{j_{3}}$, the matrix in (\ref{jj}) is not symmetrizable.
\section{Introduction}
Particle in Cell (PIC) plasma simulation codes typically employ a
Finite Difference Time Domain (FDTD) algorithm with staggered spatial
mesh \citep{Yee} for advancing Maxwell's equations. The FDTD algorithm
is straightforward, second-order accurate, and parallelizes well for
efficient computation on modern computers. A very flexible and at
least equally accurate approach for solving Maxwell's equations numerically
is the pseudo-spectral method, in which Maxwell's equations are Fourier-decomposed
in space, and the resulting equations advanced in time to second-order
or better accuracy \citep{Liu1997}. Pseudo-spectral PIC algorithms
require Fourier-transforming the currents and fields at every time-step,
because particles are advanced in real rather than Fourier space.
Because Fourier transforms generally do not parallelize well, pseudo-spectral
methods are used less commonly than FDTD methods in PIC codes.
Nonetheless, the advantages of pseudo-spectral methods should not
be ignored. Haber's Pseudo-Spectral Analytical Time Domain (PSATD)
algorithm \citep{HaberICNSP73}, in particular, is exact for plasma
currents constant in time and, consequently, is free of electromagnetic
wave numerical dispersion for wave numbers satisfying $k\leq\pi/\triangle t$
and has no Courant limit in the usual sense. It also offers highly
accurate balancing of the Lorentz force, $\mathbf{E}+\mathbf{v}\times\mathbf{B}$
\citep{Vay2008}, which is especially desirable in simulations of
relativistic beams or of Laser-Plasma Acceleration (LPA) in frames
co-moving with the interaction region \citep{VayPRL07,VayPoP2011}.
PSATD also has superior numerical stability properties \citep{Vay2013PSATD}.
The more commonly used Pseudo-Spectral Time Domain (PSTD) algorithm
\citep{Liu1997,Xu2012} enjoys some of these same advantages but has
a restrictive Courant limit.
Importantly, a domain decomposition method recently has been developed
that allows efficient parallelization of Fourier transforms \citep{Vay2013PSATD}
in PIC codes. It takes advantage of the linearity and finite propagation
velocity of light in Maxwell\textquoteright{}s equations to limit
communication of data between neighboring computational domains. The
small approximation required appears to be insignificant for a range
of problems of interest.
Despite the advantages of pseudo-spectral methods, they are known
not to be free of the numerical Cherenkov instability \citep{godfrey1974numerical},
which results from coupling of electromagnetic waves with numerically
spurious beam mode aliases in cold beam simulations. In this paper,
the numerical dispersion relation is derived for the PSATD algorithm
with either a version of the Esirkepov algorithm \citep{esirkepov2001exact}
or conventional current interpolation. Although the PSATD algorithm
does not exhibit special time-steps at which numerical instability
growth rates are very small \citep{Xu2012,VayJCP2011,godfrey2013esirkepov},
a slight generalization of the PSATD-Esirkepov combination is shown
to have extraordinarily good stability properties when cubic interpolation
and appropriate digital filtering are employed, certainly substantially
better than that of FDTD algorithms previously analyzed \citep{godfrey2013esirkepov}.
The PSTD-Esirkepov algorithm also has good stability properties over
its range of allowed time-steps, although not quite as good as that
of the PSATD-Esirkepov algorithm. These analyses have been confirmed
using the multidimensional WARP \citep{Warp} PIC code for two-dimensional
simulations of plasma wake formation in a LPA stage. The parameters
used for the WARP simulations were similar to those used in \citep{godfrey2013esirkepov}.
However, the length of the plasma was increased thirty-fold, due to
the extremely small growth rates that were observed when using the
PSATD solver.
The remainder of this paper is organized as follows. The PSATD algorithm
coupled with either the Esirkepov or the conventional current deposition
algorithm is presented in Sec. 2. Derivations of the corresponding
numerical instability dispersion relations for multidimensional PSATD
PIC codes are outlined briefly in Sec. 3. The dispersion relations
are specialized in Sec. 4 to a cold, relativistic beam in two dimensions
for comparison with WARP simulations. Sec. 5 provides a reasonably
accurate approximation for maximum numerical instability growth rates
for the PSATD-Esirkepov algorithm with digital filtering, showing
the desirable numerical stability properties just mentioned. Then,
the dispersion relations are solved numerically for a range of options
and parameters and compared with WARP results in Sec. 6. (These analytical
and numerical dispersion relation calculations were performed using
\emph{Mathematica} \citep{Mathematica9}.) As a comparison, stability
results for the more commonly used PSTD algorithm are derived and
discussed in Sec. 6. Sec. 7 presents WARP simulations, demonstrating
the near absence of numerical instabilities in actual LPA simulations
for appropriately chosen options and time-steps. The concluding section
summarizes the findings in the paper and compares them with corresponding
FDTD results.
\section{PSATD algorithm}
The PSATD algorithm is derived in some detail in Appendix A of \citep{Vay2013PSATD}
and presented in Eqs. (13) and (14) of that article. It also can be
obtained directly by integrating analytically the spatially Fourier-transformed
Maxwell's equations, Eqs. (1) and (2) of \citep{Vay2013PSATD}, for
one time-step under the assumption that currents are constant over
the time-step. In either case, the algorithm is
\begin{multline}
\mathbf{E}^{n+1}=C\mathbf{E}^{n}-iS\mathbf{k}\times\mathbf{B}^{n}/k-S\mathbf{J}^{n+\nicefrac{1}{2}}/k+\left(1-C\right)\mathbf{k}\mathbf{k}\cdot\mathbf{E}^{n}/k^{2}\\
+\left(S/k-\triangle t\right)\mathbf{k}\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}/k^{2},\label{eq:PSATD-E}
\end{multline}
\begin{equation}
\mathbf{B}^{n+1}=C\mathbf{B}^{n}+iS\mathbf{k}\times\mathbf{E}^{n}/k-i\left(1-C\right)\mathbf{k}\times\mathbf{J}^{n+\nicefrac{1}{2}}/k^{2},\label{eq:PSATD-B}
\end{equation}
with $\mathbf{k}$ the wave-number, $k$ its magnitude, $C=\cos\left(k\triangle t\right)$,
and $S=\sin\left(k\triangle t\right)$. The speed of light is normalized
to unity. Note that the sign of $\mathbf{k}$ is reversed relative
to \citep{Vay2013PSATD} for consistency with earlier analyses of
the numerical Cherenkov instability, \emph{e.g.}, \citep{godfrey2013esirkepov,godfrey1975canonical}.
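For concreteness, the update of Eqs. (\ref{eq:PSATD-E}) and (\ref{eq:PSATD-B}) for a single Fourier mode can be sketched as follows (normalized units with $c=1$ as above; the function name and the use of NumPy are ours for illustration, not a WARP interface). Because PSATD integrates the vacuum Maxwell equations exactly, two steps of size $\triangle t$ with $\mathbf{J}=0$ must agree with one step of size $2\triangle t$:

```python
import numpy as np

def psatd_step(E, B, k, dt, J=None):
    """One PSATD update of Eqs. (1)-(2) for a single Fourier mode.
    E, B, J are complex 3-vectors; k is a real 3-vector; c = 1."""
    if J is None:
        J = np.zeros(3, dtype=complex)
    kmag = np.linalg.norm(k)
    khat = k / kmag
    C, S = np.cos(kmag * dt), np.sin(kmag * dt)
    En = (C * E
          - 1j * S * np.cross(khat, B)
          - (S / kmag) * J
          + (1 - C) * khat * np.dot(khat, E)
          + (S / kmag - dt) * khat * np.dot(khat, J))
    Bn = (C * B
          + 1j * S * np.cross(khat, E)
          - 1j * (1 - C) * np.cross(khat, J) / kmag)
    return En, Bn
```

Note that the longitudinal part of $\mathbf{E}$ is left unchanged when $\mathbf{J}=0$, and $\mathbf{k}\cdot\mathbf{B}$ remains zero if it starts so, consistent with the divergence properties discussed below.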
Eqs. (\ref{eq:PSATD-E}) and (\ref{eq:PSATD-B}) define both $\mathbf{E}^{n}$
and $\mathbf{B}^{n}$ at integer time-steps. For deriving the PSATD
numerical dispersion relation, and perhaps also for implementing the
PSATD algorithm in some PIC simulation codes, a leap-frog arrangement
in which $\mathbf{B}$ is defined at half-integer time-steps is more
convenient. To do so, we simply define $\mathbf{B}^{n+\nicefrac{1}{2}}$
at half-integer time-steps as
\begin{equation}
\mathbf{B}^{n}=\frac{1}{2C_{h}}\left(\mathbf{B}^{n+\nicefrac{1}{2}}+\mathbf{B}^{n-\nicefrac{1}{2}}\right).\label{eq:Bave}
\end{equation}
Using this equation, we can eliminate $\mathbf{B}^{n}$ at integer
time-steps from Eqs. (\ref{eq:PSATD-E}) and (\ref{eq:PSATD-B}) to
obtain
\begin{equation}
\mathbf{E}^{n+1}=\mathbf{E}^{n}-2iS_{h}\mathbf{k}\times\mathbf{B}^{n+\nicefrac{1}{2}}/k-S\mathbf{J}^{n+\nicefrac{1}{2}}/k+\left(S/k-\Delta t\right)\mathbf{k}\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}/k^{2},\label{eq:leapfrogE}
\end{equation}
\begin{equation}
\mathbf{B}^{n+\nicefrac{3}{2}}=\mathbf{B}^{n+\nicefrac{1}{2}}+2iS_{h}\mathbf{k}\times\mathbf{E}^{n+1}/k,\label{eq:leapfrogB}
\end{equation}
after a modest amount of algebra. Here, $C_{h}=\cos\left(k\triangle t/2\right)$,
and $S_{h}=\sin\left(k\triangle t/2\right)$. (Note that Eqs. (\ref{eq:leapfrogE})
and (\ref{eq:leapfrogB}) differ from Eqs. (15) and (16) of \citep{Vay2013PSATD},
which are based on a different definition of $\mathbf{B}^{n+\nicefrac{1}{2}}$.)
The divergence of Eq. (\ref{eq:leapfrogE}) yields $\mathbf{k}\cdot\mathbf{E}^{n+1}=\mathbf{k}\cdot\mathbf{E}^{n}-\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}\triangle t,$
which assures that $\mathbf{k}\cdot\mathbf{E}^{n+1}=i\rho^{n+1}$,
provided that charge is conserved,
\begin{equation}
\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}=-i\left(\rho^{n+1}-\rho^{n}\right)/\triangle t\label{eq:continuity}
\end{equation}
(and also provided that $\mathbf{k}\cdot\mathbf{E}^{0}=i\rho^{0}$
at initialization). The Buneman current deposition algorithm \citep{VillasenorCPC92}
and its generalization, the Esirkepov algorithm \citep{esirkepov2001exact},
satisfy the discretized continuity equation in real space. The adaptation
of the Esirkepov algorithm for k-space in Eq. (20) of \citep{Vay2013PSATD}
automatically satisfies Eq. (\ref{eq:continuity}). (This modification
of the Esirkepov algorithm for PSATD will be referred to as the Esirkepovk
algorithm in the remainder of the paper.) Otherwise, Eq. (\ref{eq:leapfrogE})
must be rewritten as
\begin{multline}
\mathbf{E}^{n+1}=\mathbf{E}^{n}-2iS_{h}\mathbf{k}\times\mathbf{B}^{n+\nicefrac{1}{2}}/k-S\mathbf{J}^{n+\nicefrac{1}{2}}/k\\
+S\mathbf{k}\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}/k^{3}+i\mathbf{k}\left(\rho^{n+1}-\rho^{n}\right)/k^{2},\label{eq:leapfrogE-alt}
\end{multline}
which has as its divergence, $\mathbf{k}\cdot\mathbf{E}^{n+1}=\mathbf{k}\cdot\mathbf{E}^{n}+i\left(\rho^{n+1}-\rho^{n}\right),$
as desired. In subsequent sections both charge-conserving and non-charge-conserving
PSATD variants will be analyzed, and Eq. (\ref{eq:leapfrogE-alt})
will be used instead of Eq. (\ref{eq:leapfrogE}) in the latter instances.
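The charge-conservation property of Eq. (\ref{eq:leapfrogE}) is easy to verify directly: taking $\mathbf{k}\cdot$ of it annihilates the magnetic and transverse terms, leaving $\mathbf{k}\cdot\mathbf{E}^{n+1}=\mathbf{k}\cdot\mathbf{E}^{n}-\mathbf{k}\cdot\mathbf{J}^{n+\nicefrac{1}{2}}\triangle t$ for an arbitrary current. A minimal numerical check (names ours, using random field and current vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)
E = rng.normal(size=3) + 1j * rng.normal(size=3)
B = rng.normal(size=3) + 1j * rng.normal(size=3)
J = rng.normal(size=3) + 1j * rng.normal(size=3)
dt = 0.17

kmag = np.linalg.norm(k)
khat = k / kmag
Sh = np.sin(kmag * dt / 2)
S = np.sin(kmag * dt)

# Leap-frog E update, Eq. (4), with an arbitrary current J
En = (E - 2j * Sh * np.cross(khat, B)
      - (S / kmag) * J
      + (S / kmag - dt) * khat * np.dot(khat, J))

div_change = np.dot(k, En) - np.dot(k, E)  # expected: -dt * k.J
```

The transverse current components cancel between the third and fourth terms under $\mathbf{k}\cdot$, which is why the result holds whether or not the deposited current satisfies Eq. (\ref{eq:continuity}).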
As we shall see in Sec. 6, scaling the Esirkepovk currents by k-dependent
factors $\zeta$ can be beneficial for numerical stability; i.e.,
$\mathbf{J}=\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}$, with $\mathbf{\zeta}=$diag$\left(\zeta_{z},\zeta_{x},\zeta_{y}\right)$
and $\mathbf{J}_{e}$ the current computed by the Esirkepovk algorithm.
Doing so, of course, requires the use of Eq. (\ref{eq:leapfrogE-alt}),
because introducing the factors $\zeta$ typically does not preserve
charge conservation. However, because the Esirkepovk current satisfies
Eq. (\ref{eq:continuity}) identically, Eq. (\ref{eq:leapfrogE-alt})
can be rewritten in this case as
\begin{multline}
\mathbf{E}^{n+1}=\mathbf{E}^{n}-2iS_{h}\mathbf{k}\times\mathbf{B}^{n+\nicefrac{1}{2}}/k-S\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}^{n+\nicefrac{1}{2}}/k\\
+S\mathbf{k}\mathbf{k}\cdot\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}^{n+\nicefrac{1}{2}}/k^{3}-\mathbf{k}\mathbf{k}\cdot\mathbf{J}_{e}^{n+\nicefrac{1}{2}}\triangle t/k^{2},\label{eq:leapfrog-alt1}
\end{multline}
which can be viewed as a generalization of Eq. (\ref{eq:leapfrogE}).
The divergence of (\ref{eq:leapfrogB}) yields $\mathbf{k}\cdot\mathbf{B}^{n+\nicefrac{3}{2}}=\mathbf{k}\cdot\mathbf{B}^{n+\nicefrac{1}{2}}$,
assuring that $\mathbf{k}\cdot\mathbf{B}^{n+\nicefrac{3}{2}}=0$,
if it is so at initialization.
\section{Numerical instability dispersion relation}
The derivation of the numerical instability dispersion relation for
the PSATD and Esirkepovk combined algorithm follows closely the corresponding
derivation for the FDTD and Esirkepov combined algorithm in \citep{godfrey2013esirkepov}.
To begin, the temporal Fourier transforms of Eqs. (\ref{eq:leapfrog-alt1})
and (\ref{eq:leapfrogB}) are
\begin{equation}
\left[\omega\right]\mathbf{E}=-2S_{h}\mathbf{k}\times\mathbf{B}/k+iS\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}/k-iS\mathbf{k}\mathbf{k}\cdot\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}/k^{3}+i\mathbf{k}\mathbf{k}\cdot\mathbf{J}_{e}\Delta t/k^{2},\label{eq:Etrans}
\end{equation}
\begin{equation}
\left[\omega\right]\mathbf{B}=2S_{h}\mathbf{k}\times\mathbf{E}/k.\label{eq:Btrans}
\end{equation}
Brackets around the frequency, $\omega$, designate its finite difference
(leapfrog) representation,
\begin{equation}
\left[\omega\right]=\sin\left(\omega\frac{\Delta t}{2}\right)/\left(\frac{\Delta t}{2}\right).\label{eq:meshom}
\end{equation}
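In passing, this bracket notation makes the dispersion-free property claimed in the introduction transparent: with the wavenumber brackets $[k_{z}]$, $[k_{x}]$ defined in Sec. 4, $\omega=k$ satisfies $[\omega]^{2}=[k_{z}]^{2}+[k_{x}]^{2}$ exactly for any $\triangle t$. A small sketch (function names ours):

```python
import numpy as np

def bracket_omega(w, dt):
    # Leapfrog frequency representation: [w] = sin(w dt/2)/(dt/2)
    return np.sin(w * dt / 2) / (dt / 2)

def bracket_k(kz, kx, dt):
    # PSATD wavenumber brackets: [k_i] = k_i sin(k dt/2)/(k dt/2)
    k = np.hypot(kz, kx)
    s = np.sin(k * dt / 2) / (k * dt / 2)
    return kz * s, kx * s
```

In the limit $\triangle t\to 0$ the brackets reduce to $\omega$ and $k_{i}$, recovering the continuum dispersion relation.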
The Esirkepov algorithm, either in real or k-space, determines not
the current itself but its first derivative \citep{esirkepov2001exact}.
In the PSATD algorithm, that derivative is given by $\mathbf{k}$,
not $\left[\mathbf{k}\right]$. Consequently, Eq. (5) of \citep{godfrey2013esirkepov}
becomes
\begin{equation}
\left\{ \begin{array}{c}
W_{x}\\
W_{y}\\
W_{z}
\end{array}\right\} =-i\Delta t\left\{ \begin{array}{c}
k_{x}\mathcal{J}_{x}\\
k_{y}\mathcal{J}_{y}\\
k_{z}\mathcal{J}_{z}
\end{array}\right\} ,\label{eq:dJ}
\end{equation}
and the current contribution from an individual particle, Eq. (7)
of \citep{godfrey2013esirkepov}, becomes
\begin{equation}
\left\{ \begin{array}{c}
\mathcal{J}_{x}\\
\mathcal{J}_{y}\\
\mathcal{J}_{z}
\end{array}\right\} =S^{J}\frac{2}{\Delta t}\left\{ \begin{array}{c}
\sin\left(k_{x}^{\prime}v_{x}\frac{\Delta t}{2}\right)\left[\cos\left(k_{y}^{\prime}v_{y}\frac{\Delta t}{2}\right)\cos\left(k_{z}^{\prime}v_{z}\frac{\Delta t}{2}\right)-\frac{1}{3}\sin\left(k_{y}^{\prime}v_{y}\frac{\Delta t}{2}\right)\sin\left(k_{z}^{\prime}v_{z}\frac{\Delta t}{2}\right)\right]/k_{x}\\
\sin\left(k_{y}^{\prime}v_{y}\frac{\Delta t}{2}\right)\left[\cos\left(k_{z}^{\prime}v_{z}\frac{\Delta t}{2}\right)\cos\left(k_{x}^{\prime}v_{x}\frac{\Delta t}{2}\right)-\frac{1}{3}\sin\left(k_{z}^{\prime}v_{z}\frac{\Delta t}{2}\right)\sin\left(k_{x}^{\prime}v_{x}\frac{\Delta t}{2}\right)\right]/k_{y}\\
\sin\left(k_{z}^{\prime}v_{z}\frac{\Delta t}{2}\right)\left[\cos\left(k_{x}^{\prime}v_{x}\frac{\Delta t}{2}\right)\cos\left(k_{y}^{\prime}v_{y}\frac{\Delta t}{2}\right)-\frac{1}{3}\sin\left(k_{x}^{\prime}v_{x}\frac{\Delta t}{2}\right)\sin\left(k_{y}^{\prime}v_{y}\frac{\Delta t}{2}\right)\right]/k_{z}
\end{array}\right\} ,\label{eq:Jpart}
\end{equation}
with $S^{J}$ the current interpolation function. Finally, the total
current is given by Eq. (10) of \citep{godfrey2013esirkepov},
\begin{equation}
\mathbf{J}=\sum_{m}\int\mathbf{F\cdot\frac{\partial}{\partial\mathbf{p}}\,\mathcal{\boldsymbol{J}}\,\csc}\left[\left(\omega-\mathbf{k^{\prime}\cdot v}\right)\frac{\Delta t}{2}\right]\frac{\Delta t}{2}f\,\mathrm{d}^{3}\mathbf{v},\label{eq:J}
\end{equation}
summed over spatial aliases. The determinant of the $6\times6$ matrix composed
of Eqs. (\ref{eq:Etrans}), (\ref{eq:Btrans}), and (\ref{eq:J})
is the desired PSATD-Esirkepovk dispersion relation.
Alternatively, the current can be accumulated at nodal points by conventional
interpolation, in which case charge is not conserved automatically,
and Eq. (\ref{eq:leapfrogE-alt}) should be used. Its temporal Fourier
transform is
\begin{equation}
\left[\omega\right]\mathbf{E}=-2S_{h}\mathbf{k}\times\mathbf{B}/k+iS\mathbf{J}/k-iS\mathbf{k}\mathbf{k}\cdot\mathbf{J}/k^{3}+i\left[\omega\right]\mathbf{k}\rho/k^{2}.\label{eq:Etrans-alt}
\end{equation}
Currents are interpolated directly to nodes on the grid, so Eq. (\ref{eq:J})
becomes
\begin{equation}
\mathbf{J}=\sum_{m}S^{J}\int\mathbf{F\cdot\frac{\partial}{\partial\mathbf{p}}\,\mathbf{v}\,\csc}\left[\left(\omega-\mathbf{k^{\prime}\cdot v}\right)\frac{\Delta t}{2}\right]\frac{\Delta t}{2}f\,\mathrm{d}^{3}\mathbf{v}.\label{eq:J-alt}
\end{equation}
Similarly, the charge density is given by \citep{godfrey1975canonical}
\begin{equation}
\mathbf{\rho}=\sum_{m}S^{J}\int\mathbf{F\cdot\frac{\partial}{\partial\mathbf{p}}\,\cot}\left[\left(\omega-\mathbf{k^{\prime}\cdot v}\right)\frac{\Delta t}{2}\right]\frac{\Delta t}{2}f\,\mathrm{d}^{3}\mathbf{v}.\label{eq:rho}
\end{equation}
(The charge and current interpolation functions are assumed to be
the same.) The dispersion relation in this case is the determinant
of the $6\times6$ matrix composed of Eqs. (\ref{eq:Etrans-alt}), (\ref{eq:Btrans}),
(\ref{eq:J-alt}), and (\ref{eq:rho}).
\section{WARP-PSATD 2-d dispersion relation}
For comparison with WARP-PSATD-Esirkepovk two-dimensional, cold beam
simulation results, we reduce Eqs. (\ref{eq:Etrans}) and (\ref{eq:Btrans})
to a $3\times3$ system in $\left\{ E_{z},E_{x},B_{y}\right\} $ and perform
the integral in Eq. (\ref{eq:J}) for a cold beam propagating at velocity
\textit{v} in the \textit{z}-direction. The resulting matrix equation
is
\begin{equation}
\left(\begin{array}{ccc}
\xi_{z,z}+[\omega] & \xi_{z,x} & \xi_{z,y}+[k_{x}]\\
\xi_{x,z} & \xi_{x,x}+[\omega] & \xi_{x,y}-[k_{z}]\\
{}[k_{x}] & -[k_{z}] & [\omega]
\end{array}\right)\left(\begin{array}{c}
E_{z}\\
E_{x}\\
B_{y}
\end{array}\right)=0.\label{eq:M3x3}
\end{equation}
Its determinant set equal to zero,
\begin{multline}
[\omega]\left([\omega]^{2}-[k_{z}]^{2}-[k_{x}]^{2}\right)+\left([\omega]^{2}-[k_{z}]^{2}\right)\xi_{z,z}-[k_{z}][k_{x}]\xi_{x,z}\\
-[k_{z}][k_{x}]\xi_{z,x}-[\omega][k_{x}]\xi_{z,y}+\left([\omega]^{2}-[k_{x}]^{2}\right)\xi_{x,x}+[\omega][k_{z}]\xi_{x,y}\\
+\xi_{z,z}\left([\omega]\xi_{x,x}+[k_{x}]\xi_{x,y}\right)-\xi_{x,z}\left([\omega]\xi_{z,x}+[k_{z}]\xi_{z,y}\right)\\
+[k_{x}]\left(\xi_{z,x}\xi_{x,y}-\xi_{z,y}\xi_{x,x}\right)=0,\label{eq:det}
\end{multline}
is the dispersion relation. The quantities $\left[\mathbf{k}\right]$
and $\xi$ are introduced purely for notational simplicity:
\begin{equation}
\left[k_{z}\right]=k_{z}\sin\left(k\frac{\Delta t}{2}\right)/\left(k\frac{\Delta t}{2}\right),\label{eq:meshkz}
\end{equation}
\begin{equation}
\left[k_{x}\right]=k_{x}\sin\left(k\frac{\Delta t}{2}\right)/\left(k\frac{\Delta t}{2}\right).\label{eq:meshkx}
\end{equation}
\begin{multline}
\xi_{z,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\\
\left(kk_{z}^{2}\Delta t+\zeta_{z}k_{x}^{2}\sin\left(k\Delta t\right)\right)\Delta t\left[\omega\right]k_{z}^{\prime}/4k^{3}k_{z},\label{eq:Mzz}
\end{multline}
\begin{equation}
\xi_{z,x}=-n\sum_{m}S^{J}S^{E_{x}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{z}k_{x}^{\prime}/2k^{3}k_{z},\label{eq:Mzx}
\end{equation}
\begin{equation}
\xi_{z,y}=nv\sum_{m}S^{J}S^{B_{y}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{z}k_{x}^{\prime}/2k^{3}k_{z},\label{eq:Mzy}
\end{equation}
\begin{multline}
\xi_{x,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\\
\left(k\Delta t-\zeta_{z}\sin\left(k\Delta t\right)\right)\Delta t\left[\omega\right]k_{x}k_{z}^{\prime}/4k^{3},\label{eq:Mxz}
\end{multline}
\begin{equation}
\xi_{x,x}=-n\sum_{m}S^{J}S^{E_{x}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{x}k_{x}^{\prime}/2k^{3}k_{x},\label{eq:Mxx}
\end{equation}
\begin{equation}
\xi_{x,y}=nv\sum_{m}S^{J}S^{B_{y}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{x}k_{x}^{\prime}/2k^{3}k_{x},\label{eq:Mxy}
\end{equation}
with
\begin{multline}
\eta_{z}=\cot\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(kk_{z}^{2}\Delta t+\zeta_{z}k_{x}^{2}\sin\left(k\Delta t\right)\right)\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
+\left(k\Delta t-\zeta_{x}\sin\left(k\Delta t\right)\right)k_{z}^{2}\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right),\label{eq:etaz}
\end{multline}
\begin{multline}
\eta_{x}=\cot\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(k\Delta t-\zeta_{z}\sin\left(k\Delta t\right)\right)k_{x}^{2}\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
+\left(kk_{x}^{2}\Delta t+\zeta_{x}k_{z}^{2}\sin\left(k\Delta t\right)\right)\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right).\label{eq:etax}
\end{multline}
Sums are over spatial aliases, $k_{z}^{\prime}=k_{z}+m_{z}\,2\pi/\Delta z$
and $k_{x}^{\prime}=k_{x}+m_{x}\,2\pi/\Delta x$, with $m_{z}$ and
$m_{x}$ integers. $n$ is the beam charge density divided by $\gamma$,
which can be normalized to unity. However, explicitly retaining it
in the dispersion relation sometimes is informative. Eqs. (\ref{eq:Mzz})
- (\ref{eq:etax}) are substantially more complicated than their counterparts
in \citep{godfrey2013esirkepov}, the additional terms arising from
the final expression in Eq. (\ref{eq:PSATD-E}).
Like most other PIC codes, WARP employs splines for current and field
interpolation. The Fourier transform of the current interpolation
function is
\begin{equation}
S^{J}=\left[\sin\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)/\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)\right]^{\ell_{z}+1}\left[\sin\left(k_{x}^{\prime}\frac{\Delta x}{2}\right)/\left(k_{x}^{\prime}\frac{\Delta x}{2}\right)\right]^{\ell_{x}+1};\label{eq:SJ}
\end{equation}
$\ell_{z}$ and $\ell_{x}$ are the orders of the current interpolation
splines in the z- and x-directions. Fields typically are interpolated
with splines of the same centering and order in PSATD implementations,
so $S^{E_{z}}=S^{E_{x}}=S^{J}$. The magnetic field interpolation
function also includes the conversion factor from $\mathbf{B}$ at
half-integer time-steps, as given in Eq. (\ref{eq:leapfrogB}), to
$\mathbf{B}$ at integer time-steps, as used to push the particles.
Hence, from the temporal Fourier transform of Eq. (\ref{eq:Bave}),
$S^{B_{y}}=S^{J}\cos\left(\omega\,\Delta t/2\right)/\cos\left(k\,\Delta t/2\right)$.
With $S^{E_{x}}$ and $S^{B_{y}}$ as just defined,
\begin{equation}
\xi_{z,y}/\xi_{z,x}=\xi_{x,y}/\xi_{x,x}=-v\cos\left(\omega\,\Delta t/2\right)/\cos\left(k\,\Delta t/2\right).\label{eq:Mratio}
\end{equation}
Note, however, that field interpolation from a staggered mesh could
be employed instead, as it is in the FDTD version of WARP and most
other PIC codes. In that case the field interpolation functions would
be as described in Eqs. (21) - (23) of \citep{godfrey2013esirkepov}
and the associated text. Eq. (\ref{eq:Mratio}) is only approximately
satisfied for staggered mesh interpolation.
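The interpolation factor $S^{J}$ and the aliased wavenumbers $k_{z}^{\prime}=k_{z}+m_{z}\,2\pi/\Delta z$, $k_{x}^{\prime}=k_{x}+m_{x}\,2\pi/\Delta x$ can be evaluated directly; the sketch below (function names ours) illustrates how higher-order splines suppress high-$k$ contributions more strongly, the mechanism behind the use of cubic interpolation in Sec. 6:

```python
import numpy as np

def spline_factor(kp, d, order):
    # [sin(kp d/2)/(kp d/2)]^(order+1); note np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(kp * d / (2 * np.pi)) ** (order + 1)

def SJ(kz, kx, dz, dx, lz, lx, mz=0, mx=0):
    """Current interpolation factor S^J of the text, evaluated at the
    (mz, mx) spatial alias; lz, lx are the spline orders."""
    kzp = kz + mz * 2 * np.pi / dz
    kxp = kx + mx * 2 * np.pi / dx
    return spline_factor(kzp, dz, lz) * spline_factor(kxp, dx, lx)
```

At $k=0$ the factor is unity, and at the grid Nyquist wavenumber the cubic ($\ell=3$) factor is the square of the linear ($\ell=1$) one.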
Next, we present the dispersion matrix for WARP-PSATD with conventional
current (and charge) deposition, as described in the final paragraph
of Sec. 3.
\begin{multline}
\xi_{z,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{z}^{\prime}v+\frac{2}{\Delta t}\right)k_{x}^{2}\sin\left(k_{z}\Delta t\right)+k\, k_{z}k_{z}^{\prime}\left[\omega\right]\Delta t\right\} /4k^{3},\label{eq:Mzz-alt}
\end{multline}
\begin{multline}
\xi_{z,x}=-n\sum_{m}S^{J}S^{E_{x}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{x}k_{x}^{\prime}v-\frac{2}{\Delta t}k_{z}\right)k_{x}\sin\left(k_{z}\Delta t\right)+k\, k_{z}k_{x}^{\prime}\left[\omega\right]\Delta t\right\} /4k^{3},\label{eq:Mzx-alt}
\end{multline}
\begin{multline}
\xi_{z,y}=nv\sum_{m}S^{J}S^{B_{y}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{x}k_{x}^{\prime}v-\frac{2}{\Delta t}k_{z}\right)k_{x}\sin\left(k_{z}\Delta t\right)+k\, k_{z}k_{x}^{\prime}\left[\omega\right]\Delta t\right\} /4k^{3},\label{eq:Mzy-alt}
\end{multline}
\begin{multline}
\xi_{x,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{z}^{\prime}v+\frac{2}{\Delta t}\right)k_{z}\sin\left(k_{z}\Delta t\right)+k\, k_{z}^{\prime}\left[\omega\right]\Delta t\right\} k_{x}/4k^{3},\label{eq:Mxz-alt}
\end{multline}
\begin{multline}
\xi_{x,x}=n\sum_{m}S^{J}S^{E_{x}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{x}k_{x}^{\prime}v-\frac{2}{\Delta t}k_{z}\right)k_{z}\sin\left(k_{z}\Delta t\right)-k\, k_{x}k_{x}^{\prime}\left[\omega\right]\Delta t\right\} /4k^{3},\label{eq:Mxx-alt}
\end{multline}
\begin{multline}
\xi_{x,y}=-nv\sum_{m}S^{J}S^{B_{y}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\Delta t\\
\left\{ \left(\sin\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]k_{x}k_{x}^{\prime}v-\frac{2}{\Delta t}k_{z}\right)k_{z}\sin\left(k_{z}\Delta t\right)-k\, k_{x}k_{x}^{\prime}\left[\omega\right]\Delta t\right\} /4k^{3}.\label{eq:Mxy-alt}
\end{multline}
Here too, Eq. (\ref{eq:Mratio}) is satisfied, provided that currents
and fields all are interpolated to and from the same mesh nodes.
As pointed out in \citep{godfrey2013esirkepov}, $m_{x}$ alias terms
in the dispersion relations can be summed explicitly by means of Eqs.
(1.421.3) and (1.422.3) of \citep{GradshteynRyzhik} or derivatives
thereof.
\section{Approximate growth rates}
Useful results can be obtained from the dispersion relation without
solving it in its entirety.
When Eq. (\ref{eq:Mratio}) is satisfied (and approximately otherwise),
\begin{equation}
\xi_{z,x}\xi_{x,y}-\xi_{z,y}\xi_{x,x}=0,\label{eq:minor}
\end{equation}
and the dispersion relation, Eq. (\ref{eq:det}), reduces to
\begin{multline}
C_{0}+n\sum_{m_{z}}C_{1}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]+n\sum_{m_{z}}\left(C_{2x}+\gamma^{-2}C_{2z}\right)\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\\
+\gamma^{-2}n^{2}\left(\sum_{m_{z}}C_{3z}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\right)\left(\sum_{m_{z}}C_{3x}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\right)=0,\label{eq:drformfull}
\end{multline}
with $C_{0}$ the vacuum dispersion function,
\begin{equation}
C_{0}=\left[\omega\right]^{2}-\left[k_{x}\right]^{2}-\left[k_{z}\right]^{2},\label{eq:C0}
\end{equation}
and, for the PSATD-Esirkepov algorithm,
\begin{multline}
C_{1}=-\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
\left\{ \zeta_{x}k_{z}\sin\left(k\,\Delta t\right)\left(k_{z}\sin\left(\omega\frac{\Delta t}{2}\right)-k\, v\tan\left(k\frac{\Delta t}{2}\right)\cos\left(\omega\frac{\Delta t}{2}\right)\right)+k\, k_{x}^{2}\Delta t\, C_{0}/\sin\left(\omega\frac{\Delta t}{2}\right)\right\} /k^{3}k_{x}\Delta t,\label{eq:C1}
\end{multline}
\begin{multline}
C_{2x}=k_{x}\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}\cos\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
\left\{ \zeta_{z}\sin\left(k\,\Delta t\right)\left(k_{z}\sin\left(\omega\frac{\Delta t}{2}\right)-k\, v\tan\left(k\frac{\Delta t}{2}\right)\cos\left(\omega\frac{\Delta t}{2}\right)\right)-k\, k_{z}\Delta t\, C_{0}/\sin\left(\omega\frac{\Delta t}{2}\right)\right\} /k^{3}k_{z}\Delta t,\label{eq:C2x}
\end{multline}
\begin{multline}
C_{3x}=\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}\sin\left(k\,\Delta t\right)\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
\left(k\,\sin\left(\omega\frac{\Delta t}{2}\right)-k_{z}v\tan\left(k\frac{\Delta t}{2}\right)\cos\left(\omega\frac{\Delta t}{2}\right)\right)/k^{2}k_{x}\Delta t,\label{eq:C3x}
\end{multline}
\begin{multline}
C_{2z}=-k_{z}^{\prime}\sum_{m_{x}}\left(S^{J}\right)^{2}\\
\left(\zeta_{z}\sin\left(k\,\Delta t\right)k_{x}^{2}\sin\left(\omega\frac{\Delta t}{2}\right)-k\, k_{z}^{2}\Delta t\, C_{0}\right)/k^{3}k_{z}\Delta t,\label{eq:C2z}
\end{multline}
\begin{equation}
C_{3z}=k_{z}^{\prime}\Delta t^{2}\sum_{m_{x}}\left(S^{J}\right)^{2}\left(\zeta_{z}k_{x}^{2}+\zeta_{x}k_{z}^{2}\right)/4k^{2}k_{z}.\label{eq:C3z}
\end{equation}
For $\gamma^{2}$ large but not infinite, which is the focus of this
paper, the approximate solutions of Eq. (\ref{eq:drformfull}) are the
solutions of
\begin{equation}
C_{0}+n\sum_{m_{z}}C_{1}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]+n\sum_{m_{z}}C_{2x}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]=0,\label{eq:drform}
\end{equation}
plus an additional, stable mode,
\begin{equation}
\omega=k_{z}^{\prime}v-\left.\frac{2}{\Delta t}\gamma^{-2}n\, C_{3z}C_{3x}/C_{2x}\right|_{\omega=k_{z}^{\prime}v},\label{eq:extramode}
\end{equation}
provided that $C_{2x}$ does not vanish there. If it does, the extra
mode may be unstable, with growth rate scaling as $\gamma^{-1}$.
As already noted, sums over $m_{x}$ can be performed explicitly,
\begin{multline}
\sum_{m_{x}}\left(S^{J}\right)^{2}=-\left[\sin\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)/\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)\right]^{2\ell_{z}+2}\\
\frac{1}{\left(2\ell_{x}+1\right)!}\left[\sin\left(k_{x}\frac{\Delta x}{2}\right)\right]^{2\ell_{x}+2}\left.\frac{d^{2\ell_{x}+1}\cot\left(\kappa\right)}{d\,\kappa^{2\ell_{x}+1}}\right|_{\kappa=k_{x}\frac{\Delta x}{2}},\label{eq:summx}
\end{multline}
\begin{multline}
\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}=\left[\sin\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)/\left(k_{z}^{\prime}\frac{\Delta z}{2}\right)\right]^{2\ell_{z}+2}\\
\frac{1}{\left(2\ell_{x}\right)!}\left[\sin\left(k_{x}\frac{\Delta x}{2}\right)\right]^{2\ell_{x}+2}\left.\frac{d^{2\ell_{x}}\cot\left(\kappa\right)}{d\,\kappa^{2\ell_{x}}}\right|_{\kappa=k_{x}\frac{\Delta x}{2}}.\label{eq:summx-kxp}
\end{multline}
Analogous expressions for the $C$'s also can be obtained for the
PSATD-conventional algorithm.
Vacuum electromagnetic modes are described by $C_{0}=0,$
\begin{equation}
\sin^{2}\left(\omega\frac{\Delta t}{2}\right)=\sin^{2}\left(k\frac{\Delta t}{2}\right),\label{eq:vacuumDR}
\end{equation}
which yields real $\omega$ for all values of $k\,\Delta t$. The
PSATD algorithm thus has no Courant limit on $\Delta t$. However,
$\left|\omega\right|$ begins decreasing with increasing $k\,\Delta t$,
when $k\,\Delta t$ first exceeds $2\pi$. This threshold is expressed
in terms of the grid cell size as $\Delta t>\Delta t_{c}=\left(\Delta z^{-2}+\Delta x^{-2}\right)^{-\nicefrac{1}{2}}$,
which is recognizable as the usual Courant condition in FDTD algorithms.
Digitally filtering wave-numbers for which $k>2\pi/\Delta t_{c}$
often is prudent.
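As a quick numerical check of Eq. (\ref{eq:vacuumDR}) and of this threshold, the following sketch evaluates the (always real) principal-branch vacuum frequency and $\Delta t_{c}$; the function names and branch choice are ours, purely for illustration:

```python
import math

def psatd_vacuum_omega(k, dt):
    # Principal-branch solution of Eq. (vacuumDR); real for every k*dt,
    # so PSATD has no Courant limit on dt.
    return (2.0 / dt) * math.asin(abs(math.sin(k * dt / 2.0)))

def courant_dt(dz, dx):
    # dt_c = (dz^-2 + dx^-2)^(-1/2): the threshold above which |omega|
    # begins to decrease, recognizable as the usual FDTD Courant condition.
    return (dz ** -2 + dx ** -2) ** -0.5

dz = dx = 0.3868          # cell size used in the figures of this paper
dtc = courant_dt(dz, dx)
w = psatd_vacuum_omega(50.0, 2.0 * dtc)   # still real far beyond dt_c
```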
All beam modes in Eq. (\ref{eq:drformfull}) are numerical artifacts,
even the $m_{z}=0$ mode, and their interaction with the electromagnetic
modes gives rise to the numerical Cherenkov instability \citep{godfrey1974numerical,godfrey1979electro}.
Fig. \ref{fig:Normal-mode-diagram} is a typical normal mode diagram,
showing the two electromagnetic modes and beam aliases $m_{z}=[-1,\,+1]$
for $v\,\Delta t/\Delta z=1.2$ and $k_{x}=\nicefrac{1}{2}\pi/\triangle x$.
(Unless otherwise noted, other parameters for these and other figures
are $n=1$ and $\Delta x=\Delta z=0.3868$.) Not surprisingly, most
rapid growth occurs at resonances, where normal modes intersect. Fig.
\ref{fig:Resonance curves} depicts the locations in \textit{k}-space
of normal mode intersections, such as those in Fig. \ref{fig:Normal-mode-diagram},
as $k_{x}$ is varied.%
\footnote{Software to generate plots such as those in Figs. \ref{fig:Normal-mode-diagram}
and \ref{fig:Resonance curves} is available in Computable Document
Format \citep{WolframCDF} at http://hifweb.lbl.gov/public/BLAST/Godfrey/.%
} Because the electromagnetic modes are dispersionless for $\Delta t<\Delta t_{c}$,
the otherwise often dominant $m_{z}=0$ numerical Cherenkov instability
cannot occur unless $\Delta t$ somewhat exceeds $\Delta t_{c}$.
To be precise, the resonant $m_{z}=0$ instability occurs only for
$\nicefrac{\Delta t}{\Delta x}>2\left(\nicefrac{\Delta x}{\Delta t_{c}}-\nicefrac{\Delta z}{\Delta x}\right)$,
or $\nicefrac{\Delta t}{\Delta z}>2\left(\sqrt{2}-1\right)$ for $\Delta x=\Delta z$
(accurate to order $\gamma^{-2}$). The $m_{z}=-1$ instability dominates
at smaller time-steps.
More generally, instability resonances occur at
\begin{equation}
k_{x}^{r}=\left(\left(\left(k_{z}+m_{z}\frac{2\pi}{\triangle z}\right)v-p\frac{2\pi}{\triangle t}\right)^{2}-k_{z}^{2}\right)^{\nicefrac{1}{2}},\label{eq:krres}
\end{equation}
where \emph{p} is any integer within the domain,
\begin{equation}
\left[m_{z}v\frac{\triangle t}{\triangle z}-\frac{\triangle t}{2\triangle x}\:,\:\left(m_{z}+\frac{1}{2}\right)v\frac{\triangle t}{\triangle z}+\left(\triangle z^{-2}+\triangle x^{-2}\right)^{\nicefrac{1}{2}}\frac{\triangle t}{2}\right],\label{eq:p}
\end{equation}
except that $p=0$ is excluded for $m_{z}=0$. In effect, \emph{p} is the temporal
alias number.
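The resonance bookkeeping of Eqs. (\ref{eq:krres}) and (\ref{eq:p}) is mechanical and can be sketched as follows; this is an illustrative helper, not WARP code, and all names are ours:

```python
import math

def resonant_kx(kz, mz, p, v, dt, dz):
    # k_x^r from Eq. (krres); None when the radicand is negative
    # (no resonance at this kz for the given mz and p).
    rad = ((kz + mz * 2.0 * math.pi / dz) * v - p * 2.0 * math.pi / dt) ** 2 - kz ** 2
    return math.sqrt(rad) if rad >= 0.0 else None

def p_domain(mz, v, dt, dz, dx):
    # Closed interval of admissible temporal alias numbers p, Eq. (p).
    lo = mz * v * dt / dz - dt / (2.0 * dx)
    hi = (mz + 0.5) * v * dt / dz + math.sqrt(dz ** -2 + dx ** -2) * dt / 2.0
    return lo, hi

dz = dx = 0.3868
dt = 1.2 * dz                  # v*dt/dz = 1.2, as in the normal-mode figure
lo, hi = p_domain(0, 1.0, dt, dz, dx)
ps = [p for p in range(math.ceil(lo), math.floor(hi) + 1) if p != 0]  # p = 0 excluded for mz = 0
```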
Ref. \citep{godfrey2013esirkepov} described in detail how to estimate
numerical Cherenkov instability peak growth rates as a function of
$\Delta t$, based on an approximate evaluation of Eq. (\ref{eq:drformfull}).
This approach was used to good effect to explain the existence and
value of time-steps for which instability growth rates in WARP-FDTD
\citep{VayJCP2011,Vay2012AAC} and other PIC codes (\emph{e.g.}, \citep{Xu2012,UCLA2012AAC})
were greatly reduced. Here, the approximate resonant growth rate is
\begin{equation}
\mathrm{Im}\left(\omega\right)\simeq\left|n\, C_{2}\Delta t/4k_{z}\right|^{\nicefrac{1}{3}}/\Delta t,\label{eq:resonant}
\end{equation}
with $C_{2}$ evaluated at $\omega=k_{z}v$ and $k_{x}$ chosen to
satisfy the resonance condition.
In this paper we focus instead on finding parameters for which non-resonant
growth at small $k$ naturally is minimized, while relying on digital
filtering to suppress the otherwise faster growing resonant instabilities
at large $k$. Non-resonant instability occurs when $C_{0}C_{2}>nC_{1}^{\,2}/4$,
evaluated at $\omega\simeq k_{z}^{\prime}v$ and arbitrary $k_{x}$.
The resulting growth rate is
\begin{equation}
\mathrm{Im}\left(\omega\right)\simeq\frac{\sqrt{4nC_{0}C_{2}-n^{2}C_{1}^{\,2}}}{C_{0}\Delta t}.\label{eq:quadratic growth}
\end{equation}
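Both growth-rate estimates, Eqs. (\ref{eq:resonant}) and (\ref{eq:quadratic growth}), are simple closed forms once the $C$'s are known. The sketch below uses illustrative coefficient values, not values computed from the actual $C$ sums:

```python
import math

def resonant_growth(n, C2, dt, kz):
    # Peak resonant growth, Eq. (resonant): |n C2 dt / 4 kz|^(1/3) / dt,
    # with C2 evaluated at omega = kz v on resonance.
    return abs(n * C2 * dt / (4.0 * kz)) ** (1.0 / 3.0) / dt

def nonresonant_growth(n, C0, C1, C2, dt):
    # Non-resonant growth, Eq. (quadratic growth); assumes C0 > 0 and
    # returns 0 when the instability condition C0*C2 > n*C1**2/4 fails.
    disc = 4.0 * n * C0 * C2 - (n * C1) ** 2
    return math.sqrt(disc) / (C0 * dt) if disc > 0.0 else 0.0

# illustrative numbers only (not the actual C's):
g_res = resonant_growth(n=1.0, C2=0.05, dt=0.4, kz=2.0)
g_non = nonresonant_growth(n=1.0, C0=1.0, C1=0.1, C2=0.05, dt=0.4)
```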
By numerical experimentation we have found that the region in which
$C_{0}C_{2}>0$ is bounded away from $k_{z}=0$ for $m_{z}=0$ and any
$\zeta_{z}<1$, and that for some choices of $\zeta_{z}$ the resulting
region free of non-resonant instability can be fairly large. Fig. \ref{fig:Stability criterion}
depicts maximum approximate growth rates as a function of $v\,\Delta t/\Delta z$
according to Eq. (\ref{eq:quadratic growth}) for the PSATD-Esirkepov
algorithm with (a) $\zeta_{z}=\left(k_{z}\triangle z/2\right)\cot\left(k_{z}\triangle z/2\right),\zeta_{x}=\left(k_{x}\triangle x/2\right)\cot\left(k_{x}\triangle x/2\right)$
or (b) $\zeta_{z}=\zeta_{x}=1$, cubic interpolation, and smoothing
as in Eq. (37) of \citep{godfrey2013esirkepov}. Option (a) exhibits
essentially no instability for $v\,\Delta t/\Delta z<1.2$, while
option (b) does exhibit instability there. This useful finding is
substantiated in Sec. 6. Incidentally, both numerical and analytical
solutions of Eq. (\ref{eq:drformfull}) indicate significant numerical
instability even at small $k_{z}$ when $\zeta_{x}>1$.
Of course, other fruitful choices for $\zeta_{z}$ may exist. One
promising possibility is $\zeta_{z}$ chosen such that $C_{2}$ vanishes
for $\omega=k_{z}v$, in order to suppress non-resonant $m_{z}=0$
growth in accordance with Eq. (\ref{eq:quadratic growth}),
\begin{multline}
\zeta_{z}=k\, k_{z}\triangle t\left(\sin^{2}\left(k_{z}\frac{\Delta t}{2}\right)-\sin^{2}\left(k\frac{\Delta t}{2}\right)\right)\csc\left(k_{z}\frac{\Delta t}{2}\right)\csc\left(k\frac{\Delta t}{2}\right)/\\
2\left(k_{z}\sin\left(k_{z}\frac{\Delta t}{2}\right)\cos\left(k\frac{\Delta t}{2}\right)-k\,\cos\left(k_{z}\frac{\Delta t}{2}\right)\sin\left(k\frac{\Delta t}{2}\right)\right).\label{eq:resz}
\end{multline}
Equivalently, Eq. (\ref{eq:resz}) is obtained by setting to zero
the first term in the Laurent expansion of Eq. (\ref{eq:drform})
about $\omega=k_{z}v$. Note that $v_{z}$ has been set equal to unity
in this expression to assure that $\zeta_{z}\rightarrow1$ as $k\rightarrow0$.
Moreover, it is necessary to impose $0\leq\zeta_{z}\leq1$. We do
this by setting $\zeta_{z}=0$ everywhere that the constraint just
given is not satisfied, which is almost everywhere outside the curve,
$k_{z}=\frac{\pi}{\Delta t}-k_{x}^{2}\frac{\Delta t}{4\pi}$. Not
coincidentally, this is the curve at which the first $m_{z}=0$ instability
resonance occurs. Seemingly, the corresponding $\zeta_{x}$ should
be obtained by setting to zero the second term in the Laurent expansion,
\begin{multline}
\zeta_{x}=k\, k_{x}^{2}\triangle t\left(k\,\sin\left(k_{z}\frac{\Delta t}{2}\right)\sin\left(k\frac{\Delta t}{2}\right)\left(\cos^{2}\left(k_{z}\frac{\Delta t}{2}\right)+\cos^{2}\left(k\frac{\Delta t}{2}\right)\right)\right.\\
-\left.k_{z}\cos\left(k_{z}\frac{\Delta t}{2}\right)\cos\left(k\frac{\Delta t}{2}\right)\left(\sin^{2}\left(k_{z}\frac{\Delta t}{2}\right)+\sin^{2}\left(k\frac{\Delta t}{2}\right)\right)\right)\csc\left(k\frac{\Delta t}{2}\right)/\\
2k_{z}\cos\left(k_{z}\frac{\Delta t}{2}\right)\left(k_{z}\sin\left(k_{z}\frac{\Delta t}{2}\right)\cos\left(k\frac{\Delta t}{2}\right)-k\,\cos\left(k_{z}\frac{\Delta t}{2}\right)\sin\left(k\frac{\Delta t}{2}\right)\right)^{2}.\label{eq:resx}
\end{multline}
However, it satisfies the constraint, $0\leq\zeta_{x}\leq1$, over
too small a region in \emph{k}-space. Credible alternatives are $\zeta_{x}=1$,
$\zeta_{x}=\left(k_{x}\triangle x/2\right)\cot\left(k_{x}\triangle x/2\right)$,
and $\zeta_{x}=\zeta_{z}$. Each produces roughly the same growth
rates when paired with Eq. (\ref{eq:resz}), at least when digital
filtering is employed as well. The choice, $\zeta_{x}=\zeta_{z}$,
is designated PSATD option (c) and used in representative numerical
calculations in Sec. 6. Although this approach may seem rather arbitrary,
it does give good results.
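A minimal implementation of Eq. (\ref{eq:resz}) with the clamping described above might read as follows ($v=1$; the singular $k_{x}=0$ limit, where $\zeta_{z}\rightarrow1$, is not handled, and the function name is ours):

```python
import math

def zeta_z(kz, kx, dt):
    # Eq. (resz) with v = 1, clamped by setting zeta_z = 0 wherever the
    # constraint 0 <= zeta_z <= 1 is violated, as described in the text.
    # The kx = 0 limit (zeta_z -> 1) is not handled here.
    k = math.hypot(kz, kx)
    a, b = kz * dt / 2.0, k * dt / 2.0
    num = k * kz * dt * (math.sin(a) ** 2 - math.sin(b) ** 2)
    den = (2.0 * (kz * math.sin(a) * math.cos(b) - k * math.cos(a) * math.sin(b))
           * math.sin(a) * math.sin(b))
    z = num / den
    return z if 0.0 <= z <= 1.0 else 0.0

z_small = zeta_z(0.01, 0.005, 1.0)   # -> 1 in the long-wavelength limit
z_mid = zeta_z(1.0, 1.0, 1.0)        # stays within [0, 1] here
z_out = zeta_z(3.5, 0.5, 1.0)        # constraint violated: clamped to 0
```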
Axial group velocities of unstable modes, $v_{g}=\partial\omega/\partial k_{z}$,
are of interest when dealing with short beam pulses, because numerical
instability energy propagates backward relative to the beam pulse,
limiting total growth, when the instability group velocity is somewhat
less than the beam velocity. Low instability group velocities can
be expected for beam aliases interacting resonantly with backward
propagating electromagnetic waves. Indeed, numerical solutions to
the dispersion relation predict group velocities between 0.3 and 0.5
the beam velocity in this case. On the other hand, numerical instabilities
associated with beam aliases interacting with forward propagating
electromagnetic waves can be expected to have group velocities about
equal to the beam velocity. The same is true of non-resonant instabilities,
and numerical solutions of the dispersion relation corroborate these
expectations.
The PSATD-Esirkepov one-dimensional dispersion relation is obtained
simply by setting $k_{x}$ to zero in Eqs. (\ref{eq:C0}) - (\ref{eq:C2x}),
yielding
\begin{equation}
C_{0}=\left[\omega\right]^{2}-\left[k_{z}\right]^{2},\label{eq:C0-1d}
\end{equation}
\begin{multline}
C_{1}=-\zeta_{x}\left(S^{J}\right)^{2}\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\sin\left(k_{z}\Delta t\right)\\
\left(\sin\left(\omega\frac{\Delta t}{2}\right)-v\tan\left(k_{z}\frac{\Delta t}{2}\right)\cos\left(\omega\frac{\Delta t}{2}\right)\right)/k_{z}\Delta t,\label{eq:C1-1d}
\end{multline}
and $C_{2}=0$. Resonant instability occurs when $-n\, C_{1}/\sin\left(\omega\Delta t\right)\cos\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]>0$,
evaluated at the resonance frequency, in which case the growth rate
is the square root of that quantity. Interestingly, for $m_{z}=0$
in the limit $v\rightarrow1$, the Cherenkov resonance drops out,
and the dispersion relation simplifies further to
\begin{equation}
\sin^{2}\left(\omega\frac{\Delta t}{2}\right)-\sin^{2}\left(k_{z}\frac{\Delta t}{2}\right)-n\zeta_{x}\Delta t\left(S^{J}\right)^{2}\sin\left(k_{z}\Delta t\right)/4k_{z}=0.\label{eq:dr-v11d}
\end{equation}
Nonetheless, an instability still occurs approximately where the resonance
would have been, namely $k_{z}$ just less than an integer multiple
of $\pi/\Delta t$. The numerical solution of Eq. (\ref{eq:dr-v11d})
for $v\,\Delta t/\Delta z=3$ is provided in Fig. \ref{fig:grow1d}.
Peak growth for $k_{x}=0$ in this case is only about one-third the
peak growth at finite $k_{x}$.
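Eq. (\ref{eq:dr-v11d}) can be solved directly with a complex arcsine. The sketch below does so under the simplification $(S^{J})^{2}=1$, i.e., ignoring interpolation smoothing, so the resulting numbers are only indicative:

```python
import cmath
import math

def growth_1d(kz, n=1.0, dt=1.1604, zeta_x=1.0, SJ2=1.0):
    # Eq. (dr-v11d) solved for omega via the complex arcsine, with the
    # simplification (S^J)^2 = 1 (interpolation smoothing ignored).
    # Unstable roots come in conjugate pairs, hence |Im(omega)|.
    X = (math.sin(kz * dt / 2.0) ** 2
         + n * zeta_x * dt * SJ2 * math.sin(kz * dt) / (4.0 * kz))
    w = (2.0 / dt) * cmath.asin(cmath.sqrt(X))
    return abs(w.imag)

dt = 3 * 0.3868                 # v dt/dz = 3 with dz = 0.3868, as in Fig. (grow1d)
stable = growth_1d(1.0, dt=dt)                    # X in [0, 1]: purely real omega
unstable = growth_1d(0.95 * math.pi / dt, dt=dt)  # kz just below pi/dt
```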
\section{Numerical solutions}
Numerical solutions to the complete linear dispersion relations, presented
in Sec. 4, and instability growth-rate measurements from corresponding
WARP simulations were performed as described in Sec. 5 of \citep{godfrey2013esirkepov}.
A typical dispersion relation growth spectrum, in this case corresponding
to the parameters of Fig. \ref{fig:Resonance curves} with option
(a), $\zeta_{z}=\left(k_{z}\triangle z/2\right)\cot\left(k_{z}\triangle z/2\right)$
and $\zeta_{x}=\left(k_{x}\triangle x/2\right)\cot\left(k_{x}\triangle x/2\right)$,
is depicted in Fig. \ref{fig:growContour}. Growth is dominated by
the $m_{z}=0$ numerical instability. Note that non-resonant growth
associated with the $m_{z}=0$ mode is bounded well away from small
$k_{z}$, as predicted in the previous Section. The instability group
velocity is about 0.5 on resonance and 1.0 well off resonance.
Fig. \ref{fig:linear-nosmooth} plots maximum growth rates versus
$v\,\Delta t/\Delta z$ for options (a), (b), and (c), as well as
for option (d), which is PSATD with conventional current interpolation.
Recall that the option (d) dispersion relation is given by Eqs. (\ref{eq:M3x3})
and (\ref{eq:Mzz-alt}) - (\ref{eq:Mxy-alt}). (A summary of the options
is given in Table \ref{Tableoption}.) Growth rates for option (a)
are noticeably smaller than those for options (b) and (d) with $v\,\Delta t/\Delta z$
less than about 1.5, in part because $\zeta_{z}$ introduces smoothing
at large $k$, which is where the dominant resonances occur in this
range of time-steps. On the other hand, the curves for options (a)
and (b) converge for large $v\,\Delta t/\Delta z$, because $\zeta_{z}$
for both options (and indeed for all valid choices of $\zeta_{z}$)
approaches unity at small $k$, which is where the dominant resonances
occur at large time-steps. An inflection occurs in curves (a), (b),
and (d) near $v\,\Delta t/\Delta z\approx0.9$, where the $m_{z}=0$
resonant instability begins to dominate the $m_{z}=-1$ and other
resonances. PSATD option (c), designed to suppress the $m_{z}=0$
instability, both resonant and non-resonant, is seen to do so quite
effectively. Growth plummets to near zero at $v\,\Delta t/\Delta z=1$
and is modestly larger at larger values of $v\,\Delta t/\Delta z$
due only to residual $m_{z}=\pm1$ resonant instabilities. Agreement
between theory and simulation growth rates is very good in all cases.
The simulation growth rate measurements themselves appear to be accurate
to better than 2\%, except perhaps for very small growth rates.
\begin{table}
\caption{Algorithm options used in Figs. \ref{fig:linear-nosmooth}, \ref{fig:linear-filtering},
\ref{fig:cubic-filtering}, and elsewhere.}
\centering{}\label{Tableoption}%
\begin{tabular}{|c|c|c|}
\hline
Option & Current Factors or Equations & Comments\tabularnewline
\hline
\hline
(a) & %
\begin{tabular}{c}
$\zeta_{x}=\left(k_{x}\triangle x/2\right)\cot\left(k_{x}\triangle x/2\right)$\tabularnewline
$\zeta_{z}=\left(k_{z}\triangle z/2\right)\cot\left(k_{z}\triangle z/2\right)$\tabularnewline
\end{tabular} & Equivalent to Esirkepov in real space\tabularnewline
\hline
(b) & $\zeta_{z}=\zeta_{x}=1$ & Esirkepov in \emph{k}-space (base case)\tabularnewline
\hline
(c) & $\zeta_{x}=\zeta_{z}$, as defined in Eq. (\ref{eq:resz}) & Reduces order of nonphysical resonances\tabularnewline
\hline
(d) & Eqs. (\ref{eq:M3x3}), (\ref{eq:Mzz-alt}) - (\ref{eq:Mxy-alt}) & Conventional current deposition at nodes\tabularnewline
\hline
\end{tabular}
\end{table}
As explained in the previous Section, PSATD combined with digital
filtering can be very effective at suppressing the numerical Cherenkov
instability. Since filtering can be applied directly in \emph{k}-space,
any suitable filtering profile can be employed in a straightforward
manner. (Digital filtering of the numerical Cherenkov instability in
FDTD algorithms is described in \citep{godfrey2013esirkepov,GreenwoodJCP04}.)
To facilitate comparison with earlier analysis for WARP-FDTD \citep{godfrey2013esirkepov},
we use the same ten-pass (including two compensation passes) bilinear
filter used there. The $k_{z}$- and $k_{x}$-dependent factors
of the filter function are displayed in Fig. \ref{fig:smooth}. (Also
shown are $\zeta_{z}$ and $\zeta_{x}$ for options (a) and (c). Remember,
however, that these current multipliers are not equivalent to digital
filters, although they can introduce a degree of smoothing.) Applying
this filter with parameters otherwise identical to those in Fig. \ref{fig:linear-nosmooth}
reduces growth rates by a factor of five or so over the range of $v\,\Delta t/\Delta z$
shown in Fig. \ref{fig:linear-filtering} (or for $v\,\Delta t/\Delta z<1$
in the case of option (c), which has small growth for larger time-steps
even without filtering). At larger time-steps growth rates increase
toward their unfiltered values, as the dominant resonant modes move
to progressively smaller $k_{z}$. (For instance, the option (a) filtered
maximum growth rate increases to 74\% of its unfiltered value by $v\,\Delta t/\Delta z=3$.)
Maximum growth rates oscillate irregularly for options (a) and (c)
when $v\,\Delta t/\Delta z$ is less than about 1.3, and for option
(b) when it is less than about 1.0, as higher order resonances move
through the weakly filtered region at small $k$. Digital filtering
seems less effective for option (d), probably because its $m_{z}=0$
non-resonant growth at small \emph{k} is larger than in the other
options.
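Because filtering is a single multiplication in \emph{k}-space, any profile is equally easy to apply. The sketch below uses $\cos^{2}(k_{z}\Delta z/2)$ per bilinear pass and $1+4\sin^{2}(k_{z}\Delta z/2)$ per compensation pass, one consistent choice that keeps the combined transfer flat to $O(k^{2})$; WARP's exact compensator may differ:

```python
import numpy as np

def kspace_filter_profile(kz, dz, nbinomial=8, ncomp=2):
    # Transfer function: cos^2(kz dz/2) per bilinear pass, times
    # (1 + 4 sin^2(kz dz/2)) per compensation pass, chosen so that the
    # product is flat to O(k^2).  This is one consistent choice; the
    # exact WARP compensation stencil may differ.
    x = kz * dz / 2.0
    return np.cos(x) ** (2 * nbinomial) * (1.0 + 4.0 * np.sin(x) ** 2) ** ncomp

dz = 0.3868
nz = 128
kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)
g = kspace_filter_profile(kz, dz)

# Applying the filter is a single multiplication in k-space:
field = np.cos(np.pi * np.arange(nz))          # pure Nyquist-mode noise
filtered = np.real(np.fft.ifft(g * np.fft.fft(field)))
```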
The weak instability growth for options (a) and (c) can be further
reduced by higher order interpolation. As illustrated in Fig. \ref{fig:cubic-filtering}
and, with slightly less accuracy, in Fig. \ref{fig:Stability criterion},
cubic interpolation almost completely eliminates numerical Cherenkov
instability growth in option (a) for $v\,\Delta t/\Delta z<1.3$.
Option (c) performs almost as well in that same time-step range and
much better outside it. Quadratic interpolation performs almost as
well as cubic in this regard. Incidentally, the residual instability
for option (c) is a finite $\gamma$ effect, dropping to zero for
infinite $\gamma$.
One might reasonably ask whether the superior stability properties
of option (c) at larger time steps are due only to the digital filtering
of the transverse currents that it entails. No, is the answer, as
can be demonstrated from numerical solution of the option (b) dispersion
relation with the right side of Eq. (\ref{eq:resz}) used as a digital
filter applied to \emph{n} throughout. Doing so effectively suppresses
the $m_{z}=0$ resonant instability but not its non-resonant counterpart,
with maximum growth rates at larger time steps of order one-third
those of option (b) without digital filtering, Fig. \ref{fig:linear-nosmooth}.
And, when digital filtering equal to the right side of Eq. (\ref{eq:resz})
is combined with the digital filtering already employed in Fig. \ref{fig:linear-filtering}
or \ref{fig:cubic-filtering}, the results are practically indistinguishable
from those of option (b).
The PSATD algorithm also accommodates field interpolation using the
Galerkin and Uniform schemes discussed in \citep{godfrey2013esirkepov}.
(Fields are computed at mesh points as described in Sec. 2 and then
averaged to the staggered Yee mesh \citep{Yee}.) Results for these
two schemes with linear interpolation and no digital filtering are
provided in Fig. \ref{fig:uniform-galerkin}. Both exhibit non-resonant
instability growth rates at small \emph{k}. Consequently, there appears
to be no advantage in using these more complicated field interpolation
approaches with PSATD.
Although Figs. \ref{fig:linear-nosmooth}, \ref{fig:linear-filtering},
and \ref{fig:cubic-filtering} demonstrate clearly the validity of
the numerical dispersion relation in the large $\gamma$ limit, they
indicate little about its validity more generally. We have, therefore,
run comparisons between the dispersion relation and WARP-PSATD option
(b) simulations for $\gamma=3.0,\,1.4,\,1.1$ with linear interpolation
and no digital filtering. Once again, agreement is excellent; see
Fig. \ref{fig:Low gamma}. Maximum growth rates for $\gamma$ as
low as 3 are essentially the same as those for $\gamma=130$. However,
the \emph{k}-space spectrum at $\gamma=3.0$ also shows signs of the
well-known $m_{z}=-1$ quasi-one-dimensional, electrostatic numerical instability
\citep{Langdon1970,Okuda1970}. For smaller $\gamma$ yet, the numerical
Cherenkov instability growth rate decreases modestly, while the electrostatic
numerical instability growth rate increases as $1/\gamma$ for fixed
\emph{n}. (Recall that \emph{n} is defined in this paper as the density divided
by $\gamma$.) The two become comparable at $\gamma\approx1.4$, and
the electrostatic instability dominates strongly at $\gamma=1.1$.
The electrostatic numerical instability can be suppressed by using
any field interpolation algorithm that offsets $E_{z}$ by $\triangle z/2$
relative to $\varrho$ (or $\mathbf{W}$ in the Esirkepov current
algorithm) and interpolates it with a spline one order lower in z
relative to $\varrho$ or to $\mathbf{W}$ \citep{lewis1972variational,langdon1973energy},
such as the Galerkin ``energy-conserving'' algorithm. Even the Uniform
algorithm ameliorates the electrostatic instability to a degree, stabilizing
the strong $m_{z}=-1$ mode but destabilizing the slower $m_{z}=+1$
mode. Of course, digital filtering plus cubic interpolation also works
well.
\section{PSTD stability results}
The numerical stability properties of the related PSTD algorithm \citep{Liu1997}
recently were addressed in \citep{Xu2012}. Here, we focus on comparison
of PSATD and PSTD growth rates. The PSTD dispersion relation can be
derived following the procedures used to analyze the PSATD algorithm.
Under the assumptions leading to Eqs. (\ref{eq:leapfrog-alt1}), (\ref{eq:leapfrogB}),
and (\ref{eq:Bave}), the corresponding PSTD equations are
\begin{multline}
\mathbf{E}^{n+1}=\mathbf{E}^{n}-i\mathbf{k}\times\mathbf{B}^{n+\nicefrac{1}{2}}\triangle t-\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}^{n+\nicefrac{1}{2}}\triangle t\\
+\mathbf{k}\mathbf{k}\cdot\mathbf{\mathbf{\zeta}:}\mathbf{J}_{e}^{n+\nicefrac{1}{2}}\triangle t/k^{2}-\mathbf{k}\mathbf{k}\cdot\mathbf{J}_{e}^{n+\nicefrac{1}{2}}\triangle t/k^{2},\label{eq:EleapfrogPSTD}
\end{multline}
\begin{equation}
\mathbf{B}^{n+\nicefrac{3}{2}}=\mathbf{B}^{n+\nicefrac{1}{2}}+i\mathbf{k}\times\mathbf{E}^{n+1}\triangle t,\label{eq:BleapfrogPSTD}
\end{equation}
\begin{equation}
\mathbf{B}^{n}=\left(\mathbf{B}^{n+\nicefrac{1}{2}}+\mathbf{B}^{n-\nicefrac{1}{2}}\right)/2.\label{eq:BavePSTD}
\end{equation}
As noted in \citep{Vay2013PSATD}, these equations also can be obtained
by expanding their PSATD counterparts to first order in \emph{k}.
The dispersion relation again takes the form of (\ref{eq:M3x3}),
but with $[\mathbf{k}]=\mathbf{k}$,
\begin{equation}
\xi_{z,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(k_{z}^{2}+\zeta_{z}k_{x}^{2}\right)\Delta t^{2}\left[\omega\right]k_{z}^{\prime}/4k^{2}k_{z}\mathrm{,}\label{eqMzzPSTD}
\end{equation}
\begin{equation}
\xi_{z,x}=-n\sum_{m}S^{J}S^{E_{x}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{z}\Delta t\, k_{x}^{\prime}/2k^{2}k_{z},\label{eq:MzxPSTD}
\end{equation}
\begin{equation}
\xi_{z,y}=nv\sum_{m}S^{J}S^{B_{y}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{z}\Delta t\, k_{x}^{\prime}/2k^{2}k_{z},\label{eq:MzyPSTD}
\end{equation}
\begin{equation}
\xi_{x,z}=-n\gamma^{-2}\sum_{m}S^{J}S^{E_{z}}\csc^{2}\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(1-\zeta_{z}\right)\Delta t^{2}\left[\omega\right]k_{x}k_{z}^{\prime}/4k^{2},\label{eq:MxzPSTD}
\end{equation}
\begin{equation}
\xi_{x,x}=-n\sum_{m}S^{J}S^{E_{x}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{x}\Delta t\, k_{x}^{\prime}/2k^{2}k_{x},\label{eq:MxxPSTD}
\end{equation}
\begin{equation}
\xi_{x,y}=nv\sum_{m}S^{J}S^{B_{y}}\csc\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\eta_{x}\Delta t\, k_{x}^{\prime}/2k^{2}k_{x},\label{eq:MxyPSTD}
\end{equation}
and
\begin{equation}
\eta_{z}=\cot\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(k_{z}^{2}+\zeta_{z}k_{x}^{2}\right)\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)+\left(1-\zeta_{x}\right)k_{z}^{2}\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right),\label{eq:etazPSTD}
\end{equation}
\begin{equation}
\eta_{x}=\cot\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\left(1-\zeta_{z}\right)k_{x}^{2}\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)+\left(k_{x}^{2}+\zeta_{x}k_{z}^{2}\right)\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right).\label{eq:etaxPSTD}
\end{equation}
Provided that currents and fields are interpolated to or from the
same mesh points, the high-$\gamma$ dispersion relation again takes
the form in Eq. (\ref{eq:drformfull}) with $C_{0}$ as before and
\begin{multline}
C_{1}=-\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}\cos\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
\left\{ 2\zeta_{x}k_{z}\left(2k_{z}\sin\left(\omega\frac{\Delta t}{2}\right)-k^{2}v\,\Delta t\cos\left(\omega\frac{\Delta t}{2}\right)\right)+k_{x}^{2}\Delta t^{2}C_{0}/\sin\left(\omega\frac{\Delta t}{2}\right)\right\} /4k^{2}k_{x},\label{eq:C1PSTD}
\end{multline}
\begin{multline}
C_{2}=k_{x}\sum_{m_{x}}k_{x}^{\prime}\left(S^{J}\right)^{2}\cos\left[\left(\omega-k_{z}^{\prime}v\right)\frac{\Delta t}{2}\right]\sin\left(k_{z}^{\prime}v\frac{\Delta t}{2}\right)\\
\left\{ 2\zeta_{z}k_{x}\left(2k_{z}\sin\left(\omega\frac{\Delta t}{2}\right)-k^{2}v\,\Delta t\cos\left(\omega\frac{\Delta t}{2}\right)\right)-k_{z}k_{x}\Delta t^{2}C_{0}/\sin\left(\omega\frac{\Delta t}{2}\right)\right\} /4k^{2}k_{z}.\label{eq:C2PSTD}
\end{multline}
Vacuum electromagnetic modes are described by $C_{0}=0,$
\begin{equation}
\sin^{2}\left(\omega\frac{\Delta t}{2}\right)=\left(k\frac{\Delta t}{2}\right)^{2},
\end{equation}
which has a Courant limit, $\Delta t_{c}=\left(2/\pi\right)\left(\Delta z^{-2}+\Delta x^{-2}\right)^{-\nicefrac{1}{2}}$,
smaller by a factor of $2/\pi$ than the usual FDTD Courant limit.
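For reference, the two limits can be compared directly; this trivial sketch uses our own function names:

```python
import math

def pstd_courant_dt(dz, dx):
    # PSTD Courant limit from sin^2(w dt/2) = (k dt/2)^2:
    # dt_c = (2/pi) (dz^-2 + dx^-2)^(-1/2), a factor 2/pi below FDTD's.
    return (2.0 / math.pi) * (dz ** -2 + dx ** -2) ** -0.5

dz = dx = 0.3868
dtc_pstd = pstd_courant_dt(dz, dx)
dtc_fdtd = (dz ** -2 + dx ** -2) ** -0.5   # usual FDTD Courant condition
```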
PSTD maximum growth rates versus $v\,\Delta t/\Delta z$ are presented
for options (a) and (b) with linear interpolation and no digital filtering
in Fig. \ref{fig:PSTDlinear-nofiltering}. Not surprisingly, these
featureless curves are of the same magnitude as the corresponding
PSATD curves in Fig. \ref{fig:linear-nosmooth} over the same time-step
range. In contrast, maximum growth rates for options (a) and (b) with
cubic interpolation and the digital filtering employed for PSATD,
shown in Fig. \ref{fig:PSTDcubic-filtering}, are an order of magnitude
larger than the corresponding PSATD values at $v\,\Delta t/\Delta z\approx0.4$,
although still very small. This difference results from a narrow region
(about 8 of 4225 \emph{k}-space modes) at small $k_{x}$ of $m_{z}=0$ non-resonant
instability that does not occur for PSATD. PSTD maximum growth rates
for the same digital filtering and linear interpolation differ only
moderately from the cubic interpolation results.
\section{Simulation results}
Series of two-dimensional simulations of a 100-MeV-class LPA stage
were performed, focusing on plasma wake formation (similar to those
presented in \citep{godfrey2013esirkepov}), using the parameters
given in Table \ref{TableLPA}. With the parameters chosen, dephasing
of the accelerated electron beam and the wake, as well as depletion
of the laser, occur in about 1 mm. However, this distance was found
to be too short for any numerical instability to develop with the
pseudo-spectral solvers, and a much longer plasma of 3 cm was used
for the sake of stability analysis. The velocity of the wake in the
plasma corresponds to $\gamma\simeq13.2$, and the simulations were
performed in a boosted frame of $\gamma_{f}=13$.
Reference simulations were run in two dimensions for conditions where
no instability developed, and the final total field energy $W_{f0}$
was recorded as a reference value in each case. Runs then were conducted
for the PSATD and PSTD solvers, using the Esirkepov current deposition
options (a) and (c) for PSATD, as well as option (a) for PSTD. The
final energy $W_{f}$ was recorded and divided by the reference energy
$W_{f0}$. The ratio $W_{f}/W_{f0}$ is plotted versus time-step in
Fig. \ref{fig:WARP_linear} from simulations using the PSATD solver
with linear current deposition and 4 passes of bilinear smoothing
plus compensation of both current and interpolated fields. (This is
equivalent in the linear regime to the filtering described in previous
Sections.) Following theoretical predictions, option (a) exhibits
no instability for $v\triangle t/\triangle z\lesssim0.3$, $v\triangle t/\triangle z\approx0.5$
and $v\triangle t/\triangle z=1$, and option (c) exhibits an additional
null at $v\triangle t/\triangle z=2$. Fig. \ref{fig:WARP_cubic}
shows results using cubic current deposition, where the PSATD and
PSTD instabilities are contrasted to those of the FDTD Cole-Karkkainen
(CK) solver with Galerkin or uniform field interpolation. Still in
agreement with theoretical predictions, the PSATD solver is shown
to be stable over a wide range of time-steps for $v\triangle t/\triangle z\lesssim1.2$
with option (a) and even as wide as $v\triangle t/\triangle z\lesssim2.1$
with option (c). The PSTD solver also exhibits good stability but
only on the more restricted $v\triangle t/\triangle z\lesssim0.45$,
owing to its constraining Courant limit.
Note that conducting the time-step sweeps described in this section
would have been prohibitively expensive for the $\gamma=130$ employed
elsewhere in this article. The smaller $\gamma=13$ used here increases
the option (c) growth rates at larger time-steps, as well as the Uniform-CK
growth rates in the vicinity of $v\triangle t/\triangle z=0.5$.
\begin{table}[htd]
\caption{List of parameters for simulations of wake propagation in a LPA stage.}
\begin{centering}
\begin{tabular}{lcc}
\hline
plasma density on axis & $n_{e}$ & $10^{19}$~cm$^{-3}$\tabularnewline
plasma longitudinal profile & & flat\tabularnewline
plasma length & $L_{p}$ & $3$ cm\tabularnewline
plasma entrance ramp profile & & half sine\tabularnewline
plasma entrance ramp length & & $20$ $\mu$m\tabularnewline
\hline
laser profile & & $a_{0}\exp\left(-r^{2}/2\sigma^{2}\right)\sin\left(\pi z/3L\right)$\tabularnewline
normalized vector potential & $a_{0}$ & $1$\tabularnewline
laser wavelength & $\lambda$ & $0.8$ $\mu$m\tabularnewline
laser spot size (RMS) & $\sigma$ & $8.91$ $\mu$m\tabularnewline
laser length (HWHM) & $L$ & $3.36$ $\mu$m\tabularnewline
normalized laser spot size & $k_{p}\sigma$ & $5.3$\tabularnewline
normalized laser length & $k_{p}L$ & $2$\tabularnewline
\hline
cell size in x & $\Delta x$ & $\lambda/20$\tabularnewline
cell size in z & $\Delta z$ & $\lambda/20$\tabularnewline
\# of plasma macro-particles/cell & & 4 electrons + 4 protons\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\label{TableLPA}
\end{table}
\section{Conclusions}
The numerical stability properties of multidimensional PIC codes employing
the PSATD electromagnetic field algorithm, combined with either the
Esirkepov or the conventional current deposition algorithm, have
been derived. Overall, the numerical Cherenkov instability growth
rates for the various versions of the PSATD algorithm are comparable
with those of FDTD algorithms. However, when cubic interpolation and
short wavelength digital filtering also are employed, at least two
versions of the PSATD algorithm exhibit excellent stability over
a wide range of time-steps. For comparison purposes, stability properties
of the more commonly used PSTD electromagnetic field algorithm also
were determined. Fig. \ref{fig:Comparison} compares growth rates
for the most stable versions of these two algorithms (options (a)
and (c) for PSATD and option (a) for PSTD, as defined in Sec. 4 and
again in Table \ref{Tableoption}) with the growth rates of the
Galerkin and Uniform versions of the Cole-Karkkainen \citep{ColeIEEE1997,ColeIEEE2002,KarkICAP06}
FDTD algorithms (coupled with the Esirkepov algorithm), studied in
\citep{godfrey2013esirkepov}. The PSATD options (a) and (c) exhibit
clearly superior stability behavior, with (a) modestly better at smaller
time-steps and (c) substantially better at larger time-steps. Although
the PSTD algorithm also exhibits small growth rates, its range of
time-steps is limited by its relatively small Courant condition. The
two FDTD algorithms have small instability growth rates only over
narrow time-step bands. These findings are corroborated by WARP simulation
results in Fig. \ref{fig:WARP_cubic}.
\section{Acknowledgment}
We thank David Grote for support with the code WARP. This work was
supported in part by the Director, Office of Science, Office of High
Energy Physics, U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231
and the US-DOE SciDAC ComPASS collaboration, and used resources of
the National Energy Research Scientific Computing Center.
\section*{References}
\bibliographystyle{elsarticle-num}
\input{JCP_13_PSATDstability.bbl}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[bb=0bp 0bp 360bp 0bp,scale=0.93]{normalmode}\caption{\label{fig:Normal-mode-diagram}PSATD normal mode diagram for $v\triangle t/\triangle z=1.2$
and $k_{x}=\pi/2\triangle x$, showing electromagnetic modes (numerically
distorted for $k>\pi/\triangle t$ ) and spurious beam modes, $m_{z}=\left[-1,\,1\right]$.
Numerical Cherenkov instabilities are strongest near mode intersections.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.93]{resonance}\caption{\label{fig:Resonance curves}Locations in \textit{k}-space of PSATD
resonances between electromagnetic modes and spurious beam modes,
$m_{z}=\left[-1,\,+1\right]$, for $v\triangle t/\triangle z=1.2$.
Intersecting resonance curves occur at different frequencies and,
therefore, do not interact.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.93]{PSATD-criterion}\caption{\label{fig:Stability criterion}Approximate maximum growth rates for
PSATD options (a) and (b) with cubic interpolation and digital filtering.
Option (c) exhibits zero growth in this approximation.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.93]{grow1d}\caption{\label{fig:grow1d}PSATD one-dimensional growth rate for $m_{z}=0$
and $v\,\triangle t/\triangle z=3$.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.75]{growContour}\caption{\label{fig:growContour}Growth rates from PSATD dispersion relation
for option (a), $m_{z}=\left[-1,\,+1\right]$, and $v\triangle t/\triangle z=1.2$.
Superimposed are the resonance curves from Fig. \ref{fig:Resonance curves}.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{linear-nofiltering}\caption{\label{fig:linear-nosmooth}Maximum growth rates for PSATD options
(a), (b), (c), and (d) with linear interpolation and no digital filtering.
Markers represent corresponding simulation results.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.45]{smooth}\caption{\label{fig:smooth}Left: $k_{z}$-dependent factor of ten-pass bilinear
filter, $\zeta_{z}$ for option (a) (which depends only on $k_{z}$),
and $\zeta_{z}=\zeta_{x}$ for option (c) evaluated at $k_{x}=0$
and $\triangle t/\triangle z=2$. Right: \emph{$k_{x}$}-dependent
factor of ten-pass bilinear filter, $\zeta_{x}$ for option (a) (which
depends only on \emph{$k_{x}$}), and $\zeta_{z}=\zeta_{x}$ for option
(c) evaluated at $k_{z}=0$ and $\triangle t/\triangle z=2$.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{linear-filtering}\caption{\label{fig:linear-filtering}Maximum growth rates for PSATD options
(a), (b), (c), and (d) with linear interpolation and digital filtering.
Markers represent corresponding simulation results.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{cubic-filtering}\caption{\label{fig:cubic-filtering}Maximum growth rates for PSATD options
(a), (b), (c), and (d) with cubic interpolation and digital filtering.
Markers represent corresponding simulation results.}
\end{figure}
\clearpage{}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.95]{Low-gamma}
\par\end{centering}
\caption{\label{fig:Low gamma}Maximum growth rates for PSATD option (b) with
$\gamma=130,\,3.0,\,1.4,\,1.1$, linear interpolation, and no filtering.
Markers represent corresponding simulation results.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{uniform-galerkin}\caption{\label{fig:uniform-galerkin}Maximum growth rates for PSATD Uniform
and Galerkin linear interpolation schemes and no digital filtering.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{PSTD-linear-nofiltering}\caption{\label{fig:PSTDlinear-nofiltering}Maximum growth rates for PSTD options
(a) and (b) with linear interpolation and no digital filtering.}
\end{figure}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.95]{PSTD-cubic-filtering}\caption{\label{fig:PSTDcubic-filtering}Maximum growth rates for PSTD options
(a) and (b) with cubic interpolation and digital filtering.}
\end{figure}
\begin{center}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.92]{Warp-linear}\caption{\label{fig:WARP_linear}Field energy relative to stable reference
level vs $v\Delta t/\Delta z$ from two-dimensional WARP LPA simulations
at $\gamma$ = 13, using the PSATD solver with Esirkepov current
deposition options (a) and (c), four passes of bilinear plus one compensation
step filtering on both current and gathered fields, and linear interpolation.}
\end{figure}
\par\end{center}
\begin{center}
\clearpage{}
\begin{figure}
\centering{}\includegraphics[scale=0.92]{Warp-cubic}\caption{\label{fig:WARP_cubic}Field energy relative to stable reference level
vs $v\Delta t/\Delta z$ from two-dimensional WARP LPA simulations
at $\gamma$ = 13, using the PSATD or PSTD solvers with Esirkepov
current deposition options (a) and (c), four passes of bilinear plus
one compensation step filtering on both current and gathered fields,
and cubic interpolation. Results are contrasted to simulations using
the CK solver with Galerkin or Uniform field gather, the same filtering
and cubic interpolation.}
\end{figure}
\par\end{center}
\clearpage{}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.95]{Comparison}
\par\end{centering}
\caption{\label{fig:Comparison}Maximum growth rates for PSATD (a), PSATD (c),
PSTD (a), Galerkin-CK, and Uniform-CK with cubic interpolation and
digital filtering.}
\end{figure}
\clearpage{}This document was prepared as an account of work sponsored
in part by the United States Government. While this document is believed
to contain correct information, neither the United States Government
nor any agency thereof, nor The Regents of the University of California,
nor any of their employees, nor the authors makes any warranty, express
or implied, or assumes any legal responsibility for the accuracy,
completeness, or usefulness of any information, apparatus, product,
or process disclosed, or represents that its use would not infringe
privately owned rights. Reference herein to any specific commercial
product, process, or service by its trade name, trademark, manufacturer,
or otherwise, does not necessarily constitute or imply its endorsement,
recommendation, or favoring by the United States Government or any
agency thereof, or The Regents of the University of California. The
views and opinions of authors expressed herein do not necessarily
state or reflect those of the United States Government or any agency
thereof or The Regents of the University of California.
\end{document}
\section{Introduction}
\label{secIntroduction}
The quantization of the black hole horizon area and entropy has been a fascinating subject. The pioneering work can be traced back to Bekenstein \cite{Bekenstein1}, with the famous conjecture that the black hole area should be represented by a quantum operator with a discrete spectrum of eigenvalues in a quantum gravity theory. Regarding the black hole horizon area as an adiabatic invariant, the equidistant area spectrum was obtained
\begin{eqnarray}
A_{n}=\gamma \hbar\cdot n,
\end{eqnarray}
with $n$ an integer. From the dynamical modes of the classical theory, many attempts have been made to obtain such an equally spaced area spectrum, although the spacing may differ from $\gamma=8\pi$ \cite{Bekenstein1995plb,Louko1996prd,Makela,
Dolgov1997plb,Peleg,Barvinsky2001plb,Barvinsky,Kastrup1996plb}.
Since a black hole is determined by only a few parameters (such as mass, charge, and angular momentum), black holes come very close to our notion of elementary particles, much like the hydrogen atom. Therefore their characteristic vibrations, known as the quasinormal mode (QNM) frequencies \cite{Kokkotas,Nollert}, should play a significant role in black hole physics. The results are encouraging. Based on Bohr's correspondence principle that ``transition frequencies at large quantum numbers should
equal classical oscillation frequencies", Hod \cite{Hodprl1998,Hodprl19982} connected these classical oscillation frequencies with the real part of the highly damped QNM frequencies. Then he obtained the spacing of the area spectrum $\Delta A=(4\ln3)\hbar$ for a Schwarzschild black hole. On the other hand, Kunstatter \cite{Kunstatterprl2003} also derived the same area spectrum through the adiabatic invariant. A similar argument was also used to fix the Immirzi parameter by Dreyer \cite{Dreyerprl2003}. This rejuvenated great interest in the investigation of the black hole area and entropy spectra via the interpretation of the QNM frequencies \cite{Polychronakosprd2004,Setarecqg2004,
Setareprd2004,Setareprd20042,Setareprd20043,Setareprd20044,Lepeplb2005}.
It has recently been pointed out by Maggiore \cite{Maggioreprl2008} that, in the high damping limit, the proper frequency of the equivalent harmonic oscillator $\omega(E)$, which is interpreted as the QNM frequency, should be of the following form
\begin{eqnarray}
\omega(E)=\sqrt{|\omega_{R}|^{2}+|\omega_{I}|^{2}},
\end{eqnarray}
with $\omega_{R}$ and $\omega_{I}$ the real and imaginary parts of the QNM
frequency, respectively. Following this idea, Bekenstein's area spectrum will be recovered for a Schwarzschild black hole. Vagenas and Medved \cite{Vagenasjhep2008,Medvedcqg2008} applied this idea to derive the area spectrum of a rotating Kerr black hole with the choice of $\Delta\omega(E)=(|\omega_{I}|)_{n}-(|\omega_{I}|)_{n-1}$ for $\omega_{I}\gg\omega_{R}$. The area spectrum calculated with the modified Hod's method is equally spaced with spacing $\Delta A=8\pi\hbar$, which agrees with Bekenstein's spacing. When employing Kunstatter's method, however, the spectrum is nonequidistant, and the spacing is angular momentum dependent. Medved \cite{Medvedcqg2008} argued that Kunstatter's method requires the black hole to be far from extremality, i.e., the small angular momentum limit. Considering this, the area spectra calculated from these two methods coincide with each other. Equivalently, the entropy spectrum can also be obtained using the Bekenstein-Hawking entropy/area law $S=A/4$, i.e., $S=2\pi\hbar\cdot n$ with spacing $\Delta S=2\pi\hbar$ in Einstein's gravity. However, when these results were extended to modified gravity theories, the area spectra were found to be no longer equidistant, while the entropy spectra were still of the same form as in Einstein gravity \cite{Kothawalaprd2008,Weijhep2008}. Moreover, these results have been applied to different black holes \cite{Fernando,Lopez-Ortega,Kwon,WeiYang,KwonNam,Gonzalez,Lopez-Ortega2,ChenYang,KwonNam3,Li,LiuHu} and the same area and entropy spectra were observed. The spectra can also be found with the quantization of the angular momentum component \cite{Ropotenko,Jia}, which gives the same results as those obtained from the QNM frequencies. Black hole spectroscopy can also be obtained from the quantum tunneling method \cite{Banerjee10,Banerjee102,Jiang,MajhiVagenas11,Chen12,JiangHan,JiangHanCai,Ropotenko2,BanerjeeVagenas}.
In \cite{JiangHanCai}, the author argued that the entropy spectrum is $S_{n}=n\hbar$ with spacing $\Delta S=\hbar$ independent of the black hole parameters, gravity theory, and the dimension of the spacetime. The spacing was also suggested to be the lower bound of the entropy spectrum \cite{BanerjeeVagenas}.
As we know, the QNM frequencies are determined by solving the perturbation equation with the boundary conditions of purely outgoing waves at infinity and purely ingoing waves at the horizon. In a black hole background, null geodesics appear very useful for explaining the QNM frequencies \cite{Press,Goebel,Ferrari,Mashhoon,Berti,Cardoso}. The QNM frequencies can be interpreted in terms of massless particles trapped at the unstable circular null geodesics and slowly leaking out to infinity. The real part of the QNM frequencies corresponds to the angular velocity at the unstable null geodesics, and the imaginary part is measured by the instability time scale of the orbit. In the eikonal limit ($m\gg1$), the QNM frequencies of the black holes approximately read
\begin{eqnarray}
\omega_{\text{QNM}}=\Omega_{c}m-i(n+1/2)|\lambda|,\label{QNM}
\end{eqnarray}
with $\Omega_{c}$ the angular velocity at the unstable circular null
geodesic and $\lambda$ the Lyapunov exponent. This result is valid not only for a static, spherically symmetric and asymptotically flat line element in any dimension, but also for the equatorial orbits in the geometry of a rotating Myers-Perry black hole in higher dimensions \cite{Cardoso}. The parameters $\Omega_{c}$ and $\lambda$, as we will show, can both be expressed in terms of the radius of the unstable circular null geodesics for an arbitrary stationary black hole with the asymptotics (\ref{asymptotics}).
Based on such a view of the QNM frequencies, strong gravitational lensing by black holes and the high-energy absorption cross section were found to be related to the unstable circular null geodesics \cite{Stefanov,WeiLiu}, with some impressive results obtained. It is therefore natural to seek the relation between the thermodynamic quantities of a black hole and its unstable circular null geodesics. Motivated by this idea, in this paper we aim to study the entropy spectrum of black holes from such a view of the QNM frequencies. Employing Hod's method, we find that the entropy spectrum of an asymptotically flat black hole can be expressed in terms of its unstable circular null geodesics. The spacing of the entropy spectrum is found to depend on the charge, spin, and the dimension of the spacetime, which is very different from previous work using the usual QNM frequencies \footnote{In order to distinguish the QNM frequencies obtained through solving the perturbation equation from those obtained from such a view, we refer to those from the perturbation equation as the usual QNM frequencies.}. Moreover, the result shows that the spacing $\Delta S$ of the entropy spectrum decreases with the dimension $d$ of the spacetime. For $d=4$, $\Delta S$ is larger than $2\pi\hbar$, while for $d\geq 5$, it is smaller than $2\pi\hbar$. Furthermore, the spacing $\Delta S$ falls below the lower bound obtained from the tunneling method, as suggested in \cite{BanerjeeVagenas}, when $d$ is larger than 151.
The paper is organized as follows. In section \ref{geodesic}, we investigate the null geodesics of an asymptotically flat spacetime. Then we obtain the relationship between the entropy spectrum and the null geodesics. In section \ref{Static}, we explore the entropy spectrum of a static and spherically symmetric black hole. We then generalize it to the stationary and axis-symmetric black hole in section \ref{Stationary}. The final section is
devoted to a brief summary.
\section{Entropy spectroscopy and null geodesics}
\label{geodesic}
In this section, we first study the null geodesics of a black hole, and then show the QNM frequencies obtained from the view described above. Further, based on such view, we interpret the black hole entropy spectrum through the null geodesics.
Here we assume that the equatorial metric ($\theta=\pi/2$) of a black hole background is in the following simple form
\begin{eqnarray}
ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+C(r)d\phi^{2}-D(r)dtd\phi,\label{metric0}
\end{eqnarray}
which can describe the equatorial plane of a static, spherically symmetric black hole or a stationary, axis-symmetric black hole. We only require that these metric functions satisfy the following proper asymptotics
\begin{eqnarray}
A(r\rightarrow \infty)=1,\;\;B(r\rightarrow \infty)=1,\;\;
C(r\rightarrow \infty)=r^{2},\;\;D(r\rightarrow \infty)=0. \label{asymptotics}
\end{eqnarray}
The geodesics of a massless particle in the equatorial plane of such black hole background (\ref{metric0}) can be easily obtained via the Lagrangian, which reads
\begin{eqnarray}
2\mathcal{L}=g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=-A(r)\dot{t}^{2}
+B(r)\dot{r}^{2}+C(r)\dot{\phi}^{2}
-D(r)\dot{t}\dot{\phi}.
\end{eqnarray}
The dot over a symbol denotes the ordinary differentiation with respect to an affine parameter. The generalized momentum derived from this Lagrangian is $p_{\mu}=\frac{\partial \mathcal{L}}{\partial \dot{x}^{\mu}}=g_{\mu\nu}\dot{x}^{\nu}$ with its components given by
\begin{eqnarray}
p_{t}&=&-A(r)\dot{t}-\frac{D(r)}{2}\dot{\phi}\equiv-E,\label{PT}\\
p_{\phi}&=&-\frac{D(r)}{2}\dot{t}+C(r)\dot{\phi}\equiv l,\label{Pphi}\\
p_{r}&=&B(r)\dot{r}.
\end{eqnarray}
The parameter $E$ is the energy of the particle and $l$ is the orbital angular momentum of the particle in the $\phi$ direction measured by an observer at
rest at infinity. Solving (\ref{PT}) and (\ref{Pphi}), we have the $t$-motion and $\phi$-motion
\begin{eqnarray}
\dot{t}=\frac{4C(r)E-2D(r)l}{4A(r)C(r)+D(r)^{2}},\quad \dot{\phi}=\frac{2D(r)E+4A(r)l}{4A(r)C(r)+D(r)^{2}}.\label{tphi0}
\end{eqnarray}
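The $t$- and $\phi$-motions follow from solving the linear system $p_{t}=-E$, $p_{\phi}=l$. As a standalone symbolic check (not part of the paper; the symbols stand for the metric functions evaluated at a fixed radius), one can confirm the closed forms above with sympy:

```python
import sympy as sp

# symbols standing for the metric functions A(r), C(r), D(r) at fixed r
A, C, D, E, l = sp.symbols('A C D E l', positive=True)
tdot, phidot = sp.symbols('tdot phidot')

# generalized momenta: p_t = -A*tdot - (D/2)*phidot = -E,
#                      p_phi = -(D/2)*tdot + C*phidot = l
sol = sp.solve([sp.Eq(-A*tdot - (D/2)*phidot, -E),
                sp.Eq(-(D/2)*tdot + C*phidot, l)],
               [tdot, phidot])

# closed forms quoted in the text
tdot_text = (4*C*E - 2*D*l)/(4*A*C + D**2)
phidot_text = (2*D*E + 4*A*l)/(4*A*C + D**2)
```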
The Hamiltonian of this system is
\begin{eqnarray}
2\mathcal{H}&=&2(p_{\mu}\dot{x}^{\mu}-\mathcal{L}) \nonumber\\
&=&-A(r)\dot{t}^{2}+B(r)\dot{r}^{2}
+C(r)\dot{\phi}^{2}-D(r)\dot{t}\dot{\phi}=\delta.\label{HH}
\end{eqnarray}
Here $\delta=-1, 0, 1$ are for timelike, null, and spacelike geodesics, respectively. Since we focus on the null geodesics, we take $\delta=0$. Then inserting (\ref{tphi0}) into (\ref{HH}), we get the $r$-motion
\begin{eqnarray}
\dot{r}^{2}=\frac{4}{B(r)}\left(\frac{C(r)E^{2}-D(r)El-A(r)l^{2}}{4A(r)C(r)+D(r)^{2}}\right),
\end{eqnarray}
which can be further rewritten as
\begin{eqnarray}
\dot{r}^{2}+V_{\text{eff}}=0,
\end{eqnarray}
with the effective potential $V_{\text{eff}}=-\frac{4}{B(r)}\left(\frac{C(r)E^{2}-D(r)El-A(r)l^{2}}
{4A(r)C(r)+D(r)^{2}}\right)$. The unstable circular orbit is determined by $V_{\text{eff}}$ through the following conditions:
\begin{eqnarray}
V_{\text{eff}}=0,\quad \frac{\partial V_{\text{eff}}}{\partial r}=0, \quad
\frac{\partial^{2} V_{\text{eff}}}{\partial r^{2}}<0.
\end{eqnarray}
The third condition ensures the instability of the orbit. From these conditions, it is easy to see that the unstable circular orbit is located at a local maximum of the effective potential. The first two conditions yield
\begin{eqnarray}
&&A(r)\tilde{l}^{2}+D(r)\tilde{l}-C(r)=0, \label{LL}\\
&&A'(r)\tilde{l}^{2}+D'(r)\tilde{l}-C'(r)=0,\label{LL2}
\end{eqnarray}
where the impact parameter $\tilde{l}=l/E$, and the prime denotes the derivative with respect to $r$. Solving (\ref{LL}), we get the impact parameter
\begin{eqnarray}
\tilde{l}=\frac{-D(r)+\sqrt{4A(r)C(r)+D(r)^{2}}}{2A(r)}.
\end{eqnarray}
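That this is indeed the root of the quadratic (\ref{LL}) can be verified symbolically; a standalone sympy sketch (illustrative, not from the paper):

```python
import sympy as sp

A, C, D = sp.symbols('A C D', positive=True)

# quoted root of the quadratic A*lt**2 + D*lt - C = 0
lt = (-D + sp.sqrt(4*A*C + D**2))/(2*A)
residual = sp.simplify(A*lt**2 + D*lt - C)
```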
The minimum impact parameter $\tilde{l}_{c}$ is measured at $r=r_{c}$ with $r_{c}$ the radius of the unstable circular orbit, which satisfies the equation derived from (\ref{LL2})
\begin{eqnarray}
A_{c}C'_{c}-A'_{c}C_{c}+\tilde{l}_{c}(A'_{c}D_{c}-A_{c}D'_{c})=0,\label{Rcir}
\end{eqnarray}
where the subscript ``c" means that these functions are evaluated at $r_{c}$. Given the explicit forms of the metric functions, we can obtain the radius of the unstable circular orbit, which corresponds to the largest root of equation (\ref{Rcir}) with the instability condition satisfied. For the Schwarzschild black hole, we easily get $r_{c}=3M$. The Lyapunov exponent $\lambda$ and angular velocity $\Omega_{c}$ are two important quantities measuring the properties of the unstable circular null geodesics, defined as
\begin{eqnarray}
\lambda=\sqrt{\frac{V_{\text{eff}}''}{2\dot{t}^{2}}}\bigg|_{r_{c}},\quad
\Omega_{c}=\frac{\dot{\phi}}{\dot{t}}\bigg|_{r_{c}}.
\end{eqnarray}
Combining the effective potential with the null geodesic equations, we find that
\begin{eqnarray}
\lambda=\frac{\kappa}{\tilde{l}_{c}}, \quad
\Omega_{c}=\frac{1}{\tilde{l}_{c}},
\end{eqnarray}
with $\kappa$ given in the compact form
\begin{eqnarray}
\kappa^{2}=\frac{A_{c}C''_{c}-A''_{c}C_{c}+\tilde{l}_{c}(A''_{c}D_{c}-A_{c}D''_{c})}
{2A_{c}B_{c}}.
\end{eqnarray}
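As a concrete illustration (a standalone sketch, not from the paper), these general expressions reproduce the familiar four-dimensional Schwarzschild values $r_{c}=3M$, $\tilde{l}_{c}=3\sqrt{3}M$, and $\kappa=1$:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r                    # 4d Schwarzschild: A=f, B=1/f, C=r^2, D=0
A, B, C = f, 1/f, r**2

# radius of the circular null orbit from A*C' - A'*C = 0 (Eq. (Rcir) with D=0)
rc = sp.solve(A*sp.diff(C, r) - sp.diff(A, r)*C, r)[0]
lc = sp.sqrt(C/A).subs(r, rc)    # minimum impact parameter
kappa2 = ((A*sp.diff(C, r, 2) - sp.diff(A, r, 2)*C)/(2*A*B)).subs(r, rc)
```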
In \cite{Cardoso}, the author showed that, for the $d$-dimensional Myers-Perry black holes, the real part of the QNM frequency, or the angular frequency of the equatorial circular geodesics is the inverse of their impact parameter. Here we clearly show that this is a general property and it holds for any stationary, asymptotically flat black hole. Inserting the parameters $\lambda$ and $\Omega_{c}$ into (\ref{QNM}), we will get the QNM frequency corresponding to the metric (\ref{metric0}),
\begin{eqnarray}
\omega=\frac{1}{\tilde{l}_{c}}(m-i(n+1/2)\kappa).
\end{eqnarray}
Then the vibrational frequency $\Delta \omega$ could be written as
\begin{eqnarray}
\Delta \omega&=&\sqrt{(\Delta\omega_{R})^{2}+(\Delta\omega_{I})^{2}}\nonumber\\
&=&\frac{1}{\tilde{l}_{c}}\sqrt{1+\kappa^{2}},\label{varomega}
\end{eqnarray}
where $\Delta\omega_{R}=(|\omega_{R}|)_{m}-(|\omega_{R}|)_{m-1}$ and $\Delta\omega_{I}=(|\omega_{I}|)_{n}-(|\omega_{I}|)_{n-1}$.
According to the Bohr-Sommerfeld quantization
rule, substituting this vibrational frequency into the Clausius relation $\Delta Q=\hbar \Delta\omega=T\Delta S$ yields the entropy spectrum
\begin{eqnarray}
S=\frac{\hbar \Delta\omega}{T}\cdot n=\frac{\sqrt{1+\kappa^{2}}}{T \tilde{l}_{c}}\hbar\cdot n,
\end{eqnarray}
with spacing
\begin{eqnarray}
\Delta S=\frac{\sqrt{1+\kappa^{2}}}{T \tilde{l}_{c}}\hbar,
\end{eqnarray}
where $T$ is the Hawking temperature of the black hole. From this expression, we see that the spacing depends on both the temperature of the black hole and the properties of the unstable circular null geodesics. Using the Bekenstein-Hawking entropy/area law $S=A/4$, the area spectrum can also be obtained, which likewise depends on the temperature and the null geodesics.
To close this section, we remark that the area and entropy spectra obtained from such a view of the QNM frequencies are valid not only for a static and spherically symmetric black hole, but also for a stationary and axis-symmetric black hole with the asymptotics (\ref{asymptotics}). In order to work out more details about the spectra, we will apply this method to different black holes in the remainder of this paper.
\section{Static and spherically symmetric black holes}
\label{Static}
In this section, we would like to apply the above method to study the entropy spectrum of the static and spherically symmetric black holes in any dimension.
\subsection{$d$-dimensional Schwarzschild black holes}
Here, let us consider a specific example, the $d$-dimensional Schwarzschild-Tangherlini metric, which is
\begin{eqnarray}
&&ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}d\Omega^{2}_{(d-2)},\label{Sch}\\
&&f(r)=1-\bigg(\frac{r_{h}}{r}\bigg)^{d-3},
\end{eqnarray}
where $d\Omega^{2}_{(d-2)}$ is the line element of the unit
$(d-2)$-dimensional sphere $S^{(d-2)}$ with the usual
angular coordinates $\phi \in [0,\;2\pi]$ and $\theta_{i} \in [0,\;\pi]$ ($i=1,2,...,d-3$). The parameter $r_{h}$ denotes the outer horizon radius of the black hole, which is related to the ADM mass of the spacetime as $M=\frac{(d-2)A_{(d-2)}r_{h}^{d-3}}{16\pi}$ with $A_{(d-2)}$ the area of a unit $(d-2)$-dimensional sphere. The temperature of this black hole is calculated as
\begin{eqnarray}
T^{\text{sch}}=\frac{\partial_{r}f(r)}{4\pi}\bigg|_{r_{h}}
=\frac{(d-3)}{4\pi r_{h}}.
\end{eqnarray}
Note that $T^{\text{sch}}$ is proportional to the inverse of the horizon radius, which means a supermassive black hole has a low temperature while having a strong gravitational effect. This result holds in any dimension of the spacetime.
Next, let us examine the null geodesics of this spacetime. We only restrict our attention to the equatorial hyperplane defined by $\theta_{i}=\pi/2$ with $i=1,..., (d-3)$. Then the metric reduces to
\begin{eqnarray}
ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}d\phi^{2},
\end{eqnarray}
Comparing with (\ref{metric0}), we easily get
\begin{eqnarray}
A=f(r),\;B=f^{-1}(r),\; C=r^{2},\; D=0.\label{RNreduce}
\end{eqnarray}
Thus, by solving equation (\ref{Rcir}), the radius of the unstable circular geodesics will be obtained
\begin{eqnarray}
r_{c}=\bigg(\frac{(d-1)}{2}\bigg)^{1/(d-3)}r_{h}.
\end{eqnarray}
It is worth noting that we always have $r_{c}>r_{h}$ for any value of $d\geq 4$. When $d=4$, we get the usual result that $r_{c}=\frac{3}{2}r_{h}$. When $d\rightarrow \infty$, the unstable circular orbit radius approaches the horizon. A straightforward calculation shows
\begin{eqnarray}
\tilde{l}_{c}&=&\sqrt{\frac{2}{(d-3)}}\bigg(\frac{d-1}{2}\bigg)^{\frac{(d-1)}{2(d-3)}}r_{h},\\
\kappa&=&\sqrt{d-3}.
\end{eqnarray}
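These closed forms can be cross-checked against the general expressions of section \ref{geodesic}. A standalone sympy sketch for a sample dimension (illustrative, not from the paper; any $d\geq4$ works):

```python
import sympy as sp

d = 6                                      # sample dimension
r, rh = sp.symbols('r r_h', positive=True)
f = 1 - (rh/r)**(d - 3)
A, B, C = f, 1/f, r**2                     # equatorial metric functions, D=0

rc = sp.Rational(d - 1, 2)**sp.Rational(1, d - 3)*rh     # claimed radius

# circular-orbit condition A*C' - A'*C = 0, evaluated at rc
cond = (A*sp.diff(C, r) - sp.diff(A, r)*C).subs(r, rc)
lc = sp.sqrt(C/A).subs(r, rc)
kappa2 = ((A*sp.diff(C, r, 2) - sp.diff(A, r, 2)*C)/(2*A*B)).subs(r, rc)

# closed form for the impact parameter quoted in the text
lc_text = (sp.sqrt(sp.Rational(2, d - 3))
           * sp.Rational(d - 1, 2)**sp.Rational(d - 1, 2*(d - 3))*rh)
```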
Thus, the vibrational frequency deduced from (\ref{varomega}) reads
\begin{eqnarray}
\Delta \omega=\sqrt{\frac{(d-2)(d-3)}{2}}\bigg(\frac{d-1}{2}\bigg)^{\frac{d-1}{2(3-d)}}\frac{1}{r_{h}}.
\end{eqnarray}
Then, the spacing of the entropy spectrum is
\begin{eqnarray}
\Delta S=2^{\frac{2d-5}{d-3}}\pi\bigg(d-1\bigg)^{\frac{d-1}{2(3-d)}}
\sqrt{\frac{d-2}{d-3}}\hbar.
\end{eqnarray}
From this equation, it is clear that the spacing of the entropy spectrum depends on the dimension $d$ of the spacetime. When $d\rightarrow \infty$, the spacing monotonically decreases to zero. We list the spacing of the entropy spectrum for $d=4-10$ in table \ref{Table1}. One easily learns from the table that, for $d=4$, the spacing is the largest, with $\Delta S\approx 2.1774\pi\hbar>2\pi\hbar$. However, this spacing becomes smaller than $2\pi\hbar$ when $d\geq 5$. This result is therefore very different from those obtained in previous work from the usual QNM frequencies, where the spacing is $2\pi\hbar$ and independent of the dimension $d$. Moreover, the spacing $\Delta S$ falls below the lower bound obtained from the tunneling method, as suggested in \cite{BanerjeeVagenas}, for $d\geq 151$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$d$& 4& 5& 6& 7& 8& 9& 10\\
\hline
$\frac{\Delta S}{2\pi\hbar}$
& 1.0887 & 0.8660 & 0.7610 & 0.6936 & 0.6446 & 0.6062 & 0.5749 \\
\hline
\end{tabular}
\caption{The spacing of the black hole entropy spectrum for different values of the dimension $d$ of the spacetime.}\label{Table1}
\end{center}
\end{table}
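The tabulated values follow directly from the spacing formula; a minimal numeric sketch (the helper name is ours, not the paper's):

```python
import math

def spacing_over_2pi(d):
    """Delta S / (2*pi*hbar) for the d-dimensional Schwarzschild black hole."""
    return (2**((2*d - 5)/(d - 3))
            * (d - 1)**((d - 1)/(2*(3 - d)))
            * math.sqrt((d - 2)/(d - 3))/2)

# reproduces the entries of the table above for d = 4..10
table = {d: round(spacing_over_2pi(d), 4) for d in range(4, 11)}
```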
\subsection{$d$-dimensional Reissner-Nordstr\"{o}m black holes}
It is natural to speculate that the presence of charge will dramatically change the spacing of the entropy spectrum of a Schwarzschild black hole. It is therefore worthwhile to examine the charged Reissner-Nordstr\"{o}m black hole in any dimension.
The line element describing this spacetime is in the same form as (\ref{Sch}), while with different metric function
\begin{eqnarray}
f(r)=1-\frac{2m}{r^{d-3}}+\frac{q^{2}}{r^{2(d-3)}},\label{RN}
\end{eqnarray}
where the parameters $m$ and $q$ are linked to the mass $M$ and charge $Q$ as
\begin{eqnarray}
m=\frac{8\pi M}{(d-2)A_{d-2}}, \quad
q^{2}=\frac{8\pi Q^{2}}{(d-2)(d-3)A_{d-2}}.
\end{eqnarray}
Solving $f(r)=0$, we get the horizons located at
\begin{eqnarray}
r_{\pm}^{d-3}=m\pm\sqrt{m^{2}-q^{2}}.
\end{eqnarray}
From this equation, we note that there may exist two horizons for $q/m<1$, one horizon for $q/m=1$, or no horizon for $q/m>1$. Here we are only interested in the nonextremal black hole with outer horizon $r_{h}=r_{+}$. The temperature of the outer horizon is
\begin{eqnarray}
T^{\text{RN}}=\frac{(d-3)}{2\pi}r_{h}^{2-d}(m-r_{h}^{3-d}q^{2}).
\end{eqnarray}
In the equatorial hyperplane, the reduced metric has the form of (\ref{RNreduce}) with $f(r)$ given by (\ref{RN}).
The radius of the unstable circular orbit is calculated as
\begin{eqnarray}
r_{c}^{d-3}=\frac{(d-1)m+\sqrt{(d-1)^{2}m^{2}-4(d-2)q^{2}}}{2}.
\end{eqnarray}
For $d=4$, we have $r_{c}=\frac{3M+\sqrt{9M^{2}-8Q^{2}}}{2}$, which implies that in the range $1<\frac{Q^{2}}{M^{2}}\leq 9/8$, the naked singularity located at $r=0$ is surrounded by a photon sphere rather than a horizon. A simple calculation shows
\begin{eqnarray}
\tilde{l}_{c}&=&\frac{r_{c}^{d-2}}{\sqrt{r_{c}^{2(d-3)}-2mr_{c}^{d-3}+q^{2}}},\\
\kappa&=&r_{c}^{3-d}\sqrt{r_{c}^{2(d-3)}+(d-1)(d-4)mr_{c}^{d-3}-(d-2)(2d-7)q^{2}}.
\end{eqnarray}
Then the spacing of the entropy spectrum is
\begin{eqnarray}
\Delta S=&&\frac{2\pi r_{h}^{d-2} \sqrt{1-2r_{c}^{(3-d)}m+r_{c}^{2(3-d)}q^{2}}}{(d-3)r_{c}(m-r_{h}^{3-d}q^{2})}
\nonumber\\
&&\times\sqrt{2-(d-2)(2d-7)r_{c}^{2(3-d)}q^{2}+(d-4)(d-1)r_{c}^{3-d}m}\hbar.
\end{eqnarray}
For a clear view, we plot this spacing in Figure \ref{PRN} for different values of $d$ and of the dimensionless charge parameter $q/m$. When $q=0$, the result reduces to the Schwarzschild black hole case. Although the spacing depends on the charge $q/m$, the figure shows that, for fixed $d$, it varies very slowly at small charge. When the dimensionless charge parameter $q/m$ approaches one, the black hole approaches an extremal one, and the spacing blows up, mainly due to the vanishing temperature of the extremal black hole.
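Both closed-form ingredients used here can be verified numerically. The following standalone Python sketch (ours; it assumes the standard photon-sphere condition $r f'(r)=2f(r)$ for a static, spherically symmetric metric function $f$) checks that $r_{c}$ solves this condition, that the spacing reduces to the Schwarzschild values of Table \ref{Table1} as $q\to 0$, and that it grows toward extremality:

```python
import math

def f_rn(r, d, m, q):
    """Metric function of the d-dimensional Reissner-Nordstrom solution."""
    return 1.0 - 2.0 * m / r ** (d - 3) + q * q / r ** (2 * (d - 3))

def df_rn(r, d, m, q):
    """Radial derivative f'(r)."""
    return 2.0 * (d - 3) * (m / r ** (d - 2) - q * q / r ** (2 * d - 5))

def rc_rn(d, m, q):
    """Radius of the unstable circular null orbit, from the closed form above."""
    x = ((d - 1) * m + math.sqrt((d - 1) ** 2 * m * m - 4 * (d - 2) * q * q)) / 2.0
    return x ** (1.0 / (d - 3))

def dS_rn_over_2pi(d, m, q):
    """Entropy-level spacing Delta S / (2 pi hbar) from the RN expression above."""
    rh = (m + math.sqrt(m * m - q * q)) ** (1.0 / (d - 3))
    rc = rc_rn(d, m, q)
    num = rh ** (d - 2) * math.sqrt(1.0 - 2.0 * rc ** (3 - d) * m
                                    + rc ** (2 * (3 - d)) * q * q)
    den = (d - 3) * rc * (m - rh ** (3 - d) * q * q)
    root = math.sqrt(2.0 - (d - 2) * (2 * d - 7) * rc ** (2 * (3 - d)) * q * q
                     + (d - 4) * (d - 1) * rc ** (3 - d) * m)
    return num * root / den
```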
\begin{figure}
\centerline{\includegraphics[width=8cm]{RN.eps}}
\caption{The spacing of the entropy spectrum for the Reissner-Nordstr\"{o}m black holes as a function of the dimensionless charge parameter $q/m$ for $d=4,5,6,7,8,9,10$ from top to bottom.}\label{PRN}
\end{figure}
\section{Stationary and axisymmetric black holes}
\label{Stationary}
In the preceding section, we considered static, spherically symmetric black holes. The result shows that the spacing of the entropy spectrum depends on the dimension $d$ and on the dimensionless charge $q/m$ of the black hole. At small charge, the spacing is larger than $2\pi\hbar$ for $d=4$ and smaller than $2\pi\hbar$ for $d\geq 5$, which is very different from the result obtained through the usual QNM frequencies. In this section, we generalize this method to stationary, axisymmetric black holes: the four-dimensional Kerr black hole and the $d$-dimensional Myers-Perry black holes.
\subsection{Kerr black holes}
In Boyer-Lindquist coordinates, the four-dimensional Kerr metric is expressed as
\begin{eqnarray}
ds^{2}=&-&\bigg(1-\frac{2Mr}{\rho^{2}}\bigg)dt^{2}+\frac{\rho^{2}}{\Delta}dr^{2}+\rho^{2}d\theta^{2}
+\bigg(a^{2}+r^{2}+\frac{2a^{2}Mr\sin^{2}\theta}{\rho^{2}}\bigg)\sin^{2}\theta d\phi^{2}\nonumber\\
&-&\frac{4aMr\sin^{2}\theta}{\rho^{2}}dtd\phi,
\end{eqnarray}
where $\Delta=r^{2}-2Mr+a^{2}$ and $\rho^{2}=r^{2}+a^{2}\cos^{2}\theta$. Here $M$ and $a$ are the mass and spin of the black hole, respectively. It is well known that this black hole has two horizons determined by $\Delta=0$, i.e., $r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}$. When $a/M<1$, there is the outer horizon $r_{h}=r_{+}$ endowed with a temperature
\begin{eqnarray}
T^{\text{Kerr}}=\frac{\sqrt{M^{2}-a^{2}}}{4\pi M(M+\sqrt{M^{2}-a^{2}})}.
\end{eqnarray}
In the equatorial plane, the metric is of the general form (\ref{metric0}) with metric functions given by
\begin{eqnarray}
A(r)=1-\frac{2M}{r},\quad B(r)=\frac{r^{2}}{\Delta},\quad
C(r)=r^{2},\quad D(r)=\frac{4aM}{r}.
\end{eqnarray}
Note that as the spin $a$ approaches 0, the Kerr black hole reduces to a Schwarzschild black hole, and $D(r)$ vanishes in the equatorial plane. Solving (\ref{Rcir}), we get the circular orbit radius
\begin{eqnarray}
r_{c}=2M\bigg(1+\cos\bigg(\frac{2}{3}\arccos(\mp|a|/M)\bigg)\bigg).
\end{eqnarray}
Here, the upper sign is for the corotating orbit, while the lower sign is for the counterrotating orbit. It is also worth noting that the counterrotating orbit is located farther from the horizon than the corotating orbit. We can then calculate the parameters $\tilde{l}_{c}$ and $\kappa$,
\begin{eqnarray}
\tilde{l}_{c}&=&\frac{2aM-r_{c}\sqrt{\Delta}_{c}}{2M-r_{c}},\\
\kappa&=&\sqrt{\frac{\Delta_{c}[r^{2}_{c}(r_{c}-2M)+4aM(a-\sqrt{\Delta_{c}})]}
{r_{c}^{3}(r_{c}-2M)^{2}}},
\end{eqnarray}
with $\Delta_{c}=\Delta(r_{c})$. Then we get the spacing of the entropy spectrum
\begin{eqnarray}
\Delta S=&&\frac{\sqrt{\Delta_c \left(4 a^2 M+r_c^2 \left(r_c-2M\right)\right)
-4 a M \Delta _c^{3/2}+r_c^3 \left(r_c-2 M\right)^2}}
{r_c^{3/2}\sqrt{M^2-a^2}
\left(r_c \sqrt{\Delta_c}-2aM\right)} \nonumber\\
&&\times 4\pi M \left(M+\sqrt{M^2-a^2}\right)\hbar.
\end{eqnarray}
The behavior of this spacing is shown in Figure \ref{Pkerr}. If we restrict our attention to black holes far from extremality, i.e., in the neighborhood of $a/M=0$, one easily obtains a spacing larger than $2\pi\hbar$. When $a/M=0$, it reduces to the Schwarzschild black hole case in $d=4$. On the other hand, for a Kerr black hole with spin $|a|$ there exist two unstable circular null geodesics, the counterrotating and the corotating orbit, which implies that there are two sets of QNM frequencies and that two different entropy spectra will be obtained. The entropy spectrum obtained in this way thus seems to depend on the sign of the orbital angular momentum of the massless particle. However, this is not the case. In \cite{Zimmerman}, the authors clearly showed that there exist two distinct sets of QNM frequencies for nearly extremal Kerr black holes. Here we conjecture that this result also holds for an arbitrary rotating black hole, and that these two distinct sets can be interpreted in terms of the counterrotating and corotating orbits, respectively. The entropy spectra obtained from the counterrotating and corotating orbits are thus in fact calculated from two distinct sets of QNM frequencies, and they bear no relation to the sign of the orbital angular momentum. From Figure \ref{Pkerr}, one easily finds that the spacing associated with the counterrotating orbit is smaller than that of the corotating one, which may be understood as the counterrotating orbit coming closer to the minimum spacing of the black hole entropy.
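These statements can be checked numerically. The following standalone Python sketch (ours) verifies the photon-orbit radius against the standard closed-form condition $r^{3/2}-3Mr^{1/2}\pm 2a\sqrt{M}=0$ for equatorial circular photon orbits, the reduction of the spacing to the Schwarzschild value $\Delta S\approx 2.1774\pi\hbar$ at $a=0$, and the ordering of the two branches seen in Figure \ref{Pkerr}:

```python
import math

def r_photon(M, a, sign):
    """Equatorial circular photon orbit radius in Kerr;
    sign = -1: corotating (upper sign), sign = +1: counterrotating."""
    return 2.0 * M * (1.0 + math.cos((2.0 / 3.0) * math.acos(sign * abs(a) / M)))

def dS_kerr_over_2pi(M, a, sign):
    """Spacing Delta S / (2 pi hbar) from the closed-form Kerr expression above."""
    rc = r_photon(M, a, sign)
    dc = rc * rc - 2.0 * M * rc + a * a
    num = math.sqrt(dc * (4.0 * a * a * M + rc * rc * (rc - 2.0 * M))
                    - 4.0 * a * M * dc ** 1.5 + rc ** 3 * (rc - 2.0 * M) ** 2)
    den = rc ** 1.5 * math.sqrt(M * M - a * a) * (rc * math.sqrt(dc) - 2.0 * a * M)
    return 2.0 * M * (M + math.sqrt(M * M - a * a)) * num / den

# Residuals of the photon-orbit condition r^{3/2} - 3 M r^{1/2} +/- 2 a sqrt(M) = 0
bardeen = []
for a in (0.0, 0.3, 0.7, 0.99):
    rco, rcn = r_photon(1.0, a, -1.0), r_photon(1.0, a, +1.0)
    bardeen.append(rco ** 1.5 - 3.0 * rco ** 0.5 + 2.0 * a)
    bardeen.append(rcn ** 1.5 - 3.0 * rcn ** 0.5 - 2.0 * a)
```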
\begin{figure}
\centerline{\includegraphics[width=8cm]{Kerr.eps}}
\caption{Behavior of the spacing of the entropy spectrum for the Kerr black hole.}\label{Pkerr}
\end{figure}
\subsection{$d$-dimensional Myers-Perry black holes}
In this subsection, we would like to generalize this method to these black holes with spin in higher dimensions.
In \cite{Myers}, Myers and Perry found a general black hole solution in asymptotically flat spacetime in $d$ dimensions with all $\lfloor(d-1)/2\rfloor$ spin parameters $a_{i}$ nonvanishing. Here, we consider only the simple case of a single nonvanishing spin, $a_{1}=a\neq 0$, for which the metric can be written as
\begin{eqnarray}
ds^{2}=&-&dt^{2}+\frac{\mu}{r^{d-5}\rho^{2}}(dt+a\sin^{2}\theta d\phi)^{2}
+\frac{\rho^{2}}{\Delta}dr^{2}+\rho^{2}d\theta^{2}\nonumber\\
&+&(r^{2}+a^{2})\sin^{2}\theta d\phi^{2}
+r^{2}\cos^{2}\theta d\Omega^{2}_{(d-4)},
\end{eqnarray}
with $\rho^{2}=r^{2}+a^{2}\cos^{2}\theta$ and $\Delta=r^{2}+a^{2}-\frac{\mu}{r^{d-5}}$. The parameters $\mu$ and $a$ are related to the black hole mass and angular momentum in the following forms
\begin{eqnarray}
M=\frac{(d-2)\Omega_{d-2}}{16\pi G}\mu,\quad
J=\frac{2}{d-2}Ma.
\end{eqnarray}
The outer horizon is determined as the largest root of $\Delta(r)=0$, which gives
\begin{eqnarray}
r_{h}^{2}+a^{2}-\frac{\mu}{r_{h}^{d-5}}=0.
\end{eqnarray}
For $d=4$, it reduces to the Kerr black hole with $r_{h}=M+\sqrt{M^{2}-a^{2}}$, and for $d=5$ we easily get $r_{h}=\sqrt{\mu-a^{2}}$. Thus in these two cases the spin is bounded by a maximum value $a_{max}$. When $d\geq 6$, however, there is an interesting result that the spin $a$ of the black hole can be arbitrarily large; such solutions are referred to as ``ultra-spinning'' black holes. Here we only consider the small-$a$ limit. The black hole temperature is calculated as
\begin{eqnarray}
T^{\text{MP}}=\frac{1}{4\pi}\left(\frac{2r_{h}^{d-4}}{\mu}+\frac{d-5}{r_{h}}\right).
\end{eqnarray}
In the equatorial hyperplane, the metric is of the general form (\ref{metric0}) with metric functions given by
\begin{eqnarray}
A(r)=1-r^{3-d}\mu,\quad B(r)=\frac{r^{2}}{\Delta},\quad
C(r)=r^{2}+a^{2}(1+r^{3-d}\mu),\quad D(r)=\frac{2a\mu}{r^{d-3}}.
\end{eqnarray}
The radius of the circular orbit is determined by (\ref{Rcir}), which leads to
\begin{eqnarray}
2 r^{2d-6}-(d+1)\mu r^{d-3}+2a\mu(d-3)(\sqrt{\Delta}-a)r^{d-5}+(d-1)\mu^{2}=0.
\end{eqnarray}
Generally, this equation cannot be solved analytically; however, for small $d$ the exact result can be obtained. For $d=4$, it reduces to the Kerr case, and for $d=5$ we get $r_{c}=\sqrt{2}\sqrt{\mu\pm a\sqrt{\mu}}$ for the counterrotating and corotating orbits, respectively. The parameters $\tilde{l}_{c}$ and $\kappa$ are
\begin{eqnarray}
\tilde{l}_{c}&=&\frac{-a\mu+r^{d-3}_{c}\sqrt{\Delta}_{c}}{r_{c}^{d-3}-\mu},\\
\kappa&=&\frac{\Delta_c^{1/2}}{\sqrt{2}r_{c}(r_{c}^{d-3}-\mu)}\nonumber\\
&\times& \sqrt{2a(d-3)(d-2) \mu
(a-\sqrt{\Delta_c})
r_c^{d-5}+\left(d^2-5d+2\right)
\mu r_c^{d-3}+2 r_c^{2 d-6}-(d-4)(d-1)\mu^2}.\nonumber\\
\end{eqnarray}
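The $d=5$ orbit radii quoted above can be spot-checked numerically (a standalone Python sketch of ours). With the sign of $\sqrt{\Delta}$ as written in the orbit equation, the branch $r_{c}=\sqrt{2}\sqrt{\mu-a\sqrt{\mu}}$ solves it exactly; the other branch corresponds to the opposite sign of $\sqrt{\Delta}$, i.e., to the opposite sense of rotation:

```python
import math

def orbit_lhs_d5(mu, a, rc):
    """LHS of the circular-orbit equation above, specialized to d = 5."""
    delta = rc * rc + a * a - mu
    return (2.0 * rc ** 4 - 6.0 * mu * rc ** 2
            + 4.0 * a * mu * (math.sqrt(delta) - a) + 4.0 * mu ** 2)

def rc_d5(mu, a):
    """Branch solving the equation with the sign of sqrt(Delta) as written."""
    return math.sqrt(2.0) * math.sqrt(mu - a * math.sqrt(mu))

# At a = 0 this reduces to the Tangherlini photon sphere, r_c^2 = 2 mu
residuals = [orbit_lhs_d5(1.0, a, rc_d5(1.0, a)) for a in (0.0, 0.05, 0.2)]
```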
Finally, in the small $a$ limit, we can expand the spacing of the entropy spectrum as
\begin{eqnarray}
\Delta S&=&\frac{2\pi\mu r_{h}\sqrt{(1-r_{c}^{3-d}\mu)(8+2(d-1)(d-4)\mu r_{c}^{3-d})}}{r_{c}(2r_{h}^{d-3}+(d-5)\mu)}\hbar\nonumber\\
&+&\frac{2\sqrt{2}\pi \mu^2 r_h r_c^{2(2-d)}
\left((d-4)(d-1)\mu-(2+(d-5) d) r_c^{d-3}\right)}
{\sqrt{(d-4)(d-1)\mu r_c^{3-d}+4} \left(2 r_h^{d-3}+(d-5)\mu\right)}a\hbar+\mathcal{O}(a^{2}).
\end{eqnarray}
The first term reduces to the Schwarzschild black hole case when $\mu$, $r_{c}$, and $r_{h}$ take their Schwarzschild values. The second term is a correction arising from the spin $a$ of the black hole. The spacing for $d\geq 5$ is plotted against the dimensionless spin parameter $a/\mu^{1/(d-3)}$ in Figure \ref{PMP}. It shares the same behavior as the Kerr black hole. As $d$ increases, the spacing decreases, and the curves flatten in high dimensions. In summary, we still obtain a spacing smaller than $2\pi\hbar$ in the small-$a$ limit for $d\geq 5$.
\begin{figure}
\centerline{\includegraphics[width=8cm]{MP.eps}}
\caption{The spacing of the entropy spectrum for the Myers-Perry black holes as a function of the dimensionless spin parameter $a/\mu^{1/(d-3)}$ for $d=5,6,7,8,9,10$ from top to bottom.}\label{PMP}
\end{figure}
\section{Summary}
In this paper, we show that the QNM frequencies of an asymptotically flat black hole in any dimension can be understood from the null geodesics. The physical picture is that the QNM frequencies can be interpreted in terms of massless particles trapped at the unstable circular null geodesics and slowly leaking out to infinity. The real part of the QNM frequencies is found to be the inverse of the impact parameter $\tilde{l}_{c}$ measured at the unstable circular null geodesics, and the imaginary part is $\kappa/\tilde{l}_{c}$.
Then, using the new physical interpretation of the QNM frequencies proposed by Maggiore, we calculate the quantum spectrum of the entropy for different black holes following Hod's method. The results can be summarized as follows:
(i) The spacing of the entropy spectrum depends on the dimension $d$ of the spacetime. It decreases with $d$, and the largest spacing is $\Delta S\approx 2.1774\pi\hbar$, for $d=4$. When $d\geq 151$, the lower bound on the spacing $\Delta S$ suggested in \cite{BanerjeeVagenas} is violated, and when $d\rightarrow\infty$ the spacing vanishes.
(ii) For a black hole far from extremality, the spacing of the entropy spectrum is found to be larger than $2\pi\hbar$ for $d=4$ and smaller than $2\pi\hbar$ for $d\geq 5$, which is very different from the values obtained in previous work \cite{Vagenasjhep2008,Medvedcqg2008} (and references therein) using the usual QNM frequencies.
In summary, since the spacing of the entropy spectrum in this paper is expressed in terms of the Hawking temperature and the null geodesics of the black hole, this method can be extended to other stationary black holes in non-Einstein gravity, and we conjecture that our result is universal.
\section*{Acknowledgement}
This work was supported by the National Natural Science Foundation of China (Grant No. 11205074 and Grant No. 11075065), and the Huo Ying-Dong Education Foundation of the Chinese Ministry of Education (Grant No. 121106).
\section{Introduction}
\label{intr}
The existence of sterile neutrinos has not been proven yet. However, their existence is suggested by various scenarios which can explain the detected differences of masses of the three known light neutrinos. Furthermore, most such scenarios suggest that the neutrinos are Majorana fermions. Since Majorana fermions, unlike Dirac fermions, are their own antiparticles, they can participate not just in lepton number conserving (LNC) processes, but also in lepton number violating (LNV) processes. LNV processes are appreciable if the Majorana neutrinos are sufficiently massive. Various scenarios suggest that mixing of sterile neutrinos with the known Standard Model (SM) flavor neutrinos leads to neutrinos which are significantly heavier than the known light neutrinos. The main questions facing neutrino physics beyond the SM are: (1) Are the neutrinos Majorana or Dirac? (2) How heavy are the new mass eigenstates $N$? (3) What are the values of the heavy-light mixing parameters $U_{\ell N}$, i.e., the mixing parameters of a massive $N$ neutrino with the SM flavor neutrinos $\nu_{\ell}$ ($\ell=e, \mu, \tau$)?
Whether the neutrinos are Majorana particles can be determined in neutrino experiments with various LNV processes. Among the most known such experiments are those with the neutrinoless double beta decay ($0\nu\beta\beta$) \cite{0NBB}, rare LNV decays of mesons \cite{RMDs,HKS,Atre,CDKK,CDK,CKZ,symm,Quint,Mand} and of $\tau$ lepton \cite{GKS,tau}, and specific scattering processes \cite{scatt1,scatt2,scatt3,KimLHC}.
Observation of neutrino oscillations \cite{Pontecorvo} can determine (small) mass differences between neutrinos, and thus prove that the neutrinos have mass. The neutrino oscillations of the SM flavor neutrinos have been observed \cite{oscatm,oscsol,oscnuc}. If sterile neutrinos exist and their mixing with the SM flavor neutrinos leads to almost degenerate heavy neutrinos, such heavy neutrinos can also oscillate among themselves \cite{Boya,CKZosc}.
The neutrino sector can also have CP violation \cite{oscCP}, which plays an important role in the leptogenesis \cite{Lepto}. Resonant CP violation of neutrinos appears when we have two heavy almost degenerate neutrinos. It can appear in scattering processes \cite{Pilaftsis}, in semileptonic rare meson decays \cite{CKZ2,DCK,symm}, and in purely leptonic rare meson decays \cite{CKZ,symm}. Among the models with almost degenerate heavy neutrinos are the neutrino minimal standard model ($\nu$MSM) \cite{nuMSM,Shapo} and low-scale seesaw models \cite{lsseesaw}.
As mentioned, extended sectors of Majorana neutrinos appear in models which explain the very small masses of the three light neutrinos. Such models are the original seesaw models \cite{seesaw} (the heavy neutrinos there have masses $M_N \gg 1$ TeV), and seesaw models with heavy neutrinos with lower masses $M_N \sim 1$ TeV \cite{WWMMD}, and $M_N \sim 1$ GeV \cite{scatt2,nuMSM,HeAAS,KS,AMP,NSZ}. In such models, the heavy-light mixing parameters are in general less suppressed than in the original seesaw models.
In this work, we will work in a generic framework where we have one massive neutrino $N$ which mixes with the SM flavor neutrinos $\nu_{\ell}$ ($\ell=e, \mu, \tau$). We will evaluate the rates of some rare decays of $B$ mesons at the future \textcolor{black}{LHCb upgrade and}
Belle-II experiments, namely, the LNV decays with one on-shell Majorana massive neutrino $N$: $B \to (D^{(*)}) \mu^{\pm} N \to (D^{(*)}) \mu^{\pm} \mu^{\pm} X^{\mp}$, where $X^{\mp}$ is either a pion $\pi^{\mp}$, or a lepton-neutrino pair $\ell \nu_{\ell}$
\textcolor{black}{(this latter option only at Belle-II).}
This work is based on our previous work \cite{BdecBII}, but now the obtained results are more specific and directly applicable to the calculation of the sensitivity limits on the $|U_{\mu N}|^2$ mixing parameter, as a function of mass $M_N$, achievable
\textcolor{black}{at LHCb upgrade and at Belle-II,}
where the projected total number of produced $B$ mesons is
\textcolor{black}{$4.8 \times 10^{12}$ \cite{Sheldon} and $5 \times 10^{10}$ \cite{Belle-II}, respectively.}
Unlike in Ref.~\cite{BdecBII}, here we do not make any assumptions on the size of the probability $P_N$ of the produced neutrino $N$ to decay within the detector (in \cite{BdecBII} we assumed that either $P_N \approx 1$ or $P_N \ll 1$).
A detailed explanation of this issue is given in Sec.~\ref{sec:PN} and in Appendix \ref{appENpp}.
\textcolor{black}{Similar analyses of the upper bounds on $|U_{\mu N}|^2$ from the absence of the rare $B$-meson decays were made for the Belle-I measurements in Ref.~\cite{BelleUB}, and for the LHCb (run I) measurements in Refs.~\cite{LHCba1} and a reconsideration thereof in Ref.~\cite{LHCba2}.}
In Sec.~\ref{sec:decw} we summarize the framework in which we work, and the decay widths which are relevant for the decay rates that we want to obtain. The summarized formulas for these decay widths are presented in subsections of Sec. II and Appendix \ref{appNall}. In Sec.~\ref{sec:PN} we present the probability $P_N$ of the produced on-shell neutrino $N$ to decay within the detector, and the integration formulas which account for the effect of this probability on the effective rate for the mentioned LNV decays. In Appendix \ref{appENpp} we present detailed formulas for the Lorentz factors and the probabilities $P_N$ for the various considered decays.
In Sec.~\ref{sec:num} we present the results of the numerical evaluations, in the form of the obtained sensitivity limits on $|U_{\mu N}|^2$, as a function of $M_N$, that can be achieved by
\textcolor{black}{LHCb upgrade and}
Belle-II experiments. In Sec.~\ref{sec:concl} we discuss the obtained results and make conclusions.
\section{Decay widths for $B \to (D^{(*)}) \ell_1 N \to (D^{(*)}) \ell_1 \ell_2 X$}
\label{sec:decw}
Here we briefly summarize the results of Ref.~\cite{BdecBII} for the decay widths of the rare decays of $B$ mesons via on-shell sterile neutrino $N$. The on-shellness of $N$ implies the factorization
\begin{equation}
\Gamma \left( B \to (D^{(*)}) \ell_1 N \to (D^{(*)}) \ell_1 \ell_2 X \right)
= \Gamma \left( B \to (D^{(*)}) \ell_1 N \right) \frac{\Gamma(N \to \ell_2 X)}{\Gamma_N} \ .
\label{fact}
\end{equation}
Here, $\ell_j$ ($j=1,2$) are generic labels for the charged leptons; later we will use $\ell_1 = \ell_2 = \mu^{\pm}$. The second factor on the right-hand side of Eq.~(\ref{fact}) represents the effect of the subsequent decay of the produced heavy on-shell neutrino $N$ into $\ell_2 + X$, where $X$ will be either a charged pion $\pi$, or a leptonic pair $\ell_3 \nu_3$.
The first factor in Eq.~(\ref{fact}), $\Gamma \left( B \to (D^{(*)}) \ell_1 N \right)$, is well known when no $D^{(*)}$ meson is produced; when $D^{(*)}$ is produced, this factor was obtained and evaluated in Ref.~\cite{BdecBII}. The formulas for this factor are summarized in subsections A--C, together with some (here relevant) differential decay widths for the decays $B \to (D^{(*)}) \ell_1 N$. The second factor in Eq.~(\ref{fact}) includes the exclusive decay width $\Gamma(N \to \ell_2 X)$, which is well known for both $X=\pi$ and $X=\ell_3 \nu_3$; the corresponding expressions are summarized in subsections D--E. The denominator of the second factor in Eq.~(\ref{fact}), namely the total decay width $\Gamma_N$ of neutrino $N$, was evaluated numerically in \cite{CKZ2} for the case of Majorana $N$ (cf.~also \cite{symm} for the case of $N$ Majorana or Dirac); the expression for $\Gamma_N$ and its evaluation are presented in Appendix \ref{appNall}.
All the mentioned decay widths involve the (suppressed) heavy-light mixing parameters $U_{\ell N}$ ($\ell=e, \mu, \tau$) appearing in the coupling of the heavy $N$ neutrino with the $W$ boson and $\ell$ lepton. These parameters are part of the (extended) Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, i.e., the light flavor neutrino states $\nu_{\ell}$ (with flavor $\ell = e, \mu, \tau$) are the following combination of the three light mass eigenstates $\nu_k$ and of the heavy mass eigenstate $N$:
\begin{equation}
\nu_{\ell} = \sum_{k=1}^3 U_{\ell \nu_k} \nu_k + U_{\ell N} N \ .
\label{mixN}
\end{equation}
\subsection{Decay width $\Gamma(B \to \ell_1 N)$}
\label{subs:GBellN}
The decay width for the process $B \to \ell_1 N$, where $\ell_1$ is a charged lepton ($\ell_1=e, \mu, \tau$) and $N$ is a (massive) neutrino, is
\begin{equation}
\Gamma(B^{\pm} \to \ell_1^{\pm} N) = |U_{\ell_1 N}|^2
{\overline{\Gamma}} (B^{\pm} \to \ell_1^{\pm} N) \ ,
\label{GBlN}
\end{equation}
where the canonical decay width ${\overline{\Gamma}}$, i.e., the part without the heavy-light mixing factor, is
\begin{equation}
{\overline{\Gamma}} (B^{\pm} \to \ell_1^{\pm} N) =
\frac{G_F^2 f_{B}^2}{8 \pi} |V_{u b}|^2 M_{B}^3 \lambda^{1/2}(1,y_N,y_1)
\left[ (1 - y_N) y_N + y_1 (1 + 2 y_N - y_1) \right] \ .
\label{bGBlN}
\end{equation}
Here, $G_F$ is the Fermi coupling constant ($G_F = 1.1664 \times 10^{-5} \ {\rm GeV}^{-2}$), $f_{B}$ is the decay constant of the $B$-meson,
$V_{u b}$ is the corresponding CKM matrix element, and in the mass-dependent parts the following notations are used:
\begin{subequations}
\label{notylam}
\begin{eqnarray}
y_N &=& \frac{M_N^2}{M_B^2} \ , \qquad y_1 = \frac{M_1^2}{M_B^2} \ ,
\label{yNyell}
\\
\lambda^{1/2}(x,y,z) &=& \left[ x^2 + y^2 + z^2 - 2 x y - 2 y z - 2 z x \right]^{1/2}.
\label{lam}
\end{eqnarray}
\end{subequations}
We denote the mass of $\ell_1$ as $M_1$ throughout this paper. We use the values $|V_{ub}|=0.00409$ and $f_B=0.1871$ GeV \cite{PDG2016} (cf.~also \cite{Kangetal}).
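A minimal numerical sketch (ours; besides the constants quoted above it assumes the PDG values $M_{B}\simeq 5.279$ GeV and $M_{\mu}\simeq 0.1057$ GeV) evaluates the canonical width (\ref{bGBlN}) for $\ell_1=\mu$ and checks that it is positive and vanishes as $M_N \to M_B - M_1$:

```python
import math

# Constants quoted in the text (GeV units); M_B and the muon mass are
# PDG inputs assumed here for illustration
GF, fB, Vub = 1.1664e-5, 0.1871, 0.00409
MB, M1 = 5.279, 0.10566

def lam_half(x, y, z):
    """The kinematic function lambda^{1/2} of Eq. (lam)."""
    return math.sqrt(x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x))

def gamma_bar(MN):
    """Canonical width bar-Gamma(B -> mu N) of Eq. (bGBlN), in GeV."""
    yN, y1 = (MN / MB) ** 2, (M1 / MB) ** 2
    return (GF ** 2 * fB ** 2 / (8.0 * math.pi) * Vub ** 2 * MB ** 3
            * lam_half(1.0, yN, y1)
            * ((1.0 - yN) * yN + y1 * (1.0 + 2.0 * yN - y1)))
```

The full width is then the mixing factor $|U_{\mu N}|^2$ times this canonical width, as in Eq.~(\ref{GBlN}).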
\subsection{Decay width $\Gamma(B \to D \ell_1 N)$}
\label{subs:GBDellN}
We now consider the decay $B \to D \ell_1 N$, cf.~Fig.~\ref{FigBDW}.
\begin{figure}[htb]
\centering\includegraphics[width=90mm]{FigBDW.pdf}
\caption{Schematic representation of the decay $B^- \to D^0 \ell_1^- {\bar N}$.}
\label{FigBDW}
\end{figure}
For the general case of a massive neutrino $N$ (and a massive charged lepton $\ell_1$), the general expression for the decay width of the process $B \to D \ell_1 N$ was obtained in Ref.~\cite{BdecBII}. There, the differential decay width $d \Gamma(B^- \to D^0 \ell_1^- {\bar N})/d q^2$ was presented.
Here we present the ``more differential'' decay width $d \Gamma(B^- \to D^0 \ell_1^- {\bar N})/(d q^2 d \Omega_{{\hat q}'} d \Omega_{{\hat p}_1})$, which is needed for the calculation of the effective (true) branching ratio $ {\rm Br}_{\rm eff}(B \to D \ell_1 N \to D \ell_1 \ell_2 X)$ of Eq.~(\ref{Breff}).
The differential of the decay width is
\begin{equation}
d \Gamma(B^- \to D^0 \ell^-_1 N) = \frac{1}{2 M_B} \frac{1}{(2 \pi)^5} d_3 |{\cal T}|^2 \ ,
\label{BDNl}
\end{equation}
where $d_3$ is the differential for the three-particle final phase space
\begin{eqnarray}
d_3 & = & \frac{d^3 {\vec p}_D}{2 E_{D}({\vec p}_D)}
\frac{d^3 {\vec p}_1}{2 E_{\ell_1}({\vec p}_1)}
\frac{d^3 {\vec p}_N}{2 E_N({\vec p}_N)}
\delta^{(4)} \left( p_B - p_D - p_1 - p_N \right)
\nonumber\\
& = & d_2 \left( B^- \to D^0(p_D) W^*(q) \right) d q^2 d_2 \left( W^*(q) \to \ell_1(p_1) {\overline N}(p_N) \right) \ ,
\label{d3}
\end{eqnarray}
and the two-particle final phase space differentials are
\begin{subequations}
\label{d2}
\begin{eqnarray}
d_2(B^- \to D^0(p_D) W^*(q)) & = & \frac{1}{8} \lambda^{1/2} \left( 1, \frac{M_D^2}{M_B^2}, \frac{q^2}{M_B^2} \right) d \Omega_{{\hat q}'},
\label{d2BDW}
\\
d_2(W^*(q) \to \ell^-_1(p_1) {\overline N}(p_N)) & = & \frac{1}{8} \lambda^{1/2} \left( 1, \frac{M_1^2}{q^2}, \frac{M_N^2}{q^2} \right) d \Omega_{{\hat p}_1}.
\label{d2WellN}
\end{eqnarray}
\end{subequations}
The decay amplitude ${\cal T}$ appearing in Eq.~(\ref{BDNl}) is
\begin{equation}
{\cal T} = U_{\ell_1 N} V_{c b} \frac{G_F}{\sqrt{2}} \left[{\overline u}_{(\ell_1)}(p_1) \gamma_{\mu} (1 - \gamma_5) v_{(N)}(p_N) \right]
\left\{ \left[ (2 p_D + q)^{\mu} - \frac{(M_B^2-M_D^2)}{q^2} q^{\mu} \right] F_1(q^2) + \frac{(M_B^2-M_D^2)}{q^2} q^{\mu} F_0(q^2) \right\},
\label{TBDNl}
\end{equation}
where $F_1(q^2)$ and $F_0(q^2)$ are the form factors of the
$B$-$D$ transition, and we consider them to be real.
In terms of the reduced canonical decay amplitude ${\widetilde {\cal T}}$ defined via the relation
\begin{equation}
| {\cal T} |^2 = |U_{\ell_1 N}|^2 |V_{c b}|^2 G_F^2 |{\widetilde {\cal T}}|^2,
\label{tildeT}
\end{equation}
we can then express the differential decay width (\ref{BDNl}) in a somewhat more explicit form
\begin{eqnarray}
\frac{d \Gamma(B^- \to D^0 \ell^-_1 N)}{d q^2 d \Omega_{{\hat q}'} d \Omega_{{\hat p}_1}} & = & \frac{|U_{\ell_1 N}|^2 |V_{c b}|^2 G_F^2}{4 M_B (4 \pi)^5} |{\widetilde {\cal T}}|^2 \lambda^{1/2} \left( 1, \frac{M_D^2}{M_B^2}, \frac{q^2}{M_B^2} \right) \lambda^{1/2} \left( 1, \frac{M_1^2}{q^2}, \frac{M_N^2}{q^2} \right),
\label{dGBDlN}
\end{eqnarray}
where ${\hat p}_1$ is the direction of $\ell^-_1$ in the $W^*$-rest frame ($\Sigma$), and ${\hat q}'$ is the direction of $W^{*-}$ ($\ell^-_1 N$ pair) in the $B$-rest frame ($\Sigma'$). We use the expression (\ref{TBDNl}) for the decay amplitude, and calculate the square of its absolute magnitude, $|{\cal T}|^2$, summing over the helicities of the final particles. We then obtain for the square of the reduced canonical amplitude, $|{\widetilde {\cal T}}|^2$, introduced via Eq.~(\ref{tildeT}), the following expression:
\begin{eqnarray}
|{\widetilde {\cal T}}|^2 &=&
\frac{1}{q^2} F_1(q^2) (F_0(q^2)-F_1(q^2)) \left(M_B^2-M_D^2\right)
{\bigg [} M_1^2 \left(-4 (\cos \theta_1 |{\vec p}_D| |{\vec p}_N|+p_D^0
p_1^0)+2 M_B^2-2 M_D^2+2 M_N^2-q^2\right)
\nonumber\\
&&
+M_N^2 \left(4 (\cos \theta_1 |{\vec p}_D|
|{\vec p}_N|+p_D^0
p_1^0)-M_N^2+q^2\right)-M_1^4 {\bigg ]}
\nonumber\\
&&
-\frac{1}{2} F_1(q^2)^2 {\bigg [}M_1^2
\left( 8 (\cos \theta_1 |{\vec p}_D|
|{\vec p}_N|+p_D^0 p_1^0)-4 M_B^2-2 M_N^2+3
q^2\right)
-8 M_B^2 (\cos \theta_1 |{\vec p}_D| |{\vec p}_N|+
p_D^0 p_1^0)
\nonumber\\
&&
+M_D^2 \left(8 (\cos \theta_1 |{\vec p}_D|
|{\vec p}_N|+p_D^0 p_1^0)-4 M_N^2+4 q^2\right)
-8 M_N^2 (\cos \theta_1 |{\vec p}_D| |{\vec p}_N|+p_D^0 p_1^0)
+8q^2 (\cos \theta_1 |{\vec p}_D| |{\vec p}_N|+p_D^0 p_1^0)
\nonumber\\
&&
+16 (\cos \theta_1 |{\vec p}_D| |{\vec p}_N|+p_D^0
p_1^0)^2+M_1^4+M_N^4-M_N^2
q^2 {\bigg ]}
\nonumber\\
&&
+\frac{1}{2 (q^2)^2}
(F_0(q^2)-F_1(q^2))^2 \left(M_B^2-M_D^2\right)^2 \left[-M_1^4+M_1^2 \left(2
M_N^2+q^2\right)-M_N^4+M_N^2 q^2\right] \ .
\label{tildeTexp}
\end{eqnarray}
Here, we denoted as $p_1$ the 4-momentum of $\ell_1$ (in $W^{*}$-rest frame $\Sigma$), and $\theta_1$ is the angle between ${\vec p}_1$ and ${\hat z} = {\hat q}'$. We also used in Eq.~(\ref{tildeTexp}) the following quantities:
\begin{subequations}
\label{vecpo0}
\begin{eqnarray}
|{\vec p}_N| = |{\vec p}_1| & = & \frac{1}{2} \sqrt{q^2} \; \lambda^{1/2} \left( 1, \frac{M_1^2}{q^2}, \frac{M_N^2}{q^2} \right),
\label{vecpN}
\\
|{\vec p}_D| & = & \frac{M_B^2}{2 \sqrt{q^2}} \; \lambda^{1/2} \left( 1, \frac{M_D^2}{M_B^2}, \frac{q^2}{M_B^2} \right) = \frac{M_B |{\vec {q'}}|}{\sqrt{q^2}},
\label{vecpD}
\\
p_1^0 & = & \frac{1}{2 \sqrt{q^2}} (q^2 - M_N^2 + M_1^2),
\label{p10}
\\
p_D^0 & = & \frac{1}{2 \sqrt{q^2}} (M_B^2 - M_D^2 - q^2).
\label{pD0}
\end{eqnarray}
\end{subequations}
They are all in the $W^*$-rest frame ($\Sigma$). We can see from these expressions that the absolute square of the reduced canonical amplitude, $|{\widetilde {\cal T}}|^2$, and thus the differential decay width (\ref{dGBDlN}), depend only on the variables $q^2$ (square of the invariant mass of $W^{*}$) and on $\cos \theta_1$ [note: $d \Omega_{{\hat p}_1} = d \phi_1 d (\cos \theta_1$)]. They are thus independent of the direction ${\hat q}'$, i.e., of the direction of $W^*$ in the $B$-rest frame.
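The quantities (\ref{vecpo0}) can be checked for internal consistency: each final-state particle must be on shell in the $W^*$-rest frame. The following standalone Python sketch (ours; the PDG masses and the illustrative choices $M_N=2$ GeV, $q^2=5\ {\rm GeV}^2$ are assumptions) verifies $E_i^2-|{\vec p}_i|^2=M_i^2$ for $\ell_1$, $D$, and $N$:

```python
import math

# PDG masses (GeV) and the illustrative choices M_N = 2, q^2 = 5 are assumptions
MB, MD, M1, MN, q2 = 5.27934, 1.86484, 0.10566, 2.0, 5.0

def lam_half(x, y, z):
    return math.sqrt(x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x))

sq = math.sqrt(q2)
pN = 0.5 * sq * lam_half(1.0, M1 * M1 / q2, MN * MN / q2)               # |p_N| = |p_1|
pD = MB * MB / (2.0 * sq) * lam_half(1.0, (MD / MB) ** 2, q2 / MB ** 2)  # |p_D|
p10 = (q2 - MN * MN + M1 * M1) / (2.0 * sq)                              # E of ell_1
pD0 = (MB * MB - MD * MD - q2) / (2.0 * sq)                              # E of D
EN = sq - p10                                                            # E of N
```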
The expressions (\ref{tildeTexp}) and (\ref{dGBDlN}) contain two form factors, $F_1$ and $F_0$. The form factor $F_1(q^2)$ is well known \cite{CLN} and can be expressed in terms of a variable $w(q^2)$
\begin{subequations}
\label{wz}
\begin{eqnarray}
w & = & \frac{(M_B^2 + M_D^2 - q^2)}{2 M_B M_D} \ ,
\label{w}
\\
z(w) & = & \frac{\sqrt{w+1} - \sqrt{2}}{\sqrt{w+1} + \sqrt{2}} \ .
\label{z}
\end{eqnarray}
\end{subequations}
According to Ref.~\cite{CLN}, $F_1(q^2)$ has the following power expansion in $z(w(q^2))$:
\begin{equation}
F_1(q^2) = F_1(w=1) \left( 1 - 8 \rho^2 z(w) + (51 \rho^2 - 10) z(w)^2 - (252 \rho^2 - 84) z(w)^3 \right) \ .
\label{CLNF1}
\end{equation}
The free parameters $\rho^2$ and $F_1(w=1)$ in this expansion have been determined by the Belle Collaboration, Ref.~\cite{Belle1}
\begin{subequations}
\label{rho2F1max}
\begin{eqnarray}
\rho^2 &= & 1.09 \pm 0.05 \ ,
\label{rho2}
\\
|V_{cb}| F_1(w=1) &=& (48.14 \pm 1.56) \times 10^{-3} \ .
\label{F1max}
\end{eqnarray}
\end{subequations}
In our numerical evaluations we use the above central values, and $|V_{cb}|=40.12 \times 10^{-3}$ \cite{Belle1}.
The form factor $F_0(q^2)$ is not well known at present, principally because it contributes only when the masses of $N$ and $\ell_1$ are not very small as can be deduced from Eq.~(\ref{tildeTexp}).\footnote{It can be checked that the difference $[ |{\widetilde {\cal T}}|^2 - |{\widetilde {\cal T}}|^2(F_0 \mapsto 0)]$ is zero when $M_1=M_N=0$.} In our case $F_0(q^2)$ is important, and it was presented in Ref.~\cite{BdecBII} by using the truncated expansion for $F_0$ in powers of $w(q^2) - 1$ of Ref.~\cite{CaNeu}
\begin{subequations}
\label{F0}
\begin{eqnarray}
F_0(q^2) & = & \frac{(M_B+M_D)}{2 \sqrt{M_B M_D}}
\left[ 1 - \frac{q^2}{(M_B+M_D)^2} \right] f_0(w(q^2)) \ ,
\label{F0a}
\\
f_0(w) & \approx & f_0(w=1) \left[ 1 - {\rho}_0^2 (w - 1) + (0.72 \rho_0^2 - 0.09) (w - 1)^2 \right] \ .
\label{F0b}
\end{eqnarray}
\end{subequations}
Here, we use the value $f_0(w=1) \approx 1.02$ \cite{NeuPRps,CaNeu} which is obtained from the heavy quark limit. The other free parameter $\rho_0$ in Eq.~(\ref{F0b}) is then fixed by requiring the absence of spurious poles at $q^2=0$: $F_0(0)=F_1(0)$ ($\approx 0.690$). This yields the value $\rho_0^2 \approx 1.102$ and $(0.72 \rho_0^2 - 0.09) \approx 0.704$.
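A quick numerical cross-check (a standalone Python sketch of ours; it uses the CLN-type sign convention $f_0(w)=f_0(1)\left[1-\rho_0^2(w-1)+\dots\right]$ and PDG masses as assumed inputs) reproduces the quoted numbers: $F_1(0)\approx 0.690$, and the condition $F_0(0)=F_1(0)$ indeed holds for $\rho_0^2\approx 1.102$:

```python
import math

MB, MD = 5.27934, 1.86484          # PDG B^- and D^0 masses (assumed inputs, GeV)
F1_at_w1 = 48.14e-3 / 40.12e-3     # F_1(w=1) from |V_cb| F_1(w=1) and |V_cb|
rho2 = 1.09

def w_of_q2(q2):
    return (MB * MB + MD * MD - q2) / (2.0 * MB * MD)

def z_of_w(w):
    return (math.sqrt(w + 1.0) - math.sqrt(2.0)) / (math.sqrt(w + 1.0) + math.sqrt(2.0))

def F1(q2):
    """CLN expansion of F_1(q^2) with the Belle parameters quoted above."""
    z = z_of_w(w_of_q2(q2))
    return F1_at_w1 * (1.0 - 8.0 * rho2 * z + (51.0 * rho2 - 10.0) * z * z
                       - (252.0 * rho2 - 84.0) * z ** 3)

def F0(q2, rho02, f0_at_w1=1.02):
    """F_0(q^2) with f_0(w) = f_0(1)[1 - rho_0^2 (w-1) + (0.72 rho_0^2 - 0.09)(w-1)^2]."""
    w = w_of_q2(q2)
    f0 = f0_at_w1 * (1.0 - rho02 * (w - 1.0)
                     + (0.72 * rho02 - 0.09) * (w - 1.0) ** 2)
    return ((MB + MD) / (2.0 * math.sqrt(MB * MD))
            * (1.0 - q2 / (MB + MD) ** 2) * f0)
```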
For the curves of these form factors $F_1(q^2)$ and $F_0(Q^2)$, as a function of positive $q^2$, we refer to Ref.~\cite{BdecBII} (Fig.~2 there).
\subsection{Decay width $\Gamma(B \to D^{*} \ell_1 N)$}
\label{subs:GBDstellN}
We now consider the decay $B \to D^{*} \ell_1 N$, i.e., the same type of decay as in the previous Sec.~\ref{subs:GBDellN}, but now instead of the (pseudoscalar) $D$ meson we have vector meson $D^{*}$. The expressions for the (differential) decay widths are now more complicated, because $D^{*}$ is a vector particle. For the case of massive neutrino $N$ (and massive lepton $\ell_1$), these expressions were obtained in Ref.~\cite{BdecBII}, using the approach of Ref.~\cite{GiSi}. The needed differential decay width, after summation over helicities and over the three polarizations of $D^{*}$, turns out to be \cite{BdecBII}
\begin{eqnarray}
\frac{d \Gamma}{dq^2 d \Omega_{{\hat q}'} d \Omega_{{\hat p}_1}} & = &
\frac{1}{8^4 \pi^5} \frac{|U_{\ell_1 N}|^2 |V_{cb}|^2 G_F^2}{M_B^2}
{\overline \lambda}^{1/2} 2 |{\vec {q'}}| q^2 {\bigg \{}
\left[2 \left(1 - \frac{(M_N^2+M_1^2)}{q^2} \right) -
{\overline \lambda} \sin^2 \theta_1 \right]
\left( ({\bar H_{+1}})^2 + ({\bar H_{-1}})^2 \right)
\nonumber\\
&& - \eta \; 2 {\overline \lambda}^{1/2} \cos \theta_1
\left( ({\bar H_{+1}})^2 - ({\bar H_{-1}})^2 \right)
+ 2 \left[ \left(1 - \frac{(M_N^2+M_1^2)}{q^2} \right) - {\overline \lambda} \cos^2 \theta_1 \right] ({\bar H^3})^2
\nonumber\\
&& +
4 \left( \frac{M_N^2-M_1^2}{q^2} \right) {\overline \lambda}^{1/2} \cos \theta_1 {\bar H^0}{\bar H^3}
+ 2 \left[ - \left(\frac{M_N^2-M_1^2}{q^2} \right)^2 + \frac{(M_N^2+M_1^2)}{q^2}
\right] ({\bar H^0})^2 {\bigg \}} \ .
\label{dGdq2domdom2}
\end{eqnarray}
Here, the factor $\eta=\pm 1$ appears at one term proportional to $\cos \theta_1$; $\eta=+1$ if $\ell^-_1$ is produced, and $\eta=-1$ if $\ell^+_1$ is produced.\footnote{The quantity (\ref{dGdq2domdom2}) is written in Ref.~\cite{BdecBII} in Eq.~(C19) for the case $\eta=-1$; the quantity $d \Gamma/d q^2$ used there is independent of $\eta$.} Further, the following notations are used:
\begin{subequations}
\label{notBDstellN}
\begin{eqnarray}
|{\vec {q'}}| &=& \frac{1}{2} M_B \lambda^{1/2} \left( 1, \frac{ M_{D^*}^2}{M_B^2}, \frac{q^2}{M_B^2} \right),
\label{magq}
\\
{\overline \lambda} &\equiv& \lambda \left( 1, \frac{M_1^2}{q^2}, \frac{M_N^2}{q^2} \right) \ ,
\label{blam}
\end{eqnarray}
\end{subequations}
and ${\bar H}_{\pm 1}$, ${\bar H^0}$ and ${\bar H^3}$ are expressions containing the form factors $V$ and $A_j$ ($j=0,1,2,3$) appearing in the $B$-$D^{*}$ matrix elements
\begin{subequations}
\label{bHs}
\begin{eqnarray}
{\bar H_{\pm 1}} &=& (M_B+M_{D^*}) A_1(q^2) \mp V(q^2) \frac{|{\vec {q'}}| 2 M_B}{(M_B+M_{D^*})} \ ,
\label{bHpm}
\\
{\bar H^3} & = & \frac{M_B^2}{2 M_{D^*} \sqrt{q^2}} \left[
(M_B+M_{D^*}) A_1(q^2) \left(1 - \frac{(q^2+M_{D^*}^2)}{M_B^2} \right)
- 4 A_2(q^2) \frac{|{\vec {q'}}|^2}{(M_B+M_{D^*})} \right] \ ,
\label{bH3}
\\
{\bar H^0} & = & \frac{M_B |{\vec {q'}}|}{M_{D^*} \sqrt{q^2}} \left[
(M_B+M_{D^*}) A_1(q^2) - (M_B- M_{D^*}) A_2(q^2) + 2 M_{D^*} \left( A_0(q^2) - A_3(q^2) \right) \right] \ .
\label{bH0}
\end{eqnarray}
\end{subequations}
The form factor $A_3$ is not independent; it is a linear combination of $A_1$ and $A_2$:
\begin{equation}
A_3(q^2) = \frac{(M_B+M_{D^*})}{2 M_{D^*}} A_1(q^2) -
\frac{(M_B-M_{D^*})}{2 M_{D^*}} A_2(q^2) \ .
\label{A3}
\end{equation}
Among the other four form factors, three ($V$, $A_1$ and $A_2$) are well known; they were recently determined to high precision \cite{Belle2} in terms of the parametrization of Ref.~\cite{CLN}
\begin{subequations}
\label{A1VA2}
\begin{eqnarray}
A_1(q^2) & = & \frac{1}{2} R_* (w+1) F_*(1) \left[ 1 - 8 \rho_*^2 z(w) + (53 \rho_*^2 - 15) z(w)^2 - (231 \rho_*^2 - 91) z(w)^3 \right] \ ,
\label{A1}
\\
V(q^2) & = & A_1(q^2) \frac{2}{R_*^2 (w+1)} \left[ R_1(1) - 0.12 (w-1) + 0.05 (w-1)^2 \right] \ ,
\label{V}
\\
A_2(q^2) & = & A_1(q^2) \frac{2}{R_*^2 (w+1)} \left[ R_2(1) + 0.11 (w-1) - 0.06 (w-1)^2 \right] \ .
\label{A2}
\end{eqnarray}
\end{subequations}
The notation $R_* = 2 \sqrt{ M_B M_{D^*}}/(M_B+M_{D^*})$ is used here, and $w=w(q^2)$ and $z=z(w(q^2))$ are given in Eqs.~(\ref{wz}) (with $M_D \mapsto M_{D^{*}}$). The values of the three parameters in Eqs.~(\ref{A1VA2}) were determined in Ref.~\cite{Belle2}
\begin{subequations}
\label{paramsDst}
\begin{eqnarray}
\rho_*^2 & = & 1.214(\pm 0.035) \ , \qquad 10^3 F_*(1) |V_{cb}| = 34.6(\pm 1.0) \ ,
\label{rhostFst}
\\
R_1(1) & = &1.401(\pm 0.038) \ , \qquad R_2(1) = 0.864(\pm 0.025) \ .
\label{R1R2}
\end{eqnarray}
\end{subequations}
We use the central values in the present work.
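Using the central values above, the following sketch evaluates $A_1$, $V$ and $A_2$ numerically; the masses $M_B\approx 5.279$ GeV and $M_{D^*}\approx 2.010$ GeV and the definition $w(q^2)=(M_B^2+M_{D^*}^2-q^2)/(2 M_B M_{D^*})$ are assumptions of the sketch.

```python
import math

M_B, M_Dst = 5.279, 2.010          # GeV; m_{D*} ~ 2.010 GeV as quoted in the text
rho2st = 1.214                     # rho_*^2, central value
Fst1 = 34.6e-3 / 40.12e-3          # F_*(1) from 10^3 F_*(1)|V_cb| = 34.6 and |V_cb|
R1_1, R2_1 = 1.401, 0.864
Rst = 2.0 * math.sqrt(M_B * M_Dst) / (M_B + M_Dst)   # R_* notation

def w_of_q2(q2):
    return (M_B**2 + M_Dst**2 - q2) / (2.0 * M_B * M_Dst)

def z_of_w(w):
    return (math.sqrt(w + 1.0) - math.sqrt(2.0)) / (math.sqrt(w + 1.0) + math.sqrt(2.0))

def A1(q2):
    """Eq. (A1) with central parameter values."""
    w = w_of_q2(q2)
    z = z_of_w(w)
    return 0.5 * Rst * (w + 1.0) * Fst1 * (1.0 - 8.0 * rho2st * z
                                           + (53.0 * rho2st - 15.0) * z**2
                                           - (231.0 * rho2st - 91.0) * z**3)

def V(q2):
    """Eq. (V)."""
    w = w_of_q2(q2)
    return A1(q2) * 2.0 / (Rst**2 * (w + 1.0)) * (R1_1 - 0.12 * (w - 1.0) + 0.05 * (w - 1.0)**2)

def A2(q2):
    """Eq. (A2)."""
    w = w_of_q2(q2)
    return A1(q2) * 2.0 / (Rst**2 * (w + 1.0)) * (R2_1 + 0.11 * (w - 1.0) - 0.06 * (w - 1.0)**2)

q2_max = (M_B - M_Dst) ** 2        # zero-recoil point, where w = 1
```

At zero recoil ($q^2 = q^2_{\rm max}$, $w=1$) the expansion reduces to $A_1 = R_* F_*(1)$, and $A_1$ decreases monotonically toward $q^2=0$.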
The form factor $A_0$, on the other hand, is not well known. It is relevant only if the masses of $N$ or $\ell_1$ are nonnegligible, which is the case here. Employing the heavy quark limit relations between $A_1$ and $A_2$, the relation (\ref{A3}) gives a relation between $A_2$ and $A_3$. Using this relation in the heavy quark limit relation $A_0 \approx A_2$, we then obtain the following approximation for the form factor $A_0$ in terms of $A_3$:
\begin{equation}
A_0(q^2) \approx A_3(q^2)/\left[1 - \frac{q^2}{2 M_{D^*} (M_B+M_{D^*})} \right]
= \frac{(M_B+M_{D^*})^2}{\left( 2 M_{D^*} (M_B+M_{D^*}) - q^2 \right)}
\left( 1 - \frac{(M_B-M_{D^*})}{(M_B+M_{D^*})} \frac{A_2(q^2)}{A_1(q^2)} \right) A_1(q^2)
\ ,
\label{A0appr}
\end{equation}
This expression satisfies the condition $A_0(0)=A_3(0)$, which is obligatory since it reflects the absence of a pole at $q^2=0$ in the $B$-$D^{*}$ matrix elements. For any further details on these points we refer to Ref.~\cite{BdecBII}.
\subsection{Decay width for $N \to \ell^{\pm} \pi^{\mp}$}
\label{subs:Nellpi}
The decay width $\Gamma(N \to \ell^{\pm} \pi^{\mp})$ is proportional to the heavy-light mixing factor $|U_{\ell N}|^2$
\begin{equation}
\Gamma(N \to \ell^{\pm} \pi^{\mp}) = |U_{\ell N}|^2 {\overline{\Gamma}}(N \to \ell^{\pm} \pi^{\mp}) \ .
\label{GNlPi}
\end{equation}
Here, the canonical decay width ${\overline{\Gamma}}$ is (e.g., cf.~Refs.~\cite{CDKK,CKZ2,symm,CKZosc})
\begin{equation}
{\overline{\Gamma}}(N \to \ell^{\pm} \pi^{\mp}) =
\frac{1}{16 \pi} |V_{u d}|^2 G_F^2 f_{\pi}^2 M_N^3 \lambda^{1/2}(1, x_{\pi}, x_{\ell})
\left[ 1 - x_{\pi} - 2 x_{\ell} - x_{\ell} (x_{\pi}-x_{\ell}) \right] \ ,
\label{bGNlPi}
\end{equation}
where $f_{\pi}$ ($\approx 0.1304$ GeV) is the pion decay constant,
and we use the notations
\begin{equation}
x_{\pi} = \frac{M_{\pi}^2}{M_N^2} \ , \qquad x_{\ell}=\frac{M_{\ell}^2}{M_N^2} .
\label{xPixell}
\end{equation}
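A minimal numerical sketch of Eq.~(\ref{bGNlPi}) is given below; the constants $G_F$, $|V_{ud}|$, $M_\pi$ and $M_\mu$ are standard values assumed here, not quoted in the text.

```python
import math

G_F = 1.1663787e-5    # Fermi constant in GeV^-2 (assumed standard value)
V_ud = 0.9737         # |V_ud| (assumed standard value)
f_pi = 0.1304         # GeV, pion decay constant as quoted in the text
M_pi, M_mu = 0.13957, 0.10566   # GeV (assumed standard values)

def lam(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * a * b - 2 * a * c - 2 * b * c

def Gamma_bar_N_to_mu_pi(M_N):
    """Canonical width bar{Gamma}(N -> mu pi) of Eq. (bGNlPi), in GeV."""
    x_pi, x_l = (M_pi / M_N) ** 2, (M_mu / M_N) ** 2
    return (1.0 / (16.0 * math.pi) * V_ud**2 * G_F**2 * f_pi**2 * M_N**3
            * math.sqrt(lam(1.0, x_pi, x_l))
            * (1.0 - x_pi - 2.0 * x_l - x_l * (x_pi - x_l)))
```

For $M_N$ well above the $\mu\pi$ threshold, the kinematic factors are close to unity and the width scales approximately as $M_N^3$.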
\subsection{Decay width for $N \to \ell_2 \ell_3 \nu$}
\label{subs:Nellellnu}
If the heavy neutrino $N$ is produced by the decay $B \to (D^{(*)}) \ell_1^{\pm} N$, the neutrino can decay into various leptonic channels $\ell_2 \ell_3 \nu$.
We can have the leptonic decays of $N$ of the lepton number conserving (LNC) type $N \to \ell_2^{\mp} \ell_3^{\pm} \nu_{\ell_3}$, and of the lepton number violating (LNV) type $N \to \ell_3^{\pm} \ell_2^{\mp} \nu_{\ell_2}$
\begin{subequations}
\label{GNllnu}
\begin{eqnarray}
\Gamma^{\rm (LNC)}(N \to \ell_2^{\mp} \ell_3^{\pm} \nu_{\ell_3}) & = & |U_{\ell_2 N}|^2 {\overline{\Gamma}} (N \to \ell_2 \ell_3 \nu) \ ,
\label{GNllnu.LC}
\\
\Gamma^{\rm (LNV)}(N \to \ell_3^{\pm} \ell_2^{\mp} \nu_{\ell_2}) & = & |U_{\ell_3 N}|^2 {\overline{\Gamma}} (N \to \ell_2 \ell_3 \nu) \ .
\label{GNllnu.LV}
\end{eqnarray}
\end{subequations}
Here, the charged leptons can be $\mu, e$ or $\tau$. The canonical decay widths ${\overline{\Gamma}}(N \to \ell_2 \ell_3 \nu)$ have in the general case (with masses of leptons) the following form \cite{CDKK,CKZ,symm}:
\begin{equation}
{\overline \Gamma}(N \to \ell_2 \ell_3 \nu) = \frac{G_F^2 M_N^5}{192 \pi^3}
{\cal F}(x_2,x_3) \ ,
\label{bGNllnu}
\end{equation}
where we denoted $x_j = M_j^2/M_N^2$ ($M_j$ is the mass of $\ell_j$), and the function ${\cal F}$ is \cite{CKZ}
\begin{eqnarray}
\lefteqn{
{\cal F}(x_2,x_3) =
{\Bigg \{}
\lambda^{1/2} (1, x_2, x_3) {\Big [} (1 + x_2) (1 -8 x_2 + x_2^2) -
x_3 (7 - 12 x_2 + 7 x_2^2)
}
\nonumber\\
&&
- 7 x_3^2 (1 + x_2) + x_3^3 {\Big ]}
- 24 (1 - x_3^2) x_2^2 \ln 2
\nonumber\\
&&
+ 12 {\bigg [} - x_2^2 (1 - x_3^2) \ln x_2
+ (2 x_2^2 -x_3^2 (1 + x_2^2)) \ln (1 + x_2
+ \lambda^{1/2} (1, x_2, x_3) - x_3)
\nonumber\\
&&
+ x_3^2 (1 - x_2^2)
\ln \left( \frac{(1 - x_2)^2 + (1-x_2) \lambda^{1/2} (1, x_2, x_3) - x_3 (1+x_2)}{x_3}
\right) {\bigg ]}
{\Bigg \}} .
\label{calF}
\end{eqnarray}
The function ${\cal F}$ is symmetric under the exchange of the two arguments. When one lepton is massless (or almost massless, i.e., lepton $e$), this expression reduces to the well-known result
\begin{equation}
{\cal F}(x,0) = {\cal F}(0,x)= f(x) = 1 - 8 x + 8 x^3 - x^4 - 12 x^2 \ln x \ .
\label{fx}
\end{equation}
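The reduction can be checked numerically: the sketch below implements ${\cal F}(x_2,x_3)$ of Eq.~(\ref{calF}) and verifies that it approaches $f(x)$ when either argument is taken very small.

```python
import math

def lam(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * a * b - 2 * a * c - 2 * b * c

def calF(x2, x3):
    """Function F(x2, x3) of Eq. (calF); requires 0 < x2, x3 (use tiny values
    instead of exact zeros, since x2 and x3 appear inside logarithms)."""
    s = math.sqrt(lam(1.0, x2, x3))
    out = s * ((1.0 + x2) * (1.0 - 8.0 * x2 + x2**2)
               - x3 * (7.0 - 12.0 * x2 + 7.0 * x2**2)
               - 7.0 * x3**2 * (1.0 + x2) + x3**3)
    out -= 24.0 * (1.0 - x3**2) * x2**2 * math.log(2.0)
    out += 12.0 * (-x2**2 * (1.0 - x3**2) * math.log(x2)
                   + (2.0 * x2**2 - x3**2 * (1.0 + x2**2)) * math.log(1.0 + x2 + s - x3)
                   + x3**2 * (1.0 - x2**2)
                     * math.log(((1.0 - x2)**2 + (1.0 - x2) * s - x3 * (1.0 + x2)) / x3))
    return out

def f(x):
    """Massless-lepton limit f(x), Eq. (fx)."""
    return 1.0 - 8.0 * x + 8.0 * x**3 - x**4 - 12.0 * x**2 * math.log(x)
```

Terms proportional to $x_2^2 \ln x_2$ or $x_3^2 \ln x_3$ vanish in the limit, so a small regulator such as $10^{-12}$ reproduces $f(x)$ to high accuracy.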
\section{Decay probability of heavy neutrino in the detector; effective branching ratio}
\label{sec:PN}
If all the neutrinos $N$ decay within the detector with probability one, then the decay width Eq.~(\ref{fact}) is also the effective (true) decay width, and the effective branching ratio is obtained by dividing it by the decay width of the $B$ meson $\Gamma_B$. However, since the neutrino $N$ is weakly coupled to SM particles, it often does not decay within the detector and, consequently, the mentioned decays $B \to (D^{(*)}) \ell_1 \ell_2 X$ are not observed although $N$ may be produced in the $B$-decays. The effect of the decay of $N$ can be accounted for by multiplying the above decay width Eq.~(\ref{fact}) by the decay (nonsurvival) probability $P_N$ of $N$ within the detector
\begin{equation}
P_N = 1 - \exp \left[ - \frac{L}{\tau_N \gamma_N \beta_N} \right]
= 1 - \exp \left[ - \frac{L \Gamma_N}{\gamma_N \beta_N} \right]
\label{PN}
\end{equation}
where $L$ is the maximum possible flight length of $N$ within the detector, $\beta_N$ is the velocity of $N$ in the lab frame, $\tau_N = 1/\Gamma_N$ is the lifetime of $N$ in its rest frame, and $\gamma_N =(1 - \beta_N^2)^{-1/2}$ is the Lorentz time dilation factor \cite{CDK,CKZ,CKZ2,symm,scatt3,CERN-SPS,commKim,Gronau}.
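A minimal sketch of Eq.~(\ref{PN}) follows; to use $L$ in meters and $\Gamma_N$ in GeV, the conversion factor $\hbar c \approx 0.1973\times 10^{-15}$ GeV$\cdot$m is inserted (an assumption of the sketch, since the text works in natural units). For small exponents, $P_N \approx L \Gamma_N/(\hbar c \, \beta_N \gamma_N)$.

```python
import math

HBAR_C = 0.1973269804e-15   # GeV * m (assumed standard conversion factor)

def P_N(L_m, Gamma_N_GeV, beta_gamma):
    """Decay (nonsurvival) probability of N inside the detector, Eq. (PN).

    L_m: maximum flight length in meters; Gamma_N_GeV: total width of N in GeV;
    beta_gamma: the lab-frame Lorentz factor product beta_N * gamma_N.
    """
    return 1.0 - math.exp(-L_m * Gamma_N_GeV / (HBAR_C * beta_gamma))
```

For the tiny widths relevant here, $P_N$ grows essentially linearly with both $L$ and $\Gamma_N$, which underlies the $1/\sqrt{NL}$ scaling of the sensitivity limits discussed in the conclusions.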
For Belle-II, the $B$ meson pairs will be produced in SuperKEKB in central collisions of $e^-(p_1)$ and $e^+(p_2)$, which will produce a moving $\Upsilon(4S)$, the latter decaying into a $B$ meson pair (either $B^+ B^-$ or $B^0 {\bar B}^0$). In the lab frame, the $e^{\pm}$ have the momenta
\begin{equation}
p_j = \left( E_j,0,0,(-1)^{j+1} E_j \right) \qquad (j=1,2),
\label{E1E2}
\end{equation}
with the values $E_1=7.007$ GeV and $E_2 = 3.993$ GeV. This then produces the invariant mass $(p_1+p_2)^2 = M^2_{\Upsilon(4 S)}$, where $M_{\Upsilon(4 S)}=10.579$ GeV \cite{PDG2016}. The kinetic energy of the produced $\Upsilon(4 S)$ is $K_{\Upsilon}=E_1 + E_2 -M_{\Upsilon(4 S)} = 0.421$ GeV, which is semirelativistic, leading to the Lorentz factor in the lab frame
\begin{equation}
\gamma_{\Upsilon} = \frac{(E_1+E_2)}{M_{\Upsilon(4 S)}} = 1.0398
\; \Rightarrow \; \beta_{\Upsilon} = (1 - 1/\gamma_{\Upsilon}^2)^{1/2} = 0.274 \ .
\label{gammabetaU}
\end{equation}
When the $\Upsilon(4 S)$ produces a $B$ meson pair, the kinetic energy of the produced $B$ mesons is about $0.010$ GeV in the $\Upsilon(4 S)$-rest frame, which is negligible. Therefore, we consider the velocity of the produced $B$ mesons in the lab frame to be the same as the velocity of the $\Upsilon(4 S)$
\begin{equation}
\beta_B = \beta_{\Upsilon} = 0.274, \qquad
\gamma_B = \gamma_{\Upsilon} = 1.0398, \qquad
(p_B)_{\rm lab}=M_B \beta_B \gamma_B = 1.504 \ {\rm GeV}.
\label{gammabetaB}
\end{equation}
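These kinematic numbers are straightforward to reproduce; the short check below recomputes $\gamma_\Upsilon$, $\beta_\Upsilon$ and $(p_B)_{\rm lab}$ from the beam energies (the value $M_B\approx 5.279$ GeV is an assumption of the sketch).

```python
import math

E1, E2 = 7.007, 3.993       # GeV, SuperKEKB beam energies quoted in the text
M_Y = 10.579                # GeV, Upsilon(4S) mass
M_B = 5.279                 # GeV, B meson mass (assumed)

gamma_Y = (E1 + E2) / M_Y                      # Lorentz factor of Upsilon(4S)
beta_Y = math.sqrt(1.0 - 1.0 / gamma_Y**2)     # its lab-frame velocity
p_B_lab = M_B * beta_Y * gamma_Y               # B mesons inherit this velocity

print(round(gamma_Y, 4), round(beta_Y, 3), round(p_B_lab, 3))  # -> 1.0398 0.274 1.504
```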
In the decays $B \to D^{(*)} \ell_1 N$, we will denote the rest frame of the off-shell $W^*$ (i.e., of the $\ell_1 N$ pair) as $\Sigma$; the $B$-rest frame as $\Sigma'$; and the laboratory frame as $\Sigma''$. With these notations, the effective (true) branching ratio is calculated as
\begin{eqnarray}
{\rm Br}_{\rm eff}(B \to D^{(*)} \ell_1 N \to D^{(*)} \ell_1 \ell_2 X) & = &
\int d q^2 \int d \Omega_{{\hat q}'} \int d \Omega_{{\hat p}_1}
\frac{d \Gamma(B \to D^{(*)} \ell_1 N)}{ d q^2 d \Omega_{{\hat q}'} d \Omega_{{\hat p}_1}} \frac{ \Gamma(N \to \ell_2 X) }{\Gamma_N \Gamma_B}
\nonumber\\
&& \times \left\{ 1 - \exp \left[- \frac{L \Gamma_N}{\sqrt{ \left(E''_N(q^2;{\hat q}',{\hat p}_{1})/M_N \right)^2 - 1 }} \right] \right\},
\label{Breff}
\end{eqnarray}
where in the denominator inside the exponent we have the Lorentz factor
\begin{equation}
\beta_N^{''} \gamma_N^{''} = \sqrt{ \left(E''_N(q^2;{\hat q}',{\hat p}_{1})/M_N \right)^2 - 1 } \ ,
\label{bNgNpp}
\end{equation}
in the laboratory frame, which is a function of $W^*$ ($=\ell_1 N$) momentum $q'$ (in the $B$-rest frame)\footnote{Note that ${q'}^2=q^2$ is frame independent.}
and of the direction ${\hat p}_{1}$ of the momentum $p_{1}$ of the produced charged lepton $\ell_1$ (in the $W^*$-rest frame). The expression (\ref{bNgNpp}) as an explicit function of $q^2$, ${\hat q}^{'}$ and ${\hat p}_1$ is derived in Appendix \ref{appENpp}. It depends on the angle $\theta_q$ between the direction of ${\hat \beta}_B$ (in the lab frame $\Sigma''$) and ${\hat q}'$ of $W^*$ (in the $B$-rest frame $\Sigma'$), as well as on the spherical angles $\theta_1$ and $\phi_1$ of the vector ${\vec p}_1$ of $\ell_1$ in the $W^*$-rest ($\Sigma$) frame, in a specific 3-dimensional system of coordinates in the frame $\Sigma$ (cf.~Fig.~\ref{Figthqth1} in Appendix \ref{appENpp}). On the other hand, the differential decay width $d \Gamma(B \to D^{(*)} \ell_1 N)/( d q^2 d \Omega_{{\hat q}'} d \Omega_{{\hat p}_1})$ depends only on $q^2$ and $\theta_1$, as given in subsections \ref{subs:GBDellN}-\ref{subs:GBDstellN}. Due to the mentioned dependence in the decay (nonsurvival) factor $P_N$, integration over these momenta is needed, as indicated in Eq.~(\ref{Breff}). All this implies that the integration Eq.~(\ref{Breff}) has the following form:
\begin{equation}
\int_{(M_N+M_1)^2}^{(M_B - M_{D^{(*)}})^2} d q^2 2 \pi \int_{-1}^{+1} d (\cos \theta_q) \int_{-1}^{+1} d (\cos \theta_1) \int_0^{2 \pi} d \phi_1 f(q^2, \theta_q, \theta_1, \phi_1) .
\label{integrbounds}
\end{equation}
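The four-dimensional integration above can be estimated by plain Monte Carlo sampling; the sketch below is schematic (the physical integrand is left as a user-supplied function $f(q^2,\theta_q,\theta_1,\phi_1)$) and can be validated on a constant integrand, for which the result is $\Delta q^2 \cdot 16\pi^2$.

```python
import math
import random

def mc_integrate(f, q2_min, q2_max, n=200_000, seed=1):
    """Plain Monte Carlo estimate of the integral of Eq. (integrbounds):
    int dq^2 * 2*pi * int dcos(theta_q) int dcos(theta_1) int dphi_1 f(...)."""
    random.seed(seed)
    # Sampling volume: dq^2 range * explicit 2*pi * two cos ranges * phi_1 range.
    vol = (q2_max - q2_min) * (2.0 * math.pi) * 2.0 * 2.0 * (2.0 * math.pi)
    acc = 0.0
    for _ in range(n):
        q2 = random.uniform(q2_min, q2_max)
        theta_q = math.acos(random.uniform(-1.0, 1.0))   # uniform in cos(theta_q)
        theta_1 = math.acos(random.uniform(-1.0, 1.0))   # uniform in cos(theta_1)
        phi_1 = random.uniform(0.0, 2.0 * math.pi)
        acc += f(q2, theta_q, theta_1, phi_1)
    return vol * acc / n
```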
If no mesons $D^{(*)}$ are produced in the decays, then the differential decay width is even simpler, as it depends only on the direction ${\hat {p}}'_N$ of the on-shell $N$ in the $B$-rest frame, and the expression (\ref{Breff}) simplifies
\begin{eqnarray}
{\rm Br}_{\rm eff}(B \to \ell_1 N \to \ell_1 \ell_2 X) & = &
\int d \Omega_{{\hat p}'_N}
\frac{d \Gamma(B \to \ell_1 N)}{ d \Omega_{{\hat p}'_N} } \frac{ \Gamma(N \to \ell_2 X) }{\Gamma_N \Gamma_B}
\nonumber\\
&&\times \left\{ 1 - \exp \left[- \frac{L \Gamma_N}{\sqrt{ \left(E''_N({\hat p}'_N)/M_N \right)^2 - 1 }} \right] \right\}.
\label{BreffnoD}
\end{eqnarray}
The differential decay width is $d \Gamma(B \to \ell_1 N)/d \Omega_{{\hat p}'_N} = \Gamma(B \to \ell_1 N)/(4 \pi)$ since $B$ is a pseudoscalar, and the expression of $\Gamma(B \to \ell_1 N)$ is given in subsection II A. The nonsurvival probability $P_N$ is in the case of Eq.~(\ref{BreffnoD}) also simpler, because it (and the energy of $N$ in the lab frame, $E''_N$) depends only on the direction ${\hat p}'_N$ of $N$ in the $B$-rest frame. The expression $E''_N({\hat p}'_N)$ is given in Appendix \ref{appENpp}.
\textcolor{black}{On the other hand, in the LHCb experiment, the entire procedure described in this Section, designed for a given momentum $(p_B)_{\rm lab} \equiv p_B^{''}$ of $B$ in the laboratory frame [cf.~Eq.~(\ref{gammabetaB}) for Belle-II where $p_B=1.504$ GeV], has to be repeated for various values of momenta $p_B^{''}$. The obtained effective branching ratios then have to be averaged over these momenta $p_B^{''}$. We took into account that the lab momentum $p_B^{''}$ of the produced $B$ mesons in LHCb is distributed over a large interval, cf.~the shaded curve in Fig.~\ref{Bdistr}(a).\footnote{We thank Sheldon L. Stone (LHCb Collaboration) for providing us with the distribution, from Ref.~\cite{LHCC98004}, appearing here as Fig.~\ref{Bdistr}(a).}
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\includegraphics[width=85mm,height=50mm]{B0DistrLHCb.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\includegraphics[width=75mm,height=47mm]{figDistBins.pdf}
\end{minipage} \vspace{12pt}
\caption{\textcolor{black}{\footnotesize (a) (left-hand figure) The lab momentum ($p_B^{''}$) distribution of the produced $B^0$ mesons in LHCb \cite{LHCC98004}. We take the shaded figure as the representative case; (b) (right-hand figure) the distribution of the left-hand shaded curve in ten bins of equal weight (equal number of events).}}
\label{Bdistr}
\end{figure}
We separated this distribution into ten bins of equal weight (equal number of events), cf.~Fig.~\ref{Bdistr}(b), and calculated the results of Figs.~\ref{figUmuN2LHCb}(a)-(d) by averaging over these ten bins. For each bin, we took in our evaluations the value of the $B$ meson momentum to be such that, within the bin interval, the numbers of events to the left and to the right of it [according to the shaded curve of Fig.~\ref{Bdistr}(a)] are equal; e.g., in the last bin, $223 \ {\rm GeV} < p_B^{''} < 403 \ {\rm GeV}$, the average momentum value taken is $p = 273 \ {\rm GeV}$.}
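The equal-weight binning used here is a standard quantile construction; the sketch below illustrates it on a toy histogram (the actual LHCb $p_B^{''}$ distribution is not reproduced here).

```python
def equal_weight_edges(grid, weights, nbins=10):
    """Split a binned distribution into nbins bins of (approximately) equal weight.

    grid: sorted bin centers; weights: event counts per grid point.
    Returns the grid values at which the cumulative weight crosses k/nbins,
    i.e. the nbins-1 interior bin edges.
    """
    total = sum(weights)
    edges, cum, k = [], 0.0, 1
    for x, w in zip(grid, weights):
        cum += w
        while k < nbins and cum >= k * total / nbins:
            edges.append(x)
            k += 1
    return edges
```

For a flat toy histogram of 100 unit-weight points, the deciles land every 10 points, so each of the ten bins carries the same number of events.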
\section{Numerical results for sensitivity limits on $|U_{\mu N}|^2$
at LHCb upgrade and Belle-II}
\label{sec:num}
We assume that in the considered decays, the produced on-shell neutrino $N$ has the available length of $L=1 \ m$ for flight within the detector,
\textcolor{black}{at Belle-II and $L=2.3 \ m$ at LHCb upgrade.}\footnote{This length $L$ is considered here to be independent of the position of the vertex where $N$ is produced and independent of the direction in which the produced $N$ travels. It can be called here the effective detector length for the neutrino $N$.
\textcolor{black}{In the case of LHCb, the length of the Vertex Locator (VELO) is about $1 \ m$ \cite{VELO}; the effective detector length could be extended beyond that locator, to $L=2.3 \ m$ \cite{RICHdesign,Sheldon}.}} We consider that at Belle-II,
\textcolor{black}{a total of $5 \times 10^{10}$ $B$ mesons will be produced \cite{Belle-II}, and at the LHCb upgrade this number will be about $4.8 \times 10^{12}$ \cite{Sheldon}.}
We assume that there are no background events for the considered lepton number violating (LNV) decays
\textcolor{black}{
$B \to D^{(*)} \mu^{\pm} N \to D^{(*)} \mu^{\pm} \mu^{\pm} X^{\mp}$; and $B^{\pm} \to \mu^{\pm} N \to \mu^{\pm} \mu^{\pm} X^{\mp}$. Here, $X^{\pm}$ stands either for $\pi^{\pm}$ (LHCb and Belle-II), or the lepton pair $e^{\pm} \nu_e$ (Belle-II), and $B$ stands for $B^0$, ${\bar B}^0$ or $B^{\pm}$.}
In these events, we have no QED background because no $\mu^+ \mu^-$ pairs appear in the final states.
The effective branching ratios of the mentioned decay modes depend crucially on the heavy-light mixing parameter $|U_{\mu N}|^2$. The sensitivity limit on $|U_{\mu N}|^2$ at the 95\% confidence level is obtained for $N_{\rm events}=3.09$ \cite{FC}. Therefore, the sensitivity limits on $|U_{\mu N}|^2$ are obtained by requiring
\textcolor{black}{ $\langle {\rm Br}_{\rm eff} \rangle = 3.09/(4.8 \times 10^{12})$ at LHCb upgrade, and $\langle {\rm Br}_{\rm eff} \rangle = 3.09/(5 \times 10^{10})$ at Belle-II, where we recall that the projected total number of produced $B$ mesons at LHCb upgrade and at Belle-II is $4.8 \times 10^{12}$ and $5 \times 10^{10}$, respectively.}
\textcolor{black}{The values of $\langle {\rm Br}_{\rm eff}(B \to D^{\star} \mu \mu X) \rangle $ ($X=\pi$ or $e \nu_e$) are obtained by taking the arithmetic average of the values of ${\rm Br}_{\rm eff}$ for the four LNV decay modes: $B^- \to D^{\star 0} \mu^- \mu^- X^+$, ${\bar B}^0 \to D^{\star +} \mu^- \mu^- X^+$ and their charge conjugates. Analogously, $\langle {\rm Br}_{\rm eff}(B \to D \mu \mu X ) \rangle$ is the arithmetic average over the four analogous LNV decays as mentioned before, having now $D$ instead of $D^{\star}$. We note that the total decay widths of $B^0$ and $B^{\pm}$ differ somewhat, $\Gamma_{B^0}/\Gamma_{B^+} =1.078$ \cite{PDG2016}, and we took this into account. In our calculations we neglected, however, the small difference between the masses of $D^+$ and $D^0$ (about $5$ MeV), and between the masses of $D^{\star +}$ and $D^{\star 0}$ (about $3$ MeV); we used $m_D \approx 1.865$ GeV and $m_{D^{\star}} \approx 2.010$ GeV.}
\textcolor{black}{Further, for the LNV decays of $B$ without $D^{(*)}$ mesons, $B \to \mu \mu X$, we do not have four, but only two modes, due to the electric charge restriction: $B^{\pm} \to \mu^{\pm} \mu^{\pm} X^{\mp}$. For such decays, the average $\langle {\rm Br}_{\rm eff}(B \to \mu \mu X) \rangle$ is taken only over these two LNV modes. In these latter cases, we have to take into account that the total number of produced charged $B$ mesons is only half of the total number of produced $B$ mesons. Hence, the sensitivity limits on $|U_{\mu N}|^2$ are obtained in these cases by requiring $\langle {\rm Br}_{\rm eff} (B \to \mu \mu X) \rangle = 3.09/(2.4 \times 10^{12})$ at LHCb upgrade, and $\langle {\rm Br}_{\rm eff} \rangle = 3.09/(2.5 \times 10^{10})$ at Belle-II.}
\textcolor{black}{We note that the charge-conjugated versions of the decays, i.e., the decays of $B^0$ vs ${\bar B}^0$, and of $B^+$ vs $B^-$, give in general the same results. The only exceptions are the decays in which the $D^*$ vector meson is produced. This is so because of the factor $\eta=\pm 1$ in the expression (\ref{dGdq2domdom2}), in one term there proportional to $\cos \theta_1$, which changes sign. The effect of this sign change does not entirely cancel out in the integration (\ref{Breff}) for the effective branching ratio, because the expression $E''_N(q^2;{\hat q}',{\hat p}_{1})$ in the neutrino $N$ decay probability also depends on $\cos \theta_1$.}
We assume in our formulas that only the mixings $|U_{\mu N}|^2$ are nonzero; if other mixings ($|U_{e N}|^2$, $|U_{\tau N}|^2$) are nonzero, the obtained upper bounds for $|U_{\mu N}|^2$ are in general less restrictive (higher).\footnote{
If ${\bar N}$ (and $N$) were Dirac, it would produce, e.g., a pair $\mu^+ \mu^-$ or a pair $e^+ e^-$, which have a strong QED background, and would thus not be useful. Or it could produce a pair $\mu^{\pm} e^{\mp}$; this could give important contribution, but only in the scenario where both $U_{\mu N}$ and $U_{e N}$ are nonnegligible, i.e., the scenario not considered here.}
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BDstmumuXLHCb1to10AVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BDmumuXLHCb1to10AVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BmumuXLHCb1to10AVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BallmumuXthLHCb1to10AVext.pdf}
\end{minipage}
\caption{\textcolor{black}{\footnotesize (a) The sensitivity limits on $|U_{\mu N}|^2$ at LHCb upgrade, as solid lines, from LNV decays $B \to D^{*} \mu^{\pm} N \to D^{*} \mu^{\pm} \mu^{\pm} \pi^{\mp}$; for comparison, the present bounds from various experiments are included, giving the grey region of exclusion. (b) As (a), but for the decays $B \to D \mu^{\pm} N \to D \mu^{\pm} \mu^{\pm} \pi^{\mp}$. (c) As (a), but for the decays $B^{\pm} \to \mu^{\pm} N \to \mu^{\pm} \mu^{\pm} \pi^{\mp}$. (d) Comparison of the prospective LHCb sensitivity limits for the three decays. The effective detector length is taken $L=2.3 \ m$, and the expected total number of produced $B$ meson pairs $N=4.8 \times 10^{12}$.}}
\label{figUmuN2LHCb}
\end{figure}
\textcolor{black}{The results for the decays with $\pi^{\pm}$ in the final state, for LHCb upgrade, are given in Figs.~\ref{figUmuN2LHCb}(a)-(d).
In Figs.~\ref{figUmuN2LHCb}(a)-(c), the present direct experimental bounds are included for comparison, along with our results, i.e., the obtained prospective sensitivity limits for the LHCb upgrade. Fig.~\ref{figUmuN2LHCb}(d) shows the LHCb sensitivity limits for the three considered decays, for mutual comparison. Further, we note that the decay modes $B \to (D^{(*)}) \mu^{\pm} \mu^{\pm} e^{\mp} \nu_e$ cannot be detected at LHCb.}
\begin{figure}[htb]
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BDstmumuXBelleIIAVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BDmumuXBelleIIAVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BmumuXBelleIIAVext.pdf}
\end{minipage}
\begin{minipage}[b]{.49\linewidth}
\centering\includegraphics[width=85mm]{figUmuN2BallmumuXthBelleIIAVext.pdf}
\end{minipage}
\caption{\footnotesize (a) The future Belle-II sensitivity limits on $|U_{\mu N}|^2$, as solid lines, from LNV decays $B \to D^{*} \mu^{\pm} N \to D^{*} \mu^{\pm} \mu^{\pm} X^{\mp}$ at Belle-II, where $X^{\mp}=\pi^{\mp}$ or $X^{\mp}=e^{\mp} \nu_e$; the present bounds from various experiments are also included, resulting in the grey region of exclusion. (b) The same, but for the decays $B \to D \mu^{\pm} N \to D \mu^{\pm} \mu^{\pm} X^{\mp}$. (c) The same, but for the decays $B^{\pm} \to \mu^{\pm} N \to \mu^{\pm} \mu^{\pm} X^{\mp}$. (d) Comparison of the prospective Belle-II sensitivity limits for the three mentioned pairs of decays. The effective detector length is taken $L=1 \ m$, and the expected total number of produced $B$ meson pairs $N=5 \times 10^{10}$.}
\label{figUmuN2}
\end{figure}
The results for the considered decays
\textcolor{black}{at Belle-II, either with $\pi^{\pm}$ or with $e^{\pm} \nu_e$ in the final state},
are given in Figs.~\ref{figUmuN2}(a)-(d). In Figs.~\ref{figUmuN2}(a)-(c), the present experimental bounds are included for comparison. In Fig.~\ref{figUmuN2}(d), the prospective Belle-II sensitivity limits for all the six considered decays are presented, for mutual comparisons.
\section{Discussions and conclusions}
\label{sec:concl}
From Figures \ref{figUmuN2LHCb} and \ref{figUmuN2}, we can see that the decays where $D^{*}$ and $D$ are produced give quite strong new sensitivity limits on $|U_{\mu N}|^2$ in the mass interval $1.75 \ {\rm GeV} < M_N < 3 \ {\rm GeV}$. This is a reflection of the fact that the presence of $D^{(*)}$ mesons leads to a significantly weaker CKM suppression in the decay rates, because $|V_{cb}|^2 \approx 10^2 |V_{ub}|^2$. However, when $M_N > 3 \ {\rm GeV}$, such decays are kinematically suppressed, and then only the (CKM-suppressed) decays $B \to \mu \mu X$ give useful sensitivity limits, as seen in Figs.~\ref{figUmuN2LHCb}(c), (d) and Figs.~\ref{figUmuN2}(c), (d). Further, we see in Figs.~\ref{figUmuN2} that in general the sensitivity limits are more restrictive (lower) when $X=e \nu$ than when $X=\pi$.
\textcolor{black}{Comparing Figs.~\ref{figUmuN2LHCb} with Figs.~\ref{figUmuN2}, we can see that the decays $B \to (D^{(*)}) \mu^{\pm} \mu^{\pm} \pi^{\mp}$, which can be measured at both LHCb and Belle-II experiments, give more stringent (lower) sensitivity limits on $|U_{\mu N}|^2$ at LHCb upgrade experiment. This is so primarily because the expected number of produced $B$ mesons at LHCb upgrade ($4.8 \times 10^{12}$) is by two orders of magnitude larger than the number at Belle-II ($5 \times 10^{10}$). Yet another factor contributing to the more stringent bounds is the effective detector length, which is assumed to be larger at LHCb upgrade ($L=2.3 \ m$ vs $L=1 \ m$ at Belle-II). The difference between the two sets of the sensitivity limits is somewhat reduced by the fact that the lab energy of the produced $B$ mesons in LHCb is significantly higher than in Belle-II; as a consequence, the produced on-shell $N$ neutrinos move in the LHCb case faster and are thus less likely to decay within the detector. If, on the other hand, the acceptance factors decrease the effective number $N$ of produced $B$ mesons, or if the effective detector length $L$ turns out to be smaller, the sensitivity limits for $|U_{\mu N}|^2$ go up, in general as approximately proportional to $1/\sqrt{N L}$ for not very heavy neutrinos ($M_N < 3$ GeV).}
\textcolor{black}{This approximate proportionality comes from the following behavior. For the values of $|U_{\mu N}|^2$ which are of the order of magnitude of the presented upper bounds, we have at $M_N \lesssim 2.5$ GeV small $N$-decay probabilities, $P_N \ll 1$, and therefore our expressions imply in such a case the approximate proportionality ${\rm Br}_{\rm eff} \propto |U_{\mu N}|^4 L$. However, for $M_N \gtrsim 4.5$ GeV we have $P_N \approx 1$ and thus the approximate proportionality ${\rm Br}_{\rm eff} \propto |U_{\mu N}|^2$ (and $L$-independent). We verified these approximate proportionalities also numerically with our expressions. Approximate $L$-independence of ${\rm Br}_{\rm eff}$ occurs already at $M_N \gtrsim 3$ GeV.}
In Ref.~\cite{AsIsh}, a similar analysis was made for the decay $B^+ \to \mu^+ N \to \mu^+ \mu^- \pi^-$ at Belle-II, where the same total number of $B$ meson pairs was assumed as here, $5 \times 10^{10}$. They obtained lower, i.e., more restrictive sensitivity limits on $|U_{\mu N}|^2$ than we do for this decay for Belle-II. The reason for the difference cannot be the fact that they did not take into account the movement of $B$-mesons in the lab frame (this effect changes the sensitivity limits only weakly). The reason for the difference lies possibly in the evaluated values of the total decay width $\Gamma_N$ as a function of $M_N$. We evaluated this decay width according to the formulas and Figures in Appendix \ref{appNall}, based on Refs.~\cite{Atre,HKS}, and we applied those evaluations in Refs.~\cite{CKZ2,symm}.
The experimental bounds on $|U_{\mu N}|^2$ presented in Figs.~\ref{figUmuN2LHCb}(a)-(c) and Figs.~\ref{figUmuN2}(a)-(c) are from various experiments: DELPHI \cite{DELPHI}, BEBC \cite{BEBC}, NuTeV \cite{NuTeV}, NA3 \cite{NA3}, CHARM II \cite{CHARMII}, and Belle \cite{BelleUB}.
On the basis of the obtained results, Figs.~\ref{figUmuN2LHCb} and \ref{figUmuN2}, we conclude that the LHCb upgrade and Belle-II experiments have the potential to either find a new heavy Majorana neutrino $N$, or to improve significantly the sensitivity limits (upper bounds) on the heavy-light mixing parameter $|U_{\mu N}|^2$, particularly in the mass range $1.75 \ {\rm GeV} < M_N < 3 \ {\rm GeV}$ where the LNV decays of $B$ mesons involving $D$ or $D^{*}$ mesons and an on-shell neutrino $N$ are possible.
If $N$ is not a Majorana but a Dirac particle, then clear sensitivity limits cannot be obtained for $|U_{\mu N}|^2$, but rather for the product $|U_{e N} U_{\mu N}|$; this is a less attractive possibility, principally because the present upper bounds for $|U_{e N}|^2$ in the mentioned mass range, coming from the neutrinoless double beta decay experimental data \cite{Kova}, are more restrictive (lower) than those for $|U_{\mu N}|^2$.
\section*{Acknowledgments}
\noindent
The work of C.S.K. was supported by the NRF grant funded by the Korean government of the MEST (No. 2016R1D1A1A02936965).
\textcolor{black}{We thank Y.J.~Kwon and Sheldon L.~Stone for providing us with valuable information on Belle-II and LHCb upgrade experiments, respectively.}
\section{Problem Formulation}
{\bf Formulation 1 - Feature Preference Model:} We assume that a worker completes a particular task because she has a hidden preference over the task features, which we want to uncover. For example, if locations are used as task features, we can learn each worker's preference for different locations and use it to recommend new tasks to her. Based on the explicit knowledge of the task feature matrix $Y$ and the worker-task completion matrix, we learn the preference of each worker in the feature space, i.e., the matrix $X$. Formally, we want to minimize the following objective function.
\begin{align}
M = \sum_{w,i}q_{wi}(p_{wi}-x_wy_i)^2+\lambda(\|X\|^2)\\
X \geq 0\\
q_{wi} = 1 + \alpha \times c_{wi}
\label{eqn:objective}
\end{align}
Here, $q_{wi}$ is designed such that the weight of positive signals is amplified; $Q$ denotes the matrix of the values $q_{wi}$ for all workers and tasks. If a particular observation has high confidence, the system will choose $x_w$ such that $x_wy_i$ becomes close to $1$. $\alpha$ is set to a positive value indicating the confidence in positive signals over negative signals. By minimizing $M$, we get the solution for the worker vector, $x_w = (Y^tQ^wY + \lambda I)^{-1}Y^tQ^wP_w$. Due to the non-negativity constraint on $X$, we instead solve the constrained least-squares problem of minimizing $\|(Y^tQ^wY + \lambda I)x_w - Y^tQ^wP_w\|^2$ subject to $x_w \geq 0$. We refer to our algorithm as {\tt Feat-Based-NNLS}, or Feature-Based Non-Negative Least Squares.
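As an illustrative aside (not part of the original text), the per-worker solve above can be sketched as follows. All matrices and parameter values below are made up, and a crude non-negative projection of the unconstrained ridge solution stands in for a full NNLS solver; a real implementation would call a proper NNLS routine instead of clipping.

```python
import numpy as np

# Toy instance: 3 workers, 4 tasks, 2 task features (all values illustrative).
C = np.array([[2., 0., 1., 0.],
              [0., 3., 0., 0.],
              [1., 0., 0., 4.]])          # completion counts c_{wi}
Y = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [0., 1.]])                  # task-feature matrix Y (n_t x n_l)
alpha, lam = 50.0, 0.1

P = (C >= 1).astype(float)                # boolean preference matrix p_{wi}
Q = 1.0 + alpha * C                       # confidence weights q_{wi}

n_w, n_l = C.shape[0], Y.shape[1]
X = np.zeros((n_w, n_l))
for w in range(n_w):
    Qw = np.diag(Q[w])                    # diagonal matrix Q^w
    A = Y.T @ Qw @ Y + lam * np.eye(n_l)
    b = Y.T @ Qw @ P[w]
    x = np.linalg.solve(A, b)             # x_w = (Y^t Q^w Y + lam I)^{-1} Y^t Q^w P_w
    X[w] = np.maximum(x, 0.0)             # clipping stands in for the NNLS solve
```

The recommendation score of worker $w$ for a new task $i$ is then the inner product $x_wy_i$.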
{\bf Formulation 2 - Latent Factor Model:} We consider the following objective function for task recommendation:
\begin{dmath}
\label{eqn:objective2}
M = \sum_{w,i}q_{wi}(p_{wi}-u_wv_i)^2+\lambda(\|U\|^2+ \|V\|^2 - \sum_{i,i'}v_i^tv_i'Sim(i,i'))
\end{dmath}
Here, the goal is to find $U$ and $V$ that minimize the error, where $\lambda$ is the regularization parameter. For any new task, the predicted recommendation score is calculated by multiplying $u_w$ with $v_i$. To incorporate task similarity into the latent factor formulation, we add a penalty term to the equation. Our intuition is that if the similarity between two tasks is high, then they should also be similar in the latent factor space. Our notion of task similarity is defined as $Sim(t_i, t_j) = \frac{1}{1 + e^{-Y_i^tY_j}}$. The analytical solutions for $U$ and $V$ are given below.
\begin{align}
u_w = (V^tQ^uV + \lambda I)^{-1}V^tQ^wP_w
\end{align}
\begin{align}
v_i = (U^tQ^iU + \lambda I)^{-1}\Big(U^tQ^iP_i + \frac{\lambda}{2} \sum_{i'=1}^{n_t} Sim(i,i')v_{i'}\Big)
\end{align}
We solve the optimization problem by alternating least squares, fixing $U$ while solving for $V$ and vice versa. This method is referred to as Implicit Factorization with Task Similarity, or {\tt IFTS}.
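As a small illustrative sketch (the feature vectors are made up, not from the paper's data), the task-similarity kernel defined above is just a sigmoid of the feature inner product:

```python
import math

def task_similarity(y_i, y_j):
    # Sim(t_i, t_j) = 1 / (1 + exp(-Y_i^t Y_j)), the penalty weight in IFTS
    dot = sum(a * b for a, b in zip(y_i, y_j))
    return 1.0 / (1.0 + math.exp(-dot))

print(task_similarity([1, 0, 1], [1, 1, 1]))  # two shared features -> ~0.881
print(task_similarity([1, 0, 0], [0, 1, 0]))  # no shared features -> 0.5
```

Tasks sharing more binary features (e.g. location indicators) receive a larger penalty weight, pulling their latent vectors together.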
\section{Algorithms for Task Recommendations}
\label{sec:algorithms}
In this section, we present our solutions based on the two alternative formulations described in the previous section.
\subsection{Solution using Feature Preference Model}
In this section we present the techniques to solve the objective function described in Equation~\ref{eqn:objective}. Throughout the following sections we refer to this algorithm as {\tt Feat-Based-NNLS}, or Feature-Based Non-Negative Least Squares. Here, the matrix $Y$ is fixed and the non-negativity of the matrix $X$ needs to be respected. Since $Y$ is fixed in our case, the objective function is quadratic.
In order to solve for $X$ by minimizing the objective function described in Equation~\ref{eqn:objective}, we first take the derivative with respect to each worker vector $x_u$. We introduce the notation $W^u$ for the diagonal matrix with $W^u_{ii} = w_{ui}$, where $w_{ui}$ is the confidence weight ($q_{ui}$ above) of worker $u$ for task $i$.
\begin{align*}
\frac{\partial M}{\partial x_u} &= -2\sum_iw_{ui}(p_{ui}-{x_u^Ty_i})y_i+2\lambda x_u\\
&= -2Y^tW^up_u + 2Y^tW^uYx_u + 2 \lambda x_u
\end{align*}
Now, by setting $\frac{\partial M}{\partial x_u} = 0$, we get the analytical solution for $x_u$:
\begin{align}
\label{eqn:user}
x_u = (Y^tW^uY + \lambda I)^{-1}Y^tW^uP_u
\end{align}
However, we cannot solve for $x_u$ exactly due to the non-negativity constraint. Hence we minimize the L-2 error $\|(Y^tW^uY + \lambda I)x_u - Y^tW^uP_u\|^2$ for each worker. There exist two ways to solve this problem: i) the constrained problem can be transformed into an unconstrained one and solved in that form, or ii) it can be treated as a Generalized Singular Value Decomposition problem~\cite{bjorck1996numerical}.
{\bf Complexity:} The worst-case complexity of solving this constrained least-squares problem for a single worker is $O(n_l^3)$, since the coefficient matrix in our case is of size $n_l \times n_l$~\cite{bjorck1996numerical}. Hence over all workers the complexity becomes $O(n_un_l^3)$.
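As an illustrative sanity check (made-up data, not the paper's code), the analytic gradient $\frac{\partial M}{\partial x_u}$ derived above can be verified against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_l, lam = 6, 3, 0.1
Y = rng.integers(0, 2, size=(n_t, n_l)).astype(float)        # binary task features
w = 1.0 + 50.0 * rng.integers(0, 3, size=n_t).astype(float)  # weights w_{ui}
p = (w > 1.0).astype(float)                                  # preferences p_{ui}
x = rng.normal(size=n_l)

def M(x):
    # per-worker objective: sum_i w_i (p_i - x.y_i)^2 + lam |x|^2
    r = p - Y @ x
    return float(np.sum(w * r * r) + lam * x @ x)

# dM/dx_u = -2 Y^t W^u p_u + 2 Y^t W^u Y x_u + 2 lam x_u
analytic = -2 * Y.T @ (w * p) + 2 * Y.T @ (w * (Y @ x)) + 2 * lam * x
eps = 1e-6
numeric = np.array([(M(x + eps * e) - M(x - eps * e)) / (2 * eps)
                    for e in np.eye(n_l)])
assert np.allclose(analytic, numeric, atol=1e-4)
```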
\noindent{\bf Presence of User-Feature:} The first variant appears when we know the workers' feature preferences instead of the task feature information. Then, we can use a similar formulation to find the task-feature matrix and thereby solve the task recommendation problem.
\noindent{\bf Presence of both User-Feature and Task-Feature:} Another variant of this problem appears when we know only the zero entries of both the task-feature matrix $Y$ and the user-feature matrix $X$; the non-zero entries of both matrices are then subject to optimization. This amounts to solving a sparsity-constrained matrix factorization problem, which can be computationally expensive if the number of explicit features is large.
\subsection{Solution using Latent Factor Model}
In this section, we describe the techniques to solve the objective function described in Equation~\ref{eqn:objective2}. We call this algorithm Implicit Factorization with Task Similarity, or \texttt{IFTS}. We apply an alternating least squares approach to solve this problem. This is an iterative approach where we partially differentiate the objective $M$ with respect to both workers and tasks. At each iteration, we fix the tasks' latent factor matrix $V$ in order to solve for $U$, and vice versa. Differentiating the objective function with respect to $U$ yields a solution for $U$ analogous to Equation~\ref{eqn:user},
\begin{align}
\label{eqn:user_latent}
u_u = (V^tW^uV + \lambda I)^{-1}V^tW^uP_u
\end{align}
However, for solving $V$ we need to consider the penalty term while differentiating with respect to $V$:
\begin{align*}
\frac{\partial M}{\partial v_i} &= -2\sum_uw_{ui}(p_{ui}-{u_u^Tv_i})u_u+2\lambda v_i - \lambda\sum_{i'=1}^{n_t}Sim(i,i')v_{i'} \\
&= -2U^tW^ip_i + 2U^tW^iUv_i + 2 \lambda v_i - \lambda\sum_{i'=1}^{n_t}Sim(i,i')v_{i'}
\end{align*}
By setting $\frac{\partial M}{\partial v_i} = 0$, we get the following equation for solving $v_i$:
\begin{align}
\label{eqn:item_latent}
v_i = (U^tW^iU + \lambda I)^{-1}\Big(U^tW^iP_i + \frac{\lambda}{2} \sum_{i'=1}^{n_t} Sim(i,i')v_{i'}\Big)
\end{align}
We achieve a reasonable speed-up by using the fact that $U^tW_iU = U^tU + U^t(W_i - I)U$~\cite{hu2008collaborative}. We can compute $U^tU$ once and reuse it across all iterations, and the number of non-zero elements in $W_i - I$ is exactly the number of users with $c_{ui} > 0$; call this $m_i$, which is much smaller than the total number of users, i.e., $m_i \ll n_u$. The running time of this algorithm is $O(n_f^2\mathbf{N} + n_f^3)$ plus the additional cost of computing $\sum_{i'=1}^{n_t} Sim(i,i')v_{i'}$, where $\mathbf{N} = \sum_{i=1}^{n_t}m_i$ is the total number of non-zero entries. So the overall running time of this algorithm is $O(n_f^2\mathbf{N} + n_f^3 + n_un_t)$.
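The precomputation trick above can be sketched as follows (sizes and weights are illustrative); the check confirms that the sparse correction reproduces the dense product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_u, n_f = 1000, 10
U = rng.normal(size=(n_u, n_f))
UtU = U.T @ U                                  # computed once, reused for every task

# For task i, only the m_i users with c_{ui} > 0 make W_i differ from I.
w_i = np.ones(n_u)
nz = rng.choice(n_u, size=5, replace=False)    # the m_i "active" users
w_i[nz] = 1.0 + 50.0 * 2.0                     # q = 1 + alpha*c with alpha = 50, c = 2

# U^t W_i U = U^t U + U^t (W_i - I) U, where the correction is O(m_i n_f^2)
fast = UtU + U[nz].T @ ((w_i[nz] - 1.0)[:, None] * U[nz])
slow = U.T @ (w_i[:, None] * U)                # direct O(n_u n_f^2) product
assert np.allclose(fast, slow)
```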
\section{Conclusion and Future Work}
We initiate the study of the task recommendation problem in citizen science based crowdsourcing applications, considering both implicit feedback and explicit features. We formalize two optimization problems and present preliminary results. As ongoing research, we are investigating our method's validity on other datasets, as well as the generality of our proposed solution outside citizen science applications.
\newpage
\section{Methodologies}
\label{sec:dataModel}
{\bf Notations:} $W =\langle w_1,w_2,w_3 \dots w_{n_w} \rangle$ and $T= \langle t_1,t_2,t_3,\dots t_{n_t} \rangle$ represent the set of workers and the set of tasks, respectively. The relationship between workers and tasks is represented by the matrix $C_{n_w\times n_t}$, where $c_{wi}$ is the number of times worker $w$ has completed task $i$. The preference matrix $P$ is a boolean version of $C$, such that $p_{wi} =1$ if $c_{wi} \geq 1$, and $p_{wi} =0$ otherwise.
$Y_{n_t \times n_l}$ represents the explicit task feature matrix, where $y_{il} \in \{0,1\}$ denotes the absence or presence of feature $l$ for task $i$. The worker feature-preference matrix is denoted $X_{n_w \times n_l}$. Additionally, $U_{n_w \times n_f}$ and $V_{n_t \times n_f}$ are the two latent factor matrices, where $U$ is for the workers and $V$ is for the tasks.
\section{Experiments}
\label{sec:exp}
{\bf Dataset:} We collected data from {\em Ebird}\footnote{Ebird.org}, a popular citizen science platform for bird observations. We crawled all the observations from year $2012$ and randomly chose a set of $5000$ workers for our experiments, leading to $1767$ tasks with a total of $2.5$ million observations. We used $294$ locations as task features.
{\bf Evaluation:} We evaluate our methods using a held-out test set. We randomly choose $90\%$ of our data as the training set and the remaining $10\%$ as the test set, which gives us the ground truth. All results are averages of three runs.
{\bf Implemented Baseline Algorithms:}
i) {\tt Implicit-ALS-Neg}: This algorithm is implemented according to~\cite{lin2014signals}. The algorithm uses an alternating least squares method that considers negative signals. If a worker has not completed a task, then the total number of times that task has been completed by other workers is considered as the weight of the negative signal.
ii) {\tt Feat-Based-Reg}: We assume that the task-feature matrix $Y$ is given to us. We solve the regularized regression problem~\cite{wu2006learning} $\sum_{ij}(c_{ij} - x_iy_j)^2 + \lambda \|X\|^2$ to find $X$.
{\bf Evaluation Metrics:}
We use the Mean Percentile Ranking (MPR) proposed in~\cite{hu2008collaborative} for evaluating implicit feedback. The formula for MPR is $\frac{\sum_{ij}c_{ij}\rho_{ij}}{\sum_{ij}c_{ij}}$, where $\rho_{ij}$ is the percentile ranking of task $j$ for worker $i$. Our recommendation is based on the estimated worker-task preference matrix $\hat{P}$. For {\tt Feat-Based-NNLS}, $\hat{P}= XY^t$, where $X$ is the worker-feature matrix and $Y$ is the task-feature matrix. For {\tt IFTS}, $\hat{P}= UV^t$. We experimented with different values of $\alpha$ and chose $\alpha = 50$. We also use the Precision-Recall (PR) curve as our second evaluation method. Here, we evaluate how many tasks in the test set we can correctly predict by taking only the top $t\%$ of ranked tasks. We vary $t$ in increments of $1\%$ and obtain the PR curve.
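A minimal reference implementation of MPR on toy data (illustrative only; the percentile convention, with $0$ for the top-ranked task, follows \cite{hu2008collaborative}, and ties are broken arbitrarily):

```python
def mean_percentile_ranking(C, scores):
    """MPR = sum_ij c_ij * rho_ij / sum_ij c_ij, where rho_ij is the
    percentile rank (0 = most recommended) of task j for worker i."""
    num = den = 0.0
    n_t = len(scores[0])
    for c_row, s_row in zip(C, scores):
        order = sorted(range(n_t), key=lambda j: -s_row[j])
        rho = {j: 100.0 * r / (n_t - 1) for r, j in enumerate(order)}
        for j, c in enumerate(c_row):
            num += c * rho[j]
            den += c
    return num / den

# Perfect ranking: every completed task sits at the top of its worker's list.
C = [[3, 0, 0], [0, 2, 0]]
scores = [[0.9, 0.2, 0.1], [0.1, 0.8, 0.3]]
print(mean_percentile_ranking(C, scores))  # -> 0.0
```

Lower MPR is better; a random ranking gives an expected MPR of about $50$.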
{\bf Summary of Results:}
The objective of our empirical study is to see how effective our proposed task recommendation models are in comparison with the baselines. Our proposed algorithm {\tt Feat-Based-NNLS} convincingly outperforms the baseline algorithms in both MPR and the PR curve. The reason behind the worse performance of {\tt Implicit-ALS-Neg} is that a worker does not choose tasks from a list of available tasks, so a task that has not been attempted by a worker carries no preference at all rather than a ``negative preference''. {\tt IFTS} also performs reasonably well compared to the other methods.
\begin{figure}
\begin{floatrow}
\ffigbox{%
\includegraphics[height=2.8cm, width=4.5cm] {figures/ebird_pr.pdf}%
}{%
\label{fig:PR}
\caption{PR Curve}%
}
\capbtabbox{%
\begin{tabularx}{0.5\textwidth}{ b|s }
\hline
Algorithm & MPR\\ \hline
{\tt Impl-ALS-Neg} & $17.3$ \\ \hline
{\tt Feat-Based-Reg} & $13.706$ \\ \hline
{\bf {\tt Feat-Based-NNLS}} & $\mathbf{5.68}$ \\ \hline
{\tt IFTS} & $6.87$ \\
\hline
\end{tabularx}
}{%
\label{tab:MPR}
\caption{MPR}%
}
\end{floatrow}
\end{figure}
\section{Introduction}
Crowdsourcing platforms, such as Amazon's Mechanical Turk or Crowdflower, have recently gained immense popularity due to their elegant framework, where a task requester can get work done by numerous virtual workers for very low compensation. One common problem in these platforms is that workers suffer long delays in finding suitable tasks, which creates dissatisfaction and eventually leads to abandonment of the platform. Task recommendation problems have been studied in the crowdsourcing context, where the objective is to recommend to each worker the set of tasks best suited to her~\cite{geiger2014personalized,yuen2012taskrec}. In this work, we aim at leveraging the task completion history of the workers (referred to as {\em implicit feedback}) and augmenting it with explicit task characteristics, or features, to recommend tasks to workers. Our investigation is limited to citizen science crowdsourcing applications, where effective task recommendation is pivotal~\cite{xue2013improving}. We focus on the crowdsourcing of biodiversity observations, where volunteers visit sites, observe species, and report their findings via web applications. Currently, a volunteer, upon identifying a species, uploads information to the server specifying the details of the identification. A common problem in this scenario is {\em incorrect identification}. A reliable task recommender system can alleviate this problem: if we have historical data on how many tasks a volunteer has successfully performed, and on which species and locations those observations involved, then we can lower the risk of incorrect identification by asking volunteers to identify species they have prior experience with.
\section{Related Work}
Task recommendation with explicit observations is studied in~\cite{yuen2012taskrec}. We are the first to treat worker-task completion history as implicit observations and to incorporate task feature information for recommendation. Works in recommender systems such as~\cite{forbes2011content,nguyen2013content,koren2008factorization} mostly rely on explicit or content-based feedback, whereas our model relies on implicit feedback. This precludes direct adaptation of their techniques.
\section{INTRODUCTION}
\renewcommand{\theequation}{1.\arabic{equation}}
\setcounter{equation}{0}
The background here is properly the Bohmian or trajectory approach
to quantum mechanics (QM) (cf. \cite{bm,ha} for technical details
and \cite{bg,cm,hb,pd} for perspectives, philosophy, etc.). In a
seminal paper \cite{fa} Faraggi and Matone (FM) develop a 1-D
trajectory theory based on a deep equivalence principle which seems to
provide the proper foundational structure for such theories (cf. also
\cite{fh,fi}). For example quantization is a direct consequence of
the equivalence principle and there is a nontrivial action even
for bound states.
In particular in \cite{fa} one avoids a flaw in the Bohm
theory based on the erroneous assumption that particle velocity
$\dot{q}$ is the same as $p/m=\partial_qS_0=S_0'$ where $S_0=W$ is
Hamilton's characteristic function or reduced action ($S=S_0-Et$).
The correct version here is $p=\partial_qW=m\partial_{\tau}q$ where
$\tau-\tau_0=m\int_{q_0}^q(dx/\partial_xW)$ represents a time
concept developed by Floyd (cf. \cite{fb,fc,fd,fe,fn,
fr}) in studies of trajectory representations and microstates.
In particular one can work with $t\sim\partial_EW$ to write
$t-t_0=\partial_E\int_{q_0}^qW'dx$ and arrive at
$m\dot{q}=m(dt/dq)^{-1}=
m/W'_E=W'/(1-\partial_EQ)$ where $Q$ is the quantum potential
$Q=(\hbar^2/4m)\{W,q\}$ (Schwartzian derivative - details below).
Given this important variation alone from the traditional Bohm
theory (and more generally, given \cite{fa}), many philosophical
discussions (some quite recent) regarding trajectory representations
should be drastically modified. We will mostly avoid philosophy
in this note and simply make a few comments about matters related
to this use of time. One notes in \cite{fa} also that a formula
$p=m_Q\dot{q}$ can be obtained via the use of a quantum mass
$m_Q=m(1-\partial_EQ)$. The quantum potential $Q$ is regarded
here as the particle's reaction to an external potential $V$ and no
pilot-wave philosophy is needed.
We will consider stationary states but allow $E$ to
vary continuously (so discrete eigenvalues $E_n$ are
not indicated although some arguments could be adjusted to
include them).
\section{BACKGROUND}
\renewcommand{\theequation}{2.\arabic{equation}}
\setcounter{equation}{0}
We sketch briefly some background. First one starts with
$i\hbar\psi_t=-(\hbar^2/2m)\psi_{qq}+V\psi$ for
stationary states $\psi=\psi(q)
exp(-iEt/\hbar)$ so $i\hbar\psi_t=E\psi$ and $-(\hbar^2/2m)\psi''
+V\psi=E\psi$ ($'\sim\partial_q$). ``Classically'' one takes then
$\psi=Rexp(iW/\hbar)$ where $S=S_0-Et$ and $S_0=W$ to
arrive at
\begin{equation}
\frac{1}{2m}(W')^2+V+Q-E=0;\,\,(R^2W')'=0
\label{1}
\end{equation}
where $(\bullet)\,\,Q=-\hbar^2R''/2mR$ is the quantum potential.
In \cite{fa} this whole procedure is changed and the matter is
developed in the context of a general equivalence principle leading
to a quantum stationary Hamilton-Jacobi equation (QSHJE)
\begin{equation}
\frac{1}{2m}(W')^2+{\cal W}(q)+Q(q)=0
\label{2}
\end{equation}
which is actually an identity. The individual terms are (here
$\{f,q\}=(f'''/f')-(3/2)(f''/f')^2$ is the Schwartzian derivative)
\begin{equation}
{\cal W}(q)=-\frac{\hbar^2}{4m}\left\{exp\left(\frac{2iW}{\hbar}
\right),q\right\};\,\,Q(q)=\frac{\hbar^2}{4m}\{W,q\}
\label{3}
\end{equation}
This $Q$ is the general quantum potential which in the special context
of (\ref{1}) is $Q=-(\hbar^2/2m)(R''/R)$ as in $(\bullet)$ (note
$R^2W'=c$ from (\ref{1}) will produce the Schwartzian derivative
$\{W,q\}$ as in (\ref{3})). Further one can identify $(\bullet\bullet)\,\,
{\cal W}(q)=V-E$ for which we refer to \cite{fa}. Thus
given $V$ one can determine $W$ via $(\bullet\bullet)$ or via
(\ref{2}) with ${\cal W}=V-E$. Note that (\ref{2}) is a third order
differential equation for $W$ and its solutions will lead to microstates
\`a la Floyd.
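As a purely numerical aside, not part of the original argument, the Schwarzian derivative entering $Q$ and ${\cal W}$ can be evaluated by central finite differences; classical identities such as $\{\tan q,q\}=2$ and $\{e^q,q\}=-1/2$ provide a check:

```python
import math

def schwarzian(f, q, h=1e-3):
    # {f, q} = f'''/f' - (3/2) (f''/f')^2, via central differences
    f1 = (f(q + h) - f(q - h)) / (2 * h)
    f2 = (f(q + h) - 2 * f(q) + f(q - h)) / h**2
    f3 = (f(q + 2*h) - 2*f(q + h) + 2*f(q - h) - f(q - 2*h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

print(schwarzian(math.tan, 0.3))   # close to  2.0
print(schwarzian(math.exp, 0.0))   # close to -0.5
```

The constancy of $\{\tan q,q\}$ in $q$ reflects the M\"obius invariance of the Schwarzian.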
\\[3mm]\indent
Now in Floyd's work \cite{fb} it was apparently first observed
that Bohm's assumption $p=W'=m\dot{q}$ (for particle velocity
$\dot{q}$) is not generally valid and the correct version is
(cf. \cite{fa})
\begin{equation}
m(1-\partial_EQ)\dot{q}=W'\equiv m_Q\dot{q}=W'\equiv
m\partial_{\tau}q=W';
\label{4}
\end{equation}
$$m_Q=m(1-\partial_EQ);\,\,\tau-\tau_0=m\int_{q_0}^q\frac{dx}
{W'}$$
Then one has, using (\ref{1}) and Floyd's effective or modified
potential $U=V+Q$,
\begin{equation}
t-t_0=\partial_E\int_{q_0}^qW'dx=\left(\frac{m}{2}\right)^{1/2}
\int_{q_0}^q\frac{(1-\partial_EQ)dx}{\sqrt{E-U}}
\label{5}
\end{equation}
and $d\tau/dt=1/(1-\partial_EQ)$. Thus $t$ is explicitly a function
of $E$ and we want to expand upon this aspect of the theory.
It is important to note that general solutions of
the Schr\"odinger equation above should be taken in the form
${\bf (2A)}\,\,\psi=(W')^{-1/2}[Aexp(-iW/\hbar)+Bexp(iW/\hbar)]$
and $p\sim\partial_qW=W'$ is the generic form for $p$
corresponding to momentum in a quantum mechanical Hamiltonian
$(1/2m)p^2+V\sim (1/2m)(-i\hbar\partial_q)^2+V$. Thus $p=W'$
corresponds to $p\sim -i\hbar\partial_q$ and this is the quantum
mechanical meaning for $p$; it will not correspond in general to
mechanical momentum $m\dot{q}$ for particle motion.
\section{GENERAL COMMENTS IN 1-D}
\renewcommand{\theequation}{3.\arabic{equation}}
\setcounter{equation}{0}
First one sees that Hamilton-Jacobi (HJ) procedures involve
$t\sim\partial_EW$ and we will modify an argument in \cite{fz}
in order to give further insight into the relation $t=t(E)$. We think
of a general stationary state situation with $S\sim W-Et=W(q,E)-Et$
so that $\partial S/\partial t=-E$ and $t=t(E)$. Setting ${\cal S}=
-S$ with $\partial{\cal S}/\partial t=E$ we can write then
$(\clubsuit)\,\,W=Et-{\cal S}=t{\cal S}_t-{\cal S}$ in Legendre form.
Now given $W=W(E,q)$, with $q$ fixed, one has $\partial_EW=t
+Et_W-{\cal S}_tt_W=t$ so $(\spadesuit)\,\,{\cal S}=EW_E-W$
gives the dual Legendre relation. Consequently the constructions
in \cite{fa} for example automatically entail the Legendre transformation
relations $(\clubsuit)$ and $(\spadesuit)$ involving ${\cal S}=-S$ and
$W$.
\\[3mm]\indent
Now one comes to the energy-time uncertainty ``principle'' which should
be mentioned because of situations involving energy
dependent time for example (cf. \cite{aa,bk,da,ea,fs,se} for various
approaches - we make no attempt to be
complete or exhaustive here). First, in a perhaps simple minded
spirit, let us recall that microstates are compatible with the
Schr\"odinger representation by wave functions $\psi$ and hence
one will automatically have a connection of the trajectory representation
with Hilbert space ideas of observables and probability
(more on this below). In the
Hilbert space context the uncertainty $\Delta q\Delta p\geq\hbar/2$ is
a trivial consequence of operator inequalities and we take it to mean
nothing more nor less ($\Delta p$ for example represents a standard
deviation $<\psi|(\hat{p}-<\hat{p}>)^2|\psi>$ where $\hat{p}\sim
-i\hbar\partial_q$). In this spirit nothing need be said about
measurement and we will not broach the subject in any way except
to say that sometimes for a trajectory we will think e.g. of physical
increments $\delta q\sim q-q_0$ and $\delta p\sim p-p_0$. Thus
we will try to maintain a distinction between $\delta q$ and $\Delta q$
for example and we do not require that $\delta q$ be measured, only
that it be a natural mathematical concept. After all $\Delta q$ above
is also only a natural mathematical concept without any a priori
connection to measurement. The idea of attaching physical meaning
to $\Delta q$ via measurement seems no stranger than thinking
of $\delta q$ as a meaningful possibly measurable physical
quantity.
As for energy-time uncertainty we remark first that if one departs
by $\epsilon$ from a correspondence between observables and
self-adjoint operators then an $\epsilon$ approximate inequality
${\bf (ET)}\,\,\Delta E\Delta t\geq\hbar/2$ can be proved in a
Hilbert space context (see e.g. \cite{fs} for a detailed discussion).
There are also various crude physical derivations
based on $\delta q\sim (p/m)\delta t$ where $p$ is the
physical momentum and
subsequently, for $\delta E\sim (p/m)\delta p$ when e.g. $E\sim
(p^2/2)+V(q)$, one often writes ${\bf (EET)}\,\,
\delta p\delta q\sim(p/m)(m/p)
\delta E\delta t=\delta E\delta t
\geq\hbar/2$ based on a $q-p$ uncertainty with
$\delta q\sim\Delta q$ etc. (displacement version).
This would be fine if $p=m\dot{q}$ but we have seen that $p$ has
an unambiguous quantum mechanical meaning as in Section 2 and
the argument involving $\delta q=(p/m)\delta t$ is generally
not valid.
A more convincing argument can be made
via use of Ehrenfest type relations ($\hat{H}\sim E$)
\begin{equation}
\frac{d<\hat{Q}>}{dt}=\frac{1}{\hbar}<i[\hat{H},\hat{Q}]>;\,\,
\delta E\delta Q\geq\frac{\hbar}{2}\left|\frac{d}{dt}<\hat{Q}>
\right|
\label{6}
\end{equation}
and an argument that the time $\delta t$ for a change $\delta Q=
\delta <\hat{Q}>$ should be $\delta t=\delta Q/|d<\hat{Q}>/dt|$,
leading to $
{\bf ({\cal E}{\cal E}{\cal T})}\,\,\delta E\delta t\geq\hbar/2$
(without intervention
of $p$, where however one has $d<q>/dt\sim (1/m)<p>$).
\\[3mm]\indent
Since the beginning step $\delta q\sim (p/m)\delta t$ is generally
wrong in the crude argument above
let us adjust this following (\ref{4}) to be $\delta q\sim
(p/m)\delta\tau\sim(p/m_Q)\delta t$ where $p=W'$ is the conjugate
momentum (QM momentum).
Then with $\delta E\sim
(p/m)\delta p$ one will arrive at $\delta p\delta q\sim\delta E
\delta \tau\sim\delta E\delta t/(1-Q_E)$ and consequently a
correct displacement (or perhaps trajectory)
version of {\bf (ET)} should be
\begin{equation}
\delta E\delta \tau\geq\frac{\hbar}{2}\equiv\delta E\delta t
\geq (1-\partial_EQ)\frac{\hbar}{2}
\label{7}
\end{equation}
Since in the trajectory picture
we are dealing with $t=t(E)$ via $t=\partial_EW$ (with
$E$ a continuous variable here) one will have $\delta t=W_{EE}
\delta E$ so (\ref{7}) requires a curious condition
$(\clubsuit\clubsuit)\,\,(\delta E)^2\geq [(1-Q_E)\hbar/2W_{EE}]$.
Thus apparently for any energy change compatible with the microstate
picture $(\clubsuit\clubsuit)$ must hold.
This would seem to preclude any positive infinitesimal $\delta E$
unless $(1-Q_E)/W_{EE}\leq 0$ (with restrictions on any negative
$\delta E$). One can perhaps envision here microstates as developed
by Floyd generated at energy $E_0$ with initial conditions
$W_0(E_0),\,W'_0(E_0),$ and $W''_0(E_0)$ (or $(q_0,\,\dot{q}_0,
\,\ddot{q}_0)(E_0)$) and then imagine $E$ to be changed while keeping
the initial conditions constant. This would affect the time relations
on the trajectories and lead to a situation where the inequality
$(\clubsuit\clubsuit)$ could have meaning, but for general situations
$t=t(E)$ (\ref{7}) seems untenable, and hence we exclude
energy-time uncertainty for completely determined microstates
(see below however).
Further we cannot suggest its
applicability in the operator form ${\bf (ETT)}\,\,\Delta E\Delta \tau
\geq \hbar/2$ since that would clash with {\bf (ET)} which has a
more or less substantial foundation. Thus we argue that while
{\bf (ET)} may be acceptable its displacement version
{\bf (EET)} is not, except perhaps in the averaged form
${\bf ({\cal E}{\cal E}{\cal T})}$.
\\[3mm]\indent
As for computation in (\ref{7}) for example
one notes that the equations in \cite{fe,fn,fr} for example
have to be put in ``canonical" form as in \cite{fa} and,
in computing $W_E$, one should
only differentiate terms which under
a transformation $E\to E'\not= E$ do not correspond to a M\"obius
transformation of $exp(2iW/\hbar)$ (i.e. one only differentiates
terms in which ${\cal W}$ is changed under a transformation $E\to E'
\ne E$).
Regarding $1-Q_E$ one can use the relation $W'W'_E=m(1-Q_E)$ for
computation. As for uncertainty
however my interpretation of some remarks of Floyd suggests the
following approach. First I would claim that uncertainty type inequalities
are incompatible with functional relations between the quantities
(e.g. $p=\partial_qW=p(q)$ or $t=t(E)$ via $W$). Thus if
$W$ is completely known there is generally no room for uncertainty
since e.g. $\delta t\sim W_{EE}\delta E$ or with adjustment of
constants $\delta t=t-t_0=W_E$ completely specifies $\delta t$. Note
that one of the themes in \cite{fa} involves replacing canonical
transformations between independent variables $(p,q)$ with
coordinate transformations $q\to\tilde{q}$ with $p=W_q(q)$ depending
on $q$. Now we recall that the QSHJE is third order and one needs
three initial conditions $(q_0,\dot{q}_0,\ddot{q}_0)$ or $(W_0,W'_0,
W''_0)$ for example to determine a solution and fix the microstate
trajectories. However the Copenhagen representation uses an
insufficient set of initial conditions for microstates (and literally
precludes microstate knowledge). The substitute for exact microstate
knowledge is then perhaps an uncertainty principle.
It would be interesting to see if the two
pictures interact and one could perhaps think of uncertainty
relations involving $\delta t$ and $\delta E$
as in (\ref{7}) for example
as constraints
for the microstate initial conditions. However the microstates are
always compatible with the Schr\"odinger equation for any
initial conditions and hence lead to the same operator
conclusions in Hilbert space (such as
{\bf (ET)} for example).
In any event one can continue to use the standard quantum mechanics,
knowing that a deliberate sacrifice of information has been made
in not specifying the background microstates (i.e. quantum
mechanics in Hilbert space is imprecise by construction, leading
naturally to a probabilistic theory etc.). We refrain from
any further attempts at ``philosophy" here.
\section{SPIN}
\renewcommand{\theequation}{4.\arabic{equation}}
\setcounter{equation}{0}
We follow here \cite{eb} with references to \cite{ec,rf,
rh,sf,ta} (a very incomplete list).
First without discussion we write down some equations
from \cite{eb} and \cite{rh} ($\hbar$ is removed in \cite{eb} so
we reinsert it \`a la \cite{rh}). Thus one thinks of $\psi=Rexp(iS/\hbar)$
with
\begin{equation}
{\cal L}=-\rho\left[S_t+\frac{1}{2m}(\nabla S)^2+\frac
{\hbar^2}{8m}\left(\frac{\nabla\rho}{\rho}\right)^2+V\right]
\label{97}
\end{equation}
($\rho=R^2$).
Thence one determines the equations for a Madelung fluid
\begin{equation}
S_t+\frac{1}{2m}(\nabla S)^2+\frac{\hbar^2}{4m}\left[\frac{1}{2}
\left(\frac{\nabla\rho}{\rho}\right)^2-\frac{\Delta\rho}{\rho}\right]
+V=0;\,\,\rho_t+\nabla\cdot (\rho\nabla S/m)=0
\label{98}
\end{equation}
(cf. also \cite{sf}). The quantum potential is ($|\psi|=R=\rho^{1/2}$)
\begin{equation}
\frac{\hbar^2}{4m}\left[\frac{1}{2}\left(\frac{\nabla\rho}{\rho}
\right)^2-\frac{\Delta\rho}{\rho}\right]=-\frac{\hbar^2}{2m}
\frac{\Delta |\psi|}{|\psi|}=Q
\label{99}
\end{equation}
and this arises from the single nonclassical term $(\hbar^2/8m)
(\nabla\rho/\rho)^2$ in ${\cal L}$ of (\ref{97}). The internal motion
or spin is related to the Zitterbewegung idea and $\vec{v}\not=
\vec{p}/m$ in this context.
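As a numerical aside (illustrative only), the identity (\ref{99}) can be checked in 1-D with $\hbar=m=1$ for a Gaussian density $\rho=e^{-q^2}$, where both sides equal $(1-q^2)/2$:

```python
import math

def quantum_potential_two_ways(rho, q, h=1e-4):
    # LHS of (99): (1/4) [ (rho'/rho)^2 / 2 - rho''/rho ]   (hbar = m = 1, 1-D)
    r1 = (rho(q + h) - rho(q - h)) / (2 * h)
    r2 = (rho(q + h) - 2 * rho(q) + rho(q - h)) / h**2
    lhs = 0.25 * (0.5 * (r1 / rho(q))**2 - r2 / rho(q))
    # RHS of (99): -(1/2) R''/R with R = sqrt(rho)
    R = lambda x: math.sqrt(rho(x))
    R2 = (R(q + h) - 2 * R(q) + R(q - h)) / h**2
    rhs = -0.5 * R2 / R(q)
    return lhs, rhs

rho = lambda q: math.exp(-q * q)      # Gaussian density, R = exp(-q^2/2)
lhs, rhs = quantum_potential_two_ways(rho, 0.7)
print(lhs, rhs)                        # both close to (1 - 0.49)/2 = 0.255
```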
Now one defines
\begin{equation}
\vec{v}_B=\frac{\nabla S}{m};\,\,\vec{v}_S=\frac{\hbar\nabla\rho}
{2m\rho}
\label{9}
\end{equation}
and we note that the equations are invariant under $\vec{J}=\rho
\vec{v}_B\to\vec{J}+\nabla\times\vec{b}$. This leads to a spin
vector $\vec{s}$ with current velocity ${\bf (4A)}\,\,\vec{v}
=\vec{v}_B+\vec{v}_S\times\vec{s}=\vec{v}_{\parallel}+
\vec{v}_{\perp}$ and ${\bf (4B)}\,\,|\vec{s}|^2=1$ with $\vec{v}_S
\cdot\vec{s}=0$ and $\vec{v}_B\cdot (\vec{v}_S\times\vec{s})=0$.
One notes also that ${\bf (4C)}\,\,Q=-(m/2)\vec{v}_S^2-(\hbar/2)
\nabla\cdot\vec{v}_S$. For stationary states $\psi=\psi(q)exp(-iEt/
\hbar)$ with $\psi(q)=Rexp(iW/\hbar)$ one has then
\begin{equation}
\frac{1}{2m}(\nabla W)^2-\frac{\hbar^2}{2m}\frac{\Delta R}{R}
+V-E=0;\,\,\nabla\cdot (\rho\nabla W)=0
\label{10}
\end{equation}
with $Q$ given in (\ref{99}).
\\[3mm]\indent
We want to see now if we can relate the spin picture to the trajectory
representation. Since we are dealing with the same basic
Schr\"odinger equation the only new feature is 3-D. One can speak
of internal motion, spin, Zitterbewegung, etc. but once a current
velocity $\vec{v}$ appears as in {\bf (4A)} we are at least implicitly
making contact with the idea of particle motion and some comparison
between $\vec{v}$ and trajectory velocity $d\vec{q}/dt$ should be
possible and meaningful. This point may be arguable but we assume
it momentarily at least. Actually in \cite{eb} one explicitly deals
with $\vec{v}$ as a particle velocity so this should carry over to the
stationary state. However, the arguments in \cite{eb} about trajectory
representations do not take into account the work of FM or Floyd
involving microstates, so we feel the conclusions in \cite{eb} should
be correspondingly adjusted (see below for more on this). We will
show that the use of current velocity $\vec{v}$ as particle velocity
seems to be incorrect.
\\[3mm]\indent
Thus following the 1-D example of (\ref{4}) where $m(1-Q_E)\dot{q}
=W'$ we use the relation $t\sim\partial_EW$ again to suggest
($\vec{x}\sim\vec{q}$)
\begin{equation}
t-t_0=\partial_E\int_{\vec{x}_0}^{\vec{x}}
\nabla W\cdot d\vec{x}\sim\nabla t=
\nabla W_E
\label{11}
\end{equation}
Then from (\ref{10}) one has
\begin{equation}
\frac{1}{m}\nabla W\cdot\nabla W_E+Q_E-1=0;\,\,\nabla\cdot
[\partial_E(\rho\nabla W)]=0
\label{12}
\end{equation}
We can write then ${\bf (4D)}\,\,\partial_E(\rho\nabla W)=\nabla
\times\vec{\gamma}$ say and following $dt/dq=\partial_EW'=W'_E$ in
1-D, or $dq/dt=1/W'_E$, we suggest for (\ref{11}) the relation
${\bf (4E)}\,\,\partial t/\partial x_i=\partial W_E/\partial x_i\sim
\dot{x}_i=1/\partial_iW_E$. In a similar manner (\ref{12}) leads to
\begin{equation}
\nabla W\cdot\nabla t=m(1-Q_E);\,\,m\dot{x}_i=\frac{|\nabla W|^2}
{(1-Q_E)\partial_iW}
\label{13}
\end{equation}
which implies
${\bf (4F)}\,\,m(1-Q_E)(d\vec{x}/dt)\cdot\nabla W=3|\nabla W|^2$.
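For clarity, the factor of 3 in {\bf (4F)} arises simply from summing
(\ref{13}) over the three Cartesian components (a spelled-out step, using
nothing beyond (\ref{13}) itself):
\[
m(1-Q_E)\,\frac{d\vec{x}}{dt}\cdot\nabla W=\sum_{i=1}^{3}m(1-Q_E)\,
\dot{x}_i\,\partial_iW=\sum_{i=1}^{3}|\nabla W|^2=3|\nabla W|^2.
\]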
\\[3mm]\indent
Now the constructions of \cite{eb} with $\vec{v}_B\cdot (\vec{v}_S
\times \vec{s})=0$ as in {\bf (4B)} give $\vec{v}\cdot\vec{v}_B=
|\vec{v}_B|^2$ where $\vec{v}_B=(1/m)\nabla W$. If one could
identify $\vec{v}$ with $d\vec{x}/dt$ there would result then from
{\bf (4F)} the formula ${\bf (4G)}\,\,(1-Q_E)=3$ which is
unlikely at best. Hence we conclude that the current velocity
$\vec{v}$ of \cite{eb} cannot be identified with particle velocity
and the conclusions there about trajectories are not correct. Perhaps
the difficulty lies in the following observation. Even though the quantum
potential $Q$ can be recovered from $\vec{v}_S$ as in {\bf (4C)}
nevertheless the full quantum potential is not used in constructing
$\vec{v}_S$ via (\ref{9}) as can be seen from (\ref{99}). In the
trajectory picture from FM or Floyd the full quantum potential is used
in determining $d\vec{x}/dt$ as indicated in (\ref{13}) or {\bf (4F)}.
\\[3mm]\indent
A few additional comments should also be made about adapting the
development in \cite{eb} or \cite{rh} for stationary state situations.
Thus from (\ref{10}) we first extend $\vec{J}$ in the form $\vec{J}
=\rho\vec{v}_B\to \rho\vec{v}_B+\nabla\times (\rho\vec{c})$ and
satisfy the equation $\nabla\cdot\vec{J}=0$ via ${\bf (4H)}\,\,
\vec{J}=\rho\vec{v}_B+\nabla\times (\rho\vec{c})=
\nabla\times\vec{\phi}$
with $\vec{s}=(2m/\hbar)\vec{c}$ and $\vec{\phi}$ dependent on
$\vec{x}$. Then (cf. (\ref{9})) $\nabla\cdot\vec{J}=0\sim \nabla
\cdot (\rho\vec{v}_B)=0$ and one can write
$$\nabla\times (\rho\vec{c})=\nabla\rho\times\vec{c}+
\rho(\nabla\times\vec{c})=\rho\left[\vec{v}_S\times\vec{s}+
\frac{\hbar}{2m}\nabla\times\vec{s}\right]$$
with
\begin{equation}
\vec{v}=\vec{v}_B+\vec{v}_S\times\vec{s};\,\,\vec{J}=\rho\vec{v}
+\vec{J}_0=\nabla\times\vec{\phi}=\vec{\eta};
\,\,\vec{J}_0=\frac{\hbar\rho}{2m}
\nabla\times\vec{s}
\label{14}
\end{equation}
One can think of $\vec{J}_0$ as a ``pseudocurrent'' added on to deal
with cases of a variable spin vector $\vec{s}(\vec{x})$ and $\vec{s}$
still is to satisfy {\bf (4B)}. The term $\vec{\phi}$ is added here to
give a variable right side for $\vec{J}$. Now in order to determine
if this produces a solvable configuration we note that the equations
{\bf (4B)} consist of three equations in three unknowns $s_i$
for $\vec{s}=(s_1,s_2,s_3)$ and the coefficients in these equations
depend on $\nabla W$ via $\vec{v}_B$ and on $\rho$ via $\vec{v}_S$
yielding $\vec{s}=\vec{s}(\rho,\nabla W)$. We think of $\vec{\eta}$
as arbitrary for the moment and from ${\bf (4J)}\,\,\vec{v}=(1/\rho)
[\vec{\eta}-\vec{J}_0]$ one obtains via (\ref{98}) and (\ref{10})
\begin{equation}
|\vec{v}|^2=|\vec{v}_B|^2+|\vec{v}_S|^2=\frac{2}{m}(E-V)
+\frac{\hbar^2}{2m^2}\frac{\Delta\rho}{\rho}=\left|\frac{\vec{\eta}}
{\rho}-\frac{\hbar}{2m}\nabla\times\vec{s}\right|^2
\label{15}
\end{equation}
Thus we have two more equations, namely (\ref{15}) and {\bf (4J)}
in the form ${\bf (4K)}\,\,\vec{v}=(1/m)\nabla W+(\hbar/2m)(\nabla
\rho/\rho)=(\vec{\eta}/\rho)-(\hbar/2m)\nabla\times\vec{s}$. If
we put $\vec{s}=\vec{s}(\rho,\nabla W)$ in these equations they
become two equations (\ref{15}) and {\bf (4K)} for $\rho$ and
$\nabla W$ in terms of an arbitrary $\vec{\eta}$. Thus in principle
this configuration should be solvable
and some preliminary calculations
are promising. As an example take $\vec{s}=\vec{i}$ and
$\nabla\rho=\rho_2\vec{j}$ so $\nabla\times\vec{s}=0$ and $\nabla\rho
\times\vec{s}=-\rho_2\vec{k}$. Then $\vec{v}_B\perp
(\nabla\rho\times\vec{s})$ will imply $\nabla W=W_1\vec{i}
+W_2\vec{j}$ and one has ${\bf (4L)}\,\,\vec{\eta}/\rho=
(1/m)(W_1\vec{i}+W_2\vec{j})-(\hbar\rho_2/2m\rho)\vec{k}$.
Thus given $\vec{\eta}$ we must have
\begin{equation}
\frac{\rho W_1}{m}=\eta_1;\,\,\frac{\rho W_2}{m}=\eta_2;\,\,
-\frac{\hbar\rho_2}{2m}=\eta_3
\label{16}
\end{equation}
along with (\ref{15}) in the form
\begin{equation}
\frac{2\rho^2}{m}(E-V)+\frac{\hbar^2\rho\rho_{22}}{2m^2}=
\frac{\rho^2}{m^2}(W_1^2+W_2^2)+\frac{\hbar^2\rho_2^2}{4m^2}
\label{17}
\end{equation}
This can be satisfied if e.g. ${\bf (4M)}\,\,2m(E-V)=W_1^2+W_2^2$
and $\rho\rho_{22}=(1/2)\rho_2^2$. The latter equation is $\rho_{22}/
\rho_2=(1/2)(\rho_2/\rho)$ leading to $\log(\rho_2/\rho^{1/2})=
f(x,z)$ and eventually $2\rho^{1/2}=\alpha(x,z)y +\beta(x,z)$
with $\alpha,\,\beta$ independent of $y$. Evidently {\bf (4M)} requires also
$V=V(x,y)$.
\\[3mm]\indent
{\bf ACKNOWLEDGEMENT}$\,\,$ The author would like to
thank E. Floyd and M. Matone for valuable comments.
\newpage
\section{Introduction}
Soft gamma-ray repeaters (SGRs) were first discovered
as high-energy burst sources in the late 1970s \citep{Mazets1981}.
Once SGRs enter burst-active phases,
they produce numerous short-duration ($\sim$0.1 s) energetic ($\sim10^{41}$
erg) soft gamma-ray bursts. These bursts were distinguished
from cosmological gamma-ray bursts (GRBs)
by the soft spectra and the repeated activities.
Furthermore, as rare events, SGRs emit extremely bright giant flares
(GFs). A GF lasts for several hundred seconds and
its isotropic total energy amounts to 10$^{44}-10^{46}$ erg.
So far, only three have been recorded. On 1979 March 5,
the first GF was detected from SGR 0526-66 by the Venera spacecraft
\citep{Mazets1979}. The second GF was observed
from SGR 1900+14 on 27 August 1998 \citep{Hurley1999,Mazets1999,Feroci2001}.
Recently SGR 1806-20 emitted the third GF on 27 December 2004
\citep{terasawa2005,Hurley2005,Palmer2005,Mereghetti2005,Schwartz2005}.
The overall time profile of each GF is characterized
by a very intense spectrally hard initial spike
whose duration is $\lesssim$ 0.5 s,
and a subsequent pulsating tail which has a softer spectrum and lasts for
some hundred seconds. After the GFs, radio afterglows were observed
from SGR 1900+14 \citep{Frail1999}
and from SGR 1806-20 \citep{Gaensler2005,Cameron2005}.
SGRs show the slow spin periods ($5-8$ s) and
rapid spin-down rates ($10^{-11}-10^{-10}$ s s$^{-1}$)
\citep{Kouveliotou1998,Kouveliotou1999}.
Assuming magnetic dipole radiation,
we can estimate the magnetic fields of SGRs to be $10^{14}-10^{15}$ G
and SGRs are recognized
as magnetars \citep{Duncan1992,Thompson1995,Thompson1996}.
According to the magnetar model,
the energy source of both recurrent bursts and GFs is the
ultrastrong magnetic field:
stored magnetic energy inside a magnetar is suddenly released
via cracking of a magnetar's crust, and the large scale crustal
fracturing produces GFs.
Similar to earthquakes, the power-law distribution
of the radiated energy of the repeated burst and the lognormal distribution
of waiting times between successive bursts are reported \citep{Cheng1996,
Gogus2000}. These observations also support the idea
that SGR bursts originate from the starquakes.
In this paper, first, we focus on the SGR 1900+14 GF on 1998 August 27.
This flare was detected by gamma-ray instruments on the
Ulysses, Konus-Winds and BeppoSAX satellites
\citep{Hurley1999,Mazets1999,Feroci2001}.
However the flare was so intense that these instruments
underwent severe dead-time or pulse pile-up problems.
Consequently, the time profile during the most intense period
was not obtained and only the
lower limits of the peak flux intensity and fluence were reported
\citep{Hurley1999,Mazets1999}.
Here we present the clear peak profile of the SGR 1900+14 GF on 1998 August 27.
The profile was recorded by the Low Energy Particle instrument (hereafter LEP)
\citep{Mukai1994} onboard the GEOTAIL spacecraft, whose principal
objective is to study the Earth's magnetosphere.
The light curve for the first 350 ms of the GF
is unsaturated and has a high time resolution
of 5.58 ms. We also show the energetics of the flare.
Second, we present a comparative study of the initial spikes of
SGR GFs in 1998 and 2004, the latter of which was also detected by the same
instrument \citep{terasawa2005}. From both of the light curves,
we extract the characteristics of the initial spikes of SGR GFs,
focusing on the timescales discovered during the initial spikes.
Finally we argue that the observed timescales may provide a clue to
identify extragalactic SGR giant flares among short GRBs.
\section{Instrumentation and Observation}
The LEP is designed to measure three-dimensional velocity distributions
of the Earth's magnetospheric ions and electrons.
It consists of two nested sets of quadspherical electrostatic
analyzers; one analyzer to select ions, and the other to select electrons.
At the receiving end of the ion and electron optics,
seven microchannel plate detectors (MCPs) and seven channel electron
multipliers (CEMs) are used, respectively.
During the SGR 1806-20 GF in 2004, the peak flux was so intense that
the MCPs were saturated during the first 150 ms.
Alternatively the peak profile was derived from the CEMs, because
the CEMs are much less sensitive to gamma-rays than
the MCPs. After the most intense period, the MCPs recovered from the saturation
and observed the decay profile clearly.
On the other hand, during the SGR 1900+14 GF in 1998,
we obtained the peak profile from the MCPs. The peak flux of
the 1998 GF was about one-tenth of that of the 2004 GF (see below),
and hence the MCPs did not suffer the severe saturation problem.
The CEMs showed count increases ($\lesssim$ 20) corresponding to
those of the MCPs. However, since the background electron counts
for CEMs were high ($\sim$50$-$80), we do not use the CEM data for the
analysis of the SGR 1900+14 GF.
The LEP records the data every 15/8192 of the spacecraft
spin period over 32 sequences,
followed by a gap of 1/256 of the spin period. The spacecraft
spin period was 3.046 s on 1998 August 27, leading to
$3.046 \times \left( 15/8192 \right) = 5.58\times10^{-3}$ s $= 5.58$ ms
time resolution.
This differs slightly from the 5.48 ms time resolution
of the SGR 1806-20 GF observation in 2004, during which the spin period
was 2.993 s.
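The arithmetic behind these sampling intervals is easy to reproduce; the
short sketch below (the function name is ours, for illustration only)
recovers both quoted values.

```python
# Sanity check of the LEP sampling intervals quoted above: the instrument
# records one sample every 15/8192 of the spacecraft spin period.
def lep_time_resolution(spin_period_s):
    """Return the LEP sampling interval in seconds for a given spin period."""
    return spin_period_s * 15 / 8192

dt_1998 = lep_time_resolution(3.046)  # SGR 1900+14 GF, 1998 August 27
dt_2004 = lep_time_resolution(2.993)  # SGR 1806-20 GF, 2004 December 27
print(f"{dt_1998 * 1e3:.2f} ms, {dt_2004 * 1e3:.2f} ms")  # 5.58 ms, 5.48 ms
```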
In this report, we use the LEP calibration in which the effective energy range
and the detection efficiency are $\gtrsim$ 50 keV and $\sim$1\% for
incident photons, respectively. Since the LEP was not designed to
measure gamma-rays, this calibration was made after the launch of the GEOTAIL
spacecraft through the analyses of solar flare photons for which
the Hard X-ray Telescope onboard the Yohkoh satellite \citep{Kosugi1991}
provided photon energy spectra and intensities.
Recently we have made (i) GEANT4 simulations based on the detailed mass model
of the LEP, satellite structure and other instruments, and
(ii) the laboratory measurements
of the detection efficiency of the MCP \citep{Tanaka2007}, both of which have
successfully reproduced what were obtained from the solar flare
photon analyses. In addition, we found from the GEANT4 simulations that
the effect of the rotation of the spacecraft was negligibly small
around the spin phase angles corresponding to the two GFs.
Fig. 1 shows the first 350 ms unsaturated peak profile
of the GF from SGR 1900+14 on 27 August 1998.
Dead time and saturation effects are negligible for the count rates
smaller than $\sim$1000 counts per 5.58 ms:
only the peak count at $t$=5.58 ms was dead-time corrected.
The shaded bars in Fig. 1 indicate the instrumental data gaps of 12 ms.
The onset time ($t$=0) was 10:22:15.47 UT, which coincided with the
expected arrival time at the GEOTAIL position.
Before the onset, the count was less than 25 counts per 5.58 ms (shown
by a black arrow in Fig. 1(b)), i.e. the background level.
Then it increased to 792 counts within 5.58 ms, and this rapid increase
provided the upper
limit of the e-folding time of the initial rise as 1.6 ms.
After the onset, it reached a very sharp peak of 4776 counts
at $t$=5.58 ms. This increase yielded the e-folding time of
the intermediate rise to the peak as $3.1^{+0.9}_{-2.0}$ ms.
Following the peak, it decayed rapidly. The exponential decay time was
calculated as 2.9$\pm$0.2 ms from the counts for $t$=5.6$-$22 ms.
Note that the timing of the dip at $t$=22 ms corresponds to
the timing of the temporal count recovery from the total shut down
of the Konus-Wind instrument (see Fig. 6 of \citet{Mazets1999}).
After that, it increased again with an e-folding time of 16$\pm$2.5 ms
for $t$=22$-$50 ms and reached a flat-top second peak during 60$-$120 ms.
Finally the exponential decay was clearly observed and the decay
time was obtained as 23$\pm$1.6 ms during $t$=120$-$160 ms.
Note that a small hump was seen around 310 ms, which was
also observed with the Konus-Wind instrument (Fig.6 of \citet{Mazets1999}).
To convert physical quantities such as an energy flux
from the observed count rates, we need an
assumption on the photon energy spectrum, because the LEP detected
integrated photon numbers above 50 keV. We assume $kT$=240 keV
optically thin thermal bremsstrahlung (OTTB) spectrum which was
obtained from Ulysses observation \citep{Hurley1999}.
Resultant physical quantities are tabulated in Table 1, combined with
the Venera observation of the SGR 0526-66 GF in 1979 \citep{Mazets1999}
and the GEOTAIL observation of the SGR 1806-20 GF in 2004 \citep{terasawa2005}.
We found that the peak luminosity and the total emitted energy
are $2.3\times10^{46} d^2_{15}$ erg s$^{-1}$ and
$4.3\times10^{44} d^2_{15}$ erg ($E \gtrsim$ 50 keV), respectively.
Here we assume that the distance to SGR 1900+14 is 15 kpc
\citep{Vrba2000} and $d_{15}=\left( d/15 \mathrm{kpc} \right)$.
We also found that the total energy of this GF is about 130 times smaller than
that of the 2004 December 27 GF from SGR 1806-20,
although it is reported that the energy emitted during the pulsating tail
in each GF is comparable ($E_{\rm tail} \sim 10^{44}$ erg, see Table 1)
\citep{Hurley2005,Palmer2005,Mazets1999}. Note that this difference
by a factor of 130 is of the same order as that found in the radio observations:
the radio afterglow of the SGR 1900+14 GF is approximately 500 times
fainter than that of the SGR 1806-20 GF
\citep{Frail1999,Gaensler2005,Cameron2005}.
\section{Discussion}
We observed two of the three SGR GFs ever recorded:
from SGR 1900+14 in 1998 and
from SGR 1806-20 in 2004. Here we present a comparative study and
extract characteristics of the initial spikes of the SGR GFs.
Fig. 1 and Fig. 2 show the light curves of the initial spikes of
SGR 1900+14 GF and SGR 1806-20 GF, respectively.
Fig. 3 shows the detailed initial rise profiles of both GFs.
From these light curves, we identify four common features:
(1) initial steep rise, (2) intermediate rise to the peak,
(3) exponential decay, and (4) a small hump in the decay phase.
The calculated e-folding times corresponding to the structures
of (1)-(3) and the timing when we observed the structure (4)
are tabulated in Table 1.
First, we focus on (1) initial steep rise. The observed initial rise time
of SGR 1900+14 GF is $\le1.6$ ms. This is comparable to the
initial rise time of $\le1.3$ ms observed in the SGR 1806-20 GF,
implying the same physical mechanism producing the initial rapid
energy release of these two GFs. Note that
in the leading edge of the initial spike of the SGR 1806-20 GF,
Swift and RHESSI observed similar timescales
(Swift: $\sim$0.3 ms, RHESSI: $0.38\pm0.04$ ms) \citep{Palmer2005,Boggs2006}.
These correspond to our observation of the $\le1.3$ ms initial rise time.
According to
the reconnection model of GFs \citep{Thompson1995,Duncan2004},
reconnection typically occurs
at a fraction of the Alfven velocity \citep{Thompson1995,Duncan2004},
and this interpretation leads to
$\tau_{\rm mag} \sim L/0.1V_{\rm A} \sim 0.3 \left( L / 10 \rm km \right)$ ms,
where $L$ is the scale of the reconnection-unstable zone, and
$V_{\rm A} \sim c$ is the Alfven velocity in the magnetosphere.
This theoretical timescale $\tau_{\rm mag}$ seems consistent with the
observation of the initial rise time.
Next, we consider (2) intermediate rise to the peak.
The observed e-folding rise time of the SGR 1900+14
GF is 3.1 ms, which is shorter than the 9.4 ms rise time
observed in the SGR 1806-20 GF by a factor of about 3.0.
If this timescale is limited by the propagation of a fracture,
we can infer the fracture size $l$ as
$l\sim 4 \mathrm{km} \left( t_{\mathrm{rise}}/
4 \mathrm{ms} \right)$ \citep{Thompson2001}.
Using this, the fracture size of SGR 1900+14
is estimated as $\sim$3.1 km, and that of SGR 1806-20
as $\sim$9.4 km. It should be noted that our 9.4 ms rise time
observed in the SGR 1806-20 GF differs by a factor of $\sim$2 from
4.9 ms derived from the CLUSTER spacecraft observation of the
same GF \citep{Schwartz2005}. The origin of the difference between
these timescales is not understood, but could possibly be attributed
to the different energy coverages of the detectors.
Unfortunately, since the energy response of the CLUSTER detectors
against incoming X-ray and gamma-ray photons was not calibrated,
further quantitative comparison between GEOTAIL and CLUSTER
is not possible.
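The two scaling relations used in this section -- the reconnection timescale
$\tau_{\rm mag}\sim L/0.1V_{\rm A}$ and the fracture size
$l\sim 4\,{\rm km}\,(t_{\rm rise}/4\,{\rm ms})$ -- can be checked numerically
with the sketch below (constants and function names are ours, for
illustration only).

```python
# Order-of-magnitude checks of the two scaling relations quoted in the text.
C_LIGHT_M_S = 3.0e8  # Alfven velocity V_A ~ c in the magnetosphere

def tau_mag_ms(L_km):
    """Reconnection timescale tau_mag ~ L / (0.1 V_A), in milliseconds."""
    return (L_km * 1e3) / (0.1 * C_LIGHT_M_S) * 1e3

def fracture_size_km(t_rise_ms):
    """Fracture size l ~ 4 km (t_rise / 4 ms), scaling quoted in the text."""
    return 4.0 * (t_rise_ms / 4.0)

print(tau_mag_ms(10.0))       # ~0.33 ms: consistent with the <~1.6 ms rises
print(fracture_size_km(3.1))  # ~3.1 km for SGR 1900+14
print(fracture_size_km(9.4))  # ~9.4 km for SGR 1806-20
```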
In the initial spike of the SGR 1900+14 GF in 1998, we found a deep dip
and rebrightening following a sharp peak (Fig. 1).
We propose that this dip explains the temporal recovery of the counter
of the Konus-Wind \citep{Mazets1999}, since the dip and the recovery
occurred nearly simultaneously.
Note that Swift and RHESSI also detected a dip and rebrightening
in the leading edge of the initial spike of
the SGR 1806-20 GF \citep{Palmer2005,Boggs2006}, which could not be resolved
by the GEOTAIL observation.
This association implies that the dip and rebrightening are common features
of the initial spikes of the SGR GFs, although theoretical interpretation
is unclear.
Then, we concentrate on (3) exponential decay.
The decay time of the SGR 1900+14 GF is 23 ms.
This is shorter than the 66 ms decay time
of the SGR 1806-20 GF by a factor of 2.9,
which roughly coincides with the factor 3.0
found in the intermediate rise times.
From this similarity, we infer that the decay time is also
proportional to the fracture size of a magnetar's crust.
Finally, we focus on (4) small hump in the decay phase. Small humps are
observed nearly at the same timing: $\sim$310 ms in 1998 and $\sim$430 ms
in 2004 (note that the hump in 2004 GF was also observed with Swift
satellite \citep{Palmer2005}), although the total emitted energy
differs by a factor of 130.
This implies that the hump is caused by the continuing energy
injections rather than the environmental interactions of the flare ejecta.
To conclude, the observed initial rise times imply that
the onsets of both of the GFs result from magnetospheric instabilities.
The intermediate rise times, on the other hand,
are consistent with the idea that the main energy release
mechanism of the GFs is large-scale crustal fracturing.
For this interpretation to be valid, magnetospheric instabilities
should trigger the cracking of a magnetar's crust.
Further theoretical study is needed.
The above four structures discovered in the initial spikes
may provide a clue to identify extragalactic SGR GFs among short GRBs.
Recently, a possible detection of an extragalactic SGR GF
is reported \citep{2005GCN}. The bright short GRB 051103 was localized
near the M81/M82 galaxy group by the interplanetary network.
This association implies that GRB 051103 may be an SGR GF
outside the Local Group. Furthermore, if GRB 051103 was emitted by
an SGR in M81, the isotropic total energy amounts to
$\sim 7 \times 10^{46}$ erg, which
is of the same order as the energy of the SGR 1806-20 GF \citep{Frederiks2006}.
Both the existence of star-forming regions
inside the IPN error quadrilateral of GRB 051103
and the non-detection of optical and radio afterglows support
the SGR hypothesis \citep{Ofek2006a}.
Here we investigate the hypothesis from the viewpoint of its light curve.
(1) The light curve of GRB 051103 observed by Konus-Wind
showed a steep rise and the timescale is reported as $\leq$ 6 ms
\citep{Frederiks2006}. This nearly corresponds to the
intermediate rise time of a galactic SGR GF presented above,
although we do not know whether the timescale observed by Konus-Wind
represents an initial rise time or an intermediate rise time.
Furthermore, (2) quasi-exponential decay was seen and the
decay time is $\sim$ 55 ms \citep{Frederiks2006}.
This timescale is also of the same order of magnitude as
the decay times presented above.
These two similarities found in the light curves
also support the SGR hypothesis.
A hump in a decay phase was not seen in the light curve of
GRB 051103. This is explicable in terms of the detector's
detection limit, because the flux of the hump, if it exists, is expected
to be about one hundredth of the peak flux.
\acknowledgments
We thank R. Yamazaki for valuable comments and discussions. We are also
grateful to all the members of GEOTAIL team for their collaboration.
Y.T.T. is receiving a financial support from JSPS.
\section{Introduction: dataset shift breaks learned biomarkers}
Biomarkers are measurements that provide information about a medical condition
or physiological state \citep{strimbu2010biomarkers}. For example, the presence
of an antibody may indicate an infection; a complex combination of features
extracted from a medical image can help assess the evolution of a tumor.
Biomarkers are important for diagnosis, prognosis, and treatment
or risk assessments.
Complex biomedical measures may carry precious medical information,
as with histopathological images or genome sequencing of biopsy samples in
oncology. Identifying quantitative biomarkers from these requires
sophisticated statistical analysis. With large datasets becoming
accessible, supervised machine learning provides new promises by
optimizing the information extracted to relate to a specific output variable of
interest, such as a cancer diagnosis
\citep{andreu2015big,faust2018deep,deo2015machine}. These methods,
cornerstones of artificial intelligence, are starting to
appear in clinical practice: a machine-learning based radiological tool
for breast-cancer diagnosis has recently been approved by the
FDA\footnote{\url{https://fda.report/PMN/K192854}}.
Can such predictive biomarkers, extracted through complex data processing, be safely
used in clinical practice, beyond the initial research settings? One risk
is the potential mismatch, or \emph{dataset shift}, between the distribution
of the individuals used to estimate this statistical link and that of the target
population that should benefit from the biomarker. In this case,
the extracted associations may not apply to the target
population \citep{kakarmath2020best}.
Computer aided diagnostic of thoracic diseases
from X-ray images has indeed been shown to be unreliable for individuals of a
given sex if built from a cohort over-representing the other sex
\citep{larrazabal2020gender}.
More generally, machine-learning systems may fail on data from different
imaging devices, hospitals, populations with a different age distribution, \emph{etc.}
Dataset biases are in fact frequent in medicine. For instance selection
biases --\emph{e.g.}, due to volunteer self-selection, non-response, or
dropout-- \citep{rothman2012epidemiology,tripepi2010selection}
may cause cohorts to capture only a small range of possible patients and
disease manifestations in the presence of
spectrum effects \citep{ransohoff1978problems,mulherin2002spectrum}.
Dataset shift or dataset bias can
cause systematic errors that cannot be fixed by
acquiring larger datasets and require specific methodological care.
In this article, we consider predictive biomarkers identified with supervised machine learning.
We characterize the problem of dataset shift, show how it can hinder the use
of machine learning for health applications
\citep{woo2017building,wynants2020prediction}, and provide mitigation
strategies.
\section{A primer on machine learning for biomarkers}
\subsection{Empirical Risk Minimization}
Let us first introduce the principles of machine learning used to identify biomarkers.
Supervised learning captures, from observed data,
the link between a set of input
measures (features) $X$ and an output (e.g.\, a condition) $Y$: for example the relation between the absorption spectrum of
oral mucosa and blood glucose concentration \citep{kasahara2018noninvasive}. A
supervised learning algorithm finds a function $f$ such that $f(X)$ is as close as possible to
the output $Y$.
Following machine-learning terminology, we call the system's best guess $f(X)$
for a value $X$ a \emph{prediction}, even when it does not concern a measurement
in the future.
Empirical Risk Minimization, central to machine learning,
uses a loss function $L$ to measure how far a
prediction $f(X)$ is from the true value $Y$, for example the squared
difference:
\begin{equation}
L(Y, f(X)) = (Y - f(X))^2 \; .
\end{equation}
The goal is to find a function $f$ that has
a small \emph{risk}, which is the \emph{expected} loss on
the true distribution of \(X\) and \(Y\), i.e.\, on \emph{unseen individuals}.
The true risk cannot be computed in practice: it would require having seen all
possible patients, the true distribution of patients.
The \emph{empirical} risk is used instead: the average error over available
examples,
\begin{equation}
\hat{R}(f) = \frac{1}{n}\sum_{i=1}^n L(y_i, f(x_i)) \; ,
\end{equation}
where $\{(x_i, y_i)\,,\,i=1,\dots,n\}$ are available $(X, Y)$ data,
called \emph{training} examples.
The statistical link of interest is then approximated by choosing $f$
within a family of candidate functions as
the one that minimizes the empirical risk \(\hat{R}(f)\).
The crucial assumption underlying this very popular approach is that the
prediction function $f$ will then be applied to individuals drawn from the same
population as the training examples $\{x_i, y_i\}$. It can be important
to distinguish the \emph{source} data, used to fit and evaluate a machine-learning model (e.g. a dataset
collected for research), from the \emph{target} data, on which
predictions are meant to be used for clinical applications (e.g. new visitors of a hospital).
Indeed, if the training
examples are not representative of the target population -- if there is a
dataset shift -- the empirical risk is a poor estimate of the expected error,
and $f$ will not perform well on individuals from the target population.
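To make the above concrete, the toy sketch below (data and function names
are ours, not from any cited study) fits a one-parameter predictor
$f(x)=wx$ by minimizing the empirical risk with a squared loss.

```python
def empirical_risk(w, xs, ys):
    """Average squared loss of the predictor f(x) = w * x on the sample."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

def fit(xs, ys):
    """Closed-form minimizer of the empirical risk for f(x) = w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Toy training sample (hypothetical measurements and outcomes).
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.0]
w = fit(xs, ys)
# The fitted w attains a lower empirical risk than nearby alternatives...
assert empirical_risk(w, xs, ys) <= empirical_risk(w + 0.1, xs, ys)
assert empirical_risk(w, xs, ys) <= empirical_risk(w - 0.1, xs, ys)
# ...but this certifies nothing about individuals drawn from a shifted
# target population: that is exactly the dataset-shift problem.
```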
\subsection{Evaluation: Independent test set and cross-validation}
\label{sec:org87b9cec}
Once a model has been estimated from training examples, measuring its error
on these same individuals results in a (sometimes wildly) optimistic estimate of
the expected error on \emph{unseen} individuals (\citet[Sec.
7.4]{friedman2001elements}, \citet[Sec. 1, ``Association vs Prediction'']{poldrack2020establishment}).
Indeed, predictors chosen from a rich family of functions are very
flexible and can learn rules that fit tightly the training examples but fail
to generalize to new individuals. This is called \emph{overfitting}.
To obtain valid estimates of the expected performance on new data, the
error is measured on an independent sample held out during training, called the
test set.
The most common approach to obtain such a test set is to randomly split the
available data.
This process is usually repeated with several splits, a procedure called
cross-validation \citep[Sec. 7]{arlot2010survey,friedman2001elements}.
When training and test examples are chosen uniformly from the same sample, they
are drawn from the same distribution (i.e. the same population): there is no
dataset shift.
Some studies also measure the error on an \emph{independent} dataset
\citep[e.g.][]{beck2011systematic,jin2020generalizable}. This helps
establish external validity, assessing
whether the predictor will perform well outside of the dataset used to
define it \citep{bleeker2003external}.
Unfortunately, the biases in participant recruitment may be similar
in independently collected datasets. For example if patients with severe
symptoms are difficult to recruit, this is likely to distort all datasets
similarly. Testing on a dataset collected independently is therefore a useful
check, but no silver bullet to rule out dataset shift issues.
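A bare-bones sketch of $K$-fold cross-validation (pure Python, names ours)
makes explicit that each individual is held out exactly once and never used
to train the model evaluated on it.

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k contiguous folds over n samples."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

tested = []
for train, test in k_fold_indices(10, 3):
    assert not set(train) & set(test)  # held-out samples never seen in training
    tested += test
assert sorted(tested) == list(range(10))  # every sample tested exactly once
```

In practice the sample is shuffled before splitting, so train and test folds
are drawn from the same population -- which is precisely why cross-validation
alone cannot detect dataset shift.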
\section{False solutions to tackling dataset shift}\label{sec:misconceptions}
We now discuss some misconceptions and
confusions with problems not directly related to dataset shift.
\begin{figure*}[t!]
\centering
\begin{minipage}{.44\textwidth}
\centerline{\sffamily\bfseries Data generation}
\hspace*{.05\linewidth}%
\includegraphics[width=.6\textwidth]{parabolas_schema.pdf}
\hspace*{.05\linewidth}%
\includegraphics[width=.23\textwidth]{parabolas_dataset.pdf}
\end{minipage}%
\hspace{-.2\textwidth}%
\begin{minipage}{.69\textwidth}
\includegraphics[width=\textwidth]{parabolas.pdf}
\end{minipage}%
\caption{\label{fig:parabolas} %
\textbf{Classification with dataset shift -- regressing out a correlate of the
shift does not help generalization.} The task is to classify patients
(orange) from healthy controls (blue), using
2-dimensional features. Age, indicated by the shade of gray, influences
both the features and the probability of disease.
%
\textbf{Left: generative process for the simulated data.} Age influences
both the target \(Y\) and the features \(X\), and \(Y\) also has an effect on
\(X\). Between the source and target datasets, the distribution of age
changes.
%
The two arrows point towards increasing age and represent the Healthy and
Diseased populations, corresponding to the orange and blue clouds of
points in the right panel.
%
The grayscale gradient in the arrows represents the increasing age of
the individuals (older individuals correspond to a darker shade).
%
Throughout their life, individuals can jump from the Healthy trajectory to
the Diseased trajectory, which is slightly offset in this 2-dimensional
feature space. As age increases, the prevalence of the
disease increases, hence the Healthy trajectory contains more
individuals of
young ages (its wide end), and fewer at older ages (its narrow end) -- and
vice-versa for the Diseased trajectory.
%
\textbf{Right: predictive models}
%
In the target data (bottom row), the age distribution is shifted:
individuals tend to be older. Indeed, elderly people are often less
likely to participate in clinical studies \citep{heiat2002representation}.
%
\textbf{First column:} no correction is applied. As the situation is close to
a covariate shift (\Cref{sec:covariate-shift}), a powerful learner (RBF-SVM)
generalizes well to the target data. An over-constrained model -- Linear-SVM --
generalizes poorly.
%
\textbf{Second column:} wrong approach. To remove
associations with age, features are replaced by the residuals after regressing
them on age. This destroys the signal and results in poor performance for both
models and datasets.
%
\textbf{Third column:} Samples are weighted to
give more importance to those more likely in the target
distribution. Small circles indicate
younger individuals, with less influence on the classifier estimation.
This reweighting improves prediction for the linear model on the older population.
%
}
\end{figure*}
\paragraph{``Deconfounding'' does not correct dataset shift for
predictive models}
Dataset shift is sometimes confused with the notion of
\emph{confounding}, as both settings arise from an undesired effect in the data.
Confounding comes from \emph{causal analysis}, estimating
the effect of a \emph{treatment} -- an intervention, sometimes fictional -- on an outcome. A confounder is
a third variable -- for example age, or a comorbidity -- that influences both the
treatment and the outcome. It can produce a non-causal association
between the two \citep[See][Chap. 7, for a precise definition]{hernan2020causal}.
However, the machine-learning methods we consider here capture statistical
associations, but \emph{do not target causal effects}.
Indeed, for biomarkers, the association itself is interesting, whether causal or not.
Elevated body temperature may be the consequence of a condition, but also
cause a disorder. It is a clinically useful measure in both settings.
Tools for causal analysis are not all useful for prediction,
as pointed out by seminal textbooks:
``if the
goal of the data analysis is purely predictive, no adjustment for confounding is
necessary [...] the concept of confounding does not even apply.''\citep[Sec.
18.1]{hernan2020causal}, or \citet{pearl2019seven}.
In prediction settings, applying procedures meant to adjust for confounding
generally degrades prediction performance without solving the dataset
shift issue.
\Cref{fig:parabolas} demonstrates the detrimental effect of ``deconfounding''
on simulated data: while the target population is shifted due to a
different age distribution, removing the effect of age also removes
the separation between the two outcomes of interest.
The same behavior is visible on real epidemiologic data with age shifts,
such as predicting the
smoking status of participants in the UKBiobank study
\citep{sudlow2015uk}, as shown in \Cref{fig:ukb-smoking}.
Drawing training and testing samples with different age distributions
highlights the effect of these age shifts on prediction performance
(see \Cref{sec:ukb-experiment-details} for details on the procedure).
For a given learner and test population, training on a different population degrades prediction.
For example, predictions on the old population are degraded when the model is trained on the young population.
A flexible model (Gradient Boosting) outperforms the linear model with or without dataset shift.
``Regressing out'' the age (as in the second column of \Cref{fig:parabolas}, ``+ regress-out'' strategy in \Cref{fig:ukb-smoking}) degrades the predictions in \emph{all} configurations.
For both illustrations on simulated and real data (\Cref{fig:parabolas}
and \ref{fig:ukb-smoking}), we also demonstrate an approach suitable for
predictive models: reweighting training examples to give more importance to those more likely in the test population.
This approach improves the predictions of the overconstrained (misspecified) linear model in the presence of dataset shift, but degrades the predictions of the powerful learner.
The non-linear model already captures the correct separation for both young and old individuals, thus reweighting examples does not bring any benefit but only increases the variance of the empirical risk.
A more detailed discussion of this approach, called \emph{importance weighting}, is provided in \Cref{sec:importance-weighting}.
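For concreteness, the ``regress-out'' operation criticized above amounts to replacing the features by their residuals after a linear regression on the confound. The numpy-only sketch below (the simulated data and variable names are illustrative, not the paper's actual pipeline) makes explicit why this is destructive for prediction: the residualized features are exactly orthogonal to age, so any age-related -- but possibly predictive -- signal is removed along with the shift.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, size=n)
# Toy features influenced by age (illustrative simulation only).
X = np.column_stack([0.05 * age + rng.normal(size=n),
                     -0.02 * age + rng.normal(size=n)])

# Residualize: regress each feature on age (plus an intercept), keep residuals.
A = np.column_stack([np.ones(n), age])        # design matrix [1, age]
beta, *_ = np.linalg.lstsq(A, X, rcond=None)  # per-feature regression coefficients
X_resid = X - A @ beta                        # "deconfounded" features

# The residuals carry no linear age information at all:
print(np.abs((age - age.mean()) @ X_resid).max())  # numerically ~0
```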
\begin{figure*}[t!]
\centering
\includegraphics[width=.7\textwidth]{ukb_smoking_prediction.pdf}
\caption{\label{fig:ukb-smoking} \textbf{Predicting the smoking status
of UKBiobank participants.} Different predictive models are trained on 90K UKBiobank participants and tested on 9K participants with a possibly shifted age distribution. ``young $\rightarrow$ old'' means the training set was drawn from a younger sample than the testing set. Models perform better when trained on a sample drawn from the same population as the testing set. Reweighting examples that are more likely in the test distribution (``+ reweighting'' strategy, known as Importance Weighting, \Cref{sec:importance-weighting}) alleviates the issue for the simple linear model, but is detrimental for the Gradient Boosting. Regressing out the age (``+ regress-out'' strategy) is a bad idea and degrades prediction performance in all configurations.}
\end{figure*}
\paragraph{Training examples should not be selected to be homogeneous}
To obtain valid predictive models that perform well beyond the training sample,
it is crucial to collect datasets that represent the whole population and
reflect its diversity as much as possible
\citep{kakarmath2020best,england2019artificial,o2016weapons}. Yet clinical research often
emphasizes the opposite: very
homogeneous datasets and carefully selected participants. While this may help
reduce variance and improve statistical testing, it degrades prediction
performance and fairness. In other words, the machine-learning system may perform worse for segments of the population that are under-represented in the dataset, resulting in uneven quality of care if it is deployed in clinical settings.
Therefore in \emph{predictive} settings, where the goal is
machine-learning models that generalize well, large and diverse datasets are desirable.
\paragraph{Simpler models are not less sensitive to dataset shift}
Often, flexible models can be more robust to dataset
shifts, and thus generalize better, than linear models
\citep{storkey2009training}, as seen in
\Cref{fig:ukb-smoking,fig:parabolas}. Indeed, an over-constrained (ill-specified) model may
only fit well a restricted region of the feature space, and its performance can
degrade if the distribution of inputs changes, even if the relation to the
output stays the same (i.e.\, when covariate shift occurs, \Cref{sec:covariate-shift}).
Dataset shift does not call for simpler models as it is not a small-sample
issue. Collecting more data from the same sources will not correct systematic dataset bias.
\section{Preferential sample selection: a common source of shift}
\label{sec:preferential-sample-selection}
In 2017, competitors in the million-dollar-prize
\href{https://www.kaggle.com/c/data-science-bowl-2017/overview}{data science
bowl} used machine learning to predict if individuals would be diagnosed with
lung cancer within one year, based on a CT scan.
Assuming that the winning model achieves satisfying accuracy on left-out
examples from this dataset, is it ready to be deployed in hospitals? Most likely
not.
Selection criteria may
make this dataset unrepresentative of the general
population of potential lung cancer patients.
Selected participants verified many criteria, including being a smoker and not
having recent medical problems such as pneumonia. How would the winning
predictor perform on a more diverse population? For example, another disease
could present features that the classifier could mistakenly take for signs of lung
cancer.
Beyond explicit selection criteria, many factors such as age, ethnicity, or
socioeconomic status influence participation in biomedical studies
\citep{henrich2010most,murthy2004participation,heiat2002representation,chastain2020racial}.
Not only can these shifts reduce overall predictive performance, they can also
lead to discriminative clinical decisions for poorly represented populations
\citep{oakden2020hidden,gianfrancesco2018potential,barocas-hardt-narayanan,abbasi2020risk,cirillo2020sex}.
The examples above are instances of preferential selection, which happens when
members of the population of interest do not have equal probabilities of being
included in the source dataset: the selection \(S\) is not independent of \((X,
Y)\).
Preferential sample selection is ubiquitous and cannot always be prevented by
careful study design \citep{bareinboim2012controlling}. It is therefore a major
challenge to the identification of reliable and fair biomarkers.
Beyond preferential sample selection, there are many other sources of dataset
shifts, e.g. population changes over time, interventions such as the
introduction of new diagnostic codes in Electronic Health Records
\citep{saez2020ehrtemporalvariability}, and the use of different acquisition
devices.
\subsection{The selection mechanism influences the type of dataset shift}
The appropriate correction for a dataset shift depends on the nature of this shift,
characterized by which distributions are modified and how \citep{storkey2009training}.
Knowledge of the mechanism producing the dataset shift
helps formulate hypotheses about distributions that remain unchanged in the
target data \citep[Chap. 5]{scholkopf2012causal,peters2017elements}.
\Cref{fig:sample-selection-bias} illustrates this process
with a simulated example of preferential sample selection.
We consider the problem of predicting the volume \(Y\) of a tumor from
features \(X\) extracted from contrast CT images. These features can be
influenced not only by the tumor size, but also by the dosage of a contrast
agent \(M\).
The first panel of \Cref{fig:sample-selection-bias} shows a selection
of data independent of the image and tumor volume: there is no dataset shift.
In the second panel, selection depends on the CT image itself (for example images
with a low signal-to-noise ratio are discarded). As selection is independent of
the tumor volume \(Y\) given the image \(X\), the distribution of images changes but the
conditional distribution \(P(Y \,|\, X)\) stays the same: we face a
\emph{covariate shift} (\Cref{sec:covariate-shift}). The learned association
remains valid.
Moreover, reweighting examples to give more importance to those
less likely to be selected can improve predictions for target
data (\Cref{sec:importance-weighting}), and
it can be done with only \emph{unlabelled} examples from the target data.
In the third panel, individuals who received a low contrast agent dose are less
likely to enter the training dataset. Selection is therefore not independent of
tumor volume (the output) given the image values (the input features). Therefore
we have sample selection bias: the relation \(P(Y \,|\, X)\) is different in
source and target data, which will affect the performance of the prediction.
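The three selection scenarios of \Cref{fig:sample-selection-bias} can be reproduced numerically. In the toy generative model below (numpy only; the noise level and selection thresholds are arbitrary choices), selecting on the image \(X\) leaves the regression of \(Y\) on \(X\) essentially unchanged, whereas selecting on the contrast dose \(M\) visibly biases it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Y = rng.normal(size=n)                 # tumor volume (the output)
M = rng.normal(size=n)                 # imaging parameter, e.g. contrast dose
X = Y + M + 0.1 * rng.normal(size=n)   # image feature: X := Y + M + noise

def slope(x, y):
    # OLS slope of y on x: an estimate of the linear part of E[Y | X].
    return np.cov(x, y)[0, 1] / np.var(x)

full = slope(X, Y)                     # no selection: the reference value
sel_on_x = slope(X[X > 0], Y[X > 0])   # selection based on X only (second panel)
sel_on_m = slope(X[M > 0], Y[M > 0])   # selection based on M (third panel)

print(full, sel_on_x, sel_on_m)        # the last slope is visibly biased
```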
\begin{figure}
\begin{minipage}{.3\textwidth}
\begin{minipage}{\textwidth}
\begin{minipage}{.57\textwidth}
\includegraphics[width=\textwidth]{selection_bias_1.pdf}
\end{minipage}%
\begin{minipage}{.42\textwidth}
\includegraphics[width=\textwidth]{sample_selection_bias_1.pdf}
\centering
\(S \perp \!\!\! \perp X\,,\,Y\)
\end{minipage}%
\end{minipage}%
\begin{minipage}{\textwidth}
\vspace{-7pt}
\begin{minipage}{.57\textwidth}
\includegraphics[width=\textwidth]{selection_bias_3.pdf}
\end{minipage}%
\begin{minipage}{.42\textwidth}
\includegraphics[width=\textwidth]{sample_selection_bias_3.pdf}
\centering
\(Y \perp \!\!\! \perp S \, | \, X\)
\end{minipage}
\begin{minipage}{\textwidth}
\begin{minipage}{.57\textwidth}
\vspace{-7pt}
\includegraphics[width=\textwidth]{selection_bias_2.pdf}
\end{minipage}%
\begin{minipage}{.42\textwidth}
\includegraphics[width=\textwidth]{sample_selection_bias_2.pdf}
\centering
\(Y \not \! \perp \!\!\! \perp S \, | \, X\)
\end{minipage}
\end{minipage}
\end{minipage}
\end{minipage}
\caption{\textbf{Sample selection bias: three examples.}
On the right are graphs giving conditional independence relations
\citep{pearl2016causal}.
\(Y\) is the lesion volume to be predicted (i.e.\, the output). \(M\) are the imaging
parameters, e.g.\, contrast agent dosage. \(X\) is the image, and depends both
on \(Y\) and \(M\) (in this toy example \(X\) is computed as
\(X \coloneqq Y + M + \epsilon\), where \(\epsilon\) is additive noise).
\(S\) indicates that data is selected to enter the source dataset (orange
points) or not (blue points). The symbol \( \perp \!\!\! \perp \) means independence
between variables.
%
Preferentially selecting samples results in a dataset shift (middle and
bottom row). Depending on whether \(Y \perp \!\!\! \perp S \,|\, X\), the conditional
distribution of \(Y \,|\, X\) -- here lesion volume given the image -- estimated on
the selected data may be biased or not.
}
\label{fig:sample-selection-bias}
\end{figure}
As these examples illustrate, the causal structure of the data helps identify
the type of dataset shift and what information is needed to correct it.
When such information is available, it may be possible to leverage it in order to improve robustness to dataset shift \citep[e.g.\,][]{subbaswamy2019preventing}.
\section{Importance weighting: a generic tool against dataset shift}\label{sec:importance-weighting}
Importance weighting is a simple approach to dataset shift that applies to
many situations and can be easy to implement.
Dataset shift occurs when the joint distribution of the features and outputs is
different in the source (data used to fit the machine-learning model) and in the target data.
Informally, importance weighting consists in \emph{reweighting} the
available data to create a pseudo-sample that follows the same distribution as
the target population.
To do so, examples are reweighted by their \emph{importance weights} -- the
ratio of their likelihood in target data over source data. Examples that are
rare in the source data but are likely in the target data are
more relevant and therefore receive higher weights.
A related approach is importance \emph{sampling} -- resampling the training data according to the importance weights.
Many statistical learning algorithms -- including Support Vector Machines,
decision trees, random forests, neural networks -- naturally
support weighting the training examples. Therefore, the challenge lies mostly
in the estimation of the appropriate sample weights and the learning algorithm
itself does not need to be modified.
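As a minimal illustration of these definitions -- assuming, for simplicity, that both source and target densities are known Gaussians, which is rarely the case in practice -- the importance weights and a weighted estimate can be computed as follows:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Density of a normal distribution; used because both densities are
    # known by construction in this toy example (a strong assumption).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x_source = rng.normal(0.0, 1.0, size=2000)   # source samples ~ N(0, 1)

# Importance weights: likelihood under the target N(1, 1) over the source N(0, 1).
w = gaussian_pdf(x_source, 1.0, 1.0) / gaussian_pdf(x_source, 0.0, 1.0)

# Examples likely under the target (large x) receive large weights.  Many
# learners accept such weights directly (e.g. a `sample_weight` argument);
# the self-normalized weighted mean is the simplest weighted estimator:
target_mean_estimate = np.sum(w * x_source) / np.sum(w)
print(target_mean_estimate)   # should be close to the target mean of 1.0
```

In practice the densities (or directly their ratio) must be estimated, for example by training a classifier to discriminate source from target examples.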
To successfully use importance weighting, no part of the target distribution
should be completely unseen.
For example, if sex (among other features) is used to predict heart failure and
the dataset only includes men, importance weighting cannot transform this
dataset and make its sex distribution similar to that of the general population
(\Cref{fig:importance-weighting-positivity}).
Conversely, the source distribution may be broader than the target distribution
(as seen for example in \Cref{fig:parabolas}).
\begin{figure}[h]
\centering
\includegraphics[width=.2\textwidth]{importance_weighting_positivity.pdf}
\caption{\textbf{Dataset shifts that may or may not be compensated by
reweighting} -- \textbf{Left:} distribution of sex can be balanced by downweighting
men and upweighting women. \textbf{Right:} women are completely missing; the
dataset shift cannot be fixed by importance weighting.}
\label{fig:importance-weighting-positivity}
\end{figure}
Importance weights can also be applied to validation examples, which may produce a more accurate estimation of generalization error on target data.
Importance weighting is a well-known approach and an important body of literature focuses on its application and the estimation of importance weights.
It is illustrated on small datasets for the prediction of breast cancer in \citet{dudik2006correcting} and heart disease in \citet{kouw2019review}.
However, it cannot always be applied: some knowledge of the target distribution is required, and the source distribution must cover its support.
Moreover, importance weighting can increase the variance of the empirical
risk estimate, and thus sometimes \emph{degrades} performance -- as seen in \Cref{fig:ukb-smoking}.
It is therefore a straightforward and popular approach to consider, but not a complete solution.
It is particularly beneficial when using a simple learning model which
cannot capture the full complexity of the data, such as the linear models
in \Cref{fig:parabolas}. Indeed, simple models are often preferred in
biomedical applications because they are easy to interpret and audit.
In \Cref{sec:definition-estimation-iw}, we provide a more precise definition of
the importance weights, as well as an overview of how they can be estimated and
used.
\section{Other approaches to dataset shift}
Beyond importance weighting, many other solutions to dataset shift have been proposed.
They are typically more difficult to implement, as they require adapting or designing new learning algorithms.
However, they may be more effective, or applicable when information about the target distribution is lacking.
We summarize a few of these approaches here.
A more systematic review can be found in \citet{kouw2019review}.
\Citet{weiss2016survey} and \citet{pan2009survey} give systematic reviews of transfer learning (a wider family of learning problems which includes dataset shift).
The most obvious solution is to do nothing -- ignoring the dataset shift.
This approach should be included as a baseline when testing on a sample of target data -- which is a prerequisite to clinical use of a biomarker \citep{storkey2009training,woo2017building}.
With flexible models, this is a strong baseline that can outperform
importance weighting, as in the right panel of \Cref{fig:ukb-smoking}.
Another approach is to learn representations -- transformations of the
signal -- that are invariant to the shift \citep{achille2018emergence}.
Some deep-learning methods strive to extract features that are predictive
of the target while having similar distributions in the source and target
domains \citep[e.g.\,][]{long2015learning}, or while preventing an adversary from distinguishing source and target data \citep[``domain-adversarial'' learning, e.g.\,][]{tzeng2017adversarial}.
When considering such methods, one must be aware of the fallacy shown in
\Cref{fig:parabolas}: making the features invariant to the effect driving the dataset shift can
remove valuable signal if this effect is not independent of the outcome
of interest.
It may also be possible to explicitly model the mapping from source to target domains, e.g.\, by training a neural network to translate images from one modality or imaging device to another, or by relying on optimal transport \citep{courty2016optimal}.
Finally, synthetic data augmentation sometimes helps -- relying on known invariances, e.g.\, for images by applying affine transformations, resampling, \emph{etc.}, or with learned generative models \citep[e.g.][]{antoniou2017data}.
\subsection{Performance heterogeneity and fairness}
It can be useful not to target a specific population, but rather to
find a predictor robust to certain dataset shifts.
Distributionally robust optimization tackles this goal by
defining an ambiguity, or uncertainty set -- a set of distributions to which the target distribution might belong -- then minimizing the worst risk across all distributions in this set \citep[see][for a review]{rahimian2019distributionally}.
The uncertainty set is often chosen to be centered on the empirical (source) distribution with respect to some divergence between distributions.
Popular choices for this divergence are the Wasserstein distance, \(f\)-divergences (e.g. the KL divergence) \citep{duchi2018learning}, and the Maximum Mean Discrepancy \citep{zhu2020kernel}.
If information about the target distribution is available, it can be incorporated in the definition of the uncertainty set.
An approach related to robust optimization is to strive not only to minimize
the empirical loss \(L(Y, f(X))\) but also its variance
\cite{maurer2009empirical,namkoong2017variance}.
It is also useful to assess model performance across values of demographic
variables such as age, socioeconomic status or ethnicity. Indeed,
a good overall prediction performance can be achieved despite a poor
performance on a minority group.
Ensuring that a
predictor performs well for all subpopulations reduces sensitivity to potential
shifts in demographics and is essential to ensure fairness
\citep{abbasi2020risk}.
For instance, there is a risk that machine-learning analysis of dermoscopic images under-diagnoses malignant moles on skin tones that are typically under-represented in the training set \cite{adamson2018machine}.
Fairness is especially relevant when the model output could be used to grant
access to some treatment.
As similar issues arise in many applications of
machine learning, there is a growing literature on fairness
\citep[see e.g.\,][for an overview]{barocas-hardt-narayanan}.
For instance,
\citet{duchi2018learning} show that distributionally robust optimization can
help performance on under-represented subpopulations.
\subsection{Multi-site datasets}
Often datasets are collected across several sites or hospitals, or with
different measurement devices. This heterogeneity
provides an opportunity to train models that generalize to
unseen sites or devices. Some studies attempt to remove site effects by
regressing all features on the site indicator variable. For the same reasons
that regressing out age is detrimental in \Cref{fig:parabolas}, this
strategy often gives worse generalization across sites.
Data harmonization, such as compensating differences across measurement devices, is crucial, but remains very difficult and cannot correct these differences perfectly \citep{glocker2019machine}.
Removing too much inter-site variance can lead to loss of informative
signal. Rather, it is important to model it well, accounting for the two
sources of variance, across participants and across sites. A good model
strives to yield good results on all sites. One solution is to adapt
ideas from robust optimization: on data drawn from different
distributions (e.g.\, from several sites), \citet{krueger2020out} show the
benefits of
minimizing the empirical risk on the worst site or adding penalties on the
variance of the loss across sites.
Measures of prediction performance should aggregate scores at
the site level (not pooling all individuals), and check the variance across
sites and the performance on the worst site. Cross-validation schemes should
hold out entire sites \citep{woo2017building,little2017using}.
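Such a site-level validation scheme is straightforward to implement. The sketch below (hypothetical site labels and a placeholder score) generates leave-one-site-out splits and keeps per-site scores so that the variance across sites and the worst site can be inspected:

```python
import numpy as np

def leave_one_site_out(sites):
    """Yield (site, train_idx, test_idx), holding out one entire site at a time."""
    sites = np.asarray(sites)
    for site in np.unique(sites):
        yield site, np.flatnonzero(sites != site), np.flatnonzero(sites == site)

# Hypothetical site labels, one per participant.
sites = np.array(["A", "A", "B", "B", "B", "C", "C"])
scores = {}
for site, train_idx, test_idx in leave_one_site_out(sites):
    # ... fit the model on `train_idx`, evaluate on `test_idx` ...
    scores[site] = float(len(test_idx))   # placeholder for a per-site score

# Report per-site scores and inspect the worst site, not just a pooled average.
worst_site = min(scores, key=scores.get)
print(scores, worst_site)
```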
\section{Special cases of dataset shift}
Categorizing dataset shift helps find the best approach to tackle it
\cite{storkey2009training,moreno2012unifying}.
We summarize two frequently-met scenarios that are easier to handle than the general case and can call for different adjustments: covariate shift (\Cref{sec:covariate-shift}) and prior probability shift (\Cref{sec:prior-probability-shift}).
\subsection{Covariate shift}
\label{sec:covariate-shift}
Covariate shift occurs when the marginal distribution of \(X\) changes between
the source and target datasets (i.e. \( p_t(x) \neq p_s(x) \)), but \(P(Y \,|\, X)\) stays the same.
This happens for example in the second scenario in
\Cref{fig:sample-selection-bias}, where sample selection based on \(X\) (but not
\(Y\)) changes the distribution of the inputs.
If the model is correctly specified, an estimator trained with uniform weights
will lead to optimal predictions given sufficient training data
\citep[prediction consistency; see][Lemma 4]{shimodaira2000improving}.
However, the usual (unweighted) estimator is not consistent for an
over-constrained (misspecified) model.
Indeed, an over-constrained model may be able to fit the data well only in some
regions of the input feature space (\Cref{fig:parabolas}). In this case reweighting training examples (\Cref{sec:importance-weighting}) to give
more importance to those that are more representative of the target data is
beneficial \citep{storkey2009training,scholkopf2012causal}.
\Cref{fig:covariate-shift} illustrates covariate shift.
\begin{figure}
\centering
\includegraphics[width=.7\linewidth]{covariate_shift.pdf}
\caption{\label{fig:covariate-shift}\textbf{Covariate shift:} \( P (Y \,|\,
X)\) stays the same but the feature space is sampled differently in the source
and target datasets. A powerful learner may generalize well as \( P (Y \,|\,
X)\) is correctly captured \citep{storkey2009training}. Thus the polynomial
fit of degree 4 performs well on the new dataset. However, an overconstrained
learner such as the linear fit can benefit from reweighting training examples
to give more importance to the most relevant region of the feature space.}
\end{figure}
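The behavior in \Cref{fig:covariate-shift} can be reproduced with a noise-free toy example (numpy only; the 0/1-style weights are a hand-picked stand-in for true density ratios): the flexible polynomial fit generalizes to the shifted inputs, while the misspecified linear fit benefits from reweighting.

```python
import numpy as np

# Noise-free toy: the relation y = x**2 (P(Y|X)) is the same everywhere, but
# the source covers x in [-2, 2] while the target concentrates on [1, 2].
x_source = np.linspace(-2.0, 2.0, 200)
x_target = np.linspace(1.0, 2.0, 100)
y_source, y_target = x_source ** 2, x_target ** 2

def target_mse(coeffs):
    return np.mean((np.polyval(coeffs, x_target) - y_target) ** 2)

flexible = np.polyfit(x_source, y_source, deg=4)   # recovers y = x**2 exactly
linear = np.polyfit(x_source, y_source, deg=1)     # misspecified model

# Reweight source points toward the target region; np.polyfit applies `w`
# to the residuals, hence the square root of the desired loss weights.
w = np.where(x_source >= 1.0, 1.0, 1e-3)
linear_w = np.polyfit(x_source, y_source, deg=1, w=np.sqrt(w))

print(target_mse(flexible), target_mse(linear), target_mse(linear_w))
```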
\subsection{Prior probability shift}
\label{sec:prior-probability-shift}
Another relatively simple case of dataset shift is \emph{prior probability shift}.
With prior probability shift (a.k.a.\, label shift or target shift), the
distribution of \(Y\) changes but not \(P(X \,|\, Y)\).
This happens for example when disease prevalence changes in the target population but manifests itself in the same way.
Even more frequently, prior probability shift arises when one rare class is over-represented in the training data so that the dataset is more balanced, as when extracting a biomarker from a case-control cohort, or when the dataset is resampled as a strategy to handle the \emph{class imbalance} problem \citep{he2009learning}.
Prior probability shift can be corrected without extracting a new biomarker, simply by adjusting a model's predicted probabilities using Bayes' rule \citep[as noted for example in][]{storkey2009training,scholkopf2012causal}.
When the classes are well separated, the effect of this correction may be small, i.e.\, the classifier may generalize well even without it.
\Cref{fig:label-shift-scatter} illustrates prior probability shift.
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{label_shift.pdf}
\caption{
\textbf{Prior probability shift:} when \(P(Y)\) changes but \(P(X
\,|\, Y)\) stays the same. This can happen for example when participants are
selected based on \(Y\) -- possibly to have a dataset with a balanced number
of patients and healthy participants: \(X \leftarrow Y \rightarrow \text{\fbox{$S$}}\).
%
When the prior probability (marginal distribution of \(Y\)) in the
target population is known, this is easily corrected by applying Bayes' rule.
%
The output \(Y\) is typically low-dimensional and discrete
(often it is a single binary value), so \(P(Y)\) can often be estimated
precisely from few examples.}
\label{fig:label-shift-scatter}
\end{figure}
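Concretely, the Bayes' rule correction mentioned above reweights each predicted class probability by the ratio of target to source priors and renormalizes. A numpy-only sketch (the priors and probabilities below are made-up numbers for illustration):

```python
import numpy as np

def correct_prior_shift(probs, source_prior, target_prior):
    """Adjust class probabilities estimated under `source_prior` so that they
    are valid under `target_prior`, assuming P(X | Y) is unchanged
    (Bayes' rule: p_t(y|x) is proportional to p_s(y|x) * p_t(y) / p_s(y))."""
    probs = np.asarray(probs, dtype=float)
    adjusted = probs * np.asarray(target_prior) / np.asarray(source_prior)
    return adjusted / adjusted.sum(axis=-1, keepdims=True)

# A classifier trained on a balanced case-control dataset (priors 0.5/0.5)
# outputs 0.6 for "diseased"; in a population with 10% prevalence the
# corrected probability is much lower.
p = correct_prior_shift([0.4, 0.6], source_prior=[0.5, 0.5], target_prior=[0.9, 0.1])
print(p)   # the "diseased" probability drops well below 0.5
```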
\section{Conclusion}
Ideally, machine learning biomarkers would be designed and trained using
datasets carefully collected to be representative of the
targeted population -- as in \citet{liu2020sensitive}.
To be trusted, biomarkers ultimately need to be evaluated rigorously on one or several
independent and representative samples.
However, such data collection is expensive. It is therefore useful to
exploit existing datasets in an opportunistic way as much as possible in the
early stages of biomarker development.
When doing so, correctly accounting for dataset shift can prevent wasting
important resources on machine-learning predictors that have little chance of
performing well outside of one particular dataset.
We gave an overview of importance weighting, a simple tool against dataset
shift.
Importance weighting needs a clear definition of the targeted population and
access to a diverse training dataset. When this is not possible,
distributionally robust optimization may be a promising alternative, though it
is a more recent approach and more difficult to implement.
Despite much work and progress, dataset shift remains a difficult problem.
Characterizing its impact and the effectiveness of existing solutions for biomarker discovery will be important for machine learning models to become more reliable in healthcare applications.
We conclude with the following recommendations:
\begin{itemize}
\item be aware of the dataset shift problem and the difficulty of out-of-dataset generalization. Do not treat cross-validation scores on one dataset as a guarantee that a model will perform well on clinical data.
\item collect diverse, representative data.
\item use powerful machine-learning models and large datasets.
\item consider using importance weighting to correct biases in the data
collection, especially if the learning model may be over-constrained (e.g.\, when using a linear model).
\item look for associations between prediction performance and demographic variables in the validation set to detect potential generalization or fairness issues.
\item \emph{do not} remove confounding signal in a predictive setting.
\end{itemize}
These recommendations should help in designing fair biomarkers and applying them efficiently to new cohorts.
\paragraph{Author contributions}
Jérôme Dockès, Gaël Varoquaux and Jean-Baptiste Poline participated in
conception, literature search, data interpretation, and editing the manuscript.
Jérôme Dockès wrote the software and drafted the manuscript. Both Gaël Varoquaux
and Jean-Baptiste Poline contributed equally to this work (as last authors).
\paragraph{Competing interests statement}
The authors declare that there are no competing interests.
\paragraph{Software and data availability}
The source files used to create this publication can be found in this repository:
\url{https://github.com/neurodatascience/dataset_shift_biomarkers}.
UKBiobank data can be obtained from \url{https://www.ukbiobank.ac.uk}.
\section{Introduction}
Neutrino oscillations are a direct consequence of the assumption raised in the seminal article by Bruno Pontecorvo in 1957~\cite{Pontecorvo:1957cp}, which asserts that the neutrino states interacting with charged leptons through weak interactions are superpositions of neutrino states of non-vanishing, definite mass. In his paper, Pontecorvo used an analogy with neutral kaon mixing to propose that neutrino-antineutrino transitions may occur. Although such matter-antimatter neutrino oscillation has not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillations, which was first developed by Maki, Nakagawa, and Sakata in 1962~\cite{Maki:1962mu} and further elaborated by several authors~\cite{Pontecorvo:1967fh,GribovPontecorvo,probability1,probability2,probability3}.
Under the assumption that flavor eigenstates are different superpositions of mass eigenstates, neutrino flavor oscillations can be described in the following way: as a neutrino propagates through space, the quantum mechanical phases of the mass eigenstates advance at slightly different rates due to the tiny differences in the neutrino mass eigenvalues. This results in a changing admixture of mass states as the neutrino travels. But a different admixture of mass states corresponds to a different flavor state. So a neutrino born as, say, an electron neutrino will be some different admixture of electron, muon, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will return to the original admixture, and the neutrino will be again an electron neutrino. The electron flavor content of the neutrino will then continue to oscillate as long as the quantum mechanical state maintains coherence. It is because the mass differences between the neutrinos are small that the coherence length for neutrino oscillation is so long, making this microscopic quantum effect observable over macroscopic distances.
Interestingly enough, neutrino flavor oscillations are the basis of the solutions to several puzzling neutrino observations recorded over the last four decades. The solar neutrino deficit, initially observed in different experiments, is explained by neutrino oscillations resonantly enhanced by solar matter, the MSW effect~\cite{PhysRevD.17.2369,Mikheev:1986gs}. The same oscillation parameters also explain the deficit observed in the KamLAND experiment. Similarly, the strong zenith-angle dependence of the atmospheric neutrino and antineutrino data can be explained by invoking neutrino oscillations. Finally, completely different sources of muon neutrinos and muon antineutrinos, produced by meson decays in accelerators, confirm the necessity of neutrino oscillations to understand the observations~\cite{GonzalezGarcia:2007ib}.
Furthermore, recent measurements by experiments collecting neutrinos from reactors established the necessity of a nonvanishing neutrino mixing angle $\theta_{13}$~\cite{An:2012eh,Abe:2011fz}, composing a robust picture in favor of Pontecorvo's hypotheses, which give rise to neutrino oscillations. This complete scenario involving the three neutrino generations was recently analyzed in Ref.~\cite{Tortola:2012te}.
On the other hand, one can argue that both of Pontecorvo's hypotheses, i.e., that neutrinos are massive and that there exists neutrino mixing, have been experimentally confirmed only indirectly, through their main consequence, namely, the neutrino quantum oscillations. In fact, Pontecorvo's first hypothesis, which asserts that neutrinos are massive particles, has been directly tested in experiments involving precise measurements of the endpoint of the beta decay spectrum, if one is interested in the mass eigenstates present in what is called the electron neutrino, or of the kinematic behavior of the charged lepton produced in pion or tau decay, if one is interested in the mass eigenstates present in muon or tau neutrinos, respectively. Nevertheless, such observations have so far produced only upper limits on the neutrino masses, and no absolute values of these quantities have been measured.
Pontecorvo's second hypothesis, i.e., the mixing hypothesis, could be directly tested by carefully analyzing the composition of a neutrino beam just after its creation, very close to its source. An ideal experiment would consist of positioning a detector sensitive to the different neutrino mass eigenstates very close to a neutrino source and observing its composition. Since detectors are sensitive only to neutrino flavor eigenstates, such an experiment cannot be realized. Nevertheless, some hints about the neutrino composition could be obtained by analyzing the flavor content of the recently created neutrino beam. The Pontecorvo mixing hypothesis foresees that close to their source neutrinos are found in a pure flavor eigenstate. Therefore, according to this hypothesis, very close to a reactor neutrinos must be of pure electron flavor, in the same way that, very close to the pion decay pipe in an accelerator experiment, neutrinos must be muon neutrinos or antineutrinos.
Nevertheless, there are indications that this is not always the case. Recent theoretical calculations of the neutrino flux from nuclear reactors indicate that a larger than previously expected neutrino flux is produced~\cite{Mention:2011rk,Huber:2011wv}. Such new fluxes are not entirely compatible with the short-baseline experiments which measure electron antineutrinos at distances of order 10 to 100 meters from nuclear reactors. Furthermore, the calibration procedure of the GALLEX~\cite{Kaether201047} and SAGE~\cite{PhysRevC.80.015807} experiments, which measured neutrinos at distances as small as 1~m or so from the source, also reveals some incompatibility between the observations and the neutrino flux predicted according to these new theoretical calculations. These incompatibilities have been called the reactor antineutrino anomaly~\cite{Mention:2011rk,Huber:2011wv} and the Gallium anomaly, respectively. Several different phenomena have been evoked to explain such anomalies~\cite{Mention:2011rk,PhysRevD.85.073012}.
In the present paper, we raise the possibility that the incompatibility between predictions and observations underlying the reactor antineutrino and Gallium anomalies is a consequence of the usual interpretation of Pontecorvo's quantum mixing hypothesis, which defines, in a very fundamental and unique way, the admixture of mass eigenstates in a specific flavor neutrino eigenstate. We will keep the usual interpretation that a neutrino produced in a reaction in which a charged lepton is involved is a neutrino of the same flavor as this charged lepton. Therefore, the antineutrino produced in a $\beta$-decay will be assumed to be of electron flavor. Likewise, in a pion decay, since a muon is involved, the corresponding neutrino will be assumed to be of muon flavor, and so on. Note that this is an arbitrary supposition, since neutrinos are not directly observed in either the creation or the detection processes. Nevertheless, differently from the usual interpretation, we will assume that the admixture of mass eigenstates at the moment of neutrino creation is not unique but can vary for different neutrinos produced in that reaction. This also implies that what is called an electron neutrino at the creation moment can be a different combination of mass eigenstates from what is assumed to be an electron neutrino at the detection moment. Although unusual, we notice that such a hypothesis has never been tested so far, and it provides a possible explanation for the reactor antineutrino and Gallium anomalies, as we will see in the following. This new hypothesis and its consequences are what we call the Stochastic Neutrino Mixing Mechanism (SNMM).
In order to appreciate how the SNMM works, we will analyze the particular case in which only two neutrinos are involved in the oscillation process. The generalization to the three-neutrino case will be done in the next section. We propose relaxing Pontecorvo's mixing hypothesis, allowing neutrinos to be produced in an arbitrary superposition of neutrino mass eigenstates, each of them parametrized by a specific mixing angle $\theta_c$ at the creation moment:
\begin{equation}
\left|\nu_e^c\right\rangle = \cos\theta_c\left|\nu_1\right\rangle + \sin\theta_c\left|\nu_2\right\rangle,
\label{thetacreation}
\end{equation}
where $\theta_c$ can assume, in principle, any value in the interval $[0,\frac{\pi}{2}]$. The same assumption is made in the detection process, where the flavor state can also be identified in an arbitrary admixture of physical states, parametrized by a mixing angle at the detection moment in general different from the creation one, defined as $\theta_d$:
\begin{equation}
\left|\nu_e^d\right\rangle = \cos\theta_d\left|\nu_1\right\rangle + \sin\theta_d\left|\nu_2\right\rangle.
\label{thetadetection}
\end{equation}
Again, $\theta_d$ can assume any value in $[0,\frac{\pi}{2}]$. Under such an assumption, after some distance $L$ from the source to the detector, the $\nu_e$ neutrino will present a survival probability given by
\begin{equation}
P^{one}_{\nu_e \rightarrow \nu_e} = \cos^2(\theta_d - \theta_c) - \sin2\theta_c\sin 2\theta_d \sin^2(\frac{\Delta m^2_{12} L}{4 E}),
\label{prob}
\end{equation}
where $E$ is the neutrino energy and $\Delta m^2_{12}$ is the usual squared mass difference between the mass eigenstates involved in the oscillation process.
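As a quick numerical illustration of Eq.~(\ref{prob}), the following sketch evaluates the two-flavor survival probability for arbitrary creation and detection angles. The function name and the unit conventions (eV$^2$, meters, MeV, with the usual conversion factor 1.267) are our own, not part of the original analysis:

```python
import math

def survival_prob(theta_c, theta_d, dm2_ev2, L_m, E_mev):
    """Two-flavor nu_e survival probability with distinct creation (theta_c)
    and detection (theta_d) mixing angles, as in Eq. (3) of the text.
    dm2_ev2 in eV^2, L_m in meters, E_mev in MeV, so that
    Delta m^2 L / (4 E) -> 1.267 * dm2 * L / E in natural units."""
    phase = 1.267 * dm2_ev2 * L_m / E_mev
    return (math.cos(theta_d - theta_c) ** 2
            - math.sin(2 * theta_c) * math.sin(2 * theta_d)
              * math.sin(phase) ** 2)
```

For $\theta_c=\theta_d$ the standard result $P=1$ is recovered at $L\to 0$, while $\theta_c\neq\theta_d$ already gives $P=\cos^2(\theta_d-\theta_c)<1$ at zero baseline, which is the key feature exploited below.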
Interestingly enough, for the general case in which $\theta_c\neq\theta_d$, this survival probability can be smaller than unity even at short baselines, for which $L\rightarrow 0$. Such behavior, which is not allowed in usual oscillation processes, is the essence of the solution of the Gallium and reactor neutrino anomalies, which will be explored in the next section in the more realistic case involving three neutrinos.
\section{Three Neutrinos Case and the solution to the Gallium and Reactor Anomalies}
We propose relaxing Pontecorvo's mixing hypothesis, allowing each neutrino flavor eigenstate to be produced and detected in an arbitrary superposition of neutrino mass eigenstates around the usual admixture.
In order to keep the success of the neutrino oscillation observations, we assume that neutrinos are created and detected most of the time around the usual superposition of neutrino mass eigenstates which fits the oscillation phenomena, parametrized by the usual neutrino mixings $\sin^2 \theta_{12} = 0.320\pm 0.050$, $\sin^2 \theta_{23}=0.613^{+0.067}_{-0.247}$, and the recently measured $\sin^2 \theta_{13}= 0.025\pm 0.008$, at 3$\sigma$~\cite{Tortola:2012te}. In general, these specific angles will be assumed to be only the average values of the actual mixing angles. Under this simple assumption, we will conclude that, besides keeping the good fit to the observed long-baseline neutrino oscillation phenomena, one can fit short-baseline neutrino data, providing a natural explanation for the reactor antineutrino as well as the Gallium anomalies. We assume, for simplicity, that such an arbitrary superposition involves only the first two neutrino families. Therefore only variations around $\theta_{12}$ will be considered~\cite{comment}.
The $3\times 3$ mixing matrix at the moment of the neutrino creation ($U^c$) and at the detection moment ($U^d$) can be written as:
\begin{equation}
\small
U^{c,d} =\left(
\begin{array}{ccc}
c^{c,d} c_{13} & -s^{c,d} c_{13} & s_{13} \\
s^{c,d} c_{23} + c^{c,d} s_{23} s_{13} & c^{c,d} c_{23} - s^{c,d} s_{23} s_{13} & -s_{23} c_{13} \\
s^{c,d} s_{23} - c^{c,d} c_{23} s_{13} & c^{c,d} s_{23} + s^{c,d} c_{23} s_{13} & c_{23} c_{13} \\
\end{array}\right)
\label{Ui}
\nonumber
\end{equation}
where $c_{ij}=\cos \theta_{ij}$, $s_{ij}=\sin\theta_{ij}$, $c^{c,d}=\cos\theta_{c,d}$ and $s^{c,d}=\sin\theta_{c,d}$, and $\theta_{c,d}$ can assume values in the interval $[0,\pi/2]$.
The one-particle electron neutrino survival probability can be computed as:
\begin{equation}
\small
P^{one}_{\nu_e\rightarrow \nu_e} = \left(\sum_{\gamma}{U^{c}_{1\gamma}U^{d}_{1\gamma}}\right)^2 - 4\sum_{\gamma>\beta}{U^{c}_{1\gamma}U^{d}_{1\gamma}U^{c}_{1\beta}U^{d}_{1\beta} \sin^2\left(\frac{\Delta m^2_{\gamma\beta} L}{4E}\right)}.
\end{equation}
where $\gamma$ and $\beta$ run from 1 to 3.
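The three-flavor formula above can be transcribed directly; only the first row of $U^{c,d}$ enters the $\nu_e$ survival probability. The sketch below is our own illustration (0-based indices; the dictionary of squared-mass splittings and the 1.267 unit convention are our choices):

```python
import math

def u_row1(theta, theta13):
    # First row of the mixing matrix U^{c,d}: (c c13, -s c13, s13).
    c, s = math.cos(theta), math.sin(theta)
    c13, s13 = math.cos(theta13), math.sin(theta13)
    return (c * c13, -s * c13, s13)

def p_one_3f(theta_c, theta_d, theta13, dm2, L_m, E_mev):
    """Three-flavor nu_e survival probability. dm2[(g, b)] holds the
    squared-mass splittings Dm^2_{gb} in eV^2 for g > b (0-based indices);
    L_m in meters, E_mev in MeV."""
    uc, ud = u_row1(theta_c, theta13), u_row1(theta_d, theta13)
    p = sum(uc[g] * ud[g] for g in range(3)) ** 2
    for g in range(3):
        for b in range(g):
            phase = 1.267 * dm2[(g, b)] * L_m / E_mev
            p -= 4 * uc[g] * ud[g] * uc[b] * ud[b] * math.sin(phase) ** 2
    return p
```

Since the first row of $U$ is normalized, $\theta_c=\theta_d$ gives $P^{one}=1$ at $L=0$, while $\theta_c\neq\theta_d$ gives $P^{one}<1$ there, exactly as in the two-flavor case.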
Then, averaging over the different mixing angles, the total probability becomes:
\begin{equation}
P_{\nu_e \rightarrow \nu_e} = \int_0^{\pi/2} P_{\nu_e \rightarrow \nu_e}^{one} f(\theta_c)f(\theta_d)d\theta_c d\theta_d
\label{Preal3fam}
\end{equation}
where $f(\theta_c)$ and $f(\theta_d)$ are the distribution functions of the mixing angles involving only the electronic-muonic channel at the creation and detection instants, respectively. To keep the good fit of oscillation hypothesis with solar neutrino data and long baseline reactor observations, we choose these distribution functions as:
\begin{equation}
f(\theta_{c,d}) = \frac{1}{\sqrt{N_{c,d}}}e^{-(\frac{\theta_{c,d}-\theta_{12}}{\alpha_{c,d}})^2} ,
\end{equation}
which guarantees that mixing angles $\theta_{c,d}$ will present an
average value given by $\theta_{12}$.
In the above equation, $\alpha_{c,d}$ are the Gaussian widths at the creation and detection instants, respectively, and we will assume, for simplicity, $\alpha_c=\alpha_d=\alpha$. The normalization, $N_{c,d}$, is computed by imposing $\int_{0}^{\frac{\pi}{2}} f(\theta_{c,d})d\theta_{c,d}~=~1$. Note that in the limit case when $\alpha\to 0$, we recover the usual Pontecorvo mechanism.
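The averaging in Eq.~(\ref{Preal3fam}) can be approximated numerically. The sketch below discretizes the truncated Gaussian on $[0,\pi/2]$, normalizes it numerically, and averages the two-flavor probability for illustration; the grid sizes and helper names are ours, not part of the original fit:

```python
import math

def gaussian_weights(theta12, alpha, n=400):
    """Discretized truncated Gaussian f(theta) on [0, pi/2], normalized so
    that the trapezoidal estimate of int f dtheta equals 1."""
    h = (math.pi / 2) / (n - 1)
    thetas = [i * h for i in range(n)]
    raw = [math.exp(-((t - theta12) / alpha) ** 2) for t in thetas]
    norm = sum(0.5 * (raw[i] + raw[i + 1]) * h for i in range(n - 1))
    return thetas, [r / norm for r in raw], h

def averaged_survival(theta12, alpha, dm2, L_m, E_mev, n=400):
    # Double integral over theta_c and theta_d with weight f(tc) f(td),
    # using the two-flavor probability as the integrand for illustration.
    thetas, f, h = gaussian_weights(theta12, alpha, n)
    osc = math.sin(1.267 * dm2 * L_m / E_mev) ** 2
    total = 0.0
    for tc, fc in zip(thetas, f):
        for td, fd in zip(thetas, f):
            p = (math.cos(td - tc) ** 2
                 - math.sin(2 * tc) * math.sin(2 * td) * osc)
            total += p * fc * fd
    return total * h * h
```

A narrow width reproduces the Pontecorvo limit ($P\to 1$ at $L=0$), while a broad width suppresses the zero-baseline survival probability, which is the mechanism behind the fit described next.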
Using all data of GALLEX and SAGE experiments~\cite{Kaether201047,PhysRevC.80.015807} (see also Ref. \cite{PhysRevC.83.065504}), old reactors~\cite{oldreactors,Mention:2011rk} as well as the Daya Bay data~\cite{An:2012eh} with a free normalization found according to the new flux calculations for reactor experiments, we perform a global analysis through the $\chi^2$ method, defining:
\begin{eqnarray}
\chi^2 = \sum_{i,j=1}^4(\vec{R}^t-\vec{R}^e)_i^T W^{-1}_{ij} (\vec{R}^t-\vec{R}^e)_j, \nonumber
\label{chi}
\end{eqnarray}
where $i$ and $j$ correspond to each one of the four sets of experiments indicated by the labels appearing in Fig.~\ref{fig:Grafico3familias}: $i,j=1$ for GALLEX and SAGE, $i,j=2$ for the old reactor experiments~\cite{oldreactors}, $i,j=3$ for Daya Bay and $i,j=4$ for Chooz and Palo Verde. $W_{ij}$ is the correlation matrix~\cite{Mention:2011rk}, in which correlations between data coming from the reactors described by $i,j=2$ are taken into account, while no correlation among the other data is assumed. The column vector $\vec{R}^e$ collects the experimental data, while $\vec{R}^t$ collects the corresponding theoretical predictions for the reactor and gallium experiments. For reactors one has:
\begin{equation}
R^t_{\rm reactor} = \dfrac{ \int{P_{\nu_e \rightarrow \nu_{e}} S(E) \sigma(E) dE}}{\int{S(E) \sigma(E) dE}},
\label{reac}
\end{equation}
where $S(E)$ is the neutrino energy spectrum, which can be found in Ref.~\cite{PhysRevC.83.054615}, and $\sigma(E)$ is the cross section~\cite{Mention:2011rk}.
In the GALLEX and SAGE radioactive calibration experiments, the electron capture reactions produce neutrinos of fixed energies. This implies:
\begin{equation}
R^t_{\rm gallium} = \dfrac{\int dV L^{-2}\sum_i{(B.R.)_i \sigma_i P_{\nu_e \rightarrow \nu_{e}}(L,E_i)}}{\int dV L^{-2}\sum_i{(B.R.)_i \sigma_i }}
\label{gallium}
\end{equation}
and the branching ratio (B.R.), the cross section $(\sigma_i)$ and the detector specifications can be found in Tables 1 and 2 of Ref.~\cite{PhysRevC.83.065504} and references therein.
The set of data includes 4 points from GALLEX and SAGE~\cite{PhysRevC.83.065504}, 21 from old reactors~\cite{Mention:2011rk}, as well as 6 from Daya Bay~\cite{An:2012eh}. We obtain a best fit value of~$\alpha = 0.174$, varying in the intervals $[0.141,0.201]$, $[0.117,0.222]$ and $[0.067,0.249]$ at 90, 95 and 99\% C.L., respectively.
This probability fits the data with a minimum $\chi_{min}^2~=~39.08$, which can be compared with the value $\chi^2= 48.24$ obtained from the usual Pontecorvo hypothesis, for $31-1$ degrees of freedom. The best fit of the SNMM as well as the fit coming from the usual Pontecorvo hypothesis are shown in Fig.~\ref{fig:Grafico3familias}, where it can be seen that the SNMM provides a possible explanation for short-baseline neutrino disappearance, something which is not allowed in usual neutrino oscillations based on Pontecorvo's original hypotheses.
\begin{figure}
\centering
\includegraphics[trim = 30mm 10mm 30mm 30mm, scale=.3]{grafprob3fam}
\caption{Comparison between the standard Pontecorvo's hypothesis prediction and the SNMM one plotted using the best fit value for the Gaussian width~$\alpha = 0.174$. The experimental points are distributed in the following way: 1. GALLEX and SAGE data;
2. old reactors~\cite{oldreactors};
3. Daya Bay data with free normalization;
4. Palo Verde and CHOOZ.}
\label{fig:Grafico3familias}
\end{figure}
Before introducing our conclusions, we will add a possible extension of the SNMM scenario. Up to now, we have discussed the relaxation of Pontecorvo's hypotheses assuming a Gaussian distribution for the mixing angle $\theta_{12}$ characterized by a constant value of the corresponding Gaussian width $\alpha$. Here we will observe that the SNMM can nicely fit several experiments assuming an energy dependence of such a width. This is a consequence of the fact that, differently from low-energy short-baseline experiments, high-energy short-baseline experiments do not present any neutrino appearance or disappearance phenomenon. In fact, the neutrino disappearance is more intense for the GALLEX and SAGE $^{37}$Ar and $^{51}$Cr sources than in reactor experiments: in the first case the ratio $R$ is smaller than unity by nearly 14\%~\cite{PhysRevC.83.065504}, while in reactors $R$ is lower than unity by nearly 6\%~\cite{Mention:2011rk}. Note, however, that the neutrinos released by the $^{37}$Ar and $^{51}$Cr sources have an average energy of 740~keV, while neutrinos from reactors possess a wide range of energies with a peak at 3.6~MeV.
A similar fact occurs in accelerator experiments. LSND~\cite{PhysRevLett.81.1774} shows an excess of electron neutrinos for energies of about 30~MeV. MiniBooNE~\cite{MiniBooNECollaboration:arXiv1207.4809} searched for oscillations in the two channels $\nu_{\mu} \rightarrow \nu_e$ and $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_e$. In the energy range $200 < E/\mathrm{MeV} < 1250$ a signal of oscillation was found in both channels; however, the data suggest that the excess of events decreases when the neutrino/antineutrino energies increase. The experiment described in Ref.~\cite{PhysRevLett.47.1576}, which we refer to as Fermi, worked at a different energy scale, with a peak at 30~GeV, searching for oscillations in the $\nu_{\mu} \rightarrow \nu_e$ channel, and did not report any signal of oscillations. The same happened in the NuTeV experiment~\cite{PhysRevLett.89.011804}: executed with an average energy of about 200~GeV, it did not find a signal of oscillations in either of the channels $\nu_{\mu} \rightarrow \nu_e$ and $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_e$. The only possible exception is the KARMEN experiment~\cite{Eitel200289}, which was executed with energies of about 15~MeV, lower than LSND. Although it did not find any compelling excess relative to the background, its measurement was associated with large uncertainties.
The above cited experiments suggest that there is a relation between the appearance/disappearance phenomena and the energy. To identify this possible dependence, we independently calculate the free parameter $\alpha$ for each one of the following groups of experiments: 1. GALLEX and SAGE~\cite{Kaether201047,PhysRevC.80.015807}, 2. all reactor data analyzed in Ref.~\cite{Mention:2011rk} and Daya Bay~\cite{An:2012eh}, 3. LSND~\cite{PhysRevLett.81.1774}, 4. Fermi~\cite{PhysRevLett.47.1576} and 5. NuTeV~\cite{PhysRevLett.89.011804}; the results are indicated by the points and their uncertainties at 68\%~C.L. in Fig.~\ref{fig:polalpha}. To fit all the data, we propose that the width has an energy dependence $\alpha(E) = A + (B/E)^n$. Taking the best fit parameters ($A =~0.012$,~$B=~0.076 $ MeV and $n =$~0.565), we also show this curve in Fig.~\ref{fig:polalpha}.
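With the quoted best-fit parameters, the width falls off quickly with energy. As a quick check, a one-line evaluation of $\alpha(E)$ at the representative energies mentioned in the text (the function name and defaults are ours):

```python
def alpha_of_E(E_mev, A=0.012, B=0.076, n=0.565):
    """Energy-dependent Gaussian width alpha(E) = A + (B/E)^n,
    with E in MeV and the best-fit parameters quoted in the text."""
    return A + (B / E_mev) ** n
```

Evaluating at 0.74~MeV (Gallium sources), 3.6~MeV (reactor peak), 30~MeV (LSND) and higher energies shows a monotonically decreasing width, consistent with the absence of anomalies at high-energy accelerator experiments.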
\begin{figure}
\centering
\includegraphics[trim = 30mm 10mm 30mm 30mm, scale=.3]{polalpha}
\caption{The Gaussian width $\alpha$ calculated to each set of experiments: GALLEX and SAGE, Reactors, LSND, Fermi and NuTeV. Points indicate the best fit at 68\%~C.L. and the curve shows the fitting to these points of the functional form $\alpha(E) = A + (B/E)^n$, taking the best fit parameters
$A =~0.012$,~$B=~0.076 $ MeV and $n =$~0.565.}
\label{fig:polalpha}
\end{figure}
\section{Conclusions and final comments}
The SNMM can accommodate the data that indicate the disappearance of electron neutrinos/antineutrinos in very short-baseline experiments. Assuming that the neutrino mixing angles can vary according to a Gaussian distribution around a preferred value given by the usual mixing angle $\theta_{12}$, a Gaussian width $\alpha$ around $0.17$ can fit all the experimental data in what are called the Gallium and reactor antineutrino anomalies. Furthermore, identifying an energy dependence in the short-baseline neutrino disappearance/appearance phenomena, we could explain the different behaviors of high- and low-energy neutrino experiments by assuming an energy dependence of the Gaussian width $\alpha$ which characterizes the SNMM.
A few final comments are in order. First, we do not expect significant modifications of previous analyses involving solar, atmospheric and other long-baseline neutrino experiments due to the implementation of the SNMM. Only modifications of a few percent in the initial neutrino flux predictions, as well as in the detection rate calculations, can appear in the analyses of those experiments due to the SNMM. They can be accommodated within the several uncertainties present in these analyses and will not substantially alter their results~\cite{comment}.
Secondly, it is often assumed that the contributions to the neutrino masses come from new physics, while the neutrino interactions are given by the Standard Model. Nevertheless, neutrinos are observed only indirectly, through their interactions which produce charged leptons. This represents a challenge to implementing realistic neutrino sectors in any model describing this particle. Some previous articles propose new approaches. An interesting discussion about the definition of a flavor neutrino state and its relation with the physical neutrino states can be found in Refs.~\cite{Grossman1995,GonzalezGarcia:2001mp,Bilenky:1992wv,Meloni:2009cg}, in which possible mechanisms that can generate non-trivial mixing matrices, which may differ between neutrino creation and neutrino detection, are discussed. This is one of the requirements to implement the SNMM, and it can inspire the proposition of models which accomplish the mechanism.
We also propose possible tests of the SNMM. A muon neutrino detector located near a reactor could exclude this hypothesis in case no muon neutrinos were found. Such a detector could be based on muon neutrino elastic scattering on electrons, in a way similar to that discussed in Ref.~\cite{Adams:2008cm}, which is able to explore the muon neutrino at zero distance through $\nu_{\mu}e$ scattering.
Also, radioactive electron neutrino sources placed inside experiments able to detect neutrinos through both the charged- and neutral-current channels (such as SNO~\cite{sno}) would test the SNMM hypothesis. A non-oscillation effect in the neutral-current measurement together with an oscillation effect in the charged-current one would favor the SNMM in contrast to the sterile hypothesis, while an oscillation effect in both the NC and CC measurements would indicate the presence of sterile neutrinos~\cite{PhysRevD.85.073012}.
Finally, when we include an energy-dependent Gaussian width $\alpha$, we can fulfill the constraints on oscillation effects from low-energy experiments, such as the reactor and Gallium experiments, as well as from high-energy experiments, such as Fermi, LSND, and NuTeV. We show in Fig.~\ref{fig:polalpha} that a weak energy dependence is sufficient to achieve a consistent picture of the SNMM as a solution to the reactor and Gallium anomalies.
\begin{acknowledgments}
The authors would like to thank FAPESP, CNPq and CAPES for several financial supports.
\end{acknowledgments}
\section{Appendix}
\end{document}
\section{Conclusion and Future Work}
We propose a novel graph neural network approach that effectively integrates textual and structural information and uses loss trajectories of samples during training to learn effective curricula for predicting relations between given entity pairs. Our approach can be used for both sentence- and document-level relation extraction, and shows a sizable improvement over the state-of-the-art models across several datasets.
In the future, we will investigate curriculum learning approaches for other sub-tasks of relation extraction, develop more effective techniques to better fit trends to time series data, and investigate the effect of curricula on other graph neural networks for relation extraction.
\section{GDPR Dataset}
\cut{
\begin{table}[t]\small
\centering
\begin{tabular}{|l|l|}\hline
{\bf Metric} & {\bf Value} \\\hline
\# Gene nodes & 4,274\\\hline
\# Disease nodes & 6,143\\\hline
\# Disease type nodes & 472\\\hline
\# Phenotype nodes & 9,603\\\hline
\# Nodes in LCC & 20,264 \\\hline
\# Edges & 964,222 \\\hline
\hspace{5pt} $\rightarrow$ Gene-Disease & 6,284 \\\hline
\hspace{5pt} $\rightarrow$ Gene-Phenotype & 595,296 \\\hline
\hspace{5pt} $\rightarrow$ Disease-Disease type & 3,912 \\\hline
\hspace{5pt} $\rightarrow$ Disease-Phenotype & 358,730 \\\hline
\# Edges in LCC & 964,087 \\\hline
\# Connected components & 94 \\\hline
Average node degree & 94.11 \\\hline
\hspace{5pt} $\rightarrow$ Genes & 140.75 \\\hline
\hspace{5pt} $\rightarrow$ Diseases & 60.04 \\\hline
\hspace{5pt} $\rightarrow$ Disease types & 8.28 \\\hline
\hspace{5pt} $\rightarrow$ Phenotypes & 99.35 \\\hline
Graph density & 0.004592 \\\hline
\end{tabular}
\caption{Statistics of GDPR dataset.}
\label{tab:omim_dataset}
\end{table}
The Online Mendelian Inheritance in Man (OMIM)~\cite{amberger2019omim} dataset is the primary repository of curated information on genes, rare diseases and their causal relations. Each gene and disease is provided with a detailed textual summary based on expert reviews of the relevant biomedical literature.\footnote{An example of a textual summary for LEUKEMIA can be found at https://omim.org/entry/608232.}
There exist three categories of information in OMIM: genes, diseases, and diseases types.
Genes are basic units of heredity and sequences of nucleotides in DNA or RNA.
Disease types are collections of similar diseases across the same genome.
Genes or diseases may link to one or more diseases or genes respectively.
Similarly, a disease can be associated with one or more disease types.
Each verified relation between genes and diseases contains references to scientific literature providing evidence about the corresponding relation.
In addition, OMIM provides detailed textual summaries for genes and diseases describing their clinical characteristics. The textual summary of a gene contains a short description of the gene, its functions, cloning information and gene structure. The textual summary of a disease contains a short description, clinical features, inheritance and its history. There are many genes and diseases in this dataset which are not associated with any disease or gene. As a pre-processing step, we remove genes/diseases that are not linked to any disease/gene respectively.
\todo[inline]{explanation to add: relevant information present in the textual summary and why not to include publication}
Human Phenotype Ontology (HPO)~\cite{hpo} provides a standard vocabulary to describe the abnormalities encountered in human diseases. Each phenotype in HPO has a textual summary describing its clinical characteristics. HPO provides a mapping of its phenotypes to genes and diseases in the OMIM dataset. We use this mapping to compile the {\bf G}ene, {\bf D}isease, {\bf P}henotype {\bf R}elation (GDPR) dataset, which contains relations between genes, diseases, disease types and phenotypes.
Table~\ref{tab:omim_dataset} provides graph-level statistics of GDPR.\footnote{We note that, since OMIM focuses on rare diseases, each gene is linked to at most two diseases, leading to sparsity issues at gene-disease level. HPO helps alleviate this sparsity issue through phenotype nodes linked to genes and diseases.}
}
\section{Trend Model Introspection}\label{sec:discussion}
We conduct several ablation studies to shed light on the improved performance of Trend-SL.
\subsection{Inversion Analysis}\label{sec:inversions}
\paragraph{Trend-SL results in robust estimation of difficulty:} In curriculum learning, instantaneous sample losses can fluctuate as the model trains~\cite{zhou2020curriculum}. These changes result in samples being moved across the easy and hard data groups. Let us define an \textit{inversion} as an event where the difficulty group of a sample is inverted in two consecutive epochs (as determined by the curricula), i.e., an easy sample becomes hard in the next iteration or vice versa. Figure~\ref{fig:flip_analysis} shows the number of inversions in SL and Trend-SL during training. Both models converge on their estimated difficulty classes of samples as training progresses. However, we observe that Trend-SL results in fewer inversions compared to SL, as the area under the curve for Trend-SL is 2.12 compared to 2.15 for SL.
Given these results and the performance of Trend-SL on our target tasks, we conjecture that trend information leads to more robust estimation of sample difficulty.
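The inversion statistic can be computed directly from per-epoch difficulty assignments. A minimal sketch (the data layout and function name are our own illustration, not the actual implementation):

```python
def inversion_fractions(hard_by_epoch):
    """hard_by_epoch[e][i] is True iff sample i is labeled hard at epoch e.
    Returns, for each pair of consecutive epochs, the fraction of samples
    whose difficulty group is inverted (easy<->hard)."""
    fractions = []
    for prev, cur in zip(hard_by_epoch, hard_by_epoch[1:]):
        flips = sum(p != c for p, c in zip(prev, cur))
        fractions.append(flips / len(cur))
    return fractions
```

Summing (or trapezoid-integrating) these per-epoch fractions over training yields area-under-curve numbers of the kind quoted for SL and Trend-SL.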
\begin{figure}[!t]
\centering
\includegraphics[scale=0.4]{images/fractions_of_flip_omim_2.pdf}
\caption{The fraction of samples with an inverted difficulty group in two consecutive epochs. Both models converge on their estimated difficulty classes of samples as training progresses. Trend-SL results in fewer inversions compared to SL; the area under the curve for Trend-SL is 2.12 compared to 2.15 for SL.}
\label{fig:flip_analysis}
\end{figure}
\paragraph{Transition patterns at inversion time:}
Let epoch $e$ be the epoch at which an inversion occurs.
Considering SL as the curriculum, Figure~\ref{fig:transition} reports the average normalized loss of samples at their inversion epochs ($e$) and $k$ epochs before and after that. There are some insightful patterns:
(a): easy-to-easy (E2E) and hard-to-hard (H2H) transitions are almost flat lines, indicating the lack of any significant trend when no inversion occurs; and
(b): easy-to-hard (E2H) and hard-to-easy (H2E) transitions show that, on average, there is a sharp and significant increase and decrease in loss patterns as samples are inverted to hard and easy difficulty groups respectively. Since SL does not directly take into account trend information, these results show that trend dynamics can inform our technical objective of developing better curricula.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.55,clip]{images/trend_window_analysis_2.pdf}
\label{fig:trend_window}
\caption{Transition in sample difficulty determined by SL. 0 on the x-axis denotes any epoch at which an inversion occurs, and the y-axis shows average normalized losses at epochs around the inversion epochs.
Easy-to-Hard and Hard-to-Easy transitions show sharp and significant increase and decrease in losses respectively.}
\label{fig:transition}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Easy to Hard]{\includegraphics[scale=0.18]{images/easy_to_hard_fraction.pdf}\label{fig:e2h_fraction}}
\subfigure[Hard to Easy]{\includegraphics[scale=0.18]{images/hard_to_easy_fraction.pdf}\label{fig:h2e_fraction}}
\caption{Inversion dynamics at the difficulty level during training: (a) inversions from easy to hard with rising loss trends and (b) inversions from hard to easy with falling loss trends. The initial epochs on the y-axis are brighter than later epochs, indicating that most inversions occur early in training.}
\label{fig:fraction2d}
\end{figure}
\begin{figure*}[!t]
\centering
\subfigure[Easy to Hard]{\includegraphics[scale=0.4]{images/easy_fraction_analysis_1d_3.pdf}\label{fig:e2h_fraction_1d}}
\subfigure[Hard to Easy]{\includegraphics[scale=0.4]{images/hard_fraction_analysis_1d_2.pdf}\label{fig:h2e_fraction_1d}}
\caption{Inversion heatmap when (a): easy samples with rising loss trend become hard (left) and (b): hard samples with falling loss trend become easy (right).}
\label{fig:fraction1d}
\end{figure*}
\paragraph{Inversions occur early during training:}
Figure~\ref{fig:e2h_fraction} shows the fraction of samples that were easy at epoch $i$ but became hard with a rising trend at epoch $j > i$. The corresponding heatmap for Hard-to-Easy with a falling trend is shown in Figure~\ref{fig:h2e_fraction}. In both cases, the initial epochs (see the y-axis) are brighter than later epochs, indicating that most inversions occur early in training and that the effect of trend is more prominent in the initial part of training.
\paragraph{Inversions occur with falling or rising loss trends:}
SL does not use trend information. However, its estimated difficulty for a considerable fraction of samples (with falling or rising loss trends) is inverted during training. In fact, we observe that 21.2\% to 50.0\% of hard samples that have a falling loss trend will become easy in their next training iteration; similarly 1.3\% to 11.1\% of easy samples that have a rising loss trend will become hard in their next training iteration.
Figure~\ref{fig:fraction1d} shows the inversion heatmap for such Easy-to-Hard and Hard-to-Easy transitions in consecutive epochs.
The area under the curve for Easy-to-Hard with rising trend and Hard-to-Easy with falling trend are 24.87 and 4.51 respectively.
Trend-SL employs such trend dynamics to create better curricula.
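The inversion statistics above can be reproduced from per-epoch loss histories. The following is a minimal sketch: the per-epoch threshold $\tau$ stands in for SuperLoss's exponential moving average, and the function name and array shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inversions(losses, tau):
    """Count easy->hard and hard->easy inversions between consecutive epochs.

    losses : (epochs, samples) array of per-sample losses per epoch
    tau    : (epochs,) difficulty threshold per epoch; SuperLoss uses the
             exponential moving average of all sample losses
    """
    hard = losses > tau[:, None]      # per-epoch difficulty label
    e2h = (~hard[:-1]) & hard[1:]     # easy at epoch e, hard at epoch e+1
    h2e = hard[:-1] & (~hard[1:])     # hard at epoch e, easy at epoch e+1
    return int(e2h.sum()), int(h2e.sum())
```

Summing such inversion counts over epoch pairs yields heatmaps like Figures~\ref{fig:fraction2d}~and~\ref{fig:fraction1d}.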
\subsection{Domain and Feature Analysis}
\paragraph{In-domain embeddings improve the performance:}
In these experiments, we re-train our model
with different embedding initializations. As shown in Figure~\ref{fig:ablation_w_feature}, Doc2Vec embeddings result in an overall better performance than BioBERT and random initialization approaches across the datasets. We attribute this result to in-domain training using text summaries of genes, diseases and phenotypes associated with {\em rare} diseases. In addition, the performance using BioBERT embeddings is either comparable to or considerably lower than that of other embeddings, including Random. This is perhaps due to the pre-training of BioBERT on a large-scale PubMed dataset, which has a significantly lower prevalence of publications on rare versus common diseases. On the other hand, we directly optimize Doc2Vec on in-domain rare-disease datasets, which leads to higher performance of the model. We tried to fine-tune BioBERT on our corpus, but as the text summaries are long, only a small fraction of each text (the first 512 tokens) can be considered.
\paragraph{Additional Features improve the performance:}
We re-train our models and exclude the additional features (i.e., relevance scores for \gdpr~and sentence embeddings for \pgr), with different node embedding initializations.
Figure~\ref{fig:ablation_wo_feature} shows that excluding these features considerably reduces the F1-scores of our model across datasets and embedding initialization.
These results show that both text features and information obtained from graph structure contribute to predicting relations between nodes.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{images/ablation_w_feature.pdf}
\caption{Performance of \gtnn{} with Trend-SL with additional features.}
\label{fig:ablation_w_feature}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{images/ablation_wo_feature.pdf}
\caption{Performance of \gtnn{} with Trend-SL without additional features.}
\label{fig:ablation_wo_feature}
\end{figure}
\section*{Ethical statement}
This investigation partially uses data from the field of medicine. Specifically, it includes genes, diseases and phenotypes that contribute to rare diseases. Although the present work does not include any patient information, it is translational in nature and its broader impacts are first and foremost the potential to improve the well-being of individual patients in society, and to support clinicians in their diagnostic efforts, especially for rare diseases. Our work can also help Wikipedia curators and content generators in finding relevant concepts.
\section{Experiments} \label{sec:experiments}
\subsection{Datasets}
\paragraph{Gene, Disease, Phenotype Relation ({\gdpr})} dataset contains textual descriptions for genes, diseases and phenotypes (symptoms) as well as their relations, and is obtained by combining two freely available datasets: Online Mendelian Inheritance in Man (OMIM)~\cite{amberger2019omim} and Human Phenotype Ontology (HPO)~\cite{hpo}. OMIM is the primary repository of curated information on the causal relations between genes and rare diseases, and HPO provides mappings of phenotypes to genes/diseases in the OMIM.\footnote{A gene can cause one or more diseases and a disease can have several disease types. As a pre-processing step, we remove isolated nodes from the dataset and explicit mentions of relations between entities from summaries.}
We introduce a challenging experimental setup based on the task of {\em differential diagnosis}~\cite{raftery2014churchill} using \gdpr, where competing models should distinguish relevant diseases to a gene from irrelevant ones that present {\em similar} clinical features, making the task more difficult because of high textual and structural similarity between relevant and irrelevant diseases. For example, diseases {\it 3-methylglutaconic type I}, {\it Barth syndrome} and {\it 3-methylglutaconic type III} are of the same disease type and have high lexical similarity in their descriptions, but they are not related to the same genes.
We include such harder negative gene-disease pairs by sampling genes from those that are linked to diseases that share the same disease type with a target disease, but are not linked to the target disease. We also include an equal number of randomly sampled negative pairs to this set.
\paragraph{Gene Phenotype Relation (\pgr)}~\cite{sousa2019silver} is created from PubMed articles and contains sentences describing relations between given genes and phenotypes (Figure~\ref{fig:pgr_example_intro}).
We only include data points with available text descriptions for their genes and phenotypes. For fair comparison, we apply the best model from~\cite{sousa2019silver} to this dataset.
\paragraph{Wikipedia}~\cite{rozemberczki2021multi} is on the topic of chameleons, a family of old world lizards with 202 species. In this dataset, nodes represent pages and edges indicate mutual links between them. Each page has an informative set of nouns, which we use as additional features. We note that this dataset contains only these noun features but not the original text, which is required by our text-only models.
\begin{table}[t]\small
\centering
\begin{tabular}{llll}
\hline
\textbf{Metric} & \textbf{GDPR} & \textbf{PGR} & \textbf{Wikipedia} \\ \hline
\# Nodes & 18.3K & 20.4K & 2.2K \\
\# Edges & 365.0K & 605.4K & 31.4K \\
\# Sampled Edges & 37.6K & 3.0K & 188.5K \\
\hspace{5pt} $\rightarrow$ \# pos. Edges & 6.2K & 1.4K & 31.4K \\
\hspace{5pt} $\rightarrow$ \# neg. Edges & 31.4K & 1.6K & 157.1K \\ \hline
\end{tabular}
\caption{Statistics of the three datasets. Sampled edges are used to create training, validation and test sets. All models take the entire graph as input.}
\label{tab:stats}
\end{table}
Table~\ref{tab:stats} shows statistics of these datasets. In case of \gdpr{}
and \wikipedia{}, we create five negative examples for every positive pair.
We divide these pairs into $80$\%, $10$\% and $10$\% as training, validation and test splits respectively. The data splits for \pgr{} are the same as in the original dataset, except that we discard data points (node pairs) that do not have text descriptions.
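The sampling and splitting procedure above can be sketched as follows. The 5:1 negative ratio and the 80/10/10 fractions come from the text; the function name and pairing logic are illustrative assumptions.

```python
import random

def make_splits(pos_pairs, candidate_neg, neg_ratio=5, seed=0):
    """Sample `neg_ratio` negatives per positive pair and split 80/10/10."""
    rng = random.Random(seed)
    neg = rng.sample(candidate_neg, neg_ratio * len(pos_pairs))
    data = [(p, 1) for p in pos_pairs] + [(n, 0) for n in neg]
    rng.shuffle(data)
    n = len(data)
    train = data[: int(0.8 * n)]
    val = data[int(0.8 * n): int(0.9 * n)]
    test = data[int(0.9 * n):]
    return train, val, test
```

For \gdpr{}, `candidate_neg` would additionally be biased toward the harder same-disease-type negatives described above.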
\subsection{Baselines} \label{sec:baselines}
We use the following baselines:
\begin{itemize}[leftmargin=*]
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item \textbf{Co-occurrence} labels a test pair as positive if both entities occur together in the input text.
\item \textbf{Relevance Score} uses scores from IR models (Section~\ref{additional_features}) as features of a logistic classifier.
\item \textbf{Doc2Vec}~\cite{le2014distributed} uses domain-specific text embeddings obtained from Doc2Vec
as features of a logistic classifier.
\item \textbf{BioBERT}~\cite{lee2020biobert,devlin2018bert} is a BERT model pre-trained on PubMed articles. BioBERT is most appropriate for relation extraction on both \gdpr~and~\pgr~datasets as they are also developed based on PubMed articles. It is the current state-of-the-art model on \pgr~\cite{sousa2019silver}. We also include a version of BioBERT that uses graph information by concatenating the representation of each given pair with the average embedding of its neighbors.
\item \textbf{Graph Convolutional Network} (GCN)~\cite{kipf2017semi} is an efficient and scalable approach based on convolution neural networks which directly operates on graphs.
\item \textbf{Graph Attention Network} (GAT)~\cite{velickovicgraph} extends GCN by employing self-attention layers to identify informative neighbors while aggregating their information, effectively prioritizing important neighbors for target tasks.
\item \textbf{GraphSAGE}~\cite{hamilton2017inductive} is an inductive framework which aggregates node features and network structure to generate node embeddings, see (\ref{eq:graph_equation}). It uses both text and graph information. We use Doc2Vec~\cite{le2014distributed} embeddings to initialize node features of GraphSAGE, as they led to better performance than other embeddings in our experiments.
\item \textbf{Graph Isomorphism Network} (GIN)~\cite{xu2018powerful} identifies the graph structures that are not distinguishable by the variants of graph neural networks like GCN and GraphSAGE. Compared to GraphSAGE and GCN, GIN uses extra learnable parameters during sum aggregation and uses MLP encoding.
\item \textbf{CurGraph}~\cite{wang2021curgraph} is a curriculum learning framework for graphs that computes difficulty scores based on the intra- and inter-class distributions of embeddings and develops a smooth-step function to gradually include harder samples in training. We report the results of our implementation of this approach.
\item \textbf{SuperLoss} (SL)~\cite{castells2020superloss} is a generic curriculum learning approach that dynamically learns a curriculum from model behavior. It uses a fixed difficulty threshold at batch level, determined by the exponential moving average of all sample losses, to assign higher weights to easier samples than harder ones.
\end{itemize}
We compare these baselines against \textbf{GTNN} and \textbf{Trend-SL}, described in Section~\ref{sec:model}.
\begin{table*}[htbp]\small
\centering
\begin{tabular}{l p{3cm} l l l l l l l l l c}
\hline
\textbf{Modality} & \textbf{Model} & \multicolumn{3}{c}{\textbf{GDPR}} & \multicolumn{3}{c}{\textbf{PGR}} & \multicolumn{3}{c}{\textbf{Wikipedia}} & \\
& & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \multicolumn{1}{c}{\textbf{avg F1}} \\
\hline
- & Co-occurrence & 16.7 & 100 & 28.6 & 47.5 & 100 & 64.4 & 16.7 & 100 & 28.6 & 40.5 \\
T & Relevance Score & 59.2 & 83.4 & 69.2 & 75.6 & 64 & 69.1 & - & - & - & 69.2 \\
T & BioBERT (node pairs) & 20.3 & 55.6 & 29.7 & 84.9 & 74.7 & 79.4 & - & - & - & 54.6\\
T & BioBERT (neighbors) & 21.1 & 57.4 & 30.9 & 74.0 & 76.0 & 75.0 & - & - & - & 53.0 \\
T & Doc2vec (node pairs) & 19.8 & 45.0 & 27.5 & 80.5 & 82.7 & 81.6 & - & - & - & 54.6 \\
T & Doc2vec (neighbors) & 20.6 & 51.9 & 29.5 & 83.1 & 78.7 & 80.8 & - & - & - & 55.2\\ \hline
G & GCN & 34.2 & 44.5 & 38.6 & 61.1 & 79.5 & 68.6 & 72.8 & 89.7 & 80.3 & 62.5 \\
G & GAT & 23.7 & 50.3 & 31.7 & 75.8 & 91.1 & 82.5 & 78.2 & 86.7 & 82.2 & 65.5 \\
G & GIN & 21.8 & 48.1 & 29.8 & 54.2 & 88.1 & 67.0 & 76.4 & 77.2 & 76.1 & 57.6 \\
G & GraphSAGE (random) & 17.2 & 90.4 & 28.5 & 84.8 & 79.2 & 81.8 & 57.9 & 82.3 & 67.9 & 59.4 \\ \hline
G,T & GraphSAGE (Doc2Vec) & 54.0 & 79.2 & 64.1 & 91.8 & 90.2 & 91.0 & 81.5 & 93.0 & 86.6 & 80.6 \\
G,T & GTNN & 78.0 & 87.9 & \textbf{82.6} & 93.6 & 93.2 & \textbf{93.4} & 87.9 & 95.4 & \textbf{91.5} & \textbf{89.2} \\
\hline
\end{tabular}
\caption{Performance of different models on \gdpr{}, \pgr{}, and \wikipedia{} datasets.
Here, (T) indicates ``Text only'', (G) indicates ``Graph only'', and (G,T) indicates the combination of both.
Note that the \wikipedia{} dataset contains only noun features but not the original text, which is required by the text-only models.}
\label{tab:performance}
\end{table*}%
\begin{table}[htbp]\footnotesize
\centering
\begin{tabular}{ l c c c c }
\hline
\textbf{Model} & \textbf{GDPR} & \textbf{PGR} & \textbf{Wikipedia} & \textbf{avg F1}\\
\hline
\textbf{GTNN} & 82.6 & 93.4 & 91.5 & 89.2 \\
\textbf{CurGraph} & 75.9 & 85.1 & 80.3 & 80.3 \\
\textbf{SL} & 83.5 & 94.0 & \textbf{92.0} & 89.8 \\
\textbf{Trend-SL} & \textbf{84.3} & \textbf{94.2} & 91.3 & \textbf{89.9} \\
\hline
\end{tabular}%
\caption{Performance of curriculum models on \gdpr{}, \pgr{}, and \wikipedia{} datasets. The base model for all curriculum learning approaches is GTNN, see the last row in Table~\ref{tab:performance}.}
\label{tab:curricula}%
\end{table}%
\subsection{Settings}
We reproduce the results reported in~\cite{sousa2019silver} using BioBERT and therefore follow the same settings on the \pgr~dataset.
Initial domain-specific node embeddings are obtained using Doc2Vec~\cite{le2014distributed} or BioBERT~\cite{lee2020biobert}. In case of BioBERT, since nodes carry long descriptions, we first generate sentence-level embeddings and use their average to represent each node, following~\cite{zhang2019bertscore}. More recent techniques can be used as well~\cite{beltagy2020longformer}.
We consider 1-hop neighbors and set $t=1$ in (\ref{eq:graph_equation}).
To optimize our model, we use the Adam optimizer~\cite{kingma2014adam} and apply hyper-parameter search and tuning for all competing models based on performance on validation data.
In (\ref{eq:trend_sl}), we set $\alpha$ from $[0, 1]$ with a step size of 0.1, $\lambda$ from $\{0.1, 0.5, 1.0, 5, 10, 100\}$, and loss window $k$ from $[1,10]$ with a step size of 1. We consider a maximum number of $100$ training iterations with early stopping based on validation data for all models.
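The search space above amounts to a small grid; as a sketch (variable names are illustrative, and each configuration would be scored on validation data):

```python
from itertools import product

alphas = [round(0.1 * i, 1) for i in range(11)]  # alpha in [0, 1], step 0.1
lambdas = [0.1, 0.5, 1.0, 5, 10, 100]            # regularization lambda
windows = list(range(1, 11))                     # loss window k in [1, 10]

# Full grid of (alpha, lambda, k) configurations: 11 * 6 * 10 = 660
grid = list(product(alphas, lambdas, windows))
```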
In addition, we evaluate models using the standard Precision (P), Recall (R) and F1 score (F1) metrics for classification tasks~\cite{sklearn_api}.
We experiment with five random seeds and report the average results. For all experiments, we use Ubuntu 18.04 with one 40GB A100 Nvidia GPU, 1 TB RAM and 16 TB hard disk space. GPU hours to train our model are linear in the size of the datasets, ranging from 30 minutes to 5 hours.
\subsection{Results}
Table~\ref{tab:performance} shows the results. We start with text only and graph only baselines followed by baselines that incorporate both data modalities.
\paragraph{Text models (T):} Among the text-based models, Relevance Score and Doc2Vec outperform the others. In case of \gdpr, the high performance of Relevance Score indicates the ability of unsupervised IR models to find relevant information in long text descriptions. However, Relevance Score shows poor performance on \pgr{} compared to Doc2Vec, which is better at semantic representation of input data.
BioBERT (node pair) obtains higher precision on both datasets and good performance on \pgr.
In addition, the F1 score of the BioBERT model developed in~\cite{sousa2019silver} for \pgr{} is 76.6.
We note that Doc2Vec obtains better performance than BioBERT, perhaps due to its in-domain pre-training.
\paragraph{Graph models (G):} The results show that GCN and GAT perform better than other competing graph models. We attribute their performance to the use of convolution and attention networks, which effectively prioritize important neighboring nodes with respect to the target tasks.
\paragraph{Graph models with additional information:} Comparing GraphSAGE (Doc2Vec) and GraphSAGE (random) illustrates the significant effect of initialization with in-domain embeddings. In addition, GTNN outperforms GraphSAGE, resulting in an average of 8.6 points improvement in F1 score. This improvement is because GTNN {\em directly} uses text descriptions at its prediction layer. This information, although available to GraphSAGE as well, can be lost in the iterative process of learning node embeddings through neighbors, see (\ref{eq:graph_equation}).
\paragraph{Training with curricula:} The results in Table~\ref{tab:curricula} show that training GTNN with effective curricula can further improve its performance. We attribute the better performance of Trend-SL compared to SL to the use of trend information, which leads to better curricula. We conduct further analysis on the effect of trend information below. The lower performance of CurGraph could be due to close probability densities that we obtained for samples in our datasets, which do not allow easy and hard samples to be effectively discriminated by CurGraph.
\section{Introduction}
Relation extraction is the task of detecting (often pre-defined) relations between entity pairs. It has been investigated in both natural language processing~\cite{mintz-etal-2009-distant,lin-etal-2016-neural,peng2017cross,zhang-etal-2018-graph} and network science~\cite{zhang2018link,alex2017protein}. Relation extraction is a challenging task, especially when data is scarce.
Nonetheless, the ability to automatically link entity pairs is a crucial task as it can reveal relations that have not been previously identified, e.g., informing clinicians about a causal relation between a gene and a phenotype or disease. Figure~\ref{fig:pgr_example_intro} shows an example sentence from a PubMed article in the Gene Phenotype Relation (PGR) dataset~\cite{sousa2019silver}, which describes the application domain of the present work as well.
Previous research has extensively investigated relation extraction at both sentence~\cite{zeng-etal-2015-distant,dos-santos-etal-2015-classifying,sousa2019silver} and document~\cite{yao-etal-2019-docred,quirk-poon-2017-distant} levels. Furthermore, effective graph-based neural network approaches have been developed for various prediction tasks on graphs, including link prediction between given node pairs~\cite{kipf2017semi, hamilton2017inductive,xu2018powerful,velickovicgraph}. Several recent approaches~\cite{lidistance,zhang2018link,alsentzer2020subgraph} illustrated the importance of enhancing graph neural networks using structurally-informed features such as shortest paths, random walks and node position features.
\begin{figure}
\centering
\includegraphics[scale=0.66]{images/pgr_example.pdf}
\caption{An example showing the report of a causal relation between a gene and a phenotype (symptom) from the PGR dataset~\citep{sousa2019silver}.}
\label{fig:pgr_example_intro}
\end{figure}
In this work, we develop a graph neural network titled \textbf{G}raph \textbf{T}ext \textbf{N}eural \textbf{N}etwork (GTNN) that employs structurally-informed node embeddings as well as textual descriptions of nodes at prediction layer to avoid information loss for relation extraction. GTNN can be trained using a standard approach where data samples are fed to the network in a random order~\cite{hamilton2017inductive}. However, nodes, edges or sub-graphs can significantly vary in their difficulty to learn, owing to frequent substructures, complicated topology and indistinct patterns in graph data. We tackle these challenges by presenting a generic and trend-aware curriculum learning approach that incorporates {\em sample-level} loss trajectories (trends) to better discriminate easier from harder samples and schedule them for training graph neural networks.
The contributions of this paper are:
(a): a graph neural network that effectively integrates textual data and graph structure for relation extraction, illustrating the importance of {\em direct} use of text embeddings at prediction layer to avoid information loss in the iterative process of learning node embeddings for graph data; and
(b): a novel curriculum learning approach that incorporates loss trends at sample-level to discover effective curricula for training graph neural networks.
We conduct extensive experiments on real world datasets in both general and specific domains, and compare our model against a range of existing approaches including the state-of-the-art models for relation extraction. Experimental results demonstrate the effectiveness of the proposed approach; the model achieves an average of 8.6 points improvement in F1 score against the best-performing graph neural network baseline that does not directly use text embeddings at its prediction layer. The proposed curriculum learning approach further improves this performance by 0.7 points, resulting in an average F1 score of 89.9 on our three datasets. We conduct extensive experiments to shed light on the improved performance of the model. Code and data are available at \url{https://clu.cs.uml.edu/tools.html}.
\section{Method}\label{sec:model}
\begin{figure*}
\centering
\includegraphics[scale=0.33]{images/architecture_diagram_gtnn_trend.pdf}
\caption{The architecture of the proposed graph text neural network (GTNN) model with Trend-SL curriculum learning approach. The proposed model consists of an encoder-decoder component that determines relations between given node pairs. The graph neural encoder takes as input features from textual descriptions of nodes and sub-graph extracted for a given node pair to create node embeddings. The resulting embeddings in conjunction with additional text features are {\em directly} used by the decoder to predict links between given entity pairs. The resulting loss is given as an input to our Trend-SL approach to dynamically learn a curriculum during training.}
\label{architecture}
\end{figure*}
Consider an undirected graph $G$ = ($\mathcal{V}$, $\mathcal{E}$) where $\mathcal{V}$ and $\mathcal{E}$ are nodes and edges respectively, and nodes carry text summaries as their descriptions. Edges in the graph indicate ``relations'' between their end points, e.g., causal relations between genes and diseases, or links between concepts in an encyclopedia.
Our goal is to predict relations/links between given node pairs in $G$.
\subsection {Graph Text Neural Network}
We present the Graph Text Neural Network (GTNN) model which directly operates on $G$ and textual descriptions of its nodes.
Figure \ref{architecture} shows the architecture of GTNN, which we describe below.
\subsubsection{Graph Encoder}
Given $G$ and its initial text embeddings, $\bx_i$ for each node $i$, we apply a graph encoder~\cite{hamilton2017inductive} to generate a $d$-dimensional embedding for each node by iteratively aggregating the current embeddings of the node and its $t$-hop neighbors through the {\tt sigmoid} function denoted by $g$:
\begin{eqnarray}
\bh_{i}^{(t+1)}=g\Big(\bW_{1}\bh_{i}^{(t)}+
\bW_{2}(\frac{1}{|\mathcal{N}_{i}|}{\displaystyle \sum_{j\in \mathcal{N}_{i}}}\bh_{j}^{(t)})\Big),
\label{eq:graph_equation}
\end{eqnarray}
where ${\bh_i}^{(t)}$ is the embedding of node \textit{i} at the $t^{th}$ layer of the encoder and is initialized by $\bx_i$, i.e., ${\bh_i}^{(0)} = \bx_i,\forall i$, and $\mathcal{N}_i$ is the set of neighbors of node \textit{i} aggregated through a mean operation. ${\bW}_1$ and ${\bW}_2$ are parameter matrices to learn during training. Equation (\ref{eq:graph_equation}), applied iteratively, generates node embeddings $\bz_i = {\bh_i}^{(t+1)}\in\mathbb{R}^d$.
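One iteration of the mean-aggregation update in (\ref{eq:graph_equation}) can be sketched in plain NumPy. This is an illustration of the update rule only: the function names are assumptions, and the weight matrices here are placeholders rather than trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encoder_step(H, neighbors, W1, W2):
    """One mean-aggregation update: h_i^(t+1) = g(W1 h_i + W2 mean_j h_j).

    H         : (n, d) current node embeddings h_i^(t)
    neighbors : list of index arrays, neighbors[i] = N_i
    W1, W2    : (d, d) parameter matrices
    """
    H_next = np.empty_like(H)
    for i in range(H.shape[0]):
        mean_nbr = H[neighbors[i]].mean(axis=0)   # (1/|N_i|) sum_j h_j^(t)
        H_next[i] = sigmoid(W1 @ H[i] + W2 @ mean_nbr)
    return H_next
```

Applying `encoder_step` $t+1$ times starting from $\bh_i^{(0)} = \bx_i$ yields the final node embeddings $\bz_i$.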
\subsubsection{Additional Text Features}\label{additional_features}
In addition to the representations obtained from the graph encoder, we use additional features from text data to better learn the relations between entities. Here, we consider three types of features:
(a): relevance scores between the descriptions of node pairs obtained from information retrieval (IR) algorithms; we use BM-25~\cite{robertson1995okapi}, classic TF/IDF~\cite{jones1972statistical}, as well as DFR-H~and~DFR-Z~\cite{amati2002probabilistic} models. These IR models capture lexical similarities and relevance between node pairs through different approaches;
(b): we also use the initial text embeddings of nodes ($\bx_i, \forall i$) as additional features because the direct uses of these embeddings at prediction layer can avoid information loss in the iterative process of learning node embeddings for graph data; and
(c): if there exist other text information for a given node pair, e.g., a sentence mentioning the node pair as in Figure~\ref{fig:pgr_example_intro}, we use the embeddings of such information as additional features.
\subsubsection{Graph Text Decoder}
For a given node pair ($u$,$v$),
we combine the representations of their additional features using a single-hidden-layer neural network as follows:
\begin{equation}
\bh_{uv} = {\tt ReLU}\big(\bW^e\ba_{uv}+\bb^e\big),
\label{eq:additional_feature_hidden_layer}
\end{equation}
where $\ba_{uv}$ is obtained by concatenating the additional feature vectors of $u$ and $v$.
We combine $\bh_{uv}$ with node representations, $\bz_u$ and $\bz_v$, and pass them to a two layer decoder to predict their relations:
\begin{eqnarray}
\bh = {\tt ReLU} \Big(\bW^{last} f(\bh_{uv},\bz_u,\bz_v) +\bb^{last}\Big), \\ \nonumber
p(u,v) = g\left(\bW^{output} \bh +\bb^{output}\right),
\label{similirity}
\end{eqnarray}
where $f$ is a fusion operator, $g$ is the {\tt sigmoid} function, and $p(u,v)$ indicates the probability of an edge between nodes $u$ and $v$. Flattened outer product, inner product, concatenation and 1-D convolution can be used as the fusion operator~\cite{amiri-etal-2021-attentive}. In our experiments, we obtained better performance using the outer product, perhaps due to its better encoding of feature interactions:
\begin{equation}
f(\bh_{uv},\bz_u,\bz_v) = \bh_{uv} \otimes [\bz_u;\bz_v].
\end{equation}
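To make the decoder concrete, the following is a minimal numpy sketch of the hidden layer, the outer-product fusion, and the two-layer output head described above; all dimensions and the randomly initialized weights are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

d_feat, d_node, d_hid = 8, 16, 32            # illustrative sizes

# (1) hidden representation of the pair's concatenated additional features
a_uv = rng.normal(size=2 * d_feat)           # a_{uv} = [a_u; a_v]
W_e, b_e = 0.1 * rng.normal(size=(d_hid, 2 * d_feat)), np.zeros(d_hid)
h_uv = relu(W_e @ a_uv + b_e)

# (2) outer-product fusion with the node representations z_u, z_v
z_u, z_v = rng.normal(size=d_node), rng.normal(size=d_node)
fused = np.outer(h_uv, np.concatenate([z_u, z_v])).ravel()

# (3) two-layer decoder giving the edge probability p(u, v)
W_last, b_last = 0.01 * rng.normal(size=(d_hid, fused.size)), np.zeros(d_hid)
h = relu(W_last @ fused + b_last)
W_out, b_out = 0.1 * rng.normal(size=(1, d_hid)), np.zeros(1)
p_uv = float(sigmoid(W_out @ h + b_out))
```

The flattened outer product yields a $d_{hid}\times 2d_{node}$ interaction map, which is why it can encode pairwise feature interactions that plain concatenation cannot.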
\subsection{Generic Trend-aware Curricula}
Graph neural networks are often trained using the standard or ``rote'' approach where samples are fed to the network in a random order for training~\cite{hamilton2017inductive}. However, edges (and other entities in graphs such as nodes and sub-graphs) can vary significantly in their classification difficulty, and therefore we argue that graph neural networks can benefit from a curriculum for training. Recent work by~\citet{castells2020superloss} described a generic loss function called SuperLoss (SL) which can be added on top of any target-task loss function to dynamically weight training samples according to their difficulty for the model. Specifically, it uses a {\em global} difficulty threshold ($\tau$), determined by the exponential moving average of all sample losses, and considers samples with an instantaneous loss smaller than $\tau$ as easy and the rest as hard. Similar to the commonly-used easy-to-hard transition curricula, such as those in~\cite{bengio2009curriculum}~and~\cite{kumar2010self}, the model initially assigns higher weights to easier samples, thereby allowing back-propagation to initially focus more on easier samples than harder ones.
However, SL does not take into account the trend of instantaneous losses at sample level, which can
(a) improve the difficulty estimations of the model by making them {\em local}, {\em sample dependent} and potentially more {\em precise}, and
(b) enable the model to distinguish samples with similar losses based on their known loss trajectories. For example, consider an easy sample with a rising loss trend which is about to become a hard sample versus another easy sample with the same instantaneous loss but a falling loss trend which is about to become even easier for the model. Trend information allows distinguishing such examples.
The above observations inspire our work to utilize trend information in our curriculum learning framework, called Trend-SL. The model uses loss information from the local time window before each iteration to capture a form of momentum of loss in terms of rising or falling trends and determine individual sample weights as follows:
\vspace{-0.2cm}
{\small{
\begin{eqnarray}\label{eq:trend_sl}
TrendSL_{\lambda,\alpha}(l_{uv}) = \arg\min_{\sigma_{uv}} \big(l_{uv}- (\tau-\alpha\Delta_{uv}) \big) \\\nonumber
\times \sigma_{uv} +\lambda(\log\sigma_{uv})^{2},
\end{eqnarray}}}
where $\sigma_{uv}$ is the latent weight for the training sample $(u,v)$, $l_{uv}$ is the target-task loss (binary cross-entropy in our experiments) for $(u,v)$ at the current iteration,
$\tau$ is the batch-level global difficulty threshold determined by the exponential moving average of sample losses~\cite{castells2020superloss},
and $\Delta\in[-1,1]$ is the trend indicator quantified by the normalized sample-level loss trend weighted by $\alpha\in[0,1]$; our approach reduces to SL with $\alpha=0$.
$\Delta$ captures the trend in the instantaneous losses of samples over the most recent $k$ iterations, effectively utilizing local sample-level information to determine difficulty. There are various techniques for fitting trends to time series data~\cite{bianchi1999comparison}. We use differences between consecutive losses to determine the trend for each sample:
\vspace{-0.4cm}
\begin{equation}\label{eq:trend_sl_delta}
\Delta_{uv} = \sum_{j=i-k+2}^{i} \big(l_{uv}^{j}-l_{uv}^{j-1}\big) \Big/ \sum_{j=i-k+2}^{i} \big| l_{uv}^{j}-l_{uv}^{j-1}\big|,
\end{equation}
where $i$ is the current iteration, $l_{uv}^{j}$ denotes the loss of sample $(u,v)$ at iteration $j$, and $k$ controls the number of previous losses to consider.
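As a concrete illustration, the trend indicator can be computed from a sample's recorded loss history in a few lines (a sketch; the window handling is an assumption):

```python
def trend_delta(losses):
    """Normalized loss trend over the k most recent losses of one sample:
    sum of consecutive differences divided by the sum of their absolute
    values, so +1 means strictly rising and -1 strictly falling."""
    diffs = [b - a for a, b in zip(losses, losses[1:])]
    denom = sum(abs(d) for d in diffs)
    return sum(diffs) / denom if denom > 0 else 0.0
```

For a strictly rising history the indicator is $+1$, for a strictly falling one $-1$, and mixed histories fall in between.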
As Figure~\ref{fig:sl_vs_tsl} illustrates, Trend-SL increases the difficulty threshold for samples with falling loss trends (negative $\Delta$s), becoming more flexible in increasing the weights of such samples by allowing greater instantaneous losses. On the other hand, it becomes more conservative in weighting samples with rising trends (positive $\Delta$s) by reducing the difficulty threshold.
Finally, we note that the weight $\sigma_{uv}$ in (\ref{eq:trend_sl}) can be computed as follows, where $W$ is the Lambert W function~\citep{euler1783serie}; see details in the supplementary materials in~\citep{castells2020superloss}:
\begin{eqnarray}
\sigma_{uv}^* & = & \exp{\Big(-W \big(\frac{1}{2}\max (-\frac{2}{e}, \beta)\big)\Big)},\\
\beta & = & \frac{l_{uv}-\left(\tau-\alpha\Delta_{uv}\right)}{\lambda}.
\end{eqnarray}
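Putting the pieces together, the closed-form weight can be evaluated with a self-contained Newton iteration for the principal branch of the Lambert W function (a sketch; the $\alpha$ and $\lambda$ defaults are arbitrary illustrations):

```python
import math

def lambert_w(x, iters=200, tol=1e-12):
    """Principal branch of the Lambert W function for x >= -1/e,
    solving w * exp(w) = x by Newton iteration."""
    w = math.log(x) if x > 1.0 else 0.0
    for _ in range(iters):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0) + 1e-300)
        w -= step
        if abs(step) < tol:
            break
    return w

def trend_sl_weight(loss, tau, delta, alpha=0.1, lam=1.0):
    """Closed-form Trend-SL weight sigma* for one sample."""
    beta = (loss - (tau - alpha * delta)) / lam
    return math.exp(-lambert_w(0.5 * max(-2.0 / math.e, beta)))
```

A sample exactly at the (shifted) threshold gets weight 1; harder samples are down-weighted and easier ones up-weighted, and a rising trend ($\Delta>0$) lowers the weight relative to a falling trend at the same instantaneous loss.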
\begin{figure}
\centering
\includegraphics[scale=0.22]{images/threshold_sl_fig_1.pdf}
\caption{Difficulty dynamics in Trend-SL. $\tau$ is the fixed difficulty threshold of SL, which can be thought of as a global difficulty metric to separate easy and hard samples. Dotted (red) and dashed (green) trend lines indicate four samples with rising and falling loss trends respectively. Trend-SL uses trend dynamics to shift the difficulty boundaries and adjust global difficulty using local sample-level loss trends. The vertical dashed and dotted lines show updated sample-specific difficulty thresholds for easy and hard samples respectively. }
\label{fig:sl_vs_tsl}
\end{figure}
\section{Related Work} \label{relatedwork}
Previous research on relation extraction can be categorized into text- and graph-based approaches. In addition, to our knowledge, there is limited work on curriculum learning with graph datasets.
\paragraph{Text-based models:} Text-based methods extract entities and the relations between them from given texts.
Although previous work typically focuses on extracting intra-sentence relations for entity pairs in supervised and distantly supervised settings~\cite{sousa2019silver,mintz-etal-2009-distant,dai2019distantly,lin-etal-2016-neural,peng2017cross, zhang-etal-2018-graph,alex2017protein,zhang2018link,quirk-poon-2017-distant}, there are relation extraction approaches that focus on inter-sentence relations~\cite{kilicoglu2016inferring,yao-etal-2019-docred}. \citet{kilicoglu2016inferring} investigated relation extraction between chemical-disease entity pairs mentioned across multiple sentences. They considered lexical features and features obtained from intervening sentences as input to a classifier.
A closely related work to our study was conducted by~\citet{sousa2019silver}, who developed an effective model to detect relations between genes and phenotypes at sentence level using sentential context and medical named entities in text. We compare our approach with \citet{sousa2019silver} on the dataset that they developed (PGR); see Section~\ref{sec:baselines}.
\paragraph{Graph-based models:}
Previous research shows that adding informative additional features to the graph helps models learn better node representations for extracting relations between entity pairs. For example, \citet{zhang2018link} used distance metric information, and \citet{lidistance} used distance features such as shortest paths and landing probabilities between pairs of nodes in subgraphs as additional features. We note that some graph properties, although informative and effective, can be expensive to compute on large graphs during training and should be computed offline.
\paragraph{Curriculum learning with graph data:}
Curriculum learning approaches design curricula for model training and generalizability~\cite{bengio2009curriculum,kumar2010self,Jiang2015-ek,amiri-etal-2017-repeat,jiang2018mentornet,castells2020superloss,zhou2020curriculum}. The common approach is to detect and use easy examples to train the model and gradually add harder examples as training progresses.
Curricula can be static and pre-built by humans, or automatically and dynamically learned by the model. Very few curriculum learning methods are designed to work on graph structure. \citet{wang2021curgraph} developed CurGraph, a curriculum learning method for sub-graph classification. The model estimates the difficulty of samples using intra- and inter-class distributions of sub-graph embeddings, and orders training instances so that the underlying graph neural network is initially exposed to easy sub-graphs, followed by harder ones.
As opposed to static curricula, \citet{saxena2019data} introduced a dynamic curriculum approach which automatically assigns a confidence score to samples based on their estimated difficulty. However, the model requires a large number of extra trainable parameters, especially when the dataset is large. To overcome this limitation, \citet{castells2020superloss} introduced a framework with a similar idea that calculates the optimal confidence score for each instance using a closed-form solution, thereby avoiding learning extra parameters. We extend this approach to include sample-level trend information for learning effective curricula.
\paragraph{Graph neural networks for NLP:}
There are several distantly related works that develop graph neural network algorithms for downstream tasks such as semantic role labeling~\cite{marcheggiani2017encoding}, machine translation~\cite{bastings2017graph,marcheggiani2018exploiting}, multimedia event extraction~\cite{liu2020story}, text classification~\cite{yao2019graph,zhang2020every} and abstract meaning representation~\cite{song2018graph}. Graph neural networks are used to model word-word or word-document relations, or are applied to dependency trees.
\citet{yao2019graph} generated a single text graph using word occurrences and document word relations from text data, and used the GCN method to learn embeddings of words and documents. Similarly, \citet{peng2018large} used GCN to capture the semantics between non-consecutive and long-distance entities.
\section{In-domain embeddings improve the performance}
For these experiments, we re-train our model without additional textual features but with different embedding initializations. As shown in Figures \ref{fig:ablation_w_feature} and \ref{fig:ablation_wo_feature}, Doc2Vec embeddings result in overall better performance than BioBERT and random initialization across the datasets. We attribute this result to in-domain training using text summaries of genes, diseases and phenotypes associated with rare diseases. In addition, the performance using BioBERT embeddings is considerably lower than that of the other embeddings, including random initialization. This is perhaps due to the pre-training of BioBERT on a large-scale PubMed corpus, which has a significantly lower prevalence of publications on rare vs. common diseases. On the other hand, we directly optimize Doc2Vec on the in-domain datasets, which leads to higher performance of the model. We attempted to fine-tune BioBERT on our corpus, but because the textual summaries were long, only the first few sentences were considered and the rest were discarded. Hence we focus on fine-tuning Doc2Vec.
\section{Additional features improve the performance}
For these experiments, we re-train our models and include additional features (i.e., relevance scores for \gdpr~and sentence embeddings for \pgr) with different node embedding initializations.
Figure \ref{fig:ablation_w_feature} shows that the additional features improve the F1-scores of our model across all datasets and embedding initializations.
These results show that both textual features and information obtained from graph structure contribute to predicting relations between nodes.
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{images/ablation_w_feature.pdf}
\caption{Performance of \gtnn{} with Trend-SL with additional features.}
\label{fig:ablation_w_feature}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{images/ablation_wo_feature.pdf}
\caption{Performance of \gtnn{} with Trend-SL without additional features.}
\label{fig:ablation_wo_feature}
\end{figure}
\section{Inversions occur with falling or rising loss trends}
\begin{figure*}[!t]
\centering
\subfigure{\includegraphics[scale=0.45]{images/easy_fraction_analysis_1d 3.pdf}\label{fig:e2h_fraction_1d}}
\subfigure{\includegraphics[scale=0.45]{images/hard_fraction_analysis_1d 2.pdf}\label{fig:h2e_fraction_1d}}
\caption{Inversion heatmaps for examples converting from easy to hard (left) and from hard to easy (right).}
\label{fig:fraction1d}
\end{figure*}
Figure~\ref{fig:fraction1d} shows the inversion heatmaps for Easy-to-Hard and Hard-to-Easy transitions between consecutive epochs. In the left panel, the fraction on the y-axis indicates the number of examples that were considered easy at epoch $e$, had a rising loss trend, and were converted into hard examples at epoch $e+1$. The Hard-to-Easy heatmap is defined analogously: its examples were hard at the current epoch but had a falling trend and were eventually inverted into easy examples. The area under the curve is 24.87 for Easy-to-Hard with a rising trend and 4.51 for Hard-to-Easy.
\section{Introduction}
In recent years there has been a notable advance in the understanding of electronic transport through superconducting nanosystems. In particular, the development of fabrication techniques such as scanning tunneling microscopy, break junctions and lithographic methodologies \cite{review} has made it possible to study atomic-size metallic contacts as ideal systems for testing fundamental properties of charge transfer through superconducting weak links.
These achievements have not only deepened our understanding of subgap structures in the current-voltage characteristics but have even revealed microscopic details of the contact such as transmission coefficients \cite{muller,Cron}. Theoretical predictions based on Green's-function or mean-field approaches have been confirmed in various experiments \cite{averin,cuevas}. In this context, set-ups where atomic-size tunnel junctions are subject to both dc- and ac-voltages give access to the intimate relation between phase dynamics, driving, and dissipation, leading to pronounced Shapiro resonances of integer and fractional order.
Similar to conventional weak links, atomic point contacts are characterized by two energy scales \cite{agrait}, namely, the coupling energy between the superconducting domains (Josephson energy) and the charging energy of the junction. The competition between these two scales is crucially influenced by the electromagnetic environment so that a realistic modeling of charge transport across the contact must necessarily incorporate its embedding in
an actual circuit. Accordingly, the dynamics of the superconducting phase difference as the only relevant degree of freedom exhibits a diffusive motion subject to noise which, as it is well-known from the physics of Josephson junctions \cite{barone}, can be visualized as the Brownian motion of a fictitious particle.
It turned out that for atomic point contacts this phase dynamics occurs in an overdamped regime. The corresponding classical frame is provided by the Smoluchowski equation which has already been the starting point for calculations of current-voltage characteristics of Josephson junctions in low impedance environments \cite{IZ,AH}. There, only the Josephson energy remains as relevant parameter while charging effects related to inertia drop out. The impact of quantum fluctuations has been addressed within a time-dependent perturbation theory in \cite{grabert} where the whole range from
coherent to incoherent Cooper pair transfer in the domain of Coulomb blockade could be captured. Later, a generalization of the Smoluchowski approach to the quantum regime (Quantum Smoluchowski) developed by one of us (JA) and co-workers \cite{qmsmolu1,qmsmolu2,Anker,Anker-libro} has allowed to derive the same physics in a very elegant manner and in close analogy to the classical description. Quantum noise has been shown to be inevitably associated with charging effects according to the uncertainty principle, thus physically ruling the changeover toward Coulomb blockade dominated transport.
The motivation for the present work is two-fold. On the one hand, it is based on experiments conducted with atomic point contacts in recent years in the Quantronics group \cite{Chauvintesis,stein}, and on the other hand on a corresponding description in terms of the classical Smoluchowski approach \cite{dupret}. While experimental results with ac-driven junctions followed theoretical predictions for the structure of Shapiro resonances, substantial discrepancies appeared for their heights and widths. A plausible explanation has been the presence of residual spurious noise sources which lead to an effective temperature at the contact different from the actual base temperature. Here, we analyze whether, and if so to what extent, quantum noise must also be incorporated into this picture. Eventually, this may open the door to unambiguously characterizing quantum effects in overdamped systems at low temperatures.
The paper is organized as follows. In Sec.~\ref{model} we present our generalization of the RSJ model for contacts of arbitrary transmission in the presence of microwave radiation and in the presence of quantum fluctuations. In Sec.~\ref{numerics} the numerical method to solve the generalized quantum Smoluchowski equation is outlined together with the relevant scales and approximations. Section \ref{results} discusses results for the $I-V$ characteristics and fractional Shapiro resonances, before in Sec.~\ref{effective} the question whether quantum noise can be captured by an effective temperature is addressed. Section \ref{conclusions} is devoted to discussion and conclusions on future experimental realizations.
\section{The Model}\label{model}
We model a superconducting tunnel junction using an equivalent circuit with so-called lumped circuit parameters that includes both the effect of dissipative sources and the distributed capacitance. For weak links the standard resistively and capacitively shunted junction (RCSJ) model captures the essential physics even in the presence of an external ac-voltage \cite{barone}. The equivalent circuit, shown in Fig.~\ref{fig:circuit}, is formed by a contact with a resistance $R$, a capacitance $C$, and biased by a dc-current $I_{b}$.
The superconducting phase difference across the junction, denoted by $\theta$, is the only relevant degree of freedom, with a current-phase relation $I(\theta)$. Energy dissipation in the contact is accompanied by Johnson-Nyquist noise $I_n(t)$, so that current conservation reads:
\begin{figure}[ht]
\epsfig{file=Circuito.eps, width=6cm}
\caption{Lumped circuit model of a superconducting weak link with capacitance $C$ and within a resistive environment $R$ subject to a dc-bias current $I_b$ and an ac-voltage.}
\label{fig:circuit}
\end{figure}
\begin{equation}
I_{b} = C \frac{dV}{dt} + I(\theta)+ \frac{V_{\rm tot}(t)}{R} + I_n(t)\, .
\label{eq:lan1}
\end{equation}
Here, the first term on the RHS is the displacement current and the third term is the dissipative current.
Further, $V_{\rm tot}(t)=V(t)+V_{ac} \cos(\omega t)$ with voltage $V$ across the contact and $V_{ac}$ being an additional ac-voltage induced by a microwave field.
According to $V=\Phi_0 \dot{\theta}$ ($\Phi_0=\hbar/2e$) this equation is in fact an equation of motion for the phase, i.e.,
\begin{equation}
\frac{V_{ac}(t)}{R}+I_{b}=\Phi_0 C \ddot{\theta}(t)+ \frac{\Phi_0}{R}\dot{\theta}(t) +I(\theta)+ I_n(t)\, ,
\label{eq:lan1b}
\end{equation}
which is equivalent to the Brownian motion of a fictitious particle with mass $M \equiv \Phi_0^2 C$, friction constant $\eta \equiv 1/RC$ and classical noise force $Z_{Cl}(t) \equiv -R I_n(t)/\Phi_0$ in the potential
\begin{equation}
U(\theta,t) = \Phi_0 \int_{0}^{\theta} I(\theta') d\theta' -\Phi_0 \theta \left[I_{b}-\frac{V_{ac}}{R} \cos(\omega t)\right]\, .
\label{eq:pot}
\end{equation}
The noise force has zero mean and obeys
\begin{equation}
\langle Z_{Cl}(t)Z_{Cl}(t^{\prime })\rangle=\frac{2D_{Cl}}{\eta M}\, \delta (t-t^{\prime })\, ,
\end{equation}
with $D_{Cl}=k_{B}T\equiv 1/\beta$.
The regime where the capacitance is negligible thus corresponds to the strong friction domain (Smoluchowski regime) in the mechanical analog \cite{barone}. Previous work has shown that this is indeed the range where phase diffusion in superconducting atomic point contacts happens to occur \cite{Chauvintesis,dupret}. Strong friction considerably simplifies the description and for the situation with zero driving even allows for analytical results. It is then often convenient to switch from the classical Langevin equation corresponding to (\ref{eq:lan1b}), i.e.,
\begin{equation}
\dot{\theta} = -\frac{1}{\eta M}\frac{dU(\theta,t)}{d\theta} + Z_{Cl}(t)
\label{eq:lan2}
\end{equation}
to an equation of motion for the probability distribution $P(\theta, t)$, namely,
\begin{equation}
\frac{\partial P(\theta,t)}{\partial t} = \frac{1}{\eta M}\frac{\partial}{\partial \theta}\left[\frac{\partial U(\theta,t)}{\partial \theta} +D_{Cl} \frac{\partial}{\partial \theta}\right] P(\theta,t)\, .
\label{eq:smol}
\end{equation}
As first pointed out in \cite{qmsmolu1}, this classical Smoluchowski equation (SE) can be generalized to the low temperature domain where quantum fluctuations become substantial \cite{qmsmolu2}. The quantum Smoluchowski equation (QSE) has been studied since then in a variety of applications \cite{Anker-libro} including particularly an extension of the classical Ivanchenko-Zil'berman theory for Josephson junctions in low impedance environments \cite{IZ,AH}. There, quantum fluctuations are related to charging effects and reveal signatures of Coulomb blockade physics.
The QSE follows from its classical counterpart by replacing $D_{Cl}\ \to \ D_{Q}(\theta)$ with the position dependent quantum diffusion coefficient
\begin{equation}
D_{Q}(\theta)=\dfrac{k_{B}T}{1-\Lambda \beta U''(\theta)}
\label{eq:qsmol}
\end{equation}
with a friction and temperature dependent function
\begin{equation}
\Lambda=2\rho\left[c+\frac{2\pi^2\rho}{\beta E_c}+\Psi\left(\frac{\beta E_c}{2\pi^2 \rho}\right)\right]\, ,
\end{equation}
where $\Psi$ denotes the digamma function and $c=0.5772\ldots$ is Euler's constant. Further, the charging energy is $E_c=2 e^2/C$ and we introduced the dimensionless resistance $\rho=R/R_Q$ with $R_Q=h/4 e^2$. Usually, $\rho\ll 1$ for circuits operated in the overdamped regime.
The classical Smoluchowski range corresponds to the high temperature limit $\eta\hbar\beta\equiv \beta E_c/(\pi\rho)\ll 1$, where $\Lambda\approx \beta E_c/\pi^2\ll 1$, while at low temperatures $\beta E_c/(\pi\rho)\gg 1 $ quantum fluctuations are substantial according to $\Lambda \approx 2\rho\, {\rm ln}(\beta E_c/\pi^2 \rho)$. The generalization of the classical Langevin equation follows from (\ref{eq:lan2}) by replacing $Z_{Cl}\to Z_Q\equiv \sqrt{\beta \, D_Q(\theta)}\, Z_{Cl}$ which describes a classical stochastic process with multiplicative noise.
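For orientation, the following sketch evaluates $\Lambda$ numerically with a self-contained digamma routine and checks the low-temperature asymptotics quoted above (parameter values are illustrative only):

```python
import math

def digamma(x):
    """Digamma function psi(x) for x > 0: shift upward by recurrence,
    then apply the standard asymptotic series."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x \
        - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

EULER_C = 0.5772156649015329

def quantum_lambda(beta_Ec, rho):
    """Lambda(T, rho) entering the quantum diffusion coefficient D_Q."""
    y = beta_Ec / (2.0 * math.pi**2 * rho)   # = beta*E_c / (2*pi^2*rho)
    return 2.0 * rho * (EULER_C + 1.0 / y + digamma(y))

# low-temperature regime: Lambda ~ 2*rho*ln(beta*E_c/(pi^2*rho))
rho, beta_Ec = 0.01, 1000.0
lam = quantum_lambda(beta_Ec, rho)
approx = 2.0 * rho * math.log(beta_Ec / (math.pi**2 * rho))
```

In the chosen deep-quantum parameter regime the exact expression and the logarithmic approximation agree to within a few percent.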
In the sequel we consider an atomic point contact with one conduction channel with transmission probability $\tau \in [0,1]$; generalizations are straightforward. As is well-known, the current through the contact is then carried by two Andreev bound states with energies $E_{\pm}(\theta,\tau)=\pm\Delta \sqrt{1-\tau \sin^{2}(\theta/2)}$ ($\Delta$ is the superconducting gap). If we restrict ourselves to voltages much smaller than $\Delta$ and $k_{\rm B} T$, there are no Landau-Zener transitions between Andreev states and an adiabatic approximation applies. The current-phase relation then reads
\begin{equation}
I(\theta,\tau)=\frac{e\Delta}{2\hbar} \frac{\tau \sin(\theta)}{\sqrt{1-\tau \sin^{2}(\theta/2)}}\ {\rm tanh}\left[\frac{\beta E_{+}(\theta,\tau) }{2}\right]\, .
\label{eq:curfase}
\end{equation}
This expression simplifies to the known sinusoidal relation for tunnel junctions in the low transmission limit ($\tau \rightarrow 0$) and is proportional to $\sin(\theta/2)$ in the ballistic limit $\tau \rightarrow 1$. In this latter domain, and for externally driven contacts, higher harmonics become relevant, so that apart from the conventional integer Shapiro steps fractional ones also appear. This situation has been studied in the classical realm in Ref.~\cite{dupret}. Here, we focus on the low temperature region where the classical description must be extended to include quantum fluctuations as discussed above.
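The two limits can be checked directly; a minimal sketch in units $e=\hbar=\Delta=1$ (so that currents are measured in units of $e\Delta/\hbar$):

```python
import math

def andreev_current(theta, tau, beta):
    """Single-channel current-phase relation I(theta, tau) in units
    e = hbar = Delta = 1, including the thermal factor tanh(beta*E_+/2)."""
    root = math.sqrt(1.0 - tau * math.sin(0.5 * theta)**2)   # = E_+ / Delta
    return 0.5 * tau * math.sin(theta) / root * math.tanh(0.5 * beta * root)

theta, beta = 1.0, 200.0
# tunnel limit: I ~ (tau/2) * sin(theta) at low temperature
tunnel_ratio = andreev_current(theta, 1e-6, beta) / (0.5 * 1e-6 * math.sin(theta))
# ballistic limit: I = sin(theta/2) at low temperature
ballistic = andreev_current(theta, 1.0, beta)
```

For $\tau\to 0$ the ratio to the sinusoidal relation tends to one, and for $\tau=1$ the numerics reproduce $\sin(\theta/2)$.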
Before we proceed, let us specify the domain in parameter space where the strong friction approach and the modeling of the environment in terms of an ohmic resistor apply. With respect to the first issue we consider circuits of the type shown in Fig.~\ref{fig:circuit}.
Then, roughly speaking, the friction constant must sufficiently exceed all other relevant frequencies. In the mechanical analog this means $\eta\gg \omega_J^2\hbar\beta, \omega_J^2/\eta, e V_{ac}/\hbar, \omega$ with plasma frequency $\omega_J=\sqrt{E_J E_c}/\hbar$ and Josephson energy $E_J=\Delta (1-\sqrt{1-\tau})$. Note that the last condition $\eta\gg \omega$ ensures that the external driving acts on time scales sufficiently larger than the relaxation time for momentum, which is of order $1/\eta$. In terms of circuit parameters one has
\begin{equation}
\frac{E_c}{\pi \rho} \gg \pi \rho E_{J}, \beta E_{c} E_{J}, {eV_{ac}}, \hbar\omega\,
\label{constraint}
\end{equation}
with $\rho\ll 1$.
In addition, as discussed above, the ratio $\hbar\beta\eta\equiv\beta E_c/(\pi\rho)$ controls the impact of quantum fluctuations. Further, the adiabatic description (\ref{eq:curfase}) is justified if $\hbar\omega\ll 2\Delta \sqrt{1-\tau}$ to avoid driving induced mixing of the two Andreev surfaces.
With respect to the second issue, the modeling of the environment as being purely ohmic is of course a crude approximation to actual experimental set-ups. Any realistic circuit exhibits at least a cut-off frequency $\Omega_c$ due to unavoidable additional capacitances. The Smoluchowski description remains valid as long as there is still a time scale separation between relaxation in phase [approach of a quasi-stationary state for $P(\theta,t)$] and the response time of the environment, i.e., $\eta/\omega_0^2 \gg 1/\Omega_c$. In fact, it turns out that inertia effects (finite capacitance) and a more refined modeling of the electromagnetic environment lead for sufficiently large friction to only minor deviations from the classical Smoluchowski prediction \cite{Chauvintesis,dupret} (they are relevant for a detailed quantitative analysis of actual circuits though). For the quantum case considered in the sequel, the same is true if $\hbar\beta>1/\Omega_c$ with $\hbar\beta$ being at low temperatures the relevant scale for the coarse graining in time \cite{qmsmolu2}.
\section{Current-voltage characteristics}\label{numerics}
Mean values of relevant observables are obtained from the distribution $P(\theta, t)$ determined by the QSE. Since most of the results can only be obtained numerically, we switch in this section to dimensionless quantities and scale energies in units of $\Delta$, frequencies in units of $\Delta/\hbar$, and times in units of $\hbar/(\Delta \rho)$. In particular, this means measuring temperature in units of $\Delta/k_{\rm B}$ and currents in units of $I_c=E_J/\Phi_0$.
The dimensionless QSE then reads
\begin{equation}
\frac{\partial P(\theta,t)}{\partial t}=\dfrac{\partial }{\partial \theta}\left[\frac{\partial U(\theta,t)}{\partial \theta}P(\theta,t) + \frac{\partial D_{Q}(\theta) P(\theta,t)}{\partial \theta}\right]\, ,
\label{eq:curmean2}
\end{equation}
which is in fact a continuity equation for the probability, i.e., $\partial P/\partial t+\partial J/\partial\theta=0$ with
the probability flux
\begin{equation}
J(\theta,t) \equiv -\frac{\partial U(\theta,t)}{\partial \theta}P(\theta,t) - \frac{\partial D_{Q}(\theta) P(\theta,t)}{\partial \theta}\, .
\label{eq:curmean2b}
\end{equation}
Now, these expressions determine mean values with respect to phase, $\overline{(\cdots)}$, and time, $\langle\cdots\rangle$. For the current one has
\begin{equation}
\overline{\langle I(\theta)\rangle}=\int_{0}^{2\pi} d\theta \int_{-\infty}^{\infty} dt I(\theta) P(\theta,t)
\end{equation}
and the voltage across the contact $\overline{\langle V\rangle}=\overline{\langle \dot{\theta}\rangle}$ follows as
\begin{equation}
\overline{\langle V\rangle}=\int_{0}^{2\pi} d\theta \int_{-\infty}^{\infty} dt J(\theta,t)\, .
\end{equation}
Due to the periodicity of the potential $U(\theta,t)$ in phase and time [cf.~Eq.~(\ref{eq:pot})], one expands density and current according to
\begin{equation}
P(\theta,t)= \sum_{n,k \in Z} P_{n,k} e^{i k \theta+i n \omega t}
\end{equation}
\begin{equation}
J(\theta,t)= \sum_{n,k \in Z} J_{n,k} e^{i k \theta+i n \omega t}
\end{equation}
with the normalization condition
\begin{equation}
P_{n,0} = \delta_{n,0}/2\pi\, .
\end{equation}
Further, one writes due to (\ref{eq:curfase})
\begin{equation}
I({\theta}) = \sum_{m=1}^{\infty} I_{m}(\tau) \sin(m \theta)
\label{eq:curexp}
\end{equation}
as well as
\begin{equation}
D_{Q}({\theta}) = \sum_{m=0}^{\infty} D_{m}(\tau) \cos(m \theta)\, .
\label{eq:difexp}
\end{equation}
This way, the QSE (\ref{eq:curmean2}) is cast in an algebraic equation for the expansion coefficients, namely,
\begin{eqnarray}
\frac{n}{k} \omega P_{n,k} &=& I_{b}P_{n,k}-i \frac{V_{ac}}{2\rho}(P_{n-1,k}+P_{n+1,k}) \nonumber\\
&&+ \sum_{m=1}^{\infty} I_{m}(P_{n,k-m}-P_{n,k+m}) \nonumber\\
&&+ i k \sum_{m'=0}^{\infty}D_{m'}(P_{n,k-m'}+P_{n,k+m'})
\label{seteq}
\end{eqnarray}
Practically, one works on a two-dimensional grid for $n, k$ with $|n|\leq N_{\rm max}, |k|\leq K_{\rm max}$.
As already pointed out in \cite{dupret} the corresponding set of $N_{\rm max} \times K_{\rm max}$ coupled equations can be associated with a non-Hermitian lattice model for particles on a square lattice. In particular, one observes that there is a coupling between chains $n$ and $n\pm 1$ proportional to $V_{ac}$, a coupling between chains $k$ and $k\pm m$ proportional to the $m$-th harmonic of the Josephson current, and a coupling between chains $k$ and $k\pm m'$ proportional to the $m'$-th harmonic of the quantum diffusion coefficient. Note that this latter coupling is absent in the classical regime.
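The harmonics $I_m$ entering these couplings can be obtained by simple quadrature of the current-phase relation; a self-contained sketch (the values of $\tau$ and $\beta$ are illustrative):

```python
import math

def current(theta, tau=0.8, beta=50.0):
    """Dimensionless single-channel current-phase relation (units e = hbar = Delta = 1)."""
    root = math.sqrt(1.0 - tau * math.sin(0.5 * theta)**2)
    return 0.5 * tau * math.sin(theta) / root * math.tanh(0.5 * beta * root)

def sine_harmonic(f, m, n_grid=4096):
    """I_m = (1/pi) * int_0^{2pi} f(theta) sin(m*theta) dtheta via the
    trapezoid rule, which is spectrally accurate for smooth periodic integrands."""
    h = 2.0 * math.pi / n_grid
    return (h / math.pi) * sum(f(j * h) * math.sin(m * j * h) for j in range(n_grid))

harmonics = [sine_harmonic(current, m) for m in range(1, 21)]
theta0 = 1.3
recon = sum(I_m * math.sin(m * theta0) for m, I_m in enumerate(harmonics, 1))
```

Since $I(\theta)$ is odd and smooth for $\tau<1$, the sine coefficients decay rapidly with $m$ and a modest number of harmonics reconstructs the current accurately.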
Now, the orthogonality of circular functions allows one to express the mean current and voltage as
\begin{equation}
\overline{\langle I(\theta)\rangle}=\sum_{k\in Z} P_{0,k} I_{-k}
\label{curr}
\end{equation}
\begin{equation}
\overline{\langle V\rangle}=\rho\ \left(I_{b}-\sum_{k\in Z} P_{0,k} I_{-k}\right)
\label{volt}
\end{equation}
meaning that we only need to calculate $P_{0,k}$ explicitly. In fact, peaks in these probability coefficients are related to the observed Shapiro steps of order $n/k$ in the $I-V$ characteristics.
To solve (\ref{seteq}) numerically we define vectors $\overrightarrow{P_{n}}\equiv (\ldots P_{n,k},\ldots,
P_{n,1},P_{n,-1},\ldots,P_{n,-k},\ldots)$ and
$\overrightarrow{I}\equiv (\ldots I_{k},\ldots,I_{1},I_{-1},\ldots,I_{-k},\ldots)$ and matrices
\begin{eqnarray}
(L_{n})_{k,k'} &\equiv &\left(\frac{n}{k} \omega + I_{b}\right)\delta_{k,k'} \nonumber\\
&&- I_{m}\left(\delta_{k',k-m}-\delta_{k',k+m}\right)\nonumber\\
&& -i k D_{m'}\left(\delta_{k',k-m'}+\delta_{k',k+m'}\right)\,
\label{setmat}
\end{eqnarray}
so that (\ref{seteq}) takes the compact form
\begin{equation}
L_{n} \overrightarrow{P}_{n}= \frac{V_{ac}}{2\rho}(\overrightarrow{P}_{n-1}+\overrightarrow{P}_{n+1})
+\delta_{n,0} \overrightarrow{I}\, .
\label{setmat2}
\end{equation}
This equation is solved via a recursive procedure (continued-fraction method with upward iteration) by introducing for $n>0$ the auxiliary quantity $S_{n+1} \overrightarrow{P}_{n}= \frac{V_{ac}}{2\rho}\overrightarrow{P}_{n+1}$ and for $n<0$, with $\overline{n}\equiv-n$, the quantity $S_{\overline{n}+1} \overrightarrow{P}_{\overline{n}}= \frac{V_{ac}}{2\rho}\overrightarrow{P}_{\overline{n}+1}$. Accordingly, one obtains a simple equation for the relevant probability coefficients, $\overrightarrow{P}_{0}= [L_{0}- S_{1} -S_{\overline{1}}]^{-1}\overrightarrow{I}$,
where
\begin{equation}
\begin{split}
S_{1(\overline{1})}&= -
\dfrac{\mu}{L_{1(\overline{1})}-\dfrac{\mu}{L_{2(\overline{2})}-\dfrac{\mu}{L_{3(\overline{3})}-\cdots} }}
\end{split}\,
\label{setmat6}
\end{equation}
with the abbreviation $\mu=(V_{ac}/2 \rho)^2$.
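A minimal sketch of this matrix continued-fraction evaluation, with a uniform coupling $\mu$ and the truncation $S_{N_{\rm max}+1}=0$, could look as follows; the matrices passed in stand for the $L_n$ of the text and are placeholders here:

```python
import numpy as np

def continued_fraction_S1(L_list, mu):
    """Backward evaluation of S_1 = -mu [L_1 - mu [L_2 - ...]^{-1}]^{-1}.

    L_list holds the square complex matrices L_1, ..., L_{N_max}; the
    same routine gives S_1bar when fed the barred-branch matrices."""
    dim = L_list[0].shape[0]
    S = np.zeros((dim, dim), dtype=complex)  # truncation: S_{N_max+1} = 0
    for L in reversed(L_list):
        S = -mu * np.linalg.inv(L - S)
    return S

def solve_P0(L0, S1, S1bar, I_vec):
    """Relevant probability coefficients P_0 = [L_0 - S_1 - S_1bar]^{-1} I."""
    return np.linalg.solve(L0 - S1 - S1bar, I_vec)
```

A handy convergence check: in the scalar ($1\times 1$) limit with constant $L_n = c$, the recursion converges to the smaller-magnitude root of $S^2 - cS - \mu = 0$.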
\section{Results}\label{results}
We start with a brief discussion of actual experimental parameters and proceed with a presentation of the numerical results.
\subsection{Approximations and parameters ranges}
We take typical experimental values for atomic contacts with Al electrodes \cite{stein} with a superconducting gap $\Delta \simeq 180\,\mu$eV. Temperatures are varied between about $10\,$mK and $100\,$mK and the circuit is assumed to have an ohmic resistance of $200\,\Omega$, such that $\rho \ll 1$, and a capacitance on the order of fF.
Typical microwave frequencies are $\hbar\omega \sim 10^{-2} \Delta - 10^{0} \Delta$ with $\omega >\rho \Delta/\hbar$ to observe fractional Shapiro steps.
Within this range of parameters the conditions in (\ref{constraint}) are fulfilled and phase diffusion is supposed to be affected by quantum fluctuations in the strong damping regime. In particular, $\beta E_c/\pi\rho\gg 1$ so that the energy scale related to friction $\hbar\eta$ by far exceeds the thermal energy scale $k_{\rm B} T$.
We now solve numerically the recursion (\ref{setmat6}), where convergence depends on the maximum number of spatial and temporal harmonics, the temperature and the external voltage. The numerics are quite sensitive at low temperatures and high voltages, but in the chosen ranges of these parameters accurate data can be achieved with $N_{\rm max}=90$ and $K_{\rm max}=45$.
\subsection{Numerical Results}
To analyze the Shapiro step structure we calculate numerically, from (\ref{curr}) and (\ref{volt}), mean currents $\overline{\langle I(\theta)\rangle}$ and mean voltages $\overline{\langle V\rangle}$ (in the figures $I$ and $V$ respectively), over the temperature range specified above and for various transmission coefficients.
\begin{figure}
\begin{center}
\epsfig{file=Fig2.eps,width=10cm}
\end{center}
\caption{$I-V$ curves for an ac-driven atomic point contact with $\tau=0.995$ and $T=0.005$. Other parameters are $\omega=2\pi\times 10^{-3}$, $\eta=5$, $V_{ac}=5\times 10^{-3}$, $R=10^{-3}$ (dimensionless units, see beginning of Sec.~\ref{numerics}).}
\label{fig:IVTotal}
\end{figure}
Figure~\ref{fig:IVTotal} shows a typical $I-V$ curve with the first integer and several fractional resonances in the quantum regime (low temperatures) and for a high transmissive junction.
\begin{figure}
\epsfig{file=Fig3AColor.eps,width=8.75cm}
\epsfig{file=Fig3BColor.eps,width=8.75cm}
\caption{Fractional Shapiro steps for a highly transmissive ($\tau=0.995$, top panel) and a less transmissive ($\tau=0.99$, bottom panel) channel. Red (green) solid lines depict data obtained with the QSE at $T=0.006$ ($T=0.02$), while blue (black) dotted lines describe results from the SE at $T=0.006$ ($T=0.02$). At the higher temperature $T=0.02$ both approaches give identical curves. Other parameters are as in Fig.~\ref{fig:IVTotal}.}
\label{fig:IvsVzoomComp}
\end{figure}
While resonances appear in a pattern very similar to the known classical ones,
the role of quantum fluctuations is revealed when one compares low temperature results obtained with the SE and those gained with the QSE, respectively (Fig.~\ref{fig:IvsVzoomComp}). At higher temperatures the diffusion coefficient $D_Q\to D_{Cl}$ such that both descriptions deliver identical data, but there are substantial deviations at low temperatures where $D_Q(\theta)>D_{Cl}$. Indeed, the classical equation predicts much sharper resonances than the quantum one, where the reduction in height and the increase in width are more striking at higher transmissions. This smearing out is basically absent away from the resonances.
\begin{figure}[h]
\epsfig{file=Fig4.eps, width=8.5cm}
\caption{Resonant peak height vs. temperature for the Shapiro step 1/2 according to the classical approach (dotted) and the quantum one (solid), for transmission coefficients $\tau=0.995$ (shifted upwards by 0.01), $\tau=0.99$, and $\tau=0.9$. Other parameters are the same as in previous figures.}
\label{fig:12Temp}
\end{figure}
In order to gain better insight into this behavior, the heights of the fractional peak $I_{1/2}$ [calculated as $(I_{\rm 1/2, max}-I_{\rm 1/2, min})/2$, i.e.\ half the difference between maximal and minimal peak values] are plotted as functions of temperature in Fig.~\ref{fig:12Temp}. As already discussed, quantum fluctuations reduce the peak heights at lower temperatures and for higher transmissive channels. Since the overall peak structure is not altered,
one may misleadingly describe a reduced height within the classical approach by an effectively enhanced temperature. However, while it is true that the quantum diffusion coefficient is typically larger than the classical one ($D_Q>D_{Cl}\equiv k_{\rm B} T$), due to its dependence on the phase $\theta$ the QSE can in general not simply be reduced to the SE by replacing $T$ by an effective temperature (see next section).
Variations of peak heights with increasing transmission are illustrated in Fig.~\ref{fig:TotalTP} for several fractional resonances. Interestingly, mean values $I_{n/k}$ saturate in the quantum case towards the ballistic limit $\tau\to 1$ with larger deviations from the classical data for higher order steps.
\begin{figure}[h]
\begin{center}
\epsfig{file=Fig5.eps, width=8.5cm}
\end{center}
\caption{Peak heights $I_{n/k}$ vs. $1-\tau$ for resonances with $n/k$=1/2, 1/3 and 2/3 (from top to bottom) at temperature $T=0.006$. Solid lines correspond to the quantum case and dotted lines to the classical one. Parameters are the same as in previous figures.}
\label{fig:TotalTP}
\end{figure}
Apart from reduced heights, quantum effects appear as a widening of the resonances. A natural quantity to quantify this is the full width at half maximum ($FWHM$) of the peaks. As expected, we see in Fig.~\ref{fig:FWMH} that the spreading of the quantum peaks exceeds that of the classical ones at lower temperatures, with the $FWHM$ taking larger values for higher harmonics. Both predictions coincide only at relatively elevated temperatures. The relative strength of quantum fluctuations is larger at lower fractional steps, cf.~Fig.~\ref{fig:FWMH}.
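The two peak measures used here, half the peak-to-peak height and the $FWHM$, are straightforward to extract from a computed $I$-$V$ trace. The snippet below is a generic illustration (the Lorentzian test shape used in the check is not the actual QSE output):

```python
import numpy as np

def peak_height(I):
    """Half the peak-to-peak excursion, (I_max - I_min)/2, as used for I_{1/2}."""
    return 0.5 * (I.max() - I.min())

def fwhm(x, y):
    """Full width at half maximum of a single-peaked trace y(x),
    measured relative to the baseline y.min(), with linear interpolation
    of the two half-maximum crossings."""
    half = y.min() + 0.5 * (y.max() - y.min())
    idx = np.where(y >= half)[0]
    i0, i1 = idx[0], idx[-1]

    def cross(i, j):
        # linear interpolation of the crossing between samples i and j
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else cross(i1, i1 + 1)
    return right - left
```

For a Lorentzian of half-width $g$ sampled over a wide window the routine returns a width close to $2g$, which serves as a quick sanity check.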
\begin{figure}
\epsfig{file=Fig6a.eps, width=9.5cm}
\epsfig{file=Fig6b.eps, width=9.5cm}
\caption{$FWHM$ vs. $T$ for $\tau = 0.995$ and various fractional resonances (top) together with the relative strength of quantum fluctuations
$(FWHM_{Q}-FWHM_{Cl})/FWHM_{Cl}$ (bottom). Other parameters are the same as in previous figures.}
\label{fig:FWMH}
\end{figure}
The crucial question is, of course, whether the influence of quantum fluctuations seen above could actually be detected in a real experimental set-up. In fact, for contacts with different transmissions also experimental data (see e.g.\ Ref.\cite{Chauvintesis}) deviate from results of the adiabatic classical theory. There are at least three possible explanations for this discrepancy: Landau-Zener (LZ) transitions between adiabatic surfaces, charging effects and associated quantum fluctuations, and spurious noise. With respect to the first one, it was shown in Ref. \cite{fritz} that nonadiabatic LZ transitions enhance the magnitude of the supercurrent peak in almost ballistic channels. This effect is stronger at slightly elevated temperatures and in highly transmissive channels.
Physically, nonadiabatic transitions between $E_-$ and $E_+$ surfaces only occur if the diffusive passage of the phase through the LZ-range around $\theta=\pi$ is sufficiently fast compared to the instantaneous relaxation time of momentum. Accordingly, LZ-transitions are suppressed towards very low temperatures
in contrast to what is observed for quantum fluctuations (see previous figures). Hence, we are left with either quantum fluctuations or spurious noise or, what is most likely, both to explain the addressed differences. Spurious noise alone can be captured by an effective temperature which is indeed the strategy that has been followed in \cite{Chauvintesis}. For this purpose, we analyze in the following to what extent this concept may also be applicable to effectively include quantum noise.
\section{Quantum diffusion and effective temperature}\label{effective}
To describe the dynamics of the phase in the vicinity of a resonance, we consider, instead of the current-biased circuit in Fig.~\ref{fig:circuit} (Norton representation), the completely equivalent circuit where the contact is voltage biased (Thevenin representation). Within the adiabatic approximation and in the overdamped quantum range $\beta E_c/\pi\rho\gg 1$ the voltage across the contact is then given by (again in physical dimensions)
\begin{equation}
V\equiv \Phi_0\dot{\theta}(t) = R \, I(\theta(t))+ V_{b}+V_{ac} \cos(\omega t)+ R\, {Z}_{Q}(\theta(t))
\label{eq:lan4}
\end{equation}
with the voltage bias $V_b=I_b R$ and the quantum noise $Z_{Q}(\theta)=\sqrt{\beta\, D_{Q}(\theta)}\, Z_{Cl}$.
For a perfect voltage bias (no noise) and in the absence of external driving the phase evolves as $\theta(t)=\theta(0)+\omega_0 t$ with the
Josephson frequency $\omega_0=2 e V_b/\hbar$. In presence of an ac-drive the $n/k$-Shapiro resonance appears if $k\omega_0=n \omega$ at a corresponding voltage $V_b=\langle{V}\rangle=(n/k) \Phi_0\omega$. Thus, right at the center of the $n/k$ resonance no dc-current flows and according to (\ref{eq:lan4}) the diffusive motion of the phase can be expressed as
\begin{equation}
\theta(t) = 2\nu \sin(\omega t)+\frac{n}{k} \omega t + \delta(t)
\label{eq:fase2}
\end{equation}
with $\delta$ being the stochastic component of the phase on top of the dominating deterministic part $\theta_0(t)=2\nu \sin(\omega t)+\frac{n}{k} \omega t$ where $\nu=e V_{ac}/\hbar\omega$. Upon inserting this expression in (\ref{eq:lan4}) one arrives at
\begin{equation}
\Phi_0 \dot{\delta}(t) = R I(\theta(t))+V_{b}-\frac{n}{k} \Phi_0\omega + Z_{Q}(\theta(t))\, .
\label{eq:fase2b}
\end{equation}
Apparently, the dynamics of the stochastic part $\delta$ is much slower than that of the deterministic part since $|V_{b}-(n/k) \omega|\ll (n/k)\omega$ and $I(\theta)\approx 0$. This separation of time scales can be exploited when calculating time averaged currents.
Namely, plugging the result (\ref{eq:fase2}) into the Fourier expansion (\ref{eq:curexp}) of the current leads first to
\begin{eqnarray}
I({\theta}(t)) &= &\sum_{m=1}^{\infty} I_{m}(\tau)\Biggl\{\sin[m\alpha_{nk}(t)] \Bigl[J_0(2m\nu)\nonumber\\
&& +2 \sum_{p\geq 1}J_{2p}(2m\nu) \cos(2p\omega t)\Bigr]+2 \cos[m \alpha_{nk}(t)] \nonumber\\
&& \times \sum_{p\geq 1}J_{2p+1}(2m\nu)\cos[(2p+1)\omega t]\Biggr\}
\label{eq:fase5}
\end{eqnarray}
with the abbreviation $\alpha_{nk}(t)=(n/k)\omega t+\delta(t)$ and the Bessel function $J_p$.
Taking now time averages over one period of the external drive and accounting for the time scale separation between deterministic and stochastic dynamics of the phase we obtain
\begin{eqnarray}
I({\theta}(t))&\approx &2 \sum_{m \geq 1} I_{m}(\tau)\, \sin[m \delta(t)] \nonumber\\
&&\times \sum_{p \geq 1} J_{2p}(2m\nu) \langle \cos\left(\frac{m n}{k}\omega t\right) \cos(2p\omega t)\rangle\nonumber\\
&&- 2 \sum_{m \geq 1} I_{m}(\tau)\, \sin[m \delta(t)] \nonumber\\
&& \times \sum_{p \geq 0} J_{2p}(2m\nu) \langle \sin\left(\frac{m n}{k}\omega t\right) \sin(2p\omega t)\rangle
\label{eq:fase6}
\end{eqnarray}
Here, the averages can only take two values, namely, 0 or $1/2$ depending on the relation between $\frac{m n}{k},2p$ and $(2p+1)$. Eventually, this yields the current phase relation in the vicinity of the $n/k$ Shapiro resonance
\begin{equation}
I({\theta}(t)) \approx \sum_{l \geq 1} (-1)^{ln} I_{lk}(\tau) J_{ln}(2lk\nu) \sin[l\phi_k(t)]\, ,
\label{eq:fase7}
\end{equation}
where we put $m=l\, k$, $2p=l\, n$ with $l\in N$ and introduced the scaled phase $\phi_k(t)=k\,\delta(t)$.
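The quoted selection rule (the time averages equal $1/2$ when the harmonics match and vanish otherwise) can be verified numerically. A minimal sketch, averaging over enough drive periods that the fractional frequencies complete full cycles:

```python
import numpy as np

def cycle_average(f1, f2, periods=1, samples=200000):
    """Time average of cos(f1*t)*cos(f2*t) over `periods` full 2*pi intervals."""
    t = np.linspace(0.0, 2.0 * np.pi * periods, samples, endpoint=False)
    return float(np.mean(np.cos(f1 * t) * np.cos(f2 * t)))
```

For the $n/k=1/2$ step, the term with $m = lk = 4$ and $2p = ln = 2$ gives matched frequencies and average $1/2$, while mismatched harmonics (e.g.\ $m=2$) average to zero over two drive periods.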
The expression (\ref{eq:fase7}) is then inserted into (\ref{eq:fase2b}) to provide an approximate equation of motion for $\phi_k$.
While in general solutions are accessible only numerically, insight is already gained by keeping just the term with $l=1$ in (\ref{eq:fase7}), i.e.,
\begin{eqnarray}
\Phi_0 \dot{\phi_k}& =& k\left(V_{b}-\frac{n}{k}\Phi_0 \omega\right)+R\, (-1)^n k I_k J_n(2 k \nu) \sin(\phi_k) \nonumber\\
&& + k\, R\, Z_{Q}(\theta_0+\phi_k/k)\, .
\label{eq:faseeff}
\end{eqnarray}
In contrast to the classical case, here the noise term
also depends on the phase.
In the classical regime, one shows that (\ref{eq:faseeff}) is identical to the equation of motion for the phase in absence of ac-driving if
parameters are renormalized \cite{Chauvintesis}: $I_{c, \rm eff}=|k I_k J_n(2 k \nu)|$, $T_{\rm eff}=k^2 T$, $V_{b,\rm eff}=k (V_{b}-\frac{n}{k}\Phi_0 \omega)$.
In this way, one obtains replicas of the Ivanchenko-Zil'berman expression for the dc-supercurrent around each $n/k$ peak, namely,
\begin{equation}
\langle I \rangle(V_{b}) = I_{c, \rm eff} f_{IZ}\left(\dfrac{V_{b, \rm eff}}{I_{c, \rm eff}} , \dfrac{I_{c, \rm eff}}{T_{\rm eff}}\right)
\label{eq:ivan}
\end{equation}
with $f_{IZ}(x,y)={\rm Im}\{I_{1-i x y}(y)/I_{-i x y }(y)\}$, where $I_{\nu}$ denotes the modified Bessel function of the first kind.
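The complex-order Bessel ratio in $f_{IZ}$ can be evaluated from the power series of $I_{\nu}$ together with a Lanczos approximation for the complex gamma function. The stdlib-only sketch below is an illustration (for production work a library with complex-order Bessel support is preferable), and the parameter values in the checks are arbitrary:

```python
import cmath
import math

_G = 7
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex argument (Lanczos approximation, g = 7)."""
    z = complex(z)
    if z.real < 0.5:  # reflection formula for the left half plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _LANCZOS[0]
    for i in range(1, _G + 2):
        x += _LANCZOS[i] / (z + i)
    t = z + _G + 0.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def besseli(nu, y, terms=60):
    """Modified Bessel I_nu(y) for complex order nu and real y > 0 (power series)."""
    lg = cmath.log(y / 2.0)
    return sum(cmath.exp((2 * k + nu) * lg) / (math.factorial(k) * cgamma(k + nu + 1.0))
               for k in range(terms))

def f_IZ(x, y):
    """Ivanchenko-Zil'berman shape function Im{ I_{1-ixy}(y) / I_{-ixy}(y) }."""
    a = -1j * x * y
    return (besseli(1.0 + a, y) / besseli(a, y)).imag
```

The function vanishes at $x=0$ and is odd in $x$, as the supercurrent branch must be; both properties follow from the conjugation symmetry of $I_{\nu}$ under $\nu\to\bar\nu$ for real argument.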
Now, in the quantum regime in leading order we may put $Z_Q(\theta)\approx Z_Q(\theta_0)=\sqrt{\beta D_Q(\theta_0)} \, Z_{Cl}$
such that a similar renormalization applies, however, with a modified temperature scaling, i.e.,
\begin{equation}
T_{q, \rm eff}= k^2 \langle \beta D_Q(\theta_0(t))\rangle \, T > T_{\rm eff}\, .
\label{effectiveT}
\end{equation}
This effective temperature depends also on the dissipation strength, the driving frequency and amplitude, and is a nonlinear function of the actual environmental temperature. We note that the actual experimental data \cite{Chauvintesis} do not follow the scaling of $T_{\rm eff}$ with $k$, but rather can only be described by a much higher effective temperature $T_{\rm eff, exp}$ which even affects the integer Shapiro resonances and has been attributed to spurious noise in the circuitry. The above enhancement due to quantum fluctuations may partially contribute to $T_{\rm eff, exp}$, however, is not able to completely account for the discrepancy between $T_{\rm eff, exp}$ and $T_{\rm eff}$.
Beyond the case for $l=1$ progress is achieved by assuming $|\Lambda \beta U''(\theta)|\ll 1$ so that one may expand [cf.~Eq.~(\ref{eq:qsmol})] $\beta D_Q(\theta)\approx 1+\Lambda \beta I'(\theta)$ with
\begin{equation}
I'({\theta})\approx \sum_{l \geq 1} l\, q (-1)^{ln} I_{lk}(\tau) J_{ln}(2lk\nu) \cos(l\phi_k)\, .
\label{eq:pot3}
\end{equation}
Thus, if contributions with sufficiently large $l$ are relevant, one may no longer replace $\cos(l\phi_k)\to 1$, meaning that quantum noise {\em cannot} be captured by a global effective temperature. Instead, its phase dependence leads to a local ``temperature'' and $n/k$ resonances are {\em not} simply replicas of the supercurrent peak. To extract signatures of this breakdown of the universal scaling behavior (\ref{effectiveT}), the residual spurious noise dominating $T_{\rm eff, exp}$ must be substantially reduced.
We note that Grabert et al.\ \cite{grabert}, who studied supercurrent phase diffusion in the absence of driving in the {\it tunnel limit} and at low temperatures within a time-dependent perturbation theory, also obtained an extension of the Ivanchenko-Zil'berman expression similar to (\ref{eq:ivan}), but with an effective Josephson energy. This result was later reproduced within the QSE formulation \cite{Anker}.
\section{Summary}\label{conclusions}
In summary, the impact of quantum fluctuations on fractional Shapiro resonances, a hallmark of a non-sinusoidal current-phase relation, is analyzed for atomic point contacts with highly transmitting channels in the presence of microwave radiation.
Known experimental $I-V$ results \cite{Chauvin,Chauvintesis} exhibit substantial deviations when compared with predictions from a classical adiabatic theory. While one explanation has been the appearance of spurious noise in the circuit, here we find that quantum fluctuations may give rise to a similar effect. Departures from the classical approach become relevant for highly transmissive channels and for sufficiently low temperatures. An effective description of quantum noise in terms of an effective temperature only applies to contacts with almost sinusoidal current-phase relations, which may offer a way to distinguish between classical noise and quantum fluctuations in highly transmissive contacts. As a prerequisite, however, unspecific spurious noise sources in the circuitry must be under control so that contacts are embedded in heat baths with temperatures of about $T\sim 40$ mK or below.
The results that we have obtained correspond to Al point contacts with a superconducting gap of $ \Delta_{\rm Al} \sim 200 \mu eV$. For this metal our model predicts that quantum fluctuations play a pronounced role for very low temperatures ($T < 40$ mK). Landau-Zener transitions are negligible if $T \ll T_{\rm LZ}\sim 0.5 \Delta_{\rm Al}/k_{\rm B} \approx 1$ K (for transmissions $\tau>1-10^{-4}$) \cite{fritz}. Experimentally, signatures of quantum fluctuations are expected to be even more dominant for materials with larger superconducting gaps such as e.g.\ Nb with $\Delta_{\rm Nb} \approx 3600 \mu eV$ leading to a clear separation between $T_{\rm LZ} \approx 9$ K and the temperature range where typical experiments are performed (between 30 mK and 150 mK).
\begin{acknowledgements}
We acknowledge stimulating discussions with A. Levy Yeyati. This work was supported by PIP Conicet (FC) and by the DFG through SFB569 (JA).
\end{acknowledgements}
\section{Introduction}
The precise Cosmic Microwave Background (CMB) properties
reported by the {\sc Planck} experiment \cite{Planck_params,Planck_infl,BK15} and
the discovery by LHC of the Higgs boson \cite{Atlas,CMS}
increased the interest in so-called Higgs portal interactions that connect
the hidden (dark) sector and the visible sector of the Standard Model (SM),
with expected imprints on collider experiments \cite{PBC}.
Beyond-the-SM (BSM) scenarios that introduce a dark sector in addition to the visible SM sector
are required to explain a number of observed phenomena in particle physics,
astrophysics and cosmology, such as non-zero neutrino masses and oscillations, Dark Matter (DM), the baryon asymmetry of the universe and cosmological inflation.
It is usual to assume that cosmic inflation is decoupled from the SM at
energies lower than the inflationary scale since the slow-roll conditions for inflation generally permit only tiny
couplings of the inflaton field to other fields. This assumption prevents
the direct investigation of the inflation mechanism in particle physics experiments. Consequently,
there are few compelling scenarios of inflation based on particle physics theory.
Since the only known fundamental scalar quantum
field is the SM Higgs field, the inflation models using the SM Higgs boson
as inflaton attained great attention over the past years.
A number of Higgs inflation models, mostly with non-canonical action, have been proposed.
They include models with the Higgs scalar field non-minimally coupled to gravity
\cite{Bezrukov08,Futamase89,Fakir90}, non-minimal derivative coupling to the Einstein tensor
\cite{Germani10,Granda11,Jimenez20,Tsu12},
scalar-tensor models \cite{Jimenez19a,Jimenez19b},
Galileon models \cite{Kamada11,Ohashi10,Kobayashi10,Kobayashi11}, quartic hilltop models \cite{Bramante16,German21}. \\
The viability of these models is already substantially limited, mostly because they predict tensor-to-scalar ratios larger than the upper bound
set by the combined analysis of {\sc Planck} and BICEP-Keck Array data (hereafter {\sc Planck}+BK15) that constrain
the energy scale of inflation to \cite{Planck_infl,BK15}:
\begin{equation}
\label{infl_scale}
{V}^{1/4}_*=\left({\frac{3 \pi^2 A_s^{*} }{2} r_{*}}\right)^{1/4} M_{pl}
< 1.6 \times 10^{16} {\rm GeV} \hspace{0.2cm} (95\% {\rm CL}) \,.
\end{equation}
Here the quantities with $(^*)$ are evaluated at the pivot scale $k_*=0.002$,
$r_*$ is the ratio of tensor-to-scalar amplitudes, $A_s^{*}$
is the amplitude of the curvature density perturbations and
$M_{pl} $ is the reduced Planck mass. This implies an upper bound for the Hubble expansion rate during inflation:
\begin{equation}
H_* < 2.5 \times 10^{-5} M_{pl}\, \hspace{0.2cm} (95\% {\rm CL}) \,.
\end{equation}
The above bound selects the viable Higgs inflation models through the requirement $H_{*} \ll \Lambda_c$,
where $\Lambda_c$ is the unitary bound of each underlying theory, defined as the scale below which the quantum gravitational corrections are sub-leading \cite{Burges09,Bezrukov11,Burges14}. \\
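Eq.~(\ref{infl_scale}) is easy to evaluate directly. In the sketch below the inputs $A_s \simeq 2.1\times 10^{-9}$ and $r_* < 0.056$ are assumed typical values (they are not quoted in this paper) and reproduce the $1.6\times 10^{16}$ GeV bound above:

```python
import math

M_PL = 2.435e18  # reduced Planck mass in GeV (assumed input)

def inflation_scale(A_s, r):
    """Energy scale of inflation V_*^{1/4} = (3 pi^2 A_s r / 2)^{1/4} M_pl, in GeV."""
    return (1.5 * math.pi ** 2 * A_s * r) ** 0.25 * M_PL

def hubble_over_mpl(A_s, r):
    """Slow-roll Hubble rate H_* = sqrt(V_*/3)/M_pl, in units of M_pl."""
    return inflation_scale(A_s, r) ** 2 / (math.sqrt(3.0) * M_PL ** 2)
```

With these inputs one also recovers the quoted Hubble bound $H_* \lesssim 2.5\times 10^{-5}\,M_{pl}$.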
It is worth mentioning that the chaotic inflation model with quartic potential is excluded by the data at more than
95\% confidence level \cite{Linde}.
Among the models used to lower the predictions for tensor-to-scalar ratio,
the most studied is the Higgs inflation with non-minimal coupling to gravity \cite{Bezrukov08}.
At tree level and for large non-minimal coupling $\xi \sim {\cal O} (10^4)$, this model
gives a small tensor-to-scalar ratio,
in agreement with the
{\sc Planck}+BK15 data.
However, for such large values
of $\xi$ the unitary bound scale, $\Lambda_c=M_{pl}/\xi$, could be close or below the
energy scale of inflation \cite{Burges09,Calmet11}. \\
An interesting framework for Higgs inflation is provided by scalar-tensor models including the
non-minimal kinetic coupling to the Einstein tensor and to the Gauss-Bonnet invariant.
These models can produce inflation while simultaneously satisfying the present inflationary
observational constraints and the unitary bound constraints \cite{Jimenez19a,Jimenez19b}.
Higgs portal interactions via the Renormalisation Group (RG) loop contributions can also lower the
predictions of Higgs inflation models for the tensor-to-scalar ratio.
The price to pay in these models is the electroweak (EW) vacuum metastability issue.
The actual values of Higgs boson and top quark masses imply
that the EW vacuum is metastable at energies larger than $\Lambda_I \sim 10^{11}$ GeV,
where Higgs quartic coupling turns negative\footnote{The actual value of EW vacuum metastability scale is defined for the top quark mass $m_t=173.15$ GeV and Higgs boson mass $m_H=125.10$ GeV \cite{PDG} as the value of the Higgs field at which the Higgs quartic coupling, $\lambda_h$, becomes negative due to radiative corrections.}\cite{Bezrukov15,Buttazzo13,Degrassi12,Allison14}. \\
However, it is found that a small admixture of the Higgs field with a SM scalar singlet
with non-zero vacuum expectation value ({\it vev}) can make the EW vacuum
completely stable due to a tree-level effect on the Higgs quartic coupling,
which may be enough to guarantee the stability
at large Higgs field values \cite{Lebedev12,Elias12,Ballesteros15}.
An appealing scenario in the presence of Higgs portal interactions
is given by a SM singlet scalar field with non-zero {\it vev}
mixed with the SM Higgs boson, often called dark Higgs boson.
The dark Higgs mixing with the SM Higgs boson makes possible the direct search for the dark Higgs inflaton at collider experiments: the mixing guarantees that the dark Higgs boson is produced in the same channels as a SM Higgs boson of equal mass.
Through the same mixing the dark Higgs boson inherits
the SM Higgs boson couplings to SM fermions via the Yukawa interaction term:
\begin{equation}
L \supset
\theta \frac{m_{f}} {v} \phi {\bar f}{f} \,,
\end{equation}
where: $\phi$ is the dark Higgs field, $\theta$ is its mixing angle with SM Higgs boson and $m_f$ is the fermion mass.
Dark Higgs bosons can be produced at the LHC in rare decays of heavy mesons (such as K and B mesons).
They are highly collimated, with characteristic angles $\alpha=M/E$ relative to the parent meson's direction ($M$ is the meson mass and $E$ is the dark Higgs energy).
For $E \sim$ 1 TeV the light dark Higgs decay lengths are of ${\cal O}$($10^3 \,m$).
Therefore a significant number of dark Higgs bosons can be detected in detectors placed far from the interaction points of the LHC experiments \cite{PBC}. Thus, present and future experimental sensitivity
to the light dark Higgs boson decay crucially depends on its production and decay rates and
on detector's location and acceptance.
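The collimation estimate $\alpha = M/E$ translates directly into numbers; the B-meson mass used below is an assumed illustrative input, not a value quoted in the text:

```python
def collimation_angle(meson_mass, energy):
    """Characteristic angle alpha = M/E (radians) between the dark Higgs
    momentum and the parent meson direction, both in the same energy units."""
    return meson_mass / energy

# e.g. a dark Higgs from a B meson (M ~ 5.28 GeV) at E ~ 1 TeV
alpha_B = collimation_angle(5.28, 1000.0)  # a few mrad
```

Lighter parents (kaons) give correspondingly tighter collimation at the same energy.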
The light dark Higgs boson as inflaton (rather than the Higgs boson) has been first
implemented in Ref. \cite{Tkchev06}, extending the
$\nu$MSM model \cite{nuMSM1,nuMSM2} to simultaneously explain the cosmological inflation, the DM sterile neutrino masses and the baryon asymmetry of the universe \cite{Tkchev06,Anisimov09}.
The light dark Higgs inflaton properties have been mostly studied in the framework of
dark Higgs inflation with non-minimal coupling to gravity \cite{Lerner11,Tenkanen16,Aravind16,Kim17}.
Refs.\cite{Bezrukov10,Bezrukov13a} present a detailed analysis on the possibility to explore this model in the particle physics experiments.\\
This possibility has also been investigated in the framework of low-scale inflation models, such as the quartic hilltop model \cite{Bramante16}, which predicts a very small value of the tensor-to-scalar ratio, beyond the sensitivity of the CMB experiments.
Thus, dark Higgs searches at the LHC could experimentally test the low scale of inflation.
In this paper we analyse the dark Higgs inflation model
with curvature corrections given by the kinetic term non-minimally coupled to the Einstein tensor and the coupling to the Gauss-Bonnet (GB) 4-dimensional invariant (hereafter EGB dark Higgs inflation)
and explore the possibility to test its predictions
by the particle physics experiments.\\
In this model, the non-minimal kinetic coupling to the Einstein tensor causes the inflaton field to roll more slowly, avoiding the large-field problem present in chaotic inflation \cite{Granda11}.
On the other hand, the second-order curvature corrections represented by the scalar field coupled to the GB term
can increment or suppress (depending on the sign) the tensor-to-scalar ratio
\cite{Jiang13,Kanti15,Odintsov18}. The dynamics of the slow-roll inflation
by combining both corrections has been proposed in context of the SM Higgs inflation in Refs.\cite{Jimenez19a,Jimenez19b}.\\
The possibility to explore this model through dark Higgs searches at the LHC
could provide connections between fundamental theories like supergravity and string theories where these couplings are expected to arise, and the Higgs portal interactions.
The paper is organised as follows. In the next section we discuss the dark Higgs inflaton properties.
In Section~3 we introduce the EGB dark Higgs inflation model.
In Section~4 we analyse the cosmological consistency of the EGB dark Higgs inflation
predictions. Section~5 discusses the
possibility to test the EGB dark Higgs inflation predictions with some representative particle physics experiments at the LHC. In Section~6 we draw our conclusions.\\
Throughout the paper we consider a homogeneous and isotropic flat background described
by the Friedmann-Robertson-Walker (FRW) metric:
\begin{equation}
\label{FRW}
{\rm d}s^2=g_{\mu,\nu}{\rm d }x_{\mu}{\rm d}x^{\nu}=-{\rm d}t^2+a^2(t)dx^2 \,,
\end{equation}
where $a$ is the cosmological scale factor ($a_0$=1 today). Also, we use
the overdot to denote the time derivative and $( ' )$ to denote the derivative with respect to the scalar field.
\section{Dark Higgs inflaton properties}
\subsection{Dark Higgs inflaton parameters}
We consider the extension of the SM canonical action with the dark Higgs inflaton field, as introduced in Ref. \cite{Tkchev06}:
\begin{eqnarray}
\label{S}
S=\int{{\rm d}^4\,x}\sqrt{-g}\, \left[\frac{\cal R}{2\kappa^2} + \frac{1}{2} (\partial_{\mu} \phi)^2 -V(\phi) \right]\,,
\end{eqnarray}
where ${\cal R}$ is the Ricci scalar, $\kappa^2 = M^{-2}_{pl}$, $\phi$ is the dark Higgs inflaton field with the potential
$V(\phi)$ defined as:
\begin{equation}
\label{DH_V}
V(\phi)=-\frac{1}{2}m_{\phi}^2 \phi^2 +\frac{\beta}{4}\phi^4 +
\lambda \left( {\cal H}^{\dagger} {\cal H} -\frac{\alpha}{\lambda}\phi^2\right)^2 \,.
\end{equation}
In the above equation $\lambda$ is the SM Higgs field self coupling,
$m_{\phi}$ is the dark Higgs mass, $\beta$ is the dark Higgs quartic coupling
and $\alpha$ is the coupling between the SM Higgs field ${\cal H}$ and the dark Higgs inflaton.
For $\alpha, \beta \ll \lambda$, inflation is driven along a flat direction of the scalar potential given by:
\begin{equation}
\label{flat}
{\cal H}^{\dagger} {\cal H} \simeq \frac{\alpha}{\lambda} \phi^2 \,.
\end{equation}
Along this direction the dark Higgs potential is $V(\phi) = \beta \phi^4/4$ and the coupling constant $\beta$
can be fixed from the requirement to obtain the correct amplitude of the curvature density perturbations.
This condition leads to $\beta \sim 1.3\times 10^{-13}$ \cite{Lyth99}. \\
The negative sign of the quadratic term in (\ref{DH_V}) ensures that the scale invariance
is explicitly broken at the classical level in the inflaton
sector, leading to non-zero {\it vev} for the dark Higgs inflaton after reheating. Then,
the condition (\ref{flat}) gives rise to EW spontaneous symmetry breaking and the SM Higgs field
gains non-zero {\it vev } too.
We remind that the SM Higgs boson mass is given by $m_{H}=\sqrt{2 \lambda}\, v$, where the SM Higgs {\it vev} is fixed at $v\equiv(\sqrt{2} G_F)^{-1/2}=246.22$ GeV by the Fermi coupling constant
$G_F$, and the experimentally measured Higgs boson mass is $m_H= 125.10\pm 0.14$ GeV \cite{PDG}. \\
In the gauge basis $(\sqrt{2}{\cal H} - {\it v}, \phi)$
the dark Higgs field expectation value, $\langle\phi\rangle$, its mass $m_{\phi}$ and mixing angle $\theta$, are given by:
\begin{eqnarray}
\label{DH_par}
\langle \phi \rangle = \frac{m_H}{2 \sqrt{\alpha} }\,, \hspace{0.5cm} m_{\phi} = m_H \sqrt{ \frac{\beta}{2 \alpha}}\,,
\hspace{0.5cm} \theta=\sqrt{ \frac{2 \alpha}{\lambda}}\,.
\end{eqnarray}
For the purpose of this work we choose $\alpha > \beta/2$, therefore the dark Higgs inflaton is lighter than the SM Higgs boson, $m_{\phi} < m_H$.
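The tree-level relations (\ref{DH_par}) can be evaluated directly. With the couplings used later in this section ($\alpha = 3\times 10^{-7}$, $\beta = 1.3\times 10^{-13}$, $\lambda = 0.129$) they reproduce the $m_\phi \simeq 0.058$ GeV value quoted below as the inflaton mass lower bound:

```python
import math

def dark_higgs_params(m_H, alpha, beta, lam):
    """Tree-level dark Higgs vev, mass and mixing angle (all masses in GeV)."""
    vev = m_H / (2.0 * math.sqrt(alpha))
    m_phi = m_H * math.sqrt(beta / (2.0 * alpha))
    theta = math.sqrt(2.0 * alpha / lam)
    return vev, m_phi, theta
```

For $\alpha > \beta/2$ the routine always returns $m_\phi < m_H$, consistent with the light dark Higgs scenario considered here.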
\begin{figure}
\label{fig1}
\centering
\includegraphics[width=7cm,height=7cm]{Fig1.eps}
\caption{The evolution with the scale dependent variable $t={\rm ln}(\phi/m_t)$ of the running of
$\lambda$, $\beta$ and $\alpha$ coupling constants normalised to their initial values chosen at $t=0$
as: $\lambda(0)=0.129$, $\beta(0)=1.3 \times 10^{-13}$ and $\alpha(0)=3 \times 10^{-7}$.
The SM Higgs mass is fixed at $m_H=125.09$ GeV.
The right-hand blue region indicates the slow-roll inflationary regime.\label{Fig1}}
\end{figure}
The upper bound on the coupling constant $\alpha$ comes from the requirement
that the quantum corrections do not upset
the flatness of the inflaton potential.
This constraint leads to $\alpha < 3 \times 10^{-7}$ at the tree level \cite{Bezrukov10} and corresponds
to the lower bound of the dark Higgs inflaton mass:
\begin{equation}
m_{{\phi}} \geqslant 0.058 \left ( \frac{\beta}{1.3 \times 10^{-13}} \right)^{1/2} \,\, {\rm GeV}.
\end{equation}
The lower bound on $\alpha$ comes from the requirement
of an efficient conversion of the lepton asymmetry to baryon asymmetry
during baryogenesis. This requirement leads to $\alpha > \beta \sim 10^{-13}$ \cite{Tkchev06}.
A stronger lower bound on $\alpha$ is placed by the estimate of the reheating temperature.
For the inflaton particles in thermal equilibrium the reheating temperature is given by \cite{Anisimov09}:
\begin{equation}
\label{Treh}
T_r \simeq \frac{\zeta(3) \alpha^2} {4 \pi^2} \sqrt {\frac{90}{g_r}} \,{\rm M_{pl}} \,,
\end{equation}
where $g_r=106.75$ is the SM effective number of relativistic degrees of freedom at reheating and
$\zeta(3)=1.202$ is the Riemann zeta function.
The requirement that $T_r > 150$ GeV ($T \simeq 150 $ GeV is the temperature of the EW symmetry breaking),
leads to $ \alpha >7.3 \times 10^{-8}$.\\
For a non-thermal distribution of the inflaton field the estimate of the reheating temperature
is $\sim 10^5 T_r$ \cite{Micha03,Micha04}, leading to $\alpha > 7 \times 10^{-10}$. \\
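As a numerical illustration of (\ref{Treh}), the sketch below evaluates $T_r$ for representative values of $\alpha$; the reduced Planck mass convention is assumed here, so the absolute normalisation is indicative only, while the quadratic scaling $T_r \propto \alpha^2$ is exact:

```python
import math

M_PL  = 2.435e18   # GeV, reduced Planck mass (convention assumed here)
G_R   = 106.75     # SM relativistic degrees of freedom at reheating
ZETA3 = 1.202      # Riemann zeta(3)

def t_reheat(alpha):
    """Reheating temperature for thermalised inflaton particles."""
    return ZETA3 * alpha**2 / (4.0 * math.pi**2) * math.sqrt(90.0 / G_R) * M_PL

# Near the flatness upper bound, reheating lands comfortably
# above the EW scale (~150 GeV)
print(f"T_r(3.0e-7) = {t_reheat(3.0e-7):.0f} GeV")
print(f"T_r(7.3e-8) = {t_reheat(7.3e-8):.0f} GeV")
```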
These constraints are consistent with the upper bounds of the dark Higgs inflaton mass \cite{Anisimov09}:
\begin{equation}
m_{{\phi}} \leqslant (0.116 -1.166) \left( \frac{\beta} {1.3 \times 10^{-13} } \right) ^{1/2}\,\, {\rm GeV}\,,
\end{equation}
where the range corresponds to the thermal or non-thermal estimates.
The above bounds on the inflaton mass may change
if the quantum corrections to the coupling constants are taken into account. \\
Figure~\ref{Fig1} presents the evolution with the scale dependent variable $t={\rm ln}(\phi/m_t)$ of the running of
$\lambda(t)$, $\beta(t)$ and $\alpha(t)$ coupling constants normalised to their initial values
$\lambda(0)$, $\beta(0)$ and $\alpha(0)$, obtained by integrating the corresponding beta functions \cite{Degrassi12,Lerner11,Kim17,Aravind16}. \\
As the SM Higgs mass is fixed at $m_H=125.09$ GeV we take $\lambda(0)=0.129\,$.
We also fix $\alpha(0)=3 \times 10^{-7}$ to the $\alpha$ upper bound and take $\beta(0)=1.3 \times 10^{-13}$. \\
One should note that the correction to $\beta$ from the coupling of the dark Higgs inflaton to the SM Higgs boson is
$ \delta \beta \sim \alpha^2$ and therefore the evolution of $\beta$ is dominated by the $\alpha^2$ contribution.
Figure~\ref{Fig1} shows that all coupling constants remain positive at the inflationary scale while the
flatness of the inflationary potential is preserved.
\subsection{Reheating and horizon crossing}
The reheating proceeds by the energy transfer from the dark Higgs inflaton field to the SM Higgs particles
through a regime of parametric resonance \cite{Micha03,Micha04}.
At early stages the entire energy is in the inflaton zero-mode
and all other modes are absent. The inflaton zero-mode
oscillations excite the non-zero modes of the inflaton and of the SM Higgs particles.
This parametric resonance regime ends before a significant
part of the inflaton zero-mode energy is depleted \cite{Anisimov09}.
The reason is the SM Higgs re-scattering processes that become important quite early
because of the large SM Higgs self-coupling ($\lambda \sim 0.1 $).\\
After the end of the parametric resonance regime, the fluctuations of the inflaton
field continue to grow exponentially while the energy transferred to the SM Higgs field
is negligibly small.
The SM Higgs re-scattering processes bring the inflaton particles into thermal equilibrium
and the reheating proceeds through the decay
of the dark Higgs inflaton into the SM Higgs particles.
The inflationary observables are evaluated at the epoch of the Hubble crossing scale $k_*$ (pivot scale) quantified
by the number of {\it e}-folds ${\cal N}$ before the end of the inflation.
Therefore, the uncertainties in the determination of $\cal N$ translate into theoretical uncertainties in the determination of the inflationary observables \cite{Kinney06,Lerner11}.
Assuming that the ratio of the today entropy per co-moving volume to that after reheating
is negligible, the main error $\Delta {\cal N}$ in the determination of ${\cal N}$ is given by the uncertainty in
the determination of the reheating temperature ${T_r}$.
The number of {\it e}-foldings at Hubble crossing scale $k_*$ is related to $T_r$ through:
\begin{equation}
\label{N_TR}
{\cal N}= {\rm log} \left[ \left( \frac{\rho_r}{\rho_{e} }\right)^{1/4}
\left(\frac{g_0 T^3_0}{g_r T^3_r}\right)^{1/3} \left(\frac{k_*}{a_0H_0}\right) \right] \,,
\end{equation}
where $\rho_r$ and $\rho_e$ are the energy densities at reheating and at the end of inflation,
$T_0$ is the present photon temperature, $a_0$ and $H_0$ are the present values of the scale factor and of the Hubble parameter, and
$g_r=106.75$ and $g_0=43/11$ are the effective numbers of relativistic degrees of freedom
at reheating and at present. \\
From (\ref{Treh}) and (\ref{N_TR}) we get $\Delta {\cal N} \simeq 3$,
corresponding to the uncertainty in the determination of $T_r$ for a thermal distribution
of the inflaton. This uncertainty is four times larger, $\Delta {\cal N} \simeq 12$, in the case of a non-thermal distribution.
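One numerical reading consistent with the quoted values, assuming $\Delta {\cal N} \simeq {\rm ln}(T_r^{max}/T_r^{min})$ with $T_r \propto \alpha^2$ from (\ref{Treh}), is sketched below using the allowed ranges of $\alpha$ derived above:

```python
import math

# Allowed ranges of the dark Higgs - SM Higgs coupling alpha
ALPHA_MAX          = 3.0e-7   # flatness upper bound
ALPHA_MIN_THERMAL  = 7.3e-8   # T_r > 150 GeV, thermal inflaton distribution
ALPHA_MIN_NONTHERM = 7.0e-10  # non-thermal reheating estimate

def delta_N(alpha_max, alpha_min):
    """Uncertainty in the e-fold number, assuming
    Delta N ~ ln(T_max/T_min) with T_r proportional to alpha**2."""
    return math.log((alpha_max / alpha_min) ** 2)

print(round(delta_N(ALPHA_MAX, ALPHA_MIN_THERMAL)))   # -> 3  (thermal)
print(round(delta_N(ALPHA_MAX, ALPHA_MIN_NONTHERM)))  # -> 12 (non-thermal)
```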
\section{Dark Higgs inflation with curvature corrections}
Closely following \cite{Jimenez19b,Jimenez19a}, in this section we introduce the inflation model
assuming non-minimal couplings of the dark Higgs
field to the Einstein tensor and to the Gauss-Bonnet (GB) 4-dimensional invariant
(the EGB dark Higgs inflation model),
derive the background field equations and the slow-roll parameters, and evaluate
the primordial power spectra of the scalar and tensor perturbations.\\
The action of this model is:
\begin{equation}
\label{S_E}
S_E=\int{{\rm d}^4\,x}\sqrt{-g}\, \left[\frac{\cal R}{2\kappa^2} +
X-V(\phi) +F_1(\phi) G_{\mu \nu} \partial^{\mu} \phi \partial^{\nu} \phi
-F_2(\phi) {\cal G} \right]\,,
\end{equation}
where $V(\phi)$ is the dark Higgs potential given in
(\ref{DH_V}), $X=-\frac{1}{2}\partial_{\mu}\phi\, \partial^{\mu}\phi$ is the canonical kinetic term, $F_1(\phi)$ and $F_2(\phi)$ are coupling functions,
$G_{\mu \nu}$ is the Einstein tensor and $\cal G$ is the GB 4-dimensional invariant:
\begin{equation}
{\cal G}= {\cal R}^2- 4 {\cal R}_{\mu \nu}{\cal R}^{\mu \nu}+{\cal R}_{\mu \nu \delta \rho} {\cal R}^{\mu \nu \delta \rho}\,.
\end{equation}
The field equations in a spatially flat background
described by the FRW metric (\ref{FRW}) are of the form
(see Appendix B from \cite{Jimenez19b}) :
\begin{equation}
\label{H2}
H^2 = \frac{\kappa^2}{3}\left(\frac{1}{2}{\dot \phi}^2 +V(\phi)+9H^2 F_1{\dot \phi}^2+24H^3 {\dot F_2} \right) \,,
\end{equation}
\begin{eqnarray}
\label{phi_dot}
{\ddot \phi}+ 3 H {\dot \phi} & + & V^{'}+24 H^2(H^2+{\dot H})F^{'}_2+18 H^3F_1{\dot \phi} \\ \nonumber
& + & 12 H {\dot H}F_1 {\dot \phi}+6 H^2F_1{\ddot \phi}+3 H^2 F^{'}_1 {\dot \phi}^2 =0 \,.
\end{eqnarray}
The slow-roll parameters are defined as:
\begin{eqnarray}
\label{slow-roll}
\epsilon_0= - \frac{{\dot H}}{H^2}\,, \hspace{0.2cm}\epsilon_1=\frac{{\dot \epsilon_0}}{H \epsilon_0} \,,
\hspace{0.2cm} k_0=3F_1{\dot \phi}^2\,,
\hspace{0.2cm} k_1=\frac{ {\dot k_0 }}{H k_0}\,, \hspace{0.3cm}
\Delta_0=8H{\dot F_2}\,, \hspace{0.2cm} \Delta_1=\frac{ {\dot \Delta_0}}{H\Delta_0}.
\end{eqnarray}
Under the slow-roll conditions ${\ddot \phi} \ll 3 H {\dot \phi} $ and
$ |\epsilon_0|\,,|\epsilon_1|\,,...|\Delta_1|\, \ll 1$ the potential and field equations take the
form:
\begin{eqnarray}
\label{eq_field}
H^2 &\simeq & \frac{\kappa^2}{3}V(\phi) \,\\
3H{\dot \phi} & \simeq& -V^{'} -18 H^3F_1{\dot \phi}-24H^4F^{'}_2 \,.
\end{eqnarray}
The number of {\it e}-folds before the end of inflation expressed in terms of the inflaton field is given by:
\begin{equation}
\label{folds}
{\cal N}= \int^{\phi_E}_{\phi_I} \frac{H}{{\dot \phi}} {\rm d} \phi=\int^{\phi_E}_{\phi_I} \frac{H^2+6H^4F_1}
{-8H^4F^{'}_2- \frac{1}{3} V^{'}} {\rm d} \phi \,,
\end{equation}
where $\phi_I$ and $\phi_E$ are the values of the inflaton field at the beginning and at the end of inflation.\\
The power spectra of the primordial scalar and tensor perturbations, $\cal {P}_R$ and ${\cal P}_T$, are computed as:
\begin{eqnarray}
\label{PFS}
{\cal P}_R & =& A_S \frac{H^2}{2 \pi^2} \frac{{\cal G_S}^{1/2}} {{\cal F_S}^{3/2}}\,,
\hspace{0.1cm}\hspace{0.4cm}A_S=\frac{1}{2}2^{2\mu_S-3} \left |\frac{\Gamma(\mu_S)}{\Gamma(3/2)}\right|^2,
\hspace{0.1cm} \mu^2_S=\frac{9}{4}\left[1+\frac{4}{3}\epsilon_0+\frac{2}{3}
\frac{2 \epsilon_0 \epsilon_1 -\Delta_0 \Delta_1}{2 \epsilon_0-\Delta_0}\right] \nonumber \\
{\cal P}_T& = & 16 A_T \frac{H^2}{2 \pi^2} \frac{{\cal G_T}^{1/2}} {{\cal F_T}^{3/2}},
\hspace{0.2cm} A_T=\frac{1}{2} 2^{2 \mu_{T}-3} \left| \frac{\Gamma(\mu_T)}{\Gamma(3/2)} \right|^2,
\hspace{0.2cm} \mu_T=\frac{3}{2}+\epsilon_0 \,,
\end{eqnarray}
\begin{eqnarray}
{\cal F_S}=c^2_S {\cal G_S}\,,\hspace{0.2cm}
{\cal G_S} & = & \epsilon_0 -\frac{1}{2}\Delta_0 \,,
\hspace{1.2cm} c^2_S = 1- \frac{ \frac{4}{3} k_0 (\Delta_0+\frac{4}{3} k_0)
+ \frac{4}{3} k_0\epsilon_0} {2 \epsilon_0-\Delta_0}\,, \nonumber \\
{\cal F_T}=c^2_T {\cal G_T}\,, \hspace{0.2cm}
{\cal G_T} & = &1-\frac{1}{3}k_0-\Delta_0 \,,
\hspace{0.4cm} c^2_T = \frac{3+k_0-3 \Delta_0(\epsilon_0+\Delta_1)}{3-k_0-3\Delta_0} \,,
\end{eqnarray}
where $c_S$ and $c_T$ are the
sound speeds of scalar and tensor density perturbations. \\
The spectral index of scalar density perturbations $n_S$ and the tensor-to-scalar ratio $r$, expressed
in terms of the slow-roll parameters, are given by:
\begin{equation}
\label{n_S}
n_S-1=-2\epsilon_0-\frac{2 \epsilon_0 \epsilon_1-\Delta_0 \Delta_1}{2 \epsilon_0-\Delta_0}\,.
\end{equation}
\begin{equation}
\label{r}
r= 8 \left(\frac {2 \epsilon_0 -\Delta_0} { 1-\frac {1}{3}k_0 - \Delta_0} \right)\,.
\end{equation}
Hereafter we take $F_1(\phi)$ and $F_2(\phi)$ to be power-law functions of the form:
\begin{eqnarray}
\label{ST_coupling}
F_1(\phi)=\frac{\gamma}{\phi^4}\,, \hspace{0.5cm}F_2(\phi)=\frac{\eta}{\phi^4}\,,
\end{eqnarray}
where $\gamma$ and $\eta$ are positive constants with the dimension
$[\gamma]=M_{pl}^2$ and $[\eta]=M^4_{pl}$.\\
For this setup, the first slow-roll parameter $\epsilon_0$ reads as:
\begin{equation}
\label{slow-roll1}
\epsilon_0=\frac{16}{3}\frac{(3-2\eta\beta)}{(2+\gamma\beta)\phi^2}\,.
\end{equation}
From (\ref{folds}) one gets the number of {\it e}-folds before the end of inflation:
\begin{equation}
\label{folds_1}
{\cal N}=-\frac{3(2+\gamma \beta)}{16(3-2 \eta\beta)} \phi^2 \bigg\vert^{\phi_E}_{\phi_I}\,.
\end{equation}
The value of the scalar field at the end of inflation, $\phi_E$,
is obtained from the requirement $\epsilon_0=1$,
while (\ref{folds_1}) allows the determination of the inflaton
field value $\phi_I$ at ${\cal N}$ {\it e}-folds before the end of inflation:
\begin{eqnarray}
\label{phi_E}
\phi_E= \frac{ 4\sqrt{3-2\eta\beta}} {\sqrt{3(2+\gamma\beta)}}\,,
\hspace{0.5cm} \phi_I=\sqrt{{\cal N}+1}\phi_E \,.
\end{eqnarray}
\section{Cosmological constraints }
\subsection{Parameterisation and methods}
The dark Higgs baseline cosmological model is described
by the following parameters:
\begin{equation}
\label{base_line}
{\bf P}=\left\{ \Omega_bh^2 \,,\,\Omega_ch^2\,,\,\theta_s\,,\,\tau\,,\,
{\rm log}(10^{10} A_s)\,,\, n_s\,,\, {\cal N}\,,\,\beta\,, \alpha \right\} \,,
\end{equation}
where: $\Omega_bh^2$ is the present baryon energy density, $\Omega_ch^2$
is the present CDM energy density, $\theta_s$ is the ratio of sound horizon to angular diameter distance at decoupling, $\tau$ is the optical depth at reionization, $A_s$ and $n_s$ are the amplitude and the spectral index of the primordial curvature perturbations,
${\cal N}$ is the number of {\it e}-folds introduced to account for the uncertainty in the determination of the
reheating temperature, $\beta$
is the dark Higgs quartic coupling and $\alpha$ is the dark Higgs - SM Higgs coupling constant. \\
The EGB dark Higgs inflation model extends the dark Higgs baseline model by including the following parameters:
\begin{equation}
{\bf P}_{\rm EGB}=\left\{ \gamma \beta\,, \eta \beta\right\} \,,
\end{equation}
where the coupling constants $\gamma$ and $\eta$ are defined in (\ref{ST_coupling}).
We compute the dependence on the scaling variable $t = {\rm ln} (\phi / m_t)$
of the running of various coupling constants by integrating the corresponding beta functions:
\begin{eqnarray}
Y(t)=\int^{t}_{0} { {\bf \beta} }_Y {\rm d} t \,, \hspace{1cm} Y=\{g,\, g^{'} \,, g_{s} \,,y_{t}\,,\beta \,,\alpha \} \,,
\end{eqnarray}
where $ g,\, g^{'} \,, g_{s}$ are the gauge couplings, $y_t$ is the Yukawa coupling, $\beta$
is the dark Higgs quartic coupling and $\alpha$ is the dark Higgs - SM Higgs coupling (for the relevant beta functions see Appendix A from \cite{Kim17} and references therein).\\
At $t=0$ the SM Higgs self coupling $\lambda(0)=0.129$ and the top Yukawa coupling $y_t(0)=0.976$ are fixed
by the SM Higgs and top quark pole masses \cite{Degrassi12}.
For the gauge couplings at $t=0$ we take $g^{'}(0)=0.364$, $g(0)=0.64$ and $g_s(0)=1.161$ \cite{Barvinsky09}.
The priors for $\beta(0)$ and $\alpha(0)$ are given in Table~1 (see below).
We modify the standard Boltzmann code \texttt{camb}\footnote{\url{http://camb.info}} \cite{camb} to
calculate the primordial power spectra of scalar ${\cal P}_R(k)$ and tensor ${\cal P}_T(k)$ density perturbations
for the dark Higgs inflation model with curvature corrections presented in the previous section.
The code evolves the coupled dark Higgs field equations (\ref{H2}) and (\ref{phi_dot})
with respect to the conformal time for wave numbers in the range $5 \times 10^{-6}$~-~$5$ Mpc$^{-1}$ and evaluates the
RG corrections to the coupling constants as presented before.
The values of the inflaton field, $\phi_I$ and $\phi_E$, at the beginning and at the end of inflation
are obtained from (\ref{phi_E}).
The primordial power spectra ${\cal P}_R(k)$ and ${\cal P}_T(k)$ are
obtained from (\ref{PFS}) with the slow-roll parameters defined in (\ref{slow-roll}).\\
The scalar spectral index of the curvature perturbations $n_S$
and the ratio of tensor-to-scalar amplitudes $r$ are then evaluated
at the pivot scale $k_*=0.002$ Mpc$^{-1}$ as:
\begin{eqnarray}
n_S=\frac{{\rm d \, ln}{\cal P}_{R}(k)}{ {\rm d\, ln} k }\bigg\vert _{k_*}\,,
\hspace{0.5cm} r=\frac{{\cal P}_T(k)}{ {\cal P}_R(k)} \bigg\vert_{k_*} \,.
\end{eqnarray}
The extraction of parameters from the cosmological dataset
is based on the Monte-Carlo Markov Chains (MCMC) technique.
We modify the publicly available version of the package
\texttt{CosmoMC}\footnote{\url{http://cosmologist.info/cosmomc/}}
\cite{cosmomc} to sample from the space of dark Higgs inflation model
parameters and generate estimates of their posterior mean and confidence intervals.\\
We performed test runs to optimise the parameter prior intervals and the sampling.
The final run is based on 120 independent channels, reaching the convergence criterion
$(R-1) \simeq 0.01$. The $(R-1)$ criterion is defined as
the ratio between the variance of the means and the mean of variances for the second half of chains \cite{cosmomc}.\\
We assume a flat universe and uniform priors for all parameters adopted in the analysis in the intervals listed in Table~1.
The Hubble expansion rate $H_0$
is a derived parameter in our analysis, constrained to the interval listed in Table~1 to reject extreme models.
For the cosmological analysis we use the CMB temperature (TT), polarization (EE,TE) and lensing angular power spectra from {\sc Planck} {\texttt 2018} release \cite{Planck_params} and the likelihood codes corresponding to different multipole ranges \cite{Planck_likes}\footnote{\url{http://pla.esac.esa.int/pla/cosmology}}.
The {\sc Planck} data currently provide the best characterisation of the primordial density perturbations \cite{Planck_infl}, constraining the cosmological parameters at the sub-percent level \cite{Planck_params}.
We use the following combinations of TT, TE, EE and lensing {\sc Planck} likelihoods \cite{Planck_infl}:\\
(i) Planck TT+lowE: the combination of high-{\it l} TT likelihood at multipoles {\it l} $\ge$ 30, the Commander likelihood
for low-{\it l} temperature-only and the SimAll low-{\it l}
EE likelihood in the range 2 $ <$ {\it l} $<$ 29;
(ii) {\sc Planck} TE and Planck EE: the combination of TE and EE likelihoods at {\it l} $\ge$ 30;
(iii) {\sc Planck} TT,TE,EE+lowE: the combination of Commander likelihood using TT, TE, and EE spectra
at $\it l$ $\ge $30, the low-{\it l} temperature, and the low-{\it l} SimAll EE likelihood;
(iv) {\sc Planck} TT,TE,EE+lowP: the combination of the likelihoods using TT, TE, and
EE spectra at {\it l} $>$ 30;
(v) {\sc Planck} high-{\it l} and Planck low-{\it l} polarization: the Plik likelihood;
(vi) {\sc Planck} CMB lensing: the CMB lensing likelihood \cite{Planck_lens} for lensing multipoles 8 $<$ {\it l} $<$ 400.
We also consider the measurement of the CMB B-mode polarization angular power spectrum by
the BICEP2/Keck Array collaboration \cite{BK15}.
The BK15 likelihood B-mode polarization only
leads to an upper limit of tensor-to-scalar ratio amplitudes
$r <$ 0.07 (95\% CL) \cite{BK15}.\\\\
We will refer to the combination of these datasets as {\sc Planck} TT,TE,EE+lowE+lensing+BK15.
\begin{table}
\caption{ Priors on the EGB dark Higgs inflation model parameters adopted in the analysis.
All priors are uniform in the listed intervals. We assume a flat universe.}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter&Prior \\
\hline
$\Omega_bh^2$& [0.005,\,0.1] \\
$\Omega_ch^2$& [0.001,\,0.5 ] \\
$100\theta_s$ & [0.5,\,10] \\
$\tau$& [0.01,\,0.9] \\
${\rm log}(10^{10} A_s)$ & [2.5,\, 5]\\
$n_s$& [0.5,\,1.5]\\
${\cal N}$ &[54,\,64]\\
$\alpha \times 10^{7}$& [0.007,\,3] \\
$\beta \times 10^{13}$& [1,\,5] \\
$\gamma \beta$& [0,\,3] \\
$\eta \beta$ & [0\,,3] \\
\hline
$H_0({\rm km\,s}^{-1}{\rm Mpc}^{-1})$& [20,\,100] \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Analysis}
The left panel of Figure~\ref{Fig2} presents the marginalised likelihood probability distributions
of the inflationary parameters, $A_s$, $n_s$, $r$ and ${\cal N}$
from the fit of the EGB dark Higgs inflation model
with the {\sc Planck} TT,TE,EE+lowE+lensing+BK15 dataset.
These predictions are computed
at pivot scale $k_*$=0.002 Mpc$^{-1}$ and include
the uncertainty in the number of e-folds.
For comparison, we also show the corresponding 68\% and 95\% limits
from the fit of $\Lambda$CDM model with the same dataset \cite{Planck_infl}.
The right panel from the same figure presents the joint confidence
regions (68\% and 95\% CL) of $n_s$ and $r$.\\
The mean values and the errors for all parameters are presented in Table~2.\\
We find that the EGB dark Higgs inflation model is strongly favoured by the {\sc Planck}+BK15 data \cite{Planck_infl}.
We test the consistency of the EGB dark Higgs inflation model predictions for the mean values of $\gamma \beta$, $\eta \beta$ and ${\cal N}$ given in Table~2.
From (\ref{phi_E}) we evaluate the dark Higgs field at the beginning of inflation, $\phi_I=10.36 M_{pl}$.
The slow-roll parameters at $\phi_I$ defined in (\ref{slow-roll1}) are given by:
\begin{eqnarray}
\label{slow-roll-fit}
\epsilon_0 & = & \epsilon_1=0.0163, \hspace{0.3cm} k_0=0.0017\,,\hspace{0.3cm}k_1= 0.0165\,,
\hspace{0.3cm}\Delta_0=0.0249\,,\hspace{0.3cm} \Delta_1=0.0165 \,,
\end{eqnarray}
while the tensor-to-scalar ratio (\ref{r}) and the amplitude of the curvature perturbations (\ref{PFS})
at $\phi_I$ are:
\begin{eqnarray}
r=0.065\,, \hspace{0.5cm}{\cal P}_R= 1.472 \times 10^{-9}\,.
\end{eqnarray}
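This consistency check can be reproduced numerically from (\ref{phi_E}) and (\ref{slow-roll1}); a minimal sketch using the Table~2 mean values (small differences with the quoted numbers are rounding effects):

```python
import math

# Mean values from the MCMC fit (Table 2)
GB = 0.218   # gamma * beta
EB = 1.129   # eta * beta
N  = 59.4    # e-folds before the end of inflation

# Field values at the end of inflation and at horizon crossing (Planck units)
phi_E = 4.0 * math.sqrt(3.0 - 2.0 * EB) / math.sqrt(3.0 * (2.0 + GB))
phi_I = math.sqrt(N + 1.0) * phi_E

def eps0(phi):
    """First slow-roll parameter for the power-law couplings."""
    return (16.0 / 3.0) * (3.0 - 2.0 * EB) / ((2.0 + GB) * phi**2)

print(f"phi_I       = {phi_I:.2f} M_pl")   # close to the 10.36 quoted above
print(f"eps0(phi_I) = {eps0(phi_I):.4f}")  # close to the 0.0163 quoted above
print(f"eps0(phi_E) = {eps0(phi_E):.4f}")  # = 1 by construction
```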
The inflation potential (\ref{infl_scale}) at $\phi_I$ is obtained as:
\begin{eqnarray}
\label{V_I}
V^{1/4}(\phi_I)= \left (\frac{3 \pi^2 {\cal P}_R}{2}r \right)^{1/4} M_{pl}\simeq 6.11 \times 10^{-3} M_{pl} \simeq 1.46 \times 10^{16} {\rm GeV}\,.
\end{eqnarray}
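A quick numerical check of (\ref{V_I}), assuming the reduced Planck mass $M_{pl} = 2.435 \times 10^{18}$ GeV for the conversion to GeV:

```python
import math

M_PL = 2.435e18   # GeV, reduced Planck mass (convention assumed)
P_R  = 1.472e-9   # curvature power spectrum amplitude at phi_I
R    = 0.065      # tensor-to-scalar ratio at phi_I

# V^{1/4} in Planck units, then in GeV
V14 = (3.0 * math.pi**2 * P_R * R / 2.0) ** 0.25
print(f"V^(1/4) = {V14:.3e} M_pl = {V14 * M_PL:.3e} GeV")
```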
This constraint applied to the dark Higgs potential $V=\beta \phi_I^4/4$ leads to an upper bound on the quartic coupling,
$\beta < 3.38 \times 10^{-13}$, consistent with the inflationary observables and with the dark Higgs parameters.
Under the slow-roll conditions, we get from (\ref{V_I}) the Hubble parameter at $\phi_I$:
\begin{eqnarray}
\label{H_I}
H(\phi_I) \simeq 1.51 \times 10^{-5}M_{pl} \simeq 3.63 \times 10^{13}{\rm GeV}\,.
\end{eqnarray}
From (\ref{H_I}) it follows that the curvature scale at $\phi_I$ satisfies the condition
$R\simeq12H^2 \ll M^2_{pl}$ and therefore the unitarity bound of the dark Higgs inflation model with curvature couplings is not exceeded.
\begin{figure}
\centering
\includegraphics[width=13cm,height=7cm]{Fig2.eps}
\caption{{\it Left}: Marginalised likelihood probability distributions of the main inflationary parameters from the fit of the EGB dark Higgs inflation model with the {\sc Planck} TT,TE,EE+lowE+lensing+BK15 dataset.
The distributions are obtained at $k_*$=0.002 Mpc$^{-1}$ and include
the uncertainty in the number of e-folds.
For comparison we also show the corresponding 68\% (dark blue) and 95\% (light blue) limits
from the fit of $\Lambda$CDM model with the same dataset \cite{Planck_infl}.
{\it Right}: Marginalised joint 68\% and 95\% CL regions for $n_s$ and $r$ distributions presented in the left panel.\label{Fig2}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=13cm,height=7cm]{Fig3.eps}
\caption{{\it Left}: Marginalised likelihood probability distributions of the dark Higgs parameters
from the fit of the EGB dark Higgs inflation model with the {\sc Planck} TT,TE,EE+lowE+lensing+BK15 dataset, obtained with (red) and without (dashed black) quantum corrections.
{\it Right}: Marginalised joint 68\% and 95\% CL regions for $m_{\phi}$ and $\theta$
obtained for EGB dark Higgs inflation model.\label{Fig3}}
\end{figure}
The left panel of Figure~\ref{Fig3} presents the likelihood probability distributions of the dark Higgs parameters $\beta$,
$\alpha$, $m_{\phi}$ and $\theta$ obtained from the fit of the EGB dark Higgs inflation model with the {\sc Planck} TT,TE,EE+lowE+lensing+BK15 dataset. These predictions are computed
at $k_*$=0.002 Mpc$^{-1}$ and include the quantum corrections of the coupling constants.
The mean values and the errors of these parameters are given in Table~2. \\
For comparison we plot the same distributions without quantum corrections.
The dark Higgs mass $m_\phi$ and mixing angle $\theta$ are derived parameters in our analysis
and are obtained from (\ref{DH_par}).
The predictions of the EGB dark Higgs inflation model for the joint confidence
regions (68\% and 95\% CL) of $m_{\phi}$ and $\theta$ are shown in the right panel of Figure~\ref{Fig3}. \\
We find for the dark Higgs - SM Higgs coupling,
$ 7 \times 10^{-10}< \alpha < 3 \times 10^{-8}$.
These bounds are in agreement with the estimate of the reheating temperature
for a non-thermal distribution of the inflaton field \cite{Anisimov09}.\\
The bounds on the dark Higgs mass and mixing angle are found to be:
\begin{eqnarray}
\label{mass_range}
0.49 \,\, {\rm GeV} & < & m_{\phi} < 1.34\,\, {\rm GeV}\,, \hspace{1cm} (95\%\,\,{\rm CL}) \\
4.48 \times 10^{-5} & < & \theta < 1.88 \times 10^{-4} \,.
\end{eqnarray}
\section{Dark Higgs inflaton at LHC experiments}
\subsection{Dark Higgs inflaton decay inside detector}
\begin{figure}
\centering
\includegraphics[width=7cm,height=7cm]{Fig4.eps}
\caption{The evolution with $m_{\phi }$ of dark Higgs inflaton decay length,
$d=c \tau_{\phi} \gamma \beta$, for various dark Higgs energies $E_{\phi}$ and $\theta=10^{-4}$. \label{Fig4} }
\end{figure}
The dark Higgs decay widths are suppressed by $\theta^2$ relative to those of a SM Higgs boson
with the same mass as the dark Higgs.
For $m_{\phi} < 2m_{ \pi}$ the inflaton decays mostly into $e^{+} e^{-}$, $\mu^{+}\mu^{-}$ and
$\tau^{+}\tau^{-}$ with decay width given by:
\begin{eqnarray}
\Gamma (\phi \rightarrow {\bar l} l)= G_F\frac{m^2_{l} m_{\phi}} {4 \sqrt{2} \pi } \beta^3_l \theta^2
\hspace{0.5cm} (l= e\,,\mu\,,\tau)\,,
\end{eqnarray}
where $G_F$ is the Fermi constant and $\beta_l= \sqrt { 1- 4m^2_l/ m^2_{\phi} } $ is the lepton velocity.\\
For inflaton masses in the range $2 m_{\pi} < m_{\phi} < 2.5$ GeV
the dominant decay modes are to $\pi^{+}\pi^{-}$, $K^{+}K^{-}$ and other hadrons.\\
The dark Higgs hadronic decay modes
suffer from theoretical uncertainties, since the chiral expansion breaks down above
$2m_{\pi}$ while the perturbative QCD calculations
are reliable only for masses of a few GeV \cite{Clarke14,Winkler19}.\\
For the inflaton mass range (\ref{mass_range}) we adopt the numerical
results from \cite{Winkler19}, which use the dispersive analysis
for $2 m_{\pi} < m_{\phi}< 1.3$ GeV \cite{Grinstein88},
the perturbative spectator model \cite{Guinon00,Keen09} for $m_{\phi} > 2$ GeV and
interpolate between the two for 1.3 GeV $< m_{\phi} <$ 2 GeV. \\
Figure~\ref{Fig4} presents the dependence on $m_{\phi}$ of the dark Higgs decay length for various energies $E_{\phi}$:
\begin{eqnarray}
d=c \tau_{\phi} \gamma \beta \,,
\end{eqnarray}
where $\tau_{\phi}=1/\Gamma(\phi \rightarrow ll,hh)$ is the dark Higgs lifetime,
$\Gamma(\phi \rightarrow ll,hh)$ is the decay width scaled with $\theta^2$, $\gamma=E_{\phi}/m_{\phi}$ and $\beta=\sqrt{1-1/\gamma^2}$. The decay length scales as $d \sim E_{\phi}$ for large $E_{\phi}$. For $E_{\phi}\sim {\cal O} (10^3)$ GeV the decay lengths are $d \sim {\cal O} (1)$ km
and therefore a significant number of dark Higgs inflatons can decay within
the detector volume.
To determine the number of dark Higgs inflatons that decay inside the detector volume, we must specify
the size, shape, and location of the detector relative to the LHC collider interaction point (IP). \\
We consider two representative experiments, FASER (the ForwArd Search ExpeRiment) \cite{Faser18} and
MAPP-1 (the MoEDAL Apparatus for Penetrating Particles) \cite{Pinfold19}.\\
The FASER detectors have a cylindrical shape and are centred on the LHC beam collision
axis. The detectors have radius $R$ and
depth $\Delta= L_{max}-L_{min}$, where $L_{max}$ and $L_{min}$ are the distances from the
IP to the far and near edges of the detectors along the beam axis. The locations of the FASER detectors are:
\begin{eqnarray}
{\rm FASER\,\,\,far\,\,\,location}&:& \hspace{0.5cm}L_{max}=400\, m\,, \,\Delta=10\,m, \,R=1\,m \,,\\
{\rm FASER\,\,\,near \,\,\,location}&:& \hspace{0.5cm}L_{max}=150\,m\,,\, \Delta=5\,m\,,\, R=4\,{\rm cm}\,.
\end{eqnarray}
The MAPP-1 detector is a parallelepiped at approximately $5^{\circ}$ from the beam collision axis with the following location:
\begin{eqnarray}
\hspace{0.2cm}{\rm MAPP-1}: \hspace{0.5cm} L_{max}=55\,m\,, \Delta=3\,m\,, H=1\,{\rm m} \,,
\end{eqnarray}
where $H$ is the parallelepiped height. \\
The probability of the dark Higgs boson to decay inside the detector volume is given by:
\begin{equation}
\label{prob}
{\cal P}^{det}(E_{\phi},\theta_{\alpha})=\left( e^{-L_{min}/d}-e^{-L_{max}/d} \right)\Theta\left(R-\tan(\theta_{\alpha})\, L_{max}\right)\,,
\end{equation}
where $E_{\phi}$ is the dark Higgs energy, $d$ is its decay length, $\theta_{\alpha}$ is the angle to the beam axis (the angular acceptance requires $\tan(\theta_{\alpha}) \leq R/L_{max}$),
and $\Theta$ is the Heaviside step function.
For MAPP-1 we take $R=H\pi^{-1/2}$ in (\ref{prob}) to conserve the effective acceptance area.\\
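The geometric content of (\ref{prob}) can be sketched as follows, with an illustrative decay length and the FASER far-location geometry given above:

```python
import math

def p_decay(theta_a, d, L_min, L_max, R):
    """Probability that an inflaton with decay length d (m), emitted at
    angle theta_a (rad) to the beam axis, decays inside a cylindrical
    detector of radius R spanning [L_min, L_max] along the axis."""
    if math.tan(theta_a) * L_max > R:   # outside the angular acceptance
        return 0.0
    return math.exp(-L_min / d) - math.exp(-L_max / d)

# FASER far location: L_max = 400 m, depth 10 m, R = 1 m;
# illustrative decay length d = 1 km
p_on  = p_decay(0.0,   1000.0, 390.0, 400.0, 1.0)
p_off = p_decay(0.010, 1000.0, 390.0, 400.0, 1.0)  # 10 mrad: misses detector
print(f"on-axis : {p_on:.4f}")
print(f"off-axis: {p_off:.4f}")
```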
In Figure~\ref{Fig5} we present the dependence on $E_{\phi}$ of the normalised detection probability
corresponding to the different experimental configurations obtained for
cosmological best fit solution for $m_{\phi}$ and $\theta$.
The figure shows that the experimental configurations are sensitive to
complementary ranges of the dark Higgs energy.
\begin{figure}
\centering
\includegraphics[width=8cm,height=5cm]{Fig5.eps}
\caption{ The dependence on the dark Higgs energy $E_{\phi}$ of the normalised detection probability
corresponding to different experimental configurations obtained for
cosmological best fit solution for $m_{\phi}$ and $\theta$.\label{Fig5}}
\end{figure}
\subsection{Dark Higgs inflaton production at LHC}
The dark Higgs bosons are mainly produced through K and B meson decays.
As $m_{\phi} > m_K$ ($m_K=0.494$ GeV) for the inflaton mass range (\ref{mass_range}),
in the following we consider the dark Higgs production only through the B-meson decay.
The B-meson branching fraction is given by \cite{Faser18}:
\begin{equation}
Br(B\rightarrow X_{s} \phi) =5.7 \left(1-\frac{m^2_{\phi}}{m^2_B} \right)^2 \theta^2\,,
\end{equation}
where $X_s$ denotes any strange hadronic state and $m_B$ is the B-meson mass ($m_B=5.28$ GeV).
\begin{figure}
\centering
\includegraphics[width=9cm,height=9cm]{Fig6.eps}
\caption{LHC experiments reach for dark Higgs inflaton
in the cosmological confidence region (\ref{mass_range}) for
an integrated luminosity of 3ab$^{-1}$ at 13 TeV LHC
assuming 100\% detection efficiency.\label{Fig6} }
\end{figure}
The dark Higgs production cross section at LHC energies can be estimated as \cite{Bezrukov10}:
\begin{eqnarray}
\frac{\sigma_{\phi} }{ \sigma_{inel}} =M_{pp} Br(B\rightarrow X_{s} \phi) \,,
\end{eqnarray}
where $M_{pp}$ is the proton multiplicity and $\sigma_{inel}$ is the $pp$ inelastic cross section.
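A rough numerical estimate of the number of dark Higgs bosons produced, combining the branching fraction and the multiplicity estimate above with the run parameters quoted below (3 ab$^{-1}$ at 13 TeV); the Table~2 best-fit $m_{\phi}$ and $\theta$ are assumed here for illustration:

```python
# LHC run parameters (13 TeV, 3/ab), as quoted in the text
N_INEL = 1.1e16   # total number of inelastic pp events
M_PP   = 66.0     # proton multiplicity
M_B    = 5.28     # GeV, B-meson mass

# Best-fit dark Higgs parameters (Table 2), assumed for illustration
M_PHI  = 0.919    # GeV
THETA  = 1.49e-4  # mixing angle

br_B  = 5.7 * (1.0 - M_PHI**2 / M_B**2) ** 2 * THETA**2  # Br(B -> X_s phi)
n_phi = N_INEL * M_PP * br_B                             # produced inflatons
print(f"Br(B -> X_s phi) = {br_B:.2e}")
print(f"N_phi produced   = {n_phi:.2e}")
```

Only a small fraction of these reach and decay inside a given detector, as quantified by the acceptance integral of the next subsection.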
\subsection{LHC experiments reach for dark Higgs inflaton}
The total number of dark Higgs bosons that decay inside detector are then given by:
\begin{equation}
N_{sig} (m_{\phi},\theta)=N_{inel} \frac{\sigma_{\phi}} {\sigma_{inel}}\,
Br(\phi \rightarrow KK) \,Br(\phi \rightarrow \pi\pi)
\int {\cal P}^{det}
(E_{\phi},\theta_{\alpha})
{\rm d\,} \theta_{\alpha} {\rm d}\,E_{\phi} \,,
\end{equation}
where $N_{inel}$ is the total number of inelastic $pp$ scattering events.\\
Throughout we assume an integrated luminosity of 3 ab$^{-1}$ at the 13 TeV LHC, implying
$N_{inel} \simeq1.1 \times 10^{16}$. We also take $\sigma_{inel}$(13 TeV) $\simeq$ 75 mb and $M_{pp}$(13 TeV) $\simeq$ 66 \cite{PDG}.
Figure~\ref{Fig6} shows the predicted number of dark Higgs inflaton decays in
the cosmological confidence region (\ref{mass_range}) obtained for the experimental configurations discussed for an integrated luminosity of 3ab$^{-1}$ at 13 TeV LHC assuming 100\% detection efficiency. \\
In our computation we take the dark Higgs energy in the range
100 GeV $< E_{\phi} < 10{^6}$ GeV, imposed by
the requirement that the dark Higgs inflaton propagates to the detector locations, as
shown in Figure~\ref{Fig5}.\\
For comparison, in Figure~\ref{Fig7} we present the FASER reach \cite{Faser18} and the MAPP-1 reach \cite{Pinfold19} for the dark Higgs boson for an integrated luminosity of 3ab$^{-1}$ at 13 TeV LHC.
The cosmological dark Higgs inflaton confidence region (\ref{mass_range}) is also shown.
\begin{figure}
\centering
\includegraphics[width=7cm,height=7cm]{Fig7.eps}
\caption{ FASER far location, FASER near location and MAPP-1 reach for dark Higgs boson
for an integrated luminosity of 3ab$^{-1}$ at 13 TeV LHC.
The cosmological dark Higgs inflaton confidence region (\ref{mass_range}) is also shown.
\label{Fig7}}
\end{figure}
\begin{table}
\caption{The mean values and the absolute errors of the main parameters obtained from the fit
of the EGB dark Higgs inflation model with {\sc Planck} TT,TE,EE+lowE+lensing+BK15 dataset. The errors are
quoted at 68\% CL.
The upper limits are quoted at 95\% CL.
The first group of parameters are the
base cosmological parameters sampled in the Monte-Carlo Markov Chains analysis with uniform
priors. The others are derived parameters.}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter & \\
\hline
$\Omega_b h^{2}$& 0.0223 $\pm$ 0.0002\\
$\Omega_c h^{2}$ & 0.1194 $\pm$ 0.0011 \\
$\theta_s$ & 1.0410 $\pm$ 0.0004 \\
$\tau$ &0.050 $\pm$ 0.009 \\
${\rm ln}(10^{10}A_s)$ &3.050 $\pm$ 0.008 \\
$n_s$ & 0.967 $\pm$ 0.0044 \\
$r_{0.002}$& $<$ 0.059 \\
${\cal N}$& 59.4 $\pm$ 1.210 \\
$10^{13} \times\beta$&0.892 $\pm$ 0.051 \\
$10^{9}\times\alpha$ & 1.021 $\pm$0.219 \\
$\gamma \beta$ & 0.218 $\pm$ 0.015 \\
$\eta \beta$& 1.129 $\pm$ 0.067 \\
\hline
$H_0({\rm km\,s}^{-1}{\rm Mpc}^{-1})$ &67.729 $\pm$ 0.641 \\
$m_{\phi}$ (GeV) & 0.919 $\pm$ 0.211 \\
$10^4 \times \theta$ & 1.492 $\pm$ 0.045 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
In this paper we analyse the dark Higgs inflation model with curvature corrections
given by the kinetic term non-minimally coupled to the Einstein tensor and the coupling to
the Gauss-Bonnet (GB) 4-dimensional invariant (EGB dark Higgs inflation) and
explore the possibility to test its predictions by particle physics experiments at LHC.\\
The dynamics of slow-roll inflation with curvature corrections
has been proposed in the context of SM Higgs inflation in Refs.~\cite{Jimenez19a,Jimenez19b}.
We show that the EGB dark Higgs inflation model
is strongly favoured by {\sc Planck}+BK15 data \cite{Planck_infl}.
The cosmological predictions of this model for dark Higgs inflaton mass
$m_{\phi}$ and mixing angle $\theta$, including the RG quantum corrections of dark Higgs coupling constants and the uncertainty in estimation of the reheating temperature,
are found to be:
\begin{eqnarray}
\label{mass_range_fin}
0.49 \,\, {\rm GeV} & < & m_{\phi} < 1.34\,\, {\rm GeV}\,, \hspace{1cm} (95\%\,\,{\rm CL}) \nonumber \\
4.48 \times 10^{-5} & < & \theta < 1.88 \times 10^{-4} \nonumber \,.
\end{eqnarray}
The consistency test of the EGB dark Higgs inflation model predictions leads to
an upper bound on the dark Higgs inflaton quartic coupling, $\beta < 3.38 \times 10^{-13}$,
a value consistent with the inflationary observables and with the dark Higgs parameters.\\
We find the dark Higgs inflaton - SM Higgs boson coupling constant
$\alpha > 7\times 10^{-10}$, in agreement with the estimate of the reheating temperature for a non-thermal distribution of the inflaton field \cite{Anisimov09}.
We evaluate the FASER and MAPP-1 experiments reach for dark Higgs inflaton parameters
$m_{\phi}$ and $\theta$ in the 95\% CL cosmological confidence region, for an integrated luminosity of 3ab$^{-1}$ at 13 TeV LHC assuming 100\% detection efficiency. \\
We conclude that the dark Higgs inflation model with curvature corrections
is a compelling inflation scenario based on particle physics theory
favoured by the present cosmological measurements and that leaves imprints in the dark Higgs boson searches at the LHC.
\begin{acknowledgments}
The author would like to thank Vlad Popa for helpful discussions
and acknowledge the use of the GRID system computing facility at the Institute of Space Science.\\
This work was partially supported by ESA/PRODEX Contract 4000124902.
\end{acknowledgments}
\section{Introduction}
Recently, Deep Convolutional Neural Networks (DCNNs) have attracted a lot of attention in visual recognition due to their strong performance \cite{ImageNetDeepLearning}. It has been discovered that the activations of a DCNN pretrained on a large dataset, such as ImageNet \cite{ImageNet}, can be employed as a universal image representation, and applying this representation to many visual classification problems leads to astounding performance \cite{CNN_Baseline,ArxivNewBaseline}. This discovery quickly sparked a lot of interest and inspired a number of further extensions \cite{CNN_Regional,Our_NIPS}. A fundamental issue for this kind of method is how to generate the image representation from a pretrained DCNN. Most current solutions, if not all, take the activations of the fully connected layer as the image representation. In contrast, the activations of convolutional layers are rarely used, and some studies \cite{VisualizeCNN,ExistingConvEx} have reported that directly using convolutional layer activations as image features produces inferior performance.
In this paper, however, we advocate that convolutional layer activations can be turned into a powerful image representation if they are used appropriately. We propose a new method called cross-convolutional layer pooling, or cross layer pooling in short, to fully leverage the discriminative information of convolutional layers.
The main contributions, and also the key differences to previous attempts at using convolutional layer activations, lie in two aspects: (1) we use convolutional layer activations in a `local feature' setting which extracts subarrays of convolutional layer activations as region descriptors; (2) we pool the extracted local features with cross-layer information.
The first aspect is motivated by some recent works \cite{CNN_Regional,Our_NIPS} which have shown that DCNN activations are not translationally invariant and that it is beneficial to apply a DCNN to describe local regions and create the image representation by pooling multiple regional DCNN activations. Our method goes further and uses subarrays of convolutional layer activations, that is, parts of CNN activations, as regional descriptors. Compared with previous works \cite{CNN_Regional,Our_NIPS}, our method enjoys two major advantages: (1) instead of running the DCNN forward computation multiple times, once for each local region, our method only needs to run the DCNN once (or very few times in our multiple-resolution scheme) for all local regions. This results in great computational cost savings. (2) Existing methods \cite{CNN_Regional,Our_NIPS} essentially apply a network trained for representing an image to represent a local region. This causes significant domain mismatch between the inputs at the training and testing stages, which may degrade the discriminative power of DCNN activations. In contrast, our method avoids this issue since it still uses the whole image as the network input and only extracts parts of the convolutional activations as regional descriptors.
The second aspect is motivated by the parts-based pooling methods \cite{zhangningpos,PANDA,ZhangNingECCV} which are commonly used in fine-grained image classification. These methods create one pooling result for each detected part region, and the final image representation is obtained by concatenating the pooling results from multiple parts. We generalize this idea to the context of DCNNs and avoid using any predefined parts annotation. More specifically, we deem the feature map of each filter in a convolutional layer the detection response map of a part detector and apply the feature map to weight the regional descriptors extracted from the previous convolutional layer in the pooling process. The final image representation is obtained by concatenating the pooling results from multiple channels, with each channel corresponding to one feature map. Note that unlike existing regional-DCNN based methods \cite{CNN_Regional,Our_NIPS}, the proposed method does not need any additional dictionary learning and encoding steps at either the training or the testing stage. Besides the above two aspects, we also develop a simple scheme to extract local features at a finer resolution from convolutional layers, and we experiment with a coarse feature quantization scheme which significantly reduces the memory usage in storing image representations.
We conduct extensive experiments on four datasets covering four popular visual classification tasks, that is, scene classification, fine-grained object classification, generic object classification and attribute classification. Experimental results suggest that the proposed method can achieve comparable or in some cases significantly better performance than competitive methods while being considerably faster in creating image representations.
\begin{figure*}[ht!]
\centering
\includegraphics[height=60mm,width=100mm]{fig/MethodOverview-crop}
\caption{An overview of the proposed method.}
\label{fig:overview}
\end{figure*}
\noindent \textbf{Preliminary:}
Our network structure and model parameters are identical to those in \cite{ImageNetDeepLearning}, that is, we have five convolutional layers and two fully connected layers. We use conv-1, conv-2, conv-3, conv-4, conv-5, fc-6, fc-7 to denote them respectively. At each convolutional layer, multiple filters are applied and it results in multiple feature maps, one for each filter. In this paper, we use the term `feature map' to indicate the convolutional results (after applying the ReLU) of one filter and the term `convolutional layer activations' to indicate feature maps of all filters in a convolutional layer.
\section{Current strategies for creating image representations from a pretrained DCNN}\label{sect:existing_ways}
In the literature, there are two major ways of using a pretrained DCNN to extract image representations: using a pretrained DCNN as a global feature extractor and using a pretrained DCNN as a local feature extractor.
The first way takes the whole image as the input to a pretrained DCNN and extracts the fc-6/fc-7 activations as the image-level representation. To make the network better adapted to a given task, fine-tuning sometimes is applied. Also, to make this kind of methods more robust toward image transforms, averaging activations from several jittered versions of the original image, e.g. a slightly shifted version of the input image or a mirrored version of the input image, has been employed to obtain better classification performance \cite{ArxivNewBaseline}.
DCNNs can also be applied to extract local features. It has been suggested that DCNN activations are not invariant to a large amount of translation \cite{CNN_Regional} and the performance can be degraded if input images are not well aligned. To handle this, it is suggested to sample multiple regions from an input image and use one DCNN, called regional-DCNN in this scenario, to describe each region. The final image representation is aggregated from activations of those regional-DCNNs. Usually, another layer of unsupervised encoding is employed to create the image-level representation \cite{CNN_Regional, Our_NIPS}. Also, multiple-scale extraction strategy can be applied to further boost performance \cite{CNN_Regional}. It has been shown that for many visual tasks \cite{CNN_Regional,Our_NIPS} this kind of methods lead to better performance than directly extracting DCNN activations as global features.
One common factor of the above methods is that they all use fully-connected layer activations as features. The convolutional layer activations, however, are not usually employed, and some preliminary studies \cite{VisualizeCNN,ExistingConvEx} have suggested that the convolutional layer has weaker discriminative power than the fully-connected layer.
\section{Proposed method}
\begin{figure}
\centering
\includegraphics[height=35mm]{fig/DomainShiftDemo}
\caption{ This figure demonstrates the domain mismatch issue when applying the fully-connected layer activations as regional descriptors. Top row: input images that a DCNN `sees' at the training stage. Bottom row: input images that a DCNN `sees' at the test stage.}
\label{fig:image_vs_region}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=40mm]{fig/ConvLocalFeature-crop}
\caption{ The demonstration of extracting local features from a convolutional layer.}
\label{fig:extract_conv_feature}
\end{figure}
\subsection{Convolutional layer vs. fully-connected layer}
One major difference between convolutional and fully-connected layer activations is that the former are embedded with rich spatial information while the latter are not. The convolutional layer activations can be formulated as a tensor of size $H \times W \times D$, where $H,W$ denote the height and width of each feature map and $D$ denotes the number of feature maps. Essentially, the convolutional layer divides the input image into $H \times W$ regions and uses a $D$-dimensional feature vector to describe the visual pattern within each region. Thus, convolutional layer activations can be viewed as a 2-D array of $D$-dimensional \textit{local features}, each describing a local region. For clarity of presentation, we call each of the $H \times W$ regions a \textbf{spatial unit}, and the $D$-dimensional vector of feature-map values corresponding to a spatial unit the \textbf{feature vector in a spatial unit}. The fully-connected layer takes the convolutional layer activations as the network input and transforms them into a feature vector representing the whole image. In this process, the spatial information is lost and the feature vector in a spatial unit cannot be explicitly recovered from the activations of a fully-connected layer.
As mentioned in section \ref{sect:existing_ways}, DCNNs can also be applied to extract local features to handle the drawback of being translationally variant. However, this scheme comes with two side-effects in practice: (1) its computational cost becomes higher than using DCNN activations as global image features, since one needs to run the DCNN forward computation multiple times, once for each region. (2) Moreover, it makes the input of the DCNN significantly different from the input images that were used to train the network. This is because, when applied as a regional feature extractor, a DCNN essentially describes local visual patterns which correspond to small parts of objects rather than the whole images seen at the training stage. In Figure \ref{fig:image_vs_region}, we plot some training images from the ImageNet dataset and resized local regions. As can be seen, although they all have the same image size, their appearance and levels of detail are quite different. Thus, blindly applying fully-connected layer activations as local features introduces significant domain mismatch which could potentially undermine the discriminative power of DCNN activations.
Our idea to handle the aforementioned drawbacks is to extract multiple regional descriptors from \textit{a single DCNN applied to a whole image}. We realize this idea by leveraging the spatial information of convolutional layers. More specifically, in a convolutional layer we can easily locate the subset of activations which corresponds to a local region. These subsets of activations correspond to subarrays of the convolutional layer activations, and we use them as local features. Figure \ref{fig:extract_conv_feature} demonstrates the extraction of such local features. For example, we can first extract $D$-dimensional feature vectors from regions $1,2,3,14,15,16,27,28,29$ and concatenate them into a $9\times D$-dimensional feature vector, and then shift one unit along the horizontal direction to extract features from regions $2,3,4,15,16,17,28,29,30$. After scanning all the $13 \times 13$ feature maps, we obtain 121 (omitting boundary spatial units) ($9\times D$)-dimensional local features.
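The sliding-window extraction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function name is ours, and the $3\times 3$ window over a $13\times 13\times D$ activation tensor follows the numbers in the text.

```python
import numpy as np

def extract_conv_local_features(conv, win=3):
    """Slide a win x win window over the H x W grid of spatial units and
    concatenate the D-dimensional feature vectors inside each window,
    yielding one (win*win*D)-dimensional local feature per position.

    conv: array of shape (H, W, D) -- activations of one convolutional layer.
    """
    H, W, D = conv.shape
    feats = []
    for r in range(H - win + 1):
        for c in range(W - win + 1):
            patch = conv[r:r + win, c:c + win, :]  # win x win spatial units
            feats.append(patch.reshape(-1))        # concatenate -> win*win*D dims
    return np.stack(feats)

# For 13 x 13 feature maps with D = 256 filters and a 3 x 3 window this
# gives 11 * 11 = 121 local features of dimension 9 * D, matching the text.
conv5 = np.random.rand(13, 13, 256)
local = extract_conv_local_features(conv5)
print(local.shape)  # (121, 2304)
```

Note that, unlike regional-DCNN schemes, all 121 descriptors come from a single forward pass over the whole image.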
It is clear that the proposed method enjoys the following merits: (1) the input of the DCNN is still a whole image rather than local regions, so the domain mismatch issue is avoided; (2) we only need to run the DCNN forward calculation once, so the computational cost is greatly reduced. Note that in our method we extract regional features from multiple spatial units and concatenate the feature vectors from them. This is different from previous work \cite{DNPRegionlets} (although their method is for a different application) which only treats the feature vector in one spatial unit as the local feature. We find that using feature vectors from multiple spatial units can significantly boost classification performance. This is because the feature vector from a single spatial unit may not be descriptive enough to characterize the visual pattern within a local region.
\subsection{Cross-convolutional-layer pooling}\label{sect:cl_pooling}
\begin{figure*}
\centering
\begin{tabular}{c}
\subfloat{ \includegraphics[height=35mm]{fig/demo3-crop}}
\subfloat{ \includegraphics[height=35mm]{fig/demo4-crop}}
\subfloat{ \includegraphics[height=35mm]{fig/demo5-crop}} \\
\subfloat{ \includegraphics[height=35mm]{fig/demo16-crop}}
\subfloat{ \includegraphics[height=35mm]{fig/demo17-crop}}
\subfloat{ \includegraphics[height=35mm]{fig/demo19-crop}}
\end{tabular}
\caption{
Visualization of some feature maps extracted from the 5th layer of a DCNN.
}
\label{fig:conv_visualize}
\end{figure*}
After extracting local features from a convolutional layer, one can directly perform traditional max-pooling or sum-pooling to obtain the image-level representation. In this section, we propose an alternative pooling method which can significantly improve the classification performance. The proposed method is inspired by the parts based pooling strategy \cite{zhangningpos,PANDA,ZhangNingECCV} in fine-grained image classification. In this kind of methods, multiple regions-of-interest (ROI) are firstly detected and each of them corresponds to one human-specified object part, e.g. the tail of birds. Then local features falling into each ROI are pooled together to obtain a pooled feature vector. Given $D$ object parts, this strategy creates $D$ different pooled feature vectors and these vectors are concatenated together to form the final image representation. It has been shown that this simple strategy achieves significantly better performance than blindly pooling all local features together. Formally, the pooled feature from the $k$th ROI, denoted as $\mathbf{P}^{t}_k$, can be calculated by the following equation (let's consider sum-pooling in this case):
\begin{align}\label{Eq:part_pooling}
\mathbf{P}^{t}_k = \sum_{i = 1}^{N} \mathbf{x}_i I_{i,k},
\end{align}
where $\mathbf{x}_i$ denotes the $i$th local feature and $I_{i,k}$ is a binary \textit{indicator map} indicating that if $\mathbf{x}_i$ falls into the $k$th ROI. We can also generalize $I_{i,k}$ to real value with its value indicating the `membership' of a local feature to a ROI. Essentially, each indicator map defines a pooling channel and the image representation is the concatenation of pooling results from multiple channels.
However, in a general image classification task, there is no human-specified parts annotation and even for many fine-grained image classification tasks, the annotation and detection of these parts are usually non-trivial. To handle this situation, in this paper we propose to use feature maps of the $(t+1)$th convolutional layer as $D_{t+1}$ indicator maps. By doing so, $D_{t+1}$ pooling channels are created for the local features extracted from the $t$th convolutional layer. We call this method cross-convolutional-layer pooling or cross-layer pooling in short. The use of feature maps as indicator maps is motivated by the observation that a feature map of a deep convolutional layer is usually sparse and indicates some semantically meaningful regions\footnote{Note that similar observation has also been made in \cite{VisualizeCNN}.}. This observation is illustrated in Figure \ref{fig:conv_visualize}. In Figure \ref{fig:conv_visualize}, we choose two images taken from two datasets, Birds-200 \cite{Birds200} and MIT-67 \cite{MIT67}. We randomly sample some feature maps from 256 feature maps in conv5 and overlay them to original images for better visualization. As can be seen from Figure \ref{fig:conv_visualize}, the activated regions of the sampled feature map (highlighted in warm color) are actually semantically meaningful. For example, the activated region in top-left corner of Figure \ref{fig:conv_visualize} corresponds to the wing-part of a bird.
Thus, the filter of a convolutional layer works as a part detector and its feature map serves a role similar to the part-region indicator map. Certainly, compared with parts detectors learned from human-specified part annotations, the filters of a convolutional layer are usually not directly task-relevant. However, the discriminative power of our image representation benefits from combining a much larger number of indicator maps, e.g. 256 as opposed to 20-30 (the number of parts usually defined by humans), which is akin to applying bagging to boost the performance of multiple weak classifiers.
Formally, image representation extracted from cross-layer pooling can be expressed as follows:
\begin{align}
& \mathbf{P}^{t} = [\mathbf{P}^{t}_1,\mathbf{P}^{t}_2,\cdots,\mathbf{P}^{t}_k,\cdots,\mathbf{P}^{t}_{D_{t+1}}] \nonumber \\
& \mathrm{where,~~} \mathbf{P}^{t}_k = \sum_{i = 1}^{N_t} \mathbf{x}^{t}_i a^{t+1}_{i,k},
\end{align}
where $\mathbf{P}^{t}$ denotes the pooled feature for the $t$-th convolutional layer, which is calculated by concatenating the pooled feature of each pooling channel $\mathbf{P}^{t}_k, k = 1,\cdots,D_{t+1}$. $\mathbf{x}^{t}_i$ denotes the $i$th local feature in the $t$th convolutional layer. Note that feature maps of the $(t+1)$th convolutional layer is obtained by convolving feature maps of the $t$th convolutional layer with a $m\times n$-sized kernel. So if we extract local features $\mathbf{x}^{t}_i$ from each $m\times n$ spatial units in the $t$th convolutional layer then each $\mathbf{x}^{t}_i$ naturally corresponds to a spatial unit in the $(t+1)$th convolutional layer. Let's denote the feature vector in this spatial unit as $\mathbf{a}^{t+1}_{i} \in \mathbb{R}^{D_{t+1}}$ and the value at its $k$th dimension as $a^{t+1}_{i,k}$. Then we use $a^{t+1}_{i,k}$ to weight local feature $\mathbf{x}^{t}_i$ in the $k$th pooling channel.%
\noindent \textbf{Implementation Details:} In our implementation, we perform PCA on $\mathbf{x}^{t}_i$ to reduce the dimensionality of $\mathbf{P}^{t}$. Also, we apply power normalization to $\mathbf{P}^{t}$, that is, we use $\mathbf{\hat{P}}^{t} = \mathrm{sign}(\mathbf{P}^{t})\sqrt{|\mathbf{P}^{t}|}$ as the image representation to further improve performance. We also tried to directly use $\mathrm{sign}(\mathbf{P}^{t})$ as image representations, that is, we coarsely quantize $\mathbf{P}^{t}$ into $\{-1,1,0\}$ according to the feature sign of $\mathbf{P}^{t}$. To our surprise, this only introduces a slight performance drop. This observation allows us to simply use 2-bits to represent each feature dimension and this saves a lot of memory to store image representations. Please refer to section \ref{sect:coarse_quantization} for more detailed discussion.
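Cross-layer pooling together with the post-processing just described can be sketched as follows. This is an illustrative NumPy sketch under our own naming (the paper specifies the equations but not code); `X` holds the $N_t$ local features $\mathbf{x}^{t}_i$ (after PCA) and `A` holds the corresponding layer-$(t{+}1)$ activations $a^{t+1}_{i,k}$.

```python
import numpy as np

def cross_layer_pool(X, A):
    """Cross-convolutional-layer pooling.

    X: (N, d) -- N local features from layer t (e.g. PCA-reduced subarrays).
    A: (N, K) -- layer-(t+1) activations; column k is the k-th feature map,
                 used as the indicator map of pooling channel k.
    Returns the concatenation [P_1, ..., P_K], where
        P_k = sum_i X[i] * A[i, k].
    """
    P = X.T @ A                       # (d, K): column k is pooled channel k
    return P.reshape(-1, order='F')   # stack columns -> length d * K

def power_normalize(p):
    """sign(p) * sqrt(|p|), applied before classification."""
    return np.sign(p) * np.sqrt(np.abs(p))

N, d, K = 121, 500, 256               # e.g. conv4 local features, conv5 maps
X = np.random.randn(N, d)
A = np.maximum(np.random.randn(N, K), 0)  # ReLU'd maps are non-negative
P = cross_layer_pool(X, A)
print(P.shape)                        # (128000,) = d * K
P_hat = power_normalize(P)            # power-normalized representation
P_sign = np.sign(P)                   # coarse {-1, 0, +1} quantized variant
```

The sign-quantized variant `P_sign` is the 2-bits-per-dimension representation mentioned above.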
\subsection{Creating finer resolutions of spatial units partitioning}\label{sect:multi-resolution}
One drawback of the above method is that it only works at a single resolution. For example, if the 4th and 5th convolutional layers are used, the image can only be partitioned into $13\times 13$ spatial units. For some applications such as scene classification, the object of interest is usually small and it is favorable to use a finer partitioning to capture finer object details. To achieve this goal, we propose to divide the whole image into multiple non-overlapping or overlapping blocks and apply a DCNN to each block. Then, as in section \ref{sect:cl_pooling}, local features are extracted from the convolutional layer of each DCNN. In total, this generates many more spatial units for the whole image; for example, if we partition the whole image into four quadrants we obtain $26\times 26$ spatial units in total. This scheme is illustrated in Figure \ref{fig:finer_partition}. Note that using this scheme we can easily create more spatial units while introducing only a few additional DCNN forward computations. In practice, we adopt a multi-resolution scheme which combines image representations extracted at multiple resolutions to achieve better classification performance.
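The block partitioning is simple bookkeeping; a sketch (our own helper, with the DCNN forward pass itself elided) shows how an $M \times N$ grid of blocks yields the combined grid of spatial units:

```python
import numpy as np

def partition_blocks(image, M=2, N=2):
    """Split an H x W x 3 image into an M x N grid of non-overlapping
    blocks.  Each block would be fed through its own DCNN forward pass
    (elided here); since each pass yields a 13 x 13 grid of spatial
    units, the M x N blocks together give (13*M) x (13*N) units."""
    H, W = image.shape[:2]
    rows = np.array_split(np.arange(H), M)
    cols = np.array_split(np.arange(W), N)
    return [image[np.ix_(r, c)] for r in rows for c in cols]

img = np.zeros((448, 448, 3))
blocks = partition_blocks(img, 2, 2)
print(len(blocks), blocks[0].shape)  # 4 (224, 224, 3)
# With 13 x 13 units per block: 26 x 26 spatial units for the whole image.
```

Only $M \times N$ extra forward passes are needed, versus one per region for regional-DCNN schemes.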
\begin{figure*}
\centering
\includegraphics[height=60mm]{fig/multi-resolution-crop}
\caption{ Our scheme to create finer spatial unit partition.}
\label{fig:finer_partition}
\end{figure*}
\section{Experiments}
We evaluate the proposed method on four datasets: MIT indoor scene-67 (MIT-67 in short) \cite{MIT67}, Caltech-UCSD Birds-200-2011 \cite{Birds200} (Birds-200 in short), Pascal VOC 2007 \cite{pascal-voc-2007} (Pascal-07 in short) and H3D Human Attributes dataset \cite{HAT} (H3D in short). These four datasets cover several popular topics in image classification, that is, scene classification, fine-grained object classification, generic object classification and attribute classification. Previous studies \cite{CNN_Baseline,ArxivNewBaseline} have shown that using activations from the fc layer of a pretrained DCNN leads to surprisingly good performance in those datasets. Here in our experiments, we further compare different ways to extract features from a pretrained DCNN.
We organized our experiments into two parts, the first part compares the proposed method with other competitive methods and the second part examines the impact of various components in our method.
\subsection{Experimental protocol}
\begin{table}
\caption{Comparison of results on MIT-67. The lower part of this table lists some results reported in the literature. The proposed methods are denoted with *. For each method, the required number of CNN forward calculations is denoted as CNN $\times k$. }
\centering
\label{table:MIT67_Result}
\scalebox{0.9}{
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Methods & Accuracy & Comments \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNN-Global & 57.9\% & CNN$\times$1 \\
CNN-Jitter & 61.1\% & CNN$\times$10 \\
R-CNN SCFV \cite{Our_NIPS} & 68.2\% & CNN$\times$100 \\
*CL-45 & 64.6\% & CNN$\times$1 \\
*CL-45F & 65.8\% & CNN$\times$4 \\
*CL-45C & 68.8\% & CNN$\times$5 \\
*CL + CNN-Global & 70.0\% & CNN$\times$6 \\
*CL + CNN-Jitter & \bf 71.5\% & CNN$\times$15 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Fine-tuning \cite{ArxivNewBaseline} & 66.0\% & fine-tuning on MIT-67 \\
MOP-CNN \cite{CNN_Regional} & 68.9\% & CNN$\times$53, three scales \\
VLAD level2 \cite{CNN_Regional} & 65.5\% & CNN$\times$16, single scale \\
CNN-SVM \cite{CNN_Baseline} & 58.4\% & - \\
FV+DMS \cite{Dis_Mode_Seeking} & 63.2\% & - \\
DPM \cite{DPM} & 37.6\% & - \\
\hline
\end{tabular}
}
\end{table}
We compare the proposed method against three baselines: (1) directly using fully-connected layer activations for the whole image (CNN-Global); (2) averaging fully-connected layer activations from several transformed versions of the input image (CNN-Jitter); following \cite{CNN_Baseline,ArxivNewBaseline}, we transform the input image by cropping its four corners and middle region as well as by creating their mirrored versions; (3) the method of \cite{Our_NIPS} which extracts fully-connected layer CNN activations from multiple regions in an image and encodes them using sparse-coding-based Fisher vector encoding (RCNN-SCFV). Since RCNN-SCFV has demonstrated superior performance to the MOP method in \cite{CNN_Regional}, we do not include MOP in our comparison. To make a fair comparison, we reimplement all three baseline methods. For the last method, we use the code provided by the authors of \cite{Our_NIPS} to extract the regional CNN features and perform the encoding.
For our method, we use the multi-resolution scheme suggested in section \ref{sect:multi-resolution}; that is, besides applying our method to the whole image, we also partition the image into $M \times N$ blocks and apply our method to each block. The final image representation is the concatenation of the image representations obtained at these two resolutions. For all datasets except H3D we set $M = N = 2$; we set $M = 2, N = 1$ for H3D because most images in H3D are taller than they are wide.
In the first part of our experiments, we report the result obtained by using the 4th and 5th convolutional layer since using them achieves the best performance. We denote our methods as CL-45, CL-45F, CL-45C, corresponding to the settings of applying our method to the whole image, to multiple blocks for finer resolution and combining representations from two different resolutions respectively. We also conduct similar experiment on the 3-4th layer of a DCNN in the second part of experiments and denote them as CL-34, CL-34F and CL-34C.
To reduce the dimensionality of image representations, we perform PCA on local features extracted from convolutional layers and reduce their dimensionality to 500 before cross-layer pooling. In practice, we find that reducing to higher dimensionality only slightly increases the performance.
We use libsvm \cite{libsvm} as the SVM solver and use precomputed linear kernels as inputs. This is because the calculation of the linear kernels/Gram matrices can easily be implemented in parallel; when the feature dimensionality is high, this part of the computation actually occupies most of the computational time, so it is appropriate to accelerate it with parallel computing.
\subsection{Performance evaluation}
\subsubsection{Classification results}
\noindent\textbf{Scene classification: MIT-67.}
MIT-67 is a commonly used benchmark for evaluating scene classification algorithms; it contains 6700 images in 67 indoor scene categories. Following the standard setting, we use 80 images in each category for training and 20 images for testing. The results are shown in Table \ref{table:MIT67_Result}. It can be seen that all variations of our method (marked with `*' in Table \ref{table:MIT67_Result}) outperform the methods that use DCNN activations as global features (CNN-Global and CNN-Jitter). This clearly demonstrates the advantage of using DCNN activations as local features. We can also see that combining CL-45 and CL-45F, denoted as CL-45C, already matches the performance of the regional-CNN based methods (R-CNN SCFV and MOP-CNN) while requiring far fewer CNN forward calculations. Moreover, combining with the global CNN representations, denoted as CL + CNN-Global and CL + CNN-Jitter respectively, brings a further performance gain: CL + CNN-Jitter achieves an impressive classification accuracy of 71.5\%.
\noindent\textbf{Fine-grained image classification: Birds-200.}
Birds-200 is the most popular dataset in fine-grained image classification research. It contains 11788 images of 200 different bird species. The dataset provides ground-truth annotations of bounding boxes and of bird parts, e.g. the head and the tail, on both the training set and the test set. In this experiment we only use the bounding box annotation. The results are shown in Table \ref{table:Birds_result}. As can be seen, the proposed method performs especially well on this dataset. CL-45 alone already achieves 72.4\% classification accuracy, a 6\% improvement over R-CNN SCFV, which as far as we know is the best performance reported in the literature when no parts information is utilized. Combining with CL-45F, our performance improves to 73.5\%, quite close to the best performance obtained by methods that rely on strong parts annotations. Another interesting observation is that on this dataset CL-45 significantly outperforms CL-45F, contrary to the case on MIT-67. This suggests that the suitable resolution of spatial units may vary from dataset to dataset.
\begin{table*}
\caption{Comparison of results on Birds-200. Note that the method with ``use parts'' mark requires parts annotations and detection while our methods do not employ these annotations so they are not directly comparable with us.}
\centering
\label{table:Birds_result}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Methods & Accuracy & Remark \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNN-Global & 59.2\% & CNN$\times$1, no part. \\
CNN-Jitter & 60.5\% & CNN$\times$1, no part \\
R-CNN SCFV \cite{Our_NIPS} & 66.4\% & CNN$\times$100, no part \\
*CL-45 & 72.4\% & CNN$\times$1, no part \\
*CL-45F & 68.4\% & CNN$\times$4, no part \\
*CL-45C & \bf 73.5\% & CNN$\times$5, no part \\
*CL + CNN-Global & 72.4\% & CNN$\times$6, no part \\
*CL + CNN-Jitter & 73\% & CNN$\times$16, no part \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
GlobalCNN-FT \cite{ArxivNewBaseline} & 66.4 \% & no parts, fine-tuning \\
Parts-RCNN-FT \cite{ZhangNingECCV} & 76.37 \% & use parts, fine-tuning \\
Parts-RCNN \cite{ZhangNingECCV} & 68.7 \% & use parts, no fine-tuning \\
CNNaug-SVM \cite{CNN_Baseline} & 61.8\% & CNN $\times$1 \\
CNN-SVM \cite{CNN_Baseline} & 53.3\% & CNN global \\
DPD+CNN \cite{Decaffe} & 65.0\% & use parts \\
DPD \cite{Zhang_2013_ICCV} & 51.0\% & - \\
\hline
\end{tabular}
\end{table*}
\noindent\textbf{Object classification: Pascal-2007.}
Pascal VOC 2007 contains 9,963 images with 20 object categories. The task is to predict the presence of each object in each image. Note that most object categories in Pascal-2007 are also included in ImageNet, so ImageNet can be seen as a super-set of Pascal-2007. The results on this dataset are shown in Table \ref{table:Pascal_result}. From Table \ref{table:Pascal_result}, we can see that the best configuration of our method (CL + CNN-Jitter) achieves performance comparable to the state-of-the-art. Also, merely using features extracted from convolutional layers, our method CL-45C outperforms CNN-Global and CNN-Jitter, which use DCNNs to extract global image features. However, CL-45C does not outperform R-CNN, and our best-performing method CL + CNN-Jitter does not achieve as significant a performance improvement as it does on MIT-67 and Birds-200. This is probably because the 1000 categories in the ImageNet training set already include the 20 categories in Pascal-2007. Thus the fully-connected layer actually contains some classifier-level information, and using fully-connected layer activations implicitly utilizes more training data from ImageNet.
\begin{table}
\caption{Comparison of results on Pascal VOC 2007. }
\centering
\label{table:Pascal_result}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Methods & mAP & Remark \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNN-Global & 71.7\% & CNN $\times$ 1 \\
CNN-Jitter & 75.0\% & CNN $\times$ 10 \\
R-CNN SCFV \cite{Our_NIPS} & 76.9\% & CNN $\times$ 100 \\
*CL-45 & 72.6\% & CNN $\times$ 1 \\
*CL-45F & 71.3\% & CNN $\times$ 4 \\
*CL-45C & 75.0\% & CNN $\times$ 5 \\
*CL + CNN-Global & 76.5\% & CNN $\times$ 6 \\
*CL + CNN-Jitter & \bf 77.8\% & CNN $\times$ 15 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNNaug-SVM \cite{CNN_Baseline} & 77.2\% & with augmented data \\
CNN-SVM \cite{CNN_Baseline} & 73.9\% & no augmented data \\
NUS \cite{NUS} & 70.5\% & - \\
GHM \cite{GHM} & 64.7\% & - \\
AGS \cite{AGS} & 71.1\% & - \\
\hline
\end{tabular}
\end{table}
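Performance on Pascal-2007 is reported as mean average precision (mAP) over the 20 categories. As a reference for how such numbers are computed, the sketch below shows a common, non-interpolated way to compute per-category AP and average it; note that the official VOC 2007 protocol uses an 11-point interpolated AP, so this is an illustrative approximation rather than the exact evaluation code.

```python
def average_precision(scores, labels):
    # Rank images by descending classifier score; AP is the mean of the
    # precision values measured at each positive (label == 1) example.
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

def mean_ap(per_class_scores, per_class_labels):
    # One (scores, labels) pair per object category; mAP is their mean.
    aps = [average_precision(s, y)
           for s, y in zip(per_class_scores, per_class_labels)]
    return sum(aps) / len(aps)
```

For example, a category whose two true positives are ranked first and third gets AP = (1/1 + 2/3) / 2, roughly 0.833.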
\noindent\textbf{Attribute classification: H3D.}
In recent years, object attributes, which are semantic or abstract qualities that can be shared by many categories, have gained increasing attention due to their potential use in zero/one-shot learning and image retrieval \cite{RelativeAttribute,RelativeAttributeSearch}. In this experiment, we evaluate the proposed method on the task of predicting human attributes. We use the H3D dataset \cite{HAT}, which defines 9 attributes for a subset of `person' images from Pascal VOC 2007 and H3D. The results are shown in Table \ref{table:HAT_result}. Again, our method shows quite promising results. Merely using information from the convolutional layers, our approach already achieves 77.3\%, which outperforms R-CNN SCFV by 4\%. By combining with CNN-Jitter, our method becomes comparable to PANDA \cite{PANDA}, which needs complicated poselet annotations and detections.
\begin{table}
\caption{Comparison of results on Human attribute dataset. }
\centering
\label{table:HAT_result}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Methods & mAP & Remark \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNN-Global & 74.1\% & CNN $\times$ 1 \\
CNN-Jitter & 74.6\% & CNN $\times$ 10 \\
R-CNN SCFV \cite{Our_NIPS} & 73.1\% & CNN $\times$ 100 \\
*CL-45 & 75.3\% & CNN $\times$ 1 \\
*CL-45F & 70.7\% & CNN $\times$ 4 \\
*CL-45C & 77.3\% & CNN $\times$ 5 \\
*CL + CNN-Global & 78.1\% & CNN $\times$ 6 \\
*CL + CNN-Jitter & \bf 78.3\% & CNN $\times$ 15 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PANDA \cite{PANDA} & 78.9\% & needs poselet annotation \\
CNN-FT \cite{ArxivNewBaseline} & 73.8\% & CNN-Global, fine-tuning \\
CNNaug-SVM \cite{CNN_Baseline} & 73.0\% & with augmented data \\
CNN-SVM \cite{CNN_Baseline} & 70.8\% & no augmented data \\
DPD \cite{NUS} & 69.9\% & - \\
\hline
\end{tabular}
\end{table}
\subsubsection{Computational cost}
\begin{table}
\caption{Average time used for extracting an image representation with different methods. The time is broken down into two parts: time spent on extracting CNN features and time spent on performing pooling.}
\centering
\label{table:speed comparison}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Method & CNN Extraction & Pooling & Total \\
\noalign{\smallskip}
\hline
*CL-45 & 0.45s & 0.14s & 0.6s \\
*CL-45F & 1.3s & 0.27s & 1.6s \\
*CL-45C & 1.75s & 0.41s & 2.2s \\
CNN-Global & 0.4s & 0s & 0.4s \\
CNN-Jitter & 1.8s & 0s & 1.8s \\
SCFV \cite{Our_NIPS} & 19s & 0.3s & 19.3s \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
It is clear that our method requires much less time for DCNN forward computation. To give an intuitive idea of the computational cost incurred by our method, we report the average time spent on extracting image representations for various methods in Table \ref{table:speed comparison}. As can be seen, the computational cost of our method is comparable to that of
CNN-Global and CNN-Jitter. This is quite impressive given that our method achieves significantly better performance than these two methods. Compared with SCFV, the most competitive method to ours in terms of classification accuracy, we obtain around a 10-times speedup. Note that this speed evaluation is based on our naive MATLAB implementation, and our method can be further accelerated with a C++ or GPU implementation.
\subsection{Analysis of components of our method}
The above experiments clearly demonstrate the advantages of the proposed method. In this section, we further examine the effect of its various components.
\subsubsection{Using different convolutional layers}
First, we are interested in examining the performance of using convolutional layers other than the 4th and 5th. We experiment with the 3rd and 4th convolutional layers and report the resulting performance in Table \ref{table:3-4 Layer result}. From the results we can see that using the 4th-5th layers achieves superior performance to using the 3rd-4th layers. This is not surprising, since it has been observed that the deeper a convolutional layer is, the better discriminative power it has \cite{VisualizeCNN}.
\begin{table}
\caption{Comparison of results obtained by using different pooling layers.}
\centering
\label{table:3-4 Layer result}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
Method & MIT-67 & Birds200 & Pascal07 & H3D \\
\noalign{\smallskip}
\hline
CL-34 & 61.7\% & 64.6\% & 66.3\% & 74.7\% \\
CL-34F & 61.4\% & 61.4\% & 64.9\% & 70.4\% \\
CL-34C & 64.1\% & 66.8\% & 68.5\% & 75.9\% \\
CL-45C & \bf 68.8\% & \bf 73.5\% & \bf 75.0\% & \bf 77.3\% \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\subsubsection{Comparison of different pooling schemes}
Cross-layer pooling is an essential component of our method. In this experiment, we compare it against other possible pooling approaches: directly performing sum-pooling (with a square operation) and max-pooling, using spatial pyramid pooling as suggested in \cite{SPP_Conv}, and applying the sparse-coding-based Fisher vector encoding (SCFV) \cite{Our_NIPS} to encode the extracted local features before pooling. To simplify the comparison, we only report results on the best-performing single-resolution setting for each dataset, that is, CL-45F for MIT-67 and CL-45 for the other three datasets. The results are shown in Table \ref{table:pooling_comparison}. As can be seen, the proposed cross-layer pooling significantly outperforms directly applying max-pooling or sum-pooling, and even spatial pyramid pooling. Applying another layer of encoding on local features before pooling greatly boosts classification accuracy; however, in most cases its performance is still inferior to that of the proposed method, as seen on MIT-67, Pascal-07 and Birds-200. The only exception is the result on H3D, where SCFV performs slightly better than our method. However, SCFV needs additional codebook learning and encoding computation while our method does not. Considering this computational benefit and the superior performance in most cases, cross-layer pooling is clearly preferable to the alternative methods.
\begin{table}
\caption{Comparison of results obtained by using different pooling schemes}
\centering
\label{table:pooling_comparison}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
Method & MIT-67 & Birds200 & Pascal07 & H3D \\
\noalign{\smallskip}
\hline
Direct Max & 42.6\% & 52.7\% & 48.0\% & 61.1\% \\
Direct Sum-sqrt & 48.4\% & 49.0\% & 51.3\% & 66.4\% \\
SPP \cite{SPP_Conv} & 56.3\% & 59.5\% & 67.3\% & 73.1\% \\
SCFV \cite{Our_NIPS} & 61.9\% & 64.7\% & 69.0\% & \bf 76.5\% \\
CL-single & \bf 65.8\% & \bf 72.4\% & \bf 72.6\% & 75.3\% \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
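The direct pooling baselines above are easy to state precisely. The sketch below gives minimal pure-Python versions, treating an image's local features as a list of rows (one row per spatial location); the cross-layer variant included for contrast is only an illustrative reading of the idea (each channel of the next layer's activations acts as a spatial weighting over the previous layer's local features), not a verbatim reimplementation of our pipeline.

```python
def direct_max_pool(features):
    # Element-wise max over all spatial locations.
    return [max(column) for column in zip(*features)]

def direct_sum_sqrt_pool(features):
    # Square each activation, sum over locations, then take the square
    # root (the "sum-pooling with square operation" baseline).
    return [sum(v * v for v in column) ** 0.5 for column in zip(*features)]

def cross_layer_pool(features_t, activations_next):
    # Illustrative cross-layer pooling: for each channel k of the next
    # layer, compute the weighted sum over locations i of
    # activations_next[i][k] * features_t[i], then concatenate the
    # per-channel results into one long vector.
    pooled = []
    num_channels = len(activations_next[0])
    for k in range(num_channels):
        acc = [0.0] * len(features_t[0])
        for loc, feat in enumerate(features_t):
            weight = activations_next[loc][k]
            for j, v in enumerate(feat):
                acc[j] += weight * v
        pooled.extend(acc)
    return pooled
```

Unlike the direct baselines, whose output dimensionality equals that of a single local feature, the cross-layer variant produces a vector whose length is the product of the two layers' feature dimensions.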
\subsubsection{Feature sign quantization}\label{sect:coarse_quantization}
Finally, we demonstrate the effect of applying feature sign quantization to the pooled feature. Feature sign quantization quantizes each dimension of a feature to 1 if it is positive, -1 if it is negative, and 0 if it equals 0. In other words, we use 2 bits to represent each dimension of the pooled feature vector. This scheme greatly reduces memory usage. Similar to the above experimental setting, we only report results on the best-performing single-resolution setting for each dataset. The results are shown in Table \ref{table:FS_Quantization}. Surprisingly, this coarse quantization scheme does not degrade performance much: on three datasets, MIT-67, Pascal-07 and H3D, it achieves almost the same performance as the original feature.
\begin{table}
\caption{Results obtained by using feature sign quantization.}
\centering
\label{table:FS_Quantization}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Dataset & Feature sign quantization & Original \\
\noalign{\smallskip}
\hline
MIT-67 & 65.2\% & 65.8\% \\
Birds-200 & 71.1\% & 72.4\% \\
Pascal07 & 71.2\% & 71.3\% \\
H3D & 75.4\% & 75.3\% \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
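The quantization step itself is trivial to state in code; a minimal sketch, assuming the pooled feature is simply a sequence of floats:

```python
def sign_quantize(feature):
    # Map each dimension to +1, 0 or -1; two bits per dimension are
    # enough to store the result, giving a large memory saving.
    return [(v > 0) - (v < 0) for v in feature]
```

The classifier is then trained on these three-valued vectors exactly as before.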
\section{Conclusion}
In this paper, we have proposed a new method called cross-convolutional-layer pooling to create image representations from the convolutional activations of a pre-trained CNN. Through extensive experiments, we have shown that this method enjoys good classification performance and low computational cost. Our findings suggest that, if used appropriately, the convolutional layers of a pre-trained CNN contain very useful information and have many advantages over the scheme of using fully-connected layer activations as the image representation.
\onecolumn
\bibliographystyle{ieee}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\texttt{acmsmall}}: The default journal template style.
\item {\texttt{acmlarge}}: Used by JOCCH and TAP.
\item {\texttt{acmtog}}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\texttt{acmconf}}: The default proceedings template style.
\item{\texttt{sigchi}}: Used for SIGCHI conference articles.
\item{\texttt{sigchi-a}}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\texttt{sigplan}}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\texttt{anonymous,review}}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \texttt{\acmSubmissionID} command to print the
submission's unique ID on each page of the work.
\item{\texttt{authorversion}}: Produces a version of the work suitable
for posting by the author.
\item{\texttt{screen}}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf, language=french,
language=german, language=spanish, language=english]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
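For instance, a title with a subtitle and a short header version might be declared as follows (the titles themselves are illustrative):
\begin{verbatim}
\title[Short Title]{The Full, Possibly Much Longer, Title of the Work}
\subtitle{An Optional Subtitle}
\end{verbatim}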
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. As an exception, multiple authors may share one
affiliation. Authors' names should not be abbreviated; use full first
names wherever possible. Include authors' e-mail addresses whenever
possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
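Instead, give each author their own block. A minimal sketch, reusing the names above (the e-mail addresses, institutions, and locations are illustrative):
\begin{verbatim}
\author{Brooke Aster}
\email{[email protected]}
\affiliation{%
  \institution{Example University}
  \city{Springfield}
  \country{USA}}

\author{David Mehldau}
\email{[email protected]}
\affiliation{%
  \institution{Example Labs}
  \city{Dublin}
  \country{Ireland}}
\end{verbatim}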
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
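A sketch of this usage, assuming the first two authors contributed equally (affiliations omitted for brevity):
\begin{verbatim}
\author{Brooke Aster}
\authornote{Both authors contributed equally to this research.}
...
\author{David Mehldau}
\authornotemark[1]
\end{verbatim}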
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
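For reference, the commands copied from a completed rights form typically look like the following sketch; every value here is a placeholder, and you must use the exact commands and values supplied by your own form:
\begin{verbatim}
\setcopyright{acmlicensed}
\copyrightyear{2024}
\acmYear{2024}
\acmDOI{10.1145/XXXXXXX.XXXXXXX}
\acmConference[Conference acronym 'XX]{Conference title from the
  rights confirmation email}{June 03--05, 2024}{Woodstock, NY}
\end{verbatim}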
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
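As an illustration, the generated concept commands and a keyword list might look like this (the concepts and keywords shown are placeholders taken from the sample documents; generate your own with the tool above):
\begin{verbatim}
\ccsdesc[500]{Computer systems organization~Embedded systems}
\ccsdesc[300]{Computer systems organization~Redundancy}

\keywords{datasets, neural networks, gaze detection, text tagging}
\end{verbatim}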
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
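For example, a nested set of numbered headings uses the standard commands directly (the heading titles are illustrative):
\begin{verbatim}
\section{Evaluation}
\subsection{Experimental Setup}
\subsubsection{Workloads}
\paragraph{Input sizes}
\end{verbatim}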
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial
citation. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use \verb|\midrule| to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered, or non-numbered display. Each of the three is
discussed in the following sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption --- their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
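For example, a complete BibTeX entry for the \LaTeX\ manual cited in this document carries the identifying features discussed above:
\begin{verbatim}
@book{Lamport:LaTeX,
  author    = {Leslie Lamport},
  title     = {{\LaTeX}: A Document Preparation System},
  publisher = {Addison-Wesley},
  address   = {Reading, Massachusetts},
  year      = {1994},
  edition   = {2nd}
}
\end{verbatim}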
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}
\end{verbatim}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
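For example, a note shared by the first two authors of a work might be
marked up as follows (a minimal sketch; consult the template
documentation for the full set of options):
\begin{verbatim}
\author{Alice Author}
\authornote{Both authors contributed equally to
  this research.}
\author{Bob Author}
\authornotemark[1]
\end{verbatim}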
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\end{verbatim}
\section{Introduction}
Since the outbreak of COVID-19 in 2020, this global pandemic has caused 625 million infections and 6.57 million deaths.\footnote{https://covid19.who.int/} Even though it has been three years, COVID-19 has not been eradicated worldwide. The rapid spread of the pandemic and the resulting economic downturn have exacerbated widespread anxiety, confusion, emotional isolation, and panic~\cite{mental-NEJM-2020}. According to the Global Burden of Disease 2020 study \cite{2020GBD-Lancet-2021}, the COVID-19 pandemic has caused nearly a 27.6\% increase in depression and a 25.6\% increase in anxiety worldwide.
Depression, which affects an estimated 3.8\% of the world's population\footnote{https://www.who.int/news-room/fact-sheets/detail/depression}, is now the leading cause of mental health-related disease burden globally~\cite{depression-burden-lancet-2019}. Depression causes persistent feelings of sadness that negatively affect how individuals feel, think, and act. In severe cases, depression can lead to suicide \cite{suicide-depression-lancet-2016}. Approximately 5\% of depressed adolescents will commit suicide \cite{depression-suicide-intervention-1998}. However, depression is preventable and treatable \cite{depression-treatable-2010}, and the sooner it is treated, the better the outcome \cite{depression-suicide-intervention-1998}. Despite a 41\% increase in the burden of mental disorders over the past two decades \cite{suicide-depression-lancet-2016}, mental health remains one of the most neglected yet crucial development issues. In many low- and middle-income countries (LMICs), there is less than one mental health worker per 100,000 people, and more than 75\% of people do not receive treatment \cite{2020GBD-Lancet-2021}.
\begin{figure*}[ht]
\centering
\includegraphics[width=18cm]{Figures/early_detection_of_depression.png}
\caption{Tweet Timeline of a COVID-19 Patient with Depression. Tweet $t_i$ represents the patient's mention of their COVID-19 diagnosis, while tweet $t_j$ represents their mention of becoming depressed. To perform early prediction of depression risk, we selected a set of tweets $T_S = \{t_1, t_2, \ldots, t_s\}$ posted at least two weeks prior to $t_j$ to predict the likelihood of the patient developing depression. (Note that all raw tweets included in this paper have been rephrased for desensitization and brevity.)}
\Description{early detection of depression}
\label{fig:example}
\end{figure*}
To alleviate the depression crisis caused by COVID-19, it is crucial to detect depressed patients at an early stage so that they can receive prompt treatment \cite{benefit-early-2016}. Nonetheless, social stigma and self-stigma have emerged as significant barriers to treatment \cite{stigma-barrier-2014, stigma-barrier-2006}. Despite the fact that depression can result in social withdrawal and isolation, many affected individuals attempt to disclose their experiences on social media due to the virtuality and privacy of social identity \cite{disclose-2020, disclose-plosone}. Moreover, online communities provide a hospitable environment that enables individuals to connect with others who face comparable challenges \cite{disclose-AAAI}. After the outbreak of COVID-19, the use of social media platforms has increased by 61\% as people rely on them to stay in touch with others \cite{covid-more-social}. As more individuals with depression tend to self-disclose and seek assistance on social media, these platforms provide a rich ecosystem for studying the manifestation and characteristics of depression.
This paper aims to develop a social media-based depression early detection model among COVID-19 patients. Using a knowledge distillation framework, our proposed model combines the longitudinal contextual information from Twitter posts and the daily emotional status of COVID-19 patients to predict their risk of depression. The contributions of this work are as follows:
Firstly, we constructed a dataset (DepCOV) comprising 10,656 Twitter users. It includes users at risk of depression following a COVID-19 diagnosis and a control group. We collected the date of COVID-19 infection, pre-infection posts, and post-infection posts for each patient in the dataset.
Secondly, we conducted in-depth experiments and data analysis to investigate the relationship between COVID-19 infection and depression. Our analysis focuses on identifying linguistic differences between depressed users and controls, as well as pre- and post-infection differences.
Thirdly, we developed an early depression risk detection model for COVID-19 patients. Figure \ref{fig:example} illustrates the historical tweets of a COVID-19 patient. The patient was infected around the time of tweet $t_i$ and developed depression signals around tweet $t_j$. To perform early prediction of depression risk, we selected tweets posted at least two weeks prior to $t_j$. Given the significant negative impact of COVID-19 on mood \cite{covid-mood-swing-2021, covid-mood-swing-2022}, we used mood swings as a potential diagnostic signal for depression detection. Our proposed Mood2Content model integrates both textual and emotional features through knowledge distillation to make predictions. Experimental results show that Mood2Content outperforms other competitive baselines, achieving high performance with an AUROC of 0.9317 and AUPRC of 0.8116.
\section{Related Work}
\subsection{Depression Detection in Social Media}
Unlike conventional machine learning tasks in other fields, which are supported by extensive, high-quality datasets with gold-standard diagnoses, the myriad privacy and ethical concerns surrounding mental disorders have limited the accessibility of datasets with clinically validated diagnostic information. Consequently, many researchers have devoted themselves to constructing reliable datasets to support various tasks. The annotation/development schemes of a dataset are mainly based on affiliation behaviors, self-reports, and expert/external validation (see more details in \cite{review-nDM-2020, dataset-gap-CHI}). The most ideal datasets are curated by the third scheme, which introduces experts' examination \cite{dataset-psychiatrist-2017} or incorporates electronic healthcare records \cite{dataset-ehr-facebook-2018}, but its effort- and time-consuming nature limits its scale, diversity, and accessibility \cite{dataset-gap-CHI}. Therefore, the first two schemes are the most popular and practical. The first scheme operationalizes hashtags, account following, and community participation related to psychiatric resources as signals of interest, such as following psychiatrist accounts \cite{dataset-follower-2015} or posting in depression forums \cite{dataset-forum-2014, dataset-forum-2017, dataset-TRT-2018}. The second scheme identifies persons of interest according to their self-disclosure on social media, such as matching patterns for feelings or diagnoses of mental disorders (e.g., \textit{"I was diagnosed with depression"}). For example, \cite{dataset-self-1-2014} adopted regular expressions for diagnosis patterns to identify persons with mental disorders on Twitter. Since then, more similar datasets have been proposed, such as RSDD \cite{dataset-RSDD-2017}, SMHD \cite{dataset-SMHD-2018}, and eRisk \cite{dataset-erisk-2019}, and related workshops have flourished, such as CLPsych \cite{competition-clpsych-2015} and eRisk \cite{competition-erisk-2017}.
Beyond these, Kelly and Gillan recruited participants who self-reported depressive episodes through an online worker platform \cite{dataset-recruitment-NC-2022, dataset-recruitment-nDM-2022}.
Research on mental health based on social media has mainly focused on detection models and potential indicators of mental disorders. For model development, the classical paradigm is the combination of feature extraction and a classifier, such as linguistic features with logistic regression \cite{dataset-self-1-2014}. Common feature extraction methods include TF-IDF, word embeddings \cite{depression-word-embedding-2018}, LIWC (Linguistic Inquiry and Word Count) \cite{method-liwc-2010}, and LDA (Latent Dirichlet Allocation) \cite{lad-2003}. More recently, research has gradually turned to deep learning models to represent posts, including convolutional neural networks (CNN) \cite{dataset-SMHD-2018}, recurrent neural networks (RNN) \cite{method-RNN-2018}, long short-term memory networks (LSTM) \cite{method-LSTM-2019}, and Transformers \cite{method-transformer-abstract-2021}. Meanwhile, \cite{method-multimodality-gui-2019} and \cite{method-multimodality-An-2020} combined text and images with multi-modality models. In particular, several studies introduced attention-based approaches to improve model interpretability and generalizability \cite{method-SS3-2019}, such as hierarchical attention networks (HAN) \cite{method-HAN-2016}. Recent studies have also incorporated psychiatric scales from clinical diagnosis to guide depression detection \cite{method-scale-acl-2022, method-scale-ijcai-2022}.
There is a growing interest in exploring the potential of social media for depression diagnosis, including linguistic characteristics \cite{method-semantic-2022, dataset-ehr-facebook-2018} and social behavior \cite{method-behavior-2017}. Studies have shown that LIWC, LDA, and text clustering can be used to examine linguistic differences between individuals with schizophrenia and healthy controls \cite{method-linguistic-schizophrenia-2015}. Trotzek et al. \cite{depression-word-embedding-2018} built a logistic regression classifier by integrating readability and emotion features into user-level linguistic metadata and further improved it with a CNN-based model. The work in \cite{method-multimodality-shen-2017} involved depression detection using multi-modality features such as social network features, user profile features, visual features, emotion features, topic-level features, and domain-specific features. Yang et al. \cite{mental-knowledge-2022} extracted mental state knowledge and infused it into a GRU model to explicitly model the mental states of the speaker. Kelley et al. \cite{dataset-recruitment-NC-2022} constructed personalized, within-subject networks based on depression-related linguistic features from LIWC and discovered a positive correlation between overall network connectivity and depression severity. The negative mood, a typical symptom of depression, has also been extensively studied in the context of social media posts, with the majority of research concentrating on content analysis or the extraction of hand-crafted features using lexicons or rules \cite{depression-sentiment-lexicon-2013, depression-sentiment-lexicon-2021}.
As a global health crisis, COVID-19 has received significant attention on social media. On the basis of large-scale social media data, there has been an abundance of research on COVID-19 \cite{social-covid-lancet-2021}, including thematic analysis \cite{covid-lda-2020}, symptom identification \cite{covid-symptom-2020, symptom-social-covid-2022}, and public perception analysis \cite{covid-perception-2020, mh-covid-2022}. However, research modeling the relationship between depression and COVID-19 is scarce. To the best of our knowledge, this paper represents the first attempt at early prediction of depression risk among COVID-19 patients.
\subsection{Research on Knowledge Distillation}
Large-scale deep learning models have limited practical applications due to their computational complexity and storage requirements. Knowledge distillation (KD) addresses this issue by distilling a large model into a smaller one with a relatively small reduction in performance \cite{distll-review-2021}. The student model in KD is guided jointly by a distillation loss, which reflects the gap between the student and teacher models, and a task loss, which measures the student model's prediction errors \cite{distill-hinton-2015}. Different KD strategies define the distillation loss differently. Distilled BiLSTM \cite{distill-distilled-BiLSTM-2019} used the MSE loss between the outputs of the teacher and student models as the distillation loss. BERT-PKD \cite{distll-pkd-2019} extracted information from intermediate layers and computed the MSE loss. DistilBERT \cite{distill-distillBERT-2019} and TinyBERT \cite{distill-tinyBERT-2019} guided the student model in the pre-training stage. MiniLM \cite{distill-minilm-2020} further distilled the self-attention distributions and value relations of the teacher's last Transformer layer to guide student training, making it effective and general for student models.
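For illustration, the generic KD objective described above can be sketched as follows (a minimal sketch: the MSE-on-outputs form follows Distilled BiLSTM, and the weight alpha is a hypothetical hyperparameter, not one specified by the cited works):

```python
import numpy as np

def kd_loss(student_out, teacher_out, task_loss, alpha=0.5):
    """Generic knowledge-distillation objective: a weighted sum of the
    task loss and a distillation loss. Here the distillation term is
    the MSE between student and teacher outputs (as in Distilled
    BiLSTM); alpha is a hypothetical weighting hyperparameter."""
    distill = float(np.mean((np.asarray(student_out) - np.asarray(teacher_out)) ** 2))
    return alpha * task_loss + (1.0 - alpha) * distill
```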
\section{Methodology}
\begin{figure*}[ht]
\centering
\includegraphics[width=12cm]{Figures/Mood2Content.png}
\caption{Framework of the Mood2Content model, which includes a Content Encoder and a Mood Encoder. Both encoders take daily aggregated tweets as input and aim to predict depression risk at an early stage.}
\label{fig:framework}
\end{figure*}
\subsection{Problem Formulation}
This study uses Twitter as the main social platform to detect depression and predict early-stage risk. Given a user $u$, we can acquire their historical tweets, such as posts and comments, which contain abundant information about personal experiences and feelings. We denote all tweets acquired from $u$ by $T=\{t_1, t_2, \ldots, t_N\}$, where $N$ is the total number of tweets. We also denote the first tweet mentioning a COVID-19 infection as $t_i$ and the first tweet exhibiting depression signals as $t_j$. In this paper, we aim to detect potential depression after users contract COVID-19 but before they explicitly express depressive feelings. Therefore, we focus on cases where $t_j$ is posted after $t_i$. For early detection, we further limit our study range to tweets posted at least two weeks before $t_j$. Consequently, the early depression risk prediction problem can be formulated as a binary classification problem: predicting a future depression label $y$ for user $u$ from the subset $T_s=\{t_m, t_{m+1}, \ldots, t_n\}$ of $T$, where $t_i < t_m < t_n < t_j$ on the timeline.
\subsection{Feature Extraction}
\subsubsection{COVID-19 Infection Time Extraction}
This subsection presents the extraction of $t_i$, i.e., the first tweet where the user self-reported a COVID-19 diagnosis. We identify self-reported COVID-19 tweets in two steps: 1) use keywords to filter tweets that contain specific expression phrases, such as "get COVID" or "test positive"; 2) use dependency parsing (supported by Stanza \cite{tools-staza-2020}) and rule-based approaches (such as negation detection) to determine the subject of infection. The first tweet with the user as the infection subject carries a timestamp, but this timestamp does not necessarily represent the user's infection time $t_i$. Therefore, we further apply regular expressions to extract time information from the tweet and determine the user's infection time $t_i$. More details and related resources on dataset construction can be found in the code repository.\footnote{https://github.com/Dragon-Wu/DepCov-WWW2023}
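For illustration, the keyword-filtering step can be sketched as follows (a minimal sketch: the patterns are a toy subset of the keyword list, and a cheap first-person check stands in for the Stanza dependency-parsing step):

```python
import re

# Hypothetical subset of infection-phrase patterns (illustrative only).
INFECTION_PATTERNS = [
    r"\b(got|get|have|caught)\s+covid\b",
    r"\btest(ed)?\s+positive\b",
]
NEGATION = re.compile(r"\b(not|never|didn't|don't|no)\b")

def is_self_reported_infection(tweet: str) -> bool:
    """Crude first-pass filter: an infection phrase with a first-person
    subject and no negation cue; a stand-in for dependency parsing."""
    text = tweet.lower()
    if not any(re.search(p, text) for p in INFECTION_PATTERNS):
        return False
    if NEGATION.search(text):
        return False
    # require a first-person pronoun as a cheap proxy for subject detection
    return bool(re.search(r"\b(i|i've|i'm)\b", text))
```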
\subsubsection{Depression Time Extraction}
This subsection presents the extraction of $t_j$, i.e., the first tweet where the user expressed depressed feelings after COVID-19 infection. Following \cite{dataset-SMHD-2018, method-scale-acl-2022}, we define self-reported depression as tweets that mention depression conditions and first-person pronouns within a short lexical distance. Based on official psychiatric resources\footnote{https://www.mayoclinic.org/diseases-conditions/depression/}$^,$\footnote{https://www.who.int/news-room/fact-sheets/detail/depression}$^,$\footnote{https://www.nimh.nih.gov/health/topics/depression}, we curate a comprehensive lexicon of depression conditions. The lexicon contains various expressions of depressive disorders (e.g., \textit{major depression disorder, dysthymia}), the status of extreme depression mood (e.g., \textit{miserable, hopeless}), and typical symptoms of depression (e.g., \textit{suicide, severe mood swings}). In addition, we also add colloquial expressions. With this lexicon and high-precision regular expression, we extract tweets with depressive signals and remove tweets with ambiguity, non-self-report, and negation. Manual validation on a random sample of 200 tweets shows an accuracy of 91.0\%.
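For illustration, the proximity rule between a lexicon hit and a first-person pronoun can be sketched as follows (a minimal sketch: the lexicon is a toy subset, and the three-token window is an assumed threshold for the short lexical distance):

```python
import re

# Toy subset of the depression lexicon described above (illustrative only).
DEPRESSION_TERMS = {"depression", "depressed", "hopeless", "miserable"}
FIRST_PERSON = {"i", "me", "my", "i'm", "i've"}
MAX_DISTANCE = 3  # assumed "short lexical distance" in tokens

def has_depression_signal(tweet: str) -> bool:
    """True if a first-person pronoun appears within MAX_DISTANCE tokens
    of a depression-lexicon term."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    term_idx = [i for i, t in enumerate(tokens) if t in DEPRESSION_TERMS]
    pron_idx = [i for i, t in enumerate(tokens) if t in FIRST_PERSON]
    return any(abs(i - j) <= MAX_DISTANCE for i in term_idx for j in pron_idx)
```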
\subsubsection{Aggregation of Daily Tweets}
After the infection time $t_i$ and the depression tweet $t_j$ were identified, we selected all tweets posted at least two weeks before $t_j$ and denoted them as $T_s=\{t_m, t_{m+1}, \ldots, t_n\}$. The objective was to extract features from $T_s$ in order to predict whether this user would develop depression in the near future. Due to Twitter's character limit, tweets were typically brief, making semantic and sentiment analysis difficult and resulting in frequent mood swings. To address this issue, we condensed the historical tweets $T_s$ into daily tweets $D=\{d_1, d_2, \ldots, d_M\}$ and sorted them in reverse chronological order, where $d_i$ represents all tweets generated on the $i$th day. Although everyone's mental state fluctuates over time, including that of depressed patients, tweets posted long ago may not accurately reflect the current mental state and could lead to inaccurate predictions. To improve efficiency and focus on the user's current state, we truncated historical posts after four weeks, enabling online and timely detection. Consequently, the number of elements in $D$ is at most 28 ($4 \times 7$).
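The daily aggregation step can be sketched as follows (a minimal sketch; tweets is assumed to be a list of (date, text) pairs already restricted to the study window):

```python
from collections import defaultdict
from datetime import date

def aggregate_daily(tweets, max_days=28):
    """Merge tweets into daily documents, most recent day first,
    truncated to the last `max_days` active days (4 weeks)."""
    by_day = defaultdict(list)
    for day, text in tweets:
        by_day[day].append(text)
    days = sorted(by_day, reverse=True)[:max_days]
    return [" ".join(by_day[d]) for d in days]
```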
\subsubsection{Tweet Representation}
After merging daily tweets into $D=\{d_1, d_2, ..., d_M\}$, we adopt BERT \cite{method-BERT-2018} as the textual encoder to represent each $d_j$ in $D$. Here we use COVID-Twitter-BERT-v2 (CTB) \cite{BERT-covid-2020}, a BERT-large-uncased model that has been incrementally pre-trained on large-scale COVID-19-related tweets. In recognition of the important role of emotional information in depression detection, we also develop a Mood Encoder to capture the emotional context of tweets. To enhance its capability, we further pre-train the CTB model on three sentiment-related tasks: sentiment classification \cite{encoder-tweeteval-2020, semeval-sentiment-2017}, emotion recognition \cite{semeval-emotion-2018}, and targeted sentiment analysis \cite{encoder-metscov-2022}. These three tasks yield three optimized models based on CTB, which we denote as CTB-St, CTB-Emo, and CTB-Tsa, respectively.
The CTB-St and CTB-Emo models were fine-tuned using the SemEval 2017-Sentiment Analysis in Twitter \cite{semeval-sentiment-2017} and SemEval 2018 - Emotion Recognition \cite{semeval-emotion-2018} datasets, respectively. CTB-St categorizes the overall sentiment of tweets into negative, neutral, and positive, while CTB-Emo infers the emotional state of a tweet (anger, joy, sadness, optimism). Both models were fine-tuned with a basic BERT setting, which involves mean-pooling the embeddings of the last hidden state of CTB and inputting it into a linear classifier. The third task, TSA (Targeted Sentiment Analysis), is a fine-grained sentiment analysis aimed at inferring user sentiment toward targeted entities (negative, neutral, and positive). The CTB-Tsa model was fine-tuned on the METS-CoV dataset \cite{encoder-metscov-2022}, which contains COVID-19 related tweets, using the BERT-SPC model setting \cite{method-BERT-2018}.
For each daily aggregated tweet $d_j=\{w_{1}, w_{2}, ..., w_{N_j}\}$ of user $u$, we adopt the mean-pooling of the last hidden state of BERT model as the tweet representation:
\begin{equation}
c_j = Content_{Encoder}(d_j) =\frac{1}{N_j}\sum^{N_j}_{l=1}BERT_{|LAST|}(w_{1}, w_{2}, ..., w_{l})
\end{equation}
where $c_j$ refers to the content representation of $d_j$.
\begin{align}
m_j & = Mood_{Encoder}(d_j)
\label{eqn:mood}
\end{align}
where $m_j$ refers to the mood representation of $d_j$ and $Mood_{Encoder}$ can be one of CTB-St, CTB-Emo and CTB-Tsa.
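For illustration, the mean-pooling used by both encoders can be sketched as follows (a minimal sketch with a NumPy array standing in for the encoder's last hidden state; the shape $(N_j, \text{hidden\_size})$ is assumed):

```python
import numpy as np

def mean_pool(last_hidden: np.ndarray) -> np.ndarray:
    """Mean-pool the last-layer token embeddings of a daily tweet d_j
    into a single vector (c_j or m_j). `last_hidden` stands in for the
    encoder output; shape (N_j, hidden_size)."""
    return last_hidden.mean(axis=0)
```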
\subsection{Mood2Content Model}
As shown in Figure \ref{fig:framework}, we propose a novel framework, Mood2Content, which leverages both the content representation and the mood representation to conduct early detection of depression.
Given a user $u$ with daily merged tweets $D=\{d_1, d_2, ..., d_M\}$, we use the content encoder and the mood encoder to acquire the corresponding representations $C= \{c_1, c_2, \ldots, c_M\}$ and $M= \{m_1, m_2, \ldots, m_M\}$. We then generate the embedding $x$ of user $u$ based on $C$ and $M$.
The set $D$ contains the merged daily posts sorted in reverse chronological order, and this ordering information needs to be included in the model, because the most recent tweets record the current status of the user, which is more informative for future depression risk prediction. As a result, we add position information to the content representation $c_j$ of each $d_j$ in $D$.
\begin{equation}
c'_{j} = Concatenation(c_j,pos_j)
\end{equation}
where $pos_j$ is a hard position embedding denoting the day gap between the $j$th day and now, emphasizing the timeline information.
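This concatenation can be sketched as follows (a minimal sketch; the one-hot form of the hard position embedding is an assumption, since the text only specifies that it encodes the day gap):

```python
import numpy as np

def add_position(c_j: np.ndarray, day_gap: int, max_days: int = 28) -> np.ndarray:
    """Concatenate a hard position embedding encoding the day gap
    between day j and now onto the content vector c_j. The one-hot
    encoding here is an illustrative assumption."""
    pos = np.zeros(max_days)
    pos[day_gap] = 1.0
    return np.concatenate([c_j, pos])
```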
After updating the content representation with position information, we acquire the user representation $x$ with a user encoder that consists of a Transformer and a self-attention layer. The Transformer enables $c'_j$ to utilize the information from the other daily tweets. $s_j$ is the $j$th embedding of the last hidden state of the Transformer:
\begin{equation}
s_{j} = User_{Encoder|LAST|}(C', j)
\end{equation}
A self-attention layer is used to generate the weighted sum of all $s_j$:
\begin{equation}
\alpha_j = \frac{\exp(Ws_{j}+b)}{\sum^M_{k=1}\exp(Ws_{k}+b)}
\end{equation}
\begin{equation}
x = \sum_{j=1}^{M}\alpha_js_{j}
\end{equation}
where $W$ are learnable parameters.
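The self-attention pooling above can be sketched as follows; the day count, hidden size, and parameter values are illustrative stand-ins, not the trained weights:

```python
import numpy as np

def attention_pool(S, W, b):
    """Score each day with Ws_j + b, softmax-normalize into alpha_j,
    and return the alpha-weighted sum x of the daily embeddings s_j."""
    scores = S @ W + b                           # (M,) one score per day
    scores = scores - scores.max()               # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    x = alpha @ S                                # weighted sum over days
    return x, alpha

rng = np.random.default_rng(1)
S = rng.normal(size=(5, 8))                      # M = 5 days, hidden size 8
W, b = rng.normal(size=8), 0.0
x, alpha = attention_pool(S, W, b)
print(x.shape, round(float(alpha.sum()), 6))     # (8,) 1.0
```

The weights $\alpha_j$ sum to one, which is what later allows them to be read as per-day attention in the case study.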
Then, the user representation $x$ is the input of the classifier head $F$ (linear layer) to predict the depression risk $p$.
\begin{equation}
p = SIGMOID(W_F\cdot x + b_F)
\end{equation}
where $W_F$ are learnable parameters.
Therefore, the model can be trained with the loss function of depression prediction $\mathcal{L}_{clf}$ :
\begin{equation}
\mathcal{L}_{clf}=-\left[y\cdot \log p + (1-y)\cdot \log(1-p)\right]
\end{equation}
To integrate the mood representation into depression risk prediction, inspired by knowledge distillation, we guide the content encoder to align with the mood representation. In detail, we first acquire the mood representation $M= \{m_1, m_2, \ldots, m_M\}$ in Eq.(\ref{eqn:mood}). Then, the mood encoder is frozen during depression detection and its weights are no longer updated. We introduce $\mathcal{L}_{distill}$ as a distance measure between the mood representations $M$ and the content representations $C$, which guides the content encoder to reach a trade-off between feature fusion and model classification:
\begin{equation}
\mathcal{L}_{distill}=\lVert M - C \rVert^2_2
\end{equation}
Therefore, the Mood2Content model is optimized towards both mood distillation and prediction error reduction. The overall loss of model can be formulated as a weighted sum of $\mathcal{L}_{clf}$ and $\mathcal{L}_{distill}$:
\begin{equation}
\begin{split}
\mathcal{L}&=\alpha \cdot \mathcal{L}_{clf} + (1-\alpha) \cdot \mathcal{L}_{distill} \\
&=-\alpha \cdot \left[y\cdot \log p + (1-y)\cdot \log(1-p)\right] + (1-\alpha) \cdot \lVert M - C \rVert^2_2
\end{split}
\end{equation}
where $\alpha$ is an adjustable factor that can emphasize feature fusion or classification.
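As an illustrative scalar sketch of this objective (writing the cross-entropy with its conventional negative sign so that the loss is minimized), the vectors below are hypothetical stand-ins for the mood and content representations of a single user:

```python
import numpy as np

def mood2content_loss(p, y, content, mood, alpha=0.5, eps=1e-12):
    """alpha * L_clf + (1 - alpha) * L_distill for one user.
    p: predicted risk, y: label, content/mood: representation vectors."""
    l_clf = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    l_distill = np.sum((mood - content) ** 2)   # ||M - C||_2^2
    return alpha * l_clf + (1 - alpha) * l_distill

rng = np.random.default_rng(2)
content = rng.normal(size=4)   # trainable content representation C
mood = rng.normal(size=4)      # frozen mood representation M
loss = mood2content_loss(p=0.9, y=1, content=content, mood=mood)
print(loss >= 0)               # True: both terms are non-negative
```

Setting $\alpha=1$ recovers the pure classification loss, while lowering $\alpha$ pulls the content representations toward the frozen mood space.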
\section{EXPERIMENTS}
\subsection{Dataset}
We select original English tweets related to COVID-19 using unique tweet identifiers (tweet IDs) from a widely used open-source COVID-19 tweet database \cite{model-datacollection-chen-2017, model-datacollection-lopez-2021}. These tweets were identified via Twitter's trending topics and keywords associated with COVID-19, such as \textit{COVID-19} and \textit{SARS-COV-2}. We first downloaded 471,553,966 target tweets across 27 months, from February 1st, 2020 to April 30th, 2022, using Twitter's Application Programming Interface (API). After identifying COVID-19 patients, we further collected the retrospective tweets posted between January 1st, 2020 and December 31st, 2021 by each infected user for further analysis and modeling.
Due to the mental health problems brought on by COVID-19, we presume that many vulnerable persons may present depression risk after a COVID-19 diagnosis. We split the entire user set into two groups according to the quantity and timestamps of their depression tweets: 1) the treatment group includes users emitting depression signals after suffering from COVID-19. We require users in this group to have posted more than three depression tweets, the first of which was posted at least two weeks after their COVID-19 diagnosis; in addition, these users never posted a depression tweet before COVID-19. In particular, we set a two-week window period, a widely used time window in the diagnosis of mental disorders, between the COVID-19 diagnosis and the emergence of depression risk, and the subsequent modeling and analysis use only tweets posted before it; 2) the control group includes users who mention depression neither before nor after COVID-19 infection. For each user in the first group, we select 5 users with a similar quantity of tweets and add them to the second group. In addition, all eligible users must have more than 25 tweets both before and after the COVID-19 diagnosis, $\geq75\%$ of which are written in English.
In this manner, we build a dataset of COVID-19 patients with depression signals and name it the DepCOV dataset. DepCOV consists of 1,776 depression cases (positive) and 8,880 controls (negative), with 10,488,061 tweets in total. For model development and evaluation, we split DepCOV into training, validation, and testing sets with a proportion of 7:1:2.
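The 7:1:2 split can be reproduced with two stratified calls to scikit-learn's train_test_split; the labels below mirror the DepCOV class counts, and the random seed is illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([1] * 1776 + [0] * 8880)   # DepCOV class counts
idx = np.arange(len(y))

# 20% test first, then 1/8 of the remainder as validation -> 7:1:2 overall;
# stratifying preserves the 1:5 positive-to-control ratio in every split.
idx_trainval, idx_test = train_test_split(
    idx, test_size=0.2, stratify=y, random_state=42)
idx_train, idx_val = train_test_split(
    idx_trainval, test_size=0.125, stratify=y[idx_trainval], random_state=42)

print(len(idx_train), len(idx_val), len(idx_test))
```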
As the overall statistics of DepCOV in Table \ref{Table:DepCOV} show, the depressed COVID-19 patients posted more tweets than the controls, and this gap widened further after they contracted COVID-19. Besides, the depressed users in DepCOV posted an average of 5.27 depression tweets, and the time between their COVID-19 diagnosis and depression onset averaged 59.41 days.
\subsection{Settings}
\begin{table}[t]
\caption{The statistics of the proposed dataset DepCOV. (Both include COVID-19 patients who will or will not get depressed in two weeks)}
\begin{center}{
\resizebox{\linewidth}{!}{
\begin{tabular}{ c c c c c c }
\hline
\multicolumn{1}{c}{{\textbf{Statistics}}} & \multicolumn{2}{c}{\textbf{Depression (n=1,776)}} & \multicolumn{2}{c}{\textbf{Controls (n=8,880)}} & \multicolumn{1}{c}{\textbf{DepCOV (n=10,656)}} \bigstrut\\
\cline{2-6} \multicolumn{1}{c}{\textbf{(Mean)}} & \multicolumn{1}{c}{\textbf{Before COV}} & \multicolumn{1}{c}{\textbf{COV to Dep}} & \multicolumn{1}{c}{\textbf{Before COV}} & \multicolumn{1}{c}{\textbf{After COV}} & \multicolumn{1}{c}{\textbf{Overall}} \bigstrut\\
\hline
Tweets Count &583.74 &383.44 &586.87 &400.78 &492.12 \\
\hline
Days Count &109.57 &59.01 &149.40 &108.70 &121.59 \\
\hline
Tweet length &23.21 &23.95 &21.44 &21.47 &21.81 \\
\hline
Tweet per day &5.33 &6.50 &3.93 &3.69 &2.02 \\
\hline
Daily Tweet length &113.64 &135.88 &78.64 &75.86 &85.17 \\
\hline
\end{tabular}}}
\label{Table:DepCOV}
\end{center}
\end{table}
To evaluate model performance objectively, models of the same type share exactly the same hyperparameters. The maximum number of training epochs is 10 and the early-stopping patience is 10. The training batch size is 32 and the learning rate is 5e-5 with a cosine scheduler with warm-up.
The $\alpha$ of Mood2Content model is 0.5, which yields a balance between distillation loss and classification loss.
To avoid the influence of randomness, we run each model with 3 different seeds (42, 52, 62) and report the average performance. For practical availability and generalizability, we adopt the area under the receiver operating characteristic curve (AUROC) and under the precision-recall curve (AUPRC) instead of accuracy or F1-score, which are widely used in such tasks but assume a hard threshold of 0.5. AUROC and AUPRC evaluate model performance more comprehensively, independent of any threshold, enabling more aggressive or conservative interventions for persons at depression risk.
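Both metrics are threshold-free and can be computed with scikit-learn; the scores below are hypothetical model outputs on a small 1:5-imbalanced sample, with average precision used as the standard estimate of AUPRC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical labels (3 positives, 9 controls) and predicted risks.
y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0])
y_score = np.array([0.90, 0.45, 0.40, 0.10, 0.20, 0.30,
                    0.05, 0.50, 0.15, 0.20, 0.80, 0.35])

auroc = roc_auc_score(y_true, y_score)            # ranking quality
auprc = average_precision_score(y_true, y_score)  # AP estimate of AUPRC
print(f"AUROC={auroc:.4f} AUPRC={auprc:.4f}")
```

Neither score requires picking an operating point, so the same model can later serve either aggressive or conservative intervention policies.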
\subsection{Analysis}
\subsubsection{Linguistic discrepancy}
To analyze content differences, we compared psycholinguistic characteristics between COVID-19-infected patients who developed depression and those who did not, as well as between tweets posted by depressed patients prior to and after their COVID-19 diagnosis. We utilized the LIWC lexicon, a psychometrically validated mapping of words to psychological concepts that has been widely applied to the analysis of mental health in social media text \cite{method-liwc-2010, dataset-self-1-2014}. We conducted Chi-square tests on each characteristic and determined its odds ratio (OR). Table \ref{Table:LIWC} displays the characteristics with the ten highest and ten lowest odds ratios; all p-values are <0.0001.
Compared to pre-COVID-19 diagnosis, depressed individuals used fewer words associated with recreation (leisure, positive emotion, friends, and motion) and more words associated with sexuality, health, risk, negation, and anger, indicating a change in their lifestyle and concerns \cite{depression-creation-2021}. Similarly, depressed individuals expressed fewer positive words (accomplishment, reward, power, and leisure) than non-depressed individuals. Specifically, and in accordance with clinical or social media studies, we observed that depressed individuals tended to use more first-person pronouns than controls, indicating an increase in self-focused attention \cite{dataset-recruitment-nDM-2022, depression-firstperson-2017}. In addition, an increase in female- and family-related words may reflect a description of familial affection.
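As a sketch of this procedure, the test and odds ratio for a single LIWC category can be computed from a 2x2 contingency table of category hits and misses in the two groups; all counts below are hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

def category_or(hits_a, miss_a, hits_b, miss_b):
    """Chi-square p-value and odds ratio for one LIWC category,
    comparing group A (e.g. depression) against group B (e.g. controls)."""
    table = np.array([[hits_a, miss_a], [hits_b, miss_b]])
    chi2, p, _, _ = chi2_contingency(table)
    odds_ratio = (hits_a * miss_b) / (miss_a * hits_b)  # cross-product OR
    return odds_ratio, p

# Hypothetical word counts for a first-person-pronoun category.
or_, p = category_or(1200, 98800, 5100, 494900)
print(f"OR={or_:.2f} p={p:.2e}")
```

An OR above 1 indicates the category is over-represented in group A, matching the reading of Table \ref{Table:LIWC}.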
\begin{table}[t]
\caption{Discrepancy of psycholinguistic features. \\(The odds ratios (ORs) quantify the linguistic disparities between depression and controls, as well as between the pre- and post-COVID phases of the depression group. All \textit{p} < 0.0001)}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c c c c c c c c}
\hline
\multicolumn{4}{c}{\textbf{After COVID-19 VS Before COVID-19 (Depression)}} & \multicolumn{4}{c}{\textbf{Depression VS Controls}} \\ \cmidrule(r){1-4} \cmidrule(r){5-8}
\textbf{Category} & \textbf{OR} & \textbf{Category} & \textbf{OR} & \textbf{Category} & \textbf{OR} & \textbf{Category} & \textbf{OR} \\ \hline
Leisure & 0.92 & Sexual & 1.08 & Money & 0.85 & I & 1.26 \\ \hline
Ingest & 0.94 & Health & 1.04 & You & 0.88 & Female & 1.17 \\ \hline
Nonfluencies & 0.96 & Risk & 1.04 & Achievement & 0.93 & Family & 1.10 \\ \hline
See & 0.96 & They & 1.04 & Work & 0.93 & Ingest & 1.10 \\ \hline
Home & 0.97 & Money & 1.04 & Death & 0.93 & Filler Words & 1.10 \\ \hline
Affiliation & 0.97 & Causal & 1.03 & Reward & 0.93 & Anxiety & 1.08 \\ \hline
Positive Emotions & 0.97 & SheHe & 1.03 & Power & 0.94 & Insight & 1.08 \\ \hline
Perceptual Processes & 0.97 & Negations & 1.02 & Leisure & 0.94 & Feel & 1.07 \\ \hline
Friends & 0.97 & Insight & 1.02 & We & 0.96 & Assent & 1.06 \\ \hline
Motion & 0.97 & Anger & 1.02 & Drives & 0.96 & Religion & 1.05 \\ \hline
\end{tabular}
\label{Table:LIWC}
}
\end{table}
\subsubsection{Content analysis of Depression tweet}
To shed light on the potential underlying causes of depression, we analyzed the content of tweets pertaining to depression. The tweets were filtered for depression-related conditions, and Latent Dirichlet Allocation (LDA) was used to identify the primary concerns of depressed individuals and determine what factors may have contributed to their depressive moods. The number of topics was varied between ten and 200, and the optimal model was chosen based on its coherence and perplexity. Table \ref{Table:LDA} displays the leading ten topics and their top words for the best model, which had 125 topics. Our analysis revealed that the sources or targets of negative emotions were frequently associated with the ongoing pandemic, such as the lockdown, the government, treatments, and the disease itself. Other topics centered on the participants' emotions and feelings, including depression, encouragement, and complaints.
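A compact sketch of this model selection with scikit-learn: fit LDA for several candidate topic counts and keep the lowest-perplexity model. The paper additionally scores coherence, and the six documents below are hypothetical stand-ins for depression-related tweets:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "panic attack again cannot sleep",
    "government lockdown rules make no sense",
    "another panic attack awful night",
    "lockdown rules government failure",
    "stay strong praying for recovery",
    "praying stay strong you got this",
]
X = CountVectorizer().fit_transform(docs)

# Grid over candidate topic counts and keep the lowest-perplexity model
# (on real data this would be evaluated on held-out documents).
models = [LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
          for k in (2, 3, 4)]
best = min(models, key=lambda m: m.perplexity(X))
print(best.n_components, best.perplexity(X) > 0)
```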
\begin{table}
\caption{The most concerned Topic of Depression Tweets (Top-10 words of top-10 topics)}
\resizebox{\linewidth}{!}{
\begin{tabular}{cc}
\toprule
\textbf{Topic} & \textbf{Keywords}\\
\midrule
Lockdown & \makecell[c]{absolutely, outside, door, figure, stream, \\breathe, pressure, air, strange, day} \\
\hline
Government & \makecell[c]{place, government, wonder, explain, bunch, \\result, people, fix, citizen, believing} \\
\hline
Depression & \makecell[c]{attack, panic, panic\_attack, piece, awful, \\reminds, time, victim, forced, failure} \\
\hline
Policy & \makecell[c]{American, fast, freedom, middle, exist, \\accept, overwhelming, hero, military, ocd} \\
\hline
Encouragement & \makecell[c]{went, fall, strong, time, happens, \\option, praying, counseling, stay, stay\_strong} \\
\hline
Complaint & \makecell[c]{mind, fear, fact, past, win, \\space, city, committing, medicine, the\_fact} \\
\hline
Treatment & \makecell[c]{second, heard, happened, sadness, pill, \\smoking, xanax, intense, recovery, nausea} \\
\hline
Disease & \makecell[c]{heart, ask, important, ill, beat, \\present, reality, pm, heart\_attack, alcohol} \\
\bottomrule
\end{tabular}
\label{Table:LDA}
}
\end{table}
\subsection{Early Depression Detection }
\subsubsection{Baselines}
To fully evaluate the performance of different models in our experiment settings, we constructed several baselines which range from statistical NLP models to deep learning models.
\textbf{LIWC+LR:} As users' language characteristics can reveal their psychological state, the first baseline is LIWC+LR, which extracts the psycholinguistic features of the merged historical tweets via LIWC \cite{method-liwc-2010} and predicts the depression risk with a logistic regression classifier.
\textbf{TF-IDF+XGBoost:} This baseline adopts TF-IDF-weighted word and character n-gram features and the popular machine learning model XGBoost \cite{model-xgbboost-2015}, which is also extensively used in similar tasks \cite{dataset-TRT-2018}.
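This baseline's pipeline can be sketched with scikit-learn as follows; logistic regression stands in for XGBoost to keep the sketch dependency-free, and the four texts and labels are purely hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["feeling so down and hopeless lately",
         "great day out at the park today",
         "cannot stop crying these days",
         "excited for the weekend trip"]
labels = [1, 0, 1, 0]   # 1 = depression signal, 0 = control

# TF-IDF over character n-grams within word boundaries, as in the baseline;
# the classifier is a stand-in for XGBoost.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
pred = clf.predict(["feeling hopeless again"])[0]
print(pred in (0, 1))   # True
```

Word n-grams would be added as a second vectorizer in a FeatureUnion; the sketch keeps only the character branch for brevity.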
\textbf{HAN}: Simply concatenating all tweets, in the absence of temporal information, can easily cause crucial clues to be lost in a large-scale corpus \cite{method-scale-ijcai-2022}. Therefore, we select the representative HAN \cite{method-HAN-2016} as a deep learning baseline, which conducts depression prediction with a hierarchical attention neural network. HAN obtains the representation of each tweet $t_j$ with a bidirectional GRU and further encodes all tweet representations into a user representation with an attention mechanism to conduct the final prediction. To improve model performance, we select the 500, 1000, and 2000 latest tweets, and 4 weeks of daily tweets, as model input to develop the baseline models \textbf{HAN-500}, \textbf{HAN-1000}, \textbf{HAN-2000}, and \textbf{HAN-Daily}, respectively. \textbf{HAN-Daily(BERT)} takes daily tweets as input and replaces the BiGRU with CTB. On the basis of HAN-Daily(BERT), \textbf{HAN-User} adds the user encoder of Mood2Content to encode the tweet representations.
\textbf{Mood\&Content}: This baseline directly concatenates the mood representation and the content representation of the daily tweets to generate a combined representation, which acts as the tweet representation from which the user representation is generated for the subsequent prediction.
\subsubsection{Results}
As shown in Table \ref{Table: performance of baseline} and Table \ref{Table: performance of ablation study}, the proposed Mood2Content outperforms the other models in early depression detection. Among the baseline models, HAN-Daily achieved the highest performance, indicating that recent, aggregated daily tweets help the model seize recent changes in users' mental status. Meanwhile, including more historical tweets did not always improve performance, as the comparison of HAN-500, HAN-1000, and HAN-2000 shows.
For the advanced models, extensive experiments were conducted to demonstrate the performance of the different strategies and to investigate the effects of the different components:
\textbf{Effect of Tweet Encoder} Owing to the selection and aggregation of daily tweets, a large-scale pre-trained language model can be used as the tweet encoder instead of a conventional shallow CNN or GRU \cite{dataset-RSDD-2017}. The HAN-Daily(BERT) model achieved an improvement of up to 0.1333 in AUPRC and 0.0493 in AUROC over the original HAN with BiGRU. Meanwhile, the other models with BERT-based tweet encoders all achieved good performance.
\textbf{Effect of User Encoder} Compared with HAN-Daily(BERT), HAN-User was improved by introducing a user encoder, which consists of position embedding, a Transformer, and a self-attention layer to further encode the tweet representations. The cooperation of position embedding and the Transformer enables the user encoder to capture the longitudinal information, and the final self-attention layer improves the model interpretability.
\begin{table}
\caption{Performance of fine-tuned CTB on sentiment tasks. \\(CTB-St is fine-tuned on the sentiment classification task, CTB-Emo on the emotion recognition task, and CTB-Tsa on the targeted sentiment analysis task)}
\begin{threeparttable}
\begin{tabular}{cccc}
\hline
\textbf{Model} & \textbf{Acc} & \textbf{Recall} & \textbf{F1} \\
\hline
CTB-St &0.7183 &\textbf{0.7260} &0.7173 \\
\hline
CTB-Emo &0.8294 &0.8034 &\textbf{0.7974} \\
\hline
CTB-Tsa &0.7629 &0.6738 &\textbf{0.7003}\\
\hline
\end{tabular}
\begin{tablenotes}
\item \footnotesize (Bold indicates the recommended metric)
\end{tablenotes}
\end{threeparttable}
\label{Table: performance of finetune}
\end{table}
\textbf{Effect of Emotional Signal} The results of HAN(BERT) and HAN-User demonstrate that an emotional BERT can also be used as the tweet encoder, and CTB-Tsa yields better performance than the general BERT in both. Notably, this strategy amounts to modeling daily mood swings as a potential diagnostic signal for depression detection, which is consistent with psychiatric studies \cite{depression-mood-swing-2000, depression-mood-swing-2010} and clinical practice \cite{depression-scale-phq9-2001, depression-scale-beck-1987}.
\textbf{Effect of Different Mood Encoder} The results of the fine-tuned models are shown in Table \ref{Table: performance of finetune}; all of them nearly reached the reported SOTA performance. We examined the improvement brought by the different mood encoders. All the mood encoders performed about the same, and CTB-Tsa achieved the best result in 3 of the 4 models. This could be due to the specific COVID-Twitter dataset, or because the fine-grained sentiment analysis captures more mood information.
\textbf{Effect of Mood Distillation} As shown in Table \ref{Table: performance of ablation study}, the proposed Mood2Content yielded the highest performance, with an AUPRC of 0.8116 and an AUROC of 0.9317. Compared to HAN-User, which relied solely on the content or emotional information, Mood2Content improved by an average of 0.0548 in AUPRC and 0.0160 in AUROC. However, although Mood\&Content also contained both the content and the emotional information, it performed even worse than the models with single-source information. This may be due to the huge gap between the two semantic spaces, because the encoders are designed to capture different contextual representations. To address this discrepancy, Mood2Content guides the semantic space of the content encoder towards that of the mood encoder through knowledge distillation, while learning to conduct depression detection simultaneously.
\begin{table}
\caption{Performance of early detection of depression with different models and strategies.\\( Mood\&Content is a simple concatenation of mood and content representation. Mood2Content is the proposed model which guides content representation with mood representation.)}
\begin{threeparttable}
\begin{tabular}{ccc}
\hline
\textbf{Model} & \textbf{AUPRC} & \textbf{AUROC} \\
\hline
LIWC+LR & 0.2815 & 0.7017 \\
\hline
TF-IDF+XGBoost &0.4737 &0.7933 \\
\hline
HAN-500 tweets &0.3219 &0.7082 \\
HAN-1000 tweets &0.3269 &0.7138 \\
HAN-2000 tweets &0.2623 &0.6251 \\
HAN-Daily &0.6026 &0.8621 \\
HAN-Daily(BERT)* &0.7359 &0.9114 \\
\hline
HAN-User* &0.7649 &0.9198 \\
\hline
Mood\&Content* &0.5364 &0.8447 \\
\hline
Mood2Content* &\textbf{0.8116} &\textbf{0.9317} \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item * indicates that the best performance among mood-encoder variants is reported
\end{tablenotes}
\end{threeparttable}
\label{Table: performance of baseline}
\end{table}
\begin{table}[t]
\caption{Ablation study. \\(HAN-Daily is the basic content encoder w/o user encoder and mood representation; HAN-User includes the user encoder; Mood\&Content concatenates mood and content representation. Mood2Content is the proposed model.)}
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc}
\hline
\textbf{Model - Encoder} & \textbf{AUPRC} & \textbf{AUROC} \\
\hline
HAN-Daily(BERT) - CTB & 0.6794 & 0.8939 \\
HAN-Daily(BERT) - CTB-St & 0.5909 & 0.8422 \\
HAN-Daily(BERT) - CTB-Emo & 0.4772 & 0.7421 \\
HAN-Daily(BERT) - CTB-Tsa & 0.7359 &0.9114 \\
\hline
HAN-User - CTB & 0.7499 & 0.9194 \\
HAN-User- CTB-St & 0.7396 & 0.9098 \\
HAN-User- CTB-Emo & 0.7428 & 0.9108 \\
HAN-User- CTB-Tsa & 0.7649 & 0.9198 \\
\hline
Mood\&Content - CTB-St & 0.5514 & 0.8298 \\
Mood\&Content - CTB-Emo & 0.5364 & 0.8477 \\
Mood\&Content - CTB-Tsa & 0.5129 & 0.8235 \\
\hline
Mood2Content - CTB-St & 0.8029 & 0.9313 \\
Mood2Content - CTB-Emo & 0.7978 & 0.9299 \\
Mood2Content - CTB-Tsa &\textbf{ 0.8116} & \textbf{0.9317} \\
\hline
\end{tabular}
}
\label{Table: performance of ablation study}
\end{table}
\section{CASE STUDY}
Our framework's attention-based user encoder allows us to visualize the impact of daily tweets on the final depression prediction. This is accomplished by analyzing the attention weight assigned to each day. The daily attention weights of a positive case are depicted in Table \ref{Table: case study}, with a darker background color indicating a greater attention weight. On days 0, 2, and 16, the patient posted more desperate tweets, which are characterized by greater attention weights. This demonstrates that our framework can not only accurately predict depression risk but also estimate the risky days. It is possible to delve deeper into the relationships between depression and social factors among a large number of depression patients by incorporating additional information, such as weekdays vs. weekends or holidays vs. normal days. In addition, Table \ref{Table: case study} also lists the daily emotions of the patient, and we discovered that our model does not always emphasize negative emotions (such as sadness and anger). This suggests that the context-based model has a different emphasis than the emotions, and the combination of both information sources can result in more accurate predictions.
\begin{table}
\caption{A case study of a COVID-19 patient at depression risk. \\(Emotion recognition was performed on each daily tweet by CTB-Emo, and each tweet is colored by its attention weight in the user representation.)}
\resizebox{\linewidth}{!}{
\begin{threeparttable}
\begin{tabular}{ccc}
\toprule
\textbf{Days (Before)} & \textbf{Emotion} & \textbf{Daily Tweet}\\
\midrule
Day 28 &Sadness & \makecell[c]{\colorbox{red!8.25}{\parbox{\columnwidth}{{\strut{I have had 3 telemed visits with my doctor. They are not really seeing people in person in my area...}}}}} \\
\hline
\multicolumn{3}{c}{......} \\
\hline
Day 17 &Optimism & \makecell[c]{\colorbox{red!4.88}{\parbox{\columnwidth}{{\strut{Ongoing mission to find new life and new civilizations. Boldly go where no one has gone before.}}}}} \\
\hline
Day 16 &Optimism & \makecell[c]{\colorbox{red!23.62}{\parbox{\columnwidth}{{\strut{I have covid, I have the antibodies. I was only very sick for 4 days. Then my immune system kicked in and i felt much better. it does not have to be a long battle. stay up as much as possible. don't take it laying down. don't sleep laying flat. eat and drink plenty. This is how i want to die.}}}}} \\
\hline
Day 15 &Optimism & \makecell[c]{\colorbox{red!4.78}{\parbox{\columnwidth}{{\strut{@user glad we talked about your problems and made it over that. each one teach one. ...}}}}} \\
\hline
\multicolumn{3}{c}{......} \\
\hline
Day 8 &Sadness & \makecell[c]{\colorbox{red!22.88}{\parbox{\columnwidth}{{\strut{I developed a mental health disorder in which I crave ice cream. New song upload. ... }}}}} \\
\hline
Day 7 &Anger & \makecell[c]{\colorbox{red!6.38}{\parbox{\columnwidth}{{\strut{@user Yes, I believe only the best can apply for police, the only problem is that only \#offensive and \#offensive want this job now. }}}}} \\
\hline
\multicolumn{3}{c}{......} \\
\hline
Day 1 &Null* & \makecell[c]{\colorbox{red!8.25}{\parbox{\columnwidth}{{\strut{This is not bf6. It is a demo of dice current tech, probably taken from the last update of bf5. With less drug testing, less probation violations.}}}}} \\
\hline
Day 0 &Anger & \makecell[c]{\colorbox{red!30.00}{\parbox{\columnwidth}{{\strut{... I'm sure investigation will uncover solid evidence of a liberal conspiracy. It made me a mental case for a month after i had it. ...}}}}} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item *Null indicates that the probability of all emotions is less than 0.5. \\ $\#$Offensive words have been replaced
\end{tablenotes}
\end{threeparttable}
\label{Table: case study}
}
\end{table}
\section{ETHICAL CONSIDERATIONS}
For the protection of vulnerable individuals, privacy and ethical considerations are of paramount importance in the field of mental health. Using publicly accessible data collected via Twitter's official API, our study adheres to these stringent requirements. Our research utilized tweets obtained in accordance with Twitter's Privacy Policy, which informs users that the content they post on the platform, including their social profiles and tweets, is public and freely accessible to third parties. To protect individual privacy, we omitted usernames from our study and only provided the Tweet ID for download via the Tweet API.
\section{CONCLUSION}
The COVID-19 pandemic has lasted three years, but its negative impact still exists and tends to persist for a long time. One critical social problem is the mental health risk of COVID-19 patients: COVID-19 triggers a non-trivial increase in depression. To alleviate this problem, one crucial step is to detect depressed COVID-19 patients as early as possible and conduct early intervention. This paper targets this critical social problem, and its contributions are threefold: 1) we propose a novel research topic: predicting early depression risk with social media data; 2) we build a dataset of 10,656 Twitter users who are COVID-19 patients, of whom 1,776 are positive cases who emit depression signals after infection and 8,880 form the control group who do not become depressed after infection; for each positive user, we have the timestamps of COVID-19 infection and of depression-signal emergence, as well as all posted tweets; 3) we propose the Mood2Content model for detecting early depression risk. Mood2Content achieves an AUROC of 0.9317 in predicting depression risk two weeks ahead of time, outperforming baselines ranging from popular machine learning models to pre-trained large language models. This enables the feasibility of early intervention for depressed patients.
\section{LIMITATION}
Several potential limitations should be considered for this study. First, although we have taken numerous steps to identify eligible individuals as precisely as possible, it is possible that the dataset still contains some false positive cases. However, manual validation was performed to confirm the dataset's dependability, and the vast quantity of social media data helps to mitigate this issue. Second, we only encoded the first 256 tokens of daily tweets as sentence embeddings in order to meet the length limit of large language models and improve model efficiency. It may result in some information loss. Nonetheless, the threshold of 256 tokens covers 93\% of tweets, which mitigates the issue to some extent. Lastly, we did not use information about COVID-19 disease, such as symptoms, to enhance the model performance. We intend to investigate this in future studies.
\bibliographystyle{ACM-Reference-Format}
\section{\textbf{INTRODUCTION}}
Forgery and manipulation of multimedia such as images and videos containing facial information, generated by digital manipulation and in particular by DeepFake methods, have recently become a great public concern \cite{citrond}, \cite{rcellanjones}, especially for public figures. The famous term "DeepFake" refers to a deep-learning-based technique able to create fake videos by manipulating features or swapping the face of one person with the face of another. The term originated after a Reddit user named "deepfakes" claimed in late 2017 to have developed an algorithm that helped him transpose celebrity faces into adult videos \cite{bbcbitesize}. In addition to fake pornography, some of the more harmful uses of such fake content include fake news, hoaxes, financial fraud, and defamation of the victim. This has revitalized general media forensics, which is now dedicated to advancing the detection of facial manipulation in images and video \cite{swaminathan2008digital} \cite{korus2017digital} \cite{rossler2019faceforensics++}.\\
The efforts in fake face detection are built on the foundation of past research in biometric anti-spoofing and modern supervised deep learning \cite{neves2020ganprintr} \cite{dang2020detection}. The growing interest in manipulation detection is demonstrated by the increasing number of workshops at various top conferences, international projects such as MediFor funded by DARPA, and competitions such as the Media Forensics Challenge and the Deepfake Detection Challenge, launched by the National Institute of Standards and Technology (NIST) and Facebook, respectively. In the past, the number and realism of manipulations were limited by the lack of advanced tools, domain expertise, and the complex and time-consuming process. For example, early work in this domain \cite{bregler1997video} was able to modify lip motion using a different audio track, by making connections between the soundtrack and the shape of the subject's face. However, much has evolved since those experiments. Nowadays it is becoming very easy to synthesize non-existent faces or manipulate an existing face in an image or video. All of this is possible thanks to the accessibility of large-scale public data and advances in deep learning techniques that eliminate many manual steps, such as Autoencoders (AE) and Generative Adversarial Networks (GAN) \cite{kingma2013auto}, \cite{goodfellow2014generative}. As a result, much public software and many mobile applications (e.g., FaceApp) have been released, giving everyone the ability to create fake images and videos without any experience in the domain. Therefore, to counter such advanced and realistic manipulated content, large efforts are being carried out by the research community to design improved methods for face manipulation detection. \\
Over the past couple of years, huge steps forward have been made in the field of automatic video editing techniques, and great interest has been shown in methods for facial manipulation. For instance, it is nowadays possible to easily perform facial reenactment, i.e., transferring facial expressions between people. This makes it possible to change the identity of a speaker with almost no effort. Advances in these systems and tools for facial manipulation enable even users without any previous experience in digital arts to use them; indeed, code and libraries that work in an almost automatic fashion are more and more often open source. On the one hand, this technological advancement opens the door to uncharted territories; on the other hand, people are using these gifts in the worst possible ways for their own reasons. \\
In this paper, we consider MesoNet, ResNet-50, VGG-19, and Xception, and compare their characteristics to determine which of these networks is the most efficient and accurate with respect to parameters such as operation time, accuracy, loss, and the ability to perform on random data. Training and evaluation are performed on three datasets: Celeb-DF and Celeb-DF-v2, which have been proposed as public benchmarks, and DFDC, which was released as part of the DFDC Kaggle competition. Results show that the attention-based neural network modification helps the system outperform the baseline reported in the domain on all three datasets. Our paper contributes by comparing the performance of state-of-the-art neural networks such as MesoNet, VGG-19, ResNet-50, and Xception in this domain, by drawing conclusions from the results to advance media manipulation detection, and by providing a detailed evaluation of complex forgery detectors in various scenarios.\\
\section{\textbf{PROBLEM FORMULATION}}
Recent improvements in the field of deep learning have produced state-of-the-art neural network architectures such as the Xception network (sometimes also referred to as Extreme Inception), SENet, and others, which in turn have led to astonishing developments in machine learning and computer vision. Although the benefits of such inventions far outweigh the cons, there still exist drawbacks which, if not treated in time, can lead to major disarray in society as we know it. One such drawback is the creation of deepfakes: computer-generated fake images and videos which, as of today, flood one of the major sources of information, i.e., the Internet. If left untreated, this can lead to major problems, of which privacy violation and public defamation are a few. A recent Forbes article \cite{robtoews} claims that ``Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.'' Currently, the predominant use of deepfakes is for pornography. In June 2020, research \cite{adamsmith} indicated that 96 percent of all deepfakes online are pornographic, that nearly 100 percent of those cases target women, and that many actresses, such as Kristen Bell, have already suffered from it.\\
All of these incidents raise the question of what has been done to stop this, and the answer lies in deepfake detection. In layman's terms, deepfake detection is performed by neural networks specializing in detecting deepfakes, i.e., with deepfake detection we can determine whether a photo or video is fake or real. It must therefore remain an active topic of research so that we can filter fake content out of the Internet and once again make it reliable. Many tech giants, such as Facebook, have also taken initiatives to try to stop this misuse of neural networks, which is otherwise a wonderful technology. \\
\section{\textbf{RELATED WORK}}
In the last couple of years, several techniques for facial manipulation in media such as images and video have been successfully developed and made available to the public (e.g., FaceSwap, Face2Face, DeepFake). This enables anyone to easily edit faces in video sequences with incredibly realistic results and very little effort. Moreover, free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, has led to the generation of very realistic fake content, with its corresponding implications for society in this era of fake news. Likewise, deepfake detection is an important application of deep learning and machine learning that helps detect forgeries in media such as images and videos, and a wide range of research has already been done encompassing comprehensive studies and implementations of various popular algorithms. In \cite{bonettini2021video}, the authors tackle the problem of face manipulation detection in video sequences, targeting modern facial manipulation techniques; in particular, they study the ensembling of different trained Convolutional Neural Network (CNN) models. In the proposed solution, different models are obtained starting from a base network using two different concepts, attention layers and siamese training. They showed that combining these networks leads to promising face manipulation detection results on two publicly available datasets with more than 119,000 videos. In \cite{tolosana2020deepfakes} the authors survey other popular techniques for manipulating face images, including DeepFake methods, and methods to detect such manipulations. In particular, they reviewed four types of facial manipulation: entire face synthesis, identity swap (DeepFakes), attribute manipulation, and expression swap.
For each of them, they provided details regarding manipulation techniques and existing open-source databases, including a summary of results from those evaluations. \\
\section{\textbf{METHODOLOGY}}
The comparison of the neural networks (MesoNet, ResNet-50, VGG-19, and Xception) is based on the characteristic chart of each network on common grounds such as the dataset, the number of epochs, the complexity of the network, the accuracy of each network, the specification of the device used to execute the program (Ubuntu 20.04 LTS, 8 GB RAM, Intel Core i7 8th Gen processor, NVIDIA GTX 1050Ti GPU), and the runtime of the algorithm, under ideal conditions. \\
\begin{flushleft}
\textbf{\textit{A. DATASET}}
\end{flushleft}
Deepfake detection is an expansive research area that already contains detailed ways of implementation, including major learning datasets, popular algorithms, feature scaling, and feature extraction methods. Celeb-DF, Celeb-DF-v2, and DFDC are datasets containing real and manipulated videos of ordinary people and public figures. Due to hardware limitations, we used only a small part of each dataset. Celeb-DF and Celeb-DF-v2 are high-quality, large-scale, challenging datasets for deepfake forensics; they contain DeepFake videos of celebrities generated using an improved synthesis process. The DFDC dataset was created by companies collaborating on the deepfake detection problem, and it is by far the largest publicly available face swap video dataset, with over 100,000 total clips sourced from 3,426 paid actors, produced with several DeepFake, GAN-based, and non-learned methods. The portions we used contain: for Celeb-DF, a total of 1,171 videos, of which 376 are real and 795 are fake; for Celeb-DF-v2, 2,172 videos (890 real \& 1,282 fake); and for DFDC, 910 videos (362 real \& 548 fake). \\
\begin{flushleft}
\includegraphics[scale=0.30]{./datasetBar.pdf}
\end{flushleft}
\footnotesize Figure 1. Category-wise number of videos in each dataset that we have used. \\
\begin{center}
\includegraphics[scale=0.30]{./dataset.pdf}
\footnotesize Figure 2. Some random snapshots of videos from each dataset (Celeb-DF, Celeb-DF-v2, and DFDC). \\
\end{center}
\begin{flushleft}
\textbf{\textit{B. MESO NETWORK (MesoNet)}}
\end{flushleft}
This network is derived from well-performing classification networks that alternate layers of convolution and pooling with a dense network for classification. The neural network comprises a sequence of four convolution-and-pooling blocks, followed by a fully connected dense layer with one hidden layer in between. The convolutional layers use ReLU activation functions, which introduce non-linearities, and Batch Normalization \cite{ioffe2015batch} to regularize their output, which prevents the vanishing gradient problem; the fully connected layers use Dropout \cite{srivastava2014dropout} for regularization, which improves robustness and takes generalization to another level \cite{afchar2018mesonet}.
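Because the network alternates convolution and pooling, its spatial resolution shrinks predictably layer by layer. The following minimal sketch traces the side length of the feature maps through successive pooling layers; the pooling factors (2, 2, 2, 4) are those of the original Meso-4 and are an illustrative assumption here, applied to the 128x128 inputs used in this paper.

```python
def spatial_size_after_pooling(input_size, pool_factors):
    """Propagate the spatial side length through successive pooling layers.

    Assumes each pooling layer divides the side length exactly
    (stride == pool size, no padding).
    """
    size = input_size
    for factor in pool_factors:
        size //= factor
    return size

# Illustrative Meso-4-style pool factors (2, 2, 2, 4): a 128x128 input
# is reduced to a 4x4 feature map before the dense classification head.
print(spatial_size_after_pooling(128, [2, 2, 2, 4]))  # -> 4
```

This shows why such a shallow network stays cheap: the dense head only sees a tiny feature map.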
\begin{center}
\includegraphics[scale=0.37]{./mesoNetArchitecture.pdf}
\end{center}
\footnotesize Figure 3. The network architecture of Meso-4. Layers and parameters. \\
\begin{flushleft}
\begin{normalsize}
\textbf{\textit{C. RESIDUAL NETWORK (ResNet)}}
\end{normalsize}
\end{flushleft}
Residual Network, a.k.a. ResNet-50, is a variant of the ResNet model which consists of 48 convolution layers along with 1 MaxPool and 1 average pool layer. It requires 3.8 billion floating-point operations. Of all the residual network variants with different capabilities, this is the most widely used ResNet model, and we show the ResNet-50 architecture in detail in Figure 4. This framework makes it possible to train ultra-deep neural networks (DNNs): a network can now contain thousands of layers and still achieve great performance. ResNets were initially applied to the image recognition task, but, as mentioned in the original paper, the framework can also be used for non-computer-vision tasks to achieve better accuracy. One might argue that simply stacking more layers should also give better accuracy, obviating the need for residual learning; however, stacking more layers gives rise to the serious problem of vanishing/exploding gradients. That is why ResNet is used in this paper, so that we can assess its effectiveness on the deepfake detection problem \cite{opengenus1}.
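The core idea behind residual learning is the identity shortcut $y = F(x) + x$: when the learned residual $F$ is near zero, the block reduces to the identity mapping, so stacking many blocks does not degrade the signal. The toy NumPy sketch below (a hypothetical one-weight residual function, not the actual ResNet block) illustrates this property.

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """A toy residual block: y = F(x) + x, with F a single weighted non-linearity.

    The identity shortcut lets gradients bypass F, mitigating the
    vanishing-gradient problem described above.
    """
    return activation(weight * x) + x

x = np.array([0.5, -1.0, 2.0])
# With a zero weight, F(x) = tanh(0) = 0, so the block is exactly the identity.
y = residual_block(x, weight=0.0)
print(np.allclose(y, x))  # -> True
```

In the real ResNet-50, $F$ is a stack of convolutions, but the shortcut behaves the same way.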
\begin{center}
\includegraphics[scale=0.31]{./resnetArchitecture.pdf}
\end{center}
\footnotesize Figure 4. The architecture of the ResNet-50 with variable specification of the network.\\
\begin{flushleft}
\begin{normalsize}
\textbf{\textit{D. VISUAL GEOMETRY GROUP NETWORK (VGG-19)}}
\end{normalsize}
\end{flushleft}
The Visual Geometry Group network, a.k.a. VGG-19, is a variant of the VGG model which consists of 19 weight layers, comprising 16 convolution layers and 3 fully connected layers, along with 5 MaxPool layers and 1 SoftMax layer. There are other variants of VGG, such as VGG-11 and VGG-16. VGG-19 requires 19.6 billion floating-point operations. VGG is a deep CNN used to classify images \cite{opengenus2}. \\
\begin{center}
\includegraphics[scale=0.29]{./vggArchitecture.pdf}
\end{center}
\footnotesize Figure 5. The architectural design of VGG-19 Network. \\
~ \\
\begin{flushleft}
\begin{normalsize}
\textbf{\textit{E. XCEPTION NETWORK}}
\end{normalsize}
\end{flushleft}
The Xception neural network was created by Google; its name stands for Extreme Inception. It is built on a modified depthwise separable convolution and has shown even better results than Inception-v3. The original depthwise separable convolution is a depthwise convolution followed by a pointwise convolution, but in Xception the modified depthwise separable convolution is a pointwise convolution followed by a depthwise convolution. This modification is motivated by the inception module in Inception-v3 and changes both the order of operations and the presence or absence of non-linearity: due to this modified depthwise separable convolution, there is no intermediate ReLU non-linearity. The 14 modules are grouped into three groups, viz. the entry flow, the middle flow, and the exit flow, with four, eight, and two modules respectively. The final group, i.e., the exit flow, can optionally have fully connected layers at the end. Moreover, Xception without any intermediate activation has the highest accuracy. \\
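The appeal of depthwise separable convolutions is their parameter efficiency: a standard $k \times k$ convolution needs $k^2 \cdot C_{in} \cdot C_{out}$ weights, while the separable version needs only $k^2 \cdot C_{in} + C_{in} \cdot C_{out}$ (biases ignored). The sketch below compares the two for an illustrative layer; the channel counts are assumptions for the example, not taken from the Xception paper.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weights in a depthwise (k x k per input channel) plus pointwise (1 x 1) convolution."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 128 input channels, 256 output channels.
print(conv_params(3, 128, 256))            # -> 294912
print(separable_conv_params(3, 128, 256))  # -> 33920
```

For this layer the separable variant uses roughly 8.7x fewer weights, which is why Xception can afford 36 convolutional layers.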
\begin{center}
\includegraphics[scale=0.30]{./xceptionArchitecture.pdf}
\end{center}
\footnotesize Figure 6. The architectural design of Xception Network.
\begin{flushleft}
\begin{normalsize}
\textbf{\textit{F. OPTIMIZATION}}
\end{normalsize}
\end{flushleft}
TensorRT is an SDK for deep learning inference developed by NVIDIA. It contains an inference optimizer and a runtime capable of delivering significantly lower latency and higher throughput for deep learning inference applications. TensorRT-based applications can perform up to 40 times faster than CPU-only platforms during inference. With TensorRT, one can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyper-scale data centers. TensorRT is built on CUDA®, NVIDIA's parallel programming model, which enables models to efficiently utilize GPU resources, while also enabling inference to be optimized using libraries and development tools for artificial-intelligence-related tasks. TensorRT provides INT8 and FP16 optimizations for production deployments of deep learning inference applications such as video streaming, speech recognition, recommendation, fraud detection, and natural language processing, while retaining accuracy close to that of full floating-point precision. By reducing inference precision, TensorRT significantly reduces application latency, which is a requirement for many real-time services as well as autonomous and embedded applications \cite{tensorrt}.
\begin{flushleft}
\begin{normalsize}
\textbf{\textit{G. VISUALIZATION}}
\end{normalsize}
\end{flushleft}
In this research, we have used multiple datasets (i.e., Celeb-DF, Celeb-DF-v2, and a part of the DFDC dataset, due to hardware limitations) to compare different neural networks (i.e., MesoNet, ResNet-50, VGG-19, and Xception) based on training \& testing accuracy, training \& testing loss, training time, and inference time on CPU, on GPU, \& after TRT optimization. To visualize the information obtained by the detailed analysis of the algorithms, we have used line graphs and tabular charts produced with matplotlib, which give us precise visuals of the progress of the algorithms during classification. Graphs are provided at each vital part of the program to bolster the outcome. \\
\section{\textbf{IMPLEMENTATION}}
To compare the networks based on accuracy, loss, training time, complexity, and inference time, we have used four different classifiers: \\
\begin{itemize}
\item MesoNet Classifier
\item ResNet-50 : Residual Neural Network
\item VGG-19 : Visual Geometry Group Network
\item Xception
\end{itemize}
After training the neural networks, we optimized the models using TensorRT to obtain the minimum inference time with maximum accuracy. All of this information is encapsulated in Table 1. \\
\begin{table*}
\centering
\caption{Comparison analysis of the different networks.}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Network Name} & \multicolumn{2}{c|}{Training} & \multicolumn{2}{c|}{Testing} & \multicolumn{3}{c|}{Inference Time} \\
\cline{2-8}
& ~~~ Accuracy ~~~ & ~~~ Loss ~~~ & ~~~ Accuracy ~~~ & ~~~ Loss ~~~ & ~~~ CPU ~~~ & ~~~ GPU ~~~ & ~~~ TRT Op ~~~ \\
\hline
MesoNet & 73.189\% & 25.83 & 72.39\% & 23.92 & 194 ms & 180.7 ms & 64.6 ms \\
\hline
ResNet-50 & 75.26\% & 6.55 & 74.12\% & 15.05 & 1978 ms & 1142.2 ms & 789 ms \\
\hline
VGG - 19 & 74.92\% & 1.06 & 73.28\% & 3.39 & 302.2 ms & 254.3 ms & 113.9 ms \\
\hline
Xception & 77.83\% & 11.69 & 75.99\% & 16.11 & 1080 ms & 1002.1 ms & 976.2 ms \\
\hline
\end{tabular}
\\
\end{table*}
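The inference times in Table 1 make the effect of the TensorRT optimization directly computable. A minimal sketch, using only the values reported in the table:

```python
# Inference times (ms) from Table 1: (CPU, TRT-optimized).
times = {
    "MesoNet":   (194.0,  64.6),
    "ResNet-50": (1978.0, 789.0),
    "VGG-19":    (302.2,  113.9),
    "Xception":  (1080.0, 976.2),
}

# Speedup of the TRT-optimized model over CPU-only inference.
for name, (cpu, trt) in times.items():
    print(f"{name}: {cpu / trt:.2f}x speedup with TensorRT")
```

The lightweight networks benefit the most: MesoNet gains about a 3x speedup and VGG-19 about 2.65x, while Xception improves by only about 1.11x.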
We discuss the implementation of each algorithm explicitly below to create a flow of this analysis for a fluent and accurate comparison.\\
\begin{flushleft}
\textbf{\textit{I. DATASET HANDLING \& PRE-PROCESSING}}
\end{flushleft}
The datasets we used in this paper (i.e., Celeb-DF, Celeb-DF-v2, and DFDC) are quite large, and due to hardware limitations we were unable to utilize them completely, so we took small chunks of each. The challenge is then to train the neural networks on these video datasets, so we converted the videos into face images (using the dlib library to extract faces from frames). Overall, we have 51,036 images divided into two categories: real (19,536 images) and fake (31,500 images). Since we cannot hold all this data in memory during training, we used TensorFlow's ImageDataGenerator to create batches of our dataset while training the networks. Pre-processing is a crucial step in machine learning which focuses on improving the input data by reducing unwanted impurities and redundancy. To simplify and break down the input data, we reshaped all the images in the dataset to 2-dimensional images, i.e., (128, 128, 1). Each pixel value lies between 0 and 255, so we normalized these values by dividing them by 255.0 so that the input features range between 0.0 and 1.0. \\
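The normalization step above can be sketched in a few lines of NumPy; the batch below is random dummy data standing in for the extracted face crops (the shapes are the paper's, the data is not).

```python
import numpy as np

# A hypothetical batch of 8-bit face crops, shaped (batch, 128, 128, 1).
batch = np.random.randint(0, 256, size=(4, 128, 128, 1), dtype=np.uint8)

# Scale pixel values from [0, 255] to [0.0, 1.0], as done before training.
normalized = batch.astype(np.float32) / 255.0

print(normalized.min() >= 0.0 and normalized.max() <= 1.0)  # -> True
```

In the actual pipeline this rescaling is applied on the fly to each batch produced by the data generator.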
\begin{flushleft}
\textbf{\textit{II. MESO NETWORK}}
\end{flushleft}
The MesoNet-4 used in this paper is a shallow convolutional neural network made for the sole purpose of detecting video forgery. In \cite{afchar2018mesonet}, Meso-4 and MesoInception-4 are classes capable of performing binary classification on a dataset. In this paper, we have used MesoNet-4 for the classification of the deepfake datasets. Various libraries and sub-modules, such as TensorFlow, tensorflow.keras.preprocessing, and matplotlib, have been used for the implementation. First, we downloaded the datasets, then loaded them using the TensorFlow ImageDataGenerator, pre-processing the images while feeding them to the network in batches to reduce memory usage. After this, we plotted some samples of the dataset, followed by normalization and scaling of features. Finally, we created our experimental model. \\
\begin{flushleft}
\textbf{\textit{III. RESIDUAL NETWORK - 50}}
\end{flushleft}
The implementation of deepfake detection with ResNet-50 is done with the help of the TensorFlow module, creating a model of the Sequential class and adding the respective built-in ResNet model from TensorFlow, taking an image of 128x128 pixels as input. After creating the sequential model, we added a global average pooling layer followed by a Dense layer. We used a neural network with 50 hidden layers, multiple max-pooling layers, and an output layer with 1 unit (i.e., the total number of labels). The number of units in the hidden layers is standard. The input to the network is the 16,384-dimensional array converted from the 128×128 image. We used the Sequential model to build the network; in a Sequential model, layers are simply stacked by adding the desired layer one by one. We used the Dense layer, also called a fully connected layer, with the sigmoid activation function, which is a common choice for binary classification models.
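The classification head described above (global average pooling followed by a single sigmoid unit) can be sketched framework-free in NumPy; the 4x4x2048 backbone output shape is an illustrative assumption, not a value reported in the paper.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to its mean, yielding one value per channel."""
    return feature_maps.mean(axis=(1, 2))

def sigmoid(z):
    """Squash a real-valued score into a (0, 1) real/fake probability."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical backbone output: batch of 2, 4x4 spatial grid, 2048 channels.
features = np.ones((2, 4, 4, 2048))
pooled = global_average_pool(features)
print(pooled.shape)  # -> (2, 2048)

# The final Dense(1, sigmoid) layer maps each pooled vector to a score;
# a zero pre-activation corresponds to maximal uncertainty.
print(sigmoid(np.array([0.0])))  # -> [0.5]
```

In the real model, a learned weight vector sits between the pooled features and the sigmoid; the sketch only shows the shapes involved.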
\begin{flushleft}
\textbf{\textit{IV. VISUAL GEOMETRY GROUP NETWORK - 19 }}
\end{flushleft}
The model implementation is done using TensorFlow as well. From it, we used the Sequential class, which allowed us to create a model layer by layer. The dimensions of the input image are set to 128 (height), 128 (width), and 3 (number of channels). Next, we added the standard VGG-19 model to this sequential model. The VGG-19 model consists of 19 layers with multiple pooling layers followed by fully connected layers. The pooling layer \cite{machinelearningmastery} reduces the dimensionality of the image and the computation in the network; we employed max-pooling, which keeps only the maximum value from each pool. The convolution layer convolves a matrix over the input data across its height and width and extracts features from it. This matrix is called a filter or kernel, and the values in the filter matrix are weights; we used the standard filters of VGG-19. The stride determines the number of pixels shifted at each step. Convolving a filter over the input data gives activation maps whose dimension is given by the formula $((N + 2P - F)/S) + 1$, where $N$ is the dimension of the input image, $P$ the padding, $F$ the filter dimension, and $S$ the stride. The model returns a probability distribution over the classes, and the class with the maximum probability is the output.
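The activation-map formula can be verified with a one-line helper; the parameter values below are illustrative checks, assuming integer-divisible strides.

```python
def conv_output_size(n, p, f, s):
    """Spatial side length of an activation map: ((N + 2P - F) / S) + 1."""
    return (n + 2 * p - f) // s + 1

# A 3x3 filter with stride 1 and padding P = 1 preserves the 128-pixel side,
# which is the 'same'-padding configuration used throughout VGG.
print(conv_output_size(128, 1, 3, 1))  # -> 128
# Without padding, the map shrinks by F - 1 pixels.
print(conv_output_size(128, 0, 3, 1))  # -> 126
```

The same formula also governs the max-pooling layers (with $F$ and $S$ set to the pool size).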
\begin{flushleft}
\textbf{\textit{V. XCEPTION NETWORK}}
\end{flushleft}
The Xception network consists of 36 convolutional layers, and its implementation is also done using TensorFlow. From TensorFlow, we used the Sequential class; the dimensions of the input image are set to 128 (height), 128 (width), and 3 (number of channels). Next, we loaded the built-in standard Xception model. The depthwise separable convolution layer is what powers Xception, and the architecture uses it heavily. This type of convolution is similar to an extreme version of the Inception block but differs slightly in its operation. The original work examined the effect of having an activation on both the depthwise and pointwise steps of the depthwise separable convolution (DSC) and observed that learning is faster when there is no intermediate activation. For this network, we followed the standard practice \& configuration for training the model.
\section{\textbf{RESULT}}
\begin{small}
After implementing all four networks, MesoNet, ResNet-50, VGG-19, and Xception, we compared their accuracy, loss, training time, and inference time on both CPU and GPU. Moreover, we also applied TRT optimization to the network models and show the resulting difference in performance with the help of experimental graphs for a perspicuous understanding. We took into account the training and testing accuracy of all the models stated above. Generally, the running time of an algorithm depends on the number of operations it performs, so we trained our large deep learning models (Xception, ResNet-50, VGG-19) for up to 10 epochs (due to hardware limits) and the MesoNet models for up to 20 epochs.
\begin{figure}
\centering
\subfloat{{\includegraphics[scale=0.25]{./mesonetAccuracy.pdf} }} \\
\footnotesize Figure 7. The transition graph of training accuracy with increasing number of epochs in MesoNet
\qquad
\subfloat{{\includegraphics[scale=0.25]{./mesonetLoss.pdf} }} \\
\footnotesize Figure 8. The transition graph of training loss with increasing number of epochs in MesoNet
\qquad
\subfloat{{\includegraphics[scale=0.25]{./resnetAccuracy.pdf} }} \\
\footnotesize Figure 9. The transition graph of training accuracy with increasing number of epochs in ResNet-50
\qquad
\subfloat{{\includegraphics[scale=0.25]{./resnetLoss.pdf} }} \\
\footnotesize Figure 10. The transition graph of training loss with increasing number of epochs in ResNet-50
\end{figure}
\begin{figure}
\centering
\subfloat{{\includegraphics[scale=0.25]{./vggAccuracy.pdf} }} \\
\footnotesize Figure 11. The transition graph of training accuracy with increasing number of epochs in VGG-19
\qquad
\subfloat{{\includegraphics[scale=0.25]{./vggLoss.pdf} }} \\
\footnotesize Figure 12. The transition graph of training loss with increasing number of epochs in VGG-19
\qquad
\subfloat{{\includegraphics[scale=0.25]{./xceptionAccuracy.pdf} }} \\
\footnotesize Figure 13. The transition graph of training accuracy with increasing number of epochs in Xception
\qquad
\subfloat{{\includegraphics[scale=0.25]{./xceptionLoss.pdf} }} \\
\footnotesize Figure 14. The transition graph of training loss with increasing number of epochs in Xception
\end{figure}
Furthermore, we visualized the performance of the models and how they improved their accuracy and reduced the error rate with respect to the number of epochs; the corresponding plots are shown in Figures 7-14.
\end{small}
~ \\
\section{\textbf{CONCLUSION}}
~ \\
\begin{small}
In this research, we have surveyed four networks for the task of deepfake detection using fractions of the Celeb-DF, Celeb-DF-v2, and Deep Fake Detection Challenge (DFDC) datasets, and compared them based on their characteristics to appraise the most accurate \& efficient model among them. MesoNet is a shallow network \& one of the most basic classifiers in this group, which is why it is faster than the other networks; in this case, the training accuracy it achieves is also on par with the deeper networks, but due to its simplicity it cannot classify complex and well-crafted deepfakes as accurately as the other networks. We found that ResNet-50 \& VGG-19 gave better results than MesoNet due to their larger number of feature extraction layers. When ResNet-50 and VGG-19 are compared with each other, the accuracy and loss of both lie in the same range, but due to its greater number of layers, ResNet-50 can perform better than VGG-19 for deepfake detection. Finally, the Xception network is unique in its way, because its modified depth-wise separable convolution makes the network both flexible \& robust for this particular problem. That is why it delivered better results than the other standard networks reviewed in this survey. The drawbacks of such complex networks are that they require more time to train, their inference time is much longer, and a highly effective dataset and higher-end hardware are required, although the inference time can be significantly reduced with TensorRT (https://developer.nvidia.com/tensorrt) optimization. \\
At this point, MesoNet would only be suggested if the available hardware is low-end and in scenarios where inference time is more important than accuracy. However, given the results of this research, the VGG-19 architecture is the most preferable for low- to medium-end hardware: not only does it provide a considerably smaller inference time than the relatively bulky networks (ResNet-50 and Xception), but it is also easier on the hardware, making it a more viable choice than MesoNet. For scenarios with no hardware limitations, the Xception network is the most viable option; as this research concluded, it outperforms the rest of the options considered here by a considerable margin, at the cost of a considerable amount of time for both training and inference. Finally, in the niche scenario where Xception proves particularly hard on the available resources while the accuracy VGG-19 provides is not up to standard, ResNet-50 will prove to be the best option of all. \\
\end{small}
\section{\textbf{FUTURE ENHANCEMENT}}
\begin{small}
The future development of applications based on deep learning algorithms is practically boundless. In the future, we can work on a hybrid algorithm with separate attention-based layers to increase the focus on tampered media beyond the current set of algorithms, using more data to achieve solutions to the problems discussed.
The applications of these algorithms extend from the general public to high-level authorities. Building on the comparison above and on future development, we can attain high-functioning applications usable by social media companies, classified or government agencies, as well as ordinary people, to verify whether media has been tampered with and to monitor the virtual space. Advancement in this field can help create an environment of safety, awareness, and comfort by deploying these algorithms in day-to-day applications and in high-level applications (i.e., at the corporate or government level). Applications based on artificial intelligence and deep learning are the future of the technological world because of their accuracy and advantages for many major problems. \\
\end{small}
\section{\textbf{ACKNOWLEDGMENT}}
\begin{small}
There are several people without whom this research work would not have been feasible. Their high academic standards and personal integrity provided us with continuous guidance and support. We owe a debt of sincere gratitude, a deep sense of reverence, and respect to our guide and mentor Prof. Rashid Sheikh, Associate Professor, AITR, Indore for his motivation, sagacious guidance, constant encouragement, vigilant supervision, and valuable critical appreciation throughout this research, which helped us to complete it. We express profound gratitude and heartfelt thanks to Dr. Kamal Kumar Sethi, HOD CSE, AITR Indore for his support, suggestions, and inspiration for carrying out this project. We are very thankful to the other faculty and staff members of the CSE Dept, AITR Indore for providing us all support, help, and advice during this research. We would be failing in our duty if we did not acknowledge the support and guidance received from Dr. S C Sharma, Director, AITR, Indore whenever needed. We are grateful to our parents and family members who have always loved and supported us unconditionally. To all of them, we want to say ``Thank you'', for being the best family that one could ever have and without whom none of this would have been possible.
\end{small}
\bibliographystyle{ieeetr}
\section{Introduction}
The third Formula Student Driverless (FSD) competition was held at the Hockenheimring in Germany from the 5th to the 11th of August 2019. The competition was introduced in 2017 and extended the previously existing combustion and electric classes. Since then, KA-RaceIng\footnote{\url{https://www.ka-raceing.de}}, the Formula Student team of the Karlsruhe Institute of Technology\footnote{\url{http://www.kit.edu}} (KIT) is competing in all three classes. Meanwhile, the driverless series has become a research platform for cutting edge technology in the area of autonomous driving.
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{Graphics/KIT19D_Zenker.jpg}
\caption{\small{The KIT19d. Autocross, Trackdrive and Skidpad winner in Formula Student Germany 2019. First place overall in Formula Student Spain 2019. Photo by Zenker, ©FSG.}}
\label{fig:KIT19_FSG}
\end{figure}
The competition consists of four static and four dynamic disciplines \cite{fsg2019rules}. Static disciplines challenge the student teams beyond the development of an autonomous race car and evaluate their knowledge in terms of hypothetical business and cost plans as well as their engineering know-how. The dynamic disciplines test the vehicle's performance and reliability under high longitudinal and lateral accelerations, as well as the system's ability to race on unknown tracks. As shown in \prettyref{fig:KIT19_FSG}, the vehicles race without a human fallback driver. The boundaries of the race track are defined by yellow traffic cones to the right and blue ones to the left, which must be identified autonomously by the system.
This paper introduces the autonomous system (AS) software design of the KIT19d\footnote{\url{https://www.ka-raceing.de/19d}}. Arguably the most challenging discipline of the FSD competition is the Autocross, in which the vehicle has to complete a complex and unknown course as fast as possible. We believe that a vehicle that is competitive in the Autocross event will also be competitive in the other dynamic disciplines. Therefore, much attention was directed at the performance of the system in the Autocross event. See \url{https://youtu.be/sxqkt\_ydOkY?t=3155} for the run at the Hockenheimring in Germany. Furthermore, \url{https://youtu.be/h22J8YzNdjo} provides a visualization of the mapping and planning process during this run.
At the beginning of the project, the following main goals for the AS software have been set:
\quad\textit{Modularity} enables the development of a well-structured software architecture that is a sustainable base for future competition seasons.
\quad\textit{Reliability}
is the major goal behind each concept and design decision as it is the key to success in the Formula Student Driverless competition.
\quad\textit{Efficient Design}
enables a small team to develop a functional AS despite limited resources and leads to a lightweight system that is easily surveyed and tested.
\quad\textit{Performance}
improvement is the driving force behind most new developments. Provided these developments do not deteriorate reliability, they contribute to high scores in static and dynamic disciplines. \\
This paper is structured as follows: \prettyref{sec:methods} outlines the development methods we found to work well in the context of the limited resources of Formula Student teams. Sections \ref{sec:sys_architecture} - \ref{sec:motion_planning} present the technical features of the KIT19d's AS where \prettyref{sec:sys_architecture} covers the hardware and software architecture. \prettyref{sec:perception} presents the perception module, \prettyref{sec:slam} outlines localization and mapping, and \prettyref{sec:motion_planning} describes methods used for motion planning and control. Performance evaluations of the resulting system can be found in \prettyref{sec:results} and conclusions and an outlook are provided in \prettyref{sec:conclusion}.
\section{Related Work}
Several Formula Student teams have published overview papers describing the software stack developed for their autonomous vehicles, such as the teams from Zurich \cite{valls2018design, kabzan2019amz}, Vienna \cite{zeilinger2017design}, and Beijing \cite{jun2018autonomous}. Autonomous racing cars have also been developed for other competitions, including the DARPA Grand Challenge \cite{thrun2006stanley}, Roborace \cite{heilmeier2019minimum} and the Carolo Cup \cite{nolte2018carolo}. For the sake of brevity we omit a deeper literature review here, but we will point to relevant publications for the methods we use in the following sections.
\section{Methods}\label{sec:methods}
In Formula Student, race cars are developed in less than a year. Thus, strategies to evaluate concepts rapidly and efficient methods for testing and validating the results are the keys to fast improvements. However, most models employed in larger companies are not compatible with a Formula Student team structure. A limited amount of team members, time, and the lack of experience require workflows that can be adapted in close to no time and do not produce a lot of additional workload. To ensure flexibility in the task assignment, our software development process contained elements from SCRUM \cite{SCRUM}. Weekly reviews allowed constant tracking of progress and, if unavoidable, the relocation of resources.
When going into a new competition season, the first task is to identify the modules that need to be improved and to allocate the resources to do so. Reviewing previous development cycles, vulnerabilities can be determined and addressed. Testing processes must be tailored for all components whilst balancing their complexity and the level of system integration. More specifically, some modules require complete simulations while others can be tested with recorded data. For the development of the motion planning and control module, for instance, we considered a simulation to be necessary. In this case, the simulation is required to generate a feedback loop around a model that is an accurate representation of the real vehicle behavior. Existing vehicle dynamic models were used and expanded with the required interfaces of the AS. Based on rviz\footnote{3D visualization tool from the open-source framework ROS \cite{Quigley2009ROSAO}}, a 3D visualization of the resulting vehicle trajectory facilitates the interpretation of simulation results and allows for fast parameter evaluation and tuning. In contrast, for the perception, the localization, and the mapping modules, only replays of recorded data from real sensors were used for debugging purposes instead of a full model-based simulation. For these modules, it is very demanding to create a simulation that is as accurate as real data. A drawback of this approach is that the system integration only happens on the vehicle. In general, we want to emphasize that these test routines were an important prerequisite for the success of our developments. Test environments for all subsystems should be developed early. Besides being user-friendly, these should have well-defined interfaces to ensure repeatability and to avoid brute-forcing solutions.
Past experiences have shown that coding errors and bad coding practices lead to delays in the deployment process. Consequently, a proper git workflow including code style rules, reviews, and automated tests is recommended.
\section{System Architecture}\label{sec:sys_architecture}
\begin{figure*}
\centering
\includegraphics[width=.99\textwidth]{Graphics/Hardware_architecture.png}
\caption{\small{Overview of the AS architecture of the KIT19d.}}
\label{fig:ASoverview}
\end{figure*}
The hardware platform is provided by the KIT15e\footnote{\url{https://www.ka-raceing.de/15e}}, an all-wheel-drive electric vehicle. It was retrofitted with the necessary components for autonomous racing, such as an emergency brake system (EBS), four lidars, three cameras, and a steering actuator. The driver, who would normally provide the control inputs was replaced by an autonomous system control unit (ACU), that perceives the environment, plans an appropriate trajectory, and controls the longitudinal and lateral motion of the vehicle.
\subsection{Overall Pipeline}
An overview of the complete autonomous system is given in \prettyref{fig:ASoverview}. To acquire extensive information about the track layout early-on, the car is equipped with both forward and rearward facing cameras and lidars, as shown in \prettyref{fig:fow}. This enables the detection of objects at ranges up to \SI{42}{\meter} around the vehicle to provide adequate information for the mapping and path planning. To complement the information generated by the perception system, an inertial measurement unit (IMU) and wheel speed sensors were added to the car. The output of these sensors is forwarded to the localization and mapping algorithm, where the data is combined with detected objects from the perception system and is used to estimate the vehicle pose and to create a map of the environment. During a fully autonomous race on an unknown track, as in the Autocross event, the desired vehicle path is continuously planned on a growing map.
The planned path and the estimated vehicle state are provided to the longitudinal and lateral controllers, which calculate the desired motor torques and the steering angle. The main control unit (MCU) provides the interface between the AS and the electronic infrastructure of the base vehicle. It controls all low-level features of the car, such as brake lights and fans, implements all safety checks necessary for rules compliant operation, and forwards the torque and steering request to the inverter and the steering controller.
\begin{figure}[ht]
\centering
\includegraphics[width=.99\linewidth]{Graphics/perception_fow.png}
\caption{\small{Field of view of the cameras (red) and the lidars (yellow) of the KIT19d.}}
\label{fig:fow}
\end{figure}
\subsection{Autonomous Control Unit (ACU)}
We decided to use the robot operating system (ROS) \cite{Quigley2009ROSAO} on our ACU, to manage the increasing complexity of the architecture, and to follow the goal of a modular structure. This allows for rapid advancements in the overall system by exchanging single blocks with enhanced versions as all interfaces are defined in the beginning of the development process. ROS also facilitates software deployment by providing numerous tools for debugging, system analysis, visualization, recording as well as replaying sets of data. The software stack was designed to run on Ubuntu 18.04, within the ROS melodic framework. The ACU consists of commercially available consumer-grade computer hardware, specifically, an Intel i7-9700k CPU, an ASRock x370 Mini-ITX mainboard with \SI{32}{\giga\byte} DDR4 RAM, a Samsung 970 EVO solid-state drive (SSD) and a CAN-Interface to communicate with the MCU. Note that we do not require a dedicated GPU or TPU. Three cameras are attached to the ACU via USB 3.0. The four lidars are connected via Ethernet.
\subsection{Synchronization}
To exploit the full potential of our lidar concept, the point clouds of the individual lidars have to be merged before they are further interpreted. Therefore, the scans must be started simultaneously. For this purpose, we have developed a control unit (sync-ECU) that provides a common time base to the lidars by generating a synchronization pulse. It also allows us to trigger them asynchronously, which effectively doubles the sample rate in the areas where the field of view of two lidars overlaps. The sync-ECU and the ACU synchronize their clocks using an adapted version of the precision time protocol (PTP) \cite{ptp} over CAN. The four lidars are connected to the ACU via Ethernet, which introduces a significant but non-deterministic latency. When a new scan is received by the ACU, it must therefore first be assigned to the other scans from the same time step. Once the system is initialized, the correct assignments can be determined comparing the scan counter of the individual lidars. However, the counters can differ at power-up and this difference has to be determined upon initialization. This is done by searching for a series of consecutive scans where the latency of each scan is below the empirically determined average latency of the system. When a series is found, the counter offsets are calculated. Note that this algorithm assumes that the average latency is lower than the scan rate of the lidar. The latency can be determined using the timestamps of the start of a scan published by the sync-ECU and the time recorded by the ACU when receiving a scan.
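The counter-offset initialization described above can be sketched as follows. The constants, data layout, and function names are illustrative assumptions, not the actual firmware interface: each received scan is assumed to carry the lidar's local scan counter and its measured latency (ACU receive time minus the sync-ECU start-of-scan timestamp).

```python
AVG_LATENCY = 0.010   # assumed empirically determined average latency [s]
RUN_LENGTH = 3        # consecutive low-latency scans required per lidar

def find_counter_offsets(scans, reference_id=0):
    """Return scan counter offsets relative to a reference lidar.

    `scans` maps lidar id -> list of (counter, latency) tuples in
    arrival order. An offset is accepted once RUN_LENGTH consecutive
    scans all arrived with below-average latency, as described above.
    """
    stable_counter = {}
    for lidar_id, history in scans.items():
        run = 0
        for counter, latency in history:
            run = run + 1 if latency < AVG_LATENCY else 0
            if run >= RUN_LENGTH:
                stable_counter[lidar_id] = counter
                break
    ref = stable_counter[reference_id]
    return {lid: c - ref for lid, c in stable_counter.items()}

scans = {
    0: [(100, 0.004), (101, 0.005), (102, 0.006)],
    1: [(205, 0.020), (206, 0.004), (207, 0.005), (208, 0.006)],
}
print(find_counter_offsets(scans))  # {0: 0, 1: 106}
```

Once the offsets are known, every incoming scan can be assigned to its time step by subtracting the offset from its local counter.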
\section{Perception}\label{sec:perception}
Intuitively, race cars have to act in a highly complex environment. Navigating the race track requires knowledge about the car's environment, that has to be perceived ``on the fly''. Building such a perception system includes the detection of relevant features to perceive the track's borders. For a Formula Student race, an obvious choice are the traffic cones used to confine the race track. Since all of these features can be considered static, their location relative to the car's position can be used to subsequently localize the car within the track.
Our system uses a variety of sensors for perception. The most important ones are multiple monocular cameras and lidars. Lidar sensors create precise but sparse range measurements while camera sensors capture dense image information similar to the human eye. By using a combination of both sensors, we can accurately detect landmarks with high confidence. This section discusses the use of both the lidar and camera pipeline to locate traffic cones, their fusion, and interfaces between the car's perception and other subsystems.
\subsection{Overall Perception Pipeline}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Graphics/perception_pipeline}
\caption{\small{Overall perception pipeline: Lidar range measurements in the form of 3D point clouds and camera images are fed into the perception system. The lidar points are used to generate landmark proposals that are tracked in a global map (see \prettyref{sec:slam}). All proposals are validated against the camera images.}}
\label{fig:perception}
\end{figure}
In principle, landmarks can be detected with either of the two sensor systems. This creates a robust system because each subsystem works independently and extracts the same target information. Such multi-sensor setups rely heavily on the accuracy of all subsystems to achieve good results since all measurements are passed on without external validation. As an improvement, all sensor outputs are cross-validated against other systems. Even fusion of raw sensor data is possible in some cases. Localization through the detection of landmarks requires a precise position estimate of these landmarks. In contrast to lidar, this is hard to achieve with monocular cameras. Here, a projection from the two-dimensional image plane into 3D space has to be done. A homography between the image plane and the ground plane of the track can be calculated, but the accuracy greatly decreases with the distance between sensor and object, due to nonlinear distortions and discretization of positional information.
We bypassed this issue by restricting the information retrieval of the cameras to their respective image planes. We use accurate lidar detections to generate proposals for landmarks. These proposals are passed to the mapping algorithm (\prettyref{sec:slam}) to store them on a global map. This enables a reliable tracking of multiple landmarks. Landmark proposals are then validated inside the respective camera image. Therefore, the camera sensors work as a validation-device. Furthermore, the camera information is used to infer additional information such as the color of the landmarks. The overall pipeline is depicted in \prettyref{fig:perception}.
Using such an architecture has multiple advantages. We combine precise position measurements given by the lidar sensors with abstract texture and color information of the cameras. Not relying on positional information of the cameras increases the overall system efficiency since no complex object detection techniques need to be applied. Therefore all of the computations can be performed on a standard CPU, which removes the need for an expensive, power consuming GPU. Validating proposals in pixel-space also removes the necessity of a projection to global coordinates.
\subsection{Landmark Proposals}
As outlined above, we use only the lidar sensors to generate landmark proposals.\\
The lidars capture a set of $N_{\text{raw}}$ measured points
\begin{equation}
P=\left\lbrace \mathbf{p}_i=\left(x_i,y_i,z_i\right)^{\mathpalette\@transpose{}} \: | \: i=1,2,\ldots,N_{\text{raw}}\right\rbrace,
\end{equation}
called a point cloud $P$. A series of transformations is applied to transform all lidar measurements into one frame of reference. We can state that only a subset of $P$ actually belongs to the desired landmarks, while other points should be discarded. This is done by filtering of the point cloud $P$ with adequate assumptions about the position of the landmarks, their size and the expected distribution of lidar points on the landmarks.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Graphics/lidar_pointcloud}
\caption{\small{Landmark detections: Black points show raw lidar points, red points are validated landmarks.}} \label{fig:detections}
\end{figure}
Analysis of the point density of $P$ showed that landmarks always occur in regions with a high point density. Therefore, a spatial clustering of all points was applied to extract features from the raw point cloud (see \prettyref{fig:detections} for examples). We use the DBSCAN method \cite{MartinEster.1996} to extract features from $P$ according to their density. This leaves us with a set $C$ of $N_{\text{cluster}} \leq N_{\text{raw}}$ clusters of points. Each cluster $c_j$ is a subset of $P$, $c_j \subseteq P, \: \forall j\in\left\lbrace 1,\ldots, N_{\text{cluster}}\right\rbrace$. This step also removes noise contained in $P$. For example, many ground detections are classified as noise because of their sparse occurrence. Each cluster $c_j$ includes information about the represented object. The position for each cluster is given by its centroid:
\begin{equation}
\boldsymbol{\mu}_j = \boldsymbol{\mu}\left( c_j \right) = \frac{1}{\left| c_j \right |} \sum_{\boldsymbol{p} \in c_j} \boldsymbol{p}.
\end{equation}
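As an illustration of the clustering step, a naive, unoptimized DBSCAN over 2D points might look like the sketch below. The actual pipeline runs on the merged 3D point cloud and presumably uses an optimized implementation; `eps` and `min_pts` here are invented for the example.

```python
import math

def dbscan(points, eps, min_pts):
    """Naive DBSCAN: returns a list of clusters (each a list of points).
    Points not assigned to any cluster are treated as noise."""
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    clusters = []
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # sparse region, e.g. ground hits
            continue
        cluster_id = len(clusters)
        labels[i] = cluster_id
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster_id      # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            if len(neighbors(j)) >= min_pts:  # core point: keep expanding
                seeds.extend(neighbors(j))
        clusters.append([points[k] for k in range(len(points))
                         if labels[k] == cluster_id])
    return clusters

# two dense cone-like clusters plus one isolated noise point
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 0)]
clusters = dbscan(pts, eps=0.5, min_pts=3)
print(len(clusters))  # 2
```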
All traffic cones are of similar size. We exploit this fact by thresholding the variance of the cluster points. The empirical variance for one cluster is
\begin{equation}
\mathbf{Var}\left(c_j\right) = \frac{1}{\left| c_j \right | - 1}\sum_{\boldsymbol{p} \in c_j}\left(\boldsymbol{p}-\boldsymbol{\mu}_j\right)^2.
\end{equation}
Setting maximum values for all directions removes large, high-variance clusters.
Another heuristic we employ considers the neighbourhood of each cluster. We set restrictions on the distance to the nearest neighbour of each cluster. This distance is defined as the distance between the centroids of the considered clusters. This can be done because we already preselected clusters with approximately equal variance. We use the epsilon-neighbourhood \cite{MartinEster.1996} for each cluster,
\begin{equation}
N\left(c_j, \varepsilon \right) = \left\lbrace c_k \in C \: | \: \left\Vert \boldsymbol{\mu}\left(c_j \right) - \boldsymbol{\mu}\left(c_k \right) \right\Vert^2 < \varepsilon \right\rbrace
\end{equation}
as a measure and calculate $N\left(c_j, \varepsilon \right)$ for two different thresholds $\varepsilon_1>0$ and $\varepsilon_2 > \varepsilon_1$. The neighbourhood defined by $\varepsilon_1$ describes an inner region, where we allow other clusters. This is necessary to account for cases where multiple clusters describe one object of interest. Above the threshold $\varepsilon_1$, we assume no other clusters because the landmarks are usually separated by approximately \SI{3.5}{\meter}. So, $\varepsilon_2$ describes the (squared) clearance radius in which no other clusters should be present. This yields the following criterion for landmark clusters:
\begin{equation}
\left| N\left(c_j, \varepsilon_1 \right)\right| \overset{!}{=} \left| N\left(c_j, \varepsilon_2 \right)\right|.
\end{equation}
Applying all filters and neglecting the z-coordinate of the centroids $\boldsymbol{\mu}_i$ leaves a small set of potential landmarks, which are further treated as proposals.
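A compact sketch of these filtering heuristics, combining the centroid, the per-axis variance threshold, and the clearance criterion. All thresholds are invented for illustration; the tuned values used on the vehicle are not stated here.

```python
def centroid(cluster):
    n = len(cluster)
    return tuple(sum(p[k] for p in cluster) / n for k in range(3))

def axis_variances(cluster):
    mu = centroid(cluster)
    n = len(cluster)
    return tuple(sum((p[k] - mu[k]) ** 2 for p in cluster) / (n - 1)
                 for k in range(3))

def is_landmark(j, clusters, max_var=(0.1, 0.1, 0.2), eps1=0.25, eps2=4.0):
    """Variance and clearance heuristics for cluster j (thresholds invented)."""
    if any(v > v_max for v, v_max in zip(axis_variances(clusters[j]), max_var)):
        return False  # too spread out to be a traffic cone
    mu_j = centroid(clusters[j])
    def neighbourhood(eps):  # |N(c_j, eps)|, using squared distances
        return sum(1 for c in clusters
                   if sum((a - b) ** 2 for a, b in zip(centroid(c), mu_j)) < eps)
    # criterion |N(c_j, eps1)| == |N(c_j, eps2)|: nothing between the two radii
    return neighbourhood(eps1) == neighbourhood(eps2)

clusters = [
    [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)],      # cone A
    [(1.0, 0.0, 0.0), (1.05, 0.0, 0.0), (1.0, 0.05, 0.0)],      # cone B, 1 m from A
    [(10.0, 0.0, 0.0), (10.05, 0.0, 0.0), (10.0, 0.05, 0.0)],   # isolated cone
]
print(is_landmark(0, clusters), is_landmark(2, clusters))  # False True
```

Cone A is rejected because another cluster lies inside its clearance radius but outside the inner region, while the isolated cone passes all filters.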
\subsection{Landmark Validation}
All landmark proposals $\mathbf{p}_{l,i},\: i=1,\ldots,N_\text{prop}$ are validated against features in the camera images. This is done by mapping the extracted landmark positions $\mathbf{p}_l=\left(x_l,y_l\right)^{\mathpalette\@transpose{}}$ into pixel space, calculating tight bounding boxes, and classifying these boxes.
\paragraph{Projection of Landmarks}
To compare landmark positions in the image plane, a projection $\boldsymbol{\phi}:\mathbb{R}^2 \rightarrow \mathbb{R}^2$, mapping from the ground plane to the 2D image plane has to be found. This is usually done by calculating the intrinsic and extrinsic parameters for each camera. We simplify this process by approximating $\boldsymbol\phi$ with a polynomial of degree $N$:
\begin{equation}
\boldsymbol\phi \left( \mathbf{p}_l = (x_l,y_l)^{\mathpalette\@transpose{}} \right) \approx \sum_{i=0}^N\sum_{j=0}^N \boldsymbol a_{i,j} x_l^iy_l^j .
\end{equation}
This results in $2(N+1)^2$ parameters that have to be tuned. We used a setup with multiple landmarks registered in both lidar and camera to regress these parameters $\boldsymbol a_{i,j}$. The projection maps landmark positions $\mathbf{p}_l$ to pixel coordinates $(u,v)^{\mathpalette\@transpose{}}=\boldsymbol\phi \left(\mathbf{p}_l\right)$. We assume these coordinates to be the center of the landmarks.
\paragraph{Bounding Box Regression}
To properly validate the landmarks, a boundary has to be calculated for each of them. A standard approach is the bounding box, which creates a rectangular boundary around the object of interest. In addition to the center of the bounding box, which is given by the projection method outlined above, a size estimate has to be made to specify the boundary. Here we make use of the fact that all objects should have the same aspect ratio and that the scaling of each landmark inside the image is reciprocal to the distance of this landmark to the origin (vehicle rear axis). Furthermore, we assume an anisotropic scaling behavior and therefore introduce one scaling factor per direction, leading to the following approximations for width $w_l$ and height $h_l$ of bounding box $l$:
\begin{equation}
w_l = \frac{s_u}{ \left\Vert \mathbf{p}_l \right\Vert}, \quad
h_l = \frac{s_v}{ \left\Vert \mathbf{p}_l \right\Vert},
\end{equation}
with scaling factors $s_u$ and $s_v$. We use the resulting bounding box $\left( u, v, w_l, h_l \right)$ as a rough estimate of the ideal boundary (see \prettyref{fig:bbs_a}). This works reasonably well but is not yet robust enough. To reduce the error introduced by the assumptions made above, we correct the bounding box with a simple, yet efficient method. We make use of the unique color of the landmarks, to apply an area approximation for each specific landmark. We calculate the centroid of this area and shift the bounding box accordingly. This significantly improves the bounding box estimation (see \prettyref{fig:bbs_b}).
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Graphics/bounding_boxes.png}
\caption{}
\label{fig:bbs_a}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Graphics/bounding_boxes_corrected.png}
\caption{}
\label{fig:bbs_b}
\end{subfigure}
\caption{\small{Landmark bounding boxes: (a) shows an example track with regressed bounding boxes. (b) shows the same image, but with corrected bounding boxes.}}
\end{figure}
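The distance-based size estimate above can be sketched as follows. The projection `phi` stands in for the regressed polynomial, and the scaling factors are invented; in practice both would be calibrated per camera.

```python
import math

def bounding_box(p_l, phi, s_u=900.0, s_v=1500.0):
    """Estimate a pixel-space bounding box (u, v, w, h) for landmark p_l.
    `phi` maps ground-plane coordinates to pixel coordinates; the scaling
    factors s_u, s_v are illustrative assumptions."""
    u, v = phi(p_l)
    d = math.hypot(*p_l)   # distance to the vehicle rear axis
    return (u, v, s_u / d, s_v / d)

# toy projection standing in for the regressed polynomial phi
phi = lambda p: (640 + 50 * p[1] / p[0], 360 + 400 / p[0])
print(bounding_box((10.0, 0.0), phi))  # the box shrinks with distance
```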
\paragraph{Landmark classification}
The final step for validation is a classification of each bounding box. According to the rules \cite{fsg2019rules}, there are five possible cases:
\begin{enumerate}
\item small blue traffic cone,
\item small yellow traffic cone,
\item small orange traffic cone,
\item big orange traffic cone,
\item none of them (false positive proposal).
\end{enumerate}
This task is well established. We decided to use a Convolutional Neural Network (CNN), to create a robust classifier. Following our main goal of efficiency, we used an adapted MobileNet V2 \cite{Sandler.}, which was further optimized for our application. Training was done by using network-based transfer learning \cite{Oquab.2014}, with ImageNet \cite{Russakovsky.2014} as a source domain for all convolutional layers. The network is finetuned on a semi-manually annotated dataset.
\paragraph*{Dataset} Our dataset contains images that were taken with KIT19d's camera setup and also similar images from our previous driverless vehicle. Bounding boxes of landmarks are annotated automatically, using our bounding box regression setup. Each bounding box is further annotated with a quality label to indicate how difficult the classification would be for a human. We used a scale from $0$ (classification not possible) to $10$ (easy, high resolution). False samples (label \textit{none}) were created from false positive proposals, random internet images with the same colors, and random crops of captured images.
\paragraph*{Optimization} We used a semi-factorial, manual parameter-search for all hyper-parameters. During training, we noticed a tendency to only rely on color information, which is an easy way for classification, but not reliable on low contrast image regions or against false positives with similar color (e.g. human legs). To guide our classifier, we preselected low-quality images with heavy augmentation to force more complex decision rules. This led to an increase in accuracy in all categories.
\section{Localization and Mapping}\label{sec:slam}
The localization and mapping algorithm tracks the position of all observed landmarks and provides a transformation between the body-fixed coordinate system of the car and a world-fixed coordinate system located at the beginning of the track. Based on this information, the motion planning algorithm calculates a suitable path through the landmarks.
Assuming the pose of the vehicle is known, well-established methods exist for generating a map from observations of the environment \cite[\S 9]{thrun2005robotics}. The same applies for the opposite case, where a map of the environment is available and the pose of the vehicle is unknown \cite[\S 7]{thrun2005robotics}. However, a much more difficult problem arises when both the map and the vehicle pose are unknown. This problem, which is commonly referred to as Simultaneous Localization and Mapping (SLAM), is subject to ongoing research and a selection of methods to solve it is for example presented in \cite{thrun2005robotics}.
Solving the SLAM problem is a computationally expensive task. Furthermore, deploying the algorithm in a race car requires higher update rates compared to other applications (e.g. indoor robots). However, our application is very confined and allows us to make several assumptions that we can leverage to create a simpler algorithm. In this section, an Extended Kalman Filter (EKF) for the pose estimation of the vehicle is proposed. In our case, the estimation is sufficiently accurate to generate a global map during the Autocross race without further correction of the landmark positions. Thus, in contrast to a SLAM algorithm, the landmark positions are not included in the state vector.
\subsection{Extended Kalman Filter}
The system uses an Extended Kalman Filter (EKF) to track the pose of the vehicle. We assume that the race track is flat, which is valid for most competition sites; therefore, the state vector contains the x- and y-coordinates and the yaw angle \(\psi\) of the vehicle:
\begin{equation}
\mathbf{x}_k = (x, y, \psi)^{\mathpalette\@transpose{}}
\end{equation}
\paragraph{Prediction}
Instead of modeling the vehicle's response to an input from the autonomous system, the rear wheel speeds \(n_{rl}\) and \(n_{rr}\) and yaw rate \(\dot{\psi}\) are used directly in the prediction step.
The state transition is given by
\begin{equation}
\mathbf{\hat{x}}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k) + \epsilon_k,
\end{equation}
with
\begin{equation}
f(\mathbf{x}_k, \mathbf{u}_k) = \mathbf{x}_k + \mathbf{B}_k \mathbf{u}_k,
\end{equation}
where $\epsilon_k$ is the zero-mean Gaussian process noise and \(\mathbf{u}_k = (n_{rl}, n_{rr}, \dot{\psi})^{\mathpalette\@transpose{}}\) denotes the input vector, which is transformed into the state-space by
\begin{equation}
\mathbf{B}_k = \begin{pmatrix}
\pi r_{dyn} \cos \psi & \pi r_{dyn} \cos \psi & 0 \\
\pi r_{dyn}\sin \psi & \pi r_{dyn}\sin \psi & 0 \\
0 & 0 & 1
\end{pmatrix} \Delta t.
\end{equation}
Here \(r_{dyn}\) represents the dynamic radius of the wheel.
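The prediction step can be written out in a few lines. This is a minimal sketch assuming wheel speeds in revolutions per second; the wheel radius and time step are invented placeholder values, and the covariance propagation of the EKF is omitted.

```python
import math

R_DYN = 0.23   # assumed dynamic wheel radius [m]
DT = 0.01      # assumed prediction time step [s]

def predict(state, u):
    """One prediction x_{k+1} = x_k + B_k u_k (process noise omitted).
    state = (x, y, psi); u = (n_rl, n_rr, yaw_rate), wheel speeds in rev/s."""
    x, y, psi = state
    n_rl, n_rr, yaw_rate = u
    # averaged rear-wheel travel: (n_rl + n_rr)/2 * 2*pi*r = pi*r*(n_rl + n_rr)
    x += math.pi * R_DYN * math.cos(psi) * (n_rl + n_rr) * DT
    y += math.pi * R_DYN * math.sin(psi) * (n_rl + n_rr) * DT
    psi += yaw_rate * DT
    return (x, y, psi)
```

Driving straight at 10 rev/s per wheel, the vehicle advances by $\pi \cdot 0.23 \cdot 20 \cdot 0.01 \approx \SI{0.145}{\meter}$ per step, matching the $\mathbf{B}_k$ matrix above.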
\paragraph{Correction} The predicted state vector $\mathbf{\hat{x}}_{k+1}$ is augmented with the position of each of the $N_{\text{map}}$ fixed landmarks $\mathbf{m}_{l,i}$ contained in the map $M$. The map is structured as follows:
\begin{equation}
M=\left\lbrace \mathbf{m}_{l,i} = (x_{m,i}, y_{m,i})^{\mathpalette\@transpose{}} \: | \: i=1,2,\ldots,N_{\text{map}}\right\rbrace.
\end{equation}
The resulting composition of the augmented state vector is
\begin{equation}
\mathbf{\hat{x}}_{k+1}^{Aug} = (x, y, \psi, \mathbf{m}_{{l,1}}, \mathbf{m}_{{l,2}}, ... ,\mathbf{m}_{l,N_{\text{map}}})^{\mathpalette\@transpose{}}.
\end{equation}
Furthermore, a landmark proposal $\mathbf{p}_{l}$ is only considered in the correction step if it was tracked $n > 1$ times in the same color. The observation model for a set of measurements $\mathbf{z}_k = (\mathbf{p}_{l,0}, ..., \mathbf{p}_{l,N_{\text{prop}}})^{\mathpalette\@transpose{}}$ is given by
\begin{equation}
\mathbf{z}_k = h(\mathbf{\hat{x}}_{k+1}) + \delta_k,
\end{equation}
with
\begin{equation}
h(\mathbf{\hat{x}}_{k+1})
= \begin{pmatrix}
\mathbf{R_\psi^{-1}} \cdot \begin{pmatrix} x_{m,1} - x \\ y_{m,1} - y \end{pmatrix} \\
\vdots \\
\mathbf{R_\psi^{-1}} \cdot \begin{pmatrix} x_{m,N_{\text{map}}} - x \\ y_{m,N_{\text{map}}} - y \end{pmatrix} \\
\end{pmatrix}.
\end{equation}
The matrix $\mathbf{R_\psi^{-1}}$ describes a 2D rotation with respect to the yaw angle $\psi$ in the negative direction.
The observation noise is assumed to be zero-mean Gaussian and is denoted as $\delta_k$. Based on the difference between the measurement and the prediction, the state is then updated according to the EKF update step as described by Thrun et al. \cite{thrun2005robotics}.
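The observation model $h$ simply expresses each mapped landmark in the body-fixed frame, as a sketch under the flat-track assumption:

```python
import math

def expected_observations(state, landmarks):
    """Observation model h: rotate each translated world-frame landmark
    by -psi into the body-fixed frame of the vehicle."""
    x, y, psi = state
    c, s = math.cos(psi), math.sin(psi)
    out = []
    for xm, ym in landmarks:
        dx, dy = xm - x, ym - y
        # R_psi^{-1} applied to the translated landmark position
        out.append((c * dx + s * dy, -s * dx + c * dy))
    return out
```

For a vehicle at $(1, 0)$ facing the positive y-direction ($\psi = \pi/2$), a landmark at $(1, 2)$ is correctly predicted two metres straight ahead.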
\paragraph{Data association}
The correction step relies on the knowledge of the correspondence between the mapped landmarks and the measured landmarks. However, finding the correct associations is challenging and false associations will cause the filter to diverge.
\\
A common approach to solve this problem is the Individual Compatibility Nearest Neighbor (ICNN) algorithm \cite{neira2001jcbb}. However, according to Neira and Tardos in \cite{neira2001jcbb}, the pose estimate error must not be greater than the distance between features. Neither the FSG rules \cite{fsg2019rules} nor the competition handbook \cite{fsg2019handbook} specify a minimum distance between cones, but past competitions have shown that their spacing can be as low as \SI{30}{\centi\meter}. Especially in tight corners, the error of the pose estimate can approach this distance and thus make the ICNN algorithm susceptible to false associations. Therefore, the computationally more expensive, but also more robust Joint Compatibility Branch and Bound (JCBB) algorithm \cite{neira2001jcbb} was used.
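For intuition, the simpler ICNN-style baseline can be sketched as a gated nearest-neighbour search. This is only the baseline the text contrasts against, not the JCBB algorithm used on the vehicle, and it uses a plain Euclidean gate where a Mahalanobis distance would normally be used.

```python
import math

def associate(proposals, landmarks, gate=0.5):
    """ICNN-style nearest-neighbour association (illustrative baseline).
    Returns {proposal index: landmark index} for matches inside the gate;
    JCBB would additionally check joint compatibility of the whole set."""
    matches = {}
    for i, p in enumerate(proposals):
        d, j = min((math.dist(p, m), j) for j, m in enumerate(landmarks))
        if d < gate:
            matches[i] = j
    return matches

props = [(1.05, 0.0), (8.0, 8.0)]   # second proposal is far from all landmarks
lms = [(1.0, 0.0), (4.0, 0.0)]
print(associate(props, lms))  # {0: 0}
```

With cone spacings as low as \SI{30}{\centi\meter}, such an individual gate is exactly where false associations arise, which motivates the joint test of JCBB.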
\section{Motion Planning and Control}\label{sec:motion_planning}
Motion planning is concerned with the problem of finding a geometric path and the velocity at which it should be traveled starting at an initial pose and ending in a goal region. It is additionally required for the path to not intersect with obstructed areas of the configuration space which corresponds to staying inside the boundaries of the race track for autonomous racing applications. Static and dynamic obstacles are often also considered part of the problem but are not relevant in the Formula Student context. While it is possible to formulate this problem in terms of the control inputs of the vehicle to merge motion planning and control into a single problem, it is common practice to treat these separately. Paden et al. \cite{paden2016survey} give an overview of state-of-the-art motion planning and control techniques. In both motion planning and control, optimization-based methods are well suited for racing applications. We consider geometric path optimization to be of minor importance due to the small performance benefit to be gained between the narrow boundaries of typical Formula Student circuits. In contrast, optimization-based vehicle control is a crucial component for the performance of our vehicle, since it allows for highly dynamic maneuvers. We have therefore decided to combine a fairly simple motion planning method with a model predictive control (MPC) algorithm for lateral vehicle control. For the sake of simplicity, longitudinal control is decoupled from the MPC problem. Since the longitudinal dynamics of a race car are represented well by a double integrator with a velocity-dependent offset, a feedforward PI controller is a suitable choice for tracking a predefined velocity target.
\subsection{Motion Planner}
We use a simple motion planner that provides a geometric reference path to be tracked by the lateral controller and also the corresponding velocity target as reference for longitudinal control. Following the path-velocity-decomposition method these two reference trajectories are determined sequentially. As we consider geometric path optimization to be of minor importance, the centerline of the track ahead of the vehicle is used as the reference path. The next step is to attribute a target velocity to this reference path that is as fast as possible but also ensures that the vehicle is operated safely inside its performance envelope. Lap time simulation methods lend themselves to solve this kind of problem and in particular quasi-steady-state methods are suitable for online applications. This is because a large fraction of the method can be computed offline. A sophisticated nonlinear vehicle model is used to find the combined lateral and longitudinal acceleration limits of the vehicle for different speeds which is often referred to as GGS-data. In essence this provides an acceleration limit map of the vehicle which is used by the online algorithm to find a velocity trajectory that obeys those limitations. We parameterize the method such that if the velocity target is tracked closely by the real vehicle, the tires do not enter operation regions where tire behavior is highly nonlinear to ensure that the linear vehicle model used in the lateral controller is valid. The remainder of this section describes the lateral controller.
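The quasi-steady-state idea above can be sketched as a forward-backward pass over the path. Constant acceleration limits stand in for the precomputed GGS map, and all numbers are invented for illustration.

```python
import math

def velocity_profile(curvature, ds, v_max=25.0, a_lat=12.0, a_long=8.0):
    """Quasi-steady-state velocity target along a path sampled every ds
    metres. Constant lateral/longitudinal limits stand in for GGS-data."""
    n = len(curvature)
    # steady-state cornering limit: v^2 * |kappa| <= a_lat
    v = [min(v_max, math.sqrt(a_lat / abs(k)) if k else v_max)
         for k in curvature]
    # forward pass: limit acceleration, v_{i+1}^2 <= v_i^2 + 2 a ds
    for i in range(n - 1):
        v[i + 1] = min(v[i + 1], math.sqrt(v[i] ** 2 + 2 * a_long * ds))
    # backward pass: limit braking so the car slows down before corners
    for i in range(n - 2, -1, -1):
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * a_long * ds))
    return v

# straight section, tight corner, straight section
prof = velocity_profile([0.0] * 5 + [0.2] * 3 + [0.0] * 5, ds=5.0)
```

The profile caps the corner speed at $\sqrt{a_\text{lat}/|\kappa|}$ and smoothly ramps the velocity down before and up after the corner, which is the behavior the online algorithm produces from the offline acceleration limit map.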
\subsection{Predictive Steering Controller}
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{Graphics/bicycle_model.png}
\caption{\small{Dynamic bicycle model used by the predictive steering controller.}}
\label{fig:MPC_vehicle_model}
\end{figure}
In previous years we used a Stanley controller \cite{hoffmann2007autonomous}, which is based on the steering kinematics of the vehicle and uses a single point on the target path as reference. For KIT19d, a model predictive steering controller was developed instead. This choice was motivated by the goal to make better use of the knowledge about the path ahead of the vehicle and by the possibility to use more realistic vehicle models as the basis for the controller. We start the discussion of the controller design with the internal model used for making predictions. Vehicle motion is simplified to exist in an SE(2) configuration space, which means the vehicle has two translational and one rotational degree of freedom. Thereby, body motions such as pitch, roll and heave are neglected and the surface on which the vehicle moves is assumed to be planar, which is usually a valid assumption for Formula Student tracks. Since we use MPC only as a steering controller, further simplification can be achieved by excluding the longitudinal vehicle dynamics from the model. However, to associate each time step on the horizon with the desired path coordinate, it is necessary to prescribe the longitudinal velocity, which can be retrieved from the velocity target used for the longitudinal controller. An even simpler and, in our tests, close to equivalent method is to use the current velocity measurement and keep it constant over the whole prediction horizon. Using the dynamic bicycle model illustrated in \prettyref{fig:MPC_vehicle_model}, the following set of nonlinear differential equations then describes the remaining lateral and yaw dynamics:
\begin{align}
\dot{y} &= v_x \sin{\psi} + v_y \cos{\psi} \\
\dot{v}_y &= \frac{1}{m} (F_{y,f} \cos{\delta} + F_{y,r}) - v_x r \\
\dot{\psi} &= r \\
\dot{r} &= \frac{1}{I_z} M_z
\end{align}
with lateral tire forces
\begin{equation}
F_{y,f} = C_f \alpha_f, \quad F_{y,r} = C_r \alpha_r,
\end{equation}
yaw moment
\begin{equation}
M_z = l_f F_{y,f} \cos{\delta} - l_r F_{y,r},
\end{equation}
and tire slip angles
\begin{align}
\alpha_f &= -\delta + \arctan{\left( \tfrac{v_y + l_f \dot{\psi}}{v_x} \right)}, \\ \alpha_r &= \arctan{\left( \tfrac{v_y - l_r \dot{\psi}}{v_x} \right)}.
\end{align}
With the state vector $\boldsymbol{x} = [y, v_y, \psi, r]^{\mathpalette\@transpose{}}$ and the steering angle as scalar input $u = \delta$ the linearization of this system around straight line driving reads:
\begin{equation}
\dot{\boldsymbol{x}} = \mathbf{A} \boldsymbol{x} + \mathbf{b} u,
\label{eq:ode}
\end{equation}
with
\begin{equation}
\mathbf{A} = \begin{bmatrix}0 & 1 & v & 0 \\ 0 & \frac{C_f + C_r}{m v} & 0 & \frac{l_f C_f - l_r C_r}{m v}-v \\ 0 & 0 & 0 & 1 \\ 0 & \frac{l_f C_f - l_r C_r}{I_z v} & 0 & \frac{l_f^2 C_f + l_r^2 C_r}{I_z v}\end{bmatrix},
\end{equation}
\begin{equation}
\mathbf{b} = \begin{bmatrix}0 \\ \frac{-C_f}{m} \\ 0 \\ \frac{-l_f C_f}{I_z}\end{bmatrix}.
\end{equation}
This is a continuous-time model which has to be discretized due to the transcription of the MPC problem from a dynamic optimization problem to a parameter optimization problem. Discretization can be achieved by using numerical integrators such as the forward Euler scheme or the Runge-Kutta methods. However, for linear models it is also possible to obtain a discretization by using matrix exponentials that result from solving the differential equation \eqref{eq:ode} over a single time step using a piecewise constant representation of the input variables.
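As an illustration of the matrix-exponential approach, the sketch below builds the continuous-time matrices $\mathbf{A}$ and $\mathbf{b}$ from the linearization above and discretizes them exactly under a zero-order hold. The parameter values and function names are placeholders, not those of the KIT19d software.

```python
import numpy as np
from scipy.linalg import expm

def linear_bicycle(v, m, Iz, lf, lr, Cf, Cr):
    """Continuous-time matrices of the linearized single-track model,
    state x = [y, v_y, psi, r], input u = delta (paper's sign convention)."""
    A = np.array([
        [0.0, 1.0, v, 0.0],
        [0.0, (Cf + Cr) / (m * v), 0.0, (lf * Cf - lr * Cr) / (m * v) - v],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, (lf * Cf - lr * Cr) / (Iz * v), 0.0, (lf ** 2 * Cf + lr ** 2 * Cr) / (Iz * v)],
    ])
    b = np.array([0.0, -Cf / m, 0.0, -lf * Cf / Iz])
    return A, b

def discretize_zoh(A, b, T):
    """Exact zero-order-hold discretization via one matrix exponential:
    expm([[A, b], [0, 0]] * T) = [[Ad, bd], [0, 1]]."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    E = expm(M * T)
    return E[:n, :n], E[:n, n]
```

Compared with a forward Euler step, the matrix-exponential discretization is exact for piecewise constant inputs regardless of the step size.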
We formulate and solve a linear time-varying MPC (LTV-MPC) problem, largely following the methods presented in \cite{maciejowski2002predictive}. Using the time-discrete linear vehicle model, the receding horizon optimal control problem can directly be stated as a parameter optimization problem. It is formulated in terms of the control input rate $\Delta u$ from which $u$ is obtained by accumulation:
\begin{subequations}
\label{eq:LTV_MPC}
\begin{alignat}{2}
& \underset{\substack{\Delta u_{1:N}, \\ \boldsymbol{x}_{1:N+1}}}{\text{min}} \
& & \sum^{N}_{k=1} \left(\Vert \boldsymbol{x}_k-\boldsymbol{x}^{ref}_k \Vert^2_\mathbf{Q} \nonumber + R \Delta u_k^2 \right) \\
\label{eq:LTV_MPCa}
&
& & \qquad \qquad \qquad + \Vert \boldsymbol{x}_{N+1}-\boldsymbol{x}^{ref}_{N+1} \Vert^2_\mathbf{P} \\
\label{eq:LTV_MPCb}
& \text{s.t.}
& & \boldsymbol{x}_{k+1} = \mathbf{A} \boldsymbol{x}_k + \mathbf{b} u_k \quad k=1,..,N\\
\label{eq:LTV_MPCc}
&
& & u_{k+1} = u_k + \Delta u_k \quad k=1,..,N\\
\label{eq:LTV_MPCd}
&
& & \boldsymbol{x}_1 = \hat{\boldsymbol{x}}\\
\label{eq:LTV_MPCe}
&
& & u_1 = \hat{u}\\
\label{eq:LTV_MPCf}
&
& & \mathbf{D} \boldsymbol{x}_k + \mathbf{e} u_k + \mathbf{f} \leq \mathbf{0} \quad k=1,..,N+1\\
\label{eq:LTV_MPCg}
&
& & \underline{\boldsymbol{x}} \leq \boldsymbol{x}_k \leq \overline{\boldsymbol{x}} \quad k=1,..,N+1\\
\label{eq:LTV_MPCh}
&
& & \underline{u} \leq u_k \leq \overline{u} \quad k=1,..,N+1 \\
\label{eq:LTV_MPCi}
&
& & \underline{\Delta u} \leq \Delta u_k \leq \overline{\Delta u} \quad k=1,..,N
\end{alignat}
\end{subequations}
This formulation follows the direct simultaneous discretization approach \cite[\S 5.4.2]{chachuat2007nonlinear}, since it includes the state variables in the set of optimization variables and enforces the system dynamics explicitly by equality constraints \eqref{eq:LTV_MPCb}. In such formulations the Hessian of the objective function and the constraint matrices are block diagonal if the variables are sorted appropriately. Quadratic programming (QP) solvers that exploit this structure can therefore solve the problem efficiently (e.g. OSQP \cite{stellato2018osqp}, HPIPM \cite{frison2020hpipm}, qpDUNES \cite{frasch2015parallel}, FORCES \cite{domahidi2012efficient}). We have however opted for an approach that does not require a sophisticated QP solver. Maciejowski \cite[\S 2.6]{maciejowski2002predictive} presents how the state variables can be eliminated from the problem by using the linear system dynamics to express them in terms of the control variables. In contrast to formulation \eqref{eq:LTV_MPC} where the system dynamics are only satisfied upon convergence, this leads to the dynamics being always satisfied even before the optimal solution is found. Thus, this process yields a formulation following the direct sequential approach \cite[\S 5.4.3]{chachuat2007nonlinear}. The elimination of the state variables transforms the originally sparse problem into a smaller problem with dense matrices and is therefore also known as condensing. The resulting problem has the following form:
\begin{subequations}
\label{eq:denseQP}
\begin{alignat}{2}
\label{eq:denseQPa}
& \underset{\boldsymbol{\Delta u}_{1:N}}{\text{min}} \quad
& & \frac{1}{2} \boldsymbol{\Delta u}^{\mathpalette\@transpose{}}_{1:N} \mathbf{H} \boldsymbol{\Delta u}_{1:N} + \mathbf{g}^{\mathpalette\@transpose{}} \boldsymbol{\Delta u}_{1:N}\\
\label{eq:denseQPb}
& \text{s.t.} \quad
& & \underline{\mathbf{d}} \leq \mathbf{D}\boldsymbol{\Delta u}_{1:N} \leq \overline{\mathbf{d}} \\
\label{eq:denseQPc}
&
& & \underline{\Delta u} \leq \Delta u_k \leq \overline{\Delta u}
\end{alignat}
\end{subequations}
Here all decision variables have been gathered in a single vector $\boldsymbol{\Delta u}_{1:N} = [\Delta u_1, \Delta u_2, ..., \Delta u_N]^{\mathpalette\@transpose{}}$. Problem \eqref{eq:denseQP} is a dense quadratic program and can be solved by general-purpose QP solvers such as qpOASES \cite{ferreau2014qpoases}. It is desirable for this QP to be strictly convex, as in this case the problem is guaranteed to have a unique global minimum. Additionally, the well-known KKT conditions provide a sufficient condition of optimality for convex problems. As shown in \cite[p.76]{maciejowski2002predictive}, problem \eqref{eq:denseQP} is strictly convex if the input weight matrix is positive definite, which translates into $R>0$ for our single-input case. Constraints on the state and control variables are encoded in the linear inequalities \eqref{eq:denseQPb}. For our application these could be used to implement lower and upper bounds on the steering angle. Experience from previous vehicles and simulation results on circuits representative of Formula Student competitions do, however, suggest that this limit is hardly ever reached when operating the vehicle within its handling limits. The constraints of problem \eqref{eq:denseQP} were therefore dropped, leading to an unconstrained QP for which the solution is obtained by solving the following set of linear equations:
\begin{equation}
\mathbf{H} \boldsymbol{\Delta u}_{1:N} = -\mathbf{g}.
\label{eq:linearSystem}
\end{equation}
There exist several direct and iterative methods for solving such linear systems. We have chosen to implement the method with the Eigen3 C++ library \cite{eigenweb} and use its Cholesky decomposition algorithm to solve problem \eqref{eq:linearSystem}.
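A minimal sketch of the condensing step and the unconstrained solve might look as follows. For brevity it optimizes the absolute input $u_k$ rather than the rate $\Delta u_k$ used above, and it uses SciPy's Cholesky routines in place of Eigen3; all names are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def condensed_unconstrained_mpc(Ad, bd, Q, R, x0, x_ref, N):
    """Condense the prediction model and solve the unconstrained QP by a
    Cholesky factorization.  For brevity this optimizes the absolute input
    u_k instead of the rate Delta u_k used in the paper; the condensing
    step itself is identical.  x_ref has shape (N, nx)."""
    nx = Ad.shape[0]
    # prediction matrices: X = Phi @ x0 + Gamma @ U, with X = [x_1, ..., x_N]
    Phi = np.zeros((N * nx, nx))
    Gamma = np.zeros((N * nx, N))
    Apow = np.eye(nx)
    for k in range(N):
        Apow = Ad @ Apow                                  # Ad^(k+1)
        Phi[k * nx:(k + 1) * nx, :] = Apow
        for j in range(k + 1):
            Gamma[k * nx:(k + 1) * nx, j] = np.linalg.matrix_power(Ad, k - j) @ bd
    Qbar = np.kron(np.eye(N), Q)
    H = Gamma.T @ Qbar @ Gamma + R * np.eye(N)            # dense Hessian
    g = Gamma.T @ Qbar @ (Phi @ x0 - x_ref.reshape(-1))
    # H is positive definite for R > 0, so H u = -g has a unique solution
    return cho_solve(cho_factor(H), -g)
```

Because the state variables are eliminated, the resulting Hessian is dense but small ($N \times N$ for a single input), which keeps the Cholesky solve cheap.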
This basic MPC controller can be enhanced by a delay compensation, which is often used to make up for the time needed to solve the optimization problem. In our case of solving an unconstrained QP the delay introduced by optimization is not significant, but the same techniques can be used to compensate for delays between the controller and the actual steering angle. We have employed the approach from \cite[\S 2.5]{maciejowski2002predictive} where the state vector of the plant model is extended by a transport chain of input samples. A general real-valued delay $t_D$ is separated into an integer multiple of the discretization step size $T$ and a real-valued residual, leading to $\lfloor \tfrac{t_D}{T} \rfloor +1$ new state vector entries.
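The transport-chain augmentation for an integer delay of $d$ sampling steps can be sketched as follows. This is a simplified illustration that omits the fractional-delay residual; names are hypothetical.

```python
import numpy as np

def augment_for_delay(Ad, bd, d):
    """Augment a discrete-time model x+ = Ad x + bd u with an input delay of
    d whole sampling steps: the applied input is u_{k-d}, modeled by a
    transport chain of d stored input samples appended to the state."""
    nx = Ad.shape[0]
    n = nx + d
    Aa = np.zeros((n, n))
    ba = np.zeros(n)
    Aa[:nx, :nx] = Ad
    if d == 0:
        ba[:nx] = bd
        return Aa, ba
    Aa[:nx, nx] = bd          # plant sees the oldest stored sample
    # shift the transport chain by one slot per step
    for i in range(d - 1):
        Aa[nx + i, nx + i + 1] = 1.0
    ba[nx + d - 1] = 1.0      # newest input enters at the end of the chain
    return Aa, ba
```

The controller then predicts with the augmented model, so the effect of inputs that are already "in flight" is accounted for automatically.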
Several methods that guarantee stability of MPC controllers for setpoint stabilization are well-established. Most of them are derived from the properties of infinite-horizon controllers, for which stability and recursive feasibility follow from Bellman's principle of optimality if the initial optimization problem can be solved \cite[\S 6.2]{maciejowski2002predictive}. In receding-horizon control, penalties and constraints on the terminal state are used to obtain these characteristics. Sufficiently long horizon lengths can also be used to derive the required guarantees. However, since the correct choice of the terminal set, the weights of terminal penalties or the minimum horizon length is a nontrivial task, rigorous proofs of stability are often omitted in practice. Moreover, these proofs are often invalidated in the presence of disturbances and plant-model mismatch. This is especially true when linear MPC is used to control highly nonlinear systems. Strictly speaking, stability proofs for setpoint stabilization are not even applicable to our use case of trajectory tracking, for which a theoretical analysis can be found in \cite{faulwasser2012optimization}. We have therefore chosen a more heuristic approach and focused on setting the parameters of the velocity planner such that the vehicle's dynamics are close to linear, keeping the plant-model mismatch small. A sufficiently long horizon of $N=65$ with a sampling time of $\Delta t=\SI{20}{\milli\second}$ has been used. While terminal region constraints are not possible for our unconstrained formulation, terminal penalties can be included. Extensive simulation studies to verify stability and robustness of the approach were conducted before testing on the real vehicle.
\section{Results}\label{sec:results}
To demonstrate the capabilities of our implementation, we decided to present two important aspects: the real-time feasibility of the presented algorithms and the performance of the final prototype in competitions.
\prettyref{fig:timingBoxPlot} shows the runtime distributions for the subsystems presented in this paper that were collected on the ACU under real racing conditions.
\begin{figure}
\centering
\setlength\fwidth{0.8\columnwidth}
\includestandalone{Graphics/timingBoxPlot}
\caption{\small{Timing results for the AS modules presented in this paper.}}
\label{fig:timingBoxPlot}
\end{figure}
All systems achieve cycle times below \SI{50}{\milli\second}, which ensures that no significant delays build up between perception and control. It can also be seen that the perception module is the most time-consuming component, as the median aggregated inference time is about \SI{13.5}{\milli\second}. The runtimes of about \SI{3.2}{\milli\second} for the lidar clustering and \SI{10.3}{\milli\second} for the image processing do not limit the overall performance, as their inference times are lower than the lidar's and camera's update rates, respectively. These results show the efficiency of the presented perception pipeline compared to other algorithms commonly used in a Formula Student context, such as \cite{yolo9000}. Furthermore, the computation times of the localization algorithm and the controller are considerably shorter than those of the perception modules.
\begin{figure}
\centering
\setlength\fwidth{0.9\columnwidth}
\includestandalone{Graphics/ggtest}
\caption{\small{Accelerations during the FSG19 Autocross w/o prior knowledge.}}
\label{fig:gg_diagrams}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Graphics/autox_map.png}
\caption{\small{Visualization of the KIT19d's first Autocross run at FSG19.}}
\label{fig:autocross_mapping}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{p{0.2\linewidth} | p{0.16\linewidth} p{0.16\linewidth} p{0.16\linewidth} p{0.16\linewidth} }
\hline
& Skidpad & Acceleration & Autocross \textbf{w/o} prior knowledge & Trackdrive \\
\hline
\hline
Zurich ETH & 5.992 + 2.00 & \textcolor{OliveGreen}{\textbf{3.597}} & 31.158 & 226.31 + 32.00 \\
\textbf{Karlsruhe KIT} & \textcolor{OliveGreen}{\textbf{6.671 + 0.20}} & DNF & \textcolor{OliveGreen}{\textbf{28.186}} & \textcolor{OliveGreen}{\textbf{244.90 + 4.00}} \\
Delft TU & DNF & DNF & DNF & DNF \\
Augsburg UAS & DNF & 4.056 & DNF & 315.96 + 2.00 \\
Hamburg TU & 8.993 + 0.20 & 18.322 & 65.356 + 6.00 & DNF \\
\hline
\end{tabular}
\caption{\small{Top 5 FSD Teams - Dynamic disciplines - Best lap times incl. penalties at FSG19.}}
\label{tab:results}
\end{table*}
Besides the latencies between perception and control, the performance of an autonomous racing vehicle on an unknown circuit is to a large extent determined by the maximum preview distance its perception system can provide. The presented algorithms yield a preview of up to \SI{42}{\meter}, which amounts to an increase of \SI{210}{\percent} compared to KIT19d's predecessor. To showcase the on-track performance of the vehicle, \prettyref{fig:gg_diagrams} presents logging data of the lateral and longitudinal accelerations and the velocity attained by the vehicle while driving the FSG19 Autocross without prior knowledge. Maximum lateral acceleration values come close to \SI{10}{\meter\per\second\squared}, which is not far off the performance of a non-professional human driver. For comparison, when using a map, the vehicle achieved lateral accelerations only slightly over \SI{11}{\meter\per\second\squared} on the same circuit. \prettyref{fig:autocross_mapping} further illustrates the advantages of a high-range perception system, as the vehicle had collected enough data to calculate the entire trajectory after completing less than \SI{70}{\percent} of the Autocross track.
The KIT19d won three out of four dynamic disciplines at the FSD 2019 competition in Hockenheim (see \prettyref{tab:results}) and achieved an overall first place at Formula Student Spain (FSS) in Barcelona. Compared to the competition season of 2018, the lap time\footnote{In 2018 the first run of the Trackdrive without prior knowledge was equivalent to the Autocross in 2019.} without prior knowledge was more than four times lower in 2019, even though the same hardware setup was used. Furthermore, the FSG19 Autocross time of the KIT19d without prior knowledge was 9\% faster than that of the second-fastest car, which is a large margin in terms of lap time.
\section{Conclusion}\label{sec:conclusion}
We have presented the methods and concepts that were essential to the development of the software stack of the autonomous racing prototype KIT19d. Central algorithmic modules from perception, localization and control have been covered in depth. Selected performance evaluations of the final system and competition results have been provided to confirm the real-world applicability of the chosen methods.
At this point we would also like to point out that the performance and reliability of the KIT19d were not only the product of the work conducted during the 2019 competition season. The development of our software stack was based on the software of the previous seasons. Instead of reinventing complete modules, improvements came often in small steps. This ensured a continuous performance improvement even with a small group of developers.
Last but not least we want to provide an outlook on topics we consider to be relevant for future improvements of the system. Extending the range of the perception system and lowering its inherent delays would allow for more accurate localization and control especially at high velocities. Regarding localization and mapping we consider SLAM methods, such as the EKF SLAM or GraphSLAM algorithms described in \cite{thrun2005robotics}, to be potential improvements upon our current implementation. Furthermore, recent developments from the field of model predictive control could allow for improvements of the motion planning and control modules. The two problems could be integrated in a single optimization problem, as presented in \cite{kabzan2019amz}. This would allow to simultaneously optimize the future trajectory and the lateral and longitudinal control inputs.
\section{Acknowledgements}
We would like to thank all present and past members of KA-RaceIng for working with us on this exceptional project. Successful participation in Formula Student Driverless requires much more than the software stack we outlined in this paper. Also, we want to thank our sponsors and all partners at the KIT for their continuous support without which we would not be able to turn our ideas into reality.
\section{Introduction}
Anderson localization is a fundamental phenomenon of disordered quantum systems
and has attracted longstanding attention in condensed matter physics \citep{Anderson_disorder,Anderson 50 year,Mirlin_anderson_transition}. While the localization-delocalization transition and mobility edges occur only in three dimensions for systems with random disorder, the localization-delocalization transition can already be found in one-dimensional quasiperiodic systems, which
have attracted increasing interest in recent years \cite{AAmodel,AAmodel2,Bloch2018,Roati,Lucioni}.
When the quasiperiodical potential strength exceeds a critical value, a localization transition takes place as illustrated by the prototypical quasiperiodic model known as the Aubry-Andr\'{e} (AA) model \citep{AAmodel,AAmodel2}. The quasiperiodic optical lattices have become an ideal platform for studying the localization-delocalization transition \cite{Bloch2018,Roati,Lucioni}.
Particularly, the interplay of interaction and disorder can induce many-body localization (MBL) \citep{MBL1,MBL2,MBL3,David Huse ratio,MBL5}, which violates
the eigenstate thermalization hypothesis and prevents ergodicity \citep{LIOM1,LIOM2}. The existence of MBL has been confirmed in one-dimensional interacting systems with random disorder \citep{David Huse entropy MBL tansition,MBL-random1,MBL-random2,MBL-random3,David Huse ratio,Gray,MBL-EE1}
or an incommensurate potential \citep{MBL-inc1,MBL-inc2,MBL-inc3,MBL-inc4,MBL-inc5,MBL-inc6,MBL-inc7,MBL-inc9,MBL-inc10,MBL-inc11,MBL-inc12,MBL-inc13,WangYC2021,XuSL,Aramthottil,Sierant}.
Moreover, the MBL phase has been experimentally observed in the ultracold atomic gases trapped in incommensurate optical lattices \citep{MBL-expeiment1,MBL-experiment2,MBL-experiment3,MBL-experiment4,MBL-experiment5}.
Exploring novel non-equilibrium phases in driven, interacting quantum systems is a topic of perennial interest. In general, periodically driving a quantum system results in thermalization of the system \cite{drivingETH1,drivingETH2}. Nevertheless, recent works have demonstrated that MBL can persist and prevent heating in the
presence of driving \citep{F_MBL2,F_MBL1,Abanin Thouless Energy Floquet,Abanin MBL Floquet 2015,Huse Floquet for MBL}.
The combination of MBL and Floquet driving can lead to new non-equilibrium
phases of matter, such as time crystals \citep{TmCr1,TmCr2}, suggesting that the interplay of periodic driving and MBL would give rise to rich dynamical phenomena.
On the other hand, by applying a pulsed incommensurate potential to an optical lattice, a periodically kicked AA model was proposed to exhibit a dynamical Anderson transition \cite{Qin}, which is revealed by the dynamical evolution of wave packets. Dynamical localization is characterized
by the halting of the spread of an initial wave packet, and the transition depends on both the strength of the quasiperiodic potential and the kicking period \cite{Qin,Sacramento,Santhanam}.
For the periodically kicked case, while the
time evolution is governed by an effective time-independent AA model in the high-frequency region, the dynamics in the low-frequency region is far more intricate and has not yet been well understood. Besides the kicked AA model, other periodically driven quasiperiodic models have also been studied \cite{Ghosh,Sarkar,YiXX,Sarkar2}, and the effect of temporal disorder on the wave-packet dynamics has also been analyzed \cite{TongPQ}. Particularly, a recent experiment has observed non-ergodic and ergodic phases in a driven quasiperiodic many-body system, which are separated by a drive-induced delocalization transition \cite{Bordia}.
Motivated by these theoretical and experimental advances, we shall study the periodically kicked interacting AA model and investigate the combined effect of quasiperiodic disorder, driving period (frequency), and interaction on the dynamical localization by analyzing the
quasienergy spectrum of the Floquet operator and the related dynamical behavior. To understand the interplay of the quasiperiodic potential and the kicking period, we first analyze the quasienergy spectrum statistics of the noninteracting kicked AA model, which displays different behaviors in the high-frequency and low-frequency regions. In the high-frequency region, the spectrum statistics clearly demonstrates a dynamical localization transition signaled by an abrupt change of the average ratio of adjacent quasienergy gaps. In the low-frequency region, the spectrum statistics becomes intricate due to the emergence of extended/localized-to-multifractal edges, which separate the multifractal states from the localized (extended) states. The corresponding average ratio of adjacent quasienergy gaps is not a universal value and depends on the ratio of the numbers of multifractal states and localized (extended) states.
The multifractal states can be identified by a finite-size scaling analysis of the corresponding wavefunctions, and we propose a scheme to extract the average multifractal exponent from the long-time survival probability.
We then investigate the interacting kicked AA model and identify the existence of MBL in the high-frequency region. Through finite-size analysis, we unveil the occurrence of a transition from the ergodic phase to the MBL phase when the quasiperiodic potential strength exceeds a critical value. However, in the low-frequency region we find that the MBL phase vanishes and no MBL occurs even for strong quasiperiodic disorder.
The rest of this paper is structured as follows. In Sec. II, we introduce the model and the method of quasienergy spectrum statistics. In Sec. III, we analyze the quasienergy spectral statistics and carry out multifractal analysis for the noninteracting kicked AA model. We unveil the existence of dynamical localization transition in the high-frequency region and the emergence of extended/localized-to-multifractal edges in the low-frequency region. In Sec. IV, we study the MBL in the interacting kicked AA model in detail. A summary is given in the final section.
\section{Model and method}
We consider a periodically kicked quasiperiodic model described by the Hamiltonian
\begin{equation}
H= H_0 + H_K, \label{H}
\end{equation}
with
\begin{eqnarray}
H_0 &=& H_J + H_V \nonumber \\
&=& -J\sum_{j} \left(\hat{c}_{j}^{\dagger}\hat{c}_{j+1}+\text{H.c.}\right)+ \sum_{j}V\hat{n}_{j}\hat{n}_{j+1}
\end{eqnarray}
and
\begin{equation}
H_K=\sum_{n}\delta(t-nT) \sum_{j} \mu_{j}\hat{n}_{j} \label{eq:Hamiltonian},
\end{equation}
where $\hat{c}_{j}^{\dagger}(\hat{c}_{j})$ is the fermion creation
(annihilation) operator, $\hat{n}_{j}=\hat{c}_{j}^{\dagger}\hat{c}_{j}$
is the particle number operator, $J$ is the hopping amplitude between
nearest-neighbor sites, and $V$ is the interaction strength between the neighboring particles. The kicking part of the Hamiltonian is described by $H_K$ with the quasiperiodic potential
\begin{equation}
\mu_{j} = \lambda\cos\left(2\pi j\alpha+\phi\right)
\end{equation}
being periodically added with a pulsed period $T$, where $\alpha=\frac{\sqrt{5}-1}{2}$, $\lambda$ is the strength of the quasiperiodic potential and $\phi$ is a random phase. Taking sample average for the random phase $\phi$ can reduce statistical and finite-size effects.
For convenience we set $\hbar=1$ and take $J=1$ as the unit of energy in the following calculation.
Our model is similar to the interacting spinless fermion model in a quasiperiodic lattice, which has been applied to study MBL \citep{MBL-inc13,MBL-inc4}, but with a periodically kicked potential. We shall demonstrate that the periodically kicked quasiperiodic lattice displays rich dynamical phenomena, including the emergence of multifractal states with no equilibrium counterpart, and that the driving frequency plays an important role in the formation of MBL in addition to the quasiperiodic potential strength.
The dynamical evolution of the periodically kicked system
is determined by the Floquet unitary propagator, which can be written as
\begin{align}
U(T) =e^{-i H_0 T}e^{-i\lambda\sum_{j=1}^{L}\cos\left(2\pi j\alpha+\phi\right)\hat{n}_{j}} . \label{eq:floquet propagator}
\end{align}
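Restricted to the noninteracting case ($V=0$), where $U(T)$ acts on single-particle states, the Floquet operator can be constructed and diagonalized numerically as in the following sketch (a small-$L$ illustration; function names are hypothetical):

```python
import numpy as np

def kicked_aa_floquet(L, lam, T, J=1.0, phi=0.0):
    """Single-particle Floquet operator of the noninteracting (V = 0)
    kicked AA model under PBC: U(T) = exp(-i H_J T) exp(-i sum_j mu_j n_j),
    with the kick diagonal in the site basis."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    # hopping Hamiltonian with periodic boundary conditions
    HJ = np.zeros((L, L))
    for j in range(L):
        HJ[j, (j + 1) % L] -= J
        HJ[(j + 1) % L, j] -= J
    E, W = np.linalg.eigh(HJ)                  # exponentiate via the eigenbasis
    U_hop = (W * np.exp(-1j * E * T)) @ W.conj().T
    mu = lam * np.cos(2.0 * np.pi * alpha * np.arange(L) + phi)
    return U_hop @ np.diag(np.exp(-1j * mu))

# quasienergies theta_n from the eigenvalues of U(T)
U = kicked_aa_floquet(L=89, lam=0.2, T=0.2)
theta = np.sort(np.angle(np.linalg.eigvals(U)))
```

Taking $L$ as a Fibonacci number (here 89) together with PBC mirrors the setup used in the spectral statistics below.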
For a given initial
state $\psi(t)$, the final state after $N$ periods can be written
as $\psi\left(t+NT\right)=\left[U\left(T\right)\right]^{N}\psi\left(t\right)$.
For a Floquet unitary propagator, all the quasienergies are distributed
on the unit circle and we use angles $\theta_{n}$ to denote different quasienergies:
\begin{equation}
\Theta=\left\{ \theta_{n}|\lambda_{n}=e^{i\theta_{n}},\theta_{n}\in[-\pi,\pi)\right\} , \label{eq:theta}
\end{equation}
where $\lambda_{n}$ are the eigenvalues of the operator $U\left(T\right)$ and
$\theta_{n}<\theta_{n+1}$.
In analogy to Hamiltonian systems, we define $s_{n}=\theta_{n+1}-\theta_{n}$, and the level spacing distribution of $\theta$
can be captured by the ratio between adjacent gaps \cite{drivingETH1,Abanin MBL Floquet 2015}:
\begin{equation}
r_n = \frac{\min\{s_{n},s_{n+1}\}}{\max\{s_{n},s_{n+1}\}} . \label{eq:r}
\end{equation}
The average of $r_n$ is introduced as
\begin{equation}
\langle r\rangle=\frac{1}{\mathcal{D}}\sum_{n=1}^{\mathcal{D}} r_n , \label{eq:ravg}
\end{equation}
where $\mathcal{D}=\mathcal{N}-1 $ with $\mathcal{N}$ being the size of Hilbert space.
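The average gap ratio can be computed directly from the sorted quasienergies, for instance as in the following sketch (the small gap across the branch cut at $\pm\pi$ is neglected here; the function name is hypothetical):

```python
import numpy as np

def mean_gap_ratio(theta):
    """Average ratio <r> of consecutive quasienergy gaps for sorted angles,
    following s_n = theta_{n+1} - theta_n."""
    theta = np.sort(np.asarray(theta))
    s = np.diff(theta)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()
```

For uncorrelated (Poisson-distributed) levels this estimator approaches $2\ln 2 - 1 \approx 0.386$, while a rigid, equally spaced spectrum gives $r_n = 1$.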
For the static Hamiltonian system with eigenvalues $E_n$, we have $s_{n}=E_{n+1}-E_{n}$ and the ratio can serve
as a probe of the phase transition between the ergodic and MBL phase \citep{David Huse ratio,MBL5}.
In the ergodic phase the energy level spacings satisfy the Wigner-Dyson
distribution with $\langle r\rangle \approx 0.529$, whereas in the localized
phase with the Poisson distribution $\langle r\rangle \approx 0.387$ \citep{David Huse ratio}.
This quantity was also applied to study the disordered Floquet systems \cite{meng cheng energy level statistics,Abanin MBL Floquet 2015}.
\section{Spectral statistics and multifractal analysis for the kicked AA model}
We first consider the noninteracting case with $V=0$, for which the model (\ref{H}) reduces to the periodically kicked AA model \cite{Qin}.
In the high-frequency region, the
time evolution can be effectively described by the AA model with the critical point given by $\lambda/T =2$. However, in the low-frequency region, the time evolution is far more intricate and has not been fully explored yet \cite{Qin,Sacramento}.
Here we shall scrutinize the kicked AA model by studying the spectral statistics of the quasienergies. To reduce the impact of the edge, we take the lattice size as the Fibonacci
number and consider the periodic boundary condition (PBC) in the calculation.
In Fig.1(a), we show $\langle r\rangle$ (the average ratio of two consecutive quasienergy gaps) in the parameter space spanned by $\lambda$ and $T$.
In the high-frequency region of $T \ll 1$, it is shown that there is an abrupt transition in $\langle r\rangle$ when the parameter $\lambda$ or $T$ crosses over the diagonal line $\lambda/T=2$. This is also witnessed in Fig.1(b) and 1(c), which indicate an abrupt transition around the diagonal region of $\lambda/T=2$. When the system is in the dynamically extended region, the
ratio is close to 0, whereas $\langle r\rangle \approx 0.39$ in the region of dynamical localization. It turns out that increasing $\lambda$ can lead to a transition from the extended region to the localized region in the high-frequency region.
On the other hand, when $T>1$, an abrupt transition is observed before reaching the diagonal line of $\lambda/T=2$, due to the emergence of multifractal eigenstates, which are neither fully localized nor fully extended and are separated from the extended (localized) eigenstates by extended/localized-to-multifractal edges.
\begin{figure}
\centering{}\includegraphics[scale=0.17]{fig1}\caption{(a) The average ratio $\langle r\rangle$ in the parameter space spanned by $\lambda$ and $T$. (b) $\langle r\rangle$ vs $T$ for fixed $\lambda$.
(c) $\langle r\rangle$ vs $\lambda$ for fixed $T$. The dashed lines are given by $\lambda/T=2$. The system is under the PBC with
$L=987$ and we take 100 samples for each point. \label{fig:r-lambda}}
\end{figure}
We note that $\langle r\rangle \approx 0$ is due to the existence of nearly double degeneracy in the dynamically extended region.
To see it clearly, we define the even-odd (odd-even) level spacings of the quasienergies as
\[
s_{n}^{e-o}=\theta_{2n}-\theta_{2n-1}\left(s_{n}^{o-e}=\theta_{2n+1}-\theta_{2n}\right).
\]
In Fig.2(a)-(c), we show the even-odd (odd-even)
spacings of the kicked AA model with $L=987$, $T=0.2$ and $\lambda=0.2,0.4,0.6$, corresponding to extended, critical
and localized phases, respectively.
In the extended region, the spectrum is nearly doubly-degenerate and hence there is an obvious gap between $s_{n}^{e-o}$ and $s_{n}^{o-e}$. In
the localized region, $s_{n}^{e-o}$ and $s_{n}^{o-e}$ have the same
form and the gap vanishes. In the critical
region, distributions of $s_{n}^{e-o}$ and $s_{n}^{o-e}$ are strongly scattered.
Our results demonstrate that the distribution of quasienergies of the kicked AA model in the high-frequency region displays behavior similar to
that of the spectrum of the AA model, for which the even-odd (odd-even)
spacings $s_{n}^{e-o}=E_{2n}-E_{2n-1}$ $\left(s_{n}^{o-e}=E_{2n+1}-E_{2n}\right)$
were utilized to distinguish the different phases of the AA model \citep{xiaolong Deng/PRL}.
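The even-odd and odd-even spacings defined above can be extracted from the sorted quasienergies as follows (a short sketch; indices follow the 1-based convention of the text):

```python
import numpy as np

def even_odd_spacings(theta):
    """Even-odd and odd-even level spacings of the sorted quasienergies.

    With 1-based labels, s_eo[n] = theta_{2n} - theta_{2n-1} and
    s_oe[n] = theta_{2n+1} - theta_{2n}.  A finite gap between the two
    families signals the near double degeneracy of the extended phase.
    """
    theta = np.sort(np.asarray(theta))
    m = len(theta) // 2
    s_eo = theta[1:2 * m:2] - theta[0:2 * m:2]
    s_oe = theta[2::2] - theta[1::2][:len(theta[2::2])]
    return s_eo, s_oe
```

For a nearly doubly degenerate spectrum the $s^{e-o}$ family collapses to zero while $s^{o-e}$ stays finite, which is exactly the gap visible in Fig.~2(a).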
\begin{figure}
\begin{centering}
\includegraphics[scale=0.35]{fig2}
\par\end{centering}
\centering{}\caption{Even-odd (odd-even) level spacings for the kicked AA model with $T=0.2$ and $L=987$ under the PBC. From top to bottom:
(a) $\lambda=0.2$ (extended), (b) $\lambda=0.4$ (critical), and (c) $\lambda=0.6$ (localized).}
\end{figure}
Now we study the low-frequency region where the distributions of $s_{n}^{e-o}$ and $s_{n}^{o-e}$ become intricate. As concrete examples, we consider systems with parameters $\left(\lambda=1.6,T=1.5\right)$
and $\left(\lambda=3,T=0.8\right)$, which are distributed symmetrically about the diagonal line $\lambda=2T$ in the parameter space and can be connected by a dual transformation (see Appendix A).
The even-odd (odd-even) spacings for these systems are displayed in Fig.\ref{fig:mobility-edge}(a) and (b), respectively. It is shown that the distribution of even-odd (odd-even) spacings exhibits different behavior in the middle and side regions, which are separated by some edges. While there is a gap between $s_{n}^{e-o}$ and $s_{n}^{o-e}$ in the middle region, their distributions are strongly scattered in the side regions, as shown in Fig.\ref{fig:mobility-edge}(a). The distribution suggests that the states in the middle and side regions are extended and critical (multifractal) states, respectively. In contrast, for the system shown in Fig.\ref{fig:mobility-edge}(b), while the distributions in side regions are similar, the gap vanishes in the middle region, suggesting that the states in the middle region are localized states.
To unveil the properties of states more clearly, we also calculate the inverse participation ratio (IPR) for the eigenstate of the
unitary operator, which is defined as $P_n = \sum_{i=1}^{L} |\psi_{n,i}|^4$ with $|\psi_{n} \rangle = \sum_{i=1}^{L} \psi_{n,i} |i \rangle$ representing the $n$-th eigenstate of $U(T)$. As shown in Fig.\ref{fig:mobility-edge}(c) for the system with $\lambda=1.6$ and $T=1.5$, we see $P_n \sim 1/L$ in the middle region, indicating that the corresponding states are extended states. In the side regions, the corresponding states are multifractal (critical) states which are separated from the extended states by the presence of extended-to-multifractal edges.
On the other hand, for the system with $\lambda=3$ and $T=0.8$ as shown in Fig.\ref{fig:mobility-edge}(d), the IPR tends to a finite number in the middle region with the corresponding state being localized. Also there exist localized-to-multifractal edges separating the localized and critical regions. By carrying out a finite-size scaling analysis for the eigenstates, we can distinguish the extended, localized and multifractal states.
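The IPR diagnostic used here, $P_n = \sum_i |\psi_{n,i}|^4$, can be computed for all eigenstates of the Floquet operator in one step (a minimal sketch; eigenvectors returned by the solver are unit-normalized):

```python
import numpy as np

def ipr(U):
    """IPR P_n = sum_i |psi_{n,i}|^4 for every eigenstate of a unitary U.

    P_n ~ 1/L for extended states, P_n = O(1) for localized states,
    and P_n ~ L^{-D_2} with 0 < D_2 < 1 for multifractal states.
    """
    _, vecs = np.linalg.eig(U)              # columns are eigenstates of U(T)
    return np.sum(np.abs(vecs) ** 4, axis=0)
```

Two limiting cases make the scaling transparent: a translation operator has plane-wave eigenstates with $P_n = 1/L$, while a diagonal unitary has site-localized eigenstates with $P_n = 1$.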
\begin{figure}
\begin{centering}
\includegraphics[scale=0.29]{fig3} \caption{Even-odd level spacing ($s^{e-o}$) and odd-even level spacing ($s^{o-e}$) for (a) $T=1.5, \lambda=1.6$ and (b) $T=0.8,\lambda=3$, respectively. IPR for (c) $T=1.5,\lambda=1.6$ and (d) $T=0.8,\lambda=3$, respectively. The system is under the PBC with $L=987$.
\label{fig:mobility-edge}}
\par\end{centering}
\end{figure}
\begin{figure}[h]
\begin{centering}
\includegraphics[scale=0.42]{fig4}\caption{Finite-size scaling analysis for different eigenstates. \label{fig:multifractal_analysis}}
\par\end{centering}
\end{figure}
Now we carry out a finite-size scaling analysis for eigenstates in
different regions \citep{xiaolong Deng/PRL,Sarkar2,WangYC2016}. For a given eigenstate $|\psi_{n}\rangle=\sum_{j}\psi_{n}(j)|j\rangle$,
we can use the moments
\begin{equation}
I_{q}(n)=\sum_{j}|\psi_{n}(j)|^{2q}\propto L^{-D_{q}(q-1)}
\end{equation}
to characterize the distribution information of the eigenstate. $D_{q}$ are the fractal dimensions
and take different values in different regions: $D_{q}=1$ in the extended
region, $D_{q}=0$ in the localized region and $0<D_{q}<1$ in the multifractal
region. In our calculation, we choose $q=2$ and the fractal dimensions
can be obtained from the inverse participation ratio $(I_{2})$.
After a simple transformation, it is easy to get
\[
-\ln[I_{2}(n)]/\ln(L)=-c/\ln(L)+D_{2},
\]
where $c$ is a size-independent coefficient. We can obtain $D_{2}$
from the intercept of the curve in the plane spanned by $1/\ln(L)$ and $-\ln[I_{2}(n)]/\ln(L)$. In Fig.\ref{fig:multifractal_analysis}, we plot the curves
in different regions, as marked by the red squares in Fig.\ref{fig:mobility-edge}(c) and (d). For localized states and extended states, we choose
a typical eigenstate at the middle of the spectrum ($n/L=1/2$). For
multifractal states, we choose $20$ eigenstates near the $n$-th eigenstate
with $n/L=0.1$ and take an average to eliminate fluctuations.
After a linear fitting, we find that $D_{2}=1$ for extended states
and $D_{2}=0$ for localized states when $L\rightarrow\infty$. We
also find that $0<D_{2}<1$ for the multifractal eigenstates as
$L\rightarrow\infty$. This confirms the existence of the multifractal
states.
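The extraction of $D_2$ from the relation $-\ln[I_2]/\ln L = -c/\ln L + D_2$ amounts to a linear fit in the variable $1/\ln L$, which can be sketched as:

```python
import numpy as np

def fractal_dimension_d2(sizes, i2_values):
    """Extract D_2 from -ln I_2 / ln L = -c / ln L + D_2 by a linear fit.

    sizes     : sequence of lattice sizes L (e.g. Fibonacci numbers)
    i2_values : the corresponding inverse participation ratios I_2
    Returns the intercept D_2 of the fit in the variable 1/ln L.
    """
    logs = np.log(np.asarray(sizes, dtype=float))
    x = 1.0 / logs
    y = -np.log(np.asarray(i2_values, dtype=float)) / logs
    slope, intercept = np.polyfit(x, y, 1)
    return intercept
```

Since $I_2 = e^{c} L^{-D_2}$ is exactly linear in these variables, the fit recovers $D_2$ as the intercept, with the slope giving $-c$.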
Similar to the AA model, we note that the unitary operator fulfills a self-duality relation at $\lambda=2T$ after a dual transformation (see appendix A for details). The existence of a duality mapping suggests that there is a one-to-one
correspondence for parameters which are symmetric about $\lambda=2T$, for
example $T=0.8,\lambda=3$ and $T=1.5,\lambda=1.6$. When crossing the self-duality point, there
is a sharp transition from localized (extended) to extended (localized) states.
As we discussed above, there are multifractal states in the low frequency
region which can be detected by analyzing the spectrum and eigenvectors.
The transition from extended to multifractal or localized to
multifractal cannot be predicted by the self-duality relation. In
order to study the behavior in the region where the multifractal states
begin to appear, we calculate $\langle r\rangle$ along the line
$T=1$ and perform a finite-size analysis. As shown in Fig.\ref{fig:transition}, there is a sharp transition when $\lambda\approx1.68$
and this value is smaller than the self-dual point $\lambda/T=2$. When
we increase the system size, we find that the change of $\langle r\rangle$
around the transition point becomes sharper and sharper, which is a
signature of a transition rather than a crossover. Fixing the quasiperiodic potential strength $\lambda=2$ and tuning
the period $T$, we observe that there is also a sharp change of $\langle r\rangle$
around $T\approx0.8$ as shown in Fig.\ref{fig:transition}.
It is worth pointing out that a sharp change around the self-duality point $\lambda/T=2$ is always observed in Fig.\ref{fig:transition}(a) and (b).
Such a change is induced by the change from extended (localized) to localized (extended) states in the middle region when crossing the self-duality point.
Besides, we define a quantity to describe the fraction of multifractal states:
$$ Q = n_{mul}/n_{all}, $$
where $ n_{mul}$ is the number of multifractal states and $ n_{all}$ is the number of all the eigenstates. We show the change of $Q$ versus $\lambda$ and $T$ for the system with $L=987$ in Fig.\ref{fig:transition}(a) and Fig.\ref{fig:transition}(b), respectively. Below the first transition point, $Q=0$. When the parameters satisfy $\lambda=2T$, all the eigenstates are multifractal and $Q=1$. The sharp change of $\langle r \rangle$ has a one-to-one correspondence to the change of $Q$. When $\langle r\rangle$ goes to zero or to the Poisson value, the fraction of multifractal states approaches zero, indicating completely delocalized or localized bands, respectively.
\begin{figure}
\centering{}\includegraphics[scale=0.30]{fig5a}\includegraphics[scale=0.30]{fig5b}\caption{(a) $\langle r\rangle$ versus $\lambda$ with fixed $T=1$. (b) $\langle r\rangle$ versus $T$ with fixed $\lambda=2$. Black lines (right y axis) depict the fraction of multifractal states for the system with $L=987$ and different parameters. We take 100 samples for each point.\label{fig:transition}}
\end{figure}
In general, multifractal eigenstates or mobility edges can lead to exotic dynamical behaviors. Next we study the expansion dynamics of a wavepacket in the region with multifractal states and try to extract the multifractal
exponents from the dynamical behavior. We label the center of the lattice as $j_{0}$ and choose the initial state localized at site $j_{0}$. The time evolution of an initial state can be expanded by the eigenstates of $U(T)$:
\[
|\psi(t)\rangle=\sum_{m}e^{-i\theta_{m}t}\langle\psi_{m}|\psi(0)\rangle|\psi_{m}\rangle = \sum_{j} C_{j}(t) | j \rangle,
\]
with $C_{j}(t) = \sum_m C^{(m)*}_{j_0}C^{(m)}_{j}e^{-i \theta_m t}$, where $C^{(m)}_{j}=\langle j| \psi_m \rangle$.
Here we focus on the long-time survival probability $P(r)$ defined as
\begin{equation}
P(r)=\sum_{|j-j_{0}|\leq r/2}|C_{j}(t\rightarrow\infty)|^{2},
\end{equation}
which is the
probability of finding the particle in sites within the region $\left(-r/2,r/2\right)$
after a long time evolution. $P(r)$ is proportional
to $(r/L)^{\widetilde{D_{2}}}$, where $\widetilde{D_{2}}$ is the
generalized dimension of the spectral measure \cite{xiaolong Deng/PRL,XuZH}. For one-dimensional
systems, the dimension of eigenstates fulfills $D_{2}=\widetilde{D_{2}}$\cite{XuZH,Huckestein}.
It is obvious that $D_{2}=0$ in the localized region and $D_{2}=1$ in the extended region.
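Numerically, $P(r)$ can be evaluated without explicit long-time propagation by using the infinite-time (diagonal-ensemble) average, a sketch that assumes nondegenerate quasienergies so that the time average of $|C_j(t)|^2$ reduces to $\sum_m |C^{(m)}_{j_0}|^2 |C^{(m)}_j|^2$:

```python
import numpy as np

def survival_probability(U, r):
    """Long-time survival probability P(r) for a particle launched at the
    central site j0 = L // 2, evaluated in the diagonal (infinite-time-
    average) ensemble.  Assumes nondegenerate quasienergies, so the
    time average of |C_j(t)|^2 is sum_m |C^(m)_{j0}|^2 |C^(m)_j|^2.
    """
    L = U.shape[0]
    j0 = L // 2
    _, vecs = np.linalg.eig(U)                 # columns: eigenstates of U(T)
    w = np.abs(vecs[j0, :]) ** 2               # overlaps |C^(m)_{j0}|^2
    p_inf = (np.abs(vecs) ** 2) @ w            # time-averaged |C_j|^2
    window = np.abs(np.arange(L) - j0) <= r / 2
    return p_inf[window].sum()
```

The two limits are immediate: for a fully localized $U$ one has $P(0)=1$, while for a fully extended $U$ the weight $p_\infty$ is flat and $P(r)$ grows linearly toward $1$ at $r=L$.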
Consider the case with a localized-to-multifractal edge, for which the eigenstates
are either localized or multifractal. While the wavepacket does not expand
in the localized region, the multifractal states play an important role in the
expansion of wavepacket. As shown in Fig.\ref{fig:Pr}, $P(r)$ shows quite different behaviors in the localized region ($T=0.3,\lambda=1$), extended region ($T=0.5,\lambda=0.6$) and region with localized-to-multifractal edge ($T=0.8,\lambda=3$).
In the localized region, all eigenstates are localized, and $P(r)$
grows to 1 rapidly because the wavefunction is mainly distributed
at the initial position. In the extended region, all the eigenstates are extended. $P(r)$ grows uniformly and the wavefunction is distributed in space
uniformly. In the region with localized-to-multifractal edge,
$P(r)$ increases with $r$ but starts from a nonzero value at $r=0$.
Due to the existence of some localized states, a part of the wavefunction remains at the
initial position. As the increase of $P(r)$ is entirely determined
by the multifractal states, we can extract the average multifractal
exponent by
\begin{equation}\label{fitting}
\ln(P(r)-c_{0})\approx D_{2}\ln(r/L)+\ln(1-c_0),
\end{equation}
where $c_{0}$ is a constant and depends on the proportion of localized states in all the eigenstates \cite{xiaolong Deng/PRL}. $D_{2}$ is determined by the slope of $\ln(P(r)-c_{0})-\ln(r/L)$ line, which gives rise to $D_{2}\approx0.53$.
Because all eigenstates contribute to the time evolution, the multifractal exponents extracted by the wavepacket dynamics should be an average over all multifractal states. In Fig.\ref{fig:Pr}(b), we
compare the multifractal exponent extracted from the wavepacket dynamics with
the result from the finite size analysis, which also approaches $0.53$ in the limit $L \rightarrow \infty$.
For the case with extended-to-multifractal edge ($T=1.5,\lambda=1.6$), since both the extended and multifractal
eigenstates contribute to the expansion of the wavepacket, it is hard to read out the multifractal exponent directly from $P(r)$. However, we note that one may roughly estimate the average multifractal exponent $D_2'$ by using the duality property, and we get the result $D_2'=0.55$, which is consistent with the result from the finite-size analysis (see Appendix B for details).
\begin{figure}
\centering{}\includegraphics[scale=0.30]{fig6a}\includegraphics[scale=0.20]{fig6b}\caption{(a) Long-time survival probability ($t=10^{7}$) with different parameters.
Fitting result (1): we use Eq.(\ref{fitting}) to fit the curve of $T=0.8,~\lambda=3$ with the fitting parameters $c_0=0.55$ and $D_2=0.53$. Fitting result (2): for the curve of $T=1.5,~\lambda=1.6$, we get $c_0=0.55$ from its dual model. Then we use Eq.(\ref{fitting2}) to fit the curve and get $D_2'=0.55$.
(b) Finite-size analysis for the multifractal states; we take an average over all the multifractal states at fixed parameters. We choose
$L=987$ and take 1000 samples. \label{fig:Pr}}
\end{figure}
\section{Many-body localization in the interacting kicked AA model}
Now we study the interacting system with finite $V$ and consider the half-filling case with $N_{f}/L=1/2$, where $N_f$ is the particle number. In the high-frequency limit, we expect the interaction to induce MBL when the strength of quasiperiodic potential exceeds a critical value. The transition from a dynamical ergodic
phase to MBL phase can also be captured by the average level-spacing ratio $\langle r\rangle$ for the quasienergy spectrum.
In Fig.\ref{fig:Finite-size-critical-scaling} (a), we plot the average
energy level-spacing ratio with a fixed $T=0.1$ versus $\lambda$ for the system with $V=1$ and various system sizes.
We find that the level-spacing ratio changes from about $0.52 \pm 0.1$ to $0.39$ as $\lambda$ increases.
The curves with different $L$ intersect at the same point $\lambda_c \approx 0.31$.
By plotting $\langle r\rangle$ versus the scaled potential strength $(\lambda- \lambda_c)/L^{1/\nu}$
for different system sizes, we find that all curves collapse into a single one, as shown in Fig.\ref{fig:Finite-size-critical-scaling} (b). The finite size analysis gives the transition
point and the critical index as $\lambda_{c} \approx0.3138$ and $\nu \approx0.6$ (see appendix C for more details). In the large size limit, it then follows that $\langle r\rangle\approx0.39$ for $\lambda>\lambda_c$ and $\langle r\rangle\approx0.53$ for $\lambda<\lambda_c$. Our numerical results confirm that quasienergy spectrum statistics follows a Poisson distribution in the MBL phase and a circular orthogonal ensemble (COE) in the ergodic phase \citep{Abanin MBL Floquet 2015,Abanin Thouless Energy Floquet,meng cheng energy level statistics}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.32]{fig7}
\par\end{centering}
\caption{(a) The average ratio $\langle r\rangle$ versus $\lambda$ for systems with $T=0.1$ and different sizes. (b) $\langle r\rangle$ versus scaled $\lambda$ with different sizes collapse into a single curve. Each
data point is averaged over 2000 quasiperiodic disorder realizations for $L=12$
and $L=14$, and 50 realizations for
$L=16$. \label{fig:Finite-size-critical-scaling}}
\end{figure}
The entanglement entropy is another important quantity to distinguish the
ergodic phase and the MBL phase. The entanglement entropy of the system's eigenstates shows distinct
behavior in different phases: it follows a volume
law in the ergodic phase but an area law in the MBL phase \citep{MBL-EE2,MBL-EE1}.
The growth of entanglement entropy with time in the kicked quasiperiodic lattice is also expected to show different dynamical behaviors in the ergodic and MBL phase. We
choose the initial state as $|10\dots10\rangle$ with all the odd sites being occupied and all
the even sites empty. In our calculation, we apply $U(T)^N$ to the initial state $| \psi(0) \rangle$ to get the final state $| \psi(NT) \rangle$. In order to calculate
the growth of entanglement entropy, we divide the system into two
parts A and B with the same length and take the trace of subsystem
B to get the reduced density matrix $\rho_{A}$. The entanglement
entropy can be written as
\[
S=-\mathrm{tr}\left(\rho_{A}\ln\rho_{A}\right).
\]
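In practice $S$ can be evaluated from the Schmidt decomposition of the pure state, so the reduced density matrix never needs to be formed explicitly. The sketch below is generic (not the authors' code): for the half-filling dynamics studied here, $\psi$ would be the full many-body state vector after applying $U(T)^N$, reshaped along the A$|$B cut.

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S = -tr(rho_A ln rho_A) of a bipartition A|B.

    psi is the (flattened) pure state on A x B; the Schmidt weights are
    the squared singular values of its (dim_a, dim_b) reshaping, which
    are exactly the eigenvalues of rho_A.
    """
    mat = np.asarray(psi).reshape(dim_a, dim_b)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]                  # drop numerically zero Schmidt weights
    return -np.sum(p * np.log(p))
```

A product state gives $S=0$ and a maximally entangled two-qubit state gives $S=\ln 2$, the two standard sanity checks.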
In Fig.\ref{fig:Entanglement-entropy}(a), we display the entanglement entropy growth for $\lambda=0.1$, $0.3$ and $1$, which exhibits distinct behaviors in different phases. In the
ergodic phase with $\lambda=0.1$, the entanglement entropy increases with time and approaches a saturation value (about $3.916$ at $t=10^4T$).
As shown in Fig.\ref{fig:Entanglement-entropy}(b) for systems with $\lambda=0.1$ and different system sizes, the saturation value of the entanglement entropy displays a linear increase with $L$ in the ergodic phase and fulfills the volume law \citep{page entanglement_entropy}. In contrast, the entanglement entropy in the localized phase takes a small value and is not sensitive to the system size. In Fig.\ref{fig:difference-of-density}(c), we display the long-time behavior of the entanglement entropy in the MBL phase for different interaction strengths. It is shown that the entanglement entropy for the interacting systems grows slowly after a long time evolution, whereas the entanglement entropy for the non-interacting system remains almost unchanged. Although the entanglement entropy in the MBL phase keeps growing, it is always much smaller than that in the ergodic phase \cite{Moore}.
Further, the dynamics of the system can be intuitively illustrated through the evolution of density distributions. In Fig.\ref{fig:difference-of-density}(d), we display
the change of real space density distribution
$
\left\langle \left|n_{i}(t)-n_{i}(0)\right|\right\rangle
$
for various $\lambda$, where $n_{i}(t)$ is the time-dependent local density at site
$i$, $\left\langle \dots\right\rangle $ means
sample averages over different phases $\phi$, and $t=NT$ with $N=10^4$ kicked periods. In the ergodic region with $\lambda=0.1$,
$n_{i}(t)$ tends to $0.5$, meaning that all the particles are evenly
distributed over the sites after a long time evolution, whereas in the localized
region with $\lambda=1$ the change of the density distribution is small, which means the
system retains the initial-state information.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.24]{fig8}
\par\end{centering}
\centering{}\caption{Dynamical behavior in the kicked interacting AA model with kicked period $T=0.1$. (a) Entanglement entropy growth with $L=14$ and different quasiperiodic strength $\lambda=0.1$, $0.3$ and $1$. (b) Entanglement entropy growth in the ergodic region with $\lambda=0.1$ and different system sizes
\label{fig:Entanglement-entropy}. The inset in (b) shows the saturation values of the entanglement entropy for different system sizes. (c) The evolution of the entanglement entropy in the MBL region with $\lambda=1$ and different interaction strengths. (d) Density distribution after $10^4$ kicked periods for the system with $L=14$ and different $\lambda$\label{fig:difference-of-density}. In our calculation, we choose 1000 samples for $L=10$, $12$ and 500 samples for $L=14$.}
\end{figure}
In the non-interacting case, we have shown the existence of multifractal states and extended/localized-to-multifractal edges in the low-frequency region. Now we study the fate of multifractal states and check whether extended/localized-to-multifractal edges survive in the interacting case.
To this end, we plot the energy-resolved spectral statistics in the parameter space spanned by $\lambda$ and $\epsilon$ for $V=1$, $T=0.3$ and $T=1.5$
in Fig.\ref{fig:left-:energy-resolved-spectral}, where $\epsilon$ is defined as
\[
\epsilon=\frac{\theta_n-\theta_{min}}{\theta_{max}-\theta_{min}},
\]
which labels the position of the quasienergy $\theta_n$ within the quasienergy spectrum.
We note that the energy-resolved spectral statistics have been used to characterize the energy-resolved MBL in Ref.\citep{energy-resolved}.
In the high frequency
region with $T=0.3$, it is shown that a transition from ergodic to MBL phase occurs when we
increase $\lambda$. On the other hand, in the low frequency region with $T=1.5$, we do not observe such a transition and the system is always in the ergodic phase. Our result shows no signature for the existence of extended/localized-to-multifractal edges in the interacting system.
\begin{figure}
\centering{}\includegraphics[scale=0.30]{fig9a}\includegraphics[scale=0.30]{fig9b}\caption{Left: Energy-resolved spectral statistics for $T=0.3$. Right: Energy-resolved
spectral statistics for $T=1.5$. We choose $L=14$ and take 1000
samples.\label{fig:left-:energy-resolved-spectral}}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{fig10}
\par\end{centering}
\caption{The average ratio $\langle r\rangle$ in the parameter
space spanned by $\lambda$ and $T$ for $V=1$ and $L=12$. We take 1000 samples in our simulation.\label{fig:phasediagram}}
\end{figure}
In Fig.\ref{fig:phasediagram}, we show the average level-spacing ratio $\langle r\rangle$ in the parameter
space spanned by $\lambda$ and $T$ for a system of $L=12$ with the interaction strength $V=1$. In comparison with
the non-interacting case, we find that the interaction term can lead to the appearance of the MBL phase in the regime where all the eigenstates are localized in the non-interacting limit.
However, in the low-frequency region with the corresponding eigenstates in the non-interacting limit being either extended or
multifractal states, adding an interaction term leads to the thermalization
of the system, characterized by $\left\langle r\right\rangle \approx 0.53$. No signature of MBL is observed by further increasing $\lambda$.
In Fig.\ref{figr-T}, we display $\left\langle r\right\rangle$ versus $T$ for a system with the interaction strength $V=1$ and different strengths of the quasiperiodic potential $\lambda$ (a discussion of the effect of the interaction strength can be found in Appendix D). In the low-frequency region, it is shown that the ratio $\left\langle r\right\rangle$ increases with $T$ for various $\lambda$ and approaches $0.53$. Our results show that MBL vanishes when the system enters the low-frequency region, and the system is ergodic even for a very large $\lambda$. The absence of MBL in the low-frequency region is related to the emergence of localized-to-multifractal edges in the quasienergy spectrum of the non-interacting kicked AA model discussed in the previous section. The presence
of a localized-to-multifractal edge means that both localized and multifractal single-particle orbitals are present, and their interplay with the interaction may induce the absence of MBL. Although MBL can occur in static interacting systems with a single-particle mobility edge \cite{MBL-inc1,MBL-inc2}, it has been shown that the presence of a mobility edge anywhere in the spectrum is enough to induce delocalization for any driving strength and frequency \cite{F_MBL1}. Our model, however, provides a different scenario in which either the presence or absence of MBL is possible by tuning the driving frequency.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{fig11}
\par\end{centering}
\caption{The average ratio $\left\langle r\right\rangle $ versus $T$ with different quasiperiodic
strengths $\lambda$. \label{figr-T} The system size is $L=14$ and we take the average over 500 samples.}
\end{figure}
\section{Summary}
In summary, we have studied the phenomenon of dynamical localization and many-body localization as well as their breakdown in the periodically kicked
AA model. By analyzing the quasienergy spectrum statistics in the non-interacting limit, we have verified the existence of dynamical localization transition in the high-frequency region, which is characterized by an abrupt change of average quasienergy level-spacing ratio $\langle r \rangle$ across the self-dual point $\lambda/T=2$. On the other hand, the spectrum statistics becomes intricate in the low-frequency region due to the emergence of the extended/localized-to-multifractal edges in the quasienergy spectrum, which separate the multifractal states from the localized (extended) states.
We also find that there is a sharp transition when the multifractal states occur in the low frequency region. Furthermore, we discuss the dynamical behavior in different regions and extract the multifractal exponent from the long-time survival probability.
For the interacting periodically kicked AA model, we have found the occurrence of a transition from the ergodic phase to the MBL phase in the high-frequency
region, when the quasiperiodic potential strength exceeds a critical value.
The transition point and the
critical exponent of the ergodic-MBL transition are obtained by a finite-size scaling analysis. We also calculate
the time evolution of entanglement entropy and the density
distribution to confirm the existence of the MBL phase.
We find that the interaction can lead to the thermalization of the system when there are multifractal states in the non-interacting limit,
and demonstrate that the MBL phase vanishes even for strong quasiperiodic potential. Our results show that the interplay of quasiperiodic disorder, driven period, and interaction can lead to rich dynamical phenomena in the periodically kicked AA model.
Note added: Recently, we became aware of the experimental realization of the kicked AA quasiperiodic model studied in the present work and the study of multifractality in a related parallel experimental work\cite{exp-kickedAA}.
\begin{acknowledgments}
The work is supported by National Key
Research and Development Program of China (Grant No.
2021YFA1402104), the NSFC under Grants No.12174436, No.11974413
and No.T2121001 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB33000000.
\end{acknowledgments}
\section{Introduction}
This article is a continuation of the prequel article \cite{VO5}, on the topic
of Rankin-Selberg $L$-functions in cyclotomic towers, and particularly
the nonvanishing of their central values in families. The purpose of the
present article is to deduce a much stronger result towards the conjecture
posed in \cite{VO5} using only the theory of $p$-adic $L$-functions. In
particular, we establish an asymptotic analogue of Mazur's
conjecture in the non self-dual setting, with the remaining cases of small
ring class conductors being reduced to a straightforward nonvanishing criterion
for individual cyclotomic twists (Conjecture \ref{conjecture} below). In any case,
the main result presented here applies to show asymptotic bounds for ranks of elliptic
curves in maximal abelian $p$-extensions of imaginary quadratic fields via the associated
two-variable Iwasawa main conjecture divisibilities (Theorem \ref{main3} below).
Let $K$ be an imaginary quadratic field of discriminant $D<0$ and
associated quadratic character $\omega$. Fix a rational prime $p$.
Let $R_{\infty}$ denote the maximal abelian extension of $K$ unramified
outside of $p$. Hence, $R_{\infty}$ can be described as the compositum of
towers $K[p^{\infty}]K(\zeta_{p^{\infty}})$, where $K[p^{\infty}] =
\bigcup_{n \geq 0}K[p^n]$ is the union of all ring class extensions
of $p$-power conductor over $K$, and $K(\zeta_{p^{\infty}}) =
\bigcup_{n \geq 0} K(\zeta_{p^n})$ the extension obtained by
adjoining to $K$ all $p$-power roots of unity. The ring class tower
$K[p^{\infty}]$ is Galois over ${\bf{Q}}$, and of generalized dihedral type.
Its Galois group over $K$ is isomorphic as a topological group to a
finite extension of ${\bf{Z}}_p$. The cyclotomic tower $K(\zeta_{p^{\infty}})$
is Galois over ${\bf{Q}}$ and abelian. Its Galois group over $K$ is isomorphic
as a topological group to ${\bf{Z}}_p^{\times}$. Let us write $\mathcal{G}$ to
denote the Galois group ${\operatorname{Gal}}(R_{\infty}/K)$, with $\varOmega$ the Galois
group ${\operatorname{Gal}}(K[p^{\infty}]/K)$ of the dihedral part, and $\varGamma$ the Galois
group ${\operatorname{Gal}}(K(\zeta_{p^{\infty}})/K)$ of the cyclotomic part, so that
$\mathcal{G} \approx \varOmega \times \varGamma$. The torsion
subgroup $G_0$ of $\mathcal{G}$ is finite, and corresponds to the
Galois group ${\operatorname{Gal}}(L/K)$ of the maximal tamely ramified extension
$L$ of $K$ contained in $R_{\infty}$. The quotient group $G$ of
$\mathcal{G}$ mod $G_0$ is then topologically isomorphic to
${\bf{Z}}_p^2$, with a corresponding decomposition into dihedral and
cyclotomic parts denoted by $G \approx \Omega \times \Gamma$.
We consider the set $\mathfrak{X}$ of finite order characters $\mathcal{W}$ of
$\mathcal{G}$, which by class field theory can be identified with the set
of finite order Hecke characters of $K$ unramified outside of $p$ and infinity.
We shall commit an abuse of notation throughout in always making such an
identification implicitly. It is clear from the description of $\mathcal{G}$ given above
that any such character $\mathcal{W}$ decomposes uniquely into a product of characters
$\mathcal{W} = \rho \psi$, where $\rho$ is a finite order character of the dihedral Galois group
$\varOmega$, and $\psi$ is a finite order character of the cyclotomic Galois group
$\varGamma$. The reader should note that each of these cyclotomic characters $\psi$
takes the form $\psi = \chi \circ {\bf{N}}$, where $\chi$ is some uniquely determined Dirichlet
character of $p$-power conductor, and ${\bf{N}}$ is the norm homomorphism on ideals of $K$.
The decomposition $\mathcal{G} \approx G \times G_0$ also induces a unique factorization
of any such character $\mathcal{W}$ into a product of characters $\mathcal{W} = \mathcal{W}_0 \mathcal{W}_w$, where
$\mathcal{W}_0$ is a tamely ramified character of the Galois group $G_0$, and $\mathcal{W}_w$ is a wildly ramified
character of the Galois group $G$. Thus, given a character $\mathcal{W}$ of the set $\mathfrak{X}$, we shall
always take these factorizations \begin{align*}\mathcal{W} = \rho\psi = \rho \chi \circ {\bf{N}} = \mathcal{W}_0
\mathcal{W}_w \end{align*} to be fixed. We shall also assume throughout that $\mathcal{W}$ has $p$-power
conductor, writing $X \subset \mathfrak{X}$ to denote the subset of such characters.
A classical construction due to Hecke associates to any character $\mathcal{W}$ in $\mathfrak{X}$
a theta series $\Theta(\mathcal{W})$, which is a modular form of weight $1$, level $\Delta$,
and Nebentypus $\omega\chi^2$. Here, $\Delta = \vert D \vert c(\mathcal{W})^2$, where $c(\mathcal{W})$
denotes the (norm of) the conductor of $\mathcal{W}$. Moreover, this theta series $\Theta(\mathcal{W})$
can be characterized as the inverse Mellin transform of the complex $L$-function $L(s, \mathcal{W})$
associated to $\mathcal{W}$. Let us now fix a cuspidal Hecke newform $f$ of weight $2$, level $N$,
and trivial Nebentypus. We write \begin{align*} f(z) &= \sum_{n \geq 1} a_n(f)q^n \end{align*}
to denote its Fourier series expansion at infinity, where as usual $z = x+iy$ denotes an
element of the complex upper-half plane $\mathfrak{H} = \lbrace z \in {\bf{C}}: \Im(z) > 0 \rbrace$,
the coefficients $a_n(f)$ are normalized so that $a_1(f)=1$, and $q = \exp(2 \pi i z)$.
We consider the Rankin-Selberg $L$-function of $f$ times any of the theta series $\Theta(\mathcal{W})$
described above, normalized to have central value at $s = 1/2$, which we denote here by the
symbol $L(s, f \times \mathcal{W})$. Hence, $L(s, f \times \mathcal{W})$ can be described via its Dirichlet series
expansion \begin{align*} L(s, f \times \mathcal{W}) &= L^{(N)}(2s, \omega\chi^2)
\sum_{n \geq 1 \atop (n, c(\mathcal{W}))=1} \left( \sum_{A} \rho(A) r_A(n) \right) a_n(f) \chi(n) n^{-s}, \end{align*}
which converges absolutely for $\Re(s) \gg 1$. Here, $L^{(N)}(s, \omega\chi^2)$ denotes the usual Dirichlet
$L$-series of the character $\omega\chi^2$, with primes dividing $N$
removed from its Euler product. We have also used the factorization $\mathcal{W} = \rho
\chi \circ {\bf{N}}$ of $\mathcal{W}$ into dihedral and cyclotomic parts described above. Moreover,
we have viewed the underlying dihedral character $\rho$ as a ring class character of the
class group ${\operatorname{Pic}}(\mathcal{O}_c)$, where $\mathcal{O}_c = {\bf{Z}} + c\mathcal{O}_K$ is
the ${\bf{Z}}$-order of conductor $c = c(\rho)$ in $K$. Finally, given a class $A$ in
${\operatorname{Pic}}(\mathcal{O}_c)$, we have written $r_A(n)$ to denote the number of ideals of
absolute norm $n$ in $A$. The classical Rankin-Selberg method shows that this
$L$-function $L(s, f \times \mathcal{W})$ has an analytic continuation to the complex plane,
and moreover that its completed $L$-function \begin{align*} \Lambda(s, f \times \mathcal{W})
&= (N\Delta)^s \Gamma_{\bf{R}}(s+1/2)\Gamma_{\bf{R}}(s+3/2)L(s, f \times \mathcal{W}) \end{align*}
satisfies the functional equation \begin{align}\label{FE} \Lambda(s, f \times \mathcal{W}) &=
\epsilon(s, f \times \mathcal{W})\Lambda(1-s, f \times \overline{\mathcal{W}}). \end{align} Here, we have
written $\Gamma_{\bf{R}}(s)$ to denote $\pi^{-s/2}\Gamma(s/2)$,
with $\epsilon(s, f \times \mathcal{W})$ the epsilon factor associated to
$\Lambda(s, f \times \mathcal{W})$, and $\overline{\mathcal{W}}$ to denote the contragredient
character associated to $\mathcal{W}$. The epsilon factor at the central point
$\epsilon(1/2, f \times \mathcal{W})$ is a complex number of modulus one known as
the {\it{root number}} of $L(s, f \times \mathcal{W})$. In this particular setting, one can
see from the classical derivation of $(\ref{FE})$ via convolution that the root
number $\epsilon(1/2, f \times \mathcal{W})$ is given simply by $-\omega\chi^2(N')$, where
$N'$ denotes the prime-to-$p$ part of the level $N$ of $f$, at least if $N$ is prime
to the discriminant $D$ of $K$.
The $L$-function $L(s, f \times \mathcal{W})$ for any character $\mathcal{W}$ in $\mathfrak{X}$
is said to be {\it{self dual}} if the coefficients in its Dirichlet series expansion
are real-valued, or equivalently if its root number $\epsilon(1/2, f \times \mathcal{W})$
takes values in the set $\lbrace \pm 1 \rbrace$. This is well known to be the case
for the $L$-functions $L(s, f \times \mathcal{W})$ where $\mathcal{W} = \rho$ is a dihedral or
ring class character in the description given above. Moreover, if $\mathcal{W} = \rho$
is such a character, then it is easy to see that the functional
equation $(\ref{FE})$ relates the same completed $L$-function on either side, i.e. that
$\Lambda(s, f \times \mathcal{W}) = \Lambda(s, f \times \overline{\mathcal{W}})$. This is a consequence
of the well known fact that such characters (viewed as Hecke characters of $K$)
are equivariant with respect to complex conjugation (cf. \cite[p. 384]{Ro2}). In this
particular setting with $\mathcal{W} = \rho$, the condition that the root number
$\epsilon(1/2, f \times \mathcal{W})$ equal $-1$ then imposes the vanishing of the associated
central value $L(1/2, f \times \mathcal{W})$. To distinguish this particular case in all that follows,
let us define a pair $(f, \mathcal{W})$ to be {\it{exceptional}} if (i) $\mathcal{W} = \rho$ is a ring class character
and (ii) the root number $\epsilon(1/2, f \times \mathcal{W})$ is $-1$. We then define a pair $(f, \mathcal{W})$ for
any given character $\mathcal{W}$ in the set $\mathfrak{X}$ to be {\it{generic}} if it is not exceptional.
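For a simple illustration of this dichotomy: since the factorization $\mathcal{W} = \rho\psi$ is unique, any character $\mathcal{W}$ in $\mathfrak{X}$ whose cyclotomic part $\psi = \chi \circ {\bf{N}}$ is nontrivial cannot be a ring class character, and hence the associated pair $(f, \mathcal{W})$ is automatically generic, irrespective of the root number $\epsilon(1/2, f \times \mathcal{W})$. The exceptional pairs are thus precisely those of the form $(f, \rho)$ with $\rho$ a ring class character satisfying $\epsilon(1/2, f \times \rho) = -1$.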
The purpose of the present work, following that of the prequel \cite{VO5}, is to
study the nonvanishing behaviour of the central values $L(1/2, f \times \mathcal{W})$
in the generic setting. Such nonvanishing is predicted by the
generalized conjecture of Birch and Swinnerton-Dyer via the associated
context of two-variable main conjectures for elliptic curves, as we explain
in some more detail below. To describe the expected behaviour,
let us first recall the celebrated algebraicity theorem of Shimura \cite{Sh2},
in particular as it applies to the central values $L(1/2, f \times \mathcal{W})$ described
above. Thus, let us write $F= {\bf{Q}}(a_n(f))_{n \geq 0}$ to denote the extension
of the rational number field ${\bf{Q}}$ obtained by adjoining all of the normalized
Fourier coefficients $a_n(f)$ of $f$. Let us then write $F(\mathcal{W})$ to denote the
extension of $F$ obtained by adjoining the values of $\mathcal{W}$. The main algebraicity
theorem of Shimura \cite{Sh} shows that the values \begin{align}\label{value}
\frac{L(1/2, f \times \mathcal{W})}{\langle f, f \rangle} \end{align} lie in $F(\mathcal{W})$,
where $\langle f, f \rangle$ denotes the Petersson inner product of $f$ with itself.
In particular, these values $(\ref{value})$ are algebraic. They are also Galois
conjugate in the following sense. Writing $\mathcal{W} = \rho \chi \circ {\bf{N}} = \mathcal{W}_0 \mathcal{W}_w$
as above, let us define $P_{c, q; \mathcal{W}_0}$ to be the set of all such characters $\mathcal{W}$,
where $\rho$ is primitive of some conductor $c$, $\chi$ is primitive of some conductor $q$,
and the tamely ramified part $\mathcal{W}_0$ is fixed. The main result of Shimura \cite{Sh} then
implies that the value $L(1/2, f \times \mathcal{W})$ vanishes for some character
$\mathcal{W} \in P_{c, q; \mathcal{W}_0}$ if and only if the value $L(1/2, f \times \mathcal{W})$ vanishes for
all characters $\mathcal{W} \in P_{c, q; \mathcal{W}_0}$. Equipped with this notion, we can associate
to any character $\mathcal{W}$ of $\mathfrak{X}$ an associated Galois average,
\begin{align*}\delta_{[\mathcal{W}]} = \delta_{c, q; \mathcal{W}_0} &=
\vert P_{c, q; \mathcal{W}_0}\vert^{-1} \sum_{\mathcal{W} \in P_{c, q; \mathcal{W}_0}} L(1/2, f \times \mathcal{W}).
\end{align*} Of course, if $(f, \mathcal{W})$ is exceptional, then we can see from the
functional equation $(\ref{FE})$ that the associated Galois average $\delta_{[\mathcal{W}]}$
must vanish. In this setting, one studies instead the central values of the associated first
derivatives $L'(1/2, f \times \mathcal{W})$. One can establish via the formulae of
Gross-Zagier \cite{GZ} and Zhang \cite{Zh} an analogous notion of
Galois conjugacy for these values, as explained in \cite{VO5} (cf. also
\cite{Ro2}). This leads us to define for $k = 0$ or $1$ the notion
of a {\it{$k$-th Galois average $\delta_{[\mathcal{W}]}^{(k)} = \delta_{c, q; \mathcal{W}_0}^{(k)},$}}
\begin{align*} \delta_{[\mathcal{W}]}^{(k)} =
\delta_{c, q; \mathcal{W}_0}^{(k)} &=\vert P_{c, q; \mathcal{W}_0}\vert^{-1}
\sum_{\mathcal{W} \in P_{c, q; \mathcal{W}_0}} L^{(k)}(1/2, f \times \mathcal{W}). \end{align*}
Here, $L^{(0)}(1/2, f \times \mathcal{W})$ is taken to denote the central value
$L(1/2, f \times \mathcal{W})$, and $L^{(1)}(1/2, f \times \mathcal{W})$ that of the derivative
$L'(1/2, f \times \mathcal{W})$. In general, we expect that if the conductor of $\mathcal{W}$
is sufficiently large, then $\delta_{[\mathcal{W}]}^{(k)}$ does not vanish, where $k$
is taken to be $0$ or $1$ according as to whether the pair $(f, \mathcal{W})$ is generic
or exceptional respectively. This expectation in the exceptional setting with
$k=1$ can be deduced from the theorem of Cornut \cite{Cor}, as we shall explain in
some more detail below. Our aim here is to establish this expectation asymptotically
in the generic setting with $k=0$, building on the analytic estimates
of the prequel work \cite{VO5}, as well as giving asymptotic generalizations of the
earlier works of Rohrlich \cite{Ro}, \cite{Ro2}, and Vatsal \cite{Va} for the associated
one-variable settings (i.e. with either $\mathcal{W} = \rho$ ring class or $\mathcal{W} = \psi = \chi \circ {\bf{N}}$
cyclotomic in the setup described above). The novelty of this work is that we shall use
essentially no analysis in the proofs, but rather the existence of some associated
$p$-adic $L$-functions to reduce the problem to previously established results in the
self-dual setting via suitable (systematic) applications of the Weierstrass preparation theorem.
Moreover, the method presented here allows us to give streamlined proofs of some of the works
mentioned above (namely those of \cite{Ro} and \cite{Va}), and appears to work in
much greater generality. Here, we obtain the following main results, using
the work of Cornut \cite{Cor} with suitable algebraicity theorems to deduce the stated results
in the exceptional setting. Using the existence of an associated two-variable $p$-adic
$L$-function due to Hida \cite{Hi} and Perrin-Riou \cite{PR88}, along with the more
general nonvanishing results of Cornut-Vatsal \cite{CV} for self dual Rankin-Selberg
$L$-functions over totally real fields, we obtain the following result.
\begin{theorem}[Theorem \ref{main}] Fix an embedding $\overline{\bf{Q}} \rightarrow
\overline{\bf{Q}}_p$. Assume that the eigenform $f$ is $p$-ordinary in the sense that
the image of its $T_p$-eigenvalue under this embedding is a $p$-adic unit.
Assume additionally that $p \geq 5$, and that the prime-to-$p$ part of the level
$N$ of $f$ is prime to the discriminant $D$ of $K$. Fix a tamely ramified character
$\mathcal{W}_0$ of $p$-power conductor of $G_0 \approx {\operatorname{Gal}}(L/K)$. Let $n \geq 0$ be any integer.
There exists an integer $c(0) \geq 0$, independent of choice of $n$, such that for each
possible ($p$-power) dihedral or ring class conductor $c \geq c(0)$, the associated Galois
average $\delta_{c, p^n; \mathcal{W}_0}^{(k)}$ does not vanish. Here, $k =0$ or $1$ according as
to whether the pair $(f, \mathcal{W})$ is generic or exceptional respectively. \end{theorem}
We conjecture that this result extends to all possible ($p$-power) ring class conductors $c \geq 0$
in the generic setting with $k=0$ and $n$ sufficiently large. More precisely, as we shall see in the
discussion below using the Weierstrass preparation theorem, these remaining cases can be
established via the following criterion.
\begin{conjecture}\label{conjecture} Fix a ring class character $\rho = \rho_0 \rho_w$ in the set
$\mathfrak{X}$ of any given conductor $c$. Then, there exists a cyclotomic character $\psi = \chi \circ {\bf{N}}$
in $\mathfrak{X}$ such that the central value $L(1/2, f \times \rho\psi)$ does not vanish. \end{conjecture}
Observe that we have of course established this criterion in Theorem \ref{main} above for $c$ sufficiently
large, i.e. for $c \geq c(0)$. It seems very likely that this criterion in the remaining cases of small ring
class conductor $c$ can be treated by certain averaging techniques that are beyond the scope
of the present paper. We hope to take this up in a subsequent work. Anyhow, if we can establish
the nonvanishing criterion of Conjecture \ref{conjecture} for a ring class character
$\rho = \rho_0\rho_w$ of each possible conductor $c < c(0)$ and given tamely ramified part
$\rho_0$, then we can deduce from the discussion below the following
full analogue of Mazur's conjecture in the non-self dual setting. Recall that we write $X$ to denote the
set of finite order characters of the Galois group $\mathcal{G}$ having $p$-power conductor.
Let $X^{(0)}$ denote the subset of characters $\mathcal{W}$ in $X$ for which the pair $(f, \mathcal{W})$ is generic,
and $X^{(1)}$ the subset of characters $\mathcal{W}$ in $X$ for which the pair $(f, \mathcal{W})$ is exceptional.
\begin{corollary} For each choice of tamely ramified ring class character $\mathcal{W}_0 = \rho_0$ of $G_0$,
assume Conjecture \ref{conjecture} for one primitive ring class character $\rho = \rho_0\rho_w$ of
each possible conductor $c < c(0)$ in $X^{(0)}$. Then, for all but finitely
many characters $\mathcal{W}$ in $X^{(k)}$, the associated $k$-th Galois average $\delta_{[\mathcal{W}]}^{(k)}$
does not vanish. Here, $k =0$ or $1$ according as to whether the pair $(f, \mathcal{W})$ is generic
or exceptional respectively. \end{corollary} An unconditional
partial analogue of this result can also be established via the main estimate of the prequel work
\cite{VO5}, i.e. for ring class characters $\rho=\rho_0\rho_w$ of conductors smaller than $c(0)$,
where we do not specify the tamely ramified part $\rho_0$. The difficulty in extending this unconditional
result to each individual $\rho_0$ seems to lie in the apparent algebraic independence of the associated twisted
values $L(1/2, f \times \rho_0\rho_w\psi)/8 \pi^2\langle f, f \rangle \in \overline{\bf{Q}}$.
\begin{remark}[Bounds for ranks of elliptic curves.]
Let $K_{\infty}$ denote the ${\bf{Z}}_p^2$-extension of $K$, so that
$G \approx \mathcal{G} / G_0$ above is identified with the Galois group
${\operatorname{Gal}}(K_{\infty}/K)$. We obtain the following
unconditional bounds for ranks of elliptic curves in the tower
$K_{\infty}/K$, thanks to the two-variable main conjecture divisibility shown
by Skinner-Urban \cite{SU} (cf. \cite{MR} or \cite{VO3}). Let $E$ be
an elliptic curve of conductor $N$ defined over ${\bf{Q}}$, without complex multiplication.
We know by fundamental work of Wiles \cite{Wi}, Taylor-Wiles \cite{TW}
and Breuil-Conrad-Diamond-Taylor \cite{BCDT} that $E$ is modular, hence
parametrized by a cuspidal newform $f$ of weight $2$, level $N$ and trivial Nebentypus.
This for instance allows us to identify the Hasse-Weil $L$-function $L(E/K, \mathcal{W}, s)$
of $E$ over $K$ twisted by a finite order character $\mathcal{W}$ of $\mathcal{G}$ with the
Rankin-Selberg $L$-function $L(s, f \times \mathcal{W})$ described above, at least up to a
normalization factor. Let us now take $\mathcal{O}$ to be the ring of integers of
some finite extension of ${\bf{Q}}_p$ containing the values of $\mathcal{W}$.
Let $\mathcal{O}[[G]]$ denote the $\mathcal{O}$-Iwasawa algebra of $G$,
i.e. the completed group ring of $G$ with coefficients in $\mathcal{O}$, or
equivalently the ring of $\mathcal{O}$-valued measures
on $G$. As explained below (Theorem \ref{hpr}), there exists an element
$\mathcal{L}_p(f/K_{\infty})$ in $\mathcal{O}[[G]]$ that interpolates $p$-adically
the algebraic values $(\ref{value})$, or rather the images of these values under
any fixed embedding $\overline{\bf{Q}} \rightarrow \overline{\bf{Q}}_p$. In particular,
the specialization of this element to any finite order character $\mathcal{W}$ of $X$ vanishes
if and only if the associated central value $L(1/2, f \times \overline{\mathcal{W}})$ vanishes.
Let us now write ${\operatorname{Sel}}(E/K_{\infty})$ to denote the $p$-primary Selmer group
of $E$ in the tower $K_{\infty}/K$, which fits into the exact descent
sequence of discrete $\mathcal{O}[[G]]$-modules
\begin{align}\label{SES} 0 \longrightarrow E(K_{\infty}) \otimes {\bf{Q}}_p/{\bf{Z}}_p
\longrightarrow {\operatorname{Sel}}(E/K_{\infty}) \longrightarrow {\mbox{\textcyr{Sh}}}(E/K_{\infty})(p) \longrightarrow 0.
\end{align} Here, $E(K_{\infty})$ denotes the Mordell-Weil group of $E$
over $K_{\infty}$, and ${\mbox{\textcyr{Sh}}}(E/K_{\infty})(p)$ the $p$-primary
subgroup of the Tate-Shafarevich group of $E$ over $K_{\infty}$. Let us
then write $X(E/K_{\infty}) = {\operatorname{Hom}} ({\operatorname{Sel}}(E/K_{\infty}), {\bf{Q}}_p/{\bf{Z}}_p)$ to denote
the Pontryagin dual of ${\operatorname{Sel}}(E/K_{\infty})$, which has the structure
of a compact $\mathcal{O}[[G]]$-module. Assume that $E$ has good
ordinary reduction at $p$. It is then well known that $X(E/K_{\infty})$ has
the structure of a torsion $\mathcal{O}[[G]]$-module, as shown for instance
in \cite[Theorem 3.8]{VO3}. Hence, by the well known structure theory of
finitely generated torsion $\mathcal{O}[[G]]$-modules, the dual Selmer group
$X(E/K_{\infty})$ has an associated $\mathcal{O}[[G]]$-characteristic power
series $\operatorname{char}X(E/K_{\infty})$. The two-variable main conjecture
of Iwasawa theory then asserts that we have an equality of principal ideals
\begin{align*} \left( \mathcal{L}_p(f/K_{\infty}) \right) &=
\left( \operatorname{char}X(E/K_{\infty}) \right) \end{align*} in $\mathcal{O}[[G]]$.
Now, the main result of Skinner-Urban \cite{SU} establishes in many cases the divisibility
$\left( \mathcal{L}_p(f/K_{\infty}) \right) \subseteq \left( \operatorname{char}X(E/K_{\infty}) \right)$
in $\mathcal{O}[[G]]$. The result of Theorem \ref{main} in the generic setting with $k=0$ can
then be viewed as a nontriviality condition for specializations to finite order characters of $G$
of the $p$-adic $L$-function $\mathcal{L}_p(f/K_{\infty})$. Combining these two results allows
us to bound the $\mathcal{O}[[G]]$-corank of ${\operatorname{Sel}}(E/K_{\infty})$, from which we can then obtain
bounds for the Mordell-Weil rank of $E(K_{\infty})$ via the exactness of $(\ref{SES})$. More
precisely, we can establish the following types of bounds by this deduction. Recall that we
write $K[p^{\infty}] = \bigcup_{n \geq 0}K[p^n]$ to denote the union of all ring class fields
of $p$-power conductor over $K$. Recall as well that we write $\omega$ to denote the
quadratic character associated to $K$, and moreover that the root number
$\epsilon(1/2, f \times \rho)$ for $\rho$ any dihedral or ring class character in the setup
described above is given by the value $-\omega(N)$, at least when $N$, $D$ and $p$
are mutually coprime.
\begin{theorem}\label{main3} Let $E/{\bf{Q}}$ be an elliptic curve of conductor $N$ without
complex multiplication, and having good ordinary reduction at $p$. Assume for simplicity that
$p \geq 11$, and that $(Np, D)=1$. Given $M$ any finite extension
of $K$ contained in $R_{\infty}$, let us write $r_E(M) = \operatorname{rk}_{\bf{Z}}E(M)$ to denote
the rank of the Mordell-Weil group $E(M)$, i.e. so that $E(M) \approx {\bf{Z}}^{r_E(M)} \oplus E(M)_{{\operatorname{tors}}}$.
Then, for any integer $n \geq 0$, we have in the ring class tower $K[p^{\infty}]= \bigcup_{n \geq 0}K[p^n]$
that \begin{align}\label{rcrank} r_E(K[p^n]) &= \begin{cases} O_{f, D, p}(1)
&\text{if $-\omega(N) = +1$} \\ [K[p^n]:K] + O_{f, D, p}(1)&\text{if $-\omega(N) =-1.$} \end{cases} \end{align}
Moreover, let $K^{{\operatorname{cyc}}}$ denote the cyclotomic ${\bf{Z}}_p$-extension of $K$, and let $M$ be any
finite extension of $K$ in the compositum $K[p^{\infty}]K^{{\operatorname{cyc}}}$. Assume that we are not
in the exceptional setting, i.e. that $M$ is not contained in $K[p^{\infty}]$ when
$-\omega(N)=-1$. Assume additionally that $M$ has either (i) sufficiently large dihedral degree, i.e.
the dihedral intersection $\varOmega \cap {\operatorname{Gal}}(M/K)$ has sufficiently large order,
or (ii) trivial dihedral degree, i.e. $M$ is contained in the cyclotomic ${\bf{Z}}_p$-extension
$K^{{\operatorname{cyc}}}$. Then, \begin{align}\label{grank} r_E(M) &= O_{f, D, p}(1). \end{align}
Moreover, if the criterion of Conjecture \ref{conjecture} is established, then $(\ref{grank})$
holds for all generic extensions $M$ of $K$ in the compositum $K[p^{\infty}]K^{{\operatorname{cyc}}}$.
\end{theorem} Here as throughout, we have used the result of Cornut \cite{Cor} to address
the exceptional setting where the root number $-\omega(N)$ is $-1$. The reader should also
note that the estimate $(\ref{rcrank})$ has already been established via the nonvanishing
theorems of Vatsal \cite{Va} and Cornut-Vatsal \cite{CV}, after suitable known Euler system
arguments. The new result obtained here is therefore the generic bound $(\ref{grank}).$
\end{remark}
\begin{remark}[Some remarks on the general setting.]
Though we have not attempted to make the results stated above effective,
it is interesting to note that the number of vanishing twists can be bounded
above in terms of the Weierstrass degree(s) of the associated two-variable
$p$-adic $L$-function $\mathcal{L}_p(f/K_{\infty})$ introduced in Theorem \ref{hpr}
below. In fact, after some suitable review of the $\mathcal{O}[[G]]$-module
structure theory of the associated dual Selmer groups (cf. \cite[$\S 3$]{VO3}),
it should be possible to establish at least a partial analogue of the conjecture(s)
proposed by Coates-Fukaya-Kato-Sujatha \cite[$\S 4$]{CFKS} in this setting. We
have not pursued the matter here. It is also apparent, as in the prequel work \cite{VO5},
that many of these results carry over to higher weight forms, though the nonvanishing
theorems of Cornut-Vatsal \cite{CV} for totally real fields do not seem to apply
directly. Finally, the results described below can be extended to other
more general settings, for instance that of Rankin-Selberg $L$-functions associated to
Hilbert modular forms in abelian extensions of CM fields, as explained in the sequel
work \cite{VO7}. \end{remark}
\begin{remark}[Acknowledgements.] It is a pleasure to thank
John Coates, Christophe Cornut, Philippe Michel, Paul Nelson
and Rob Pollack for helpful discussions.
\end{remark}
\section{Nonvanishing via $p$-adic $L$-functions}
We now give the proof of Theorem \ref{main}, using only the existence of an
associated $p$-adic $L$-function to reduce the problem to previously
established results in the self-dual setting. We keep all of the notations and setup above.
\begin{remark}[Iwasawa algebras.]
Let $\mathcal{O}$ be a finite extension of the $p$-adic integers ${\bf{Z}}_p$, sufficiently large to contain the rings of integers of the number fields
$F$ and $F(\mathcal{W})$. Recall that we write $R_{\infty}$ to denote the maximal, abelian unramified outside of $p$ extension of $K$. Recall as well
that we write $\mathcal{G}$ to denote the Galois group ${\operatorname{Gal}}(R_{\infty}/K)$, so that $\mathcal{G} \approx G_0 \times G$, with
$G_0 \approx {\operatorname{Gal}}(L/K)$ the finite torsion subgroup of $\mathcal{G}$, and $G$ being isomorphic as a topological group to ${\bf{Z}}_p^2$.
We consider the $\mathcal{O}$-Iwasawa algebra $\mathcal{O}[[\mathcal{G}]]$ of $\mathcal{G}$,
which is the completed group ring \begin{align}\label{cgr}\mathcal{O}[[\mathcal{G}]] &= \varprojlim_{\mathcal{U}}
\mathcal{O}[\mathcal{G}/ \mathcal{U}]. \end{align} Here, the projective limit $(\ref{cgr})$ runs over all open normal subgroups
$\mathcal{U}$ of $\mathcal{G}$. More generally, such a definition of $\mathcal{O}[[\mathcal{G}]]$ can be made for $\mathcal{O}$ any discrete
valuation ring and $\mathcal{G}$ any profinite group. The reader should note that the elements of any such completed group ring
$\mathcal{O}[[\mathcal{G}]]$ can be viewed as $\mathcal{O}$-valued measures on $\mathcal{G}$ in a natural way. More precisely, given $\mathcal{W}$
any finite order character of $\mathcal{G}$, and $\mathcal{L}$ any element of the completed group ring $\mathcal{O}[[\mathcal{G}]]$, we can integrate
$\mathcal{W}$ against $\mathcal{L}$ in the following way. Since $\mathcal{W}$ is of finite order, it defines a locally constant function on $\mathcal{G}$, and hence
there exists an open normal subgroup $\mathcal{U} \subset \mathcal{G}$ such that $\mathcal{W}$ factors through $\mathcal{G}/\mathcal{U}$. Writing \begin{align*}
\mathcal{L}_{\mathcal{U}} &= \sum_{\sigma \in \mathcal{G}/\mathcal{U}} c_{\mathcal{U}}(\sigma) \sigma \end{align*} to denote the image
of $\mathcal{L}$ in the group ring $\mathcal{O}[\mathcal{G}/\mathcal{U}]$, with coefficients $c_{\mathcal{U}}(\sigma) \in \mathcal{O}$,
we can then define the integral of $\mathcal{W}$ against $\mathcal{L}$ to be the finite sum \begin{align}\label{specialization} \int_{\mathcal{G}}
\mathcal{W}(\sigma) d\mathcal{L}(\sigma) &= \sum_{\sigma \in \mathcal{G}/\mathcal{U}} c_{\mathcal{U}}(\sigma) \mathcal{W}(\sigma). \end{align} It is easy
to see that this definition does not depend on the choice of open subgroup $\mathcal{U} \subset \mathcal{G}$. Thus, given an element
$\mathcal{L}$ in $\mathcal{O}[[\mathcal{G}]]$, we write $d\mathcal{L}$ to denote the associated measure, which is determined
uniquely by this construction. We shall also write $\mathcal{W}(\mathcal{L})$ to denote the functional defined in $(\ref{specialization})$ above,
and moreover refer to this value as the {\it{specialization of $\mathcal{L}$ to $\mathcal{W}$}}. It is easy to see from this description that any
group-like element $g \in \mathcal{G}$ corresponds to the Dirac measure $dg$, and that the product $\mathcal{L}_1\mathcal{L}_2$ of two elements
$\mathcal{L}_1, \mathcal{L}_2 \in \mathcal{O}[[\mathcal{G}]]$ corresponds to the convolution product $d\mathcal{L}_1 \star d \mathcal{L}_2$.
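To illustrate the specialization functional in a simple (purely illustrative) example, take $\mathcal{G} = {\bf{Z}}_p$ with topological generator $\gamma$, and consider the element $\mathcal{L} = \gamma - \mathcal{I}$ of ${\bf{Z}}_p[[\mathcal{G}]]$. For $\mathcal{W}$ any finite order character of $\mathcal{G}$, choosing any open subgroup $\mathcal{U}$ through which $\mathcal{W}$ factors, the definition $(\ref{specialization})$ gives \begin{align*} \mathcal{W}(\mathcal{L}) &= \mathcal{W}(\gamma) - 1, \end{align*} which vanishes if and only if $\mathcal{W}(\gamma) = 1$, i.e. if and only if $\mathcal{W}$ is the trivial character (as $\gamma$ topologically generates $\mathcal{G}$).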
Moreover, the identity element $\mathcal{I}$ in $\mathcal{O}[[\mathcal{G}]]$ corresponds to the Dirac measure at the identity of $\mathcal{G}$. Now, returning to the specific
setting of $\mathcal{G} = {\operatorname{Gal}}(R_{\infty}/K)$, we have a natural identification of completed group rings
$\mathcal{O}[[\mathcal{G}]] \approx \mathcal{O}[G_0][[G]]$. We also have an
isomorphism of completed group rings \begin{align}\label{isomorphism} \mathcal{O}[[\mathcal{G}]] \approx \mathcal{O}[G_0][[G]]
&\longrightarrow \bigoplus_{\mathcal{W}_0} \mathcal{O}[[G]], ~~~ \mathcal{L} \longmapsto (\mathcal{W}_0(\mathcal{L}))_{\mathcal{W}_0}, \end{align} where the direct
sum runs over all characters $\mathcal{W}_0$ of the finite group $G_0$, and $\left( \mathcal{W}_0(\mathcal{L})\right)_{\mathcal{W}_0}$ is the vector of specializations
$\mathcal{W}_0(\mathcal{L})$ of $\mathcal{L}$ to each character $\mathcal{W}_0$. The reader should note that here, we only specialize to the tamely ramified
part $\mathcal{W}_0$ (and not to any wildly ramified character of $\mathcal{G}$), so that the $\mathcal{W}_0(\mathcal{L})$ are elements of the completed group ring
$\mathcal{O}[[G]]$ rather than just values in the ring of integers $\mathcal{O}$. To denote this distinction more clearly, we shall write
$\mathcal{L}(\mathcal{W}_0)$ rather than $\mathcal{W}_0(\mathcal{L})$ in what follows to emphasize that each $\mathcal{L}(\mathcal{W}_0)$ is a genuine element of
$\mathcal{O}[[G]]$. \end{remark}
\begin{remark}[Two-variable $p$-adic $L$-functions.]
The constructions of Hida \cite{Hi} and Perrin-Riou \cite{PR88} give us the following result. Recall that by the theorem of Shimura \cite{Sh},
the values \begin{align}\label{algval} \frac{L(1/2, f \times \mathcal{W})}{8 \pi^2 \langle f, f \rangle} \end{align} are algebraic, and in fact contained in $F(\mathcal{W})$.
Let us now fix an embedding $\overline{{\bf{Q}}} \rightarrow \overline{\bf{Q}}_p$, where $\overline{\bf{Q}}_p$ is a fixed algebraic closure of ${\bf{Q}}_p$. We can then view
any element of $\overline{\bf{Q}}$ as an element of $\overline{\bf{Q}}_p$. In particular, we shall view the values $(\ref{algval})$ as elements of $\overline{{\bf{Q}}}_p$.
\begin{theorem}[Hida, Perrin-Riou]\label{hpr} Assume that $f$ is $p$-ordinary, and that $p \geq 5$. There exists an element
$\mathcal{L}_p = \mathcal{L}_p(f/R_{\infty})$ in the $\mathcal{O}$-Iwasawa algebra $\mathcal{O}[[\mathcal{G}]]$ such that for
any finite order character $\mathcal{W}$ of $\mathcal{G}$, we have the equality \begin{align}\label{interpolation} \mathcal{W} \left(\mathcal{L}_p \right)
& = \eta(f, \mathcal{W}) \cdot \frac{L(1/2, f \times \overline{\mathcal{W}})}{8 \pi^2 \langle f, f \rangle} \in \overline{\bf{Q}}_p.\end{align} Here, $\eta(f, \mathcal{W})$ denotes
some precise, nonvanishing algebraic number, viewed as an element of $\overline{\bf{Q}}_p$. In particular, the central value
$L(1/2, f \times \overline{\mathcal{W}})$ vanishes if and only if the specialization $\mathcal{W}(\mathcal{L}_p)$ vanishes.\end{theorem}
\begin{proof} The result follows from Perrin-Riou \cite[Th\'eor\`{e}me B]{PR88}, using the bounded linear form construction of Hida \cite{Hi},
as explained in \cite[Theorem 2.9]{VO3}. That is, the integrality of this construction is explained in \cite[Theorem 2.9]{VO3}, assuming for
simplicity that $p \geq 5$. Note that this construction, which also works for higher weight forms, requires that the
eigenform $f$ be $p$-ordinary. \end{proof} We shall say that the $p$-adic $L$-function $\mathcal{L}_p = \mathcal{L}_p(f/R_{\infty})$
in $\mathcal{O}[[\mathcal{G}]]$ {\it{interpolates}} the central values $L(1/2, f \times \overline{\mathcal{W}})$ once such a formula $(\ref{interpolation})$
is known. More generally, given $\mathcal{O}$ any discrete valuation ring and $\mathcal{G}$ any profinite group, we shall say that an
element $\mathcal{L}$ in $\mathcal{O}[[\mathcal{G}]]$ {\it{interpolates}} some complex value $\eta$ if the
specialization $\mathcal{W}(\mathcal{L})$ equals $\eta /\vartheta$ in $\overline{\bf{Q}}_p$ for $\mathcal{W}$ some finite order character of
$\mathcal{G}$ and $\vartheta = \vartheta_{\eta}$ some suitable period (i.e. for which the quotient $\eta/\vartheta$ lies in $\overline{\bf{Q}}$).
\end{remark}
\begin{remark}[Weierstrass preparation.]
We now review the Weierstrass preparation theorem, and in particular how it applies
to elements of the completed group ring $\mathcal{O}[[G]]$. Let us fix
topological generators $\gamma_1$ and $\gamma_2$ of the Galois group $G= \varOmega \times \varGamma$, where $\gamma_1$ is a
topological generator of the anticyclotomic factor $\varOmega \cong {\bf{Z}}_p$, and $\gamma_2$ is a topological generator of the
cyclotomic factor $\varGamma \cong {\bf{Z}}_p$. We may then invoke the well known isomorphism of completed group rings
\begin{align}\label{nc} \mathcal{O}[[G]] &\longrightarrow \mathcal{O}[[T_1, T_2]],
~~~ (\gamma_1, \gamma_2) \longmapsto (T_1 +1, T_2 +1) \end{align} to view each of the specialized $p$-adic $L$-functions
$\mathcal{L}_p(\mathcal{W}_0)$ in $(\ref{isomorphism})$ above as an element $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ of the two-variable power
series ring $\mathcal{O}[[T_1, T_2]]$. The reader should note that, under this non-canonical isomorphism $(\ref{nc})$, the specialization
$\mathcal{W}\left(\mathcal{L}_p\right) = \mathcal{W}_w(\mathcal{L}_p(\mathcal{W}_0)) = \rho_w \chi \circ {\bf{N}}\left( \mathcal{L}_p (\mathcal{W}_0) \right)$
corresponds to evaluating the power series $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2) = \mathcal{L}_p(\mathcal{W}_0; \gamma_1-1, \gamma_2-1)$ at a certain
pair of primitive roots of unity $\zeta_1$ and $\zeta_2$, so that \begin{align*}\mathcal{W}(\mathcal{L}_p(T_1, T_2))
&= \mathcal{L}_p(\mathcal{W}_0; \rho_w(\gamma_1)-1, \psi_w (\gamma_2) -1) = \mathcal{L}_p(\mathcal{W}_0; \zeta_1 -1, \zeta_2 -1). \end{align*}
That is, these roots of unity $\zeta_1$ and $\zeta_2$ are determined uniquely by the values $\rho_w(\gamma_1) = \zeta_1$ and
$\psi_w (\gamma_2) = \zeta_2$, where $\rho_w$ denotes the wildly ramified part of the ring class character $\rho$, and
$\psi_w = \chi \circ {\bf{N}}$ denotes the (wildly ramified) cyclotomic part of $\mathcal{W} = \rho \chi \circ {\bf{N}}$.
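As a concrete instance of this evaluation (included only for illustration): if the wildly ramified cyclotomic character $\psi_w$ has order $p^n$ for some integer $n \geq 1$, then $\zeta_2 = \psi_w(\gamma_2)$ is a primitive $p^n$-th root of unity, so that \begin{align*} \vert \zeta_2 - 1 \vert_p &= p^{-1/\varphi(p^n)} < 1, \end{align*} where $\varphi$ denotes the Euler totient function. In particular, each evaluation point $(\zeta_1 - 1, \zeta_2 - 1)$ lies in the open unit polydisc, where any power series in $\mathcal{O}[[T_1, T_2]]$ converges.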
Let us now write $R_i[[T_j]]$ to denote any of the one-variable power series rings $\mathcal{O}[[T_j]]$, $\mathcal{O}[[T_1]][[T_2]]$,
or $\mathcal{O}[[T_2]][[T_1]]$, i.e. so that $R_i = \mathcal{O}$ or else $R_i = \mathcal{O}[[T_i]]$ with $i, j \in \lbrace 1, 2 \rbrace$
and $i \neq j$. Since $\mathcal{O}$ is a local ring, it has a unique maximal ideal $\mathfrak{P}$ say. Now, it is well
known and easy to show that each $\mathcal{O}[[T_i]]$ is a local ring, with unique maximal ideal $(\mathfrak{P}, T_i)$.
Thus, each choice of $R_i$ is a local ring with maximal ideal $\mathfrak{m}_i$ say. An element $f(T_j)$ of the
polynomial ring $R_i[T_j]$ is said to be a {\it{distinguished (or Weierstrass) polynomial}} if it takes the form
\begin{align*} f(T_j) &= T_j^r + b_{r-1}(T_i)T_j^{r-1} + \ldots + b_{0}(T_i), \end{align*} where each of the coefficients
$b_{r-1}(T_i), \ldots, b_0(T_i)$ lies in the maximal ideal $\mathfrak{m}_i$. Suppose now that we have an element $g(T_j)$
of the formal power series ring $R_i[[T_j]]$,
\begin{align*} g(T_j) &= \sum_{k \geq 0} a_k(T_i)T_j^k, \end{align*} such that not all of the coefficients $a_k(T_i)
\in R_i$ lie in the maximal ideal $\mathfrak{m}_i$. Say $a_0(T_i), \ldots, a_{r-1}(T_i) \in \mathfrak{m}_i$ for some integer
$r \geq 1$, with $a_r(T_i)$ a unit in $R_i$. The Weierstrass preparation theorem asserts that this $g(T_j)$ can be
written uniquely as \begin{align*} g(T_j) &= f_i(T_j) u_i(T_j),\end{align*} where $f_i(T_j)$ is a distinguished polynomial
in $R_i[T_j]$ of degree $r$, and $u_i(T_j)$ is a unit in $R_i[[T_j]]$. The integer $r \geq 1$ is then known as the
{\it{Weierstrass degree $\deg_W(g(T_j))$ of $g(T_j)$}}. More generally, if \begin{align*} g(T_j) &= \sum_{k \geq 0}
a_k(T_i)T_j^k, \end{align*} is a nonzero power series in $R_i[[T_j]]$, then the Weierstrass preparation theorem asserts
that $g(T_j)$ can be expressed uniquely as a product \begin{align}\label{wp} g(T_j) &= f_i(T_j)u_i(T_j)\varpi_i(T_i), \end{align}
where $f_i(T_j)$ and $u_i(T_j)$ are as above, and $\varpi_i(T_i)$ is the element of $R_i = \mathcal{O}[[T_i]]$ determined
by taking the greatest common divisor of all of the coefficients $a_k(T_i)$ contained in the maximal ideal
$\mathfrak{m}_i=(\mathfrak{P}, T_i)$. Now, as the invertible power series $u_i(T_j)$ cannot have any zeros, it follows that any
nonzero element $g(T_j)$ of $R_i[[T_j]]$ has at most $\deg_W(g(T_j))$ zeros in the indeterminate $T_j$. This result can be
interpreted for the $p$-adic $L$-functions $\mathcal{L}_p(\mathcal{W}_0)$ on the level of specializations to wildly ramified characters
in either the anticyclotomic variable $T_1 = \gamma_1 -1$ or the cyclotomic variable $T_2 = \gamma_2 -1$, as we shall
see more precisely below. Let us for now just record the following observation about each of the two-variable
$p$-adic $L$-functions $\mathcal{L}_p(\mathcal{W}_0)$ in $\mathcal{O}[[G]]$, viewed as power series $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$
in $\mathcal{O}[[T_1, T_2]]$ under the fixed isomorphism $(\ref{nc})$. As explained below, we may assume that each such
$\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ is not identically zero. Viewing $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ as an element of the
one-variable power series ring $R_i[[T_j]]= \mathcal{O}[[T_i]][[T_j]]$, the Weierstrass preparation theorem then implies that
$\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ can be expressed uniquely as a product of the form $(\ref{wp})$ above. Let us write $r(j)$
to denote the Weierstrass degree of $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ in $R_i[[T_j]]$. Observe that we
may also apply the Weierstrass preparation theorem to the element $\varpi_i(T_i)$ of $R_i$. Let us then write
$\deg_W(\varpi_i)$ to denote its Weierstrass degree. Putting together the two unique expressions $(\ref{wp})$ for
$\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$, it is then easy to see from the induced relations that
\begin{align}\label{wd2} r(1)\deg_W(\varpi_2) &= r(2)\deg_W(\varpi_1).\end{align} It is also easy
to see that $\deg_W(\varpi_2)$ is bounded above by the Weierstrass degree of the cyclotomic
$p$-adic $L$-function $\mathcal{L}_p(\mathcal{W}_0; 0, T_2)$ in $R_2$, and that $\deg_W(\varpi_1)$
is bounded above by the Weierstrass degree of the anticyclotomic $p$-adic $L$-function
$\mathcal{L}_p(\mathcal{W}_0; T_1, 0)$ in $R_1$. The arguments below will show that both sides of $(\ref{wd2})$
are finite, i.e. defined. \end{remark}
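\begin{remark}[A toy example of Weierstrass preparation.]
To illustrate the discussion above with a simple example (which is of course not one of the $p$-adic $L$-functions under consideration), take $\mathcal{O} = {\bf{Z}}_p$, and consider first the one-variable power series $g(T) = p - T$ in $\mathcal{O}[[T]]$. Here, $a_0 = p$ lies in the maximal ideal $(p)$, while $a_1 = -1$ is a unit, so that
\begin{align*} g(T) &= (T - p)\cdot(-1), \end{align*}
with distinguished polynomial $f(T) = T - p$ and unit $u(T) = -1$. Hence $\deg_W(g) = 1$, and $g$ has at most one zero, namely $T = p$. Consider next the two-variable monomial $g(T_1, T_2) = T_1^2 T_2$. Viewed in $R_2[[T_1]] = \mathcal{O}[[T_2]][[T_1]]$, its unique decomposition $(\ref{wp})$ is $g = T_1^2 \cdot 1 \cdot T_2$, so that $r(1) = 2$ and $\deg_W(\varpi_2) = 1$; viewed in $R_1[[T_2]]$, it is $g = T_2 \cdot 1 \cdot T_1^2$, so that $r(2) = 1$ and $\deg_W(\varpi_1) = 2$. Both sides of the relation $(\ref{wd2})$ are then equal to $2$.
\end{remark}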
\begin{remark}[Nontriviality of tamely ramified specializations (via basechange).]
We begin with the following result, deduced via basechange from the nonvanishing theorems of Cornut-Vatsal \cite{CV}
for totally real fields. This result applies to the vector of $p$-adic $L$-functions $\left(\mathcal{L}_p(\mathcal{W}_0) \right)_{\mathcal{W}_0}
= \left( \mathcal{L}_p(\mathcal{W}_0; T_1, T_2) \right)_{\mathcal{W}_0}$ appearing in $(\ref{isomorphism})$ above, in particular to
show that each $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ is nontrivial in the sense that its specialization to infinitely many
wildly ramified characters $\mathcal{W}_w = \rho_w \psi_w$ (with both $\rho_w$ and $\psi_w$ nontrivial) does not vanish.
Thus, by the Weierstrass preparation theorem, this result will imply that each $\mathcal{L}_p(\mathcal{W}_0; T_1, T_2)$ has finite
Weierstrass degree as an element in either of the power series rings $\mathcal{O}[[T_1]][[T_2]]$ or $\mathcal{O}[[T_2]][[T_1]]$.
Let us first fix a character $\mathcal{W}_0$ of $G_0$, which recall is a tamely ramified character of some $p$-power conductor. Observe
that we can always take such a $\mathcal{W}_0$ to be a ring class character $\rho_0$, as there are no tamely ramified cyclotomic
characters of $p$-power conductor apart from the trivial one. Let us now consider the following basechange setup.
Fix $n \geq 0$ an integer. Let $\zeta_{p^n}$ be a primitive $p^n$-th root of unity,
with ${\bf{Q}}(\zeta_{p^n})$ the field obtained by adjoining $\zeta_{p^n}$ to ${\bf{Q}}$.
We then write ${\bf{Q}}_n = {\bf{Q}}(\zeta_{p^{n+1}})^+$ to denote the maximal totally real subfield of
${\bf{Q}}(\zeta_{p^{n+1}})$. Hence, ${\bf{Q}}_n$ is the unique extension of degree $p^n$ over ${\bf{Q}}$ in the cyclotomic
${\bf{Z}}_p$-extension of ${\bf{Q}}$. Let us also write $K_n$ to denote the compositum $K {\bf{Q}}_n$. Hence, $K_n$ is the
unique extension of degree $p^n$ over $K$ in the cyclotomic ${\bf{Z}}_p$-extension of $K$. Clearly, $K_n$ is a totally
imaginary quadratic extension of the totally real field ${\bf{Q}}_n$. Moreover, the cyclotomic field ${\bf{Q}}_n$ is abelian,
and of odd degree. Thus, writing $\pi_f$ to denote the cuspidal automorphic representation of ${\operatorname{GL_2}}({\bf{Q}})$ associated to $f$,
there exists by the theory of cyclic basechange a cuspidal automorphic representation $\Pi_{f, n}$ of ${\operatorname{GL_2}}({\bf{Q}}_n)$ such that there
is an equality of $L$-functions \begin{align*} L(s, \Pi_{f,n}) &= \prod_{\chi} L(s, \pi_f, \chi). \end{align*} Here, $L(s, \Pi_{f,n})$ denotes
the $L$-function of $\Pi_{f, n}$, the product runs over all characters $\chi$ of ${\operatorname{Gal}}({\bf{Q}}_n/{\bf{Q}})$, and each $L(s, \pi_f, \chi)$ denotes
the $L$-function of $\pi_f$ twisted by $\chi$. Now, since ${\bf{Q}}_n$ is totally ramified at $p$, there exists a
unique prime $\mathfrak{p}$ above $p$ in ${\bf{Q}}_n$. Following Cornut-Vatsal \cite[$\S$ 1,2]{CV}, we then write
$K_n[\mathfrak{p}^{\infty}] = \bigcup_{m \geq 0} K_n[\mathfrak{p}^m]$ to denote the tower of all ring
class extensions of $\mathfrak{p}$-power conductor over $K_n$, with $\varOmega^{(n)}$ to denote its
Galois group ${\operatorname{Gal}}(K_n[\mathfrak{p}^{\infty}]/K_n)$. Let us now view any finite order character $\rho^{(n)}$ of
$\varOmega^{(n)}$ as an idele class character of $K_n$ via the reciprocity map of class field theory.
We can then consider for any such character the Rankin-Selberg $L$-function $L(s, \Pi_{f, n} \times \rho^{(n)})$
of $\Pi_{f, n}$ times the automorphic representation of ${\operatorname{GL_2}}({\bf{Q}}_n)$ associated to $\rho^{(n)}$. This $L$-function has
a well known analytic continuation, and satisfies a functional equation relating values at $s$ to $1-s$. Moreover, it
is self dual, its corresponding root number $\epsilon(1/2, \Pi_{f, n} \times \rho^{(n)})$ taking values
in $\pm 1$, and can be described by a well known formula (see for instance \cite[$\S$ 1.1]{CV}),
completely analogously to the setting with $\epsilon(1/2, f \times \rho) = - \omega(N)$ described above.
On the other hand, given $\rho$ any finite order ring class character of the imaginary quadratic field $K$
factoring through the Galois group $\mathcal{G}$, let us write $\rho'$ to denote the composition of $\rho$
with the norm homomorphism ${\bf{N}}_{K_n/K}$ from $K_n$ to $K$, so that $\rho'$ defines a finite order character of the
Galois group $\varOmega^{(n)}$. We then have for any such character $\rho'$ an equality of $L$-functions
\begin{align}\label{lfneq} L(s, \Pi_{f, n} \times \rho') &= \prod_{\chi} L (s, f \times \rho\chi \circ {\bf{N}}),\end{align}
where the product runs over all characters $\chi$ of ${\operatorname{Gal}}({\bf{Q}}_n/{\bf{Q}})$ as before. This equality of $L$-functions
induces an equality of the associated root numbers \begin{align}\label{rneq} \epsilon(1/2, \Pi_{f, n} \times \rho')
&= \prod_{\chi} \epsilon(1/2, f \times \rho\chi \circ {\bf{N}}).\end{align} In particular, using that the degree of ${\bf{Q}}_n$
is always odd by our hypothesis that $p \geq 5$, we can deduce from this relation $(\ref{rneq})$ that the condition
of having our base root number $\epsilon(1/2, f \times {\bf{1}}_K) = -\omega(N)$ equal to $\pm 1$ will imply that the
root number $\epsilon(1/2, \Pi_{f,n} \times \rho')$ equals the same value $\pm 1$ for any such character $\rho'$ of
$\varOmega^{(n)}$. This puts us in a good position to invoke the nonvanishing theorems
of Cornut-Vatsal \cite{CV} directly in either case on the root number $\epsilon(1/2, f \times {\bf{1}}_K)$.
More precisely, we obtain the following version of their result(s) in this setting after invoking the Artin
formalism of $(\ref{lfneq})$ and $(\ref{rneq})$ above.
\begin{proposition}\label{spec} Fix a tamely ramified character $\mathcal{W}_0 = \rho_0$ of $G_0$.
Let $n \geq 0$ be any integer. There exists an integer $c(n) \geq 0$ such that
for all ring class conductors $c \geq c(n)$, the associated Galois average $\delta_{c, p^n; \rho_0}^{(k)}$
does not vanish. Here, $k =0$ or $1$ according as the pair $(f, \mathcal{W})$
is generic or exceptional. \end{proposition}
\begin{proof} We can assume without loss of generality that $n \geq 1$, since the case of $n =0$
is shown already by the nonvanishing theorems of \cite{Cor}, \cite{CV} and \cite{Va}. Thus, let us
fix an integer $n \geq 1$. In particular, this means that we need only work in the generic setting with
$k=0$. We now divide into cases on the root number $\epsilon(1/2, f \times {\bf{1}}_K)$.
Let us first suppose that the root number $\epsilon(1/2, f \times {\bf{1}}_K)=-\omega(N)$ is $+1$.
We can then deduce from Cornut-Vatsal \cite[Theorem 1.4]{CV} that for all but finitely
many finite order characters $\rho^{(n)}$ of $\varOmega^{(n)}$, the value $L(1/2, \Pi_{f, n} \times \rho^{(n)})$
does not vanish. The reader should note that this is not strictly what is stated in \cite[Theorem 1.4]{CV},
but rather what can be deduced from algebraicity (using for instance the main theorem of Shimura \cite{Sh2}
for Hilbert modular forms, or the existence of an associated $p$-adic $L$-function as given in \cite{VO}).
We can then deduce that for all ring class characters $\rho = \rho_0\rho_w$ of $K$ factoring through $\mathcal{G}$
of sufficiently large conductor, the value \begin{align*} L(1/2, \Pi_{f, n} \times \rho') &= \prod_{\chi}
L(1/2, f \times \rho \chi \circ {\bf{N}}) \end{align*} does not vanish. Here, the product again runs over all characters
$\chi$ of ${\operatorname{Gal}}({\bf{Q}}_n/{\bf{Q}})$, and sufficiently large conductor means that $\rho$ has conductor $c \geq c_0(n)$,
where $c_0(n) = c_0(n, f, p, K)$ is some positive integer depending on the choice of cyclotomic field ${\bf{Q}}_n$.
Thus, for each integer $n \geq 0$, we can find some ring class character $\rho = \rho_0 \rho_w$ of conductor
$c \geq c_0(n)$ such that the central value $L(1/2, f \times \rho \chi \circ {\bf{N}})$ does not vanish for any
Dirichlet character $\chi$ of ${\operatorname{Gal}}({\bf{Q}}_n/{\bf{Q}})$. Now, fixing a character $\chi$ of ${\operatorname{Gal}}({\bf{Q}}_n/{\bf{Q}})$, we can
deduce from the algebraicity theorem of Shimura \cite{Sh2} that the value $L(1/2, f \times \rho\chi \circ {\bf{N}})$ does
not vanish for all primitive ring class characters $\rho = \rho_0 \rho_w$ of conductor $c$. Moreover, for any
fixed primitive ring class character $\rho = \rho_0 \rho_w$ of conductor $c$, we can deduce from another
application of the algebraicity theorem of Shimura \cite{Sh2} that the value $L(1/2, f \times \rho \chi \circ {\bf{N}})$ does
not vanish for all primitive Dirichlet characters $\chi$ of conductor $p^n$. Thus, we can deduce from algebraicity that
the value $L(1/2, f \times \rho \chi \circ {\bf{N}})$ does not vanish for any primitive character $\mathcal{W} = \rho\chi \circ {\bf{N}}
\in P_{c, p^n; \rho_0}$. Equivalently, the Galois average $\delta^{(0)}_{c, p^n; \rho_0}$ does not vanish for all ring
class conductors $c \geq c_0(n)$.
Let us now suppose that the root number $\epsilon(1/2, f \times {\bf{1}}_K)$ is $-1$.
We can then deduce from Cornut-Vatsal \cite[Theorem 1.5]{CV}, that for all but finitely many characters
$\rho^{(n)}$ of $\varOmega^{(n)}$, the value $L'(1/2, \Pi_{f, n} \times \rho^{(n)})$ does not vanish. The reader should note
again that this is not strictly what is stated in \cite[Theorem 1.5]{CV}, but rather what can be deduced from algebraicity
using special value formulae (in this case, the recent work of Yuan-Zhang-Zhang \cite{YZZ}). Anyhow, we then have
that for all but finitely many characters $\rho^{(n)}$ of $\varOmega^{(n)}$, ${\operatorname{ord}}_{s=1/2}L(s, \Pi_{f, n} \times \rho^{(n)}) =1.$
It follows that for any ring class character $\rho = \rho_0\rho_w$ of $K$ factoring through $\mathcal{G}$ of
sufficiently large conductor, the value $L'(1/2, \Pi_{f,n} \times \rho')$ does not
vanish, whence \begin{align}\label{ordeq2} {\operatorname{ord}}_{s=1/2} L(s, \Pi_{f, n} \times \rho')
&= \sum_{\chi} {\operatorname{ord}}_{s=1/2}L(s, f \times \rho \chi \circ {\bf{N}} ) =1\end{align} by the decomposition
$(\ref{lfneq})$. Here, sufficiently large conductor means that $\rho$ has conductor $c \geq c_1(n)$,
where $c_1(n) = c_1(n, f, p, K)$ is some positive integer depending on the choice of cyclotomic
field ${\bf{Q}}_n$. Now, since the result of \cite[Theorem 1.5]{CV} also applies in the
same way to the base field ${\bf{Q}}_0 = {\bf{Q}}$, we can assume without loss of generality that
${\operatorname{ord}}_{s=1/2}L(s, f \times \rho) =1$, whence $(\ref{ordeq2})$ implies that \begin{align*}
\sum_{\chi \neq {\bf{1}}} {\operatorname{ord}}_{s=1/2} L(s, f \times \rho \chi \circ {\bf{N}}) &= 0.\end{align*}
Thus, using the same reasoning as before, we can deduce that the Galois
average $\delta^{(0)}_{c, p^n; \rho_0}$ does not vanish for all ring class conductors $c \geq c_1(n)$. \end{proof}
\begin{corollary}\label{spec2} Let $c_0 = \min_n c(n)$. Then, for each ring class conductor
$c \geq c_0$, there exists an integer $m(c) \geq 0$ such that for all integers
$n \geq m(c)$, the associated Galois average $\delta_{c, p^n; \rho_0}^{(k)}$
does not vanish. Here, $k =0$ or $1$ according as the pair
$(f, \mathcal{W})$ is generic or exceptional. \end{corollary}
\begin{proof} Recall that we fix a tamely ramified character $\mathcal{W}_0 = \rho_0$.
Consider the associated $p$-adic $L$-function $\mathcal{L}_p(\rho_0; T_1, T_2)$
in $\mathcal{O}[[T_1, T_2]]$. Fix a ($p$-power) ring class conductor $c \geq c_0$. Let
$\rho = \rho_0 \rho_w$ be a primitive ring class character of conductor $c$. Consider the
(doubly) specialized $p$-adic $L$-function $\mathcal{L}_p(\rho_0; \rho_w(\gamma_1)-1, T_2)$,
in $\mathcal{O}[[T_2]]$. We know by the Weierstrass preparation
theorem that this element can be expressed uniquely as a product $u(T_2)f(T_2)\varpi^{\mu}$,
where $u(T_2)$ is an invertible power series in $\mathcal{O}[[T_2]]$, $f(T_2)$ is a distinguished
polynomial in $\mathcal{O}[T_2]$, $\varpi$ is a uniformizer for $\mathcal{O}$, and $\mu \geq 0$
is an integer. We also know by Proposition \ref{spec} that for some integer $h \geq 0$,
the associated Galois average $\delta_{c, p^h; \rho_0}^{(0)}$ does not vanish.
Hence, we can deduce from the interpolation property of Theorem \ref{hpr} that the specialized
$p$-adic $L$-function $\mathcal{L}_p(\rho_0; \rho_w(\gamma_1)-1, T_2)$ has finite Weierstrass degree
$\deg_W(\mathcal{L}_p(\rho_0; \rho_w(\gamma_1)-1, T_2 ))$ as an element of
$\mathcal{O}[[T_2]]$. Thus, $\mathcal{L}_p(\rho_0; \rho_w(\gamma_1)-1, T_2)$
can have at most $\deg_W(\mathcal{L}_p(\rho_0; \rho_w(\gamma_1)-1, T_2))$ many
zeros. It then follows from Shimura's algebraicity theorem \cite{Sh2} that
$\delta_{c, p^h; \rho_0}^{(0)}$ does not vanish for all but finitely many
integers $h \geq 0$. Writing $l=l(c) \geq 0$ to denote the largest integer for which
$\delta_{c, p^{l}; \rho_0}^{(0)}$ vanishes, it follows that $\delta_{c, p^n; \rho_0}^{(0)}$ does
not vanish for all integers $n \geq l(c) +1$. Thus, taking $m(c) = l(c)+1$ proves the claim
for $c$. \end{proof}
\end{remark}
\begin{remark}[Basechange $p$-adic $L$-functions.]
Let us keep all of the notations of the paragraph above. Hence, we fix an integer
$n \geq 0$, writing $\zeta_{p^n}$ to denote a primitive $p^n$-th root of unity. We then
write ${\bf{Q}}(\zeta_{p^n})$ to denote the field obtained by adjoining $\zeta_{p^n}$ to ${\bf{Q}}$, with
${\bf{Q}}_n = {\bf{Q}}(\zeta_{p^{n+1}})^+$ the maximal totally real subfield of ${\bf{Q}}(\zeta_{p^{n+1}})$.
Thus, ${\bf{Q}}_n$ is the degree-$p^n$ extension of ${\bf{Q}}$ contained in the cyclotomic
${\bf{Z}}_p$-extension of ${\bf{Q}}$. Writing $K_n$ to denote the compositum $K{\bf{Q}}_n$, we
see that $K_n$ is the degree-$p^n$ extension of $K$ contained in the cyclotomic
${\bf{Z}}_p$-extension of $K$, and moreover that $K_n$ is a totally imaginary quadratic
extension of ${\bf{Q}}_n$. Recall as well that we write $\mathfrak{p}$ to denote the unique prime
above $p$ in ${\bf{Q}}_n$, with $K_n[\mathfrak{p}^{\infty}] = \bigcup_{m \geq 0}
K_n[\mathfrak{p}^m]$ the $\mathfrak{p}^{\infty}$-ring class tower of $K_n$, and
$\varOmega^{(n)} = {\operatorname{Gal}}(K_n[\mathfrak{p}^{\infty}]/K_n)$ its Galois group. On
the other hand, recall that we write $D_{\infty} = K_{p^{\infty}}$ to denote the dihedral
or anticyclotomic ${\bf{Z}}_p$-extension of the imaginary quadratic field $K$, so that
we have an identification of $\Omega$ with the Galois group ${\operatorname{Gal}}(D_{\infty}/K)$.
Let us then write $\Omega^{(n)}$ to denote the Galois group ${\operatorname{Gal}}(K_nD_{\infty}/K_n)$,
where $K_nD_{\infty}$ denotes the compositum of the finite cyclotomic extension $K_n$
with $D_{\infty}$. Hence, $\Omega^{(n)}$ is topologically isomorphic to ${\bf{Z}}_p$.
Now, recall that we fixed a topological generator $\gamma_1$ of $\Omega = \Omega^{(0)}$.
Let us fix a topological generator $\gamma_1^{(n)}$ of $\Omega^{(n)}$ lifting $\gamma_1$
that is compatible with respect to composition with the norm homomorphism ${\bf{N}}_{K_n/K}$ on
the associated character groups. We can then invoke the standard isomorphism
of completed group rings \begin{align}\label{bcisom} \mathcal{O}[[\Omega^{(n)}]]
\longrightarrow \mathcal{O}[[T_1^{(n)}]], ~~~\gamma_1^{(n)} \longmapsto T_1^{(n)} +1. \end{align}
Moreover, we can define from each two-variable $p$-adic $L$-function $\mathcal{L}_p(\rho_0; T_1, T_2)$
described above a basechange $p$-adic $L$-function in the lifted variable $T_1^{(n)}$,
\begin{align}\label{bcplfn} \mathcal{L}_p(\rho_0; T_1^{(n)}) &= \prod_{\psi_w}
\mathcal{L}_p(\rho_0; T_1, \psi_w(\gamma_2)-1) \in \mathcal{O}[[T_1^{(n)}]]. \end{align}
Here, the product ranges over all (wildly ramified) characters $\psi_w = \chi \circ {\bf{N}}$
of the Galois group ${\operatorname{Gal}}(K_n/K)$. Equivalently, by definition of the norm homomorphism
${\bf{N}}_{K_n/K}$, we have the defining relation \begin{align}\label{bcplfn2}
\mathcal{L}_p(\rho_0; T_1^{(n)}) &= \mathcal{L}_p(\rho_0; T_1, T_2)
\circ {\bf{N}}_{K_n/K} \in \mathcal{O}[[T_1^{(n)}]], \end{align} where the composition
with the norm homomorphism ${\bf{N}}_{K_n/K}$ is taken on the level of the associated
completed group ring element in $\mathcal{O}[[G]] \cong \mathcal{O}[[\Omega]]
[[\Gamma]]$. On the other hand, after taking images of these group ring
element(s) under $(\ref{bcisom})$, we claim it is a formal consequence of the definition
of the topological generator $\gamma_1^{(n)}$ that the Weierstrass
degree of $\mathcal{L}_p(\rho_0; T_1^{(n)})$ in $\mathcal{O}[[T_1^{(n)}]]$ must
equal the Weierstrass degree of $\mathcal{L}_p(\rho_0; T_1^{(0)}) =
\mathcal{L}_p(\rho_0; T_1, 0)$ in $\mathcal{O}[[T_1^{(0)}]] = \mathcal{O}[[T_1]]$.
Note that we have adopted the convention of taking the Weierstrass degree to be
infinite in the event that $\mathcal{L}_p(\rho_0; T_1^{(n)})$ is identically zero in
$\mathcal{O}[[T_1^{(n)}]]$. Let us now record these observations (with some
additional justification) as follows.
\begin{lemma}\label{bcwd} Fix a tamely ramified character $\mathcal{W}_0 = \rho_0$.
There exists a positive integer $n(0)$ such that the following property holds.
For each integer $n \geq n(0)$, the Weierstrass degree of $\mathcal{L}_p(\rho_0; T_1^{(n)})$
as an element of $\mathcal{O}[[T_1^{(n)}]]$ equals the Weierstrass degree of
$\mathcal{L}_p(\rho_0; T_1^{(0)}) = \mathcal{L}_p(\rho_0; T_1, 0)$
as an element of $\mathcal{O}[[T_1^{(0)}]] = \mathcal{O}[[T_1]]$.
\end{lemma}
\begin{proof}
We know by the Weierstrass preparation theorem that $\mathcal{L}_p(\rho_0; T_1, T_2)$
as an element of $R_2[[T_1]]=\mathcal{O}[[T_2]][[T_1]]$ can be expressed uniquely as
\begin{align*} \mathcal{L}_p(\rho_0; T_1, T_2)&= u_2(T_1)f_2(T_1)\varpi_2(T_2).
\end{align*} Here, $u_2(T_1)$ is a unit in $R_2[[T_1]]$, $f_2(T_1)$ is a distinguished
polynomial in $R_2[T_1]$, and $\varpi_2(T_2)$ is a power series in $\mathfrak{m}_2 \subset
R_2$ as in $(\ref{wp})$ above. Thus, we can write
\begin{align*} f_2(T_1) &= T_1^r + b_{r-1}(T_2)T_1^{r-1} + \ldots
+ b_0(T_2), \end{align*} for $r \geq 0$ an integer, with each $b_i(T_2)$
contained in the maximal ideal $\mathfrak{m}_2 = (\mathfrak{P}, T_2)$.
Recall that we write $\deg_W(\varpi_2)$ to denote the Weierstrass degree
of $\varpi_2(T_2)$ in $R_2$. Observe that the specialization
$\varpi_2(\psi_w(\gamma_2)-1)$ can then only vanish for at most finitely many
cyclotomic characters $\psi_w$ of $\Gamma$, i.e. for at most $\deg_W(\varpi_2)$ many characters.
In particular, we can deduce that there exists a (minimal) positive integer $q(0)$ such
that for all cyclotomic characters $\psi_w$ of ($p$-power) conductor $q \geq q(0)$,
the specialization $\varpi_2(\psi_w(\gamma_2)-1)$ does not vanish.
In particular, specializing $\mathcal{L}_p(\rho_0; T_1, T_2)$ to any
such character $\psi_w$ of $\Gamma$ does not
change the Weierstrass degree in $R_2[[T_1]]$. That is, we have in this case that
\begin{align}\label{iwd} \deg_W (\mathcal{L}_p(\rho_0; T_1, T_2)) &=
\deg_W(\mathcal{L}_p(\rho_0; T_1, \psi_w(\gamma_2)-1))\end{align}
as elements of $R_2[[T_1]]$ for any such character
$\psi_w$. This is a consequence of the fact that the Weierstrass degree
$\deg_W(\mathcal{L}_p(\rho_0; T_1, \psi_w(\gamma_2)-1))$ of
$\mathcal{L}_p(\rho_0; T_1, \psi_w(\gamma_2)-1)$ in $\mathcal{O}[[T_1]] \subset
R_2[[T_1]]$ is given by that of the specialized Weierstrass polynomial
\begin{align*}f_2(T_1)\vert_{\psi_w} &:= T_1^r + b_{r-1}(\psi_w(\gamma_2)-1)
T_1^{r-1} + \ldots + b_0(\psi_w(\gamma_2)-1),\end{align*} which clearly still has
degree $r = \deg_W(\mathcal{L}_p(\rho_0; T_1, T_2))$ in $T_1$. Let us therefore
define $n(0)$ to be the exponent of $p$ in $q(0)$. It then follows that for
any integer $n \geq n(0)$, we have the relation
\begin{align}\label{bwd} \deg_W(\mathcal{L}_p(\rho_0; T_1^{(n)})) &=
[K_n: K] \cdot \deg_W (\mathcal{L}_p(\rho_0; T_1, T_2)) \end{align}
as elements in the base power series ring $\mathcal{O}[[T_1]]$. On
the other hand, it is clear from the defining relation
$(\ref{bcplfn2})$ (taking images under the isomorphism
$(\ref{bcisom})$) that the Weierstrass degree of
$\mathcal{L}_p(\rho_0; T_1^{(n)})$ in the basechange
power series ring $\mathcal{O}[[T_1^{(n)}]]$ must be given
by this quantity $(\ref{bwd})$ divided by the index $[K_n:K]$, as required. \end{proof}
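To illustrate the key step in the proof above with a hypothetical example: if the distinguished polynomial were, say,
\begin{align*} f_2(T_1) &= T_1^2 + T_2 T_1 + (T_2 + p), \end{align*}
whose coefficients lie in $\mathfrak{m}_2 = (\mathfrak{P}, T_2)$, then specializing to a wildly ramified character $\psi_w$ with $\psi_w(\gamma_2) = \zeta$ a nontrivial $p$-power root of unity gives
\begin{align*} f_2(T_1)\vert_{\psi_w} &= T_1^2 + (\zeta - 1)T_1 + (\zeta - 1 + p). \end{align*}
Since $\zeta - 1$ has positive valuation, the specialized coefficients still lie in the maximal ideal of $\mathcal{O}$ (enlarging $\mathcal{O}$ if necessary to contain $\zeta$), so the specialization remains distinguished of the same degree $r = 2$ in $T_1$.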
Let us now consider the interpolation properties of these elements
$\mathcal{L}_p(\rho_0; T_1^{(n)})$. That is, by Theorem \ref{hpr},
each basechange $p$-adic $L$-function $\mathcal{L}_p(\rho_0; T_1^{(n)})$
in $\mathcal{O}[[T_1^{(n)}]]$ must interpolate the associated basechange
central values \begin{align*} L(1/2, \Pi_{f, n} \times \rho') &= \prod_{\psi_w}
L(1/2, f \times \rho_0\rho_w\psi_w) \end{align*} described in $(\ref{lfneq})$
above. Here again, we have taken the product over characters $\psi_w = \chi \circ {\bf{N}}$
of the Galois group ${\operatorname{Gal}}(K_n/K)$, and written $\rho'$ to denote the finite order character
of $\Omega^{(n)}$ defined by the composition $\rho \circ {\bf{N}}_{K_n/K}$, where
$\rho = \rho_0\rho_w$. Now, recall that we have the root number relation
$(\ref{rneq})$ under basechange. This in particular allows us to deduce
that for any ring class character $\rho' = \rho_0 \rho_w \circ {\bf{N}}_{K_n/K}$
factoring through $\varOmega^{(n)}$, the root number
$\epsilon(1/2, \Pi_{f, n} \times \rho')$ of the basechange $L$-function
$L(s, \Pi_{f, n} \times \rho')$ is given by the root number
$\epsilon(1/2, f \times \theta_K)$ of the base $L$-function $L(s, f \times \theta_K)$.
In particular, if $\epsilon(1/2, f \times \theta_K)$ equals $-1$, then it is easy to see
from the functional equation satisfied by each of the (self-dual) $L$-functions
$L(s, \Pi_{f, n} \times \rho' )$ that the basechange $p$-adic $L$-function
$\mathcal{L}_p(\rho_0; T_1^{(n)})$ must vanish identically in
$\mathcal{O}[[T_1^{(n)}]]$. To surmount this issue, we now
make the following modification in our construction of the
basechange $p$-adic $L$-functions $\mathcal{L}_p(\rho_0; T_1^{(n)})$
in the setting where the root number $\epsilon(1/2, f \times \theta_K)$ is $-1$.
Thus, when $\epsilon(1/2, f \times \theta_K)$ is $-1$, let us define the following
incomplete basechange element \begin{align}\label{ibcplfn} \mathcal{L}_p^{\star}
(\rho_0; T_1^{(n)}) &= \prod_{\psi_w \neq {\bf{1}}} \mathcal{L}_p(\rho_0; T_1, \psi_w(\gamma_2)-1)
\in \mathcal{O}[[T_1]].\end{align} Here, the product runs over nontrivial characters
$\psi_w = \chi \circ {\bf{N}}$ of ${\operatorname{Gal}}(K_n/K)$, and the product
$\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ is a priori only defined
in the base power series ring $\mathcal{O}[[T_1]]$.
\begin{lemma}\label{ie} For each integer $n \geq 1$, the incomplete basechange element
$\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ of $(\ref{ibcplfn})$ above defines an element of the
power series ring $\mathcal{O}[[T_1^{(n)}]]$. Moreover, the Weierstrass degree of
$\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ as an element of $\mathcal{O}[[T_1^{(n)}]]$ is finite.
\end{lemma}
\begin{proof} Fix an integer $n \geq 1$. To show the first part of the claim, observe that each
finite order character $\rho_{w}^{(n)}$ of the Galois group $\Omega^{(n)}$ defines a finite
order character $\rho_w$ of the Galois group $\Omega^{(0)}$ by restriction to $K$. That is, each
such character $\rho_{w}^{(n)}$ arises as the composition of some finite order character $\rho_w$ of
$\Omega^{(0)}$ with the norm homomorphism ${\bf{N}}_{K_n/K}$, as composition with ${\bf{N}}_{K_n/K}$
induces a natural isomorphism of Galois groups $\Omega^{(n)} \cong \Omega^{(0)}$.
This allows us to view $\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ as
an element of $\mathcal{O}[[\Omega^{(n)}]]$, whence taking its image under the isomorphism
$(\ref{bcisom})$ defines an element of $\mathcal{O}[[T_1^{(n)}]]$. More precisely,
each $\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ defines an element $\lambda^{(n)}$ say in the
completed group ring $\mathcal{O}[[\Omega^{(n)}]]$, characterized as an $\mathcal{O}$-valued
measure on $\Omega^{(n)}$ via the following interpolation property: for each finite order character
$\rho_w^{(n)} = \rho_w \circ {\bf{N}}_{K_n/K}$ of $\Omega^{(n)}$,
we have the identity \begin{align*} \rho_w^{(n)} ( \lambda^{(n)}) &= \prod_{\psi_w \neq {\bf{1}}}
\eta(f, \rho_0 \rho_w \psi_w) \cdot \frac{L(1/2, f \times \rho_0 \rho_w \psi_w)}
{8 \pi^2 \langle f, f \rangle} \in \overline{\bf{Q}}_p . \end{align*} Here, the
product ranges over nontrivial characters $\psi_w = \chi \circ {\bf{N}}$ of
${\operatorname{Gal}}(K_n/K)$, $\rho_w$ as before denotes the underlying
character of $\Omega$, and all other notations are the same as in
Theorem \ref{hpr}. That $\lambda^{(n)}$ defines a distribution on $\Omega^{(n)}$
is then a formal consequence of the fact that the convolution measure on $\Omega^{(0)}$
corresponding to the product $\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ defines a
distribution on $\Omega^{(0)} \cong \Omega^{(n)}$.
To show the second part of the claim, we appeal to the proof of Proposition
\ref{spec} in the setting where the root number $\epsilon(1/2, f \times \theta_K)$
is $-1$. This in particular allows us to deduce the nontriviality of
$\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ in $\mathcal{O}[[T_1^{(n)}]]$ as an
interpolation series for the associated complex values \begin{align*}L^{\star}
(1/2, \Pi_{f, n} \times \rho_n) &:= \prod_{\psi_w \neq {\bf{1}}}
L(1/2, f \times \rho_0\rho_w\psi_w).\end{align*} In particular, using the result of Proposition
\ref{spec} in this setting, we may then invoke the Weierstrass preparation theorem to deduce
that $\mathcal{L}_p^{\star}(\rho_0; T_1^{(n)})$ has finite Weierstrass degree as an element
of the basechange power series ring $\mathcal{O}[[T_1^{(n)}]]$. \end{proof}
Let us now introduce the following basechange $p$-adic $L$-function
in the setting where the root number $\epsilon(1/2, f \times \theta_K)$ is $-1$.
Consider the element $\mathcal{L}_p^{\star}(\rho_0; T_1^{(1)})$
defined in $(\ref{ibcplfn})$ above, which by Lemma \ref{ie} can be viewed
as an element in the power series ring $\mathcal{O}[[T_1^{(1)}]]$. Let ${\bf{N}}_{K_n/K_1}$
denote the norm homomorphism from $K_n$ to $K_1$. Given $n \geq 1$ an integer,
let us then define an associated element \begin{align}\label{bce2} g_p(\rho_0; T_1^{(n)})
&= \mathcal{L}_p^{\star}(\rho_0; T_1^{(1)}) \circ {\bf{N}}_{K_n/K_1} \end{align}
in the power series ring $\mathcal{O}[[T_1^{(n)}]]$, where the composition with
${\bf{N}}_{K_n/K_1}$ is on the underlying completed group ring element $\lambda^{(1)}$
in $\mathcal{O}[[\Omega^{(1)}]]$ described for Lemma \ref{ie} above. Note that the collection of
these elements $g_p(\rho_0; T_1^{(n)})$ for all $n \geq 1$ contains information about the
specialization of the two-variable $p$-adic $L$-function $\mathcal{L}_p(\rho_0; T_1, T_2)$
to each nontrivial character $\psi_w$ of $\Gamma$. We also have the following analogous
version of Lemma \ref{bcwd} for each $g_p(\rho_0; T_1^{(n)})$ as a power series in $\mathcal{O}[[T_1^{(n)}]]$.
\begin{corollary}\label{ibcwd} Fix a tamely ramified character $\mathcal{W}_0 = \rho_0$.
Let $n(0) \geq 0$ be the integer of Lemma \ref{bcwd} above. Then, for each integer
$n \geq n(0)$, the Weierstrass degree of $g_p(\rho_0; T_1^{(n)})$ as an element
of $\mathcal{O}[[T_1^{(n)}]]$ is equal to the Weierstrass degree of
$g_p(\rho_0; T_1^{(1)})$ as an element of $\mathcal{O}[[T_1^{(1)}]]$.\end{corollary}
\begin{proof} Fix an integer $n \geq n(0)$. We can assume without loss of generality
that $n \geq 2$. By the argument of Lemma \ref{bcwd}, it is clear that
$g_p(\rho_0; T_1^{(n)})$ has Weierstrass degree equal to
\begin{align*}\varphi(p^n) \cdot \deg_W(\mathcal{L}_p(\rho_0; T_1, T_2))
&= [K_n:K_1]\left( [K_1:K] -1\right) \cdot \deg_W(\mathcal{L}_p(\rho_0; T_1, T_2))\end{align*}
as an element in $\mathcal{O}[[T_1]]$, where $\deg_W(\mathcal{L}_p(\rho_0; T_1, T_2))$
denotes the Weierstrass degree of the two-variable $p$-adic $L$-function
$\mathcal{L}_p(\rho_0; T_1, T_2)$ in $R_2[[T_1]]= \mathcal{O}[[T_2]][[T_1]]$.
Now, we saw in Lemma \ref{ie} that $\mathcal{L}_p^{\star}(\rho_0; T_1^{(1)})$ defines an
element in the power series ring $\mathcal{O}[[T_1^{(1)}]]$, of some finite
Weierstrass degree $d(1)$ say. Following the argument of Lemma \ref{bcwd}, we deduce
that the Weierstrass degree of $\mathcal{L}_p^{\star}(\rho_0; T_1^{(1)}) \circ {\bf{N}}_{K_n/K_1}$
in $\mathcal{O}[[T_1^{(1)}]]$ is equal to $p^{n-1}d(1)= [K_n:K_1]d(1) $. This formula can be
viewed as a consequence of the fact that specializations to characters of ${\operatorname{Gal}}(K_n/K_1)$
do not change the Weierstrass degree in $T_1^{(1)}$, or to be more precise the Weierstrass degree in $T_1$
(as explained in the proof of Lemma \ref{bcwd}). It is then a formal consequence that the
Weierstrass degree of $g_p(\rho_0; T_1^{(n)})= \mathcal{L}_p^{\star}(\rho_0; T_1^{(1)})
\circ {\bf{N}}_{K_n/K_1}$ in $\mathcal{O}[[T_1^{(n)}]]$ is given by $d(1)$, or in other words by its
Weierstrass degree in $\mathcal{O}[[T_1^{(1)}]]$ divided by the index $[K_n:K_1]$. \end{proof}
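For the reader's convenience, we recall the standard statement of the Weierstrass preparation theorem that underlies all of the degree computations above (stated here in a generic form, with $\pi$ a fixed uniformizer of $\mathcal{O}$): any nonzero $f(T) \in \mathcal{O}[[T]]$ admits a unique factorization
\begin{align*} f(T) &= \pi^{\mu} \cdot P(T) \cdot u(T), \end{align*}
where $\mu \geq 0$ is an integer, $u(T) \in \mathcal{O}[[T]]^{\times}$ is a unit power series, and $P(T) = T^d + a_{d-1}T^{d-1} + \cdots + a_0$ is a distinguished polynomial, i.e. each $a_i \in (\pi)$. The integer $d$ is the Weierstrass degree $\deg_W(f)$, and it counts the zeros of $f$ (with multiplicity) in the open $p$-adic unit disc.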
These results in particular allow us to make uniform the choice
of minimal ring class conductor $c(n)$ in Proposition \ref{spec} above. That
is, we can now establish the following main result.
\begin{theorem}\label{main}
Fix a tamely ramified character $\mathcal{W}_0 = \rho_0$ of $G_0$.
Let $n \geq 0$ be any integer. There exists an integer $c(0) \geq 0$, independent
of the choice of $n$, such that for all ring class conductors $c \geq c(0)$, the associated Galois average
$\delta_{c, p^n; \rho_0}^{(k)}$ does not vanish. Here, $k = 0$ or $1$ according to whether
the pair $(f, \mathcal{W})$ is generic or exceptional. \end{theorem}
\begin{proof}
Recall the main result of Proposition \ref{spec} above, which asserts
that for each integer $n \geq 0$, there exists an integer $c(n) \geq 0$ such that for all
($p$-power) ring class conductors $c \geq c(n)$, the associated Galois average
$\delta_{c, p^n; \rho_0}^{(k)}$ does not vanish. We can assume without loss of
generality that we are in the generic setting with $k=0$, since otherwise the
result is known by the relevant theorems of \cite{Cor} and \cite{CV}. We now
proceed by dividing into cases on the root number $\epsilon(1/2, f \times \theta_K)$,
as in the proof of Proposition \ref{spec} above.
Let us first suppose that the root number $\epsilon(1/2, f \times \theta_K)$ is $+1$.
Consider the basechange $p$-adic $L$-functions $\mathcal{L}_p(\rho_0; T_1^{(n)})$
defined in $(\ref{bcplfn})$ above. Starting with $n=0$, we see from the
Weierstrass preparation theorem that $c(0)$ is determined by the Weierstrass
degree $\deg_W(0)$ of $\mathcal{L}_p(\rho_0; T_1^{(0)}) = \mathcal{L}_p(\rho_0; T_1, 0)$
in $\mathcal{O}[[T_1]]$. More precisely, writing $r(0)$ to denote the number of ring class characters
$\rho = \rho_0 \rho_w$ of conductor $c \leq c(0)$ for which the Galois average
$\delta_{c, p^0; \rho_0}^{(0)}$ vanishes, it is easy to see from the interpolation
property of Theorem \ref{hpr} that $r(0) = \deg_W(0)$. Let us now fix an integer
$n \geq n(0)$, where $n(0) \geq 0$ is the integer of Lemma \ref{bcwd} above. Writing
$r(n)$ to denote the number of basechange ring class characters
$\rho' = \rho_0\rho_w \circ {\bf{N}}_{K_n/K}$ of conductor
$c \leq c(n)$ for which the Galois average $\delta_{c, p^n; \rho_0}^{(0)}$ vanishes,
it is then easy to see from the interpolation property of Theorem \ref{hpr} along with
Artin formalism for basechange values $(\ref{lfneq})$ that the Weierstrass degree
$\deg_W(n)$ of $\mathcal{L}_p(\rho_0; T_1^{(n)})$ in $\mathcal{O}[[T_1^{(n)}]]$
is equal to $r(n)$. The result for $n \geq n(0)$ then follows from that of Lemma
\ref{bcwd}, which shows that $\deg_W(0) = \deg_W(n)$, and hence that we can
take $c(n) =c(n(0))$ for each integer $n \geq n(0)$ in the statement of Proposition
\ref{spec}. The result for integers $n \leq n(0)$ then follows from another
application of Artin formalism to the associated basechange central values
$L(1/2, \Pi_{f, n(0)} \times \rho')$, where $\rho' = \rho_0\rho_w \circ {\bf{N}}_{K_{n(0)}/K}$
is any ring class character of conductor $c \geq c(0)$.
Let us now suppose that the root number $\epsilon(1/2, f \times \theta_K)$ is $-1$,
in which case we can assume without loss of generality that $n \geq 1$ (since the case
of $n =0$ corresponds to that of the exceptional setting treated by \cite{Cor} and \cite{CV}).
We can then use the same argument as given above for the case when the root number
$\epsilon(1/2, f \times \theta_K)$ is $+1$, replacing the basechange $p$-adic $L$-function
$\mathcal{L}_p(\rho_0; T_1^{(n)})$ (which vanishes identically by the associated functional
equation) with the element $g_p(\rho_0; T_1^{(n)})$ defined in $(\ref{bce2})$
above (which does not vanish identically by the proof of Proposition \ref{spec}), using
Corollary \ref{ibcwd} in lieu of Lemma \ref{bcwd} to obtain the required invariance of
Weierstrass degree in the basechange variable $T_1^{(n)}$ for integers $n \geq n(0)$.\end{proof}
\end{remark}
\section*{Data and methodology}
We start by considering the six strong lensing time delay measurements, of which five were analyzed blindly, from the H0LiCOW collaboration in~\cite{Wong:2019kwg,Suyu:2009by,Suyu:2013kha,Wong:2016dpo,Birrer:2018vtm,Rusu:2019xrq,Chen:2019ejq}. The difference of excess time delays between two lensed images (and angular positions $\mathbf{\theta}_i$ and $\mathbf{\theta}_j$) of a source (at angular position $\mathbf{\beta}$) is given by:
\begin{align} \label{Eq:TimeDelay}
\Delta t_{ij} = \frac{D_{\Delta t}}{c} \Bigg[ \frac{(\mathbf{\theta}_i - \mathbf{\beta})^2}{2} - \psi (\mathbf{\theta}_i) - \frac{(\mathbf{\theta}_j - \mathbf{\beta})^2}{2} + \psi (\mathbf{\theta}_j) \Bigg] \,,
\end{align}
where $\psi(\mathbf{\theta})$ is the lens potential. By measuring time delays and modelling the gravitational potential of the source and the lens, one can infer the time delay distance:
\begin{align} \label{Eq:TimeDelayDistance}
D_{\Delta t} \equiv (1+z_l)\frac{D_l D_s}{D_{ls}} \,,
\end{align}
where, hereafter, subscripts $l$ and $s$ indicate quantities referring to the lens and the source, respectively, and $D$ is the angular diameter distance of the corresponding objects.
As we can see, the measured time delay distance is sensitive to the expansion history of the universe through its dependence on the angular diameter distance at two different redshifts.
For the measured systems the spread in redshift is large, with the lens redshift ranging in $z\in [0.3,0.7]$ and the source redshifts ranging in $z\in [0.6,1.7]$ thus making the time delay distance measurements sensitive to the shape of the distance-redshift relation.
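As an illustration of Eq.~\eqref{Eq:TimeDelayDistance}, the sketch below computes $D_{\Delta t}$ by numerical integration for a flat $\Lambda$CDM cosmology; the Hubble constant and matter density values are arbitrary placeholders (our analysis itself is model-independent and does not rely on such a computation).

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0=70.0, om=0.3):
    """Line-of-sight comoving distance [Mpc] in flat LCDM (placeholder values)."""
    ez = lambda zp: np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    integral, _ = quad(lambda zp: 1.0 / ez(zp), 0.0, z)
    return C_KM_S / h0 * integral

def time_delay_distance(z_l, z_s, h0=70.0, om=0.3):
    """D_dt = (1 + z_l) D_l D_s / D_ls, with angular diameter distances."""
    chi_l = comoving_distance(z_l, h0, om)
    chi_s = comoving_distance(z_s, h0, om)
    d_l = chi_l / (1.0 + z_l)             # lens
    d_s = chi_s / (1.0 + z_s)             # source
    d_ls = (chi_s - chi_l) / (1.0 + z_s)  # lens-to-source (flat universe)
    return (1.0 + z_l) * d_l * d_s / d_ls
```

Since each angular diameter distance scales as $c/H_0$, $D_{\Delta t} \propto 1/H_0$, which is why time delay measurements constrain the Hubble constant.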
In addition to the time delay distance measurements, lens kinematic data can be used to estimate the angular diameter distance of the lens, $D_l$, by comparing the dynamical mass with the lensing mass (which depends on distances) ~\cite{Paraficz:2009xj,Chen:2019ejq, Jee:2014uxa, Birrer:2015fsm, Jee:2019hah}.
The measurements of $D_{\Delta t}$ and $D_l$ are in general correlated since they use different aspects of the same data \cite{Birrer:2015fsm, Birrer:2018vtm}.
Since the H0LiCOW collaboration has not yet publicly released the full posterior of the two distance measurements for all but one of its observations, we separately consider the marginalized constraints on $D_{\Delta t}$ and $D_l$.
We observe that, if we consider logarithmic distances, $\log_{10}D_{\Delta t}$ and $\log_{10}D_l$, then the posterior distribution of the H0LiCOW measurements becomes practically indistinguishable from Gaussian.
The reason why logarithmic distances are likely to be Gaussian distributed, or very close to that, is that they are computed as the difference of well measured quantities rather than their ratios.
We discuss further details on the treatment of the strong lensing measurements in Appendix~\ref{App:SLGaussianTest}.
This allows us to convert the constraints on $D_{\Delta t}$ and $D_l$ from~\cite{Wong:2019kwg} into constraints on $\log_{10}D_{\Delta t}$ and $\log_{10}D_l$, properly accounting for the Jacobian of the transformation, and to consider the latter to be Gaussian distributed.
We check that cosmological results obtained by fitting the logarithmic measurements reproduce the ones reported in~\cite{Wong:2019kwg}, as detailed in Appendix~\ref{App:SLGaussianTest}.
We then consider the Pantheon SN compilation~\cite{Scolnic:2017caz} that provides accurate measurements of relative distances across the redshift range $z\in [0.01,2.26]$ with $1048$ SN measurements.
We use the measurement of the Hubble constant of $H_0=74.03\pm 1.42$ from the SH0ES project~\cite{Riess:2019cxk}. Cepheid variables are used to calibrate the absolute magnitude of the SN so that the measured distance modulus can be used to directly estimate luminosity distances.
Further details on the calibration of the SN distance modulus can be found in Appendix~\ref{App:SN_cal}. While the SH0ES analysis is the most mature and precise of the local measurements of $H_0$, analyses with alternative approaches are underway. A recent analysis using the Tip of the Red Giant Branch method by the Carnegie-Chicago Hubble Program~\citep{Freedman:2019jwv} yields a lower value of $H_0$ with somewhat larger uncertainty than SH0ES (but there is some debate about their analysis, see~\cite{Yuan:2019npk}).
We test this alternative result also in Appendix~\ref{App:SL_Bias}.
Note that the SN sample cannot be calibrated using the $H_0$ determination from CMB measurements in a model independent way.
Not surprisingly, when using the standard cosmological model determination of the sound horizon scale, \cite{Macaulay:2018fxi,Alam:2016hwk,Aubourg:2014yra} find $H_0$ consistent with the CMB value.
Finally we note that bias corrections of SN luminosities assume the $\Lambda$CDM model and in principle this breaks model independence, but as discussed in~\cite{Kessler:2016uwi} it is a very small effect.
Given that the redshift range spanned by SN observations is populated by a large number of measurements we can interpolate between them to obtain an estimate of distances as a function of redshift.
This is achieved by Gaussian process regression of the measured distance modulus.
The Gaussian process kernel and kernel parameters are chosen so that the Gaussian process inference is as close as possible to the binned Pantheon SN sample.
We find that this procedure is flexible enough to fully capture all the features that are present in the binned SN sample, starting from the full sample, thus effectively obtaining a version of the binned SN sample that is continuous in redshift. We also test the GP covariance matrix, and compare it to the result from polynomial interpolation which tends to significantly underestimate the error bars at intermediate redshifts.
Further details on the implementation of the Gaussian process regression are discussed in Appendix~\ref{App:SN_GP}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{figure_2}
\caption{\label{Fig:DistanceRatioComparison}
Comparison of the H0LiCOW measured time delay distance ratios (a) and lens distance ratios (b) with the Pantheon SN predicted values (shaded bands).
The ratio of angular diameter distances predicted from BAO are also shown in Panel b).
The time delay distances in Panel a) agree at the $78.2\%$ confidence level
while the lens distances in Panel b) agree at the $54.5\%$ confidence level.
The three BAO points in Panel b) and predictions from Pantheon SN agree at $95.4\%$ confidence level.
Note that since only distance ratios are considered in this test, the results are independent of the value of $H_0$.
}
\end{figure*}
With the Gaussian process for the SN we can now predict, with Eq.~\eqref{Eq:TimeDelayDistance}, the time delay and lens distances that H0LiCOW should have observed, independently of the cosmological model. We note that the predicted distances from SN are correlated; we take this into account when computing the statistical significance of the reported results.
\section*{Results}
In Fig.~\ref{Fig:DistanceComparison} we show the results of the SN prediction for the time delay and lens distance measurements.
In Panel a) we can see that the time delay distance measurement predicted by SN agrees with the direct measurements from H0LiCOW.
The uncertainties in the two methods are comparable.
In Panel b) the lens distance as directly measured by H0LiCOW and predicted by SN are compared.
As in the previous case these are largely in agreement.
The error bars of the SN predicted distances are significantly smaller than the H0LiCOW ones because this quantity is directly measured by a large number of SN at intermediate redshifts.
To precisely quantify the agreement between the H0LiCOW measurements and the SN ones, we exploit the fact that both logarithmic distances are close to Gaussian distributed.
The difference between the two, weighted by the inverse of the sum of the two covariances, is then chi-squared distributed with a number of degrees of freedom equal to the number of data points.
This results in a probability of agreement of $85.7\%$ for the time delay distances and $63.6\%$ for the lens distance.
Both results indicate very good agreement.
Further, in Fig.~\ref{Fig:DistanceComparison}, there are no outlier measurements.
Since the two distributions are very close, possible non-Gaussianities in the tails of the distributions are not expected to change the results significantly. This also rules out the possibility of a systematic uncertainty in the strong lensing sample at a level larger than the reported error bars.
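The agreement computation described above can be sketched as follows; the data vectors and covariances here are illustrative placeholders, not the H0LiCOW or SN logarithmic distances.

```python
import numpy as np
from scipy import stats

def agreement_probability(d1, cov1, d2, cov2):
    """Chi-squared probability that two Gaussian data vectors agree.

    d1 - d2 is Gaussian with covariance cov1 + cov2, so the quadratic
    form below follows a chi-squared law with len(d1) degrees of freedom.
    """
    diff = np.asarray(d1, float) - np.asarray(d2, float)
    cov = np.asarray(cov1, float) + np.asarray(cov2, float)
    chi2 = float(diff @ np.linalg.solve(cov, diff))
    return stats.chi2.sf(chi2, df=diff.size)

# Placeholder numbers only:
logd_lens = np.array([3.45, 3.52, 3.60])
logd_sn = np.array([3.47, 3.50, 3.63])
cov_lens = np.diag([0.02, 0.02, 0.03]) ** 2
cov_sn = np.diag([0.01, 0.01, 0.02]) ** 2
p_agree = agreement_probability(logd_lens, cov_lens, logd_sn, cov_sn)
```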
Beyond the above tests, it is crucial to test whether the SN and strong lensing measurements agree on the amplitude and ``shape'' (the redshift dependence) of the distance-redshift relation.
The amplitude test relates to possible discrepancies in the determination of the Hubble constant, while being fully independent of the expansion history.
The shape test would tell us if there is agreement between the two measurements regardless of the overall calibration that measures the Hubble constant.
The amplitude of the distance-redshift relation can be tested by looking at the average residual logarithmic distance.
The H0LiCOW data measure the average amplitude to be $\langle \log_{10}D_{\Delta t} \rangle = 3.514 \pm 0.012$ while the SN predict the average amplitude to be $\langle \log_{10}D_{\Delta t}^{\rm SN} \rangle = 3.517 \pm 0.017$. These two determinations are in full agreement at $89\%$ confidence level and provide a test at about $5\%$ precision.
For lens distances we similarly have $\langle \log_{10}D_{l} \rangle = 3.026 \pm 0.044$ and $\langle \log_{10}D_{l}^{\rm SN} \rangle = 3.0416 \pm 0.0088$, which are again in full agreement, at about $10\%$ precision.
To test the redshift dependence of the measurements we can consider distance ratios, or differences in logarithmic distances.
Since the Hubble constant enters as an overall distance multiplier, its value cancels in the ratio.
We then take both data sets and consider the ratio with respect to the system with the lowest lens redshift.
Although arbitrary, this choice does not influence the outcome of the test. All possible choices would just be different linear combinations of the same data thus leaving the statistical significance unchanged.
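A toy numerical check makes the cancellation explicit: any distance-redshift relation factorizes as $(c/H_0)$ times a dimensionless shape, so differences of logarithmic distances with respect to a reference system are identical for any value of $H_0$ (the shape values below are arbitrary).

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def log_distance_ratios(log_d):
    """Differences of log10 distances w.r.t. the first (reference) system."""
    log_d = np.asarray(log_d, float)
    return log_d[1:] - log_d[0]

# Arbitrary dimensionless shape of d(z) at four redshifts:
shape = np.array([1.0, 1.4, 2.1, 2.6])
ratios = [log_distance_ratios(np.log10(shape * C_KM_S / h0))
          for h0 in (67.4, 74.0)]
# The two ratio vectors are identical: H0 cancels.
```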
We show in Fig.~\ref{Fig:DistanceRatioComparison} the comparison of distance ratios, as predicted by SN measurements and as estimated by H0LiCOW.
As in Fig.~\ref{Fig:DistanceComparison}, the ratios also show agreement between the two measurements.
And since logarithmic distance differences are also close to Gaussian distributed, we can easily compute the statistical significance of their agreement.
The time delay distance ratios are in agreement at the $78.2\%$ level and the lens distance ratios are in agreement at the $54.5\%$ confidence level.
We note that the H0LiCOW measurements are incompatible with no redshift dependence and hence detect the shape of the distance-redshift relation at $34\sigma$ in Panel a) and $2.6\sigma$ in Panel b).
We can further compare these distance ratios with predictions from the BAO measurements. The BAO measurements are sensitive to the ratio of the angular diameter distance of the galaxies to the photon-baryon sound horizon scale at the epoch of decoupling.
Therefore, we can take the ratio of BAO measurements at two different redshifts and remove any sensitivity to the overall calibration with the sound horizon.
In this analysis we consider three BAO measurements from BOSS DR12 data~\cite{Alam:2016hwk} and one measurement from the eBOSS DR14 LRG data~\cite{Bautista:2017wwp}. We use the lowest redshift DR12 measurement as our reference BAO measurement in the distance ratio test.
The comparison between the BAO and SN predicted distance ratios can be performed as before and results are in agreement at the 95.4\% confidence level. To qualitatively compare the results to the H0LiCOW measurements we shift the overall amplitude to match the H0LiCOW reference redshift. We compare the distance ratios of all three probes in Fig.~\ref{Fig:DistanceRatioComparison}.
All measurements show remarkable consistency in this test, which is independent of $H_0$ and of the cosmological model.
Moreover, if we use the SN Gaussian process to calibrate the BAO angular diameter distances, as discussed in~\cite{Aylor:2018drw}, we can directly measure the sound horizon scale independently of the cosmological model.
Individually the four calibrated BAO measurements that we consider are in good agreement on the sound horizon determination, giving: $137.39 \pm 3.91$ Mpc, $136.05 \pm 3.83$ Mpc and $137.48 \pm 3.92$ Mpc for the three SDSS DR12 observations in increasing order of their effective redshift; $133.77 \pm 6.24$ Mpc for the SDSS DR14 LRG observation.
Jointly the four measurements result in a sound horizon measurement of $135.92 \pm 3.26$ Mpc, which is in $3.3 \sigma$ tension with the {\it Planck} results of $147.09 \pm 0.26 $ Mpc~\cite{Aghanim:2018eyx}.
This effectively accounts for a large portion of the $4.4 \sigma$ Hubble constant tension.
Since the sound horizon is constant after recombination, this part of the Hubble constant tension cannot be resolved by introducing new physics between the redshift of the BAO measurements and recombination.
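A simplified version of this combination is sketched below; note that the naive inverse-variance weighting here ignores the correlations among the DR12 measurements, so it will not exactly reproduce the joint value of $135.92 \pm 3.26$ Mpc quoted above.

```python
import numpy as np

def inverse_variance_combine(values, sigmas):
    """Weighted mean and uncertainty, assuming uncorrelated measurements."""
    w = 1.0 / np.asarray(sigmas, float) ** 2
    mean = np.sum(w * np.asarray(values, float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# SN-calibrated sound horizon values quoted above [Mpc]:
rs_vals = [137.39, 136.05, 137.48, 133.77]
rs_sigs = [3.91, 3.83, 3.92, 6.24]
rs, rs_sig = inverse_variance_combine(rs_vals, rs_sigs)

# Tension with the Planck determination, in units of the joint uncertainty:
rs_planck, sig_planck = 147.09, 0.26
tension = abs(rs_planck - rs) / np.hypot(rs_sig, sig_planck)
```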
\section*{Summary and discussion}
We have presented a new way of testing the consistency of distance measurements from supernovae and strong lensing time delays that is independent of the cosmological model.
Our method exploits the power of SN observations across a large range of redshifts to directly predict the outcome of strong lensing measurements. We use Gaussian process regression to interpolate across redshift and show that these provide robust results. Tests of the covariance matrix show that it is more reliable than polynomial interpolation which tends to underestimate the error bars at intermediate redshifts.
We devise three types of tests sensitive to the distance-redshift relation: one directly tests distances (and therefore biases in $H_0$), one tests their calibration and another tests distance ratios.
While the first test is sensitive to both calibration and redshift dependent systematic effects, the other two single out and test these two aspects independently.
We find that all tests report excellent agreement between the Pantheon supernovae, calibrated with the SH0ES distance ladder, and H0LiCOW time delay measurements.
Given the model independence of our tests we conclude that, at present sensitivity,
there is no indication for the presence of unaccounted systematic effects in either data set. In particular, if the distance-redshift relation inferred from SN is correct, there is no evidence of a residual bias due to mass modeling uncertainties in the strong lensing data. It is possible that both measurements have a bias of the same sign, magnitude and redshift dependence -- this unlikely scenario would evade our tests.
The possibility that uncertainties in mass modeling, in particular the mass sheet degeneracy, have biased the strong lensing determination of $H_0$ has been discussed in the literature \citep{Schneider:2013wga,Xu:2015dra, Unruh:2016adf,Sonnenfeld:2017dca,Kochanek:2019ruu}.
Recently~\cite{Kochanek:2019ruu} has suggested that this leads to at least a 10\% level of uncertainty on $H_0$ in present and near future measurements.
We explicitly check the impact that an unknown residual systematic would have on both $D_{\Delta t}$ and $D_{l}$. We create fake strong lensing observations and test for several types of unaccounted-for errors:
\begin{itemize}
\item Biases in the distance-redshift relation that directly impact $H_0$ or the amplitude of the distance-redshift relation. An 8\% bias that would fully reconcile the tension with {\it Planck} is disfavored at nearly 3$\sigma$.
\item Redshift dependent biases. We test the ratio of distances as well as a bias that scales as $(1+z)$, motivated by uncertainties in lens mass modeling. These are also disfavored at over 2$\sigma$.
\item Underestimated errors. We artificially increase the covariance in the lensing measurements to obtain 10\% uncertainty on $H_0$. We then find that the probability of disagreement with SN inferred distances drops to 0.05\%, i.e. over 3$\sigma$ evidence against inflated errors.
\end{itemize}
Unknown measurement systematic effects and/or incorrect lens modelling are likely to affect different systems differently, so they would show up as an unexpected redshift dependence of the measurements. On the other hand if they affected all lenses the same way, they would affect the amplitude.
These problems are not guaranteed to show up as a discrepancy in the determination of some cosmological parameter, so our tests provide an independent check. We discuss the detailed results of these tests in Appendix~\ref{App:SL_Bias}.
Strong lensing and SN determination of distances almost certainly do not share the same systematic uncertainties. The consistency we have found between the two sets of measurements implies that the discrepancy with the CMB is more likely due to new physical phenomena, or a potential systematic error in the CMB analysis (which is considered unlikely as it has been tested extensively~\cite{Addison:2015wyg,Aghanim:2016sns,Hou:2017smn,Aylor:2017haa,Aghanim:2019ame}).
With reduced uncertainties, as expected by increases in the number of SN~\cite{Abbott:2018wog, Scolnic:2019apa, Hlozek:2019vjs,Shajib:2019toy} and new lens systems~\cite{Huber:2019ljb, Yildirim:2019vlv}, we expect our methodology to provide more stringent tests for systematic uncertainties. In the near term joint analysis of the full $D_{\Delta t}$ and $D_{l}$ posterior will also strengthen our tests.
With a factor of two reduction in statistical uncertainties, we could rule out a 5 percent bias at the 3$\sigma$ level, independent of the cosmological model -- this would certainly strengthen the case for new physics.
We find that measurements of BAO distances are also in agreement with predictions from SN. Once calibrated with SN, the BAO measurements are in tension with the CMB over the determination of the sound horizon.
We find that this accounts for a large part of the statistical significance of the Hubble constant tension.
Since the sound horizon is constant after recombination, this tension is largely independent of the expansion history between the redshift of the measured BAOs, of about $z\sim 0.7$,
and recombination.
The structure of the CMB peaks measures the angular size of the sound horizon very precisely. Thus an explanation of the Hubble constant tension that would significantly change the inference of the CMB parameters must rely on new physical phenomena before recombination (see~\cite{Knox:2019rjx} and references therein).
We implicitly assumed in our analysis that the universe is spatially flat. It is possible to perform the same tests assuming a curved universe. Since this will introduce an extra free parameter our tests would show stronger consistency.
We defer an analysis of time delay distances and their implications for curvature to a future study (see \cite{Collett:2019hrr, Arendse:2019hev} for related analyses).
\acknowledgments
We thank
Eric Baxter,
Gary Bernstein,
Simon Birrer,
Dillon Brout,
Neal Dalal,
Wayne Hu,
Mike Jarvis and
Sherry Suyu
for helpful discussions and comments on the paper.
SP is supported in part by the U.S. National Science Foundation award AST-1440226.
MR and BJ are supported in part by NASA ATP Grant No. NNH17ZDA001N, and by funds provided by the Center for Particle Cosmology.
BJ is supported in part by the US Department of Energy Grant No. DE-SC0007901.
Computing resources were provided by the University of Chicago Research Computing Center through the Kavli Institute for Cosmological Physics at the University of Chicago.
\section{INTRODUCTION}
Research on autonomous agents and Multi-Agent Systems (MAS) has been focusing on models and methods for the flexible exploitation of environmental resources. To engineer large-scale and affordance-rich MAS, the Web has been investigated as an enabler for autonomous agents to make the most of their reasoning and decision-making abilities in discovering and exploiting affordances of Web resources for their complex goals~\cite{ciortea2019decade}. The prime mechanism for affordance discovery and exploitation on the Web is \textit{hypermedia-driven interaction}, which drives interaction based on representations of combined high-level \textit{semantic information} and lower-level \textit{controls}~\cite{fielding_2008}: Agents would \emph{reason} about their actions based on semantic information, e.g., that grasping an item is possible because the item is close to a robotic arm, and \emph{act} by using the accompanying controls, e.g., with an HTTP request to the arm's Web API. After acting, new hypermedia become available based on the ``Hypermedia as the Engine of Application State'' (HATEOAS) principle of the Web architecture~\cite{fielding2000architectural}, enabling agents to reason about and to exploit new affordances, e.g., to pick the grasped item.
The W3C Web of Things~\cite{charpenay2016introducing} already extends the Web environment with machine-readable specifications of hypermedia-driven APIs, namely \emph{Thing Descriptions}, which enable programming applications on top of domain-specific abstractions rather than low-level controls (e.g., for a specific application-layer protocol like HTTP). As a result, such applications become reusable hypermedia clients that exploit affordances offered by different physical and virtual entities on the Web. Yet, even such clients behave more like utility applications rather than autonomous users of hypermedia as their lack of adaptivity limits their reusability: Clients search for specific semantic information whose discovery triggers the exploitation of controls based on some pre-compiled logic, comparable to an agent that -- as a reflex -- always picks items that are close to it.
This compares with human users who \textit{autonomously} and flexibly \textit{take the initiative} to exploit new affordances and dynamically achieve their goals, by \textit{exploring the environment} and \textit{interpreting, reasoning about, and planning on top of signifiers}: cues that represent high-level semantic information about affordances and that are specifically designed to increase the interpretability and discoverability of affordances based on the principles of Human-Computer Interaction (HCI)~\cite{lemee2022sign}, and exposed in a way that complements the run-time human-environment context. As a result, human users effectively and efficiently engage in flexible affordance discovery and exploitation even within new and affordance-rich environments.
Hence, compared to humans, hypermedia clients today lack a) awareness of entities like signifiers, which could tactically drive their interactions in the Web environment (e.g. through HATEOAS), and b) the abilities to reason, plan, and proactively deliberate about their actions based on such perceived environmental entities.
In this paper, we introduce signifiers as a first-class abstraction in hypermedia-based MAS as a means of provisioning machine-readable interaction specifications for clients that reason and act as autonomous agents on the Web. Inspired by the interaction principles of HCI, we aim at a user-centered design of signifiers that directly accommodate the versatility of hypermedia environments and of autonomous agents and their abilities and objectives. Concretely, we define a formal model of a mechanism for the contextual exposure of signifiers in the hypermedia environment to support interaction effectiveness and efficiency based on HATEOAS.
To demonstrate our approach, we implemented a prototypical hypermedia-based MAS, where two agents with different types of reasoning abilities proactively discover how to interact with shared artifacts in the environment, by dynamically perceiving signifiers in a manner that suits their abilities and context. We show that the exposure of such interaction specifications to autonomous agents can be inherently managed with respect to the dynamic agent-environment context towards facilitating effective and efficient interactions on the Web.
\section{BACKGROUND AND RELATED WORK}
We provide an overview of related work about how affordance exploitation by \textit{human agents} (Sect.~\ref{subsec:hci}) compares to affordance exploitation by \textit{hypermedia clients} (Sect.~\ref{subsec:clients}), and how research on \textit{MAS} could be used to reduce the gap between the two (Sect.~\ref{subsec:agents}).
\subsection{Human-centered Interaction Design} \label{subsec:hci}
Affordance Theory~\cite{gibson1977theory} examines how animals control their behavior by perceiving and exploiting \textit{affordances} in their environment.
Chemero defines an \textit{affordance} as a behavior possibility that is a relationship between a) an \textit{ability} of an agent and b) a \textit{situation} that includes agents and features of the environment~\cite{chemero2007complexity}. The ability and the situation of an affordance are considered \textit{complementary} to each other, i.e., the presence of an agent with an ability within a situation makes an affordance \textit{exploitable} by the agent. For example, the affordance \texttt{graspable} of an item becomes exploitable by a human agent if the agent has the \texttt{ability-to-reach} the item in the \texttt{hand-is-empty} situation. An affordance $aff$ is formally defined in~\cite{chemero2007complexity} as:
\begin{equation}
\label{eq:aff}
aff = <a, s>,
\end{equation}
where $a$ is an agent ability and $s$ is an agent-environment situation whose simultaneous presence makes $aff$ exploitable by an agent.
When an affordance becomes exploitable by an agent, the agent has the \textit{ability} to perform a \textit{behavior}. For example, if an item is indeed \texttt{graspable}, then the human agent can exhibit the \texttt{ability-to-grasp} the item, i.e. to perform the behavior \texttt{grasp}. Therefore, an ability $a'$ is defined as the quality of an agent to perform a behavior $b$ when an affordance $aff$ is exploitable; formally defined as:
\begin{equation}
\label{eq:ability}
a' = <aff, b>,
\end{equation}
where $aff$ is an affordance (cf. Def.~\ref{eq:aff}) and $b$ is the behavior that the agent has the ability to perform when $aff$ is exploitable.
Ability $a'$ may in turn be complementary to another situation $s'$, such that the simultaneous presence of $a'$ and $s'$ makes another affordance $aff'$ exploitable by the agent, i.e. $aff' = <a', s'>$. For example, the \texttt{ability-to-grasp} can be considered as complementary to the situation \texttt{nothing-on-top-of-the-item}. Such complementarity can be defined through the affordance $aff'$, i.e. the affordance \texttt{pickable} of the item.
Through Defs.~\ref{eq:aff} and \ref{eq:ability}, affordances and abilities are impredicatively defined in terms of one another, forming a complex system that can be studied through the use of hypersets~\cite{chemero2007complexity}. For this, a system is modeled through sets of affordances and abilities to examine how (human) agents can control their behaviors in the system: Consider the set $B$ of the behaviors that can be performed in the system, $B=\{b_1, b_2,..., b_n\}$ (e.g., $B = \{grasp, reach, open\mbox{-}door, ...\}$). Then, the affordances and abilities that are considered for $B$ are, respectively, included in the sets $Affords\mbox{-}B$ and $Ability\mbox{-}to\mbox{-}B$. These sets are impredicatively inter-defined as follows:
\begin{equation}
\label{eq:affords-b}
Affords\mbox{-}B = \{<a_1, s_1>, <a_2, s_2>, ..., <a_n, s_n>\},
\end{equation}
\begin{equation}
\label{eq:abilityto-b}
Ability\mbox{-}to\mbox{-}B = \{<aff_1, b_1>, <aff_2, b_2>, ..., <aff_n, b_n>\}\footnote{For example, $Ability\mbox{-}to\mbox{-}B= \{<graspable, grasp>, <reachable, reach>, ...\}$, and $Affords\mbox{-}B= \{<ability\mbox{-}to\mbox{-}reach, item\mbox{-}is\mbox{-}reachable>, ...\}$.}.
\end{equation}
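The impredicative inter-definition above can be illustrated with a small executable sketch; the record types, names, and the exploitability check below are our own illustrative assumptions, not part of the cited formalism:

```python
# Illustrative sketch (hypothetical names): affordances and abilities as
# inter-referencing records, with exploitability as the joint presence of
# the complementary ability and situation (cf. Def. 1).
from dataclasses import dataclass

@dataclass(frozen=True)
class Affordance:
    ability: str    # name of the complementary agent ability
    situation: str  # name of the complementary situation

@dataclass(frozen=True)
class Ability:
    affordance: Affordance  # affordance whose exploitability enables...
    behavior: str           # ...this behavior (cf. Def. 2)

# aff = <ability-to-reach, hand-is-empty> makes "grasp" performable
graspable = Affordance("ability-to-reach", "hand-is-empty")
ability_to_grasp = Ability(graspable, "grasp")

def exploitable(aff, agent_abilities, situations):
    """An affordance is exploitable when its complementary ability and
    situation are simultaneously present."""
    return aff.ability in agent_abilities and aff.situation in situations

assert exploitable(graspable, {"ability-to-reach"}, {"hand-is-empty"})
assert not exploitable(graspable, {"ability-to-reach"}, set())
```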
Affordances have provided valuable input to the domain of Human-Computer Interaction (HCI) towards the design of human-made things that increase the \textit{discoverability} and \textit{interpretability} of their offered affordances. Specifically, Norman introduced \textit{signifiers} to discuss the design of \textit{perceivable cues that can be interpreted meaningfully to reveal information about affordances}~\cite{norman2008way,norman2013design}. In particular, Norman advocates the \textit{human-centered} design of signifiers, which should be driven by the needs, objectives, and abilities of targeted users. Then, signifiers become environmental cues that can be \textit{intuitively and reliably} discovered and interpreted to reveal \textit{what} the possible behaviors are and \textit{how} these behaviors can be performed.
The methodical design \textit{and exposure} of signifiers in the environment help human agents to explore, reason about, and exploit affordances even when situated within large-scale and affordance-rich settings, such as in the hypermedia environment of the Web. On the Web, signifiers are \textit{designed with respect to their expected human users} -- abstracting away from low-level interaction details that do not concern the users (i.e. the hypermedia controls), and revealing only the information that is required to intuitively invite user behavior (i.e., the informational part of hypermedia)\footnote{Although Affordance Theory defines affordances in relation to their functional aspects, in HCI functional affordances are commonly differentiated from cognitive ones~\cite{hartson2003cognitive} to separate design concerns, for instance between a service provider and a UI designer.}. On top of this, the \textit{exposure of signifiers is continuously adjusted} to better serve the dynamic agent-environment context of affordance exploitation. On the Web, this is achieved through the HATEOAS principle, which can be viewed as the environmental (server-side) support that manages to keep the provision of affordances and signifiers in alignment with the current application state (and the user needs that are expected in the current application state).
\subsection{Engineering Hypermedia Clients}\label{subsec:clients}
Hypermedia clients are applications capable of hypermedia-driven interaction, i.e. they exploit affordances and advance the application state by using hypermedia controls, such as \textit{links} and \textit{forms}, while application logic is coupled to the information that accompanies the controls, such as \textit{identifiers} of link relations or of form operation and content types. Then, through HATEOAS, each interaction results in a client receiving new information in the form of hypermedia regarding the (newly) available affordances. As a result, hypermedia clients remain decoupled from specific Web resources and data objects, and are thus more resilient to API changes than clients which are specialized and tailored to specific Web APIs~\cite{amundsen2017restful}.
In case decisions about action are made by a human agent, hypermedia clients become as generic (and reusable) as Web browsers are today: Clients iteratively 1) parse and present signifiers to human agents (see Sect.~\ref{subsec:hci}), 2) wait for human agents to reason and act based on exposed signifiers towards achieving their objectives, and 3) use hypermedia controls to advance the application state based on the human agent's decisions. In case the client itself has some higher-level objective to achieve, signifiers need to carry unambiguous semantics so that they can be handled reliably by the machine. This can be achieved through \textit{standardized identifiers} of link relations and operation types. For example, the Hydra Core Vocabulary~\cite{lanthaler2013hydra} and the Hypermedia Controls Ontology (HCTL)\footnote{The HCTL Ontology is available online: \url{https://www.w3.org/2019/wot/hypermedia}} enable the semantic description of RESTful Web APIs, thus offering a foundation for examining how to model signifiers for reusable clients with more complex application logic; subsequently, the W3C Web of Things (WoT) ~\cite{Kovatsch:20:WTT} examines how to enable clients to cope with the increasing number of heterogeneous devices that are being connected to the Internet. For this, W3C WoT takes a step further from hypermedia controls modelling (reusing HCTL) by providing an easy-to-use interaction model for defining \emph{Thing Descriptions (TDs)}, i.e. machine-readable specifications that reveal higher-level interaction semantics of device (\textit{Thing}) affordances.
Although WoT TDs enable the reuse of client application logic, there is also research on interaction specifications that allow for more adaptive hypermedia clients. For example, clients in~\cite{kafer2018rule} are executable Linked Data specifications that contain N3 rules\footnote{The N3 specification is available online: \url{https://www.w3.org/DesignIssues/Notation3}} for the more contextual consumption of hypermedia (e.g. currently exploitable affordances can be identified), and can be integrated in Hierarchical Task Network workflow specifications to manifest more complex behaviors~\cite{assfalg2021integrated}. Finally, interaction specifications in the RESTdesc format~\cite{verborgh2012functional} can be used as input to N3 reasoners, thus enabling the more dynamic combination of affordances based on the run-time system objectives.
\subsection{Reasoning for Goal-driven Interaction} \label{subsec:agents}
\textit{Autonomous agents} are capable of adaptive behavior, since they handle abstractions that enable them to reason about their actions and take the initiative to interact with respect to their dynamic goals. For example, agents that implement the Procedural Reasoning System~\cite{georgeff1987reactive} can discover and execute behaviors in the form of \textit{plans} and reason about the applicability and relevance of such plans based on their own \textit{beliefs}, \textit{desires}, and \textit{intentions}~\cite{rao1995bdi}. Additionally, autonomous agents can synthesize new plans using methods of automated planning~\cite{ghallab2016automated}, by reasoning on the \textit{preconditions} and \textit{postconditions} of actions with respect to desired effects.
Specifically, behaviors that relate to affordance exploitation are possible because the above abstractions establish a relationship between the agents and their environment. For example, the Agents \& Artifacts meta-model (A\&A)~\cite{ricci2011environment} defines \textit{artifacts} as tools organized within \textit{workspaces}, through which agents can perceive and manipulate the environment. Then, interaction \textit{efficiency} can be achieved through the agents' ability to be \textit{situated} within (logical) workspaces, and hence to direct the \textit{scope} of their perception and action towards co-located artifacts of interest.
Autonomous agents and their ability to reason about action with respect to goals and environmental artifacts have been also examined in the context of the Web, so as to enable the deployment of large-scale and affordance-rich Web-based Multi-Agent Systems (MAS). Hypermedia MAS~\cite{ciortea2018engineering} are a class of MAS that remains aligned with the architectural principles of the Web towards endowing hypermedia clients with the abilities to cope with evolvable Web APIs. Agents in Hypermedia MAS can already discover and exploit affordances in dynamic hypermedia environments through WoT TDs~\cite{ciortea2018engineering}, but they are still not able to make the most of their reasoning and decision-making abilities since TDs are not specialized to establish a relationship to abstractions in agent-oriented programming and MAS (e.g. goals, action preconditions, etc.).
\section{Signifiers for Hypermedia MAS}
In the following, we introduce signifiers as a first-class abstraction in Hypermedia MAS and provide a formalization of signifiers for this purpose in Sect.~\ref{subsec:signifiers}. The primary responsibility of signifiers is to support the discoverability and interpretability of affordances. Hence, in Sect.~\ref{subsec:sem}, we introduce a mechanism that exploits the agent-environment complementarity to expose only those signifiers that are relevant to agents situated in a hypermedia environment. Finally, in Sect.~\ref{subsec:srm}, we discuss how signifiers can be customised to support agents with different abilities, and we present signifiers for agents based on two well-known methods for reasoning about action -- the Procedural Reasoning System~\cite{georgeff1987reactive} and STRIPS~\cite{fikes1971strips}.
\subsection{Agent-centered Design of Signifiers for Hypermedia-driven Interaction}\label{subsec:signifiers}
In this section, we define a general model for signifiers that a) capture the agent-environment complementarity required to exploit affordances, and b) enable the hypermedia-driven exploitation of affordances on the Web.
Specifically, this model supports the interactions of autonomous agents in Hypermedia MAS.
\subsubsection{A General Model for Signifiers}
Our model for signifiers builds on top of the affordance model presented by Chemero and Turvey in~\cite{chemero2007complexity} (see Sect.~\ref{subsec:hci}). We, accordingly, identify:
\begin{itemize}
\item an affordance $aff_1=<a_1, s_1>$, e.g. the affordance \texttt{gripper-closable} of a robotic arm, which is exploitable when an autonomous agent has the ability $a_1$ to \emph{log in as an operator} by using the device's HTTP API in the situation $s_1$ that the \emph{gripper is open} (cf. Def.~\ref{eq:aff});
\item an ability $a_0=<aff_1, b_1>$, e.g. the agent's ability to perform the behavior $b_1$ of \emph{closing the gripper} by using the device's HTTP API, when the affordance $aff_1$ \texttt{gripper-closable} is exploitable (cf. Def.~\ref{eq:ability});
\item the related sets of affordances $Affords\mbox{-}B$ (cf. Def.~\ref{eq:affords-b}) and abilities $Ability\mbox{-}to\mbox{-}B$ (cf. Def.~\ref{eq:abilityto-b}) for examining how an agent can exploit affordances in the system based on the set of behaviors $B=\{b_1, b_2, ..., b_n\}$, e.g., $B=\{close\mbox{-}gripper, login, ...\}$.
\end{itemize}
We define a \textit{signifier} $sig$ as a perceivable cue or sign that can be interpreted meaningfully to reveal information about an affordance $aff_1$, formally defined as follows:
\begin{equation} \label{eq:signifier}
sig = (sp_b, A, c, salience),
\end{equation}
where:
\begin{itemize}
\item $sp_b$ is the signified \emph{specification} of a behavior $b$ of the set $B$, i.e. the specification of a course of action that can be performed by an agent when $aff_1$ is exploitable;
\item $A$ is a \emph{set of abilities}, where an ability is a quality of an agent to perform a behavior of the set $B$. Hence, $sig$ \emph{recommends} that agents should have all the abilities of the set $A$ so that $aff_1$ becomes exploitable;
\item $c$ is the \emph{context} which $sig$ recommends to hold so that $aff_1$ becomes exploitable, i.e. constraints to which the agent-environment situation is recommended to conform;
\item $salience$ is the quality of $sig$ that indicates how useful or relevant $sig$ is.
\end{itemize}
A machine-understandable signifier in RDF~\cite{Lanthaler:14:RCA} is structured based on Def. \ref{eq:signifier} in Lst. \ref{lst:sg-basic} (l.7-13): It signifies the specification of the behavior \texttt{close-gripper}, recommended to be performed when an agent is able to become an operator and the gripper is open.
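For illustration, the signifier tuple of the definition above can also be sketched as a simple data structure; the field types and example values below are hypothetical assumptions layered on top of the formal model:

```python
# Sketch of the signifier tuple sig = (sp_b, A, c, salience) of Def. 5.
# Field types (strings, dicts, floats) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Signifier:
    behavior_spec: str                  # sp_b: signified behavior specification
    abilities: frozenset = frozenset()  # A: recommended agent abilities
    context: dict = field(default_factory=dict)  # c: recommended constraints
    salience: float = 1.0               # indicator of usefulness/relevance

# A signifier for close-gripper, recommending the operator ability and
# the context in which the gripper is open (hypothetical value 500).
sig = Signifier(
    behavior_spec="close-gripper",
    abilities=frozenset({"OperatorAbility"}),
    context={"hasGripperValue": 500},
)
```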
Note that the definition (and the representation) of a signifier (here, $sig$) does not include any direct reference to an affordance (here, $aff_1$). This is preferable for two reasons: First, affordances emerge upon the presence of \textit{individual agents} in the appropriate situation, while signifiers are defined with respect to \textit{agent types}. This is essential because designers typically do not have prior knowledge of the individual agents that will inhabit the environment. In this way, Def. \ref{eq:signifier} preserves the evolvability and reusability of signifiers, since agents \emph{remain more loosely coupled} to their environment. Second, \emph{affordances} are defined in Affordance Theory for studying how \textit{animals} control their behaviors. However, we model \emph{signifiers} for supporting interactions of \textit{autonomous agents}, which exhibit more heterogeneous cognitive and sensorimotor abilities than animals -- including agents whose cognitive abilities heavily rely on computational processes such as reasoning over representations, and whose sensorimotor abilities\footnote{In Affordance Theory, action is considered to be strictly coupled to perception. Although there are examples of autonomous agents which exhibit similar cognitive and sensorimotor abilities for performing perception-to-action behaviors, such as Rodney Brooks' physically embodied agents that implement the subsumption architecture~\cite{brooks1999cambrian}, a Hypermedia MAS is expected to feature a greater diversity of agents' abilities.} allow for behaviors that are much broader than animal behaviors.
Even though there is no formal relation between signifiers and affordances, the designer of a signifier $sig$ may use the elements of Def. \ref{eq:signifier} to establish a (direct or indirect) relationship to specific elements of $aff_1$. The forms that this relationship can take can be narrowed down to the following cases:
\begin{itemize}
\item[(C1)] $sp_b$ of the behavior $b$ is equivalent to the specification of the behavior $b_1$ for exploiting $aff_1$ and optionally any behaviors that lead to making $aff_1$ exploitable, i.e. $sp_b$ specifies behaviors of the set $B'\subseteq B$, and $B' \supseteq \{b_1\}$;
\item[(C2)]$A$ is equivalent to the set of any of the abilities that lead to making $aff_1$ exploitable, i.e. $A \subseteq Ability\mbox{-}to\mbox{-}B$;
\item[(C3)] $c$ is equivalent to (part of) the constraints that are satisfied in situation $s_1$.
\end{itemize}
C1-C3 should be used based on the experience and expectations that environment designers hold about targeted agents towards enabling effective and efficient interactions. Therefore, designers are responsible for choosing when to aggregate or omit references to information about the sets $Affords\mbox{-}B$ and $Ability\mbox{-}to\mbox{-}B$.
For example, (C1) allows for managing the granularity of behavior $b$ that is specified in the signified specification $sp_b$. Considering that $aff_1$ is the affordance \texttt{gripper-closable}, then $b$ could be specified as the sequence $<b_2,b_1> = <login, close\mbox{-}gripper>$. Such granularity could be desirable if $b_2$ and $b_1$ are expected to be frequently performed in sequence, or for enabling agents to access specifications of higher-level behaviors. Similarly, (C2) enables designers to avoid including in $A$ those abilities of $Ability\mbox{-}to\mbox{-}B$ that targeted agents are expected to always have. (C3) enables recommending a context $c$ by determining which constraints address the most significant aspects of situation $s_1$ that makes $aff_1$ exploitable. Generally, the context may capture constraints about agents and constraints about the entity offering an affordance, which may be disjoint (e.g., constraints on the intentions of the agent, and respectively on the state of the artifact) or interdependent (e.g., that the agent is situated in the same workspace that contains the artifact).
In our proposed model of signifiers, recommended abilities and context are not meant to \emph{regiment} agents' behaviors, i.e. to constrain whether an agent will actually exploit an affordance. However, they can both be beneficial in evaluating the relevance of an affordance exploitation, and consequently the relevance of the signifier for an agent with given abilities in a given situation.
\begin{lstlisting}[float=b, style=RDFStyle, caption=A (generic) signifier that reveals information about the affordance gripper-closable of a robotic arm., label=lst:sg-basic, numbers=left, xleftmargin=14pt, mathescape=true ]
@base <http://ex.org/wksp/1/arts/1>.
@prefix hctl:<https://www.w3.org/2019/wot/hypermedia#>.
$\lstsetnumber{\ldots}$... $\lstresetnumber\setcounter{lstnumber}{6}$
<#sig> a hmas:Signifier ;
hint:signifies <#close-gripper> ;
hint:recommendsAbility [ a manu:OperatorAbility ] ;
hint:recommendsContext <#env-context> .
<#env-context> a hint:Context; sh:targetNode ex:leubot ;
sh:property [ sh:path manu:hasGripperValue ;
sh:hasValue "500"^^xsd:integer ] .
<#close-gripper> a hint:ActionSpecification;
hint:hasForm [ hctl:hasTarget leubot:base ;
hctl:forContentType "application/json" ] ;
hint:expects [ a hint:Input;
hint:hasSchema <#gripper-schema> ] .
<#gripper-schema> a js:ObjectSchema ;
js:properties [ a js:IntegerSchema ;
js:propertyName "manu:hasGripperValue"; js:enum "0"] .
\end{lstlisting}
We further develop our model to define how \textit{behaviors} can be specified for \textit{hypermedia-driven interactions} in Hypermedia MAS, e.g., in the form of an AgentSpeak plan~\cite{bordini2007programming}, a JADE behavior~\cite{bellifemine2007developing}, etc. Due to this diversity, the only requirement that we formally impose for the definition of a behavior specification is that it specifies at least one action.
An \textit{action} is specified through a) \textit{forms} that describe the hypermedia controls that can be used to implement and execute the action and b) optionally an \textit{input}. For example, the behavior \texttt{close-gripper} could be specified as an action through a form that describes an HTTP request, and a gripper input value. An \textit{action execution} can be treated as a behavior that is expected to result in a modification of the state of the environment (e.g., of the gripper). In this case, any entities in the system that monitor an agent executing an action may perceive that the agent acts on the environment. However, an action could also be used for perception, in the case in which an agent executes the action with the purpose of affecting its perception.
Then, a specification $sp_b$ formally specifies such a behavior $b$ (an action execution) as follows\footnote{We insert $\lfloor$ and $\rfloor$ to delimit expressions considered optional.}:
\begin{equation}
sp_b = (Forms, \lfloor Input \rfloor),
\end{equation}
where $Forms$ is a set of forms where each form describes an implementation of the specified action, and $Input$ is the input expected when it is possible or required to parameterize the action execution. Lst. \ref{lst:sg-basic} (l.15-24) captures a signified specification of \texttt{close-gripper}.
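A behavior specification of this shape, $sp_b = (Forms, \lfloor Input \rfloor)$, can be sketched as follows; the class and field names (e.g. \texttt{Form}, \texttt{target}) are illustrative assumptions inspired by the hypermedia controls in the listing, not a normative API:

```python
# Sketch of sp_b = (Forms, optional Input): a non-empty set of hypermedia
# forms plus an optional input schema. All identifiers are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Form:
    target: str        # cf. hctl:hasTarget
    content_type: str  # cf. hctl:forContentType
    method: str = "POST"

@dataclass(frozen=True)
class ActionSpecification:
    forms: tuple                         # at least one implementation
    input_schema: Optional[dict] = None  # present if parameterizable

close_gripper = ActionSpecification(
    forms=(Form("http://ex.org/leubot/base", "application/json"),),
    input_schema={"hasGripperValue": {"type": "integer", "enum": [0]}},
)
assert len(close_gripper.forms) >= 1  # the one formal requirement
```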
\subsubsection{Abilities of Autonomous Agents}
In the following, we discuss
how \textit{abilities} recommended by signifiers can facilitate the effective and efficient affordance discovery and exploitation in the MAS. For example, an agent reasoning on signifiers could infer that it should exploit a related affordance only if it has the appropriate abilities. Even if the agent does not currently exhibit the recommended abilities, it could still use this information towards extending its abilities, or delegating goals to capable agents. For example, if an agent $ag_1$ has the abilities of the set $A_1$ and is aware of a signifier $sig_1=(sp_b, A, c, salience)$, where $A \nsubseteq A_1$, then $ag_1$ may decide to forward $sig_1$ and request the performance of the related behavior from an agent $ag_2$ that has the abilities of the set $A_2$, where $A \subseteq A_2$. Next, we consider the abilities of agents to illustrate diverse aspects of agents that signifier designers could take into account.
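The ability-based decision described above, acting on a signifier oneself or forwarding it to a capable agent, can be sketched as a simple subset check; agent names and ability labels are hypothetical:

```python
# Sketch of ability-based delegation: agent ag1 acts if the recommended
# abilities A are a subset of its own; otherwise it forwards the signifier
# to a peer agent whose abilities cover A. Identifiers are hypothetical.
def act_or_delegate(recommended, own_abilities, peers):
    """Return 'act' if we hold all recommended abilities, else the first
    capable peer, else None."""
    if recommended <= own_abilities:
        return "act"
    for peer, peer_abilities in peers.items():
        if recommended <= peer_abilities:
            return peer
    return None

A = {"OperatorAbility"}
assert act_or_delegate(A, {"OperatorAbility", "BDI"}, {}) == "act"
assert act_or_delegate(A, {"BDI"}, {"ag2": {"OperatorAbility"}}) == "ag2"
assert act_or_delegate(A, {"BDI"}, {"ag3": {"PlannerAbility"}}) is None
```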
An agent may have an \textit{ability to reason on a representation of its internal state}; e.g., a BDI agent can reason about its beliefs, desires, and intentions~\cite{rao1995bdi}. Recommending such abilities ensures that agents have the appropriate \emph{cognitive skills} to perform a specified behavior. For instance, a signifier that signifies the specification of sending a message with mentalistic semantics (e.g., in KQML~\cite{labrou1997proposal}) in the context of an interaction protocol should recommend the ability of an agent to have mental states, in order to support compliance to the protocol.
An agent may, further, have an \textit{ability to reason about actions and to plan ahead by using specific methods and mechanisms}, e.g., by using a STRIPS-like planner to synthesize plans~\cite{fikes1971strips}. Then, a signifier that targets agents with such a planning ability should signify behavior specifications that can contribute to the construction of a suitable planning domain (e.g., specifications of actions with their preconditions and postconditions). Alternatively, a signifier that targets BDI agents that implement the Procedural Reasoning System~\cite{georgeff1987reactive} could instead signify behavior specifications as plans.
An agent also may have an \textit{ability to behave based on its social context}, e.g., to fulfill its role of machine operator by operating machines in an industrial shopfloor. Recommending role-related abilities within a multi-agent setting (e.g., an organization~\cite{weiss2000multi}) can enable agents to interpret which affordances are exploitable based on a role or help to fulfill a role. For example, a signifier specifying a behavior for operating a robotic arm on a manufacturing shopfloor should recommend the ability of an agent to play the role of a robot operator rather than the role of a warehouse manager.
An agent also may have an \textit{ability to operate within a given domain}, e.g., by having knowledge of abstractions and processes in industrial manufacturing. For example, a signifier that signifies a behavior specification using a specific semantic model for industrial processes, such as the SAREF4INMA\footnote{SAREF4INMA defines a vocabulary for the industry and manufacturing domain; available online: \url{https://saref.etsi.org/saref4inma/v1.1.2/}} semantic model, should recommend the ability of an agent to interpret the required model.
An agent, finally, may have an \textit{ability to perform a behavior when an affordance is present}, e.g., to behave based on a behavior specification. Such abilities could originate from the set $Ability\mbox{-}to\mbox{-}B$ and their recommendation by a signifier falls under case (C2). For example, a signifier that signifies a specification for \texttt{close-gripper} (Lst. \ref{lst:sg-basic}) could recommend the ability of an agent to \emph{log in} as operator.
\subsection{Environment Support for Signifier Exposure}
\label{subsec:sem}
Based on our formalization of signifiers, and the discussion of the diverse types of agent abilities that can be considered with our proposed model, we next discuss how the \emph{exposure} of signifiers can be realized and dynamically adjusted in hypermedia environments.
\subsubsection{Exposing Signifiers in the Environment}
Signifiers are not simple informational resources, but rather constructs available in hypermedia environments that enable \textit{situated} agents to discover and interpret affordances. Here, we consider that signifiers can be exposed through \textit{workspaces} that logically partition the environment (similar to A\&A workspaces~\cite{ricci2011environment})\footnote{We formalize workspaces to provide environment support for managing interaction cues/metadata through containers (e.g. as in HCI via the ``natural mapping'' principle~\cite{norman2013design}, or in Web-based systems via W3C directories~\cite{Tavakolizadeh:23:WTD} and API ecosystems~\cite{medjaoui2021continuous}). However, we do not expect (or impose) that workspaces are the only means to manage a MAS environment and signifier exposure.}.
Specifically, a workspace in a hypermedia environment is a container of Web resources and the interactions that are enacted among contained Web resources. A workspace $w$ is formally defined as:
\begin{equation}
w = (R, I),
\end{equation}
where $R$ is the set of resources contained in the workspace $w$ (e.g., agents, signifiers, other workspaces etc.) and $I$ is the set of interactions that take place in $w$, enacted among resources of the set $IR \subseteq R$ that offer and exploit affordances.
We further define that $IR=Ag \cup Ar$, where $Ag$ is the set of the agents that are situated in the workspace $w$, and $Ar$ is the set of the non-autonomous entities, i.e. artifacts~\cite{ricci2011environment}, that are contained in $w$. Then, signifiers are designed to enable the interactions of resources in $IR$, i.e. offered by artifacts in $Ar$ to agents in $Ag$. To examine what the environment affords in the context of agent-to-agent interaction, an affordance offered by an agent $ag \in Ag$ is treated as an affordance of the body\footnote{The body of an agent is an artifact that enables the agent to be situated in a workspace and interact with other resources in the workspace~\cite{ricci2006cartago}.} $body_{ag}$ of $ag$, where $body_{ag} \in Ar$\footnote{Following the definition of affordances (cf. Def.~\ref{eq:aff}), we only consider agent-to-environment (agent-to-artifact) interaction. Agent-to-agent interaction falls under agent-to-environment interaction through agent bodies, and environment-to-environment (artifact-to-artifact) interaction is not considered.}.
However, the work of designers should not be limited to the construction of signifier tuples as given by Def.~\ref{eq:signifier}. Instead, upon enriching the environment with signifiers, designers should in addition consider how to support the discoverability of signifiers by agents that are expected in the environment \emph{at run time}.\footnote{This represents a significant broadening of the signifier concept with respect to the HCI literature, where signifiers are thought as being able to be modulated \emph{at run time} only in edge cases.}
To advertise signifiers for affordances exposed by an \textit{artifact}, the environment designer can create an \textit{artifact profile},
i.e. structured data describing the artifact through signifiers and general (domain- and application-specific) metadata. Formally, a profile $p_{ar}$ of an artifact $ar$ is defined as the following construct:
\begin{equation}
\label{eq:profile-ar}
p_{ar} = (Sig_{ar}, s_{ar}), p_{ar} \in R,
\end{equation}
where $Sig_{ar}$ is the set of the signifiers that are exposed in the profile $p_{ar}$, and $s_{ar}$ is metadata capturing part of the situation of the artifact $ar$.
For an artifact $ar$ in a workspace $w$, at least one artifact profile $p_{ar}$ is contained in $w$ to explicitly capture the containment relationship between $ar$ and $w$.
Then, the discovery of the signifier set $Sig_{ar}$ in $p_{ar}$ is enabled by the containment of $ar$ in $w$.
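The containment structure above, a workspace $w=(R, I)$ containing artifacts and their profiles $p_{ar}=(Sig_{ar}, s_{ar})$, can be sketched as follows; the Python rendering and all identifiers are illustrative assumptions:

```python
# Sketch of workspaces and artifact profiles. An artifact's profile is
# contained in the same workspace as the artifact, which makes the
# profile's signifiers discoverable by agents situated in that workspace.
from dataclasses import dataclass, field

@dataclass
class ArtifactProfile:
    signifiers: list  # Sig_ar: signifiers exposed in the profile
    situation: dict   # s_ar: metadata on part of the artifact's situation

@dataclass
class Workspace:
    resources: set = field(default_factory=set)       # R
    interactions: list = field(default_factory=list)  # I

# Hypothetical example: robotic-arm artifact "leubot" and its profile,
# both contained in workspace wksp1 alongside agent ag1.
p_leubot = ArtifactProfile(signifiers=["sig-close-gripper"],
                           situation={"hasGripperValue": 500})
wksp1 = Workspace(resources={"leubot", "p_leubot", "ag1"})
assert "p_leubot" in wksp1.resources  # profile discoverable via containment
```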
\subsubsection{Dynamically Adjusting Signifier Exposure}
In large-scale environments, additional support for the discoverability of signifiers is needed as the number of artifacts, affordances and, consequently, signifiers grows. At the same time, the interpretability of signifiers may be hindered within open environments where agents interpret signifiers based on diverse abilities.
To address these issues we propose a \emph{Signifier Exposure Mechanism} (SEM) that exposes a filtered set of signifiers based on the targeted agent's abilities and situation (e.g., the agent's goals), and the situations of artifacts (e.g., based on the valid transitions from the current artifact state).
First, we consider the set $P_{Ag}$ that contains the profiles of each agent $ag \in Ag$ in the workspace $w$. Each profile $p_{ag} \in P_{Ag}$ is metadata describing the abilities and the situation of the agent $ag$. The profile $p_{ag}$ of the agent $ag$ is formally defined as the following construct:
\begin{equation}
p_{ag} = (A_{ag}, s_{ag}), p_{ag} \in P_{Ag} \subset R,
\end{equation}
where $A_{ag}$ is the set of the abilities of the agent $ag$, and $s_{ag}$ is metadata capturing part of the situation of the agent $ag$.
Additionally, we consider the set $P_{Ar}$ that contains the profiles of each artifact $ar \in Ar$ (cf. Def.~\ref{eq:profile-ar}) in the workspace $w$.
We now present a definition of an SEM (cf.~\cite{lemee2022sign}) whose functionality is described as follows: Given an agent profile $p_{ag} \in P_{Ag}$, and given an artifact profile $p_{ar} \in P_{Ar}$, the SEM outputs an artifact profile $p_{ar}'$ that exposes only those signifiers of $p_{ar}$ that match a) the abilities of agent $ag$ and b) the agent-environment situation (i.e. the $ag\mbox{-}ar$ situation).
For the former, we simply check whether the set of the recommended abilities is a subset of the agent's abilities.
For the latter, we use an evaluation function $E$ that evaluates to what degree the situation of the agent and the situation of the artifact conform to the context that is recommended by the signifier. For this, we consider $C_{ar}$ as the set of all the recommended contexts that can be accessed through the profile of artifact $ar$. Then, we use $SC_{Ar}$ to denote the set of all the recommended contexts that can be accessed through the set of artifact profiles $P_{Ar}$, paired with the situations that are captured in their corresponding profiles of $P_{Ar}$. Pairs $(c_{ar}, s_{ar}) \in SC_{Ar}$ are used for associating the situation of an artifact $ar$ only with the contexts that apply to the same artifact $ar$:
\begin{equation}
SC_{Ar} =\{ (c_{ar}, s_{ar}) \:|\: c_{ar} \in C_{ar} \land p_{ar} = (Sig_{ar}, s_{ar}) \land ar \in Ar\}.
\end{equation}
We also use $S_{Ag}$ to denote the set that includes the situations that can be accessed through the set of agent profiles $P_{Ag}$, formally:
\begin{equation}
S_{Ag} = \{ s_{ag}\:|\:p_{ag} = (A_{ag}, s_{ag}) \land ag \in Ag \}
\end{equation}
Then, we define the evaluation function $E$ that validates a pair of a recommended context and an artifact situation from the set $SC_{Ar}$ against an agent situation from the set $S_{Ag}$:
\begin{equation}
\begin{aligned}
E: SC_{Ar} \times S_{Ag} &\longrightarrow [0,1].
\end{aligned}
\end{equation}
Finally, $P_{Ar}'$ is the set of artifact profiles that can be produced by the SEM:
\begin{equation}
\begin{aligned}
\medmath{P_{Ar}' = }
& \medmath{\{(Sig_{ar}', s_{ar}) \;|\; p_{ar} = (Sig_{ar}, s_{ar})}
\medmath{\land\; Sig_{ar}' \in \mathcal{P}(Sig_{ar}) \;\land\; ar \in Ar \}}
\end{aligned}
\end{equation}
We can now formally define $SEM$ as follows:
\begin{equation}
\label{eq:sem}
\begin{aligned}
\medmath{ SEM : P_{Ag} \times P_{Ar} \times [0,1]}
&\medmath{\longrightarrow P_{Ar}'};\\
\medmath{ ((A_{ag}, s_{ag}), (Sig_{ar}, s_{ar}), t) \longmapsto }
&\medmath{(\{sig = (sp_b, A, c, salience) \in Sig_{ar} }\\
&\medmath{|\: A \subseteq A_{ag} \land E((c, s_{ar}), s_{ag}) > t\}, s_{ar})}
\end{aligned}
\end{equation}
where $t$ is a threshold value that is given as input to the SEM.
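The filtering defined in Eq.~(\ref{eq:sem}) can be sketched in a few lines of plain Python. The tuple layouts and names below (e.g., \texttt{evaluate} standing in for $E$, sets of strings for abilities) are illustrative assumptions, not part of the formal model:

```python
def sem(agent_profile, artifact_profile, evaluate, t):
    """Sketch of the SEM: filter an artifact profile's signifiers for one agent.

    agent_profile:    (abilities, agent_situation)     -- p_ag = (A_ag, s_ag)
    artifact_profile: (signifiers, artifact_situation) -- p_ar = (Sig_ar, s_ar)
    evaluate:         callable for E((context, s_ar), s_ag) -> value in [0, 1]
    t:                exposure threshold in [0, 1]
    """
    abilities, s_ag = agent_profile
    signifiers, s_ar = artifact_profile
    exposed = [
        sig for sig in signifiers
        # sig = (behavior_spec, recommended_abilities, context, salience)
        if sig[1] <= abilities                   # ability match: A subset of A_ag
        and evaluate((sig[2], s_ar), s_ag) > t   # situation match above threshold
    ]
    return (exposed, s_ar)
```

For instance, an agent with a PRS ability would only be exposed signifiers whose recommended abilities it possesses and whose recommended context scores above the threshold $t$ for the current agent-environment situation.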
While it may be beneficial, it is not required that exposed signifiers always relate to affordances that are considered exploitable or relevant. For example, there may be cases where observations about agents and artifacts are insufficient for evaluating relevance, for instance within highly dynamic environments or when an agent prefers not to share such information. Additionally, agents with planning abilities may require an action space that relates to affordances which are not necessarily currently exploitable.
\subsection{Customizing Signifiers to Heterogeneous Reasoning Abilities of Agents}\label{subsec:srm}
In this section, we examine how a generic signifier can be extended to reveal information that is relevant to agents with different reasoning and planning abilities. Specifically, we consider the design of signifiers for a) BDI agents that implement the Procedural Reasoning System (PRS), and b) agents capable of planning their actions by using a STRIPS-like planner.
To illustrate our approach, we use the generic signifier in Lst.~\ref{lst:sg-basic}, which signifies an action specification for closing the gripper of a robotic arm. In the following, we show how this signifier can be extended to customize its content towards accommodating the abilities of the targeted agents.
\subsubsection{Customizing Signifiers for PRS-based Reasoning}
A BDI agent with a PRS-based ability is expected to look for signifiers for satisfying its \textit{intentions}, i.e. the \textit{goals} that the agent has committed to achieve. Although the agent's \textit{plan library} may already contain a plan, i.e. a course of action for accomplishing the agent's goal, a dynamic environment does not always permit the required actions to be coupled to specific action implementations. More concretely, in case an agent has access to a pick-and-place plan, it may intend to act on a robotic arm (e.g., to close an arm's gripper), but without knowing which robotic arm to use in its current workspace, or any other lower-level interaction details.
To discover fitting signifiers in our approach, an agent may update its profile to describe (part of) its \textit{mental attitudes} -- its current \textit{beliefs} and \textit{goals}, for instance its desire to achieve a specific goal while looking for the appropriate means to satisfy its desire. Considering the desire of an agent to achieve the goal of picking up an item and placing it in a specified location, the agent's profile could be similar to the profile presented in Lst.~\ref{lst:agent}\footnote{We consider that the agent has a desire that is a goal achievement as specified in the AgentSpeak language~\cite{bordini2007programming}.}.
\begin{lstlisting}[float= b,floatplacement=H,style=RDFStyle, caption=The resource profile of a BDI agent that implements the PRS and desires to pick and place an item., label=lst:agent, numbers=left, xleftmargin=12pt, mathescape=true]
@base <http://ex.org/wksp/1/arts/2>.
@prefix item-profile: <http://ex.org/wksp/1/arts/3#>
$\lstsetnumber{\ldots}$... $\lstresetnumber\setcounter{lstnumber}{10}$
<> a hmas:AgentProfile ; hmas:isProfileOf <#agent> .
<#agent> a hmas:Agent ;
hint:hasAbility [ a prs:PRSAbility ] ;
hint:hasAbility [ a manu:OperatorAbility ] ;
prs:hasDesire [ a prs:GoalAchievement,manu:PickAndPlace;
prs:hasInputList [ a rdf:List ;
rdf:first wksp:item ;
rdf:rest [ a rdf:List ;
rdf:first <#location>] ] .
item-profile:item a manu:Item ;
manu:hasLocation item-profile:location .
\end{lstlisting}
\begin{lstlisting}[float= b,style=RDFStyle, caption=A customized signifier for agents that implement a BDI architecture based on the PRS., label=lst:sg-prs, numbers=left, xleftmargin=14pt, mathescape=true ]
$\lstsetnumber{\ldots}$... $\lstresetnumber\setcounter{lstnumber}{8}$
<#sig> a hmas:Signifier ;
hint:signifies <#close-gripper> ;
hint:recommendsAbility [ a prs:PRSAbility ] ;
hint:recommendsAbility [ a manu:OperatorAbility ] ;
hint:recommendsContext <#env-context>, <#prs-context> .
<#prs-context> a hint:Context; sh:targetClass hmas:Agent ;
sh:property [ sh:path prs:hasDesire ;
sh:minCount 1 ; sh:qualifiedMinCount 1 ;
sh:qualifiedValueShape <#desire-shape> ] .
<#desire-shape> a sh:NodeShape ;
sh:class manu:PickAndPlace;
sh:property [ sh:path prs:hasInputList $\lstresetnumber\setcounter{lstnumber}{42}$
$\lstsetnumber{\ldots}$ ... ] .$\lstresetnumber\setcounter{lstnumber}{43}$
<#item-shape> a sh:NodeShape ;
sh:class manu:Item ;
sh:property [ sh:path manu:hasLocation ; $\lstresetnumber\setcounter{lstnumber}{58}$
$\lstsetnumber{\ldots}$ ... ] . $\lstresetnumber\setcounter{lstnumber}{59}$
<#location-shape> a sh:NodeShape ;
sh:class manu:Location ;
sh:property [ sh:path manu:inRangeOf ;
sh:minCount 1 ;
sh:hasValue ex:leubot ] .
\end{lstlisting}
Taking into account the abilities of PRS-based agents, designers can now construct signifiers recommending contexts that relate to the expected situations of the targeted agents, i.e. situations in terms of \textit{beliefs}, \textit{desires}, and \textit{intentions}. Lst.~\ref{lst:sg-prs} is an example of a signifier for the affordance \texttt{gripper-closable} of a robotic arm, which \emph{extends} the generic signifier for the same affordance (Lst.~\ref{lst:sg-basic}) to better cater to such agents. In this case, the recommended context is a set of constraints that should be validated by the agent-environment situation in order for the signifier to be considered relevant for the agent. The constraints are expressed in the Shapes Constraint Language (SHACL)\footnote{The SHACL specification is available online: \url{https://www.w3.org/TR/shacl/}} and concern the following aspects of the agent-artifact situation: 1) the agent has the desire to pick an item and place it in a target location (l.~20-43);
2) the item is in range of the robotic arm that offers the affordance \texttt{gripper-closable} (l.~45-59, 61-65);
3) the target location is in the range of the robotic arm that offers the affordance \texttt{gripper-closable} (l.~45-59, 61-65).
Additionally, the signifier is extended to recommend a \emph{PRS-based ability}. Therefore, if an agent with such an ability looks up signifiers at run time, and an SEM is available (see Sect.~\ref{subsec:sem}), the SEM will expose the \emph{customized} signifier of Lst.~\ref{lst:sg-prs} rather than the generic signifier of Lst.~\ref{lst:sg-basic}. Finally, if the SEM has a SHACL validation feature, exposure will be adjusted based on whether the agent's situation (Lst.~\ref{lst:agent}) conforms to the SHACL context shape\footnote{An SEM with a SHACL processor has already been presented in \cite{taghzouti2022step}. There, agents specify SHACL shapes that impose constraints on signifiers, and they provide such shapes as input to the SEM upon looking for conforming signifiers. On the other hand, here, we consider that environment designers create the signifiers and specify constraints for identifying at run time conforming agent-environment situations.}. In this case, the customized signifiers will be exposed to agents with a PRS-based ability only if the agent-environment situation conforms to the recommended constraints.
\begin{lstlisting}[float=b,style=RDFStyle, caption=A customized signifier for agents with a STRIPS planning ability., label=lst:sg-strips, numbers=left, xleftmargin=14pt, mathescape=true ]
@prefix pddl:
<http://www.cs.yale.edu/homes/dvm/daml/pddlonto.daml#>.
$\lstsetnumber{\ldots}$... $\lstresetnumber\setcounter{lstnumber}{8}$
<#sig> a hmas:Signifier ;
hint:signifies <#close-gripper> ;
hint:recommendsAbility [
a strips:StripsPlanningAbility ] .
<#close-gripper> a hint:ActionSpecification;
$\lstsetnumber{\ldots}$... $\lstresetnumber\setcounter{lstnumber}{20}$
a pddl:Action ;
pddl:action-label "closeGripper";
pddl:parameters [ a pddl:Param_seq ;
rdf:_1 <#param1> ];
pddl:precondition [$\lstresetnumber\setcounter{lstnumber}{30}$
$\lstsetnumber{\ldots}$... ] ; $\lstresetnumber\setcounter{lstnumber}{31}$
pddl:effect [$\lstresetnumber\setcounter{lstnumber}{40}$
$\lstsetnumber{\ldots}\lstresetnumber\setcounter{lstnumber}{41}$... ] .
<#param1> a pddl:Param ;
pddl:name "?gv" ;
drs:type manu:GripperValue ;
:hasSchema <#gripper-schema> .
\end{lstlisting}
\subsubsection{Customizing Signifiers for STRIPS-based Reasoning}
Instead of benefiting from signifiers that are designed with agents' beliefs, desires, and intentions in mind, agents that are capable of automated planning would require signifiers that signify action specifications suitable to enrich a planning domain.
For instance, Lst.~\ref{lst:sg-strips} presents a customized signifier for the affordance \texttt{gripper-closable} of a robotic arm, which extends the generic signifier of Lst.~\ref{lst:sg-basic} to signify an action specification that is relevant to agents with a STRIPS-based planning ability. The action specification specifies the type of the action that can be performed, and, additionally, the \textit{preconditions} and the \textit{effects} of executing the action. For this example, we reused the PDDL ontology presented in~\cite{mcdermott2002representing} to specify actions that can be used to enrich a PDDL domain and, thus, become suitable input to a PDDL automated planner. The signifier is also extended to recommend a \textit{STRIPS planning ability}, so that an SEM can adjust the signifier exposure based on the abilities of the requesting agent.
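As a rough illustration of how a signified action specification could feed a STRIPS-like planner, the helper below renders an action as a PDDL \texttt{:action} block. The function name and the example predicates are hypothetical and not part of the PDDL ontology of~\cite{mcdermott2002representing}; they only sketch the mapping from a signifier's action type, parameters, preconditions, and effects into planner input:

```python
def to_pddl_action(name, params, preconditions, effects):
    """Render a signified action specification as a PDDL :action block.
    All argument values are caller-supplied strings; nothing is validated."""
    plist = " ".join(f"{p} - {t}" for p, t in params)
    return (
        f"(:action {name}\n"
        f"  :parameters ({plist})\n"
        f"  :precondition (and {' '.join(preconditions)})\n"
        f"  :effect (and {' '.join(effects)}))"
    )

# Hypothetical rendering of the close-gripper action, with made-up predicates
action = to_pddl_action(
    "closeGripper",
    [("?gv", "GripperValue")],
    ["(gripper-open)"],
    ["(not (gripper-open))"],
)
```

An agent with a STRIPS planning ability could collect such blocks from the signifiers it is exposed to, append them to its planning domain, and hand the result to an off-the-shelf PDDL planner.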
\section{IMPLEMENTATION AND DEPLOYMENT}
To ground our approach in a concrete use case, we present a prototypical Hypermedia MAS and a demonstrator scenario.
\subsection{Prototype Deployment}
\label{subsec:sysarch}
In our implementation\footnote{The implementation is available online: \url{https://github.com/Interactions-HSG/} \url{yggdrasil/tree/feature/sem}}, we use the Yggdrasil platform for Hypermedia MAS~\cite{ciortea2018engineering} as a repository for our hypermedia environment. We extended the Yggdrasil API to enable the publication of agent and artifact profiles as presented in Sect.~\ref{subsec:signifiers}. The API hence permits the publication and management of resource profiles based on a) the Hypermedia MAS Core Ontology (HMAS-core) that is used to describe core aspects of Hypermedia MAS (e.g., workspaces, resource profiles etc.), and b) the Hypermedia MAS Interaction ontology which extends HMAS-core based on our model of signifiers.
We further extended Yggdrasil with an SEM for dynamically adjusting the signifier exposure based on the formal model presented in Sect.~\ref{subsec:sem}. Specifically, agent and artifact profiles can be published so that they are accessible through workspaces of the hypermedia environment. As a result, when an agent discovers an artifact profile in the hypermedia environment, the SEM is responsible for evaluating which signifiers should be exposed to the agent: The SEM identifies which agent is currently looking for signifiers, and attempts to retrieve the agent's profile. If no agent profile is available, the artifact profile is provided to the agent without undergoing any signifier adjustment. On the other hand, if the SEM acquires access to an agent profile, it proceeds to evaluate the complementarity between the agent and the artifact as given by Def.~\ref{eq:sem}. For example, if the agent exhibits specific abilities, then the SEM will construct a variation of the artifact profile that exposes only those signifiers that contain recommendations for the specified agent abilities.
For our prototype, we used Jason~\cite{bordini2007programming} agents which were implemented and deployed in a JaCaMo\footnote{The JaCaMo documentation is available online: \url{https://jacamo.sourceforge.net/}} application. The signifiers used were designed for revealing information about affordances of a PhantomX AX12 Reactor Robot Arm\footnote{\url{https://www.trossenrobotics.com/p/phantomx-ax-12-reactor-robot-arm.aspx}}.
\subsection{Deployment Scenario}
To validate our prototype implementation, we considered the case of a Hypermedia MAS where affordance discovery and exploitation take place in a workspace with industrial devices based on the following scenario: A manufacturing workspace contains a robotic arm that offers affordances to agents that are situated in the workspace, such as affordances to move the gripper and the base of the robotic arm. The robotic arm is modelled as an \textit{artifact} and its presence is indicated in the hypermedia environment through its \textit{artifact profile} that is contained in the workspace. Signifiers that reveal information about the affordances of the robotic arm are exposed through its profile, and have been designed by taking into consideration different abilities of agents that are expected to be situated in the manufacturing workspace. Specifically, we have considered three types of signifiers with regard to agents' abilities:
\begin{itemize}
\item Signifiers that recommend the abilities of BDI agents that implement the Procedural Reasoning System (PRS).
\item Signifiers that recommend the abilities of agents that can perform automated planning using a STRIPS-like planner.
\item Generic signifiers that are not customized to agents with any specific abilities of reasoning about action and acting.
\end{itemize}
A BDI agent that implements the PRS joins the workspace with the objective to pick and place an item. The agent's presence in the workspace is indicated in the hypermedia environment through its profile, i.e. metadata describing that a) the body of the agent is contained in the workspace, b) the agent desires to \texttt{pick-and-place}, and c) the agent has a PRS ability. The agent already has access to a relevant pick-and-place plan through its plan library, however, it is unaware of any lower-level implementation details that are required for effectively executing its plan. For this, the agent looks for signifiers to acquire information about the hypermedia controls that can be used to exploit relevant affordances. Upon looking for information about the affordances of the robotic arm that is contained in the manufacturing workspace, the agent receives only those signifiers that are relevant to its abilities from the SEM.
Another agent joins the workspace with the same objective of picking and placing an item. Although the agent does not currently have a relevant \texttt{pick-and-place} plan, it has access to a STRIPS-like planner for performing automated planning -- an ability that is indicated in the agent's profile. The agent decides to look for signifiers that may help it to synthesize a pick-and-place plan. Upon focusing on the robotic arm, the agent receives the profile of the robotic arm that exposes only signifiers that are relevant to the agent due to its STRIPS planning ability. Consequently, the agent acquires access to action specifications that would be suitable for enriching a planning domain as they specify the types of actions that can be performed upon the robotic arm, along with the preconditions and effects of executing such actions.
\section{CONCLUSIONS}
In this paper, we introduced signifiers as a first-class abstraction in Hypermedia MAS.
This was accomplished through formalizations for the design of signifiers that enable the hypermedia-driven exploitation of affordances on the Web, and that capture the agent-environment complementarity required to exploit affordances.
These signifiers are coupled with a mechanism that describes the dynamic adjustment of signifier exposure based on the agent-environment situation.
We provided examples and demonstrated how signifiers decouple behaviors from their implementations through hypermedia, and can be customized with respect to different reasoning abilities of agents, towards enabling effective and efficient affordance discovery and exploitation in affordance-rich and open Web-based MAS. Based on the presented work, we aim to support and evaluate the effectiveness and efficiency of more complex behaviors through the design and exposure of signifiers based on the HATEOAS principle and the behavior-ability impredicativity of Affordance Theory.
\begin{acks}
This research has received funding from the Swiss National Science Foundation under grant No. 189474 (\textit{HyperAgents}) and from the European Union's Horizon 2020 research and innovation program under grant No. 957218 (\textit{IntellIoT}). We thank Yousouf Taghzouti for his valuable input on the topic of semantic content negotiation.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:introduction}
Small solar system bodies, such as asteroids and comets, are of significant interest to the scientific community; these small bodies offer great insight into the early formation of the solar system.
This insight offers additional detail into the formation of the Earth and also the probable formation of other planetary bodies.
Of particular interest are those near-Earth asteroids (NEA) which inhabit heliocentric orbits in vicinity of the Earth.
These easily accessible bodies provide attractive targets to support space industrialization, mining operations, and scientific missions.
NEAs potentially contain many materials such as those useful for propulsion, construction, or for use in semiconductors.
Also, many bodies contain highly profitable materials, such as precious or strategic metals~\cite{ross2001}.
These NEAs are also of concern for their potential to impact the Earth.
Asteroids and comets are the greatest threat to future civilizations and as a result there is a focused effort to mitigate these risks~\cite{wie2008}.
In spite of the great interest in asteroids, the operation of spacecraft in their vicinity is a challenging problem.
While there has been significant study of interplanetary transfer trajectories, relatively less analysis has been conducted on operations in the vicinity of asteroids.
The dynamic environment around asteroids is strongly perturbed and challenging for analysis and mission operation~\cite{scheeres1994,scheeres2000}.
Due to their low mass, which results in a low gravitational attraction, asteroids may have irregular shapes and potentially chaotic spin states.
Furthermore, since the magnitude of the gravitational attraction is relatively small, non-gravitational effects, such as solar radiation pressure or third-body effects, become much more significant.
As a result, the orbital environment is generally quite complex and it is difficult to generate analytical insights.
An accurate gravitational potential model is necessary for the operation of spacecraft about asteroids.
Additionally, a detailed shape model of the asteroid is needed for trajectories passing close to the body.
The classic approach is to expand the gravitational potential into a harmonic series and compute the series coefficients.
Radio tracking data of an orbiting spacecraft allows one to estimate the series coefficients.
The harmonic representation is guaranteed to converge outside of the circumscribing sphere and can be truncated at a finite order based on accuracy requirements~\cite{scheeres2012a}.
However, the harmonic expansion is always an approximation as a result of the infinite order series used in the representation.
Additionally, the harmonic model used outside of the circumscribing sphere is not guaranteed to converge inside the sphere.
A popular approach to deal with this divergence is to use a different gravitational model within the circumscribing sphere.
For example, Reference~\citenum{scheeres2000} uses both a polyhedron field and a spherical harmonic expansion to represent the full gravitational field of the body.
The spherical harmonic coefficients are computed from the polyhedron shape model using a constant density assumption~\cite{werner1997}.
This model is applied close to the body while the harmonic expansion is applied when outside the circumscribing sphere.
Similarly, Reference~\citenum{herrera2014} uses two different harmonic expansion models to represent the gravitational field inside and outside of the circumscribing sphere.
In this case, the coefficients must be carefully chosen to ensure continuity of the gravitational field at the circumscribing sphere.
In both cases, this type of approach results in a cumbersome gravity field expression that requires additional constraints to ensure continuity and validity at the radius of the circumscribing sphere.
Any integration software will need to incorporate a switching mechanism between the models when crossing the circumscribing sphere.
Instead, we use a different approach and model the asteroid as a constant-density polyhedron.
The polyhedron model results in an exact closed form expression of the gravitational potential field~\cite{werner1994,werner1996}.
This type of model results in the exact potential up to the accuracy of the shape model and a constant density assumption.
However, the calculation of the potential, or the acceleration, requires a summation over every face of the polyhedron.
As a result, it typically requires a large amount of computation in contrast to the harmonic expansion models.
However, the formulation is well suited to parallelization and improvements with efficient coding practices.
Finally, the polyhedron method is well suited to trajectories passing close to the body and offers a simple metric to determine if a particle is inside the asteroid.
The use of the polyhedron model results in a single expression of the gravitational field that is valid everywhere around the body.
The application of optimal control methods for orbital trajectory design is nontrivial.
Frequently, insight into the problem or intuition on the part of the designer is required to determine initial conditions that will converge to the optimal solution.
However, the asteroid system dynamics are nonlinear and exhibit chaotic behaviors.
This makes solving the optimization problem highly dependent on the initial condition.
Similar to the three-body problem, there is an insufficient number of analytical constants to derive an analytical solution in general.
As a result, accurate numerical methods are required to determine optimal solutions.
These methods are critically dependent on an accurate initial guess in order to allow for convergence.
We model the motion of particles around asteroids using the restricted full two-body problem.
The dynamics of a spacecraft about small bodies is very similar to that of the three-body problem.
This model has many similarities with the restricted three body problem, and much of the theory developed for the three-body problem is also applicable~\cite{mondelo2010,herrera2014}.
In addition, there has been a large amount of work on the optimal control of spacecraft orbital transfers in the three-body problem~\cite{mingotti2011,grebow2011}.
Typically, the optimal control problem is solved via direct methods, which approximate the continuous time problem as a parameter optimization problem.
The state and control trajectories are discretely parameterized and solved in the form of a nonlinear optimization problem.
Alternatively, indirect methods apply calculus of variations to derive the necessary conditions for optimality.
This yields a lower dimensioned problem as compared to the direct approach.
In addition, satisfaction of the necessary conditions guarantees local optimality, in contrast to direct methods which result in sub-optimal solutions.
In this paper, we extend the design method previously developed in the three-body problem to motion about asteroids~\cite{kulumani2015}.
Our systematic approach avoids the difficulties in selecting an appropriate initial guess for optimization.
We instead utilize the concept of the reachability set to enable a simple methodology of selecting initial conditions to achieve general orbital transfers.
This method allows the spacecraft to depart from fixed periodic solutions through the use of a low-thrust propulsion system.
In addition, we utilize a polyhedron gravitational model which is accurate and is globally applicable about the asteroid.
We formulate an optimal control problem to calculate the reachability set on a lower dimensional Poincar\'e section.
Given an initial condition and fixed time horizon, the reachable set is the set of states attainable, subject to the operational constraints of the spacecraft.
The generation of the reachable set allows for a more systematic method of determining initial conditions and eases the burden on the designer.
The Poincar\'e section reduces the dimensionality of the system dynamics to the study of a related discrete update map.
This allows for the design of complex transfer trajectories on a lower dimensional space.
Rather than relying on intuition or insight into the problem, trajectories are chosen which minimize a distance metric toward a desired target on the Poincar\'e section.
This simple methodology allows for extended transfer trajectories which iteratively approach a desired target orbit.
In short, the authors present a systematic method of generating optimal transfer orbits about asteroids.
Typically, optimal transfers are generated using a direct optimization method which results in a sub-optimal solution.
This paper presents an indirect optimal control formulation to generate the reachability set on a Poincar\'e section.
Using the reachability set on the Poincar\'e section allows for a simple method of choosing trajectories which approach a target.
In addition, the reachability set gives an indication of the regions of the phase space accessible to the spacecraft.
This method allows us to avoid the difficulties inherent in choosing valid initial conditions for the computation of optimal transfer trajectories.
We develop the optimal control formulation and apply this method to a transfer about asteroid 4769 Castalia.
\section{Asteroid Model}\label{sec:asteroid_model}
In this analysis we consider transfers about the asteroid 4769 Castalia.
Castalia has an accurate shape model and is also considered a potentially hazardous asteroid with a possibility of Earth impact~\cite{hudson1994}.
We model the gravitational potential field of Castalia using a polyhedron gravity model instead of using a spherical harmonic expansion.
The spherical harmonic expansion is a popular method of representing the gravity field~\cite{scheeres1996}.
Approximations are possible by truncating the infinite order series to a fixed set of coefficients, with the most important terms corresponding to the second order and degree~\cite{scheeres1994}.
However, when evaluated close to the body the series expansion will diverge and is no longer accurate.
Therefore, the spherical harmonic representation is not ideal for landing trajectories or those passing close to the surface.
A polyhedral model of the surface of an asteroid can be determined from remote optical or radar sensors.
The faces of the polyhedron can be large or small and allow for fine detail such as depressions, craters, ridges, or interior voids.
In addition, there is no requirement for the body to be modeled at a uniformly high resolution so small details can be incorporated with minimal cost.
From the shape model, an analytical, closed form expression for the gravitational potential can be derived.
The polyhedral approach provides an accurate gravitational model consistent with the resolution of the shape and the chosen discretization.
Furthermore, the polyhedron model is an exact solution up to the surface of the body.
Therefore, this model is ideal for missions traversing large regions both close and far from the asteroid.
\subsection{Polyhedron Gravity Model}\label{sec:polyhedron_model}
We represent the gravitational potential of the asteroid using a polyhedron gravitation model.
This model is composed of a polyhedron, which is a three-dimensional solid body, that is defined by a series of vectors in the body-fixed frame.
The vectors define vertices in the body-fixed frame as well as planar faces which compose the surface of the asteroid.
We assume that each face is a triangle composed of three vertices and three edges.
As a result, exactly two faces meet at each edge, while at least three faces meet at each vertex.
Only the body-fixed vectors, and their associated topology, are required to define the exterior gravitational model.
References~\citenum{werner1994} and~\citenum{werner1996} give a detailed derivation of the polyhedron model.
Here, we summarize the key developments and equations required for implementation.
Consider three vectors \( \vecbf{v}_1, \vecbf{v}_2, \vecbf{v}_3 \in \ensuremath{\mathbb{R}}^{3 \times 1} \), assumed to be ordered in a counterclockwise direction when viewed from outside the body, which define a face.
It is easy to define the three edges of each face as
\begin{align}\label{eq:edges}
\vecbf{e}_{i+1,i} = \vecbf{v}_{i+1} - \vecbf{v}_i \in \ensuremath{\mathbb{R}}^{3 \times 1 },
\end{align}
where the index \( i \in \parenth{1,2,3} \) is cyclic, with \( \vecbf{v}_4 = \vecbf{v}_1 \), so that all three edges of each face are obtained.
Since each edge is a member of two faces, there exist two edges which are defined in opposite directions between the same vertices.
We can also define the outward normal vector to face \( f\), normalized to unit length, as
\begin{align}\label{eq:face_normal}
\hat{\vecbf{n}}_f &= \parenth{\vecbf{v}_{2} - \vecbf{v}_1} \times \parenth{\vecbf{v}_{3} - \vecbf{v}_2} \in \ensuremath{\mathbb{R}}^{3 \times 1},
\end{align}
and the outward facing normal vector to each edge, also normalized to unit length, as
\begin{align}\label{eq:edge_normal}
\hat{\vecbf{n}}_{i+1,i}^f &= \parenth{\vecbf{v}_{i+1} - \vecbf{v}_i} \times \hat{\vecbf{n}}_f \in \ensuremath{\mathbb{R}}^{3 \times 1}.
\end{align}
For each face we define the face dyad \( \vecbf{F}_f \) as
\begin{align}\label{eq:face_dyad}
\vecbf{F}_f &= \hat{\vecbf{n}}_f \hat{\vecbf{n}}_f \in \ensuremath{\mathbb{R}}^{3 \times 3}.
\end{align}
Each edge is a member of two faces and has an outward pointing edge normal vector, given in~\cref{eq:edge_normal}, perpendicular to both the edge and the face normal.
For the edge connecting the vectors \( \vecbf{v}_1 \) and \( \vecbf{v}_2 \), which are shared between the faces \(A\) and \( B\), the per edge dyad is given by
\begin{align}\label{eq:edge_dyad}
\vecbf{E}_{12} = \hat{\vecbf{n}}_A \hat{\vecbf{n}}_{12}^A + \hat{\vecbf{n}}_B \hat{\vecbf{n}}_{21}^B \in \ensuremath{\mathbb{R}}^{3 \times 3}.
\end{align}
The edge dyad \( \vecbf{E}_e \), is defined for each edge and is a function of the two adjacent faces meeting at that edge.
The face dyad \( \vecbf{F}_f \), is defined for each face and is a function of the face normal vectors.
Let \( \vecbf{r}_i \in \ensuremath{\mathbb{R}}^{3 \times 1} \) be the vector from the spacecraft to the vertex \( \vecbf{v}_i \), with length \( r_i = \norm{\vecbf{r}_i} \in \ensuremath{\mathbb{R}}^{1} \).
The per-edge factor \( L_e \in \ensuremath{\mathbb{R}}^{1}\), for the edge connecting vertices \( \vecbf{v}_i \) and \( \vecbf{v}_j \), with a constant length \( e_{ij} = \norm{\vecbf{e}_{ij}} \in \ensuremath{\mathbb{R}}^1\) is
\begin{align}\label{eq:edge_factor}
L_e &= \ln \frac{r_i + r_j + e_{ij}}{r_i + r_j - e_{ij}}.
\end{align}
For the face defined by the vertices \( \vecbf{v}_i, \vecbf{v}_j, \vecbf{v}_k \) the per-face factor \( \omega_f \in \ensuremath{\mathbb{R}}^{1} \) is
\begin{align}\label{eq:face_factor}
\omega_f &= 2 \arctan \frac{\vecbf{r}_i \cdot \vecbf{r}_j \times \vecbf{r}_k}{r_i r_j r_k + r_i \parenth{\vecbf{r}_j \cdot \vecbf{r}_k} + r_j \parenth{\vecbf{r}_k \cdot \vecbf{r}_i} + r_k \parenth{\vecbf{r}_i \cdot \vecbf{r}_j}} .
\end{align}
The gravitational potential due to a constant density polyhedron is given as
\begin{align}\label{eq:potential}
U(\vecbf{r}) &= \frac{1}{2} G \sigma \sum_{e \in \text{edges}} \vecbf{r}_e \cdot \vecbf{E}_e \cdot \vecbf{r}_e \cdot L_e - \frac{1}{2}G \sigma \sum_{f \in \text{faces}} \vecbf{r}_f \cdot \vecbf{F}_f \cdot \vecbf{r}_f \cdot \omega_f \in \ensuremath{\mathbb{R}}^1,
\end{align}
where \( \vecbf{r}_e\) and \(\vecbf{r}_f \) are the vectors from the spacecraft to any point on the respective edge or face, \( G\) is the universal gravitational constant, and \( \sigma \) is the constant density of the asteroid.
Furthermore we can use these definitions to define the attraction, gravity gradient matrix, and Laplacian as
\begin{align}
\nabla U ( \vecbf{r} ) &= -G \sigma \sum_{e \in \text{edges}} \vecbf{E}_e \cdot \vecbf{r}_e \cdot L_e + G \sigma \sum_{f \in \text{faces}} \vecbf{F}_f \cdot \vecbf{r}_f \cdot \omega_f \in \ensuremath{\mathbb{R}}^{3 \times 1} , \label{eq:attraction}\\
\nabla \nabla U ( \vecbf{r} ) &= G \sigma \sum_{e \in \text{edges}} \vecbf{E}_e \cdot L_e - G \sigma \sum_{f \in \text{faces}} \vecbf{F}_f \cdot \omega_f \in \ensuremath{\mathbb{R}}^{3 \times 3}, \label{eq:gradient_matrix}\\
\nabla^2 U &= -G \sigma \sum_{f \in \text{faces}} \omega_f \in \ensuremath{\mathbb{R}}^1 .\label{eq:laplacian}
\end{align}
One interesting thing to note is that both~\cref{eq:face_dyad,eq:edge_dyad} can be precomputed without knowledge of the position of the satellite.
They are both solely functions of the vertices and edges of the polyhedral shape model and are computed once and stored.
Once a position vector \( \vecbf{r} \) is defined, the scalars given in~\cref{eq:edge_factor,eq:face_factor} can be computed for each face and edge.
Finally,~\cref{eq:potential} is used to compute the gravitational potential on the spacecraft.
The Laplacian, defined in~\cref{eq:laplacian}, gives a simple method to determine if the spacecraft has collided with the body~\cite{werner1996}.
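The interior test mentioned above can be sketched numerically. The following Python snippet (a minimal illustration, not the authors' implementation) evaluates the per-face factor of~\cref{eq:face_factor} with a quadrant-aware \texttt{atan2} branch for a hypothetical tetrahedron: the summed factors give \( 4\pi \) for an interior point and \( 0 \) for an exterior one, which is exactly the collision test provided by the Laplacian.

```python
import math

# Tetrahedron vertices (hypothetical test shape, not the Castalia model).
V = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
# Faces ordered counterclockwise as seen from outside the body.
F = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(dot(a, a))

def face_factor(p, face):
    """Per-face solid-angle factor omega_f (eq. face_factor)."""
    ri, rj, rk = (sub(V[i], p) for i in face)  # spacecraft-to-vertex vectors
    num = dot(ri, cross(rj, rk))
    li, lj, lk = norm(ri), norm(rj), norm(rk)
    den = li*lj*lk + li*dot(rj, rk) + lj*dot(rk, ri) + lk*dot(ri, rj)
    return 2.0 * math.atan2(num, den)  # atan2 handles the branch correctly

def total_solid_angle(p):
    return sum(face_factor(p, f) for f in F)

# Interior point: sum of omega_f is 4*pi; exterior point: 0.
# The Laplacian -G*sigma*sum(omega_f) therefore flags interior (collision) points.
inside = total_solid_angle((0.2, 0.2, 0.2))
outside = total_solid_angle((5.0, 5.0, 5.0))
```

Note that evaluating the face factor with \texttt{atan2} rather than a plain arctangent is essential when the denominator changes sign.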
In this work, we consider trajectories about asteroid 4769 Castalia.
Doppler radar images, obtained at the Arecibo Observatory in 1989, are used to determine a shape model of Castalia~\cite{hudson1994,neese2004}.
We use the estimated rotation period of \SI{4.07}{\hour} with a nominal density of \SI{2.1}{\gram\per\centi\meter\cubed}~\cite{scheeres1996}.
The shape model is composed of \num{4092} triangular faces and a rendering of the asteroid is provided in~\cref{fig:castalia_3d}.
In addition, we show a contour plot of the radius of Castalia in~\cref{fig:radius_contour}.
\begin{figure}
\centering
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{castalia}
\caption{3D Shape Model of Castalia} \label{fig:castalia_3d}
\end{subfigure}~
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{radius_contour}
\caption{Radius contours of Castalia} \label{fig:radius_contour}
\end{subfigure} ~
\caption{Polyhedron Shape Model of 4769 Castalia}
\label{fig:castalia}
\end{figure}
\subsection{Spacecraft Equations of Motion}\label{sec:sc_eoms}
The motion of a massless particle, or spacecraft, about an asteroid shares many similarities with that of the three-body problem.
As is typical in the three-body problem, the equations of motion are usually represented in a uniformly rotating frame aligned with the two primaries.
Similarly, the equations of motion about an asteroid are also defined in a body-fixed frame with uniform rotation.
In this reference frame, the gravitational potential field is time invariant and only a function of the position of the particle.
In addition, since the rotational rate of the asteroid is constant, the equations of motion are time invariant.
Finally, the use of the rotating reference frame allows for much greater insight into the dynamic structure of the behavior around the asteroid.
We define a reference frame originating at the center of mass of the asteroid.
The body-fixed reference frame is composed of the unit vectors \( \hat{\vecbf{x}} , \hat{\vecbf{y}}, \hat{\vecbf{z}} \), which are aligned along the principal axes of smallest, intermediate, and largest moment of inertia, respectively.
The body-fixed equations of motion of a massless particle about an arbitrarily rotating asteroid are given by
\begin{align}\label{eq:body_eoms}
\ddot{\vecbf{r}} + 2 \vecbf{\Omega} \times \dot{\vecbf{r}} + \vecbf{\Omega} \times \parenth{ \vecbf{\Omega} \times \vecbf{r} } + \dot{\vecbf{\Omega}} \times \vecbf{r} = \nabla U(\vecbf{r}) ,
\end{align}
where \( \vecbf{\Omega} \in \ensuremath{\mathbb{R}}^{3 \times 1}\) is the instantaneous angular velocity vector of the asteroid represented in the body-fixed frame, \( \vecbf{r} \) is the position of the particle in the body-fixed frame, and \( \nabla U(\vecbf{r}) \) is the gradient of the gravitational potential~\cite{scheeres2012a}.
We assume that the asteroid rotates at a uniform rate, \( \norm{\vecbf{\Omega}} = \omega \in \ensuremath{\mathbb{R}}^1 \), about the axis of the maximum moment of inertia, i.e.\ \( \vecbf{\Omega} = \omega \hat{\vecbf{z} }\).
As a result, we can represent the equations of motion in scalar form as
\begin{align} \label{eq:eoms}
\begin{split}
\ddot{x} - 2 \omega \dot{y} - \omega^2 x &= U_x , \\
\ddot{y} + 2 \omega \dot{x} - \omega^2 y &= U_y , \\
\ddot{z} &= U_z .
\end{split}
\end{align}
In this situation, the state is defined as \( \vecbf{x} = \bracket{\vecbf{r}~\>\vecbf{v}}^T \in \ensuremath{\mathbb{R}}^{6 \times 1}\) with \(\vecbf{r} = \bracket{x~\>y~\>z}^T \in \ensuremath{\mathbb{R}}^{3\times1}\) and \(\vecbf{v}= \bracket{ \dot{x}~\>\dot{y}~\>\dot{z} }^T \in \ensuremath{\mathbb{R}}^{3\times1}\) representing the position and velocity with respect to the body-fixed frame, respectively.
We further assume that our spacecraft is capable of exerting a translational acceleration, \( \vecbf{u} \in \ensuremath{\mathbb{R}}^{3\times1} \), in any direction, while subject to a maximum magnitude constraint, \( \norm{\vecbf{u}} \leq u_m \).
This is typical of many spacecraft which offer full rotational freedom and can direct a potentially varying force or acceleration in any direction.
The equations of motion may be rewritten in state space form as
\begin{align}\label{eq:state_space_eoms}
\begin{bmatrix} \dot{\vecbf{r}} \\ \dot{\vecbf{v}} \end{bmatrix} &=
\begin{bmatrix}\vecbf{v} \\ \vecbf{g} \parenth{\vecbf{r}} + \vecbf{h}\parenth{\vecbf{v}} + \vecbf{u} \end{bmatrix} ,
\end{align}
where the terms \(\vecbf{g} \parenth{\vecbf{r}} \) and \( \vecbf{h}\parenth{\vecbf{v}} \) are given by
\begin{align}\label{eq:state_space_terms}
\vecbf{g}\parenth{\vecbf{r}} = \begin{bmatrix} U_x + \omega^2 x \\ U_y + \omega^2 y \\ U_z \end{bmatrix} ,\quad
\vecbf{h}\parenth{\vecbf{v}} = \begin{bmatrix} 2 \omega \dot{y} \\ -2 \omega \dot{x} \\ 0 \end{bmatrix} .
\end{align}
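As a sanity check of the scalar equations above, the following Python sketch evaluates the state derivative of~\cref{eq:state_space_eoms} with a point-mass potential \( U = \mu / r \) standing in for the polyhedron model (an assumption for illustration only). At the synchronous radius, where \( \omega^2 = \mu / r^3 \), a particle at rest in the body-fixed frame is in equilibrium.

```python
import math

def eom(state, omega, mu=1.0):
    """Body-fixed state derivative of the state-space equations of motion.

    A point-mass potential U = mu/r stands in for the polyhedron model
    (an illustrative assumption, not the paper's gravity model)."""
    x, y, z, vx, vy, vz = state
    r3 = (x*x + y*y + z*z) ** 1.5
    # attraction = grad U for U = mu/r
    Ux, Uy, Uz = -mu*x/r3, -mu*y/r3, -mu*z/r3
    # g(r): gravity plus centripetal terms; h(v): Coriolis terms
    ax = Ux + omega**2 * x + 2.0 * omega * vy
    ay = Uy + omega**2 * y - 2.0 * omega * vx
    az = Uz
    return (vx, vy, vz, ax, ay, az)

# At the synchronous radius omega**2 = mu / r**3, a particle at rest in the
# body-fixed frame experiences no net acceleration: the derivative vanishes.
mu, r = 1.0, 2.0
omega = math.sqrt(mu / r**3)
deriv = eom((r, 0.0, 0.0, 0.0, 0.0, 0.0), omega, mu)
```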
Since Castalia is a uniformly rotating asteroid, the equations of motion are time invariant when represented in the body-fixed frame.
In addition, there exists an integral of motion, or a conserved quantity, that is constant for all motion of a particle.
The Jacobi constant, \( J (\vecbf{r} , \vecbf{v} ) \), is given by
\begin{align}\label{eq:jacobi}
J \parenth{\vecbf{r}, \vecbf{v}} = \frac{1}{2} \omega^2 \parenth{x^2 + y^2} + U(\vecbf{r}) - \frac{1}{2} \parenth{\dot{x}^2 + \dot{y}^2 + \dot{z}^2} .
\end{align}
The Jacobi constant functions in a manner similar to its counterpart in the three-body problem~\cite{szebehely1967}.
We can define zero-velocity surfaces using the Jacobi constant by fixing the value to a desired constant.
The zero-velocity surfaces are the locus of points where the kinetic energy and hence velocity vanishes.
Just as in the three-body problem, the Jacobi constant in~\cref{eq:jacobi} divides the phase space into distinct realms of possible motion.
Similarly, there exist, in general, four equilibrium points and also their associated stable and unstable manifolds~\cite{scheeres1996,scheeres1994}.
The properties of these manifolds play a critical role in the dynamics of trajectories in their vicinity.
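The conservation of the Jacobi constant can be verified numerically. The sketch below (again with a point-mass stand-in potential and hypothetical normalized parameters) integrates the body-fixed equations of motion with a fixed-step RK4 scheme and checks that \( J \) from~\cref{eq:jacobi} drifts only at the level of the integration error.

```python
import math

MU, OMEGA = 1.0, 0.5  # hypothetical normalized parameters

def eom(s):
    # Body-fixed equations of motion with stand-in potential U = MU/r.
    x, y, z, vx, vy, vz = s
    r3 = (x*x + y*y + z*z) ** 1.5
    return (vx, vy, vz,
            -MU*x/r3 + OMEGA**2 * x + 2.0*OMEGA*vy,
            -MU*y/r3 + OMEGA**2 * y - 2.0*OMEGA*vx,
            -MU*z/r3)

def jacobi(s):
    """Jacobi constant of eq. (jacobi) with the stand-in potential U = MU/r."""
    x, y, z, vx, vy, vz = s
    U = MU / math.sqrt(x*x + y*y + z*z)
    return 0.5*OMEGA**2*(x*x + y*y) + U - 0.5*(vx*vx + vy*vy + vz*vz)

def rk4_step(s, h):
    # Classical fixed-step fourth-order Runge-Kutta integrator.
    k1 = eom(s)
    k2 = eom(tuple(si + 0.5*h*ki for si, ki in zip(s, k1)))
    k3 = eom(tuple(si + 0.5*h*ki for si, ki in zip(s, k2)))
    k4 = eom(tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h/6.0*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.5, 0.0, 0.1, 0.0, 0.4, 0.0)  # arbitrary initial state
J0 = jacobi(s)
for _ in range(2000):
    s = rk4_step(s, 1e-3)
drift = abs(jacobi(s) - J0)  # should be tiny: J is an integral of motion
```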
\section{Reachability Set on a Poincar\'e Section}\label{sec:reachability}
Typical optimal control methods, including both indirect and direct based methods, are highly dependent on an accurate initial guess.
For indirect optimization, which is based on the calculus of variations, this results in the well-known two-point boundary value problem.
Insight into the problem or insight by the designer is usually required to determine appropriate initial costates that will converge to the optimal solution and satisfy the desired constraints.
To avoid this issue, we utilize the concept of the reachability set on a lower dimensional Poincar\'e section.
By repeatedly constructing the reachability set, we can achieve general transfers by determining set intersections on the Poincar\'e section.
This alleviates the need to determine an accurate initial guess while offering some insight into the dynamics of neighboring trajectories.
The reachable set contains all possible trajectories that are achievable over a fixed time horizon from a defined initial condition, subject to the constraints of the system.
Reachability theory has been applied to collision avoidance and safety planning in aerospace systems~\cite{holzinger2009,holzinger2011b}.
The theory supporting reachability analysis is directly derivable from optimal control theory~\cite{varaiya2000,lygeros2004}.
Analytic computation of reachability sets is only possible for a small class of potential systems.
Here, we use numerical methods to solve a related optimal control problem, which approximates a single solution that lies on the reachable set.
\begin{figure}
\centering
\begin{scaletikzpicturetowidth}{0.3\textwidth}
\begin{tikzpicture}[scale=\tikzscale]
\coordinate [label=left:\textcolor{black}{\large \(\vecbf{x}_0\)}] (x0) at (-1,-2);
\coordinate [label=below:\textcolor{black}{\large \(\vecbf{x}_n\)}] (xn) at (1,1);
\coordinate [label=left:\textcolor{black}{\large \(\Sigma\)}] (sigma) at (-4,3);
\coordinate [label=right:\textcolor{black}{}] (f1) at (5,-2);
\coordinate [label=below:\textcolor{black}{\large \(\psi(t,\vecbf{x}_0)\)}] (f2) at (2,-5);
\coordinate [label=right:\textcolor{black}{}] (f3) at (-4,-4);
\coordinate [label=right:\textcolor{black}{}] (f4) at (-4,-1);
\filldraw [black] (x0) circle [radius=3pt];
\filldraw [black] (xn) circle [radius=3pt];
\draw [ultra thick,black,->-](x0) to[out=20,in=90,distance=2cm] (f1) to[out=-90,in=0,distance=2cm] (f2) to[out=180,in=-45,distance=2cm] (f3) to[out=135,in=-135,distance=2cm] (f4) ;
\draw [ultra thick, black,dashed,->] (f4) to[out=45,in=180,distance=1cm] ($(xn)-(2,0)$);
\draw [ultra thick] plot [smooth cycle, tension=0.1, rotate=5] coordinates { (-4,-3) (4,-3) (4,3) (-4,3) };
\draw [thick,dashed] (xn) circle [radius=2cm];
\draw [thick,->] (xn) -- ($(xn) + (2.5,0)$);
\draw [thick,rotate=45,->] (xn) -- ($(xn) + (2.5,0)$);
\draw ($(xn) + (1,0)$) arc [start angle=0,end angle=45, radius=1];
\node [draw=none] at (2.4,1.5) {\Large \(\phi_d\)};
\draw [decorate,decoration={brace,amplitude=5pt},rotate=45] (xn) -- ($(xn) + (2,0)$);
\node [draw=none] at ($ (xn) + (0,1) $) {\Large \( J \)};
\end{tikzpicture}
\end{scaletikzpicturetowidth}
\caption{Reachability set on a Poincar\'e section\label{fig:reachability_set}}
\end{figure}
We seek to approximate the reachability set on a Poincar\'e section by solving an optimal control problem.
The Poincar\'e section is chosen in a manner similar to the previous work in both the three-body problem as well as analyses performed around asteroids to determine periodic orbits.
Typically, analysis for the three-body problem relies heavily on symmetries in the force fields.
However, in our system model, the gravitational potential, given by~\cref{eq:potential}, has no symmetries.
In spite of this, it is still possible to determine periodic solutions through the application of a Poincar\'e map with the surface of section chosen normal to a surface in the phase space~\cite{scheeres2000}.
For a periodic orbit, the trajectories will intersect the Poincar\'e section at two distinct points every half orbit.
With the addition of a low thrust control input, we are able to expand the reachable set from a distinct point to a larger area on the Poincar\'e section.
\Cref{fig:reachability_set} illustrates this methodology.
Without any control input, the trajectories will follow the system dynamics, \( \psi(t, \vecbf{x}_0 ) \) and intersect the Poincar\'e section at \( \vecbf{x}_n\).
The addition of a control input allows the spacecraft to depart from the natural dynamics and intersect the section at another location denoted by the dashed circle.
We use the cost function \( J \) to define a distance metric on the Poincar\'e section.
Maximization of \( J \), or the minimization of \( -J \), along various directions, which are parameterized using \( \phi_d \), on the Poincar\'e section allows us to generate the largest reachability set under the bounded control input.
We define the Poincar\'e section as the surface normal to \( y = 0 \).
Following convention, the Poincar\'e map is defined as the map from one transversal crossing of the surface \( y = 0\) to the next.
Using the method of Reference~\citenum{scheeres2000}, we remove \( y \) and \( \dot{y} \) from consideration and create a four-dimensional map.
The Poincar\'e section, represented by \( \Sigma \), then becomes
\begin{align}\label{eq:poincare_section}
\Sigma = \braces{\parenth{x, \dot{x}, z, \dot{z}} | y(t_f) = 0 }.
\end{align}
We use this section to compute periodic orbits that serve as the initial and target states of our transfer.
In addition, this section serves as a lower dimensional space upon which we approximate the reachability set.
An optimal control problem is defined by the cost function
\begin{align}\label{eq:cost}
J = -\frac{1}{2} \left( \vecbf{x}(t_f) - \vecbf{x}_{n}(t_f)\right)^T
Q
\left( \vecbf{x}(t_f) - \vecbf{x}_{n}(t_f)\right) ,
\end{align}
where \( \vecbf{x}_n(t_f) \) is the final state of a control-free trajectory, while the term \( \vecbf{x}(t_f)\) is the final state of a trajectory under the influence of the control input.
We use the matrix \( Q = \text{diag} \bracket{1~\>0~\> 1~\> 1~\>0~\>1 } \in \ensuremath{\mathbb{R}}^{6 \times 6}\) to represent the mapping onto \( \Sigma \).
Maximization of the distance between \( \vecbf{x}_n \) and \(\vecbf{x} \), on the Poincar\'e section defined in~\cref{eq:poincare_section}, is equivalent to the minimization of \( J \) defined in~\cref{eq:cost}.
We ensure that the trajectories intersect the Poincar\'e section through the use of terminal constraints.
In addition, we use the terminal constraints to define a specific direction along which we seek to minimize the cost~\cref{eq:cost}.
Since the Poincar\'e section is four-dimensional, we parameterize a direction in \( \ensuremath{\mathbb{R}}^4 \) using three angles \( \phi_1, \phi_2 , \phi_3 \).
The terminal constraints are given in terms of these angles as
\begin{align}\label{eq:terminal_constraints}
\begin{split}
m_1 &= y = 0 , \\
m_2 &= \parenth{\sin \phi_{1_{d}}} \parenth{ x_1^2 + x_2^2 + x_3^2 + x_4^2} - x_1^2 = 0, \\
m_3 &= \parenth{\sin \phi_{2_{d}}} \parenth{ x_2^2 + x_3^2 + x_4^2} - x_2^2 = 0, \\
m_4 &= \parenth{\sin \phi_{3_{d}}} \parenth{ 2 x_3^2 + 2 x_3 \sqrt{x_4^2 + 2 x_4^2}} - x_3 - \sqrt{x_4^2 + x_3^2} = 0 ,
\end{split}
\end{align}
where we make use of the difference states \( \parenth{x_1, x_2 ,x_3, x_4 }\) defined as
\begin{align}\label{eq:diff_states}
\begin{split}
x_1 &= x(t_f) - x_n(t_f) , \\
x_2 &= z(t_f) - z_n(t_f) , \\
x_3 &= \dot{x}(t_f) - \dot{x}_n(t_f) , \\
x_4 &= \dot{z}(t_f) - \dot{z}_n(t_f) . \\
\end{split}
\end{align}
We select the terminal time, \( t_f \), from the time required for the uncontrolled trajectory to return back to the Poincar\'e section.
The constraint \( m_1 = 0 \) ensures that the terminal state lies on the Poincar\'e section.
The constraints \( m_2, m_3, m_4 \) are used to define a direction on the Poincar\'e section.
\Cref{eq:diff_states} represents the difference between our controlled and uncontrolled trajectory on the Poincar\'e section.
We approximate the entire reachable set by discretization over the space of angles \(\phi_1, \phi_2, \phi_3 \).
By convention we assume that the angles lie in the following range
\begin{align*}
\phi_1, \phi_2 &\in [ 0, \pi ) ,\\
\phi_3 &\in [ 0 , 2 \pi ) ,
\end{align*}
such that we parameterize all directions on the three-sphere, \( \ensuremath{\mathbb{S}}^3 \).
Finally, we also incorporate the control acceleration magnitude constraint as
\begin{align}\label{eq:control_constraint}
c(\vecbf{u}) = \vecbf{u}^T \vecbf{u} - u_m^2 \leq 0 ,
\end{align}
where \( u_m \) is the maximum acceleration possible by the propulsion system.
This constraint assumes that the control acceleration may be oriented in any direction, with a magnitude that is variable but bounded.
The goal is to determine the control history \( \vecbf{u}(t) \) such that the cost function~\cref{eq:cost} is minimized while subject to the equations of motion~\cref{eq:body_eoms} and the constraints~\cref{eq:control_constraint,eq:terminal_constraints}.
We apply a standard calculus of variations approach to solve our optimal control problem~\cite{bryson1975}.
Using the Euler-Lagrange equations we arrive at the necessary conditions for optimality
\begin{align}\label{eq:necc_conditions}
\begin{split}
\dot{\vecbf{x}} ^T &= \deriv{H}{\vecbf{\lambda}} ,\\
\dot{\vecbf{\lambda}}^T &= \deriv{H}{\vecbf{x}} , \\
0 &= \deriv{\phi}{x}^T + \deriv{\vecbf{m}}{x}^T \vecbf{\beta} - \vecbf{\lambda}^T(t_f) , \\
0 &= \deriv{H}{\vecbf{u}} + \mu^T \deriv{c}{\vecbf{u}} ,
\end{split}
\end{align}
where the Hamiltonian, \( H\), is defined as
\begin{align}\label{eq:hamiltonian}
H = \vecbf{\lambda}_r^T \vecbf{v} + \vecbf{\lambda}_v^T \parenth{\vecbf{g}(\vecbf{r}) + \vecbf{h}(\vecbf{v}) + \vecbf{u}}.
\end{align}
The costate is given by \( \vecbf{\lambda} = \bracket{ \vecbf{\lambda}_r~\> \vecbf{\lambda}_v }^T \in \ensuremath{\mathbb{R}}^{6 \times 1}\).
The vector \( \vecbf{\beta} \in \ensuremath{\mathbb{R}}^{4 \times 1} \) contains the additional Lagrange multipliers associated with the terminal constraints in~\cref{eq:terminal_constraints}, and \( \mu \) is a Lagrange multiplier associated with the control constraint in~\cref{eq:control_constraint}.
We can redefine the optimal control in terms of the costate by rewriting the necessary condition as
\begin{align*}
\vecbf{u} = - \frac{u_m^2}{2 \mu} \vecbf{\lambda}_v .
\end{align*}
We use this along with the control constraint to solve for the Lagrange multiplier \( \mu \)
\begin{align*}
\mu = \pm \frac{u_m}{2} \norm{\vecbf{\lambda}_v} .
\end{align*}
Finally, we use the second-order necessary condition to determine the correct sign of \( \mu \) and find the optimal control input for the reachable set as
\begin{align}\label{eq:optimal_control}
\vecbf{u} = - u_m \frac{\vecbf{\lambda_{\vecbf{v}}}}{\norm{\vecbf{\lambda_{\vecbf{v}}}}} .
\end{align}
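The saturated control law can be checked directly: among all admissible accelerations with \( \norm{\vecbf{u}} \leq u_m \), the control above minimizes the control-dependent part of the Hamiltonian, \( \vecbf{\lambda}_v^T \vecbf{u} \). The Python sketch below (with hypothetical costate values) illustrates this.

```python
import math
import random

def optimal_control(lam_v, u_max):
    """Saturated control law: thrust directed opposite the velocity costate."""
    n = math.sqrt(sum(c * c for c in lam_v))
    return tuple(-u_max * c / n for c in lam_v)

def h_control(lam_v, u):
    # Control-dependent part of the Hamiltonian: lambda_v . u
    return sum(l * c for l, c in zip(lam_v, u))

random.seed(1)
lam_v = (0.3, -1.2, 0.5)   # hypothetical costate values
u_max = 0.1                # bound on control magnitude
u_star = optimal_control(lam_v, u_max)

# Sample admissible controls (||u|| <= u_max); none does better than u_star.
candidates = []
for _ in range(100):
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    nd = math.sqrt(sum(c * c for c in d))
    scale = random.random() * u_max / nd
    candidates.append(tuple(scale * c for c in d))
```

Since \( \vecbf{\lambda}_v^T \vecbf{u} \) is linear in \( \vecbf{u} \), its minimum over the ball is attained on the boundary, opposite \( \vecbf{\lambda}_v \), which is exactly the saturated control.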
This optimal control formulation results in a two-point boundary value problem.
We use a shooting method to determine the initial costates, \( \vecbf{\lambda}(t_0)\), such that the terminal constraints are satisfied.
In addition, we implement a multiple shooting method which subdivides the entire trajectory into small sub-intervals~\cite{stoer2013}.
The multiple shooting method reduces the sensitivity of the terminal states to the initial conditions.
Using a shorter time interval alleviates many of the issues of single shooting approaches, which suffer from convergence difficulties near the optimal solution.
To ensure a continuous trajectory we incorporate additional constraints
\begin{align}\label{sec:interior_constraints}
\begin{split}
\vecbf{x}(t_{m-1}^{-}) &= \vecbf{x}(t_{m}^{+}) , \\
\vecbf{\lambda}(t_{m-1}^{-}) &= \vecbf{\lambda}(t_{m}^{+}),
\end{split}
\end{align}
which ensure that both the state and costate are continuous at the patch point between segment \( m-1 \) and \( m\).
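The patch-point conditions can be illustrated with a toy problem. The Python sketch below (using a harmonic oscillator with a known exact flow as a stand-in for the spacecraft dynamics) computes the continuity defects between segments; a multiple-shooting solver drives these defects to zero, and they vanish identically when the patch states are sampled from a single exact trajectory.

```python
import math

def flow(s, t):
    """Exact flow of the harmonic oscillator x'' = -x (stand-in dynamics)."""
    x, v = s
    return (x*math.cos(t) + v*math.sin(t),
            -x*math.sin(t) + v*math.cos(t))

def defects(patch_states, dt):
    """Continuity defects x(t_m^-) - x(t_m^+) between consecutive segments.

    A multiple-shooting solver drives these to zero while also meeting the
    boundary and terminal constraints."""
    d = []
    for s_start, s_next in zip(patch_states[:-1], patch_states[1:]):
        end = flow(s_start, dt)  # propagate segment to its right endpoint
        d.append(tuple(a - b for a, b in zip(end, s_next)))
    return d

# Patch states sampled from one exact trajectory: every defect is zero.
dt, n_seg = 0.7, 5
s0 = (1.0, 0.0)
patches = [flow(s0, m * dt) for m in range(n_seg + 1)]
D = defects(patches, dt)
```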
\section{Numerical Simulation}\label{sec:simulation}
We present an example transfer of a spacecraft about the asteroid 4769 Castalia.
Our equations of motion, given by~\cref{eq:eoms}, are an idealized version of the dynamics of a spacecraft.
For example, the model does not include the effect of mass transfer from propellant usage.
We instead model the control input as a generic acceleration vector in the body-fixed asteroid frame.
The acceleration magnitude constraint in~\cref{eq:control_constraint} is chosen to emulate a physically realizable thruster system.
In this analysis, we assume \( u_m = \SI{0.1}{\milli\meter\per\second\squared}\) which is equivalent to a thrust of approximately \SI{100}{\milli\newton} for a \SI{1000}{\kilo\gram} spacecraft.
This amount of thrust is typical of many current ion or hall effect thrusters~\cite{goebel2008 ,choueiri2009}.
The objective is to transfer the spacecraft between two periodic equatorial orbits about Castalia.
The initial and target orbits are periodic solutions about Castalia computed using the method introduced by Reference~\citenum{scheeres2003}.
The initial conditions for both orbits are defined in the body-fixed frame as
\begin{align}\label{sec:initial_transfer}
\vecbf{x}_i =
\begin{bmatrix}
1.4973 \\ 0 \\ 0.0061 \\ 0\\ -0.0009 \\ 0
\end{bmatrix} ,
\quad
\vecbf{x}_t =
\begin{bmatrix}
6.1175 \\ 0 \\ 0.0001 \\ 0\\ -0.0025 \\ 0
\end{bmatrix} .
\end{align}
\Cref{fig:initial_transfer} shows the initial, \( \vecbf{x}_i \), and target, \( \vecbf{x}_t\), periodic orbits which lie in the equatorial plane of Castalia.
Our goal is to transfer from a lower altitude to a higher altitude while remaining in the equatorial plane of the asteroid.
This type of scenario would occur frequently during mapping and observation missions to asteroids.
\begin{figure}[htbp]
\centering
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{initial_transfer}
\caption{Equatorial View} \label{fig:eq_initial_transfer}
\end{subfigure}~
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{initial_transfer_3d}
\caption{3D view} \label{fig:initial_transfer_3d}
\end{subfigure} ~
\caption{Initial and target periodic orbits}
\label{fig:initial_transfer}
\end{figure}
In this transfer example, we have also used a reduced model of Castalia.
Rather than using the full \num{4092} face model we reduce the number of faces to \num{1024} using the Matlab function \verb+reducepatch+.
This greatly speeds up the computation with only a small difference in the gravitational field.
We first compute the reachability set originating from the initial periodic orbit at \( \vecbf{x}_i\) for a fixed time of flight and bounded control magnitude as defined previously.
The reachability set is computed by solving the two-point boundary value problem using a multiple shooting algorithm to satisfy the necessary conditions in~\cref{eq:necc_conditions}.
The reachability set is generated on the lower dimensional Poincar\'e section and is composed of the terminal states in the \( \parenth{x,z,\dot{x},\dot{z} } \) space.
We approximate the reachability set by discretization of each of the angles \( \phi_1, \phi_2 , \phi_3 \) into ten discrete steps.
This results in a total of \(10^3\) trajectories which approximate the reachability set on the Poincar\'e section.
We visualize the section using the two figures in~\cref{fig:poincare_section}.
\begin{figure}[htbp]
\centering
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{poincare_xvsxdot.pdf}
\caption{\( x \) vs. \( \dot{x} \) Poincar\'e section} \label{fig:poincare_xvsxdot}
\end{subfigure}~
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{poincare_zvszdot.pdf}
\caption{\( z \) vs. \( \dot{z} \) Poincar\'e section} \label{fig:poincare_zvszdot}
\end{subfigure}
\caption{Poincar\'e section visualization \label{fig:poincare_section}}
\end{figure}
These two-dimensional sections allow us to visualize the four-dimensional Poincar\'e section defined by~\cref{eq:poincare_section}.
The first stage of the transfer is represented by the magenta markers in~\cref{fig:poincare_section}.
From~\cref{fig:poincare_xvsxdot}, we can see that the reachability set has grown in the \( \dot{x} \) dimension but has not been enlarged much in the \( x \) direction.
Similarly,~\cref{fig:poincare_zvszdot} shows an increase in the \( \dot{z} \) component.
From the reachability set, we choose a trajectory and terminal state which minimize a distance metric \( d(\vecbf{x}_f,\vecbf{x}_t) \) to the desired target
\begin{align}\label{eq:reach_dist}
d = \sqrt{k_x \parenth{x_f - x_t }^2 + k_z \parenth{z_f - z_t }^2 + k_{\dot{x}}\parenth{\dot{x}_f - \dot{x}_t }^2 + k_{\dot{z}}\parenth{\dot{z}_f - \dot{z}_t }^2} ,
\end{align}
where \( k_x, k_z, k_{\dot{x}}, k_{\dot{z}} \) are used to weight the relative importance of each of the components of the Poincar\'e section.
\Cref{fig:phi_distance} shows the distance to the target for the chosen discretization of \( \phi_i \).
\begin{figure}[htbp]
\centering
\begin{subfigure}[htbp]{0.3\textwidth}
\includegraphics[width=\textwidth]{phi1.pdf}
\caption{ \( \phi_1 \)} \label{fig:phi1}
\end{subfigure}~
\begin{subfigure}[htbp]{0.3\textwidth}
\includegraphics[width=\textwidth]{phi2.pdf}
\caption{\( \phi_2 \)} \label{fig:phi2}
\end{subfigure}~
\begin{subfigure}[htbp]{0.3\textwidth}
\includegraphics[width=\textwidth]{phi3.pdf}
\caption{\( \phi_3 \)} \label{fig:phi3}
\end{subfigure}
\caption{Variation of \(d(\vecbf{x}_f,\vecbf{x}_t)\) due to \( \phi_i\)}
\label{fig:phi_distance}
\end{figure}
The trajectory which minimizes \( d \) is indicated by the red marker in~\cref{fig:poincare_section,fig:phi_distance}.
Since the first reachability set does not include the target we use the minimum state from the first stage to initialize another reachability computation.
Once again we compute the reachability set by discretization of the angles \( \phi_i \) on the Poincar\'e section.
This second stage, represented by the cyan markers in~\cref{fig:poincare_section,fig:phi_distance}, further increases the \( x, z\) components but does not reach the target orbit.
As a result, a third and fourth stage are generated in a similar manner and shown by the green and blue markers in~\cref{fig:poincare_section,fig:phi_distance}, respectively.
We can see in~\cref{fig:poincare_section} that the reachability set of the fourth stage includes both the \( x \) and \( z\) components of the target periodic orbit.
At the same time there is a relatively large difference between the \( \dot{x} \), \( \dot{z} \), and \( z \) components of the fourth stage and the target orbit.
In practice this is not a large concern as we have direct control over the spacecraft velocity via the control input and the equatorial plane still remains within the reachability set of the transfer.
With the reachability set encompassing the target orbit, we can generate a final transfer to the target.
We use the minimum state calculated from the final stage to serve as the initial condition of the transfer.
A final optimal transfer is computed to satisfy the fixed terminal state \( \vecbf{x}(t_f) = \vecbf{x}_t \) and the bounded control magnitude constraint.
\Cref{fig:trajectory} shows the entire transfer trajectory, from the four reachable set trajectories as well as the final transfer to the target.
\begin{figure}[htbp]
\centering
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{trajectory.pdf}
\caption{Equatorial view of transfer} \label{fig:trajectory_up}
\end{subfigure}~
\begin{subfigure}[htbp]{0.45\textwidth}
\includegraphics[width=\textwidth]{trajectory_3d.pdf}
\caption{Out of plane view} \label{fig:trajectory_3d}
\end{subfigure}~
\caption{Complete transfer trajectory}
\label{fig:trajectory}
\end{figure}
It is interesting to note that while both the initial and target periodic orbits lie in the equatorial plane, the reachability trajectories show a relatively large out-of-plane component during the transfers.
In spite of this out-of-plane motion, the reachability set approaches and meets the target orbit.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{control.pdf}
\caption{Control history \label{fig:control}}
\end{figure}
\Cref{fig:control} shows the control input required during the maneuver.
We can see that the control constraint in~\cref{eq:control_constraint} is satisfied over the entire trajectory.
\section{Conclusions}\label{sec:conclusions}
In this paper, an optimal transfer process based on reachability sets computed on a Poincar\'e section is used to generate a transfer between periodic orbits about the asteroid 4769 Castalia.
We have linked several computations of the reachability set on a Poincar\'e section in order to design a transfer trajectory.
The use of the Poincar\'e section allows for trajectory design on a lower dimensional space and is an extension of its well-known use in the analysis of periodic orbits.
We use an indirect optimal control formulation to incorporate a control magnitude constraint and several terminal state constraints.
Utilizing the reachability set alleviates the need to determine accurate initial conditions that allow for convergence of the optimal solution.
In addition, the use of the polyhedron gravitational model gives a simple method of extending this work to any small body that possesses a defined shape model.
\section*{Acknowledgments}\label{sec:acknowledgments}
This research has been supported in part by NSF under the grants CMMI-1243000, CMMI-1335008, and CNS-1337722.
\bibliographystyle{aiaa}
\section{Overview}
The purpose of this paper is to describe a general mechanism for constructing large commuting families of operators in the setting of geometric representation theory. We first sketch the underlying formal mechanism and then describe its primary application.
\subsection{Symmetries of Convolution Categories}\label{convolution categories}
Let $X$ denote a stack and $\mathcal G\circlearrowright X$ an ind-proper groupoid acting on $X$. Let $\mathcal R=Shv(X)$ be the symmetric monoidal
category of sheaves on $X$. In all our examples, $\mathcal R=R\mhyphen\mathrm{mod}$ is described as modules for a commutative algebra $R=\omega(X)$.
Let $\mathcal K=Shv(X)^\mathcal G$ be the symmetric monoidal category of $\mathcal G$-equivariant sheaves. We have an equivalence
$$\mathcal K\simeq H\mhyphen\mathrm{mod}$$ where the Hecke algebra $H=(\omega(\mathcal G),\ast)$ is the associated groupoid algebra (concretely a cocommutative Hopf algebroid over $R$). Now consider the Hecke category $\mathcal H=Shv(\mathcal G)$ of sheaves on $\mathcal G$.
It forms a categorical cocommutative Hopf algebroid over $\mathcal R=Shv(X)$: in addition to the convolution monoidal structure and the diagonal action of $(Shv(X),\otimes)$, it carries a commutative pointwise tensor product operation. $\mathcal H$-modules represent $\mathcal G$-equivariant sheaves of categories on $X$, hence are naturally linear over $\mathcal G$-equivariant sheaves on $X$.
As a consequence of this structure we find the following:
\begin{theorem}[Informal]\label{main intro}
There is a
braided monoidal functor $\mathfrak{z}:\mathcal K\to \mathcal Z(\mathcal H)$ from the category of equivariant sheaves
to the Drinfeld center of the convolution category, lifting the diagonal embedding $\mathfrak{d}:\mathcal R\to \mathcal H$ and
admitting a monoidal left inverse $\mathfrak a:\mathcal Z(\mathcal H)\to \mathcal K$.
Thus we have a diagram with commutative square as follows, with morphisms labeled by their level of monoidal structure:
\begin{equation}\label{basic diagram}\xymatrix{\mathcal K\ar[r]_{E_2}\ar[d]_-{E_\infty}& \mathcal Z(\mathcal H) \ar@/_1pc/_-{E_1}[l] \ar[d]^{E_1}\\
\mathcal R \ar[r]_{E_1} & \mathcal H}\end{equation}
\end{theorem}
Theorem~\ref{main intro} applies to any setting where we have a category of geometric objects (stacks) and a theory of sheaves $X\mapsto Shv(X)$ admitting ($*$-)pushforward and ($!$-)pullback functors, satisfying base change and the $(p_*,p^!)$ adjunction for ind-proper maps $p$. Such a sheaf theory defines a functor from a correspondence category of stacks to a 2-category of categories. Such sheaf theories are one of the main objects of study of the book~\cite{GR} of Gaitsgory and Rozenblyum, which in particular develops two main examples of sheaf theories:
\begin{itemize}
\item the theory of ind-coherent sheaves $X\mapsto QC^!(X)$, a ``renormalized" variant of the theory of quasicoherent sheaves
\item the theory of $\mathcal D$-modules $X\mapsto \mathcal D(X) = QC^!(X_{dR})$.
\end{itemize}
We will mostly be interested in applying the theorem to a mild variant of the theory of $\mathcal D$-modules, the theory of {\em ind-holonomic $\mathcal D$-modules} on a class of ind-algebraic stacks, which we describe in Section~\ref{sheaf theory} using the formalism of~\cite{GR}. In the examples we study (equivariant flag varieties and in particular affine Grassmannians), this theory produces simply (the ind-completed version of) the familiar categories of equivariant constructible complexes.
The ordinary equivariant $\mathcal D$-module categories (where equivariant holonomic sheaves are not necessarily compact) can be recovered as a completion with respect to the equivariant cohomology of a point. In Section~\ref{KM section} we describe two related applications of this result, in which the category of modules for a nil-Hecke algebra acts centrally on convolution categories built out of flag varieties for the corresponding Kac-Moody group or the reflection representation of the corresponding Coxeter group.
\subsection{The Quantum Ng\^o Action}
Our motivation stems from the following application. Let $G$ be a complex reductive group, and consider the spherical Hecke category $\mathcal H=\mathcal H_{sph}$ associated to the Langlands dual group ${G^{\vee}}$: the category of sheaves on the equivariant Grassmannian
\[
\underline{\mathcal{G}r}^\vee = \quot{{LG^\vee_+}}{{LG^\vee}}{{LG^\vee_+}}
\] which we consider as a groupoid stack
\[
\mathcal G=\underline{\mathcal{G}r}^\vee\; \circlearrowright \; X=pt/{LG^\vee_+}
\]
We may apply Theorem \ref{main intro} in this setting, obtaining a diagram of the form of Diagram~\ref{basic diagram}. Langlands duality, in particular the renormalized geometric Satake theorem of Bezrukavnikov-Finkelberg~\cite{BezFink}, leads to interpretations of the various parts of the diagram in terms of the original group $G$, which naturally appear in a cohomologically graded form.
Bezrukavnikov, Finkelberg and Mirkovic~\cite{BFM} identified the ring $H=H_\ast(\underline{\mathcal{G}r}^\vee)$ with the coordinate ring of the commutative group scheme $J$ of regular centralizers (see also the influential works of Teleman~\cite{teleman} where this construction is applied to categorical representation theory and symplectic topology and Braverman-Finkelberg-Nakajima~\cite{BraFinkNa} where it is generalized to a construction of Coulomb branches of 3d $\mathcal N=4$ supersymmetric gauge theory). We then identify the symmetric monoidal category $\mathcal K=H\mhyphen\mathrm{mod}$ with the category of quasi-coherent sheaves on $J$ under convolution. The functor $\mathfrak{z}: \mathcal K \to \mathcal Z(\mathcal H)$ can be interpreted in terms of the \emph{Ng\^o homomorphism} from regular centralizers to all centralizers, leading to a new conceptual construction (the original construction was via a Hartogs' lemma argument). The Ng\^o homomorphism is best known for its central role in the proof of the Fundamental Lemma~\cite{Ngo}. As we explain below, it also gives rise to a canonical integration of all ``$G$-integrable systems": the commuting flows on any Hamiltonian $G$-space $X$ (or any of its Hamiltonian reductions by subgroups of $G$) coming from the $G$-invariant Poisson map $X\to \mathfrak c=\Spec\mathbb C[\mathfrak g^\ast]^G$ integrate to an action of the commutative symplectic groupoid $J\to \mathfrak c$.
The multiplicative group acts on the equivariant Grassmannian by loop rotation; considering a loop rotation equivariant version of the spherical Hecke category leads to a deformation over $H^*(BS^1)=\mathbb C[\hbar]$ also described in~\cite{BezFink}. As is familiar from the theory of cyclic homology and the Nekrasov $\Omega$-background~\cite{NW} (in particular the theory of quantized Coulomb branches in 3d $\mathcal N=4$ gauge theories~\cite{BraFinkNa}), the parameter $\hbar$ of the deformation appears in cohomological degree two, and as a result the familiar structures of representation theory appear in their ``cohomologically sheared" (or ``asymptotic"~\cite{BezFink}) avatars as differential graded algebras. Under this deformation, $\mathcal K_\hbar=H_\hbar\mhyphen\mathrm{mod}$ gets identified (as a monoidal category) with the Whittaker Hecke category $\mathcal{W}h_\hbar$ of bi-Whittaker $\mathcal D_\hbar$-modules on $G$ (which we will refer to as the $\mathcal W$-category). The Hecke algebra $H$ itself is identified both with the spherical subalgebra of the nil-Hecke algebra associated to the affine Weyl group $W_{\textit{aff}}$ of ${G^{\vee}}$, and with the bi-Whittaker differential operators $\mathfrak{Wh}_\hbar$ on $G$. (The underlying category $\mathcal{W}h_\hbar=\mathfrak{Wh}_\hbar\mhyphen\mathrm{mod}$ for $\hbar\neq 0$ has been recently described explicitly in~\cite{lonergan, ginzburg whittaker} as sheaves on the coarse quotient $\mathfrak h^\ast//W_{\textit{aff}}$ of the Cartan by the affine Weyl group, an identification that is expected to respect the tensor structure.)
In particular, we deduce (from a general conceptual point of view) that the convolution structure on the Whittaker Hecke category is naturally symmetric monoidal, answering a question of Arinkin-Gaitsgory. Moreover the functor $\mathfrak{z}$ becomes a central action on the category of conjugation equivariant $\mathcal D_\hbar$-modules on $G$ which is right inverse to the quantum Kostant section (Whittaker reduction); a quantum version of the Ng\^o homomorphism, conjectured by Nadler.
\begin{theorem}\label{central action intro}
The $\mathcal W$-category $\mathcal{W}h_\hbar$ is naturally symmetric monoidal, and equipped with a braided monoidal functor
$Ng\hat{o}_\hbar:\mathcal{W}h_\hbar \to \mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ lifting the quantum characteristic polynomial map $Char_\hbar:\mathcal Z_\hbar\to \mathcal{HC}_\hbar$ and
admitting a monoidal left inverse $Whit_\hbar:\mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)\to \mathcal{W}h_\hbar$.\footnote{The functors $Whit_\hbar$ and $Ng\hat{o}_\hbar$ do not form an adjoint pair in general, although see Remark \ref{remark quantum affine}.}
Thus we have a commutative diagram:
\[
\xymatrix{
\mathcal{W}h_{\hbar} \ar[r]_{Ng\hat{o}_\hbar}\ar[d]_-{}& \ar@/_1pc/_-{Whit_\hbar}[l] \mathcal D_{\hbar}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \ar[d]^{\Gamma}\\
\mathcal Z_{\hbar} \ar[r]_{Char_\hbar} & \mathcal{HC}_{\hbar} }
\]
In particular, $\mathcal{W}h_\hbar$ acts by $G$-endomorphisms on $\mathcal D_\hbar(G)$-module categories.
\end{theorem}
In particular, passing to invariants for a subgroup $K\subset G$, the $\mathcal W$-category acts on $(\mathfrak U_\hbar\mathfrak g,K)\mhyphen\mathrm{mod}$ and on $\mathcal D_\hbar(K\backslash M)$ for any $G$-variety $M$.
Thus the quantum Ng\^o functor defines a categorical counterpart to Harish-Chandra's construction of a $G$-invariant commuting family of operators on $G$-spaces, parametrized by sheaves on $\mathfrak h^\ast{/\! /} W_{\textit{aff}}$.
It follows that one can consider categories of $\mathcal{W}h_\hbar$-eigensheaves in any $G$-category---a more refined version of infinitesimal character. In particular this applies (for the conjugation action of $G$ on itself) to give a refined version of the notion of central character of character sheaves.
We expect that this structure will play a key role in better understanding the truncated Hecke and character sheaf categories defined by Lusztig \cite{lusztig cells, lusztig convolution} (see also \cite{BFO}). More generally, the entire character field theory of~\cite{character2} is linear over $\mathcal{W}h_\hbar$, leading to a spectral decomposition of the homology of character varieties of surfaces over $\mathfrak h^\ast{/\! /} W_{\textit{aff}}$.
While the spherical Hecke category corresponds to the cohomological Harish-Chandra bimodule category $\mathcal{HC}_\hbar$, it is desirable to have a version of Theorem \ref{central action intro} which applies to the usual category of Whittaker $\mathcal D$-modules and Harish-Chandra bimodules:
\begin{theorem}\label{unsheared ngo intro}
There is a canonical $E_2$-morphism $Ng\hat{o}: \mathcal{W}h \to \mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ which fits into a diagram as in Theorem \ref{central action intro}. Moreover, this functor restricts to an exact functor of braided monoidal abelian categories (appearing as the heart of the natural $t$-structure on the source and target).
\end{theorem}
In Section \ref{ss graded lift} we sketch a proof of Theorem \ref{unsheared ngo intro}, by constructing a graded lift of the Ng\^o functor, i.e. a lift to the category consisting of objects with a compatible external grading. Geometrically, such a graded lift corresponds to a mixed version of the Satake category, in the sense of Beilinson-Ginzburg-Soergel \cite{BGS} (see also \cite{riche}).
Less categorically, the quantum Ng\^o action may be interpreted in terms of a {\em quantum integration} of all $G$-quantum Hamiltonian systems: $\mathfrak{Wh}_\hbar$ forms a commutative quantum groupoid (cocommutative Hopf algebroid) quantizing the commutative symplectic groupoid $J$,
which acts on $\hbar$-differential operators $\mathfrak{D}_{\hbar,M}$ on any $G$-space $M$ (or any of its quantum Hamiltonian reductions) extending the Harish-Chandra higher Laplacians $\mathfrak Z_\hbar\mathfrak g\to \mathfrak{D}_{\hbar,M}$ (in particular $\mathfrak{D}_{\hbar,M}$ is naturally a $\mathfrak{Wh}_\hbar$-comodule). This structure is closely related to the
theory of shift maps for quantum integrable systems. (Examples include quantized Coulomb branches of 3d $\mathcal N=4$ gauge theories, and more generally arbitrary supersymmetric reductions of 4d $\mathcal N=4$ super-Yang Mills on an interval, see Section~\ref{SUSY}.)
Since the theory of Hopf algebroids in the $\infty$-categorical setting is not currently documented in the literature, we confine ourselves to remarks (see Remarks~\ref{quantum groupoid remark} and~\ref{abelian vs derived}) and defer a more detailed discussion to a future paper.
\subsection{Outline of paper} In the rest of the introduction, we review the idea behind Theorem~\ref{main intro} and some of its instances, the classical Ng\^o construction, its quantization and their applications. In Section~\ref{sheaf theory} we develop some basic sheaf theory functoriality in the setting relevant for the renormalized geometric Satake theorem of~\cite{BezFink}, i.e., equivariant sheaves on the affine Grassmannian. In Section~\ref{Hecke section} we prove Theorem~\ref{main intro}. In Section~\ref{filtered D-mod section} we review some aspects of categorical representations and filtered $\mathcal D$-modules. Finally in
Section~\ref{quantum section} we describe how specializing the theorem in the setting of the affine Grassmannian produces the quantum Ng\^o action.
\medskip
{\bf Setting:} Throughout the paper we work in the setting of derived algebraic geometry over a field $k$ of characteristic zero, following~\cite{HA, GR}. Thus ``category" indicates an $\infty$-category, commutative or symmetric monoidal indicate $E_\infty$, schemes are derived $k$-schemes, and so forth, unless explicitly noted otherwise.
\section{Introduction}
We begin by describing the classical Ng\^o action and some of its applications (Section~\ref{classical Ngo}), followed by its sheaf-theoretic reinterpretation (Section~\ref{monoidal interp}) and its quantization (Sections~\ref{quantum Kostant section} and~\ref{quantum Ngo section}). We also explain some of the applications of the quantum Ng\^o action, in particular to quantum integrability, in Section~\ref{integrable section}. In Section~\ref{parabolic induction section} we explain a perspective on our quantum Ng\^o map that is closer in spirit to the classical theory of character sheaves. In Section~\ref{KM section} we mention some other applications of Theorem~\ref{main intro} (to Kac-Moody groups and Coxeter systems) and a couple of toy examples. Finally in Section~\ref{further section} we outline some further directions and perspectives, including geometric Langlands, character varieties and supersymmetric gauge theory.
\subsection{The classical Ng\^o action}\label{classical Ngo}
In the following sections we provide some background for the primary applications of our results: the classical and quantum Ng\^o actions.
Fix a complex reductive group $G$ with Lie algebra $\mathfrak g$. We also fix a Borel subgroup $B$ with unipotent radical $N$, and write $H=B/N$ for the universal torus (with Lie algebras $\mathfrak b$, $\mathfrak n$ and $\mathfrak{h}$ respectively). Let $$\mathfrak c:=\Spec \left( \mathbb C[\mathfrak g^\ast]^G\right) \simeq \mathfrak h^\ast{/\! /} W$$ denote the adjoint quotient scheme.
Recall the characteristic polynomial map and Kostant section
\[
\xymatrix{
\mathfrak g^\ast/G \ar[r]_{\chi} & \ar@/_1pc/[l]_{\kappa} \mathfrak c }
\]
The Kostant section $\kappa:\mathfrak c\to \mathfrak g^\ast$ lands in the open substack $\mathfrak g^{\ast}_{\reg}/G\subset \mathfrak g^\ast/G$ of regular elements: the locus of $x\in \mathfrak g^\ast$ whose stabilizer $G_x$ has the minimal dimension $l=rk(\mathfrak g)$. It can be described in terms of Hamiltonian reduction: fix $\psi\in \mathfrak n^\ast\simeq \mathfrak g/\mathfrak b$ a nondegenerate character of $\mathfrak n$. Then the composite map $$\xymatrix{\mathfrak g^\ast{/\! /} _{\psi} N \ar[r]& \mathfrak g^\ast/G \ar[r]^-{\chi}& \mathfrak c}$$ is an isomorphism, and the Kostant section is its inverse.
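For orientation, here is the standard concrete model in type $A$ (a sketch; sign and transpose conventions vary): for $G=GL_n$, identifying $\mathfrak{gl}_n^\ast\simeq\mathfrak{gl}_n$ via the trace form, $\mathfrak c\simeq \mathbb A^n$ records the coefficients of the characteristic polynomial, and the Kostant section is modeled by companion matrices, each of which is a regular element:
\[
% companion-matrix model of the Kostant section for GL_n
\kappa(c_1,\dots,c_n) \;=\;
\begin{pmatrix}
0 & & & -c_n \\
1 & 0 & & -c_{n-1} \\
 & \ddots & \ddots & \vdots \\
 & & 1 & -c_1
\end{pmatrix},
\qquad
\det\bigl(t\cdot\mathrm{Id}-\kappa(c)\bigr)=t^n+c_1t^{n-1}+\cdots+c_n.
\]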
We denote by $$I\simeq T^\ast(G/G)\longrightarrow \mathfrak g^\ast/G$$ the inertia stack (or derived loop space) of the adjoint quotient: informally,
\[
I = \left\{(g,x) \in G\times \mathfrak g^\ast \mid coAd_g(x)=x \right\}/G.
\]
It can be identified as the cotangent stack to the stack of conjugacy classes. We can restrict $I$ over the Kostant section, resulting in the {\em group scheme of regular centralizers}
\[
J = \kappa^\ast I \longrightarrow \mathfrak c.
\]
The group scheme $J$ also has a description as a Hamiltonian reduction of $T^\ast G$:
$$J\simeq N{}_\psi \backslash \!\backslash T^\ast G {/\! /} _\psi N.$$
It has the natural structure of commutative {\em symplectic groupoid} over $\mathfrak c$ -- in particular $Lie(J)\simeq T^\ast\mathfrak c$ as commutative symplectic Lie algebroids.
Note that there is an equivalence of group schemes over the regular locus
\[
\chi^\ast J|_{\mathfrak g^\ast_{\mathrm{reg}}/G} \simeq I|_{\mathfrak g^\ast_{\mathrm{reg}}/G}
\]
In fact the Kostant section defines an equivalence
\[
\mathfrak g^{\ast}_{\reg}/G\simeq B_\mathfrak c J\longrightarrow \mathfrak c
\]
of the regular adjoint quotient with the classifying stack of $J\to \mathfrak c$.
Ng\^o made the crucial observation that regular centralizers act canonically on all centralizers:
\begin{lemma}[Ng\^o]
The equivalence above extends to a morphism of group schemes over $\mathfrak g^\ast/G$:
\[
\chi^\ast J \to I
\]
\end{lemma}
The Lemma is a simple consequence of the Hartogs principle: the open substack $\mathfrak g^\ast_{\mathrm{reg}}/G \subseteq \mathfrak g^\ast/G$ has complement of codimension at least three. For $GL_n$, this map can be described as the natural action of invertible functions on the spectrum of a matrix $M$ via operators commuting with $M$. In general, no direct description of the Ng\^o map was available.
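The $GL_n$ description just recalled can be made explicit via functional calculus (a sketch, with the generic case of $M$ with separable characteristic polynomial $p$ in mind): the fiber of $J$ over $\chi(M)$ is the unit group of $\mathbb C[t]/(p)$, acting on centralizers by evaluation,
\[
J_{\chi(M)} \;\simeq\; \bigl(\mathbb C[t]/(p(t))\bigr)^{\times}
\;\ni\; f \;\longmapsto\; f(M)\in GL_n,
\qquad [\,f(M),\,M\,]=0,
\]
% invertible functions of the spectrum act through operators commuting
% with M, i.e. land in the centralizer of M -- the Ngo map in type A.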
Ng\^o introduced his map as a universal ``mold" from which many more concrete actions are formed. Ng\^o applied it (extending the Donagi-Gaitsgory spectral theory for Higgs bundles~\cite{DG}) to give a new abelian symmetry group of the cohomology of Hitchin fibers, which plays a crucial role in his study of endoscopy and proof of the Fundamental Lemma.
Namely, given any variety $C$, the Ng\^o action (in its equivalent ``delooped" form, an action of the abelian group stack $BJ\to \mathfrak c$ of $\mathfrak g^\ast/G$), gives an action of the commutative group-stack $Map(C,BJ)\to Map(C,\mathfrak c)$
on the stack $Map(C,\mathfrak g^\ast/G)$ of $G$-Higgs bundles\footnote{By keeping track of $\G_m$-equivariant version of the above constructions, Ng\^o obtains also a more general version twisted by a line bundle on $C$, see Section~\ref{char poly section}.} on $C$.
We observe that the Ng\^o map has another concrete manifestation (which does not appear to have been discussed in the literature).
Given any Hamiltonian $G$-space $X$ with equivariant moment map
$$\xymatrix{X\ar[r]^-{\mu}& \mathfrak g^\ast \ar[r]^-{\chi}& \mathfrak c}$$ the induced map to $\mathfrak c$ defines a collection of
Poisson-commuting Hamiltonians on $X$ or (thanks to $G$-invariance) on any Hamiltonian reduction $Y=X{/\! /} _{\mathbb O}K$ of $X$ by a subgroup of $G$ - a mechanism that was used to describe and solve Toda, Calogero-Moser and many other integrable systems (see e.g.~\cite{KKS,Kostant Toda} and~\cite{etingof}).
The Hamiltonian flows coming from the Poisson map $\chi\circ\mu:X\to \mathfrak c$ may be interpreted as defining an action of the trivial commutative Lie algebroid $$T^\ast\mathfrak c\simeq Lie(J)\longrightarrow \mathfrak c$$ on $X$.
A simple consequence of the Ng\^o construction is the following:
\begin{proposition}\label{classical integration} The Hamiltonian flows (action of $Lie(J)$) on any Hamiltonian reduction $Y\to\mathfrak c$ of a Hamiltonian $G$-space $X$ integrate canonically to an action of the symplectic groupoid $J\to \mathfrak c$.
\end{proposition}
\begin{proof}
A Hamiltonian $G$-action on $X$ is equivalent to an action of the symplectic groupoid $T^\ast G$ over $\mathfrak g^\ast$. We may restrict this to an action of the inertia group scheme $I$, and then use the Ng\^o map to induce an action of $J$ -- concretely, the action map is given as follows:
$$J \times_\mathfrak{c} X= (J \times_\mathfrak{c} \mathfrak{g}^\ast) \times_{\mathfrak{g}^\ast} X \to I \times_{\mathfrak{g}^\ast} X\to X$$
\end{proof}
\subsection{Monoidal interpretation}\label{monoidal interp}
In order to describe our construction of the Ng\^o action and its quantization, we first pass from spaces to tensor categories of sheaves.
The category $QC( I)=\mathcal Z(QC(\mathfrak g^\ast/G))$ of sheaves on the inertia stack $ I\simeq T^\ast(G/G)$ of $\mathfrak g^\ast/G$ (the Drinfeld center of $QC(\mathfrak g^\ast/G)$) is naturally braided under the convolution product.
Using the Ng\^o homomorphism, we may also define a braided\footnote{The braided structure can be seen by delooping the functor to an action of $(QC(B_\mathfrak c J),\ast)$ on $QC(\mathfrak g^\ast/G)$.} monoidal functor (with respect to the convolution structures on both sides)
\[
Ng\hat{o}_0: QC(J) \to QC( I)
\]
given by the correspondence
\begin{equation}\label{Ngo correspondence}
\xymatrix{
J &\ar[l] \chi^\ast(J)\ar[r] & I
}
\end{equation}
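In terms of this correspondence, writing $q:\chi^\ast(J)\to J$ and $a:\chi^\ast(J)\to I$ for the two legs (notation introduced here for illustration), the functor is the standard pull-then-push of the convolution formalism:
\[
Ng\hat{o}_0(\mathcal F) \;=\; a_{\ast}\, q^{\ast}\mathcal F,
\qquad \mathcal F\in QC(J).
\]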
Note that there is another monoidal functor $$Whit_0:QC( I)\to QC(J)$$
given by the correspondence
\begin{equation}\label{Kostant correspondence}
\xymatrix{
J&\ar[l]_-{\sim} \kappa^\ast( I) \ar[r]& I
}
\end{equation}
provided by the Kostant section. Moreover $Whit_0$ is a left inverse to $Ng\hat{o}_0$, $Whit_0\circ Ng\hat{o}_0\simeq Id$. Thus we have a commutative diagram:
$$\xymatrix{QC(J)\ar[r]_{Ng\hat{o}_0}\ar[d]_-{}& QC( I) \ar@/_1pc/_-{Whit_0}[l] \ar[d]^{}\\
QC(\mathfrak c) \ar[r]_{Char_0} & QC(\mathfrak g^\ast/G)}$$
where $Char_0 = \chi^\ast$.
One can check that the Ng\^o action is identified, via the renormalized Satake theorem of~\cite{BezFink}, with the construction of
Theorem~\ref{main intro} applied to ${LG^\vee_+}$-equivariant sheaves on the affine Grassmannian ${LG^\vee}/{LG^\vee_+}$ for the Langlands dual group ${G^{\vee}}$.\footnote{More precisely, the Satake Theorem of \cite{BezFink} gives a differential graded form of $\mathfrak g^\ast/G$ and the Kostant slice. To recover the statement above, one must consider some form of mixed sheaves on the affine Grassmannian as in~\cite{riche}, see Remark~\ref{mixed remark}.}
We can also describe Hamiltonian $G$-actions monoidally. Given a Hamiltonian $G$-space $X$, the action of the symplectic groupoid $T^\ast G$
endows $QC(X)$ with the structure of a module category over the convolution category $QC(T^\ast G)$. Equivalently, the equivariant moment map
$X/G\to \mathfrak g^\ast/G$ makes $QC(X/G)$ into a module category over $QC(\mathfrak g^\ast/G)$. This equivalence comes from a Morita equivalence
$$(QC(\mathfrak g^\ast/G),\otimes)\mhyphen\mathcal{M}od\simeq (QC(T^\ast G),\ast)\mhyphen\mathcal{M}od,$$ an instance of Gaitsgory's 1-affineness theorem~\cite{1affine},
which in particular identifies the Drinfeld centers of the two categories
$$\mathcal Z(QC(\mathfrak g^\ast/G),\otimes)\simeq \mathcal Z(QC(T^\ast G),\ast)\simeq QC( I).$$
Thus the Ng\^o action gives rise to an action of $QC(J)$ on $QC(X)$ commuting with the $G$-action and moment map, and hence descending to any Hamiltonian reduction.
\subsection{Quantum Kostant slice and geometric Satake}\label{quantum Kostant section}
The quantization of $\mathfrak g^\ast$ is the algebra $\mathfrak U\mathfrak g$ or equivalently the (pointed or $E_0$) category $\mathfrak U\mathfrak g\mhyphen\mathrm{mod}$.
Recall that by the Harish-Chandra isomorphism the adjoint quotient scheme $\mathfrak c$ is identified with the spectrum of the center of the enveloping algebra, $$\mathfrak c\simeq \Spec \mathfrak Z\mathfrak g \simeq \mathfrak h^\ast{/\! /} W.$$
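The smallest example (a sketch; normalizations of the Casimir and of the $\rho$-shift vary in the literature): for $\mathfrak g=\mathfrak{sl}_2$ the center is a polynomial ring on the Casimir,
\[
\mathfrak Z\mathfrak g \;=\; \mathbb C[\Omega],
\qquad
\Omega \;=\; \tfrac12 h^2 + ef + fe,
\]
% On a highest-weight module of weight \lambda, \Omega acts by
% (1/2)\lambda(\lambda+2), invariant under the rho-shifted reflection
% \lambda -> -\lambda-2, so that
\[
\mathfrak c \;\simeq\; \Spec \mathbb C[\Omega] \;\simeq\; \mathbb A^1 \;\simeq\; \mathfrak h^\ast{/\! /} W.
\]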
The quantization of the Kostant section is given by the {\em Whittaker Hecke algebra}, the quantum Hamiltonian reduction $\mathfrak U\mathfrak g{/\! /} _\psi \mathfrak U\mathfrak n:$ the algebra which acts on the space of Whittaker vectors ($\mathfrak n$-eigenvectors with eigenvalue $\psi$) universally in any $\mathfrak U\mathfrak g$-module---in other words, the (principal) finite $\mathcal W$-algebra associated to $\mathfrak g$. Kostant~\cite{Kostant Whittaker} then proved that the canonical map $\mathfrak Z\mathfrak g\to \mathfrak U\mathfrak g{/\! /}_\psi N$ is an isomorphism, in particular that the $\mathcal W$-algebra $\mathfrak U\mathfrak g{/\! /} _\psi N$ is commutative.
The quantization of $\mathfrak g^\ast/G$ is the monoidal (or $E_1$) category $\mathcal{HC}$ of {\em Harish-Chandra bimodules}: $\mathfrak U\mathfrak g$-bimodules integrable for the diagonal action of $G$ (or weakly $G$-equivariant $\mathfrak U\mathfrak g$-modules). It receives a monoidal functor
$$\xymatrix{\mathcal Z:=\mathfrak Z\mathfrak g\mhyphen\mathrm{mod} \ar[rr]^-{Char}&& \mathcal{HC}}$$
quantizing the characteristic polynomial map.
Its Drinfeld center $$\mathcal Z(\mathcal{HC})\simeq \mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$$ is identified with the category of conjugation-equivariant $\mathcal D$-modules on $G$, quantizing sheaves on the inertia stack $QC( I)$.
Thanks to the (derived, renormalized, loop rotation equivariant) Geometric Satake theorem of Bezrukavnikov-Finkelberg~\cite{BezFink}, Harish-Chandra bimodules and Whittaker reduction appear out of the equivariant geometry of the affine Grassmannian for the Langlands dual group. In this setup, the category of Harish-Chandra bimodules and its relatives appear with a cohomological degree shift (as is familiar from cyclic homology theory, see in particular the closely related~\cite{conns}, or from the Nekrasov $\Omega$-background in supersymmetric gauge theory~\cite{NW}). In particular, the quantization parameter $\hbar$ appears with cohomological degree two as the equivariant parameter $\mathbb C[\hbar]=H^\ast(BS^1)$ for loop rotation; the dual Lie algebra $\mathfrak g^\ast$ is replaced by the 2-shifted Poisson variety $\mathfrak g^\ast[2]$, which deforms over $\mathbb C[\hbar]$ to the Rees dg-algebra $\mathfrak U_\hbar\mathfrak g$; and the 1-shifted symplectic stack $\mathfrak g^\ast/G$ is replaced by the 3-shifted symplectic stack $\mathfrak g^\ast[2]/G$, which deforms to the monoidal category $\mathcal{HC}_\hbar$ of $\mathfrak U_\hbar\mathfrak g$-Harish-Chandra bimodules.
Geometric Satake gives an equivalence of monoidal categories between $\mathcal{HC}_\hbar$ and the spherical Hecke category $\mathcal H_\hbar=\cDv_{hol}(\underline{\mathcal{G}r})$ of ${LG^\vee_+} \rtimes \mathbb G_m$-equivariant $D$-modules on the affine Grassmannian ${\cG r^{\vee}} ={LG^\vee}/{LG^\vee_+}$. Moreover, this equivalence intertwines the Kostant-Whittaker action of $\mathcal{HC}_\hbar$ on $\mathcal Z_\hbar = \mathfrak Z_\hbar\mathfrak g\mhyphen\mathrm{mod}$ with the action of $\mathcal H_\hbar$ on $\mathcal R_\hbar := \cDv_{hol}(B{LG^\vee_+}) \simeq H_{{G^{\vee}}}(pt)\mhyphen\mathrm{mod}$. We denote by $$\mathcal K_\hbar=\End_{\mathcal H_\hbar}(\mathcal R_\hbar)$$ the monoidal category of Hecke-linear endomorphisms (compare our general notation of Section~\ref{convolution categories}, where the equivariant Grassmannian is playing the role of the groupoid $\mathcal G$).
\subsection{The $\mathcal W$-category and the quantum Ng\^o action}\label{quantum Ngo section}
Now we explain how the construction of Theorem~\ref{main intro} also gives rise to a quantization of the Ng\^o action (in its cohomologically sheared $\hbar$-form).
The quantum analog of $J$ is given by the {\em $\mathcal W$-category}, or Whittaker Hecke category of $G$
$$\mathcal{W}h_\hbar:=End_{\mathcal{HC}_{\hbar}}(\mathfrak Z_\hbar \mathfrak g\mhyphen\mathrm{mod}) \simeq \mathcal D_\hbar(N {}_\psi{\backslash} G /_{\psi} N),$$ given by the $\mathcal{HC}_\hbar$-endomorphisms of the category $\mathcal Z_\hbar=\mathfrak Z_\hbar\mathfrak g\mhyphen\mathrm{mod}$ of Whittaker modules; equivalently, it is the category of $\mathcal D$-modules on $G$ equivariant with respect to the left and right action of $(N,\psi)$\footnote{The Whittaker equation defining $\psi$-twisted equivariance is not homogeneous with respect to the usual filtration on differential operators; one must use the \emph{Kazhdan filtration} to make sense of the Rees algebra constructions--see Remark \ref{remark Kazhdan}.}.
According to geometric Satake, we have an equivalence of monoidal categories
\[
\mathcal{W}h_\hbar = \End_{\mathcal{HC}_\hbar}(\mathcal Z_\hbar) \simeq \End_{\mathcal H_\hbar}(\mathcal R_\hbar) = \mathcal K_\hbar
\]
More concretely, $\mathcal{W}h_\hbar$ is given by modules for the ring of bi-Whittaker differential operators $\mathfrak{Wh}_\hbar$, obtained from $\mathfrak{D}_G$ by two-sided Hamiltonian reduction by $N$ at $\psi$, whereas $\mathcal K_\hbar$ is given by modules for $H_\hbar = H_\ast^{{LG^\vee_+}\rtimes \mathbb C^\times}({\cG r^{\vee}})$, the equivariant convolution homology ring appearing in \cite{BFM}. The equivalence of monoidal categories above may be interpreted as an isomorphism of bialgebroids $$\mathfrak{Wh}_\hbar \simeq H_\hbar.$$
The $\mathcal W$-category provides a deformation quantization of sheaves on the group scheme $J$, the bi-Whittaker reduction of $T^\ast G$.
As with all Hecke categories, the $\mathcal W$-category is naturally monoidal. However it is surprising that it is in fact naturally symmetric monoidal (even on the derived level), and that the Ng\^o action quantizes: the construction of Theorem~\ref{main intro} applied to ${LG^\vee_+}\rtimes \G_m$-equivariant sheaves on the affine Grassmannian ${LG^\vee}/{LG^\vee_+}$ gives rise, in conjunction with the renormalized Satake theorem of~\cite{BezFink}, to Theorem \ref{central action intro} --- a central action of the $\mathcal W$-category on Harish-Chandra bimodules, the {\em quantum Ng\^o action}.
\begin{remark}
The Drinfeld center $\mathcal Z(\mathcal C)$ of any monoidal category $\mathcal C$ is nontrivially braided, so that the analog of Kostant's proof of his theorem fails: the canonical Kostant functor $\mathcal Z(\mathcal D(G))\to \mathcal{W}h$ is far from an equivalence. In fact $\mathcal{W}h_\hbar$ is closer to being a ``Lagrangian" in $\mathcal Z(\mathcal D_\hbar(G))$, a maximal subcategory on which the braiding vanishes.
\end{remark}
\subsection{Spectral decomposition and quantum integrability}\label{integrable section}
One of the fundamental problems in harmonic analysis is spectral decomposition of functions on a symmetric space under
Harish-Chandra's commutative algebra of invariant differential operators, a collection of higher analogs of the Laplace operator for which we seek joint eigenfunctions. We now describe some immediate consequences of Theorem~\ref{central action intro} in this setting.
By a result of~\cite{1affine, dario}, the monoidal category $\mathcal{HC}$ of Harish-Chandra bimodules is Morita equivalent (as a monoidal category) to $\mathcal D$-modules on $G$ with convolution, the ``de Rham group algebra" $(\mathcal D(G),\ast)$.
The Morita equivalence relates a $\mathcal D(G)$-category with its weak $G$-equivariants, and an $\mathcal{HC}$-category with its de-equivariantization:
$$\mathcal D(G)\circlearrowright \mathcal M \longleftrightarrow \mathcal{HC}\circlearrowright \mathcal M^G, \hskip.3in \mathcal{HC}\circlearrowright \mathcal N \longleftrightarrow \mathcal D(G)\circlearrowright \left(\mathcal N\otimes_{Rep(G)} Vect\right)$$
It follows that module categories for $\mathcal{HC}$ are identified with $\mathcal D(G)$-modules, also known as {\em de Rham} or {\em strong} $G$-categories. The theory of de Rham $G$-categories, or the equivalent theory of $\mathcal{HC}$-modules, is a natural realization of the notion of quantum Hamiltonian $G$-space (an algebraic variant of an idea of~\cite{teleman}). Examples include $A\mhyphen\mathrm{mod}$ for algebras $A$ acted on by $G$, for which the Lie algebra action is made internal by means of a homomorphism $\mu^*:\mathfrak U\mathfrak g\to A$, e.g., $A=\mathfrak U\mathfrak g$ itself or $A=\mathfrak{D}_M$ for a $G$-space $M$. More abstractly the category $\mathcal D(M)$ for any $G$-space $M$ is a de Rham $G$-category.
For a $G$-space $M$, the composite map $$\xymatrix{\mathfrak Z\mathfrak g\ar[r]^-{\chi^*} &\mathfrak U\mathfrak g \ar[r]^-{\mu^*} & \mathfrak{D}_M}$$ provides a family of commuting $G$-invariant differential operators (similarly for any Hamiltonian $G$-algebra $(A,\mu^*)$ as above). These generalize the commuting $G$-invariant differential operators on symmetric spaces introduced by Harish-Chandra, and thanks to $G$-invariance descend to give commuting operators on any quantum Hamiltonian reduction (e.g., on locally symmetric spaces). This provides a source of many quantum integrable systems~\cite{etingof}.
In particular given $\lambda\in \mathfrak c\simeq \mathfrak h^\ast{/\! /} W$ we can define the $\lambda$-eigensystem for the Harish-Chandra Laplacians in this setting, the quantum analog of the fibers of the classical Hamiltonians $\chi\circ\mu$.
However, unlike in the classical setting, quantum Hamiltonian $G$-spaces $\mathcal M$ do not ``live'' over $\mathfrak c=\Spec(\mathfrak Z\mathfrak g)$: $\mathcal M$ is {\em not} naturally a module category for $\mathcal Z=\mathfrak Z\mathfrak g\mhyphen\mathrm{mod}$. Thus unlike with spaces of functions, there is no spectral decomposition of $\mathcal M$ over $\mathfrak c$: e.g., it does not make sense to ask for a category which is the ``quantum fiber'' of $\mathcal M$ over $\lambda\in \mathfrak c$. This is a manifestation of the well-known phenomenon of shift maps and translation functors: the Harish-Chandra systems associated to different $\lambda$
can be isomorphic.
\begin{example}
$\bullet$ The eigensystem $M_\lambda$ for the operator $z \frac{d}{dz}$ on ${\C^\times}$ depends on $\lambda$ only up to translation. Indeed the category of $\mathcal D$-modules on ${\C^\times}$ is equivalent (by the Mellin transform) to the category of equivariant sheaves $\mathcal{W}h_{\C^\times}=QC(\mathbb C)^{\mathbb Z}$, which then acts on $\mathcal M$ for any quantum Hamiltonian ${\C^\times}$-space $\mathcal M$.
$\bullet$ The $G$-category $\mathfrak U\mathfrak g_\lambda\mhyphen\mathrm{mod}$ of $\mathfrak g$-modules with a fixed central character depends on $\lambda$ only up to the action of translation functors. The corresponding $W^{\textit{aff}}$-orbit $[\lambda]\in \mathfrak h^\ast{/\! /} W^{\textit{aff}}$ is an invariant of this $G$-category, but not $\lambda$ itself.
\end{example}
Thus we might instead hope to spectrally decompose quantum Hamiltonian $G$-spaces over $\mathfrak h^\ast{/\! /} W^{\textit{aff}}$, and indeed our main result gives such a decomposition:
\begin{corollary}[Theorem~\ref{central action intro}]\label{quantum ham corollary} For any $\mathcal D_\hbar(G)$-module $\mathcal M$, there is an action of the tensor category $\mathcal{W}h_\hbar\circlearrowright \mathcal M$ commuting with the $\mathcal D_\hbar(G)$ action (and hence descending to any quantum Hamiltonian reduction such as $\mathcal D_\hbar(K\backslash G/ H)$ and $(\mathfrak g,K)\mhyphen\mathrm{mod}_\hbar$).
\end{corollary}
In other words, quantum Hamiltonian $G$-spaces may be spectrally decomposed under the ``categorical Harish-Chandra operators'', i.e., the action of the commuting operators provided by the quantum Ng\^o map $\mathcal{W}h_\hbar\simeq QC(\mathfrak h^\ast{/\! /} W^{\textit{aff}})\to \mathcal D_\hbar(G/G)=\mathcal Z(\mathcal D_\hbar(G))$.
\begin{remark}[Integrating quantum Hamiltonian systems by a quantum groupoid]\label{quantum groupoid remark}
This categorical statement has a more concrete ``function-level'' interpretation as follows, in the spirit of Proposition~\ref{classical integration}. For a $G$-space $M$, the algebra $\mathfrak{D}_{\hbar,M}$ is a $\mathfrak{D}_{\hbar,G}$-comodule in $\mathfrak U\mathfrak g$-modules. Concretely, $\mathfrak{D}_{\hbar,M}$ carries an action of $G$ and an action of $\mathfrak U_\hbar\mathfrak g$ (from the moment map), making it a Harish-Chandra bimodule.
The function level quantization of the action of Proposition~\ref{classical integration} endows $\mathfrak{D}_{\hbar,M}$ with the structure of a $\mathfrak{Wh}_\hbar$-comodule; concretely, the coaction map is given by
$$\mathfrak{D}_{\hbar,M}\rightarrow \mathfrak{D}_{\hbar,G} \otimes_{\mathfrak U_\hbar\mathfrak g} \mathfrak{D}_{\hbar,M} \rightarrow (\mathfrak{Wh}_\hbar \otimes_{\mathfrak Z_\hbar\mathfrak g} \mathfrak U_\hbar\mathfrak g) \otimes_{\mathfrak U_\hbar\mathfrak g} \mathfrak{D}_{\hbar,M} =\mathfrak{Wh}_\hbar \otimes_{\mathfrak Z_\hbar\mathfrak g} \mathfrak{D}_{\hbar,M} $$
Here we use the map $\mathfrak{D}_{\hbar,G}\rightarrow \mathfrak{Wh}_\hbar\otimes_{\mathfrak Z\mathfrak g} \mathfrak U_\hbar \mathfrak g $ which arises from the action of $\mathfrak{D}_{\hbar,G}$ on $\mathfrak{Wh}_\hbar\otimes_{\mathfrak Z\mathfrak g} \mathfrak U\mathfrak g$ defined by the Ng\^o action, and the Harish-Chandra bimodule structure map $\mathfrak{D}_{\hbar,M} \rightarrow \mathfrak{D}_{\hbar,M} \otimes_{\mathfrak U\mathfrak g} \mathfrak{D}_{\hbar,G}$ above.
This comodule structure on $\mathfrak{D}_{\hbar,M}$ underlies a structure of algebra in $\mathfrak{Wh}_\hbar$-comodules, i.e., an action of $\mathfrak{Wh}_\hbar$ on $\mathfrak{D}_{\hbar,M}$ as a {\em commutative quantum groupoid} over $\mathfrak Z_\hbar\mathfrak g$ (cocommutative Hopf algebroid, see~\cite{lu},~\cite{Bohm} and references therein), which is the natural quantum analog of the integration of Hamiltonian flows provided by Proposition~\ref{classical integration}. We postpone a discussion of Hopf algebroids in the $\infty$-categorical setting (and thus a precise formulation of this claim) to a future paper, though see Remark~\ref{abelian vs derived}.
\end{remark}
\begin{remark}[Conjectural Picture: Fukaya Quantization of the Ng\^o correspondence]
The Ng\^o correspondence~\ref{Ngo correspondence} has a Lagrangian structure, and defines a central action of the commutative symplectic groupoid $J$ on $T^\ast G$. This suggests a natural setting for quantization of the Ng\^o action: given a ``deformation quantization theory'', a (lax) symmetric monoidal functor $\mathcal F$ from the Lagrangian correspondence category of symplectic manifolds to dg categories, we obtain a symmetric monoidal category $\mathcal F(J)$ together with a central action on the monoidal category $\mathcal F(T^\ast G)$ associated to the symplectic groupoid $T^\ast G$ integrating $\mathfrak g^\ast$. Informally speaking one expects suitable versions of the Fukaya category to define such a functor (as we learned from Teleman, Gualtieri and Pascaleff). In particular $\mathcal F(T^\ast G)\circlearrowright\mathcal F(M)$ for a Hamiltonian $G$-space $M$ (as explained in~\cite[Conjecture 2.9]{teleman}). Thus one would expect the Ng\^o action to define an action $$\mathcal F(T^\ast G)\circlearrowright \mathcal F(M) \circlearrowleft \mathcal F(J)$$ of the Fukaya category of $J$, with the symmetric monoidal structure coming from convolution, by $G$-automorphisms of the Fukaya category of any Hamiltonian $G$-space. Moreover mirror symmetry should identify $\mathcal F(J)$ in terms of the B-model on $H^\vee{/\! /} W$, providing a notion of spectral decomposition of $G$-categories. It would be very interesting to understand the relation of this picture to the remarkable comprehensive character theory for $G$-A-models developed by Teleman~\cite{teleman}. Note that Teleman's theory prominently features the (unquantized) groupscheme $J$ for the {\em Langlands dual} group, as the target for a spectral decomposition of a smarter ``decompleted'' form of $\mathcal F(T^\ast G)$-modules.
\end{remark}
\subsection{The quantum Ng\^o action}\label{parabolic induction section}
In this section, we give a number of conjectural interpretations of the functor $Ng\hat{o}: \mathcal{W}h \to \mathcal D(G)^G$ in terms of more familiar constructions arising in the theory of character sheaves.
\subsubsection{The horocycle transform and parabolic induction/restriction}
First, let us consider the commutative diagram
\[
\xymatrix{
G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G &\ar[l]_a G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} B \ar[r]^b& Hor \\
G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G \ar@{=}[u] &\ar[l]^r B{/_{\hspace{-0.2em}ad}\hspace{0.1em}} B \ar@{^{(}->}[u] \ar[r]_{s}& H{/_{\hspace{-0.2em}ad}\hspace{0.1em}} B \ar@{^{(}->}[u]
}
\]
where
\[
Hor = (\quot NGN){/_{\hspace{-0.2em}ad}\hspace{0.1em}} H = \quot{G}{(G/N\times G/N)}{H}
\]
is the horocycle stack.
This diagram gives rise to two pairs of adjoint functors, both of which have been studied extensively in the context of character sheaves (see e.g. \cite{lusztig character, Ginzburg admissible, Ginzburg parabolic}): we have the horocycle and character functors
\[
\xymatrix{
hc = b_\ast a^! : \mathcal D(G)^G \ar[r] & \ar[l] \mathcal D(Hor) : a_\ast b^! = ch
}
\]
and the parabolic restriction and induction functors
\[
\xymatrix{
\Res = s_\ast r^![\dim N]: \mathcal D(G)^G \ar[r] & \ar[l] \mathcal D(H)^B \simeq \mathcal D(H)^H : r_\ast s^![-\dim N] = \Ind
}
\]
These functors are closely related, but have different features. For example:
\begin{itemize}
\item The composite of $hc$ followed by restricting to the diagonal $H{/_{\hspace{-0.2em}ad}\hspace{0.1em}} B$ in the Horocycle space is equivalent to $\Res$ (up to a shift).
\item
The category $\mathcal D(Hor)$ carries a monoidal structure coming from convolution, and the functor $hc$ is naturally monoidal. On the other hand, $\Res$ does not intertwine the convolution structures in general.
\item The functor $hc$ is easily seen to be conservative by an argument of Mirkovic and Vilonen \cite{MV} (the composite $ch \circ hc$ is given by convolution with the Springer sheaf; in particular, the identity functor is a direct summand). On the other hand, the functor $\Res$ is only conservative in the case where no Levi subgroup of $G$ carries a cuspidal local system in the sense of Lusztig \cite{lusztig_intersection_1984} (this is the case for $G=GL_n$, for example, but not for $G=SL_2$).
\item The functors $\Ind$ and $\Res$ restrict to exact functors on the level of abelian categories, but $hc$ and $ch$ do not, in general.
\end{itemize}
\subsubsection{Springer theory and quantum Hamiltonian reduction}
In \cite{Gun1, Gun2}, the category $\mathcal D(\mathfrak g)^G$ is studied, along with the Lie algebra analogues of the functors $\Res$ and $\Ind$ (which we continue to denote $\Res$ and $\Ind$). The category $\mathcal D(\mathfrak g)^G$ is shown to decompose into blocks indexed by cuspidal data. One such block is the Springer block; this can be described as the subcategory of $\mathcal D(\mathfrak g)^G$ generated by the essential image of the functor $\Ind$. It is shown that the functor $\Res$ upgrades to an exact equivalence of abelian categories
\[
\ls{W}{\Res} : \mathcal M(\mathfrak g)^G_{Spr} \xrightarrow{\sim} \mathcal M(\mathfrak{h})^W: \ls{W}{\Ind}
\]
on the Springer block. The inverse functor $\ls{W}{\Ind}$ to $\ls{W}{\Res}$ takes a $W$-equivariant object $\mathfrak{M}$ of $\mathcal M(\mathfrak{h})$ to the $W$-invariants of $\Ind(\mathfrak{M})$. (This result can be thought of as an extension of the Springer correspondence, which identifies a block of the category of equivariant $\mathcal D$-modules with support on the nilpotent cone with representations of $W$.)
To state the conjectures below, we will assume the analogous results to \cite{Gun1, Gun2} in the setting of equivariant $\mathcal D$-modules on $G$ (which the second named author intends to address in future work). In particular, we will assume we have an equivalence:
\[
\xymatrix{
\ls{W}{\Res} : \mathcal M(G)^G_{Spr} \ar@{<->}[r]^\sim & \mathcal M(H)^W: \ls{W}{\Ind}
}
\]
In particular, there is an extension of this equivalence to a functor (no longer fully faithful) on the level of dg-categories\footnote{Note that the source category $\mathcal D(H)^W$ is the dg-derived category of its heart; though this is not the case for the target category, there is still a canonical functor from the dg-derived category of $\mathcal M(G)^G$ to $\mathcal D(G)^G$.}
\[
\xymatrix{
\ls{W}{\Ind}: \mathcal D(H)^W \ar[r] & \mathcal D(G)^G
}
\]
Now let us recall the functor of quantum Hamiltonian reduction and the Harish-Chandra homomorphism. Consider the object
\[
\mathfrak{D}_{G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G} = \mathfrak{D}_G/\mathfrak{D}_G\mathrm{ad}(\mathfrak g)
\]
which represents the functor of quantum Hamiltonian reduction. There is an exact functor of abelian categories
\[
QHR:\mathcal M(G)^G \to (\mathfrak{D}_{G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G})^G\mhyphen\mathrm{mod}^\heartsuit
\]
which takes a strongly equivariant $\mathfrak{D}_G$-module to its $G$-invariants; it has a fully faithful left adjoint $QHR^L$, which we extend to a functor on derived categories. By results of Levasseur and Stafford \cite{LS1, LS2} (or rather, the natural analogue in the group setting), the Harish-Chandra homomorphism defines an isomorphism of rings
\[
rad:(\mathfrak{D}_{G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G})^G \simeq (\mathfrak{D}_H)^W.
\]
Note also that there is a Morita equivalence between $\mathfrak{D}_H \# W$ and its spherical subalgebra $(\mathfrak{D}_H)^W$ which takes a $\mathfrak{D}_H \# W$-module to its $W$-invariants. These functors are compatible in the sense that there is a commutative diagram:
\[
\xymatrix{
\mathcal D(H)^W \ar[d]^\wr_{(-)^W} \ar@/^1pc/[rrd]^{\ls{W}{\Ind}} && \\
(\mathfrak{D}_H)^W\mhyphen\mathrm{mod} & & \mathcal D(G)^G \\
(\mathfrak{D}_{G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G})^G\mhyphen\mathrm{mod} \ar[u]_\wr^{rad} \ar@/_1pc/[rru]_{QHR^L} &&
}
\]
\begin{remark}
In the case $G=GL_n$, there are no non-trivial cuspidal data (equivalently, $QHR$ is conservative), and thus we have an equivalence of abelian categories
\[
\mathcal M(G)^G \simeq \mathcal M(H)^W
\]
However, note that this equivalence does not respect monoidal structures in general.
\end{remark}
\subsubsection{The nil-DAHA and sheaves on the coarse quotient}
Let $W^\textit{aff} \simeq \Lambda_H \rtimes W$ denote the extended affine Weyl group, which acts on $\mathfrak{h}^\ast$ with $\Lambda_H$ acting by translation. The (degenerate) \emph{double affine nil-Hecke algebra} (nil-DAHA) is defined to be the subring
\[
Nil_{W^\textit{aff}} \subseteq \left(\Sym(\mathfrak{h})[\alpha^{-1} \mid \alpha \in \Phi]\right) \rtimes W^\textit{aff}
\]
generated by $\Sym(\mathfrak{h})$ and the Demazure operators $\alpha^{-1}(1-s_\alpha)$ associated to affine simple roots $\alpha$ (see \cite[Chapter 4.3]{schubert book} for further details). The ring $\Sym(\mathfrak{h}) \rtimes W^\textit{aff}$ sits as a subring of $Nil_{W^\textit{aff}}$; in particular there is a fully faithful functor
\[
forg: Nil_{W^\textit{aff}} \mhyphen\mathrm{mod} \hookrightarrow \mathfrak{D}_H \rtimes W\mhyphen\mathrm{mod} \simeq \mathcal D(H)^W
\]
given by restricting the action along the subring inclusion $\Sym(\mathfrak{h}) \rtimes W^\textit{aff} \subseteq Nil_{W^\textit{aff}}$ (fully faithfulness follows from the fact that both rings sit inside a common localization). Similarly, the spherical subalgebra $Nil_{W^\textit{aff}}^{sph}$ is Morita equivalent to $Nil_{W^\textit{aff}}$ and contains a copy of $(\mathfrak{D}_H)^W$.
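In the notation above, these generators satisfy the standard nil-Hecke relations (see again \cite[Chapter 4.3]{schubert book}): for a simple root $\alpha$, writing $\partial_\alpha = \alpha^{-1}(1-s_\alpha)$, one has
\[
\partial_\alpha(f) \;=\; \frac{f - s_\alpha(f)}{\alpha} \;\in\; \Sym(\mathfrak{h}), \qquad \partial_\alpha^2 = 0, \qquad \partial_\alpha(fg) = \partial_\alpha(f)\,g + s_\alpha(f)\,\partial_\alpha(g),
\]
since $f - s_\alpha(f)$ is always divisible by $\alpha$. In particular $Nil_{W^\textit{aff}}$ acts on $\Sym(\mathfrak{h})$ by divided-difference (Demazure) operators, even though it is defined inside a localization.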
More geometrically, the nil-DAHA represents the descent data for an object of $QC(\mathfrak{h}^\ast)$ to the coarse quotient $\mathfrak{h}^\ast {/\! /} W^\textit{aff}$, whereas $\Sym(\mathfrak{h})\rtimes W^\textit{aff}$ represents descent data to the stack quotient $\mathfrak{h}^\ast/W^\textit{aff}$ (see \cite{lonergan}).
The results of Ginzburg \cite{ginzburg whittaker} and Lonergan \cite{lonergan} identify the spherical nil-DAHA $Nil_{W^\textit{aff}}^{sph}$ with bi-Whittaker differential operators $\mathfrak{Wh}$, or alternatively, the loop rotation equivariant homology convolution algebra of the Langlands dual affine Grassmannian (with $\hbar$ formally set to 1). In fact, one can check that this is an isomorphism of bialgebroids, and thus there is an equivalence of monoidal categories
\[
Nil_{W^\textit{aff}}\mhyphen\mathrm{mod} \simeq Nil^{sph}_{W^\textit{aff}} \mhyphen\mathrm{mod} \simeq \mathfrak{Wh}\mhyphen\mathrm{mod} \simeq \mathcal{W}h
\]
In particular, there is a copy of $Nil_{W^\textit{aff}}\mhyphen\mathrm{mod}$ sitting inside $\mathcal D(H)^W$, which we denote by $\mathcal D(H)^W_{Nil}$. The following conjecture states that the Ng\^o functor is compatible with the functors given by Springer theory and the Harish-Chandra homomorphism.
\begin{conjecture}\label{quantum ngo induction}
There is a commutative diagram:
\[
\xymatrix{
QC(\mathfrak{h}^\ast {/\! /} W^\textit{aff}) \ar@{<->}[d]^\wr \ar[r]^{\pi^\ast} & QC(\mathfrak{h}^\ast)^{W^\textit{aff}} \ar@{<->}[d]^\wr \\
Nil_{W^\textit{aff}}\mhyphen\mathrm{mod} \ar[r] \ar@{<->}[d]^\wr & \ar@{<->}[d]^\wr \mathfrak{D}_H \rtimes W\mhyphen\mathrm{mod} \ar[d] \ar@/^3pc/[dd]^{\ls{W}{\Ind}} \\
Nil^{sph}_{W^\textit{aff}}\mhyphen\mathrm{mod} \ar[r] \ar@{<->}[d]^\wr & (\mathfrak{D}_H)^W\mhyphen\mathrm{mod} \ar[d]_{QHR^L} \\
\mathcal{W}h \ar[r]^{Ng\hat{o}} & \mathcal D(G)^G
}
\]
\end{conjecture}
\begin{remark}
A remarkable feature of this diagram is that, while the functor
\[
\ls{W}{\Ind}:\mathcal D(H)^W \to \mathcal D(G)^G
\]
relates two braided monoidal categories, it does not carry a monoidal structure; however, according to the conjecture, $\ls{W}{\Ind}$ is braided monoidal upon restriction to the full subcategory given by modules for the nil-Hecke algebra.
\end{remark}
\begin{remark}\label{remark quantum affine}
Recall that the Ng\^o functor arises from the Lagrangian correspondence (read from left to right)
\[
\xymatrix{
J & \ar[l] \chi^\ast J \ar[r] & I= T^\ast(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)
}
\]
whereas the functor $Whit$ arises from the Lagrangian correspondence (read from right to left)
\[
\xymatrix{
J & \ar[l] \kappa^\ast(I) \ar[r] & I = T^\ast(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)
}
\]
While these diagrams are manifestly different in general (in particular, the classical Ng\^o functor is not adjoint to the Whittaker functor), both diagrams have isomorphic affinizations:
\begin{equation}\label{affine diagram}
\xymatrix{J & \ar[l] J \simeq (\chi^\ast J)^{aff} \ar[r] & I^{aff} \simeq (T^\ast H) {/\! /} W}
\end{equation}
Diagram \ref{affine diagram} corresponds to an inclusion of rings
\[\xymatrix{
\mathbb C[J] & \ar[l] \mathbb C[T^\ast H]^W
}\]
which quantizes to
\[
\xymatrix{
\mathfrak{Wh} \simeq Nil_{W^\textit{aff}}^{sph} & \ar[l] (\mathfrak{D}_H)^W
}
\]
Thus, on the level of affinization, the functors $Ng\hat{o}$ and $Whit$ correspond to the forgetful functor and the base change functor associated to the above inclusion of rings (in particular, they form an adjoint pair). It is remarkable that, while the stack $T^\ast(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ is far from affine, its quantization is almost affine: the abelian category of $(\mathfrak{D}_H)^W$-modules (the quantum affinization) sits as a full subcategory (in fact, a direct summand) of $\mathcal M(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ (and in the case $G=GL_n$, the two categories are equivalent, i.e. $T^\ast(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ is quantum affine). This explains the simpler form of the Ng\^o and Whittaker functors appearing in Conjecture \ref{quantum ngo induction}.
\end{remark}
\subsubsection{Very central $\mathcal D$-modules}
The following definition was given in the PhD thesis of the second named author.
\begin{definition}\label{very central}
We say that an object $\mathfrak{M} \in \mathcal M(G)^G$ is \emph{very central} if $hc(\mathfrak{M})$ is supported on the diagonal substack $H{/_{\hspace{-0.2em}ad}\hspace{0.1em}} B \subseteq Hor$.
\end{definition}
Note that if $\mathfrak{M}$ is very central, then $hc(\mathfrak{M})$ is identified with the parabolic restriction $\Res(\mathfrak{M})$. In particular, restricting $\ls{W}{\Res}$ to $\mathcal M(G)^G_{vc}$ defines a fully faithful monoidal functor to the symmetric monoidal abelian category $\mathcal M(H)^W$.
\begin{remark}
At the level of abelian categories, the functor $hc$ takes an equivariant $\mathfrak{D}_G$-module $\mathfrak{M}$ to its $N$-average $(G\to G/N)_\ast \mathfrak{M}$. The very central property means that this $N$-average is supported on $H=B/N \subseteq G/N$. This property has been studied in \cite{Chen}.
\end{remark}
\begin{conjecture}
\begin{enumerate}
\item The Ng\^o functor defines a fully faithful braided monoidal functor on abelian categories, whose essential image is given by $\mathcal M(G)^G_{vc}$.
\item The essential image of $\ls{W}{\Res}$ restricted to $\mathcal M(G)^G_{vc}$ is given by $\mathcal M(H)^W_{Nil}$.
\end{enumerate}
\end{conjecture}
Note that the two statements are mutually equivalent given Conjecture \ref{quantum ngo induction}.
\subsubsection{Twisted Harish-Chandra systems and almost idempotent character sheaves}
Recall that there is an equivalence $\mathcal{W}h \simeq QC(\mathfrak{h}^\ast{/\! /} W^\textit{aff})$; thus the Ng\^o functor defines a collection of orthogonal almost-idempotent objects of $\mathcal D(G)^G$, given by the images of the skyscraper sheaves at points $\mathcal O_{[\theta]}$, where $[\theta]$ denotes a point of $\mathfrak{h}^\ast {/\! /} W^\textit{aff}$ corresponding to $\theta \in \mathfrak{h}^\ast{/\! /} W$. These objects are expected to be certain twisted forms of the Harish-Chandra system associated to $\theta$.
To explain this more precisely, let $\mathcal{W}h_{[\theta]}$ denote the category of \emph{admissible $\mathfrak{Wh}$-modules with central character $[\theta]$}, i.e., the full monoidal subcategory of $\mathcal{W}h$ consisting of objects whose (set-theoretic) support with respect to $\mathfrak Z\mathfrak g \simeq \mathbb C[\mathfrak{h}^\ast]^W$ is contained in $[\theta]$. As the fibers of $\mathfrak{h}^\ast {/\! /} W \to \mathfrak{h}^\ast {/\! /} W^\textit{aff}$ are discrete, there is a symmetric monoidal equivalence with sheaves on $\mathfrak c$ set-theoretically supported at $\theta$ (for any choice of lift $\theta$ of $[\theta]$):
\[
\mathcal{W}h_{[\theta]} \simeq \mathfrak Z\mathfrak g\mhyphen\mathrm{mod}_{\theta} \simeq QC(\mathfrak c)_{\theta}
\]
Abstractly this category is symmetric monoidally equivalent to $QC(\mathbb A^r)_{(0)}$, the subcategory of modules for a symmetric algebra generated by the augmentation.\footnote{In particular, the categories $\mathcal{W}h_{[\theta]}$ are equivalent for all values of $\theta$. This result is not immediately apparent from the definition, and somewhat surprising given how the category of character sheaves $\mathcal D(G)^G_{[\theta]}$ varies with the central character $[\theta]$.} By Koszul duality, this in turn is equivalent to $QC(\mathbb A^r[-1]) \simeq L\mhyphen\mathrm{mod}$, where $L = \Sym(\mathbb C^r[1])$. It follows that the objects $\mathcal O_{[\theta]}$ are orthogonal and almost idempotent with respect to the monoidal structure on $\mathcal{W}h$ (i.e., idempotent up to a ``scalar'' given by the dg-vector space $L$). There is also an (actual) derived idempotent $\widecheck{\mathcal O}_{[\theta]} \in QC(\mathfrak{h}^\ast{/\! /} W^\textit{aff})_{[\theta]}$ which corresponds to the augmentation module in $L\mhyphen\mathrm{mod}$, or the $\mathcal D$-module of delta functions in $\mathbb A^r$, considered as an object of $QC(\mathbb A^r)_{(0)}$.
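For instance, in rank $r=1$ (with homological conventions as in the surrounding text), Koszul duality exchanges modules over $\mathbb C[x]$ supported at the origin with modules over the exterior algebra on a single shifted generator:
\[
QC(\mathbb A^1)_{(0)} \;\simeq\; \Lambda[\epsilon]\mhyphen\mathrm{mod}, \qquad \Lambda[\epsilon] = \Sym(\mathbb C[1]) \simeq \mathrm{Ext}^\ast_{\mathbb C[x]}(\mathbb C_0,\mathbb C_0), \quad \epsilon^2 = 0.
\]
In particular $L$ is finite dimensional but not concentrated in degree zero, which is why the objects $\mathcal O_{[\theta]}$ are idempotent only up to the ``scalar'' $L$.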
It follows that the image of $\mathcal O_{[\theta]}$ (respectively, $\widecheck{\mathcal O}_{[\theta]}$) is an almost idempotent (respectively, idempotent) object in the monoidal category $\mathcal D(G)^G$. We denote these objects by $\mathfrak{E}_{[\theta]}$ (respectively, $\widecheck{\mathfrak{E}}_{[\theta]}$).
Recall \cite{Ginzburg admissible} that the category of \emph{character sheaves} (or \emph{admissible modules}) with central character $[\theta]$ is the subcategory of $\mathcal D(G)^G$ consisting of $\mathfrak{D}_G$-modules whose $\mathfrak Z\mathfrak g$-support is contained in $[\theta]$.\footnote{In this paper, we do not require character sheaves to be semisimple, or even coherent as $D$-modules.} It follows directly that the objects $\mathfrak{E}_{[\theta]}$ and $\widecheck{\mathfrak{E}}_{[\theta]}$ are examples of character sheaves with central character $[\theta]$; in fact, $\widecheck{\mathfrak{E}}_{[\theta]}$ is the unit object in the category of character sheaves.
These objects may be described more explicitly, assuming Conjecture \ref{quantum ngo induction}. Recall that we have a sequence of functors
\[
\mathcal{W}h \simeq QC(\mathfrak{h}^\ast{/\! /} W^\textit{aff}) \to QC(\mathfrak{h}^\ast)^{W^\textit{aff}} \simeq \mathcal D(H)^W
\]
Given a skyscraper sheaf $\mathcal O_{[\theta]}$ in $QC(\mathfrak{h}^\ast {/\! /} W^\textit{aff})$, let $\mathfrak {L}_{[\theta]}$ denote the corresponding object of $\mathcal D(H)^W$. Unwinding the definitions, we see that $\mathfrak {L}_{[\theta]}$ is a certain $W$-equivariant flat connection of rank $|W|$ on $H$; for example, $\mathfrak {L}_{[0]}$ is an indecomposable unipotent flat connection on $H$, where the invariant differential operators $\Sym(\mathfrak{h})$ act on a frame of sections as the module of coinvariants $\Sym(\mathfrak{h})/\Sym(\mathfrak{h})^W_+$. Similarly, the object $\widecheck{\mathfrak {L}}_{[\theta]}$ is a certain infinite rank flat connection; for example, $\widecheck{\mathfrak {L}}_{[0]}$ is an ind-unipotent flat connection, which has a frame isomorphic to $\Sym(\mathfrak{h}^\ast) = \mathbb C[\mathfrak{h}]$, where the $\Sym(\mathfrak{h})$ action is via constant coefficient differential operators. Thus we obtain the following:
\begin{proposition}
Assume Conjecture \ref{quantum ngo induction}; then we have almost idempotent objects
\[
\mathfrak{E}_{[ \theta]} \simeq \ls{W}{\Ind}(\mathfrak {L}_{[\theta]})
\]
and idempotent objects
\[
\widecheck{\mathfrak{E}}_{[\theta]} \simeq \ls{W}{\Ind}(\widecheck{\mathfrak {L}}_{[\theta]})
\]
of $\mathcal D(G)^G$, for each $[\theta] \in \mathfrak h^\ast{/\! /} W^\textit{aff}$.
\end{proposition}
\begin{remark}
The objects $\widecheck{\mathfrak{E}}_{[\theta]}$ were studied recently by Chen; see, for example, Theorem 3.8 in \cite{Chen}, where it is shown that these objects are very central in the sense of Definition \ref{very central}.
\end{remark}
Finally, recall the \emph{Harish-Chandra system}
\[
\mathfrak{M}_{0} := \mathfrak{D}_{G}/\mathfrak{D}_G\left(\mathrm{ad}(\mathfrak g) + \mathfrak Z\mathfrak g_+\right)
\]
The fundamental results of Hotta and Kashiwara~\cite{HK} identify the Harish-Chandra system with the (Grothendieck-)Springer sheaf
\[
\Ind(\mathfrak{O}_{0})\simeq \ls{W}{\Ind}(\mathbb C[W] \otimes \mathfrak{O}_0)
\]
where $\mathfrak{O}_{0}$ is the trivial rank one flat connection on $H$. Similarly, one can define $\mathfrak{M}_\theta$ for any $\theta \in \Spec(\mathfrak Z\mathfrak g)$, and there is an analogous description in terms of parabolic induction. Note that $\mathbb C[W] \otimes \mathfrak{O}_0$ is precisely the semisimplification of the $W$-equivariant flat connection $\mathfrak {L}_{[0]}$ (and there is an analogous statement for any $\theta$). Thus the Harish-Chandra system $\mathfrak{M}_{\theta}$ is the semisimplification of the almost idempotent object $\mathfrak{E}_{[\theta]}$. This justifies the name \emph{twisted Harish-Chandra system}.
\subsection{Kac-Moody Groups and Coxeter Systems}\label{KM section}
Now let us explore some other examples of our construction of central actions on convolution categories from Theorem \ref{main intro}. We will have two closely related classes of examples: one topological, arising from Kac-Moody groups, and another combinatorial, associated to Coxeter systems. These examples are related to the motivating example by taking the affine Kac-Moody group associated to ${G^{\vee}}$.
\subsubsection{Toy examples}
Before going further, we give two examples to illustrate the basic principle of Theorem \ref{main intro}: for a groupoid $\mathcal G$ acting on a space $X$ with quotient $Y$, $\mathcal G$-equivariant sheaves on $X$, i.e., sheaves on $Y$, act centrally on modules for the convolution category of sheaves on $\mathcal G$, i.e., sheaves of categories on $Y$.
\begin{example}
Let $\pi:X\to Y$ denote a map of finite sets, and $\mathcal G=X\times_Y X$. In this case the convolution algebra $(H=k[\mathcal G],\ast)$ is the algebra of $|X|$ by $|X|$ block-diagonal matrices (with blocks labeled by $Y$), which is Morita equivalent to the commutative algebra $k[Y]$. We also consider the convolution category $(\mathcal H=Vect(X\times_Y X),\ast)$. In this case the inclusion of block-scalar matrices $Vect(Y)\hookrightarrow Vect(X\times_Y X)$ identifies
$$\xymatrix{H\mhyphen\mathrm{mod}\simeq Vect(Y)\ar[rr]^-{\sim}&& \mathcal Z(Vect(X\times_Y X))}$$ with the Drinfeld center of $(Vect(X\times_Y X),\ast)$, categorifying the familiar identification of block-scalar matrices $k[Y]$ as the center of block-diagonal matrices $k[X\times_Y X]$.
\end{example}
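Decategorifying, the claim that the block-scalar matrices $k[Y]$ centralize the block-diagonal matrix algebra $k[X\times_Y X]$ can be checked numerically. The following sketch is ours, purely for illustration; the fiber sizes and helper names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fibers of pi: X -> Y with |Y| = 2, fiber sizes 2 and 3, so |X| = 5.
SIZES = [2, 3]

def random_block_diagonal(sizes):
    """A random element of k[X x_Y X]: one arbitrary square block per point of Y."""
    n = sum(sizes)
    out = np.zeros((n, n))
    i = 0
    for m in sizes:
        out[i:i+m, i:i+m] = rng.normal(size=(m, m))
        i += m
    return out

def block_scalar(scalars, sizes):
    """An element of k[Y], embedded as a block-scalar matrix."""
    return np.diag(np.concatenate([np.full(m, c) for c, m in zip(scalars, sizes)]))

z = block_scalar([2.0, -1.0], SIZES)
for _ in range(5):
    a = random_block_diagonal(SIZES)
    assert np.allclose(z @ a, a @ z)  # k[Y] commutes with all of k[X x_Y X]
```

Conversely, a generic block-diagonal matrix fails to be central, matching the statement that the center is exactly $k[Y]$.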
\begin{example} Let $G$ denote a finite group, and $X=pt\to Y=BG$, so that $G\simeq X\times_Y X$. In this case the convolution algebra $H=(\mathbb C[G],\ast)$ is the group algebra, and $H\mhyphen\mathrm{mod}=Rep(G)$ is the symmetric monoidal category of representations. The Drinfeld center of the monoidal category $(Vect(G),\ast)$ is now the braided tensor category $Vect(G/G)$, which contains $Rep(G)\simeq Vect(pt/G)$ as the tensor subcategory of equivariant vector bundles supported on the identity. The latter is in fact a Lagrangian subcategory of $Vect(G/G)$ in the sense of~\cite{DGNO}. We expect that our general construction provides (derived analogues of) Lagrangian subcategories as well. The action of $Vect(G)$ on $Vect(pt)=Vect$ induces an action of its center
$$\xymatrix{\mathcal Z(Vect(G))=Vect(G/G)\ar[rr]&&End_{Vect(G)}(Vect)\simeq Rep(G)}$$
which provides the desired left inverse.
\end{example}
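The function-level fact underlying this example, that class sums span the center of the group algebra $\mathbb C[G]$, can likewise be verified directly for a small group. The following sketch for $G=S_3$ is ours, purely illustrative.

```python
from itertools import permutations

# Elements of S_3 as tuples g with g[i] = image of i; multiplication is composition.
G = list(permutations(range(3)))

def mul(g, h):
    """(g * h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0] * 3
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

def conj_class(g):
    return frozenset(mul(mul(h, g), inv(h)) for h in G)

def convolve(a, b):
    """Multiply two group-algebra elements, encoded as dicts g -> coefficient."""
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = mul(g, h)
            out[k] = out.get(k, 0) + x * y
    return out

# Each class sum is central: it commutes with every delta function, hence with C[G].
for g0 in G:
    z = {g: 1 for g in conj_class(g0)}
    for h in G:
        assert convolve(z, {h: 1}) == convolve({h: 1}, z)
```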
\subsubsection{Kac-Moody groups}
Let $\mathbf{G}$ denote a simply-connected Kac-Moody group, with Borel subgroup $\mathbf{B}$ (or more generally parabolic subgroup $\bf P$).
The flag variety $\mathbf{G}/\mathbf{B}$ is an ind-projective ind-scheme of ind-finite type~\cite{Mathieu, Kumar}. We let $\mathcal G_{\mathbf{G},\mathbf{B}}=\mathbf{B}\backslash \mathbf{G}/\mathbf{B}$ denote the corresponding
``Hecke'' groupoid acting on $X_{\mathbf{G},\mathbf{B}} = pt/\mathbf{B}$.
In this setting, the convolution algebra $H_{\mathbf{G},\mathbf{B}}$ is given by the
equivariant homology ring $H_\ast(\quot{\mathbf{B}}{\mathbf{G}}{\mathbf{B}})$ (considered as a dg-ring).
The Kostant category $\mathcal K_{\mathbf{G},\mathbf{B}}=H_{\mathbf{G},\mathbf{B}}\mhyphen\mathrm{mod}$ has a symmetric monoidal structure arising from the ``cup coproduct'' on $H_{\mathbf{G},\mathbf{B}}$. The convolution category $\mathcal H_{\mathbf{G},\mathbf{B}}$ is the (renormalized) Iwahori-Hecke category
$\cDv_{hol}(\mathbf{B}\backslash \mathbf{G}/\mathbf{B})$ of equivariant ind-holonomic $\mathcal D$-modules (or ind-constructible sheaves) on the affine flag variety.
Theorem \ref{main intro} applies in this setting, giving the following:
\begin{theorem}\label{Kac-Moody intro}
There is a natural $E_2$ functor from the symmetric monoidal Kostant category $\mathcal K_{\mathbf{G},\mathbf{B}}$ to the center $\mathcal Z(\mathcal H_{\mathbf{G},\mathbf{B}})$ of the Iwahori-Hecke category, together with a monoidal right inverse (and likewise for any parabolic $\bf P$ of $\mathbf{G}$). Thus we have an instance of Diagram~\ref{basic diagram}:
\[
\xymatrix{
H_\ast(\quot \mathbf{B} \mathbf{G} \mathbf{B})\mhyphen\mathrm{mod}\ar[r]_{E_2}\ar[d]_-{E_\infty}& \mathcal Z(\cDv_{hol}(\quot \mathbf{B}\mathbf{G}\mathbf{B})) \ar@/_1pc/_-{E_1}[l] \ar[d]^{E_1}\\
H^\ast(pt/\mathbf{B})\mhyphen\mathrm{mod} \ar[r]_{E_1} & \cDv_{hol}(\quot \mathbf{B}\mathbf{G}\mathbf{B})
}
\]
\end{theorem}
The objects appearing in the theorem carry combinatorial realizations in terms of the Coxeter system $(\mathbf{h},\mathbf{W})$ associated to $\mathbf{G}$. Namely, the homology convolution algebra $H_{\mathbf{G},\mathbf{B}}$ is isomorphic to the Kostant-Kumar nil-Hecke algebra of the Coxeter system (see~\cite{KK,Kumar, Arabia,schubert book,ginzburg hecke}). Analogously, the Iwahori-Hecke category $\mathcal H_{\mathbf{G},\mathbf{B}}$ can be interpreted in terms of Soergel bimodules for $(\mathbf{h},\mathbf{W})$.
\begin{example}[Finite case]
Consider the case where $\mathbf{G}$ is a reductive algebraic group, so $(\mathbf{h},\mathbf{W})$ is a finite Coxeter system. In this case $H_{\mathbf{G},\mathbf{B}}$ is the finite nil-Hecke algebra, acting on $\mathbb C[\mathbf{h}]\mhyphen\mathrm{mod}$ by Demazure operators. The category of $H_{\mathbf{G},\mathbf{B}}$-modules is identified with $\mathbb C[\mathbf{h}]^{\mathbf{W}}\mhyphen\mathrm{mod}$, or in other words with sheaves on the coarse quotient $\mathbf{h}{/\! /}\mathbf{W}$ of $\mathbf{h}$ by $\mathbf{W}$. The geometric setting is a differential-graded version of the combinatorial one; forgetting about grading, we have $\mathbb C[\mathbf{h}] = H^\ast_{\mathbf{B}}(pt)$ and $\mathbb C[\mathbf{h}]^\mathbf{W} = H^\ast_{\mathbf{G}}(pt)$. The result of Theorem \ref{main intro} is simply the linearity of the finite Hecke category $\mathcal H=\cDv_{hol}(B\backslash G/B)$ over the $G$-equivariant cohomology ring.
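For orientation we recall the standard formula for these operators (see~\cite{KK}): the Demazure operator attached to a simple reflection $s_i$ with simple root $\alpha_i$ acts on $f\in\mathbb C[\mathbf{h}]$ by
$$\partial_i(f)=\frac{f-s_i(f)}{\alpha_i},$$
which is again a polynomial since the numerator vanishes on the reflection hyperplane $\alpha_i=0$.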
\end{example}
\begin{example}[$\mathcal D$-modules on a reductive group]
Our main application, the quantum Ng\^o action, constructs a central action on the monoidal category $\mathcal D(G)$ (or its Morita equivalent realization, $\mathcal{HC}$) via its Langlands dual realization on the loop Grassmannian. We can also apply the construction verbatim to $\mathcal D(G)$, taking $\mathcal G=G$ as a groupoid acting on $X=pt$ (a variant of the previous example with the equivariant flag variety as a groupoid on $pt/B$). In this case we find a central action of the Kostant category $\mathcal K=H_\ast(G)\mhyphen\mathrm{mod}$ on $\mathcal D(G)$, which again is a Koszul dual form of linearity over the $G$-equivariant cohomology ring. This form of the Kostant category is manifestly different from (and less interesting than) the Ng\^o action of Whittaker $\mathcal D$-modules; this example clearly demonstrates that our central actions depend on the presentation as a convolution category (rather than being intrinsic invariants of the monoidal category).
\end{example}
\subsubsection{Coxeter groups}
More generally Theorem~\ref{main intro} has a realization in the setting of a Coxeter group $\mathbf{W}$ with reflection representation $\mathbf{h}$ (for example, $\mathbf{W}$ could be the Weyl group of $\mathbf{G}$ and $\mathbf{h}$ the Cartan). For $w\in \mathbf{W}$ we let $\Gamma_w\subset \mathbf{h}\times \mathbf{h}$ denote the graph of the corresponding reflection. Let $$\Gamma_\mathbf{W}=\coprod_{w\in \mathbf{W}} \Gamma_w.$$ Then $\Gamma_\mathbf{W}$ is an ind-proper groupoid acting on the scheme $\mathbf{h}$. This is the equivalence relation underlying the action of $\mathbf{W}$ on $\mathbf{h}$ -- i.e., $\Gamma_\mathbf{W}$ is the {\em adjacency groupoid} of $\mathbf{W}\circlearrowright \mathbf{h}$ in the language of~\cite{lonergan}. We may still consider the (non-representable) quotient $\mathbf{h}/\Gamma_\mathbf{W}$, i.e. the coarse (set-theoretic rather than stack theoretic) quotient, which we still denote $\mathbf{h}{/\! /}\mathbf{W}$.
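As a minimal illustration, take $\mathbf{W}=\{1,s\}\simeq\mathbb Z/2$ acting on $\mathbf{h}=\mathbb A^1$ by $s\cdot x=-x$. Then $\Gamma_\mathbf{W}=\Delta\sqcup\Gamma_s\subset\mathbb A^1\times\mathbb A^1$ is the union of the diagonal and the antidiagonal, and the coarse quotient is $\mathbf{h}{/\! /}\mathbf{W}=\Spec\mathbb C[x^2]\simeq\mathbb A^1$, in contrast with the stack quotient $\mathbb A^1/(\mathbb Z/2)$, which remembers the stabilizer at the origin.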
Let $\omega(\Gamma_\mathbf{W})$ denote the convolution algebra of distributions, i.e. global sections of the Serre-dualizing complex on the singular ind-variety $\Gamma_{\mathbf{W}}$. On the other hand,
it follows from the results of Lonergan~\cite{lonergan,lonergan2} that the algebra $\omega(\Gamma_\mathbf{W})$ is isomorphic to the nil-Hecke algebra
$$\omega(\Gamma_\mathbf{W})\simeq H_{\mathbf{h},\mathbf{W}}.$$
It follows from ind-proper descent~\cite{GR} that the category $\mathcal K_{\mathbf{h},\mathbf{W}} = \omega(\Gamma_\mathbf{W})\mhyphen\mathrm{mod}$ is equivalent to ind-coherent sheaves on $\mathbf{h}{/\! /}\mathbf{W}$, so that we have an equivalence
$$H_{\mathbf{h},\mathbf{W}}\mhyphen\mathrm{mod}\simeq QC^!(\mathbf{h}{/\! /}\mathbf{W}).$$
For the Hecke category $\mathcal H_{\mathbf{h},\mathbf{W}}$ we may take ind-coherent sheaves $QC^!(\Gamma_\mathbf{W})$ on the adjacency groupoid under convolution. Once again, Theorem \ref{main intro} applies in this setting, giving a diagram of the form Diagram~\ref{basic diagram}.
\begin{theorem}\label{Coxeter intro}
There is a natural symmetric monoidal structure on modules $\mathcal K_{\mathbf{h},\mathbf{W}}=H_{\mathbf{h},\mathbf{W}}\mhyphen\mathrm{mod}$ for the nil-Hecke algebra compatible with the forgetful functor to $\mathbb C[\mathbf{h}]\mhyphen\mathrm{mod}$, and the action $\mathbb C[\mathbf{h}]\to \mathcal H_{\mathbf{h},\mathbf{W}}$ on the Coxeter Hecke category lifts to a central action $\mathfrak{z}:H_{\mathbf{h},\mathbf{W}}\mhyphen\mathrm{mod}\to \mathcal Z(\mathcal H_{\mathbf{h},\mathbf{W}})$ with a monoidal right inverse $\mathfrak a$. Thus we have an instance of Diagram~\ref{basic diagram}:
\[
\xymatrix{
H_{\mathbf{h},\mathbf{W}}\mhyphen\mathrm{mod}\ar[r]_{E_2}\ar[d]_-{E_\infty}& \mathcal Z(QC^!(\Gamma_\mathbf{W})) \ar@/_1pc/_-{E_1}[l] \ar[d]^{E_1}\\
\mathbb C[\mathbf{h}]\mhyphen\mathrm{mod} \ar[r]_{E_1} & QC^!(\Gamma_\mathbf{W})
}
\]
\end{theorem}
\begin{remark}\label{dg vs graded remark}
Making the connection between the Kac-Moody story and the Coxeter story more precise requires a number of modifications. The first issue arises from the fact that the convolution algebra $H_{\mathbf{G},\mathbf{B}}$ in the Kac-Moody set-up is really a dg-algebra (and the corresponding $\mathcal K_{\mathbf{G},\mathbf{B}}$ is given by dg-modules), whereas in the Coxeter set-up, the nil-Hecke algebra $H_{\mathbf{h},\mathbf{W}}$ is considered to be an ordinary algebra (and $\mathcal K_{\mathbf{h},\mathbf{W}}$ is its derived category of modules). This issue is fixed by considering an external grading on the convolution algebra and (dg-)modules with a compatible external grading. The external grading allows for a ``shearing'' equivalence between the two $\mathcal K$-categories. On the level of convolution categories, adding the external grading corresponds to considering a mixed version of the Iwahori-Hecke category; this mixed category is equivalent to chain complexes of graded Soergel bimodules for $(\mathbf{h},\mathbf{W})$, which can be thought of as a certain modification of the combinatorial Hecke category $\mathcal H_{\mathbf{h},\mathbf{W}}$. Unfortunately, this mixed set-up does not seem to fall so neatly into the set-up of Theorem \ref{main intro}.
\end{remark}
\begin{example}[Spherical affine case]
Let us explain how to connect the examples discussed in this section with our main motivation. We take for $\mathbf{G}$ the affine Kac-Moody group associated to a reductive group ${G^{\vee}}$, i.e., the extended form of the loop group ${LG^\vee}$, and for the parabolic $\mathbf P$ the maximal parabolic corresponding to the complement of the ``extra'' node in the extended Dynkin diagram, i.e., the extended form of ${LG^\vee_+}$. In particular $\mathbf{W}=W^{\textit{aff}}$ is the affine Weyl group. In this way, the Iwahori-Hecke category $\mathcal H_{\mathbf{G},\mathbf P}$ is replaced by the spherical Hecke category (the renormalized Satake category). Similarly, the convolution algebra $H_{\mathbf{G},\mathbf{B}}$ is replaced by its spherical subalgebra. In fact, there is a Morita equivalence between the nil-Hecke algebra and its spherical subalgebra~\cite{webster} so the $\mathcal K$ categories are the same in the Iwahori and spherical settings (see also~\cite{ginzburg whittaker} which constructs the equivalence from the Whittaker $\mathcal D$-module perspective).
The work of Lonergan and Ginzburg~\cite{lonergan,ginzburg whittaker} identifies the Kostant category $\mathcal K$ with the full subcategory of $W_{\textit{aff}}$-equivariant quasicoherent sheaves on $\mathfrak h^\ast$, on which the derived inertia action is trivial, or equivalently which descend to the categorical quotient $\mathfrak h^\ast{/\! /} W$ by the finite Weyl group (or by every finite parabolic subgroup of $W_{\textit{aff}}$).
\end{example}
\begin{remark}
The Morita equivalence between the spherical and full nil-DAHA can be understood from the Whittaker perspective as follows. Recall that there is a categorical Morita equivalence between the monoidal category $\mathcal{HC}$ and the (universal, monodromic) Hecke category
\[
\widehat{\mathcal H}_G = \mathcal D(\quot{\overline{N}}{G}{\overline{N}})^{H\times H, wk}
\]
consisting of weakly $\overline{B}$, strongly $\overline{N}$ bi-equivariant $\mathcal D$-modules (where $\overline{B}$ is the opposite Borel to $B$).
Under this Morita equivalence, the action of $\mathcal{HC}$ on the category $\mathcal Z$ via Whittaker modules corresponds to the action of $\widehat{\mathcal H}_G$ on
\[
\mathcal D( \quot{\overline{N}}{G}{_\psi N})^{H,wk} \simeq \Sym(\mathfrak{h})\mhyphen\mathrm{mod} = QC(\mathfrak{h}^\ast).
\]
In particular, there is a monoidal, monadic forgetful functor
\[
\mathcal{W}h \simeq \End_{\mathcal{HC}}(\mathcal Z) \simeq \End_{\widehat{\mathcal H}_G}(\Sym(\mathfrak{h})\mhyphen\mathrm{mod}) \to \Sym(\mathfrak{h})\mhyphen\mathrm{mod}
\]
The $\Sym(\mathfrak{h})$-ring corresponding to the monad is precisely the nil-DAHA (after unwinding the definitions, this is computed in \cite{ginzburg whittaker}).
\end{remark}
\subsection{Further directions}\label{further section}
In this section we briefly mention some applications of the quantum Ng\^o action that we intend to pursue in future work.
\subsubsection{Langlands parameters}
The action of $\mathcal{W}h\simeq QC(\mathfrak h^\ast{/\! /} W_{\textit{aff}})$ provides a notion of Langlands parameters for categorical representations of $G$. Indeed, we may identify the quotient complex-analytically $$\mathfrak h^\ast{/\! /} W_{\textit{aff}} \sim H^\vee{/\! /} W$$ with the affinization of the (Betti) stack $${G^{\vee}}/{G^{\vee}}=\mathcal Loc_{G^{\vee}}(D^\times)$$ of ${G^{\vee}}$-local systems on the punctured disc. This identification should be closely related to the (de Rham) local geometric Langlands program~\cite{frenkel dennis, quantum langlands}.
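A rank one illustration (with a standard normalization of the lattice, which we do not fix here): for $\mathfrak h^\ast\simeq\mathbb C$ with $W_{\textit{aff}}$ generated by $\lambda\mapsto-\lambda$ and $\lambda\mapsto\lambda+1$, the exponential $\lambda\mapsto e^{2\pi i\lambda}$ identifies $\mathfrak h^\ast{/\! /} W_{\textit{aff}}$ complex-analytically with $\mathbb C^\times/(z\sim z^{-1})\simeq H^\vee{/\! /} W$.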
Indeed it is expected (see~\cite[Example 1.23.1]{raskin W}) that
$$\mathcal{W}h(LG)\simeq QC(Conn_{G^{\vee}}(D^\times)):$$ the Hecke category of bi-Whittaker $\mathcal D$-modules on the loop group $LG$ (i.e. the ``affine $\mathcal W$-category'') is symmetric monoidal, and equivalent to quasicoherent sheaves on the stack of ${G^{\vee}}$-flat connections on the punctured disc. Thus
passing to ``Whittaker vectors'' on categorical representations of $LG$ produces quasi-coherent sheaves of categories on the stack of ${G^{\vee}}$-connections on $D^\times$ -- the geometric version of local Langlands parameters~\cite{quantum langlands}. This conjecture is a categorical analog of the Feigin-Frenkel description~\cite{FF} of the affine $\mathcal W$-algebra, as our result is a categorical analog of the Kostant description of the finite $\mathcal W$-algebra. Our proof of commutativity of $\mathcal{W}h$ does not readily generalize to the affine setting, but we hope a deeper and cleaner understanding of $\mathcal{W}h$ will prove useful in this regard.
\subsubsection{Eigencategories for $\mathcal{W}h$ and refined central character}
The Harish-Chandra system on $G/G$ (as in~\cite{HK}) is a reductive group ancestor of Beilinson-Drinfeld's quantized Hitchin system on the stack $Bun_G$ of $G$-bundles on an algebraic curve, and Lusztig's character sheaves~\cite{lusztig character, laumon character} are likewise the ancestors of automorphic sheaves in the geometric Langlands correspondence. Arinkin~\cite{arinkin thesis, arinkin paper} explained that the Hecke functors on $\mathcal D(Bun_G)$ in the geometric Langlands correspondence appear naturally as an aspect of the quantized Hitchin system -- in Arinkin's paradigm, a quantization of completely integrable systems entails a deformation of symmetric monoidal categories, in this case deforming the translation symmetries of the classical system to the action of Hecke functors. We expect the action of $\mathcal{W}h$ on $\mathcal D(G/G)$, deforming the Ng\^o integration of the Hamiltonian flows, plays an analogous role for the Harish-Chandra system as the Hecke functors for the quantized Hitchin system. In particular character sheaves appear as $\mathcal{W}h$-eigensheaves just as automorphic sheaves appear as Hecke eigensheaves. In particular, the action of $\mathcal{W}h$ on $\mathcal D(G/G)$ provides a refinement of the theory of central characters of character sheaves, as explained below.
Recall that the symmetric monoidal category $\mathcal{W}h = QC(\mathfrak{h}^\ast {/\! /} W^\textit{aff})$ acts centrally on any $G$-category. Given a point $[\lambda] \in \mathfrak{h}^\ast{/\! /} W^\textit{aff}$, we have a corresponding symmetric monoidal functor $\mathcal{W}h \to \mathbf{Vect}$, i.e. a $\mathcal{W}h$-module category $\mathbf{Vect}_{[\lambda]}$. For any $\mathcal D(G)$-module category or $\mathcal{HC}$-module category $\mathcal C$, we regard $\mathcal C$ as a $\mathcal{W}h$-module category via the Ng\^o functor and consider the categorical (co)invariants
\[
\mathcal C^{\mathcal{W}h,[\lambda]} = \Hom_{\mathcal{W}h}(\mathbf{Vect}_{[\lambda]}, \mathcal C) \simeq \mathcal C \otimes_{\mathcal{W}h} \mathbf{Vect}_{[\lambda]}
\]
For example, we have a braided monoidal category $\mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)^{\mathcal{W}h, [\lambda]}$, a refined (or strict) version of the category of character sheaves with central character $[\lambda]$ (the usual category $\mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)^{\widehat{[\lambda]}}$ of character sheaves with a fixed central character corresponds to taking the completion at $[\lambda] \in \mathfrak{h}^\ast{/\! /} W^\textit{aff}$ rather than the fiber). One expects that the category $\mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)^{\mathcal{W}h,[\lambda]}$ is ``more semisimple'' than $\mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)^{\widehat{[\lambda]}}$. We hope that these constructions will shed some light on the truncated Hecke and character sheaf categories defined by Lusztig \cite{lusztig cells, lusztig convolution} (see also \cite{BFO}).
\subsubsection{Character field theory and cohomology of character varieties}\label{TFT section}
We showed in~\cite{character2} (extending~\cite{character}) that the monoidal category $\mathcal{HC}$ controls the Borel-Moore homology of character varieties of surfaces, via the mechanism of a 3d topological field theory $\mathcal X_G$, the {\em character field theory}.
Recall that given a topological surface $S$ the
character variety (or Betti space) $\mathcal Loc_G(S)$ is the derived stack of $G$-local systems on
$S$, $$\mathcal Loc_G(S)\sim \{\pi_1(S)\to G\}/G.$$
The character theory is defined by prescribing that quantum Hamiltonian $G$-spaces ($\mathcal{HC}$-modules) define boundary conditions for $\mathcal X_G$; it ``integrates'' them over surfaces to obtain the homology of character varieties:
\begin{theorem}\cite{character2} The assignment $$\mathcal X_G(pt)=\mathcal{HC}\mhyphen\mathcal{M}od\simeq \mathcal D(G)\mhyphen\mathcal{M}od$$ satisfies the conditions of the Cobordism Hypothesis~\cite{jacob TFT} to define an oriented topological field theory, attaching a dg vector space to a closed surface. Moreover we have a canonical equivalence $$\mathcal X_G(S)\simeq H_*^{BM}(\mathcal Loc_G(S))$$ with the Borel-Moore homology on the character variety, for $S$ an oriented closed surface.
\end{theorem}
We also prove a ``Hodge filtered'' version of the theorem, which in particular defines a family $\mathcal X_{\hbar,G}$ of topological field theories out of the $\hbar$-family of monoidal categories $\mathcal{HC}_\hbar$.
The quantum Ng\^o action of $\mathcal{W}h_\hbar$ on $\mathcal{HC}_\hbar$ makes the entire character theory $\mathcal X_{\hbar,G}$ linear over $\mathcal{W}h_\hbar$, i.e. (for $\hbar\neq 0$) a family of topological field theories over $\mathfrak h^\ast{/\! /} W_{\textit{aff}}$. In particular the Borel-Moore homology of $\mathcal Loc_G(S)$ sheafifies over $\mathfrak h^\ast{/\! /} W_{\textit{aff}}$, with fibers defining new invariants, the {\em eigenhomology} of the character variety. We expect eigenhomologies of character varieties to be more accessible to combinatorial description.
The work of Hausel, Rodriguez-Villegas and Letellier \cite{H,HRV,HLRV}
has uncovered remarkable combinatorial patterns in the
cohomology of the character varieties, leading to a series of striking
conjectures. A central technique is counting points over finite fields, i.e., points of character varieties $\mathcal Loc_{G_q}(S)$ of the finite Lie groups
$G_q=G(\mathbb F_q)$. These counts are captured by the values of a 2d TFT ($G_q$ Yang-Mills theory), which assigns to a point the category $Rep(G_q)$ of representations of the finite group. Lusztig's Jordan decomposition of characters breaks up this category, and hence the counts on any surface, into blocks labeled by semisimple conjugacy classes in the dual group (i.e., informally speaking, over $H^\vee{/\! /} W$). The 3d character theory accesses the homology of character varieties (as opposed to the point count or $E$-polynomial) directly, and the decomposition over $\mathcal{W}h$ (i.e. over $\mathfrak h^\ast{/\! /} W^{\textit{aff}}\sim H^\vee{/\! /} W$) plays the role of the Jordan decomposition. This decomposition, which we will explore in a future paper, provided the original motivation for this work.
\subsection{Supersymmetric gauge theory}\label{SUSY}
We briefly indicate the interpretation of our constructions in the context of supersymmetric gauge theory, following discussions with Andy Neitzke, Tudor Dimofte and Justin Hilburn. See~\cite{BZ talk} for a slightly more leisurely discussion. Details will appear elsewhere.
To any 3d $\mathcal N=4$ theory $\mathcal Z$ is associated a holomorphic symplectic variety $\mathfrak{M}_\mathcal Z$, its {\em Coulomb branch}, together with a deformation quantization $\mathbb C_\hbar[\mathfrak{M}_\mathcal Z]$ of its ring of functions, obtained as the algebra of supersymmetric local operators in the theory in $\Omega$-background~\cite{NW} (with quantization parameter $\hbar=\epsilon\in H^\ast(BS^1)$). See e.g.~\cite{BDG} and references therein. If $\mathcal Z$ is a 3d {\em gauge} theory, one can define (using a Lagrangian description of $\mathcal Z$) an integrable system $$\mathfrak{M}_\mathcal Z\to \mathfrak c$$ with base the adjoint quotient of the Langlands dual of the gauge group.
The identification by~\cite{BFM} of the groupscheme $J$ of regular centralizers in terms of the equivariant homology of the Langlands dual Grassmannian is now understood (thanks to~\cite{teleman, BraFinkNa}) as describing the Coulomb branch of pure 3d $\mathcal N=4$ gauge theory, while its quantization using ${\C^\times}$-equivariant homology is an instance of the quantization in $\Omega$-background.
However the {\em abelian group} structure of $J$ (and symmetric monoidal structure of its quantization), as well as the classical and quantum Ng\^o actions, are
best understood using 4d gauge theory -- specifically, in the spirit of Kapustin-Witten~\cite{KW}, as aspects of
4d $\mathcal N=4$ super-Yang-Mills (in the GL twist at $\Psi=\infty$). Indeed the base $\mathfrak c$ arises as the Coulomb branch of 4d SYM, while the characteristic polynomial map (in fact, a shifted integrable system)
$$\mathfrak g^\ast/G\to\mathfrak c$$ arises from identifying the residual gauge symmetry of the theory on its Coulomb branch.
The category $QC(\mathfrak g^\ast/G)$ is the monoidal (naturally $E_3$) category of (Wilson) line operators in the theory, and its deformation $\mathcal{HC}_\hbar$ is the monoidal category of line operators in the 4d $\Omega$-background (with $\epsilon_1=\hbar,\epsilon_2=0$).
The derived geometric Satake theorem of~\cite{BezFink} is thus interpreted as implementing S-duality for line operators, identifying the Wilson lines with 't Hooft lines (Hecke modifications).
The Ng\^o map and its quantization are most naturally interpreted as providing an integration of the shifted integrable system $\mathfrak g^\ast/G\to \mathfrak c$ (Ng\^o's ``mold'') and its quantization. Rather than spell this structure out, we mention one of its consequences in terms of the familiar geometry of 3d Coulomb branches. The Ng\^o action provides symmetries of arbitrary BPS boundary conditions for the 4d $\mathcal N=4$ theory and of their Coulomb branches (which produce holomorphic Hamiltonian $G$-spaces). In particular one can pair two such boundary conditions, reducing the 4d theory on an interval to produce a 3d $\mathcal N=4$ theory:
\begin{claim}
Let $\mathcal Z$ denote any 3d $\mathcal N=4$ theory obtained by reduction of 4d $\mathcal N=4$ on an interval. Then the Coulomb branch $\mathfrak{M}_\mathcal Z$ carries an integrable system $$\mathfrak{M}_\mathcal Z\to\mathfrak c$$ which integrates to an action of the symplectic groupoid $J\to \mathfrak c$. Likewise the $\Omega$-deformed algebra $\mathbb C_\hbar[\mathfrak{M}_\mathcal Z]$ carries a quantum integrable system $\mathfrak Z_\hbar\mathfrak g\to \mathbb C_\hbar[\mathfrak{M}_\mathcal Z]$ which integrates to an action of $\mathcal{W}h_\hbar$. In particular the category of modules for the quantized Coulomb branch sheafifies over $\mathfrak h^\ast{/\! /} W^{\textit{aff}}$.
\end{claim}
The claim is the physical counterpart to Proposition~\ref{classical integration} and Corollary~\ref{quantum ham corollary} --- (classical and quantum)
Hamiltonian reductions of $G$-spaces correspond to reductions of 4d $\mathcal N=4$ along different pairs of boundary conditions. In particular {\em Whittaker} reduction, or restriction to the Kostant slice, corresponds to pairing with a Neumann boundary condition, i.e., {\em gauging} 3d $\mathcal N=4$ theories with global symmetry (with gauge group the compact form of
the dual group ${G^{\vee}}$) -- see~\cite{BDGH} for a closely related discussion. Thus the class of 3d $\mathcal N=4$ theories obtained this way includes in particular all 3d $\mathcal N=4$ gauge theories. However since the Kostant slice is contained in the regular locus, such theories don't probe the irregular locus of $\mathfrak g^\ast$, and one doesn't need the Ng\^o construction to see the action of $J$, which follows immediately from the structure of Hamiltonian $G$-space (or Langlands dually from the Braverman-Finkelberg-Nakajima construction of the Coulomb branch~\cite{BraFinkNa}).
\subsection{Acknowledgments}
This project grew out of a joint project with David Nadler (parts of which appeared as~\cite{character2} and~\cite{hendrik}), and we would like to express our deep gratitude for his essential contributions. In particular the idea to quantize the commutative group-scheme $J$ of regular centralizers and the Ng\^o correspondence to a central action of a symmetric monoidal category is due to him.
We are greatly indebted to Dario Beraldo and Sam Raskin for their help with the formalism of renormalized D-modules.
We would also like to thank Constantin Teleman for generously sharing his understanding of the relations between categorical representation theory, gauge theory and $J$, Simon Riche for discussion of mixed geometric Satake, Geoffroy Horel for his assistance with formality of Hopf algebras, Marco Gualtieri and James Pascaleff for sharing their ideas on Fukaya categories on symplectic groupoids, and Dima Arinkin, Dennis Gaitsgory, Victor Ginzburg, Gus Lonergan, and Ben Webster for their interest and useful discussions.
DBZ would like to acknowledge the National Science Foundation for its support through individual grants DMS-1103525 and DMS-1705110. We would also like to acknowledge that part of the work was carried out at MSRI as part of the program on Geometric Representation Theory.
\section{Sheaf Theory: Ind-holonomic $D$-modules}\label{sheaf theory}
\subsection{DG categories}\label{dgcat}
We refer the reader to~\cite[I.1.5-8]{GR} as well as~\cite{BFN,BGT} for summaries of the basic properties of stable $\infty$-categories following~\cite{HA}.
We now summarize the main points we will need.
Recall~\cite{HTT,HA} that $\Pr^L$ denotes the symmetric monoidal $\infty$-category of presentable $\infty$-categories with continuous (colimit preserving) functors, i.e., (by the adjoint functor theorem) functors which are left adjoints. Further $St\subset \Pr^L$ denotes the symmetric monoidal $\infty$-category of stable presentable $\infty$-categories.
We will denote by $\mathbf{DGCat}_k$ the symmetric monoidal $\infty$-category of {\em cocomplete dg categories over $k$}, i.e., stable presentable $k$-linear $\infty$-categories. In other words $\mathbf{DGCat}_k$ consists of module categories for $k\mhyphen\mathrm{mod}$ in $St$.
We are mostly interested in the subcategory $\mathbf{DGCat}_k^c$ of compactly-generated dg categories with proper functors, i.e., continuous functors preserving compact objects, or equivalently functors that admit continuous right adjoints. The functors of taking compact objects and passing to Ind-categories define inverse symmetric monoidal equivalences of $\mathbf{DGCat}_k^c$
with the symmetric monoidal $\infty$-category $\mathbf{DGCat}_k^{sm}$ of small, idempotent-complete dg categories with exact functors. By~\cite[Corollary 4.25]{BGT}, $\mathbf{DGCat}_k^c$ (or equivalently $\mathbf{DGCat}_k^{sm}$) is presentable.
\subsection{Sheaf Theory Formalism.}
We will study monoidal properties of categories of sheaves on stacks. The geometric spaces that appear are ind-algebraic stacks and groupoids (Section~\ref{context}). We require a theory of sheaves that attaches to a stack $X$ a presentable DG category $Shv(X)$ with continuous pushforward and pullback functors $p_*,p^!$ for maps $p:X\to Y$ of ind-finite type, satisfying base change and an adjunction $(p_*,p^!)$ in the case that $p$ is ind-proper.
Two important examples of such a theory of sheaves, developed in~\cite{GR}, are the theory of ind-coherent sheaves $IndCoh(X)$ and the theory of $\mathcal D$-modules $\mathcal D(X)$. Their properties are summarized in the following:
\medskip
\begin{theorem}~\cite[Theorem III.3.5.4.3, III.3.6.3]{GR} \label{GR sheaf theory}
There is a uniquely defined right-lax symmetric monoidal functor $IndCoh$ from the $(\infty,2)$-category whose objects are {\em laft} prestacks, morphisms are correspondences with vertical arrow ind-inf-schematic, and 2-morphisms are ind-proper and ind-inf-schematic, to the $(\infty,2)$ category of DG categories with continuous morphisms.
\end{theorem}
\medskip
The theorem encodes a tremendous amount of structure. Let us highlight some salient features useful in practice.
The theorem assigns a symmetric monoidal dg category $IndCoh(X)$ to any reasonable (locally almost of finite type) stack. The symmetric monoidal structure, the $!$-tensor product, is induced by $!$-pullback along diagonal maps. For an arbitrary morphism $p:X\to Y$ there is a continuous symmetric monoidal pullback functor $p^!:IndCoh(Y)\to IndCoh(X)$, while for $p$ schematic or ind-schematic there is a continuous pushforward $p_*:IndCoh(X)\to IndCoh(Y)$, which satisfies base change with respect to $!$-pullbacks. Moreover for $p$ ind-proper, $(p_*,p^!)$ form an adjoint pair. Furthermore, the formalism of {\em inf-schemes} greatly extends the validity of the construction. In particular the same formal properties hold for the theory of $\mathcal D$-modules, defined by the assignment $X\mapsto \mathcal D(X)=IndCoh(X_{dR})$, ind-coherent sheaves on the de Rham space of $X$.
For our applications we require a minor variation, the theory of ind-holonomic $\mathcal D$-modules $\cDv_{hol}(X)$, the main instance of which is the renormalized Satake category $\cDv_{hol}(\underline{\mathcal{G}r}^\vee)$ studied in~\cite{AG} (and, implicitly,~\cite{BezFink}). We will explain the appropriate modifications of the formalism of~\cite{GR} needed to establish the minimal functoriality of ind-holonomic $\mathcal D$-modules we will require.
\subsubsection{Geometric context}\label{context}
We adopt the following geometric conventions: all schemes will be of almost finite type, and all algebraic stacks will be {\em laft} QCA stacks, as studied in particular in~\cite{finiteness}. In other words, an algebraic stack $X$ is a prestack whose diagonal is affine and which admits a smooth and surjective map from an affine scheme of almost finite type.
By an {\em ind-algebraic stack} we refer to a prestack $X$ which is equivalent to a filtered colimit $X=\lim_{\rightarrow} X_i$ of algebraic stacks under closed embeddings.
In our applications $X$ will be realized as the quotient of an ind-scheme of ind-finite type by an affine algebraic group. The main example of interest is the equivariant affine Grassmannian $$X=\underline{\mathcal{G}r}^\vee=G(\mathcal O)\backslash G(\mathcal K)/G(\mathcal O)$$ of a reductive group $G$.
\subsection{Motivating Ind-Holonomic $\mathcal D$-modules}\label{d-modules}
First recall (see e.g.~\cite{finiteness}) that for a scheme of finite type we have an equivalence $\mathcal D(X)\simeq \Ind \mathcal D_{coh}(X)$, and that we have a full stable subcategory $\mathcal D_{coh,hol}\subset \mathcal D_{coh}(X)$. Thus we have a fully faithful embedding $$\cDv_{hol}(X):=\Ind \mathcal D_{coh,hol}(X)\subset \mathcal D(X)$$ of ind-holonomic $\mathcal D$-modules into all $\mathcal D$-modules. Holonomic $\mathcal D$-modules are preserved by $!$-pullback and $*$-pushforward for finite type morphisms, and carry a symmetric monoidal structure through $!$-tensor product for which $!$-pullback is naturally symmetric monoidal.
This picture persists for $X$ an ind-scheme of ind-finite type $X=\lim_{\rightarrow} X_i$, for example the affine Grassmannian $Gr=G(\mathcal K)/G(\mathcal O)$. The $(i_*,i^!)$ adjunction for closed embeddings provides the alternative descriptions $$\mathcal D(X)\simeq \lim_{\leftarrow,(-)^!} \mathcal D(X_i)\simeq \lim_{\rightarrow,(-)_*} \mathcal D(X_i).$$ As a result (by a general lemma of~\cite{DrG2}) $\mathcal D(X)$ is compactly generated by coherent $\mathcal D$-modules, which by definition are the pushforwards of coherent $\mathcal D$-modules on the finite type closed subschemes $X_i$, and include the similarly defined holonomic $\mathcal D$-modules. Note that with this definition the pullback of a holonomic $\mathcal D$-module by an ind-finite type morphism (for example, the dualizing complex of an ind-scheme) is ind-holonomic but not necessarily holonomic (i.e. compact).
For $X$ an algebraic stack, the situation (as studied in detail in~\cite{finiteness}) changes: coherent (and in particular holonomic) $\mathcal D$-modules, defined by descent using a smooth atlas, are no longer compact in general. The category $\mathcal D(X)$ is compactly generated by {\em safe} objects, which are coherent objects satisfying a restriction on the action of stabilizers (in the case of quotient stacks). One can thus measure the lack of safety of $X$ by the difference between $\mathcal D(X)$ and the category $\breve{\cD}(X):=\Ind \mathcal D_{coh}(X)$ of {\em ind-coherent} or {\em renormalized} $\mathcal D$-modules. This is analogous to the difference between quasicoherent and ind-coherent sheaves on a derived stack measuring its singularities, with safe (respectively, coherent) $\mathcal D$-modules taking on the role of perfect (respectively, coherent) complexes of $\mathcal O$-modules.
\begin{example}\label{example-Dhol on classifying}
Suppose $X=pt/G$ is the classifying stack of a reductive group. Let $\Lambda=C_*G\simeq \mathbb C[\mathfrak g^*[-1]]^G$ and $S=C^*X\simeq \mathbb C[\mathfrak g[2]]^G$ be the corresponding Koszul dual exterior and symmetric algebras. Then $$\mathcal D(X)\simeq \Lambda\mhyphen\mathrm{mod}\simeq QC(\mathfrak g[2]{/\! /} G)_0$$ is the completion of sheaves on the graded version of the adjoint quotient $\mathfrak g{/\! /} G\simeq \mathfrak{h}{/\! /} W$ at the origin. On the other hand, $$\cDv_{hol}(X)=\breve{\cD}(X)\simeq \Ind(Coh \Lambda)\simeq S\mhyphen\mathrm{mod}\simeq QC(\mathfrak g[2]{/\! /} G)$$ is the ``anticompleted'' version of the same category.
This can also be described in terms of the corresponding homotopy type $X_{top}$ (as a constant prestack) and $\mathfrak{X}=\Spec C^*(X)$ the corresponding coaffine stack. We then have equivalences $$\mathcal D(X)\simeq QC(X_{top})\simeq QC(\mathfrak{X}).$$
On the other hand we have the following description of renormalized sheaves:
$$\cDv_{hol}(X) \simeq C^*(X)\mhyphen\mathrm{mod}.$$
In particular $\mathcal D(X)$ is the completion of $\cDv_{hol}(X)$.
\end{example}
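The following special case simply instantiates the formulas of the preceding example, and may help orient the reader.
\begin{example}[$G=\mathbb{G}_m$]
For $G=\mathbb{G}_m$ we have $\Lambda=C_*\mathbb{G}_m\simeq \mathbb C[\epsilon]$, an exterior algebra on a generator $\epsilon$ of degree $-1$, and $S=C^*(pt/\mathbb{G}_m)\simeq \mathbb C[u]$ with $u$ of degree $2$. Since the adjoint action is trivial, $\mathfrak g[2]{/\! /} G=\mathbb A^1[2]$, and the two categories become
$$\mathcal D(pt/\mathbb{G}_m)\simeq \mathbb C[\epsilon]\mhyphen\mathrm{mod}\simeq QC(\mathbb A^1[2])_0,$$
the completion of graded $\mathbb C[u]$-modules at the origin, while
$$\cDv_{hol}(pt/\mathbb{G}_m)\simeq \mathbb C[u]\mhyphen\mathrm{mod}\simeq QC(\mathbb A^1[2]).$$
\end{example}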
We will be interested in a combined setting of ind-algebraic stacks. In this setting the category $\cDv_{hol}(X)$ (defined formally in the next section) is identified with the Ind-category of (coherent) holonomic $\mathcal D$-modules, which are pushforwards of holonomic $\mathcal D$-modules on algebraic substacks. Thus ind-holonomic $\mathcal D$-modules form a full subcategory of {\em ind-coherent} (or renormalized) $\mathcal D$-modules
$\cDv_{hol}(X) \subset \breve{\cD}(X) = \Ind \mathcal D_{coh}(X).$
\begin{example}
Our main motivating example is the equivariant affine Grassmannian $X=\underline{\mathcal{G}r}^\vee$. The {\em renormalized Satake category} $\cDv_{hol}(\underline{\mathcal{G}r}^\vee)$ of~\cite{AG} is a variant of the usual Satake category $\mathcal D(\underline{\mathcal{G}r}^\vee)$
which appears (implicitly) in the derived Satake correspondence of~\cite{BezFink}. It can be defined as the
ind-category $\Ind(Shv_{lc}(\underline{\mathcal{G}r}^\vee))$ of the category of {\em locally compact} sheaves on $\underline{\mathcal{G}r}^\vee$, i.e., equivariant sheaves on the affine Grassmannian for which the underlying sheaves are constructible (hence compact).
In the language of $\mathcal D$-modules, it is the Ind-category of the category of holonomic $\mathcal D$-modules on $\underline{\mathcal{G}r}^\vee$; note that (as in the previous example) all coherent $\mathcal D$-modules on $\underline{\mathcal{G}r}^\vee$ are holonomic, in fact regular holonomic, hence identified with constructible sheaves.
The renormalized Satake theorem~\cite{BezFink,AG} is an equivalence of monoidal categories
$$\cDv_{hol}(\underline{\mathcal{G}r}^\vee)=\breve{\cD}(\underline{\mathcal{G}r}^\vee)\simeq IndCoh(\mathfrak g^{\vee}[2]/{G^{\vee}}).$$ Dropping the renormalization of $\mathcal D$-modules corresponds to imposing finiteness conditions on the right hand side.
\end{example}
\begin{remark}[Ind-constructible sheaves]
The notion of ind-holonomic $\mathcal D$-modules has a natural analog in the setting of $\ell$-adic sheaves or constructible sheaves in the analytic topology. Namely, on a scheme $X$ the compact objects in $Shv(X)$ are the constructible sheaves, but this is no longer the case on a stack. A {\em locally compact} sheaf on a stack $X$ is a sheaf whose stalks are perfect complexes -- i.e., whose pullback under any map $pt\to X$ is compact. We denote by $Shv(X)_{lc}\subset Shv(X)$ the full subcategory of locally compact sheaves, and define the category $\breve{Shv}(X)$ of renormalized, or ind-constructible, sheaves as $\Ind Shv(X)_{lc}$. It has $Shv(X)$ as a colocalization:
$$\xymatrix{\Xi: Shv(X)\ar[r]<.5ex> &\ar[l]<.5ex> \breve{Shv}(X):\Psi}$$
For example for $X=Y/G$ a quotient stack, $\breve{Shv}(X)$ can be identified with the Ind category of $G$-equivariant constructible complexes on $Y$ in the sense of Bernstein--Lunts \cite{bernsteinlunts}.
The $!$-tensor structure on $Shv(X)$ respects locally compact objects, hence extends by continuity to define a symmetric monoidal structure on $\breve{Shv}(X)$, for which the functors $\Xi,\Psi$ upgrade to symmetric monoidal functors.
When $X$ is a finite orbit stack (for example, a quotient stack $Y/G$ where $G$ acts on $Y$ with finitely many orbits) or an ind-finite orbit stack such as $\underline{\mathcal{G}r}^\vee$, every coherent $\mathcal D$-module on $X$ is regular holonomic. Thus, via the Riemann--Hilbert correspondence, $\cDv_{hol}(X)=\breve{\cD}(X)\simeq \breve{Shv}(X)$.
\end{remark}
\subsection{Formalism of ind-holonomic $\mathcal D$-modules}
Recall~\cite{GRcrystals,GR,dario,raskin} the construction of the contravariant functor of $\mathcal D$-modules $\mathcal D^!$ on ind-schemes. Namely we start with the functor $$\mathcal D^!:AffSch^{f.t.,op}\to \mathbf{DGCat}$$ of $\mathcal D$-modules with $!$-pullback
on schemes of finite type as constructed e.g. in~\cite{GRcrystals,GR}. We then right Kan extend to ind-schemes of ind-finite type (or more generally to {\em laft} prestacks).
\begin{defn} The right-lax symmetric monoidal functor $\cDv_{hol}^!:QCA^{op}\to \mathbf{DGCat}$ is defined as the (symmetric monoidal)
ind-construction
$$\xymatrix{QCA^{op} \ar[rr]^-{\mathcal D_{coh,hol}^!} && \mathbf{DGCat}^{sm} \ar[rr]^-{Ind} && \mathbf{DGCat}}$$ applied to the subfunctor of $\mathcal D^!$ defined by coherent holonomic $\mathcal D$-modules.
\end{defn}
Note that by construction $\cDv_{hol}(X)$ for a QCA stack is compactly generated by $\mathcal D_{coh,hol}(X)$.
\begin{lemma}\label{QCA functoriality} For $p:X\to Y$ a finite type morphism of QCA stacks, we have continuous pullback and pushforward functors
$$\xymatrix{p_*:\cDv_{hol}(X)\ar[r]<.5ex>& \ar[l]<.5ex> \cDv_{hol}(Y): p^!}$$ satisfying base change. Moreover for $p:X\to Y$ a proper morphism, $(p_*,p^!)$ form an adjoint pair.
\end{lemma}
\begin{proof}
Pullback and pushforward of holonomic $\mathcal D$-modules on stacks under finite type morphisms remain holonomic. Hence the functors
$$\xymatrix{p_*:\mathcal D_{coh,hol}(X)\ar[r]<.5ex>& \ar[l]<.5ex> \mathcal D_{coh,hol}(Y): p^!}$$
extend by continuity to the ind-categories. The property of base-change can likewise be checked on the compact objects.
\end{proof}
If we need to consider schemes beyond finite type, we first perform a left Kan extension from affine schemes of finite type to all affine schemes, and then right Kan extend $\mathcal D^!$ to all ind-schemes~\cite{raskin}. Another formulation~\cite{dario} is to consider schemes of pro-finite type or simply {\em pro-schemes}, schemes that can be written as filtered limits of schemes of finite type along affine smooth surjective maps. Again $\mathcal D^!$ is extended from finite type schemes to pro-schemes as a left Kan extension, and then to ind-pro-schemes by a right Kan extension.
We are interested in objects such as the equivariant affine Grassmannian $\underline{\mathcal{G}r}^\vee$, which is nearly but not quite an ind-finite type algebraic stack. Namely $\underline{\mathcal{G}r}^\vee$ is the inductive limit (under closed embeddings) of stacks of the form $X/K$ where $X$ is a scheme of finite type and $K$ (${LG^\vee_+}$ in our setting) is an algebraic group acting on $X$ through a finite type quotient $K_f=K/K^u$ with pro-unipotent kernel $K^u$. Thus $X/K$ is a projective limit of finite type algebraic stacks under morphisms which are gerbes for unipotent group schemes. However the category of $\mathcal D$-modules is insensitive to unipotent gerbes, so in particular the category of $\mathcal D$-modules on $X/K$ is equivalent to that of the finite type quotient
$X/K_f$.
Thus we make the following more modest variant of the constructions in~\cite{dario, raskin}:
\begin{defn} \begin{enumerate}
\item By a stack nearly of finite type we refer to an algebraic stack expressible as a projective limit of QCA stacks under morphisms which are gerbes for unipotent group schemes.
\item By an ind-nearly finite type stack, or simply {\em ind-stack}, we denote a prestack equivalent to an inductive limit of stacks nearly of finite type under closed embeddings. The symmetric monoidal category of ind-stacks is denoted $IndSt$.
\end{enumerate}
\end{defn}
\begin{defn}
The functor $\cDv_{hol}^!:IndSt^{op}\to \mathbf{DGCat}$ on ind-stacks is defined by first left Kan extending $\cDv_{hol}^!$ from QCA stacks to stacks nearly of finite type, and then right Kan extending to ind-nearly finite type stacks.
\end{defn}
\begin{proposition} The functor $\cDv_{hol}^!$ admits a right-lax symmetric monoidal structure extending that previously defined on QCA stacks.
\end{proposition}
\begin{lemma} \begin{enumerate}
\item For $\mathcal X=\lim_\leftarrow X_n$ an inverse limit of stacks of finite type under unipotent gerbes, the functor $$\lim_{\leftarrow} \cDv_{hol}(X_n)\to \cDv_{hol}(X_i)$$ is an equivalence for any $i$.
\item The assertions of Lemma~\ref{QCA functoriality} extend to morphisms of nearly finite type stacks.
\end{enumerate}
\end{lemma}
To calculate the abstractly defined functor $\cDv_{hol}$ on ind-stacks, we follow the strategy of~\cite{GR} (see also~\cite[Section 2]{GRindschemes}):
\begin{proposition}\label{inductive ind-holonomic}
For $X$ an ind-stack, expressed as a filtered colimit of closed embeddings $i_n:X_n\hookrightarrow X$ with $X_n$ nearly of finite type, we have identifications $$\cDv_{hol}(X)\simeq \lim_{\leftarrow, i_{n}^!} \cDv_{hol}(X_n)\simeq \lim_{\rightarrow, i_{n,*}} \cDv_{hol}(X_n).$$
In particular $\cDv_{hol}(X)$ is compactly generated by pushforwards of coherent holonomic $\mathcal D$-modules on the $X_n$.
\end{proposition}
\begin{proof}
The functor $\cDv_{hol}$ takes colimits in $IndSt$ to limits in $\mathbf{DGCat}$. Hence for an ind-stack $X=\lim_{\rightarrow, i_n} X_n$ written as a colimit of nearly finite type stacks under closed embeddings,
we have an identification $\cDv_{hol}(X)\simeq \lim_{\leftarrow,i_n^!} \cDv_{hol}(X_n)$. Since the $X_n$ are nearly finite type stacks and the $i_n$ are proper morphisms, we may apply the proper $(i_{n*},i_n^!)$ adjunction to identify the limit over the pullbacks with the colimit over their left adjoints,
$\cDv_{hol}(X)\simeq \lim_{\rightarrow, i_{n*}} \cDv_{hol}(X_n)$, as desired.
\end{proof}
\begin{proposition} \label{IndHol adjunction}
For $p:X\to Y$ an ind-finite type morphism in $IndSt$, we have continuous pushforward and pullback functors
$$\xymatrix{p_*:\cDv_{hol}(X)\ar[r]<.5ex>& \ar[l]<.5ex> \cDv_{hol}(Y): p^!}$$ satisfying base change.
For $p:X\to Y$ ind-proper, $(p_*,p^!)$ form an adjoint pair.
\end{proposition}
\begin{proof}
Let us write $Y$ as the filtered colimit of closed embeddings of nearly finite type substacks $t_n:Y_n\hookrightarrow Y$, and $s_n:X_n=X\times_Y Y_n \hookrightarrow X$. Then by hypothesis we can further decompose $X_n$ as the colimit of substacks $i_{m,n}:X_{m,n}\hookrightarrow X_n$ with $p_{m,n}:X_{m,n}\to Y_n$ finite type.
A holonomic $\mathcal D$-module $\mathcal F$ on $X$ can be represented as the pushforward of a holonomic $\mathcal D$-module $\mathcal F_{m,n}$ on some $X_{m,n}$. Hence $p_*\mathcal F=p_{m,n*}\mathcal F_{m,n}$ is holonomic. Thus pushforward on all $\mathcal D$-modules restricts to a functor
$$p_*:\mathcal D_{coh,hol}(X)\to \mathcal D_{coh,hol}(Y)$$ which thus extends by continuity to the ind-categories $\cDv_{hol}$.
Pullback defines a functor $$p_{m,n}^!:\mathcal D_{coh,hol}(Y_n)\to \mathcal D_{coh,hol}(X_{m,n}),$$ and thus passing to ind-categories by continuity
$$p_{m,n}^!:\cDv_{hol}(Y_n)\to \cDv_{hol}(X_{m,n}).$$ By Proposition~\ref{inductive ind-holonomic}, these functors assemble to a continuous functor to the inverse limit category and on to the target,
$$\xymatrix{\cDv_{hol}(Y_n)\ar[r]^-{p_n^!}& \cDv_{hol}(X_n)\ar[r]^-{s_{n,*}}& \cDv_{hol}(X)}$$ Finally, by (finite type) base change, the functors $s_{n,*}p_n^!\simeq p^!t_{n,*}$ assemble to a functor from the direct limit category $$\lim_{\rightarrow}\cDv_{hol}(Y_n)=\cDv_{hol}(Y)$$ to $\cDv_{hol}(X)$. The resulting functors inherit the base change property from their finite type constituents.
\end{proof}
\begin{remark}[Bivariant functoriality]\label{remark-bivariant} The key 2-categorical extension theorem of Gaitsgory--Rozenblyum~\cite[Theorem V.1.3.2.2]{GR} allows one to define functors out of correspondence 2-categories given 1-categorical data, namely a functor (in our case $\cDv_{hol}^!$) satisfying an adjunction and base change property for a particular class of morphisms (in our case ind-proper morphisms). Thus we find that the functor $\cDv_{hol}^!:IndSt^{op}\to \mathbf{DGCat}$ extends to a functor of $(\infty,2)$-categories
$$\cDv_{hol}:Corr_{ind-f.t,ind-prop}^{ind-prop}(IndSt)\to \mathbf{DGCat}^{(\infty,2)}.$$
\end{remark}
\section{Hecke algebras and Hecke categories} \label{Hecke section}
In this section we describe a general formalism for constructing symmetric monoidal categories acting centrally on convolution categories. We work in the setting of ind-holonomic $\mathcal D$-modules on ind-stacks described above, since our main example is the renormalized Satake category $\cDv_{hol}(\underline{\mathcal{G}r}^\vee)$ and more generally Hecke categories for Kac-Moody groups $\cDv_{hol}(\underline{P}\backslash {\underline{G}}/\underline{P})$. However the discussion of this section works identically when applied to the sheaf theories of ind-coherent sheaves $QC^!$ or $\mathcal D$-modules $\mathcal D$ when restricted to {\em laft} prestacks, as in~\cite{GR}.
\subsection{Looping and delooping monoidal categories}
We recall the following fundamental feature of algebras and their module categories, due to Lurie (combining aspects of Theorems 6.3.5.5, 6.3.5.10 and 6.3.5.14 in~\cite{HA} -- Lurie also proves functoriality in $\mathcal P$ which we omit). See~\cite[Section E.2]{AG} for a related discussion in the stable setting of dg categories.
Let us fix a presentable symmetric monoidal category $\mathcal P\in \Pr^L$, and let $Cat_\mathcal P:=\mathcal P\mhyphen\mathcal{M}od$ denote the symmetric monoidal category of $\mathcal P$-module categories in $\Pr^L$. Thus for $\mathcal P=k\mhyphen\mathrm{mod}$ ($k$ a ring of characteristic zero) we have $Cat_\mathcal P=\mathbf{DGCat}_k$, the symmetric monoidal category of presentable $k$-linear dg categories. We will be interested in applying the result for $\mathcal P=\mathbf{DGCat}_k^c$ (see Section~\ref{dgcat}), so that an algebra $A$ in $\mathcal P$ is a small monoidal dg category, or equivalently a compactly generated presentable dg category with proper monoidal structure, and $A\mhyphen\mathrm{mod}$ is the $\infty$-category of $A$-module categories.
\medskip
\begin{theorem}~\cite[Section 6.3.5]{HA}
There is a symmetric monoidal functor $$\mathbf{Mod}:Alg_{E_1}(\mathcal P)\longrightarrow (Cat_\mathcal P)_{\mathcal P/}$$
from $E_1$-algebras in $\mathcal P$ to $\mathcal P$-categories under $\mathcal P$ (i.e., $E_0$-$\mathcal P$-categories), sending $A$ to the $\mathcal P$-category $\mathrm{mod}\mhyphen A$ of right $A$-modules in $\mathcal P$, pointed by $A$ itself.
This functor admits a right adjoint $\Omega$, sending a pointed category $p:\mathcal P\to\mathcal M$ to $$\Omega_p(\mathcal M,p)=End_{\mathcal M}(p(1_\mathcal P))\in Alg_{E_1}(\mathcal P).$$
\end{theorem}
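As a sanity check on the adjunction (a standard observation, recorded here for orientation): applying $\Omega$ to the pointed category $\mathbf{Mod}(A)=(\mathrm{mod}\mhyphen A, A)$ recovers
$$\Omega(\mathrm{mod}\mhyphen A, A)=End_{\mathrm{mod}\mhyphen A}(A)\simeq A,$$
the familiar identification of an algebra with the endomorphisms of its free rank one module. In particular the unit of the adjunction $(\mathbf{Mod},\Omega)$ is an equivalence.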
By iteratively applying Lurie's Dunn additivity theorem~\cite[Theorem 5.1.2.2]{HA} we may likewise loop and deloop between $E_n$-algebras in $\mathcal P$ and $E_{n-1}$-monoidal $\mathcal P$-categories. We spell out the case we will use:
\begin{corollary} \label{nonsense}
\begin{enumerate}
\item Taking endomorphisms of unit objects defines a functor
$$\Omega: Alg_{E_1}(Cat_\mathcal P)\longrightarrow Alg_{E_2}(\mathcal P)$$
from monoidal $\mathcal P$-categories to $E_2$-algebras in $\mathcal P$.
\item For $\mathcal A\in Alg_{E_1}(Cat_\mathcal P)$ a monoidal $\mathcal P$-category, the $E_2$-morphism
$$1_\mathcal A\otimes -: End(1_\mathcal A)\to End(Id_\mathcal A)$$ given by applying $\Omega$ to the action of $\mathcal A$ on itself admits a left inverse as an $E_1$-morphism
$$act_{1_\mathcal A}:End(Id_\mathcal A)\to End(1_\mathcal A), $$ given by the action of $End(Id_\mathcal A)$ on the object $1_\mathcal A$.
\end{enumerate}
\end{corollary}
\begin{proof}
The functor $\Omega$, by virtue of being the right adjoint to a symmetric monoidal functor, is itself right-lax symmetric monoidal.
Thus we can upgrade $\Omega$ to a functor
$$\Omega: Alg_{E_1}(Cat_\mathcal P)\longrightarrow Alg_{E_1}(Alg_{E_1}(\mathcal P))\simeq Alg_{E_2}(\mathcal P)$$
on monoidal $\mathcal P$-categories, pointed by their units, to $E_2$-algebras in $\mathcal P$.
We now apply this construction to the monoidal functor (morphism of $E_1$-algebras in $Cat_\mathcal P$) given by the action of a monoidal category on itself, $$\otimes: \mathcal A\to End(\mathcal A),$$ obtaining an $E_2$-morphism $End(1_\mathcal A)\to End(Id_\mathcal A)$.
The functor $\otimes$, considered as a morphism only of pointed $\mathcal P$-categories, admits a left inverse
$$act_{1_\mathcal A}: End(\mathcal A)\longrightarrow \mathcal A$$ obtained from acting on the unit of $\mathcal A$ by endofunctors of $\mathcal A$: the composite $$\xymatrix{\mathcal A\ar[r]^-{\otimes} \ar@/^2pc/_-{1_\mathcal A\otimes -}[rr] & End(\mathcal A) \ar[r]^-{act_{1_\mathcal A}} & \mathcal A}$$ is identified with the identity functor of $\mathcal A$ since $1_\mathcal A$ is the monoidal unit.
Applying $\Omega$ to this morphism we obtain the desired left inverse morphism of $E_1$-algebras in $\mathcal P$
$$act_{1_\mathcal A}:End(Id_\mathcal A)\to End(1_\mathcal A).$$
\end{proof}
\subsection{Groupoids}
\begin{definition} By an {\em ind-proper groupoid} we refer to a groupoid object $\mathcal G\circlearrowright X$ in ind-stacks, with ind-proper source and target maps $\pi_1,\pi_2:\mathcal G\to X$.
\end{definition}
More precisely, the groupoid object is given by a simplicial object $\mathcal G_\bullet$ satisfying a Segal condition resulting in an identification of the simplices with iterated fiber products:
\begin{equation}\label{groupoid}
\xymatrix{\cdots \ar[r]<1ex> \ar[r]<.5ex> \ar[r] \ar[r]<-.5ex> \ar[r]<-1ex> &
\mathcal G\times_X \mathcal G\times_X \mathcal G \ar[r]<.75ex> \ar[r]<.25ex> \ar[r]<-.25ex> \ar[r]<-.75ex> &
\mathcal G\times_X \mathcal G \ar[r]<.5ex> \ar[r] \ar[r]<-.5ex> &
\mathcal G \ar[r]<.25ex> \ar[r]<-.25ex>&
X}
\end{equation}
See~\cite[Sections II.2.5.1, III.3.6.3]{GR} for a discussion of ind-proper groupoid objects.
We denote $p=(\pi_1,\pi_2):\mathcal G\to X\times X$.
It will be convenient (but technically irrelevant) to think in terms of the (potentially very poorly behaved) quotient prestack $Y=|\mathcal G_\bullet|=X/\mathcal G$, so that $\mathcal G_\bullet$ is identified with the \v{C}ech simplicial object $\{X\times_Y X\times_Y\cdots\times_Y X\}$.
\begin{remark}[Monoid/Segal objects] Our constructions apply equally well to monoid objects (also known as Segal or category objects) in stacks, rather than groupoids (the setting of the constructions in~\cite[Sections II.2.5.1, III.3.6.3]{GR}); in other words, we will make no use of the invertibility of morphisms. We use the language of groupoids for psychological reasons, for example to think of $\mathcal G$ as the \v{C}ech construction on a mythical quotient stack $X\to Y$.
\end{remark}
Our main example of an ind-proper groupoid will be the equivariant Grassmannian $\mathcal G=\underline{\mathcal{G}r}^\vee$ acting on $X=pt/{LG^\vee_+}$, i.e., the \v{C}ech construction for the ind-proper, ind-schematic morphism $X=pt/LG^{\vee}_+\to Y=pt/L{G^{\vee}}$ or its loop rotated version (see Section \ref{subsection-derived satake}).
For the remainder of this section we will fix an ind-proper groupoid $\mathcal G \rightrightarrows X$.
\subsection{Hecke categories}\label{Hecke categories}
\begin{definition} The Hecke category attached to the ind-proper groupoid $\mathcal G\circlearrowright X$ is $\mathcal H:=\cDv_{hol}(\mathcal G)$.
\end{definition}
The monoidal structure, given by the convolution product, follows the general mechanism discussed in~\cite[II.2.5.1, V.3.4]{GR}: it is inherited upon applying $\cDv_{hol}$ to the structure on $\mathcal G$ of algebra object in correspondences. Since pushforward under a proper map is a proper functor (it has a continuous right adjoint), the convolution product is proper, hence the Hecke category defines an algebra in $\mathbf{DGCat}_k^c$.
Explicitly, given objects $A,B \in \mathcal H$, their convolution is given by $A\ast B = p_{13\ast}\bigl( p_{12}^!(A) \otimes^! p_{23}^! (B)\bigr)$, where
\[
\xymatrix{
p_{12},p_{13},p_{23} : \mathcal G\times_X \mathcal G \ar[r]<.5ex> \ar[r] \ar[r]<-.5ex> & \mathcal G
}
\]
are the three projection maps. The diagonal embedding (unit map) $i:X\to \mathcal G$ induces a monoidal functor $$\mathfrak{d}:\mathcal R\to \mathcal H,$$ where $\mathcal R=\cDv_{hol}(X)$, making the monoidal category $\mathcal H$ into an {\em $\mathcal R$-ring}, i.e., an algebra object in $\mathcal R$-bimodules.
\subsection{Hecke algebras}
The groupoid $\mathcal G$ defines a monad acting on $\mathcal R=\cDv_{hol}(X)$ following the general mechanism discussed in~\cite[II.2.5.1, V.3.4]{GR}, which we call the Hecke algebra $\underline{H}$. The Hecke algebra is an algebra object structure on the functor $\pi_{2,*}\pi_1^!\in End(\mathcal R)$ (informally, the pull-push $q^!q_*$ along the projection $q:X\to Y=X/\mathcal G$ to the quotient).
\begin{definition} The Kostant category associated to the groupoid $\mathcal G$ is the category $\mathcal K=\underline{H}\mhyphen\mathrm{mod}$ of $\underline{H}$-modules in $\mathcal R=\cDv_{hol}(X)$.
\end{definition}
Alternatively, one can think of $\mathcal K$ as the category of $\mathcal G$-equivariant objects of $\cDv_{hol}(X)$. More precisely, since Diagram~\ref{groupoid} is a diagram of ind-stacks and ind-finite type maps, we can pass to $\cDv_{hol}$ and $!$-pullbacks to find the cosimplicial symmetric monoidal category $\cDv_{hol}(\mathcal G_\bullet)$:
$$\xymatrix{\cdots &\ar[l]<1ex> \ar[l]<.5ex> \ar[l] \ar[l]<-.5ex> \ar[l]<-1ex>
\cDv_{hol}(\mathcal G\times_X \mathcal G\times_X \mathcal G)& \ar[l]<.75ex> \ar[l]<.25ex> \ar[l]<-.25ex> \ar[l]<-.75ex>
\cDv_{hol}(\mathcal G\times_X \mathcal G)& \ar[l]<.5ex> \ar[l] \ar[l]<-.5ex>
\cDv_{hol}(\mathcal G) &\ar[l]<.25ex> \ar[l]<-.25ex>
\cDv_{hol}(X)}$$
\begin{definition} The symmetric monoidal category $\cDv_{hol}(X)^{\mathcal G}$ of $\mathcal G$-equivariant sheaves on $X$ is the totalization $Tot(\cDv_{hol}(\mathcal G_\bullet))$.
\end{definition}
To identify $\cDv_{hol}(X)^\mathcal G$ with $\underline{H}$-modules in $\cDv_{hol}(X)$, we require the theory of monadic descent, in this setting due to Lurie \cite[Theorem 6.2.4.2]{HA} (see also \cite{1affine}, Appendix C). In general, if a cosimplicial category $\mathcal C^\bullet$ satisfies the monadic Beck-Chevalley conditions, then we can identify the totalization of $\mathcal C^\bullet$ with modules for a monad acting on $\mathcal C^0$, whose underlying functor may be identified with the composite of one face map with the left adjoint of the other. In the case $\mathcal C^\bullet = \cDv_{hol}(\mathcal G_\bullet)$, these conditions are equivalent to the base change property for ind-proper morphisms in $\cDv_{hol}$ (see Remark \ref{remark-bivariant}), and the corresponding monad is precisely $\underline{H}$. Thus we obtain the following result:
\begin{proposition} We have an identification $\mathcal K\simeq \cDv_{hol}(X)^{\mathcal G}$, and hence a symmetric monoidal structure on the Kostant category for which the forgetful functor $\mathcal K\to\mathcal R$ is symmetric monoidal.
\end{proposition}
\subsection{Symmetric monoidal structure vs. cocommutative bimonad}
One can view the (symmetric) monoidal structure on $\mathcal K$ in terms of a (cocommutative) bimonad structure on $\underline{H}$ in the sense of Moerdijk and Brugui\`eres--Virelizier, see~\cite{Bohm} (it is in fact naturally a Hopf monad).
More precisely, the symmetric monoidal structure on $!$-pullback and the oplax symmetric monoidal structure on ind-proper $*$-pushforward endow $\underline{H}=\pi_{2,*}\pi_1^!$ with a canonical oplax symmetric monoidal structure. In this way the endofunctor $\underline{H}$ naturally upgrades to a cocommutative bimonad, i.e., an algebra object in the category of oplax symmetric monoidal endofunctors. In particular, $\underline{H}(1_\mathcal R)$ is naturally a cocommutative coalgebra object in $\mathcal R$.
The monoidal structure on $\underline{H}$-modules is equivalent to the bimonad structure enhancing the monad $\underline{H}$. Explicitly, given $\underline{H}$-modules $M,N$ with structure maps $\underline{H} M\to M$, $\underline{H} N\to N$, we give $M\otimes N$ an $\underline{H}$-module structure with structure map $$\underline{H} (M\otimes N)\to \underline{H} M \otimes \underline{H} N \to M\otimes N$$
where the first morphism uses the oplax monoidal structure on $\underline{H}$.
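This is the direct analogue of the classical tensor product of modules over a bialgebra $H$ over a commutative ring $R$: for $H$-modules $M,N$, the product $M\otimes_R N$ carries the $H$-action defined through the comultiplication,
$$h\cdot(m\otimes n)=\Delta(h)(m\otimes n)=\sum h_{(1)}m\otimes h_{(2)}n,$$
with the oplax structure map $\underline{H}(M\otimes N)\to \underline{H} M\otimes \underline{H} N$ playing the role of $\Delta$.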
Similarly, we have a natural transformation
$$\underline{H}(\omega_X)=\pi_{2,*}\omega_{\mathcal G}\simeq \pi_{2,*}\pi_2^!\omega_X\longrightarrow \omega_X.$$
\subsection{(Bi)monads vs. (bi)algebroids.}\label{monad vs algebra}
In the generality in which we are working, the Hecke algebra $\underline{H}$ is only a monad, i.e., an algebra object in endofunctors of $\cDv_{hol}(X)$. In the cases of practical interest, however (in particular, for the equivariant Grassmannian), this reduces to an ordinary algebra, thanks to ``affineness'' (or rather coaffineness).
In general, if $R$ is any $k$-algebra object then we can monoidally identify continuous endofunctors of $R\mhyphen\mathrm{mod}$ with $R$-bimodules, and thus a continuous monad $\underline{H}$ acting on $\mathcal R =R\mhyphen\mathrm{mod}$ is the same thing as an algebra object $H$ in the monoidal category of $R$-bimodules. Unwinding the definitions, we observe that such an algebra object $H$ is nothing more than an $R$-ring, i.e., $H$ is itself a $k$-algebra object, together with a morphism of algebra objects $R\to H$. Moreover, the category of modules for $H$ in $R\mhyphen\mathrm{mod}$ (thinking of $H$ as a monad acting on $R\mhyphen\mathrm{mod}$) is the same thing as $H\mhyphen\mathrm{mod}$, i.e., $H$-modules in $\mathbf{Vect}$ (see \cite{Bohm} Lemma 2.4). Moreover, if $R$ is a commutative ring, then a (cocommutative) bimonad structure on the monad $\underline{H}$ is equivalent to the structure of a (cocommutative) $R$-bialgebroid on the $R$-ring $H$. In that case, we have an identification of left $R$-modules $H = \underline{H}(R)$; in particular $H$ is a cocommutative $R$-coalgebra object.
Returning to the setting of an ind-proper groupoid $\mathcal G$ acting on $X$, let us consider the case where the functor $p_{X\ast}:\cDv_{hol}(X) \to \mathbf{Vect}$ is monadic, so that $\cDv_{hol}(X) \simeq C^\ast(X)\mhyphen\mathrm{mod}$ (this happens for example in the case when $X$ is the classifying stack of an algebraic group).
In this case, the Hecke monad $\underline{H}$ on $\cDv_{hol}(X)$ may be identified with a $C^\ast(X)$-ring which we denote $H$. By construction $H$ is given by global sections of the relative dualizing complex $\omega_{\mathcal G/X} = \pi_1^! p_X^\ast(\mathbb C)$.
Unwinding the definitions, we see that the $R$-ring structure on $H$ (where $R=C^\ast(X)$) arises from convolution of (relative) chains on $\mathcal G$. For example, the multiplication $H\otimes_R H \to H$ is given by direct image of chains along the ind-proper morphism
\[
\mathcal G \times_{\pi_2,X,\pi_1} \mathcal G \to \mathcal G
\]
On the other hand, the $R$-coalgebra structure on $H$ arises from the ``cup coproduct'' of chains. For example, the comultiplication $H \to H\otimes_R H$ (where the commutative ring $R=R^{op}$ acts on both factors of $H$ by left multiplication) is given by the pushforward associated to the diagonal map
\[
\mathcal G \to \mathcal G \times_{\pi_1,X,\pi_1} \mathcal G
\]
Note that the fiber product involved in the cup coproduct is defined using $\pi_1$ on both factors, in contrast to the fiber product involved in convolution.
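For an informal illustration of the two structures, consider the degenerate case $X=pt$ with $\mathcal G=G$ a (suitably proper) group, viewed as a groupoid over a point. Then $H=C_{-\ast}(G)$ is the chains on $G$: the multiplication is the Pontryagin product, the direct image of chains along the multiplication map
$$G\times G\longrightarrow G,$$
while the cup coproduct is induced by pushforward along the diagonal
$$G\longrightarrow G\times G.$$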
\begin{remark}
Note that as a dg vector space, $H$ may be unbounded in both directions. For example, consider the case $\mathcal G = \underline{\mathcal{G}r}^\vee$, $X=pt/{LG^\vee_+}$. Then $C^\ast(X)$ is the $G^\vee$-equivariant cohomology ring of a point (thus unbounded in positive cohomological degrees) and $C_{-\ast} (pt\times_X \mathcal G) = C_{-\ast}({\cG r^{\vee}})$ is the (non-equivariant) homology of the affine Grassmannian (thus unbounded in negative cohomological degrees). Equivariant formality gives an isomorphism of dg-vector spaces (in fact of $C^\ast(X)$-coalgebras) $$H \simeq C^\ast(X) \otimes C_{-\ast}({\cG r^{\vee}})$$
\end{remark}
\subsection{Modules for Hecke Categories}
We now consider a categorical analog of the above discussion.
Consider the cosimplicial symmetric monoidal category $\cDv_{hol}(\mathcal G_\bullet)$. We may pass to module categories, obtaining a cosimplicial symmetric monoidal category $\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od$.
\begin{definition} The symmetric monoidal category $\cDv_{hol}(X)\mhyphen\mathcal{M}od^{\mathcal G}$ of $\mathcal G$-equivariant module categories on $X$ is the totalization $Tot(\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od)$.
\end{definition}
\begin{remark}[Algebra vs Monad, revisited]
As we noted in Section~\ref{monad vs algebra}, we treat the Hecke algebra in general as a monad on $\cDv_{hol}(X)$, but in situations of interest this reduces to an algebra object in $C^*(X)$-bimodules. Here we choose to treat the Hecke category directly as an algebra in $\cDv_{hol}(X)$-bimodules. One could instead consider the monad on sheaves of categories on $X$ obtained by push-pull along $\mathcal G$. Likewise the category $\cDv_{hol}(X)\mhyphen\mathcal{M}od^{\mathcal G}$ is an avatar for the category of $\mathcal G$-equivariant sheaves of categories on $X$, with which it is connected by the localization-global sections adjunction, and which it would recover if we were in a 1-affine situation. Thus we can also consider it as an avatar of sheaves of categories on the quotient stack $Y=X/\mathcal G$, which is the source of its symmetric monoidal structure.
\end{remark}
\begin{proposition} The cosimplicial category $\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od$ satisfies the monadic Beck-Chevalley conditions.
Moreover the associated monad on $\cDv_{hol}(X)\mhyphen\mathcal{M}od$ is identified with the {\em Hecke category} $\mathcal H=\cDv_{hol}(\mathcal G)$ as an algebra in $\cDv_{hol}(X)$-bimodules via the diagonal map $\delta_*:\cDv_{hol}(X)\to \mathcal H$. Thus we have an identification $\cDv_{hol}(X)\mhyphen\mathcal{M}od^{\mathcal G}\simeq \mathcal H\mhyphen\mathcal{M}od$.
\end{proposition}
\begin{proof}
The Beck-Chevalley conditions for $\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od$ follow from those for $\cDv_{hol}(\mathcal G_\bullet)$ upon applying the functor $\mhyphen\mathcal{M}od$.
\end{proof}
It follows that the category $\mathcal H\mhyphen\mathcal{M}od$ of $\mathcal G$-equivariant $\cDv_{hol}(X)$-modules inherits a symmetric monoidal structure, such that the forgetful functor $\mathcal H\mhyphen\mathcal{M}od\to\mathcal R\mhyphen\mathcal{M}od$ is symmetric monoidal. The unit object is the $\mathcal H$-module $\mathcal R$
itself, which corresponds to the cosimplicial category $\cDv_{hol}(\mathcal G_\bullet)$.
\subsection{Hecke algebras vs. Hecke categories}
We now compare descent for module categories with descent for sheaves.
Given an $\mathcal H$-module $\mathcal M$, or equivalently $\mathcal M^\bullet \in Tot(\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od)$, we define the {\em $\mathcal G$-equivariant objects} $\mathcal M^{\mathcal G}$ to be
$$\mathcal M^{\mathcal G}:=Hom_{\mathcal H}(\cDv_{hol}(X), \mathcal M).$$ Thus we have
$$\mathcal M^{\mathcal G}\simeq Tot(Hom(\cDv_{hol}(\mathcal G_\bullet), \mathcal M^\bullet)).$$
\begin{proposition}\label{Hecke equivariance}
\begin{enumerate}
\item The $\mathcal G$-equivariant objects in the $\mathcal H$-module $\mathcal R$ recover the category of $\mathcal G$-equivariant sheaves on $X$, i.e.,
$$\mathcal R^\mathcal G\simeq \mathcal K.$$
\item The resulting equivalence of $\mathcal R^\mathcal G$ with the endomorphisms of the unit $\mathcal R$ of the symmetric monoidal
category $\mathcal H\mhyphen\mathcal{M}od$ lifts to a symmetric monoidal equivalence.
\end{enumerate}
\end{proposition}
\begin{proof}
We apply the above definition in the case $\mathcal R=\cDv_{hol}(X)$, which corresponds to $\mathcal R^\bullet=\cDv_{hol}(\mathcal G_\bullet)$:
\begin{eqnarray*}
[\cDv_{hol}(X)]^{\mathcal G}&:=&Hom_{\mathcal H}(\cDv_{hol}(X),\cDv_{hol}(X))\\
&\simeq& Hom_{Tot(\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od)}(\cDv_{hol}(\mathcal G_\bullet),\cDv_{hol}(\mathcal G_\bullet))\\
&\simeq& Tot(\cDv_{hol}(\mathcal G_\bullet))\\
&\simeq& \cDv_{hol}(X)^{\mathcal G}.
\end{eqnarray*}
Tracing through the identifications above, we see that the symmetric monoidal structure on $Tot(\cDv_{hol}(\mathcal G_\bullet))$ coming from tensor product of sheaves is identified with the symmetric monoidal structure on endomorphisms of the unit in $Tot(\cDv_{hol}(\mathcal G_\bullet)\mhyphen\mathcal{M}od)$, as claimed.
\end{proof}
Our main result asserts that $\mathcal G$-equivariant sheaves give central objects in the groupoid category $\mathcal H$. This central action can be thought of as expressing the linearity of convolution on $\mathcal G=X\times_Y X$ over sheaves on the (possibly ill-behaved) quotient $Y=X/\mathcal G$.
\begin{theorem}\label{groupoid center}
Let $\mathcal G$ denote an ind-proper groupoid acting on an ind-stack $X$, $\underline{H}$ the corresponding monad on $\mathcal R=\cDv_{hol}(X)$, $\mathcal K=\underline{H}\mhyphen\mathrm{mod}$ the Kostant category and $\mathcal H=\cDv_{hol}(\mathcal G)$ the groupoid category.
Then there is a canonical $E_2$-morphism $\mathfrak{z}$ with a monoidal left inverse $\mathfrak a$ ($\mathfrak a\circ \mathfrak{z}\simeq Id$),
$$\xymatrix{\mathcal K\ar[r]_-{\mathfrak{z}}& \ar@/_1pc/_-{\mathfrak a}[l]\mathcal Z(\mathcal H)}$$
lifting the diagonal map $\mathfrak{d}:\mathcal R\to \mathcal H$:
$$\xymatrix{\mathcal K\ar[r]_{E_2}\ar[d]_-{E_\infty}& \ar@/_1pc/_-{E_1}[l] \mathcal Z(\mathcal H)\ar[d]^{E_1}\\
\mathcal R \ar[r]_{E_1} & \mathcal H }$$
\end{theorem}
\begin{proof}
We apply Corollary~\ref{nonsense} in the setting of the presentable symmetric monoidal category
$\mathcal P=\mathbf{DGCat}_k^c$ of compactly generated dg categories with proper morphisms.
For the algebra object $\mathcal A\in Alg_{E_1}(Cat_\mathcal P)$ (which in our case happens to be a commutative algebra object) we
take $$\mathcal A=\mathcal H\mhyphen\mathcal{M}od\simeq \mathcal R\mhyphen\mathcal{M}od^{\mathcal G}$$ to be the category of modules for the Hecke category, i.e., $\mathcal G$-equivariant $\mathcal R$-modules. The center $End(Id_\mathcal A)$ of $\mathcal A$ is identified with the center $\mathcal Z(\mathcal H)$ of the monoidal category $\mathcal H$. We have identified $$End(1_\mathcal A)\simeq\underline{H}\mhyphen\mathrm{mod}\simeq \cDv_{hol}(X)^{\mathcal G}$$ as categories. We need to show that this identification can be upgraded to an $E_2$ identification, hence obtaining the desired $E_2$-morphism from $\mathcal K=\underline{H}\mhyphen\mathrm{mod}$ to $End(Id_\mathcal A)=\mathcal Z(\mathcal H)$. However we have seen in Proposition~\ref{Hecke equivariance} that the identification is in fact naturally $E_\infty$. Thus Corollary~\ref{nonsense} provides the desired morphisms $\mathfrak{z}$ and $\mathfrak a$.
To conclude the theorem, we only need to establish that the morphism $\mathfrak{z}$ lifts the morphism $\mathfrak{d}$ (i.e., the commutativity of the above diagram). Note that the monoidal functor $\mathfrak{d}: \mathcal R \to \mathcal H$ (which defines the structure of $\mathcal H$ as an $\mathcal R$-module) induces a corresponding functor $\End(\mathcal R) \to \End(\mathcal H)$ which we still denote by $\mathfrak{d}$. By construction, the functor $\mathfrak{z}: \mathcal K \to \mathcal Z(\mathcal H)$
takes an object of $\mathcal K$, represented by a $\mathcal H$-linear endomorphism $F:\mathcal R \to \mathcal R$ to $\mathfrak{d}(F)$, which has the structure of an $\mathcal H\otimes \mathcal H^{op}$-linear endomorphism of $\mathcal H$, i.e. an object of $\mathcal Z(\mathcal H)$. In other words, we have a commutative diagram
\[
\xymatrix{
\End_{\mathcal H}(\mathcal R) \ar[r]^{\mathfrak{z}} \ar[d] & \End_{\mathcal H \otimes \mathcal H^{op}}(\mathcal H) \ar[d] \\
\End(\mathcal R) \ar[d]_{act_{1_\mathcal R}} \ar[r]^{\mathfrak{d}} & \End(\mathcal H) \ar[d]^{act_{1_\mathcal H}} \\
\mathcal R \ar[r]^\mathfrak{d} & \mathcal H
}
\]
as required.
\end{proof}
\section{Sheaf theory: Filtered $D$-modules}\label{filtered D-mod section}
In the previous two sections, we considered categories of ind-holonomic $D$-modules in the setting of ind-proper groupoid stacks. The main example was the equivariant affine Grassmannian $\underline{\mathcal{G}r}^\vee \rightrightarrows pt/{LG^\vee_+}$ associated to the group ${G^{\vee}}$. In this section we will discuss the relevant sheaf theory for the Langlands dual side, which involves finite dimensional geometry associated to the group $G$. In this setting we will be using the category of all (not necessarily holonomic) $D$-modules (rather than its ind-holonomic variant)
\[
\mathcal D(Y) = QC^!(Y_{dR})
\]
and we will need to understand the degeneration of this category to the category $QC(T^\ast Y)$ of quasi-coherent sheaves
on the cotangent bundle.
\subsection{Categorical Representation Theory}\label{categorical representation theory}
Let $G$ be a fixed affine algebraic group. In this subsection, we give a brief overview of the theory of $G$-actions in the setting of dg or stable, presentable $\infty$-categories (see \cite{BD,frenkel dennis,1affine} as well as \cite[Section 3]{dario} and the references therein for further details).
Consider the category $\mathcal D(G)$ of $\mathcal D$-modules on $G$, equipped with the convolution monoidal structure. A strong $G$-category is, by definition, a module category for $\mathcal D(G)$. Examples include $\mathcal D(X)$ for a stack $X$ with a $G$-action, and $\mathfrak U(\mathfrak g)\mhyphen\mathrm{mod}$. Given a strong $G$-category $\mathcal C$, we have its (strong) invariants $\mathcal C^G = \Hom_{\mathcal D(G)}(\mathbf{Vect},\mathcal C)$. This is computed as the totalization of a cosimplicial category
\[
\xymatrix{
\cdots& \ar[l]<.75ex> \ar[l]<.25ex> \ar[l]<-.25ex> \ar[l]<-.75ex>
\mathcal C \otimes \mathcal D(G) \otimes \mathcal D(G) & \ar[l]<.5ex> \ar[l] \ar[l]<-.5ex>
\mathcal C \otimes \mathcal D(G) &\ar[l]<.25ex> \ar[l]<-.25ex>
\mathcal C
}
\]
A weak $G$-category is defined to be a module category for the convolution category $QC(G)$, and we denote the weak invariants of a weak $G$-category $\mathcal C$ by $\mathcal C^{G,w} := \Hom_{QC(G)}(\mathbf{Vect},\mathcal C)$, which can be computed using a similar diagram. Given a weak $G$-category $\mathcal C$, its weak invariants $\mathcal C^{G,w}$ naturally carry an action of the rigid symmetric monoidal category $\Rep(G) = QC(pt/G) = \Hom_{QC(G)}(\mathbf{Vect},\mathbf{Vect})$. It is a result of Gaitsgory \cite{1affine} that $pt/G$ is a 1-affine stack: quasi-coherent sheaves of categories on $pt/G$ are identified with module categories for $QC(pt/G) = \Rep(G)$. By descent, sheaves of categories on $pt/G$ are identified with module categories for $(QC(G),\ast)$, leading to the following interpretation of 1-affineness:
\begin{theorem}[Gaitsgory's 1-affineness] \label{BG1affine}
The $QC(G)$-$\Rep(G)$ bimodule $\mathbf{Vect}$ defines a Morita equivalence between the monoidal categories $(QC(G), \ast)$ and $(\Rep(G), \otimes)$.
\end{theorem}
In other words, a weak $G$-category can be recovered from its weak invariants as a $\Rep(G)$-module category.
If $\mathcal C$ is a strong $G$-category, then in particular it is a weak $G$-category, and we have
\[
\mathcal C^{G,w} = \Hom_{QC(G)}(\mathbf{Vect},\mathcal C) \simeq \Hom_{\mathcal D(G)}(\mathfrak U\mathfrak g\mhyphen\mathrm{mod},\mathcal C)
\]
In the case $\mathcal C=\mathcal D(X)$ for a smooth stack $X$ with a $G$-action, we have identifications $\mathcal D(X)^G \simeq \mathcal D(X/G)$, and $\mathcal D(X)^{G,w} \simeq \mathcal D(\qw XG) = QC(X_{dR}/G)$ (this is smooth descent). Note that
\[
\mathcal D(X)^G = \Hom_{\mathcal D(G)}(\mathbf{Vect},\mathcal D(X)) \simeq \Hom_{\mathcal{HC}}(\Rep(G),\mathcal D(X)^{G,w})
\]
This is a derived rephrasing of the familiar equivalence between strongly equivariant $\mathcal D$-modules and weakly equivariant $\mathcal D$-modules for which the quantum moment map is identified with the derivative of the $G$-action.
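\begin{example}
As an illustration of the two notions of invariants (a standard computation, recorded here for orientation), take $X = G$ with the left translation action. Then $\mathcal D(G)^G \simeq \mathcal D(pt) = \mathbf{Vect}$, while $\mathcal D(G)^{G,w} \simeq \mathfrak U\mathfrak g\mhyphen\mathrm{mod}$: a weakly equivariant $\mathcal D$-module on $G$ is determined by its fiber at the identity, which retains the action of the left invariant differential operators $\mathfrak U\mathfrak g = (\mathfrak{D}_G)^G$.
\end{example}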
Consider the monoidal category of Harish-Chandra bimodules:
\[
\mathcal{HC} = \Hom_{\mathcal D(G)}(\mathfrak U\mathfrak g\mhyphen\mathrm{mod},\mathfrak U\mathfrak g\mhyphen\mathrm{mod}) = \left(\mathfrak U\mathfrak g\mhyphen\mathrm{bimod} \right)^G \simeq \mathcal D(\wqw GGG)
\]
Objects of $\mathcal{HC}$ are given by $\mathfrak U\mathfrak g$-bimodules in $\Rep(G)$, together with an identification of the adjoint $\mathfrak U(\mathfrak g)$-action with the derivative of the $G$-action.\footnote{Note that, while the abelian category heart of $\mathcal{HC}$ is a full subcategory of $\mathfrak U\mathfrak g$-bimodules, in the derived setting strong equivariance is data not a condition (even for $G$ connected).} If $\mathcal C$ is a strong $G$-category then $\mathcal C^{G,w} = \Hom_{\mathcal D(G)}(\mathfrak U\mathfrak g\mhyphen\mathrm{mod},\mathcal C)$ is naturally a $\mathcal{HC}$-module category. Using the 1-affineness of $pt/G$, we have
\begin{theorem}[Beraldo \cite{dario}]\label{thm-morita}
The $\mathcal D(G)$-$\mathcal{HC}$-bimodule $\mathfrak U\mathfrak g\mhyphen\mathrm{mod}$ defines a Morita equivalence between the monoidal categories $\mathcal D(G)$ and $\mathcal{HC}$.
\end{theorem}
In other words, a strong $G$-category can be recovered from its weak invariants as a $\mathcal{HC}$-module category.
\begin{corollary}\label{center of HC}
There are equivalences of $E_2$-categories
\[
\mathcal Z(\mathcal D(G)) \simeq \mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \simeq \mathcal Z(\mathcal{HC})
\]
\end{corollary}
\begin{remark}
The forgetful functor
\[
\mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \simeq \mathcal Z(\mathcal{HC}) \to \mathcal{HC} \simeq (\mathfrak U\mathfrak g \otimes \mathfrak U\mathfrak g)^G
\]
takes a $G$-equivariant $D$-module on $G$ to its underlying $\mathfrak U\mathfrak g \otimes \mathfrak U\mathfrak g$-module via the algebra map $\mathfrak U\mathfrak g \otimes \mathfrak U\mathfrak g \to \mathfrak{D}_G$; the $G$-equivariant structure ensures that the adjoint action of $\mathfrak U\mathfrak g$ is integrable.
\end{remark}
As an example of a strong $G$-category, suppose $K$ is an algebraic subgroup of $G$. The homogeneous space $G/K$ carries a $G$-action, and thus $\mathcal D(G/K)$ is a strong $G$-category. The corresponding $\mathcal{HC}$-module is the category of Harish-Chandra $(\mathfrak g,K)$-modules
\[
(\mathfrak g,K)\mhyphen\mathrm{mod} = \mathfrak U(\mathfrak g)\mhyphen\mathrm{mod}^{K} \simeq \mathcal D(\wq GG/K)
\]
In particular, the symmetries of $(\mathfrak g,K)\mhyphen\mathrm{mod}$ as an $\mathcal{HC}$-module category can be identified as follows
\[
\End_{\mathcal{HC}}((\mathfrak g,K)\mhyphen\mathrm{mod}) \simeq \End_{\mathcal D(G)}(\mathcal D(G/K)) \simeq \mathcal D(G/K \times G/K)^G \simeq \mathcal D(\quot KGK)
\]
where the right hand side is considered as a monoidal category with respect to convolution.
\subsection{Graded and filtered lifts of categories}
In the case $G=\mathbb G_m$, Gaitsgory's Theorem \ref{BG1affine} says that for a given category $\mathcal C$ in $\mathbf{DGCat}$, the following data are equivalent:
\begin{itemize}
\item A quasi-coherent sheaf of categories on $pt/\mathbb G_m$ whose pullback to $pt\to pt/\mathbb G_m$ is identified with $\mathcal C$.
\item A weak $\mathbb G_m$ action on $\mathcal C$.
\item A module category $\mathcal C_{gr}$ for $\Rep(\mathbb G_m) = \mathbf{Vect}_{gr}$ with an identification $\mathcal C_{gr} \otimes_{\mathbf{Vect}_{gr}} \mathbf{Vect} \simeq \mathcal C$.
\end{itemize}
We will refer to the category $\mathcal C_{gr}$ as a \emph{graded lift} of $\mathcal C$, and to the forgetful functor $\mathcal C_{gr} \to \mathcal C$ as a \emph{degrading functor}. For example, if $A$ is a graded algebra (i.e., an algebra object in $\mathbf{Vect}_{gr}$), then the category $A\mhyphen\mathrm{mod}_{gr}$, consisting of dg-modules for $A$ equipped with an external grading, is a graded lift of $A\mhyphen\mathrm{mod}$.
Similarly, the 1-affineness of $\mathbb A^1/\mathbb G_m$\footnote{Recall that in this paper the action of $\mathbb G_m$ on a vector space (for example, $\mathbb A^1 = \Spec\mathbb C[t]$) has weight 2.} implies that the following data are equivalent:
\begin{itemize}
\item A quasi-coherent sheaf of categories on $\mathbb A^1/\mathbb G_m$ whose pullback to $pt \simeq (\mathbb A^1-\{0\})/\mathbb G_m\to \mathbb A^1/\mathbb G_m$ is identified with $\mathcal C$.
\item A module category $\mathcal C_{t,gr}$ for $QC(\mathbb A^1/\mathbb G_m) = \mathbb C[t]\mhyphen\mathrm{mod}_{gr}$ with an identification
\[
\mathcal C_{t,gr}[t^{-1}] = \mathbb C[t,t^{-1}]\mhyphen\mathrm{mod}_{gr} \otimes_{\mathbb C[t]\mhyphen\mathrm{mod}_{gr}} \mathcal C_{t,gr} \simeq \mathcal C.
\]
\end{itemize}
We refer to $\mathcal C_{t,gr}$ as a \emph{filtered lift} of the category $\mathcal C$. To such data we associate a graded category $\mathcal C_{t=0,gr}$, as well as an asymptotic category $\mathcal C_t$, which is a degrading of $\mathcal C_{t,gr}$. For example, if $A= \bigcup_{i\in \mathbb Z} A_{\leq i}$ is a filtered algebra, then the Rees algebra $A_t := \bigoplus_{i\in \mathbb Z} A_{\leq i}t^i$ is a graded $\mathbb C[t]$-algebra, and the category $A_t\mhyphen\mathrm{mod}_{gr}$ is a filtered lift of $A\mhyphen\mathrm{mod}$. The associated graded category is the category of graded modules for the associated graded algebra $A_{t=0}$. The associated asymptotic category $A_t\mhyphen\mathrm{mod}$ is given by (ungraded) modules for the Rees algebra.
\begin{example}
Suppose $X$ is an Artin stack; then the category $\mathcal D(X)=QC^!(X_{dR})$ has a filtered lift $\mathcal D_{t,gr}(X)$ given by $QC(X_{Hod})$, where $X_{Hod} \to \mathbb A^1/\mathbb G_m$ is the Hodge stack of $X$. The associated graded category is given by
\[
\mathcal D_{t=0,gr}(X) \simeq QC^!(T[1]X)_{gr} \simeq QC(T^\ast X)_{gr}
\]
In the case when $X$ is a smooth affine algebraic variety, we have $\mathcal D(X) = \mathfrak{D}_X\mhyphen\mathrm{mod}$, and $\mathcal D_{t,gr}(X)$ is equivalent to $\mathfrak{D}_{X,t}\mhyphen\mathrm{mod}_{gr}$, where $\mathfrak{D}_{X,t}$ is the Rees algebra of $\mathfrak{D}_{X}$, as explained above.
In particular, returning to the main setting of this section, we have monoidal categories $\mathcal{HC}_{t,gr}$, $\mathcal D_{t,gr}(G)$ which define filtered lifts of $\mathcal{HC}$ and $\mathcal D(G)$. The same proof as in \cite{dario} gives that $\mathcal{HC}_{t,gr}$ is Morita equivalent to $\mathcal D_{t,gr}(G)$, and the center of $\mathcal{HC}_{t,gr}$ is $\mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$.
\end{example}
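\begin{example}
As a concrete illustration of the previous example (with the weight conventions of this paper, so that $t$ has weight $2$), take $X = \mathbb A^1$, so $\mathfrak{D}_X = \mathbb C\langle x,\partial_x\rangle$ with the order filtration. Writing $\xi := \partial_x t$, the Rees algebra is
\[
\mathfrak{D}_{X,t} \simeq \mathbb C\langle x,\xi,t\rangle/\left([\xi,x]=t,\ [t,x]=[t,\xi]=0\right).
\]
Setting $t=0$ yields the commutative ring $\mathbb C[x,\xi] = \mathcal O(T^\ast\mathbb A^1)$, recovering the associated graded category $QC(T^\ast X)_{gr}$, while inverting $t$ recovers (a degrading of) $\mathcal D(X) = \mathfrak{D}_X\mhyphen\mathrm{mod}$.
\end{example}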
\subsection{Shearing}
In the examples relevant to this paper, the original filtered algebra $A$ will be supported in cohomological degree $0$ (i.e. it is an ordinary algebra, not a dg-algebra). In that case, the Rees algebra $A_t$ also sits in cohomological degree $0$, but carries a non-trivial external (weight) grading for which the Rees parameter $t$ has weight 2. We will be interested in another form of the Rees algebra $A_\hbar = \bigoplus A_{\leq i}\hbar^i$ where an element of homogeneous weight $i$ sits in cohomological degree $i$; in particular, the Rees parameter $\hbar$ now sits in cohomological degree 2. Note that $A_\hbar$ is a dg-algebra in general, even when the original algebra $A$ lives in cohomological degree $0$.
\begin{remark}
Throughout this paper, $\mathbb C[t]$ will always refer to a polynomial algebra in which the variable $t$ has cohomological degree $0$ and weight 2; on the other hand, $\mathbb C[\hbar]$ always refers to a polynomial algebra in which $\hbar$ has cohomological degree $2$ and weight $2$.
\end{remark}
The categories of graded $A_t$-modules and of graded $A_\hbar$-modules are related by the notion of \emph{shearing}. The fundamental result is:
\begin{lemma}
There is a symmetric monoidal autoequivalence of $\mathbf{Vect}_{gr}$ called \emph{shearing}
defined by
\[
M = \bigoplus_i M_i \mapsto M^{\fatslash} := \bigoplus_{i \in \mathbb Z} M_i[-i]
\]
with inverse
\[
N = \bigoplus N_i \mapsto N^\fatbslash := \bigoplus_{i \in \mathbb Z} N_i[i]
\]
\end{lemma}
Note that $\fatslash$ has the property that it takes an ordinary graded vector space (i.e. a graded dg-vector space concentrated in cohomological degree $0$) to a dg-vector space for which the weight on the cohomology agrees with the cohomological degree.
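\begin{example}
For instance, if $V$ is a one-dimensional graded vector space of weight $n$ concentrated in cohomological degree $0$, then $V^{\fatslash} = V[-n]$ is the same line of weight $n$, now placed in cohomological degree $n$. In particular, $\fatslash$ takes the graded algebra $\mathbb C[t]$ (with $t$ of weight $2$ in degree $0$) to the graded algebra $\mathbb C[\hbar]$ (with $\hbar$ of weight $2$ in degree $2$).
\end{example}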
\begin{remark}[Formality]\label{formality remark}
The shearing autoequivalence is related to a well-known criterion for formality of a dg-algebra.
Recall that taking cohomology objects defines a symmetric monoidal endofunctor $H^\ast$ of $\mathbf{Vect}$ which takes $A$ to $\bigoplus_{i\in \mathbb Z} H^i(A)[-i]$.\footnote{Note that the underlying functor of $H^\ast$ is equivalent to the identity functor, but it carries an interesting monoidal structure.} A (co)algebra object $A$ in $\mathbf{Vect}$ is called formal if there is an equivalence of (co)algebra objects $A\simeq H^\ast(A)$. Now suppose $R$ is an (co)algebra object of $\mathbf{Vect}$ which carries an external grading such that the weight of $H^i(R)$ is $i$. Then $R^\fatbslash$ is concentrated in cohomological degree $0$, and in particular $R^\fatbslash$ is formal: $R^\fatbslash \simeq H^0(R^\fatbslash)$. It follows that the original (co)algebra is formal: $R \simeq \bigoplus H^i(R)[-i]$.
\end{remark}
Twisting by the shearing autoequivalence leads to the following result:
\begin{lemma}
There is an equivalence of graded, symmetric monoidal categories
\[
\fatslash: \mathbb C[t]\mhyphen\mathrm{mod}_{gr} \simeq \mathbb C[\hbar]\mhyphen\mathrm{mod}_{gr}
\]
\end{lemma}
In particular, given a filtered lift of a category $\mathcal C$, there is a corresponding $\mathbb C[\hbar]\mhyphen\mathrm{mod}_{gr}$-module category $\mathcal C_{\hbar,gr}$ with an equivalence $\mathcal C_{t,gr} \simeq \mathcal C_{\hbar,gr}$. Thus there is a ``sheared'' degrading functor to the associated dg-asymptotic category:
\[
\xymatrix{
\mathcal C & \ar[l] \mathcal C_{t,gr} \ar[r] & \mathcal C_\hbar.
}
\]
\begin{remark}
In the case $\mathcal C = A\mhyphen\mathrm{mod}$ for a filtered ordinary algebra $A$ (i.e. concentrated in cohomological degree $0$), the categories $\mathcal C = A\mhyphen\mathrm{mod}$, $\mathcal C_{t,gr} = A_t\mhyphen\mathrm{mod}_{gr}$, and $\mathcal C_t$ each carry a natural $t$-structure, and each is equivalent to the dg derived category of the corresponding abelian category appearing as its heart. On the other hand, $\mathcal C_\hbar = A_\hbar\mhyphen\mathrm{mod}$ does not in general carry a $t$-structure making the degrading functor $\mathcal C_{t,gr} \to \mathcal C_\hbar$ $t$-exact.
\end{remark}
\subsection{Filtered categorical representation theory}
We have monoidal categories $\mathcal{HC}_{t,gr}$, $\mathcal D_{t,gr}(G)$ which are filtered lifts of $\mathcal{HC}$ and $\mathcal D(G)$. The same proof as in \cite{dario} gives the following:
\begin{theorem}\label{Morita equivalence filtered}
There is a Morita equivalence between the monoidal categories $\mathcal{HC}_{t,gr}$ and $\mathcal D_{t,gr}(G)$.
There is an $E_2$-monoidal equivalence of categories
\[
\mathcal Z(\mathcal D_{t,gr}(G)) \simeq \mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \simeq \mathcal Z(\mathcal{HC}_{t,gr})
\]
\end{theorem}
Applying the degrading functors, we get the corresponding statement for the $\hbar$-versions: $\mathcal Z(\mathcal{HC}_{\hbar}) \simeq \mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$.
\section{The Spherical Hecke Category and quantum Ng\^o action}\label{quantum section}
In this section, we translate the results of Section~\ref{Hecke section} through the Geometric Satake equivalence. Throughout this section $G$ will be a complex reductive group with a fixed Borel subgroup $B$, $N=[B,B]$, and $H=B/N$. The corresponding Lie algebras are denoted $\mathfrak g$, $\mathfrak b$, $\mathfrak n$, and $\mathfrak{h}$ respectively. The Langlands dual group will be denoted ${G^{\vee}}$, with loop group ${LG^\vee}$ and arc group ${LG^\vee_+}$.
\subsection{The Characteristic Polynomial Map and Kostant Section}\label{char poly section}
Following \cite[Section 2]{Ngo}, let us recall some constructions arising from the diagram of stacks
\begin{equation}\label{Kostant section}
\xymatrix{
\mathfrak g^\ast/G \ar[r]_{\chi} & \ar@/_1pc/[l]_{\kappa} \mathfrak c
}
\end{equation}
where $\chi \circ \kappa = id_{\mathfrak c}$. Here $\chi$ is the canonical map $\mathfrak g^\ast/G \to \mathfrak c := \Spec(\Sym(\mathfrak g)^G)$, which we call the characteristic polynomial map. The Kostant section $\kappa$ can be constructed as follows. Let $\psi: \mathfrak n \to \mathbb C$ denote a character which is non-zero on every simple root space, and denote by $\mu: \mathfrak g^\ast \to \mathfrak n^\ast$ the projection map (which is also the moment map for the coadjoint action of $N$ on $\mathfrak g^\ast$). Then Kostant \cite{Kostant Whittaker} showed that the action of $N$ on $\mu^{-1}(\psi)$ is free, and the composite
\[
\xymatrix{
\mathfrak g^\ast {/\! /}_{\psi} N := \mu^{-1}(\psi)/N \ar[r]& \mathfrak g^\ast/G \ar[r]^\chi& \mathfrak c
}
\]
is an isomorphism, providing the desired section $\kappa$ of $\chi$.
The restriction of $\chi$ to the regular locus $\mathfrak g^\ast_\mathrm{reg}/G \to \mathfrak c$ is a gerbe for the abelian group scheme $J \to \mathfrak c$, trivialized by $\kappa$ (thus $J = \mathfrak c \times_{\mathfrak g^\ast/G,\kappa}\mathfrak c$). Alternatively, $J$ may be realized as $\kappa^\ast I$ where $I$ is the inertia stack of $\mathfrak g^\ast/G$: informally
\[
I = \left\{(g,x) \in G\times \mathfrak g^\ast \mid coAd_g(x)=x \right\}/G
\]
Now consider the multiplicative group $\mathbb G_m$ acting on $\mathfrak g^\ast$ by scaling with weight 2 (throughout this paper, the scaling action of $\mathbb G_m$ on a vector space will always have weight 2, or equivalently, polynomial rings will be considered as graded rings generated in degree 2). This action commutes with the coadjoint action and the characteristic polynomial map $\chi$ is equivariant for the $\mathbb G_m$ action, where $\mathbb G_m$ acts on $\mathfrak c$ by twice the exponents of the Lie algebra $\mathfrak g$. It is not immediately clear that the Kostant section is equivariant for this $\mathbb G_m$ action, as $\mu^{-1}(\psi)/N$ is not preserved under scaling. However, as explained in \cite[Section 2]{Ngo}, there is a diagram of stacks
\[
\xymatrix{
\mathfrak g^\ast/G\times \mathbb G_m \ar[rr]_{\chi/\mathbb G_m} && \ar@/_1pc/[ll]_{\kappa/\mathbb G_m} \mathfrak c/\mathbb G_m
}
\]
where the equivariance data of $\kappa/\mathbb G_m$ is defined via the homomorphism $\mathbb G_m \to G\times \mathbb G_m$ given by $(2\rho,1)$, where $2\rho$ refers to the sum of the positive coroots.
In order to explain why the $2\rho$ appears above, let us give another construction of the Kostant slice, which has the additional advantage of not requiring a choice of the character $\psi$. Let $\mathfrak n' = \mathfrak n/[\mathfrak n,\mathfrak n]$ denote the maximal abelian quotient, so $\mathfrak{ch} = (\mathfrak n')^\ast$ is identified with the space of characters of $\mathfrak n$. The torus $T=B/N$ acts on $\mathfrak{ch}$, which has a one dimensional weight space for each negative simple root. There is a unique open dense orbit $\mathfrak{ch}^\circ$ on which $T$ acts simply transitively; the elements of $\mathfrak{ch}^\circ$ correspond precisely to the possible choices of $\psi$ above.
Any choice of $\psi \in \mathfrak{ch}^\circ$ defines a slice to the $T$-action on $\mu^{-1}(\mathfrak{ch}^\circ)/N$. Thus the composite
\[
\mu^{-1}(\psi)/N \to \mu^{-1}(\mathfrak{ch}^\circ)/N \to \mu^{-1}(\mathfrak{ch}^\circ)/B
\]
is an isomorphism. Note that $\mathbb G_m$ acts on the right hand side compatibly with the map to $\mathfrak g^\ast/G$. If we use the isomorphism above to translate the $\mathbb G_m$-action to $\mu^{-1}(\psi)/N$, we see that the map $\mu^{-1}(\psi)/N \to \mathfrak g^\ast/G$ has a $\mathbb G_m$-equivariant structure defined using the homomorphism $\mathbb G_m \to \mathbb G_m \times G$ given by $(1,2\rho)$, recovering the description above.
\subsection{The group scheme of regular centralizers}\label{J section}
Recall that the fiber product $J = \mathfrak c \times_{\mathfrak g^\ast/G,\kappa} \mathfrak c$, which is a priori a groupoid acting on $\mathfrak c$, is in fact a commutative group scheme over $\mathfrak c$. Its fiber over an element $a\in \mathfrak c$ is the centralizer of $\kappa(a) \in \mathfrak g^\ast$.
\begin{lemma}
We have an isomorphism of groupoids over $\mathfrak c$
\[
J \simeq \GITquot{N_\psi}{T^\ast G}{\lsub{\psi}N}
\]
\end{lemma}
\begin{proof}
(See also~\cite[Theorem 6.3]{teleman}.)
Note that the operation of Hamiltonian reduction is a composite of taking a closed fiber and a quotient by a group action. As both these operations commute with fiber products, we have
\[
J = (\mathfrak g^\ast {/\! /}_{\psi} N) \times_{\mathfrak g^\ast/G} (\mathfrak g^\ast {/\! /}_{\psi} N) \simeq \GITquot{N_\psi}{(\mathfrak g^\ast \times_{\mathfrak g^\ast/G} \mathfrak g^\ast)}{\lsub \psi N}
\]
compatible with the projection maps to $\mathfrak c$, as required.
\end{proof}
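\begin{example}
For orientation, consider $G = SL_2$ (identifying $\mathfrak g^\ast\simeq\mathfrak g$ via the trace form). Then $\mathfrak c\simeq\mathbb A^1$, and the Kostant section may be taken to consist of the companion matrices $\kappa(a) = \left(\begin{smallmatrix}0 & a\\ 1 & 0\end{smallmatrix}\right)$, with characteristic polynomial $\lambda^2 - a$. The regular centralizer of $\kappa(a)$ consists of the matrices $x\cdot 1 + y\,\kappa(a)$ of determinant $x^2 - ay^2 = 1$, so
\[
J \simeq \left\{(a,x,y) \mid x^2 - ay^2 = 1\right\} \subset \mathbb A^1\times SL_2,
\]
a family of one-dimensional commutative groups whose fiber over $a\neq 0$ is a torus $\mathbb G_m$ and whose fiber over $a=0$ degenerates to $\mathbb Z/2\times\mathbb G_a$.
\end{example}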
Note that $QC(J)$ has a monoidal structure arising from the convolution diagram
\[
\xymatrix{
J\times J & \ar[l] J\times_\mathfrak c J \ar[r]& J
}
\]
As the group structure on $J$ is commutative, this monoidal structure is naturally symmetric. As $J$ is affine, $QC(J) = \mathbb C[J]\mhyphen\mathrm{mod}$, where $\mathbb C[J]$ has the structure of a commutative and cocommutative Hopf algebra over $\mathbb C[\mathfrak c]$.
As in the previous subsection, we can identify $\mathfrak g^\ast{/\! /}_{\psi} N$ with $\mu^{-1}(\mathfrak{ch}^\circ)/B$. As the latter carries a $\mathbb G_m$-action, so does the fiber product
\[
J = \mu^{-1}(\mathfrak{ch}^\circ)/B \times_{\mathfrak g^\ast/G} \mu^{-1}(\mathfrak{ch}^\circ)/B
\]
In particular, the coordinate ring of $J$ is a graded ring (note that the grading is only in even degrees, but is generally unbounded in both positive and negative degrees). Thus we have a symmetric monoidal category $QC(J)_{gr}$ of graded $\mathbb C[J]$-modules (with respect to convolution).
\subsection{Bi-invariant differential operators: the quantum characteristic polynomial map}
Recall that the ring of bi-invariant differential operators
\[
\mathfrak Z\mathfrak g = (\mathfrak{D}_{G})^{G\times G} = \mathfrak U\mathfrak g^G
\]
is a commutative ring, which is identified with the center of left invariant differential operators $\mathfrak U\mathfrak g = (\mathfrak{D}_G)^G$. We write $\mathcal Z = \mathfrak Z\mathfrak g\mhyphen\mathrm{mod}$ for the symmetric monoidal category of modules. The filtration on $\mathfrak{D}_G$ by order of differential operator defines PBW filtrations on $\mathfrak U\mathfrak g$ and $\mathfrak Z\mathfrak g$, and we write $\mathfrak U_t\mathfrak g$ and $\mathfrak Z_t\mathfrak g$ for the corresponding Rees algebras. The Duflo/Harish-Chandra isomorphisms define equivalences of filtered algebras
\[
\mathfrak Z\mathfrak g \simeq \Sym(\mathfrak g)^G \simeq \Sym(\mathfrak{h})^W
\]
Thus we have $\mathcal Z_{t,gr} \simeq QC(\mathfrak c\times \mathbb A^1)_{gr}$.
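\begin{example}
For $\mathfrak g = \mathfrak{sl}_2$, the center is a polynomial ring $\mathfrak Z\mathfrak g = \mathbb C[\Omega]$ on the Casimir element $\Omega = ef + fe + \tfrac12 h^2$, which lies in PBW filtration degree $2$; its symbol is the invariant quadratic form generating $\Sym(\mathfrak g)^G \simeq \mathbb C[\mathfrak c]$, in accordance with the Harish-Chandra isomorphism above.
\end{example}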
There is a natural monoidal functor
\[
\xymatrix{
Char_{t,gr}: \mathcal Z_{t,gr} \ar[r] & \mathcal{HC}_{t,gr}
}
\]
given by
\[
\xymatrix{
\mathfrak Z_t\mathfrak g\mhyphen\mathrm{mod}_{gr} \ar[r]& \mathcal D_{t,gr}(\wqw GGG) \ar[r] &(\mathfrak U_t\mathfrak g\mhyphen\mathrm{mod}_{gr})^{G,w} \\
\mathfrak{M} \ar@{|->}[r] & \mathfrak{D}_{G,t} \otimes_{\mathfrak Z_t\mathfrak g} \mathfrak{M} \ar[r] & \mathfrak U_t\mathfrak g \otimes_{\mathfrak Z_t\mathfrak g} \mathfrak{M}
}
\]
Setting $t=0$, we recover the symmetric monoidal functor
\[
Char_{t=0,gr} = \chi_{gr}^\ast: QC(\mathfrak c)_{gr} \to QC(\mathfrak g^\ast/G)_{gr}
\]
Thus $Char_{t,gr}$ may be thought of as a quantization of the characteristic polynomial map $\chi$.
\subsection{Whittaker modules: the quantum Kostant slice}
We consider a twisted variant of the category $(\mathfrak g,K)\mhyphen\mathrm{mod}$ defined in Subsection \ref{categorical representation theory}.
Let $\psi:\mathfrak n =Lie(N) \to \mathbb C$ be a Lie algebra character. This gives rise to a monoidal functor $\mathcal D(N) \to \mathbf{Vect}$ (a ``categorical character''); we denote the corresponding $\mathcal D(N)$-module category $\mathbf{Vect}_\psi$. Given a strong $N$-category $\mathcal C$, we define the $(N,\psi)$-semi-invariants $\mathcal C^{N,\psi} \simeq \Hom_{\mathcal D(N)}(\mathbf{Vect}_\psi,\mathcal C)$. In particular, we have the category $\mathcal D(X/_\psi N) \simeq \mathcal D(X)^{N,\psi}$ of $(N,\psi)$-twisted equivariant $\mathcal D$-modules on an $N$-space $X$. We also have the category $(\mathfrak g,N,\psi)\mhyphen\mathrm{mod} = \mathfrak U\mathfrak g\mhyphen\mathrm{mod}^{N,\psi}$ of $(N,\psi)$-Whittaker modules studied in~\cite{Kostant Whittaker}, consisting of $\mathfrak U(\mathfrak g)$-modules with a compatible action of $N$, together with an identification of the derivative of the $N$-action with the $\mathfrak U(\mathfrak n)$-action twisted by $\psi$.
Given an object of $(\mathfrak g,N,\psi)\mhyphen\mathrm{mod}$, its space of (derived)
$N$-invariants
(known as Whittaker vectors) carries an action of the center $\mathfrak Z\mathfrak g$ of $\mathfrak U\mathfrak g$, and we have the following extension of the results of~\cite{Kostant Whittaker}, known as the Skryabin equivalence \cite{Premet} (see also \cite[Theorem 6.1]{Gan-Ginzburg}):
\begin{theorem}[Skryabin's equivalence]
Suppose $\psi$ is generic. Then the functor of taking Whittaker vectors is a $t$-exact equivalence of categories
\[
(\mathfrak g,N,\psi)\mhyphen\mathrm{mod} \xrightarrow{\sim} \mathcal Z
\]
\end{theorem}
\begin{remark}
The object $\mathfrak U\mathfrak g \otimes_{\mathfrak U\mathfrak n} \mathbb C_\psi$ is a compact generator of the category of Whittaker modules, which represents the functor of taking Whittaker invariants. The theorem can be interpreted as saying that $\mathfrak U\mathfrak g \otimes_{\mathfrak U\mathfrak n} \mathbb C_\psi$ is a projective generator of the abelian category of Whittaker modules, and its endomorphism ring $\mathfrak U\mathfrak g {/\! /}_{\psi} N$ is isomorphic to $\mathfrak Z\mathfrak g$.
\end{remark}
Using the Skryabin equivalence, we have an action of the monoidal category $\mathcal{HC}$ on $\mathcal Z \simeq (\mathfrak g,N,\psi)\mhyphen\mathrm{mod}$, which can be considered as a quantum form of the Kostant slice.
In \cite{BezFink}, the authors define a filtered lift of this $\mathcal{HC}$-module category, i.e. an action of $\mathcal{HC}_{t,gr}$ on $\mathcal Z_{t,gr}$, or equivalently, a monoidal functor
\[
\xymatrix{
Whit_{t,gr}: \mathcal{HC}_{t,gr} \to \End_{QC(\mathbb A^1_t/\mathbb G_m)}(\mathcal Z_{t,gr}) \simeq \mathfrak Z_t\mathfrak g \otimes_{\mathbb C[t]} \mathfrak Z_t\mathfrak g\mhyphen\mathrm{mod}_{gr}
}
\]
where the right hand side has a monoidal structure coming from identifying with $\mathfrak Z_t\mathfrak g$-bimodules in $\mathbb C[t]\mhyphen\mathrm{mod}$. Specializing to $t=0$ we recover the functor of restriction under the graded Kostant slice:
\[
\xymatrix{
QC(\mathfrak g^\ast/G)_{gr} \ar[r]^{\kappa^\ast_{gr}}& QC(\mathfrak c)_{gr} \ar[r]^{\Delta_\ast} & QC(\mathfrak c\times \mathfrak c)_{gr}
}
\]
\begin{remark}\label{remark Kazhdan}
Defining the grading on the quantum Kostant slice is not immediate, as the Whittaker equation $n.m = \psi(n)m$ is not homogeneous (for an element $m$ in a $\mathfrak U\mathfrak g$-module $M$, and $n \in \mathfrak n$). One approach is given by the \emph{Kazhdan filtration} on $\mathfrak U\mathfrak g$ (see \cite{Gan-Ginzburg}). The Rees algebra of the Kazhdan filtration is isomorphic to the usual (PBW) Rees algebra $\mathfrak U_t\mathfrak g$ as plain algebras, but the grading is defined by the homomorphism $(id,2\rho^\vee):\mathbb G_m \to G\times \mathbb G_m$. In particular, the category $\mathcal{HC}_{t,gr}$, which consists of $G\times \mathbb G_m$-weakly equivariant $\mathfrak U_t\mathfrak g$-modules, may be thought of in terms of the Rees algebra of either filtration. With respect to the Kazhdan filtration, the Whittaker equation is homogeneous of degree $0$, and thus we can define a graded lift of the category of Whittaker modules as required. Alternatively, one can proceed as in the classical case in Subsection \ref{char poly section} and consider a certain localization of the category of $B$-integral $\mathfrak U\mathfrak g$-modules with a factorization of the action of $\mathfrak U_t\mathfrak n$ through the quotient $\mathfrak U_t\mathfrak n/[\mathfrak n,\mathfrak n] \simeq \mathbb C[\mathfrak{ch} \times \mathbb A^1_t]$. One can use this latter approach to define an (ungraded) dg-version of Whittaker modules, i.e. an action of $\mathcal{HC}_\hbar$ on $\mathcal Z_\hbar$.
\end{remark}
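To illustrate the Kazhdan grading in the smallest example (a sketch; the degree conventions below follow \cite{Gan-Ginzburg} and may differ from other sources by an overall sign of $\rho^\vee$): let $\mathfrak g = \mathfrak{sl}_2$ with standard basis $(e,h,f)$, so that $2\rho^\vee = h$ and $\mathfrak g(j)$ denotes the $\mathrm{ad}(h)$-weight space. The Kazhdan convention places $\mathfrak g(j)$ in degree $j+2$ and $t$ in degree $2$; with these conventions the character $\psi$ is supported on the weight space $\mathfrak g(-2)$, which therefore sits in Kazhdan degree
\[
(-2) + 2 = 0 = \deg \psi(n),
\]
so the Whittaker equation $n.m = \psi(n)m$ is homogeneous of degree $0$, as asserted above.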
\subsection{The Whittaker category}
The Whittaker category is a monoidal category which quantizes the group scheme $J \to \mathfrak c$. To motivate the definition below, note that, by \cite{BFN},
\[
QC(J) = QC(\mathfrak c\times_{\mathfrak g^\ast/G} \mathfrak c) \simeq \End_{QC(\mathfrak g^\ast/G)}(QC(\mathfrak c))
\]
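As a concrete illustration of the group scheme being quantized (standard, and easily checked by hand), take $G = SL_2$. Then $\mathfrak c \simeq \mathbb A^1$, with coordinate $a$ corresponding to the characteristic polynomial $x^2 + a$, and the Kostant section may be taken to be the companion matrix $X_a = \begin{pmatrix} 0 & 1 \\ -a & 0 \end{pmatrix}$. Its centralizer in $SL_2$ consists of the matrices $\alpha I + \beta X_a$ of determinant $\alpha^2 + a\beta^2$, so
\[
J \simeq \{(a,\alpha,\beta) \in \mathbb A^3 \; : \; \alpha^2 + a\beta^2 = 1\},
\]
a group scheme over $\mathfrak c = \mathbb A^1_a$ whose fibers are one-dimensional tori for $a \neq 0$, degenerating to $\{\pm 1\}\times \mathbb G_a$ over $a = 0$.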
\begin{definition}
The \emph{Whittaker category} is the monoidal category $ \mathcal{W}h = \End_{\mathcal{HC}}(\mathcal Z)$. It has a filtered lift given by
$
\mathcal{W}h_{t,gr} = \End_{\mathcal{HC}_{t,gr}}(\mathcal Z_{t,gr}).
$
\end{definition}
Note that under the Morita equivalence of Theorem \ref{thm-morita}, the $\mathcal{HC}$-module category $(\mathfrak g,N,\psi)\mhyphen\mathrm{mod}$ corresponds to the $\mathcal D(G)$-module category $\mathcal D(G/_{\psi} N)$. Thus we identify
\[
\mathcal{W}h \simeq \End_{\mathcal D(G)}(\mathcal D(G/_{\psi} N)) \simeq \mathcal D(\quot{N_\psi}{G}{_\psi N}).
\]
Similarly, there is a filtered version
\[
\mathcal{W}h_{t,gr} \simeq \End_{\mathcal D_{t,gr}(G)}(\mathcal D_{t,gr}(G/_{\psi} N)) \simeq \mathcal D_{t,gr}(\quot{N_\psi}{G}{_\psi N}).
\]
(one should use the grading on $\mathfrak{D}_{G,t}$ induced by the Kazhdan filtration to make sense of this).
In general, the Drinfeld center of a monoidal category acts by endomorphisms on any module. In particular, there is a monoidal functor
\[
Whit_{t,gr}: \mathcal Z(\mathcal{HC}_{t,gr}) \simeq \mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \to \mathcal{W}h_{t,gr}
\]
Unwinding the definitions, we see that this functor is given by a composite
\[
\mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \to \mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} N) \to \mathcal D_{t,gr}(\quot{N_\psi}{G}{_\psi N}) \simeq \mathfrak{Wh}_{t}\mhyphen\mathrm{mod}_{gr}
\]
which is identified with the Whittaker functor appearing in \cite{ginzburg whittaker} (see the next section for the algebra $\mathfrak{Wh}_t$).
\subsection{Bi-Whittaker differential operators}
The category $\mathcal{W}h \simeq \mathcal D(N {}_\psi{\backslash} G /_{\psi} N)$ contains a distinguished object $$\mathfrak{D}_{\quot{N_\psi}{G}{_\psi N}} = \mathfrak{D}_G \otimes_{\mathfrak U\mathfrak n^L \otimes \mathfrak U\mathfrak n^R}( \mathbb C_{-\psi} \otimes \mathbb C_{\psi})$$
The Skryabin equivalence implies that this object (which represents the functor of taking left and right Whittaker vectors) is a compact generator of $\mathcal D(\quot{N_\psi}{G}{_\psi N})$, which moreover is a projective object in the heart of the $t$-structure. Consider its endomorphism ring, which is identified with the ring of bi-Whittaker differential operators studied in \cite{ginzburg whittaker}
\[
\mathfrak{Wh} := \End_{\mathcal D(\quot{N_\psi}{G}{_\psi N})}(\mathfrak{D}_{\quot{N_\psi}{G}{_\psi N}}) \simeq \left(\mathfrak{D}_{\quot{N_\psi}{G}{_\psi N}}\right)^{N\times N}
\]
Applying the same argument in the filtered setting, we get a graded algebra $\mathfrak{Wh}_t$ which is the Rees algebra with respect to the Kazhdan filtration (see \cite{ginzburg whittaker}) on $\mathfrak{Wh}$\footnote{Warning: the filtration on $\mathfrak{Wh}$ (or equivalently, the grading on the Rees algebra $\mathfrak{Wh}_t$) is unbounded in both directions in general.}.
We record these results in the following proposition.
\begin{proposition}
There are equivalences of categories
\[
\mathcal{W}h \simeq \mathcal D(\quot{N_\psi}{G}{_\psi N}) \simeq \mathfrak{Wh}\mhyphen\mathrm{mod}
\]
with a corresponding filtered lift
\[
\mathcal{W}h_{t,gr} \simeq \mathcal D_{t,gr}(N {}_\psi{\backslash} G /_{\psi} N) \simeq \mathfrak{Wh}_t\mhyphen\mathrm{mod}_{gr}
\]
\end{proposition}
The monoidal structure on $\mathcal{W}h_{t,gr}$ can be recovered from a $\mathfrak Z_t\mathfrak g$-bialgebroid structure on the ring $\mathfrak{Wh}_t$. First note that there is a map of rings $\mathfrak Z_{t}\mathfrak g \to \mathfrak{Wh}_t$, and the corresponding forgetful functor on modules coincides with the manifestly monoidal functor
\[
\End_{\mathcal{HC}_{t,gr}}(\mathcal Z_{t,gr}) \to \End_{\mathcal Z_{t,gr}}(\mathcal Z_{t,gr}) \simeq \mathcal Z_{t,gr}
\]
where we use the quantum characteristic polynomial map $\mathcal Z_{t,gr} \to \mathcal{HC}_{t,gr}$. Thus the corresponding monad acting on $\mathcal Z_{t,gr}$ is just given by the graded $\mathfrak Z_t\mathfrak g$-ring $\mathfrak{Wh}_t$ (see \ref{monad vs algebra} for details on how a monad acting on a module category can be regarded as a ring). The monoidal structure on the forgetful functor endows the monad itself with an oplax monoidal structure, which, according to \cite{Bohm}, precisely corresponds to a (graded) $\mathfrak Z_t\mathfrak g$-bialgebroid structure on the (graded) $\mathfrak Z_t\mathfrak g$-ring $\mathfrak{Wh}_t$. This bialgebroid in fact is a Hopf algebroid (though we will not need this fact) which specializes to the commutative and cocommutative graded Hopf algebra $\mathbb C[J]$ after setting $t=0$.
One can recover the monoidal structure on $\mathfrak{Wh}_t\mhyphen\mathrm{mod}_{gr}$ naturally from the bialgebroid structure using the comultiplication in the usual way.
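Schematically (and suppressing the Takeuchi-product subtleties, which \cite{Bohm} treats carefully), if $\Delta : \mathfrak{Wh}_t \to \mathfrak{Wh}_t \otimes_{\mathfrak Z_t\mathfrak g} \mathfrak{Wh}_t$ denotes the comultiplication, then for graded $\mathfrak{Wh}_t$-modules $M$ and $N$ the tensor product $M \otimes_{\mathfrak Z_t\mathfrak g} N$ carries the $\mathfrak{Wh}_t$-action
\[
a \cdot (m \otimes n) = \sum a_{(1)} m \otimes a_{(2)} n
\]
in Sweedler notation, and the forgetful functor to graded $\mathfrak Z_t\mathfrak g$-modules is tautologically monoidal for this structure.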
\begin{remark}\label{abelian vs derived}
We were not able to locate a reference for bialgebroids in the dg/homotopical setting. However, our present situation may be expressed purely in terms of the usual theory in discrete abelian categories as follows. The monoidal (dg)-category $\mathcal{W}h_{t,gr}$ can be recovered as the dg derived category of the heart of its $t$-structure, which is a right-exact Grothendieck abelian monoidal category. The forgetful functor defines an exact, monadic, monoidal functor to the abelian category of graded $\mathfrak Z_t\mathfrak g$-modules, so the results in \cite{Bohm} apply verbatim to recover the monoidal structure on the abelian category of graded $\mathfrak{Wh}_t$-modules (and thus on the dg-category $\mathcal{W}h_t$) in terms of the $\mathfrak Z_t\mathfrak g$-bialgebroid structure.
\end{remark}
As a consequence of Remark \ref{abelian vs derived}, we obtain the following:
\begin{proposition}\label{cocommutative}
If the (discrete) graded bialgebroid $\mathfrak{Wh}_t$ is cocommutative, then $\mathcal{W}h_{t,gr}$ (and thus $\mathcal{W}h$) carries a symmetric monoidal structure.
\end{proposition}
\begin{proof}
The abelian category of modules for a cocommutative bialgebroid over a commutative ring is naturally symmetric monoidal. This structure carries through to the derived category, as required.
\end{proof}
\begin{remark}
In the next section we will see that the $\mathfrak Z_t\mathfrak g$-bialgebroid $\mathfrak{Wh}_t$ is indeed cocommutative, and thus the Whittaker category $\mathcal{W}h = \mathfrak{Wh}\mhyphen\mathrm{mod}$ is symmetric monoidal.
\end{remark}
\subsection{Derived geometric Satake and the Kostant/Whittaker category}\label{subsection-derived satake}
In this subsection we will apply the results of Section \ref{Hecke section} in the setting of the equivariant Grassmannian for $G^\vee$ to derive results about the Whittaker category via the derived geometric Satake theorem of Bezrukavnikov and Finkelberg \cite{BezFink}.
We take $X=pt/{LG^\vee_+}\rtimes \G_m$, $\underline{\mathcal{G}r}^\vee={LG^\vee_+}\backslash {LG^\vee}/{LG^\vee_+}\rtimes \mathbb G_m$. Note that $X$ is an ind-stack and $\underline{\mathcal{G}r}^\vee$ an ind-proper groupoid acting on $X$. Let $\mathcal H_\hbar = \cDv_{hol}(\underline{\mathcal{G}r}^\vee)$ denote the spherical Hecke category, and $\mathcal R_\hbar = \cDv_{hol}(X)$. Note that there is an isomorphism $R_\hbar=H^\ast(X) \simeq \mathfrak Z_\hbar\mathfrak g$, thus we may identify $\mathcal R_\hbar = H^\ast(X)\mhyphen\mathrm{mod}$ with $\mathcal Z_\hbar = \mathfrak Z_\hbar\mathfrak g\mhyphen\mathrm{mod}$.
\begin{theorem}~\cite{BezFink} \label{Bez-Fink}
There is an equivalence of monoidal categories $\mathcal H_\hbar \simeq \mathcal{HC}_\hbar$ giving rise to a commutative diagram
\[
\xymatrix{
\mathcal Z_{t,gr} \ar[d]_{Char_{t,gr}} \ar[r] &\mathcal Z_\hbar \ar[r]^\sim \ar[d] & \ar[l] \mathcal R_{\hbar} \ar[d]^{\mathfrak{d}}\\
\mathcal{HC}_{t,gr} \ar[r] \ar[d]_{Kost_{t,gr}} &\mathcal{HC}_\hbar \ar[r]^\sim \ar[d] & \ar[l] \mathcal H_{\hbar} \ar[d]^{\mathfrak a} \\
\End(\mathcal Z_{t,gr}) \ar[r]&\End(\mathcal Z_\hbar) \ar[r]^\sim & \ar[l] \End(\mathcal R_{\hbar})
}\]
where the left column is a graded lift of the right two.
\end{theorem}
\begin{remark}
Given algebra objects $S$ and $A$ (in some closed symmetric monoidal category $\mathcal C$, say) we say that $A$ is an augmented $S$-ring if there is an algebra homomorphism $S\to A$, and $S$ carries the structure of an $A$-module, such that the composite
\[
S \to A \to \End(S)
\]
is the structure map for $S$ as a module over itself. The Geometric Satake Theorem may be interpreted as saying that $\mathcal{HC}_\hbar$ and $\mathcal H_\hbar$ are equivalent as augmented $(\mathcal Z_\hbar \simeq \mathcal R_\hbar)$-rings.
\end{remark}
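A basic example of an augmented $S$-ring to keep in mind (which prefigures the abelian case treated below): if a discrete group $\Gamma$ acts on a commutative ring $S$, then $A = S \rtimes \Gamma$ is an augmented $S$-ring. Indeed, $S$ is an $A$-module via
\[
(s\,\gamma)\cdot s' = s\,\gamma(s'), \qquad s, s' \in S, \; \gamma \in \Gamma,
\]
and the composite $S \to S\rtimes\Gamma \to \End(S)$ is the action of $S$ on itself by multiplication. The convolution algebras $H_{\Lambda,\hbar}$ appearing in the abelian example of the final subsection are of this form.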
Recall the Kostant category $\mathcal K_\hbar$ is the symmetric monoidal category $\cDv_{hol}(X)^{\underline{\mathcal{G}r}^\vee}$ which is monoidally identified with $\End_{\mathcal H_\hbar}(\mathcal R_{\hbar})$. On the other hand, the (dg-asymptotic) Whittaker category is the monoidal category given by $\mathcal{W}h_{\hbar} = \End_{\mathcal{HC}_{\hbar}}(\mathcal Z_{\hbar})$. In particular, Theorem \ref{Bez-Fink} implies that there is an equivalence between $\mathcal{W}h_{\hbar}$ and $\mathcal K_{\hbar}$; thus, $\mathcal{W}h_{t,gr}$ defines a graded lift of $\mathcal K_{\hbar}$. Thus we obtain the following:
\begin{corollary}\label{corollarykostantwhittaker}
There is a commutative diagram of monoidal categories
\[
\xymatrix{
\mathcal{W}h_{t,gr} \ar[d] \ar[r] & \mathcal{W}h_\hbar \ar[r]^\sim \ar[d] & \ar[l] \mathcal K_{\hbar} \ar[d] \\
\mathcal Z_{t,gr} \ar[r] & \mathcal Z_\hbar \ar[r]^\sim & \mathcal R_{\hbar}
}
\]
where the horizontal arrows are monoidal degrading functors. In particular, the dg-Whittaker category $\mathcal{W}h_\hbar$ is symmetric monoidal.
\end{corollary}
Recall that the Drinfeld center of a monoidal category acts by endomorphisms on any module category. Identifying these actions on either side of Theorem \ref{Bez-Fink} we obtain:
\begin{corollary}\label{corollary action Kostant}
There is a commutative diagram:
\[
\xymatrix{
\mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \ar[r] \ar[d]^{Whit_{t,gr}} & \mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \ar[r] \ar[d]^{Whit_\hbar} & \ar[l] \mathcal Z(\mathcal H_\hbar) \ar[d]^\mathfrak a \\
\mathcal{W}h_{t,gr} \ar[r] & \mathcal{W}h_{\hbar} \ar[r] & \ar[l] \mathcal K_\hbar
}
\]
\end{corollary}
Using Corollary \ref{corollarykostantwhittaker}, we can recover a theorem of Bezrukavnikov-Finkelberg-Mirkovic \cite{BFM}, identifying the homology convolution algebra of the affine Grassmannian.
\begin{corollary}\label{corollaryBFimpliesBFM}\cite{BFM}
There is an equivalence of graded Hopf algebroids
\[
\mathfrak{Wh}_t \simeq H^\ast(H_\hbar) \simeq H^{{LG^\vee_+} \rtimes \mathbb G_m}_{-\ast}({\cG r^{\vee}})
\]
In particular, setting $t=0$ we have an equivalence of Hopf algebras:
\[
\mathbb C[J] \simeq H_{-\ast}^{{LG^\vee_+}}({\cG r^{\vee}})
\]
\end{corollary}
\begin{proof}
We have $\mathcal{W}h_\hbar = \mathfrak{Wh}_\hbar\mhyphen\mathrm{mod}$ and $\mathcal K_\hbar = H_\hbar\mhyphen\mathrm{mod}$, so Corollary \ref{corollarykostantwhittaker} gives rise to an equivalence $\mathfrak{Wh}_\hbar \simeq H_\hbar$ of $(\mathfrak Z_\hbar\mathfrak g\simeq R_\hbar)$-rings (or equivalently monads acting on $\mathcal Z_\hbar \simeq \mathcal R_\hbar$). Moreover, as the forgetful functors carry a monoidal structure, the ring objects $\mathfrak{Wh}_\hbar$ and $H_\hbar$ admit $(\mathfrak Z_\hbar\mathfrak g\simeq R_\hbar)$-bialgebroid structures, and the equivalence $\mathfrak{Wh}_\hbar \simeq H_\hbar$ respects this structure.
The existence of the graded lift $\mathcal{W}h_{t,gr} \to \mathcal{W}h_\hbar$ means that the dg-bialgebra $\mathfrak{Wh}_\hbar$ arises from the graded bialgebra $\mathfrak{Wh}_t$ (which we consider to be in cohomological degree 0) by shearing so that the $\mathbb G_m$-weight is equal to the cohomological degree. In particular, $\mathfrak{Wh}_\hbar$ and thus $H_\hbar$ is formal as a bialgebroid by Remark \ref{remark formality}. In other words, the homology of $\mathfrak{Wh}_\hbar$ (and thus of $H_\hbar$) is isomorphic to $\mathfrak{Wh}_t$ as graded bialgebroids, as claimed.
\end{proof}
\begin{corollary}\label{corollarysymmetricmonoidal}
The monoidal category $\mathcal{W}h_{t,gr}$ upgrades to a symmetric monoidal category. In particular, $\mathcal{W}h = \mathcal D(\quot{N_\psi}{G}{_\psi N})$ is symmetric monoidal.
\end{corollary}
\begin{proof}
By Corollary \ref{corollaryBFimpliesBFM} the $\mathfrak Z_t\mathfrak g$-coalgebra structure on $\mathfrak{Wh}_t \simeq H_{-\ast}(\underline{\mathcal{G}r}^\vee)$ is cocommutative (it is the ``cup coproduct'' arising from pushforward under diagonal maps). Thus the result follows from Proposition \ref{cocommutative}.
\end{proof}
\subsection{The Quantum Ng\^o map}
Applying Theorem \ref{groupoid center} to the Ind-proper groupoid $\underline{\mathcal{G}r}^\vee \rightrightarrows X$, and interpreting the results using Theorem \ref{Bez-Fink}, Corollary \ref{corollarykostantwhittaker}, and Corollary \ref{corollary action Kostant} we obtain the following result.
\begin{theorem}\label{thm-quantum-ngo-hbar}
There is a canonical $E_2$-morphism $Ng\hat{o}_\hbar:\mathcal{W}h_\hbar\to \mathcal Z(\mathcal{HC}_\hbar)$ which fits in to a diagram:
\[
\xymatrix{
\mathcal{W}h_\hbar \ar[r]_-{Ng\hat{o}_\hbar}\ar[d]_-{}& \ar@/_1pc/_-{Whit_\hbar}[l] \mathcal D(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \ar[d]^{}\\
\mathcal Z_\hbar \ar[r]_-{Char_\hbar} & \mathcal{HC}_\hbar }
\]
\end{theorem}
Let us try to understand the functor $Ng\hat{o}_\hbar$ more explicitly.
Composing $Ng\hat{o}_\hbar$ with the monoidal forgetful functor $\mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \simeq \mathcal Z(\mathcal D_\hbar(G)) \to \mathcal D_\hbar(G)$, we obtain a functor
\[
\Ngo^\sim_\hbar: \mathcal{W}h_\hbar = \mathfrak{Wh}_{\hbar}\mhyphen\mathrm{mod} \to \mathfrak{D}_\hbar\mhyphen\mathrm{mod}
\]
All our constructions have taken place in the category $\mathbf{DGCat}_k$, and all functors appearing are continuous. It follows that the functor $\Ngo^\sim_\hbar$ above is represented by a $\mathfrak{D}_{G,\hbar}-\mathfrak{Wh}_\hbar$-bimodule $\mathfrak{B}_\hbar$. Applying the forgetful functors given by the vertical arrows in Theorem \ref{thm-quantum-ngo-hbar}, we see that there is an equivalence of underlying $\mathfrak U_\hbar\mathfrak g\otimes \mathfrak U_\hbar\mathfrak g - \mathfrak Z_\hbar\mathfrak g$-bimodules:
\[
\mathfrak{B}_\hbar \simeq \Ngo^\sim_\hbar(\mathfrak{Wh}_\hbar) \simeq Char_\hbar(\mathfrak{Wh}_\hbar) \simeq \mathfrak U\mathfrak g_{\hbar}\otimes_{\mathfrak Z_\hbar\mathfrak g}\mathfrak{Wh}_\hbar
\]
In other words, there is a left $\mathfrak{D}_{\hbar,G}$-module structure on $\mathfrak U\mathfrak g_{\hbar}\otimes_{\mathfrak Z_\hbar\mathfrak g}\mathfrak{Wh}_\hbar$ commuting with the right $\mathfrak{Wh}_\hbar$ action. In particular, there is a map of $\mathfrak U_\hbar\mathfrak g$-bimodules
\[
\mathfrak{D}_{G,\hbar} \to \mathfrak U\mathfrak g_{\hbar}\otimes_{\mathfrak Z_\hbar\mathfrak g}\mathfrak{Wh}_\hbar
\]
given by acting on the distinguished element $1\otimes 1$. This map can be thought of as the quantization of the Ng\^o morphism
\[
\chi^\ast(J) \to T^\ast G
\]
(this will be explained more precisely in Remark \ref{remark classical Ngo}).
\subsection{Graded lift of the quantum Ng\^o map}\label{ss graded lift}
The goal of this subsection is to sketch a proof of the following result, which asserts the existence of a graded lift of the functor $Ng\hat{o}_\hbar$ given by Theorem \ref{thm-quantum-ngo-hbar}.
\begin{theorem}\label{thm-quantum-ngo-t}
There is a $t$-exact $E_2$-monoidal functor
\[
Ng\hat{o}_{t,gr}: \mathcal{W}h_{t,gr} \to \mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)
\]
lifting the monoidal functor $Char_{t,gr}: \mathcal Z_{t,gr} \to \mathcal{HC}_{t,gr}$.
\end{theorem}
The idea is to deduce Theorem \ref{thm-quantum-ngo-t} from Theorem \ref{thm-quantum-ngo-hbar} using certain formality properties.
Let us first construct the composite
\[
\Ngo^\sim_{t,gr}: \mathcal{W}h_{t,gr} \longrightarrow \mathcal D_{t,gr}(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G) \longrightarrow \mathcal D_{t,gr}(G)
\]
Such a functor will be represented by a certain graded $\mathfrak{D}_{G,t}-\mathfrak{Wh}_t$-bimodule, $\mathfrak{B}_t$, whose underlying $\mathfrak U_t\mathfrak g\otimes \mathfrak U_t\mathfrak g - \mathfrak Z_t\mathfrak g$-bimodule is isomorphic to $\mathfrak U_t\mathfrak g \otimes_{\mathfrak Z_t\mathfrak g} \mathfrak{Wh}_t$. Recall from the comments following Theorem \ref{thm-quantum-ngo-hbar} that we have a corresponding dg-bimodule $\mathfrak{B}_\hbar$. Let us note the following:
\begin{lemma}\label{lemma formal}
The object $\mathfrak{B}_\hbar$ is formal as a $\mathfrak{D}_{G,\hbar}-\mathfrak{Wh}_\hbar$-bimodule.
\end{lemma}
\begin{proof}
By construction, $\mathfrak U_\hbar\mathfrak g$, $\mathfrak Z_\hbar\mathfrak g$, and $\mathfrak{Wh}_\hbar$ all carry compatible pure external gradings (i.e. the weight of the external grading on the $i$th cohomology object is equal to $i$). Note also that $\mathfrak U_\hbar\mathfrak g$ is free as a $\mathfrak Z_\hbar\mathfrak g$-module, by a theorem of Kostant. Thus the $\mathfrak U_\hbar\mathfrak g\otimes \mathfrak U_\hbar\mathfrak g - \mathfrak Z_\hbar\mathfrak g$-bimodule
\[
\mathfrak U\mathfrak g_{\hbar}\otimes_{\mathfrak Z_\hbar\mathfrak g}\mathfrak{Wh}_\hbar
\]
carries a pure grading, so in particular is formal as a $\mathfrak U_\hbar\mathfrak g\otimes \mathfrak U_\hbar\mathfrak g - \mathfrak Z_\hbar\mathfrak g$-bimodule (Kostant's theorem implies that the tensor product as graded algebras is the same as the tensor product in the dg-derived category). On the other hand, we have an equivalence $\mathfrak{D}_{G,\hbar} \simeq \mathcal O(G) \rtimes \mathfrak U_\hbar\mathfrak g$, where $\mathcal O(G)$ is in pure degree $0$. It follows that the $\mathfrak U_\hbar\mathfrak g$-module isomorphism from $\mathfrak{B}_\hbar$ to its homology is automatically a $\mathfrak{D}_{G,\hbar}$-module isomorphism, as required.
\end{proof}
Lemma \ref{lemma formal} is equivalent to the statement that $\mathfrak{B}_{\hbar}$ carries a pure external grading as a (dg) $\mathfrak{D}_{G,\hbar}-\mathfrak{Wh}_\hbar$-bimodule. In particular, $\Ngo^\sim_\hbar$ lifts to a functor
\[
\Ngo^\sim_{\hbar,gr}: \mathcal{W}h_{\hbar,gr} \to \mathcal D_{\hbar,gr}(G)
\]
or equivalently, after shearing,
\[
\Ngo^\sim_{t,gr}: \mathcal{W}h_{t,gr} \to \mathcal D_{t,gr}(G)
\]
Note that $\Ngo^\sim_{t,gr}$ is $t$-exact as it is a lift of $Char_{t,gr}$, which is $t$-exact due to the flatness of $\mathfrak U_t\mathfrak g$ over $\mathfrak Z_t\mathfrak g$.
\begin{remark}
The functor $\Ngo^\sim_{t,gr}$ is represented by a graded $\mathfrak{D}_{G,t}-\mathfrak{Wh}_t$-bimodule $\mathfrak{B}_t$ (sitting in cohomological degree $0$); this is just the formal (dg)-bimodule $\mathfrak{B}_\hbar$ where the cohomological grading is reinterpreted as an external grading.
\end{remark}
Deducing Theorem \ref{thm-quantum-ngo-t} boils down to equipping the bimodule $\mathfrak{B}_{t}$ with extra structure, corresponding to the fact that the functor $\Ngo^\sim_{t,gr}$ factors through the center, and the factorization $Ng\hat{o}_{t,gr}$ carries an $E_2$-monoidal structure. To simplify matters, let us consider the restriction of the (for now, still hypothetical) functor $Ng\hat{o}_{t,gr}$ to the subcategory $\mathcal{W}h_{t,gr}^{proj,\heartsuit}$ of $\mathcal{W}h_{t,gr}$ consisting of graded, projective $\mathfrak{Wh}_t$-modules in the heart of the $t$-structure. Such objects are, in particular, projective modules (and thus free by the Quillen-Suslin theorem) over $\mathfrak Z_t\mathfrak g$; we will denote the category of such modules by $\mathcal Z_{t,gr}^{fr,\heartsuit}$. Let us also consider the category $\mathcal{HC}_{t,gr}^{fr,\heartsuit}$ of graded Harish-Chandra bimodules (in the heart of the $t$-structure) which are free as left (or equivalently, right) modules over $\mathfrak U_t\mathfrak g$ (such objects are necessarily of the form $\mathfrak U_t\mathfrak g \otimes V(k)$ where $V$ is a representation of $G$, and $(k)$ indicates grading shift). Note that $\mathcal{HC}_{t,gr}^{fr,\heartsuit}$ is a discrete, exact category which sits fully faithfully in the dg-category $\mathcal{HC}_{t,gr}$, and similarly for $\mathcal Z_{t,gr}^{fr,\heartsuit}$ and $\mathcal{W}h_{t,gr}^{proj}$; these categories form the heart of a weight structure on the corresponding dg-categories, in the sense of \cite{Bondarko}.
The graded Ng\^o functor (assuming it exists) must restrict to a braided monoidal functor
\[
Ng\hat{o}_{t,gr}^{fr,\heartsuit}: \mathcal{W}h_{t,gr}^{proj,\heartsuit} \to \mathcal Z(\mathcal{HC}_{t,gr}^{fr})
\]
On the other hand, the dg-category $\mathcal{W}h_{t,gr}$ can be recovered as the category of complexes in the additive category $\mathcal{W}h_{t,gr}^{proj}$:
\[
\mathcal{W}h_{t,gr} \simeq K(\mathcal{W}h_{t,gr}^{proj})
\]
Assuming certain properties of the functor $K$ which takes an additive category to its category of complexes, one may recover the functor $Ng\hat{o}_{t,gr}$ from its restriction to $\mathcal{W}h_{t,gr}^{proj,\heartsuit}$.
Now let us explain how to construct $Ng\hat{o}_{t,gr}^{fr,\heartsuit}$ from $Ng\hat{o}_\hbar$ (which was constructed in Theorem \ref{thm-quantum-ngo-hbar}). Consider the subcategory
$
\mathcal{HC}_\hbar^{fr}
$
consisting of direct sums and cohomological shifts of objects of the form $\mathfrak U_\hbar\mathfrak g \otimes V$, where $V$ is a finite dimensional representation of $G$. Note that $\mathcal{HC}_{\hbar}^{fr}$ is a non-stable, additive, $\mathbb C$-linear $\infty$-category, and its homotopy category $H^0\mathcal{HC}_{\hbar}^{fr}$ is a discrete additive category. Similarly, we define $\mathcal Z_\hbar^{fr}$ to be the subcategory consisting of direct sums and shifts of $\mathfrak Z_\hbar\mathfrak g$, and $\mathcal{W}h_{\hbar}^{fr}$ to be the full subcategory consisting of direct sums and shifts of $\mathfrak{Wh}_\hbar$. Finally, let $\mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)^{fr}$ be the full subcategory of $\mathcal D_\hbar(G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G)$ such that the essential image of the forgetful functor to $\mathcal{HC}_\hbar$ is contained in $\mathcal{HC}_\hbar^{fr}$.
\begin{lemma}\label{lemma homotopy category}
There is a monoidal equivalence of categories
\[
H^0(\mathcal{HC}_\hbar^{fr}) \simeq \mathcal{HC}_{t,gr}^{fr,\heartsuit}
\]
Analogous results hold for $\mathcal Z^{fr}$, $\mathcal{W}h^{fr}$, and $\mathcal Z(\mathcal{HC}^{fr})$ (as braided monoidal categories).
\end{lemma}
\begin{proof}[Proof (Sketch)]
The objects $\mathfrak U_\hbar\mathfrak g \otimes V[k]$ form a skeleton of $H^0\mathcal{HC}_\hbar^{fr}$, where $V$ ranges over a skeleton of $\Rep(G)^\heartsuit$, and $k$ ranges over the integers. These objects correspond to $\mathfrak U_t\mathfrak g\otimes V (k)$ in $\mathcal{HC}_{t,gr}^{fr}$. We observe that the morphism sets agree, as required.
\end{proof}
\begin{remark}
Note that the cohomological shift functor $[1]$ is taken to the grading shift $(1)$ under the equivalences of Lemma \ref{lemma homotopy category}.
\end{remark}
Note that $Char_\hbar$ takes objects of $\mathcal Z_\hbar^{fr}$ to $\mathcal{HC}_\hbar^{fr}$; as $\mathfrak{Wh}_\hbar$ is itself free over $\mathfrak Z_\hbar\mathfrak g$, we see that $Ng\hat{o}_\hbar$ (which lifts $Char_\hbar$) takes objects of $\mathcal{W}h_\hbar^{fr}$ to $\mathcal Z(\mathcal{HC}_\hbar^{fr})$. In fact, we claim that $Ng\hat{o}_\hbar$ restricts to an $E_2$-monoidal functor
\[
Ng\hat{o}_\hbar^{fr}: \mathcal{W}h_{\hbar}^{fr} \to \mathcal Z(\mathcal{HC}_\hbar^{fr})
\]
Taking the homotopy categories and using Lemma \ref{lemma homotopy category}, we obtain a braided monoidal functor
\[
Ng\hat{o}_{t,gr}^{fr,\heartsuit}: \mathcal{W}h_{t,gr}^{fr,\heartsuit} \to \mathcal Z(\mathcal{HC}_{t,gr}^{fr,\heartsuit})
\]
The functor $Ng\hat{o}_{t,gr}$ is then obtained by applying the functor $K$.
\begin{remark}\label{mixed remark}
Under geometric Satake, the subcategory $\mathcal{HC}_\hbar^{fr}$ corresponds to the full subcategory $\mathcal H^{pure}_\hbar$ of $\mathcal H_\hbar = \cDv_{hol}(\underline{\mathcal{G}r})$ consisting of direct sums of intersection cohomology complexes $IC_\mu$ on orbits $\underline{\mathcal{G}r}_\mu$ (in fact the equivalence is proved by first identifying these subcategories). The graded lift $\mathcal{HC}_{t,gr}$ of $\mathcal{HC}_\hbar$ corresponds to a \emph{mixed} Satake category. The word ``mixed'' is used in the sense of Beilinson-Ginzburg-Soergel \cite{BGS} (see also \cite{riche} for a mixed version of the derived geometric Satake equivalence in the modular setting); the weight of the grading corresponds to weight as in Deligne's theory of weights or in mixed Hodge theory. The reconstruction of the functor $Ng\hat{o}_{t,gr}$ from $Ng\hat{o}_\hbar$ mirrors the construction of a mixed category by taking the dg-category of complexes of the additive category of a suitable subcategory of pure objects--see e.g. \cite{Rider, Achar-Riche}.
\end{remark}
\begin{remark}
Note that the quantum Hamiltonian reduction of $\mathfrak{B}$ (considered as an equivariant left $\mathfrak{D}_G$-module) is equivalent to $\mathfrak{Wh}$ as a right $\mathfrak{Wh}$-module. It follows that the left action of $(\mathfrak{D}_{G{/_{\hspace{-0.2em}ad}\hspace{0.1em}} G})^G \simeq (\mathfrak{D}_T)^W$ is given by a ring homomorphism $(\mathfrak{D}_T)^W \to \mathfrak{Wh}$. Thus the Ng\^o functor, at the level of quantum Hamiltonian reduction, is given by the forgetful functor from $\mathfrak{Wh}$ to $(\mathfrak{D}_T)^W$. This is the basis for Conjecture \ref{quantum ngo induction}.
\end{remark}
\begin{remark}\label{remark classical Ngo}
The bimodule structure gives a morphism
\[
\mathfrak{D}_{G,t} \to \mathfrak{B}_t = \mathfrak U_t\mathfrak g \otimes_{\mathfrak Z_t\mathfrak g} \mathfrak{Wh}_t
\]
If we set $t=0$, then the monoidal category $\mathcal{HC}_{t=0,gr}\simeq QC(\mathfrak g^\ast/G)_{gr}$ upgrades to a symmetric monoidal category, and the action of $\mathcal{HC}_{t=0,gr}$ on $\mathcal Z_{t=0,gr}$ upgrades to the symmetric monoidal functor $\kappa^\ast:QC(\mathfrak g^\ast/G) \to QC(\mathfrak c)$. It follows that the monad $\mathfrak{Wh}_{t=0} = \mathcal O(J)$ is in fact a commutative (and cocommutative) Hopf algebra object in $QC(\mathfrak c)_{gr}$, and thus $\mathfrak{B}_{t=0} = \mathcal O(\chi^\ast J)$ is a cocommutative Hopf algebra object in $QC(\mathfrak g^\ast/G)_{gr}$. It follows formally that the structure map $\mathfrak{D}_{G,t} \to \mathfrak{B}_{t}$ arising from the Ng\^o map is in fact a morphism of Hopf algebroids over $\mathcal O(\mathfrak g^\ast)$:
\[
\mathcal O(T^\ast G) \to \mathcal O(\chi^\ast(J))
\]
Thus there is a morphism of groupoids over $\mathfrak g^\ast$:
\[
\chi^\ast(J) \to T^\ast G
\]
which factors through the centralizer subgroup $\mathcal I \rightarrow T^\ast G$. To see that this agrees with the Ng\^o homomorphism, as constructed in \cite{Ngo} (using Hartogs' lemma), it suffices to check that they agree on the regular semisimple locus. In terms of sheaves on the Grassmannian, this corresponds to a certain localization of $\mathcal H_{\hbar=0}$ over $R_{\hbar=0}$, after which $\mathcal H_{\hbar=0}$ and $\mathcal W_{\hbar=0}$ become Morita equivalent (this corresponds to the fact that $QC(\mathfrak g^\ast/G)$ becomes equivalent to $\Rep(J) = QC(\mathfrak c/J)$ after localizing, and this latter category is Morita equivalent to $QC(J)$ under convolution, by 1-affineness). Thus, after this localization, the Ng\^o map is just the natural braided monoidal functor from a symmetric monoidal category to its Drinfeld center. The corresponding functor arising from Ng\^o's construction can also be characterized in this way, so the two constructions must agree.
\end{remark}
\subsection{Example: the abelian case}
Suppose $G=T$ is an algebraic torus, and ${G^{\vee}} = {T^{\vee}}$ the dual torus. In this case, everything can be made very explicit. First, note that the conclusions of Theorem \ref{thm-quantum-ngo-t} are clear: the $\mathcal W$-category is just $\mathcal D_\hbar(T)$ under convolution, which is symmetric monoidal as $T$ is commutative; the Ng\^o map $\mathcal D_\hbar(T) \to \mathcal D_\hbar(T{/_{\hspace{-0.2em}ad}\hspace{0.1em}} T)$ is just the natural map from a symmetric monoidal category into its own Drinfeld center. Explicitly, this situation is controlled by the cocommutative Hopf algebroid $\mathcal D_{T,\hbar}$: the $\mathcal W$-category $\mathcal D_\hbar(T)$ is its category of modules (which is symmetric monoidal), the category of Harish-Chandra bimodules is given by $\mathcal D_{T,\hbar}$-comodules in $\mathfrak U_\hbar(\mathfrak t)\mhyphen\mathrm{mod}$, and $\mathcal D_\hbar(T{/_{\hspace{-0.2em}ad}\hspace{0.1em}} T)$ can be thought of as Yetter-Drinfeld modules for $\mathcal D_{T,\hbar}$, which identifies as the center of $\mathcal D_\hbar(T)$ and of $\mathcal{HC}_{T,\hbar}$ (in fact, the two monoidal categories are Morita equivalent).
Let $\Lambda = \Hom(T,\mathbb G_m) \subset \mathfrak t^\ast$ denote the character lattice of $T$, and consider the action groupoid $\Gamma_{\Lambda,\hbar}$ of $\Lambda$ acting on $\mathfrak t^\ast \times \mathbb A^1_\hbar$ by $n \cdot (\lambda,\hbar) = (\lambda + \hbar n,\hbar)$. The corresponding convolution algebra $H_{\Lambda,\hbar} = \Sym(\mathfrak t[-2] \oplus \mathbb C.\hbar) \rtimes \mathbb C[\Lambda]$ is a cocommutative Hopf algebroid over $R_\hbar \simeq \Sym(\mathfrak t \oplus \mathbb C.\hbar)$; the corresponding convolution category $\mathcal H_{\Lambda,\hbar}$ of sheaves on $\Gamma_{\Lambda,\hbar}$ is identified with $H_{\Lambda,\hbar}\mhyphen\mathrm{comod}$, and $\mathcal K_{\Lambda,\hbar}$ with $H_{\Lambda,\hbar}\mhyphen\mathrm{mod}$.
It is easy to check that $\mathfrak{D}_{\hbar,T}$ coincides with $H_{\Lambda,\hbar}$ as Hopf algebroids over $\mathfrak U_\hbar(\mathfrak t) =R_\hbar$. Thus $\mathcal{HC}_{T,\hbar}$ identifies with $\mathcal H_{\Lambda,\hbar}$, and $\mathcal D_{\hbar}(T)$ with $\mathcal{W}h_{\Lambda,\hbar}$.
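For $T = \mathbb G_m$ this identification can be made completely explicit (a routine check): write $u = x$ for the coordinate on $\mathbb G_m$ and $\lambda = \hbar x \partial_x$ for the rescaled Euler vector field in $\mathfrak{D}_{\hbar,\mathbb G_m}$. Then
\[
\lambda u = u(\lambda + \hbar),
\]
so $\mathfrak{D}_{\hbar,\mathbb G_m}$ is generated over $R_\hbar = \mathbb C[\lambda,\hbar]$ by the invertible element $u$, whose conjugation action shifts $\lambda \mapsto \lambda + \hbar$. This is precisely the presentation $H_{\mathbb Z,\hbar} = \mathbb C[\lambda,\hbar] \rtimes \mathbb C[\mathbb Z]$ of the convolution algebra for the character lattice $\Lambda = \mathbb Z$.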
On the other hand, the affine Grassmannian for $T^\vee$ is equal to $\Lambda$ (we only care about the reduced scheme structure here). The spherical Hecke category $\cDv_{hol}(\underline{\mathcal{G}r}^\vee_{T^\vee})$ is identified with $\mathcal H_{\Lambda,\hbar}$, and the convolution bialgebra of chains $C_\ast (\underline{\mathcal{G}r}^\vee)$ with $H_{\Lambda,\hbar}$. These identifications explicitly establish the renormalized Satake equivalence of Theorem \ref{Bez-Fink} in this setting. Note that the inclusion of local systems into the spherical Hecke category is an equivalence in this case, corresponding to the fact that the Kostant section (which is just the map $\mathfrak t^\ast \to \mathfrak t^\ast /T = \mathfrak t^\ast \times BT$) is surjective.
\section{Introduction and results}
Unless otherwise stated, all the manifolds discussed in this paper
are closed smooth manifolds and all involutions and circle actions
on the manifolds are smooth. We denote by superscripts the
corresponding dimensions of the manifolds.
The following is a classical result of Conner and Floyd (\cite{CF},
$\S 27.2$).
\begin{theorem}[Conner-Floyd]\label{CF}
Suppose $g:~M^{2n}\rightarrow M^{2 n}$ is an involution on a
manifold and $M^{g}$ is the fixed point set of $g$. If
$\textrm{dim}(M^{g})< n$, then the Euler characteristic of $M$ is
even.
\end{theorem}
Here by $\textrm{dim}(M^{g})$ we mean the dimension of the
highest-dimensional connected component of $M^{g}$.
Using their famous $G$-signature theorem, Atiyah and Singer reproved
(\cite{AS}, p.582-p.583) Theorem \ref{CF} when $n$ is even, $M$ is
oriented and $g$ is orientation preserving.
We recall that a circle action ($S^{1}$-action) is called
\emph{semi-free} if it is free on the complement of the fixed point
set or, equivalently, if the isotropy subgroup of any non-fixed point on
the manifold is trivial. Using bordism techniques developed by
Conner and Floyd in \cite{CF}, Kawakubo and Uchida showed the
following result (\cite{KU}, Theorem 1.2), which could be taken as a
counterpart in the circle case to Theorem \ref{CF} in some sense.
\begin{theorem}[Kawakubo-Uchida]\label{KU}
Suppose $M^{4k}$ admits a semi-free $S^{1}$-action and $M^{S^{1}}$
is the fixed point set of this action. If $\textrm{dim}(M^{S^{1}})<
2k$, then the signature of $M$, $\textrm{sign}(M^{4k})$, is zero.
\end{theorem}
Our first purpose in this note is, by looking more closely at the
$G$-signature theorem in the circle case, to generalize Theorem
\ref{KU} to more general cases. Before stating our first main
result, we will introduce some notations, which will be used
throughout this paper without further explanation.
Suppose $M^{2n}$ is an oriented manifold admitting an $S^{1}$-action.
Let $F^{2m}$ be a connected component of the fixed point set of this action. With respect
to this $S^{1}$-action, the tangent bundle of $M^{2n}$ restricted to
$F^{2m}$, $\textrm{T}M^{2n}\big|_{F^{2m}}$, has the following
equivariant decomposition:
$$\textrm{T}M^{2n}\big|_{F^{2m}}=L_{1}\oplus\cdots\oplus L_{n-m}\oplus\textrm{T}F^{2m},$$
where each $L_{i}$ is a real $2$-plane bundle of $F^{2m}$. We can
identify $L_{i}$ with a complex line bundle relative to which the
representation of $S^{1}$ on each fiber of $L_{i}$ is given by
$e^{\sqrt{-1}\theta}\rightarrow e^{\sqrt{-1}k_{i}\theta}$ with
$k_{i}\in \mathbb{Z}-\{0\}$. These $k_{1},\cdots,k_{n-m}$ are called the
\emph{weights} of this $S^{1}$-action on the connected component
$F^{2m}$ and are uniquely determined up to sign.
\begin{definition}\label{def}
Let the notations be as above. We call an $S^{1}$-action \emph{prime}
if there exists a number $\xi\in S^{1}$ such that, for any $k\in
\bigcup_{F^{2m}}\{k_{1},\cdots,k_{n-m}\},$ we have $\xi^{k}=-1$.
\end{definition}
\begin{remark}
Note that the weights of a semi-free circle action are $\pm 1$.
Hence semi-free circle actions are prime.
\end{remark}
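The primeness condition of Definition \ref{def} can be tested mechanically. An elementary reformulation (our observation, not stated in the paper) is that a suitable $\xi$ exists precisely when all weights share the same 2-adic valuation $v$, in which case $\xi=e^{\sqrt{-1}\pi/2^{v}}$ works; a minimal Python sketch:

```python
import cmath

def two_adic_valuation(k: int) -> int:
    """Largest v with 2**v dividing the nonzero integer k."""
    k = abs(k)
    v = 0
    while k % 2 == 0:
        k //= 2
        v += 1
    return v

def prime_xi(weights):
    """Return xi in S^1 with xi**k == -1 for every weight k, or None
    if no such xi exists.  All weights must share the same 2-adic
    valuation v; then xi = exp(i*pi / 2**v) does the job."""
    vals = {two_adic_valuation(k) for k in weights}
    if len(vals) != 1:
        return None
    v = vals.pop()
    return cmath.exp(1j * cmath.pi / 2**v)

# Semi-free actions have weights +-1, hence are prime (xi = -1 works):
xi = prime_xi([1, -1, 1])
assert all(abs(xi**k + 1) < 1e-9 for k in (1, -1, 1))
# Weights of mixed parity admit no such xi:
assert prime_xi([1, 2]) is None
```

For instance, weights $\{2,6,-2\}$ (common 2-adic valuation $1$) give $\xi=e^{\sqrt{-1}\pi/2}=\sqrt{-1}$.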
Now we can state our first result, which generalizes Theorem
\ref{KU}.
\begin{theorem}\label{result1}
Suppose $M^{4k}$ admits a prime $S^{1}$-action and $M^{S^{1}}$ is
the fixed point set of this action. If $\textrm{dim}(M^{S^{1}})<
2k$, then $\textrm{sign}(M^{4k})=0$.
\end{theorem}
We will prove this result in Section 2. The signature of an oriented
manifold can be realized as the index of an elliptic operator
(\cite{AS}, \S 6), now called the signature operator. Besides the
$G$-signature theorem, the key ingredient of the proof of Theorem
\ref{result1} is the rigidity of the signature operator (see Section
2 for more details). The rigidity of signature operator is only the
beginning of a remarkable rigidity theorem: Witten-Taubes-Bott
rigidity theorem. Our second purpose in this note is, by using this
rigidity theorem, to replace the conclusion of $\textrm{sign}(M)=0$
in Theorem \ref{result1} by those of vanishing indices of some
twisted signature operators.
In order to state our second result, let us begin with the rigidity
of elliptic operators.
Let $D:~\Gamma(E)\rightarrow\Gamma(F)$ be an elliptic operator
acting on sections of complex vector bundles $E$ and $F$ over a
manifold $M$. Ellipticity guarantees that both $\textrm{ker}(D)$ and
$\textrm{coker}(D)$ are finite-dimensional. Then the index of $D$ is
defined as
$$\textrm{ind}(D)=\textrm{dim}_{\mathbb{C}}\textrm{ker}(D)-\textrm{dim}_{\mathbb{C}}\textrm{coker}(D).$$
If $M$ admits an $S^{1}$-action preserving $D$, i.e., acting on $E$
and $F$ and commuting with $D$, then both $\textrm{ker}(D)$ and
$\textrm{coker}(D)$ admit an $S^{1}$-action and hence are
$S^{1}$-modules. Therefore the virtual complex vector space
$\textrm{ker}(D)-\textrm{coker}(D)$ has a Fourier decomposition into
a finite sum of complex one-dimensional irreducible representations
of $S^{1}$:
$$\textrm{ker}(D)-\textrm{coker}(D)=\sum a_{i}\cdot L^{i},$$
where $a_{i}\in\mathbb{Z}$ and $L^{i}$ is the representation of $S^{1}$ on
$\mathbb{C}$ given by $\lambda\mapsto\lambda^{i}$. The equivariant
index of $D$ at $g\in S^{1}$, $\textrm{ind}(g, D)$, is defined to be
$$\textrm{ind}(g, D)=\sum a_{i}\cdot g^{i}.$$
The elliptic operator $D$ is called \emph{rigid} with respect to
this $S^{1}$-action if $a_{i}=0$ for all $i\neq 0$, i.e.,
$\textrm{ker}(D)-\textrm{coker}(D)$ consists of the trivial
representation with multiplicity $a_{0}$. Consequently,
$\textrm{ind}(g, D)\equiv\textrm{ind}(D)$ for any $g\in S^{1}$. An
elliptic operator is called \emph{universally rigid} if it is rigid
with respect to \emph{any} $S^{1}$-action. The fundamental examples
of universally rigid operators are the signature operator and the Dirac
operator (on spin manifolds). The reason for the former is that both
its kernel and cokernel can be identified with subspaces of the
de Rham cohomology group (\cite{AS}, $\S 6$), on which $S^{1}$ always
induces a trivial action. The latter is a classical result of Atiyah
and Hirzebruch \cite{AH}.
Let $\Omega_{\mathbb{C}}^{+}$ and $\Omega_{\mathbb{C}}^{-}$ be the
$\pm 1$ eigenspaces, with respect to the involution defined via the
Hodge $\ast$-operator, of the complex differential forms on an oriented
Riemannian manifold $M^{2n}$. Then the signature operator
$$d_{s}:~\Omega_{\mathbb{C}}^{+}\rightarrow\Omega_{\mathbb{C}}^{-}$$
is elliptic and the index of $d_{s}$ equals $\textrm{sign}(M)$
(\cite{AS}, \S 6).
Let $W$ be a complex vector bundle over $M$. By means of a
connection on $W$, the signature operator can be extended to a
twisted operator (\cite{Pa}, IV, $\S 9$)
$$d_{s}\otimes W:~\Omega_{\mathbb{C}}^{+}(W)\rightarrow\Omega_{\mathbb{C}}^{-}(W).$$
This operator is also elliptic and the index of $d_{s}\otimes W$ is
denoted by $\textrm{sign}(M, W)$.
Let $T_{\mathbb{C}}$ be the complexified tangent bundle of $M$. For
an indeterminate $t$, set
$$\Lambda_{t}T_{\mathbb{C}}=\sum_{k=0}^{\infty}t^{k}\Lambda^{k}T_{\mathbb{C}},\qquad S_{t}T_{\mathbb{C}}=\sum_{k=0}^{\infty}t^{k}S^{k}T_{\mathbb{C}},$$
where $\Lambda^{k}T_{\mathbb{C}}$ and $S^{k}T_{\mathbb{C}}$ are the
$k$-th exterior power and symmetric power of $T_{\mathbb{C}}$,
respectively (\cite{At}, $\S 3.1$).
Let $R_{i}$ be the sequence of bundles defined by the formal
series
$$\sum_{i=0}^{+\infty}q^{i}R_{i}=\bigotimes_{i=1}^{+\infty}\Lambda_{q^{i}}T_{\mathbb{C}}\otimes\bigotimes_{j=1}^{+\infty}S_{q^{j}}T_{\mathbb{C}}.$$
The first few terms of this sequence
are
$$R_{0}=1,\qquad R_{1}=2T_{\mathbb{C}},\qquad R_{2}=2(T_{\mathbb{C}}\otimes T_{\mathbb{C}}+T_{\mathbb{C}}),\qquad\cdots$$
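At the level of ranks these first terms can be checked mechanically: if $T_{\mathbb{C}}$ has rank $r$, the series predicts $\mathrm{rank}(R_{1})=2r$ and $\mathrm{rank}(R_{2})=2(r^{2}+r)$. A small sympy sketch (our verification, replacing $\Lambda_{q^{i}}T_{\mathbb{C}}$ and $S_{q^{j}}T_{\mathbb{C}}$ by their rank generating functions):

```python
import sympy as sp

q = sp.symbols('q')

def rank_coeffs(r, n_terms=3, cutoff=4):
    """Ranks of R_0, R_1, R_2 for a bundle of rank r, obtained from
    Lambda_t V -> (1+t)^r and S_t V -> (1-t)^(-r), so the product over
    i of Lambda_{q^i} tensor S_{q^i} becomes prod ((1+q^i)/(1-q^i))^r."""
    f = sp.prod([((1 + q**i) / (1 - q**i))**r
                 for i in range(1, cutoff + 1)])
    s = sp.series(f, q, 0, n_terms).removeO().expand()
    return [int(s.coeff(q, k)) for k in range(n_terms)]

# rank(R_0) = 1, rank(R_1) = rank(2 T) = 2r,
# rank(R_2) = rank(2 (T tensor T + T)) = 2 (r^2 + r):
for r in range(1, 5):
    assert rank_coeffs(r) == [1, 2*r, 2*(r**2 + r)]
```

This is only a consistency check on dimensions; the bundle identity $R_{2}=2(T_{\mathbb{C}}\otimes T_{\mathbb{C}}+T_{\mathbb{C}})$ additionally uses $\Lambda^{2}T_{\mathbb{C}}\oplus S^{2}T_{\mathbb{C}}\simeq T_{\mathbb{C}}\otimes T_{\mathbb{C}}$.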
With all this understood we have the following rigidity theorem.
\begin{theorem}[Witten-Taubes-Bott]\label{WTB}
For a spin manifold $M^{2n}$, each of the elliptic operators
$d_{s}\otimes R_{i}$ is universally rigid.
\end{theorem}
This rigidity theorem was conjectured and given a string-theoretic
interpretation by Witten \cite{Wi2}. It was first proved by Taubes
\cite{Ta}. A simpler proof was then presented by Bott and Taubes
\cite{BT}. Using modular invariance of Jacobi functions, the second
author gave a simpler and unified proof in \cite{Li1} and
further generalized these results in \cite{Li2}.
Using this remarkable rigidity theorem, Hirzebruch and Slodowy
showed that (\cite{HS}, p.317), among other things, if $g$ is an
involution contained in a circle acting on a spin manifold $M^{4k}$
and $\textrm{dim}(M^{g})< 2k,$ then $\textrm{sign}(M, R_{i})=0$ for
all $i$.
We are now ready to state our second main result, which could be
taken as the counterpart to Hirzebruch-Slodowy's above mentioned
result in the circle case.
\begin{theorem}\label{result2}
Suppose $M^{2n}$ is a spin manifold admitting a prime
$S^{1}$-action. If $\textrm{dim}(M^{S^{1}})< n$, then
$\textrm{sign}(M^{2n}, R_{i})=0$ for all $i$.
\end{theorem}
\begin{corollary}
Suppose $M^{2n}$ is a spin manifold admitting a semi-free
$S^{1}$-action. If $\textrm{dim}(M^{S^{1}})< n$, then
$\textrm{sign}(M^{2n}, R_{i})=0$ for all $i$.
\end{corollary}
\begin{remark}
In \cite{LS}, Landweber and Stong proved two results concerning the
signature and the indices of three twisted Dirac operators, which
also have the same feature as our results in some sense. More
precisely, they showed that (\cite{LS}, Theorem 1), if a closed spin
manifold $M^{2n}$ admits a circle action of \emph{odd type}, then
$\textrm{sign}(M)=0$. Moreover, if this action is semi-free, then
(\cite{LS}, Theorem 2) $\hat{A}(M, T_{\mathbb{C}})=\hat{A}(M,
\Lambda^{2}T_{\mathbb{C}})=\hat{A}(M,
\Lambda^{3}T_{\mathbb{C}}+T_{\mathbb{C}}^{2})=0$, where $\hat{A}(M,
E)$ is the index of the Dirac operator on $M$ twisted by a complex
vector bundle $E$.
\end{remark}
\section{Proof of results}
Let $M^{2n}$ be an oriented manifold admitting an $S^{1}$-action. Let
$F^{2m}$ be a connected component of the fixed point set
$M^{S^{1}}$. As pointed out in the Introduction,
$\textrm{T}M^{2n}\big|_{F^{2m}}$ can be decomposed into
$$\textrm{T}M^{2n}\big|_{F^{2m}}=L_{1}\oplus\cdots\oplus L_{n-m}\oplus\textrm{T}F^{2m}.$$
Here $L_{i}$ could be taken as a complex line bundle over $F^{2m}$
with weight $k_{i}$, $1\leq i\leq n-m$.
$F^{2m}$ can be oriented so that all orientations of $L_{1},\cdots,
L_{n-m}$ and $F^{2m}$ taken together yield the orientation of
$M^{2n}$. Let $c_{1}(L_{i})\in H^{2}(F^{2m};\mathbb{Z})$ be the
first Chern class of $L_{i}$. Suppose the total Pontrjagin class of
$F^{2m}$ has the following formal
decomposition
$$p(F^{2m})=1+p_{1}(F^{2m})+\cdots=\prod_{i=1}^{m}(1+x_{i}^{2}),$$
i.e., $p_{i}(F^{2m})$ is the $i$-th elementary symmetric polynomial
of $x_{1}^{2},\cdots, x_{m}^{2}$.
With these notations set up, we have the following important lemma,
which should be well known to experts (cf. \cite{HBJ}, $\S 5.8$),
although, to the authors' knowledge, it has not been stated
explicitly in the following form.
\begin{lemma}\label{lemma}
Let $g$ be an indeterminate. Then the rational function of $g$
\be\sum_{F^{2m}}\big\{\big[\big(\prod_{i=1}^{m}x_{i}\frac{1+e^{-x_{i}}}
{1-e^{-x_{i}}}\big)\big(\prod_{j=1}^{n-m}\frac{1+g^{k_{j}}e^{-c_{1}(L_{j})}}{1-g^{k_{j}}e^{-c_{1}(L_{j})}}\big)\big]\cdot[F^{2m}]\big\}\nonumber\ee
is identically equal to $\textrm{sign}(M)$. Here $[F^{2m}]$ is the
fundamental class of $F^{2m}$ determined by the orientation and the
sum is over all the connected components of $M^{S^{1}}$.
\end{lemma}
\begin{proof}
Let $g\in S^{1}$ be a topological generator. Then the fixed point
set of the action of $g$ on $M$ is exactly $M^{S^{1}}$. So the
$G$-signature theorem (\cite{AS}, p.582) tells us that
\be\label{signature formula}\textrm{sign}(g,
M^{2n})=\sum_{F^{2m}}\big\{\big[\big(\prod_{i=1}^{m}x_{i}\frac{1+e^{-x_{i}}}
{1-e^{-x_{i}}}\big)\big(\prod_{j=1}^{n-m}\frac{1+g^{k_{j}}e^{-c_{1}(L_{j})}}{1-g^{k_{j}}e^{-c_{1}(L_{j})}}\big)\big]\cdot[F^{2m}]\big\}\ee
Here $\textrm{sign}(g, M^{2n})$ is the equivariant index of the
signature operator at $g\in S^{1}$. According to the rigidity of the
signature operator, we have $\textrm{sign}(g,
M^{2n})\equiv\textrm{sign}(M^{2n})$. Therefore (\ref{signature
formula}) holds for a dense subset of $S^{1}$ (the topological
generators are dense in $S^{1}$), which means (\ref{signature
formula}) is in fact an identity for an indeterminate $g$.
\end{proof}
\begin{remark}
\begin{enumerate}
\item
Lemma \ref{lemma} was used in (\cite{HBJ}, $\S 5.8$), by putting
$g=0$, to obtain the famous formula
$$\textrm{sign}(M^{2n})=\sum_{F^{2m}}\textrm{sign}(F^{2m}),$$
of which several proofs are given by Atiyah-Hirzebruch (\cite{AH}, $\S 3$), Hattori-Taniguchi (\cite{HT}, $\S 4$),
and Witten (\cite{Wi1}, $\S 3$) respectively.
\item
When $M^{S^{1}}$ consists of isolated points, Lemma \ref{lemma} was
used by Ding (\cite{Di}, p.3947) to obtain some interesting results
concerning the representations on the isolated fixed points.
\end{enumerate} \end{remark}
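As a numerical illustration of Lemma \ref{lemma} (our check, not from the paper), take the standard $S^{1}$-action $\lambda\cdot[z_{0}:z_{1}:z_{2}]=[z_{0}:\lambda z_{1}:\lambda^{2}z_{2}]$ on $\mathbb{CP}^{2}$, which has three isolated fixed points with tangent weights $(1,2)$, $(-1,1)$ and $(-2,-1)$; the signs of the weights depend on orientation conventions. Since the fixed points are isolated ($m=0$), each contributes $\prod_{j}(1+g^{k_{j}})/(1-g^{k_{j}})$, and the sum is identically $\mathrm{sign}(\mathbb{CP}^{2})=1$:

```python
from fractions import Fraction

# Tangent weights at the three fixed points of the standard action
# lambda . [z0:z1:z2] = [z0 : lambda z1 : lambda^2 z2] on CP^2.
WEIGHTS = [(1, 2), (-1, 1), (-2, -1)]

def contribution(g, weights):
    """Isolated fixed point term prod_j (1+g^k_j)/(1-g^k_j)
    of the G-signature formula (no x_i factors since m = 0)."""
    out = Fraction(1)
    for k in weights:
        gk = Fraction(g)**k
        out *= (1 + gk) / (1 - gk)
    return out

def equivariant_signature(g):
    return sum(contribution(g, w) for w in WEIGHTS)

# Rigidity: the sum is independent of g and equals sign(CP^2) = 1.
for g in (Fraction(2), Fraction(3), Fraction(5, 7)):
    assert equivariant_signature(g) == 1
```

Exact rational arithmetic (`Fraction`) is used so the identity is verified without rounding error.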
\emph{Proof of Theorem \ref{result1}}.
\begin{proof}
Now suppose $M^{4k}$ has a prime $S^{1}$-action. Let $\xi\in S^{1}$
be the desired element as in Definition \ref{def}. Then we have
\be\label{proof}\begin{split}\big[\prod_{j=1}^{2k-m}\frac{1+g^{k_{j}}e^{-c_{1}(L_{j})}}{1-g^{k_{j}}e^{-c_{1}(L_{j})}}\big]\big|_{g=\xi}&=\prod_{j=1}^{2k-m}\frac{1-e^{-c_{1}(L_{j})}}{1+e^{-c_{1}(L_{j})}}\\
&=\prod_{j=1}^{2k-m}\frac{c_{1}(L_{j})-\frac{1}{2}c_{1}^{2}(L_{j})+\cdots}{2-c_{1}(L_{j})+\cdots}\\
&=\big(\prod_{j=1}^{2k-m}c_{1}(L_{j})\big)\cdot\prod_{j=1}^{2k-m}\frac{1-\frac{1}{2}c_{1}(L_{j})+\cdots}{2-c_{1}(L_{j})+\cdots}\\
&=e(\nu
F^{2m})\cdot\prod_{j=1}^{2k-m}\frac{1-\frac{1}{2}c_{1}(L_{j})+\cdots}{2-c_{1}(L_{j})+\cdots},
\end{split}\ee
where $e(\nu F^{2m})\in H^{4k-2m}(F^{2m};\mathbb{Z})$ is the Euler
class of the normal bundle of $F^{2m}$ in
$M^{4k}$.
If $\textrm{dim}(M^{S^{1}})< 2k$, then $4k-2m>2m,$ which means
$e(\nu F^{2m})=0$ and so by Lemma \ref{lemma}
$\textrm{sign}(M^{4k})=0$. This completes the proof.
\end{proof}
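The key expansion in (\ref{proof}) can also be verified symbolically: $(1-e^{-c})/(1+e^{-c})=\tanh(c/2)$ vanishes to first order in $c$, which is exactly what lets one pull out the factor $\prod_{j}c_{1}(L_{j})=e(\nu F^{2m})$. A quick sympy check (our verification):

```python
import sympy as sp

c = sp.symbols('c')
# One factor of the product in the proof, with c = c_1(L_j):
f = (1 - sp.exp(-c)) / (1 + sp.exp(-c))

# Series expansion: f = c/2 - c^3/24 + O(c^4), so f has a factor c,
# and the product of n - m such factors carries e(nu F^{2m}).
ser = sp.expand(sp.series(f, c, 0, 4).removeO())
assert sp.expand(ser - (c/2 - c**3/24)) == 0

# f agrees numerically with tanh(c/2):
assert abs(f.subs(c, 0.3) - sp.tanh(0.15)) < 1e-12
```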
\emph{Proof of Theorem \ref{result2}}.
\begin{proof}
Let $R_{i}$ be the complex vector bundles defined in the Introduction.
Then for each topological generator $g\in S^{1}$, the equivariant
index of the elliptic operator $d_{s}\otimes R_{i}$,
$\textrm{sign}(g, M^{2n}, R_{i})$, like the $G$-signature theorem, can
be computed in terms of the local invariants of the fixed point set
$M^{S^{1}}$. This is given by a general Lefschetz fixed point
formula of Atiyah-Bott-Segal-Singer (\cite{AS}, p.254-p.258).
Instead of writing down the general form of this formula, we only
indicate that, for $\textrm{sign}(g, M^{2n}, R_{i})$, this formula
is of the following form:
$$\sum^{+\infty}_{i=0}q^{i}\cdot\textrm{sign}(g, M^{2n}, R_{i})=\sum_{F^{2m}}\big\{\prod_{i=1}^{m}\big[(x_{i}\frac{1+e^{-x_{i}}}
{1-e^{-x_{i}}}\big)\cdot
u_{i}\big]\prod_{j=1}^{n-m}\big[\big(\frac{1+g^{k_{j}}e^{-c_{1}(L_{j})}}{1-g^{k_{j}}e^{-c_{1}(L_{j})}}\big)\cdot
v_{j}\big]\big\}\cdot[F^{2m}],$$
where
$$u_{i}=\prod_{r=1}^{+\infty}\frac{(1+q^{r}e^{-x_{i}})(1+q^{r}e^{x_{i}})}{(1-q^{r}e^{-x_{i}})(1-q^{r}e^{x_{i}})},$$
and
$$v_{j}=\prod_{r=1}^{+\infty}\frac{(1+q^{r}g^{k_{j}}e^{-c_{1}(L_{j})})(1+q^{r}g^{-k_{j}}e^{c_{1}(L_{j})})}{(1-q^{r}g^{k_{j}}e^{-c_{1}(L_{j})})(1-q^{r}g^{-k_{j}}e^{c_{1}(L_{j})})}.$$
We refer the reader to (\cite{Li1},
\S 1 and \S 5) or (\cite{DJ}, $\S 2.4$) for a detailed
description of $\textrm{sign}(g, M^{2n}, R_{i})$ in terms of the
local data of $M^{S^{1}}$.
The rigidity Theorem \ref{WTB} says that, for an indeterminate $g$, the
following identity holds:
$$\sum^{+\infty}_{i=0}q^{i}\cdot\textrm{sign}(M^{2n}, R_{i})\equiv\sum_{F^{2m}}\big\{\prod_{i=1}^{m}\big[(x_{i}\frac{1+e^{-x_{i}}}
{1-e^{-x_{i}}}\big)\cdot
u_{i}\big]\prod_{j=1}^{n-m}\big[\big(\frac{1+g^{k_{j}}e^{-c_{1}(L_{j})}}{1-g^{k_{j}}e^{-c_{1}(L_{j})}}\big)\cdot
v_{j}\big]\big\}\cdot[F^{2m}].$$
By using the same idea as in the proof of
Theorem \ref{result1}, we can get the conclusion of Theorem
\ref{result2}.
\end{proof}
\section{Concluding remarks}
As we have seen, the key idea in the proofs is to extract a
cohomology class $e(\nu F^{2m})$ from the right-hand side of
(\ref{signature formula}), by giving a special value on $g$. In
fact, for a general compact Lie group $G$ acting on $M^{2n}$ and
$g\in G$, the $G$-signature theorem is of the following form
(\cite{AS}, p.582) \be\label{G-s}\textrm{sign}(g,M)=\sum_{F\in
M^{g}}[e\big(N^{g}(-1)\big)\cdot u]\cdot[F],\ee where $F$ is a
connected component of the fixed point set of $g$, $M^{g}$ (rather
than the fixed point set of the whole $G$), $e\big(N^{g}(-1)\big)$
is the Euler class of a subbundle of the normal bundle on $M^{g}$,
corresponding to the eigenvalue $-1$ of the representation of $g$ on
the normal bundle of $F$, and $u\in H^{\ast}(F)$.
Consequently, the right-hand side of (\ref{G-s}) vanishes if, for
every $F$, the fiber dimension of $N^{g}(-1)$ is greater than the
dimension of $F$. This is Corollary $6.13$ of \cite{AS}, p.582.
It is this corollary that allowed Atiyah and Singer to reprove
Theorem \ref{CF} in some cases, because for an involution, $-1$ is
the \emph{only} eigenvalue on the normal bundle of the fixed point
set. In the circle case, however, for a topological generator
$g=e^{2\pi\sqrt{-1}\theta}\in S^{1}$, $-1$ is \emph{not}
an eigenvalue ($g$ is a topological generator if and only if $\theta$ is irrational,
in which case $g^{k}\neq -1$ for all weights $k$). So we have to construct by hand
a cohomology class \big($e(\nu F^{2m})$ in (\ref{proof})\big) analogous to $e\big(N^{g}(-1)\big)$ in (\ref{G-s}).
This is the origin of our Definition \ref{def}.
\section{Introduction}
An important tool in analysing games is the concept of
{\it Nash equilibrium} \cite{nash1950equilibrium}, which represents situations
where no player has incentive
to deviate from their strategy. This corresponds to situations observed in real life,
with applications in economics, sociology, international relations, biology, etc.
Not all equilibria have the same {\it social
welfare}: the average payoff may differ from one equilibrium to another.
Games of incomplete information can exhibit better equilibria if players use a resource, a general correlation $Q$. Such a correlation
can be viewed as a resource produced by a mediator to
give {\it advice} to the players. The concept of advice generalizes the notion of Nash equilibrium to a broader class of equilibria \cite{aumann}. All such equilibria can be classified according to the properties of the resource correlation. Three classes can be identified in addition to Nash equilibria (no correlation),
namely general communication equilibria (Comm) \cite{forges1982}, where $Q$ is unrestricted, belief-invariant equilibria (BI) \cite{forges1993,forges2006,lehrer2010,liu2015} and correlated equilibria (Corr) \cite{aumann}. The canonical versions of these equilibria form a sequence of nested sets within the set of canonical correlations:
\begin{equation*}
\text{Nash}\subset\text{Corr}\subset\text{BI}\subset\text{Comm}.
\end{equation*}
It was demonstrated that there exist games where BI equilibria can outperform $\text{Corr}$ equilibria \cite{pappa} (in terms of the social welfare (SW) of a game) as well
as games where BI equilibria outperform any non-BI equilibria.
Winter et al. \cite{belief1} introduce quantum correlated equilibria as a
subclass of BI equilibria and show
that quantum correlations can achieve optimal SW. This provides the link with quantum nonlocality, where quantum resources are used to produce {\it non-signalling} correlations.
In this context, belief invariance describes the largest class of correlations that obey {\it relativistic causality}.
A characteristic feature of belief invariance is that it ensures privacy: the other players involved in the game have
no information about the input a player sends to the resource.
To obtain the canonical form of the games, Ref. \cite{mathieu2018separating} shows that one can assume that the output of the correlation resource is the answer
the players give, by delegating the extra computation (from game question to box input
and from box output to players' answer) to the mediator.
Therefore,
quantum equilibria can be reached in a setting where players each measure quantum
systems or, equivalently, by just having a central
system providing advice by measuring a quantum device.
Ref. \cite{belief1} highlights several open questions. In particular,
\begin{enumerate}[(1)]
\item Whether any full-coordination game (a.k.a.\ a {\it non-local game} in
the quantum physics and computer science communities) can be converted into a
conflict-of-interest
game. Ref. \cite{pappa} gives an example of a two-player variant of the CHSH game,
while \cite{belief1} extends their result to an $n$-player game in which there
exists
a BI equilibrium which is better than any $\text{Corr}$ equilibrium.
\item How can we get a large separation between the expected payoff for the
quantum and correlated
equilibrium cases, and what is the upper bound for the separation? In the case of
two-player full coordination games this question was settled in
\cite{buhrman2011, junge2010}.
Are there conflict-of-interest games which exhibit large separation?
\end{enumerate}
In this paper, we provide a natural way to convert graph games
(and more generally stabiliser games) into conflict-of-interest games,
and we show how we can create
unbounded separation by increasing the number of players or using penalty techniques (a negative payoff).
An interesting feature in these games compared to the usual
pseudo-telepathy scenarios studied in quantum information is the notion of {\it involvement} \cite{MCTX, mathieu2018separating},
which allows one to define some interesting
scenarios in non-cooperative games and which exhibits novel features,
e.g. unlimited separation. If a player participates in the game but
is not involved (on a particular
round) it means that their strategy is not taken into
account when determining the win/lose outcome.
However, they do receive a corresponding payoff.
Using these games one can build games with bounded personal utilities
$v_0$, $v_1$ on $O(\log(\frac{1}{\epsilon}))$ players
ensuring $\frac{CSW(G)}{QSW(G)}\le \epsilon$, where CSW/QSW are
the Classical/Quantum Social Welfares, respectively.
The paper is organized as follows. In Sec. \ref{sec:graph games} we describe graph games which are the underlying non-local games used to define our games.
In Sec. \ref{sec:non-coll games} we define a non-collaborative game as a modification of the collaborative games by introducing unequal payoffs
corresponding to answers 0 and 1 of each player,
and discuss the corresponding quantum perfect strategy. We consider a particular version of graph games from the cycle on five vertices. Sec. \ref{sec:variant} discusses
variations of non-collaborative games based on the cycle on five vertices. Finally, Sec. \ref{sec:amp} shows how one can amplify the quantum advantage by adding a penalty for wrong answers and by increasing the number of players.
\section{Graph games}\label{sec:graph games}
Non-local games play a key role in quantum information theory. They can be viewed
as a setting in which players that are not allowed to communicate receive some inputs and have to produce some outputs, and there is a winning/losing condition depending globally on their outputs for each input.
Particular types of games are pseudo-telepathy games \cite{pseudo}
which are games that can be won perfectly using quantum resources but that are
impossible to win perfectly without communication when the players have
access only to shared randomness.
Multipartite collaborative games ($MCG(G)$) are a
family of pseudo-telepathy games based on certain types of quantum states
called {\it graph states}.
The players are identified with vertices of the graph and have
a binary input/output each with the winning/losing conditions
built using the stabilisers of the graph states.
The combinatorial game\footnote{without considering probability distributions}
$MCG$ with $n$ players consists in asking the players questions: for each question $q$,
each player $i$ receives one bit $q_i$ as input and answers one bit $a_i$.
They can either all win or all lose depending on their answer,
with winning/losing conditions described by a set $\{(q,I(q),b(q))\}$ where
\begin{itemize}
\item $q\in \{0,1\}^n$ is a valid question, in which each player $i$
gets the bit $q_i$. Let $I_1=\{i : q_i=1\}$ and $G'=G_{|I_1}$; the
question $q$ is valid if each vertex of $G'$ has an even number of
neighbours in $G'$, i.e.\ in the subgraph induced by the vertices
corresponding to players receiving one, all vertices have even degree;
\item $I(q)\subset [n]$ is a subset of players that are called {\it `involved'}
in the question, as the sum (modulo $2$) of their answers determines
the winning/losing condition according to the bit $b(q)$;
\item $b(q)$ is defined such that the players win the game when the question
is $q$ if the sum of the answers of the involved players is equal
to the parity of the number of edges of the subgraph of the vertices
corresponding to players receiving one: $\sum_{i\in I(q)} a_i=b(q)=|E(G')| \bmod 2$.
\end{itemize}
For instance, the game associated to the cycle on 5 vertices, $MCG(C_5)$,
is defined by
\begin{itemize}
\item When the question is $q=11111$ (each player has input 1),
the players lose if the binary sum of their answer is 0, {\it{i.e.}} $\sum_{i=0}^{4} a_i=0 \bmod 2$
, and win otherwise.
\item When the question contains $010$ for three players corresponding to three adjacent vertices, i.e. $q$ contains $0_{i-1}1_{i}0_{i+1}$, the players lose if the binary sum
of the answers of these three players is 1; that is, they win only when $a_{i-1}+a_i+a_{i+1}=0 \bmod 2$.
\item The players win otherwise.
\end{itemize}
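The validity condition and the bits $b(q)=|E(G')| \bmod 2$ can be enumerated by brute force; a small sketch (our check) for $C_5$:

```python
from itertools import product

N = 5
EDGES = [(i, (i + 1) % N) for i in range(N)]  # edges of the cycle C_5

def valid(q):
    """q is valid iff every vertex of the subgraph induced by
    {i : q_i = 1} has even degree in that subgraph."""
    ones = {i for i in range(N) if q[i]}
    deg = {i: 0 for i in ones}
    for a, b in EDGES:
        if a in ones and b in ones:
            deg[a] += 1
            deg[b] += 1
    return all(d % 2 == 0 for d in deg.values())

def b_of(q):
    """b(q) = |E(G')| mod 2, the parity of the induced edge count."""
    ones = {i for i in range(N) if q[i]}
    return sum(1 for a, b in EDGES if a in ones and b in ones) % 2

questions = [q for q in product((0, 1), repeat=N) if valid(q)]
# C_5 has 12 valid questions: the all-zero question, the 5 singletons,
# the 5 non-adjacent pairs, and 11111 (the only one with b = 1).
assert len(questions) == 12
assert b_of((1, 1, 1, 1, 1)) == 1 and b_of((0, 1, 0, 0, 0)) == 0
```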
A variation of this game can be obtained by reducing the set of valid questions.
For instance, in the above set-up the questions of the second type have only
three players ``involved'', so a first version could be to choose only 5 questions
of the second type and always give 0 as advice to the non-involved players.
This is the game studied as an example in \cite{mathieu2018separating}.
An important point is that the notion of involvement in $MCG$ games is absent
in unique games and introduces situations where the players might change their
strategy (answer) without changing the winning/losing status
of the global strategy.
To analyse these games and the strategies, one can imagine a scenario
where there is one special player representing Nature who is playing against
the other
players. The strategy of Nature is therefore a probability distribution
over the questions, which we model here (as is standard in game theory)
as a known function on the set of questions, $w:T\rightarrow [0,1]$ with
$\sum_{t\in T} w(t)=1$.
The games will be therefore defined by equipping the combinatorial
game with a probability distribution over the questions.
\section{Defining non-collaborative games}\label{sec:non-coll games}
Like in multipartite collaborative graph games $MCG(G)$, we associate a
non-collaborative game $NC(G)$ to each graph. We differentiate the payoff
of the
players using the value of their output: If the global answer wins in the
non-local game, each player gets $v_1$ if they answer 1 and $v_0$ if they
answer 0. If the global answer loses, they get 0.
To match the traditional terminology used in game theory, the output will from now on be called the {\it strategy}, and the input the {\it type}. The payoff is called the {\it
utility}, and the social welfare is the average of the utilities over the players.
A non-collaborative game $NC(G)$ is thus defined from $MCG(G)$ as follows:
\begin{itemize}
\item The considered types are $T\subset \{0,1\}^n$ where $n$ is the number of vertices of $G$.
\item As in $MCG$, to each type $t\in T$ corresponds an associated
involved set $I(t)$ of players, and an expected binary answer $b(t)$.
\item As in $MCG$, the losing set is
$${\cal L}=\{(s,t), \sum_{i\in I(t)} s_i \neq b(t) \bmod 2\}.$$
We say that the players using a strategy $s$,
given a type $t$, collectively win the game when the sum of
the local strategies of the involved players is equal to the requested
binary answer modulo 2.
\item The payoff function is:
$$u_j(s|t)=\left\{
\begin{array}{ll}
v_{s_j} & \mathrm{ \, \, if \, \,} (s,t)\not \in {\cal L} \\
0 &\mathrm{\, \, otherwise}
\end{array} \right.
$$
\end{itemize}
Firstly we consider the cycle on five vertices, $C_5$. We define
$NC_{00}(C_5)$ based on the non-local game $MCG(C_5)$ studied in
\cite{MCTX,mathieu2018separating}. For questions which involve three
players, both non-involved players have type $0$
(see Figure \ref{fig1}).
\begin{figure}[ht]
\centering
\begin{tabular}{ccc}
\includegraphics{tikzP0.pdf}
&
\includegraphics{tikzP1.pdf}
&
\includegraphics{tikzP2.pdf}
\\
\parbox{\pSize}{\centering
(a) $T_a=11111,$\\
$ I = \{0,1,2,3,4\}, b=1$
}
&
\parbox{\pSize}{\centering
(b) $T_0=10000,$\\
$ I = \{4,0,1\}, b=0$
}
&
\parbox{\pSize}{\centering
(c) $T_1=01000,$\\
$I = \{0,1,2\}, b=0$
}
\\
\\
\includegraphics{tikzP3.pdf}
&
\includegraphics{tikzP4.pdf}
&
\includegraphics{tikzP5.pdf}
\\
\parbox{\pSize}{\centering
(d) $T_2=00100,$\\
$I = \{1,2,3\}, b=0$
}
&
\parbox{\pSize}{\centering
(e) $T_3=00010,$\\
$I = \{2,3,4\}, b=0$
}
&
\parbox{\pSize}{\centering
(f) $T_4=00001,$\\
$I = \{3,4,0\}, b=0$
}
\\
\end{tabular}
\caption{$NC_{00}(C_5)$: Square nodes indicate
a 1 in the associated type, while circular nodes indicate a 0. \emph{Involved} players in each case are shaded in {\bf \color{red} red}.\label{fig1}}
\end{figure}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ccc} \hline\noalign{\smallskip}
Type&Involved set& Binary answer {\smallskip} \\
\hline\hline\noalign{\smallskip}
$T_a=11111$ & $I(T_a)=\{0,1,2,3,4\}$ & $b(T_a)=1$\\
\hline\noalign{\smallskip}
$T_0=10000$ & $I(T_0)=\{0,1,4\}$ & $b(T_0)=0$\\
\hline\noalign{\smallskip}
$T_1=01000$ & $I(T_1)=\{0,1,2\}$ & $b(T_1)=0$ \\
\hline\noalign{\smallskip}
$T_2=00100$ & $I(T_2)=\{1,2,3\}$ & $b(T_2)=0$ \\
\hline\noalign{\smallskip}
$T_3=00010$ & $I(T_3)=\{2,3,4\}$ & $b(T_3)=0$ \\
\hline\noalign{\smallskip}
$T_4=00001$ & $I(T_4)=\{3,4,0\}$ & $b(T_4)=0$ \\
\hline
\end{tabular}
\end{center}
\caption{The $NC_{00}(C_5)$ game.}
\label{T1}
\end{table}
We consider the game with the type probability distribution $w(t)=1/6$ for all the types.
The quantum perfect strategy for $NC(G)$ is obtained when the players
each hold a qubit of the graph state $\ket{G}$ \cite{MCTX}.
Each player $i$ measures their qubit according to their type $t_i$,
obtaining a quantum advice representing their part of the quantum
strategy, $s_i$ \cite{MCTX}.
From the study of $MCG(G)$ we have
\begin{theorem}\label{thm1}
If all the players collaborate (follow the quantum advice) then
for any probability distribution over the types, the utility of each player
is $(v_0+v_1)/2$.
\end{theorem}
\begin{proof}
The output of each player's quantum measurement is uniformly distributed over $\{0,1\}$, and the quantum strategy always wins; hence each player's expected utility is $(v_0+v_1)/2$.
\end{proof}
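For contrast with Theorem \ref{thm1}, the pseudo-telepathy character of the underlying game can be confirmed by brute force (our check): over all $4^5=1024$ deterministic classical strategies for the six questions of $NC_{00}(C_5)$, at most $5$ of the $6$ questions can be won, so with $w(t)=1/6$ no classical strategy wins with probability above $5/6$:

```python
from itertools import product

N = 5
# Questions of NC_00(C_5): (type vector, involved set, expected bit b).
QUESTIONS = [((1,) * N, tuple(range(N)), 1)]           # T_a, b = 1
for j in range(N):
    t = tuple(1 if i == j else 0 for i in range(N))
    QUESTIONS.append((t, ((j - 1) % N, j, (j + 1) % N), 0))  # T_j, b = 0

def wins(strategy, question):
    """strategy[i] = (answer on type 0, answer on type 1)."""
    t, involved, b = question
    return sum(strategy[i][t[i]] for i in involved) % 2 == b

best = max(
    sum(wins(s, q) for q in QUESTIONS)
    for s in product(product((0, 1), repeat=2), repeat=N)
)
# Summing the five T_j conditions forces sum_i a_i(1) = 0 mod 2,
# contradicting the T_a condition, so all 6 cannot be won at once.
assert best == 5
```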
\subsection{Is the quantum pseudo-telepathy solution a Nash equilibrium?}\label{0}
Without loss of generality we consider $v_1\ge v_0$. The players now have an
incentive to answer $1$, because they might be able to maximize their
utility
by allowing the non-zero probability of a wrong answer. Indeed, in the
previous game, $NC_{00}(C_5)$, if the player gets type $1$ then they are
certain that they
are involved, and they won't gain by defecting (not following advice). However,
if their type is $0$, then the probability of them being involved is $1/2$,
and so there is a
fifty percent chance that they will benefit from always answering $1$ while
not
compromizing the winning combination. Getting the wrong answer in that case only
costs $v_0$.
\begin{theorem}\label{thm2}
Let $p_{\text{inv}}^{(i)}(t_i,s_i)$ be the probability for player $i$, who gets
type $t_i$ and advice $s_i$, to be involved.
Then, in $NC(G)$, the quantum advice gives a belief-invariant Nash equilibrium iff
$$\frac{v_0}{v_1}\ge 1-p, $$
where \[p=\min_i\min_{t_i}p_{\text{inv}}^{(i)}(t_i,0).\]
\end{theorem}
\begin{proof}
If the advice is $s_i=1$ then the winning payoff is already $v_1$. Consider the case when player $i$ is given the advice $s_i=0$ (which would lead to payoff $v_0$
in the winning case).
If the player defects then the difference of utility is
$-v_0 p_{\text{inv}}^{(i)}(t_i,0) + (1-p_{\text{inv}}^{(i)}(t_i,0)) (v_1-v_0)$.
So the strategy is a Nash equilibrium when
$(1-p_{\text{inv}}^{(i)}(t_i,0))\, v_1\le v_0$, i.e.\
$v_0/v_1 \ge 1-p_{\text{inv}}^{(i)}(t_i,0)$.
This inequality has to hold for all types and all players.
\end{proof}
For $NC_{00}(C_5)$,
$p_{\text{inv}}^{(i)}(0,0)=1/2$ and therefore
the quantum nonlocal strategy is an equilibrium
only when
$v_0/v_1 \ge1/2$.
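The threshold of Theorem~\ref{thm2} can be checked numerically. The following sketch (our own illustration, not part of the original analysis) computes the expected utility change for a player advised to answer $0$ who answers $1$ instead, exactly as in the proof above:

```python
def defection_gain(p_inv, v0, v1):
    """Expected change in utility for a player advised to answer 0 who
    answers 1 instead: they forfeit v0 when involved (the game is then
    lost) and gain v1 - v0 when not involved (the game is still won)."""
    return -v0 * p_inv + (1 - p_inv) * (v1 - v0)

# With p_inv = 1/2 as in NC_00(C_5), defection stops paying off exactly
# when v0/v1 reaches 1 - p_inv = 1/2.
```

For $p_{\text{inv}}=1/2$ the gain is positive when $v_0/v_1<1/2$, zero at $v_0/v_1=1/2$, and negative beyond, matching the stated threshold.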
One important characteristic of an equilibrium is the {\it Social Welfare},
which is the average utility of the players.
As a direct consequence of Theorem \ref{thm1}, the average social welfare of
the quantum strategy is independent of the graph: $$QSW(NC(G))=\frac{v_0+v_1} {2}.$$
Note that the non-collaborative games defined here have a special feature that we call {\it guaranteed value}: in any run of the game, players following the quantum strategy receive their expected payoff with probability 1.
\section{Some versions of \texorpdfstring{$NC(C_5)$}{Lg}}\label{sec:variant}
In this section we study the game $NC_{00}(C_5)$ and then introduce a number of modifications
in order to improve the quantum advantage
(the ratio of quantum social welfare to correlated social welfare) and also to symmetrize the game, so that the players get types $0$ and $1$ with the same probability or
have the same probability of being involved regardless of whether their type is $0$ or $1$.
\subsection{Study of \texorpdfstring{$NC_{00}(C_5)$}{Lg}}
Pure Nash equilibria can be described by local functions:
each player having one local type bit and one strategy bit to produce,
can locally act as follows:
\begin{itemize}
\item $ 0\rightarrow 0$ , $ 1\rightarrow 0$ constant function 0 denoted {\bf 0}
\item $ 0\rightarrow 1$ , $ 1\rightarrow 1$ constant function 1 denoted {\bf 1}
\item $ 0\rightarrow 0$ , $ 1\rightarrow 1$ Identity function denoted {\bf 2}
\item $ 0\rightarrow 1$ , $ 1\rightarrow 0$ NOT function denoted {\bf 3}
\end{itemize}
The set of pure Nash equilibria depends on the ratio $v_0/v_1$.
There are 20/25/40 pure Nash equilibria (4/4/6 up to symmetry) when $v_0/v_1$ lies within the interval
$[0,1/3]$, $[1/3,1/2]$ or $[1/2,1]$, respectively (see Table \ref{T:C00}).
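These counts can be reproduced by brute force. The sketch below is our own illustration; it assumes the payoff model described above, in which, on a winning question, every player earns $v_0$ or $v_1$ according to their own answer, and everyone earns $0$ on a losing question. It enumerates all $4^5$ profiles of local functions:

```python
from itertools import product

# The four local deterministic strategies, numbered as in the text:
# constant 0, constant 1, identity, NOT.
FUNCS = [lambda b: 0, lambda b: 1, lambda b: b, lambda b: 1 - b]

# Questions of NC_00(C_5): the all-ones type (everyone involved,
# required parity 1) and, for each i, the type e_i with involved set
# {i-1, i, i+1} and required parity 0; all six are equally likely.
QUESTIONS = [((1,) * 5, tuple(range(5)), 1)] + [
    (tuple(int(j == i) for j in range(5)),
     ((i - 1) % 5, i, (i + 1) % 5), 0)
    for i in range(5)]

def utilities(profile, v0, v1):
    """Total utility of each player over the six questions (the [x6]
    scale of the table): on a winning question every player earns v0 or
    v1 according to their own answer; everyone earns 0 otherwise."""
    u = [0.0] * 5
    for t, inv, b in QUESTIONS:
        ans = [FUNCS[f](t[j]) for j, f in enumerate(profile)]
        if sum(ans[j] for j in inv) % 2 == b:
            for j, a in enumerate(ans):
                u[j] += v1 if a else v0
    return u

def pure_nash(v0, v1):
    """All profiles of local functions admitting no strictly improving
    unilateral deviation."""
    eqs = []
    for prof in product(range(4), repeat=5):
        u = utilities(prof, v0, v1)
        if all(utilities(prof[:j] + (g,) + prof[j + 1:], v0, v1)[j]
               <= u[j] + 1e-9
               for j in range(5) for g in range(4)):
            eqs.append(prof)
    return eqs
```

With $v_1=1$ and $v_0$ chosen in the interior of each interval, the enumeration reproduces the counts 20, 25 and 40.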
\begin{center}
$\begin{array}{lllll|lllll|l}
\multicolumn{5}{c}{\text{Local functions}}
& \multicolumn{5}{c}{\text{Players utility $[\times 6]$ }}& SW [\times30]\\ \hline\hline
\noalign{\smallskip}
\multicolumn{11}{c}{v_0/v_1\le 1/3}{\smallskip}\\ \hline \\
{\bf 2} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 3v_1 & 2v_0+13v_1\\
{\bf 3} & {\bf 3} & {\bf 1} & {\bf 1} & {\bf 1} & 2v_0+v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
{\bf 3} & {\bf 1} & {\bf 3} & {\bf 1} & {\bf 1} & 2v_0+v_1 & 3v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
{\bf 3} & {\bf 3} & {\bf 3} & {\bf 3} & {\bf 1} & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
\hline
\noalign{\smallskip}
\multicolumn{11}{c}{1/3 \le v_0/v_1\le 1/2}{\smallskip}\\ \hline \\
{\bf 1} & {\bf 3} & {\bf 1} & {\bf 1} & {\bf 0} & 5v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
{\bf 2} & {\bf 2} & {\bf 1} & {\bf 1} & {\bf 1} & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_1 & 6v_0+19v_1\\
{\bf 3} & {\bf 3} & {\bf 1} & {\bf 1} & {\bf 1} & 2v_0+v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
{\bf 3} & {\bf 3} & {\bf 3} & {\bf 3} & {\bf 1} & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
\hline
\noalign{\smallskip}
\multicolumn{11}{c}{v_0/v_1\ge 1/2}{\smallskip}\\ \hline \\
{\bf 3} & {\bf 2} & {\bf 1} & {\bf 1} & {\bf 0} & 2v_0+3v_1 & 4v_0+v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
{\bf 1} & {\bf 3} & {\bf 1} & {\bf 1} & {\bf 0} & 5v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
{\bf 2} & {\bf 2} & {\bf 1} & {\bf 1} & {\bf 1} & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_1 & 6v_0+19v_1\\
{\bf 3} & {\bf 3} & {\bf 1} & {\bf 2} & {\bf 1} & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 4v_0+v_1 & 5v_1 & 8v_0+17v_1\\
{\bf 3} & {\bf 3} & {\bf 3} & {\bf 3} & {\bf 1} & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
{\bf 3} & {\bf 2} & {\bf 3} & {\bf 2} & {\bf 2} & 2v_0+3v_1 & 4v_0+v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 14v_0+11v_1\\
\hline
\end{array}
$
\captionof{table}{Nash equilibria of $NC_{00}(C_5)$ for three intervals of the ratio $v_0/v_1$. At the critical values $1/3$ and $1/2$, the set of equilibria is the union of those for the two adjacent intervals.}
\label{T:C00}
\end{center}
We can see that most of these equilibria (all of them when $v_0/v_1\ge 1/2$)
correspond to local functions winning for the 5 types.
When $v_0=2/3$ and $v_1=1$, the quantum social welfare of the
pseudotelepathy strategy is $QSW=0.83$ whereas the best classical social
welfare is $CSW=0.77$.
As noted in Section~\ref{0}, the probabilities of being involved in $NC_{00}$ are $p(1,s)=1$ and $p(0,s)=1/2$, and the quantum pseudotelepathy measurement
strategy is an equilibrium if $v_0/v_1\ge 1/2$.
Similar behavior can be seen with Pareto equilibria (those in which no player's utility can improve without reducing the outcome of someone else): see the Appendix.
Recall that the characteristic feature of $NC_{00}(C_5)$ is that each player has unequal probabilities of getting different types. The game can be symmetrized
by changing the types of the non-involved players from $00$ to $01$, as shown in the next section.
\subsection{Comments on \texorpdfstring{$NC_{01}(C_5)$}{Lg}}
We define a second variant of $MCG(C_5)$, denoted $NC_{01}(C_5)$, in which every player
gets the types 0 and 1 with probability 1/2; this is achieved by adding an
extra 1 to the type of one non-involved player, so that
$T_i=0_{i-1}1_i0_{i+1}1_{i+2}0_{i+3}$: see Table \ref{T:C01}.
If the type probability distribution is $w(t)=1/6$ for all the types, then
one can see that any player is involved
with probability 2/3 whether their input is 0 or 1, i.e.
$p_{\text{inv}}^{(i)}(0,0)=p_{\text{inv}}^{(i)}(1,0)=2/3$.
Hence, by Theorem \ref{thm2}, the quantum strategy of MCG produces a
Nash equilibrium iff $v_0/v_1 \ge 1/3$.
Thus, one of the benefits of this variant is that quantum Nash equilibria exist at a lower ratio $v_0/v_1$.
Note that in this version each player gets a perfectly random bit as
advice: $p(a=1)=p(a=0)=1/2$.
When $v_0=2/3$ and $v_1=1$, the quantum social welfare of the
pseudotelepathy strategy is $QSW=0.83$ whereas
the best classical social welfare is $CSW=0.78$.
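The involvement probabilities above can be verified by enumerating the six types of Table \ref{T:C01}. The sketch below is our own illustration (the helper name `nc01_stats` is ours):

```python
def nc01_stats(n=5):
    """For each player of NC_01(C_n): (number of the n+1 equiprobable
    types in which they are involved, number of types giving them
    input bit 1)."""
    # The all-ones type, in which everyone is involved.
    types = [((1,) * n, set(range(n)))]
    # For each i: ones at positions i and i+2, involved set {i-1, i, i+1}.
    for i in range(n):
        t = tuple(int(j in (i, (i + 2) % n)) for j in range(n))
        types.append((t, {(i - 1) % n, i, (i + 1) % n}))
    return [(sum(1 for _, inv in types if p in inv),
             sum(t[p] for t, _ in types)) for p in range(n)]
```

Every player is involved in 4 of the 6 types (probability $2/3$) and receives input bit 1 in 3 of them (probability $1/2$), as claimed.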
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ccc} \hline\noalign{\smallskip}
Type&Involved set& Binary answer {\smallskip} \\
\hline\hline\noalign{\smallskip}
$T_a=11111$ & $I(T_a)=\{0,1 ,2,3,4\}$ & $b(T_a)=1$\\
\hline\noalign{\smallskip}
$T_0=10100$ & $I(T_0)=\{0,1 ,4\}$ & $b(T_0)=0$\\
\hline\noalign{\smallskip}
$T_1=01010$ & $I(T_1)=\{0,1 ,2\}$ & $b(T_1)=0$ \\
\hline\noalign{\smallskip}
$T_2=00101$ & $I(T_2)=\{1 ,2,3\}$ & $b(T_2)=0$ \\
\hline\noalign{\smallskip}
$T_3=10010$ & $I(T_3)=\{2,3 ,4\}$ & $b(T_3)=0$ \\
\hline\noalign{\smallskip}
$T_4=01001$ & $I(T_4)=\{3,4 ,0\}$ & $b(T_4)=0$ \\
\hline
\end{tabular}
\end{center}
\caption{$NC_{01}(G)$ game (Here the players are identified with the integers modulo 5).}
\label{T:C01}
\end{table}
\subsection{Comments on \texorpdfstring{$NC_{00,0}(C_5)$}{Lg}}
A modification of a different kind consists in adding more
questions from the stabiliser. As the first example of this kind we
define a game $NC_{00,0}(C_5)$, where the additional family of questions has
four involved players with the non-involved player getting type
$0$, as specified by Table \ref{T:C000}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ccc} \hline\noalign{\smallskip}
Type&Involved set& Binary answer {\smallskip} \\
\hline\hline\noalign{\smallskip}
$T_a=11111$ & $I(T_a)=\{0,1,2,3,4\}$ & $b(T_a)=1$\\
\hline\noalign{\smallskip}
$T_{i_1}=0_{i_1-2}0_{i_1-1}1_{i_1}0_{i_1+1}0_{i_1+2} $ & $I(T_{i_1})=\{i_1-1,i_1,i_1+1\}$ & $b(T_{i_1})=0$\\
$i_1\in \{0,\ldots ,4\}$ &&\\
\hline\noalign{\smallskip}
$T_{i_2}=0_{i_2-1}1_{i_2} 0_{i_2+1}1_{i_2+2}0_{i_2+3}$ & $ I(T_{i_2})=\{i_2-1,i_2,i_2+2,i_2+3\}$ &$ b(T_{i_2})=0$ \\%& w(t_{i_3})=1/13 \\
$i_2\in \{0,\ldots ,4\}$ &&\\
\hline\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\caption{$NC_{00,0}(G)$ game.}
\label{T:C000}
\end{table}
For $v_1=1$, $v_0= \frac 2 3$, and the probability distribution $w(T_a)=\frac 3 {13}$, $w(T_{i_1})=w(T_{i_2})= \frac 1 {13}$,
we get a CSW of $0.72$ versus a QSW of $0.83$.
Note that each player gets types $0$ and $1$ with different probabilities. In fact, it is simple to show that no choice of $w_1$, $w_2$ and $w_3$ can make
these probabilities equal. However, it is possible to modify the set of types so that equality becomes possible, as shown in the following subsection.
\subsection{Comments on \texorpdfstring{$NC_{00,01,0}(C_5)$}{Lg}}
We increase the set of types using other questions from the stabiliser:
we define a game $NC_{00,01,0}(C_5)$ for which, with a suitable choice
of probability distribution, the players get 0 and 1 with the same probability.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ccc} \hline\noalign{\smallskip}
Type&Involved set& Binary answer {\smallskip} \\
\hline\hline\noalign{\smallskip}
$T_a=11111$ & $I(T_a)=\{0,1,2,3,4\}$ & $b(T_a)=1$\\
\hline\noalign{\smallskip}
$T_{i_1}=0_{i_1-2}0_{i_1-1}1_{i_1}0_{i_1+1}0_{i_1+2} $ & $I(T_{i_1})=\{i_1-1,i_1,i_1+1\}$ & $b(T_{i_1})=0$\\
$i_1\in \{0,\ldots ,4\} $&& \\
\hline\noalign{\smallskip}
$T_{i_2}=0_{i_2-1}1_{i_2} 0_{i_2+1}1_{i_2+2}0_{i_2+3} $& $I(T_{i_2})=\{i_2-1,i_2,i_2+1\} $&$ b(T_{i_2})=0$ \\%& w(t_{i_2})=1/26 \\
$i_2\in \{0,\ldots ,4\} $&&\\
\hline\noalign{\smallskip}
$T_{i_3}=0_{i_3-1}1_{i_3} 0_{i_3+1}1_{i_3+2}0_{i_3+3}$&$ I(T_{i_3})=\{i_3-1,i_3,i_3+2,i_3+3\}$ &$ b(T_{i_3})=0$ \\%& w(t_{i_3})=1/13 \\
$i_3\in \{0,\ldots ,4\} $&&\\
\hline\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\caption{$NC_{00,01,0}(G)$ game.}
\label{T:C00010}
\end{table}
We consider this game with the type probability distribution given by $w(T_a)=3/13$,
$w(T_{i_1})=1/26$, $w(T_{i_2})=1/26$ and $w(T_{i_3})=1/13$.
The involvement probabilities satisfy $P_{\text{inv}}(1)>P_{\text{inv}}(0)=8/13$, and the best
classical social welfare with $v_0=2/3$, $v_1=1$ is $CSW= 0.72$ versus a
QSW of $0.83$.
Note that even though the types $T_{i_2}$ and $T_{i_3}$ are similar,
the involved sets and thus the utilities are different. However, if one wants to
restrict to scenarios in which the utility can be deterministically determined
from the type, one can simply add an extra player whose type makes it possible
to distinguish the different cases and whose utility is the average utility of
the other players, independently of their action.
\section{Quantum vs correlation separation}\label{sec:amp}
In \cite{belief1} it is asked as an open question whether the separation
between classical and quantum social welfare is bounded.
We show in this section how two families of amplification techniques can
increase the separation by adding a penalty for wrong answers and then by
increasing the number of players.
\subsection{Wrong answer penalty}
A possible technique is to penalize bad answers more heavily, using the fact that classical
strategies always produce a bad answer for some question.
Instead of getting 0 when losing, we generalize so that each player gets
$-N_g v_1$ if they answered 1 and $-N_g v_0$ if they answered 0, where the
positive number $N_g$ can be seen as the penalty for giving a wrong answer.
Writing $\delta_{(s,t),{\cal L}}=1$ if $(s,t)\in {\cal L}$ (the set of losing
pairs) and 0 otherwise, the utility becomes
$$u_j(s|t)= (-N_g)^{\delta_{(s,t),{\cal L}}}\, v_{s_j}.$$
For $NC_{01}(C_5)$, as soon as $N_g>3 v_1$ there exist only two classical Nash
equilibria:
\begin{itemize}
\item All {\bf 0}, with a social welfare of $\frac{-N_g v_0 + 5 v_0}{6}$, and
\item All {\bf 3} (NOT), with a social welfare of $\frac{-N_g v_0 + 2 v_0 + 3 v_1}{6}$.
\end{itemize}
Therefore the classical social welfare decreases linearly with the penalty while the quantum average social welfare remains $\frac{ v_1+ v_0} {2}$.
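The two closed forms above can be recovered from a direct evaluation of $NC_{01}(C_5)$ with the penalty in place. The sketch below is our own illustration, assuming the payoff model in which, on a winning question, a player earns $v_0$ or $v_1$ according to their answer, and a losing question multiplies that payoff by $-N_g$:

```python
def nc01_sw(local_fn, v0, v1, Ng):
    """Average social welfare of NC_01(C_5) when every player applies
    the same local function and losing answers are scaled by -Ng."""
    n = 5
    # The all-ones type (required parity 1) plus, for each i, the type
    # with ones at i and i+2, involved set {i-1, i, i+1}, parity 0.
    types = [((1,) * n, list(range(n)), 1)]
    for i in range(n):
        t = tuple(int(j in (i, (i + 2) % n)) for j in range(n))
        types.append((t, [(i - 1) % n, i, (i + 1) % n], 0))
    total = 0.0
    for t, inv, b in types:
        ans = [local_fn(t[j]) for j in range(n)]
        win = sum(ans[j] for j in inv) % 2 == b
        for a in ans:
            pay = v1 if a else v0
            total += pay if win else -Ng * pay
    return total / (6 * n)   # average over 6 types and 5 players

v0, v1, Ng = 0.5, 1.0, 4.0
sw_all_zero = nc01_sw(lambda b: 0, v0, v1, Ng)     # (5 v0 - Ng v0) / 6
sw_all_not = nc01_sw(lambda b: 1 - b, v0, v1, Ng)  # (2 v0 + 3 v1 - Ng v0) / 6
```

Both values fall below the quantum social welfare $(v_0+v_1)/2$, which is unaffected by the penalty since the quantum strategy never loses.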
\subsection{Distributed parallel repetition}
The distributed parallel composition of nonlocal games appears in \cite{holmg}
for the study of non-signaling correlations, and also in \cite{Viddick},
where it is called $k$-fold repetition:
$k$ groups of players play at the same time, and they win collectively if
all the groups win their game.
\begin{theorem} There exist games with bounded personal utilities
$v_0$, $v_1$ on $O(\log(\frac 1 \epsilon))$ players
ensuring $\frac{CSW(G)}{QSW(G)}\le \epsilon$ for the ratio of the best
classical social welfare to the quantum social welfare with guaranteed value.
\end{theorem}
\begin{proof}
It is easy to bound the utility in this setting, as for any strategy in a
repeated game. If a player $p$ is involved in the strategy $S_j$ of their
group but not in the strategy $S_i$ of another group, then receiving a
positive utility is conditioned on the strategy $S_i$ winning, so
$$u^p(S_i\times S_j)\le p_{win}(S_i)\, u^p(S_j).$$
As the quantum strategy obtained by following the nonlocal advice always
wins, the QSW remains unchanged whereas the CSW decreases.
For instance, $CSW(k\text{-fold } NC_{00}(C_5)) =\left(\frac{5}{6}\right)^k CSW(NC_{00}(C_5))$.
Therefore, using these games one can build games with bounded personal
utilities $v_0$, $v_1$ on $O(\log(\frac 1 \epsilon))$ players
ensuring $\frac{CSW(G)}{QSW(G)}\le \epsilon$.
\end{proof}
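Taking the $(5/6)^k$ decay of the classical social welfare at face value, the number of groups (and hence players) needed for a target separation $\epsilon$ can be sketched as follows (our own illustration):

```python
import math

def groups_needed(eps, win_prob=5 / 6):
    """Smallest number of parallel groups k such that the classical
    social welfare, which scales as win_prob**k, drops below eps times
    its base value; the quantum social welfare is unchanged."""
    return math.ceil(math.log(eps) / math.log(win_prob))

# Each group of NC_00(C_5) contributes 5 players, so 5 * groups_needed(eps)
# players suffice, i.e. O(log(1/eps)) players.
k = groups_needed(0.01)
```

For instance, $\epsilon = 1/2$ already requires $k=4$ groups, and the player count grows only logarithmically in $1/\epsilon$.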
\section{Conclusion}
We have used properties of multipartite graph games to define conflict-of-interest games, and shown
that by combining such games the ratio of classical social welfare to quantum social welfare can go to zero.
One can easily extend this to stabilizer games \cite{Viddick} to obtain any number
of types and possible strategies.
As pointed out in \cite{belief1}, quantum advice equilibria can be reached
without the need for a trusted mediator; furthermore, they
ensure privacy as they are belief-invariant.
Some other features may be emphasized if we define Nash equilibria using
pseudotelepathy games: such situations ensure a guaranteed utility and
they are also better when analysing the maximal minimal utility.
It may be interesting to investigate further how this guaranteed-value property
of some quantum equilibria can be used.
On the other hand, it would also be interesting to investigate how relaxing the guaranteed-win requirement might allow the QSW to increase even further.
The possibility of potentially unlimited
improvement of social welfare while preserving belief invariance is therefore a
strong motivation to consider classical payoff tables that arise in usual
situations where Nash equilibria occur and play an important role. For
example, in routing problems an advice provider could use
a quantum advice system as follows: either send one rotated qubit to
each player asking for advice, with each player measuring their qubit to get the
answer, or (in a trusted setting) compute the advice using a quantum measurement and send a classical message.
\section*{Acknowledgement}
This research was supported through the program ``Research in Pairs''
by the
Mathematisches Forschungsinstitut Oberwolfach in 2019.
The authors also acknowledge the
\emph{``Investissements d'avenir''} (ANR-15-IDEX-02) program of
the French National Research Agency, NCN grant Sonata UMO-2014/14/E/ST2/00020 and thank Sidney Sussex College, Cambridge for support.
\newpage
\section*{Appendix}
\label{Appendix}
{\bf Pareto} equilibria for $NC_{00}$ when $v_0/v_1\le 1/3$; number of solutions: 121, number of distinct equilibria: 18
$
\begin{array}{lllll|lllll|l} \multicolumn{5}{c}{\text{Local functions}} & \multicolumn{5}{c}{\text{Players utility $[\times 6]$}}& SW [\times30] \\ \hline \\
2 & 1 & 1 & 0 & 0 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 5v_0 & 13v_0+12v_1\\
3 & 3 & 2 & 0 & 0 & v_0+2v_1 & v_0+2v_1 & v_0+2v_1 & 3v_0 & 3v_0 & 9v_0+6v_1\\
3 & 2 & 1 & 1 & 0 & 2v_0+3v_1 & 4v_0+v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 1 & 1 & 0 & 5v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
1 & 3 & 3 & 1 & 0 & 5v_1 & v_0+4v_1 & v_0+4v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
2 & 3 & 1 & 2 & 0 & 3v_0+2v_1 & v_0+4v_1 & 5v_1 & 3v_0+2v_1 & 5v_0 & 12v_0+13v_1\\
3 & 2 & 2 & 3 & 0 & v_0+4v_1 & 4v_0+v_1 & 4v_0+v_1 & v_0+4v_1 & 5v_0 & 15v_0+10v_1\\
2 & 1 & 1 & 1 & 1 & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 3v_1 & 2v_0+13v_1\\
2 & 2 & 1 & 1 & 1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_1 & 6v_0+19v_1\\
3 & 3 & 1 & 1 & 1 & 2v_0+v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
3 & 1 & 3 & 1 & 1 & 2v_0+v_1 & 3v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
3 & 3 & 1 & 2 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 4v_0+v_1 & 5v_1 & 8v_0+17v_1\\
3 & 3 & 2 & 2 & 1 & 2v_0+3v_1 & v_0+4v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 9v_0+16v_1\\
3 & 2 & 3 & 2 & 1 & v_0+2v_1 & 2v_0+v_1 & 2v_0+v_1 & 2v_0+v_1 & 3v_1 & 7v_0+8v_1\\
3 & 2 & 2 & 3 & 1 & v_0+2v_1 & v_0+2v_1 & v_0+2v_1 & v_0+2v_1 & 3v_1 & 4v_0+11v_1\\
3 & 3 & 3 & 3 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
3 & 2 & 3 & 2 & 2 & 2v_0+3v_1 & 4v_0+v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 14v_0+11v_1\\
3 & 3 & 3 & 3 & 3 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & 5v_0+20v_1\\
\end{array}
$
\vspace{0.4cm}
\noindent {\bf Pareto} equilibria for $NC_{00}$ when $1/3\le v_0/v_1\le 1/2$; number of solutions: 91, number of distinct equilibria: 14
$\begin{array}{lllll|lllll|l} \multicolumn{5}{c}{\text{Local functions}} & \multicolumn{5}{c}{\text{Players utility $[\times 6]$}}& SW [\times30] \\ \hline \\
2 & 1 & 1 & 0 & 0 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 5v_0 & 13v_0+12v_1\\
3 & 2 & 1 & 1 & 0 & 2v_0+3v_1 & 4v_0+v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 1 & 1 & 0 & 5v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
1 & 3 & 3 & 1 & 0 & 5v_1 & v_0+4v_1 & v_0+4v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
2 & 3 & 1 & 2 & 0 & 3v_0+2v_1 & v_0+4v_1 & 5v_1 & 3v_0+2v_1 & 5v_0 & 12v_0+13v_1\\
3 & 2 & 2 & 3 & 0 & v_0+4v_1 & 4v_0+v_1 & 4v_0+v_1 & v_0+4v_1 & 5v_0 & 15v_0+10v_1\\
2 & 2 & 1 & 1 & 1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_1 & 6v_0+19v_1\\
3 & 3 & 1 & 1 & 1 & 2v_0+v_1 & 2v_0+v_1 & 3v_1 & 3v_1 & 3v_1 & 4v_0+11v_1\\
3 & 3 & 1 & 2 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 4v_0+v_1 & 5v_1 & 8v_0+17v_1\\
3 & 3 & 2 & 2 & 1 & 2v_0+3v_1 & v_0+4v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 9v_0+16v_1\\
3 & 2 & 2 & 3 & 1 & v_0+2v_1 & v_0+2v_1 & v_0+2v_1 & v_0+2v_1 & 3v_1 & 4v_0+11v_1\\
3 & 3 & 3 & 3 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
3 & 2 & 3 & 2 & 2 & 2v_0+3v_1 & 4v_0+v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 14v_0+11v_1\\
3 & 3 & 3 & 3 & 3 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & 5v_0+20v_1\\
\end{array}$
\vspace{0.4cm}
\noindent {\bf Pareto} equilibria for $NC_{00}$ when $v_0/v_1\ge 1/2$; number of solutions: 81, number of distinct equilibria: 12
$\begin{array}{lllll|lllll|l} \multicolumn{5}{c}{\text{Local functions}} & \multicolumn{5}{c}{\text{Players utility $[\times 6]$}}& SW [\times30] \\ \hline \\
2 & 1 & 1 & 0 & 0 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 5v_0 & 13v_0+12v_1\\
3 & 2 & 1 & 1 & 0 & 2v_0+3v_1 & 4v_0+v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 1 & 1 & 0 & 5v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
1 & 3 & 3 & 1 & 0 & 5v_1 & v_0+4v_1 & v_0+4v_1 & 5v_1 & 5v_0 & 7v_0+18v_1\\
2 & 3 & 1 & 2 & 0 & 3v_0+2v_1 & v_0+4v_1 & 5v_1 & 3v_0+2v_1 & 5v_0 & 12v_0+13v_1\\
3 & 2 & 2 & 3 & 0 & v_0+4v_1 & 4v_0+v_1 & 4v_0+v_1 & v_0+4v_1 & 5v_0 & 15v_0+10v_1\\
2 & 2 & 1 & 1 & 1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_1 & 6v_0+19v_1\\
3 & 3 & 1 & 2 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 4v_0+v_1 & 5v_1 & 8v_0+17v_1\\
3 & 3 & 2 & 2 & 1 & 2v_0+3v_1 & v_0+4v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 9v_0+16v_1\\
3 & 3 & 3 & 3 & 1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 8v_0+17v_1\\
3 & 2 & 3 & 2 & 2 & 2v_0+3v_1 & 4v_0+v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 14v_0+11v_1\\
3 & 3 & 3 & 3 & 3 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & v_0+4v_1 & 5v_0+20v_1\\
\end{array}$
\newpage
{\bf Nash equilibria for $NC_{01}$}

\noindent Nash equilibria for $NC_{01}$ when $1/3\le v_0/v_1\le 1/2$; number of solutions: 76, number of distinct equilibria: 13
$\begin{array}{lllll|lllll|l} \multicolumn{5}{c}{\text{Local functions}} & \multicolumn{5}{c}{\text{Players utility $[\times 6]$}}& SW [\times30] \\ \hline \\
1 & 1 & 2 & 0 & 0 & 5v_1 & 5v_1 & 2v_0+3v_1 & 5v_0 & 5v_0 & 12v_0+13v_1\\
3 & 2 & 1 & 1 & 0 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 1 & 1 & 0 & 5v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 8v_0+17v_1\\
3 & 2 & 2 & 1 & 0 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 3 & 1 & 0 & 5v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_1 & 5v_0 & 9v_0+16v_1\\
1 & 3 & 1 & 2 & 0 & 5v_1 & 2v_0+3v_1 & 5v_1 & 2v_0+3v_1 & 5v_0 & 9v_0+16v_1\\
2 & 1 & 3 & 2 & 0 & 2v_0+3v_1 & 5v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 5v_0 & 11v_0+14v_1\\
2 & 2 & 1 & 1 & 1 & 3v_0+2v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_1 & 5v_0+20v_1\\
3 & 3 & 1 & 2 & 1 & 2v_0+3v_1 & 3v_0+2v_1 & 5v_1 & 3v_0+2v_1 & 5v_1 & 8v_0+17v_1\\
3 & 3 & 2 & 2 & 1 & 3v_0+2v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 5v_1 & 10v_0+15v_1\\
3 & 3 & 3 & 3 & 1 & 3v_0+2v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 11v_0+14v_1\\
3 & 2 & 3 & 2 & 2 & 3v_0+2v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 2v_0+3v_1 & 14v_0+11v_1\\
3 & 3 & 3 & 3 & 3 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 2v_0+3v_1 & 10v_0+15v_1\\
\end{array}$
\vspace{0.4cm}
Nash equilibria for $NC_{01}$ when $v_0/v_1\ge 1/2$; number of solutions: 40, number of distinct equilibria: 6
$\begin{array}{lllll|lllll|l} \multicolumn{5}{c}{\text{Local functions}} & \multicolumn{5}{c}{\text{Players utility $[\times 6]$}}& SW [\times30] \\ \hline \\
3 & 2 & 1 & 1 & 0 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 11v_0+14v_1\\
1 & 3 & 1 & 1 & 0 & 5v_1 & 3v_0+2v_1 & 5v_1 & 5v_1 & 5v_0 & 8v_0+17v_1\\
2 & 2 & 1 & 1 & 1 & 3v_0+2v_1 & 2v_0+3v_1 & 5v_1 & 5v_1 & 5v_1 & 5v_0+20v_1\\
3 & 3 & 1 & 2 & 1 & 2v_0+3v_1 & 3v_0+2v_1 & 5v_1 & 3v_0+2v_1 & 5v_1 & 8v_0+17v_1\\
3 & 3 & 3 & 3 & 1 & 3v_0+2v_1 & 2v_0+3v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 5v_1 & 11v_0+14v_1\\
3 & 2 & 3 & 2 & 2 & 3v_0+2v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 3v_0+2v_1 & 2v_0+3v_1 & 14v_0+11v_1\\
\end{array}$
\bibliographystyle{plain}
\section{Background}
\label{sec:background}
This section provides general technical background on wireless
charging and power side-channel attacks which is necessary to understand
the proposed wireless charging power side-channel attack.
\subsection{Wireless Charging}
Using the open interface standard Qi for wireless power transfer is the prevailing method for wirelessly charging smart devices. Qi was developed by the Wireless Power Consortium and describes the functional and physical characteristics necessary to allow the exchange of power and information between a receiver and a transmitter. Currently, Qi supports two power specifications to charge mobile devices: the Qi Baseline Power Profile, which delivers power below 5 W, and the Qi Extended Power Profile, which supports up to 15 W~\cite{wpcs}. Wireless charging has quickly become standard in new devices; following its release in 2008, Qi was integrated into over 200 smart devices by 2021~\cite{qi}. The ubiquity of wireless charging in the form of public charging stations makes the consequences of potential side-channel attacks severe.
Qi utilizes inductive charging to wirelessly transfer power from a transmitter to a receiver. Under this charging scheme, an induction coil on the transmitter (the primary coil) couples to another coil on the receiver (the secondary coil). The transmitter then runs an alternating current through its coil which induces a charge in the receiving coil by Faraday's law of induction. Additionally, resonant inductive coupling is employed so that the devices can charge while up to 4 cm apart. Resonant inductive coupling occurs in coupling systems that have capacitors connected to both induction coils, creating LC circuits with individual resonance frequencies~\cite{wpcs}. The alternating current driven by the transmitter can then cause the load-bearing side to resonate which increases the coupling strength. The current in the receiving coil is then rectified into direct current so that it can be employed to charge a battery or directly power a device.
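The resonance frequency of each coupled LC circuit follows the standard relation $f_0 = 1/(2\pi\sqrt{LC})$. A minimal sketch (our own illustration; the component values are illustrative assumptions, not values mandated by the Qi specification):

```python
import math

def resonant_frequency(inductance, capacitance):
    """Resonance frequency f0 = 1 / (2*pi*sqrt(L*C)) of an LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Illustrative values: a 24 uH coil with a 100 nF tuning capacitor
# resonates near 103 kHz, within the roughly 87-205 kHz band used by
# Qi power transfer.
f0 = resonant_frequency(24e-6, 100e-9)
```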
The implementation of the Qi standard requires special circuitry that interposes between the power source and the device battery. Figure~\ref{fig:qi-hardware} shows that communication between the two devices occurs via backscatter modulation and is unidirectional from the receiver to the transmitter. The transmitting coil is a part of a power conversion unit while the receiving coil is a part of a power pick-up unit. Both the transmitter and receiver contain communications and control units that use PID controllers in order to balance the transferred power level to the amount requested by the charging device.
The communication protocol of the Qi standard involves five phases. In the first phase, the power transmitter sends out an analog ping to detect whether or not an object is present. The power transmitter then sends out a longer, digital ping in order to give the receiver time to reply with a signal-strength packet. If the transmitter determines this packet is valid, it will continue to power its coil and proceed to the next step. The third phase is known as the identification and configuration phase, where information is sent by the receiver in packets in order to properly configure the transmitter for power transfer. Next, the power-transfer phase begins, during which the receiver sends control error packets to modify the power supply. The final phase occurs when the power receiver stops communication or specifically requests the end of power transfer~\cite{wpcs}.
In terms of power delivery, Qi wireless charging is less efficient than wired charging. Wireless charging introduces noise, and some have speculated that this type of noise is a good countermeasure against side-channel attacks that examine the amount of current used to charge a smartphone~\cite{daily_swig}. However, wireless charging transmitters do not store any significant amount of charge. Therefore, most current that enters the transmitter directly reflects the activity of the phone, which acts as a load on the receiver.
\vspace{-0.1in}
\subsection{Battery Charging Cycles}
Most smartphones use lithium-ion (Li-ion) batteries. These batteries go through different charging stages~\cite{microchip}. The first stage, known as constant current, involves supplying the maximum current to the battery, steadily increasing its voltage. Once the voltage of the battery reaches approximately 4.2 V, the second stage, known as the constant voltage stage, will begin. During this phase, the supplied current drops off in order to maintain the current voltage level of the battery. After the battery state of charge has reached 100\%, if it is still charging, the charger will provide a topping charge to make up for any phenomena that discharged the battery and return the state of charge to 100\%~\cite{time}.
As a result of the charging stages, the amount of current drawn by a phone heavily depends on the battery state of charge regardless of how much power the device is consuming. When the phone battery is at a low state of charge that corresponds to the constant current stage, the amount of power a phone is consuming will not significantly affect its overall current draw. This is because the phone is consuming power from the battery, and as long as the battery remains below the threshold for constant voltage to be applied, the same amount of maximum current will be delivered in order to charge the battery. On the other hand, when the battery of the smartphone is in the constant voltage stage, the power consumption of the phone will affect the voltage of the battery and the current will vary in order to maintain the desired voltage. When the phone battery is fully charged, the amount of power drawn from the battery is a direct reflection of the amount of current supplied to top off its charge.
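The dependence of the current draw on the state of charge can be captured by an idealized CC/CV model. The sketch below is our own illustration; the parameter values (`i_max`, `soc_cv`, `tau`) are arbitrary assumptions chosen only to show the shape of the curve:

```python
import math

def charging_current(soc, i_max=2.0, soc_cv=0.8, tau=0.07):
    """Idealized Li-ion charge current versus state of charge (0..1):
    constant current below soc_cv, then an exponential taper while the
    charger holds the battery at constant voltage."""
    if soc < soc_cv:
        return i_max          # constant-current stage: draw is flat
    return i_max * math.exp(-(soc - soc_cv) / tau)
```

In the flat constant-current region the phone's own power consumption barely affects the measured draw, whereas near full charge the supplied current is small and tracks the load much more directly, as described above.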
\vspace{-0.1in}
\subsection{Power Side-Channels}
Side-channel attacks are methods to acquire sensitive information through unintended secret-dependent variations in physical behaviors.
The information leaked from a side-channel attack is a byproduct of computations occurring on hardware and is not a specific software vulnerability.
Power side-channel attacks are a specific type of side-channel attack that analyze the power traces of the electrical activity on a device to extract information~\cite{kocher}. Simple power analysis (SPA) is a method of power side-channel attack that infers a secret value from a power trace by identifying power consumption profiles that directly depend on the secret. Frequency filters and averaging functions can be applied to filter out noise in these power traces~\cite{clark-identify}. Differential power analysis (DPA) is a more complex method of side-channel attack that identifies intermediate values within cryptographic computations through statistical analysis of previously collected data. Signal processing and error-correction techniques can also be applied to DPA attacks.
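As a minimal, generic illustration of the smoothing step (not the exact filters used in the cited attacks), a moving average suppresses high-frequency noise in a power trace:

```python
def moving_average(trace, window=5):
    """Smooth a power trace with a simple moving average of length `window`."""
    return [sum(trace[i:i + window]) / window
            for i in range(len(trace) - window + 1)]
```

Applying this before feature extraction attenuates measurement noise while preserving the slower, activity-dependent current variations.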
While power side-channel attacks are an established field of research, applying these techniques to mobile devices is a relatively new endeavor. Mobile devices are uniquely susceptible to side-channel attacks because they are portable, generally powered on, and have a multitude of sensors. Understanding the extent of sensitive information that a power side-channel attack can infer will provide insight into security risks.
Smartphone security relies on two basic premises: application sandboxing and a permission system. These rules ensure that applications cannot access sensitive information contained in another resource. Yet, even without direct access to the data pins of the smart device, power side-channel attacks have proven to be effective against smartphones. A public USB power station for a smartphone can be considered as a potential physical adversary for a power side-channel because it requires the phone to have a direct connection to the station that is collecting data on its power usage. This type of attack is non-invasive because it does not manipulate the packaging of the chip and is passive because the power consumption is only observed and not influenced.
For example, Yang et al.~\cite{yang-usb} showed that charging a smartphone over a USB cable exposes a side-channel that is vulnerable to an SPA attack. By monitoring the power that a charging smartphone drew while loading webpages, they were able to successfully infer private browsing information. Figure~\ref{wired} shows that in the current traces we collected, different websites leave unique signatures through the wired charging side-channel over short time durations.
\begin{figure}[!h]
\centering
\includegraphics{figs/Wired_multiple_battery_levels_double.pdf}
\caption{A wired charger draws a varying amount of current as mobile webpages are loaded on the charging phone.}
\label{wired}
\end{figure}
\section{Conclusion}
This paper presents a new side-channel attack that occurs when a Qi-compatible smart device is wirelessly charging and the power consumption of the wireless transmitter is recorded. We show that a low-cost device can be used to collect current traces and infer private information such as browser activity. We demonstrate that this attack succeeds even if the user's phone is not fully charged, requires no permission from the phone OS or user, and works even when the acquired current trace is quite short (2.5 seconds). Additionally, this new side-channel leaks more information at lower battery levels than a wired power side-channel in the same setup.
While this work explores a new side-channel present in all wireless charging compatible smart devices, the entire scope and constraints of the wireless charging side-channel attack and useful countermeasures need to be researched in future work.
\section{Discussions}
\subsection{Countermeasures}
While it enables attacks without a physical connection, the wireless charging side-channel attack is still based on the same secret-dependent variations in the device's power consumption that the traditional power side-channel attacks exploit.
In that sense, the existing countermeasures against power side-channel attacks can also prevent the wireless charging side-channel attack.
For example, Pothukuchi et al.~\cite{maya} show that the power dissipated by a computer can be reshaped to obfuscate the fingerprint left by a running application. Matovu et al.~\cite{matovu} present both a software and hardware solution as defense mechanisms against malicious charging stations. Yan et al.~\cite{yan-approach} suggest energy obfuscation through code injection, which would embed meaningless code in applications in order to make features in the power trace be less predictable. Similarly, Spreitzer et al.~\cite{spreitzer} propose execution randomization as a defense mechanism against power analysis attacks. A variety of methods exist to insert random noise into a power trace or obscure sensitive information by making adjustments at the cell level~\cite{popp}. Cronin et al.~\cite{chargesurfing} found that applying a low-pass filter with a cutoff of 60 Hz to collected power trace data reduced the accuracy of their passcode cracking attack to that of a random guess.
To further reduce the amount of information leaked though wireless charging, we may be able to augment the charging algorithm to avoid fully-charging the battery at less trusted locations.
Currently, iPhones running iOS 13 or later employ Optimized Battery Charging, a charging algorithm that reduces the amount of time an iPhone spends fully charged in order to preserve its battery lifespan. This feature uses location data to determine whether or not to delay charging past 80\%~\cite{apple}. If this algorithm could be adjusted to also engage when the iPhone is connected to an untrustworthy charger, then the battery would never leave the constant current Li-ion charging stage as seen in Figure~\ref{fig:long}. Our results show that minimal information would leak to the charger at states of charge less than or equal to this point because the same amount of maximum current will be delivered to the battery regardless of the process currently executing.
\subsection{Other Attacks on Wireless Charging}
This paper investigates the information leakage arising from wireless charging and
demonstrates a website fingerprinting attack using the wireless power side channel.
The wireless power side channel has the potential to leak many other types of information
or activities on a mobile device that affect the device's power consumption.
For example, a recent study~\cite{chargesurfing} showed that the wired USB power
side channel can be used to infer the context of a touchscreen.
It is also an open question if the wireless power side channel can be used to
leak more fine-grained information such as a secret value that is used for processing.
The wireless charging interface may also introduce additional vulnerabilities
beyond side-channel information leakage. For example, a malicious wireless charger may deliver a high current as a way to damage a circuit or perform repeated charging/discharging cycles to reduce battery life.
\subsection{Other Use Cases of Wireless Charging Side-Channel}
Previous studies~\cite{watts,lsid} discussed how traditional power side-channels may be used to detect malicious software on embedded devices.
In a similar fashion, the wireless charging interface may also be leveraged as a way to check the integrity of small mobile or embedded devices without physical connectors, such as a smartwatch.
For such application scenarios, we will need further studies to see if the resolution and the accuracy of the power monitoring through wireless charging is sufficient to detect software changes or malicious activities on an embedded device.
\section{Website Fingerprinting Attack}
In this section, we explain our website fingerprinting attack and present the attack overview, data collection process, and classifier architecture.
\subsection{Attack Overview}
The attacker seeks to utilize the collected power data to identify the webpages being loaded in a mobile browsing application by a victim as they charge their phone. As established by the mobile power side-channel attacks previously discussed, loading a website on a smartphone can affect its power consumption patterns. When the phone battery is near full charge, the power delivered to the wireless charging transmitter is directly proportional to the fluctuation in activity on the phone and will be recorded by the malicious public wireless charging station.
A set of training data can be collected by the repeated loading of websites onto a charging device in this manner. This data can then be preprocessed and input to a website fingerprinting classifier for training and validation. After the model has been successfully trained, it can be used to classify new power data collected from victims by the malicious charging station. This victim data will then be similarly preprocessed to form the testing data, which, if classified correctly, will reveal an individual's private browsing activity. This attack is performed on untampered wireless charging transmitters, but it is possible that a malicious transmitter that is designed for power side-channel attacks could provide more accurate traces.
\subsection{Current Trace Collection}
In the case of the iPhone 11, the Safari browser on the phone is connected to the Safari development tool, Web Inspector, on a Mac computer. The computer then runs a script that sequentially loads a set of websites on the iPhone 50 times. This is performed twice (once for wireless charging, once for wired charging) at each battery level examined. Trace collection on the Pixel 4 followed a similar process except that the Chrome browser and Chrome Developer Tools were used to initiate webpage loading. The current trace corresponding to the first 10 seconds of loading a website is recorded and between loading each site, the script waits 4 seconds. This script also automatically initializes the data collection in order to ensure that all power traces are synchronous and aligned. The top 20 websites from the Alexa Top Sites in the United States list~\cite{alexa} were examined in this attack.
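The timing of this collection loop can be sketched as follows (the exact ordering of sites and trials in the authors' script is our assumption; only the 10-second recording and 4-second gap come from the text):

```python
def collection_schedule(sites, repetitions=50, load_s=10, gap_s=4):
    """Yield (site, trial, start_time_s) tuples for an automated
    trace-collection run: each page load is recorded for `load_s`
    seconds and followed by a `gap_s` second pause."""
    t = 0
    for trial in range(repetitions):
        for site in sites:
            yield site, trial, t
            t += load_s + gap_s
```

For 20 sites and 50 repetitions, this schedules 1,000 aligned traces per charging configuration.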
For nearly all configurations, testing traces were collected with the intent to mirror normal device operation. This included setting the phone's brightness and volume at a constant level (although no websites visited automatically played audio), and enabling Bluetooth and cellular data. The exception to this is in Section 5.1, where test traces were collected with volume, Bluetooth, and cellular data disabled. For all traces, notifications on the devices were disabled in order to prevent calls from interrupting the data collection script. The Pixel 4 did not have a SIM card inserted, so it did not have cellular data enabled.
\subsection{Classification Algorithm}
\begin{figure*}[!t]
\centering
\includegraphics{figs/plotmodel_revision.pdf}
\caption{1D CNN model where the duration of the windowed trace is 1 second. A layer labeled with (3, 233, 1) means that the layer can accept variable batch sizes of 1 second traces that have been segmented into three temporal slices of equal length to be processed separately by the convolutional layers and then interpreted together by the LSTM layer.}\label{fig:cnn-model}
\end{figure*}
For feature extraction, each current trace was broken into segments that represented 1 second of the original trace, with 97.5\% overlap. These segments were acquired by applying a sliding window algorithm to the overall current trace. This feature duration was chosen because many of the identifiable features that distinguished each trace were less than a second long. Training on many small segments rather than entire traces helped to increase the amount of training data available and reduced overfitting by making our model more shift-invariant. Each trace in the test set is broken into segments as was the training data, and each segment's classification is cast as a vote for classifying the overall test trace. The final trace label was assigned using a majority voting scheme. A 64/16/20 training/validation/testing split was used, which resulted in 200 unlabeled test current traces for each experiment conducted.
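The segmentation and voting steps above can be sketched as follows (window length and overlap per the text; the rounding of the step size is our assumption):

```python
from collections import Counter

def sliding_windows(trace, window, overlap=0.975):
    """Split a trace into fixed-length segments with the given overlap."""
    step = max(1, int(round(window * (1 - overlap))))
    return [trace[i:i + window]
            for i in range(0, len(trace) - window + 1, step)]

def majority_vote(segment_labels):
    """Label a whole test trace by majority vote over its segments' labels."""
    return Counter(segment_labels).most_common(1)[0][0]
```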
Deep neural networks act as both feature extractors and classifiers, which can make attacks more successful than traditional techniques. Additionally, convolutional neural networks (CNNs)~\cite{CNN,sadouk,cnn2} incorporate translation invariance, which allows them to recognize features even if they are translated to different time positions. Although our current traces were collected automatically, page loading is sometimes randomly delayed due to website traffic or other causes.
A 1D CNN, the architecture of which is pictured in Figure~\ref{fig:cnn-model}, is trained as a classifier on these segments and was implemented in the Keras~\cite{keras} software package. Our architecture is a modified version of a 1-D CNN that was used for human activity recognition~\cite{mlm}. This model was chosen as a base because it was designed for multi-output classification, had a foundational architecture that was easy to build on, and proved resilient to overfitting.
The topology of our CNN is three convolutional layers followed by a long short-term memory (LSTM) layer~\cite{lstm}, a fully connected layer, and a Softmax layer with 20 outputs, one for each website. Every convolutional layer used ReLU activation~\cite{relu}, had a convolutional window of size 5, and was followed by a max-pooling layer with a window of size 2 and a stride of 2. Each window was split into three equal length temporal slices in order to allow the LSTM layer to update its weights based on the chronological relationship it learned between the features from each slice. The CNN layers were wrapped in a TimeDistributed layer which is a layer that applies the same input operation across all time slices constructed from each window. There are 128 filters in the first convolutional layer, 192 in the second, and 300 in the third. The network also uses a dropout layer with a frequency of 50\% in order to further reduce overfitting by randomly dropping nodes and regularizing the network.
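The feature-map sizes implied by this topology can be checked with a short calculation (a sketch assuming `valid` convolution padding, Keras' default, which the text does not state explicitly):

```python
def feature_map_lengths(input_len, n_conv=3, kernel=5, pool=2):
    """Propagate the temporal length through each conv + max-pool pair."""
    lengths, L = [], input_len
    for _ in range(n_conv):
        L -= kernel - 1   # 1-D convolution, 'valid' padding
        L //= pool        # max pooling, window 2, stride 2
        lengths.append(L)
    return lengths

# A 1-second window is split into 3 slices of 233 samples each,
# matching the (3, 233, 1) input shape in the model figure:
print(feature_map_lengths(233))   # → [114, 55, 25]
```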
The LSTM layer was chosen for this classification problem because it is a recurrent neural network layer that is able to learn the order dependence within data. Given that the segments the network examines are 250-350 time steps in length, the ability of the classifier to learn order dependence would allow it to identify the presence of multiple features within a single segment. The data we collected was a one-dimensional time series and was a natural fit to this problem because while loading a website, many events such as executing JavaScript and loading images will always be executed by the phone in the same order. In this way, the LSTM layer complements the convolutional layers in our architecture: the convolutional layers extract features and the LSTM layer learns their order dependence.
The CNN outperformed all other classifiers we explored when evaluated on our collected data. The second best performance we obtained was with a Random Forest~\cite{randomforest} classifier that was trained with the frequency domain representation of the current traces. Although we were able to get reasonably high accuracy with this classifier, it did not perform as well on test traces that were translated when compared to training traces, and was not able to generalize to different charging conditions. In contrast, our CNN performed well on all scenarios in which current traces were collected and did not require any feature engineering aside from the application of the sliding window algorithm. Our model also successfully identified traces that were time-shifted with respect to the training data. In this way, our attack is proven to be able to conform to traces collected from multiple devices and charging methods with the same feature extraction method. This is critical because our threat model is intended to apply to a variety of phone models, operating systems, and chargers.
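For reference, the frequency-domain representation used for the Random Forest baseline can be obtained with a discrete Fourier transform; the following is a naive O(n^2) sketch (a real pipeline would use an FFT):

```python
import cmath

def dft_magnitudes(trace):
    """Magnitude spectrum of a current trace via a naive DFT; the
    non-redundant half can serve as frequency-domain features."""
    n = len(trace)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(trace)))
            for k in range(n // 2)]
```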
\section{Introduction}
\label{sec:intro}
Smartphone usage and charging have become increasingly prevalent. According to a Pew Research Center survey, 81\% of American adults report owning a smartphone~\cite{pew}. Moreover, a market research poll conducted by Veloxity, a phone charging station company, found that on average, respondents charged their phones from 1.6 to 2.7 times per day~\cite{vel}. While wired chargers are currently more common, the market share of wireless charging solutions has been consistently expanding and wireless chargers are supplanting wired chargers. A BIS research report claims the global wireless charging market will be worth over \$20.97B in 2023, and the CEO of BIS Research has claimed that there will be more wireless chargers than cables by that point~\cite{bis}.
In this paper, we show that today's wireless charging interfaces are vulnerable to a power side-channel attack
that can leak private information from a charging device to a wireless charger (charging transmitter).
In particular, we demonstrate the attack on the Qi standard~\cite{qi}, which is currently the dominant standard for wireless charging.
The side-channel attack through wireless charging represents an important threat because it does not require a physical connection to a victim device, and can occur without user permission or any sophisticated equipment.
While a similar power side-channel attack has been previously demonstrated through the traditional wired charging interface, wireless charging has been considered noisy and therefore secure against practical side-channel attacks.
This paper is the first to investigate power side-channel attacks through wireless charging and demonstrate that practical attacks are feasible.
As a concrete example, we study a website fingerprinting attack through the wireless charging power side-channel, and perform detailed experimental studies on an Apple iPhone 11 and a Google Pixel 4.
The phones are placed on a wireless charging transmitter and load a webpage from a set of 20 candidates. While the webpage is loaded on the phone, the amount of current being drawn by the wireless charging transmitter is recorded as a current trace.
We find that after collecting enough current traces, it is possible to train a classifier to correctly identify the webpage that was loaded at the time an unlabeled current trace was recorded. We were able to achieve an accuracy of at least 80\% on average with current traces as short as 2.5 seconds on the Qi Baseline Power Profile.
Our study also shows that this power side-channel attack can be performed without expensive or bulky measurement equipment such as a high-performance oscilloscope, which
makes concealing a power monitoring circuit at a malicious wireless charger quite plausible.
In our experimental setup, we used a small microcontroller to measure the current at a charger.
We believe that the attack circuits can be even more tightly integrated with an existing microcontroller on a malicious charger.
When a smartphone owner uses a public wireless charging station, they will generally not have access to the circuitry of the wireless charging transmitter and will not be able to identify a malicious charger.
Furthermore, public charging stations are becoming ubiquitous and are increasingly supporting Qi-enabled wireless charging. There are currently over 190 smart devices that natively support the Qi standard, and older phones can implement the standard by connecting to a Qi compatible wireless receiver via an accessory or case for as little as \$10.
Given the prevalence of wireless charging and the ease of an attack, we believe that the side-channel attack through wireless charging represents a significant security risk.
\begin{figure*}[!t]
\centering
\includegraphics{figs/fig1redo.pdf}
\caption{A diagram of the transmitter and receiver hardware for the Qi standard.}
\label{fig:qi-hardware}
\end{figure*}
In addition to demonstrating that today's wireless charging interface is vulnerable to practical power side-channel attacks, this paper also presents the results from a set of in-depth experimental studies in order to better understand the capabilities and limitations of the wireless charging side-channel.
For example, we compare wireless charging and traditional wired charging in the context of power side-channel attacks.
The experimental results show that the wireless charging side-channel, while noisy, is comparable to the wired side-channel, leading to similar or better website prediction accuracy depending upon the device attacked. We also observed the effects of other variables including the length of time between the collection of training and testing traces, and the amount of power delivered by the charger.
Our study also found that the information leakage through these side-channels in today's battery-powered devices depends heavily on the state of charge of the battery.
When the battery level is high, the power consumption of the victim device is almost directly reflected on the power draw from the charger, revealing the activities on the device.
On the other hand, when the battery level is low, most of the power from a charger is used to charge the battery.
Consequently, we found that devices are far more vulnerable to wireless charging side-channel attacks when the battery level is above 80\%.
Unfortunately, given their convenience, users often leave devices on wireless chargers when fully charged.
The chairman of the Wireless Power Consortium (WPC) stated that the WPC was unaware of negative consequences of prolonged wireless charging and suggested that topping off a phone battery will increase its life span~\cite{nyt}.
For user privacy, our study suggests that future devices may want to adjust their charging algorithm and avoid fully charging a battery through an untrusted wireless charger.
The following summarizes the main technical contributions of this paper.
\begin{itemize}
\item This paper represents the first demonstration of the existence of a wireless charging power side-channel on today's smartphones. Even with noise, the wireless charging power side-channel leaks enough information to allow accurate website fingerprinting on a charging smartphone.
\item This paper experimentally compares the wireless and wired charging side-channels, and shows that they leak the same power consumption information. Additionally, traces from the wireless and wired charging can be used to classify each other.
\item This paper shows that the amount of information leaked through these side-channels depends significantly on battery level. The exact amount of information leakage at different battery levels depends on the model of the charging device.
\end{itemize}
The rest of this paper is organized as follows: Section 2 discusses background information related to wireless charging, the Qi standard, and the concept of power side-channel attacks. Section 3 introduces our threat model and presents high-level observations about the wireless charging power side-channel along with our experimental setup. Section 4 provides an overview of our website fingerprinting attack and presents our classification algorithm. Section 5 details the experimental results of the attack and the impact of a number of variables. Section 6 discusses possible countermeasures, limitations, and future research directions. Related work is discussed in Section 7 and Section 8 concludes the paper.
\section{Power Side-Channels in Wireless Charging}
\label{sec:overview}
This section introduces the concept of wireless charging power side-channel attacks and discusses their capabilities and limitations at a high-level.
The following section provides more in-depth study using website fingerprinting as a concrete example attack.
\subsection{Threat Model}
Figure~\ref{fig:threatmodel} shows the threat model that is assumed for the wireless charging side-channel attack. Under this threat model, an attacker is able to monitor and record the amount of power being delivered to an untampered Qi wireless transmitter from a malicious public wireless charging station. The target device performs activities that depend on sensitive events or data values, which influence its power consumption. The goal of the attacker is to infer these events or data values on the target device by analyzing the recorded power traces. While we assume the public charging station is compromised, it need not be malicious because the classification and inference can occur remotely.
\begin{figure}[!h]
\centering
\includegraphics{figs/threat_model.pdf}
\caption{Threat model demonstrating a power side-channel attack by a compromised public charging station.}
\label{fig:threatmodel}
\end{figure}
Wireless charging does not require any user permissions or initiation and will begin immediately if both the mobile device and the transmitter follow the Qi standard and are in range (4 cm). There is no need for the device to be plugged in to the charging station. The target device is not assumed to have any malicious software and this threat model does not depend on any particular software vulnerability. Additionally, this type of attack does not require any physical tampering of the target device or battery.
\subsection{Experimental Setup}
The high-level idea of the wireless power side-channel attack is similar to that of the traditional wired charger power side-channel attack.
However, given that wireless charging interfaces do not have physical wire connections and are likely to be more susceptible to noise, it has been hypothesized~\cite{daily_swig} that power side-channel attacks will not be practical through wireless charging.
In that sense, the main technical contributions of this paper lie in experimental studies that demonstrate that wireless power side-channel attacks are feasible in today's mobile phones and their capabilities are comparable to those of wired power side-channel attacks.
Here we briefly describe the experimental setup that we used. The experiments are designed to understand the capabilities and limitations of the wireless power side-channels:
\begin{itemize}
\item Does the wireless power side-channel leak enough information to infer activities on a mobile device even with noise in the wireless interface? Are the measurements repeatable?
\item How is the wireless power side-channel impacted by the battery level?
\item How does the wireless power side-channel compare to the wired power side-channel in terms of leakage?
\end{itemize}
\textbf{Current Trace Collection Circuit.} The DC current delivered to either a 5 W Adafruit Qi Wireless Charging Transmitter or a 10 W Max Anker Wireless Charging Pad from a USB AC adapter was sampled by placing an INA219 High Side DC Current Sensor in series with the V\textsubscript{CC} wire of the Micro-USB cable that charged the transmitters. This is depicted in Figure~\ref{fig:setup}.
\begin{figure}[!h]
\centering
\subfigure[Overview of current trace monitoring.]{\includegraphics[scale=.95]{figs/setup_revised.pdf}}
\hfill
\subfigure[Photo of setup with the Adafruit 5 W transmitter.]{\includegraphics[scale=.95]{figs/photo_setup_revised.pdf}}
\caption{Current trace collection.}
\label{fig:setup}
\end{figure}
In order to collect wired current traces, the current sensor was instead placed in series with a spliced USB-A to Lightning or USB-A to USB-C cable. An Arduino Micro was then programmed to sample the current sensor at a frequency of 700 Hz (500 Hz in Sections 5.4 and 5.6). The cost of the entire current trace collection circuit used in this work is less than \$30.
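On the host side, the sampled data might be ingested as follows (a hypothetical sketch: we assume the microcontroller streams one `timestamp_ms,current_mA` text line per sample, a wire format the setup description does not specify):

```python
def parse_sample_line(line):
    """Parse one hypothetical 'timestamp_ms,current_mA' line from the sampler."""
    ts, ma = line.strip().split(",")
    return int(ts), float(ma)

def parse_trace(lines):
    """Collect streamed lines into (timestamp_ms, current_mA) samples."""
    return [parse_sample_line(l) for l in lines if l.strip()]
```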
\textbf{Example Current Traces.} Figure~\ref{fig:wireless_sites} demonstrates that like the USB charging side-channel, the wireless charging side-channel also leaks enough information to distinguish different websites. Additionally, we find that the collected current traces are repeatable across different trials indicating that the activity visible in the traces is a direct result of loading a particular website. In all cases, the websites take a variable amount of time to load and once fully loaded, the current drawn by the charging transmitter returns to a steady level.
\begin{figure}[!h]
\centering
\includegraphics{figs/wreddit2.pdf}
\includegraphics{figs/woffice2.pdf}
\includegraphics{figs/wtwitch2.pdf}
\caption{Current traces demonstrating the activity leaked when automatically loading webpages on an iPhone 11.}
\label{fig:wireless_sites}
\end{figure}
\textbf{Phone Configuration.} The attack is performed on an Apple iPhone 11 (2019) running iOS 14 and a Google Pixel 4 (2019) running Android 11, which are both capable of wireless charging with Qi-certified chargers at powers up to 7.5 W and 11 W, respectively.
When the iPhone 11 traces were collected without noise, an outline for the phone was placed around the coil so that it could be positioned consistently above the transmitter across every trace. Otherwise, both phones were placed at various orientations while remaining centered enough to properly charge.
The phones used Wi-Fi to load the webpages. Several measures were taken to reduce the impact of the cache on reloading previously visited webpages including closing all other tabs and enabling private browsing. On the Safari browser with the iPhone 11, the options to reload the page from the origin, empty the cache, and ignore the resource cache were also enabled.
\subsection{Impact of Battery Level}
Figure~\ref{fig:long} shows how the wireless charger's current draw varies as the charging phone's battery level increases.
The red line represents the battery state of charge and the blue line shows the current draw.
The results indicate that the charging profiles of a wireless charger mirror those of a wired charger~\cite{invio}.
At a low charge level, the current draw is relatively fixed except for a high-frequency component coming from the wireless interface. Then, the power draw gradually decreases as the battery state of charge increases.
\begin{figure}[!h]
\centering
\includegraphics{figs/long322.pdf}
\caption{Current delivered by a 5 W Qi charger and battery state of charge vs. charging time for an iPhone 11. The constant current and constant voltage charging stages are identified.}\label{fig:long}
\centering
\includegraphics{figs/Experiment_1_Results.pdf}
\caption{The average current consumption vs iPhone 11 state of charge for five different activities.}\label{fig:average-current}
\end{figure}
Figure~\ref{fig:average-current} shows how the average current consumption of a wirelessly charging phone varies as it executes different processes. The experiment was carried out at 8 different battery levels.
While the results demonstrate that different processes do consume different amounts of power on average while wirelessly charging, a clear differentiation between activities only occurred while the phone's battery level was high. At battery states of charge less than or equal to 95\%, the activities were generally indistinguishable by the metric of average current consumption. The reason for this is that when the phone's battery is fully charged, the amount of power delivered by the wireless transmitter is solely determined by the energy the phone is currently using as it cannot deliver more charge to a battery that is already at maximum capacity. If the battery is not fully charged, a part of the power delivered from the charger will also be used to charge the battery, regardless of the app running on the phone.
Even if the average power consumption does not leak enough information to distinguish different activities at a lower battery level, a trace of dynamic power consumption over time can reveal far more information.
For example, in Section 5, we show that with the current traces over time, it is possible to distinguish different activities at battery levels lower than 95\%. For all experiments in our evaluation section, except for Section 5.7 where specific battery states of charge were examined, current traces were collected automatically beginning when the device's battery was full. Over the period during which traces were collected, the device's state of charge dropped to 90\%. In general, we found that battery-powered mobile devices are more susceptible to power side-channel attacks when the battery state of charge is high. The exact amount of information leaked depends on the charging algorithms used by a victim device. Our experiments in Section 5 suggest that even with time series data, the iPhone 11 leaks little information when the battery charge level is below 80\%.
\section{Related Work}
Power analysis attacks are a well established field of research and a variety have been studied in mobile devices. Spreitzer et al.~\cite{spreitzer} presented a thorough categorization system and survey of existing side-channel attacks, especially those applicable to mobile devices. Clark et al.~\cite{clark-identify} found that a computer plugged into a wall was susceptible to an SPA attack and used AC power traces in order to carry out a website fingerprinting attack. While we build upon the existing body of power side-channel and website fingerprinting attacks to demonstrate a vulnerability, our work is the first to identify a wireless charging side-channel which utilizes completely different circuitry than that of wired charging.
Yang et al.~\cite{yang-usb} determined that even when none of a smartphone's data pins are connected, a USB power station can still identify specific activity occurring on the phone. Cronin et al.~\cite{chargesurfing} demonstrated that USB power traces from smartphones leak information about the contents of a device's touch screen. While we also examine this charging power side-channel in our attack, our work differs in several respects. We find that the wireless side-channel is just as, if not more, susceptible to a website fingerprinting attack compared to the traditional wired side-channel. We also sample at 700 Hz rather than 250 kHz, allowing our attack to be performed with less sophisticated hardware and to be more difficult to detect. Additionally, our classifier can effectively classify current traces from different device and charger models without any preprocessing.
The unique combination of hardware and sensor functionality on mobile devices leaves them susceptible to some unique side-channel attacks. Yan et al.~\cite{yan-approach} established a general exploitation approach for a variety of power side-channel attacks on an Android smartphone. While our attack is based on this model, we also demonstrate it on an Apple iPhone and do not require a wired connection, only the physical proximity required to wirelessly charge.
Matyunin et al.~\cite{magnetometer} successfully identify the application running on a phone by studying how CPU operations affect magnetometer measurements. Yang et al.~\cite{yang-fingerprint} showed that the transition between running apps leaves a side-channel in memory that can be used to determine what application was executing. Lifshits et al.~\cite{lifshits} installed a malicious, power monitoring battery in a smartphone in order to identify various types of activity. Qin et al.~\cite{qinetal} also adopt a similar approach to smartphone website fingerprinting by using a malicious application which estimates the fluctuation of power data. The power estimation model employs CPU data that can be accessed without permission in Android 7. In contrast to these works, our work does not require a malicious app or an otherwise compromised phone, because the act of wireless charging itself is vulnerable regardless of permissions set by the operating system.
Another method of website classification besides power side-channels is through traffic and hardware analysis. In contrast to these works, our attack can occur without any software permissions at all. Hintz~\cite{hintz}, Hayes and Danezis~\cite{hayes}, and Lu et al.~\cite{lu} measured the amount of encrypted data being transferred and other metadata to identify webpages even in the face of website fingerprinting defenses. Based on this work, Al-Shehari and Zhioua~\cite{al} proposed a unified traffic analysis attack model for traffic analysis attacks on computers. Our work also examines the Alexa top sites list~\cite{alexa}, but differs in that the side-channel exists locally on the phone's hardware, and is not a result of internet traffic characteristics. Our work demonstrates a novel attack that contributes to the body of website fingerprinting.
\section{Evaluation}
In this section, we present the detailed experimental results on the website fingerprinting attack
through wireless charging and discuss our findings. Rank 1 and Rank 2 identification accuracy of the classifier in different scenarios were calculated. Rank 1 counts a classification as correct if the majority vote picks the correct website for the trace. Rank 2 accuracy counts a classification as correct if either the website with the most or second-most votes is correct. The baseline accuracy of a random guess classifier for the 20 websites is 5\% for Rank 1 and 10\% for Rank 2.
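As a minimal sketch (not the paper's code), the Rank 1 and Rank 2 metrics can be computed from per-trace classifier votes as follows; the vote arrays and true labels below are hypothetical stand-ins, not data from our experiments:

```python
import numpy as np

# Minimal sketch of the Rank 1 / Rank 2 metrics: each trace yields a set of
# per-segment classifier votes (website indices), and the trace is labeled by
# majority vote. Vote arrays and true labels are hypothetical stand-ins.

def rank_accuracies(votes_per_trace, true_labels):
    """Return (rank1, rank2) accuracy over a set of traces."""
    r1 = r2 = 0
    for votes, truth in zip(votes_per_trace, true_labels):
        counts = np.bincount(votes)
        order = np.argsort(counts)[::-1]      # labels by descending vote count
        r1 += order[0] == truth               # top-voted label is correct
        r2 += truth in order[:2]              # correct label in the top two
    n = len(true_labels)
    return r1 / n, r2 / n

votes = [np.array([4, 4, 7]), np.array([1, 2, 2]), np.array([9, 3, 9])]
truth = [4, 3, 9]
print(rank_accuracies(votes, truth))          # rank 1 and rank 2 both 2/3 here
```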
We conducted a range of experiments aiming to identify how the classifier accuracy changed with respect to the following variables: (1) device manufacturer; (2) different devices for training and testing; (3) different chargers for training and testing; (4) noise; (5) length of current traces; (6) aging of training traces; (7) battery state of charge. The following subsections detail our findings and contributions with respect to each question.
\subsection{iPhone 11 vs Pixel 4}
In this subsection, we aim to identify how the accuracy of the classifier depends on the device used to collect current traces. The iPhone 11 and Google Pixel 4 were both used to collect current traces under the same conditions from both a 5 W wired charger and a 10 W wireless charger. Results from these experiments are reported in Table~\ref{table:r1} (iPhone) and Table~\ref{table:r2} (Pixel). All test traces in this section and in Section 5.2 included noise in the form of normal device operating conditions: the phones' Bluetooth, cellular data, volume, and notifications were left on, and the phones were placed at a variety of alignments with the transmitting coil.
The classifier achieved a Rank 1 accuracy of at least 82.0\% and a Rank 2 accuracy of at least 87.0\% when classifying wireless traces from the iPhone 11 with trace durations ranging from 2.5 to 6 seconds. Pixel 4 wireless traces were classified with more accuracy, especially at longer trace lengths. It achieved a Rank 1 accuracy of at least 85.5\% and a Rank 2 accuracy of at least 91.5\% with trace durations ranging from 2.5 to 6 seconds. The high identification accuracy of the classifier in these scenarios indicates that the small changes in processor activity that occur when loading various websites are detectable through this wireless side-channel in both devices examined.
\begin{table}[!h]
\centering
\begin{tabular}{c|ccccc}
Current Trace Type&10 s&6 s&5 s&4 s&2.5 s\\
\hline
Noiseless Wireless Rank 1& 94.0 & 94.5 &94.0& 87.5&80.5\\
Noisy Wireless Rank 1& N/A & 87.0&87.5& 87.5&82.0\\
Noiseless Wired Rank 1& 97.0 & 96.0 & 96.5&96.0&88.5\\
Noiseless Wireless Rank 2& 96.0 & 96.5 &97.5& 94.0&88.0\\
Noisy Wireless Rank 2& N/A & 94.0&94.0& 89.5&87.0\\
Noiseless Wired Rank 2& 99.0 & 97.5 & 98.0 &97.0&93.5\\
\end{tabular}
\caption{Rank 1 and Rank 2 accuracy (\%) for 1D CNN model when classifying 20 websites with a fully charged iPhone 11.}
\label{table:r1}
\begin{tabular}{c|cccc}
Current Trace Type&6 s&5 s&4 s&2.5 s\\
\hline
Wireless Rank 1& 95.0 &94.0& 95.5&85.5\\
Wired Rank 1& 74.0& 75.0&70.5&63.0\\
Wireless Rank 2& 97.5 &98.0& 96.5&91.5\\
Wired Rank 2& 83.0& 85.5&82.5&79.0\\
\end{tabular}
\caption{Rank 1 and Rank 2 accuracy (\%) for 1D CNN model when classifying 20 websites with a fully charged Pixel 4. All traces were collected under normal operation conditions.}
\label{table:r2}
\end{table}
\subsection{Training and Testing on Different Devices}
In order to see whether or not a cross-device attack is possible in this threat model, we trained the classifier exclusively on current traces from the iPhone and tested on traces from the Pixel, and vice versa. When all collected 2.5-second current traces for both devices were used, the classifier was unable to identify traces from the unseen device at all. Training on iPhone traces and testing on Pixel traces resulted in a Rank 1 accuracy of 4.2\%, which is worse than a random guess, and a Rank 2 accuracy of 12.1\%, which is only slightly higher than that of a random guess. Training on Pixel traces and testing on iPhone traces was no better. In this scenario, the classifier achieved a Rank 1 accuracy of 5.7\% and a Rank 2 accuracy of 11.6\%.
These results align with previous works that found a drop in classification accuracy resulting from training and testing on different devices. This indicates that the information leaked through this side-channel is related to the charging and processor circuitry inside the device and is not directly transferable. An effective realistic attack would likely need to train on traces from a variety of phones in order to be able to generalize and account for more trace variety.
\subsection{Training and Testing on Traces from Different Chargers}
Current traces from a wired, 5 W charger were also collected with both the Pixel and the iPhone. Unlike wireless traces, wired traces from the iPhone were classified with higher accuracy than those of the Pixel. The minimum Rank 1 and Rank 2 accuracies of the classifier on the wired iPhone traces were 88.5\% and 93.5\%, respectively, whereas they were 63.0\% and 79.0\% on the Pixel.
Across all device and charger combinations, our classifier was able to perform well without any preprocessing or changes to architecture. The accuracies achieved by the classifier when trained and tested on wired and wireless traces are similar, indicating that the information leakage from the wireless charging power side-channel is comparable to that of the wired charging power side-channel. In the case of the Pixel 4, the wireless current traces were identified with higher accuracy than the wired current traces.
Figure~\ref{fig:battery-full} shows the current traces measured using a wired charger while loading zoom.us on an iPhone 11.
Visual comparison suggests that the wired and wireless channels leak the same information when a website is loading;
the shapes of their power traces when the phone is fully charged are similar. The traces differ, however, in that the wireless traces contain a signal with a frequency of approximately 11 Hz and appear noisier in general than the wired traces.
In order to measure how comparable both charging side-channels are, the classifier was trained exclusively on current traces from the wireless charger and tested on traces from the wired charger and vice versa. Using 10 websites and 2.5 second long traces, the classifier identified websites correctly with significant accuracy. The results of this experiment are shown in Figure~\ref{fig:train_wireless}. Training on wired traces and testing on wireless traces produced a Rank 1 accuracy of 60.6\% compared to a baseline of 10\% and a Rank 2 accuracy of 75.0\% compared to a baseline of 20\%. Training on wireless traces and testing on wired traces achieved a Rank 1 accuracy of 49.0\%, and a Rank 2 accuracy 68.4\%. The only website that was identified with over 90\% accuracy in both situations was facebook.com.
The existence of cross-channel leakage across both wired and wireless charging indicates that wirelessly charging devices may be susceptible to existing USB power side-channel attacks that have been trained only on wired power data.
\begin{figure}[!h]
\centering
\subfigure[Training on wireless traces, testing on wired traces.]{\includegraphics{figs/wireless_train_wired_test1.pdf}}
\vfill
\subfigure[Training on wired traces, testing on wireless traces.]{\includegraphics{figs/wired_train_wireless_test1.pdf}}
\caption{Results from training and testing with different chargers. The vertical axis shows the true label and the horizontal axis shows the predicted label. An ideal classifier would have ones down the diagonal.}
\label{fig:train_wireless}
\end{figure}
\subsection{Impact of Noise}
As evidenced by the results discussed in Section 5.1, the attack is quite resilient to noise, and was able to identify the test traces with high accuracy, even though the circumstances of the device varied between training and testing traces. This demonstrates that our attack is feasible in realistic scenarios where the current trace collected while a website is loading may be corrupted or altered by the existence of other executing processes.
In order to measure how well the attack might perform without noise, current traces were collected from the iPhone 11 at a sampling frequency of 500 Hz while volume, Bluetooth, and cellular data were disabled. Additionally, an outline of the phone was placed over the charger so that the alignment and angle of the phone over the transmitting coil were consistent.
The classifier performed slightly better when trained and tested on these noiseless traces as opposed to those collected under normal operation conditions. The full results are reported in Table~\ref{table:r1}. When classifying noiseless wireless traces, the classifier obtained a Rank 1 accuracy of at least 80.5\% and a Rank 2 accuracy of at least 88.5\% with trace durations ranging from 2.5 to 10 seconds. We present the confusion matrix for 5-second traces in Figure~\ref{fullcm}. For comparison, noiseless wired traces collected under the same conditions achieved a Rank 1 accuracy of at least 88.5\% and a Rank 2 accuracy of at least 93.5\%.
\begin{figure}[!h]
\centering
\includegraphics{figs/fullcm.pdf}
\caption{Confusion matrix showing the classification of 200 unlabeled current traces.}
\label{fullcm}
\end{figure}
\subsection{Impact of Trace Duration}
In addition to the full 10 or 6 second traces, the classifier was trained and tested on shorter duration traces. These shorter traces were formed by taking a slice of the first $n$ seconds of data from the original trace. Out of all five trace lengths examined, the best wireless and wired Rank 1 identification accuracies were achieved with 5-second and 6-second traces, respectively. While the classifier performed the worst on 2.5-second traces, the overall identification accuracy was still quite high and close to the best Rank 1 accuracies out of all trace durations. The shorter traces removed noise present in the full 10-second traces because the websites examined take approximately 4 seconds to load~\cite{fortune}. However, most websites take over 2.5 seconds to load, so traces of this duration cut off part of the signal from the website loading and therefore deteriorated identification accuracy.
Furthermore, websites that autoplayed videos had consistent leakage in their traces even after they initially loaded.
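The slicing step above can be sketched as follows; the random trace is a stand-in, and the 500 Hz sampling rate is the one used for the noiseless iPhone traces:

```python
import numpy as np

# Minimal sketch of forming shorter-duration traces: keep only the first
# n seconds of samples from each full trace. The random trace is a stand-in.

fs = 500                                     # samples per second
full_trace = np.random.randn(10 * fs)        # one 10-second current trace
short_traces = {n: full_trace[: int(n * fs)] for n in (6, 5, 4, 2.5)}
for n, tr in short_traces.items():
    assert tr.shape[0] == int(n * fs)        # e.g. 2.5 s -> 1250 samples
```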
\subsection{Impact of Length of Time Between Trace Collection and Testing}
In this scenario, training and testing traces were collected on the same iPhone 11 except the test traces were collected nine months after the training traces were. Table~\ref{table:r3} summarizes the results of this scenario. Many of the websites we studied had dynamic content, such as news websites. After many months, the media in these websites completely changed which resulted in the current traces altering as well. Although accuracy was significantly lowered in this experiment, the classifier still performed over four times better than a random guess would achieve.
\begin{table}[!h]
\begin{tabular}{c|cccc}
Current Trace Type&6 s&5 s&4 s&2.5 s\\
\hline
New Traces Rank 1& 18.0 & 20.5& 22.5&13.5\\
New Traces Rank 2& 28.5 & 29.5&33.5&19.5\\
\end{tabular}
\caption{Rank 1 and Rank 2 accuracy (\%) for 1D CNN model when classifying with old training data.}
\label{table:r3}
\end{table}
\subsection{Impact of Battery State of Charge}
Below approximately 80\% state of charge, both wired and wireless charging side-channels observed in this experiment do not leak information and the classifier cannot identify the traces with any significant accuracy. Current traces collected at these states of charge can be seen in Figure~\ref{fig:battery-levels}. For the wired channel, information begins to be revealed when the battery state of charge reaches approximately 95\%. The wireless channel could consistently classify traces with a battery state of charge as low as 90\%.
\begin{figure}[!h]
\subfigure[wirelessly charging (top) and wired charging (bottom)]{\includegraphics[scale=.95]{figs/revised_figure911.pdf}\label{fig:battery-full}}
\vfill
\subfigure[76.22\% at 5.10 V]{\includegraphics[scale=.95]{figs/wireless_zoom_75.pdf}}
~
\subfigure[50.21\% at 4.87 V]{\includegraphics[scale=.95]{figs/wireless_zoom_50.pdf}}
\vfill
\subfigure[32.59\% at 4.81 V]{\includegraphics[scale=.95]{figs/wireless_zoom_30.pdf}}
~
\subfigure[21.09\% at 4.58 V]{\includegraphics[scale=.95]{figs/wireless_zoom_20.pdf}}
\vfill
\subfigure[While USB charging, the current traces recorded at the lowest three battery levels are indistinguishable.]{\includegraphics[scale=.95]{figs/wired_zoom2.pdf}\label{fig:battery-levels}}
\vfill
\caption{Comparison of the leakage when loading zoom.us on an iPhone 11. Plots (b)-(e) depict wireless charging.}
\label{fig:btty}
\end{figure}
Figure~\ref{fig:btty} also reveals how the power side-channel through wired charging is affected by the battery level.
The variations from the phone's activities are clearly visible at higher battery levels but not at lower ones.
Previously, Yang et al.~\cite{yang-usb} found that power traces collected at battery levels of 30\% were classified with accuracies almost as high as those collected when the battery was fully charged. One explanation for this discrepancy is that the newer smartphones examined in this study have more battery capacity or actively prevent information leakage. This idea is developed further below.
In order to further investigate how this side-channel is affected by different battery levels, current traces were collected from two older Apple iPhone models, an iPhone 6s and an iPhone 8, and compared on the same scale. We could collect the power traces for both of these phones using the same data acquisition setup that we used with the iPhone 11.
Wired traces collected on an iPhone 6s leaked activity at lower states of charge than the iPhone 11 did. This can be seen in Figure ~\ref{fig:different}. Activity was visible at battery levels as low as 50\%, but began to become obfuscated at states of charge 30\% or lower. The iPhone 8 wired power side-channel revealed activity in the same range of battery state of charge as the iPhone 6s.
While the iPhone 6s does not support wireless charging, the iPhone 8 is Qi compatible. Its wireless current traces do not leak any significant information at states of charge less than or equal to 70\%. It is possible that there was a change in the hardware design between the iPhone 8 and the iPhone 11 that removed the USB power side-channel at battery levels below full charge. However, even on the iPhone 8, little activity was revealed at the 30\% battery level compared to the Android phones studied by \cite{yang-usb}. Additionally, even though the iPhone 11 is not as vulnerable to USB power side-channel attacks as the iPhone 8, both phones appear to be similarly susceptible to the wireless charging side-channel attack at higher states of charge.
\begin{figure*}[!t]
\centering
\subfigure[98.35\% at 5.44 V]{\includegraphics[scale=.9]{figs/6sfull.pdf}}
\subfigure[70.27\% at 4.68 V]{\includegraphics[scale=.9]{figs/6s70.pdf}}
\subfigure[51.04\% at 4.70 V]{\includegraphics[scale=.9]{figs/6s50.pdf}}
\subfigure[31.13\% at 4.69 V]{\includegraphics[scale=.9]{figs/6s30.pdf}}
\subfigure[21.49\% at 6.93 V]{\includegraphics[scale=.9]{figs/6s20.pdf}}
\subfigure[98.95\% at 5.05 V]{\includegraphics[scale=.9]{figs/wired_fulle.pdf}}
\subfigure[70.97\% at 4.72 V]{\includegraphics[scale=.9]{figs/wired_70e.pdf}}
\subfigure[52.36\% at 4.70 V]{\includegraphics[scale=.9]{figs/wired_50e.pdf}}
\subfigure[31.43\% at 4.70 V]{\includegraphics[scale=.9]{figs/wired_30e.pdf}}
\subfigure[19.56\% at 4.66 V]{\includegraphics[scale=.9]{figs/wired_20e.pdf}}
\subfigure[98.95\% at 5.01 V]{\includegraphics[scale=.9]{figs/wireless_full3.pdf}}
\subfigure[70.06\% at 4.91 V]{\includegraphics[scale=.9]{figs/wireless_80e.pdf}}
\subfigure[53.15\% at 4.88 V]{\includegraphics[scale=.9]{figs/wireless_50e.pdf}}
\subfigure[31.30\% at 4.92 V]{\includegraphics[scale=.9]{figs/wireless_30e.pdf}}
\subfigure[19.32\% at 4.66 V]{\includegraphics[scale=.9]{figs/wireless_20e.pdf}}
\caption{Current traces for loading zoom.us on different devices while wirelessly charging and USB charging: wired iPhone 6s (top), wired iPhone 8 (middle), wireless iPhone 8 (bottom).}
\label{fig:different}
\end{figure*}
\section{INTRODUCTION}
The inclusion of internal fermion loops in the vacuum of QCD
is a major
challenge. The present state of the art for generating full QCD
configurations
is the so called Hybrid Monte Carlo algorithm which uses
Molecular Dynamic evolution in a ``fifth time'' coordinate t.
The Hamiltonian
for this evolution is
\begin{equation}
S = \smfrac{1}{2} Tr~P^2 + S_g(U) +
\varphi^\dagger [ M^\dagger M ]^{-1}\varphi,
\end{equation}
where $P_\mu(x)$ are the angular momenta conjugate to
the gauge fields
$U_\mu(x)$, $S_g(U)$ is the pure gauge action,
$M(U)$ is the Dirac matrix and
$\varphi$ is the pseudofermion field. In our discussion the
precise form of
the gauge action is not important. What is relevant is the
need to accurately
integrate the equations of motion, calculating the
force on $U$ due to the
pseudofermions at each time step. This requires solving,
over and over again,
the linear equation,
\begin{equation}
A(t)~ \chi(t) ~=~ \varphi,
\label{eq:linear}
\end{equation}
where $A(t)\equiv M(U)^\dagger M(U)$ and $\chi(t)$
is the solution of the
inverted Dirac operator. Technically this is
achieved by starting with a
trial value $\chi_{trial}$ and iteratively
solving Eq.~\ref{eq:linear} for
$\chi(t)$. On the order of 100 MD steps are
taken holding the pseudofermion
source fixed, so that the operator $A(t)$ changes
smoothly as a function of the MD
time $t$, as new values of $U$ are generated.
These iterations, usually done by
the conjugate gradient (CG) method, are the
most computationally expensive part
of Hybrid Monte Carlo algorithms.
This raises an obvious question. As we move in MD time,
why are we not
able to ``learn'' from the recent past enough
about the space of likely
solutions to vastly improve our iterative scheme?
One should be able to give a very good estimate,
$\chi_{trial}$ before
starting the conjugate gradient routine.
A crucial ingredient in this
approach is the fact that detailed balance
is in principle preserved
independently of the starting trial
configuration so long as one converges
accurately to the solution.
Therefore, we propose to estimate carefully the
starting configuration\cite{blk}.
To accomplish this some information on the
configurations in the previous MD
steps have to be stored. Although these
algorithms will use more memory,
memory is often not a severe constraint in
modern super computer simulations.
\section{ANALYTIC EXTRAPOLATIONS}
To motivate our extrapolation methods,
consider the function $\chi(t)$ as an
analytic function of t. For simplicity
of notation suppose we want the value
at t = 0, given past values at $t_1, t_2, t_3, \cdots$.
In practice this is
usually a regular series of values $t_i = - i \; \epsilon$
with an integration
step of size $dt = \epsilon$.
Then a trial value for the solution,
$\chi(0)$ of the new Dirac matrix $A(0)$,
might be considered as a linear
superposition of old solutions,
\begin{equation}
\chi_{trial} = c_1 \; \chi(t_1) + c_2 \; \chi(t_2)
+ \cdots + c_N \; \chi(t_N).
\end{equation}
If $N \epsilon$ is sufficiently small,
we may Taylor expand each term around $t=0$
and determine the coefficients by
canceling all terms for $\epsilon^k$ to
$O(\epsilon^N)$.
\begin{equation}
\sum_{i=1}^{N} (t_i)^{n-1} \; c_i = \delta_{1,n}
\label{equ:taylor}
\end{equation}
As we will demonstrate, this procedure is equivalent to the familiar
$(N-1)$-order polynomial fit to $N$ points.
\subsection{Polynomial Extrapolation}
Good results can be obtained even with a simple polynomial fit of degree $N-1$.
To estimate the configuration it is necessary
to store in memory the previous
$(N+1)$ configurations.
However, the polynomial extrapolation does not require
significant computational effort:
it is just a local sum on each lattice point
with fixed coefficients,
costing less than a single CG step. The
$\chi_{trial}$ is expressed as a polynomial,
$\chi_{trial}~=~y_1 + t \; y_2 +\cdots + \;t^{N-1}\; y_N$,
whose coefficients satisfy the constraint,
\begin{equation}
\sum_{n=1}^{N} (t_i)^{n-1} \; y_n = \chi(t_i) \; .
\label{equ:poly}
\end{equation}
One can easily prove that Eq.~\ref{equ:taylor} and
Eq.~\ref{equ:poly} define
identical extrapolations
($y_1 = \sum_i c_i \chi(t_i) = \chi_{trial}$). For
equally spaced time steps $t_i = - i \; \epsilon$,
the coefficients are given by
\begin{equation}
c_i~=~ (-1)^{i-1}~ {N! \over i! (N-i)!} \; .
\end{equation}
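As a numerical sketch (not the code used in our simulations), the closed-form coefficients can be generated and checked against Eq.~\ref{equ:taylor}, and then used to extrapolate a stored history to $t=0$:

```python
import numpy as np
from math import comb

# Sketch of the polynomial extrapolation: for equally spaced past times
# t_i = -i*eps, the coefficients c_i = (-1)^(i-1) * binom(N, i) satisfy
# sum_i t_i^(n-1) c_i = delta_{1,n}, and the trial vector is the
# corresponding linear combination of stored past solutions.

def extrapolation_coefficients(N):
    return np.array([(-1) ** (i - 1) * comb(N, i) for i in range(1, N + 1)],
                    dtype=float)

def extrapolate(history):
    """history[i] = chi(t_{i+1}), most recent solution first."""
    c = extrapolation_coefficients(len(history))
    return sum(ci * chi for ci, chi in zip(c, history))

# Verify sum_i t_i^(n-1) c_i = delta_{1,n} for N = 4, eps = 0.01
N, eps = 4, 0.01
t = -eps * np.arange(1, N + 1)
V = np.vander(t, N, increasing=True).T        # row n-1 holds t_i^(n-1)
rhs = np.zeros(N); rhs[0] = 1.0
assert np.allclose(V @ extrapolation_coefficients(N), rhs)

# Extrapolating a smooth history reproduces the t = 0 value to O(eps^N)
history = [np.array([np.exp(ti)]) for ti in t]
assert abs(extrapolate(history)[0] - 1.0) < 1e-7
```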
Table~\ref{tab:tavola1} shows the results
of simulations for polynomial
extrapolation. The number of CG steps needed
to reach convergence is shown
as a function of the degree of extrapolation $N$,
and the MD time step
$\epsilon$. The case $N=1$ corresponds
to starting with the old solution.
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.2pc}
\newlength{\digitwidth} \settowidth{\digitwidth}{\rm 0}
\catcode`?=\active \def?{\kern\digitwidth}
\caption{CG steps using polynomial extrapolation}
\label{tab:tavola1}
\begin{tabular*}{\textwidth}{llllllll}
\hline
N=1 (previous data)
& 0.98 & 0.93 & 0.93 & 0.88 & 0.85 & 0.77 & 0.63 \\
N=2 ($1^{st}$ order extrap.)
& 0.92 & 1.07 & 0.86 & 0.74 & 0.72 & 0.62 & 0.27 \\
N=3 ($2^{nd}$ order extrap.)
& 0.91 & 0.82 & 0.74 & 0.61 & 0.59 & 0.47 & 0.21 \\
N=4 ($3^{rd}$ order extrap.)
& 0.96 & 0.77 & 0.56 & 0.44 & 0.41 & 0.31 & 0.21 \\
N=5 ($4^{th}$ order extrap.)
& 0.86 & 0.77 & 0.54 & 0.45 & 0.42 & 0.35 & 0.26 \\
N=6 ($5^{th}$ order extrap.)
& 0.70 & 0.48 & 0.39 & 0.38 & 0.40 & 0.40 & 0.41 \\
N=7 ($6^{th}$ order extrap.)
& 0.70 & 0.59 & 0.40 & 0.42 & 0.42 & 0.39 & 0.46 \\
\hline
$\delta t=10^{-3}*$
& 15 & 10 & 9 & 8 & 7 & 5 & 2 \\
\hline
\multicolumn{8}{@{}p{160mm}}{ Number of CG steps
to reach the solution (residue $<~10^{-12}$),
normalized to 1 for $\chi=0$.
Full QCD configurations on $16^4$ lattice with
Wilson fermions $k=0.157$ and $\beta=5.6$.
Average on 30 configurations,
statistical errors are of the order of 6\%. }
\end{tabular*}
\end{table*}
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.2pc}
\catcode`?=\active \def?{\kern\digitwidth}
\caption{CG steps using minimum residual extrapolation}
\label{tab:tavola2}
\begin{tabular*}{\textwidth}{llllllll}
\hline
N=1 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
& 0.99 & 0.93 & 0.92 & 0.87 & 0.84 & 0.81 & 0.64 \\
N=2 & 0.87 & 0.72 & 0.71 & 0.69 & 0.63 & 0.49 & 0.25 \\
N=3 & 0.85 & 0.64 & 0.57 & 0.52 & 0.49 & 0.32 & 0.06 \\
N=4 & 0.72 & 0.49 & 0.41 & 0.34 & 0.28 & 0.16 & 0.10 \\
N=5 & 0.72 & 0.37 & 0.31 & 0.23 & 0.19 & 0.12 & 0.07 \\
N=6 & 0.51 & 0.28 & 0.26 & 0.21 & 0.16 & 0.13 & 0.06 \\
N=7 & 0.49 & 0.25 & 0.24 & 0.21 & 0.19 & 0.12 & 0.06 \\
N=8 & 0.40 & 0.26 & 0.23 & 0.18 & 0.15 & 0.09 & 0.06 \\
N=9 & 0.38 & 0.23 & 0.19 & 0.16 & 0.13 & 0.09 & 0.06 \\
N=10 & 0.39 & 0.22 & 0.19 & 0.14 & 0.12 & 0.10 & 0.04 \\
N=11 & 0.38 & 0.18 & 0.17 & 0.14 & 0.11 & 0.08 & 0.04 \\
\hline
$\delta t=10^{-3}*$ & 15 & 10 & 9 & 8 & 7 & 5 & 2 \\
\hline
\multicolumn{8}{@{}p{160mm}}{ Number of CG steps
to reach the solution (residue $<~10^{-12}$),
normalized to 1 for $\chi=0$.
Same configurations of Table 1.
Statistical errors are of the order of 4\%.}
\end{tabular*}
\end{table*}
\subsection{ Minimum Residual Extrapolation}
An alternate, perhaps more appealing, approach
is to consider the
past history of solutions,
$\chi(t_i)$ as defining an important
linear subspace for seeking an optimal trial solution.
Since the Conjugate
Gradient method is in fact just a minimal
residual technique confined
to the Krylov subspace spanned by
vectors $A^{j-1} \chi_{trial}$, why not
start by looking at a ``smarter''
subspace based on past success for
nearby times?
In this spirit, we suggest minimizing
the norm of the residual,
\begin{equation}
r^\dagger r = \chi^\dagger M^\dagger M \chi -
\varphi^\dagger \chi - \chi^\dagger \varphi
+ b^\dagger b \; ,
\end{equation}
in the subspace spanned by $\chi_i \equiv \chi(t_i)$,
where $r = b - M \chi$ and $b \equiv (M^\dagger)^{-1} \varphi$.
The minimization
condition reduces to
\begin{equation}
\sum_{j=1}^{N}~{ \chi_i }^\dagger M^\dagger M \chi_j \; c_j =
{ \chi_i }^{\dagger} \varphi \; .
\end{equation}
The only problem is that this system can be
ill-conditioned because the
past solutions $ \chi_i $ differ from each
other by order $ \epsilon $.
Nevertheless, if we only want to get
the minimum of $r^\dagger r$
in {\it span}($\chi_i$) using a
Gram--Schmidt orthonormalization, we
can solve the system while avoiding the singularities.
This method requires $(N^2+5N)/2$ dot products
and $N$ $M\chi$ matrix-vector
applications, and the storage of
$N$ past configurations. It is
interesting to note that this
method gives coefficients that for the first
few orders closely reproduce the polynomial
extrapolation.
Table~\ref{tab:tavola2} shows the number of CG steps using this method.
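A numerical sketch (not the code used in our simulations) of the minimum residual extrapolation: the trial vector in the span of the past solutions is obtained by solving the small $N \times N$ system displayed above. A least-squares solve stands in for the Gram--Schmidt step; $M$, $\varphi$ and the past solutions are random stand-ins.

```python
import numpy as np

# Sketch of the minimum-residual extrapolation: choose the trial vector in
# span{chi_1, ..., chi_N} by solving the small N x N system
#   sum_j chi_i^dag M^dag M chi_j c_j = chi_i^dag phi.
# M, phi and the past solutions chi_i are random stand-ins (real case,
# so dagger = transpose); lstsq stands in for Gram-Schmidt.

rng = np.random.default_rng(0)
n, N = 50, 4
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in "Dirac" matrix
A = M.T @ M                                         # A = M^dag M
phi = rng.standard_normal(n)
chi_exact = np.linalg.solve(A, phi)
# "past" solutions: the exact solution plus small drifts of order epsilon
chis = [chi_exact + 0.01 * rng.standard_normal(n) for _ in range(N)]

X = np.column_stack(chis)                           # columns are chi_i
G = X.T @ A @ X                                     # G_ij = chi_i^dag A chi_j
h = X.T @ phi                                       # h_i  = chi_i^dag phi
c, *_ = np.linalg.lstsq(G, h, rcond=None)           # robust small solve
chi_trial = X @ c

# The trial vector minimizes the quadratic functional over the span, so it
# is at least as good a starting guess as any single past solution.
quad = lambda x: x @ A @ x - 2 * phi @ x
assert quad(chi_trial) <= quad(chis[0]) + 1e-9
```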
\section{CONCLUSIONS}
To compare efficiencies, the CG
iterations should be divided by $\epsilon$, so we
compare the total number of CG
steps needed to evolve the system for a fixed
distance in configuration space.
Note that if $\epsilon$ is too large, the
overall performance is good, but
the acceptance will drop drastically. If
$\epsilon$ is too small, the extrapolation
is excellent, but the system will
evolve too slowly in phase space.
It is not trivial that we find a window
in $\epsilon$ where both the acceptance
is good and the extrapolation works
well. Moreover, this window has parameters
close to those used in actual
simulations.
Our present approach is clearly not the
only one worth considering. In fact we
have emphasized the analytic properties
of $\chi(t)$ because it suggests ways
to understand and further improve our approach.
For example the failure at
fixed $\epsilon$ to improve the polynomial
extrapolation by increasing
indefinitely the number of terms is probably
a signal of nearby singularities
in t. Our success so far is probably due
to low eigenvalues of the Dirac
operator changing more slowly with time.
Many other ways of using the past to
avoid needless repetition can be
imagined\cite{sant}. For example, the
CG routine itself generates vectors that
may be more useful than solutions from the more distant past.
A Kalman filter can be
used to introduce exponentially decreasing
information from the past without
increasing the storage requirement.
The subspace of past vectors might be
used not just as a way to arrive at an initial guess,
but also as a way to
precondition the iterative process itself.
For now, we are pleased
that even a few vectors from the past
can be combined with simple
extrapolation ideas to give a
very useful acceleration method.
\section{Introduction}
The general relativity (GR) theory has given space-time a physical
status which makes it one of the basic ingredients of the universe, the
other being matter/energy; however, the nature of space-time itself is usually
not given much attention. In the so far unsuccessful attempts to
quantize the gravitational field, space-time is in practice conceived as a
field, much as for the other interactions and for matter. Despite the
enormous efforts spent on the front of quantum gravity \cite{loll98,
rovelli08, marteens10}, both in the string theory and in the loop quantum
gravity approaches, and notwithstanding the undoubtable progress and
insights obtained with the mathematical machinery of those theories, the
main questions still resist answers that can be both globally consistent and
unambiguously verifiable.
On the other hand, while quantum gravity tries to solve fundamental problems
at the smallest scales and the highest energies, a problem also exists at
large scales, where classical approaches are in order. Observation \cite{riess98, komatsu10} has forced people to hypothetically introduce in the
universe entities that have scarce or no reference to the matter/energy we
know by experiment at intermediate or small scales. We apparently need dark
matter and dark energy \cite{kamionkowski}, and, especially for the second,
when trying to work out its properties and to build some physical
interpretation of its nature, people are led to results which, to say the
least, are far away from our intuition and experience.
Another approach consists in trying to modify the general theory of
relativity \cite{peebles03,dvali00,sotiriou08}, outside
and beyond the simplicity criteria that, despite the mathematical
complexity, guided its development. Both the dark-something and the modified
GR theories are in a sense ad hoc prescriptions. Preserving an internal
consistency requirement, the theories look for Lagrangians for the universe
apt to yield equations reproducing or mimicking what we observe.
The approach we have already followed in previous works \cite{CQG} consists
in treating space-time as a classical four-dimensional continuum behaving as
three-dimensional material continua do \cite{landau, eshelby}. An
appropriate name for the theory worked out in this way is Strained State
Theory (SST) since the new features it introduces are contained in the
strain tensor expressing the difference between a flat undifferentiated
four-dimensional Euclidean manifold and the actual space-time with its
curvature, originating from matter/energy distributions as well as from
texture defects in the manifold as such. In a sense SST is a theory of the
dark energy where the latter is a vacuum deformation energy present when the
space-time manifold is curved.
Here we shall discuss the behavior of such a strained space-time when some
external cause (be it a mass or a defect) induces a spherical symmetry in
space. In a sense we will treat the analog of the Schwarzschild problem in a
dark energy permeated environment.
As will emerge, the presence of the strain energy appears at the cosmic scale, without appreciably affecting the physics at the scale of the solar system. In any case, the data from the solar system will constrain the values of the parameters of the theory. Since the solution of the problem will be
attained by an approximation method, the asymptotic region, where the effect
of strain would be dominant, will be excluded from our description.
\section{The strained state of space-time}
The essence of the strained state theory is in the idea that space-time is a
four-dimensional manifold endowed with physical properties similar to the
ones we know for deformable three-dimensional material continua. In practice
we may think that our space-time, which we shall call the natural manifold,
is obtained from a flat four-dimensional Euclidean manifold, which will be
our reference manifold. The deformation, i.e. the curvature, of space-time
is due to the presence of matter fields, as in GR, or to the presence of
texture defects in the manifold; however, here we assume that space-time
resists deformation more or less as ordinary material continua do. In
practice, according to this approach, we introduce in the Lagrangian density
of space-time, besides the traditional Einstein-Hilbert term, an "elastic
potential term" built on the strain tensor, in the same way as in
classical elasticity theory. The additional term in a sense accounts for the
presence of a dark energy or even a "curvature fluid" \cite{capo}.
The bases of SST are described in ref. \cite{CQG}; here we review the
essentials.
The complete action integral of the theory is%
\begin{equation}
S=\int \left( R+\frac{1}{2}\left( \lambda \varepsilon ^{2}+2\mu \varepsilon
_{\mu \nu }\varepsilon ^{\mu \nu }\right) +\mathcal{L}_{matter}\right) \sqrt{%
-g}d^{4}x \label{action}
\end{equation}
Of course $R$ is the scalar curvature of the manifold; the parameters $\lambda $ and $\mu $ are the Lam\'{e} coefficients of
space-time; $\varepsilon _{\mu \nu }$ is the strain tensor of the natural
manifold and $\varepsilon =\varepsilon _{\alpha }^{\alpha }$; $\mathcal{L}_{matter}$ is the Lagrangian density of matter/energy. The strain
tensor is obtained by comparison of two corresponding line elements, one in
the natural frame and the other in the reference frame. By definition it is%
\begin{equation}
\varepsilon _{\mu \nu }=\frac{1}{2}\left( g_{\mu \nu }-E_{\mu \nu }\right)
\label{strain}
\end{equation}%
where $g_{\mu \nu }$ is the metric tensor of the natural manifold and $%
E_{\mu \nu }$ is the Euclidean metric tensor of the reference frame.
The action (\ref{action}) has already been used both in ref. \cite{CQG} and
\cite{cosmo} in order to describe the accelerated expansion of the universe,
and has given positive results when tested against four typical cosmological
tests \cite{cosmo}.
\section{Spherical symmetry in space}
Now we focus on a stationary physical system endowed with spherical symmetry
in space. Of course there must be a physical reason for the symmetry to be
there, which means that "something" must exist in the central region of the
space-time we are considering. This can be either a time independent
spherical aggregate of mass/energy or a line defect\footnote{%
Line defect refers to the full four-dimensional space-time and the line will
be time-like, so that in space the defect will appear to be pointlike.}. The
general form of the line element of a space-time with the given symmetry is
well known:%
\begin{equation}
ds^{2}=fd\tau ^{2}-hdr^{2}-r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\phi
^{2}\right) \label{linen}
\end{equation}%
where $f$ and $h$ are functions of $r$ only and Schwarzschild coordinates
have been used.
The corresponding line element in the flat Euclidean reference frame will be:
\begin{equation}
ds_{r}^{2}=d\tau ^{2}+\left( \frac{dw}{dr}\right) ^{2}dr^{2}+w^{2}\left(
d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}\right) \label{linef}
\end{equation}
In principle we have four degrees of freedom (together with the flatness
condition) in the choice of the coordinates on the reference manifold;
however, when we decide to exhibit the same symmetry as the one present in
the natural frame, the gauge functions in practice reduce to one. This is
the meaning of the $w$ function, depending only on $r$, in eq. (\ref{linef}). Fig.~\ref{fig1} pictorially clarifies the role of the gauge function.
\begin{figure*}
\centering
\includegraphics[width=0.6\textwidth]{Gauge.eps}
\caption{When using the coordinates of the natural frame $r$, the radial coordinate of the reference frame is a function $w\left( r\right) $ depending on the actual curvature of the natural frame.}
\label{fig1}
\end{figure*}
By direct inspection of formulae (\ref{linen}) and (\ref{linef}) and using the definition (\ref{strain}) we can easily read out the non-zero elements of the strain tensor for this physical configuration:
\begin{eqnarray}
\varepsilon _{00} & =& \frac{f-1}{2} \\
\varepsilon _{rr} & =& -\frac{h+w^{\prime 2}}{2} \\
\varepsilon _{\theta \theta }& =& -\frac{r^{2}+w^{2}}{2} \\
\varepsilon _{\phi \phi }& =& -\frac{r^{2}+w^{2}}{2}\sin ^{2}\theta%
\end{eqnarray}
From now on, primes will denote derivatives with respect to $r$.
Once we have the strain tensor, we are able to write the contribution to the
Lagrangian density of space-time due to the strain present in the natural
manifold. The needed ingredients are:%
\begin{equation}
\varepsilon =g^{\alpha \beta }\varepsilon _{\alpha \beta }=\frac{f-1}{2f}+%
\frac{h+w^{\prime 2}}{2h}+\frac{r^{2}+w^{2}}{r^{2}} \label{trace}
\end{equation}%
and%
\begin{equation}
\varepsilon _{\alpha \beta }\varepsilon ^{\alpha \beta }=g^{\alpha \mu
}g^{\beta \nu }\varepsilon _{\alpha \beta }\varepsilon _{\mu \nu }=\frac{%
\left( f-1\right) ^{2}}{4f^{2}}+\frac{\left( h+w^{\prime 2}\right) ^{2}}{%
4h^{2}}+\frac{\left( r^{2}+w^{2}\right) ^{2}}{2r^{4}} \label{second}
\end{equation}
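These contractions can be checked numerically; the following sketch evaluates the trace (\ref{trace}) and the quadratic invariant (\ref{second}) directly from the diagonal metrics (the numerical values for $f$, $h$, $w$, $w'$, $r$, $\theta$ used below are arbitrary test inputs):

```python
import numpy as np

def strain_scalars(f, h, w, wp, r, theta):
    """Evaluate eps = g^{ab} eps_{ab} and eps_{ab} eps^{ab} numerically
    for the diagonal natural metric g and Euclidean reference metric E."""
    g = np.diag([f, -h, -r**2, -(r * np.sin(theta))**2])     # natural metric
    E = np.diag([1.0, wp**2, w**2, (w * np.sin(theta))**2])  # Euclidean reference
    eps = (g - E) / 2.0                                      # strain tensor (lower indices)
    ginv = np.linalg.inv(g)
    eps_up = ginv @ eps @ ginv                               # eps with both indices raised
    return np.trace(ginv @ eps), np.sum(eps * eps_up)
```

Both returned values agree with the closed forms in the text, which is a useful sanity check on the signs of the non-zero strain components.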
For completeness we recall that it is%
\begin{equation}
R=-\left( \frac{2}{r^{2}}-\frac{2}{hr^{2}}-\frac{f^{\prime \prime }}{fh}+%
\frac{f^{\prime 2}}{2f^{2}h}+\frac{1}{2fh^{2}}f^{\prime }h^{\prime }-\frac{2%
}{fhr}\allowbreak f^{\prime }+\frac{2h^{\prime }}{h^{2}r}\right)
\label{curvature}
\end{equation}
and%
\begin{equation}
\sqrt{-g}=\sqrt{fh}r^{2}\sin \theta \label{det}
\end{equation}
Going back to eq. (\ref{action}) we are now able to write the full explicit
Lagrangian density of our strained space-time, with the built in
Schwarzschild symmetry. We are interested in empty space-time so in the
region we shall be considering it will be $\mathcal{L}_{matter}=0$.
From the Lagrangian density, applying the usual variational procedure, we can obtain the Euler-Lagrange equations for the $f$, $h$ and $w$ functions. The effective Lagrangian density (modulo a $\sin \theta $) is:
\begin{eqnarray}
\mathfrak{L}& =& -\left( \frac{2}{r^{2}}-\frac{2}{hr^{2}}+\frac{2h^{\prime }}{h^{2}r}\right) \sqrt{fh}r^{2} \nonumber \\
& +& \frac{\lambda }{2}\left( \frac{f-1}{2f}+\frac{h+w^{\prime 2}}{2h}+\frac{%
r^{2}+w^{2}}{r^{2}}\right) ^{2}\sqrt{fh}r^{2} \label{lagrangiana} \\
& +& \mu \left( \frac{\left( f-1\right) ^{2}}{4f^{2}}+\frac{\left( h+w^{\prime
2}\right) ^{2}}{4h^{2}}+\frac{\left( r^{2}+w^{2}\right) ^{2}}{2r^{4}}\right)
\sqrt{fh}r^{2} \nonumber
\end{eqnarray}
The second derivative appearing in Eq.~(\ref{curvature}) has been eliminated by means of an integration by parts.
The $w$ function is treated as $f$ and $h$, which means that we assume it has to satisfy Hamilton's principle just as the others do. The reason for this choice is in that we are representing the correspondence between the natural and the reference manifolds as being established by an actual physical deformation process, which is something else from the obvious freedom in the choice of the coordinates. The three explicit final equations are:
\begin{eqnarray}
0& =& h -1+r\frac{h^{\prime }}{h}+\allowbreak \frac{1}{16f^{2}h}\lambda
r^{2}\left( 2fh\frac{w^{2}}{r^{2}}+4fh+3h+fw^{\prime 2}\right) \nonumber \\
&\times& \left( h-4fh-2fh\frac{w^{2}}{r^{2}}-fw^{\prime 2}\right) \label{uno} \\
& -& \frac{1}{8hf^{2}}\mu r^{2}\left( 2fh^{2}+4f^{2}h^{2}+2f^{2}h^{2}\frac{w^{4}%
}{r^{4}}-3h^{2}+f^{2}w^{\prime 4}+4f^{2}h^{2}\frac{w^{2}}{r^{2}}%
+2f^{2}hw^{\prime 2}\right) \nonumber \\
0&=&h-1-\frac{1}{f}rf^{\prime } -\frac{1}{16hf^{2}}\lambda r^{2}\left( h-4fh-2fh\frac{w^{2}}{r^{2}}+3fw^{\prime 2}\right) \nonumber \\
&\times & \left( h-4fh-2fh\frac{w^{2}}{r^{2}}-fw^{\prime 2}\right) \label{due} \\
&-&\frac{1}{8hf^{2}}\mu r^{2}\left( h^{2}+4f^{2}h^{2}+2f^{2}h^{2}\frac{w^{4}}{r^{4}}-2fh^{2}-3f^{2}w^{\prime 4}+4f^{2}h^{2}\frac{w^{2}}{r^{2}}-2f^{2}hw^{\prime 2}\right) \nonumber \\
0&=&\frac{\lambda }{2fh^{2}}w^{\prime \prime }\left( hr^{2}-3fr^{2}w^{\prime2}-4fhr^{2}-2fhw^{2}\right) \nonumber\\
& -& \frac{\lambda }{h}ww^{\prime 2}-\frac{\lambda r}{h^{2}}\left( \frac{f^{\prime }}{4f}r-\frac{3h^{\prime }}{4h}r+1\right) w^{\prime 3} \nonumber \\
& +& \lambda \frac{w^{\prime }}{h}\left( \left( -\frac{1}{2}w^{2}-r^{2}-\frac{1}{4f}\allowbreak r^{2}\right) \frac{f^{\prime }}{f}+\left( r^{2}+\frac{1}{2}w^{2}-\frac{1}{4f}r^{2}\right) \frac{h^{\prime }}{h}+\frac{1}{f}r-4r\right) \nonumber \\
&+&\lambda w\left( 4+\frac{2}{r^{2}}w^{2}-\frac{1}{f}\right) +\mu \frac{r^{2}}{h^{2}}w^{\prime \prime }\left( -3w^{\prime 2}-h\right) \nonumber \\
&-&\frac{\mu }{h^{2}}\left( 2r-\frac{3}{2h}r^{2}h^{\prime }+\frac{1}{2f}r^{2}f^{\prime }\right) w^{\prime 3} \nonumber \\
&+& \mu \frac{r}{h}\left( \frac{h^{\prime }}{2h}r-\allowbreak 2-\frac{f^{\prime }}{2f}r\right) w^{\prime }+2w\mu \left( 1+\frac{w^{2}}{r^{2}}\right) \label{tre}
\end{eqnarray}
As is immediately seen, the three equations are highly non-linear: of first
order in $f$ and $h$, and of second order in $w$. Solving
them exactly is apparently a desperate task, but we shall see that it is
possible to proceed perturbatively.
\section{Approximate solutions}
Looking at eqs. (\ref{uno}) and (\ref{due}) we see that there are a number
of terms multiplying either the $\lambda $ or $\mu $ parameter, while others
do not. From the application of the theory to the cosmic expansion we know
that the values of $\lambda $ and $\mu $ are indeed very small \cite{CQG,cosmo}; the dimension of the parameters is the inverse of the square
of a length, so we may say that for distances small with respect to some
typical radius $\tilde{r}$ the products $\lambda r^{2}$ and $\mu r^{2}$ will
be much smaller than $1$. The typical $\tilde{r}$ is $\sim 10^{26}$ m $\sim
10^{4}$ Mpc \cite{CQG,cosmo}.
We are then led to solve the equations by successive approximations. Our
first step in the approximation process will be to neglect the terms
multiplying $\lambda $ and $\mu $\footnote{%
For simplicity we assume that $\lambda $ and $\mu $ are of the same order of
magnitude.} so that the zero order equations become:
\begin{equation}
\allowbreak h_{0}-1+r\frac{h_{0}^{\prime }}{h_{0}}=0 \label{uno0}
\end{equation}%
\begin{equation}
h_{0}-1-r\frac{f_{0}^{\prime }}{f_{0}}=0. \label{due00}
\end{equation}
The solution is the typical Schwarzschild one:%
\begin{equation}
f_{0}=1-2\frac{m}{r} \label{Schwf}
\end{equation}%
\begin{equation}
h_{0}=\frac{1}{f_{0}}=\frac{1}{1-2\frac{m}{r}} \label{Schwh}
\end{equation}
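A quick finite-difference sketch (the mass $m$ and sample radius below are arbitrary) confirms that this pair satisfies the zero order equations (\ref{uno0}) and (\ref{due00}):

```python
def f0(r, m=1.0):
    """Zero-order Schwarzschild g_00."""
    return 1.0 - 2.0 * m / r

def h0(r, m=1.0):
    """Zero-order radial coefficient, h0 = 1/f0."""
    return 1.0 / f0(r, m)

def deriv(fun, r, eps=1e-6):
    """Central finite difference."""
    return (fun(r + eps) - fun(r - eps)) / (2.0 * eps)

def residuals(r, m=1.0):
    """Left-hand sides of Eqs. (uno0) and (due00); both should vanish."""
    res1 = h0(r, m) - 1.0 + r * deriv(lambda x: h0(x, m), r) / h0(r, m)
    res2 = h0(r, m) - 1.0 - r * deriv(lambda x: f0(x, m), r) / f0(r, m)
    return res1, res2
```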
Looking at the recovery of the Newtonian limit, we see of course that the
integration constant actually coincides with the central mass $m$.
In practice we can write that the solutions of Eqs.~(\ref{uno},\ref{due},\ref{tre}) are of the type
\begin{eqnarray}
f &=&f_{0}+\phi \nonumber \\
h &=&h_{0}+\chi \label{linsol} \\
w &=&lr\left( 1+\psi \right) \nonumber
\end{eqnarray}%
with $\phi $, $\chi $, $\psi \ll 1$. Up to this moment we have not said
anything about the relative size of $m/r$ with respect to the $\lambda r^{2}$
or $\mu r^{2}$ terms, inside the fiducial radius $\tilde{r}$. We know
however that, outside any Schwarzschild horizon, it is $m/r<1$, so that any
$m\lambda r$ or $m\mu r$ term will be smaller than the $\lambda r^{2}$ or
$\mu r^{2}$ terms. On these bases we conclude that at the lowest
approximation order $\phi $, $\chi $ and $\psi $ are functions of $\lambda
r^{2}$ and $\mu r^{2}$.
The dimensionless scale factor $l$ would be arbitrary in a trivial flat
space-time, but this is not the case here.
Introducing the developments (\ref{linsol}) into (\ref{uno}) and (\ref{due})
and keeping the terms up to the first order in $\lambda r^{2}$ and $\mu
r^{2} $ we see that only $w=lr$ plays a role, so that we do not need to
worry about the unknown function $\psi$. In any case the functional form of $w$ is determined by requiring that, in the absence of elastic deformation, the reference metric be Euclidean, which implies that $\psi$ in Eq.~(\ref{linsol}) must go to zero for $\lambda=\mu=0$. We nevertheless explored the possibility that a different ansatz for $w$ could bring a new set of solutions; considering as functional forms for $w$ either Maclaurin or Taylor expansions in (inverse) powers of $r$, we found that the higher order terms in the expansion must vanish. The linear $r$ term considered in Eq.~(\ref{linsol}) is then the only relevant one.
Finally we obtain:
\begin{eqnarray}
\phi &=&\Phi r^{2} \label{soluzphi} \\
\chi &=&\Psi r^{2} \label{soluzchi}
\end{eqnarray}
The explicit expressions of the $\Phi $ and $\Psi $ parameters are:
\begin{eqnarray}
\Phi & =& \frac{\lambda }{16}\left( 3l^{4}+2l^{2}-1\right) +\frac{\mu }{8} \left( l^{4}-1\right) \\
\Psi & =& \frac{\lambda }{16}\left( 3l^{4}+10l^{2}+7\right) +\frac{\mu }{8} \left( l^{2}+1\right) ^{2} \label{lambdapsi}
\end{eqnarray}
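The following sketch evaluates these expressions and checks two statements made in the text: $\Phi = \Psi$ exactly when $\mu = -2\lambda$, and $6(\Psi-\Phi)$ reproduces the mass-free scalar curvature $3(1+l^{2})(\lambda+\mu/2)$ quoted later (the numerical values for $\lambda$, $\mu$, $l$ are arbitrary):

```python
def Phi(lam, mu, l):
    """First-order metric coefficient Phi of Eq. (soluzphi)."""
    return lam / 16 * (3 * l**4 + 2 * l**2 - 1) + mu / 8 * (l**4 - 1)

def Psi(lam, mu, l):
    """First-order metric coefficient Psi of Eq. (lambdapsi)."""
    return lam / 16 * (3 * l**4 + 10 * l**2 + 7) + mu / 8 * (l**2 + 1)**2
```

Algebraically $\Psi-\Phi=(l^{2}+1)(\lambda/2+\mu/4)$, which makes both checks transparent.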
The result does indeed depend on the value of $l$; different values
correspond to different situations, and we shall comment on this shortly. In
any case it is $\Phi \neq \Psi $ unless $\mu =-2\lambda $.
We could also have started from pure flat space-time as zero order
approximation, but at the end we would have found again the same solution,
i.e. Schwarzschild plus (\ref{soluzphi}) and (\ref{soluzchi}).
\section{The metric tensor}
Explicitly writing the results found in the previous section, we see that we have different regions with specific approximate forms for the line element. Cosmological constraints suggest that $\lambda \sim \mu \sim 10^{-52}~\mathrm{m}^{-2}$. Then, for masses as large as those of galaxies or clusters of galaxies, we can distinguish three regimes. An internal region, where $1\gg m/r\gg \lambda r^{2},\mu r^{2}$:
\begin{equation}
ds^{2}\simeq \left( 1-2\frac{m}{r}+\Phi r^{2}\right) d\tau ^{2}-\left(
\frac{1}{1-2\frac{m}{r}}+\Psi r^{2}\right) dr^{2}-r^{2}\left( d\theta
^{2}+\sin ^{2}\theta d\phi ^{2}\right) \label{lineaz}
\end{equation}
An intermediate region, where $1\gg m/r\sim \lambda r^{2},\mu r^{2}$:%
\begin{equation}
ds^{2}\simeq \left( 1-2\frac{m}{r}+\Phi r^{2}\right) d\tau ^{2}-\left( 1+2%
\frac{m}{r}+\Psi r^{2}\right) dr^{2}-r^{2}\left( d\theta ^{2}+\sin
^{2}\theta d\phi ^{2}\right) \label{linea}
\end{equation}
An outer region, where $r<\tilde{r}$ but $1\gg \lambda r^{2},\mu
r^{2}\gg m/r$:
\begin{equation}
ds^{2}\simeq \left( 1+\Phi r^{2}\right) d\tau ^{2}-\left( 1+\Psi
r^{2}\right) dr^{2}-r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\phi
^{2}\right) \label{lineae}
\end{equation}
Our approximate solutions are unfit to describe the asymptotic region where $\lambda r^{2}$, $\mu r^{2}\sim 1$ or bigger. This is the cosmological domain, and the problem arises of the embedding in a given cosmic background
space-time.
The internal metric has vanishing values of $g_{00}$ for
\begin{equation}
r_{00} \simeq \sqrt[3]{\sqrt{\frac{m^{2}}{\Phi ^{2}}+\frac{1}{27\Phi ^{3}}}+%
\frac{m}{\Phi }}-\frac{\frac{1}{3\Phi }}{\sqrt[3]{\sqrt{\frac{m^{2}}{%
\Phi ^{2}}+\frac{1}{27\Phi ^{3}}}+\frac{m}{\Phi }}}
\end{equation}%
whose limit correctly goes to $2m$ when $\Phi \rightarrow 0$.
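This is the Cardano solution of the cubic $\Phi r^{3}+r-2m=0$ obtained from $g_{00}=0$; a short numerical sketch (the sample values of $m$ and $\Phi>0$ are arbitrary) verifies both the root and the $\Phi\to 0$ limit:

```python
import math

def r00(m, Phi):
    """Radius where g00 vanishes: Cardano root of Phi*r^3 + r - 2m = 0,
    valid for Phi > 0."""
    u = (math.sqrt(m**2 / Phi**2 + 1.0 / (27.0 * Phi**3)) + m / Phi) ** (1.0 / 3.0)
    return u - 1.0 / (3.0 * Phi) / u
```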
Eq. (\ref{lineae}) holds also in the case of a defect without mass. In that
case the scalar curvature in the inner region, to first order in $\lambda
r^{2}$,$\mu r^{2}$, is:
\begin{equation}
R\simeq 6\allowbreak \left( \Psi -\Phi \right) \label{curva}
\end{equation}%
Explicitly it is:
\begin{equation}
R\simeq 3\left( \allowbreak 1+l^{2}\right) \left( \lambda +\frac{1}{2}\mu
\right)
\end{equation}%
The curvature is a scalar quantity, independent of the coordinates. As we
see, the result depends on $l$, so that we are forced to attach a physical
meaning to that parameter. Since we are now treating a mass-free situation,
we are led to conclude that some defect is present at the origin and that its
relevance is quantitatively expressed by the value of $l$. Another remark is
that the curvature at the origin, even in the absence of mass, is never
zero if we only allow for real values of $l$: the initial Euclidean
reference frame can be brought to locally coincide with a Minkowskian
tangent space only for imaginary values of $l$, in which case the
initial frame would actually have been Minkowskian.
\section{Perihelion precession}
\label{sec:peri}
\begin{table}
\centering
\caption{\label{tab_plan} Limits on $\Phi$ due to extra-precession of the inner planets of the solar system. Extra-precession values $\delta\dot{\omega}$ are from \cite{fie11}.}
\begin{tabular}{l|c|c}
Name & $\delta\dot{\omega}$ [mas/year] & $\Phi~[\mathrm{m}^{-2}] $ \\
\hline
\hline
Mercury & $0.6$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.6 {\times} 10^{-40}$
\\
Venus & $1.5$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.6 {\times} 10^{-40}$
\\
Earth & $0.9$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.2{\times} 10^{-40}$
\\
Mars & $0.15$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.2{\times} 10^{-41}$
\\
Jupiter & $42$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.8{\times} 10^{-40}$
\\
Saturn & $0.65$ & $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.5{\times} 10^{-42}$
\\
\end{tabular}
\end{table}
Precessions of the perihelia of the Solar system planets have provided stringent local tests for competing theories of gravity \cite{isl83,je+se06,se+je06b}. A metric deviation of the form $\delta g_{00} \simeq \Phi r^2$ from the standard result obtained in general relativity induces a precession angle after one orbital period of
\begin{equation}
\Delta \phi \simeq 3 \pi \Phi \frac{s^3}{r_\mathrm{g}} (1-e^2)^{1/2} ,
\end{equation}
where $\Delta \phi $ is in radians; $s$ and $e$ are the semi-major axis and the eccentricity of the unperturbed orbit, respectively, and $r_\mathrm{g} = G M/c^2$ is the gravitational radius of the central body.
Data from space flights and modern astrometric methods make it possible to create very accurate planetary ephemerides and to precisely determine orbital elements of Solar system planets \cite{pit05,fie11}. Results are compatible with GR predictions, so that any effect induced by modifications of the gravity law may be to the larger extent of the order of the statistical uncertainty in the measurement of the precession angle. Here we consider the planetary ephemerides in \cite{fie11}.
The accurate measurement of the perihelion shift of Saturn provides the tightest bound on $\Phi$ from solar system tests, $\Phi \mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.5\times10^{-42} \mathrm{m}^{-2}$, see Table~\ref{tab_plan}. Local tests on perihelion precession put bounds on $\Phi$, whereas cosmological observations constrain a different combination of the parameters of the theory, namely the $B [\equiv (\mu/4) (2 \lambda+\mu)/(\lambda + 2 \mu) ] $ parameter in \cite{cosmo}. Local bounds are anyway nine orders of magnitude less constraining than cosmological tests. Other solar or stellar system tests can probe gravitational theories, but they are usually less constraining than results from measurements of the precession angle of the planets in the inner Solar system \cite{se+je06a}.
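As a rough numerical cross-check of how such bounds follow from the precession formula, the inversion can be sketched as follows (the orbital elements of Saturn and the Sun's gravitational radius below are standard textbook values inserted as assumptions; only the order of magnitude is meaningful, and the result lands inside the range of bounds listed in Table~\ref{tab_plan}):

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # milliarcseconds -> radians

def phi_bound(extra_precession_mas_yr, period_yr, s, e, r_g):
    """Invert Delta_phi ~ 3 pi Phi (s^3 / r_g) sqrt(1 - e^2) for Phi,
    using the extra precession accumulated over one orbital period."""
    dphi = extra_precession_mas_yr * period_yr * MAS_TO_RAD  # radians per orbit
    return dphi * r_g / (3.0 * math.pi * s**3 * math.sqrt(1.0 - e**2))

# Assumed (textbook) numbers for Saturn around the Sun:
#   delta_omega_dot = 0.65 mas/yr, period = 29.46 yr,
#   s = 1.43e12 m, e = 0.057, r_g(Sun) = GM/c^2 ~ 1476.6 m
saturn = phi_bound(0.65, 29.46, s=1.43e12, e=0.057, r_g=1476.6)
```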
\section{Radial acceleration}
Another interesting quantity is the radial acceleration of an observer
instantaneously at rest. We now refer to the geodesic equations deduced
from the line element (\ref{lineaz}). Being interested in a pure radial fall, we
put $d\theta /ds=d\phi /ds=0$; the remaining pair of equations is:
\begin{equation}
\begin{array}{c}
\frac{d^{2}\tau }{ds^{2}}+2\left( r\Phi +\frac{m}{r\left( r-2m\right) }%
\right) \frac{d\tau }{ds}\frac{dr}{ds}\simeq 0 \\
\frac{d^{2}r}{ds^{2}}+\left( r\Phi +\frac{m}{r^{3}}\left( r-2m\right)
\right) \left( \frac{d\tau }{ds}\right) ^{2}+\left( r\Psi -\frac{m}{r\left(
r-2m\right) }\right) \left( \frac{dr}{ds}\right) ^{2}\simeq 0%
\end{array}
\label{geod}
\end{equation}
For a momentarily fixed position it is also $dr/ds=0$, so that the equations
become:
\begin{equation}
\begin{array}{c}
\frac{d^{2}\tau }{ds^{2}}\simeq 0 \\
\frac{d^{2}r}{ds^{2}}+\left( r\Phi +\frac{m}{r^{3}}\left( r-2m\right)
\right) \left( \frac{d\tau }{ds}\right) ^{2}\simeq 0%
\end{array}
\label{geod1}
\end{equation}
Let us evaluate the proper radial acceleration; we see that
\begin{equation}
\frac{d^{2}r}{d\tau ^{2}}\simeq -\frac{m}{r^{2}}\left( 1-2\frac{m}{r}\right)
-r\Phi \label{accelerre}
\end{equation}
The strained state of space-time adds a contribution to the Newtonian and post Newtonian acceleration strengthening (weakening) the force of gravity for a positive (negative) value of $\Phi$.
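A minimal sketch of Eq.~(\ref{accelerre}) makes the two limits explicit (the numerical values below are arbitrary):

```python
def radial_acceleration(m, Phi, r):
    """Proper radial acceleration of a momentarily static observer,
    Eq. (accelerre): the Newtonian/post-Newtonian pull plus the strain
    term -r*Phi, which strengthens the pull for Phi > 0."""
    return -m / r**2 * (1.0 - 2.0 * m / r) - r * Phi
```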
An additional term in the form of Eq.~(\ref{accelerre}) causes a change in Kepler's third law. Because of $\Phi$, the radial motion of a test body around a central mass $M$ is affected by an additional acceleration which perturbs the mean motion. For a radial acceleration in the form of $\Phi r$ perturbing an otherwise Newtonian orbit, the mean motion $n=\sqrt{G M/s^3}$ is changed by \cite{se+je06a}
\begin{equation}
\frac{\delta n}{n}= -\Phi\frac{s^3}{r_\mathrm{g}}.
\end{equation}
In principle, the variation of the effective gravitational force felt by the inner solar-system planets with respect to the effective forces felt by the outer planets could probe new physics. However, observational uncertainties on the mean motion, i.e. on the measured semi-major axis of the solar-system planets, are quite large \cite{pit05}. The tightest constraint comes from the Earth orbit, whose semi-major axis is determined with an accuracy of $\delta s = 0.15~\mathrm{m}$ \cite{pit05}. This provides an upper bound on $\Phi$ of the order of $\mathrel{\lower0.6ex\hbox{$\buildrel {\textstyle <}\over{\scriptstyle \sim}$}} 0.2 \times 10^{-40}~\mathrm{m}^{-2}$.
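The inversion of the mean-motion relation can be sketched as follows (Earth's semi-major axis and the Sun's gravitational radius are assumed textbook values; the factor $3/2$ translating the fractional axis uncertainty into a mean-motion uncertainty via $n\propto s^{-3/2}$ is our assumption about how the uncertainty is propagated, so only the order of magnitude of the result is meaningful):

```python
def phi_from_axis_uncertainty(delta_s, s, r_g):
    """Order-of-magnitude bound on Phi from |delta n / n| = Phi s^3 / r_g,
    taking |delta n / n| = (3/2) delta_s / s from Kepler's n ~ s^(-3/2)."""
    return 1.5 * (delta_s / s) * r_g / s**3

# Assumed (textbook) numbers for the Earth: s = 1.496e11 m, r_g(Sun) ~ 1476.6 m,
# with the quoted axis accuracy delta_s = 0.15 m.
earth = phi_from_axis_uncertainty(delta_s=0.15, s=1.496e11, r_g=1476.6)
```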
\section{Matching with the Robertson-Walker metric}
Up to now, we have only required the metric to be spherically symmetric. The homogeneous and isotropic space-time is then a particular case of our local analysis. This highly symmetric case is obtained by considering a manifold without a central mass, i.e., $m=0$, and with just a central defect that can force the space-time to be homogeneous too. This condition can fix the size $l$ of the defect. It is then interesting to compare with the exact solutions obtained with Robertson-Walker (RW) coordinates in the cosmological case. Since our new result is local, we have to consider the RW metric at the present time. The present-day value of the curvature is
\begin{equation}
R_\mathrm{RW} = 12 B \left(1-\frac{1}{a_\mathrm{0}^2}\right),
\end{equation}
where $a_\mathrm{0}$ is the present value of the scale factor and $B \equiv (\mu/4) (2 \lambda+\mu)/(\lambda + 2 \mu)$. We can then look for the size $l$ of the central defect such that the resulting space-time is isotropic and homogeneous at the same time by requiring that the local value of the curvature is equal to the value in the RW metric. We get
\begin{equation}
l \simeq \frac{1}{a_\mathrm{0}} \sqrt{\frac{2\mu -a_\mathrm{0}^2 (\lambda+4\mu)}{\lambda +2\mu}}.
\end{equation}
In \cite{CQG}, the cosmological expansion was explained as a consequence of a defect in an elastic medium. The above result describes the present expansion factor in terms of the local size of the defect.
\section{Comparison with massive gravity}
The SST theory looks very similar to the classical massive gravity theory initially proposed by Fierz and Pauli (FP) \cite{fierz}. At first sight our Lagrangian indeed corresponds to the FP one; if the similarity were an actual identity, we would have to face the same kind of problems which are known to plague massive gravity. These are essentially the so-called van Dam-Veltman-Zakharov (vDVZ) discontinuity \cite{van,zakh} and the presence of ghosts appearing at various orders. In another work \cite{libro} one of us had already considered the problem, remarking that the FP theory is based on a first order perturbative treatment on a flat Minkowskian background; this is not the case of the SST, which is "exact" and does not assume that the elements of the strain tensor are small. However, the interest in massive gravity has stimulated a vast effort to formulate a theory valid to all orders and free from the mentioned troubles; a good review of the progress along this line can be found in ref. \cite{hin11}, to which we will refer for further considerations. Again, when considering the non-linear version of massive gravity, we find a Lagrangian which apparently corresponds to the one of SST; however, as we shall see in a moment, the two Lagrangians are different.
In fact non-linear massive gravity can be seen as a four dimensional bi-metric theory \cite{hin11}. One metric is dynamical, whereas the second is not coupled to the actual universe and is formally frozen, i.e. it describes a non-dynamical Einstein space background \cite{dam+al03}. The non-dynamical metric is used to raise and lower the indices of the $h_{\mu\nu}$ tensor which is the equivalent of our strain tensor \cite{hin11} or is combined with the full $g_{\mu\nu}$ to produce the scalars needed for the potential in \cite{dam+al03}.
In the SST theory, there is just one metric, $g_{\mu\nu}$, which is used for all tasks pertaining to a metric tensor. Our $E_{\mu\nu}$ tensor appearing in Eq.~\ref{strain} is indeed described as the metric tensor of the flat reference frame but is \textit{not} any metric at all for the natural frame. The only existing frame is the natural one; the reference frame belongs to a logically preceding phase in a descriptive paradigm where the present space-time is obtained as a deformation of some previous undeformed flat state, but the previous stage does not exist or coexist with the natural frame. $E_{\mu\nu}$ is not used to raise or lower any index; rather the full metric $g_{\mu\nu}$ is used to raise and lower all indices including those of $E_{\mu\nu}$, which is a symmetric tensor in the natural manifold. Often we find in the literature also the claim that in massive gravity theories General Coordinate Transformation (GCT) invariance is broken by the "massive" term (see for instance ref.~\cite{GCT}) and various devices are needed in order to restore it; this is not the case of SST, since in our theory all objects are true tensors.
The $E_{\mu\nu}$ tensor does not even coincide with the metric of the local tangent space, which is Minkowskian and position dependent. As a matter of fact, results in the SST theory can equally well be obtained starting from a Euclidean or a Minkowskian reference, which again indicates that the natural metric is the only relevant one.
The difference we have pointed out tells us that the SST is not obviously affected by the same difficulties as the classical massive gravity theories. In any case, the vDVZ discontinuity is indeed absent both in the cosmological application of SST and in the case studied here, where the solutions go smoothly to GR when one lets $\lambda$ and $\mu$ go to zero. One further comment about ghosts is in order. The whole discussion of ghosts implies a field theoretical approach to gravity and/or the study of propagating perturbations. As for the former, we know that gravity cannot be described as a spin-2 field on a flat background; furthermore, one cannot even say that the graviton exists, so the expression "mass of the graviton" is used as a sort of abbreviation for something else. Once one analyzes the perturbations, the problem of negative kinetic energy is discussed order by order, but the conclusions that one can draw summing to all orders are not well defined. Various tricks have been devised in order to get rid of ghosts up to a predefined order (e.g. the fourth or the fifth \cite{gaba}). Here we do not enter into that discussion; we simply stress that: a) as seen above, we have just one metric, which is a properly defined metric; b) SST is not based on a peculiar perturbative development. When taken globally, the problems of SST, if any, are shared with the cosmological constant model of space-time.
Actions in the two theories could be formally identified if we lower and raise indices with the full metric rather than with the frozen metric of non-linear massive gravity. Then, if the full metric and the full determinant can be expanded in powers of the deviation, we can reorganize the terms in the potential and show that the two approaches carry the same information \cite{hin11}. However, this analogy has been probed only within this perturbative approach, and we have a direct correspondence to first order only. The SST theory is intrinsically non-linear: as an example, the expansion technique cannot be applied in the cosmological case, which was analyzed exactly in \cite{CQG}. We therefore cannot conclude that the SST theory suffers from the same pathologies as standard massive gravity.
The comparison of what is known in the spherically symmetric case further shows how known problems affecting massive gravity do not automatically apply to SST. Usual problems in the standard massive gravity have been discussed expanding the equations around the flat solution in terms of small functions. An alternative expansion in the squared mass, which would mimic the expansion technique used in this paper for the SST theory, might hopefully show a smooth limit without discontinuity. Some recent analytic solutions in non-linear massive gravity \cite{koy+al11} have shown a branch of exact solutions which corresponds to Schwarzschild-de Sitter space-times where the curvature scale of de Sitter space is proportional to the squared mass of the graviton. This is similar to the results found in the present paper for the SST theory. Even if these arguments are not conclusive they are nevertheless encouraging.
\section{Conclusions}
We have found the approximate configuration of the space-time surrounding a
spherical mass distribution or texture defect, independent of time,
assuming that a dark energy given by the strain of the manifold is present.
As expected, we see that the strain of space-time contributes ``locally''
only extremely tiny corrections to the Schwarzschild solution. These corrections
lead to a slight displacement of the horizon in the inner region and to
changes of the precession rates of the periapsis of orbiting celestial
bodies as well as of the proper radial acceleration. The comparison of the
expected corrections with the data known in the solar system places upper
bounds on the parameters of the theory, which are fully consistent with the
results found applying the SST to the universe as a whole. Summing up: the
Strained State Theory, while giving a physical interpretation to the dark
energy in vacuo, accounts for the accelerated expansion of the universe and
passes other relevant cosmological tests \cite{cosmo}; locally it leads to
effects that become visible at the scale of galaxy clusters or bigger.
Our results also show differences between the local predictions of the SST theory and the standard interpretation of dark energy as a cosmological constant. In particular, we found that in the SST $g_{00} \neq -g_{rr}^{-1}$, which is a major difference from the de Sitter metric and implies that the two competing theories are not degenerate and might be distinguished with very accurate data.
The additional term $\Phi$ in the metric element $g_{00}$ influences the gravitational potential, whereas $\Psi$ contributes to the space curvature perturbation. $\Phi$ directly affects the Poisson equation and determines the modified growth of structure with respect to GR. $\Psi$, together with $\Phi$, influences the null geodesics of light and might be constrained with gravitational lensing measurements.
\ack
Ninfa Radicella was funded by the Spanish Ministry of Science and Innovation through the "Subprograma Estancia J\'ovenes Doctores Extranjeros, Modalidad B", Ref: SB2009-0056.
\section*{References}
\section{Introduction}
Removing singularities from cosmological models is one of
the important problems in physics that many scientists are trying to
solve. Recently, it has been shown that replacing classical
trajectories or geodesics by their quantal (Bohmian) trajectories
leads to the
quantum Raychaudhuri equation (QRE), which prevents the formation of
singularities \cite{a1}. The second order Friedmann equations obtained from the QRE
contain a couple of quantum correction terms, the first of which can be interpreted as a cosmological constant,
while the second removes the big-bang singularity and shows that the age of our universe is infinite
\cite{a2}. It has then been argued that the same result may be derived in a brane-anti-brane system
in which the origin of the universe is N fundamental strings
\cite{a3}. In this method, these strings are first excited and transit to N pairs
of D0-anti-D0-branes. These branes glue to each other and form
a system of D5-anti-D5-branes. This system is unstable; it breaks,
and a pair of universe-anti-universe in addition to a wormhole is
created. The two universes in this system interact with each other
via the wormhole and build a BIon \cite{a4,a5,a6,a7}. Thus,
there is no big bang singularity in this system, and the total
age of the universe is equal to the sum of the ages of the fundamental
string, of the initial D5-anti-D5 system, and of the present shape of the
universe. It is observed that the scale factor of the universe becomes zero
only in the case of an infinite age of the fundamental string,
which means that the age of the universe is infinite \cite{a3}.
In parallel, some models in loop quantum cosmology predict that the
universe contracts and expands infinitely many times, and thus that the age
of the universe may be infinite \cite{a8}. Now, the main question
arises of how we can unify these two types of theory in
M-theory. We answer this question in a system of oscillating
branes. In our model, at the beginning, N M0-anti-M0-branes are produced from the decay of N fundamental strings. Then,
these branes glue to each other and an M8-anti-M8 system is formed. The branes in this system interact with each other and annihilate,
and two anti-M4-branes, an M4-brane,
an M3-brane plus one M0-brane are created. The M4-brane is
compactified around a circle in the eleventh dimension and interacts
with the other branes by exchanging M0-branes and scalars in the
transverse dimensions. The M3-brane, on which our universe is located,
wraps around the M4-brane, the universe contracts, and a
generalized uncertainty principle emerges on it. The M4-M3
system oscillates between the anti-M4-branes and becomes close to one
of them. At this stage, the M3-brane connects to the anti-M4-brane
at one end and remains stuck to and wrapped around the M4-brane
at the other end. Then, the M4 rebounds and rolls, the wrapped M3 opens,
and the universe expands. Eventually, the M4 approaches the anti-M4
and some scalars become tachyons. To solve this problem, the M4 moves
away from this anti-M4-brane and rolls, and the M3 wraps around it again.
Under these conditions, the universe evolves and makes a transition from
the expansion phase to a contraction era. These contractions and
expansions continue indefinitely, and thus the age of the universe
may be infinite.
The outline of the
paper is as follows. In section \ref{o1}, we will construct the
contraction branch of cosmology in a system of oscillating branes
and show that the origin of the universe is a fundamental string. In
section \ref{o2}, we will consider the expansion era of cosmology
during the rolling of the M4-brane in this system and estimate the age of
the universe. In section \ref{o3}, we will show that, when the
tachyonic states vanish, some extra energy is produced, which leads to
inflation. In section \ref{o4}, we will indicate that the
model matches quantum field theory prescriptions. The last
section is devoted to summary and conclusion.
\section{ First contraction branch of universe in a system of oscillating
branes }\label{o1} In this section, we will show that the whole
evolution of the universe, from its birth to the expansion era, can be
described in a system of anti-M4, M4 and M3 branes. In this model, the
formation of the universe in the first contraction branch proceeds via the
process (fundamental string $\rightarrow$ M0 + anti-M0
$\rightarrow$ M8 + anti-M8 $\rightarrow$ 2 anti-M4 + compact M4 +
wrapped M3 + M0 $\rightarrow$ contraction branch of universe).
Also, in our model, some scalars make a transition to a tachyonic
phase and cause the contraction branch to be terminated.
To begin, we briefly explain the model of \cite{a2}. In
this mechanism, we estimate an explicit form for $\dot{H}=F(H)$,
where $F(H)$ is a function of the Hubble parameter $H$ that is derived
from the quantum Raychaudhuri equation. Using this function, we can
calculate the age of our world:
\begin{eqnarray}
&& \dot{H}=F(H)\rightarrow T=\frac{1}{F^{n}(H_{initial})}\int
dH\frac{1}{(H-H_{initial})^{n}} \rightarrow \infty \label{m1}
\end{eqnarray}
where $H_{initial}$ is the Hubble parameter before the present era
of the universe. This equation shows that the age of the universe is
infinite. We will show that the same results can be obtained in
string theory. In our model, the universe is located on an
M3-brane which wraps around a compact M4 at one end and attaches
to an anti-M4 at the other end. As the M4 oscillates and rolls, the M3
oscillates between wrapping and opening states and, consequently, the
universe oscillates between contraction and expansion branches. To
show this, we use the mechanism of \cite{a3}, where
it has been shown that a fundamental string can decay to a pair of
D0-anti-D0-branes or a pair of M0-anti-M0-branes in addition to
some extra energy ($V$) \cite{a3}:
\begin{eqnarray}
&& S_{F-string} = S_{D0}+S_{anti-D0}
=S_{M0}+S_{anti-M0}+2V(extra) \label{m2}
\end{eqnarray}
where the actions of the D0-branes and M0-branes are defined as
\cite{a3,a9,a10,a11,a12,a13,a14,a15,a16,a17}:
\begin{eqnarray}
&& S_{D0} =S_{anti-D0}= -T_{D0} \int dt Tr( \Sigma_{m=0}^{9}
[X^{m},X^{n}]^{2}) \label{m3}
\end{eqnarray}
\begin{eqnarray}
S_{M0} = S_{anti-M0} = T_{M0}\int dt Tr( \Sigma_{M,N,L=0}^{10}
\langle[X^{M},X^{N},X^{L}],[X^{M},X^{N},X^{L}]\rangle) \label{m4}
\end{eqnarray}
Here, $T_{D0}$ and $T_{M0}$ are the brane tensions, $X^{m}$ are
transverse scalars, $X^{M}=X^{M}_{\alpha}T^{\alpha}$ and
\begin{eqnarray}
&&[T^{\alpha}, T^{\beta}, T^{\gamma}]= f^{\alpha \beta \gamma}_{\eta}T^{\eta} \nonumber \\&&\langle T^{\alpha},
T^{\beta} \rangle = h^{\alpha\beta} \nonumber \\&& [X^{M},X^{N},X^{L}]=[X^{M}_{\alpha}T^{\alpha},X^{N}_{\beta}T^{\beta},X^{L}_{\gamma}T^{\gamma}]\nonumber \\&&\langle X^{M},X^{M}\rangle = X^{M}_{\alpha}X^{M}_{\beta}\langle T^{\alpha}, T^{\beta} \rangle
\label{m5}
\end{eqnarray}
As can be seen from the above equations, the action of the D0-branes contains a two-dimensional Nambu-Poisson bracket, while the action of the M0-branes contains a three-bracket
obeying the Lie-3-algebra \cite{a14,a15,a16,a17}. Also, the actions of the D0 and M0 branes obey the following relation \cite{a3}:
\begin{eqnarray}
&& S_{M0} = S_{D0} + V_{Extra}\label{m6} \\
&& \nonumber \\
&& \text{where}\nonumber \\
&& \nonumber \\
&& V_{Extra}= -6T_{M0}\int dt
\Sigma_{M,N,L,E,F,G=0}^{9}\varepsilon_{MNLD}\varepsilon_{EFG}^{D}X^{M}X^{N}X^{L}X^{E}X^{F}X^{G}
\nonumber
\end{eqnarray}
Here,
$T_{D0}=6T_{M0}(\frac{R^{2}}{l_{p}^{3}})=\frac{1}{g_{s}l_{s}}$ is
the brane tension, and $g_{s}$ and $l_{s}$ are the string coupling
and string length, respectively.
At this stage, we want to obtain the relevant action for a Dp-brane
by summing over the actions of D0-branes. To this end, we use the
following mappings \cite{a3,a9,a10,a11,a12}:
\begin{eqnarray}
&& \Sigma_{a=0}^{p}\Sigma_{m=0}^{9}\rightarrow \frac{1}{(2\pi l_{s})^{p}}\int d^{p+1}\sigma \Sigma_{m=p+1}^{9}\Sigma_{a=0}^{p} \qquad \lambda = 2\pi l_{s}^{2} \nonumber \\
&&[X^{a},X^{i}]=i
\lambda \partial_{a}X^{i}\qquad [X^{a},X^{b}]= i \lambda^{2} F^{ab}\nonumber \\
&& i,j=p+1,..,9\qquad a,b=0,1,...p\qquad m,n=0,1,..,9 \label{m7}
\end{eqnarray}
Now, we can obtain the relevant action of the Dp-brane
\cite{a3,a9,a10,a11,a12}:
\begin{eqnarray}
&& S_{Dp} =-\Sigma_{a=0}^{p}T_{D0} \int dt Tr( \Sigma_{m=0}^{9}
[X^{m},X^{n}]^{2}) = \Sigma_{a=0}^{p}S_{D0} \nonumber \\
&&=-T_{Dp} \int d^{p+1}\sigma Tr (\Sigma_{a,b=0}^{p}
\Sigma_{i,j=p+1}^{9}
\{\partial_{a}X^{i}\partial_{b}X^{i}-\frac{1}{2
\lambda^{2}}[X^{i},X^{j}]^{2}+\frac{\lambda^{2}}{4} (F_{ab})^{2}
\}) \label{m8}
\end{eqnarray}
Also, to derive the relevant action for Mp-branes, we sum over the
actions of M0-branes and use the following mappings
\cite{a3,a9,a10,a11,a12,a13,a14,a15,a16,a17}:
\begin{eqnarray}
&&\langle[X^{a},X^{b},X^{i}],[X^{a},X^{b},X^{i}]\rangle=
\frac{1}{2}\varepsilon^{abc}\varepsilon^{abd}(\partial_{a}X^{i}_{\alpha})(\partial_{a}X^{i}_{\beta})\langle(T^{\alpha},T^{\beta}\rangle
=
\frac{1}{2}\langle \partial_{a}X^{i},\partial_{a}X^{i}\rangle \nonumber \\
&&\nonumber \\
&&\langle[X^{a},X^{b},X^{c}],[X^{a},X^{b},X^{c}]\rangle=
(F^{abc}_{\alpha\beta\gamma})(F^{abc}_{\alpha\beta\eta})\langle[T^{\alpha},T^{\beta},T^{\gamma}],[T^{\alpha},T^{\beta},T^{\eta}]\rangle)=\nonumber \\
&&
(F^{abc}_{\alpha\beta\gamma})(F^{abc}_{\alpha\beta\eta})f^{\alpha
\beta \gamma}_{\sigma}h^{\sigma \kappa}f^{\alpha \beta
\eta}_{\kappa} \langle T^{\gamma},T^{\eta}\rangle=
(F^{abc}_{\alpha\beta\gamma})(F^{abc}_{\alpha\beta\eta})\delta^{\kappa
\sigma} \langle T^{\gamma},T^{\eta}\rangle=
\langle F^{abc},F^{abc}\rangle\nonumber \\
&&\nonumber \\
&&\Sigma_{m}\rightarrow \frac{1}{(2\pi)^{p}}\int d^{p+1}\sigma
\Sigma_{m-p-1} i,j=p+1,..,10\quad a,b=0,1,...p\quad m,n=0,..,10~~ \nonumber \\
&& F_{abc}=\partial_{a} A_{bc}-\partial_{b} A_{ca}+\partial_{c}
A_{ab} \label{m9}
\end{eqnarray}
Using the above relations, the action of the Mp-brane can be obtained as
\cite{a3,a14,a15,a16,a17}:
\begin{eqnarray}
&& S_{Mp} = \Sigma_{a=0}^{p}S_{M0}=-\Sigma_{a=0}^{p}T_{M0} \int dt
Tr( \Sigma_{m=0}^{9}
\langle[X^{a},X^{b},X^{c}],[X^{a},X^{b},X^{c}]\rangle) =
\nonumber \\ && -T_{Mp} \int d^{p+1}\sigma Tr
(\Sigma_{a,b,c=0}^{p} \Sigma_{i,j,k=p+1}^{10}
\{\langle\partial_{a}X^{i},\partial_{a}X^{i}\rangle
-\frac{1}{4}\langle[X^{i},X^{j},X^{k}],[X^{i},X^{j},X^{k}]\rangle+\nonumber
\\ &&\frac{1}{6} \langle F_{abc},F_{abc}\rangle \}) \label{m10}
\end{eqnarray}
Now, we can build our model in M-theory. First, a pair of M8-anti-M8-branes is constructed by joining M0-branes. Then, these objects decay to
two anti-M4-branes, one M4-brane, and one M3-brane, in addition to one M0-brane:
\begin{eqnarray}
&& S_{tot}= \Sigma_{a=0}^{8}S_{M0} + \Sigma_{a=0}^{8}S_{anti-M0}=
S_{M8} + S_{anti-M8}=\nonumber
\\&&2\Sigma_{a=0}^{4}S_{anti-M0} + \Sigma_{a=0}^{4}S_{M0} + \Sigma_{a=0}^{3}S_{M0} + S_{M0} =\nonumber
\\&& 2S_{anti-M4} +
S_{M4} + S_{M3} + S_{M0} \label{m11}
\end{eqnarray}
In our method, the M4-brane is compactified around a circle in
eleven dimensions and the M3 wraps around it. Then, the M4-M3 system
moves toward one of the anti-M4-branes. Approaching the anti-M4, the M3 sticks
to the anti-M4 at one end and remains stuck to the M4 at the
other end. As the M4 moves away, it rolls and the M3 opens. This
process repeats many times. During wrapping and compactification,
some components of the gauge field should be replaced by scalars, while
during opening, some scalars convert to gauge fields
\cite{a3,a9,a10,a11,a12,a13,a14,a15,a16,a17}. For this reason, we
write the following mapping relations, which include both
states:
\begin{eqnarray}
&& [X^{a},X^{b},X^{c/i}]=\alpha
\partial_{a}\partial_{b}X^{i} + \beta F_{abc} \nonumber
\\&&
[X^{a},X^{i/b},X^{j/c}]=\alpha
(X^{i}\partial_{a}X^{j}+X^{j}\partial_{a}X^{i})+\beta F_{abc}
\nonumber
\\&& [X^{i},X^{j},X^{k}]=
\varepsilon^{\alpha\beta\gamma}X^{i}_{\beta}X^{j}_{\beta}X^{k}_{\gamma}
\label{m12}
\end{eqnarray}
where $a,b,c$ are indices on the branes and $i,j,k$ are indices in
the transverse directions. By wrapping and compactification of the branes,
some of the brane indices like $a,b$ or $c$ are replaced by $i,j$. Also,
$\alpha$ and $\beta$ are functions of time which increase and
decrease during the wrapping and opening phases. When the M3 wraps around
the M4 completely, $\alpha$ should be maximal, and when the M3 opens
completely, $\beta$ is maximal. For this reason, we suggest
$\alpha=sin\omega t$ and $\beta=cos\omega t$, where $\omega$ is the
frequency of oscillation ($\omega=\frac{2\pi}{T}$) and $T$ is the
period. Using equations (\ref{m11}) and (\ref{m12}), we
obtain:
\begin{eqnarray}
&& S_{M3-M4} = -T_{D3} \int d^{4}\sigma Tr (\Sigma_{a,b=0}^{3}
\Sigma_{i,j=4}^{9} \{\partial_{a}X^{i}\partial_{b}X^{j}+
\alpha^{2}
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i} + \beta^{2} F_{abc}^{2}+\beta^{3} F_{abc}^{3}\nonumber \\&&
+\alpha \beta \partial_{a}\partial_{b}X^{i} F_{abc}+ \alpha^{2}
(X^{i}X^{j}\partial_{a}X^{j}\partial_{b}X^{i})+\alpha F_{abc}
(X^{i}\partial_{a}X^{j}+X^{j}\partial_{a}X^{i})+\alpha^{2}\beta
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i}F_{abc}\nonumber
\\&&
+\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'} \})
=S_{M3}+V_{int}
\label{m13}
\end{eqnarray}
where $V_{int}=-T_{D3} \int d^{4}\sigma Tr (\Sigma_{a,b=0}^{3}
\Sigma_{i,j=4}^{9} \{\beta^{3} F_{abc}^{3}+\alpha^{2}\beta
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i}F_{abc}\})$ is the interaction potential between the M3 and the M4. Using the above equation, we can derive the
equation of motion for $X^{i}$:
\begin{eqnarray}
&& \{\partial_{a}^{2}+(\alpha^{2}+2\alpha\beta
A_{a})\partial_{a}^{4}+\alpha^{2}\beta
A_{a}X^{j}\partial_{a}^{6}+\alpha^{2}X^{j}X^{k}X^{l}\partial_{a}^{2}+\alpha
A_{a}\partial_{a}^{3} + \frac{\partial^{2} V}{\partial
x^{2}}\}X^{i}=0 \nonumber \\&&
V=\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'}
\label{m14}
\end{eqnarray}
At this stage, we substitute $\partial_{a} = p_{a}$ and
$\frac{\partial^{2} V}{\partial x^{2}} = m^{2}$ in the above equation
and obtain:
\begin{eqnarray}
&&\{p_{a}^{2}+(\alpha^{2}+2\alpha\beta
A_{a})p_{a}^{4}+\alpha^{2}\beta
A_{a}X^{j}p_{a}^{6}+\alpha^{2}X^{j}X^{k}X^{l}p_{a}^{2}-\alpha
A_{a}p_{a}^{3} + m^{2}\}X^{i}=0 \label{m15}
\end{eqnarray}
Comparing the above equation with the usual equation for a scalar
field,
\begin{eqnarray}
&& \{\bar{p}_{\alpha}^{2}+ m^{2}\}X^{i}=0
\label{m16}
\end{eqnarray}
we can define the momentum $\bar{p}$ as:
\begin{eqnarray}
&& \bar{p}_{a}\sim(1+\alpha^{2}X^{j}X^{k}X^{l})^{1/2}p_{a}
-(\alpha^{2}+2\alpha\beta A_{a})^{1/2}p_{a}^{2}+\alpha^{2}\beta
A_{a}X^{j}p_{a}^{3}
\label{m17}
\end{eqnarray}
Thus, our model reproduces the commutation relations of the generalized uncertainty principle (GUP):
\begin{eqnarray}
&& \{{x}_{a},\bar{p}_{b}\}=
(1+\alpha^{2}X^{j}X^{k}X^{l})^{1/2}\delta_{ab} -
(\alpha^{2}+4\alpha\beta A_{a})^{1/2}p_{a}+3\alpha^{2}\beta
A_{a}X^{j}p_{a}^{2} \label{m18}
\end{eqnarray}
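Before proceeding, it is worth checking the limit of equation (\ref{m18}) in the fully open phase (a sketch using only the definitions $\alpha=sin\omega t$ and $\beta=cos\omega t$): at $t=\frac{T}{2}$ one has $\alpha=0$, so that
\begin{eqnarray}
&& \{{x}_{a},\bar{p}_{b}\}\Big|_{\alpha=0}= (1+0)^{1/2}\delta_{ab} - 0 + 0 = \delta_{ab} \nonumber
\end{eqnarray}
and the ordinary canonical commutation relation is recovered. Hence the GUP corrections are active only while the M3 is at least partially wrapped around the M4.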
This is the GUP proposed in \cite{a18,a19,a20,a21,a22}, which
predicts a maximum observable momentum besides the existence of a
minimal measurable length, and which is consistent with Doubly Special
Relativity (DSR) theories, string theory and black hole physics.
Thus, wrapping the M3 around the M4 leads to the non-commutative relations
of the GUP. Equation (\ref{m18}) gives the following uncertainty relation,
using the argument of \cite{a23,a24,a25}:
\begin{eqnarray}
&& \Delta x \Delta p \geq
\frac{1}{2}[(1+\alpha^{2}X^{j}X^{k}X^{l})^{1/2} -
(\alpha^{2}+4\alpha\beta A_{a})^{1/2}p_{a}+3\alpha^{2}\beta
A_{a}X^{j}p_{a}^{2}]\label{m19}
\end{eqnarray}
Solving the above inequality as a quadratic equation in
$\Delta p$ gives \cite{a23,a24,a25}:
\begin{eqnarray}
&& \Delta p \geq \frac{2\Delta x+(\alpha^{2}+4\alpha\beta
A_{a})^{1/2}}{3\alpha^{2}\beta
A_{a}X^{j}}[1-\sqrt{1-\frac{6\alpha^{2}\beta A_{a}X^{j}}{(2\Delta
x+(\alpha^{2}+4\alpha\beta A_{a})^{1/2})^{2}}}]\label{m20}
\end{eqnarray}
Now, we assume that the wrapped M3-brane can be modelled as a
(D-1)-dimensional sphere of size equal to twice the Schwarzschild radius,
$r_{s}$. Thus, the uncertainty in the position of a particle has the
minimum value given by \cite{a23,a24,a25}:
\begin{eqnarray}
&& \Delta x = 2r_{s}=
2\lambda_{D}[\frac{G_{D}m}{c^{2}}]^{\frac{1}{D-3}}\label{m21}
\end{eqnarray}
where
$\lambda_{D}=[\frac{16\pi}{(D-2)\Omega_{D-2}}]^{\frac{1}{D-3}}$
and
$\Omega_{D}=\frac{2\pi^{\frac{D-1}{2}}}{\Gamma(\frac{D-1}{2})}$.
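As a consistency check, one can verify that equation (\ref{m21}) reduces to the familiar result in four dimensions (a sketch assuming the standard value $\Omega_{2}=4\pi$ for the area of the unit two-sphere):
\begin{eqnarray}
&& \lambda_{4}=[\frac{16\pi}{(4-2)\Omega_{2}}]^{\frac{1}{4-3}}=\frac{16\pi}{8\pi}=2 \quad\rightarrow\quad \Delta x = 2r_{s}=2\lambda_{4}\frac{G_{4}m}{c^{2}}=\frac{4G_{4}m}{c^{2}} \nonumber
\end{eqnarray}
i.e. twice the usual Schwarzschild radius $r_{s}=2G_{4}m/c^{2}$, as expected.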
We substitute the position defined by (\ref{m21}) into equation
(\ref{m20}) and derive $\Delta p$ as:
\begin{eqnarray}
&& \Delta p \geq
\frac{4\lambda_{D}[\frac{G_{D}m}{c^{2}}]^{\frac{1}{D-3}}+(4\alpha\beta
A_{a})^{1/2}}{3\alpha^{2}\beta
A_{a}X^{j}}[1-\sqrt{1-\frac{6\alpha^{2}\beta
A_{a}X^{j}}{(4\lambda_{D}[\frac{G_{D}m}{c^{2}}]^{\frac{1}{D-3}}+(\alpha^{2}+4\alpha\beta
A_{a})^{1/2})^{2}}}]\label{m22}
\end{eqnarray}
Applying the definition of the mass in equation (\ref{m14}), we calculate
its explicit form:
\begin{eqnarray}
&& m^{2} = \frac{\partial^{2} V}{\partial
x^{2}}=\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}\frac{\partial^{2}
(X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'})}{\partial
x^{2}} \label{m23}
\end{eqnarray}
Substituting equation (\ref{m23}) in equation (\ref{m22}), we get:
\begin{eqnarray}
&& \Delta p \geq
\frac{4\lambda_{D}[\frac{G_{D}\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}\frac{\partial^{2}
(X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'})}{\partial
x^{2}} }{c^{2}}]^{\frac{1}{2D-6}}+(4\alpha\beta
A_{a})^{1/2}}{3\alpha^{2}\beta A_{a}X^{j}}\times
\nonumber\\&&[1-\sqrt{1-\frac{6\alpha^{2}\beta
A_{a}X^{j}}{(4\lambda_{D}[\frac{G_{D}\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}\frac{\partial^{2}
(X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'})}{\partial
x^{2}} }{c^{2}}]^{\frac{1}{2D-6}}+(\alpha^{2}+4\alpha\beta
A_{a})^{1/2})^{2}}}]\label{m24}
\end{eqnarray}
This equation yields the following inequality for the scalars in the
transverse directions:
\begin{eqnarray}
&& A_{a} \leq [\frac{
(4\lambda_{D}[\frac{G_{D}\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}\frac{\partial^{2}
(X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'})}{\partial
x^{2}} }{c^{2}}]^{\frac{1}{2D-6}}+(\alpha^{2}+4\alpha\beta
A_{a})^{1/2})^{2}}{6\alpha^{2}\beta X^{i} }] \label{m25}
\end{eqnarray}
As can be seen from the above inequality, as the M3-brane
approaches the anti-M4-brane at $t=\frac{T}{4}$, the scalars on the M3
grow, the right-hand side of the inequality becomes smaller than the
left-hand side, and the inequality is violated. To avoid this violation,
and negative values under the square root, the squared mass of some
scalars becomes negative ($m^{2}\rightarrow -m^{2}$); they transit to
a tachyonic phase and the contraction branch ends.
\section{Estimating the age of universe in a system of oscillating branes }\label{o2}
In the previous section, we observed that, as the M3-M4 system comes close
to the anti-M4, the M3 attaches to it at one end and stays stuck at the
other end, and the system faces some tachyonic states. To remove
these states, the M4 rebounds and rolls, the M3 opens and the expansion phase
begins. During this new phase, the gauge fields ($A_{a}$) on the brane
grow, the left-hand side of equation (\ref{m25}) becomes bigger than the
right-hand side, and the inequality is violated again. To avoid negative
values under the square root, the squared mass of some scalars
becomes negative ($m^{2}\rightarrow -m^{2}$) and they become
tachyons. To solve this problem, the M4 rebounds again, the M3 wraps
around it, and a contraction epoch begins. Now, the question arises
of what the age of the universe is. To answer this question, we
should calculate the contribution of the branes to the four-dimensional
universe and write the energy-momentum tensors. Using the action in
(\ref{m13}), we can calculate the energy-momentum of the M3 and set
it equal to the energy-momentum of the universe:
\begin{eqnarray}
&& \rho = \frac{3H^{2}}{\kappa^{2}} =
\frac{1}{2}(1+\alpha^{2}(X^{j})^{2}+2\alpha
F_{abc}X^{j})(\dot{X}^{i})^{2}+\alpha^{2}
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i} \nonumber \\&&
+\alpha \beta \partial_{a}\partial_{b}X^{i}
F_{abc}+\alpha^{2}\beta
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i}F_{abc}+ \beta^{2} F_{abc}^{2}+\beta^{3} F_{abc}^{3}\nonumber
\\&&
+\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'}
\nonumber\\&& p=-\frac{1}{\kappa^{2}}(3H^{2}+2\dot{H})=
\frac{1}{2}(1+\alpha^{2}(X^{j})^{2}+2\alpha
F_{abc}X^{j})(\dot{X}^{i})^{2}\nonumber\\&&-\alpha^{2}
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i}
-\alpha \beta \partial_{a}\partial_{b}X^{i}
F_{abc}-\alpha^{2}\beta
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i}F_{abc}\nonumber
\\&&- \beta^{2} F_{abc}^{2}-\beta^{3} F_{abc}^{3}
-\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'}
\label{m26}
\end{eqnarray}
Solving equations (\ref{m15}), (\ref{m16}), (\ref{m25}) and
(\ref{m26}) simultaneously, we obtain the explicit forms of
$X^{i}$, $A^{a}$ and $a(t)$:
\begin{eqnarray}
&& X^{i}\sim sin(\bar{\omega}t) \quad
\bar{\omega}=\sqrt{\omega^{2}+(\alpha^{2}+2\alpha)\omega^{4}+\alpha^{2}\beta
\omega^{6}+\alpha^{5}\omega^{2}+\omega^{3}+m^{2}} \nonumber\\&&
\nonumber\\&& A^{a}\sim
[\frac{\sqrt{G_{D}[\omega^{2}+(\alpha^{2}+2\alpha)\omega^{4}+\alpha^{2}\beta
\omega^{6}+\alpha^{5}\omega^{2}+\omega^{3}+m^{2}][30sin^{4}(\bar{\omega}t)cos^{2}(\bar{\omega}t)-6sin^{6}(\bar{\omega}t)]}+sin\omega
t}{6\alpha^{2}\beta sin(\bar{\omega}t) }]\nonumber\\&&
\nonumber\\&& a(t)\sim e^{\omega t +\int dt G(t)}\nonumber\\&&
\nonumber\\&& G(t)\sim
([\frac{\sqrt{G_{D}[\omega^{2}+(\alpha^{2}+2\alpha)\omega^{4}+\alpha^{2}\beta
\omega^{6}+\alpha^{5}\omega^{2}+\omega^{3}+m^{2}]}
\sqrt{\omega^{2}+(\alpha^{2}+2\alpha)\omega^{4}+\alpha^{2}\beta
\omega^{6}+\alpha^{5}\omega^{2}+\omega^{3}+m^{2}}}{3\alpha \beta
\sqrt{[30sin^{4}(\bar{\omega}t)cos^{2}(\bar{\omega}t)-6sin^{6}(\bar{\omega}t)]}
}])\times \nonumber\\&&
[120sin^{3}(\bar{\omega}t)cos^{3}(\bar{\omega}t)+30sin^{5}(\bar{\omega}t)cos(\bar{\omega}t)-30cos(\bar{\omega}t)sin^{5}(\bar{\omega}t)]+\alpha^{2}
sin^{2}(\bar{\omega}t))cos^{2}(\bar{\omega}t) +
cos(2\bar{\omega}t)+\nonumber\\&&[\frac{cos\omega
t\sqrt{G_{D}[\omega^{2}+(\alpha^{2}+2\alpha)\omega^{4}+\alpha^{2}\beta
\omega^{6}+\alpha^{5}\omega^{2}+\omega^{3}+m^{2}][30sin^{4}(\bar{\omega}t)cos^{2}(\bar{\omega}t)-6sin^{6}(\bar{\omega}t)]}+2cos\omega
t sin\omega t}{3\alpha^{2}\beta sin^{3}(\bar{\omega}t) }]
\label{m27}
\end{eqnarray}
As can be seen from these equations, during the contraction
branch ($0<t<\frac{T}{4}$), the scalar fields ($X^{i}$) grow while
the gauge fields ($A^{a}$) decrease. However, as time passes and the
M3 opens, the universe enters the expansion phase
($\frac{T}{4}<t<\frac{T}{2}$), the gauge fields grow and the scalars
decrease. Also, these equations show that the scale factor
becomes zero only in the limit of infinite time ($t \rightarrow -\infty $).
This means that the age of the universe is infinite, and
thus our result is consistent with the results of \cite{a2}.
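The divergence of the age can be made explicit (a sketch keeping only the leading exponential factor in (\ref{m27}) and assuming that $\int dt\, G(t)$ remains bounded over each oscillation period):
\begin{eqnarray}
&& a(t)\sim e^{\omega t +\int dt\, G(t)}\rightarrow 0 \quad \text{only for} \quad t \rightarrow -\infty \nonumber
\end{eqnarray}
so the scale factor never reaches zero at any finite time, and the total age of the universe measured from $a=0$ is infinite.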
\section{ Considering the inflation era at the beginning of expansion branch }\label{o3}
Until now, we have shown that by wrapping and opening the M3, the universe
contracts and expands. Also, we have replaced the big bang
singularity by a fundamental string and indicated that the age of the
universe is infinite. Now, another question arises of how our
universe undergoes an inflationary phase at the beginning of the
expansion branch. To reply to this question, we recall that, at the
end of the contraction, some scalars gain a negative squared mass and
transit to tachyons. To remove these states, the contraction stops and
the universe enters an expansion epoch. Also, the negative squared
mass of the scalars ($-m^{2}=-\frac{\partial^{2}V}{\partial x^{2}}$)
at the end of the contraction should be converted to a positive
squared mass ($m^{2}=\frac{\partial^{2}V}{\partial x^{2}}$) at the
beginning of the expansion. Thus, the energy of the system changes and
some energy
is released. Using equation (\ref{m14}), we can get:
\begin{eqnarray}
&&m^{2}-(-m^{2})=2m^{2} \quad V=V_{\text{end of
contraction}}=V_{\text{beginning of expansion}} \rightarrow
\nonumber\\&& \frac{\partial^{2}V_{\text{beginning of
expansion}}}{\partial x^{2}}-(-\frac{\partial^{2}V_{\text{end of
contraction}}}{\partial x^{2}})=2\frac{\partial^{2}V}{\partial
x^{2}}\rightarrow \nonumber\\&& V_{inf}=V_{\text{beginning of
expansion}}-V_{\text{end of contraction}}=2V=2
\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'}
\label{m28}
\end{eqnarray}
This energy causes the velocity of the opening of the M3 to increase,
and our universe, which is located on this brane, experiences an
inflationary phase.
Using equations (\ref{m27}) and (\ref{m28}), and assuming that inflation starts at the beginning of the expansion ($t=\frac{T}{4}$), we can obtain the
Hubble parameter:
\begin{eqnarray}
&&\rho_{total}=\rho+\rho_{inf}\nonumber\\&&\rho_{inf} =
\frac{3H_{inf}^{2}}{\kappa^{2}}=V_{inf}
=2V=2sin^{6}(\bar{\omega}t)\rightarrow \nonumber\\&&
H_{inf}=\sqrt{\frac{2}{3}}\kappa
sin^{3}(\bar{\omega}t)\nonumber\\&&
t=\frac{T}{4}+t_{inf}\rightarrow H_{inf}=\sqrt{\frac{2}{3}}\kappa
cos^{3}(\bar{\omega}t_{inf}) \nonumber\\&&
H_{tot}^{2}=H^{2}+H_{inf}^{2} \quad H=G+\omega \nonumber\\&&
H_{tot}^{2}=(G+\omega)^{2}+\frac{2\kappa}{3}
cos^{6}(\bar{\omega}t_{inf}) \label{m29}
\end{eqnarray}
We can test our model by calculating the magnitudes of the
slow-roll parameters and the tensor-to-scalar ratio $r$ defined in
\cite{a26} and comparing with previous predictions:
\begin{eqnarray}
&&\varepsilon=-\frac{\dot{H}_{tot}}{H_{tot}^{2}}=\frac{2\dot{G}(G+\omega)+\frac{12\kappa}{3}
\bar{\omega}cos^{5}(\bar{\omega}t_{inf})sin(\bar{\omega}t_{inf})}{[(G+\omega)^{2}+\frac{2\kappa}{3}
cos^{6}(\bar{\omega}t_{inf})]^{3/2}}\nonumber\\&& \eta
=-\frac{\ddot{H}_{tot}}{2H_{tot}\dot{H }_{tot}}=[\frac{2\ddot{G}(G+\omega)+2\dot{G}\dot{G}+\frac{12\kappa}{3}
\bar{\omega}cos^{6}(\bar{\omega}t_{inf})-\frac{60\kappa}{3}
\bar{\omega}cos^{4}(\bar{\omega}t_{inf})sin^{2}(\bar{\omega}t_{inf})}{[(G+\omega)^{2}+\frac{2\kappa}{3}
cos^{6}(\bar{\omega}t_{inf})]^{1/2}}\nonumber\\&&+\frac{(2\dot{G}(G+\omega)+\frac{12\kappa}{3}\bar{\omega}
cos^{5}(\bar{\omega}t_{inf})sin(\bar{\omega}t_{inf}))^{2}}{[(G+\omega)^{2}+\frac{2\kappa}{3}
cos^{6}(\bar{\omega}t_{inf})]^{3/2}}]\times
\frac{1}{2\dot{G}(G+\omega)+\frac{12\kappa}{3}\bar{\omega}
cos^{5}(\bar{\omega}t_{inf})sin(\bar{\omega}t_{inf})}\nonumber\\&&
\nonumber\\&&t_{inf}\ll T \quad and \quad \bar{\omega}\sim
\frac{1}{T^{3}}\rightarrow \bar{\omega}t_{inf}\ll1\rightarrow
\nonumber\\&&
sin(\bar{\omega}t_{inf})\ll cos(\bar{\omega}t_{inf})\quad and
\quad
sin(\bar{\omega}t_{inf})\sim \frac{t_{inf}}{T}\sim 0 \quad and
\quad cos(\bar{\omega}t_{inf})\sim 1\Rightarrow \nonumber\\&&
\varepsilon=\frac{1}{G^{2}} \ll 1 \quad \eta \sim \frac{1}{G} \ll
1 \Rightarrow \quad r=16 \varepsilon\sim\frac{16}{G^{2}} \ll 1
\label{m30}
\end{eqnarray}
This equation shows that the slow-roll parameters are very small,
and thus our model confirms the predictions of previous models for
the inflation era in \cite{a26}. Another interesting result that
follows from this equation is the value of the tensor-to-scalar
ratio $r$, which is much smaller than one and in agreement with
experimental data \cite{a27}. Thus, the extra energy produced as
the tachyons vanish leads to an increase in the expansion velocity
and to the occurrence of inflation.
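The limiting values quoted above can be checked by direct arithmetic. The short sketch below (with an assumed illustrative value of $G$; any $G \gg 1$ gives the same conclusion) evaluates $\varepsilon = 1/G^{2}$, $\eta \sim 1/G$, and $r = 16\varepsilon$:

```python
import math

# Hedged numeric illustration of the slow-roll limits quoted above.
# G = 1e3 is an assumed large coupling value, not taken from the paper.
G = 1.0e3
epsilon = 1.0 / G**2        # epsilon = 1/G^2
eta = 1.0 / G               # eta ~ 1/G
r_ts = 16.0 * epsilon       # tensor-to-scalar ratio r = 16*epsilon

# all three are far below unity, consistent with slow-roll inflation
print(epsilon, eta, r_ts)
```

Any large $G$ makes all three quantities vanishingly small, which is the statement of Eq.~(\ref{m30}).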
\section{Reducing the model to quantum field theory prescriptions in the four-dimensional universe}\label{o4}
In this section, we will show that, by reducing the field theory in
eleven-dimensional M-theory to a field theory in the four-dimensional
universe, our model matches known models of gravity. To this
end, we use the following relations:
\begin{eqnarray}
&& \int d^{4}\sigma F_{abc}^{2}=\int d^{4}\sigma (\partial_{a}
A_{bc}-\partial_{b} A_{ca}+\partial_{c}
A_{ab})(F_{abc})=\nonumber\\&&-\int d^{4}\sigma
(A_{bc}\partial_{a}F_{abc} -A_{ca}\partial_{b}F_{abc} +
A_{ab}\partial_{c}F_{abc}) +\int d^{4}\sigma
\partial_{a}(O)\nonumber\\&& \int d^{4}\sigma F_{abc}^{3}=\int d^{4}\sigma (\partial_{a}
A_{bc}-\partial_{b} A_{ca}+\partial_{c}
A_{ab})(F_{abc}^{2})=\nonumber\\&&-\int d^{4}\sigma
(A_{bc}\partial_{a}F_{abc} -A_{ca}\partial_{b}F_{abc} +
A_{ab}\partial_{c}F_{abc})F_{abc} +\int
d^{4}\sigma\,\partial_{a}(O)\nonumber\\&& \int d^{4}\sigma
A_{bc}F_{abc}\partial_{a}F_{abc}=\int d^{4}\sigma
(\partial_{b}A_{c}-\partial_{c}A_{b})F_{abc}\partial_{a}F_{abc}=-\int
d^{4}\sigma
(A_{c}\partial_{b}F_{abc}-A_{b}\partial_{c}F_{abc})\partial_{a}F_{abc}\nonumber\\&&
\int d^{4}\sigma
\partial_{\alpha}\partial_{\beta}X^{i}F_{abc}=-\int d^{4}\sigma
\partial_{\beta}X^{i}\partial_{\alpha}F_{abc}+\int d^{4}\sigma
\partial_{\alpha}O
\label{m31}
\end{eqnarray}
Substituting the above relations into the action (\ref{m13}), we obtain:
\begin{eqnarray}
&& S_{M3-M4} = -T_{D3} \int d^{4}\sigma Tr (\Sigma_{a,b=0}^{3}
\Sigma_{i,j=4}^{9} \{\partial_{a}X^{i}\partial_{b}X^{j}+
\alpha^{2}
\partial_{a}\partial_{b}X^{i}\partial_{a}\partial_{b}X^{i} - \beta^{2} A_{ab}\partial_{a}F_{abc}+\beta^{3} A_{a}\partial_{b}F_{abc}\partial_{c}F_{abc}\nonumber \\&&
-\alpha \beta \partial_{a}X^{i} \partial_{b}F_{abc}+ \alpha^{2}
(X^{i}X^{j}\partial_{a}X^{j}\partial_{b}X^{i})+\alpha
(X^{i}X^{j})\partial_{a}F_{abc} +\alpha^{2}\beta
\partial_{a}\partial_{b}X^{i}\partial_{b}X^{i}\partial_{a}F_{abc}\nonumber
\\&&
+\varepsilon^{\alpha\beta\gamma}\varepsilon^{\alpha'\beta'\gamma'}X^{i}_{\alpha}X^{j}_{\beta}X^{k}_{\gamma}X^{i}_{\alpha'}X^{j}_{\beta'}X^{k}_{\gamma'}
-\beta^{3} A_{a}F_{abc}\partial_{c}\partial_{b}F_{abc} -\alpha^{2}\beta
\partial_{a}X^{i}\partial_{b}X^{i}\partial_{a}\partial_{b}F_{abc}\})
\label{m32}
\end{eqnarray}
At this stage, we can show that this action matches the action in
four-dimensional field theory by using the following mappings:
\begin{eqnarray}
&& X^{i}\rightarrow \phi \quad A_{ab}\rightarrow h_{ab} \quad
A_{a} \rightarrow e_{a}\nonumber\\&& h_{ab}=\sqrt{-g}h^{ab} \quad
g_{ab}=\eta_{ab}+h_{ab}\nonumber\\&& F_{abc}=\partial_{a}
A_{bc}-\partial_{b} A_{ca}+\partial_{c} A_{ab}\rightarrow
\nonumber\\&& F_{abc}=\partial_{a} h_{bc}-\partial_{b}
h_{ca}+\partial_{c} h_{ab}\rightarrow
\nonumber\\&&\partial_{a}F_{abc}=\partial_{a}^{2}
h_{bc}-\partial_{a}\partial_{b} h_{ca}+\partial_{a}\partial_{c}
h_{ab}\rightarrow \nonumber\\&& g^{bc}F_{abc}=\partial_{a}^{2}
h_{b}^{c}+..\rightarrow \sqrt{-g}R
\label{m33}
\end{eqnarray}
where $\phi$ is the scalar field, $h_{ab}$ is the graviton field,
and $g_{ab}$ is a component of the metric. Replacing the strings and
three-form fields by scalars and metric elements in four
dimensions, we get:
\begin{eqnarray}
&& S_{\text{field theory}} = -T_{D3} \int d^{4}\sigma Tr
(\Sigma_{a,b=0}^{3} \Sigma_{i,j=4}^{9}
\{\partial_{a}\phi\partial_{b}\phi+ \alpha^{2}
\partial_{a}\partial_{b}\phi\partial_{a}\partial_{b}\phi - \beta^{2}\sqrt{-g}R+\beta^{3} e_{a}\sqrt{-g}R^{2}\nonumber \\&&
-\alpha \beta \sqrt{-g} \partial_{a}\phi R+ \alpha^{2}
\phi^{2}\partial_{a}\phi\partial_{b}\phi+\alpha \phi^{2}
\sqrt{-g}R +\alpha^{2}\beta
\partial_{a}\partial_{b}\phi\partial_{b}\phi \sqrt{-g}R
-\beta^{3} e_{a}\sqrt{g}\partial_{a}h_{ab}\partial_{b}R+\nonumber \\&&\phi^{6}
-\alpha^{2}\beta \sqrt{-g}
\partial_{a}\phi\partial_{b}\phi\partial_{a}R\})
\label{m34}
\end{eqnarray}
Now, we can rewrite the above action as follows:
\begin{eqnarray}
&& S_{\text{field theory}} = -T_{D3} \int d^{4}\sigma
\{\sqrt{-g}F(R,\phi)+\partial_{a}\phi\partial_{b}\phi+V(\phi)\}
\label{m34}
\end{eqnarray}
where
\begin{eqnarray}
&& F(R,\phi)=( - \beta^{2}-\alpha
\beta\partial_{a}\phi+\alpha^{2}\beta
\partial_{a}\partial_{b}\phi\partial_{b}\phi)R+\beta^{3}
e_{a}R^{2}+(-\alpha^{2}\beta
\partial_{a}\phi\partial_{b}\phi -\beta^{3}
e_{a}\partial_{a}h_{ab})\partial_{a}R\nonumber\\&&
V(\phi)=\phi^{6}+ \alpha^{2}
\phi^{2}\partial_{a}\phi\partial_{b}\phi+ \alpha^{2}
\partial_{a}\partial_{b}\phi\partial_{a}\partial_{b}\phi
\label{m35}
\end{eqnarray}
This equation is very similar to the actions in $F(R)$ gravity
discussed in \cite{a28}. This means that, by redefining
the quantum fields in M-theory and obtaining their relations to
fields in the four-dimensional universe, the action of our model
matches the relevant action in the quantum field theory prescription.
\section{Summary and Discussion} \label{sum}
In this research, we have reconsidered the results of \cite{a2} in
a system of oscillating branes. We have argued that the universe
contracts and expands due to the interaction between branes. In our
model, first, $N$ fundamental strings transit to $N$ pairs of
M0-anti-M0-branes. Then, these branes glue to each other and build
a pair of M8-anti-M8-branes. This system is unstable and breaks, and
two anti-M4-branes, an M4-brane, an M3-brane and an M0-brane are
produced. The M4-brane is compactified around a circle, and the
M3-brane, on which our universe is located, wraps around it. The
M4-M3 system is located between the anti-M4-branes and oscillates.
As this system comes close to one of the anti-M4-branes, the M3
attaches to it at one end while staying stuck to the other
anti-M4-brane at its other end. Also, the squared mass of some
scalars becomes negative and they make a transition to tachyonic
states. To remove these states, the M4 rebounds and rolls, the M3
opens, and the expansion branch of the universe begins. When the M4
approaches the other anti-M4-brane, some other scalars gain negative
squared mass and a new tachyonic phase is created. To solve this
problem, the M4 rebounds again, the M3 wraps around it, and a new
contraction branch starts. We compare the energy-momentum tensor
derived in this model with the energy-momentum tensor at the present
stage of the universe and obtain the scale factor. We notice that
this scale factor becomes zero only in the limit $t\rightarrow -\infty$.
This means that the age of the universe may be infinite,
which is consistent with the prediction of \cite{a2}. Also, we show
that as the tachyonic states disappear, some energy is produced,
which leads to an acceleration in the opening of the M3-brane and
the expansion of the universe. Finally, by reducing the quantum
fields in eleven-dimensional M-theory to those in the
four-dimensional universe, we observe that our model is consistent
with the usual field theory.
\section*{Acknowledgments}
\noindent A. Sepehri would like to thank the Research
Institute for Astronomy and Astrophysics of Maragha, Iran, for
financial support during this work. We are very
grateful to Ali Mohammad for his lectures on cosmology, which gave
us new insight into this subject. We also thank the referee for
helpful comments that have improved our paper.
\section{Introduction}
We can observe a wide variety of patterns
formed by living things as self-propelled objects,
such as in a traffic jam~\cite{Kikumako, Bando, Sugiyama}, the large-scale ordering of swimming bacteria~\cite{Peng, Nishi},
a swarm of mosquitoes, a flock of birds and a school of fish~\cite{Vicsek, Vicsek2, Toner}.
Understanding the pattern formation induced by these collective motions is a challenging problem.
Similar behaviors also emerge in chemical systems,
such as microtubules~\cite{Sumino}, droplets~\cite{Thutupalli, Ohmura, Tanaka}, Janus particles~\cite{Nishi2, janus},
and camphor systems~\cite{Suematsu, Suematsu3, Nishimori, Soh, Soh2, Ikura, Kohira, Suematsu2, Nakata, Nagayama, Eric, Kitahata2, Lauga, Yui, Koyano, Heisler, NishiWakai}.
Self-propelled objects transform chemical energy into kinetic energy in non-equilibrium systems,
and move spontaneously as if they were alive.
Recently, many studies have reported on camphor boats as self-propelled particles in chemical systems~\cite{Suematsu, Nishimori, Kohira, Suematsu2, Nakata, Yui}.
A camphor boat is made of a plastic sheet attached to a camphor disk.
When the camphor boat is put on an aqueous surface, the camphor molecules dissolve from the disk under the boat and expand on the surface.
As the camphor molecules decrease the surface tension of the aqueous phase,
the camphor boat moves on the aqueous phase spontaneously due to a difference in surface tension around the boat.
There have been many experimental studies, as well as numerical ones, on the camphor boat.
Some of the numerical models are based on reaction-diffusion dynamics on the camphor concentration~\cite{Nakata, Nagayama, Eric, Heisler, NishiWakai},
and the others are based on fluid dynamics~\cite{Soh, Soh2, Lauga}.
These models could explain the experimental behaviors in a qualitative manner.
Basic physical quantities were necessary in order to realize a quantitative correspondence.
However, before Suematsu {\it et al.} measured these properties in experiments~\cite{Suematsu2},
it had been difficult to measure the driving force on the motion of the camphor boat,
i.e., the surface tension difference between the front and the back of the boat,
as well as the diffusion coefficient,
the supply rate of camphor molecules from the camphor disk to the water surface,
and the relaxation rate.
The results have allowed us to compare the experimental results with theoretical ones quantitatively,
and have provided a deep understanding of the interesting phenomena of the camphor boat.
However, they investigated only the situation with pure water as the aqueous phase.
Thus, we focused on the viscosity dependence of the motion of a camphor boat.
To change the viscosity of the aqueous solution under the camphor boat,
one can either control the temperature of the solution or use solutions with different solute concentrations.
We adopted the latter: we used aqueous glycerol solutions with several glycerol concentrations~\cite{Nagayama, Koyano},
and thus changed the viscosity of the base solution.
In this paper, we investigated the velocity $v$ of the camphor boat for several glycerol concentrations $p$,
and found that $v$ decreased with an increase in $p$.
In order to understand the $p$ dependence of $v$, we proposed a mathematical model.
The model showed a power law $v\sim\mu^{-1/2}$, where $\mu$ is the viscosity of the base solution.
Our experimental results satisfied the scaling relation obtained from the numerical model.
The agreement between the experimental result and the theoretical result for the viscosity dependence of $v$
provides an estimation of the concentration field around the camphor boat, which is difficult to measure directly in experiments.
\section{Experimental procedure}
A round-shape boat as shown in Figs.~\ref{fig:method}(a) and (b) was used to measure the velocity of the camphor boat.
The boat was composed of a plastic plate (thickness: 0.1 mm) and a camphor disk,
which was prepared by pressing camphor powder ((+)-Camphor, Wako, Japan)
using a pellet die set for the preparation of samples for Fourier transform-infrared (FT-IR) spectroscopy analysis.
The diameter and the thickness of the camphor disk were 3.0 mm and 1.0 mm, respectively.
The plastic plate was cut in a circle with a diameter of 6.0 mm,
and the camphor disk was attached to the edge of the flat circular plastic plate using an adhesive (Bath bond Q, KONISHI, Japan),
so that half of the camphor disk was outside the plastic sheet.
This round-shape camphor boat moved toward the direction of the plastic sheet.
An annular glass chamber was used,
which was composed of two petri dishes with different diameters as shown in Figs.~\ref{fig:method}(c) and (d).
Inner and outer diameters were 128.5 mm and 145.8 mm,
and the channel width of the chamber was thus 8.7 mm.
As it is known that the velocity is sensitive to the depth of water~\cite{Yui},
the chamber was put on a clear horizontal plate.
The solution, a mixture of glycerol (Glycerol, Wako, Japan) and water at several mass ratios $p$
(i.e., $p$ is the mass percentage of glycerol in the mixed solution),
was poured into the chamber so that its depth was 4.7 mm.
We investigated physical properties of the solution, such as the viscosity, the surface tension, and
the camphor solubility against glycerol concentration $p$.
The detailed results are shown in Appendix A.
The camphor boat was put on the surface of the solution in the glass chamber,
and then it started to move spontaneously.
For a visualization of the motion, a LED board was placed under the horizontal plate.
The motion of the boat was captured with a digital video camera (HDR-FX1, SONY, Japan) from the top of the chamber.
Obtained movies were analyzed using an image-processing system (ImageJ, National Institutes of Health, USA).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7cm,clip]{method.eps}
\end{center}
\caption{\label{fig:method}(Color online)
Schematic drawings of (a) top view and (b) side view of a camphor boat for the measurements of velocities,
(c) top view and (d) side view on the annular chamber.}
\end{figure}
\section{Experimental Results}
We investigated the velocity of the camphor boat on solutions of various glycerol concentrations $p$.
The position of the camphor boat is described as a radial angle $\theta$ in the annular chamber, as shown in Fig.~\ref{fig:velo}(a).
Analyses of the videos captured by the digital video camera provide the position $\theta$ at time $t$,
where $t=0$ corresponds to the time when the boat finished three laps along the chamber after the boat had been put on the surface of the solution.
In Fig.~\ref{fig:velo}(b), $\theta$ had a constant gradient in time,
that is to say, the camphor boat moved with a constant velocity.
Figure~\ref{fig:velo}(c) shows a time series of the angular velocity $\omega = \Delta\theta/\Delta t$,
where $\Delta t =1/30$ s for one frame of the video camera and $\Delta\theta$ is an angular difference between $t$ and $t+\Delta t$.
In Fig.~\ref{fig:velo}(b), the expanded plot is shown for the time region corresponding to the gray region in Fig.~\ref{fig:velo}(c).
The angular velocity $\omega$ in the region fluctuated around the average value 1.08 rad/s.
A similar tendency was observed at 50 s $\lesssim t \lesssim$ 200 s,
whereas $\omega$ increased with time and was noisy before $t\sim10$ s,
and $\omega$ began to decrease after $t\sim250$ s.
Therefore, we investigated $\omega$ at 60 s $\lesssim t \lesssim$ 180 s,
during which $\omega$ had almost a constant value for time.
Next, we investigated the angular velocity for $p$ as shown in Fig.~\ref{fig:velo}(d).
The vertical and horizontal axes in Fig.~\ref{fig:velo}(d) show the angular velocity $\overline{\omega}$ and concentration $p$.
Here, $\overline{\omega}$ was obtained from a linear fit of the time series shown in Fig.~\ref{fig:velo}(b).
The values of the errors for each $\overline{\omega}$ were lower than $10^{-3}$ rad/s.
As shown in Fig.~\ref{fig:velo}(d), $\overline{\omega}$ decreased with an increase in $p$.
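Since $\overline{\omega}$ is extracted by a linear fit of the $\theta(t)$ series, that step can be sketched as follows. The data here are synthetic (the raw tracking data are not reproduced in the paper); the slope of 1.08 rad/s and the noise level are assumed values chosen to mimic the measurement quoted above.

```python
import numpy as np

# Hedged sketch: mean angular velocity from a linear fit of theta(t).
# Synthetic theta(t) over the analysis window 60 s <= t <= 180 s at 30 fps;
# the true slope 1.08 rad/s and noise amplitude are assumptions.
t = np.arange(60.0, 180.0, 1.0 / 30.0)
rng = np.random.default_rng(0)
theta = 1.08 * t + 0.02 * rng.standard_normal(t.size)

# np.polyfit returns coefficients highest degree first: (slope, intercept)
omega_bar, intercept = np.polyfit(t, theta, 1)
```

With thousands of frames in the window, the fitted slope recovers the underlying angular velocity to well within the $10^{-3}$ rad/s error level quoted in the text.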
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=14cm,clip]{velo.eps}
\end{center}
\caption{\label{fig:velo}(Color online)
(a) Snapshot of the camphor boat motion.
(b) Time series of the position $\theta$ of a camphor boat moving on water ($p=0$),
where $\theta$ is the angle shown in Fig.~\ref{fig:velo}(a).
(c) Time series of angular velocity $\omega$ of the camphor boat, where $\omega=\Delta\theta/\Delta t$ for each frame.
The gray region corresponds to the time range shown in Fig.~\ref{fig:velo}(b).
(d) Dependence of $\overline{\omega}$ on $p$, where $p$ is the glycerol concentration
and $\overline{\omega}$ is the angular velocity obtained from linear fitting of time series as shown in Fig.~\ref{fig:velo}(b).}
\end{figure*}
\section{Mathematical Model \label{sec:model}}
The glycerol concentration $p$ of the solution was controlled in our experiments, which led to a change in the viscosity $\mu$ shown in Appendix A.
In this section, we consider a viscosity dependence of the camphor boat velocity.
Now, the annular glass chamber used in our experiments is regarded as a one-dimensional channel of infinite length.
The equation of motion of the camphor boat in a one-dimensional system (with spatial coordinate $x$) is given as
\begin{align}
m\frac{d^2X}{dt^2} = -h\frac{dX}{dt}+F,
\label{eq:motion}
\end{align}
where $m$, $X$, $h$ and $F$ are the mass,
the center of mass,
the friction coefficient of the camphor boat,
and the driving force exerted on the moving camphor boat, respectively.
We assume that $h$ is proportional to the viscosity $\mu$, i.e., $h=K\mu$, where $K$ is a positive constant.
This assumption has been used in many previous papers \cite{Eric,Koyano,Nagayama,Nakata,Nishimori,Suematsu,Suematsu2,Kohira,Soh,NishiWakai,Heisler},
and it was also reported that the viscous drag on a thin film moving in a Newtonian fluid obeys a linear relationship with the fluid viscosity \cite{stone}.
Therefore, we consider the assumption $h=K\mu$ to be natural \cite{viscous drag}.
The driving force $F$ is described as
\begin{align}
F = w[\gamma(c(X+r+\ell))-\gamma(c(X-r))],
\label{eq:driving}
\end{align}
where $w$ is the width of the camphor disk.
Here, the positions of the front and the back of the boat are $x=X+r+\ell$ and $x=X-r$,
where $r$ and $\ell$ are the radius of the disk and the size of the boat, as defined in Fig.~\ref{fig:model}.
The surface tension $\gamma$ depends on the concentration $c$ of the camphor molecules at the surface of the solution,
and we assume the linear relation as
\begin{align}
\gamma=\gamma_0-\Gamma c,
\label{eq:surface}
\end{align}
where $\gamma_0$ is the surface tension of the base solution without camphor and $\Gamma$ is a positive constant.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7cm,clip]{model.eps}
\end{center}
\caption{\label{fig:model}
Illustration of side view of a camphor boat.}
\end{figure}
The time evolution on the camphor concentration $c$ is shown as
\begin{align}
\frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2}-ac+f(x-X),
\label{eq:concentration}
\end{align}
where $a$ is the sum of the sublimation rate and the dissolution rate of the camphor molecules at the solution surface, $D$ is the diffusion coefficient of the camphor molecules, and $f$ denotes the dissolution rate of the camphor molecules from the camphor disk to the aqueous solution surface.
As for the term $f(z)$, we apply the following description,
\begin{align}
f(z) = \begin{cases}
f_0, & ({-r < z <r}),\\
0, & (\text{otherwise}).
\end{cases}
\label{eq:provide2}
\end{align}
That is to say, the dissolution of camphor molecules from the disk occurs at $-r < z < r$.
The above equation does not include the Marangoni effect directly,
although the flow has an influence on the camphor concentration.
A previous paper \cite{Kitahata2} showed that Eq.~(\ref{eq:concentration}) is reasonable
if $D$ is interpreted as a spatially uniform effective diffusion coefficient of the camphor that includes the transport by the flow.
In addition, this spatially uniform effective diffusion coefficient is supported by the experimental result that the diffusion length is proportional to the square root of the elapsed time \cite{Suematsu}.
\section{Theoretical analysis}
Our experimental results showed that the camphor boat moved with a constant velocity in time as shown in Fig.~\ref{fig:velo}.
Thus, we should consider solutions for the motion of the camphor boat with a constant velocity $v$ in $x$-direction,
i.e. $X = vt$.
From this condition, Eq. (\ref{eq:motion}) leads to
\begin{align}
-hv+F=0.
\label{eq:motion1}
\end{align}
By setting $\xi=x-vt$ and $c=c(\xi)$, Eq. (\ref{eq:concentration}) provides
\begin{align}
-v\frac{dc}{d\xi} = D \frac{d^2 c}{d\xi^2}-ac+f(\xi).
\label{eq:concentration1}
\end{align}
Equation (\ref{eq:concentration1}) leads to the following solutions
\begin{align}
c(\xi) = \begin{cases}
\beta_1 \exp \big(\lambda_-(\xi-r)\big), & ({\xi > r}),\\
\dfrac{f_0}{a}+\alpha_2 \exp\big(\lambda_+\xi\big)+\beta_2
\exp\big(\lambda_-\xi\big), & ({-r <\xi < r}),\\
\alpha_3 \exp \big(\lambda_+(\xi+r)\big), & ({\xi < -r}),
\end{cases}
\label{eq:provide}
\end{align}
where
\begin{align}
\lambda_\pm= -\frac{v}{2D}\pm\frac{\sqrt{v^2+4Da}}{2D},
\label{eq:lambda_pm}
\end{align}
\begin{align}
\beta_1 = \frac{f_0\lambda_+}{a(\lambda_+-\lambda_-)}(1-\exp(2\lambda_-r)),
\label{eq:beta1}
\end{align}
\begin{align}
\alpha_2= \frac{f_0\lambda_-\exp(-\lambda_+r)}{a(\lambda_+-\lambda_-)},
\label{eq:alpha2}
\end{align}
\begin{align}
\beta_2 = \frac{f_0\lambda_+\exp(-\lambda_-r)}{a(\lambda_+-\lambda_-)},
\label{eq:beta2}
\end{align}
\begin{align}
\alpha_3 = -\frac{f_0\lambda_-}{a(\lambda_+-\lambda_-)}(1-\exp(-2\lambda_+r)).
\label{eq:alpha1}
\end{align}
Equations (\ref{eq:provide})-(\ref{eq:alpha1}) provide
\begin{align}
F =& -\Gamma w \left[\beta_1 \exp \left(\lambda_-\ell \right)-\alpha_3
\right]
\nonumber \\
=&-\frac{\Gamma w f_0}{a \left(\lambda_+-\lambda_-\right)}
\left[\lambda_+ \left(1-\exp \left(2\lambda_-r \right) \right) \exp
\left(\lambda_-\ell \right) \right. \nonumber\\
& \left. + \lambda_- \left(1-\exp \left(-2\lambda_+r\right) \right) \right].
\end{align}
As $v$ is sufficiently large in our experiments, we assume $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$.
Then, $\lambda_+\sim a/v$ and $\lambda_-\sim -v/D$, which lead to
\begin{align}
F = & -\frac{\Gamma w f_0}{a(v/D)} \left[\frac{a}{v} \left(1-\exp
\left(-\frac{2vr}{D} \right)\right)\exp\left(-\frac{v}{D}\ell\right)
\right. \nonumber \\ &
\left.-\frac{v}{D}
\left(1-\exp\left(-\frac{2ar}{v}\right)\right)\right]
\nonumber\\
\simeq & -\frac{\Gamma wf_0D}{av}\left(-\frac{v}{D}
\right)\left(\frac{2ar}{v} \right)
\nonumber\\
= & \frac{2\Gamma wf_0r}{v}.
\label{eq:F}
\end{align}
As $F = K\mu v$ from Eq.~(\ref{eq:motion1}),
\begin{align}
K\mu v = \frac{2\Gamma w f_0 r}{v}.
\label{eq:F1}
\end{align}
From Eq.~(\ref{eq:F1}), we obtain
\begin{align}
v=\sqrt{\frac{2\Gamma wf_0r}{K\mu}}.
\label{eq:v}
\end{align}
Equation~(\ref{eq:v}) shows a power law $v\propto\mu^{-1/2}$, if other parameters such as $\Gamma, w$ and $f_0$ are independent of $\mu$.
The power law with the index $-1/2$ is an interesting result,
since Stokes relation naturally suggests another relation; $v \propto \mu^{-1}$~\cite{fluid}.
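The algebra leading to Eq.~(\ref{eq:v}) can be checked numerically. The sketch below (with assumed order-one parameter values, not fitted to the experiment) verifies the root identities of $D\lambda^2 + v\lambda - a = 0$ behind Eq.~(\ref{eq:lambda_pm}), the fast-boat asymptotics $\lambda_+ \simeq a/v$ and $\lambda_- \simeq -v/D$, and the $\mu^{-1/2}$ scaling:

```python
import math

# lambda_pm = -v/(2D) +/- sqrt(v^2 + 4 D a)/(2D), roots of D l^2 + v l - a = 0
def lambdas(v, D, a):
    s = math.sqrt(v * v + 4.0 * D * a)
    return (-v + s) / (2.0 * D), (-v - s) / (2.0 * D)

v, D, a = 5.0, 1.0, 1.0          # assumed illustrative values with v^2 >> a D
lp, lm = lambdas(v, D, a)
print(lp, lm)                     # product must be -a/D, sum must be -v/D

# v = sqrt(2 Gamma w f0 r / (K mu)): quadrupling mu must halve v
def v_of(mu, Gamma=1.0, w=1.0, f0=1.0, r=1.0, K=1.0):
    return math.sqrt(2.0 * Gamma * w * f0 * r / (K * mu))
```

The fast-boat limit $\lambda_+ \simeq a/v$, $\lambda_- \simeq -v/D$ holds here to within a few percent, which is the regime used in the approximation of $F$ above.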
\section{Numerical results}
In the theoretical analysis, we have assumed a solution depending on $\xi = x - vt$. However, the supposed mathematical model has other symmetries, and whether the considered solution depending on $\xi$ is an attractor should be checked. Therefore, we performed numerical calculations based on the equations in Sec.~\ref{sec:model}. For the numerical calculation, we considered a one-dimensional array with a spatial step of $\Delta x = 0.1$. The spatial size of the considered system was 1000 with a periodic boundary condition, and we adopted the Euler method with a time step of $\Delta t = 10^{-3}$. As for the spatial derivative, we used an explicit method. The parameters were set to be $m = 0.1$, $w = 1$, $\Gamma = 1$, $r=1$, $\ell = 1$, $D = 1$, $a = 1$, and $f_0 = 1$. In the discretization process, first-order interpolation was adopted for Eqs.~\eqref{eq:driving} and \eqref{eq:provide2}. The parameter $h$ corresponding to the viscosity $\mu$ was changed, and we investigated the time development of the camphor boat position and the camphor concentration profile.
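A scheme of this kind can be sketched compactly. The Python fragment below integrates the coupled boat/concentration equations with the explicit Euler method on a periodic grid; the domain length, grid spacing, value of $h$, and run time are assumptions chosen so the run finishes quickly, smaller than the system actually used in the paper.

```python
import numpy as np

# Minimal 1-D explicit-Euler sketch of the coupled model:
#   m X'' = -h X' + F,   dc/dt = D c_xx - a c + f(x - X)
# Domain length L, h = 0.05, and t_max = 50 are assumed (smaller than the
# paper's system of size 1000 run to t = 1000); other parameters follow
# the values quoted in the text.
dx, dt, L = 0.2, 1.0e-3, 100.0
x = np.arange(0.0, L, dx)
N = x.size
m, w, Gamma, r, ell = 0.1, 1.0, 1.0, 1.0, 1.0
D, a, f0, h = 1.0, 1.0, 1.0, 0.05      # h = K*mu plays the role of viscosity

c = np.zeros(N)                        # camphor surface concentration
X, V = 0.5 * L, 0.0                    # boat position and velocity (at rest)

def conc_at(pos):
    """First-order (linear) interpolation of c on the periodic grid."""
    p = (pos % L) / dx
    i = int(p)
    f = p - i
    return c[i % N] * (1.0 - f) + c[(i + 1) % N] * f

v_mid = None
for step in range(int(50.0 / dt)):
    z = (x - X + 0.5 * L) % L - 0.5 * L              # periodic distance to boat
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    c += dt * (D * lap - a * c + f0 * (np.abs(z) < r))
    # F = w[gamma(c(X+r+ell)) - gamma(c(X-r))] with gamma = gamma0 - Gamma*c
    F = w * Gamma * (conc_at(X - r) - conc_at(X + r + ell))
    V += dt * (F - h * V) / m
    X += dt * V
    if step == int(40.0 / dt):
        v_mid = V                                     # velocity near t = 40
```

Starting from rest, the asymmetric sampling of the concentration field sets the boat moving, and the velocity saturates to a constant value of the same order as the analytical estimate $\sqrt{2\Gamma w f_0 r/h}$, up to the finite-$2ar/v$ corrections discussed in the text.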
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7cm,clip]{fig_sim.eps}
\end{center}
\caption{\label{fig:sim}Numerical results. (a) Time course of camphor boat velocity $dX/dt$ for $h=0.01$. (b) Camphor concentration profile $c(x)$ for $h=0.01$ at $t=1000$, when the camphor boat velocity reached a constant value. The position of the camphor boat was $X \simeq 188.6$. (c) Final velocity ($t = 1000$) depending on $h$, which is proportional to viscosity. The power law $v \propto h^{-1/2}$ holds for smaller $h$.}
\end{figure}
In Fig.~\ref{fig:sim}, the numerical results are shown. In Fig.~\ref{fig:sim}(a), the time development of the camphor boat velocity is shown. The camphor boat velocity saturated to a constant value. The camphor concentration profile after the velocity became constant ($t = 1000$) is shown in Fig.~\ref{fig:sim}(b). The camphor concentration profile was asymmetric with regard to the camphor boat position $x = X \simeq 188.6$. After reaching a constant velocity, the concentration profile did not change its shape but shifted in the positive $x$-direction. Thus, we can infer that the solution depending on $\xi = x - vt$ is an attractor of this system. We have also confirmed that the solution converged to this attractor from other initial conditions (data not shown). The mathematical analysis of this convergence to the solution depending on $\xi$ remains open, and it may be possible to approach such a mathematical problem by considering Lie group symmetry \cite{Olver}.
The final velocity against $h$ is shown in Fig.~\ref{fig:sim}(c). For the regime of $h$ smaller than 0.1, the power law $v \propto h^{-1/2}$ held, where $h$ is proportional to the viscosity $\mu$ in the present framework. In the theoretical analysis, we assumed $r \ll 1/\lambda_+$ and $\ell \gg 1/\left|\lambda_- \right|$, which is equivalent to $aD/v^2 \ll 1$, as will be discussed in detail in the following section. Since the final velocity is nearly equal to 5 for $h \sim 0.1$, and $a = D = 1$, the deviation from the power law originates from the breakdown of this assumption in the analysis.
\section{Discussion}
Our model showed a power law $v\sim\mu^{-1/2}$ under the assumptions that $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$.
In this section, we compare experimental results with the numerical results in Eq.~(\ref{eq:v}) in order to check whether our model is reasonable.
Equation~(\ref{eq:v}) has several parameters such as $\Gamma$, $w$, $f_0$, $r$, $K$, and $\mu$.
Since similar camphor boats were used, $w$, $r$, and $K$ were constant values in our experiments.
We investigated the dependence of the other parameters, i.e., $\Gamma$, $f_0$, and $\mu$, on the glycerol concentration $p$ in Appendix A.
Equation~(\ref{eq:surface}) showed $\Gamma=(\gamma_0 - \gamma)/c$.
As $(\gamma_0 - \gamma)$ was independent of $p$ in our measurements,
we considered that $\Gamma$ was constant.
The supply rate $f_0$ corresponds to $\Delta M$,
which is a loss of a camphor disk per unit time in our experiments,
and we found that $\Delta M$ decreased with an increase in $p$.
The viscosity $\mu$ of the base solution increased with $p$.
Thus, $f_0$ and $\mu$ in Eq.~(\ref{eq:v}) are functions of $p$.
In addition, the angular velocity is proportional to the camphor boat velocity in our experiments.
From the above discussion, Eq.~\eqref{eq:v} leads to
\begin{align}
\overline{\omega}(p) \propto\sqrt{\frac{\Delta M(p)}{\mu(p)}}.
\label{eq:v2}
\end{align}
Figure~\ref{power} shows a relationship between $\Delta M/\mu$ and $\overline{\omega}$ obtained from our experiments.
The result agrees well with the solid line given by Eq.~(\ref{eq:v2})~\cite{Delta_M}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7cm,clip]{power.eps}
\end{center}
\caption{\label{power}(Color online)
Relationship between $\Delta M/\mu$ and $\overline{\omega}$,
where $\Delta M$, $\mu$, and $\overline{\omega}$ are the weight loss of a camphor disk per second,
the viscosity of the base solution, and the angular velocity of the camphor boat, respectively.
The solid line shows the numerical result; $\overline{\omega}\sim\sqrt{\Delta M/\mu}$ in Eq.~\eqref{eq:v2}.}
\end{figure}
The power law was obtained under the assumptions that $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$,
which is equivalent to $aD/v^2\ll1$.
Since $\sqrt{D/a}$ corresponds to the characteristic decay length of the camphor concentration profile,
and $v/a$ is the distance the camphor boat travels during the characteristic time over which the concentration field keeps its memory,
the assumption means that the characteristic length of the camphor concentration profile is sufficiently smaller than that of the camphor boat motion.
In such a case, the camphor concentration profile should be asymmetric with respect to the camphor particle position.
Here, we confirm the validity of these assumptions for our experiments.
We need the values of the parameters $a$, $D$, and $v$ appearing in the assumption.
We used a rectangular camphor boat and chalk powder in the measurement of $D$.
The boat was put on the solution surface covered with the chalk powder, and the camphor diffused over the solution.
The diffusion was visualized by the chalk powder.
We analyzed the videos of the powder motion and estimated $D$.
The measurement method is similar to that in a previous study~\cite{Suematsu2}.
The effective diffusion coefficient $D$ against $p$ is shown in Appendix B,
which shows that $D$ decreases with an increase in $p$.
For $a$, $a=1.8\times10^{-2}$ s$^{-1}$ was used, which was based on the experimental observation reported in the previous work~\cite{Suematsu2}.
Using these data, the relationship between $p$ and $aD/{v^2}$ was obtained as shown in Fig.~\ref{fig:compare}.
The result shows that the values of $aD/{v^2}$ were sufficiently smaller than 1 for all $p$,
which suggests that our assumption is reasonable.
The result provides the following consideration:
the camphor concentration around the boat is quite asymmetric,
and the decay length of the concentration field at the back of the boat is sufficiently greater than that at the front.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=6cm,clip]{compare.eps}
\end{center}
\caption{\label{fig:compare}
Relationship between $p$ and $aD/v^2$,
where $a$, $D$, and $v$ correspond to the sum of sublimation rate and dissolution rate of camphor molecules on an aqueous surface, effective diffusion coefficient, and velocity of a camphor boat, respectively.
$aD/v^2$ was much smaller than 1, which suggests our approximation is valid.}
\end{figure}
There have been many analytical studies on the collective motion of symmetric camphor disks,
in both experiments and theoretical analyses~\cite{Nishimori,Eric,Ikura,NishiWakai}.
There have also been some studies on asymmetric camphor boats, in which numerical calculations
for both the concentration field and the camphor boat positions were performed,
and an analytical approach under the assumption of slow velocity was
developed~\cite{Suematsu,Heisler}. In contrast to these studies, we
worked under the assumption of fast velocity, and this assumption
was justified by the experimental observations. This would enable an
analytical approach to the collective motion of camphor boats with
fast velocities. Therefore, our model would provide a deep understanding
of the collective motion of not only camphor boats but also living things.
\section{Conclusion}
We investigated the velocity $v$ of an asymmetric camphor boat for several glycerol concentrations $p$ of the glycerol aqueous solution.
In order to understand the dependence of the camphor boat velocity $v$ on the glycerol concentration $p$,
we discussed a numerical model based on a reaction-diffusion equation.
When it is assumed that the characteristic length of the camphor concentration at the front of the boat is shorter than that at the rear,
$v$ should obey a power law $v\sim\mu^{-1/2}$,
where $\mu$ is the viscosity of the base solution.
The power law agreed with experimental results,
and it was also confirmed that our assumption in the model was reasonable through a comparison with our experimental results.
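As a heuristic consistency check (a sketch only, under the additional assumption, not derived here, that the effective diffusion coefficient scales inversely with the viscosity, $D\propto 1/\mu$, in the spirit of the Stokes--Einstein relation): since the dimensionless combination $aD/v^2$ characterizes the asymmetry of the concentration field, a regime in which it remains at a small, roughly constant value corresponds to
$$v\propto\sqrt{aD}\propto\mu^{-1/2},$$
which is consistent with the power law above.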
Using our proposed model, we can discuss the profile of the camphor concentration, which is difficult to measure directly in experiments.
Thus, our experiment has profound significance for the estimation of the concentration through measurements of the velocity.
As a future topic, it would be worth investigating whether the same power law $v\sim\mu^{-1/2}$ persists at smaller values of $v$ in experiments, for instance with an increased boat size.
In addition, we considered that the hydrodynamic effect was included in the effective diffusion coefficient in this paper.
However, it would also be important to consider the fluid flow around the boat
when we study the behavior of two or more camphor boats as collective motion.
As future work, it would also be interesting to consider the hydrodynamic interaction in a multiple-camphor-particle system.
\begin{acknowledgments}
This work was supported by Y. Koyano.
MS would like to thank Samantha Hawkins of Fukuoka Institute of Technology for proofreading this manuscript.
This work was supported by JSPS KAKENHI Grant Numbers JP18K11338, JP18K03572, JP25103008 and JP15K05199.
\end{acknowledgments}
\section{Introduction}
The Tur\'an number of a graph $H$, denoted by $\text{ex}(n, H)$, is the maximum number of edges in an $n$-vertex graph that does not contain $H$ as a subgraph. Let $\text{EX}(n,H)$ denote the set of extremal graphs, i.e., the set of all $n$-vertex $H$-free graphs $G$ such that $e(G)=\text{ex}(n,H)$.
A systematic study of problems of this type started after Tur\'an determined and characterized $\text{EX}(n,K_{r+1})$. The case $r=2$ was solved by Mantel in 1907.
\begin{theorem}\cite{MAN}\label{mantel}
The maximum number of edges in an $n$-vertex triangle-free graph is $\floor{\frac{n^2}{4}}$. Furthermore, the only triangle-free graph with $\floor{\frac{n^2}{4}}$ edges is the complete bipartite graph $K_{\floor{\frac{n}{2}},\ceil{\frac{n}{2}}}$.
\end{theorem}
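As a small illustration of Theorem \ref{mantel}: for $n=4$ the bound is $\floor{16/4}=4$, attained by the $4$-cycle $K_{2,2}$, and for $n=5$ it is $\floor{25/4}=6$, attained by $K_{2,3}$.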
The Tur\'an graph, $T_r(n)$, is an $n$-vertex complete $r$-partite graph whose parts have as equal as possible sizes. Precisely speaking, the graph has ($n\ \text{mod}\ r$) parts of size $\lceil{n/r}\rceil$ and $r-(n\ \text{mod}\ r)$ parts of size $\lfloor{n/r}\rfloor$. Denote $e(T_r(n))$ by $t_r(n)$. Tur\'an proved the following fundamental result in the study of extremal graph theory:
\begin{theorem}\cite{turan1941external}\label{turan}
For an $n$-vertex $K_{r+1}$-free graph $G$, $$e(G)\leq t_{r}(n),$$
and equality holds if and only if $G$ is the Tur\'an graph $T_r(n)$, i.e., \\ $\ex(n,K_{r+1})=t_r(n)$ and $\EX(n,K_{r+1})=T_r(n)$.
\end{theorem}
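For example, for $n=10$ and $r=3$, the Tur\'an graph $T_3(10)$ has parts of sizes $4$, $3$, and $3$, so
$$t_3(10)=\binom{10}{2}-\binom{4}{2}-2\binom{3}{2}=45-6-6=33,$$
i.e., every $K_4$-free graph on $10$ vertices has at most $33$ edges.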
In 1966, Erd\H{o}s, Stone, and Simonovits determined the asymptotic value of $\ex(n, H)$, where $H$ is a non-bipartite graph.
\begin{theorem}\cite{EDR1,EDR2} \label{ess}
Let $H$ be a non-bipartite graph. Then
$$\ex(n,H)=\left(1-\frac{1}{\chi(H)-1}\right){n\choose 2}+o(n^2),$$
where $\chi(H)$ denotes the chromatic number of $H$.
\end{theorem}
\begin{definition}
The Triangular Pyramid with $k$ layers, denoted by $TP_k$, is defined as follows: Draw $k+1$ paths in layers such that the first layer is a $1$-vertex path, the second layer is a $2$-vertex path,\dots, and the $(k+1)^{st}$ layer is a $(k+1)$-vertex path. Label the vertices from left to right of the $i^{th}$ layer's path as $x_1^{i},x_2^{i},\dots,x_i^{i}$, where $i\in\{1,2,3,\dots, k+1\}$.
The vertex set of the graph $TP_k$ is the set of all vertices of the $(k+1)$ paths. The edge set contains all the edges of the paths. Additionally, for any two consecutive $(i-1)^{th}$ and $i^{th}$ layer, $x_r^{i-1}x_r^{i}$ and $x_r^{i-1}x_{r+1}^{i}$ are in $E(TP_k)$, where $i\in\{1,2,\dots,k+1\}$ and $1\leq r\leq i-1$ (see Figure \ref{PT3PT5}).
\end{definition}
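From the definition, the order and size of $TP_k$ can be computed directly: it has $\sum_{i=1}^{k+1} i=\frac{(k+1)(k+2)}{2}$ vertices, and its edges consist of $\sum_{i=1}^{k+1}(i-1)=\frac{k(k+1)}{2}$ path edges within the layers together with $\sum_{i=2}^{k+1}2(i-1)=k(k+1)$ edges between consecutive layers, so that $e(TP_k)=\frac{3k(k+1)}{2}$. In particular, $TP_1$ is a triangle, $TP_2$ has $6$ vertices and $9$ edges, and $TP_3$ has $10$ vertices and $18$ edges.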
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[fill=black](0,0)circle(6pt);
\draw[fill=black](-2,-2)circle(6pt);
\draw[fill=black](2,-2)circle(6pt);
\draw[fill=black](-4,-4)circle(6pt);
\draw[fill=black](0,-4)circle(6pt);
\draw[fill=black](4,-4)circle(6pt);
\draw[fill=black](-6,-6)circle(6pt);
\draw[fill=black](-2,-6)circle(6pt);
\draw[fill=black](2,-6)circle(6pt);
\draw[fill=black](6,-6)circle(6pt);
\draw[thick](0,0)--(-6,-6)(2,-2)--(-2,-6)(4,-4)--(2,-6)(0,0)--(6,-6)(-2,-2)--(2,-6)(-4,-4)--(-2,-6);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-6,-6)--(6,-6);
\node at (0,-9) {$TP_3$};
\node at (0,1) {$x_1^{1}$};
\node at (-3,-1.8) {$x_1^{2}$};
\node at (3,-1.8) {$x_2^{2}$};
\node at (-5,-3.8) {$x_1^{3}$};
\node at (0,-3) {$x_2^{3}$};
\node at (5,-3.8) {$x_3^{3}$};
\node at (-7,-6.2) {$x_1^{4}$};
\node at (-2,-6.8) {$x_2^{4}$};
\node at (2,-6.8) {$x_3^{4}$};
\node at (7,-6.2) {$x_4^{4}$};
\end{tikzpicture} \qquad\qquad\qquad
\begin{tikzpicture}[scale=0.3]
\draw[fill=black](0,0)circle(7pt);
\draw[fill=black](-2,-2)circle(7pt);
\draw[fill=black](2,-2)circle(7pt);
\draw[fill=black](-4,-4)circle(7pt);
\draw[fill=black](0,-4)circle(7pt);
\draw[fill=black](4,-4)circle(7pt);
\draw[fill=black](-6,-6)circle(7pt);
\draw[fill=black](-2,-6)circle(7pt);
\draw[fill=black](2,-6)circle(7pt);
\draw[fill=black](6,-6)circle(7pt);
\draw[fill=black](-8,-8)circle(7pt);
\draw[fill=black](-4,-8)circle(7pt);
\draw[fill=black](0,-8)circle(7pt);
\draw[fill=black](4,-8)circle(7pt);
\draw[fill=black](8,-8)circle(7pt);
\draw[fill=black](-10,-10)circle(7pt);
\draw[fill=black](-6,-10)circle(7pt);
\draw[fill=black](-2,-10)circle(7pt);
\draw[fill=black](2,-10)circle(7pt);
\draw[fill=black](6,-10)circle(7pt);
\draw[fill=black](10,-10)circle(7pt);
\draw[thick](0,0)--(-10,-10)(2,-2)--(-6,-10)(4,-4)--(-2,-10)(6,-6)--(2,-10)(8,-8)--(6,-10);
\draw[thick](0,0)--(10,-10)(-2,-2)--(6,-10)(-4,-4)--(2,-10)(-6,-6)--(-2,-10)(-8,-8)--(-6,-10);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-6,-6)--(6,-6)(-8,-8)--(8,-8)(-10,-10)--(10,-10);
\node at (0,-13) {$TP_5$};
\end{tikzpicture}
\caption{Triangular Pyramids with $3$ and $5$ layers respectively.}
\label{PT3PT5}
\end{figure}
For $k\geq 1$, the chromatic number of $TP_{k}$ is $3$. Hence, by Theorem \ref{ess}, we have $\ex(n,TP_{k})=\frac{n^{2}}{4}+o(n^{2})$. Yet, it remains interesting to determine the exact value of $\ex(n,TP_{k})$. The graph $TP_1$ is a triangle, and by Mantel's Theorem, $\ex(n,TP_1)=\floor{\frac{n^2}{4}}$. The graph $TP_2$ is the flattened tetrahedron. Liu \cite{LIU} determined $\ex(n,TP_2)$ for sufficiently large values of $n$. Later, C. Xiao, G. O. H. Katona, J. Xiao, and O. Zamora~\cite{XIAO} determined $\ex(n,TP_2)$ for small values of $n$.
\begin{theorem}\cite{XIAO}
The maximum number of edges in an $n$-vertex $TP_2$-free graph ($n\neq 5$) is,
$$ \ex(n,TP_2)=\left\{
\begin{aligned}
&\left\lfloor\frac{n^{2}}{4}\right\rfloor+\left\lfloor\frac{n}{2}\right\rfloor,& n\not\equiv2~(\bmod~4), \\
&\frac{n^{2}}{4}+\frac{n}{2}-1, & n\equiv2~(\bmod~4).
\end{aligned}
\right.
$$
\end{theorem}
In this paper, we study the Tur\'an number for $TP_{3}$, i.e. the Triangular Pyramid with three layers.
\begin{theorem}\label{maintheorem}
The maximum number of edges in an $n$-vertex $TP_3$-free graph is,
$$\ex(n,TP_3)= \frac{1}{4}n^2+n+o(n).$$
\end{theorem}
It can be checked that the constructions given in Figures \ref{fig1}, \ref{fig2} and \ref{fig3} are $TP_3$-free graphs containing $\frac{1}{4}n^2+n+1$, $\frac{1}{4}n^2+n+\frac{3}{4}$ and $\frac{1}{4}n^2+n$ edges respectively. Thus, the bound in Theorem \ref{mainthm} is best possible in the linear term, for infinitely many $n$.
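To illustrate the edge count, consider the construction of Figure \ref{fig3} with $6\mid n$: each side consists of $\frac{n}{6}$ disjoint triangles on $\frac{n}{2}$ vertices, contributing $3\cdot\frac{n}{6}\cdot 2=n$ edges in total, while the complete bipartite graph between the two sides contributes $\left(\frac{n}{2}\right)^2=\frac{n^2}{4}$ edges, giving $\frac{n^2}{4}+n$ edges, as claimed.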
\section{Notations}
All the graphs we consider in this paper are simple and finite. Let $G$ be a graph. We denote the sets of vertices and edges of $G$ by $V(G)$ and $E(G)$, respectively. The numbers of edges and vertices are denoted by $e(G)$ and $v(G)$, respectively. We denote the degree of a vertex $v$ by $d(v)$, the minimum degree of $G$ by $\delta(G)$, and the neighborhood of $v$ by $N(v)$. Let $H$ be a subgraph of $G$ and let $v$ be a vertex of $H$. We denote the set of vertices adjacent to $v$ in $H$ by $N_H(v)$. Let $x_1, x_2,\dots, x_k$ be $k$ vertices in $H$. The set of vertices of $H$ adjacent to all of $x_1, x_2,\dots, x_k$ is denoted by $N^*_H(x_1,x_2,\dots,x_k)$. For brevity, we may omit the subscript whenever the graph in question is clear. Let $A$ and $B$ be subsets of $V(G)$; the number of edges between them is denoted by $e(A, B)$. We denote the cycle of length $6$ by $C_6$ and refer to it as a $6$-cycle. A $7$-wheel, denoted by $W_7$, is a $7$-vertex graph consisting of a $C_6$ and a vertex adjacent to all vertices of the cycle.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.1]
\draw[thick](0,5)--(-4.8,1.5)--(-2.9,-4)--(2.9,-4)--(4.8,1.5)--(0,5);
\draw[thick](0,5)--(2.9,-4)(0,5)--(-2.9,-4)(2.9,-4)--(-4.8,1.5)(-2.9,-4)--(4.8,1.5)(4.8,1.5)--(-4.8,1.5);
\draw[thick](15,5)--(10.2,1.5)--(12.1,-4)--(17.9,-4)--(19.8,1.5)--(15,5);
\draw[thick](15,5)--(17.9,-4)(15,5)--(12.1,-4)(17.9,-4)--(10.2,1.5)(12.1,-4)--(19.8,1.5)(19.8,1.5)--(10.2,1.5);
\draw[thick](40,5)--(35.2,1.5)--(37.1,-4)--(42.9,-4)--(44.8,1.5)--(40,5);
\draw[thick](40,5)--(42.9,-4)(40,5)--(37.1,-4)(42.9,-4)--(35.2,1.5)(37.1,-4)--(44.8,1.5)(44.8,1.5)--(35.2,1.5);
\draw[thick,blue](-4.8,1.5)--(25,-30)(-4.8,1.5)--(15,-30)(-4.8,1.5)--(12,-30)(4.8,1.5)--(25,-30)(4.8,1.5)--(15,-30)(4.8,1.5)--(12,-30)(0,5)--(25,-30)(0,5)--(15,-30)(0,5)--(12,-30)(2.9,-4)--(25,-30)(2.9,-4)--(15,-30)(2.9,-4)--(12,-30)(-2.9,-4)--(25,-30)(-2.9,-4)--(15,-30)(-2.9,-4)--(12,-30);
\draw[thick,blue](10.2,1.5)--(25,-30)(10.2,1.5)--(15,-30)(10.2,1.5)--(12,-30)(19.8,1.5)--(25,-30)(19.8,1.5)--(15,-30)(19.8,1.5)--(12,-30)(15,5)--(25,-30)(15,5)--(15,-30)(15,5)--(12,-30)(17.9,-4)--(25,-30)(17.9,-4)--(15,-30)(17.9,-4)--(12,-30)(12.1,-4)--(25,-30)(12.1,-4)--(15,-30)(12.1,-4)--(12,-30);
\draw[thick,blue](35.2,1.5)--(25,-30)(35.2,1.5)--(15,-30)(35.2,1.5)--(12,-30)(40,5)--(25,-30)(40,5)--(15,-30)(40,5)--(12,-30)(44.8,1.5)--(25,-30)(44.8,1.5)--(15,-30)(44.8,1.5)--(12,-30)(42.9,-4)--(25,-30)(42.9,-4)--(15,-30)(42.9,-4)--(12,-30)(37.1,-4)--(25,-30)(37.1,-4)--(15,-30)(37.1,-4)--(12,-30);
\draw[rotate around={0:(20,0)},red] (20,0) ellipse (35 and 10);
\draw[rotate around={0:(20,-30)},red] (20,-30) ellipse (20 and 5);
\draw[fill=black](27.5,1.5)circle(6pt);
\draw[fill=black](30.5,1.5)circle(6pt);
\draw[fill=black](24.5,1.5)circle(6pt);
\draw[fill=black](20,-30)circle(6pt);
\draw[fill=black](22,-30)circle(6pt);
\draw[fill=black](18,-30)circle(6pt);
\draw[fill=black](0,5)circle(12pt);
\draw[fill=black](4.8,1.5)circle(12pt);
\draw[fill=black](-4.8,1.5)circle(12pt);
\draw[fill=black](-2.9,-4)circle(12pt);
\draw[fill=black](2.9,-4)circle(12pt);
\draw[fill=black](10.2,1.5)circle(12pt);
\draw[fill=black](15,5)circle(12pt);
\draw[fill=black](19.8,1.5)circle(12pt);
\draw[fill=black](17.9,-4)circle(12pt);
\draw[fill=black](12.1,-4)circle(12pt);
\draw[fill=black](35.2,1.5)circle(12pt);
\draw[fill=black](40,5)circle(12pt);
\draw[fill=black](44.8,1.5)circle(12pt);
\draw[fill=black](42.9,-4)circle(12pt);
\draw[fill=black](37.1,-4)circle(12pt);
\draw[fill=black](25,-30)circle(12pt);
\draw[fill=black](15,-30)circle(12pt);
\draw[fill=black](12,-30)circle(12pt);
\node at (60,0) {$\frac{n}{2}+1$};
\node at (45,-30) {$\frac{n}{2}-1$};
\end{tikzpicture}
\caption{Extremal construction when $n$ is even and $n\equiv 2(\text{mod }10)$.}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.1]
\draw[thick](0,5)--(-4.8,1.5)--(-2.9,-4)--(2.9,-4)--(4.8,1.5)--(0,5);
\draw[thick](0,5)--(2.9,-4)(0,5)--(-2.9,-4)(2.9,-4)--(-4.8,1.5)(-2.9,-4)--(4.8,1.5)(4.8,1.5)--(-4.8,1.5);
\draw[thick](15,5)--(10.2,1.5)--(12.1,-4)--(17.9,-4)--(19.8,1.5)--(15,5);
\draw[thick](15,5)--(17.9,-4)(15,5)--(12.1,-4)(17.9,-4)--(10.2,1.5)(12.1,-4)--(19.8,1.5)(19.8,1.5)--(10.2,1.5);
\draw[thick](40,5)--(35.2,1.5)--(37.1,-4)--(42.9,-4)--(44.8,1.5)--(40,5);
\draw[thick](40,5)--(42.9,-4)(40,5)--(37.1,-4)(42.9,-4)--(35.2,1.5)(37.1,-4)--(44.8,1.5)(44.8,1.5)--(35.2,1.5);
\draw[thick,blue](-4.8,1.5)--(25,-30)(-4.8,1.5)--(15,-30)(-4.8,1.5)--(12,-30)(4.8,1.5)--(25,-30)(4.8,1.5)--(15,-30)(4.8,1.5)--(12,-30)(0,5)--(25,-30)(0,5)--(15,-30)(0,5)--(12,-30)(2.9,-4)--(25,-30)(2.9,-4)--(15,-30)(2.9,-4)--(12,-30)(-2.9,-4)--(25,-30)(-2.9,-4)--(15,-30)(-2.9,-4)--(12,-30);
\draw[thick,blue](10.2,1.5)--(25,-30)(10.2,1.5)--(15,-30)(10.2,1.5)--(12,-30)(19.8,1.5)--(25,-30)(19.8,1.5)--(15,-30)(19.8,1.5)--(12,-30)(15,5)--(25,-30)(15,5)--(15,-30)(15,5)--(12,-30)(17.9,-4)--(25,-30)(17.9,-4)--(15,-30)(17.9,-4)--(12,-30)(12.1,-4)--(25,-30)(12.1,-4)--(15,-30)(12.1,-4)--(12,-30);
\draw[thick,blue](35.2,1.5)--(25,-30)(35.2,1.5)--(15,-30)(35.2,1.5)--(12,-30)(40,5)--(25,-30)(40,5)--(15,-30)(40,5)--(12,-30)(44.8,1.5)--(25,-30)(44.8,1.5)--(15,-30)(44.8,1.5)--(12,-30)(42.9,-4)--(25,-30)(42.9,-4)--(15,-30)(42.9,-4)--(12,-30)(37.1,-4)--(25,-30)(37.1,-4)--(15,-30)(37.1,-4)--(12,-30);
\draw[rotate around={0:(20,0)},red] (20,0) ellipse (35 and 10);
\draw[rotate around={0:(20,-30)},red] (20,-30) ellipse (20 and 5);
\draw[fill=black](27.5,1.5)circle(6pt);
\draw[fill=black](30.5,1.5)circle(6pt);
\draw[fill=black](24.5,1.5)circle(6pt);
\draw[fill=black](20,-30)circle(6pt);
\draw[fill=black](22,-30)circle(6pt);
\draw[fill=black](18,-30)circle(6pt);
\draw[fill=black](0,5)circle(12pt);
\draw[fill=black](4.8,1.5)circle(12pt);
\draw[fill=black](-4.8,1.5)circle(12pt);
\draw[fill=black](-2.9,-4)circle(12pt);
\draw[fill=black](2.9,-4)circle(12pt);
\draw[fill=black](10.2,1.5)circle(12pt);
\draw[fill=black](15,5)circle(12pt);
\draw[fill=black](19.8,1.5)circle(12pt);
\draw[fill=black](17.9,-4)circle(12pt);
\draw[fill=black](12.1,-4)circle(12pt);
\draw[fill=black](35.2,1.5)circle(12pt);
\draw[fill=black](40,5)circle(12pt);
\draw[fill=black](44.8,1.5)circle(12pt);
\draw[fill=black](42.9,-4)circle(12pt);
\draw[fill=black](37.1,-4)circle(12pt);
\draw[fill=black](25,-30)circle(12pt);
\draw[fill=black](15,-30)circle(12pt);
\draw[fill=black](12,-30)circle(12pt);
\node at (60,0) {$\frac{n+1}{2}$};
\node at (45,-30) {$\frac{n-1}{2}$};
\end{tikzpicture}
\caption{Extremal construction when $n$ is odd and $n\equiv 1(\text{mod }10)$.}
\label{fig2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.1]
\draw[rotate around={0:(0,0)},red] (0,0) ellipse (28 and 12);
\draw[rotate around={0:(0,-40)},red] (0,-40) ellipse (28 and 12);
\draw[ultra thick](0,5)--(4,0)--(-4,0)--(0,5)(-14,5)--(-10,0)--(-18,0)--(-14,5)(20,5)--(16,0)--(24,0)--(20,5);
\draw[ultra thick](0,-35)--(4,-40)--(-4,-40)--(0,-35)(-14,-35)--(-10,-40)--(-18,-40)--(-14,-35)(20,-35)--(16,-40)--(24,-40)--(20,-35);
\draw[thick,blue](0,5)--(0,-35)(0,5)--(4,-40)(0,5)--(-4,-40)(0,5)--(-14,-35)(0,5)--(-10,-40)(0,5)--(-18,-40)(0,5)--(20,-35)(0,5)--(16,-40)(0,5)--(24,-40);
\draw[thick,blue](4,0)--(0,-35)(4,0)--(4,-40)(4,0)--(-4,-40)(4,0)--(-14,-35)(4,0)--(-10,-40)(4,0)--(-18,-40)(4,0)--(20,-35)(4,0)--(16,-40)(4,0)--(24,-40);
\draw[thick,blue](-4,0)--(0,-35)(-4,0)--(4,-40)(-4,0)--(-4,-40)(-4,0)--(-14,-35)(-4,0)--(-10,-40)(-4,0)--(-18,-40)(-4,0)--(20,-35)(-4,0)--(16,-40)(-4,0)--(24,-40);
\draw[thick,blue](-14,5)--(0,-35)(-14,5)--(4,-40)(-14,5)--(-4,-40)(-14,5)--(-14,-35)(-14,5)--(-10,-40)(-14,5)--(-18,-40)(-14,5)--(20,-35)(-14,5)--(16,-40)(-14,5)--(24,-40);
\draw[thick,blue](-10,0)--(0,-35)(-10,0)--(4,-40)(-10,0)--(-4,-40)(-10,0)--(-14,-35)(-10,0)--(-10,-40)(-10,0)--(-18,-40)(-10,0)--(20,-35)(-10,0)--(16,-40)(-10,0)--(24,-40);
\draw[thick,blue](-18,0)--(0,-35)(-18,0)--(4,-40)(-18,0)--(-4,-40)(-18,0)--(-14,-35)(-18,0)--(-10,-40)(-18,0)--(-18,-40)(-18,0)--(20,-35)(-18,0)--(16,-40)(-18,0)--(24,-40);
\draw[thick,blue](20,5)--(0,-35)(20,5)--(4,-40)(20,5)--(-4,-40)(20,5)--(-14,-35)(20,5)--(-10,-40)(20,5)--(-18,-40)(20,5)--(20,-35)(20,5)--(16,-40)(20,5)--(24,-40);
\draw[thick,blue](16,0)--(0,-35)(16,0)--(4,-40)(16,0)--(-4,-40)(16,0)--(-14,-35)(16,0)--(-10,-40)(16,0)--(-18,-40)(16,0)--(20,-35)(16,0)--(16,-40)(16,0)--(24,-40);
\draw[thick,blue](24,0)--(0,-35)(24,0)--(4,-40)(24,0)--(-4,-40)(24,0)--(-14,-35)(24,0)--(-10,-40)(24,0)--(-18,-40)(24,0)--(20,-35)(24,0)--(16,-40)(24,0)--(24,-40);
\draw[fill=black](0,5)circle(15pt);
\draw[fill=black](4,0)circle(15pt);
\draw[fill=black](-4,0)circle(15pt);
\draw[fill=black](-14,5)circle(15pt);
\draw[fill=black](-10,0)circle(15pt);
\draw[fill=black](-18,0)circle(15pt);
\draw[fill=black](10,0)circle(8pt);
\draw[fill=black](13,0)circle(8pt);
\draw[fill=black](7,0)circle(8pt);
\draw[fill=black](20,5)circle(15pt);
\draw[fill=black](16,0)circle(15pt);
\draw[fill=black](24,0)circle(15pt);
\draw[fill=black](0,-35)circle(15pt);
\draw[fill=black](4,-40)circle(15pt);
\draw[fill=black](-4,-40)circle(15pt);
\draw[fill=black](-14,-35)circle(15pt);
\draw[fill=black](-10,-40)circle(15pt);
\draw[fill=black](-18,-40)circle(15pt);
\draw[fill=black](10,-40)circle(8pt);
\draw[fill=black](13,-40)circle(8pt);
\draw[fill=black](7,-40)circle(8pt);
\draw[fill=black](20,-35)circle(15pt);
\draw[fill=black](16,-40)circle(15pt);
\draw[fill=black](24,-40)circle(15pt);
\end{tikzpicture}
\caption{Extremal construction when $n$ is divisible by 6.}
\label{fig3}
\end{figure}
\section{Proof of Theorem \ref{maintheorem}}
We will use the following classical stability result of Erd\H{o}s and Simonovits.
\begin{theorem}\cite{EDR3}
Let $k \geq 2$ and suppose that $H$ is a graph with $\chi(H) = k + 1$. If $G$ is an $H$-free graph with $e(G) \geq t_k(n)-o(n^2)$, then $G$ can be formed from $T_k(n)$ by adding and deleting $o(n^2)$ edges.
\end{theorem}
Since $\chi(TP_3)=3$, the above theorem can be restated as follows.
\begin{theorem}\label{stability}
For every $\gamma>0$, there exist an $\epsilon>0$ and an $n_0(\gamma)$ such that for every $TP_3$-free graph $G$ on $n>n_0(\gamma)$ vertices with $e(G)\geq \frac{n^2}{4}-\epsilon n^2$, we have
$$|E(G)\Delta E(T_{2}(n))|\leq \gamma n^2.$$
\end{theorem}
We will prove the following version of Theorem \ref{maintheorem}.
\begin{theorem}\label{mainthm}
For $\delta >0$ and $n\geq \frac{5n_0(\delta)}{2\delta}$, the maximum number of edges in an $n$-vertex $TP_3$-free graph is $\ex(n,TP_3)\leq \frac{n^2}{4}+(1+\delta)n$.
\end{theorem}
Given $\delta$, we define the following functions of $\delta$. The $n_0(\delta)$ in Theorem \ref{mainthm} comes from Theorem \ref{stability}; let $\beta(\delta)\geq \frac{\delta}{9296}$, and let $\gamma(\delta)$ satisfy the inequalities $\beta^3 + 512 \beta \gamma^2<16\beta (\beta + 1) (2 \beta + 1) \gamma$ and $\frac{\delta}{1328}\times \frac{\frac{1}{2}-3\beta }{3}<\gamma$. For brevity, we do not compute these functions precisely.
For technical reasons, we start by proving the following weaker version of Theorem \ref{mainthm}.
\begin{lemma}\label{lemma1}
Let $G$ be a $TP_3$-free graph on $n\geq 10$ vertices. Then $e(G)\leq \frac{n^2}{4}+\frac{7}{2}n$.
\end{lemma}
\begin{proof}
We proceed by induction on $n$. The maximum number of edges in a $7$-wheel-free graph on $n$ vertices is $\text{ex}(n,W_7)=\lfloor{\frac{n^2}{4}+\frac{n}{2}+1}\rfloor$ \cite{TOMASZ}, which is less than or equal to $\frac{n^2}{4}+\frac{7}{2}n$. So, we may assume that $G$ contains a $7$-wheel. We claim that each edge in $G$ is contained in at least $8$ triangles. Suppose not, and let $xy\in E(G)$ be an edge with $|N(x,y)|\leq 7$. In this case, the number of edges incident to either $x$ or $y$ is at most $n+6$. By the induction hypothesis,
$$e(G)\leq e(G-\{x,y\})+(n+6)\leq \frac{(n-2)^2}{4}+\frac{7}{2}(n-2)+(n+6)=\frac{n^2}{4}+\frac{7}{2}n.$$
One can check that the statement also holds for small $n$.
Now consider a $7$-wheel in $G$ with $6$-cycle $x_1x_2x_3x_4x_5x_6x_1$ and center $y$. For any edge $x_ix_j$ of the $6$-cycle, since each edge of $G$ lies in at least $8$ triangles and at most $5$ of its common neighbors can belong to the wheel, there are at least $3$ vertices in $V(G)\backslash \{x_1,x_2,\dots, x_6,y\}$ adjacent to both $x_i$ and $x_j$. Therefore, we can greedily find three distinct vertices, say $y_1$, $y_2$, and $y_3$, lying in $N^*(x_1,x_2)$, $N^*(x_3,x_4)$, and $N^*(x_5,x_6)$, respectively. This is a contradiction, as $G$ does not contain a $TP_3$.
\end{proof}
\begin{lemma}\label{lemma2}
Let $\delta>0$ be given, and let $G$ be a graph on $n\geq \frac{5n_0(\gamma)}{2\delta}$ vertices with $e(G)>\frac{n^2}{4}+(1+\delta)n$ edges. Then either $G$ contains a $TP_3$, or $G$ contains a subgraph $G_0$ on $n_0$ vertices such that $e(G_0)> \frac{n_0^2}{4}+(1+\delta)n_0$, with $d(x)>\floor{\frac{n_0}{2}+1}$
for all $x\in V(G_0)$, and any two adjacent vertices are incident to at least $n_0+2$ edges (so each edge is contained in at least three triangles).
\end{lemma}
\begin{proof}
Define a subgraph $H$ of $G$ as good if $e(H)> \frac{v(H)^2}{4}+(1+\delta)v(H)$ with
\begin{equation}\label{equation2}
d(x)>\floor{\frac{v(H)}{2}+1},
\end{equation}
for all $x\in V(H)$ and any two adjacent vertices are incident to at least $v(H)+2$ edges.
If every vertex in $G$ satisfies the property (\ref{equation2}) (i.e., $G$ itself is good), then the lemma holds.
Otherwise, we delete a vertex of $G$ if it does not satisfy the degree condition in (\ref{equation2}), or if, together with one of its neighbors, it is incident to fewer than $v(G)+2$ edges. We repeat this step, say $m$ times, until we obtain a subgraph $H$ satisfying property (\ref{equation2}).
We claim the following:
\begin{claim}$e(H)\geq\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m.$
\end{claim}
\begin{proof}
Suppose to the contrary that $e(H)< \frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m.$ We distinguish the following four cases based on the parities of $n$ and $m$ to complete the proof.
\subsubsection*{Case 1: $n$ is odd}
The sequences of the numbers of edges we delete from $G$ in each step, when $m$ is even and when $m$ is odd, are respectively $$\left(\frac{n+1}{2},\frac{n+1}{2},\frac{n-1}{2},\frac{n+1}{2},\dots, \frac{n-m+3}{2},\frac{n-m+3}{2}\right)$$ and $$\left(\frac{n+1}{2},\frac{n+1}{2},\frac{n-1}{2},\frac{n+1}{2},\dots, \frac{n-m+4}{2},\frac{n-m+4}{2}, \frac{n-m+2}{2}\right).$$ It can be checked that the numbers of edges deleted after $m$ steps are respectively $\frac{m}{4}(2n-m+4)$ and $\frac{(m-1)}{4}(2n-m+5)+\frac{n-m+2}{2}=\frac{mn}{2}-\frac{m^2}{4}+m-\frac{1}{4}.$
Thus, when $m$ is even,
\begin{align*}
e(G)\leq e(H)+\frac{m}{4}(2n-m+4)&<\left(\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m\right)+\frac{m}{4}(2n-m+4)\\&=\frac{n^2}{4}+(1+\delta)n,
\end{align*}
which is a contradiction.
When $m$ is odd, we have
\begin{align*}
e(G)\leq e(H)+\frac{mn}{2}-\frac{m^2}{4}+m-\frac{1}{4}&<\left(\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m\right)+\frac{mn}{2}-\frac{m^2}{4}+m-\frac{1}{4}\\&=\frac{n^2}{4}+(1+\delta)n-\frac{1}{4},
\end{align*}
which is again a contradiction.
\subsubsection*{Case 2: $n$ is even}
The sequences of the numbers of edges deleted from $G$ in the $m$ steps, when $m$ is odd and when $m$ is even, are respectively $$\left(\frac{n+2}{2},\frac{n}{2},\frac{n}{2},\dots,\frac{n-m+3}{2},\frac{n-m+3}{2}\right)$$ and $$\left(\frac{n+2}{2},\frac{n}{2},\frac{n}{2},\dots,\frac{n-m+4}{2},\frac{n-m+4}{2},\frac{n-m+2}{2}\right).$$
Again, it can be checked that the numbers of edges deleted after $m$ steps are respectively $\frac{m-1}{4}(2n-m+3)+\frac{n+2}{2}=-\frac{m^2}{4}+\frac{mn}{2}+m+\frac{1}{4}$ and $\frac{m-2}{4}(2n-m+4)+\frac{n+2}{2}+\frac{n-m+2}{2}=\frac{mn}{2}-\frac{m^2}{4}+m.$
When $m$ is odd, we have
\begin{align*}
e(G)\leq e(H)-\frac{m^2}{4}+\frac{mn}{2}+m+\frac{1}{4}&<\left(\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m\right)-\frac{m^2}{4}+\frac{mn}{2}+m+\frac{1}{4}\\&=\frac{n^2}{4}+(1+\delta)n+\frac{1}{4}.
\end{align*}
Clearly, $e(G)\leq \frac{n^2}{4}+(1+\delta)n$. Otherwise, we would get an integer between $\frac{n^2}{4}+(1+\delta)n$ and $\frac{n^2}{4}+(1+\delta)n+\frac{1}{4}$, which is not true. This contradicts the fact that $e(G)>\frac{n^2}{4}+(1+\delta)n$.
When $m$ is even, we have
\begin{align*}
e(G)\leq e(H)+\frac{mn}{2}-\frac{m^2}{4}+m&<\left(\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m\right)+\frac{mn}{2}-\frac{m^2}{4}+m\\&=\frac{n^2}{4}+(1+\delta)n,
\end{align*}
which is again a contradiction.
\end{proof}
If $H$ contains a $TP_3$, we are immediately done. Hence, assume $H$ is $TP_3$-free. By Lemma \ref{lemma1}, $e(H)\leq \frac{(n-m)^2}{4}+\frac{7}{2}(n-m)$. Thus,
\begin{equation*}
\begin{split}
\frac{(n-m)^2}{4}+(1+\delta)(n-m)+\delta m &\leq \frac{(n-m)^2}{4}+\frac{7}{2}(n-m).
\end{split}
\end{equation*}
Hence, $\delta m\leq\left(\frac{5}{2}-\delta\right)(n-m)$, and so $$ m\leq\frac{2.5-\delta}{2.5}n.$$
This implies $n-m\geq \frac{2\delta n}{5}$. The condition $n\geq\frac{5n_0(\gamma)}{2\delta}$ then implies $n-m\geq n_0(\gamma)$, and thus we have found the required good subgraph $H$ of $G$.\\
\end{proof}
\begin{remark}
For the rest of the paper, we always work with this ``good'' subgraph and, to simplify notation, we denote it by $G$.
\end{remark}
\begin{definition}
We call a $7$-wheel in a graph $G$ with $6$-cycle $x_1x_2x_3x_4x_5x_6x_1$ and center $y$ a \textbf{sparse $7$-wheel} if $x_ix_{i+2}\notin E(G)$ for all $i\in\{1,2,\dots,6\}$, where the indices are taken modulo $6$ (see Figure \ref{goodwheel}).
\end{definition}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.6]
\draw[thick](-2,-2)--(-4,-4)(2,-2)--(4,-4)(2,-2)--(-2,-6)(4,-4)--(2,-6)(-2,-2)--(2,-6)(-4,-4)--(-2,-6);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-2,-6)--(2,-6);
\draw[dashed, red](-4,-4)--(2,-2)--(2,-6)--(-4,-4)(4,-4)--(-2,-2)--(-2,-6)--(4,-4);
\draw[fill=black](-2,-2)circle(5pt);
\draw[fill=black](2,-2)circle(5pt);
\draw[fill=black](-4,-4)circle(5pt);
\draw[fill=black](0,-4)circle(5pt);
\draw[fill=black](4,-4)circle(5pt);
\draw[fill=black](-2,-6)circle(5pt);
\draw[fill=black](2,-6)circle(5pt);
\node at (-3,-2) {$x_1$};
\node at (3,-2) {$x_2$};
\node at (4.5,-4) {$x_3$};
\node at (-4.5,-4) {$x_6$};
\node at (2,-7) {$x_4$};
\node at (-2,-7) {$x_5$};
\node at (0,-3.5) {$y$};
\end{tikzpicture}
\caption{A sparse $7$-wheel; the dotted red edges are not in $G$.}
\label{goodwheel}
\end{figure}
\begin{lemma}\label{lemma4}
Let $\delta>0$ and let $G$ be a graph on $n$ vertices that contains a sparse $7$-wheel and satisfies $e(G)\geq \frac{n^2}{4}+(1+\delta)n$. Then $G$ contains a $TP_3$.
\end{lemma}
\begin{proof}
Suppose $e(G)>\frac{n^2}{4}+(1+\delta)n$. Then by Lemma \ref{lemma2}, $G$ contains a good subgraph $H$. That means,
\begin{equation}\label{equation1}d(x)>
\begin{cases}
\frac{v(H)}{2}+1, &2\mid v(H),\\
\frac{v(H)+1}{2}, &2\nmid v(H).\\
\end{cases}
\end{equation}
for all $x\in V(H)$, and any two adjacent vertices are incident to at least $v(H)+2$ edges (and so every edge is contained in at least three triangles). As in the remark above, we again denote this good subgraph by $G$.
Let the sparse $7$-wheel in $G$ have center $y$ and $6$-cycle $x_1x_2x_3x_4x_5x_6x_1$, as shown in Figure \ref{goodwheel}. Since $G$ is good, $|N^*(x_i,x_{i+1})|\geq 3$ for each edge $x_ix_{i+1}$, $i\in \{1,2,\dots,6\}$ (indices modulo $6$). Moreover, for each such edge none of the remaining four cycle vertices lies in $N^*(x_i,x_{i+1})$. Indeed, consider without loss of generality the edge $x_1x_2$. Since the wheel is sparse, $x_3\notin N(x_1)$ and $x_4\notin N(x_2)$, so $x_3,x_4\notin N^*(x_1,x_2)$; by the same argument, $x_5,x_6\notin N^*(x_1,x_2)$. Therefore, for each $i$ there exist at least two vertices in $V(G)\backslash\{x_1,x_2,\dots,x_6,y\}$ which lie in $N^*(x_i,x_{i+1})$. Take the matching $x_1x_2$, $x_3x_4$, $x_5x_6$. If there are three distinct vertices of $V(G)\backslash\{x_1,x_2,\dots, x_6,y\}$ in $N^*(x_1,x_2)\cup N^*(x_3,x_4)\cup N^*(x_5,x_6)$, then $G$ contains a $TP_3$. Indeed, since each of the sets $N^*(x_1,x_2)$, $N^*(x_3,x_4)$, and $N^*(x_5,x_6)$ contains at least two vertices outside the wheel, Hall's Theorem yields a system of distinct representatives, i.e., three distinct vertices $z_1,z_2,z_3$ outside the wheel with $z_i\in N^*(x_j,x_k)$ for $i\in \{1,2,3\}$ and $(j,k)\in \{(1,2),(3,4),(5,6)\}$ paired bijectively. Together with the wheel, these representatives form a $TP_3$, a contradiction to the fact that $G$ is $TP_3$-free.
Now we may assume that there are exactly two distinct vertices, say $v_1$ and $v_2$, in $V(G)\backslash\{x_1,x_2,\dots, x_6,y\}$ such that $N^*(x_i,x_{i+1})=\{y,v_1,v_2\}$ for all $i\in\{1,2,\dots,6\}$ (see Figure \ref{wheelstructure}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[thick](-2,-2)--(-4,-4)(2,-2)--(4,-4)(2,-2)--(-2,-6)(4,-4)--(2,-6)(-2,-2)--(2,-6)(-4,-4)--(-2,-6);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-2,-6)--(2,-6);
\draw[thick, red](-2,-2)--(-7,5)(-2,-2)--(-9,3);
\draw[thick,blue](2,-2)--(7,5)(2,-2)--(9,3);
\draw[rotate around={45:(-8,4)},red] (-8,4) ellipse (1.4 and 1);
\draw[rotate around={-45:(8,4)},blue] (8,4) ellipse (1.4 and 1);
\draw[thick](-2,-2)--(0,2)(2,-2)--(0,2)(-2,-2)--(0,6)(2,-2)--(0,6);
\draw[thick](-4,-4)..controls (-4,1) and (-1,2) ..(0,2);
\draw[thick](4,-4)..controls (4,1) and (1,2) ..(0,2);
\draw[thick](-4,-4)..controls (-5,5) and (0,6) ..(0,6);
\draw[thick](4,-4)..controls (5,5) and (0,6) ..(0,6);
\draw[thick](-2,-6)..controls (-8,-7) and (-6,2) ..(0,2);
\draw[thick](2,-6)..controls (8,-7) and (6,2) ..(0,2);
\draw[thick](-2,-6)..controls (-10,-9) and (-6,6) ..(0,6);
\draw[thick](2,-6)..controls (10,-9) and (6,6) ..(0,6);
\node at (-3,-2) {$x_1$};
\node at (3,-2) {$x_2$};
\node at (4.5,-4) {$x_3$};
\node at (-4.5,-4) {$x_6$};
\node at (2,-7) {$x_4$};
\node at (-2,-7) {$x_5$};
\node at (0,-3.5) {$y$};
\node at (-8,4) {$A$};
\node at (8,4) {$B$};
\node at (0,1) {$v_1$};
\node at (0,7) {$v_2$};
\draw[fill=black](0,2)circle(5pt);
\draw[fill=black](0,6)circle(5pt);
\draw[fill=black](-2,-2)circle(5pt);
\draw[fill=black](2,-2)circle(5pt);
\draw[fill=black](-4,-4)circle(5pt);
\draw[fill=black](0,-4)circle(5pt);
\draw[fill=black](4,-4)circle(5pt);
\draw[fill=black](-2,-6)circle(5pt);
\draw[fill=black](2,-6)circle(5pt);
\end{tikzpicture}
\caption{Structure of the subgraph of $G$ in which every edge of the cycle of the sparse $7$-wheel has the same two common neighbors $v_1$ and $v_2$ outside the wheel.}
\label{wheelstructure}
\end{figure}
We prove the lemma for the case when $n$ is odd; the case when $n$ is even can be handled by a similar argument.
Let $A$ and $B$ be the sets of vertices in $V(G)\backslash \{x_1,\dots,x_6,y,v_1,v_2\}$ which are adjacent to $x_1$ and $x_2$, respectively (see Figure \ref{wheelstructure}). Clearly $A\cap B=\emptyset$; otherwise the graph contains a $TP_3$. Thus, either $|A|\leq \frac{n-9}{2}$ or $|B|\leq \frac{n-9}{2}$.
Without loss of generality, suppose $|A|\leq \frac{n-9}{2}$. If $|A|\leq \frac{n-11}{2}$, then $d(x_1)\leq |A|+6=\frac{n-11}{2}+6=\frac{n+1}{2}$, which is a contradiction.
So assume $|A|=\frac{n-9}{2}$. In this case, we also have $|B|=\frac{n-9}{2}$. We need the following claim to complete the proof of the lemma.
\begin{claim}\label{basic}
Each vertex in $A$ is adjacent to at least one other vertex in $A$.
\end{claim}
\begin{proof}
Suppose not, and let $x$ be a vertex in $A$ adjacent to no other vertex of $A$. The vertex $x$ is adjacent to neither $x_2$ nor $x_6$; otherwise $G$ contains a $TP_3$.
If $x$ is adjacent to $x_4$, then $x$ is adjacent to neither $x_3$ nor $x_5$; otherwise, the graph contains a $TP_3$. In this case, the vertex $x$ is possibly adjacent to $y, v_1, v_2$ and vertices in $B$. Thus, counting also the vertex $x_1$, which is already adjacent to $x$, we get $d(x)\leq \frac{n-9}{2}+5=\frac{n+1}{2}$. This contradicts the fact that $G$ is good.
Now let $x$ be adjacent to $x_3$. Then $x$ cannot be adjacent to $x_4$. If $x_5$ is not adjacent to $x$, then $d(x)\leq \frac{n-9}{2}+5=\frac{n+1}{2}$, which is a contradiction. So let $x_5$ be adjacent to $x$. If $x$ is not adjacent to one of the vertices in $\{y,v_1,v_2\}$, then $d(x)\leq \frac{n-9}{2}+5=\frac{n+1}{2}$, which is a contradiction. Otherwise, consider the $7$-wheel with the $6$-cycle $x_5yx_3v_1x_1v_2x_5$ (see the bold green cycle in Figure \ref{wheelstructureparticular}) and center $x$. Consider the matching $x_5y$, $x_3v_1$ and $x_1v_2$; we can take the vertices $x_4$, $x_2$ and $x_6$, respectively, which are common neighbors of the end vertices of the matching. Thus we get a $TP_3$ in $G$, which contradicts the fact that $G$ is $TP_3$-free.
\end{proof}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.5]
\draw[thick](-2,-2)--(-4,-4)(2,-2)--(4,-4)(2,-2)--(-2,-6)(4,-4)--(2,-6)(-2,-2)--(2,-6)(-4,-4)--(-2,-6);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-2,-6)--(2,-6);
\draw[rotate around={45:(-8,4)},red] (-8,4) ellipse (1.4 and 1);
\draw[thick](-2,-2)--(0,2)(2,-2)--(0,2)(-2,-2)--(0,6)(2,-2)--(0,6);
\draw[thick](-4,-4)..controls (-4,1) and (-1,2) ..(0,2);
\draw[ultra thick, green](4,-4)..controls (4,1) and (1,2) ..(0,2);
\draw[thick](-4,-4)..controls (-5,5) and (0,6) ..(0,6);
\draw[thick](4,-4)..controls (5,5) and (0,6) ..(0,6);
\draw[thick](-2,-6)..controls (-8,-7) and (-6,2) ..(0,2);
\draw[thick](2,-6)..controls (8,-7) and (6,2) ..(0,2);
\draw[ultra thick, green](-2,-6)..controls (-10,-9) and (-6,6) ..(0,6);
\draw[thick](2,-6)..controls (10,-9) and (6,6) ..(0,6);
\draw[ultra thick, green](-2,-6)--(0,-4)--(4,-4)(0,2)--(-2,-2)--(0,6);
\draw[dashed,red](-8,4)--(0,6)(-8,4)--(0,2)(-8,4)--(-2,-2)(-8,4)--(-2,-6)(-8,4)--(4,-4);
\draw[dashed,red](-8,4)..controls (-4, 5) and (0,-2) ..(0,-4);
\node at (-3,-2) {$x_1$};
\node at (3,-2) {$x_2$};
\node at (4.5,-4) {$x_3$};
\node at (-4.5,-4) {$x_6$};
\node at (2,-7) {$x_4$};
\node at (-2,-7) {$x_5$};
\node at (0,-3.5) {$y$};
\node at (-8.5,4) {$x$};
\node at (-8,6) {$A$};
\node at (0,1) {$v_1$};
\node at (0,7) {$v_2$};
\draw[fill=black](-8,4)circle(5pt);
\draw[fill=black](0,2)circle(5pt);
\draw[fill=black](0,6)circle(5pt);
\draw[fill=black](-2,-2)circle(5pt);
\draw[fill=black](2,-2)circle(5pt);
\draw[fill=black](-4,-4)circle(5pt);
\draw[fill=black](0,-4)circle(5pt);
\draw[fill=black](4,-4)circle(5pt);
\draw[fill=black](-2,-6)circle(5pt);
\draw[fill=black](2,-6)circle(5pt);
\end{tikzpicture}
\caption{A graph containing a $TP_3$.}
\label{wheelstructureparticular}
\end{figure}
With the same argument, one can verify that each vertex in $B$ is adjacent to at least one other vertex in $B$.
Now we finish the proof of Case $1$ of the lemma. Consider the edge $x_5x_6$ and let $A'$ and $B'$ be the set of vertices in $V(G)\backslash \{x_1,\dots,x_6,y,v_1,v_2\}$ which are adjacent to $x_5$ and $x_6$ respectively. For the same reason given above, $|A'|=|B'|=\frac{n-9}{2}$. Clearly $A'\cap B'=\emptyset$.
Since $A\cap B'=\emptyset$ and $A'\cap B'=\emptyset$, then $|B'\cap B|=|A\cap A'|=\frac{n-9}{2}$.
Let $x\in A\cap A'$ and suppose $x$ is adjacent to $y$. We can take the $7$-wheel with $6$-cycle $xx_1x_2x_3x_4x_5x$ and center $y$. By Claim \ref{basic}, there is a vertex $z$ in $A$ which is adjacent to $x$. Since $z$ is also adjacent to $x_1$, taking the matching $xx_1$, $x_2x_3$ and $x_4x_5$ with common neighbors $z, v_1$ and $v_2$, respectively, we see that the graph contains a $TP_3$. Therefore, in this case, $x$ cannot be adjacent to $y$.
Let $t\in B\cap B'$. Then $t$ cannot be adjacent to $y$. Suppose not: we can take the $7$-wheel with $6$-cycle $tx_2x_3x_4x_5x_6t$ and center $y$. By Claim \ref{basic}, $t$ is adjacent to a vertex $r$ in $B$. So taking the matching $tx_2$, $x_3x_4$ and $x_5x_6$ with common neighbors $r, v_1$ and $v_2$, respectively, we see that $G$ contains a $TP_3$, a contradiction.
Thus $y$ is a vertex of $G$ whose degree is bounded by a constant, which contradicts the fact that $G$ is a good graph.
\end{proof}
\begin{lemma}\label{lm3}
Let $G$ be a graph on $n$ vertices, where $n\geq \frac{5n_0(\gamma)}{2\delta}$ and $e(G)\geq\frac{n^2}{4}+(1+\delta)n$. Let $A$ and $B$ be a partition of $V(G)$ with sizes as equal as possible and with maximum $e(A,B)$. If $A$ (similarly, $B$) contains a vertex, say $x$, such that $d_A(x)\geq \beta n$, then $G$ contains a $TP_3$.
\end{lemma}
\begin{proof}
Without loss of generality, suppose there exists a vertex $x\in A$ such that $d_A(x)\geq \beta n$. Obviously $e(G)> \frac{n^2}{4}-\epsilon n^2$ for any $\epsilon>0$. Thus, by the stability theorem, $|E(G)\Delta E(T_{n,2})|\leq \gamma n^2$.
Let $A_x$ be the subgraph of $A$ induced by the vertices $N_A(x)\cup \{x\}$. Since all edges of $A_x$ lie inside the class $A$, we have $e(A_x)\leq \gamma n^2$, which results in $\sum\limits_{y\in V(A_x)}d_{A_x}(y)\leq 2\gamma n^2$. The average degree of $A_x$ is
$$\Bar{d}(A_x)\leq \frac{\sum\limits_{y\in V(A_x)}d_{A_x}(y)}{v(A_x)}\leq \frac{2\gamma n^2}{\beta n}=\frac{2\gamma n}{\beta}.$$
Let $X$ be the set of vertices in $A_x$ with degree at least $\frac{4\gamma n}{\beta}$. It can be checked that $|X|\leq \frac{\beta n}{2}$. Let $Y=V(A_x)-X$. Thus, $|Y|\geq \frac{\beta n}{2}$ and, for each $y\in Y$, $d_Y(y)\leq \frac{4\gamma n}{\beta}$. Now we can properly color $G[Y]$ with $\frac{4\gamma n}{\beta}$ colors. The average size of a color class in $G[Y]$ is at least $\frac{\left(\beta n\right)/2}{\left(4\gamma n\right)/\beta}=\frac{\beta^2}{8\gamma}\geq 3.$
Thus we obtain at least $\frac{n}{3}\left(\frac{\beta}{2}-\frac{8\gamma}{\beta}\right)$ induced $K_{1,3}$'s in $A_x$ (see Figure \ref{sparswheel}).
Notice that the graph induced by $B$, denoted by $G_B$, contains at most $\gamma n^2$ edges, so its average degree is $\Bar{d}(G_B)\leq 2\gamma n$. With the same argument as above, we can keep an overwhelming majority of the vertices in $B$, namely those whose degree is at most $4\gamma n$. Indeed, deleting the vertices of $B$ whose degree is at least $4\gamma n$, we are left with at least $\frac{n}{4}$ vertices. Let $Z$ be the set of vertices remaining in $B$ after this deletion. We properly color $G[Z]$ with $4\gamma n$ colors; the average size of a color class in $G[Z]$ is at least $\frac{n/2}{4\gamma n}$. This implies that we can find at least $\frac{1}{3}\left(\frac{n}{4}-2\times 4\gamma n\right)=\frac{n}{3}\left(\frac{1}{4}-8\gamma\right)$ induced triples in $G_B$ (see Figure \ref{sparswheel}).
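Both counting steps above rely on the standard fact that a graph of maximum degree $\Delta$ admits a proper greedy coloring with at most $\Delta+1$ colors, so its largest color class is an independent set of size at least $v(G)/(\Delta+1)$. A minimal sketch of this greedy coloring (illustrative only; the $10$-cycle below is our own small example, not a graph from the proof):

```python
def greedy_coloring(vertices, adj):
    """Proper greedy coloring: each vertex receives the smallest color
    not used by its already-colored neighbors, so at most
    max_degree + 1 colors are ever needed."""
    color = {}
    for v in vertices:
        used = {color[u] for u in adj.get(v, ()) if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: a 10-cycle (maximum degree 2, hence at most 3 colors).
n = 10
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
coloring = greedy_coloring(range(n), adj)
```

Since at most $\Delta+1$ color classes partition the vertex set, some class (an independent set) has at least $n/(\Delta+1)$ vertices, which is exactly the averaging step used in the text.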
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.2]
\draw[thick](0,0)--(0,-5)(0,0)--(-2,-5)(0,0)--(2,-5)(0,0)--(-6,-5)(0,0)--(-8,-5)(0,0)--(-10,-5) (0,0)--(10,-5)(0,0)--(12,-5)(0,0)--(14,-5);
\draw[thick, blue](0,-5)--(10,-25)--(-2,-5)--(12,-25)--(2,-5)--(14,-25)--(0,-5)(10,-25)--(0,0)--(14,-25)(0,0)--(12,-25)(-2,-5)--(14,-25)(0,-5)--(12,-25)(2,-5)--(10,-25);
\draw[dashed, green](-12,-8)--(-12,-2)--(16,-2)--(16,-8)--(-12,-8);
\draw[dashed, green](-12,-28)--(-12,-22)--(16,-22)--(16,-28)--(-12,-28);
\draw[rotate around={0:(0,-5)},red] (0,-5) ellipse (18 and 7);
\draw[rotate around={0:(0,-5)},blue] (0,-5) ellipse (3.5 and 1);
\draw[rotate around={0:(-8,-5)},blue] (-8,-5) ellipse (3.5 and 1);
\draw[rotate around={0:(12,-5)},blue] (12,-5) ellipse (3.5 and 1);
\draw[rotate around={0:(0,-25)},red] (0,-25) ellipse (18 and 7);
\draw[rotate around={0:(0,-25)},blue] (0,-25) ellipse (3.5 and 1);
\draw[rotate around={0:(-8,-25)},blue] (-8,-25) ellipse (3.5 and 1);
\draw[rotate around={0:(12,-25)},blue] (12,-25) ellipse (3.5 and 1);
\draw[fill=black](0,0)circle(10pt);
\draw[fill=black](0,-5)circle(10pt);
\draw[fill=black](-2,-5)circle(10pt);
\draw[fill=black](2,-5)circle(10pt);
\draw[fill=black](-6,-5)circle(10pt);
\draw[fill=black](-8,-5)circle(10pt);
\draw[fill=black](-10,-5)circle(10pt);
\draw[fill=black](6,-5)circle(3pt);
\draw[fill=black](7,-5)circle(3pt);
\draw[fill=black](5,-5)circle(3pt);
\draw[fill=black](10,-5)circle(10pt);
\draw[fill=black](12,-5)circle(10pt);
\draw[fill=black](14,-5)circle(10pt);
\draw[fill=black](0,-25)circle(10pt);
\draw[fill=black](-2,-25)circle(10pt);
\draw[fill=black](2,-25)circle(10pt);
\draw[fill=black](-6,-25)circle(10pt);
\draw[fill=black](-8,-25)circle(10pt);
\draw[fill=black](-10,-25)circle(10pt);
\draw[fill=black](6,-25)circle(3pt);
\draw[fill=black](7,-25)circle(3pt);
\draw[fill=black](5,-25)circle(3pt);
\draw[fill=black](10,-25)circle(10pt);
\draw[fill=black](12,-25)circle(10pt);
\draw[fill=black](14,-25)circle(10pt);
\node at (20,-5) {$A$};
\node at (20,-25) {$B$};
\node[green] at (-14,-25) {$Z$};
\node[green] at (-14,-5) {$Y$};
\end{tikzpicture}
\caption{A sparse $7$-wheel.}
\label{sparswheel}
\end{figure}
If for each pair of an induced $K_{1,3}$ in $A$ and an induced triple in $B$ there were a missing edge between them, then the number of missing edges would be at least $\frac{n}{3}\left(\frac{\beta}{2}-\frac{8\gamma}{\beta}\right)\times \frac{n}{3}\left(\frac{1}{4}-8\gamma\right)$. Since $|E(G)\Delta E(T_{n,2})|\leq \gamma n^2$, this yields a contradiction provided that the following inequality holds:
\begin{equation}\label{bound1}
\frac{n}{3}\left(\frac{\beta}{2}-\frac{8\gamma}{\beta}\right)\times \frac{n}{3}\left(\frac{1}{4}-8\gamma\right)>\gamma n^2,
\end{equation}
which follows from the definitions of $\beta$ and $\gamma$. Thus there must be an induced $K_{1,3}$ in $A$ which is joined completely to an induced triple of vertices in $B$; that is, we get a sparse $7$-wheel. Therefore, $G$ contains a $TP_3$ by Lemma \ref{lemma4}.
\end{proof}
\begin{corollary}
Let $G$ be a graph on $n$ vertices, where $n\geq \frac{5n_0(\gamma)}{2\delta}$, and $e(G)> \frac{n^2}{4}+(1+\delta)n$. Let $A$ and $B$ be a partition of $V(G)$ with sizes as equal as possible and with maximum $e(A,B)$. If $A$ or $B$ contains a spider graph as a subgraph, then $G$ contains a $TP_3$.
\end{corollary}
\begin{proof}
Let $S$ denote the spider graph depicted in Figure \ref{spider2}. Without loss of generality, suppose $S\subseteq G[A]$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.2]
\draw[thick](0,0)--(5,-5)--(5,-10)(0,0)--(0,-5)--(0,-10)(0,0)--(-5,-5)--(-5,-10);
\draw[fill=black](0,0)circle(12pt);
\draw[fill=black](5,-5)circle(12pt);
\draw[fill=black](5,-10)circle(12pt);
\draw[fill=black](0,-5)circle(12pt);
\draw[fill=black](0,-10)circle(12pt);
\draw[fill=black](-5,-5)circle(12pt);
\draw[fill=black](-5,-10)circle(12pt);
\node at (0,1.5) {$x$};
\node at (7,-5) {$w_1$};
\node at (7,-10) {$w_2$};
\node at (2,-5) {$v_1$};
\node at (2,-10) {$v_2$};
\node at (-7,-5) {$u_1$};
\node at (-7,-10) {$u_2$};
\end{tikzpicture}
\caption{A spider graph with three legs and one joint.}
\label{spider2}
\end{figure}
We consider the $4$-vertex subsets of $S$, namely $\{x,u_1,u_2,v_1\}$, $\{x,v_1,v_2,w_1\}$ and $\{x,w_1,w_2,u_1\}$. Note that if we can find $3$ distinct vertices in $B$ such that each of them is connected to all the vertices of one of the above subsets, we immediately find a $TP_3$. Without loss of generality, assume that the $4$-set $\{x,u_1,u_2,v_1\}$ does not have a common neighbor in $B$; in other words, every vertex $y\in B$ is non-adjacent to at least one of the vertices in $\{x,u_1,u_2,v_1\}$. Then the average number of neighbors in $B$ of the vertices in $\{x,u_1,u_2,v_1\}$ is at most $\frac{3n}{8}$. So there exists a vertex $z\in \{x,u_1,u_2,v_1\}$ such that $d_{B}(z)\leq\frac{3n}{8}$. The minimum degree of the vertices in $G$ is at least $\frac{n}{2}$; thus $d_{A}(z)\geq \frac{n}{8}$.
So $A$ contains a vertex of large degree, and we are done by Lemma \ref{lm3}.
\begin{claim}
Given a graph $G_k$ on $k$ vertices with $2k$ edges, we can find an independent set of size at least $\frac{3k}{55}$.
\end{claim}
\begin{proof}
Delete the vertices with degree greater than $10$, and denote the remaining graph by $G'$. Let $l$ be the number of deleted vertices. The sum of their degrees is at least $10l$, so the number of deleted edges is at least $5l$. Since the graph has $2k$ edges, we get $l\leq \frac{2k}{5}$, and hence $G'$ has at least $\frac{3k}{5}$ vertices, each of degree at most $10$. Start by choosing an arbitrary vertex $x\in G'$, delete its neighbors, and continue by choosing another vertex in the graph $G'\setminus N(x)$. Each step removes at most $11$ vertices, so this greedy procedure yields an independent set of size at least $\frac{3k/5}{11}=\frac{3k}{55}$.
\end{proof}
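The deletion-plus-greedy procedure of this proof is easy to make concrete. The sketch below is illustrative only: the random graph and the seed are our own choices, not part of the claim, and the degree cap $10$ is the one used in the proof.

```python
import random

def greedy_independent_set(n, edges, degree_cap=10):
    """Two-step procedure from the claim: (1) drop every vertex of
    degree > degree_cap, (2) repeatedly pick a surviving vertex and
    discard its neighbors. Each pick removes at most degree_cap + 1
    vertices, giving the 3k/55 lower bound for graphs with 2k edges."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Step 1: keep only the low-degree vertices.
    alive = {v for v in range(n) if len(adj[v]) <= degree_cap}
    independent = []
    # Step 2: greedy selection; picked vertices have no picked neighbors,
    # because picking a vertex removes all of its neighbors from `alive`.
    while alive:
        v = alive.pop()
        independent.append(v)
        alive -= adj[v]
    return independent

random.seed(0)
k = 550
edges = set()
while len(edges) < 2 * k:  # a random graph on k vertices with exactly 2k edges
    u, v = random.sample(range(k), 2)
    edges.add((min(u, v), max(u, v)))
ind = greedy_independent_set(k, edges)
```

On such sparse graphs the procedure typically finds far more than the guaranteed $\frac{3k}{55}$ vertices; the bound is only the worst case of the averaging argument.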
\begin{claim}
Let $G$ be a graph on $n$ vertices, where $n\geq \frac{5n_0(\gamma)}{2\delta}$. Let $A$ and $B$ be a partition of $V(G)$ with sizes as equal as possible and with maximum $e(A,B)$. If $e(A)\geq \frac{n}{2}+\frac{\delta n}{2}$, then the total number of triples of vertices lying in $K_{1,3}$'s or in induced $K_{1,3}$'s (which are subgraphs of a large star whose center vertex has degree at least $84$) is at least $\frac{\delta n}{664}$.
\end{claim}
\begin{proof}
The degree sum of the vertices in $A$ is at least $2\left(\frac{n}{2}+\frac{\delta n}{2}\right)$; hence $A$ contains vertices of degree at least $2$.
Let $v$ be a vertex in $A$ such that $d_A(v)=\Delta$, and let $A_v$ be the graph induced by the vertices $\{v\}\cup N_A(v)$. Note that $A_v$ does not contain the spider graph as a subgraph. We consider the following cases:\\
\textbf{Case $1$: $\Delta\leq 83$.}
Let $x_1,x_2$ and $x_3$ be in $N(v)$. The vertices $v,x_1,x_2$ and $x_3$ form a $K_{1,3}$. On deleting these $4$ vertices, we delete at most $332$ edges. Note that $332$ is negligible compared to the number of extra edges in $A$, which is $\frac{\delta n}{2}$. Hence the number of $K_{1,3}$'s we can find is at least $\frac{\delta n}{664}$.\\
\textbf{Case $2$: $\Delta>84$.}
Denote the vertices in $N(v)$ by $x_i$. Note that we cannot have $3$ independent edges going out of $G_A(v)$ from the $x_i$'s, as the graph is spider-free: if $x_1,x_2$ and $x_3$ were vertices of degree greater than $2$, then by Hall's theorem we would immediately get a matching of $3$ independent edges going from $G_A(v)$ to $A\setminus G_A(v)$. Thus at most $2$ vertices in $\{x_i\}$ have degree greater than $2$, and the number of edges incident to $G_A(v)$ is at most $2(\Delta-1)+2(\Delta-2)+2\Delta\leq 6\Delta.$
By the previous claim, in the graph induced by the set of vertices $x_i$, we can find an independent set of size at least $\frac{3\Delta}{55}$. Hence we can find at least $\frac{\Delta}{55}$ triples, each forming an induced $K_{1,3}$ with $v$ as its center. The number of $K_{1,3}$'s we can find is therefore at least $\frac{\delta n}{660}$.
\end{proof}
We want to prove that $\ex(n,TP_3)\leq \frac{1}{4}n^2+(1+\delta)n$. Assume that there is a $TP_3$-free graph with more than $\frac{1}{4}n^2+(1+\delta)n$ edges. Then one of the two partition classes must contain more than $\frac{n}{2}+\frac{\delta n}{2}$ edges. In the next lemma, we show that this is not possible.
\begin{lemma}
Let $G$ be a graph on $n$ vertices, where $n\geq \frac{5n_0(\gamma)}{2\delta}$. Let $A$ and $B$ be a partition of $V(G)$ with sizes as equal as possible and with maximum $e(A,B)$. Assume that neither $A$ nor $B$ contains a spider graph as a subgraph, and that the maximum degree of the vertices inside each class is at most $\beta n$. If $e(A)\geq\frac{n}{2}+\frac{\delta n}{2}$, then $G$ contains a $TP_3$.
\end{lemma}
\begin{proof}
By the previous claim, the total number of triples lying either in $K_{1,3}$'s or in induced $K_{1,3}$'s (which are subgraphs of a star whose center vertex has degree at least $84$) is at least $\frac{\delta n}{664}$. Let us consider two cases:
\textbf{Case $1$: Half of the triples lie in disjoint $K_{1,3}$'s.}
Consider a vertex $x\in B$. We know that the degree of $x$ inside $B$ is at most $\beta n$, so $x$ has at most $\beta n$ non-neighbors in $A$. Thus there are at least $\frac{\delta n}{1328}-\beta n$ triples in disjoint $K_{1,3}$'s such that all four vertices of the $K_{1,3}$ are adjacent to $x$. Consider three independent edges in $B$, namely $y_1z_1,y_2z_2$ and $y_3z_3$. For each of these $6$ vertices, we can find at least $\frac{\delta n}{1328}-\beta n$ triples in disjoint $K_{1,3}$'s such that the vertices of the $K_{1,3}$ are joined completely to the given vertex. Then each of the vertices $y_i$ (similarly $z_i$) is completely connected to all the vertices of at least $\frac{6}{7}$ of the triples of disjoint $K_{1,3}$'s in $A$, provided the following inequality holds:
\begin{equation}\label{bound2}
\frac{\delta n}{1328}-\beta n\geq \frac{6}{7}\times\frac{\delta n}{1328}.
\end{equation}
This holds by the definition of $\beta$. Thus, by the pigeonhole principle, there is a common triple such that these $3$ independent edges are completely connected to it. Denote the vertices of this triple by $x_1,x_2$ and $x_3$. The $6$-cycle $x_1y_1x_2y_2x_3y_3x_1$ together with the center $x$ forms a $7$-wheel, and the triangles $x_1y_1z_1$, $x_2y_2z_2$ and $x_3y_3z_3$ sitting on the $7$-wheel form a $TP_3$.
\textbf{Case $2$: Half of the triples lie in induced $K_{1,3}$'s.}
Let the number of induced $K_{1,3}$'s in the $i$-th such star be $k_i$. Note that the sum of the $k_i$ over all vertices in $A$ of degree at least $84$ is at least $\frac{\delta n}{1328}$. Consider the center of one such star in $A$, say $x$. The degree of $x$ in $A$ is at most $\beta n$, hence $x$ can have at most $\beta n$ non-neighbors in $B$. Delete these vertices from $B$ and denote the remaining graph by $B'$. We know that $\Delta(B')\leq \beta n$, hence we can properly color $B'$ with $\beta n$ colors, each color class being of size at most $\frac{1}{2\beta}-1$. Hence we can choose $\frac{\frac{n}{2}-3\beta n}{3}$ independent triples. Each of these triples must have a missing edge to the root vertices of the $K_{1,3}$'s chosen in $A$; otherwise, we are done. Hence the number of missing edges is at least $k_i\times \frac{\frac{n}{2}-3\beta n}{3}$ for the $i$-th star. Summing over the vertices in $A$ of degree at least $84$, we get at least $\frac{\delta n}{1328}\times \frac{\frac{n}{2}-3\beta n}{3}$ missing edges. This cannot exceed the possible number of missing edges, $\gamma n^2$.
On the other hand, by the definitions of $\beta$, $\gamma$ and $\delta$ we have
\begin{equation}\label{bound3}
\frac{\delta n}{1328}\times \frac{\frac{n}{2}-3\beta n}{3}>\gamma n^2,
\end{equation}
a contradiction. Hence some triple is completely joined to a chosen induced $K_{1,3}$, we find a sparse $7$-wheel, and we are done.
\end{proof}
\end{proof}
\section{Concluding remarks and conjectures}
Following the two constructions given in Figure \ref{fig1} and Figure \ref{fig2}, we pose the following conjecture concerning $\text{ex}(n, TP_3)$.
\begin{conjecture}\label{con3}
\begin{equation*}\text{ex}(n,TP_3)\leq
\begin{cases}
\frac{1}{4}n^2+n+1, &\text{if $n$ is even,}\\
\frac{1}{4}n^2+n+\frac{3}{4},&\text{ otherwise}.
\end{cases}
\end{equation*}
\end{conjecture}
We also pose the following conjecture related to $\text{ex}(n,TP_4)$.
\begin{conjecture}\label{con1}
For $n$ sufficiently large, $\text{ex}(n,TP_4)=\frac{n^2}{4}+\Theta(n^{4/3})$.
\end{conjecture}
To show the lower bound, we consider an $n$-vertex graph $G$ obtained from a complete bipartite graph with color classes as equal as possible by adding a bipartite $C_6$-free graph with $cn^{4/3}$ edges inside one of the color classes. Thus, $e(G)\geq \frac{n^2}{4}+cn^{4/3}$. The only thing we need to show is that $G$ does not contain a $TP_4$, for which we need the following claim.
\begin{claim}\label{yesclaim}
Every $2$-coloring of the $TP_4$ in which color $1$ is independent contains either a $C_3$ or a $C_6$ in color $2$.
\end{claim}
\begin{proof}
Consider a $2$-coloring $c$ of a $TP_4$ such that color $1$ is independent. We want to show that there is either a $C_3$ or a $C_6$ in color $2$. Suppose there is no such $C_3$. Then one of the vertices of the triangle $x_1x_2x_3$ (see Figure \ref{qwxs}) is in color $1$. Without loss of generality, let the color of $x_1$ be $1$. Since $c$ is a $2$-coloring with the property that color $1$ is independent, all $6$ neighboring vertices of $x_1$ must be of color $2$. Therefore, we obtain a $C_6$ in color $2$, and this completes the proof.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[fill=black](0,0)circle(7pt);
\draw[fill=black](-2,-2)circle(7pt);
\draw[fill=black](2,-2)circle(7pt);
\draw[fill=black](-4,-4)circle(7pt);
\draw[fill=black](0,-4)circle(7pt);
\draw[fill=black](4,-4)circle(7pt);
\draw[fill=black](-6,-6)circle(7pt);
\draw[fill=black](-2,-6)circle(7pt);
\draw[fill=black](2,-6)circle(7pt);
\draw[fill=black](6,-6)circle(7pt);
\draw[fill=black](-8,-8)circle(7pt);
\draw[fill=black](-4,-8)circle(7pt);
\draw[fill=black](0,-8)circle(7pt);
\draw[fill=black](4,-8)circle(7pt);
\draw[fill=black](8,-8)circle(7pt);
\draw[thick](-2,-2)--(2,-2)(-4,-4)--(4,-4)(-6,-6)--(6,-6)(-8,-8)--(8,-8);
\draw[thick](0,0)--(-8,-8)(2,-2)--(-4,-8)(4,-4)--(0,-8)(6,-6)--(4,-8);
\draw[thick](0,0)--(8,-8)(-2,-2)--(4,-8)(-4,-4)--(0,-8)(-6,-6)--(-4,-8);
\node at (0,-3) {$x_2$};
\node at (-3,-6.5) {$x_1$};
\node at (3,-6.5) {$x_3$};
\end{tikzpicture}
\caption{$TP_4$.}
\label{qwxs}
\end{figure}
\end{proof}
The following lemma is a consequence of Claim \ref{yesclaim} and hence the lower bound of Conjecture \ref{con1} holds.
\begin{lemma}\label{lemma9}
Let $G$ be a graph obtained from a complete bipartite graph $K_{\frac{n}{2},\frac{n}{2}}$ (with color classes $1$ and $2$) by adding a bipartite $C_6$-free graph inside color class $2$. Then $G$ is a $TP_4$-free graph.
\end{lemma}
\section*{Acknowledgments}
The research of the first and second authors was supported by the National Research, Development and Innovation Office NKFIH, grant K116769, and the research of the second, third, fourth and fifth authors was supported by the National Research, Development and Innovation Office NKFIH, grant K132696.
\section{Introduction}
Decision-directed (DD) carrier phase recovery (CPR)
\cite{gianni_compensation_2013} and the blind phase search (BPS)
algorithm \cite{pfau_hardware-efficient_2009} have been widely used in
optical coherent receivers to compensate the impact of laser phase
noise. The two-stage CPR scheme based on a DD low latency parallel
phase locked loop (DD-PLL) followed by a feedforward CPR (e.g., BPS)
has demonstrated an excellent performance in the presence of laser
phase noise and frequency fluctuations. The latter are generated by
mechanical vibrations or power supply noise and modeled as a carrier
frequency modulation with a sinusoid of large amplitude (e.g., 200
MHz) and low frequency (e.g., 35 kHz)
\cite{gianni_compensation_2013}. CPR algorithms such as DD-PLL or BPS
use a symbol detector (or slicer) to estimate phase error. Therefore,
severe performance degradation may be experienced in the presence of
amplitude and phase imbalances introduced at the transmitter side,
between the in-phase (I) and quadrature (Q) components (i.e., Tx I/Q
\emph{imbalance}) \cite{zhang_algorithms_2019}. Tx I/Q time skew is
another impairment that degrades the receiver performance.
The compensation of the I/Q imbalance has been extensively addressed
in the literature (e.g.,
\cite{da_silva_widely_2016,lagha_blind_2020}). Typically, the
transmitter I/Q imbalance compensation at the receiver is achieved
\emph{after} the CPR stage. Unfortunately, the performance of this
approach with conventional CPR algorithms such as DD-PLL or BPS can be
seriously degraded with high-order modulation formats if severe Tx I/Q
imbalance is present. This degradation is a result of the
constellation warping at the input of the slicers in the CPR blocks
caused by the Tx I/Q imbalance \cite{zhang_algorithms_2019}. To deal
with this problem, some blind estimation schemes of the Tx I/Q
imbalance have been recently reported in the literature
\cite{zhang_algorithms_2019},\cite{zhang_modulation-format-transparent_2019}. Although
these proposals can achieve good compensation of the Tx I/Q imbalance,
their proper performance in the presence of laser frequency
fluctuations is challenged, owing to the large latency introduced by
their parallel implementation in high-speed transceivers
\cite{zhang_modulation-format-transparent_2019}, and the poor
performance of feedforward CPRs such as BPS
\cite{zhang_algorithms_2019} when carrier frequency fluctuations are
experienced \cite{gianni_compensation_2013}.
This work proposes a superscalar parallel (SSP) two-stage CPR to
improve the receiver performance in the presence of the Tx I/Q
imbalance, Tx I/Q skew, laser phase noise, \emph{and} carrier
frequency fluctuations. A first CPR stage based on a DD-PLL is used to
compensate frequency offset and carrier frequency fluctuations
\cite{gianni_compensation_2013}. The second stage, based on a
feedforward CPR (e.g., BPS), operates on the signal demodulated by the
DD-PLL and is mainly used to compensate the residual laser phase noise
not eliminated by the DD-PLL. The accuracy of the phase estimations
provided by both DD-PLL and BPS is improved by adding a one-tap
adaptive $2\times 2$ multiple-input multiple output (MIMO) equalizer
to compensate the Tx I/Q imbalance at the input of the slicers in the
CPR. As a result of the extra latency introduced by the one-tap MIMO
equalizer in the DD-PLL stage, the use of existing low latency
parallel DD-PLL schemes such as that proposed in
\cite{gianni_compensation_2013} is precluded for implementing in
high-speed receivers. Therefore, we use a superscalar parallel
architecture to reduce the latency of the DD-PLL
\cite{piyawanno_low_2010}. We show that the proposed SSP-based CPR
scheme with Tx I/Q imbalance compensation is able to drastically
improve the receiver performance even in the presence of Tx I/Q time
skew.
\section{Superscalar Parallel Two-Stage CPR with Tx I/Q
Compensation}\label{sec:2}
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.39\columnwidth]{fig2}}
\hfill
\subfloat[]{\includegraphics[width=0.59\columnwidth]{fig4a}}
\caption{\small Proposed (a) DD-PLL with Tx I/Q imbalance
compensation and (b) 2-stage SSP-PLL+BPS.}
\label{fig2_fig4a}
\vspace{-.35cm}
\end{figure}
Without loss of generality, we consider the transmitted signal in one
polarization. The complex optical carrier with Tx I/Q imbalance can be
written as
$p(t)=
(1-\varepsilon_g)\cos[\omega_0t+\phi_e/2+\theta(t)]+j(1+\varepsilon_g)\sin[\omega_0t-\phi_e/2+\theta(t)],$
where $\varepsilon_g$ and $\phi_e$ are the gain and phase imbalance,
respectively, $\theta(t)$ is the carrier phase error, while
$\omega_0=2\pi f_0$ with $f_0$ being the carrier frequency
\cite{da_silva_widely_2016}. Let $a_n=a_n^{(I)}+ja_n^{(Q)}$ and $T$ be
the $n$-th transmitted complex quadrature amplitude modulation (QAM)
symbol and the symbol period, respectively. The equalized received
baseband signal (e.g., after chromatic dispersion compensation) in a
dispersive optical channel with Tx I/Q imbalance and carrier phase
error can be formulated as
\begin{equation}
\label{eq:2}
{\bf r}_n={\bf P}_n {\bf W} {\bf a}_n+{\bf z}_n,
\end{equation}
where ${\bf r}_n=[r^{(I)}_{n}\quad r^{(Q)}_{n}]^{T_r}$ is a
$2\times 1$ real vector whose components are the received I/Q samples
(superscript $T_r$ denotes transpose),
${\bf a}_n=[a^{(I)}_{n}\quad a_n^{(Q)}]^{T_r}$,
${\bf z}_n=[z^{(I)}_{n}\quad z_n^{(Q)}]^{T_r}$ is the vector that
includes the amplified spontaneous emission (ASE) noise as well as any
other residual interference, while ${\bf W}$ and ${\bf P}_n$ are the
$2\times 2$ real matrices:
\begin{equation}
\label{eq:W}
{\bf W}=
\begin{bmatrix}
\cos(\frac{\phi_e}{2}) &\sin(\frac{\phi_e}{2}) \\
\sin(\frac{\phi_e}{2}) &\cos(\frac{\phi_e}{2}) \\
\end{bmatrix}
\begin{bmatrix}
1-\varepsilon_g &0 \\
0 &1+\varepsilon_g \\
\end{bmatrix};\quad
{\bf P}_n=
\begin{bmatrix}
\cos(\theta_n) &-\sin(\theta_n) \\
\sin(\theta_n) &\cos(\theta_n) \\
\end{bmatrix},
\end{equation}
with $\theta_n=\theta(nT)$. Notice that $\bf W$ models the Tx I/Q
imbalance while ${\bf P}_n$ incorporates the effect of the phase
error, $\theta_n$. The latter is modeled as
$\theta_n=\psi_n+\Omega_cn+\Delta \Omega_n$, where $\psi_n$ is the
laser phase noise (i.e., $\psi_n=\sum_{k=-\infty}^n\eta_k$ where
$\eta_k$ are zero-mean iid Gaussian random variables with variance
$\sigma^2_{\eta}=2\pi T\Delta \nu$, being $\Delta \nu$ the laser
linewidth), $\Omega_c=2\pi T f_c$ with $f_c$ being the frequency
offset, while $\Delta \Omega_n$ is the phase change caused by the
frequency fluctuation
$\Delta \Omega_n=(A_p/\Delta f_c) \sin(2\pi T\Delta f_c n)$, where
$A_p$ and $\Delta f_c$ are the amplitude and frequency of the carrier
modulation tone \cite{gianni_compensation_2013}. Figure
\ref{fig2_fig4a}-a depicts the proposed first-stage CPR based on a
DD-PLL. The received signal \eqref{eq:2} is first demodulated by using
a $2\times2$ rotation matrix ${\bf Q}_n$ which uses the carrier phase
$\hat \theta_n$ provided by the DD-PLL (e.g., notice that
${\bf Q}_n={\bf P}^{-1}_n$ if $\hat \theta_n=\theta_n$). Then, the
samples are processed by a $2\times2$ real matrix $\bf C$ in order to
compensate the Tx I/Q imbalance at the slicer input (ideally,
$\bf C=\bf W^{-1}$). After that, the BPS algorithm estimates the phase
error by testing $B$ different phases. However, as pointed out in
\cite{zhang_algorithms_2019}, the Tx I/Q imbalance may cause BPS to
make wrong decisions and subsequently degrade the accuracy of the
phase estimation. To avoid this degradation, multiplication of the
slicer inputs by a $2\times2$ real matrix $\bf C$ is introduced in the
BPS to compensate the effects of the Tx I/Q imbalance, similarly to
what was done before with the DD-PLL as shown in
Fig. \ref{fig2_fig4a}-a.
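In the noiseless case, the model \eqref{eq:2} and the ideal compensation described above can be illustrated numerically: applying ${\bf Q}_n={\bf P}_n^{-1}$ and ${\bf C}={\bf W}^{-1}$ recovers the transmitted I/Q pair exactly. A minimal sketch follows; the imbalance and phase values are arbitrary examples of our own, not values from the paper.

```python
import math

def matmul(M, N):
    """Product of two 2x2 matrices (given as nested lists)."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    """2x2 matrix times 2x1 vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

# Hypothetical example values (illustration only).
eps_g = 0.1                      # gain imbalance
phi_e = math.radians(10.0)       # phase imbalance
theta = 0.3                      # carrier phase error for one symbol

# Tx I/Q imbalance matrix W = R(phi_e/2) * diag(1 - eps_g, 1 + eps_g).
W = matmul([[math.cos(phi_e / 2), math.sin(phi_e / 2)],
            [math.sin(phi_e / 2), math.cos(phi_e / 2)]],
           [[1 - eps_g, 0.0], [0.0, 1 + eps_g]])
# Carrier phase rotation P_n.
P = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]

a = [1.0, -1.0]                  # transmitted I/Q pair of a QAM symbol
r = matvec(P, matvec(W, a))      # noiseless received sample r_n = P_n W a_n

Q = inv2(P)                      # ideal demodulation, hat(theta)_n = theta_n
C = inv2(W)                      # ideal Tx I/Q compensation, C = W^{-1}
a_hat = matvec(C, matvec(Q, r))  # slicer input: recovers a exactly
```

With ASE noise or an imperfect phase estimate, $\hat{\bf a}_n$ only approximates ${\bf a}_n$, which is precisely why the slicer decisions (and hence the CPR phase estimates) degrade when ${\bf C}$ is absent.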
\subsection{Impact of the Tx I/Q Time Skew}
The transmitter I/Q time skew degrades the receiver performance. This
effect is mainly caused by mismatches between the I/Q electrical path
responses from the Tx digital-to-analog converters (DACs) up to the
Mach Zehnder modulator (MZM). Based on the models used in
\cite{da_silva_widely_2016}, the discrete time electrical signal at
the MZM input in the presence of I/Q time skew can be written as
${\bf s}_n=\sum_{k}{\bf H}_k {\bf a}_{n-k}$, where ${\bf H}_k$ are
$2 \times 2$ real matrices whose elements depend on the I/Q skew,
$\tau$, and the Tx filter. Then, the received signal \eqref{eq:2} is
given by
${\bf r}_n={\bf P}_n {\bf W}_0 {\bf a}_n+{\bf z}_n+{\bf q}_n,$ where
${\bf W}_0={\bf W}{\bf H}_0$ while
${\bf q}_n={\bf P}_n {\bf W}\sum_{k,k\ne0}{\bf H}_k {\bf a}_{n-k}$ is
the $2\times 1$ real vector with the interference components caused by
the Tx I/Q time skew and the adjacent symbols. Therefore, the proposed
DD-PLL+BPS scheme with ${\bf C}={\bf W}_0^{-1}$ can compensate the Tx
I/Q imbalance (${\bf W}$) and a part of the Tx I/Q time skew
(${\bf H}_0$). Thus, the interference component ${\bf q}_n$ will be
\emph{seen} by the proposed DD-PLL+BPS as an extra noise component.
\subsection{Superscalar Parallel Implementation of DD-PLL (SSP-PLL)}
The high symbol rate requirements mandate the use of parallel
processing techniques for the implementation of coherent
transceivers. The tolerance of a conventional interleaved parallel
DD-PLL to the laser linewidth is reduced by a
factor $N\times D_L$, where $N$ is the parallelization factor and
$D_L$ is the processing delay (latency) due to the pipelined
implementation of the DD-PLL. A low latency parallel DD-PLL CPR has
been proposed in \cite{gianni_compensation_2013} to improve the
tolerance to the laser linewidth and frequency
fluctuations. Unfortunately, the benefits of this low latency PLL
\cite{gianni_compensation_2013} will be significantly reduced due to
the presence of the MIMO compensation filter ${\bf C}$ in the loop. To
deal with this problem, we propose to implement a superscalar
parallelization DD-PLL. The superscalar parallelization of the PLL has
been proposed in \cite{piyawanno_low_2010} for implementing a feedback
CPR in high-speed optical transceivers. SSP employs pilot
symbols\footnote{Pilot symbols of the SSP-PLL are used to correct
cycle slips (CS) generated in the different CPR stages.} and a
buffer to store the input samples, which are then rearranged to have
consecutive symbols in each parallelized channel (see
Fig. \ref{fig2_fig4a}-b). In this way, the processing delay of the
parallel architecture can be reduced from $N\times D_L$ to
$D_L$. Notice that a parallel implementation of all the DSP blocks
after the SSP-PLL stage (e.g., BPS, Tx I/Q imbalance and skew
compensation) is straightforward.
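The rearrangement at the heart of the SSP buffer can be sketched with a toy index calculation, assuming the buffer holds $N\times S$ samples:

```python
import numpy as np

N, S = 16, 400                 # parallelization factor and run length

samples = np.arange(N * S)     # one buffered block of input samples

# Conventional interleaving: channel j sees every N-th symbol, so its
# feedback loop effectively updates N times slower.
interleaved = samples.reshape(S, N).T

# Superscalar rearrangement: channel j processes S consecutive symbols,
# so each PLL tracks the phase at the full symbol-rate update.
superscalar = samples.reshape(N, S)

assert np.all(np.diff(interleaved, axis=1) == N)   # gaps of N symbols
assert np.all(np.diff(superscalar, axis=1) == 1)   # consecutive symbols
```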
\section{Simulation Results}
\label{sec:3}
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.66\columnwidth]{fig5}\label{fig5a}}
\hfill
\subfloat[]{\includegraphics[width=0.32\columnwidth]{fig6}\label{fig6a}}
\caption{\small (a) OSNR penalty versus $\phi_e$ and
$\varepsilon_g$ with $\tau=0.1T$ for the proposed and
conventional SSP-PLL+BPS. (b) Impact of the Tx I/Q time skew on
the performance of the proposed and conventional SSP-PLL+BPS.}
\label{fig5_fig6}
\vspace{-.65cm}
\end{figure}
We investigate the performance of the proposed two-stage CPR in the
presence of Tx I/Q imbalance and time skew, laser phase noise, and
carrier frequency fluctuations. We consider 16-QAM with a baud rate
of $1/T=32$ Giga-baud (GBd), a type II second-order digital PLL with
$D_L=5$, and a BPS algorithm with filter length $M=40$ and $B=32$ test
phases. The MIMO tap $\bf C$ is estimated at the receiver with the
LMS algorithm. The block buffer size is $N=16$ and $S=400$ with a
pilot overhead of 1\%. We consider $A_p=140$ MHz, $\Delta f_c=35$ kHz,
and $\Delta \nu=1$MHz. The Tx I/Q skew compensator at the Rx is
implemented by using two independent baud-rate adaptive real
equalizers.
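As a sketch of how such a $2\times2$ tap ${\bf C}$ can be adapted with LMS: the imbalance matrix below is an assumed example, the update uses noiseless known (or decided) symbols, and the step size and iteration count are toy values chosen only so that the sketch converges.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative imbalance matrix (not from the paper).
W = np.array([[1.08, 0.05], [0.06, 0.97]])
levels = np.array([-3.0, -1.0, 1.0, 3.0])

C = np.eye(2)                 # initial tap
mu = 1e-3                     # LMS step size (illustrative)
for _ in range(20000):
    a = rng.choice(levels, size=(2, 1))   # known (or decided) symbol
    r = W @ a                             # noiseless received sample
    e = a - C @ r                         # error at the slicer input
    C = C + mu * e @ r.T                  # stochastic-gradient update

# In this noiseless toy setting, C converges to W^{-1}.
assert np.linalg.norm(C - np.linalg.inv(W)) < 1e-2
```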
Figure \ref{fig5_fig6}-a shows the optical signal-to-noise ratio
(OSNR) penalty at a bit-error-rate (BER) of $10^{-3}$ versus the Tx
phase and the gain imbalance with Tx I/Q time skew of
$\tau=0.1T$. Notice the significant degradation caused by the Tx I/Q
imbalance in the conventional SSP-PLL+BPS solution. In contrast, the
proposed two-stage CPR architecture is able to drastically reduce the
OSNR penalty at high values of gain and phase errors. Nevertheless, it
is interesting to highlight that the receiver performance worsens with
the increase of the Tx gain and phase imbalance even with the proposed
2-stage CPR and a perfect estimation of the MIMO compensation matrix
$\bf C$. This is caused in part by the ASE noise amplification
generated by the MIMO compensation filter since
$\det\{{\bf C}\}\approx\det\{{\bf W}^{-1}\}=1/\det\{{\bf W}\}>1$ if
$0<|\phi_e|<45^{\text o}$ and/or $0<|\varepsilon_g|<1$ (see
\eqref{eq:W}). Finally, Fig. \ref{fig5_fig6}-b depicts the OSNR
penalty at a BER of $10^{-3}$ versus the Tx I/Q time skew. Note that
the proposed SSP-PLL+BPS architecture for $|\tau|< 0.2T$ achieves a
significant gain with respect to the conventional CPR without Tx I/Q
imbalance compensation.
\vspace{-.15cm}
\section{Conclusions}
\label{sec:conclusions}
\vspace{-.10cm} A superscalar parallel CPR architecture with Tx I/Q
imbalance compensation for coherent optical receivers has been
proposed. Numerical results have demonstrated that the proposed
parallel CPR scheme is able to drastically improve the receiver
performance in the presence of Tx I/Q imbalance, Tx I/Q time skew, and
laser frequency fluctuations. This improvement will be more noticeable
when the modulation order is increased (e.g., 64-QAM).
\vspace{-.15cm}
\section{Introduction}
\label{sec-introduction}
The question of whether DNA conducts electric charges is intriguing to
physicists and biologists alike. The suggestion that electron transfer/transport in DNA might be
biologically important has triggered a series of experimental and
theoretical investigations
\cite{MurAJG93,BerBR00,WanFSB00,DelB03,OneDB04,EndCS04}. Processes that
possibly use electron transfer include the function of DNA damage
response enzymes, transcription factors or polymerase co-factors all of
which play important roles in the cell \cite{AlbBLR94}. Indeed there is
direct evidence \cite{BooLCD03} that MutY --- a DNA base excision repair
enzyme with an [4Fe4S]$^+$ cluster of undetermined function ---
takes part in some kind of electron transfer as part of the DNA repair
process \cite{OneF93,RetHBL93}. This seems consistent with studies in
which an electric current is passed through DNA revealing that damaged
regions have significantly different electronic behaviour than healthy
ones \cite{BooLCD03}.
For physicists, the continuing progress
of nanotechnologies and the consequent need for further size
miniaturisation makes the DNA molecule an excellent candidate for
molecular electronics \cite{CunCPD02,GarABG01,RakAPK01,BhaBB03}. DNA
might serve as a wire, transistor, switch or rectifier depending on its
electronic properties \cite{DekR01,PorCD04,EndCS04}.
In its natural environment, DNA is always in liquid solution and
therefore experimentally one can study the molecule either in solution
or in artificially imposed dry environments. In solution experiments DNA
can be chemically processed to host a donor and an acceptor molecule at
different sites along its long axis. Photo-induced charge transfer rates
can then be measured whilst the donor/acceptor molecules, the distance
and the sequence of DNA that lies between them are varied. The
reactions are observed to depend on the type of DNA used, the
intercalation, the integrity of the intervening base pair stack and,
albeit weakly, on the molecular distance
\cite{BerBR00,OneDB04,BooLCD03,DelB03,TreHB01}.
Direct conductivity measurements on dry DNA have also been performed in
the past few years. The remarkable diversity that characterises the
results seems to arise from the fact that many factors need to be
experimentally controlled. These include methods for DNA alignment and
drying, the nature of the devices used to measure the conductivity, the
type of metallic contacts and the sequence and length of the DNA. DNA
has been reported to be an insulator \cite{BraESB98}, an ohmic conductor
\cite{RakAPK01,FinS99,NakGSO03,AsaT00,OkaKTS98} and a semiconductor
\cite{PorBVD00}. Theoretically, single-step super exchange
\cite{MurAJG93} and multi-step hopping \cite{BixGWL99} models have
provided interpretations of solution experiments. For experiments in dry
DNA, several additional approaches such as variable range hopping
\cite{YuS01}, one-dimensional quantum mechanical tight-binding models
\cite{CunCPD02,Roc03,ZhaU04a,ZhaU04b,RocBMK03,WanLS04} and non-linear
methods \cite{CueS04,Pey04} have also been proposed.
Despite the lack of a consistent picture for the electronic properties
of DNA, one conclusion has been established: the environment of the DNA
impacts upon its structural, chemical and thus probably also electronic
properties. Both theoretical and experimental studies show that the
temperature and the type of solution surrounding DNA have a significant
effect on its structure and shape \cite{YuS01,BarCJL01,BruGOR00}. The
effect of the environment is a key one to this report, where the
environmental fluctuations are explicitly modelled as providing
different types of disorder.
In this work, we focus on whether DNA, when treated as a quantum wire in
the fully coherent low-temperature regime, is conducting or not. To this
end, we study and generalise a tight-binding model of DNA which has been
shown to reproduce experimental \cite{CunCPD02} as well as {\em
ab-initio} results \cite{DavI04}. A main feature of the model is the
presence of sites which represent the sugar-phosphate backbone of DNA
but along which no electron transport is permissible. We measure the
``strength'' of the electronic transport by the {\em localisation
length} $\xi$, which roughly speaking parametrises whether an electron
is confined to a certain region $\xi$ of the DNA (insulating behaviour)
or can proceed across the full length $L$ ($\leq \xi$) of the DNA
molecule (metallic behaviour).
Sections \ref{sec-models}--\ref{sec-localization} introduce our models
and the numerical approach. In section \ref{sec-results-clean}, we
show that DNA sequences with different arrangement of nucleotide bases
Adenine (A), Cytosine (C), Guanine (G) and Thymine (T) exhibit different
$\xi$'s when measured, e.g.\ as function of the Fermi energy $E$. The
influence of external disorder, modelling variants in the solution,
bending of the DNA molecule, finite-temperature effects, etc., is
studied in section \ref{sec-results-disordered} where we show that,
surprisingly, the models support an increase of $\xi$ when disorder is
increased. We explain that this effect is linked to the existence of the
backbone sites.
\section{Tight-binding models for DNA with a gap in the spectrum}
\label{sec-models}
\subsection{The Fishbone model}
\label{sec-fishbone}
DNA is a macro-molecule consisting of repeated stacks of bases formed by
either AT (TA) or GC (CG) pairs coupled via hydrogen bonds and held in
the double-helix structure by a sugar-phosphate backbone. In Fig.\
\ref{fig-DNA}, we show a schematic drawing.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-DNA-BW.eps}
\caption{\label{fig-DNA}
The chemical composition of DNA with the four bases Adenine,
Thymine, Cytosine, Guanine and the backbone. The backbone is made of
phosphorylated sugars shown in yellow and brown.}
\end{figure}
In most models of electronic transport \cite{CunCPD02,Zho03} it has been
assumed that the transmission channels are along the long axis of the
DNA molecule \cite{perpendiculartolongaxis} and that the conduction path
is due to $\pi$-orbital overlap between consecutive bases
\cite{TreHB01}; density-functional calculations \cite{PabMCH00} have
shown that the bases, especially Guanine, are rich in $\pi$-orbitals.
Quantum mechanical approaches to the problem mostly use strictly
one-dimensional (1D) tight-binding models
\cite{Roc03,ZhaU04a,ZhaU04b,RocBMK03,WanLS04}.
Of particular interest to us is a quasi-1D model \cite{CunCPD02} which
includes the backbone structure of DNA explicitly and exhibits a
semiconducting gap.
This {\em fishbone model}, shown in Fig.\ \ref{fig-fishbone}, has one
central conduction channel in which individual sites represent a
base-pair; these are interconnected and further linked to upper and
lower sites, representing the backbone, but are \emph{not}
interconnected along the backbone. Every link between sites implies the
presence of a hopping amplitude. The Hamiltonian for the fishbone model
$(H_F)$ is given by:
\begin{eqnarray}
H_F &=& \sum_{i=1}^{L}
\sum_{q=\uparrow,\downarrow} \left(
-t_{i} |i \rangle \langle i+1|-t_i^q |i,q \rangle \langle i|
\right.
\nonumber \\
& &
\left.+ \varepsilon_i |i \rangle \langle i| + \varepsilon_i^q |i,q
\rangle \langle i,q| \right) + h.c. \label{eq-ham1D}\label{eq-fishbone}
\end{eqnarray}
where $t_{i}$ is the hopping between nearest-neighbour sites $i,i+1$
along the central branch, $t_i^q$ with $q=\uparrow, \downarrow$ gives
the hopping from each site on the central branch to the upper and lower
backbone respectively. Additionally, we denote the onsite energy at each
site along the central branch by $\varepsilon_i$ and the onsite energy
at the sites of the upper and lower backbone is given by
$\varepsilon_i^q$, with $q=\uparrow\downarrow$. $L$ is the number of
sites/bases in the sequence.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fishbone-model-grey.eps}
\caption{\label{fig-fishbone}
The fishbone model for electronic transport along DNA corresponding
to the Hamiltonian given in Eq.\ (\ref{eq-ham1D}). Lines denote
hopping amplitudes and circles give the central (grey) and backbone
(open) sites.}
\end{figure}
The model (\ref{eq-fishbone}) clearly represents a dramatic
simplification of DNA. Nevertheless, in Ref.\ \cite{CunCPD02} it was
shown that this model, when applied to an artificial sequence of
repeated GC base pairs, poly(dG)-poly(dC) DNA, reproduces experimental
current-voltage measurements when $t_{i}=0.37$ eV and
$t_i^q=0.74$ eV are used. Therefore, we will assume $t_i^q = 2
t_{i}$ and set the energy scale by $t_{i}\equiv 1$ for hopping
between GC pairs. In what follows we will adopt energy
units in which $eV=1$ throughout.
For natural DNA sequences, we need to know how the hopping amplitudes
vary as the electron moves between like pairs, i.e.\ from GC to GC or
from AT to AT, and unlike pairs, i.e., from GC to AT and vice versa. We
choose $t_{i}=1$ between identical and matching bases (e.g.\ AT/TA,
GC/CG). Assuming that the wavefunction overlap between consecutive bases
along the DNA strand is weaker between unlike and non-matching bases
(AT/GC, TA/GC, etc.), we choose $t_{i}=1/2$ in these cases.
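The mapping from a base sequence to hopping amplitudes can be sketched as follows (assuming one letter per base pair, with GC/CG and AT/TA equivalent as stated above):

```python
# Rule used in the text: hopping 1 between like base pairs (both
# AT-type or both GC-type), 1/2 between unlike pairs.
def hopping_sequence(seq):
    pair_type = {"A": "AT", "T": "AT", "G": "GC", "C": "GC"}
    return [1.0 if pair_type[b1] == pair_type[b2] else 0.5
            for b1, b2 in zip(seq, seq[1:])]

assert hopping_sequence("GCGC") == [1.0, 1.0, 1.0]  # poly(dG)-poly(dC)
assert hopping_sequence("GATC") == [0.5, 1.0, 0.5]  # GC|AT, AT|AT, AT|GC
```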
\subsection{The Ladder model}
\label{sec-ladder}
We performed semi-empirical calculations on DNA base pairs and stacks
using the SPARTAN quantum chemistry software package \cite{Spartan}. The
results have shown that the relevant electronic states of DNA
(highest-occupied and lowest-unoccupied molecular orbitals with and
without an additional electron) are localised on one of the bases of a
pair only. The reduction of the DNA base-pair architecture into a single
site per pair, as in the fishbone model (\ref{eq-fishbone}), is
obviously a highly simplified approach. As an improvement on this we
model each base as a distinct site where the base pair is then weakly
coupled by the hydrogen bonds. The resulting 2-channel model is shown in
Fig.\ \ref{fig-ladder}. This {\em ladder} model is a planar projection
of the structure of the DNA with its double-helix unwound. We note that
results for electron transfer also suggest that the transfer proceeds
preferentially down one strand \cite{KelB99}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{ladder-model-grey.eps}
\caption{\label{fig-ladder}
The ladder model for electronic transport along DNA. The model
corresponds to the Hamiltonian (\ref{eq-ladder}).}
\end{figure}
There are two central branches, linked with one another, each consisting
of interconnected sites that represent a complete base; these are
additionally linked to the upper and lower backbone sites. As in the
fishbone model, the backbone sites are not interconnected. The Hamiltonian
for the ladder model is given by
\begin{eqnarray}
H_{L} &=& \sum_{i=1}^{L} \left[
\sum_{\tau=1,2}
\left( t_{i,\tau}|i,\tau\rangle \langle i+1,\tau| +
\varepsilon_{i,\tau} |i,\tau\rangle \langle i,\tau| \right) \right.
\nonumber \\
& & \mbox{} + \sum_{q=\uparrow,\downarrow}
\left( t_i^q |i,\tau\rangle \langle i,q(\tau)|+
\varepsilon_i^q|i,q\rangle \langle i,q| \right)
\nonumber \\
& & \mbox{ }
+ t_{1,2}|i,1\rangle \langle i,2|
\Big]
+ h.c. \label{eq-ham2D}\label{eq-ladder}
\end{eqnarray}
where $t_{i,\tau}$ is the hopping amplitude between sites along each
branch $\tau=1$, $2$ and $\varepsilon_{i,\tau}$ is the corresponding
onsite potential energy. $t_i^q$ and $\varepsilon_i^q$ as before
give hopping amplitudes and onsite energies at the backbone sites. Also,
$q(\tau)=\uparrow, \downarrow$ for $\tau=1, 2$, respectively. The new
parameter $t_{1,2}$ represents the hopping between the two central
branches, i.e., perpendicular to the direction of conduction. SPARTAN
results suggest that this value, dominated by the wave function overlap
across the hydrogen bonds, is weak and so we choose $t_{1,2}= 1/10$.
\subsection{Including disorder}
\label{sec-disorder}
In order to study the transport properties of DNA, we could now either
use artificial DNA (poly(dG)-poly(dC) \cite{PorBVD00}, random sequences
of A,T,G,C \cite{PenBGH92,YamSHA04}, etc.) or natural DNA (bacteriophage
$\lambda$-DNA \cite{PabMCH00}, etc.). The biological content of the
sequence would then simply be encoded in a specific sequence of hopping
amplitudes $1$ and $1/2$ between like and unlike base-pair sequences.
However, in vivo and most experimental situations, DNA is exposed to
diverse environments and its properties, particularly those related to
its conformation, can change drastically depending on the specific
choice. The solution, thermal effects, presence of binding and
packaging proteins and the available space are factors that alter the
structure and therefore the properties that one is measuring
\cite{YuS01,BarCJL01}. Clearly, such dramatic changes should also be
reflected in the electronic transport characteristics. Since it is
precisely the backbone that will be most susceptible to such influences,
we model such environmental fluctuations by including variations in the
onsite potentials $\varepsilon_{i,q}$.
Different experimental situations will result in a different
modification of the backbone electronic structure, and we model this by
choosing different distribution functions for the onsite potentials,
ranging from uniform disorder $\varepsilon_{i,q}\in [-W/2,W/2]$, to
Gaussian disorder and on to binary disorder $\varepsilon_{i,q}=
\pm W/2$. $W$ is a measure for the strength of the disorder in all
cases. Particularly the binary disorder model can be justified by the
localisation of ions or other solutes at random positions
along the DNA strand \cite{BarCJL01}.
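The three disorder types can be sketched as draws for the backbone onsite energies; the Gaussian width $W/2$ used below is one possible convention for tying the distribution to the strength parameter $W$.

```python
import numpy as np

rng = np.random.default_rng(3)
W, n = 1.0, 10000       # disorder strength and number of backbone sites

uniform = rng.uniform(-W / 2, W / 2, size=n)    # e.g. solution effects
gaussian = rng.normal(0.0, W / 2, size=n)       # width W/2: one convention
binary = rng.choice([-W / 2, W / 2], size=n)    # ions at random positions

assert np.all(np.abs(uniform) <= W / 2)
assert set(np.unique(binary)) <= {-W / 2, W / 2}
```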
\subsection{Effective models and the energy gap}
\label{sec-effective}
Due to the non-connectedness of the backbone sites along the DNA
strands, the models (\ref{eq-fishbone}) and (\ref{eq-ladder}) can be
further simplified to yield models in which the backbone sites are
incorporated into the electronic structure of the DNA. The effective
fishbone model is then given by
\begin{eqnarray}
\tilde{H}_F &=& \sum_{i=1}^{L}
-t_{i} |i \rangle \langle i+1| + h.c.
\nonumber \\
& & \mbox{ } + \left[\varepsilon_i -
\sum_{q=\uparrow,\downarrow}
\frac{\left(t_{i}^{q}\right)^{2}}{\varepsilon_{i}^{q} - E}\right]
|i \rangle \langle i| \quad .
\label{eq-fishbone-effective}
\end{eqnarray}
Similarly, the effective ladder model reads as
\begin{eqnarray}
\tilde{H}_{L} &=& \sum_{i=1}^{L}
t_{1,2}|i,1\rangle \langle i,2| +
\sum_{\tau=1,2}
t_{i,\tau}|i,\tau\rangle \langle i+1,\tau|
\nonumber \\
& &
+
\left[ \varepsilon_{i,\tau} -
\frac{\left(t_{i}^{q(\tau)}\right)^{2}}{\varepsilon_{i}^{q(\tau)} - E}
\right]
|i,\tau\rangle \langle i,\tau|
\nonumber \\
& & \mbox{ }+ h.c. \quad . \label{eq-ladder-effective}
\end{eqnarray}
In these two models, the backbone has been incorporated into an
energy-dependent onsite potential on the main DNA sites. This
re-emphasises that the presence of the backbone influences the local
electronic structure on the DNA bases and similarly, any variation in
the backbone disorder potentials $\varepsilon_{i}^{\uparrow,\downarrow}$
will result in a variation of {\em effective} onsite potentials as
given in the brackets of Eqs.\ (\ref{eq-fishbone-effective}) and
(\ref{eq-ladder-effective}).
Both models allow us to quickly calculate the gap of the completely ordered
system (all onsite potentials zero) by assuming that the lowest-energy
state $\psi=\sum_{i} \psi_{i(,\tau)} |i(,\tau)\rangle$ in each band
corresponds to constant $\psi_i$ ($\psi_{i,\tau}$) whereas for the
highest-energy states, a checker-board pattern is obtained with
$\psi_{i}=\psi_{i+1}$ ($\psi_{i,\tau}=-\psi_{i+1,\tau}$,
$\psi_{i,1}=-\psi_{i,2}$). For the fishbone model, this shows that,
e.g.\ $E_{{\rm min},\mp}=-
t_{i}\mp\sqrt{t_{i}^2+t_{i,\uparrow}^2+t_{i,\downarrow}^{2}}$ and
$E_{{\rm max},\mp}=
t_{i}\mp\sqrt{t_{i}^2+t_{i,\uparrow}^2+t_{i,\downarrow}^{2}}$.
For the chosen set of hopping parameters for
(\ref{eq-fishbone-effective}) and (\ref{eq-ladder-effective}), this
gives $E_{{\rm min},\mp}= -4, 2$ and $E_{{\rm max},\mp}= -2, 4$ for the
fishbone model and $E_{{\rm min},\mp}\approx -3.31, 1.21$ and $E_{{\rm
max},\mp}= -1.21, 3.31$ for the ladder model.
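The quoted band edges can be checked numerically. With all onsite potentials zero, the effective onsite terms of Eqs.\ (\ref{eq-fishbone-effective}) and (\ref{eq-ladder-effective}) reduce the band-edge condition to a quadratic in $E$, as sketched below.

```python
import numpy as np

t, t_up, t_dn = 1.0, 2.0, 2.0     # central and backbone hoppings (GC)

# Fishbone: the effective onsite term is (t_up^2 + t_dn^2)/E, so
#   E^2 + 2 t cos(k) E - (t_up^2 + t_dn^2) = 0
# at the band edges cos(k) = +/-1 (constant / checker-board states).
s = np.sqrt(t**2 + t_up**2 + t_dn**2)
edges_fishbone = sorted([-t - s, -t + s, t - s, t + s])
assert edges_fishbone == [-4.0, -2.0, 2.0, 4.0]

# Ladder: with t_{1,2} = 0.1 and one backbone site (t_q = 2) per base,
# the outermost edges follow from |2 t cos(k) + t_{1,2}| = 2 t + t_{1,2};
# the inner (gap) edges are the mirror images by symmetry.
t12, tq = 0.1, 2.0
c = 2 * t + t12
edges_ladder = [c / 2 - np.sqrt(c**2 / 4 + tq**2),
                c / 2 + np.sqrt(c**2 / 4 + tq**2)]
assert np.allclose(edges_ladder, [-1.21, 3.31], atol=0.005)
```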
\section{The numerical approach and localisation}
\label{sec-localization}
There are several approaches suitable for studying the transport
properties of the models (\ref{eq-fishbone}) and (\ref{eq-ladder}) and
these can be found in the literature on transport in solid state
devices, or, perhaps more appropriately, quantum wires. Since the
variation in the sequence of base pairs precludes a general solution, we
will use two methods well-known from the theory of disordered systems
\cite{RomS03}.
The first method is the iterative transfer-matrix method (TMM)
\cite{PicS81a,PicS81b,MacK83,KraM93,Mac94} which allows us in principle
to determine the localisation length $\xi$ of electronic states in
systems with cross sections $M=1$ (fishbone) and $2$ (ladder) and length
$L \gg M$, where typically a few million sites are needed for $L$ to
achieve reasonable accuracy for $\xi$. However, in the present
situation we are interested in finding $\xi$ also for viral DNA strands
of typically only a few tens of thousands of base pairs. Thus in
order to restore the required precision, we have modified the conventional
TMM and now perform the TMM on a system of fixed length $L_0$. This
modification has been previously used \cite{FraMPW95,RomS97b,NdaRS04}
and may be summarised as follows:
After the usual forward calculation with a global transfer matrix ${\cal
T}_{L_0}$, we add a backward calculation with transfer matrix ${\cal
T}^{\rm b}_{L_0}$. This forward-backward-multiplication procedure is
repeated $K$ times. The effective total number of TMM multiplications is
$L_{\rm eff}=2KL_0$ and the global transfer matrix is ${\tau}_{L_{\rm eff}} =
\left( {\cal T}^{\rm b}_{L_0} {\cal T}_{L_0}\right)^K$. It can be
diagonalised as for the standard TMM with $K\rightarrow \infty$ to give
${\tau}^{\dagger}_{L_{\rm eff}} {\tau}_{L_{\rm eff}} \rightarrow \exp[ {\rm
diag}(4KL_0/\xi_{\tau})]$ with $\tau=1$ or $\tau= 1, 2$ for the fishbone
and ladder models, respectively. The largest $\xi_{\tau} \forall \tau$
then corresponds to the localisation lengths of the electron on the DNA
strand and will be measured in units of the DNA base-pair spacing
($0.34$ nm).
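For orientation, a minimal transfer-matrix estimate of $\xi$ can be sketched for a strictly 1D disordered chain (a toy Anderson chain, not the fishbone or ladder models themselves): the Lyapunov exponent of the accumulated transfer-matrix product gives $1/\xi$.

```python
import numpy as np

def localisation_length(E, W, t=1.0, L=100000, seed=4):
    """Lyapunov-exponent estimate of xi for a toy 1D Anderson chain."""
    rng = np.random.default_rng(seed)
    psi_prev, psi = 0.0, 1.0          # starting vector (psi_0, psi_1)
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=L):
        # psi_{i+1} = (E - eps_i)/t * psi_i - psi_{i-1}
        psi_prev, psi = psi, (E - eps) / t * psi - psi_prev
        scale = abs(psi) + abs(psi_prev)
        log_growth += np.log(scale)   # accumulate growth, then rescale
        psi /= scale
        psi_prev /= scale
    return L / log_growth             # xi = 1 / Lyapunov exponent

xi_weak = localisation_length(E=0.0, W=1.0)
xi_strong = localisation_length(E=0.0, W=2.0)
assert 0 < xi_strong < xi_weak        # stronger disorder localises more
```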
The second method that we will use is the recursive Green function
approach pioneered by MacKinnon \cite{Mac80,Mac85}. It can be used to
calculate the dc and ac conductivity tensors and the density of states
(DOS) of a $d$-dimensional disordered system and has been adopted to
calculate all kinetic linear-transport coefficients such as
thermoelectric power, thermal conductivity, Peltier coefficient and
Lorentz number \cite{RomMV03}.
The main advantage of both methods is that they work reliably (i) for
short DNA strands ranging from 13 (DFT studies \cite{PabMCH00}) base
pairs up to 30 base pairs in length, which are used in the nanoscopic
transport measurements \cite{DavI04} as well as (ii) for somewhat longer
DNA sequences as modelled in the electron transfer results and (iii)
even for complete DNA sequences which contain, e.g.\ for human
chromosomes up to 245 million base pairs \cite{AlbBLR94}.
\section{DNA sequences}
\label{sec-dna}
The exact arrangement of the four bases A, T, G, C determines the nature
and function of its associated DNA strand such as the chemical
composition of the proteins which are encoded. While previous studies
have aimed to elucidate whether DNA conducts at all, we shall also focus
our attention on how different DNA sequences, be they
artificial or naturally occurring, ``conduct'' charge differently. Thus
we study a set of different DNA sequences.
A convenient starting point for most electronic transport studies
\cite{PorCD04} is the aforementioned poly(dG)-poly(dC) sequence, which
corresponds to a simple repetition of a GC (or CG) pair. Note that
within our models, there is no difference between GC and CG pairs.
Although not occurring naturally, such sequences can be synthesised
easily. Another convenient choice of artificial DNA strand is a simple
{\em random} sequence of the four bases, which we construct with equal
probability for all 4 bases. However, they are not normally used in
experiments.
As DNA samples existing in living organisms, we shall use $\lambda$-DNA
of the bacteriophage virus \cite{lambda} which has a sequence of 48502
base pairs. It corresponds to a bacterial virus and is biologically very
well characterised. We also investigate the $29728$ bases of the SARS
virus \cite{sars}.
Telomeric DNA is a particular buffer part at the beginnings and ends
of DNA strands for eukaryote cells \cite{AlbBLR94}. In mammals it is a
Guanine rich sequence in which the pattern TTAGGG is repeated over
thousands of bases. Its length is known to vary widely between species
and individuals but we assume a length of 6000 base-pairs.
Last, we show some studies of centromeric DNA for chromosome 2 of yeast
with 813138 base pairs \cite{cen2}. This DNA is also reportedly rich in G
bases and has a high rate of repetitions which should be favourable for
electronic transport.
Initially, we will compute transport properties for complete DNA
sequences, i.e.\ including and not differentiating between coding and
non-coding sequences (this distinction applies to the naturally
occurring DNA strands only). However, we will later also study the
difference between those two different parts of a given DNA. We
emphasise that while non-coding DNA suffers from the label of ``junk'',
it is now known to play several important roles in the functioning of
DNA \cite{AlbBLR94}.
Before leaving the description of our DNA sequences, we note that
occasionally, we show results for ``scrambled'' DNA. This is DNA with
the same number of A, T, C, G bases, but with their order randomised.
Clearly, such sequences contain the same set of electronic potentials
and hopping variations, but would perform quite differently if released
into the wild. A comparison of their transport properties with those
from the original sequence thus allows us to measure how important the
exact fidelity of a sequence is.
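Scrambling can be sketched as a seeded shuffle that preserves the base content:

```python
import random
from collections import Counter

def scramble(seq, seed=0):
    """Return a 'scrambled' sequence: same base content, random order."""
    bases = list(seq)
    random.Random(seed).shuffle(bases)
    return "".join(bases)

original = "TTAGGG" * 10          # telomeric repeat as a toy example
shuffled = scramble(original)

assert Counter(shuffled) == Counter(original)   # identical base counts
assert len(shuffled) == len(original)
```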
\section{Results for clean DNA}
\label{sec-results-clean}
Let us start by studying the localisation properties of DNA without any
onsite disorder either at $\varepsilon_{i,\tau}$ or at
$\varepsilon_{i,q}$. For a poly(dG)-poly(dC) sequence, both fishbone
and ladder model produce two separate energy bands between the extremal
values computed at the end of section \ref{sec-effective}. Within these
energy bands, the electronic states are extended with infinite
localisation length $\xi$ as expected. Outside the bands, transport is
exponentially damped due to an absence of states and the $\xi$ values
are very close to zero. In Fig.\ \ref{fig-inverse2d} the
resulting {\em inverse} localisation lengths are shown.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-invloc_energy.eps}
\caption{\label{fig-inverse2d}
Plot of the inverse localisation lengths $\xi^{-1}$ as a function of
Fermi energy for the ladder model (\ref{eq-ladder-effective}) and
four DNA sequences as well as for the fishbone model with a
poly(dG)-poly(dC) sequence. The data for telomeric DNA has been
shaded for clarity. Lines are guides to the eye only.}
\end{figure}
These are zero for the extended states in the two bands, but finite
outside, showing the quick decrease of the localisation lengths outside
the bands.
In Fig.\ \ref{fig-energy2d}, we show the same data but now plot the
localisation length itself.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-loc_energy.eps}
\caption{\label{fig-energy2d}
Localisation lengths as a function of energy for poly(dG)-poly(dC),
telomeric, random-ATGC, and $\lambda$-DNA as described in the text.
The spectrum is symmetric in energy. The data for telomeric DNA has
been shaded for clarity. Lines are guides to the eye only.}
\end{figure}
We see that the energy gap observed previously \cite{CunCPD02} for the
poly(dG)-poly(dC) sequence in the fishbone model remains. The difference
with respect to the ladder model is a slight renormalisation of the gap
width.
The localisation lengths of poly(dG)-poly(dC) DNA tend to infinity,
meaning that the sequence is perfectly conducting. This is expected due
to its periodic electronic structure.
Turning our attention to the other three DNA sequences, we find that
telomeric DNA also gives rise to perfect conductivity like
poly(dG)-poly(dC) DNA. But due to its structure of just 6 repeating base
pairs, there is a further split of each band into 3 separate sub-bands.
They may be calculated as in section \ref{sec-effective}.
We would like to point out that it may therefore be advantageous to use
the naturally occurring telomeric parts of DNA sequences as prime,
in-vivo candidates when looking for good conductivity in a DNA strand.
The structure of the energy dependence for the random-ATGC and the
$\lambda$-DNA is very different from the preceding two sequences, but it
is quite similar between just these two. The biological content of the
DNA sequences is --- within the description by our quantum models ---
just a sequence of binary hopping elements between like and unlike base
pairs. Thus the models are related to the physics of random hopping
models \cite{EilRS98a,BisCRS00} and in agreement with these, we see a
Dyson peak \cite{Dys53} in the centre of each sub-band. Furthermore, we
see that the range of energies for which we observe non-zero
localisation lengths is increased into the gap and for large absolute
values of the energy. This is similar to the broadening of the single
energy band for the Anderson model of localisation \cite{RomS03}.
The localisation lengths, which roughly equal the average distance an
electron would be able to travel (conduct), are close to the distance of
$20$ bases within the band, with a maximum of $\sim 30$ bases at the
centre of each band. Note that this result is surprisingly good ---
given the level of abstraction used in the present models --- when
compared to the typical distances over which electron transfer processes
have been shown to be relevant
\cite{WanFSB00,BooLCD03,KelB99,MurAJG93,OneDB04,DelB03,TreHB01}.
\section{Results for disordered DNA}
\label{sec-results-disordered}
\subsection{DNA randomly bent or at finite temperatures}
\label{sec-uniform_energy}
As argued before, environmental influences on the transport properties
of DNA are likely to influence predominantly the electronic structure of
the backbone. Within our models, this can be captured by adding a
suitable randomness onto the backbone onsite potentials
$\varepsilon_{i}^{q}$. In this fashion, we can model for example the
influence of a finite-temperature \cite{BruGOR00} and thus a coupling to
phonons \cite{GutMC04}. We emphasise however, that in order for our
localisation results --- which rely on quantum mechanical interference
effects --- to remain valid, the phase breaking lengths should stay much
larger than the sequence lengths. Thus the permissible temperature range
is a few K only.
The bending of DNA is another possibility which can be modelled by a
local, perhaps regular, change in $\varepsilon_{i}^{q}$ along the strand.
Another important aspect is the change in $\varepsilon_{i}^{q}$ due to
the presence of a solution in which DNA is normally immersed.
All these effects can be modelled in a first attempt by choosing an
appropriate distribution function $P(\varepsilon_{i}^{q}$). Let us first
choose uniform disorder with $\varepsilon_{i}^{q} \in [-W/2,W/2]$.
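Such a uniform backbone disorder can be generated directly; the following sketch (names are ours, for illustration) draws onsite energies for both backbone strands of a ladder geometry.

```python
import numpy as np

def backbone_disorder_uniform(L, W, eps0=0.0, seed=0):
    """Draw uniform backbone onsite energies eps_i^q in
    [eps0 - W/2, eps0 + W/2] for both backbone strands (q = up/down)
    of a ladder of length L."""
    rng = np.random.default_rng(seed)
    return eps0 + rng.uniform(-W / 2, W / 2, size=(2, L))

eps = backbone_disorder_uniform(L=1000, W=1.0)
```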
In Fig.\ \ref{fig-LM-uniW1-loc_energy} we show the results for all 4 DNA
sequences as a function of energy for $W=1$. Comparing this to Fig.\
\ref{fig-energy2d}, we see that now all localisation lengths are finite;
poly(dG)-poly(dC) and telomeric DNA have localisation lengths of a few
hundred and a few tens of bases, respectively. The localisation lengths
for random-ATGC and $\lambda$-DNA are only slightly reduced. In all
cases, the structure of 2 energy bands remains. Furthermore, $W=1$
already represents a sizable broadening of about $1/2$ the width of each
band. Thus although the localisation lengths are finite compared to the
results of section \ref{sec-results-clean}, they are still larger than
the lengths of the DNA strands used in the nano-electric experiments,
implying finite conductances. We remark that the Dyson peaks have
vanished as expected \cite{EilRS98a}. We plot the DOS for $\lambda$-DNA
in Fig.\ \ref{fig-LM-uniW1-loc_energy} which clearly indicates the $2$
bands.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-uniW1-loc-DOS_energy.eps}
\caption{\label{fig-LM-uniW1-loc_energy}
Top: Energy dependence of the localisation lengths, $\xi(E)$, for
poly(dG)-poly(dC), telomeric, random-ATGC and $\lambda$-DNA in the
presence of {\em uniform} backbone disorder with $W=1$. Only every
2nd and 5th symbol is shown for random-ATGC and $\lambda$-DNA,
respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
Upon further increasing the disorder to $W=2$, as shown in Fig.\
\ref{fig-LM-uniW2-loc_energy}, the localisation lengths continue to
decrease. Note that we observe a slight broadening of the bands and
states begin to shift into the gap.
We also see that the behaviour of random-ATGC and $\lambda$-DNA is quite
similar and at these disorder strengths, even telomeric DNA follows the
same trends.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-uniW2-loc-DOS_energy.eps}
\caption{\label{fig-LM-uniW2-loc_energy}
Top: $\xi(E)$ as in Fig.\ \ref{fig-LM-uniW1-loc_energy} but with
$W=2$. Only every 2nd and 5th symbol is shown for random-ATGC and
$\lambda$-DNA, respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
At $W=5$ (Fig.\ \ref{fig-LM-uniW5-loc_energy}), the localisation lengths have been reduced to a few base-pair
separation distances and the differences between all $4$ sequences are
very small. The gap has been nearly completely filled as shown by the
DOS, albeit with states which have a very small localisation length.
This will become important later.
Thus, in summary, we have seen that adding uniform disorder onto the
backbone leads to a reduction of the localisation lengths and
consequently a reduction of the electron conductance. Strictly speaking,
all 4 strands are insulators. However, their localisation lengths can
remain quite large, larger than in many of the experiments. Thus even
the localised electron can contribute towards a finite conductivity for
these short sequences. In agreement with experiments, poly(dG)-poly(dC)
DNA is the most prominent candidate.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-uniW5-loc-DOS_energy.eps}
\caption{\label{fig-LM-uniW5-loc_energy}
Top: $\xi(E)$ as in Fig.\ \ref{fig-LM-uniW1-loc_energy} but with
$W=5$. Only every 2nd and 5th symbol is shown for random-ATGC and
$\lambda$-DNA, respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
\subsection{DNA in an ionic solution}
\label{sec-binary_energy}
When in solution, the negatively charged oxygen on the backbone will
attract cations such as Na$^{+}$. This will give rise to a dramatic
change in local electronic properties at the oxygen-carrying backbone
site, but not necessarily influence the neighbouring sites. The effects
at each such site will be the same and thus in contrast to a uniform
disorder used in section \ref{sec-uniform_energy}, a binary distribution
such as $\varepsilon_{i}^{q}= \pm W/2$ is more appropriate.
For simplicity, we choose $50\%$ of all backbone sites to be occupied,
with $\varepsilon_{i}^{q}=-W/2$, while the other half remains empty with
$\varepsilon_{i}^{q}=+W/2$. We note that a mixture of concentrations has
been studied in the context of the Anderson model in Ref.\
\cite{PlyRS03}.
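A binary configuration with an exact $50\%$ occupation can be sketched as follows (an illustrative helper of our own, not code from this work): exactly half of the $2L$ backbone sites are assigned $-W/2$, the rest $+W/2$.

```python
import numpy as np

def backbone_disorder_binary(L, W, seed=0):
    """Binary backbone disorder: exactly half of the 2*L backbone sites
    get eps = -W/2 (cation attached), the other half eps = +W/2."""
    rng = np.random.default_rng(seed)
    eps = np.full(2 * L, W / 2)
    occupied = rng.choice(2 * L, size=L, replace=False)  # pick L sites w/o replacement
    eps[occupied] = -W / 2
    return eps.reshape(2, L)

eps = backbone_disorder_binary(L=1000, W=2.0)
```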
In Fig.\ \ref{fig-LM-binW1-loc_energy}, we show the results for moderate
binary disorder. In comparison with the uniformly disordered case of
Fig.\ \ref{fig-LM-uniW1-loc_energy}, we see that the localisation
lengths have decreased further. This is expected because binary disorder
is known to be very strong \cite{PlyRS03}. Also, the gap has already
started to fill.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-binW1-loc-DOS_energy.eps}
\caption{\label{fig-LM-binW1-loc_energy}
Top: Energy dependence of the localisation lengths, $\xi(E)$, for
poly(dG)-poly(dC), telomeric, random-ATGC and $\lambda$-DNA in the
presence of {\em binary} backbone disorder with $W=1$. Only every
2nd and 5th symbol is shown for random-ATGC and $\lambda$-DNA,
respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
Increasing the disorder leads again to a decrease of $\xi$ in the energy
regions corresponding to the bands. Directly at $E=\pm W/2$, we observe
$2$ strong peaks in the DOS which are accompanied by reduced localisation
lengths. These peaks correspond to the infinite potential barrier or well
at $E=-W/2$ or $+W/2$, respectively, as indicated by Eq.\
(\ref{eq-ladder-effective}). In Fig.\ \ref{fig-LM-binW1-loc_energy},
these peaks were not yet visible. We also see in Fig.\
\ref{fig-LM-binW2-loc_energy} that the localisation lengths for states
in the band centre start to increase to values $\gtrsim 1$.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-binW2-loc-DOS_energy.eps}
\caption{\label{fig-LM-binW2-loc_energy}
Top: $\xi(E)$ as in Fig.\ \ref{fig-LM-binW1-loc_energy} but with
$W=2$. Only every 2nd and 5th symbol is shown for random-ATGC and
$\lambda$-DNA, respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
This trend continues for larger $W$ as shown in Fig.\
\ref{fig-LM-binW5-loc_energy}. We see a crossover into a regime where
the two original, weak-disorder bands have nearly vanished and states in the
centre at $E=0$ are starting to show an increasing localisation length
{\em upon increasing the binary disorder}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-binW5-loc-DOS_energy.eps}
\caption{\label{fig-LM-binW5-loc_energy}
Top: $\xi(E)$ as in Fig.\ \ref{fig-LM-binW1-loc_energy} but with
$W=5$. Only every 2nd and 5th symbol is shown for random-ATGC and
$\lambda$-DNA, respectively.
Bottom: DOS for $\lambda$-DNA using the same parameters as in the
top panel.}
\end{figure}
A further increase in $W$ eventually leads to the complete destruction
of the original bands and the formation of a single band symmetric
around $E=0$ at about $W\sim 2.5$.
\subsection{Delocalisation due to disorder}
\label{sec-delocalization}
The results of the previous section suggest that increasing the disorder
in different regions of the energy will lead to different transport
behaviour. Of particular interest is the region at $E=0$. In Fig.\
\ref{fig-LM-binE0-loc_disorder} the variation of $\xi$ as a function of
binary disorder strength for all different sequences is shown. While
$\xi < 1$ for small disorder, we see that upon increasing the disorder,
states begin to appear and their localisation lengths increase for all
DNA sequences. Thus we indeed observe a counter-intuitive {\em
delocalisation} by disorder at $E=0$. As before, poly(dG)-poly(dC) and
telomeric disorder show the largest localisation lengths, whereas
random-ATGC and $\lambda$-DNA give rise to a smaller and nearly
identical effect.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-binE0-loc-DOS_disorder.eps}
\caption{\label{fig-LM-binE0-loc_disorder}
Disorder dependence of $\xi$ for poly(dG)-poly(dC), telomeric,
random-ATGC and $\lambda$-DNA at $E=0$. Only every 10th symbol is
shown for all sequences. The shaded curve is the corresponding unnormalised DOS
for $\lambda$-DNA.}
\end{figure}
In Fig.\ \ref{fig-LM-binE3-loc_disorder} we show that this effect does
not exist at $E=3$, i.e.\ for energies corresponding to the formerly
largest localisation lengths. Rather, at $E=3$, the localisation lengths
for all DNA sequences quickly drop to $\xi \sim 1$.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-binE3-loc-DOS_disorder.eps}
\caption{\label{fig-LM-binE3-loc_disorder}
$\xi(W)$ as in Fig.\ \ref{fig-LM-binE0-loc_disorder} but with $E=3$.
Only every 10th symbol is shown for all DNA sequences. The shaded
curve is the corresponding unnormalised DOS for $\lambda$-DNA.}
\end{figure}
The delocalisation effect is also observed for uniform disorder, but is
much smaller. As shown in Fig.\ \ref{fig-FM-uniE0-loc_disorder}, the
enhancement is up to about $\xi=1$ for the fishbone model
(\ref{eq-fishbone}). Results for the ladder model (\ref{eq-ladder}) are
about $1.7$ times larger.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-FM-uniE0-loc-DOS_disorder.eps}
\caption{\label{fig-FM-uniE0-loc_disorder}
$\xi(W)$ as in Fig.\ \ref{fig-LM-binE0-loc_disorder} but with
uniform disorder at $E=0$ and for the {\em fishbone model}. Only
every 10th symbol is shown for all DNA sequences. The shaded curve
is the corresponding unnormalised DOS for $\lambda$-DNA.}
\end{figure}
This surprising delocalisation-by-disorder behaviour can be understood
by considering the effects of disorder at the backbone for the effective
Hamiltonians (\ref{eq-fishbone-effective}) and
(\ref{eq-ladder-effective}). At $E=0$, the onsite potential correction
term ${\left(t_{i}^{q}\right)^{2}}/{(\varepsilon_{i}^{q} - E)}$ will
{\em decrease} upon increasing the $\varepsilon_{i}^{q}$ values. For
binary disorders $\varepsilon_{i}^{q} = \pm W/2$, this holds for
$|\varepsilon_{i}^{q}| > |E|$ as shown in Fig.\
\ref{fig-LM-binE3-loc_disorder}. However, for large $|E|$, the
localisation lengths decrease quickly due to the much smaller density of
states. Thus the net effect is an eventual decrease (or only a very
small increase) of $\xi$ for large $E$. Note the dip at
$|\varepsilon_{i}^{q}|=E=3$ in the figure, which corresponds to the
effective $\varepsilon_{i}= \infty$, i.e.\ an infinitely strong trap
yielding extremely strong localisation.
For uniform disorder $\varepsilon_{i}^{q} \in [-W/2,W/2]$ --- and
generally any disorder with compact support around $E=0$ --- the above
inequality is never fulfilled, and even for $E=0$ we will find small
$\varepsilon_{i}^{q} \sim 0$ such that we have strong trapping and
localisation.
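This mechanism can be checked numerically. The short sketch below (a hypothetical helper, not code from this work) evaluates the correction term $t^2/(\varepsilon - E)$ at $E=0$: for binary disorder, increasing $W$ pushes $\varepsilon = \pm W/2$ away from $E$ and the effective perturbation shrinks, while uniform disorder always contains $\varepsilon \approx 0$, i.e.\ near-infinite traps.

```python
def correction(eps, t=1.0, E=0.0):
    """Effective onsite correction t^2/(eps - E) obtained by eliminating
    a backbone site (cf. the effective ladder Hamiltonian)."""
    return t**2 / (eps - E)

# Binary disorder at E = 0: larger W means eps = +/- W/2 lies further
# from E, so the effective perturbation of the central chain shrinks.
weak = abs(correction(1.0 / 2))    # W = 1
strong = abs(correction(5.0 / 2))  # W = 5

# Uniform disorder always produces some eps ~ 0, i.e. a very strong trap.
trap = abs(correction(1e-3))
```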
\section{Investigating the local properties of the sequences}
\label{sec-local}
\subsection{Variation of $\xi$ along the DNA strand}
In the preceding sections, we had computed estimates of the localisation
length $\xi$ for complete DNA strands, i.e.\ the $\xi$ values are {\em
averages}. However, the biological function of DNA clearly depends on
the local structure of the sequence in a paramount way. After all, only
certain parts of DNA code for proteins, while others do not. In
addition, the exact sequence of the bases specifies the protein that is
to be assembled.
Thus, in order to gain access to the local properties, we have performed
computations of $\xi$ on subsequences of complete DNA strands. We start
by artificially restricting ourselves to finite windows of length $K=
10, 30, 50, 100, 200, 500, 1000$ and compute the localisation lengths
$\xi_{K}(r)$ where $r=1, 2, \ldots, L-K$ denotes the starting position
of the window of length $K$.
In order to see how the exact sequence determines our results, we have
also randomly permuted (scrambled) the $\lambda$-DNA sequence so that
the content of A, T, G, and C bases is the same, but their order is
randomised. Differences in the localisation properties should then
indicate the importance of the exact order.
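Scrambling as used here is a random permutation that preserves base content; a minimal sketch (function name ours, for illustration) is:

```python
import numpy as np

def scramble(sequence, seed=0):
    """Randomly permute a DNA sequence: the numbers of A, T, G and C
    bases are preserved, only their order is randomised."""
    rng = np.random.default_rng(seed)
    seq = np.array(list(sequence))
    rng.shuffle(seq)
    return "".join(seq)

s = scramble("ATGCATGCAAAT")
```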
From the biological information available on bacteriophage
$\lambda$-DNA, we compute the localisation length for the coding regions
\cite{DanSSS83} and then for window lengths $K$ that correspond exactly
to the length of each coding region. Again, if the electronic properties
--- as measured by the localisation length --- are linked to biological
content, we would expect to see characteristic differences.
In Figs.\ \ref{fig-LM-cleanE3-loc_window-100} and
\ref{fig-LM-cleanE3-loc_window-1000}, we show results for $K=100$ and
$1000$, respectively.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-cleanE3-loc_window-100.ps}
\caption{\label{fig-LM-cleanE3-loc_window-100}
Top: Variation of the localisation lengths for a sliding window of
length $K=100$ as a function of window starting position for
$\lambda$-DNA at $E=3$. The black crosses ($\times$) denote results
for windows corresponding to the coding sequences of $\lambda$-DNA
only. The dashed horizontal line denotes $K$.
Middle: Same as in the top panel but with randomly scrambled
$\lambda$-DNA.
Bottom: Normalised distribution functions $P(\xi)$ for the
localisation lengths $\xi$ of $\lambda$- (black) and
scrambled-$\lambda$-DNA (grey). }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-cleanE3-loc_window-1000.ps}
\caption{\label{fig-LM-cleanE3-loc_window-1000}
Variation of the localisation lengths for a sliding window of length
$K=1000$ at $E=3$ as in Fig.\
\protect\ref{fig-LM-cleanE3-loc_window-100}.
Middle: Same as in the top panel but with randomly scrambled
$\lambda$-DNA.
Bottom: Normalised distribution functions $P(\xi)$ for the
localisation lengths $\xi$ of $\lambda$- (black) and
scrambled-$\lambda$-DNA (grey). }
\end{figure}
From Fig.\ \ref{fig-LM-cleanE3-loc_window-100}, we see from $P(\xi)$
that the localisation lengths for $\lambda$-DNA are mostly distributed
around $15$--$20$, but $P(\xi)$ has a rather long tail for large $\xi$.
However, there are some windows where the localisation lengths exceed
even the size of the window $K=100$. Thus at specific positions in the
DNA sequence, the system appears essentially extended with $\xi > K$. On
the other hand, the distribution $P(\xi)$ is identical when instead of
$\lambda$-DNA, we consider scrambled DNA. Therefore the presence of
such regions is not unique to $\lambda$-DNA. The results from windows
positioned at the coding part of $\lambda$-DNA appear statistically
similar to the complete sequence, i.e.\ including also the non-coding
regions. This suggests that with respect to the localisation properties
there is no obvious difference between $\lambda$-DNA and scrambled
$\lambda$-DNA as well as coding and non-coding regions. We emphasise
that similar results have been obtained for a DNA sequence constructed
from the SARS corona-viral data.
In Fig.\ \ref{fig-LM-cleanE3-loc_window-1000}, we repeat these
calculations but with $K=1000$. Clearly, $P(\xi)$ is peaked again around
$15$--$20$ and this time has no tail. In all cases, $K>\xi$. Again, the
results for scrambled DNA are different in each window, and now even
$P(\xi)$ is somewhat shifted with respect to $\lambda$-DNA.
Thus in conclusion, we do not see significant differences between
$\lambda$-DNA and its scrambled counterpart. Moreover, there appears to
be no large difference between the localisation lengths measured in the
coding and the non-coding sequences of bacteriophage $\lambda$-DNA. This
indicates that the average $\xi$ values computed in the previous
sections are sufficient when considering the electronic localisation
properties of the $4$ complete DNA sequences.
\subsection{Computing correlation functions}
As shown in the last section, the spatial variation of $\xi$ for a fixed
window size is characteristic of the order of bases in the DNA sequence.
Thus we can now study how this biological information is retained at the
level of localisation lengths. In order to do so, we define the
correlation function
\begin{equation}
{\rm Cor}(k)
=
\frac{\sum_{i=1}^{n-k}\left[\xi(r_i)-\langle{\xi}\rangle\right]
\left[\xi(r_{i+k})-\langle{\xi}\rangle\right]}%
{\sum_{i=1}^{n}\left[\xi(r_i)-\langle{\xi}\rangle\right]^2}
\label{eq-cor}
\end{equation}
where $\langle{\xi}\rangle={\sum_{i=1}^{n}\xi(r_i)}/{n}$ is $\xi$
averaged over all $n=L-(K-1)$ windows for each of which the individual
localisation lengths are $\xi(r_i)$.
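Eq.\ (\ref{eq-cor}) is a normalised autocovariance at lag $k$ and can be implemented directly; the sketch below (our own illustrative code) reproduces it for an array of windowed values $\xi(r_i)$.

```python
import numpy as np

def cor(xi, k):
    """Correlation function Cor(k) of windowed localisation lengths
    xi(r_i): lag-k autocovariance normalised by the total variance,
    as in Eq. (eq-cor)."""
    xi = np.asarray(xi, dtype=float)
    n = len(xi)
    d = xi - xi.mean()
    return np.sum(d[: n - k] * d[k:]) / np.sum(d**2)
```

By the Cauchy--Schwarz inequality, $|{\rm Cor}(k)| \le 1$ and ${\rm Cor}(0) = 1$.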
In Fig.\ \ref{fig-LM-Win-E0-cor_pos} we show the results obtained for
$\lambda$-DNA with windows of length $10$, $100$ and $1000$. We first
note that ${\rm Cor}(k)$ drops rapidly until the distance $k$ exceeds
the window width $K$ (see the inset of Fig.\
\ref{fig-LM-Win-E0-cor_pos}). For $k>K$, ${\rm Cor}(k)$ fluctuates
typically between $\pm 0.2$ and there is a larger anti-correlation for
base-pair separations of about $k=8000$. We note that such large scale
features are not present when considering scrambled $\lambda$-DNA
instead.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig/fig-LM-Win-E0-cor_pos.eps}
\caption{\label{fig-LM-Win-E0-cor_pos}
${\rm Cor}(k)$ as defined in Eq.\ (\ref{eq-cor}) for $\lambda$-DNA and
$K=10$, $100$, and $1000$ at $E=0$. The inset shows the same data
but plotted as a function of the normalised separation $k/K$.}
\end{figure}
\section{Discussion}
The fishbone and ladder models studied in the present paper give
qualitatively similar results, i.e.\ a gap in the DOS on the order of
the hopping energies to the backbone, extended states for periodic DNA
sequences and localised states for any non-zero disorder strength. Thus
at $T=0$, our results suggest that DNA is an insulator unless perfectly
ordered. Quantitatively, the localisation lengths $\xi$ computed for
the ladder model are larger than for the fishbone model. Since we are
interested in these non-universal lengths, the ladder model is clearly
the more appropriate model.
The localisation lengths measure the spatial extent of a conducting
electron. Our results suggest --- in agreement with all previous
considerations --- that poly(dG)-poly(dC) DNA allows the largest values
of $\xi$. Even after adding a substantial amount of disorder,
poly(dG)-poly(dC) DNA can still support localisation lengths of a few
hundred base-pair separation distances. With nanoscopic experiments
currently probing at the most a few dozen bases, this suggests that
poly(dG)-poly(dC) DNA will appear to be conducting in these experiments.
Furthermore, telomeric DNA is a very encouraging and interesting
naturally occurring sequence because it gives very large localisation
lengths in the weakly disordered regime. Nevertheless, we find that all
investigated, non-periodic DNA sequences such as, e.g.\ random-ATGC and
$\lambda$-DNA, give localised behaviour even in the clean state. This
indicates that they are insulating at $T=0$.
When the effects of the environment, modelled by their potential changes
on the backbone, are included, we find that the localisation lengths in
the two bands decrease quickly upon increasing the disorder.
Nevertheless, depending on the value of the Fermi energy, the resulting
$\xi$ values can still be $10$--$20$ base pairs long. While this may not
give metallic behaviour, it can still result in a finite current for
small sequences. We also note that these distances are quite close to
those obtained from electron-transfer studies.
The backbone disorder also leads to states moving into the gap.
Therefore the environment prepared in the experiments determines the gap
which is being measured. Furthermore, the localisation properties of the
states in the former gap are drastically different from those in the 2
bands. Increasing the disorder leads to an increase in the localisation
lengths and thus potentially larger currents. This is most pronounced
for binary disorder, taken to model the adhesion of cations in solution.
Thus within the $2$ models studied, we find that their transport
properties are in a very crucial way determined by the environment.
Differences in experimental set-up, such as measurements on 2D surfaces
or between elevated contacts are likely to lead to quite different
results.
As far as the correlations within biological $\lambda$-DNA are
concerned, we see only a negligible difference between the localisation
properties of the coding and non-coding parts. However, this is clearly
dependent on the chosen energy and the particular window lengths used.
Investigations on other DNA sequences are in progress.
\acknowledgments
It is a pleasure to thank H.\ Burgert, D.\ Hodgson, M.\ Pfeiffer and D.\
Porath for stimulating discussions.
\section{Auxiliary lemmas}
\begin{lemma}[\cite{Ver2011rand,CSV:CPAM:13}]\label{lem:aux_lemma1}
Assume $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$, $k=1,\cdots,m$, are independent. Then
\begin{align*}
\frac{1}{2}\leq\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\right\|\leq 2
\end{align*}
hold with probability at least $1-2e^{-\Omega(m)}$ provided $m\gtrsim n$.
\end{lemma}
\begin{lemma}[\cite{CaiWeiphase}]\label{lem:aux_lemma2}
Fix $\eta\geq 1$ and let $\epsilon\in(0,1)$ be a sufficiently small constant. Assume $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$, $k=1,\cdots,m$, are independent. Then
\begin{align*}
\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{z}|>\eta\|\bm{z}\|}\right\|
\lesssim \eta e^{-0.49\eta^2}+\epsilon
\end{align*}
holds uniformly for all $\|\bm{z}\|\neq 0$ with probability exceeding $1-2e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$.
\end{lemma}
\section{Gradient and Hessian of the loss function}\label{app:gradHess}
Recall that
\begin{align*}
f(\bm{z})=\frac{1}{2m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right).
\end{align*}
By the chain rule we have
\begin{align*}
\nabla f(\bm{z}) &= \frac{1}{m}\sum_{k=1}^m2\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\bm{z}\\
&+\frac{1}{m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\frac{\bm{a}_k\bm{a}_k^\top\bm{z}}{\|\bm{z}\|^2}\\
&-\frac{1}{m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\frac{(\bm{a}_k^\top\bm{z})^2\bm{z}}{\|\bm{z}\|^4}.
\end{align*}
In order to compute $\nabla^2 f(\bm{z})$, let
\begin{align*}
g_{1k} &= 2\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\bm{z},\\
g_{2k} & = \left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\frac{\bm{a}_k\bm{a}_k^\top\bm{z}}{\|\bm{z}\|^2},\\
g_{3k} & = -\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\frac{(\bm{a}_k^\top\bm{z})^2\bm{z}}{\|\bm{z}\|^4}.
\end{align*}
Then we have
\begin{align*}
\nabla^2 f(\bm{z}) = \frac{1}{m}\sum_{k=1}^m J_{g_{1k}} + J_{g_{2k}}+ J_{g_{3k}},\numberthis\label{eq:hessian}
\end{align*}
where $J_{g_{1k}}$, $J_{g_{2k}}$ and $J_{g_{3k}}$ are the Jacobian matrices of $g_{1k}$, $g_{2k}$ and $g_{3k}$ respectively, given by
\begin{align*}
J_{g_{1k}} & = 2\lb3(\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\\
&+\frac{4}{\|\bm{z}\|^2}\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\\
&-\frac{4}{\|\bm{z}\|^4}\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})^3h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top,\\
\\
J_{g_{2k}} & =\frac{1}{\|\bm{z}\|^2}\left( 5(\bm{a}_k^\top\bm{z})^4-6(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{x})^4\right) h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\\
&+\frac{2}{\|\bm{z}\|^4} \left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2
(\bm{a}_k^\top\bm{z})^2h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\\
&-\frac{2}{\|\bm{z}\|^6} \left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2
(\bm{a}_k^\top\bm{z})^3h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top\\
&-\frac{2}{\|\bm{z}\|^4}\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top,\\
\\
J_{g_{3k}} &=-\frac{1}{\|\bm{z}\|^4}\lb6(\bm{a}_k^\top\bm{z})^5-8(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2+2(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4\right) h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top\\
&-\frac{2}{\|\bm{z}\|^6}\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})^3h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top\\
&+\frac{2}{\|\bm{z}\|^8}\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})^4h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top\\
&+\frac{4}{\|\bm{z}\|^6}\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top\\
&-\frac{1}{\|\bm{z}\|^4}\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)(\bm{a}_k^\top\bm{z})^2\bm{I}.
\end{align*}
It is worth noting that even though each individual Jacobian matrix is not symmetric, their sum is indeed symmetric, as a Hessian matrix must be. To see this, note that adding all the terms involving $\bm{a}_k\bm{z}^\top$ and $\bm{z}\bm{a}_k^\top$ gives
\begin{align*}
&-\frac{2}{\|\bm{z}\|^6} \left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2
(\bm{a}_k^\top\bm{z})^3h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)(\bm{a}_k\bm{z}^\top+\bm{z}\bm{a}_k^\top)\\
&-\frac{1}{\|\bm{z}\|^4}\lb6(\bm{a}_k^\top\bm{z})^5-8(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2+2(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4\right) h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)(\bm{a}_k\bm{z}^\top+\bm{z}\bm{a}_k^\top).
\end{align*}
{To further check the correctness of our calculations, we consider the special case when $n=1$, and compare the values of $f'(z)$ and $f''(z)$ computed using the derived formulas with that computed via the following finite difference schemes:
\begin{align*}
f'(z) \approx \frac{f(z+\epsilon)-f(z-\epsilon)}{2\epsilon}\quad\mbox{and}\quad f''(z)\approx\frac{f(z+\epsilon)-2f(z)+f(z-\epsilon)}{\epsilon^2},
\end{align*}
where $\epsilon>0$ is a small constant (here we choose $\epsilon=10^{-5}$). Table~\ref{table:GH} includes the computational results for the fixed $x=1$ and a few randomly generated $z$.
}
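The same central-difference check can be reproduced for any smooth scalar function. The sketch below (our own illustration; $f(z)=z^4$ is a stand-in for the actual loss, which requires the sampling vectors) confirms that the schemes recover the analytic derivatives to the expected accuracy.

```python
def fd_check(f, fp, fpp, z, eps=1e-5):
    """Compare analytic first/second derivatives fp, fpp of f at z
    with central finite differences of step eps; return both errors."""
    d1 = (f(z + eps) - f(z - eps)) / (2 * eps)          # O(eps^2) scheme
    d2 = (f(z + eps) - 2 * f(z) + f(z - eps)) / eps**2  # O(eps^2) scheme
    return abs(d1 - fp(z)), abs(d2 - fpp(z))

# stand-in for the loss: f(z) = z^4, f'(z) = 4 z^3, f''(z) = 12 z^2
e1, e2 = fd_check(lambda z: z**4,
                  lambda z: 4 * z**3,
                  lambda z: 12 * z**2,
                  1.7588)
```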
\begin{table}[t!]
\centering
\caption{Computational results from computing $f'(z)$ and $f''(z)$ via the formulas and the finite difference schemes. Here, $x=1$, $\{a_k\}_{k=1}^m$ ($m=128$) are standard Gaussian random variables, and the results for three values of $z$ in $\{4.3042, 1.7588, 0.5544\}$ are presented. Each $z$ is uniformly sampled from $[0,10]$.}
\label{table:GH}
\makegapedcells
\setcellgapes{3pt}
\begin{tabular}{c|ccc|ccc}
\hline
& \multicolumn{3}{c|}{$f'(z)$ for three $z$'s} & \multicolumn{3}{c}{$f''(z)$ for three $z$'s}\\
\hline
By formulas & 787.0769 & 38.4171 & -4.0066 & 569.4601 &86.3959 &-0.8143 \\
\hline
By finite difference & 787.0769 & 38.4171 & -4.0066 & 569.4601 &86.3959 &-0.8143 \\
\hline
\end{tabular}
\end{table}
\subsection{Proof of \eqref{eq:thmR1_eq2a0}}\label{sec:app:sub3}
Denote by $B_i,~i=2,\cdots, 12$ the spectral norm of the $i$-th matrix in the Hessian expression \eqref{eq:hessian}.
\paragraph{{Bound for $B_2$}}
\begin{align*}
B_2&\leq \frac{4}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m \left[ (\bm{a}_k^\top\bm{z})^4-(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \frac{4}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&+\frac{4}{\|\bm{z}\|^2}\frac{1}{m}\sum_{k=1}^m \left[ (\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\lesssim \frac{4}{\|\bm{z}\|^2}\cdot\|\bm{z}\|^4\cdot |h'|_{\infty}\cdot\gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)+\frac{4}{\|\bm{z}\|^2}\cdot\|\bm{z}\|^2\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot 2\gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim \gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot|h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2 \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma2} with $(s,t)=(4,0), (2,2)$, respectively.
\paragraph{{Bound for $B_3$}}
\begin{align*}
B_3 &\leq \frac{4}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m \left[ (\bm{a}_k^\top\bm{z})^5-(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\leq \frac{4}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^5h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&+ \frac{4}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\lesssim \frac{4}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^6\cdot |h'|_{\infty}\cdot\gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)+\frac{4}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^4\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot 2\gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&\lesssim \gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\cdot|h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2 \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma3} with $(s,t)=(5,0), (3,2)$, respectively.
\paragraph{Bound for $B_4$}
\begin{align*}B_4&\leq
\frac{1}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m\left[ 5(\bm{a}_k^\top\bm{z})^4-6(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{x})^4\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&\leq \frac{1}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m 5(\bm{a}_k^\top\bm{z})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
& +\frac{1}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m6(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&+\frac{1}{\|\bm{z}\|^2}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{x})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&\lesssim \frac{1}{\|\bm{z}\|^2}\cdot 5\|\bm{z}\|^4\cdot|h'|_{\infty}\cdot \gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right) + \frac{1}{\|\bm{z}\|^2}\cdot 6\|\bm{z}\|^2\|\bm{x}\|^2\cdot|h'|_{\infty}\cdot 2\gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&+ \frac{1}{\|\bm{z}\|^2}\cdot \|\bm{x}\|^4\cdot|h'|_{\infty}\cdot 4\gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim \gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot |h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma2} with $(s,t)=(4,0), (2,2), (0,4)$, respectively.
\paragraph{Bound for $B_5$}
\begin{align*}
B_5&\leq \frac{2}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^6-2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4\right]h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&\leq \frac{2}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^6h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top \right\|\\
&\lesssim \frac{2}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^6\cdot |h''|_{\infty}\cdot \gamma^3 \left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)+\frac{4}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^4\|\bm{x}\|^2\cdot |h''|_{\infty}\cdot 2\gamma^3 \left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&+\frac{2}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^2\|\bm{x}\|^4\cdot |h''|_{\infty}\cdot 4\gamma^3 \left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim \gamma^3 \left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot |h''|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma2}, with $(s,t)=(6,0), (4,2), (2,4)$, respectively.
\paragraph{Bound for $B_6$}
\begin{align*}
B_6&\leq \frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^7-2(\bm{a}_k^\top\bm{z})^5(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^4\right]h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\leq\frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^7h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^5(\bm{a}_k^\top\bm{x})^2h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^4h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\lesssim \frac{2}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^8\cdot |h''|_{\infty}\cdot\gamma^4\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)+\frac{4}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^6\|\bm{x}\|^2\cdot |h''|_{\infty}\cdot2\gamma^4\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&+\frac{2}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^4\|\bm{x}\|^4\cdot |h''|_{\infty}\cdot4\gamma^4\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&\lesssim \gamma^4\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\cdot |h''|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma3} with $(s,t)=(7,0), (5,2), (3,4)$, respectively.
\paragraph{Bound for $B_7$}
\begin{align*}
B_7&\leq \frac{2}{\|\bm{z}\|^4} \left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^5-2(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\leq \frac{2}{\|\bm{z}\|^4} \left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^5h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^4} \left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^4} \left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{z}^\top \right\|\\
&\lesssim \frac{2}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^6\cdot |h'|_{\infty}\cdot\gamma^3 \left( e^{-0.245\beta}+\sqrt{\epsilon}\right)+\frac{4}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^4\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot2\gamma^3 \left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&+\frac{2}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^2\|\bm{x}\|^4\cdot |h'|_{\infty}\cdot4\gamma^3 \left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&\lesssim \gamma^3 \left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\cdot |h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma3} with $(s,t)=(5,0), (3,2), (1,4)$, respectively.
\paragraph{Bound for $B_8$}
\begin{align*}
B_8&\leq\frac{1}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m\left[ 6(\bm{a}_k^\top\bm{z})^5-8(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2+2(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&\leq \frac{1}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m 6(\bm{a}_k^\top\bm{z})^5h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&+\frac{1}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m8(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&+\frac{1}{\|\bm{z}\|^4}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&\lesssim \frac{1}{\|\bm{z}\|^4}\cdot 6\|\bm{z}\|^6\cdot |h'|_{\infty}\cdot \gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)+ \frac{1}{\|\bm{z}\|^4}\cdot 8\|\bm{z}\|^4\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot 2\gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&+ \frac{1}{\|\bm{z}\|^4}\cdot 2\|\bm{z}\|^2\|\bm{x}\|^4\cdot |h'|_{\infty}\cdot 4\gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&\lesssim \gamma^3\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\cdot |h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma3} with $(s,t)=(5,0), (3,2), (1,4)$, respectively.
\paragraph{Bound for $B_9$}
\begin{align*}
B_9 & \leq \frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^7-2(\bm{a}_k^\top\bm{z})^5(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^4\right]h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&\leq \frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^7h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^5(\bm{a}_k^\top\bm{x})^2h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^3(\bm{a}_k^\top\bm{x})^4h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top \right\|\\
&\lesssim \frac{2}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^8\cdot |h''|_{\infty}\cdot \gamma^4\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)+\frac{4}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^6\|\bm{x}\|^2\cdot |h''|_{\infty}\cdot 2\gamma^4\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&+\frac{2}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^4\|\bm{x}\|^4\cdot |h''|_{\infty}\cdot 4\gamma^4\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\\
&\lesssim\gamma^4\left(e^{-0.245\beta}+\sqrt{\epsilon}\right)\cdot |h''|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma3} with $(s,t)=(7,0), (5,2), (3,4)$, respectively.
\paragraph{Bound for $B_{10}$}
\begin{align*}
B_{10}&\leq \frac{2}{\|\bm{z}\|^8}\left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^8-2(\bm{a}_k^\top\bm{z})^6(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^4\right]h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&\leq \frac{2}{\|\bm{z}\|^8}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^8h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^8}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^6(\bm{a}_k^\top\bm{x})^2h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&+\frac{2}{\|\bm{z}\|^8}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^4h''\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&\lesssim \frac{2}{\|\bm{z}\|^8}\cdot \|\bm{z}\|^{10}\cdot|h''|_{\infty}\cdot \gamma^4\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)+ \frac{4}{\|\bm{z}\|^8}\cdot \|\bm{z}\|^8\|\bm{x}\|^2\cdot|h''|_{\infty}\cdot 2\gamma^4\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&+ \frac{2}{\|\bm{z}\|^8}\cdot \|\bm{z}\|^6\|\bm{x}\|^4\cdot|h''|_{\infty}\cdot 4\gamma^4\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim \gamma^4\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot |h''|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from the fact $\|\bm{z}\bz^\top\|=\|\bm{z}\|^2$ and Lemma \ref{lem:tech_lemma4} with $(s,t)=(8,0),(6,2),(4,4)$, respectively.
\paragraph{Bound for $B_{11}$}
\begin{align*}
B_{11}&\leq \frac{4}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m\left[ (\bm{a}_k^\top\bm{z})^6-2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&\leq \frac{4}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{z})^6h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&+\frac{4}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&+\frac{4}{\|\bm{z}\|^6}\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bz^\top \right\|\\
&\lesssim \frac{4}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^8\cdot |h'|_{\infty}\cdot \gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)+\frac{8}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^6\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot 2\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&+\frac{4}{\|\bm{z}\|^6}\cdot \|\bm{z}\|^4\|\bm{x}\|^4\cdot |h'|_{\infty}\cdot 4\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot |h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from the fact $\|\bm{z}\bz^\top\|=\|\bm{z}\|^2$ and Lemma \ref{lem:tech_lemma4} with $(s,t)=(6,0),(4,2),(2,4)$, respectively.
\paragraph{Bound for $B_{12}$}
\begin{align*}
B_{12}&\leq\frac{1}{\|\bm{z}\|^4}\left|\frac{1}{m}\sum_k\left[ (\bm{a}_k^\top\bm{z})^6-2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2+(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4\right]h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right|\\
&\leq\frac{1}{\|\bm{z}\|^4}\left|\frac{1}{m}\sum_k(\bm{a}_k^\top\bm{z})^6h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right|\\
&+\frac{1}{\|\bm{z}\|^4}\left|\frac{1}{m}\sum_k2(\bm{a}_k^\top\bm{z})^4(\bm{a}_k^\top\bm{x})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right|\\
&+\frac{1}{\|\bm{z}\|^4}\left|\frac{1}{m}\sum_k(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^4h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right|\\
&\lesssim \frac{1}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^6\cdot |h'|_{\infty}\cdot \gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)+ \frac{2}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^4\|\bm{x}\|^2\cdot |h'|_{\infty}\cdot 2\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&+ \frac{1}{\|\bm{z}\|^4}\cdot \|\bm{z}\|^2\|\bm{x}\|^4\cdot |h'|_{\infty}\cdot 4\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\\
&\lesssim\gamma^3\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\cdot |h'|_{\infty}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2} \right\},
\end{align*}
where the third inequality follows from Lemma \ref{lem:tech_lemma4} with $(s,t)=(6,0),(4,2),(2,4)$, respectively.\\
Noting that $1<\beta<\gamma$, combining the above bounds yields \eqref{eq:thmR1_eq2a0}.
\section{Conclusion and outlook}\label{sec:conclusion}
A new loss function has been constructed for solving random systems of quadratic equations, one that has no spurious local minima under the optimal sampling complexity. This paper has focused on the real-valued problem, and we leave the examination of the complex case to future work. For the complex case, it is interesting to see whether the same loss function is still well-behaved under the optimal sampling complexity, or whether a more delicate activation function should be adopted.
In addition, the technique presented in this paper may apply equally to the problem of reconstructing a general low rank matrix from symmetric rank-$1$ projections \cite{CCGold15,KRTersti14,WWSS15}.
As stated at the beginning of this paper, the problem of solving systems of quadratic equations can be cast as a rank-$1$ matrix recovery problem. To see this, let $\mathcal{A}$ be a linear operator from $n\times n$ symmetric matrices to vectors of length $m$, defined as \begin{align*}
\mathcal{A}(\bm{Z}) = \left\{\langle\bm{Z},\bm{a}_k\bm{a}_k^\top\rangle\right\}_{k=1}^m,\quad\forall~\bm{Z}\in\mathbb{R}^{n\times n}\mbox{ being symmetric}.\numberthis\label{eq:A}
\end{align*}
Then simple algebra yields that
\begin{align*}
y_k = |\bm{a}^\top_k\bm{x}|^2 = \langle\bm{a}_k\bm{a}_k^\top,\bm{X}\rangle,
\end{align*}
where $\bm{X}=\bm{x}\bx^\top$ is the lifted matrix associated with $\bm{x}$.
Noticing the one-to-one correspondence between $\bm{X}$ and $\bm{x}$, instead of reconstructing $\bm{x}$, one can attempt to reconstruct $\bm{X}$ by seeking a rank-$1$ positive semidefinite matrix which fits the measurements as well as possible:
\begin{align*}
\min_{\bm{Z}}\frac{1}{2}\|\mathcal{A}(\bm{Z})-\bm{y}\|^2\quad\mbox{subject to}\quad \rank(\bm{Z})=1\mbox{ and }\bm{Z}\succeq 0.\numberthis\label{eq:low_rank}
\end{align*}
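The lifting identity $y_k=\langle\bm{a}_k\bm{a}_k^\top,\bm{X}\rangle$ and the operator $\mathcal{A}$ in \eqref{eq:A} are straightforward to verify numerically; the sketch below does so (the dimensions and the seed are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 15
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))   # rows are the measurement vectors a_k

X = np.outer(x, x)                # lifted rank-1 matrix X = x x^T

def calA(Z):
    # A(Z) = { <Z, a_k a_k^T> }_{k=1}^m, i.e. a_k^T Z a_k for each k
    return np.einsum('ki,ij,kj->k', A, Z, A)

y_quad = (A @ x) ** 2             # y_k = |a_k^T x|^2
y_lift = calA(X)                  # <a_k a_k^T, X>
```

The two vectors `y_quad` and `y_lift` coincide, which is exactly the statement that solving the quadratic system is a rank-$1$ matrix recovery problem.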
Note that the geometric landscape analysis presented in this paper as well as that in \cite{SQW:FCM:18} are carried out in the vector space. Instead, one can consider the geometric landscape of the loss function $\frac{1}{2}\|\mathcal{A}(\bm{Z})-\bm{y}\|^2$ on the embedded manifold of positive semidefinite rank-$1$ (or general rank-$r$) matrices under the rank-$1$ measurements. Moreover, it is worth studying whether there exists a loss function on the lift matrix space which is well-behaved under the condition of optimal sampling complexity.
\section{Introduction}\label{sec:introduction}
Many applications in science and engineering, such as X-ray crystallography \cite{Ha93a}, diffraction and array imaging \cite{Buetal07}, and electron microscopy \cite{Mietal08}, are essentially about solving systems of quadratic equations. This paper concerns a real-valued case of the problem. The goal is to find a vector $\bm{x}\in\mathbb{R}^n$ which can solve $m$ quadratic equations of the form
\begin{align*}
y_k=|\bm{a}_k^\top\bm{x}|^2,\quad k=1,\cdots,m,\numberthis\label{eq:problem}
\end{align*}
where $\bm{y}=\begin{bmatrix}y_1,\cdots,y_m
\end{bmatrix}^\top\in\mathbb{R}_+^{m}$ and { $\{\bm{a}_k\in\mathbb{R}^n\}_{k=1}^m$} are known. Despite the apparent simplicity of \eqref{eq:problem}, solving this problem is computationally intractable. Indeed, a special instance of \eqref{eq:problem} is the NP-hard stone problem \cite{CC:CPAM:17}.
The problem of recovering a vector from a set of quadratic measurements, especially from Fourier-type measurements, has long been studied. Moreover, it has received intensive investigation over the past few years, largely due to its connection with low rank matrix recovery. Even though the corresponding low rank matrix recovery problem is still nonconvex and computationally intractable,
it can be approximated by a convex relaxation, leading to a convex formulation known as PhaseLift.
Performance guarantee of PhaseLift has been established in \cite{CSV:CPAM:13,CL:FCM:14,phaselift4,phaselift5} under different measurement models, showing that successful recovery can be achieved when the number of equations is (nearly) proportional to the number of unknowns. There are also other convex relaxation methods for solving systems of quadratic equations; see for example \cite{phasecut,phasemax1,phasemax2,phasemax3}.
Though convex approximations usually come with recovery guarantees, they are not computationally desirable for { large-scale problems}. In contrast, many simple nonconvex algorithms are able to solve \eqref{eq:problem} both accurately and efficiently. Among them are a family of algorithms with optimal or near-optimal provable guarantees, including alternating projections and its resampled variant \cite{alt_min_pr,Waldspurger16}, Kaczmarz methods \cite{phase_kacz02,phase_kacz03}, and those algorithms which propose to compute the solution of \eqref{eq:problem} by minimizing certain nonconvex loss functions { \cite{CLS:TIT:15,CC:CPAM:17,WGE:TIT:18,CaiWeiphase,ReshapedWF,RAF2018}}. Specifically, a gradient descent algorithm known as Wirtinger Flow has been developed in
\cite{CLS:TIT:15} based on the following smooth quadratic loss function
\begin{align*}
{ \tilde{f}(\bm{z})}=\frac{1}{2m}\sum_{k=1}^m\left( ({ \bm{a}_k^\top}\bm{z})^2-y_k\right)^2.\numberthis\label{eq:loss1}
\end{align*}
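The gradient of \eqref{eq:loss1} used by Wirtinger Flow in the real case is $\nabla\tilde{f}(\bm{z})=\frac{2}{m}\sum_{k=1}^m\big((\bm{a}_k^\top\bm{z})^2-y_k\big)(\bm{a}_k^\top\bm{z})\bm{a}_k$. A minimal sketch (dimensions and seed are arbitrary) implements the loss and this gradient and checks the latter against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 60
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))   # rows are the vectors a_k
y = (A @ x) ** 2

def loss(z):
    # (1/2m) sum ((a_k^T z)^2 - y_k)^2
    return np.mean(((A @ z) ** 2 - y) ** 2) / 2

def grad(z):
    # (2/m) sum ((a_k^T z)^2 - y_k)(a_k^T z) a_k
    r = A @ z
    return 2 * A.T @ ((r**2 - y) * r) / m

z = rng.standard_normal(n)
g = grad(z)
eps = 1e-6
g_fd = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
```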
In \cite{WGE:TIT:18,ReshapedWF}, { gradient descent} algorithms were developed based on a loss function similar to \eqref{eq:loss1} but with $(\bm{a}_k^\top\bm{z})^2-y_k$ replaced by $|\bm{a}_k^\top\bm{z}|-\sqrt{y_k}$, while a Poisson loss function is adopted in \cite{CC:CPAM:17}.
Theoretical guarantees of the aforementioned algorithms typically require that the initial guess is sufficiently close to the true solution. However, numerical simulations show that these algorithms can often achieve successful recovery even with random initialization. To understand this empirical success, Sun et al. \cite{SQW:FCM:18} investigated the global geometry of the loss function in \eqref{eq:loss1}. It has been shown that under the Gaussian measurement model { $\tilde{f}(\bm{z})$} does not have any spurious local minima provided\footnote{The notation $m\gtrsim { g(n)}$ means that there exists an absolute constant $C>0$ such that $m\ge C\cdot{ g(n)}$. { It is worth noting that $m\gtrsim n\log^3n$ in \cite{SQW:FCM:18} is a sufficient condition for the well-behaved landscape of $\tilde{f}(\bm{z})$, so this does not mean that a spurious { minimum} of $\tilde{f}(\bm{z})$ exists when $n\lesssim m\lesssim n\log^3n$.}} $m\gtrsim n\log^3n$. Put another way, under this sampling condition, the target signal $\bm{x}$ is { the only minimizer} of { $\tilde{f}(\bm{z})$} up to a global phase factor. Moreover, { $\tilde{f}(\bm{z})$} possesses a negative directional curvature around each saddle point. Thus, algorithms that can avoid saddle points and converge to a local minimizer are bound to find { a global minimizer}; see for example \cite{LSJR:COLT:16}. Our work follows this line of research and attempts to construct a loss function with $\bm{x}$ being { the only minimizer} up to a global phase factor when $m\gtrsim n$.
That is, we want to construct a loss function without spurious local minima for \eqref{eq:problem} conditioned on the optimal sampling complexity.
In recent years, there has been a surge of interest in nonconvex optimization for problems arising from signal processing and machine learning; solving systems of quadratic equations is one of them. For more general low rank matrix recovery, a variety of nonconvex algorithms have been developed and analyzed, including those based on matrix factorization \cite{TBSSR:ICML:16,ZL:ArXiv:16} and those based on the embedded manifold of low rank matrices \cite{WCCL:SIMAX:16,WCCL:ArXiv:16}. The reader can refer to the review paper \cite{CaiWeiReview} for more details. Geometric landscape of related loss functions for low rank matrix recovery has been investigated in \cite{GJZ:ICML:17,GLM:NIPS:16,LiZhTa16,BNS16a,PKCS:ArXiv:16}. Similar results have also been established for nonconvex formulations of other problems, for example blind deconvolution \cite{ZLKC:CVPR:17}, dictionary learning \cite{SQW:TIT:17:I,SQW:TIT:17:II}, tensor completion \cite{AnGeJa,GeMa:170a}, phase synchronization \cite{BourmalSyn16, LYSo170a,BoVoBa:ArXiv:18}, and deep neural networks \cite{WeBaBruna:ArXiv:18,YunSraJad:ArXiv:18,SoudryCarmon:ArXiv:16,Kawaguchi:ArXiv:16}.
\subsection{Motivation and main result}
As stated previously, a few of the algorithms for solving Gaussian random systems of quadratic equations are able to achieve successful recovery with high probability
provided $m\gtrsim n$, including TWF \cite{CC:CPAM:17}, TAF \cite{WGE:TIT:18} and TRGrad \cite{CaiWeiphase}, just to name a few. In addition, it is also known that a unique solution (up to a global phase factor) of \eqref{eq:problem} can be determined from $m\geq 2n-1$ generic measurements for the real problem or from $m\geq 4n-4$ generic measurements for the complex problem \cite{BaCaEd06a,CoEdHeVi2013a}. Thus, it is interesting to see whether
there exists a loss function for solving random systems of quadratic equations which does not have any spurious local minima when $m\gtrsim n$, in contrast to the condition $m\gtrsim n\log^3 n$ established for \eqref{eq:loss1} in \cite{SQW:FCM:18}. To the best of our knowledge, this question has not been explored yet. In this work, we give an affirmative answer for the real-valued problem.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{h1.eps}\hfil
\includegraphics[width=0.4\textwidth]{h2.eps}
\caption{Two examples of activation functions: $h_1(u)$ (left) and $h_2(u)$ (right).}
\label{fig1}
\end{figure}
We construct the new loss function $f(\bm{z})$ by coupling \eqref{eq:loss1} with an activation function $h(u)$,
\begin{align*}
f(\bm{z})=\frac{1}{2m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right),\numberthis\label{eq:loss2}
\end{align*}
where the activation function $h(u)$ satisfies
\begin{align*}
\begin{cases}
h(u) = 1 & \mbox{if }0\leq u\leq \beta,\\
h(u)\in[0,1] &\mbox{if }u\in(\beta,\gamma),\\
h(u)=0&\mbox{if }u\geq \gamma
\end{cases}
\quad\mbox{and}\quad
|h'(u)|,~|h''(u)| \mbox{ exist and are bounded }
\end{align*}
for two predetermined universal parameters $1<\beta<\gamma$ that are sufficiently large. { As can be seen later, the activation function is introduced to control the gradient of the loss so that overshooting can be avoided.}
For simplicity, we assume $\gamma=C\cdot\beta$ for some absolute constant $C>1$. Note that
the bounds on $|h'(u)|$ and $|h''(u)|$ depend on the parameters $\beta$ and $\gamma$. Two examples of $h(u)$ are
\begin{align*}
h_1(u) = \begin{cases}
1 & 0\leq u\leq \beta\\
-6\left(\frac{u-\beta}{\gamma-\beta}\right)^5+15\left(\frac{u-\beta}{\gamma-\beta}\right)^4-10\left(\frac{u-\beta}{\gamma-\beta}\right)^3+1 & u\in (\beta,\gamma)\\
0 & u \geq \gamma.
\end{cases}
\end{align*}
and
\begin{align*}
h_2(u) = \begin{cases}
1 & 0\leq u\leq \beta\\
-30000\left(\frac{u-\beta}{\gamma-\beta}\right)^5+8000\left(\frac{u-\beta}{\gamma-\beta}\right)^4-600\left(\frac{u-\beta}{\gamma-\beta}\right)^3+1 & 0<\frac{u-\beta}{\gamma-\beta}<0.1 \\ 1-\frac{u-\beta}{\gamma-\beta}&
0.1\leq \frac{u-\beta}{\gamma-\beta}\leq 0.9\\
-30000\left(\frac{u-\beta}{\gamma-\beta}-1\right)^5-8000\left(\frac{u-\beta}{\gamma-\beta}-1\right)^4-600\left(\frac{u-\beta}{\gamma-\beta}-1\right)^3&0.9<\frac{u-\beta}{\gamma-\beta}<1\\
0 & u \geq \gamma.
\end{cases}
\end{align*}
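The first example is a standard quintic smoothstep and its defining properties are easy to check numerically. The sketch below (using the illustrative choice $\beta=10$, $\gamma=2\beta$ from Figure~\ref{fig1}) implements $h_1$ and verifies that it equals $1$ up to $\beta$, equals $0$ from $\gamma$ on, passes through $1/2$ at the midpoint of the transition, and has vanishing slope at both joints.

```python
import numpy as np

def h1(u, beta=10.0, gamma=20.0):
    """Quintic smoothstep: 1 on [0, beta], 0 on [gamma, inf), C^2 in between."""
    u = np.asarray(u, dtype=float)
    t = np.clip((u - beta) / (gamma - beta), 0.0, 1.0)
    return -6*t**5 + 15*t**4 - 10*t**3 + 1.0

beta, gamma = 10.0, 20.0
mid = h1((beta + gamma) / 2)    # transition passes through 1/2

# one-sided slopes at the joints, via small finite differences
d = 1e-6
slope_beta = (h1(beta + d) - h1(beta)) / d
slope_gamma = (h1(gamma) - h1(gamma - d)) / d
```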
See Figure~\ref{fig1} for a graphical illustration of $h_1(u)$ and $h_2(u)$ when $\beta=10$ and $\gamma=2\beta$. { The smoothness of $h_1(u)$ and $h_2(u)$ can be verified directly. Indeed, a direct calculation yields that
\begin{align*}
\begin{cases}
h_1'(\beta-)=h_1'(\beta+)=0,~h_1''(\beta-)=h_1''(\beta+)=0,\\
h_1'(\gamma-)=h_1'(\gamma+)=0,~h_1''(\gamma-)=h_1''(\gamma+)=0,
\end{cases}
\end{align*}
and
\begin{align*}
\begin{cases}
h_2'(\beta-)=h_2'(\beta+)=0,~h_2''(\beta-)=h_2''(\beta+)=0,\\
h_2'((0.1(\gamma-\beta)+\beta)-)=h_2'((0.1(\gamma-\beta)+\beta)+)=-\frac{1}{\gamma-\beta},~h_2''((0.1(\gamma-\beta)+\beta)-)=h_2''((0.1(\gamma-\beta)+\beta)+)=0,\\
h_2'((0.9(\gamma-\beta)+\beta)-)=h_2'((0.9(\gamma-\beta)+\beta)+)=-\frac{1}{\gamma-\beta},~h_2''((0.9(\gamma-\beta)+\beta)-)=h_2''((0.9(\gamma-\beta)+\beta)+)=0,\\
h_2'(\gamma-)=h_2'(\gamma+)=0,~h_2''(\gamma-)=h_2''(\gamma+)=0.
\end{cases}
\end{align*}
}
{ The introduction of the activation function makes the gradient and Hessian of $f(\bm{z})$ more complicated; for example, the Hessian of $f(\bm{z})$ has 12 terms. Thus we postpone the calculations of $\nabla f(\bm{z})$ and $\nabla^2 f(\bm{z})$ to Appendix~\ref{app:gradHess}. Despite this,
the activation function is able to circumvent the effect of the { fourth powers} of Gaussian random variables, which are heavy-tailed. To demonstrate this effect, we consider the case when $n=1$ (i.e., both $z$ and $x$ are scalars) and then use the Q-Q plot to compare the random variables in the expressions of the gradients $\nabla \tilde{f}(\bm{z})$ and $\nabla f(\bm{z})$ (in fact derivatives, since $n=1$), as well as the random variables in the expressions of the Hessians $\nabla^2 \tilde{f}(\bm{z})$ and $\nabla^2 f(\bm{z})$ (in fact second derivatives, since $n=1$). The plots are presented in Figure~\ref{fig:QQ}, from which we can clearly see that the random variables without the activation function are more heavy-tailed. }
\begin{figure}[!t]
\centering
\includegraphics[width=.35\textwidth]{QQ01.eps}
\hfil
\includegraphics[width=.35\textwidth]{QQ02.eps}
\caption{ Q-Q plots for the random variables in the gradients (left) and Hessians (right) of ${f}(\bm{z})$ and $\tilde{f}(\bm{z})$ when $n=1$. In this simulation, $x=1$, $z=2$, and a total of $10^5$ (i.e., $m=10^5$) standard Gaussian random variables are independently generated. Then we compute the first and second derivatives of each term in \eqref{eq:loss1} and \eqref{eq:loss2}, respectively. The activation function $h_1(u)$ with $\beta=10$ and $\gamma=20$ is used in $f(\bm{z})$. }\label{fig:QQ}
\end{figure}
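The clipping effect behind the Q-Q plots can be reproduced directly: for $n=1$ each summand of the derivative of \eqref{eq:loss2} equals the corresponding summand for \eqref{eq:loss1} multiplied by the two activation weights in $[0,1]$, so the activated terms are never larger in magnitude, and since $h_1$ vanishes for $a_k^2\geq\gamma$ they are uniformly bounded. The sketch below uses the same setting as the figure ($x=1$, $z=2$, $\beta=10$, $\gamma=20$, $m=10^5$); the seed is arbitrary.

```python
import numpy as np

def h1(u, beta=10.0, gamma=20.0):
    # quintic smoothstep activation from Figure 1
    t = np.clip((np.asarray(u, dtype=float) - beta) / (gamma - beta), 0.0, 1.0)
    return -6*t**5 + 15*t**4 - 10*t**3 + 1.0

rng = np.random.default_rng(3)
m, x, z = 10**5, 1.0, 2.0
a = rng.standard_normal(m)
y = (a * x) ** 2

# per-sample derivative of (1/2)((a z)^2 - (a x)^2)^2 with respect to z
raw = 2 * a**2 * z * ((a * z) ** 2 - (a * x) ** 2)

# for n = 1 the first activation argument is a^2, independent of z,
# so the activated terms are the raw ones times the two weights in [0, 1]
weights = h1(a**2) * h1(m * a**2 * x**2 / y.sum())
clipped = raw * weights
```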
Assuming that $\bm{a}_k$, $k=1,\cdots,m$ are independent Gaussian vectors: $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$, our main result for $f(\bm{z})$ is stated { as follows.}
\begin{theorem}[Main result] \label{thm:main}
With probability exceeding\footnote{{ Here $\Omega(m)$ is a value which is greater than $C\cdot m$ for some numerical constant $C>0$.}} $1-e^{-\Omega(m)}$, the function $f(\bm{z})$ { defined in \eqref{eq:loss2}} with sufficiently large $1<\beta<\gamma$ does not have any spurious local minima provided $m\gtrsim n$. Moreover, at each saddle point $f(\bm{z})$ has a negative directional { curvature.}
\end{theorem}
{ We would like to note that $\beta$ and $\gamma$ in Theorem~\ref{thm:main} are two absolute positive constants whose values are fixed, and the constant hidden in $m\gtrsim n$ relies on $\beta$ and $\gamma$.
}
\subsection{Numerical Illustration}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=.35\textwidth]{rate01.eps}
\label{mu01}}
\hfil
\subfloat[]{\includegraphics[width=.35\textwidth]{rate02.eps}
\label{mu02}}\\
\subfloat[]{\includegraphics[width=.35\textwidth]{rate03.eps}
\label{mu03}}
\hfil
\subfloat[]{\includegraphics[width=.35\textwidth]{rateNew.eps}
\label{muall}}
\caption{Recovery performance of gradient descent for functions in \eqref{eq:loss1} and \eqref{eq:loss2}.}
\end{figure*}
In numerical simulations, a direct examination of the geometric landscape of a loss function seems to be out of reach. Instead, we investigate the performance of the gradient descent iteration
\begin{align*}
{ \bm{z}_{l+1} = \bm{z}_l - \mu\nabla \tilde{f}(\bm{z}_l)\quad\mbox{and}}\quad\bm{z}_{l+1} = \bm{z}_l - \mu\nabla f(\bm{z}_l)
\end{align*}
with three different stepsizes $\mu\in\{0.1,0.2,0.3\}/(\|\bm{y}\|_1/m)$ when minimizing the loss functions defined in \eqref{eq:loss1} and \eqref{eq:loss2}, respectively. We use $h_2(u)$ with $\gamma=1.5\beta$ for the loss function in \eqref{eq:loss2}. Different values of $\beta$ are adopted for different stepsizes, namely, $\beta=20$ when $\mu=0.1/(\|\bm{y}\|_1/m)$, $\beta=10$ when $\mu=0.2/(\|\bm{y}\|_1/m)$, and $\beta=5$ when $\mu=0.3/(\|\bm{y}\|_1/m)$. Roughly speaking, a more stringent activation condition is imposed for the larger stepsize.
Numerical tests are conducted for fixed $n=128$ and $m/n$ increasing from $4$ to $10$ by $0.5$. For each fixed pair of $(n,m)$, $500$ problem instances on randomly generated $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$ and $\bm{x}\sim\mathcal{N}(0,\bm{I}_n)$ are tested. The initial guess for the gradient descent iteration
is generated randomly and independently according to the standard Gaussian distribution. We consider the algorithm to have successfully reconstructed a test signal if it returns an estimate with the relative reconstruction error { $\dist(\bm{z}_l,\bm{x})/\|\bm{x}\|$} being { less than or equal to} $10^{-3}$ under the distance defined by
\begin{align*}
\dist(\bm{z},\bm{x}){ =} \min\{\|\bm{z}-\bm{x}\|,\|\bm{z}+\bm{x}\|\}.
\end{align*}
The plots of the successful recovery probability against the sampling ratio for the three different stepsizes are presented in Figures~\ref{mu01} -- \ref{mu03}. We can see that, when
$\mu=0.1/(\|\bm{y}\|_1/m)$, the transition curves of the gradient iterations based on the two different loss functions are nearly indistinguishable. However, the advantage of our loss function over the one without the activation function becomes more significant as $\mu$ increases. In particular, when $\mu=0.3/(\|\bm{y}\|_1/m)$, the gradient iteration based on the new loss function with proper ($\beta$, $\gamma$) can achieve more than $80$\% successful recovery when $m\geq 6n$, whereas the gradient descent iteration based on the other loss function can hardly succeed even when $m=10n$. { A close look at the simulation results reveals that the gradient descent method for the vanilla $\ell_2$ loss function $\tilde{f}(\bm{z})$ can either diverge or converge to a local minimizer when $\mu=0.3/(\|\bm{y}\|_1/m)$. In contrast, the new loss function can still succeed for larger stepsizes, potentially because the activation function regularizes each component of the gradient so that excessively large components are suppressed. }
We also put the recovery transitions corresponding to the new loss function but with different values of ($\mu$, $\beta$, $\gamma$) in the same plot; see Figure~\ref{muall}.
Competitive performance of the gradient descent iterations corresponding to different triples of ($\mu$, $\beta$, $\gamma$) can be observed when $m\gtrsim 5n$.
This suggests that similar recovery performance can be achieved by trading off appropriately between the stepsize and the parameters in the loss function.
\subsection{Organization and notation}
The rest of this paper is organized as follows. The geometric landscape of the new loss function is presented in Section~\ref{sec:results}. Section~\ref{sec:main_proofs} contains the detailed justification, with the proofs for the technical lemmas being presented in Section~\ref{sec:tech_proofs}. We conclude this paper with potential future directions in Section~\ref{sec:conclusion}.
Following the notation above, we use boldface lowercase letters to denote { column} vectors and use normal font letters with subindices for their entries. In particular, we fix $\bm{x}$ as the underlying vector to be reconstructed. The $\ell_1$-norm and $\ell_2$-norm of a vector $\bm{z}$ are denoted by $\|\bm{z}\|_1$ and $\|\bm{z}\|$, respectively. { Given two vectors $\bm{z}$ and $\bm{x}$, their distance, denoted $\dist(\bm{z},\bm{x})$, is defined by
\begin{align*}
\dist(\bm{z},\bm{x}){=} \min\{\|\bm{z}-\bm{x}\|,\|\bm{z}+\bm{x}\|\}.
\end{align*}}{For a given matrix $\bm{A}$, we use $\|\bm{A}\|$ to denote the matrix operator norm which is defined by
\begin{align*}
\|\bm{A}\| = \sup_{\|\bm{z}\|=1}\|\bm{A}\bm{z}\|.
\end{align*}
When $\bm{A}$ is symmetric we also have
\begin{align*}
\|\bm{A}\|=\sup_{\|\bm{z}\|=1}|\bm{z}^\top\bm{A}\bm{z}|.
\end{align*}
For two symmetric positive semidefinite matrices $\bm{A}$ and $\bm{B}$, if $\bm{A}\preceq \bm{B}$ then $\|\bm{A}\|\leq \|\bm{B}\|.$}
Recall that the notation $m\gtrsim { g(n)}$ means that there exists an absolute constant $C>0$ such that $m\ge C\cdot { g(n)}$. Similarly, the notation $m\lesssim { g(n)}$ means that there exists an absolute constant $C>0$ such that $m\leq C\cdot { g(n)}$. Throughout the paper, $C$ denotes an absolute constant whose value may change from line to line. { In addition, $!!$ means double factorial; that is $n!!=n(n-2)(n-4)\cdots$.}
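For concreteness, the double factorial (and the even Gaussian moments $\mathbb{E}[g^{2k}]=(2k-1)!!$ that it encodes, which appear repeatedly in the lemmas below) can be computed with a small helper; this snippet is for illustration only and is not part of the paper.

```python
def double_factorial(n: int) -> int:
    """n!! = n * (n - 2) * (n - 4) * ... ; by convention 0!! = (-1)!! = 1."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# Even Gaussian moments: E[g^(2k)] = (2k - 1)!! for g ~ N(0, 1),
# e.g. E[g^4] = 3!! = 3 and E[g^8] = 7!! = 105.
```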
\section{Proofs for Section~\ref{sec:results}}\label{sec:main_proofs}
\subsection{Technical lemmas}
In order to prove the main theorems, we first list several technical lemmas that will be used repeatedly in this section, but defer the proofs to Section~\ref{sec:tech_proofs}.
Here and throughout this paper, if the expression of a random variable or a random matrix is long we will simply use $\mean{\bm{\cdot}}$ to denote the associated expectation.
\begin{lemma}\label{lem:tech_lemma1}
Let $h(u)$ be a continuous function defined on $[0,\infty)$ which obeys
\begin{align*}
\begin{cases}
h(u) = 1 & \mbox{if }0\leq u\leq \beta,\\
h(u)\in[0,1] &\mbox{if }u\in{ (\beta,\gamma)},\\
h(u)=0&\mbox{if }u\geq \gamma
\end{cases}
\end{align*}
for two absolute numerical constants $\gamma>\beta\geq 1$. Assume $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$, $k=1,\cdots,m$, are independent. Then for any $\epsilon\in(0,1)$ and all nonzero vectors $\bm{u},~\bm{v}\in\mathbb{R}^n$,
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(\frac{|\bm{a}_k^\top\bm{u}|^2}{\|\bm{u}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{v}|^2}{\|\bm{v}\|^2}\right)\bm{a}_k\bm{a}_k^\top-\mean{\bm{\cdot}}\right\|\\
&\lesssim \left(\epsilon\cdot \max\{s,t\}\gamma^{\frac{t+s}{2}}+\gamma^{\frac{s+t}{2}} \epsilon^{-1}e^{-0.49\epsilon^{-2}}+\gamma^{\frac{s+t+1}{2}}e^{-0.49\beta}\right)\|\bm{u}\|^s\|\bm{v}\|^t
\end{align*}
holds with probability at least $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$, where the exponents $s$ and $t$ are two nonnegative integers.
\end{lemma}
{\begin{lemma}\label{lem:tech_lemma12}
Under the setup of Lemma~\ref{lem:tech_lemma1},
\begin{align*}
&\left\|\mean{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^t\left( h\left(\frac{|\bm{a}_k^\top\bm{u}|^2}{\|\bm{u}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{v}|^2}{\|\bm{v}\|^2}\right)-1\right)\bm{a}_k\bm{a}_k^\top}\right\|\lesssim ((8s)!!)^{1/8}((8t)!!)^{1/8}\|\bm{u}\|^s\|\bm{v}\|^t\cdot e^{-0.25\beta}
\end{align*}
holds for all $\|\bm{u}\|\neq 0$ and $\|\bm{v}\|\neq 0$.
\end{lemma}}
\begin{lemma}\label{lem:tech_lemma11}
Under the setup of Lemma~\ref{lem:tech_lemma1},
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\left[ h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)-h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\right]\bm{a}_k\bm{a}_k^\top\right\|\\& \lesssim 2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\|\bm{z}\|^s\|\bm{x}\|^t
\end{align*}
holds uniformly for all $\|\bm{z}\|\neq 0$ with probability at least $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim\epsilon^{-2}\log\epsilon^{-1}\cdot n$.
\end{lemma}
\begin{lemma}\label{lem:tech_lemma2}
Let $h(u)$ and { $g(u)$} be two continuous functions defined on $[0,\infty)$ { satisfying}
\begin{align*}
\begin{cases}
h(u) = 1 & \mbox{if }0\leq u\leq \beta,\\
h(u)\in[0,1] &\mbox{if }u\in(\beta,\gamma),\\
h(u)=0&\mbox{if }u\geq \gamma
\end{cases}
\quad\mbox{and}\quad
\begin{cases}
g(u) = 0 & \mbox{if }u\in[0,\beta]\cup[\gamma,\infty), \\
\left| g(u)\right|\leq 1 & \mbox{if } \beta<u<\gamma
\end{cases}
\end{align*}
for two absolute numerical constants $\gamma>\beta\geq 1$. Assume $\bm{a}_k\sim\mathcal{N}(0,\bm{I}_n)$, $k=1,\cdots,m$, are independent. Then for any $\epsilon\in(0,1)$ and all nonzero vectors $\bm{z}\in\mathbb{R}^n$,
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\right\|\lesssim 2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\|\bm{z}\|^s\|\bm{x}\|^t
\end{align*}
holds with probability at least $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$, where the exponents $s$ and $t$ are two nonnegative integers.
\end{lemma}
\begin{lemma}\label{lem:tech_lemma3}
Under the setup of Lemma~\ref{lem:tech_lemma2},
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top\right\|\lesssim 2^{\frac{t}{2}}\
\gamma^{\frac{s+t+1}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\|\bm{z}\|^{s+1}\|\bm{x}\|^t
\end{align*}
holds uniformly for all $\|\bm{z}\|\neq 0$ with probability at least $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$.
\end{lemma}
\begin{lemma}\label{lem:tech_lemma4}
Under the setup of Lemma~\ref{lem:tech_lemma2}, for $s\geq 2$,
{\begin{align*}
&\left|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right| \lesssim 2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)
\|\bm{z}\|^s\|\bm{x}\|^t
\end{align*}}
holds uniformly for all $\|\bm{z}\|\neq 0$ with probability at least $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$.
\end{lemma}
{\subsection{Proof of Theorem~\ref{thm:R1}}}
Due to symmetry, it suffices to consider the region ${ \|\bm{z}-\bm{x}\|}\leq \frac{1}{5}\|\bm{x}\|$, from which we have \begin{align*}\frac{4}{5}\|\bm{x}\|\leq\|\bm{z}\|\leq\frac{6}{5}\|\bm{x}\|.\numberthis\label{eq:thmR1_eq1}
\end{align*}
Though there are twelve terms in the expression for $\nabla^2 f(\bm{z})$ (see \eqref{eq:hessian}), it is not difficult to see that the second term through the last term, with their sum denoted by $\mathbf{I}_2$, can be bounded by Lemmas~\ref{lem:tech_lemma2} to \ref{lem:tech_lemma4} { (the details are deferred to Appendix \ref{sec:app:sub3})}, giving
\begin{align*}
\|\mathbf{I}_2\| &\lesssim \gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2,\frac{\|\bm{x}\|^4}{\|\bm{z}\|^2}\right\}\numberthis\label{eq:thmR1_eq2a0}\\
&\lesssim \gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\|\bm{x}\|^2,\numberthis\label{eq:thmR1_eq2}
\end{align*}
where we have used \eqref{eq:thmR1_eq1} in the second line. Define
\begin{align*}
\mathbf{I}_1 = \frac{1}{m}\sum_{k=1}^m\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top,
\end{align*}
which is the first term of the Hessian $\nabla^2 f(\bm{z})$. By setting $(s,t)$ to be $(2,0)$ and $(0,2)$ respectively in Lemma~\ref{lem:tech_lemma11}, we have
\begin{align*}
&\left\|\mathbf{I}_1-\frac{1}{m}\sum_{k=1}^m\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\lesssim \gamma^{\frac{3}{2}}\left( e^{-0.245\beta}+\epsilon\right)\max\left\{\|\bm{z}\|^2,\|\bm{x}\|^2\right\}\\
&\lesssim \gamma^{\frac{3}{2}}\left( e^{-0.245\beta}+\epsilon\right)\|\bm{x}\|^2.\numberthis\label{eq:thmR1_eq3}
\end{align*}
Moreover, letting $(s,t)$ be $(2,0)$ and $(0,2)$ respectively in Lemma~\ref{lem:tech_lemma1}, we have
\begin{align*}
&\lambda_{\min}\left( \frac{1}{m}\sum_{k=1}^m\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\bm{a}_k\bm{a}_k^\top\right)\\
&\geq \lambda_{\min}\left(\mean{\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\bm{a}_k\bm{a}_k^\top}\right)\\
&-C\gamma^{\frac{3}{2}}\left(\epsilon+ \epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\|\bm{x}\|^2\\
&\geq \lambda_{\min}\left(\mean{\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right)\bm{a}_k\bm{a}_k^\top}\right)\\
&-\left\| \mean{\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) \left\{ h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)-1\right\}\bm{a}_k\bm{a}_k^\top}\right\|\\
&-C\gamma^{\frac{3}{2}}\left(\epsilon+ \epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\|\bm{x}\|^2,
\end{align*}
where $C$ is an absolute constant whose value may change from line to line.
For any unit vector $\bm{q}\in S^{n-1}$, we have
\begin{align*}
&\bm{q}^\top\mean{\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right)\bm{a}_k\bm{a}_k^\top}\bm{q}\\
&=6\|\bm{q}\|^2\|\bm{z}\|^2+12(\bm{q}^\top\bm{z})^2-2\|\bm{q}\|^2\|\bm{x}\|^2-4(\bm{q}^\top\bm{x})^2\\
&\geq 6\|\bm{z}\|^2-2\|\bm{x}\|^2-4|\bm{q}^\top(\bm{z}+\bm{x})||\bm{q}^\top(\bm{z}-\bm{x})|\\
&\geq \frac{2}{25}\|\bm{x}\|^2,
\end{align*}
{ where the second line follows from the standard result
\begin{align*}
\mean{(\bm{a}_k^\top\bm{u})^2(\bm{a}_k^\top\bm{v})^2} = \|\bm{u}\|^2\|\bm{v}\|^2+2(\bm{u}^\top\bm{v})^2,
\end{align*}
the third line follows from $12(\bm{q}^\top\bm{z})^2\geq 4(\bm{q}^\top\bm{z})^2$ together with $4(\bm{q}^\top\bm{z})^2-4(\bm{q}^\top\bm{x})^2=4(\bm{q}^\top(\bm{z}+\bm{x}))(\bm{q}^\top(\bm{z}-\bm{x}))$, and the last line follows from \eqref{eq:thmR1_eq1}, which implies $\|\bm{z}\|^2\geq \frac{16}{25}\|\bm{x}\|^2$, $\|\bm{z}+\bm{x}\|\leq \frac{11}{5}\|\bm{x}\|$, and $\|\bm{z}-\bm{x}\|\leq \frac{1}{5}\|\bm{x}\|$.}
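The standard fourth-moment identity used above can be sanity-checked by Monte Carlo simulation (the sample size, seed, and test vectors below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 10**6, 3
A = rng.standard_normal((m, n))                 # rows a_k ~ N(0, I_n)
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

# Monte Carlo estimate of E[(a^T u)^2 (a^T v)^2]
mc = np.mean((A @ u) ** 2 * (A @ v) ** 2)
# closed form: ||u||^2 ||v||^2 + 2 (u^T v)^2 = 1*2 + 2*1 = 4
exact = (u @ u) * (v @ v) + 2.0 * (u @ v) ** 2
```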
Moreover, the application of Lemma~\ref{lem:tech_lemma12} yields
{\begin{align*}
&\left\| \mean{\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) \left\{ h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)-1\right\}\bm{a}_k\bm{a}_k^\top}\right\|\\
&\lesssim \left(\|\bm{z}\|^2+\|\bm{x}\|^2\right){e^{-0.25\beta}}\\
&\lesssim e^{-0.25\beta}\|\bm{x}\|^2.
\end{align*}}
It follows that
\begin{align*}
&\lambda_{\min}\left( \frac{1}{m}\sum_{k=1}^m\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right) h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\bm{a}_k\bm{a}_k^\top\right)\\
&\geq \left( \frac{2}{25}-Ce^{-0.25\beta}-C\gamma^{\frac{3}{2}}\left(\epsilon+ \epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\rb\|\bm{x}\|^2.\numberthis\label{eq:thmR1_eq4}
\end{align*}
Noting that $\nabla^2 f(\bm{z}) = \mathbf{I}_1+\mathbf{I}_2$, combining \eqref{eq:thmR1_eq2}, \eqref{eq:thmR1_eq3}, and \eqref{eq:thmR1_eq4} together yields
\begin{align*}
\lambda_{\min}\left(\nabla^2 f(\bm{z})\right)&\geq \left( \frac{2}{25}-Ce^{-0.25\beta}-C\gamma^{\frac{3}{2}}\left(\epsilon+ \epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\rb\|\bm{x}\|^2\\
&-C\gamma^{\frac{3}{2}}\left( e^{-0.245\beta}+\epsilon\right)\|\bm{x}\|^2-C\gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\|\bm{x}\|^2\\
&\geq\frac{1}{25}\|\bm{x}\|^2
\end{align*}
for sufficiently small $\epsilon$ and sufficiently large $\beta$ and $\gamma$ since $\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}=O(1)$ in our construction.
\subsection{Proof of Theorem~\ref{thm:R2a}}
Due to symmetry we only need to consider the case $\bm{z}^\top\bm{x}\geq 0$ in $\mathcal{R}_{2a}$. We will first show that with probability $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{x}^\top\nabla f(\bm{z})<-\frac{\delta}{100}\|\bm{x}\|^4\numberthis\label{eq:R2a01}
\end{align*}
holds uniformly for all $\bm{z}$ in the region $\mathcal{R}_{2a}\cap \{\bm{z}~|~\bm{z}^\top\bm{x}\geq \delta\|\bm{x}\|^2\}$ provided $m\gtrsim n$, which excludes the possibility of any critical points in this region.
Next, we will show that with probability $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{z}^\top\nabla f(\bm{z}) > \delta\|\bm{x}\|^4\numberthis\label{eq:R2a02}
\end{align*}
holds uniformly for all $\bm{z}$ in the region $\mathcal{R}_{2a}\cap \{\bm{z}~|~0\leq\bm{z}^\top\bm{x}< \delta\|\bm{x}\|^2\}\cap \{\bm{z}~|~\|\bm{z}\|^2/\|\bm{x}\|^2\geq\frac{1}{3}+\delta\}$ provided $m\gtrsim n$, which again excludes the possibility of any critical points in this region.
Then the first part of Theorem~\ref{thm:R2a} follows immediately by combining the above two results together.
\paragraph{Proof of \eqref{eq:R2a01}} Notice that
\begin{align*}
\bm{x}^\top\nabla f(\bm{z}) & = \frac{1}{m}\sum_{k=1}^m2\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\\
&+\frac{1}{\|\bm{z}\|^2}\cdot\frac{1}{m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\\
&-\frac{\bm{z}^\top\bm{x}}{\|\bm{z}\|^4}\cdot\frac{1}{m}\sum_{k=1}^m\left((\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)^2(\bm{a}_k^\top\bm{z})^2h'\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\\
&:=\mathbf{I}_1+\mathbf{I}_2+\mathbf{I}_3.
\end{align*}
By Lemmas~\ref{lem:tech_lemma11}, \ref{lem:tech_lemma1},
and \ref{lem:tech_lemma12}, and noticing {$\|\bm{z}\|^2\leq \|\bm{x}\|^2$} in $\mathcal{R}_{2a}$, we have
\begin{align*}
\mathbf{I}_1&\leq \frac{1}{m}\sum_{k=1}^m2\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)+C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\|\bm{x}\|^4\\
&\leq \mean{2\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)}\\
&+C\left(\gamma\epsilon+\gamma\epsilon^{-1}e^{-0.49\epsilon^{-2}}+\gamma^{1.5}e^{-0.49\beta}\right) \|\bm{x}\|^4+C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\|\bm{x}\|^4\\
&\leq \mean{2\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})}\\
&+Ce^{-0.25\beta}\|\bm{x}\|^4+C\left(\gamma\epsilon+\gamma\epsilon^{-1}e^{-0.49\epsilon^{-2}}+\gamma^{1.5}e^{-0.49\beta}\right) \|\bm{x}\|^4+C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\|\bm{x}\|^4.
\end{align*}
On the other hand, by Lemma~\ref{lem:tech_lemma2} and noticing ${\frac{97}{300}\|\bm{x}\|^2\leq(\frac{1}{3}-\delta)\|\bm{x}\|^2 \leq \|\bm{z}\|^2\leq \|\bm{x}\|^2}$ in $\mathcal{R}_{2a}$ since $\delta\leq\frac{1}{100}$, we have
\begin{align*}
\mathbf{I}_2+\mathbf{I}_3\leq C|h'|_\infty\gamma^2\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)\|\bm{x}\|^4.
\end{align*}
Thus, combining the above two inequalities implies that for all
$\bm{z}$ in $\mathcal{R}_{2a}\cap \{\bm{z}~|~\bm{z}^\top\bm{x}\geq \delta\|\bm{x}\|^2\}$ we have
\begin{align*}
\bm{x}^\top\nabla f(\bm{z})&\leq \mean{2\left( (\bm{a}_k^\top\bm{z})^2-(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{z})(\bm{a}_k^\top\bm{x})}+\frac{\delta}{20}\|\bm{x}\|^4\\
&=6(\bm{z}^\top\bm{x})(\|\bm{z}\|^2-\|\bm{x}\|^2)+\frac{\delta}{20}\|\bm{x}\|^4\\
&\leq -\frac{\delta}{100}\|\bm{x}\|^4
\end{align*}
where the first line can be achieved by choosing $\epsilon$ sufficiently small and $\gamma>\beta$ sufficiently large, and in the last line we have used the fact
{$\|\bm{z}\|^2\leq \frac{99}{100}\|\bm{x}\|^2$ and $\bm{z}^\top\bm{x}\geq\delta\|\bm{x}\|^2$} in
$\mathcal{R}_{2a}\cap \{\bm{z}~|~\bm{z}^\top\bm{x}\geq \delta\|\bm{x}\|^2\}$.
\paragraph{Proof of \eqref{eq:R2a02}} First we have
\begin{align*}
\bm{z}^\top\nabla f(\bm{z}) =\frac{1}{m}\sum_{k=1}^m2(|\bm{a}_k^\top\bm{z}|^2-|\bm{a}_k^\top\bm{x}|^2)(\bm{a}_k^\top\bm{z})^2h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\numberthis\label{eq:ztfz}
\end{align*}
By applying Lemmas~\ref{lem:tech_lemma11}, \ref{lem:tech_lemma1} and \ref{lem:tech_lemma12} in order, we have
\begin{align*}
\bm{z}^\top\nabla f(\bm{z}) &\geq
\frac{1}{m}\sum_{k=1}^m2(|\bm{a}_k^\top\bm{z}|^2-|\bm{a}_k^\top\bm{x}|^2)(\bm{a}_k^\top\bm{z})^2h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\\
&-C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&\geq \mean{2(|\bm{a}_k^\top\bm{z}|^2-|\bm{a}_k^\top\bm{x}|^2)(\bm{a}_k^\top\bm{z})^2h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)}\\
&-C\gamma^{1.5}\left( 2\epsilon+\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&-C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&\geq \mean{2(|\bm{a}_k^\top\bm{z}|^2-|\bm{a}_k^\top\bm{x}|^2)(\bm{a}_k^\top\bm{z})^2}-Ce^{-0.25\beta}\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&-C\gamma^{1.5}\left( 2\epsilon+\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&-C\gamma\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right).\numberthis\label{eq:R2a03}
\end{align*}
Noticing that in $\mathcal{R}_{2a}\cap \{\bm{z}~|~0\leq\bm{z}^\top\bm{x}< \delta\|\bm{x}\|^2\}\cap \{\bm{z}~|~\|\bm{z}\|^2/\|\bm{x}\|^2\geq\frac{1}{3}+\delta\}$ we have $(\frac{1}{3}+\delta)\|\bm{x}\|^2\leq\|\bm{z}\|^2\leq\|\bm{x}\|^2$, and consequently,
\begin{align*}
\bm{z}^\top\nabla f(\bm{z}) &\geq 6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2\\
&-\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\|\bm{x}\|^4\\
&\geq 6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2-\delta\|\bm{x}\|^4\\
& \geq 6\left(\frac13+\delta\right)^2\|\bm{x}\|^4-2\left(\frac13+\delta\right)\|\bm{x}\|^4-4\delta^2\|\bm{x}\|^4-\delta\|\bm{x}\|^4\\
&=(\delta+2\delta^2)\|\bm{x}\|^4>\delta\|\bm{x}\|^4,
\end{align*}
where the second inequality can be achieved by choosing $\epsilon$ to be sufficiently small and $\gamma>\beta$ to be sufficiently large.

In the first part we have established that critical points in $\mathcal{R}_{2a}$ must obey
\begin{align*}
\frac{1}{3}-\delta<\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}<\frac{1}{3}+\delta\quad\mbox{and}\quad
|\bm{z}^\top\bm{x}|<\delta\|\bm{x}\|^2.
\end{align*}
Thus, by \eqref{eq:thmR1_eq2}, we have
\begin{align*}
\bm{x}^\top\nabla^2 f(\bm{z})\bm{x} &\leq \frac{1}{m}\sum_{k=1}^m\left( 6(\bm{a}_k^\top\bm{z})^2-2(\bm{a}_k^\top\bm{x})^2\right)(\bm{a}_k^\top\bm{x})^2 h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\\
&+C\gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\|\bm{x}\|^4.
\end{align*}
Applying Lemmas~\ref{lem:tech_lemma11}, \ref{lem:tech_lemma1} and \ref{lem:tech_lemma12} in order yields
\begin{align*}
\bm{x}^\top\nabla^2f(\bm{z})\bm{x} & \leq \mean{6(\bm{a}_k^\top\bm{z})^2(\bm{a}_k^\top\bm{x})^2-2(\bm{a}_k^\top\bm{x})^4}\\
&+\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\|\bm{x}\|^4\\
&+C\gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\|\bm{x}\|^4\\
&\leq 6\|\bm{z}\|^2\|\bm{x}\|^2+12(\bm{z}^\top\bm{x})^2-6\|\bm{x}\|^4+\delta\|\bm{x}\|^4\\
&\leq \lb6\left(\frac{1}{3}+\delta\right)+12\delta^2-6+\delta\right)\|\bm{x}\|^4\\
&\leq -3\|\bm{x}\|^4,
\end{align*}
where in the second inequality for fixed $\delta$ we choose $\epsilon$ to be sufficiently small and $\beta$ and $\gamma$ to be properly large.
Similarly, but considering a different direction, we have
\begin{align*}
\bm{z}^\top\nabla^2 f(\bm{z})\bm{z}&\geq 18\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2\\
&-\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\|\bm{x}\|^4\\
&-C\gamma^{\frac{9}{2}}\left( e^{-0.245\beta}+\sqrt{\epsilon}\right)\max\left\{ |h'|_{\infty},|h''|_{\infty}\right\}\|\bm{x}\|^4\\
&\geq \lb18\left(\frac{1}{3}-\delta\right)^2-2\left(\frac{1}{3}+\delta\right)-4\delta^2-\delta\right)\|\bm{x}\|^4\\
&\geq \|\bm{x}\|^4.
\end{align*}
\subsection{Proof of Theorem~\ref{thm:R2b}}
We only need to consider the case $\bm{z}^\top\bm{x}\geq 0$. In $\mathcal{R}_{2b}$, one has
\begin{align*}
\frac{99}{100}\leq \frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\leq\frac{101}{100}\quad \mbox{and} \quad\|\bm{z}-\bm{x}\|>\frac{1}{5}\|\bm{x}\|.
\end{align*}
Thus,
\begin{align*}
\bm{z}^\top\bm{x}\leq \frac{1}{2}\left(\|\bm{x}\|^2+\|\bm{z}\|^2-\frac{1}{25}\|\bm{x}\|^2\right)\leq 0.985\|\bm{x}\|^2.
\end{align*}
Noticing {$\|\bm{z}\|^2\geq\frac{99}{100}\|\bm{x}\|^2$} in $\mathcal{R}_{2b}$, by choosing $\epsilon$ to be sufficiently small and $\beta$ and $\gamma$ to be properly large in \eqref{eq:R2a03}, we have
\begin{align*}
\bm{z}^\top\nabla f(\bm{z})&\geq \mean{2(|\bm{a}_k^\top\bm{z}|^2-|\bm{a}_k^\top\bm{x}|^2)(\bm{a}_k^\top\bm{z})^2}-\delta\|\bm{x}\|^4\\
&=6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2-\delta\|\bm{x}\|^4\\
&=\|\bm{x}\|^4\left( 6\left(\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\right)^2-2\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\right)-4(\bm{z}^\top\bm{x})^2-\delta\|\bm{x}\|^4\\
&\geq \lb6\cdot \frac{ 99^2}{100^2}-2\cdot \frac{99}{100}-4\cdot0.985^2-\delta\right)\|\bm{x}\|^4\\
&\geq \frac{9}{1000}\|\bm{x}\|^4
\end{align*}
provided {$\delta\leq \frac{1}{100}$}, where in the fourth line we have used the fact that the minimum of $6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2$ over $\frac{99}{100}\leq \frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}$ is achieved at $\|\bm{z}\|^2=\frac{99}{100}\|\bm{x}\|^2$.
\subsection{Proof of Theorem~\ref{thm:R2c}}
Similarly to the proof for Theorem~\ref{thm:R2b}, we have
\begin{align*}
\bm{z}^\top\nabla f(\bm{z})&\geq 6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2-\delta\|\bm{x}\|^4\\
&\geq 6\|\bm{z}\|^2(\|\bm{z}\|^2-\|\bm{x}\|^2)-\delta\|\bm{x}\|^4\\
&\geq \frac{6}{101}\|\bm{z}\|^4-\delta\|\bm{z}\|^4\\
&\geq \frac{49}{1000}\|\bm{z}\|^4,
\end{align*}
where in the third line we have used the fact {$\|\bm{z}\|^2\geq\frac{101}{100}\|\bm{x}\|^2$} in $\mathcal{R}_{2c}$, and in the last line we have used the assumption {$\delta\leq\frac{1}{100}$}.
\subsection{Proof of Theorem~\ref{thm:R3}}
Recall that $\bm{z}^\top\nabla f(\bm{z})$ is given in \eqref{eq:ztfz}.
Thus, similarly to \eqref{eq:R2a03}, applying Lemmas~\ref{lem:tech_lemma11}, \ref{lem:tech_lemma1} and \ref{lem:tech_lemma12} in the reverse direction yields
\begin{align*}
\bm{z}^\top\nabla f(\bm{z})
&\leq \mean{2(|\bm{a}_k^\top\bm{z}|^4-|\bm{a}_k^\top\bm{z}|^2|\bm{a}_k^\top\bm{x}|^2)}+Ce^{-0.25\beta}\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&+C\gamma^{1.5}\left( 2\epsilon+\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right)\\
&+C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)\left(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2\right).
\end{align*}
It follows that
\begin{align*}
&\bm{z}^\top\nabla f(\bm{z})\leq 6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2-4(\bm{z}^\top\bm{x})^2\\
&+\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)(\|\bm{z}\|^4+\|\bm{z}\|^2\|\bm{x}\|^2)\\
&\leq 6\|\bm{z}\|^4-2\|\bm{z}\|^2\|\bm{x}\|^2+\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\|\bm{z}\|^2\|\bm{x}\|^2\\
&= \left\{ 2\left(\frac{3\|\bm{z}\|^2}{\|\bm{x}\|^2}-1\right)+\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\right\}\|\bm{z}\|^2\|\bm{x}\|^2\\
&\leq \left\{ -6\delta+\left( C\gamma\left( \sqrt{\beta}e^{-0.245\beta}+\epsilon\right)+C\gamma^{1.5}\left( 2\epsilon+C\epsilon^{-1}e^{-0.49\epsilon^{-2}}+e^{-0.49\beta}\right)+Ce^{-0.25\beta}\right)\right\}\|\bm{z}\|^2\|\bm{x}\|^2\\
&\leq -5\delta\|\bm{z}\|^2\|\bm{x}\|^2,
\end{align*}
where in the second and the third inequalities we have used the assumption { $\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\leq \frac{1}{3}-\delta$}, and in
the last inequality we choose $\epsilon$ to be sufficiently small and $\gamma>\beta$ to be sufficiently large.
\section{Proofs of technical lemmas}\label{sec:tech_proofs}
\subsection{Proof of Lemma~\ref{lem:tech_lemma1}}
Due to the homogeneity, it suffices to establish the inequality for all $\bm{u}\in\mathcal{S}^{n-1}$ and $\bm{v}\in\mathcal{S}^{n-1}$. We will first consider a fixed pair of $\bm{u}$ and $\bm{v}$ and then use the covering argument. For fixed $\bm{u}$ and $\bm{v}$ of unit norm, it suffices to establish a uniform bound for
\begin{align*}
\left| \frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)(\bm{a}_k^\top\bm{w})^2-\mean{\bm{\cdot}}\right|\numberthis\label{eq:tech1_eq1}
\end{align*}
over all $\bm{w}\in\mathcal{N}_{1/4}$, where $\mathcal{N}_{1/4}$ is a $1/4$-net of $\mathcal{S}^{n-1}$. { This is because for any symmetric matrix $\bm{A}$ one has
\begin{align*}
\|\bm{A}\|\leq 2\sup_{\bm{z}\in\mathcal{N}_{1/4}}|\langle \bm{A}\bm{z},\bm{z}\rangle|,
\end{align*}
see Lemma~5.4 in \cite{Ver2011rand}}. Noticing that
\begin{align*}
&\left|(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)(\bm{a}_k^\top\bm{w})^2\right|\\
&\leq |\bm{a}_k^\top\bm{u}|^s|\bm{a}_k^\top\bm{v}|^t\dsone{|\bm{a}_k^\top\bm{u}|^2\leq\gamma} \dsone{|\bm{a}_k^\top\bm{v}|^2\leq\gamma}(\bm{a}_k^\top\bm{w})^2\\
&\leq \gamma^{\frac{s+t}{2}}(\bm{a}_k^\top\bm{w})^2
\end{align*}
and $(\bm{a}_k^\top\bm{w})^2$ is a chi-square random variable with one degree of freedom, we can see that \begin{align*}(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)(\bm{a}_k^\top\bm{w})^2
\end{align*}
is sub-exponential with the sub-exponential norm $\|\cdot\|_{\psi_1}$ bounded by an absolute constant times $\gamma^{\frac{s+t}{2}}$. It then follows from \cite{Ver2011rand} that
\begin{align*}
\left\|(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)(\bm{a}_k^\top\bm{w})^2
-\mean{\bm{\cdot}}\right\|_{\psi_1}\lesssim \gamma^{\frac{s+t}{2}}.
\end{align*}
Thus an application of Bernstein's inequality implies that
\begin{align*}
\left| \frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)(\bm{a}_k^\top\bm{w})^2-\mean{\bm{\cdot}}\right|\lesssim \gamma^{\frac{s+t}{2}}\epsilon\numberthis\label{eq:tech1_eq2}
\end{align*}
with probability at least $1-2e^{-\Omega(m\epsilon^2)}$ for $\epsilon\in(0,1)$.
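As a sanity check on this Bernstein step, the sketch below considers the simplest instance (an assumption for illustration: $s=t=0$ and $h\equiv 1$, so each summand reduces to the chi-square variable $(\bm{a}_k^\top\bm{w})^2$ with mean $1$) and measures how far the empirical average drifts from its mean:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5000, 50  # illustrative sizes; the lemma needs m >> n

# Standard Gaussian measurement vectors a_1, ..., a_m as rows.
A = rng.standard_normal((m, n))

# A fixed unit vector w on the sphere S^{n-1}.
w = rng.standard_normal(n)
w /= np.linalg.norm(w)

# Each (a_k^T w)^2 is chi-square with one degree of freedom (mean 1),
# hence sub-exponential; Bernstein predicts |average - 1| ~ epsilon
# with probability 1 - 2 exp(-Omega(m * epsilon^2)).
est = np.mean((A @ w) ** 2)
deviation = abs(est - 1.0)
print(deviation)
```

For $m=5000$ the deviation is typically of order $\sqrt{2/m}\approx 0.02$, in line with the $\epsilon$-accuracy predicted above.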
{ By \cite[Lemma 5.2]{Ver2011rand} we know that $|\mathcal{N}_{1/4}|\leq 9^n$. Thus the failure probability over all $\bm{w}\in\mathcal{N}_{1/4}$ can be bounded by $9^n\cdot 2e^{-\Omega(m\epsilon^2)}=2 e^{-\Omega(m\epsilon^2)+n\log9}$, which is less than $2e^{-\Omega(m\epsilon^2)}$ (with a different constant hidden in $\Omega(m\epsilon^2)$) provided $m\gtrsim \epsilon^{-2}\cdot n$.}
Therefore,
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top-\mean{\bm{\cdot}}\right\|\lesssim \gamma^{\frac{s+t}{2}}\epsilon\numberthis\label{eq:tech1_eq3}
\end{align*}
for fixed $\bm{u}\in\mathcal{S}^{n-1}$ and $\bm{v}\in\mathcal{S}^{n-1}$ with probability at least
$1-2e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\cdot n$.
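Before the second covering step, the net inequality $\|\bm{A}\|\leq 2\sup_{\bm{z}\in\mathcal{N}_{1/4}}|\langle \bm{A}\bm{z},\bm{z}\rangle|$ used above can be illustrated numerically. A toy sketch in $\mathbb{R}^2$ (the matrix and net here are illustrative choices, not part of the proof): 32 equally spaced points on the unit circle form a valid $1/4$-net, since every unit vector lies within Euclidean distance $2\sin(\pi/32)<1/4$ of some net point.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random symmetric 2x2 matrix.
B = rng.standard_normal((2, 2))
A = (B + B.T) / 2

# A 1/4-net of the unit circle: 32 equally spaced directions.
angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
net = np.stack([np.cos(angles), np.sin(angles)], axis=1)

spectral_norm = np.linalg.norm(A, 2)                 # ||A||
net_bound = 2.0 * max(abs(z @ A @ z) for z in net)   # 2 sup |<Az, z>|

# Lemma 5.4 of Vershynin guarantees ||A|| <= net_bound.
assert spectral_norm <= net_bound
```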
To establish a bound over all $\bm{u}\in\mathcal{S}^{n-1}$ and $\bm{v}\in\mathcal{S}^{n-1}$, we will use the covering argument again. Let $\mathcal{N}_{\epsilon^2}$ be an $\epsilon^2$-net of $\mathcal{S}^{n-1}$ with cardinality $|\mathcal{N}_{\epsilon^2}|\leq (3/\epsilon^2)^n$. Then it is evident that \eqref{eq:tech1_eq3} holds for all $\bm{u}_0\in\mathcal{N}_{\epsilon^2}$ and $\bm{v}_0\in\mathcal{N}_{\epsilon^2}$ with probability at least
$1-2e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$. For any $\bm{u}\in\mathcal{S}^{n-1}$ and $\bm{v}\in\mathcal{S}^{n-1}$, there exists a pair of $\bm{u}_0,\bm{v}_0\in\mathcal{N}_{\epsilon^2}$ such that $\|\bm{u}-\bm{u}_0\|\leq\epsilon^2$ and $\|\bm{v}-\bm{v}_0\|\leq\epsilon^2$. It follows that
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top-(\bm{a}_k^\top\bm{u}_0)^s(\bm{a}_k^\top\bm{v}_0)^th\left(|\bm{a}_k^\top\bm{u}_0|^2\right) h\left(|\bm{a}_k^\top\bm{v}_0|^2\right)\bm{a}_k\bm{a}_k^\top\right\}\right\|\\
&\leq \left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&+\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\left\{ (\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)-(\bm{a}_k^\top\bm{v}_0)^th\left(|\bm{a}_k^\top\bm{v}_0|^2\right)\right\}\bm{a}_k\bm{a}_k^\top\right\|.\numberthis\label{eq:tech1_eq4}
\end{align*}
Next we will focus on the first term of \eqref{eq:tech1_eq4}; the second term can be bounded similarly.
We split the first term into five terms based on a decomposition of $[0,\infty)\times [0,\infty)$ into the five regions (a)--(e) below, and then provide an upper bound for each term.
\paragraph{Region (a): $[0,\beta]\times[0,\beta]$}
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}\dsone{{ |\bm{a}_k^\top\bm{u}|^2\leq\beta,|\bm{a}_k^\top\bm{u}_0|^2\leq\beta}}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^s-(\bm{a}_k^\top\bm{u}_0)^s\right\}\dsone{|\bm{a}_k^\top\bm{u}|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}_0|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|\leq\epsilon}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&+\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^s-(\bm{a}_k^\top\bm{u}_0)^s\right\}\dsone{|\bm{a}_k^\top\bm{u}|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}_0|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|>\epsilon}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&=\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0)\left((\bm{a}_k^\top\bm{u})^{s-1}+(\bm{a}_k^\top\bm{u})^{s-2}(\bm{a}_k^\top\bm{u}_0)+\cdots+(\bm{a}_k^\top\bm{u}_0)^{s-1}\right)\right\}\right.\\&\left.{\color{white}\sum_{k=1}^m}\dsone{|\bm{a}_k^\top\bm{u}|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}_0|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|\leq\epsilon}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&+\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^s-(\bm{a}_k^\top\bm{u}_0)^s\right\}\dsone{|\bm{a}_k^\top\bm{u}|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}_0|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|>\epsilon}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \epsilon\cdot s\beta^{\frac{s-1}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\right\|+2\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|>\epsilon}\right\|\\
&\leq \epsilon\cdot s\beta^{\frac{s-1}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\right\|+2\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}-\bm{a}_k^\top\bm{u}_0|>\epsilon^{-1}\|\bm{u}-\bm{u}_0\|}\right\|,
\end{align*}
{ where in the second inequality we have used the fact that because $h\left(|\bm{a}_k^\top\bm{v}|^2\right)=0$ when $|\bm{a}_k^\top\bm{v}|^2\geq\gamma$ there holds
\begin{align*}
(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\leq \gamma^{t/2},
\end{align*}}
and the last inequality follows from the assumption $\|\bm{u}-\bm{u}_0\|\leq\epsilon^2$. {
Note that in the above calculation, it requires $s\geq 1$. However, when $s=0$, we have
\begin{align*}
\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}\dsone{|\bm{a}_k^\top\bm{u}|\leq{ \sqrt{\beta}},|\bm{a}_k^\top\bm{u}_0|\leq{ \sqrt{\beta}}}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top=0,
\end{align*}
and hence the upper bound still holds.
}
\paragraph{Region (b): $[0,\beta]\times(\beta,\gamma]$ or $(\beta,\gamma]\times [0,\beta]$}
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}\dsone{|\bm{a}_k^\top\bm{u}|^2\leq\beta,\beta<|\bm{a}_k^\top\bm{u}_0|^2\leq\gamma}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \left(\beta^{\frac{s}{2}}+\gamma^{\frac{s}{2}}\right)\gamma^{\frac{t}{2}}
\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}_0|>\sqrt{\beta}\|\bm{u}_0\|}\right\|
\end{align*}
and
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}\dsone{\beta<|\bm{a}_k^\top\bm{u}|^2\leq\gamma,|\bm{a}_k^\top\bm{u}_0|^2\leq\beta}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \left(\beta^{\frac{s}{2}}+\gamma^{\frac{s}{2}}\right)\gamma^{\frac{t}{2}}
\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\beta}\|\bm{u}\|}\right\|.
\end{align*}
\paragraph{Region (c): $[0,\beta]\times(\gamma,\infty)$ or $(\gamma,\infty)\times [0,\beta]$}
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)\dsone{|\bm{a}_k^\top\bm{u}|^2\leq\beta,|\bm{a}_k^\top\bm{u}_0|^2>\gamma}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}_0|>\sqrt{\gamma}\|\bm{u}_0\|}\right\|
\end{align*}
and
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\dsone{|\bm{a}_k^\top\bm{u}|^2>\gamma,|\bm{a}_k^\top\bm{u}_0|^2\leq\beta}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\gamma}\|\bm{u}\|}\right\|.
\end{align*}
\paragraph{Region (d): $(\beta,\gamma]\times(\beta,\gamma]$}
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{ (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)-(\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\right\}\dsone{\beta<|\bm{a}_k^\top\bm{u}|^2\leq\gamma,\beta<|\bm{a}_k^\top\bm{u}_0|^2\leq\gamma}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq 2\gamma^{\frac{s+t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\beta}\|\bm{u}\|}\right\|.
\end{align*}
\paragraph{Region (e): $(\beta,\gamma]\times(\gamma,\infty)$ or $(\gamma,\infty)\times(\beta,\gamma]$}
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{u})^sh\left(|\bm{a}_k^\top\bm{u}|^2\right)\dsone{\beta<|\bm{a}_k^\top\bm{u}|^2\leq\gamma,|\bm{a}_k^\top\bm{u}_0|^2>\gamma}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq\gamma^{\frac{s+t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}_0|>\sqrt{\gamma}\|\bm{u}_0\|}\right\|
\end{align*}
and
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m (\bm{a}_k^\top\bm{u}_0)^sh\left(|\bm{a}_k^\top\bm{u}_0|^2\right)\dsone{|\bm{a}_k^\top\bm{u}|^2>\gamma,\beta<|\bm{a}_k^\top\bm{u}_0|^2\leq\gamma}(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq\gamma^{\frac{s+t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\gamma}\|\bm{u}\|}\right\|.
\end{align*}
Combining the bounds for (a) to (e) and noting that the second term in \eqref{eq:tech1_eq4} can be bounded similarly to the first one yields that
{ \begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m\left\{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top-(\bm{a}_k^\top\bm{u}_0)^s(\bm{a}_k^\top\bm{v}_0)^th\left(|\bm{a}_k^\top\bm{u}_0|^2\right) h\left(|\bm{a}_k^\top\bm{v}_0|^2\right)\bm{a}_k\bm{a}_k^\top\right\}\right\|\\
&\lesssim\underbrace{\epsilon\cdot s\beta^{\frac{s-1}{2}}\gamma^{\frac{t}{2}}+\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left( \epsilon^{-1}e^{-0.49\epsilon^{-2}}+\epsilon\right)}_{\mbox{bound for (a)}}+\underbrace{\left(\beta^{\frac{s}{2}}+\gamma^{\frac{s}{2}}\right)\gamma^{\frac{t}{2}}\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right)}_{\mbox{bound for (b)}}+\underbrace{
\beta^{\frac{s}{2}}\gamma^{\frac{t}{2}}\left(\sqrt{\gamma}e^{-0.49\gamma}+\epsilon\right)}_{\mbox{bound for (c)}}\\
&\quad+\underbrace{\gamma^{\frac{s+t}{2}}\left( \sqrt{\beta}e^{-0.49\beta}+\epsilon\right)}_{\mbox{bound for (d)}}+\underbrace{\gamma^{\frac{s+t}{2}}\left( \sqrt{\gamma}e^{-0.49\gamma}+\epsilon\right)}_{\mbox{bound for (e)}}\\
&\lesssim\epsilon\cdot \max\{s,t\}\gamma^{\frac{t+s}{2}}+\gamma^{\frac{s+t}{2}} \epsilon^{-1}e^{-0.49\epsilon^{-2}}+\gamma^{\frac{s+t+1}{2}}e^{-0.49\beta},\numberthis\label{eq:tech1_eq5}
\end{align*}
where each term in the first inequality corresponds, respectively, to the bounds for (a) to (e) after applying Lemmas~\ref{lem:aux_lemma1} and \ref{lem:aux_lemma2}, and in the second inequality we have used the fact $1<\beta < \gamma$.}
{
By the same splitting scheme, we can similarly show that
\begin{align*}
&\left\|\mean{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(|\bm{a}_k^\top\bm{u}|^2\right) h\left(|\bm{a}_k^\top\bm{v}|^2\right)\bm{a}_k\bm{a}_k^\top}-\mean{(\bm{a}_k^\top\bm{u}_0)^s(\bm{a}_k^\top\bm{v}_0)^th\left(|\bm{a}_k^\top\bm{u}_0|^2\right) h\left(|\bm{a}_k^\top\bm{v}_0|^2\right)\bm{a}_k\bm{a}_k^\top}\right\|\\
&\lesssim\epsilon\cdot \max\{s,t\}\gamma^{\frac{t+s}{2}}+\gamma^{\frac{s+t}{2}} \epsilon^{-1}e^{-0.5\epsilon^{-2}}+\gamma^{\frac{s+t+1}{2}}e^{-0.5\beta}.\numberthis\label{eq:tech1_eq6}
\end{align*}
}Then the proof is complete after combining \eqref{eq:tech1_eq3}, \eqref{eq:tech1_eq5} and \eqref{eq:tech1_eq6} and using the triangle inequality.
\subsection{Proof of Lemma~\ref{lem:tech_lemma12}}
A direct calculation yields that
\begin{align*}
&\left\|\mean{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^th\left(\frac{|\bm{a}_k^\top\bm{u}|^2}{\|\bm{u}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{v}|^2}{\|\bm{v}\|^2}\right)\bm{a}_k\bm{a}_k^\top}-\mean{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^t\bm{a}_k\bm{a}_k^\top}\right\|\\
&=\max_{\|\bm{q}\|=1}\left| \mean{(\bm{a}_k^\top\bm{u})^s(\bm{a}_k^\top\bm{v})^t(\bm{a}_k^\top\bm{q})^2\left( h\left(\frac{|\bm{a}_k^\top\bm{u}|^2}{\|\bm{u}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{v}|^2}{\|\bm{v}\|^2}\right)-1\right)}\right|\\
&\leq \max_{\|\bm{q}\|=1}\mean{|\bm{a}_k^\top\bm{u}|^s|\bm{a}_k^\top\bm{v}|^t|\bm{a}_k^\top\bm{q}|^2\left( 1-h\left(\frac{|\bm{a}_k^\top\bm{u}|^2}{\|\bm{u}\|^2}\right) h\left(\frac{|\bm{a}_k^\top\bm{v}|^2}{\|\bm{v}\|^2}\right)\rb}\\
&\leq \max_{\|\bm{q}\|=1}\mean{|\bm{a}_k^\top\bm{u}|^s|\bm{a}_k^\top\bm{v}|^t|\bm{a}_k^\top\bm{q}|^2\left(\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\beta}\|\bm{u}\|}+\dsone{|\bm{a}_k^\top\bm{v}|>\sqrt{\beta}\|\bm{v}\|}\right)}\\
&\leq \max_{\|\bm{q}\|=1}\left(\mean{\left(|\bm{a}_k^\top\bm{u}|^{2s}|\bm{a}_k^\top\bm{v}|^{2t}|\bm{a}_k^\top\bm{q}|^4\right)}\right)^{1/2}\left(\sqrt{\mean{\dsone{|\bm{a}_k^\top\bm{u}|>\sqrt{\beta}\|\bm{u}\|}}}+\sqrt{\mean{\dsone{|\bm{a}_k^\top\bm{v}|>\sqrt{\beta}\|\bm{v}\|}}}\right)\\
&{ {\lesssim \left(\max_{\|\bm{q}\|=1}\mathbb{E}|\bm{a}_k^\top\bm{q}|^8\right)^{\frac{1}{4}}\left( \mathbb{E}|\bm{a}_k^\top\bm{u}|^{4s}|\bm{a}_k^\top\bm{v}|^{4t} \right)^{\frac{1}{4}} \sqrt{\sqrt{\frac{2}{\pi\beta}}e^{-\frac{\beta}{2}}}}}\\
&{ {\lesssim
\left(\mathbb{E}|\bm{a}_k^\top\bm{u}|^{8s}\right)^{\frac{1}{8}} \left(\mathbb{E}|\bm{a}_k^\top\bm{v}|^{8t}\right)^{\frac{1}{8}}e^{-0.25\beta}}}\\
&\lesssim ((8s)!!)^{1/8}((8t)!!)^{1/8}\|\bm{u}\|^s\|\bm{v}\|^t\cdot e^{-0.25\beta},
\end{align*}
{ {where the fourth inequality follows from H\"older's inequality and the fact $$\mean{\dsone{\{|\bm{a}_k^\top\bm{u}|>\sqrt{\beta}\|\bm{u}\|\}}}=2\int_{\sqrt{\beta}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}}dt\leq \sqrt{\frac{2}{\pi}}\int_{\sqrt{\beta}}^{\infty}\frac{t}{\sqrt{\beta}}e^{-\frac{t^2}{2}}dt\leq \sqrt{\frac{2}{\pi\beta}}e^{-\frac{\beta}{2}}$$ as well as $\beta>1$, and the fifth and sixth inequalities hold as the $2k$-th moment of a standard Gaussian variable is $(2k-1)!!\leq(2k)!!$.
}}
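The Gaussian tail estimate displayed above can be checked directly: for a standard normal $g$ one has $\Pr(|g|>\sqrt{\beta})=\mathrm{erfc}(\sqrt{\beta/2})$, and the derivation shows the bound $\sqrt{2/(\pi\beta)}\,e^{-\beta/2}$ dominates it for every $\beta>0$. A short standard-library verification:

```python
import math

def gaussian_tail(beta):
    """Exact P(|g| > sqrt(beta)) for a standard normal g."""
    return math.erfc(math.sqrt(beta / 2.0))

def tail_bound(beta):
    """The bound sqrt(2/(pi*beta)) * exp(-beta/2) from the proof."""
    return math.sqrt(2.0 / (math.pi * beta)) * math.exp(-beta / 2.0)

# The bound should dominate the exact tail at every tested beta.
for beta in (1.5, 2.0, 4.0, 10.0):
    assert gaussian_tail(beta) <= tail_bound(beta)
```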
\subsection{Proof of Lemma~\ref{lem:tech_lemma11}}
{ Noting that
\begin{align*}
\frac{1}{m}\|\bm{y}\|_1 = \frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{x})^2=\left|\bm{x}^\top \left(\frac{1}{m}\sum_{k=1}^m\bm{a}_k\bm{a}_k^\top\right)\bm{x}\right|,
\end{align*}
it follows from Lemma~\ref{lem:aux_lemma1} that $\frac{1}{2}\|\bm{x}\|^2\leq\frac{1}{m}\|\bm{y}\|_1\leq2\|\bm{x}\|^2$ holds with probability $1-e^{-\Omega(m)}$ provided $m\gtrsim n$.} Thus, on the same event, we have
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\left[ h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)-h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\right]\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \left\|\frac{1}{m}\sum_{k=1}^m|\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\left|h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)-h\left(\frac{|\bm{a}_k^\top\bm{x}|^2}{\|\bm{x}\|^2}\right)\right|\bm{a}_k\bm{a}_k^\top\right\| \\
&\leq \left\|\frac{1}{m}\sum_{k=1}^m|\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t h\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\cdot\dsone{|\bm{a}_k^\top\bm{z}|^2<\gamma\|\bm{z}\|^2}\cdot\dsone{\frac{\beta}{2}\|\bm{x}\|^2\leq |\bm{a}_k^\top\bm{x}|^2\leq2\gamma\|\bm{x}\|^2 }\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \|\bm{z}\|^s\|\bm{x}\|^t\cdot2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\cdot \left\|\frac{1}{m}\sum_{k=1}^m \mathbf{1}_{\{|\bm{a}_k^\top\bm{x}|\geq\sqrt{\frac{\beta}{2}}\|\bm{x}\| \}}\bm{a}_k\bm{a}_k^\top\right\|\\
&\lesssim \|\bm{z}\|^s\|\bm{x}\|^t\cdot2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.245\beta}+\epsilon\right),
\end{align*}
where { in the third inequality we have used the facts
\begin{align*}
|\bm{a}_k^\top\bm{z}|^sh\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\cdot\dsone{|\bm{a}_k^\top\bm{z}|^2<\gamma\|\bm{z}\|^2}\leq\gamma^{\frac{s}{2}}\|\bm{z}\|^s
\end{align*}
and
\begin{align*}
|\bm{a}_k^\top\bm{x}|^t\dsone{\frac{\beta}{2}\|\bm{x}\|^2\leq |\bm{a}_k^\top\bm{x}|^2\leq2\gamma\|\bm{x}\|^2 }\leq 2^{\frac{t}{2}}\gamma^{\frac{t}{2}} \|\bm{x}\|^t\mathbf{1}_{\{|\bm{a}_k^\top\bm{x}|\geq\sqrt{\frac{\beta}{2}}\|\bm{x}\| \}},
\end{align*}
}
and
the last inequality holds with probability exceeding $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$ (see Lemma~\ref{lem:aux_lemma2}).
\subsection{Proof of Lemma~\ref{lem:tech_lemma2}}
It follows from Lemma~\ref{lem:aux_lemma1} that $\frac{1}{2}\|\bm{x}\|^2\leq\frac{1}{m}\|\bm{y}\|_1$ holds with probability $1-e^{-\Omega(m)}$ provided $m\gtrsim n$. Thus, on the same event, we have
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \left\|\frac{1}{m}\sum_{k=1}^m |\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\dsone{\beta\|\bm{z}\|^2<|\bm{a}_k^\top\bm{z}|^2<\gamma\|\bm{z}\|^2} \cdot\dsone{|\bm{a}_k^\top\bm{x}|^2<2\gamma\|\bm{x}\|^2}\bm{a}_k\bm{a}_k^\top \right\| \\
&\leq\|\bm{z}\|^s\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left\|\frac{1}{m}\sum_{k=1}^m \dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}\bm{a}_k\bm{a}_k^\top\right\| \\
&\lesssim \|\bm{z}\|^s\|\bm{x}\|^t\cdot2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right),
\end{align*}
where the last inequality holds with probability exceeding $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$; see Lemma~\ref{lem:aux_lemma2}.
\subsection{Proof of Lemma~\ref{lem:tech_lemma3}}
{ Firstly, similar to the proof of Lemma~\ref{lem:tech_lemma2},} $\frac{1}{2}\|\bm{x}\|^2\leq\frac{1}{m}\|\bm{y}\|_1$ holds with probability $1-e^{-\Omega(m)}$ provided $m\gtrsim n$. Thus, we have
\begin{align*}
&\left\|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\bm{z}\bm{a}_k^\top\right\|\\
&\leq \max_{\|\bm{u}\|=\|\bm{v}\|=1}\frac{1}{m}\sum_{k=1}^m |\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right)\mathbf{1}_{\{\beta\|\bm{z}\|^2<|\bm{a}_k^\top\bm{z}|<\gamma\|\bm{z}\|^2\}}\cdot \dsone{|\bm{a}_k^\top\bm{x}|^2<2\gamma\|\bm{x}\|^2}\cdot|\bm{u}^\top\bm{z}|\cdot|\bm{a}_k^\top\bm{v}|\\
&\leq \|\bm{z}\|^{s+1}\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\
\gamma^{\frac{s+t}{2}}\cdot \max_{\|\bm{v}\|=1}\frac{1}{m}\sum_{k=1}^m\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}|\bm{a}_k^\top\bm{v}| \\
&\leq \|\bm{z}\|^{s+1}\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\
\gamma^{\frac{s+t}{2}}\cdot\max_{\|\bm{v}\|=1} \sqrt{\frac{1}{m}\sum_{k=1}^m\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}|\bm{a}_k^\top\bm{v}|^2}\\
& { \leq\|\bm{z}\|^{s+1}\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\
\gamma^{\frac{s+t}{2}}\sqrt{\sqrt{\beta}e^{-0.49\beta}+\epsilon}}\\
&\lesssim \|\bm{z}\|^{s+1}\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\
\gamma^{\frac{s+t+1}{2}}\cdot \left( e^{-0.245\beta}+\sqrt{\epsilon}\right),
\end{align*}
where the fourth inequality holds with probability exceeding $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$, which follows from Lemma~\ref{lem:aux_lemma2},
{
\begin{align*}
\frac{1}{m}\sum_{k=1}^m\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}|\bm{a}_k^\top\bm{v}|^2 &=\left|\bm{v}^\top\left( \frac{1}{m}\sum_{k=1}^m\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}\bm{a}_k\bm{a}_k^\top\right)\bm{v}\right|\\
&\leq \left\| \frac{1}{m}\sum_{k=1}^m\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}\bm{a}_k\bm{a}_k^\top\right\|\\
&\leq \sqrt{\beta}e^{-0.49\beta}+\epsilon,
\end{align*}
}
{ and the last inequality follows from the facts $1<\beta<\gamma$ and $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$}.
\subsection{Proof of Lemma~\ref{lem:tech_lemma4}}
{ It follows from Lemma~\ref{lem:aux_lemma1} that $\frac{1}{2}\|\bm{x}\|^2\leq\frac{1}{m}\|\bm{y}\|_1$ holds with probability $1-e^{-\Omega(m)}$ provided $m\gtrsim n$}. Then simple algebra yields that
\begin{align*}
&\left|\frac{1}{m}\sum_{k=1}^m(\bm{a}_k^\top\bm{z})^s(\bm{a}_k^\top\bm{x})^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right) \right| \\
&\leq \frac{1}{m}\sum_{k=1}^m|\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t g\left(\frac{|\bm{a}_k^\top\bm{z}|^2}{\|\bm{z}\|^2}\right) h\left(\frac{m|\bm{a}_k^\top\bm{x}|^2}{\|\bm{y}\|_1}\right)\\
&\leq \frac{1}{m}\sum_{k=1}^m|\bm{a}_k^\top\bm{z}|^s|\bm{a}_k^\top\bm{x}|^t \dsone{\beta\|\bm{z}\|^2\leq |\bm{a}_k^\top\bm{z}|^2\leq \gamma\|\bm{z}\|^2} \dsone{|\bm{a}_k^\top\bm{x}|^2<2\gamma\|\bm{x}\|^2}\\
&\leq \gamma^{\frac{s-2}{2}}\|\bm{z}\|^{s-2}\cdot 2^{\frac{t}{2}}\gamma^{\frac{t}{2}}\|\bm{x}\|^t\frac{1}{m}\sum_{k=1}^m|\bm{a}_k^\top\bm{z}|^2\dsone{|\bm{a}_k^\top\bm{z}|>\sqrt{\beta}\|\bm{z}\|}\\
&\lesssim \|\bm{z}\|^s\|\bm{x}\|^t\cdot 2^{\frac{t}{2}}\gamma^{\frac{s+t}{2}}\left(\sqrt{\beta}e^{-0.49\beta}+\epsilon\right),
\end{align*}
where the last inequality holds with probability exceeding $1-e^{-\Omega(m\epsilon^2)}$ provided $m\gtrsim \epsilon^{-2}\log\epsilon^{-1}\cdot n$; see Lemma~\ref{lem:aux_lemma2}.
\section{Geometric landscape of the new function}\label{sec:results}
In this section we present the detailed geometric landscape of $f(\bm{z})$. Differing from the partition in \cite{SQW:FCM:18}, we decompose $\mathbb{R}^n$ into five non-overlapping regions (see Figure~\ref{fig2}):
\begin{itemize}
\item $\mathcal{R}_1:= \left\{\bm{z}:~\dist(\bm{z},\bm{x})\leq \frac{1}{5}\|\bm{x}\|\right\}$,
\item $\mathcal{R}_{2a}:=\{\bm{z}:~\frac{1}{3}-\delta<\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}<\frac{99}{100}\mbox{ and }\dist(\bm{z},\bm{x})>\frac{1}{5}\|\bm{x}\| \}$,
\item $\mathcal{R}_{2b}:=\{\bm{z}:~\frac{99}{100}\leq \frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\leq\frac{101}{100}\mbox{ and }\dist(\bm{z},\bm{x})>\frac{1}{5}\|\bm{x}\|\}$,
\item $\mathcal{R}_{2c}:=\{\bm{z}:~\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}>\frac{101}{100}\mbox{ and }\dist(\bm{z},\bm{x})>\frac{1}{5}\|\bm{x}\|\}$,
\item $\mathcal{R}_{3}:=\{ \bm{z}:~0<\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}\leq \frac{1}{3}-\delta\}$,
\end{itemize}
where $\delta$ is a fixed constant in $(0,\frac{1}{100}]$. The properties of $f(\bm{z})$ over these five regions are summarized in the following five theorems.
\begin{theorem}\label{thm:R1}
With probability at least $1-e^{-\Omega(m)}$,
\begin{align*}
\lambda_{\min}\left(\nabla^2f(\bm{z})\right)\ge \frac{1}{25}\|\bm{x}\|^2
\end{align*}
holds uniformly for all $ \bm{z}\in\mathcal{R}_1$ provided $m\gtrsim n$.
\end{theorem}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{partition.eps}
\caption{ Partition of $\mathbb{R}^2$: $\bm{x}=[\pm1,0]^\top$.}\label{fig2}
\end{figure}
\begin{theorem}\label{thm:R2a}
With probability at least $1-e^{-\Omega(m)}$, all critical points in $\mathcal{R}_{2a}$ must lie in the subregion defined by
\begin{align*}
\frac{1}{3}-\delta<\frac{\|\bm{z}\|^2}{\|\bm{x}\|^2}<\frac{1}{3}+\delta\quad\mbox{and}\quad
|\bm{z}^\top\bm{x}|<\delta\|\bm{x}\|^2\numberthis\label{eq:SubofR2a}
\end{align*}
provided $m\gtrsim n$. Moreover, with probability exceeding $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{x}^\top\nabla^2 f(\bm{z})\bm{x} \leq -3\|\bm{x}\|^4\quad\mbox{and}\quad\bm{z}^\top\nabla^2 f(\bm{z})\bm{z}\geq \|\bm{x}\|^4
\end{align*}
hold uniformly for all $\bm{z}$ in { the subregion} \eqref{eq:SubofR2a} provided $m\gtrsim n$. \end{theorem}
\begin{theorem}\label{thm:R2b}
With probability at least $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{z}^\top\nabla f(\bm{z})\geq \frac{9}{1000}\|\bm{x}\|^4
\end{align*}
holds uniformly for all $\bm{z}\in\mathcal{R}_{2b}$ provided $m\gtrsim n$.
\end{theorem}
\begin{theorem}\label{thm:R2c}
With probability at least $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{z}^\top\nabla f(\bm{z})\geq \frac{49}{1000}{\|\bm{z}\|^4}
\end{align*}
holds uniformly for all $\bm{z}\in\mathcal{R}_{2c}$ provided $m\gtrsim n$.
\end{theorem}
\begin{theorem}\label{thm:R3}
With probability at least $1-e^{-\Omega(m)}$,
\begin{align*}
\bm{z}^\top\nabla f(\bm{z}) \leq -5\delta\|\bm{z}\|^2\|\bm{x}\|^2
\end{align*}
holds uniformly for all $\bm{z}\in\mathcal{R}_3$ provided $m\gtrsim n$.
\end{theorem}
The proofs of the above theorems are deferred to Section~\ref{sec:main_proofs}. { We can check the results in these theorems by conducting a simple numerical test: 1) randomly generate a set of standard Gaussian vectors $\{\bm{a}_k\}_{k=1}^m\subset\mathbb{R}^n$, 2) fix the measurement vectors and randomly { generate} different $\bm{z}$'s for each region, and 3) check whether the result in each theorem holds or not. For conciseness, we report in Table~\ref{table:main} the computational results in regions $\mathcal{R}_1$, $\mathcal{R}_{2c}$ and $\mathcal{R}_3$ (with three randomly generated $\bm{z}$ in each region). }
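Step 2) of this test requires deciding which region a sampled $\bm{z}$ belongs to. A minimal sketch of such a classifier — it assumes $\dist(\bm{z},\bm{x})=\min(\|\bm{z}-\bm{x}\|,\|\bm{z}+\bm{x}\|)$, the usual sign-invariant distance in phase retrieval, and the helper name \texttt{classify} is ours:

```python
import numpy as np

def classify(z, x, delta=0.01):
    """Label the region containing z, assuming
    dist(z, x) = min(||z - x||, ||z + x||) (distance up to sign)."""
    z, x = np.asarray(z, float), np.asarray(x, float)
    dist = min(np.linalg.norm(z - x), np.linalg.norm(z + x))
    r = np.dot(z, z) / np.dot(x, x)          # ||z||^2 / ||x||^2
    if dist <= np.linalg.norm(x) / 5:
        return "R1"
    if 0 < r <= 1/3 - delta:
        return "R3"
    if 1/3 - delta < r < 99/100:
        return "R2a"
    if 99/100 <= r <= 101/100:
        return "R2b"
    return "R2c"

x = np.array([1.0, 0.0])
assert classify(x, x) == "R1"                # the signal itself
assert classify(0.1 * x, x) == "R3"          # near the origin
assert classify(2.0 * x[::-1], x) == "R2c"   # large, far from +/- x
```

Since $\mathcal{R}_1$ is the only region allowing $\dist(\bm{z},\bm{x})\leq\frac{1}{5}\|\bm{x}\|$, checking it first makes the remaining tests on $\|\bm{z}\|^2/\|\bm{x}\|^2$ unambiguous.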
\begin{table}[t!]
\centering
\caption{ { This table numerically checks the results in the theorems:} $\bm{x}=[1,\cdots,1]^\top\in\mathbb{R}^{n}$ with $n=128$, $\{\bm{a}_k\}_{k=1}^m\subset\mathbb{R}^n$ ($m=6n$) are independent standard Gaussian vectors. The vectors $\bm{z}$ in $\mathcal{R}_1$, $\mathcal{R}_{2c}$, and $\mathcal{R}_3$ are generated uniformly at random (for $\mathcal{R}_{2c}$ we consider its intersection with the ball $\{\bm{z}:~\|\bm{z}\|\leq2\|\bm{x}\|\}$). The parameter $\delta$ which defines $\mathcal{R}_3$ is chosen to be $1/100$. The theoretical bounds refer to those given in Thms. \ref{thm:R1}, \ref{thm:R2c}, and \ref{thm:R3}. }
\label{table:main}
\makegapedcells
\setcellgapes{3pt}
\begin{tabular}{c|ccc|ccc|ccc}
\hline
& \multicolumn{3}{c|}{$\lambda_{\min}\left(\nabla^2f(\bm{z})\right)$ in $\mathcal{R}_1$} & \multicolumn{3}{c|}{$\bm{z}^\top\nabla f(\bm{z})$ in $\mathcal{R}_{2c}$} & \multicolumn{3}{c}{$\bm{z}^\top\nabla f(\bm{z})$ in $\mathcal{R}_3$} \\
\hline
{Numerical results} & 94.43 & 96.39 & 108.83 & 2.87e5 &2.61e5 &6.87e6 & -2184.41 &-2203.39 &-95.00\\
\hline
\multirow{ 2}{*}{Theoretical bounds}& 5.12 & 5.12 & 5.12 & 2632.14 & 3233.31 &55528.5 & -73.63 & -81.98 & -2.55\\
& \multicolumn{3}{c|}{{ (lower bound)}} & \multicolumn{3}{c|}{{ (lower bound)}} & \multicolumn{3}{c}{{ (upper bound)}}
\\
\hline
\end{tabular}
\end{table}
From the five theorems, it is evident that critical points of $f(\bm{z})$ can only occur in $\mathcal{R}_1$ and $\mathcal{R}_{2a}$, since at critical points one has $\nabla f(\bm{z})=0$. Noticing that $\pm\bm{x}\in\mathcal{R}_1$, $f(\bm{z})\geq 0$ and $f(\pm\bm{x})=0$, by Theorem~\ref{thm:R1}, we know that $\pm\bm{x}$ are local minimizers. Theorem~\ref{thm:R2a} implies that at any critical point in $\mathcal{R}_{2a}$, the Hessian of $f(\bm{z})$ has a negative directional curvature as well as a positive directional curvature. Thus, critical points in $\mathcal{R}_{2a}$ must be ridable saddle points \cite{SQW:FCM:18}. Putting it all together, we can establish Theorem~\ref{thm:main} and show that every local minimizer is a global minimizer.
Additionally, though $f(\bm{z})$ is singular at $\bm{z}=0$, Theorem~\ref{thm:R3} shows that local minimizers of $f(\bm{z})$ cannot exist around $0$. Moreover, it also implies that searching along the gradient descent direction at any point in $\mathcal{R}_{3}$ will move the point further away from the origin.
\section{Introduction}
Gravity, as the oldest force known to man, has also had the longest history of trials and tribulations along the road to discovering its nature. Over the course of its development, it has
been witnessing a myriad of attempts to unlock its notoriously difficult and mysterious behavior
from the largest to the smallest of distances. The challenge has been truly spectacular. Since
the first formulation of the gravitational field by Newton and centuries later by Einstein in
the form of the theory of general relativity (GR), the scenery is still cluttered with debris left from
various attempts to understand its hard to grasp nature. Even today, the challenge is as fresh and
as interesting as ever. Not surprisingly, building on GR, the last couple of decades have been particularly rich in new ideas and approaches which attempt to formulate the gravitational field in such a way as to explain such recently observed phenomena as the accelerated expansion of the universe, galaxy rotation curves and even the birth of the universe. One such attempt has been surfacing over the past few years in the form of what is now known as massive gravity which, as the name suggests, is a theory with a massive graviton as the building block of the gravitational field.
The notion of a massive graviton has been a tempting and challenging premise in theoretical physics. One of the main motivations for having a massive graviton is that gravity could become weak at large distances, thus mimicking the effect of an accelerating, expanding universe. The problem is not as easy as it seems. The first attempt to build a theory for a massive spin-2 field goes back to 1939, when Fierz and Pauli (FP) developed a linear theory for a massive graviton \cite{FP}. It took thirty years for physicists to find out that the theory does not reduce to standard GR when one takes the limit $m\rightarrow0$ \cite{VDVZ}. The problem was soon addressed by Vainshtein \cite{Vain}, who proposed that adding non-linearities to the action can cure the problem and screen the effect of the helicity-0 component of the massive graviton at solar-system scales. The simplest possible non-linearity one can add to the FP action is obtained by replacing the linear kinetic term for the helicity-2 field with the fully non-linear, and
still ghost-free, Einstein-Hilbert action. However, the
resulting action was shown to have a ghost instability, discovered by Boulware and Deser \cite{BD}. The problem arises because the lapse function is no longer a Lagrange multiplier. This new problem can then be solved if one appropriately adds interaction terms to the Lagrangian and again makes the lapse function a Lagrange multiplier order by order in non-linearities \cite{arkani,cremine}.
In this paper, after a brief review of the theory, we study the curvature perturbation around an accelerating solution and obtain the background equations. The second order Lagrangian can then be obtained using the perturbed FRW metric. It is an immediate observation from the form of the second order Lagrangian that the tensor, vector and scalar modes do not couple. The tensor mode describes a massive gravitational wave with a time-dependent mass parameter which we shall obtain in section \ref{tensorsec}. In section \ref{vectorsec} we consider the vector mode and show that the vector part of the action vanishes at superhorizon scales; we subsequently obtain the scalar mode in section \ref{scalarsec}. Two of the scalar modes are non-dynamical and can be integrated out immediately, leaving three degrees of freedom in the scalar sector. We show that in the superhorizon limit, where one of the degrees of freedom does not play a role, the curvature perturbation can be obtained analytically and that there is a vast region of the parameter space over which the curvature perturbation grows on superhorizon scales.
\section{A brief review}
Recently, a two-parameter theory of massive gravity has been proposed \cite{dRGT}. With the aid of a new re-summation of the non-linear interactions \cite{dRG}, de Rham, Gabadadze and Tolley (dRGT) have constructed a theory which is free of ghost instabilities in the decoupling limit. The theory has been proven to be ghost-free in the full non-linear regime and is hence a reliable effective field theory of a massive spin-2 field \cite{mir}. In all fairness, there have also been criticisms of dRGT in that superluminal shock-wave solutions have been shown to appear in the theory \cite{deser}. However, it was subsequently shown that such shock waves are unstable, with an arbitrarily fast decay time \cite{jcap}.
The Lagrangian for dRGT non-linear massive gravity can be written as
\begin{align}\label{1}
\mathcal{L}= \frac{M_{pl}^2}{2} \sqrt{-g} \bigg( R+m^2\mathcal{U}(\mathcal{K}) \bigg)+ \sqrt{- g} \mathcal{L}_m(g_{_{\mu \nu}},\psi) \,,
\end{align}
where the non-linear interactions are collected in $\mathcal{U}$
\begin{align}\label{2}
\mathcal U(\mathcal{K})=\mathcal{U}_2+\alpha_3\ \mathcal{U}_3+\alpha_4\ \mathcal{U}_4,
\end{align}
which consists of polynomials of various traces of the matrix $\mathcal{K}^\mu_\nu(g,\phi^a)=\delta^\mu_\nu-\sqrt{g^{\mu\alpha}f_{\alpha\nu}}$ where the fiducial metric is defined as $f_{\alpha\nu}=\partial_\alpha \phi^a
\partial_\nu \phi^b \eta_{ab}$ and $ \phi^a$ are the Stuckelberg fields responsible for the breaking of general covariance
\begin{subequations}\label{eq3}
\begin{align}\label{3}
\mathcal{U}_2&=[\mathcal{K}]^2-[\mathcal{K}^2]\,,\\
\mathcal{U}_3&=[\mathcal{K}]^3-3 [\mathcal{K}][\mathcal{K}^2]+2[\mathcal{K}^3]\,,\\
\mathcal{U}_4&=[\mathcal{K}]^4-6[\mathcal{K}^2][\mathcal{K}]^2+8[\mathcal{K}^3]
[\mathcal{K}]+3[\mathcal{K}^2]^2-6[\mathcal{K}^4],
\end{align}
\end{subequations}
where the rectangular brackets denote
traces, $[\mathcal{K}]\equiv {\rm Tr} (\mathcal{K})= \mathcal{K}^\mu_\mu$.
The first term coincides with the Fierz-Pauli mass term at the linear level, and the last two terms are non-linear interactions which ensure that the theory has no ghost.
One of the interesting properties of the theory is that if one assumes the Stuckelberg fields to be in the unitary gauge, defined as $\phi^a=\delta^a_\mu x^\mu$, the theory does not have a non-trivial flat FRW solution \cite{amico}. However, the theory has an open FRW solution with an additional consideration in that one must transform the field space to the open slicing of the Minkowski metric \cite{gumru1}. The cosmological perturbations around such a solution were also considered in \cite{mukohyama}, where the authors found that the scalar, vector and tensor modes actually decouple and, as a result, the vector mode does not play any role in the theory. The tensor mode describes a massive gravitational wave with a time-dependent mass. Cosmological evidence for the dRGT theory is also considered in \cite{cosmo}. For a review of the theoretical aspects of the theory see \cite{hinter}.
One can also let the non-dynamical metric of the theory have a kinetic term and hence construct a bimetric theory \cite{bimet1}, which has been proven to be ghost-free \cite{bimetr1}. The cosmological aspects of such a bimetric theory are investigated in \cite{cosbi}. One may continue the procedure to build a multimetric gravity theory with dRGT non-linear mass terms. In \cite{mult1}, the authors show that the theory is ghost-free in the metric formulation only when the interactions between gravitons are not cyclic. However, in \cite{mult2} the authors show that in the vielbein formulation of the theory any interactions are allowed. Also, in \cite{khos} the authors show that the problem of having no flat FRW solution persists in multimetric theories; in fact, if one of the metrics is assumed to be flat, all of the other metrics become flat.
One solution to the problem of the non-existence of a flat FRW solution in the theory is to extend the theory in such a way that the graviton mass becomes a function of some scalar field $\varphi$ \cite{masva}. The cosmological solutions and dynamical analysis of such theories are considered in \cite{varcos}. Another way to extend the theory is to couple a scalar field to the mass Lagrangian such that the resulting new Lagrangian has an extra symmetry. In particular, one can couple the scalar field to the Lagrangian so as to achieve dilatation invariance on the field space \cite{quasi}
\begin{align}\label{5}
\sigma\rightarrow\sigma-M_{pl}\alpha, \qquad \phi^a\rightarrow\textmd{e}^{\alpha}\phi^a.
\end{align}
In the Einstein frame one can write the new Lagrangian as
\begin{align}\label{3.1}
\mathcal{L}= \frac{M_{pl}^2}{2} \sqrt{-g} \bigg[ R -\frac{\omega}{M_{pl}^2} g^{_{\mu \nu}}\partial_\mu\sigma\partial_\nu\sigma+m^2\mathcal{U}(\tilde{\mathcal{K}}) \bigg]+ \sqrt{- g} \mathcal{L}_m(g_{_{\mu \nu}},\psi) \,,
\end{align}
where $\tilde{\mathcal{K}}^\mu_\nu$ is now defined as
\begin{align}\label{4}
\tilde{\mathcal{K}}^\mu_\nu(g,\phi^a)=\delta^\mu_\nu-\textmd{e}^{\sigma/M_{pl}}\sqrt{g^{\mu\alpha}f_{\alpha\nu}}.
\end{align}
Only the pure geometric part of the above action is invariant under the transformation \eqref{5}, hence the name Quasi-Dilaton (QD) for the scalar field \cite{quasi}. This theory has been proven to be free of ghosts in the Minkowski background if $\omega>6$. The most interesting feature of this theory is that it admits a flat de Sitter solution even if the Stuckelberg fields are in the unitary gauge \cite{quasi}. In our notation, the de Sitter solution is also stable in the decoupling limit for
\begin{align}\label{5.5}
\alpha_3\neq0,\quad 0<\alpha_4<\f{\alpha_3^2}{2},\quad 0\leq\omega<6.
\end{align}
\section{The Background equation}
Let us assume that the background metric is of the form
\begin{align}\label{6}
ds^2=-N(t)^2dt^2+a(t)^2\big(dx^2+dy^2+dz^2\big),
\end{align}
and the Stuckelberg fields take the form
$$\phi^0=f(t),\qquad \phi^i=\delta^i_\mu x^\mu.$$
Note that we will work in the unitary gauge. However, in order to obtain the Stuckelberg equation from the action, we assume the above form for the Stuckelberg fields and set $f(t)=t$ at the end. We also assume that the QD field depends only on $t$. By varying the action \eqref{3.1} with respect to $f(t)$, one obtains the constraint equation
\begin{align}\label{7}
-9m^2M_{pl}^2 e^{\sigma/M_{pl}}\left(a-e^{\sigma/M_{pl}}\right)\left[\f{4}{3}\alpha_4e^{2\sigma/M_{pl}}-\left(\f{8}{3}\alpha_4-
\alpha_3\right)ae^{\sigma/M_{pl}}+\left(\f{4}{3}\alpha_4+\alpha_3+\f{1}{3}\right)a^2\right]=k_1,
\end{align}
where $k_1$ is an integration constant.
We are interested in the set of equations for which $k_1=0$. In this case one can solve the above equation by using the ansatz
$$e^{\sigma/M_{pl}}=Xa,$$
in the equation which results in
\begin{align}\label{8}
X=\f{3\alpha_3+8\alpha_4\pm\sqrt{9\alpha_3^2-16\alpha_4}}{8\alpha_4}, \qquad X=1.
\end{align}
The solution $X=1$ is not acceptable because in this case the effective cosmological constant vanishes, and the consistency of the theory does not allow one to have a flat cosmological solution \cite{quasi}. Substituting the above ansatz into the mass part of the Lagrangian, one can write the Friedmann and Raychaudhuri equations as
\begin{align}
&3H^2-\f{\omega}{M_{pl}^2}\f{\dot{\sigma}^2}{2N^2}=3M_f,\label{9}\\
&2\f{\dot{H}}{N}+3H^2+\f{\omega}{M_{pl}^2}\f{\dot{\sigma}^2}{2N^2}=3M_g,\label{10}
\end{align}
where we define $H=\dot{a}/(aN)$ and
\begin{align}\label{10.05}
M_f&=\frac{m^2}{16 \alpha _4^2} \bigg[3 \alpha _3^2 \left(3 (X-1) \alpha _3-1\right)+4 \left(1-4 (X-1) \alpha _3\right) \alpha _4\bigg],\\
M_g&=\frac{m^2}{3} \bigg[-6-X (-6-3 r+X+2 r X)+3 (X-1) (4+(-2+r (X-3)) X) \alpha _3+12 (X-1)^2 (r X-1) \alpha _4\bigg],
\end{align}
and $r\equiv r(t)=a(t)/N(t)$.
We will use these forms of the background equations in the second order Lagrangian. Substituting equation \eqref{8} in \eqref{9} leads to a constant Hubble parameter, showing a de Sitter solution
\begin{align}\label{10.1}
H^2\equiv H_0^2=\frac{6m^2 (1-X) \big[2+4 \alpha_3+4 \alpha_4+X (-1+(X-5) \alpha_3+4 (X-2) \alpha_4)\big]}{\omega -6 }.
\end{align}
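The constancy of the Hubble parameter can also be seen directly. As a brief cross-check (a sketch using only the background definitions above), the ansatz $e^{\sigma_0/M_{pl}}=Xa$ gives
\begin{align}
\sigma_0=M_{pl}\ln(Xa)\quad\Rightarrow\quad\dot{\sigma}_0=M_{pl}\,\f{\dot{a}}{a}=M_{pl}NH,
\end{align}
so the Friedmann equation \eqref{9} reduces to $3H^2-\f{\omega}{2}H^2=3M_f$, i.e. $H^2=6M_f/(6-\omega)$. Since $M_f$ depends only on $X$, $\alpha_3$ and $\alpha_4$, the Hubble parameter is constant, in accord with the $\omega-6$ denominator of \eqref{10.1} once \eqref{10.05} is used.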
From equation \eqref{10} we obtain that $r$ should be a constant given by
\begin{align}\label{11}
r=\frac{12 (1+2 \alpha_3+2 \alpha_4) \omega +4 X^2 (1+6 \alpha_3+12 \alpha_4) (3+\omega )-3 X^3 (\alpha_3+4 \alpha_4) (6+\omega )-3 X (1+3 \alpha_3+4 \alpha_4) (6+5 \omega )}{X (\omega-6)\big(3+9 \alpha_3+12 \alpha_4+3 X^2 (\alpha_3+4 \alpha_4)-2 X (1+6 \alpha_3+12 \alpha_4)\big)}.
\end{align}
The scalar field equation is satisfied automatically by plugging in equations \eqref{8}, \eqref{10.1} and \eqref{11}.
\section{Second order Lagrangian}
In this section we study perturbations signified by
\begin{align}
ds^2=-N(t)^2\big[1+2\phi(t,x,y,z)\big]dt^2+2N(t)a(t)\beta_i (t,x,y,z)dtdx^i +a(t)^2\big[\delta_{ij}+h_{ij}(t,x,y,z)\big]dx^i dx^j,
\end{align}
where $\phi$, $\beta_i$ and $h_{ij}$ are the perturbation variables of the FRW metric. The perturbation of the Stuckelberg fields in unitary gauge is
\begin{align}
\phi^a=x^a+\pi^a(t,x,y,z),
\end{align}
and the perturbation of the dilaton field about the background solution has the form
\begin{align}
\sigma=\sigma_0(t)+\zeta(t,x,y,z).
\end{align}
Now, we consider the infinitesimal coordinate transformation
\begin{align}
x^\mu \rightarrow x^\mu+\xi^\mu(t,x,y,z),
\end{align}
which leads to the change of the perturbed quantities as
\begin{subequations}
\begin{align}
\phi& \rightarrow \phi-\f{1}{N}\partial_t(N\xi^0),\\
\beta_i& \rightarrow \beta_i+\f{N}{a} \partial_i\xi^0-\f{a}{N}\dot{\xi}_i,\\
h_{ij} &\rightarrow h_{ij}-\partial_i\xi_j - \partial_j \xi_i-2NH\xi^0\delta_{ij},\\
\pi^a& \rightarrow \pi^a-\xi^a,\\
\zeta& \rightarrow \zeta-\dot{\sigma}_0 \xi^0,
\end{align}
\end{subequations}
where a dot denotes a time derivative.
Using the perturbations of the Stuckelberg fields one can construct the gauge invariant quantities using the perturbed Stuckelberg fields in the following manner
\begin{subequations}
\begin{align}
\Phi &=\phi-\f{1}{N}\partial_t(N\pi^0) ,\\
\mc{B}_i&=\beta_i+\f{N}{a}\partial_i\pi^0-\f{a}{N}\dot{\pi}_i,\\
\mc{H}_{ij}&=h_{ij}-\partial_i\pi_j-\partial_j\pi_i-2NH\pi^0\delta_{ij},\\
\mc{Z}&=\zeta-\dot{\sigma}_0 \pi^0.
\end{align}
\end{subequations}
We may decompose the gauge invariant vector and tensor parts as
\begin{subequations}
\begin{align}
\mc{B}_i&=\partial_i \beta+S_i,\\
\mc{H}_{ij}&=2\psi\delta_{ij}+\partial_i\partial_j E+\f{1}{2}(\partial_i F_j+\partial_j F_i)+\gamma_{ij},
\end{align}
\end{subequations}
where
\begin{align}
& \partial^i S_i=0=\partial^i F_i,\nonumber\\
&\partial^i \gamma_{ij}=0=\delta^{ij}\gamma_{ij}.
\end{align}
We note that there are four degrees of gauge freedom because of the coordinate transformation, two of which represent the scalar part and the others relate to the vector part. One may fix the gauge freedom by the choice
\begin{align}
\pi^0=0, \qquad \pi^i=0.
\end{align}
Note that with this gauge fixing all the gauge invariant perturbation variables become equal to the original ones, which simplifies our calculations.
Also note that the above gauge fixing corresponds to the use of the unitary gauge, so the fiducial metric takes the form
\begin{align}
f_{\mu\nu}=\eta_{\mu\nu}.
\end{align}
The components of the $f^\mu_\nu=g^{\mu\rho}f_{\rho\nu}$ matrix in the unitary gauge are
\begin{subequations}
\begin{align}
f^0_0&=\f{1}{N^2}\left(1-2\phi+ 4\phi^2-\beta_i\beta^i\right)+\mc{O}(\epsilon^3),\\
f^0_i&=\f{1}{Na}\left(\beta_i-\beta^j h_{ji}-2\phi\beta_i\right)+\mc{O}(\epsilon^3),\\
f^i_0&=-\f{1}{Na}\left(\beta^i-\beta_j h^{ji}-2\phi\beta^i\right)+\mc{O}(\epsilon^3),\\
f^i_j&=\f{1}{a^2}\left(\delta^i_j-h^i_j-\beta^i\beta_j+h^{ik}h_{kj}\right)+\mc{O}(\epsilon^3),
\end{align}
\end{subequations}
where $\epsilon$ represents a generic perturbation parameter. To compute the components of the $\tilde{\mc{K}}^\mu_\nu$, we use the method presented in \cite{mukohyama} to expand the square root in \eqref{4}.
At zeroth order in perturbations we find
\begin{align}
\tilde{\mc{K}}^{(0)0}_{~~~~0}=1-\f{\Delta}{N},\qquad \tilde{\mc{K}}^{(0)i}_{~~~~0}=0=\tilde{\mc{K}}^{(0)0}_{~~~~i}, \qquad \tilde{\mc{K}}^{(0)i}_{~~~~j}=\left(1-\f{\Delta}{a}\right)\delta^i_j,
\end{align}
where $\Delta=e^{\sigma_0/ M_{pl}}$. The first and second orders are
\begin{align}
\tilde{\mc{K}}^{(1)0}_{~~~0}=\f{\Delta}{N}\left(\phi-\f{\zeta}{M_{pl}}\right),\qquad \tilde{\mc{K}}^{(1)0}_{~~~i}=-\f{\Delta}{N(1+r)}\beta_i,\qquad \tilde{\mc{K}}^{(1)i}_{~~~0}=\f{\Delta}{N(1+r)}\beta^i,\qquad \tilde{\mc{K}}^{(1)i}_{~~~j}=\f{\Delta}{a}\left(\f{1}{2}h^i_j-\f{\zeta}{M_{pl}}\delta^i_j\right),
\end{align}
\begin{subequations}
\begin{align}
\tilde{\mc{K}}^{(2)0}_{~~~0}&=\f{\Delta}{N}\left(\f{r(r+2)}{2(r+1)^2}\beta^i\beta_i-\f{3}{2}\phi^2+\f{1}{M_{pl}}\zeta\phi-\f{1}{2M_{pl}^2}\zeta^2\right),\\
\tilde{\mc{K}}^{(2)0}_{~~~i}&=\f{\Delta}{N(r+1)}\left(\f{r+2}{r+1}\phi \beta_i+\f{2r+1}{2(r+1)}\beta^j h_{ji}-\f{1}{M_{pl}}\zeta \beta_i\right),\\
\tilde{\mc{K}}^{(2)i}_{~~~0}&=-\f{\Delta}{N(r+1)}\left(\f{r+2}{r+1}\phi \beta^i+\f{2r+1}{2(r+1)}\beta_j h^{ji}-\f{1}{M_{pl}}\zeta \beta^i\right),\\
\tilde{\mc{K}}^{(2)i}_{~~~j}&=\f{\Delta}{2a}\left(\f{2r+1}{(r+1)^2}\beta^i \beta_j-\f{3}{4}h^{ik}h_{kj}
+\f{1}{M_{pl}}\zeta h^i_j-\f{1}{M_{pl}^2}\zeta^2 \delta^i_j\right),
\end{align}
\end{subequations}
where $r=\f{a}{N}$, as in the background. The traces of $\tilde{\mc{K}}$ at zeroth and first order in perturbations are given by
\begin{align}
[\tilde{\mc{K}}^n]^{(0)}=3(1-X)^n+(1-rX)^n,
\end{align}
\begin{align}
[\tilde{\mc{K}}^n]^{(1)}=nrX(1-rX)^{n-1}\left(\phi-\f{1}{M_{pl}}\zeta\right)+\f{n}{2}X(1-X)^{n-1}\left(h-\f{6}{M_{pl}}\zeta\right),
\end{align}
where $X=\Delta/a$. To second order we obtain
\begin{subequations}
\begin{align}
[\tilde{\mc{K}}]^{(2)}&=\f{r_2}{2r_1}X\beta^i\beta_i-\f{3}{8}Xh^{ij}h_{ij}-\f{1}{2}rX\left(3\phi^2+\f{1}{M_{pl}^2}\zeta^2\right)+\f{1}{2M_{pl}}X\zeta(h+2\phi)-\f{3}{2M_{pl}^2}X\zeta^2,\\
[\tilde{\mc{K}}^2]^{(2)}&=\f{r_2-Xr_3}{r_1}X\beta^i\beta_i+(4rX-3)Xr\phi^2+\f{2}{M_{pl}}(1-2rX)Xr\phi\zeta+\f{1}{M_{pl}^2}\big[3(2X-1)+r(2rX-1)\big]X\zeta^2\nonumber\\
&+\f{1}{M_{pl}}(1-2X)Xh\zeta+(X-\f{3}{4})Xh^{ij}h_{ij},\\
[\tilde{\mc{K}}^3]^{(2)}& =\f{3}{2r_1}(r_2-2r_3X+r_4X^2)X\beta^i\beta_i-\f{3}{8}(3-5X)(1-X)Xh_{ij}h^{ij}-\f{3}{2}(1-rX)(3-5rX)Xr\phi^2\nonumber\\
&+\f{3}{M_{pl}}(1-rX)(1-3rX)Xr\phi\zeta+\f{3}{2M_{pl}}(1-X)(1-3X)X h\zeta\nonumber\\&
-\f{3}{2M_{pl}^2}\big[(r+3)-4X(r^2+3)+3X^2(r^3+3)\big]X\zeta^2,\\
[\tilde{\mc{K}}^4]^{(2)}&=\f{2}{r_1}\left(r_2-3Xr_3+3X^2r_4-X^3r_5\right)X\beta_i \beta^i+\f{3}{2}(2X-1)(1-X)^2Xh_{ij}h^{ij}+6(2rX-1)(1-rX)^2Xr\phi^2\nonumber\\
&+\f{4}{M_{pl}}(1-4rX)(1-rX)^2Xr\phi\zeta+\f{2}{M_{pl}^2}\left(-r-3+6X(r^2+3)-9X^2(r^3+3)+4X^3(r^4+3)\right)X\zeta^2 \nonumber\\
&+\f{2}{M_{pl}}(1-4X)(1-X)^2Xh\zeta.
\end{align}
\end{subequations}
where we have defined
\begin{align}
r_n=\sum^{n}_{i=0}r^i.
\end{align}
From the above formulae, one may construct the mass term using equations \eqref{eq3}.
The gauge invariant second order Lagrangian can then be written as
\begin{align}\label{ac1}
S^{(2)}=M_{pl}^2\int d^4 x Na^3\Bigg(\mathcal{L}+\f{3}{2}M_f\left(-\Phi^2+\mc{B}^i\mc{B}_i+\Phi\mc{H}\right)+\f{3}{8}(2M_f-M_g)(\mc{H}^2-2\mc{H}_{ij}\mc{H}^{ij})+\mc{L}_{mass}\Bigg),
\end{align}
where we have defined
\begin{align}
\mc{L}&=\f{1}{8N^2}(\dot{\mc{H}}_{ij}\dot{\mc{H}}^{ij}-\dot{\mc{H}}^2)+\f{H}{N}\Phi\dot{\mc{H}}-\f{1}{a}\left(2H\Phi-\f{1}{2N}\dot{\mc{H}}\right)\partial_i\mc{B}^i-\f{1}{2Na}\partial_i\mc{B}_j\dot{\mc{H}}^{ij}-3H^2\Phi^2+\f{1}{4a^2}\left(\partial_i\mc{B}_j\partial^i\mc{B}^j-(\partial_i\mc{B}^i)^2\right)\nonumber\\&+\f{1}{2a^2}\left(\partial_i\partial_j\mc{H}^{ij}-\nabla^2\mc{H}\right)\Phi+\f{1}{8a^2}\left(2\partial^i\mc{H}_{ik}\partial_j\mc{H}^{jk}+\mc{H}_{ij}\nabla^2\mc{H}^{ij}+2\mc{H}\partial_i\partial_j\mc{H}^{ij}
-\mc{H}\nabla^2\mc{H}\right)
\nonumber\\&+\f{\omega}{M_{pl}^2}\left(\f{\mc{Z}^2}{2N^2}+\f{\dot{\sigma}_0}{2N^2}(\mc{H}-2\Phi)\dot{\mc{Z}}+\f{1}{2a^2}\mc{Z}\nabla^2\mc{Z}+\f{\dot{\sigma}_0}{aN}\mc{Z}\partial_i\mc{B}^i+\f{\dot{\sigma}_0^{2}}{2N^2}\Phi^2\right),
\end{align}
and
\begin{align}
\mc{L}_{mass}=M_1\mc{H}_{ij}\mc{H}^{ij}+M_2\mc{H}^2+M_{\zeta}\mc{Z}^2+(M_{h\zeta}\mc{H}+M_{\zeta\phi}\Phi)\mc{Z}+M_{\phi}\Phi^2
+M_\beta\mc{B}_i\mc{B}^i+M_{h\phi}\mc{H}\Phi.
\end{align}
The definitions of the $M_i$ are given in Appendix \ref{app1}. We have also used equations \eqref{9} and \eqref{10} to simplify the action.
The above action shows that the scalar, vector and tensor modes do not couple to each other. Therefore, we study them separately.
\subsection{Tensor mode}\label{tensorsec}
Keeping only $\gamma_{ij}$ in the action \eqref{ac1}, we obtain
\begin{align}
S^{(2)}_{tensor}=M_{pl}^2\int d^4x Na^3\Bigg[\f{1}{8N^2}\dot{\gamma}_{ij}\dot{\gamma}^{ ij}+\f{1}{8a^2}\gamma_{ij}\nabla^2\gamma^{ij}-\f{1}{4}(3M_g-4M_1)\gamma_{ij}\gamma^{ij}\Bigg].
\end{align}
Variation of the above action with respect to $\gamma_{ij}$ leads to the equation of motion for tensor perturbations
\begin{equation}\label{ten1}
\frac{\partial}{\partial t}\bigg(\frac{a^3}{N}\dot{\gamma}^{ij}\bigg)-Na \nabla^2 \gamma^{ij}+2(3M_g-4M_1) Na^3\gamma^{ij}=0.
\end{equation}
Fourier transforming the above equation and using the conformal time defined as
\begin{align}
d\eta=\f{N}{a}dt,
\end{align}
one can write equation \eqref{ten1} as
\begin{align}
\bar{\gamma}^{\prime\p}+\bigg[\overrightarrow{k}^2-\f{a^{\prime\p}}{a}+2a^2(3M_g-4M_1)\bigg]\bar{\gamma}=0,
\end{align}
where we have dropped the indices of $\gamma_{ij}$ and defined $\bar{\gamma}=\f{a}{2}\gamma$. This equation shows that the graviton acquires a time-dependent mass in this background, in agreement with the result of \cite{mukohyama}, albeit with a different mass parameter.
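Since the background is exactly de Sitter, the time dependence can be made explicit (a sketch using the standard de Sitter relations): writing $a=-1/(H_0\eta)$ with $\eta<0$, so that $a^{\prime\p}/a=2/\eta^2$, the mode equation becomes
\begin{align}
\bar{\gamma}^{\prime\p}+\bigg[\overrightarrow{k}^2-\f{2}{\eta^2}+\f{2(3M_g-4M_1)}{H_0^2\eta^2}\bigg]\bar{\gamma}=0.
\end{align}
The mass term thus scales as $1/\eta^2$ rather than contributing a constant, so it shifts the index of the usual Bessel-function solutions instead of modifying the dispersion relation at short wavelengths.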
\subsection{Vector mode}\label{vectorsec}
We now study the vector modes of action \eqref{ac1}. There are two vector modes, $S^i$ and $F^i$, in the action. One can write the vector part of the action as
\begin{align}\label{acvec}
S^{(2)}_{vector}=M_{pl}^2\int d^4x Na^3\bigg[-\f{1}{16N^2}\dot{F}_i\nabla^2\dot{F}^i&+\f{1}{4Na}\dot{F}_i\nabla^2S^i-
\f{1}{4a^2}S_i\nabla^2S^i\nonumber\\&+\f{1}{2}(3M_f+2M_\beta)S_iS^i+\f{1}{8}(3M_g-4M_1)F_i\nabla^2F^i\bigg].
\end{align}
One can see from the above action that the vector mode $S^i$ is an auxiliary field. Varying the action with respect to $S^i$ gives
\begin{align}
\f{1}{4Na}\nabla^2\dot{F}_i-\f{1}{2a^2}\nabla^2S_i+(3M_f+2M_\beta)S_i=0. \label{extra}
\end{align}
Going over to the Fourier space and substituting $S^i$ from the above equation into action \eqref{acvec} we obtain
\begin{align}
S^{(2)}_{vector}=\f{M_{pl}^2}{8}\int d^4x Na^3\bigg[\f{(3M_f+2M_\beta)r^2k^2}{k^2+2(3M_f+2M_\beta)a^2}\dot F^i\dot F_i-k^2(3M_g-4M_1) F_i F^i\bigg].
\end{align}
On superhorizon scales the vector part of the action vanishes. Note that equation (\ref{extra}), expressed in Fourier space, implies $S_i=0$ on superhorizon scales.
\subsection{Scalar mode}\label{scalarsec}
In this section we study the scalar perturbations of action \eqref{ac1}. The scalar part of the second order Lagrangian can be written as
\begin{align}\label{sca1}
\mc{L}_{scalar}&=\f{12M_{pl}^2 a}{N}\Bigg[\f{1}{2}\left(M_1+M_2-\f{3}{4}M_f+\f{3}{8}M_g\right)N^2a^2(\nabla^2E)^2+\f{1}{12}N^2\left(M_{h\zeta}a^2\mc{Z}-\f{3}{4}\nabla^2\Phi\right)\nabla^2E\nonumber\\
&+\f{1}{16}N^2\psi\nabla^2(\nabla^2E)+\f{1}{8}\left(M_f+\f{2}{3}M_{h\Phi}\right)N^2a^2\Phi\nabla^2E+\left(\f{1}{3}M_1+M_2+\f{1}{4}M_f-\f{1}{8}M_g\right)N^2a^2\psi\left(3\psi+\nabla^2E\right)\nonumber\\
&+\f{1}{12}Na^2H\Phi\nabla^2\dot{E}+\f{1}{2}a\left(aNH\Phi+\f{1}{3}N\nabla^2\beta-\f{1}{6}a\nabla^2\dot{E}\right)\dot{\psi}-\f{1}{4}a^2\dot{\psi}^2+\f{3}{4}\left(M_f+\f{2}{3}M_{h\Phi}\right)N^2a^2\Phi\psi\nonumber\\
&-\f{1}{6}N^2\Phi\nabla^2\psi-\f{1}{8}\left(2H^2-\f{2}{3}M_\Phi+M_f\right)a^2N^2\Phi^2-\f{1}{6}N^2aH\Phi\nabla^2\beta+\f{1}{12}M_{\zeta\Phi}N^2a^2\mc{Z}\Phi\nonumber\\
&+\f{1}{2}M_{h\zeta}N^2a^2\mc{Z}\psi-\f{1}{12}N^2\psi\nabla^2\psi+\f{1}{12}M_\zeta N^2a^2\mc{Z}^2-\f{1}{12}\left(M_\beta+\f{3}{2}M_f\right)N^2a^2\beta\nabla^2\beta\nonumber\\
&+\f{\omega}{12M_{pl}^2}\left(\f{1}{2}a^2\dot{\mc{Z}}^2-a^2\dot{\sigma}_0\big(\Phi-3\psi-\f{1}{2}\nabla^2E\big)\dot{\mc{Z}}+\f{1}{2}a^2\dot{\sigma}_0^2\Phi^2+\f{1}{2}N^2\mc{Z}\nabla^2\mc{Z}+\dot{\sigma}_0Na\mc{Z}\nabla^2\beta\right)
\Bigg].
\end{align}
As can be seen from the Lagrangian above, $\beta$ and $\Phi$ are non-dynamical. Transforming to Fourier space, one finds their equations of motion
\begin{align}\label{sca2}
\beta=\frac{-2 M^2 H N \Phi +\omega \mc{Z} \dot{\sigma}+2 M^2 \dot{\psi}}{M^2 (2 M_\beta + 3 M_f) a N},
\end{align}
and
\begin{align}\label{sca3}
\Phi=&\f{1}{\Lambda_\phi}\bigg[\omega\bigg(k^2NH\mc{Z}-a^2(6M_f+4M_\beta)\dot{\mc{Z}}\bigg)\dot{\sigma}-6M^2NH\bigg(\f{4}{3}k^2-6M_f-4M_\beta\bigg)\dot{\psi}-M^2N\bigg(k^2a^2H\dot{E}\nonumber\\
&+a^2N\left(\f{3}{2}M_f+M_{h\phi}\right)(k^2E-6\psi)+a^2M_{\zeta\phi}N\zeta +2k^2N\psi \bigg)\bigg],
\end{align}
where we have defined
\begin{align}\label{sca4}
\Lambda_\phi=a^2M^2N^2\big(6M_f+4M_\beta\big)\big(3M_f-2M_\phi +6H^2\big)+8k^2M^2N^2H^2-a^2\omega\big(6M_f+4M_\beta\big)\dot{\sigma}.
\end{align}
It is worth mentioning that the scalar perturbations of QD massive gravity have also been addressed in the decoupling limit in \cite{quasi}, where the authors argue that only one of the scalar modes can be
captured in this limit. As we can see above, we have three scalar modes in our scalar Lagrangian. However, one combination of these scalar modes should be non-dynamical due to the ghost-free nature
of the theory \cite{quasi}.
At this point we are interested in the behavior of the fields on superhorizon scales, where $k^2\rightarrow0$. After substituting $\beta$ and $\Phi$ from equations \eqref{sca2} and \eqref{sca3} and defining the curvature perturbation on constant quasi-dilaton hypersurfaces as
\begin{align}\label{sca5}
\mc{R}=\psi+\f{H}{\dot{\sigma}_0}\mc{Z},
\end{align}
the Lagrangian \eqref{sca1} takes the form
\begin{align}\label{sca6}
\mc{L}_{scalar}^{k\rightarrow0}=&-\f{6M_{pl}^2a^2}{r^3\big[2M_\phi-3M_f+(\omega-6)H_0^2\big]}\Bigg(\f{1}{2}r^2\left(\lambda_5a^2+2r\omega H_0^2a+r^2(2M_\phi-3M_f+\omega H_0^2)\right)\dot{\psi}^2\nonumber\\
&-r^2(r\omega H_0^2+\lambda_5a)a\dot{\mc{R}}\dot{\psi}+\f{1}{2}r^2\lambda_5a^2\dot{\mc{R}}^2-\f{1}{6}\lambda_3a^4(\mc{R}-\psi)^2-r\lambda_2a^3\psi(\mc{R}-\psi)-2r^2\lambda_1a^2\psi^2\nonumber\\
&+H_0r\left[\f{1}{6}\lambda_6a^2(\mc{R}-\psi)+ra\big[\lambda_4\psi+(M_{pl} M_{\phi\zeta}-\omega H_0^2)\mc{R}\big]+9r^2\left(\f{2}{3}M_{h\phi}+M_f\right)\right]a\dot{\psi}\nonumber\\
&-\f{1}{2}H_0r\left[\f{1}{3}\lambda_6a(\mc{R}-\psi)+\omega r\big[(\omega-6)H_0^2+2M_\phi+2M_{h\phi}\big]\psi\right]a^2\psi
\Bigg),
\end{align}
where we define
\begin{align}\label{lam1}
\lambda_1&=\left(\f{3}{4}M_f-\f{3}{8}M_g+M_1+3M_2\right)\left(\omega-6\right)H_0^2-\f{45}{8}M_f^2-\left(3M_1+\f{9}{2}M_{h\phi}+9M_2-\f{9}{8}M_g-\f{3}{2}M_\phi\right)M_f\nonumber\\
&+\left(6M_2-\f{3}{4}M_g+2M_1\right)M_\phi-\f{3}{2}M_{h\phi}^2,\\
\lambda_2&=\f{1}{2}\omega(\omega-6)H_0^2+\bigg((M_{pl} M_{h\zeta}+M_\phi+M_{h\phi})\omega-6M_{pl} M_{h\phi}\bigg)H_0^2\nonumber\\
&-3M_{pl}\left((M_{h\zeta}+\f{1}{2}M_{h\phi})M_f+\f{1}{3}M_{h\phi}M_{\zeta\phi}-\f{2}{3}M_\phi M_{h\zeta}\right),\\
\lambda_3&=-3\omega H_0^4+\left((M_\phi+M_{pl}^2 M_\zeta +M_{pl} M_{\zeta\phi})\omega-6M_{pl}^2 M_\zeta\right)H_0^2-3M_{pl}^2\left(\f{1}{6}M_{\zeta\phi}^2+M_\zeta M_f-\f{2}{3}M_\phi M_\zeta\right),\\
\lambda_4&=\f{1}{2}\omega(\omega-4)H_0^2+(M_{h\phi}+M_\phi)\omega-M_{pl} M_{\zeta\phi},\\
\lambda_5&=\left(H_0^2-\f{1}{3}M_\phi+\f{1}{2}M_f\right)\omega,\\
\lambda_6&=-6\lambda_5+\omega M_{pl} M_{\zeta\phi}.
\end{align}
For $\omega=0$, the field equations on superhorizon scales simplify to
\begin{align}\label{eqR}
3r^2H_0MM_{\zeta\phi}\dot{\psi}-a^2\lambda_3 (\mc{R}-\psi)-3ra \lambda_2\psi=0 ,
\end{align}
\begin{align}
r^3(2M_\phi &-3M_f)\big(r\ddot{\psi}+2aH_0\dot{\psi}\big)+a^2\bigg[r^2H_0MM_{\zeta\phi}\dot{\mc{R}}-\f{1}{3}\lambda_3 a^2(\mc{R}-\psi)+2ra(2\lambda_4 H_0^2-\lambda_2)\psi \nonumber\\
&+ra(4H_0MM_{\zeta\phi}+\lambda_2)\mc{R}+r^2\left(9H_0^2(3M_f+2M_{h\phi})+4\lambda_1\right)\psi\bigg]=0,
\end{align}
which can be analytically solved for $\mc{R}$ and $\psi$. Substituting $\mc{R}$ from \eqref{eqR} into the second equation and solving the resulting equation for $\psi$ results in
\begin{align}
\psi = t^{\frac{3}{2}}\bigg(C_1t^{\f{\sqrt{9 A^2 H_0^2-32 A B}}{2 A H_0}} +C_2t^{\frac{-\sqrt{9 A^2 H_0^2-32 AB}}{2 A H_0}}\bigg),
\end{align}
where $C_1$ and $C_2$ are integration constants and we have defined
\begin{align}
A= M_f M_\zeta-\f{2}{3} M_\phi M_\zeta+\f{1}{6}M_{\zeta\phi}^2,
\end{align}
\begin{align}
B=&\f{1}{8}\bigg(8M_1+24M_h-6M_{h\phi}-3M_g-3M_f+3\big(M_{\zeta\phi}-M_{h\zeta}\big) M_{h\zeta}\bigg)H_0^2+\f{1}{96}\bigg(24 M_{h\zeta}^2 M_\phi + 90 M_f^2 M_\zeta \nonumber\\
&+24 M_{h\phi}^2 M_\zeta -
24 M_{h\phi} M_{h\zeta} M_{\zeta\phi} - (8 M_1 - 3 M_g + 24 M_h) (4 M_\phi M_\zeta - M_{\zeta\phi}^2) \nonumber\\
&+ 6 M_f \big(-6 M_{h\zeta}^2 + (8 M_1 - 3 M_g + 24 M_h + 12 M_{h\phi} - 4 M_{\phi}) M_\zeta -
6 M_{h\zeta} M_{\zeta\phi} + M_{\zeta\phi}^2\big)\bigg),
\end{align}
\begin{align}
\mc{R}=\psi+t^{5/2}\left(C_3t^{\f{\sqrt{9 A^2 H_0^2-32 A B}}{2 A H_0}} +C_4t^{\frac{-\sqrt{9 A^2 H_0^2-32 AB }}{2 A H_0}}\right),
\end{align}
where $C_3$ and $C_4$ are some functions of $C_1$, $C_2$ and $M_i$.
Noting that $t$ is the conformal time, a simple analysis shows that if the condition
\begin{align}
-\f{3}{2}<\f{\sqrt{9 A^2 H_0^2-32 A B}}{2 A H_0}<\f{3}{2}, \label{eq63}
\end{align}
holds, the curvature perturbation decays on superhorizon scales. On the other hand, if we have
\begin{align}
\f{\sqrt{9 A^2 H_0^2-32 A B}}{2 A H_0}=\f{3}{2}\quad\textmd{or}\quad-\f{3}{2},\label{eq64}
\end{align}
the curvature perturbation becomes constant on superhorizon scales. However, writing the above expressions in terms of $\alpha_3$ and $\alpha_4$, one can see that conditions (\ref{eq63},\,\ref{eq64}) cannot be satisfied
for $H_0>0$. The remaining possibility implies that the curvature perturbation grows on superhorizon scales, which restricts the constants $\alpha_3$ and $\alpha_4$ to
\begin{subequations}\label{had}
\begin{align}
\alpha_3\leq -1 \qquad &\hspace{2mm}\mbox{and} \quad \bigg(0<\alpha_4<-\f{1}{4}(1+3\alpha_3)\quad \textmd{or}\quad-\f{1}{4}(1+3\alpha_3)<\alpha_4<\f{\alpha_3^2}{2}\bigg),\\
-1<\alpha_3<0 \qquad &\hspace{2mm}\mbox{and} \quad 0<\alpha_4<\f{\alpha_3^2}{2}.
\end{align}
In the case $\alpha_4=-\f{1}{4}(1+3\alpha_3)$ only the growing mode survives and the constant $\alpha_3$ admits the following range
\begin{align}
-\f{1}{2}<\alpha_3<-\f{1}{3}.
\end{align}
\end{subequations}
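The quoted range can be verified directly from the stated inequalities (a sketch, using nothing beyond the conditions above). Requiring $0<\alpha_4<\f{\alpha_3^2}{2}$ at $\alpha_4=-\f{1}{4}(1+3\alpha_3)$ gives
\begin{align}
-\f{1}{4}(1+3\alpha_3)>0\;\Rightarrow\;\alpha_3<-\f{1}{3},\qquad
-\f{1}{4}(1+3\alpha_3)<\f{\alpha_3^2}{2}\;\Rightarrow\;(2\alpha_3+1)(\alpha_3+1)>0,
\end{align}
and the second condition holds for $\alpha_3>-\f{1}{2}$ or $\alpha_3<-1$; combined with the branch $-1<\alpha_3<0$, this reproduces $-\f{1}{2}<\alpha_3<-\f{1}{3}$.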
Therefore, the only possibility for the curvature perturbation in QD massive gravity is to grow on superhorizon scales.
\section{Conclusions and final remarks}
In this paper we have studied the cosmological perturbations of Quasi-Dilaton massive gravity. This theory extends the non-linear massive gravity theory recently proposed by de Rham, Gabadadze and Tolley by a scalar field. The scalar field is coupled to the mass term in such a way that the field space of the theory admits a dilatation invariance. This new symmetry of the theory enables us to obtain flat FRW solutions. If considered without matter, the theory predicts an accelerating solution, which is the effect of the graviton mass. The stability of this solution is considered in \cite{quasi}, where the authors find that the parameter $\omega$ has to take a positive value less than $6$, and the parameter $\alpha_4$ has to be less than $\alpha_3^2/2$.
The tensor mode behaves differently from that of standard GR, but similarly to the gravitational waves obtained in the dRGT massive gravity theory. The gravitational waves in this theory have a non-vanishing, time-dependent mass which modifies their dispersion relation. The vector mode has the property that it vanishes on superhorizon scales.
In order to find the scalar spectrum of the theory, one can use the gauge invariant variables and then integrate out the two non-dynamical variables contained in the metric perturbation. The equations of motion of the remaining three scalar perturbations can then be obtained by varying the resulting action. In practice, the procedure is so involved that one cannot solve the equations analytically. However, as long as we are interested in the behavior of the theory on superhorizon scales, we can study the Lagrangian on those scales. One of the scalar perturbations does not play any role on superhorizon scales because it always appears with a factor of the wave number. The resulting superhorizon Lagrangian can subsequently be varied with respect to $\psi$ and the curvature perturbation $\mc{R}$. The equations can be solved analytically if one assumes that the quasi-dilaton field has no kinetic term. We may then obtain the conditions under which the curvature perturbation grows on superhorizon scales,
i.e.\ equation \eqref{had}. However, if one considers the range of $\alpha_4$ within which the solution is stable, equation \eqref{5.5}, one may reduce the allowed parameter space to that represented by relations \eqref{had}
for $\omega=0$. One should note that the above range of the parameter space would be different if one considered a non-zero $\omega$ parameter. It is also worth noting that relations (\ref{eq63},\ref{eq64}) imply that there is no constant curvature perturbation on superhorizon scales.
It is worth mentioning that our results in this paper are in agreement with the
work by Wands \textit{et al.} \cite{wands} where it is proved that the comoving curvature perturbation will become constant on superhorizon scales if the energy-momentum tensor of the matter is conserved. This is so since in the context of the present work, one can write the field equations of the metric as
\begin{align}\label{ap1}
G_{\mu\nu}=T^\sigma_{\mu\nu}+m^2X_{\mu\nu},
\end{align}
where $T^\sigma_{\mu\nu}$ is the energy-momentum tensor of the dilaton field. The tensor $X_{\mu\nu}$ is the contribution of the graviton mass term, which depends on the dilaton field, the metric and the St\"uckelberg fields. The covariant divergence of $X_{\mu\nu}$ is not zero in the full theory, which implies that the energy-momentum tensor of the dilaton field, from which the curvature perturbation is constructed, is not conserved. So one expects growing modes on superhorizon scales in QD massive gravity theory.
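This can be seen in one line: contracting \eqref{ap1} with $\nabla^\mu$ and using the contracted Bianchi identity $\nabla^\mu G_{\mu\nu}=0$ gives
\begin{align}
\nabla^\mu T^\sigma_{\mu\nu}=-m^2\nabla^\mu X_{\mu\nu}\neq 0,
\end{align}
so the dilaton energy-momentum tensor fails to be conserved precisely when $\nabla^\mu X_{\mu\nu}\neq 0$.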
Finally, QD massive gravity has the potential to produce a reasonable inflationary scenario if one adds a potential to the action or promotes the graviton mass to a function of the quasi-dilaton field. Either modification breaks the dilatation invariance; we will study this in future work.
After the completion of our paper, two further works appeared on the same subject \cite{quasi1, quasi2}, where the emphasis is on the appearance of ghosts in the scalar mode on sub-horizon scales.
\acknowledgments
We would like to thank A. E. Gumrukcuoglu for useful discussions.
\subsection{Pointwise surjective presentations of Deligne-Mumford stacks} \label{intermezzo}
\begin{notation} \label{notnot}
In this subsection, we fix a field $F$ which is either real closed or finite. For a scheme $S$, an algebraic stack $\mr X$ over $S$ and a scheme $T$ over $S$, we use the notation $|\mr X(T)|$ to denote the set of isomorphism classes of the groupoid $\mr X(T)$.
\end{notation}
\noindent
The goal of Section \ref{intermezzo} is to prove that every Deligne-Mumford stack $\mr X$ of finite type over $F$ admits an \'etale presentation by an $F$-scheme $X$ such that the induced map $X(F) \to |\mr X(F)|$ is surjective. See Definition \ref{Rontodefinition} \& Theorem \ref{rfonto2} below. This statement is the \'etale analogue of Theorem A in \cite{2019}. In that article, Aizenbud and Avni prove that any algebraic stack $\mr X$ of finite type over a noetherian scheme $S$ admits a smooth presentation $\phi: X \to \mr X$ by an $S$-scheme $X$ such that for every morphism $\textnormal{Spec } F \to S$, the map $X(F) \to |\mr X(F)|$ is surjective. We extend their result in the following way: if $\mr X$ is Deligne-Mumford over $S = \textnormal{Spec } F$, then $\phi$ can be chosen \'etale.
\begin{definition}[\cite{Sakellaridis2016TheSS}]\label{Rontodefinition}
Let $\mr X_{/F}$ be an algebraic stack. A smooth presentation $X \to \mr X$ by an $F$-scheme $X$ is \textit{$F$-surjective} if the map $X(F) \to |\mr X(F)|$ is surjective.
\end{definition}
\begin{theorem}[\cite{2019}] \label{smoothonto}
Any algebraic stack $\mr X$ of finite type over $F$ admits an $F$-surjective smooth presentation $\phi: X \to \mr X$ by a scheme $X$ over $F$. $\hfill \qed$
\end{theorem}
\begin{theorem} \label{rfonto2}
Any Deligne-Mumford stack $\mr X$ of finite type over $F$ admits an $F$-surjective \'etale presentation $\phi: X \to \mr X$ by a scheme $X$ over $F$.
\end{theorem}
\begin{proof}
By Theorem \ref{smoothonto}, there exists an $F$-scheme $X$ and an $F$-surjective smooth presentation $P: X \to \mr X$. Let $y: \textnormal{Spec } F \to \mr X$ be any $F$-point of $\mr X$. Since $P$ is $F$-surjective, there exists an $F$-point $x: \textnormal{Spec } F \to X$ such that $P \circ x \cong y$. We claim that there exists a subscheme $j: U \hookrightarrow X$ such that $P|_U: U \to \mr X$ is \'etale, and such that $x: \textnormal{Spec } F \to X$ factors through an $F$-point $u: \textnormal{Spec } F \to U$. It will follow that $P|_U \circ u = P \circ j \circ u = P \circ x \cong y$. By taking the disjoint union $Y$ of all such subschemes $U \subset X$ we thus construct a scheme $Y$ over $F$ together with an $F$-surjective \'etale presentation $Y \to \mr X$. In other words, the claim implies the theorem. So let us prove this claim. The first part of its proof closely follows the proof of Theorem 8.1 in \cite{LM-B}; we briefly repeat those arguments for the convenience of the reader. Let $Z = X \times_{P, \mr X, P} X$ with $p_1: Z \to X$ and $p_2: Z \to X$ the two projections. Now $\Delta: \mr X \to \mr X \times_{F} \mr X$ is unramified \cite[Lemme 4.2]{LM-B} hence $p: Z \to X \times_{F} X$ is unramified. Consequently, $
p_1^*\Omega^1_{X/ F} \oplus p_2^*\Omega^1_{X/ F} = p^*\Omega^1_{X \times_{F} X / F} \to \Omega^1_{Z / F}
$ is surjective \cite[\S IV, 4, 17.2.2]{EGA}, hence by \cite[(8.2.3.2)]{LM-B}, the natural morphism of quasi-coherent $\ca O_X$-modules
$
\Omega^1_{X/F} \to \Omega^1_{X / \mr X}
$
is surjective and $\Omega^1_{X/\mr X}$ is an $\ca O_X$-module locally free of finite rank. Let $r$ be the rank of $\Omega^1_{X/\mr X}$ around the point $x \in X(F)$. Because $\Omega^1_{X/F} \to \Omega^1_{X / \mr X}$ is surjective, there exist global sections $f_1, \dotsc, f_r$ of $\ca O_X$ whose differentials at the point $x$ form a basis of the $k(x)$-vector space $\Omega^1_{X/\mr X} \otimes k(x)$. We obtain $F$-morphisms
$$
f : = (f_1, \dotsc, f_r): X \to \bb A^r_{F}, \;\;\; (P,f): X \to \mr X \times_{F} \bb A^r_{F}.
$$
Then $(P,f)$ is a map of smooth algebraic stacks over $\mr X$ whose differential is an isomorphism at the point $x \in X(F)$, hence $(P,f)$ is \'etale on a neighbourhood $X' \subset X$ of $x$ \cite[\S IV, 4, 17.11.2]{EGA}. Replace $X$ by $X'$ and define $j: U \hookrightarrow X$ by the cartesian diagram
$$
\xymatrixcolsep{5pc}
\xymatrix{
U \ar[r] \ar[d]^j & \mr X \ar[d] & \mr X \times_{F} \textnormal{Spec } F \ar[dl]^{\textnormal{id} \times f(x)} \ar@{=}[l]\\
X \ar[r]^{(P,f) } & \mr X \times_{F} \bb A^r_{F}. & }
$$
Then $P|_U: U \to \mr X$ is \'etale since $(P,f)$ is \'etale. Moreover, the morphisms $x: \textnormal{Spec } F \to X$ and $y: \textnormal{Spec } F \to \mr X$ are such that their images in $(\mr X \times_{F} \bb A^r_{F})(F)$ are given by
$$(P \circ x, f(x), \textnormal{id} ) \in (\mr X \times_{F} \bb A^r_{F})(F) \;\;\; \tn{ and } \;\;\; (y, f(x), \textnormal{id}) \in (\mr X \times_{F} \bb A^r_{F})(F).$$
But clearly the isomorphism between $P \circ x$ and $y$ in $\mr X(F)$ induces an isomorphism between $(P \circ x, f(x), \textnormal{id} )$ and $(y, f(x), \textnormal{id}) $ in $(\mr X \times_{F} \bb A^r_{F})(F)$. By the universal property of the $2$-fibre product, $x$ and $y$ induce the required map $
u = (x,y): \textnormal{Spec } F \to U$.
\end{proof}
\subsection{Topology on the real locus of a real algebraic stack}
\subfile{topologyrealpoints}
\end{document}
\section{Introduction}
\label{introduction}
Fix an integer $g \geq 1$. Let $\ca A$ and $B$ be complex manifolds, and let
\begin{equation} \label{family}
\psi: \ca A \to B, \;\;\; s: B \to \ca A, \;\;\; E \in R^2\psi_*\bb Z
\end{equation} be a polarized holomorphic family of $g$-dimensional complex abelian varieties. The map $\psi$ is a proper holomorphic submersion, $s$ is a section of $\psi$, and $A_t: = \psi^{-1}(t) $ is a complex abelian variety of dimension $g$ with origin $s(t)$, polarized by $E_t \in H^2(A_t, \bb Z)$, for $t \in B$.
\\
\\
Suppose moreover that $\psi$ admits a real structure in the following sense: $\ca A$ and $B$ are equipped with anti-holomorphic involutions $\tau$ and $\sigma$, commuting with $\psi$ and $s$ and compatible with the polarization, in the sense $\tau^*(E) = -E$. For example, this is true when $\psi$ is real algebraic, i.e. induced by a polarized abelian scheme over a smooth $\bb R$-scheme.
\\
\\
Let $B({\bb R})$ be the set of fixed points under the involution $\sigma: B \to B$. If $t \in B({\bb R})$, then $A_t$ is equipped with an anti-holomorphic involution $\tau$ preserving the group law. We shall not distinguish between the category of abelian varieties over ${\bb R}$ and the category of complex abelian varieties equipped with an anti-holomorphic involution preserving the group law. Thus, if $t \in B({\bb R})$, then $A_t$ is an abelian variety over $\bb R$. Define $R_k \subset B({\bb R})$:
\begin{equation}\label{rk}
R_k = \left\{ t \in B({\bb R}): A_t \text{\textit{ contains an abelian subvariety over ${\bb R}$ of dimension }} k \right\}.
\end{equation}
For $t \in B$, the polarization gives an isomorphism $H^{0,1}(A_t) \cong H^{1,0}(A_t)^*$; using the dual of the differential of the period map we obtain a symmetric bilinear form
\begin{equation} \label{symbil}
q: H^{1,0}(A_t) \otimes H^{1,0}(A_t) \to T_t^*B.
\end{equation}
\blfootnote{\textcolor{blue}{$^\ast$}\footnotesize{\'Ecole normale sup\'erieure, 45 rue d'Ulm, Office T17, 75230 Paris, \href{mailto:[email protected]}{[email protected]}.}}
\blfootnote{\textcolor{blue}{$^\ast$}\textit{Date:} \today $\;$
- This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement N\textsuperscript{\underline{o}} 754362.$\;$\img{EU}}
\restoregeometry
\pagestyle{plain}
\begin{condition} \label{criterion}
There exists an element $t \in B$ and a $k$-dimensional complex subspace $W \subset H^{1,0}(A_t)$ such that the complex $0 \to \bigwedge^2 W \to W \otimes H^{1,0}(A_t) \to T_t^*B $ is exact.
\end{condition}
\noindent
Our main theorem is the following.
\begin{theorem} \label{theorem1}
If $B$ is connected and if Condition \ref{criterion} holds, then $R_k$ is dense in $B({\bb R})$.
\end{theorem}
\noindent
We give the following three applications of Theorem \ref{theorem1}. A \textit{real algebraic curve} will be a proper, smooth, geometrically connected curve over ${\bb R}$. Let $\mr M_g^{\bb R}$ be the set of isomorphism classes of real algebraic curves of genus $g$, and $\mr A_g^{{\bb R}}$ the set of isomorphism classes of real principally polarized abelian varieties of dimension $g$. The sets $\mr A_g^{{\bb R}}$ and $\mr M_g^{\bb R}$ carry natural real semi-analytic structures by work of Gross-Harris \cite{grossharris} and Sepp\"{a}l\"{a}-Silhol \cite{seppalasilhol2}.
\begin{theorem} \label{theorem2}
\begin{enumerate}
\item[\hypertarget{theoremA}{\textbf{A.}}]
Given an integer $k$ with $1 \leq k \leq g-1$, abelian varieties over $\bb R$ \\
containing a $k$-dimensional abelian subvariety over ${\bb R}$ are dense in the moduli space \\
$\mr A_g^{{\bb R}}$ of principally polarized abelian varieties of dimension $g$ over ${\bb R}$. \item[\hypertarget{theoremB}{\textbf{B.}}] For $g \geq 3$ and $k \in \{1,2,3\}$, real algebraic curves $C$ that admit a map $\varphi: C \to A$ \\ with $A$ a $k$-dimensional abelian variety over ${\bb R}$ such that $\varphi(C)$ generates $A$ as an \\ algebraic group are dense in the moduli space $\mr M_g^{{\bb R}}$ of real genus $g$ algebraic curves.
\item[\hypertarget{theoremC}{\textbf{C.}}] If $V \subset \bb PH^0(\bb P^2_{{\bb R}}, \mathcal{O}_{\bb P^2_{{\bb R}}}(d))$ is the real algebraic set of degree $d$ smooth plane curves \\ over ${\bb R}$, then the subset of $V$ corresponding to those curves that map non-trivially to \\ elliptic curves over $\bb R$ is dense in $V$.
\end{enumerate}
\end{theorem}
\noindent
\begin{remark}
The topology in the moduli spaces of Theorem \ref{theorem2}.\hyperlink{theoremA}{A} and \ref{theorem2}.\hyperlink{theoremB}{B} is the one underlying the real semi-analytic structure. We prove in Section \ref{realan} that these topologies have other more intrinsic incarnations. It might also be worth noting that, although well-known in the complex case, Theorem \ref{theorem2}.\hyperlink{theoremA}{A} is new in the real case.
\end{remark}
\noindent Our proofs rely on results in the complex setting that were proved by Colombo and Pirola in \cite{Colombo1990}. Indeed, Theorem \ref{theorem1} is the analogue over $\bb R$ (with \textit{unchanged} hypothesis) of the following theorem. Define $S_k \subset B$ to be the set of those $t$ in $B$ for which the complex abelian variety $A_t$ contains a complex abelian subvariety of dimension $k$.
\begin{theorem}[Colombo-Pirola \cite{Colombo1990}] \label{colpol}
If $B$ is connected and if Condition \ref{criterion} holds, then $S_k$ is dense in $B$. $\hfill \qed$
\end{theorem}
\noindent
Colombo and Pirola in turn were inspired by the Green-Voisin Density Criterion \cite[Proposition 17.20]{voisin}. Indeed, the latter gives a criterion for density of the locus where the fiber contains many Hodge classes for a variation of Hodge structure of weight $2$. Theorem \ref{colpol} adapts this result to a polarized variation of Hodge structure of weight $1$ (which is nothing but a polarized family of complex abelian varieties). The result is a criterion for the density of the locus where the fiber admits a sub-Hodge structure of dimension $k$.
\\
\\
To be a little more precise, recall that for a complex manifold $U$ and a rational weight $2$ variation of Hodge structure $(H_{\bb Q}^2, \ca H, F^1, \nabla)$ on $U$, the \textit{Noether-Lefschetz locus} $\tn{NL}(U) \subset U$ is the locus where the rank of the vector space of Hodge classes is bigger than the general value. If $H_{\bb Q}^2$ is polarizable then $\tn{NL}(U)$ is a countable union of closed algebraic subvarieties of $U$ \cite{MR1273413}. The Green-Voisin Density Criterion referred to above decides whether $\tn{NL}(U)$ is dense in $U$. It was first stated in \cite{NLlocus} and applied to the universal degree $d\geq4$ surface $\mr S \to \mr B$ in $\bb P^3$. In this case, $\tn{NL}(\mr B)$ is the locus where the Picard group is not generated by a hyperplane section, and the union of the general components of $\tn{NL}(\mr B)$ is dense in $\mr B$ [\textit{loc. cit.}]. Analogously, $S_k$ is a countable union of components of $\textnormal{NL}(B)$ \cite{laszlodebarre}, and Theorem \ref{colpol} says that this union is dense in $B$.
\\
\\
Now let us carry the discussion over to the real setting. Unfortunately, the Green-Voisin Density Criterion cannot be adapted to the reals without altering the hypothesis. Going back to the universal family $\mr S \to \mr B$ of degree $d\geq 4$ surfaces in $\bb P^3_{{\bb C}}$, one observes that this family has a real structure, so that we can define the \textit{real Noether-Lefschetz locus} $\textnormal{NL}(\mr B({\bb R})) \subset \mr B({\bb R})$ as the locus of real surfaces $S$ in $\bb P^3_{{\bb R}}$ with $\textnormal{Pic}(S) \neq {\bb Z}$. By the above, the Green-Voisin Density Criterion is fulfilled, hence $\textnormal{NL}(\mr B)$ is dense in $\mr B$, whereas density of $\textnormal{NL}(\mr B({\bb R}))$ in $\mr B(\bb R)$ may fail: for every degree $4$ surface $S$ in $\bb P^3_{{\bb R}}$ whose real locus is a union of $10$ spheres, $\textnormal{Pic}(S) = {\bb Z}$, and so $\textnormal{NL}(\mr B({\bb R})) \cap K = \emptyset$ for any connected component $K$ of surfaces of such a topological type \cite[Rem.1.5]{benoistttt}. There is an alternate criterion \cite[Prop.1.1]{benoistttt}, but the hypothesis is more complicated and thus harder to fulfill, and only implies density of $\textnormal{NL}(\mr B({\bb R}))$ in one component of $\mr B(\bb R)$ at a time. It is therefore remarkable that for the real analogue of density of $S_k$ in $B$, none of these problems occur. Theorem \ref{theorem1} shows that the complex density criterion can be carried over to the reals \textit{without changing it}. Condition \ref{criterion} does not involve the real structures at all, applying to any real structure on the family. The result is density of $S_k \subset B$ \textit{and} $R_k \subset B({\bb R})$. It is for this reason that the applications of Theorem \ref{theorem1} are generous: the statements in Theorem \ref{theorem2}, as well as their proofs, are direct analogues of some applications of Theorem \ref{colpol} in \cite{Colombo1990}.
\\
\\
Let us comment on the topologies appearing in Theorem \ref{theorem2}.\hyperlink{theoremA}{A}\&\hyperlink{theoremB}{B}. One may be inclined to believe that to obtain a \textit{real moduli space} (i.e. a reasonable topology on the set of real isomorphism classes), one equips the complex moduli space with an anti-holomorphic involution and considers the set of fixed points. This often fails, as it does for $\mr A_g^{\bb R}$ and $\mr M_g^{\bb R}$: there can be a complex algebraic variety carrying different real structures (e.g. $y^2 = x^3 + x$ and $y^2 = x^3-x$ in $\mr A_1^{\bb C}$), and there can be real points of the complex moduli space that do not represent any real variety (e.g. the example \cite[86]{silholsurfaces} in $\mr A_2^{\bb C}$ due to Shimura \cite{shimura1972}). The obstruction is that these moduli spaces are not fine; indeed, one solution is to cover $\mr A_g^{\bb C}$ and $\mr M_g^{\bb C}$ by spaces that rigidify the varieties in such a way that any real structure can be lifted to a real structure compatible with the rigidification: take the disjoint union of several fixed point sets of non-equivalent real structures on these covering spaces and quotient out by appropriate groups to obtain a semi-analytic structure on $\mr A_g^{\bb R}$ and on $\mr M_g^{\bb R}$. This was done by Gross and Harris \cite{grossharris} for $\mr A_g^{\bb R}$ using the Siegel space $\bb H_g \to \mr A_g^{\bb C}$ and by Sepp\"al\"a and Silhol \cite{seppalasilhol2} for $\mr M_g^{\bb R}$ using the Teichm\"uller space $\ca T_g \to \mr M_g^{\bb C}$.
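To make the first example explicit: with $u = e^{i\pi/4}$, so that $u^4 = -1$, the substitution $(x,y) \mapsto (u^2x, u^3y)$ defines an isomorphism over $\bb C$ from $y^2 = x^3 + x$ to $Y^2 = X^3 - X$, since
$$
Y^2 = u^6(x^3 + x) \;\;\; \tn{ and } \;\;\; X^3 - X = u^6x^3 - u^2x = u^6(x^3 + x).
$$
Over ${\bb R}$, however, the two curves are not isomorphic: $x^3 + x$ has one real root while $x^3 - x$ has three, so the corresponding real loci have one and two connected components respectively.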
\\
\\
In the second part of this paper we put our density results in some perspective. The goal is to prove that the above topologies on $\mr A_g^{\mathbb{R}}$ and $\mr M_g^{\mathbb{R}}$ are natural in some sense. In complex geometry there are many ways to construct a moduli space of complex varieties, among which natural ones use algebraic stacks. We show that something similar holds over $\mathbb{R}$, and study the real locus of an algebraic stack $\mr X$ of finite type over $\mathbb{R}$. We define a topology on the set $|\mr X(\mathbb{R})|$ of isomorphism classes of $\mr X(\mathbb{R})$ in a way that generalizes the euclidean topology on $\mr X(\mathbb{R})$ when $\mr X$ is a scheme. If $\mr X$ admits a coarse moduli space $\mr X \to M$, $|\mr X(\mathbb{R})|$ should be thought of as the real analogue of the euclidean topology on $M(\mathbb{C})$. The point is that we cannot use $M(\mathbb{R})$ since this set will almost never be in bijection with $|\mr X(\mathbb{R})|$. For the stacks of abelian varieties $\ca A_g$ and curves $\ca M_g$, the bijections $|\ca A_g(\mathbb{R})| \cong \mr A_g^{\mathbb{R}}$ and $|\ca M_g(\mathbb{R})| \cong \mr M_g^{\mathbb{R}}$ are homeomorphisms, see Theorems \ref{th:homeomorphismmoduli} and \ref{th:homeomorphismmoduli2}.
\\
\\
Finally, we would like to remark that Colombo-Pirola's Theorem \ref{colpol} has been generalized by Ching-Li Chai \cite{Chai1998DensityOM}, who considers a variation of rational Hodge structures over a complex analytic variety and rephrases and answers the following question in the context of Shimura varieties: when do the points corresponding to members having extra Hodge cycles of a given type form a dense subset of the base? It should be interesting to investigate whether such a generalization can be carried over to the real numbers as well.
\subsection{Outline of the paper}
This paper is organized as follows. Section \ref{realfamily} is devoted to the proof of Theorem \ref{theorem1}. In Section \ref{satisfydensity} we satisfy Condition \ref{criterion} in the case of a universal local deformation of a polarized abelian variety. In Section \ref{realmoduli} we prove that any real structure on a complex manifold admitting a universal local deformation extends uniquely to a real structure on the local deformation. In Section \ref{realjacobians} we show that any real structure on a family of curves induces a real structure on the relative Jacobian. We prove Theorem \ref{theorem2} in Section \ref{provetheorem2}. In Section \ref{coarsemodulistsack} we define a topology on the real locus of a real algebraic stack. For algebraic moduli stacks over $\mathbb{R}$, the induced topological space becomes a moduli space of real varieties. We conclude in Section \ref{realan} that for abelian varieties and curves, the so-obtained real moduli spaces coincide with those defined by Gross-Harris \cite{grossharris} and Sepp\"al\"a-Silhol \cite{seppalasilhol2}.
\subsection{Acknowledgements}
I would like to thank my thesis advisor Olivier Benoist for his great guidance, encouragement and support. I would also like to sincerely thank the referee for carefully reading the manuscript. His or her comments substantially improved the quality of this paper.
\section{Real Abelian Subvarieties in Family} \label{realfamily}
\subfile{realfamily}
\section{Density in Deformation Spaces} \label{satisfydensity}
\subfile{satisfydensity}
\section{Real Deformation Spaces}
\label{realmoduli}
\subfile{realdeformationspaces}
\section{Real Structures on Relative Jacobians} \label{realjacobians}
\subfile{realjacobians}
\section{Proof of Theorem \ref{theorem2}} \label{provetheorem2}
\subfile{provetheorem2}
\section{The Coarse Moduli Space of a Real Algebraic Stack} \label{coarsemodulistsack}
\subfile{intermezzo}
\section{Comparing the Real Moduli Spaces}
\label{realan}
\subfile{comparingmodulispaces}
\newgeometry{left=10mm, bottom=1in}
\printbibliography
\end{document}
\subsection{Density in $\mr A_g^\mathbb{R}$} \label{sec:densityinag}
The set $\mr A_g^\mathbb{R}$ of isomorphism classes of principally polarized real abelian varieties of dimension $g$ can be provided with a real semi-analytic space structure as follows. The result seems to have been proven independently by Gross-Harris \cite[Section 9]{grossharris} and Silhol \cite[Ch.IV, Section 4]{silholsurfaces}. Following \cite[Ch.IV, Definition 4.4]{silholsurfaces}, define $I \subset \bb Z^2$ as
$$I : = \left\{(\alpha, \lambda) \in \bb Z^{2}: 1 \leq \lambda \leq g \tn{ and } \alpha \in \{1,2\} \tn{ such that } \alpha = 1 \tn{ if } \lambda \tn{ is odd } \right\} \cup \left\{ (0,0) \right\}.$$ Attach to each $i = (\alpha,\lambda) \in I$ a matrix $M \in M_{g}(\bb Z)$ as in \cite[Ch.IV, Theorem 4.1]{silholsurfaces}, define $
\Gamma_i = \{ A \in \textnormal{GL}_g(\bb Z): A M A^t = M \} \subset \textnormal{GL}_g(\bb Z) \subset \textnormal{Sp}_{2g}(\bb Z)$, where the inclusion
$\textnormal{GL}_g(\bb Z) \hookrightarrow \textnormal{Sp}_{2g}(\bb Z)$ is defined by $ A \mapsto \big(\begin{smallmatrix}
A & 0 \\
0 & A^{-t}
\end{smallmatrix}\big)$, and define an anti-holomorphic involution $\tau_i : \bb H_g \to \bb H_g$ on the Siegel space $\bb H_g$ by $\tau_i(Z) = M - \overline{Z}$. Let $\bb H_g^{\tau_i}$ be its fixed locus. Then the period map defines a bijection \cite[Proposition 9.3]{grossharris}, \cite[Ch.IV, Theorem 4.6]{silholsurfaces}
\begin{equation} \label{eq:topologyaggrossharris}
\mr A_g^{\bb R} \cong \bigsqcup_{i \in I} \Gamma_i \backslash \bb H_g^{\tau_i}.
\end{equation}
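As a quick sanity check, the embedding $\textnormal{GL}_g(\bb Z) \hookrightarrow \textnormal{Sp}_{2g}(\bb Z)$ used above does land in the symplectic group: with $J = \big(\begin{smallmatrix}
0 & I \\
-I & 0
\end{smallmatrix}\big)$,
$$
\begin{pmatrix} A^t & 0 \\ 0 & A^{-1} \end{pmatrix}
\begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & A^{-t} \end{pmatrix}
=
\begin{pmatrix} 0 & A^t A^{-t} \\ -A^{-1}A & 0 \end{pmatrix}
= J.
$$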
\begin{proof}[Proof of Theorem \ref{theorem2}.\textcolor{blue}{A}]
Let $i \in I$ and $Z \in \bb H_g^{\tau_i}$. Let $(X, \omega\in H^2(X, \mathbb{Z}))$ be the principally polarized complex abelian variety with symplectic basis whose period matrix is $Z$. Then $(X, \omega)$ admits a unique real structure $\sigma: X \to X$, compatible with $\omega$ and the symplectic basis \cite[Section 9]{grossharris}. There exists a $\tau_i$-invariant connected open neighborhood $B \subset \bb H_g$ of $Z$ and a universal local deformation $\pi: \mr X \to B \ni Z$ of the polarized complex abelian variety $(X, \omega)$. By Proposition \ref{realdef}, possibly after restricting $B$ around $Z$, the real structure $\sigma: X \to X$ extends uniquely to a real structure on the polarized family $\pi$, which by uniqueness is compatible with $\tau_i : B \to B$. By Proposition \ref{satisfycondition}, Condition \ref{criterion} is satisfied. By Theorem \ref{theorem1}, $R_k \cap B^{\tau_i} $ is dense in $B^{\tau_i} \subset \bb H_g^{\tau_i}$. It follows that $R_k$ is dense in $\bb H_g^{\tau_i}$.
\end{proof}
\subsection{Density in $\mr M_g^\mathbb{R}$}
Similarly, Sepp\"al\"a and Silhol provide the set $\mr M_g^\mathbb{R}$ of real genus $g$ algebraic curves with a topology as follows. Fix a compact oriented $\ca C^{\infty}$-surface $\Sigma$ of genus $g$ and let $\ca T_g$ be the Teichm\"uller space of the surface $\Sigma$ (see e.g. \cite{arbarelloteichmuller}). Define $J \subset \bb Z^2$ to be the set of tuples $(\epsilon, k) \in \bb Z^2$ with $\epsilon \in \{0,1\}$ such that $1 \leq k \leq g+1$ and $k \equiv g + 1 \mod 2$ when $\epsilon = 1$, and $0 \leq k \leq g$ otherwise. To every $j = (\epsilon(j), k(j)) \in J$ one can attach an orientation-reversing involution $\sigma_j: \Sigma \to \Sigma$ of \textit{type $j \in J$} \cite{seppalasilhol2}. This means that $k(j) = \#\pi_0(\Sigma^{\sigma_j})$ and that $\epsilon(j) = 0$ if and only if $\Sigma \setminus \Sigma^{\sigma_j}$ is connected. Moreover, every such involution $\sigma_j: \Sigma \to \Sigma$ induces an anti-holomorphic involution $\sigma_j: \ca T_g \to \ca T_g$ [\textit{loc. cit.}]. Denote by $N_j = \{ g \in \Gamma_g: g \circ \sigma_j = \sigma_j \circ g \}$ the normalizer of $\sigma_j: \ca T_g \to \ca T_g$ in the mapping class group $\Gamma_g$ of $\Sigma$. Then there is a natural bijection \cite[Theorem 2.1, Definition 2.3]{seppalasilhol2}
\begin{equation} \label{eq:topologymgseppalasilhol}
\mr M_g^{\bb R} \cong \bigsqcup_{j \in J} N_j \backslash \ca T_g^{\sigma_j}.
\end{equation}
\begin{proof}[Proof of Theorem \ref{theorem2}.\textcolor{blue}{B}]
Suppose that $g \geq 3$, let $j \in J$ and consider a point $0 \in \ca T_g^{\sigma_j}$. Let $(X, [f])$ be the complex Teichm\"uller curve of genus $g$ corresponding to the point $0$. By \cite{seppalasilhol2}, there is a unique real structure $\sigma: X\to X$ which is compatible with the Teichm\"uller structure $[f]$ and the involution $\sigma_j: \Sigma \to \Sigma$. Moreover, there exists a $\sigma_j$-invariant simply connected open neighbourhood $B \subset \ca T_g$ of $0$ in the Teichm\"uller space and a Kuranishi family $\pi: \mr X \to B \ni 0$ of the Riemann surface $X$. By Proposition \ref{realdef}, up to restricting $B$ around $0$, the real structure $\sigma: X\to X$ extends uniquely to a real structure $(\tau, \mr T)$ on the Kuranishi family $\pi$ such that $\tau(0) = 0$. By uniqueness, $\tau: B \to B$ coincides with $\sigma_j$. By Lemma \ref{realjacobianstructure}, $(\tau, \mr T)$ induces a real structure $(\tau, \Sigma)$ on the Jacobian $J_{\mr X} \to B$ of the curve $\pi: \mr X \to B$. Fix $k \in \{1,2,3\}$. Observe that, by Theorem \ref{theorem1}, it suffices to prove that Condition \ref{criterion} holds in $B$. That is, we need to show that there exists an element $t \in B$ and a $k$-dimensional complex subspace $W \subset H^{1,0}(\text{Jac}(\mr X_t)) = H^{1,0}(\mr X_t)$ such that the sequence
$
0 \to \bigwedge^2 W \to W \otimes H^{1,0}(\mr X_t) \to T_t^*B$ is exact. The family
$
\pi: \mr X \to B
$
is a universal local deformation of $\mr X_t$ for each $t \in B$, hence $ T_tB \cong H^1(\mr X_t, T_{\mr X_t})$. By \cite[Lemme 10.22]{voisin}, the dual of $q: H^{1,0}(\mr X_t) \otimes H^{1,0}(\mr X_t) \to T_t^\ast{B}$ is nothing but the cup-product $
H^0(K_{\mr X_t}) \otimes H^0(K_{\mr X_t}) \to H^0(K_{\mr X_t}^{\otimes 2})$. Consequently, we are reduced to the claim that for each $k \in \{1,2,3\}$ there exists an element $t \in B$ and a $k$-dimensional subspace $W \subset H^0(K_{\mr X_t})$ such that the following sequence is exact:
\begin{equation} \label{exseq}
0 \to \wedge^2 W \to W \otimes H^0(K_{\mr X_t}) \to H^0(K_{\mr X_t}^{\otimes 2}).
\end{equation}
This is true by the \textit{Proof of Theorem (3)} in \cite{Colombo1990}. Indeed, Colombo and Pirola consider the moduli space of complex genus $g \geq 3$ curves $\mr M_g^\mathbb{C} = \Gamma_g \backslash \ca T_g$ to prove the complex analogue of Theorem \ref{theorem2}.\hyperlink{theoremB}{B}. They show that there exists a point $p = [C] \in \mr M_g^\mathbb{C}$ and a $k$-dimensional complex subspace $W \subset H^0(K_C)$ such that (\ref{exseq}) is exact. So Condition \ref{criterion} is satisfied for some point $t \in \ca T_g$. Since Condition \ref{criterion} is open for the Zariski topology on $\ca T_g$, it is dense for the euclidean topology, hence Condition \ref{criterion} holds for some $t \in B$.
\end{proof}
\subsection{Density of real plane curves covering an elliptic curve}
\begin{proof}[Proof of Theorem \ref{theorem2}\textcolor{blue}{.C}]
We need to prove that the subset $\mr R_k(V)$ of $V$ of real plane curves that map non-trivially to elliptic curves over $\bb R$ is dense in $V$, where $V \subset \bb PH^0(\bb P^2_{{\bb R}}, \mathcal{O}_{\bb P^2_{{\bb R}}}(d))$ is the set of degree $d$ smooth plane curves over ${\bb R}$. Let $d \in \bb Z_{\geq 0}$, $N = {d+2 \choose 2}$ and let $\mr B({\bb C}) \subset H^0(\bb P^2_{\bb C}, \mathcal{O}_{\bb P^2_{\bb C}}(d)) \cong \bb C^N$ be the Zariski open subset of non-zero degree $d$ homogeneous polynomials $F$ that define smooth plane curves $\{F = 0 \} \subset \bb P^2({\bb C})$. Consider the universal plane curve $\mr B({\bb C}) \times \bb P^2({\bb C}) \supset \mr S(\mathbb{C}) \to \mr B(\mathbb{C})$. The complex vector space $H^0(\bb P^2_{\bb C}, \mathcal{O}_{\bb P^2_{\bb C}}(d))$ has a real structure, i.e. $\textnormal{Gal}({\bb C} / {\bb R})$ acts anti-linearly on it and this action preserves the space $\mr B({\bb C})$. The induced action on $\mr B({\bb C}) \times \bb P^2({\bb C})$ preserves in turn $\mr S({\bb C})$, and the morphism $\pi : \mr S({\bb C}) \to \mr B({\bb C})$ is Galois equivariant. Note that the family $\pi$ is algebraic, that is, comes from a morphism of algebraic varieties $\mu: \mr S \to \mr B$ over $\bb C$. By the above, $\mr B$, $\mr S$ and $\mu$ are actually defined over $\bb R$. For a projective and flat morphism of locally Noetherian schemes with integral geometric fibers, the relative Picard scheme exists \cite[\S V, 3.1]{FGA}. We obtain an abelian scheme
$
\tn{\underline{Pic}}_{\mr S/\mr B}^0 \to \mr B
$ over $\bb R$ of relative dimension $g = (d-1)(d-2)/2$. By Theorem \ref{theorem1}, it suffices to satisfy Density Condition \ref{criterion}. In other words, we need to show the existence of $t \in \mr B({\bb C})$ and a non-zero $v \in H^{1,0}( \tn{Jac}(\mr S_t({\bb C})))$ such that $\langle v \rangle \otimes H^{1,0}(\tn{Jac}(\mr S_t({\bb C}))) \to T_t^*\mr B({\bb C})$ is injective. This is done in \cite[\textit{Proof of Proposition (6)}]{Colombo1990}, where Colombo and Pirola prove the complex analogue of Theorem \ref{theorem2}.\hyperlink{theoremC}{C}: an element $t \in \mr B(\mathbb{C})$ that satisfies the criterion is the $t \in \mr B(\mathbb{C})$ that corresponds to the Fermat equation $F = X_0^d + X_1^d + X_2^d$ (compare \cite[Proposition 3]{kim}).
\end{proof}
\end{document}
\subsection{Proving the density theorem} \label{densityproofsection}
\subfile{densityproofsection}
\end{document}
\section{Introduction}
Groups, like humans, move through successive phases; they tend to advance and regress \cite{tuckman1965}. A group is sometimes defined as three or more members that interact with each other to perform a number of tasks and achieve a set of common goals \cite{grupp}. A team, on the other hand, has developed both the goals and the means to achieve these tasks effectively \cite{validationstudy}. The emphasis on the importance of arranging work in a group form emerged, in part, from the growing awareness of the role of groups in facilitating or blocking individual and organizational effectiveness, and from the recognition that more work can be achieved in well-functioning teams than by dividing the work among individuals \cite{hinsz1997emerging}. As a result, organizations are counting on teams as the main asset for accomplishing goals \cite{facilitating}.
Group development can be defined as the process in which a group navigates a number of stages until it becomes a mature team. Consequently, the term ``group maturity'' refers to the level of development a group has acquired over the course of its lifespan. Wheelan et al. \cite{validationstudy} reported that 83\% of teams that were assessed in a study were found to be work groups without effective means to reach their common goals. A team, therefore, is here defined as one that has successfully navigated the earlier stages of group development and has emerged as a mature, high performing unit capable of achieving common goals \cite{wheelan2005}.
The work of Susan Wheelan on group development research helped determine the common threads among group development models and postulate the basis for the Integrated Model of Group Development (IMGD). In this model, a group is believed to go through five successive stages of development, namely ``Safety and Inclusion,'' ``Counter-dependency and Fight,'' ``Trust and Structure,'' ``Productivity and Work,'' and ``Termination.'' The importance of this model lies in the fact that it proposes a statistically validated instrument that measures the maturity of a given group at a given time, called the Group Development Questionnaire (GDQ). The instrument, developed by Susan Wheelan in 1993, contains four sub-scales based on the stages from her IMGD. Each sub-scale contains 15 items which measure the amount of energy a group is spending on the corresponding stage of the IMGD. A comprehensive validation study of the GDQ, performed by \cite{wheelan1996}, revealed reliability scores for scales one through four of 0.74, 0.87, 0.69, and 0.82 respectively, which indicate a good overall reliability of the GDQ items. In this study, we used the IMGD model as the theoretical framework for understanding the group dynamics of the participating work groups, and the GDQ was used to assess their group maturity.
While team performance is defined as the extent to which a team is able to meet cost, time, and quality objectives, a differentiation between two variables, effectiveness and efficiency, needs to be made in order to gain insights into the actual performance of software teams. Effectiveness refers to the team's adherence to the predetermined quality of a product \cite{hoegl2001teamwork}. In a software context, effectiveness could be the robustness or reliability of functionality in software. Efficiency, on the other hand, is evaluated in terms of team's commitment to schedules \cite{hoegl2001teamwork}, like launching software on the target date and within budget. Therefore, effectiveness reflects a comparison of actual versus intended outcomes, whereas efficiency ratings are based on a comparison of actual versus intended input \cite{hoegl2001teamwork}.
The performance of software teams is sometimes said to be measured along two extremes: objective and subjective \cite{ong2005team}. The subjective approach relies on the perception of key stakeholders (e.g., the customer) of the performance of a given team, whereas the objective approach relies on a quantitative assessment of team performance \cite{ong2005team}. One way to measure the latter is to look at the team's adherence to schedule, just like our previous definition of efficiency; both relate only to plans and not to customer value. In software teams that adopt scrum in their development, planning occurs on a sprint level, where the sizes of all completed work items are collected at the end of a sprint to determine the velocity of the team \cite{agarwal2012tracking}. The value of the completed work is only recognized when the work gets accepted by the product owner at the end of the sprint. In other words, no points are given for any work until it gets accepted. Based on this, we used the Schedule Performance Indicator (SPI) \cite{albero2014understanding} to measure the effectiveness of the scrum teams in planning their stories and delivering the expected outcome. In this research, we use the term ``planning effectiveness'' to describe the teams' ability to deliver the planned work as expected in relation to the customer value represented by the product owner. We also measured the velocity of the work groups in accomplishing their scrum tasks, at the end of a given sprint, by calculating the number of hours spent on them. As a result, the velocity measurement used in this research reflects the teams' efficiency in accomplishing scrum tasks, while planning effectiveness reflects their ability to estimate and deliver, within each sprint, the expected outcome.
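To make the two measurements concrete, the following sketch computes the SPI per sprint and the mean velocity in hours from hypothetical sprint data; the function names and all numbers are illustrative and not taken from the study.

```python
# Hypothetical sprint records: accepted (earned) vs. planned story points,
# and hours spent on completed scrum tasks. Illustrative values only.
sprints = [
    {"earned": 21, "planned": 30, "hours": 160},
    {"earned": 25, "planned": 28, "hours": 152},
    {"earned": 30, "planned": 30, "hours": 168},
]

def schedule_performance_indicator(earned_points, planned_points):
    """SPI as a percentage: earned story points over planned story points."""
    return earned_points / planned_points * 100

def mean_velocity(hours_per_sprint):
    """Mean number of hours spent on completed scrum tasks per sprint."""
    return sum(hours_per_sprint) / len(hours_per_sprint)

spi_per_sprint = [
    schedule_performance_indicator(s["earned"], s["planned"]) for s in sprints
]
velocity = mean_velocity([s["hours"] for s in sprints])
print(spi_per_sprint[0])  # 70.0 (21 of 30 planned points accepted)
print(velocity)           # 160.0
```

Note that because only accepted work earns points, an SPI below 100 can reflect either over-planning or work the product owner rejected.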
Due to the overlap and, therefore, some confusion between the constructs of performance, productivity, effectiveness, efficiency, planning, quality, etc.\ we reduced our study to only comprising Schedule Performance Indicator (SPI) with story points as a measurement of effectiveness, and mean velocity of scrum tasks as a measurement of efficiency.
It must be noted here that we do not claim that mean team velocity and planning effectiveness are by any means a complete measurement of performance. However, we believe the related work presented in Section 3 provides us with good reasons to believe that they are, at least, key factors in software development performance.
Several studies that used group development psychology as a theoretical framework have been conducted to examine the effect of group maturity on the productivity of teams in different contexts \cite{school,intensivecare,facilitating}. This highlights the usefulness and versatility of understanding groups from this perspective. However, empirical evidence regarding the influence of group maturity on the success of software engineering work groups with innovative tasks is lacking. In fact, studies demonstrating such a link in the field of agile software engineering have only just begun~\cite{grenjss2}, and the agile approach does imply more focus on teams, which calls for even more empirical studies of this kind. To the best of our knowledge, only one study investigates the link between group maturity and agility of software teams in more detail~\cite{grenjss2}. In the same authors' earlier conference publication~\cite{maturityAgility}, they suggested the use of velocity as a factor to further validate their findings, since the tool used in their study, Sidky's \cite{sidky}, is not thoroughly validated~\cite{grenjss}, which means that it might not even measure agility. Therefore, our research investigates the correlation between group maturity and velocity to address this gap, but also adds the aspect of planning effectiveness. Planning effectiveness was added since we also wanted to investigate another essential mechanism of software performance that is more dependent on group dynamics than velocity is, i.e., developing features fast might not be useful if they do not add customer value.
\section{Background}\label{ch:related_work}
\subsection{Group Development}
Group development research began in the 1930s with the work of Lewin on group climate and conflict between groups. The study of the behavior of small groups was launched with the establishment of a research center for group dynamics in 1946 \cite{verbal}.
Bion \cite{verbal} described the effect of emotional states on group development. The results revealed that two levels of activity are found in groups. One level is geared towards the accomplishment of tasks, known as the \emph{work group}, whereas the other, known as the \emph{basic assumption group}, interferes with task achievement. Dependency, fight-flight, and pairing were identified as the emotional states that divert a group from its work task, i.e., the basic assumption group, but these are crucial for group cohesion. They are not necessarily sequential and can occur at any time during the life of a group. Bion's theory was further expanded by Slater \cite{verbal}, who postulated that the themes that affect group development are the relationship of members to the leader, the group's need for order, and its wish for immortality.
Likewise, the influence of emotional states on group development has been recognized in a plethora of studies; for example, members experiencing negative emotions were shown to influence team performance regardless of the team's level of integration \cite{rippleeffect}. The study also showed no significant difference between the effect of positive versus negative valence on emotional contagion, i.e., both negative and positive emotions result in a contagion of moods among group members to varying degrees.
An integrative theory of linear and cyclic models was first introduced in 1964 \cite{wheelan2005}. The theory postulates the existence of four primary elements in group development. \emph{Acceptance}, which focuses on the creation of trust and the reduction of anxiety, and the growth of self-confidence among members of the group. \emph{Data-flow}, involves the ability of a group to make decisions as a result of communicated feelings and data across its members. \emph{Goal Information}, relates to the group's productivity as evidenced by their ability to perform problem solving and decision-making. The final element is referred to as \emph{Control}, the degree by which members of the group are recognized as interdependent and organized \cite{wheelan2005}.
A major comprehensive analysis of various group development models was conducted by \cite{tuckman1965}. In this analysis, 50 articles on group development were reviewed based on a classification system of three elements: 1) setting (such as a laboratory group, natural group, or therapy group), 2) task or social focus, and 3) stage of development. The result of this analysis was a conceptual model comprising four stages of group development in which each stage has a social realm and a task realm. The four stages proposed by Tuckman are: \emph{forming}, characterized by high dependency, orientation, and testing; \emph{storming}, during which resistance to both tasks and the influence of the group is apparent; \emph{norming}, in which opinions are more freely expressed; and \emph{performing}, in which the focus is on task accomplishment after structural issues in the group are resolved. In a review of this model, Tuckman added a fifth stage of \emph{adjourning} \cite{tuckman1965, tuckman1977}. This followed a review by \cite{mills} of his four-stage model, which suggested adding a separation and a conclusion stage. Tuckman's theory gained empirical support from many researchers \cite{wheelan2005}.
In the review of a number of the studies that did not support the group development stage theory, \cite{cissna1984} pinpointed a number of erroneous approaches in the methods adopted in these studies consequently citing that ``every group is like all groups in some respect, and like no groups in other respect'' \cite{cissna1984}. Moreover, there is ample evidence in the body of literature which support the theory of stages in group development \cite{wheelan2005}. While these models share the same view that groups face a basic set of developmental changes over time, the differences persist in the recognition and labeling of each stage and their sub-components in the group development \cite{wheelan2005}.
\paragraph{The Integrated Model of Group Development (IMGD)}
The IMGD was theorized after consolidating previous theories which proposed a unified group development model for all group types \cite{wheelan1994}. The overall goal of group development was set to establish an organized unit of members capable of working effectively to achieve specific goals. What follows is a description of the five stages, found in the IMGD, which describe the behavioral pattern of any group type \cite{wheelan1994}.
\paragraph{Stage One}
The first stage is a period of \emph{Dependency and Inclusion}, where members tend to show significant dependency on the leader in resolving new issues. At this stage, members spend a significant amount of energy to achieve a feeling of safety and inclusion in their group. As a result, members become leader-focused, in the sense that the leader is expected to provide protection for the members. Members engage in an exploratory phase for the sake of identifying their roles, the rules, and the structure within the group. Their exploration is characterized by being tentative and overly polite since they fear being rejected \cite{wheelan2005}.
\paragraph{Stage Two}
The second stage is referred to as \emph{Counterdependency and Fight}. At this stage, members feel freer to express conflict between each other or among members and leaders since some needs for safety have been achieved in the previous stage. The group tries to free itself from being leader focused, and tends to fight about the group's goals. Coser explained that conflict is an important part for the development of cohesion, as it provides the opportunity for setting the psychological boundaries, which facilitate the establishment of goals, shared values, and structure \cite{verbal}. The occurrence of conflict is a result of the members' attempt to reach a unified direction out of the many divergent viewpoints. The rise of coalitions between members who share similar values and ideas is very much apparent \cite{wheelan2005}.
\paragraph{Stage Three}
After navigating the inevitable stage of conflicts, communication becomes more open and members' trust and cooperation increase. Feedback and information sharing increase rather than being kept as a way to gain power.\ The aforementioned characteristics consolidate a more solid and positive relationship between members, which allow the group to carry out more mature negotiations about their goals and procedures. The group is at a stage where it is designing and preparing itself to start working effectively. Although work occurs in all the stages of group development, the group's focus on structure and goals at this stage significantly increases the group's capacity to work more productively \cite{wheelan2005}.
\paragraph{Stage Four}
As soon as the goals and structure of the group are set in the previous stage, the group's focus shifts to getting the work done well while group cohesion is maintained; the group remains cohesive even while engaging in task-related conflict. It is in stage four that the group can start self-organizing and the leader can step back and act as an expert member of the team instead of directing the work \cite{wheelan2005}.
\paragraph{Stage Five}
Most groups, whether temporary or ongoing, experience an ending point at some time in the course of their lives. At the ending point, functional teams tend to give feedback to each other \cite{wheelan2005}. It has been reported that this type of processing is important for individual members since it enhances their ability to work effectively in the future. The impending termination of a group alters its structure and is likely to result in the group's regression to earlier stages of group development \cite{wheelan2005}.
\subsection{Tools for Measuring Group Development}
Various self-reporting instruments have been developed in the last few decades to aid team building and highlight the importance of group development. In order to decide which instrument to use for measuring the maturity of the work groups, we reviewed some of the tools and investigated if they were statistically tested for validity and reliability. Below are some of the tools that we came across in our literature review:
\begin{itemize}
\item The Team Development Inventory (TDI)
\item The Group Development Stage Analysis
\item The Group Attitude Scales
\item and the Group Development Questionnaire (GDQ)
\end{itemize}
Our investigation showed that the GDQ has been studied thoroughly relative to validity and reliability \cite{wheelan1996}, which makes it an appropriate choice for measuring the maturity level of work groups.
\paragraph{Group Development Questionnaire (GDQ)}
Based on the IMGD, the GDQ was developed after being subjected to a number of statistical tests for reliability and validity \cite{wheelan1996}. The 60-item instrument contains a total of four scales. Each scale contains fifteen items, which corresponds to a single stage in the IMGD. For copyright reasons, only three items from each scale are presented (see Table \ref{GDQ}). The instrument does not assess the termination stage since it is meant for use with existing groups only. Items on scale I measure the amount of energy a group is spending in dealing with issues of inclusion and dependency. Items on scale II seek to measure the amount of group focus on issues of counter-dependency and conflict. The group's current level of trust and structure is measured by scale III, which corresponds to stage three in the group development model whereas the group's maturity on the ``work and productivity'' is measured by scale IV \cite{wheelan1994}.
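Since the GDQ items themselves are proprietary, the scoring structure described above can only be illustrated schematically. The sketch below assumes, purely for illustration, that the 60 item responses arrive ordered so that items 1--15 belong to scale I, 16--30 to scale II, and so on; the real item order and scoring key are not public.

```python
def gdq_scale_scores(responses):
    """Sum 60 Likert-type item responses into four 15-item scale scores.

    Assumes (hypothetically) that items are ordered scale I first,
    then II, III, and IV; the actual GDQ item ordering is proprietary.
    """
    if len(responses) != 60:
        raise ValueError("The GDQ contains 60 items")
    return [sum(responses[i * 15:(i + 1) * 15]) for i in range(4)]

# A group answering every scale-I item with 1, scale-II with 2, etc.:
print(gdq_scale_scores([1] * 15 + [2] * 15 + [3] * 15 + [4] * 15))
# [15, 30, 45, 60]
```

In this scheme a mature group would show low sums on scales I and II and high sums on scales III and IV, mirroring the criterion-related validity results reported below.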
Internal consistency tests for each fifteen-item scale were performed to ensure that all items within each scale were consistent \cite{wheelan1996}. Furthermore, the instrument was correlated with the Group Attitude Scale to establish concurrent validity \cite{evans1986}. The results indicated a significant concurrent validity between the two measures. Moreover, criterion-related validity was investigated. Results showed that groups who ranked high on productivity had significantly lower scores on the first and second scales of the GDQ. Similarly, groups that ranked high on productivity scored significantly higher on the third and fourth GDQ scales \cite{wheelan1996}.
\subsection{Soft Factors Affecting Software Team Performance}
Purna et al. \cite{purna2011soft} classified the factors that influence the performance of software teams into technical, non-technical (soft), organizational, and environmental. These
factors are interconnected with each other and together they contribute to the overall performance of a software development team. Since this study focuses on investigating the relationship between group maturity and some aspects of software performance, only soft factors were considered. Below are some of these factors that were shown to influence the performance of software development teams, as we came across in our literature review.
\paragraph{Team diversity}
Team diversity stems from a myriad of sources such as education, experience, ethnicity, culture, skills, age, and gender. The results from a study conducted by \cite{liang2007effect} showed that knowledge diversity in teams positively influenced team performance, whereas value diversity had a negative influence on team performance.
\paragraph{Team member competencies and characteristics}
Competencies can be classified into two categories: technical and personal competencies \cite{asproni2004motivation}. Asproni \cite{asproni2004motivation} explained that personal competencies can sometimes outweigh the technical in their influence on team performance. For example, a team of junior programmers with high personal competencies can perform better than a team of senior software developers. Similarly, another study conducted by \cite{huckman2009team} in cooperation with a software development company in India confirmed that having team members who have previously worked with each other has a positive influence on team performance regardless of the years of experience of team members.
\paragraph{Conflicts in team}
A study conducted by \cite{sawyer2001effects} on 40 software development teams found that team members' characteristics and intra-group conflicts explained half of the variance between good and bad performing teams. The results concluded that intra-group conflicts have a negative influence on the performance of software teams. Similarly, it was also shown by \cite{gren2017links} that some of the agile practices are negatively affected by interpersonal conflict.
The IMGD model characterized a productive work group as one that has navigated the earlier stages of group development and has become more focused on building trust and structure, and work and productivity. As the IMGD describes some of the behavioral aspects manifested by groups in all the stages of group development, these aspects show similarity with some of the soft factors described above. For example, \cite{wheelan1994} described stage two in group development as a period of fight and counter-dependency where conflicts between members are prominent, which negatively affects the group performance. Likewise, Sawyer \cite{sawyer2001effects} suggested that conflict in teams is a significant factor that leads to a deterioration in team performance. Moreover, Wheelan \cite{wheelan2005} described a stage three group as one whose members communicate more openly, cooperate more effectively, and share information and feedback. Similarly, \cite{anderson1998measuring} proposed that clarity of the team's goals, a safe working environment that supports idea sharing, and the participation of individuals are key factors in positively affecting the performance of software teams.
\subsection{Software process improvement models}
Over the years, the high rate of software failures has been a challenge confronted by many software organizations \cite{integratingimprovement}. In 2001, a survey of over 8,000 U.S. software projects reported a schedule overrun of 120\% and a project cancellation rate of 25\% \cite{integratingimprovement}. The problem with most of these software projects is that they are run in an ad-hoc fashion, where poor planning and high defect rates are common. Improving the outcomes of these projects requires an effort on three different levels: organization, team, and people \cite{TSPinGSD}. For each level, different models were developed to ensure continuous improvement and quality.
Models such as CMMI (Capability Maturity Model Integration) operate on the organizational level and are based on the premise that organizations continuously look for ways to evaluate and improve their current processes with the goal of achieving better quality.
On the other hand, models such as TSP (Team Software Process) work on the team level, providing sound software engineering guidelines for engineers to create and maintain self-steering software teams. The model provides a framework for groups working on software-intensive projects to organize and manage their work via an iterative cycle of eight phases: launch, strategy, planning, requirements, high level design, implementation, integration and testing, and post-mortem \cite{TSPinGSD}. In all iterations, the launch stage is conducted to clarify goals and assign roles for group members. Members of work groups conduct regular meetings, usually weekly, to share data about their work including goals achieved, risks that have developed, and issues that have emerged. The openness in sharing these data promotes for an atmosphere of trust and structure in the work group, where members are encouraged to report, listen, and contribute in planning their work \cite{TSPRevolution}. The competitive advantage of implementing TSP was reported by several global organizations. For example, in a TSP project implemented by Hill Air Force Base, a U.S. government organization rated at CMM Level 5, a productivity improvement of 123\% and an average reduction of 20\% in the test time of the project schedule were reported \cite{humphrey2000personal}. Another example is in an avionics project carried out by Boeing where 94\% of system test time was reduced by the implementation of TSP resulting in substantial improvements in the project schedule and allowing Boeing to deliver a high-quality product ahead of schedule.
Lastly, models such as PSP (Personal Software Process), which is a prerequisite for TSP, deal with people. This model provides methods that allow individual engineers to improve their planning and reduce product defect rates. By utilizing PSP, practitioners learn how to manage and evaluate the quality of their work. The PSP model provides a set of advantages that improves the performance of software engineers. The results from a study conducted by \cite{humphrey2000guest} showed that by adopting PSP, engineers could overcome resistance to transition when introduced to new technology. Also, the model teaches engineers a wide variety of skills ranging from requirements and system design to testing and deployment.
While all of these software process improvement models aim at improving software quality, they disregard the psychological element associated with changes in group dynamics over time within work groups and its influence on building mature teams. The TSP model was founded on the premise that building mature teams, capable of cooperating on tasks and working towards shared goals, improves their planning effectiveness and work quality. Likewise, the IMGD model suggests that mature teams, who are at stage four of group development, are highly effective and deliver high quality products in a timely manner. Both models promote trust and cooperativeness as a vehicle for teams to become more effective. However, unlike TSP, which is tailored specifically to software engineering teams, the IMGD model accounts for the group development stages and acknowledges their influence on building mature teams.
\begin{table}
\caption{An excerpt of the items contained in each Group Development Questionnaire (GDQ) scale} \label{GDQ}
\centering
\begin{tabular}{|c|l|}
\hline
\footnotesize \textbf{Scale} & \footnotesize \textbf{Sample Items} \\ \hline
\footnotesize GDQI & \footnotesize Members tend to go along with whatever the leader suggests \\
& \footnotesize There is very little conflict expressed in the group. \\
& \footnotesize We have not discussed our goals very much. \\
\footnotesize GDQII & \footnotesize People seem to have very different views about how things should be done in this group \\
& \footnotesize Members challenge the leader's ideas\\
& \footnotesize There is quite a bit of tension in the group at this time \\
\footnotesize GDQIII & \footnotesize The group is spending its time planning how it will get its work done \\
& \footnotesize We can rely on each other. We work as a team. \\
& \footnotesize The group is able to form subgroups, or subcommittees, to work on specific tasks. \\
\footnotesize GDQIV & \footnotesize The group gets, gives, and uses feedback about its effectiveness and productivity \\
& \footnotesize The group acts on its decisions \\
& \footnotesize The group encourages high performance and quality work \\ \hline
\end{tabular}
\end{table}
\section{Related Work}
\paragraph{Application of IMGD in Different Contexts}
Several studies, adopting the IMGD as a theoretical framework, have been conducted to examine the effect of group maturity, as measured by the GDQ, on the productivity of teams in different contexts, highlighting the usefulness and versatility of this tool. One study looked at the learning outcomes of students in schools as measured by math, reading, and achievement ranks and the maturity level of school administrators as measured by the GDQ. The study found a significant relationship between the functioning of the faculty group and students' learning outcomes \cite{school}. Similarly, another study investigated the relationship between the level of teamwork in the Intensive Care Unit (ICU) and patient outcomes. Data were analyzed by correlating the ICU mortality rate (patients' risk of dying in the hospital using a mortality prediction system) and the stage of group development of 394 staff members in the 17 participating ICUs in nine hospitals. A significant correlation was identified between a unit's stage of group development and that unit's mortality rate \cite{intensivecare}. As the staff perception of their level of group development increased, the mortality rate in their unit decreased, i.e., the higher the level of group development, the fewer deaths occurred. A third study used the GDQ to plan an appropriate intervention to improve the effectiveness of three work groups in semi-governmental organizations. In this study, the group development scores of the three groups on the four GDQ scales were determined, an appropriate intervention to improve the teams' effectiveness was devised, and a three-month follow-up plan was set to determine whether significant positive changes had occurred. The intervention revolved around the issues revealed by the GDQ data. For example, member discussion was encouraged to focus on the importance of hearing opinions from all team members, reducing the dominance of the leader without creating a hostile environment, etc.
Paired samples tests were employed to determine whether the intervention resulted in a positive significance on the fourth GDQ scale and effectiveness ratio within each group from pre to post tests \cite{facilitating}.
These various models suggest that interactions within a group display predictable patterns and that human interactions affect work performance within a group. These models have been the result of mainly observation of groups functioning in different settings (laboratory group, natural group, therapy group, etc.). The culmination of these models helped Susan Wheelan formulate the IMGD (Integrated Model for Group Development) which, unlike many other models, developed an instrument, the GDQ, to capture data on how groups behave and progress relative to stages of group development.
\subsection{Software Team Performance Measurement}
Ong et al. \cite{ong2005team} identified two approaches in which the performance of software development teams can be measured: objective and perceptual (subjective). The first approach includes measuring function points, object points, use case points, kilo lines of code, and defect rate. Sawyer \cite{sawyer2001effects} explained that perceptual measures, such as quality of the product and satisfaction with the product, should be taken from external stakeholders in order to account for self-bias. The perceptual or subjective approach relies on the group's perception of their team performance and is based on items such as \emph{our group is very productive, we work well as a team, and the quality of our work is very good} \cite{bahli2005group}. Table~\ref{table:teamPerformance} classifies the two approaches.
\begin{table}
\caption{Approaches for Measuring Software Team Performance. Taken From \cite{purna2011soft}}
\centering
\begin{tabular}{ | p{4.0cm} | p{4.0cm} |} \hline
M1 - Objective measures \begin{itemize}
\item Function Points
\item KLOC
\item Object Points
\item Use Case Points
\item Defect Rates
\item Defect Density
\item Quantitative Metric
\end{itemize} &
M2 - Subjective\slash Perceptual Team Performance Ratings By:
\begin{itemize}
\item Team Members
\item Management
\item Customer
\end{itemize} \\ \hline
\end{tabular}
\label{table:teamPerformance}
\end{table}
Similarly, \cite{ramasubbu2007globally} concluded in another study that software team performance is measured in terms of function points per person-hour and conformance to quality. Conformance quality refers to the defect rate reported by the customer during acceptance testing. A team's adherence to budget and schedule is another measure of performance reported by \cite{boehm1981software}. According to \cite{purna2011soft}, a team's performance is a function of what individual team members are doing. More specifically, a successful team is characterized by the following: 1) shared leadership roles, 2) specific and clear goals, 3) mutual accountability, and 4) collective problem solving.
Albero Pomar et al. \cite{albero2014understanding} proposed two techniques for predicting future performance of scrum software teams. The first approach relies on plotting the accrued velocity for all previous sprints in order to identify the trends (downward or upward) of performance. The second approach depends on calculating a confidence interval to comprehend the probability of future velocities. They also proposed exploiting a traditional (non-agile) project management metric to gauge the amount of completed work over the planned work. The metric is calculated as the ratio of the total earned points over the total points planned in the sprint planning meeting \cite{albero2014understanding}.
\[
\text{Schedule Performance Indicator} = \frac{\text{Earned Points}}{\text{Planned Points}} \times 100
\]
\section{Method}\label{ch:research_methodology}
The objective of this paper is to investigate and analyze whether group maturity is related to aspects of the performance of software development teams. More specifically, performance is examined by measuring both planning effectiveness and development velocity of four participating work groups from company A.
\paragraph{Research Questions}
This study aims to contribute to answering the following questions.
\begin{enumerate}
\item What is the association between group maturity and planning effectiveness?
\item What is the association between group maturity and software development velocity?
\end{enumerate}
Group maturity in the four participating work groups was measured using the GDQ. The software development velocity was in turn measured by calculating the number of hours spent on developing scrum tasks for each member in the participating teams whereas planning effectiveness was assessed by using the Schedule Performance Indicator metric.
\subsection{Case}
A combination of qualitative and quantitative data was used in this study. According to \cite{methodologyguidelines}, a case study is a suitable methodology for software engineering research, since it provides a deeper understanding of the phenomena under study. As a result, a case study was selected as the most suitable means for conducting this research. Using both qualitative and quantitative data provides an in-depth understanding of the way the participating groups function and facilitates a better comparison between the groups.
\subsection{Subject Selection}
\paragraph{Company Description}
Company A is a Swedish company with 1,400 employees located in four different countries. The company is active in the fields of software development and business development. The increasing growth of the company's market share has created a need for the company to pursue more efficient and effective ways of developing its products. Part of its development effort is spent on developing the group dynamics in its software development teams. This research was conducted in collaboration with the company's staff at their branch in Gothenburg, Sweden.
\paragraph{Work Groups in Company A}
First, we would like to reinforce the distinction, made in the introduction section, between teams and work groups in order to clarify the terms used in this research. A team is a structured group of individuals who share well-defined common goals that require coordinated interactions in order to effectively accomplish their tasks. A work group, on the other hand, is one in which members accomplish their tasks successfully, but do not necessarily coordinate well or share the same goals \cite{teamsgroups}. Accordingly, we decided to use the term \emph{work groups} to refer to the participating groups in this research. Additionally, we refer to the work groups by anonymized names to protect their identities.
Four software development work groups adopting scrum participated in this research. All work groups were formed eight to 40 months prior to this research. Group sizes ranged from three to six members, with ages ranging from 20 to 60 years. The time during which the work groups had practiced scrum ranged from eight to 40 weeks. All work groups are cross-functional, meaning that the skill sets of members within each work group were homogeneously distributed and that members have the necessary skills to perform multiple essential roles in the development process. All of the work groups receive work packages, analyzed and defined by company B, which acts as the main customer for company A. These work packages shape the work groups' product backlogs, which contain a number of requirements, written in the form of user stories, from which the teams select and plan their development cycles (or sprints). The assignment of work packages to the work groups is done based on their competence level.
\subsection{Data Collection} \label{datacollection}
Estimations of user stories are done using planning poker, which is used to estimate the complexity, in units of points, of either new features or change requests. Each work group collaborates closely with a designated product owner assigned by company B to represent the business, prioritize requirements, and convey the product vision. It is important to mention that the selection of user stories, done at every planning and review meeting, is based on the priority of requirements conveyed by the product owner to the development group rather than on members' preferences. Participating groups use a web-based project management and issue tracking tool. This allows them to manage their projects and visualize their work progress at any point in time. Stories are located at the leftmost part of the UI and are moved to the right as they progress towards completion. The UI is divided into seven columns, starting from the far left: new, in progress, needs review, blocked, closed, and rejected. The column \emph{in progress} indicates scrum tasks that have been assigned to an individual for development. The column \emph{blocked} contains all the stories that are temporarily blocked because of external dependencies or the absence of the assignee. The column \emph{closed} refers to the stories that were completed by members.
The selection of software development work groups was carried out with help from a gatekeeper at company A. Data were collected from the work groups (N=4) at their work site during regularly scheduled meetings, with all members of each respective work group present. We used multiple data sources to increase the validity of the findings. Below are the data collection steps arranged in chronological order.
\paragraph{Unstructured Interviews}
Brief interviews of approximately 15 minutes each with the scrum master of each work group were conducted at the onset of the data collection process. These interviews allowed the author to gain a better understanding of the context of the groups' work and to schedule the GDQ fill-out sessions and semi-structured interviews with the 19 participants from the four work groups. Some scrum masters were interviewed twice over the course of this study as new issues emerged.
\paragraph{The Maturity Levels of the Groups}
To examine the maturity level of the participating groups, the GDQ was used to obtain the members' perception about how each group is functioning. Individuals were requested to answer the sixty questions of the GDQ. All the GDQ fill-out sessions occurred during the last week of the group's ongoing sprints. This time was chosen to give the work groups the longest time possible in the sprint to resolve any issue related to their dynamics. A background variable in the GDQ is a question regarding the perceived productivity of the work group rated from not productive at all to highly productive.
\paragraph{Development Velocity}
In this study, the velocity of the four participating work groups was measured by calculating the mean of hours spent on implementing a number of new scrum tasks that were planned as part of new features. Scrum tasks were chosen over user stories since tasks, unlike stories, share similar complexity as each corresponds to a small unit of work planned by the scrum team \cite{tasks}. This method of measurement was discussed and approved by Company A.
To measure the velocity of the work groups, access to their task boards was granted and velocity data were collected at the end of the same sprint in which the GDQs were administered. For each work group, an average of 40 completed (closed) tasks, planned under new features, were arbitrarily selected from their last development cycle, with eight tasks on average taken per individual. Consequently, the difference between the end and start time (in hours) of each task was computed, and the total time during which the task was \emph{blocked} was deducted. A given task may get blocked in the event of a disruption caused by an external dependency or an unexpected member drop-out or absence. For example, if the assignee was on leave, the status of their \emph{in progress} tasks would be temporarily set to blocked. This requires the assignee to remember to change the status of the task to \emph{blocked} before leaving and back to \emph{in progress} upon return. The scrum masters of the four work groups were requested, prior to the start of the sprint, to ask their members to update their task statuses promptly with every change. This mitigated the risk of skewness in the data resulting from members forgetting to update the status of tasks.
Subsequently, the mean value of tasks accomplishment, for each work group, was calculated and recorded as the group's velocity.
\[
\text{Mean Velocity} = \frac{\sum_{i=1}^{n}\bigl((\text{End time}_i - \text{Start time}_i) - \text{Blocked time}_i\bigr)}{n}
\]
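The velocity computation above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual tooling; the task dictionaries, their field names, and the hour values are hypothetical.

```python
# Sketch of the mean-velocity computation: per task, effective hours are
# (end - start) minus the time the task spent in the "blocked" state.
# Field names ("start", "end", "blocked") are illustrative assumptions.
def mean_velocity(tasks):
    """Mean effective hours per completed task."""
    effective = [(t["end"] - t["start"]) - t["blocked"] for t in tasks]
    return sum(effective) / len(effective)

# Hypothetical closed tasks for one work group (hours):
tasks = [
    {"start": 0, "end": 10, "blocked": 2},   # 8 effective hours
    {"start": 0, "end": 6,  "blocked": 0},   # 6 effective hours
    {"start": 0, "end": 13, "blocked": 3},   # 10 effective hours
]
print(mean_velocity(tasks))  # → 8.0
```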
Ultimately, the first author sent the computed velocity, together with the IDs of the selected tasks, to the scrum master of each work group for cross-checking. Two scrum masters reported errors in the computation of the mean values, pinpointing a lack of compliance by two members in updating the status of five tasks. Consequently, we recomputed and recorded the mean velocity of the two affected work groups.
\paragraph{Planning Effectiveness}
Since all of the four participating work groups adopt scrum as their development methodology, they decide what can be accomplished in each sprint during their planning and review meetings.\ Accordingly, teams take into consideration the complexity of stories, the group's availability, and their technical competence level in planning what they can commit to in each sprint.\
The planning effectiveness of the work groups was measured using the Schedule Performance Indicator metric, which calculates the ratio of their total earned points over the total planned points for a given sprint.
\[
\text{Schedule Performance Indicator} = \frac{\text{Earned Points}}{\text{Planned Points}} \times 100
\]
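As a concrete illustration, the metric can be computed directly. The sketch below is not part of the study; the example values are the planned and earned points of group C's first selected sprint (see Table~\ref{tab:planning}).

```python
# Schedule Performance Indicator: earned over planned story points, in percent.
def spi(earned_points, planned_points):
    return earned_points / planned_points * 100

# Group C, first selected sprint: 18 earned out of 22 planned points.
print(round(spi(18, 22), 1))  # → 81.8
```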
The mean planning effectiveness over the selected sprints was calculated for each work group. Sprints were selected according to two criteria. The first criterion was that the structure of the work group remained unchanged, that is, no individuals joined or left the work group. The second was that its maturity level remained stable. This was confirmed during the semi-structured interviews, when participants were asked \say{How long has the team's maturity level been stable?}, allowing for an estimation of the duration during which their work group's maturity had not changed. Table~\ref{tab:planning} shows the total number of planned versus earned story points for each selected sprint for all the participating work groups. As can be seen, the number of sprints from which the planned and earned points were collected varied considerably, reflecting the differences in how long a given group's maturity remained unchanged. For example, the responses of the majority of members of group A during the semi-structured interviews revealed that, in their opinion, their maturity had remained unchanged over the past ten sprints; therefore, data from this period only were collected. On the other hand, the majority of members of work groups C and D agreed that their maturity had remained unchanged over the past four sprints, so the planned and earned points were collected from those sprints only. Table~\ref{tab:planning} also shows considerable variations in the planned points of some sprints of groups C and D. For example, a drop in the planned points for the fourth sprint of group D was identified (from 40 to 9). These fluctuations in the planned points between sprints can be attributed to the cumulative experience attained over previous development iterations as well as to the availability of members within each respective group.
\begin{table}
\caption{Planned vs. Earned Points}
\centering
\begin{tabular}{|p{0.8cm}|p{1.0cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.2cm}p{0.3cm}|}
\hline
Group & \textbf{Sprint}& \textbf{1} & \textbf{2}& \textbf{3}& \textbf{4} & \textbf{5}& \textbf{6}& \textbf{7}& \textbf{8} & \textbf{9} & \textbf{10} \\ \cline{1-12}
\footnotesize \centering A & Planned & \footnotesize4 & \footnotesize3 & \footnotesize5 & \footnotesize7 & \footnotesize4 & \footnotesize6 & \footnotesize2 & \footnotesize4 & \footnotesize2 & \footnotesize 8\\
&\footnotesize Earned & \footnotesize 0 & \footnotesize3 & \footnotesize3 &\footnotesize6 & \footnotesize2 & \footnotesize6 & \footnotesize2 & \footnotesize4 & \footnotesize2 & \footnotesize0\\ \hline
\footnotesize \centering B &\footnotesize Planned & \footnotesize 4 & \footnotesize 4 & \footnotesize6 & \footnotesize8 & \footnotesize6 & \footnotesize5 & \footnotesize 2.5 & \footnotesize- & \footnotesize- & \footnotesize-\\
&\footnotesize Earned &\footnotesize 2 & \footnotesize2 & \footnotesize 4 &\footnotesize 6 & \footnotesize0 & \footnotesize1 & \footnotesize0.5 & \footnotesize- & \footnotesize- & \footnotesize-\\ \hline
\footnotesize \centering C &\footnotesize Planned & \footnotesize22 & \footnotesize18 & \footnotesize14 & \footnotesize30 & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize-\\
&\footnotesize Earned & \footnotesize18 & \footnotesize10 & \footnotesize11 & \footnotesize21 & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize-\\ \hline
\footnotesize \centering D &\footnotesize Planned & \footnotesize80 & \footnotesize63 & \footnotesize40 & \footnotesize9 & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & -\\
&\footnotesize Earned & \footnotesize65 & \footnotesize24 & \footnotesize21 & \footnotesize3 & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize- & \footnotesize-\\ \hline
\end{tabular}
\label{tab:planning}
\end{table}
\paragraph{Semi-Structured Interviews}
A primary source of data collection was semi-structured interviews, a common way of interviewing in case study research \cite{qualitative}. These involve working from an interview guide -- a list of prepared questions and topics aimed at ensuring systematic and chronological coverage across interviews. However, the interview is flexibly conducted to allow for self-elaboration and exploration of emerging issues \cite{allison2007software}. In this research, the main purpose was to explore issues of group development further, as well as to strengthen the validity of the responses obtained from the surveys (the GDQ). Following the interviewees' approval to participate, each interview was taped, transcribed, and coded. Sixteen of the 19 members agreed to have their interviews taped, while two members did not. As a result, the author did not include the latter in the data collection.
\subsection{Data Analysis}
\paragraph{Normality Test}
A first step in deciding which correlation method to use in the data analysis was to evaluate whether the data were normally distributed. We conducted a Shapiro-Wilk test on the residuals of each of the four GDQ scales and the velocity of the four participating work groups. The \emph{p} values for the velocity of groups A and C indicate statistical significance, with \emph{p}=.05 for group A and \emph{p}=.048 for group C. As a result, the normality assumption of a linear regression model does not hold. In addition, the Q-Q plot of the residuals for velocity showed a wide scatter in the distribution of residuals around the regression line, which supports the finding from the Shapiro-Wilk analysis that the normality assumption is not valid. Spearman's rank-order correlation analysis was, therefore, selected as the most appropriate method for correlating the collected data set.
\paragraph{Quantitative Data Analysis}
Spearman's rank-order correlation coefficient was used to investigate the connection between group maturity and development velocity, and between group maturity and planning effectiveness. Given the normality check and the small sample size available in this research (four groups), Spearman's correlation was chosen as the most appropriate method, since it does not assume normality of the data. SPSS was used to investigate the aforementioned correlations. For question one, Spearman's correlations were run on both the individual and the group level, using individual data (19 group members) and then group data (four groups). For question two, Spearman's correlation was run on the group level only, because planning effectiveness is a group endeavour rather than an individual one. Running the analysis on both levels reinforces the idea of the IMGD theory that the dynamics of a particular group constitute the source of individual perceptions of that group, and it emphasizes the idea that groups, not individuals, should be the key element of any change efforts deemed important. In addition, some demographic background variables collected from the groups' responses on the GDQ were tested for correlation with the four group development scales. Specifically, this was done to examine the impact of the individuals' age, educational background, and employment time in company A on the four maturity scales.
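For reference, Spearman's rank-order correlation can be sketched in a few lines of pure Python (the study itself used SPSS; this simplified version assumes no tied ranks). Using the GDQ4 and planning-effectiveness group means reported later in Table~\ref{table:groupPlan}, it reproduces the perfect positive correlation found in the analysis.

```python
# Minimal Spearman's rho: rank both variables, then apply
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).  Assumes no tied ranks.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Group means for groups A-D (Table 6): identical rank orders give rho = 1.
gdq4 =     [63.80, 53.17, 60.20, 64.67]
planning = [66.11, 40.20, 51.27, 71.48]
print(spearman(gdq4, planning))  # → 1.0
```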
\paragraph{Qualitative Data Analysis}
Thematic analysis was used to interpret the collected qualitative data. The data from the semi-structured interviews were collated into electronic documents, which made the process of handling, searching and comparing the large volumes of data more convenient and manageable. Data were broadly categorized into seven themes, which were related to the dynamics within the work groups, in order to address some of the issues in the four stages of group development (see Table~\ref{themes}).
\begin{table}
\caption{Themes Explored in Group Development} \label{themes}
\centering
\begin{tabular}{cc}
\hline
\footnotesize \textbf{Themes} & \footnotesize \textbf{Stage} \\ \hline
\footnotesize Leader Dependence & \footnotesize I \\
\footnotesize Tentativeness and Politeness & \footnotesize I \\
\footnotesize Participation and Cooperativeness & \footnotesize II \\
\footnotesize Subgroups or Cliques & \footnotesize II \\
\footnotesize Goal Clarity & \footnotesize III \\
\footnotesize Structure & \footnotesize III \\
\footnotesize Trust & \footnotesize III \\
\footnotesize Goal Accomplishment & \footnotesize IV \\ \hline
\end{tabular}
\end{table}
Based on these themes, a list of seven questions was prepared to address issues related to the four GDQ scales. Additionally, the last question was asked to estimate the number of sprints to consider when calculating the planning effectiveness of each work group (see Table~\ref{table:semistructured}). The Nvivo software was used for transcribing and coding the data.
\begin{table}
\caption{Semi-structured Interview Questions}
\centering
\begin{tabular}{c}
\toprule
\footnotesize \textbf{Questions} \\ \hline
\footnotesize What are your roles in the team? \\
\footnotesize Are members overly polite to each other? \\
\footnotesize Are members hesitant to ask for support from each other? \\
\footnotesize Are there subgroups in the team? \\
\footnotesize Is trust high in the team? \\
\footnotesize Are you clear on your team goals? \\
\footnotesize What is causing delays in your sprint? \\
\footnotesize How long has the team's maturity level been stable? \\ \hline
\end{tabular}
\label{table:semistructured}
\end{table}
\subsection{Ethical Considerations}
The importance of ethical standards of conduct for maintaining trust and collaboration with the participants has been highlighted by many authors \cite{runeson2009tutorial}. Participants were informed about the objectives of the research, the nature of their involvement, the measures that would be taken to protect their identities, and their right not to participate or to withdraw at any stage. This was first done during scheduled meetings with the scrum masters, and then explained to the other group members during the first meeting.
\section{Results}\label{sec:results}
\subsection{Maturity and Group Demography}
In order to examine the connection between some demographic information (age, years in company, and educational background) and group development, a Spearman's $\rho$ correlation analysis on the individual level was run.
Overall, the results presented in Table~\ref{table:descriptive_statistics} suggest that age relates to members' perception of ``trust and structure,'' i.e., the older the members of a work group were, the higher their perception of ``trust and structure.'' In contrast, the number of employment years within the company is moderately negatively correlated with the members' perception of their productivity. In other words, the more years members had spent in company A, the less productive they viewed their work groups. In this correlation analysis, educational background played no role in the members' group development, according to the participants' views.
\begin{longtable}{| p{.17\textwidth} | p{.12\textwidth} | p{.09\textwidth} | p{.09\textwidth} |p{.09\textwidth} | p{.09\textwidth} | p{.11\textwidth} |} \caption{Spearman's Correlations between Group Demography and Perception } \\ \hline
\label{table:descriptive_statistics}
\footnotesize \textbf{Demography} & \footnotesize \textbf{Statistic} & \footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} & \footnotesize \textbf{Productivity} \\ \hline
\endhead
\begin{tabular}{c} \footnotesize Age \end{tabular}&
\begin{tabular}{c} \footnotesize Coefficient \\ \footnotesize Sig. \\ \footnotesize N \end{tabular} &
\begin{tabular}{c} \footnotesize -0.352 \\ \footnotesize 0.139 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.068 \\ \footnotesize 0.783 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.455 \\ \footnotesize 0.050 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.368 \\ \footnotesize 0.121 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize -0.248 \\ \footnotesize 0.305 \\ \footnotesize 19 \end{tabular} \\ \hline
\begin{tabular}{c} \footnotesize Years In Company \end{tabular}&
\begin{tabular}{c} \footnotesize Coefficient \\ \footnotesize Sig. \\ \footnotesize N \end{tabular} &
\begin{tabular}{c} \footnotesize -0.239\\ \footnotesize 0.325 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.203 \\ \footnotesize 0.405 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.341 \\ \footnotesize 0.153 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.220 \\ \footnotesize 0.366 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize -0.512 \\ \footnotesize 0.025 \\ \footnotesize 19 \end{tabular} \\ \hline
\begin{tabular}{c} \footnotesize Education \end{tabular}&
\begin{tabular}{c} \footnotesize Coefficient \\ \footnotesize Sig. \\ \footnotesize N \end{tabular} &
\begin{tabular}{c} \footnotesize 0.315\\ \footnotesize 0.190 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.039 \\ \footnotesize 0.872 \\ \footnotesize19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.000* \\ \footnotesize 1.000 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize -0.102\\ \footnotesize 0.678 \\ \footnotesize 19 \end{tabular} &
\begin{tabular}{c} \footnotesize 0.090 \\ \footnotesize 0.715 \\ \footnotesize 19 \end{tabular} \\ \hline
\end{longtable}
\subsection{Maturity and Planning Effectiveness}
Since the sample size of the data set was small (N=4), a normality test was not performed on the residuals for planning effectiveness. Spearman's correlation, which does not assume normality of the data, was therefore run to determine the connection between planning effectiveness and group development.
\paragraph{Correlation Analysis}
Since planning is a group endeavour, this correlation analysis was run on the group level only. The results revealed a positive correlation between the fourth stage of group development and planning effectiveness and showed significant convergent validity, i.e., the more mature a team is, the more effectively it plans its sprint stories and thus delivers the expected outcome. While significant correlations were not found with scales I, II, and III, the correlations on scales II and III point in the expected direction (see Table~\ref{table:corrPlan}). The correlation coefficient and significance (\emph{r} = 1 and \emph{p} = 0.000) describe the strength of the association between the two variables, \emph{GDQ4} and \emph{planning effectiveness}, which is a perfect positive one for this small data set.
\begin{table}[h]
\centering
\caption{Correlations for GDQ Perceptions and Planning Effectiveness }
\begin{tabular}{| p{2.0cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm}|}
\hline
\centering
\footnotesize \textbf{Scale} & \footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} \\ \hline
\begin {tabular} {@{}l@{}} \small \textbf{Planning } \\ \footnotesize Sig. (2-tailed) \\ \footnotesize N
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.400 \\ \footnotesize 0.6000 \\ \footnotesize4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize -0.2000 \\ \footnotesize 0.8000 \\ \footnotesize 4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.4 \\ \footnotesize 0.6 \\ \footnotesize 4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 1.000 \\ \footnotesize . \\ \footnotesize 4
\end{tabular} \\ \hline
\end{tabular}
\label{table:corrPlan}
\end{table}
\paragraph{Planning Effectiveness Comparison}
Table~\ref{table:groupPlan} shows the planning effectiveness and the group development mean values of the four participating work groups. The evidence showed that work groups which scored higher in GDQ4 also scored higher in planning effectiveness. As can be seen from the table, Group D scored the highest GDQ4 score, compared to the other work groups, with a mean value of 64.67. It also outperformed the other work groups in planning effectiveness with a mean value of 71.48. On the other hand, the lowest GDQ4 mean value, 53.17, was scored by work group B, which exhibited the minimum planning effectiveness with a mean value of 40.2.
\begin{table}
\centering
\caption{Planning Effectiveness and Group Development Mean Values}
\begin{tabular}{| p{1.0cm} | p{1.4cm} | p{0.9cm} | p{0.9cm} | p{0.9cm}| p{0.9cm}|}
\hline
\centering
\footnotesize \textbf{Group} & \footnotesize \textbf{Planning} &
\footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} \\ \hline
\begin {tabular} {@{}c@{}} \footnotesize A \\ \footnotesize B \\ \footnotesize C \\ \footnotesize D
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 66.11 \\ \footnotesize 40.2 \\ \footnotesize 51.27 \\ \footnotesize 71.48
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 40.80 \\ \footnotesize 40.33 \\ \footnotesize 44 \\ \footnotesize 42
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 31.60 \\ \footnotesize 37.67 \\\footnotesize 29.4 \\\footnotesize 32
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 62.60 \\ \footnotesize 54.67 \\\footnotesize 56.8 \\\footnotesize 55.67
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 63.80 \\ \footnotesize 53.17 \\\footnotesize 60.20 \\\footnotesize 64.67
\end{tabular} \\ \hline
\end{tabular}
\label{table:groupPlan}
\end{table}
\begin{figure*}
\centering
{\includegraphics[width=0.50 \textwidth]{Planning_GDQ4.png}\label{fig:scatter:GDQ4}}
\caption{Planning Effectiveness and Group Development Mean Values}
\label{fig:scatter:Planning_GDQ}
\end{figure*}
\Cref{fig:scatter:Planning_GDQ} shows the scatter plot of planning effectiveness as the dependent variable and the fourth group development scale (GDQ4) as the independent variable. Each dot represents one of the four participating groups, with the \emph{x} coordinate as the group development mean and the \emph{y} coordinate as the planning effectiveness mean. The figure shows that $R^2 = 0.932$, which means that 93.2\% of the variance in planning effectiveness can be explained by the fourth scale of group development (GDQ4). This conclusion rests on the assumption that the population data are linear.
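The $R^2$ value can be approximately reproduced from the group means in Table~\ref{table:groupPlan}. The pure-Python sketch below is illustrative only; small rounding differences from the reported 0.932 are expected, since the underlying data carry more precision than the tabulated means.

```python
# R^2 for a simple linear fit, computed as the squared Pearson correlation:
# r^2 = S_xy^2 / (S_xx * S_yy), using centered sums of squares.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy ** 2 / (sxx * syy)

# GDQ4 and planning-effectiveness means for groups A-D (Table 6):
gdq4 =     [63.80, 53.17, 60.20, 64.67]
planning = [66.11, 40.20, 51.27, 71.48]
print(round(r_squared(gdq4, planning), 3))  # ≈ 0.93 (paper reports 0.932)
```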
\subsection{Maturity and Development Velocity}
The analysis performed on both the group and the individual level helped reinforce the notion that the dynamics of a particular group constitute the source of individual perceptions of that group. Tables~\ref{table:groupVel} and~\ref{table:groupInd} show that the findings coincide, i.e., no correlation exists between development velocity and maturity on either the individual or the group level.
\paragraph{Correlation Analysis}
A second Spearman's correlation was conducted to determine the association between the work groups' perception about their maturity, on all the GDQ scales, and their development velocity. On both the group and individual levels, no significant relationship was identified (see \cref{table:groupVel,table:groupInd}).
\begin{table}
\centering
\caption{Correlations between GDQ scales and Velocity -- Group Level}
\begin{tabular}{| p{2.0cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm}|}
\hline
\centering
\small \textbf{Scale} & \footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} \\ \hline
\begin {tabular} {@{}l@{}} \textbf{Velocity} \\ \footnotesize Sig. (2-tailed) \\ \footnotesize N
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.2000 \\ \footnotesize 0.8000 \\ \footnotesize 4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize -0.4000 \\ \footnotesize 0.6000 \\ \footnotesize 4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.8000 \\ \footnotesize 0.2000 \\ \footnotesize 4
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.800 \\ \footnotesize 0.2000 \\ \footnotesize 4
\end{tabular} \\ \hline
\end{tabular}
\label{table:groupVel}
\end{table}
\begin{table}
\centering
\caption{Correlations between GDQ scales and Velocity -- Individual Level}
\begin{tabular}{| p{2.0cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm}|}
\hline
\centering
\small \textbf{Scale} & \footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} \\ \hline
\begin {tabular} {@{}l@{}} \textbf{Velocity} \\ \footnotesize Sig. (2-tailed) \\ \footnotesize N
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.310 \\ \footnotesize 0.196 \\ \footnotesize 19
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize -2.16 \\ \footnotesize 0.374 \\ \footnotesize 19
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.236 \\ \footnotesize 0.330 \\ \footnotesize 19
\end{tabular} &
\begin {tabular} {@{}l@{}} \footnotesize 0.204 \\ \footnotesize 0.402 \\ \footnotesize 19
\end{tabular} \\ \hline
\end{tabular}
\label{table:groupInd}
\end{table}
\paragraph{Velocity Comparison}
Table~\ref{table:descriptive_statistics_velocity} shows the velocity and group development mean values of the four participating work groups, both measured during the same sprint. As can be seen from the table, work group B has the minimum mean velocity, 27.33, while work group A scored the highest mean, 48.07. As shown in Table~\ref{table:groupVel}, no connection between the groups' mean velocity and the group development scales was found.
\begin{table}
\centering
\caption{Velocity Mean Values and Group Development Mean Values}
\begin{tabular}{| p{.9cm} | p{1.2cm} | p{0.9cm} | p{0.9cm} | p{0.9cm}| p{0.9cm}|}
\hline
\centering
\footnotesize \textbf{Group} & \footnotesize \textbf{Velocity} & \footnotesize \textbf{GDQ1} & \footnotesize \textbf{GDQ2} & \footnotesize \textbf{GDQ3} & \footnotesize \textbf{GDQ4} \\ \hline
\begin {tabular} {@{}c@{}} \footnotesize A \\ \footnotesize B \\ \footnotesize C \\ \footnotesize D
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 48.07 \\ \footnotesize 27.33 \\ \footnotesize 35.29 \\ \footnotesize 47.82
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 40.80 \\ \footnotesize 40.33 \\ \footnotesize 44 \\ \footnotesize 42
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 31.60 \\ \footnotesize 37.67 \\\footnotesize 29.4 \\\footnotesize 32
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 62.60 \\ \footnotesize 54.67 \\\footnotesize 56.8 \\\footnotesize 55.67
\end{tabular} &
\begin {tabular} {@{}c@{}} \footnotesize 63.80 \\ \footnotesize 53.17 \\\footnotesize 60.20 \\\footnotesize 64.67
\end{tabular} \\ \hline
\end{tabular}
\label{table:descriptive_statistics_velocity}
\end{table}
\subsection{Semi-Structured Interviews} \label{semi_structure}
Responses from interviewees were thematically analyzed. Below are the main results of this analysis.
\paragraph{Roles in The Work Groups}
All members from the participating work groups were able to describe their roles with ease, which means they were all clear on their responsibilities.
\paragraph{Politeness and Tentativeness}
The majority of members in work group A (75\%) did not consider over-politeness evident in their work group, whereas 25\% considered politeness a \emph{rare occasion} in the work group. Members of work group B explained that over-politeness depends on the situation and the member's personality rather than being a general trait of the work group. All of the interviewed members from work group C agreed that members were overly polite with each other; 75\% of them linked this over-politeness to the nature of engineers and their cultural backgrounds, whereas 25\% said that members exhibited over-politeness only in process-related situations rather than technical ones. Finally, members from work group D perceived each other as overly polite and attributed this to the fact that the group is newly formed and has not yet built cohesive relationships. The exception was the member who has been in the team the longest, who did not perceive the work group as overly polite.
\paragraph{Cooperativeness and Support}
All of the interviewees from work group A reported that they are not hesitant to reach out to each other when needed. 50\% of members from work group B suggested that people are not hesitant to ask anyone in the work group for support; one of them linked this to members not wanting to take responsibility themselves: \emph{they might ask anyone so that they do not take the responsibility}. On the other hand, the remaining 50\% suggested that members are sometimes hesitant, and linked this to personality and the type of topic. 25\% of the interviewees from work group C reported that members are hesitant to seek support from each other (\emph{People are very concerned with each other and are reluctant to ask for support}), whereas 75\% suggested that members tend to seek support from knowledgeable people in the work group, regardless of who they are. Finally, the majority of members in work group D explained that members are somewhat hesitant to seek support, attributing this to not yet having had enough time to interact with each other and build cohesive relationships. On the contrary, one member explained that members do reach out to each other and are not afraid to point out problems.
\paragraph{Subgroups and Cliques}
75\% of the interviewed members from work group A believed that there are no subgroups, whereas 25\% agreed that there are. Work group B explained that their work group is divided into two subgroups; over half of the members linked this to the \emph{age range} factor: \emph{there are two subgroups, and they are about the same age and stage in life}. All of the interviewed members from work group C explained that people with more technical knowledge or similar interests formed cliques in this work group. In addition, a third of the members in work group D believed that there are no cliques in their work group, a second third suggested the occurrence of subgroups, and the last third could not tell.
\paragraph{Trust}
All interviewed members from work groups A and C believed that trust is high in their work groups. Opinions in work group B are divided: 40\% believed there is no trust among members, 40\% believed that there is trust, while the remaining members described trust as a relative issue in the work group. Members in work group D distinguished between internal and external trust. They referred to external trust as the way external teams in company A perceive their work group, whereas internal trust was defined as how much members in the work group trust each other. All members in work group D believed that external trust is high: \emph{From outside the team, trust is pretty solid. If something should be done, people trust us}. On the other hand, a third of the members explained that trust is high but not yet satisfactory (\emph{Maybe in the long run we will probably build higher internal trust}), while two thirds of the members suggested a lack of internal trust within the work group: \emph{I don't have trust nor members have it to others}.
\paragraph{Goal Clarity}
75\% of members in work group A mentioned some of their goals at the sprint level only (\emph{we only look at the sprint goals}), whereas 25\% of them did not know any goals, whether at the sprint level or not. The majority of members from work group B could not mention any long- or short-term goals for their work group: \emph{There are no common team goals, but rather some individual goals to reach}. 50\% of them linked this to poorly defined customer specifications, while almost 20\% believed that members are so focused on development that they forget the work group goals: \emph{Most of them would remember them if you remind them but not everyone realizes that they know them}. On the contrary, all members from work group C recited their short-term goals but not the long-term ones, with a 50\% overlap in their answers. Finally, a third of the members from work group D were able to recite some of their work group's goals, while two thirds could not.
\paragraph{Delays in Goal Accomplishment}
The majority of members of work group A agreed that their lack of knowledge in one particular software engineering discipline (kept anonymous here) is negatively affecting their commitment to achieving their goals. Half of the members from work group B explained that the main reason for their delay was the unclarity of requirements received from their customer company: \emph{We don't know what we are doing. We need clear requirements}. The remaining 50\% shared the view that external dependencies, underestimation of workload, and lack of knowledge in the domain of work are the reasons for their delay. 25\% of members from work group C attributed the delay to not yet knowing how to work well as a new team, whereas 75\% of them gave a common explanation, namely their lack of experience in estimating the time needed for code review. 25\% of those gave additional reasons such as external dependencies and sick leaves, and the other 25\% suggested that delays were the result of a lack of knowledge of coding and the product, and underestimation of refactoring. Finally, members of work group D had markedly different explanations for their delay, which mainly lie in their lack of knowledge of the product, the lack of norms within the work group, and the variations in the level of technical competence within the work group.
\paragraph{Stability of Maturity in The Work Groups}
Finally, all of the interviewees from work group A presumed that their maturity level has not changed during the last ten months. The majority of members from work group B suggested that their maturity has been stable for six months. 75\% of members in work group C believed that their maturity level has been stable for three months, whereas 25\% could not approximate a specific period. Members from work group D expressed different views about the period of stability: 75\% agreed that their group maturity remained unchanged for two months, whereas 25\% suggested that the group maturity had progressed continuously until one month ago.
The qualitative analysis revealed that work groups B and D experienced the highest number of the group development issues explored in the semi-structured interviews, whereas work groups A and C showed the lowest number of these issues. A disparity of viewpoints was also evident in work groups B and D, where individuals perceived their work groups to be functioning differently. The major issues that emerged in work groups B and D seemed to relate to differences in the technical knowledge or age range of members: some members with high technical knowledge (more experienced) tended to prefer working with each other rather than with individuals who had less technical experience. This may explain the lack of trust and goal clarity between individuals in both groups B and D.
\section{Discussion}\label{sec:disc}
\subsection{Reflection on Efficiency and Effectiveness}
We emphasize interpreting the results in light of the distinction between efficiency and effectiveness. Our velocity measurement only reflects the efficiency of work groups in accomplishing Scrum tasks, with no indication of how effectively the tasks were implemented. On the other hand, the measurement of planning effectiveness reveals the work groups' ability to deliver the expected outcome within the planned time frame.
\subsection{Answers to Research Questions}
\paragraph{RQ1 - What is the association between group maturity and planning effectiveness?}
In this research, we investigated the relationship between four independent variables (the group development stages) and planning effectiveness. The results showed a perfect positive correlation (+1.0) between the fourth GDQ scale and planning effectiveness among the four participating work groups, which means that the two variables move in perfect tandem. In other words, the higher a software development team scores on the measurement of the fourth group development phase, the more effective it becomes in planning its requirements. This supports the findings of other studies confirming that task performance and work activity occur at higher levels later in a group's development~\cite{school,intensivecare,facilitating}. The significance of this research is that it provides evidence of a relationship between group development and team performance in a software engineering context.
The overall conclusions drawn from the qualitative analysis overlapped with those revealed by the quantitative one, providing further evidence for the validity of the interviewees' responses to the GDQ with respect to the topics explored in the semi-structured interviews. For example, the thematic analysis revealed that members of work group B had the highest number of group development issues compared with the other work groups. Contrary to work group B, members of work group A had the lowest number of issues, which might indicate that this work group is at a higher level of group development.
\paragraph{RQ2 - What is the association between group maturity and software development velocity?}
We investigated the connection between the two variables, group development and velocity. The motivation for this research question was to address a gap identified by \cite{maturityAgility}, in which a positive correlation between maturity and velocity would support their findings about the connection between agility and group maturity. The results of our analysis were not in concordance with what \cite{maturityAgility} suggested, since we could not provide empirical evidence to support a significant convergent validity between group maturity and development velocity. The analysis of the qualitative data revealed that the majority of participants linked their task development delays to technical and process-related aspects rather than to issues pertinent to the dynamics and norms within their work groups. This shows an interesting and unpredicted dependence on technical skills and process-related aspects in the software engineering domain, which might differ from performance aspects in other fields.
\subsection{Implications for Research and Practice}
We will now present some possible improvements that could increase software development team performance. The first would be to motivate software developers to focus more on discussing and clarifying their work group goals. By this we mean that members should work more on achieving their group goals rather than focusing only on individual ones. Our research suggests that the more effective work groups know their group goals, which is in alignment with stage III of the IMGD model, which suggests that clarity of goals contributes to the development of more productive and work-focused groups. The second would be to motivate software developers to freely discuss and communicate process-related issues rather than only technical ones, as members reported a tendency to hesitate to ask for support on process-related issues. The third would be to consider having team members of diverse backgrounds work together in order to build more trust and structure within teams. This is supported by research by \cite{roberge}, who attempted to address when and how diversity in teams leads to better performance by conceptualizing a multi-level model that identifies the psychological mechanisms explaining how diversity can positively impact team performance. At the group level, these psychological mechanisms were identified as communication, group involvement, and group trust \cite{roberge}. Although our research only included one aspect of diversity, namely age, our qualitative and quantitative analyses clearly show that the age of software development team members relates to their perceptions of \emph{trust and structure}. In addition, work groups need to be given the opportunity to mature over time in order to achieve higher planning effectiveness, thus becoming self-organizing units where all members can provide input for accurate sprint planning.
\subsection{Validity Threats}
One needs to be careful when generalizing the findings of this study beyond this specific case, because only four groups from the same company were studied, which is a small sample. However, the combination of data collection methods used in this research, qualitative interviews and quantitative surveys, triangulates our findings and thus strengthens the validity of our results.
This area of research is sensitive for the participating members, since it involves disclosing the dynamics within their groups to us, which may influence the validity of the groups' responses to the quantitative survey and the qualitative interviews. At the onset of this research, we attempted to mitigate this by explaining the research purpose to the participants and by confirming the anonymity of their responses. However, it is not possible to rule out bias in the participants' responses on both the surveys and the interviews. To reinforce the anonymity of the participating work groups, we avoided stating any information that would indicate their identity. Moreover, self-bias in the coding process of the semi-structured interviews cannot be ruled out; a second coder would be needed to validate the responses and thus minimize this bias.
Our measurement of velocity relied on the time each work group spent on task accomplishment, which means that the amount of teamwork required to accomplish those tasks may not be significant; an individual endeavor on each task may be sufficient to get the work done. This may explain the absence of correlation between velocity and group maturity. Finally, although the majority of members explained that they close their tasks immediately after finishing their implementations, we cannot guarantee that all of the tasks we selected for analysis were closed this way. This may have affected the validity of our velocity measurement.
\section{Conclusion and Future Work}
In the course of this research, we aimed to investigate how the development velocity and planning effectiveness of software development teams relate to the four phases of their group development. Our findings showed that the fourth stage of group development, in the adapted framework, is significantly related to the effectiveness of software teams in planning their requirements, whereas no evidence was found for a similar relationship with development velocity. Moreover, this indicates considerable differences in how group development relates to the effectiveness and efficiency of software teams: a team with a higher score on the measurement of group development is possibly a more effective one, but not a more efficient one. We believe that this research adds to the knowledge of the prominence of human interactions within software development teams, particularly by providing empirical evidence about the link between group maturity and planning effectiveness. We believe the knowledge provided is sufficient to trigger organizations to focus more on these aspects, as they may benefit software development teams.
We would like to see the results of similar studies conducted with larger sample sizes from different companies. We would also encourage further studies to expand upon the connection between group development aspects and team performance in software development, for example by measuring function points, defect rates, and kilo-lines of code to assess team performance. Finally, we would like to see the results of studies that combine several objective and subjective methods to assess the performance of software development teams and highlight how each relates to group maturity.
\bibliographystyle{abbrv}
\section{Introduction}
Person re-identification (re-ID) is a cross-camera image retrieval task, which aims to match persons of a given query against an image gallery collected from disjoint cameras. Many studies resort to deep metric learning~\cite{hermans2017defense,zheng2017discriminatively}, or use classification losses as proxy targets to extract discriminative features~\cite{li2017person,sun2018beyond,tang2019cityflow,wu2019progressive}. With recent progress in generative adversarial networks (GANs), another possibility is to explore GANs as style transformers to augment training data and improve the discriminative capacity of the model~\cite{zheng2017unlabeled,ge2018fd,liu2018pose,qian2018pose,zheng2019joint}.
Existing re-ID methods mainly treat visible images as a single modality and degrade dramatically in real, complex scenarios where person images are captured under both dark and bright lighting conditions, since visible-light cameras cannot work at night. Fortunately, some surveillance devices, such as infrared cameras, can capture the appearance characteristics of a person under poor illumination and overcome these difficulties. This has sparked popular research interest in RGB-IR cross-modality matching, which is more challenging than single-modality RGB re-ID due to the large discrepancy between the two modalities. For instance, RGB images contain discriminative cues such as color, while this information is missing in infrared images.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{introduction}
\end{center}
\caption{\small Affinity modeling (AM) infers cross-modality
sample similarities by exploiting intra-class compactness and inter-class separability in the sample similarities.
In a training batch, each training image can be considered a query image that treats all the training samples as its neighbours, and each image receives structure information from all of its neighbours.
}
\label{fig:intro}
\end{figure}
Recently, many studies resort to two typical approaches to address
the aforementioned challenges in cross-modality re-ID.
The first~\cite{wu2017rgb,ye2018hierarchical,ye2018visible} attempts to reduce the cross-modality discrepancy with feature-level constraints, such as aligning the feature distributions of images.
The second~\cite{wang2019rgb,wang2019learning,wang2020cross} operates at the input level, using GANs to transfer images from one modality to another while preserving the identity information as much as possible.
Both approaches mainly focus on reducing the discrepancy across modalities, whereas there remains the challenge of appearance variations within a single RGB or IR modality, including background clutter, viewpoint variations, occlusion, etc.
To address the above problem, we propose to learn attention for
discriminative features in a local-to-global manner.
The key idea is that different parts of a person contain different
discriminative information.
Using the attention mechanism, the network can still capture useful information from the upper body even when a pedestrian's lower body is occluded by something (\emph{e.g.}, a bicycle).
Specifically,
we propose a $local$ $attention$: the attention for a local feature is determined locally, \emph{i.e.}, by applying a learned transformation function to the feature itself, so that the refined part-aggregated features consider the importance of different body parts. However, such a local strategy does not fully exploit the feature information from a global view.
Our solution is to use the global feature information obtained from the feature maps via global average pooling (GAP), which we name $global$ $attention$.
In this way, we consider both the global feature and its part information
to determine the importance of different body parts of a person from global and local
views. This is also consistent with how humans find discriminative cues: first making a
comparison and then determining the importance.
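To make the idea concrete, a toy sketch of the global-attention step might look as follows. This is an illustrative simplification under our own assumptions, not the exact architecture: the GAP-style global feature is approximated by the mean of the part features, each part is scored by dot-product similarity to it, and softmax weights determine each part's contribution to the aggregated descriptor.

```python
import math

def global_attention(parts):
    """Toy sketch: weight part features by similarity to a GAP-style
    global feature (mean of parts), then aggregate with softmax weights."""
    dim = len(parts[0])
    # GAP-style global feature: element-wise mean over parts
    g = [sum(p[j] for p in parts) / len(parts) for j in range(dim)]
    # dot-product similarity of each part to the global feature
    scores = [sum(p[j] * g[j] for j in range(dim)) for p in parts]
    # numerically stable softmax over part scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]
    # weighted aggregation of part features
    agg = [sum(w[k] * parts[k][j] for k in range(len(parts))) for j in range(dim)]
    return w, agg

# Three hypothetical 2-D part features; an occluded part that disagrees
# with the global appearance receives a lower weight.
weights, feat = global_attention([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]])
```

In the actual model the comparison would be a learned transformation over CNN feature maps rather than a raw dot product, but the weighting-then-aggregation structure is the same.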
The aforementioned method processes each sample independently, ignoring the relationships between person images.
Thus, we present a novel and efficient similarity inference
to obtain optimal intra- and inter-modality image matching.
It utilizes intra-class compactness and inter-class separability in the sample similarities as supervision to model the affinities between intra- and inter-modality samples.
In particular, every sample contains some structural information and propagates it to its neighbors through pairwise relations, as shown in Figure~\ref{fig:intro}. This neighbor reasoning scheme
can compensate for the lack of specific information in different images of the same person and further
enhance the robustness of the learned features at the object level.
In our proposed method, contextual information for RGB-IR cross-modality re-ID
consists of two levels. The lower is the patch level, where appearance variation (\emph{e.g.}, occlusion) is mitigated with a
weighted sum over important body parts, yielding more accurate information about the object (\emph{i.e.}, the person).
At the object level, the co-occurrence of objects provides strong hints for identifying the same person.
Experimental results show that our proposed method surpasses the state of the art by large margins
on two widely used cross-modality re-ID datasets, SYSU-MM01~\cite{wu2017rgb} and RegDB~\cite{nguyen2017person}.
In summary,
the contributions of our work include:
$\bullet$ We propose to learn the attention for discriminative representation by taking both local and global views of features.
$\bullet$ We design an efficient mutual neighbor reasoning to capture long-range dependency of objects,
by modeling affinity between intra- and inter-modality images.
$\bullet$ Our proposed method achieves significant performance improvements
over the state of the art on the two most popular benchmark datasets.
\begin{figure*}
\begin{center}
\includegraphics[width=2\columnwidth]{df2am}
\end{center}
\caption{\small The architecture of our DF$^{2}$AM method. The entire framework includes two important components: weighted-part and global feature fusion, and affinity modeling for intra- and inter-modality feature matching. Our goal is to learn discriminative features and enhance the robustness of the learned features from the patch level to the object level.
}
\label{fig:overview}
\end{figure*}
\section{Related Work}
\textbf{RGB-RGB single-modality person re-Identification}.
Conventional person re-ID research is a RGB-RGB single modality re-ID to address the problem of matching pedestrian across non-overlapping cameras. The challenge of re-ID lies in how to learn discriminative features from person images where there are the large intra-class and small inter-class variations caused by diversity of poses, illumination conditions, viewpoint occlusion, etc. To address the aforementioned challenges, many deep re-ID methods~\cite{guo2019beyond,sun2019perceive,sun2018beyond,liu2015spatio} have been proposed. Some of them resort to a partial feature learning~\cite{guo2019beyond,sun2019perceive} and focus on the powerful network structures to align the body parts~\cite{sun2018beyond,liu2015spatio}. Other methods try to discard the appearance variant out in metric space using loss functions, which contains contrastive loss~\cite{hermans2017defense}, triplet loss~\cite{hermans2017defense}, quadruplet loss~\cite{chen2017beyond}. Recent graph-based methods~\cite{shen2018person,wu2019unsupervised} consider connections between sample pairs. However, these methods are developed for single-modality re-ID but not for the cross-modality re-ID due to the large discrepancy across modalities.
\textbf{RGB-IR cross-modality person re-identification}. The large discrepancy in cross-modality re-ID comes not only from appearance variations but also from the cross-modality variation between RGB and IR images. Existing studies for cross-modality re-ID can be summarized into two categories. The first category~\cite{wu2017rgb,ye2018hierarchical,ye2018visible} attempts to align the feature distributions of training images in the representation space. The work in~\cite{wu2017rgb} focuses on designing one-stream networks, such as a deep zero-padding network for evolving domain-specific nodes. Two-stream networks with modality-specific components~\cite{ye2018hierarchical} and a top-ranking loss~\cite{ye2018visible} were developed to learn multi-modality representations. In~\cite{dai2018cross}, a generative adversarial training method is proposed to jointly discriminate identity and modality. \cite{hao2019hsme} designs a hyper-sphere manifold embedding model to learn discriminative representations from different modalities. The second category instead uses cross-modality generative adversarial networks (GANs) to transfer person image style from one modality to another. \cite{kniaz2018thermalgan} collects a new ThermalWorld dataset and proposes a ThermalGAN framework for color-to-thermal image translation. \cite{wang2019learning} further considers a dual-level discrepancy and uses a bi-directional cycle GAN~\cite{zhu2017unpaired} to generate unlabeled images as data augmentation. A hierarchical disentanglement method~\cite{choi2020hi} is proposed to disentangle ID-discriminative and ID-excluded factors simultaneously by using pose- and illumination-invariant features from cross-modality images.
\textbf{Attention mechanisms}. Humans do not attempt to process a whole scene of data at once. Instead, they selectively use salient parts of the information to make decisions~\cite{itti1998model,mnih2014recurrent}. This process is called the attention mechanism and is actively used in many tasks, including image captioning~\cite{xu2015show,chen2017sca}, transfer learning~\cite{zagoruyko2016paying}, and object localization~\cite{zhang2018self}. Furthermore, the self-attention mechanism~\cite{vaswani2017attention} was proposed to draw global dependencies of inputs. Recently, various methods~\cite{wang2017residual,wang2018non,hu2018squeeze,woo2018cbam} use the self-attention mechanism to improve the performance of classification models. For person re-ID, attention is used to capture spatial and temporal characteristics of pedestrian sequences across video frames~\cite{xia2019second,li2018diversity}. However, the applicability of these methods to cross-modality re-ID is limited due to the different camera environments and large changes in visual appearance. In this work, we use the attention mechanism to focus on important local features instead of processing all the data equally for cross-modality re-ID.
\section{Methodology}
In this section, we describe details of the proposed DF$^{2}$AM approach for cross-modality re-ID. We first revisit the baseline single-modality model and introduce a more efficient way for intra- and inter-modality feature matching. Then, the details of the proposed DF$^{2}$AM, founded on the above finding, are presented for learning discriminative features, and enhancing the robustness of the learned feature from patch-level to object-level.
\subsection{Single-modality Person Re-identification Revisit}
We first present the baseline single-modality re-ID model, which offers a promising way to learn discriminative global features. The training process can be treated as a conventional classification problem~\cite{zheng2016person}. To learn the feature embedding, the baseline usually learns the parameters with manual annotations, where each extracted feature $f_{k},k=1,\cdots,K$ is associated with a one-hot label $y_{k}$. The classification procedure is achieved by minimizing a cross-entropy loss $\mathcal{L}_{ID}$. Meanwhile, the hard-mining triplet loss $\mathcal{L}_{BH}$~\cite{hermans2017defense} is used to optimize the triplet-wise relationships among different person images. Thus, the baseline re-ID model is optimized by minimizing the following loss function as
\begin{equation}
\mathcal{L}_{B}=\mathcal{L}_{ID}+\mathcal{L}_{BH},
\end{equation}
where
\begin{footnotesize}
\begin{align}
L_{ID}&=-\frac{1}{K}\sum_{k=1}^{K}\log p(y_{k}|f_{k}),\\
L_{BH}&=
\overbrace{\sum_{i=1}^{N}\sum_{a=1}^{M}}^{\text{all anchors}}\Big[m+\overbrace{\max_{p=1,\cdots,M}d(f_{a},f_{p})}^{\text{hardest positive}}
-\underbrace{\min_{\substack{j=1,\cdots,N \\ n=1,\cdots,M \\ j\neq i}} d(f_{a},f_{n})}_{\text{hardest negative}}\Big]_{+},
\end{align}
\end{footnotesize}
Here, $d$ is a metric function (\emph{i.e.}, the Euclidean distance) measuring distances in the embedding
space,
$K=MN$ is the number of images in single modality, $N$ is randomly selected identity number, and $M$ is randomly sampled image number of each identity.
$p(y_{k}|f_{k})$ is the predicted probability that the encoded feature $f_k$ belongs to its identity $y_k$, which is obtained by a classifier.
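For concreteness, the hard-mining triplet loss above can be sketched in a few lines of numpy. The identity superscripts are left implicit, as in the notation above; L2-normalizing the features first is our own choice for numerical stability, not prescribed by the text.

```python
import numpy as np

def batch_hard_triplet(features, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor, take the farthest
    positive and the closest negative within the mini-batch."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    # pairwise Euclidean distances between all samples in the batch
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    same = labels[:, None] == labels[None, :]
    losses = []
    for a in range(len(labels)):
        pos = d[a][same[a]]        # includes d(a,a)=0, harmless for max
        neg = d[a][~same[a]]
        losses.append(max(0.0, margin + pos.max() - neg.min()))
    return float(np.mean(losses))
```

With a small margin, well-separated classes yield zero loss; a larger margin keeps the loss active.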
\subsection{The overall framework}
To explore richer
visual patterns for cross-modality re-ID, we propose DF$^{2}$AM method and integrate it with a conventional re-ID network. The DF$^{2}$AM is fulfilled from the perspective of patch-level and object-level, and is deployed as dual-level feature fusion and affinity modeling modules.
The learning procedure is fulfilled by optimizing a joint objective function, as
\begin{equation}\label{overall loss}
\mathcal{L}_{Final}=\mathcal{L}_B+\lambda\mathcal{L}_D+\zeta\mathcal{L}_A,
\end{equation}
where $\mathcal{L}_B=\mathcal{L}_B^{RGB}+\mathcal{L}_B^{IR}$. Here, $\mathcal{L}_B^{RGB}$ and $\mathcal{L}_B^{IR}$ are the baseline loss for
RGB and IR modality respectively, $\mathcal{L}_D$ denotes the classification loss for dual-level feature fusion module,
and $\mathcal{L}_A$ is the affinity modeling loss. $\lambda$ and $\zeta$ are the regularization
factors.
As shown in Figure~\ref{fig:overview},
we first feed visible image batch $X^{\text{RGB}}=\{x_k^{\text{RGB}}\}_{k=1}^{K^{*}}$ and infrared image batch $X^{\text{IR}}=\{x_k^{\text{IR}}\}_{k=1}^{K^{*}}$ into different convolutional layers to capture modality-specific low-level feature patterns, where $2K^{*}$ is the batch size. Then,
we use the shared feature extractor
(\emph{i.e.}, convolutional layers) $Feat$ to transform the specific features
onto a common representation space to acquire modality-sharable high-level features,
formulated as
$$
f_k^{\text{RGB}}=Feat(Conv_{1}(x_k^{\text{RGB}})),$$
$$f_k^{\text{IR}}=Feat(Conv_{2}(x_k^{\text{IR}})).$$
Here, $f_k^{\text{RGB}}\in \mathbb{R}^{C\times H\times W}$ and $f_k^{\text{IR}}\in \mathbb{R}^{C\times H\times W}$ for visible and infrared images, respectively.
Note that $C$ is the number of channels,
$H$ and $W$ are height and width, respectively.
With the obtained high-level features, intra- and inter-modality feature learning must be undertaken. For the intra-modality feature learning, the dual-level feature fusion (DF$^{2}$) module learns part-aggregated feature embeddings and combines them with global features to enhance the representative capacity of features
for intra-modality re-ID. Meanwhile, the shared feature extractor is to learn aligned features
for bridging RGB and IR modalities. For inter-modality feature matching, a similarity inference is presented to model the affinities between both intra- and inter-modality global features. This neighbor
reasoning scheme utilizes intra-class compactness and inter-class
separability in the sample similarities to enhance the robustness of the learned feature
from object-level.
\subsection{Dual-level Feature Fusion}\label{ssec:localattention}
Previous cross-modality re-ID models~\cite{hao2019hsme,wang2020cross} commonly focus on constructing feature- or image-level constraints for reducing the distribution discrepancy within same identities. However, the challenges of appearance variation, including background clutter, viewpoint variations, and occlusion, are not overcome by only using global features. In this case, we propose a local attention mechanism, which refines part-aggregated features by a learned transformation function on itself, to consider the importance of different body parts of a person.
We take the feature $f_k^{\text{RGB}}$ from the RGB modality as an example. In our local attention mechanism, a patch-wise average pooling (PAP) is used to extract $P$ local features $f_k^{\text{RGB},p}\in \mathbb{R}^{C},p=1,\cdots,P$ (assuming $P$ is a factor of $H$) by
\begin{equation}
f_k^{\text{RGB},p}=\frac{P}{WH}\sum_{w=1}^W\sum_{h=\frac{(p-1)H}{P}+1}^{\frac{pH}{P}}f_{k,:,h,w}^{\text{RGB}},
\end{equation}
where PAP first splits the feature maps $f_k^{\text{RGB}}$ into $P$ horizontal feature spatial parts and then generates local features
$f_k^{\text{RGB},p}$ by compressing spatial parts using global average pooling (GAP).
To obtain a discriminative part-aggregated feature $f_k^{\text{RGB},*}$, we compute a weighted summation of the local features from body parts,
with a learnable local attention vector $\omega=(\omega_1,\cdots,\omega_P)^{\text{T}}$ as the weights. In summary, it is formulated as
\begin{equation}\label{eq:localattention}
f_k^{\text{RGB},*}=\sum_{p=1}^P\tilde{\omega}_p f_k^{\text{RGB},p},
\end{equation}
where $\tilde{\omega}_p=\frac{e^{\omega_p}}{\sum_{p'=1}^{P}e^{\omega_{p'}}}$.
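The PAP and softmax-weighted aggregation above can be sketched as follows. The feature-map sizes are illustrative; with uniform attention logits the result reduces to plain global average pooling.

```python
import numpy as np

def part_aggregate(fmap, omega):
    """Patch-wise average pooling (PAP) over P horizontal stripes,
    followed by softmax-weighted aggregation of the local features.
    fmap: (C, H, W) feature map; omega: (P,) attention logits."""
    C, H, W = fmap.shape
    P = omega.shape[0]
    assert H % P == 0, "P is assumed to be a factor of H"
    # PAP: stripe-wise global average pooling -> (P, C) local features
    parts = fmap.reshape(C, P, H // P, W).mean(axis=(2, 3)).T
    w = np.exp(omega) / np.exp(omega).sum()   # softmax attention weights
    return w @ parts                          # (C,) part-aggregated feature
```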
Although the aforementioned local attention mechanism assigns weights to local features, such a local strategy cannot fully exploit the feature information from a global view and affects the representative capacity of the feature, which is discussed in the experiments. Thus, we combine the global feature with the local feature as
\begin{equation}
\tilde{f}_k^{\text{RGB}}=\text{BN}(f_k^{\text{RGB,g}})+f_k^{\text{RGB},*},
\end{equation}
where $\text{BN}(\cdot)$ is batch normalization and $f_k^{\text{RGB,g}}$ represents the GAP output of the input
feature map $f_k^{\text{RGB}}$. For the infrared modality, we can also obtain the discriminative representation $\tilde{f}_k^{\text{IR}}$ by using $f_k^{\text{IR}}$ in the same way. Finally, the loss for $DF^{2}$ can be expressed by
\begin{equation}
\small
\begin{aligned}
\mathcal{L}_{D}&=\mathcal{L}_{D}^{RGB}+\mathcal{L}_{D}^{IR} \\
=&-\frac{1}{K^{RGB}}\sum_{k=1}^{K^{RGB}}\log p(y_{k}|\tilde{f}_k^{\text{RGB}}) \\
&-\frac{1}{K^{IR}}\sum_{k=1}^{K^{IR}}\log p(y_{k}|\tilde{f}_k^{\text{IR}}),
\end{aligned}
\end{equation}
where
$K^{RGB}$ ($K^{IR}$) is the
number of images in visible (infrared) modality, and each of features $\tilde{f}_k^{\text{RGB}}$ and $\tilde{f}_k^{\text{IR}}$ corresponds to an
identity label $y_{k}\in\{1,\cdots,N\}$.
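The dual-level fusion above can be sketched as below. The batch normalization here is a plain batch standardization with affine parameters, a simplification of the BN layer assumed by the text.

```python
import numpy as np

def fuse(f_global, f_local, gamma=1.0, beta=0.0, eps=1e-5):
    """Dual-level fusion: batch-normalize the GAP global features and
    add the part-aggregated local features.
    f_global, f_local: (B, C) batches of features."""
    mu = f_global.mean(axis=0, keepdims=True)
    var = f_global.var(axis=0, keepdims=True)
    bn = gamma * (f_global - mu) / np.sqrt(var + eps) + beta
    return bn + f_local
```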
\subsection{Affinity Modeling}\label{ssec:similarityinference}
A common strategy to align feature distribution of images is utilizing hard-mining triplet loss~\cite{hermans2017defense}.
For each chosen anchor, the hardest positive/negative exemplars must be selected from within a mini-batch. This strategy is complex and requires additional computing resources.
To better align feature distribution of images across intra- and inter-modality,
we propose a simple and efficient similarity inference to obtain the optimal intra- and inter-modality image
matching. It utilizes intra-class compactness and inter-class
separability in the sample similarities as supervised information to model the affinities
between intra-and inter-modality samples.
Here we aim to ensure that an image of a specific person is closer to all positive images of the same person than any (negative) image of any other person. In addition, our method is simple and efficient since no hard pairs are mined and all the pairs are utilized in training.
\textbf{Affinity matrix construction}. We first use the encoded global features $f_k^{\text{RGB,g}}$ and $f_k^{\text{IR,g}}$ from two modalities to model the pair-wise affinities (affinity matrix $D$), which is defined as
\begin{equation}\label{1}
D=\begin{pmatrix}
D_{\text{RGB,RGB}} & D_{\text{RGB,IR}} \\
D_{\text{IR,RGB}} & D_{\text{IR,IR}}
\end{pmatrix},
\end{equation}
where $D_{\text{RGB,RGB}}$ and $D_{\text{IR,IR}}$ are intra-modality affinity matrices of the visible and infrared modalities, respectively, and $D_{\text{RGB,IR}}$ and $D_{\text{IR,RGB}}$ are inter-modality affinity matrices.
At each training step, an identity-balanced sampling strategy is adopted for training~\cite{eccv20ddag}.
For each of $N^{*}$ different randomly selected identities, $M^{*}$ visible and $M^{*}$ infrared images are randomly sampled, so that the
batch size $2K^{*}$ equals $2N^{*}M^{*}$ and the sub-matrices $D_{a,b},a,b\in\{\text{RGB},\text{IR}\}$ are $K^{*} \times K^{*}$-dimensional.
The elements of the sub-matrices $D_{a,b},a,b\in\{\text{RGB},\text{IR}\}$ are calculated by
\begin{equation}\label{2}
D_{a,b}^{ij}=\left|\left|\frac{f_i^{a\text{,g}}}{||f_i^{a\text{,g}}||_2}-\frac{f_j^{b\text{,g}}}{||f_j^{b\text{,g}}||_2}\right|\right|_2 \in [0,
+\infty),
\end{equation}
where $||\cdot||_2$ is the $L_2$-norm. Note that other distance metrics, such as cosine similarity, can also be used. In this work, we adopt the Euclidean distance between normalized global features, where a smaller distance value indicates higher similarity.
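The block affinity matrix $D$ can be sketched as follows; stacking the RGB and IR feature batches reproduces the four intra- and inter-modality sub-blocks above.

```python
import numpy as np

def affinity_matrix(f_rgb, f_ir):
    """Pairwise Euclidean distances between L2-normalized global
    features; stacking [RGB; IR] gives the 2K* x 2K* block matrix D."""
    f = np.concatenate([f_rgb, f_ir], axis=0)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    return np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
```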
\textbf{Ground truth affinity matrix}. According to the label information of each image, we construct a ground truth affinity matrix, which corresponds to the affinity matrix and serves as the ground truth labels of the similarity inference. The label matrix is defined as a binary matrix $G=[G^{ij}],i,j=1,\cdots,2K^{*}$, where $G^{ij}=1$ if the $i^{th}$ and the $j^{th}$ samples belong to the same identity, and zero otherwise.
Thus, our aim is that the affinity elements of negative pairs are as close to $\delta$ as possible and those of positive pairs are as close to zero as possible.
This can be achieved by minimizing the following mean $L_{1}$ error between affinity matrix and its ground truth,
\begin{equation}
\mathcal{L}_1=||D-(\mathbbm{1}-G)\cdot \delta||_1\rightarrow0,
\end{equation}
where $\mathbbm{1}$ is a matrix whose elements are all $1$ and $\delta$ is a suitable large value.
Theoretically, the above consistency condition is a necessary but not sufficient requirement for discriminative embeddings. In practice, it may induce the model to converge to bad local minima early in training, which is also confirmed by our experiments. The main reason is that
all positive and negative pairs are still treated equally even when some pairs' distances are already suitable.
To address this, our solution is to relax the learning objective. We expect the affinity elements of negative pairs are larger than that of positive pairs, which can be achieved by constraining a suitable margin $m$ between the distances of positive and negative pairs.
Thus, we modify the objective loss function to
\begin{equation}
\mathcal{L}_A=\sum_{i=1}^{2K^{*}}\sum_{j=1}^{2K^{*}}[D^{ij}\otimes G^{ij}-(D^{ij}-m)\otimes(1-G^{ij})]_{+},
\end{equation}
where $m$ is a margin and $\otimes$ denotes the Hadamard product.
This loss ensures that the affinity elements of negative pairs are larger than those of positive pairs by at least the margin $m$. The advantage of this formulation is that, compared with the hard-mining triplet loss~\cite{hermans2017defense}, we can easily optimize this loss over the whole dataset without requiring excessively long training, and all points of the same class merely need to be closer to each other than to any point
from a different class.
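A minimal sketch of the margin-based affinity loss, assuming identity labels are shared across the stacked RGB/IR batch; the Hadamard products reduce to elementwise multiplications.

```python
import numpy as np

def affinity_loss(D, labels, m=0.5):
    """Margin-based affinity loss: positive pairs pull distances to 0;
    negative pairs are only penalized when closer than the margin m.
    No hard-pair mining: every pair in the batch contributes."""
    G = (labels[:, None] == labels[None, :]).astype(float)
    per_pair = D * G - (D - m) * (1.0 - G)   # elementwise (Hadamard) products
    return float(np.maximum(per_pair, 0.0).sum())
```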
\begin{table*}[!t]
\centering
\begin{tabular}{lcccccccccc}
\toprule
\multicolumn{1}{c}{Setting} & & \multicolumn{4}{c}{All Search} & & \multicolumn{4}{c}{Indoor Search} \\
\cmidrule{1-1}\cmidrule{3-6}\cmidrule{8-11} \multicolumn{1}{c}{Method} & & \multicolumn{1}{c}{r = 1} & \multicolumn{1}{c}{r = 10} & \multicolumn{1}{c}{r = 20} & \multicolumn{1}{c}{mAP} & & \multicolumn{1}{c}{r = 1} & \multicolumn{1}{c}{r = 10} & \multicolumn{1}{c}{r = 20} & \multicolumn{1}{c}{mAP} \\
\midrule
One-stream~\cite{wu2017rgb}(ICCV'17) & & 12.04 & 49.68 & 66.74 & 13.67 & & 16.94 & 63.55 & 82.10 & 22.95 \\
Two-stream~\cite{wu2017rgb}(ICCV'17) & & 11.65 & 47.99 & 65.50 & 12.85 & & 15.60 & 61.18 & 81.02 & 21.49 \\
Zero-Pad~\cite{wu2017rgb}(ICCV'17) & & 14.80 & 54.12 & 71.33 & 15.95 & & 20.58 & 68.38 & 85.79 & 26.92 \\
TONE~\cite{ye2018hierarchical}(AAAI'18) & & 12.52 & 50.72 & 68.60 & 14.42 & & 20.82 & 68.36 & 84.46 & 26.38 \\
HCML~\cite{ye2018hierarchical}(AAAI'18) & & 14.32 & 53.16 & 69.17 & 16.16 & & 24.52 & 73.25 & 86.73 & 30.08 \\
cmGAN~\cite{dai2018cross}(IJCAI'18) & & 26.97 & 67.51 & 80.56 & 31.49 & & 31.63 & 77.23 & 89.18 & 42.19 \\
BDTR~\cite{ye2019bi}(TIFS'19) & & 27.32 & 66.96 & 81.07 & 27.32 & & 31.92 & 77.18 & 89.28 & 41.86 \\
eBDTR~\cite{hao2019hsme}(AAAI'19) & & 27.82 & 67.34 & 81.34 & 28.42 & & 32.46 & 77.42 & 89.62 & 42.46 \\
HSME~\cite{hao2019hsme}(AAAI'19) & & 20.68 & 32.74 & 77.95 & 23.12 & & - & - & - & - \\
D$^{2}$RL~\cite{wang2019learning}(CVPR'19) & & 28.90 & 70.60 & 82.40 & 29.20 & & - & - & - & - \\
MAC~\cite{ye2020cross}(TIP'20) & & 33.26 & 79.04 & 90.09 & 36.22 & & 36.43 & 62.36 & 71.63 & 37.03 \\
MSR~\cite{feng2019learning}(TIP'19) & & 37.35 & 83.40 & 93.34 & 38.11 & & 39.64 & 89.29 & 97.66 & 50.88 \\
AlignGAN~\cite{wang2019rgb}(ICCV'19) & & 42.40 & 85.00 & 93.70 & 40.70 & & 45.90 & 87.60 & 94.40 & 54.30 \\
Xmodal~\cite{li2020infrared}(AAAI'20) & & 49.92 & 89.79 & 95.96 & 50.73 & & - & - & - & - \\
DDAG~\cite{eccv20ddag}(ECCV'20) & & 54.75 & 90.39 & 95.81 & 53.02 & & 61.20 & 94.06 & 98.41 & 67.98 \\
\midrule
Ours & & \textbf{56.93} & \textbf{90.80} & \textbf{96.11} & \textbf{55.10} & & \textbf{66.39} & \textbf{94.93} & \textbf{98.55} & \textbf{71.52} \\
\bottomrule
\end{tabular}%
\caption{Experimental results of the proposed DF$^{2}$AM and state-of-the-art methods on the SYSU-MM01 dataset under two different settings. Rank at r accuracy (\%) and mAP (\%) are reported.}
\label{tab:SYSU-MM01}
\end{table*}%
\section{Experimental Results}
\subsection{Datasets and Settings}
\textbf{Datasets}. The proposed method is evaluated over two widely used cross-modality datasets,
SYSU-MM01~\cite{wu2017rgb} and RegDB~\cite{nguyen2017person}.
SYSU-MM01 is a popular RGB-IR re-ID dataset, which contains images of 419 identities
captured in both indoor and outdoor environments.
The training set includes 22,258
RGB images and 11,909 IR images of 395 persons, and the testing set contains 96
identities, with 3,803 IR images for the query and 301 RGB images for the gallery set.
There are two different testing settings for RGB-IR re-ID: indoor-search
and all-search~\cite{wu2017rgb}. All-search mode treats images from all RGB cameras as the gallery set, while the images of
gallery set are captured by two indoor RGB cameras in the indoor-search mode.
RegDB contains 412 persons and 8,240 images, where each person has 10 RGB images and 10 IR images.
There are 4,120 images of 206 persons for training and the remaining images of 206 persons for testing.
The testing set has two evaluation modes, Visible to Infrared and Infrared to Visible, where the former searches IR images given a visible image and the latter searches RGB images given an infrared image.
Stable results are obtained over 500 repetitions of random splits.
\textbf{Evaluation protocols}.
We use two standard evaluation protocols as evaluation metrics:
Cumulative Matching Characteristic (CMC) and mean average precision (mAP).
The Rank-$k$ identification rate in the CMC curve represents the cumulative rate of true
matches in the top-k position, while mAP treats person
re-identification as a retrieval task.
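For illustration, the two protocols can be sketched on a toy distance matrix; this simplified version omits the camera-based gallery filtering used in the official SYSU-MM01 protocol.

```python
import numpy as np

def cmc_map(dist, q_ids, g_ids):
    """CMC rank-k rates and mAP for a (num_query x num_gallery)
    distance matrix, treating re-ID as a retrieval task."""
    order = np.argsort(dist, axis=1)              # ascending distance
    matches = g_ids[order] == q_ids[:, None]      # correct-match mask
    # CMC: fraction of queries with a true match within the top-k
    cmc = np.minimum(matches.cumsum(axis=1), 1).mean(axis=0)
    aps = []                                      # average precision per query
    for row in matches:
        hits = np.flatnonzero(row)
        aps.append(((np.arange(len(hits)) + 1) / (hits + 1)).mean())
    return cmc, float(np.mean(aps))
```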
\textbf{Implementation details}.
We adopt ResNet-50~\cite{he2016deep} with
the last classification layer removed as the backbone network and initialize it by using parameters pre-trained on ImageNet~\cite{deng2009imagenet}.
We randomly sample $N^{*}$ identities, and then
randomly sample $M^{*}$ visible and $M^{*}$ infrared images to constitute a training batch, so that the mini-batch size is $2K^{*}=2N^{*}\times M^{*}$.
In this paper, $N^{*}$ and $M^{*}$ are
set to 8 and 4, respectively.
We resize input images to 256$\times$128 and then employ random cropping and flipping as data augmentation for them.
During training, we use the SGD optimizer with momentum 0.9 and an initial learning rate of 0.1 for 80 epochs. The learning rate is scaled by 0.1 and 0.01 at the 30th and 50th epoch, respectively.
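The step schedule above can be sketched as follows (the exact boundary epochs are an interpretation of the text):

```python
def learning_rate(epoch, base=0.1):
    """Step schedule: base LR 0.1, scaled by 0.1 after 30 epochs and
    by 0.01 after 50 epochs (boundary handling assumed)."""
    if epoch < 30:
        return base
    if epoch < 50:
        return base * 0.1
    return base * 0.01
```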
\begin{table*}[!t]
\centering
\begin{tabular}{lrccccccccc}
\toprule
Setting & & \multicolumn{4}{c}{Visible to Infrared} & & \multicolumn{4}{c}{Infrared to Visible } \\
\cmidrule{1-1}\cmidrule{3-6}\cmidrule{8-11} Method & & r = 1 & r = 10 & r = 20 & mAP & & r = 1 & r = 10 & r = 20 & mAP \\
\midrule
HCML~\cite{ye2018hierarchical}(AAAI'18) & & 24.44 & 47.53 & 56.78 & 20.08 & & 21.70 & 45.02 & 55.58 & 22.24 \\
Zero-Pad~\cite{wu2017rgb}(ICCV'17) & & 17.75 & 34.21 & 44.35 & 18.90 & & 16.63 & 34.68 & 44.25 & 17.82 \\
BDTR~\cite{ye2019bi}(TIFS'19) & & 33.56 & 58.61 & 67.43 & 32.76 & & 32.92 & 58.46 & 68.43 & 31.96 \\
eBDTR~\cite{ye2019bi}(AAAI'19) & & 34.62 & 58.96 & 68.72 & 33.46 & & 34.21 & 58.74 & 68.64 & 32.49 \\
HSME~\cite{hao2019hsme}(AAAI'19) & & 50.85 & 73.36 & 81.66 & 47.00 & & 50.15 & 72.40 & 81.07 & 46.16 \\
D$^{2}$RL~\cite{wang2019learning}(CVPR'19) & & 43.4 & 66.1 & 76.3 & 44.1 & & - & - & - & - \\
MAC~\cite{ye2020cross}(TIP'20) & & 36.43 & 62.36 & 71.63 & 37.03 & & 36.20 & 61.68 & 70.99 & 36.63 \\
MSR~\cite{feng2019learning}(TIP'19) & & 48.43 & 70.32 & 79.95 & 48.67 & & - & - & - & - \\
AlignGAN~\cite{wang2019rgb}(ICCV'19) & & 57.90 & - & - & 53.60 & & 56.30 & - & - & 53.40 \\
Xmodal~\cite{li2020infrared}(AAAI'20) & & 62.21 & 83.13 & 91.72 & 60.18 & & - & - & - & - \\
DDAG~\cite{eccv20ddag}(ECCV'20) & & 69.34 & 86.19 & 91.49 & 63.46 & & 68.06 & 85.15 & 90.31 & 61.80 \\
\midrule
Ours & & \textbf{73.06} & \textbf{87.96} & \textbf{91.51} & \textbf{67.81} & & \textbf{70.49} & \textbf{85.78} & \textbf{90.44} & \textbf{63.85} \\
\bottomrule
\end{tabular}%
\caption{Performance comparison with existing methods on the RegDB dataset under visible-to-infrared and infrared-to-visible settings. Rank at r accuracy (\%) and mAP (\%) are reported.}
\label{tab:RegDB}%
\end{table*}%
\subsection{Comparisons with SOTA in Cross-Modality Person Re-ID}
In this section, we compare the proposed method with a number of cross-modality re-ID methods,
including One-stream~\cite{wu2017rgb}, Two-stream~\cite{wu2017rgb}, Zero-Pad~\cite{wu2017rgb}, TONE~\cite{ye2018hierarchical}, HCML~\cite{ye2018hierarchical}, cmGAN~\cite{dai2018cross}, BDTR~\cite{ye2019bi}, eBDTR~\cite{ye2019bi}, HSME~\cite{hao2019hsme},
D2RL~\cite{wang2019learning}, MAC~\cite{ye2020cross}, MSR~\cite{feng2019learning}, AlignGAN~\cite{wang2019rgb}, Xmodal~\cite{li2020infrared}, and DDAG~\cite{eccv20ddag}.
Tables~\ref{tab:SYSU-MM01} and~\ref{tab:RegDB} summarize the experimental results on the SYSU-MM$01$ and RegDB datasets, respectively.
Table~\ref{tab:SYSU-MM01} reports the experimental results on SYSU-MM01. Our proposed method achieves better overall performance than all the other methods in terms of
all the evaluation metrics. Specifically, our method obtains 56.93\% Rank-$1$ accuracy and 55.10\% mAP in the all-search mode, surpassing the second best approach (\emph{i.e.}, DDAG~\cite{eccv20ddag}) by 2.18\% and 2.08\%, respectively.
Similar results are observed in indoor-search mode. Compared to the current SOTA method, we achieve 5.19 points and
3.54 points improvement on Rank-1 and mAP, respectively. The above impressive performance
suggests that our method can learn better modality-sharable features from patch-level to object-level.
We also evaluate our method against existing competing approaches on RegDB. As shown in Table~\ref{tab:RegDB}, our method consistently
outperforms others by large margins in different query settings. For the Visible
to Infrared mode, it reaches 73.06\% on Rank-1 and 67.81\% on mAP, 3.72\% and 4.35\% higher
than the current SOTA (DDAG), respectively. For Infrared
to Visible, the improvements are 2.43\% on rank-1 and 2.05\% on mAP. This indicates that our DF$^{2}$AM is robust to different evaluation
modes.
\subsection{Ablation Study}
Extensive experiments are conducted under
different settings in Table~\ref{tab:ablation1} to evaluate each component of our proposed method: (1) the effectiveness of the DF$^{2}$ module, (2) the effectiveness of the AM module, and (3) the necessity of including the baseline in our method. All the experiments are conducted on the SYSU-MM$01$ dataset with two evaluation modes.
\begin{table}[!t]
\small
\centering
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{cccrccccc}
\toprule
& \multicolumn{1}{l}{Settings} & & & \multicolumn{2}{c}{All Search} & & \multicolumn{2}{c}{Indoor Search} \\
\cmidrule{1-3}\cmidrule{5-6}\cmidrule{8-9} Baseline & DF$^{2}$ & AM & & r=1 & mAP & & \multicolumn{1}{c}{r=1} & \multicolumn{1}{c}{mAP} \\
\midrule
$\surd$ & & & & 52.88 & 51.14 & & 56.16 & 64.27 \\
$\surd$ & $\surd$ & & & 55.61 & 53.90 & & 61.32 & 67.58 \\
$\surd$ & & $\surd$ & & 52.88 & 52.09 & & 59.38 & 66.82 \\
& $\surd$ & $\surd$ & & 55.80 & 54.04 & & 65.80 & 70.15 \\
$\surd$ & $\surd$ & $\surd$ & & \textbf{56.93} & \textbf{55.10} & & \textbf{66.39} & \textbf{71.52} \\
\bottomrule
\end{tabular}}%
\caption{Ablation study. We evaluate five settings on SYSU-MM01 dataset. ``Baseline'', ``Baseline'' with DF$^{2}$, ``Baseline'' with AM, DF$^{2}$ with AM, and our DF$^{2}$AM. Our method achieves the best result among other competitors.}
\label{tab:ablation1}%
\end{table}%
As shown in Table~\ref{tab:ablation1}, the single-modality re-ID
model obtains better results than some existing methods. This indicates that training
tricks taken from single-modality re-ID [67] also contribute to the performance of our method.
Next, we directly merge DF$^{2}$ into the training process based on baseline model,
which improves mAP scores by 2.76\% and 3.31\% for both evaluation modes. The above impressive improvements demonstrate that learning
dual-level features is beneficial for cross-modality Re-ID. Similar results can be observed when AM is integrated into the training process,
which also demonstrates the effectiveness of AM module.
As the fifth line of Table~\ref{tab:ablation1} shows,
the performance is further improved when we aggregate two modules with
baseline. The impressive improvement
suggests that these components are mutually beneficial to each other. However, without baseline, the performance of our method has a slight decline, demonstrating the necessity of taking Baseline into our method.
\begin{table}[t]
\small
\centering
\begin{tabular}{c|c|c|c|c}
\toprule
Method & mAP & r=1 & r=5 & r=10 \\
\midrule
DF$^{2}$AM(3 parts) & 68.06 & 62.45 & 91.71 & 96.83 \\
DF$^{2}$AM(4 parts) & \textbf{71.52} & \textbf{66.39} & \textbf{94.93} & \textbf{98.55} \\
DF$^{2}$AM(5 parts) & 70.40 & 65.17 & 92.93 & 97.69 \\
\bottomrule
\end{tabular}%
\caption{The performance of the proposed DF$^{2}$AM with different numbers
of split feature parts when trained on the SYSU-MM$01$ dataset and tested
in indoor-search mode.}
\label{tab:parts}%
\end{table}%
In addition, we compare the performance of DF$^{2}$AM
with different numbers of sliced horizontal feature
parts. As shown in Table~\ref{tab:parts}, our method achieves the best results
when the feature maps are split into four parts.
In this way, local features contain the most discriminative information for cross-modality re-ID.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{1}
\end{center}
\caption{\small The experimental results on SYSU-MM$01$ under indoor-search mode. Left: The performance along with different parameters $(N^{*},M^{*})$. Right:
The performance of our method with different loss functions under varying values of parameters $\delta(m)$.
}
\label{fig:nm}
\end{figure}
\textbf{How do the identity and sample numbers of the sampling strategy affect representation quality?} The hyper-parameters $N^{*}$ and $M^{*}$ control the identity number and the sample number per identity in a training batch, respectively.
For fair comparison, batch size $2K^{*}$ is fixed as $64$, while $K^{*}$, $N^{*}$ and $M^{*}$ satisfy the condition $K^{*}=M^{*}N^{*}$.
Figure~\ref{fig:nm} shows how the re-ID performance varies with different
numbers of person identities, $N^{*}$. It can be observed that as $N^{*}$
increases from 1 to 8, the re-ID performance continues rising,
and the performance begins to decrease when $N^{*}$ gets larger. That is, the best accuracy is achieved with $N^{*}=8$ and $M^{*}=4$. The main reason is that when $N^{*}$ is too small, the learned similarity
is not adequate, which makes it
difficult for the model to match features of the same identity during affinity modeling. When $N^{*}$ is
too large, the number of images per identity is reduced, which harms the network training for similarity inference.
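The identity-balanced sampling underlying this trade-off can be sketched as follows (the image pools and seed handling are illustrative):

```python
import random

def sample_batch(rgb_by_id, ir_by_id, n_ids, m_per_id, seed=0):
    """Identity-balanced sampling: N* identities, M* visible and M*
    infrared images each, so the batch holds 2K* = 2 * N* * M* images."""
    rng = random.Random(seed)
    ids = rng.sample(sorted(rgb_by_id), n_ids)
    rgb = [img for i in ids for img in rng.sample(rgb_by_id[i], m_per_id)]
    ir = [img for i in ids for img in rng.sample(ir_by_id[i], m_per_id)]
    return rgb, ir
```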
\textbf{The impact of $\mathcal{L}_A$ and $\mathcal{L}_1$}. As shown in Figure~\ref{fig:nm}, our method (w/ $\mathcal{L}_A$) achieves the best performance of mAP=71.52\% on SYSU-MM$01$,
2.00\% higher than directly using $\mathcal{L}_1$. Such results prove the necessity and effectiveness
of our affinity modeling loss $\mathcal{L}_A$, which alleviates the problem of model converging to local minima early in training stage. In addition, it can be observed that compared with our method (w/ $\mathcal{L}_1$), smaller value $m(\delta)$ leads to the better results of our method (w/ $\mathcal{L}_A$) and vice versa, which is consistent with our expectation. The main reason
is that our network is trained with $\mathcal{L}_A$, which,
unlike $\mathcal{L}_1$, does not always push images of different
identities apart, but applies a relatively small strength (\emph{i.e.}, $m$) to separate them.
\begin{figure}
\begin{center}
\includegraphics[width=1\columnwidth]{2}
\end{center}
\caption{\small Performance evaluation of our proposed method with different values of $\lambda$ and $\zeta$
on SYSU-MM$01$ under indoor-search mode.
}
\label{fig:para}
\end{figure}
\subsection{Parameter sensitivities}
The parameter $\lambda$ balances the effect of DF$^{2}$ module
and baseline model. We evaluate our method with different values for
the parameter $\lambda$ in Figure~\ref{fig:para}. As $\lambda$ increases, the accuracy
improves at first. When $\lambda=1.1$, we obtain the best
performance. After that, the performance begins to decline. Similar results can be observed when
we vary parameter values of $\zeta$ from 0.1 to 2.1.
The optimal accuracy is achieved when $\zeta=1.5$. Although the performance varies with different parameter values, most results of our approach
outperform the current state-of-the-arts significantly.
\section{Conclusions}
In this paper, we propose an efficient network that integrates
feature information from different modalities into person re-identification.
We introduce two key modules into the network and fuse dual-level features to reduce the cross-modality discrepancy.
The proposed DF$^{2}$ module
considers both the
global feature and its part information to determine importance between different body parts of a person from global
and local views. The AM module uses intra-class compactness and inter-class
separability in the sample similarities as supervised information to model the relationships between person images.
Ablation studies demonstrate the effectiveness
of the proposed modules in improving the identification accuracy.
Extensive experiments with our
state-of-the-art results on two competitive datasets further demonstrate
the effectiveness and generality of our DF$^{2}$AM approach.
{\small
\bibliographystyle{ieee_fullname}
\section{Problem overview}
\label{sec:intro}
Fault Tree Analysis (FTA) is an analytical tool aimed at modelling and evaluating how complex systems may fail. FTA is widely used as a risk assessment tool in safety and reliability engineering for a broad range of industries including aerospace, power plants, nuclear plants, and other high-hazard fields \cite{Ruijters2015}. Essentially, a fault tree is a directed acyclic graph~(DAG) which involves a set of basic events (e.g. component failures) that are combined using logic operators (e.g. AND and OR gates) to model how these events may lead to an undesired state of the system, normally represented at the root of the tree (top level event).
Our work is focused on a novel measure for FTA in the form of a hybrid analysis technique that involves quantitative and qualitative aspects of fault trees. From a qualitative perspective, we focus on Minimal Cut Sets (MCS). An MCS is a minimal combination of events that together cause the top level event. As such, MCSs are fundamental for structural analysis. The problem is that, in large scenarios, computing all MCSs might be very expensive and there might be hundreds of MCSs, which makes it hard to handle and prioritise which MCSs should be attended first.
In that context, the goal of this work is to identify the MCS with maximum probability. We call this the MPMCS problem.
This is an NP-complete problem and we use a MaxSAT-based approach to address it.
\section{Simple example}
The fault tree shown in Fig. \ref{fig:simple-example1} illustrates the different combinations of events that may lead to the failure of a hypothetical Fire Protection System (FPS) based on \cite{Kabir2017}. The FPS can fail if either the fire detection system or the fire suppression mechanism fails. In turn, the detection system can fail if both sensors fail simultaneously (events $x_1$ and $x_2$), while the suppression mechanism may fail if there is no water ($x_3$), the sprinkler nozzles are blocked ($x_4$), or the triggering system does not work. The latter can fail if neither of its operation modes (automatic ($x_5$) or remotely operated) works properly. The remote control can fail if the communications channel fails ($x_6$) or the channel is not available due to a cyber attack, e.g. a DDoS attack ($x_7$). Each basic event has an associated value that indicates its probability of occurrence~$p(x_i)$.
\vspace{-0.2cm}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.19]{images/fault-tree-v9.png}
\vspace{-0.6cm}
\caption{Fault tree of a cyber-physical fire protection system (simplified)}
\label{fig:simple-example1}
\vspace{-0.15cm}
\end{figure}
A fault tree $F$ can be represented as a Boolean equation $f(t)$ that expresses the different ways in which the top event $t$ can be satisfied \cite{FtaHandbook2002}.
In our example, $f(t)$ is as follows:
\vspace{-0.5cm}
\begin{equation}
\begin{array}{c}
f(t) = (x_1 \land x_2) \lor (x_3 \lor x_4 \lor (x_5 \land (x_6 \lor x_7))) \nonumber
\end{array}
\vspace{-0.1cm}
\end{equation}
The objective is to find the minimal set of logical variables that makes the equation $f(t)$ \textit{true} and whose joint probability is maximal among all minimal sets. In our example, the MPMCS~is $\{x_1,x_2\}$ with a joint probability of $0.02$.
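For this small example, the MPMCS~can be recovered by brute force. The sketch below enumerates all cut sets of $f(t)$, filters out the non-minimal ones, and picks the most probable. Since the individual $p(x_i)$ values are not listed above, the probabilities used here are illustrative assumptions chosen to be consistent with the stated result.

```python
import math
from itertools import combinations

# Top-event formula of the example fault tree:
# f(t) = (x1 AND x2) OR x3 OR x4 OR (x5 AND (x6 OR x7))
def f(s):  # s: set of basic events assumed to occur
    return ({"x1", "x2"} <= s or "x3" in s or "x4" in s
            or ("x5" in s and bool({"x6", "x7"} & s)))

# Illustrative probabilities (assumed values, not given in the text)
p = {"x1": 0.1, "x2": 0.2, "x3": 0.01, "x4": 0.015,
     "x5": 0.3, "x6": 0.05, "x7": 0.02}

def mpmcs(events, f, p):
    # every subset of events that triggers the top event is a cut set
    cuts = [set(c) for r in range(1, len(events) + 1)
            for c in combinations(events, r) if f(set(c))]
    # minimal cut sets: no proper subset is itself a cut set
    mcs = [s for s in cuts if not any(t < s for t in cuts)]
    # MPMCS: the minimal cut set with maximum joint probability
    return max(mcs, key=lambda s: math.prod(p[x] for x in s))

best = mpmcs(list(p), f, p)  # {'x1', 'x2'} with joint probability 0.02
```

Brute-force enumeration is exponential in the number of basic events, which is precisely why the MaxSAT-based approach described later is needed at the benchmark's scales.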
\section{Fault tree generation}
The benchmark presented in this paper relies on our open source tool MPMCS4FTA~\cite{BarrereMPMCS4FTAGithub}.
We have used MPMCS4FTA~to generate and analyse synthetic pseudo-random fault trees of different size and composition.
We use AND/OR graphs as the underlying structure to represent fault trees.
The benchmark presented in \cite{Barrere-MaxSAT-Benchmark-Arxiv2019} also considers AND/OR graphs as a means to represent operational dependencies between components in industrial control systems \cite{BarrereJisa2020}.
However, the instances presented in this paper differ in that:
\begin{inparaenum}
\item they are restricted to directed acyclic graphs (DAGs),
\item only the basic events represented at the leaves of the fault tree involve a probability of failure, and
\item leaves can have more than one parent in order to relax the definition of strict logical trees.
\end{inparaenum}
We control the size and composition of a random fault tree of size $n$ according to a configuration $R=(R_{AT},R_{AND},R_{OR})$.
$R_{AT} \in [0,1]$ indicates the proportion of atomic nodes (basic events) with respect to size $n$ (e.g. $0.2$ means $20\%$) whereas
$R_{AND}$ and $R_{OR}$ indicate the proportion of AND and OR nodes respectively.
To create a fault tree of size $n$, we first create two lists: $L=\{l_1, \ldots, l_m\}$ and $A=\{a_1,\ldots,a_s\}$.
$L$ is a random sequence of AND and OR nodes with the specified proportions for each operator where $m=n*(R_{AND}+R_{OR})$.
$A$ is a list of atomic nodes where $s=n*R_{AT}$, thus $n=m+s$.
In addition, each atomic node has a random probability of failure $p(a_i) \in [0,1]$.
To ensure connectivity, we first create the root node $t$ and connect $l_1$ to $t$ ($l_1 \rightarrow t$).
Then, for each logic node $l_i$ in the sequence $L$, we randomly choose $k$ nodes $l_j$ ahead (thus $j>i$) and create $k$ edges ($l_j \rightarrow l_i$) in the tree.
When the remaining nodes in $L$ are not enough to cover $k$ nodes, we use random atomic nodes from $A$.
At this point, we also make sure that $l_i$ points to at least one previous node in the sequence $L$. If that is not the case, we choose a random node $l_h$ (with $h < i$) and create an edge ($l_i \rightarrow l_h$).
Once the sequence $L$ has been processed, we traverse the list $A$ and connect each atomic node $a_i$ as follows.
First, we draw a random value $k'$ between 1 and $k$. Then, we add random edges ($a_i \rightarrow l_j$) from $a_i$ to logic nodes $l_j$ until we cover $k'$ connections.
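The generation procedure above can be sketched as follows. This is a simplified illustration rather than the exact MPMCS4FTA~implementation; in particular, $k=3$, the seed, and the node naming are our own choices.

```python
import random

def generate_fault_tree(n, r_at, r_and, r_or, k=3, seed=0):
    """Pseudo-random AND/OR fault tree of size n with composition R=(r_at, r_and, r_or)."""
    rng = random.Random(seed)
    m = round(n * (r_and + r_or))                      # number of logic nodes
    gates = ["AND"] * round(n * r_and) + ["OR"] * (m - round(n * r_and))
    rng.shuffle(gates)                                 # random AND/OR sequence L
    L = [f"l{i}" for i in range(m)]
    A = [f"a{i}" for i in range(n - m)]                # atomic nodes (basic events)
    prob = {a: rng.random() for a in A}                # random failure probabilities
    edges = {(L[0], "t")}                              # connect l_1 to the root t
    for i, li in enumerate(L):
        ahead = L[i + 1:]
        targets = ahead[:] if len(ahead) < k else rng.sample(ahead, k)
        # top up with random atomic nodes when L is nearly exhausted
        targets += rng.sample(A, k - len(targets)) if len(targets) < k else []
        edges.update((tgt, li) for tgt in targets)     # tgt is an input of li
        # ensure li feeds at least one earlier logic node
        if i > 0 and not any((li, lh) in edges for lh in L[:i]):
            edges.add((li, rng.choice(L[:i])))
    for a in A:                                        # connect every atomic node
        for lj in rng.sample(L, rng.randint(1, k)):
            edges.add((a, lj))
    return {"gates": dict(zip(L, gates)), "prob": prob, "edges": edges}
```

Edges always point from later positions in $L$ (or from atomic nodes) towards earlier gates and the root, so the result is acyclic by construction.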
\vspace{-0.15cm}
\section{Benchmark description}
Our dataset includes 80 cases in total, and can be obtained at \cite{BarrereMPMCS4FTAGithub}.
It contains fault trees with four different sizes: 2500, 5000, 7500, and 10000 nodes (20 cases each).
For each tree size, we consider two different graph configurations, $R_1=(0.8,0.1,0.1)$ and $R_2=(0.6,0.2,0.2)$, which determine the composition of the fault trees (10 cases each).
Table \ref{tab:experiments} shows the identifiers of the cases within each one of these categories.
\vspace{-0.1cm}
\begin{table}[h!]
\centering
\begin{tabular}{@{}|c|c|c|@{}}
\toprule
\textbf{\#Nodes/Configurations} & \textbf{$R_1=(0.8,0.1,0.1)$} & \textbf{$R_2=(0.6,0.2,0.2)$} \\ \midrule
2500 & 1 to 10 & 11 to 20 \\ \midrule
5000 & 21 to 30 & 31 to 40 \\ \midrule
7500 & 41 to 50 & 51 to 60 \\ \midrule
10000 & 61 to 70 & 71 to 80 \\ \bottomrule
\end{tabular}
\vspace{0.05cm}
\caption{Benchmark cases and configurations}
\label{tab:experiments}
\end{table}
\vspace{-0.5cm}
Each case is specified in an individual \textbf{.wcnf} (DIMACS-like, weighted CNF) file named with the case id and the number of nodes involved.
The weight for hard clauses (\textit{top} value) has been set to $2.0 \times 10^9$.
The weight of each soft constraint is an integer value that corresponds to the transformation (right shifting) of the probability value in $-log$ space.
Tables \ref{tab:part1} and \ref{tab:part2} detail each case as well as the results obtained with our tool.
The field \textbf{id} identifies each case.
\textbf{gNodes} and \textbf{gEdges} indicate the total number of nodes and edges in the fault tree.
\textbf{gAT}, \textbf{gAND}, and \textbf{gOR} indicate the approximate composition of the graph in terms of atomic (basic events), AND, and OR nodes.
\textbf{tsVars} and \textbf{tsClauses} show the number of variables and clauses involved in the MaxSAT formulation after applying the Tseitin transformation.
\textbf{time} shows the resolution time reported by MPMCS4FTA~in milliseconds.
Currently, the MaxSAT solvers used in MPMCS4FTA~are SAT4J~\cite{SAT4J} and a Python-based linear programming approach using Gurobi~\cite{Gurobi}.
\textbf{size} indicates the number of nodes identified in the MPMCS~solution.
\textbf{intLogCost} indicates the cost of the solution in $-log$ space as an integer value (right shifted).
\textbf{logCost} indicates the cost of the solution in $-log$ space.
\textbf{MPMCS~probability} indicates the joint probability of the MPMCS.
These experiments have been performed on a MacBook Pro (16-inch, 2019), 2.4 GHz 8-core Intel Core i9, 32 GB 2666 MHz DDR4.
\input{\content/dataset-01-40}
\input{\content/dataset-41-80}
\section{MaxSAT formulation strategy}
Given a fault tree and its logical formulation $f(t)$, we carry out a series of steps to compute the MPMCS~as follows.
\textbf{1. Logical transformation.} Since we are interested in minimising the number of satisfied clauses, which is opposed to what MaxSAT does (maximisation), we flip all logic gates but keep all events in their positive form.
In our example, we obtain: $g(t) = (x_1 \lor x_2) \land (x_3 \land x_4 \land (x_5 \lor (x_6 \land x_7)))$.
Then, the objective is to satisfy $\neg g(t)$ where the falsified variables will indicate the minimum set of events that must simultaneously occur to trigger the top level event.
A more detailed explanation of this transformation can be found in \cite{Barrere-Arxiv2020-FTA}.
We then use the Tseitin transformation to produce in polynomial time an equisatisfiable CNF formula \cite{Tseitin70}.
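For reference, the standard Tseitin clauses for AND and OR gates can be sketched as below, using DIMACS-style integer literals (a negative integer denotes a negated variable). This is a generic illustration rather than the exact encoding used by MPMCS4FTA.

```python
def tseitin_gate(kind, g, inputs):
    """CNF clauses asserting g <-> kind(inputs), as lists of integer literals."""
    if kind == "AND":
        # g -> x_i for every input, plus (x_1 & ... & x_n) -> g
        return [[-g, x] for x in inputs] + [[g] + [-x for x in inputs]]
    if kind == "OR":
        # x_i -> g for every input, plus g -> (x_1 | ... | x_n)
        return [[g, -x] for x in inputs] + [[-g] + list(inputs)]
    raise ValueError(kind)
```

Applying this to each gate of $\neg g(t)$ yields a CNF formula whose size is linear in the number of gates, which explains the \textbf{tsVars} and \textbf{tsClauses} columns reported later.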
\textbf{2. MaxSAT weights.}
Due to the fact that MaxSAT is additive in nature and the MPMCS~problem involves the multiplication of decision variables, we transform the probabilities into a negative log-space so the multiplication becomes a sum. In addition, many SAT solvers only support integer weights so we perform a second transformation by right shifting (multiplying by 10) every value until the smallest value is covered with an acceptable level of precision. For example, 0.001 and 0.00007 would become 100 and 7 (right shift 5 times).
Additional variables introduced by the Tseitin transformation have weight 0.
We then specify the problem as a Partial Weighted MaxSAT instance by assigning the transformed probability values as a penalty score for each decision variable.
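The two numeric transformations of this step can be sketched as follows; the precision threshold (\texttt{digits}) is our own choice.

```python
import math

def maxsat_weights(probs, digits=3):
    """Map event probabilities to additive integer MaxSAT weights.

    Multiplying probabilities equals adding their -log values, so the
    most probable cut set becomes the minimum-weight MaxSAT solution.
    """
    logs = [-math.log10(q) for q in probs]
    smallest = min(x for x in logs if x > 0)
    shift = 0
    # "right shift": multiply every value by 10 until the smallest one
    # keeps `digits` significant digits as an integer
    while smallest * 10 ** shift < 10 ** (digits - 1):
        shift += 1
    return [round(x * 10 ** shift) for x in logs], shift
```

For example, probabilities $0.5$ and $0.02$ map to $-\log_{10}$ values $0.301$ and $1.699$, which become integer weights $301$ and $1699$ after three shifts.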
\textbf{3. Parallel SAT-solving architecture.}
Since different SAT solvers normally use different resolution techniques, some perform very well on certain instances and poorly on others. To address this issue, we run multiple SAT solvers in parallel and pick the solution of the solver that finishes first.
We have experimentally observed that the combination of different solvers provides good results in terms of performance and scalability.
Once the solution has been found, we translate back the transformed values into their stochastic domain and output the MCS with maximum probability.
\section{Introduction}
Considerable attention has recently been devoted to the study of non-trivial physics arising from strong spin-orbit coupling (SOC).
Such studies were initiated by theoretical proposals of topological insulators with conducting surface states protected by time reversal (TR) symmetry~\cite{KanePRL05,BernevigPRL06,ShengPRL06,BernevigScience06,FuPRB07,FuPRL07,MoorePRB07,Roy06,KanePW11},
which was then experimentally confirmed in two-dimensional (2D) HgTe/Hg$_{1-x}$Cd$_x$Te quantum wells~\cite{KonigScience07}
and indirectly by angle resolved photoemission spectroscopy (ARPES) in three dimensional (3D) systems such as
Bi$_{1-x}$Sb$_x$~\cite{HsiehNature08,HsiehScience09}, Bi$_2$Se$_3$~\cite{XiaNP09,HorPRB09}, Bi$_2$Te$_3$ and Sb$_2$Te$_3$~\cite{HsiehPRL09,HasanRMP10,QiRMP11}.
Since then, a variety of topological phases have been theoretically suggested. These include topological crystalline insulators with surface states protected by crystal lattice symmetry~\cite{Fu11, Hsieh12, Xu12, Dziawa12, Tanaka12}, Weyl semimetals with chiral fermions~\cite{Burkov11, Wan11, Volovik07, Murakami07, Yang11}, and topological magnetic insulators with quantized anomalous Hall (QAH) effects~\cite{Yu10, Liu08, Xu11, Chang13}. Furthermore, strongly interacting systems could provide a new avenue to explore more exotic phases such as topological Mott insulators and fractional Chern insulators~\cite{Sondhi13,Krempa14}.
While the number of topological phases proposed in theory is still growing, experimental confirmations
are limited to systems of group IV-VI elements. Why have such topological phases not been detected in other abundant materials such as oxides?
In particular, transition metal oxides exhibit various collective phenomena stemming from strong electronic correlations, and this has led to tremendous interest and effort in growing
oxide films to discover new functionalities. However, this effort has so far been focused mainly on 3d- and 4d-orbital systems with weak
or moderate SOC, and little attention has been paid to 5d-orbital systems with strong SOC until recently.
Among 5d-orbital systems, Ir oxides, known as Iridates, have provided an excellent playground to study the combined effects of
SOC and electron correlations.
Depending on the underlying lattice structure,
Iridates have offered a rich phase diagram~\cite{Krempa14}. Despite the different phases, a common ingredient is that the $J_{\textrm{eff}}=\frac{1}{2}$ description, which arises from strong atomic SOC, is a good starting point in building microscopic Hamiltonians.
Using the $J_{\rm eff}=\frac{1}{2}$ wavefunction, a topological insulator was proposed in 3D perovskite Iridates.~\cite{CarterPRB12}
It was found that bulk SrIrO$_3$ with the P$_{\rm bnm}$ structure exhibits a crystal-symmetry-protected nodal line which becomes a 3D nodal point when the mirror symmetry along the c-axis is broken. It becomes a topological insulator when the mirror-symmetry-breaking term is large.~\cite{CarterPRB12}
The successful growth of an Ir oxide superlattice, [(SrIrO$_3$)$_{n}$, SrTiO$_3$], where the integer ${n}$ controls the number of Ir oxide layers, using the pulsed laser deposition (PLD) technique has also been reported.~\cite{Matsuno14}
This work demonstrated how a spin-orbit magnetic insulator arises by tuning the number of SrIrO$_3$ layers.
Given that SrIrO$_3$ with P$_{\rm bnm}$ structure
possesses a crystal-symmetry-protected nodal line, it is possible to design other topological phases by employing the current experimental techniques.
While a topological insulator was proposed in an effective honeycomb bilayer obtained by fabricating a [111] superlattice structure from perovskite oxides~\cite{XiaoNC11},
atomically controlled [111] superlattices of perovskite oxides are known to be difficult to fabricate.
On the other hand, Ir oxide superlattices along the [001] axis
have been successfully made by J. Matsuno et al.~\cite{Matsuno14}, as stated above.
In this paper, we show how to realize topological phases in Ir oxide superlattices grown along the [001] axis; [(SrIrO$_3$)$_{n}$, (AMO$_3$)$_{n^\prime}$] for integer $n^\prime$ and
${n}=1$ or 2 where AMO$_3$ is a band insulator with a closed shell transition metal $M^{4+}$ and
an alkaline earth metal $A^{2+}$. To realize topological phases, one has to retain the oxygen octahedra rotation {\it and} tilting, which are necessary to generate a Rashba-like SOC in the
$J_{\rm eff}=\frac{1}{2}$ basis. Thus AMO$_3$ should have the orthorhombic P$_{\rm bnm}$ structure such as CaTiO$_3$, SrZrO$_3$, or SrHfO$_3$ instead of SrTiO$_3$ with tetragonal structure.
The topological states realized in these superlattices include topological magnetic insulators with QAH effects, non-trivial valley insulators, topological
insulators with TR symmetry, and topological crystalline insulators.
This paper is organized as follows.
In Sec. 2, we show how a 2D topological insulator can be made in an Ir oxide single layer system. When the oxygen octahedron is rotated and tilted away from the
c-axis, there are two 2D Dirac points similar to the honeycomb lattice~\cite{KanePRL05-2}. These 2D Dirac points are protected by
the b-glide symmetry.
Breaking this b-glide symmetry generates a 2D topological insulator, and
furthermore in the presence of a magnetic ordering and/or in-plane magnetic field, the system
becomes a topological magnetic insulator. This could be confirmed by quantized Hall conductance in Hall measurement.
In Sec. 3, we propose two different types of bilayer Ir oxides. Depending on the layer stacking,
one becomes a topological magnetic insulator for any small magnetic field that breaks the b-glide symmetry.
The other case possesses various topological phases including topological crystalline, topological magnetic, and mirror valley insulators.
In each section, we offer a schematic crystal structure of Ir oxide superlattices and physical origins of such topological phases based on symmetry of lattice and TR.
We summarize our findings in the last section.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{single_layer.png}
\caption{(color online) (left) IrO$_6$ octahedron with the rotation $\theta$ along the c-axis and tilting $\phi$ along the local $(110)$ axis. (right) Single layer Ir oxide superlattice structure. The IrO$_2$ layer contains two different sites, denoted by A and B, representing oxygen octahedra with different rotations and tiltings, $(\theta,\phi)$ and $(-\theta, -\phi)$, and it is grown
on a band insulator AMO$_3$ with P$_{\rm bnm}$ structure. The primitive lattice vectors are $\vec{a}=(\hat{x}-\hat{y})/2$ and $\vec{b}=(\hat{x}+\hat{y})/2$.}
\label{fig:slayer}
\end{figure}
\section{Single-Layer Iridates}
\subsection{Model Hamiltonian and Dirac fermion}
In bulk samples AMO$_3$ with P$_{\rm bnm}$ structure, each M atom surrounded with six O atoms forms an octahedron.
This octahedron is rotated by an angle $\theta$ around the c-axis and tilted by an angle $\phi$ around the local (110) direction as shown in Fig. ~\ref{fig:slayer}. The rotation and tilting angles alternate between two neighboring IrO$_6$ octahedra in the plane and between adjacent layers making four M atoms in a unit cell.
To engineer a single-layer Ir oxide, IrO$_2$ layer is grown from AMO$_3$ as shown in Fig. ~\ref{fig:slayer}.
The $x$- and $y$-directions are rotated by 45 degrees from the crystal $a$- and $b$-axes for convenience.
As we state above, the alternating rotation and tilting of neighboring IrO$_6$ is crucial to realize topological phases for the following reason.
The relatively strong SOC of Ir atoms splits t$_{\rm 2g}$ states into $J_{\rm eff}=\frac{1}{2}$ and $J_{\rm eff}=\frac{3}{2}$,
and the Ir$^{4+}$ ionic configuration, with valence $5d^{5}$, makes these iridates half-filled in the $J_{\rm eff}=\frac{1}{2}$ band. Even though the tetragonal distortion of IrO$_6$ octahedra may affect the validity of the $J_{\rm eff}=\frac{1}{2}$ description in reality, the tetragonal crystal field splitting is small compared to the SOC of iridium~\cite{JinPRB09,AritaPRL12}. Thus, $J_{\rm eff}=\frac{1}{2}$ states are well separated from $J_{\rm eff}=\frac{3}{2}$ states, which makes the $J_{\rm eff}=\frac{1}{2}$ picture still adequate to describe the physics near the Fermi energy. Note that $J_{\rm eff} =\frac{1}{2} $ consists of
$| J_z = \pm \frac{1}{2} \rangle = \frac{1}{\sqrt{3}} \left( |d_{xy,s} \rangle \pm |d_{yz,-s} \rangle + i |d_{xz,-s} \rangle \right)$
where $\pm s$ represents spin-1/2 up and down states~\cite{footnoteaxis}, respectively.
In the presence of the alternating tilting and rotation between neighboring sites,
a hopping integral between $d_{xy,s}$ and $d_{xz/yz,s}$ orbitals becomes finite.
Since $d_{xy,s}$ and $d_{xz/yz,s}$ belong to different spin states of $|J_z\rangle$,
this hopping involves $|J_z= \frac{1}{2} \rangle$ and $|J_z =-\frac{1}{2} \rangle$ states which then generates a spin-flip Rashba-like term.
For a single layer of IrO$_2$, there are two sites due to different rotation ($\theta$) and tilting angle ($\phi$) between nearest-neighbor sites. We denote these Ir sites by
A and B, indicating different oxygen environments as shown in Fig.~\ref{fig:slayer}. The lattice has a rectangular structure associated with a glide symmetry plane, which corresponds to invariance under a 1/2 translation along a certain direction followed by a reflection. In this lattice, the translation is along the $b$-axis, and the symmetry is thus named the b-glide.
The effect of this glide plane on $t_{2g}$ orbitals is to interchange $d_{yz}$ with $d_{xz}$ orbital and to exchange A with B site.
Introducing the Pauli matrices $\vec{\tau}$ and $\vec{\sigma}$ for
the sublattice $A$ and $B$, and $J_{\rm eff}=1/2$ pseudospin, respectively, this b-glide symmetry plane is
expressed as,
\begin{eqnarray}
\hat{\Pi}_b=\frac{i}{\sqrt{2}}(\sigma_x-\sigma_y)\tau_x \hat{k}_{bg},
\label{eq:a1}
\end{eqnarray}
where $\hat{k}_{bg}$ is the operator acting on crystal momentum space as $\hat{k}_{bg} : (k_x,k_y) \rightarrow (k_y, k_x)$.~\cite{CarterPRB12}
\begin{figure*}
\subfigure[$\phi=0$ with b-glide symmetry]{
\includegraphics[width=5.5cm]{Notiltrot.pdf}
\label{fig:notiltrot}
}
\subfigure[Finite $\phi$ with b-glide symmetry]{
\includegraphics[width=5.5cm]{bglidehas.pdf}
\label{fig:bglidehas}
}
\subfigure[Finite $\phi$ without b-glide symmetry]{
\includegraphics[width=5.5cm]{bglideno.pdf}
\label{fig:bglideno}
}
\caption{(color online) Band dispersion of single layer Ir oxide (a) without tilting $\phi$. It shows a fourfold degeneracy along the $S=(\pi,0) \rightarrow X=(\frac{\pi}{2},-\frac{\pi}{2})$ direction.
(b) Finite rotation and tilting leaves two Dirac points at $X$ and $Y=(\frac{\pi}{2},\frac{\pi}{2})$. (c) When the b-glide symmetry is broken, Dirac point acquires a finite gap at $X$ and $Y$
points. The set of $(\theta,\phi)$ for both (b) and (c) is $(7^{\circ},19^{\circ})$. }
\label{fig:singleband}
\end{figure*}
A tight-binding model can be constructed from $J_{\textrm{eff}}=1/2$ bands with the basis $(A \uparrow, B \uparrow, A \downarrow, B \downarrow)$ where A and B denote two different Ir sites in the unit cell as discussed above, and $(\uparrow, \downarrow)$ represents $J_z=\pm \frac{1}{2}$.
Taking into account nearest and next-nearest hoppings, the Hamiltonian is given by
\begin{eqnarray}
H_0({\bf{k}})
&=&\epsilon_0({\bf{k}})\tau_x + \epsilon^{\prime}({\bf{k}}) {\bf I} \nonumber\\
&+&\epsilon_{1d}({\bf{k}})\sigma_z\tau_y+\epsilon_y({\bf{k}})\sigma_y\tau_y+\epsilon_x({\bf{k}})\sigma_x\tau_y,
\label{eq:a3}
\end{eqnarray}
where
\begin{eqnarray}
\epsilon_{0/1d}({\bf{k}})&=&2t_{0/1d}(\cos(k_x)+\cos(k_y)),\nonumber\\
\epsilon_{y/x} ({\bf{k}})&=& t_1\cos(k_{x/y})+t_{2}\cos(k_{y/x}),\nonumber\\
\epsilon^\prime({\bf k})&=&t^\prime\cos(k_x)\cos(k_y).
\label{eq:a4}
\end{eqnarray}
Here $t_0$ is the nearest neighbor (NN) intra-orbital hopping and $t_{1d}$ is the NN hopping between $d_{yz}$ and $d_{xz}$ orbitals.
$t^\prime$ is the next-nearest neighbor (NNN) intra-orbital hopping.
$t_1$ and $t_2$ are the NN hoppings from the $d_{yz}$ and $d_{xz}$ orbitals to the $d_{xy}$ orbital, respectively. $t_{1d}$, $t_1$ and $t_2$ vanish without the rotation and tilting of octahedra. The hopping parameters are obtained using the Slater-Koster method~\cite{Slater54} and the parameters
are functions of $\theta$ and $\phi$. For example, they are given by $(t^\prime, t_0, t_{1d}, t_{1}, t_{2})/t=(-0.3,-0.6,-0.15,0.15,0.45)$ when $(\theta,\phi)\approx(7^{\circ},19^{\circ})$, where
$t$ is the $\pi$-bonding between d-orbitals $t_{dd\pi}$, and we set $t_{dd\pi}:t_{dd\sigma}: t_{dd\delta} = 1:\frac{3}{2}:\frac{1}{4}$. Note that the tight-binding parameters are fully determined by the set of $(\theta,\phi)$. Different values of $(\theta,\phi)$ will simply modify the detailed shape of the band dispersion. Thus, by tuning the magnitude of $(\theta,\phi)$, it is possible to have electron and hole pockets near the Fermi energy. However, the topological feature of the band structure (characterized by the Chern numbers) remains intact. This particular choice of $(\theta,\phi)$ is made to avoid the electron and hole pockets at $\epsilon_F$, but the topological properties do not depend on the choice of $(\theta,\phi)$.
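As a consistency check, the Hamiltonian of Eq.~(\ref{eq:a3}) with the dispersions of Eq.~(\ref{eq:a4}) can be assembled numerically from the quoted parameters. The sketch below (basis ordering $(A\uparrow, B\uparrow, A\downarrow, B\downarrow)$, energies in units of $t$) also verifies the b-glide relation $\hat{\Pi}_b H_0(k_x,k_y)\hat{\Pi}_b^{\dagger}=H_0(k_y,k_x)$ implied by Eq.~(\ref{eq:a1}) at an arbitrary momentum.

```python
import numpy as np

# Pauli matrices: sigma acts on the J_eff = 1/2 pseudospin, tau on the A/B sublattice
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# hopping parameters for (theta, phi) ~ (7, 19) degrees, in units of t
tp, t0, t1d, t1, t2 = -0.3, -0.6, -0.15, 0.15, 0.45

def H0(kx, ky):
    """Single-layer tight-binding Hamiltonian in the (A up, B up, A down, B down) basis."""
    e0  = 2 * t0  * (np.cos(kx) + np.cos(ky))
    e1d = 2 * t1d * (np.cos(kx) + np.cos(ky))
    ey  = t1 * np.cos(kx) + t2 * np.cos(ky)
    ex  = t1 * np.cos(ky) + t2 * np.cos(kx)
    ep  = tp * np.cos(kx) * np.cos(ky)
    return (e0 * np.kron(s0, sx) + ep * np.kron(s0, s0)
            + e1d * np.kron(sz, sy)
            + ey * np.kron(sy, sy) + ex * np.kron(sx, sy))

# b-glide operator; its momentum part swaps kx and ky
Pb = 1j / np.sqrt(2) * np.kron(sx - sy, sx)

bands = np.linalg.eigvalsh(H0(0.3, 1.1))  # the four J_eff = 1/2 bands at one k-point
```

Sweeping such a diagonalization along the high-symmetry path reproduces dispersions of the kind shown in Fig.~\ref{fig:singleband}.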
The band structure is shown in Fig.~\ref{fig:singleband}. Without the tilting angle $\phi$, two bands are degenerate along $X=(\frac{\pi}{2},-\frac{\pi}{2})$ to $S=(\pi,0)$ as shown in Fig~\ref{fig:notiltrot}.
However, when both rotation and tilting of octahedra are present, this degeneracy is broken,
and there are two Dirac points at $X$ and $Y$ protected by the b-glide symmetry as shown in Fig.~\ref{fig:bglidehas}.
The Dirac point may appear below the Fermi energy $\epsilon_F$ when the tilting angle $\phi$ is not significant ($\phi < 17^{\circ}$).
Indirect hopping via the oxygens can change the strength of hopping parameters as well, but the topological nature of phases described here is not altered
by such quantitative changes.
When the b-glide symmetry is broken, for example by a strain field along x-direction, these Dirac points are gapped as shown in
Fig. ~\ref{fig:bglideno}. In the following subsection, we discuss the topological nature of this insulator by providing the corresponding Chern numbers
and edge state analysis.
\begin{figure*}[t]
\subfigure[TI]{
\includegraphics[width=7.9cm]{TI1.pdf}
\label{fig:a1TI}
}
\subfigure[QAHI]{
\includegraphics[width=8cm]{MTI1.pdf}
\label{fig:a1MTI}
}
\caption{(color online) Edge state calculation of (a) the topological insulator (TI) shown in Fig.~\ref{fig:bglideno} and (b) the quantized anomalous Hall insulator (QAHI) when TR is broken due to a non-collinear magnetic ordering or an in-plane magnetic field. Grey lines represent bulk states and red (blue) lines denote edge states at $L=0$ ($L=N$) plotted along $k_a=k_x-k_y-\pi$. The parameter set is the same as for the band dispersion in Fig.~\ref{fig:bglideno}. The two gapless edge modes at $L=0$/$L=N$ (red/blue) crossing at the 1D TR invariant momentum indicate that the system is a 2D TI. After breaking TR, only one gapless edge state is left, propagating along the boundary.}
\label{fig:a1}
\end{figure*}
\subsection{Topological Insulator and quantized anomalous Hall effects}
Since the Dirac points are protected by the b-glide symmetry, any small perturbation that breaks the b-glide symmetry opens a gap at
these two Dirac points. The b-glide operator is given by Eq.~(\ref{eq:a1}), and thus a small strain along x (or y)-direction is sufficient to break the b-glide symmetry.
Such a broken b-glide symmetry term allows additional NNN and third NN hoppings as follows.
\begin{equation}
\begin{split}
\epsilon_{2n}({\bf{k}})&=(t_{2n}\cos(k_x+k_y)+t^\prime_{2n}\cos(k_x-k_y))\tau_z,\\
\epsilon_{3n}({\bf{k}})&=2t_{3n}(\cos(2k_x)-\beta \cos(2 k_y))\tau_z,
\end{split}
\label{eq:a5}
\end{equation}
where $t_{2n}$ and $t^\prime_{2n}$ are the NNN intra-orbital hoppings, $t_{3n}$ is the third NN intra-orbital hopping, and $\beta$ is the parameter that measures the strength of the broken b-glide term.
The tight-binding parameters $(t_{2n},t^\prime_{2n},t_{3n})=(0.098,-0.1,0.06)$ are obtained by the Slater-Koster method using the same set of angles ($\theta$,$\phi$) as above; with $\beta=0.6$, the band dispersion is as shown in Fig.~\ref{fig:bglideno}.
The non-trivial topology behind the gapped Dirac point can be revealed through the following edge state calculation. The slab computation has been performed in a zigzag slab geometry periodic along $\vec{b}=\frac{\hat{x}+\hat{y}}{2}$, while it has an open boundary along $\vec{a}=\frac{\hat{x}-\hat{y}}{2}$; Along $\vec{a}$ direction, one end terminates at atom A and the other side ends with atom B. When TR symmetry is not broken, the system shows gapless edge modes propagating from valence band to conduction band
as shown in Fig.~\ref{fig:a1TI}. These two gapless edge states cross at a time reversal invariant momentum (TRIM) point indicating their protection by the TR symmetry.
As long as TR symmetry is present, the degeneracy cannot be lifted by disorder or weak interactions. Indeed, we have checked that the edge states are robust, even in the presence
of a random sublattice potential.
The $Z_2$ index provides another way to confirm the topological insulator. It is straightforward to compute the eigenvalues of the inversion operator~\cite{Fu07}.
The result shows that the $Z_2$ index is $1$, consistent with the edge state calculation.
Another effect of strong SOC in Iridates is an amplification of electronic correlation leading to a spin-orbit Mott insulator.
Due to the SOC, the relevant bandwidth $W$ is that of the $J_{\rm eff} =\frac{1}{2}$ band rather than the full t$_{\rm 2g}$ band, and thus the ratio of the Hubbard interaction $U$
to the bandwidth $W$ is magnified in Iridates.~\cite{Moon08,KimSci09}
In order to understand the magnetic ordering pattern, let us consider the Hubbard model with tight-binding Hamiltonian of Eq.~\ref{eq:a3} where $\epsilon_{1d}({\bf k})$ and $\epsilon_{y/x}({\bf k})$ contain pseudospin dependent terms. This NN Hamiltonian can be expressed as
\begin{eqnarray}
H_0=&&\sum_{\langle i.j \rangle}\{t_0 c^{\dagger}_{i,A,\sigma}c_{j,B,\sigma} + i c^{\dagger}_{i,A,\alpha}(\vec{v}\cdot\vec{\sigma})_{\alpha\beta} c_{j,B,\beta}\}+ {\rm h.c.}\nonumber\; ,
\label{eq:r1}
\end{eqnarray}
where $\vec{v}=(\frac{t_2}{2},\frac{t_1}{2},t_{1d})$ along the x-bond while $\vec{v}=(\frac{t_1}{2},\frac{t_2}{2},t_{1d})$ along the y-bond. Here $c^{\dagger}_{i,A/B,\sigma}$ represents the operator that creates an electron on site $i$ with sublattice $A/B$ and pseudospin $\sigma$.
In large $U$ limit, the spin model is then obtained as~\cite{DM1960}
\begin{eqnarray}
H_{\rm eff}=J\sum_{\langle i,j \rangle}\vec{S}_i\cdot\vec{S}_j +\sum_{\langle i,j \rangle}\vec{D}_{ij}\cdot(\vec{S}_i \times \vec{S}_j)\; .
\label{eq:r2}
\end{eqnarray}
Here $J=\frac{4}{U}[(t_0)^2-\vec{v}\cdot\vec{v}]$ and $\vec{D}_{ij}=\frac{8\epsilon_i t_0 \vec{v}}{U}$ where $\epsilon_i$ is the change of sign in the adjacent bond~\cite{Krempa14,Carter2013}.
Note that when the bond retains the inversion symmetry, the Dzyaloshinskii-Moriya (DM) vector $\vec{D}$ should vanish. However, due to the different rotation and tilting angles of oxygen octahedra between neighboring Ir atoms, which break the inversion symmetry on the bond, the effective spin model of Eq.~\ref{eq:r2} is obtained. The ground state of such a spin Hamiltonian has a non-collinear form:
\begin{eqnarray}
m_{(100)}\sigma_x+m_{(010)}\sigma_y\tau_z+m_{(001)}\sigma_z\tau_z\; ,
\label{eq:r3}
\end{eqnarray}
where $m_{(010)}$ and $m_{(001)}$ represent sublattice antiferromagnetic orderings, while $m_{(100)}$ denotes a ferromagnetic component of ordering. The exact form and amplitudes of the magnetic orderings in Eq.~\ref{eq:r3} are related to the crystal symmetry and detailed hopping parameters on the bond. However, the specific magnetic pattern is not crucial to realize the QAH effect in single-layer iridates as long as TR symmetry is broken.
In the absence of TR symmetry, the topological invariance characterizing the QAH effects is identified by the charge Chern number defined as,
\begin{equation}
C_p=\frac{1}{2\pi}\int d^2 {\bf{k}}\Omega^z_p({\bf{k}}),
\label{eq:a7}
\end{equation}
where $p$ is the band index and $\Omega^z_p({\bf k})$ is z-component of p-th band Berry curvature ${\bf{\Omega}}_p({\bf{k}})$ given in the Appendix.
The quantized transverse Hall conductance $\sigma_{xy}$ is then given by
\begin{equation}
\sigma_{xy}=\frac{e^2}{h}\sum_{p \in occupied} C_p,
\label{eq:a8}
\end{equation}
where the sum goes over all occupied bands below Fermi energy $\epsilon_F$.
For the single layer 2D Ir oxide, the quantized Hall conductivity is obtained as
\begin{equation}
\sigma_{xy}=\frac{e^2}{h},
\label{eq:a9}
\end{equation}
indicating the topological invariance $C \equiv \sum_{p \in occupied }C_p=1$ related to the edge currents propagating along one direction on the sample boundary~\cite{Hatsugai93}
shown in Fig. ~\ref{fig:a1MTI}.
Note that the QAH phase depends on the magnitude of the ordering. Gaps of different sizes appear at the X and Y points after breaking the b-glide symmetry; see Fig.~\ref{fig:bglideno}. If the strength of the magnetic ordering reverses the bands at the X point, for instance, while keeping the gap at the Y point intact, the system turns into the QAH phase with the quantized $\sigma_{xy}$ of Eq.~\ref{eq:a9}. However, if the magnitude of the ordering is sufficiently large to reverse the bands at both the X and Y points, the system turns into a trivial insulator. Thus, above the magnetic ordering temperature, the QAH phase should show up in a certain range of external magnetic field.
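In practice, the Chern number of Eq.~(\ref{eq:a7}) can be evaluated on a discretized Brillouin zone with the lattice field-strength method of Fukui, Hatsugai, and Suzuki. The sketch below applies it to a generic two-band quantized-anomalous-Hall model (the Qi-Wu-Zhang model), purely to illustrate the procedure; it is not the four-band iridate Hamiltonian of the text.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chern_number(h, nk=40):
    """Chern number of the lowest band of h(kx, ky) on an nk x nk BZ grid."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky))[1][:, 0]  # lowest-band eigenvector
    # gauge-invariant link variables and plaquette field strengths
    link = lambda i, j, di, dj: np.vdot(u[i, j], u[(i + di) % nk, (j + dj) % nk])
    F = sum(np.angle(link(i, j, 1, 0) * link((i + 1) % nk, j, 0, 1)
                     * np.conj(link(i, (j + 1) % nk, 1, 0))
                     * np.conj(link(i, j, 0, 1)))
            for i in range(nk) for j in range(nk))
    return round(F / (2 * np.pi))

# generic two-band QAH model used only to illustrate the method
def qwz(m):
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (m + np.cos(kx) + np.cos(ky)) * sz)
```

Summing the returned $C_p$ over the occupied bands of the gapped iridate model then gives the quantized $\sigma_{xy}$ of Eq.~(\ref{eq:a8}).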
\section{Bilayer Iridates}
To realize the topological phases in the single IrO$_2$ layer, the b-glide symmetry should be externally broken.
This requires a strain field in a certain direction, which is not trivial in an experimental setting.
In this section, we propose two types of bilayer IrO$_2$ systems, which naturally hold topological phases without a lattice symmetry breaking perturbation.
Since the single IrO$_2$ layer has two different sets of rotation and tilting angles,
one way to engineer bilayer systems is to stack two layers of $A$ and $B$ on top of each other.
Note that
$A$ and $B$ per unit cell have the rotation and tilting angle $(\theta,\phi)$ and $(-\theta,-\phi)$, respectively.
Another way to stack two single layers is to give the second layer a different rotation and tilting set, such as
$(\theta,-\phi)$ and $(-\theta,\phi)$ denoted by $C$ and $D$ sites, respectively.
We call the first case ABAB stacking and the other case ABCD stacking: see Fig. ~\ref{fig:abab}.
The distance between top and bottom layers in both cases can be manipulated by the number of AMO$_3$ layers in between,
and the nature of topological phases is not altered by such quantitative changes.
Let us consider the ABAB stacking case first.
\subsection{ABAB stacking}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{ABCDABAB.png}
\caption{(color online) (left) ABAB stacking with A=($\theta,\phi$) and B=($-\theta,-\phi$) types of octahedra rotation and tilting.
(right) ABCD bilayer stacking which contains A and B in the top layer, while C=$(\theta,-\phi)$ and D=$(-\theta,\phi)$ types of octahedra rotation and tilting in the bottom layer.}
\label{fig:abab}
\end{figure}
\begin{figure}[t]
\subfigure[$\phi=0$]{
\includegraphics[width=7cm]{ABCDnotilt.pdf}
\label{fig:bsnotilt}
}
\subfigure[Finite $\phi$ with ABAB stacking]{
\includegraphics[width=7cm]{ABABtilt.pdf}
\label{fig:bstilt}
}
\subfigure[Finite $\phi$ with ABCD stacking]{
\includegraphics[width=7cm]{ABCDtilt.pdf}
\label{fig:abcdtilt}
}
\caption{(color online) Band structure for the bilayer. (a) With no tilt effect ({\it i.e.} $\phi=0$) in the octahedral environment, there is a degenerate line circling the $\Gamma$ point~\cite{footnote1}. (b) ABAB: finite tilting lifts the line-node degeneracy but leaves one pair of Dirac points, protected by the b-glide symmetry, along $S\rightarrow X$ for ABAB stacking. (c) Band structure for the ABCD bilayer with finite tilting $\phi$; it has a band gap at $(k_0,\pm k_0)$ (circled on the band lines). The Fermi energy $\epsilon_F=0$ is indicated by gray solid lines.}
\label{fig:bsbilayer}
\end{figure}
\begin{figure*}[t]
\subfigure[FM]{
\includegraphics[width=7cm]{EdgeSameh009.pdf}
\label{fig:bsfm}
}
\subfigure[AFM]{
\includegraphics[width=7cm]{EdgeSameh006.pdf}
\label{fig:bsafm}
}
\caption{(color online) Slab dispersion with (a) ferromagnetic (FM) ordering of strength $m_{(110)}=0.09t$ and (b) antiferromagnetic (AFM) ordering of strength $m_{(1\bar{1}0)}=0.06t$.
Two gapless edge modes at $L=0$ and $L=N$ boundary are represented by red and blue, respectively. }
\label{fig:bs}
\end{figure*}
As presented in Fig.~\ref{fig:abab}, the ABAB bilayer structure with significant rotation and tilting can be obtained by inserting one layer of the band-insulator material MO$_2$ (M=Zr, Hf) between two IrO$_2$ layers. The tight-binding Hamiltonian is given by
\begin{equation}
H_{ABAB}({\bf{k}})=\sum_{i=1,2} H^i_{0} ({\bf k})+H_{12}({\bf{k}}),
\label{eq:bs1}
\end{equation}
where $H^i_{0}$ represents the top ($i=1$) and bottom ($i=2$) IrO$_2$ layers and is the same as Eq.~(\ref{eq:a3}).
$H_{12}$ contains the hopping terms between the two layers; introducing another set of Pauli matrices $\vec{\nu}$ for the layer degree of freedom, it is written as
\begin{equation}
\begin{split}
H_{12}({\bf{k}})&=\epsilon_{di}({\bf{k}})\nu_x\\
&+\textrm{Re}(\epsilon_{dz}({\bf{k}}))\sigma_y\tau_y\nu_x+\textrm{Im}(\epsilon_{dz}({\bf{k}}))\sigma_z\tau_y \nu_y\\
&+\textrm{Re}(\epsilon_{z}({\bf{k}}))\sigma_y \tau_y \nu_x+\textrm{Im}(\epsilon_{z}({\bf{k}}))\sigma_x\tau_y \nu_x\\
&+\textrm{Re}(\epsilon_{z}^{\prime}({\bf{k}}))\sigma_y\tau_y \nu_y+\textrm{Im}(\epsilon_{z}^{\prime}({\bf{k}}))\sigma_x\tau_y\nu_y,
\end{split}
\label{eq:bs2}
\end{equation}
where
\begin{equation}
\begin{split}
\epsilon_{di}({\bf{k}})&=t_z+t_{(110)}\cos(k_x+k_y)+t_{(1\bar{1}0)}\cos(k_x-k_y),\\
\epsilon_{dz}({\bf{k}})&=t_{dz}(\cos(k_x)+\cos(k_y))+i t^{\prime}_{dz}(\sin(k_x)+\sin(k_y)),\\
\epsilon_{z}({\bf{k}})&=(t_{2z}\cos(k_y)+t_{1z}\cos(k_x))+i (k_x \leftrightarrow k_y),\\
\epsilon_{z}^{\prime}({\bf{k}})&=(t^{\prime}_{2z}\sin(k_y)+t^{\prime}_{1z}\sin(k_x))+i (k_x \leftrightarrow k_y).
\end{split}
\label{eq:bs3}
\end{equation}
Here
$t_z$ is the NN hopping between two layers. $t_{(110)}$ and $t_{(1\bar{1}0)}$ are the third NN intra-orbital hopping along $(110)$ and $(1\bar{1}0)$, respectively. $t_{dz}$ and $t^{\prime}_{dz}$ arise from $d_{yz}$ orbital to $d_{xz}$ orbital
NNN hopping due to the rotation and tilting angles. $t_{2z}, t_{1z}, t^{\prime}_{2z}$ and $t^{\prime}_{1z}$ are given by the overlap hopping integral between $d_{yz} (d_{xz})$ and $d_{xy}$-orbital.
The parameters in the tight-binding Hamiltonian Eq.~(\ref{eq:bs1}) are obtained based on the Slater--Koster method~\cite{Slater54}: $(t_z,t_{(110)},t_{(1\bar{1}0)},t_{dz},t^{\prime}_{dz},t_{2z},t_{1z},t_{2z}^{\prime},t_{1z}^{\prime})/t=(-0.13,-0.01,-0.09,-0.03,-0.01,0.014,
0.01,0.062,0.01)$ for the same $\theta$ and $\phi$ used in the single layer.
The band structure in Fig.~\ref{fig:bsnotilt} shows that there are two line nodes around $X$ and $Y$ when $\phi=0$. A finite tilting $\phi$ lifts
this band degeneracy but keeps one pair of Dirac points along the high-symmetry line $X \rightarrow S$, which is protected by the b-glide symmetry, as shown in Fig.~\ref{fig:bstilt}.
Due to the electronic correlation and the DM interaction, a non-collinear magnetic ordering is expected. One example of such non-collinear orderings has the form
\begin{equation}
m_{(110)} \left( \sigma_x + \sigma_y \right) + m_{(1\bar{1}0)} \left( \sigma_x - \sigma_y \right) \tau_z+ m_{(001)}\sigma_z\tau_z.
\label{eq:bs4}
\end{equation}
Since the exact direction of the magnetic ordering is not important for the topological nature,
we compute the Hall conductivity for the (a) $m_{(110)} \neq 0$ and (b) $m_{(1\bar{1}0)} \neq 0$ cases.
For both cases, we find that it is quantized as
\begin{equation}
\sigma^{bilayer}_{xy}=2\frac{e^2}{h},
\label{eq:bs5}
\end{equation}
which implies the charge Chern number defined in Eq.~(\ref{eq:a7}) for the entire valence bands $C=2$.
The edge states computed in the zigzag slab geometry are shown
for case (a) in Fig.~\ref{fig:bsfm} and case (b) in Fig.~\ref{fig:bsafm}, respectively.
This also confirms the existence of the two gapless edge modes propagating along the sample boundary.
Thus any magnetic ordering (or in-plane magnetic field) leads to a topological magnetic insulator with QAH effect in the 2D ABAB stacked bilayer Ir oxides.
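The charge Chern number behind the quantized $\sigma_{xy}$ above is defined as in Eq.~(\ref{eq:a7}). As a self-contained illustration of how such an invariant can be evaluated numerically, the following sketch applies the standard Fukui--Hatsugai--Suzuki link-variable method to a generic two-band stand-in (the Qi--Wu--Zhang model); the model, grid size, and mass values are our assumptions for illustration, not the iridate Hamiltonian itself.

```python
import cmath
import math

# Illustration only: a generic two-band Chern-insulator model, H(k) = d(k).sigma,
# used as a stand-in; not the bilayer tight-binding Hamiltonian of this work.

def lower_state(kx, ky, m):
    """Unnormalized lower-band eigenvector of H(k) = d(k) . sigma."""
    dx = math.sin(kx)
    dy = math.sin(ky)
    dz = m + math.cos(kx) + math.cos(ky)
    dn = math.sqrt(dx * dx + dy * dy + dz * dz)
    # pick whichever analytic eigenvector formula is non-degenerate
    if dz > 0.0:
        return (dx - 1j * dy, -(dz + dn))
    return (dz - dn, dx + 1j * dy)

def chern(m, n=24):
    """Sum of Berry-flux plaquette phases over the Brillouin zone / 2*pi."""
    def psi(i, j):
        return lower_state(2 * math.pi * (i % n) / n,
                           2 * math.pi * (j % n) / n, m)

    def link(a, b):
        # overlap <a|b>; positive norms drop out of the plaquette phase
        return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

    total = 0.0
    for i in range(n):
        for j in range(n):
            p00, p10 = psi(i, j), psi(i + 1, j)
            p11, p01 = psi(i + 1, j + 1), psi(i, j + 1)
            total += cmath.phase(link(p00, p10) * link(p10, p11)
                                 * link(p11, p01) * link(p01, p00))
    return total / (2.0 * math.pi)

# |C| = 1 in the topological window, C = 0 in the trivial one
assert round(abs(chern(1.0))) == 1
assert round(chern(3.0)) == 0
```

The link-variable construction returns an integer for any gauge choice, which is why the unnormalized eigenvectors suffice here.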
The difference between the single layer and the bilayer ABAB stacking deserves some discussion, as the bilayer is obtained simply by stacking the AB single layer.
The Dirac nodes at X and Y TRIM points of the single layer are protected by the b-glide symmetry. However,
finite hopping integrals between two layers generate the different size of gaps at X and Y points in the ABAB bilayer system, and
the Dirac point is shifted to a non-symmetric point. Thus any magnetic field or magnetic ordering that breaks the b-glide symmetry
would turn the system into a topological magnetic insulator. On the other hand, in the single layer, a magnetic field and/or ordering that breaks TR and the b-glide symmetry simultaneously induces the same
size of gap at the X and Y points,
making the system a trivial insulator. Thus an external b-glide-symmetry-breaking perturbation is necessary to generate different gaps at $X$ and $Y$ in order to realize a QAH insulator in
the single layer case. Below we consider the other type of layer stacking, which offers various topological phases.
\subsection{ABCD stacking}
\begin{figure*}[t]
\begin{minipage}[h]{0.3\textwidth}
\subfigure[QAH]{
\includegraphics[width=5.3cm]{EdgeQAH.pdf}
\label{fig:qah}
}
\subfigure[QVH]{
\includegraphics[width=5cm]{EdgeQVH.pdf}
\label{fig:qvh}
}
\end{minipage}
\begin{minipage}[h]{0.3\textwidth}
\subfigure[Phase Diagram]{
\includegraphics[width=6.cm]{PD13degree.pdf}
\label{fig:pd}
}
\end{minipage}
\begin{minipage}[h]{0.37\textwidth}
\subfigure[MVH]{
\includegraphics[width=5.2cm]{EdgeMVHhz0.pdf}
\label{fig:mvh}
}
\subfigure[TCI]{
\includegraphics[width=5cm]{EdgeTCIhz0.pdf}
\label{fig:tci}
}
\end{minipage}
\caption{(color online) (c) Phase diagram for rotation angle $\theta=13^{\circ}$ (middle panel), plotted as the z-direction exchange field $h_z$ in units of Tesla (T) versus the tilting angle $\phi$. The different phases are characterized by different topological invariants $(C_{mv},C_m,C_v,C)$. The edge states for each phase are displayed in (a) QAH, (b) quantized valley Hall (QVH), (d) mirror valley Hall (MVH), and (e) topological crystalline insulator (TCI). Two gapless edge modes in (a), (b), (d) and (e) at the $L=0$ and $L=N$ boundaries are represented by red and blue, respectively. Edge states are purple (a mixture of red and blue) in (d) and (e) because of the degeneracy between the edge modes at $L=0$ and $L=N$. See the main text for the finite $C_{mv}$ and $C_m$ related to these edge modes.}
\label{fig:pdedge}
\end{figure*}
The crystal structure with ABCD stacking is displayed in Fig.~\ref{fig:abab}.
The tight-binding Hamiltonian for this stacking is given by
\begin{eqnarray}
H_{ABCD}({\bf{k}})=\sum_{i=\pm} H^i_0({\bf{k}})+H^\prime_{12}({\bf{k}}),
\label{eq:b1}
\end{eqnarray}
where
\begin{eqnarray}
H^{\pm}_{0}({\bf{k}})&=&\epsilon^\prime({\bf{k}}){\bf{I}}+\epsilon_0 ({\bf{k}})\tau_x+\epsilon_{1d}({\bf{k}})\sigma_z\tau_y\nonumber\\
&&\pm (\epsilon_y({\bf{k}})\sigma_y\tau_y+\epsilon_x({\bf{k}})\sigma_x\tau_y),\\
H^\prime_{12}({\bf{k}})&=&\epsilon_{di}({\bf{k}})\nu_x+\epsilon_{12}({\bf{k}})\tau_x\nu_x
+t_z^{\prime}(\sigma_y+\sigma_x)\tau_z\nu_y.\nonumber
\label{eq:b2}
\end{eqnarray}
The various dispersions $\epsilon({\bf k})$s in $H^{\pm}_{0}$ have the same expression as Eq.~(\ref{eq:a4}), which represent intra-layer hopping integrals for top ($i=+$) and bottom ($i=-$) layer.
$H^\prime_{12}$ contains hopping paths between the two layers, and the dispersion $\epsilon_{di}({\bf k})$ is the same as Eq.~(\ref{eq:bs3}). $t_z^{\prime}$
represents the inter-layer hopping from the one-dimensional ($d_{xz/yz}$) orbitals to the $d_{xy}$ orbital, and
\begin{eqnarray}
\epsilon_{12}({\bf{k}})=t_{12}(\cos(k_x)+\cos(k_y)),
\label{eq:b3}
\end{eqnarray}
where $t_{12}$ denotes the NNN inter-layer intra-orbital hopping.
In addition to the b-glide symmetry $\hat{\Pi}_b$ in Eq.~(\ref{eq:a1}),
there exists another glide plane in this bilayer system, which interchanges the top and bottom layers:
\begin{eqnarray}
\hat{\Pi}_{layer}=\frac{i}{\sqrt{2}}(\sigma_x+\sigma_y)\tau_x\nu_x \hat{k}_{layer},
\label{eq:b4}
\end{eqnarray}
where $\hat{k}_{layer}$ is the operator that interchanges $k_x$ with $k_y$ up to a sign, $\hat{k}_{layer}: (k_x,k_y) \rightarrow (-k_y, -k_x)$. By computing the commutator of $\hat{\Pi}_{layer}$ with $H_{ABCD}({\bf{k}})$, it is straightforward to verify that $[\hat{\Pi}_{layer},H_{ABCD}]=0$.
The band dispersion is shown in Fig.~\ref{fig:abcdtilt}. The set of tight-binding parameters is given by $(t_z,t_{(110)},t_{(1\bar{1}0)},t_{12},t_z^{\prime})/t=(-0.23,-0.01,-0.09,-0.11,-0.04)$ for the same $\theta$ and $\phi$ as in the single layer. The hopping amplitude changes as a function of distance and has been estimated by introducing a scaling function $1/r^5$. Two line nodes appear when the tilting angle vanishes, \textit{i.e.} $\phi=0$, and those degeneracies are gapped out after introducing a finite tilting, as shown in Fig.~\ref{fig:abcdtilt}.
To analyze the topological nature of the bilayer system, we introduce the combined symmetry of $\hat{\Pi}_b$ and $\hat{\Pi}_{layer}$ such that
$\hat{\Pi}_{mirror} \equiv \hat{\Pi}_b \hat{\Pi}_{layer}=i\sigma_z\nu_x \hat{k}$ with $\hat{k}$: $(k_x,k_y)\rightarrow(-k_x,-k_y)$.
Since the Hamiltonian is even under $\hat{k}$, $[i\sigma_z\nu_x,H_{ABCD}]=0$. Furthermore, the low-energy effective Hamiltonian can be brought into a block-diagonal form
near $X$ and $Y$ TRIM points with each block labeled by the eigenvalues of $\sigma_z\nu_x$, given by
\begin{eqnarray}
H^{{\rm eff}}_{\pm,X/Y}=\vec{A}_{\pm,X/Y}({\bf{k}})\cdot \vec{\sigma},
\label{eq:b5}
\end{eqnarray}
where $\pm$ subscripts are assigned to reflect the eigenvalues of the combined operator $\hat{\Pi}_{mirror}$.
The explicit expression of vector $\vec{A}_{\pm,X/Y}({\bf{k}})$ is presented in the Appendix.
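The statement $[i\sigma_z\nu_x,H_{ABCD}]=0$ can also be checked term by term on the matrix structures. The following sketch (our construction, not code from this work) verifies that $\sigma_z\nu_x$ commutes with representative $8\times 8$ structures of $H_{ABCD}$, under the assumption that the layer-dependent ($\pm$) terms of $H^{\pm}_0$ enter the full Hamiltonian multiplied by $\nu_z$, i.e. via the layer projectors $(1\pm\nu_z)/2$.

```python
# Sketch: verify commutation of sigma_z nu_x with representative matrix
# structures of H_ABCD, in the sigma (x) tau (x) nu Kronecker ordering.
# Assumption (ours): the layer-odd terms of H_0^{+/-} carry a factor nu_z.

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def commutes(A, B, tol=1e-12):
    AB, BA = mul(A, B), mul(B, A)
    return all(abs(AB[i][j] - BA[i][j]) < tol
               for i in range(len(A)) for j in range(len(A)))

def term(s, t, nu):
    """Build s (x) t (x) nu in the sigma-tau-nu ordering."""
    return kron(s, kron(t, nu))

pi_mirror = term(sz, I2, sx)  # sigma_z nu_x, trivial in the tau sector

structures = [
    term(I2, sx, I2),           # tau_x              (epsilon_0 term)
    term(sz, sy, I2),           # sigma_z tau_y      (epsilon_1d term)
    term(sy, sy, sz),           # sigma_y tau_y nu_z (layer-odd term)
    term(sx, sy, sz),           # sigma_x tau_y nu_z (layer-odd term)
    term(I2, I2, sx),           # nu_x               (epsilon_di term)
    term(I2, sx, sx),           # tau_x nu_x         (epsilon_12 term)
    term(add(sx, sy), sz, sy),  # (sigma_x+sigma_y) tau_z nu_y (t_z' term)
]
assert all(commutes(pi_mirror, T) for T in structures)
```

Each structure commutes either trivially or because two anticommuting factors (one in the $\sigma$ sector, one in the $\nu$ sector) compensate each other.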
One way to glimpse the novel topological phases lying behind the gapped band structure is to evaluate the topological charges~\cite{Ezawa12}
defined by the mirror valley (MV) Chern number $C_{mv}$, valley Chern number $C_v$, and mirror Chern number $C_m$ in addition to the charge Chern number $C$ at $X$ and $Y$ TRIM points:
\begin{eqnarray}
C_{mv}&=& \frac{1}{2} (C_{+,X}-C_{-,X}+C_{-,Y}-C_{+,Y}),\nonumber\\
C_m&=&\frac{1}{2}(C_{+,Y}-C_{-,Y}-C_{-,X}+C_{+,X}),\nonumber\\
C_v&=& (C_{+,X}+C_{-,X}-C_{+,Y}-C_{-,Y}),\nonumber\\
C&=& ( C_{+,X}+C_{-,X}+C_{+,Y}+C_{-,Y}).
\label{eq:b7}
\end{eqnarray}
The charge Chern number $C$ is the sum of all Chern numbers $C_{\pm,X/Y}$ associated with the valleys ($X/Y$) and mirror symmetry eigenvalues ($\pm$). The valley-Chern (mirror-Chern) number $C_v$ ($C_m$) is odd only under the interchange of the two valleys (mirror symmetry eigenvalues). The mirror-valley-Chern number $C_{mv}$, however, is odd under the interchange of either the valleys or the mirror symmetry eigenvalues. The computational details of $(C_{mv},C_m,C_v,C)$ and the explicit expressions are presented in the Appendix.
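As a purely illustrative example (the numbers here are our own, not computed for this model), suppose the resolved Chern numbers were $C_{+,X}=-1$, $C_{-,X}=1$, $C_{+,Y}=1$, and $C_{-,Y}=-1$. Eq.~(\ref{eq:b7}) then gives
\begin{align*}
C_{mv}&=\tfrac{1}{2}\left(-1-1-1-1\right)=-2, & C_m&=\tfrac{1}{2}\left(1+1-1-1\right)=0,\\
C_v&=-1+1-1+1=0, & C&=-1+1+1-1=0,
\end{align*}
i.e. the set $(C_{mv},C_m,C_v,C)=(-2,0,0,0)$ characterizing a mirror valley Hall phase.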
The phase diagram contains various phases~\cite{footnote},
including the mirror valley Hall phase, the topological crystalline insulator phase,
the QAH phase, and the quantized valley Hall phase, each with distinguishing topological features, as displayed in Fig.~\ref{fig:pd}. The phases listed here are robust against disorder as long as it preserves the symmetry associated with each phase~\cite{KanePRL05-2,Hsieh12,ZhangPRL11}.
The vertical axis is the tilting angle $\phi$, and
the horizontal axis corresponds to the strength of the $z$-component of the magnetic exchange field and/or ordering.
The phase boundaries can be modified depending on the magnetic ordering
or exchange field pattern, but the qualitative picture of the phase diagram is not sensitive to the choice of magnetic ordering direction, as long as there is a finite z-component of ferromagnetic $h_z$ or antiferromagnetic ordering of $m_z$.
Thus we only tune the strength of $h_z$ for simplicity. In Fig.~\ref{fig:pd}, $h_z$ is estimated in Tesla using the tight-binding parameters discussed above,
with $t \sim 100$ meV.
Each phase separated by a thick black line in Fig.~\ref{fig:pd} is characterized by a unique set of topological invariants $(C_{mv},C_m,C_v,C)$ defined in Eq.~(\ref{eq:b7}). The edge states shown in Figs.~\ref{fig:pdedge}(a), (b), (d) and (e)
are obtained with the slab geometry under the same boundary conditions as in the ABAB stacking case described in the last section.
The bilayer with a small tilting angle is characterized by the mirror valley Hall phase with $C_{mv} = -2$.
The valley physics in mirror valley Hall phase manifests explicitly in the edge state dispersion in Fig.~\ref{fig:mvh}.
As the tilting angle $\phi$ increases, the system becomes a topological crystalline insulator with $C_m =2$.
A large tilting angle is able to invert the sign of one of the mass terms near $X$ or $Y$, and thus modifies the topology of the system.
The edge state dispersion for the topological crystalline insulator phase in Fig.~\ref{fig:tci} has two pairs of gapless currents moving along opposite directions on each boundary. Each pair of edge modes carries opposite mirror eigenvalues. As the name suggests, these two pairs of gapless edge states are indeed protected by $\hat{\Pi}_{mirror}$.
A TR breaking term will not lift the degeneracy between edge states as long as the perturbation preserves $\hat{\Pi}_{mirror}$.
By tuning the strength of $h_z$, the QAH phase arises. In the QAH phase, two gapless edge states localized at $L=0$ propagate along the same direction.
Each one contributes $e^2/h$ to the Hall conductance, and the total Hall conductivity, when the Fermi energy is tuned inside the bulk gap, is given by
\begin{equation}
\sigma_{xy}=2\frac{e^2}{h}.
\label{eq:b12}
\end{equation}
In the quantized valley Hall phase, by contrast, within valley $X$ ($Y$) the two edge states localized at $L=0$ propagating along the same direction lead to a quantized valley-Hall conductivity $\sigma_{xy}^v$,
\begin{equation}
\sigma_{xy}^v=C_v\frac{e^2}{h}=4\frac{e^2}{h}.
\label{eq:b11a}
\end{equation}
In order to detect the valley-Hall conductivity $\sigma_{xy}^v$, illumination with circularly polarized light can be used, as has been reported for monolayer MoS$_2$ transistors~\cite{Mak2014}. Since the two valleys are related by the inversion symmetry, the inversion symmetry must be broken to measure the valley-Hall conductance in Eq.~(\ref{eq:b11a}).
The mirror and mirror valley Chern numbers $(C_m, C_{mv})$ can be understood through the behavior of edge modes localized at $L=0$ for instance. When the system is in mirror valley Hall phase, there are four edge modes at $L=0$ or $L=N$ as shown in Fig.~\ref{fig:mvh}. Two edge modes are propagating from left to right labeled with $(-,X)$ and $(+,Y)$, respectively. The other two are flowing along the opposite direction named as $(+,X)$ and $(-,Y)$, respectively. Here $(\pm,X/Y)$ means the edge state carries $\pm$ quantum number which is the eigenvalue of $\sigma_z\nu_x$ and the valley degree of freedom $X/Y$. Thus $C_{mv}$ is finite. When the gap is reversed at $X$, the propagating direction of the edge modes $(\pm,X)$ will reverse and result in a non-vanishing $C_m$. Therefore, the system is a topological crystalline insulator as shown in Fig.~\ref{fig:tci}. ARPES has proven
to be ideally suited to detect topological signatures of TCIs~\cite{Tanaka12}; such
methods can be in principle generalized to detect the MVH insulator.
As we emphasize above, a finite bilayer hopping integral is crucial to achieve the QAH phase when TR symmetry is broken,
because the z-axis ferromagnetic exchange field $h_z \sigma_z$ (or sublattice antiferromagnetic ordering $m_z \tau_z \sigma_z$)
has to overcome $t_z$ to reverse the sign of Berry curvature around $X$ or $Y$ in order to enter the QAH insulator phase (see the Appendix for the proof).
Using the current tight-binding parameters, the strength of $h_z$ needs to be a few Tesla, as shown in Fig.~\ref{fig:pd}.
Since the critical strength of $h_z$ is tuned by the strength of $t_z$,
it is desirable to make the bilayer hopping $t_z$ smaller, which can be controlled by the spacing
between the layers, as shown in Fig.~\ref{fig:abab}.
\section{Conclusions}
A recent experiment has reported the successful growth of an Ir oxide superlattice [(SrIrO$_3$)$_n$, SrTiO$_3$] with a controllable number of layers $n$,
which tailors a spin-orbit magnetic insulator for $n=1$ and $2$~\cite{Matsuno14}.
Due to the smaller lattice constant of TiO$_2$ compared with IrO$_2$, alternating rotations of the Ir octahedra were expected, but without the tilting ($\phi$) of the octahedra, so as to keep the tetragonal crystal structure of SrTiO$_3$. This was confirmed by the magnetic ordering patterns in the $n=1$ and 2 superlattices, consistent with first-principles
calculations~\cite{Matsuno14}.
However, topological phases have not been observed in these superlattices, even though bulk SrIrO$_3$ orthorhombic perovskites possess a crystal-symmetry-protected nodal line.~\cite{CarterPRB12}
One essential ingredient to realize any topological insulator is a Rashba-like SOC. In the J$_{\rm eff}$=1/2 wavefunction formed by a strong atomic SOC, this Rashba-like SOC
is generated by finite hopping integrals between different $J_z = \pm 1/2$ states. For example,
finite hopping paths between $d_{xy}$ and $d_{xz/yz}$ generate Rashba-like SOC terms in $J_{\rm eff}=1/2$ basis since $d_{xy}$ up-spin and one-dimensional orbitals of $d_{xz/yz}$ up-spin belong to different $J_z$ states. In layered perovskite systems, this is possible when the hopping path does not
respect the mirror symmetry under $z \rightarrow -z$, as $d_{xy}$ is even while $d_{xz/yz}$ is odd under this operation.
Thus the alternating octahedra rotations and tiltings are necessary for topological phases in layered perovskites.
In this work, we have proposed topological phases in Ir oxide superlattices or films.
Different topological phases were found depending on how the TR and crystal symmetries are broken. We considered three types of superlattice: a single layer, a bilayer with ABAB stacking, and a bilayer with ABCD stacking. A brief summary of our results is listed below.
For the single-layer Ir oxide, the Dirac dispersion at the X and Y TRIM points is protected by the b-glide symmetry.
When this b-glide symmetry is broken, for instance by a uniaxial pressure, the system becomes a 2D topological insulator as the Dirac nodes are gapped.
In the presence of a magnetic ordering or an external magnetic field, the system becomes a topological magnetic insulator with QAH effects.
In the bilayer Ir oxides, we consider two different types of stacking. (1) For ABAB stacking, the system is a semimetal with two nodal points at $\epsilon_F$. Any finite magnetic field along any direction except the $[1\bar{1}0]$ axis, or any magnetic ordering, turns the system into a topological magnetic insulator with QAH effects. Thus, the topological magnetic insulator in ABAB stacking is more readily realizable in current experimental settings than in the single-layer case. (2) In the ABCD stacking case, an additional mirror symmetry $\Pi_{mirror}$ provides a richer phase diagram. Besides the QAH phase, there are two additional phases: a TCI with a non-trivial mirror Chern number and an MVH insulator with a quantized mirror-valley Chern number.
Experimentally, these superlattices or films are grown along the [001] axis, which can be achieved by standard pulsed laser deposition (PLD) growth techniques.
To test the proposal, ARPES measurement can be employed to
investigate the Dirac points in these superlattices when TR symmetry is preserved,
and Hall conductivity measurement should exhibit the QAH effect when a magnetic ordering occurs or an external magnetic field is applied.
\begin{acknowledgments}
This work was supported by the NSERC of Canada and the center for Quantum Materials at the University of Toronto.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
We recently derived a unified continuum formulation based on the Gibbs free energy in order to construct a well-behaved continuum model in both compressible and incompressible regimes \cite{Liu2018}. This modeling approach naturally recovers important continuum models, including viscous fluids and hyperelastic solids. Importantly, it bridges previously diverging approaches in computational fluid dynamics (CFD) and computational solid dynamics (CSD). The residual-based VMS formulation can be applied to the unified continuum body. It yields a large-eddy simulation procedure for the incompressible Navier-Stokes equations \cite{Bazilevs2007a}, which performs equally well for laminar, transitional, and fully turbulent flows \cite{Hughes2001,Liu2020}. On the other hand, when applied to the hyperelastic model, it leads to a numerical formulation for finite elasticity that allows equal-order interpolation of all fields. This is particularly beneficial for problems with complex geometries and bears similarity to some recent works \cite{Scovazzi2016,Rossi2016,Gil2014,Masud2013}. In our opinion, the unified concept gives rise to promising opportunities for designing new numerical methodologies. Recent advances include the development of a provably energy-stable scheme for incompressible finite elasticity \cite{Liu2019a} and preconditioning techniques for both solids \cite{Liu2019} and fluids \cite{Liu2020}. The benefit of the unified modeling framework is further evident in the realm of multiphysics coupled problems. Since the CFD and CSD implementations only differ in constitutive routines, monolithic FSI coupling is dramatically simplified. Furthermore, in comparison with conventional FSI modeling approaches \cite{Bazilevs2013,Yan2016,Bazilevs2010,Takizawa2012}, the new framework allows one to simulate structural dynamics with a Poisson's ratio up to $0.5$, using either the multiscale/stabilized formulation or inf-sup stable methods. 
Since soft tissues typically exhibit nearly incompressible behavior under physiologic loading \cite{Humphrey2013}, the proposed FSI modeling framework is extremely favorable for computational biomechanics and cardiovascular hemodynamics.
In this work, we present a suite of FSI modeling techniques for cardiovascular applications. In addition to the unified FSI modeling framework, we discuss mesh generation from medical image data as well as a modular approach for implicit coupling of lumped parameter network (LPN) models with the three-dimensional (3D) domain \cite{Moghadam2013}. The efficacy of the proposed methodology is demonstrated through a numerical study in the pulmonary arteries of a pediatric patient. The FSI results are directly compared to those of a rigid wall simulation.
\section{The unified continuum formulation for fluid-structure interaction}
\label{sec:unfiied-continuum-formulation}
In this section, we present the governing equations for the FSI problem using the arbitrary Lagrangian-Eulerian (ALE) method \cite{Bazilevs2013,Scovazzi2007}. Here, and in what follows, we use superscripts $f$, $s$, and $m$ to indicate quantities related to the fluid, solid, and ALE mesh motion in the fluid sub-domain.
\subsection{Kinematics on moving domains}
We first consider the domain occupied by the continuum body in the referential frame $\Omega_{\bm \chi} \subset \mathbb R^3$, an open and bounded set. For FSI problems, $\Omega_{\bm \chi}$ admits a non-overlapping subdivision, $\overline{\Omega}_{\bm \chi} = \overline{ {\Omega}_{\bm \chi}^{f} \cup {\Omega}_{\bm \chi}^{s} }$, $\emptyset = \Omega_{\bm \chi}^{f} \cap \Omega_{\bm \chi}^{s}$, in which $\Omega^{f}_{\bm \chi}$ and $\Omega^{s}_{\bm \chi}$ represent the sub-domains occupied by the fluid and solid, respectively. Following the notation used in \cite{Liu2018}, the referential-to-Eulerian map at time $t$ is denoted $\hat{\bm \varphi}_t(\cdot) = \hat{\bm \varphi}(\cdot, t)$ and maps $\Omega_{\bm \chi}$ to $\Omega_{\bm x}(t) = \hat{\bm \varphi}\left(\Omega_{\bm \chi}, t\right)$. We wish to think of $\Omega_{\bm x}(t)$ as the current `spatial' domain where the fluid mechanics problem can be conveniently formulated. Correspondingly, the current configuration admits a subdivision, $\overline{\Omega}_{\bm x}(t) = \overline{\Omega^{f}_{\bm x}(t) \cup \Omega^{s}_{\bm x}(t)}$, $\emptyset = \Omega^{f}_{\bm x}(t) \cap \Omega^{s}_{\bm x}(t)$. Conceptually, $\Omega_{\bm \chi}$ is fixed in time and is associated with a computational mesh. Therefore, $\hat{\bm \varphi}$ describes the motion of the mesh, and we can correspondingly define the mesh displacement and velocity as
\begin{align}
& \hat{\bm U}^m := \hat{\bm \varphi}(\bm \chi, t) - \hat{\bm \varphi}(\bm \chi, 0) = \hat{\bm \varphi}(\bm \chi, t) - \bm \chi, \\
\label{eq:mesh-disp-velo}
& \hat{\bm V}^m := \left. \frac{\partial \hat{\bm \varphi}}{\partial t} \right|_{\bm \chi} = \left. \frac{\partial \hat{\bm U}^m}{\partial t} \right|_{\bm \chi}.
\end{align}
One may conveniently push them forward to the current configuration as $\hat{\bm u}^m := \hat{\bm U}^m \circ \hat{\bm \varphi}_t^{-1}$ and $\hat{\bm v}^m := \hat{\bm V}^m \circ \hat{\bm \varphi}_t^{-1}$.
The initial position of point $\bm x \in \Omega_{\bm x}(t)$ is denoted as $\bm X \in \Omega_{\bm X}(t)$, where $\Omega_{\bm X}(t)$ is the Lagrangian domain. The smooth Lagrangian-to-Eulerian map at time $t$ is denoted $\bm \varphi_t(\cdot) = \bm \varphi(\cdot, t)$ and maps $\Omega_{\bm X}(t)$ to $\Omega_{\bm x}(t)$. Then the displacement, velocity, deformation gradient, Jacobian determinant, and right Cauchy-Green tensor of the material particle initially located at $\bm X$ are defined as
\begin{align*}
& \bm U := \bm \varphi(\bm X, t) - \bm \varphi(\bm X,0) = \bm \varphi(\bm X, t) - \bm X, \\
& \bm V := \left. \frac{\partial \bm \varphi}{\partial t}\right|_{\bm X}= \left. \frac{\partial \bm U}{\partial t}\right|_{\bm X} = \frac{d\bm U}{dt}, \\
& \bm F:= \frac{\partial \bm \varphi}{\partial \bm X}, \quad
J := \textup{det}\left(\bm F \right), \quad \bm C := \bm F^T \bm F.
\end{align*}
The displacement and velocity can be similarly pushed forward to the current configuration as $\bm u := \bm U \circ \bm \varphi_t^{-1}$ and $\bm v := \bm V \circ \bm \varphi_t^{-1}$. We also introduce the distortional parts of $\bm F$ and $\bm C$ as
\begin{align*}
\tilde{\bm F} := J^{-\frac13} \bm F, \quad \tilde{\bm C} := J^{-\frac23} \bm C.
\end{align*}
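As a quick numerical sanity check (our illustration; the principal stretches below are arbitrary hypothetical values), the distortional quantities defined above are volume-preserving by construction: for a diagonal deformation gradient $\bm F = \textup{diag}(\lambda_1,\lambda_2,\lambda_3)$, one finds $\textup{det}(\tilde{\bm F})=\textup{det}(\tilde{\bm C})=1$.

```python
# Illustration only: verify det(F_tilde) = det(C_tilde) = 1 for a diagonal
# deformation gradient F = diag(l1, l2, l3) with hypothetical stretches.
l1, l2, l3 = 1.2, 0.9, 1.5

J = l1 * l2 * l3                     # J = det(F)
C = [l1 ** 2, l2 ** 2, l3 ** 2]     # diagonal of C = F^T F

# distortional parts: F_tilde = J^(-1/3) F, C_tilde = J^(-2/3) C
F_tilde = [J ** (-1.0 / 3.0) * l for l in (l1, l2, l3)]
C_tilde = [J ** (-2.0 / 3.0) * c for c in C]

det_F_tilde = F_tilde[0] * F_tilde[1] * F_tilde[2]
det_C_tilde = C_tilde[0] * C_tilde[1] * C_tilde[2]

assert abs(J - 1.62) < 1e-12           # 1.2 * 0.9 * 1.5
assert abs(det_F_tilde - 1.0) < 1e-12  # volume-preserving by construction
assert abs(det_C_tilde - 1.0) < 1e-12
```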
\subsection{Balance and mesh motion equations}
\label{subsec:balance-equations}
We invoke Stokes' hypothesis and further consider the isothermal condition on the continuum body, allowing the energy equation to be decoupled from the mechanical system. The FSI system can thus be viewed as a two-component continuum body governed by the following momentum and mass balance equations,
\begin{align*}
\bm 0 &= \rho(p) \left. \frac{\partial \bm v}{\partial t}\right|_{\bm \chi} + \rho(p) \left( \bm v - \hat{\bm v}^m \right) \cdot \nabla_{\bm x} \bm v - \nabla_{\bm x} \cdot \bm \sigma_{dev} + \nabla_{\bm x} p \nonumber \\
& \hspace{3mm} - \rho(p) \bm b, \\
0 &= \left. \beta_{\theta}(p)\frac{\partial p}{\partial t}\right|_{\bm \chi} + \beta_{\theta}(p) \left( \bm v - \hat{\bm v}^m \right) \cdot \nabla_{\bm x} p + \nabla_{\bm x} \cdot \bm v,
\end{align*}
which are posed in $\Omega_{\bm x}(t)$. In the above equations, $\rho$ is the density, $p$ is the pressure, $\bm \sigma_{dev}$ is the deviatoric part of the Cauchy stress, $\bm b$ is the body force per unit mass, and $\beta_{\theta}$ is the isothermal compressibility factor. The constitutive laws of the material are dictated by the Gibbs free energy $G(\tilde{\bm C}, p)$, which was previously shown to adopt a decoupled structure \cite[p.~559]{Liu2018},
\begin{align*}
G(\tilde{\bm C}, p) = G_{ich}(\tilde{\bm C} ) + G_{vol}(p),
\end{align*}
where $G_{ich}$ and $G_{vol}$ represent the isochoric and volumetric parts of the free energy, respectively. Given the free energy, the constitutive relations can be written as
\begin{align*}
& \rho(p) := \left( \frac{d G_{vol}}{d p} \right)^{-1}, \hspace{1mm} \beta_{\theta}(p) := \frac{1}{\rho} \frac{d\rho}{d p} = -\frac{d^2 G_{vol}}{d p^2} / \frac{d G_{vol}}{d p}, \displaybreak[2] \\
& \bm \sigma_{dev} := J^{-1} \tilde{\bm F} \left( \mathbb P : \tilde{\bm S} \right) \tilde{\bm F}^T + 2\bar{\mu} \textup{dev}[\bm d], \displaybreak[2] \\
& \mathbb P := \mathbb I - \frac13 \bm C^{-1} \otimes \bm C, \quad \tilde{\bm S} := 2 \frac{\partial \left(\rho_0 G \right) }{\partial \tilde{\bm C} }, \displaybreak[2] \\
& \bm d := \frac12 \left(\nabla_{\bm x} \bm v + \nabla_{\bm x} \bm v^T \right),
\end{align*}
where $\mathbb I$ is the fourth-order identity tensor, and $\rho_0$ is the density in the Lagrangian domain.
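To make the constitutive relations concrete, consider the simple hypothetical volumetric free energy $G_{vol}(p) = (p-p_0)/\rho_0$ (our illustrative choice, not a model from this work). The relations above then give a constant density $\rho=\rho_0$ and $\beta_{\theta}=0$, i.e. exactly the incompressible limit invoked below for the fluid sub-domain; the sketch verifies this with finite differences.

```python
# Hedged sketch: G_vol below is an illustrative choice; rho0 and p0 are
# hypothetical reference values, not parameters from this work.
rho0, p0 = 1000.0, 0.0

def G_vol(p):
    # linear volumetric free energy => constant density, zero compressibility
    return (p - p0) / rho0

def d1(f, p, h=1.0):
    # central first difference (exact up to round-off, since G_vol is linear)
    return (f(p + h) - f(p - h)) / (2.0 * h)

def d2(f, p, h=1.0):
    # central second difference
    return (f(p + h) - 2.0 * f(p) + f(p - h)) / (h * h)

p = 5.0e4
rho = 1.0 / d1(G_vol, p)             # rho(p) = (dG_vol/dp)^(-1)
beta = -d2(G_vol, p) / d1(G_vol, p)  # beta_theta = -(d^2G/dp^2)/(dG/dp)

assert abs(rho - rho0) < 1e-6   # constant density
assert abs(beta) < 1e-6         # incompressible limit: beta_theta = 0
```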
In the solid sub-domain, we consider a purely elastic material and choose the referential configuration to be identical to the Lagrangian configuration. Consequently, the balance equations in $\Omega^s_{\bm x}(t)$ can be stated as
\begin{align*}
\bm 0 &= \rho^s(p^s) \left. \frac{\partial \bm v^s}{\partial t}\right|_{\bm \chi = \bm X} - \nabla_{\bm x} \cdot \bm \sigma^s_{dev} + \nabla_{\bm x} p^s - \rho^s(p^s) \bm b, \\
0 &= \left. \beta^s_{\theta}(p^s)\frac{\partial p^s}{\partial t}\right|_{\bm \chi = \bm X} + \nabla_{\bm x} \cdot \bm v^s.
\end{align*}
In the fluid sub-domain, the free energy contains no mechanical contribution, so $\bm \sigma_{dev}^f = 2\bar{\mu}\textup{dev}[\bm d]$. We further assume incompressible flow, which implies $\rho^f(p^f) = \rho^f$ and $\beta_{\theta}^f = 0$. The balance equations in $\Omega^f_{\bm x}(t)$ are then
\begin{align*}
\bm 0 &= \rho^f \left. \frac{\partial \bm v^f}{\partial t}\right|_{\bm \chi} + \rho^f \left( \bm v^f - \hat{\bm v}^m \right) \cdot \nabla_{\bm x} \bm v^f - \nabla_{\bm x} \cdot \bm \sigma^f_{dev} + \nabla_{\bm x} p^f \nonumber \\
& \hspace{3mm} - \rho^f \bm b,\\
0 &= \nabla_{\bm x} \cdot \bm v^f.
\end{align*}
In this work, we use the pseudo-linear-elasticity algorithm to model the ALE mesh motion \cite{Bazilevs2008,Johnson1994}. Consider a time instant $\tilde{t} < t$, which is often chosen to be the previous time step in numerical computations. Given the identity $\hat{\bm \varphi}(\bm \chi, t) = \hat{\bm \varphi}(\bm \chi, \tilde{t}) + \hat{\bm U}^m(\bm \chi, t ) - \hat{\bm U}^m(\bm \chi, \tilde{t})$, we introduce $\tilde{\bm u}^m \left( \hat{\bm \varphi}(\bm \chi, \tilde{t}) ,t \right) := \hat{\bm U}^m(\bm \chi, t) - \hat{\bm U}^m(\bm \chi, \tilde{t})$. The mesh velocity $\hat{\bm v}^m$ is then completely determined by $\tilde{\bm u}^m(\tilde{\bm x}, t)$ and the relation in \eqref{eq:mesh-disp-velo}. The mesh motion is solved via the following linear elastostatic problem posed in $\Omega^{f}_{\bm x}(\tilde{t})$,
\begin{align*}
& \nabla_{\tilde{\bm x}} \cdot \left( \mu^m \left( \nabla_{\tilde{\bm x}} \tilde{\bm u}^m + \left(\nabla_{\tilde{\bm x}} \tilde{\bm u}^m\right)^T \right) + \lambda^m \nabla_{\tilde{\bm x}} \cdot \tilde{\bm u}^m \bm I \right) = \bm 0.
\end{align*}
The boundary of the fluid sub-domain can be decomposed into the luminal, inlet, and outlet surfaces. On the luminal surface, the mesh motion follows the motion of the solid body and is therefore subject to a Dirichlet boundary condition; on the inlet and outlet surfaces, we prescribe homogeneous Dirichlet boundary conditions to fix the mesh. Furthermore, to enhance the robustness of the mesh moving algorithm, the Lam\'e parameters $\mu^m$ and $\lambda^m$ are chosen to be proportional to the inverse of the Jacobian determinant of the element mapping \cite{Johnson1994,Bazilevs2013}.
\section{Numerical formulation}
\subsection{Solid sub-problem}
\label{subsec:solid-spatial-formulation}
Let $\mathcal S^s_{\bm u}$, $\mathcal S^{s}_{\bm v}$, and $\mathcal S^{s}_{p}$ denote the finite-dimensional trial solution spaces for the solid displacement, velocity, and pressure in the current solid sub-domain, respectively; let $\mathcal V^s_{\bm v}$ and $\mathcal V^s_{p}$ represent the test function spaces; let $\Gamma^s_{\bm x,h}(t)$ denote the Neumann part of the solid boundary with traction $\bm h^s$ prescribed. The spatial discretization for the solid body is based on the variational multiscale formulation \cite{Liu2018}, which is stated as follows: Find $\left\lbrace \bm u_h^s(t), p_h^s(t), \bm v_h^s(t)\right\rbrace \in \mathcal S_{\bm u}^s \times \mathcal S_{p}^s \times \mathcal S_{\bm v}^s$ such that for all $\left\lbrace \bm w^s, w^s\right\rbrace \in \mathcal V_{\bm v}^s \times \mathcal V_{p}^s$,
\begin{align*}
& \bm 0 = \frac{d\bm u_h^s}{dt} - \bm v_h^s, \displaybreak[2] \\
& 0 = \int_{\Omega_{\bm x}^s(t)} \bm w^s \cdot \rho^s(p^s_h) \frac{d\bm v_h^s}{dt} d\Omega_{\bm x} + \int_{\Omega_{\bm x}^s(t)} \nabla_{\bm x} \bm w^s : \bm \sigma^s_{dev}(\bm u^s_h) d\Omega_{\bm x} \nonumber \\
& \hspace{3mm} - \int_{\Omega_{\bm x}^s(t)} \nabla_{\bm x} \cdot \bm w^s p_h^s d\Omega_{\bm x} - \int_{\Omega_{\bm x}^s(t)}\bm w^s \cdot \rho^s(p^s_h) \bm b d\Omega_{\bm x} \nonumber \\
& \hspace{3mm} -\int_{\Gamma_{\bm x,h}^{s}(t)}\bm w^s \cdot \bm h^s d\Gamma_{\bm x},\\
& 0 = \int_{\Omega_{\bm x}^s(t)} w^s \left( \beta_{\theta}^s(p^s_h) \frac{dp_h^s}{dt} + \nabla_{\bm x} \cdot \bm v_h^s \right) d\Omega_{\bm x} \nonumber \\
& \hspace{3mm} - \int_{\Omega_{\bm x}^{\prime s}(t)} \nabla_{\bm x} w^s \cdot \bm v^{s\prime} d\Omega_{\bm x}, \\
& \bm v^{s\prime} := -\bm \tau_M^s \left( \rho^s(p_h^s) \frac{d\bm v_h^s}{dt} - \nabla_{\bm x} \cdot \bm \sigma^s_{dev}(\bm u^s_h) + \nabla_{\bm x} p^s_h - \rho^s(p^s_h) \bm b \right).
\end{align*}
In the above formulation, the parameter $\bm \tau_M^s$ is associated with the subgrid-scale models and is defined as
\begin{align*}
\bm \tau_M^s = \tau_M^s \bm I, \quad \tau_M^s = c_m \frac{\Delta x}{c\rho^s},
\end{align*}
where $\Delta x$ is the diameter of the circumscribing sphere of the tetrahedral element, $c$ is the maximum wave speed in the solid material, and $c_m$ is a non-dimensional scalar \cite{Scovazzi2016}.
\subsection{Mesh motion of the fluid sub-domain}
Let $\mathcal S^m_{\tilde{\bm u}}$ denote the trial solution space of the mesh displacement $\tilde{\bm u}^m_h$ defined on the domain $\Omega_{\bm x}^{f}(\tilde{t})$, and let $\mathcal V^m_{\tilde{\bm u}}$ denote the corresponding test function space. The variational formulation of the problem is stated as follows. Find $\tilde{\bm u}^m_h \in \mathcal S^m_{\tilde{\bm u}}$ such that for all $\tilde{\bm w}^m \in \mathcal V^m_{\tilde{\bm u}}$,
\begin{align*}
\int_{\Omega_{\bm x}^f(\tilde{t})} \nabla^s_{\tilde{\bm x}} \tilde{\bm w}^m : \left( 2\mu^m \nabla^s_{\tilde{\bm x}} \tilde{\bm u}^m_h \right) + \nabla_{\tilde{\bm x}} \cdot \tilde{\bm w}^m \lambda^m \nabla_{\tilde{\bm x}} \cdot \tilde{\bm u}^m_h d\Omega_{\bm x} = 0.
\end{align*}
\subsection{Fluid sub-problem}
\label{subsec:fluid-spatial-formulation}
Let $\mathcal S^f_{\bm v}$ and $\mathcal S^f_{p}$ denote the trial solution spaces of the fluid velocity and pressure; let $\mathcal V^f_{p}$ and $\mathcal V^f_{\bm v}$ be the test function spaces; let $\Gamma^f_{\bm x,h}(t)$ denote the Neumann part of the fluid boundary with traction $\bm h^f$ prescribed. The VMS formulation for the fluid sub-problem can be stated as follows. Find $\left\lbrace p_h^f(t), \bm v_h^f(t) \right\rbrace \in \mathcal S^f_{p} \times \mathcal S^f_{\bm v}$ such that for all $\left\lbrace \bm w^f, w^f\right\rbrace \in \mathcal V_{\bm v}^f \times \mathcal V_{p}^f$,
\begin{align*}
& 0 = \int_{\Omega_{\bm x}^f(t)} \bm w^f \cdot \left( \left. \rho^f \frac{\partial \bm v_h^f}{\partial t} \right|_{\bm \chi} + \rho^f \left(\bm v_h^f - \hat{\bm v}^m_h \right) \cdot \nabla_{\bm x} \bm v_h^f \right) d\Omega_{\bm x} \nonumber \displaybreak[2] \\
& - \int_{\Omega_{\bm x}^f(t)} \nabla_{\bm x} \cdot \bm w^f p_h^f d\Omega_{\bm x} + \int_{\Omega_{\bm x}^f(t)} \nabla_{\bm x} \bm w^f : \bm \sigma_{dev}^f(\bm v^f_h) d\Omega_{\bm x} \nonumber \displaybreak[2] \\
& - \int_{\Omega_{\bm x}^f(t)} \bm w^f \cdot \rho^f \bm b d\Omega_{\bm x} - \int_{\Gamma^{f}_{\bm x,h}(t)} \bm w^f \cdot \bm h^f d\Gamma_{\bm x} \nonumber \displaybreak[2] \\
& - \int_{\Omega_{\bm x}^{\prime f}(t)} \nabla_{\bm x} \bm w^f : \left( \rho^f \bm v^{f\prime} \otimes \left( \bm v_h^f - \hat{\bm v}^m_h \right) \right) d\Omega_{\bm x} \nonumber \displaybreak[2] \\
& + \int_{\Omega_{\bm x}^{\prime f}(t)} \nabla_{\bm x} \bm v^f_h : \left( \rho^f \bm w^f \otimes \bm v^{f\prime} \right) - \nabla_{\bm x} \bm w^f : \left( \rho^f \bm v^{f\prime} \otimes \bm v^{f\prime} \right) d\Omega_{\bm x} \nonumber \displaybreak[2] \\
& - \int_{\Omega_{\bm x}^{\prime f}(t)} \nabla_{\bm x} \cdot \bm w^f p^{f\prime} d\Omega_{\bm x}, \displaybreak[2] \\
& 0 = \int_{\Omega_{\bm x}^f(t)} w^f \nabla_{\bm x} \cdot \bm v_h^f d\Omega_{\bm x} - \int_{\Omega_{\bm x}^{\prime f}(t)} \nabla_{\bm x} w^f \cdot \bm v^{f\prime} d\Omega_{\bm x}, \displaybreak[2] \\
& \bm v^{f\prime} := -\bm \tau_{M}^f \Big( \left. \rho^f \frac{\partial \bm v_h^f}{\partial t} \right|_{\bm \chi} + \rho^f \left( \nabla_{\bm x} \bm v_h^f \right) \left(\bm v_h^f - \hat{\bm v}^m_h \right) + \nabla_{\bm x} p_h^f \nonumber \\
& \hspace{8mm} - \nabla_{\bm x} \cdot \bm \sigma^f_{dev}(\bm v^f_h) - \rho^f \bm b \Big), \displaybreak[2] \\
& p^{f\prime} := -\tau^f_C \nabla_{\bm x} \cdot \bm v_h^f, \displaybreak[2] \\
& \bm \tau^f_M := \tau^f_M \bm I, \displaybreak[2] \\
& \tau^f_M := \frac{1}{\rho^f}\left( \frac{C_T}{\Delta t^2} + \left(\bm v_h^f - \hat{\bm v}^m_h \right) \cdot \bm G \left(\bm v_h^f - \hat{\bm v}^m_h \right) \right. \displaybreak[2] \\
& \hspace{8mm} \left. + C_I \left( \frac{\bar{\mu}}{\rho^f} \right)^2 \bm G : \bm G \right)^{-\frac12}, \displaybreak[2] \\
& \tau^f_C := \frac{1}{\tau^f_M \textup{tr}\bm G}, \displaybreak[2] \\
& G_{ij} := \sum_{k,l=1}^{3} \frac{\partial \xi_k}{\partial x_i} M_{kl} \frac{\partial \xi_l}{\partial x_j}, \displaybreak[2] \\
& \bm M = [ M_{kl} ] = \frac{\sqrt[3]{2}}{2}\begin{bmatrix}
2 & 1 & 1 \\
1 & 2 & 1 \\
1 & 1 & 2
\end{bmatrix}, \displaybreak[2] \\
& \bm G : \bm G := \sum_{i,j=1}^{3} G_{ij} G_{ij}, \displaybreak[2] \\
& \textup{tr}\bm G := \sum_{i=1}^{3} G_{ii}.
\end{align*}
In the above, $\bm \xi= \left\lbrace \xi_i \right\rbrace_{i=1}^{3}$ represents the natural coordinates in the parent domain. The values of $C_I$ and $C_T$ are chosen to be $36$ and $4$ in this study. $\bm M$ is introduced for simplex elements to give a node-numbering-invariant definition of $\tau^f_M$ and $\tau^f_C$ \cite{Danwitz2019}.
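As a concrete numerical sketch (not part of the formulation itself; the function names are ours, and the relative velocity and metric tensor are assumed to be supplied as plain arrays evaluated at a quadrature point), the stabilization parameters $\tau^f_M$ and $\tau^f_C$ defined above can be computed as follows:

```python
import numpy as np

def tau_M_f(v_rel, G, dt, mu_bar, rho_f, C_T=4.0, C_I=36.0):
    """Momentum stabilization parameter tau_M^f from the formula above.

    v_rel : relative advective velocity v^f - v^m (length-3 array)
    G     : 3x3 element metric tensor G_ij
    """
    adv = v_rel @ G @ v_rel                                # (v - v^m) . G (v - v^m)
    visc = C_I * (mu_bar / rho_f) ** 2 * np.sum(G * G)     # C_I (mu/rho)^2 G : G
    return 1.0 / (rho_f * np.sqrt(C_T / dt ** 2 + adv + visc))

def tau_C_f(tau_M, G):
    """Continuity stabilization parameter tau_C^f = 1 / (tau_M^f tr G)."""
    return 1.0 / (tau_M * np.trace(G))
```

For a quiescent flow ($\bm v^f_h = \hat{\bm v}^m_h$) and vanishing viscosity, the formula reduces to $\tau^f_M = \Delta t/(\rho^f \sqrt{C_T})$, independent of $\bm G$.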
\subsection{Boundary conditions}
For the solid sub-problem, we prescribe homogeneous Dirichlet boundary conditions on the annulus surfaces at the inlet and outlets and zero traction on the external surface of the arterial wall.
For the fluid sub-problem, we prescribe the no-slip boundary condition on the luminal surface. On the inlet surface, we prescribe a Poiseuille velocity profile scaled by a periodic volumetric flow waveform. A special mapping technique introduced in \cite{Takizawa2010} is utilized to generate the inflow profile. To achieve physiological flows and pressures, we couple LPN models to the outlet surfaces as traction boundary conditions mimicking the effect of the downstream circulation. For each outlet surface $\Gamma^k_{\mathrm{out}}$ with unit outward normal vector $\bm n^k$, where $k$ is the outlet surface index, we prescribe
\begin{align}
\label{eq:outflow_bc}
\bm h^f = -P^k(t) \bm n^k + \beta \rho^f \left\lbrace \left( \bm v^f_h - \hat{\bm v}^m_h \right) \cdot \bm n^k \right\rbrace_{-} \bm v^f_h,
\end{align}
where $P^k(t)$ is the spatially averaged normal component of the surface traction on $\Gamma^k_{\mathrm{out}}$, $\beta$ is a positive coefficient between 0.0 and 1.0, and
\begin{align*}
\left\lbrace \left( \bm v^f_h - \hat{\bm v}^m_h \right) \cdot \bm n^k \right\rbrace_{-} =
\begin{cases}
\left( \bm v^f_h - \hat{\bm v}^m_h \right) \cdot \bm n^k & \mbox{ if } \left( \bm v^f_h - \hat{\bm v}^m_h \right) \cdot \bm n^k < 0, \\
0 & \mbox{ otherwise }.
\end{cases}
\end{align*}
The second term in \eqref{eq:outflow_bc} introduces energy dissipation in the case of backflow and is critical for maintaining the overall numerical stability of hemodynamic simulations. In this work, $\beta$ is fixed to be $0.2$ \cite{Moghadam2011}.
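A minimal point-wise sketch of the traction \eqref{eq:outflow_bc} (the function and argument names are ours; vectors are plain arrays):

```python
import numpy as np

def outlet_traction(P_k, n_k, v, v_mesh, rho_f, beta=0.2):
    """Backflow-stabilized outlet traction h^f = -P n + beta*rho*{(v-v^m).n}_- v.

    The beta term activates only when the relative velocity points back
    into the domain, i.e. (v - v_mesh) . n < 0.
    """
    v = np.asarray(v, dtype=float)
    n_k = np.asarray(n_k, dtype=float)
    vn = np.dot(v - np.asarray(v_mesh, dtype=float), n_k)
    return -P_k * n_k + beta * rho_f * min(vn, 0.0) * v
```

During forward flow the traction reduces to the pure pressure term $-P^k \bm n^k$; during backflow the added term removes kinetic energy entering through the outlet.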
Given an LPN model, $P^k(t)$ can be implicitly determined from the flow rate $Q^k(t):=\int_{\Gamma^k_{\mathrm{out}}} \bm v^f \cdot \bm n^k d\Gamma$. In this study, we consider the three-element Windkessel model,
\begin{align}
\label{eq:rcr_1}
& \frac{d\Pi^k(t)}{dt} = -\frac{\Pi^k(t)}{\mathrm R_{\mathrm d}^k\mathrm C^k } + \frac{Q^k(t)}{\mathrm C^k }, \\
\label{eq:rcr_2}
& P^k(t) = \mathrm R_{\mathrm p}^k Q^k(t) + \Pi^k(t) + P^k_{\mathrm d}(t).
\end{align}
In \eqref{eq:rcr_1}-\eqref{eq:rcr_2}, $\mathrm R_{\mathrm p}^k$, $\mathrm C^k$, and $\mathrm R_{\mathrm d}^k$ respectively represent the proximal resistance, compliance, and distal resistance of the downstream vasculature; $\Pi^k$ represents the pressure drop across the distal resistance; $P^k_{\mathrm d}$ denotes the distal reference pressure. Although one may obtain an analytical representation of $P^k$ in terms of $Q^k$ for this model, we solve the ordinary differential equations \eqref{eq:rcr_1}-\eqref{eq:rcr_2} for $P^k(t)$ via the fourth-order Runge-Kutta method \cite{Moghadam2013}. This approach enables solution of more complex LPN models with satisfactory numerical robustness.
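As a sketch (assuming, as a simplification of this sketch, that $Q^k$ is held constant within a step), the classical RK4 update for \eqref{eq:rcr_1} and the pressure recovery \eqref{eq:rcr_2} read:

```python
def rk4_step(Pi, Q, dt, Rd, C):
    """One classical RK4 step for d(Pi)/dt = -Pi/(Rd*C) + Q/C.

    Q is held constant over the step; how the flow rate is interpolated
    within a step is an assumption of this sketch.
    """
    f = lambda p: -p / (Rd * C) + Q / C
    k1 = f(Pi)
    k2 = f(Pi + 0.5 * dt * k1)
    k3 = f(Pi + 0.5 * dt * k2)
    k4 = f(Pi + dt * k3)
    return Pi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def windkessel_pressure(Q, Pi, Rp, Pd):
    """Outlet pressure P = Rp*Q + Pi + Pd of the three-element Windkessel."""
    return Rp * Q + Pi + Pd
```

Under a constant inflow $Q$, the distal pressure drop relaxes to its steady value $\Pi = \mathrm R_{\mathrm d} Q$, giving $P = (\mathrm R_{\mathrm p} + \mathrm R_{\mathrm d}) Q + P_{\mathrm d}$.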
\subsection{Solution strategies for the coupled problem}
The semi-discrete problem stated in Sections \ref{subsec:solid-spatial-formulation}-\ref{subsec:fluid-spatial-formulation} is discretized in time by the generalized-$\alpha$ method \cite{Jansen2000,Kadapa2017}. We advocate collocating the pressure at the intermediate time step to achieve second-order temporal accuracy \cite{Liu2018}. This is in contrast to the conventional approach of treating pressure with the backward Euler method, which we have recently found to be only first-order accurate for pressure \cite{Liu2020a}.
For the fully discrete problem in the solid sub-domain, block factorization can be performed on the resulting tangent matrix \cite{Liu2018,Rossi2016}, allowing the consistent Newton-Raphson procedure to be performed in a segregated manner. In this approach, the velocity and pressure are first solved implicitly. The solid displacement is then explicitly updated using the velocity. This segregated solution procedure naturally leads to a coupling algorithm for the FSI system. In each Newton-Raphson iteration, the velocity, pressure, and solid displacement are solved in the segregated manner just described; the solid displacement is prescribed as the Dirichlet data on the luminal surface for the ALE mesh motion; the mesh velocity is then computed for use in the fluid sub-problem in the next Newton-Raphson iteration. This coupling strategy should still be considered a monolithic approach, as we seek solutions that minimize the residual of the whole FSI system. It is, however, closely related to the `quasi-direct' coupling approach \cite{Bazilevs2013,Tezduyar2007a}.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{fig/Presentation1.pdf}
\caption{The mesh for the pulmonary arterial wall (blue) and lumen (red), with detailed views at the inlet and a representative outlet.}
\label{fig:mesh}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=100 100 100 100, clip=true, width=0.45\textwidth]{fig/SU0243_inflow_outflow.pdf} \\
(a) \\
\includegraphics[trim=100 100 100 100, clip=true, width=0.45\textwidth]{fig/SU0243_outflow.pdf} \\
(b)
\end{tabular}
\caption{(a) The volumetric flow rates over time in one cardiac cycle on surfaces A (red), B (green), and C (blue), where the waveform for A was used to prescribe the velocity on the inlet surface. The flow rates on outlet surfaces B and C are calculated from simulation results and plotted in solid and dashed lines for FSI and rigid wall simulations, respectively. The locations of the surfaces are indicated in Figure \ref{fig:mesh}. (b) Detailed view of the flow rates on surfaces B and C.}
\label{fig:inflow}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[trim=100 100 100 100, clip=true, width=0.45\textwidth]{fig/SU0243_pressure.pdf}
\caption{The pressure over time in one cardiac cycle on the surfaces A (red), B (green), and C (blue). Results from the FSI and rigid wall simulations are plotted in solid and dashed lines, respectively. The locations of the three surfaces are indicated in Figure \ref{fig:mesh}.}
\label{fig:pressure}
\end{center}
\end{figure}
\section{Model construction and mesh generation from patient-specific medical image data}
Using the open source software package SimVascular (SV) \cite{Lan2018,Updegrove2017}, we generated a healthy patient-specific pulmonary arterial model from clinically available magnetic resonance imaging (MRI) data of a nine-year-old subject with congenital heart defects in the systemic circulation. All retrospective clinical data collection was approved by the Institutional Review Board for modeling purposes. Our steps constitute a complete pipeline for robust vascular wall (the solid sub-domain) and luminal (the fluid sub-domain) mesh generation from medical image data for FSI modeling of blood flow.
Path points along the centerlines of all arteries of interest were first manually identified. Two-dimensional (2D) image segmentations were generated along the vessel centerlines and subsequently lofted into a 3D model of the arterial lumen. To generate a model of the arterial wall, we adopted the common assumption that the arterial wall thickness is approximately ten percent of the effective lumen diameter \cite{Humphrey2013}. Therefore, we scaled each of the 2D segmentations such that the distance between every segmentation point and the centroid was increased by twenty percent. An `enlarged' model encompassing both the arterial wall and lumen was thereby generated by lofting these scaled segmentations. Finally, the model of the arterial wall itself was obtained via a boolean operation provided by Parasolid (Siemens PLM Software, Plano, TX, USA), in which the previously generated lumen model was subtracted from the enlarged model. Our approach led to a physiologically accurate geometric model with variable wall thickness. With the arterial wall and lumen models constructed, we meshed the solid and fluid domains using MeshSim (Simmetrix Inc., Clifton Park, NY, USA) and TetGen \cite{Si2015}, respectively, with linear tetrahedral elements, ensuring that the luminal surface mesh remained identical in both domains.
The resulting mesh (Figure \ref{fig:mesh}) consisted of $7.0\times 10^5$ elements in the fluid sub-domain and $7.4\times 10^5$ elements in the solid sub-domain.
\begin{figure}
\begin{center}
\includegraphics[trim=160 20 420 0, clip=true, width=0.45\textwidth]{fig/wall_displacement.jpeg}
\caption{The relative wall displacement between peak systole and early diastole.}
\label{fig:disp}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=160 20 420 0, clip=true, width=0.32\textwidth]{fig/fsi-velo-peak-systole.jpeg} \\
(a) FSI \\
\includegraphics[trim=160 20 420 0, clip=true, width=0.32\textwidth]{fig/cfd-velo-peak-systole.jpeg} \\
(b) Rigid wall
\end{tabular}
\caption{Volume rendering of the velocity magnitude at peak systole.}
\label{fig:velo}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[trim=160 20 420 0, clip=true, width=0.32\textwidth]{fig/fsi-wss-peak-systole.jpeg} \\
(a) FSI \\
\includegraphics[trim=160 20 420 0, clip=true, width=0.32\textwidth]{fig/cfd-wss-peak-systole.jpeg} \\
(b) Rigid wall
\end{tabular}
\caption{Wall shear stress (WSS) at peak systole.}
\label{fig:wss}
\end{center}
\end{figure}
\section{Computational results}
Unless otherwise specified, all parameters and results are presented in centimeter-gram-second (CGS) units.
The fluid density and viscosity were set to be $1.06$ and $0.04$, respectively. The arterial wall was modeled as a fully incompressible Neo-Hookean material with the following form for the Gibbs free energy,
\begin{align*}
G\left( \tilde{\bm C}, p \right) = \frac{\mu^s}{2\rho^s_0} \left( \textup{tr}\tilde{\bm C} -3 \right) + \frac{p}{\rho^s_0}.
\end{align*}
The density $\rho^s_0$ and shear modulus $\mu^s$ of the arterial wall were chosen to be $1.0$ and $6.7 \times 10^5$, respectively. The material parameters are adopted from \cite{Yang2019} and are representative of pediatric patients. The flow rate on the inlet surface (Figure \ref{fig:inflow}) was measured by phase-contrast MRI (PC-MRI). Resistance and capacitance values used in the three-element Windkessel models were taken from our previous study \cite{Yang2019}, in which the total resistance and capacitance values for the right and left pulmonary arteries were first determined by a simplified LPN model of the pulmonary circulation to match target clinical pressures. These total values were then distributed to each outlet with an assumption of parallel circuits and an area rule \cite{Yang2019}. In addition to the FSI simulation, we simulated the same problem under the rigid wall assumption with identical inlet and outlet boundary conditions.
The spatially averaged pressures on the inlet surface and two representative outlet surfaces are plotted over time in Figure \ref{fig:pressure}. The rigid wall assumption clearly overestimates the pressure on all three surfaces. The pressure difference between the FSI and rigid wall simulations is most pronounced on the inlet surface at peak systole, at approximately $13$ mm Hg. The pressure overestimation of the rigid wall assumption is consistent with our prior experiences and can be even larger for diseased pulmonary arteries. In Figure \ref{fig:disp}, the wall meshes at early diastole and peak systole are superposed and colored by the wall displacement at peak systole. The cross-sectional area of a slice in the main pulmonary artery increased by $18\%$ from diastole to peak systole, which agrees favorably with our PC-MRI measurement. Figure \ref{fig:velo} depicts the volume rendering of the velocity magnitude at peak systole. Comparing the FSI and rigid wall simulations reveals the largest deviation in the distal branches, where the rigid wall assumption yields a higher velocity magnitude prediction. The flow rates over time on two outlet surfaces are plotted in Figure \ref{fig:inflow}. It reveals that the rigid wall assumption leads to $25\%$ and $17\%$ overpredictions of the flow rates on the two outlet surfaces, respectively. In addition, the FSI simulation yields phase shifts of $0.035$ s and $0.045$ s from the inlet to the outlet surfaces B and C, respectively. This is in contrast to the in-phase behavior of the rigid wall simulation, reflecting the finite wave speed in deformable vessels. Figure \ref{fig:wss}, which depicts the instantaneous wall shear stress (WSS) on the luminal surface at peak systole, also suggests that the rigid wall assumption overpredicts the WSS, especially in the distal branches.
For example, near the outlet surface B (refer to Figure \ref{fig:mesh} for its location), the spatially averaged WSS in the rigid wall calculation gives a $52.6\%$ overestimation in comparison with the FSI result. The overestimation of WSS from rigid wall simulations was also previously reported in cerebral aneurysm simulations \cite{Bazilevs2010,Bazilevs2010a}.
\section{Conclusion}
We have presented a general framework for patient-specific FSI simulations of blood flow. This involves mesh generation from medical image data, a VMS formulation for low-order finite elements and both compressible and incompressible materials, boundary conditions involving coupled LPN models of the downstream circulation, and a time integration scheme offering second-order accuracy for the entire system.
More specifically, the numerical formulation is constructed from the unified continuum model, which uses the Gibbs free energy as the thermodynamic potential and is thus well-behaved in the incompressible limit \cite{Liu2018}. It further makes use of the VMS technique to provide a simple, stable FSI formulation using low-order elements. Together, these two attributes of our numerical formulation allow us to model the arterial wall as a fully incompressible material without resorting to mixed elements; the formulation is particularly well-suited to complex geometries such as those found in the arterial system. The treatment of our fluid and solid sub-domains as a single continuum body governed by the same first-order balance equations facilitates time integration of both domains in a uniform way. Importantly, while the generalized-$\alpha$ method has been established as an accurate and robust temporal scheme for structural dynamics, fluid dynamics, and FSI, the conventional approach has been to treat pressure with the backward Euler method. We have fine-tuned the temporal treatment of pressure such that pressure is evaluated at the intermediate time step no differently from velocity. This fine-tuned temporal scheme has been demonstrated to yield second-order accuracy for the entire system \cite{Liu2020}. Interestingly, when used in conjunction with first-order structural dynamics, the generalized-$\alpha$ method has been found to enjoy better dissipation and dispersion accuracy and avoid the `overshoot' phenomenon \cite{Kadapa2017}. These attributes together yield a stable numerical FSI scheme that not only exhibits higher accuracy but is also more convenient to implement.
In our study, we performed an FSI simulation of a nine-year-old subject's healthy pulmonary arterial tree and compared results against those of a rigid wall simulation. The rigid wall assumption was found to consistently overestimate hemodynamic quantities, including velocity, pressure, and WSS, compared to FSI. The differences are sufficiently large to necessitate the use of FSI for blood flow simulations.
As part of our future directions, we plan to further improve the arterial wall model by incorporating anisotropy and viscoelasticity \cite{Humphrey2013}. To evaluate its predictive capacity in the context of clinically significant hemodynamic quantities, validation of this FSI methodology will also be performed using a combination of clinical and experimental data.
\section*{Acknowledgments}
This work is supported by the NIH under award numbers 1R01HL121754, 1R01HL123689, and R01EB01830204, by computational resources from the Stanford Research Computing Center, and by the Extreme Science and Engineering Discovery Environment, which is supported by NSF grant ACI-1053575. We thank Drs. Jeffrey Feinstein and Frandics Chan for their expertise in pediatric cardiology and cardiac imaging.
\bibliographystyle{elsarticle-num}
\section{Introduction}\label{sec:1}
This is the companion paper to \cite{Hodges.Yong}. That work studies multiplicity-freeness of
key polynomials in the context of \emph{spherical Schubert geometry}. We refer the reader to it for
additional motivation and references about the main result, Theorem~\ref{thm:mfKey}.
Let ${\sf Pol}_n={\mathbb Z}[x_1,\ldots,x_n]$. The \emph{Demazure operator} $\pi_j:{\sf Pol}_n\to {\sf Pol}_n$
is defined by
\[f\mapsto \frac{x_j f - x_{j+1}s_jf}{x_j-x_{j+1}}, \text{
\ where \ ${s_j}f:=f(x_1,\ldots,x_{j+1},x_j,\ldots,x_n)$.}\]
A \emph{weak composition} of length $n$ is $\alpha=(\alpha_1,\ldots,
\alpha_{n})\in {\mathbb Z}_{\geq 0}^n$.
Let ${\sf Comp}_n$ be the set of such $\alpha$.
If $\alpha \in {\sf Comp}_n$ is weakly decreasing, the \emph{key polynomial} $\kappa_{\alpha}$ is
$x^{\alpha}:=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$.
Otherwise,
\[
\kappa_{\alpha}=\pi_j(\kappa_{\widehat \alpha}) \mbox{\ where $\widehat\alpha=(\alpha_1,\ldots,\alpha_{j+1},\alpha_j,\ldots,\alpha_n)$ and
$\alpha_{j+1}>\alpha_{j}$.}
\]
The key polynomials for $\alpha \in {\sf Comp}_n$
form a ${\mathbb Z}$-basis of ${\mathbb Z}[x_1,\ldots,x_n]$; see work of V.~Reiner--M.~Shimozono \cite{Reiner.Shimozono} and of A.~Lascoux \cite{Lascoux:polynomials} (and references therein) for more on $\kappa_{\alpha}$. In \cite[Section~4.4]{Hodges.Yong} we use the fact that $\kappa_{\alpha}$ is the character of a \emph{Demazure module} of $B\subset GL_n$
\cite{Reiner.Shimozono, Ion, Mason}. We do not need this in the present paper, which is entirely combinatorial.
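As an illustration (not part of the original development; the helper names below are ours), the recursion defining $\kappa_{\alpha}$ can be implemented directly with a computer algebra system:

```python
import sympy as sp

def demazure(f, j, x):
    """Demazure operator pi_j: f -> (x_j f - x_{j+1} s_j f) / (x_j - x_{j+1}),
    where s_j swaps the variables x_j and x_{j+1} (0-indexed here)."""
    sj_f = f.subs({x[j]: x[j + 1], x[j + 1]: x[j]}, simultaneous=True)
    return sp.cancel((x[j] * f - x[j + 1] * sj_f) / (x[j] - x[j + 1]))

def key_polynomial(alpha, x):
    """Key polynomial kappa_alpha via the recursion in the text."""
    alpha = list(alpha)
    for j in range(len(alpha) - 1):
        if alpha[j] < alpha[j + 1]:
            ahat = alpha[:]
            ahat[j], ahat[j + 1] = ahat[j + 1], ahat[j]
            return demazure(key_polynomial(ahat, x), j, x)
    # alpha weakly decreasing: kappa_alpha = x^alpha
    return sp.prod([xi ** a for xi, a in zip(x, alpha)])
```

For instance, $\kappa_{(0,1,1)}$ computed this way agrees with the expansion $x_1 x_2 + x_1 x_3 + x_2 x_3$ given in the first example below.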
Let ${\sf Comp} := \bigcup_{n=1}^{\infty} {\sf Comp}_n$. For $\alpha=(\alpha_1,\ldots,
\alpha_{\ell}), \beta=(\beta_1,\ldots, \beta_{k}) \in {\sf Comp}$, $\alpha$ \emph{contains the composition pattern} $\beta$
if there exist integers $j_1 < j_2 < \cdots < j_k$ that satisfy, for all $1 \leq s < t \leq k$:
\begin{itemize}
\item $\alpha_{j_s} \leq \alpha_{j_t}$ if and only if $\beta_{s} \leq \beta_{t}$,
\item $|\alpha_{j_s} - \alpha_{j_t}| \geq |\beta_{s} - \beta_{t}|$.
\end{itemize}
If $\alpha$ does not contain $\beta$, $\alpha$ \emph{avoids} $\beta$. This is a recapitulation of \cite[Definition~4.8]{Hodges.Yong}.
Let \[ {\sf KM} = \{ (0,1,2), (0,0,2,2), (0,0,2,1), (1,0,3,2), (1,0,2,2) \}. \]
Define ${\overline {\sf KM}}_n$ to be those $\alpha\in {\sf Comp}_n$ avoiding all compositions in ${\sf KM}$. The expansion
\[\kappa_{\alpha}=\sum_{\gamma\in {\sf Comp}_n} c_{\gamma}x^{\gamma}\]
is \emph{multiplicity-free} if $c_{\gamma}\in \{0,1\}$ for all
$\gamma\in {\sf Comp}_n$.
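Membership in ${\overline {\sf KM}}_n$ can be tested by brute force over subsequences (our own sketch; the cost is exponential only in the pattern length, which is at most $4$ here):

```python
from itertools import combinations

# The five patterns of the set KM defined above.
KM = [(0, 1, 2), (0, 0, 2, 2), (0, 0, 2, 1), (1, 0, 3, 2), (1, 0, 2, 2)]

def contains_pattern(alpha, beta):
    """True if the composition alpha contains the composition pattern beta."""
    k = len(beta)
    for idx in combinations(range(len(alpha)), k):
        sub = [alpha[j] for j in idx]
        if all((sub[s] <= sub[t]) == (beta[s] <= beta[t])
               and abs(sub[s] - sub[t]) >= abs(beta[s] - beta[t])
               for s in range(k) for t in range(k)):
            return True
    return False

def in_KM_bar(alpha):
    """True if alpha avoids every pattern in KM, i.e. alpha lies in KM-bar_n."""
    return not any(contains_pattern(alpha, beta) for beta in KM)
```

The two examples below are consistent with this check: $(0,1,1)$ avoids all five patterns, while $(0,2,1,2)$ contains $(0,1,2)$.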
\begin{theorem}
\label{thm:mfKey}
$\kappa_{\alpha}$ is multiplicity-free if and only if $\alpha \in {\overline {\sf KM}}_n$.
\end{theorem}
\begin{example}
$\alpha=(0,1,1)\in {\overline {\sf KM}}_3$. $\kappa_{\alpha}=x_1 x_2 + x_1 x_3 + x_2 x_3$
is multiplicity-free.\qed
\end{example}
\begin{example}
$\alpha=(\underline{0},2,\underline{1},\underline{2}) \not\in {\overline {\sf KM}}_4$ (contains $(0,1,2)$ in the underlined positions).
\begin{multline}\nonumber
\kappa_{\alpha} =x_1^2 x_2^2 x_4 + x_1^2 x_2^2 x_3+2 x_1^2 x_2 x_3 x_4+x_1^2 x_2 x_4^2 +x_1^2 x_2 x_3^2 +x_1^2 x_3 x_4^2\\+ x_1^2 x_3^2 x_4 +2 x_1 x_2^2 x_3 x_4+x_1 x_2^2 x_4^2+x_1 x_2^2 x_3^2+x_1 x_2 x_3 x_4^2+x_1 x_2 x_3^2 x_4+x_2^2 x_3 x_4^2+x_2^2 x_3^2 x_4,
\end{multline}
has multiplicity.
\qed
\end{example}
Theorem~\ref{thm:mfKey} is the same as \cite[Theorem~4.10]{Hodges.Yong} (stated there without proof). In \emph{ibid.}, we initiated a study of the notion of \emph{split} multiplicity-free problems. Theorem~\ref{thm:mfKey} concerns the ``most split''
case of these problems (the ``$[n-1]$'' case, in the terminology of \emph{ibid.}).
The sufficiency proof uses the \emph{quasi-key} model of key polynomials due to S.~Assaf--D.~Searles \cite{Assaf.Searles}.
In Section~\ref{sec:assafsearles}, we prove a preparatory theorem (Theorem~\ref{thm:qksummary}), which gives sufficient conditions
for their \emph{quasi-key polynomials} to be multiplicity-free. The conclusion of the proof of Theorem~\ref{thm:mfKey} is given in
Section~\ref{sec:mfKey}. There, the
necessity proof uses the older \emph{Kohnert diagram} model \cite{Kohnert}.
A.~Fink--K.~M\'esz\'aros--A.~St.~Dizier's \cite[Theorem~1.1]{FMSD} characterizes multiplicity-free \emph{Schubert polynomials} in terms of classical pattern avoidance of permutations. Since Schubert polynomials are linear combinations
of key polynomials with positive integer coefficients (see \cite[Theorem~4]{Reiner.Shimozono}), our results are related.
We do not know how to derive one result from the other. The proof methods are different. As explained in \cite[Section~4.3]{Hodges.Yong}, one can look forward to finding ``split'' generalizations of both theorems.
\section{Quasi-key polynomials of S.~Assaf--D.~Searles} \label{sec:assafsearles}
\subsection{Multiplicity-freeness}
\emph{Dominance order}
on $ {\sf Comp}_n$ is
\[\alpha \geq_{\sf Dom} \beta \text{\ \ if $\sum_{i=1}^t \alpha_i \geq \sum_{i=1}^t \beta_i$ for all
$1\leq t\leq n$}.\]
We will use notions introduced in S.~Assaf--D.~Searles' \cite{Assaf.Searles}.
\begin{definition}
A \emph{quasi-key tableau} $T$ of shape $\alpha$ fills $D(\alpha)$ with ${\mathbb Z}_{>0}$ such that
\begin{enumerate}
\item[\qkt{1}] Entries weakly decrease, left to right, along rows. Entries in row $i$ are at most $i$.
\item[\qkt{2}] Entries in each column are distinct. Entries increase upward in the first column.
\item[\qkt{3}] If $i$ appears above $k$ in the same column and $i<k$, then there is a $j$ that appears immediately to the right of that $k$, and $i < j$.
\item[\qkt{4}] If $r<s$, $\alpha_r<\alpha_s$, and $(r,c),(s,c+1)\in D(\alpha)$ then
$T(r,c)<T(s,c+1)$.
\end{enumerate}
\end{definition}
Let ${\sf qKT}(\alpha)$ be the set of quasi-key tableaux of shape $\alpha$. Given $T\in {\sf qKT}(\alpha)$, let ${\sf wt}(T)=(w_1,w_2,\ldots,w_\ell)$ where $w_i$ is the number of $i$'s appearing in $T$.
\begin{definition}
\label{def:quasiKeyPoly}
The \emph{quasi-key polynomial} $\mathfrak{D}_\alpha$ is
\[\mathfrak{D}_\alpha = \sum_{T \in {\sf qKT}(\alpha)} x^{{\sf wt}(T)}.\]
\end{definition}
\begin{definition}
A \emph{left swap} of $\alpha\in {\sf Comp}_n$ is $(\alpha_1,\ldots, \alpha_j,\ldots, \alpha_i,\ldots,\alpha_n)$ where
$\alpha_i < \alpha_j$ for some $i<j$. Let ${\sf lswap}(\alpha)\subseteq {\sf Comp}_n$ be all compositions obtained by iteratively
applying (a possibly empty sequence of) left swaps to $\alpha$. For $\alpha\in {\sf Comp}_n$, let ${\sf flat}(\alpha)\in {\sf Comp}_n$ be $\alpha$ with all $0$'s removed. Now define
\[{\sf Qlswap}(\alpha)=\{\gamma\in {\sf lswap}(\alpha): \text{$\gamma \leq_{\sf Dom} \tau$, for all $\tau \in {\sf lswap}(\alpha)$ such that ${\sf flat}(\gamma) = {\sf flat}(\tau)$}\}.\]
\end{definition}
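The sets ${\sf lswap}(\alpha)$ and ${\sf Qlswap}(\alpha)$ can be computed by a straightforward closure, sketched below (the function names are ours):

```python
def lswap_closure(alpha):
    """All compositions obtainable from alpha by sequences of left swaps."""
    seen = {tuple(alpha)}
    stack = [tuple(alpha)]
    while stack:
        a = stack.pop()
        for i in range(len(a)):
            for j in range(i + 1, len(a)):
                if a[i] < a[j]:            # a left swap moves the larger
                    b = list(a)            # entry to the earlier position
                    b[i], b[j] = b[j], b[i]
                    if tuple(b) not in seen:
                        seen.add(tuple(b))
                        stack.append(tuple(b))
    return seen

def flat(alpha):
    """alpha with all zeros removed."""
    return tuple(a for a in alpha if a != 0)

def dominates(a, b):
    """a >=_Dom b in dominance order: every partial sum of a is >= that of b."""
    sa = sb = 0
    for ai, bi in zip(a, b):
        sa, sb = sa + ai, sb + bi
        if sa < sb:
            return False
    return True

def Qlswap(alpha):
    """Elements of lswap(alpha) that are dominance-minimal in their flat class."""
    L = lswap_closure(alpha)
    return {g for g in L if all(dominates(t, g) for t in L if flat(t) == flat(g))}
```

When $\alpha$ has no zero entries, each flat class is a singleton and ${\sf Qlswap}(\alpha)={\sf lswap}(\alpha)$, as noted in the example below.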
\begin{theorem}[\cite{Assaf.Searles}]\label{AssafSearlesTheorem}
\[\displaystyle \kappa_\alpha = \sum_{\beta \in {\sf Qlswap}(\alpha)} \mathfrak{D}_\beta.\]
\end{theorem}
\begin{example}
Let $\alpha=(3,2,1,3,2)$. Then
\begin{multline}\nonumber
\kappa_{\alpha}=x_1^3 x_2^2 x_3^3 x_4^2 x_5+x_1^3 x_2^2 x_3^3 x_4 x_5^2+x_1^3 x_2^3 x_3^2 x_4^2 x_5
+x_1^3 x_2^3 x_3^2 x_4 x_5^2 +x_1^3 x_2^2 x_3^2 x_4^3 x_5\\
+x_1^3 x_2^3 x_3 x_4^2 x_5^2+{\color{blue} x_1^3 x_2^2 x_3 x_4^3 x_5^2+ x_1^3 x_2^2 x_3^2 x_4^2 x_5^2}
\end{multline}
\begin{multline}\nonumber
\text{and \ } {\sf lswap}(\alpha)=\{(3,2,1,3,2),(3,3,1,2,2),(3,2,3,1,2),(3,2,2,3,1),(3,3,2,1,2),\\
(3,3,2,2,1),(3,2,3,2,1)\}.
\end{multline}
Since $\alpha$ contains no $0$'s, ${\sf Qlswap}(\alpha)={\sf lswap}(\alpha)$ (in fact, this will be the case starting
in Section~\ref{sec:6.2}). For all $\beta\in {\sf lswap}(\alpha)$, except $\beta=\alpha$, $\#{\sf qKT}(\beta)=1$; the unique
tableau is
the \emph{super quasi-key} tableau: the one that places only $b$'s in row $b$. Hence ${\mathfrak D}_{\beta}=x^{\beta}$
in those cases. When $\beta=\alpha$ there are
two quasi-key tableaux, namely
\[\tableau{5 & 5\\4 & 4 & 4\\ 3 \\ 2 & 2\\1&1&1} \text{ \ \ and \ \ }
\tableau{5 & 5\\4 & 4 & 3\\ 3 \\ 2 & 2\\1&1&1}.
\]
Thus, ${\mathfrak D}_{\alpha}={\color{blue} x_1^3 x_2^2 x_3 x_4^3 x_5^2 + x_1^3 x_2^2 x_3^2 x_4^2 x_5^2}$. This
all agrees with Theorem~\ref{AssafSearlesTheorem}.\qed
\end{example}
Define
\[{\overline {\sf KM}}_n^{\geq 1}:=\{\alpha\in {\overline {\sf KM}}_n: \alpha_i \geq 1 \text{\ for $1 \leq i \leq n$}\}.\]
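Membership in ${\overline {\sf KM}}_n$ is a finite pattern-avoidance check against ${\sf KM} = \{ (0,1,2), (0,0,2,2), (0,0,2,1), (1,0,3,2), (1,0,2,2)\}$. The Python sketch below encodes our reading of pattern containment, which should be treated as an assumption: a subsequence matches a pattern when it realizes the same equalities and strict inequalities \emph{and} its gaps are at least the pattern's letter gaps. (This is the convention under which, for example, $(1,1,2,2)$ avoids $(0,0,2,2)$, as the second alternative of Lemma~\ref{lemma:1032avoidingConsequence} requires; for the other four patterns the gap condition is automatic.)

```python
from itertools import combinations

# The five patterns of KM, as recalled in Section 5.
KM_PATTERNS = [(0, 1, 2), (0, 0, 2, 2), (0, 0, 2, 1), (1, 0, 3, 2), (1, 0, 2, 2)]

def matches(sub, pat):
    """sub realizes pat: same equalities and strict order, with value gaps at
    least the pattern's letter gaps (so (1,1,2,2) does NOT match (0,0,2,2))."""
    for a, b in combinations(range(len(pat)), 2):
        if (pat[a] < pat[b]) != (sub[a] < sub[b]):
            return False
        if (pat[a] == pat[b]) != (sub[a] == sub[b]):
            return False
        if pat[a] < pat[b] and sub[b] - sub[a] < pat[b] - pat[a]:
            return False
        if pat[a] > pat[b] and sub[a] - sub[b] < pat[a] - pat[b]:
            return False
    return True

def in_KM_bar(alpha):
    """True iff alpha avoids every pattern in KM."""
    for pat in KM_PATTERNS:
        for idx in combinations(range(len(alpha)), len(pat)):
            if matches(tuple(alpha[i] for i in idx), pat):
                return False
    return True
```

Under this convention the running examples check out: $(10,5,12,9,8,8,4,2,5,1,3)$ and $(3,2,1,3,2)$ avoid all five patterns, while $(1,2,3)$ and $(1,1,3,3)$ do not.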
\begin{theorem}
\label{thm:qksummary}
${\mathfrak D}_{\beta}$ is multiplicity-free if $\beta\in {\sf Qlswap}(\alpha)$ and
$\alpha \in {\overline {\sf KM}}_n^{\geq 1}$.
\end{theorem}
In particular, ${\mathfrak D}_{\alpha}$ is multiplicity-free if $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$.
It would be interesting to characterize precisely when ${\mathfrak D}_{\alpha}$ is multiplicity-free. D.~Brewster, H.~Raza
and the first author have conjectured that the hypothesis that $\alpha_i\geq 1$ in Theorem~\ref{thm:qksummary}
can be dropped.
The remainder of this section is devoted to the proof of Theorem~\ref{thm:qksummary}.
\subsection{Lemmas}\label{sec:6.2}
We need lemmas about ${\overline {\sf KM}}_n^{\geq 1}$, and ${\sf qKT}(\alpha)$ for
$\alpha \in {\overline {\sf KM}}_n^{\geq 1}$. Given $\alpha\in {\sf Comp}_n$, let
$i_1 < \cdots < i_k$
be all indices such that $\alpha_{i_r -1} < \alpha_{i_r}$. For convenience, let $i_0 = 1$, $i_{k+1}=n+1$, $\alpha_0=\infty$, and
$\alpha_{i_{k+1}}=0$. The $m$-th \emph{segment} of $\alpha$ is
\[ \seg{m}{}(\alpha) = \{ i_{m-1},i_{m-1}+1,\ldots,i_{m} - 1 \}, \] and it is denoted $\seg{m}{}$ when the composition is clear by context.
Define
\begin{align*}
\seg{m}{1} := & \{ b \in \seg{m}{} \, | \, \alpha_b \geq \alpha_{i_m} \},\\
\seg{m}{2} := & \{ b \in \seg{m}{}\, | \, \alpha_b < \min(\alpha_{i_{m-1} - 1}, \alpha_{i_m})\textrm{ and } b < i_{m} - 1 \},\\
\seg{m}{3} := & \left\{
\begin{array}{cl}
\emptyset & \text{if $m=k+1$}\\
\{ i_{m} - 1 \} & \text{otherwise.}
\end{array}\right.
\end{align*}
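These indices and the refinement $\seg{m}{1}\sqcup\seg{m}{2}\sqcup\seg{m}{3}$ involve several boundary conventions ($i_0=1$, $i_{k+1}=n+1$, $\alpha_0=\infty$, $\alpha_{i_{k+1}}=0$). The following Python sketch (an illustrative helper of ours, not from the text) computes them directly from the definitions:

```python
import math

def segments(alpha):
    """Return the list [i_0, i_1, ..., i_k, i_{k+1}] and, for each m, the tuple
    (seg_m, seg_m^1, seg_m^2, seg_m^3), using the stated conventions
    i_0 = 1, i_{k+1} = n + 1, alpha_0 = infinity, alpha_{i_{k+1}} = 0."""
    n = len(alpha)
    a = {r: alpha[r - 1] for r in range(1, n + 1)}
    a[0] = math.inf                                       # alpha_0 = infinity
    i = [1] + [r for r in range(2, n + 1) if a[r - 1] < a[r]]   # i_0, i_1, ..., i_k
    k = len(i) - 1
    i.append(n + 1)                                       # i_{k+1} = n + 1
    a[n + 1] = 0                                          # alpha_{i_{k+1}} = 0
    segs = []
    for m in range(1, k + 2):
        seg = list(range(i[m - 1], i[m]))
        s1 = [b for b in seg if a[b] >= a[i[m]]]
        s2 = [b for b in seg
              if a[b] < min(a[i[m - 1] - 1], a[i[m]]) and b < i[m] - 1]
        s3 = [] if m == k + 1 else [i[m] - 1]
        segs.append((seg, s1, s2, s3))
    return i, segs
```

Running it on $\alpha = (10,5,12,9,8,8,4,2,5,1,3)$ reproduces the data of the example below.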
\begin{lemma}
\label{lemma:segPartsRange}
Let $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$.
\begin{enumerate}
\item[(a)] $\seg{m}{} = \seg{m}{1}\sqcup \seg{m}{2}\sqcup \seg{m}{3}$.
\item[(b)] $\seg{m}{i}$ is a consecutive sequence of integers, for $i \in \{ 1,2,3 \}$.
\item[(c)] $\#\seg{m}{1} \geq 1$ for $m > 1$.
\item[(d)] If $b\in \seg{m}{3}$ (that is, $b=i_m-1$) then $\alpha_{i_{m-1}-1}\geq \alpha_b$.
\item[(e)] If $b\in \seg{m}{}$ and $\alpha_{i_{m-1}-1} < \alpha_b$, then $b\in \seg{m}{1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
By definition of $\seg{m}{}$,
$\alpha_{i_{m-1}}\geq\alpha_{i_{m-1}+1}\geq \ldots\geq
\alpha_{i_m-1}$. Thus (b) holds. For the same reason, $\seg{m}{1},\seg{m}{2},\seg{m}{3}$ are disjoint.
Since $\alpha$ avoids $(0,1,2)$, there is no $b\in \seg{m}{}$ such that
$\alpha_{i_{m-1}-1}<\alpha_b<\alpha_{i_m}$. This proves $\seg{m}{} = \seg{m}{1}\sqcup \seg{m}{2}\sqcup \seg{m}{3}$; hence
(a) holds. Next, if $m>1$ and (c) is false, then $\alpha_{i_{m-1}-1}<\alpha_{i_{m-1}}<\alpha_{i_m}$ forms
a $(0,1,2)$ pattern, a contradiction.
If (d) is false then $(\alpha_{i_{m-1}-1}<\alpha_{i_m-1}<\alpha_{i_m})$ is a $(0,1,2)$ pattern, a contradiction. Finally, (e) follows from (d), the definition of $\seg{m}{2}$, and (a).
\end{proof}
\begin{example}
Let
$\alpha = (10,5,12,9,8,8,4,2,5,1,3)$.
Then $\alpha\in {\overline {\sf KM}}_{11}^{\geq 1}$, $k=3$,
$i_0:=1, i_1 = 3, i_2 = 9, i_3 = 11, i_4:=12$, and
\begin{center}
$\seg{1}{} = \{ 1,2 \}$, $\seg{2}{} = \{ 3,4,5,6,7,8 \}$, $\seg{3}{} = \{ 9,10 \}$, and $\seg{4}{} = \{ 11 \}$
\end{center}
with
\begin{center}
$\begin{array}{llll}
\qquad & \seg{1}{1} = \{ \},& \seg{1}{2} = \{ 1\},& \textrm{ and }\seg{1}{3} = \{ 2 \} \\[3pt]
& \seg{2}{1} = \{ 3,4,5,6 \},& \seg{2}{2} = \{ 7 \},&\textrm{ and }\seg{2}{3} = \{ 8 \} \\[3pt]
& \seg{3}{1} = \{ 9 \},& \seg{3}{2} = \{ \},&\textrm{ and }\seg{3}{3} = \{ 10 \} \\[3pt]
& \seg{4}{1} = \{ 11 \},& \seg{4}{2} = \{ \},&\textrm{ and }\seg{4}{3} = \{ \}
\end{array}$
\end{center}\qed
\end{example}
Here and in the sequel, it will be convenient to write, \emph{e.g.,} $(\alpha_a,\alpha_b,\alpha_c,\alpha_d)\simeq (1,0,3,2)$
if the subsequence $(\alpha_a,\alpha_b,\alpha_c,\alpha_d)$ of $\alpha$ forms a $(1,0,3,2)$ pattern.
\begin{lemma}
\label{lemma:1032avoidingConsequence}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$
and fix $1\leq m \leq k+1$. If $s < i_{m} - 1$ and $r > i_{m}$ then either
\begin{itemize}
\item $\alpha_s \geq \alpha_r$; or
\item $\alpha_s < \alpha_r$ with $\alpha_s=\alpha_{i_{m} - 1}$ and $\alpha_{i_m}=\alpha_r=\alpha_{i_{m} - 1}+1$.
\end{itemize}
\end{lemma}
\begin{proof}
If $\alpha_s \geq \alpha_r$ we are done. Assume $\alpha_s<\alpha_r$.
We have $s < i_{m} - 1 < i_m < r$ with $\alpha_{i_{m} - 1} < \alpha_{i_{m}}$.
Since $\alpha$ avoids $(0,1,2)$, so must the subsequence $A=(\alpha_s, \alpha_{i_{m} - 1}, \alpha_{i_{m}}, \alpha_r)$. Thus $\alpha_s \geq \alpha_{i_{m} - 1}$ and $\alpha_{i_{m}} \geq \alpha_r$. Since $A\in {\overline{\sf KM}}$, $A\not\simeq (1,0,3,2),(1,0,2,2)$. Hence $\alpha_s=\alpha_{i_{m} - 1}$. Since $A\not\simeq (0,0,2,1)$, $\alpha_{i_m}=\alpha_r$. Finally,
since $A\not\simeq (0,0,2,2)$, $\alpha_r=\alpha_{i_{m}-1}+1$.
\end{proof}
\begin{lemma}\label{lemma:rectangle}
If $D(\alpha)$ contains a southwest $s\times t$ rectangle (that is, $(r,c)\in D(\alpha)$ whenever $1\leq r\leq s$ and $1\leq c\leq t$) and $T\in {\sf qKT}(\alpha)$, then $T(r,c)=r$ for all $1\leq r\leq s$
and $1\leq c\leq t$.
\end{lemma}
\begin{proof}
By \qkt{1} and \qkt{2}.
\end{proof}
\begin{lemma}
\label{lemma:rowfilledrowval}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$. Let $T \in {\sf qKT}(\alpha)$. If
\begin{itemize}
\item[(a)] $b\in \seg{1}{}$, $b\in \seg{m}{1}$ with $\alpha_{i_{m-1} - 1}\geq \alpha_b$, $b \in \seg{m}{2}$, or $b \in \seg{m}{3}$ then row $b$ of $T$ only contains $b$'s.
\item[(b)] $b\in \seg{m}{1}$ with $\alpha_{i_{m-1} - 1}<\alpha_b$ then the leftmost $\alpha_{i_{m-1} - 1} + 1$ columns of row $b$ only contain~$b$'s.
\end{itemize}
\end{lemma}
\begin{proof}
(a): First suppose $b\in \seg{1}{}$.
Row $1$ of $T$ must only contain $1$'s by \qkt{1}. If $2 \in \seg{1}{}$ then $\alpha_1 \geq \alpha_2$, so by \qkt{1} and \qkt{2} row $2$ of $T$ must only contain $2$'s. The same holds for all rows in $\seg{1}{}$, by induction.
Now suppose we satisfy one of the other possibilities of (a). Since $b\in \seg{m}{}$,
\begin{equation}
\label{eqn:June18dee}
\alpha_r\geq \alpha_b, \ i_{m-1}\leq r\leq b.
\end{equation}
Since $\alpha$ avoids $(0,1,2)$,
\begin{equation}
\label{eqn:June18yyyy}
\alpha_r\geq \alpha_{i_{m-1}-1}, \ 1\leq r\leq i_{m-1}-1.
\end{equation}
By the hypothesis (if $b\in \seg{m}{1}$), the
definition of $\seg{m}{2}$, or Lemma~\ref{lemma:segPartsRange}(d) (if $b\in\seg{m}{3}$),
\begin{equation}
\label{eqn:June18ywyw}
\alpha_{i_{m-1} - 1}\geq \alpha_b.
\end{equation}
By (\ref{eqn:June18dee}), and by (\ref{eqn:June18yyyy}) combined with (\ref{eqn:June18ywyw}), we conclude
that $\alpha_r\geq \alpha_b$ for all $1\leq r\leq b$. Now apply Lemma~\ref{lemma:rectangle} to this $b\times \alpha_b$
southwest rectangle in $D(\alpha)$.
(b): By (\ref{eqn:June18yyyy}), the hypothesis $\alpha_{i_{m-1} - 1}<\alpha_b$, and
(\ref{eqn:June18dee}),
\begin{equation}
\alpha_r\geq \alpha_{i_{m-1}-1}, \ 1\leq r\leq b.
\end{equation}
This implies there is a southwest $b\times \alpha_{i_{m-1}-1}$ rectangle in $D(\alpha)$. Hence by Lemma~\ref{lemma:rectangle}, $T(r,c)=r$ for $1\leq r\leq b$ and $1\leq c\leq \alpha_{i_{m-1}-1}$. Since
$1\leq \alpha_{i_{m-1} - 1}<\alpha_s$
for $i_{m-1} - 1 < s \leq b$, we are done by \qkt{1}, \qkt{2} and \qkt{4}.
\end{proof}
\begin{example}\label{exa:June17zzz}
Let $\alpha = (10,5,12,9,8,8,4,2,5,1,3)$. Figure~\ref{fig:June17xdd} shows the forced entries for quasi-key tableau in ${\sf qKT}(\alpha)$.\qed
\end{example}
\begin{figure}
\begin{center}
\ytableausetup{boxsize=1.4em}
\begin{ytableau}
11 & 11 & \; \\
10 \\
9 & 9 & 9 & \; & \; \\
8 & 8 \\
7 & 7 & 7 & 7 \\
6 & 6 & 6 & 6 & 6 & 6 & \; & \; \\
5 & 5 & 5 & 5 & 5 & 5 & \; & \; \\
4 & 4 & 4 & 4 & 4 & 4 & \; & \; & \; \\
3 & 3 & 3 & 3 & 3 & 3 & \; & \; & \; & \; & \; & \; \\
2 & 2 & 2 & 2 & 2 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{ytableau}
\end{center}
\caption{Forced entries of the quasi-key tableaux for Example~\ref{exa:June17zzz} \label{fig:June17xdd}}
\end{figure}
\begin{lemma}
\label{lemma:seesawLemma}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$ and fix $1 \leq m \leq k+1$. Let $T \in {\sf qKT}(\alpha)$ and let ${\sf y}$ be a box such that ${\sf row}({\sf y}) > i_{m-1}$. Then $T({\sf y}) \geq i_{m-1}-1$.
\end{lemma}
\begin{proof}
To reach a contradiction, suppose
\begin{equation}
\label{eqn:July8aop}
T({\sf y})< i_{m-1}-1.
\end{equation}
\noindent \textit{Case 1:} ($\alpha_s \geq \alpha_{{\sf row}({\sf y})}$ for $1 \leq s \leq T({\sf y})$) By Lemma~\ref{lemma:rectangle}, $T(s,c)=s$ for all $1 \leq s \leq T({\sf y})$ and $1 \leq c \leq \alpha_{{\sf row}({\sf y})}$. Since ${\sf col}({\sf y}) \leq \alpha_{{\sf row}({\sf y})}$, $T(T({\sf y}), {\sf col}({\sf y}))=T({\sf y})$. This, with \eqref{eqn:July8aop}, and the hypothesis ${\sf row}({\sf y})>i_{m-1}$, shows the label $T({\sf y})$ occurs twice in ${\sf col}({\sf y})$, contradicting $\qkt{2}$.
\noindent \textit{Case 2:} ($\alpha_s < \alpha_{{\sf row}({\sf y})}$ for some $1 \leq s \leq T({\sf y})$)
Lemma \ref{lemma:1032avoidingConsequence} (applied to $i_{m-1}$, $r={\sf row}({\sf y})$) shows that for any $1 \leq s \leq T({\sf y})$ such that $\alpha_s < \alpha_{{\sf row}({\sf y})}$,
$\alpha_{{\sf row}({\sf y})} = \alpha_{i_{m-1}} = \alpha_{i_{m-1}-1} + 1 = \alpha_s + 1$. So,
\begin{equation}
\label{eqn:July8rft}
\alpha_s \geq \alpha_{{\sf row}({\sf y})} - 1\text{ for all $1 \leq s \leq T({\sf y})$.}
\end{equation}
Hence Lemma~\ref{lemma:rectangle} shows
\begin{equation}
\label{eqn:July8bvc}
T(s,c)=s \text{ for all $1 \leq s \leq T({\sf y})$ and $1 \leq c \leq \alpha_{{\sf row}({\sf y})} - 1$.}
\end{equation}
Let
\[t=\max\{1 \leq s \leq T({\sf y}): \alpha_s < \alpha_{{\sf row}({\sf y})}\};\]
$t$ is well-defined (the set above is nonempty) by this case's assumption.
By \eqref{eqn:July8rft} (and the case assumption), $\alpha_{{\sf row}({\sf y})} - 1 = \alpha_t$. Therefore,
by the maximality of $t$,
\begin{equation}
\label{eqn:July8ugh}
\alpha_{{\sf row}({\sf y})} - 1 = \alpha_t < \alpha_u, \text{\ for $t < u \leq T({\sf y})$.}
\end{equation}
Thus \eqref{eqn:July8bvc}, \eqref{eqn:July8ugh} and $\qkt{4}$ imply
$t = T(t,\alpha_{{\sf row}({\sf y})}-1) < T(u,\alpha_{{\sf row}({\sf y})})$, for $t < u \leq T({\sf y})$. Hence, by inductively applying $\qkt{1}$ and $\qkt{2}$ we conclude
\begin{equation}
\label{eqn:July8fin}
T(u,\alpha_{{\sf row}({\sf y})})=u, \text{\ for $t < u \leq T({\sf y})$.}
\end{equation}
Finally, by the definition of $t$, $\alpha_t < \alpha_{{\sf row}({\sf y})}$. So \eqref{eqn:July8bvc} and $\qkt{4}$ imply
\begin{equation}
\label{eqn:July8qwerty}
t = T(t,\alpha_{{\sf row}({\sf y})}-1) < T({\sf row}({\sf y}),\alpha_{{\sf row}({\sf y})}).
\end{equation}
However, by \eqref{eqn:July8fin} we have $t+1=T(t+1,\alpha_{{\sf row}({\sf y})})$. Hence by (\ref{eqn:July8qwerty})
and \qkt{2}, $t+1<T({\sf row}({\sf y}),\alpha_{{\sf row}({\sf y})})$. Repeating this argument replacing $t+1$ successively with $t+2,t+3,\ldots, T({\sf y})$
in (\ref{eqn:July8fin}) we arrive at $T({\sf y}) < T({\sf row}({\sf y}),\alpha_{{\sf row}({\sf y})})$; this
contradicts $\qkt{1}$.
\end{proof}
\begin{lemma}
\label{lemma:uniqueSmaller}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$. Let $T \in {\sf qKT}(\alpha)$, $b \in \seg{m}{}$ for $1 < m \leq k+1$ with $\alpha_{i_{m-1}-1} < \alpha_b$, and $c > \alpha_{i_{m-1} -1}+1$. Then
\[\#\{{\sf x}\in D(\alpha):b\leq {\sf row}({\sf x}), {\sf col}({\sf x}) = c, T({\sf x})<b\}\leq 1.\]
\end{lemma}
\begin{proof}
Suppose there were two rows
\begin{equation}
\label{eqn:June19ppp}
b\leq r<r' \text{\ such that $T(r',c), T(r,c)<b$.}
\end{equation}
By hypothesis, $\alpha_{i_{m-1}-1}<\alpha_b$.
Thus, if $\alpha_b<\alpha_r$ then $(\alpha_{i_{m-1}-1},\alpha_b,\alpha_r)\simeq (0,1,2)$, contradicting
$\alpha \in {\overline {\sf KM}}_n^{\geq 1}$. Hence $\alpha_b\geq \alpha_r$. Suppose $\alpha_r<\alpha_{r'}$. Now $r \in \seg{f}{}$ for some $f \geq m$. If $\alpha_{i_{f-1} -1} \geq \alpha_r$, then Lemma~\ref{lemma:rowfilledrowval}(a) would imply row $r$ contains only $r$'s. Since this is not the case by \eqref{eqn:June19ppp}, it must be that $\alpha_{i_{f-1} -1} < \alpha_r$. Thus
$(\alpha_{i_{f-1} -1}, \alpha_r, \alpha_{r'})$ is a $(0,1,2)$ pattern, a contradiction. Therefore,
\begin{equation}
\label{eqn:June19ggg}
\alpha_r \geq \alpha_{r'} \geq c
\end{equation}
(where the latter inequality is by
(\ref{eqn:June19ppp})). By (\ref{eqn:June19ppp}) together with \qkt{1} and \qkt{2}, there exist
two rows $s < s' < r$ with
\begin{equation}
\label{eqn:June19jjj}
\alpha_s,\alpha_{s'} < c.
\end{equation}
Since $(\alpha_s, \alpha_{s'}, \alpha_r, \alpha_{r'}) \in {\overline {\sf KM}}_4$, it follows straightforwardly from (\ref{eqn:June19ggg}) and (\ref{eqn:June19jjj}) that
\begin{equation}
\label{eqn:June19xyp}
\alpha_s=\alpha_{s'} \text{\ and $\alpha_r=\alpha_{r'}=\alpha_s +1$.}
\end{equation}
In fact (\ref{eqn:June19xyp}) holds for any $s<s'<r$ satisfying (\ref{eqn:June19jjj}). In particular, by hypothesis
$\alpha_{i_{m-1}-1}<c$. Hence, there is at least one pair $s, s'$ satisfying (\ref{eqn:June19jjj}) with either $s = i_{m-1}-1$ or $s' = i_{m-1}-1$.
Then by
(\ref{eqn:June19ggg}) and
(\ref{eqn:June19xyp}), $c\leq \alpha_r=\alpha_{i_{m-1}-1}+1$, contradicting the hypothesis on $c$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:qksummary}} The next two propositions immediately
give Theorem~\ref{thm:qksummary}.
\begin{proposition}
\label{prop:quasiKeyMultFree}
If $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$, then $\mathfrak{D}_\alpha$ is multiplicity-free.
\end{proposition}
\begin{proof} Suppose not. There exist distinct $T,T'\in {\sf qKT}(\alpha)$ such that
${\sf wt}(T)={\sf wt}(T')$. Define
\[b:=\max\{v: \exists {\sf x}, T({\sf x})=v, T'({\sf x})\neq v\}.\]
Since ${\sf wt}(T)={\sf wt}(T')$,
\begin{equation}
\label{eqn:June21x'}
\exists {\sf x}' \text{ \ such that \ } T'({\sf x}')=b, T({\sf x}')\neq b.
\end{equation}
Let $b'=\max\{v: \exists {\sf x}, T'({\sf x})=v, T({\sf x})\neq v\}$. We claim that $b = b'$. Firstly, \eqref{eqn:June21x'} implies $b \leq b'$. Since ${\sf wt}(T)={\sf wt}(T')$, the definition of $b'$ indicates there exists an ${\sf x}''$ such that $T({\sf x}'')=b'$ with $T'({\sf x}'')\neq b'$. If $b' > b$, this would contradict the definition of $b$. Hence $b=b'$ and
\begin{equation}
\label{eqn:June20pop}
b:=\max\{v: \exists {\sf x}, T({\sf x})=v, T'({\sf x})\neq v\}=\max\{v: \exists {\sf x}, T'({\sf x})=v, T({\sf x})\neq v\}.
\end{equation}
Let
\[p_T:=\max\{c: T(b,c)=b\} \text{ \ \ and \ \ } p_{T'}:=\max\{c:T'(b,c)=b\}.\]
Since $\alpha_i \geq 1$ for all $1 \leq i \leq n$, $\qkt{1}$ and $\qkt{2}$ imply that
$T(b,1)=T'(b,1)=b$; hence these maxima exist, with $p_T, p_{T'}\geq 1$. By swapping $T$ and $T'$ (if necessary), we may assume
\begin{equation}
\label{equation:wlogLess}
p_{T'} \leq p_T.
\end{equation}
\begin{claim}
\label{claim:fixedAbove}
Let $b \in \seg{m}{}$ for $1 < m \leq k+1$ with $\alpha_{i_{m-1}-1} < \alpha_b$.
\begin{itemize}
\item[(I)] $T({\sf y})\geq b$ if ${\sf row}({\sf y})>b$ and ${\sf col}({\sf y}) \geq p_T$. Similarly,
$T'({\sf y})\geq b$ if ${\sf row}({\sf y})>b$ and ${\sf col}({\sf y}) \geq p_{T'}$.
\item[(II)] $T({\sf y})=T'({\sf y})$ if ${\sf row}({\sf y})>b$ and ${\sf col}({\sf y}) \geq p_{T}$.
\end{itemize}
\end{claim}
\noindent
\emph{Proof of Claim~\ref{claim:fixedAbove}:} (I): We prove the assertion for $T$; the $T'$ claim is the same. By definition of $p_T$ and \qkt{1}, $T(b,c)<b$ for any $c>p_T$.
By hypothesis $\alpha_{i_{m-1}-1} < \alpha_b$, and hence Lemma~\ref{lemma:segPartsRange}(e) implies $b \in \seg{m}{1}$.
Thus Lemma~\ref{lemma:rowfilledrowval}(b) indicates that
\begin{equation}
\label{eqn:June20ggg}
p_T\geq \alpha_{i_{m-1}-1}+1.
\end{equation}
Hence $c> \alpha_{i_{m-1}-1}+1$.
Thus, the hypotheses of Lemma \ref{lemma:uniqueSmaller} hold, and the conclusion of that lemma is that
$T({\sf y}) \geq b$
if ${\sf row}({\sf y})>b$ and ${\sf col}({\sf y}) =c (> p_T)$.
Thus we may assume ${\sf row}({\sf y})>b$ and ${\sf col}({\sf y}) = p_T$. Suppose $p_T = \alpha_b$. If $T({\sf y}) < b = T(b, p_T)$, then by \qkt{3}, there is a box of $D(\alpha)$ in position
$(b,p_T+1)=(b,\alpha_b+1)$, contradicting the definition of $D(\alpha)$. Hence, $p_T < \alpha_b$. Let
\begin{equation}
\label{eqn:b'def}
\ell=T(b, p_T+1) \text{\ and \ $d = T({\sf y})$.}
\end{equation}
We want to show $d\geq b$; suppose not. By the definition (\ref{eqn:b'def}) of $\ell$ together with $\qkt{1}$,
\begin{equation}
\label{eqn:Jul16yoa}
\ell<b.
\end{equation}
Thus there are three cases:
\noindent
\emph{Case 1:} ($\ell \leq d < b$) $T$ violates \qkt{3} (where here $i=d, k=b$ and $j=\ell$ in that rule).
\noindent
\emph{Case 2:} ($d<i_{m-1} - 1$) Since $b\in \seg{m}{}=\{i_{m-1},i_{m-1}+1,\ldots,i_m-1\}$ (by hypothesis), $b\geq i_{m-1}$.
Lemma~\ref{lemma:seesawLemma} states that $d=T({\sf y})\geq i_{m-1}-1$ since ${\sf row}({\sf y})>b\geq i_{m-1}$. Hence this case cannot occur.
\noindent
\emph{Case 3:} ($i_{m-1} - 1 \leq d < \ell$) Since $b\in \seg{m}{}$ (by hypothesis), and $\ell < b$ by \eqref{eqn:Jul16yoa}, the assumption of this case says $d+1\in \seg{m}{}$.
Hence by definition of $\seg{m}{}$, \[\alpha_{d+1}\geq \alpha_{b}>\alpha_{i_{m-1}-1}\] (the latter inequality by the
hypothesis). We claim
\begin{equation}
\label{eqn:July17tri}
T(s,p_T)=s\text{ for $d+1 \leq s \leq b$.}
\end{equation}
If $p_T = \alpha_{i_{m-1}-1}+1$, then Lemma~\ref{lemma:rowfilledrowval}(b) implies \eqref{eqn:July17tri}. Otherwise (\ref{eqn:June20ggg}) implies $p_T > \alpha_{i_{m-1}-1}+1$. Thus
Lemma \ref{lemma:uniqueSmaller} applied to column $p_T$ and row $d+1$ implies
\begin{equation}
\label{eqn:June20hhh}
\#\{s\geq d+1: T(s,p_T)< d+1\}\leq 1.
\end{equation}
However, $T({\sf y})=d$ and we assumed ${\sf row}({\sf y})>b>\ell\geq d+1$ (the last inequality being this case's assumption).
The previous sentence, combined with (\ref{eqn:June20hhh}) and $\qkt{1}$ says that $T(d+1,p_T)=d+1$. Iterating this
argument, using $\qkt{2}$, for $d+2 \leq s \leq b$ implies \eqref{eqn:July17tri}.
Now apply \qkt{3} to $T({\sf y})=d<T(s,p_T)$ to see that
$T(s,p_T+1)>d$ for $d+1 \leq s \leq b$. On the other hand, \qkt{1} says $T(s,p_T+1)\leq b$ for $d+1\leq s\leq b$. The definition of
$p_T$ means $T(s,p_T+1)\neq b$. Concluding,
\[d<T(s,p_T+1)<b, \text{\ for $d+1\leq s\leq b$.}\] By pigeonhole,
two of $\{T(s,p_T+1):d+1\leq s\leq b\}$ are equal, contradicting \qkt{2}.
Hence $d\geq b$, as desired.
(II): Suppose not and let $T({\sf y})\neq T'({\sf y})$ for some ${\sf y}$ such that ${\sf row}({\sf y})>b$ and
${\sf col}({\sf y})\geq p_T$. In particular, at least one of $T({\sf y})$ and $T'({\sf y})$ is not $b$.
If $\max\{T({\sf y}),T'({\sf y})\}\leq b$ then $\min\{T({\sf y}),T'({\sf y})\}<b$, contradicting (I). Hence $\max\{T({\sf y}),T'({\sf y})\}>b$. This contradicts \eqref{eqn:June20pop}. \qed
There are four possible cases to consider.
\noindent \textit{Case 1:} ($b\in \seg{1}{}$, $b\in \seg{m}{1}$ with $\alpha_{i_{m-1} - 1}\geq \alpha_b$, or $b \in \seg{m}{2}$)
Let $b\in \seg{m}{}$ ($1\leq m\leq k+1$). By Lemma \ref{lemma:rowfilledrowval}(a),
\begin{equation}
\label{eqn:June21aaa}
T(b,c)=b, \ T'(b,c)=b \text{\ for all $1\leq c\leq \alpha_b$.}
\end{equation}
By \qkt{1}, $b$ cannot appear in $T$ in any row $s$ strictly south of $b$. On the other hand, if $s\in \seg{m}{}$, and $s>b$, then
$\alpha_{s}\leq \alpha_b$. Hence by \qkt{2}, $b$ cannot appear in row $s$ of $T$. Now suppose $s>i_m$. Since
$i_m\in \seg{m+1}{}$, we have $i_m> b+1$. Hence by Lemma~\ref{lemma:seesawLemma}, no labels $<i_m-1$ appear in rows
strictly north of row $i_m$. In particular, $b$ does not appear in those rows. What we have just written also applies to $T'$,
thus
\begin{equation}
\label{eqn:June21jjj}
T(r,c)=b \Rightarrow r=b,i_m \text{ \ and \ } T'(r',c)=b \Rightarrow r'=b,i_m.
\end{equation}
Let ${\sf x}'$ be the box defined in (\ref{eqn:June21x'}). By (\ref{eqn:June21aaa}), in both $T$ and $T'$, row $b$
is filled entirely by $b$'s. Hence ${\sf row}({\sf x}')\neq b$. Thus by (\ref{eqn:June21jjj}), ${\sf row}({\sf x}')=i_m$.
Since ${\sf wt}(T)={\sf wt}(T')$, row $i_m$ has the same number of $b$'s in $T$ and $T'$. By $\qkt{1}$, all
labels left of the $b$'s in row $i_m$ of $T$ are strictly larger; the exact same statement is true of $T'$. However,
those larger labels cannot differ between $T$ and $T'$ by (\ref{eqn:June20pop}). Hence in fact, the $b$'s in row
$i_m$ are exactly in the same place in $T$ and $T'$, contradicting the definition (\ref{eqn:June21x'}) of ${\sf x}'$.
\noindent \textit{Case 2:} ($b \in \seg{m}{3}$) By Lemma \ref{lemma:rowfilledrowval}(a),
\begin{equation}
\label{eqn:June21aaaa}
T(b,c)=b, \ T'(b,c)=b \text{\ for all $1\leq c\leq \alpha_b$.}
\end{equation}
By \qkt{1}, $b$ cannot appear in $T$ in any row $s$ strictly south of $b$. Let ${\sf x}'$ be the box defined in (\ref{eqn:June21x'}). By (\ref{eqn:June21aaaa}), in both $T$ and $T'$, row $b$
is filled entirely by $b$'s. Hence ${\sf row}({\sf x}')\neq b$. Notice
\begin{equation}
\label{eqn:June22qqq}
T({\sf y})=T'({\sf y}) \text{ \ for all ${\sf y}$ such that ${\sf row}({\sf y}) > i_m$.}
\end{equation}
Indeed, by Lemma~\ref{lemma:seesawLemma}, $T({\sf y}), T'({\sf y})\geq i_m-1=b$. Then
(\ref{eqn:June20pop}) shows
$T({\sf y}) = T'({\sf y})$.
It remains to consider whether ${\sf row}({\sf x}')=i_m$ is possible. The contradiction in this case is derived exactly as
at the end of \textit{Case 1}.
\noindent \textit{Case 3:} ($b \in \seg{m}{1}$ for $1 < m \leq k+1$ with $\alpha_{i_{m-1}-1} < \alpha_b$, and $p_T = p_{T'}$)
By \qkt{1} any entry in row $b$ of $T$ or $T'$ is $\leq b$. Thus, since $p_T = p_{T'}$,
\begin{equation}
\label{eqn:June22like}
T(b,c)=b \iff 1\leq c\leq p_T \text{\ and \ } T'(b,c')=b \iff 1\leq c'\leq p_T(=p_{T'}).
\end{equation}
Hence ${\sf row}({\sf x}')\neq b$. Thus, by \qkt{1}, ${\sf row}({\sf x}') > b$. Then (\ref{eqn:June22like}) and
\qkt{2} implies ${\sf col}({\sf x}') > p_T$. Thus, Claim~\ref{claim:fixedAbove}(II) says that $T({\sf x}')=T'({\sf x}')$, which
contradicts the definition (\ref{eqn:June21x'}) of ${\sf x}'$.
\noindent \textit{Case 4:} ($b \in \seg{m}{1}$ for $1 < m \leq k+1$ with $\alpha_{i_{m-1}-1} < \alpha_b$, and $p_T > p_{T'}$)
Since
\begin{equation}
\label{eqn:June22much}
T(b,c)=b \iff 1\leq c\leq p_T \text{\ and \ } T'(b,c')=b \iff 1\leq c'\leq p_{T'}(<p_{T}),
\end{equation}
by
\qkt{1} and ${\sf wt}(T)={\sf wt}(T')$, we have
\begin{equation}
\label{eqn:June22music}
\#\{{\sf z}\in D(\alpha): T'({\sf z})=b, T({\sf z})\neq b, {\sf row}({\sf z})>b\}\geq p_T - p_{T'}.
\end{equation}
For all ${\sf z}$ in the set from (\ref{eqn:June22music}), we have that ${\sf col}({\sf z})>p_{T'}$ by (\ref{eqn:June22much})
combined with \qkt{2}. Moreover, by Claim~\ref{claim:fixedAbove}(II), $T({\sf z})=T'({\sf z})$ if ${\sf col}({\sf z})\geq p_T$
and ${\sf row}({\sf z})>b$. Hence ${\sf col}({\sf z})<p_{T}$. Summarizing, by (\ref{eqn:June22music}) there are $p_T-p_{T'}$
of these boxes ${\sf z}$ that satisfy $p_{T'}<{\sf col}({\sf z})<p_T$. By pigeonhole, at least two of these ${\sf z}$
are in the same column. This contradicts \qkt{2}.
We conclude that no such $T,T'$ can exist.
\end{proof}
\begin{proposition}
\label{prop:exhaustiveQKMultFreeCases}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$ and $\beta \in {\sf Qlswap}(\alpha)$. Then $\mathfrak{D}_\beta$ is multiplicity-free.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:quasiKeyMultFree}, it suffices to show that
${\sf Qlswap}(\alpha)\subseteq {\overline {\sf KM}}_n^{\geq 1}$.
Since $\alpha$ has no parts equal to $0$, ${\sf Qlswap}(\alpha) = {\sf lswap}(\alpha)$.
Hence, by induction, it is enough to prove that if $\beta=(\ldots, \alpha_j,\ldots, \alpha_i,\ldots)$
is a left swap of $\alpha$ then $\beta \in {\overline {\sf KM}}_n^{\geq 1}$. To reach a contradiction, assume $\beta \notin {\overline {\sf KM}}_n^{\geq 1}$.
There are five cases to consider. In each subcase, the contradiction derived is that $\alpha$ contains a pattern from
${\sf KM} = \{ (0,1,2), (0,0,2,2), (0,0,2,1), (1,0,3,2), (1,0,2,2)\}$.
\noindent (I) \underline{$(\beta_{a_1},\beta_{a_2},\beta_{a_3})\simeq (0,1,2)$.} Since $\beta$ is a left swap of $\alpha$,
$\{i,j\}\not\subseteq \{a_1,a_2,a_3\}$. Also, since $\alpha\in {\overline {\sf KM}}_n^{\geq 1}$,
$\{i,j\}\cap \{a_1,a_2,a_3\}\neq \emptyset$.
\noindent \textit{Subcase 1:} ($a_1=i$) $\alpha_{i} < \alpha_{a_2} < \alpha_{a_3}$. Hence,
$(\alpha_i,\alpha_{a_2},\alpha_{a_3})\simeq (0,1,2)$.
\noindent \textit{Subcase 2:} ($a_2=i$) If $\alpha_{a_1} < \alpha_{i}$, then $\alpha_{a_1} < \alpha_{i} < \alpha_{a_3}$ and
hence $(\alpha_{a_1} ,\alpha_{i} , \alpha_{a_3})\simeq (0,1,2)$. Thus assume $\alpha_{a_1}\geq \alpha_i$. If $j > a_3$, then $(\alpha_{a_1}, \alpha_{i},\alpha_{a_3},\alpha_{j}) \simeq (0,0,2,1)$ or $\simeq (1,0,3,2)$. Otherwise $j < a_3$ and hence
$\alpha_{i} < \alpha_{j} < \alpha_{a_3}$.
\noindent \textit{Subcase 3:} ($a_3=i$) $\alpha_{a_1} < \alpha_{a_2} < \alpha_{j}$.
\noindent \textit{Subcase 4:} ($a_1=j$) $\alpha_{i} < \alpha_{a_2} < \alpha_{a_3}$.
\noindent \textit{Subcase 5:} ($a_2=j$) If $\alpha_{a_3} > \alpha_{j}$, then $\alpha_{a_1} < \alpha_{j} < \alpha_{a_3}$. If $\alpha_{a_3} \leq \alpha_{j}$ with $i < a_1$, then $(\alpha_{i}, \alpha_{a_1}, \alpha_{j},\alpha_{a_3}) \simeq (1,0,2,2)$ or $\simeq (1,0,3,2)$. If $\alpha_{a_3} \leq \alpha_{j}$ with $i > a_1$, then $\alpha_{a_1} < \alpha_{i} < \alpha_{j}$.
\noindent \textit{Subcase 6:} ($a_3=j$) $\alpha_{a_1} < \alpha_{a_2} < \alpha_{j}$.
\noindent (II) \underline{$(\beta_{a_1},\beta_{a_2},\beta_{a_3},\beta_{a_4})\simeq (1,0,3,2)$.}
\noindent \textit{Subcase 1:} ($a_1=i$, $a_2=j$) $\alpha_{i} < \alpha_{j} < \alpha_{a_3}$.
\noindent \textit{Subcase 2:} ($a_3=i$, $a_4=j$) $\alpha_{a_2} < \alpha_{i} < \alpha_{j}$.
\noindent \textit{Subcase 3:} ($a_1=i$ and $j \notin \{ a_1,a_2,a_3,a_4 \}$) If $\alpha_i \geq \alpha_{a_2}$, then $(\alpha_{i}, \alpha_{a_2}, \alpha_{a_3},\alpha_{a_4})$ contains either $(1,0,3,2)$ or $(0,0,2,1)$. If $\alpha_i < \alpha_{a_2}$, then $\alpha_i < \alpha_{a_2} < \alpha_{a_3}$.
\noindent \textit{Subcase 4:} ($a_2=i$ and $j \notin \{ a_1,a_2,a_3,a_4 \}$) $(\alpha_{a_1}, \alpha_{i}, \alpha_{a_3},\alpha_{a_4})\simeq (1,0,3,2)$.
\noindent \textit{Subcase 5:} ($a_3=i$ and $j \notin \{ a_1,a_2,a_3,a_4 \}$) If $\alpha_i \geq \alpha_{a_4}$, then $(\alpha_{a_1}, \alpha_{a_2}, \alpha_{i},\alpha_{a_4})\simeq (1,0,3,2)$ or $\simeq(1,0,2,2)$. If $\alpha_{a_2} < \alpha_i < \alpha_{a_4}$, then $\alpha_{a_2} < \alpha_{i} < \alpha_{j}$. If $\alpha_{a_2} \geq \alpha_i$ and $j > a_4$, then $\alpha_i < \alpha_{a_4} < \alpha_{j}$. If $\alpha_{a_2} \geq \alpha_i$ and $j < a_4$, then $(\alpha_{a_2}, \alpha_{i}, \alpha_{j},\alpha_{a_4})\simeq (1,0,3,2)$.
\noindent \textit{Subcase 6:} ($a_4=i$ and $j \notin \{ a_1,a_2,a_3,a_4 \}$) $(\alpha_{a_1}, \alpha_{a_2}, \alpha_{a_3},\alpha_{j})\simeq (1,0,3,2)$.
\noindent \textit{Subcase 7:} ($a_1=j$ and $i \notin \{ a_1,a_2,a_3,a_4 \}$) $(\alpha_{i}, \alpha_{a_2}, \alpha_{a_3},\alpha_{a_4}) \simeq (1,0,3,2)$.
\noindent \textit{Subcase 8:} ($a_2=j$ and $i \notin \{ a_1,a_2,a_3,a_4 \}$) If $\alpha_j \leq \alpha_{a_1}$, then $(\alpha_{a_1}, \alpha_{j}, \alpha_{a_3},\alpha_{a_4})\simeq (0,0,2,1)$ or $\simeq (1,0,3,2)$. If $\alpha_{a_1} < \alpha_j < \alpha_{a_4}$, then $\alpha_{i} < \alpha_{j} < \alpha_{a_4}$. If $\alpha_{a_4} \leq \alpha_j$ and $i < a_1$, then $\alpha_i < \alpha_{a_1} < \alpha_{j}$. If $\alpha_{a_4} \leq \alpha_j$ and $i > a_1$, then $(\alpha_{a_1}, \alpha_{i}, \alpha_{a_3},\alpha_{a_4}) \simeq (1,0,3,2)$.
\noindent \textit{Subcase 9:} ($a_3=j$ and $i \notin \{ a_1,a_2,a_3,a_4 \}$) $(\alpha_{a_1}, \alpha_{a_2}, \alpha_{j},\alpha_{a_4}) \simeq (1,0,3,2)$.
\noindent \textit{Subcase 10:} ($a_4=j$ and $i \notin \{ a_1,a_2,a_3,a_4 \}$) If $\alpha_j \leq \alpha_{a_3}$, then $(\alpha_{a_1}, \alpha_{a_2}, \alpha_{a_3},\alpha_{j})\simeq (1,0,3,2)$ or $\simeq (1,0,2,2)$. If $\alpha_j > \alpha_{a_3}$, then $\alpha_{a_2} < \alpha_{a_3} < \alpha_{j}$.
We leave the cases $(\beta_{a_1},\beta_{a_2},\beta_{a_3},\beta_{a_4})\simeq (1,0,2,2), (0,0,2,1), (0,0,2,2)$
to the reader.
\end{proof}
\section{Proof of the classification theorem for multiplicity-free key polynomials}
\label{sec:mfKey}
\subsection{Kohnert diagrams and proof of necessity} \label{sec:5.1}
Assume $\alpha\not\in {\overline {\sf KM}}_n$. We will now show that $\kappa_{\alpha}$
has multiplicity.
We use \emph{Kohnert's rule} to compute $\kappa_{\alpha}$. For any $\alpha\in {\sf Comp}_n$, the \emph{skyline diagram} is
\[D(\alpha)=\{(i,j):1\leq i\leq n, 1\leq j\leq \alpha_i\}\]
(where $i$ indexes the rows from south to north, and $j$ indexes the columns, from left to right). The set ${\sf KD}(\alpha)$ of \emph{Kohnert diagrams} is recursively defined as follows. Initially $D(\alpha)\in {\sf KD}(\alpha)$. At each stage thereafter, given $D\in {\sf KD}(\alpha)$ a box $(i,j)\in D$ is \emph{movable} if it is
the rightmost box of $D$ in row $i$ and there exists $i'<i$ such that $(i',j)\not\in D$.
For any such movable box, a Kohnert diagram
$D'$ is obtained by replacing $(i,j)$ with $(i',j)$ where $i'$ is largest among all choices.
Generate a $D'$ from $D$ for every choice of movable $(i,j)$. Now ${\sf KD}(\alpha)$ is the (finite) \emph{set} (not multiset) of Kohnert diagrams obtained starting from
$D(\alpha)$.
For $D\in {\sf KD}(\alpha)$ let
\[{\sf Kohwt}(D)=\prod_{1\leq i\leq n} x_i^{\#\{j:(i,j)\in D\}}.\]
\begin{theorem}[A.~Kohnert \cite{Kohnert}]\label{thm:KD}
$\kappa_{\alpha}=\sum_{D\in {\sf KD}(\alpha)} {\sf Kohwt}(D)$.
\end{theorem}
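Kohnert's rule is effective: one closes $\{D(\alpha)\}$ under single Kohnert moves and sums the weights of the distinct diagrams obtained. The following Python sketch is our own illustrative implementation of the rule as stated above:

```python
from collections import Counter

def skyline(alpha):
    """The skyline diagram D(alpha): alpha_i boxes in row i (rows run south to north)."""
    return frozenset((i, j) for i, a in enumerate(alpha, 1) for j in range(1, a + 1))

def kohnert_moves(D):
    """All diagrams obtained from D by one Kohnert move."""
    rightmost = {}
    for (i, j) in D:
        rightmost[i] = max(rightmost.get(i, 0), j)
    out = []
    for i, j in rightmost.items():
        empties = [ip for ip in range(1, i) if (ip, j) not in D]
        if empties:                      # (i, j) is movable
            ip = max(empties)            # land in the largest available row below
            out.append((D - {(i, j)}) | {(ip, j)})
    return out

def kohnert_diagrams(alpha):
    """KD(alpha): the closure of {D(alpha)} under Kohnert moves (a set, not a multiset)."""
    seen, frontier = set(), {skyline(alpha)}
    while frontier:
        seen |= frontier
        frontier = {E for D in frontier for E in kohnert_moves(D)} - seen
    return seen

def key_polynomial(alpha):
    """kappa_alpha via Kohnert's rule, as a Counter {exponent vector: coefficient}."""
    n = len(alpha)
    kappa = Counter()
    for D in kohnert_diagrams(alpha):
        kappa[tuple(sum(1 for (i, _) in D if i == r) for r in range(1, n + 1))] += 1
    return kappa
```

For example, `key_polynomial((0, 1))` gives $x_1+x_2$, and the coefficient of $x_1x_2x_3$ in $\kappa_{(0,1,2)}$ is $2$, illustrating how a $(0,1,2)$ pattern forces multiplicity (\textit{Case 1} below).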
Given $D\in {\sf KD}(\alpha)$, call a row $i$ \emph{initial} if it is empty or
the boxes in that row are precisely $(i,1),(i,2),\ldots,(i,j)$ for some $j\in {\mathbb Z}_{\geq 1}$.
\begin{lemma}
\label{lemma:earlysteps}
Suppose $D\in {\sf KD}(\alpha)$, $i'<i$ and $j'<j$.
If $(i',j')$ and $(i,j)$ are the rightmost boxes of their rows, and all rows $i'\leq i''<i$ are initial, then $D'$, obtained by replacing $(i,j)$ with $(i',j)$, is in ${\sf KD}(\alpha)$.
\end{lemma}
\begin{proof}
The hypotheses on $i,j,i',j'$ guarantee that $(i,j)$ is movable. Since each row
$i''$ is initial, $(i,j)\to (i'',j)$ is a Kohnert move (giving a diagram $D''$) whenever
$j''<j$ (where $(i'',j'')$ is the rightmost box of row $i''$). If indeed $i''=i'$ we are done, \emph{i.e.},
$D'=D''$. Otherwise, since row $i''$ is initial in $D$, it must be that $(i'',j)$ is the rightmost box of its row in $D''$, and is therefore movable. Then since all
rows $i'<i'''<i''$ are initial in $D''$ (since they were in $D$), the claim follows by
induction on $i-i'$.
\end{proof}
\begin{corollary}
\label{cor:earlysteps}
Suppose $D=D(\alpha)$, $i'<i$, $j'<j$. Let $(i',j')$ and $(i,j)$ be the rightmost boxes
of their rows. Then $D'$ as defined in Lemma~\ref{lemma:earlysteps} is in ${\sf KD}(\alpha)$.
\end{corollary}
\begin{proof}
All rows of $D(\alpha)$ are initial, so Lemma~\ref{lemma:earlysteps} applies.
\end{proof}
In what follows, $(i_k,j_k)$ is the rightmost box of row $i_k$.
\noindent \textit{Case 1:} ($\alpha$ contains $(0,1,2)$.) Let $i_0<i_1<i_2$ be the rows of the ``$0$'',
``$1$'' and ``$2$'' respectively. By Corollary~\ref{cor:earlysteps} we can
replace $(i_2,j_2)$ with $(i_0,j_2)$ in $D(\alpha)$, resulting in $D'\in {\sf KD}(\alpha)$.
On the other hand, by Corollary~\ref{cor:earlysteps} one can obtain $E'\in {\sf KD}(\alpha)$ by replacing $(i_1,j_1)$ with $(i_0,j_1)$. Since the rows $r\geq i_1$ in $E'$ are still
initial, we can apply Lemma~\ref{lemma:earlysteps} to obtain $E''\in {\sf KD}(\alpha)$
by replacing $(i_2,j_2)$ with $(i_1,j_2)$ in $E'$. The net effect in both cases is to place an additional box in row $i_0$ and remove a box from row $i_2$. Hence ${\sf Kohwt}(D')={\sf Kohwt}(E'')$, while $D'\neq E''$ since their rows $i_1$ differ; therefore by Theorem~\ref{thm:KD},
$[{\sf Kohwt}(D')] \kappa_{\alpha}\geq 2$, as desired.
\noindent \textit{Case 2:} ($\alpha$ contains $(0,0,2,1)$.) Let $i_{0'}<i_0<i_2<i_1$ be the indices of the $(0,0,2,1)$ pattern (in the respective order). By Corollary~\ref{cor:earlysteps}, $D'$ obtained from $D(\alpha)$ by the swap $(i_1,j_1)\to (i_{0'},j_1)$ is in ${\sf KD}(\alpha)$. Since
all rows $r\geq i_0$ of $D'$ are initial, we can use Lemma~\ref{lemma:earlysteps} to
move $(i_2,j_2)\to (i_0,j_2)$ giving $D'''\in {\sf KD}(\alpha)$. On the other hand,
starting from $D(\alpha)$ we can again use Corollary~\ref{cor:earlysteps} to define
$E'\in {\sf KD}(\alpha)$ by the swap $(i_2,j_2)\to (i_{0'},j_2)$. Then Lemma~\ref{lemma:earlysteps} allows us to move $(i_1,j_1)\to (i_0,j_1)$ giving $E''\in {\sf KD}(\alpha)$. Now one can see $D'''\neq E''$ but both have the same ${\sf Kohwt}$, as
needed.
\noindent \textit{Case 3:} ($\alpha$ contains $(1,0,3,2)$.) This is the same argument as
{\sf Case 2} except that we use $i_1<i_0<i_3<i_2$ in place of $i_{0'}<i_0<i_2<i_1$,
respectively.
\noindent \textit{Case 4:} ($\alpha$ contains $(0,0,2,2)$.) Let $i_{0'}<i_0<i_{2'}<i_2$ be the
rows of $(0,0,2,2)$ in that respective order. By Corollary~\ref{cor:earlysteps} turn
$D(\alpha)$ into $D'\in {\sf KD}(\alpha)$ by the move $(i_{2'},j_{2'})\to (i_{0'},j_{2'})$.
Now since all rows $r\geq i_0$ of $D'$ are initial, we can apply the argument of
{\sf Case 1} using rows $i_0<i_{2'}<i_2$ rather than the $i_0<i_1<i_2$ of that case.
\noindent \textit{Case 5:} ($\alpha$ contains $(1,0,2,2)$.) Let $i_1<i_0<i_{2'}<i_2$ be the
rows of $(1,0,2,2)$ in that respective order. We can apply the argument of
{\sf Case 4}, where these indices play the role of $i_{0'}<i_0<i_{2'}<i_2$
from that case.
This completes the necessity argument.
\subsection{Proof of sufficiency}
Let $b \in \seg{m}{}$ for some $1\leq m \leq k+1$. Define
\begin{equation}
\label{equation:rmin}
\rmin{b}{\alpha}=\left\{\begin{array}{ll}
b & \text{if } \alpha_{i_{m-1}-1} \geq \alpha_b \\[2pt]
i_{m-1}-1 \qquad & \text{otherwise} \\
\end{array} \right.
\end{equation}
\begin{equation}
\label{equation:rmax}
\rmax{b}{\alpha}=\left\{\begin{array}{ll}
b+1 \qquad & \text{if } \alpha_{b+1} \geq \alpha_{i_m} \\[2pt]
i_m \qquad & \text{otherwise} \\
\end{array} \right.
\end{equation}
\begin{equation}
\label{equation:flex}
\flex{b}{\alpha}=\left\{\begin{array}{ll}
\alpha_{\rmax{b}{\alpha}} - \alpha_{\rmin{b}{\alpha}} - 1 \qquad & \text{if } \alpha_{\rmax{b}{\alpha}} > \alpha_{\rmin{b}{\alpha}} \\[2pt]
0 \qquad & \text{otherwise}
\end{array} \right.
\end{equation}
\begin{example}\label{exa:rminmaxflex}
Consider $\alpha = (10,5,12,9,8,8,4,2,5,1,3)$. Then $k=3$,
$i_0:=1, i_1 = 3, i_2 = 9, i_3 = 11, i_4:=12$.
Let $b=5 \in \seg{2}{}$. Then $\rmin{b}{\alpha} = i_1-1 = 2$, since $\alpha_{i_1-1} < \alpha_b$. Since $\alpha_{b+1} \geq \alpha_{i_2}$, $\rmax{b}{\alpha} = b+1 = 6$. Thus $\flex{b}{\alpha} = \alpha_{\rmax{b}{\alpha}} - \alpha_{\rmin{b}{\alpha}} - 1 = \alpha_6 - \alpha_2 - 1 = 2$.
\end{example}
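The quantities in Example~\ref{exa:rminmaxflex} can be computed mechanically. Below is a minimal sketch of (\ref{equation:rmin})--(\ref{equation:flex}); the function names are ours, and it assumes the out-of-range conventions $\alpha_0=+\infty$ and $\alpha_i=0$ for $i>n$:

```python
import math

def ascent_indices(alpha):
    # [i_0, i_1, ..., i_k, i_{k+1}] with i_0 = 1, i_{k+1} = n + 1, and
    # alpha_{i-1} < alpha_i at each interior index i (all 1-indexed)
    n = len(alpha)
    return [1] + [i for i in range(2, n + 1) if alpha[i - 2] < alpha[i - 1]] + [n + 1]

def rmin_rmax_flex(alpha, b):
    iseq = ascent_indices(alpha)
    # m with b in seg_m, i.e. i_{m-1} <= b < i_m
    m = max(t for t in range(1, len(iseq)) if iseq[t - 1] <= b)
    im_prev, im = iseq[m - 1], iseq[m]

    def a(i):  # assumption: alpha_0 = +infinity and alpha_i = 0 beyond n
        if i < 1:
            return math.inf
        return alpha[i - 1] if i <= len(alpha) else 0

    rmin = b if a(im_prev - 1) >= a(b) else im_prev - 1
    rmax = b + 1 if a(b + 1) >= a(im) else im
    flex = a(rmax) - a(rmin) - 1 if a(rmax) > a(rmin) else 0
    return rmin, rmax, flex
```

Running this on the $\alpha$ and $b=5$ of the example reproduces $\rmin{b}{\alpha}=2$, $\rmax{b}{\alpha}=6$, and $\flex{b}{\alpha}=2$.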
\begin{lemma}
\label{lemma:dominanceInterval}
Suppose $\alpha \in {\overline {\sf KM}}_n^{\geq 1}$ and $T \in {\sf qKT}(\alpha)$ with $\mu = {\sf wt}(T)$. Then
\begin{itemize}
\item[(I)] $\alpha \leq_{\sf Dom} \mu$.
\item[(II)] $\mu_1+\cdots+\mu_b \leq \alpha_1+\cdots+\alpha_b+\flex{b}{\alpha}$, for $1 \leq b \leq n$.
\end{itemize}
\end{lemma}
\begin{proof}
(I): By $\qkt{1}$, $\alpha_1+\cdots+\alpha_b \leq \mu_1+\cdots+\mu_b$, for $1\leq b\leq n$. Thus $\alpha \leq_{\sf Dom} \mu$.
(II):
Any $T \in {\sf qKT}(\alpha)$ only uses entries $\leq b$ in the first $b$ rows. Thus
\begin{equation}
\label{eqn:June22ggg}
\#\{{\sf x}:T({\sf x})\leq b, {\sf row}({\sf x})\leq b\}=\alpha_1+\ldots +\alpha_b.
\end{equation}
Define
\begin{equation}
\label{eqn:flexset}
F_{\alpha}(b) = \{{\sf x}:T({\sf x})\leq b, {\sf row}({\sf x}) > b\}.
\end{equation}
We claim that
\begin{equation}
\label{eqn:June22flexmeaning}
\#F_{\alpha}(b)\leq \flex{b}{\alpha}.
\end{equation}
Clearly (\ref{eqn:June22ggg}) combined with (\ref{eqn:June22flexmeaning}) proves (II).
It remains to prove (\ref{eqn:June22flexmeaning}). Fix $b$; thus
$b \in \seg{m}{}$ for some $1\leq m \leq k+1$. Let ${\sf y}\in F_{\alpha}(b)$.
If $b \notin \seg{m}{3}$ then $b<i_{m}-1$. Lemma \ref{lemma:seesawLemma} asserts that $b$ cannot appear in rows
strictly greater than $i_m$, \emph{i.e.}, $i_m \geq {\sf row}({\sf y})$. Therefore, in view of the definition (\ref{eqn:flexset}), we have
\[b<{\sf row}({\sf y})\leq i_m.\]
By definition of $\seg{m}{}$, $\max_{b<r\leq i_m}\{\alpha_r\}=\alpha_{{\sf rmax}_{\alpha}(b)}$. Thus
\begin{equation}
\label{eqn:June22vvv}
{\sf col}({\sf y})\leq \alpha_{{\sf rmax}_{\alpha}(b)}.
\end{equation}
If $b \in \seg{m}{3}$, then since $\alpha$ is $(0,1,2)$ avoiding,
$\max_{b<r\leq n}\{\alpha_r\}=\alpha_{i_m}=\alpha_{{\sf rmax}_{\alpha}(b)}$. Thus (\ref{eqn:June22vvv}) holds for all $b$.
We claim
\begin{equation}
\label{eqn:June22theclaim}
{\sf col}({\sf y}) \geq \alpha_{\rmin{b}{\alpha}}+2.
\end{equation}
\noindent \textit{Case 1:} ($\alpha_{i_{m-1}-1} \geq \alpha_b$)
By this case's assumption, and the definition (\ref{equation:rmin}), $\rmin{b}{\alpha}=b$. If $i_{m-1}\leq s\leq b$
then by definition of $\seg{m}{}$,
\begin{equation}
\label{eqn:June22qwe}
\alpha_s\geq \alpha_b=\alpha_{\rmin{b}{\alpha}}.
\end{equation}
If $s= i_{m-1}-1$, then (\ref{eqn:June22qwe}) holds by the assumed inequality of this case. Finally, if
$s<i_{m-1}-1$ then $\alpha_s\geq \alpha_{i_{m-1}-1}$ since $\alpha$ avoids $(0,1,2)$; hence (\ref{eqn:June22qwe})
holds again. By Lemma~\ref{lemma:rectangle}, $T(r,c)=r$ for $1\leq r\leq b$ and $1\leq c\leq \alpha_b$.
Thus by \qkt{2}, $T(r,c)>b$ whenever $r>b$ and $c\leq \alpha_b$. Lastly, if there exists $r>b$ such that $\alpha_r > \alpha_{\rmin{b}{\alpha}}$, then \qkt{4} implies $T(r,\alpha_{\rmin{b}{\alpha}} +1)>b$. Therefore (\ref{eqn:June22theclaim}) holds.
\noindent \textit{Case 2:} ($\alpha_{i_{m-1}-1} < \alpha_b$)
By this case's assumption, and the definition (\ref{equation:rmin}), $\rmin{b}{\alpha}=i_{m-1}-1$.
If $i_{m-1}\leq s<b$ then $\alpha_s\geq \alpha_{b}\geq \alpha_{i_{m-1}-1}=\alpha_{\rmin{b}{\alpha}}$, and in particular:
\begin{equation}
\label{eqn:June22cdef}
\alpha_s\geq \alpha_{i_{m-1}-1}=\alpha_{\rmin{b}{\alpha}}.
\end{equation}
If $s=i_{m-1}-1$ then (\ref{eqn:June22cdef}) holds trivially. Finally if $1\leq s<i_{m-1}-1$ then since
$\alpha$ is $(0,1,2)$-avoiding, $\alpha_s\geq \alpha_{i_{m-1}-1}$ and (\ref{eqn:June22cdef}) again holds.
Hence by Lemma~\ref{lemma:rectangle}, $T(r,c)=r$ for $1\leq r\leq b$ and $1\leq c\leq \alpha_{i_{m-1}-1}$.
Thus by \qkt{2}, $T(r,c)>b$ whenever $r>b$ and $c\leq \alpha_{i_{m-1}-1}$. If there exists $r>b$ such that $\alpha_r > \alpha_{\rmin{b}{\alpha}}=\alpha_{i_{m-1}-1}$, then \qkt{4} implies $T(r,\alpha_{\rmin{b}{\alpha}} +1)>b$. Therefore (\ref{eqn:June22theclaim}) holds.
By (\ref{eqn:June22vvv}) and (\ref{eqn:June22theclaim}),
\begin{equation}
\label{eqn:June23aaa}
\alpha_{\rmin{b}{\alpha}}+2 \leq {\sf col}({\sf y}) \leq \alpha_{\rmax{b}{\alpha}}.
\end{equation}
Therefore ${\sf y}$ can appear in $\leq \flex{b}{\alpha}$ columns of $T$. If we show at most one ${\sf y} \in F_{\alpha}(b)$ can have ${\sf col}({\sf y})=c$, for $\alpha_{\rmin{b}{\alpha}}+2 \leq c \leq \alpha_{\rmax{b}{\alpha}}$ then (\ref{eqn:June22flexmeaning}) follows.
\noindent \textit{Case 1:} ($\alpha_{i_{m-1}-1} \geq \alpha_b$)
By this case's assumption, and definition (\ref{equation:rmin}), $\rmin{b}{\alpha}=b$. If $\alpha_{\rmin{b}{\alpha}} \geq \alpha_{\rmax{b}{\alpha}}$, then by (\ref{equation:flex}) $\flex{b}{\alpha}=0$. By (\ref{eqn:June23aaa}) $\#F_{\alpha}(b)=0$. And hence, $\#F_{\alpha}(b) \leq \flex{b}{\alpha}$, proving (\ref{eqn:June22flexmeaning}).
Thus we assume $\alpha_{\rmin{b}{\alpha}} < \alpha_{\rmax{b}{\alpha}}$. If $b \in \seg{m}{3}$, then $\rmax{b}{\alpha}=i_m=b+1$. Otherwise, $\alpha_{b+1} \leq \alpha_b = \alpha_{\rmin{b}{\alpha}} < \alpha_{\rmax{b}{\alpha}}$, where the first inequality follows from the definition of $\seg{m}{}$. This implies $\rmax{b}{\alpha} \neq b+1$. Thus, by (\ref{equation:rmax}), $\rmax{b}{\alpha}=i_m$.
Let $b < s < i_m$. Then $\alpha_{i_{m-1}-1} \geq \alpha_b \geq \alpha_s$. So Lemma~\ref{lemma:rowfilledrowval}(a), applied to $s$, says that only $s$'s appear in row $s$; in particular
there are no entries $\leq b$ in row $s$. Thus if ${\sf y} \in F_{\alpha}(b)$, then
\begin{equation}
\label{eqn:June23abc}
{\sf row}({\sf y}) \geq i_m = \rmax{b}{\alpha}.
\end{equation}
We apply Lemma~\ref{lemma:uniqueSmaller} to row $i_m \in \seg{m+1}{}$, and $c \geq \alpha_{\rmin{b}{\alpha}}+2 = \alpha_{b}+2 > \alpha_{i_{m}-1} + 1$ (the final inequality follows from the definition of $\seg{m}{}$). We conclude that at most one ${\sf y} \in F_{\alpha}(b)$ can have ${\sf row}({\sf y}) \geq i_m$ and ${\sf col}({\sf y})=c$.
This, combined with (\ref{eqn:June23abc}), implies (\ref{eqn:June22flexmeaning}).
\noindent \textit{Case 2:} ($\alpha_{i_{m-1}-1} < \alpha_b$)
By this case's assumption, and (\ref{equation:rmin}), $\rmin{b}{\alpha}=i_{m-1}-1$. If $\alpha_{\rmin{b}{\alpha}} \geq \alpha_{\rmax{b}{\alpha}}$, we get $\#F_{\alpha}(b)=0=\flex{b}{\alpha}$ in exactly the same way as the previous case.
Hence we again assume $\alpha_{\rmin{b}{\alpha}} < \alpha_{\rmax{b}{\alpha}}$. If $\rmax{b}{\alpha}=b+1$, then by our assumption $\alpha_{i_{m-1}-1} = \alpha_{\rmin{b}{\alpha}} < \alpha_{\rmax{b}{\alpha}} = \alpha_{b+1}$. Thus we may apply Lemma~\ref{lemma:uniqueSmaller} to row $b+1$, and $c \geq \alpha_{\rmin{b}{\alpha}}+2 > \alpha_{i_{m-1}-1} + 1$. We conclude that at most one ${\sf y} \in F_{\alpha}(b)$ can have ${\sf col}({\sf y})=c$.
Otherwise, $\rmax{b}{\alpha}=i_m$.
\noindent
\emph{Subcase 2.1} ($\alpha_{i_{m-1}-1} \geq \alpha_{b+1}$) Let $b < s < i_m$. Then $\alpha_{i_{m-1}-1} \geq \alpha_{b+1} \geq \alpha_s$. Thus Lemma~\ref{lemma:rowfilledrowval}(a), applied to $s$,
says that only $s$'s appear in row $s$; in particular,
there are no entries $\leq b$ in row $s$. Thus if ${\sf y} \in F_{\alpha}(b)$, then
\begin{equation}
\label{eqn:June23abc2}
{\sf row}({\sf y}) \geq i_m = \rmax{b}{\alpha}.
\end{equation}
We have $\alpha_{i_{m-1}-1} = \alpha_{\rmin{b}{\alpha}} < \alpha_{\rmax{b}{\alpha}}=\alpha_{i_m}$. We apply Lemma~\ref{lemma:uniqueSmaller} to row $i_m \in \seg{m+1}{}$, and $c \geq \alpha_{\rmin{b}{\alpha}}+2 > \alpha_{i_{m-1}-1} + 1 \geq \alpha_{i_{m}-1} + 1$ (the final inequality holds since $\alpha$ avoids $(0,1,2)$). The lemma implies that at most one ${\sf y} \in F_{\alpha}(b)$ can have ${\sf row}({\sf y}) \geq i_m$ and ${\sf col}({\sf y})=c$. This, combined with (\ref{eqn:June23abc2}), implies (\ref{eqn:June22flexmeaning}).
\noindent
\emph{Subcase 2.2} ($\alpha_{i_{m-1}-1} < \alpha_{b+1}$) Apply Lemma~\ref{lemma:uniqueSmaller} to row $b+1$, and $c \geq \alpha_{\rmin{b}{\alpha}}+2 > \alpha_{i_{m-1}-1} + 1$. So, at most one ${\sf y} \in F_{\alpha}(b)$ has ${\sf row}({\sf y}) \geq b + 1$ and ${\sf col}({\sf y})=c$, proving (\ref{eqn:June22flexmeaning}).
\end{proof}
Given $\gamma \in {\sf lswap}(\alpha)$, define a \emph{left swap sequence} of $\gamma$ to be $\gamma^{(0)},\ldots,\gamma^{(t)}$ such that $\gamma^{(0)} = \alpha$, $\gamma^{(t)} = \gamma$, and $\gamma^{(i+1)}$ is a left swap of $\gamma^{(i)}$ for $0 \leq i < t$.
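For experimenting with the lemmas that follow, the set ${\sf lswap}(\alpha)$ can be enumerated by closing $\alpha$ under left swaps. The sketch below assumes a left swap exchanges a pair of entries $\alpha_i<\alpha_j$ with $i<j$, moving the larger entry leftward:

```python
def lswap_closure(alpha):
    """All compositions reachable from alpha by repeated left swaps."""
    start = tuple(alpha)
    seen = {start}
    stack = [start]
    while stack:
        a = stack.pop()
        for i in range(len(a)):
            for j in range(i + 1, len(a)):
                if a[i] < a[j]:  # a left swap moves the larger entry leftward
                    b = list(a)
                    b[i], b[j] = b[j], b[i]
                    t = tuple(b)
                    if t not in seen:
                        seen.add(t)
                        stack.append(t)
    return seen
```

Each left swap increases the word lexicographically, so the closure is computed in finitely many steps; every element of the closure is a rearrangement of $\alpha$.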
\begin{lemma}
\label{lemma:lswapSeq}
Let $\gamma,\tau \in {\sf lswap}(\alpha)$ with $\gamma >_{lex} \tau$ and $b=\min \{ i : \gamma_i > \tau_i \}$.
\begin{itemize}
\item[(I)] There exist a left swap sequence $\gamma^{(0)},\ldots,\gamma^{(t)}$ of $\gamma$ and a left swap sequence
$\tau^{(0)},\ldots,\tau^{(u)}$ of $\tau$ such that $\beta := \gamma^{(i)} = \tau^{(j)}$ for some $i,j$, with $\beta_s = \gamma_s = \tau_s$ for all $1 \leq s < b$.
\item[(II)] $\gamma,\tau\in {\sf lswap}(\beta)$.
\item[(III)] No left swap sequence from $\beta$ to $\gamma$ (or $\tau$) involves the indices $1,2,\ldots,b-1$.
\end{itemize}
\end{lemma}
\begin{proof}
(I): Given $(x_1,\ldots,x_k)\in {\mathbb Z}_{\geq 0}^k$ let $(x_1',\ldots,x_k')$ be the coordinates sorted into
weakly increasing order ($x_1'\leq x_2'\leq\ldots \leq x_k'$). Given $(x_1,\ldots,x_k), (y_1,\ldots,y_k)\in {\mathbb Z}_{\geq 0}^k$ we write
\[(x_1,\ldots,x_k)\preceq (y_1,\ldots,y_k) \text{\ if $x_i'\leq y_i'$ for $1\leq i\leq k$.}\]
Recall (see, e.g., \cite[Proposition~2.1.11]{Manivel}) that if $u,v\in {\mathfrak S}_n$ then
\begin{equation}
\label{eqn:June24bruhat}
u\leq v \iff (u(1),u(2),\ldots,u(k))\preceq
(v(1),v(2),\ldots,v(k)) \text{\ for $1\leq k\leq n$.}
\end{equation}
In particular,
\begin{equation}
\label{eqn:June24people}
u\leq v \Rightarrow u(1)\leq v(1).
\end{equation}
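Criterion (\ref{eqn:June24bruhat}) is also convenient computationally; a small sketch (ours, with permutations in one-line notation as tuples):

```python
def bruhat_leq(u, v):
    """Tableau criterion: sorted k-prefixes of u entrywise at most those of v."""
    for k in range(1, len(u) + 1):
        if any(x > y for x, y in zip(sorted(u[:k]), sorted(v[:k]))):
            return False
    return True
```

For example, in ${\mathfrak S}_3$ this confirms $132\leq 312$ while $213$ and $132$ are incomparable.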
\noindent
\emph{Special Case:} ($\alpha\in {\mathfrak S}_n$) Hence $\alpha\leq \gamma,\tau$ (Bruhat order). Induct on $n$, the base $n=1$ being trivial. Suppose $n>1$. If $b=1$ (\emph{i.e.}, $\gamma(1)\neq \tau(1)$), $\beta=\alpha$ satisfies
(I). Thus assume $b\geq 2$, and let $T=\gamma(1)=\tau(1)$. Thus, by (\ref{eqn:June24people}),
$\alpha(1)\leq T$. Let $\alpha(j)=T$. There exists a sequence of left swaps showing
\[\alpha\leq \alpha':=T \ \alpha(2) \ \alpha(3) \ldots \alpha(j-1)\ \alpha(j+1) \ldots \alpha(n).\]
It is straightforward from (\ref{eqn:June24bruhat}) that $\alpha'\leq \gamma,\tau$. Let
$\overline{\alpha'}, \overline{\gamma}, \overline{\tau}$ be the lists of the rightmost $n-1$ entries of $\alpha',\gamma,\tau$ (respectively); these are permutations of $[n]-\{T\}$, identified with elements of ${\mathfrak S}_{n-1}$. By induction, obtain $\overline{\beta}\in {\mathfrak S}_{n-1}$
satisfying (I) with respect to $\overline{\alpha'}, \overline{\gamma}, \overline{\tau}$. Then
\[\beta:=T \overline{\beta}(1) \overline{\beta}(2)\ldots \overline{\beta}(n-1)\in {\sf lswap}(\alpha)\]
satisfies (I) with respect to $\alpha,\gamma,\tau$, as desired (the two swap sequences to go from $\alpha\to \gamma$
and $\alpha\to \tau$ being the ``concatenation'' of the sequence from $\alpha\to \alpha'$ with the two sequences
that send $\overline{\alpha'}\to \overline{\gamma}$ and $\overline{\alpha'} \to \overline\tau$, interpreted in the obvious manner).
\noindent
\emph{General Case:} ($\alpha\in {\sf Comp}_n$) Pick any
$\widehat{\alpha}\in {\mathfrak S}_n$ with the property that
\begin{equation}
\label{eqn:June24swap}
\widehat{\alpha}(i)<\widehat{\alpha}(j)\iff \alpha(i)\leq \alpha(j), \text{ \ for $i<j$.}
\end{equation}
Define $\widetilde\gamma\in {\mathfrak S}_n$ by applying the same sequence (in terms of positions) of left swaps to $\widehat{\alpha}$ used to
obtain $\gamma$ from $\alpha$. Similarly, define $\widetilde\tau$. By the definition of left swaps and (\ref{eqn:June24swap}),
\[\widehat{\alpha}\leq \widetilde\gamma,\widetilde\tau
\]
(if $u,v\in {\mathfrak S}_n$ and $v$ is obtained from $u$ by a left swap, then $u\leq v$; this follows from, \emph{e.g.}, (\ref{eqn:June24bruhat})). Call two labels $i,j\in [n]$ \emph{$\alpha$-equivalent} ($i\equiv j$) if $\alpha(i)=\alpha(j)$. Now apply left swaps to $\widetilde\gamma$ so that all equivalent labels are
in decreasing order; call the result $\widehat\gamma$. Similarly one defines $\widehat\tau$.
\begin{example} If $\alpha=(2,2,4,2,2,4)$ then $\widehat{\alpha}=125346$. Consider $\alpha=(2,2,4,{\color{blue} 2},2,{\color{blue} 4})\to ({\color{blue} 2},2,4,{\color{blue} 4},2,2)\to (4,2,4,2,2,2)=\gamma$.
Then $\widetilde\gamma=625143$. Here $\{1,2,3,4\}$ and $\{5,6\}$ are the two $\alpha$-equivalence classes.
So $\widehat\gamma=645321$.\qed
\end{example}
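The passage $\alpha\mapsto \widehat{\alpha}$ of (\ref{eqn:June24swap}) is standardization with ties broken left to right, as in the example above; a sketch:

```python
def standardize(alpha):
    """Replace entries by 1..n; equal entries are ranked left to right."""
    order = sorted(range(len(alpha)), key=lambda i: (alpha[i], i))
    w = [0] * len(alpha)
    for rank, i in enumerate(order, start=1):
        w[i] = rank
    return tuple(w)
```

On $\alpha=(2,2,4,2,2,4)$ this returns $(1,2,5,3,4,6)$, i.e. $\widehat{\alpha}=125346$ as computed in the example.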
By simple considerations about Bruhat order,
\begin{equation}
\label{eqn:July11ggg}
\widehat{\alpha}\leq \widetilde\gamma\leq \widehat\gamma, \text{ \ and
$\widehat{\alpha}\leq \widetilde\tau\leq \widehat\tau$}.
\end{equation}
By construction,
\begin{equation}
\label{eqn:June24sss}
\widehat\gamma>_{lex} \widehat\tau
\end{equation}
and
\begin{equation}
\label{eqn:June24ttt}
\widehat\gamma(i)=\widehat\tau(i), \text{ \ for $1\leq i < b$.}
\end{equation}
In view of (\ref{eqn:July11ggg}), (\ref{eqn:June24sss}) and (\ref{eqn:June24ttt}) we can apply the Special Case
to construct $\widehat\beta\in {\mathfrak S}_n$, and left swap sequences, that satisfy (I) with respect to $\widehat\alpha, \widehat\gamma,\widehat\tau$.
Define $\beta$ from $\widehat\beta$ by replacing the label $i$ with $\alpha(i)$. To define the swap sequence from
$\alpha$ to $\beta$ we apply the same left swaps (\emph{i.e.}, interchange the same positions) as the sequence
from $\widehat{\alpha}$ to $\widehat{\beta}$, with the exception that we skip left swaps of the underlying permutations that involve equivalent labels.
Similarly, one defines the continuation of this sequence to $\gamma$, and separately, to $\tau$. The claim
follows.
(II): This is trivial from (I).
(III): Suppose such a left swap exists, say with $i\in [1,b-1]$. We may assume $i$ is the minimal such
index. Then $\beta_i=\gamma_i=\tau_i$ is replaced with a strictly larger number, and it follows that
$\gamma_i>\beta_i$ or $\tau_i>\beta_i$, a contradiction.
\end{proof}
\begin{lemma}
\label{lemma:minLswap}
Assume $\alpha\in {\overline {\sf KM}}_n^{\geq 1}$.
Let $\gamma,\tau \in {\sf lswap}(\alpha)$ with $\gamma >_{lex} \tau$ and $b=\min \{ i : \gamma_i > \tau_i \}$. Let $\beta$ be as in Lemma~\ref{lemma:lswapSeq}, and suppose $b \in \seg{m}{}(\beta)$. There exists a (minimum) index $r > b$ such that
\begin{equation}
\label{eqn:June25zzz}
\gamma_b = \beta_r > \beta_b.
\end{equation}
Let $i_1(\beta)<i_2(\beta)<\ldots<i_{m}(\beta)<\ldots$ be the indices such that $\beta_{i_s(\beta)-1}<\beta_{i_{s}(\beta)}$ for each $s$. The indices $b,r$ satisfy one of the following.
\begin{itemize}
\item[(A)] $b \in \seg{m}{2}(\beta)$ and $r \geq i_{m}(\beta)$,
\item[(B)] $b\in \seg{m}{3}(\beta)$ and $r \in \seg{m+1}{1}(\beta)$,
\item[(C)] $b\in \seg{m}{3}(\beta)$ and $r \geq i_{m+1}(\beta)$.
\end{itemize}
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:lswapSeq}, $\gamma, \tau \in {\sf lswap}(\beta)$. Hence, by the reasoning in the proof of
that lemma (the characterization of Bruhat order) and the definition of $b$,
\begin{equation}
\label{eqn:June25fgh}
\gamma_b, \tau_b \geq \beta_b.
\end{equation}
If $\gamma_b = \beta_b$, then $\tau_b \geq \gamma_b$ (by \eqref{eqn:June25fgh}). This contradicts the definition of $b$, hence $\gamma_b > \beta_b$. This, combined with the definition of left swaps, implies \eqref{eqn:June25zzz}.
\noindent
\emph{Case 1:} ($b \in \seg{m}{1}(\beta)$) By the definition of $\seg{m}{1}(\beta)$ (and a simple induction), $\beta_b \geq \beta_s$ for all $b<s$. This contradicts \eqref{eqn:June25zzz}. That is, this case cannot actually occur.
\noindent
\emph{Case 2:} ($b \in \seg{m}{2}(\beta)$) By definition of $\seg{m}{}$, $r\not\in \seg{m}{}$. Hence
$r\geq i_m(\beta)\in \seg{m+1}{}$; this is (A).
\noindent
\emph{Case 3:} ($b \in \seg{m}{3}(\beta)$)
By definition, $r\not\in \seg{m+1}{2}(\beta)$. If $r\in \seg{m+1}{3}(\beta)$
then
\[(\beta_{i_{m}(\beta)-1}, \beta_{i_{m+1}(\beta)-1},\beta_{i_{m+1}(\beta)})\simeq (0,1,2),\]
a contradiction.
Thus only (B) and (C) are possible, as desired.
\end{proof}
\begin{lemma}
\label{lemma:seg1big}
If $\alpha\in {\overline {\sf KM}}_n^{\geq 1}$, $i\in \seg{m}{1}$ ($1<m\leq k+1$), and
$\alpha_{i_{m-1}-1}<\alpha_i$ then $\alpha_i\geq \alpha_j$
for all $i\leq j$.
\end{lemma}
\begin{proof}
Say $i<j$ but $\alpha_i<\alpha_j$. Then $(\alpha_{i_{m-1}-1},\alpha_i,\alpha_j)\simeq (0,1,2)$,
contradicting $\alpha\in {\overline {\sf KM}}_n^{\geq 1}$.
\end{proof}
\begin{proposition}
\label{theorem:dominanceIntervalDisjoint}
Let $\alpha\in {\overline {\sf KM}}_n^{\geq 1}$.
If $\gamma,\tau \in {\sf lswap}(\alpha)$ and $\gamma >_{lex} \tau$, then there exists $z\in [1,n]$ such that
\begin{equation}
\label{eqn:June29thegoal}
\tau_1+\cdots+\tau_z+\flex{z}{\tau} < \gamma_1+\cdots+\gamma_z.
\end{equation}
\end{proposition}
\begin{proof}
Our analysis is based on the cases (A), (B), (C) from Lemma \ref{lemma:minLswap}, as well as the notation from that lemma.
By the (proof of) Proposition~\ref{prop:exhaustiveQKMultFreeCases},
\begin{equation}
\label{eqn:June29yyy}
\beta\in {\overline {\sf KM}}_n^{\geq 1}.
\end{equation}
\noindent
\emph{Case (A):} ($b \in \seg{m}{2}(\beta)$ and $r \geq i_m(\beta)$) Let $t > b$ be such that $\beta_t > \beta_b$. By the definition of $\seg{m}{}$, $t \geq i_m(\beta)$. If $t > i_m(\beta)$, then by (\ref{eqn:June29yyy}) we can
apply Lemma \ref{lemma:1032avoidingConsequence} to $\beta$, $m$, $r = t$, and $s = b$. The
lemma concludes that $\beta_{i_m(\beta)}=\beta_t$. Otherwise, $t=i_m(\beta)$, and $\beta_t = \beta_{i_m(\beta)}$. Thus,
\begin{equation}
\label{eqn:June29bbb}
\beta_t = \beta_{i_m(\beta)} \text{ \ for all $t>b$ such that $\beta_t>\beta_b$.}
\end{equation}
By Lemma~\ref{lemma:seg1big},
\begin{equation}
\label{eqn:June29ccc}
\tau_b < \gamma_b = \beta_{r} \leq \beta_{i_m(\beta)}.
\end{equation}
Consider a sequence of left swaps transforming $\beta$ to $\tau$. None of these
left swaps involve the index $b$: Otherwise, by Lemma~\ref{lemma:lswapSeq}(III),
$b$ is the left index of such a swap, and some $t>b$ such that $\beta_t>\beta_b$ is the right index.
This contradicts (\ref{eqn:June29bbb}) and (\ref{eqn:June29ccc}) combined. Thus
\begin{equation}
\label{eqn:June26ttt}
\tau_b = \beta_b.
\end{equation}
By Lemma~\ref{lemma:seg1big}, $\beta_{i_m(\beta)} \geq \beta_v$ for all $v > b$.
That is $\max\{\beta_v:v>b\}\leq \beta_{i_m(\beta)}$. However, notice that
$\{\beta_v:v>b\}=\{\tau_v:v>b\}$ since $\{\beta_v:v\leq b\}=\{\tau_v:v\leq b\}$. Therefore,
\begin{equation}
\label{June27rmax1}
\tau_{\rmax{b}{\tau}} \leq \beta_{i_m(\beta)}.
\end{equation}
The assumption $b \in \seg{m}{2}(\beta)$ means $\beta_b < \beta_{i_{m-1}-1}$. Then the definition of $\beta$ and \eqref{eqn:June26ttt} imply $\tau_b < \tau_{i_{m-1}-1}$ (since $b>i_{m-1}-1\in \seg{m-1}{3}$). Hence, by \eqref{equation:rmin}, $\rmin{b}{\tau} = b$ and
\begin{equation}
\label{June27rmin1}
\tau_{\rmin{b}{\tau}} = \tau_b.
\end{equation}
\noindent
\emph{Case (A).1:} ($\tau_{\rmin{b}{\tau}} \geq \tau_{\rmax{b}{\tau}}$) By \eqref{equation:flex}, $\flex{b}{\tau} = 0$. Then the definition of $b$ implies
\[\tau_1 + \cdots + \tau_b + \flex{b}{\tau} = \tau_1 + \cdots + \tau_b < \gamma_1 + \cdots + \gamma_b,\]
establishing (\ref{eqn:June29thegoal}).
\noindent
\emph{Case (A).2:} ($\tau_{\rmin{b}{\tau}} < \tau_{\rmax{b}{\tau}}$) Thus,
$\flex{b}{\tau} = \tau_{\rmax{b}{\tau}} - \tau_{\rmin{b}{\tau}} - 1$, and
\begin{equation}
\label{eqn:June26bigeqn}
\begin{array}{ll}
\tau_1 + \cdots + \tau_b + \flex{b}{\tau} & = \beta_1 + \cdots + \beta_b + \tau_{\rmax{b}{\tau}} - \tau_{\rmin{b}{\tau}} - 1 \\
& \leq \beta_1 + \cdots + \beta_b + \beta_{i_m(\beta)} - \beta_b - 1 \\
& = \beta_1 + \cdots + \beta_{b-1} + \beta_{i_m(\beta)} - 1,
\end{array}
\end{equation}
where the first equality follows from \eqref{eqn:June26ttt}, and the inequality is by \eqref{June27rmax1} and \eqref{June27rmin1}.
We have two subcases:
\noindent
\emph{Case (A).2.1} ($r=i_m(\beta)$) Then \eqref{eqn:June25zzz} and \eqref{eqn:June26bigeqn} imply \[\tau_1 + \cdots + \tau_b + \flex{b}{\tau} < \beta_1 + \cdots + \beta_{b-1} + \beta_{i_m(\beta)} = \gamma_1 + \cdots + \gamma_b,\]
which proves (\ref{eqn:June29thegoal}).
\noindent
\emph{Case (A).2.2} ($r > i_m(\beta)$) By (\ref{eqn:June25zzz}),
$\beta_b<\beta_r$. Combining this with (\ref{eqn:June29yyy}), we may apply Lemma \ref{lemma:1032avoidingConsequence} to $\beta$ with $r,m$ being as above, and $s=b$, to conclude that $\beta_b=\beta_{i_m(\beta)-1}$ and $\beta_{i_m(\beta)} = \beta_r = \beta_b + 1$. Applying this to \eqref{eqn:June26bigeqn} yields
\begin{align*}
\tau_1 + \cdots + \tau_b + \flex{b}{\tau} & \leq \beta_1 + \cdots + \beta_{b-1} + \beta_{i_m(\beta)} - 1 \\
& = \beta_1 + \cdots + \beta_{b-1} + (\beta_b + 1) - 1 \\
& = \beta_1 + \cdots + \beta_{b} \\
& < \gamma_1 + \cdots + \gamma_{b},
\end{align*}
where the final inequality follows from \eqref{eqn:June25zzz} and the definition of $\beta$; this again proves
(\ref{eqn:June29thegoal}).
\noindent
\emph{Case (B):} ($b\in\seg{m}{3}(\beta)$ (\emph{i.e.,} $b=i_m(\beta)-1$) and $r \in \seg{m+1}{1}(\beta)$) If $b < s \leq r$, then $s\in \seg{m+1}{1}(\beta)$ and thus $\beta_s \geq \beta_r$. For such an $s$,
if a left swap involving indices $b$ and $s$ occurred in the transformation from $\beta$ to $\tau$, then it follows from Lemma~\ref{lemma:lswapSeq}(III) that
\begin{equation}
\label{eqn:June29copy1}
\tau_b \geq \beta_s \geq \beta_r = \gamma_b.
\end{equation}
This contradicts $b$'s definition. So such a left swap cannot exist. This, with Lemma~\ref{lemma:lswapSeq}(III)
means $s$ is not the right index of a left swap in the $\beta$ to $\tau$ transformation. Notice
\[\beta_{i_{m}(\beta)-1}=\beta_b<\beta_r\leq \beta_s,\]
where the middle inequality is by (\ref{eqn:June25zzz}). Thus, since $s\in\seg{m+1}{1}(\beta)$, by applying Lemma~\ref{lemma:seg1big} to $\beta$ with $i=s$ we conclude that
\begin{equation}
\label{eqn:June30mnb1}
\beta_s \geq \beta_t \text{ for all $t > s$.}
\end{equation}
Hence $s$ cannot be the leftmost index of a left swap that transforms
$\beta$ to $\tau$.
All of the above analysis, together with the definition of $\beta$, shows that
\begin{equation}
\label{eqn:June26yyy}
\beta_s = \tau_s \text{ for $1 \leq s \leq r$ with $s \neq b$.}
\end{equation}
A similar argument proves
\begin{equation}
\label{eqn:June26xxx}
\beta_s = \gamma_s \text{ for $1 \leq s < r$ with $s \neq b$.}
\end{equation}
More precisely, if $b<s<r$ then $s\in \seg{m+1}{1}(\beta)$ and $\beta_s>\beta_r$ (by the minimality of $r$). From
this, we get a variation of (\ref{eqn:June29copy1}) which says $\gamma_b \geq \beta_s > \beta_r = \gamma_b$ (a contradiction). The remainder of the argument is the same.
Further, we claim:
\begin{equation}
\label{eqn:June27yyy}
\beta_b \leq \tau_b < \gamma_b=\beta_r\leq \beta_s= \tau_s \text{ for $b < s \leq r$.}
\end{equation}
The first inequality is by Lemma~\ref{lemma:lswapSeq}(III). The next inequality is from the definition of $b$.
The equality thereafter is (\ref{eqn:June25zzz}). The next inequality is since $s\leq r$ and $s,r\in \seg{m+1}{1}(\beta)$.
The remaining equality is (\ref{eqn:June26yyy}).
Now, \eqref{eqn:June27yyy} says, in particular, that $\tau_{b}<\tau_{b+1}$. Since $\tau\in {\overline {\sf KM}}_n^{\geq 1}$
(by the proof of Proposition~\ref{prop:exhaustiveQKMultFreeCases}), it avoids $(0,1,2)$. Hence $\tau_{b-1}\geq \tau_b$.
This combined with \eqref{eqn:June26yyy} implies
\begin{equation}
\label{eqn:June30i_m}
b \in \seg{m}{3}(\tau) \text{ \ and $i_m(\beta)=i_m(\tau)=b+1$.}
\end{equation}
If $r = i_m(\beta)$, then $r-1 = b$ (by this case's assumption). Since $b \in \seg{m}{3}(\tau)$, we have $\beta_{b} \leq \beta_{i_{m-1}(\beta)-1}$ (otherwise $(\beta_{i_{m-1}(\beta)-1}, \beta_{b},\beta_{b+1})\simeq (0,1,2)$). Hence
$\rmin{r-1}{\tau} = \rmin{b}{\tau} = b$. Otherwise, if $r > i_m(\beta)$, then \eqref{eqn:June26yyy} and \eqref{eqn:June27yyy} imply $\rmin{r-1}{\tau} = b$. Hence,
\begin{equation}
\label{eqn:June27dsa}
\tau_{\rmin{r-1}{\tau}} = \tau_{b}
\end{equation}
If $r = i_m(\beta)$, then $r-1 = b$ (by this case's assumption). By definition \eqref{equation:rmax} and \eqref{eqn:June30i_m}, $\rmax{b}{\tau}=i_m(\tau)=i_m(\beta)=b+1=r$. Otherwise, $r > i_m(\beta)$. Now, by \eqref{eqn:June30mnb1}, $\beta_r \geq \beta_t$ for all $t > r$.
This, combined with \eqref{eqn:June26yyy} and \eqref{eqn:June27yyy}, implies $\tau_r \geq \tau_t$ for all $t > r$.
This implies, by definition \eqref{equation:rmax}, that $\rmax{r-1}{\tau}=r$. This, combined with \eqref{eqn:June26yyy}, implies
\begin{equation}
\label{eqn:June27ewq}
\tau_{\rmax{r-1}{\tau}} = \tau_{r} = \beta_r
\end{equation}
\noindent
\emph{Case (B).1:} ($\tau_{\rmin{r-1}{\tau}} \geq \tau_{\rmax{r-1}{\tau}}$) By \eqref{equation:flex}, $\flex{r-1}{\tau} = 0$. Then \eqref{eqn:June26yyy}, \eqref{eqn:June26xxx}, and the definition of $b$, imply
\[\tau_1 + \cdots + \tau_{r-1} + \flex{r-1}{\tau} = \tau_1 + \cdots + \tau_{r-1} < \gamma_1 + \cdots + \gamma_{r-1}.\]
\noindent
\emph{Case (B).2:} ($\tau_{\rmin{r-1}{\tau}} < \tau_{\rmax{r-1}{\tau}}$) Here,
$\flex{r-1}{\tau} = \tau_{\rmax{r-1}{\tau}} - \tau_{\rmin{r-1}{\tau}} - 1$, and \eqref{eqn:June26yyy}, \eqref{eqn:June27dsa}, \eqref{eqn:June27ewq}, \eqref{eqn:June26xxx}, and \eqref{eqn:June25zzz} (in that order) give
\vspace{-0.25in}\begin{center}
$\begin{array}{ll}
\tau_1 \!+\! \cdots \!+\! \tau_{r-1} \!+\! \flex{r-1}{\tau}\!\!\!\! & = \beta_1 + \cdots + \beta_{b-1} + \tau_b + \beta_{b+1} + \cdots + \beta_{r-1} + \tau_{\rmax{r-1}{\tau}} \!-\! \tau_{\rmin{r-1}{\tau}} \!-\! 1 \\
& = \beta_1 + \cdots + \beta_{b-1} + \tau_b + \beta_{b+1} + \cdots + \beta_{r-1} + \beta_r - \tau_b - 1 \\
& = \beta_1 + \cdots + \beta_{b-1} + \beta_r + \beta_{b+1} + \cdots + \beta_{r-1} - 1 \\
& < \gamma_1 + \cdots + \gamma_{r-1}.
\end{array}$
\end{center}
\noindent
\emph{Case (C):} ($b\in \seg{m}{3}(\beta)$ (\emph{i.e.,} $b=i_m(\beta)-1$) and $r \geq i_{m+1}(\beta)$) Define
\begin{equation}
\label{eqn:June28ooo}
z = \max \{ v : v \in \seg{m+1}{}(\beta), \beta_v \geq \beta_{i_{m+1}(\beta)}, \text{ and } \beta_v > \beta_{i_{m}(\beta) - 1} \}.
\end{equation}
Note that $z\in [1,n]$ since the set is nonempty: it always contains $i_{m}(\beta)$. Clearly, $z < r$ since
$r\geq i_{m+1}(\beta) \in \seg{m+2}{}$. Let $b < s \leq z < r$. Then $\beta_s \neq \beta_r$ by the minimality of $r$. By definition of $\seg{m+1}{}$ and of $z$, $\beta_s \geq \beta_z > \beta_{i_{m}(\beta) - 1}$. Thus if $\beta_s < \beta_r$, then $(\beta_{i_{m}(\beta) - 1}, \beta_s, \beta_r) \simeq (0,1,2)$. Hence $\beta_s > \beta_r$. For such an $s$,
if a left swap involving indices $b$ and $s$ occurred in the transformation from $\beta$ to $\tau$, then it follows from Lemma~\ref{lemma:lswapSeq}(III) that
\begin{equation}
\label{eqn:June29copy}
\tau_b \geq \beta_s > \beta_r = \gamma_b.
\end{equation}
This contradicts $b$'s definition. Thus such a left swap cannot exist. This, with Lemma~\ref{lemma:lswapSeq}(III)
means $s$ is not the right index of any left swap in the transformation from
$\beta$ to $\tau$. Notice
\[\beta_{i_{m}(\beta)-1}=\beta_b<\beta_r < \beta_s,\]
where the middle inequality is by (\ref{eqn:June25zzz}). Thus, since $\beta_s\geq \beta_z>\beta_{i_m(\beta)-1}$ and
$s\in\seg{m+1}{}(\beta)$, we may apply Lemma~\ref{lemma:seg1big} to $\beta$ with $i=s$ to conclude that
\begin{equation}
\label{eqn:June30mnb}
\beta_s \geq \beta_t \text{ for all $t > s$.}
\end{equation}
Hence $s$ cannot be the leftmost index of a left swap that transforms
$\beta$ to $\tau$.
All of the above analysis, together with the definition of $\beta$, shows that
\begin{equation}
\label{eqn:June28yyy}
\beta_s = \tau_s \text{ for $1 \leq s \leq z$ with $s \neq b$.}
\end{equation}
The same argument, replacing $\tau$ with $\gamma$ throughout, proves
\begin{equation}
\label{eqn:June28xxx}
\beta_s = \gamma_s \text{ for $1 \leq s \leq z$ with $s \neq b$.}
\end{equation}
Further, we have the following inequality
\begin{equation}
\label{eqn:June28sas}
\beta_b \leq \tau_b < \gamma_b= \beta_r < \beta_s = \tau_s \text{ for $b < s \leq z < r$.}
\end{equation}
The first inequality is by Lemma~\ref{lemma:lswapSeq}(III). The next inequality is from the definition of $b$.
The equality thereafter is (\ref{eqn:June25zzz}). The next inequality is by the minimality of $r$.
The remaining equality is (\ref{eqn:June28yyy}).
Then \eqref{eqn:June28sas} implies $\tau_{i_m(\beta)-1}=\tau_{b} < \tau_z$. This, with \eqref{eqn:June28yyy} and \eqref{eqn:June28sas}, implies $\rmin{z}{\tau} = b$. Thus
\begin{equation}
\label{eqn:June28rmin1}
\tau_{\rmin{z}{\tau}} = \tau_{b}.
\end{equation}
Let $z < t < i_{m+1}(\beta)$. First suppose $\beta_{i_{m+1}(\beta)} \leq \beta_t$. Then the maximality of $z$ implies $\beta_t \leq \beta_{i_{m}(\beta)-1}$. This implies $\beta_r \leq \beta_{i_{m+1}(\beta)} \leq \beta_t \leq \beta_{i_{m}(\beta)-1} = \beta_{b}$ (where the first inequality follows from Lemma~\ref{lemma:seg1big} applied to $i_{m+1}(\beta)$). This inequality contradicts \eqref{eqn:June25zzz}. Hence
\begin{equation}
\label{eqn:June30qwerty}
\beta_{i_{m+1}(\beta)} > \beta_t.
\end{equation}
Lemma~\ref{lemma:seg1big} applied to $i_{m+1}(\beta)$, implies $\beta_{i_{m+1}(\beta)} \geq \beta_v$ for all $v \geq i_{m+1}(\beta)$. Combining this with (\ref{eqn:June30qwerty}) shows that in fact
\begin{equation}
\label{eqn:June30Dvorak}
\beta_{i_{m+1}(\beta)} \geq \beta_v, \text{\ for all $v >z$}.
\end{equation}
Now, by \eqref{eqn:June28yyy} and (\ref{eqn:June30Dvorak}), $\beta_{i_{m+1}(\beta)} \geq \tau_v, \text{\ for all $v >z$}$.
Since $\rmax{z}{\tau}>z$ (by definition),
\begin{equation}
\label{eqn:June28rmax1}
\tau_{\rmax{z}{\tau}} \leq \beta_{i_{m+1}(\beta)}.
\end{equation}
\noindent
\emph{Case (C).1:} ($\tau_{\rmin{z}{\tau}} \geq \tau_{\rmax{z}{\tau}}$) By \eqref{equation:flex}, $\flex{z}{\tau} \!=\! 0$. Then \eqref{eqn:June28yyy}, \eqref{eqn:June28xxx}, and $b$'s definition imply
\[\tau_1 + \cdots + \tau_{z} + \flex{z}{\tau} = \tau_1 + \cdots + \tau_{z} < \gamma_1 + \cdots + \gamma_{z}.\]
\noindent
\emph{Case (C).2:} ($\tau_{\rmin{z}{\tau}} < \tau_{\rmax{z}{\tau}}$)
Now, $\flex{z}{\tau} = \tau_{\rmax{z}{\tau}} - \tau_{\rmin{z}{\tau}} - 1$, and \eqref{eqn:June28yyy}, \eqref{eqn:June28rmin1}, \eqref{eqn:June28rmax1} shows
\begin{equation}\label{eqn:June28bigeqn2}
\begin{array}{ll}
\tau_1 \!+\! \cdots \!+\! \tau_{z} \!+\! \flex{z}{\tau}\!\!\!\!\! & = \beta_1 + \cdots + \beta_{b-1} + \tau_b + \beta_{b+1} + \cdots + \beta_{z} + \tau_{\rmax{z}{\tau}} \!-\! \tau_{\rmin{z}{\tau}} - 1 \\
& \leq \beta_1 + \cdots + \beta_{b-1} + \tau_b + \beta_{b+1} + \cdots + \beta_{z} + \beta_{i_{m+1}(\beta)} - \tau_b - 1 \\
& = \beta_1 + \cdots + \beta_{b-1} + \beta_{i_{m+1}(\beta)} + \beta_{b+1} + \cdots + \beta_{z} - 1 \\
\end{array}
\end{equation}
We have two subcases:
\noindent
\emph{Case (C).2.1:} ($r=i_{m+1}(\beta)$) Then \eqref{eqn:June25zzz}, \eqref{eqn:June28xxx} and \eqref{eqn:June28bigeqn2} give \[\tau_1 + \cdots + \tau_z + \flex{z}{\tau} < \beta_1 + \cdots + \beta_{b-1} + \beta_{i_{m+1}(\beta)} + \beta_{b+1} + \cdots + \beta_{z} = \gamma_1 + \cdots + \gamma_z.\]
\noindent
\emph{Case (C).2.2:} ($r > i_{m+1}(\beta)$) By (\ref{eqn:June29yyy}), we may apply Lemma \ref{lemma:1032avoidingConsequence} to $\beta$ with $r,m+1$ being as above, and $s=b$.
Since (\ref{eqn:June25zzz}) says $\beta_b<\beta_r$, said lemma shows
$\beta_{i_{m+1}(\beta)} = \beta_r = \beta_b + 1$. Applying this to \eqref{eqn:June28bigeqn2} yields
\begin{align*}
\tau_1 + \cdots + \tau_z + \flex{z}{\tau} & \leq \beta_1 + \cdots + \beta_{b-1} + \beta_{i_{m+1}(\beta)} + \beta_{b+1} + \cdots + \beta_{z} - 1 \\
& = \beta_1 + \cdots + \beta_z \\
& < \gamma_1 + \cdots + \gamma_{z},
\end{align*}
where the final inequality follows from \eqref{eqn:June25zzz} and \eqref{eqn:June28xxx}.
\end{proof}
\begin{corollary}
\label{corollary:quasikeyDisjoint}
Let $\gamma,\tau \in {\sf lswap}(\alpha)$ with $\gamma >_{lex} \tau$. If $T \in {\sf qKT}(\gamma)$, $S \in {\sf qKT}(\tau)$, then ${\sf wt}(T) \neq {\sf wt}(S)$.
\end{corollary}
\begin{proof}
Suppose not, and $\mu = {\sf wt}(T) = {\sf wt}(S)$. Then the two parts of
Lemma \ref{lemma:dominanceInterval} give
$$\gamma_1+\cdots+\gamma_b \leq \mu_1+\cdots+\mu_b \leq \tau_1+\cdots+\tau_b+\flex{b}{\tau}$$ for all $b\in [1,n]$. This contradicts Proposition \ref{theorem:dominanceIntervalDisjoint}.
\end{proof}
\noindent
\emph{Conclusion of the proof of sufficiency}:
Define $\alpha'\in {\sf Comp}_n$ by $\alpha'_i = \alpha_i + 1$. Since $\alpha\in
{\overline {\sf KM}}_n$, we have $\alpha'\in
{\overline {\sf KM}}_n^{\geq 1}$. Observe,
$\kappa_{\alpha'}=x_1 \cdots x_n \cdot \kappa_{\alpha}$. Therefore
$\kappa_{\alpha}$ is multiplicity-free if and only if $\kappa_{\alpha'}$ is multiplicity-free. Now, $\kappa_{\alpha'}$ is the sum of $\mathfrak{D}_\beta$ for $\beta \in {\sf Qlswap}(\alpha')={\sf lswap}(\alpha')$.
Each of these ${\mathfrak D}_{\beta}$'s is multiplicity-free by Theorem~\ref{thm:qksummary}.
Their sum is multiplicity-free by Corollary \ref{corollary:quasikeyDisjoint}. Hence $\kappa_{\alpha}$ is multiplicity-free.\qed
\section*{Acknowledgements}
We thank Mahir Can for helpful communications. We thank David Brewster
and Husnain Raza for writing computer code (as part of their
NSF RTG funded ICLUE program) that was useful for checking parts of the proof.
We used the Maple packages ACE and Coxeter/Weyl in our investigations.
AY was partially supported by a Simons Collaboration Grant, and an NSF RTG grant.
RH was partially supported by an AMS-Simons Travel Grant.
\section{Introduction}
A prevalent idea in modern sensory neuroscience is that early sensory systems invert generative models of the environment to infer the hidden causes or latent variables that have produced sensory observations. Perhaps the simplest form of such inference is \emph{maximum a posteriori} inference, or MAP inference for short, in which the most likely configuration of latent variables given the sensory inputs is reported. The implementation of MAP inference in neurally plausible circuitry often requires all-to-all connectivity between the neurons involved in the computation. Given that the latent variables are often very high dimensional, this can imply single neurons being connected to millions of others, a requirement that is impossible to achieve in most biological circuits. Here we show how a MAP inference problem can be reformulated to employ sparse connectivity between the computational units. Our formulation is inspired by the vertebrate olfactory system, but is completely general and can be applied in any setting where such an inference problem is being solved.
We begin by describing the olfactory setting of the problem, and highlight the requirement of all-to-all connectivity. Then we show how the MAP inference problem can be solved using convex duality to yield a biologically plausible circuit. Noting that it too suffers from all-to-all connectivity, we then derive a solution inspired by the anatomy of the vertebrate olfactory system that uses sparse connectivity.
\subsection{Sparse coding in olfaction}
We consider sparse coding \cite{olshausen_emergence_1996} as applied to olfaction \cite{koulakov_sparse_2011,grabska-barwinska_demixing_2013, tootoonian_dual_2014,grabska-barwinska_probabilistic_2017, kepple_deconstructing_2016}. Odors are modeled as high-dimensional, real-valued latent variables $\mathbf{x}\in\mathbb{R}^{N}$ drawn from a factorized distribution
\begin{align}
\tag{Odor model} p(\mathbf{x})=\prod_{i=1}^{N}p(x_i)=\frac{{1}}{Z}e^{-\phi(\mathbf{x})}, \quad \phi(\mathbf{x})= \beta\|\mathbf{x}\|_{1}+\frac{{\gamma}}{2}\|\mathbf{x}\|_{2}^{2} + \mathbb I(\mathbf{x} \ge 0).
\end{align}
The first two terms of $\phi$ embody an elastic net prior \cite{kepple_deconstructing_2016,zou_regularization_2005} on molecular concentrations that models their observed sparsity in natural odors \cite{jouquand_sensory_2008}, while the last term enforces the non-negativity of molecular concentrations and is defined as $ \mathbb I(\mathbf{x} \ge 0) = \sum_{i=1}^N \mathbb I(x_i \ge 0)$, where $\mathbb I(x_i \ge 0) = 0$ when $x_i \ge 0$ and $\infty$ otherwise. The animal observes these latents indirectly via low-dimensional glomerular responses $\mathbf{y}\in\mathbb{R}^{M}$, where $M\ll N$. Odors are transduced linearly into glomerular responses via the \emph{affinity matrix} $\mathbf{A}$, where $A_{ij}$ is the response of glomerulus $i$ to a unit concentration of molecule $j$. This results in a likelihood $p(\mathbf{y}|\mathbf{x})=\mathcal{N}(\mathbf{y};\mathbf{A}\mathbf{x},\sigma^{2}\mathbf{I})$, where $\sigma^{2}$ is the noise variance. As in \cite{tootoonian_dual_2014}, we assume that the olfactory system infers odors from glomerular inputs via MAP inference, i.e. by finding the vector $\mathbf{x}_{\text{MAP}}$ that minimizes the negative log posterior over odors given the inputs:
\begin{align}
\tag{MAP inference} \mathbf{x}_{\text{MAP}}=\argmin_{\mathbf{x} \in \mathbf R^N}\;\phi(\mathbf{x})+\frac{{1}}{2\sigma^{2}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}
\end{align}
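As a concrete numerical reference point, this objective can be minimized directly with projected proximal gradient descent (ISTA with the non-negativity constraint folded into the proximal step). The sketch below is a generic solver, not the neural circuit derived in this paper; the dimensions and parameter values in any usage are illustrative assumptions.

```python
import numpy as np

def map_inference(A, y, beta, gamma, sigma, n_iter=5000):
    """Minimize beta*||x||_1 + (gamma/2)*||x||_2^2 + ||y - A x||_2^2/(2 sigma^2)
    subject to x >= 0, by projected proximal gradient descent (ISTA)."""
    # step size from the Lipschitz constant of the smooth part of the objective
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 / sigma**2 + gamma)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = gamma * x + A.T @ (A @ x - y) / sigma**2
        # prox of beta*||.||_1 on the nonnegative orthant: shift, then rectify
        x = np.maximum(x - step * (grad + beta), 0.0)
    return x
```

This provides a ground-truth estimate of $\mathbf{x}_{\text{MAP}}$ against which any circuit implementation can be checked.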
A common approach to solving such problems is gradient descent \cite{olshausen_emergence_1996}, with dynamics in $\mathbf{x}$:
\begin{align}
\tag{Gradient descent}
\tau \frac{d\mathbf{x}}{dt} &= -\text{(leak)} + \frac{1}{\sigma^2}\mathbf{A}^T\mathbf{y} - \frac{1}{\sigma^2}\mathbf{A}^T\mathbf{A} \mathbf{x},
\end{align}
where we've absorbed the effects of the prior into the leak term for simplicity. These dynamics have a neural interpretation as feedforward excitation of the readout units $\mathbf{x}$ by the glomeruli $\mathbf{y}$ due to the $\mathbf{A}^T \mathbf{y}$ term, and recurrent inhibition among the readout units due to the $-\mathbf{A}^T\mathbf{A} \mathbf{x}$ term. This circuit is shown in Figure~\ref{fig:all-to-all}A.
Another circuit is motivated by noting that $\mathbf{A}^T\mathbf{y} - \mathbf{A}^T \mathbf{A} \mathbf{x} = \mathbf{A}^T(\mathbf{y} - \mathbf{A} \mathbf{x})$. This suggests a predictive coding \cite{rao_predictive_1999} reformulation:
\begin{align}
\tag{Predictive coding}
\tau_{\text{fast}} \frac{d\mathbf{r}}{dt} = -\text{(leak)} + \mathbf{y} - \mathbf{A} \mathbf{x}, \quad \tau_{\text{slow}} \frac{d\mathbf{x}}{dt} = -\text{(leak)} + \frac{1}{\sigma^2}\mathbf{A}^T\mathbf{r}.
\end{align}
Here the new variable $\mathbf{r}$ encodes the residual after explaining the glomerular activations $\mathbf{y}$ with odor $\mathbf{x}$. The neural interpretation of these dynamics is that the residual units $\mathbf{r}$ receive feed-forward input from the glomeruli due to the $\mathbf{y}$ term and feedback inhibition from the readout units due to the $-\mathbf{A} \mathbf{x}$ term, while the readout units receive feedforward excitation from the residual units due to the $\mathbf{A}^T \mathbf{r}$ term. This circuit is shown in Figure~\ref{fig:all-to-all}B.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{circuit_all_to_all.pdf}
\caption{Two architectures for MAP inference requiring all-to-all connectivity in general. Arrows indicate excitatory connections, knobs indicate inhibitory connections. (A) Gradient descent architecture. All-to-all feedforward excitation is required from the glomeruli to the readout units, and all-to-all recurrent inhibition between the readout units. (B) Predictive coding architecture. All mitral cells excite all granule cells and are in turn inhibited by them. No direct interaction among granule cells is required. Both architectures yield the MAP solution at convergence.}
\label{fig:all-to-all}
\end{figure}
\subsection{The problem of all-to-all connectivity}
Connectivity in the above circuits is determined by the affinity matrix $\mathbf{A}$. Given the combinatorial nature of receptor affinities \cite{nara_large-scale_2011}, $\mathbf{A}$ can be dense, i.e. have many non-zero values. This will result in correspondingly dense, even all-to-all connectivity. For example, the gradient descent architecture would require each glomerulus to connect to every readout unit, and for each readout unit to connect to every other. If we assume that the cells in the piriform cortex correspond to the readout units, this will require, in the case of the mouse olfactory bulb, that each glomerulus directly connect to millions of piriform cortical neurons, and for each cortical neuron to directly connect to millions of others. Such dense connectivity is clearly biologically implausible. The predictive coding circuit obviates the need for recurrent inhibition among the readout units, but still requires each residual unit to excite and receive feedback from millions of cortical neurons, which again is implausible. This problematic requirement of all-to-all connectivity is not limited to olfaction: the sparse coding formulation above is quite generic so that any system thought to implement it, such as the early visual system \cite{olshausen_sparse_1997}, is likely to face a similar problem.
\section{Results}
To address the problem of all-to-all connectivity we will first show how MAP inference can be solved as a constrained optimization problem, resulting in a principled derivation of the predictive coding dynamics derived heuristically above. The resulting circuit also suffers from all-to-all connectivity. Taking inspiration from the anatomy of the olfactory bulb, we then show how the problem can be reformulated and solved using sparse connectivity.
\subsection{MAP inference as constrained optimization}
The MAP inference problem is a high-dimensional \emph{unconstrained} optimization problem, where we search over the full $N$-dimensional space of odors $\mathbf{x}$. In \cite{tootoonian_dual_2014} the authors showed how a similar compressed-sensing problem can be solved in the lower-, $M$-dimensional space of observations by converting it to a low-dimensional \emph{constrained} optimization problem. Here we use similar methods to demonstrate how the MAP problem itself can be solved in the lower-dimensional space. We begin by introducing an auxiliary variable $\mathbf{r}$, and reformulate the problem as constrained optimization:
\begin{align}
\tag{MAP inference, constrained} \mathbf{x}_{\text{MAP}}, \mathbf{r}_{\text{MAP}}=\argmin_{\substack{\mathbf{x} \in \mathbf R^N \\\mathbf{r} \in \mathbf R^M}}\;\phi(\mathbf{x})+\frac{{1}}{2\sigma^{2}}\|\mathbf{r}\|_{2}^{2} \quad \text{s.t.}\quad \mathbf{r} = \mathbf{y} - \mathbf{A} \mathbf{x}.
\end{align}
The Lagrangian for this problem is
$$\mathcal{L}(\mathbf{x},\mathbf{r},\boldsymbol{\lambda})=\phi(\mathbf{x})+\frac{1}{2\sigma^2}\|\mathbf{r}\|_2^2 + \boldsymbol{\lambda}^T(\mathbf{y} - \mathbf{A} \mathbf{x} - \mathbf{r}),$$
where $\boldsymbol{\lambda}$ are the dual variables enforcing the constraint. The auxiliary variable $\mathbf{r}$ can be eliminated by extremizing $\mathcal{L}$ with respect to it:
$$\nabla_{\mathbf{r}}\mathcal{L} = \frac{1}{\sigma^2}\mathbf{r} - \boldsymbol{\lambda},\quad \nabla_{\mathbf{r}}\mathcal{L} = 0 \implies \mathbf{r} = \sigma^2 \boldsymbol{\lambda}.$$
Plugging this value of $\mathbf{r}$ into $\mathcal{L}$ we get
$$ \mathcal{L}(\mathbf{x},\boldsymbol{\lambda})=\phi(\mathbf{x})-\frac{1}{2}\sigma^2\|\boldsymbol{\lambda}\|_2^2 + \boldsymbol{\lambda}^T(\mathbf{y} - \mathbf{A} \mathbf{x}).$$
After a change of variables to $\boldsymbol{\lambda} \leftarrow \sigma \boldsymbol{\lambda}$ (which we justify below) we arrive at
\begin{align}
\tag{MAP Lagrangian}\mathcal{L}_{\text{MAP}}(\mathbf{x},\boldsymbol{\lambda})=\phi(\mathbf{x})-\frac{1}{2}\|\boldsymbol{\lambda}\|_{2}^{2}+\frac{1}{\sigma}\boldsymbol{\lambda}^{T}(\mathbf{y}-\mathbf{A}\mathbf{x}).
\end{align}
Extremizing $\mathcal{L}_{\text{MAP}}$ yields the dynamics
\begin{align}
\tag{Mitral cell firing rate relative to baseline}
\tau_{mc} \frac{d\boldsymbol{\lambda}}{dt} &= - \boldsymbol{\lambda} + \frac{1}{\sigma}(\mathbf{y} - \mathbf{A} \mathbf{x})\\
\tag{Granule cell membrane voltage}
\tau_{gc} \frac{d\mathbf{v}}{dt} &= - \mathbf{v} + \mathbf{A}^T\boldsymbol{\lambda},\\
\tag{Granule cell firing rate}
\mathbf{x} &= \frac{1}{\gamma \sigma}[\mathbf{v} - \beta \sigma]_+,
\end{align}
where $[z]_+ = \text{max}(z,0)$ is the rectifying linear function. These dynamics can easily be shown to yield the MAP solution in the value of $\mathbf{x}$ at convergence (see Supplementary Information). The identification of $\boldsymbol{\lambda}$ and $\mathbf{x}$ with mitral and granule cells, respectively is natural as the dynamics indicate that (a) the $\boldsymbol{\lambda}$ variables are excited by the sensory input $\mathbf{y}$ and inhibited by $\mathbf{x}$, whereas (b) the much more numerous $\mathbf{x}$ variables receive their sole excitation from the $\boldsymbol{\lambda}$ variables, and (c) the connectivity of the $\boldsymbol{\lambda}$ and $\mathbf{x}$ variables is symmetric, reminiscent of the observed dendro-dendritic connections between mitral and granule cells \cite{shepherd_synaptic_2004}. The rescaling applied to $\boldsymbol{\lambda}$ is to keep mitral cell activity at convergence on the same order of magnitude as that of the receptor neurons, as qualitatively observed experimentally (compare for example \cite{shusterman_precise_2011} and \cite{duchamp-viret_odor_1999}): We assume without loss of generality that the elements of $\mathbf{A}$ and $\mathbf{x}$ are scaled such that the elements of $\mathbf{y}$ are $O(1)$ in magnitude. At convergence, $\boldsymbol{\lambda} = \sigma^{-1}(\mathbf{y} - \mathbf{A} \mathbf{x})$, and as we expect the elements of $\mathbf{y} - \mathbf{A} \mathbf{x}$ to be $O(\sigma)$ at convergence, this results in the elements of $\boldsymbol{\lambda}$ being $O(1)$ in magnitude, as desired.
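These mitral/granule dynamics can be checked by naive Euler integration. The sketch below is assumption-laden (a random affinity matrix, equal time constants, and parameter values chosen for numerical stability, none taken from the paper); at a fixed point it must satisfy $\boldsymbol{\lambda}=\sigma^{-1}(\mathbf{y}-\mathbf{A}\mathbf{x})$ and $\mathbf{v}=\mathbf{A}^T\boldsymbol{\lambda}$.

```python
import numpy as np

def simulate_bulb(A, y, beta, gamma, sigma, tau=0.05, dt=1e-3, T=2.0):
    """Euler-integrate the mitral (lam) / granule (v, x) dynamics."""
    M, N = A.shape
    lam = np.zeros(M)   # mitral cell rates relative to baseline
    v = np.zeros(N)     # granule cell membrane voltages
    for _ in range(int(T / dt)):
        x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)  # granule rates
        lam = lam + dt / tau * (-lam + (y - A @ x) / sigma)
        v = v + dt / tau * (-v + A.T @ lam)
    x = np.maximum(v - beta * sigma, 0.0) / (gamma * sigma)
    return x, lam, v
```

At convergence, the value of `x` is the circuit's MAP estimate.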
It may seem odd that the readout of the computation is in the activity of the granule cells, which not only do not project outside of the olfactory bulb, but lack axons entirely \cite{shepherd_synaptic_2004}. However, cortical neurons can read out the results of the computation by simply mirroring the dynamics of the granule cells:
\begin{align}
\tag{Piriform cell membrane voltage}
\tau_{pc} \frac{d\mathbf{u}}{dt} &= - \mathbf{u} + \mathbf{A}^T\boldsymbol{\lambda},\\
\tag{Piriform cell firing rate}
\mathbf{z} &= \frac{1}{\gamma \sigma}[\mathbf{u} - \beta \sigma]_+,
\end{align}
In this circuit cortical neurons receive exactly the same mitral cell input as the granule cells and integrate it in exactly the same way (in fact, there is an implied 1-to-1 correspondence between granule cells and piriform cortical neurons) but are not required to provide feedback to the bulb. Thus, basic olfactory inference can be performed entirely within the bulb, with the concomitant increases in computational speed, and the results can be easily read out in the cortex. As cortical feedback to the bulb (in particular to the granule cells, as this model would suggest) does exist \cite{shepherd_synaptic_2004}, its role may be to incorporate higher level cognitive information and task contingencies into the inference computation. We leave the exploration of this hypothesis to future work.
These dynamics and their implied circuit are essentially the same as those of predictive coding described in the Introduction (Figure~\ref{fig:all-to-all}B), and hence suffer from the same problem of all-to-all connectivity. However, as we have derived our dynamics in a principled way from the original MAP inference problem, we can now elaborate them by taking inspiration from olfactory bulb anatomy to derive a circuit that can perform MAP inference but with sparse connectivity.
\subsection{Incorporating sister mitral cells }
The circuit derived above (Figure~\ref{fig:all-to-all}B) implies that each glomerulus is sampled by a single mitral cell. However, in vertebrates there are many more mitral cells than glomeruli, but each mitral cell samples a single glomerulus, so that each mitral cell has several dozen `sister' cells all of whom sample the same glomerulus \cite{shepherd_synaptic_2004}. This is shown schematically in Figure~\ref{fig:sister_mcs}. Although sister mitral cells receive the same receptor inputs, their odor responses can vary, presumably due to differing interactions with the granule cell population \cite{dhawale_non-redundant_2010}. The computational role of the sister mitral cells has thus far remained unclear. Here we show how they can be used to perform MAP inference but with sparse connectivity.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{sister_mcs_schematic.pdf}
\caption{Sister mitral cells. In the vertebrate olfactory bulb, each glomerulus is sampled by not one but $\sim 25$ `sister' cells \cite{shepherd_synaptic_2004}. Here we've shown a setting with 3 sisters/glomerulus.}
\label{fig:sister_mcs}
\end{figure}
We begin by noting the simple equalities
$$ \mathbf{A} \mathbf{x} = \sum_{i=1}^n \mathbf{A}^i \mathbf{x}^i, \quad \phi(\mathbf{x}) = \sum_{i=1}^n \phi(\mathbf{x}^i),$$
for any separable function $\phi$ (such as ours), and any partitioning of the matrix $\mathbf{A}$ and corresponding partitioning of the vector $\mathbf{x}$ into $n$ blocks. For example, if we partition $\mathbf{A}$ and $\mathbf{x}$ into consecutive blocks, we'd have:
$$ \mathbf{A} = [\underbrace{A_{:,1},\dots,A_{:,N/n}}_{\mathbf{A}^1},\dots,\underbrace{A_{:,N-N/n+1},\dots,A_{:,N}}_{\mathbf{A}^n}], \quad \mathbf{x} = [\underbrace{x_1,\dots,x_{N/n}}_{\mathbf{x}^1},\dots,\underbrace{x_{N-N/n+1},\dots,x_N}_{\mathbf{x}^n}].$$
This partitioning is shown schematically in Figure~\ref{fig:partition}.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{partitioning.pdf}
\caption{An example partitioning of the affinity matrix $\mathbf{A}$ and the odor vector $\mathbf{x}$.}
\label{fig:partition}
\end{figure}
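The block identities above are elementary, but a short numerical check makes the partitioning concrete (the dimensions here are arbitrary assumptions):

```python
import numpy as np

np.random.seed(0)
M, N, n = 5, 12, 3                    # n consecutive blocks of N/n columns each
A = np.random.rand(M, N)
x = np.random.rand(N)

A_blocks = np.split(A, n, axis=1)     # A^1, ..., A^n
x_blocks = np.split(x, n)             # x^1, ..., x^n
partial = sum(Ai @ xi for Ai, xi in zip(A_blocks, x_blocks))
assert np.allclose(A @ x, partial)    # A x = sum_i A^i x^i
```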
We can rewrite the Lagrangian $\mathcal{L}_{\text{MAP}}$ in terms of this partitioning as
$$\mathcal{L}_{\text{MAP}}(\mathbf{x}, \boldsymbol{\lambda}) = \mathcal{L}_{\text{MAP}}(\{\mathbf{x}^i\},\boldsymbol{\lambda})=-\frac{1}{2}\|\boldsymbol{\lambda}\|_{2}^{2} + \frac{1}{\sigma}\boldsymbol{\lambda}^T\mathbf{y} + \sum_{i=1}^n \phi(\mathbf{x}^i) - \frac{1}{\sigma}\boldsymbol{\lambda}^{T}\mathbf{A}^i \mathbf{x}^i.$$
Note that although we've split $\mathbf{A}$ and $\mathbf{x}$ into $n$ blocks, we're still using a single, shared $\boldsymbol{\lambda}$ variable. Extremizing with respect to the $\{\mathbf{x}^i\}$ and a shared $\boldsymbol{\lambda}$ would be an application of dual decomposition \cite{boyd_distributed_2011} to our problem. Instead, inspired by the presence of sister mitral cells, we reformulate the Lagrangian $\mathcal{L}_{\text{MAP}}$ by assigning to each block its own set $\boldsymbol{\lambda}^i$ of mitral cells, and introduce a corresponding set of variables $\boldsymbol{\mu}^i$ to enforce the constraint $\boldsymbol{\lambda}^i = \boldsymbol{\lambda}$. This yields
\begin{align*}
\mathcal{L}_{\text{sis}}(\{\mathbf{x}^i\},\{\boldsymbol{\lambda}^i\},\{\boldsymbol{\mu}^i\},\boldsymbol{\lambda}) = \sum_{i=1}^n \frac{1}{n\sigma }\boldsymbol{\lambda}^{i,T}\mathbf{y} + \phi(\mathbf{x}^i) &- \frac{1}{2n} \|\boldsymbol{\lambda}^i\|_2^2 - \frac{1}{\sigma}\boldsymbol{\lambda}^{i,T}\mathbf{A}^i \mathbf{x}^i\\
&+ \boldsymbol{\mu}^{i,T}(\boldsymbol{\lambda} - \boldsymbol{\lambda}^i) - \frac{1}{2}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2.
\end{align*}
The additional term $\frac{1}{2}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2$ has been introduced because it does not alter the value of $\mathcal{L}_{\text{sis}}$ at the solution (since there $\boldsymbol{\lambda} = \boldsymbol{\lambda}^i$), while allowing us to eliminate $\boldsymbol{\lambda}$ by setting $\nabla_{\boldsymbol{\lambda}}\mathcal{L}_{\text{sis}} = 0$, yielding:
$$ \boldsymbol{\lambda} = \overline{\boldsymbol{\lambda}} + \overline{\boldsymbol{\mu}},\quad \overline{\boldsymbol{\lambda}} = \frac{1}{n}\sum_{i=1}^n \boldsymbol{\lambda}^i, \quad \overline{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^n \boldsymbol{\mu}^i.$$
The values $\overline{\boldsymbol{\lambda}}$ and $\overline{\boldsymbol{\mu}}$ are averages computed over blocks, and are variables that would be available at the glomeruli. For example, $\overline{\boldsymbol{\lambda}}_i$ would be the average activity of all sister cells that innervate the $i$'th glomerulus.
As before, we derive dynamics by extremizing a Lagrangian, in this case $\mathcal{L}_{\text{sis}}$. As the $\{\boldsymbol{\mu}^i\}$ are the dual variables of a constrained \emph{maximization} problem (that of maximizing $\mathcal{L}_{\text{sis}}$ with respect to $\{\boldsymbol{\lambda}^i\}$), their dynamics minimize $\mathcal{L}_{\text{sis}}$:
$$ \frac{d\boldsymbol{\mu}^i}{dt} \propto -\nabla_{\boldsymbol{\mu}^i}\mathcal{L}_{\text{sis}} = \boldsymbol{\lambda}^i - \boldsymbol{\lambda} = \boldsymbol{\lambda}^i - \overline{\boldsymbol{\lambda}} - \overline{\boldsymbol{\mu}} \implies \frac{d\overline{\boldsymbol{\mu}}}{dt} \propto -\overline{\boldsymbol{\mu}}.$$
Hence $\overline{\boldsymbol{\mu}}$ decays to zero irrespective of the other variables, and in particular, if it starts at 0 it will remain there. In the following we will assume that this initial condition is met, so that $\overline{\boldsymbol{\mu}} = 0$ at all times, allowing us to eliminate it from the equations. The resulting dynamics that extremize $\mathcal{L}_{\text{sis}}$ are:
\begin{align*}
\tag{Mitral cell activity relative to baseline}\tau_{mc} \frac{d\boldsymbol{\lambda}^i}{dt} &= -(1 + \frac{1}{n})\boldsymbol{\lambda}^i + \frac{1}{\sigma}\left(\frac{\mathbf{y}}{n} - \mathbf{A}^i \mathbf{x}^i\right) + \overline{\boldsymbol{\lambda}} - \boldsymbol{\mu}^i\\
\tag{Granule cell membrane voltage}\tau_{gc} \frac{d\mathbf{v}^i}{dt} &= - \mathbf{v}^i + \mathbf{A}^{i,T}\boldsymbol{\lambda}^i\\
\tag{Granule cell firing rate}\mathbf{x}^i &= \frac{1}{\gamma \sigma }[\mathbf{v}^i - \beta \sigma]_+\\
\tag{Periglomerular cell activity relative to baseline, no leak}\tau_{pg} \frac{d\boldsymbol{\mu}^i}{dt} &= \boldsymbol{\lambda}^i - \overline{\boldsymbol{\lambda}}
\end{align*}
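For concreteness, here is an illustrative Euler integration of the sister-cell circuit (the random affinity matrix and all parameter values are assumptions, not taken from the paper). It also exercises the conservation argument above: because each $\boldsymbol{\mu}^i$ update is proportional to $\boldsymbol{\lambda}^i-\overline{\boldsymbol{\lambda}}$, a block average $\overline{\boldsymbol{\mu}}$ initialized at zero stays at zero, and at convergence the sister populations $\boldsymbol{\lambda}^i$ agree.

```python
import numpy as np

def simulate_sister_circuit(A_blocks, y, beta=1.0, gamma=10.0, sigma=0.1,
                            tau=0.05, dt=1e-3, T=2.0):
    """Euler-integrate the sister mitral (lam), granule (v) and
    periglomerular (mu, no leak) dynamics for n column blocks A^i."""
    n, M = len(A_blocks), y.shape[0]
    lam = [np.zeros(M) for _ in range(n)]
    mu = [np.zeros(M) for _ in range(n)]
    v = [np.zeros(Ai.shape[1]) for Ai in A_blocks]
    for _ in range(int(T / dt)):
        lam_bar = sum(lam) / n
        new_lam, new_v, new_mu = [], [], []
        for i, Ai in enumerate(A_blocks):
            x = np.maximum(v[i] - beta * sigma, 0.0) / (gamma * sigma)
            dlam = (-(1 + 1 / n) * lam[i] + (y / n - Ai @ x) / sigma
                    + lam_bar - mu[i])
            new_lam.append(lam[i] + dt / tau * dlam)
            new_v.append(v[i] + dt / tau * (-v[i] + Ai.T @ lam[i]))
            new_mu.append(mu[i] + dt / tau * (lam[i] - lam_bar))
        lam, v, mu = new_lam, new_v, new_mu
    return lam, mu, v
```

Each mitral block only touches its own granule block `Ai`, mirroring the sparse connectivity of Figure~\ref{fig:sparse_circuit}; coupling between blocks enters only through the glomerular average `lam_bar` and the periglomerular variables `mu`.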
We have identified the $\boldsymbol{\mu}^i$ variables with olfactory bulb periglomerular cells because they inhibit the mitral cells and are in turn excited by them \cite{shepherd_synaptic_2004} and do not receive direct receptor input themselves, reminiscent of the Type II periglomerular cells of Kosaka and Kosaka \cite{kosaka_synaptic_2005}.
This circuit is shown schematically in Figure~\ref{fig:sparse_circuit}. Importantly, in this circuit each mitral cell interacts only with the granule cells within its block, reducing mitral-granule connectivity by a factor of $n$ (though the \emph{total} number of mitral-granule synapses has stayed the same due to the introduction of $n$ sister mitral cells per glomerulus). The information from the other granule cells is delivered indirectly to each mitral cell via the influences of the glomerular average $\overline{\boldsymbol{\lambda}}$ and periglomerular inhibition.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{sparse_circuit.pdf}
\caption{Inference circuit with sparse connectivity using sister mitral cells. Sister cells now only interact with the granule cells within their own block, reducing their connectivity by a factor of $n$. Information is shared between blocks at the glomeruli and through the periglomerular cells.}
\label{fig:sparse_circuit}
\end{figure}
\subsection{Leaky periglomerular cells via an approximate Lagrangian}
The dynamics above imply that the periglomerular cells $\boldsymbol{\mu}^i$ do not leak, i.e. are perfect integrators, a property that is obviously at odds with biology. To introduce a leak term we first recall that $\boldsymbol{\mu}^i$ dynamics minimize $\mathcal{L}_{\text{sis}}$. We then introduce an upper bound to $\mathcal{L}_{\text{sis}}$:
\begin{align*}
\mathcal{L}_{\text{sis}}^{\varepsilon} (\{\boldsymbol{\mu}^i\},\dots) = \mathcal{L}_{\text{sis}}(\{\boldsymbol{\mu}^i\},\dots) + \sum_{i=1}^n \frac{1}{2}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2 - \frac{1}{2(1 + \varepsilon)}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2 + \frac{1}{2}\varepsilon \|\boldsymbol{\mu}^i\|_2^2,
\end{align*}
where $\varepsilon \ge 0$ and we've suppressed the other arguments to the Lagrangians for clarity. The first two terms in the augmentation replace each $-\frac{1}{2}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2$ term in $\mathcal{L}_{\text{sis}}$ with $-\frac{1}{2(1+\varepsilon)}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^i\|_2^2$, and the final term penalizes large values of $\boldsymbol{\mu}^i$. The dynamics that extremize $\mathcal{L}_{\text{sis}}^{\varepsilon}$ are the same as those of $\mathcal{L}_{\text{sis}}$ above, except for those of the mitral and periglomerular cells, which are modified to:
\begin{align*}
\tau_{mc} \frac{d\boldsymbol{\lambda}^i}{dt} &= -(\frac{1}{1+\varepsilon} + \frac{1}{n})\boldsymbol{\lambda}^i + \frac{1}{\sigma}\left(\frac{\mathbf{y}}{n} - \mathbf{A}^i \mathbf{x}^i\right) + \frac{\overline{\boldsymbol{\lambda}}}{1+\varepsilon} - \boldsymbol{\mu}^i\\
\tau_{pg} \frac{d\boldsymbol{\mu}^i}{dt} &= - \boldsymbol{\mu}^i + \frac{1}{\varepsilon}(\boldsymbol{\lambda}^i - \overline{\boldsymbol{\lambda}})
\end{align*}
Note that now the periglomerular cells are endowed with a leak, as desired. Because the resulting dynamics no longer extremize $\mathcal{L}_{\text{sis}}$, the solution no longer matches the MAP solution exactly, and is in fact denser. To understand this effect (see Supplementary Information), note that at $\varepsilon = 0$, $\mathcal{L}_{\text{sis}}^{\varepsilon} = \mathcal{L}_{\text{sis}}$, and the sister cells are `fully coupled', i.e. the $\boldsymbol{\mu}^i$ variables are free to enforce the constraint $\boldsymbol{\lambda}^i = \boldsymbol{\lambda}$. The system then solves the MAP problem exactly by combining information from all blocks, yielding a sparse solution. As $\varepsilon \to \infty$ non-zero values of $\boldsymbol{\mu}^i$ result in progressively higher values for the Lagrangian, forcing $\boldsymbol{\mu}^i$ to zero in the limit. In this `fully decoupled' state each block attempts to explain its fraction $\mathbf{y}/n$ of the input independently of the others using only its own subset $\mathbf{A}^i$ of the affinity matrix, reducing overcompleteness and resulting in denser representations. For small values of $\varepsilon$, this can be counteracted by increasing the sparsity prior coefficient $\beta$.
Figure~\ref{fig:performance}A demonstrates the time course of the recovery error of the circuit in response to a 500 ms odor puff, as the number of blocks is varied, and averaged over 40 trials. Recovery error is defined as the mean sum of squares of the difference between the circuit's estimate and the MAP solution normalized by the mean sum of squares of the MAP solution. The all-to-all circuit is able to reduce this error to near zero (numerical precision) as it is performing MAP inference exactly. As the multi-block circuits use a non-zero value of $\varepsilon$ they are only approximating the MAP solution, but can still greatly reduce the recovery error when using an optimized setting of the sparsity parameter $\beta$, as described above. Figure~\ref{fig:performance}B shows the output of the 4-block circuit for a typical input odor, demonstrating its close approximation to the MAP solution. In Figure~\ref{fig:performance}C the dynamics of two different cells and their sisters from another block are shown, demonstrating that they are similar, but not identical, as experimentally observed \cite{dhawale_non-redundant_2010}, and Figure~\ref{fig:performance}D shows the activity of corresponding periglomerular cells. Finally, Figure~\ref{fig:performance}E shows the membrane voltage and output firing rate of one of the active granule cells. Note that the firing rate has essentially stabilized by $\sim$200 ms after odor onset, broadly consistent with the fast olfactory discrimination times observed in rodents \cite{uchida_speed_2003}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{performance.pdf}
\caption{Performance. (A) Time course of recovery error for different circuits averaged over 40 random odors puffed for 500 ms at $t = 0$. Recovery error is mean squared error of granule cell activity relative to the MAP estimate, normalized by mean sum of squares of the MAP estimate. The all-to-all circuit essentially recovers the MAP solution; $n$-block circuits do so approximately as $\varepsilon>0$. Odors were sparse 1000-dimensional vectors ($N=1000$) with 3 randomly selected elements set to 1. All-to-all circuit had 50 mitral cells ($M=50$); $n$-block circuits had $50n$ mitral cells and the corresponding periglomerular cells. Other parameters: $\beta = 100\;\text{(all-to-all)}, 150\;\text{(2-block)}, 170\;\text{(4-block)}$; $\gamma = 100$; $\sigma = 10^{-2}$ (but no noise was actually added above); $\varepsilon = 10^{-2}$; $\tau_{mc} = \tau_{pg} = \tau_{gc} = \text{50 ms}$. (B) Example recovery. The output of a circuit is the vector of granule cell activations immediately before odor offset. Top panel: actual odor presented. Bottom two panels: MAP estimate (blue) and output of the 4-block circuit (orange), zoomed in (and sign-inverted in the bottom panel) to values near 1 and 0, respectively, to highlight discrepancies between the MAP estimate and the circuit output, demonstrating good agreement. (C) Sister mitral cells: The time course of two mitral cells and one each of their sisters, showing that the activities of sister cells are similar but not identical. (D) The activity of the periglomerular cells paired to the mitral cells in (C). (E) The membrane voltage and firing rate of a granule cell strongly activated by the odor. Firing rate is stable by $\sim 200$ ms after odor onset, consistent with fast odor discrimination in rodents \cite{uchida_speed_2003}.}
\label{fig:performance}
\end{figure}
\section{Discussion}
Inspired by the sister mitral cells in the olfactory bulb, we have shown in this work how MAP inference, which often requires dense connectivity between computational units, can be reformulated in a principled way to yield a circuit with sparse connectivity, at the cost of introducing additional computational units. A key prediction of our model may appear to be that the mitral-granule cell connectome has block structure, in which granule cells only communicate with the mitral cells in their block and vice versa. As we show in the Supplemental Information, a simple generalization of our model shows that the MAP solution can be found with mitral-granule cell connectivity that does not have the block structure we have assumed here (though equally sparse). This generalization also accommodates the experimentally observed random sampling of glomeruli by mitral cells \cite{imai_construction_2014}, in addition to the ordered one presented above where exactly $n$ sister cells sample each glomerulus.
Previous work by several groups has addressed sparse coding in olfaction \cite{koulakov_sparse_2011,grabska-barwinska_demixing_2013, tootoonian_dual_2014, grabska-barwinska_probabilistic_2017, kepple_deconstructing_2016}. Our work extends that of \cite{tootoonian_dual_2014} in insects by showing how the MAP problem itself can be solved, rather than the related compressed sensing problem addressed in that paper. In our work we propose that olfactory bulb granule cells encode odor representations, similar to \cite{koulakov_sparse_2011}. The authors there assumed a random mitral-granule connectome, resulting in `incomplete' odor representations because granule cell firing rates are positive. In this work we assume that the connectome is set to its `correct' value determined by the affinity matrix $\mathbf{A}$, obviating the need for negative rates and resulting in `complete' representations. Even with such complete representations, mitral cell activity is not negligible, and allows for simple and exact readout of the inferred odor concentrations in downstream cortical areas. Furthermore, previous work \cite{tootoonian_dual_2014} has shown, albeit in a limited setting, that the correct value of the connectome can be learned via biologically plausible learning mechanisms. We expect that to be the case here, though we leave that determination to future work. The authors in \cite{grabska-barwinska_demixing_2013, grabska-barwinska_probabilistic_2017} propose a model in which the olfactory bulb and cortex interact to infer odorant concentrations while retaining uncertainty information, rather than just providing point estimates as in MAP inference. The authors in \cite{kepple_deconstructing_2016} propose a bulbar-cortical circuit that represents odors based on `primacy', the relative strengths of the strongest receptor responses, automatically endowing the system with the concentration invariance likely to be important in olfactory computation.
We have shown that the MAP computation can be performed entirely within the bulb while allowing for easy and exact cortical readout and without the need for cortical feedback, retaining odor information and allowing downstream areas to perform concentration invariance and primacy computations, as needed. Extending our methods to provide uncertainty information is an important task that we leave to future work.
\bibliographystyle{unsrt}
\clearpage
\pagebreak
\section{Introduction}
\label{sec:introduction}
Many decision-making problems arising from real-world applications can be
formulated using \textit{Mixed Integer Programming (MIP)}. The
\textit{Branch-and-Bound} (\bnb{}) framework is a general approach to solving
MIPs
to global optimality. Over the recent years, the idea of using machine learning (ML) to improve optimization techniques has
gained renewed interest. There exist various approaches to tackle
different aspects of the solving process using classical ML techniques. For
instance, ML has been used to find good parameter configurations for a
solver \cite{hutter09,HutterHoosLeytonbrown2011}, improve node
\cite{he14}, variable \cite{khalil16,nair20} or cut
\cite{baltean19} selection strategies, and detect decomposable structures
\cite{kruber17}.
Even though exact MIP solvers aim for global optimality,
finding good feasible solutions fast is at least as important, especially
in the presence of a time limit. The use of \textit{primal heuristics} is
crucial in ensuring good primal performance in modern solvers. For
instance, \cite{berthold132} showed that the primal
bound--the objective value of the best solution--improved on average by around $80\%$ when primal heuristics were
used. Generally, a solver has a variety of primal heuristics implemented,
where each class exploits a different idea to find good solutions.
During \bnb{}, these heuristics are executed successively
at each node of the search tree, and improved solutions are reported back to the solver if found. Extensive overviews of different primal heuristics, their computational costs, and their impact in MIP solving can be found in \cite{lodi132,berthold13,berthold18}.
Since most heuristics can be very costly, it is necessary to be strategic about the order in which the heuristics are executed and the number of iterations allocated to each, with the ultimate goal of obtaining good primal performance overall.
Such decisions are often made by following hard-coded rules derived
from testing on broad benchmark test sets. While these static
settings yield good performance on average, their performance can be far from
optimal when considering specific families of instances.
To illustrate this fact, Figure \ref{fig:heurperformance} compares the
success rates of different primal
heuristics for two problem classes:
the \textit{Generalized Independent Set Problem (GISP)}
\cite{hochbaum97,colombi17}
and the
\textit{Fixed-Charge Multicommodity Network Flow Problem (FCMNF)} \cite{hewitt10}.
\begin{figure}
\centering
\begin{minipage}[t]{.4\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{heurperformance_new_nolabels_2.pdf}
\caption{\textbf{Average solution success rates of ten heuristics
for two problem classes}. Heuristic success is
problem-dependent: each pair of blue-yellow bars belongs to one
heuristic, and the heuristics are sorted in descending
order w.r.t. the solution success rates for GISP (blue). The yellow
bars representing the success rates for FCMNF are far from being
sorted, implying that the performance of a heuristic is strongly
problem-dependent.}
\label{fig:heurperformance}
\end{minipage}%
\hspace{.05\linewidth}
\begin{minipage}[t]{.37\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{primal_integral.pdf}
\caption{\textbf{Primal gap for an exemplary GISP instance.} Our method's heuristic schedule (orange) obtains better
solutions earlier than SCIP's default (blue).}
\label{fig:primalintegral}
\end{minipage}
\end{figure}
In this paper, we propose a data-driven approach to systematically
improve the use of primal heuristics in B\&B. By learning from
data about the duration and success of every heuristic call for a set of
training instances, we construct a \textit{schedule of
heuristics} deciding when and for how long a certain heuristic should be
executed to obtain good primal solutions early on. As a result, we are able
to significantly improve the use of primal heuristics, as shown
in Figure \ref{fig:primalintegral} for one MIP instance.
Even though we will focus on improving the primal performance of MIP solving,
it is important to note that finding good solutions faster also improves
the overall running time of the solver. The B\&B procedure generates a
search tree in which the nodes correspond to different subproblems. To
determine whether a certain part of the tree
should be further explored or pruned, we keep track of the incumbent, i.e.,
the best feasible solution seen thus far. Hence, when good incumbents are
found early on, the size of the search tree may be significantly reduced,
leading to the problem being solved faster. On a standardized
test set, primal heuristics reduced the solving time by up to $30\%$
\cite{berthold132}.
Our contributions can be summarized as follows:
\begin{enumerate}
\item {\textbf{We formalize the learning task} of finding an effective, cost-efficient heuristic schedule on a training dataset as a Mixed Integer Quadratic Program (see Section
\ref{sec:formulation});}
\item {We propose an \textbf{efficient heuristic} for solving
the training (scheduling) problem and a
    \textbf{scalable data collection} strategy (see Sections
    \ref{sec:scheduling} and \ref{sec:datacollection});}
\item {We perform \textbf{extensive computational experiments} on a class of challenging instances and \textbf{demonstrate the benefits of our approach} (see
Section \ref{sec:expresults}).}
\end{enumerate}
%
Since primal heuristics have such a significant influence on the solving
process, optimizing their usage is a topic of ongoing research. For
instance, by characterizing nodes with different features, \cite{khalil17}
propose an ML method to decide at which nodes heuristics should run to improve
primal performance. After that decision, all heuristics are executed
according to the predefined rules set by the solver. The authors in
\cite{hendel18} and \cite{hendel182} use bandit algorithms to learn from
previous calls which heuristics to execute first. In contrast to the method
proposed in this paper, their procedure only adapts the order in which
heuristics are executed.
Furthermore, primal performance can also be improved by using
hyperparameter tuning tools \cite{hutter09,HutterHoosLeytonbrown2011};
however, these generally come with extremely high computational cost,
since they do not exploit information about the structure of the problem.
\section{Preliminaries}
\label{sec:background}
Let us consider a MIP of the form
%
\begin{equation} \label{MIP} \tag{$P_{\textit{MIP}}$}
\begin{aligned}
& \underset{x \in \mathbb{R}^n}{\text{min}}
& & c^\mathsf{T} x, \\
& \text{s.t.} & & Ax \leq b, \\
& & & x_i \in \mathbb{Z}, \quad \forall i \in I \\
\end{aligned}
\end{equation}
%
with matrix $A \in \mathbb{R}^{m \times n}$, vectors $c \in \mathbb{R}^n$,
$b \in \mathbb{R}^m$, and index set $I \subseteq [n]$.
A MIP can be solved using Branch-and-Bound, a tree search algorithm that
finds an optimal solution to \eqref{MIP} by recursively partitioning the
original problem into linear subproblems. The nodes in the resulting search
tree correspond to these subproblems. Throughout this work, we assume
that each node has a unique index that identifies the node even across branch-and-bound trees obtained for different MIP instances. For a set of instances
$\mathcal{X}$, we denote the union of the corresponding node indices by
$\mathcal{N}_{\mathcal{X}}$.
\noindent\textbf{Primal Performance Metrics.}
Since we are interested in finding good solutions fast, we consider a
collection of different metrics for primal performance. Beside statistics
like the time to the first/best solution and the solution/incumbent success
rate, we mainly focus on the \textit{primal integral} \cite{berthold132}
as a comprehensive measure of primal performance. Intuitively, this metric can be interpreted as a normalized average of the incumbent value over time. Formally, if $x$ is feasible and $x^*$ is an
optimal (or best known)
solution to
\eqref{MIP}, the \textit{primal gap} of $x$ is defined as
%
\begin{align*}
\gamma(x) \coloneqq
\begin{cases}
0, &\text{ if } |c^\mathsf{T} x| = |c^\mathsf{T} x^*|, \\
1, &\text{ if } c^\mathsf{T} x \cdot c^\mathsf{T} x^* < 0, \\
\dfrac{|c^\mathsf{T} x - c^\mathsf{T} x^*|}{\text{max}\{|c^\mathsf{T} x|,|c^\mathsf{T} x^*|\}}, &\text{
otherwise}.
\end{cases}
\end{align*}
%
With $x^t$
denoting the incumbent at time $t$, the \textit{primal gap
function} $p: \mathbb{R}_{\geq 0} \to [0,1]$ is then defined as
%
\begin{align*}
p(t) \coloneqq
\begin{cases}
1, &\text{ if no incumbent is found until time } t, \\
\gamma(x^t), &\text{ otherwise}.
\end{cases}
\end{align*}
%
For a time limit $T \in \mathbb{R}_{\geq 0}$, the primal integral $P(T)$ is then given by the area
underneath the primal gap function $p$ up to time $T$,
%
\begin{align*}
P(T) \coloneqq \sum_{i = 1}^{K} p(t_{i - 1})(t_i - t_{i-1}),
\end{align*}
%
where $(K-1)$ incumbents have been found until time $T$, $t_0 = 0$,
$t_K = T$, and $t_1, \dots, t_{K-1}$ are the points in time at which new incumbents are found.
Figure~\ref{fig:primalintegral} gives an example of the primal gap function. The
primal integrals are the areas under each of the curves. It is easy to see
that finding near-optimal incumbents earlier shrinks the area under the
graph of $p$, resulting in a smaller primal integral.
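As an illustrative, pure-Python sketch (function names are ours), the primal gap and primal integral defined above can be computed from a time-stamped list of incumbents:

```python
def primal_gap(cx, cx_opt):
    """Primal gap gamma(x) of a feasible solution with objective value cx,
    given the optimal (or best known) objective value cx_opt."""
    if abs(cx) == abs(cx_opt):
        return 0.0
    if cx * cx_opt < 0:
        return 1.0
    return abs(cx - cx_opt) / max(abs(cx), abs(cx_opt))

def primal_integral(incumbents, T, cx_opt):
    """Primal integral P(T): area under the step function p(t).
    `incumbents` is a time-sorted list of (t_i, objective) pairs;
    p(t) = 1 before the first incumbent is found."""
    total, t_prev, gap = 0.0, 0.0, 1.0
    for t_i, cx in incumbents:
        if t_i > T:
            break
        total += gap * (t_i - t_prev)   # p is constant between incumbents
        t_prev, gap = t_i, primal_gap(cx, cx_opt)
    total += gap * (T - t_prev)         # last piece up to the time limit
    return total
```

For instance, with an optimum of 10, an incumbent of value 20 at $t=2$ and the optimum found at $t=5$, the integral up to $T=10$ is $1\cdot 2 + 0.5\cdot 3 + 0\cdot 5 = 3.5$.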
\section{Data-Driven Heuristic Scheduling}
\label{sec:formulation}
The performance of heuristics strongly depends on the set of problem
instances they are applied to. Hence, it is natural to consider \textit{data-driven} approaches for optimizing the use of primal heuristics for the instances of interest.
Concretely, we consider the following practically relevant setting.
We are given a set of heuristics $\mathcal{H}$ and
a homogeneous set of training instances $\mathcal{X}$ from the same problem class. In a data collection phase, we are allowed to execute the \bnb{} algorithm on the training instances, observing how each heuristic performs at each node of each search tree. At a high level, our goal is then to leverage this data to obtain a schedule
of heuristics that minimizes a primal performance metric.
The specifics of how such data collection is carried out will be discussed
later on in the paper.
%
First, let us examine the decisions that could potentially benefit from a data-driven approach. Our discussion is inspired by an in-depth analysis of how the open-source academic MIP solver SCIP~\cite{gamrath20} manages primal heuristics. However, our approach is generic and is likely to apply to other MIP solvers.
\subsection{Controlling the Order}
\label{sec:ordering}
One important degree of freedom in scheduling heuristics is the order in which a set of applicable heuristics $\mathcal{H}$ is executed by the solver at a given node.
This can be controlled by assigning a \textit{priority} for each heuristic.
In a \textit{heuristic loop}, the solver then iterates over the heuristics in decreasing priority.
The loop is terminated if a heuristic finds a new incumbent solution. As such, an ordering $\langle h_{1}, \dots, h_{k} \rangle$ that prioritizes effective heuristics can lead to time savings without sacrificing primal performance.
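A minimal sketch of such a priority-driven heuristic loop (names and data shapes are ours, not SCIP's) could look as follows:

```python
def run_heuristic_loop(heuristics, node):
    """Iterate over heuristics in decreasing priority and stop as soon as
    one finds a new incumbent. Each heuristic is a (name, priority, fn)
    triple; fn returns an incumbent solution or None on failure."""
    for name, _priority, heur in sorted(heuristics, key=lambda h: -h[1]):
        incumbent = heur(node)
        if incumbent is not None:
            return name, incumbent  # loop terminates on first success
    return None, None

heuristics = [
    ("expensive_fails", 10, lambda node: None),
    ("cheap_succeeds", 5, lambda node: 42),
    ("never_reached", 1, lambda node: 0),
]
print(run_heuristic_loop(heuristics, node=None))  # ('cheap_succeeds', 42)
```

Here the highest-priority heuristic runs first and fails, the second one succeeds, and the third is never executed, illustrating how an effective ordering saves time.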
\subsection{Controlling the Duration}
\label{subsec:duration}
Furthermore, solvers use working limits to control the computational effort spent on heuristics.
Consider diving heuristics as an example. Increasing the maximal
diving depth increases the likelihood of finding an integer feasible
solution. At the same time, this increases the overall running time. Figure
\ref{fig:divingdepth}
visualizes this cost-benefit trade-off empirically for three different
diving heuristics, highlighting the need for a careful ``balancing act''.
%
\begin{figure*}[t]
\centering
\includegraphics[width=.415\textwidth]{heursuccessdepth_2_newlabel.pdf}
\hspace{.05\linewidth}
\includegraphics[width=.41\textwidth]{heurtimedepth_2.pdf}
\caption{\textbf{Number of solutions found (in percent) and cost of
different diving heuristics depending on the maximal diving depth}:
This figure shows the average number of solutions found by a heuristic
(left) and average duration in seconds (right) of three diving
    heuristics when limiting the maximal depth of a dive. Here, the
    baseline for the values on the vertical axis of the left figure is the
    number of solutions found by the heuristics with no limit on the
    diving depth. The
likelihood of finding a solution increases with the maximal diving
depth. At the same time, an
average call to all three heuristics becomes more expensive as the diving depth increases.}
\label{fig:divingdepth}
\end{figure*}
%
For a heuristic $h \in \mathcal{H}$, let $\tau \in
\mathbb{R}_{>0}$ denote $h$'s time budget. Then, we are
interested in finding a \textit{schedule} $S$ defined by
%
\begin{align*}
S \coloneqq \langle (h_{1}, \tau_{1}), \dots, (h_{k}, \tau_{k}) \rangle, h_i \in \mathcal{H}.
\end{align*}
%
Since controlling the time budget directly can be unreliable and lead to
nondeterministic behavior in practice
(see Appendix \ref{sec:implementation} for details), a deterministic proxy
measure is preferable.
For diving heuristics, the maximal diving depth provides a suitable measure
as demonstrated by Figure \ref{fig:divingdepth}.
Similar measures can be used for other types of heuristics, as we will
demonstrate with Large Neighborhood Search heuristics in
Section~\ref{sec:expresults}.
In general, we will refer to $\tau_i$ as the maximal number of
\emph{iterations} that is allotted to a heuristic~$h_i$ in schedule~$S$.
\subsection{Deriving the Scheduling Problem}
Having argued for order and duration as suitable control decisions, we will now formalize our heuristic scheduling problem.
Ideally, we would like to construct a single schedule $S$ that minimizes the
primal integral, as defined in Section \ref{sec:background}, averaged over the training instances $\mathcal{X}$. Unfortunately, it is very difficult to optimize the primal integral directly, as it depends on the \textit{sequence} of incumbents found over time during \bnb{}. The primal integral also depends on the way the search tree is explored, which is affected by pruning, further complicating any attempt at directly optimizing this primal metric.
We address this difficulty by considering a more practical surrogate objective. Recall that
$\mathcal{N}_{\mathcal{X}}$ denotes the collection of search tree nodes of
the set of training instances $\mathcal{X}$. We will construct a schedule $S$ that
finds feasible solutions for a large fraction of the nodes in
$\mathcal{N}_{\mathcal{X}}$, while also minimizing the number of iterations
spent by schedule $S$. Note that we consider feasible solutions instead of
incumbents here: This way, we are able to obtain more data faster since a
heuristic finds a feasible solution more often than a new incumbent. The
framework we propose in the following can handle incumbents instead, but we
have found no benefit in that in preliminary experiments.
For a heuristic $h \in \mathcal{H}$ and node $N$, denote by $t(h,N)$
the iterations necessary for $h$ to find a
solution at node $N$, and set $t(h,N) = \infty$ if $h$ does not succeed at $N$.
%
Now suppose a schedule $S$ is successful at node $N$, i.e., some heuristic finds a solution within the budget allocated to it in $S$. Let
\[
j_S = \text{min}\{j = 1,\ldots,k \mid t(h_j,N) \leq \tau_j \}
\]
be the index of the first successful heuristic. Following the (successful) execution of $h_{j_S}$, the
heuristic loop is terminated, and the time spent by schedule $S$ at node
$N$ is given by
%
\begin{align*}
T(S,N) \coloneqq \sum_{i = 1}^{j_S-1} \tau_i + t(h_{j_S},N).
\end{align*}
%
Otherwise, set $T(S,N) \coloneqq \sum_{i = 1}^{k} \tau_i + 1$\footnote{We
add $1$ to penalize unsolved nodes.}.
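A small pure-Python sketch, assuming a schedule represented as a list of (heuristic, budget) pairs (the data layout is ours), makes the definition of $T(S,N)$ concrete:

```python
INF = float("inf")

def schedule_time(schedule, t_node):
    """Time T(S, N) spent by a schedule S at a node N.
    `schedule` is a list of (heuristic, budget tau) pairs; `t_node` maps
    a heuristic to t(h, N), the iterations it needs to find a solution at
    the node (missing entries mean it never succeeds there).
    Returns (time spent, whether the schedule succeeded)."""
    spent = 0
    for h, tau in schedule:
        t = t_node.get(h, INF)
        if t <= tau:                # h succeeds within its budget
            return spent + t, True  # heuristic loop terminates here
        spent += tau                # full budget consumed, move on
    return spent + 1, False         # +1 penalizes unsolved nodes
```

For the schedule $\langle (h_1,1), (h_2,3) \rangle$, a node where $h_1$ fails and $h_2$ needs 3 iterations costs $1 + 3 = 4$; a node where both fail costs $1 + 3 + 1 = 5$.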
Furthermore, let $\mathcal{N}_S$ denote the set of nodes at which schedule
$S$ is successful in finding a solution.
Then, we consider the heuristic scheduling problem given by
%
\begin{equation} \label{eq:schedulingproblem} \tag{$P_{\mathcal{S}}$}
\underset{S \in \mathcal{S}}{\text{min}}
\sum_{N \in \mathcal{N}_{\mathcal{X}}} T(S,N)
\;\text{ s.t. }\;
|\mathcal{N}_S| \geq \alpha |\mathcal{N}_{\mathcal{X}}|.
\end{equation}
%
Here $\alpha \in [0,1]$ denotes a minimum fraction of nodes at which we want the schedule to find a solution.
Problem \eqref{eq:schedulingproblem} can be formulated as a Mixed-Integer
Quadratic Program (MIQP); the exact formulation can be found in
Appendix \ref{sec:mip}.
To find such a schedule, we need to know $t(h,N)$ for every heuristic $h$
and node $N$. Hence, when collecting data for the instances in the training
set $\mathcal{X}$, we track for every B\&B node $N$ at which a
heuristic $h$ was called, the number of iterations $\tau_N^h$ it took $h$ to
find a feasible solution; we set $\tau_N^h = \infty$ if $h$ does not succeed at $N$.
Formally, we require a training dataset
%
\begin{align*}
    \mathcal{D} \coloneqq \{ (h, N, \tau_N^h) \mid h \in \mathcal{H}, N \in
\mathcal{N}_{\mathcal{X}}, \tau_N^h \in \mathbb{R}_+ \cup \{ \infty \}
\}.
\end{align*}
%
Section \ref{sec:datacollection} describes a computationally efficient
approach for building $\mathcal{D}$ using a \textit{single} \bnb{} run per
training instance.
\section{Solving the Scheduling Problem}
\label{sec:scheduling}
Problem \eqref{eq:schedulingproblem} is a generalization of the Pipelined
Set Cover Problem which is known to be $\mathcal{NP}$-hard
\cite{munagala05}. As for the MIQP in Appendix \ref{sec:mip}, tackling it
using a
non-linear integer programming solver is challenging: the MIQP has
$O(|\mathcal{H}||\mathcal{N}_{\mathcal{X}}|)$ variables and constraints,
and a single training instance may involve thousands of search tree nodes,
leading to an MIQP with hundreds of thousands of variables and constraints even
with a handful of heuristics and tens of training instances.
As already mentioned in the beginning, one approach to finding a schedule
that heuristically solves \eqref{eq:schedulingproblem} is using a
hyperparameter tuning software like SMAC \cite{HutterHoosLeytonbrown2011}. Since SMAC is a
sequential algorithm that searches for a good parameter configuration by
successively adapting and re-testing its best settings, training a SMAC
schedule can get very expensive quickly. In the following, we present a more
efficient approach.
We now direct our attention towards designing an efficient heuristic
algorithm for~\eqref{eq:schedulingproblem}.
A similar problem was studied by
\cite{streeter07} in the context of decision problems. Among other things,
the author discusses how
to find a schedule of (randomized) heuristics that minimizes the expected
time necessary to solve a set of training instances $\mathcal{X}$ of a decision problem. Although this setting is somewhat similar to ours, the two differ significantly in several aspects:
%
\begin{enumerate}
\item \textit{Decision problems are considered instead of MIPs:}
Solving a MIP is generally much more challenging than solving a
decision problem.
When solving a MIP with B\&B, we normally have to solve
many linear subproblems. Since in theory, every such LP is an
opportunity for a heuristic to find a new incumbent, we consider the
set of nodes $\mathcal{N}_{\mathcal{X}}$ instead of $\mathcal{X}$ as
the ``instances'' we want to solve.
\item \textit{A heuristic call can be suspended and resumed:} In the
work of~\cite{streeter07}, a heuristic can be executed in a
``suspend-and-resume
model'': If $h$ was executed before, the action $(h, \tau)$ represents
\textit{continuing} a heuristic run for an additional $\tau$ iterations.
When $h$ reaches the iteration limit, the run is suspended and its state kept in memory such that it can be resumed later in the schedule.
    The ``suspend-and-resume'' model is not used in MIP solving due to challenges in maintaining the states of heuristics in memory. As such, we allow every heuristic to be included in the schedule at most once.
%
\item \textit{Time is used to control the duration of a heuristic run:}
Controlling time directly is unreliable in practice and can lead to
nondeterministic behavior of the solver. Instead, we rely on different
proxy measures for different classes of
heuristics. Thus, when building a schedule that contains heuristics of
distinct types (e.g., diving and LNS heuristics), we need to ensure that these measures are comparable.
\end{enumerate}
Despite these differences, it is useful to examine the greedy scheduling
approach proposed by~\cite{streeter07}. A
schedule is built by successively adding the action $(h,\tau)$ to $G$
that maximizes the ratio of the marginal increase in the number of instances solved to the cost (i.e., $\tau$) of
including $(h, \tau)$. As shown in Corollary 2 of \cite{streeter07}, the
\textit{greedy schedule} $G$ yields a 4-approximation of that version of the
scheduling problem. In an attempt to leverage this elegant heuristic in our problem~\eqref{eq:schedulingproblem}, we will describe it formally.
Let us denote the greedy schedule by $G \coloneqq \langle g_1, \dots, g_k
\rangle$. Then, $G$ is defined inductively by setting $G_0 = \langle
\rangle$ and $G_j = \langle g_1, \dots, g_{j} \rangle $ with
%
\begin{equation*}
\begin{aligned}
g_j = \underset{(h,\tau) \in \mathcal{H}_{j-1} \times
\mathcal{T}}{\text{argmax}} \frac{|\{ N \in
\mathcal{N}_{\mathcal{X}}^{j-1} \mid \tau_N^h \leq \tau\}|}{\tau}.
\end{aligned}
\end{equation*}
%
Here, $\mathcal{H}_i$ denotes the set of heuristics that are not yet in
$G_i$,
$\mathcal{N}_{\mathcal{X}}^{i}$ denotes the subset of nodes where $G_i$ is not yet successful in finding a solution, and
$\mathcal{T}$ is the interval generated by all possible iteration limits in
$\mathcal{D}$, i.e.,
\begin{align*}
    \mathcal{T} \coloneqq [\text{min}\{\tau_N^h \mid (h, N, \tau_N^h) \in
    \mathcal{D}\}, \text{max}\{\tau_N^h \mid (h, N, \tau_N^h) \in
    \mathcal{D}\}].
\end{align*}
%
We stop adding actions $g_j$ when $G_j$ finds a solution at all nodes
in $\mathcal{N}_{\mathcal{X}}$ or all heuristics are
contained in the schedule, i.e., $\mathcal{H}_j = \emptyset$.
Unfortunately, we can show that the resulting schedule can perform arbitrarily badly in our setting. Consider the following situation. We assume that there are
100 nodes in $\mathcal{N}_{\mathcal{X}}$ and only one heuristic $h$. This
heuristic solves one node in just one
iteration and takes 100 iterations each for the other 99 nodes. Following
the greedy approach, the resulting schedule would be $G = \langle (h,1)
\rangle$ since $\frac{1}{1} > \frac{99}{100}$. Whenever $\alpha > 0.01$,
$G$ would be infeasible for our constrained problem~\eqref{eq:schedulingproblem}. Since we are not allowed to add a heuristic more
than once, this cannot be fixed with the current algorithm.
To avoid this situation, we propose the following modification. Instead
of only considering the heuristics that are not in $G_{j-1}$ when choosing
the next action $g_j$, we also consider the option to run the last
heuristic $h_{j-1}$ of $G_{j-1}$ for longer. That is, we allow choosing
$(h_{j-1}, \tau)$ with $\tau > \tau_{j-1}$. Note that the cost of
adding $(h_{j-1}, \tau)$ to the schedule is not $\tau$, but $\tau -
\tau_{j-1}$, since we decide to run $h_{j-1}$ for $\tau - \tau_{j-1}$
iterations longer and not to rerun $h_{j-1}$ for $\tau$ iterations.
Furthermore, when including different classes of heuristics in the
schedule, the respective time measures are not necessarily comparable (see
Figure \ref{fig:heurcost}). To circumvent this problem, we use the average
time per iteration to normalize different notions of iterations. In the
following, we denote the average cost of an iteration by $t^h_{avg}$ for
heuristic $h$. Note that $t^h_{avg}$ can be easily computed by also
tracking the duration (w.r.t. time) of a heuristic run during data collection.
\begin{figure}[t]
\centering
\includegraphics[width=.41\textwidth]{heuritercost_sorted_legend_new_2.pdf}
\caption{\textbf{Comparison of average cost of iterations for
different primal heuristics:} While the cost of an iteration is
relatively similar among heuristics of the same type, they differ
significantly when comparing diving and LNS with each other. On
average, an iteration for LNS heuristics (number of nodes in
sub-MIP)
is much more expensive than for diving heuristics (maximal diving
depth).}
\label{fig:heurcost}
\end{figure}
%
Hence, we redefine $g_j$ and obtain
%
\begin{equation*}
\begin{aligned}
g_j = \underset{(h,\tau) \in \mathcal{A}_{j-1}}{\text{argmax}}
\frac{|\{
N \in \mathcal{N}_{\mathcal{X}}^{j-1} \mid \tau_N^h \leq
\tau\}|}{c_{j-1}(h,\tau)},
\end{aligned}
\end{equation*}
%
with $\mathcal{A}_i \coloneqq (\mathcal{H}_i \times \mathcal{T}) \cup \{
(h_i, \tau) \mid \tau > \tau_i, \tau \in \mathcal{T}\}$ and
%
\begin{align*}
c_{i}(h,\tau) \coloneqq
\begin{cases}
t^h_{avg} \tau, &\text{ if $h \neq h_{i}$} \\
t^h_{avg} (\tau - \tau_{i}), &\text{ otherwise}.
\end{cases}
\end{align*}
%
We set $\mathcal{A}_0 \coloneqq \mathcal{H} \times \mathcal{T}$ and
$c_0(h, \tau) = t^h_{avg} \tau$. With this modification, we would obtain
the schedule
$G = \langle (h,100) \rangle$ (which solves all 100 nodes) in the above
example.
Finally, note that this greedy procedure still does not explicitly enforce that the
schedule is successful at a fraction of at least $\alpha$ nodes. In our experiments,
however, we observe that the resulting schedules reach a success rate of
$98\%$ or above. The final formulation of the
algorithm can be found in Algorithm \ref{alg:greedy}.
\begin{algorithm}[t]
\caption{Greedy algorithm to obtain a schedule}
\label{alg:greedy}
\begin{algorithmic}
\STATE {\bfseries Input:} Nodes $\mathcal{N}_\mathcal{X}$,
heuristics $\mathcal{H}$, data $D$, time frame $\mathcal{T}$
\STATE {\bfseries Output:} Greedy Schedule $G$
\STATE $G \gets \langle \rangle$
\STATE $\mathcal{N}_{unsol} \gets \mathcal{N}_\mathcal{X}$
\STATE $improve \gets$ TRUE
\REPEAT
\STATE $(h^*,\tau^*) \gets \underset{(h,\tau) \in
\mathcal{A}}{\text{argmax}} \left[ \frac{ | \{ N
\in \mathcal{N}_{unsol} \mid \tau_h^N \leq
\tau \} |}{c(h,\tau)} \right]$
\IF{$\frac{ | \{ N \in \mathcal{N}_{unsol} \mid \tau_{h^*}^N \leq
\tau^* \} |}{c(h^*,\tau^*)} > 0$}
\STATE $G \gets G \oplus \langle(h^*,\tau^*) \rangle$
\STATE $\mathcal{N}_{unsol} \gets \mathcal{N}_{unsol} \setminus \{
N \in \mathcal{N}_{unsol} \mid \tau_{h^*}^N \leq \tau^* \}$
\ELSE
\STATE $improve \gets$ FALSE
\ENDIF
\UNTIL{$improve ==$ FALSE}
\end{algorithmic}
\end{algorithm}
\textbf{Example.} Figure \ref{fig:example} shows an example of how we
obtain a schedule with three heuristics and three nodes. As the left
figure indicates, the data set is given by
\begin{align*}
\mathcal{D} = \{&(h_1, N_1, 1), (h_1, N_2, \infty), (h_1, N_3,
\infty), (h_2, N_1, 4), (h_2, N_2, 3), \\
&(h_2, N_3, 3), (h_3, N_1,
\infty), (h_3, N_2, 4), (h_3, N_3, 2) \}.
\end{align*}
Let us now assume that an iteration of each heuristic has the same
average cost, i.e., $t^{h_1}_{avg} = t^{h_2}_{avg} = t^{h_3}_{avg}$. We
build a schedule $G$ as follows. First, we add the action $(h_1,1)$, since
$h_1$ solves one node with only one iteration, yielding a ratio that cannot
be beaten by the other heuristics. No other node can be solved by $h_1$,
hence neither $h_1$ nor node $N_1$ has to be considered anymore. Among the
remaining possibilities, the action $(h_2, 3)$ is the best, since $h_2$
solves both nodes in three iterations yielding a ratio of $\frac{2}{3}$. In
contrast, executing $h_3$ for two and four iterations, respectively, would
yield a ratio of $\frac{1}{2}$. Since this is smaller, we add $(h_2, 3)$ to
$G$ and obtain the schedule $G = \langle (h_1,1), (h_2,3) \rangle$,
which solves all three nodes as shown on the right of Figure
\ref{fig:example}. It is easy to see that this schedule is an optimal
solution of \eqref{eq:schedulingproblem} for $\alpha > \frac{1}{3}$.
\begin{figure}[H]
\centering
\includegraphics[width=.6\textwidth]{illustration3.pdf}
\caption{\textbf{Example of how to obtain a heuristic schedule from
data:} The data is shown on the left for three heuristics and nodes and
the (optimal) schedule obtained by following Algorithm \ref{alg:greedy}
is
illustrated on the right.}
\label{fig:example}
\end{figure}
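The greedy construction of Algorithm~\ref{alg:greedy}, including the rule that only the last scheduled heuristic may be extended (and then only pays for the additional iterations), can be sketched in Python. This is a minimal illustration; the candidate iteration limits and the unit average iteration costs below are assumptions made for the example, not part of the solver.

```python
import math

def greedy_schedule(data, heuristics, taus, t_avg):
    """Sketch of the greedy schedule construction (Algorithm 1).

    data maps (heuristic, node) -> iterations needed to find a solution
    (math.inf if the heuristic never succeeds at that node); taus is the
    set of candidate iteration limits; t_avg maps each heuristic to its
    average cost per iteration.
    """
    unsolved = {N for (_, N) in data}
    schedule, used = [], set()
    last_h, last_tau = None, 0
    while True:
        best, best_ratio = None, 0.0
        for h in heuristics:
            for tau in taus:
                # A_i: heuristics not yet used, or a longer run of the last one
                if (h in used and h != last_h) or (h == last_h and tau <= last_tau):
                    continue
                solved = sum(1 for N in unsolved if data[(h, N)] <= tau)
                # c_i: extending the last action only pays the extra iterations
                cost = t_avg[h] * (tau - (last_tau if h == last_h else 0))
                ratio = solved / cost
                if ratio > best_ratio:
                    best, best_ratio = (h, tau), ratio
        if best is None:              # no action solves another node: stop
            return schedule
        h, tau = best
        unsolved = {N for N in unsolved if data[(h, N)] > tau}
        if h == last_h:
            schedule[-1] = (h, tau)   # extend the previous action
        else:
            schedule.append((h, tau))
        used.add(h)
        last_h, last_tau = h, tau
```

On the data of the example above (three heuristics, three nodes, equal iteration costs), this sketch returns the schedule $\langle (h_1,1), (h_2,3) \rangle$, matching Figure~\ref{fig:example}.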
\section{Data Collection}
\label{sec:datacollection}
The scheduling approach described thus far rests on the availability of a
training data set of the form
\begin{align*}
\mathcal{D} \coloneqq \{ (h, N, \tau_N^h) \mid h \in \mathcal{H}, N \in
\mathcal{N}_{\mathcal{X}}, \tau_N^h \in \mathbb{R}_+ \cup \{ \infty \}
\}.
\end{align*}
In words, each entry in the data set $\mathcal{D}$ is a triplet containing: the
index of a heuristic $h$; the index of a \bnb{} node $N$ coming from one of
the training instances in $\mathcal{X}$; the number of iterations required
by $h$ to find a feasible solution at the node $N$. The latter piece of
information, $\tau_N^h$, must be collected by executing the heuristic and
observing its performance. Two main challenges arise in collecting such a
data set for multiple heuristics:
%
\begin{enumerate}
\item \textit{Efficient data collection:}
Solving $\mathcal{NP}$-hard MIPs by \bnb{} remains
computationally expensive, even given the sophisticated techniques
implemented in today's solvers. This poses difficulties to ML
approaches that create a single reward signal from one MIP evaluation,
which may take several minutes up to hours. This holds in particular
for challenging problem classes that are the focus of this work. In
other words, even with a handful of heuristics, i.e., a small set
$\mathcal{H}$, it is prohibitive to run \bnb{} once for each
heuristic-training instance pair in order to construct the data set
$\mathcal{D}$.
\item \textit{Obtaining unbiased data:}
On the other hand, executing multiple heuristics at each node of the search tree during data collection can have dangerous side effects: if a heuristic finds an incumbent, subsequent heuristics are no longer executed at the same node, as described in Section~\ref{sec:ordering}.
\end{enumerate}
%
We address the first point by using a specially crafted version of the MIP
solver for collecting \emph{multiple reward signals} for the execution of
\emph{multiple heuristics} per single MIP evaluation during the training
phase. As a result, we obtain a large amount of data points that scales with
the running time of the MIP solves. This has the clear advantage that the
efficiency of our data collection does not automatically decrease when the
time to evaluate a single MIP increases for more challenging problems.
To address the second point and prevent bias from mutual interaction of
different heuristic calls during training, we engineered the MIP solver to be
executed in a special \emph{shadow mode}, where heuristics are called in a
sandbox environment and interaction with the main solving path is maximally
reduced. In particular this means that new incumbents and primal bounds are
not communicated back, but only recorded for training data. This setting is
an improved version of the shadow mode introduced in \cite{khalil17}.
As a result of these measures, we have instrumented the SCIP solver in a way
that allows for the collection of a proper data set $\mathcal{D}$ with a
\textit{single run} of the Branch-and-Bound algorithm per training instance.
\section{Computational Results}
\label{sec:expresults}
We will now detail our computational results. The code we used for
data collection and scheduling is publicly
available\footnote{\url{https://github.com/antoniach/heuristic-scheduling}}.
\subsection{Heuristics}
\label{sec:heuristics}
We can build a schedule containing arbitrary heuristics as long as they
have some type of time measure. In this work, we focus on two
broad groups of heuristics: \textit{Diving} and \textit{Large Neighborhood
Search (LNS)}. Both classes are much more computationally expensive than
simpler heuristics like rounding, but are generally also more likely to
find (good) solutions \cite{berthold06}. That is why it is particularly
important to schedule
these heuristics most economically.
\textbf{Diving Heuristics.} Diving heuristics examine
a single probing path by successively fixing variables according to a
specific rule.
There are multiple ways of controlling the duration of a dive. After
careful consideration of the different options, we decided on using the
maximum diving depth to limit the cost of a call to a diving heuristic: It
is both related to the effort spent by the heuristic and its likelihood of
success.
\textbf{LNS Heuristics.} This class of heuristics first builds a
neighborhood of some reference point
which is then searched for improving solutions by solving a sub-MIP.
To control the duration of a call to an LNS heuristic, we choose to limit
the number of nodes in the sub-MIP.
The idea behind this measure is similar to limiting the diving depth of
diving heuristics: In both cases, we control the number of subproblems that
a heuristic considers within its execution. Nevertheless, the two measures are
not directly comparable, as shown in Figure \ref{fig:heurcost}.
To summarize, we use 16 primal heuristics in our schedule: ten diving and
six LNS heuristics.
By controlling this set, we cover the majority of the
more complex heuristics implemented in SCIP.
All other heuristics are executed after the schedule according to their
default settings.
\subsection{Instances}
\label{sec:instances}
Since our goal is to improve the primal performance of a solver, we focus
on a primally challenging problem class: The
\textit{Generalized Independent Set Problem (GISP)}
\cite{hochbaum97,colombi17}.
In the following, we will briefly explain how we generate and partition the
instances.
Let $G = (V,E)$ be a graph and $E' \subseteq E$ a subset of
removable edges. Each vertex has a revenue and every edge has
a cost associated with it. Then, GISP asks to select a subset of
vertices and removable edges that maximizes the profit, i.e., the
difference of vertex revenues and edge costs. No edge may remain
between two selected vertices $v,u \in V$, i.e., either $(v,u)
\notin E$ or $(v,u) \in E'$ is removed.
We generate
GISP instances in the following way. Given a graph, we
randomize the set of removable edges by setting the probability that an
edge is in $E'$ to $\alpha = 0.75$. Furthermore, we choose the revenue for
each node to be $100$ and the cost of every edge as $1$. This results in a
configuration for which it is difficult to find good feasible solutions as
shown in \cite{colombi17}.
We use this scheme to generate two types of instances. The first one takes
graphs from the 1993 DIMACS Challenge, which are also used in \cite{khalil17,
colombi17}. We focus on the same twelve dense graphs as well as
the same train/test partition as in \cite{khalil17}.
The training set consists of six graphs with 125--300 nodes and
6963--20864 edges, whereas the testing graphs are considerably larger with
250--400 nodes and 21928--71819 edges. We generate 20 instances for every
graph by using different random seeds,
leaving us with 120 instances for training as well as testing.
For the second group of GISP instances, we use randomly generated graphs
where the number of nodes is uniformly chosen from $\{L, \dots, U\}$ for
bounds $L,U \in \mathbb{N}$. An edge is added to the resulting graph with
probability $\bar{\alpha} = 0.1$, giving us slightly less dense graphs than
the previous case.
We denote these sets by \textsc{[L,U]}. For each set, we generate 25
instances for training and 10 for testing. The smallest set of graphs then
has 150--160 nodes and 1099--1268 edges whereas the largest set consists of
graphs with 550--560 nodes and 14932--15660 edges.
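The instance-generation scheme described above can be sketched in Python. This is a minimal illustration under stated assumptions: the function names and the pure-Python edge-list representation are our own, and the feasibility check is a direct reading of the GISP definition, not the MIP model actually solved.

```python
import random

def generate_gisp(n, edge_prob=0.1, removable_prob=0.75, seed=0):
    """Random GISP instance as described in the text: a random graph on
    n vertices where each edge exists with probability edge_prob, and
    each existing edge is removable with probability removable_prob."""
    rng = random.Random(seed)
    vertices = list(range(n))
    edges = [(u, v) for u in vertices for v in vertices
             if u < v and rng.random() < edge_prob]
    removable = {e for e in edges if rng.random() < removable_prob}
    return vertices, edges, removable

def gisp_profit(selected, removed, edges, removable,
                revenue=100, edge_cost=1):
    """Objective of a candidate solution (vertex revenue minus edge
    removal cost), or None if the solution is infeasible."""
    sel = set(selected)
    if any(e not in removable for e in removed):
        return None   # only removable edges may be deleted
    for (u, v) in edges:
        if u in sel and v in sel and (u, v) not in removed:
            return None   # two selected vertices remain connected
    return revenue * len(sel) - edge_cost * len(removed)
```

With the revenues and costs used in the text (100 per vertex, 1 per removed edge), selecting both endpoints of a single removable edge and removing it yields a profit of $2 \cdot 100 - 1 = 199$.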
\subsection{Results}
\label{sec:results}
To study the performance of our approach, we used the state-of-the-art
solver SCIP 7.0 \cite{gamrath20} with CPLEX
12.10.0.0\footnote{\url{https://www.ibm.com/products/ilog-cplex-optimization-studio}}
as the underlying LP solver. We needed to modify SCIP's source code to
collect data as described in Section \ref{sec:datacollection}, as well as to
control heuristic parameters that are not already exposed by default.
For our experiments, we used a Linux cluster of Intel Xeon CPU E5-2660 v3
2.60GHz with 25MB cache and 128GB main memory. The time limit in all
experiments was set to two hours; for data collection, we used a time limit
of four hours. Since the primal integral depends on time, we
ran one process at a time on every machine, allowing for accurate time measurements.
MIP solver performance can be highly sensitive to even small and seemingly
performance-neutral perturbations during the solving process~\cite{lodi13},
a phenomenon referred to as \textit{performance variability}. To account
for this, we implemented a more exhaustive testing framework than the
benchmark methodology commonly used in MIP, employing extensive
cross-validation in addition to multiple random seeds.
In addition to comparing our scheduling method against default SCIP, we
also compare against \textsc{scip\_tuned}, a hand-tuned version of SCIP's
default settings for GISP\footnote{We set the frequency offset to 0 for all
diving heuristics.}. Since in practice, a MIP expert would try to
manually optimize some parameters when dealing with a homogeneous set of
instances, we emulated that process to create an even stronger baseline to
compare our method against.
\textbf{Random graph instances.} Table \ref{table:crossvalidation} shows
the results of the cross-validation experiments for schedules with
diving heuristics. Our scheduling framework yields a significant improvement w.r.t. primal
integral on all test sets. Since this improvement is consistent over all
schedules and test sets, we are able to validate that the behavior
actually comes from our procedure. Especially remarkable is the fact that
the schedules trained on smaller instances also perform well on much larger
instances.
Note that the instances in the first three test
sets were solved to optimality by all settings whereas the remaining ones
terminated after two hours without a provably optimal solution. When
looking at the instances that were not solved to optimality, we can see
that the schedules perform especially well on instances of increasing
difficulty. This behavior is intuitive: Since our method aims to improve
the primal performance of a solver, it performs better when an instance is very
challenging on the primal side.
\begin{table}
\renewcommand{\arraystretch}{1.35}
\tiny
\centering
\begin{tabular}{lcccccccccc}
\hline
\hline
\diagbox[width=6em]{train}{test} & \textsc{[150,160]} &
\textsc{[200,210]} &
\textsc{[250,260]} & \textsc{[300,310]} & \textsc{[350,360]} &
\textsc{[400,410]} & \textsc{[450,460]} & \textsc{[500,510]} &
\textsc{[550,560]} \\
\hline
\textsc{[150,160]} & $0.89 \pm 0.23$ & $0.76 \pm 0.22$ & $0.87
\pm 0.37$ &
$0.95 \pm 0.40$ & $0.87 \pm 0.28$ & $0.86 \pm 0.23$ & $0.78 \pm
0.24$ &
$0.80 \pm 0.25$ &
$0.65 \pm 0.24$ \\
\textsc{[200,210]} & $0.94 \pm 0.28$ & $0.75 \pm 0.25$ & $0.82
\pm 0.30$ &
$0.91 \pm 0.34$ & $0.93 \pm 0.23$ & $0.90 \pm 0.28$ & $0.83 \pm
0.22$ &
$0.79 \pm 0.20$ &
$0.66 \pm 0.20$ \\
\textsc{[250,260]} & $0.89 \pm 0.28$ & $0.69 \pm 0.23$ & $0.81
\pm 0.34$ &
$0.94 \pm 0.40$ & $0.92 \pm 0.23$ & $0.96 \pm 0.39$ & $0.81 \pm
0.24$ &
$0.76 \pm 0.22$ &
$0.66 \pm 0.20$ \\
\textsc{[300,310]} & $0.87 \pm 0.25$ & $0.71 \pm 0.26$ & $0.83
\pm 0.36$ &
$0.97 \pm 0.39$ &
$0.92 \pm 0.28$ & $0.90 \pm 0.35$ & $0.81 \pm 0.24$ & $0.75 \pm
0.24$ &
$0.61 \pm 0.24$ \\
\textsc{[350,360]} & $0.84 \pm 0.24$ & $0.70 \pm 0.23$ & $0.82
\pm 0.36$ &
$0.91 \pm 0.37$ &
$0.81 \pm 0.26$ & $0.86 \pm 0.31$ & $0.80 \pm 0.21$ & $0.75 \pm
0.19$ &
$0.59 \pm 0.20$ \\
\textsc{[400,410]} & $0.90 \pm 0.27$ & $0.70 \pm 0.23$ & $0.83
\pm 0.36$ &
$0.88 \pm 0.32$ &
$0.77 \pm 0.23$ & $0.88 \pm 0.30$ & $0.81 \pm 0.21$ & $0.74 \pm
0.20$ &
$0.58 \pm 0.20$ \\
\textsc{[450,460]} & $0.89 \pm 0.25$ & $0.70 \pm 0.23$ & $0.83
\pm 0.36$ &
$0.88 \pm 0.32$ &
$0.77 \pm 0.23$ & $0.88 \pm 0.30$ & $0.81 \pm 0.21$ & $0.74 \pm
0.20$ &
$0.58 \pm 0.20$ \\
\textsc{[500,510]} & $0.89 \pm 0.26$ & $0.72 \pm 0.22$ & $0.84
\pm 0.30$ &
$0.99 \pm 0.42$ & $0.92 \pm 0.24$ & $0.95 \pm 0.46$ & $0.81 \pm
0.23$ &
$0.80 \pm 0.25$ &
$0.61 \pm 0.20$ \\
\textsc{[550,560]} & $0.88 \pm 0.26$ & $0.72 \pm 0.24$ & $0.89
\pm 0.42$ &
$0.95 \pm 0.37$ &
$0.86 \pm 0.27$ & $0.90 \pm 0.28$ & $0.81 \pm 0.20$ & $0.78 \pm
0.23$ &
$0.63 \pm 0.21$ \\
\hline
\textsc{SCIP\_TUNED} & $0.89 \pm 0.28$ & $0.77 \pm 0.23$ & $0.99
\pm 0.31$ &
$1.08 \pm 0.45$ &
$1.05 \pm 0.28$ & $1.03 \pm 0.38$ & $0.94 \pm 0.23$ & $0.91 \pm
0.28$ &
$0.76 \pm 0.25$ \\
\hline
\hline
\end{tabular}
\caption{Average relative primal integral (mean $\pm$ std.) of the
schedule (with diving heuristics) w.r.t. default SCIP over GISP
instances derived from random graphs. The first nine rows correspond
to schedules that were trained on instances of size \textsc{[l,u]}.}
\label{table:crossvalidation}
\end{table}
\begin{table}[t]
\renewcommand{\arraystretch}{1.35}
\tiny
\centering
\begin{tabular}{lcc}
\hline
\hline
schedule & better primal integral & better primal bound \\
\hline
\textsc{[150,160]} & 0.69 & 0.70 \\
\textsc{[200,210]} & 0.69 & 0.65 \\
\textsc{[250,260]} & 0.68 & 0.55 \\
\textsc{[300,310]} & 0.72 & 0.58 \\
\textsc{[350,360]} & 0.76 & 0.62 \\
\textsc{[400,410]} & 0.75 & 0.61 \\
\textsc{[450,460]} & 0.75 & 0.61 \\
\textsc{[500,510]} & 0.68 & 0.58 \\
\textsc{[550,560]} & 0.70 & 0.59 \\
\hline
\hline
\end{tabular}
\caption{Fraction of instances for which our method's schedule (with diving
heuristics) has a better
primal integral/bound at termination w.r.t. \textsc{scip\_tuned}. Only
instances that were not solved to optimality by both
\textsc{scip\_tuned} and the schedule are considered in the second
column.}
\label{table:crossvalidation2}
\end{table}
Over all test sets, the schedules terminated with a strictly better primal
integral on 69--76\% and with a strictly better primal bound on 59--70\%
of the instances compared to \textsc{scip\_tuned} (see Table
\ref{table:crossvalidation2}).
Table \ref{table:crossvalidation_LNS} shows a part of the
cross-validation experiments for schedules containing diving and LNS
heuristics. As expected, including both classes of heuristics improves the
overall performance of the schedule. In this case, the improvement is only
marginal since on the instances we consider, diving seems to perform
significantly better than LNS.
\begin{table}[t]
\renewcommand{\arraystretch}{1.35}
\tiny
\centering
\begin{tabular}{lccccc}
\hline
\hline
\diagbox[width=6em]{train}{test} & \textsc{[150,160]} &
\textsc{[200,210]} &
\textsc{[250,260]} & \textsc{[300,310]} & \textsc{[350,360]} \\
\hline
\textsc{[150,160]} & \hlfair{$0.84 \pm 0.19$} & \hlintense{$0.65
\pm 0.29$} & $0.89 \pm 0.35$ & \hlmediocre{$0.91 \pm 0.32$} &
$0.88 \pm 0.28$ \\
\textsc{[200,210]} & \hlmediocre{$0.87 \pm 0.16$} & $0.76 \pm
0.33$ & $0.91 \pm 0.32$ & $0.93 \pm 0.34$ & \hlmediocre{$0.89 \pm
0.29$} \\
\textsc{[250,260]} & \hlfair{$0.83 \pm 0.18$} & $0.71 \pm 0.34$ &
$0.89 \pm 0.31$ & \hlmediocre{$0.90 \pm 0.29$} & \hlmediocre{$0.87
\pm 0.32$} \\
\textsc{[300,310]} & \hlmediocre{$0.81 \pm 0.19$} &
\hlintense{$0.62 \pm 0.24$} & $0.91 \pm 0.42$ &
\hlmediocre{$0.92 \pm 0.32$} & $0.94 \pm 0.32$ \\
\textsc{[350,360]} & \hlfair{$0.82 \pm 0.19$} & \hlintense{$0.61
\pm 0.25$} & $0.84 \pm 0.42$ & \hlmediocre{$0.86 \pm 0.23$} &
$0.86 \pm 0.28$ \\
\hline
\textsc{SCIP\_TUNED} & $0.89 \pm 0.28$ & $0.77 \pm 0.23$ & $0.99
\pm
0.31$
&
$1.08 \pm 0.45$ & $1.05 \pm 0.28$ \\
\hline
\hline
\end{tabular}
\caption{Average relative primal integral (mean $\pm$ std.) of schedule
(with diving and LNS heuristics) w.r.t.
default
SCIP over GISP instances derived from random graphs. The first five
rows correspond to schedules that
were trained on instances of size \textsc{[l,u]}. On
highlighted entries, a schedule controling both diving and LNS
performs better than its diving counterpart (see Table
\ref{table:crossvalidation}). More intense colors denote higher
improvement.}
\label{table:crossvalidation_LNS}
\end{table}
\textbf{Finding a schedule with SMAC.}
As mentioned before, we can also
find a schedule by using the hyperparameter tuning tool SMAC. To test
SMAC's performance on the random graph instances, we trained ten SMAC
schedules on a selection of the nine training sets. To make it easier for
SMAC, we only considered diving heuristics in this case. For the sake of
comparability, we gave SMAC the same total computational time for training
as we did in data collection: With 25 training instances per set and a time
limit of four hours, this comes to 100 hours per training set and schedule.
Note that since SMAC runs sequentially, training the SMAC schedules took
over four days per schedule, whereas training a schedule following the
greedy algorithm only took four hours with enough machines. To pick the
best performing SMAC schedule for each training set, we ran all ten
schedules on the test set of the same size as the corresponding training
set
and chose the best performing one to also run on the other test sets.
The results can be found in Table \ref{table:crossvalidation_SMAC}. As we
can see, on all test sets, all schedules are significantly better than
default SCIP. However, when comparing these results to the performance of
the greedy schedules (see Table \ref{table:crossvalidation}), we can see
that SMAC performs worse on average. Over all five test sets, the SMAC
schedules terminated with a strictly better primal integral on 36--54\%
and with a strictly better primal bound on 37--55\% of the instances.
\begin{table}[t]
\renewcommand{\arraystretch}{1.35}
\tiny
\centering
\begin{tabular}{lccccc|cccc}
\hline
\hline
\multirow{2}{*}{\diagbox[width=6em]{train}{test}} &
\multirow{2}{*}{\textsc{[150,160]}} &
\multirow{2}{*}{\textsc{[250,260]}} &
\multirow{2}{*}{\textsc{[350,360]}} &
\multirow{2}{*}{\textsc{[450,460]}} &
\multirow{2}{*}{\textsc{[550,560]}} &&
\multicolumn{2}{c}{compared to \textsc{schedule}} &\\
\cline{8-9}
& & & & & && better primal integral & better primal bound &\\
\hline
\textsc{[150,160]} & \hlintense{$0.81 \pm 0.23$} &
\hlintense{$0.77 \pm
0.34$} & $0.90 \pm 0.27$ & $0.85 \pm 0.24$ & $0.65 \pm 0.19$ &&
0.49
& 0.37 &\\
\textsc{[250,260]} & \hlfair{$0.87 \pm 0.26$} & $0.88 \pm 0.42$ &
\hlmediocre{$0.87 \pm 0.25$} & $0.83 \pm 0.24$ & \hlintense{$0.59
\pm 0.22$} && 0.52 & 0.53 &\\
\textsc{[350,360]} & $0.86 \pm 0.24$ & \hlfair{$0.80 \pm 0.37$} &
$0.86
\pm
0.25$ & $0.80 \pm 0.24$ & $0.68 \pm 0.18$ && 0.47 &
0.42 &\\
\textsc{[450,460]} & $0.93 \pm 0.26$ & $0.87 \pm 0.32$ & $0.90 \pm
0.19$ & $0.85 \pm 0.25$ & $0.69 \pm 0.23$ && 0.36 &
0.44 &\\
\textsc{[550,560]} & \hlfair{$0.87 \pm 0.22$} & \hlmediocre{$0.83
\pm 0.31$} &
$0.92 \pm
0.29$ & $0.84 \pm 0.26$ & \hlmediocre{$0.58 \pm 0.21$} && 0.54 &
0.55 &\\
\hline
\textsc{scip\_tuned} & $0.89 \pm 0.28$ & $0.77 \pm 0.23$ & $0.99
\pm
0.31$
&
$1.08 \pm 0.45$ & $1.05 \pm 0.28$ && - & - &\\
\hline
\hline
\end{tabular}
\caption{Average relative primal integral (mean $\pm$ std.) of SMAC
schedule w.r.t. default
SCIP and the fraction of instances for which the SMAC schedule
has a better primal integral/bound at termination w.r.t. the greedy
schedule over GISP instances derived from random graphs. The
first five rows correspond to schedules (with diving heuristics)
that were trained with SMAC on instances of size \textsc{[l,u]}. On
highlighted entries, a SMAC schedule performs better than its
greedy counterpart (see Table \ref{table:crossvalidation}). More
intense colors denote higher improvement. Only instances that were
not solved to optimality by both SMAC and the greedy schedule are
considered in the last column.}
\label{table:crossvalidation_SMAC}
\end{table}
%
\textbf{DIMACS graph instances.} Table \ref{table:dimacs} summarizes the
results on the instances derived from DIMACS graphs. To stay consistent
with \cite{khalil17}, we only schedule diving heuristics. As we can see,
the schedule setting dominates default SCIP in all metrics, with an
especially drastic improvement in the primal integral, which the schedule
reduces by 49\%.
When looking at the total time spent in heuristics,
we see that heuristics run significantly shorter but with more success: On
average, the incumbent success rate is higher compared to
default SCIP.
\begin{table}
\renewcommand{\arraystretch}{1.35}
\scriptsize
\centering
\begin{tabular}{lS[table-format=4.2]S[table-format=4.2]S[table-format=4.2]}
\hline
\hline
& \textsc{DEFAULT} & \textsc{SCIP\_TUNED} & \textsc{SCHEDULE} \\
\hline
Primal Integral & 934.48 & 555.75 & 470.73 \\
Time to first Incumbent & 1.33 & 1.33 & 1.26 \\
Time to best Incumbent & 4266.68 & 2642.46 & 2803.38 \\
Best Incumbent & 2382.03 & 2385.73 & 2404.63 \\
\hline
Total heuristic calls* & 138.57 & 137.38 & 140.03 \\
Total heuristic time* & 258.88 & 304.96 & 190.10 \\
Number of Incumbents found* & 2.72 & 3.08 & 3.33 \\
Incumbent Success Rate* & 0.01 & 0.02 & 0.02 \\
\hline
Gap & 144.59 & 144.03 & 141.70 \\
Primal-dual Integral & 450148.72 & 435321.67 & 430882.04 \\
\hline
\hline
\end{tabular}
\caption{Summary of results on GISP instances derived from DIMACS
graphs. Values shown are aggregates over instances; geometric means
are
used. Statistics with * refer only to the heuristics used in the
schedule.}
\label{table:dimacs}
\end{table}
\section{Conclusion and Discussion}
In this work, we propose a data-driven framework for scheduling primal
heuristics in a MIP solver such that the primal performance is optimized.
Central to our approach is a novel formulation of the learning task as a
scheduling problem, an efficient data collection procedure, and a fast,
effective heuristic for solving the learning problem on a training dataset.
A comprehensive experimental evaluation shows that our approach
consistently learns heuristic schedules with better primal performance than
SCIP's default settings. Furthermore, by replacing our heuristic algorithm
with the hyperparameter tuning tool SMAC in our scheduling framework, we
still obtain a significant, albeit smaller, performance improvement
w.r.t. SCIP's default. Given this, together with the prohibitive
computational cost of SMAC, we conclude that for our heuristic scheduling
problem, the proposed heuristic algorithm constitutes an efficient
alternative to existing methods.
A possible limitation of our approach is that it produces a single, ``one-size-fits-all'' schedule for a class of training instances. It is thus natural to wonder whether alternative formulations of the learning problem that leverage additional contextual data about an input MIP instance and/or a heuristic can be useful. We note that learning a mapping from the space of MIP instances to the space of possible schedules is not trivial. The space of possible schedules is a highly structured output space that involves both the permutation over heuristics and their respective iteration limits. The approach proposed here is much simpler in nature, which makes it easy to implement and incorporate into a sophisticated MIP solver.
Although we have framed the heuristic scheduling problem in machine
learning terms, we are yet to analyze the learning-theoretic aspects of the
problem. More specifically, our approach is justified on empirical grounds
in Section~\ref{sec:expresults}, but we have not yet attempted to analyze potential generalization guarantees. We view the recent foundational
results by \cite{balcan2019much} as a promising framework that may apply to
our setting, as it has been used for the branching problem in
MIP \cite{balcan2018learning}.
\clearpage
\bibliographystyle{apalike}
\section{Introduction}
Type Ia supernovae (SNIa) are characterized by the lack of H--lines and the presence of Si II--lines in their optical spectra during the maximum of light as well as by the presence of Fe emission features during the nebular phase. Their optical light curve displays a sudden rise to the maximum followed by a rapid decay of $\sim 3$~mag in one month and by a slowly--fading tail. A noticeable property is the spectrophotometric homogeneity of the different outbursts. Furthermore, they appear in all kinds of galaxies including ellipticals. These properties point to an exploding object that is compact, free of hydrogen, that can be activated on short and long time scales, and is able to synthesize enough $^{56}$Ni to power the light curve. The most obvious candidate is a C/O white dwarf (WD) near the Chandrasekhar's limit in a close binary system that explodes as a consequence of mass accretion \cite{hoyl60}.
Despite their homogeneity, SNIa display some differences when observed in detail. Now it is
known that there is a group of SNIa with light curves showing very bright and
broad peaks, the SN1991T class, that represents 9\% of all the
events. There is another group with a much dimmer and narrower peak and that
lacks the characteristic secondary peak in the infrared, the SN1991bg class, that
represents 15\% of all the events. To these categories it has been
recently added a new one that contains very peculiar supernovae, the
SN2002cx or SNIax class, representing $\sim 5$\% of the total. These
supernovae are characterized by high ionization spectral features in
the pre-maximum, like the SN1991T class, a very low luminosity,
and the lack of a secondary maximum in the infrared, like the SN1991bg class. The
remaining ones, which amount to $\sim 70\%$, present normal behaviors and are known as
\emph{Branch-normal} \cite{li11a}. However, even the normal ones
are not completely homogeneous and show different luminosities at
maximum and light curves with different decline rates \cite{li11b}. This variety has recently increased with the discovery of SN2001ay, which is characterized by a fast rise and a very slow decline \cite{baro12}. This diversity strongly suggests that different scenarios and burning
mechanisms could be operating in the explosion.
In one-dimensional models, the explosion mechanism can be classified as \cite{hoef96,hill00}: the pure detonation model (DET), the pure deflagration model (DEF), the delayed detonation model (DDT), and the pulsating delayed detonation model (PDD). An additional class comprises the so-called Sub--Chandrasekhar (SCh) models, in which a detonation triggered by the ignition of He near the base of a freshly accreted helium layer completely burns the white dwarf. At present, there is no basic argument to reject any of them, except the DET ones, which are incompatible with the properties of
the spectrum of SNIa at maximum light. Present observations also pose severe constraints on
the total amount of $^{56}$Ni that can be produced by the He--layer in SCh models. Equivalent models in three dimensions also exist, with a larger variety of possibilities (see for instance \cite{brav09}).
According to the nature of the binary, progenitors can be classified as single degenerate (SD) if the companion is a normal star \cite{whel73} or double degenerate (DD) if the companion is a white dwarf \cite{webb84,iben85}. The distinction among them is important in order to interpret the observations since, depending on the case, the white dwarf can ignite below, near or above the Chandrasekhar's mass and the total mass ejected as well as the mass of $^{56}$Ni synthesized can be different. It is not known if both scenarios can coexist or just one is enough to account for the supernova variety. Observations of the stellar content in the interior of known SNIa remnants point towards one possible SD candidate in the case of Tycho Supernova \cite{ruiz04}, two almost certain DD candidates, SNR0509-67.5 and SNR0519-69.0 \cite{scha12,edwa12}, and the new evidence that there is not a surviving companion in SN1006 \cite{gonz12}.
The amount and distribution of the radioactive material produced in the explosion strongly depend on how the ignition starts and how the nuclear flame propagates \cite{gome98,iser08}. Therefore, the detection of $\gamma$--rays from supernovae could provide important insight on the nature of the progenitor and especially on the explosion mechanism. The advantages of using $\gamma$--rays for diagnostic purposes rely on their ability to distinguish among different isotopes and on the relative simplicity of their transport modelling.
In the case of close enough outbursts, less than $\sim 1$~Mpc, it would be possible to obtain high quality $\gamma$--ray spectra that would allow detailed comparisons with theoretical predictions. However, when more realistic distances are considered, the information provided by observations decreases drastically and only some outstanding features, like the intensity of the lines and of the continuum and the line profiles, have a chance to be detected \cite{gome98}. Because of the scarcity of close enough events, up to now it has only been possible to place upper limits on the SN1991T \cite{lich94} and SN1998bu \cite{geor02} events.
\section{INTEGRAL observations of SN 2011fe}
SN 2011fe (RA = 14:03:05.81, Dec = +54:16:25.4; J2000) was discovered in M101 on August 24th, 2011, $\sim 1$ day after the explosion \cite{nuge11}. The absence of hydrogen and helium, coupled with the presence of silicon in the spectrum clearly indicated it was a SNIa. The distance of M101, 6.4 Mpc, is slightly less than the maximum distance at which current gamma-ray instruments should be able to detect an intrinsically luminous SNIa. The closeness of SN2011fe has made it possible to obtain the tightest constraints on the supernova and its progenitor system, leaving only either DD or a few cases of SD as possible progenitor systems of this supernova.
\begin{table}
\caption{Upper limits on the flux in selected spectral regions for SPI (2$\sigma$), JEM--X (2$\sigma$), and IBIS/ISGRI (3$\sigma$) for the entire pre-- and post--maximum observation periods (from days 5 to 18 and 45 to 88 after the explosion, respectively).}
\label{tab1}
\centering
\begin{tabular}{ccc}
\hline\hline
\multicolumn{3}{c}{Early period} \\
\hline
Energy band & Upper-limit flux & Instrument \\
(keV) & (photons s$^{-1}$ cm$^{-2}$) & \\
\hline
3 - 10 & $5.0 \times 10^{-4}$ & JEM-X \\
10 - 25 & $4.0 \times 10^{-4}$ & JEM-X \\
3 - 25 & $1.0 \times 10^{-3}$ & JEM-X \\
60 - 172 & $1.5 \times 10^{-4}$ & IBIS/ISGRI \\
90 - 172 & $1.1 \times 10^{-4}$ & IBIS/ISGRI \\
150 - 172 & $7.1 \times 10^{-5}$ & IBIS/ISGRI \\
160 - 166 & $7.5 \times 10^{-5}$ & SPI \\
140 - 175 & $2.3 \times 10^{-4}$ & SPI \\
814 - 846 & $2.3 \times 10^{-4}$ & SPI \\
800 - 900 & $3.5 \times 10^{-4}$ & SPI \\
\hline \hline
\multicolumn{3}{c}{Late period}\\
\hline
Energy band & Upper-limit flux & Instrument \\
(keV) & (photons s$^{-1}$ cm$^{-2}$) & \\
\hline
505 - 525 & $1.1 \times 10^{-4}$& SPI \\
830 - 875 & $1.4 \times 10^{-4}$&SPI \\
835 - 870 & $1.2 \times 10^{-4}$& SPI \\
1215-1275& $1.2 \times 10^{-4}$&SPI \\
1220-1270& $1.1 \times 10^{-4}$&SPI \\
1225-1265& $1.0 \times 10^{-4}$&SPI \\
\hline
\end{tabular}
\end{table}
\emph{INTEGRAL} observed this supernova with its four instruments (SPI, ISGRI/IBIS, JEM--X, and OMC) before the peak of the light curve, from August 29th to September 12th 2011 or, equivalently, from day $\sim 5$ to day $\sim 18$ after the explosion, and after the peak, from October 7th to November 19th, 2011 or, equivalently, from day $\sim 45$ to day $\sim 88$.
The early observations were essentially constrained by the Sun, which prevented observing just after the optical maximum, when the $^{56}$Ni lines are expected to peak. The results of the JEM-X, ISGRI and SPI observations, displayed in Table \ref{tab1}, were only upper limits.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{fig1.eps}
\caption{SN2011fe predicted evolution of the $^{56}$Ni 158 and 812 keV features (dashed and continuous magenta lines, respectively), the $^{56}$Co 847 and 1238 keV features (continuous and dashed black lines), and the 511 keV annihilation line together with the 200--540 keV band (dashed and continuous red lines, respectively). Blue dots represent the evolution of the SN2011fe visual magnitude as obtained with the OMC camera of INTEGRAL.}
\label{figlc}
\end{figure}
Figure \ref{figlc} displays the light curve in the optical V--band obtained with the OMC. The magnitude at maximum was $M_V = -19.04$, thus indicating that SN2011fe was a slightly dim average SNIa. This light curve is well fitted with a delayed detonation model of a Chandrasekhar mass white dwarf igniting at $\rho_c = 2\times 10^9$ g/cm$^3$ and making the transition from deflagration to detonation at $\rho_{tr} = 2.2 \times 10^7$ g/cm$^3$. The corresponding decline parameter in the blue was $\Delta m_{\rm 15} =1.2 \pm 0.2$ in agreement with the Phillips relationship. The total amount of $^{56}$Ni synthesized in this way was in the range of $\sim 0.51-0.55$ M$_\odot$ when the uncertainties in the values of the interstellar extinction are taken into account.
The expected gamma-ray emission of the above model has been obtained with the code described in \cite{gome98}, which was successfully cross--checked with other independent codes \cite{miln04}. Figure \ref{figlc} displays the evolution with time of several important gamma-ray features. The 200-540 keV band contains almost all the annihilation photons and is the brightest feature. The $^{56}$Ni lines are characterized by a sharp rise and a relatively rapid decline, a consequence of the rapid expansion of the debris, with the corresponding increase of the transparency, and of the relatively short lifetime of this isotope. The slow decline of the 812 keV line is a consequence of its overlap with the growing 847 keV $^{56}$Co line. The $^{56}$Co lines grow more gently and decay slowly as a consequence of the longer lifetime of the isotope and the increasing transparency of the envelope. Figure \ref{figspc} displays the spectra 70 and 90 days after the explosion. The main characteristic, valid at almost every epoch, is the extremely large width of the lines ($\Delta E/ E \gtrsim 5\%$), a consequence of Doppler and Compton broadening.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{fig2.eps}
\caption{Predicted spectra from the DDTe model at 70 (red line) and 90 (blue line) days after the explosion.}
\label{figspc}
\end{figure}
\section{Discussion and conclusions}
Besides the relative weakness of the source, the width of the lines and the rapid variation of their intensity are responsible for the non-detection of SN2011fe by \emph{INTEGRAL}. In the limit of weak signals \cite{jean96}, the significance is given by:
\begin{equation}
n_{\sigma} = \frac{{A_{eff} \int\limits_{t_i - \Delta t}^{t_i } {\varphi \left( t \right)dt} }}{{\sqrt {bV\Delta E \Delta t} }}
\label{nsigma}
\end{equation}
\noindent where $\Delta t$ is the effective observation time, $A_{eff}$ is the effective area at the corresponding energies, $\varphi$ is the flux (cm$^{-2}$s$^{-1}$) in the energy band $\Delta E$, $V$ is the volume of the detector and $b$ is the specific noise rate (cm$^{-3}$s$^{-1}$keV$^{-1}$), assumed to be weakly dependent on energy and time in the interval of interest. It is evident from Eq.~\ref{nsigma} that if the photons produced by a nuclear transition are distributed over an energy band $\Delta E$, the noise grows as $\propto \sqrt{\Delta E}$ and the significance of the signal decreases as compared with the thin-line case. Table \ref{tab2} displays the width that optimizes the signal-to-noise ratio at maximum in the DDTe case.
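The scaling of Eq.~\ref{nsigma} with the line width can be illustrated with a short numerical sketch. All numbers below are placeholders (not INTEGRAL calibrations), and the function name is ours:

```python
import math

def significance(a_eff, fluence, b, vol, delta_e, delta_t):
    # Weak-signal significance of Eq. (nsigma):
    # n_sigma = A_eff * (time-integrated flux) / sqrt(b * V * dE * dt)
    return a_eff * fluence / math.sqrt(b * vol * delta_e * delta_t)

# Illustrative numbers only (not calibrated to any INTEGRAL instrument):
a_eff = 50.0      # cm^2, effective area near the line energy
phi   = 1.0e-4    # photons cm^-2 s^-1, assumed constant line flux
dt    = 1.0e6     # s, effective observation time
b     = 1.0e-4    # cm^-3 s^-1 keV^-1, specific background rate
vol   = 1.0e3     # cm^3, detector volume

n_thin  = significance(a_eff, phi * dt, b, vol, 3.0, dt)    # narrow line
n_broad = significance(a_eff, phi * dt, b, vol, 40.0, dt)   # broadened line
```

Spreading the same photons over a band of 40 keV instead of 3 keV degrades the significance by $\sqrt{40/3} \approx 3.7$, which is why the Doppler- and Compton-broadened supernova lines are much harder to detect than narrow ones.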
It is also evident from Eq. \ref{nsigma} that if the flux changes with time, the significance will also change since it will be a function of the total number of photons detected during the observation time. For instance, assume that the flux follows an exponential law like $ \varphi \left( t \right) = \varphi _0 e^{\alpha t} $. The significance of the signal obtained integrating during the time interval $\left( {t_i - \Delta t,t_i } \right)$ is
\begin{equation}
n = \frac{{A_{eff} \varphi \left( {t_i } \right)}}{{\sqrt {\alpha bV\Delta E} }}\frac{{1 - e^{ -
\alpha \Delta t} }}{{\sqrt {\alpha \Delta t} }}
\end{equation}
\noindent For $\alpha \Delta t \ll 1$ it behaves as $n \propto \sqrt{\Delta t}$, and in the general case it has a maximum at $\alpha \Delta t = 1.26$. This dependence clearly shows the convenience of choosing a value of $\Delta t$ that maximizes the signal-to-noise ratio. Unfortunately, since the value of $\alpha$ is not known \emph{a priori}, the optimal observing time is not known in advance \cite{iser13}.
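The quoted optimum can be recovered numerically: setting the derivative of $(1-e^{-x})/\sqrt{x}$ (with $x = \alpha\Delta t$) to zero gives the transcendental condition $e^{-x}(1+2x)=1$, which a simple bisection solves; a minimal sketch:

```python
import math

# Stationarity of n(dt) ~ (1 - exp(-x)) / sqrt(x), with x = alpha * dt:
# d/dx [(1 - e^-x) x^(-1/2)] = 0  =>  e^(-x) (1 + 2x) = 1
def g(x):
    return math.exp(-x) * (1.0 + 2.0 * x) - 1.0

lo, hi = 1.0, 1.5            # g(1) > 0 and g(1.5) < 0: root is bracketed
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0.0:
        lo = mid
    else:
        hi = mid
x_opt = 0.5 * (lo + hi)      # ~1.2564, the alpha * dt quoted in the text
```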
\begin{table}
\caption{Width of the lines that optimizes the S/N ratio at the maximum of their intensity in the case of the DDTe model}
\label{tab2}
\centering
\begin{tabular}{cc}
\hline\hline
Energy (keV) & $\Delta E$ (keV) \\
\hline
158 & 20 \\
511 & 30 \\
511 (band) & 340 \\
812 & 35 \\
847 & 27 \\
1238 & 40 \\
\hline
\end{tabular}
\end{table}
For instance, if we assume that the DDTe model (see Fig. \ref{figlc}) is representative of SN 2011fe, we see that at the beginning, when the emission is dominated by $^{56}$Ni and the ejecta are still opaque, the intensity of the lines grows rapidly and only the last five days of the first observation period are useful. On the contrary, when the emission is dominated by the $^{56}$Co lines and the matter is already transparent, the temporal behavior is gentler, the above restriction does not apply, and it is possible to follow a cumulative observation strategy. Therefore, estimating in advance the significance of an observation demands a prior knowledge of the evolution of the shape and intensity of the lines.
It is clear from the previous discussion that the detectability of a supernova depends not only on the distance but also on the subtype, since the intensity of the radioactive lines is a function of the total amount of $^{56}$Ni synthesized, which goes from $\sim 1$ M$_\odot$ in the case of a superluminous SNIa to $\sim 0.05$ M$_\odot$ in the case of a dim SNIax. Neglecting the last family and taking only the well classified events that appear in the catalog of the Sternberg Astronomical Institute, it turns out that detecting $\sim 6$ bright events per year requires a sensitivity of the order of $\sim 10^{-7}$ cm$^{-2}$s$^{-1}$keV$^{-1}$. Of course, these figures are approximate, but they represent the frontier that would allow setting up a systematic program of observation of SNIa or considering these events as serendipitous targets of opportunity (ToO).
\acknowledgments
This work has been supported by the MINECO-FEDER grants AYA2011-24704/ESP, AYA2011-24780/ESP, AYA2009-14648-C02-01, CONSOLIDER CSC2007-00050, by the ESF EUROCORES Program EuroGENESIS (MINECO grants EUI2009-04170), by the grant 2009SGR315 of the Generalitat de Catalunya. In parts, this work has also been supported by the NSF grants AST-0708855 and AST-1008962 to PAH.
The INTEGRAL SPI project has been completed under the responsibility and leadership of CNES.
We also acknowledge the INTEGRAL Project Scientist Chris Winkler (ESA, ESTEC) and the INTEGRAL personnel for their support to these observations.
\section{Introduction}
\noindent More than twenty-five years ago, Shifman, Vainshtein and Zakharov
\cite{SVZ} proposed to use the Operator Product Expansion (OPE) in hadronic
current-current correlators to extend asymptotic predictions of QCD to low
energies. In this approach there appear universal vacuum expectation values of
quark and gluon fields, the so-called vacuum condensates, which have to be
extracted from experiment. This extraction is usually carried out by using
methods based on dispersion relations. Ultimately, one has to relate error
afflicted data in the time-like region to asymptotic QCD in the space-like
region. Unfortunately, this task of analytic continuation constitutes,
mathematically, a so-called ill-posed problem. In fact, extracting condensates
from data is highly sensitive to data errors. Not surprisingly, results from
different collaborations have not always been consistent \cite{Friot}. The
main reason for these inconsistencies was frequently the impossibility of
estimating reliably the errors in the method. With the release of the final
analysis of the precise measurements of the vector (V) and axial-vector (A)
spectral functions obtained from tau-lepton decay by the ALEPH collaboration
\cite{ALEPH2}, an opportunity has been opened to check the validity of QCD sum
rules and the extraction of condensates in the light-quark sector with
unprecedented precision. It is therefore appropriate to reanalyze the data,
taking into account errors and correlations with the least possible
theoretical bias. In this paper we attempt such a critical and conservative
appraisal for the interesting case of chiral sum rules. These sum rules
involve the difference between the vector and the axial-vector correlators
(V-A), which vanishes identically to all orders in perturbative QCD in the
chiral limit. In fact, neglecting the light quark masses, the (V-A) two-point
function vanishes like $1/q^{6}$ in the space-like region, where the scale
$\mathcal{O}(300$ MeV) is set by the four-quark condensates. The interest in
these sum rules is twofold. Apart from describing a QCD order parameter, they
determine the leading contributions of the matrix elements of the electroweak
penguin operators
\begin{align}
Q_{7} & =6(\bar{s}_{L}\gamma_{\mu}d_{L})\sum\limits_{q=u,d,s}e_{q}(\bar
{q}_{R}\gamma_{\mu}q_{R})\nonumber\\
Q_{8} & =-12\sum\limits_{q=u,d,s}e_{q}(\bar{s}_{L}q_{R})(\bar{q}_{R}%
d_{L})\nonumber
\end{align}
where $e_{q}$ is the charge of the quark $q$.
In the time-like region, the chiral spectral function $\rho_{V-A}(q^{2})$
should vanish for sufficiently large $Q^{2}\equiv-q^{2}$, but judging from the
ALEPH data \cite{ALEPH2} shown in Fig.1 the asymptotic regime of local duality
does not seem to have been reached, i.e. the spectral function does not vanish
even for the highest momenta attainable in $\tau$-decay.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=3.0552in,
width=3.0552in
]%
{sf_vminusa_new.eps}%
\caption{The ALEPH data \cite{ALEPH2} on the vector minus axial-vector
spectral function vs. perturbative QCD (solid line).}%
\label{VmAspec}%
\end{center}
\end{figure}
Under less stringent assumptions, one would hope that at least global duality
should hold in the time-like region. In particular, this should be the case
for the Weinberg-type sum rules \cite{WSR}-\cite{DMO} which involve the first
and second moment of the spectral function. Surprisingly, these sum rules also
appear to be poorly convergent. A likely source of duality violation could be
some non-perturbative contribution to the correlator (e.g. due to instantons)
which falls off exponentially in the space-like region but oscillates in the
time-like region. From Fig.1 it is obvious that convergence could be improved
by incorporating a weight factor which would reduce the non-asymptotic
contribution to the spectral integral. This can be achieved e.g. by
considering so-called ``pinched sum rules'' \cite{Nasrallah} or ``minimizing
polynomial sum rules'' \cite{BPS}. In view of its importance we have chosen to
reanalyze the issue of duality in chiral sum rules on the basis of the new
ALEPH measurements. Our analysis leads to results showing a significantly
improved accuracy.
\section{Finite energy sum rules}
We begin by defining the vector and axial-vector current correlators
\begin{align}
\Pi_{\mu\nu}^{VV}(q^{2}) & =i\int d^{4}x \; e^{iqx}<0|T(V_{\mu}(x)V_{\nu
}^{\dagger}(0))|0>\label{2.1}\\
& =(-g_{\mu\nu}\;q^{2}+q_{\mu}q_{\nu})\;\Pi_{V}(q^{2})\;,\nonumber
\end{align}
\begin{align}
\Pi_{\mu\nu}^{AA}(q^{2}) & =i\int d^{4}x \;e^{iqx}<0|T(A_{\mu}(x)A_{\nu
}^{\dagger}(0))|0>\label{2.2}\\
& =\;(-g_{\mu\nu}q^{2}+q_{\mu}q_{\nu})\;\Pi_{A}(q^{2})-q_{\mu}q_{\nu} \;
\Pi_{0}(q^{2})\; ,\nonumber
\end{align}
where $V_{\mu}(x)=:\bar{q}(x)\gamma_{\mu}q(x):$, $A_{\mu}(x)=:\bar{q}%
(x)\gamma_{\mu}\gamma_{5}q(x):$, and $q=(u,d)$. Here we shall concentrate on
the chiral correlator $\Pi_{V-A}\equiv\Pi_{V}-\Pi_{A}$. This correlator
vanishes identically in the chiral limit ($m_{q}=0$), to all orders in QCD
perturbation theory. Renormalon ambiguities are thus avoided. To define our
normalization we note that in perturbative QCD
\begin{equation}
\frac{1}{\pi}\operatorname{Im}\Pi_{V}^{QCD}\left( s\right) =\frac{1}{\pi
}\operatorname{Im}\Pi_{A}^{QCD}\left( s\right) =\frac{1}{8\pi^{2}}\left(
1+\frac{\alpha_{s}}{\pi} +...\right) \label{2.3}%
\end{equation}
Non-perturbative contributions due to vacuum condensates contribute to this
two-point function starting with dimension $d=6$, and involving the four-quark
condensate. The Operator Product Expansion (OPE) of the chiral correlator can
be written as
\begin{equation}
\Pi(Q^{2})|_{V-A}=\sum_{N\geq3}^{\infty}\frac{1}{Q^{2N}}\;C_{2N}(Q^{2},\mu
^{2})\;<O_{2N}(\mu^{2})>\;, \label{2.4}%
\end{equation}
with $Q^{2}\equiv-q^{2}$. The scale parameter $\mu$ separates the long
distance non-perturbative effects associated with the condensates $<O_{2N}%
(\mu^{2})>$ from the short distance effects which are included in the Wilson
coefficients $C_{2N}(Q^{2},\mu^{2})$. The OPE is valid for complex $q^{2}$ and
moderately large $|q^{2}|$ sufficiently far away from the positive real axis.
Radiative corrections to the dimension $d=6$ contribution are known
\cite{CH}-\cite{Cirigliano}. They depend on the regularization scheme,
implying that the value of the condensate itself is a scheme dependent
quantity. Explicitly,
\begin{equation}
\Pi(Q^{2})|_{V-A}\;=-\frac{32\pi}{9}\;\frac{\alpha_{s}<\bar{q}q>^{2}}{Q^{6}%
}\left\{ 1+\frac{\alpha_{s}(\mu^{2})}{4\pi}\left[ \frac{244}{12}%
+\mathrm{ln}\left( \frac{\mu^{2}}{Q^{2}}\right) \right] \right\}
+O(1/Q^{8})\;, \label{2.5}%
\end{equation}
in the $\overline{MS}$ scheme, and assuming vacuum saturation of the
four-quark condensate. Radiative corrections for $d\geq8$ are not known. To
facilitate comparison with current conventions in the literature it will be
convenient to absorb the Wilson coefficients, including radiative corrections,
into the operators, and rewrite Eq.(\ref{2.4}) compactly as
\begin{equation}
\Pi(Q^{2})=\sum_{N\geq3}^{\infty}\frac{1}{Q^{2N}}\;\mathcal{O}_{2N}%
(Q^{2})\;,\label{2.51}%
\end{equation}
where we are dropping the subscript (V-A) for simplicity. We will be concerned
with Finite Energy Sum Rules (FESR), which are nothing but the Cauchy
integral
\begin{equation}
\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}ds\text{\thinspace}P_{N}(s)\left[
v(s)-a(s)\right] -f_{\pi}^{2}P_{N}(m_{\pi}^{2})=-\frac{1}{2\pi i}%
\oint_{|s|=s_{0}}ds\text{\thinspace}P_{N}(s)\;\Pi^{QCD}\left( s\right)
\;,\label{2.52}%
\end{equation}
where $P_{N}(s)$ is an arbitrary polynomial, i.e.
\begin{equation}
P_{N}(s)=a_{0}+a_{1}s+a_{2}s^{2}+\ldots+a_{N}s^{N},\label{POLY}%
\end{equation}
$f_{\pi}=92.4\pm0.26\text{ MeV}$ \cite{PDG}, and $v(s)$ ($a(s)$) is the vector
(axial-vector) spectral function measured by ALEPH in tau-decay \cite{ALEPH2},
normalized to the asymptotic value
\begin{equation}
v(s)_{QCD}=a(s)_{QCD}=\frac{1}{2}\left( 1+\frac{\alpha_{s}}{\pi}+...\right)
\;.\label{2.310}%
\end{equation}
The axial-vector spectral function $a(s)$ does not include the pion pole
contribution which is added separately. For most purposes one can work in the
chiral limit $m_{\pi}=0$, i.e. $P_{N}(m_{\pi}^{2})$ in Eq.(\ref{2.52}) may be
replaced by $a_{0}f_{\pi}^{2}$. The standard FESR follow from the
theorem of residues and assuming the Wilson coefficients are just numbers,
\begin{equation}
(-)^{(N+1)}\;\mathcal{O}_{2N}(s_0)=\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}%
ds\,s^{N-1}\;[v(s)-a(s)]-f_{\pi}^{2}\;\delta_{N1}%
\ \ \ \ (N=1,2,3...)\;,\label{2.53}%
\end{equation}
where the index $N$ has been rearranged for convenience.
Strictly speaking, Eq.(\ref{2.53})
only holds for the constant terms of the Wilson coefficients. Otherwise condensates of lower
or higher dimension get mixed when taking into account
radiative corrections due to the logarithmic terms ($\mathrm{ln}(\mu^{2}/Q^2)$ or
higher). However this mixing of operators of different dimensions occurs
only at order $\alpha_{s}^{2}$ in a given FESR \cite{MIX}. For dimensional
reasons the contribution of the operators of higher dimension in Eq.(\ref{2.53}) vanishes for
large $s_{0}$, while that of operators of lower dimension increases with
$s_{0}$. The latter contribution is particularly disturbing for operators of
high dimension. As the logarithmic terms of the relevant Wilson coefficients
are not known (except the one for $\mathcal{O}_{6}$) we will neglect the
contribution of operators of dimension unequal to $2N$ in Eq.(\ref{2.53}). This
approximation is inherent to every sum rule analysis of the $\tau$-data and
can only be justified \emph{a posteriori} by demonstrating that the right hand
side of Eq.(\ref{2.53}) is (almost) constant. We will, however, examine the
order of magnitude of the mixing to be expected by using Eq.(\ref{2.5}) to
estimate the effect of the radiative corrections of $\mathcal{O}_{6}$.
For $N=1,2$ Eq.(\ref{2.53}) leads to the first two (Finite Energy) Weinberg
sum rules, while for $N=3,4$ the sum rules project out the $d=6$ and $d=8$
vacuum condensates, respectively (notice that in the chiral limit
$\mathcal{O}_{2}=\mathcal{O}_{4}=0$). In order to check the convergence of the
sum rules we consider the first Weinberg sum rule
\begin{equation}
W_{1}(s_{0})\equiv\;\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}ds\,[v(s)-a(s)]=f_{\pi
}^{2} \label{2.7}%
\end{equation}
Strictly speaking $s_{0}\rightarrow\infty$, but precocious scaling would imply
that the sum rule should be saturated at moderate values of $s_{0}$. From
Fig.2, which shows $W_{1}(s_{0})$, one can see that this is clearly not the
case, even at the highest energies accessible in $\tau$-decay.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.1801in,
width=2.8177in
]%
{WSR1.eps}%
\caption{The first Weinberg sum rule, Eq.(\ref{2.7})\thinspace\ as a function
of $s_{0}$ together with $f_{\pi}^{2}=(0.00854\pm0.00005)\; \mbox{GeV}^{2}$ (solid
line).}%
\end{center}
\end{figure}
This lack of precocious scaling in the sum rule can be simply explained by
looking at the measured spectral function in Fig.1. If the spectral function
had reached its approximate asymptotic value (i.e. zero) starting, let us
say, from $s=2\;\mbox{GeV}^{2}$ then the spectral integral of Eq.(\ref{2.7})
would have yielded $f_{\pi}^{2}$ for all $s_{0}\geq2\;\mbox{GeV}^{2}$. This
observation shows us a way out of the dilemma by turning to the more general
sum rules of Eq.(\ref{2.52}). One can choose the polynomial $P_{N}(s)$ in the
sum rule Eq.(\ref{2.52}) in such a way that the problematic contribution of
the integration region near the endpoint of the physical cut is minimized.
This method addresses two problems at the same time, the first being that
experimental errors of the spectral functions grow considerably near the limit
of phase space, and the second, that the asymptotic QCD formula is unreliable
on the contour region near the physical cut. In following this method we will
employ two types of sum rules, pinched FESR and minimizing polynomial sum rules.
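The effect of such weight factors can be illustrated on a toy duality-violating term. The exponential-times-sine form below is only an assumed example (not a fit to the ALEPH data), chosen because it decays in the space-like region but keeps oscillating in the time-like region:

```python
import numpy as np

# Toy duality-violating spectral term (an assumed form, for illustration):
def rho_dv(s):
    return np.exp(-s / 1.5) * np.sin(4.0 * s)

def trap(y, x):
    # plain trapezoidal rule (avoids version-dependent numpy helpers)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

s = np.linspace(0.0, 40.0, 400001)
limit = trap(rho_dv(s), s)           # asymptotic value of the zeroth moment

def moment_plain(s0):                # int_0^{s0} rho(s) ds
    g = np.linspace(0.0, s0, 40001)
    return trap(rho_dv(g), g)

def moment_pinched(s0):              # int_0^{s0} (1 - s/s0) rho(s) ds
    g = np.linspace(0.0, s0, 40001)
    return trap((1.0 - g / s0) * rho_dv(g), g)

s0_grid = np.linspace(2.0, 3.5, 31)
dev_plain = np.mean([abs(moment_plain(x) - limit) for x in s0_grid])
dev_pinch = np.mean([abs(moment_pinched(x) - limit) for x in s0_grid])
# the (1 - s/s0) weight suppresses the endpoint region and damps the
# oscillation of the finite-energy moment around its asymptotic value
```

In this toy model the pinched moment sits markedly closer to its asymptotic value over the $s_0$ range accessible in $\tau$-decay, which is the qualitative behavior exploited by the sum rules below.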
\section{Pinched sum rules}
We begin by considering a linear combination of the first two Weinberg sum
rules
\begin{equation}
\bar{W}_{1}(s_{0})\equiv\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}ds\;(1-\frac
{s}{s_{0}})\;[v(s)-a(s)]=f_{\pi}^{2}. \label{3.1}%
\end{equation}
The left hand side of Eq.(\ref{3.1}) as a function of $s_{0}$ is shown in Fig.
3, together with the right hand side.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.2731in,
width=2.9855in
]%
{Graph2.eps}%
\caption{Pinched Weinberg sum rule, Eq.(\ref{3.1}), as a function of $s_{0}$.
The solid line is $f_{\pi}^{2}=(0.00854\pm0.00005)\;\mbox{GeV}^{2}$.}%
\label{Fig.2}%
\end{center}
\end{figure}
It is very reassuring that the sum rule appears to be saturated for
$s_{0}>2.3\;\mbox{GeV}^{2}$. We note that the error band is about a factor
three smaller than that found in a similar analysis \cite{DS2} using the old
ALEPH or OPAL data \cite{ALEPH}-\cite{OPAL}. The influence of the logarithmic
dependence of $\mathcal{O}_{6}$ in this sum rule is about $4 \times 10^{-6}
\, {\rm GeV}^2$ in the region of the saturation with $s_0$.
Motivated by this success, we impose the Weinberg sum rules as constraints in
other pinched finite energy sum rules involving different moments. To be
precise, we assume that there are no operators of dimension $d =2$ nor $d =4$,
which is true in the chiral limit, together with the condition that the FESR
involves factors of $(1-\frac{s}{s_{0}})$ so as to minimize the contribution
near the cut. In this way, we write the Das-Mathur-Okubo \cite{DMO} sum rule in the
form
\begin{equation}
\bar{\Pi}(0)=\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}\frac{ds}{s}\;(1-\frac{s}%
{s_{0}})^{2}\;[v(s)-a(s)]+\frac{2f_{\pi}^{2}}{s_{0}}\;,\label{3.11}%
\end{equation}
where $\bar{\Pi}(0)$ is the finite remainder of the chiral correlator at zero
momentum. It is related to $\bar{L}_{10}$, the counter term of the $O(p^{4})$
Lagrangian of chiral perturbation theory \cite{GL}, which has been calculated independently,
\begin{equation}
\bar{\Pi}(0)=-4 \;\bar{L}_{10}=\left[ \frac{1}{3}f_{\pi}^{2}<r_{\pi}%
^{2}>-F_{A}\right] \;=0.026\pm0.001\;, \label{3.12}%
\end{equation}
where $<r_{\pi}^{2}>$ is the electromagnetic mean squared radius of the pion,
$<r_{\pi}^{2}>=0.439\pm0.008\;\mbox{fm}^{2}$ \cite{AMEN}, and $F_{A}$ is the
axial-vector coupling measured in radiative pion decay, $F_{A}=0.0058\pm
0.0008$ \cite{PDG}. From Fig. 4 we see that this sum rule is even more
remarkably satisfied. This can be understood by noting that the sum rule
emphasizes less the high-$s$ region where duality violation competes against
stability.
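The arithmetic behind Eq.~(\ref{3.12}) can be checked directly with the values quoted above, using $\hbar c$ to convert the pion charge radius from fm$^2$ to GeV$^{-2}$; a minimal sketch:

```python
# Numerical check of Pi_bar(0) = f_pi^2 <r_pi^2> / 3 - F_A,
# with the input values quoted in the text.
hbar_c = 0.19733          # GeV fm
f_pi   = 0.0924           # GeV
r2_pi  = 0.439            # fm^2, pion electromagnetic mean squared radius
F_A    = 0.0058           # axial coupling from radiative pion decay

r2_gev   = r2_pi / hbar_c ** 2                 # fm^2 -> GeV^-2 (~11.3)
pi_bar_0 = f_pi ** 2 * r2_gev / 3.0 - F_A      # ~0.0263
```

The result, $\approx 0.0263$, is consistent with the quoted $0.026 \pm 0.001$.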
\begin{figure}
[h]
\begin{center}
\includegraphics[
trim=-0.005923in 0.000000in 0.005923in 0.000000in,
height=2.1694in,
width=2.6675in
]%
{DMO.eps}%
\caption{The finite remainder of the chiral correlator at zero momentum,
$\bar{\Pi}(0)$, from Eq.(\ref{3.11}) as a function of $s_{0}$. The solid line is
the central value $\bar{\Pi}(0)=0.02579$.}%
\end{center}
\end{figure}
Numerically, we find from the DMO sum rule
\begin{equation}
\bar{\Pi}(0)=0.02579\pm0.00023\;, \label{3.13}%
\end{equation}
a result showing a remarkable accuracy for a strong interaction parameter. In this
particular sum rule the contribution from the logarithmic term of $\mathcal{O}_{6}$
vanishes.
Next, we turn our attention to the extraction of the condensates with the help
of the pinched sum rules. The philosophy of our calculation is threefold, viz.
(i) to assume that dimension $d =2$ and $d = 4$ operators are absent in the OPE of the
chiral current , (ii) to require that the polynomial projects out only one
operator of the OPE at a time, and (iii) to require that the polynomial and
its first derivative vanish on the integration contour of radius $|s|=s_{0}$.
For the caveats on point (ii) of this approach see the text above. In this way
one obtains for $N\geq 3$ the sum rules (ignoring the energy dependence
in the Wilson coefficients)
\begin{align}
\mathcal{O}_{2N}(s_0) & =(-1)^{N-1}\Bigg \{\frac{1}{4\pi^{2}}\int_{0}^{s_{0}%
}ds[(N-2)s_{0}^{N-1}-(N-1)s_{0}^{N-2}s+s^{N-1}]\Bigg.\nonumber\\
& \Bigg. \times [v(s)-a(s)] -(N-2)s_{0}^{N-1}f_{\pi}^{2}\Bigg \}\;\;\;\;\;\;\;(N\geq
3)\;.\label{3.2}%
\end{align}
Note that there is always a pinch factor $(s-s_{0})^{2}$ in the polynomial. We
use once again the new ALEPH spectral function and error correlations in this
sum rule. The crucial point of the extraction of the condensates is a careful
inspection of the stability of the result with respect to the variation of all
parameters in the analysis. In our case there is only one parameter, namely
the radius $s_{0}$. This fact contrasts positively with other approaches based
on Laplace sum rules which involve at least two parameters, and in addition do
not project out just one single operator, even for $s_{0}\rightarrow\infty$.
As for stability, if a meaningful value of $\mathcal{O}_{2N}$ is to be
extracted we would expect the r.h.s. of Eq.(\ref{3.2}) to be constant for all
$s_{0}$ larger than some critical value. We can call this requirement
\textbf{strong stability}. It is best discussed on the basis of the figures
below which show the predictions for various condensates. Figure 5 shows the
result for the dimension $d=6$ condensate.
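The pinch factor $(s-s_{0})^{2}$ in the kernel of Eq.~(\ref{3.2}) can be verified numerically; a minimal sketch (the value of $s_0$ is illustrative, and the function names are ours):

```python
def kernel(s, s0, N):
    # Integration kernel of Eq. (3.2):
    # (N-2) s0^(N-1) - (N-1) s0^(N-2) s + s^(N-1)
    return (N - 2) * s0 ** (N - 1) - (N - 1) * s0 ** (N - 2) * s + s ** (N - 1)

def kernel_prime(s, s0, N):
    # analytic derivative with respect to s
    return -(N - 1) * s0 ** (N - 2) + (N - 1) * s ** (N - 2)
```

Both the kernel and its first derivative vanish at $s = s_0$ for every $N \geq 3$, i.e. the polynomial always contains a double zero at the endpoint of the integration contour.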
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.1801in,
width=2.694in
]%
{O6.eps}%
\caption{The dimension $d = 6$ condensate, $\mathcal{O}_{6}$, from Eq.(\ref{3.2})
as a function of $s_{0}$. The solid line is the central value
$\mathcal{O}_{6}=-0.00226 \; \mbox{GeV}^{6}$.}%
\end{center}
\end{figure}
\begin{figure}
[hptb]
\begin{center}
\includegraphics[
height=2.1237in,
width=2.6384in
]%
{O8.eps}%
\caption{The dimension $d =8$ condensate, $\mathcal{O}_{8}$, from Eq.(\ref{3.4} )
as a function of $s_{0}$. The solid line is the central value
$\mathcal{O}_{8}=-0.0053\; \mbox{GeV}^{8}$.}%
\end{center}
\end{figure}
There is an obvious stability region: $2.3\leq s_{0}(\mbox{GeV}^{2})\leq3$
from which we find
\begin{equation}
\mathcal{O}_{6}(2.7 \;\mbox{GeV}^{2})=-(0.00226\pm 0.00055)\;\mbox{GeV}^{6}\;.\label{3.3}%
\end{equation}
This value is consistent with the one found from the vacuum saturation
approximation $\mathcal{O}_{6}^{\text{VS}}=-0.0020\;\mbox{GeV}^{6}$ from
Eq.(\ref{2.5}) with $<\bar{q}q>(s_{0})=-0.019\; \mbox{GeV}^{3}$, and $\alpha_{s}(s_{0})/\pi=0.1$, but it is significantly lower than the one found in some earlier
analyses based on the old, incomplete ALEPH data; e.g. $\mathcal{O}%
_{6}=-(0.004\pm0.001)\;\mbox{GeV}^{6}$, obtained in \cite{DS2} using a similar
stability criterion as here. The contribution of the logarithmic term from the
$\mathcal{O}_{6}$ coefficient in (\ref{3.3}), in the region of $s_0$ considered here, is about
$8 \times 10^{-6}\;\mbox{GeV}^{6}$ and hence negligible within the errors.
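The vacuum saturation number quoted above follows from Eq.~(\ref{2.5}); a sketch of the arithmetic, evaluating the logarithm at $\mu^{2}=Q^{2}$ (our choice, for illustration only):

```python
import math

# Vacuum saturation estimate of O6 from Eq. (2.5), with the inputs
# used in the text: <qbar q>(s0) = -0.019 GeV^3, alpha_s(s0)/pi = 0.1.
alpha_s  = 0.1 * math.pi
qq       = -0.019                 # GeV^3
log_term = math.log(1.0)          # mu^2 / Q^2 = 1 (illustrative choice)

lo    = -(32.0 * math.pi / 9.0) * alpha_s * qq ** 2          # leading term
o6_vs = lo * (1.0 + (alpha_s / (4.0 * math.pi)) * (244.0 / 12.0 + log_term))
# o6_vs ~ -0.0019 GeV^6, close to the quoted O6_VS = -0.0020 GeV^6
```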
To facilitate a comparison with an alternative
type of sum rules discussed in the next section, we give the sum rule for
$\mathcal{O}_{8}$ explicitly
\begin{equation}
\mathcal{O}_{8}(s_{0})=-\frac{1}{4\pi^{2}}\int_{0}^{s_{0}}ds\;[2s_{0}%
^{3}-3s_{0}^{2}s+s^{3}][v(s)-a(s)]+2s_{0}^{3}f_{\pi}^{2}\;.\label{3.4}%
\end{equation}
(Notice that
$2s_{0}^{3}-3s_{0}^{2}s+s^{3}=\left( s_{0}-s\right)^{2}\left(s+2s_{0}\right)$).
The result for $\mathcal{O}_{8}$ is shown in Fig.6. In
spite of the larger errors there is still a distinct region of duality in the
interval: $2.3\leq s_{0}(\mbox{GeV}^{2})\leq3\;$, which yields
\begin{equation}
\mathcal{O}_{8}(2.6\; \mbox{GeV}^{2})=-(0.0054\pm0.0033)\;\mbox{GeV}^{8}\label{3.5}%
\end{equation}
Both the sign and the numerical value of this condensate are controversial
(see e.g. Table 1 in \cite{Friot}). Our result agrees within errors with e.g.
that of \cite{Bijnens}-\cite{CGM}, and that of \cite{Rojo} but disagrees in
sign with \cite{Narison}, \cite{DGHS}, \cite{Ioffe}, and with the result of
the minimal hadronic approximation of large $N_{c}$ \cite{Friot}. In
\cite{Narison} the sum rules were evaluated at very low values of $s_{0}$,
mainly because the data at that time was considered to be too inaccurate at
higher $s_{0}$. Fortunately, this state of affairs has now changed with the
new ALEPH analysis. Eq.(\ref{2.5}) can be used to estimate the effect on
$\mathcal{O}_{8}$ due to the mixing of $\mathcal{O}_{6}$ arising from the
logarithmic term. We find a correction of about $-3 \times 10^{-4}\; \mbox{GeV}^{8}$
which is negligible compared to the data error in Eq.(\ref{3.5}). The results
of the sum rules for $\mathcal{O}_{10}$, and $\mathcal{O}_{12}$ as given by
Eq.(\ref{3.2}) are shown in Figs. 7 and 8, respectively.
\begin{figure}
[hptbptb]
\begin{center}
\includegraphics[
height=2.1694in,
width=2.6019in
]%
{O10.eps}%
\caption{The dimension $d = 10$ condensate, $\mathcal{O}_{10}$, from Eq.(\ref{3.2} )
as a function of $s_{0}$. The solid line is the central value
$\mathcal{O}_{10}=0.0355 \; \mbox{GeV}^{10}$.}%
\end{center}
\end{figure}
The strong stability obtained so far is no longer obvious, and we find at
$s_{0}\approx 2.5 \;\mbox{GeV}^{2}$
\begin{align}
\mathcal{O}_{10} (2.5\; \mbox{GeV}^2)& =(0.036\pm 0.014)\;\mbox{GeV}^{10}\label{3.6}\\
\mathcal{O}_{12}(2.5\; \mbox{GeV}^2) & =-(0.12\pm 0.05)\;\mbox{GeV}^{12}\;.\label{3.7}%
\end{align}
These results, though, should be taken \textit{cum grano salis}, for the
following reasons. Because the OPE is an asymptotic
series, the upper limit of the integration range, $s_{0}$, should increase as
the dimension of the operators increases. The comparison of the numerical
results for $\mathcal{O}_{6}$, $\mathcal{O}_{8}$, $\mathcal{O}_{10}$, and
$\mathcal{O}_{12}$ indicates that at a scale of about $|s|=1\;\mbox{GeV}^{2}$
the OPE starts to diverge at dimension $d=10$. In addition, the problem of the
mixing of operators of different dimensions becomes more severe for higher
dimensional operators. For these reasons we believe that it is rather
meaningless to extract quantitative results for condensates of dimension
higher than $d=8$ from $\tau$-decay spectral functions.
\begin{figure}
[htbp]
\begin{center}
\includegraphics[
height=2.1694in,
width=2.6492in
]%
{O12.eps}%
\caption{The dimension $d = 12$ condensate, $\mathcal{O}_{12}$, from Eq.(\ref{3.2})
as a function of $s_{0}$. The solid line is the central value
$\mathcal{O}_{12}=-0.12 \;\mbox{GeV}^{12}$.}%
\end{center}
\end{figure}
\section{Minimizing Polynomial Sum Rules}
The starting point in this analysis is the general sum rule Eq.(\ref{2.52}).
The polynomial can be chosen in such a way that the problematic contribution
of the integration region near the endpoint of the physical cut is minimized.
With the normalization condition
\begin{equation}
P_{N}\left( s=0\right) \,=\,1, \label{4.1}%
\end{equation}
we require that the polynomial $P_{N}(s)$ should minimize the contribution of
the continuum in the range $\left[ c,s_{0}\right] $ in a least square sense,
i.e.
\begin{equation}
\int_{c}^{s_{0}}s^{k}P_{N}(s)\,ds=0\;\;\;\;\;(k=0,\ldots,N-1)\;.
\label{4.2}%
\end{equation}
The parameter $c$ can be chosen freely in the interval $0<c<s_{0}$. On the
basis of the spectral function of Fig.1, a reasonable choice would be $2
\;\mbox{GeV}^{2}\leq c \leq s_{0}\sim3 \;\mbox{GeV}^{2}$. In a sense, these
polynomials are a generalization of pinched moments, and the $P_{N}(s)$ will
approach $(s-s_{0})^{N}$ when $c \rightarrow s_{0}$. While pinched moments
eliminate the contribution on the physical real axis at a single point, our
polynomials tend to eliminate a whole region (from $c$ to $s_{0}$). The degree
$N$ of the polynomial can be chosen appropriately to project out certain terms
in the OPE, Eq.(\ref{2.51}). The polynomials obtained in this way are closely
related to the Legendre polynomials as follows. Let us introduce the variable
\begin{equation}
x(s) \equiv\frac{2 s - (s_{0} +c)}{(s_{0} - c)} = \frac{2s}{(s_{0}-c)} + x(0)
\;, \label{5.1}%
\end{equation}
and define the polynomials as
\begin{equation}
P_{N}(s) = \frac{L_{N}[x(s)]}{L_{N}[x(0)]} \; , \label{5.2}%
\end{equation}
where $L_{N}(x)$ are the Legendre polynomials
\begin{equation}
L_{N}(x) = \frac{1}{2^{N} N!} \frac{d^{N}}{dx^{N}} (x^{2} -1)^{N} \;.
\label{5.3}%
\end{equation}
We give here only the first few minimizing polynomials:
\begin{align}
P_{1}\left( s\right) & = 1 - \frac{2 s}{(s_{0} + c)}\label{4.200}\\
P_{2}\left( s\right) & =\frac{3\left( 2s-\left( s_{0}+c\right) \right)
^{2}-\left( s_{0}-c\right) ^{2}}{3\left( s_{0}+c\right) ^{2}-\left(
s_{0}-c\right) ^{2}}\label{4.21}\\
P_{3}\left( s\right) & =\frac{5\left( 2s-\left( s_{0}+c\right) \right)
^{3}-3\left( s_{0}-c\right) ^{2}\left( 2s-\left( s_{0}+c\right) \right)
}{-5\left( s_{0}+c\right) ^{3}+3\left( s_{0}-c\right) ^{2}\left(
s_{0}+c\right) } \label{4.22}%
\end{align}
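As a quick numerical sanity check of this construction (an illustrative sketch, not part of the original analysis; the values $s_{0}=3\;\mbox{GeV}^{2}$ and $c=2.5\;\mbox{GeV}^{2}$ are those used below), the following Python fragment builds $P_{N}(s)$ from Eq.(\ref{5.2}) and verifies the normalization Eq.(\ref{4.1}) and the least-squares conditions Eq.(\ref{4.2}):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def P(N, s, s0, c):
    """Minimizing polynomial P_N(s) = L_N[x(s)] / L_N[x(0)], Eq. (5.2)."""
    x = lambda t: (2.0 * t - (s0 + c)) / (s0 - c)   # variable change, Eq. (5.1)
    coef = [0.0] * N + [1.0]                        # Legendre series with a single L_N term
    return legval(x(s), coef) / legval(x(0.0), coef)

s0, c = 3.0, 2.5                     # GeV^2, the values used in the text
s = np.linspace(c, s0, 20001)
ds = s[1] - s[0]
for N in (1, 2, 3):
    # normalization P_N(0) = 1, Eq. (4.1)
    assert abs(P(N, 0.0, s0, c) - 1.0) < 1e-12
    # trapezoid rule for  int_c^{s0} s^k P_N(s) ds = 0,  k = 0..N-1, Eq. (4.2)
    for k in range(N):
        f = s**k * P(N, s, s0, c)
        integral = np.sum(0.5 * (f[1:] + f[:-1])) * ds
        assert abs(integral) < 1e-6
```

The vanishing integrals follow directly from the orthogonality of the Legendre polynomials, since $x(s)$ maps $[c,s_{0}]$ onto $[-1,1]$.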
If the polynomials are expressed as in Eq. (8), then from Eqs.(\ref{2.52}) and
(\ref{2.53}) there follows the sum rule
\begin{equation}
a_{0}d_{0}+a_{1}d_{1}+...+ a_{N}d_{N}-f_{\pi}^{2} =a_{2}\mathcal{O}_{6}%
-a_{3}\mathcal{O}_{8}+...+\left( -1\right) ^{N} a_{N}\mathcal{O}_{2N+2} \; ,
\label{4.3}%
\end{equation}
where the constants
\begin{equation}
d_{N}= \frac{1}{4\pi^{2}} \int_{0}^{s_{0}}ds\,s^{N}\,\left( v(s)-a(s)\right)
\label{4.4}%
\end{equation}
are to be determined from the ALEPH data.
We begin with the $\mathcal{O}_{6}$ condensate and obtain from Eq.(\ref{4.3})
the sum rule
\begin{equation}
d_{0}+a_{1}d_{1}+a_{2}d_{2}-f_{\pi}^{2}=a_{2}\mathcal{O}_{6}\;.\label{4.5a}%
\end{equation}
After substituting $P_{2}$ from Eq.(\ref{4.21}) this sum rule becomes
\begin{equation}
\mathcal{O}_{6}(s_0)=\frac{1}{6}(s_{0}^{2}+4s_{0}c+c^{2})\left( d_{0}-f_{\pi}%
^{2}\right) -\left( s_{0}+c\right) d_{1}+d_{2}\;.\label{4.7}%
\end{equation}
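The algebra leading to Eq.(\ref{4.7}) can be cross-checked numerically by expanding $P_{2}(s)$ of Eq.(\ref{4.21}) as $1+a_{1}s+a_{2}s^{2}$ and comparing both sides of Eq.(\ref{4.5a}); in the sketch below the moment values $d_{N}$ and $f_{\pi}^{2}$ are arbitrary placeholders, not the ALEPH numbers:

```python
# Consistency check of Eq. (4.7) against the generic sum rule Eq. (4.5a),
# using arbitrary placeholder values for the data moments d_N and f_pi^2.
s0, c = 3.0, 2.5
d0, d1, d2, fpi2 = 0.1, 0.2, 0.3, 0.0086   # placeholders, not fitted ALEPH values

# Expand P_2(s), Eq. (4.21), as 1 + a1*s + a2*s^2
D  = 3.0 * (s0 + c)**2 - (s0 - c)**2
a1 = -12.0 * (s0 + c) / D
a2 =  12.0 / D

O6_sum_rule = (d0 + a1 * d1 + a2 * d2 - fpi2) / a2            # Eq. (4.5a)
O6_closed   = (s0**2 + 4*s0*c + c**2) * (d0 - fpi2) / 6.0 \
              - (s0 + c) * d1 + d2                            # Eq. (4.7)
assert abs(O6_sum_rule - O6_closed) < 1e-12
```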
In the sequel we choose the initial value $s_{0}=3\;\mbox{GeV}^{2}$, but will
subsequently change it in the range $2.5\leq s_{0}(\mbox{GeV}^{2})\leq3$ in
order to verify the criterion of \textbf{strong stability}. The condensate
$\mathcal{O}_{6}$ from Eq.(\ref{4.7}) is plotted in Fig. 9 as a function of
$c$.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.1428in,
width=2.8634in
]%
{O6c.eps}%
\caption{The condensate $\mathcal{O}_{6}$ as a function of the parameter $c$.}%
\label{O6(c)}%
\end{center}
\end{figure}
One can see a stable point at $c=2.5 \;\mbox{GeV}^{2}$. Fixing $c$ at
this point of minimal sensitivity has been discussed previously in other FESR
applications, e.g. in \cite{BPS}. For $c=2.5\;\mbox{GeV}^{2}$, and
$s_{0}=3\;\mbox{GeV}^{2}$ we obtain from Eq.(\ref{4.7}) the result:
\begin{equation}
\mathcal{O}_{6}(3\; \mbox{GeV}^{2})=-(0.0023\pm0.0013)\;\mbox{GeV}^{6}\;,\label{4.14}%
\end{equation}
which compares well within errors with the previous result from the pinched
sum rule, Eq.(\ref{3.3}). We have tested positively the stability around this
point by choosing different values of $s_{0}$ in the range $2.5\leq
s_{0}(\mbox{GeV}^{2})\leq3$. For instance, using $s_{0}=2.5\;\mbox{GeV}^{2}$
we obtain $\mathcal{O}_{6}(2.5 \;\mbox{GeV}^{2})=-(0.00224\pm 0.00046)\;\mbox{GeV}^{6}$. Other
values of $s_{0}$ in the above range lead to similar results, well in
agreement within errors with Eq.(\ref{4.14}), thus satisfying the criterion of
\textbf{strong stability}.
Next, we consider the $\mathcal{O}_{8}$ condensate, and use Eq.(\ref{4.3}) to
obtain the sum rule
\begin{equation}
d_{0}+a_{1}d_{1}+a_{2}d_{2}+a_{3}d_{3}-f_{\pi}^{2}=a_{2}\mathcal{O}_{6}%
-a_{3}\mathcal{O}_{8}\;.\label{4.15}%
\end{equation}
The presence of $\mathcal{O}_{6}$ in the sum rule for $\mathcal{O}_{8}$ can be
dealt with in two ways. One could insert the numerical value of $\mathcal{O}%
_{6}$, e.g. Eq.(\ref{4.14}), as obtained from its own sum rule, or rather
substitute the analytic expression of the sum rule itself. The latter
procedure yields the best possible results in terms of stability and accuracy,
and leads to the sum rule
\begin{equation}
\mathcal{O}_{8}(s_0)=-\frac{1}{5}\;[s_{0}(s_{0}+2c)^{2}+c^{3}]\;(d_{0}-f_{\pi}%
^{2})+\frac{3}{10}\;[3(s_{0}^{2}+c^{2})+4s_{0}c]\;d_{1}-d_{3}\;. \label{4.16}
\end{equation}
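The following numerical cross-check (again with placeholder values for the moments $d_{N}$ and $f_{\pi}^{2}$, not ALEPH data) verifies this closed form against the generic sum rule Eq.(\ref{4.15}) after substituting Eq.(\ref{4.7}) for $\mathcal{O}_{6}$:

```python
# Expand P_3(s), Eq. (4.22), as 1 + a1*s + a2*s^2 + a3*s^3, build O_8 from
# Eq. (4.15) with O_6 replaced by its own sum rule Eq. (4.7), and compare
# with the closed form Eq. (4.16).  Moments are arbitrary placeholders.
s0, c = 3.0, 2.5
d0, d1, d2, d3, fpi2 = 0.1, 0.2, 0.3, 0.4, 0.0086

A, B2 = s0 + c, (s0 - c)**2
D3 = -5.0 * A**3 + 3.0 * B2 * A               # denominator of P_3, Eq. (4.22)
a1 = (30.0 * A**2 - 6.0 * B2) / D3
a2 = -60.0 * A / D3
a3 = 40.0 / D3

# O_6 from its own sum rule, Eq. (4.7)
O6 = (s0**2 + 4*s0*c + c**2) * (d0 - fpi2) / 6.0 - A * d1 + d2
# O_8 from Eq. (4.15):  d0 + a1 d1 + a2 d2 + a3 d3 - f_pi^2 = a2 O_6 - a3 O_8
O8_sum_rule = (a2 * O6 - (d0 + a1*d1 + a2*d2 + a3*d3 - fpi2)) / a3
# closed form, Eq. (4.16); the d2 dependence has dropped out
O8_closed = -(s0 * (s0 + 2*c)**2 + c**3) * (d0 - fpi2) / 5.0 \
            + 0.3 * (3.0 * (s0**2 + c**2) + 4.0 * s0 * c) * d1 - d3
assert abs(O8_sum_rule - O8_closed) < 1e-10
```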
Notice the welcome absence of the term involving the second moment, i.e.
$d_{2}$; it cancels out when substituting in Eq.(\ref{4.15}) the sum rule for
$\mathcal{O}_{6}$, Eq.(\ref{4.7}). Choosing again the initial value
$s_{0}=3\;\mbox{GeV}^{2}$, one obtains for $\mathcal{O}_{8}$ the results shown
in Fig. 10. One can see again a stability region near $c=2.5\;\mbox{GeV}^{2}$,
leading to the result
\begin{equation}
\mathcal{O}_{8}(3 \;\mbox{GeV}^{2})=-(0.0048\pm 0.0039)\;\mbox{GeV}^{8}\;.%
\end{equation}
It is worth mentioning that the polynomial coefficients entering the sum rule
Eq.(\ref{4.16}) differ significantly from the ones in the corresponding pinched
sum rule Eq.(\ref{3.4}). It is therefore reassuring that both results for
$\mathcal{O}_{8}$ are compatible. To check for \textbf{strong stability} we
have, once again, varied $s_{0}$ in the range $2.5\leq s_{0}(\mbox{GeV}^{2}%
)\leq3$. For $s_{0}=2.5\;\mbox{GeV}^{2}$ we find $\mathcal{O}_{8}(2.5 \; \mbox{GeV}^{2}
)=-(0.0056\pm0.0024)\;\mbox{GeV}^{8}$, and similar results for other values of
$s_{0}$ in the above range. Thus, the criterion of \textbf{strong stability}
is again satisfied, albeit within large errors. Proceeding beyond dimension
$d=8$ is marred by the same problems mentioned at the end of Section 3; the
minimizing polynomial FESR do not avoid the divergence of the OPE, nor the
increasing importance of operator mixing.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=2.1619in,
width=2.8634in
]%
{O8c.eps}%
\caption{$\mathcal{O}_{8}$ as a function of the parameter $c$.}%
\end{center}
\end{figure}
\section{Conclusion}
The final ALEPH data for the chiral spectral function $v(s)-a(s)$ shows
clearly that this spectral function has not yet reached its asymptotic form
dictated by perturbative QCD, i.e. it does not vanish, even at the highest
energies attainable in $\tau$-decay. If the asymptotic regime had been reached
precociously, let us say at $Q^{2}\simeq2\;\mbox{GeV}^{2}$, then it would have
been straightforward to calculate the non-perturbative condensates with the
help of the Cauchy Integral. Since this is not the case, some method to
improve convergence must be applied. We have shown that in the framework of
FESR this can be done by suitably reducing the impact of the high energy
region in the dispersive integral, either by using pinched sum rules or by
using minimizing polynomial sum rules. We first used the data in a pinched
linear combination of the first two Weinberg sum rules which follow from the
fact that there are no dimension $d =2$ and $d =4 $ operators contributing to the chiral
correlator to demonstrate the precocious saturation of the sum rule and the
remarkable effectiveness of the method. Motivated by this success, we
determined a number of QCD condensates by making maximal use of the fact that
there are no dimension $d =2$ and $d =4$ operators and requiring \textbf{strong
stability} for both methods, i.e. we varied the radius $s_{0}$ in the Cauchy
integral beginning at the end of $\tau$-decay phase space and required that
the condensates calculated from the data should be reasonably constant for all
$s_{0}$ in some finite region including the end of phase space. We do not
assume (as is done in most FESR calculations) that the dispersive integral
vanishes for $s_{0}\rightarrow\infty$. By showing that there is \textbf{strong
stability}, i.e. precocious saturation of the FESR, we prove that this region
contributes only negligibly. It would indeed be surprising if the observed
stability would disappear for $s_{0}$ larger than the end of phase space. We
do, however, have to make the assumption, inherent in all sum rule analyses of
$\tau$-decay, that unknown $O(\alpha_{s}^{2})$ effects of mixing of operators
of different dimensions are negligible for the relevant duality range
$2.5 \;\mbox{GeV}^{2}\lesssim s_{0}\lesssim 3\; \mbox{GeV}^{2}$. We have checked
explicitly that this holds in all the cases considered in this work
when one takes into account the logarithmic term of the
dimension-six Wilson coefficient. The results for $\mathcal{O}_{6}$
and $\mathcal{O}_{8}$ satisfy this strong stability criterion as is best seen
from the figures. Extraction of higher condensates of dimension $d\geq10$
leaves room for interpretation, but the conclusion that they grow rapidly with
dimensionality is rather obvious. Together with the increasing importance of
operator mixing, it makes the extraction of these condensates a difficult
exercise. Our result that $\mathcal{O}_{6}$ and $\mathcal{O}_{8}$ have the
same sign is in conflict with some of the earlier determinations based on the
incomplete ALEPH data but agrees with others (see e.g. \cite{Friot} for a
comparative summary).
ACKNOWLEDGMENTS: We wish to thank Hubert Spiesberger and Alexei Pivovarov for discussions.
\section{Introduction}
Stereo algorithms benefit enormously from benchmarks~\cite{scharstein2002taxonomy}. They provide quantitative evaluation to encourage competition and track progress. Despite great progress over the past years, many challenges still remain unsolved, such as transparency, specularity, lack of texture and thin objects. These image regions are called hazardous regions~\cite{zendel2015cv} because they are likely to cause the failure of an algorithm. These regions are sometimes small, uncommon and do not have a big impact on overall performance, but they are critical in the real world. For example, a street light is a thin object and covers a small region of an image, but missing it could be a disaster for autonomous driving.
Images in the real world contain different degrees of hazardous factors; for example, images in the KITTI dataset~\cite{menze2015object} contain specular windshields or dark tunnels. In order to better study algorithm robustness, images were captured under extreme weather conditions \cite{meister2012outdoor} or through rendering~\cite{peris2012towards,ros2016synthia}. But these images can only be sparse samples of different hazardous degrees. Even if it were possible to collect a huge dataset covering many degrees of different hazards, its sheer size would make labeling the hazardous regions of these images prohibitively expensive.
\begin{figure}
\includegraphics[width=\columnwidth]{img_grad.jpg}
\caption{\label{fig:specularity} Different levels of specularity of the TV. From top to bottom: input image, disparity estimate, and error with respect to ground truth (computed only for the specular regions). The visual difference in the first row is subtle, but it poses a very big challenge for a state-of-the-art method~\cite{chakrabarti2015low}. Best seen in color.}
\end{figure}
To address the problem of thoroughly testing stereo algorithms, we develop a data generation tool for researchers to precisely control {\it hazardous factors}, e.g. material properties, of a virtual scene and produce their own images. For example, in Fig.~\ref{fig:specularity}, we use it to vary the degree of specularity and show how this impacts the performance of a state-of-the-art stereo algorithm~\cite{chakrabarti2015low}. More generally, our approach enables us to follow the standard strategy in scientific research of changing variables separately and systematically and studying their impact.
In particular, we use this technique in our paper to study the relationship between hazardous factors and algorithm performance to understand the robustness of an algorithm. Adversarial attack~\cite{Xie2017-md,Nguyen2015-ul} is another popular approach to understand model robustness. It requires the model to be differentiable and is mostly applied to deep neural networks. Since the hazardous factors are well understood in binocular stereo~\cite{zendel2015cv}, we are able to study model robustness by controlling the hazardous factors, which is more systematic.
In Fig.~\ref{fig:specularity}, the small perturbation of the images is produced by changing a material property instead of by back-propagation, so the perturbation is easy to find and to validate in the real world. The discovery from synthetic images can be validated using real images, and this validation only requires a small number of test images (hence avoiding the need for excessive annotation of real images). In our diagnosis experiment, after analyzing the impact of each individual hazardous factor, we also validate our result on real-world datasets with annotated images.
In this paper, we use our synthetic image generation tool to study the effect of four important hazardous factors on stereo algorithms. These hazardous factors are chosen to violate some of the basic assumptions of traditional stereo algorithms. For example, specular and transparent surfaces violate the brightness consistency constraint, which assumes that the intensity properties of corresponding points are similar (because specularity means that the intensity of a surface point will depend on the viewpoint). Although these hazardous factors are well known to the community, there have been few attempts at quantitative evaluation of the impact of each individual factor, due to the challenge of annotating these factors. We were inspired by the theoretical framework to analyze hazardous factors proposed by Zendel \etal~\cite{zendel2015cv}, but their framework requires a lot of manual annotation of hazardous regions of images. Our tool can produce these hazardous region masks automatically, making their theoretical framework practical.
To summarize, we develop a data generation tool called UnrealStereo and use it to stress test stereo algorithms. The main contributions of our paper are as follows: Firstly, we provide a tool enabling researchers to control the hazardous factors in a virtual environment to analyze stereo algorithms. Secondly, hazardous regions are automatically determined in our framework, making the theoretical framework in \cite{zendel2015cv} practical. Thirdly, we control the hazardous factors to show the characteristics of different stereo methods and validate our results on annotations of the Middlebury and KITTI datasets. Our tools are open source and will be made available to the community.
\section{Related Work}
\subsection{Robustness Evaluation for Stereo Vision}
Many stereo datasets have been created for training and evaluating stereo algorithms. The Middlebury stereo dataset~\cite{scharstein2002taxonomy,scharstein2003high,hirschmuller2007evaluation,scharstein2014high} is a widely used indoor scene dataset, which provides high-resolution stereo pairs with nearly dense disparity ground truth. The KITTI stereo dataset~\cite{geiger2012we,menze2015object} is a benchmark consisting of urban video sequences where semi-dense disparity ground truth along with semantic labels is available. Tanks and Temples~\cite{Knapitsch:2017:TTB} and ETH3D~\cite{schoeps2017cvpr} were proposed recently as benchmarks for multi-view stereo. Besides these most commonly used ones, \cite{Zendel_2017_CVPR} gives a detailed summary of existing stereo datasets. Due to the demand for complex equipment and expensive human labor, real-world datasets usually have relatively small sizes, and the uncertainty in measurements imposes a constraint on their ground truth accuracy. Furthermore, it is not easy to control hazardous factors in real-world settings.
Many stereo benchmarks provide scene variations to help understand the robustness of stereo algorithms. Middlebury~\cite{hirschmuller2007evaluation,scharstein2014high} provides scenes with varying degrees of illumination and exposure. Neilson \etal~\cite{neilson2008evaluation} provide synthetic data with varying texture, levels of noise and baselines. The Tsukuba dataset~\cite{peris2012towards} provides the same synthetic video scene with four different illuminations. In the HCI/Bosch robustness challenge~\cite{meister2012outdoor}, images in challenging weather were captured. In order to test algorithms in different conditions in a controlled way, a lab setup based on toys and a robotic arm was created~\cite{borji2016ilab} to control hazardous factors, but the images are very different from normal conditions. \cite{morales2009robustness} evaluated the robustness of stereo algorithms against differing noise parameters. Haeusler \etal~\cite{haeusler2013synthesizing} designed cases for typical stereo failures using non-realistic synthetic 2D patterns without an underlying 3D scene.
Taking the average of pixel errors over the full image is not enough for performance evaluation~\cite{kostkova2003dense}. \cite{scharstein2002taxonomy} proposes region-specific evaluations for textureless areas, disparity discontinuities and occlusion. The HCI stereo metrics~\cite{honauer2015hci} focus on disparity discontinuities, planar surfaces, and fine structures. CV-HAZOP~\cite{zendel2015cv} proposes the idea of analyzing hazardous factors in an image. Their method requires manually annotating risk factors, such as specular areas, from images, which is difficult to perform and hard to scale up. Our synthetic pipeline can automatically identify these hazardous regions, enabling large-scale analysis. The ability to control the severity of hazardous factors also helps us to better understand the weaknesses of an algorithm.
\subsection{Synthetic Dataset for Computer Vision}
Synthetic data has attracted a lot of attention recently because of the convenience of generating large amounts of images with ground truth, and the progress of computer graphics makes synthesizing realistic images much easier. Synthetic data has been used in stereo~\cite{peris2012towards,butler2012naturalistic,haeusler2013synthesizing,ros2016synthia,mayer2016large}, optical flow~\cite{barron1994performance,butler2012naturalistic}, detection~\cite{qiu2016unrealcv,tremblay2018training} and semantic segmentation~\cite{richter2016playing, ros2016synthia, gaidon2016virtual,tsirikoglolu2017procedural}. Images and ground truth are provided in these datasets, but the virtual scenes are not available to render new images or change the properties of these scenes. Instead of constructing proprietary virtual scenes from scratch, we use game projects that are publicly available in the marketplace. Our tool enables tweaking virtual scenes, e.g. by varying the hazardous factors in virtual experiments, to generate more images and ground truth. Many virtual scenes constructed by visual artists in the marketplace can be used. Unlike Sintel~\cite{butler2012naturalistic} and FlyingThings3D~\cite{mayer2016large}, our approach utilizes more realistic 3D models arranged in real-world settings.
\section{Hazardous Factor Analysis}
Most stereo algorithms can be formulated in terms of minimizing an objective function w.r.t. the disparity $d$,
\begin{equation}
E(d) = \sum_{\boldsymbol{p}} E_{d}(d(\boldsymbol{p})) + \lambda \sum_{\boldsymbol{(p,q) \in \mathcal{C}}} E_{s}(d(\boldsymbol{p}), d(\boldsymbol{q}))
\end{equation}
where the data term $E_{d}$ usually represents a matching cost and the smoothness term $E_{s}$ encodes context information within a support region $\mathcal{C}$ of pixel $\mathbf{p}$ ($\mathbf{q}$ is a pixel in $\mathcal{C}$). Local stereo methods~\cite{geiger2010efficient,ma2013constant} do not have a smoothness term and utilize only local matching cues. Global methods~\cite{hirschmuller2005accurate, yamaguchi2014efficient,zbontar2015computing,chakrabarti2015low} incorporate the smoothness priors on neighboring pixels or superpixels in the smoothness term.
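As a concrete (and deliberately minimal) illustration of a purely local method, the following Python sketch implements winner-take-all SAD block matching, i.e. a data term $E_{d}$ with window aggregation and no smoothness term $E_{s}$. It is a textbook baseline used here only for exposition, not the code of any algorithm cited above:

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=5):
    """Winner-take-all SAD block matching: data term only, no smoothness term."""
    H, W = left.shape
    half = win // 2
    L, R = left.astype(np.float64), right.astype(np.float64)
    big = 1e9                                       # penalty for invalid pixels
    cost = np.full((max_disp + 1, H, W), big)
    for d in range(max_disp + 1):
        ad = np.full((H, W), big)
        ad[:, d:] = np.abs(L[:, d:] - R[:, :W - d])  # |I_L(x) - I_R(x - d)|
        agg = np.full((H, W), big)
        for y in range(half, H - half):              # window aggregation
            for x in range(d + half, W - half):
                agg[y, x] = ad[y - half:y + half + 1,
                               x - half:x + half + 1].sum()
        cost[d] = agg
    return cost.argmin(axis=0)                       # winner-take-all over d

# sanity check on a synthetic rectified pair with a constant disparity of 3
rng = np.random.RandomState(0)
right_img = rng.rand(20, 40)
left_img = right_img.copy()
left_img[:, 3:] = right_img[:, :-3]                  # shift texture right by 3 px
disp = sad_disparity(left_img, right_img, max_disp=6)
assert (disp[5:15, 10:35] == 3).all()
```

On well-textured Lambertian input the minimum of the aggregated cost recovers the true shift; the hazardous factors discussed next are precisely the cases where this data term fails.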
The success of these methods relies on some basic assumptions holding for the scenes they encounter. First, to establish correspondence between binocular image pairs, image patches of the projection of the same surface should be similar, which requires the Lambertian surface assumption and the single image layer assumption. Second, the local surface should be well textured for matching algorithms to extract features. Third, the smoothness term in global methods functions under the assumption that the disparity varies slowly and smoothly in space. However, these assumptions can easily be broken in real-world scenarios. For example, the first assumption does not hold for specular surfaces, which are not Lambertian, or for transparent surfaces, which create multiple image layers. Textureless objects are everywhere, such as white walls and objects under intense lighting. Besides, the smoothness assumption does not hold for regions with many jumps in disparity, e.g. fences and bushes.
Since the aforementioned factors often break the assumptions of most stereo methods, we call them \textit{hazardous factors} following~\cite{zendel2015cv}. Special efforts have been made to resolve these difficulties in recent years. Yang \etal~\cite{yang2008near} proposed an approach which replaces estimates in textureless regions with planes. Nair \etal~\cite{nair2015reflection} derive a data term that explicitly models reflection. G\"uney \etal~\cite{guney2015displets} leverage semantic information and 3D CAD models to resolve stereo ambiguities caused by specularity and lack of texture. An end-to-end trained DCNN based algorithm~\cite{mayer2016large} performs well on specular regions of KITTI stereo 2015 after finetuning on the training set.
Evaluating stereo algorithms under different hazardous factors on real data is highly inconvenient, because hazardous regions 1) require annotation by human labor and 2) can hardly be controlled. To this end, we develop a synthetic data generation tool for systematic study of hazardous factors.
For the rest of this section, we first describe the data generation tool UnrealStereo. Then we vary the hazardous factors to produce hazardous regions to stress test state of the art stereo algorithms. Finally, hazardous regions are computed for images rendered from realistic 3D scenes to analyze the impact of each hazardous factor.
\subsection{UnrealStereo Data Generation Tool}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{multi_cams}
\caption{\label{fig:multi_cam} UnrealStereo is a synchronized multiple camera system. From left to right are a two-camera system used in our stereo experiment, cameras mounted on a virtual car and a 16 camera system surrounding a virtual human head.}
\end{figure*}
Game and movie industries are able to create realistic computer graphics images, but it is expensive and technically challenging for researchers to do so. Professional tools such as Blender and Maya are difficult to use because 1) they are created for professional designers with many features irrelevant to research, and mastering them requires weeks to months of experience, 2) they are designed for rendering images and require a significant engineering effort to generate correct ground truth for vision tasks, and 3) 3D models for these tools are either expensive or of low quality.
UnrealStereo solves these problems by providing an easy-to-use tool. The tool is designed for multi-view vision data generation and diagnosis for researchers. It is based on Unreal Engine 4 (UE4), an open-source 3D game engine.
UnrealStereo supports multiple cameras. Users can place virtual cameras in a virtual scene according to their specification. An example is shown in Fig.~\ref{fig:multi_cam}. It generates images and ground truth synchronously from multiple cameras, which enables capturing a dynamic scene. Our optimized code makes data generation very fast, adding only a small overhead to the rendering. For a two-camera setup, the speed can reach 30 - 60 FPS depending on the complexity of the scene. This speed is important for large-scale data generation and interactive diagnosis.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{extra_gt.png}
\caption{\label{fig:extra_gt} From left to right are rendered images, object instance mask, material information (green shows transparent and red shows specular region).}
\end{figure}
The depth generation from Unreal Engine is improved based on \cite{qiu2016unrealcv}. The depth is stored as floating point instead of 8-bit integer to preserve precision. The depth of transparent objects is missing from the depth buffer of UE4; we fixed this issue to produce accurate depth for transparent objects. Dynamic scenes and visual effects are supported. Many scenes were tested to ensure compatibility.
For the stereo analysis, we created a two-camera system. The second camera automatically follows the first one and keeps the relative position fixed. The distance between the two cameras can be adjusted to simulate different baselines. The image and depth are captured from the 3D scenes for both cameras, along with the extra information shown in Fig.~\ref{fig:extra_gt}. Given a rectified image pair, the goal of stereo matching is to compute the disparity $d$ for each pixel in the reference image. The disparity is defined as the difference in horizontal location of a point in the left image and its corresponding one in the right. The conversion between depth $z$ and disparity $d$ is then given by the relation $z = \frac{fB}{d}$, where $f$ is the focal length of the camera and $B$ is the baseline, i.e. the distance between the camera centers. The correctness of the disparity is verified by warping the reference image according to its disparity map and comparing it with the target image.
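The depth--disparity relation can be sketched in a few lines of Python (the focal length, baseline and principal point values below are placeholders chosen for illustration, not our camera settings):

```python
import numpy as np

def depth_to_disparity(z, f, B):
    """d = f * B / z, the inverse of z = f * B / d for a rectified pair."""
    return f * B / z

f, B, cx = 320.0, 0.2, 160.0      # focal length (px), baseline (m), principal point
z = np.array([1.0, 2.0, 8.0])
assert np.allclose(depth_to_disparity(z, f, B), [64.0, 32.0, 8.0])

# warping consistency: a 3D point (X, Y, Z) projects to column
# xL = f*X/Z + cx in the left image and xR = f*(X - B)/Z + cx in the right,
# so the horizontal offset xL - xR equals the disparity f*B/Z
X, Z = 0.5, 2.0
xL = f * X / Z + cx
xR = f * (X - B) / Z + cx
assert np.isclose(xL - xR, depth_to_disparity(Z, f, B))
```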
UnrealStereo supports hazardous factor control, such as adjusting material property, which enables the diagnosis experiment in Sec.~\ref{subsection:manual}. The hazardous factor control can be done with Python, through the communication layer provided by UnrealCV~\cite{qiu2016unrealcv}. This makes it possible to generate various cases to stress test an algorithm.
The 3D scenes used in this paper are created by 3D modelers trying to mimic real-world configurations. This is important for two reasons: 1) many diverse challenging cases can prevent the over-fitting which usually happens in a toy environment; 2) the semantic information provides the opportunity to solve low-level vision problems with high-level semantic cues~\cite{guney2015displets}. The physics-based material system of UE4~\cite{karis2013real} not only makes the rendering realistic, but also enables UnrealStereo to tweak material parameters to create hazardous challenges.
Unreal Engine uses a rasterization renderer combined with off-line baked shadow maps to produce realistic lighting effects. The recently announced V-Ray plugin provides another powerful ray tracing renderer for UE4. Our tool can support both renderers. Due to the lack of 3D models for the ray tracing renderer, our synthetic images are mainly produced by the rasterization renderer.
\subsection{Controlling Hazardous Factors}
\label{subsection:manual}
\begin{figure*}
\begin{center}
\subfigure[Specularity]{ \label{fig:subfig:a} \includegraphics[width=0.4\columnwidth]{reflective.png}}
\subfigure[Texturelessness]{\label{fig:subfig:b} \includegraphics[width=0.4\columnwidth]{textureless.png}}
\subfigure[Transparency]{ \label{fig:subfig:c} \includegraphics[width=0.4\columnwidth]{transparent.png}}
\subfigure[Disparity jumps]{ \label{fig:subfig:d} \includegraphics[width=0.4\columnwidth]{jumps.png}}
\end{center}
\caption{\label{fig:cases} From (a) to (d) are cases we designed to test algorithms. They are specularity, lack of texture, transparency and disparity jump. In (a), the screen of a TV is set to be specular. In (b), the wall and the ceiling in the room are made textureless. In (c), the sliding door has a transparent surface. In (d), objects such as bamboos, fences and plants give frequent disparity discontinuities.}
\end{figure*}
The UnrealStereo tool we developed is able to produce hazardous cases in the virtual world with lighting and material controlled, making it tractable to conduct precise evaluation. As a demonstration, we establish four highly realistic virtual scenes, each of which includes one factor. Stereo image pairs are rendered from various viewpoints together with dense disparity ground truth. Fig.~\ref{fig:cases} shows snapshots of the four scenes.
\noindent\textbf{Specularity}
Shown in Fig.~\ref{fig:subfig:a}, the major specular object in the scene is the TV screen. The specularity is controlled by the roughness of the metallic material.
\noindent\textbf{Texturelessness}
In Fig.~\ref{fig:subfig:b}, the wall and the ceiling in the room are made textureless because they are the most common textureless objects in the real world. To achieve texturelessness while keeping realism, we do not directly remove the material of the walls but adjust the scale property of the parameterized texture. Various viewpoints are used from which the walls form slanted planes, raising challenges to some less intricate regularizers or smoothness terms.
\noindent\textbf{Transparency}
In Fig.~\ref{fig:subfig:c}, we placed a transparent sliding door in a room. By adjusting the opacity property of the glass on the door, we are able to create different levels of transparency.
\noindent\textbf{Disparity Jumps}
In the disparity jumps case (Fig.~\ref{fig:subfig:d}), thin objects such as bamboos, fences and plants of various sizes and poses are placed in the scene, which easily form frequent disparity discontinuities distributed within a small region.
One of the advantages of our tool is the ability to vary the extent of a hazard while keeping the rest of the scene intact. We isolate the hazardous factors and focus on one at a time. There are certainly other hazardous factors that can be controlled in our framework. For example, the area of textureless regions is crucial to stereo methods: as a textureless region grows, it becomes harder for the smoothness term to exploit context information such as the disparities of neighboring well-textured objects.
Because synthetic and real data lie in different domains, it is important to verify evaluation results obtained on virtual scenes against real-world datasets. To this end, we manually annotated the corresponding hazardous regions on Middlebury 2014~\cite{scharstein2014high} and KITTI 2015~\cite{menze2015object}. Details and results for evaluation on these cases are presented in Section~\ref{experiment:section1}.
\subsection{Automatic Hazardous Region Discovery}
\label{subsection:masks}
Manually designed hazardous cases are important for understanding an algorithm. Furthermore, our tool enables us to tweak many realistic virtual scenes to perform large-scale evaluation. The popularity of virtual reality has produced many high-quality virtual environments, which can be purchased at a fair price (less than \$50) or are even free.
Our rendering process produces information beyond depth, including object instance masks and material properties. Using this extra information, we can locate the hazardous regions described in Section~\ref{subsection:manual}. Fig.~\ref{fig:mask} shows an example of these masks. The material information of each object is annotated only once, before rendering; no further human effort is required to obtain the corresponding masks.
Textureless regions can also be computed from the image gradient, and disparity-jump regions can be computed from accurate disparity ground truth~\cite{szeliski1999experimental,scharstein2002taxonomy}. Compared with these approaches, ours is generic and covers more hazardous factors.
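As a concrete illustration of the gradient-based alternative mentioned above, a minimal sketch of a textureless-region detector is given below. The threshold value and the absence of local window smoothing are simplifying assumptions, not choices from our pipeline; our own masks come from the renderer's material annotations instead.

```python
import numpy as np

def textureless_mask(img, thresh=0.5):
    """Mark pixels whose gradient magnitude falls below `thresh`.

    `img` is a 2-D grayscale array; `thresh` is an illustrative value.
    A real detector would also aggregate gradients over a local window.
    """
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) < thresh

# A constant patch is entirely textureless; a linear ramp is not.
flat = np.ones((8, 8))
ramp = np.tile(np.arange(8.0), (8, 1))
```

Unlike this image-based heuristic, material-based masks also capture specular and transparent regions, where the image gradient is uninformative.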
\begin{figure}
\begin{center}
\centering
\includegraphics[width=0.8\columnwidth]{masks1.jpg}
\end{center}
\caption{\label{fig:mask} Binary masks that we compute from the object mask and material properties. From top left, clockwise: masks for the non-occluded region, object boundary region, specular region and textureless region. Best viewed in color.}
\end{figure}
We establish a large dataset using six publicly available game scenes: a small indoor room, a large temple scene, three houses and one city block. The houses contain different room layouts such as living rooms, kitchens and bathrooms. The largest scene contains more than 1,000 objects, with hundreds on average, including reflective objects such as mirrors, bathtubs and metal statues, and transparent objects such as glass, glass doors and windows. Snapshots of these scenes can be seen in Fig.~\ref{fig:games}. Specifically, for each scene we record a video sequence that covers different viewpoints in the environment, resulting in 10,825 image pairs in total.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{db_overview}
\caption{\label{fig:games} The six virtual scenes we use in our experiments, from left to right are image, depth and object mask. These virtual scenes are purchased from Unreal Engine marketplace.}
\end{figure*}
A unique feature of our dataset is that the hazardous factors of the virtual worlds can be controlled, so more challenging images can be produced.
Instead of a fixed set of images, we provide a synthetic image generation tool, which can be used to design new hazardous cases, generate more images, and incorporate additional game scenes from the marketplace.
\section{Experiment}
We choose five types of state-of-the-art stereo algorithms to evaluate on the challenging testing data we rendered. They are representatives of local methods ELAS~\cite{geiger2010efficient} and local method with spatial cost aggregation CoR~\cite{chakrabarti2015low}, global methods on pixel-level MC-CNN~\cite{zbontar2015computing} and superpixel-level SPS-St~\cite{yamaguchi2014efficient}, as well as the end-to-end CNN based method DispNetC~\cite{mayer2016large}. We adopt the authors' implementations of these methods. For MC-CNN, we use the model weights from the authors' KITTI submission. For DispNetC, the original model trained on the synthetic dataset FlyingThings3D~\cite{mayer2016large} is used.
Two error metrics, i.e. bad-pixel percentage (BadPix) and end-point error (EPE), are used in evaluation.
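Both metrics can be implemented in a few lines; the sketch below uses a bad-pixel threshold of 3 disparities and our own variable names, which are illustrative choices rather than a specification of the benchmark code.

```python
import numpy as np

def end_point_error(pred, gt, mask=None):
    """Mean absolute disparity error (EPE) over the evaluated pixels."""
    mask = np.ones(gt.shape, bool) if mask is None else mask
    return float(np.abs(pred - gt)[mask].mean())

def bad_pixel_percentage(pred, gt, tau=3.0, mask=None):
    """Percentage (BadPix) of evaluated pixels whose error exceeds `tau`."""
    mask = np.ones(gt.shape, bool) if mask is None else mask
    return float(100.0 * (np.abs(pred - gt)[mask] > tau).mean())

# Toy example: errors 0, 1, 4, 5 give EPE 2.5 and BadPix (tau=3) 50%.
gt = np.zeros(4)
pred = np.array([0.0, 1.0, 4.0, 5.0])
```

The optional `mask` argument restricts evaluation to a region of interest, e.g. the non-occluded or hazardous regions used throughout our experiments.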
\subsection{Evaluation on Controlled Hazardous Levels \label{experiment:section1}}
We use 10 viewpoints for each of the hazardous cases we designed, i.e., specular, semi-transparent, textureless, and disparity jumps, covering both fronto-parallel and slanted surfaces. At each viewpoint of the hazardous scenes except the disparity jumps case, we start from the easiest parameter settings (roughest, opaque, or well-textured) and adjust the corresponding parameter step by step to increase the extent of hazard, creating different levels of the corresponding hazard per viewpoint. We exclude occluded regions and only evaluate hazardous regions identified by the method described in Section~\ref{subsection:masks}. Results are shown in Fig.~\ref{fig:case_performance} and Table~\ref{tab:case1}. As a reference, overall performance on Middlebury and KITTI is shown in Table~\ref{tab:case1}.
\begin{figure}[h]
\includegraphics[width=0.235\textwidth]{curves_Bad_3.pdf}
\includegraphics[width=0.235\textwidth]{curves_EPE.pdf}
\caption{\label{fig:case_performance} The influence of texturelessness, specularity and transparency at different levels in terms of bad-pixel percentage and end-point error. The level of each hazardous factor is controlled by parameters for corresponding materials. Each data point represents an average over 10 viewpoints. }
\end{figure}
\input{tables4_1}
\begin{table}[h]
\small
\centering
\begin{tabular}{lcccc}
\hline
& Spec. & Txtl. & Tran. & Jumps\\
\hline
KITTI & 0.55 & 0.16 & 0.75 & - \\
MB & 0.76 & 0.87 & - & - \\
\hline
\end{tabular}
\vspace{5pt}
\caption{\label{tab:corr} Correlation between performance on our dataset and real-world datasets on hazardous regions in EPE.}
\end{table}
The ability to control hazardous factors enables us to analyze a stereo algorithm from different perspectives. We can study not only the overall performance, but also the robustness to different hazardous cases. Below are some interesting observations from the experimental results.
First, methods which perform better in general are not always doing well on hazardous regions. For example, the state-of-the-art method MC-CNN achieves the best overall scores on both real-world datasets and our synthetic dataset (see Table~\ref{tab:attr}), but it is not the best for many hazardous cases. We compute the correlation coefficients of the performance of these methods for hazardous factors at high level and their overall performance in EPE. For specular, textureless, transparent and disparity jumps factors, they are $0.25$, $0.41$, $0.43$, $0.63$ respectively. Therefore, the overall scores cannot reflect the characteristics of an algorithm on hazardous regions.
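The correlation quoted above is a plain Pearson coefficient computed across methods. The sketch below illustrates the computation with made-up per-method scores, not the values reported in the paper.

```python
import numpy as np

# Hypothetical per-method EPE scores (one entry per algorithm);
# the actual numbers in our experiments differ.
overall_epe = np.array([1.2, 0.9, 1.5, 1.1, 2.0])
hazard_epe  = np.array([6.0, 4.5, 5.0, 7.5, 6.5])

# Pearson correlation between overall and hazardous-region performance.
corr = np.corrcoef(overall_epe, hazard_epe)[0, 1]
```

A coefficient far below 1 indicates, as argued above, that overall scores are a poor proxy for behavior on hazardous regions.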
Second, different regularization methods have a big impact on robustness. Cost aggregation over suitable regions or regularization on superpixels can, to some extent, reduce the vulnerability to matching ambiguities. As shown in Fig.~\ref{fig:case_performance}, CoR and SPS-St exhibit high robustness, as they outperform other methods for the specularity and transparency factors at all levels under both metrics. Intuitively, large support regions also help regularize the result on textureless regions, which is confirmed by the leading performance of CoR and SPS-St for texturelessness.
Third, the ability to precisely control the hazardous factors enables us to discover more characteristics of the algorithms than standard benchmarks do. As shown in the curves for texturelessness in Fig.~\ref{fig:case_performance}, DispNetC exhibits an early insensitivity to further texture weakening, which may result from a different way of incorporating context, i.e., through a large receptive field. Without controlling hazardous factors, such information is hard to discover.
From the experiments on disparity jumps, we find that the global methods evaluated here still suffer considerably in these areas even though they take depth discontinuity into consideration. The evaluated methods perform poorly on disparity discontinuity regions, as shown in Table~\ref{tab:case1}. For the BadPix metric, CoR is slightly better than the others, while DispNetC achieves the best result in EPE. The reason that DispNetC outperforms the others in EPE could be that it does not explicitly impose smoothness constraints, which helps it avoid erroneous over-smoothing.
\subsection{Comparison with Middlebury and KITTI\label{experiment:compare}}
To verify our results, we annotate specular and textureless regions on the Middlebury 2014 and KITTI 2015 training sets, and transparent regions on the latter (objects in Middlebury are rarely transparent). On Middlebury, annotation and evaluation are performed at quarter resolution of the original images. Disparity jumps are not included here because the missing ground truth for many pixels on both datasets makes disparity discontinuity computation inaccurate. To annotate hazardous regions of these datasets, annotators are asked to mask the corresponding regions with the Photoshop selection tool; examples are shown in Fig.~\ref{fig:KITTI_MB_anno}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{KITTI_MB_anno.png}
\caption{\label{fig:KITTI_MB_anno} Hazardous regions annotation on KITTI (left) and Middlebury (Right) used to validate the results on synthetic data. Specular and textureless regions are encoded by red and green color.}
\end{figure}
\input{big_table}
Performance on annotated hazardous regions is consistent with our synthetic dataset. As shown in Table~\ref{tab:corr}, there is a strong correlation between performance on our dataset and real-world datasets. For textureless regions on KITTI, the correlation coefficient is $0.16$ at high level and $0.54$ for medium level, which indicates that KITTI shares similar statistics for textureless regions with our dataset at medium level.
As shown in Table~\ref{tab:case1}, MC-CNN does not outperform others on hazardous regions on Middlebury and KITTI, which verifies the first conclusion in Section~\ref{experiment:section1} that methods which perform better in general are not always doing well on hazardous regions. The second conclusion also holds true here. Since global methods, e.g. SPS-St and MC-CNN, and local methods with large support regions, e.g. CoR, obtain lower errors on specular and transparent regions than other methods, they are more robust to these hazardous factors.
We also find that Middlebury and KITTI have different statistics. For example, on textureless regions, DispNetC performs best on Middlebury but not on KITTI. The analysis of DispNetC in Sec.~\ref{experiment:section1} shows that it performs differently at different levels of texturelessness. Since Middlebury and KITTI are both real-world datasets in which the level of hazardous factors is unknown and not controllable, the performance of DispNetC can differ between them. According to Fig.~\ref{fig:case_performance}, it is possible that the annotated textureless regions on Middlebury are at a higher level while those on KITTI are more towards the lower level.
\subsection{Evaluation on Automatically Generated Hazardous Regions \label{experiment:section2}}
We evaluate these algorithms on a testing set of 484 stereo image pairs randomly sampled from the 10k images of the six virtual scenes. Hazardous regions are generated automatically. The average performance on the full, non-occluded and four hazardous regions is shown in Table~\ref{tab:attr}.
The top performance of SPS-St and CoR on specular and transparent regions verifies the analysis in Section~\ref{experiment:section1} that non-local regularization using large support regions reduces the influence of matching ambiguity. That DispNetC outperforms the others on textureless regions could result from the level of texturelessness, since Fig.~\ref{fig:case_performance} shows that DispNetC is robust in extremely textureless scenes.
It is also worthwhile to compare these results with the overall performance on Middlebury and KITTI in Table~\ref{tab:case1}. The correlation coefficients for the performance in EPE between our dataset and Middlebury and KITTI are 0.91 and 0.61, respectively. The overall errors are higher on our data. There are two possible causes. One is that the percentage of hazardous regions in our dataset is larger than in KITTI. The other is that KITTI only provides semi-dense ground truth, which excludes many hazardous regions, e.g., the windows of cars.
\section{Conclusion}
In this paper, we presented a data generation tool, UnrealStereo, that generates synthetic images to create a stereo benchmark. We used this tool to analyze the effect of four hazardous factors on state-of-the-art algorithms. Each factor was varied over different degrees, even to an extreme level, to study its impact. We also tested several state-of-the-art algorithms on six realistic virtual scenes. The hazardous regions of each image were automatically computed from the ground truth, e.g., the object mask and the material properties. We found that the state-of-the-art method MC-CNN~\cite{zbontar2015computing} outperforms others in general, but lacks robustness in hazardous cases. The DCNN-based method~\cite{mayer2016large} exhibits interesting properties due to its awareness of larger context. We also validated our findings by comparing to results on the real-world datasets where we manually annotated the hazardous regions. The synthetic data generation tool enables us to explore many degrees of hazardous factors in a controlled setting, so that the time-consuming manual annotation of real images can be reduced. Manual annotation will only be needed in a limited (sparse) number of cases in order to validate the results from synthetic images.
Our data generation tool can be used to produce more challenging images and is compatible with publicly available high-quality 3D game models. This makes our tool suitable for many applications beyond stereo. In future work, we will extend our platform to include more hazardous factors, such as the ratio of occlusion, and analyze more computer vision problems. It is also interesting to explore the rich ground truth we generate, such as object masks and material properties. This semantic information will enable the development of computer vision algorithms that utilize high-level knowledge, for example stereo algorithms that use 3D car models~\cite{guney2015displets}.
\noindent\footnotesize\textbf{Acknowledgement:} This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) contract number D17PC00345. We also want to thank the reviewers for providing useful comments.
{
\small
\bibliographystyle{ieee}
\section{Introduction}
The classical Kuramoto model \cite{kuramoto,strogatz} describes a
collection of globally coupled phase oscillators that exhibits a
transition from incoherence to synchronization as the coupling
strength is increased past a critical value. Since real world
networks typically have a more complex structure than all-to-all
coupling \cite{newman1,barabasi1}, it is natural to ask what effect
interaction structure has on the synchronization transition. In
Ref.~\onlinecite{onset}, we studied the Kuramoto model allowing general
connectivity of the nodes, and found that for a large class of
networks there is still a transition to global synchrony as the
coupling strength exceeds a critical value $k_c$. We found that the
critical coupling strength depends on the largest eigenvalue of the
adjacency matrix $A$ describing the network connectivity. We also
developed several approximations describing the behavior of an order
parameter measuring the coherence past the transition. This past
work was restricted to the case in which $A_{nm}= A_{mn} \geq 0$,
that is, undirected networks in which the coupling tends to reduce
the phase difference of the oscillators.
Most networks considered in applications are directed
\cite{newman1,barabasi1}, which implies an asymmetric adjacency
matrix, $A_{nm} \neq A_{mn}$. Also, in some cases the coupling
between two oscillators might drive them to be out of phase, which
can be represented by allowing the coupling term between these
oscillators to be negative, $A_{nm} < 0$. The effect that the
presence of directed and mixed positive/negative connections can
have on synchronization is, therefore, of interest. Here we show how
our previous theory can be generalized to account for these two
factors. We study examples in which either the asymmetry of the
adjacency matrix or the effect of the negative connections are
particularly severe and compare our theoretical approximations with
numerical solutions.
\section{Background}\label{back}
In Ref.~\onlinecite{onset} we considered the onset of synchronization in networks of heterogeneous
coupled phase oscillators.
This situation can be modeled by the equation,
\begin{equation}\label{eq:coupled}
\dot{\theta}_n = \omega_n + k \sum_{m=1}^{N} A_{nm}\sin(\theta_{m} - \theta_{n}),
\end{equation}
where $\theta_n$, $\omega_n$ are the phase and natural frequency of
oscillator $n$, and $N \gg 1$ is the total number of oscillators.
The frequencies $\omega_n$ are assumed to be independently drawn
from a probability distribution characterized by a density function
$g(\omega)$ that is symmetric about a single local maximum at
$\omega = \overline{\omega}$. The mean frequency $\overline{\omega}$
can be shifted to $\overline{\omega} = 0$ by introduction of the
change of variables $\theta_n \to \theta_n -\overline{\omega} t $.
Thus we henceforth take $\overline{\omega} = 0$. The adjacency
matrix $\{A_{nm}\}$ determines the network connecting the
oscillators. Positive coupling was imposed in Ref.~\onlinecite{onset} by
the condition $A_{nm} \geq 0$. Furthermore, the matrix $A$ was
assumed to be symmetric and thus only undirected networks were
considered. In this Section we will review our results for this
class of networks, following Sec. II of Ref.~\onlinecite{onset}. Thus
throughout this Section $A_{nm}= A_{mn}\geq0$.
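Eq.~(\ref{eq:coupled}) can also be integrated directly; the following forward-Euler sketch on a random undirected network is illustrative only (the network model, coupling strength, and step size are arbitrary choices, not ones used in this paper).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
A = (rng.random((N, N)) < 0.2).astype(float)   # random graph, A_nm >= 0
A = np.maximum(A, A.T)                         # symmetric (undirected)
np.fill_diagonal(A, 0.0)

omega = rng.normal(0.0, 1.0, N)                # frequencies, g symmetric about 0
theta = rng.uniform(0.0, 2.0 * np.pi, N)       # random initial phases
k, dt = 0.5, 0.05

for _ in range(2000):
    # sum_m A_nm sin(theta_m - theta_n), computed by broadcasting
    diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + k * (A * np.sin(diff)).sum(axis=1))

# crude global coherence |<e^{i theta}>| as a diagnostic
R = abs(np.exp(1j * theta).mean())
```

Such direct integration is the costly baseline against which the approximations below (TAT, FDA, MFT) are compared.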
In order to quantify the coherence of the inputs to a given node,
a positive real valued local order parameter $r_{n}$ is defined by
\begin{equation}\label{eq:rn}
r_n e^{i \psi_n} \equiv \sum_{m=1}^{N} A_{nm}\langle e^{i \theta_m}\rangle_t,
\end{equation}
where $\langle\dots\rangle_t$ denotes a time average. To characterize the macroscopic
coherence for the whole network,
a global order parameter is defined by
\begin{equation}\label{eq:orderpara}
r = \frac{\sum_{n=1}^{N} r_{n}}{\sum_{n=1}^{N} d_{n}},
\end{equation}
where $d_n$ is the degree of node $n$ defined by
\begin{equation}
d_n = \sum_{m=1}^N A_{nm}.
\end{equation}
In terms of $r_n$, Eq.~(\ref{eq:coupled}) can be rewritten as
\begin{equation}\label{eq:coupled0}
\dot{\theta}_n = \omega_n - k r_{n} \sin(\theta_n - \psi_n) -k h_n(t),
\end{equation}
where the term $h_n(t)$ takes into account time fluctuations and is
given by $h_n = Im \{ e^{-i\theta_n}\sum_m A_{nm}\left( \langle
e^{i\theta_m}\rangle_t - e^{i\theta_m}\right)\} $, where $Im$ stands
for the imaginary part. As argued in Ref.~\onlinecite{onset}, when the number
of connections into each node is large, the term $h_n$ is small
compared to $r_n$ and we obtain approximately
\begin{equation}\label{eq:coupled2}
\dot{\theta}_n = \omega_n - k r_{n} \sin(\theta_n - \psi_n).
\end{equation}
Henceforth, we will assume that the number of connections into each
node is large enough that we can neglect the time fluctuations
represented by the term $h_n$. For a discussion of the effect of
nodes with few connections, see Sec. VI of Ref.~\onlinecite{onset}.
From Eq.~(\ref{eq:coupled2}), we conclude that oscillators with $\left|\omega_n\right| \leq k r_n$
become locked,
i.e., for these oscillators $\theta_n$ settles at a value for which
\begin{equation}\label{eq:locked}
\sin(\theta_{n}-\psi_n) = \omega_n/(k r_n).
\end{equation}
Then
\begin{eqnarray}\label{splitup}
r_n = \sum_{\left|\omega_{m}\right| \leq k r_{m}} A_{nm} e^{i(\theta_m - \psi_n)}\\
+ \sum_{\left|\omega_{m}\right| > k r_{m}} A_{nm} \langle e^{i(\theta_m - \psi_n)}\rangle_t.\nonumber
\end{eqnarray}
The sum over the non-locked oscillators can be shown to vanish in the large number of connections per node
limit (see Appendix A of Ref.~\onlinecite{onset}),
and we obtain from the real and imaginary parts of
Eq.~(\ref{splitup})
\begin{eqnarray}\label{eq:betaprime}
r_n = \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \cos(\psi_m - \psi_n)\sqrt{1 - \left(\frac{\omega_{m}}{k r_{m}}\right)^2}\\
- \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \sin(\psi_m - \psi_n)\left(\frac{\omega_{m}}{k r_{m}}\right),\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{eq:imaginary}
0 = \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \cos(\psi_m - \psi_n) \left(\frac{\omega_{m}}{k r_{m}}\right)\\
+ \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \sin(\psi_m - \psi_n)\sqrt{1-\left(\frac{\omega_{m}}{k r_{m}}\right)^2}.\nonumber
\end{eqnarray}
Introducing the assumption that the solutions $\psi_n$, $r_n$ are statistically
independent of $\omega_n$ (see Ref.~\onlinecite{onset}) and using the assumed symmetry of the frequency
distribution $g(\omega)$
we obtain from Eq.~(\ref{eq:betaprime}) the approximation,
\begin{equation}\label{eq:cosfi}
r_n = \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \cos(\psi_m - \psi_n)\sqrt{1 - \left(\frac{\omega_{m}}{k r_{m}}\right)^2},
\end{equation}
and the right side of Eq.~(\ref{eq:imaginary}) is approximately zero
for large number of connections per node. The solution of
Eq.~(\ref{eq:cosfi}) with $\psi_n = \psi_m$ for all $n$ is the one
corresponding to the smallest value of $k$, and thus corresponds to
the smallest critical coupling $k_{c}$ leading to a transition to a
macroscopic value of $r_n$. Therefore we consider the equation
\begin{equation}\label{eq:betass}
r_n = \sum_{\left|\omega_{m}\right| \leq k r_{m} }
A_{nm} \sqrt{1 - \left(\frac{\omega_{m}}{k r_{m}}\right)^2}.
\end{equation}
We refer to this approximation [Eq.~(\ref{eq:betass})], based on neglecting the time fluctuations,
as the {\it time averaged theory} (TAT). In Ref.~\onlinecite{onset} we showed numerically that
this approximation consistently describes the large time behavior of the order parameter $r$ past
the transition for
various undirected networks with positive coupling strengths (i.e., $A_{nm} = A_{mn} \geq 0$).
Averaging over the frequencies, one obtains the {\it frequency distribution approximation} (FDA):
\begin{equation}\label{eq:betaint}
r_{n} = k {\sum_{m}} A_{nm} r_{m} \int_{-1}^{1} g(z k r_{m}) \sqrt{1 - z^2 } dz.
\end{equation}
The value of the critical coupling strength can be obtained from the frequency distribution
approximation by letting
$r_n \to 0^+$, producing
\begin{equation}\label{eq:firstor}
r_{n}^{(0)} = \frac{k}{k_0} {\sum_{m}} A_{nm} r_{m}^{(0)},
\end{equation}
where $k_{0} \equiv 2/[\pi g(0)]$. The critical coupling strength thus corresponds to
\begin{equation}\label{eq:kc}
k_{c} = \frac{k_0}{\lambda},
\end{equation}
where $\lambda$ is the largest eigenvalue of the adjacency matrix
$A$ and $r^{(0)}$ is proportional to the corresponding eigenvector
of $A$. By considering perturbations from the critical values as
$r_n = r_n^{(0)} + \delta r_n$, expanding $g(z k r_m)$ in
Eq.~(\ref{eq:betaint}) to second order for small argument,
multiplying Eq.~(\ref{eq:betaint}) by $r_n^{(0)}$ and summing over
$n$, we obtained an expression for the order parameter past the
transition valid for networks with relatively homogeneous degree
distributions \cite{footnote}:
\begin{equation}\label{perturba}
r^2 = \left(\frac{\eta_{1}}{\alpha k_{0}^2}\right)
\left(\frac{k}{k_{c}} - 1\right)
\left(\frac{k}{k_{c}}\right)^{-3}
\end{equation}
for $0< (k/k_c) -1\ll 1$, where
\begin{equation}\label{eq:eta1}
\eta_1 \equiv \frac{\langle u\rangle^2 \lambda^2}{ N \langle d\rangle^2 \langle u^4\rangle},
\end{equation}
$\alpha = -\pi g''(0)k_0/16$, $u$ is the normalized eigenvector of $A$ corresponding to $\lambda$,
and $\langle\dots\rangle$ is defined by $\langle x^q\rangle = \sum_{n = 1}^N x^q_n / N$.
The {\it mean field theory} (MFT) \cite{ichinomiya,lee} was obtained from the frequency
distribution equation by introducing the extra
assumption that the local mean field is approximately proportional to the degree, $r_n = r d_n$.
Substituting this into Eq.~(\ref{eq:betaint}) and summing over $n$ we obtained
\begin{equation}\label{eq:betasumed}
\sum_{m = 1}^N d_{m} = k \sum_{m = 1}^N d_{m}^2 \int_{-1}^{1} g(z k r d_{m}) \sqrt{1 - z^2 } dz.
\end{equation}
Letting $r\to 0^+$, the critical coupling strength is given by
\begin{equation}\label{eq:firstorder}
k \equiv k_{mf} = k_{0} \frac{\langle d \rangle}{\langle d^2 \rangle}.
\end{equation}
An expansion to second order yields
\begin{equation}\label{eq:secondmf}
r^2 = \left(\frac{\eta_{2}}{\alpha k_{0}^2}\right)
\left(\frac{k}{k_{mf}} - 1\right)\left( \frac{k}{k_{mf}}\right)^{-3}
\end{equation}
for $0< (k/k_{mf}) -1\ll 1$, where
\begin{equation}\label{eq:eta2}
\eta_{2} \equiv \frac{\langle d^2\rangle^3}{\langle d^4\rangle \langle d \rangle^2}.
\end{equation}
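For a concrete degree sequence, $k_{mf}$ follows directly from Eq.~(\ref{eq:firstorder}). The sketch below assumes a standard normal frequency density, so $g(0)=1/\sqrt{2\pi}$, and a small arbitrary degree sequence; both are illustrative choices.

```python
import numpy as np

g0 = 1.0 / np.sqrt(2.0 * np.pi)   # g(0) for a standard normal density
k0 = 2.0 / (np.pi * g0)           # k_0 = 2 / (pi g(0))

d = np.array([10.0, 12.0, 8.0, 20.0, 15.0])   # illustrative degrees
k_mf = k0 * d.mean() / (d ** 2).mean()        # k_mf = k_0 <d> / <d^2>
```

Note that degree heterogeneity lowers $k_{mf}$, since $\langle d^2\rangle$ grows faster than $\langle d\rangle$ when high-degree nodes are present.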
Comparing the above three approximations, we note the following points:
\begin{enumerate}
\item The TAT requires knowledge of the adjacency matrix and the particular realization of the oscillator
frequencies $\omega_n$ at each node.
\item The FDA requires knowledge of the adjacency matrix and the frequency distribution,
but averages over realizations
of the node frequencies.
\item The MFT (like the FDA) averages over realizations of the node frequencies, but
only requires knowledge of the degree distribution $d_m$ (knowledge
of the adjacency matrix is not required).
\item Computationally, the TAT and
the FDA are more demanding than
the MFT; all three, however, are much less costly than direct
integration of Eq.~(\ref{eq:coupled}) to find the time asymptotic
result.
\item Finally, one might suspect that the TAT is more accurate for describing a specific system realization, given that
one has knowledge of the network and the realization of the oscillator frequencies $\omega_n$ on each node, while the FDA
might be more appropriate for investigating the mean behavior averaged over an ensemble of realizations of the
oscillator frequencies.
\end{enumerate}
\section{Directed networks}\label{dire}
In this Section we will extend our previous results to include
directed networks, $A_{nm} \neq A_{mn}$. As in the previous Section,
we will assume that the number of connections per node (both
incoming and outgoing) is large, that the frequencies are drawn
randomly from a distribution symmetric around its unique local
maximum at $\omega = 0$, and that the coupling is positive, $A_{nm}
\geq 0$. We define the {\it in-degree} $d_n^{in}$ and {\it
out-degree} $d_n^{out}$ of node $n$ as
\begin{equation}
d_n^{in}\equiv \sum_{m=1}^N A_{nm}
\end{equation}
and
\begin{equation}
d_n^{out}\equiv \sum_{m=1}^N A_{mn}.
\end{equation}
For directed networks, the degrees $d_n^{in}$ and $d_n^{out}$ may be
unequal, and it is therefore necessary to take this difference into
account when developing approximations for the synchronization
transition based on the degree of the nodes [e.g., the mean field
theory, Eq.~(\ref{eq:betasumed})].
The approximations to $r$ given by the time averaged theory
[Eq.~(\ref{eq:betass})], the frequency distribution approximation
[Eq.~(\ref{eq:betaint})], and the estimate for the critical coupling
constant given by Eq.~(\ref{eq:kc}) are still valid in this more
general case. The existence of a nonnegative real eigenvalue
$\lambda$ larger than the magnitude of any other eigenvalue is
guaranteed for matrices with nonnegative entries by the Frobenius
theorem \cite{bapat}, and we use this eigenvalue in
Eq.~(\ref{eq:kc}).
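Numerically, the Frobenius eigenvalue and the resulting estimate $k_c = k_0/\lambda$ can be obtained as follows; the random directed network and the standard normal frequency density are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
A = (rng.random((N, N)) < 0.1).astype(float)   # directed, A_nm >= 0
np.fill_diagonal(A, 0.0)

eigs = np.linalg.eigvals(A)
lam = np.abs(eigs).max()   # spectral radius; for a nonnegative matrix this
                           # is itself a real, nonnegative eigenvalue of A

g0 = 1.0 / np.sqrt(2.0 * np.pi)   # g(0) for a standard normal density
k0 = 2.0 / (np.pi * g0)
k_c = k0 / lam
```

For this dense random network $\lambda$ is close to the mean degree $Np$, so $k_c$ is well below $k_0$.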
We now consider the perturbation solution to the FDA
[Eq.~(\ref{eq:betaint})] for $(k-k_c)$ small taking into account
asymmetry of $A$. Expanding Eq.~(\ref{eq:betaint}) to second order
in $k r_n$, inserting $r_n = r_n^{(0)} + \delta r_n$, and canceling
terms of order $r_n^{(0)}$, the leading order terms remaining are
\begin{eqnarray}\label{eq:fea}
\delta r_n = \frac{k}{k_{c}\lambda} \sum_{m} A_{nm}\delta r_m -
\frac{\alpha k^3}{k_{c}\lambda}\sum_{m} A_{nm} (r_m^{(0)})^3 \\
+ \frac{k - k_{c}}{k_c \lambda} \sum_{m} A_{nm} r_m^{(0)}\nonumber.
\end{eqnarray}
In order for Eq.~(\ref{eq:fea}) to have a solution for $\delta r_n$,
it must satisfy a solubility condition. This condition can be
obtained as follows. Let $\overline{u}_n$ be an eigenvector of the
transpose of $A$, $A^{T}$, with eigenvalue $\lambda$. Multiplying
Eq.~(\ref{eq:fea}) by $\overline{u}_n$, summing over $n$ and using
Eq.~(\ref{eq:firstor}), we obtain
\begin{equation}
\frac{\sum_{m} (r_m^{(0)})^3 \overline{u}_m}{\sum_{m} r_m^{(0)}\overline{u}_m} = \frac{k - k_c}{\alpha k^3}.
\end{equation}
In terms of $u$ and $\overline{u}$, eigenvectors of $A$ and $A^T$
associated with the eigenvalue $\lambda$, the square of the order
parameter $r$ can be expressed as [cf. Eqs.~(\ref{perturba}) and
(\ref{eq:eta1})]
\begin{equation}\label{eq:secondpt}
r^2 = \left(\frac{\overline{\eta}_{1}}{\alpha k_{0}^2}\right)
\left(\frac{k}{k_{c}} - 1\right)
\left(\frac{k}{k_{c}}\right)^{-3}
\end{equation}
for $0< (k/k_c) -1\ll 1$, where
\begin{equation}\label{eq:eta1dire}
\overline{\eta}_1 \equiv \frac{\langle u\rangle^2 \langle u \overline{u}\rangle \lambda^2 }{ N \langle d\rangle^2
\langle u^3 \overline{u}\rangle},
\end{equation}
and $\langle x^p y^q \rangle$ is defined by $\langle x^p y^q \rangle
= \sum_{n=1}^N x_n^p y_n^q/N$. We will refer to this generalization
of the perturbation theory as the {\it directed perturbation theory}
(DPT).
The mean field theory can also be generalized for directed networks
by introducing the assumption $r_n = r d_n^{in}$. We obtain as a
generalization of Eq.~(\ref{eq:betasumed}) the {\it directed mean
field theory} (DMFT)
\begin{equation}\label{mftdire}
\sum_{m = 1}^N d_{m}^{in} = k \sum_{m = 1}^N d_{m}^{in}d_m^{out} \int_{-1}^{1} g(z k r d_{m}^{in}) \sqrt{1 - z^2 } dz.
\end{equation}
Letting $r\to 0^+$, the critical coupling strength is given by
\begin{equation}\label{eq:firstorder2}
k \equiv k_{mf} = k_{0} \frac{\langle d^{in} \rangle}{\langle d^{in}d^{out} \rangle}.
\end{equation}
An expansion to second order yields [cf. Equations~(\ref{eq:secondmf}) and (\ref{eq:eta2})]
\begin{equation}\label{eq:secondmfdire}
r^2 = \left(\frac{\overline{\eta}_{2}}{\alpha k_{0}^2}\right)
\left(\frac{k}{k_{mf}} - 1\right)\left( \frac{k}{k_{mf}}\right)^{-3}
\end{equation}
for $0< (k/k_{mf}) -1\ll 1$, where
\begin{equation}\label{eq:eta2dire}
\overline{\eta}_{2} \equiv \frac{\langle d^{in} d^{out}\rangle^3}{\langle (d^{in})^3 d^{out}\rangle \langle d^{in} \rangle^2}.
\end{equation}
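To make the two critical-coupling estimates concrete, the short sketch below (illustrative only: the network size $N$ and link probability $s$ are arbitrary choices, and $k_0 = 8/(3\pi)$ corresponds to the parabolic distribution $g(\omega) = \frac{3}{4}(1-\omega^2)$ used in our examples) computes $k_c = k_0/\lambda$ and the directed mean field value $k_{mf}$ for a random directed network:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 400, 0.1                       # hypothetical size and link probability
A = (rng.random((N, N)) < s).astype(float)
np.fill_diagonal(A, 0.0)              # no self-coupling

k0 = 8.0 / (3.0 * np.pi)              # k0 = 2/[pi g(0)] for g(w) = (3/4)(1 - w^2)

# k_c = k0/lambda, with lambda the eigenvalue of A with largest real part
lam = np.linalg.eigvals(A).real.max()
k_c = k0 / lam

# Directed mean field estimate, k_mf = k0 <d_in> / <d_in d_out>
d_in, d_out = A.sum(axis=1), A.sum(axis=0)
k_mf = k0 * d_in.mean() / (d_in * d_out).mean()
```

For such a statistically symmetric ensemble $\lambda \approx \langle d^{in}d^{out}\rangle/\langle d^{in}\rangle \approx (N-1)s$, so the two estimates nearly coincide; the strongly asymmetric example (ii) below is precisely a case where they separate.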
\section{Networks with negative coupling}\label{inhibitory}
Here we extend our previous results to the case in which the matrix
elements $A_{nm}$ are allowed to be negative. In this case, a
solution to Eqs.~(\ref{eq:betaprime}) and (\ref{eq:imaginary}) in
which all the phases are equal ($\psi_n = \psi_m$ for all $n$, $m$)
does not necessarily exist. [In fact, if one were to set $\psi_n =
\psi_m$ in Eq.~(\ref{eq:cosfi}) the right hand side of
Eq.~(\ref{eq:betass}) could be negative, while by definition $r_n$
is nonnegative.]
Although in this section we will assume $k \geq 0$, the case $k < 0$
can be treated by redefining $k \to -k$ and $A_{nm} \to -A_{nm}$. By
neglecting the contribution of the drifting oscillators, using the
symmetry of $g(\omega)$ and the assumed independence of $\psi_n$ and
$r_n$ from $\omega_n$, we obtain from Eqs.~(\ref{eq:rn}),
(\ref{eq:locked}) and (\ref{splitup}) the equation
\begin{eqnarray}\label{eq:betaprime2}
r_n e^{i\psi_n} = \sum_{\left|\omega_{m}\right| \leq k r_m}A_{nm} e^{i\psi_m} \sqrt{ 1 - \left(\frac{\omega_{m}}{k r_m}\right)^2 }.
\end{eqnarray}
Our approach will now be to solve Eq.~(\ref{eq:betaprime2}) numerically for $\psi_n$ and $r_n$.
We note that such numerical solution will still be orders of magnitude faster than finding the
exact temporal evolution of the network by numerically integrating Eqs.~(\ref{eq:coupled}).
In order to numerically solve Eq.~(\ref{eq:betaprime2})
for the variables $\psi_n$, $r_n$, we look for fixed points
of the following mapping, $(r_n^{j},\psi_n^{j})\to (r_n^{j+1},\psi_n^{j+1})$, defined by
\begin{eqnarray}\label{eq:betaprime3}
r_n^{j+1} e^{i\psi_n^{j+1}} = \sum_{\left|\omega_{m}\right| \leq k r_m^{j}}
A_{nm} e^{i\psi_m^{j}} \sqrt{ 1 - \left(\frac{\omega_{m}}{k r_m^{j}}\right)^2 }.
\end{eqnarray}
Repeatedly iterating the above map starting from random initial
conditions, the desired solution will be produced if the orbit
converges to a fixed point. We will discuss the convergence of this
procedure when considering particular examples.
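A minimal NumPy sketch of this iteration follows. The fully locked initial guess ($\psi_n = 0$, $r_n = \sum_m |A_{nm}|$) and the uniform frequency sample in the check are our own illustrative choices (the text starts from random initial conditions); for purely positive coupling this guess converges directly to the synchronized branch.

```python
import numpy as np

def tat_fixed_point(A, omega, k, iters=300):
    """Iterate z_n -> sum over locked m of A_nm e^{i psi_m} sqrt(1 - (omega_m/(k r_m))^2),
    where z_n = r_n e^{i psi_n}; drifting oscillators (|omega_m| > k r_m) drop out."""
    z = np.abs(A).sum(axis=1).astype(complex)   # fully locked initial guess, psi_n = 0
    for _ in range(iters):
        r = np.abs(z)
        locked = np.abs(omega) < k * r
        root = np.zeros_like(r)
        root[locked] = np.sqrt(1.0 - (omega[locked] / (k * r[locked])) ** 2)
        z = A @ (np.exp(1j * np.angle(z)) * root)
    return z                                    # z_n = r_n e^{i psi_n}

# Sanity check on an all-to-all network, well above the transition
rng = np.random.default_rng(0)
N = 200
omega = rng.uniform(-1.0, 1.0, N)               # uniform g, for simplicity
A = np.ones((N, N)) - np.eye(N)
k0 = 4.0 / np.pi                                # k0 = 2/[pi g(0)] for uniform g on [-1, 1]
k = 10.0 * k0 / (N - 1)                         # k = 10 k_c, since lambda = N - 1
z = tat_fixed_point(A, omega, k)
r_global = np.abs(z).sum() / np.abs(A).sum()    # order parameter, close to 1 here
```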
We now comment on some aspects introduced by connections with
negative coupling. First, we note that when the coupling between the
oscillators is positive, the effect of the coupling between them is
a tendency to reduce their phase difference. In this case, as $k\to
\infty$, the phases synchronize, $\theta_n - \theta_m \to 0$. There is in this
case frequency and phase synchronization [i.e.,
$\frac{d}{dt}(\theta_n - \theta_m)\to 0$ and $(\theta_n -
\theta_m)\to 0$]. On the other hand, two oscillators coupled with a
negative connection $A_{nm} < 0$ tend to oscillate out of phase.
However, in a network with many nodes and mixed positive/negative
connections, the relative phases of two oscillators cannot in
general be determined only from the sign of their coupling. When the
oscillators lock, their relative phase is determined by $\psi_n$
[let $k \to \infty$ in Eq.~(\ref{eq:locked})], and in general the
phases $\psi_n$ can be broadly distributed in $[0,2\pi)$. Therefore
in this case we expect frequency synchronization, but not phase
synchronization [i.e., $\frac{d}{dt}(\theta_n - \theta_m)\to 0$ but
$(\theta_n - \theta_m)\nrightarrow 0$]. We also note that in this
case the order parameter $r$, as we have defined it in
Eq.~(\ref{eq:orderpara}), may attain values higher than $1$ for
$k\to \infty$. We therefore replace the definition
(\ref{eq:orderpara}) by
\begin{equation}\label{eq:orderpara2}
r = \frac{\sum_{n=1}^{N} r_{n}}{\sum_{m=1}^{N}\sum_{n=1}^{N} |A_{nm}|}.
\end{equation}
Note that if $A_{nm} \geq 0$ this definition reduces to the previous one.
From Eq.~(\ref{eq:cosfi}) we have for $k\to \infty$
\begin{equation}\label{eq:cosfi2}
r\to \frac{\sum_{m,n}A_{nm}\cos(\psi_m-\psi_n)}{\sum_{m,n}|A_{nm}|}.
\end{equation}
The order parameter achieves its maximum value, $r = 1$, when the
phase difference $\psi_m - \psi_n$ between two oscillators is $0$
for positive coupling ($A_{nm} > 0$) and $\pi$ for negative coupling
($A_{nm} < 0$). An order parameter smaller than $1$ as $k\to \infty$
indicates frustration in the collection of coupled oscillators,
i.e., the phase difference favored by the coupling between each pair
of oscillators cannot be satisfied simultaneously by all pairs
\cite{daido2}. The order parameter is similar to the overlap
function used in neural networks for measuring the closeness of the
state of the network to a memorized pattern \cite{takashi2}.
Using the assumption that the number of connections per node is
large, we average Eq.~(\ref{eq:betaprime2}) over the frequencies to
obtain the approximation
\begin{eqnarray}\label{eq:fdainhi}
r_n e^{i \psi_n} = k \sum_{m=1}^N
A_{nm} e^{i\psi_m} r_m \int_{-1}^{1}\sqrt{1 - z^2}g(z k r_m)dz.
\end{eqnarray}
The critical coupling strength $k_c$ can be estimated by letting
$r_n \to 0^+$ to be as in Sec.~\ref{back}
\begin{equation}\label{kcinhi}
k_c = \frac{k_0}{\lambda},
\end{equation}
where $k_0=2/[\pi g(0)]$ and we have assumed the existence of a
positive real eigenvalue $\lambda$ which is larger than the real
part of all other (possibly complex) eigenvalues of $A$. We now
discuss the validity of this assumption.
If the adjacency matrix $A$ is asymmetric and there are mixed
positive/negative connections (both $A_{nm} > 0$ and $A_{n'm'} < 0$
for some $n$,$m$,$n'$,$m'$), it might occur that the matrix $A$ has
no real eigenvalues, or it has complex eigenvalues with real part
larger than the largest real eigenvalue. In our examples we find,
however, that when there is a bias towards positive coupling
strengths, there is a real eigenvalue $\lambda$ with real part
larger than that of the other eigenvalues. Furthermore, the largest
real part of the remaining eigenvalues is typically well separated
from $\lambda$. This issue is discussed further and illustrated with
the spectrum of a particular matrix in Appendix \ref{appendixa}.
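The following numerical sketch (a hypothetical ensemble, not the specific matrix of Appendix \ref{appendixa}) illustrates this point: for a random signed asymmetric matrix biased toward positive entries, the eigenvalue with the largest real part is real to numerical precision and well separated from the rest of the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
p_pos, p_neg = 8.0 / 45.0, 4.0 / 45.0   # bias toward positive coupling
u = rng.random((N, N))
A = np.where(u < p_pos, 1.0, np.where(u < p_pos + p_neg, -1.0, 0.0))
np.fill_diagonal(A, 0.0)

ev = np.linalg.eigvals(A)
i = np.argmax(ev.real)                  # eigenvalue with the largest real part
lam, others = ev[i], np.delete(ev, i)
gap = lam.real - others.real.max()      # separation from the rest of the spectrum
```

Here $\lambda$ is close to $N(p_{+}-p_{-}) \approx 27$, while the bulk of the spectrum is confined to a disk of radius roughly $\sqrt{N\,\mathrm{Var}(A_{nm})} \approx 9$.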
So far, we have considered situations in which coupling from
oscillator $m$ to oscillator $n$ favors a phase difference $\theta_n
- \theta_m = 0$ (positive coupling, $A_{nm} >0$), or situations in
which a phase difference $\theta_n - \theta_m = \pi$ is favored
(negative coupling, $A_{nm} < 0$). A more general case is that in
which coupling from oscillator $m$ to oscillator $n$ favors a phase
difference $\theta_n - \theta_m = \alpha_{nm}$, with $0\leq
\alpha_{nm} < 2\pi$. (Such nontrivial phase differences could be
favored, for example, by a time delay in the interaction of the
oscillators in conditions in which, in the absence of a delay, their
interaction would reduce their phase difference to zero.) This more
general case can be described by the following generalization of
Eq.~(\ref{eq:coupled}):
\begin{equation}\label{eq:alphas}
\dot{\theta}_n = \omega_n + k \sum_{m=1}^{N} |A_{nm}|\sin(\theta_{m}
- \theta_{n} + \alpha_{nm}).
\end{equation}
In this scenario, positive coupling corresponds to $\alpha_{nm} = 0$
and negative coupling to $\alpha_{nm} = \pi$.
By considering complex values of the coupling constants,
\begin{equation}
A_{nm} = |A_{nm}|e^{i\alpha_{nm}},
\end{equation}
the same process described at the beginning of this Section can be
used to show that Eq.~(\ref{eq:betaprime2}) is still valid in this
more general case. For simplicity, in our examples we will consider
cases in which $\alpha_{nm}$ is either $0$ or $\pi$.
\section{Examples}\label{examples}
In this section we will numerically test our approximations (Secs. \ref{dire} and \ref{inhibitory})
with examples.
In Ref.~\onlinecite{onset} we showed how our theory described the behavior
of the order parameter $r$ for a particular realization of the
network and the frequencies. Although the agreement was very good,
there was a small but noticeable difference between the time
averaged theory and the frequency distribution approximation. Here,
besides the asymmetry of the adjacency matrix, we will investigate
the variations that occur when different realizations of the network
and the frequencies of the individual oscillators are considered. We
will show that the small discrepancies mentioned above can be
accounted for by averaging over many realizations of the
frequencies.
We will compare the approximations described in this section with
the numerical solution of Eq.~(\ref{eq:coupled}) for different types
of networks. When numerically solving Eq.~(\ref{eq:coupled}), the
initial conditions for $\theta_n$ are chosen randomly in the
interval $[0,2\pi)$ and Eq.~(\ref{eq:coupled}) is integrated forward
in time until a stationary state is reached (stationary state here
means stationary in a statistical sense; i.e., although the solution
might be time dependent, its statistical properties remain constant
in time). From the values of $\theta_n(t)$ obtained for a given $k$,
the order parameter $r$ is estimated using Eqs.~(\ref{eq:rn}) and
(\ref{eq:orderpara}), where the time average is taken after the
system reaches the stationary state. (Close to the transition, the
time needed to reach the stationary state is very long, so that it
is difficult to estimate the real value of $r$. This problem also
exists in the classical Kuramoto all-to-all model.) The value of $k$
is then increased and the system is allowed to relax to a stationary
state, and the process is repeated for increasing values of $k$.
Throughout this section, the frequency distribution is taken to be
$g(\omega) = \frac{3}{4}(1 - \omega^2)$ for $\left|\omega\right|
\leq 1$ and $0$ otherwise.
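The procedure just described can be sketched as follows. Forward Euler integration is our own choice of integrator, and the all-to-all check with uniform frequencies is purely illustrative; we also subtract the (finite-$N$) sample mean of the frequencies so that the locked state is stationary rather than uniformly rotating.

```python
import numpy as np

def simulate_order_parameter(A, omega, k, dt=0.05, t_trans=150.0, t_avg=150.0, seed=0):
    """Integrate dtheta_n/dt = omega_n + k sum_m A_nm sin(theta_m - theta_n)
    from random phases; after the transient, time-average e^{i theta} and
    form r_n e^{i psi_n} = sum_m A_nm <e^{i theta_m}>_t and the global r."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    n_trans, n_avg = int(t_trans / dt), int(t_avg / dt)
    z_sum = np.zeros(len(omega), dtype=complex)
    for step in range(n_trans + n_avg):
        z = np.exp(1j * theta)
        # sum_m A_nm sin(theta_m - theta_n) = Im[e^{-i theta_n} (A z)_n]
        theta = theta + dt * (omega + k * np.imag(np.conj(z) * (A @ z)))
        if step >= n_trans:
            z_sum += z
    r_n = np.abs(A @ (z_sum / n_avg))
    return r_n.sum() / np.abs(A).sum()

rng = np.random.default_rng(1)
N = 100
omega = rng.uniform(-1.0, 1.0, N)
omega -= omega.mean()                    # remove finite-N mean frequency
A = np.ones((N, N)) - np.eye(N)
k = 5.0 * (4.0 / np.pi) / (N - 1)        # five times critical for uniform g
r = simulate_order_parameter(A, omega, k)
```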
\subsection{Example (i), A Randomly Asymmetric Network with $A_{nm} > 0$}
As our first example [example (i)] we consider a directed random
network generated as follows. Starting with $N \gg 1$ nodes, we
consider all possible ordered pairs of nodes $(n,m)$ with $n \neq m$
and add a directed link from node $n$ to node $m$ with probability
$s$. (Equivalently, each nondiagonal entry of the adjacency matrix
is independently chosen to be $1$ with probability $s$ and $0$ with
probability $1-s$, and the diagonal elements are set to zero.) Even
though the network constructed in this way is directed, for most
nodes $d_n^{in} \approx d_n^{out}$.
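For concreteness, the construction with the parameters used below reads as follows (the seed is arbitrary; the final comment records the claim that in- and out-degrees are nearly equal node by node):

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 1500, 2.0 / 15.0
A = (rng.random((N, N)) < s).astype(float)   # A_nm = 1 with probability s
np.fill_diagonal(A, 0.0)                     # no self-connections

d_in, d_out = A.sum(axis=1), A.sum(axis=0)
# mean degree (N - 1) s = 200; per node, |d_in - d_out| = O(sqrt(200))
```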
\begin{figure}[h]
\begin{center}
\epsfig{file = figpeasy2r.eps, clip = ,width=1.0\linewidth}
\caption{
(a) Average of the order parameter $r^2$ obtained from numerical solution of Eq.~(\ref{eq:coupled})
over $10$ realizations of the
network and frequencies (triangles), from the frequency distribution
approximation (solid line) and from the directed mean field theory
(long dashed line) as a function of $k/k_c$.
(b) Order parameter
$r^2$ obtained from numerical solution of Eq.~(\ref{eq:coupled}) for
a particular realization of the network and frequencies (boxes),
from the time averaged theory (short dashed line) and from the
frequency distribution approximation (solid line) as a function of
$k/k_c$. } \label{fig:figpeasy}
\end{center}
\end{figure}
For $N = 1500$ and $s = 2/15$, Fig.~\ref{fig:figpeasy}(a) shows the
average of the order parameter $r^2$ obtained from numerical
solution of Eq.~(\ref{eq:coupled}) averaged over $10$ realizations
of the network and frequencies (triangles), the frequency
distribution approximation (FDA, solid line), and the mean field
theory (MFT, long dashed line) as a function of $k/k_c$, where the
results for the FDA and the MFT are averaged over the $10$ network
realizations (note, however, that the FDA and the MFT do not depend
on the frequency realizations). (The perturbation theory
Eq.~(\ref{perturba}) agreed with the frequency distribution
approximation and was left out for clarity.) The error bars
correspond to one standard deviation of the sample of $10$
realizations. We note that the larger error bars occur after the
transition. When the values of the order parameter are averaged over
$10$ realizations of the network and the frequencies, the results
show very good agreement with the frequency distribution
approximation and the directed mean field theory.
In order to study how well our theory describes single realizations,
we show in Fig.~\ref{fig:figpeasy}(b) the order parameter $r^2$
obtained from numerical solution of Eq.~(\ref{eq:coupled}) for a
particular realization of the network and frequencies (boxes), the
time averaged theory (short dashed line), and the frequency
distribution approximation (solid line) as a function of $k/k_c$. As
can be observed from the figure, in contrast with the time averaged
theory, the frequency distribution approximation deviates from the
numerical solution (boxes) by a small but noticeable amount. This
behavior is observed for the other realizations as well. We note
that the FDA and MFT results are virtually identical for all $10$
realizations. On the other hand, the TAT and the results from direct
numerical solution of Eq.~(\ref{eq:coupled}) show dependence on the
realization. Since the FDA and MFT incorporate the realizations of
the connections $A_{nm}$, but not the frequencies, we interpret the
observed realization dependence of the TAT and the direct solutions
of Eq.~(\ref{eq:coupled}) as indicating that the latter dependence
is due primarily to fluctuations in the realizations of the
frequencies rather than to fluctuations in the realizations of
$A_{nm}$.
Note that for our example $N = 1500$ and $s = 2/15$ imply that on
average we have $d^{in} \approx d^{out} \approx 200$. Thus for
comparison purposes, we generated an undirected network as follows:
starting with $N = 1500$ nodes, we join pairs of nodes with
undirected links in such a way that all nodes have $d_n^{in}=
d_n^{out} = 200$. This is accomplished by using the configuration
model described in Sec. IV of Ref.~\onlinecite{newman1}. The resulting
network is described by a symmetric adjacency matrix $A$. The
results for this network are similar to those shown in the previous
example. This suggests that the asymmetric network in the previous
example can be considered (in a statistical sense) as symmetric.
In summary, for the random asymmetric network in example (i) and
for the symmetric network described in the previous paragraph (not
shown), all the approximations work satisfactorily: single
realizations are described by the time averaged theory, and the
average over many realizations is described by the frequency
distribution approximation or the directed mean field theory.
\subsection{Example (ii), A Strongly Asymmetric Network with $A_{nm} > 0$}
Now we consider a network in which the asymmetry has a more
pronounced effect [example (ii)]. We consider directed networks
defined in the following way. Using the configuration model as
above, we first randomly generate an undirected network with $N =
1500$ nodes and $400$ connections to each node, obtaining a
symmetric adjacency matrix $A'$ with entries $0$ or $1$. We
construct directed networks from this undirected network as follows.
From the symmetric matrix $A'$, $1$'s above the diagonal are
independently converted into $0$'s with probability $1-p$,
generating by this process an asymmetric adjacency matrix $A$.
(Imagining that the nodes are arranged in order of ascending $n$
along a line, connections pointing in the direction of increasing
$n$ are randomly removed. This could model, for example, oscillators
which are coupled chemically along the flow of some medium, or
flashing fireflies that are looking mostly in one direction.) We
will consider a rather low value of $p$, $p = 0.1$, in order to
obtain a network with a strong asymmetry.
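A sketch of this construction is given below, with an Erd\H{o}s--R\'enyi graph standing in for the configuration-model graph purely for brevity:

```python
import numpy as np

def asymmetrize(A_sym, p, rng):
    """Independently convert each 1 above the diagonal of a symmetric 0/1
    matrix into 0 with probability 1 - p; entries below are left unchanged."""
    A = A_sym.astype(float).copy()
    upper = np.triu(np.ones(A.shape, dtype=bool), k=1)
    A[upper & (rng.random(A.shape) > p)] = 0.0
    return A

rng = np.random.default_rng(0)
N, s, p = 600, 0.25, 0.1
U = np.triu(rng.random((N, N)) < s, k=1)
A_sym = (U | U.T).astype(float)        # symmetric stand-in for the config model
A = asymmetrize(A_sym, p, rng)
```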
In Fig.~\ref{fig:figpe110} we compare our approximations against
the values of the order parameter obtained from
numerical solution of Eq.~(\ref{eq:coupled}) as a function of $k/k_c$
for a network constructed as described above where $k_c$ is given by Eq.~(\ref{eq:kc}).
\begin{figure}[h]
\begin{center}
\epsfig{file = figpe1102r.eps, clip = ,width=1.0\linewidth}
\caption{
(a) Average of the order parameter $r^2$ obtained from numerical solution of
Eq.~(\ref{eq:coupled}) over $10$ realizations of the
network and frequencies with $p = 0.1$ (triangles), from the frequency
distribution approximation (solid line),
from the directed mean field theory (long dashed line), and from the directed perturbation
theory (dotted-dashed line) as a function of $k/k_c$.
(b)
Order parameter $r^2$ obtained from numerical solution of Eq.~(\ref{eq:coupled})
for a particular realization of the
network and frequencies (boxes), from the time averaged theory (short dashed line) and
from the frequency distribution approximation (solid line) as a function of $k/k_c$.}
\label{fig:figpe110}
\end{center}
\end{figure}
In Fig.~\ref{fig:figpe110}(a) we show the average of the order
parameter $r^2$ [defined by Eq.~(\ref{eq:orderpara})] versus $k/k_c$
obtained from numerical solution of Eq.~(\ref{eq:coupled}) over $10$
realizations of the network and frequencies (triangles), the
frequency distribution approximation (solid line), the directed mean
field theory Eq.~(\ref{mftdire}) (long dashed line) and the directed
perturbation theory Eq.~(\ref{eq:secondpt}) (dotted-dashed line).
The frequency distribution approximation captures, as in the
undirected case, the values of the average of the order parameter
obtained from numerical solution of Eq.~(\ref{eq:coupled}). The
directed perturbation theory gives a good approximation for small
values of $k$ close to $k_c$, as expected. On the other hand, the
directed mean field theory predicts a transition point which is
smaller than the one actually observed. We note that for this
network solutions of Eq.~(\ref{eq:coupled}) yield substantial rms
deviation of individual realizations [the error bars in
Fig.~\ref{fig:figpe110}(a)] for all $k > k_c$.
Now we consider a single realization. In Fig.~\ref{fig:figpe110}(b) we show the order
parameter $r^2$ obtained from numerical
solution of Eq.~(\ref{eq:coupled}) for a particular realization of
the network and frequencies (boxes), the time averaged theory (short
dashed line) and the frequency distribution approximation (solid
line) as a function of $k/k_c$. The time averaged theory tracks the
value of the order parameter for this particular realization. This
is also observed for the other realizations.
As an indication of why the directed mean field theory gives
a smaller transition point than that given by $k_c$ in Eq.~(\ref{eq:kc}),
we note that in the limiting case,
$p \to 0$, all the elements on and above the diagonal of $A$ are $0$, so that $\lambda = 0$
and $k_c \to \infty$.
However, the directed mean field theory predicts a transition at the finite value
$k_{mf} = k_{0} \langle d^{in}\rangle/(\langle d^{in}d^{out}\rangle)$.
\begin{figure}[h]
\begin{center}
\epsfig{file = fignega2r.eps, clip = ,width=1.0\linewidth}
\caption{
Average of the order parameter $r^2$ obtained from numerical solution of
Eq.~(\ref{eq:coupled})
over $10$ realizations of the network with $q = 2/3$ and frequencies
(triangles with thin error bars),
average of the time averaged theory (solid line with oval error bars), and
frequency distribution approximation (dashed line) as a function of $k/k_c$.
The horizontal line represents the value of the order parameter if the
oscillators were phase locked ($\theta_n = \theta_m$ for all $m$ and $n$).}
\label{inhi}
\end{center}
\end{figure}
\subsection{Examples of Networks with Negative Coupling}
Now we consider examples in which there are negative connections,
i.e., some of the entries of the adjacency matrix are negative,
$A_{nm} < 0$. In our next example, we construct first an undirected
network with $N = 1500$ nodes and $400$ connections per node. We
then set $A_{nm} = 0$ if $n$ and $m$ are not connected, and if they
are we set $A_{nm}$ to $1$ with probability $q$ and to $-1$ with
probability $1-q$.
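In code, the signed construction reads as follows (again with an Erd\H{o}s--R\'enyi stand-in for the $400$-regular configuration-model graph):

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 1500, 2.0 / 3.0
s = 400.0 / (N - 1)                          # link probability giving ~400 neighbors
U = np.triu(rng.random((N, N)) < s, k=1)     # undirected edges (upper triangle)
signs = np.where(rng.random((N, N)) < q, 1.0, -1.0)
A = np.triu(U * signs, k=1)
A = A + A.T                                  # symmetric, entries in {-1, 0, +1}
```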
First we consider the case $q = 2/3$, so that one third of the
connections are negative [example (iii)]. In Fig.~\ref{inhi} we
compare the numerical solution of Eq.~(\ref{eq:coupled}) with our
theoretical approximations in Eqs.~(\ref{eq:betaprime2}) and
(\ref{eq:fdainhi}) for ten realizations of the network and
frequencies. We show the average of the order parameter $r^2$ over
$10$ realizations of the network (triangles with thin error bars),
the average of the TAT [Eq.~(\ref{eq:betaprime2}), solid line with
oval error bars], and the average of the FDA
[Eq.~(\ref{eq:fdainhi}), dashed line]. The error bar widths
represent one standard deviation of the sample of $10$ realizations.
As in the previous examples, the FDA did not show noticeable
variations for different realizations of the network. We observe
that the order parameter computed from our theory yields a slightly
larger value than that obtained from the numerical solution of
Eq.~(\ref{eq:coupled}), but in general both the transition point and
the behavior of the order parameter are described satisfactorily by
the theory.
In this case, the phases $\psi_n$ obtained from numerical solution
of Eq.~(\ref{eq:betaprime2}) do not depend on $n$, i.e., $\psi_n =
\psi_m$ for all $n$, $m$. This can be understood on the basis that
there are not enough negative coupling terms to make the right hand
side of Eq.~(\ref{eq:betass}) negative, so that a solution exists in
which all the phases $\psi_n$ are equal. As mentioned in
Sec.~\ref{inhibitory}, the difference in the phases in
Eq.~(\ref{eq:betaprime2}) prevents the right hand side of
Eq.~(\ref{eq:betass}) from becoming negative in the presence of
negative connections. As a confirmation of this we note that as
$k\to \infty$ the order parameter $r$ appears to approach $1/3$
(the dotted-dashed horizontal line in Fig.~\ref{inhi}), which
corresponds to $(\psi_n - \psi_m)\to 0$ in Eq.~(\ref{eq:cosfi2}) for
$q = 2/3$. The fact that both the phases $\psi_n$ and $\theta_n$ do
not depend on $n$ as $k \to \infty$ is consistent with
Eq.~(\ref{eq:locked}).
In order to consider a case in which the effect of the negative
connections is more extreme, we consider a network constructed as
described above with $q = 0.54$ [example (iv)]. In
Fig.~\ref{inhi54} we compare the numerical solution of
Eq.~(\ref{eq:coupled}) with our theoretical approximations in
Eqs.~(\ref{eq:betaprime2}) and (\ref{eq:fdainhi}) for ten
realizations of the network and frequencies. We show the average of
the order parameter $r^2$ over $10$ realizations of the network
(triangles with thin error bars), the average of the FDA [Eq.~(\ref{eq:fdainhi}),
dashed line with thin error bars] and the average of the TAT
[Eq.~(\ref{eq:betaprime2}), solid line with oval error bars]. When
numerically solving Eq.~(\ref{eq:betaprime2}) by iteration of
Eq.~(\ref{eq:betaprime3}), on some occasions a period two orbit was
found instead of the desired fixed point. If we denote the left hand
side of Eq.~(\ref{eq:betaprime3}) by $z^{j+1}_n$ and the right hand
side by $f(z^{j}_n)$, we found that convergence to a fixed point was
facilitated by replacing the right hand side by $[z^{j}_n +
f(z^{j}_n)]/2$ and finding the fixed points of this modified system.
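The effect of this averaging can be seen on a scalar toy map (purely illustrative, unrelated to the oscillator equations): at a fixed point with slope $f'(x^*) < -1$, plain iteration settles onto a period-two orbit, while the averaged map has slope $[1 + f'(x^*)]/2$ there and converges.

```python
def f(x):
    return 3.2 * x * (1.0 - x)     # fixed point x* = 1 - 1/3.2, slope f'(x*) = -1.2

def g(x):
    return 0.5 * (x + f(x))        # averaged map, slope (1 - 1.2)/2 = -0.1 at x*

x_plain = x_damped = 0.3
for _ in range(1000):
    x_plain = f(x_plain)           # ends on a period-two orbit around x*
    x_damped = g(x_damped)         # converges to x*
x_star = 1.0 - 1.0 / 3.2
```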
\begin{figure}[h]
\begin{center}
\epsfig{file = fignega54.eps, clip = ,width=1.0\linewidth}
\caption{
Average of the order parameter $r^2$ obtained from numerical solution of Eq.~(\ref{eq:coupled})
over $10$ realizations of the network with $q = 0.54$ and frequencies (triangles)
and average of the TAT (solid line) as a function of $k/k_c$.
Note the different scale in the horizontal axis as compared with the previous figures.
The horizontal dotted-dashed line represents the value of the order parameter if the oscillators
were phase locked ($\theta_n = \theta_m$ for all $m$ and $n$).}
\label{inhi54}
\end{center}
\end{figure}
In this example, at low coupling strengths [roughly $k/k_c \lesssim
4$, where $k_c$ is computed from Eq.~(\ref{kcinhi})] the order
parameter computed from numerical solution of Eq.~(\ref{eq:coupled})
is smaller than that obtained from the TAT and FDA. As $k$ increases,
however, the TAT and FDA capture the asymptotic value of the order
parameter $r$. We note that in this case the asymptotic value is
larger than that corresponding to phase locking [i.e., the one
obtained by setting $\psi_n = 0$ in Eq.~(\ref{eq:cosfi2}), $r
\approx 0.54 - 0.46 = 0.08$], which we indicate by a horizontal
dotted-dashed line in Fig.~\ref{inhi54}, and much smaller than $r =
1$, the value corresponding to no frustration [i.e., $\psi_n -
\psi_m = 0$ for $A_{nm} > 0$ and $\pi$ for $A_{nm} < 0$ in
Eq.~(\ref{eq:cosfi2})]. The small scale of the vertical axis is
due to the fact that we are plotting $r^2$, and to our definition of
the order parameter, which assigns a value of $1$ to a non-frustrated
configuration. The small value of the order parameter indicates a
strong frustration.
We note that in this example, in contrast with the examples discussed so far,
there is variation in the values of the order parameter predicted by the FDA
for different realizations of the network. This indicates that,
as the expected value of the coupling strengths $A_{nm}$ becomes small
(i.e., $|q - 1/2|$ small), fluctuations
due to the realization of the network become noticeable. Although
the values predicted by the FDA and TAT depend on the realization of the
network and frequencies, we note for $k/k_c \gtrsim 6$ that these values track
the values observed for the numerical simulations
of the corresponding realization. As an illustration of this, we plot in Fig.~\ref{figcorre}(a)
the values of $r^2$ obtained from the TAT (triangles) and in Fig.~\ref{figcorre}(b)
the values of $r^2$ obtained from the FDA (boxes) versus the value obtained
from numerical solution of Eq.~(\ref{eq:coupled})
for $k/k_c = 8$. Besides a small positive bias in the FDA, the theories track the spread
in the results of the numerical solution for different realizations. Some bias in the FDA is not
surprising, because we averaged the right hand side of the nonlinear equation (\ref{eq:betass})
for the TAT in order to get Eq.~(\ref{eq:betaint}) for the FDA. Nonetheless, the bias is
extremely small in most of our examples.
\begin{figure}[h]
\begin{center}
\epsfig{file = coore2.eps, clip = ,width=0.8\linewidth}
\caption{
Order parameter $r^2$ obtained from (a) the TAT (triangles) and from (b) the FDA
(boxes) versus the value obtained from numerical solution of Eq.~(\ref{eq:coupled})
for $k/k_c = 8$. The solid line is the identity.
Besides a small positive bias in the FDA, the theories track the spread in the results of
the numerical solution for different realizations.
}
\label{figcorre}
\end{center}
\end{figure}
The behavior observed in Fig.~\ref{inhi54} at $k/k_c \lesssim 4$ can be
interpreted as a shift in the transition point to a larger value of
the coupling strength, and is reminiscent of what occurs when the
time fluctuations [$k h_n(t)$ in Eq.~(\ref{eq:coupled0})] neglected
in Eq.~(\ref{eq:coupled2}) have an appreciable effect \cite{onset}.
We believe that the time fluctuations have a more pronounced effect
as the number of negative connections becomes comparable to the
number of positive connections (i.e., as $|q-1/2|$ becomes small)
because the critical coupling strength $k_c$ becomes large
(roughly $k_c \sim |q -1/2|^{-1}$).
In particular, with positive connections, the condition for
neglecting $k h_n(t)$ was that the number of connections to each node
was large. In contrast, for the present case, the analogous
statement would be that $|q-1/2|$ times the number of connections is
large, which is much less well satisfied: $|q - 1/2|\times 400 =
0.04\times 400 = 16$. The extreme case of zero mean coupling has
already been studied numerically by Daido \cite{daido2}, who found
that in this case the oscillators lock in the sense that their
average frequency is the same, but their phases diffuse. As argued
in Ref.~\onlinecite{onset}, such fluctuations have the effect of shifting the
transition to larger values of the coupling strength. It would be
interesting to carry out simulations on networks with a much larger
number of connections per node, as the effect of fluctuations would
likely be reduced.
We also considered a case in which the adjacency matrix is
asymmetric and has mixed positive/negative connections. For $N =
1500$ nodes, we constructed an adjacency matrix by setting its
nondiagonal entries to $1$, $-1$, and $0$ with probability $8/45$,
$4/45$, and $11/15$, respectively. These probabilities yield an
expected number of connections of $400$ per node. Our theories work
satisfactorily in this case, and, since the results are similar to
those in Fig.~\ref{inhi}, we do not show them. In this case there is
no guarantee that there is a real eigenvalue [as needed for
estimating the critical coupling strength in Eq.~(\ref{eq:kc})], or
that the largest real eigenvalue (if there is one) has the largest
real part. Numerically, we find that for matrices constructed as in
this example there is a real positive eigenvalue and that,
furthermore, it is well separated from the largest real part of the
remaining eigenvalues (see Fig.~\ref{eithe}). We also find this for
other values of $q$ provided $|q - 1/2|$ is not too small. We
provide a discussion of this issue and show the spectrum of the
adjacency matrix in Appendix \ref{appendixa}.
\section{Discussion}\label{discussion}
In this paper, we have considered interacting phase oscillators
[Eq.~(\ref{eq:coupled})] connected by directed networks and networks
with mixed positive/negative connections.
The previous theory of Ref.~\onlinecite{onset} given by
Eq.~(\ref{eq:betass}) (the time averaged theory, TAT) can still be
applied for asymmetric networks with purely positive coupling and
was found to give good predictions, applicable to
{\it individual} asymmetric random realizations [Figs.
\ref{fig:figpeasy}(b), \ref{fig:figpe110}(b)]. The previous theory
given by Eq.~(\ref{eq:betaint}) (the frequency distribution
approximation, FDA) can also still be applied for asymmetric
networks with purely positive coupling and was found to give good
predictions applicable to the {\it ensemble average behavior}
of asymmetric network realizations [Figs. \ref{fig:figpeasy}(a),
\ref{fig:figpe110}(a)]. The perturbative theory for the FDA was
generalized to account for directed networks
[Eqs.~(\ref{eq:secondpt}) and (\ref{eq:eta1dire})], as was the
previous undirected network mean field theory, MFT (generalized from
Eqs.~(\ref{eq:betasumed})-(\ref{eq:eta2}) to
Eqs.~(\ref{mftdire})-(\ref{eq:eta2dire})). In our example (ii),
which had a very strong asymmetry, we found that our directed FDA
perturbation theory [Eqs.~(\ref{eq:secondpt}) and
(\ref{eq:eta1dire})] gave a good description of synchronization, but
that the directed mean field approximation gave a transition to
synchronization at a coupling substantially below that observed. In
contrast, for example (i), in which the coupling matrices were
individually asymmetric but their ensemble average was symmetric,
the mean field theory (and all the other theories in
Sec.~\ref{dire}) gave good results.
For the case of mixed positive/negative couplings we presented a
generalization of the TAT and FDA,
Eqs.~(\ref{eq:betaprime2})-(\ref{kcinhi}). We tested these results
on two examples, example (iii) in which a fraction $1-q = 1/3$ of
the couplings were negative, and example (iv) in which a fraction
$1-q = 0.46$ of the couplings were negative. For example (iii) we
found that iteration of Eq.~(\ref{eq:betaprime3}) converges to a
fixed point with $\psi_n -\psi_m =0$, and thus the result is similar
to the case where all connections are positive. In example (iv), the
result of iteration of Eq.~(\ref{eq:betaprime3}) yields nontrivial
values for the phases $\psi_n$. In this case we found good agreement
between the solutions of (\ref{eq:coupled}) and the theory for the
order parameter for $k/k_c$ large ($k/k_c \gtrsim 4 $), but that for
smaller $k/k_c$ ($k/k_c \lesssim 4 $), although yielding
qualitatively similar behavior to that observed (Fig.~\ref{inhi54}),
the theory overestimates the order parameter. Analogous to similar
observations for symmetric networks with only positive coupling
\cite{onset}, we speculate (Sec.~\ref{examples}) that this is a
finite size effect associated with the fact that the effective
number of connections given in this example by $|q-1/2|\times 400 = 16$ is
not sufficiently large to justify neglect of $k h_n(t)$ in
Eq.~(\ref{eq:coupled0}).
In order to isolate the effect of the asymmetry and the negative
connections, we considered networks in which the degree distribution
is very narrow. The combined effect of these factors with different
heterogeneous degree distributions (e.g., scale free networks
\cite{barabasi2}) and with correlations in the network (in
particular, degree-degree correlations) is still open to
investigation.
In practice, one could be interested in networks in which the
asymmetry in the connections is strongly correlated with the sign of
the coupling (in analogy to some models in neuroscience
\cite{wang}). Although we did not study such a case here, we believe
our theory provides a good starting point to study the emergence of
synchronization in these kinds of structured complex networks.
\section{Introduction}
\label{sec:intro}
Strong gravitational lensing of extragalactic sources by intervening galaxies or galaxy clusters is an exquisite astrophysical probe. The observed fluxes of these sources are often strongly affected by random microlensing, collectively caused by many compact masses embedded within the foreground lens~\citep{KayserRefsdalStabell1986microlensing, Wambsganss1992muPDFML}. Observations of multiply imaged quasars first engendered theoretical interest in statistical microlensing (see \cite{Wambsganss2006MicrolensingReview} for a review), with the major aim to resolve the size and structure of the quasar accretion disk~\citep{BlackBurne2011QSOmicrolensing} and to probe compact lenses of planetary to stellar masses.
To quantify the flux variability under random microlensing, a statistical approach is appropriate. For observations carried out at random epochs, the key observables include the mean and variance of the magnification factor. The scenario of microlenses embedded in constant background convergence and shear has been extensively studied, mostly in the context of multiply-imaged quasars. In this case, the magnification factor averaged over random microlens realizations equals the macro one regardless of the source size, while the variance is difficult to compute analytically and was the focus of many past works. Lenses composed solely of compact objects were studied in the pioneering works of \cite{DeguchiWatson1987muvariance} and \cite{RefsdalStabell1991MLLargeSources}, and the effects of a shear and a diffuse surface mass component were investigated in follow-up studies~\citep{SeitzSchneider1994microlensingI, Seitz1994microlensingII, RefsdalStabell1997MLLargeSrcShear}. The statistical formalism was further developed in more recent works~\citep{Neindorf2003extragalML, Tuntsov2004compactDMFromClusters, GoodmanSun2014MicrolensingFluxVariance}, and other methods have been explored~\citep{Fleury2020AnalyticAmpStats}. When analytical results are unavailable, unreliable, or cumbersome to apply, direct numerical simulations come to the rescue. Important numerical techniques that greatly enhance efficiency and accuracy include inverse ray-shooting~\citep{KayserRefsdalStabell1986microlensing}, the hierarchical tree algorithm~\citep{Wambsganss1999MLTreeAlgorithm}, and the image tracking method~\citep{Lewis1993imagetrack, Witt1993imagetrack}.
Recently, theoretical interest in statistical microlensing has been reinvigorated by the detections of high-$z$ stars~\citep{1991ApJ...379...94M} magnified by a spectacular $\sim 10^2$--$10^3$ fold near critical curves of cluster lenses~\citep{2018NatAs...2..334K, 2018NatAs...2..324R, Chen:2019ncy, 2019ApJ...880...58K}. Owing to the extreme macro magnification factors realized in these cases, even a low convergence of intracluster stars around the cluster Einstein radius $\kappa_\star\sim 10^{-3}$--$10^{-2}$ (compared to $\kappa_\star \sim 0.1$--$1$ around galaxy Einstein radii) leads to frequent microlensing brightening episodes~\citep{2017ApJ...850...49V, 2018ApJ...857...25D, Oguri:2017ock, Diego2019ExtremeMagnificationUniverse}, making these outstanding probes of compact constituents of the lens mass. Such stochastic microlensing may act also on a cluster of stars if the cluster as a whole is highly magnified~\citep{Dai2020ArcSymmetryS1226, Dai2021SunburstStarClusterMicrolensing}. With new observational data, other highly magnified candidate sources have been reported: lensed quasar images suspected of large magnification~\citep{Fujimoto2020UltraluminousQuasar} or flux anomaly~\citep{Glikman2018arXiv180705434G}, and a magnified ``knot'' in a Cosmic Noon starburst showing perplexing flux anomalies~\citep{Vanzella2020SunburstTr}.
While semi-analytic scaling laws have offered much insight into the behavior of statistical microlensing in the extreme magnification regime, more accurate modeling of flux statistics has so far relied on large yet artful numerical simulations~\citep{2017ApJ...850...49V, 2018ApJ...857...25D, Diego2019ExtremeMagnificationUniverse, Dai2021SunburstStarClusterMicrolensing}. Sometimes, it may even be computationally prohibitive to simulate realistic parameter values. In this work, with a focus on the macro caustic vicinity, we aim to develop a general and practical analytic model, which will both deepen our understanding of microlensing in the high optical depth regime and facilitate scans of large parameter space.
Microlensing has a significant impact on the magnifications achievable near a macro caustic~\citep{2017ApJ...850...49V, 2018ApJ...857...25D, Diego2019ExtremeMagnificationUniverse}. Microlenses of a characteristic Einstein radius $\theta_\star$ induce a stochastic deflection component $ \kappa^{1/2}_\star\,\theta_\star$ in the ray equation~\citep{Katz1986RandomScattering}, up to a multiplicative ``Coulomb'' logarithm reflecting the long-ranged nature of point lens deflections. This sets an effective smoothing scale on the source plane, independently of the source's angular extent $\sigma_{\rm W}$. If the magnification caused by the macro lens is uniform across this scale, stochastic microlensing conserves the mean magnification, but only induces fluctuations around it. However, when the macro magnification varies significantly, a situation that inevitably arises near a caustic, even the mean magnification is modified. Indeed, \cite{2017ApJ...850...49V} showed that a sharp caustic induced by a macro lens of a smooth mass profile is ``smeared out'' across a width of $\sim \kappa^{1/2}_\star\,\theta_\star$ by microlenses.
In the absence of microlenses, the maximum magnification is realized when a finite source grazes a sharp macro caustic, $\mu_{\rm max} \sim (\sigma_{\rm W}\,d)^{-1/2}$, where $d$ is typically the inverse of the characteristic angular scale of the macro lens that produces the caustic. Sub-galactic substructure lenses produce secondary caustics~\citep{2018ApJ...867...24D} that have larger values of $d$ compared to what smooth galaxy-scale or cluster-scale lenses can produce, and hence decreased values of $\mu_{\rm max}$. In the presence of microlenses, the sharp macro caustic is disrupted, and a corrugated network of micro caustics forms instead. In this situation, the random deflection scale $\kappa^{1/2}_\star\,\theta_\star$ comes into play. Now the mean magnification can reach a maximum value $\sim (\sigma_{\rm eff}\,d)^{-1/2}$ where $\sigma_{\rm eff} \simeq \sqrt{\sigma^2_{\rm W} + \kappa_\star\,\theta^2_\star}$, i.e. the grazing scale can get as small as $\sigma_{\rm W}$ or $\kappa^{1/2}_\star\,\theta_\star$, whichever is larger.
In real astrophysical contexts, a source may be sufficiently large (e.g. quasar, SN) to overlap multiple micro caustics, i.e. $\sigma_{\rm W} \gtrsim \theta_\star\,\kappa^{-1/2}_\star/(\sigma_{\rm eff}\,d)^{-1/2}$, when the density of micro caustics is the highest in the proximity of a macro caustic. As a result, its flux fluctuations around the mean do not exceed the mean by any large factor, and hence $(\sigma_{\rm eff}\,d)^{-1/2}$ is often a fair estimate for the maximal possible magnification after accounting for fluctuations. For smaller sources, it may still be true that $\sigma_{\rm eff} \gtrsim \theta_\star\,\kappa^{-1/2}_\star/(\sigma_{\rm eff}\,d)^{-1/2}$, so that flux fluctuations remain mild as many disconnected micro images form and contribute uncorrelated magnification fluctuations. Even in this regime, $(\sigma_{\rm eff}\,d)^{-1/2}$ is not an underestimate of the maximal achievable magnification. Flux fluctuations can greatly exceed the mean value only for very small sources (e.g. individual stellar photospheres), and for a sufficiently low surface number density of microlenses. During these short events of micro caustic crossing, the peak magnification is dominated by just a pair of micro images.
A conceptual obstacle to fully analytic calculations of the microlensing flux statistics has to do with the ``ultraviolet'' (UV) and ``infrared'' (IR) divergences, which are terms borrowed from field theory. These logarithmic divergences arise because the deflection due to any single point microlens scales inversely with the impact parameter, analogous to the Coulomb divergence associated with the inverse-square force law in three-dimensional space. The UV divergence is traced back to arbitrarily large ray deflections at arbitrarily small impact parameters to any single point microlens; consequently, the probability distribution functions (PDFs) for the random deflections have divergent higher-order moments, and the corresponding characteristic functions (CFs) contain non-analytic logarithmic terms. We seek a prescription in which the UV divergence is regulated by the finite source size. On the other hand, the IR divergence implies that the statistics can be sensitive to far-away microlenses distributed over the largest scales on the lens plane. We require a prescription in which the dependence of statistical averages on the IR cutoff is manifest and unambiguous.
The main result of this work is an accurate semi-analytic approximation for the mean and variance of the magnifications (\refeq{muWvevML} and \refeq{muW2dblintegML} respectively), for a Gaussian source profile, and in the regime that multiple micro images form. The gist of the approximation is to capture the Gaussian bulk of the deflection distribution, while judiciously neglecting the non-Gaussian tail of large deflections, which are only important for either a small portion of the source or a minority of the micro images. The approximation determines the UV and IR logarithms in a physical way, and respects the expected translational and rotational symmetries of the deflection distributions. The results only require evaluating two- and four-dimensional integrals with simple, well-behaved integrands, and are applicable to {\it any} macro lens model. In practice, answers can be obtained in less than a second by employing standard Monte Carlo integrators. The new results can be directly used to efficiently and accurately quantify the magnification statistics, for microlenses embedded in a variety of macro lens models and as a function of microlens abundance and source size, thus obviating the need to perform expensive yet tricky numerical simulations. Conversely, the semi-analytic answers enable calibration to numerical codes.
The remainder of the paper is organized as follows. In \refsec{theory}, we introduce the general theoretical framework to compute magnification statistics, in particular, the mean and the variance. In \refsec{gaussian}, we examine a toy model in which microlensing deflections are Gaussian random variables. We will develop useful intuition into the problem as this toy model is exactly solvable. In \refsec{microlens}, we turn to the real problem of discrete microlenses, and derive our key results. We then demonstrate that our semi-analytic approximation agrees well with numerical ray-shooting experiments, for various parameter choices ranging from Gaussian to non-Gaussian flux variability behaviors. We will discuss our results in \refsec{discuss}, before we give concluding remarks in \refsec{concl}. Additional technical details are presented in Appendices for reference. Results of \refsec{gaussian} and \refsec{microlens} are presented in dimensionless angular units, and can be easily scaled to the appropriate physical units in any specified astrophysical context.
\section{Theoretical framework}
\label{sec:theory}
A number of previous studies have treated microlensing using a statistical theory. These include the earlier works of \cite{DeguchiWatson1987muvariance}, \cite{Katz1986RandomScattering}, \cite{SeitzSchneider1994microlensingI} and \cite{Seitz1994microlensingII}, as well as a more general formulation in \cite{Neindorf2003extragalML}. For clarity, we reintroduce this theoretical framework, which is based on the concept of multivariate probability distribution functions and the corresponding characteristic functions.
We decompose the deflection field into a background component $\boldsymbol{\alpha}_{\rm B}(\boldsymbol{x})$ and a fluctuating component $\boldsymbol{\alpha}(\boldsymbol{x})$. The former varies smoothly as a function of the image plane position $\boldsymbol{x}$ and is given. The latter, being stochastic in nature with specific spatial correlations, will be given a statistical treatment. We adopt the general assumption that the $\boldsymbol{\alpha}(\boldsymbol{x})$'s have correlation functions that are invariant under spatial translations and rotations, as is the case if $\boldsymbol{\alpha}(\boldsymbol{x})$ is generated by point-like microlenses that are uniformly distributed on the lens plane.
For a point source at the source-plane position $\boldsymbol{y}$, the lens equation is $\boldsymbol{y} = \boldsymbol{x} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}) - \boldsymbol{\alpha}(\boldsymbol{x})$. Let $W(\boldsymbol{y})$ be the normalized surface brightness profile of a finite-sized source centered at the source-plane origin, satisfying $\int\,\mathrm{d}^2\boldsymbol{y}\,W(\boldsymbol{y}) = 1$. If unresolved, the total magnification summed over all geometric images can be written as
\begin{align}
\mu_{\rm W}(\boldsymbol{y}) = \int\,\mathrm{d}^2\boldsymbol{x}\,W\left( \boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}) - \boldsymbol{\alpha}(\boldsymbol{x}) \right).
\end{align}
Throughout, we use the $\VEV{\cdots}$ notation to indicate averaging over random realizations of $\boldsymbol{\alpha}(\boldsymbol{x})$. Inserting the Fourier decomposition of $W(\boldsymbol{y})$, we obtain the mean magnification factor~\citep{2017ApJ...850...49V}
\begin{eqnarray}
\label{eq:muWy}
\VEV{\mu_{\rm W}(\boldsymbol{y})} & = & \int\,\mathrm{d}^2\boldsymbol{x}\, \int\,\frac{\mathrm{d}^2{\boldsymbol{\ell}}}{(2\pi)^2}\,e^{-i\,{\boldsymbol{\ell}}\cdot(\boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}))}\,\widetilde W({\boldsymbol{\ell}})\,\VEV{e^{i\,{\boldsymbol{\ell}}\cdot\boldsymbol{\alpha}(\boldsymbol{x})}},
\end{eqnarray}
where $\boldsymbol{\ell}$ is the Fourier wave vector conjugate to the real-space angular variable, and $\widetilde W({\boldsymbol{\ell}})= \int\,\mathrm{d}^2\boldsymbol{y}\,W(\boldsymbol{y})\,e^{i\,{\boldsymbol{\ell}}\cdot\boldsymbol{y}}$ is the Fourier transform of $W(\boldsymbol{y})$. This expression depends on the characteristic function (CF) involving the stochastic deflection at one image-plane position $\boldsymbol{x}$. By statistical homogeneity, $\VEV{e^{i\,\boldsymbol{\ell}\cdot\boldsymbol{\alpha}(\boldsymbol{x})}}$ is independent of $\boldsymbol{x}$. The result can be recast into the form
\begin{align}
\label{eq:muWmean}
\VEV{\mu_{\rm W}(\boldsymbol{y})} = \int\,\mathrm{d}^2\boldsymbol{x}\,P^{\rm W}_1\left( \boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}) \right),
\end{align}
where $P^{\rm W}_1(\boldsymbol{\alpha})$ is the one-point probability distribution function (PDF) $P_1(\boldsymbol{\alpha})$ for the random deflection $\boldsymbol{\alpha}(\boldsymbol{x})$ at any $\boldsymbol{x}$ (see e.g. \cite{Katz1986RandomScattering}), convolved with the source profile
\begin{align}
\label{eq:P1Wdef}
P^{\rm W}_1(\boldsymbol{\alpha}) = \int\,\mathrm{d}^2\boldsymbol{y}\,W(\boldsymbol{y})\,P_1(\boldsymbol{\alpha} - \boldsymbol{y}).
\end{align}
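As a quick numerical sanity check of \refeq{P1Wdef} in the Gaussian case, the convolution of a Gaussian source profile with a Gaussian one-point deflection PDF is again a Gaussian whose variance is the sum of the two variances. A minimal 1D sketch (the widths below are illustrative choices, not values from the text):

```python
import numpy as np

# 1D slice of P_1^W = W * P_1 for a Gaussian source (width s_W) and a Gaussian
# one-point deflection PDF (width s_a); the convolution is Gaussian with
# variance s_W**2 + s_a**2. Widths are illustrative choices.
s_W, s_a = 0.3, 0.4
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
g = lambda s: np.exp(-0.5*(x/s)**2)/np.sqrt(2.0*np.pi*s**2)

conv = np.convolve(g(s_W), g(s_a), mode="same")*dx   # discrete convolution
norm = np.sum(conv*dx)                               # should be ~1
var = np.sum(x**2*conv*dx)                           # second moment of the result
print(norm, np.sqrt(var))                            # width -> sqrt(0.3**2 + 0.4**2) = 0.5
```

This additivity of variances is what produces the combined width $C(0) + \sigma^2_{\rm W}$ appearing later in the Gaussian toy model of \refsec{gaussian}.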
By transforming the integration variable from $\boldsymbol{y}$ to $\boldsymbol{\alpha}=\boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x})$, \refeq{muWmean} can also be written as (see e.g. \cite{2017ApJ...850...49V})
\begin{align}
\label{eq:muWmeanbfalp}
\VEV{\mu_{\rm W}(\boldsymbol{y})} = \int\,\mathrm{d}^2\boldsymbol{\alpha}\,P^{\rm W}_1(\boldsymbol{\alpha})\,\mu_{\rm B}(\boldsymbol{y} + \boldsymbol{\alpha}),
\end{align}
where $\mu_{\rm B}(\boldsymbol{y})$ is the magnification of a point source at $\boldsymbol{y}$ due to only the background deflection.
\refeq{muWmeanbfalp} can be interpreted as smoothing the point-source background magnification pattern with a ``point spread function'' $P^{\rm W}_1(\boldsymbol{\alpha})$, whose characteristic width is set by that of the source profile $W(\boldsymbol{y})$ or that of the random deflections $P_1(\boldsymbol{\alpha})$, whichever is larger. $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ equals the background magnification $\mu_{\rm B}(\boldsymbol{y})$ if the latter is approximately uniform over the smoothing scale set by $P^{\rm W}_1(\boldsymbol{\alpha})$. A particularly interesting situation arises near a lensing caustic where $\mu_{\rm B}(\boldsymbol{y})$ often varies rapidly; as a result $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ can differ substantially from $\mu_{\rm B}(\boldsymbol{y})$.
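To make the smoothing interpretation concrete, consider a 1D toy version of \refeq{muWmeanbfalp} in which the background magnification has the fold-like profile $\mu_{\rm B}(y) = \Theta(y)\,y^{-1/2}$ (dimensionless units, with prefactors suppressed; this is our illustrative choice, not the full 2D calculation) and $P^{\rm W}_1$ is a Gaussian of width $s_{\rm eff}$:

```python
import numpy as np
from scipy.integrate import quad

def mean_mu(y, s_eff):
    """1D toy <mu_W(y)>: Gaussian smoothing (width s_eff) of mu_B(y) = step(y)*y**-0.5.
    The substitution a = u**2 - y absorbs the integrable (y + a)**-0.5 singularity."""
    gauss = lambda a: np.exp(-0.5*(a/s_eff)**2)/np.sqrt(2.0*np.pi*s_eff**2)
    upper = np.sqrt(y + 20.0*s_eff)   # Gaussian tail beyond 20*s_eff is negligible
    return quad(lambda u: 2.0*gauss(u*u - y), 0.0, upper)[0]

for s in (0.05, 0.2):
    print(s, mean_mu(0.0, s))   # finite peak, scaling as s**-0.5
```

The divergence of $\mu_{\rm B}$ at the caustic is regulated: the smoothed peak is $\simeq 0.86\,s_{\rm eff}^{-1/2}$ in these units, illustrating the $(\sigma_{\rm eff}\,d)^{-1/2}$ scaling quoted in \refsec{intro}.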
The simplest way to quantify the scatter in the value of $\mu_{\rm W}(\boldsymbol{y})$ due to random microlens realizations is the second moment, which is given by~\citep{Neindorf2003extragalML}
\begin{eqnarray}
\label{eq:muWy2}
\VEV{\mu_{\rm W}(\boldsymbol{y})^2} & = & \int\,\mathrm{d}^2\boldsymbol{x}_1\, \int\,\mathrm{d}^2\boldsymbol{x}_2\,\int\,\frac{\mathrm{d}^2{\boldsymbol{\ell}}_1}{(2\pi)^2}\,\int\,\frac{\mathrm{d}^2{\boldsymbol{\ell}}_2}{(2\pi)^2}\,e^{-i\,{\boldsymbol{\ell}}_1\cdot(\boldsymbol{x}_1 - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}_1))}\,e^{-i\,{\boldsymbol{\ell}}_2\cdot(\boldsymbol{x}_2 - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}_2))} \nonumber\\
&& \times \widetilde W({\boldsymbol{\ell}}_1)\,\widetilde W({\boldsymbol{\ell}}_2)\,\VEV{e^{i\,{\boldsymbol{\ell}}_1\cdot\boldsymbol{\alpha}(\boldsymbol{x}_1)}\,e^{i\,{\boldsymbol{\ell}}_2\cdot\boldsymbol{\alpha}(\boldsymbol{x}_2)}}.
\end{eqnarray}
The CF involving the stochastic deflections at {\it two} different image-plane positions $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ needs to be computed. By statistical homogeneity, $\VEV{e^{i\,{\boldsymbol{\ell}}_1\cdot\boldsymbol{\alpha}(\boldsymbol{x}_1)}\,e^{i\,{\boldsymbol{\ell}}_2\cdot\boldsymbol{\alpha}(\boldsymbol{x}_2)}}$ only depends on $\boldsymbol{x}_2 - \boldsymbol{x}_1$, but not on $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ individually. The magnification factor has a standard deviation ${\rm Std}[\mu_{\rm W}(\boldsymbol{y})] = \sqrt{\VEV{\mu_{\rm W}(\boldsymbol{y})^2} - \VEV{\mu_{\rm W}(\boldsymbol{y})}^2}$~\citep{Neindorf2003extragalML}.
\section{Toy model: Gaussian random deflections}
\label{sec:gaussian}
Before we consider discrete, point microlenses as realistic random deflectors, we would like to first solve a toy model in which $\boldsymbol{\alpha}(\boldsymbol{x})$ behaves strictly as a Gaussian random vector field on the image plane. The toy model allows for the analytic calculation of many results, and hence will offer us much insight into how the source-convolved magnification factor behaves in the presence of stochastic deflections. We note that this Gaussian deflection model precisely describes the collective lensing effects of axion minihalos on lensed extragalactic stars crossing micro caustics~\citep{Dai2020AxionMinihalo}.
For full analytic tractability, we choose to consider a Gaussian source profile with half width $\sigma_{\rm W}$, $W(\boldsymbol{y}) = [(2\pi)\,\sigma^2_{\rm W}]^{-1}\,\exp(-(1/2)\,|\boldsymbol{y}|^2/\sigma^2_{\rm W})$ throughout. While many commonly studied sources such as individual stars are more appropriately modeled as uniform disks, the assumption of a Gaussian source profile does not impose a fundamental limitation on our derivation, as we will see later. The Fourier transform of this Gaussian profile is given by
\begin{align}
\label{eq:Wlgaussian}
\widetilde W({\boldsymbol{\ell}}) = \exp\left(-\frac12\,\sigma^2_{\rm W}\,|{\boldsymbol{\ell}}|^2 \right).
\end{align}
We can generally assume that $\boldsymbol{\alpha}(\boldsymbol{x})$ is the gradient of a scalar potential (valid for deflection by a single lens plane), $\boldsymbol{\alpha}(\boldsymbol{x}) = \nabla\,\psi(\boldsymbol{x})$, and that in the Fourier domain the scalar potential has an isotropic power spectrum $\langle\widetilde\psi({\boldsymbol{\ell}})\,\widetilde\psi^*({\boldsymbol{\ell}}')\rangle = (2\pi)^2\,\delta_D({\boldsymbol{\ell}} - {\boldsymbol{\ell}}')\,P_\psi(\ell)$. The two-point correlation for $\boldsymbol{\alpha}(\boldsymbol{x})$ can be decomposed into a longitudinal component $C_\parallel(r)$ and a transverse component $C_\perp(r)$:
\begin{align}
\label{eq:2ptalpalp}
\VEV{\alpha_i(\boldsymbol{x}_1)\,\alpha_j(\boldsymbol{x}_2)} = C_\parallel(r_{12})\,\frac{r_{12,i}\,r_{12,j}}{r^2_{12}} + C_\perp(r_{12})\,\left( \delta_{ij} - \frac{r_{12,i}\,r_{12,j}}{r^2_{12}} \right),
\end{align}
where $\boldsymbol{r}_{12} = \boldsymbol{x}_2 - \boldsymbol{x}_1$ and $r_{12} = |\boldsymbol{r}_{12}|$. These are related to the potential power spectrum through:
\begin{eqnarray}
\label{eq:Caapara}
C_\parallel(r) & = & \int\,\frac{\ell^3\,\mathrm{d}\ell}{2\pi}\,\left( \frac{J_1(\ell\,r)}{\ell\,r} - J_2(\ell\,r) \right)\,P_\psi(\ell), \\
\label{eq:Caaperp}
C_\perp(r) & = & \int\,\frac{\ell^3\,\mathrm{d}\ell}{2\pi}\, \frac{J_1(\ell\,r)}{\ell\,r}\,P_\psi(\ell).
\end{eqnarray}
For a well-behaved $P_\psi(\ell)$, we can define, at zero separation $r = 0$, $C(0) := C_\perp(0) = C_\parallel(0)$ (i.e. the one-point variance of $\boldsymbol{\alpha}(\boldsymbol{x})$ is finite).
For Gaussian random deflections, the mean magnification factor is given by an integral over a single image plane,
\begin{align}
\label{eq:muWinteg}
\VEV{\mu_{\rm W}(\boldsymbol{y})} & = \frac{1}{(2\pi)\,(C(0) + \sigma^2_{\rm W})}\, \int\,\mathrm{d}^2\boldsymbol{x}\,\exp\left[-\frac12\,\frac{|\boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x})|^2}{C(0) + \sigma^2_{\rm W}}\right].
\end{align}
The second moment is given by an integral over double image planes,
\begin{eqnarray}
\label{eq:muW2dblinteg}
\VEV{\mu_{\rm W}(\boldsymbol{y})^2} & = & \int\,\mathrm{d}^2\boldsymbol{x}_1\, \int\,\mathrm{d}^2\boldsymbol{x}_2\,\frac{\exp\left[ -\frac12\,\textbf{u}^T(\boldsymbol{x}_1,\,\boldsymbol{x}_2;\,\boldsymbol{y})\,\left( \textbf{C}(\boldsymbol{r}_{12}) + \sigma^2_{\rm W}\,\textbf{I} \right)^{-1}\,\textbf{u}(\boldsymbol{x}_1,\,\boldsymbol{x}_2;\,\boldsymbol{y}) \right]}{(2\pi)^2\,\sqrt{{\rm det}[ \textbf{C}(\boldsymbol{r}_{12}) + \sigma^2_{\rm W}\,\textbf{I}]}}.
\end{eqnarray}
To condense the expression, we have constructed a four-component vector
\begin{align}
\textbf{u}\left( \boldsymbol{x}_1,\,\boldsymbol{x}_2;\,\boldsymbol{y} \right) = \left[\begin{array}{c}
x_{1,1} - y_1 - \alpha_{{\rm B}, 1}(\boldsymbol{x}_1) \\
x_{1,2} - y_2 - \alpha_{{\rm B}, 2}(\boldsymbol{x}_1) \\
x_{2,1} - y_1 - \alpha_{{\rm B}, 1}(\boldsymbol{x}_2) \\
x_{2,2} - y_2 - \alpha_{{\rm B}, 2}(\boldsymbol{x}_2) \\
\end{array}\right].
\end{align}
We also introduce the four-by-four identity matrix $\textbf{I}$, and define a four-by-four covariance matrix
\begin{align}
\label{eq:Cmatrix}
\textbf{C}(\boldsymbol{r}) = \left[\begin{array}{cccc}
C(0) & 0 & C_\parallel(r)\,c^2 + C_\perp(r)\,s^2 & C_\parallel(r)\,c\,s - C_\perp(r)\,c\,s \\
0 & C(0) & C_\parallel(r)\,c\,s - C_\perp(r)\,c\,s & C_\parallel(r)\,s^2 + C_\perp(r)\,c^2 \\
C_\parallel(r)\,c^2 + C_\perp(r)\,s^2 & C_\parallel(r)\,c\,s - C_\perp(r)\,c\,s & C(0) & 0 \\
C_\parallel(r)\,c\,s - C_\perp(r)\,c\,s & C_\parallel(r)\,s^2 + C_\perp(r)\,c^2 & 0 & C(0) \\
\end{array}\right],
\end{align}
where the shorthand notations $c:=\cos\varphi$ and $s:=\sin\varphi$ are used, under the polar coordinate parametrization $\boldsymbol{r} = r\,\left[ \cos\varphi,\,\sin\varphi \right]$. Given the matrix determinant ${\rm det}[\textbf{C}(\boldsymbol{r})]=(C(0)+C_\parallel(r))\,(C(0)-C_\parallel(r))\,(C(0)+C_\perp(r))\,(C(0)-C_\perp(r))$, $\textbf{C}(\boldsymbol{r})$ is a positive-definite matrix if and only if $C(0) \pm C_\parallel(r) >0$ and $C(0) \pm C_\perp(r) >0$. In our toy model, these are strictly guaranteed by \refeq{Caapara} and \refeq{Caaperp} provided that $P_\psi(\ell) > 0$. If the off-diagonal elements in \refeq{Cmatrix} were all vanishing, we would have concluded that $\VEV{\mu_{\rm W}(\boldsymbol{y})^2} = \VEV{\mu_{\rm W}(\boldsymbol{y})}^2$. Hence, the off-diagonal matrix elements of $\textbf{C}(\boldsymbol{r})$ are the reason for the nonzero variance of $\mu_{\rm W}(\boldsymbol{y})$.
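The block structure of \refeq{Cmatrix} and its determinant factorization can be cross-checked numerically; the correlation values below are arbitrary illustrative numbers satisfying $|C_{\parallel,\perp}(r)| < C(0)$, not values from the text:

```python
import numpy as np

# Build the 4x4 covariance matrix C(r) for arbitrary illustrative values
C0, Cpar, Cperp, phi = 0.04, 0.025, -0.013, 0.7
c, s = np.cos(phi), np.sin(phi)
M = np.array([[Cpar*c*c + Cperp*s*s, (Cpar - Cperp)*c*s],
              [(Cpar - Cperp)*c*s,   Cpar*s*s + Cperp*c*c]])
C = np.block([[C0*np.eye(2), M],
              [M,            C0*np.eye(2)]])

det_direct  = np.linalg.det(C)
det_formula = (C0 + Cpar)*(C0 - Cpar)*(C0 + Cperp)*(C0 - Cperp)
print(det_direct, det_formula)              # the two agree
print(np.linalg.eigvalsh(C).min() > 0.0)    # positive definite since |Cpar|, |Cperp| < C0
```

The off-diagonal block $M$ is the rotation of ${\rm diag}(C_\parallel,\,C_\perp)$ by the angle $\varphi$, which is why the eigenvalues of $\textbf{C}$ are $C(0) \pm C_\parallel$ and $C(0) \pm C_\perp$.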
To give numerical examples, we specify a simple analytic form for the potential power spectrum:
\begin{align}
P_\psi(\ell) = \alpha^2_0\,\frac{\sigma^4_\psi}{(2\pi)^3}\,\exp\left( -\frac12\,\frac{\sigma^2_\psi\,\ell^2}{(2\pi)^2} \right).
\end{align}
The resultant root-mean-square (RMS) deflection is $\sqrt{\VEV{|\boldsymbol{\alpha}(\boldsymbol{x})|^2}} = \sqrt{2}\,\alpha_0$. The parameter $\sigma_\psi$ is introduced to set the coherence scale of the deflections on the image plane. The deflection correlation functions can be explicitly computed, $C_\perp(r)=\alpha^2_0\,e^{-(1/2)\,(2\pi\,r/\sigma_\psi)^2}$, and $C_\parallel(r)=C_\perp(r)\,[1-(2\pi\,r/\sigma_\psi)^2]$. The RMS convergence is $\sqrt{\VEV{\kappa^2(\boldsymbol{x})}} = \sqrt{\VEV{[(1/2)\,\nabla^2\,\psi(\boldsymbol{x})]^2}} = \sqrt{2}\,(2\pi)\,\alpha_0/\sigma_\psi$.
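These closed forms can be verified directly against \refeq{Caapara} and \refeq{Caaperp} with standard numerical quadrature; a short check (parameter values are the ones used in the figures of this section):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jv

alpha0, sigma_psi = 0.2, 10.0

def P_psi(ell):
    # Gaussian potential power spectrum from the text
    return alpha0**2*sigma_psi**4/(2.0*np.pi)**3*np.exp(-0.5*(sigma_psi*ell/(2.0*np.pi))**2)

def C_perp_num(r):
    # transverse correlation: integral of (ell^3/2pi) J1(ell r)/(ell r) P_psi
    f = lambda ell: ell**3/(2.0*np.pi)*(j1(ell*r)/(ell*r))*P_psi(ell)
    return quad(f, 0.0, np.inf, limit=200)[0]

def C_par_num(r):
    # longitudinal correlation: subtract the J2 term
    f = lambda ell: ell**3/(2.0*np.pi)*(j1(ell*r)/(ell*r) - jv(2, ell*r))*P_psi(ell)
    return quad(f, 0.0, np.inf, limit=200)[0]

C_perp_closed = lambda r: alpha0**2*np.exp(-0.5*(2.0*np.pi*r/sigma_psi)**2)
C_par_closed  = lambda r: C_perp_closed(r)*(1.0 - (2.0*np.pi*r/sigma_psi)**2)

for r in (1.0, 3.0):
    print(C_perp_num(r), C_perp_closed(r))   # agree
    print(C_par_num(r),  C_par_closed(r))    # agree
```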
To study the situation of a rapidly varying $\mu_{\rm B}(\boldsymbol{y})$, let us consider the background deflection of a fold caustic, which is the most commonly encountered caustic. We use the following parameterization for the fold caustic~\citep{1992grle.book.....S},
\begin{eqnarray}
x_1 - \alpha_{\rm B, 1}(\boldsymbol{x}) & = & \frac12\,d_1\,x^2_1 + d_2\,x_1\,x_2 - \frac12\,d_1\,x^2_2, \\
x_2 - \alpha_{\rm B, 2}(\boldsymbol{x}) & = & 2\,(1 - \kappa_0)\,x_2 + \frac12\,d_2\,x^2_1 - d_1\,x_1\,x_2 - \frac12\,d_2\,x^2_2.
\end{eqnarray}
The parameters are the local background convergence $\kappa_0$, and a gradient vector $\boldsymbol{d}=[d_1,\,d_2]$ which is related to the third-order derivative of the background lensing potential. We introduce $d=\sqrt{d^2_1 + d^2_2}$ and $\alpha=-\tan^{-1}(d_1/d_2)$ following the notation of \cite{2017ApJ...850...49V}. The coordinate system is conveniently chosen so that the caustic aligns with the $y_2$ axis on the source plane, and the corresponding critical curve on the image plane intersects the $x_1$ axis at an angle $\alpha$. As an example, we consider the case $d_1>0$ and $d_2=0$, so that $d=d_1$ and $\alpha=\pi/2$.
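A direct transcription of this fold parameterization, together with the point-source magnification $\mu_{\rm B} = 1/|\det A|$ from the analytic Jacobian $A = \partial\boldsymbol{y}/\partial\boldsymbol{x}$ of the mapping (parameter values follow the example above):

```python
import numpy as np

kappa0, d1, d2 = 0.7, 1.0e-5, 0.0   # example values from the text (d = d1, alpha = pi/2)

def src_pos(x1, x2):
    """Background mapping x -> y for the fold parameterization."""
    y1 = 0.5*d1*x1**2 + d2*x1*x2 - 0.5*d1*x2**2
    y2 = 2.0*(1.0 - kappa0)*x2 + 0.5*d2*x1**2 - d1*x1*x2 - 0.5*d2*x2**2
    return y1, y2

def mu_B(x1, x2):
    """Point-source magnification 1/|det A| from the analytic Jacobian."""
    a11 = d1*x1 + d2*x2
    a12 = d2*x1 - d1*x2                       # A is symmetric (gradient deflection)
    a22 = 2.0*(1.0 - kappa0) - d1*x1 - d2*x2
    return 1.0/abs(a11*a22 - a12*a12)

# On the x1 axis, y1 = d1*x1**2/2, so mu_B scales as y1**-0.5 near the caustic
print(src_pos(1.0, 0.0), mu_B(1.0, 0.0))
```

The critical curve sits at $\det A = 0$ (here near $x_1 = 0$), and images at $(x_1,\,0)$ reproduce the familiar fold scaling $\mu_{\rm B} \propto y_1^{-1/2}$.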
Several important scales can be identified on the source plane. First, the characteristic source size is $\sigma_{\rm W}$. Second, in the vicinity of the ideally smooth background caustic, the random deflections cannot be neglected. This is relevant within a source-plane width $\sim \alpha_0$ from the background caustic, as set by the RMS deflection. Within this proximity of the background caustic, the background magnification reaches $\mu_f \sim (\alpha_0\,d)^{-1/2}$, which we assume to be large. When $\mu_f$ is greater than the inverse of the typical convergence fluctuation $\sim (\alpha_0/\sigma_\psi)$, micro caustics join to form a network within this narrow band. The random deflections have a coherence scale $\sim \sigma_\psi$ on the image plane. When ray-traced onto the source plane following the background ray equation, this on average maps to a compressed scale $\sim \sigma_\psi/\mu_f = \sigma_\psi\,(\alpha_0\,d)^{1/2}$ for the corrugated micro caustic pattern, along the direction perpendicular to the background caustic. When the micro caustic network does arise, parametrically we have the hierarchy $\alpha_0 > \sigma_\psi\,(\alpha_0\,d)^{1/2}$. \reffig{micro_caustics} shows a numerical example of how a smooth macro caustic is replaced by a corrugated micro caustic pattern due to the Gaussian random deflections.
\begin{figure}[h]
\centering
\includegraphics[scale=0.43]{micro_caustic_network_example_gaussian.png}
\caption{A corrugated micro caustic network that forms on the source plane in the vicinity of a background caustic (red dashed line). We adopt the model of Gaussian random deflections as introduced in \refsec{gaussian}, setting parameter values $d=10^{-5}$, $\alpha_0 = 0.2$, and $\sigma_\psi=10$. The density of points is proportional to the magnification for a point source. Micro caustics are visible in the form of sharp linear features of high point densities.}
\label{fig:micro_caustics}
\end{figure}
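Realizations like the one underlying \reffig{micro_caustics} can be drawn by the standard FFT filtering of white noise to the target potential power spectrum; a minimal sketch (grid size, box length, and random seed are our illustrative choices):

```python
import numpy as np

alpha0, sigma_psi = 0.2, 10.0
N, L = 512, 200.0                        # grid and box; box length >> sigma_psi
dx = L/N
rng = np.random.default_rng(0)

ell = 2.0*np.pi*np.fft.fftfreq(N, d=dx)  # Fourier wave numbers on the grid
lx, ly = np.meshgrid(ell, ell, indexing="ij")
l2 = lx**2 + ly**2

# potential power spectrum P_psi(ell) from the text
P = alpha0**2*sigma_psi**4/(2.0*np.pi)**3*np.exp(-0.5*sigma_psi**2*l2/(2.0*np.pi)**2)

# filter white noise to the target spectrum; deflection = gradient of psi
W = np.fft.fft2(rng.standard_normal((N, N)))
amp = np.sqrt(P)/dx
alpha_x = np.fft.ifft2(1j*lx*W*amp).real
alpha_y = np.fft.ifft2(1j*ly*W*amp).real

rms = np.sqrt(np.mean(alpha_x**2 + alpha_y**2))
print(rms)   # should be close to sqrt(2)*alpha0 ~ 0.28
```

Tracing a dense grid of image-plane rays through $\boldsymbol{y} = \boldsymbol{x} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}) - \boldsymbol{\alpha}(\boldsymbol{x})$ and histogramming the source-plane positions then yields magnification maps of the kind shown in the figure.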
We evaluate the multi-dimensional integrations \refeq{muWinteg} and \refeq{muW2dblinteg} using the widely used Monte Carlo algorithm \texttt{vegas}~\citep{Lepage1978vegas, vegasEnhanced}. In \reffig{mu_example}, we present numerical examples that verify the intuitions we have developed based on our analytic scrutiny of the Gaussian random deflection model. We see that the mean magnification $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ as a function of source center $\boldsymbol{y}$ indeed has a smoothed behavior at the background caustic. The maximal mean magnification and the width of the smoothed curve are set by the size of random deflections $\alpha_0$ if that exceeds the source size ($\alpha_0 > \sigma_{\rm W}$). From one realization to another, $\mu_{\rm W}(\boldsymbol{y})$ fluctuates around the mean $\VEV{\mu_{\rm W}(\boldsymbol{y})}$. However, such fluctuations are suppressed when the source size $\sigma_{\rm W}$ is larger than the characteristic separation of micro caustics.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{mu_example_Gaussian_defl.pdf}
\caption{Total magnification factor $\mu_{\rm W}$ for a finite source with a Gaussian profile as a function of distance to the background caustic (vertical dotted line at $y_1 = 0$). We set parameter values $\kappa_0=0.7$, $d=10^{-5}$, $\alpha_0=0.2$ and $\sigma_\psi=10$, as defined in the text, and consider increasing source sizes $\sigma_{\rm W}=0.005,\,0.05,\,0.2,\,0.5$ (from top to bottom). We show both the mean magnification $\VEV{\mu_{\rm W}}$ (red solid) and its standard deviation $\sqrt{\VEV{\mu^2_{\rm W}}-\VEV{\mu_{\rm W}}^2}$ due to random deflections (cyan band; $\pm 1\,\sigma$). The magnification factor for an idealized point source without random deflections is also shown for comparison (blue dashed). For $\sigma_{\rm W} < \alpha_0$, the height and width of the peak $\VEV{\mu_{\rm W}}$ are set by $\alpha_0$, while the magnification fluctuations decrease as $\sigma_{\rm W}$ increases. For $\sigma_{\rm W} > \alpha_0$, the height and width of the peak $\VEV{\mu_{\rm W}}$ are instead set by $\sigma_{\rm W}$, and the magnification fluctuations are highly smoothed out.}
\label{fig:mu_example}
\end{figure}
\section{Random deflections from compact microlenses}
\label{sec:microlens}
\begin{figure}[h]
\centering
\includegraphics[scale=0.43]{micro_caustic_network_example_ML.png}
\caption{The corrugated micro caustic network caused by point microlenses with $\theta_\star=1$ and $\kappa_\star=0.02$ ($\sigma_{\rm eff} = 0.14$; $\sigma_{\rm ml}$ comparable to $\alpha_0$ used for \reffig{micro_caustics}). Parameter choices for the background fold caustic are the same as adopted for \reffig{micro_caustics}. In comparison to the case of Gaussian random deflections, the micro caustics induced by point lenses are stronger, and lie preferentially parallel to the background caustic.}
\label{fig:micro_caustics_ml}
\end{figure}
In most astrophysical situations, random deflections are due to compact lenses such as individual stars. \reffig{micro_caustics_ml} shows an example of the corrugated micro caustic network cast by random point lenses, which looks qualitatively different from that formed due to Gaussian random deflections.
An important difference between the case of point microlenses and that of Gaussian random deflections is that very large deflections are generated with a low but non-negligible probability. This happens when the ray encounters a microlens at a small impact parameter. This not only greatly enhances the flux fluctuations, but also renders the distribution of $\boldsymbol{\alpha}(\boldsymbol{x})$ heavy-tailed~\citep{Katz1986RandomScattering}. As a consequence, $\boldsymbol{\alpha}(\boldsymbol{x})$ has divergent second-order and higher-order moments, which hinders exact analytic treatment.
For a simple discussion, we assume that all microlenses have the same mass, and hence the same angular Einstein scale $\theta_\star$. We will comment on the generalization to the case of a continuous microlens mass distribution toward the end of this section. Other angular scales of the problem can all be expressed in units of $\theta_\star$. We assume that the microlenses are uniformly distributed on the lens plane and contribute a mean convergence $\kappa_\star > 0$.
The microlensing deflection can be written as
\begin{align}
\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}) = - \kappa_\star\,\boldsymbol{x} + \theta^2_\star\,\sum^N_{I=1}\,\frac{\boldsymbol{x} - \boldsymbol{z}_I}{|\boldsymbol{x} - \boldsymbol{z}_I|^2}.
\end{align}
In the second term, we sum over contributions from all microlenses $I=1,2,\cdots,\,N$, which are located at $\boldsymbol{z}_I$'s, respectively. In the first term, we subtract the mean deflection due to a uniform mass sheet of convergence $\kappa_\star$. Including the first term enforces that $\VEV{\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x})}=0$. Unlike in the case of Gaussian random deflections, analytic calculations of the CFs for $\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x})$ are difficult for the case of point microlenses. Still, the CFs can be reduced to those corresponding to a single microlens, provided that microlenses have independent positions on the lens plane. This forms the basis of the following analytic treatment.
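For modest $N$, the deflection sum above can be evaluated by direct summation. The sketch below is ours (for the large $N$ used later in this paper one would instead use a hierarchical tree code); it transcribes the two terms of the equation directly.

```python
import numpy as np

def alpha_ml(x, lens_pos, theta_star=1.0, kappa_star=0.0):
    """Microlensing deflection at image-plane point x: the mean-sheet term
    -kappa_star * x plus the direct sum over point lenses at positions z_I."""
    x = np.asarray(x, dtype=float)
    d = x[None, :] - np.asarray(lens_pos, dtype=float)        # (N, 2) separations x - z_I
    defl = theta_star**2 * np.sum(d / np.sum(d**2, axis=1)[:, None], axis=0)
    return -kappa_star * x + defl

# single lens at the origin: deflection magnitude is theta_star^2 / |x|
a = alpha_ml([2.0, 0.0], [[0.0, 0.0]])
# a -> [0.5, 0.0]
```

With $\kappa_\star$ chosen to match the mean convergence of the lens sample, the mean-sheet term cancels the coherent part of the sum, enforcing $\VEV{\boldsymbol{\alpha}_{\rm ml}}=0$ as stated in the text.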
\subsection{One-point deflection statistics}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{Ifun_plot.pdf}
\caption{The function $I(t)$ as defined in \refeq{It}. We compare the exact numerical evaluation (black) and the leading-order approximation (red), $-(1/2\,t^2)\,(1 - \gamma_E + \ln\,2 t)$, in the $(1/t)$-expansion. The leading-order approximation works remarkably well for $t \gtrsim 1$.}
\label{fig:Ifun}
\end{figure}
Let $P_1[\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x})]$ be the PDF for $\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x})$. Its Fourier transform gives the one-point CF:
\begin{align}
\Phi_1[{\boldsymbol{\ell}};\,\boldsymbol{x}] = \VEV{e^{i\,{\boldsymbol{\ell}}\cdot\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x})}} = e^{-i\,\kappa_\star\,{\boldsymbol{\ell}}\cdot\boldsymbol{x}}\,e^{ N\,\ln\varphi_1[{\boldsymbol{\ell}};\,\boldsymbol{x}] }.
\end{align}
Here $\varphi_1[{\boldsymbol{\ell}};\,\boldsymbol{x}]$ is the one-point CF due to a single microlens. We specify that each microlens is uniformly distributed within a disk of radius $R$, and compute the CF for $\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{0})$. We perform the following integral (\reffig{ml_defl_integs}; left):
\begin{align}
\label{eq:Itinteg}
\ln\varphi_1[{\boldsymbol{\ell}}] \equiv \ln\varphi_1[{\boldsymbol{\ell}};\,\boldsymbol{0}] & \approx \varphi_1[{\boldsymbol{\ell}};\,\boldsymbol{0}] - 1 = \frac{1}{\pi\,R^2}\,\int\displaylimits_{|\boldsymbol{z}| < R}\,\mathrm{d}^2\boldsymbol{z}\,\left( e^{ -i\,\theta^2_\star\,{\boldsymbol{\ell}}\cdot\boldsymbol{z}/|\boldsymbol{z}|^2 } - 1 \right)
\equiv I\left( \frac{R}{\theta^2_\star\,\ell} \right),
\end{align}
where $\ell = |\boldsymbol{\ell}|$. The integral $I(t)$ can be expressed as a series expansion:
\begin{align}
\label{eq:It}
I(t) & \equiv \frac{2}{t^2}\,\int^{t}_0\,t'\,\mathrm{d} t'\,\left[ J_0\left(\frac{1}{t'} \right) - 1 \right] \nonumber\\
& = - \frac{t^{-2}}{2}\,\left( 1 - \gamma_E + \ln\,2\,t \right) - \frac{t^{-4}}{64} + \frac{t^{-6}}{4608} + \frac{t^{-8}}{884736} + \cdots,
\end{align}
where $J_0(x)$ is the Bessel function of the first kind of order zero, and $\gamma_E \approx 0.577216$ is the Euler–Mascheroni constant. For sufficiently large $R$ we have $t \gtrsim 1$, and the series converges rapidly. Keeping only the $\mathcal{O}(1/t^2)$ term gives a decent approximation (\reffig{Ifun}):
\begin{align}
\label{eq:lnvphi1}
\ln\varphi_1[{\boldsymbol{\ell}}] \approx -\frac12\,\frac{\theta^4_\star\,\ell^2}{R^2}\,\left( 1 - \gamma_E + \ln\frac{2\,R}{\theta^2_\star\,\ell} \right).
\end{align}
The presence of the $\ell$-dependent logarithm $\ln(2\,R/\theta^2_\star\,\ell)$ reflects that the deflection due to one microlens follows a heavy-tailed distribution; a Taylor expansion in $\boldsymbol{\ell}$ around $\boldsymbol{\ell} = 0$ is not possible.
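The expansion \refeq{It} is easy to cross-check numerically. The sketch below (our conventions; \texttt{scipy} assumed) evaluates $I(t)$ after the substitution $u = 1/t'$, which turns the oscillatory lower endpoint into a smoothly decaying tail, and compares against the leading $\mathcal{O}(1/t^2)$ term.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def I_numeric(t):
    # substitute u = 1/t':  I(t) = (2/t^2) * int_{1/t}^inf (J0(u) - 1) / u^3 du
    val, _ = quad(lambda u: (j0(u) - 1.0) / u**3, 1.0 / t, np.inf, limit=400)
    return 2.0 * val / t**2

def I_leading(t):
    # leading term of the 1/t expansion
    return -(1.0 / (2.0 * t**2)) * (1.0 - np.euler_gamma + np.log(2.0 * t))

# close agreement for t >~ 1, as shown in the figure
print(I_numeric(5.0), I_leading(5.0))
```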
Using $\kappa_\star = N\,\theta^2_\star/R^2$, and setting $\boldsymbol{x}=\boldsymbol{0}$, we derive
\begin{align}
\label{eq:Phi1}
\Phi_1[\boldsymbol{\ell}] = \exp\left[ - \frac12\,|\boldsymbol{\ell}|^2\,\kappa_\star\,\theta^2_\star\,\left( 1 - \gamma_E + \ln\frac{2\,R}{\theta^2_\star\,\ell} \right) \right].
\end{align}
Note that in the large $N$ limit, the higher order terms in \refeq{It}, i.e. $\sim t^{-2\,k}$ for $k=2,3,\cdots$, are suppressed by $\sim 1/N^{k-1}$, respectively.
The variance of the deflection angle $\VEV{|\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{0})|^2}\approx \kappa_\star\,\theta^2_\star\,(1-\gamma_E + \ln(2\,R/\theta^2_\star\,\ell))$ is always of order $\sigma^2_{\rm eff} = \theta^2_\star\,\kappa_\star$. Additionally, there is a multiplicative factor that depends logarithmically on the IR cutoff scale $R$ as well as the correlation scale of interest $\sim 1/\ell$. By fixing $R=R_*$ and $\ell=\ell_*$ in the logarithm for some appropriate values of $R_*$ and $\ell_*$, we essentially approximate $P_1[\boldsymbol{\alpha}_{\rm ml}]$ as a two-dimensional Gaussian distribution:
\begin{align}
\label{eq:P1gaussian}
P_1[\boldsymbol{\alpha}_{\rm ml}] \approx \frac{1}{2\pi\,\sigma^2_{\rm ml}(R_*,\,\ell_*)}\,\exp\left( - \frac12\,\frac{|\boldsymbol{\alpha}_{\rm ml}|^2}{\sigma^2_{\rm ml}(R_*,\,\ell_*)} \right),
\end{align}
where we introduce
\begin{align}
\label{eq:sigma2ml}
\sigma^2_{\rm ml}(R_*,\,\ell_*) = \theta^2_\star\,\kappa_\star\,\left( 1 - \gamma_E + \ln\frac{2\,R_*}{\theta^2_\star\,\ell_*} \right).
\end{align}
The key question is what the appropriate values for $R_*$ and $\ell_*$ would be under this approximation.
The $R$ dependence originates from the collective influence of faraway microlenses. One straightforward choice for $R_*$ would be to use the full angular scale on the image plane over which the microlens convergence has nearly a uniform value $\kappa_\star$. This scale can be orders of magnitude larger than the Einstein scale $\theta_\star$. For galaxy or intracluster microlensing, it may be taken to be the characteristic extent of the galactic stellar halo or the intracluster light halo, respectively.
\cite{Katz1986RandomScattering} (hereafter KBP86) instead propose that $R_*$ should be the scale over which the microlenses produce incoherent deflections across the distribution of micro images, arguing that coherent deflections, for any fixed microlens realization, lead to an overall uniform shift of the micro images without affecting the magnification. Define an effective source size
\begin{align}
\label{eq:sigmaeff}
\sigma_{\rm eff} = \sqrt{\sigma^2_{\rm W} + \kappa_\star\,\theta^2_\star},
\end{align}
which accounts for intrinsic source size and effective ``broadening'' due to random microlensing deflections. Let $\mu_{\rm B}$ be the macro magnification near a background fold caustic as introduced in \refsec{gaussian}. The maximal value of $\mu_{\rm B}$ is limited by the effective source size $\sigma_{\rm eff}$, reaching $\mu_f \sim (\sigma_{\rm eff}\,d)^{-1/2}$. Either macro image thus has an extent $\sim \mu_{\rm B}\,\sigma_{\rm eff}$. Following the choice of KBP86, we may set $R_*$ to $\mu_{\rm B}\,\sigma_{\rm eff} \lesssim \mu_f\,\sigma_{\rm eff}= (\sigma_{\rm eff}/d)^{1/2}$.
However, the choice of KBP86 is not entirely justifiable if the background lens model is non-uniform. The issue can be particularly non-trivial near a macro caustic where the (small) gradients of background convergence and shear set the strength of the caustic --- large-scale Poisson fluctuations in the microlens number not only generate a coherent deflection, but also contribute small gradients of convergence and shear which can modify the effective caustic strength from realization to realization. In this case, we set $R_*$ to be the largest scale over which the microlenses are uniformly distributed.
The typical value of $\ell_*$ to be set in the logarithm can be related to the inverse of the width of $P^{\rm W}_1[\boldsymbol{\alpha}]$, which is the amount needed to compensate for $\boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x}) \neq 0$. This means that we can set $\ell_*=1/\sigma_{\rm eff}$.
For the choice $R_*=\mu_f\,\sigma_{\rm eff}$ (KBP86) and $\ell_*=1/\sigma_{\rm eff}$, the truncation of expansion \refeq{It} is valid if
\begin{align}
\label{eq:th2ltoR}
\frac{\theta^2_\star\,\ell_*}{R_*} = \frac{\theta^2_\star}{\mu_{\rm B}\,\sigma^2_{\rm eff}} = \frac{\theta_\star\,\kappa^{-1/2}_\star\,\mu^{-1}_{\rm B}}{\sigma_{\rm eff}}\,\frac{\theta_\star\,\kappa^{1/2}_\star}{\sigma_{\rm eff}} \lesssim 1.
\end{align}
If $\sigma_{\rm eff}$ is larger than the characteristic scale of the micro-caustic network $\sim \theta_\star\,\kappa^{-1/2}_\star\,\mu^{-1}_{\rm B}$, then the first factor is smaller than unity. This corresponds to the situation where either the physical source extent $\sigma_{\rm W}$ overlaps multiple micro caustics, or the typical microlensing broadening $\theta_\star\,\kappa^{1/2}_\star$ overlaps multiple micro caustics (the regime of many micro images). The second factor is no larger than unity. Hence, $\theta^2_\star\,\ell_*/R_* < 1$ is satisfied in this situation.
As we verify numerically in \reffig{P1W}, assuming the random microlensing deflections to follow approximately a two-dimensional Gaussian distribution can fairly accurately reproduce the exact source-profile convoluted distribution $P^{\rm W}_1(\boldsymbol{\alpha}_{\rm ml})$, provided that $\theta^2_\star\,\ell_* < R_*$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{P1W_exact_vs_gaussian.pdf}
\caption{Source-profile convoluted distribution $P^{\rm W}_1(\boldsymbol{\alpha}_{\rm ml})=P^{\rm W}_1(|\boldsymbol{\alpha}_{\rm ml}|)$. We consider the case in which the highest possible mean magnification $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ is realized when the source center $\boldsymbol{y}$ is within a distance $\sim \sigma_{\rm eff}$ (see \refeq{sigmaeff}) of the background caustic, and set $R_*=(\sigma_{\rm eff}/d)^{1/2}$ due to \cite{Katz1986RandomScattering}. Panels correspond to different values for the characteristic source size $\sigma_{\rm W}$. Good agreement is found between direct numerical evaluation of \refeq{P1Wdef} (black solid curves), and our analytic model (\refeq{P1gaussian}; red dashed curves) which approximates the PDF for the microlensing random deflections, $P _1(|\boldsymbol{\alpha}_{\rm ml}|)$, as an isotropic two-dimensional Gaussian distribution.}
\label{fig:P1W}
\end{figure}
\refeq{sigma2ml} can be recast into the following form
\begin{align}
\sigma^2_{\rm ml}(R_*,\,\ell_*) = \theta^2_\star\,\kappa_\star\,\left[ \ln\left( 2\,e^{1-\gamma_E} \, N^{1/2}_* \right) - \ln\left(\theta_\star\,\kappa^{1/2}_\star\,\ell_* \right) \right],
\end{align}
where $N_* = \kappa_\star\,R^2_*/\theta^2_\star$ is the number of microlenses within an image-plane disk of radius $R_*$. The first logarithmic term has been previously derived by KBP86 as $\ln(3.05\,N^{1/2}_*)$. Our result includes a second term $\ln(\theta_\star\,\kappa^{1/2}_\star\,\ell_*)$ not found in KBP86, as we have argued that it is appropriate to set $\ell_* = 1/\sigma_{\rm eff}$ (see \refeq{sigmaeff}). For small source sizes, $\sigma_{\rm W} < \theta_\star\,\kappa^{1/2}_\star$ and hence $1/\ell_* = \sigma_{\rm eff} \approx \theta_\star\,\kappa^{1/2}_\star$, rendering this second logarithm numerically negligible. On the other hand, this second logarithm increases the effective Gaussian width for large sources, i.e. $\sigma_{\rm eff} \approx \sigma_{\rm W} > \theta_\star\,\kappa^{1/2}_\star$.
From the above analysis, we obtain the following approximation formula for the mean magnification factor,
\begin{align}
\label{eq:muWvevML}
\VEV{\mu_{\rm W}(\boldsymbol{y})} & \approx \frac{1}{(2\pi)\,(\sigma^2_{\rm ml}(R_*,\,\ell_*) + \sigma^2_{\rm W})}\, \int\,\mathrm{d}^2\boldsymbol{x}\,\exp\left[-\frac12\,\frac{|\boldsymbol{x} - \boldsymbol{y} - \boldsymbol{\alpha}_{\rm B}(\boldsymbol{x})|^2}{\sigma^2_{\rm ml}(R_*,\,\ell_*) + \sigma^2_{\rm W}}\right],
\end{align}
where the value of $\sigma^2_{\rm ml}(R_*,\,\ell_*)$ must be appropriately set using \refeq{sigma2ml}.
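A simple sanity check of the formula above: with the background deflection switched off, $\boldsymbol{\alpha}_{\rm B}=0$, the lens map is the identity and the integral must return $\VEV{\mu_{\rm W}}=1$ for any source position. The brute-force grid evaluation below is illustrative only (a realistic $\boldsymbol{\alpha}_{\rm B}$ near a fold would come from the background model discussed earlier); the helper name and grid parameters are ours.

```python
import numpy as np

def mean_mu_W(y, alpha_B, sigma2, half=8.0, n=400):
    """Grid evaluation of the mean-magnification integral; sigma2 is the
    total Gaussian width sigma_ml^2 + sigma_W^2, and alpha_B maps an (m, 2)
    array of image-plane positions to deflections."""
    g = np.linspace(-half, half, n)
    xx, yy = np.meshgrid(g, g, indexing="ij")
    x = np.stack([xx.ravel(), yy.ravel()], axis=1)
    u = x - np.asarray(y) - alpha_B(x)
    w = np.exp(-0.5 * np.sum(u**2, axis=1) / sigma2) / (2.0 * np.pi * sigma2)
    return np.sum(w) * (g[1] - g[0]) ** 2

# no background deflection: the mean magnification must be exactly 1
mu = mean_mu_W([0.3, -0.2], lambda x: np.zeros_like(x), sigma2=0.5)
# mu -> ~1
```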
\subsection{Two-point deflection statistics}
Apart from the mean magnification $\VEV{\mu_{\rm W}(\boldsymbol{y})}$, we would like to know the statistical fluctuations in the magnification factor between random microlens realizations. As we have shown in \refeq{muWy2}, calculating the second moment $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ requires the knowledge of the two-point CF for microlensing deflection:
\begin{align}
\Phi_2[{\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2] = \VEV{e^{i{\boldsymbol{\ell}}_1\cdot\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_1)}\,e^{i{\boldsymbol{\ell}}_2\cdot\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_2)}} = e^{-i\,\kappa_\star\,({\boldsymbol{\ell}}_1\cdot\boldsymbol{x}_1 + {\boldsymbol{\ell}}_2\cdot\boldsymbol{x}_2 )}\,\exp\left[N\,\ln\varphi_2\left[ {\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2 \right]\right].
\end{align}
Here $\varphi_2\left[ {\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2 \right]$ is the two-point CF due to a single microlens, and is given by the following integral
\begin{eqnarray}
\label{eq:lnvphi2}
\ln\varphi_2\left[ {\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2 \right] & \approx & \varphi_2\left[ {\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2 \right] - 1 \nonumber\\
& = & \frac{1}{\pi\,R^2}\,\int\displaylimits_{|\boldsymbol{z}|<R}\,\mathrm{d}^2\boldsymbol{z}\,\left[ \exp\left( i\,\theta^2_\star\,\left( \frac{{\boldsymbol{\ell}}_1\cdot(\boldsymbol{x}_1 - \boldsymbol{z})}{|\boldsymbol{x}_1 - \boldsymbol{z}|^2} + \frac{{\boldsymbol{\ell}}_2\cdot(\boldsymbol{x}_2 - \boldsymbol{z})}{|\boldsymbol{x}_2 - \boldsymbol{z}|^2} \right) \right) - 1 \right].
\end{eqnarray}
A closed-form result for \refeq{lnvphi2} for arbitrary wave vectors ${\boldsymbol{\ell}}_1$ and ${\boldsymbol{\ell}}_2$ and arbitrary image positions $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ is unknown to us. For finite sources, nevertheless, it is useful to analytically extract the part that has quadratic dependence on ${\boldsymbol{\ell}}_1$ and ${\boldsymbol{\ell}}_2$ (with additional logarithms as a result of non-analyticity). Keeping only this quadratic part amounts to approximating the two-point joint PDF for the random deflections
\begin{align}
P_2\left(\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_1),\,\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_2) \right) = \int\,\frac{\mathrm{d}^2{\boldsymbol{\ell}}_1}{(2\pi)^2}\,\int\,\frac{\mathrm{d}^2{\boldsymbol{\ell}}_2}{(2\pi)^2}\,e^{-i\,{\boldsymbol{\ell}}_1\cdot\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_1)}\,e^{-i\,{\boldsymbol{\ell}}_2\cdot\boldsymbol{\alpha}_{\rm ml}(\boldsymbol{x}_2)}\,\Phi_2\left[ {\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2 \right],
\end{align}
as a two-dimensional normal distribution, which will lead to analytic simplification of the expression for $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$. We argue that this approximation is reasonable if the effective source size overlaps multiple micro-caustics $\theta_\star/(\kappa^{1/2}_\star\,\mu_{\rm B}\,\sigma_{\rm eff}) \lesssim 1$, which, as we have argued before with \refeq{th2ltoR}, necessarily implies that $\theta^2_\star\,\ell_*/R_* \lesssim 1$. In this regime, ${\boldsymbol{\ell}}_1$ and ${\boldsymbol{\ell}}_2$ are typically on the order of $\ell_*$, and the separation between the two image-plane positions is typically comparable to the macro image extent $r_{12} \equiv |\boldsymbol{x}_1 - \boldsymbol{x}_2|$. With the condition $\theta^2_\star\,\ell_{1,2}/|\boldsymbol{x}_1 - \boldsymbol{x}_2| < 1$, an analytic approximation for the integral \refeq{lnvphi2} can be obtained. This is given by \refeq{lnvphi2approx}. See \refapp{twoptinteg} for the derivation. The result is the following approximation for the two-point CF:
\begin{eqnarray}
\label{eq:lnPhi2}
\ln\Phi_2[{\boldsymbol{\ell}}_1,\,{\boldsymbol{\ell}}_2;\,\boldsymbol{x}_1,\,\boldsymbol{x}_2] & \approx & - \frac{\kappa_\star\,\theta^2_\star}{2}\,\left( |{\boldsymbol{\ell}}_1|^2 + |{\boldsymbol{\ell}}_2|^2 \right)\,\left( 1 - \gamma_E + \ln\frac{2\,R_*}{\theta^2_\star\,\ell_*} \right) \\
&& - \kappa_\star\,\theta^2_\star\,\left[ \left( \ln\frac{2\,R_*}{r_{12}} + \frac12\,\ln\left( 1 + \frac{r^2_{12}}{4\,R^2_*} \right) - \ln 2 + \frac12 \right)\,\left( {\boldsymbol{\ell}}_1\cdot{\boldsymbol{\ell}}_2 \right) - \frac{\left({\boldsymbol{\ell}}_1\cdot\boldsymbol{r}_{12}\right)\,\left({\boldsymbol{\ell}}_2\cdot\boldsymbol{r}_{12}\right)}{r^2_{12}}\right]. \nonumber
\end{eqnarray}
The $|{\boldsymbol{\ell}}_1|^2$ and $|{\boldsymbol{\ell}}_2|^2$ terms encode the auto-correlation at a single image-plane point and are consistent with the one-point CF in \refeq{Phi1}, while the cross terms encode the cross-correlation between a pair of image-plane points. Since the cross terms only depend on $\boldsymbol{r}_{12}=\boldsymbol{x}_2 - \boldsymbol{x}_1$, but not on $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ separately, \refeq{lnPhi2} preserves the expected symmetries of the two-point microlensing deflection statistics under spatial translations and rotations on the image plane.
Through a direct calculation of $\langle \alpha_{{\rm ml}, i}(\boldsymbol{x}_1)\,\alpha_{{\rm ml}, j}(\boldsymbol{x}_2) \rangle$, we verify that the second line of \refeq{lnPhi2} can be rewritten as
\begin{align}
- \left[ C^{\rm ml}_{\parallel}(r_{12};\,R_*)\,\frac{\left({\boldsymbol{\ell}}_1\cdot\boldsymbol{r}_{12}\right)\,\left({\boldsymbol{\ell}}_2\cdot\boldsymbol{r}_{12}\right)}{r^2_{12}} + C^{\rm ml}_{\perp}(r_{12};\,R_*)\,\left( {\boldsymbol{\ell}}_1\cdot{\boldsymbol{\ell}}_2 - \frac{\left({\boldsymbol{\ell}}_1\cdot\boldsymbol{r}_{12}\right)\,\left({\boldsymbol{\ell}}_2\cdot\boldsymbol{r}_{12}\right)}{r^2_{12}} \right) \right],
\end{align}
where $C^{\rm ml}_{\parallel}(r;\,R_*)$ and $C^{\rm ml}_{\perp}(r;\,R_*)$ are respectively the two-point correlation functions for random microlensing deflections parallel and perpendicular to the separation vector (as defined via decomposition \refeq{2ptalpalp}):
\begin{eqnarray}
\label{eq:CL}
C^{\rm ml}_\parallel(r;\,R_*) & = & \kappa_\star\,\theta^2_\star\,\left[ \ln\frac{2\,R_*}{r} + \frac12\,\ln\left( 1 + \frac{r^2}{4\,R^2_*} \right) - \ln 2 - \frac12 \right], \\
\label{eq:CT}
C^{\rm ml}_\perp(r;\,R_*) & = & \kappa_\star\,\theta^2_\star\,\left[ \ln\frac{2\,R_*}{r} + \frac12\,\ln\left( 1 + \frac{r^2}{4\,R^2_*} \right) - \ln 2 + \frac12 \right].
\end{eqnarray}
Both functions are sensitive to the ``infrared'' cutoff scale $R_*$ and are logarithmically divergent in the limit $r \rightarrow 0$, so the correlation at zero separation is ill-defined. This is unlike our toy model of Gaussian random deflections with a regular lensing potential power spectrum, as we have studied in \refsec{gaussian}, for which $C_\parallel(0) = C_\perp(0)$ is finite. To our knowledge, \refeq{CL} and \refeq{CT} have not been presented before in the literature.
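Equations \refeq{CL} and \refeq{CT} differ only in the sign of the constant $1/2$, so their difference is the separation-independent constant $\kappa_\star\,\theta^2_\star$. A direct transcription (in units $\kappa_\star = \theta_\star = 1$; function names are ours):

```python
import numpy as np

def _common(r, R_star):
    # the part shared by the parallel and perpendicular correlation functions
    return np.log(2.0 * R_star / r) + 0.5 * np.log1p(r**2 / (4.0 * R_star**2)) - np.log(2.0)

def C_parallel(r, R_star, kappa_star=1.0, theta_star=1.0):
    return kappa_star * theta_star**2 * (_common(r, R_star) - 0.5)

def C_perp(r, R_star, kappa_star=1.0, theta_star=1.0):
    return kappa_star * theta_star**2 * (_common(r, R_star) + 0.5)

# the difference is kappa_star * theta_star^2 at every separation
r = np.array([0.1, 1.0, 10.0, 100.0])
print(C_perp(r, R_star=1500.0) - C_parallel(r, R_star=1500.0))  # all 1.0
```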
Based on \refeq{lnPhi2}, we derive an approximate formula for $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ in a form identical to \refeq{muW2dblinteg}:
\begin{eqnarray}
\label{eq:muW2dblintegML}
\VEV{\mu_{\rm W}(\boldsymbol{y})^2} & = & \int\,\mathrm{d}^2\boldsymbol{x}_1\, \int\,\mathrm{d}^2\boldsymbol{x}_2\,\frac{\exp\left[ -\frac12\,\textbf{u}^T(\boldsymbol{x}_1,\,\boldsymbol{x}_2;\,\boldsymbol{y})\,\left( \textbf{C}_{\rm ml}(\boldsymbol{r}_{12}) + \sigma^2_{\rm W}\,\textbf{I} \right)^{-1}\,\textbf{u}(\boldsymbol{x}_1,\,\boldsymbol{x}_2;\,\boldsymbol{y}) \right]}{(2\pi)^2\,\sqrt{{\rm det}[ \textbf{C}_{\rm ml}(\boldsymbol{r}_{12}) + \sigma^2_{\rm W}\,\textbf{I}]}},
\end{eqnarray}
where we introduce the covariance matrix appropriate for random point microlenses:
\begin{align}
\label{eq:covCml}
\textbf{C}_{\rm ml}(\boldsymbol{r}) = \left[\begin{array}{cccc}
C^{\rm ml}(0) & 0 & C^{\rm ml}_\parallel(r)\,c^2 + C^{\rm ml}_\perp(r)\,s^2 & C^{\rm ml}_\parallel(r)\,c\,s - C^{\rm ml}_\perp(r)\,c\,s \\
0 & C^{\rm ml}(0) & C^{\rm ml}_\parallel(r)\,c\,s - C^{\rm ml}_\perp(r)\,c\,s & C^{\rm ml}_\parallel(r)\,s^2 + C^{\rm ml}_\perp(r)\,c^2 \\
C^{\rm ml}_\parallel(r)\,c^2 + C^{\rm ml}_\perp(r)\,s^2 & C^{\rm ml}_\parallel(r)\,c\,s - C^{\rm ml}_\perp(r)\,c\,s & C^{\rm ml}(0) & 0 \\
C^{\rm ml}_\parallel(r)\,c\,s - C^{\rm ml}_\perp(r)\,c\,s & C^{\rm ml}_\parallel(r)\,s^2 + C^{\rm ml}_\perp(r)\,c^2 & 0 & C^{\rm ml}(0) \\
\end{array}\right].
\end{align}
Here on the diagonal we use
\begin{align}
C^{\rm ml}(0) := \kappa_\star\,\theta^2_\star\,\left( 1 - \gamma_E + \ln\frac{2\,R_*}{\theta^2_\star\,\ell_*} \right),
\end{align}
which depends on the choice for $R_*$ and $\ell_*$. Note that $C^{\rm ml}(0)$ is not given by the $r \rightarrow 0$ limit of $C^{\rm ml}_\parallel(r)$ or $C^{\rm ml}_\perp(r)$.
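For numerical work, the covariance matrix \refeq{covCml} can be assembled from the scalar values $C^{\rm ml}(0)$, $C^{\rm ml}_\parallel(r)$, $C^{\rm ml}_\perp(r)$ and the direction cosines $c$, $s$ of the separation vector. A sketch of such a helper (ours; the scalars are passed in precomputed):

```python
import numpy as np

def cov_ml(r_vec, C0, Cpar, Cperp):
    """4x4 covariance of (alpha(x1), alpha(x2)): C0 on the diagonal blocks,
    with the cross blocks built from the parallel/perpendicular correlations
    and the direction cosines (c, s) of the separation vector r_vec."""
    c, s = np.asarray(r_vec, dtype=float) / np.hypot(*r_vec)
    cross = np.array([
        [Cpar * c**2 + Cperp * s**2, (Cpar - Cperp) * c * s],
        [(Cpar - Cperp) * c * s,     Cpar * s**2 + Cperp * c**2],
    ])
    C = np.zeros((4, 4))
    C[:2, :2] = C[2:, 2:] = C0 * np.eye(2)
    C[:2, 2:] = C[2:, :2] = cross  # cross is symmetric, so both blocks agree
    return C

C = cov_ml([3.0, 4.0], C0=5.0, Cpar=1.2, Cperp=0.8)
# C is symmetric with C0 on the diagonal; for r along the x-axis the cross
# block reduces to diag(Cpar, Cperp)
```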
The covariance matrix $\textbf{C}_{\rm ml}(\boldsymbol{r})$ being positive definite requires $C^{\rm ml}(0) \pm C^{\rm ml}_\parallel(r) > 0$ and $C^{\rm ml}(0) \pm C^{\rm ml}_\perp(r) > 0$. Unlike in the Gaussian random deflection model, this is not strictly guaranteed here, which is a shortcoming of our analytic approximation. However, violations occur only in two regimes, $r \gg R_*$ or $r \ll \theta^2_\star\,\ell_*$, neither of which is expected to contribute importantly to $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$. In practice, we must regularize the logarithmic divergences in order for the integral \refeq{muWy2} to be well behaved. As an example, we present in \refapp{Creg} one regularization scheme which, as we verify numerically, renders the result for $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ insensitive to regularization parameter choices.
Our results are readily applicable to the special case of uniform background convergence and shear. This situation has already been intensively studied, mostly in the context of quasar microlensing. In \refapp{uniformbkg}, we derive additional analytic results for this special case, and remark on comparisons to the literature.
While \refeq{muW2dblintegML} assumes a Gaussian source, \refeq{lnPhi2}, \refeq{CL} and \refeq{CT} are generally valid independent of the specific source profile. For stellar photospheres, the uniform disk would be a more appropriate source model than the Gaussian one. However, the $\boldsymbol{\ell}_1$- and $\boldsymbol{\ell}_2$-integrals cannot be analytically carried out, unlike for a Gaussian source, which is a shortcoming of our results.
\subsection{Numerical experiments}
We validate the semi-analytic results we have derived for $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ using numerical experiments. We set parameters $\theta_\star=1$ and $d=10^{-5}$ as adopted in \reffig{micro_caustics_ml}. Random microlenses with identical $\theta_\star$ are generated within a circular disk that centers on the macro critical curve and has a radius $R_*=1500$. We efficiently compute the summed deflection from a large number of microlenses using the hierarchical tree algorithm~\citep{Wambsganss1999MLTreeAlgorithm}. For $\kappa_\star=0.004,\,0.02,\,0.1$ that we simulate, we need to include for each realization $N \approx 9000,\,45000,\,225000$ point lenses, respectively. We sample the image-plane vicinity of the macro critical curve with a large number of rays. These rays are inversely traced onto the source plane~\citep{KayserRefsdalStabell1986microlensing, Wambsganss1992muPDFML}, which can then be used to calculate $\mu_{\rm W}(\boldsymbol{y})$ for any source profile and central position.
We numerically derive $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ by averaging over many random realizations for the microlenses. To be consistent with the averaging procedure we adopt in the numerical experiment, we always set $R_*=1500$ when evaluating $C^{\rm ml}(0)$, $C^{\rm ml}_\parallel(r)$ and $C^{\rm ml}_\perp(r)$; this is different from setting $R_*$ to be the clustering size of the micro images as proposed in \cite{Katz1986RandomScattering}. As we show in \reffig{num_check_muW}, for a range of parameters our semi-analytic calculations agree with numerical results to high accuracy, provided that the source typically overlaps with multiple micro caustics. The magnification $\mu_{\rm W}(\boldsymbol{y})$ in fact can have a rather skewed, non-Gaussian distribution when the relative fluctuation is large, while the mean and variance are still accurately predicted by our semi-analytic formulae.
An interesting observation can be made from comparing the last three panels of \reffig{num_check_muW}: $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ in fact become insensitive to the source size $\sigma_{\rm W}$ if $\theta_\star\,\kappa^{1/2}_\star \gg \sigma_{\rm W}$, even though the light curves are qualitatively distinct. The numerical results show that as $\sigma_{\rm W}$ decreases, the light curve becomes increasingly non-Gaussian while $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ are preserved. It is therefore reasonable to hypothesize that as long as the source size is much smaller than $\theta_\star\,\kappa^{1/2}_\star$, the results for $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ are also insensitive to the source profile; if the Gaussian source is replaced with a uniform-disk source, our results for $\VEV{\mu_{\rm W}(\boldsymbol{y})}$ and $\VEV{\mu_{\rm W}(\boldsymbol{y})^2}$ should remain correct.
The cases we examine in \reffig{num_check_muW} all correspond to a sufficiently large source size $\sigma_{\rm W}$ that overlaps multiple or at least order-unity micro caustics, regardless of the size of the microlensing broadening $\theta_\star\,\kappa^{1/2}_\star$. As we can see from the ``light curves'', the fluctuations of the magnification factor are Gaussian or weakly non-Gaussian. To further test the range of validity of our approximation, in \reffig{num_check_muW_2} we examine cases where the number density of micro caustics is reduced and the source size $\sigma_{\rm W}$ is made smaller. The fluctuations of the magnification factor become highly non-Gaussian and very dramatic, approaching the familiar behavior of small sources exhibiting intermittent ``flares'' at micro caustic crossings. In these cases, the physical source extent hardly overlaps multiple micro caustics, yet our semi-analytic approximation remains successful. We note that in these cases multiple micro images still arise (albeit few in number), which may explain the success of the approximation. Hence, we find robust numerical evidence that the semi-analytic approximation developed in this work is applicable to computing the mean and variance of the magnification factor over a wide range of parameters.
\begin{figure}[t]
\centering
\includegraphics[scale=0.53]{num_check_muW_6panel.pdf}
\caption{Statistics of the magnification factor $\mu_{\rm W}$ derived from numerical ray-shooting as a function of the distance $y_1$ to the macro fold caustic. We set parameter values $\theta_\star=1$, $d=10^{-5}$ and $R_*=1500$. Results are shown in separate panels for several choices of the microlens surface abundance $\kappa_\star$ and the source size $\sigma_{\rm W}$, all in the regime that the source overlaps multiple micro caustics. Theoretical calculations for the mean magnification $\VEV{\mu_{\rm W}}$ (solid blue) and its standard deviation ${\rm Std}[{\mu_{\rm W}}]$ (dash-dotted blue), all predicted by the semi-analytic model of this work, agree well with the numerical statistics (solid red curve for $\VEV{\mu_{\rm W}}$ and light red band for ${\rm Std}[{\mu_{\rm W}}]$) derived from 200 independent microlensing realizations. Additionally, 10 random realizations are shown as the grey curves. In all panels, the macro caustic is located at $y_1=0$ (vertical dotted line), and the magnification factor for a point source is shown as the dashed black curve.}
\label{fig:num_check_muW}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.53]{num_check_muW_2.pdf}
\caption{Same as \reffig{num_check_muW}, but for a relatively low microlens surface density $\kappa_\star=0.0004$ and reduced source sizes. Such small sources do not overlap multiple micro caustics, except in the very vicinity of the macro caustic where the macro magnification factor is sufficiently high $\mu_{\rm B} \gtrsim 2000$. In spite of the highly non-Gaussian nature of the magnification fluctuations, our semi-analytic predictions for the mean magnification and its variance still agree well with numerical simulations.}
\label{fig:num_check_muW_2}
\end{figure}
\subsection{Microlens mass distribution}
\label{sec:mlmassfun}
So far, results have been derived assuming identical microlens masses. Generalization to an arbitrary distribution of Einstein radii is straightforward if microlenses of different masses thoroughly mix in space. Introduce the differential contribution to the microlens convergence, $\mathrm{d} \kappa_\star/\mathrm{d} \ln\theta^2_\star$. Formally, \refeq{sigma2ml} must be modified to
\begin{align}
\label{eq:sigml2massfun}
\sigma^2_{\rm ml}(R_*,\,\ell_*) = \left( 1 - \gamma_E + \ln\frac{2\,R_*}{\overline{\theta^2_\star}\,\ell_*} \right)\,\left( \int\mathrm{d}\ln\theta^2_\star\,\frac{\mathrm{d} \kappa_\star}{\mathrm{d} \ln\,\theta^2_\star}\,\theta^2_\star \right) - \left( \int\mathrm{d}\ln\theta^2_\star\,\frac{\mathrm{d} \kappa_\star}{\mathrm{d} \ln\,\theta^2_\star}\theta^2_\star\,\ln\frac{\theta^2_\star}{\overline{\theta^2_\star}} \right),
\end{align}
where $\overline{\theta^2_\star}$ is the squared Einstein radius for the mean microlens mass. If we still set $\ell_*=1/\sigma_{\rm eff}$, then in the defining equation for $\sigma_{\rm eff}$, \refeq{sigmaeff}, $\kappa_\star\,\theta^2_\star$ also needs to be modified similarly to account for a distribution of Einstein radii. Following a similar logic, in using \refeq{lnPhi2} we must replace the first line with $-(1/2)\,(|{\boldsymbol{\ell}}_1|^2 + |{\boldsymbol{\ell}}_2|^2)$ multiplying \refeq{sigml2massfun}, and replace $\kappa_\star\,\theta^2_\star$ in the second line with the appropriate averaged quantity $\int\mathrm{d}\ln\theta^2_\star\,(\mathrm{d} \kappa_\star/\mathrm{d} \ln\,\theta^2_\star)\,\theta^2_\star$.
If microlenses do not differ in mass by orders of magnitude, the second integral is expected to be suppressed by the logarithmic factor, while the first integral is proportional to the average {\it squared} mass $\overline{\theta^4_\star}$ (i.e. weighted toward the more massive microlenses). However, the second integral may not be small at all when there is a hierarchy in $\theta^2_\star$. Interestingly, for galactic or intracluster stars a large mass hierarchy does exist between the sub-solar main-sequence (MS) dwarfs and the remnant black holes (BHs). For an old stellar population in which all stars with initial masses $> 1\,{\rm M}_\odot$ have become stellar remnants, and assuming an IMF $\mathrm{d} \phi(M)/\mathrm{d} M \propto M^{-2}$ for $M> 0.5\,{\rm M}_\odot$, about $\sim 0.007$ BH is expected for every MS dwarf~\citep{SatoshiTakada2021BHML}. Using a typical mass of $0.3\,{\rm M}_\odot$ for the MSs and $8\,{\rm M}_\odot$ for the BHs, the BHs can make a contribution to $\overline{\theta^4_\star}$ comparable to, if not larger than, that of the MSs. This implies the importance of BH microlenses in broadening the ``point spread function'' of random deflections despite their low number fraction. A detailed investigation of a realistic mass function will be included in future work.
\section{Discussion}
\label{sec:discuss}
As we have explained, stochastic microlensing has a profound effect on the magnification factor of a lensed source. Now we discuss this more in the context of strong lensing produced by galaxy and galaxy cluster lenses.
The characteristic scale of random microlensing deflection for a microlens mass $M_\star$ corresponds to a source-plane scale
\begin{align}
R_\star = 3\,\kappa^{1/2}_\star\,\theta_\star\,D_S \approx 2500\,{\rm AU}\,\left( \frac{\kappa_\star}{0.3} \right)^{1/2}\,\left( \frac{M_\star}{0.3\,{\rm M}_\odot} \right)^{1/2}\,\left( \frac{D_{LS}\,D_S\,D^{-1}_L}{{\rm Gpc}} \right)^{1/2},
\end{align}
where $D_L$, $D_S$ and $D_{LS}$ are the angular diameter distances to the lens plane, to the source plane, and from the lens plane to the source plane, respectively. We have multiplied by a factor of 3 to account for the Coulomb logarithm (see \refeq{sigma2ml}). In galactic lenses, the surface density of stellar microlenses is high $\kappa_\star \simeq 0.1$--$1$; hence $R_\star$ is smaller than the typical size of a star cluster, comparable to or larger than the sizes of optical quasars $\sim 10^3\,$AU~\citep{BlackBurne2011QSOmicrolensing}, while certainly larger than individual stellar photospheres. In galaxy cluster lenses, the intracluster stars have a substantially lower surface density $\kappa_\star \sim 0.001$--$0.01$, and $R_\star$ can be reduced by up to a factor of ten.
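As a quick numerical check, the fiducial scaling above can be evaluated directly; the sketch below copies the prefactor (2500 AU) and pivot values (\kappa_\star = 0.3, $M_\star = 0.3\,{\rm M}_\odot$, distance combination of 1 Gpc) from the equation, with the distance factor held at its pivot:

```python
# Fiducial scaling of the random-deflection scale R_star in AU, copied
# from the equation above; the pivots are kappa_star = 0.3,
# M_star = 0.3 M_sun, and D_LS * D_S / D_L = 1 Gpc.
def r_star_au(kappa_star, m_star_msun=0.3, dist_gpc=1.0):
    return (2500.0
            * (kappa_star / 0.3) ** 0.5
            * (m_star_msun / 0.3) ** 0.5
            * dist_gpc ** 0.5)

galaxy = r_star_au(0.3)     # galactic lens: the fiducial 2500 AU
cluster = r_star_au(0.003)  # intracluster stars: smaller by a factor of ten
```

The cluster case illustrates the statement in the text: reducing $\kappa_\star$ by two orders of magnitude shrinks $R_\star$ by a factor of ten.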
As long as the source's physical size is smaller than $R_\star$, the highest persistent magnification factor $\VEV{ \mu_{\rm W}(\boldsymbol{y})}$ reached at a macro caustic is on the order of
\begin{align}
\label{eq:muWmeanmax}
\VEV{\mu_{\rm W}}_{\rm max} \simeq \frac{\left(2\,d\,R_\star/D_S \right)^{-1/2}}{1-\kappa_0}\, \approx 400\,(1 - \kappa_0)^{-1}\,\left( \frac{d^{-1}}{1\arcsec} \right)^{1/2}\,\left( \frac{\kappa_\star}{0.3} \right)^{-1/4}\,\left( \frac{M_\star}{0.3\,{\rm M}_\odot} \right)^{-1/2}\,\left( \frac{D_L\,D_S\,D^{-1}_{LS}}{{\rm Gpc}} \right)^{1/2}.
\end{align}
For galaxy lenses, the choices $d^{-1} \sim 1\arcsec$ and $\kappa_\star \sim 0.1$ are reasonable, and hence only sufficiently compact sources can possibly acquire a {\it temporary} magnification much higher than $1000$, at micro caustic crossings. This still requires that $\mu_{\rm W}$ can fluctuate to a value much higher than $\VEV{\mu_{\rm W}(\boldsymbol{y})}$. A conservative constraint is that the source size $\sigma_{\rm W}$ is smaller than $\sim \theta_\star\,\kappa^{-1/2}_\star/\VEV{\mu_{\rm W}}_{\rm max}$, the typical separation of micro caustics on the source plane. This limits the source size to
\begin{align}
\label{eq:sigWDSconstr}
\sigma_{\rm W}\,D_S \lesssim 6\,{\rm AU}\,(1-\kappa_0)\,\left( \frac{d^{-1}}{1\arcsec} \right)^{-1/2}\,\left( \frac{\kappa_\star}{0.1} \right)^{-1/4}\,\left( \frac{M_\star}{0.3\,{\rm M}_\odot} \right)^{3/2}\,\left( \frac{D_L\,D_S\,D^{-1}_{LS}}{{\rm Gpc}} \right)^{-3/2}\,\left( \frac{D_S}{{\rm Gpc}} \right),
\end{align}
for which only individual source stars meet the requirement. However, the fluctuation in $\mu_{\rm W}$ can still be significantly suppressed for small sources if many micro images form ($\sigma_{\rm eff} > \theta_\star\,\kappa^{-1/2}_\star/\VEV{\mu_{\rm W}}_{\rm max}$), because only one pair of micro images is enhanced at each micro caustic crossing.
The best opportunities to have very high temporary magnifications for individual stars are to be found in galaxy cluster lensing with a small $\kappa_\star \sim 0.001$--$0.01$, for which the maximum values (at the tail of the distribution) can range from a few thousand to $10^4$~\citep{Diego2019ExtremeMagnificationUniverse}. The maximal mean magnification \refeq{muWmeanmax} can now reach $\VEV{\mu_{\rm W}}_{\rm max} \sim 3000\,(1-\kappa_0)^{-1}$ for $\kappa_\star=0.01$ and $d^{-1}=10\,\arcsec$, and the constraint on the source size \refeq{sigWDSconstr} is relaxed, which is also helped by the fact that the typical value of $d$ is reduced in cluster lenses. For galaxy lenses with $\kappa_\star \gtrsim 0.1$, we do not expect the magnification to strongly fluctuate and reach significantly higher than $1000$, as the corrugated micro caustic network is too dense. This conclusion is also reached from the argument that the peak magnification at a micro caustic crossing scales as $\kappa^{-3/4}_\star$~\citep{2017ApJ...850...49V}. From the perspective of this work, if we choose $\kappa_0=0.7$, $d^{-1}=1\arcsec$, $\theta_\star=1\,\mu{\rm as}$ and $\kappa_\star=0.3$, our semi-analytic approximation for $\VEV{\mu_{\rm W}}$ and $\VEV{\mu^2_{\rm W}}$ is applicable because $\sigma_{\rm eff} > \theta_\star\,\kappa^{-1/2}_\star/\VEV{\mu_{\rm W}}_{\rm max}$, and a small standard deviation $\sim 10$--$20 \%$ for the fractional magnification fluctuation around $\VEV{\mu_{\rm W}} \sim 1000$--$2000$ is predicted, insensitive to the source size. If we instead set $\kappa_\star=0.005$, for the same macro caustic, we find a larger standard deviation $\gtrsim 30 \%$ for the fractional magnification fluctuation, around a much higher mean $\VEV{\mu_{\rm W}} \sim 2000$--$6000$. This example is shown in \reffig{muW_gal_vs_cluster}.
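The galaxy versus cluster numbers quoted above follow directly from the scaling in \refeq{muWmeanmax}; the sketch below copies the prefactor and pivots from that equation (distances held at the 1 Gpc pivot) and recovers the two regimes numerically:

```python
# Scaling of <mu_W>_max * (1 - kappa_0), copied from the equation above:
# pivots are d^{-1} = 1 arcsec, kappa_star = 0.3, M_star = 0.3 M_sun,
# and D_L D_S / D_LS = 1 Gpc.
def mu_w_max(d_inv_arcsec, kappa_star, m_star_msun=0.3, dist_gpc=1.0):
    return (400.0
            * d_inv_arcsec ** 0.5
            * (kappa_star / 0.3) ** -0.25
            * (m_star_msun / 0.3) ** -0.5
            * dist_gpc ** 0.5)

galaxy = mu_w_max(1.0, 0.3)     # dense micro caustic network: ~400
cluster = mu_w_max(10.0, 0.01)  # sparse intracluster microlenses: ~3000
```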
It is worth noting that sub-galactic DM subhalos acting as substructure lenses tend to strongly perturb a galactic or cluster caustic and create secondary caustics under suitable conditions~\citep{2018ApJ...867...24D, Dai2020ArcSymmetryS1226}. An interesting consequence of these subhalos is then to increase the typical value of $d$ (i.e. weaken the caustic strength) and hence further reduce the allowed maximum mean magnification, in both galaxy and cluster lenses.
Applying the same analysis to a variety of large sources $\gtrsim$ tens of AUs, which include quasars, SNe~\citep{Kelly2015SNRefsdal, Goobar2017iPTF16geu}, or bloated stellar photospheres due to outburst or mass ejection, $\mu_{\rm W}$ is expected to fluctuate only mildly around the mean value in \refeq{muWmeanmax}, so magnifications significantly higher than $\sim 1000$ are prohibited by microlensing, even for galaxy cluster lenses. While multiply-imaged quasars commonly have magnification factors on the order $\mathcal{O}(10)$, quasars magnified by a hundred to a thousand fold are rarely reported. \cite{Fujimoto2020UltraluminousQuasar} suggested a candidate lensed quasar with a total magnification $\sim 450$. However, analysis of the proximity zone does not seem to support this idea~\citep{Davies2020LensedQSOproximity}. In another quadruply-imaged quasar, a ten-fold magnification anomaly was detected for one of the images, requiring a magnification factor as large as $\sim 100$~\citep{Glikman2018arXiv180705434G}. These large magnifications are likely to be consistent with the maximal values permitted by microlensing effects, if the caustic strength is not dramatically reduced by subhalos, i.e. $d^{-1} \gtrsim 0.1\arcsec$. We expect that microlensing effects dominate the truncation in the high magnification tail of the lensed quasars (especially for low-mass quasars and in the case of small Einstein radii), which may have implications for the impact of magnification bias on the luminosity function~\citep{PacucciLoeb2019LensedQSOzgt6, PacucciLoeb2020RealityMirage}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.53]{muW_gal_vs_cluster.pdf}
\caption{The mean and variance of the magnification factor $\mu_{\rm W}$ in the vicinity of a macro caustic with $\kappa_0=0.7$ and $d^{-1}=1\arcsec$, computed using the semi-analytic approximation developed in this work. We contrast a high microlens surface density $\kappa_\star=0.3$ (left), typical of galaxy lensing, with a low microlens surface density $\kappa_\star=0.005$ (right), typical of cluster lensing. Both the mean magnification and the fluctuation around it are significantly suppressed in the former case by the excessively high number of micro caustics. While $\VEV{\mu_{\rm W}}$ and $\VEV{\mu^2_{\rm W}}$ are insensitive to the source size $\sigma_{\rm W}$ as long as $\sigma_{\rm W} \ll \theta_\star\,\kappa^{1/2}_\star$ (true for stellar photospheres), the magnification distribution becomes increasingly non-Gaussian for smaller sources.}
\label{fig:muW_gal_vs_cluster}
\end{figure}
\section{Conclusion}
\label{sec:concl}
Gravitationally lensed sources exhibit stochastic fluxes as a result of random microlensing if compact masses contribute a fraction of the lens surface mass. Through a first-principles statistical treatment of microlensing deflections, we have derived in this work a semi-analytic approximation for the mean and variance of the magnification factor, for a finite Gaussian source and for arbitrary macro lens models. A theoretically appealing feature of the new result is that the UV and IR logarithms are physically determined.
These general results are in the form of single and double image-plane integrals with simple and well-behaved integrands, and hence are practically useful as these can be efficiently evaluated using Monte Carlo integrators. Our analytic derivations suggest that the results are good approximations if the source of an effective size $\sigma_{\rm eff}$ overlaps multiple micro caustics, where the effective size is either the source's physical size $\sigma_{\rm W}$ or the scale of random microlensing deflections $\simeq \theta_\star\,\kappa^{1/2}_\star$, whichever is larger. Using numerical ray-shooting with random microlens realizations, we have demonstrated the accuracy of the approximation, even in cases where the microlensing-induced light curves are highly non-Gaussian.
While we have specifically examined highly magnified sources near a macro fold caustic, for which a small convergence from the microlenses can induce dramatic flux variance, our results are readily applicable to other macro caustics, such as a cusp caustic, or higher-order catastrophes~\citep{Feldbrugge2019PichardLifshitz}. We have pointed out that the maximal magnification that can be realized at a macro fold caustic is not only limited by the source size $\sigma_{\rm W}$, but also by the characteristic scale of microlensing deflections $\simeq \theta_\star\,\kappa^{1/2}_\star$, especially for a source that overlaps multiple micro caustics or has many micro images.
Future work may adopt the formalism here to study the correlation of the magnification factor between two different source-plane positions, i.e. $\VEV{\mu_{\rm w}(\boldsymbol{y}_1)\,\mu_{\rm w}(\boldsymbol{y}_2)}$. For a moving source, this translates to the temporal correlation of microlensing lightcurves~\citep{WyitheTurner2002MicrolensingVariability, LewisIrwin1996MLIITemporalAnalysis, Neindorf2003extragalML}, and will be useful for interpreting cadence observations. Another interesting question regards the third-order moments of the magnification factor, as well as higher-order moments, which characterize the departure from Gaussian statistics. Our formalism may be applicable to the computation of these moments, which can help with the analyses of highly non-Gaussian light curves.
\begin{acknowledgments}
The authors thank Brenda Frye and Jordi Miralda-Escud\'{e} for inspiring discussions, and Jos\'{e} M. Diego for commenting on the draft paper near its completion. This research is supported under the startup grant provided as the Michael M. Garland Chair in Physics at the University of California, Berkeley. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1752814.
\end{acknowledgments}
\software{\texttt{Matplotlib} \citep{Matplotlib}, \texttt{vegas} \citep{vegasEnhanced}.}
\bibliographystyle{aasjournal}
\section{Introduction}
Language-based interactions are an integral part of our everyday life. Reinforcement learning (RL) is a promising technique for developing agents that act in real-life scenarios, such as dialog systems. However, training these agents is difficult due to missing feedback or reward signals. Because of this, text-based adventure games are an ideal benchmark for developing language-based agents \citep{hausknecht2020interactive}. In games, the players receive automatic rewards from the game environment and we can use the final game score for comparing performances of different agents.
Figure \ref{fig:zork3} illustrates the problem setup for this paper. One main difference between text-based adventure games and other RL scenarios is the large and discrete action space. In contrast to other games (e.g., ATARI games), each action is characterized by a sentence or word (e.g., climb tree). Also, the action space is not fixed. For example, if the agent is in front of the house, the action ``open door'' is available, whereas if the agent is in the forest, other actions are possible, e.g. ``climb tree'', but not ``open tree''. Therefore, in addition to the action space, there is the space of valid actions in the current state (see Figure \ref{fig:zork3} for an example of gameplay in the game zork3). This space is much smaller than the space of all actions but can be significantly different in each step. In general, this space of valid actions is unknown to the agent, but a common simplification is to let the agent have the list of valid actions as input. A number of prior works in this domain focused on the above-mentioned challenges \citep{yao-etal-2020-keep, ammanabrolu2020graph,ammanabrolu2020avoid,guo2020interactive,xu2020deep}. Most of those works used deep Q-learning as a learning agent.
Deep Q-learning has several drawbacks. As an off-policy algorithm, it suffers from high variance, and the performance can be unstable \citep{sutton2018reinforcement}. Other online, policy-based learning algorithms are also unsuitable for our scenario since the agent needs to reuse experiences from the training history. Therefore, in this paper, we develop a learning agent based on the soft actor critic (SAC) algorithm \citep{haarnoja2018soft}, which combines both value-based and policy-based learning. Additionally, the maximum entropy technique encourages \textit{stability} and \textit{exploration}. SAC was originally designed for continuous action spaces; however, with slight modifications, it is applicable for discrete action spaces \citep{christodoulou2019soft}. Nevertheless, it has never been applied to text-based adventure games before.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=1.00\textwidth]{example.png}
\end{center}
\caption{This figure shows an example of gameplay for the game zork3. The RL agent receives the valid action space, state information, reward, and score from the Jericho environment. The agent then needs to predict the action and move to the next state.}
\label{fig:zork3}
\end{figure*}
A problem that text-based adventure games have in common with many other RL problems is the sparseness of rewards. Especially at the beginning of training, the agent needs to perform many actions before receiving feedback. In text-based adventure games, this problem is even more severe due to the large and context-dependent action space. To speed up the convergence, it is therefore desirable to have a denser reward function. A popular way to achieve this is through reward shaping. However,
finding a good reward function is difficult and requires significant manual effort, background information, or expert knowledge. A well-known reward shaping technique that circumvents the need for external knowledge is potential-based reward shaping \citep{ng1999policy}, which has strong theoretical guarantees. It enables faster convergence at the beginning of training, which we show for several of the difficult games.
\section{Related work}
\label{sec:related}
\textbf{Text-based adventure games}
\citet{hausknecht2020interactive} built the Jericho Interactive Fiction environment which includes 57 different games that are categorized into \textit{possible}, \textit{difficult}, and \textit{extreme games}. In this work, we focus on the difficult games that were compared by \citet{hausknecht2020interactive} because they tend to have sparser rewards than the possible games. The difficult games still include several games where no method has been able to achieve a score higher than a random agent to date.
In general, for text-based adventure games, there are \textit{choice-based} agents and \textit{parser-based} agents \citep{hausknecht2020interactive}. Parser-based agents \cite{narasimhan-etal-2015-language} generate actions using verb-object combinations, whereas choice-based agents choose an action from a pre-generated list of actions. Other related work focuses not on the RL agent but on action generation \citep{ammanabrolu2020graph,yao-etal-2020-keep,ammanabrolu2020avoid,xu2020deep,guo2020interactive}\footnote{Note that \citet{ammanabrolu2020graph} uses Actor-to-Critic, but they focus on action generation.}. In this work, we follow the line of choice-based agents which is a simplification that allows us to concentrate on the RL part of our method.
We compare our experimental results with the Deep reinforcement relevance network (DRRN) \citep{he2015deep} agent.
DRRN is one of the widely used frameworks for choice-based and parser-based agents. The basic idea behind DRRN is to encode the actions and states into embedding vectors separately, and then use the state and its corresponding action embeddings as inputs to a neural network that approximates the Q-values of all possible actions $Q(s_t,a_t^{i})$. The action at each time step is selected by $a_t = \argmax_{a_t^i}(Q(s_t,a_t^{i}))$.
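The DRRN scoring step described above can be sketched as follows; the encoders and bilinear scorer here are hypothetical toy stand-ins (the actual DRRN uses learned text encoders and a deeper Q-network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned text encoders: fixed random projections
# mapping token ids to a dense embedding.
DIM, VOCAB = 16, 101
state_proj = rng.normal(size=(DIM, VOCAB))
action_proj = rng.normal(size=(DIM, VOCAB))
W = rng.normal(size=(DIM, DIM))  # bilinear scorer in place of the MLP

def embed(token_ids, proj):
    v = np.zeros(proj.shape[0])
    for t in token_ids:
        v += proj[:, t % proj.shape[1]]
    return v

def drrn_select(state_ids, valid_action_ids):
    s = embed(state_ids, state_proj)
    # Q(s_t, a_t^i) for every currently valid action; act greedily.
    qs = np.array([embed(a, action_proj) @ W @ s for a in valid_action_ids])
    return int(np.argmax(qs)), qs

state = [4, 8, 15]                  # token ids of the observation text
actions = [[1, 2], [3], [5, 6, 7]]  # token ids of each valid action
best, qs = drrn_select(state, actions)
```

Note that the list of valid actions, and hence the number of Q-values computed, changes from state to state.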
NAIL \citep{hausknecht2019nail} is an agent that is not choice-based: it is designed to play any unseen text-based game without training or repeated interaction and without receiving a list of valid actions. We compare against both DRRN (and variants) and NAIL in our experiments, but only DRRN has the exact same experimental setup and handicaps as our agent. NAIL serves as a baseline for the scores achievable without any simplifications of gameplay.
Another baseline is by \citet{yao2021reading}, who investigate whether the RL agent can make decisions without any semantic understanding. They evaluate three variants based on DRRN: a) only location information is available as the observation; b) observations and actions are hashed instead of using the raw text; c) vector representations based on an inverse dynamics loss are used. Their results show that the RL agent can achieve high scores in some cases, even without language semantics.
In concurrent work, building on \citet{yao2021reading}, \citet{gu2022revisiting} point out that the RL agent can achieve higher scores when combining semantic and non-semantic representations.
\citet{tuyls2022multi} propose a new framework that includes two stages: an exploitation phase and an exploration phase. The exploitation policy uses imitation learning to select the action based on previous trajectories. The goals of the second, exploration policy are to explore actions in order to find rewards and reach new states. In their framework, relevant actions are manually added to the valid action space.
\textbf{Soft-actor-critic} \cite{haarnoja2018soft} combines the advantages of value-based and policy-based learning. The drawback of value-based learning such as deep Q-learning is that the performance can be unstable because the policy can have high variance \cite{sutton2018reinforcement}. The SAC algorithm includes three elements. The first is separate actor and critic neural networks; the second is offline learning, which can reuse past experience via a replay buffer, as in deep Q-learning; and the third is maximizing the entropy of the policy to encourage exploration. The optimal policy aims to find the highest expected rewards while maximizing the entropy term $\mathcal{H}(\pi(.|s_t))$:
\begin{equation*}
\begin{aligned}
\pi^{\star} = \underset{\pi}{\arg\max} \sum_{t=0}^T \mathbb{E}_{(s_t,a_t) \sim \rho_\pi}
[& r(s_t,a_t) +\\& \alpha \mathcal{H}(\pi(.|s_t))]
\end{aligned}
\end{equation*}
where $s_t$ and $a_t$ denote the state and action at time step $t$ and $\rho$ denotes the state-action marginals of the trajectory distribution induced by a policy $\pi$. The temperature parameter $\alpha$ controls the degree of exploration.
The original SAC is evaluated on several continuous control benchmarks. Since we are dealing with discrete text data, we base our method on the framework for discrete action spaces by \citet{christodoulou2019soft}. The key difference between continuous and discrete action spaces is the computation of the action distribution. For discrete action spaces, it is necessary to compute the probability of each action in the action space. The actor policy is changed from $\pi_{\phi}(a_t|s_t)$, a distribution over the continuous action space, to $\pi_{\phi}(s_t)$, a discrete distribution over the discrete action space.
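Concretely, the discrete actor outputs a full categorical distribution over the current valid actions; the minimal sketch below uses assumed toy logits in place of a learned policy network:

```python
import numpy as np

def discrete_policy(logits):
    # pi_phi(s): a categorical distribution over the currently valid
    # actions, replacing the continuous density pi_phi(a_t | s_t).
    z = logits - logits.max()            # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    entropy = -(p * np.log(p)).sum()     # the H(pi(.|s_t)) term SAC maximizes
    return p, entropy

# The valid action set changes every step, so the distribution's support
# (and size) changes with it.
p3, h3 = discrete_policy(np.array([1.0, 0.5, -0.2]))
p5, h5 = discrete_policy(np.zeros(5))    # uniform over 5 valid actions
```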
\textbf{Potential-based reward shaping}
Introduced in the seminal work of \citet{ng1999policy}, potential-based reward shaping (PBRS) is one of the most well-studied reward design techniques. The shaped reward function is obtained by modifying the reward using a state-dependent potential function. The technique preserves a strong invariance property: a policy $\pi$ is optimal under shaped reward \emph{iff} it is optimal under extrinsic reward.
Furthermore, when using the optimal value function $V^*$ under the original reward function as the potential function, the shaped rewards achieve the maximum possible informativeness.
Among the large number of prior studies on PBRS, \citet{wiewiora2003principled} propose the \textit{state-action potential advice} method, which can not only estimate whether a state is good or bad, but can also advise on actions. \citet{grzes2010online} evaluate the idea of using the online learned value function as a potential function. Moreover, \citet{harutyunyan2015expressing} introduce an arbitrary reward function by learning a secondary Q-function. They consider the difference between the sampled next state-action value and the expected next state-action value as dynamic advice. Building on \citet{harutyunyan2015expressing}, \citet{brys2015policy} develop policy transfer to learn the policy from a source task. \citet{devidze2021explicable} propose a reward design framework, EXPRD, which interprets two key criteria of a reward function: \textit{informativeness} and \textit{sparseness}.
\textbf{Reward in NLP based RL agent} One of the challenges of using RL to solve natural language processing (NLP) tasks is the difficulty of designing reward functions. There could be more than one factor that affects the rewards, such as semantic understanding and grammatical correctness. \citet{li2016deep} define reward considering three factors: \textit{ease of answering}, \textit{information flow}, and \textit{semantic coherence} for dialogue generation tasks. Reward shaping techniques have also been used in other NLP-based RL tasks, for example, \citet{lin2018multi} use knowledge-based reward shaping for a multi-hop knowledge graph reasoning task.
The core difference to our model is that we do not pre-define any function or knowledge as a reward signal, instead shaping the rewards automatically.
\section{Problem setting and background}
\label{sec:background}
\textbf{The experiment agent}
An environment is defined as a Markov Decision Process (MDP) $M := (\mathcal{S},\mathcal{A},T,\gamma,R)$, where the sets of states and actions are denoted by $\mathcal{S}$ and $\mathcal{A}$, respectively. $T: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$ captures the state transition dynamics, i.e., $T(s' \mid s,a)$ denotes the probability of landing in state $s'$ after taking action $a$ in state $s$.
The reward $R$ and terminal signal $d$ come from the game environment, and $\gamma$ is the discount factor. The stochastic policy $\pi: \mathcal{S} \rightarrow \Delta (\mathcal{A})$ is a mapping from a state to a probability distribution over actions, i.e., $\sum_a \pi(a|s) = 1$ parameterized by a neural network.
Notice that the valid action space size is variable at each time step. Following \citet{hausknecht2020interactive}, we differentiate between the game state $s$ and the observation $o$, where the observation refers only to the text that is output by the game, whereas the state corresponds to the locations of players, items, monsters, etc. Our agent has only knowledge of the observations and not of the complete game state.
\subsection{SAC for discrete action spaces}
The SAC algorithm has a separate predictor (actor) and critic. In the following, we first describe the two crucial equations for updating the critic and then the actor policy update.
In the critic part, following the original SAC definition \cite{haarnoja2018soft} and adaptation to the discrete setting by \citet{christodoulou2019soft}, the targets for the Q-functions are computed by
\begin{equation}
\begin{split}
y(r,s',d) = & r +\gamma (1-d)\\
& \left(\min_{i=1,2}\left(Q_{\hat\theta_i}(s')\right)-\alpha \log\left(\pi_\phi(s'_t)\right)\right),
\end{split}
\label{eq:target_q}
\end{equation}
where in our scenario, the target Q-values and the policy distribution range over the set of valid actions $A_{valid}(s')$ \cite{hausknecht2020interactive}. As was proposed by \citet{haarnoja2018soft}, we use two Q-functions $Q_{\theta_i}$ and two Q target functions $Q_{\hat\theta_i}$, and $i\in\{1,2\}$ is the index of the Q-neural networks. $\gamma$ is a discount factor and $d\in \{0,1\}$ is 1 if the terminal state has been reached.
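A minimal numeric sketch of the target in Equation \ref{eq:target_q} for the discrete case follows; the Q-values and action distribution are assumed toy inputs, and we write the expectation over the valid actions of $s'$ explicitly, which is how the min/log terms are read in the discrete setting:

```python
import numpy as np

def sac_discrete_target(r, d, pi_next, q1_next, q2_next,
                        gamma=0.99, alpha=0.1):
    # Pessimistic minimum of the two target critics over the valid actions
    # of s', then the expectation under the actor's distribution pi(s').
    q_min = np.minimum(q1_next, q2_next)
    soft_value = pi_next @ (q_min - alpha * np.log(pi_next))
    return r + gamma * (1.0 - d) * soft_value

pi = np.array([0.7, 0.2, 0.1])   # actor distribution over valid actions of s'
q1 = np.array([1.0, 0.5, 0.2])   # target critic 1
q2 = np.array([0.9, 0.6, 0.3])   # target critic 2
y = sac_discrete_target(r=1.0, d=0, pi_next=pi, q1_next=q1, q2_next=q2)
y_term = sac_discrete_target(r=1.0, d=1, pi_next=pi, q1_next=q1, q2_next=q2)
# At a terminal transition (d = 1) the bootstrap term vanishes: y_term == r.
```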
\begin{algorithm*}
\caption{SAC with potential-based reward shaping}\label{alg:sac}
\begin{algorithmic}[1]
\Require policy $\pi$; Q-functions $\theta_1,\theta_2, \hat{\theta_1},\hat{\theta_2}$; replay buffer D; roll-out
\For{step $=1\hdots$ max step}
\newline
\Comment{Update the \textit{critic}:}
\If{Reward Shaping is True}
\State{$V_{step}(s) \leftarrow\pi (s)^T\left[Q_{\hat\theta_i}(s)-\alpha \log(\pi(s))\right]$ (Equation \ref{eq:soft_state_value})}\Comment{Compute soft state value}
\For{$i=1\hdots N$}:
\State $V_{step}(s) \leftarrow (1-\alpha)V_{step}(s) + \alpha (r + \gamma' V_{step}(s'))$ (Equation \ref{eq:rsv})\Comment{Update value function}
\EndFor
\State $F_{step}(s,a,s') \leftarrow \gamma V_{step}(s') - V_{step}(s)$ (Equation \ref{eq:f}) \Comment{Compute shaping function}
\State $\hat{R}(s,a) \leftarrow R(s,a) + F_{step}(s,a,s')$ (Equation \ref{eq:reward}) \Comment{Compute reshaped reward}
\EndIf
\State{Update Q-function (Equation \ref{eq:q_approx})}
\newline
\Comment{Update the \textit{actor}:}
\State{Update policy (Equation \ref{eq:actor_policy})}
\EndFor
\end{algorithmic}
\end{algorithm*}
The critic optimization is the same as in the original SAC algorithm, learning to minimize the distance between the target soft Q-function and the Q approximation with stochastic gradients:
\begin{equation}
\begin{aligned}
&\nabla J_Q(\theta) =\\
&\nabla \mathbb{E}_{a\sim \pi(s),s\sim D}\frac{1}{B} \sum_{i=1,2} \left(Q_{\theta_i}(s)-y(r,s',d)\right)^2,
\label{eq:q_approx}
\end{aligned}
\end{equation}
where $D$ is the replay buffer, and $B$ is the size of mini-batch sampled from $D$. If using double Q-functions, the agent should learn the loss functions of both Q-neural networks with parameters $\theta_1$ and $\theta_2$.
As proposed by \citet{christodoulou2019soft} the update of the actor policy is given by:
\begin{equation}
\begin{aligned}
& \nabla J_{\pi}(\phi) = \\&\nabla\mathbb{E}_{s\sim D} \frac{1}{B}\left[\pi_t(s)^T\left[\alpha \log\pi_{\phi}(s) - \min_{i=1,2}\left(Q_{\theta_i}(s)\right)\right]\right].
\end{aligned}
\label{eq:actor_policy}
\end{equation}
where $Q_{\theta_i}(s)$ denotes the action values estimated by the Q-function (the critic), and $\log\pi_{\phi}(s)$ and $\pi_t(s)$ are the expected entropy and the probability estimate given by the actor policy.
As shown in Algorithm \ref{alg:sac} in lines 10 and 11, Equations \ref{eq:q_approx} and \ref{eq:actor_policy} constitute the basic SAC algorithm without reward shaping, where critic and actor are updated in turn. In the next section, we will explain the reward shaping in lines 2--9 of the algorithm.
\section{Method}
\label{sec:method}
The original SAC target is given in Equation \ref{eq:target_q}. In the following, we describe how we modify it through reward shaping. The whole procedure is given by Algorithm \ref{alg:sac}. We start with the reward shaping in lines 2--9. The shaping reward function $F: S\times A\times S\rightarrow \mathbb{R}$ \citep{ng1999policy} is given by
\begin{equation}
\label{eq:PBRS_general_form}
F(s,a,s') = \gamma \Phi(s') - \Phi(s),
\end{equation}
where $s'$ is the target state and $s$ refers to the source state. As mentioned in Section \ref{sec:related}, when using the optimal value function $V^*$ under the original reward as the potential function, i.e., $\Phi(s)=V^*(s)$, the shaped rewards achieve the maximum possible informativeness.
\textbf{Dynamic reward shaping}
Since we do not have access to the optimal value function $V^*$, we use the idea of dynamic reward shaping. In particular, \citet{grzes2010online} generalized the form in Equation \ref{eq:PBRS_general_form}
to dynamic potentials, and
empirically showed an advantage in helping the agent.
The idea is that the RL agent uses the current approximation of the value function as a potential function. More precisely, the shaped function $F_l$ at learning time step $l$ can be represented as follows (Algorithm \ref{alg:sac}, line 7):
\begin{equation}
F_l(s,a,s') = \gamma V_l(s') - V_l(s),
\label{eq:f}
\end{equation}
where the potential $\Phi(s)$ from Equation \ref{eq:PBRS_general_form} is given by $V_l(s)$, and the subscript $l$ denotes the learning time step.
Hence, the new shaped reward $\hat{R}: S\times A\rightarrow \mathbb{R}$ at learning time step $l$ is defined as
\begin{equation}
\hat{R}(s, a) := R(s,a) + F_l(s,a,s'),
\label{eq:reward}
\end{equation}
where $R(s, a)$ is the original extrinsic reward from the environment (Algorithm \ref{alg:sac}, line 8).
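The shaping in Equations \ref{eq:f} and \ref{eq:reward} preserves the optimal policy because the potential terms telescope along any trajectory; a small sketch with assumed toy potentials illustrates this:

```python
def shaped_reward(r, v_s, v_s_next, gamma=0.99):
    # R_hat(s, a) = R(s, a) + F(s, a, s') with F = gamma * V(s') - V(s)
    return r + gamma * v_s_next - v_s

gamma = 0.99
rewards = [0.0, 0.0, 1.0]          # sparse extrinsic rewards
values = [0.5, 0.8, 1.2, 0.0]      # potentials V at s_0 .. s_3 (terminal: 0)

g_orig = sum(gamma ** t * r for t, r in enumerate(rewards))
g_shaped = sum(
    gamma ** t * shaped_reward(r, values[t], values[t + 1], gamma)
    for t, r in enumerate(rewards)
)
# The potential terms telescope: the shaped return differs from the
# original one only by gamma^T V(s_T) - V(s_0), a policy-independent
# constant, so the optimal policy is preserved.
```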
To shape reward signals, we use the soft state value function instead of the plain value function. This allows us to use reward shaping without a separate neural network for the reward function. Experimentally, we found this to perform similarly to using a plain value function approximated with a neural network (see Section \ref{sec:results_reward}). \citet{haarnoja2018soft} also mention that it is in principle not necessary to add a separate approximator for the state value, although they find it to stabilize results in practice. More precisely, we directly utilize the original form of the soft value function as given in the SAC algorithm for discrete action spaces \citep{christodoulou2019soft}:
\begin{equation}
V(s) = \pi (s)^T \left[ Q_{\hat\theta_i}(s) - \alpha \log(\pi(s)) \right],
\label{eq:soft_state_value}
\end{equation}
where $Q$ denotes the target Q-functions.
The soft value has two terms, the expected Q-value at the given state and the entropy regularized probability of all possible actions.
The Q-function aims to update the policy to maximize the expected reward. The maximum entropy policy brings the agent into the states with less knowledge while still satisfying the given constraints \citep{ziebart2010modeling}.
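As a sketch (not the paper's implementation; function and variable names are ours, and the target Q-values $Q_{\hat\theta_i}$ are replaced by a plain list), the discrete-action soft value in Equation \ref{eq:soft_state_value} is a single expectation over the policy:

```python
import math

def soft_state_value(pi, q, alpha):
    """V(s) = sum_a pi(a|s) * (Q(s,a) - alpha * log pi(a|s)).

    First term: expected Q-value under the policy.
    Second term: alpha times the policy entropy at state s.
    """
    return sum(p * (qa - alpha * math.log(p)) for p, qa in zip(pi, q))

pi = [0.5, 0.5]   # uniform policy over two actions
q = [1.0, 1.0]    # equal (target) Q-values
v = soft_state_value(pi, q, alpha=1.0)
# expected Q-value (1.0) plus the policy entropy (log 2)
print(v)  # ≈ 1.6931
```

The entropy term is what pushes the agent toward states where the policy is still uncertain, as described above.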
Using Equation \ref{eq:soft_state_value}, the value function $V(s)$ is updated in a manner inspired by the batch RL idea \citep{sutton2018reinforcement, lange2012batch} and the N-steps Q iteration algorithm \citep{ernst2005approximate}. Instead of using each sample once for the TD update, we reuse it $N$ times to estimate the TD value (see Algorithm \ref{alg:sac}, lines 4--6).
\begin{equation}
V(s) = (1-\alpha)V(s) + \alpha (r + \gamma' V(s'))
\label{eq:rsv}
\end{equation}
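A minimal sketch of this repeated TD update (Equation \ref{eq:rsv}); the sample, step size, and value table below are hypothetical:

```python
def repeated_td_update(v, s, s_next, r, alpha, gamma, n):
    """Repeat V(s) <- (1 - alpha) * V(s) + alpha * (r + gamma * V(s'))
    n times on the same sample, as in Algorithm 1, lines 4-6
    (batch-RL-style sample reuse)."""
    for _ in range(n):
        v[s] = (1 - alpha) * v[s] + alpha * (r + gamma * v[s_next])
    return v

V = {0: 0.0, 1: 1.0}
repeated_td_update(V, s=0, s_next=1, r=0.1, alpha=0.1, gamma=0.99, n=32)
# After 32 repetitions V(0) has moved most of the way to the
# TD target r + gamma * V(1) = 1.09.
```

With step size $\alpha$ and $N$ repetitions, $V(s)$ approaches the TD target geometrically, at rate $(1-\alpha)^N$.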
Now, we can rewrite the target Equation \ref{eq:target_q} by incorporating Equation \ref{eq:f}:
\begin{equation}
\begin{aligned}
&y(r,s',d) = \\&[r+(\gamma V(s') - V(s))] + \gamma (1-d) V(s')
\end{aligned}
\end{equation}
This concludes the description of our reward shaping algorithm which relies on the soft value function and utilizes an N-step update.
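Putting the pieces together, the shaped target can be sketched as follows (the helper name `shaped_target` is ours, not from Algorithm \ref{alg:sac}):

```python
def shaped_target(r, v_s, v_s_next, gamma, done):
    """y(r, s', d) = [r + (gamma * V(s') - V(s))] + gamma * (1 - d) * V(s').

    The bracketed part is the shaped reward; the last term is the usual
    bootstrapped value, zeroed out at episode termination (d = 1).
    """
    shaped_r = r + (gamma * v_s_next - v_s)
    return shaped_r + gamma * (1.0 - float(done)) * v_s_next

# Hypothetical transition: reward 1.0, V(s) = 0.5, V(s') = 0.8.
y = shaped_target(1.0, 0.5, 0.8, gamma=0.99, done=False)
```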
\begin{table*}[!htbp]
\centering
\begin{adjustbox}{width=\textwidth,center}
\begin{tabular}{l|rrrr|rrr|rr}
& & \multicolumn{3}{c|}{\cite{hausknecht2020interactive}}&\multicolumn{3}{c|}{\cite{yao2021reading}}&\multicolumn{2}{|c}{Ours}\\%
\textbf{Game} &Max& \textbf{RAND} & \textbf{DRRN}& \textbf{NAIL}&\textbf{MIN-OB}&\textbf{HASH}&\textbf{INV-DY}&\textbf{SAC}& \textbf{SAC+RS} \\
advent & 350 & 36 &36&36&-&- &- &36.00$\pm$0.00 & 36.00$\pm$0.00 \\
balances & 51 & 10 &10 &10&10&10 &10 &10.00$\pm$0.00 &9.98$\pm$0.01 \\
deephome & 300 & 1 &1 &13.3&8.5 &\textbf{58} &57.6 & 28.91 $\pm$0.474 &22.52 $\pm$0.389 \\
gold & 100 & 0 &4.1&3&- & -&- &5.98$\pm$1.16 &\textbf{7.74 $\pm$ 0.79}\\
jewel &90 & 0 &1.6 &1.6& - &- &- &5.89 $\pm$1.64 & \textbf{7.70$\pm$1.99}\\
karn & 170 & 0 & \textbf{2.1}&1.2&- &- & - & 0.01$\pm$0.01 &0.83$\pm$1.45\\
ludicorp &150 & 13.2 &13.8 &8.4& 11.6&14.8 &13.5 &14.89$\pm$0.40 &\textbf{15.73 $\pm$0.09}\\
yomomma & 35 & 0 &\textbf{0.4} &0&- &- &- &0.16 $\pm$0.02 &0.13 $\pm$0.06 \\
zenon & 20 & 0 & 0&0& - &- &- &0.00$\pm$0.00 &0.00$\pm$0.00\\
zork1 & 350 & 0 & 32.6&10.3& 29 &35.5 &\textbf{43.1} & 30.74 $\pm$5.57 & 32.72 $\pm$7.33 \\
zork3 & 7 & 0.2 & 0.5&1.8&0 &0.4 &0.4 & 2.69$\pm$0.05& \textbf{2.72$\pm$0.04}\\
\end{tabular}
\end{adjustbox}
\caption{\label{tab:result}
The average score of the \textbf{last} 100 episodes is shown for three repetitions of each game with standard deviation. The maximum number of training steps is 50,000 for our method.}
\end{table*}
\section{Experimental results}
\label{sec:results}
\subsection{Datasets}
The experiments are run on the Jericho environment
\citep{hausknecht2020interactive}\footnote{https://github.com/microsoft/jericho}, which categorizes the games into three groups: possible games, difficult games, and extreme games. In the following experiments, we focused on the difficult games, which have sparser rewards and require a higher level of long-term decision-making strategies than the possible games.
\subsection{Experimental settings}
We built a choice-based agent. The agent predicts one of the possible actions from the action space distribution based on the observation of the current time step and the previous action from the last time step. The agent receives the valid action space identified by the world-change detection handicap from the Jericho game environments. Using the same handicaps as the DRRN method, we also use the Load, Save handicap to receive information on inventory and location without changing the game state. As shown in Table \ref{tab:result}, we ran the main experiments in two variants. In Figure \ref{fig:results} we compare two additional variants:
a) SAC: This is the basic RL agent using the SAC algorithm.
b) SAC+RS: Here we use the reward shaping technique in combination with SAC. This is our main algorithm as given in Algorithm \ref{alg:sac}.
c) SAC+1S\_RS: This variant is the same as SAC+RS except that $N=1$ instead of $N=32$. This means reward shaping is done without the N-step repetition of the TD update.
d) SAC+NN\_RS: In this variant we replace line 3 of Algorithm \ref{alg:sac} with a neural network that estimates the plain value function.
In Appendix \ref{sac:appendix}, we show the details of the architectures and parameters for the neural networks and the RL agent.
\textbf{Input representation} Following \citet{hausknecht2020interactive}, the state $s$ includes three elements: (observation, inventory, look) at the current time step. The elements of the state and the action are tokenized by a SentencePiece \citep{kudo2018sentencepiece} model, and separate GRUs are then used to learn the embeddings. The embedding size is 128. During training, the agent randomly samples the data from the replay buffer.
\begin{figure*}[ht!]
\centering
\subfigure[balances]{\includegraphics[width=0.3\textwidth]{balances.png}}
\subfigure[deephome]{\includegraphics[width=0.3\textwidth]{deephome.png}}
\subfigure[gold]{\includegraphics[width=0.3\textwidth]{gold.png}}
\subfigure[jewel]{\includegraphics[width=0.3\textwidth]{jewel.png}}
\subfigure[karn]{\includegraphics[width=0.3\textwidth]{karn.png}}
\subfigure[ludicorp]{\includegraphics[width=0.3\textwidth]{ludicorp.png}}
\subfigure[yomomma]{\includegraphics[width=0.3\textwidth]{yomomma.png}}
\subfigure[zork1]{\includegraphics[width=0.3\textwidth]{zork1.png}}
\subfigure[zork3]{\includegraphics[width=0.3\textwidth]{zork3.png}}
\caption{This figure shows the development of the game scores over training episodes; shaded areas correspond to standard deviations. The SAC agent is compared with and without reward shaping. Reward shaping leads to faster convergence at the beginning of training for b) deephome, d) jewel, f) ludicorp and i) zork3. The end score is higher with reward shaping for four of the nine games. Shown are only the games where the agents learn something (advent and zenon are excluded).}
\label{fig:results_sac}
\end{figure*}
\subsection{Results}
We compare our results with the previous choice-based agents using deep Q-learning in Section \ref{sec:results_q}. The effect of reward shaping and variants thereof is discussed in Section \ref{sec:results_reward}.
\subsubsection{Comparison to Q-learning methods}
\label{sec:results_q}
Table \ref{tab:result} shows the game score of the SAC-based learning agent and SAC with reward shaping (SAC+RS). In comparison with DRRN and \citet{yao2021reading}, which are deep Q-learning-based RL agents, the SAC agent achieves notably higher scores on four games. Three games reach the same scores, and zork1 achieves results similar to DRRN (the closest baseline) while using only half of the training steps. Only the scores of yomomma and karn are lower than those of the deep Q-learning agents. As for the baselines, we compute the average of the last 100 episodes for each run of a game. Each game is run three times, and the mean and standard deviation are reported. For each run of a game, eight environments are run in parallel and the average score is computed. The results of the baselines are taken directly from the respective papers. The training progress is shown in Figure \ref{fig:results_sac}, where the game score is plotted over training episodes. We can see that the method converges well except for two games, yomomma and karn, where the agent is not able to learn (see Section \ref{sec:limitations} for a possible explanation). Overall, the results indicate that SAC is well suited to solve text-based games.
\subsubsection{Reward shaping}
\label{sec:results_reward}
Figure~\ref{fig:results_sac} shows the game score over training episodes. We can see that shaping the original rewards (SAC+RS) leads to faster convergence than without reward shaping (SAC). As mentioned in Section~\ref{sec:method}, the soft state value can achieve a performance similar to the plain state value while using fewer parameters. To verify this point experimentally, we run an additional variant of our method following \citet{grzes2010online} that reshapes the reward using the state value. The state values are approximated by a multi-layer neural network whose input is the state. The target value is estimated by $G_t = r_t + \gamma V(S_{t+1})$, and the neural network is updated by minimizing the MSE between target and prediction at each time step: $L = \mathrm{MSE}(G_t, V(S_t))$.
We show the results in Appendix~\ref{appendix:all results}. As expected, the neural network-based value approximation (SAC+NN\_RS) can reach similar performance as directly using the soft state value from the critic policy. We sometimes even get better performance using the soft value function.
We also empirically investigate the effect of the N-step update described in Section~\ref{sec:method} and Algorithm~\ref{alg:sac}, lines 4--6. In Figure~\ref{fig:results} in Appendix~\ref{appendix:all results} we compare the update with $N=32$ steps (SAC+RS) to the update with only one step (SAC+1S\_RS). As the figure shows, both variants converge to a similar final score, but the one-step update exhibits much higher variance. In the case of zork3, its convergence is also slower. Therefore, we conclude that the N-step update is beneficial for stabilizing training.
Overall, the final score of SAC with reward shaping is higher or the same for seven of the eleven games as shown in Table~\ref{tab:result}. Only for one game, deephome, does SAC+RS reduce the score. However, as shown in Appendix~\ref{appendix:all results}, in this case the SAC+1S\_RS and SAC+NN\_RS methods are better than SAC only.
We also observe that in many cases the standard deviation is lower when reward shaping is used than when it is not used.
\section{Limitations and future work}
\label{sec:limitations}
As shown in Table \ref{tab:result}, the SAC-based agent improves the state of the art on several games, but not all of them. We manually checked the agent-predicted trajectories and the games' walkthroughs and found two main limitations.
The first limitation is the incomplete valid action space.
An important part of the game balances is understanding and using different spells. However, those spells, such as \textit{bozbar tortoise} and \textit{caskly chewed scroll}, are not included in the valid action space. As shown in Figure \ref{fig:limitation}, the agent can only repeat meaningless actions and is unable to reach higher scores, as the required actions, shown in red, are not included in the valid action space.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{balance_limt.png}
\caption{Game balances: The walkthrough and RL agent trajectory are shown. The relevant actions, shown in red, are not in the valid action space.}
\label{fig:limitation}
\end{figure}
One solution to overcome the imperfection of the valid action space handicap is to manually add relevant actions from game walkthroughs \citep{tuyls2022multi}.
In future work, we plan to adapt our method to play without the valid action handicap. We will apply the SAC agent and potential-based reward shaping technique to the action space generation task. Action generation is a critical challenge of playing text-based games which requires a high level of language understanding.
The second limitation is that the agent performs poorly when receiving a large valid action space. Compared to ludicorp or jewel, the game karn often receives action spaces with many possible actions at a state.
We built a toy example to inspect the critic during training, as shown in Figure \ref{fig:karn_limitation}. The player's inventory includes a jacket, hat, and scarf. The agent gets stuck in the same location and repeats the same actions: 'put jacket down', 'take off jacket', 'take off hat', 'take card'. The action distributions and reshaped rewards change only slightly. We assume the agent tends to try uncertain actions, requiring more steps to find valuable ones.
One possible solution, inspired by the masked language model and \citet{huang2020closer}, is masking the irrelevant actions to reduce the size of the action space.
In future work, more attention should be paid to the exploration-exploitation trade-off and to reward techniques that speed up the learning process.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{karn_limt.png}
\caption{Game karn: Most actions in the valid action space neither lead the agent to a new location nor significantly change the reward signals. (In the right column, after choosing the action 'put jacket down', labeled in yellow, the agent is still in the same location. In the left column, when the agent moves 'west', labeled in red, it reaches a new location.)}
\label{fig:karn_limitation}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We propose a SAC-based RL agent to play text-based adventure games. The results show that the SAC-based agent achieves significantly higher scores than deep Q-learning for some difficult games while using only half the number of training steps. Furthermore, we use a reward-shaping technique to deal with sparse rewards. This allows us to learn intermediate rewards, which speeds up learning at the beginning of training for some games and leads to higher scores than without reward shaping for many games. Our analysis reveals two key limitations involving the valid action space that will be addressed in future work.
\section{Introduction}
The quantum Liouville equation (also referred to as the von Neumann equation) describes the evolution of a (possibly infinite) statistical ensemble of quantum particles. The main object of interest is the \textit{density operator}, denoted by $u$ in the sequel, which is in general a trace class, self-adjoint and nonnegative operator on some Hilbert space (here $L^2(\mathbb R^d)$, where $d\geq 1$ is dimension). With physical constants set to one, the free, linear Liouville equation reads
\begin{equation} \label{liou}
\left\{
\begin{array}{l}
i \partial_t u=\big[-\Delta,u \big]+\varrho,\\
u(t=0)=u_0,
\end{array}
\right.
\end{equation}
where $u_0$ and $\varrho$ are given operators, $[\cdot, \cdot]$ denotes commutator between operators and $\Delta$ is the usual Laplacian on $\mathbb R^d$. Roughly speaking, \fref{liou} can be seen as a large (or infinite) system of coupled Schr\"odinger equations. It is then an interesting problem to understand how the dispersive and regularity properties of the solutions to the Schr\"odinger equation translate to the solutions to the Liouville equation. Indeed, setting $\varrho=0$ for the sake of concreteness, the solution $u$ reads
$$
u(t)=\sum_{j \in \mathbb N} \lambda_j \ket{e^{i t\Delta}u_j} \bra{e^{it \Delta} u_j} \qquad \textrm{if} \qquad u_0 =\sum_{j \in \mathbb N} \lambda_j \ket{u_j} \bra{u_j},
$$
where the $\lambda_j$ are here positive numbers, the $u_j$ are orthonormal in $L^2(\mathbb R^d)$, and we used the Dirac notation for the rank one projector $\varphi \mapsto u_j (u_j,\varphi) \equiv \ket{u_j} \langle u_j \vert\varphi \rangle$. The mathematical properties of the operator $e^{it \Delta}$ are well known, and the question is to understand how they translate to the collection defined by $u$. This has been an extensive subject of research, and some of the most notable results are the following:
\begin{itemize}
\item[-] \textit{The Lieb-Thirring inequalities \cite{LT,LP}.} They are the density operator counterpart of Gagliardo-Nirenberg-Sobolev inequalities: for $u_0$ as before, if we define formally the local density $n_{u_0}$ and the local kinetic energy $\mathcal E_{u_0}$ by
$$
n_{u_0}:=\sum_{j \in \mathbb N} \lambda_j |u_j|^2, \qquad \mathcal E_{u_0}:=\sum_{j \in \mathbb N} \lambda_j |\nabla u_j|^2,
$$
an example of such inequalities is
$$
\|n_{u_0}\|_{L^q(\mathbb R^d)} \leq C \left(\sum_{j \in \mathbb N} \lambda_j^p \right)^{\theta/p} \|\mathcal E_{u_0}\|^{1-\theta}_{L^1(\mathbb R^d)}
$$
where $p \in [1,\infty]$ and
$$
\theta=\frac{2p}{(d+2)p-d}, \qquad q=\frac{(d+2)p-d}{dp-(d-2)}.
$$
\item[-] \textit{The Strichartz inequalities \cite{LewStri,FrankSabin}.} They are generalizations to orthonormal systems of the classical Strichartz estimates for the Schr\"odinger equation, and read, see \cite{FrankSabin}, Theorem 8,
$$
\left\|\sum_{j \in \mathbb N} \lambda_j |e^{i t\Delta}u_j|^2 \right\|_{L^p_tL^q_x(\mathbb R\times \mathbb R^d)} \leq C \left( \sum_{j \in \mathbb N} \lambda_j^{\frac{2q}{q+1}}\right)^{\frac{q+1}{2q}},$$
with $p,q,d \geq 1$,
$$
\frac{2}{p}+\frac{d}{q}=d, \qquad 1 \leq q <1+\frac{2}{d-1}.
$$
\end{itemize}
These estimates on orthonormal systems have some remarkable properties: as mentioned in \cite{LewStri,FrankSabin}, if $\lambda_j=0$ for $j>N$, then they behave much better in terms of $N$ than the estimates obtained by applying the triangle inequality and the scalar estimates for the Schr\"odinger group $e^{i t\Delta}$. This can be seen as a consequence of the orthogonality of the $u_j$, while that latter property is not used with the triangle inequality. Another interesting feature is the following: standard ways to define rigorously the local density $n_{u_0}$ are either to assume that $u_0$ is trace class (i.e. $\sum_{j \in \mathbb N} \lambda_j < \infty$), or to assume that $u_0$ enjoys some smoothness, e.g.
\begin{equation} \label{regu0}\textrm{Tr} \left((\mathbb{I}-\Delta)^{\beta/2} (u_0)^2 (\mathbb{I}-\Delta)^{\beta/2}\right)^p < \infty
\end{equation}
for appropriate $p \geq 1$ and $\beta\geq 0$, see e.g. \cite{sabin1} (above $\textrm{Tr}$ denotes the operator trace). The Strichartz estimates offer much weaker conditions that justify the definition of the local density: the local density of the solution $u$ to the Liouville equation for $\varrho=0$ is defined in the space $L^p_tL^q_x$ as soon as $u_0$ lies in Schatten space $\mathcal J_{2q/(q+1)}$. This can be seen as a combined effect of the orthogonality of the $u_j$ and the dispersive effects of the Schr\"odinger operator. The situation is similar when $\varrho \neq 0$.\\
Our motivation in this work is to pursue the analysis of the properties of the solutions to the Liouville equation in the light of those of the Schr\"odinger equation. We are interested in the \textit{local smoothing effect} discovered by Constantin and Saut \cite{ConstSaut}, Sjolin \cite{sjolin} and Vega \cite{vega}: if $v_0 \in L^2(\mathbb R^d)$, then $v=e^{it\Delta}v_0$ admits locally spatial fractional derivatives of order $1/2$. The real interest in this result is the gain in differentiability, and not the gain in integrability obtained in turn by using Sobolev embeddings since the Strichartz estimates provide better exponents. When $u_0$ and $\varrho$ are in some Schatten spaces of order $p\geq 1$, we show that the local density $n_u$ admits locally spatial fractional derivatives of order $\beta$, where $\beta$ depends on the dimension $d$ and $p$. In particular, $\beta$ decreases as $p$ increases. Note that here again, the local density $n_u$ can be defined without a trace class assumption or the regularity \fref{regu0} on the data: this is a combined effect of the orthogonality and of the local gain in derivatives; the latter provides local estimates similar to \fref{regu0} on the solution $u$, which in turn allow one to define $n_u$. We also obtain results on the trace of the operator $\chi (\mathbb{I}-\Delta)^{\beta/2} u (\mathbb{I}-\Delta)^{\beta/2} \chi$ ($\chi$ is a smooth cut-off) in terms of the Schatten norms of $u_0$ and $\varrho$.
To conclude this introduction, we would like to mention that Lieb-Thirring inequalities, Strichartz inequalities and local smoothing estimates are important in the analysis of non-linear quantum systems, see e.g. \cite{sabin1, sabin2}.
The paper is organized as follows: we introduce some notation in section \ref{secresults}, recall one of the main theorems of Constantin and Saut, and state our main results in Theorem \ref{th1} and Corollary \ref{cor1}. We present as well in Lemma \ref{pr1} a generalization of the results of Constantin and Saut that is the main ingredient in the proof of Theorem \ref{th1}. Sections \ref{proofth}, \ref{prooflem} and \ref{proofcor} are devoted to the proofs of Theorem \ref{th1}, Lemma \ref{pr1} and Corollary \ref{cor1}.\\
\noindent \textbf{Acknowledgment.} This work is supported by NSF CAREER grant DMS-1452349. The author would like to thank the anonymous referee for the suggested corrections.
\section{Main results} \label{secresults}
We start with some notation.
\paragraph{Notation.} We denote by $\mathcal F_x$ the Fourier transform with respect to the $x \in \mathbb R^d$ variable, and by $\mathcal F$ the Fourier transform with respect to $(t,x)$, with the convention
$$
\mathcal F_x \varphi(\xi)=\int_{\mathbb R^d} e^{-i x \cdot \xi} \, \varphi(x) dx.
$$ For $\beta \in \mathbb R$, let $\Lambda_\beta:=(\mathbb{I}-\Delta)^{\beta/2}$, and let $D^\beta:=(-\Delta)^{\beta/2}$ be the fractional Laplacian, that we will only consider for $\beta \in (0,1/2]$. The operator $\Lambda_\beta$ and $D^\beta$ are defined in the Fourier space as pseudo-differential operators with symbols $(1+|\xi|^2)^{\beta/2}$ and $|\xi|^{\beta}$. We systematically use $r'$ for the conjugate exponent of $r \in [1,\infty]$ defined as usual by $\frac{1}{r}+\frac{1}{r'}=1$.
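As an illustration of these pseudo-differential definitions (not part of the paper's argument), $\Lambda_\beta$ and $D^\beta$ can be applied numerically on a periodic grid via the FFT; the grid, test function, and function name below are our own choices:

```python
import numpy as np

def apply_symbol(f, L, beta, homogeneous=False):
    """Apply Lambda_beta = (I - Delta)^(beta/2), or D^beta = (-Delta)^(beta/2)
    when homogeneous=True, to samples of f on a uniform grid of [-L, L),
    by multiplying by the symbol (1 + |xi|^2)^(beta/2) or |xi|^beta in
    Fourier space."""
    n = f.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)
    symbol = np.abs(xi) ** beta if homogeneous else (1 + xi ** 2) ** (beta / 2)
    return np.real(np.fft.ifft(symbol * np.fft.fft(f)))

x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
g = np.exp(-x ** 2)
# Sanity check: for beta = 2, Lambda_2 g = (I - Delta)g = (3 - 4x^2) e^{-x^2},
# which the spectral computation reproduces to near machine precision.
lam2 = apply_symbol(g, 20.0, 2.0)
```

The Gaussian is so well localized that the periodic FFT approximates the whole-line operators with spectral accuracy.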
For $\beta \in \mathbb R$ and $p\in[1,\infty]$, we will use the potential space $H^{\beta,p}(\mathbb R^d)$ defined by $H^{\beta,p}(\mathbb R^d)=\{ \varphi \in \mathcal S'(\mathbb R^d), \Lambda_\beta \varphi \in L^p(\mathbb R^d) \}$ with norm $\| \cdot \|_{H^{\beta,p}(\mathbb R^d)}=\| \Lambda_\beta \cdot \|_{L^p(\mathbb R^d)}$, where $\mathcal S'(\mathbb R^d)$ is the space of tempered distributions on $\mathbb R^d$. We denote as well by $\mathcal S(\mathbb R^d)$ the Schwartz space and by $C_0^\infty(\mathbb R^d)$ the space of infinitely differentiable functions on $\mathbb R^d$ with compact support. The space of bounded operators on $L^2(\mathbb R^d)$ is denoted by $\mathcal L(L^2(\mathbb R^d))$ and its associated norm is $\| \cdot \|$. The Schatten spaces on $L^2(\mathbb R^d)$ are denoted by $\mathcal J_p$, $p \in [1,\infty]$ with the convention $\mathcal J_\infty=\mathcal L(L^2(\mathbb R^d))$.
We decompose the solution $u$ to \fref{liou} into homogeneous and non-homogeneous parts as $u=\Upsilon+\Gamma$ with
\begin{equation} \label{defop}
\left\{
\begin{array}{ll} \displaystyle \Upsilon(t):=e^{i t \Delta} \,u_0\, e^{-i t\Delta}, &\qquad \displaystyle \Gamma(t):=\int_0^t e^{i (t-s) \Delta} \varrho(s) e^{-i (t-s) \Delta} ds\\
\displaystyle \sigma_\beta(t) := \chi_t\, \Lambda_\beta\, \Upsilon(t)\, \Lambda_\beta\, \chi_t, &\qquad \displaystyle \gamma_\beta(t) := \chi_t\, \Lambda_\beta\, \Gamma(t)\, \Lambda_\beta\, \chi_t,
\end{array}
\right.
\end{equation}
where $\chi_t \equiv \chi(t,x) \in C_0^\infty(\mathbb R^{d+1})$, $u_0 \in \mathcal J_p$, $\varrho \in L^1(\mathbb R,\mathcal J_p)$, and $u_0,\varrho$ are self-adjoint operators. We do not assume here that $u_0$ and $\varrho$ have a particular sign. It is then a classical matter that $u$ is the unique mild solution to \fref{liou} and belongs to $C^0(\mathbb R,\mathcal J_p)$. Assuming that $\Upsilon, \Gamma \in C^0(\mathbb R,\mathcal J_p)$ admit the decompositions
$$
\Upsilon=\sum_{j \in \mathbb N} \lambda_j \ket{\psi_j} \bra{\psi_j}, \qquad \Gamma=\sum_{j \in \mathbb N} \mu_j \ket{\phi_j} \bra{\phi_j},
$$
where $\lambda_j$ and $\mu_j$ are real and $(\psi_i,\psi_j)_{L^2(\mathbb R^d)}=(\phi_i,\phi_j)_{L^2(\mathbb R^d)}=\delta_{ij}$, we define formally the local densities of $\sigma_\beta$ and $\gamma_\beta$ by
\begin{equation} \label{defn}
n_{\sigma_\beta}= \sum_{j \in \mathbb N} \lambda_j | \chi_t \Lambda_\beta \psi_j |^2, \qquad n_{\gamma_\beta}= \sum_{j \in \mathbb N} \mu_j | \chi_t \Lambda_\beta \phi_j|^2.
\end{equation}
For a trace class operator $u$, the local density $n_u$ is defined rigorously by duality by
$$
\int_{\mathbb R^d} n_u(x) \varphi(x) dx=\textrm{Tr} \big( u \varphi \big), \qquad \forall \varphi \in L^\infty(\mathbb R^d),
$$
where the multiplication operator by $\varphi$ and the function $\varphi$ are identified (this will be systematically done in the sequel).
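The duality definition can be illustrated numerically for a finite-rank operator on a grid (an illustrative sketch; the discretization and the names are ours):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 400, endpoint=False)
dx = x[1] - x[0]

# Two states, normalized on the grid and orthogonal by parity.
psi = [np.exp(-x ** 2), x * np.exp(-x ** 2)]
psi = [p / np.sqrt(np.sum(np.abs(p) ** 2) * dx) for p in psi]
lam = [0.7, 0.3]

# Kernel of u = sum_j lam_j |psi_j><psi_j| and its local density n_u.
K = sum(l * np.outer(p, p.conj()) for l, p in zip(lam, psi))
n_u = sum(l * np.abs(p) ** 2 for l, p in zip(lam, psi))

# Duality check: integral of n_u * phi equals Tr(u M_phi), where M_phi is
# the multiplication operator by phi (it scales the columns of K by phi).
phi = np.cos(x)
lhs = np.sum(n_u * phi) * dx
rhs = np.trace(K * phi[None, :]) * dx
```

Taking $\varphi \equiv 1$ recovers $\operatorname{Tr} u = \sum_j \lambda_j$, i.e. the density integrates to the trace.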
Before stating our main results, we recall the important estimates of Constantin and Saut \cite{ConstSaut}:
\begin{theorem} \label{thCS} Let $q,r' \in [1,2]$, $\beta>0$ such that
$$
\left\{
\begin{array}{l} \displaystyle
\beta<\frac{1}{r}-d\left(\frac{1}{q}-\frac{1}{r}\right), \quad \textrm{if}\qquad r\geq 2, \quad r>q\\
\beta \leq \frac{1}{2}\quad \textrm{if}\qquad r=q=2.
\end{array}
\right.
$$
Then, for $\chi_t \equiv \chi \in C_0^\infty(\mathbb R^{d+1})$, $\varphi_t \equiv \varphi \in \mathcal S(\mathbb R^{d+1})$, the following estimate holds
$$
\left(\int_{\mathbb R^d} (1+|\xi|^2)^{\beta q /2} |\mathcal F(\chi_t \varphi_t)(-|\xi|^2,\xi)|^q d\xi \right)^{1/q} \leq C_\chi \| \varphi_t\|_{L^{r'}(\mathbb R^{d+1})}.
$$
The constant $C_\chi$ above depends on $\chi$ but not on $\varphi$.
\end{theorem}
\begin{remark} \label{remdual} We will also use the dual form of Theorem \ref{thCS}. Under the same hypotheses as above, we have
$$
\left(\int_{\mathbb R^{d+1}} |\chi(t,x)|^r |\Lambda_\beta e^{it \Delta}\varphi(x) |^r dtdx \right)^{1/r} \leq C_\chi \|\varphi\|_{L^{q}(\mathbb R^{d})},
$$
which shows the local gain of a derivative of order $\beta$.
\end{remark}
\begin{remark}\label{remchi}
It is known that the condition $\chi \in C^\infty_0(\mathbb R^{d+1})$ is not necessary in Theorem \ref{thCS}. In particular, when $q=r=2$, the function $\chi(t,x)=(1+|x|)^{-s}$, for $s>1/2$ is sufficient, see e.g. \cite{vega}. A close look at the proof of Theorem \ref{thCS} shows that in the general case where $r\geq 2$, $r>q$, the previous function is also sufficient. We will use that observation in the proof of our main result.
\end{remark}
Our first result concerns the local densities $n_{\sigma_\beta}$ and $n_{\gamma_\beta}$, that we show are properly defined in some $L^q(\mathbb R^{d+1})$ space provided the data are in appropriate $\mathcal J_p$ spaces. We also show that the operators $\sigma_\beta$ and $\gamma_\beta$ are trace class.
\begin{theorem}\label{th1}
Let $n_{\sigma_\beta}$ and $n_{\gamma_\beta}$ be the local densities defined formally in \fref{defn} with $u_0 \in \mathcal J_p$ and $\varrho \in L^1(\mathbb R,\mathcal J_p)$ for $p \in [1,\frac{2d}{2d-1})$. Then, $n_{\sigma_\beta}$ and $n_{\gamma_\beta}$ belong to $L^{r/2}(\mathbb R^{d+1})$, and we have the estimates
\begin{equation} \label{estnth}
\|n_{\sigma_\beta}\|_{L^{r/2}(\mathbb R^{d+1})} \leq C_\chi \|u_0\|_{\mathcal J_p}, \qquad \|n_{\gamma_\beta}\|_{L^{r/2}(\mathbb R^{d+1})} \leq C_\chi \|\varrho\|_{L^1(\mathbb R,\mathcal J_p)},
\end{equation}
for $r \in [2,\infty)$ and $\beta >0$ such that
$$
\left\{
\begin{array}{l} \displaystyle
\displaystyle \beta < \frac{1+d}{r}-\frac{d}{2}, \qquad \textrm{if} \qquad p=1 \qquad \textrm{and} \qquad r>2\\[2mm]
\displaystyle \beta \leq \frac{1}{2}, \qquad \textrm{if} \qquad p=1 \qquad \textrm{and} \qquad r=2\\[2mm]
\displaystyle \beta < \frac{1+d}{r}-\frac{d}{2}-\frac{d}{p'}, \qquad \textrm{if} \qquad p>1.
\end{array}
\right.
$$
Moreover, under the same conditions as above with $r=2$, we have the following estimates on the operators $\sigma_\beta$ and $\gamma_\beta$ defined in \fref{defop}:
\begin{equation} \label{esttraceth}
\|\sigma_\beta\|_{L^1(\mathbb R,\mathcal J_1)} \leq C_\chi \|u_0\|_{\mathcal J_p}, \qquad \|\gamma_\beta\|_{L^1(\mathbb R,\mathcal J_1)} \leq C_\chi \|\varrho\|_{L^1(\mathbb R,\mathcal J_p)}.
\end{equation}
The constant $C_\chi$ in \fref{estnth}-\fref{esttraceth} depends on $\chi$ but not on $u_0$ and $\varrho$.
\end{theorem}
Note that for $\beta$ to be positive, the following conditions need to be satisfied:
$$
r\leq 2+\frac{2}{d}, \qquad p <\frac{2d}{2d-1}, \qquad \frac{1}{p'}<\frac{1+d}{rd}-\frac{1}{2}.
$$
Besides, as can be expected, the gain in differentiability is maximal when $p=1$, that is when the strongest assumption is made on the data. Whether Theorem \ref{th1} is optimal or not is an open question. We use next Theorem \ref{th1} to obtain a local gain of fractional derivatives for the local densities $n_\Upsilon$ and $n_\Gamma$ of the operators $\Upsilon$ and $\Gamma$.
\begin{corollary} \label{cor1} Let $u_0 \in \mathcal J_p$ and $\varrho \in L^1(\mathbb R, \mathcal J_p)$ for $p \in [1,\frac{2d}{2d-1})$ and $\beta > 0$ such that
$$
\left\{
\begin{array}{l} \displaystyle
\beta \leq \frac{1}{2}, \qquad \textrm{if} \qquad p=1\\[2mm]
\displaystyle \beta < \frac{1}{2}-\frac{d}{p'}, \qquad \textrm{if} \qquad p>1.
\end{array}
\right.
$$ Then, for any $T>0$, any $v \in C_0^\infty(\mathbb R^d)$ and $\chi=|v|^2$, $D^\beta(\chi n_{\Upsilon})$ and $D^\beta(\chi n_{\Gamma})$ belong to $L^1(-T,T,L^{\frac{d}{d-\beta}}(\mathbb R^d))$, with the following estimates,
$$
\| D^\beta(\chi n_{\Upsilon}) \|_{L^1(-T,T,L^{\frac{d}{d-\beta}}(\mathbb R^d))} \leq C_T \|u_0\|_{\mathcal J_p}, \quad \| D^\beta(\chi n_{\Gamma})\|_{L^1(-T,T,L^{\frac{d}{d-\beta}}(\mathbb R^d))} \leq C_T \|\varrho\|_{L^1(\mathbb R,\mathcal J_p)}.
$$
The constant $C_T$ above depends on $T$ and $v$ but not on $u_0$ and $\varrho$.
\end{corollary}
\begin{remark} \label{intime} In Corollary \ref{cor1}, we only have an $L^1$ integrability in time since Theorem \ref{th1} was only exploited in the case $r=2$. The reason for this is explained in the proof of the corollary.
\end{remark}
The core of the proof of Theorem \ref{th1} is the following extension of Theorem \ref{thCS}. As will be clear further, the weight $|x|^\alpha+1$ is introduced in order to handle the case when $u_0$ and $\varrho$ are not trace class, but only in some $\mathcal J_p$ with $p>1$.
\begin{lemma} \label{pr1} Let $\phi_t \equiv \phi(t,x) \in \mathcal S(\mathbb R^{d+1})$ and $\chi_t \equiv \chi(t,x) \in C_0^\infty(\mathbb R^{d+1})$. Then, we have the following estimate:
$$
\left\| (|x|^\alpha+1) \Lambda_{\alpha+\beta} \int_{\mathbb R } dt \, e^{-i t \Delta} \left(\chi_t \phi_t\right) \right\|_{L^2(\mathbb R^d)} \leq C_\chi \| \phi_t \|_{L^{r'}(\mathbb R^{d+1})},
$$
for $\alpha \in (0,\frac{1}{4}]$, $\beta \geq 0$, and $ r \in [2,\infty)$, such that
\begin{equation} \label{conslem}
\left\{\begin{array}{l}
\displaystyle 2\alpha+\beta < \frac{(d+1)}{r}-\frac{d}{2}, \qquad \textrm{if} \qquad r\in (2,\infty),\\
\displaystyle 2\alpha+\beta \leq \frac{1}{2}, \qquad \textrm{if} \qquad r =2.
\end{array}
\right.
\end{equation}
Moreover, the constant $C_\chi$ above depends on $\chi$ but not on $\phi$.
\end{lemma}
Above, the bound $\alpha \leq 1/4$ is arbitrary; the value $1/4$ is chosen because the largest value of the r.h.s. of \fref{conslem} is $1/2$. In order to see the link with Theorem \ref{thCS}, note that
$$
\mathcal F(\chi_t \phi_t)(-|\xi|^2,\xi) = \mathcal F_x \left(\int_{\mathbb R } dt\, e^{-i t \Delta} \left(\chi_t \phi_t\right) \right)(\xi).
$$
The result of Lemma \ref{pr1} is fairly intuitive: for the sake of simplicity, replace $|x|^\alpha$ by $x_1$ (the first coordinate of $x$); then, up to a commutator between $x_1$ and $\Lambda_{\alpha+\beta}$, we use the classical fact that
$$
\Lambda_{\alpha+\beta} \, x_1\, e^{-it\Delta} = \Lambda_{\alpha+\beta}\, e^{-it\Delta} \, x_1+ 2 i t\Lambda_{\alpha+\beta}\, \partial_{x_1} \, e^{-it\Delta}.
$$
The weights $x_1$ and $t$ in both terms of the r.h.s. are handled by the fact that $\chi_t$ has compact support, and the second term exhibits a loss of a derivative of order one (of order $\alpha$ in the Lemma, which explains \fref{conslem}). Of course, what makes Lemma \ref{pr1} nontrivial is the fact that $\alpha$ is not an integer, which then necessitates the use of fractional derivatives, for which there are no simple product and chain rules. In particular, fractional derivatives are not local, which prevents a direct use of Theorem \ref{thCS}. We conclude this section with the following remark.
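Before doing so, let us verify the commutation identity above. Assuming the normalization $\mathcal F_x f(\xi)=\int_{\mathbb R^d} e^{-ix\cdot \xi} f(x)\,dx$, so that $\mathcal F_x(x_1 f)=i\partial_{\xi_1}\mathcal F_x f$ and $\mathcal F_x(\partial_{x_1} g)=i\xi_1 \mathcal F_x g$, we compute on the Fourier side
$$
\mathcal F_x\big(x_1\, e^{-it\Delta}\varphi\big)(\xi)=i\partial_{\xi_1}\big(e^{it|\xi|^2}\widehat \varphi(\xi)\big)=e^{it|\xi|^2}\, i\partial_{\xi_1}\widehat\varphi(\xi)-2t\xi_1\, e^{it|\xi|^2}\widehat\varphi(\xi),
$$
and the last term is precisely $\mathcal F_x\big(2it\,\partial_{x_1} e^{-it\Delta}\varphi\big)(\xi)$; applying $\Lambda_{\alpha+\beta}$ then yields the stated relation.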
\begin{remark} \label{dual2} The dual version of Lemma \ref{pr1} reads, under the same assumptions on the parameters,
$$
\left(\int_{\mathbb R^{d+1}} |\chi(t,x)|^r | \Lambda_{\alpha+\beta} e^{it \Delta}\left((|\cdot|^\alpha+1)\varphi\right)(x) |^r dtdx \right)^{1/r} \leq C_\chi \|\varphi\|_{L^{2}(\mathbb R^{d})}.
$$
\end{remark}
\section{Proof of Theorem \ref{th1}} \label{proofth}
The proof is an application of Theorem \ref{thCS} and Lemma \ref{pr1}. We focus on the term $\gamma_\beta$ and explain at the end of the proof the straightforward adaptation to $\sigma_\beta$. We remark first that we can limit ourselves to nonnegative operators $\varrho$ in $\mathcal J_p$, $p<\infty$. We can indeed decompose $\varrho$ as $\varrho=\varrho_+-\varrho_-$, $\varrho_{\pm} \geq 0$, $\varrho_+ \varrho_-=\varrho_- \varrho_+=0$, and the linearity of the trace yields
$$
|n_{\gamma_\beta}|=|n_{\gamma_{+,\beta}}-n_{\gamma_{-,\beta}}| \leq n_{\gamma_{+,\beta}}+n_{\gamma_{-,\beta}}.
$$
It suffices then to apply the estimates that we will derive for $n_{\gamma_{\pm,\beta}}$, together with
$$
\| \varrho_{\pm} \|_{\mathcal J_p} \leq \|\varrho\|_{\mathcal J_p}, \qquad p \in [1,\infty).
$$
Also, in order to justify some formal computations, we consider ``smooth'' operators $\varrho$ satisfying the following conditions: $\varrho$ is finite rank and
\begin{equation} \label{condrho}
\int_\mathbb R dt\, \textrm{Tr} \Big((1+|x|^k) (-\Delta)^n \,\varrho(t)\, (-\Delta)^n (1+|x|^k)\Big) <\infty, \quad \forall k,n \in \mathbb N.
\end{equation}
The final estimates are then obtained by density since such operators are dense in $L^1(\mathbb R,\mathcal J_p)$, $p<\infty$: on the one hand, finite rank operators are dense in $\mathcal J_p$, $p<\infty$, and on the other hand, we have, for any smooth cut-off $\chi_\eta$ with $\chi_\eta(x) \to 1$ pointwise as $\eta \to 0$,
$$
\chi_\eta e^{\eta \Delta}\, \varrho \, e^{\eta \Delta} \chi_\eta \to \varrho \qquad \textrm{in} \quad L^1(\mathbb R,\mathcal J_p) \qquad \textrm{as} \qquad \eta \to 0,
$$
whenever $\varrho \in L^1(\mathbb R,\mathcal J_p)$, $p\in [1,\infty]$. The regularity of $\varrho$ is then propagated to the operator $\Gamma(t)=\int_0^t e^{i (t-s) \Delta} \varrho(s) e^{-i (t-s) \Delta} ds$, see e.g. \cite{cazenave}, section 2.5, and as a consequence $\Gamma$ verifies
$$
\sup_{t \in \mathbb R} \textrm{Tr} \Big((1+|x|^k) (-\Delta)^n \,\Gamma(t)\, (-\Delta)^n (1+|x|^k)\Big) <\infty, \quad \forall k,n \in \mathbb N.
$$
In particular, if $(\mu_j,\phi_j)_{j\in \mathbb N}$ is the spectral decomposition of $\Gamma$, this shows that
$$
\sup_{t \in \mathbb R} \sum_{j \in \mathbb N} \mu_j(t) \left\|(1+|x|^k) (-\Delta)^n \,\phi_j(t)\right\|^2_{L^2(\mathbb R^d)} < \infty, \quad \forall k,n \in \mathbb N,
$$
and therefore $(\mu_j(t))^{1/2}\phi_j(t) \in \mathcal S(\mathbb R^d)$ for all $t \in \mathbb R$.\\
The proof starts with the following lemma:
\begin{lemma} \label{expdens} Let $\beta \in [0,1/2]$. For any $\varphi\in C_0^\infty(\mathbb R^{d+1})$, the following relation holds:
$$
\int_{\mathbb R} \int_{\mathbb R^d} dt dx\, n_{\gamma_\beta}(t,x) \varphi(t,x) =\int_\mathbb R ds \textrm{Tr} \, \big( e^{-i s \Delta} \varrho(s) e^{is \Delta} F_\beta(s) \big),
$$
where
$$
F_\beta(s)=\int_{|s|}^\infty dt \, \Lambda_{\beta}e^{-i t \Delta }f_t e^{i t \Delta } \Lambda_{\beta} \in C^0(\mathbb R,\mathcal L(L^2(\mathbb R^d))), \qquad f_t=\chi_t \varphi_t \chi_t.
$$
\end{lemma}
\begin{proof}
We begin with the following equalities, supposing that the support in $t$ of $\chi_t$ is included in $[-T,T]$:
\begin{eqnarray*}
\int_{\mathbb R} \int_{\mathbb R^d} dt dx \, n_{\gamma_\beta(t)}(t,x) \varphi(t,x) &=& \int_{-T}^T dt \, \textrm{Tr} \big( \gamma_\beta(t) \varphi_t \big)\\
&=&\int_{-T}^T dt \,\textrm{Tr} \left( \chi_t \Lambda_{\beta} \int_0^t ds\, e^{i(t- s) \Delta} \varrho(s) e^{-i (t-s) \Delta }\Lambda_{\beta} \chi_t \, \varphi_t \right)\\
&=&\int_{-T}^T \int_0^t ds dt \, \textrm{Tr} \Big( \chi_t \Lambda_{\beta}e^{i(t- s) \Delta} \varrho(s)e^{-i (t-s) \Delta }\Lambda_{\beta}\chi_t \, \varphi_t \Big)\\[2mm]
&:=&I.
\end{eqnarray*}
The last equality is justified by
\begin{align} \label{justi}
&\int_0^t ds \,\left|\textrm{Tr} \left( \chi_t \Lambda_{\beta}e^{i(t- s) \Delta} \varrho(s)e^{-i (t-s) \Delta }\Lambda_{\beta}\chi_t \, \varphi_t \right)\right| \\ \nonumber
&\hspace{2cm}=\int_0^t ds \,\left|\textrm{Tr} \left( \chi_t e^{i(t- s) \Delta} \Lambda_{\beta} \varrho(s)\Lambda_{\beta} e^{-i (t-s) \Delta }\chi_t \, \varphi_t \right)\right| \\ \nonumber
&\hspace{2cm} \leq \|\chi_t\|^2_{L^\infty(\mathbb R^{d+1})} \|\varphi_t \|_{L^\infty(\mathbb R^{d+1})} \int_\mathbb R ds\, \| \Lambda_{\beta} \varrho(s) \Lambda_{\beta} \|_{\mathcal J_1} < \infty.
\end{align}
Above, we used the fact that $\Lambda_\beta$ and $e^{it \Delta}$ commute. Since the operator $\Lambda_\beta$ is not bounded, we need an additional step to justify the use of the cyclicity of the trace and to obtain that
$$
I=\int_{-T}^T ds\,\textrm{Tr} \left( e^{-i s \Delta} \varrho(s)e^{i s \Delta } \Lambda_{\beta} \int_{|s|}^\infty dt e^{-i t \Delta} f_t e^{i t \Delta} \Lambda_{\beta} \right),
$$
which would end the proof. We therefore regularize $I$ as
$$
I_\eta=\int_{-T}^T \int_0^tds dt\, \textrm{Tr} \left( \chi_t \Lambda^\eta \Lambda_{\beta} e^{i(t- s) \Delta} \varrho(s)e^{-i (t-s) \Delta }\Lambda_{\beta} \Lambda^\eta \chi_t \, \varphi_t \right),
$$
where $\Lambda^\eta=(\mathbb{I}-\eta \Delta)^{-1}$. Similar estimates as in \fref{justi} show that $I_\eta \to I$ as $\eta \to 0$. Using the semigroup property of $e^{it \Delta}$, and the fact that $\Lambda^\eta \Lambda_\beta$ is bounded, we can write
\begin{eqnarray*}
I_\eta &=&\int_{-T}^T \int_0^t ds dt\, \textrm{Tr} \left(e^{-i s \Delta} \varrho(s) e^{is \Delta } e^{-i t \Delta}\Lambda_{\beta} \Lambda^\eta f_t \Lambda^\eta \Lambda_{\beta} e^{i t \Delta}\right) \\
&=&\int_{-T}^T ds \int_{|s|}^\infty dt \,\textrm{Tr} \left(e^{-i s \Delta} \varrho(s) e^{is \Delta } \Lambda_{\beta} \Lambda^\eta e^{-i t \Delta} f_t e^{i t \Delta} \Lambda^\eta \Lambda_{\beta} \right)\\
&=&\int_{-T}^T ds\, \textrm{Tr} \big( e^{-i s \Delta} \varrho(s) e^{is \Delta} F^\eta_\beta(s) \big),
\end{eqnarray*}
where, for all $s\in \mathbb R$,
$$
F^\eta_\beta(s):=\int_{|s|}^\infty dt \Lambda_{\beta}\Lambda^\eta e^{-i t \Delta }f_t e^{i t \Delta } \Lambda^\eta \Lambda_{\beta}\in \mathcal L(L^2(\mathbb R^d)).
$$
Above, the use of the Fubini theorem and the inversion of the integral and the trace are justified by similar estimates as \fref{justi}. We now show that $F^\eta_\beta \to F_\beta$ in $C^0([-T,T],\mathcal L(L^2(\mathbb R^d)))$ as $\eta \to 0$. We remark first that $F_\beta(s) \in \mathcal L(L^2(\mathbb R^d))$ for all $s \in [-T,T]$ and $\beta \in [0,1/2]$ thanks to Theorem \ref{thCS} and Remark \ref{remdual}. Indeed, for any $v \in C_0^\infty(\mathbb R^d)$,
\begin{eqnarray*}
\| F_\beta(s) v \|_{L^2(\mathbb R^d)}&=&\left\| \Lambda_{\beta} \int_\mathbb R dt e^{-i t \Delta } \chi_t g_t \right\|_{L^2(\mathbb R^d)}, \qquad g_t:={\mathbf{1}}\hspace{-0.24em}\mathrm{I}_{(|s|,\infty)} \varphi_t \chi_t \Lambda_{\beta} e^{i t \Delta } v,\\
&\leq & C \|g_t \|_{L^2(\mathbb R^{d+1})}\\
& \leq & C \|\varphi_t\|_{L^\infty(\mathbb R^{d+1})} \|\chi_t \Lambda_{\beta} e^{i t \Delta } v\|_{L^2(\mathbb R^{d+1})}\\
&\leq & C \|v\|_{L^2(\mathbb R^d)}.
\end{eqnarray*}
The continuity of $F_\beta(s)$ in $s$ is straightforward. We then write
$$
F_\beta^\eta-F_\beta=(\Lambda^\eta-\mathbb{I}) F_\beta \Lambda^\eta+F_\beta (\Lambda^\eta-\mathbb{I}),
$$
and conclude with the fact that $(\Lambda^\eta-\mathbb{I})\to 0$ in $\mathcal L(L^2(\mathbb R^{d}))$ together with $F_\beta \in C^0([-T,T],\mathcal L(L^2(\mathbb R^d)))$. This ends the proof of the lemma since $F_\beta(s)=0$ for $|s| \geq T$.
\end{proof}
\paragraph{The case $\varrho$ trace class.} From Lemma \ref{expdens}, we have
\begin{eqnarray*}
\left|\int_{\mathbb R} \int_{\mathbb R^d} n_{\gamma_\beta}(t,x) \varphi(t,x) dt dx \right| &\leq& \sup_{s \in \mathbb R} \| F_\beta(s) \| \int_\mathbb R ds \, \textrm{Tr} \big( e^{i s \Delta} \varrho(s) e^{- is \Delta} \big)\\
&=&\sup_{s \in \mathbb R} \| F_\beta(s) \| \|\varrho\|_{L^1(\mathbb R,\mathcal J_1)},
\end{eqnarray*}
and it remains to estimate $F_\beta$ in terms of $\varphi$. As in the proof of Lemma \ref{expdens}, we write
\begin{eqnarray*}
\| F_\beta(s) v \|_{L^2(\mathbb R^d)}&=&\left\| \Lambda_{\beta} \int_\mathbb R dt e^{-i t \Delta } \chi_t g_t \right\|_{L^2(\mathbb R^d)}, \qquad g_t={\mathbf{1}}\hspace{-0.24em}\mathrm{I}_{(|s|,\infty)} \varphi_t \chi_t \Lambda_{\beta} e^{i t \Delta } v,\\
&\leq & C \|g_t \|_{L^{r'}(\mathbb R^{d+1})}\\
& \leq & C \|\varphi_t\|_{L^{r'q'}(\mathbb R^{d+1})} \|\chi_t \Lambda_{\beta} e^{i t \Delta } v\|_{L^{r'q}(\mathbb R^{d+1})}\\
&\leq & C \|\varphi_t\|_{L^{r'q'}(\mathbb R^{d+1})} \|v\|_{L^2(\mathbb R^d)}.
\end{eqnarray*}
The first inequality above is an application of Theorem \ref{thCS} with parameters satisfying
\begin{equation} \label{eqq1}
\left\{\begin{array}{l}
\displaystyle \beta < \frac{1+d}{r}-\frac{d}{2} \qquad \textrm{if} \qquad 1 \leq r' <2, \qquad \\
\displaystyle \beta\leq \frac{1}{2} \qquad \textrm{if} \qquad r'=2.
\end{array} \right.
\end{equation}
The second inequality follows from the H\"older inequality with $1\leq q \leq \infty$, and the last inequality is a consequence of the dual version of Theorem \ref{thCS} given in Remark \ref{remdual}, provided we have the relations
\begin{equation} \label{eqq2}
\left\{\begin{array}{l}
\displaystyle \beta < \frac{1+d}{r' q}-\frac{d}{2} \qquad \textrm{if} \qquad 2< r'q <\infty, \qquad \\
\displaystyle \beta\leq \frac{1}{2} \qquad \textrm{if} \qquad r'q=2.
\end{array} \right.
\end{equation}
We optimize the right-hand side of the inequalities of \fref{eqq1}-\fref{eqq2} by setting $q=1$ when $r'=2$, and when $r'<2$, we set $r'q=r$, i.e. $q=r-1$. In this case, $r'q'=r/(r-2)$ and $r'q'/(r'q'-1)=r/2$, which yields by duality the estimate
$$
\|n_{\gamma_\beta}\|_{L^{r/2}(\mathbb R^{d+1})} \leq C \|\varrho\|_{L^1(\mathbb R,\mathcal J_1)}, \qquad \textrm{with} \qquad \left\{\begin{array}{l}
\displaystyle \beta < \frac{1+d}{r}-\frac{d}{2} \qquad \textrm{if} \qquad 1\leq r'<2, \qquad \\
\displaystyle \beta\leq \frac{1}{2} \qquad \textrm{if} \qquad r'=2.
\end{array} \right.
$$
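For the reader's convenience, the exponent algebra used above when $r'<2$ can be checked directly: with $q=r-1$ and $r'=r/(r-1)$,
$$
r'q=\frac{r}{r-1}\,(r-1)=r, \qquad q'=\frac{r-1}{r-2}, \qquad r'q'=\frac{r}{r-1}\cdot \frac{r-1}{r-2}=\frac{r}{r-2},
$$
so that the conjugate exponent of $r'q'$ is $r'q'/(r'q'-1)=r/2$, which explains the $L^{r/2}$ norm obtained by duality.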
Note that for $\beta$ to be positive, we need the condition $r < 2(d+1)/d$.
\paragraph{The case $\varrho \in \mathcal J_p$, $p>1$.} This case is more interesting since new estimates are required. We start as before with Lemma \ref{expdens}. Since $\varrho$ is not trace class, one may expect a lower value for the exponent $\beta$. A simple way to proceed is to compensate for the fact that $\varrho \notin \mathcal J_1$ by introducing the operator $J_\alpha:=\Lambda_\alpha (1+|x|^\alpha)$ for an appropriate $\alpha$ in
\begin{eqnarray*}
\textrm{Tr} \, \big( e^{-i s \Delta} \varrho(s) e^{is \Delta} F_\beta(s) \big)&=&\textrm{Tr} \, \left( J_\alpha J_\alpha^{-1} e^{-i s \Delta} \varrho(s) e^{is \Delta} (J_\alpha^*)^{-1}J_\alpha^* F_\beta(s) \right):=II(s).
\end{eqnarray*}
As in the proof of Lemma \ref{expdens}, we cannot directly use the cyclicity of the trace since $J_\alpha$ is not bounded. We therefore introduce a regularization as before, with the difference that we now use Lemma \ref{pr1} instead of Theorem \ref{thCS} to pass to the limit. This justifies the fact that
\begin{eqnarray} \nonumber
|II(s)|&=&\left|\textrm{Tr} \, \left( J_\alpha^{-1} e^{-i s \Delta} \varrho(s) e^{is \Delta} (J_\alpha^*)^{-1}J_\alpha^* F_\beta(s) J_\alpha \right) \right|\\
&\leq & \textrm{Tr} \, \left( J_\alpha^{-1} e^{-i s \Delta} \varrho(s) e^{is \Delta} (J_\alpha^*)^{-1} \right) \| J_\alpha^* F_\beta(s) J_\alpha \|. \label{T2}
\end{eqnarray}
Using the H\"older inequality for $\mathcal J_p$ spaces, we find
$$
\textrm{Tr} \, \left( J_\alpha^{-1} e^{-i s \Delta} \varrho(s) e^{is \Delta} (J_\alpha^*)^{-1} \right) \leq \|\varrho(s)\|_{\mathcal J_p} \| J_\alpha^{-1} \|^2_{\mathcal J_{2p'}} = \|\varrho(s)\|_{\mathcal J_p} \| (1+|x|^\alpha)^{-1} \Lambda_{-\alpha}\|^2_{\mathcal J_{2p'}}.
$$
Above, we used the facts that $(J_\alpha^*)^{-1}=(J_\alpha^{-1})^{*}$ and that the $\mathcal J_p$ norms of $A$ and $A^*$ are equal. The Kato-Seiler-Simon inequality \cite{Simon-trace}, Chapter 4, then yields
$$
\| (1+|x|^\alpha)^{-1} \Lambda_{-\alpha}\|_{\mathcal J_{2p'}} \leq C \|(1+|x|^\alpha)^{-1}\|_{L^{2p'}(\mathbb R^d)} \|(1+|x|^2)^{-\alpha/2}\|_{L^{2p'}(\mathbb R^d)},
$$
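The finiteness condition can be verified directly in polar coordinates: up to a constant,
$$
\|(1+|x|^\alpha)^{-1}\|^{2p'}_{L^{2p'}(\mathbb R^d)} = \int_{\mathbb R^d} \frac{dx}{(1+|x|^\alpha)^{2p'}} \leq C+C\int_1^\infty r^{d-1-2\alpha p'}\, dr,
$$
which is finite if and only if $2\alpha p'>d$; the same computation applies to $\|(1+|x|^2)^{-\alpha/2}\|_{L^{2p'}(\mathbb R^d)}$ since $(1+|x|^2)^{-\alpha/2}$ also decays like $|x|^{-\alpha}$ at infinity.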
which is finite whenever $2\alpha p'>d$. It remains to treat the term in \fref{T2} involving $F_\beta$. It is treated in the same fashion as in the case $p=1$, with the difference that we use Lemma \ref{pr1} and its dual version instead of Theorem \ref{thCS}. Hence, with
$$
g_t={\mathbf{1}}\hspace{-0.24em}\mathrm{I}_{(|s|,\infty)} \varphi_t \chi_t \Lambda_{\alpha+\beta} e^{i t \Delta } (1+|x|^\alpha) v,
$$
we find the estimate
\begin{eqnarray*}
\| J_\alpha^* F_\beta(s) J_\alpha v \|_{L^2(\mathbb R^d)}&=&\left\| (1+|x|^\alpha)\Lambda_{\alpha+\beta} \int_\mathbb R dt e^{-i t \Delta } \chi_t g_t \right\|_{L^2(\mathbb R^d)}\\
&\leq & C \|\varphi_t\|_{L^{r'q'}(\mathbb R^{d+1})} \|v\|_{L^2(\mathbb R^d)},
\end{eqnarray*}
under the conditions
$$
2\alpha+\beta < \frac{1+d}{r}-\frac{d}{2}, \qquad \textrm{for} \qquad 1 < r' < 2, \qquad \textrm{and} \qquad 2\alpha+\beta \leq \frac{1}{2}, \qquad \textrm{for} \qquad r'=2.
$$
Owing to $2\alpha p'>d$, the latter condition becomes the one of Theorem \ref{th1}. Going back to Lemma \ref{expdens}, this finally yields by duality
$$
\|n_{\gamma_\beta}\|_{L^{r/2}(\mathbb R^{d+1})} \leq C \|\varrho\|_{L^1(\mathbb R,\mathcal J_p)}, \qquad 1 < r' \leq 2, \qquad \beta < \frac{1+d}{r}-\frac{d}{2}-\frac{d}{p'}.
$$
Note that for $\beta$ to be positive, we need the conditions
$$
r< 2+\frac{2}{d}, \qquad p'>2d, \qquad \frac{1}{p'}<\frac{d+1}{rd}-\frac{1}{2}.
$$
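Indeed, the first two conditions follow from the third: positivity of its right-hand side forces $r<2(d+1)/d=2+2/d$, and since $r \geq 2$,
$$
\frac{1}{p'}<\frac{d+1}{rd}-\frac{1}{2} \leq \frac{d+1}{2d}-\frac{1}{2}=\frac{1}{2d},
$$
which gives the condition $p'>2d$.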
\begin{remark} It is interesting to explore another route to the above estimate on $n_{\gamma_\beta}$. Instead of introducing the operators $J_\alpha$, the term $II$ could be directly bounded by
$$
\|\varrho\|_{\mathcal J_p} \|F_\beta(s)\|_{\mathcal J_{p'}}, \qquad p>1.
$$
The main difficulty is now to estimate $F_\beta$ in the Schatten space $\mathcal J_{p'}$. This does not seem direct. In particular, it is unclear whether the method of \cite{LewStri}, which is used to derive Strichartz estimates and consists in writing $\textrm{Tr}((F_\beta)^{p'})$ (when $p'$ is an integer) as a multiple integral in time and in switching trace and integration, can be employed here, since the order of integration matters for estimating $F_\beta$.
\end{remark}
\paragraph{Estimates \fref{esttraceth} and conclusion.} We address now estimates \fref{esttraceth}, which follow directly from what was done previously: adapting Lemma \ref{expdens} yields
$$
\|\gamma_\beta\|_{L^1(\mathbb R,\mathcal J_1)} = \int_\mathbb R dt\, \textrm{Tr} \big( \gamma_\beta(t)\big)= \int_\mathbb R dt\, \textrm{Tr} \big( \varrho(t) F_\beta(t) \big),
$$
where $F_\beta$ is the same as in Lemma \ref{expdens} with now $\varphi_t\equiv 1$. Introducing the operator $J_\alpha$ as before, we find
$$
\|\gamma_\beta\|_{L^1(\mathbb R,\mathcal J_1)} \leq C \sup_{t\in \mathbb R} \| J_\alpha^* F_\beta(t) J_\alpha\| \int_\mathbb R dt\, \| \varrho(t)\|_{\mathcal J_p},
$$
and we already know that the term involving $F_\beta$ is finite provided $\beta$ satisfies the conditions of Theorem \ref{th1} with $r=2$. A similar method yields the estimate on $\sigma_\beta$.
This concludes the proof of the estimates involving $\gamma_\beta$. The estimates on $n_{\sigma_\beta}$ are derived with the relation
$$
\int_{\mathbb R} \int_{\mathbb R^d} n_{\sigma_\beta}(t,x) \varphi(t,x) dt dx = \textrm{Tr} \big( u_0 F_\beta(0) \big)
$$
and estimating $F_\beta$ as before. This ends the proof of Theorem \ref{th1}.
\section{Proof of Lemma \ref{pr1}} \label{prooflem}
Let $u_t:=\chi_t \phi_t \in C_0^\infty(\mathbb R^{d+1})$ and define
$$
I:=(|x|^\alpha+1) \Lambda_{\alpha+\beta} \int_{\mathbb R } dt e^{-i t \Delta} u_t.
$$
We then write, for $[A,B]=AB-BA$ the usual commutator between operators,
\begin{eqnarray*}
I&=&\Lambda_{\alpha+\beta} (|x|^\alpha+1) \int_{\mathbb R } dt e^{-i t \Delta} u_t+ \big[|x|^\alpha+1,\Lambda_{\alpha+\beta}\big] \int_{\mathbb R } dt e^{-i t \Delta} u_t\\
&:=&I_0+I_1+I_2,
\end{eqnarray*}
where
$$
I_0=\Lambda_{\alpha+\beta} |x|^\alpha \int_{\mathbb R } dt e^{-i t \Delta} u_t, \qquad I_1=\Lambda_{\alpha+\beta} \int_{\mathbb R } dt e^{-i t \Delta} u_t,
$$
and $I_2$ is the term involving the commutator. We start with $I_0$, the most singular term.
\paragraph{The term $I_0$.}
For $\varphi$ smooth and $\alpha>0$, let
$$
A^\alpha_t \varphi(x):=2^\alpha t^\alpha e^{i\frac{|x|^2}{4t}} D^\alpha \left( e^{-i\frac{|x|^2}{4t}} \varphi(x)\right).
$$
Then, according to \cite{ponce}, Proposition 3, we have the following commutation relation between $|x|^\alpha$ and the operator $e^{-it \Delta}$:
$$
|x|^\alpha e^{-i t \Delta} \varphi= e^{-i t \Delta} A^\alpha_t \varphi.
$$
This leads to the decomposition of $I_0$ below:
\begin{eqnarray*}
I_0&=&2^\alpha \Lambda_{\alpha+\beta}\int_{\mathbb R } dt e^{-i t \Delta}\left[ t^\alpha e^{i\frac{|x|^2}{4t}} D^\alpha \left( e^{-i\frac{|x|^2}{4t}} u_t \right) \right]\\
&:=&J_1+J_2,
\end{eqnarray*}
with
\begin{align*}
&J_1=2^\alpha \Lambda_{\alpha+\beta}\int_{\mathbb R } dt e^{-i t \Delta}D^\alpha \left( t^\alpha u_t \right)\\
&J_2=2^\alpha \Lambda_{\alpha+\beta}\int_{\mathbb R } dt \,t^\alpha e^{-i t \Delta}\left[ e^{i\frac{|x|^2}{4t}} D^\alpha \left( e^{-i\frac{|x|^2}{4t}} u_t \right)-D^\alpha \left( u_t \right) \right].
\end{align*}
The term $J_1$ is directly estimated with Theorem \ref{thCS}: since $D^\alpha$ and $e^{-it \Delta}$ commute, we have
$$
\| J_1\|_{L^2(\mathbb R^d)} \leq C \left\| \Lambda_{2 \alpha+\beta}\int_{\mathbb R } dt e^{-i t \Delta} \, t^\alpha u_t \right\|_{L^2(\mathbb R^d)},
$$
and therefore, applying Theorem \ref{thCS} with $q=2$, and supposing that the support of $\chi_t$ in the $t$ variable is included in $[-T,T]$, the following estimate holds
\begin{equation} \label{estJ1}
\| J_1\|_{L^2(\mathbb R^d)} \leq C \|t^\alpha \phi_t\|_{L^{r'}((-T,T)\times \mathbb R^{d})} \leq C \|\phi_t\|_{L^{r'}(\mathbb R^{d+1})},
\end{equation}
with
\begin{equation} \label{mainconst}
\left\{
\begin{array}{l}
\displaystyle 2\alpha+\beta < \frac{(d+1)}{r}-\frac{d}{2}, \qquad r'\in [1,2).\\
\displaystyle 2\alpha+\beta \leq \frac{1}{2}, \qquad r'=2.
\end{array}
\right.
\end{equation}
This treats the leading term and yields the condition on $\alpha$ and $\beta$ stated in the lemma. We turn now to the term $J_2$. For $\varphi \in C^1_c(\mathbb R^{d})$, we use the following representation formula for the fractional Laplacian (see e.g. \cite{stein1}): we have, for any $\alpha \in (0,1)$,
\begin{equation} \label{locfrac}
D^\alpha(\varphi)(x)=c_{d,\alpha} \int_{\mathbb R^d} \frac{\varphi(x)-\varphi(y)}{|x-y|^{d+\alpha}} dy.
\end{equation}
Note that there is no need to introduce a principal value in the definition since $\alpha \in (0,1)$ and $\varphi \in C^1$. We then write
\begin{eqnarray*}
e^{i\frac{|x|^2}{4t}} D^\alpha \left( e^{-i\frac{|x|^2}{4t}} t^\alpha u_t \right)-D^\alpha \left( t^\alpha u_t \right)&=&c_{d,\alpha} \int_{\mathbb R^d} \frac{t^\alpha u_t(x)- e^{i\frac{|x|^2-|y|^2}{4t}} t^\alpha u_t(y)}{|x-y|^{d+\alpha}} dy-D^\alpha \left( t^\alpha u_t \right)\\
&=&c_{d,\alpha} \int_{\mathbb R^d} \frac{(1- e^{i\frac{|x|^2-|y|^2}{4t}}) t^\alpha u_t(y)}{|x-y|^{d+\alpha}} dy\\[3mm]
&:=&R_{t,0}.
\end{eqnarray*}
We need now to estimate a term of the form
$$
\Lambda_{\alpha+\beta}\int_{\mathbb R } dt e^{-i t \Delta} R_{t,0}.
$$
For this, we cannot directly apply Theorem \ref{thCS} since the function $R_{t,0}$ does not have the required regularity and compact support. This is therefore where we invoke Remark \ref{remchi} and the fact that the function $\chi(t,x)=(1+|x|)^{-s}$ for $s>1/2$ is sufficient. We actually choose $(1+|x|)^{-1}$ for simplicity. Then,
\begin{eqnarray*}
(1+|x|) R_{t,0}
&=&c_{d,\alpha} \int_{\mathbb R^d} (1+|y|)\frac{(1- e^{i\frac{|x|^2-|y|^2}{4t}}) t^\alpha u_t(y)}{|x-y|^{d+\alpha}} dy\\
&&+c_{d,\alpha} \int_{\mathbb R^d} (|x|-|y|)\frac{(1- e^{i\frac{|x|^2-|y|^2}{4t}}) t^\alpha u_t(y)}{|x-y|^{d+\alpha}} dy\\
&:=&R_{t,1}+R_{t,2}.
\end{eqnarray*}
This yields
$$
J_2=2^\alpha \Lambda_{\alpha+\beta}\int_{\mathbb R } dt e^{-i t \Delta}(1+|x|)^{-1}\left[R_{t,1}+R_{t,2}\right],
$$
and as a consequence of Theorem \ref{thCS},
\begin{equation} \label{estJ2}
\| J_2\|_{L^2(\mathbb R^d)} \leq C \|R_{t,1}+R_{t,2}\|_{L^{q'}(\mathbb R^{d+1})},\; \textrm{with} \; \left\{
\begin{array}{l}
\displaystyle \alpha+\beta < \frac{(d+1)}{q}-\frac{d}{2}, \quad q'\in [1,2).\\
\displaystyle \alpha+\beta \leq \frac{1}{2}, \quad q'=2.
\end{array}
\right.
\end{equation}
We estimate now $R_{t,1}+R_{t,2}$. We will actually only treat the most singular term $R_{t,1}$ since $R_{t,2}$ follows from the same techniques. We will use for this the simple inequality below, for $a \in [0,1]$,
\begin{equation} \label{AF}
\left| 1- e^{i\frac{|x|^2-|y|^2}{4t}}\right| \leq 2^{1-3a}t^{-a} |x+y|^a|x-y|^a\leq 2^{1-3a}t^{-a}\left( 2^a|y|^a|x-y|^{a} +|x-y|^{2a}\right).
\end{equation}
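Inequality \fref{AF} is standard: since $|1-e^{i\theta}| \leq \min(2,|\theta|)$, interpolating the two bounds gives $|1-e^{i\theta}| \leq 2^{1-a}|\theta|^a$ for $a \in [0,1]$, and with $\theta=(|x|^2-|y|^2)/(4t)$ (with $t^{-a}$ understood as $|t|^{-a}$), the identity $|x|^2-|y|^2=(x-y)\cdot(x+y)$ and the Cauchy-Schwarz inequality yield
$$
|\theta|^a=\frac{|(x-y)\cdot(x+y)|^a}{(4|t|)^a} \leq 4^{-a}\, |t|^{-a}\, |x-y|^a\, |x+y|^a,
$$
which gives the first bound since $2^{1-a}\,4^{-a}=2^{1-3a}$; the second follows from $|x+y| \leq |x-y|+2|y|$ together with the subadditivity $(u+v)^a \leq u^a+v^a$ for $u,v \geq 0$ and $a \in [0,1]$.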
Using \fref{AF} in $R_{t,1}$ with $a=\alpha+\varepsilon$, for some $\varepsilon>0$ such that $\varepsilon \leq 1-\alpha$, we find
\begin{eqnarray*}
|R_{t,1}| &\leq& C t^{-\varepsilon} \int_{\mathbb R^d} (1+|y|^{1+\alpha+\varepsilon})\frac{|u_t(y)|}{|x-y|^{d-\varepsilon}} dy+C t^{-\varepsilon} \int_{\mathbb R^d} (1+|y|)\frac{|u_t(y)|}{|x-y|^{d-\alpha-2\varepsilon}} dy\\
&:=&R_{t,3}+R_{t,4}.
\end{eqnarray*}
We start with the most singular term $R_{t,3}$. For any $r'\in (1,2]$ as in \fref{estJ1}, we claim that we can find $q'$ (the one in \fref{estJ2}) and $\varepsilon>0$ such that
\begin{equation} \label{R3}
\|R_{t,3}\|_{L^{q'}(\mathbb R^{d+1})} \leq C \|\phi_t\|_{L^{r'}(\mathbb R^{d+1})}.
\end{equation}
We will use for this the following classical estimate on Riesz potentials, see e.g. the Hardy-Littlewood-Sobolev inequality of \cite{RS-80-2}, section IX.4,
\begin{equation} \label{riesz}
\left\| \int_{\mathbb R^d} \frac{\varphi(y)}{|\cdot-y|^{d-\varepsilon}} dy \right\|_{L^{\frac{s d}{d-\varepsilon s}}(\mathbb R^d)} \leq C \|\varphi\|_{L^s(\mathbb R^d)}, \qquad 1<s<\infty, \qquad 1<\frac{s d}{d-\varepsilon s}<\infty.
\end{equation}
Fix now $r'\in (1,2]$ as in \fref{estJ1}, and set $q'=\frac{s d}{d-\varepsilon s}$ in \fref{estJ2} for some $s$ such that $1<s<r'$. Then, for \fref{estJ2} (in the case $q'<2$) and \fref{riesz} to hold, it is required that
\begin{equation} \label{const}
1<\frac{s d}{d-\varepsilon s} < 2, \qquad \alpha+\beta < \left(\frac{1}{s'}+\frac{\varepsilon}{d}\right)(d+1)-\frac{d}{2}, \qquad r<s'<\infty.
\end{equation}
Using \fref{riesz} and the H\"older inequality, it follows that
\begin{eqnarray} \nonumber
\|R_{t,3}\|_{L^{q'}(\mathbb R^{d+1})} &\leq& C\| t^{-\varepsilon} (1+|x|^{1+\alpha+\varepsilon}) u_t \|_{L^{q'}(\mathbb R_t,L^{s}(\mathbb R_x^{d}))} \\\nonumber
&\leq& C\| t^{-\varepsilon} u_t \|_{L^{q'}(\mathbb R_t,L^{r'}(\mathbb R_x^{d}))}\\
&\leq& C\left(\int_{-T}^T |t|^{-\varepsilon q' p' } dt \right)^{1/(q'p')} \|\phi_t\|_{L^{r'}(\mathbb R^{d+1})}, \label{intT}
\end{eqnarray}
where $pq'=r'$, $p\in (1,\infty)$. In the second line above, we used the fact that $u_t$ has compact support in order to control the $L^s_x$ norm of $u_t$ by the $L^{r'}_x$ norm, which is possible since $s<r'$. With $pq'=r'$ and $q'=\frac{s d}{d-\varepsilon s}$, we therefore find
$$
p'=r' \left(\frac{d-\varepsilon s }{r'(d-\varepsilon s)-sd}\right)
$$
with the constraints
\begin{equation} \label{eqp}
p=\frac{r'}{q'}=\frac{(d-\varepsilon s)r'}{sd} > 1, \qquad i.e. \qquad 1<s<\frac{dr'}{d+\varepsilon r'}.
\end{equation}
For the first term of \fref{intT} to be finite, we need $\varepsilon q'p'<1$, that is
\begin{equation} \label{est4}
\varepsilon q' p'=\frac{\varepsilon r'sd}{r'(d-\varepsilon s)-sd}<1, \qquad i.e. \qquad sr'\varepsilon (d+1)+sd<dr'.
\end{equation}
When the latter inequality is satisfied, it implies the second inequality in \fref{eqp} that we thus ignore. We collect now the various constraints on the parameters and define $\varepsilon$ and $s$ appropriately. The first inequality in \fref{const} becomes
$$
\frac{1}{2} < \frac{1}{s}-\frac{\varepsilon}{d}<1.
$$
The second inequality holds since $s>1$ and $\varepsilon>0$, and we therefore only keep the first one.
Defining $\delta$ by $r'=s+\delta$, with $0<\delta<r'-1$ since $r'>s>1$, \fref{est4} and the above inequality become
\begin{equation} \label{const5}
r' (r'-\delta) \varepsilon (d+1)<\delta d, \qquad \frac{1}{2} < \frac{1}{r'-\delta}-\frac{\varepsilon}{d}.
\end{equation}
We have in addition $r'\in (1,2]$ as well as
\begin{equation} \label{const3}
0<\varepsilon\leq 1-\alpha, \qquad \alpha+\beta < \left(\frac{1}{s'}+\frac{\varepsilon}{d}\right)(d+1)-\frac{d}{2}.
\end{equation}
Since $r'\leq 2$ and $\delta>0$, $r'-\delta \leq 2-\delta<2$, the inequalities in \fref{const5} are satisfied as soon as
$$
4 (d+1) \varepsilon \leq \delta d, \qquad \textrm{and} \qquad \frac{1}{2} <\frac{1}{2-\delta}-\frac{\varepsilon}{d}, \qquad \textrm{that is for the latter} \qquad \frac{4\varepsilon}{d+2\varepsilon} < \delta.
$$
These latter inequalities are satisfied for instance when
$$
\varepsilon=\frac{\delta d}{4(d+1)} < \frac{1}{4} \qquad (\textrm{since} \quad\delta <r'-1\leq 1),
$$
and we verify that $\alpha+\varepsilon \leq 1$ since $\alpha \leq 1/4$. It remains to choose $\delta$. We need to do it in such a way that $0<\delta<r'-1$ and such that the second inequality of \fref{const3} holds when \fref{mainconst} is verified. The latter condition can be expressed as the inequality, for $r \in [2,\infty)$,
$$
\frac{(d+1)}{r}\leq \left(\frac{1}{s'}+\frac{\varepsilon}{d}\right)(d+1)+\alpha.
$$
With direct algebra and our current choice of $\varepsilon$, this becomes
\begin{equation} \label{const22}
\frac{\delta}{r'(r'-\delta)} \leq \frac{\varepsilon}{d}+\frac{\alpha}{d+1}=\frac{\delta }{4(d+1)}+\frac{\alpha}{d+1}.
\end{equation}
Since $r'>1$ and $r'-\delta>1$, we have $1<r'(r'-\delta)$ and \fref{const22} is satisfied whenever
$$
\delta \leq \frac{\delta }{4(d+1)}+\frac{\alpha}{d+1}, \qquad \textrm{that is} \qquad \delta \leq \frac{\alpha (4d+1)}{(d+1)(4d+3)}.
$$
Setting finally
$$
\delta= \min \left( \frac{\alpha (4d+1)}{(d+1)(4d+3)},\frac{r'-1}{2} \right),
$$
we verify that all constraints are satisfied. This concludes the estimation of the term $R_{t,3}$ and yields \fref{R3}. Regarding $R_{t,4}$, we use once more the Hardy-Littlewood-Sobolev inequality to obtain, for the same $q',\varepsilon$ and $s$ as above and $s_0=\frac{sd}{d+s(\alpha+\varepsilon)}<s$,
$$
\|R_{t,4}\|_{L^{q'}(\mathbb R^{d+1})} \leq C\| t^{-\varepsilon} (1+|x|) u_t \|_{L^{q'}(\mathbb R_t,L^{s_0}(\mathbb R_x^{d}))} \leq C\| t^{-\varepsilon} u_t \|_{L^{q'}(\mathbb R_t,L^{s}(\mathbb R_x^{d}))}
$$
since $u_t$ has a compact support in $x$. It then suffices to proceed as for $R_{t,3}$. Finally, as already mentioned, the term $R_{t,2}$ is more regular and is treated with a similar and simpler analysis. This then enables us to estimate $I_0$ by
$$
\|I_0\|_{L^2(\mathbb R^d)} \leq C \|\phi_t\|_{L^{r'}(\mathbb R^{d+1})},
$$
under the condition \fref{mainconst}. We turn now to the term $I_2$.
\paragraph{The term $I_2$.} We start with the following lemma.
\begin{lemma} \label{com} Let $\varphi \in \mathcal S(\mathbb R^{d})$, and let $\alpha,\beta>0$ with $\alpha+\beta<1$. Then, for any $\varepsilon>0$, we have the estimate
$$
\left\| \big[1+|x|^\alpha,\Lambda_{\alpha+\beta}\big] \varphi \right \|_{L^2(\mathbb R^{d})} \leq C \| \varphi \|_{H^{\beta+\varepsilon}(\mathbb R^d)}.$$
\end{lemma}
\begin{proof}
We proceed in the Fourier space. First, the Fourier transform of $|x|^\alpha$ reads, see \cite{gelfand}, section 3.3,
$$
\mathcal F_x(|x|^\alpha)(\xi)= C_{\alpha,d} |\xi|^{-d-\alpha}, \qquad C_{\alpha,d}= 2^{\alpha+d} \pi^{\frac{d}{2}} \frac{\Gamma(\frac{\alpha+d}{2})}{\Gamma(-\frac{\alpha}{2})}.
$$
Therefore,
\begin{eqnarray*}
\mathcal F_x \left(\big[1+|x|^\alpha,\Lambda_{\alpha+\beta}\big ]\varphi\right)(\xi)&=&C_{\alpha,d}\int_{\mathbb R^d} \frac{(1+|k|^2)^{(\alpha+\beta)/2}-(1+|\xi|^2)^{(\alpha+\beta)/2}}{|\xi-k|^{d+\alpha}} \mathcal F_x \varphi(k) dk\\
&:=&F(\xi).
\end{eqnarray*}
Since the function $ \mathbb R \ni u \mapsto (1+u^2)^{\gamma/2}$ is of H\"older regularity $\gamma$ for $\gamma \in (0,1)$, there exists a constant $C$ such that
$$
|(1+|k|^2)^{(\alpha+\beta)/2}-(1+|\xi|^2)^{(\alpha+\beta)/2}| \leq C||k|-|\xi||^{\alpha+\beta} \leq C |k-\xi|^{\alpha+\beta}.
$$
Hence, $F$ can be estimated by
$$
|F(\xi)|\leq C \int_{\mathbb R^d} \frac{|\mathcal F_x \varphi(k)| }{|\xi-k|^{d-\beta}} dk.
$$
Using the estimate \fref{riesz} on Riesz potentials, we find, together with the H\"older inequality for the second line,
\begin{eqnarray*}
\|F\|_{L^2(\mathbb R^d)} &\leq& C \| \mathcal F_x \varphi\|_{L^{\frac{2d}{d+2\beta}}(\mathbb R^d)}\\
&\leq & C \|(1+|\xi|^{(\frac{\beta+\varepsilon}{\beta})d})^{-1}\|_{L^1(\mathbb R^d)} \| (1+|\xi|^{\beta+\varepsilon})\mathcal F_x \varphi\|_{L^{2}(\mathbb R^d)}\\[2mm]
&\leq & C \| (1+|\xi|^{\beta+\varepsilon})\mathcal F_x \varphi\|_{L^{2}(\mathbb R^d)}.
\end{eqnarray*}
This concludes the proof of the lemma.
\end{proof}
\medskip
With the latter lemma at hand, the estimation of $I_2$ is now straightforward: we have, for any $\varepsilon>0$,
\begin{eqnarray*}
\|I_2\|_{L^2(\mathbb R^d)}&=&\left\|\big[1+|x|^\alpha,\Lambda_{\alpha+\beta}\big] \int_{\mathbb R } dt e^{-i t \Delta} u_t \right\|_{L^2(\mathbb R^d)}\\
& \leq& C \left\| \Lambda_{\beta+\varepsilon} \int_{\mathbb R } dt e^{-i t \Delta} u_t \right\|_{L^2(\mathbb R^d)}\\
&\leq & C \| \phi_t\|_{L^{r'}(\mathbb R^{d+1})},
\end{eqnarray*}
where we used Theorem \ref{thCS} in the last line with
$$
\beta+\varepsilon < \frac{d+1}{r}-\frac{d}{2}, \qquad r'\in[1,2), \qquad \textrm{and} \qquad \beta+\varepsilon \leq \frac{1}{2}, \qquad r'=2.
$$
With for instance $\varepsilon=\alpha$, the latter condition is implied by \fref{mainconst}. Note as well that the relation above implies $\beta \leq 1/2$, so that, since $\alpha \leq 1/4$ by the hypotheses of Lemma \ref{pr1}, we have $\alpha+\beta<1$ and Lemma \ref{com} can indeed be applied.
In order to conclude the proof of Lemma \ref{pr1}, it remains to treat the term $I_1$, which is a direct consequence of Theorem \ref{thCS}. This ends the proof.
\section{Proof of Corollary \ref{cor1}} \label{proofcor}
As in the proof of Theorem \ref{th1}, we focus on $n_{\gamma_\beta}$, and consider a nonnegative finite rank ``smooth'' operator $\varrho$ verifying \fref{condrho}, the final estimates following by density. This will justify the formal calculations. In this setting, we have in particular that $\Gamma (t) \in \mathcal J_1$ and $(\mu_j(t))^{1/2}\phi_j(t) \in \mathcal S(\mathbb R^d)$ for all $t \in \mathbb R$, for $(\mu_j,\phi_j)_{j\in \mathbb N}$ the spectral decomposition of $\Gamma$. For $v\in C_0^\infty(\mathbb R^d)$, let then
$$
n^v(t,x):=|v(x)|^2 n_{\Gamma}(t,x)=\sum_{j\in \mathbb N} \mu_j(t) |\phi^v_j(t,x)|^2, \qquad \phi^v_j:=v \phi_j.
$$
The proof consists in applying $D^\beta$ to $n^v$ and in using the estimates of Theorem \ref{th1}. This requires some care since there is no simple product rule for the fractional derivative. Let $\varphi \in \mathcal S(\mathbb R^d)$. Then,
with the local representation of the fractional Laplacian \fref{locfrac} and the identity $|a|^2-|b|^2= \Re\, (a-b)(\overline{a}+\overline{b})$, we find, for $\beta \in (0,1/2]$,
\begin{eqnarray*}
D^\beta(|\varphi|^2)(x)&=&c_{d,\beta} \int_{\mathbb R^d} \frac{|\varphi(x)|^2-|\varphi(y)|^2}{|x-y|^{d+\beta}}dy\\
&=&c_{d,\beta} \Re\int_{\mathbb R^d} \frac{(\varphi(x)-\varphi(y))(\overline{\varphi(y)}-\overline{\varphi(x)}+2\overline{\varphi(x)} )}{|x-y|^{d+\beta}}dy\\[2mm]
&=&2 \Re \; \overline{\varphi} D^\beta(\varphi)(x)- c_{d,\beta} |\mathcal D_{\beta/2}\varphi|^2,
\end{eqnarray*}
where
$$
\mathcal D_{\beta/2}(\varphi)=\left(\int_{\mathbb R^d} \frac{|\varphi(x)-\varphi(y)|^2}{|x-y|^{d+\beta}}dy\right)^{1/2}.
$$
Replacing $\varphi$ by $ (\mu_j(t))^{1/2}\phi_j^v(t)$, we obtain the following expression of $D^\beta n^v$:
$$
D^\beta n^v= 2 \Re \, \sum_{j\in \mathbb N} \mu_j\overline{\phi^v_j} D^\beta(\phi^v_j)- c_{d,\beta} \, \sum_{j\in \mathbb N} \mu_j |\mathcal D_{\beta/2}(\phi^v_j)|^2.
$$
The first term of the r.h.s. can be thought of as the most singular one, and can be treated with Theorem \ref{th1} after some technicalities. An important point is that we do not use the triangle inequality in its estimation, so the orthogonality of the $\phi_j$'s is directly exploited. The second term is more critical. To the best of our knowledge, there are not many ways to estimate $\mathcal D_{\beta/2}(\phi^v_j)$; one is given in \cite{stein1}, Theorem 1, which provides control of the $L^p$ norm of $\mathcal D_{\beta/2}(\phi_j^v)$ in terms of the $L^p$ norm of $\Lambda_{\beta/2}\phi^v_j$. This means that we need to use the triangle inequality at this stage. This does not imply that the orthogonality of the $\phi_j$'s is not used at all: it is exploited when we invoke Theorem \ref{th1} with $r=2$ in the case $p>1$. These facts explain the limitation to the $L^1$ norm in time. The results could be extended to higher norms in time if we were able to replace $\Lambda_\beta$ by $\mathcal D_{\beta}$ in the definition of $\gamma_\beta$ and prove an analog of Theorem \ref{th1}. This does not seem trivial since $\mathcal D_{\beta}$ is not linear.
Using therefore the Cauchy-Schwarz and triangle inequalities, we find
\begin{eqnarray} \nonumber
\|D^\beta n^v\|_{L^{\frac{d}{d-\beta}}(\mathbb R_x^d)}&\leq& C \left\| \left(\sum_{j\in \mathbb N} \mu_j |\phi^v_j|^2\right)^{1/2}\left(\sum_{j\in \mathbb N} \mu_j |D^\beta(\phi^v_j)|^2\right)^{1/2} \right\|_{L^{\frac{d}{d-\beta}}(\mathbb R^d_x)}\\
&& \qquad +C \sum_{j\in \mathbb N} \mu_j\big\|\mathcal D_{\beta/2}(\phi_j^v) \big\|_{L^{\frac{2d}{d-\beta}}(\mathbb R^d_x)}^2. \label{Dn}
\end{eqnarray}
Thanks to \cite{stein1}, Theorem 1, we can control the term in $\mathcal D_{\beta/2}(\phi_j^v)$ by (remark that obviously $2d/(d+\beta)<2d/(d-\beta)$),
$$
\|\mathcal D_{\beta/2}(\phi_j^v) \big\|_{L^{\frac{2d}{d-\beta}}(\mathbb R^d_x)} \leq C \|\Lambda_{\beta/2} \phi_j^v\|_{L^{\frac{2d}{d-\beta}}(\mathbb R^d_x)} \leq C \|\Lambda_{\beta} \phi_j^v\|_{L^{2}(\mathbb R^d_x)},
$$
where we used the Sobolev embedding $H^{\beta/2,2}(\mathbb R^d) \subset L^{\frac{2d}{d-\beta}}(\mathbb R^d)$ for the second inequality. Furthermore, using the H\"older inequality and the embedding $H^{\beta,2}(\mathbb R^d) \subset L^{\frac{2d}{d-2\beta}}(\mathbb R^d)$ for the first term in the r.h.s of \fref{Dn}, we find
\begin{eqnarray*}
\|D^\beta n^v\|_{L^{\frac{d}{d-\beta}}(\mathbb R_x^d)}&\leq& C \sum_{j\in \mathbb N} \mu_j \left( \|\Lambda_\beta(\phi^v_j)\|^2_{L^2(\mathbb R_x^d)}+\|D^\beta(\phi^v_j)\|^2_{L^2(\mathbb R_x^d)}\right)\\
&\leq & C \sum_{j\in \mathbb N} \mu_j \left( \|\phi^v_j\|^2_{L^2(\mathbb R^d_x)}+\|D^\beta(\phi^v_j)\|^2_{L^2(\mathbb R_x^d)} \right).
\end{eqnarray*}
Above, we used \cite{stein1}, Theorem 2, in order to estimate $\Lambda_\beta$ in terms of $D^\beta$. We now control $D^\beta( v \phi_j)$ in terms of $v D^\beta(\phi_j)$. We have, for all $\varphi \in \mathcal S(\mathbb R^d)$,
\begin{eqnarray*}
D^\beta(v \varphi)(x)&=&c_{d,\beta} \int_{\mathbb R^d} \frac{v(x)\varphi(x)-v(y)\varphi(y)}{|x-y|^{d+\beta}}dy\\
&=&c_{d,\beta} \int_{\mathbb R^d} \frac{v(x)(\varphi(x)-\varphi(y))+(v(x)-v(y))\varphi(y)}{|x-y|^{d+\beta}}dy\\
&=&v(x)D^\beta(\varphi)(x)+c_{d,\beta} \int_{\mathbb R^d} \frac{v(x)-v(y)}{|x-y|^{d+\beta}}\varphi(y) dy.
\end{eqnarray*}
Supposing that the support of $v$ is compactly embedded in a bounded set $\Omega$, we write the last term above as
$$
c_{d,\beta} \int_{\Omega} \frac{v(x)-v(y)}{|x-y|^{d+\beta}}\varphi(y) dy+c_{d,\beta} v(x)\int_{\Omega^c} \frac{\varphi(y)}{|x-y|^{d+\beta}} dy.
$$
Hence,
\begin{align*}
& \sum_{j\in \mathbb N} \mu_j(t) \|D^\beta(\phi^v_j(t,\cdot))\|^2_{L^2(\mathbb R^d_x)}=\left\| \sum_{j\in \mathbb N} \mu_j(t) |D^\beta(\phi^v_j(t,\cdot))|^2 \right\|_{L^1(\mathbb R^d_x)}\\
&\hspace{1.8cm} \leq C \left\| \sum_{j\in \mathbb N} \mu_j(t) |v D^\beta(\phi_j(t,\cdot))|^2 \right\|_{L^1(\mathbb R^d_x)}+C\left\| N_1(t,\cdot) \right\|_{L^1(\mathbb R^d_x)}+C\left\| N_2(t,\cdot) \right\|_{L^1(\mathbb R^d_x)},
\end{align*}
where we have defined
\begin{eqnarray*}
N_1(t,x)&:=&\sum_{j\in \mathbb N} \mu_j(t)\left|\int_{\Omega} \frac{v(x)-v(y)}{|x-y|^{d+\beta}}\phi_j(t,y) dy \right|^2\\
N_2(t,x)&:=& \sum_{j\in \mathbb N} \mu_j(t)\left|v(x)\int_{\Omega^c} \frac{\phi_j(t,y)}{|x-y|^{d+\beta}} dy \right|^2.
\end{eqnarray*}
Writing the squares above as the product of two integrals, and using the Cauchy-Schwarz inequality lead to
$$
N_1(t,x) \leq \left|\int_{\Omega} \frac{|v(x)-v(y)|}{|x-y|^{d+\beta}} (n_{\Gamma}(t,y))^{1/2} dy \right|^2, \quad N_2(t,x) \leq |v(x)|^2 \left|\int_{\Omega^c} \frac{(n_{\Gamma}(t,y))^{1/2} }{|x-y|^{d+\beta}} dy \right|^2.
$$
Using the estimate \fref{riesz} on Riesz potentials and the fact that $|v(x)-v(y)|\leq C |x-y|$, we find, when $d\geq 2$,
\begin{eqnarray*}
\left\| N_1(t,\cdot) \right\|_{L^1(\mathbb R^d_x)} &\leq& C \int_{\mathbb R^d} \left|\int_{\Omega} \frac{(n_{\Gamma}(t,y))^{1/2} }{|x-y|^{d+\beta-1}} dy \right|^2 dx\\[2mm]
& \leq &C \|n_{\Gamma}(t,\cdot)\|_{L^{\frac{d}{d+2(1-\beta)}}(\Omega)} \leq C \|n_{\Gamma}(t,\cdot)\|_{L^{1}(\Omega)}.
\end{eqnarray*}
When $d=1$, \fref{riesz} cannot be used as above. We then control $|v(x)-v(y)|$ instead by $|v(x)-v(y)|\leq C |x-y|^{1/2}$, and since $\beta \in (0,\frac{1}{2}]$, we can apply \fref{riesz} and obtain
$$
\left\| N_1(t,\cdot) \right\|_{L^1(\mathbb R_x)} \leq C \|n_{\Gamma}(t,\cdot)\|_{L^{\frac{1}{1+2(1/2-\beta)}}(\Omega)} \leq C \|n_{\Gamma}(t,\cdot)\|_{L^{1}(\Omega)}.
$$
Regarding $N_2$, for $x \in \textrm{supp } v \subset \subset \Omega$ and $y \in \Omega^c$, there exists a constant $C$ such that $|x-y|^{-1} \leq C (1+|y|)^{-1}$, and therefore
$$
\left\| N_2(t,\cdot) \right\|_{L^1(\mathbb R^d_x)} \leq C \|n_{\Gamma}(t,\cdot)\|_{L^{q}(\mathbb R^d_x)}, \qquad \forall q\in[1,\infty].
$$
It remains to bound $n_{\Gamma}$. Note that according to the last estimate, this has to be done over $\mathbb R^d$ and not just on a bounded domain, and this is a consequence of the non-locality of the fractional derivative. Estimating $n_{\Gamma}$ is not trivial unless $\varrho$ is trace class so that the triangle inequality can be used. When $\varrho$ is not trace class, the orthogonality of the eigenfunctions plays again a crucial role, and we then use the Strichartz estimates of \cite{FrankSabin}, Theorem 15, to obtain
$$
\|n_{\Gamma} \|_{L^{\frac{p'}{d}}(-T,T,L^{\frac{p}{2-p}}(\mathbb R^d))} \leq C \|\varrho \|_{L^1(\mathbb R,\mathcal J_p)}, \qquad p \in [1,2d/(2d-1)].
$$
We indeed verify that our choice of parameters above satisfies the assumptions of the theorem of \cite{FrankSabin}, using in particular that $p'>2d$ when $\beta \geq 0$. Collecting all previous estimates, we find
$$
\|D^{\beta}(n^v) \|_{L^1(-T,T,L^{\frac{d}{d-\beta}}(\mathbb R^d))} \leq C \left\| \sum_{j\in \mathbb N} \mu_j |v D^\beta(\phi_j)|^2 \right\|_{L^1((-T,T) \times \mathbb R^d)} + C \|\varrho \|_{L^1(\mathbb R,\mathcal J_p)}
$$
and conclude by controlling the first term in r.h.s. using Theorem \ref{th1} with $r=2$ and $\beta$ satisfying the conditions stated in the corollary. This ends the proof.
\section{Discussion} \label{sec:conclusion}
At present,
the time complexity of the duel-and-sweep algorithm for the 2d-OPPM problem in Theorem~\ref{thm:duel2D} is not better than that of the straightforward reduction to the 1d-OPPM problem explained in Theorem~\ref{thm:2dOPPM}.
We showed this result as preliminary work on solving the 2d-OPPM problem,
and we hope that the 2d-OPPM problem can be solved more efficiently
by finding a more sophisticated method based on combinatorial properties yet to be discovered,
as Cole \etal~\cite{Cole2014TwoDimParaMatch} did for the two-dimensional parameterized matching problem.
This is left for future work.
\section{Experiment} \label{sec:experiment}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.47]{figures/n_time}
\ \ \ \scriptsize{(a)}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.47]{figures/m_time}
\ \ \ \scriptsize{(b)}
\end{minipage}
\caption{Running time of the algorithms with respect to (a)~text length, and (b)~pattern length.}
\label{fig:ex_time}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.47]{figures/n_comparison}
\ \ \ \scriptsize{(a)}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.47]{figures/m_comparison}
\ \ \ \scriptsize{(b)}
\end{minipage}
\caption{Number of comparisons in the algorithms with respect to (a)~text length, and (b)~pattern length.}
\label{fig:ex_comp}
\end{figure}
In order to compare the performance of the proposed algorithm with the KMP-based algorithm, we conducted experiments on the 1d-OPPM problem.
We performed two sets of experiments.
In the first experiment, the pattern size $m$ is fixed to $10$, while the text size $n$ is varied from $100000$ to $1000000$.
In the second experiment, the text size $n$ is fixed to $1000000$, while the pattern size $m$ is varied from $5$ to $100$.
We measured the average running time and the number of comparisons over $50$ repetitions in each experiment.
We used randomly generated texts and patterns over an alphabet of size $|\Sigma|=1000$.
The experiments were executed on a machine with an Intel Xeon E5-2609 CPU (8 cores, 2.40 GHz), 256 GB of memory, and the Debian Wheezy operating system.
The results of our preliminary experiments are shown in Fig.~\ref{fig:ex_time} and Fig.~\ref{fig:ex_comp}.
We can see that our algorithm outperforms the KMP-based algorithm in both running time and number of comparisons when the pattern and text sizes are large.
However, our algorithm performs worse when the pattern size is small (less than $10$).
\section{Introduction}
The exact string matching problem is one of the most widely studied problems.
Given a text and a pattern, the exact matching problem searches for all occurrence positions of the pattern in the text.
Motivated by low level image processing, the two-dimensional exact matching problem has been extensively studied in recent decades.
Given a text $T$ of size $n \times n$ and a pattern $P$ of size $m \times m$ over alphabet $\Sigma$ of size $\sigma = |\Sigma|$, the exact matching problem on two-dimensional strings searches for all occurrence positions of $P$ in $T$.
Bird~\cite{bird1977two} and Baker~\cite{baker1978technique} proposed two-dimensional exact matching algorithms based on dictionary matching, and Amir and Farach~\cite{amir1992two} proposed an algorithm that uses suffix trees.
These algorithms require total ordering from the alphabet and run in $O (n ^ 2 \log \sigma)$ time with $O (m ^ 2 \log \sigma)$ preprocessing time.
Amir~\etal~\cite{amir1994alphabet} also proposed alphabet independent approach to the problem that runs in $O(m^2 \log \sigma)$ preprocessing time and $O(n^2)$ matching time.
Unlike the exact matching problem, \emph{order-preserving pattern matching} (OPPM) considers the relative order of elements, rather than their real values.
Order-preserving matching has gained much interest in recent years, due to its applicability in problems where the relative order is compared, rather than the exact value, such as share prices in stock markets, weather data or musical notes.
Kubica~\etal~\cite{kubica2013linear} and Kim~\etal~\cite{kim2014order} proposed a solution based on KMP algorithm.
These algorithms address the one-dimensional OPPM problem and have time complexity of $O (n + m \log m)$.
Cho~\etal~\cite{cho2015fast} brought forward another algorithm based on the Horspool's algorithm that uses $q$-grams, which was proven to be experimentally fast.
Crochemore~\etal~\cite{SPIRE_Crochemore_2013} proposed data structures for OPPM.
On the other hand, Chhabra and Tarhio~\cite{SEA_Chhabra_2014} and Faro and K\"{u}lekci~\cite{faro2016efficient} proposed filtration methods that are fast in practice.
Moreover, faster filtration algorithms by using SIMD (Single Instruction Multiple Data) instructions
were proposed by Cantone~\etal~\cite{ref:PSC_Cantone}, Chhabra~\etal~\cite{ref:PSC_Chhabra} and Ueki~\etal~\cite{ueki2016fast}.
They showed that SIMD instructions are efficient in speeding up their algorithms.
In this paper, we propose an algorithm based on the dueling technique~\cite{vishkin1985optimal} for OPPM.
Our algorithm runs in $O(n + m\log m)$ time, which is as fast as the KMP-based algorithms.
Moreover, we perform experiments that compare the performance of our algorithm with the KMP-based algorithm.
The experimental results show that our algorithm is faster than the KMP-based algorithm.
Last, we introduce the two-dimensional order-preserving pattern matching problem
and give a duel-and-sweep algorithm that runs in $O(n^2)$ time for the dueling stage and $O(n^2 m)$ time for the sweeping stage,
with $O(m^3)$ preprocessing time.
To the best of our knowledge, our solution is the first to address the two-dimensional order-preserving pattern matching problem.
The rest of the paper is organized as follows. In Section~\ref{sec:prelim}, we give preliminaries on the problem.
In Section~\ref{sec:one dimension}, we describe the algorithm for OPPM problem.
In Section~\ref{sec:experiment}, we show experimental results that compare the performance of our algorithm with the KMP-based algorithm.
In Section~\ref{sec:two dimension}, we extend the algorithm and describe the method for the two-dimensional OPPM problem.
In Section~\ref{sec:conclusion}, we conclude our work and discuss future work.
\section{One-dimensional order-preserving matching} \label{sec:one dimension}
In this section, we will propose an algorithm for one-dimensional OPPM using the ``duel-and-sweep" paradigm~\cite{amir1994alphabet}.
In the dueling stage, all possible pairs of candidates ``duel'' with each other.
The surviving candidates are further pruned during the sweeping stage, leaving the candidates that are order-isomorphic with the pattern.
Prior to the dueling stage, the pattern is preprocessed to construct a \emph{witness table} that contains \emph{witness pairs} for all possible offsets.
\begin{definition} [1d-OPPM problem]
The one-dimensional order-preserving matching problem is defined as follows,
\begin{description}[topsep=0pt,parsep=0pt,partopsep=0pt]
\item[Input:] A text $T \in \Sigma^*$ of length $n$ and a pattern $P \in \Sigma^*$ of length $m$,
\item[Output:] All occurrences of substrings of $T$ that are order-isomorphic with $P$.
\end{description}
\end{definition}
\subsection{Pattern preprocessing}
Let $a > 0$ be an integer such that when $P$ is superimposed on itself with the offset $a$, the overlap regions are not order-isomorphic. We say that a pair $\pair{i}{j}$ of locations is \emph{a witness pair for the offset $a$} if either of the following holds:
\begin{itemize}[topsep=3pt]
\item $P[i] = P[j] \text{ and } P[i+a] \ne P[j+a]$,
\item $P[i] > P[j] \text{ and } P[i+a] \le P[j+a]$,
\item $P[i] < P[j] \text{ and } P[i+a] \ge P[j+a]$.
\end{itemize}
Next, we describe how to construct a \emph{witness table} for $P$, that stores witness pairs for all possible offsets $a$ $(0 < a < m)$.
For the one-dimensional problem, the witness table $\WIT{P}$ is an array of length $m-1$, such that $\WIT{P}[a]$ is a witness pair for offset $a$.
In the case when there are multiple witness pairs for offset $a$, we take the pair $\pair{i}{j}$ with the smallest value of $j$ and $i < j$.
When the overlap regions are order-isomorphic for offset $a$, which implies that no witness pair exists for $a$, we express it as $\WIT{P}[a] = \pair{m+1}{m+1}$.
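For concreteness, the definition above can be transcribed into a brute-force check. The Python sketch below (function and variable names are ours, not from the paper) builds the witness table by scanning all pairs $(i,j)$ with $i<j$ for each offset, taking the smallest $j$ as in the text; it runs in $O(m^3)$ overall and is intended only as a correctness oracle for the $O(m)$ construction of Lemma~\ref{lemma:Z_p computation}, not as a replacement for it.

```python
def build_witness_table(P):
    """Brute-force witness table for a 1-d pattern P.

    WIT[a] is a witness pair (i, j), 1-based, for offset a
    (1 <= a < m), choosing the smallest j with i < j as in the text;
    (m+1, m+1) means the overlap regions are order-isomorphic.
    O(m^3) overall -- a reference implementation only.
    """
    m = len(P)
    WIT = {}
    for a in range(1, m):
        WIT[a] = (m + 1, m + 1)      # sentinel: no witness pair
        found = False
        for j in range(1, m - a + 1):        # 1-based j, j + a <= m
            for i in range(1, j):
                pi, pj = P[i - 1], P[j - 1]          # P[i], P[j]
                qi, qj = P[i + a - 1], P[j + a - 1]  # P[i+a], P[j+a]
                if ((pi == pj and qi != qj) or
                        (pi > pj and qi <= qj) or
                        (pi < pj and qi >= qj)):
                    WIT[a] = (i, j)
                    found = True
                    break
            if found:
                break
    return WIT
```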
\begin{lemma} \label{lemma:Z_p computation}
For a pattern $P$ of length $m$, we can construct $\WIT{P}$ in $O(m)$ time assuming that $Z_P$ is already computed.
\end{lemma}
\begin{proof}
Recall that $Z_P[k]$ is the length of the longest prefix of $P[k\!:\!m]$ that is order-isomorphic with a prefix of $P$.
For each $1 < k < m$, we have two cases.
\begin{description}[topsep=5pt,itemsep=3pt]
\item[Case 1] $Z_P[k] = m - k + 1$ : Since $P[1\!:\!m - k + 1] \approx P[k\!:\!m]$, there is no witness pair for offset $k - 1$.
\item[Case 2] $Z_P[k] < m - k + 1$ :
Let $j_k = Z_P[k] + 1$, $\imax = \Prev{P}[j_k]$, and $\imin = \Next{P}[j_k]$.
Then $P[1\!:\!j_k-1]\approx P[k\!:\!k+j_k-2]$ and $P[1\!:\!j_k]\not \approx P[k\!:\!k+j_k-1]$, by the definition of $Z_P[k]$.
By Lemma~\ref{lem:op}, neither condition~(\ref{eq:opeq}) nor (\ref{eq:opineq}) holds.
If ${P[\imax]} = P[j_k]$ then $P[j_k] = P[\imin]$ by property~(\ref{prop:lmaxmineq}), so that
\begin{align}
P[k + \imax - 1] \ne P[k+j_k-1]\ \vee\ P[k+j_k-1] \ne P[k + \imin - 1] \label{cond:eq}
\end{align}
holds by condition~(\ref{eq:opeq}). Otherwise, i.e. ${P[\imax]} < P[j_k]$, we have $P[j_k] < P[\imin]$ by property~(\ref{prop:lmaxmineq}), so that
\begin{align}
P[k + \imax - 1] \ge P[k+j_k-1]\ \vee P[k+j_k-1]\ \ge P[k + \imin - 1] \label{cond:ineq}
\end{align}
holds by condition~(\ref{eq:opineq}).
Therefore, $\pair{\imax}{j_k}$ is a witness pair if the left side of condition~(\ref{cond:eq}) or (\ref{cond:ineq}) holds,
and $\pair{\imin}{j_k}$ is a witness pair if the right side of condition~(\ref{cond:eq}) or (\ref{cond:ineq}) holds.
\end{description}
Algorithm~\ref{alg:Witness} describes the procedure.
Clearly it runs in $O(m)$ time.
\end{proof}
\subsection{Dueling stage}
A substring of $T$ of length $m$ will be referred to as a \emph{candidate}. A candidate that starts at the location $x$ will be denoted by $T_x$.
Witness pairs are useful in the following situation.
Let $T_x$ and $T_{x+a}$ be two overlapping candidates and $\pair{i}{j}$ be the witness pair for offset $a$.
Without loss of generality, we assume that $P[i] < P[j]$ and $P[i+a] > P[j + a]$.
\begin{itemize}[topsep=3pt]
\item If $T[x + a + i -1] > T[x + a + j -1]$, then $T_x \not\approx P$.
\item If $T[x + a + i -1] < T[x + a + j -1]$, then $T_{x+a} \not\approx P$.
\end{itemize}
Based on this information, we can safely eliminate either candidate $T_x$ or $T_{x+a}$ without looking into other locations. This process is called \emph{dueling}. The procedure for the dueling is described in the Algorithm \ref{alg:Dueling}.
\begin{algorithm2e}[tbp]
\Fn(\tcc*[h]{Construct the witness table $\WIT{P}$}){\Witness{}}{
compute the Z-array $Z_P$ for the pattern $P$\;
\For{$k=2$ \KwTo $m-1$}{
$j = Z_P[k] + 1$\;
\lIf{$j = m - k + 1$}{
$\WIT{P}[k - 1] = \pair{m + 1}{m + 1}$}
\ElseIf{$P[\Next{P}[j]] = P[j] = P[\Prev{P}[j]]$}{
\If{$P[k+j-1] \ne P[k + \Prev{P}[j] - 1]$}{
$\WIT{P}[k-1] = \pair{\Prev{P}[j]}{j}$\;
}
\lElse{
$\WIT{P}[k-1] = \pair{\Next{P}[j]}{j}$}
}\Else{
\If{$P[k+j-1] \le P[k + \Prev{P}[j] - 1]$}{
$\WIT{P}[k-1] = \pair{\Prev{P}[j]}{j}$\;
}
\lElse{
$\WIT{P}[k-1] = \pair{\Next{P}[j]}{j}$}
}
}
}
\caption{Algorithm for constructing the witness table $\WIT{P}$}
\label{alg:Witness}
\end{algorithm2e}
Next, we prove that the \emph{consistency} property is transitive.
Suppose $T_x$ and $T_{x+a}$ are two overlapping candidates.
We say that $T_x$ and $T_{x+a}$ are \emph{consistent} with respect to $P$ if $P[1\!:\!m-a] \approx P[a+1\!:\!m]$.
Candidates that do not overlap are trivially consistent.
\begin{lemma} \label{lemma:consistency 1d}
For any $a$ and $a'$ such that $0 < a < a + a' < m$, let us consider three candidates $T_{x}$, $T_{x + a}$, and $T_{x + a + a'}$. If $T_x$ is consistent with $T_{x+a}$ and $T_{x+a}$ is consistent with $T_{x + a + a'}$, then $T_x$ is consistent with $T_{x + a+ a'}$.
\end{lemma}
\begin{proof}
Since $T_x$ is consistent with $T_{x+a}$, it follows that $P[1\!:\!m - a] \approx P[a+1\!:\!m]$, so that $P[a' + 1\!:\!m - a] \approx P[(a + a')+1\!:\!m]$.
Moreover, since $T_{x+a}$ is consistent with $T_{x+a +a'}$, it follows that $P[1\!:\!m - a'] \approx P[a'+1\!:\!m]$, so that $P[1\!:\!m - a' - a] \approx P[a'+1\!:\!m - a]$.
Thus, $P[1\!:\!m - (a + a')] \approx P[(a + a') + 1\!:\!m]$, which implies that $T_x$ is consistent with $T_{x+a+a'}$.
\end{proof}
During the dueling stage, the candidates are eliminated until all remaining candidates are pairwise consistent.
For that purpose, we can apply the dueling algorithm due to Amir~\etal~\cite{amir1994alphabet} developed for ordinal pattern matching.
\begin{lemma}[\cite{amir1994alphabet}] \label{lemma:duel 1D}
The dueling stage can be done in $O(n)$ time by using $\WIT{P}$.
\end{lemma}
\begin{algorithm2e}[t]
\Fn(\tcc*[h]{Duel between candidates $T_x$ and $T_{x+a}$}){\Dueling{$T_x, T_{x + a}$}}{
$\pair{i}{j} = \WIT{P}[a]$\;
\If{$P[i] = P[j]$}{
\leIf{$T[x + a + i -1] \ne T[x + a + j -1]$}{
\bf{return} $T_{x+a}$\;
}{
\bf{return} $T_{x}$}
}
\If{$P[i] < P[j]$}{
\leIf{$T[x + a + i -1] > T[x + a + j -1]$}{
\bf{return} $T_{x+a}$\;
}{
\bf{return} $T_{x}$}
}
\If{$P[i] > P[j]$}{
\leIf{$T[x + a + i -1] < T[x + a + j -1]$}{
\bf{return} $T_{x+a}$\;
}{
\bf{return} $T_{x}$}
}
}
\caption{Dueling}
\label{alg:Dueling}
\end{algorithm2e}
\subsection{Sweeping stage}\label{sec:1DSweeping}
The goal of the sweeping stage is to prune candidates until all remaining candidates are order-isomorphic with the pattern.
Suppose that we need to check whether some surviving candidate $T_x$ is order-isomorphic with the pattern $P$.
It suffices to successively check the conditions~(\ref{cond:eq}) and (\ref{cond:ineq}) in Lemma~\ref{lem:op}, starting from the leftmost location in $T_x$.
If the conditions are satisfied for all locations in $T_x$,
then $T_x \approx P$.
Otherwise,
$T_x \not\approx P$, and we obtain a mismatch position $j$.
A naive implementation of the sweeping will result in $O(n^2)$ time.
However, if we take advantage of the fact that all the remaining candidates are pairwise consistent,
we can reduce the time complexity to $O(n)$ time.
Since the remaining candidates are consistent to each other, for the overlapping candidates $T_x$ and $T_{x + a}$,
the overlap region is checked only once if $T_x$ is order-isomorphic with the pattern $P$.
Otherwise, for a mismatch position $j$,
$T_{x+a}$ should be checked from position $j - a$ of $T_{x+a}$,
because $P[a+1\!:\!j-1] \approx T_x[a+1\!:\!j-1] = T_{x+a}[1\!:\!j-a-1]$.
Algorithm~\ref{alg:1DSweep} describes the procedure for the sweeping stage.
\begin{algorithm2e}[t]
\Fn{\SweepingStage{}}{
\While{there are unchecked candidates to the right of $T_x$}{
let $T_x$ be the leftmost unchecked candidate\;
\eIf{there are no candidates overlapping with $T_x$}{
\lIf{$T_x \not \approx P$}{
eliminate $T_x$}
}{
let $T_{x+a}$ be the leftmost candidate that overlaps with $T_x$\;
\lIf{$T_x \approx P$}{
start checking $T_{x+a}$ from the location $m - a + 1$}
\Else{
let $j$ be the mismatch position\;
eliminate $T_x$\;
start checking $T_{x+a}$ from the location $j-a$\;
}
}
}
}
\caption{The sweeping stage algorithm}
\label{alg:1DSweep}
\end{algorithm2e}
\begin{lemma} \label{lemma:Sweep 1D}
The sweeping stage can be completed in $O(n)$ time.
\end{lemma}
By Lemmas~\ref{lemma:Z_p computation}, \ref{lemma:duel 1D}, and \ref{lemma:Sweep 1D}, we summarize this section as follows.
\begin{theorem}
The duel-and-sweep algorithm solves 1d-OPPM Problem in $O(n + m\log m)$ time.
Moreover, the running time is $O(n+m)$ under the natural assumption that
the characters of $P$ can be sorted in $O(m)$ time.
\end{theorem}
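As a sanity check for the theorem above, a brute-force 1d-OPPM matcher can be written directly from the definition of order-isomorphism. The Python sketch below (our naming) runs in $O(nm^2)$ time and serves as a correctness oracle against which an implementation of the duel-and-sweep algorithm can be tested; it is not a substitute for it.

```python
def order_isomorphic(S, T):
    """S is order-isomorphic with T iff every pair of positions
    compares the same way in both strings."""
    return (len(S) == len(T) and
            all((S[i] <= S[j]) == (T[i] <= T[j])
                for i in range(len(S)) for j in range(len(S))))

def oppm_1d_naive(T, P):
    """All 1-based starting positions x with T[x : x+m-1] ~ P.

    Brute force, O(n m^2): a correctness oracle for the
    O(n + m log m) duel-and-sweep algorithm.
    """
    n, m = len(T), len(P)
    return [x + 1 for x in range(n - m + 1)
            if order_isomorphic(T[x:x + m], P)]
```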
\section{Two-dimensional order preserving pattern matching} \label{sec:two dimension}
In this section, we will discuss how to perform two-dimensional order preserving pattern matching (2d-OPPM).
Array indexing is used for two-dimensional strings, the horizontal coordinate $x$ increases from left to right and the vertical coordinate $y$ increases from top to bottom. $\ST[x,y]$ denotes an element of $\ST$ at position $(x,y)$ and $\ST[x\!:\!x + w - 1 , y\!:\!y + h - 1]$ denotes a substring of $\ST$ of size $w \times h$ with top-left corner at the position $(x, y)$.
We say that two dimensional strings $\ST$ and $\TT$ are \emph{order-isomorphic}, written $\ST \approx \TT$, if $\ST[i_x,i_y] \leq \ST[j_x,j_y] \Longleftrightarrow \TT[i_x,i_y] \leq \TT[j_x,j_y]$ for all $1 \leq i_x, j_x \leq w$ and $1 \leq i_y, j_y \leq h$.
For a simple presentation, we assume that both text and pattern are squares $(w=h)$ in this paper, but we can generalize it straightforwardly.
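The definition of two-dimensional order-isomorphism can be checked naively by comparing every pair of cells. The Python sketch below (our naming; brute force in $O((wh)^2)$ time) is intended only as an executable restatement of the definition for small inputs.

```python
def order_isomorphic_2d(S, T):
    """S ~ T for equal-size 2-d arrays (given as lists of rows):
    every pair of cells must compare the same way in S and in T.
    Brute force, O((wh)^2) -- for illustration only."""
    s = [v for row in S for v in row]   # left-to-right/top-to-bottom
    t = [v for row in T for v in row]
    return (len(s) == len(t) and
            all((s[i] <= s[j]) == (t[i] <= t[j])
                for i in range(len(s)) for j in range(len(s))))
```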
\begin{definition} [2d-OPPM problem]
The two-dimensional order-preserving matching problem is defined as follows,
\begin{description}[topsep=0pt,parsep=0pt,partopsep=0pt]
\item[Input:] A text $\TT$ of size $n \times n$ and a pattern $\PP$ of size $m \times m$,
\item[Output:] All occurrences of substrings of $\TT$ that are order-isomorphic with $\PP$.
\end{description}
\end{definition}
Our approach is to reduce 2d-OPPM problem into 1d-OPPM problem, based on the following observation.
For a two-dimensional string $\ST$, let $\ser{\ST}$ be the (one-dimensional) string obtained by \emph{serializing} $\ST$, i.e., by traversing it
in the left-to-right/top-to-bottom order. We can easily verify the following lemma.
\begin{lemma} \label{lemma:serialized}
$\ST \approx \TT$ if and only if $\ser{\ST} \approx \ser{\TT}$ for any $\ST$ and $\TT$.
\end{lemma}
\begin{theorem} \label{thm:2dOPPM}
2d-OPPM problem can be solved in $O(n^2 m + m^2 \log{m})$ time.
\end{theorem}
\begin{proof}
For a fixed $1 \leq x \leq n -m + 1$, consider the substring $\TT[x:x+m-1, 1:n]$ and let $S_x = \ser{\TT[x:x+m-1, 1:n]}$.
By Lemma~\ref{lemma:serialized}, $\PP$ occurs in $\TT$ at position $(x, y)$, i.e.
$\PP \approx \TT[x:x+m-1, y:y+m-1]$ if and only if $\ser{\PP} \approx S_x[m(y-1)+1\!:\!m(y-1)+m^2]$.
The positions $m(y-1)+1$ satisfying the latter condition can be found in $O(nm + m^2\log{m})$ time by 1d-OPPM algorithms, which we showed in Section~\ref{sec:one dimension} or KMP-based ones~\cite{kubica2013linear,kim2014order}, because $|S_x| = nm$ and $|\ser{\PP}|=m^2$.
Because we need the preprocess for the pattern $\ser{\PP}$ only once, and execute the search in $S_x$ for each $x$, the result follows.
\end{proof}
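The reduction in the proof above can be sketched as follows. In this Python sketch (our naming), a naive pairwise order-isomorphism test stands in for the $O(nm + m^2\log m)$ one-dimensional matcher, so the code is a correctness oracle rather than an efficient implementation of the theorem.

```python
def serialize(A):
    """ser(A): traverse a 2-d array (list of rows)
    in left-to-right/top-to-bottom order."""
    return [v for row in A for v in row]

def oppm_2d_by_reduction(T, P):
    """All 1-based positions (x, y) with T[x:x+m-1, y:y+m-1] ~ P,
    where x is the column and y the row of the top-left corner.

    Follows the proof of the reduction: for each column x, serialize
    the m-wide strip T[x:x+m-1, 1:n] and test the windows of length
    m^2 starting at serialized positions m(y-1)+1 against ser(P).
    """
    n, m = len(T), len(P)
    sp = serialize(P)

    def iso(u, v):          # naive order-isomorphism test
        return all((u[i] <= u[j]) == (v[i] <= v[j])
                   for i in range(len(u)) for j in range(len(u)))

    occ = []
    for x in range(n - m + 1):                  # 0-based column
        strip = [row[x:x + m] for row in T]     # T[x+1:x+m, 1:n]
        s = serialize(strip)                    # |s| = n * m
        for y in range(n - m + 1):              # 0-based row
            w = s[m * y : m * y + m * m]
            if iso(w, sp):
                occ.append((x + 1, y + 1))
    return occ
```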
In the rest of this paper, we try a direct approach to two-dimensional strings based on the duel-and-sweep paradigm, inspired by the work~\cite{amir1992two,Cole2014TwoDimParaMatch}.
A substring of $\TT$ of size $m \times m$ will be referred to as a candidate. $\TT_{x,y}$ denotes the candidate with the top-left corner at $(x,y)$.
\subsection{Pattern preprocessing}
For $0 \leq a < m$ and $-m < b < m$,
we say that a pair $\pair{(i_x, i_y)}{(j_x, j_y)}$ of locations is \emph{a witness pair for the offset $(a, b)$} if either of the following holds:
\begin{itemize}[topsep=3pt]
\item $\PP[i_x, i_y] = \PP[j_x, j_y] \text{ and } \PP[i_x+a, i_y+b] \ne \PP[j_x+a, j_y+b]$,
\item $\PP[i_x, i_y] > \PP[j_x, j_y] \text{ and } \PP[i_x+a, i_y+b] \le \PP[j_x+a, j_y+b]$,
\item $\PP[i_x, i_y] < \PP[j_x, j_y] \text{ and } \PP[i_x+a, i_y+b] \ge \PP[j_x+a, j_y+b]$.
\end{itemize}
The \emph{witness table $\WIT{\PP}$ for pattern $\PP$} is a two-dimensional array of size $m \times (2m-1)$, where
$\WIT{\PP}[a, b]$ is a witness pair for the offset $(a, b)$.
If the overlap regions are order-isomorphic when $\PP$ is superimposed with offset $(a, b)$, then no witness pair exists.
We denote it as $\WIT{\PP}[a,b] = \pair{(m+1, m+1)}{(m+1, m+1)}$.
We show how to efficiently construct the witness table $\WIT{\PP}$.
For $\PP$ and each $0 \leq a < m$, we define the \emph{Z-array} $\ZZ{\PP}{a}$ by
\[\ZZ{\PP}{a}[i] = \max_{1 \le j \le |P_1| - i + 1}\{j \mid P_1[1:j]\approx P_2[i:i+j-1] \} \mbox{\ for each } 1 \leq i \leq |P_1|, \]
where $P_1 = \ser{\PP[1\!:\!m-a, 1\!:\!m]}$, $P_2 = \ser{\PP[a + 1\!:\!m, 1\!:\!m]}$, and $|P_1| = |P_2| = m(m-a)$.
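The Z-array $\ZZ{\PP}{a}$ can also be computed naively from its definition, by extending the longest order-isomorphic prefix at each position. The Python sketch below (our naming; brute force) reproduces the values of Table~\ref{table:witness-table-Z-array} for the serialized strings $P_1$ and $P_2$; the paper computes the same array far more efficiently via Lemma~\ref{lemma:Z-array}.

```python
def z_array_op(P1, P2):
    """Order-preserving Z-array: Z[i] (for 1-based position i+1) is
    the length of the longest prefix of P1 that is order-isomorphic
    with the substring of P2 starting there.  Brute force from the
    definition; extending one element at a time is valid because
    order-isomorphism is preserved under restriction to prefixes."""
    def iso(u, v):
        return all((u[a] <= u[b]) == (v[a] <= v[b])
                   for a in range(len(u)) for b in range(len(u)))
    L = len(P1)
    Z = []
    for i in range(L):                  # 0-based start in P2
        j = 0
        while i + j < L and iso(P1[:j + 1], P2[i:i + j + 1]):
            j += 1
        Z.append(j)
    return Z
```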
\begin{lemma} \label{lemma:Wit-2d, each cell}
For an arbitrarily fixed $a \geq 0$, we can compute the value of $\WIT{\PP}[a,b]$ in $O(1)$ time for each $b$, assuming that $\ZZ{\PP}{a}$ is already computed.
\end{lemma}
\begin{proof}
For an offset $(a, b)$ with $b \geq 0$, let us consider $z_{a,b} = \ZZ{\PP}{a}[b \cdot (m - a) + 1]$.
\begin{description}[topsep=5pt,itemsep=3pt,partopsep=0pt]
\item[Case 1] $z_{a,b} = (m-a)\!\cdot\!(m-b)$: Note that the value is equal to the number of elements in the overlap region. Then $\PP[1:m-a,1:m-b] \approx \PP[a+1:m,b+1:m]$, so that no witness pair exists for the offset $(a,b)$.
\item[Case 2] $z_{a,b} < (m-a)\!\cdot\!(m-b)$: There exists a witness pair $\pair{(i_x, i_y)}{(j_x, j_y)}$, where $(j_x, j_y)$ is the location of the element in $\PP$, that corresponds to the $(z_{a,b} + 1)$-th element of $P_1 = \ser{\PP[1\!:\!m-a, 1\!:\!m]}$.
By a simple calculation, we can obtain the values $(j_x, j_y)$ in $O(1)$ time.
We can also compute $(i_x, i_y)$ from $(j_x, j_y)$ in $O(1)$ time, similarly to the proof of Lemma~\ref{lemma:Z_p computation}, with the help of auxiliary arrays $\Prev{\PP,a}$ and $\Next{\PP,a}$.
(Details are omitted.)
\end{description}
Symmetrically, we can compute it for $b<0$.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{figures/witness-pair2}\\
\caption{An example of witness pair. The pattern $\PP$ is shown on the left and the alignment of $P$ with itself with offset $(3,2)$ is shown on the right.
The pair $\pair{(2, 1)}{(2, 2)}$ is a witness pair for offset $(3,2)$, since $\PP[2,1] =47 > 44=\PP[2,2]$, but $\PP[5,3]=23 < 27=\PP[5,4]$.}
\label{fig:witness-pair}
\end{figure}
\begin{table}[t]
\centering
\caption{Computation of $\ZZ{\PP}{3}$. For $\PP$ in Fig.~\ref{fig:witness-pair}, the overlap regions for offset $(3,0)$ are traversed in left-to-right/top-to-bottom order to obtain $P_1$ and $P_2$.}
\setlength{\tabcolsep}{8pt} \label{table:witness-table-Z-array}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\hline
$P_1$ & $36$ & $47$ & $42$ & $44$ & $17$ & $39$ & $22$ & $12$ & $24$ & $29$ \\
\hline
$P_2$ & $9$ & $49$ & $8$ & $11$ & $12$ & $23$ & $15$ & $27$ & $42$ & $49$ \\
\hline
$\ZZ{\PP}{3}$ & $2$ & $1$ & $2$ & $2$ & $3$ & $1$ & $2$ & $2$ & $2$ & $1$ \\
\hline
\end{tabular}
\centering
\end{table}
\begin{table}[t]
\label{table:witness-table}
\centering
\caption{Witness pairs for offsets $(3,0)$, $(3,1)$, $(3,2)$, $(3,3)$, $(3,4)$ for $\PP$ in Fig.~\ref{fig:witness-pair}.}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$(a,b)$ & $(3,0)$ & $(3,1)$ & $(3,2)$ & $(3,3)$ & $(3,4)$\\
\hline
$z_{a,b}$ & $2$ & $2$ & $3$ & $2$ & $2$ \\
\hline
$\WIT{\PP}[a,b]$ & $\pair{(1,1)}{(2,1)}$ & $\pair{(1,2)}{(2,1)}$ & $\pair{(2,1)}{(2,2)}$ & $\pair{(1,2)}{(2,1)}$ & $\pair{(5,5)}{(5,5)}$ \\
\hline
\end{tabular}
\centering
\end{table}
\begin{lemma} \label{lemma:WIT 2d}
We can construct the witness table $\WIT{\PP}$ in $O(m^3)$ time.
\end{lemma}
\begin{proof}
Assume that we have sorted all elements of $\PP$.
For an arbitrarily fixed $a$, the calculation of $\Prev{\PP,a}$ and $\Next{\PP,a}$ takes $O(m^2)$ time
using the sorted $\PP$.
$\ZZ{\PP}{a}$ can be constructed in $O(m^2)$ time by Lemma~\ref{lemma:Z-array}.
Furthermore, finding witness pairs for all offsets $(a, b)$ takes $O(m)$ time by Lemma~\ref{lemma:Wit-2d, each cell}.
Since there are $m$ such $a$'s to consider, $\WIT{\PP}$ can be constructed in $O(m^3)$ time.
\end{proof}
\subsection{Dueling stage}
Similarly to Lemma~\ref{lemma:consistency 1d}, we can show the transitivity as follows.
\begin{lemma}\label{lem:2D-consistency}
For any $a , b, a', b' \geq 0$, let us consider three candidates $\TT_1 = \TT_{x,y}$, $\TT_2 = \TT_{x + a, y + b}$, and $\TT_3 = \TT_{x + a', y + b'}$. If $\TT_1$ is consistent with $\TT_2$ and $\TT_2$ is consistent with $\TT_3$, then $\TT_1$ is consistent with $\TT_3$.
\end{lemma}
The dueling algorithm due to Amir \etal~\cite{amir1994alphabet} is also applicable to the problem.
\begin{lemma} (\cite{amir1994alphabet}) \label{lemma:duel 2D}
The dueling stage can be done in $O(n^2)$ time by using $\WIT{\PP}$.
\end{lemma}
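As an illustration of how a duel uses a witness pair, the following Python sketch eliminates one of two overlapping candidates with a single pair of text comparisons. The index conventions (1-based candidates and pattern positions) and the \texttt{wit} dictionary layout are assumptions made for this sketch, not the paper's data structures.

```python
def duel(text, P, wit, c1, c2):
    """Duel two overlapping candidates c1=(x,y) and c2=(x+a,y+b); return
    the survivor. wit[(a, b)] holds a witness pair (p, q) of 1-based
    positions in P such that P[p] and P[q] are ordered oppositely to
    P[p+(a,b)] and P[q+(a,b)]; hence the text can agree with at most
    one of the two candidates at those cells.
    Hedged sketch: index conventions are assumptions, not the paper's."""
    (x1, y1), (x2, y2) = c1, c2
    p, q = wit[(x2 - x1, y2 - y1)]
    # text cells on which positions p and q of c2's pattern copy fall
    tp = text[x2 + p[0] - 2][y2 + p[1] - 2]
    tq = text[x2 + q[0] - 2][y2 + q[1] - 2]
    pp, pq = P[p[0] - 1][p[1] - 1], P[q[0] - 1][q[1] - 1]
    # c2 needs the text to mirror P's order at (p, q); the witness-pair
    # property guarantees c1 needs the opposite order at the same cells
    if (tp < tq) == (pp < pq) and (tp > tq) == (pp > pq):
        return c2
    return c1
```

Whichever candidate the text values contradict at the witness positions cannot be order-isomorphic with the pattern, so it is safely eliminated without further comparisons.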
\subsection{Sweeping stage} \label{sec:2DSweeping}
This is the hardest part for two-dimensional strings.
We first consider two surviving candidates $\TT_{x, y_1}$ and $\TT_{x,y_2}$ in some column $x$, with $y_1 < y_2$.
If we traverse $\TT[x\!:\!x+m-1, 1\!:\!n]$ in a top-to-bottom/left-to-right manner, we can reduce the problem to the one-dimensional order-preserving problem.
Thus, performing the sweeping stage for a single column $x$ takes $O(nm)$ time.
Since there are $n - m + 1$ such columns, the sweeping stage takes $O(n^2m)$ time.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.19\hsize}
\centering
\includegraphics[scale=0.38]{figures/serial1}
\end{minipage}
\begin{minipage}[t]{0.19\hsize}
\centering
\includegraphics[scale=0.38]{figures/serial2}
\end{minipage}
\begin{minipage}[t]{0.19\hsize}
\centering
\includegraphics[scale=0.38]{figures/serial3}
\end{minipage}
\begin{minipage}[t]{0.19\hsize}
\centering
\includegraphics[scale=0.38]{figures/serial4}
\end{minipage}
\begin{minipage}[t]{0.19\hsize}
\centering
\includegraphics[scale=0.38]{figures/serial5}
\end{minipage}
\caption{Example of traversing directions that we use for sweeping algorithm.}
\label{fig:serial}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.4]{figures/2-cands}\\
\ \ \ \scriptsize{(a)}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\centering
\includegraphics[scale=0.4]{figures/3-cands}\\
\ \ \ \scriptsize{(b)}
\end{minipage}
\vspace{-2mm}
\caption{(a) Elements in the overlap region are checked only once. (b) Elements in the blue region must be checked twice.}
\label{fig:2-3 cands}
\vspace{-2mm}
\end{figure}
Next, we propose a method that takes advantage of the consistency relation in both horizontal and vertical directions.
First, we construct $m$ strings $P_i = \ser{\PP[1:m-i,1:m]}\ser{\PP[m-i+1:m,1:m]}$ for $0 \le i < m$ by serializing $\PP$ in different ways.
We then compute $\Prev{P_i}$ and $\Next{P_i}$ for $0 \le i < m$,
so that we can compare the order-isomorphism of the pattern with the text in several different ways.
$\Prev{P_i}$ and $\Next{P_i}$ for all $0 \le i < m$ can be computed in $O(m^3)$ total time by sorting $\ser{\PP}$ once
and then calculating each $\Prev{P_i}$ and $\Next{P_i}$ using the sorted $\ser{\PP}$.
Fig.~\ref{fig:serial} shows $P_i$ for $0 \le i < m$ where $m=5$.
We also do the same computation for bottom-to-top/left-to-right traversing direction.
Let us consider two overlapping candidates $\TT_{x_1, y_1}$ and $\TT_{x_2, y_2}$, where $x_1 < x_2$ and $y_1 < y_2$.
Suppose that $\TT_{x_1, y_1}$ is order-isomorphic with the pattern and we need to check $\TT_{x_2, y_2}$. Since $\TT_{x_1, y_1}$ is consistent with $\TT_{x_2, y_2}$, we only need to check the order-isomorphism of the region of $\TT_{x_2, y_2}$ that lies outside the overlap region.
We do this by using $P_j$, where $j= x_2-x_1$, without checking the overlap region.
This idea is illustrated in Figure~\ref{fig:2-3 cands} (a).
The procedure for $y_1 > y_2$ is symmetrical.
Next, consider three overlapping candidates $\TT_1 = \TT_{x_1,y_1}$, $\TT_2 = \TT_{x_2,y_2}$ and $\TT_3 = \TT_{x_3,y_3}$,
such that $x_1 \leq x_2 \leq x_3$ and $y_2 \leq y_3$.
We assume that $\TT_1$ and $\TT_2$ are both order-isomorphic with the pattern.
If $y_1 \leq y_2$, we can use the method for two overlapping candidates that we described before to perform sweeping efficiently.
However, if $y_1 \geq y_2$, as shown in Fig.~\ref{fig:2-3 cands} (b),
we need to check the blue region twice, since we do not know the order-isomorphism relation
between the blue region and the overlap region of $\TT_2$ and $\TT_3$.
By using the above method, we can reduce the number of comparisons in the sweeping stage.
However, the time complexity remains the same.
\begin{lemma} \label{lemma:2d sweep}
The sweeping stage can be completed in $O(n^2m)$ time.
\end{lemma}
By Lemmas~\ref{lemma:WIT 2d}, \ref{lemma:duel 2D}, and \ref{lemma:2d sweep}, we conclude this section as follows.
\begin{theorem} \label{thm:duel2D}
The duel-and-sweep algorithm solves the 2d-OPPM problem in $O(n^2 m + m^3)$ time.
\end{theorem}
\section{Preliminaries} \label{sec:prelim}
We use $\Sigma$ to denote an alphabet of integer symbols such that the comparison of any two symbols can be done in constant time. $\Sigma^*$ denotes the set of strings over the alphabet $\Sigma$.
For a string $S \in \Sigma^*$, we denote the $i$-th element of $S$ by $S[i]$ and the substring of $S$ that starts at location $i$ and ends at location $j$ by $S[i\!:\!j]$.
We say that two strings $S$ and $T$ of equal length $n$ are \emph{order-isomorphic}, written $S \approx T$, if $S[i] \leq S[j] \Longleftrightarrow T[i] \leq T[j]$ for all $1 \leq i, j \leq n$.
For instance, $(12, 35, 5) \approx (25, 30, 21) \not\approx (11, 13, 20)$.
In order to check order-isomorphism of two strings, Kubica~\etal~\cite{kubica2013linear} introduced~\footnote{Similar arrays $\textit{Prev}_S$ and $\textit{Next}_S$ are introduced in~\cite{hasan2015order}.} useful arrays $\Prev{S}$ and $\Next{S}$ defined by
\begin{align}
\Prev{S}[i]=j \mbox{ if } S[j]=\max_{k < i} \{S[k] \mid S[k] \le S[i] \}, \\
\Next{S}[i]=j \mbox{ if } S[j]=\min_{k < i} \{S[k] \mid S[k] \ge S[i] \}.
\end{align}
We use the rightmost (largest) $j$ if there exists more than one such $j$.
If there is no such $j$, we define $\Prev{S}[i] = 0$ or $\Next{S}[i] = 0$, respectively.
From the definition, we can easily observe the following properties.
\begin{align}
S[\Prev{S}[i]] = S[i]\quad \Longleftrightarrow\quad S[i] = S[\Next{S}[i]], \label{prop:lmaxmineq} \\
S[\Prev{S}[i]] < S[i]\quad \Longleftrightarrow\quad S[i] < S[\Next{S}[i]]. \label{prop:lmaxminineq}
\end{align}
\begin{lemma}[\cite{kubica2013linear}]
For a string $S$, let $sort(S)$ be the time required to sort the elements of $S$.
$\Prev{S}$ and $\Next{S}$ can be computed in $O(sort(S) + |S|)$ time.
\end{lemma}
Thus, $\Prev{S}$ and $\Next{S}$ can be computed in $O(|S|\log |S|)$ time in general.
Moreover, the computation can be done in $O(|S|)$ time
under a natural assumption~\cite{kubica2013linear} that the characters of $S$ are elements of the set $\{1,\ldots,|S|^{O(1)}\}$.
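As a concrete illustration, $\Prev{S}$ and $\Next{S}$ can be computed with a sorted structure. The Python sketch below (function name is ours) is a simple $O(|S|^2)$-worst-case version, not the linear-time construction of Kubica~\etal; it uses the rightmost index on ties, as in the definition.

```python
import bisect

def prev_next(S):
    """Prev[i]: rightmost j < i with S[j] the largest value <= S[i];
    Next[i]: rightmost j < i with S[j] the smallest value >= S[i];
    0 means no such j (stored indices are 1-based, as in the paper).
    Simple sorted-list sketch; insort makes it O(n^2) worst case."""
    prev, nxt, seen, last = [], [], [], {}
    for i, x in enumerate(S):
        hi = bisect.bisect_right(seen, x)   # values <= x end at hi
        prev.append(last[seen[hi - 1]] if hi > 0 else 0)
        lo = bisect.bisect_left(seen, x)    # values >= x start at lo
        nxt.append(last[seen[lo]] if lo < len(seen) else 0)
        bisect.insort(seen, x)
        last[x] = i + 1                     # rightmost index of value x
    return prev, nxt
```

On $S = (18, 22, 12, 50, 10, 17)$ this reproduces the $\Prev{S}$ and $\Next{S}$ rows of Table~\ref{table:Z-array}.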
By using $\Prev{S}$ and $\Next{S}$, order-isomorphism of two strings can be decided as follows.
\begin{lemma}[\cite{cho2015fast}]\label{lem:op}
For two strings $S$ and $T$ of length $n$,
assume that $S[1\!:\!i] \approx T[1\!:\!i]$ for some $i < n$.
Let $\imax = \Prev{S}[i+1]$ and $\imin = \Next{S}[i+1]$.
Then $S[1\!:\!i+1] \approx T[1\!:\!i+1]$ if and only if either of the following two conditions holds.
\begin{align}
S[\imax] = S[i+1] = S[\imin]\ \wedge\ T[\imax] = T[i+1] = T[\imin], \label{eq:opeq}\\
S[\imax] < S[i+1] < S[\imin]\ \wedge\ T[\imax] < T[i+1] < T[\imin]. \label{eq:opineq}
\end{align}
We omit the corresponding equalities/inequalities if $\imax=0$ or $\imin=0$.
\end{lemma}
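Lemma~\ref{lem:op} yields a linear-time order-isomorphism test once $\Prev{S}$ and $\Next{S}$ are available; a Python sketch (the function name and array conventions are ours):

```python
def order_isomorphic(S, T, prev, nxt):
    """Decide S ~ T using the lemma's conditions, given the 1-based
    Prev/Next arrays of S (0 = undefined). Each position costs O(1)."""
    if len(S) != len(T):
        return False
    for i in range(len(S)):
        imax, imin = prev[i], nxt[i]
        if imax:  # equality in S must mirror equality in T,
                  # strict inequality must mirror strict inequality
            if S[imax - 1] == S[i]:
                if T[imax - 1] != T[i]:
                    return False
            elif not T[imax - 1] < T[i]:
                return False
        if imin:
            if S[imin - 1] == S[i]:
                if T[imin - 1] != T[i]:
                    return False
            elif not T[i] < T[imin - 1]:
                return False
    return True
```

For example, with $S = (12, 35, 5)$ (so $\Prev{S} = (0,1,0)$, $\Next{S} = (0,0,1)$), the test accepts $T = (25, 30, 21)$ and rejects $T = (11, 13, 20)$, matching the earlier example.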
Hasan~\etal~\cite{hasan2015order} proposed a modification of the Z-function, which Gusfield~\cite{gusfield1997algorithms} defined for exact pattern matching, to make it useful from the order-preserving point of view.
For a string $S$, the \emph{(modified) Z-array} of $S$ is defined by
\[Z_S[i] = \max_{1 \le j \le |S| - i + 1}\{j \mid S[1:j]\approx S[i:i+j-1] \} \mbox{\quad for each } 1 \leq i \leq |S|.\]
In other words, $Z_S[i]$ is the length of the longest substring of $S$ that starts at position $i$ and is order-isomorphic with some prefix of $S$. An example of Z-array is illustrated in Table \ref{table:Z-array}.
\begin{table}[t]
\centering
\caption{Z-array of a string $S = (18, 22, 12, 50, 10, 17)$.
For instance, $Z_S[3] = 3$ because $S[1\!:\!3] = (18, 22, 12) \approx (12, 50, 10) = S[3\!:\!5]$ and $S[1\!:\!4] = (18, 22, 12, 50) \not\approx (12, 50, 10, 17) = S[3\!:\!6]$. $\Prev{S}$ and $\Next{S}$ are also shown. }
\label{table:Z-array}
\setlength{\tabcolsep}{8pt}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\
\hline
$S$ & $18$ & $22$ & $12$ & $50$ & $10$ & $17$ \\
\hline
$Z_S$ & $6$ & $1$ & $3$ & $1$ & $2$ & $1$ \\
\hline
$\Prev{S}$ & $0$ & $1$ & $0$ & $2$ & $0$ & $3$ \\
\hline
$\Next{S}$ & $0$ & $0$ & $1$ & $0$ & $3$ & $1$ \\
\hline
\end{tabular}
\end{table}
\begin{lemma} (\cite{hasan2015order}) \label{lemma:Z-array}
For a string $S$, Z-array $Z_S$ can be computed in $O(|S|)$ time, assuming that $\Prev{S}$ and $\Next{S}$ are already computed.
\end{lemma}
Note that in their original work, Hasan \etal~\cite{hasan2015order} assumed that each character in $S$ is distinct.
However, we can extend their algorithm by using Lemma~\ref{lem:op} to verify order-isomorphism even when $S$ contains duplicate characters.
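For reference, the definition of $Z_S$ can be checked directly with a naive brute-force sketch in Python (the linear-time algorithm of Lemma~\ref{lemma:Z-array} is more involved):

```python
def op_z_array(S):
    """Order-preserving Z-array by brute force: Z[i] is the length of
    the longest prefix of S order-isomorphic with the substring of S
    starting at position i (0-based list, 1-based in the definition)."""
    def iso(A, B):
        # order-isomorphism by exhaustive pairwise comparison
        return all((A[i] <= A[j]) == (B[i] <= B[j])
                   for i in range(len(A)) for j in range(len(A)))
    n, Z = len(S), []
    for i in range(n):
        j = 0
        while i + j < n and iso(S[:j + 1], S[i:i + j + 1]):
            j += 1
        Z.append(j)
    return Z
```

On $S = (18, 22, 12, 50, 10, 17)$ this reproduces the $Z_S$ row of Table~\ref{table:Z-array}.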
\section{Introduction}
Recent advances in Unmanned Aerial Vehicles (UAVs) have led to many new applications for aerial vehicles. These include search {\color{black}and} rescue, last-mile delivery, and surveillance, {\color{black}and they benefit from the small size and maneuverability of quadrotors}. Furthermore, many of these applications use a large number of quadrotors (e.g., swarms). A key issue is developing robust navigation algorithms so that each quadrotor agent avoids collisions with other dynamic and static obstacles in its environment. Moreover, in general, the quadrotors {\color{black}{need to}} operate in uncontrolled outdoor settings like urban regions, where the agents rely on onboard sensors for state estimation. In practice, onboard sensing can be noisy, which can significantly affect the collision avoidance performance.
Prior work on collision-free navigation is broadly classified into centralized and decentralized methods. Centralized methods~\cite{Augugliaro, Kushleyev, Preiss} plan collision-free trajectories for all agents in a swarm simultaneously, and they can also provide guarantees on smoothness, time optimality, and collision avoidance. However, due to the centralized computation, these algorithms do not scale well with the number of agents. In decentralized methods~\cite{VO, RVO, ORCA, Morgan, zhou}, each agent makes independent decisions to avoid a collision. In practice, they are scalable due to the decentralized decision making, but do not guarantee optimality or reliably handle uncertainty.
In addition, prior work on multi-agent collision avoidance is limited to deterministic settings. These methods are mainly designed for indoor environments, where the physical evaluations are performed with a MoCap-based state estimation. On the other hand, real-world quadrotor deployment relies on onboard sensor data, which can be noisier. For example, depth cameras are widely used in robotics applications, {\color{black}{but the estimated depth values may have errors}} due to lighting, calibration, or object surfaces~\cite{khoshelham2012accuracy,park2020efficient}. Some of the simplest techniques consider zero-mean Gaussian uncertainty by enlarging the agent's bounding geometry in relation to the variance of uncertainty~\cite{Snape, Kamel}.
However, these methods tend to over-approximate the collision probability, resulting in conservative navigation schemes~\cite{Zhu}. Other uncertainty algorithms are based on chance-constraint methods~\cite{Zhu, PRVO}. {\color{black}These algorithms} are less conservative in practice, but assume simple agent dynamics or {\color{black}are limited to simple scenarios}.
{\color{black}{\subsection{Main Results:}}}
We present a decentralized, probabilistic collision avoidance method (SwarmCCO) for quadrotor swarms operating in dynamic environments. Our approach builds on prior techniques for multi-agent navigation based on reciprocal collision avoidance~\cite{ORCA, DCAD}, and we present efficient techniques to perform probabilistic collision avoidance by chance-constrained optimization (CCO). We handle the non-linear quadrotor dynamics using flatness-based feedforward linearization. The reciprocal collision avoidance constraints are formulated as chance constraints and {\color{black}{combined with the MPC (Model Predictive Control) framework}}.
\begin{itemize}
\item {\color{black}Our} first algorithm assumes a Gaussian noise distribution for the state uncertainties and reformulates the {\color{black}collision avoidance} chance constraints as a set of deterministic second-order cone constraints.
\item {\color{black}Our} second algorithm is designed for non-Gaussian noise. {\color{black}We use a Gaussian Mixture Model (GMM) to approximate the noise distribution and replace each collision avoidance constraint with a second-order cone constraint for each Gaussian component.} The cone constraints for the individual Gaussian components are related to the GMM's probability distribution {\color{black}by an additional constraint based on the GMM's mixing coefficients.}
\end{itemize}
We evaluate our probabilistic methods (SwarmCCO) in simulated environments with a large number of quadrotor agents. We compare our probabilistic methods' performance with the deterministic algorithm~\cite{DCAD} in terms of the path length, time to goal, and {\color{black}the number of collisions}. We observe {\color{black}that} both our Gaussian and non-Gaussian methods result in {\color{black}fewer} collisions in the presence of noise. {\color{black}Our average computation time is} $\sim5ms$ per agent for our Gaussian method and $\sim9ms$ per agent for the non-Gaussian method in {\color{black}scenarios with $4$ agents}. {\color{black}The non-Gaussian method is computationally more expensive than the Gaussian method, but it provides improved performance in terms of shorter path lengths and better satisfaction of the collision avoidance constraints (Section V-D)}. Hence, the non-Gaussian method {\color{black}tends to} offer better performance in constricted regions due to its better approximation of the noise, {\color{black}where the Gaussian method may result in an infeasible solution}.
{\color{black}The paper is organized as follows.} Section II summarizes the recent relevant works in probabilistic collision avoidance. Section III provides a brief introduction to DCAD~\cite{DCAD} and ORCA~\cite{ORCA} chance-constraints. {\color{black}In Section IV, we present our algorithms and describe the chance constraint formulation}. In Section V, we present our results and compare the performance with other methods. In Section VI, we summarize our major contributions and results, and present the limitations and future work.
\section{Previous Work}
In this section, we provide a summary of the recent work on collision avoidance and trajectory planning under uncertainty.
\subsection{Decentralized Collision Avoidance with Dynamics}
Decentralized collision avoidance methods~\cite{VO,RVO,ORCA,Morgan,AVO} compute the paths by locally altering the agent's path based on the local sensing information and state estimation. Velocity Obstacle (VO)~\cite{VO} methods such as RVO~\cite{RVO} and ORCA~\cite{ORCA} provide decentralized collision avoidance for agents with single-integrator dynamics. This concept was extended to double integrator dynamics in the AVO algorithm~\cite{AVO} and used to generate $n^{th}$ order continuous trajectories in \cite{cnco}. Berg et al.~\cite{LQG} and Bareiss et al.~\cite{LQR} proposed control obstacles for agents with linear dynamics. Moreover, the authors demonstrated the algorithm on quadrotors by linearizing about the hover point.
Cheng et al. \cite{MPCORCA} presented a variation by using ORCA constraints on velocity and a linear MPC to account for dynamics. Morgan et al.~\cite{Morgan} described a sequential convex programming (SCP) method for trajectory generation. However, SCP methods can be computationally expensive for rapid online replanning. Most of these methods have been designed for deterministic settings. Under imperfect state estimation and noisy actuation, the performance of deterministic algorithms may not be reliable and can lead to collisions {{\cite{PRVO}}}. Hence, we need probabilistic collision avoidance methods for handling uncertainty.
\subsection{Uncertainty Modeling}
Snape et al.~\cite{Snape} extended the concept of VO to address state estimation uncertainties using Kalman filtering and bounding volume expansion for single-integrator systems. That is, the agent's bounding polygon is enlarged based on the co-variance of uncertainty. Kamel et al.~\cite{Kamel} proposed an N-MPC formulation for quadrotor collision avoidance and used the bounding volume expansion to address sensor uncertainties. DCAD~\cite{DCAD} presented a collision avoidance method for quadrotors using ORCA and bounding volume expansion. Bounding volume expansion methods retain the linearity of ORCA constraints; hence, they are fast but tend to be conservative. They do not differentiate samples close to the mean from those farther away from the mean~\cite{Zhu, arxivChance}. Hence, they can lead to infeasible solutions in dense scenarios~\cite{Zhu}. Angeris et al.~\cite{angeris2019fast} accounted for uncertainty in estimating a neighbor's position using a confidence ellipsoid before computing a safe reachable set for the agent.
In contrast to bounding volume methods, \cite{Zhu,B-UAVC} modeled the stochastic collision avoidance as a chance-constrained optimization. These techniques assumed a Gaussian noise distribution for the position and transformed the chance constraints to deterministic constraints on mean and co-variance of uncertainty. Gopalakrishnan et al.~\cite{PRVO} presented PRVO, a probabilistic version of RVO. PRVO assumed a Gaussian noise distribution and used Cantelli's bound to approximate the chance constraint. However, PRVO considers simple single-integrator dynamics. Jyotish et al.~\cite{NagaJyotish} extended PRVO to non-parametric noise and formulated the CCO problem as matching the distribution of PVO with a certain desired distribution
using RKHS embedding for a simple linear dynamical system. However, this method is computationally expensive and requires about $0.2s$ to compute a suitable velocity in the presence of 2 neighbors.
There is also considerable literature on probabilistic collision detection to check for collisions between noisy geometric datasets~\cite{rusu2009real,lee2013sigma,du2011probabilistic,park2016fast,park2020efficient}. They are applied on point cloud datasets and used for trajectory planning in a single high-DOF robot, but not for multi-agent navigation scenarios.
\section{Background and Problem Formulation}
\begin{table}[t]
\caption{\label{tab:notation} Notation and symbols.}
\begin{center}
\renewcommand{\arraystretch}{1.1}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|p{1.0cm}|p{6.5cm}|}
\hline
\textbf{Notation} & \textbf{Definition} \\
\hline\hline
$\mathcal{W}$ & World Frame defined by unit vectors $\mathbf{x_W}$, $\mathbf{y_W}$, and $\mathbf{z_W}$ along the standard X, Y and Z axes \\
\hline
$\mathcal{B}$ & Body Frame attached to the center of mass, defined by the axes $\mathbf{x_B}$, $\mathbf{y_B}$, and $\mathbf{z_B}$ \\
\hline
$\mathbf{r}_{i}$ & 3-D position of $i^{th}$ quadrotor given by $[r_{i,x}, r_{i,y}, r_{i,z}]$ \\
\hline
$\mathbf{v}_i, \mathbf{a}_i$ & 3-D Velocity and Acceleration of $i^{th}$ quadrotor given by $[v_{i,x}, v_{i,y}, v_{i,z}]$ and $[a_{i,x}, a_{i,y}, a_{i,z}]$, respectively\\
\hline
${R}_i$ & Radius of agent $i$'s enclosing sphere\\
\hline
$\phi, \theta, \psi$ & Roll, pitch and yaw of the quadrotor\\
\hline
$\mathbf{R}$ & Rotation matrix of quadrotor body frame ($\mathcal{B}$) w.r.t world frame ($\mathcal{W}$)\\% i.e., {\color{red} $\mathcal{B}$ = $\mathbf{R}\mathcal{W}$}\\
\hline
$\mathbf{T}$ & Net thrust in body fixed coordinate frame\\
\hline
$\mathbf{m}_q$ & Mass of quadrotor\\
\hline
${\boldsymbol{\omega}}$ & Angular velocity in body fixed coordinate frame given by $[p,q,r]$\\
\hline
$\mathbf{v}_{i}^{orca}$ & Collision avoiding velocity for agent $i$\\
\hline
$\color{black}{f^{orca_i^j}}$ & ORCA \color{black}{plane constraint given by $\mathbf{m}^T\mathbf{v}^{orca}_{i}-b \geq 0$. $\mathbf{m}$ and b are functions of the agents trajectory.}\\
\hline
${\color{black}{\mu_m, \sigma_m}}$ & Mean and standard deviation of a variable `{\color{black}{m}}'\\
\hline
$\color{black}{P(x)}$ & { $\text{\color{black}Probability of x}$}\\
\hline
${\color{black}{\boldsymbol{\mu}_m, \Sigma_m}}$ & Mean and covariance of a vector
`{\color{black}{m}}'\\
\hline
$\mathbf{x}, \mathbf{u}$ &{Quadrotor \color{black}{flat state and flat control} input}\\
\hline
\end{tabular}
}
\end{center}
\vspace{-10pt}
\end{table}
This section discusses the problem statement and gives an overview of various concepts used in our approach. Table \ref{tab:notation} summarizes the symbols and notations used in our paper.
\subsection{Problem Statement}
We consider $\mathrm{N}$ agents occupying a workspace $\mathcal{W}\subseteq\mathbb{R}^3$. Each agent $i \in \left\{1,2,...,\mathrm{N}\right\}$ is modeled with non-linear quadrotor dynamics, as described in {\cite{DCAD}}. For simplicity, {\color{black}each} agent's {\color{black}geometric representation} is approximated {\color{blue}as} a sphere of radius $R$. We assume that each agent knows its neighbor's position and velocity either through {\color{black}inter-agent communication or visual sensors, and this information is not precise}. No assumption is made on the nature of the uncertainty distribution {\color{black}in this information}; hence, the random variables are assumed to be non-parametric (i.e. they are assumed to follow no particular family of probability distribution).
Our algorithm approximates the distribution using a {\color{black}Gaussian Mixture Model (GMM)}.
At any time instant, two agents $i$ and $j$, where $i, j \in \left\{1,2,...,\mathrm{N} \right\}$, $i\neq j$, are said to be collision-free if their separation is greater than sum of their bounding sphere radii. That is, {\color{black}$\left\Vert \mathbf{r}_{i} - \mathbf{r}_{j} \right\Vert_2 \ge R_i + R_j$.} Since the position {\color{black}and velocity are random variables}, collision avoidance is handled {\color{black}using} a stochastic method based on chance constraints.
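To see why noisy positions call for a probabilistic treatment, the collision probability between two agents can be estimated by sampling. This Monte-Carlo sketch (assuming isotropic additive Gaussian noise, purely for illustration) is not part of our algorithm, which instead reformulates such probabilities as chance constraints.

```python
import random

def collision_prob(mu_i, mu_j, sigma, R_sum, n_samples=2000, seed=0):
    """Estimate P(||r_i - r_j|| < R_i + R_j) when every position
    coordinate carries additive zero-mean Gaussian noise of std sigma.
    Illustrative sketch; R_sum = R_i + R_j."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # sample noisy positions for both agents, accumulate squared distance
        d2 = sum((a + rng.gauss(0.0, sigma) - b - rng.gauss(0.0, sigma)) ** 2
                 for a, b in zip(mu_i, mu_j))
        hits += d2 < R_sum ** 2
    return hits / n_samples
```

For well-separated means the estimate is near $0$; for overlapping means it is near $1$; in between, the estimate depends on how the noise mass falls inside the collision region, which is exactly what a chance constraint bounds.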
\subsection{ORCA}
ORCA~\cite{ORCA} is a velocity obstacle-based method that computes a set of velocities that are collision-free. Let us consider the RVO equation as given in~\cite{PRVO}.
\begin{equation}
f^{RVO_j^i}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{v}_{i},\mathbf{v}_{j}, \mathbf{v}^{rvo}_{i})\geq 0,\\
\end{equation}
\begin{equation}
f^{RVO_{j}^{i}} (.) = \Vert \mathbf{r}_{ij}\Vert^{2}-\frac{((\mathbf{r}_{ij})^{T}(2\mathbf{v}_{i}^{rvo}-\mathbf{v}_{i}-\mathbf{v}_{j}))^{2}}{\Vert 2\mathbf{v}_{i}^{rvo}-\mathbf{v}_{i}-\mathbf{v}_{j}\Vert^{2}}-(R_{ij})^{2},\\
\label{frvo}
\end{equation}
\begin{equation}
\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}, \quad R_{ij} = R_{i} + R_{j}.
\end{equation}
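Evaluating $f^{RVO}$ is a direct translation of Eqn.~(\ref{frvo}); a Python sketch (function and variable names are ours):

```python
def f_rvo(r_i, r_j, v_i, v_j, v_rvo, R_i, R_j):
    """Evaluate f^{RVO} from the RVO equation: a non-negative value
    means v_rvo satisfies the RVO condition for agent i w.r.t. agent j.
    Assumes the relative-velocity term w is nonzero."""
    rij = [a - b for a, b in zip(r_i, r_j)]              # r_i - r_j
    w = [2 * a - b - c for a, b, c in zip(v_rvo, v_i, v_j)]  # 2 v_rvo - v_i - v_j
    dot = sum(a * b for a, b in zip(rij, w))
    n2 = sum(a * a for a in rij)                         # ||r_ij||^2
    w2 = sum(a * a for a in w)                           # ||w||^2
    return n2 - (dot * dot) / w2 - (R_i + R_j) ** 2
```

For two agents $5$ units apart with radii $1$, a velocity perpendicular to the line of sight gives a positive value (safe), while a velocity pointing straight at the neighbor gives a negative one (collision course).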
Since DCAD considers linear constraints given by ORCA, we construct the ORCA constraints from Eqn.~(\ref{frvo}) by linearizing the function about an operating point. In our case, the operating point is chosen as the velocity on the surface of the truncated VO cone closest to the relative velocity between the two agents. The ORCA constraint (linearized equation) has the following form:
\begin{equation}\label{forcaE}
f^{ORCA_{j}^{i}} (.) =
{\mathbf{m}^T\mathbf{v}^{orca}_{i}-b \geq 0.}
\end{equation}
Here, $\mathbf{v}^{orca}_{i}$ is any velocity in the half-space of collision-free velocities. Eqn.~\ref{forcaE} is used to construct the chance constraint, which is detailed in Section IV.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{errorDistribution.png}
\caption{A sample additive non-Gaussian noise distribution added to the 3D position of the agent. The distribution is generated using a 5 component GMM.}
\label{fig:errorDist}
\end{figure}
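A noise distribution like the one in Fig.~\ref{fig:errorDist} can be generated by sampling from a Gaussian mixture; a minimal 1-D sketch (the component parameters in any usage are arbitrary illustrations, not our fitted model):

```python
import random

def sample_gmm(weights, means, stds, n, seed=0):
    """Draw n samples from a 1-D GMM: pick a component according to
    its mixing weight, then sample that component's Gaussian."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        samples.append(rng.gauss(means[k], stds[k]))
    return samples
```

Fitting a GMM to empirical sensor noise would typically use expectation-maximization; here the mixture is only used to produce non-Gaussian noise for illustration.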
{\color{black}{\subsection{Differential Flatness}
A non-linear system $\dot{\mathbf{x}}_o = f(\mathbf{x}_o,\mathbf{u}_o)$ is differentially flat if a set $\zeta = [\zeta_1, \zeta_2, ..., \zeta_m] $ of differentially independent components and their derivatives can be used to construct the system state space and control inputs~\cite{DCAD}. Here, $\mathbf{x}_o$ and $\mathbf{u}_o$ represent the state and control input of the non-linear system.\\
Given a non-linear, differentially flat system and a smooth reference trajectory in $\zeta$ (denoted $\zeta_{ref}$), the nonlinear dynamics can be represented as a linear flat model ($\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$) using feedforward linearization. The vectors $\mathbf{x}$ and $\mathbf{u}$ represent the flat state and flat control input respectively. From the definition of differential flatness, a reference state ($\mathbf{x}_{ref}$) and control input ($\mathbf{u}_{ref}$) can be constructed using $\zeta_{ref}$.
Formally, a differentially flat, non-linear system $\dot{\mathbf{x}}_o = f(\mathbf{x}_o,\mathbf{u}_o)$ can be feedforward linearized into the following linear system
$$
\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}
$$
$$
\mathbf{u} = g(\mathbf{x}, \mathbf{u}_o, \dot{\mathbf{u}}_o, ... , \mathbf{u}_o^\sigma)
$$
provided a nominal control input $\mathbf{u}_o^*$ computed from ($\mathbf{x}_{ref}$) and ($\mathbf{u}_{ref}$) is applied to the non-linear system, and the initial condition of the reference trajectory is consistent with the current state of the non-linear system. That is, when the following two conditions are satisfied.
$$
\mathbf{u}_o = g^{-1}(\mathbf{x}_{ref}, \mathbf{u}_{ref})
$$
$$
\mathbf{x}(0) = \mathbf{x}_{ref}(0)
$$
Here, $g$ represents some non-linear mapping and $\sigma$ represents the highest power of $\mathbf{u}_o$ required to construct the flat input $\mathbf{u}$.
A quadrotor is a differential flat system with $\zeta = [r_x, r_y, r_z, \psi]$~\cite{mellinger}. We use the linear flat model in the MPC problem (as shown in Equation ~\ref{eqn:optimization}), and compute $\mathbf{x}_{ref}$ and $\mathbf{u}_{ref}$, which are used to compute a nominal control input $\mathbf{u}_o^*$ for the quadrotor using an inverse mapping (section III-D).
}}
\subsection{DCAD}\label{sec:DCAD}
Decentralized Collision Avoidance with Dynamics (DCAD)~\cite{DCAD} is a receding horizon planner for generating local, collision-free trajectories for quadrotors. DCAD exploits the differential-flatness property of a quadrotor to feedforward linearize the quadrotor dynamics and uses a linear MPC with ORCA constraints to plan a collision-free trajectory in terms of differentially-flat states. Further, DCAD uses an inverse mapping to account for the non-linear quadrotor dynamics by transforming the flat control inputs into inner loop controls. DCAD accounts for uncertainty in state estimation by assuming Gaussian noise and uses bounding volume expansion to account for the uncertainties. Our method differs from DCAD in posing the ORCA linear constraints as chance constraints and performing a chance-constrained optimization to compute a collision-avoiding input for the quadrotor. Further, in this work we use a flat state space of 7 states, given by $\mathbf{x} = [\mathbf{r}_i, \mathbf{v}_i, \psi]$. The flat control input is given by $\mathbf{u} = [\mathbf{a}_i, \dot{\psi}]$. The feedforward linearized dynamics model is used in our optimization problem (Eqn.~\ref{eqn:optimization}).
\noindent{\bf{Quadrotor Model:}}
The quadrotor state space and the control input are given by
\begin{eqnarray}
{\bf{x}_o} = [x, y, z, \dot{x}, \dot{y}, \dot{z}, \phi, \theta, \psi, p, q, r],\\
{\bf{u}_o} = [T, {\phi}, {\theta}, \dot{\psi}].
\end{eqnarray}
The quadrotor dynamics we consider are the same as in prior literature~\cite{mellinger, ferrin}. The states and control input are similar to~\cite{ferrin}.
The quadrotor dynamics can be represented by the following set of equations:
\begin{eqnarray}
\label{eqn:rfot=v}
\mathbf{\dot{r}} = \mathbf{v},\\
\label{eqn:accel}
m_q\mathbf{a} = -m_q g\mathbf{z}_{W} + T\mathbf{z}_{B},\\
\label{eqn:rdot}
\mathbf{\dot{R} = R \times \boldsymbol{\omega}^T},\\
\label{eqn:tau}
\boldsymbol{\dot{\omega}} = \mathbf{j^{-1}}[-\boldsymbol{\omega} \times \mathbf{j}\boldsymbol{\omega} + \boldsymbol{\tau}].
\end{eqnarray}
We consider the flat output set given by $\boldsymbol{\zeta} = [x,y,z,\psi]$ similar to~\cite{mellinger}.
In SwarmCCO, the flat state ($\mathbf{x}$) and flat input ($\mathbf{u}$) are given by,
$$\mathbf{x} = [r_x, r_y, r_z, v_x, v_y, v_z, \psi],$$
$$\mathbf{u} = [a_x, a_y, a_z, \dot{\psi}].$$
The system dynamics in flat states is linear and hence is used as the agent dynamics in the optimization (MPC) problem (2). This results in faster computation than using the non-linear quadrotor model. Since $\mathbf{x}$ and $\mathbf{u}$ represent the flat state and input in the optimization problem (2), we need to transform its output to the original quadrotor input $\mathbf{u_o}$. The optimization problem (2), a model predictive control (MPC) problem, outputs optimized flat control inputs for $N-1$ time steps, where $N$ is the time horizon of the MPC problem. Considering the mass of the quadrotor as $m_q$, we can compute the quadrotor control input ($\mathbf{u_o}$) from the flat inputs ($\mathbf{u}$) by:
$$T = m_q\sqrt{a_x^2 + a_y^2 + a_z^2}$$
$$\begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix} = Rot(\psi)\begin{bmatrix}
a_x\\ a_y\\ a_z\\
\end{bmatrix}
\frac{m_q}{-T}$$
$$\phi = \sin^{-1}{(-z_2)}$$
$$\theta = \tan^{-1}{(z_1/z_3)}$$
$$r = \dot{\psi}\cos{(\theta)}\cos{(\phi)} - \dot{\theta}\sin{(\phi)}$$
Here, $Rot$ denotes the yaw rotation matrix. Thus, we can transform the flat inputs $\mathbf{u} = [a_x, a_y, a_z, \dot{\psi}]$ to the quadrotor control input $\mathbf{u_o}=[T, \phi, \theta, \dot{\psi}]$. This set of equations transforming $\mathbf{u}$ to $\mathbf{u_o}$ is the inverse mapping.
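The inverse mapping can be sketched in Python using the inverse-trigonometric forms of the angle equations. The sense of the yaw rotation $Rot(\psi)$ is an assumption of this sketch, which covers only $(T, \phi, \theta)$ and assumes nonzero thrust:

```python
import math

def inverse_map(a_flat, psi, m_q=1.0):
    """Map the flat acceleration input to (T, phi, theta) following the
    inverse-mapping equations. The direction convention of Rot(psi) is
    our assumption; requires nonzero commanded acceleration."""
    ax, ay, az = a_flat
    T = m_q * math.sqrt(ax * ax + ay * ay + az * az)
    c, s = math.cos(psi), math.sin(psi)
    # rotate the acceleration into the yaw-aligned frame (assumed sense)
    bx, by, bz = c * ax + s * ay, -s * ax + c * ay, az
    z1, z2, z3 = bx * m_q / -T, by * m_q / -T, bz * m_q / -T
    phi = math.asin(-z2)
    theta = math.atan2(z1, z3)
    return T, phi, theta
```

For a purely vertical commanded acceleration (e.g., $\mathbf{a} = (0, 0, -g)$ with unit mass and zero yaw), the sketch yields $T = m_q g$ and $\phi = \theta = 0$, consistent with level flight.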
\subsection{Chance Constraints}
Chance-constrained optimization is a technique for solving optimization problems {\color{black}with} uncertain variables~\cite{cooperCCP,Prekopa}.
A general formulation for chance-constrained optimization is given {as}
\begin{equation*}
\begin{aligned}
& \underset{}{\text{minimize}} \quad
& & \gamma \\
& \textrm{subject to} \quad
& & {\textnormal{P}(f(x)\le \epsilon)\ge \delta.}\\
\end{aligned}
\end{equation*}
Here, $\gamma$ is the objective function for the optimization and $f(x)$ is the constraint on the random variable $x$. The constraint is said to be satisfied when $f(x)\le \epsilon$. Since $x$ is a random variable, the constraint $f(x)\le \epsilon$ is formulated in terms of the probability of satisfying it. That is, $\textnormal{P}(f(x)\le \epsilon)\ge \delta$ is the chance constraint, and it is said to be satisfied when the probability of satisfying the constraint $f(x)\le \epsilon$ is at least a specified confidence level, $\delta$.
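To make the definition concrete, a chance constraint can be checked empirically by sampling the random variable; the sketch below is illustrative only (the distribution and threshold values are arbitrary), whereas the formulations later in the paper evaluate such constraints analytically:

```python
import random

def chance_constraint_satisfied(f, sample_x, eps, delta, trials=20_000):
    """Empirical check of P(f(x) <= eps) >= delta by sampling x."""
    hits = sum(f(sample_x()) <= eps for _ in range(trials))
    return hits / trials >= delta

random.seed(0)
# x ~ N(0, 1): P(x <= 1) is about 0.84, so the constraint holds at delta = 0.8
ok = chance_constraint_satisfied(lambda x: x, lambda: random.gauss(0, 1),
                                 eps=1.0, delta=0.8)
```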
\section{SwarmCCO: Probabilistic Multi-agent Collision Avoidance}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=1.6in,width=0.90\linewidth]{twoAgentNoise4.png}
\caption{Two-robot scenario}
\label{fig:New_traj}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=1.6in,width=0.90\linewidth]{detORCA5.png}
\caption{Deterministic ORCA}
\label{fig:Orca_traj}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=1.6in,width=0.90\linewidth]{chanceORCA5.png}
\caption{Chance Constrained ORCA}
\label{fig:Avo_traj}
\end{subfigure}
\caption{Deterministic and Probabilistic collision avoidance between two agents. (a) Two circular agents with mean positions $r_i$ and $r_j$ and their respective mean velocities $v_i$ and $v_j$ (indicated by the black arrow) are shown. The red `{\color{black}{+}}' markers indicate position samples from the position's uncertainty distribution{\color{black},} and the {\color{black}{grey arrows}} indicate velocity samples from the velocity's uncertainty distribution. (b) The {\color{black}{\textit{darker}}} red cone represents the Velocity Obstacle (VO) constructed using the mean position and velocity, while the {\color{black}{\textit{lighter}}} or {\color{black}{\textit{translucent}}} red cones represent the VOs constructed from the position and velocity samples from the uncertainty distribution. The {\color{black}{green}} region is the feasible region for the relative velocity computed using ORCA. As can be seen, the {\color{black}green} regions overlap with parts of VOs constructed using the position and velocity samples, which can lead to {\color{black}collisions}. (c) {\color{black}Feasible relative velocity set computed using the chance constraint ORCA method avoids a major portion of the VO samples.}}
\label{fig:sampleScenario}
\vspace{-8pt}
\end{figure*}
In this section, we describe our MPC optimization problem and summarize our Gaussian and non-Gaussian chance constraint formulations for collision avoidance. Fig.~\ref{fig:sampleScenario} {\color{black}highlights a 2D scenario for collision avoidance} between two agents with state uncertainty. The figure shows a distribution of Velocity Obstacle (VO) cones constructed for the given position and velocity distribution. We observe that the collision-free relative velocity set computed using deterministic ORCA overlaps with a portion of this VO distribution. Hence, a velocity chosen from this set can result in a collision. In contrast, {\color{black}our formulation based on} chance constraints results in a feasible relative velocity set that has a higher probability of being collision-free.
\subsection{\color{black}MPC Optimization Setup}
We use a receding horizon planner to generate the collision-free trajectories for each quadrotor agent. The underlying optimization framework is common to both our Gaussian and non-Gaussian SwarmCCO formulations. Our Gaussian and non-Gaussian methods differ in the formulation of the collision avoidance chance constraints, i.e. the constraint $\text{P}\left(\mathbf{m}^T \mathbf{v}^{orca}_{i}-b\ge0\right) \ge \delta$ in the optimization is formulated differently~(\ref{subsec:ccf}). Each pair of agents has such a constraint; that is, if an agent has 5 neighbors, there are 5 chance constraints in Eqn. (\ref{eqn:optimization}). The variables $\mathbf{m}$ and $b$ for each constraint depend on the relative positions and velocities between that pair of agents. For clarity, we consider a single-neighbor case in this section. Each quadrotor agent computes the collision avoidance constraints at each time step and plans a trajectory for the next $N$ time steps. The trajectory is re-planned continuously to account for changes in the environment.
In our optimization formulation (given below), $N$ represents the prediction horizon. The weight matrices $Q$ and $R$ prioritize between the trajectory tracking error and the control input, respectively. $\mathbf{x}_k$ represents the state of the agent at time step $k$, and the matrices $A$ and $B$ are the system matrices for the linearized quadrotor model. Velocity and acceleration are constrained to maximum values, $v_{max}$ and $a_{max}$, respectively, realized using box constraints.
\begin{equation}
\begin{aligned}
& \underset{}{\text{minimize}} \quad
& & \sum_{t=0}^{N} \left(\mathbf{x}_{ref,t} - \mathbf{x}_{t}\right)^T Q\left(\mathbf{x}_{ref,t} - \mathbf{x}_{t}\right) + \mathbf{u}_{t}^T R\mathbf{u}_{t} \\
& \textrm{subject to} \quad
& & \mathbf{x}_0 = \mathbf{x}_t,\\
& & & \mathbf{x}_{k+1} = A\mathbf{x}_{k} + B\mathbf{u}_{k},\\
& & & ||\mathbf{v}_{k}|| \le v_{max}, \ ||\mathbf{a}_{k}|| \le a_{max}\\
& & & \text{P}\left({\color{black}{\mathbf{m}^T {\mathbf{v}^{orca}_{i}-b\ge0}}}\right) \ge \delta,\\
& & & \forall k=0, 1, ..., N-1.
\end{aligned}
\label{eqn:optimization}
\end{equation}
Our MPC-based algorithm plans in terms of acceleration and yaw rate, which constitute the control input (Section~\ref{sec:DCAD}). The constraint $\text{P}\left(\mathbf{m}^T \mathbf{v}^{orca}_{i}-b\ge0\right) \ge \delta$ represents the chance constraint defined on ORCA, i.e. the constraint is said to be satisfied if, given the uncertainty in the state, the probability that the ORCA constraint is satisfied is greater than $\delta$. The output of the MPC is the control input vector for the quadrotor.
\subsection{Collision Avoidance Velocity}
The VO is constructed using the relative position ($\mathbf{r}_i-\mathbf{r}_j$) and velocity ($\mathbf{v}_i-\mathbf{v}_j$) of the agents. From Fig.~\ref{fig:Orca_traj}, we notice that the ORCA plane passes through the origin. Thus, the parameter $b$ in the constraint~(\ref{forca}) is zero in this case. The feasible velocity set for agent $i$ is constructed by translating this plane by the agent's mean velocity plus $0.5$ times (as in~\cite{ORCA}) the change in relative velocity proposed by ORCA. Due to this translation, $b$ can be non-zero, and we use a mean value for $b$ instead of a distribution to apply the formulation below.
\subsection{Chance Constraint Formulation}\label{subsec:ccf}
Since the position and velocity data of the agent and its neighbors are non-Gaussian random variables, the collision avoidance constraints {\color{black}must} consider the uncertainty in the agent's state estimations for safety. As mentioned in Section III-A, we do not make any assumption on the nature of the uncertainty distribution. However, we model uncertainty using two different methods. Our first method approximates the noise distribution as a Gaussian distribution, which works well for certain sensors. In comparison, our second method is more general and works with non-Gaussian uncertainty by fitting a Gaussian Mixture Model to the uncertainty distribution.\\
\textbf{Method I: Gaussian Distribution}\\
In this method, we approximate the position and velocity uncertainties using a multivariate Gaussian distribution. For an agent $i$, its position and velocity variables are approximated as
$\mathbf{r}_i \sim N(\boldsymbol{\mu}_{i,r}, {\Sigma_{i,r}})$, and $\mathbf{v}_i \sim N(\boldsymbol{\mu}_{i,v}, {\Sigma_{i,v}}).$
The deterministic ORCA constraint between two agents $\color{black}{i}$ and $\color{black}{j}$ is given by the following plane (linear) equation{\color{black}:}
\begin{equation}
f^{ORCA_{j}^{i}} (.) = {\color{black}{\mathbf{m}^T\mathbf{v}^{orca}_{i}-b}}.
\label{forca}
\end{equation}
Here, the parameters $\mathbf{m}$ and $b$ are functions of the agent's trajectory. In a stochastic setting, the parameters $\mathbf{m}$ and $b$ are random variables due to their dependence on the agent's position and velocity. Though the uncertainties in position and velocity are assumed to be Gaussian in this case, this need not translate to a Gaussian distribution for $\mathbf{m}$. However, we approximate $\mathbf{m}$'s distribution as Gaussian for the application of our algorithm, with expectation and covariance represented by $\boldsymbol{\mu}_\mathbf{m}$ and $\Sigma_\mathbf{m}$, respectively. Eqn.~(\ref{eqn:chanceORCA}) shows the chance constraint, representing the probability that the ORCA constraint~(\ref{forca}) is satisfied, given the uncertainties in the position and velocity data. This probability is set to be above a predefined confidence level ($\delta$).
\begin{align}\label{eqn:chanceORCA}
\begin{split}
P({\color{black}{\mathbf{m}^T\mathbf{v}^{orca}_{i}-b \ge 0}}) \ge \delta, \ \
{\color{black}{\mathbf{m} \sim N(\boldsymbol{\mu}_\mathbf{m}, \Sigma_\mathbf{m}). }}
\end{split}
\end{align}
From \cite{Prekopa}, we know that if $\mathbf{m}$ follows a Gaussian distribution, the chance constraint can be transformed into a deterministic second-order cone constraint. This is summarized in Lemma \ref{lemma:1}.
\begin{lemma}\label{lemma:1}
For a multivariate random variable $\color{black}{\mathbf{m} \sim N(\boldsymbol{\mu}_\mathbf{m}, \Sigma_\mathbf{m})}$, the chance constraint $\color{black}{P(\mathbf{m}^T\mathbf{x_t} \le b) > \delta}$ can be reformulated as a deterministic constraint on the mean and covariance.
{\small
\begin{multline}
\color{black}{P(\mathbf{m}^T\mathbf{x}_t \le b) > \delta
\iff b - \boldsymbol{\mu}_\mathbf{m}^T \mathbf{x}_t \ge \textnormal{erf}^{-1}(\delta) \left\Vert{{\Sigma}_\mathbf{m}^{\frac{1}{2}}\mathbf{x}_t}\right\Vert_2},
\end{multline}
}
where \textnormal{erf}(x) is the standard error function given by,
$\textnormal{erf}(x) = \frac{1}{\sqrt{2\pi}}\int_{0}^{x}e^{-\tau^2/2}d\tau.$
\end{lemma}
Here, $\delta$ is the confidence level associated with satisfying the constraint $\color{black}{\mathbf{m}^T\mathbf{x_t} \le b}$.
Since our collision avoidance constraints are linear, each chance constraint can be {\color{black}reformulated} to a second order cone constraint{\color{black}, as shown} in Lemma~\ref{lemma:1}. Hence, each collision avoidance chance constraint can be written as
\begin{multline}
{\color{black}{P(\mathbf{m}^T\mathbf{v}_{i}^{orca}-b \ge 0) \ge \delta}} \iff \\
{\color{black}{\boldsymbol{\mu}_\mathbf{m}^T \mathbf{v}_{i}^{orca} - b + \textnormal{erf}^{-1}(\delta) \left\Vert{{\Sigma}_\mathbf{m}^{\frac{1}{2}}\mathbf{v}_i^{orca}}\right\Vert_2 \le 0.}}
\end{multline}
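This equivalence can be sanity-checked numerically. In the sketch below, all parameter values are illustrative; the deterministic second-order cone form is evaluated alongside a Monte-Carlo estimate of the underlying probability, with \texttt{NormalDist().inv\_cdf} playing the role of the quantile function written as $\textnormal{erf}^{-1}$ above:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Illustrative parameters: distribution of m, fixed decision vector x,
# offset b, and confidence level delta.
mu = np.array([1.0, 0.5])
Sigma = np.array([[0.2, 0.05],
                  [0.05, 0.1]])
x = np.array([0.8, -0.3])
b, delta = 2.0, 0.9

# Deterministic second-order-cone form of P(m^T x <= b) >= delta.
L = np.linalg.cholesky(Sigma)   # Sigma = L L^T, so ||L^T x|| = ||Sigma^{1/2} x||
lhs = mu @ x + NormalDist().inv_cdf(delta) * np.linalg.norm(L.T @ x)
det_ok = lhs <= b

# Monte-Carlo estimate of the same chance constraint.
samples = rng.multivariate_normal(mu, Sigma, size=50_000)
emp = np.mean(samples @ x <= b)
```

When the cone constraint holds, the sampled probability of satisfying $\mathbf{m}^T\mathbf{x} \le b$ exceeds $\delta$, as the lemma predicts.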
\textbf{Method II: Gaussian Mixture Model (GMM)}\\
To handle non-Gaussian uncertainty distributions, we present an extension of the Gaussian formulation (Method I). In this case, the probability distribution for the position and velocity is assumed to be non-parametric and non-Gaussian, i.e. the probability distribution is not known. We assume that we have access to $s$ samples of these states, which could come from a black-box simulator or a particle filter. Using these $s$ samples, a distribution for the parameter $\mathbf{m}$ is constructed. In our implementation, we use $s=40$ samples.
A GMM of $n$ Gaussian components is fit to the probability distribution of the parameter $\mathbf{m}$ in Eqn.~\ref{forca} using Expectation-Maximization (EM). Each collision avoidance constraint is split into $n$ second-order constraints, each corresponding to a single Gaussian component. Furthermore, an additional constraint is used that relates the $n$ second-order constraints to the GMM probability distribution. From~\cite{hu2018chance}, we know that if $\mathbf{m}$ follows a GMM distribution, the chance constraint can be transformed into a set of deterministic constraints. This is summarized in Lemma \ref{lemma:2}.
\begin{lemma}\label{lemma:2}
For a non-Gaussian random variable $\mathbf{m}$ and a linear constraint $f(\mathbf{x_t}): \mathbf{m}^T\mathbf{x_t} \le b$, the chance constraint $P(\mathbf{m}^T\mathbf{x_t} \le b) > \delta$ can be reformulated as a set of deterministic constraints on the component means and covariances.
Let the distribution of $\mathbf{m}$ be approximated by $n$ Gaussian components, the $i^{th}$ component having mean $\boldsymbol{\mu}_{\mathbf{m},i}$ and covariance $\Sigma_{\mathbf{m},i}$. The probability that $f(\mathbf{x_t})$ is satisfied while $\mathbf{m}$'s distribution is given by the $i^{th}$ Gaussian component of the GMM is
\begin{equation}
P_{i} = \textnormal{erf}\bigg(\frac{b - \boldsymbol{\mu}_{\mathbf{m},i}^T\mathbf{x}_t}{\sqrt{\mathbf{x_t}^T\Sigma_{\mathbf{m},i} \mathbf{x_t}}}\bigg).
\end{equation}
Then, the probability that $f(\mathbf{x_t})$ is satisfied for the GMM distribution of $\mathbf{m}$ is given by
\begin{equation}
P_{GMM} =\sum_{i=1}^{n} {\color{black}\alpha_iP_{i}}.
\end{equation}
Here, ${\color{black}\alpha_i}$s are the mixing coefficients for the GMM, satisfying $\sum_{i=1}^n {\color{black}\alpha_{i}} = 1$.
\end{lemma}
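Lemma 2 amounts to mixing per-component Gaussian probabilities with the coefficients $\alpha_i$. A small sketch with illustrative two-component parameters:

```python
from statistics import NormalDist

def gmm_constraint_probability(x, b, components):
    """P(m^T x <= b) when m follows a GMM: mix the per-component Gaussian
    probabilities with the mixing coefficients alpha_i (Lemma 2).
    components: list of (alpha, mu, Sigma) with mu a list and Sigma a
    nested list; all parameter values below are illustrative."""
    total = 0.0
    d = len(x)
    for alpha, mu, Sigma in components:
        mean = sum(m * xi for m, xi in zip(mu, x))
        var = sum(x[i] * Sigma[i][j] * x[j] for i in range(d) for j in range(d))
        total += alpha * NormalDist(mean, var ** 0.5).cdf(b)  # component P_i
    return total

comps = [(0.6, [1.0, 0.0], [[0.1, 0.0], [0.0, 0.1]]),
         (0.4, [0.0, 1.0], [[0.2, 0.0], [0.0, 0.2]])]
p = gmm_constraint_probability([1.0, 1.0], 5.0, comps)
```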
Let us assume that the probability of satisfying $\mathbf{m}^T\mathbf{v}_{i}^{orca}-b \ge 0$, while considering only the $i^{th}$ Gaussian component, is given by $\eta_{i}$. We can reformulate the constraint using Lemma~\ref{lemma:2}, which gives Eqn.~\ref{eqn:GMM1}.
The probability of satisfying the linear constraint considering the GMM distribution for $\color{black}{\mathbf{m}}$ is computed by mixing the individual component probabilities using the mixing coefficients ({\color{black}$\alpha_i$}). A probability $\delta$ is chosen as the required confidence level.
Eqn.~(\ref{eqn:GMMCC}) represents the chance constraint that the probability of satisfying {\color{black}Eqn.~(\ref{forca})} is greater than $\delta$. The chance constraint can be reformulated as:
\begin{equation*}
\resizebox{0.5\linewidth}{!}{ $P({\color{black}{\mathbf{m}^T\mathbf{v}_{i}^{orca}-b \ge 0}}) \ge \delta \iff $}
\end{equation*}
\begin{numcases} {}
\resizebox{0.85\linewidth}{!}{ $
\begin{aligned}\label{eqn:GMM1}
{\color{black}{\boldsymbol{\mu}_{\mathbf{m},i}^T {\mathbf{v}_{i}^{orca}} - b + \textnormal{erf}^{-1}(\eta_i){\left\Vert{\Sigma_{\mathbf{m},i}^{1/2} \mathbf{v}_{i}^{orca}}\right\Vert}_2}} \le 0, \ \forall i \in \{1,\dots,n\}
\end{aligned}
$}\\
\resizebox{0.25\linewidth}{!}{$\sum_{i=1}^n {\color{black}{\alpha_i\eta_i > \delta}}$}. \label{eqn:GMMCC}
\end{numcases}
In Eqn.~\ref{eqn:GMMCC}, the values for the mixing coefficients and the confidence are known prior to the optimization. We notice that for a given set of mixing coefficients ($\alpha_i$'s) and confidence ($\delta$), multiple sets of values for the $\eta_i$'s can satisfy Eqn.~\ref{eqn:GMMCC}. The value of $\eta_i$ in turn affects the feasible velocity set. Hence, we plan for the $\eta_i$'s in addition to the control input in problem~(\ref{eqn:optimization}). When the GMM has 3 components, i.e. $n = 3$, we have three additional variables, $\eta_1, \eta_2, \eta_3$, in the optimization problem.
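Since the $\alpha_i$ sum to one, one trivially feasible choice is a uniform $\eta_i = \delta + \varepsilon$; the optimizer instead treats the $\eta_i$ as free variables to enlarge the feasible velocity set. A sketch of this baseline choice (the helper function and its margin are our own, not part of the paper's implementation):

```python
def uniform_eta(alphas, delta, margin=1e-3):
    """Return one feasible set of per-component confidences eta_i for the
    mixing constraint sum_i alpha_i * eta_i > delta: a uniform value
    eta = delta + margin works because the alphas sum to one."""
    assert abs(sum(alphas) - 1.0) < 1e-9
    eta = min(delta + margin, 1.0)
    return [eta] * len(alphas)

# three GMM components with illustrative mixing coefficients
etas = uniform_eta([0.5, 0.3, 0.2], delta=0.9)
```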
Now the optimization problem~({\color{black}Equation}~\ref{eqn:optimization}) simultaneously computes values for acceleration, $\dot{\psi}$ and $\eta_i${\color{black},} such that the collision avoidance chance constraints (Eqn.~\ref{eqn:GMMCC}) are satisfied.
The computed control input from the optimization problem~(\ref{eqn:optimization}) is transformed into an inner-loop control input for the quadrotor using an inverse map{\color{black},} similar to~\cite{DCAD}.
\section{Results}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=4.0cm,width=1\linewidth]{detORCAtraj.png}
\caption{DCAD(Deterministic)}
\label{fig:proposedMethod_Vel}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=4.0cm,width=1\linewidth]{GaussColAv.png}
\caption{Gaussian Noise Approximation}
\label{fig:ORCA_Vel}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=4.0cm,width=1\linewidth]{NonGussColAv.png}
\caption{Non-Gaussian Noise Approximation}
\label{fig:AVO_Vel}
\end{subfigure}
\caption{The figure illustrates a scenario with 4 agents exchanging positions with their diagonally opposite agents. The collision avoidance is performed using deterministic DCAD and Gaussian and non-Gaussian SwarmCCO. DCAD results in more collisions because uncertainty is not explicitly considered. We observe that some agents travel a longer path when using the Gaussian formulation than when using the non-Gaussian formulation (Section~\ref{GvsNG:PL}).
}
\label{fig:velvar}
\end{figure*}
In this section, we describe the implementation of {\color{black}our algorithm} and our simulation setup. Further, we summarize our evaluations and {\color{black}highlight the benefits of our approach}.
\subsection{Experimental Setup}
Our method is implemented on an Intel Xeon W-2123 @ 3.6 GHz with 32 GB RAM and a GeForce GTX 1080 GPU. Our simulations are built using the PX4 Software In The Loop (SITL) framework, ROS Kinetic, and Gazebo 7.14.0. We solve the MPC optimization using the IPOPT library with a planning horizon of $N = 8$ steps and a time step of $\delta t = 100$ms. We consider a non-Gaussian distribution for the position and velocity data, from which the input sensor readings for both the Gaussian and non-Gaussian SwarmCCO methods are generated. Gazebo state information represents the ground-truth position and velocity data, to which we add non-zero-mean, non-Gaussian noise to simulate state uncertainties. The added noise is generated through a GMM of 3 Gaussian components. We use two different GMMs to simulate noise for the position readings: $GMM_1: \mu_{GMM1} = [0.15, 0.08, -0.05]$, $\Sigma_{GMM1} = diag([0.06, 0.7, 0.3])$, and $GMM_2: \mu_{GMM2} = [0.2, 0.0, -0.2]$, $\Sigma_{GMM2} = diag([1.0, 0.3, 1.0])$. The velocity readings are simulated using a noise distribution with half the mean and covariance values of $GMM_1$ and $GMM_2$. Further, for our evaluation we consider two confidence levels, $\delta_1 = 0.75$ and $\delta_2 = 0.90$.
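The noise generation can be sketched as follows. The reading of $GMM_1$ here is an assumption: we treat the three entries of $\mu_{GMM1}$ as per-component means and the diagonal entries of $\Sigma_{GMM1}$ as per-component variances, with equal mixing weights (the paper does not state the weights):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_gmm_noise(n, means, variances, weights):
    """Draw n scalar noise samples from a Gaussian mixture: pick a component
    per sample, then draw from that component's Gaussian."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.take(means, comp), np.sqrt(np.take(variances, comp)))

# GMM_1 parameters from the text, read as per-component means/variances
noise = sample_gmm_noise(10_000, [0.15, 0.08, -0.05],
                         [0.06, 0.7, 0.3], [1 / 3, 1 / 3, 1 / 3])
```

Under this reading the mixture has a non-zero mean, so the simulated sensor readings are biased as well as noisy.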
The RVO-3D library is utilized to compute the ORCA collision avoidance constraints. We consider a sensing region of 8m for {\color{black}the} ORCA plane computation. {\color{black}{The agent's physical radius is assumed to be $0.25m$ while the agent's radius for the ORCA computation is set as $0.5m$.}} In our evaluations, two agents are considered to be in collision if their positions are less than $0.5m$ apart. Further, we show results for {\color{black}the} non-Gaussian method with 2 {\color{black}components} ($n=2$) GMM and 3 {\color{black}components} ($n=3$) GMM.
\subsection{Generated Trajectories}
We evaluate our method in simulation with four quadrotors exchanging positions with the antipodal agents (circular scenario). Fig.~{{\ref{fig:velvar}}} shows the resulting trajectories for this scenario using deterministic DCAD~\cite{DCAD} and Gaussian and non-Gaussian SwarmCCO. We observe that in deterministic DCAD, the agents do not handle noise, and hence their trajectories often result in collisions. In Fig.~\ref{fig:velvar}, the DCAD trajectories are such that the agents graze past each other. In contrast, the trajectories generated by the SwarmCCO methods are safer.
\subsection{Collision Avoidance}
\begin{table*}
\caption{Number of episodes, out of a total of 100, in which one or more quadrotors collided. The DCAD algorithm is deterministic and does not consider the uncertainty in the state. We observe that our Gaussian and non-Gaussian formulations provide improved safety.}\label{tab:Table 2}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Noise} & \multirow{3}{*}{Confidence Level} & \multirow{3}{*}{Method} & \multicolumn{5}{c|}{Number of collisions}\\
\cline{4-8}
& & & \multicolumn{5}{c|}{No of Agents} \\
\cline{4-8}
& & & 2 & 4 & 6 & 8 & 10 \\
\hline
\multirow{8}{*}{$\Sigma_1$} & \multirow{4}{*}{$\delta=0.75$}
& DCAD (deterministic) & 27 & 25 & 35 & 40 & 44 \\
& & Gaussian SwarmCCO & 0 & 4 & 4 & 7 & 15 \\
& & Non-Gaussian SwarmCCO (n=2) & 0 & 2 & 5 & 4 & 12\\
& & Non-Gaussian SwarmCCO (n=3) & 0 & 0 & 2 & 5 & 13\\
\cline{2-8}
& \multirow{4}{*}{$\delta=0.90$}
& DCAD (deterministic) & 27 & 25 & 35 & 40 & 44 \\
& & Gaussian SwarmCCO & 0 & 3 & 2 & 7 & 13 \\
& & Non-Gaussian SwarmCCO (n=2) & 0 & 1 & 3 & 4 & 12 \\
& & Non-Gaussian SwarmCCO (n=3) & 0 & 0 & 1 & 2 & 10 \\
\hline
\multirow{8}{*}{$\Sigma_2$} & \multirow{4}{*}{$\delta=0.75$}
& DCAD (deterministic) & 56 & 26 & 32 & 51 & 58 \\
& & Gaussian SwarmCCO & 0 & 2 & 3 & 11 & 18 \\
& & Non-Gaussian SwarmCCO (n=2) & 0 & 1 & 3 & 5 & 9 \\
& & Non-Gaussian SwarmCCO (n=3) & 0 & 0 & 0 & 3 & 10 \\
\cline{2-8}
& \multirow{4}{*}{$\delta=0.90$}
& DCAD (deterministic) & 56 & 26 & 32 & 51 & 58 \\
& & Gaussian SwarmCCO & 0 & 2 & 1 & 4 & 6 \\
& & Non-Gaussian SwarmCCO (n=2) & 0 & 0 & 1 & 3 & 4 \\
& & Non-Gaussian SwarmCCO (n=3) & 0 & 0 & 0 & 2 & 4 \\
\hline
\end{tabular}
\end{table*}
We evaluate our method in a circular scenario. Table~\ref{tab:Table 2} summarizes the number of trials with observed collisions out of a total of 100 trials. We observe that the performance of the deterministic method degrades (in terms of the number of collisions) with added noise.
{\color{black}{ In contrast, we}} observe good performance with both the Gaussian and non-Gaussian SwarmCCO. {\color{black}{As expected, the number of collisions reduces with}} an increase in confidence level ($\delta$).
\subsection{Gaussian vs. Non-Gaussian:}
In this subsection we compare the performance of our Gaussian and non-Gaussian formulations for SwarmCCO. We consider the circular scenario where the agents move to their antipodal positions.
\subsubsection{Path Length}\label{GvsNG:PL}
\begin{table*}
\caption{Average path length traveled by the agents while exchanging positions with the antipodal agents. The reference straight-line path to the goal is $40$m long. The Gaussian method is relatively conservative, resulting in longer path lengths on average compared to the non-Gaussian method. We observe this behavior particularly in scenarios with 8 and 10 agents.
}\label{tab:Table 3}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Noise} & \multirow{3}{*}{Confidence Level} & \multirow{3}{*}{Method} & \multicolumn{5}{c|}{Path Length}\\
\cline{4-8}
& & & \multicolumn{5}{c|}{No of Agents} \\
\cline{4-8}
& & & 2 & 4 & 6 & 8 & 10 \\
\hline
\multirow{6}{*}{$\Sigma_1$} & \multirow{3}{*}{$\delta=0.75$}
& Gaussian & 41.12 & 42.75 & 44.04 & 44.88 & 47.92 \\
& & Non-Gaussian (n=2) & 41.06 & 41.95 & 43.03 & 44.39 & 46.66 \\
& & Non-Gaussian (n=3) & 41.06 & 42.09 & 43.12 & 44.59 & 46.52 \\
\cline{2-8}
& \multirow{3}{*}{$\delta=0.90$}
& Gaussian & 41.10 & 43.14 & 45.05 & 46.64 & 48.87 \\
& & Non-Gaussian (n=2) & 41.07 & 42.03 & 43.31 & 44.42 & 46.53 \\
& & Non-Gaussian (n=3) & 41.06 & 42.12 & 43.35 & 44.57 & 46.62 \\
\hline
\multirow{6}{*}{$\Sigma_2$} & \multirow{3}{*}{$\delta=0.75$}
& Gaussian & 41.21 & 43.06 & 44.51 & 46.38 & 49.61 \\
& & Non-Gaussian (n=2) & 41.21 & 42.67 & 44.09 & 45.80 & 47.87 \\
& & Non-Gaussian (n=3) & 41.22 & 42.47 & 44.14 & 45.93 & 48.15 \\
\cline{2-8}
& \multirow{3}{*}{$\delta=0.90$}
& Gaussian & 41.23 & 43.69 & 45.37 & 47.90 & 50.41 \\
& & Non-Gaussian (n=2) & 41.21 & 42.47 & 44.13 & 45.68 & 48.09 \\
& & Non-Gaussian (n=3) & 41.23 & 42.70 & 44.62 & 46.13 & 48.25 \\
\hline
\end{tabular}
\end{table*}
{\small
\begin{table*}
\caption{Mean time required by the agents to reach the goal. The mean time required is approximately the same for all the methods due to the trajectory tracking MPC used in our formulation.}\label{tab:Table 4}
\centering
\renewcommand{\arraystretch}{1.3}
\resizebox{18cm}{!}{%
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Method} & \multicolumn{8}{c|}{$\Sigma_1$} & \multicolumn{8}{c|}{$\Sigma_2$}\\
\cline{2-17}
& \multicolumn{4}{c|}{$\delta=0.75$} & \multicolumn{4}{c|}{$\delta=0.90$} & \multicolumn{4}{c|}{$\delta=0.75$} & \multicolumn{4}{c|}{$\delta=0.90$} \\
\cline{2-17}
& 2 & 4 & 6 & 8 & 2 & 4 & 6 & 8 & 2 & 4 & 6 & 8 & 2 & 4 & 6 & 8\\
\hline
Gaussian
& 31.41 & 31.41 & 31.41 & 31.42 & 31.41 & 31.42 & 31.41 & 31.41 & 31.41 & 31.41 & 31.42 & 31.43 & 31.41 & 31.41 & 31.41 & 31.42\\
Non-Gaussian (n=2)
& 31.41 & 31.41 & 31.41 & 31.42 & 31.41 & 31.42 & 31.41 & 31.41 & 31.41 & 31.41 & 31.42 & 31.41 & 31.41 & 31.41 & 31.41 & 31.41\\Non-Gaussian (n=3)
& 31.41 & 31.41 & 31.41 & 31.42 & 31.41 & 31.42 & 31.41 & 31.41 & 31.41 & 31.41 & 31.42 & 31.41 & 31.41 & 31.41 & 31.41 & 31.42\\
\hline
\end{tabular}
}
\end{table*}
}
For each agent, the reference path to its goal is a straight-line path of 40m length directed to the diametrically opposite position. In Table~\ref{tab:Table 3}, we tabulate the mean path length for the agents as they reach their goal while avoiding collisions. To compute this mean, we utilize only the collision-free trials out of the 100 trials. We observe that the Gaussian method is relatively more conservative than the non-Gaussian method, resulting in a longer path length in most scenarios.
\subsubsection{Time to Goal}
We observe from Table~\ref{tab:Table 4} that, on average, the agents in all the methods reach their goal in the same time. This is due to the trajectory tracking MPC used by the agents, which modifies the agent velocity such that the agents reach their goal in approximately the same time duration ($\sim30$s).
\subsubsection{Inter-agent Distance}
\begin{figure*}
\centering
\begin{subfigure}[b]{1\textwidth}
{\includegraphics[height=3.8cm,width=.32\textwidth]{Gauss_2.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_2_2.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_3_2.png}}\par
\caption{Scenario with four agents.}
\hfill
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
{\includegraphics[height=3.8cm,width=.32\textwidth]{Gauss_6.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_2_6.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_3_6.png}}\par
\caption{Scenario with six agents.}
\hfill
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
{\includegraphics[height=3.8cm,width=.32\textwidth]{Gauss_8.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_2_8.png}}\hfill
{\includegraphics[height=3.8cm,width=.32\textwidth]{NonGauss_3_8.png}}
\caption{Scenario with eight agents.}
\hfill
\end{subfigure}
\caption{Histogram of the least inter-agent distance in the circular scenario with 4, 6 and 8 agents. The agents have a radius of $0.25m$, and the ORCA planes are constructed with an augmented agent radius of $0.5m$. Hence, an inter-agent distance of less than $1m$ is a collision according to the ORCA constraint, though the agents do not actually collide. The histogram is constructed over 100 trials, and the trials with a least inter-agent distance greater than $1.5m$ are not included in the histogram. The safe inter-agent distance of $1m$ is denoted by the dotted orange line. We observe that the non-Gaussian method results in fewer trials with a least inter-agent distance below $1m$, especially the non-Gaussian formulation with 3 components.
}
\label{fig:ID}
\end{figure*}
In the ORCA computation, the agent radius is augmented to $0.5m$, in contrast to the original agent radius of $0.25m$, to provide a safe distance around the agent. Thus, the safe inter-agent distance is $1m$. We compare the Gaussian and non-Gaussian formulations for the number of trials (out of 100) in which this safety threshold distance was compromised. We observe that the non-Gaussian method performs better in this case; the results are illustrated through a histogram in Fig.~\ref{fig:ID}. We observe that with an increase in the number of agents in the environment, the inter-agent distance dips below $1m$ in multiple trials, but the non-Gaussian method with 3 Gaussian components performs better, resulting in a lower number of such trials.
\subsection{Scalability}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{scalabilityChanceIROS.png}
\caption{Control input computation time (ms) for one agent in the presence of 2 to 20 neighboring agents.}
\label{fig:scalability}
\end{figure}
Figure~\ref{fig:scalability} illustrates the computation time (in milliseconds) of our algorithm for one agent with $2$ to $20$ neighboring agents in the environment. From our previous work~\cite{DCAD} and from our experiments, we observe that considering the closest 10 obstacles provides good performance in most cases. We observe that, on average, our Gaussian method requires $\sim5$ms to compute a collision-avoiding input, while our non-Gaussian method requires $\sim7$ms in the presence of $4$ neighbors.
\subsection{Comparison with Bounding Volume Expansion}
In Table~\ref{tab:vsKalmanPLR2}, we compare the non-Gaussian formulation with a conservative method based on bounding volume expansion (DCAD~\cite{DCAD}). We observed that the bounding volume formulation frequently returned infeasible solutions owing to its conservative approximation; this was observed for the $8$ and $10$ agent cases. 100 trial runs in the circular scenario were used to tabulate these results. Thus, a conservative method may not be practical in all scenarios.
\begin{table*}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multicolumn{5}{c|}{Path Length} & \multicolumn{5}{c|}{No. of trials with collision}\\
\cline{2-11}
& 2 & 4 & 6 & 8 & 10 & 2 & 4 & 6 & 8 & 10\\
\hline
DCAD with Kalman filter & 41.39 & 45.47 & 61.83 & - & - & 0 & 0 & 4 & - & -\\
non-Gaussian SwarmCCO (n=2) & 41.24 & 45.34 & 47.29 & 51.30 & 57.90 & 0 & 0 & 0 & 6 & 11 \\
\hline
\end{tabular}
\caption{Comparison of average path length and collision probability between DCAD (Kalman filter) and non-Gaussian SwarmCCO.}
\label{tab:vsKalmanPLR2}
\end{table*}
\section{Conclusion, Limitation, and Future Work}
In this paper, we presented a probabilistic method for decentralized collision avoidance among quadrotors in a swarm. Our method uses a flatness-based linear MPC to handle quadrotor dynamics and accounts for the state uncertainties using a chance constraint formulation. We presented two approaches to model the chance {constraints}; the first assumes a Gaussian distribution for the state, while the second approach is more general and can handle non-Gaussian noise using a GMM. Both the Gaussian and non-Gaussian methods {result in fewer collisions as compared to the deterministic algorithms}, but the Gaussian method was found to be more conservative, leading to longer path lengths for the agents. Further, we {observed} that the non-Gaussian method with 3 Gaussian components {demonstrates better performance in terms of satisfying the ORCA constraints}, resulting in {{fewer}} trials with {an} inter-agent safety distance lower than $1m$ compared to the Gaussian formulation.
On average, our Gaussian method requires ${\sim}5$\,ms to compute a collision-avoiding input, while our non-Gaussian method requires ${\sim}9$\,ms in scenarios with $4$ agents.
Our method has a few limitations. We estimate the distribution of $\mathbf{m}$ using position and velocity samples from a black-box simulator; hence, the distribution may not be accurate. The non-Gaussian method is computationally expensive, which affects the rapid re-planning of trajectories. In addition, we do not consider the ego-motion noise, i.e., the noise in implementing the control input. Moreover, our optimization's cost function uses mean values and does not account for the uncertainties in the state.
As part of our future work, we plan to develop faster methods to evaluate the chance constraint for the non-Gaussian, non-parametric case. Additionally, we plan to evaluate our algorithm on physical quadrotors.
\section*{APPENDIX}
\end{document}
\section{Introduction}
The study of quantum gases trapped and controlled by optical potentials has expanded rapidly in recent years \cite{bloch2008,lewenstein2012}, as they provide a clean and versatile way to realise and observe many-body quantum dynamics, enabling quantum simulation of models from condensed matter and particle physics, and beyond. In parallel, studies of quantum light, such as cavity quantum-electrodynamics \cite{haroche2006}, have yielded fascinating results, including controlled state preparation and quantum non-demolition measurement. Uniting these fields \cite{mekhov2012, ritsch2013} broadens both, and goes beyond the cases when either the light or matter are treated classically. Experimental \cite{baumann2010, wolke2012, schmidt2014, landig2015, klinder2015, landig2016} and theoretical works in this regime have revealed many interesting phenomena, such as the preparation of atomic states and dynamics \cite{mekhov2009b, chen2009a, mekhov2011, pedersen2014, lee2014, kollath2015, elliott2015, mazzucchi2016}, non-destructive measurement \cite{javanainen2003, mekhov2007, kozlowski2015, elliott2015b, caballero2015a}, many-body light-matter entanglement \cite{elliott2015b}, self-organisation, and other new quantum phases \cite{larson2008, maschler2008, chen2009b, gopalakrishnan2009, fernandez2010, strack2011, piazza2013, habibian2013, padhi2014, bakhtiari2015, caballero2015, ostermann2015}.
In the aforementioned works, these effects occur due to either the collective behavior arising from the cavity-mediated interactions, or the suppression of atomic dynamics by light measurement backaction. We go beyond this, and for the first time study the union of these mechanisms. In doing so, we show that their interplay enables a selective engineering of the cavity-mediated processes, which may then be used to orchestrate dynamics for quantum simulation purposes.
Specifically, we consider a system of ultracold (bosonic) atoms trapped in an optical lattice, probed by light. The introduction of optical cavities [\figref{figsetup}] enhances the light scattering from the atoms, and these cavities, once populated, can drive atomic dynamics. The light fields inside the cavities are dynamical quantum fields, and hence form a quantum potential for the atoms. By engineering the light modefunctions, atomic dynamics beyond the standard Bose-Hubbard model can then be realised.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure1.pdf}
\caption{Ultracold atoms trapped in an optical lattice scatter incident light into optical cavities. The cavity-mediated light-matter interactions induce correlated atomic dynamics that are tunable through the optical geometry. Detection of the leaked photons enables a selective suppression of these processes, through the measurement backaction effect. This allows the atomic dynamics to be engineered for quantum simulation purposes.}
\label{figsetup}
\end{figure}
In this article, we begin by investigating the form of these dynamics, and provide a characterisation of the constituent terms. We show that this classification reveals processes that include perfectly correlated tunnelling, effective pair creation and annihilation, long-range tunnelling, superexchange, and independently tunable long- and short-range density-density interactions. We illustrate how these different effects may be controlled and tuned by the optical geometry.
Following this, we introduce to these extended dynamics the backaction effect that arises from measurement of the light leaking from the cavity. We demonstrate how this enables further control of the processes, by imparting additional structure to the lattice, allowing desired effects to be highlighted (by suppression of others) through quantum Zeno dynamics \cite{facchi2008}. We describe how this united formalism provides a framework to enhance quantum simulations through the incorporation of correlated and long-range processes that are not accessible in other systems with finite-range interactions, and through the introduction of building-block components, which include reservoir models and dynamical global gauge fields.
\section{Cavity-Mediated Dynamics}
\subsection{The Model}
\label{secmodel}
In this article we study an extended form of the Bose-Hubbard model \cite{mekhov2012} in which interactions with an additional set of modes possessing long-range spatial extent over the lattice are included. Physically, this can correspond to the scenario where the lattice is a (classical) optical trap containing bosonic atoms, embedded within an optical cavity, with light from an external laser scattered by the atoms into the cavity modes. A key feature of this model is that both the light and the atoms are dynamical quantum fields. It forms the cornerstone of many of the aforementioned works in the field of fully-quantum many-body light-matter interactions, and in its general form describes a wide range of possible optical geometries. We outline the main steps in the derivation of the effective atomic Hamiltonian; more detailed treatments may be found in, e.g.~\cite{maschler2008, mekhov2012, ritsch2013, caballero2015, caballero2015a}.
The full Hamiltonian can be written $\mathcal{H}=H_L+H_M+H_{LM}$, where (in natural units)
\begin{subequations}
\begin{equation}
H_L=\sum_m \omega_m a^\dagger_ma_m
\end{equation}
and
\begin{equation}
H_M=\sum_{ij}J_{ij}^Tb^\dagger_ib_j+\frac{U}{2}\sum_{i}b^\dagger_ib^\dagger_ib_ib_i
\end{equation}
\end{subequations}
are the bare Hamiltonians describing the light and matter respectively (the matter Hamiltonian being the standard Bose-Hubbard Hamiltonian for atoms in a lattice when $J^T_{ij}$ is restricted to nearest-neighbour terms only). Here, $a^\dagger_m$ creates photons in light mode $m$ with frequency $\omega_m$ and modefunction $u_m(\bm{r})$, while $b_i^\dagger$ creates bosons at lattice site $i$ with Wannier function $w(\bm{r}-\bm{r}_i)$. On-site interactions between atoms are parameterised by $U$, and $J_{ij}^T$ are the (classical) tunnelling rates between sites due to the classical potential.
The final term $H_{LM}$ describes the fully quantum light-matter interactions between the light and atomic modes. It follows from a many-atom generalisation of the Jaynes-Cummings Hamiltonian, with the excited atomic state adiabatically eliminated \cite{mekhov2012}. To obtain this, consider first the single-particle Hamiltonian for an atom interacting with many light modes: the interaction part of the Hamiltonian can be written
\begin{equation}
H_{\mathrm{LM}}^{\mathrm{(SP)}}=\sum_m g_ma_mu_m(\bm{r})\sigma^++h.c.,
\end{equation}
where $\sigma^+$ raises the atom to its excited state, and $g_m$ is the light-matter coupling constant for mode $m$. To perform the adiabatic elimination, we assume that the detuning is sufficiently large that the excited state population is negligible, and from the Heisenberg equation $\dot{\sigma}^-=i[H^{\mathrm{(SP)}},\sigma^-]$ set the time dependence of this operator to vanish in a frame rotating at a reference frequency $\omega_p$ (e.g.~that of an external pump laser), leading to $\sigma^-=(1/\Delta_a)\sum_mg_mu_m(\bm{r})a_m$, where $\Delta_a=\omega_p-\omega_a$ is the detuning of $\omega_p$ from the atomic transition frequency $\omega_a$. Inserting this into the single-particle Hamiltonian, we have
\begin{equation}
H_{\mathrm{LM}}^{\mathrm{(SP)}}=\sum_{mn}\frac{g_mg_n}{\Delta_a}u_m^*(\bm{r})u_n(\bm{r})a^\dagger_ma_n.
\end{equation}
Finally, to obtain the many-body form of the Hamiltonian, we express the single-particle state in terms of the localised atomic basis states $\psi(\bm{r})=\sum_iw(\bm{r}-\bm{r}_i)b_i$, resulting in \cite{mekhov2012}
\begin{equation}
\label{eqHlm}
H_{LM}=\sum_{mn}\frac{g_mg_n}{\Delta_a}a^\dagger_ma_n\sum_{ij}J_{ij}^{mn}b_i^\dagger b_j.
\end{equation}
The interactions are parameterised by
\begin{equation}
J_{ij}^{mn}=\int d\bm{r} w(\bm{r}-\bm{r}_i)u^*_m(\bm{r})u_n(\bm{r})w(\bm{r}-\bm{r}_j),
\end{equation}
describing the overlap between the atomic Wannier functions and light modefunctions. These coefficients thus encompass the dependence of the dynamics on the particular optical setup used in the system. We note also that due to the complex nature of light modefunctions, these coefficients may also be complex. Taking the Wannier functions to be real-valued, these coefficients satisfy the properties $J_{ij}^{mn}=J_{ji}^{mn}={J_{ij}^{nm}}^*$.
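As a rough numerical illustration (a sketch, not the paper's calculation), these overlap integrals can be evaluated on a 1D grid, approximating the Wannier functions by normalised Gaussians and taking standing-wave pump and cavity modefunctions whose projected wavevector along the lattice is $\pi/(2d)$ (e.g.\ wavenumber $\pi/d$ at angles $\pm60^\circ$, with a relative quarter-wavelength shift). The Gaussian width and the geometry are assumptions chosen only to exhibit the on-site versus inter-site behaviour discussed later in the text.

```python
import numpy as np

d = 1.0                        # lattice spacing
sigma = 0.2 * d                # width of the Gaussian Wannier approximation (assumed)
x = np.linspace(-4 * d, 4 * d, 8001)
dx = x[1] - x[0]

def wannier(xi):
    """Gaussian stand-in for the Wannier function of the site centred at xi."""
    w = np.exp(-(x - xi) ** 2 / (2 * sigma ** 2))
    return w / np.sqrt(np.sum(w ** 2) * dx)    # L2-normalised on the grid

# Projected wavevector along the lattice: pi/(2d), with the cavity mode
# shifted by a quarter wavelength relative to the pump.
kx = np.pi / (2 * d)
u_pump = np.cos(kx * x)
u_cav = np.sin(kx * x)

def J(i, j):
    """Grid approximation of J_ij^{c0} = int w_i(x) u_c*(x) u_0(x) w_j(x) dx."""
    return np.sum(wannier(i * d) * np.conj(u_cav) * u_pump * wannier(j * d)) * dx

# The product u_c^* u_0 ~ sin(pi x / d) is odd about every site centre, so the
# on-site coefficient cancels while the nearest-neighbour coefficient survives.
print(abs(J(0, 0)), abs(J(0, 1)))
```

The same routine with both modes peaked at the site centres instead yields on-site coefficients near unit magnitude, matching the on-site regime described above.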
Let us now consider one of the light modes, which we label $0$, to be a pump mode sourced from an external laser. We describe this mode by a coherent state of amplitude $\alpha_0$, with an occupation much larger than the cavity modes, such that we can replace $a_0\to\alpha_0$. Assuming that the light scattering occurs on timescales much faster than the atomic dynamics, we can obtain the time dependence of the cavity modes (in the frame rotating at the pump frequency) from the Heisenberg equation:
\begin{align}
\dot{a}_{m\neq0}&=i[\mathcal{H},a_{m\neq0}]\nonumber \\ &=-i\Delta_ma_m-i\sum_{n}\Omega_{mn}\mathcal{J}_{mn}-\kappa_ma_m.
\end{align}
In this expression, the cavity detuning $\Delta_m=\omega_p-\omega_m$, and we have phenomenologically introduced a cavity decay for photon loss with rate $\kappa_m$, and defined for shorthand $\Omega_{mn}=g_mg_n/\Delta_a$ and
\begin{equation}
\mathcal{J}_{mn}=\sum_{ij}J_{ij}^{mn}b_i^\dagger b_j.
\end{equation}
These light-matter coupling operators $\mathcal{J}_{mn}$ inherit certain properties from the coupling coefficients $J_{ij}^{mn}$: $\mathcal{J}_{mn}=\mathcal{J}_{nm}^\dagger=\mathcal{J}_{nm}^*$.
We now further assume that there is only a small dispersive frequency shift of the cavity modes ($\Omega_{mm}\mathcal{J}_{mm}\ll\Delta_m$ for $m\neq0$), and use that $\Omega_{m0}\alpha_0\mathcal{J}_{m0}\gg\sum_{n\neq\{0,m\}}\Omega_{mn}\mathcal{J}_{mn}$ due to the large pump amplitude compared to the cavity occupations. The steady states of the cavity modes are hence
\begin{equation}
\label{eqsteady}
a_{m\neq0}=\frac{\Omega_{m0}\alpha_0\mathcal{J}_{m0}}{i\kappa_m+\Delta_m}\equiv C_m\mathcal{J}_{m0}.
\end{equation}
When these steady states are reached on timescales much faster than the atomic dynamics (i.e.~$\kappa_m\gg J^T_{ij}$), we can perform a further adiabatic elimination, to remove the cavity modes from the Hamiltonian. Replacing the cavity mode operators with their steady state values in $\mathcal{H}$ results in the effective atomic Hamiltonian \cite{maschler2008,caballero2015}
\begin{align}
\label{eqheff}
\mathcal{H}=&H_M+\Omega_{00}|\alpha_0|^2\mathcal{J}_{00} \nonumber \\
&+\displaystyle\sum_{m\neq0}\frac{\Delta_m|C_m|^2}{2}(\mathcal{J}_{m0}^\dagger\mathcal{J}_{m0}+\mathcal{J}_{m0}\mathcal{J}_{m0}^\dagger).
\end{align}
Note that the terms in Eq.~\eqref{eqHlm} containing products of two cavity modes are neglected, as they are much smaller than the pump-pump and pump-cavity product terms. Note also that the symmetric splitting of the products of $\mathcal{J}_{m0}$ and its conjugate into two parts of opposite order are necessary to preserve the form of the Heisenberg equations for the atomic modes $b_j$ before and after the elimination, as the ordering freedom of $a_m$ and $b_j$ is lost after the steady state replacement (see Appendix) \cite{maschler2008}. This regime in which the cavity steady state is reached much faster than the timescales of atomic dynamics is readily accessible in experiments, and indeed has already been demonstrated \cite{landig2016}; typical tunnelling rates $J^T_{ij}$ between nearest-neighbour sites are $\mathcal{O}(10^3$Hz) \cite{jaksch1998}, while cavities with decay rates $\kappa_m$ $\mathcal{O}(10^6$Hz) are found in experimental use \cite{landig2016}.
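A rough order-of-magnitude check of the resulting interaction scale $\Delta_m|C_m|^2$ in Eq.~\eqref{eqheff} can be made by plugging in representative numbers. The cavity decay rate and tunnelling scale follow the orders quoted above; the combined pump-cavity coupling $\Omega_{m0}\alpha_0$ is an assumed illustrative value, not one taken from the cited experiments.

```python
# Illustrative numbers: the orders of kappa and J^T follow the text,
# while Omega_m0 * alpha_0 is an assumption.
kappa = 1.0e6          # cavity decay rate kappa_m (Hz)
Delta = -2.0e6         # pump-cavity detuning Delta_m (Hz); its sign sets whether
                       # the induced interaction is attractive or repulsive
Omega_alpha = 5.0e4    # combined coupling Omega_m0 * alpha_0 (Hz), assumed

C = Omega_alpha / (1j * kappa + Delta)   # steady-state prefactor C_m
U_L = Delta * abs(C) ** 2                # induced interaction strength Delta_m |C_m|^2
J_T = 1.0e3                              # classical tunnelling scale (Hz)
print(U_L, abs(U_L) / J_T)
```

With these numbers the induced strength is of order $10^3$\,Hz, i.e.\ comparable to the classical tunnelling scale, and it grows quadratically with the pump amplitude.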
Beyond the standard Bose-Hubbard Hamiltonian, the first additional term in this effective Hamiltonian is the pump-pump term, due to the interaction between the atoms and the (classical) light from the pump laser. Such terms can be derived straightforwardly from semi-classical treatments of light-matter interactions, and are essentially of the form of a Raman transition, giving rise to light-induced tunnelling and effective chemical potentials. The inclusion of additional pumps will manifest similar such terms, including terms mixing different pump modes, which have previously been used to introduce complex phases to atomic tunnelling (e.g.~\cite{jaksch2003}). The second set of additional terms, the cavity-pump terms, arise due to interaction of the atoms with the quantised cavity light fields, and do not appear in semi-classical treatments, as they are inherently defined by the backaction of the atomic state on the cavity population. These terms, due to the long-range spatial extent of the cavity modes, induce effective long-range correlations and interactions between lattice sites. We shall primarily focus on the use of these two-body dynamical processes.
The choice of light modefunctions leads to different dynamics, and these are tunable by a variety of methods, including changing the wavelength and angle of the lasers, and the angle and size of the cavities. We highlight that multimode, or even multiple, cavities allow flexibility beyond a single cavity mode. We now proceed to characterise the induced processes, focussing on the two dominant contributions: the on-site and the neighbouring inter-site terms. In contrast to previous works investigating these additional terms, which treat them as a whole and consider them generically as four-point correlations \cite{caballero2015}, we introduce a characterisation that becomes meaningful and important when we later introduce a method to distinguish between and selectively tailor such processes, through the measurement backaction.
\subsection{On-site Terms}
The on-site terms will typically dominate in the light-matter interaction operators $\mathcal{J}_{mn}$ \cite{mekhov2012}, and thus these operators can often be approximated with replacement by their on-site counterparts
\begin{equation}
D_{mn}=\sum_i J_{ii}^{mn}b^\dagger_i b_i.
\end{equation}
Due to the perfect overlap of the Wannier functions (as they are identical), the corresponding light-matter interaction coefficients $J_{ii}^{m0}$ have close to unit magnitude when the light modefunctions are at peak intensity at the centre of lattice sites \cite{mekhov2012, caballero2015}. These coefficients may be imprinted with complex phases through the phase difference of the incoming and outgoing light modefunctions. This allows for the generation of a matter mode structure, in which the lattice is partitioned into sets of sites scattering light with the same phase; the atoms occupying each such set form a matter mode \cite{elliott2015}. For example, when considering the main diffraction minimum, we have that $J_{ii}^{m0}=(-1)^i$, and hence all odd sites scatter light with the same phase (thus forming one matter mode), while all even sites scatter light with the opposite phase (forming the second matter mode).
Focussing first on the special case of illumination in the diffraction maximum, in which the light-matter interaction coefficients are all identically $J_{ii}^{m0}=1$ (and similarly, $J_{ii}^{00}=1$) the interaction terms become $D_{m0}=N_m$, where $N_m$ is the number of atoms in total that occupy any of the sites illuminated by both the pump, and cavity mode $m$. Analogously, we have that $D_{00}=N_0$. Thus, the light-induced dynamics in the effective atomic Hamiltonian Eq.~\eqref{eqheff} is given by
\begin{equation}
\label{eqdensdens}
\Omega_{00}|\alpha_0|^2N_0+\sum_{m\neq0}\Delta_m|C_m|^2N_m^2,
\end{equation}
where the first term forms an effective chemical potential, and the second set of terms mediate density-density interactions between sites illuminated by the pump and their respective cavity mode. These latter interactions occur irrespective of the spatial separation of the sites, and thus exemplify the long-range nature of the cavity-induced processes.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure2.pdf}
\caption{Schematic displaying the regions and associated interaction strengths of cavity-mediated density-density interactions. The flexibility in the geometry of the cavity modes allows for the strength of the long- and short-range processes to be tuned independently.}
\label{figregions}
\end{figure}
When considering the inclusion of multiple cavity modes, the tunability of these interactions will exceed what is possible with a single cavity. One such possibility this provides is that the long- and short-range interaction strengths may be tuned independently, as can be seen by noting that the sign of the cavity detuning determines whether the long-range density-density interactions are repulsive or attractive, and thus different cavities can have different contributions to the overall dynamics.
As can be seen from Eq.~\eqref{eqdensdens}, a single pump and cavity mode will produce density-density interactions with a strength $U^L=\Delta_m|C_m|^2$ between atoms on all sites illuminated by pump and mode $m$. An illustration of how short- and long-range interactions between two regions may be varied can be seen by considering three cavity modes, labelled $X$, $Y$, and $Z$, which illuminate regions 1, 2, and both, respectively (see \figref{figregions}) at the diffraction maximum, with both regions illuminated by a common pump. Denoting the light-mediated density-density interaction strength between an atom in region $A$ and an atom in region $B$ as $U^L_{AB}$, we hence have
\begin{subequations}
\begin{equation}
U_{11}^L=\Delta_X|C_X|^2+\Delta_Z|C_Z|^2,
\end{equation}
\begin{equation}
U_{22}^L=\Delta_Y|C_Y|^2+\Delta_Z|C_Z|^2,
\end{equation}
and
\begin{equation}
U_{12}^L=\Delta_Z|C_Z|^2.
\end{equation}
\end{subequations}
Thus, the three interaction strengths can all be tuned independently of each other, through the respective $C_m$ and $\Delta_m$ of each cavity mode. We note also that in general, as the regions are defined by the matter mode structure, which is in turn defined by the light-matter interaction coefficients $J_{ii}^{m0}$, the regions considered here need not be spatially contiguous, and indeed, the modes can have a very non-trivial spatial overlap with each other \cite{elliott2015}. In this sense, one can more generally consider the interaction strengths as being classed as inter- and intra-mode, rather than short- and long-range.
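The three relations above can be inverted directly: given target interaction strengths, the required per-cavity strengths $S_m\equiv\Delta_m|C_m|^2$ follow by simple subtraction. A minimal sketch, with assumed target values in arbitrary units:

```python
# Target inter-/intra-region interaction strengths (assumed illustrative values).
U11, U22, U12 = 800.0, -300.0, 200.0

# Invert U12 = S_Z, U11 = S_X + S_Z, U22 = S_Y + S_Z for the per-cavity
# strengths S_m = Delta_m |C_m|^2 of cavities X, Y, Z.
S_Z = U12
S_X = U11 - S_Z
S_Y = U22 - S_Z

# Either sign of S_m is reachable by choosing the sign of the detuning Delta_m.
print(S_X, S_Y, S_Z)  # 600.0 -500.0 200.0
```

Note that mixed signs in the targets (here a repulsive intra-region and attractive inter-region combination) simply translate into opposite-sign detunings for the corresponding cavities.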
For more general, arbitrary illumination patterns, all of the sites in the illuminated region will still be part of the resultant density-density interactions. The interaction strength between two particular sites (or modes) is determined by their $J_{ii}^{mn}$, and for a cavity mode $c$, the associated interaction strength between modes $A$ and $B$ is
\begin{equation}
U_{AB}^L=\Delta_c|C_c|^2\cos(\phi_{AB}),
\end{equation}
where $\phi_{AB}$ is the difference between the phases of the associated $J_{ii}^{m0}$ of the two modes. For multiple cavity modes, one sums up the contributions from each cavity individually, while with multiple pumps one must sum over the $C_c$ generated by each pump, and also consider the additional pump-pump terms in the chemical potential, which may now differ from unit strength.
The effective atomic interactions resulting from such cavity-induced processes have already been demonstrated experimentally for the special case of illumination at the main diffraction minimum \cite{landig2016}. In these experiments, the resulting interactions were of the form $\Delta_c|C_c|^2(N_c^{(\mathrm{even})}-N_c^{(\mathrm{odd})})$, with the detuning chosen such that this term favours a population imbalance between even and odd states, leading to cavity-induced atomic self-organisation for large enough interaction strengths. The experiment demonstrated that this interaction strength can be made comparable to, or even one to two orders of magnitude stronger than, the tunnelling of the standard Bose-Hubbard model (i.e.~up to $\mathcal{O}(10^4-10^5$Hz))\cite{landig2016}. As the other more general schemes based on using the on-site terms in the light-matter interaction operator typically involve only changing the phase of the $J_{ii}^{mn}$ through adjustment of the optical geometry, the size of such interaction strengths can be expected to remain of a similar size.
\subsection{Inter-site Terms}
In contrast to the on-site case discussed above, when the overlap of the light modes is arranged to be concentrated between lattice sites the nearest-neighbour terms in $\mathcal{J}_{mn}$ can be made more significant than the on-site terms \cite{kozlowski2015}. Specifically, when the modefunction of cavity $c$ and the pump modefunction are given by standing waves with wavenumbers $k_c=k_0=\pi/d$ ($d$ being the lattice spacing), at opposing angles to the lattice $\theta_0=-\theta_c$, then the on-site pump-cavity light-matter coupling coefficients vanish ($J^{c0}_{ii}=0$), while the nearest-neighbour coefficients $J^{c0}_{\mathrm{nn}}$ (the subscript nn denoting nearest-neighbour sites) take a constant value at all site pairs with bonds in a given direction (see \cite{kozlowski2015} for further details). Intuitively, this can be seen to occur because the Wannier functions of a site are symmetric, while the product of light modefunctions $u_c^*(\bm{r})u_0(\bm{r})$ is antisymmetric, and periodic across two lattice sites, hence leading to a cancellation of its overlap with each on-site product of Wannier functions, but not the inter-site products. Note that because $|u_0(\bm{r})|^2$ is a positive-definite function, such a cancellation does not occur for the $J_{ii}^{00}$; however, for such an illumination pattern, this coefficient is equal for all illuminated lattice sites, forming a constant effective potential within this region, and hence when the pump illuminates the entire lattice this may generally be neglected in the effective Hamiltonian Eq.~\eqref{eqheff}. The pump-pump inter-site coefficients $J_{\mathrm{nn}}^{00}$ will then also take a constant value.
Using this result, in this regime we replace the light-matter coupling operators by the inter-site terms alone:
\begin{equation}
B_{mn}=\sum_{<ij>} J_{ij}^{mn}b^\dagger_ib_j,
\end{equation}
where $<ij>$ indicates neighbouring site pairs. These terms describe light-induced atomic tunnelling events. While in general the $J_{<ij>}^{mn}$ differ between site pairs, leading to spatially-dependent tunnelling rates that may be tuned in the same manner as the density-density interactions, we shall here focus on the aforementioned case where they are homogeneous. The pump light then allows the one-body tunnelling terms (as would be expected from semiclassical treatments) to be suppressed or enhanced, or potentially even eliminated. This latter case would leave only the two-body terms, the rates of which may be tuned semi-independently of the one-body terms.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure3.pdf}
\caption{The cavity fields can also induce tunable correlated tunnelling processes, which may be classified according to the relationship between the two tunnelling atoms: (1) pair tunnelling; (2) pair exchange; (3) effective next-nearest-neighbour tunnelling; (4 and 5) effective pair processes; and (6) general long-range correlated tunnelling.}
\label{figtunnelling}
\end{figure}
These two-body processes are of the form
\begin{equation}
\label{eqtwobodyproc}
\Delta_m\frac{|C_m|^2}{2}\sum_{\substack{<ij>\\<kl>}} (J_{ij}^{m0}J_{kl}^{0m}b^\dagger_ib_jb^\dagger_kb_l+h.c.),
\end{equation}
where $\{i,j\}$ and $\{k,l\}$ must be pairs of neighbouring sites, though the two pairs may be distributed anywhere within the illuminated regions. Each pair corresponds to a tunnelling process, and hence the complete two-body processes correspond to correlated tunnelling events. These can be classified according to the relationship between the two site pairs [\figref{figtunnelling}], giving rise to (1) pair tunnelling, (2) pair exchange, (3) effective next-nearest-neighbour tunnelling, (4 and 5) effective pair processes, and (6) general long-range correlated tunnelling. Due to the smaller inter-site overlap of Wannier functions compared to on-site overlaps, the inter-site light-matter coupling coefficients will necessarily be smaller than the on-site coefficients at their respective maxima. Their relative magnitudes have previously been studied in \cite{caballero2015}, where they are shown typically to be separated by approximately an order of magnitude. Thus, the complete two-body correlated tunnelling terms can have a strength approximately two orders of magnitude less than that which may be achieved for the light-induced density-density interactions, and hence can occur at rates comparable to the classical tunnelling $J_{\mathrm{nn}}^{T}$ (i.e.~$\mathcal{O}(10^3$Hz)).
\subsection{Simulation of Superexchange Interactions}
Before we introduce the measurement backaction as a method of control for these processes, we shall first suggest an alternative approach for exerting additional tunability beyond the optical geometry. One such way is to impart an additional structure to the atomic system by shaping the underlying lattice, so that it is no longer homogeneous at every site. As an example of this, we propose how superexchange interactions in spin models may be simulated using this setup.
Two-body atomic tunnelling processes have previously been used to implement such simulations \cite{trotzky2008}. However, in these earlier proposals, the superexchange occurs as a second-order perturbative process. In contrast, here we suggest using the cavity-induced pair exchange processes for the same purpose. Our proposal follows the original by dividing the lattice into double-well site pairs with a superlattice potential, with each site pair containing two atoms of different (pseudo)spin species, and strong interparticle interactions enforcing a one-particle-per-site constraint (we also assume the use of the same method for the initial state preparation). Introducing the correlated tunnelling as above for the inter-site case (through use of a single cavity mode $c$), the superlattice potential and the limit on site occupation impose constraints on the allowed dynamical processes, permitting only pair exchange processes between atoms in each site pair.
To see this, consider each of the possible tunnelling events. The superlattice potential only permits tunnelling of the atoms into the corresponding other site in the double well pair. The one-particle-per-site constraint suppresses any process in which an atom tunnels to an already occupied site (as is the case in each of the double wells). However, when we consider the correlated processes, such tunnelling events can take place when coupled with another tunnelling that preserves the unit occupation of each site. There are two such processes: those in which the atom of the other species in the site pair tunnels in the opposite direction, and those in which the same atom tunnels in the reverse direction (see \figref{figsuperexchange}). The latter processes take the form $b^\dagger_{Lx}b_{Rx}b_{Rx}^\dagger b_{Lx}$, where $\{L,R\}$ denotes the corresponding site in the well, and $x$ the spin species. Due to the unit occupation of each site pair by each spin species, these terms have a constant value, and so may be neglected.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure4.pdf}
\caption{The correlated tunnelling processes may be used to simulate superexchange interactions. Here, a superlattice is used to divide the system into a series of double well potentials, each containing an atom of each of two spin species (denoted by their colour). The superlattice potential and strong repulsive interactions allow only pair exchange processes to take place, between atoms in the same double well, mimicking superexchange dynamics. These correlated tunnellings are indicated by arrows of the same colour.}
\label{figsuperexchange}
\end{figure}
Thus, each double well (up to constant terms) behaves according to the Hamiltonian
\begin{equation}
H_{\mathrm{ex}}=2\Delta_c|C_cJ_{\mathrm{nn}}^{c0}|^2(b_{L\uparrow}^\dagger b_{R\uparrow} b^\dagger_{R\downarrow}b_{L\downarrow}+h.c.).
\end{equation}
This can equivalently be expressed in terms of spin operators ($2S_j^Z=b^\dagger_{j\uparrow}b_{j\uparrow}-b^\dagger_{j\downarrow}b_{j\downarrow}$), to give
\begin{equation}
H_{\mathrm{ex}}=J_{\mathrm{ex}}(S_L^+S_R^-+S_L^-S_R^+),
\end{equation}
where
\begin{equation}
J_{\mathrm{ex}}=2\Delta_c|C_cJ_{\mathrm{nn}}^{c0}|^2.
\end{equation}
Critically, the exchange term here does not suffer the $1/U$ dependence of the second-order process in the original proposal, and hence the interparticle interactions necessary for enforcing the single-particle-per-site constraint can here be increased without suppressing the exchange interaction. Indeed, as noted above, these correlated tunnelling processes can occur at rates comparable to the standard tunnelling $J^T_{\mathrm{nn}}$ from the classical optical lattice potential.
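As a minimal numerical consistency check (our own illustrative sketch; the Fock-space construction, mode ordering, and cutoff below are assumptions for illustration, not part of the proposal), one can verify that the pair-exchange operator indeed acts as a spin flip-flop on the one-atom-per-site double-well states:

```python
import numpy as np

# Toy check: the pair-exchange operator b†_{L↑} b_{R↑} b†_{R↓} b_{L↓}
# acts as S_L^+ S_R^- on the one-atom-per-site double-well states.
cutoff = 2                                     # Fock cutoff per mode (0 or 1 atom)
b = np.diag(np.sqrt(np.arange(1, cutoff)), 1)  # single-mode annihilation operator
I = np.eye(cutoff)

def mode_op(op, k, n_modes=4):
    """Embed a single-mode operator at slot k of the 4-mode product space."""
    full = np.eye(1)
    for m in range(n_modes):
        full = np.kron(full, op if m == k else I)
    return full

# Mode ordering (assumed): (L up, L down, R up, R down)
bLu, bLd, bRu, bRd = (mode_op(b, k) for k in range(4))

def fock(n):
    """Basis vector with occupations n = (nLu, nLd, nRu, nRd)."""
    v = np.zeros(cutoff ** 4)
    idx = 0
    for nk in n:
        idx = idx * cutoff + nk
    v[idx] = 1.0
    return v

T = bLu.conj().T @ bRu @ bRd.conj().T @ bLd   # pair-exchange term

# |down, up> -> |up, down> with unit amplitude, as S_L^+ S_R^- would give
assert np.isclose(fock((1, 0, 0, 1)) @ T @ fock((0, 1, 1, 0)), 1.0)
# aligned spins |up, up> are annihilated, again matching S_L^+ S_R^-
assert np.isclose(np.linalg.norm(T @ fock((1, 0, 1, 0))), 0.0)
```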
\section{Inclusion of the Light Measurement Backaction}
A drawback of the above method of imparting additional structure by altering the lattice geometry is that suppressing a particular tunnelling event also suppresses it wherever it would otherwise have occurred as part of a correlated tunnelling event. For example, if the process $b^\dagger_Xb_Y$ is suppressed by the lattice geometry, causing it to occur with low amplitude, then any process of the form $b^\dagger_Xb_Yb_i^\dagger b_j$ also has a correspondingly low amplitude. In contrast to this, the matter mode structure discussed above offers an alternative avenue for imprinting further structure onto the atoms, without having to suffer such a penalty.
In previous works \cite{elliott2015, mazzucchi2016}, we have discussed how the light measurement backaction from the photons leaked by a cavity can be used to selectively suppress atomic dynamics in the standard Bose-Hubbard model, through constraints on the dynamics imposed by the rate at which photons are detected. We now go beyond this, and incorporate the cavity light-induced dynamics into such methods, thus uniting the measurement backaction effect with the cavity backaction for the first time. We will then evince the potential of this union for enhancing quantum simulations with atomic systems. Crucially, the measurement backaction allows for a process to be forbidden as a single event, but permitted when correlated with another particular event (or events) commensurate with the measurement outcome; in this case the suppression is achieved through constraints on the matter mode occupations.
This measurement backaction is realised through the introduction of an additional cavity (and possibly pump) mode, where measurement is made of the scattered light that it leaks. The cavity leaks photons at a rate that is proportional to its occupation, which is determined by the particular atomic state through Eq.~\eqref{eqsteady}. We will here assume that this measurement cavity and associated pump is arranged such that the on-site terms dominate the light-matter interactions, and hence for a given atomic Fock state configuration $\bm{n}$, the cavity mode $\Pi$ has a well-defined amplitude
\begin{equation}
a_\Pi=C_\Pi D_{\Pi0}=C_\Pi\sum_j J_{jj}^{\Pi0} n_j.
\end{equation}
For such a configuration, the (average) rate of photon leakage from the cavity is constant. More generally, the atomic configuration is a superposition of such Fock states. Consider a state where the Fock states $\bm{n}$ occur with initial amplitudes $c_{\bm{n}}^0$. Applying the quantum jump measurement formalism, one finds that these amplitudes evolve according to \cite{mekhov2012}
\begin{equation}
c_{\bm{n}}(k,t)=\frac{1}{\mathcal{N}}\alpha_{\Pi\bm{n}}^ke^{-|\alpha_{\Pi\bm{n}}|^2\kappa t}c_{\bm{n}}^0
\end{equation}
where $\alpha_{\Pi\bm{n}}=C_\Pi\bopk{\bm{n}}{D_{\Pi0}}{\bm{n}}$ is the cavity field amplitude for the given Fock state $\bm{n}$, and $\mathcal{N}$ is a normalisation factor. In this expression, the first factor represents the effect of the quantum jumps occurring at each of the $k$ photodetection events of the leaked photons, while the second factor gives the non-Hermitian evolution occurring between such jumps during the elapsed time $t$. This evolution then enacts a natural selection of sorts, reducing the relative probabilities of states not consistent with the observed leakage rate. For intense, persistent measurement of this form, the distribution of amplitudes is compressed, and consequently the light field state converges towards a particular coherent state $\alpha_{\Pi z}$. When this convergence to a single state happens on timescales much shorter than the atomic dynamics, the light field is ultimately pinned to this state (through the quantum Zeno effect \cite{misra1977}). The requisite condition on the timescales can be expressed as $|C_\Pi|^2\kappa_\Pi\gg J^T_{\mathrm{nn}}$ \cite{mazzucchi2016}. Note that while this regime is not reached in the aforementioned example experiment \cite{landig2016} (because the intent there was to study the cavity backaction alone), it could be reached by, e.g.~a modest reduction of the cavity detuning or an increase in the pump power, both of which have been performed in previous incarnations of the same setup \cite{baumann2010}.
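The compression of the amplitude distribution can be illustrated with a short numerical sketch (the candidate amplitudes, $\kappa$, and $t$ below are assumed purely for illustration): given a detection record typical of one candidate cavity amplitude, the conditional weights collapse onto the corresponding Fock configuration.

```python
import numpy as np

# Conditional amplitudes c_n(k, t) ∝ alpha_n^k exp(-|alpha_n|^2 kappa t)
# for a set of candidate Fock configurations (assumed illustrative values).
alphas = np.array([0.5, 1.0, 1.5, 2.0])   # candidate cavity amplitudes
c0 = np.full(4, 0.5)                      # initially uniform superposition

kappa, t = 1.0, 5.0
k = int(round(2 * kappa * t * alphas[-1] ** 2))  # count typical of alpha = 2

c = c0 * alphas ** k * np.exp(-np.abs(alphas) ** 2 * kappa * t)
p = np.abs(c) ** 2
p /= p.sum()

# The configuration consistent with the detection record dominates
assert p[-1] > 0.95
```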
The matter is then confined to evolve within only a subspace of its full Hilbert space, which is determined by the particular $\alpha_{\Pi z}$; specifically, it must remain within the subspace of states $\{\bm{n}_z\}$ for which $\alpha_{\Pi\bm{n}_z}=\alpha_{\Pi z}$. This set of states is defined by the matter mode structure, and hence depends on the modefunctions of the measurement cavity and pump \cite{elliott2015}. The ensuing atomic dynamics must take place within this measurement subspace, thus undergoing quantum Zeno dynamics \cite{facchi2008}. In this regime, the dynamics is described by the appropriate Zeno Hamiltonian, defined as
\begin{equation}
H_Z=\mathcal{P}H\mathcal{P},
\end{equation}
where $\mathcal{P}$ is the projector that describes the subspace of states $\{\bm{n}_z\}$ consistent with the measured light state \cite{facchi2008}. This result follows generally from considerations of a system subject to very frequent measurement: in the limit that the number of measurements $N\to\infty$ in a fixed time $t$, the evolution $(\mathcal{P}\exp(-iHt/N))^N$ can be expanded approximately as \cite{facchi2008}
\begin{align}
\lim_{N\to\infty}(\mathcal{P}e^{-iH\frac{t}{N}})^N&\approx\lim_{N\to\infty}(\mathcal{P}(1-iH\frac{t}{N}))^N\nonumber\\&=e^{-iH_Zt},
\end{align}
with the second line following from the definition of the exponential function $\exp(x)\equiv\lim_{n\to\infty}(1+x/n)^n$, and the idempotence of the projector ($\mathcal{P}^2=\mathcal{P}$). We shall now drop the $Z$ subscript from the Zeno Hamiltonians for the remainder of this article.
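This limit is straightforward to verify numerically. The following toy sketch (our own illustration, not from the original work) checks that $(\mathcal{P}e^{-iHt/N})^N$ approaches $e^{-iH_Zt}$ for a single particle hopping on three sites, with a projector confining it to the first two sites:

```python
import numpy as np

# Numerical check of the Zeno limit (P e^{-iHt/N})^N -> e^{-i H_Z t},
# with H_Z = P H P, for a toy three-site hopping Hamiltonian.

def evolve(H, t):
    """Unitary e^{-iHt} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])        # nearest-neighbour hopping
P = np.diag([1., 1., 0.])           # measurement keeps the particle off site 3
H_Z = P @ H @ P                     # Zeno Hamiltonian

t, N = 1.0, 5000
step = P @ evolve(H, t / N)         # projection after each short evolution

psi = np.array([1., 0., 0.], dtype=complex)   # initial state in the subspace
for _ in range(N):
    psi = step @ psi
psi /= np.linalg.norm(psi)          # small leakage at finite N; renormalise

fidelity = abs(np.vdot(evolve(H_Z, t) @ np.array([1., 0., 0.]), psi))
assert fidelity > 0.999
```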
Thus, utilising this formalism, we can use the measurement backaction to selectively eliminate particular atomic dynamics, as determined by the measurement cavity geometry and the associated projection operators. In contrast to earlier work incorporating measurement backaction, in which the two-body terms that appear due to measurement are only second-order processes \cite{mazzucchi2016}, in this scheme such terms arise at first order in the system evolution, as they are directly induced by the cavity backaction (rather than simply being higher-order processes which are not suppressed by the measurement), and are hence perfectly correlated. The two cases are thus fundamentally different, and here increasing pump strength increases the two-body tunnelling rates, rather than suppressing them. Note that because the dynamics of the system is now constrained to the subspace of states consistent with a single value of the measurement cavity light-matter interaction operator, the dynamics induced by the measurement cavity may be disregarded, as they are identical for all states in the subspace, and hence form a constant energy shift for all states.
As a straightforward example of such a scheme, consider the case of measurement made at the diffraction minimum ($J^{\Pi0}_{ii}=(-1)^i$; $D_{\Pi 0}=N_\Pi^{\mathrm{(even)}}-N_\Pi^{\mathrm{(odd)}}$) across the lattice. This freezes the occupation number difference between odd and even sites, and thus when this difference is given by $\Delta N=N^{(\mathrm{odd})}-N^{(\mathrm{even})}$, the allowed states are superpositions of states of the form $\ket{\bm{n}^{(\mathrm{even})},\bm{n}^{(\mathrm{odd})}}$ with $\sum_{j\in\mathrm{(even)}} n_j = n$ and $\sum_{j\in\mathrm{(odd)}} n_j=n+\Delta N$, for integer $n>0$. The dynamics is restricted to this subspace of states with associated projector
\begin{equation}
\mathcal{P}_{\Delta N}=\sum_n\ket{n,n+\Delta N}\bra{n,n+\Delta N}
\end{equation}
by the Zeno dynamics, and since single nearest-neighbour tunnelling events change the $\Delta N$ of a state, such processes are forbidden from taking place by the measurement. In the absence of a further cavity driving the dynamics, this would leave next-nearest-neighbour tunnelling as the leading process, which typically occurs at a much slower rate than nearest-neighbour tunnelling in the standard Bose-Hubbard model (and thus is often ignored), but may now no longer be negligible.
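At the operator level, this suppression can be checked with a minimal two-site sketch (one even and one odd site, with an assumed occupation cutoff; an illustration of our own, not the authors' code): the projector onto fixed $\Delta N$ annihilates single tunnelling terms, while occupation-difference-preserving correlated terms survive.

```python
import numpy as np

# Two-site toy check: fixing the even-odd occupation difference eliminates
# single tunnelling at the operator level; correlated tunnelling survives.
cutoff = 3  # occupations 0, 1, 2 per site (assumed truncation)
b = np.diag(np.sqrt(np.arange(1, cutoff)), 1)
I = np.eye(cutoff)
b1, b2 = np.kron(b, I), np.kron(I, b)
n1, n2 = b1.conj().T @ b1, b2.conj().T @ b2

# Projector onto the Delta N = 0 subspace (equal occupations)
diff = np.round(np.diag(n1 - n2)).astype(int)
P = np.diag((diff == 0).astype(float))

hop = b1.conj().T @ b2                          # single tunnelling: changes Delta N
exchange = b1.conj().T @ b2 @ b2.conj().T @ b1  # correlated: preserves Delta N

assert np.allclose(P @ hop @ P, 0)
assert not np.allclose(P @ exchange @ P, 0)
```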
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure5.pdf}
\caption{Illustration of the allowed processes in the presence of measurement-induced dynamical constraints requiring fixed occupation number difference between even and odd sites ($\Delta N=N^{(\mathrm{odd})}-N^{(\mathrm{even})}$). These allowed processes must conserve the total occupation difference between even and odd sites, and include: (a) nearest neighbour tunnelling; (b) effective nearest-neighbour tunnelling; (c) pair exchange; and (d) long-range correlated tunnelling events that preserve total mode occupations.}
\label{figtwomode}
\end{figure}
Considering also the cavity backaction when a cavity is present to drive dynamics, this scenario may be augmented with the two-body terms introduced above (namely, those that preserve matter mode occupation number difference). The correlated processes in which two atoms tunnel into the same site (and the reverse process) are forbidden by the measurement as they violate the constraint on $\Delta N$, as are certain of the long-range correlated tunnelling events. The allowed processes of the latter form lead to effective long-range tunnelling events within each mode. All of the light-induced effective nearest-neighbour tunnelling events are permitted, as they simply move atoms between neighbouring sites in each of the two modes, as are the pair exchange events, as they do not change the site occupation numbers. \figref{figtwomode} illustrates examples of each of the allowed processes in this two-mode example. In the simplest case of uniform illumination, the rates of the long-range correlated tunnelling processes are independent of the site separation, though as noted above, akin to the density-density interactions, spatially-dependent long-range tunnelling rates can be tuned with multiple cavity modes.
The flexibility of the light modes allows a range of configurations of the two-body terms to be engineered through measurement backaction; for example, when light measurement fixes occupation numbers for multiple matter modes \cite{elliott2015}, the only allowed correlated tunnelling events are pair exchange and those long-range events which preserve the total mode occupations.
\section{Framework for Quantum Simulations}
The architecture based on the union of measurement backaction and cavity backaction that we have detailed above naturally lends itself to quantum simulations. In particular, it offers opportunities to mimic correlated processes and long-range interactions of the atoms that would be difficult (or even impossible) to realise in systems with finite-range interactions. We now proceed by outlining a framework to this end, by describing `building block' components for implementing reservoir and dynamical global gauge field models, which can be used to extend optical lattice quantum simulation beyond current methods by incorporating such phenomena.
\subsection{Reservoir Models}
The matter mode structure defined by the light geometry allows the lattice to be partitioned into sets of sites corresponding to each mode. We can assign a subset of these modes as reservoirs, and investigate the dynamics of the remaining sites, subject to the presence of the reservoirs. Consider the conceptual three-site model shown in \figref{figreservoir}(a). In this scenario, a cavity is used to drive dynamics that generate (homogeneous amplitude) two-body correlated tunnelling between the sites, as per Eq.~\eqref{eqtwobodyproc} with a single cavity $c$ and uniform light-matter interaction coefficients $J_{\mathrm{nn}}^{c0}$, such that the system is described by
\begin{align}
H&=H_M+\Omega_{00}|\alpha_0|^2(D_{00}+B_{00})\nonumber\\&+\frac{\Delta_c|C_c|^2}{2}(B_{c0}^\dagger B_{c0}+B_{c0}B_{c0}^\dagger).
\end{align}
Neglecting the effective (uniform) chemical potential due to the pump light (which can be tuned away by e.g.~using another classical light source), and the on-site interactions (which can be tuned away by Feshbach resonances \cite{lewenstein2012}), this becomes
\begin{align}
\label{eq3site}
H&=\sum_{<ij>}(J^T_{\mathrm{nn}}+\Omega_{00}|\alpha_0|^2J^{00}_{\mathrm{nn}})b_i^\dagger b_j\nonumber\\&+\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2\sum_{\substack{<ij>\\<kl>}} b_i^\dagger b_j b_k^\dagger b_l.
\end{align}
We label the sites $i=1,2,3$, and designate the outer two sites as the reservoirs. The measurement cavity is arranged at the maxima of two coherent antiphase pump lasers antisymmetric about the central site, such that they fully destructively interfere at this site, and have opposing contributions at the corresponding site pairs about the centre. The resulting total pump mode function is $u_0(\bm{r})=u_p(\bm{r})-u_p(-\bm{r})$, and thus the measured operator is $D_{\Pi0}=N_1-N_3$. An example of an appropriate pump modefunction is $u_p(\bm{r})=\sin(\sqrt{2}\pi x/d)$, where $d$ is the lattice spacing and $x$ is the lattice axis, such modefunctions being achievable by using travelling waves angled at $45^\circ$ to the lattice (the incommensurate nature of the pump with the lattice ensures that any other sites adjacent to the reservoirs do not have vanishing contributions to the measurement, preventing events where one atom tunnels into the central site simultaneously with an atom tunnelling to an external site). The measurement then constrains the system to remain in a subspace of states in which the value of this operator is constant. When the measured value for this operator is $\Delta N$, the appropriate projectors applied to the system are (using the designation $\ket{N_1,N_2,N_3}$)
\begin{equation}
\label{eqproj3}
\mathcal{P}_{\Delta N}=\sum_{N_2,N_3}\ket{N_3+\Delta N,N_2,N_3}\bra{N_3+\Delta N, N_2, N_3}.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure6.pdf}
\caption{Dynamical constraints imposed by measurement of the light allow for selective engineering of atomic dynamics for use in quantum simulation. By designating some sites as reservoir modes, (a) effective pair processes and (b) generalised Dicke models can be realised. The highlighting of sites indicates the contribution of their occupation to the measurement value (green positive, blue negative), which is fixed by the quantum Zeno effect. In both cases, the measurement operator is given by $D_{\Pi0}=N_{\mathrm{Res}1}-N_{\mathrm{Res}2}$. Tunnelling events that are permitted to occur only simultaneously are indicated by identically coloured arrows.}
\label{figreservoir}
\end{figure}
Applying these projectors to the Hamiltonian Eq.~\eqref{eq3site}, we obtain the Zeno Hamiltonian $\mathcal{P}_{\Delta N}H\mathcal{P}_{\Delta N}$. The projectors eliminate all of the single-atom tunnelling terms, and preserve only the two-body tunnellings in which $N_1-N_3$ is conserved. These are the following terms: $b_1^\dagger b_2 b_2^\dagger b_1$; $b_2^\dagger b_1 b_1^\dagger b_2$; $b_2^\dagger b_3 b_3^\dagger b_2$; $b_3^\dagger b_2 b_2^\dagger b_3$; $b_1^\dagger b_2 b_3^\dagger b_2$; $b_3^\dagger b_2 b_1^\dagger b_2$; $b_2^\dagger b_1 b_2^\dagger b_3$; and $b_2^\dagger b_3 b_2^\dagger b_1$. The first four of these terms correspond to processes where an atom leaves one of the reservoirs, concurrent with another atom entering the same reservoir, while the latter four describe events where atoms simultaneously leave (or enter) both reservoirs.
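A short enumeration (our own illustrative sketch) confirms that these are exactly the combinations of neighbouring-pair tunnellings on the three-site chain that conserve the measured quantity $N_1-N_3$:

```python
import itertools

# Among all two-body terms b_i† b_j b_k† b_l built from neighbouring pairs
# on the three-site chain, exactly eight conserve N_1 - N_3.
pairs = [(1, 2), (2, 1), (2, 3), (3, 2)]  # neighbouring (dagger, plain) pairs

def delta(i, j, k, l):
    """Change in N_1 - N_3 induced by b_i† b_j b_k† b_l."""
    d = {1: 0, 2: 0, 3: 0}
    for site, sign in ((i, +1), (j, -1), (k, +1), (l, -1)):
        d[site] += sign
    return d[1] - d[3]

surviving = [((i, j), (k, l))
             for (i, j), (k, l) in itertools.product(pairs, pairs)
             if delta(i, j, k, l) == 0]
assert len(surviving) == 8
```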
Consider now the reservoirs to both be prepared in coherent states (as would be approximately expected for a system initially in a superfluid state from the Gutzwiller ansatz \cite{lewenstein2012}). Replacing the operators for the two reservoir sites by their coherent amplitudes $b_1\to\beta_1$ and $b_3\to\beta_2$, we now only have one dynamical variable, describing the occupation of the central site. Relabelling this as $b_2\to b$, and defining $n=b^\dagger b$, the effective Hamiltonian becomes
\begin{equation}
H=\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2((|\beta_1|^2\!+|\beta_2|^2)(2n+1)+(2\beta_1\beta_2b^\dagger b^\dagger\! +h.c.)).
\end{equation}
Discarding the constant term, and further tuning on-site terms to eliminate the effective chemical potential, we arrive at a Hamiltonian that describes effective pair creation and annihilation dynamics at the central site:
\begin{equation}
H_{\mathrm{PP}}=(\lambda b^\dagger b^\dagger + h.c.),
\end{equation}
where
\begin{equation}
\label{eqlambdares3}
\lambda=2\beta_1\beta_2\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2.
\end{equation}
With the experimental parameters considered above, and for typical occupations $|\beta|^2\sim\mathcal{O}(1-10)$, these processes can be of a comparable size to the classical tunnelling rate $J^T_{\mathrm{nn}}$ (or even slightly larger depending on $|\beta|^2$).
Now consider the extension of this to include an additional site between the reservoirs, as depicted in \figref{figreservoir}(b). Again, the outer sites are designated as the reservoirs, and their occupation number difference is the subject of measurement. One way to achieve such a mode structure is to again use the interference of two coherent pump beams, now arranged to have vanishing contributions on the central two sites. Thus, the measured operator is (labelling the central two sites 1 and 2, and the reservoirs Res1 and Res2) given by $D_{\Pi0}=N_{\mathrm{Res1}}-N_{\mathrm{Res2}}$, again fixed at some value $D_{\Pi0}=\Delta N$, now with the associated projectors
\begin{align}
\mathcal{P}_{\Delta N}=\sum_{N_1,N_2,N_{\mathrm{Res}}}&\ket{N_{\mathrm{Res}}+\Delta N,N_1,N_2,N_{\mathrm{Res}}}\nonumber\\ \times&\bra{N_{\mathrm{Res}}+\Delta N, N_1, N_2,N_{\mathrm{Res}}}.
\end{align}
As before, this measurement imposes constraints on the allowed tunnelling events. Unlike the previous case, however, some single-atom tunnelling events survive: those between the two central sites $b_1^\dagger b_2$ and $b_2^\dagger b_1$. The permitted correlated tunnelling events involving the reservoirs are analogous to the previous case, with the processes involving atoms simultaneously crossing the same boundary between a reservoir and central site in opposite directions, or the simultaneous tunnelling of a particle from (to) each of the reservoirs into (from) each of the central sites.
However, there are now also correlated events in which both tunnellings take place between the two central sites, these being of the form $b^\dagger_1b_2b^\dagger_1b_2$, $b^\dagger_1b_2b^\dagger_2b_1$, $b^\dagger_2b_1b^\dagger_1b_2$, and $b^\dagger_2b_1b^\dagger_2b_1$. Once again replacing the operators for the reservoirs by their coherent state amplitudes, we can write the Hamiltonian describing the dynamics of the central two sites (again for now neglecting the effective chemical potentials) as
\begin{align}
\label{eqfulldicke}
H&=(J^T_{\mathrm{nn}}+\Omega_{00}|\alpha_0|^2J^{00}_{\mathrm{nn}})(b_1^\dagger b_2+h.c.)\nonumber\\
&+(2\beta_1\beta_2\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2b_1^\dagger b_2^\dagger+h.c.)\nonumber\\
&+\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2(b^\dagger_1b_2b^\dagger_1b_2\!+\!b^\dagger_1b_2b^\dagger_2b_1\!+\!b^\dagger_2b_1b^\dagger_1b_2\!+\!b^\dagger_2b_1b^\dagger_2b_1).
\end{align}
Consider now the reservoirs to have a large occupation compared to the central sites (e.g.~$\mathcal{O}(1)$ atom per central site, and $\mathcal{O}(10)$ per reservoir; such filling factors are available in typical experiments \cite{klinder2015,landig2016}). In this case, the terms in the third line of Eq.~\eqref{eqfulldicke}, corresponding to the correlated processes between atoms in the central sites alone, become negligible compared to the reservoir-based correlated tunnellings. Reintroducing the chemical potential (which, as noted before, can be tuned with additional semiclassical light sources), the effective Hamiltonian describing the central two sites is now a generalised Dicke Hamiltonian \cite{dicke1954, schmidt2014}:
\begin{equation}
\label{eqgdicke}
H_{\mathrm{GD}}=\sum_{i=1}^2 \mu_i b^\dagger_i b_i + (\lambda_1 b^\dagger_1b_2 + \lambda_2 b_1^\dagger b_2^\dagger +h.c.),
\end{equation}
where
\begin{align}
\lambda_1&=J^T_{\mathrm{nn}}+\Omega_{00}|\alpha_0|^2J_{\mathrm{nn}}^{00}\\
\lambda_2&=2\beta_1\beta_2\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2.
\end{align}
In contrast to the original model, and its corresponding realisation in optical cavities \cite{baumann2010}, the parameters $\lambda_{1,2}$, which represent the co- and counter-rotating terms respectively, can be tuned independently, by adjusting the optical geometry; their magnitude is controlled by e.g.~increasing the pump strength or adjusting the reservoir populations $\beta_{1,2}$, and complex phases can be induced with the use of additional pump beams.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure7.pdf}
\caption{Ground state average occupation $\langle n_1 \rangle$ of the generalised Dicke model, showing a change in the superradiance phase transition point when varying $\lambda_1/\lambda_2$. In the standard Dicke model, the transition occurs at $\lambda_1=\lambda_2=\mu/2$; here we find the general boundary at $\lambda_1+\lambda_2\approx\mu$. We use $\mu_1=\mu_2=\mu$, and a maximum occupation of 20 for each mode.}
\label{figphase}
\end{figure}
In addition to the well-known phase diagrams for $\lambda_1=\lambda_2$ (traditional Dicke) and $\lambda_2=0$ (Tavis-Cummings) \cite{bastarrachea2014} in the quantum case, classical treatments have shown bifurcations when varying $\lambda_{1,2}$ \cite{aguiar1991}, suggesting possible further novel phase behaviour. We explore part of this extended parameter space using exact diagonalisation to find the ground-state average occupation of each of the modes. Specifically, we limit the occupation of the two modes to 20 atoms per site, construct the Hamiltonian Eq.~\eqref{eqgdicke} in this truncated space, and obtain its ground state. For the regime in which $\mu_1=\mu_2=\mu$, we find that a general phase boundary [\figref{figphase}] for the onset of superradiance (that is, the transition from $\langle n_j \rangle=0$ to diverging (in the full non-occupation-limited case)) occurs at $\lambda_1+\lambda_2\approx\mu$, with the standard transition at $\lambda_1=\lambda_2=\mu/2$ being a special case of this.
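The diagonalisation just described can be sketched as follows (a straightforward dense-matrix implementation with the stated occupation cutoff; the particular $\mu$ and $\lambda_{1,2}$ test values are our own choices, not results quoted in the text):

```python
import numpy as np

# Ground state of the generalised Dicke Hamiltonian H_GD for two bosonic
# modes with occupations truncated at 20, locating the superradiance onset
# near lambda_1 + lambda_2 = mu.
cutoff = 21  # occupations 0..20 per mode
b = np.diag(np.sqrt(np.arange(1, cutoff)), 1)
I = np.eye(cutoff)
b1, b2 = np.kron(b, I), np.kron(I, b)
n1 = b1.conj().T @ b1

def ground_n1(mu, lam1, lam2):
    """Ground-state <n_1> of H_GD with mu_1 = mu_2 = mu."""
    H = (mu * (b1.conj().T @ b1 + b2.conj().T @ b2)
         + lam1 * (b1.conj().T @ b2 + b2.conj().T @ b1)
         + lam2 * (b1.conj().T @ b2.conj().T + b2 @ b1))
    w, V = np.linalg.eigh(H)
    psi = V[:, 0]                    # lowest-energy eigenvector
    return float(psi @ n1 @ psi)

# Below the boundary lambda_1 + lambda_2 = mu the modes are nearly empty;
# above it the occupation grows toward the cutoff.
assert ground_n1(1.0, 0.2, 0.2) < 0.1
assert ground_n1(1.0, 0.8, 0.8) > 1.0
```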
This can be further extended by using the light measurement to fix occupation number differences between larger numbers of modes \cite{elliott2015}, allowing for additional reservoir modes to be generated. These can be used to increase the number of sites coupled to reservoirs, and increase the number of simulated modes in our model (for example, to have multiple atomic species). As a concrete example of how these building blocks we propose can be put together to form more complex systems, we now describe how to realise a generalised Dicke model with two (synthetic) atomic species and one synthetic light mode.
Consider an amalgamation of two copies of the above setup for the one-species generalised Dicke model. When these setups are crossed perpendicularly, with both setups sharing a common site for the synthetic light mode (see \figref{figtwodicke}), the two-species generalised Dicke model can be realised. As before, cavities are used to induce the correlated tunnelling dynamics, one in each of the setups (that is, the correlated events both involve tunnellings within the same individual setup). By also using two measurement cavities, each fixing the difference in occupation of the reservoir sites in one of the setups (to $\Delta N_1$ and $\Delta N_2$ respectively), the resulting projectors are
\begin{align}
\mathcal{P}_{\Delta N_1\Delta N_2}&=\!\!\!\!\!\!\!\sum_{\substack{N_L,N_A,N_B,\\N_{R1},N_{R2}}}\!\!\!\!\!\!\!\!\ket{N_{R1}+\Delta N_1,N_{R2}+\Delta N_2,N_{R1},N_{R2}}\nonumber\\ \times&\bra{N_{R1}+\Delta N_1,N_{R2}+\Delta N_2,N_{R1},N_{R2}}\nonumber\\\otimes&\ket{N_A,N_B,N_L}\bra{N_A,N_B,N_L},
\end{align}
where $A$ and $B$ are the simulated atomic species, $L$ is the synthetic light mode, $R_{L_1}$ and $R_{A}$ are the reservoir modes associated with the simulated light and atomic modes respectively in the first setup (and $R_{L_2}$ and $R_{B}$ for the second), and the states are labelled as $\ket{N_{R_{L_1}},N_{R_{L_2}},N_{R_A},N_{R_B}}\otimes\ket{N_A,N_B,N_L}$.
Following the analysis for the single-species Dicke model Eq.~\eqref{eqgdicke}, replacing the reservoirs by coherent states, and noting that both setups have a central site in common (corresponding to the synthetic light mode), the resulting Hamiltonian in the Zeno subspace for the central sites may be written
\begin{equation}
H_{\mathrm{2GD}}=\!\!\!\!\sum_{i=A,B,L}\!\!\!\! \mu_i b^\dagger_i b_i + \!\sum_{i=A,B}\!(\lambda_1^{(i)} b^\dagger_ib_L + \lambda_2^{(i)} b_i^\dagger b_L^\dagger +h.c.),
\end{equation}
where the $\lambda_{1,2}^{(i)}$ are the same as for the one-species setup, with the appropriate coefficients $\beta_{1,2}$, $\Delta_c$, $C_c$ and $J_{\mathrm{nn}}^{c0}$ as for each of the individual setups.
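As a consistency check on the structure of $H_{\mathrm{2GD}}$, the Hamiltonian can be constructed explicitly in a truncated Fock basis. The following sketch uses arbitrary illustrative values for $\mu_i$ and $\lambda_{1,2}^{(i)}$ (they are not derived here from the cavity parameters) and verifies that the operator is Hermitian:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator on an n-level Fock space
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

n = 4  # Fock-space cutoff per mode (illustrative)
a, I = destroy(n), np.eye(n)

# mode ordering: (A, B, L)
b = {
    'A': np.kron(np.kron(a, I), I),
    'B': np.kron(np.kron(I, a), I),
    'L': np.kron(np.kron(I, I), a),
}
mu = {'A': 1.0, 'B': 0.8, 'L': 0.5}      # illustrative frequencies
lam1 = {'A': 0.1, 'B': 0.2}              # illustrative couplings
lam2 = {'A': 0.05, 'B': 0.15}

H = sum(mu[i] * b[i].conj().T @ b[i] for i in 'ABL')
for i in 'AB':
    hop = lam1[i] * b[i].conj().T @ b['L'] \
        + lam2[i] * b[i].conj().T @ b['L'].conj().T
    H = H + hop + hop.conj().T

assert np.allclose(H, H.conj().T)  # Hamiltonian is Hermitian
```

Diagonalising such a truncated matrix gives the low-lying spectrum of the two-species model at small occupations.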
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./figure8.pdf}
\caption{Setup for the two-species generalised Dicke model. Combining two setups for the one-species generalised Dicke model, with a mutual synthetic light mode, allows the introduction of another atomic species. Circles represent the simulated modes, squares the reservoirs. The required measurements made are of occupation differences $R_{L_1}-R_{A}$ and $R_{L_2}-R_{B}$, which correlate certain tunnelling events, as indicated by arrows the same colour.}
\label{figtwodicke}
\end{figure}
Yet more exotic setups can be envisaged within this framework, through the use of additional reservoirs, or with multiple sites coupling to each reservoir. In the extreme case of each site in the simulated system being coupled to its own two reservoirs with fixed occupation differences (i.e.~a lattice of copies of the setup \figref{figreservoir}(a)), one would obtain an extended Bose-Hubbard model with additional pair creation/annihilation effects present at each site.

Perhaps more straightforward to realise experimentally is a similar setup with two reservoir modes (again with fixed occupation difference) common to all sites. This could be achieved by again taking the setup of \figref{figreservoir}(a), where instead of three individual sites we have three connected lattices (which may be one- or two-dimensional), with each of the lattices forming one of the three modes. Arranging the measurement cavity to scatter light with the same phase from all sites in a given reservoir mode renders them indistinguishable to the measurement, as per the matter mode structure, and so they behave like a collective reservoir. In the superfluid regime, each site in reservoir $j$ may again be approximated by the Gutzwiller ansatz, and described by a coherent state of amplitude $\beta_j$, where $|\beta_j|^2$ is the filling factor of the lattice. As the sites in each reservoir mode are indistinguishable, the correlated tunnellings from each of the reservoirs to/from the central lattice may occur due to any of the sites in each reservoir. Thus, the effective pair creation/annihilation in the central lattice may involve a pair of atoms far apart from each other in the lattice. We have analogous projectors to the three-site case Eq.~\eqref{eqproj3}, but with each number now representing the occupation across the whole of the respective lattice, rather than individual sites.
Note that the standard Bose-Hubbard dynamics for atoms tunnelling between sites in the central lattice is unaffected by the measurement, and thus the resulting system in the central lattice obeys a Bose-Hubbard model with long-range pair creation/annihilation;
\begin{align}
H_{\mathrm{PPBHM}}&=-J\displaystyle\sum_{<ij>}b^\dagger_ib_j + \frac{U}{2}\displaystyle\sum_i b^\dagger_ib^\dagger_ib_ib_i\nonumber\\&+(\lambda\displaystyle\sum_{ij}b^\dagger_ib^\dagger_j+h.c.),
\end{align}
where
\begin{equation}
\lambda=2\beta_1\beta_2\Delta_c|C_cJ^{c0}_{\mathrm{nn}}|^2.
\end{equation}
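The long-range nature of the pair term in $H_{\mathrm{PPBHM}}$ can be made concrete on a small system. The sketch below builds the Hamiltonian on three sites with a Fock cutoff of two atoms per site, using illustrative values of $J$, $U$ and $\lambda$, and checks that the pair-creation term couples the vacuum directly to a configuration with one atom on each of the two non-adjacent end sites:

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

n, M = 3, 3             # Fock cutoff per site (occupations 0,1,2), sites
a, I = destroy(n), np.eye(n)

def op(site, o):
    # embed a single-site operator o at the given site of the M-site chain
    mats = [o if s == site else I for s in range(M)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

b = [op(s, a) for s in range(M)]
J, U, lam = 1.0, 0.5, 0.1   # illustrative parameters

H = sum(-J * (b[s].T @ b[s + 1] + b[s + 1].T @ b[s]) for s in range(M - 1))
H = H + sum(0.5 * U * b[s].T @ b[s].T @ b[s] @ b[s] for s in range(M))
pair = lam * sum(b[i].T @ b[j].T for i in range(M) for j in range(M))
H = H + pair + pair.T

assert np.allclose(H, H.T)   # Hermitian (real symmetric here)

# the pair term connects the vacuum to a pair on the *non-adjacent* sites 0, 2
vac = np.zeros(n**M); vac[0] = 1.0
state = np.zeros(n**M); state[1 * n**2 + 0 * n + 1] = 1.0  # occupations (1,0,1)
assert abs(state @ H @ vac - 2 * lam) < 1e-12
```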
\subsection{Dynamical Global Gauge Fields}
Another application of this framework for quantum simulation is in the synthesis of artificial dynamical gauge fields. Current proposals to this end also typically employ ultracold atoms in optical lattices, focussing on local gauge fields based on quantum link models \cite{horn1981, orland1990, brower1999}, where changes in the gauge field due to the motion of matter are simulated by the motion of another particle species that plays the role of the gauge field \cite{banerjee2012, banerjee2013, stannigel2014}. In addition to the possibility of extending the variety of such models by the inclusion of long-range interactions, the introduction of the cavity-mediated dynamics presents a further opportunity: the long-range nature of the interactions allows for the correlated atomic motion itself to be long-range, and hence the links need not be local. As such, this paves the way for realising global dynamical gauge fields, where motion across all sites is controlled by a common link.
For example, consider a one-dimensional lattice, with light measurement of a site-dependent strength. Specifically (e.g.~by a graded intensity of the measurement pump), the measured light state has an amplitude proportional to $D_{\Pi0}=\sum_jj\Upsilon N_j$, for some constant $\Upsilon$. We consider two auxiliary sites $L$ and $R$, between which atoms can tunnel (tunnelling out of this pair is suppressed), and which are also measured in the same fashion; because their total occupation $N_L+N_R$ is conserved, they contribute an effective $\Upsilon N_R$ (plus a constant) to the measurement value. In this case, sites $L$ and $R$ form a link that mediates the dynamics in the rest of the lattice, as the Zeno Hamiltonian will then only contain the dynamics from the cavity-mediated correlated tunnelling events, where an atom tunnels in the main lattice simultaneously with a tunnelling event either in the link, or in the opposite direction in the main lattice. This follows from considering that a tunnelling $j\to j+1$ (or $L\to R$) increases the value of $D_{\Pi0}$ by $\Upsilon$, while a tunnelling $j\to j-1$ (or $R\to L$) decreases it by $\Upsilon$, and so the measurement outcome is preserved only when one of each of these processes occurs simultaneously.
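The bookkeeping behind this selection rule is elementary and can be checked directly: with $\Upsilon=1$, a lone tunnelling changes the measured value $D_{\Pi0}$, while a tunnelling paired with a link event $R\to L$, or with an opposite tunnelling elsewhere in the lattice, leaves it invariant (illustrative occupation numbers):

```python
# Occupations on a 5-site chain plus link sites (L, R); Upsilon = 1.
def D(occ, n_R):
    # measurement value: site-weighted occupation plus link contribution
    return sum(j * n for j, n in enumerate(occ)) + n_R

occ = [2, 1, 3, 0, 1]
n_L, n_R = 1, 1
d0 = D(occ, n_R)

# a lone tunnelling j -> j+1 shifts the measured value by +Upsilon ...
occ1 = occ.copy(); occ1[1] -= 1; occ1[2] += 1
assert D(occ1, n_R) == d0 + 1

# ... so it is Zeno-suppressed unless paired with an opposite event,
# e.g. a link tunnelling R -> L (which contributes -Upsilon):
assert D(occ1, n_R - 1) == d0

# or an opposite tunnelling k -> k-1 elsewhere in the main lattice:
occ2 = occ1.copy(); occ2[4] -= 1; occ2[3] += 1
assert D(occ2, n_R) == d0
```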
The resulting Hamiltonian after applying this constraint is
\begin{align}
H&=\lambda\sum_j (b^\dagger_jb_{j+1}(\sum_kb^\dagger_{k+1}b_{k} + \vartheta b_L^\dagger b_R)+h.c.)\nonumber\\&+\lambda\vartheta^2(b^\dagger_Lb_Rb^\dagger_Rb_L+h.c.),
\end{align}
where
\begin{equation}
\lambda=\Delta_c|C_cJ_{\mathrm{nn}}^{c0}|^2
\end{equation}
and $\vartheta^2$ is the ratio of the intensity of the pump used to drive dynamics at the link sites compared to the rest of the lattice. Equivalently, by mapping the link to a spin with $2S^Z=N_L-N_R$, this can be written
\begin{align}
H_{\mathrm{DGGF}}&=\lambda \sum_j(b^\dagger_jb_{j+1}(\sum_kb^\dagger_{k+1}b_{k} + \vartheta S^+) + h.c.)\nonumber\\& - 2\lambda \vartheta^2 (S^Z)^2.
\end{align}
This $\lambda$ is comparable in size to the equivalent parameter considered in the reservoir models (e.g. Eq.~\eqref{eqlambdares3}), and so may also be of a similar magnitude to the standard tunnelling $J^T_{\mathrm{nn}}$, with its precise value depending on the pump strength.
This forms a global-link dynamical matter-gauge field interaction, in contrast to current proposals for dynamical gauge fields, which are limited to local links by their finite-range interactions. Strictly, we note that pair-correlated tunnelling events are allowed within the main lattice independent of the link, provided that they occur in opposite directions, though the particle current across the lattice is wholly dependent on the link. These terms bypassing the link can be made less significant by adjustment of $\vartheta$, which also controls the gauge field energy terms $\propto (S^Z)^2$. These gauge field energy terms can be further engineered through the density-density interactions discussed above. Unlike the local link models ubiquitous in high-energy physics, the common global link here leads to a peculiar effect where the motion of a particle at any site can significantly affect the field experienced by all other particles, at all other sites.
\section{Conclusions}
In summary, we have characterised the new dynamics manifested by the interactions of atomic quantum gases with quantum light in optical cavities, exhibiting effects beyond those possible with classical light, and subsequently shown that these may then be controlled through measurement of the light leaked from the cavity. These effects include long-range correlated tunnelling, effective pair processes, and density-density interactions. Further, we have discussed how this provides opportunities for the enhancement of quantum simulations, by using these correlated processes. Specifically, we have demonstrated how the formalism can mimic superexchange interactions, reservoir models, and dynamical global gauge fields. This invites a wealth of opportunities for further study, such as combining the various simulation building blocks presented here to generate yet more exotic and interesting systems, finding additional building blocks to expand the simulation framework, and further characterisation of cavity-induced processes.
As discussed above, our proposal should be feasible with current state-of-the-art experimental setups. So far, several groups have trapped Bose-Einstein condensates inside cavities, without a lattice potential \cite{baumann2010,wolke2012,schmidt2014}, while others have scattered light from quantum gases trapped in lattice potentials, but with no cavity present \cite{weitenberg2011,miyake2011}. The amalgamation of these proposals has recently been achieved, where the light scattered into an optical cavity by the atoms generates a further, quantum potential for the atoms, which is dynamically evolving conditional on the atomic state \cite{klinder2015, landig2016}. These experiments already exhibit a particular case of the cavity-induced dynamics, where the cavity field causes the atoms to self-organise into charge density wave and supersolid phases.
Additionally, many recent experiments have demonstrated examples of quantum Zeno physics in similar, but less versatile settings, such as cavity QED and optical lattices \cite{patil2014, schafer2014, signoles2014, barontini2015}. Thus, the possibility to use measurement for such a selective suppression of dynamics is well verified. Furthermore, it may be possible to implement the dynamical effects discussed here using other types of systems that are also based on off-resonant scattering, such as molecules \cite{mekhov2013}, fermions \cite{ruostekoski2009}, spins \cite{cordobes2014}, ions \cite{blatt2012}, and semiconductor \cite{trauzettel2007} and superconducting qubits \cite{fink2009}, as their dynamics are based on similar mathematical structures.
\section*{Acknowledgements}
The authors thank the Engineering and Physical Sciences Research Council for financial support (Doctoral Training Account and EP/I004394/1).
\section{Introduction}
With the help of a 3+1 decomposition Einstein's equations can be split
into a set of constraint equations and a set of evolution equations
\cite{Arnowitt-Deser-Misner:1962,York:1979}. The four constraint
equations -- one in the Hamiltonian constraint and three in the
momentum constraint -- constrain the induced spatial metric
$\SMetric_{ij}$ and the extrinsic curvature $\ExCurv_{ij}$ on spatial
hypersurfaces representing instants of constant coordinate time $t$.
The constraint equations constrain only four of these initial
variables; the remaining ones are freely specifiable and have to be
chosen independently before the constraint equations can be solved. A
decomposition of the initial data separates the freely specifiable
variables from the constrained ones. Given a particular
decomposition, the construction of initial data then entails making
well-motivated choices for the freely specifiable independent
background data and then solving the constraint equations for the
constrained variables.
The conformal thin-sandwich decomposition has emerged as a
particularly popular decomposition among numerical relativists,
especially for the construction of quasi-equilibrium data (see, e.g.,
the reviews \cite{Cook:2000,Baumgarte-Shapiro:2003} and a brief
discussion below). This variation of the original (non-conformal)
thin-sandwich decomposition
\cite{Belasco-Chanian:1969,Bartnik-Fodor:1993,Giulini:1999} was
formally developed by York \cite{York:1999} (see also
\cite{Pfeiffer-York:2003}), but more restricted versions had been
introduced earlier \cite{Isenberg:1978,Wilson-Mathews:1989}. It has
been used, for example, for the construction of binary neutron stars
(e.g.
\cite{Baumgarte-Cook-etal:1997,Bonazzola-Gourgoulhon-Marck:1999,
Uryu-Eriguchi:2000,Gourgoulhon-Grandclement-etal:2001}), binary black
holes (e.g. \cite{Gourgoulhon-Grandclement-Bonazzola:2001a,
Grandclement-Gourgoulhon-Bonazzola:2001b,Yo-Cook-etal:2004,
Cook-Pfeiffer:2004,Caudill-Cook-etal:2006}) and black hole-neutron star
binaries (\cite{Baumgarte-Skoge-Shapiro:2004,Taniguchi-Baumgarte-etal:2005}).
Given this wealth of experience with the conformal thin-sandwich
decomposition, it came as quite a surprise when Pfeiffer and York
(\cite{Pfeiffer-York:2005}, hereafter PY) recently discovered
non-uniqueness in the solution of the conformal thin-sandwich system.
Even for `small' independent background data, which one would expect
to generate gravitational initial data close to a flat slice of flat
spacetime, the so-called ``extended'' set of conformal thin-sandwich
data allowed for two branches of solutions. One of these two branches
has comparatively weak gravitational fields and indeed approaches flat
space in the limit of vanishing background data, while the second
strong-field branch approaches a singular solution.
In this paper we discuss some of the aspects of this non-uniqueness.
We consider a spherically symmetric, constant density star and show
that even for this very simple model we can find -- for sufficiently
small values of the density -- two branches of solutions. These two
branches share some of the characteristics of the solutions found by
PY and illustrate their properties. For our spherically symmetric
solution the non-uniqueness of solutions is caused by a particular
term having the ``wrong sign'' (see, e.g.~\cite{York:1979}), and we
argue that the non-uniqueness found in the extended conformal
thin-sandwich equations may be caused by a similar term.
We note also that certain constrained evolution
schemes~\cite{Choptuik-Hirschmann-Liebling-Pretorius:2003,Rinne:2005}
solve a set of elliptic equations at every timestep which is very
similar to the extended conformal thin sandwich equations. These
authors observed occasional failure of their elliptic solvers in the
strong field regime, and it was
argued~\cite{Rinne:2005,Rinne-Steward:2005} that this failure is
caused by the ``wrong sign'' in the maximal slicing condition.
This paper is organized as follows. In Section \ref{Sec:CTS} we
briefly review the conformal thin-sandwich decomposition. We then
consider the Hamiltonian constraint in spherical symmetry in Section
\ref{Sec:HamToy}. First, in Sec.~\ref{SubSec:ToyModel}, we
construct analytic solutions for constant density stars and show that
these solutions consist of two branches with properties very similar
to the solutions found by PY. Subsequently, in
Sec.~\ref{SubSec:Math}, we prove that at least some of these
properties persist for arbitrary spherically symmetric solutions.
Finally, we provide a brief summary and discussion in Section
\ref{Sec:Sum}.
\section{The conformal thin-sandwich decompositions}
\label{Sec:CTS}
Conformal decompositions of the constraint equations start with a
conformal transformation of the spatial metric, $\SMetric_{ij} = \CF^4
\CMetric_{ij}$, where $\CF$ is the conformal factor and
$\CMetric_{ij}$ the conformally related metric. The Hamiltonian
constraint then becomes an equation for the conformal factor
\begin{equation}
\label{eq:Ham2}
\CCDu^2\CF-\frac{1}{8}\CRicciS\CF-\frac{1}{12}\TrExCurv^2\CF^5
+\frac{1}{8}\CF^{-7}\CA^{ij}\CA_{ij} + 2\pi \CF^5 \rho = 0.
\end{equation}
Here $\CCD$ and $\CRicciS$ are the covariant derivative and the trace of
the Ricci tensor associated with $\CMetric_{ij}$, and the extrinsic
curvature is decomposed into its trace $\TrExCurv$ and the conformally related
trace-free part $\CA^{ij}$,
\begin{equation}
\ExCurv^{ij} = \CF^{-10} \CA^{ij} + \frac{1}{3} \SMetric^{ij} \TrExCurv.
\end{equation}
For completeness we have also included the matter source $\rho = n^a
n^b T_{ab}$, where $n^a$ is the normal on the spatial hypersurface and
$T_{ab}$ the stress-energy tensor, and where summation is carried out
over four spacetime indices.
The matter term as written in Eq.~(\ref{eq:Ham2}) has the
defect~\cite{York:1979} that its positive sign combined with the
positive exponent of $\CF$ {\em prevent} use of the maximum principle
to prove local uniqueness of solutions. Therefore, it is not
immediately clear that solutions to Eq.~(\ref{eq:Ham2}) are unique
(indeed, we show in Sec.~\ref{Sec:HamToy}, that often they are not
unique). This defect can be cured~\cite{York:1979} by introduction of
a conformally scaled matter density $\tilde\rho=\CF^{8}\rho$; taking
$\tilde\rho\ge 0$ as freely specifiable data, the matter term in
Eq.~(\ref{eq:Ham2}) becomes $2\pi\CF^{-3}\tilde\rho$. Because of the
sign-change in the exponent, this term is now well-behaved and the
maximum principle is applicable. We will use Eq.~(\ref{eq:Ham2}) with
the unscaled $\rho$ as a toy example in Sec.~\ref{Sec:HamToy} below.
Besides that, we are only interested in vacuum space-times and
therefore do not include matter terms in the rest of this Section.
The conformal metric $\CMetric_{ij}$, meanwhile, is freely
specifiable. In the conformal thin-sandwich decompositions, the time
derivative of the conformal metric, $\dtCMetric_{ij} \equiv
\dtime\CMetric_{ij}$ is also considered freely specifiable. Using the
evolution equation for the spatial metric we can relate
$\dtCMetric_{ij}$ to $\CA_{ij}$,
\begin{equation}\label{A}
\CA^{ij} =
\frac{1}{2\CLapse}\Big(\CLong{\Shift}^{ij}-\dtCMetric^{ij}\Big),
\end{equation}
where the conformal (or densitized) lapse $\CLapse$ is related to the
lapse $N$ by $\Lapse = \CF^6 \CLapse$. Inserting this expression into
the momentum constraint yields
\begin{equation}\label{eq:Mom2}
\CCD_j\Big(\frac{1}{2\CLapse}\CLong{\Shift}^{ij}\Big)
-\frac{2}{3}\CF^6\CCDu^i\TrExCurv
-\CCD_j\Big(\frac{1}{2\CLapse}\dtCMetric^{ij}\Big) = 0
\end{equation}
where $\CLong{\Shift}^{ij}\equiv 2\CCDu^{(i}\Shift^{j)}
-2/3\,\CMetric^{ij}\CCD_k\Shift^k$ is the conformal longitudinal
operator.
There are two versions of the conformal thin sandwich approach. In the
{\em standard} conformal thin sandwich equations, one specifies
$(\CMetric_{ij}, \dtCMetric_{ij}; \TrExCurv, \CLapse)$ and suitable
matter-terms, if applicable. Given these background variables,
Eq.~(\ref{eq:Ham2}) and (\ref{eq:Mom2}) (together with (\ref{A})) can
be solved for the conformal factor $\CF$ and the shift $\Shift^i$,
which completes the set of initial data.
For maximal slices, $\TrExCurv=0$, Eqs.~(\ref{eq:Ham2}) and
(\ref{eq:Mom2}) decouple, so that Eq.~(\ref{eq:Mom2}) can be
considered first. For any given strictly positive $\CLapse$, this
equation is a linear elliptic equation so that the existence of a
unique solution $\Shift^i$ is guaranteed. This is a key motivation for
the entire structure and is discussed in
\cite{York:1999,Pfeiffer-York:2003}. Eq.~(\ref{eq:Ham2}) -- with
zero matter density $\rho$ -- becomes the
standard Lichnerowicz equation for the conformal factor
\cite{Lichnerowicz:1944} and again has a unique solution as long as
the base metric is in the positive Yamabe class.
In the {\em extended} system one regards $\dtime\TrExCurv$
instead of $\CLapse$ as freely specifiable. The lapse can then be
solved for from the trace of the evolution equation for the extrinsic
curvature, which often is written as
\begin{align}
\nonumber
\CCDu^2(\CLapse\CF^7)-(\CLapse\CF^7)&\bigg[\frac{\CRicciS}{8}
+\frac{5}{12}\TrExCurv^2\CF^4
+\frac{7}{8}\CF^{-8}\CA^{ij}\CA_{ij}\bigg]\\
\label{eq:Lapse2}
&=-\CF^5(\dtime\TrExCurv-\Shift^k\partial_k\TrExCurv).
\end{align}
The independent background data now
are $(\CMetric_{ij}, \dtCMetric_{ij}; \TrExCurv, \partial_t\TrExCurv)$
(and suitable matter terms, if applicable) and we solve five coupled
elliptic equations (\ref{eq:Ham2}), (\ref{eq:Mom2}) and
(\ref{eq:Lapse2}) for the conformal factor $\CF$, the shift $\Shift^i$
and the lapse $N$. This extended system has become very popular in
numerical relativity because the ability to set the time derivatives
$\dtCMetric_{ij}$ and $\dtime\TrExCurv$ to zero provides a means of
constructing quasi-equilibrium data.
However, PY demonstrated that the extended conformal thin sandwich
equations behave very differently from the standard set, even for
$\TrExCurv=0=\dtime\TrExCurv$. Specifically, they found two branches
of solutions for the same choices of free data.
We wish to point out that Eq.~(\ref{eq:Lapse2}) is
written in a misleading way. As written, it appears that
the maximum principle can be used for Eq.~(\ref{eq:Lapse2}).
However, $\CA^{ij}$ contains the lapse itself, cf. Eq.~(\ref{A});
displaying this dependence explicitly results in
\begin{align}
\nonumber
\CCDu^2(&\CLapse\CF^7)-\frac{7}{32}\frac{\CF^{6}}{(\CLapse\CF^7)}
\left(\CLong\Shift^{ij}-\dtCMetric^{ij}\right)
\left(\CLong\Shift_{ij}-\dtCMetric_{ij}\right)
\\
&-(\CLapse\CF^7)\bigg[\frac{1}{8}\CRicciS
+\frac{5}{12}\TrExCurv^2\CF^4\bigg]=
-\CF^5(\dtime\TrExCurv-\Shift^k\partial_k\TrExCurv).
\label{eq:Lapse3}
\end{align}
The first line of this equation has the structure
\begin{equation*}
\CCDu^2(\CLapse\CF^7)-f \left(\CLapse\CF^7\right)^{-1}
\end{equation*}
with non-negative coefficient $f$. The sign of $f$ combined with the
negative exponent of $(\CLapse\CF^7)$ in the second term prevents
application of the maximum principle, as did the unscaled density
term in the Hamiltonian constraint (\ref{eq:Ham2}).
We believe that this term might very well be responsible for the
complex behavior exhibited by the extended conformal thin sandwich
equations. To support our claim, we analyze the Hamiltonian
constraint (\ref{eq:Ham2}) with an unscaled density in Section
\ref{Sec:HamToy} below. We construct an analytic solution in
spherical symmetry and explicitly show the existence of two branches
of solutions with properties very similar to those reported by PY.
\section{Hamiltonian constraint with unscaled matter density}
\label{Sec:HamToy}
As we discussed above, the Hamiltonian constraint Eq.~(\ref{eq:Ham2})
with {\em unscaled} matter density is not amenable to the maximum
principle, and it turns out to be interesting to investigate the consequences
of this fact. We consider the initial value problem at a moment of
time-symmetry, $\ExCurv_{ij}\equiv 0$, so that the momentum constraint
is satisfied identically. Assuming further conformal flatness and
spherical symmetry, the Hamiltonian constraint Eq.~(\ref{eq:Ham2})
reduces to
\begin{subequations}
\label{eq:Toy1}
\begin{equation}\label{eq:Toy1a}
\nabla^2\CF+2\pi\rho\CF^5=0,
\end{equation}
with $\CF>0$ and with boundary conditions
\begin{align}\label{eq:Toy1b}
\frac{\partial\CF}{\partial r}=0,&\qquad r=0,\\
\CF\to 1,&\qquad r\to\infty,
\end{align}
\end{subequations}
where $\nabla^2=\partial^2/\partial r^2+(2/r)\partial/\partial r$
represents the flatspace Laplacian, and we assume a density profile
$\rho(r)\ge 0$.
\subsection{The constant density star}
\label{SubSec:ToyModel}
First we will consider a constant density star of (conformal) radius
$R$ and mass-density
\begin{equation}\label{eq:Toy2}
\rho(r)=\left\{\begin{aligned}&\rho_0,&r<R,\\
&0,&r>R.
\end{aligned}
\right.
\end{equation}
We will take $R$ to be fixed, and examine the solutions of this
equation as we vary $\rho_0$. Thus, $\rho_0$ plays the role of the
``amplitude'' of the perturbation away from trivial initial data.
Solutions of Eq.~(\ref{eq:Toy1}) in the interior of the star can be
found with the help of the so-called Sobolev functions
\begin{equation}\label{eq:Toy3}
u_\alpha(r)\equiv\frac{(\alpha R)^{1/2}}{\big[r^2+(\alpha R)^2\big]^{1/2}},
\end{equation}
which satisfy
\begin{equation}\label{eq:Toy4}
\nabla^2u_\alpha=-3u_\alpha^5.
\end{equation}
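Equation (\ref{eq:Toy4}) is easily verified numerically; the following sketch checks the identity $\nabla^2 u_\alpha=-3u_\alpha^5$ by central finite differences at a few sample radii (illustrative values of $R$ and $\alpha$):

```python
import math

R, alpha = 1.0, 0.7   # illustrative values
aR = alpha * R

def u(r):
    # Sobolev function u_alpha(r)
    return math.sqrt(aR) / math.sqrt(r*r + aR*aR)

h = 1e-4  # finite-difference step
for r in (0.3, 0.8, 2.0):
    # flat-space radial Laplacian u'' + (2/r) u' by central differences
    lap = (u(r + h) - 2*u(r) + u(r - h)) / h**2 \
          + (2.0 / r) * (u(r + h) - u(r - h)) / (2*h)
    assert abs(lap + 3*u(r)**5) < 1e-5
```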
Considering the function $Cu_\alpha$, we find that this function
satisfies Eq.~(\ref{eq:Toy1a}) for any choice of $\alpha$, given that
$C=(2\pi\rho_0/3)^{-1/4}$. Indeed {\em any} solution $\bar\CF$ to
Eqs.~(\ref{eq:Toy1}) must be of this form in the interior of the star:
The function $Cu_{\bar\alpha}$ with $\bar\alpha=C^2[\bar\CF(0)]^{-2}$
has the same value and derivative as $\bar\CF$ at the origin, and as
we show in the next section, this implies that $\bar\CF\equiv
Cu_{\bar\alpha}$ throughout the interior of the star.
In the exterior, the only solutions of the flat-space Laplace
equation with asymptotic value unity are the functions $\beta/r+1$,
for some parameter $\beta$. Consequently, any solution to
Eq.~(\ref{eq:Toy1}) must be a member of the family of functions
\begin{equation}\label{eq:Toy4_qq}
\CF(r)=\left\{\begin{aligned}&Cu_{\alpha}(r),&r<R\\
&\frac{\beta}{r}+1,&r>R
\end{aligned}
\right.
\end{equation}
with $C$ given above, and $\alpha$, $\beta$ real parameters. The
parameters $\alpha$ and $\beta$ are determined by continuity of $\CF$
and its first derivative at the surface of the star,
\begin{align}
\frac{\beta}{R}+1 &= C u_\alpha(R),\label{eq:Toy4a}\\
-\frac{\beta}{R^2} &= C u'_\alpha(R),\label{eq:Toy4b}
\end{align}
where a prime denotes $\partial/\partial r$. Eliminating $\beta$, we
find that $\alpha$ has to satisfy,
\begin{equation}\label{eq:Toy5}
\rho_0 R^2=\frac{3}{2\pi}f^2(\alpha),
\end{equation}
where
\begin{equation}
f(\alpha)=\frac{\alpha^5}{(1+\alpha^2)^3}.
\end{equation}
Given a value for $\alpha$ we can find $\beta$ from (\ref{eq:Toy4a})
or (\ref{eq:Toy4b}), which then completely specifies a solution to
(\ref{eq:Toy1}).
The non-uniqueness of the solutions arises through the properties of
the function $f(\alpha)$. We can see immediately that $f(\alpha)$
approaches zero for both $\alpha \to 0$ and $\alpha \to \infty$. For
a sufficiently small value of $\rho_0 R^2$ in (\ref{eq:Toy5}) we may
therefore pick either a small or a large value of $\alpha$, which, as
we will show below, corresponds to either a strong-field or a
weak-field solution.
Examining $f(\alpha)$ more carefully, we find
$f'(\alpha)=\alpha^4(5-\alpha^2)/(1+\alpha^2)^4$, so that $f$ attains its
maximum at $\alpha_c=\sqrt{5}$. Therefore, Eq.~(\ref{eq:Toy5}) has no
solution if $\rho_0$ is larger than the critical value
\begin{equation} \label{eq:rho_crit}
\rho_c=\frac{3}{2\pi R^2}f^2(\alpha_c)=\frac{3}{2\pi R^2}\frac{5^5}{6^6}
\approx \frac{0.0320}{R^2}.
\end{equation}
At the critical density Eq.~(\ref{eq:Toy5}) has exactly one solution,
$\alpha=\alpha_c$, while below the critical density there are two
solutions; one with $\alpha<\alpha_c$ and one with $\alpha>\alpha_c$.
This behavior is in complete analogy to the behavior of the extended
conformal thin-sandwich system examined in PY.
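The two branches are easy to exhibit numerically. For a subcritical density, the sketch below solves Eq.~(\ref{eq:Toy5}) by bisection on either side of $\alpha_c$; using the ADM-energy $E=2R/\alpha^2$ derived below, it also confirms that the small-$\alpha$ branch is the strong-field one:

```python
import math

def f(alpha):
    return alpha**5 / (1 + alpha**2)**3

alpha_c = math.sqrt(5.0)
rho_c_R2 = (3.0 / (2.0 * math.pi)) * f(alpha_c)**2  # critical rho_0 R^2
assert abs(rho_c_R2 - 0.0320) < 1e-3

def bisect(g, a, b, tol=1e-12):
    # simple bisection for a sign change of g on [a, b]
    fa = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if g(m) == 0 or b - a < tol:
            return m
        if (g(m) > 0) == (fa > 0):
            a, fa = m, g(m)
        else:
            b = m
    return 0.5 * (a + b)

rho0_R2 = 0.02                                      # subcritical density
target = math.sqrt(2.0 * math.pi * rho0_R2 / 3.0)   # solve f(alpha) = target
g = lambda a: f(a) - target
alpha_strong = bisect(g, 1e-6, alpha_c)   # strong-field branch, alpha < alpha_c
alpha_weak = bisect(g, alpha_c, 1e3)      # weak-field branch, alpha > alpha_c

E_strong, E_weak = 2.0 / alpha_strong**2, 2.0 / alpha_weak**2
assert alpha_strong < alpha_c < alpha_weak
assert E_strong > E_weak   # strong-field branch carries the larger ADM-energy
```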
Having just derived all solutions to Eq.~(\ref{eq:Toy1}), we now
discuss their properties in more detail. It turns out to be convenient
to parametrize these solutions by $\alpha$. Each value of $\alpha$
corresponds to precisely one solution with $\rho_0$ given by
Eq.~(\ref{eq:Toy5}). Both limiting cases, $\alpha\to 0$ and
$\alpha\to\infty$ correspond to the limit of vanishing mass-density
(see Eq.~(\ref{eq:Toy5})).
We begin by computing the ADM-energy, which can be found using
Eq.~(\ref{eq:Toy4a}),
\begin{equation}
E=2\beta=\frac{2}{\alpha^2}R.
\end{equation}
For large $\alpha$, the ADM-energy tends to zero and we recover flat
space. In the limit $\alpha\to 0$, however, the energy grows without
bound, despite the fact that $\rho\to 0$ as $\alpha\to 0$. This
establishes the $\alpha > \alpha_c$ branch as the weak-field branch,
and $\alpha < \alpha_c$ as the strong-field branch. We show a graph
of the energy as a function of density in Figure~\ref{fig:ToyEnergy}.
\begin{figure}
\includegraphics[scale=0.5]{Energy.eps}
\caption{\label{fig:ToyEnergy}ADM-energy and rest mass as function of
$\rho_0 R^2$ for the constant density star of Section
\ref{SubSec:ToyModel}.}
\end{figure}
Next we consider the rest mass $M$ of the star, which is given by
\begin{align}\nonumber
M&=\int_{r<R} \rho_0 \sqrt{g} dV=\int_0^R \rho_0 \CF^6\, 4\pi r^2\,dr\\
&=\frac{3}{4\alpha^5}\left[\alpha-\alpha^5+(1+\alpha^2)^3\arctan(\alpha^{-1})\right]R,
\end{align}
where we have used Eq.~(\ref{eq:Toy5}) to eliminate $\rho_0$. This
expression has the limiting values
\begin{align}
M&\approx \frac{3\pi}{\alpha^5}R &&\mbox{for }\alpha\to 0,\\
M&\approx \frac{2}{\alpha^2}R &&\mbox{for }\alpha\to\infty.\label{eq:M+}
\end{align}
The weak-field limit $\alpha\to\infty$ corresponds to the limit in
which the star has vanishing mass, whereas the strong-field limit
$\alpha\to 0$ results in a star with unbounded mass, even though the
density itself approaches zero. This behavior is caused by the fact
that the conformal factor, and hence the proper volume inside the
stellar radius $R$ diverges more rapidly than the rate at which the
density $\rho_0$ vanishes.
We point out that for all $\alpha>0$ we have $E<M$, so that the star
has negative binding energy as expected. In the Newtonian limit
$\alpha\to\infty$ we recover the Newtonian binding energy,
\begin{equation}
E-M\approx -\frac{12}{5\alpha^4}R\approx-\frac{3}{5}\frac{M^2}{R},
\end{equation}
where we have used Eq.~(\ref{eq:M+}) in the second step.
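Both the inequality $E<M$ and the Newtonian limit can be confirmed numerically. The sketch below evaluates the exact expressions for $E$ and $M$ (in units $R=1$) and checks that the binding energy approaches $-\frac{3}{5}M^2/R$ at the few-percent level already for $\alpha=10$:

```python
import math

def E(alpha, R=1.0):
    # ADM-energy of the constant-density-star initial data
    return 2.0 * R / alpha**2

def M(alpha, R=1.0):
    # rest mass, with rho_0 eliminated via the matching condition
    return 0.75 / alpha**5 * (alpha - alpha**5
           + (1 + alpha**2)**3 * math.atan(1.0 / alpha)) * R

for alpha in (10.0, 30.0):
    binding = E(alpha) - M(alpha)
    newtonian = -0.6 * M(alpha)**2        # -(3/5) M^2 / R with R = 1
    assert binding < 0                    # the star is bound: E < M
    assert abs(binding - newtonian) < 0.05 * abs(binding)
```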
Finally, we locate the apparent horizons in this family of initial
data sets. For a time-symmetric hypersurface, apparent horizons
coincide with maximal surfaces, which in spherical symmetry and
conformal flatness are given by the roots of
\begin{equation}
\frac{\partial\CF}{\partial r}+\frac{\CF}{2r}=0.
\end{equation}
For $\alpha\!>\!1$, no roots to this equation exist, so that the
initial data surface does not contain an apparent horizon. For
$\alpha<1$, two roots exist, one in the interior of the star at
$r=\alpha R$, and one in the exterior at $r=R/\alpha^2$. The latter
one is the outermost extremal surface, which is the apparent horizon.
Both extremal surfaces merge on the surface of the star for
$\alpha=1$. The density, ADM-energy and rest mass at formation of the
apparent horizon are $\rho_0|_{\alpha=1}=3/(128\pi R^2)\approx 0.0075
R^{-2}$, $E=2R$, and $M|_{\alpha=1}=3\pi R/2\approx 4.71 R$.
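The horizon locations quoted above can be verified directly from the extremal-surface condition; the sketch below (with the illustrative choice $\alpha=1/2<1$) confirms the interior extremal surface at $r=\alpha R$ and the apparent horizon at $r=R/\alpha^2$:

```python
import math

R, alpha = 1.0, 0.5   # alpha < 1, so an apparent horizon is present
aR = alpha * R

def psi(r):
    # interior: Sobolev profile (the overall scale C drops out of the
    # root condition); exterior: beta/r + 1 with beta = R/alpha**2
    if r < R:
        return math.sqrt(aR) / math.sqrt(r*r + aR*aR)
    return (R / alpha**2) / r + 1.0

def extremal(r, h=1e-7):
    # d(psi)/dr + psi/(2r); its roots are the extremal surfaces
    return (psi(r + h) - psi(r - h)) / (2*h) + psi(r) / (2*r)

assert abs(extremal(alpha * R)) < 1e-6      # interior extremal surface
assert abs(extremal(R / alpha**2)) < 1e-6   # apparent horizon
assert abs(extremal(2.0)) > 1e-3            # nonzero at an intermediate radius
```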
We now turn our attention to the critical point. Around its maximum
$f(\alpha)$ behaves like a parabola, therefore
\begin{equation}\label{eq:Parabola}
\alpha-\alpha_c\propto
\pm(\rho_c-\rho_0)^{1/2}.
\end{equation}
Energy and mass at the critical point are
\begin{align}
E_c&=\frac{2}{5}R,\\
M_c&=\frac{18}{125}\left(9\sqrt{5}\arctan(1/\sqrt{5})-5\right)R\approx 0.499R,
\end{align}
respectively. Since $\partial E/\partial\alpha\neq 0$ there, the
energy also changes parabolically with $\rho_0$,
\begin{equation}
E-E_c\;\propto\;\alpha-\alpha_c\;\propto\;
\pm(\rho_c-\rho_0)^{1/2}.
\end{equation}
This parabolic behavior is apparent in Fig.~\ref{fig:ToyEnergy}.
At the critical point the local uniqueness of solutions must break
down, since the two branches meet there. For this to happen, the
linearized operator must have a non-trivial solution at the critical
point. The linearization of Eq.~(\ref{eq:Toy1}) reads
\begin{subequations}
\label{eq:Toy7}
\begin{equation}\label{eq:Toy7a}
\nabla^2\delta\CF+10\pi\rho_0\CF^4\,\delta\CF=0,
\end{equation}
with boundary conditions
\begin{align}\label{eq:Toy7b}
\frac{\partial\delta\CF}{\partial r}=0,& \qquad r=0\\
\delta\CF\to 0,& \qquad r\to\infty.\label{eq:Toy7c}
\end{align}
\end{subequations}
We will now construct all solutions of Eq.~(\ref{eq:Toy7}). While
doing so, we consider $\rho_0$ as given and fixed. If $\delta\CF=0$
is the only solution, then the kernel of this equation is trivial, and
solutions to the non-linear equation (\ref{eq:Toy1}) are locally
unique. As just argued, at the critical point this will not be the
case, and there must be a non-zero solution of Eq.~(\ref{eq:Toy7}).
As it turns out, we can construct this solution analytically.
The key to solving Eq.~(\ref{eq:Toy7}) lies again in the Sobolev functions
$u_\alpha$. Recall that $\CF=Cu_\alpha$ satisfies Eq.~(\ref{eq:Toy1})
in the interior of the star for {\em any} value of $\alpha$. We can
therefore take the derivative of Eq.~(\ref{eq:Toy1}) with respect to
$\alpha$ and find
\begin{equation}\label{eq:Toy8}
\nabla^2\frac{\partial
u_\alpha}{\partial\alpha}+10\pi\rho_0 C^4u_\alpha^4\frac{\partial
u_\alpha}{\partial \alpha}=0.
\end{equation}
Choosing $\alpha$ to be a solution of (\ref{eq:Toy5}), so that it is
consistent with $\rho_0$, we can identify $C^4u_\alpha^4=\CF^4$, and
Eq.~(\ref{eq:Toy8}) reduces to Eq.~(\ref{eq:Toy7a}). Consequently,
any function $A \partial u_\alpha/\partial\alpha$, with $\alpha$ given
by (\ref{eq:Toy5}) and $A$ an arbitrary constant, satisfies
Eq.~(\ref{eq:Toy7a}). This forms a one-parameter family of functions,
{\em all} of which automatically satisfy the differential equation
Eqs.~(\ref{eq:Toy7a}) and the boundary condition (\ref{eq:Toy7b}) in
the interior.
Solutions $\delta\CF$ in the exterior must satisfy the Laplace
equation and the outer boundary condition (\ref{eq:Toy7c}), i.e. they
must take the form $B/r$ for some constant $B$. Since this is a
one-parameter family of solutions, we have found {\em all} solutions
to Eqs.~(\ref{eq:Toy7a}) and (\ref{eq:Toy7c}) in the exterior.
To find a global solution $\delta\CF$ we now have to find constants
$A$ and $B$ so that the interior solution $A\partial
u_\alpha/\partial\alpha$ matches the exterior solution $B/r$
continuously in both the functions and their first derivatives at the
stellar radius $r = R$. As expected, non-trivial solutions with
non-zero $A$ and $B$ exist only at the critical point
$\alpha=\alpha_c$. There, the solution $\delta\CF$ takes the form
\begin{equation}
\delta\CF_c(r)\propto\left\{\begin{aligned}
&\frac{5R^2-r^2}{(r^2+5R^2)^{3/2}},&r<R\\ &\frac{4}{6^{3/2}r},&r>R
\end{aligned}
\right.
\end{equation}
and it is easy to verify that it indeed satisfies Eq.~(\ref{eq:Toy7})
at the critical point.
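As a check of the matching argument, the two branches of $\delta\CF_c$ can be verified numerically to join with continuous value and slope at the stellar surface (a sketch in units with $R=1$):

```python
# C^1 matching of the critical kernel delta CF_c at the stellar surface r = R
R = 1.0
h = 1e-6

def interior(r):
    # (5 R^2 - r^2) / (r^2 + 5 R^2)^(3/2): the interior branch
    return (5.0 * R**2 - r**2) / (r**2 + 5.0 * R**2)**1.5

def exterior(r):
    # 4 / (6^(3/2) r): the exterior branch
    return 4.0 / (6.0**1.5 * r)

# equal values at r = R
assert abs(interior(R) - exterior(R)) < 1e-12

# equal one-sided slopes at r = R (finite differences)
d_in = (interior(R) - interior(R - h)) / h
d_out = (exterior(R + h) - exterior(R)) / h
assert abs(d_in - d_out) < 1e-4

print("delta CF_c matches in value and slope at r = R")
```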
\subsection{Results for general \boldmath $\rho\ge 0$}
\label{SubSec:Math}
A fully worked out example like the constant density star presented
above is very instructive. However, the example itself does not
provide any indication whether its behavior is generic. In this
Section we prove theorems valid for general $\rho\ge 0$ with compact
support, indicating that the behavior found for the constant density
star is indeed generic for the Hamiltonian constraint with unscaled
matter density. We will first show that for sufficiently ``large''
matter densities $\rho$, no solution exists. We will then consider the
critical point and show that if a critical point exists, the solution
must vary parabolically close to it, as did the constant density star,
cf. Eq.~(\ref{eq:Parabola}). Finally, we will prove a result which was
stated above to show that all solutions to the constant density star
have been found: If two functions each satisfying
Eqs.~(\ref{eq:Toy1a}) and (\ref{eq:Toy1b}) have the same value at the
origin, then they are identical.
We start with some preliminaries. Rewriting the Laplacian in
Eq.~(\ref{eq:Toy1}) we find
\begin{equation}
\left(\CF+r\CF'\right)'=-2\pi r\rho\CF^5\le 0.
\end{equation}
Therefore the combination $\CF+r\CF'$ is monotonically decreasing and
bounded from below by its asymptotic value for large $r$,
\begin{equation}\label{eq:Math1}
\CF+r\CF'\ge 1.
\end{equation}
Furthermore, integrating Eq.~(\ref{eq:Toy1}) over a sphere of radius
$R$ we find
\begin{equation}\label{eq:Math2}
4\pi R^2\; \CF'(R) =
-2\pi \int_{R} \rho\CF^5 dV.
\end{equation}
Since $\rho\ge 0$ and $\CF>0$ we have $\CF'\le 0$, so that $\CF$ is a
decreasing function of radius, which is bounded from below by its
asymptotic value, $\CF\ge 1$.
We can now show that for sufficiently large $\rho$ Eq.~(\ref{eq:Toy1})
does not admit strictly positive solutions $\CF$. Solving
Eq.~(\ref{eq:Math2}) for $\CF'(R)$ and substituting into
Eq.~(\ref{eq:Math1}) we find
\begin{equation}
2R\big[\CF(R)-1\big]\ge \int_{R} \rho\, \CF^5\, dV\ge\CF(R)^{5-k}
\int_R\CF^{k}\rho\,dV,
\end{equation}
for any $k\le 5$, where the last inequality follows from $\CF'\le 0$.
Rearranging terms we obtain
\begin{equation}
\frac{1}{R}\int_R\CF^k\rho\, dV\le 2\frac{\CF(R)-1}{\CF(R)^{5-k}}.
\end{equation}
The right hand side of this inequality is bounded independently of the
value of $\CF(R)$ by the biggest value of the function
\begin{equation}
g_k(x)=2\frac{x-1}{x^{5-k}}
\end{equation}
for $x\ge 1$. For $k\le4$, this function is bounded by
\begin{equation}
g_k(x)\le C_k \equiv 2\frac{(4-k)^{4-k}}{(5-k)^{5-k}},
\end{equation}
so that any solution $\CF$ satisfies the integral bounds
\begin{equation}\label{eq:MathBound}
\frac{1}{R}\int_R \CF^k \rho\, dV\le C_k
\end{equation}
for any $k \le 4$. For a given $k$, the bound $C_k$ is independent of $R$.
For positive $k$, $0<k\le 4$, this inequality constrains how large
solutions can be. For example,
\begin{equation}
C_4\ge \frac{1}{R}\int_R\CF^4\rho\,dV\ge \frac{\CF(R)^4}{R}\int_R\rho\, dV
\end{equation}
implies
\begin{equation}
\CF(R)\le C_4^{1/4}\left(\frac{1}{R}\int_R\rho\,dV\right)^{-1/4},
\end{equation}
which is a bound on how quickly the ``upper'' branch can diverge as
$\rho\to 0$.
For $k=0$, the inequality~(\ref{eq:MathBound}) becomes independent of $\CF$: If a solution
$\CF$ exists for a certain $\rho$, then
\begin{equation}\label{eq:MathBound1}
\frac{1}{R}\int_R\rho\, dV\le 2 \frac{4^4}{5^5}\approx 0.163.
\end{equation}
Equation~(\ref{eq:MathBound1}) holds for any $R$ and any strictly
positive solution of Eq.~(\ref{eq:Toy1}); therefore, if a density
distribution $\rho(r)$ satisfies
\begin{equation}\label{eq:MathBound2}
m(R) \equiv \int_{R} \rho\,dV > 2\frac{4^4}{5^5}R
\end{equation}
even for one $R$, then no regular solution to the Hamiltonian
constraint Eq.~(\ref{eq:Toy1}) exists for this density.
For the constant density star, $m(r)$ is largest at the surface of the
star, $r=R$, where
\begin{equation}
m(R)=\rho_0\int_R 4\pi r^2 dr=\frac{4\pi}{3}\rho_0R^3.
\end{equation}
Equation~(\ref{eq:MathBound2}) then gives the necessary bound
$\rho_0\lesssim 0.0389/R^2$ for the existence of solutions.
Comparison with the exact critical density $\rho_c=0.0320/R^2$ from
Eq.~(\ref{eq:rho_crit}) reveals that the upper bound of the theorem is
only 20 per cent larger than the exact critical density (see also
Fig.~\ref{fig:ToyEnergy}).
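The bounds in this subsection are easy to evaluate numerically; the following sketch reproduces $C_0\approx 0.163$ and the resulting necessary bound on $\rho_0$ for the constant density star:

```python
import math

# C_k = 2 (4-k)^(4-k) / (5-k)^(5-k): the maximum of g_k(x) = 2(x-1)/x^(5-k), x >= 1
def C(k):
    return 2.0 * (4.0 - k)**(4.0 - k) / (5.0 - k)**(5.0 - k)

print(f"C_0 = {C(0):.4f}")  # 0.1638, i.e. 2*4^4/5^5

# constant density star: m(R) = (4 pi / 3) rho_0 R^3, so the existence bound
# requires rho_0 <= 3 C_0 / (4 pi R^2)
rho_bound = 3.0 * C(0) / (4.0 * math.pi)
print(f"necessary bound: rho_0 <~ {rho_bound:.4f} / R^2")  # ~0.039 / R^2
```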
Let us now examine the character of the critical point. We take a
smooth sequence of non-negative densities, $\rho_{\gamma}$, such that
$\rho \equiv 0$ when $\gamma = 0$. We then look for a smooth sequence
of solutions $\CF_\gamma$ to Eq.~(\ref{eq:Toy1}) with the density
$\rho$ given by $\rho_\gamma$, starting from $\CF \equiv 1$ at
$\gamma=0$. The Implicit Function Theorem tells us that as long as
the linearized equation, Eq.~(\ref{eq:Toy7}),
\begin{subequations}
\label{eq:lin}
\begin{equation}\label{eq:lina}
\nabla^2\delta\CF+10\pi\rho_{\gamma}\CF_{\gamma}^4\,\delta\CF=0,
\end{equation}
with boundary conditions
\begin{align}\label{eq:linb}
\frac{\partial\delta\CF}{\partial r}=0,& \qquad r=0\\
\delta\CF\to 0,& \qquad r\to\infty.\label{eq:linc}
\end{align}
\end{subequations}
has {\em no} non-trivial solution, then the full nonlinear equation
\begin{subequations}
\label{eq:beta}
\begin{equation}\label{eq:betaa}
\nabla^2\CF_{\gamma}+2\pi\rho_{\gamma}\CF_{\gamma}^5=0,
\end{equation}
with boundary conditions
\begin{align}
\frac{\partial\CF}{\partial r}=0,&\qquad r=0,\\
\CF_{\gamma}\to 1,&\qquad r\to\infty,
\end{align}
\end{subequations}
has a regular solution which changes smoothly as a function of $\gamma$.
The obvious question to ask is what happens if the sequence approaches
the point where the first kernel of Eq.~(\ref{eq:lin})
appears\footnote{Clearly there are sequences
$\rho_\gamma$ for which this never happens,
e.g. $\rho_\gamma\equiv0$.}. Let us assume this happens at $\gamma_0$.
The trick is to consider the limiting process rather than the limit
point itself. We know that when $\gamma = \gamma_0$ the equation
\begin{equation}\label{eq:gs}
\nabla^2\theta + 10\pi\rho_{\gamma_0}\CF_{\gamma_0}^4\theta = 0,
\end{equation}
has a positive solution $\theta$, going to zero at infinity. This is the ground state of a Schr\"odinger-type
equation: since it is the {\it first} appearance of a kernel, $\theta$ has no nodes, and thus it can be chosen to be everywhere positive.
We now differentiate
Eq.~(\ref{eq:betaa}) with respect to $\gamma$ (at any $\gamma < \gamma_0$) to find
\begin{equation}\label{eq:beta'}
\nabla^2\frac{d\CF_{\gamma}}{d\gamma} +
10\pi\rho_{\gamma}\CF_{\gamma}^4\frac{d\CF_{\gamma}}{d\gamma} =
- 2\pi\frac{d\rho_{\gamma}}{d\gamma}\CF_{\gamma}^5.
\end{equation}
Multiplying Eq.~(\ref{eq:gs}) by $d\CF_{\gamma}/{d\gamma}$,
Eq.~(\ref{eq:beta'}) by $\theta$ and subtracting the results
we obtain
\begin{align}\label{eq:diff}
\nabla \cdot \Big(\frac{d\CF_{\gamma}}{d\gamma}&\nabla \theta -
\theta \nabla \frac{d\CF_{\gamma}}{d\gamma}\Big) = \frac{d\CF_{\gamma}}{d\gamma}\nabla^2 \theta -
\theta \nabla^2 \frac{d\CF_{\gamma}}{d\gamma}\nonumber \\
=& 10\pi(\rho_{\gamma}\CF_{\gamma}^4 - \rho_{\gamma_0}\CF_{\gamma_0}^4)\frac{d\CF_{\gamma}}{d\gamma}\theta
+
2\pi\frac{d\rho_{\gamma}}{d\gamma}\CF_{\gamma}^5\theta.
\end{align}
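The first equality in Eq.~(\ref{eq:diff}) is the standard Green-type identity $\nabla\cdot(u\nabla\theta-\theta\nabla u)=u\nabla^2\theta-\theta\nabla^2 u$; as an illustrative sanity check, it can be verified numerically in spherical symmetry with arbitrary smooth test functions:

```python
import math

# arbitrary smooth radial test functions (purely illustrative)
u = lambda r: math.exp(-r**2)
th = lambda r: 1.0 / (1.0 + r**2)

h = 1e-4
def d1(f, r): return (f(r + h) - f(r - h)) / (2.0 * h)
def d2(f, r): return (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
def lap(f, r): return d2(f, r) + 2.0 * d1(f, r) / r  # radial Laplacian

# divergence of the radial flux u th' - th u' vs. u lap(th) - th lap(u)
flux = lambda r: r**2 * (u(r) * d1(th, r) - th(r) * d1(u, r))
r0 = 1.3
lhs = d1(flux, r0) / r0**2
rhs = u(r0) * lap(th, r0) - th(r0) * lap(u, r0)
print(f"|lhs - rhs| = {abs(lhs - rhs):.1e}")  # ~0 up to finite-difference error
```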
Next we wish to integrate this equation over the whole space. Let us
assume that $\rho_{\gamma}$ has compact support, or, at least, falls
off rapidly at infinity. Both $\theta$ and $d\CF_{\gamma}/d\gamma$
fall off at infinity like $1/r$ and their first derivatives fall off
like $1/r^2$. This means that the total divergence upon integration
becomes a surface term, and the integrand falls off like
$1/r^3$. Therefore the integral vanishes. We get
\begin{equation}\label{eq:int}
5\int(\rho_{\gamma}\CF_{\gamma}^4 - \rho_{\gamma_0}\CF_{\gamma_0}^4)\frac{d\CF_{\gamma}}{d\gamma}\,\theta\, dV +
\int \frac{d\rho_{\gamma}}{d\gamma}\CF_{\gamma}^5\,\theta\, dV = 0.
\end{equation}
Consider this in the limit as $\gamma \rightarrow \gamma_0$. The
second term tends to the constant
\begin{equation}
{\cal I}\equiv
\int\left.\frac{d\rho_\gamma}{d\gamma}\right|_{\gamma_0}\!\CF^5_{\gamma_0}\,\theta\,
dV.
\end{equation}
Note that both $\theta$ and $\CF_{\gamma_0}$ depend only on
$\rho_{\gamma_0}$, and not its derivative. Therefore, changing
$\left.d\rho_\gamma/d\gamma\right|_{\gamma_0}$ via
\begin{equation}
\rho_\gamma \to \rho_\gamma + v(\gamma-\gamma_0)
\end{equation}
for any function $v(r)$ will change ${\cal I}$ as
\begin{equation}
{\cal I}\to {\cal I}+\int v\, \CF^5_{\gamma_0}\,\theta\,dV.
\end{equation}
Clearly, except for special instances, ${\cal I}$ will be non-zero.
Let us now assume this generic case, ${\cal I}\neq 0$. In the limit
$\gamma\to\gamma_0$, the second term in Eq.~(\ref{eq:int}) becomes the
non-zero constant ${\cal I}$, whereas the first term seems to go to
zero. This cannot be and thus we are forced to conclude that the
limiting process as $\gamma \rightarrow \gamma_0$ must be somehow
singular.
The only thing that can possibly go wrong is that $d\CF_{\gamma}/d\gamma
\rightarrow \infty$. Not only does it have to blow up, it must do so over an
extended region. This is the only way that the integral, in the limit,
can go to a nonzero value. Let us assume
\begin{equation}
\frac{d\CF_\gamma}{d\gamma}\propto (\gamma_0-\gamma)^p
\end{equation}
for some negative power $p<0$. This implies
\begin{equation}
\CF_\gamma-\CF_{\gamma_0} \propto (\gamma_0-\gamma)^{p+1}.
\end{equation}
Consider now the term
\begin{align}
\rho_\gamma\CF_\gamma^4&-\rho_{\gamma_0}\CF^4_{\gamma_0}
=\left(\rho_\gamma-\rho_{\gamma_0}\right)\CF_\gamma^4 \\ \nonumber
+&\rho_{\gamma_0}\left(\CF_\gamma^3+\CF_\gamma^2\CF_{\gamma_0}
+\CF_\gamma\CF^2_{\gamma_0}+\CF^3_{\gamma_0}\right)
\left(\CF_\gamma-\CF_{\gamma_0}\right)
\end{align}
in the first integral in Eq.~(\ref{eq:int}). The first term on the
right hand side scales as $\gamma_0-\gamma$ as the critical point is
approached, whereas the second one scales as
$(\gamma-\gamma_0)^{p+1}$. Because $p+1<1$, the second term
dominates, and the full integrand scales as
\begin{align}\label{eq:fullScaling}
\left(\rho_\gamma\CF_\gamma^4-\rho_{\gamma_0}\CF^4_{\gamma_0}\right)\frac{d\CF_\gamma}{d\gamma}
\propto (\gamma_0-\gamma)^{p+1}(\gamma_0-\gamma)^p
\end{align}
close to the critical point. In the generic case, the integral has to
approach the finite, non-zero value ${\cal I}$ in the limit
$\gamma\to\gamma_0$, which can only happen if $p = -1/2$. Therefore,
the solution must vary parabolically,
\begin{equation}
\CF_\gamma-\CF_{\gamma_0}\propto \left(\gamma-\gamma_0\right)^{1/2}.
\end{equation}
This is exactly the behavior we have seen in the constant density star
model, and it is consistent with what Pfeiffer and
York \cite{Pfeiffer-York:2005} observed.
This parabolic nature of solutions near the critical point can be
demonstrated explicitly even in the non-spherically symmetric case by
using Lyapunov-Schmidt techniques \cite{OMW}. We conjecture that a
second branch of solutions exists beyond the critical point as a
consequence of this parabolic behavior. We have seen this explicitly
for the constant density stars, and we again refer to \cite{OMW} for a
more general treatment.
So far we have shown that there are distributions $\rho(r)$ for which
no solutions of Eq.~(\ref{eq:Toy1}) exist, and based on the parabolic
nature of the solutions at the critical point we have conjectured that
there are distributions for which exactly two solutions exist. We do
not know whether this is generic.
Having moved past the first critical point, an open question is
whether another critical point is reached. Immediately past the first
critical point, $\rho$ does not change, but the conformal factor
increases. In the language of the Schr\"odinger equation this means
that the potential deepens
and the zero-energy ground state becomes a bound state with negative energy.
As one moves
away from the critical point along the upper branch one is moving
`back' toward smaller $\gamma$, and so we expect $\rho_{\gamma}$ to
decrease while $\CF$ continues to increase. It could be that $\rho
\CF^5$ increases enough that the first excited state appears with zero
energy, or that $\rho \CF^5$ decreases again so that the ground state
becomes a zero energy state again. In either case, the system reaches
another critical point and the solution curve may turn again.
Alternatively, $\rho_\gamma\CF_\gamma^5$ may be such that neither of
these two cases happens and the solution continues on all the way to
$\gamma = 0$. This last alternative occurs for the constant density
star, as we have shown by explicit calculation; however, we do not
know whether this behavior is generic.
Finally, we show that if we have two positive solutions $\CF_1$ and
$\CF_2$ to Eq.~(\ref{eq:Toy1}) whose maxima agree, then they are
identical. To prove this we first note that the maxima of both must
occur at $r = 0$. The maximum principle tells us that there cannot be
a positive minimum, therefore there can only be one maximum, and
therefore it must occur at the origin. Therefore at $r=0$, the two
functions $\CF_1$ and $\CF_2$ agree, their first derivatives both
vanish, and the second derivatives are equal (from
Eq.~(\ref{eq:Toy1})). By differentiating Eq.~(\ref{eq:Toy1}), one can
show that all the derivatives of the two functions agree at $r =
0$. If the functions were analytic, we would be done. However, there is
no reason to expect that this be true. We need a more subtle
argument.
Track $\CF_1$ and $\CF_2$ as they move out from the origin. If they remain
the same all the way to infinity, we are done. Instead, let us assume that
at some point $\CF_1 > \CF_2$. Therefore we must encounter a region in
which $d(\CF_1 - \CF_2)/dr > 0$ and $\CF_1 - \CF_2 > 0$ and inside this
region $\CF_1 \ge \CF_2$. Take a point in this region, call it $R_0$.
Consider the equations satisfied by $\CF_1$ and $\CF_2$, subtract one from
the other and integrate over the ball of radius $R_0$. We get
\begin{equation}
\int_{R_0}\nabla^2(\CF_1 - \CF_2) dV + 2\pi\int_{R_0} \rho(\CF_1^5 -
\CF_2^5) dV = 0.
\end{equation}
The Laplacian becomes a boundary term, which is positive, because the
gradient of the difference is positive at $R_0$, while the bulk term
is also non-negative. This cannot be, so therefore the initial
assumption that the functions are different must be incorrect.
\section{Summary and Discussion}
\label{Sec:Sum}
In this paper we investigate the reason for non-uniqueness in the
extended conformal thin sandwich equations~\cite{Pfeiffer-York:2005}.
We argue that a term with the ``wrong sign'' in the elliptic equation
determining the lapse, Eq.~(\ref{eq:Lapse3}), is the cause for
non-uniqueness. The sign of this particular term is such that the
maximum principle cannot be applied to prove local uniqueness of
solutions. We support our claim by examining a simpler equation
having a term with an analogous ``wrong sign'', namely the
Hamiltonian constraint with unscaled~\cite{York:1979} matter source
$\rho$ (cf. Eq.~(\ref{eq:Ham2})). Specializing to constant density
stars we construct analytical solutions. We find two branches of
solutions -- a weak-field and a strong-field branch -- with properties
that are remarkably similar to those found by PY.
We comment briefly that solutions to the {\em original} conformal
thin-sandwich decomposition, consisting of the Hamiltonian constraint
(\ref{eq:Ham2}) and the momentum constraint (\ref{eq:Mom2}) only, are
unique \cite{Cantor:1977,Cantor-Brill:1981,York:1999,%
Bartnik-Isenberg:2004,Maxwell:2005,Pfeiffer-York:2005}. PY found
multiple solutions only for the {\em extended} conformal thin-sandwich
decomposition, which includes the lapse equation (\ref{eq:Lapse2}) in
addition to the two constraints. This, too, suggests that the
non-uniqueness is caused by the lapse equation, in accordance with our
findings.
Our findings are certainly relevant for numerical work: If one wants
to solve the extended conformal thin sandwich equations, then
apparently, the possibility of finding two solutions is unavoidable.
Whether this will pose a problem for numerical work is less clear.
Sufficiently far away from the critical point, the solutions along the
upper and lower branch are significantly different, and it should be
obvious which solution is desired (generally the ``lower'' one, which
reduces to flat space for trivial free data). Many different
researchers have solved the extended conformal thin sandwich equations
without
problems~\cite{Baumgarte-Cook-etal:1997,Bonazzola-Gourgoulhon-Marck:1999,%
Uryu-Eriguchi:2000,Gourgoulhon-Grandclement-etal:2001,%
Gourgoulhon-Grandclement-Bonazzola:2001a,%
Grandclement-Gourgoulhon-Bonazzola:2001b,Yo-Cook-etal:2004,%
Cook-Pfeiffer:2004,Caudill-Cook-etal:2006,Baumgarte-Skoge-Shapiro:2004,%
Taniguchi-Baumgarte-etal:2005} and have obtained a solution with
satisfactory properties. However, past success is no guarantee for
future success, and for choices of free data which may be interesting
in the future, non-uniqueness issues could very well arise, especially
if one is interested in solutions which happen to be ``close'' to the
critical point. Indeed, in constrained evolution schemes, which
solve elliptic equations similar to the extended conformal thin
sandwich equations, it was reported that the elliptic solver failed to
converge in near-critical collapse of Brill
waves~\cite{Choptuik-Hirschmann-Liebling-Pretorius:2003,Rinne:2005}.
It was further argued that this failure related to the ``wrong sign''
in a term of the maximum slicing
condition~\cite{Rinne:2005,Rinne-Steward:2005}. How precisely a
numerical code behaves in such cases depends very sensitively on its
implementation. Some algorithms may not converge at all, like the
multi-grid schemes
in~\cite{Choptuik-Hirschmann-Liebling-Pretorius:2003,Rinne:2005},
while other algorithms may converge to one of the two solutions
(e.g. the Newton-Raphson method used in PY;
cf. \cite{Pfeiffer-Kidder-etal:2003}). In the former case it may be
difficult to ascertain whether failure of the numerical method is
indeed due to non-uniqueness properties of the underlying analytic
problem (rather than a bug), whereas in the latter case one is faced
with the question of which of the two solutions one wants, and how to
ensure convergence toward the desired solution.
\acknowledgments
We would like to thank Edward Malec and Darragh Walsh for helpful
comments, as well as the Isaac Newton Institute and the California
Institute of Technology for hospitality during various stages of this
work. This research was supported in part by a grant from the Sherman
Fairchild Foundation, by NSF grant PHY-0601459 and by NASA grant
NNG05GG52G to Caltech, as well as by NSF Grant PHY-0456917 to Bowdoin
College.
\section{Introduction}
In quantum networking, the spectra produced by photon sources are critical to their performance. For quantum teleportation~\cite{quanttel} and entanglement swapping~\cite{entswappaper}, the photons entering the intermediate measurement system (Bell-state measurement~\cite{PhysRevA.51.R1727}) need to be indistinguishable~\cite{u2005generation}. Certifying this often involves measuring the joint-spectral intensity of the photon-pair source with a single-photon spectrometer~\cite{doi:10.1021/ac50039a022,doi:10.1063/1.121984,ToF1,ToF2,Chen:17,Chen:19,cheng2019broadband}. Moreover, adaptive bandwidth management has come to the forefront of current research as the community tries to circumvent the often-low emission rates of quantum technologies and provision the available photons to user needs~\cite{Lingaraju:21,Appas2021}. In this case, a single-photon spectrometer is required to characterize the highly entangled joint-spectral intensity~\cite{PRXQuantum.2.040304}. These previous demonstrations have often necessitated the use of expensive cryogenic single-photon detectors. It is therefore desirable to develop a single-photon spectrometer with high spectral resolution but without the use of cryogenic detectors.
A heterodyne spectrometer mixes the input signal with a tunable local oscillator and detects the strength of the intermediate frequency using a photodetector, e.g., a balanced photodiode pair followed by electrical amplification~\cite{1448067,blaney1975signal,menzies1976laser,parvitte2004infrared}. Over the years, heterodyne spectrometers have found many uses, including some of the earliest being characterization of atmospheric molecular composition~\cite{10.2307/1731444,Menzies:71}, astronomical observation~\cite{MUMMA1975}, and laser linewidth characterization~\cite{iet:/content/journals/10.1049/el_19800437}. More recently, heterodyne spectrometers are still primarily used for atmospheric molecular composition~\cite{Bomse:20,Deng:21,Sappey:21}.
Heterodyne spectrometers have the benefit that their frequency resolution is primarily limited only by the local-oscillator bandwidth and stability, assuming the post-processing electronics have the required precision. Because of this, heterodyne spectrometers have been demonstrated with resolution $< 125$ MHz ($< 1$ pm at 1550 nm)~\cite{kovalyuk2017chip,s17020348,352070,Sonnabend:02,WIRTZ20022457}. The sensitivity of these heterodyne spectrometers, on the other hand, has not been as impressive, often not much below -60 dBm~\cite{Furukawa:13,986811} when using photodiodes. Using superconducting-nanowire single-photon detectors, -126 dBm has been shown for narrowband signals~\cite{kovalyuk2017chip}. Others have shown that there are practical and theoretical limits to the sensitivity of heterodyne spectrometers and discussed them in the context of astronomical observation~\cite{blaney1975signal,AO1976} and laser side-band characterization~\cite{Boucher1993}.
In this work, we analyze the utility of heterodyne spectrometers for characterization of several light sources of interest to the growing quantum networking community. These very dim and often broadband light sources required the specification of a more general sensitivity limit for heterodyne spectroscopy, which we compare with the brightness of several interesting light sources. In Sec. \ref{demo}, we demonstrate a proof-of-principle fiber-based heterodyne spectrometer which has picometer resolution and high sensitivity in the conventional optical communications band. In Sec. \ref{SA}, we generalize the sensitivity analysis of~\cite{AO1976} with respect to the input spectra. Although our demonstration is in the infrared, our sensitivity analysis applies to any shot-noise-limited heterodyne detector for any input spectra, so it is relevant to many use cases. We use our generalized analysis to compare the measured sensitivity of our heterodyne spectrometer with the generalized sensitivity limit, and with the measured sensitivities of a conventional grating-based spectrometer and a direct-detection single-photon-detector-based spectrometer. Finally, in Sec. \ref{MBDLS}, we compare the heterodyne-spectrometer sensitivity limit with common single-photon sources, i.e., spontaneous parametric downconversion, spontaneous four-wave mixing, Raman scattering, and quantum dots.
\section{Fiber-based Demonstration}\label{demo}
In our simple single-mode optical-fiber-based heterodyne spectrometer (see Fig. \ref{fig:HTsetup}), an input signal of any polarization, and potentially unknown spectra, enters the input port. It is attenuated, if necessary, and polarized. The polarization controller is used to optimize power through the polarizer. The input signal is mixed with the local oscillator on a 50/50 fiber beamsplitter. The mixing products are detected by an amplified balanced photodiode pair (Thorlabs PDB430C). For convenience, the amplified subtracted detector output goes to an electronic spectrum analyzer (ESA, specifically an Agilent N9000A CXA Signal Analyzer). Alternatively, if an ESA is unavailable or a smaller device is desired, an analog-to-digital converter (ADC) connected to a microprocessor could have been used~\cite{s17020348} for a more cost-effective and integrated analysis solution.
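To make the detection principle concrete, the following minimal sketch (with purely illustrative parameters, not our instrument settings) simulates the balanced mixing in the rotating frame of the LO: the strong $|E_{\text{LO}}|^2$ and direct-detection terms cancel in the subtraction, leaving only a beat note at the intermediate frequency $f_{\text{sig}}-f_{\text{LO}}$.

```python
import cmath, math

# intermediate frequency f_sig - f_lo (illustrative: 8 MHz)
df = 8.0e6
E_sig, E_lo = 0.01, 1.0   # field amplitudes; LO much stronger than signal

fs, n = 100.0e6, 1000     # sample rate (Hz) and number of samples
max_err = 0.0
for k in range(n):
    t = k / fs
    field_s = E_sig * cmath.exp(2j * math.pi * df * t)     # signal in the LO frame
    field_lo = E_lo + 0j                                   # LO is DC in its own frame
    plus = abs((field_s + field_lo) / math.sqrt(2.0))**2   # one 50/50 splitter output
    minus = abs((field_s - field_lo) / math.sqrt(2.0))**2  # the other output
    beat = plus - minus                                    # balanced subtraction
    max_err = max(max_err,
                  abs(beat - 2.0 * E_sig * E_lo * math.cos(2.0 * math.pi * df * t)))

print(f"balanced output is 2 E_sig E_lo cos(2 pi df t); max deviation {max_err:.1e}")
```

An ESA (or an ADC with a digital spectrum estimate) then records the strength of this tone; sweeping the LO maps out the input spectrum.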
\begin{figure*}
\centering
\includegraphics[scale=0.6]{HetSpecsetupv7.pdf}
\caption{Spectrometer comparison experimental setup. Atten = Optional attenuator(s). COSA = Conventional optical spectrum analyzer. PC = Polarization controller. POL = Polarizer. ESA/ADC = Electronic spectrum analyzer or analog-to-digital converter with microprocessor.}
\label{fig:HTsetup}
\end{figure*}
In this experiment, we use two different input signals. We use a steep-edge filtered (1 nm bandpass filter from the Finisar Waveshaper 1000A) amplified-spontaneous-emission (ASE) source (Pritel SCG-40) and a narrowband (100-kHz linewidth, broadened to 50-500 MHz) tunable laser (Hewlett Packard 8168C). To generate the tunable narrowband local oscillator (LO), we use a tunable continuous-wave (CW) external-cavity laser (Pure Photonics PPCL550) with an intrinsic linewidth of about 10 kHz, frequency noise-broadened to about 200 kHz for $>0.1$-ms integration times~\footnote{We estimate the broadened bandwidth using the technique outlined in \cite{DiDomenico:10}, which used the manufacturer's measurement of the frequency noise.}, and relative intensity noise < -120 dBc/Hz at 100 kHz. There is also an internal 900-Hz frequency dither on the LO, which further broadens the laser to about 100 MHz for $>1$-ms integration times. For simplicity of this demonstration, we utilize the manufacturer's wavelength calibration of the LO. Others have already demonstrated real-time wavelength calibration that could be used instead~\cite{s17020348}. For comparison, we also record spectra on a conventional grating optical spectrum analyzer (Yokogawa AQ6370B) which has up to 20 pm resolution and -90 dBm sensitivity, implying a power-spectral-density sensitivity of -90 dBm/20 pm for signals of bandwidth $\geq$ 20 pm. The input signal was split with a 50/50 fiber beamsplitter between the Yokogawa optical spectrum analyzer (OSA) and the heterodyne spectrometer.
We measure the spectra of the steep-edge filtered ASE source using the Yokogawa OSA and our heterodyne spectrometer (see Fig. \ref{fig:HTdata1nmfilt}). Due to the < 10 GHz minimum bandwidth of the programmable filter used, we expect very steep edges. Fig. \ref{fig:HTdata1nmfilt}a is the output of the Yokogawa OSA on the highest resolution setting (20 pm) and Fig. \ref{fig:HTdata1nmfilt}b is the output of the heterodyne spectrometer. Notice the rounded edge of the magnified graph in Fig. \ref{fig:HTdata1nmfilt}c compared to the steeper edge and increased detail in Fig. \ref{fig:HTdata1nmfilt}d; this indicates the heterodyne spectrometer has a better resolution.
\begin{figure*}
\centering
\includegraphics[scale=0.45]{EDFA1nmfiltcompfigv3.pdf}
\caption{Spectrometer comparison with 1-nm input signal. (a) Yokogawa spectrometer output using best resolution (20 pm). Note: the absolute wavelength calibration of the Yokogawa OSA differs from our other instruments by about 0.4 nm. (b) Heterodyne spectrometer output. (c) Magnified top left corner of (a). (d) Magnified top left corner of (b).}
\label{fig:HTdata1nmfilt}
\end{figure*}
In Fig. \ref{fig:HTdataHPlaser}, we compare the resolution more directly and can estimate it straight from the data. Here the input signal is a narrowband tunable laser, which is broadened, presumably by internal frequency dithering. In Fig. \ref{fig:HTdataHPlaser}a, we see a peak produced by the input signal being detected by the Yokogawa spectrometer. In Fig. \ref{fig:HTdataHPlaser}b, on the same scale, we see a much narrower peak produced by the same input signal being detected by the heterodyne spectrometer. A magnified graph (Fig. \ref{fig:HTdataHPlaser}c) reveals the input signal linewidth convolved with the LO linewidth and the dithering of the two lasers. From this we can see the true laser linewidths of the LO and input signal are each much less than 1 pm.
\begin{figure*}
\centering
\includegraphics[scale=0.45]{narsigfig20210901v4.pdf}
\caption{Spectrometer comparison with narrowband laser input. (a) Yokogawa spectrometer output using best resolution (20 pm). Note: the absolute wavelength calibration of the Yokogawa OSA differs from our other instruments by about 0.4 nm. (b) Heterodyne spectrometer output. Data was taken with a ESA video bandwidth of 10 Hz which partially integrates over LO frequency dither. (c) Magnified version of (b). $\lambda_{\text{LO}}=1548.800$ nm.}
\label{fig:HTdataHPlaser}
\end{figure*}
We see in Fig. \ref{fig:HTdataHPlaser}c artifacts of the LO operating mode which uses a frequency dither that spanned about 100 MHz, at a rate of about 900 Hz. The narrowband input signal was also broadened, likely via dithering as well. The video bandwidth during this data collection partially integrated the dither, resulting in many apparent peaks. With further development and automation, a frequency scan of the LO can be used to avoid an LO dither and the scan can be synchronized with the detector measurement by an ADC using a control microprocessor~\cite{s17020348}. Using a frequency scan, the spectrometer resolution can be improved significantly and is then limited by the laser linewidth, scan speed, and measurement integration time. Using these techniques near 1550 nm, resolution down to 6 MHz has been demonstrated~\cite{s17020348}.
Furthermore, for amplification, our Thorlabs PDB430C uses two gain stages of $10^3$ and 10, via two Analog Devices OPA847 amplifiers. We modified another PDB430C to a single gain stage of $10^4$ using one Analog Devices OPA657 and an output low-pass filter with a corner frequency of about 10 MHz. These modifications significantly reduce the bandwidth from about 350 MHz to about 10 MHz, but they also reduce the electronics noise by about 8 dB. With these modifications, we improve on Fig. \ref{fig:HTdataHPlaser}b; the heterodyne spectrometer now has a sensitivity of about -89 dBm (see Fig. \ref{fig:HTsensdata}a), roughly matching the Yokogawa OSA, and, to our knowledge, now exceeds the power sensitivity of all other demonstrated photodiode-based heterodyne spectrometers whose published data provide the absolute input signal power of the noise floor. Unfortunately, most previous heterodyne spectrometer demonstrations only published normalized power measurements.
\begin{figure*}
\centering
\includegraphics[scale=0.45]{sensitivityfigurestacked5.pdf}
\caption{Heterodyne spectrometer output at the sensitivity level with $\lambda_{\text{LO}}=1548.809$ nm. (a) The input signal is the narrowband laser, attenuated to -89 dBm (optical power). The ESA video bandwidth was 1 kHz, and the resolution bandwidth was 1 MHz. RF = radio frequency. (b) Using an ESA 100-kHz video bandwidth, we remeasure the heterodyne signal from the same input signal as (a).}
\label{fig:HTsensdata}
\end{figure*}
While we believe we have developed the most sensitive heterodyne spectrometer with photodiodes to date, it is natural to consider whether the sensitivity may be further improved. In Fig. \ref{fig:HTsensdata}b, we use a higher video bandwidth on the ESA, which shows higher peaks for the same optical input power as Fig. \ref{fig:HTsensdata}a. This leads us to conclude that without frequency-dithered lasers (both the LO and signal lasers were dithered here), the sensitivity can be further improved. In the next section, we consider the practical sensitivity limits for a heterodyne spectrometer. Accordingly, we compare the measured sensitivity of our spectrometer to the sensitivity limit described in Sec. \ref{SA} and show what the sensitivity could be without laser frequency dithering.
\section{Sensitivity Analysis}
\label{SA}
Previous work shows that heterodyne detection~\cite{Ref1AO1976}, and even heterodyne spectrometers~\cite{AO1976}, when LO shot noise dominates all other sources of noise, can reach the quantum detection limit ($P_{min}$), i.e., detection of one photon (with energy $h \nu$) within the system resolution time $\Delta \nu^{-1}$ (sec),
\begin{equation}
P_{min} = h \nu \Delta \nu,\label{Pmin}
\end{equation}
assuming the detector has unit quantum efficiency, where $h$ is Planck's constant and $\nu$ is the frequency of the light.
This detection limit and our analysis assume there is mutual spatial coherence~\cite{wolf2007introduction} between the signal and local oscillator. To satisfy this, we assume in our analysis that both the signal and local oscillator are combined into the same single spatial mode before detection, e.g., through careful optical alignment of single-spatial-mode beams or coupling into single-mode fiber-optic cable. Steady-state temporal coherence, as is normally required for conventional interferometry, is not required here.
More crucially, in Eq. (\ref{Pmin}) there is an implicit assumption that there is just a single spectral-temporal mode under consideration too. This may be true for narrow molecular transition lines, for which previous analyses were developed~\cite{AO1976, Boucher1993}, but in general, light sources emit into many spectral-temporal modes. When that is the case, the above analysis implies, and we state explicitly, that detecting a single photon per mode within the resolution time is the more general limit for heterodyne detection and therefore, also heterodyne spectrometers. For a single polarization, the number of spectral-temporal modes within a certain bandwidth $\Delta \nu$ and integration time $\Delta t$ is $N=\Delta \nu \Delta t=c\Delta \lambda \Delta t/\lambda^2$ ~\cite{Qi2017RSI}. For certain pulsed experiments, $\Delta t$ is instead proportional to the pulse width~\cite{PhysRevLett.121.083602}. For a common integration time of 1 second, and a 1-kHz bandwidth, that implies one thousand spectral-temporal modes. Even narrow-linewidth sources ($\sim$ kHz) emit into many spectral-temporal modes, let alone broadband sources, so this is an important consideration.
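To make Eq. (\ref{Pmin}) and the mode count $N=\Delta\nu\,\Delta t$ concrete, the following short Python sketch (our illustration; the 1550-nm wavelength and the 1-MHz and 1-kHz bandwidths are example values from the text) evaluates both:

```python
import math

h = 6.62607015e-34  # Planck constant (J s)
c = 299792458.0     # speed of light (m/s)

def quantum_limit_dbm(wavelength_m, resolution_hz):
    """Quantum detection limit P_min = h * nu * delta_nu, expressed in dBm."""
    nu = c / wavelength_m
    p_min_w = h * nu * resolution_hz
    return 10 * math.log10(p_min_w / 1e-3)

def mode_count(bandwidth_hz, integration_s):
    """Number of spectral-temporal modes N = delta_nu * delta_t."""
    return bandwidth_hz * integration_s

print(quantum_limit_dbm(1550e-9, 1e6))  # ~ -98.9 dBm for 1-MHz resolution
print(mode_count(1e3, 1.0))             # 1-kHz bandwidth over 1 s -> 1000 modes
```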
For a heterodyne spectrometer, the average detected photon number per mode is
\begin{equation}
\langle n(\nu_{\text{in}}) \rangle = \frac{S_{\text{in}}(\nu_{\text{in}}) \eta }{h \nu_{\text{in}}} \label{nin}
\end{equation}
where the input signal power-spectral density is $S_{\text{in}}(\nu_{\text{in}})$ and the detection efficiency is $\eta$. The input signal power-spectral density is equivalent to the average energy detected per temporal-spectral mode, using dimensional analysis. Thus, Eq. (\ref{nin}) is the average energy detected per temporal-spectral mode divided by the energy per photon ($h \nu_{\text{in}}$) which equals the average detected photon number per mode. Shot noise contributes on average one photon per mode~\cite{AO1976,Qi:20}, thus, the signal-to-noise ratio (SNR) is $
\text{SNR}(\nu_{\text{in}})=\frac{ \langle n(\nu_{\text{in}}) \rangle}{1}= \langle n(\nu_{\text{in}}) \rangle$.
The sensitivity limit defined above, one detected photon per mode, equates to a measured noise variance 3 dB higher than the shot noise when the input noise quadratures are equal. To derive this, we use the relation from \cite{Qi:20}, which equates the average photon number per mode to the measured noise variance:
\begin{align}
\langle n \rangle &= \langle Z \rangle - 1 \label{nz}\\
&= \langle X^2 \rangle + \langle P^2 \rangle - 1\\
&= 2 \langle \Delta X^2 \rangle -1, \label{ndelX}
\end{align}
where we use the definition $Z=X^2+P^2$, and the assumptions $\langle \Delta P^2 \rangle=\langle \Delta X^2 \rangle=\langle X^2 \rangle -\langle X \rangle^2$ and $\langle X \rangle=\langle P \rangle=0$. These are valid assumptions for symmetric phase-space distributions when averaging over the phase between the LO and input signal. Using Eq. (\ref{ndelX}), when $\langle n \rangle = 0$, $\langle \Delta X^2 \rangle=1/2$ (shot-noise variance). Moreover, when $\langle n \rangle = 1$, $\langle \Delta X^2 \rangle=1$, which is twice (3 dB) the shot-noise variance. Therefore, when the measured noise variance is 3 dB above the shot-noise variance, there is one photon per mode on average. Similarly, the noise-equivalent power of a coherent detector is on average one photon per mode, when the noise is dominated by LO shot noise and the $-1$ in Eq. (\ref{nz})-(\ref{ndelX}) is interpreted as the shot noise contribution~\cite{Qi:20}.
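The 3-dB statement can be checked numerically; a minimal sketch of the relation in Eqs. (\ref{nz})-(\ref{ndelX}) (our illustration):

```python
import math

def var_x_from_n(n_mean):
    """Quadrature variance from mean photon number per mode:
    <n> = 2<dX^2> - 1, so <dX^2> = (<n> + 1) / 2."""
    return (n_mean + 1) / 2

shot = var_x_from_n(0)        # 0.5, the shot-noise (vacuum) variance
one_photon = var_x_from_n(1)  # 1.0, twice the shot-noise variance
db_above_shot = 10 * math.log10(one_photon / shot)
print(db_above_shot)          # ~3.01 dB above shot noise
```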
Using the same filtered ASE source from Fig. \ref{fig:HTdata1nmfilt}b, we attenuate its input power into the heterodyne spectrometer until we measure a radio-frequency (RF) noise power of about -65.5 dBm at 6 MHz, which is 3 dB greater than the shot noise (-68.5 dBm, which is 10 dB greater than the electronics noise using the modified detector) measured on the ESA. These measurements were taken with an integration time of 0.00012 s (one point of a 1001-point sweep of the ESA taking 0.12 s) with an ESA resolution bandwidth of 1 MHz. To measure the optical input power of the filtered ASE source sent into the heterodyne spectrometer when it produced a heterodyne signal 3 dB above shot noise, we use the Yokogawa OSA as a power meter and measure an optical power-spectral density of $S_{\text{in}}=$-64 dBm/20 pm, near $\nu_{\text{LO}}$. Using Eq. (\ref{nin}), this implies we need about 1.25 input photons per mode from the filtered ASE source to have one detected photon per mode. Thus, the heterodyne spectrometer has a measured sensitivity of -64 dBm/20 pm. The necessary input photons per mode are greater than one because of electronics noise, loss, and imperfect detection efficiency. This shows our spectrometer is very near the quantum limit for shot-noise-limited detection sensitivity.
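The 1.25-photons-per-mode figure follows directly from Eq. (\ref{nin}); a short Python sketch (our illustration, taking $\eta=1$ so the result is input photons per mode):

```python
import math

h = 6.62607015e-34  # Planck constant (J s)
c = 299792458.0     # speed of light (m/s)

def photons_per_mode(psd_dbm, ref_bw_m, wavelength_m):
    """Average photon number per spectral-temporal mode, from a
    power-spectral density quoted as dBm per wavelength bandwidth."""
    delta_nu = c * ref_bw_m / wavelength_m**2  # convert ref bandwidth to Hz
    power_w = 1e-3 * 10 ** (psd_dbm / 10)      # dBm -> W within that bandwidth
    energy_per_mode = power_w / delta_nu       # W/Hz = J per mode
    return energy_per_mode / (h * c / wavelength_m)  # divide by photon energy

# Measured sensitivity from the text: -64 dBm per 20 pm near 1550 nm
print(photons_per_mode(-64, 20e-12, 1550e-9))  # ~1.25 photons per mode
```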
Using the narrowband laser (from Fig. \ref{fig:HTdataHPlaser}b), we attenuate its input power into the heterodyne spectrometer until we measure a signal about 3 dB above shot noise (see Fig. \ref{fig:HTsensdata}a). At this optical power level, we use the Yokogawa OSA as a power meter and measure an optical power of -89 dBm. The laser is specified to have a 100-kHz linewidth; thus, the actual power-spectral density sensitivity measured is -89 dBm/0.8 fm = -45 dBm/20 pm, assuming a top-hat spectral shape with a width of 100 kHz. This means that for 1 detected photon per mode, about 100 input photons per mode are needed, much greater than one. This sensitivity is primarily due to the internal laser frequency dithering of both the LO and the narrowband signal laser, and other noise broadening of the laser linewidths. Accordingly, there is not always an intermediate frequency produced at 6 MHz (our detection frequency), or even in the detection bandwidth (see Fig. \ref{fig:HTsensdata}b). If the lasers were not dithered or broadened, these measurements imply a possible power-spectral-density sensitivity of about -109 dBm/0.8 fm = -65 dBm/20 pm, which is roughly equal to the directly measured sensitivity using the filtered ASE source. The sensitivity for the heterodyne spectrometer is thus consistent, namely, one detected photon per spectral-temporal mode, regardless of the input light source.
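The two bandwidth conversions used here (100 kHz $\leftrightarrow$ 0.8 fm, and rescaling the -89 dBm figure from 0.8 fm to 20 pm) can be reproduced as follows (our illustration, assuming the same flat top-hat spectrum as in the text):

```python
import math

c = 299792458.0  # speed of light (m/s)

def hz_to_wavelength_bw(delta_nu_hz, wavelength_m):
    """Delta_lambda = lambda^2 * delta_nu / c."""
    return wavelength_m**2 * delta_nu_hz / c

def rescale_psd_dbm(psd_dbm, bw_from_m, bw_to_m):
    """Rescale a dBm-per-bandwidth figure to a new reference bandwidth
    (valid for a flat spectrum across both bandwidths)."""
    return psd_dbm + 10 * math.log10(bw_to_m / bw_from_m)

lw = hz_to_wavelength_bw(100e3, 1550e-9)  # 100-kHz linewidth -> ~0.8 fm
print(lw)                                 # ~8.0e-16 m
print(rescale_psd_dbm(-89, lw, 20e-12))   # ~ -45 dBm/20 pm
```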
In contrast, the Yokogawa OSA measures power-spectral density with noise that is fixed for a given spectral resolution. Importantly, it does not have any noise contribution from LO shot noise (since there is no LO used), so it is not limited by Eq. (\ref{Pmin}). At the minimum resolution, the power-spectral-density sensitivity is -90 dBm/20 pm. That is equivalent to 0.003 photons per spectral-temporal mode. Nevertheless, this noise is fixed. If the signal occupies less than a 20-pm bandwidth, there is still the same amount of noise when measured with the Yokogawa OSA, which will effectively increase the input photons required per spectral-temporal mode. In that situation, it is possible for the heterodyne spectrometer to have better power sensitivity, assuming the LO linewidth is much less than 20 pm, as it is in our experiment. For example, it has been shown that a heterodyne spectrometer measured the resonance fluorescence of a single ion~\cite{doi:10.1080/09500349708231861}. These conclusions agree with a similar analysis which compared direct detection and heterodyne detection for astronomical sources of varying bandwidth~\cite{blaney1975signal}.
Let us now compare the sensitivity of a heterodyne spectrometer and the Yokogawa OSA to one based on a single-photon detector~\cite{cheng2019broadband}. Superconducting-nanowire single-photon detectors (SNSPD) have exceptionally low noise characteristics in the near infrared~\cite{yamashita2011origin,shibata2015ultimate}. For a SNSPD, sensitive around 1550 nm, coupled to single-mode fiber, a typical dark noise count rate is about 100 counts/s~\cite{yamashita2011origin,doi:10.1063/5.0006221}, depending on the temperature and bias current. This dark count rate is independent of any optical filter in front of the detector. For an even comparison, let us say there is a 20-pm wide tunable filter in front of the SNSPD. This notional configuration yields $4\times10^{-8}$ noise counts per mode. Thus, for dim broadband signals considered here, the filter and SNSPD is the better choice because it has lower noise. If the noise counts per mode using a filter and an SNSPD exceeds one, then it is more advantageous to use a heterodyne spectrometer. Interestingly, there was a demonstration with SNSPDs integrated into a heterodyne spectrometer~\cite{kovalyuk2017chip}. This device showed detection at low light levels (about 1000 photons/sec) for a very narrowband light source (about 1 kHz), which agrees with the shot-noise detection limit discussed here. Overall, these results agree with the consensus in the astronomical community that single-photon detectors are preferred for viewing dim astronomical objects~\cite{NAP26141,Nightingale1990}.
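The $4\times10^{-8}$ noise-counts-per-mode figure for the notional filtered SNSPD follows from the same mode counting (our illustration):

```python
c = 299792458.0  # speed of light (m/s)

def dark_counts_per_mode(dark_rate_hz, filter_bw_m, wavelength_m):
    """Dark counts per spectral-temporal mode behind a filter of width
    delta_lambda: modes arrive at a rate delta_nu = c*delta_lambda/lambda^2."""
    modes_per_s = c * filter_bw_m / wavelength_m**2
    return dark_rate_hz / modes_per_s

# 100 dark counts/s behind a 20-pm filter at 1550 nm
print(dark_counts_per_mode(100, 20e-12, 1550e-9))  # ~4e-8 counts per mode
```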
\section{Modal Brightness of Dim Light Sources}
\label{MBDLS}
Now we turn our attention to several light sources, which are commonly detected by single-photon detectors and are of great importance to the quantum networking community, to see if a heterodyne spectrometer could be used for their characterization.
Spontaneous parametric downconversion (SPDC)~\cite{SPDC} is a spectrally rich and diverse process, heavily used in quantum communications experiments. SPDC pair generation is typically highest in waveguides due to better mode overlap between the signal, idler, and pump, and increased SPDC spectral density inside the waveguide~\cite{Fiorentino:07}. Waveguide pair generation rates of about $R_p=3\times10^8$ pairs/s per mW of pump in a 1-nm bandwidth have been measured for type-I SPDC processes in lithium niobate~\cite{Clausen_2014}. Here we are interested in calculating how many photons are generated into a single spectral-temporal mode. For $\lambda=1550$ nm, $\Delta \lambda=1$ nm, and $\Delta t=1$ s, there are about $N=10^{11}$ spectral-temporal modes. For a 1-mW laser-pumped SPDC source, that gives on average $3\times10^8/10^{11}=0.003$ photons per mode, resulting in an SNR much less than one, which means the signal is not practically detectable with a heterodyne spectrometer. A practical application of heterodyne spectrometry here would require scanning quickly over many frequency settings to capture the broadband spectra, leaving little time to integrate for each data point. It is not even likely that integration would help much for this application anyway, as it would better define the average noise level but \emph{not} remove it to reveal the true signal and improve the dynamic range.
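The 0.003-photons-per-mode estimate can be reproduced directly (our illustration; the pair rate is the cited value for lithium niobate waveguides):

```python
c = 299792458.0  # speed of light (m/s)

def spectral_temporal_modes(delta_lambda_m, delta_t_s, wavelength_m):
    """N = c * delta_lambda * delta_t / lambda^2 (single polarization)."""
    return c * delta_lambda_m * delta_t_s / wavelength_m**2

pair_rate = 3e8  # pairs/s per mW of pump in a 1-nm bandwidth (cited value)
N = spectral_temporal_modes(1e-9, 1.0, 1550e-9)
print(N)              # ~1.2e11 modes
print(pair_rate / N)  # ~0.002-0.003 photons per mode
```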
There has been development of factorable joint-spectral-intensity SPDC sources, employing pulsed pump lasers~\cite{PhysRevLett.100.133601,Halder:09,Harder:13,Meyer-Scott:18}, which is important for entanglement swapping and other experiments involving multi-photon interference~\cite{PhysRevLett.59.2044,PhysRevLett.84.5304,PhysRevA.77.022312}. The brightness of these sources is still much less than one photon pair per pump pulse, so even if an LO sharing the same spectral-temporal mode as the SPDC were used, it would still not be easily detectable with a heterodyne spectrometer. Furthermore, since the SPDC and LO would be in the same mode, the spectrometer resolution would be limited by the convolution of the LO and SPDC spectral widths.
Raman scattering is commonly produced in fiber-optic cables by bright laser light (such as the pulses which carry data in fiber-optic networks) inelastically scattering with the fiber itself~\cite{SRS}. Raman scattering is a pervasive noise source that often degrades the results of demonstrations where quantum and classical signals coexist in the same fiber~\cite{Peters_2009,PhysRevX.2.041010,Niu:18,Tessinari:21}. The bandwidth of Raman scattering in optical fibers is usually several terahertz, and the scattering cross-section ($\rho(\lambda)$) is on the order of $10^{-9}$ nm$^{-1}$ km$^{-1}$~\cite{10.1117/12.2306875}. For a $P_0=1$ mW laser going down an $L=25$ km standard single-mode fiber (with attenuation per unit length $\alpha=0.2$ dB/km), that produces at the output of the fiber~\cite{10.1117/12.2306875}
\begin{align}
P_{\text{SRS}} &= P_0 L \rho(\lambda)10^{\alpha L/10}=8\times10^{-11}\text{W}/\text{nm}\\
&=6\times10^8 (\text{1550-nm photons/s/nm}).
\end{align}
For a 1-s integration time and a 1-nm bandwidth at 1550 nm, there are about $N=10^{11}$ spectral-temporal modes. In that case, the number of SRS photons per mode is about $6\times10^8/10^{11}=0.006$, resulting in an SNR much less than one, which means the signal is not practically detectable with a heterodyne spectrometer. Using a pulsed pump with a high peak power ($> 1$ W) can greatly increase the Raman scattered photons within the pulse, and the scattered photons may then be visible on a heterodyne spectrometer with the right LO (which would also likely need to be pulsed).
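The Raman numbers can be reproduced from the expression above (our illustration, using the formula exactly as written there):

```python
c = 299792458.0     # speed of light (m/s)
h = 6.62607015e-34  # Planck constant (J s)

# Values from the text: 1-mW pump, 25-km fiber, rho = 1e-9 /(nm km), 0.2 dB/km
P0, L_km, rho, alpha = 1e-3, 25.0, 1e-9, 0.2
P_srs = P0 * L_km * rho * 10 ** (alpha * L_km / 10)  # W/nm, as written above
photon_rate = P_srs / (h * c / 1550e-9)              # 1550-nm photons/s/nm
N = c * 1e-9 * 1.0 / (1550e-9) ** 2                  # modes in 1 nm over 1 s
print(P_srs)            # ~8e-11 W/nm
print(photon_rate / N)  # ~0.005 photons per mode (0.006 with N rounded to 1e11)
```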
Spontaneous four-wave mixing (SFWM) is a process where two degenerate pump photons are converted into a signal-idler pair of photons~\cite{horowicz1987generation, pinard1989self, grandclement1989four, vallet1990generation}. SFWM is a competing process to SPDC for the generation of photon pairs. It can occur in fiber-optic cables rather efficiently due to long interaction lengths and small mode volumes~\cite{Wang_2001}. This process occurs most efficiently when there is optimal phase matching which happens when the pump is located at the zero-dispersion wavelength of the fiber~\cite{mechels1997accurate}. The average number of photons per mode for the signal and idler beams is $|\gamma P_0 L|^2$, where $\gamma$ is the non-linear coefficient of the fiber (often having units of W$^{-1}$ km$^{-1}$), $P_0$ is the pump power, and $L$ is the fiber length~\cite{Wang_2001}. Highly non-linear fibers can have $\gamma = 10$ W$^{-1}$ km$^{-1}$ and lengths commonly less than 1 km. With a pump power of 1 mW, the average number of photons per mode is $10^{-4}$. Clearly, this source of light is not bright enough to be practically detected by a heterodyne spectrometer either.
Finally, there is current development of deterministic single-photon sources using quantum dots~\cite{senellart2017high}. These narrow-linewidth emitters at first appear to be a good candidate for characterization with a heterodyne spectrometer, but they will not be bright enough since, as ``single''-photon sources, they emit at most one photon per spectral-temporal mode. If the quantum dot sources were operated with higher brightness, then spectral characterization via a heterodyne spectrometer may be possible.
\section{Conclusion}
We have demonstrated a proof-of-principle heterodyne spectrometer which has 20-times better wavelength resolution than a conventional low-noise grating-based spectrometer, with the potential for 200-times better resolution. Furthermore, the heterodyne spectrometer sensitivity can be much better than that of the conventional spectrometer for signals narrower than the conventional spectrometer's minimum resolution; otherwise, the heterodyne spectrometer sensitivity is worse. Moreover, we calculate that, due to LO shot noise, heterodyne-detection-based spectrometers have fundamental sensitivity limitations that are significantly higher than those of a single-photon detector. Finally, we analyze the brightness of several light sources of interest to quantum networking and compare them with the heterodyne spectrometer sensitivity limit. From this comparison, it is now clear that the heterodyne spectrometer is not sensitive enough to measure the spectra of many light sources of interest in quantum networking, specifically broadband sources that have on average much less than one photon per mode; yet it is sensitive enough for brighter narrow-linewidth sources.
\begin{backmatter}
\bmsection{Acknowledgments}
The authors acknowledge Bing Qi for his advice on accurately counting spectral-temporal modes.
This work was performed at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725.
Funding was provided by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, through the Transparent Optical Quantum Networks for Distributed Science Program (Field Work Proposal ERKJ355).\\
\bmsection{Disclosures}
The authors declare no conflicts of interest.
\bmsection{Data Availability Statement}
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\end{backmatter}
\section{Introduction}
In this paper, we consider a long-standing open problem in the applied probability literature: what is the quadrant occupation time of planar Brownian motion? This question had intrigued Larry Shepp since 1995 (see \cite{SheppSem}). Formally, let $T$ be the total time that the vector process $X(t) = (W_1(t),W_2(t))$ on $0 \leq t \leq 1$ is in the first quadrant; the task is to find the distribution of $T$. In 1988, Bingham and Doney remarked on p. 121 of \cite{Bingham} that ``in no case to our knowledge is the law of $T$ known explicitly.'' Using independence of coordinate processes, the authors obtained the first two moments of $T$ and provided a solution for the third moment, the latter of which was corrected by \cite{Desbois}. The author of \cite{Desbois} generalized the aforementioned quadrant problem by considering the occupation time spent in a wedge of apex $O$ and angle $\theta$. Analytical results for general $\theta$ were provided for both the second and third moments, and the fourth moment for the quadrant problem ($\theta=\pi/2$) was obtained. \\
\indent Despite these new additions to the literature, the author of \cite{Desbois} concludes that ``our feeling is that the occupation time problem for Brownian motion is far from being understood as soon as we leave one-dimensional or quasi-one-dimensional (graphs) situations. Clearly, new ideas are needed if we want to tackle this problem.'' Our work, through its use of Kontorovich-Lebedev transforms and pasting of solutions, offers a new, but ultimately incomplete, approach to this long-standing problem. For references on the Kontorovich-Lebedev transform, we refer the reader to the following sources: \cite{KL2}, \cite{KL1}, \cite{KL4}, and \cite{KL3}.
\section{Main Results}
\subsection{Setup}
We consider the following alternative formulation of Bingham and Doney's (\cite{Bingham}) quadrant occupation of planar Brownian motion problem. Let
\begin{eqnarray*}
X(t), Y(t), t \geq 0
\end{eqnarray*}
be standard Brownian motions starting at $x,y$ respectively. We wish to find the distribution of the total time $T=\mathrm{Leb}\{t \in [0,1]: X(t) \times Y(t) >0\}$, when $x=y=0$, i.e., the occupation time of the union of the first and third quadrants. If two adjacent quadrants are used, the problem becomes much easier and the distribution follows the arcsine law (\cite{Levy}).
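Before turning to the analysis, a quick Monte Carlo sanity check (our addition, separate from the derivation below): since $X(t)Y(t)>0$ with probability $1/2$ at each fixed $t>0$, the sample mean of the occupation fraction should be near $1/2$, even though the full law of $T$ is unknown. A minimal random-walk discretization:

```python
import random

def occupation_fraction(n_steps=500, seed=None):
    """Fraction of steps with X*Y > 0 for one discretized planar Brownian
    path started at the origin (Gaussian random-walk approximation)."""
    rng = random.Random(seed)
    x = y = 0.0
    hits = 0
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)
        y += rng.gauss(0.0, 1.0)
        if x * y > 0:
            hits += 1
    return hits / n_steps

samples = [occupation_fraction(seed=s) for s in range(2000)]
print(sum(samples) / len(samples))  # close to 0.5 by symmetry
```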
The Feynman-Kac theorem states (see \cite{Karatzas}) that
\begin{eqnarray}
U(x,y)=\expesub{x,y}{\int_0^\infty \exp{-\alpha t-\lambda\int_0^t \indic{\parens{X(u) \times Y(u)>0}}du}dt}
\end{eqnarray}
is the bounded solution of the Helmholtz partial differential equation in each quadrant,
\begin{eqnarray}
\frac{1}{2}(U_{xx}+U_{yy})(x,y)-\beta(x,y)U(x,y)+1 \equiv 0,
\end{eqnarray}
where $\beta(x,y)=\beta_1=\alpha+ \lambda$ for $(x,y) \in Q_1$, $Q_3$ (the first and third quadrants, respectively) and $\beta(x,y)=\beta_2=\alpha, (x,y) \in Q_2, Q_4$ (the second and fourth quadrants, respectively). The function $U$ must be twice differentiable interior to each quadrant, continuously differentiable overall, and uniformly bounded. If we can find $U$, then we know $U(0,0)$, and then we will have:
\begin{eqnarray}
U(0,0)= \expe{\int_0^\infty e^{-\alpha t-\lambda tT}dt}=\expe{\frac{1}{\alpha+\lambda T}}.
\end{eqnarray}
We turn to finding the Kontorovich-Lebedev solution in each quadrant. Let $x=r\cos\theta$, $y=r\sin\theta$, and set $V(r,\theta)=U(x,y)$. As is well known,
\begin{eqnarray}
U_{xx}+U_{yy}=V_{rr}+\frac{1}{r}V_r+ \frac{1}{r^2}V_{\theta\theta}.
\end{eqnarray}
The modified Bessel function $w(r)=\kappa_{iv}(r)$ satisfies the ordinary differential equation (see \cite{Ober}):
\begin{eqnarray}
r^2w^{''}(r)+rw'(r)-(r^2-\nu^2)w(r)=0.
\end{eqnarray}
It now can be easily checked that, for any functions $f(\nu), g(\nu)$,
\begin{eqnarray}
V(r,\theta)=\frac{1}{\beta}+\int_0^\infty f(\nu)\kappa_{iv}(r\sqrt{2\beta})\sinh(\nu\theta)d\nu+\int_0^\infty g(\nu)\kappa_{iv}(r\sqrt{2\beta})\cosh(\nu\theta)d\nu
\end{eqnarray}
solves the differential equation:
\begin{eqnarray}
\frac{1}{2}\parens{V_{rr}(r,\theta)+\frac{1}{r}V_r(r,\theta)+\frac{1}{r^2}V_{\theta\theta}(r,\theta)}-\beta V(r, \theta)+1=0,
\end{eqnarray}
with a different choice of $f,g, \beta$ in each quadrant, which we must paste together to satisfy the needed smoothness. For convenience, we can use any two linearly independent combinations of $\sinh, \cosh$, etc. We now proceed to do so.
\subsection{Pasting}
Our strategy is to use ``pasting'' of the solutions. We denote $f_i$ and $g_i$ as the densities for each of the two linear combinations of $\sinh, \cosh$, respectively, in the $i$th quadrant. In:
\begin{eqnarray*}
Q_1&=& \{(r,\theta): r >0, 0<\theta<\pi/2\}
\end{eqnarray*}
we set:
\small
\begin{eqnarray}
V(r,\theta)=\frac{1}{\beta_1}+\int_0^\infty f_1(\nu)\kappa_{iv}\parens{r\sqrt{2\beta_1}}\sinh\parens{\nu\parens{\frac{\pi}{2}-\theta}}d\nu+\int_0^\infty g_1(\nu)\kappa_{iv}\parens{r\sqrt{2\beta_1}}\sinh (\nu\theta)d\nu.
\end{eqnarray}
\normalsize
In
\begin{eqnarray}
Q_2&=& \{(r,\theta): r >0, \frac{\pi}{2}<\theta<\pi\}
\end{eqnarray}
we set:
\footnotesize
\begin{eqnarray}
V(r,\theta)=\frac{1}{\beta_2}+\int_0^\infty f_2(\nu)\kappa_{iv}\parens{r\sqrt{2\beta_2}}\sinh\parens{\nu\parens{\pi-\theta}}d\nu+\int_0^\infty g_2(\nu)\kappa_{iv}\parens{r\sqrt{2\beta_2}}\sinh \parens{\nu\parens{\theta-\frac{\pi}{2}}}d\nu.
\end{eqnarray}
\normalsize
By symmetry, $U(x,y)=U(y,x)=U(-x,-y)$, and so $g_j \equiv f_j$ for $j=1,2,3,4$, $f_3=f_1,f_4=f_2$.
\subsection{Consequences of Continuity and Continuous Differentiability on the Axes}
Note that:
\begin{eqnarray*}
V\parens{r,\frac{\pi}{2}-0}&=&\frac{1}{\beta_1}+\int_0^\infty f_1(\nu)\sinh\parens{\frac{\nu\pi}{2}}\kappa_{iv}(r\sqrt{2\beta_1})d\nu,\\
V\parens{r,\frac{\pi}{2}+0}&=&\frac{1}{\beta_2}+\int_0^\infty f_2(\nu)\sinh\parens{\frac{\nu\pi}{2}}\kappa_{iv}(r\sqrt{2\beta_2})d\nu,
\end{eqnarray*}
and the right-hand sides of these equations are equal. The derivatives, taken with respect to $\theta$, are:
\begin{eqnarray*}
V_{\theta}\parens{r,\frac{\pi}{2}-0}&=&\int_0^\infty f_1(\nu)\nu\parens{\cosh\parens{\frac{\nu\pi}{2}}-1}\kappa_{iv}(r\sqrt{2\beta_1})d\nu,\\
V_{\theta}\parens{r,\frac{\pi}{2}+0}&=&\int_0^\infty f_2(\nu)\nu\parens{-\cosh\parens{\frac{\nu\pi}{2}}+1}\kappa_{iv}(r\sqrt{2\beta_2})d\nu,
\end{eqnarray*}
and the right-hand sides of these equations are equal.
\subsubsection{Solving the Above Smoothness Equations}
We assume that there are signed measures, $\mu_j(dz),\, j=1,2,$ such that:
\begin{eqnarray} \label{cooleqn}
f_j(\nu)=\frac{2}{\pi}\frac{\cosh(\nu\frac{\pi}{2})}{\sinh(\nu\frac{\pi}{2})}\parens{\frac{-1}{\beta_j}}+\int_0^\infty \mu_j\, \sin{\nu z}dz.
\end{eqnarray}
We then can plug in $f_j$ into the first two equations expressing continuity on the $y$-axis. The first term in $f_j$ kills the term $\frac{1}{\beta_j}$ because the Kontorovich-Lebedev transform of $\cosh(\nu\frac{\pi}{2})$ is, for every $y$,
\begin{eqnarray}
\frac{2}{\pi}\int_0^\infty\cosh\parens{\nu\frac{\pi}{2}}\kappa_{iv}(y)d\nu=e^{-y\cos{\frac{\pi}{2}}}\equiv 1,
\end{eqnarray}
(see \cite{Ober} p.242). This leaves the following equation for the sine transforms of $\mu_j$ for $V(r,\theta)$ to be continuous at each $r$ when $\theta=\frac{\pi}{2}$:
\begin{eqnarray}
V\parens{r,\frac{\pi}{2}}&=&\int_0^\infty \mu_1dz\int_0^\infty\sin{\nu z}\sinh\parens{\frac{\nu \pi}{2}}\kappa_{iv}(r\sqrt{2\beta_1})d\nu\\
&=&\int_0^\infty \mu_2dz\int_0^\infty\sin{\nu z}\sinh\parens{\frac{\nu \pi}{2}}\kappa_{iv}(r\sqrt{2\beta_2})d\nu.
\end{eqnarray}
Identity (8) on p.244 of \cite{Ober} states that
\begin{eqnarray*}
\int_0^\infty\sin{\nu z}\sinh\parens{\frac{\nu\pi}{2}}\kappa_{iv}(y)d\nu=\frac{\pi}{2}\sin{y\sinh(z)}.
\end{eqnarray*}
This gives us the continuity equation linking $\mu_1, \mu_2$ as follows:
\begin{eqnarray}
\int_0^\infty \sin{r\sqrt{2\beta_1}\sinh(z)}\mu_1dz=\int_0^\infty \sin{r\sqrt{2\beta_2}\sinh(z)}\mu_2dz.
\end{eqnarray}
We now define the change of variables $z'(z)=\phi(z), z\geq 0$ such that:
\begin{eqnarray}
\sqrt{2\beta_1}\sinh(z)=\sqrt{2\beta_2}\sinh(\phi(z)).
\end{eqnarray}
Explicitly,
\begin{eqnarray}
\phi(z)=\log\parens{{\sqrt{\frac{\beta_1}{\beta_2}}\sinh z+\sqrt{\frac{\beta_1}{\beta_2}\sinh^2(z)+1}}}.
\end{eqnarray}
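As a consistency check (our addition), this is just the inverse hyperbolic sine: writing $s=\sqrt{\beta_1/\beta_2}\sinh z$, the defining relation reads $\sinh(\phi(z))=s$, so

```latex
\phi(z) = \operatorname{arcsinh}(s)
        = \log\left(s + \sqrt{s^{2}+1}\right)
        = \log\left(\sqrt{\tfrac{\beta_1}{\beta_2}}\,\sinh z
          + \sqrt{\tfrac{\beta_1}{\beta_2}\sinh^{2}z + 1}\right),
```

with $\phi(0)=0$ and $\phi$ strictly increasing on $z\geq 0$, as required for the change of variables.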
We then have, for every $r\geq 0$,
\begin{eqnarray}
\int_0^\infty \sin{r\sqrt{2\beta_1}\sinh(z)}\mu_1dz=\int_0^\infty \sin{r\sqrt{2\beta_1}\sinh(z)}\mu_2d\phi(z).
\end{eqnarray}
Since the sine transform is unique on a half interval, we have:
\begin{eqnarray}
\mu_1(dz)=\mu_2(d\phi(z)), z\geq 0.
\end{eqnarray}
Thus, we have expressed one relationship between $\mu_1$ and $\mu_2$. We now need a second equation linking $\mu_1$ and $\mu_2$, and we use the equation obtained by using the continuity of the derivative on the positive $y$-axis.
\subsubsection{Continuity of the Derivative of $V$ on $\theta$ at $\theta=\frac{\pi}{2}$}
Noting that:
\begin{eqnarray*}
V_{\theta}\parens{r,\frac{\pi}{2}-0}&=&\int_0^\infty f_1(\nu)\nu\parens{\cosh\parens{\frac{\nu\pi}{2}}-1}\kappa_{iv}(r\sqrt{2\beta_1})d\nu,\\
V_{\theta}\parens{r,\frac{\pi}{2}+0}&=&-\int_0^\infty f_2(\nu)\nu\parens{\cosh\parens{\frac{\nu\pi}{2}}-1}\kappa_{iv}(r\sqrt{2\beta_2})d\nu,
\end{eqnarray*}
we immediately see that the right-hand sides are equal for all $r \geq 0$ (the derivative of $V$ on $\theta$ is continuous at $\theta=\frac{\pi}{2}$). Placing the expressions for the $f_j$, given in (\ref{cooleqn}), in terms of the unknown $\mu_j$, into the right-hand side of each of the above equations gives:
\begin{eqnarray}
0=\sum_{j=1}^2\int_0^\infty\bracks{\frac{2}{\pi}\frac{\cosh(\nu\frac{\pi}{2})}{\sinh(\nu\frac{\pi}{2})}\parens{\frac{-1}{\beta_j}}+\int_0^\infty \mu_j\sin{\nu z}dz}\nu\parens{\cosh\parens{\frac{\nu\pi}{2}}-1}\kappa_{iv}(r\sqrt{2\beta_j})d\nu.
\end{eqnarray}
From \cite{Ober}, p.244, equation (7), for real $a$ or complex $a$ with $\abss{\Im(a)}\leq \frac{\pi}{2}$,
\begin{eqnarray}
\frac{2}{\pi}\int_0^\infty\nu \sin{a\nu}\kappa_{i\nu}(y)d\nu=ye^{-y\cosh a}\sinh a.
\end{eqnarray}
We then use the identities:\\
\begin{eqnarray}
\frac{\cosh\parens{\frac{\nu \pi}{2}}}{\sinh\parens{\frac{\nu \pi}{2}}}\parens{\cosh\parens{\frac{\nu \pi}{2}}-1}&=&\sinh\parens{\frac{\nu\pi}{2}}-\tanh\parens{\frac{\nu\pi}{4}}, \nonumber \\
\sin{\nu z}\parens{\cosh\parens{\frac{\nu \pi}{2}}-1}&=&\frac{1}{2}\sin{\nu\parens{z+\frac{i\pi}{2}}}+\frac{1}{2}\sin{\nu\parens{z-\frac{i\pi}{2}}}-\sin{\nu z}, \nonumber
\end{eqnarray}
and link $\mu_1$ and $\mu_2$ as follows:
\scriptsize
\begin{eqnarray} \label{cooldude}
0&=&\sum_{j=1}^2\int_0^\infty\frac{2}{\pi}\parens{\frac{-1}{\beta_j}}\parens{\sinh\parens{\frac{\nu\pi}{2}}-\tanh\parens{\frac{\nu\pi}{4}}}\nu\kappa_{iv}(r\sqrt{2\beta_j})d\nu \nonumber \\
&+&\sum_{j=1}^2\int_0^\infty \mu_jdz \int_0^\infty \bracks{\frac{1}{2}\sin{\nu\parens{z+\frac{i\pi}{2}}}+\frac{1}{2}\sin{\nu\parens{z-\frac{i\pi}{2}}}-\sin{\nu z}\nu\kappa_{iv}(r\sqrt{2\beta_j})}d\nu.
\end{eqnarray}
\normalsize
Since $\sinh(z+i\pi/2)=i\cosh z$ and $\cosh(z\pm i\pi/2)=\pm i\sinh z$, we obtain from (\ref{cooldude}), with $a=i\frac{\pi}{2}$, $a=z\pm i\frac{\pi}{2}$, respectively:
\normalsize
\begin{eqnarray}
0&=&\sum_{j=1}^2\parens{-r\sqrt{\frac{2}{\beta_j}}}+\frac{2}{\pi \beta_j}\int_0^\infty \nu \tanh \parens{\frac{\nu\pi}{4}}\kappa_{iv}(r\sqrt{2\beta_j})d\nu \nonumber\\
&+&\sum_{j=1}^2\frac{\pi}{2}\int_0^\infty \mu_j\frac{r\sqrt{2\beta_j}}{2}\bracks{e^{r\sqrt{2\beta_j}i \sinh z}(i \cosh z)+e^{-r\sqrt{2\beta_j}i \sinh z}(-i \cosh z)-2e^{-r\sqrt{2\beta_j}\cosh z}\sinh z}dz. \nonumber
\end{eqnarray}
\normalsize
We now use the substitution $z'(z)=\phi(z)$, implicitly defined by
\begin{eqnarray*}
\sqrt{2\beta_1}\sinh z= \sqrt{2\beta_2}\sinh z'
\end{eqnarray*}
in the second integral of the last display, with $j=2$. Combined with the fact that
\begin{eqnarray*}
\mu_2(d\phi(z))=\mu_1(dz)
\end{eqnarray*}
we obtain:
\small
\begin{eqnarray} \label{lastone}
&&\frac{\pi}{2}\int_0^\infty \mu_1 r\sqrt{2\beta_1}\bracks{\sin{r\sqrt{2\beta_1}\sinh z}\cosh z + e^{-r\sqrt{2\beta_1}\cosh z}\sinh z}dz \nonumber \\
&+& \frac{\pi}{2}\int_0^\infty \mu_1 r\sqrt{2\beta_2} \bracks{\sin{r\sqrt{2\beta_1}\sinh z} \cosh \phi (z)+ e^{-r\sqrt{2\beta_2}\cosh \phi (z)}\sinh \phi(z)}dz \nonumber \\
&=& \sum_{j=1}^2\bracks{-r\sqrt{\frac{2}{\beta_j}}+\frac{2}{\pi \beta_j}\int_0^\infty\tanh \parens{\frac{\nu\pi}{4}}\nu\kappa_{i\nu}(r\sqrt{2\beta_j})d\nu}.
\end{eqnarray}
\normalsize
Despite much joint effort, further explicit calculations beyond Equation (\ref{lastone}) quickly become intractable. It is our opinion that an explicit solution to this long-standing problem may not be possible. Nonetheless, we have successfully reduced the problem to that where an analyst of special functions could pick up where we have left off. The problem now becomes one of finding the relationship between functions $f$ and $g$ if their Kontorovich-Lebedev transforms $F$ and $G$ satisfy $F(r) = G(cr)$ for all $r$ with $c$ given.
\section{Remarks}
\begin{enumerate}
\item Professor Terry Lyons is credited (personal communication with Professor Nick Bingham) with the observation that the simplest case beyond the half-plane (which reduces to the arc-sine law in one dimension) is the sector $0 < \theta < 2 \pi/3$, one third of the plane. This case could serve as a first test bed against which to compare the equations we obtain in (\ref{lastone}).
\item One can consider the random occupation measure generated on the unit circle (or a sphere in higher dimensions) by the angular part of a Brownian motion starting at 0 and running for time 1. Some results regarding this random measure, and its relation to the angle of the Brownian motion at time 1, were obtained in \cite{Pemantle}.
\item Another natural problem in two dimensions is to find the law of the occupation time $A_u$ of an interval of length $2 \pi u$ around the unit circle, for $0 < u < 1.$ Some problems involving cyclically stationary local-time processes were treated in \cite{Pitman}.
\end{enumerate}
\section{Acknowledgments}
We thank Professor Nick Bingham, Professor Jim Pitman, and Professor Robin Pemantle for helpful discussions. We are extremely appreciative of the work of an anonymous referee, whose very helpful report greatly strengthened the quality of this work.
\bibliographystyle{plain}
\section{Introduction}
Let $T$ be a tree with vertex set $V(T)=\{1,\hdots,n\}$ and edge set $E(T)=\{e_1,\hdots,e_{n-1}\}$. If two vertices $i$ and $j$ are adjacent, we write $i\sim j$. Let us assign an orientation to each edge of $T$. Two edges $e_i=(p,q)$ and $e_j=(r,s)$ of $T$ are \textit{similarly oriented}, denoted $e_i\Rightarrow e_j$, if $d(p,r)=d(q,s)$; otherwise they are \textit{oppositely oriented}, denoted $e_i \rightleftharpoons e_j$. The \textit{edge orientation matrix} $H=(h_{ij})$ of $T$ is the $(n-1)\times (n-1)$ matrix whose rows and columns are indexed by the edges of $T$ and whose entries are defined \cite{bapat2013product} as
$$h_{ij}=
\begin{cases}
\text{$1$} & \quad\text{if $e_i\Rightarrow e_j$, $i \neq j$};\\
\text{$-1$} & \quad\text{if $e_i \rightleftharpoons e_j$, $i \neq j$};\\
\text{$1$} & \quad\text{if $i=j$.}
\end{cases}$$
The \textit{incidence matrix} $Q$ of $T$ is the $n \times (n-1)$ matrix with its rows indexed by $V(T)$ and its columns indexed by $E(T)$. The entry corresponding to the row $i$ and column $e_j$ of $Q$ is $1$ if $e_j$ originates at $i$, $-1$ if $e_j$ terminates at $i$, and zero if $e_j$ and $i$ are not incident. We assume that the same orientation is used while defining the edge orientation matrix $H$ and the incidence matrix $Q$.
The \emph{distance} between the vertices $i,j\in V(T)$, denoted by $d(i,j)$, is the length of the shortest path between them in $T$. The \emph{distance matrix} of $T$, denoted by $D(T)$, is the $n \times n$ matrix whose rows and columns are indexed by the vertices of $T$ and the entries are defined as follows: $D(T)=(d_{ij})$, where $d_{ij}=d(i,j)$. In \cite{bapat2013product}, the authors introduced the notion of \emph{squared distance matrix} $\Delta$, which is defined to be the Hadamard product $D\circ D$, that is, the $(i,j)$-th element of $\Delta$ is $d_{ij}^2$. For the unweighted tree $T$, the determinant of $\Delta$ is obtained in \cite{bapat2013product}, while the inverse and the inertia of $\Delta$ are considered in \cite{bapat2016squared}. In \cite{bapat2019}, the author considered an extension of these results to a weighted tree whose each edge is assigned a positive scalar weight and found the determinant and inverse of $\Delta$. Recently, in \cite{das2020squared}, the authors determined the inertia and energy of the squared distance matrix of a complete multipartite graph. Also, they characterized the graphs among all complete $t$-partite graphs on $n$ vertices for which the spectral radius of the squared distance matrix and the squared distance energy are maximum and minimum, respectively.
In this article, we consider a weighted tree $T$ on $n$ vertices whose edge weights are positive definite matrices of order $s$. For $i,j \in V(T)$, the distance $d(i,j)$ between $i$ and $j$ is the sum of the weight matrices on the unique $(i,j)$-path of $T$. Thus, the distance matrix $D=(d_{ij})$ of $T$ is the block matrix of order $ns\times ns$ whose $(i,j)$-th block is $d_{ij}=d(i,j)$ if $i\neq j$, and the $s \times s$ zero matrix if $i=j$. The squared distance matrix $\Delta$ of $T$ is the $ns\times ns$ block matrix whose $(i,j)$-th block equals $d(i,j)^2$ if $i\neq j$, and the $s \times s$ zero matrix if $i=j$. The Laplacian matrix $L=(l_{ij})$ of $T$ is the $ns \times ns$ block matrix defined as follows: for $i,j \in V(T)$, $i\neq j$, the $(i,j)$-th block is $l_{ij}=-(W(i,j))^{-1}$ if $i \sim j$, where $W(i,j)$ is the matrix weight of the edge joining the vertices $i$ and $j$, and the zero matrix otherwise. For $i \in V(T)$, the $(i,i)$-th block of $L$ is $\sum_{j\sim i}(W(i,j))^{-1}$.
In the context of the classical distance, matrix weights have been studied in \cite{atik2017distance} and \cite{Bapat2006}. The Laplacian matrix with matrix weights has been studied in \cite{atik2017distance,Sumit2022laplacian} and \cite{hansen2021expansion}. The resistance distance matrix and the product distance matrix with matrix weights have been considered in \cite{Atik-resistance} and \cite{Product-matrix}, respectively. In this article, we consider the squared distance matrix $\Delta$ of a tree $T$ with matrix weights and find formulae for the determinant and inverse of $\Delta$, which generalize the results of \cite{bapat2013product,bapat2016squared,bapat2019}.
This article is organized as follows. In Section $2$, we fix the necessary notation and state some preliminary results, which will be used in the subsequent sections. In Section $3$, we establish relations between the squared distance matrix and the incidence, Laplacian, and distance matrices. In Sections $4$ and $5$, we obtain formulae for the determinant and the inverse of $\Delta$, respectively.
\section{Notations and preliminary results}
In this section, we define some useful notations and state some known results which will be needed to prove our main results.
The $n\times 1$ column vector with all ones and the identity matrix of order $n$ are denoted by $\textbf{1}_n$ and $I_n$, respectively. Let $J$ denote the matrix of appropriate size with all entries equal to $1$. The transpose of a matrix $A$ is denoted by $A^{\prime}$. Let $A$ be an $n\times n$ matrix partitioned as
$ A=\left[ {\begin{array}{cc}
A_{11} & A_{12} \\
A_{21} & A_{22} \\
\end{array} } \right]$,
where $A_{11}$ and $A_{22}$ are square matrices. If $A_{11}$ is nonsingular, then the \textit{Schur complement }of $A_{11}$ in $A$ is defined as $A_{22}-A_{21}{A_{11}^{-1}}A_{12}$. The following is the well known Schur complement formula: $ \det A= (\det A_{11})\det(A_{22}-A_{21}{A_{11}^{-1}}A_{12})$. The\textit{ Kronecker product }of two matrices $A=(a_{ij})_{m\times n}$ and $B=(b_{ij})_{p\times q}$, denoted by $A\otimes B$, is defined to be the $mp\times nq$ block matrix $[a_{ij}B]$. It is known that for the matrices $A,B,C$ and $D$, $(A\otimes B)(C\otimes D)=AC\otimes BD$, whenever the products $AC$ and $BD$ are defined. Also $(A\otimes B)^{-1}=A^{-1}\otimes B^{-1}$, if $A$ and $B$ are nonsingular. Moreover, if $A$ and $B$ are $n \times n$ and $p\times p$ matrices, then $\det(A\otimes B)=(\det A)^p(\det B)^n$. For more details about the Kronecker product, we refer to \cite{matrix-analysis}.
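The Kronecker product identities quoted above are easy to sanity-check numerically. The following sketch is our own illustration (it assumes NumPy and random test matrices, none of which appear in the references):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, D = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# Mixed-product property: (A x B)(C x D) = AC x BD
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)

# Inverse of a Kronecker product: (A x B)^{-1} = A^{-1} x B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# Determinant: det(A x B) = (det A)^p (det B)^n for A of order n, B of order p
n, p = 3, 2
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** p * np.linalg.det(B) ** n)
```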
Let $H$ be the edge-orientation matrix, and $Q$ be the incidence matrix of the underlying unweighted tree with an orientation assigned to each edge. The edge-orientation matrix of a weighted tree whose edge weights are positive definite matrices of order $s$ is defined by replacing $1$ and $-1$ by $I_s$ and $-I_s$, respectively. The incidence matrix of a weighted tree is defined in a similar way. That is, for the matrix weighted tree $T$, the edge-orientation matrix and the incidence matrix are defined as $(H\otimes I_s)$ and $(Q\otimes I_s)$, respectively.
Now we introduce some more notations. Let $T$ be a tree with vertex set $V(T)=\{1,\hdots,n\}$ and edge set $E(T)=\{e_1,\hdots,e_{n-1}\}$. Let $W_i$ be the edge weight matrix associated with the edge $e_i$ of $T$, $i=1,2,\hdots,n-1$. Let $\delta_i$ be the degree of the vertex $i$ and set $\tau_i=2-\delta_i$ for $i=1,2,\hdots,n$. Let $\tau$ be the $n \times 1$ vector with components $\tau_1,\hdots,\tau_n$ and $\Tilde{\tau}$ be the diagonal matrix with diagonal entries $\tau_1,\tau_2,\hdots,\tau_n$. Let $\hat{\delta_i}$ be the matrix weighted degree of $i$, which is defined as
$$\hat{\delta_i}=\sum_{j:j\sim i}W(i,j), ~~i= 1,\hdots,n.$$
Let $\hat{\delta}$ be the $ns\times s$ block matrix with the components $\hat{\delta_1},\hdots,\hat{\delta_n}$. Let $F$ be a diagonal matrix with diagonal entries $W_1,W_2,\hdots,W_{n-1}$. It can be verified that $L=(Q\otimes I_s){F}^{-1} (Q^{\prime}\otimes I_s)$.
A tree $T$ is said to be a directed tree if each edge of $T$ is assigned an orientation. If the tree $T$ has no vertex of degree $2$, we let $\hat{\tau}$ denote the diagonal matrix with diagonal elements $1/\tau_1,1/\tau_2,\hdots,1/\tau_n$. In the following theorem, we state a basic result about the edge orientation matrix $H$ of an unweighted tree $T$, which combines Theorem $9$ of \cite{bapat2013product} and Theorem $11$ of \cite{bapat2016squared}.
\begin{thm}\cite{bapat2013product,bapat2016squared}\label{detH}
Let $T$ be a directed tree on $n$ vertices and let $H$ and $Q$ be the edge-orientation matrix and incidence matrix of $T$, respectively. Then $\det H=2^{n-2}\prod_{i=1}^n \tau_i$. Furthermore, if $T$ has no vertex of degree $2$, then $H$ is nonsingular and $H^{-1}=\frac{1}{2}Q^{\prime}\hat{\tau}Q$.
\end{thm}
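As a sanity check of Theorem \ref{detH}, the following Python sketch (our own illustration, assuming NumPy) builds $H$ and $Q$ from the definitions for the star $K_{1,3}$ with all edges directed away from the centre, and verifies both the determinant formula and the expression for $H^{-1}$:

```python
import numpy as np
from collections import deque

# Directed star K_{1,3}: edges e1=(1,2), e2=(1,3), e3=(1,4) (our toy example)
edges = [(1, 2), (1, 3), (1, 4)]
n = 4

adj = {v: [] for v in range(1, n + 1)}
for p, q in edges:
    adj[p].append(q); adj[q].append(p)

def dist(u):
    """BFS distances from u in the underlying unweighted tree."""
    d = {u: 0}; queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1; queue.append(y)
    return d

d = {u: dist(u) for u in range(1, n + 1)}

# Edge orientation matrix: e_i=(p,q), e_j=(r,s) similarly oriented iff d(p,r)=d(q,s)
m = len(edges)
H = np.eye(m)
for i, (p, q) in enumerate(edges):
    for j, (r, s) in enumerate(edges):
        if i != j:
            H[i, j] = 1 if d[p][r] == d[q][s] else -1

# Incidence matrix: +1 where the edge originates, -1 where it terminates
Q = np.zeros((n, m))
for j, (p, q) in enumerate(edges):
    Q[p - 1, j], Q[q - 1, j] = 1, -1

tau = np.array([2 - len(adj[v]) for v in range(1, n + 1)])

# det H = 2^{n-2} * prod(tau_i)   (here -4)
assert np.isclose(np.linalg.det(H), 2 ** (n - 2) * tau.prod())

# H^{-1} = (1/2) Q' diag(1/tau) Q  (no vertex of degree 2 in the star)
assert np.allclose(np.linalg.inv(H), 0.5 * Q.T @ np.diag(1 / tau) @ Q)
```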
Next, we state a known result related to the distance matrix of a tree with matrix weights.
\begin{thm}[{\cite[Theorem 3.4]{atik2017distance}}]\label{thm:DL}
Let $T$ be a tree on $n$ vertices, each of whose edges is assigned a positive definite matrix of order $s$. Let $L$ and $D$ be the Laplacian matrix and the distance matrix of $T$, respectively. If $D$ is invertible, then the following assertions hold:
\begin{enumerate}
\item $LD=\tau \textbf{1}_n^{\prime}\otimes I_s-2I_n\otimes I_s$.
\item $DL=\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s.$
\end{enumerate}
\end{thm}
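Theorem \ref{thm:DL} can likewise be checked numerically. The sketch below is ours; the path on $3$ vertices with $2\times 2$ diagonal positive definite weights is an arbitrary choice (here $D$ is invertible):

```python
import numpy as np

# Path 1 - 2 - 3 with 2x2 positive definite edge weights (our toy choice)
W1 = np.diag([1.0, 2.0])
W2 = np.diag([3.0, 1.0])
s, n = 2, 3
Z = np.zeros((s, s))

# Distance matrix D: blocks are sums of weights along the unique path
D = np.block([[Z, W1, W1 + W2],
              [W1, Z, W2],
              [W1 + W2, W2, Z]])

# Laplacian L: off-diagonal blocks -W(i,j)^{-1}, diagonal blocks sums of inverses
iW1, iW2 = np.linalg.inv(W1), np.linalg.inv(W2)
L = np.block([[iW1, -iW1, Z],
              [-iW1, iW1 + iW2, -iW2],
              [Z, -iW2, iW2]])

tau = np.array([[1.0], [0.0], [1.0]])     # tau_i = 2 - deg(i)
ones = np.ones((n, 1))
I = np.eye(s)

# Theorem: LD = tau 1' x I_s - 2 I_{ns},  DL = 1 tau' x I_s - 2 I_{ns}
assert np.allclose(L @ D, np.kron(tau @ ones.T, I) - 2 * np.eye(n * s))
assert np.allclose(D @ L, np.kron(ones @ tau.T, I) - 2 * np.eye(n * s))
```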
\section{Properties of the squared distance matrices of trees }
In this section, we relate the squared distance matrix to other matrices associated with the tree, such as the distance matrix, the Laplacian matrix, and the incidence matrix. We will use these results to obtain the formulae for the determinants and inverses of the squared distance matrices of directed trees.
\begin{lem}\label{lem:Ddel}
Let $T$ be a tree with vertex set $\{1,2,\hdots,n\}$ in which each edge is assigned a positive definite matrix weight of order $s$. Let $D$ and $\Delta$ be the distance matrix and the squared distance matrix of $T$, respectively. Then
$\Delta (\tau \otimes I_s) =D \hat{\delta}.$
\end{lem}
\begin{proof}
Let $i \in \{1,2,\hdots,n\}$ be fixed. For $j \neq i$, let $p(j)$ be the predecessor of $j$ on the $(i,j)$-path of the underlying tree, let $e_j$ be the edge between the vertices $p(j)$ and $j$, let $W_j$ denote the weight of the edge $e_j$, and set $X_j=\hat{\delta_j}-W_j$. Therefore,
\begin{eqnarray*}
2\sum_{j=1}^n d(i,j)^2 &=& \sum_{j=1}^n d(i,j)^2+\sum_{j\neq i} \Big(d(i,p(j))+W_j\Big)^2\\
&=&\sum_{j=1}^n d(i,j)^2+\sum_{j\neq i} d(i,p(j))^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2.
\end{eqnarray*}
Since the vertex $j$ is the predecessor of $\delta_j-1$ vertices in the paths from $i$, we have
$$\sum_{j\neq i} d(i,p(j))^2=\sum_{j=1}^n(\delta_j-1)d(i,j)^2.$$
Thus,
\begin{eqnarray*}
2\sum_{j=1}^n d(i,j)^2 &=& \sum_{j=1}^n d(i,j)^2+\sum_{j=1}^n(\delta_j-1)d(i,j)^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2\\
&=& \sum_{j=1}^n\delta_jd(i,j)^2+2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2.
\end{eqnarray*}
Therefore, the $i$-th block of $\Delta (\tau \otimes I_s)$ is
\begin{align*}
(\Delta (\tau \otimes I_s))_{i}= \sum_{j=1}^n(2-\delta_j) d(i,j)^2=2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2.
\end{align*}
Now, let us compute the $i$-th block of $D \hat{\delta}$.
\begin{eqnarray*}
(D \hat{\delta})_{i}=\sum_{j=1}^n d(i,j)\hat{\delta_j} &=& \sum_{j\neq i}\Big(d(i,p(j))+W_j\Big)(W_j+X_j)\\
&=&\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2+\sum_{j\neq i}\Big(d(i,p(j))+W_j\Big)X_j.
\end{eqnarray*}
Note that $X_j$ is the sum of the weights of all edges incident to $j$, except $e_j$. Hence,
\begin{align*}
\big(d(i,p(j))+W_j\big)X_j =d(i,j)X_j= \sum_{l\sim j,l\neq p(j)} d(i,p(l))W_l.
\end{align*}
Therefore,
$$\sum_{j\neq i}\big(d(i,p(j))+W_j\big)X_j=\sum_{j\neq i}\sum_{l\sim j,l\neq p(j)} d(i,p(l))W_l=\sum_{j\neq i} d(i,p(j))W_j. $$
Thus,
\begin{align*}
(D \hat{\delta})_{i}= \sum_{j=1}^n d(i,j)\hat{\delta_j}=2\sum_{j\neq i} d(i,p(j))W_j+\sum_{j\neq i}W_j^2=(\Delta (\tau \otimes I_s))_{i}.
\end{align*}
This completes the proof.
\end{proof}
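A quick numerical check of Lemma \ref{lem:Ddel} on a toy example (ours: the path on $3$ vertices with $2\times 2$ diagonal positive definite weights, assuming NumPy):

```python
import numpy as np

W1, W2 = np.diag([1.0, 2.0]), np.diag([3.0, 1.0])
s, n = 2, 3
Z = np.zeros((s, s))

D = np.block([[Z, W1, W1 + W2],
              [W1, Z, W2],
              [W1 + W2, W2, Z]])
# Squared distance matrix: each off-diagonal block is d(i,j)^2
Delta = np.block([[Z, W1 @ W1, (W1 + W2) @ (W1 + W2)],
                  [W1 @ W1, Z, W2 @ W2],
                  [(W1 + W2) @ (W1 + W2), W2 @ W2, Z]])

tau = np.array([[1.0], [0.0], [1.0]])            # tau_i = 2 - deg(i)
delta_hat = np.vstack([W1, W1 + W2, W2])          # matrix-weighted degrees

# Lemma: Delta (tau x I_s) = D delta_hat
assert np.allclose(Delta @ np.kron(tau, np.eye(s)), D @ delta_hat)
```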
\begin{lem}\label{lem:FHF}
Let $T$ be a directed tree with vertex set $\{1,\hdots,n\}$ and edge set $\{e_1,\hdots,e_{n-1}\}$, in which each edge $e_i$ is assigned a positive definite matrix weight $W_i$ of order $s$, $1 \leq i \leq n-1$. Let $H$ and $Q$ be the edge orientation matrix and the incidence matrix of $T$, respectively.
If $F$ is the diagonal matrix with diagonal entries $W_1,W_2,\hdots,W_{n-1}$, then
$$(Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)=-2F(H\otimes I_s)F.$$
\end{lem}
\begin{proof}
For $i,j\in \{1,2,\hdots,n-1\}$, let $e_i$ and $e_j$ be two edges of $T$ such that $e_i$ is directed from $p$ to $q$ and $e_j$ is directed from $r$ to $t$. Let $W_i$ and $W_j$ be the weights of the edges $e_i$ and $e_j$, respectively. If $d(q,r)=Y$, then it is easy to see that
\begin{eqnarray*}
\Big((Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)\Big)_{ij} &=&
\begin{cases}
\text{$(W_i+Y)^2+(W_j+Y)^2-(W_i+W_j+Y)^2-Y^2$,} & \text{if $e_i\Rightarrow e_j$,}\\
\text{$-(W_i+Y)^2-(W_j+Y)^2+(W_i+W_j+Y)^2+Y^2$,}& \text{if $e_i \rightleftharpoons e_j$.}\\
\end{cases}\\
&=&
\begin{cases}
\text{$-2W_iW_j$,} & \text{if $e_i\Rightarrow e_j$,}\\
\text{$2W_iW_j$,}& \text{if $e_i \rightleftharpoons e_j$.}\\
\end{cases}
\end{eqnarray*}
Note that $(F(H\otimes I_s)F)_{ij}=
\begin{cases}
\text{$W_iW_j$} & \quad\text{if $e_i\Rightarrow e_j$,}\\
\text{$-W_iW_j$}& \quad\text{if $e_i \rightleftharpoons e_j$.}
\end{cases}$\\
Thus, $\Big((Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s)\Big)_{ij}=-2(F(H\otimes I_s)F)_{ij}.$
\end{proof}
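Lemma \ref{lem:FHF} can be verified numerically as well. In the sketch below (our own, assuming NumPy), the directed path $1\to 2\to 3$ has similarly oriented edges, so $H$ is the all-ones $2\times 2$ matrix:

```python
import numpy as np

# Directed path 1 -> 2 -> 3 with 2x2 diagonal positive definite weights
W1, W2 = np.diag([1.0, 2.0]), np.diag([3.0, 1.0])
s = 2
Z = np.zeros((s, s))

Delta = np.block([[Z, W1 @ W1, (W1 + W2) @ (W1 + W2)],
                  [W1 @ W1, Z, W2 @ W2],
                  [(W1 + W2) @ (W1 + W2), W2 @ W2, Z]])

# e1=(1,2) and e2=(2,3) are similarly oriented (d(1,2)=d(2,3)=1), so H = [[1,1],[1,1]]
H = np.array([[1.0, 1.0], [1.0, 1.0]])
Q = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
F = np.block([[W1, Z], [Z, W2]])                  # diag(W1, W2)

lhs = np.kron(Q.T, np.eye(s)) @ Delta @ np.kron(Q, np.eye(s))
rhs = -2 * F @ np.kron(H, np.eye(s)) @ F
assert np.allclose(lhs, rhs)
```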
\begin{lem}\label{deltaL}
Let $T$ be a tree with vertex set $\{1,\hdots,n\}$ and edge set $\{e_1,\hdots,e_{n-1}\}$, in which each edge $e_i$ is assigned a positive definite matrix weight $W_i$ of order $s$, $1 \leq i \leq n-1$. Let $L$, $D$ and $\Delta$ be the Laplacian matrix, the distance matrix and the squared distance matrix of $T$, respectively. Then
$\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}$.
\end{lem}
\begin{proof}
Let $i,j\in V(T)$ and let $t$ be the degree of the vertex $j$. Suppose $j$ is adjacent to the vertices $v_1,v_2,\hdots,v_t$, and let $e_1,e_2,\hdots,e_t$ be the corresponding edges with edge weights $W_1,W_2,\hdots,W_t$, respectively.\\
\textbf{Case 1.} For $i=j$, we have
\begin{eqnarray*}
(\Delta L)_{ii}&=&\sum_{k=1}^n d(i,k)^2 l_{ki}\\
&=&\sum_{k\sim i} d(i,k)^2 l_{ki}\\
&=& W_1^2(-W_1)^{-1}+\hdots +W_t^2(-W_t)^{-1}\\
&=&-(W_1+W_2+\hdots +W_t)\\
&=&-\hat{\delta_i}\\
&=& \big(2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}\big)_{ii}.
\end{eqnarray*}
\textbf{Case 2.} Let $i\neq j$. Without loss of generality, assume that the $(i,j)$-path passes through the vertex $v_1$ (it is possible that $i=v_1$). If $d(i,j)=Z$, then $d(i,v_1)=Z-W_1$, $d(i,v_2)=Z+W_2$, $d(i,v_3)=Z+W_3$, $\hdots, d(i,v_t)=Z+W_t$. Therefore,
\begin{eqnarray*}
(\Delta L)_{ij}&=&\sum_{k=1}^n d(i,k)^2 l_{kj}\\
&=&\sum_{k\sim j} d(i,k)^2 l_{kj}+d(i,j)^2 l_{jj}\\
&=& {d(i,v_1)}^2(-W_1)^{-1}+{d(i,v_2)}^2(-W_2)^{-1}+\hdots +{d(i,v_t)}^2(-W_t)^{-1}+d(i,j)^2 l_{jj}\\
&=&(Z-W_1)^2(-W_1)^{-1}+(Z+W_2)^2(-W_2)^{-1}+(Z+W_3)^2(-W_3)^{-1}\\
& &+\hdots +(Z+W_t)^2(-W_t)^{-1}+Z^2\big((W_1)^{-1}+(W_2)^{-1}+\hdots+(W_t)^{-1}\big)\\
&=&(W_1^2-2ZW_1)(-W_1)^{-1}+(W_2^2+2ZW_2)(-W_2)^{-1}+(W_3^2+2ZW_3)(-W_3)^{-1}\\ & & +\hdots+(W_t^2+2ZW_t)(-W_t)^{-1}\\
&=&-(W_1+W_2+\hdots +W_t)+2Z-2(t-1)Z\\
&=& 2(2-t)Z-(W_1+W_2+\hdots +W_t)\\
&=& 2\tau_j Z-\hat{\delta_j}\\
&=& \big(2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}\big)_{ij}.
\end{eqnarray*}
This completes the proof.
\end{proof}
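A numerical check of Lemma \ref{deltaL} on the same kind of toy path (our illustration, assuming NumPy):

```python
import numpy as np

W1, W2 = np.diag([1.0, 2.0]), np.diag([3.0, 1.0])
s, n = 2, 3
Z = np.zeros((s, s))
iW1, iW2 = np.linalg.inv(W1), np.linalg.inv(W2)

D = np.block([[Z, W1, W1 + W2],
              [W1, Z, W2],
              [W1 + W2, W2, Z]])
Delta = np.block([[Z, W1 @ W1, (W1 + W2) @ (W1 + W2)],
                  [W1 @ W1, Z, W2 @ W2],
                  [(W1 + W2) @ (W1 + W2), W2 @ W2, Z]])
L = np.block([[iW1, -iW1, Z],
              [-iW1, iW1 + iW2, -iW2],
              [Z, -iW2, iW2]])

tau_tilde = np.diag([1.0, 0.0, 1.0])              # diag(2 - deg(i))
delta_hat = np.vstack([W1, W1 + W2, W2])           # ns x s block column
ones = np.ones((n, 1))

# Lemma: Delta L = 2 D (tilde{tau} x I_s) - 1_n x delta_hat'
rhs = 2 * D @ np.kron(tau_tilde, np.eye(s)) - np.kron(ones, delta_hat.T)
assert np.allclose(Delta @ L, rhs)
```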
\section{Determinant of the squared distance matrix}
In this section, we obtain a formula for the determinant of the squared distance matrix of a tree with positive definite matrix weights. First, we consider the trees with no vertex of degree $2$.
\begin{thm}\label{det1}
Let $T$ be a tree on $n$ vertices, and let $W_i$ be the weight of the edge $e_i$, where the $W_i$'s are positive definite matrices of order $s$, $i=1,2,\hdots,n-1$. If $T$ has no vertex of degree $2$, then
$$\det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\prod_{i=1}^n {(\tau_i)^s}\prod_{i=1}^{n-1}\det (W_i^2) \det\bigg(\sum_{i=1}^n \frac{\hat{\delta_i}^2}{\tau_i}\bigg ).$$
\end{thm}
\begin{proof}
Let us assign an orientation to each edge of $T$, and let $H$ be the edge orientation matrix and $Q$ be the incidence matrix of the underlying unweighted tree.
Let $\Delta_i$ denote the $i$-th column block of the block matrix $\Delta$. Let $t_i$ be the $n \times 1$ column vector with $1$ at the $i$-th position and $0$ elsewhere, $i=1,2,\hdots,n$. Then
\begin{equation}\label{eqn1}
\left[ {\begin{array}{c}
Q^{\prime}\otimes I_s\\
t_1^{\prime}\otimes I_s\\
\end{array} } \right]
\Delta
\left[ {\begin{array}{cc}
Q\otimes I_s & t_1\otimes I_s\\
\end{array} } \right]=
\left[ {\begin{array}{cc}
(Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_1\\
\Delta_1^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right].
\end{equation}
Since $\det\left[ {\begin{array}{c}
Q^{\prime}\otimes I_s\\
t_1^{\prime}\otimes I_s\\
\end{array} } \right]=\det \Bigg( \left[ {\begin{array}{c}
Q^{\prime}\\
t_1^{\prime}\\
\end{array} } \right]\otimes I_s \Bigg)=\pm 1$, by taking determinants on both sides of equation (\ref{eqn1}), we have
\begin{align*}
\det (\Delta) =&
\det \left[ {\begin{array}{cc}
(Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_1\\
\Delta_1^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right].
\end{align*}
Using Lemma \ref{lem:FHF}, we have $\det (\Delta)=\det \left[ {\begin{array}{cc}
-2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_1\\
\Delta_1^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right].$ By Theorem \ref{detH}, we have $\det H=2^{n-2}\prod_{i=1}^n \tau_i$ and hence $\det(H\otimes I_s)=(\det H)^s=2^{(n-2)s}\prod_{i=1}^n \tau_i^s$. Thus, $-2F(H\otimes I_s)F$ is nonsingular, and by the Schur complement formula, we have
\begin{eqnarray*}
\det (\Delta) &=& \det\left[ {\begin{array}{cc}
-2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_1\\
\Delta_1^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right]\\
&=& \det(-2F(H\otimes I_s)F)\det \Big(-\Delta_1^{\prime}(Q\otimes I_s)(-2F(H\otimes I_s)F)^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big)\\
&=&(-1)^{(n-1)s}2^{(n-2)s}\prod_{i=1}^{n-1}\det(W_i^2) \det(H\otimes I_s)\det\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(H\otimes I_s)^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big).
\end{eqnarray*}
Now, from Theorem \ref{detH}, it follows that $(H\otimes I_s)^{-1}=H^{-1}\otimes I_s=\frac{1}{2}Q^{\prime}\hat{\tau}Q\otimes I_s=\frac{1}{2}(Q^{\prime}\hat{\tau}Q\otimes I_s)$. Therefore,
\begin{equation}\label{eqn det}
\det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\prod_{i=1}^n {(\tau_i)^s}\prod_{i=1}^{n-1}\det(W_i^2)\det \Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\Big).
\end{equation}
Now, by Lemma \ref{deltaL} and Lemma \ref{lem:Ddel}, we have
\begin{eqnarray*}
& &\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\\
&=&\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)(\hat{\tau}\otimes I_s)(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1\\
&=&\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Big)(\hat{\tau}\otimes I_s)\Big(\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Big)^{\prime}\\
&=&\big(\Delta_1^{\prime}L\big)(\hat{\tau}\otimes I_s)\big(\Delta_1^{\prime}L\big)^{\prime}\\
&=&\sum_i\big(2\tau_i d_{1i}-\hat{\delta_i}\big)^2\frac{1}{\tau_i}\\
&=&\sum_i\big(4{\tau_i}^2 d_{1i}^2+{\hat{\delta_i}}^2-4\tau_i d_{1i}\hat{\delta_i}\big)\frac{1}{\tau_i}\\
&=&\sum_i 4\tau_i d_{1i}^2+\sum_i \frac{\hat{\delta_i}^2}{\tau_i}-\sum_i 4d_{1i}\hat{\delta_i}\\
&=&\sum_i \frac{\hat{\delta_i}^2}{\tau_i}.
\end{eqnarray*}
Here the first and the third sums cancel, since Lemma \ref{lem:Ddel} applied to the first block of $\Delta(\tau\otimes I_s)=D\hat{\delta}$ gives $\sum_i \tau_i d_{1i}^2=\sum_i d_{1i}\hat{\delta_i}$. Substituting the value of $\Delta_1^{\prime}(Q\otimes I_s)F^{-1}(Q^{\prime}\hat{\tau}Q\otimes I_s)F^{-1}(Q^{\prime}\otimes I_s)\Delta_1$ in (\ref{eqn det}), we get the required result.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale= 0.50]{sqdst1.jpg}
\caption{ Tree $T_1$ on 4 vertices}
\label{fig1}
\end{figure}
Next, let us illustrate the above theorem by an example.
\begin{ex}
Consider the tree $T_1$ in Figure \ref{fig1}, where the edge weights are
\begin{align*}
W_1=\left[ {\begin{array}{cc}
1 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_2=\left[ {\begin{array}{cc}
2 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_3=\left[ {\begin{array}{cc}
1 & 0\\
0 & 2\\
\end{array} } \right].
\end{align*}
\end{ex}
Then,
\begin{align*}
\Delta =&\left[ {\begin{array}{cccc}
0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_3)^2\\
W_1^2 & 0 & W_2^2 & W_3^2\\
(W_1+W_2)^2 & W_2^2 & 0 & (W_2+W_3)^2\\
(W_1+W_3)^2 & W_3^2 & (W_2+W_3)^2 & 0\\
\end{array} } \right] \\
=&\left[ {\begin{array}{cccccccc}
0 & 0 & 1 & 0 & 9 & 0 & 4 & 0\\
0 & 0 & 0 & 1 & 0 & 4 & 0 & 9\\
1 & 0 & 0 & 0 & 4 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 4\\
9 & 0 & 4 & 0 & 0 & 0 & 9 & 0\\
0 & 4 & 0 & 1 & 0 & 0 & 0 & 9\\
4 & 0 & 1 & 0 & 9 & 0 & 0 & 0 \\
0 & 9 & 0 & 4 & 0 & 9 & 0 & 0\\
\end{array} } \right] ~ \text{and}\\
\sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}=& W_1^2+W_2^2+W_3^2-(W_1+W_2+W_3)^2=
\left[ {\begin{array}{cc}
-10 & 0\\
0 & -10\\
\end{array} } \right].
\end{align*}
One can verify that,
$$\det (\Delta)= 102400= (-1)^{6}2^{6}\prod_{i=1}^4 {(\tau_i)^2}\prod_{i=1}^{3}\det({W_i}^2) \det\Big (\sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}\Big ).$$
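This computation can be reproduced numerically. The sketch below (ours, assuming NumPy; vertex $2$ is taken as the centre of the star, matching the block structure of $\Delta$) rebuilds $\Delta$ for $T_1$ and evaluates both sides:

```python
import numpy as np

W1 = np.diag([1.0, 1.0]); W2 = np.diag([2.0, 1.0]); W3 = np.diag([1.0, 2.0])
s, n = 2, 4
Z = np.zeros((s, s))
sq = lambda A: A @ A

# Star with centre 2: d(1,2)=W1, d(2,3)=W2, d(2,4)=W3
Delta = np.block([[Z, sq(W1), sq(W1 + W2), sq(W1 + W3)],
                  [sq(W1), Z, sq(W2), sq(W3)],
                  [sq(W1 + W2), sq(W2), Z, sq(W2 + W3)],
                  [sq(W1 + W3), sq(W3), sq(W2 + W3), Z]])

tau = np.array([1.0, -1.0, 1.0, 1.0])             # tau_i = 2 - deg(i)
delta_hat = [W1, W1 + W2 + W3, W2, W3]             # matrix-weighted degrees
S = sum(sq(dh) / t for dh, t in zip(delta_hat, tau))

formula = ((-1) ** ((n - 1) * s) * 2 ** ((2 * n - 5) * s)
           * np.prod(tau ** s)
           * np.linalg.det(sq(W1)) * np.linalg.det(sq(W2)) * np.linalg.det(sq(W3))
           * np.linalg.det(S))
assert np.isclose(np.linalg.det(Delta), 102400)
assert np.isclose(formula, 102400)
```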
Next, we obtain a formula for the determinant of the squared distance matrix of a tree $T$, which has exactly one vertex of degree $2$.
\begin{thm}\label{det}
Let $T$ be a tree on $n$ vertices with the edge set $E(T)=\{e_1,e_2,\hdots,e_{n-1}\}$. Let the positive definite matrices $W_1,W_2,\hdots,W_{n-1}$ of order $s$ be the weights of the edges $e_1,e_2,\hdots,e_{n-1}$, respectively. Let $v$ be the vertex of degree $2$ and $u$ and $w$ be its neighbours in $T$. If $e_i=(u,v)$ and $e_j=(v,w)$, then
$$\det (\Delta)=(-1)^{(n-1)s}2^{(2n-5)s}\det(W_i+W_j)^2 \prod_{k=1}^{n-1} \det(W_k^2)\prod_{k\neq v}\tau_k^s.$$
\end{thm}
\begin{proof}
Let us assign an orientation to each edge of $T$. Without loss of generality, assume that the edge $e_i$ is directed from $u$ to $v$ and the edge $e_j$ is directed from $v$ to $w$.
Let $\Delta_i$ denote the $i$-th column block of the block matrix $\Delta$. Let $t_i$ be the $n \times 1$ column vector with $1$ at the $i$-th position and $0$ elsewhere, $i=1,2,\hdots,n$. Therefore, by using Lemma \ref{lem:FHF}, we have
\begin{eqnarray*}
\left[ {\begin{array}{c}
Q^{\prime}\otimes I_s\\
t_v^{\prime}\otimes I_s\\
\end{array} } \right]
\Delta
\left[ {\begin{array}{cc}
Q\otimes I_s & t_v\otimes I_s\\
\end{array} } \right] &=&
\left[ {\begin{array}{cc}
(Q^{\prime}\otimes I_s)\Delta (Q\otimes I_s) & (Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right]\\
&=& \left[ {\begin{array}{cc}
-2F(H\otimes I_s)F & (Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s) & 0\\
\end{array} } \right]
\end{eqnarray*}
Pre-multiplying and post-multiplying the above equation by $\left[ {\begin{array}{cc}
F^{-1}& 0\\
0 & I_s\\
\end{array} } \right]$, we get
\begin{eqnarray*}
\left[ {\begin{array}{cc}
F^{-1}& 0\\
0 & I_s\\
\end{array} } \right]
\left[ {\begin{array}{c}
Q^{\prime}\otimes I_s\\
t_v^{\prime}\otimes I_s\\
\end{array} } \right]
\Delta
\left[ {\begin{array}{cc}
Q\otimes I_s & t_v\otimes I_s\\
\end{array} } \right]
\left[ {\begin{array}{cc}
F^{-1}& 0\\
0 & I_s\\
\end{array} } \right] &=&
\left[ {\begin{array}{cc}
-2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\
\end{array} } \right],
\end{eqnarray*}
which implies that
\begin{eqnarray*}
(\det(F^{-1}))^2 \det(\Delta) =\det
\left[ {\begin{array}{cc}
-2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\
\end{array} } \right].
\end{eqnarray*}
Let $H(j|j)$ denote the $(n-2)s\times (n-2)s$ submatrix obtained by deleting all the blocks in the $j$-th row and the $j$-th column from $H\otimes I_s$. Let $R_i$ and $C_i$ denote the $i$-th row and the $i$-th column of the matrix $\left[ {\begin{array}{cc}
-2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\
\end{array} } \right]$, respectively. Note that the blocks in the $i$-th and $j$-th column of $H\otimes I_s$ are identical. Now, perform the operations $R_j-R_i$ and $C_j-C_i$ in $\left[ {\begin{array}{cc}
-2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\
\end{array} } \right]$, and then interchange $R_j$ and $R_{n-1}$, and $C_j$ and $C_{n-1}$. Since $(\Delta_v^{\prime}(Q\otimes I_s)F^{-1})_j-(\Delta_v^{\prime}(Q\otimes I_s)F^{-1})_i=-W_j-W_i$, we obtain
\begin{equation}
\det \left[ {\begin{array}{cc}
-2(H\otimes I_s) & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & 0\\
\end{array} } \right] = \det \left[ {\begin{array}{ccc}
-2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
0 & 0 & -W_j-W_i\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\
\end{array} } \right].
\end{equation}
Since $H(j|j)$ is the edge orientation matrix of the tree obtained by deleting the vertex $v$ and replacing the edges $e_i$ and $e_j$ by a single edge directed from $u$ to $w$ in the tree, by Theorem \ref{detH}, we have
$\det(H(j|j))=2^{(n-3)s}\prod_{k \neq v}\tau_k^s$, which is nonzero. Therefore, by applying the Schur complement formula, we have
\begin{eqnarray*}
& &\det \left[ {\begin{array}{ccc}
-2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
0 & 0 & -W_j-W_i\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\
\end{array} } \right] \\
&=& \det(-2H(j|j)) \det \bigg(\left[ {\begin{array}{cc}
0 & -W_j-W_i\\
-W_j-W_i & 0\\
\end{array} } \right]-\\ & &~~~~~~~~~~~~~~~~~~~~~~~~~~~
\left[ {\begin{array}{cc}
0 & 0 \\
0 & \Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\end{array} } \right] \bigg)\\
&=&(-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc}
0 & -W_j-W_i\\
-W_j-W_i & -\Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
\end{array} } \right].
\end{eqnarray*}
Again, by the proof of Theorem \ref{det1}, we have $$\Delta_v^{\prime}(Q\otimes I_s)F^{-1}(-2H(j|j))^{-1}F^{-1}(Q^{\prime}\otimes I_s)\Delta_v=-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}.$$ Therefore,
\begin{eqnarray*}
& &\det \left[ {\begin{array}{ccc}
-2H(j|j) & 0 & F^{-1}(Q^{\prime}\otimes I_s)\Delta_v\\
0 & 0 & -W_j-W_i\\
\Delta_v^{\prime}(Q\otimes I_s)F^{-1} & -W_j-W_i & 0\\
\end{array} } \right] \\
&=& (-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc}
0 & -W_j-W_i\\
-W_j-W_i & \frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\\
\end{array} } \right]\\
&=& (-2)^{(n-2)s}\det(H(j|j)) \det \left[ {\begin{array}{cc}
0 & W_j+W_i\\
W_j+W_i & -\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\\
\end{array} } \right].
\end{eqnarray*}
Since $\det \Big(-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\Big)\neq 0$, by the Schur complement formula, we have
\begin{eqnarray*}
\det \left[ {\begin{array}{cc}
0 & W_j+W_i\\
W_j+W_i & -\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\\
\end{array} } \right]
&=&\det \bigg(-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\bigg) \det \bigg[0-(W_j+W_i) \bigg(-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\bigg)^{-1}( W_j+W_i)\bigg]\\
&=&(-1)^s \det \bigg(-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\bigg) \det \bigg(-\frac{1}{4}\sum_{k\neq v} \frac{\hat{\delta_k}^2}{\tau_k}\bigg)^{-1} \det(W_j+W_i)^2\\
&=&(-1)^s \det(W_i+W_j)^2.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\det (\Delta) &=&(\det F)^2(-1)^{s}(-2)^{(n-2)s}2^{(n-3)s}\prod_{k\neq v}\tau_k^s~\det(W_i+W_j)^2\\
&=&(-1)^{(n-1)s}2^{(2n-5)s}\det(W_i+W_j)^2\prod_{k=1}^{n-1}\det(W_k^2)\prod_{k\neq v}\tau_k^s.
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale= 0.50]{sqdst2.jpg}
\caption{ Tree $T_2$ on 5 vertices }
\label{fig2}
\end{figure}
Now, we illustrate the above theorem by the following example.
\begin{ex}
Consider the tree $T_2$ in Figure \ref{fig2}, where the edge weights are
\begin{align*}
W_1=\left[ {\begin{array}{cc}
1 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_2=\left[ {\begin{array}{cc}
2 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_3=\left[ {\begin{array}{cc}
1 & 0\\
0 & 2\\
\end{array} } \right], \qquad
W_4=\left[ {\begin{array}{cc}
2 & 0\\
0 & 2\\
\end{array} } \right].
\end{align*}
\end{ex}
Then,
\begin{eqnarray*}
\Delta &=&\left[ {\begin{array}{ccccc}
0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_2+W_3)^2 & (W_1+W_2+W_4)^2\\
W_1^2 & 0 & W_2^2 & (W_2+W_3)^2 & (W_2+W_4)^2\\
(W_1+W_2)^2 & W_2^2 & 0 & W_3^2 & W_4^2\\
(W_1+W_2+W_3)^2 &(W_2+W_3)^2 & W_3^2 & 0 & (W_3+W_4)^2\\
(W_1+W_2+W_4)^2 & (W_2+W_4)^2 & W_4^2 & (W_3+W_4)^2 & 0\\
\end{array} } \right] \\
&=&\left[ {\begin{array}{cccccccccc}
0 & 0 & 1 & 0 & 9 & 0 & 16 & 0 & 25 & 0\\
0 & 0 & 0 & 1 & 0 & 4 & 0 & 16 & 0 & 16\\
1 & 0 & 0 & 0 & 4 & 0 & 9 & 0 & 16 & 0\\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 9 & 0 & 9\\
9 & 0 & 4 & 0 & 0 & 0 & 1 & 0 & 4 & 0\\
0 & 4 & 0 & 1 & 0 & 0 & 0 & 4 & 0 & 4\\
16 & 0 & 9 & 0 & 1 & 0 & 0 & 0 & 9 & 0\\
0 & 16 & 0 & 9 & 0 & 4 & 0 & 0 & 0 & 16\\
25 & 0 & 16 & 0 & 4 & 0 & 9 & 0 & 0 & 0 \\
0 & 16 & 0 & 9 & 0 & 4 & 0 & 16 & 0 & 0 \\
\end{array} } \right].
\end{eqnarray*}
One can verify that $$\det (\Delta)= 9437184= (-1)^{8}2^{10}\det(W_1+W_2)^2 \prod_{i=1}^{4} \det(W_i^2)\prod_{k\neq 2}\tau_k^s.$$
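The stated value can also be checked numerically. The sketch below (not part of the original paper; the vertex labels $0$--$4$ and the edge list are read off from the symbolic form of $\Delta$ above) builds the squared distance matrix of $T_2$ and evaluates its determinant with NumPy:

```python
import numpy as np

s, n = 2, 5
# T_2: edges (0,1)=W1, (1,2)=W2, (2,3)=W3, (2,4)=W4 (labels assumed here)
edges = {(0, 1): np.diag([1.0, 1.0]),   # W1
         (1, 2): np.diag([2.0, 1.0]),   # W2
         (2, 3): np.diag([1.0, 2.0]),   # W3
         (2, 4): np.diag([2.0, 2.0])}   # W4

adj = {v: [] for v in range(n)}
for (u, v), w in edges.items():
    adj[u].append((v, w))
    adj[v].append((u, w))

def path_weight(src, dst):
    """Sum of the edge-weight matrices along the unique src--dst path (DFS)."""
    stack = [(src, None, np.zeros((s, s)))]
    while stack:
        node, prev, acc = stack.pop()
        if node == dst:
            return acc
        for nxt, w in adj[node]:
            if nxt != prev:
                stack.append((nxt, node, acc + w))

# block (i, j) of the squared distance matrix is d(i, j)^2
Delta = np.zeros((n * s, n * s))
for i in range(n):
    for j in range(n):
        if i != j:
            d = path_weight(i, j)
            Delta[i*s:(i+1)*s, j*s:(j+1)*s] = d @ d

det_Delta = round(np.linalg.det(Delta))
print(det_Delta)  # 9437184, in agreement with the closed-form expression
```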
\begin{cor}
Let $T$ be a tree on $n$ vertices in which each edge $e_i$ of $T$ is assigned a positive definite matrix $W_i$ of order $s$, $i=1,2,\hdots,n-1$. If $T$ has at least two vertices of degree $2$, then $\det (\Delta)=0$.
\end{cor}
\begin{proof}
The result follows from Theorem \ref{det}, since $\tau_i=0$ for at least two values of $i$.
\end{proof}
\section{Inverse of the squared distance matrix}
In this section, we consider trees with no vertex of degree $2$ and obtain an explicit formula for the inverse of the squared distance matrix. First, let us prove the following lemma, which will be used to find $\Delta^{-1}$.
\begin{lem}\label{lem:inv}
Let $T$ be a tree of order $n$ with no vertex of degree $2$, in which each edge of $T$ is assigned a positive definite matrix weight of order $s$. If $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}$ and $\eta=2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}$, then
$$\Delta \eta =\textbf{1}_n \otimes \beta.$$
\end{lem}
\begin{proof}
By Lemma \ref{deltaL}, we have $\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n \otimes {\hat{\delta}^\prime}$. Hence,
\begin{eqnarray*}
\Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}&=&2D\hat{\delta}-(\textbf{1}_n \otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)\hat{\delta}\\
&=&2D\hat{\delta}-\textbf{1}_n \otimes
\sum_{i=1}^n\frac{\hat{\delta_i}^2}{\tau_i}.
\end{eqnarray*}
Since $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}=\sum_{i=1}^n\frac{\hat{\delta_i}^2}{\tau_i}$, therefore
$\Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}=2D\hat{\delta}-\textbf{1}_n \otimes \beta$. By Lemma \ref{lem:Ddel}, we have $\Delta (\tau \otimes I_s) =D \hat{\delta}$ and hence $\Delta L(\Hat{\tau}\otimes I_s)\hat{\delta}= 2\Delta (\tau \otimes I_s)-\textbf{1}_n\otimes \beta$. This completes the proof.
\end{proof}
If the tree $T$ has no vertex of degree $2$ and $\det(\beta) \neq 0$, then $\Delta$ is nonsingular, that is, ${\Delta}^{-1}$ exists. In the next theorem, we determine the formula for ${\Delta}^{-1}$.
\begin{thm}
Let $T$ be a tree of order $n$ with no vertex of degree $2$, in which each edge of $T$ is assigned a positive definite matrix weight of order $s$. Let $\beta=\Hat{{\delta}^{\prime}}(\Hat{\tau}\otimes I_s)\Hat{\delta}$ and $\eta=2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}$. If $\det(\beta) \neq 0$, then
$${\Delta}^{-1}=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}.$$
\end{thm}
\begin{proof}
Let $X=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}$.
Then,
\begin{equation}\label{eqn:inv1}
\Delta X=-\frac{1}{4}\Delta L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\Delta \eta {\beta}^{-1} {\eta}^{\prime}.
\end{equation}
By Lemma \ref{deltaL}, we have $\Delta L=2D(\Tilde{\tau}\otimes I_s)-\textbf{1}_n\otimes {\hat{\delta}^\prime}$. Therefore,
$$\Delta L(\Hat{\tau}\otimes I_s)L=2DL-(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L. $$
By Theorem \ref{thm:DL}, $DL=\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s$ and hence
\begin{equation}\label{eqn:inv2}
\Delta L(\Hat{\tau}\otimes I_s)L=2\Big(\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s\Big)-(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L.
\end{equation}
By Lemma \ref{lem:inv}, we have $\Delta \eta =\textbf{1}_n\otimes \beta=(\textbf{1}_n\otimes I_s)\beta$. Therefore, from equation (\ref{eqn:inv1}) and (\ref{eqn:inv2}), we have
\begin{eqnarray*}
\Delta X &=& -\frac{1}{2}\Big(\textbf{1}_n{\tau}^{\prime}\otimes I_s-2I_n\otimes I_s\Big)+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n \otimes I_s){\eta}^{\prime}\\
& = & -\frac{1}{2}\textbf{1}_n{\tau}^{\prime}\otimes I_s+I_n\otimes I_s+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n\otimes I_s)\Big(2\tau \otimes I_s-L(\hat{\tau}\otimes I_s)\Hat{\delta}\Big)^{\prime}\\
& = & -\frac{1}{2}\textbf{1}_n{\tau}^{\prime}\otimes I_s+I_n\otimes I_s+\frac{1}{4}(\textbf{1}_n\otimes {\hat{\delta}^\prime})(\Hat{\tau}\otimes I_s)L+\frac{1}{4}(\textbf{1}_n\otimes I_s)\Big(2\tau^{\prime} \otimes I_s-{\Hat{\delta}}^{\prime}(\hat{\tau}\otimes I_s)L\Big)\\
&=& I_n\otimes I_s=I_{ns}.
\end{eqnarray*}
This completes the proof.
\end{proof}
Now, let us illustrate the above formula for $\Delta^{-1}$ by an example.
\begin{ex}
Consider the tree $T_1$ in Figure 1, where the edge weights are
\begin{align*}
W_1=\left[ {\begin{array}{cc}
1 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_2=\left[ {\begin{array}{cc}
2 & 0\\
0 & 1\\
\end{array} } \right], \qquad
W_3=\left[ {\begin{array}{cc}
1 & 0\\
0 & 2\\
\end{array} } \right].
\end{align*}
\end{ex}
Then,
\begin{align*}
\Delta =&\left[ {\begin{array}{cccc}
0 & W_1^2 & (W_1+W_2)^2 & (W_1+W_3)^2\\
W_1^2 & 0 & W_2^2 & W_3^2\\
(W_1+W_2)^2 & W_2^2 & 0 & (W_2+W_3)^2\\
(W_1+W_3)^2 & W_3^2 & (W_2+W_3)^2 & 0\\
\end{array} } \right] \\
=&\left[ {\begin{array}{cccccccc}
0 & 0 & 1 & 0 & 9 & 0 & 4 & 0\\
0 & 0 & 0 & 1 & 0 & 4 & 0 & 9\\
1 & 0 & 0 & 0 & 4 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 4\\
9 & 0 & 4 & 0 & 0 & 0 & 9 & 0\\
0 & 4 & 0 & 1 & 0 & 0 & 0 & 9\\
4 & 0 & 1 & 0 & 9 & 0 & 0 & 0 \\
0 & 9 & 0 & 4 & 0 & 9 & 0 & 0\\
\end{array} } \right],\\
L=&\left[ {\begin{array}{cccc}
W_1^{-1}& -W_1^{-1} & 0 & 0\\
-W_1^{-1} & W_1^{-1}+W_2^{-1}+W_3^{-1} & -W_2^{-1} & -W_3^{-1}\\
0 & -W_2^{-1} & W_2^{-1} & 0 \\
0 & -W_3^{-1} & 0 &W_3^{-1}\\
\end{array} } \right] \\
=&\left[ {\begin{array}{cccccccc}
1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & -1 & 0 & 0 & 0 & 0\\
-1 & 0 & 2.5 & 0 & -0.5 & 0 & -1 & 0\\
0 & -1 & 0 & 2.5 & 0 & -1 & 0 & -0.5\\
0 & 0 & -0.5 & 0 & 0.5 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 1 & 0 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & -0.5 & 0 & 0 & 0 & 0.5\\
\end{array} } \right],
\end{align*}
\begin{align*}
\beta =\sum_{i=1}^4 \frac{\hat{\delta_i}^2}{\tau_i}=& W_1^2+W_2^2+W_3^2-(W_1+W_2+W_3)^2=\left[ {\begin{array}{cc}
-10 & 0\\
0 & -10\\
\end{array} } \right], ~ \text{and}\\
{\eta}^{\prime} =& \left[ {\begin{array}{cccccccc}
-3 & 0 & 11 & 0 & -1 & 0 & -3 & 0\\
0 & -3 & 0 & 11 & 0 & -3 & 0 & -1\\
\end{array} } \right].
\end{align*}
Therefore,
\begin{align*}
L(\Hat{\tau}\otimes I_s)L=& \left[ {\begin{array}{cccccccc}
0 & 0 & 1.5 & 0 & -0.5 & 0 & -1 & 0\\
0 & 0 & 0 & 1.5 & 0 & -1 & 0 & -0.5\\
1.5 & 0 & -4& 0 & 1 & 0 & 1.5 & 0\\
0 & 1.5 & 0 & -4 & 0 & 1.5 & 0 & 1\\
-0.5 & 0 & 1 & 0 & 0 & 0 & -0.5 & 0\\
0 & -1 & 0 & 1.5 & 0 & 0 & 0 & -0.5\\
-1 & 0 & 1.5 & 0 & -0.5 & 0 & 0 & 0 \\
0 & -0.5 & 0 & 1 & 0 & -0.5 & 0 & 0\\
\end{array} } \right],~ \text{and} \\
\eta {\beta}^{-1} {\eta}^{\prime}=& \left[ {\begin{array}{cccccccc}
-0.9 & 0 & 3.3 & 0 & -0.3 & 0 & -0.9 & 0\\
0 & -0.9 & 0 & 3.3 & 0 & -0.9 & 0 & -0.3\\
3.3 & 0 & -12.1 & 0 & 1.1 & 0 & 3.3 & 0\\
0 & 3.3 & 0 & -12.1 & 0 & 3.3 & 0 & 1.1\\
-0.3 & 0 & 1.1 & 0 & -0.1 & 0 & -0.3 & 0\\
0 & -0.9 & 0 & 3.3 & 0 & -0.9 & 0 & -0.3\\
-0.9 & 0 & 3.3 & 0 & -0.3 & 0 & -0.9 & 0 \\
0 & -0.3 & 0 & 1.1 & 0 & -0.3 & 0 & -0.1\\
\end{array} } \right].
\end{align*}
One can verify that $$\Delta^{-1}=-\frac{1}{4}L(\Hat{\tau}\otimes I_s)L+\frac{1}{4}\eta {\beta}^{-1} {\eta}^{\prime}.$$
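As with the determinant, the inverse formula can be checked numerically. The sketch below (illustrative only; vertex labels $0$--$3$ are assumed, with vertex $1$ the centre of the star $T_1$) rebuilds $\Delta$, $L$, $\hat{\delta}$, $\beta$ and $\eta$ from their definitions and verifies that $\Delta X = I_{ns}$:

```python
import numpy as np

s, n = 2, 4
I = np.eye(s)
# T_1: a star with centre 1 and leaves 0, 2, 3 (labels assumed here)
edges = {(0, 1): np.diag([1.0, 1.0]),   # W1
         (1, 2): np.diag([2.0, 1.0]),   # W2
         (1, 3): np.diag([1.0, 2.0])}   # W3

deg = np.zeros(n, dtype=int)
adj = {v: [] for v in range(n)}
for (u, v), w in edges.items():
    deg[u] += 1; deg[v] += 1
    adj[u].append((v, w)); adj[v].append((u, w))
tau = 2 - deg                           # tau_i = 2 - deg(i); all nonzero here

def path_weight(src, dst):
    """Sum of the edge-weight matrices along the unique src--dst path (DFS)."""
    stack = [(src, None, np.zeros((s, s)))]
    while stack:
        node, prev, acc = stack.pop()
        if node == dst:
            return acc
        for nxt, w in adj[node]:
            if nxt != prev:
                stack.append((nxt, node, acc + w))

# squared distance matrix: block (i, j) is d(i, j)^2
Delta = np.zeros((n * s, n * s))
for i in range(n):
    for j in range(n):
        if i != j:
            d = path_weight(i, j)
            Delta[i*s:(i+1)*s, j*s:(j+1)*s] = d @ d

# matrix-weighted Laplacian, with blocks built from the W_i^{-1}
Lap = np.zeros((n * s, n * s))
for (u, v), w in edges.items():
    winv = np.linalg.inv(w)
    Lap[u*s:(u+1)*s, u*s:(u+1)*s] += winv
    Lap[v*s:(v+1)*s, v*s:(v+1)*s] += winv
    Lap[u*s:(u+1)*s, v*s:(v+1)*s] -= winv
    Lap[v*s:(v+1)*s, u*s:(u+1)*s] -= winv

# delta_hat_i = sum of the weights of the edges incident to vertex i
delta_hat = np.zeros((n * s, s))
for (u, v), w in edges.items():
    delta_hat[u*s:(u+1)*s] += w
    delta_hat[v*s:(v+1)*s] += w

tau_hat = np.kron(np.diag(1.0 / tau), I)    # \hat{tau} (x) I_s
tau_col = np.kron(tau.reshape(n, 1), I)     # tau (x) I_s, an ns x s block vector

beta = delta_hat.T @ tau_hat @ delta_hat    # equals diag(-10, -10) here
eta = 2 * tau_col - Lap @ tau_hat @ delta_hat

X = -0.25 * Lap @ tau_hat @ Lap + 0.25 * eta @ np.linalg.inv(beta) @ eta.T
print(np.allclose(Delta @ X, np.eye(n * s)))  # True: X = Delta^{-1}
```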
\bibliographystyle{plain}
\section{Introduction}
The apparent excess of the number of galaxies at faint magnitudes in the blue relative to predictions of
non-evolving models, even in the most favourable case of an open Universe, is
a longstanding
problem of cosmology.
Various scenarios have been proposed to solve this problem
in a flat Universe, such as a strong number density
evolution of galaxies via merging (Rocca-Volmerange \& Guiderdoni 1990; Broadhurst, Ellis \& Glazebrook 1992) or a cosmological constant (Fukugita et al. 1990).
In the framework of more conservative pure luminosity evolution models in an
open Universe, two solutions were
advocated. Either these blue galaxies are intensively star forming galaxies
at high redshift, or counts are dominated
by a population of intrinsically faint blue nearby galaxies. Looking for the
optimal luminosity functions (LF) fitting most observational
constraints, Gronwall \& Koo (1995) have in particular introduced {\em non-evolving populations} of
faint {\em very} blue galaxies (vBG; see also Pozzetti, Bruzual \& Zamorani 1996),
contributing significantly to faint counts. Such blue colors require
however that {\em individual} galaxies have
recently been bursting and are thus rapidly evolving.
With a modelling of the spectral evolution of these galaxies that also takes post-burst phases into consideration,
Bouwens \& Silk (1996) concluded that the LF adopted by Gronwall \& Koo (1995) leads to a strong excess of nearby galaxies in
the redshift distribution and that vBG may thus not be the main explanation of
the blue excess.
On the basis of considerable observational progress in collecting deep
survey data, it is timely to address the question of the nature of the
blue excess anew, with the help of our new model PEGASE (FRV).
In this paper, we propose a star formation scenario and a LF
respecting the observational constraints on vBG.
Far-UV and optical counts are well matched with the classical
Hubble Sequence population and that bursting population extension.
The importance of vBG relative to normal galaxies and the physical origin of
bursts are finally discussed in the conclusion.
\section{Observational evidence of very blue galaxies}
In contrast with the so-called `normal' galaxies of the Hubble Sequence,
supposed to form at high redshift with definite star formation timescales,
bursting galaxies are rapidly evolving without clear timescales.
Specifically, in the red post-burst phases, they
might be indistinguishable from normal
slowly evolving galaxies. The bluest phases during the burst should, however,
allow us to recognize them and to constrain their evolution and their number.
The existence of galaxies much bluer than normal and classified as starbursts
has been recently noticed
at optical wavelengths by Heyl et al. (1997). At fainter magnitudes ($B=22.5-24$), the deep survey of Cowie et
al. (1996) has revealed two populations of blue ($B-I<1.6$) galaxies (Figs.~\ref{cowie} and \ref{nz}).
Normal star forming galaxies, as predicted by standard models, are observed at high redshift
($z>0.7$) but another, clearly distinct population of blue galaxies is identified
at $0<z<0.3$, among which some of them are very blue.
The best constraint on the weight of these vBG comes from
the far-UV (2000 \AA) bright counts observed with the balloon experiment
FOCA2000 (Armand \& Milliard 1994).
By using a standard LF, the authors obtain a strong deficit of predicted galaxies
in UV counts all along the magnitude range ($UV=14-18$) and argue in favour of a LF biased towards
later-type galaxies.
With the star formation scenarios and the LF of Marzke et
al. (1994) fitting optical and near-infrared bright counts (FRV), we confirm
that this UV deficit reaches a factor 2 (Fig.~\ref{UV}).
Moreover, the $UV-B$ color distributions show
a clear lack of blue galaxies and notably of those with $UV-B<-1.5$ (Fig.~\ref{UV}).
A 10 Gyr old galaxy that formed stars at a constant rate would,
however, only have $UV-B\sim-1.2$.
Although a low metallicity may lead to bluer colors, such a galaxy
would still be too red,
and a population of bursting galaxies is clearly needed to explain UV counts
and the Cowie et al. (1996) data.
\section{Modeling very blue galaxies}
\subsection{Star formation scenario}
Very blue colors are possible only in very young galaxies or in galaxies currently
undergoing enhanced star formation. Two kinds of models are thus possible and have been advanced by
Bouwens \& Silk (1996) to maintain such a population over a wide range of redshifts. In the first one,
new blue galaxies are continually formed and leave red fading remnants whereas in the second one,
star formation occurs recurrently. We adopt the latter scenario and will discuss
in the conclusion the reasons for this choice. For the sake of simplicity, we assume that
all vBG form stars periodically. In each period, a burst phase with a constant star formation rate (SFR)
$\tau_{\rm b}$ and the same initial mass function as in FRV
is followed by a quiescent phase without star formation.
A good agreement with observational constraints is obtained with 100 Myr long burst phases
occurring every Gyr.
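For concreteness, the adopted duty cycle can be sketched as follows (a minimal illustration with an arbitrary unit burst rate; the actual $\tau_{\rm b}$ scale is set by the LF of Table~\ref{FL}):

```python
# Minimal sketch of the adopted cycling star formation history:
# bursts of constant SFR lasting 100 Myr, repeating every 1 Gyr.
def sfr(t_myr, tau_b=1.0, period=1000.0, burst_length=100.0):
    """SFR at time t (in Myr): tau_b during the burst phase, 0 otherwise."""
    return tau_b if (t_myr % period) < burst_length else 0.0

# averaged over 10 Gyr, stars form only 10 per cent of the time
duty_cycle = sum(sfr(t) for t in range(10000)) / 10000.0
print(duty_cycle)  # 0.1
```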
\subsection{Luminosity function}
Because bursting galaxies rapidly redden and fade during inter-burst phases,
we cannot assign them a single
LF by absolute magnitude, independently of color.
We therefore prefer to adopt for vBG a Schechter
function determined by $\tau_{\rm b}$.
The lack of vBG at $z\ga0.4$ in the Cowie et al. (1996) redshift distribution
is particularly constraining for the LF. It may be interpreted in two
ways. Either vBG formed only at low redshifts ($z<0.3$) or the lack is due to
the exponential cut-off in the Schechter LF.
Physical arguments for such low formation redshifts are weak.
Scenarios invoking a large population of blue dwarf galaxies, as proposed by
Babul \& Rees (1992), generally predict a higher redshift of formation ($z\sim 1$). Adopting the latter
solution, we get
$M^{\ast}_{\rm b_j}\sim-17$ ($H_0=100\,{\rm km.s^{-1}.Mpc^{-1}}$) for galaxies
with $B-I<1.6$ and may constrain the
other parameters of the LF. As noticed by Bouwens \& Silk (1996),
a steep LF extending to very faint magnitudes leads to a large local ($z<0.1$) excess
in the redshift distribution. A steep slope ($\alpha<-1.8$) is, however,
only necessary
to reconcile predicted number counts with observations in a flat Universe. In
an open
Universe, a shallower slope is possible. In the following, we adopt $\alpha=-1.3$ for vBG.
The normalization is taken in agreement with UV counts and the
Cowie et al. (1996) redshift distribution.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Galaxy type & $M^{\ast}_{\rm b_j}/\tau^{\ast}_{\rm b}$ & $\alpha$ & $\phi^{\ast}$\\
\hline
E & -20.02 & -1. & $1.91\,10^{-3}$\\
S0 & -20.02 & -1. & $1.91\,10^{-3}$\\
Sa & -19.62 & -1. & $2.18\,10^{-3}$\\
Sb & -19.62 & -1. & $2.18\,10^{-3}$\\
Sbc & -19.62 & -1. & $2.18\,10^{-3}$\\
Sc & -18.86 & -1. & $4.82\,10^{-3}$\\
Sdm & -18.86 & -1. & $9.65\,10^{-3}$\\
vBG & $3.95\,10^5$ & -1.3 & $6.63\,10^{-2}$\\
\hline
\end{tabular}
\caption{Luminosity functions parameters ($H_0=100\,{\rm km.s^{-1}.Mpc^{-1}}$). For vBG,
we give the SFR during the burst phase $\tau_{\rm b}^{\ast}$ at the LF knee in $M_{\odot}.{\rm Myr}^{-1}$.}
\label{FL}
\end{table}
\begin{figure}
\psfig{figure=sursautsfig1.ps,width=8.4cm,angle=-90}
\caption{$B-I$ versus $z$ for galaxies from Cowie et al. (1996) sample.
The thick lines define the envelope of normal galaxies. The upper one holds
for a 13 Gyr old initial burst without subsequent star formation and the
lower one for a 10 Gyr old galaxy forming stars at a constant rate. The
dashed line separates galaxies at $B-I=1.6$. A significant fraction
of galaxies are observed outside the envelope at $z\sim0.2$, with $B-I<1.6$.}
\label{cowie}
\end{figure}
\begin{figure}
\psfig{figure=sursautsfig2.ps,width=8.4cm,angle=-90}
\caption{Number counts and color distributions predicted with Marzke et al. (1994) LF
(dashed) and with vBG (see text) and observed Heyl et al. (1997) LFs (solid)
compared to the observations
of Armand \& Milliard (1994) (circles and histograms). Color distributions are
normalized to the areas of the histograms.}
\label{UV}
\end{figure}
\begin{figure}
\psfig{figure=sursautsfig3.ps,width=8.4cm,angle=-90}
\caption{Predicted redshift distribution ($22.5<B<24$) compared to the observations
of Cowie et al. (1996). The thick line is for galaxies with $B-I<1.6$ and
the thin line for all galaxies.}
\label{nz}
\end{figure}
\section{Galaxy counts}
Galaxy counts and the amplitude of the projected correlation function by color
in an open Universe ($\Omega_0=0.1$, $\lambda_0=0$, $H_0=65\,{\rm km.s^{-1}.Mpc^{-1}}$), obtained with our
modelling of vBG and the standard scenarios\footnote{A constant SFR and $z_{\rm for}=2$
are assumed for Sd-Im galaxies.} discussed in FRV, are presented in Figs.~\ref{UV} to \ref{Aw}.
For `normal' types, we use the $z=0$ SSWML LF of Heyl et al. (1997), after
deduction of the contribution of vBG. Characteristics of the LF finally adopted
are given in table~\ref{FL}.
Though faint in the blue, vBG play an essential role in bright UV counts thanks to their
blue $UV-B$ colors and give a much better agreement in
Fig.~\ref{UV}, both in number counts and color distributions.
Their contribution to counts at longer wavelengths is, however,
much smaller. They represent less than 10 per cent of the total number of galaxies at $B=22.5-24$
in the Cowie et al. (1996) redshift survey and may thus not be the main explanation
of the excess of faint blue galaxies observed over the model without evolution. High redshift,
intrinsically bright galaxies forming stars at a higher rate in the past are the main reason
as it clearly arises from the $z>1$ tail of normally blue galaxies.
In an open Universe, these galaxies reproduce the faint $B$ and even $U$
counts, assuming a normalization
of the LF fitting the bright counts of Gardner (1996) as discussed in FRV.
\begin{figure}
\psfig{figure=sursautsfig4.ps,width=8.4cm,angle=-90}
\caption{Number counts in $b_{\rm j}$ (left), $U$ (middle) and $F300W$ (right).}
\label{compt}
\end{figure}
The agreement with the Hubble Deep Field (HDF, Williams et al. 1996) in the blue is notably
satisfying. Though a small deficit may be observed in the $F300W$ band (3000 \AA),
the $F300W-F450W$ (3000\AA--4500\AA) color distribution is well reproduced (Fig.~\ref{HDF}). The fraction of vBG at these faint magnitudes
is still small; they are therefore not the main reason for the agreement
with HDF data.
\begin{figure}
\psfig{figure=sursautsfig5.ps,width=8.4cm,angle=-90}
\caption{$F300W-F450W$ color distribution in the HDF for
$(F300W)_{\rm AB}<27.75$ and $(F450W)_{\rm AB}<28.75$ at 80 per cent completeness. Thin line: all galaxies.
Thick line: vBG only.}
\label{HDF}
\end{figure}
From this previous analysis, it is clear that vBG are difficult to constrain in the visible
from broad statistics like number counts and even color distributions.
The angular correlation function might be promising since it is more directly
related
to the redshift distribution. In a $B_{\rm J}=20-23.5$ sample,
Landy, Szalay \& Koo (1996) recently obtained an
unexpected increase of the amplitude $A_w$
of the angular correlation function with galaxy colors
$U-R_{\rm F}<-0.5$, and suggested that this might be due to a population of
vBG located at $z<0.4$.
We compute $A_w$ from our redshift distributions, assuming the classical
power law
for the local spatial correlation function and no evolution of the intrinsic
clustering in proper coordinates.
A slope $\gamma=1.8$ and a single correlation length
$r_0=5.4h^{-1}\,{\rm Mpc}$ (see Peebles (1993)) are adopted for all types.
The increase of $A_w$ in the blue
naturally arises from our computations (Fig.~\ref{Aw}) and is due to vBG.
The interval of magnitude, the faint $M^{\ast}$ and the color criterion conspire to select
galaxies in a small range of redshift.
In spite of the simplicity of our computation of $A_w$,
the trend we obtain is very
satisfying.
Modelling improved by extra physics or type effects might better fit the $A_w$-color
relation, but at
the price of
an increased number of parameters.
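Our $A_w$ computation follows the standard Limber inversion. The sketch below is a simplified, illustrative version (not the code used for Fig.~\ref{Aw}): it keeps only the geometric part of the Limber kernel for a power-law $\xi(r)=(r/r_0)^{-\gamma}$ and sets the clustering-evolution and $(1+z)$ factors to unity (see e.g. Peebles 1993 for the full expression), so only relative amplitudes between redshift distributions are meaningful. The toy Gaussian $N(z)$ shapes are assumptions of this sketch.

```python
import numpy as np

gamma_s = 1.8                  # slope of xi(r) = (r / r0)^(-gamma)
omega0, h = 0.1, 0.65          # open Universe of the paper (lambda0 = 0)
c_over_H0 = 2998.0 / h         # Hubble distance in Mpc

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def transverse_distance(z, nstep=400):
    """Proper-motion distance in an open, matter-only Universe (Mpc)."""
    zz = np.linspace(0.0, z, nstep)
    Ez = np.sqrt(omega0 * (1 + zz)**3 + (1 - omega0) * (1 + zz)**2)
    chi = trapz(1.0 / Ez, zz)
    ok = np.sqrt(1.0 - omega0)          # curvature contribution
    return c_over_H0 / ok * np.sinh(ok * chi)

def limber_amplitude(z_grid, Nz):
    """Relative w(theta) amplitude:
    ~ int N^2 x^(1-g) (dx/dz)^-1 dz / (int N dz)^2, evolution factors set to 1."""
    x = np.array([transverse_distance(z) for z in z_grid])
    dxdz = np.gradient(x, z_grid)
    kernel = Nz**2 * x**(1.0 - gamma_s) / dxdz
    return trapz(kernel, z_grid) / trapz(Nz, z_grid)**2

z = np.linspace(0.01, 1.5, 300)
broad = np.exp(-0.5 * ((z - 0.5) / 0.30)**2)   # toy N(z): full sample
narrow = np.exp(-0.5 * ((z - 0.5) / 0.10)**2)  # toy N(z): color-selected slice
# a population confined to a narrow redshift slice clusters more strongly
# on the sky, as found here for the bluest color bins
print(limber_amplitude(z, narrow) > limber_amplitude(z, broad))  # True
```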
\begin{figure}
\psfig{figure=sursautsfig6.ps,width=8.4cm,angle=-90}
\caption{Amplitude at $1\degr$ of the angular correlation function in $B_{\rm J}=20-23.5$
as a function of $U-R_{\rm F}$ color in bins of 1 magnitude wide. Stars are
from Landy et al. (1996). The solid line is the amplitude predicted
without evolution of the intrinsic clustering in proper coordinates.
}
\label{Aw}
\end{figure}
\section{Conclusion}
We modelled the vBG appearing
notably in UV counts with cycling star formation.
Our modelling agrees
well with the constraints brought by the 2000\AA\ bright counts
(Armand \& Milliard 1994),
the redshift survey of Cowie
et al. (1996) and the angular correlation function of Landy et al. (1996).
The cycling star formation provides very blue colors
in a more physical way than by assuming a population of unevolving galaxies.
The continual formation of new bursting galaxies
might lead to similar predictions in the UV-optical, but would produce a high
number of very faint red remnants. Future deep near-infrared surveys
should provide discriminations between these scenarios.
The hypothesis of cycling star forming galaxies has however some theoretical
support.
The feedback of supernovae on the interstellar medium
may lead to
oscillations of the SFR (Wiklind 1987; Firmani \& Tutukov
1993; Li \& Ikeuchi 1988). Since the probability of propagation
of star formation increases with galaxy mass (Coziol 1996),
according to the theory of stochastic self-propagating star formation
(Gerola, Seiden \& Schulman 1980),
this behaviour should be more frequent in small
galaxies.
A more regular SFR might be attained in
more massive ones.
The nature
of vBG is poorly constrained, but we tentatively identify them
from their typical luminosity and ${\rm H}\alpha$ equivalent width
($\sim 200$ \AA) with H{\sc ii} galaxies (Coziol 1996).
Very blue galaxies, as modelled in this paper, are only a small
fraction of the number of
galaxies predicted at faint magnitudes in the visible and are not
the main reason
for the excess of blue galaxies, although they may cause some confusion
in the interpretation of the faint surveys. In an open Universe, the population
of normal high redshift star forming galaxies,
even with a nearly flat LF, reproduces fairly well the counts down to the faintest magnitudes
observed by the Hubble Space Telescope.
As is now well established, this population is, however, unable to explain the excess
of faint blue galaxies in a flat Universe. Increasing strongly
the number of vBG (for example, with a steeper slope of the LF) may not be the solution
since it would lead to an excess of galaxies at very low redshift which is not observed.
This result depends however on the hypotheses of pure luminosity evolution
and null cosmological constant. A flat Universe might still be possible
if other evolutionary scenarios are favoured by new observations in the
far-infrared and submillimeter.
\section{Introduction}
The {\it Kepler} satellite was launched in March 2009 with the
primary goal to search for transiting exoplanets in the solar
neighbourhood. It delivers single band-pass light curves of
micromagnitude precision and has found hundreds of planet
candidates \citep{Borucki}. The long, uninterrupted, and
high-precision time series photometry taken for a huge number of
stars has also led to the discovery of many new pulsating stars
and is an ideal basis for an in-depth asteroseismic analysis
\citep[see e.g.][]{Gilliland2010}. For this analysis and the
subsequent asteroseismic modelling, precise knowledge of the
fundamental parameters of the stars is essential. These parameters
cannot be determined from the single band-pass photometry
delivered by {\it Kepler} alone, however. Hence, ground-based
spectroscopic follow-up observations have been undertaken to
determine the stellar parameters.
In this paper, we are concerned with $\gamma$~Doradus candidates
found from data assembled with the {\it Kepler} mission. Such
pulsators are named after their prototype, Gamma Doradus, whose
multiperiodic variable nature was first reported by
\citet{Cousins1992}. \citet{Krisciunas1993} discovered a
multiperiodic photometric variability with an amplitude of about
0.1~mag and periods of 2.275 and 1.277~d in 9~Aurigae and reported
on similar behaviour in $\gamma$~Doradus and HD\,96008. Following
these discoveries, \citet{Balona1994} introduced a new class of
variable stars named after the prototype star $\gamma$ Doradus.
$\gamma$~Dor\ stars are assumed to pulsate in high-order, low-degree
non-radial gravity modes driven by the flux blocking mechanism
near the base of their convective zones
\citep{Guzik2000,Dupret2005}. The typical masses of these stars
lie in the range of 1.5--1.8~M$_{\odot}$ \citep{Aerts2010}.
According to \citet{Kaye1999}, $\gamma$~Dor-type stars can be characterised
as follows: (1) spectral type A7--F5 and luminosity class IV,
IV-V, or V, (2) low-amplitude photometric variations with periods
between 0.5 and 3 days as well as spectroscopic variability seen
as both line-profile and low-amplitude radial velocity (RV)
variations.
\begin{table} \tabcolsep 1.9mm\caption{\small Journal of observations. $N$ gives the number of obtained spectra and $V$ the visual magnitude. All spectra were taken in 2010.}
\begin{tabular}{rlcrll}
\hline
\multicolumn{1}{c}{KIC\rule{0pt}{9pt}} & \multicolumn{1}{c}{Designation} & $N$ & \multicolumn{1}{c}{$V$} & \multicolumn{1}{c}{SpT} & \multicolumn{1}{c}{Observed}\\
\hline
01\,571\,152\rule{0pt}{9pt} & BD+36\,3535 & 2 & 9.3 & F0 & May-June\\
02\,166\,218 & BD+37\,3490 & 3 & 9.5 & F0 & May-June\\
03\,217\,554 & BD+38\,3415 & 2 & 9.6 & A5 & September\\
03\,453\,494 & BD+38\,3666 & 3 & 9.6 & A5 & September\\
04\,847\,411 & HD\,\,\,\,225314 & 4 & 9.8 & A7 & June-July\\
05\,088\,308 & HD\,\,\,\,180099 & 1 & 8.7 & F5 & May\\
05\,164\,767 & HD\,\,\,\,175537 & 6 & 7.8 & F0 & April-June\\
05\,446\,068 & BD+40\,3704 & 7 & 9.7 & --- & June-July\\
05\,785\,707 & HD\,\,\,\,181902 & 3 & 9.0 & A & August\\
06\,289\,468 & BD+41\,3389 & 5 & 9.4 & A2 & May\\
06\,509\,175 & BD+41\,3248 & 2 & 10.0 & A2 & August\\
06\,587\,551 & BD+41\,3207 & 3 & 9.8 & A0 & June-July\\
06\,756\,386 & HD\,\,\,\,175939 & 2 & 8.7 & A2 & August\\
07\,748\,238 & HD\,\,\,\,181985 & 2 & 9.5 & A & May-July\\
08\,623\,953 & BD+44\,3134 & 3 & 9.3 & A5 & August\\
08\,738\,244 & HD\,\,\,\,176390 & 2 & 8.1 & A3 & August\\
08\,750\,029 & BD+44\,3113 & 3 & 9.7 & A5 & June\\
09\,413\,057 & BD+45\,2954 & 5 & 9.6 & A2 & May-June\\
09\,764\,965 & HD\,\,\,\,181206 & 2 & 8.8 & A5 & August\\
09\,812\,351 & HD\,\,\,\,174019 & 3 & 7.9 & A0 & April-May\\
10\,119\,517 & TYC\,3544-1245-1 & 3 & 9.9 & --- & August\\
10\,451\,090 & HD\,\,\,\,174789 & 3 & 9.2 & A & August\\
10\,616\,594 & TYC\,3561-971-1 & 3 & 9.8 & --- & August\\
10\,977\,859 & HD\,\,\,\,184333 & 2 & 8.8 & A2 & August\\
11\,498\,538 & HD\,\,\,\,178874 & 2 & 7.3 & F5 & September\\
12\,353\,648 & HD\,\,\,\,234859 & 3 & 9.6 & A2 & August\\
\hline
\end{tabular}
\label{Table:observations}
\end{table}
$\gamma$~Dor\ stars are located close to the red edge of the classical
instability strip in the HR-diagram. The theoretical $\gamma$~Dor\
instability strip overlaps with the red edge of the classical
instability strip, where the $\delta$~Sct\ pulsators are located. While
the low-order p-modes of $\delta$~Sct\ stars are characterized by short
periods ranging from 18 min to 8 h, typical $\gamma$~Dor\ high-order
g-modes have periods of the order of a day
\citep[e.g.,][]{Aerts2010}. Multiperiodicity is found for most of
the $\gamma$~Dor\ class members from ground-based photometry
\citep{Henry2007,Cuypers2009} and spectroscopy
\citep[e.g.,][]{Mathias2004,DeCat2006}. Pulsators in the
overlapping region of the $\delta$~Sct\ and $\gamma$~Dor\ instability strips are
expected to show the two pulsation characteristics, i.e.,
high-order g-modes probing the core and low-order p- and g-modes
probing the outer layers. While the frequency patterns of these
two types of oscillations are in principle easy to distinguish in
the co-rotating frame of reference, the frequencies start to
overlap in an inertial frame of reference, particularly for the
fast rotators. Moreover, the overall beating patterns are complex
and hard to unravel from interrupted ground-based data. Some
hybrid pulsators were already found previously
\citep[e.g.,][]{Rowe2006,King2007}, but the {\it Kepler} data make
it clear that hybrid pulsators turn out to be numerous, both for
AF-type stars \citep[][]{Uytterhoeven2011b,Balona2011a} and for
B-type stars \citep{Balona2011b}.
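To first order in the rotation rate, a mode of azimuthal order $m$ observed in the inertial frame is shifted from its co-rotating frequency by roughly $m$ times the rotation frequency (the exact Ledoux splitting and the sign convention depend on the mode; the numbers below are purely illustrative, not taken from the paper):

```python
# Illustrative numbers (assumed): rotation can shift g-mode frequencies
# into the p-mode range in the inertial frame.
f_rot = 2.0        # rotation frequency in cycles/day (a fast rotator)
f_g_corot = 0.8    # typical gamma Dor g-mode frequency, co-rotating frame

for m in (-2, -1, 0, 1, 2):
    f_inertial = f_g_corot - m * f_rot   # sign convention: m < 0 prograde here
    print(m, f_inertial)
# the |m| = 2 prograde mode appears near 4.8 c/d, inside the typical
# frequency range of delta Sct p-modes
```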
\citet{Gray1999} were the first to report a connection between the
$\lambda$~Bootis ($\lambda$~Boo) type stars and the $\gamma$~Dor\ variables. $\lambda$~Boo\
stars are metal-poor (except for C, N, O, and S) Pop\,I
hydrogen-burning A-type stars \citep[][]{Paunzen,Paunzen2004} showing
significant underabundances of Fe-peak elements (up to --2~dex
compared to the solar composition). They belong to the class of
non-magnetic, chemically peculiar stars. Up to now, only two
further reports \citep{Sadak2006,Rodrig2007} on a possible
connection between the $\lambda$~Boo\ stars and $\gamma$~Dor-type variability
appeared. A recent analysis of a sample of 18 $\gamma$~Dor\, stars
performed by \citet{Bruntt2008} revealed no principal difference
between the abundances of the analysed stars and the chemical
composition of non-pulsating A- and F-type stars.
In this paper, we investigate a sample of 26 of the brighter
stars in the {\it Kepler} field which have been proposed to be
candidates for $\gamma$~Dor\, variables \citep{Uytterhoeven2011a}. We aim
to evaluate fundamental stellar parameters like effective
temperature $T_{\rm eff}$, surface gravity $\log{g}$, projected rotational
velocity $v\sin{i}$, and microturbulent velocity $\xi$\, as well as
the chemical composition of the target stars from newly obtained
high-resolution spectra. Based on the derived parameters, we
present a classification of the sample stars according to the
expected type of variability. The derived chemical composition in
turn allows to check for a possible connection between $\gamma$~Dor-type
variability and $\lambda$~Boo-type abundance patterns.
\section{Observations}\label{Section:Observations}
\begin{table*}
\tabcolsep 1.2mm\center\caption{\small Fundamental stellar
parameters. The values labeled with ``K'' are taken from the KIC
and given for comparison. Metallicity values labeled with ``(Fe)''
refer to the derived Fe-abundance.}
\begin{tabular}{lclrllrllll}
\hline \multicolumn{1}{c}{KIC\rule{0pt}{9pt}} &
$T_{\rm{eff}}^{\rm{K}}(K)$ & $\log{g}$$^{\rm{K}}$ &
\multicolumn{1}{c}{$[M/H]^{\rm{K}}$} & $T_{\rm{eff}}(K)$ &
\multicolumn{1}{c}{$\log{g}$} & \multicolumn{1}{c}{$[M/H]$}
& $v\sin{i}$\,(km\,s$^{-1}$) & $\xi$\,(km\,s$^{-1}$) & \multicolumn{1}{c}{SpT$^{\rm{K}}$} & \multicolumn{1}{c}{SpT}\\
\hline
01\,571\,152$^{1)}$\rule{0pt}{11pt} & 7048 & 3.164 & +0.05 & 7065$^{+79}_{-79}$ & 4.46$^{+0.23}_{-0.38}$ & --0.18$^{+0.10}_{-0.10}$ & 90.1$^{+8.3}_{-12.7}$ & 2.03$^{+0.46}_{-0.49}$ & F0.5 III & F1 V\vspace{1.5mm}\\
02\,166\,218 & 7153 & 3.345 & --0.11 & 7062$^{+56}_{-56}$ &
3.88$^{+0.17}_{-0.21}$ & --0.37$^{+0.07}_{-0.07}$ &
99.7$^{+3.5}_{-3.5}$ &
2.82$^{+0.23}_{-0.26}$ & F0 IV-III & F1 IV\vspace{1.5mm}\\
03\,217\,554$^{1)}$ & 7801 & 3.504 & +0.06 & 7667$^{+66}_{-66}$ & 2.78$^{+0.10}_{-0.09}$ & --1.20(Fe) & 225.5$^{+17.0}_{-17.2}$ & 4.82$^{+0.75}_{-0.74}$ & A6.5 IV-III & A7.5 III-II\vspace{1.5mm}\\
03\,453\,494 & 7806 & 3.843 & --0.33 & 7737$^{+57}_{-57}$ & 3.71$^{+0.21}_{-0.22}$ & --0.95(Fe) & 210.8$^{+14.5}_{-14.5}$ & 3.24$^{+0.66}_{-0.60}$ & A7 IV & A7 IV\vspace{1.5mm}\\
04\,847\,411$^{\rm RV)}$ & 6563 & 4.517 & --1.95 &
7466$^{+50}_{-50}$ & 3.83$^{+0.24}_{-0.24}$ &
--0.52$^{+0.09}_{-0.09}$ & 139.9$^{+6.4}_{-6.2}$ &
2.62$^{+0.33}_{-0.33}$ & F4 V & A8.5 V-IV\vspace{1.5mm}\\
05\,088\,308$^{1)}$ & 6567 & 4.035 & --0.68 & 6708$^{+37}_{-37}$ &
2.67$^{+0.13}_{-0.14}$ & --0.35$^{+0.06}_{-0.07}$ &
40.7$^{+1.2}_{-1.2}$ &
4.01$^{+0.17}_{-0.17}$ & F4 V-IV & F4 III-II\vspace{1.5mm}\\
05\,164\,767$^{\rm RV)}$ & ------ & ------ & ------ & 6933$^{+76}_{-76}$ & 3.59$^{+0.39}_{-0.32}$ & --0.19$^{+0.11}_{-0.12}$ & 163.9$^{+8.7}_{-8.6}$ & 2.62$^{+0.47}_{-0.41}$ & -----------& F1.5 IV\vspace{1.5mm}\\
05\,446\,068$^{1,2)}$ & 5337 & 4.476 & --0.69 & 5763$^{+90}_{-90}$ & 3.37$^{+0.25}_{-0.26}$ & +0.24$^{+0.10}_{-0.10}$ & 7.8$^{+2.1}_{-2.2}$ & 0.0$^{\rm{fixed}}$ & G9.5 V & F9.5 IV\vspace{1.5mm}\\
05\,785\,707 & 8009 & 3.615 & --0.14 & 7965$^{+70}_{-70}$ & 3.37$^{+0.15}_{-0.09}$ & --0.56$^{+0.11}_{-0.11}$ & 171.3$^{+10.4}_{-10.0}$ & 2.84$^{+0.45}_{-0.48}$ & A5.5 IV & A6 IV-III\vspace{1.5mm}\\
06\,289\,468$^{\rm RV)}$ & 8267 & 3.741 & --0.30 &
8107$^{+70}_{-70}$ & 3.30$^{+0.06}_{-0.06}$ &
--0.48$^{+0.11}_{-0.10}$ & 149.7$^{+7.0}_{-7.4}$ &
2.67$^{+0.41}_{-0.39}$ & A4.5 IV & A5 III\vspace{1.5mm}\\
06\,509\,175 & 7299 & 3.522 & --0.21 & 7510$^{+50}_{-50}$ & 3.20$^{+0.25}_{-0.26}$ & --0.45$^{+0.10}_{-0.10}$ & 132.4$^{+7.8}_{-7.2}$ & 2.81$^{+0.43}_{-0.39}$ & A9 IV-III & A8 III\vspace{1.5mm}\\
06\,587\,551 & 8377 & 3.929 & --0.07 & 8826$^{+144}_{-144}$ & 3.76$^{+0.07}_{-0.07}$ & --0.11$^{+0.12}_{-0.12}$ & 139.8$^{+7.9}_{-8.1}$ & 2.80$^{+0.53}_{-0.90}$ & A4 V-IV & A2.5 IV\vspace{1.5mm}\\
06\,756\,386 & 7992 & 3.513 & --0.51 & 7891$^{+62}_{-62}$ & 3.19$^{+0.09}_{-0.08}$ & --0.59$^{+0.10}_{-0.10}$ & 192.8$^{+11.6}_{-11.8}$ & 2.99$^{+0.52}_{-0.49}$ & A6 IV-III & A6 III\vspace{1.5mm}\\
07\,748\,238 & 7228 & 3.470 & --0.22 & 7264$^{+58}_{-58}$ & 3.96$^{+0.20}_{-0.25}$ & --0.37$^{+0.08}_{-0.08}$ & 120.8$^{+4.8}_{-4.9}$ & 3.53$^{+0.33}_{-0.38}$ & A9.5 IV-III & A9.5 V-IV\vspace{1.5mm}\\
08\,623\,953 & 7725 & 3.738 & --0.11 & 7726$^{+50}_{-50}$ & 3.43$^{+0.18}_{-0.17}$ & --0.35$^{+0.08}_{-0.08}$ & 84.8$^{+3.2}_{-3.3}$ & 2.76$^{+0.29}_{-0.29}$ & A7 IV & A7 IV-III\vspace{1.5mm}\\
08\,738\,244 & 8167 & 4.152 & +0.41 & 8154$^{+96}_{-96}$ & 3.24$^{+0.07}_{-0.08}$ & --0.27$^{+0.12}_{-0.12}$ & 133.9$^{+7.1}_{-7.1}$ & 2.70$^{+0.42}_{-0.67}$ & A5 V-IV & A5 III\vspace{1.5mm}\\
08\,750\,029 & ------ & ------ & ------ & 7341$^{+59}_{-59}$ &
3.70$^{+0.27}_{-0.23}$ & --0.56$^{+0.10}_{-0.10}$ &
166.1$^{+8.8}_{-8.4}$ &
2.95$^{+0.46}_{-0.41}$& -----------& A9 IV\vspace{1.5mm}\\
09\,413\,057$^{\rm RV)}$ & 8465 & 3.868 & --0.06 &
8588$^{+97}_{-97}$ & 3.59$^{+0.05}_{-0.05}$ &
--0.56$^{+0.15}_{-0.12}$ & 171.0$^{+10.6}_{-10.4}$ &
2.83$^{+0.57}_{-0.60}$ & A4 V-IV & A3 IV\vspace{1.5mm}\\
09\,764\,965 & 7455 & 4.085 & --0.18 & 7478$^{+41}_{-41}$ & 3.74$^{+0.17}_{-0.18}$ & --0.27$^{+0.06}_{-0.06}$ & 85.1$^{+2.5}_{-2.5}$ & 3.55$^{+0.24}_{-0.24}$ & A8.5 V-IV & A8 IV\vspace{1.5mm}\\
09\,812\,351 & 7794 & 3.470 & --0.29 & 7833$^{+62}_{-62}$ & 3.20$^{+0.15}_{-0.11}$ & --0.90(Fe) & 55.6$^{+3.5}_{-3.2}$ & 2.18$^{+0.40}_{-0.32}$ & A6.5 IV-III & A6 III\vspace{1.5mm}\\
10\,119\,517$^{\rm RV)}$ & 6225 & 4.375 & --0.45 & 6438$^{+69}_{-69}$ & 4.21$^{+0.22}_{-0.13}$ & --0.24$^{+0.07}_{-0.07}$ & 77.9$^{+3.4}_{-3.3}$ & 1.26$^{+0.30}_{-0.27}$ & F8 V & F5 V\vspace{1.5mm}\\
10\,451\,090 & 7577 & 4.134 & --0.05 & 7633$^{+50}_{-50}$ & 3.58$^{+0.15}_{-0.15}$ & +0.04$^{+0.06}_{-0.06}$ & 44.2$^{+1.5}_{-1.5}$ & 3.00$^{+0.16}_{-0.16}$ & A8 V-IV & A7.5 IV-III\vspace{1.5mm}\\
10\,616\,594$^{2)}$ & 5161 & 3.762 & --0.75 & 5327$^{+88}_{-88}$ & 2.81$^{+0.25}_{-0.22}$ & +0.16$^{+0.13}_{-0.20}$ & 7.24$^{+0.9}_{-0.8}$ & 0.63$^{+0.20}_{-0.25}$ & G8.5 V-IV & G3.5 III \vspace{1.5mm}\\
10\,977\,859 & 8052 & 3.935 & +0.15 & 8195$^{+57}_{-57}$ & 3.60$^{+0.07}_{-0.06}$ & --0.11$^{+0.07}_{-0.06}$ & 63.0$^{+3.0}_{-2.9}$ & 3.08$^{+0.28}_{-0.27}$ & A5.5 V-IV & A5 IV\vspace{1.5mm}\\
11\,498\,538$^{2)}$ & 6287 & 4.036 & --0.29 & 6428$^{+57}_{-57}$ & 2.90$^{+0.14}_{-0.15}$ & --0.15$^{+0.07}_{-0.07}$ & 39.7$^{+1.2}_{-1.1}$ & 2.54$^{+0.19}_{-0.16}$ & F7 V-IV & F5.5 III\vspace{1.5mm}\\
12\,353\,648 & 7414 & 3.473 & --0.38 & 7163$^{+51}_{-51}$ & 3.49$^{+0.23}_{-0.29}$ & --1.05(Fe) & 192.0$^{+10.6}_{-10.6}$ & 2.32$^{+0.42}_{-0.43}$ & A8.5 IV-III & F0 IV-III\vspace{1.5mm}\\
\hline \multicolumn{11}{l}{$^{1)}$ suspected SB2 star; $^{2)}$ no
reliable fit obtained; $^{\rm RV)}$ differences in the measured
RVs are observed\rule{0pt}{11pt}}
\end{tabular}
\label{Table:FundamentalParameters}
\end{table*}
\begin{table*}
\tabcolsep 0.9mm\center\caption{\small Derived metallicity and
elemental abundances relative to solar ones in dex. Asterisks
indicate elements with error estimates of $\pm$0.10~dex
($\pm$0.20~dex in all other cases). Metallicity values labeled
with ``(Fe)'' refer to the derived Fe-abundance.} \tiny
\begin{tabular}{cclllllllllllllllll}
\hline
KIC\rule{0pt}{9pt} & $[M/H]$ & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{O} & \multicolumn{1}{c}{Mg} & \multicolumn{1}{c}{Si} & \multicolumn{1}{c}{Ca} & \multicolumn{1}{c}{Fe} & \multicolumn{1}{c}{Na} & \multicolumn{1}{c}{Sc} & \multicolumn{1}{c}{Ti} & \multicolumn{1}{c}{Cr} & \multicolumn{1}{c}{Mn} & \multicolumn{1}{c}{Y} & \multicolumn{1}{c}{Ba} & \multicolumn{1}{c}{V} & \multicolumn{1}{c}{Co} & \multicolumn{1}{c}{Ni} & \multicolumn{1}{c}{Zr}\\
& & --3.65 & --3.38 & --4.51 & --4.53 & --5.73 & --4.59 & --5.87 & --8.99 & --7.14 & --6.40 & --6.65 & --9.83 & --9.87 & --8.04 & --7.12 & --5.81 & --9.45\\
\hline
01\,571\,152\rule{0pt}{11pt} & --0.18$^{+0.10}_{-0.10}$ & +0.05 & --- & --0.10$^*$ & --0.10 & --0.25$^*$ & --0.15$^*$ & +0.00 & +0.00 & --0.25$^*$ & --0.15$^*$ & --0.15 & --0.20 & +0.80 & --- & --- & --0.20$^*$ & ---\vspace{2.0mm}\\
02\,166\,218 & --0.37$^{+0.07}_{-0.07}$ & --0.10 & --- & --0.15$^*$ & --0.20 & --0.20$^*$ & --0.45$^*$ & --0.10 & --0.35 & --0.40$^*$ & --0.40$^*$ & --0.40 & --0.35 & +0.60 & --- & --- & --0.45$^*$ & ---\vspace{2.0mm}\\
03\,217\,554 & --1.20(Fe) & --1.10 & --- & --0.35$^*$ & --0.10 & --1.35 & --1.20$^*$ & --- & --1.15 & --1.05 & --1.70 & --- & --- & --1.10 & --- & --- & --0.80 & ---\vspace{2.0mm}\\
03\,453\,494 & --0.95(Fe) & --0.60 & --- & --0.10$^*$ & --0.10 & --1.25 & --0.95$^*$ & --- & --0.75 & --0.95 & --1.00 & --- & --- & --1.05 & --- & --- & --0.70 & ---\vspace{2.0mm}\\
04\,847\,411 & --0.52$^{+0.09}_{-0.09}$ & --0.20 & --- & --0.20$^*$ & --0.10 & --0.70$^*$ & --0.55$^*$ & +0.05 & --0.45 & --0.75$^*$ & --0.55$^*$ & --0.40 & --- & +0.25 & --- & --- & --0.50$^*$ & ---\vspace{2.0mm}\\
05\,088\,308 & --0.35$^{+0.06}_{-0.07}$ & --0.30 & --- & --0.30$^*$ & --0.50 & --0.25$^*$ & --0.50$^*$ & --0.30 & --0.35 & --0.55$^*$ & --0.45$^*$ & --0.45 & --0.10 & +0.40 & --- & --- & --0.30$^*$ & ---\vspace{2.0mm}\\
05\,164\,767 & --0.19$^{+0.11}_{-0.12}$ & +0.15 & --- & +0.00$^*$ & +0.15 & --0.20$^*$ & --0.35$^*$ & +0.50 & --0.05 & --0.65$^*$ & --0.10$^*$ & --0.05 & --0.60 & +0.60 & --- & --- & --0.25$^*$ & ---\vspace{2.0mm}\\
05\,446\,068 & +0.24$^{+0.10}_{-0.10}$ & --0.30 & --- & +0.10$^*$ & --0.25$^*$ & --0.05$^*$ & +0.15$^*$ & +0.10 & +0.20$^*$ & +0.30$^*$ & +0.25$^*$ & +0.35$^*$ & +0.25$^*$ & +0.50 & +0.55$^*$ & +0.35$^*$ & +0.10$^*$ & +0.10\vspace{2.0mm}\\
05\,785\,707 & --0.56$^{+0.11}_{-0.11}$ & --0.20 & --- & --0.25$^*$ & +0.15 & --0.50 & --0.50$^*$ & --- & --0.50 & --0.85 & --0.70 & --- & --- & --0.40 & --- & --- & --0.45 & ---\vspace{2.0mm}\\
06\,289\,468 & --0.48$^{+0.11}_{-0.10}$ & --0.30 & --- & +0.05$^*$ & --0.05 & --0.75 & --0.50$^*$ & --- & --0.55 & --0.65 & --0.60 & --- & --- & --0.50 & --- & --- & --0.55 & ---\vspace{2.0mm}\\
06\,509\,175 & --0.45$^{+0.10}_{-0.10}$ & --0.15 & --- & --0.05$^*$ & +0.05 & --0.35$^*$ & --0.55$^*$ & +0.10 & --0.45 & --0.65$^*$ & --0.55$^*$ & --0.20 & --- & --0.60 & --- & --- & --0.45$^*$ & ---\vspace{2.0mm}\\
06\,587\,551 & --0.11$^{+0.12}_{-0.12}$ & +0.00 & +0.00 & +0.20$^*$ & +0.30 & --0.20 & --0.10$^*$ & --- & --0.05 & --0.35 & --0.10 & --- & --- & --0.40 & --- & --- & --0.15 & ---\vspace{2.0mm}\\
06\,756\,386 & --0.59$^{+0.12}_{-0.12}$ & --0.20 & --- & +0.00$^*$ & +0.20 & --0.50 & --0.65$^*$ & --- & --0.75 & --0.70 & --0.65 & --- & --- & --1.30 & --- & --- & --0.60 & ---\vspace{2.0mm}\\
07\,748\,238 & --0.37$^{+0.10}_{-0.10}$ & --0.15 & --- & --0.10$^*$ & +0.00 & --0.25$^*$ & --0.45$^*$ & +0.10 & --0.35 & --0.45$^*$ & --0.45$^*$ & --0.35 & --- & +0.50 & --- & --- & --0.40$^*$ & ---\vspace{2.0mm}\\
08\,623\,953 & --0.35$^{+0.10}_{-0.10}$ & --0.10 & --- & --0.10$^*$ & +0.00 & --0.25 & --0.40$^*$ & +0.10 & --0.35 & --0.55 & --0.45 & --0.30 & --- & --0.35 & --- & --- & --0.30 & ---\vspace{2.0mm}\\
08\,738\,244 & --0.27$^{+0.12}_{-0.12}$ & --0.25 & --- & +0.20$^*$ & +0.25 & --0.40 & --0.35$^*$ & --- & --0.50 & --0.60 & --0.20 & --- & --- & --0.55 & --- & --- & --0.35 & ---\vspace{2.0mm}\\
08\,750\,029 & --0.56$^{+0.10}_{-0.10}$ & --0.20 & --- & --0.35$^*$ & +0.00 & --0.60$^*$ & --0.60$^*$ & --0.05 & --0.60 & --0.90$^*$ & --0.50$^*$ & --0.70 & --- & +0.35 & --- & --- & --0.65$^*$ & ---\vspace{2.0mm}\\
09\,413\,057 & --0.56$^{+0.15}_{-0.12}$ & --0.45 & --- & --0.05$^*$ & +0.10 & --0.70 & --0.60$^*$ & --- & --0.85 & --0.75 & --0.65 & --- & --- & --0.30 & --- & --- & --0.60 & ---\vspace{2.0mm}\\
09\,764\,965 & --0.27$^{+0.08}_{-0.08}$ & --0.05 & --- & --0.10$^*$ & +0.00 & --0.15$^*$ & --0.30$^*$ & +0.05 & --0.25 & --0.45$^*$ & --0.30$^*$ & --0.30 & --- & --0.15 & --- & --- & --0.30$^*$ & ---\vspace{2.0mm}\\
09\,812\,351 & --0.90(Fe) & +0.00 & --- & --0.50$^*$ & --0.40 & --0.85 & --0.90$^*$ & --- & --0.85 & --1.00 & --0.85 & --- & --- & --1.20 & --- & --- & --0.65 & ---\vspace{2.0mm}\\
10\,119\,517 & --0.24$^{+0.07}_{-0.07}$ & +0.05 & --- & --0.20$^*$ & --0.15 & --0.05$^*$ & --0.20$^*$ & +0.10 & --0.35 & --0.40$^*$ & --0.20$^*$ & --0.15 & --0.40 & +0.40 & --- & --- & --0.25$^*$ & ---\vspace{2.0mm}\\
10\,451\,090 & +0.04$^{+0.06}_{-0.06}$ & --0.10 & --- & +0.10$^*$ & +0.00 & --0.25 & +0.00$^*$ & +0.25 & --0.15 & --0.15 & +0.15 & +0.00 & --- & +0.75 & --- & --- & +0.20 & ---\vspace{2.0mm}\\
10\,616\,594 & +0.16$^{+0.13}_{-0.20}$ & --0.30 & --- & +0.00$^*$ & --0.35$^*$ & +0.05$^*$ & +0.05$^*$ & +0.10 & +0.10$^*$ & +0.10$^*$ & +0.10$^*$ & +0.20$^*$ & +0.20$^*$ & +0.45 & +0.30$^*$ & +0.05$^*$ & +0.00$^*$ & --0.20\vspace{2.0mm}\\
10\,977\,859 & --0.11$^{+0.07}_{-0.06}$ & --0.05 & --- & +0.20$^*$ & +0.10 & --0.05 & --0.10$^*$ & --- & --0.15 & --0.35 & --0.10 & --- & --- & --0.25 & --- & --- & --0.05 & ---\vspace{2.0mm}\\
11\,498\,538 & --0.15$^{+0.07}_{-0.07}$ & --0.15 & --- & +0.20$^*$ & --0.15 & --0.15$^*$ & --0.20$^*$ & --0.15 & --0.20 & --0.30$^*$ & --0.10$^*$ & --0.15 & --0.40 & +0.65 & --- & --- & --0.20$^*$ & ---\vspace{2.0mm}\\
12\,353\,648 & --1.05(Fe) & --0.25 & --- & --0.55$^*$ & --0.25 & --0.90$^*$ & --1.05$^*$ & --- & --1.35 & --2.05$^*$ & --1.25$^*$ & --- & --- & --0.75 & --- & --- & --1.05$^*$ & ---\vspace{1.5mm}\\
\hline
\end{tabular}
\label{Table:IndividualAbundances}
\end{table*}
We base our analysis on high-resolution, high signal-to-noise
ratio (S/N) spectra taken with the Coud\'{e}-Echelle spectrograph
attached to the 2-m telescope of the Th\"{u}ringer
Landessternwarte Tautenburg, Germany. The spectra have a
resolution of 32\,000 and cover the wavelength range from 4720 to
7400~\AA. Table~\ref{Table:observations} presents the journal of
observations and gives the Kepler Input Catalog (KIC) number, an
alternative designation, the number of obtained spectra, the
visual magnitude, the spectral type as indicated in the SIMBAD
database, and the period of observations in 2010. The number of
acquired spectra differs from star to star since we aimed
to reach an S/N of about 100 for the mean, averaged spectrum of
each object.
The data have been reduced using standard ESO-MIDAS packages. The
data reduction included bias and stray-light subtraction,
cosmic-ray filtering, flat fielding by a halogen lamp, wavelength
calibration by a ThAr lamp, and normalisation to the local
continuum. All spectra were additionally corrected in wavelength
for individual instrumental shifts using a large number of
telluric O$_2$ lines. The cross-correlation technique was used to
estimate the RVs from the individual spectra so that the single
spectra could finally be shifted and co-added to build the mean,
high-S/N averaged spectrum of each star. The RVs computed in this
step have also been used to check for possible variations due to
binarity, high-amplitude stellar oscillations, and rotational
modulation.
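As an aside, the cross-correlation step can be sketched in a few lines. The following is a minimal Python illustration (not the actual ESO-MIDAS procedure; all names and the toy data are ours) of the key idea that, on a grid uniform in $\ln\lambda$, a Doppler shift becomes a uniform pixel shift:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_cross_correlation(dlnlambda, template, observed):
    """Relative RV from the cross-correlation of an observed spectrum
    with a template, both resampled onto a common grid uniform in
    ln(lambda); a Doppler shift is then a uniform pixel shift."""
    t = template - template.mean()
    o = observed - observed.mean()
    ccf = np.correlate(o, t, mode="full")
    lag = int(np.argmax(ccf)) - (len(t) - 1)   # shift of `observed` in pixels
    return lag * dlnlambda * C_KMS             # v = c * d(ln lambda) per pixel

# toy check: one absorption line, shifted by exactly 3 pixels
x = np.linspace(-1.0, 1.0, 401)
template = 1.0 - 0.5 * np.exp(-x**2 / 0.001)
observed = np.roll(template, 3)
rv = rv_cross_correlation(1e-5, template, observed)   # about +9 km/s
```

Real pipelines locate the CCF maximum with sub-pixel accuracy, e.g.\ by fitting a parabola or Gaussian to the peak; the integer-pixel version above is the bare minimum.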
\section{Spectral analysis}
\subsection{Method}\label{Section:Methods}
The ``classical'' method of spectrum analysis by means of
equivalent width measurements and subsequent fitting of the
ionization equilibria of different elements requires the star to
rotate slowly, so that a sufficient number of clean, un-blended
spectral lines can be identified and measured in the stellar
spectrum. In the case of rapidly rotating stars, this method fails
due to the high percentage of blended lines. The method of
spectrum synthesis, on the other hand, is based on the comparison
between observed and theoretical spectra in a certain wavelength
range. Its advantage is that the effect of line blending can be
taken into account when computing the synthetic spectra and thus
no restrictions with respect to the rotational velocity occur.
Because of the large number of fast rotators in our sample, we use
the second method, comparing the observed spectra with a large
number of synthetic spectra computed on a grid in the stellar
parameters. Compared to the much faster approach of solving the
so-called inverse problem, i.e.\ determining the physical
parameter values directly from the observations using some
non-linear optimization method, the grid search has the advantage
that it will always find the globally best solution if the
grid is dense enough. Its principal disadvantage, the much longer
computing time, plays no crucial role in our analysis since the
synthetic spectra are calculated using a large library of
pre-computed model atmospheres
(Table\,\ref{Table:AtmosphereModels}) and the analysis runs very
fast on up to 300 processor cores of a PC cluster.
Our code GSSP (Grid Search in Stellar Parameters) finds the
optimum values of effective temperature $T_{\rm eff}$, surface gravity
$\log{g}$, microturbulent velocity $\xi$, metallicity $[M/H]$, and
projected rotational velocity $v\sin{i}$\ from the minimum in $\chi^2$
obtained from a comparison of the observed spectrum with the
synthetic ones computed from all possible combinations of the
aforementioned parameters. The errors of measurement (1$\sigma$
confidence level) are calculated from the $\chi^2$ statistics. A
detailed description of the method is given in \citet{Lehmann2011}
(Paper~I).
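Schematically, the grid search reduces to an exhaustive $\chi^2$ scan. The Python sketch below is our own illustrative stand-in for GSSP, with a hypothetical `synth` callback in place of a real spectrum-synthesis code and without the $\chi^2$-based confidence intervals:

```python
import itertools
import numpy as np

def chi_square(observed, model, sigma):
    """Chi-square of one synthetic spectrum against the observation."""
    return float(np.sum(((observed - model) / sigma) ** 2))

def grid_search(observed, sigma, grid, synth):
    """Exhaustive scan: evaluate chi^2 for every combination of the grid
    parameters and keep the global minimum.  `grid` maps parameter names
    to lists of trial values; `synth(params)` must return the synthetic
    spectrum for one combination (here a toy function; in reality a
    library of pre-computed synthetic spectra)."""
    names = list(grid)
    best, best_chi2 = None, float("inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        chi2 = chi_square(observed, synth(params), sigma)
        if chi2 < best_chi2:
            best, best_chi2 = params, chi2
    return best, best_chi2

# toy "spectrum": depends linearly on two of the five GSSP parameters
def toy_synth(p):
    x = np.linspace(0.0, 1.0, 50)
    return p["teff"] / 7000.0 + p["vsini"] / 100.0 * x

truth = toy_synth({"teff": 7500, "vsini": 120})
grid = {"teff": range(7000, 8001, 100), "vsini": range(80, 161, 20)}
best, chi2 = grid_search(truth, 0.01, grid, toy_synth)
# best recovers {"teff": 7500, "vsini": 120} with chi2 = 0
```

The exhaustive scan is embarrassingly parallel, which is why it distributes so easily over the processor cores mentioned above.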
For the calculation of synthetic spectra, we use the LTE-based
code SynthV \citep{Tsymbal1996} which allows the computation of
spectra based on individual elemental abundances. The code uses
pre-calculated atmosphere models which have been computed with the
most recent, parallelised version of the LLmodels program
\citep{Shulyak2004}. Both programs make use of the VALD database
\citep{Kupka2000} for a pre-selection of atomic spectral lines.
The main limitation of the LLmodels code is that the models are
well suited for stars of early and intermediate spectral type but
not for very hot or very cool stars, where non-LTE effects or
absorption in molecular bands, respectively, may become relevant.
\begin{table} \tabcolsep 2.3mm\caption{\small $E(B-V)$ determined from the Na\,D lines,
$T_{\rm eff}$\ obtained from SED-fitting, and the reddening-corrected $T_{\rm eff}$.}
\begin{tabular}{rccc}
\hline
\multicolumn{1}{c}{KIC\rule{0pt}{9pt}} & $E(B-V)$ & $T_{\rm eff}$ & $T_{\rm eff}$\ (dered)\\
\hline
01\,571\,152\rule{0pt}{9pt} & 0.03 & 6820$\pm$140 & 6980$\pm$190\\
02\,166\,218 & 0.01 & 7050$\pm$140 & 7110$\pm$200\\
03\,217\,554 & 0.04 & 7650$\pm$160 & 7910$\pm$250\\
03\,453\,494 & 0.03 & 7650$\pm$150 & 7840$\pm$240\\
04\,847\,411 & 0.05 & 7290$\pm$150 & 7600$\pm$220\\
05\,088\,308 & 0.02 & 6730$\pm$150 & 6840$\pm$190\\
05\,164\,767 & 0.06 & 6770$\pm$130 & 7100$\pm$190\\
05\,446\,068 & 0.04 & 4950$\pm$110 & 5060$\pm$130\\
05\,785\,707 & 0.03 & 7840$\pm$160 & 8060$\pm$260\\
06\,289\,468 & 0.03 & 8130$\pm$170 & 8300$\pm$280\\
06\,509\,175 & 0.08 & 7080$\pm$140 & 7560$\pm$220\\
06\,587\,551 & 0.05 & 8280$\pm$170 & 8760$\pm$350\\
06\,756\,386 & 0.01 & 7860$\pm$150 & 7930$\pm$240\\
07\,748\,238 & 0.05 & 7150$\pm$150 & 7450$\pm$220\\
08\,623\,953 & 0.04 & 7720$\pm$150 & 7990$\pm$250\\
08\,738\,244 & 0.01 & 8110$\pm$160 & 8190$\pm$240\\
08\,750\,029 & 0.05 & 7250$\pm$140 & 7560$\pm$220\\
09\,413\,057 & 0.05 & 8270$\pm$170 & 8720$\pm$340\\
09\,764\,965 & 0.01 & 7450$\pm$170 & 7510$\pm$230\\
09\,812\,351 & 0.01 & 7750$\pm$150 & 7830$\pm$230\\
10\,119\,517 & 0.00 & 6380$\pm$160 & 6380$\pm$160\\
10\,451\,090 & 0.01 & 7700$\pm$160 & 7750$\pm$240\\
10\,616\,594 & 0.00 & 5240$\pm$140 & 5240$\pm$140\\
10\,977\,859 & 0.01 & 8130$\pm$170 & 8200$\pm$260\\
11\,498\,538 & 0.03 & 6490$\pm$160 & 6530$\pm$190\\
12\,353\,648 & 0.02 & 7350$\pm$180 & 7470$\pm$230\\
\hline
\end{tabular}
\label{Table:SED}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.85]{Temp_graph_new.eps}
\caption{{\small Comparison of $T_{\rm eff}$\ derived spectroscopically
(open circles) with the KIC (filled boxes) and photometric values
(open triangles and stars). See text for detailed description.}}
\label{Temperature_comp}
\end{figure}
\begin{table} \tabcolsep 0.0mm\caption{\small
Stellar atmosphere models computed with the LLmodels code for
$\xi$\,=\,2\,km\,s$^{-1}$.}
\begin{tabular}{lclclc}
\hline\hline
\multicolumn{6}{c}{Parameter, step width\rule{0pt}{9pt}}\\
\hline \multicolumn{1}{c}{$[M/H]$\rule{0pt}{9pt}} & $\Delta[M/H]$
&\multicolumn{1}{c}{$T_{\rm eff}$(K)}
& $\Delta$$T_{\rm eff}$(K) & \multicolumn{1}{c}{$\log{g}$} & $\Delta$$\log{g}$\\
\hline --0.8\,--\,+0.8 & 0.1 & \begin{tabular}{l}
~~4\,500\,--\,10\,000\rule{0pt}{9pt}\\ 10\,000\,--\,22\,000\\
\end{tabular}
& \begin{tabular}{c}
100\rule{0pt}{9pt}\\ 250\\
\end{tabular}
& \begin{tabular}{c}
2.5\,--\,5.0\rule{0pt}{9pt}\\ 3.0\,--\,5.0\\
\end{tabular}
& 0.1\\
\hline
\multicolumn{5}{l}{Total number of models:\rule{0pt}{9pt}} & \textbf{41\,888}\\
\hline
\end{tabular}
\label{Table:AtmosphereModels}
\end{table}
The method has been tested on spectra of Vega and successfully
applied to {\it Kepler} $\beta$\,Cep and SPB candidate stars
(Paper~I). In Paper~I, the chemical composition of the stars has
been determined by means of an iterative procedure involving (1)
the estimation of $T_{\rm eff}$, $\log{g}$, $v\sin{i}$, $\xi$, and $[M/H]$, (2) the
determination of the individual abundances, element-by-element, by
fixing all parameters derived in the previous step and taking the
abundance table corresponding to the derived metallicity as a
first guess, and (3) the re-estimation of $T_{\rm eff}$, $\log{g}$, $v\sin{i}$, and
$\xi$ based on the chemical composition evaluated in the second
step. In the latest version of the GSSP code, we still iterate the
individual abundances element-by-element after the first step, but
together with $T_{\rm eff}$, $\log{g}$, $v\sin{i}$, and $\xi$. This allows us to
avoid the additional third step. To take the possibility of an
incorrect normalisation into account, and to
minimize its influence on the results, we also introduced an
additional free parameter that allows the adjustment of the
observed continuum relative to the synthetic one during the
fitting procedure.
\subsection{Results}\label{Section:Results}
Table~\ref{Table:FundamentalParameters} summarises the results of
spectrum analysis for all 26 stars of our sample. The first four
columns of the table give the KIC number of the
star, the effective temperature $T_{\rm{eff}}^{\rm{K}}$, surface
gravity $\log{g}^{\rm{K}}$, and metallicity $[M/H]^{\rm{K}}$ as
listed in the KIC. The following five columns list the stellar
parameters derived from our spectra, while the last two columns
give the spectral types as estimated from $T_{\rm eff}$\ and $\log{g}$\
given in the KIC and determined in this work, respectively. In
both cases, the spectral types and the luminosity classes have
been derived using an interpolation in the tables by
\citet{Schmidt-Kaler1982}. We achieve a mean accuracy of about
$1\%$ for $T_{\rm eff}$, about $\pm$ 0.16~dex for $\log{g}$, and about $5\%$ for
$v\sin{i}$.
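For illustration, the spectral-type interpolation can be sketched as follows; the $T_{\rm eff}$\ anchors below are rough placeholder values of our own, not the \citet{Schmidt-Kaler1982} calibration itself, and luminosity classes are ignored:

```python
import numpy as np

# illustrative Teff anchors for spectral types (rough values for
# demonstration only; the paper interpolates the Schmidt-Kaler 1982 tables)
SPT = ["A0", "A5", "F0", "F5", "G0", "G5"]
TEFF = np.array([9700.0, 8100.0, 7200.0, 6500.0, 6000.0, 5600.0])

def spectral_type(teff):
    """Assign a spectral type by interpolating the Teff calibration:
    find the bracketing anchors and compute the fractional subtype."""
    idx = np.searchsorted(-TEFF, -teff)        # TEFF is descending
    idx = int(np.clip(idx, 1, len(TEFF) - 1))
    frac = (TEFF[idx - 1] - teff) / (TEFF[idx - 1] - TEFF[idx])
    letter = SPT[idx - 1][0]
    sub = int(SPT[idx - 1][1]) + 5.0 * frac    # anchors are 5 subtypes apart
    if sub >= 10.0:                            # crossed a letter boundary
        letter, sub = SPT[idx][0], sub - 10.0
    return f"{letter}{sub:.1f}".replace(".0", "")

st = spectral_type(7065.0)   # a Teff like KIC 01571152's lands near F1
```
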
Table~\ref{Table:IndividualAbundances} lists the elemental
abundances derived for each target star. The metallicity given in
the second column of the table refers to the initially derived
chemical composition and was used as the initial guess for the
determination of the individual abundances. All abundances are
given relative to solar values, i.e. negative/positive values
refer to an under-/overabundance of the corresponding element
compared to the solar composition. We assume the chemical
composition of the Sun given by \citet{Grevesse2007} and these
values are listed in the header of
Table~\ref{Table:IndividualAbundances} below the element
designations. For some of the stars we have reached the
metallicity limit in our grid of atmosphere models
(KIC\,03\,217\,554, 03\,453\,494, 09\,812\,351, and 12\,353\,648).
In these cases, we give the derived Fe-abundance instead of the
metallicity. In all other cases, the derived Fe abundance matches
the derived metallicity within the measurement error. The
abundance errors are estimated to be about $\pm$0.1~dex for the
elements showing a sufficient number of strong spectral lines in
the considered region and about $\pm$0.2~dex for the elements
represented in the spectrum by only a few rather weak spectral
lines.
\subsection{Special characteristics of some target stars}
\begin{figure}
\includegraphics[scale=0.8,clip=]{K04847411_Hbeta_comp.eps}
\includegraphics[scale=0.8,clip=]{K08738244_part_comp.eps}
\includegraphics[scale=0.8,clip=]{K09812351_part_comp.eps}
\includegraphics[scale=0.8,clip=]{K12353648_Hbeta_comp.eps}
\caption{{\small Fit of observed spectra (solid, black line) by
synthetic spectra calculated from our optimised parameters
(dashed, red line) and from the values given in the KIC (dotted,
green line), showing either the H$_\beta$ line or a metal-line region.
A colour plot is provided in the online version.}}
\label{CompProfiles}
\end{figure}
The derived $T_{\rm eff}$\ and $\log{g}$\ are discussed in
Sect.\,\ref{Section:KIC}. Here, we focus on metallicity and
abundance anomalies based on
Table~\ref{Table:IndividualAbundances}, and on possible binarity
of the target stars.
{\it Stars of lower metallicity.} Only three stars in our sample
of 26 show metallicities slightly higher than solar; all other
stars have lower metallicity. Fifteen stars have metallicities
more than 0.3~dex below solar. The four stars of lowest
metallicity (KIC\,03217554, 03453494, 09812351, and 12353648) show
underabundances of the Fe-peak elements and Ca of about 1~dex but
much less for Mg and Si. Two of them (KIC\,09\,812\,351 and
12\,353\,648) have C abundances comparable to the solar value,
resembling the characteristics of $\lambda$~Boo\ stars.
{\it Abundance anomalies.} For eleven of the analysed stars the Ba
abundance is found to deviate by more than 0.4~dex from the
derived metallicity. In only one of them is Ba underabundant; all
others are Ba-enhanced. Since the Ba abundance was determined from
only one resonance line, at $\lambda$\,4934\,\AA, which is known to
be strong and sensitive to non-LTE effects, this result must be
interpreted with caution.
\begin{table} \tabcolsep 2.0mm\caption{\small The stars for which remarkable differences in the individual RVs are observed.}
\begin{tabular}{rcrcc}
\hline
\multicolumn{1}{c}{KIC\rule{0pt}{9pt}} & BJD$-$2\,455\,000 & \multicolumn{1}{c}{RV} & dRV & max. diff\\
& & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (km\,s$^{-1}$)\\
\hline
04\,847\,411\rule{0pt}{9pt} & 362.552194 & 2.189 & 0.058 &\\
& 365.476249 & 2.136 & 0.137 &\\
& 381.512639 & --0.968 & 0.204 &\\
& 388.449139 & --3.357 & 0.267 & 5.5\\
05\,164\,767 & 316.464427 & 1.493 & 0.129 &\\
& 316.486187 & 0.798 & 0.152 &\\
& 316.509927 & --1.886 & 0.083 &\\
& 318.457963 & --0.599 & 0.109 &\\
& 318.479550 & 0.194 & 0.123 & 3.4\\
06\,289\,468 & 338.560657 & --0.248 & 0.285 &\\
& 345.482244 & --2.466 & 0.512 &\\
& 345.510347 & --0.107 & 0.368 &\\
& 345.532246 & 0.507 & 0.420 &\\
& 345.554214 & 2.314 & 0.394 & 4.8\\
09\,413\,057 & 340.573022 & --5.208 & 0.104 &\\
& 351.507244 & --6.058 & 0.082 &\\
& 363.430141 & 0.895 & 0.540 &\\
& 365.404034 & 7.092 & 0.736 &\\
& 376.484499 & 3.279 & 0.517 & 13\\
10\,119\,517 & 428.472508 & --110.03 & 0.066 &\\
& 429.421495 & 80.42 & 0.058 &\\
& 430.379879 & 29.61 & 0.059 & 190\\
\hline
\end{tabular}
\label{Table:RVs_indiv}
\end{table}
Most of the stars show overabundances of Mg and Si compared to the
derived metallicity, and for eight stars Na is found to be
significantly enhanced. Additionally, Ti is found to be depleted
in the atmospheres of KIC\,05\,164\,767, 08\,750\,029 and
12\,353\,648. For KIC\,03\,217\,554, we did not find consistent
results for the iron-peak elements; there is a large scatter among
the abundances of Fe, Cr, and Ni.
{\it RV variations.} Five stars in our sample show differences in
the measured RVs that are much larger than the errors of
measurement. They are listed in Table~\ref{Table:RVs_indiv} (note
that the RVs are on a relative scale so that the mean RV is zero
for each star). From an inspection of the RVs, we suspect for two
of these stars (KIC\,04\,847\,411 and 09\,413\,057) a periodic
variation, although the time base is too short to distinguish
between possible periods. KIC\,09\,413\,057 is found by
\citet{Uytterhoeven2011b} to be a hybrid pulsator. One of the
possible periods that fits the observed RV variations of this star
is about 1.6~d and could be caused by pulsations. For one of the
five stars (KIC\,10\,119\,517) we observed an extremely large
difference in RV of 190 km\,s$^{-1}$\ within one day.
\section{Spectral energy distributions}
The effective temperature can also be determined from the spectral
energy distribution (SED). For our target stars we constructed
SEDs using literature photometry from 2MASS \citep{Skrutskie2006},
Tycho \citep{Hoeg1997}, TASS \citep{Droege2006}, USNO-B1
\citep{Monet2003} and UCAC3 \citep{Zacharias2010}. $T_{\rm eff}$\ was
determined by fitting solar-composition \citet{Kurucz1993} model
fluxes to the photometry. The model fluxes were convolved with the
photometric filter response functions. A weighted
Levenberg-Marquardt non-linear least-squares fitting procedure was
used to find the solution that minimised the difference between
the observed and model fluxes. Since $\log{g}$\ is poorly
constrained due to the lack of UV flux measurements, we fixed
$\log{g}$\,=\,4.0 for all fits. This introduces additional errors of about
50~K into the determined effective temperature for the stars
showing significant deviations from the assumed value of $\log{g}$.
Since spectral energy distributions can be significantly affected
by interstellar reddening, we measured the equivalent widths of
the interstellar Na\,D lines if present in our spectra and
determined $E(B-V)$ using the relation given by \citet{Munari1997}.
Table~\ref{Table:SED} lists the results. Columns 3 and 4
correspondingly give the temperature values derived without and
with the effects of interstellar reddening taken into account.
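The principle of the SED fit can be illustrated with a simplified sketch: here blackbody fluxes and a brute-force scan over $T_{\rm eff}$\ stand in for the Kurucz model fluxes and the Levenberg-Marquardt minimisation actually used, so only the weighted least-squares idea carries over; the optimal flux scaling is solved analytically for each trial temperature.

```python
import numpy as np

def planck(wave_um, teff):
    """Blackbody flux (arbitrary units), a simplified stand-in for the
    filter-convolved Kurucz model fluxes used in the paper."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = wave_um * 1e-6
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * teff)) - 1.0))

def fit_sed(wave_um, flux, err, teff_grid):
    """Weighted least-squares SED fit: for each trial Teff solve the
    optimal scale factor analytically, then keep the Teff with the
    smallest chi^2 (brute force instead of Levenberg-Marquardt)."""
    best_teff, best_chi2 = None, float("inf")
    w = 1.0 / err**2
    for teff in teff_grid:
        model = planck(wave_um, teff)
        scale = np.sum(w * flux * model) / np.sum(w * model**2)
        chi2 = np.sum(w * (flux - scale * model) ** 2)
        if chi2 < best_chi2:
            best_teff, best_chi2 = teff, chi2
    return best_teff

# toy check: recover the temperature of synthetic photometry
bands = np.array([0.44, 0.55, 1.25, 1.65, 2.17])   # B, V, J, H, K in microns
obs = 3.2e-14 * planck(bands, 7600.0)
teff = fit_sed(bands, obs, 0.02 * obs, range(6000, 9001, 100))
# recovers teff = 7600
```
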
\section{Comparison with the Kepler Input Catalogue}\label{Section:KIC}
Figure~\ref{Temperature_comp} compares the spectroscopically
derived $T_{\rm eff}$\ with the photometric and KIC values (typical errors
of the KIC data are $\pm$200~K for $T_{\rm eff}$\ and $\pm$0.5~dex for both
$\log{g}$\ and metallicity). The stars are sorted by the spectroscopic
$T_{\rm eff}$\ value starting with the coolest object. For most of the
targets we find a rather good agreement between the
spectroscopically determined temperature (open circles) and the
value listed in the KIC (filled boxes). Whereas in most cases the
$T_{\rm eff}$\ from the uncorrected SED fit (open triangles) is slightly
lower than the spectroscopic one, the temperatures corrected for
the interstellar reddening (asterisks) show rather good
agreement. In the following, we discuss stars that show larger
deviations from this general tendency in $T_{\rm eff}$\ based on
Fig.\,\ref{Temperature_comp} or large deviations from the KIC
values in the other parameters based on
Table\,\ref{Table:FundamentalParameters}.
{\it KIC\,05\,446\,068:} The spectroscopically derived temperature
exceeds the KIC value by 400\,K and the de-reddened photometric
one by 700\,K. Our fit with the best synthetic spectrum is rather
poor and the spectrum of a second star can clearly be seen in the
residuals. We assume the star to be a SB2 star so that none of the
derived temperatures may be valid.
\begin{figure}
\centering
\includegraphics[scale=0.85]{IS_graph_new.eps}
\caption{Location of the stars (see
Table~\ref{Table:Classification} for labels) and the $\gamma$~Dor\ (dashed
lines) and $\delta$~Sct\ (solid lines) theoretical instability strips in
the HR-diagram. Filled circles indicate suspected binaries, open
circles the stars for which no reliable fit has been obtained.}
\label{HR_diagram}
\end{figure}
\begin{table}
\caption{Classification according to the type of variability
derived from spectrum and light curve analysis.}
\begin{tabular}{|l|c|c|c|}
\hline \multicolumn{1}{|c|}{KIC} &
spectroscopic\rule{0pt}{9pt} & light curve &label\\
\hline
01\,571\,152$^{1)}$\rule{0pt}{9pt} & & & a \\
02\,166\,218 & $\gamma$~Dor s or hybrids & $\gamma$~Dor s & b \\
05\,164\,767\rule{0pt}{9pt} & & & g \\
\hline
03\,453\,494\rule{0pt}{9pt} & & & d \\
06\,509\,175 & & & k \\
07\,748\,238 & & \raisebox{1.5ex}[-1.5ex]{hybrids} & n \\
09\,764\,965 & &
& s \\\cline{3-4} 04\,847\,411$^{3)}$\rule{0pt}{3pt} & & & e \\
08\,623\,953 & \raisebox{1.5ex}[-1.5ex]{$\delta$~Sct s} & & o \\
08\,750\,029 & & & q \\
09\,812\,351 & & \raisebox{1.5ex}[-1.5ex]{$\delta$~Sct s} & t \\
10\,451\,090 & & & v \\
12\,353\,648 & & & z \\ \hline
06\,289\,468\rule{0pt}{9pt} & & & j \\
06\,587\,551 & & & l \\
06\,756\,386 & & hybrids & m \\
08\,738\,244 & possibly $\delta$~Sct s & & p \\
09\,413\,057 & &
& r \\\cline{3-4} 05\,785\,707\rule{0pt}{9pt} & & & i \\
10\,977\,859 & & \raisebox{1.5ex}[-1.5ex]{$\delta$~Sct s} & x \\
\hline
05\,446\,068$^{1,2)}$\rule{0pt}{9pt} & &hybrid & h \\
10\,616\,594$^{2,3)}$ & too cool & $\delta$~Sct & w \\
10\,119\,517 & & not pulsating & u \\
\hline
03\,217\,554$^{1)}$\rule{0pt}{9pt} & &$\delta$~Sct & c \\
05\,088\,308$^{1)}$ & too evolved & $\gamma$~Dor & f \\
11\,498\,538$^{2)}$ & &no classification & y \\
\hline
\multicolumn{4}{l}{$^{1)}$ suspected SB2 star; $^{2)}$ no reliable fit obtained;\rule{0pt}{11pt}}\\
\multicolumn{4}{l}{$^{3)}$ not analyzed by
\citet{Uytterhoeven2011b}}\\
\end{tabular}
\label{Table:Classification}
\end{table}
{\it KIC\,06\,587\,551:} According to the spectroscopic findings,
this is the hottest star of our sample, in agreement with the
photometrically evaluated $T_{\rm eff}$\ corrected for the interstellar
reddening. Both the KIC and the uncorrected photometric values are
about 500~K lower. This is an interesting fact, since we already
showed in Paper~I that, for the hotter stars ($T_{\rm eff}$$>$8000\,K), the
KIC values of $T_{\rm eff}$\ are systematically lower than the spectroscopic
ones, presumably because the interstellar
reddening was not properly taken into account when deriving the
KIC temperatures.
{\it KIC\,04\,847\,411:} This example shows that the parameters
listed in the KIC can be unreliable. Figure~\ref{CompProfiles}
(top panel) compares the observed H$_\beta$ line profile (solid,
black line) with the synthetic ones computed from the parameters
derived by us (dashed, red line) and from those listed in the KIC
(dotted, green line). Obviously, the green spectrum does not match
the observations at all. Since the metallicity value of $-1.95$
listed in the KIC is much lower than the limiting value in our
grid of atmosphere models, we expect the deviation from the
observations to be even larger, in particular for all metal lines.
Our spectra show RV variations with an amplitude of about
5.5\,km\,s$^{-1}$\ and a period of $\sim$10~d. We need more spectra to
reveal the nature of this variability.
{\it KIC\,08\,738\,244:} This star shows a large discrepancy
between the derived values of $\log{g}$\ and $[M/H]$ and those listed
in the KIC. Figure~\ref{CompProfiles} (second panel) compares the
observed spectrum with synthetic spectra in one metal line region
and shows that the model based on the KIC values gives no reliable
fit. The same is the case for the Balmer line profiles.
{\it KIC\,09\,812\,351 and 12\,353\,648:} these are metal poor
stars with metallicities below the limit of our grid of atmosphere
models. Both show overabundances of C, Mg, and Si compared to the
derived Fe abundance, while the spectrum of KIC\,12\,353\,648
additionally exhibits a strong depletion of Ti. The derived
metallicities, represented in this case by the Fe abundance, are
much lower than the values listed in the KIC and KIC\,12\,353\,648
additionally shows a discrepancy of about 300~K in $T_{\rm eff}$. The effect
of the different parameter sets on the synthetic spectra is
illustrated in the two lower panels of Figure~\ref{CompProfiles}.
Figure~\ref{HR_diagram} shows the positions of all stars of our
sample in the log\,(L/L$_{\odot}$)--log\,$T_{\rm eff}$\ diagram, together
with the $\delta$~Sct\ and $\gamma$~Dor\ instability strips. The latter were
reconstructed from \citet[Figures 2 and 9]{Dupret2005}. The edges
of the $\delta$~Sct\ instability region have been computed with a
mixing-length parameter of $\alpha$=1.8 for the fundamental mode
(solid thin lines) and for a radial order of $n$=4 (solid thick
lines). The edges of the $\gamma$~Dor\ instability regions computed with
$\alpha$=2.0 and 1.5 are represented by dashed thin and thick
lines, respectively. To place the stars into the diagram, we
estimated their luminosities from the spectroscopically derived
$T_{\rm eff}$\ and $\log{g}$\ by means of an interpolation in the tables by
\citet{Schmidt-Kaler1982}. The luminosity error bars represent a
combination of the errors in $T_{\rm eff}$\ and $\log{g}$, and so in some cases
they appear to be significantly larger than the uncertainties in
$T_{\rm eff}$. Besides that, the luminosity errors can still be
underestimated due to the uncertainties in the empirical
relations. Realizing that, we base our classification mainly on
the position of the stars in the HR-diagram according to the
derived temperatures.
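For reference (this is an illustration, not the interpolation scheme actually used), the way the uncertainties in $T_{\rm eff}$\ and $\log{g}$\ propagate into the luminosity can be seen from the standard relation that follows from $L=4\pi R^{2}\sigma T_{\rm eff}^{4}$ and $g=GM/R^{2}$, assuming a mass estimate $M$:
\begin{equation*}
\log\frac{L}{L_{\odot}}=\log\frac{M}{M_{\odot}}+4\log\frac{T_{\rm eff}}{T_{\rm eff,\odot}}-\log\frac{g}{g_{\odot}},
\end{equation*}
so the errors in $T_{\rm eff}$\ and $\log{g}$\ enter the luminosity error directly.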
\section{The stars in the HR-diagram}
Table~\ref{Table:Classification} classifies the stars according to
their type of variability, listing the classifications expected
from their location in the HR-diagram (Fig.\,\ref{HR_diagram}) and
derived by \citet{Uytterhoeven2011b} from the frequency analysis
of the {\it Kepler} light curves. There are six ``outliers'' in
the log\,(L/L$_{\odot}$)-log\,$T_{\rm eff}$\ diagram
(Fig.~\ref{HR_diagram}). Three of them (labels c, f, and h) are
suspected binaries and two (labels w and y) are the stars for
which no reliable fit of the observed spectrum could be obtained.
For these objects we cannot give a certain classification. For the
remaining sixth object, KIC\,10\,119\,517 (label u), no
pulsations could be found from the light curve analysis.
For most stars of our sample, the classification based on the
light curve analysis appears to be fully consistent with the
position of the objects in the log\,$T_{\rm eff}$-log\,(L/L$_{\odot}$)
diagram. We confirm three $\gamma$~Dor\ stars (labels a, b, and g), and 10
$\delta$~Sct\ pulsators lying in the expected region of the HR-diagram.
Four of them, however, have been classified by
\citet{Uytterhoeven2011b} as hybrid pulsators although they do not
fall in the overlapping region between the $\gamma$~Dor\ and $\delta$~Sct\ stars
in our HR-diagram. One star, KIC\,04\,847\,411, was not analyzed
by \citet{Uytterhoeven2011b}. Its light curve classification
listed in Table~\ref{Table:Classification} is based on our own
analysis of the first Quarter of {\it Kepler} data.
There are ten further stars that show $\delta$~Sct-like oscillations in
their light curves. Six of them have been classified by
\citet{Uytterhoeven2011b} as hybrid pulsators but do not fall in
the overlapping region in the HR-diagram. Five stars (labels i, j,
m, p, and x) are close to the hot border of the $\delta$~Sct\ instability
region, two other ones (labels l and r) are distinctly hotter than
given by this border. Four stars are too cool (labels h, w) or too
evolved (labels c, f) to be hybrid, $\gamma$~Dor\ or $\delta$~Sct\ pulsators.
Four stars of our sample are reported by \citet{Uytterhoeven2011b}
to be binaries. Three of them could be identified as SB2 stars.
For the fourth one, more observations are needed to confirm its
binarity spectroscopically.
Fifteen of the analysed targets show metallicities more than
0.3~dex below the solar value. The four
stars of lowest metallicity show underabundances of about 1~dex.
Two of them, KIC\,09\,812\,351 and 12\,353\,648, have a C
abundance comparable to the solar value which might be a sign of
$\lambda$~Boo\ nature. In addition to the C abundance, this type of
variable stars is characterised by solar abundances of N, O and S.
However, we did not find any spectral lines of these elements in
the considered wavelength range that could be used for an abundance
determination. We also find that most of the analysed
stars are rather fast rotators with projected rotational
velocities above 90\,km\,s$^{-1}$.
\section{Conclusions}\label{Section:Conclusions}
We determined the fundamental parameters of 26 stars in the {\it
Kepler} satellite field of view proposed to be candidates for
$\gamma$~Dor\,-type variables \citep{Uytterhoeven2011a}. The analysis was
done by means of the spectrum synthesis method based on the
comparison between the observed and synthetic spectra. As an
additional test of the derived $T_{\rm eff}$, we computed SEDs by using
photometry from literature and determined $T_{\rm eff}$\ by fitting
solar-composition \citet{Kurucz1993} model fluxes to the
photometric data.
A comparison of the results from the different methods was made.
Besides some outliers, where the reasons can be explained, the
$T_{\rm eff}$\ derived from the spectrum analysis shows a good overall
agreement with the values given in the KIC. For the hottest star
of our sample, the KIC value appears to be underestimated. This
agrees with our finding in Paper\,I that the $T_{\rm eff}$\ given in the KIC
are in general too low for the hotter stars because the
interstellar reddening was not properly taken into account. The
$T_{\rm eff}$\ following from the SED fitting are systematically lower. This
can be explained by the interstellar reddening. Our correction for
this effect by using the equivalent widths of the interstellar
Na\,D lines to derive $E(B$$-$$V)$, improves the situation
although in some cases the resulting $T_{\rm eff}$\ is found to be slightly
overestimated. The accuracy of the values for $\log{g}$\ and $[M/H]$
in the KIC is rather poor. An uncertainty of $\pm$0.5\,dex is
stated in the catalogue for both parameters, in some cases we also
find larger deviations from our analysis so that the values given
in the catalogue are not suited to check for the quality of our
findings.
The spectroscopically derived fundamental parameters allow us to
place the stars in a HR-diagram and to compare their location with
the classification made by \citet{Uytterhoeven2011b} based on the
oscillation frequencies found in the {\it Kepler} light curves. As
a result, we find most of the stars in a relatively compact
region of the diagram that reaches from the cool edge of the
$\delta$~Sct\ instability strip to a region left of its hot border. For
all of the six outliers that are too cool or too evolved to fall
into the $\delta$~Sct\ instability region we could find an explanation
either by binary nature or insufficient convergence of the
parameter determination.
Out of the 14 stars whose light curves show oscillations typical
for $\gamma$~Dor\ or $\gamma$~Dor-$\delta$~Sct\ hybrid pulsators, we find three (labels a,
b, and g) that certainly fall into the $\gamma$~Dor\ range of the diagram.
Ten further stars are found to be located in one of the $\delta$~Sct\
regions of the HR-diagram; six of them show $\delta$~Sct-typical
oscillations, and in four, $\delta$~Sct- and $\gamma$~Dor-typical oscillations co-exist.
This shows that oscillations with periods in the $\gamma$~Dor\ range are
much more common among the $\delta$~Sct\ stars than described by the
theoretical $\gamma$~Dor\ instability region. This finding is in agreement
with \citet{Grigancene2010} who investigated a sample of 234 {\it
Kepler} stars and found a significant number of hybrid pulsators,
whereas theory predicts the existence of hybrids in only a small
overlapping region of the instability strips.
We find seven stars close to the left or left of the blue edge of
the $\delta$~Sct\ instability strip calculated for fourth radial overtone
pulsations. Only two of these stars show $\delta$~Sct-like oscillations
but five of them show oscillations with periods in the $\delta$~Sct\ and
in the $\gamma$~Dor\ range. Similar to our results, \citet{Grigancene2010}
found a significant number of stars showing $\delta$~Sct\ or $\delta$~Sct-$\gamma$~Dor\
oscillations which are hotter than predicted by the theoretical
blue edge of the $\delta$~Sct\ instability strip calculated for fourth
radial overtone pulsations.
We found four stars with very low metallicities in the $-1$~dex
range. Two of them have about solar C abundance which could be a
sign of $\lambda$~Boo\ nature. Both stars show $\delta$~Sct-like but no
$\gamma$~Dor-typical oscillations. Thus we did not find any hint for a
relationship between the $\lambda$~Boo\ stars and $\gamma$~Dor-type variability in
our sample.
\section*{Acknowledgements} The research leading to these results
has received funding from the European Research Council under the
European Community's Seventh Framework Programme
(FP7/2007--2013)/ERC grant agreement n$^\circ$227224 (PROSPERITY).
This research has made use of the SIMBAD database, operated at
CDS, Strasbourg, France.
\section{Introduction}
\subsection{Definitions}
In this section, we present definitions and notations for the terms that are used in the paper. The reader can refer to \cite{StraubeBook} for the details of these and other definitions.
Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^n$ and let $L^2_{(0,q)}(\Omega)$ be the space of $(0,q)$-forms on $\Omega$ with square integrable coefficients (for $(0,0)$-forms, i.e., functions, no subscript will be used). Each
such form can be written uniquely as
$$u=\sideset{}{'}\sum_{J}u_J d\overline{z_J}$$
where $J=(j_1,\dots, j_q)$ is a strictly increasing multi-index, $\sideset{}{'}\sum$ denotes the summation over such indices, and $d\overline{z_J}=d\overline{z_{j_1}}\wedge\dots\wedge d\overline{z_{j_q}}$.
We define the following inner product on $L^2_{(0,q)}(\Omega)$,
$$(u,v)=\left(\sideset{}{'}\sum_{J}u_J d\overline{z_J},\sideset{}{'}\sum_{J}v_J d\overline{z_J}\right)=\sideset{}{'}\sum_{J}\int_{\Omega}u_J \overline{v_J}dV,$$
under which $L^2_{(0,q)}(\Omega)$ is a Hilbert space. We also define the standard $\overline{\partial}$-operator (the Cauchy-Riemann operator) on $(0,q)$-forms as
$$\overline{\partial}\left(\sideset{}{'}\sum_{J}u_J d\overline{z_J}\right)=\sum_{j=1}^{n}\sideset{}{'}\sum_{J}\frac{\partial u_J}{\partial \overline{z_j}} d\overline{z_j}\wedge d\overline{z_J},$$
where the derivatives are computed as distributional derivatives. We say a form $u\in L^2_{(0,q)}(\Omega)$ is in the domain of $\overline{\partial}$ if $\overline{\partial} u \in L^2_{(0,q+1)}(\Omega)$. In this standard setup, the operator $\overline{\partial}$ is a closed and densely defined operator from $L^2_{(0,q)}(\Omega)$ to $L^2_{(0,q+1)}(\Omega)$. Moreover, it has a Hilbert space adjoint that is denoted by $\overline{\partial}^{\ast}$.
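For instance, on a $(0,1)$-form $u=\sum_{j=1}^{n}u_{j}\,d\overline{z_{j}}$ in the domain of $\overline{\partial}^{\ast}$, the adjoint is computed as
\begin{equation*}
\overline{\partial}^{\ast}u=-\sum_{j=1}^{n}\frac{\partial u_{j}}{\partial z_{j}},
\end{equation*}
where membership in Dom($\overline{\partial}^{\ast}$) imposes a boundary condition on the coefficients $u_{j}$ (see \cite{StraubeBook}).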
We define the complex Laplacian (also referred to as the $\overline{\partial}$-Neumann Laplacian) on $(0,q)$-forms as
\begin{equation}\label{complexlaplacian}
\Box=\Box_{q}=\overline{\partial}\dbar^{\ast}+\overline{\partial}^{\ast}\overline{\partial},
\end{equation}
where each operator is defined at the correct form level and with domain so that the compositions are defined. It is clear that Dom($\Box$) involves two boundary conditions: $u\in$ Dom($\overline{\partial}^{\ast}$) and $\overline{\partial} u\in$ Dom($\overline{\partial}^{\ast}$). The first one is a Dirichlet condition and the second
one is a Neumann condition.
It is known that $\Box$ has a bounded inverse (that is a solution operator) on bounded pseudoconvex domains. This operator is called the $\overline{\partial}$-Neumann operator of
$\Omega$ and it is denoted by $N=N_q$ (see also \cite{StraubeBook}).
For $\alpha \in L^2_{(0,q)}(\Omega)$ such that $\overline{\partial} \alpha=0$, the $\overline{\partial}$-problem is to find a form $u \in L^2_{(0,q-1)}(\Omega)$ such that
\begin{equation}\label{dbar}
\overline{\partial} u =\alpha.
\end{equation}
By using the machinery above we note that $\overline{\partial}^{\ast} N \alpha$ is a solution for \eqref{dbar} and moreover this solution has the smallest $L^2$-norm among all the solutions. The operator $\overline{\partial}^{\ast} N$ on $L^2_{(0,q)}(\Omega)$ is called the canonical solution operator.
A function $f\in C^{\infty}(\Omega)$ is said to be holomorphic on $\Omega$, if $\overline{\partial} f=0$ in $\Omega$. Let $\mathcal{O}(\Omega)$ denote the set of holomorphic functions on $\Omega$ and $L^2_a(\Omega)$ denote
the intersection $\mathcal{O}(\Omega)\cap L^2(\Omega)$. It is a consequence of the Cauchy integral formula that $L^2_a(\Omega)$ is a closed subspace
of $L^2(\Omega)$. Hence, there exists the orthogonal projection operator from $L^2(\Omega)$ onto $L^2_a(\Omega)$. This projection is called the Bergman projection operator of the domain $\Omega$ and denoted by $\mathbf{B}_{\Omega}$.
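Explicitly, $\mathbf{B}_{\Omega}$ is given by integration against the Bergman kernel $K_{\Omega}(z,w)$ of $\Omega$:
\begin{equation*}
\mathbf{B}_{\Omega}f(z)=\int_{\Omega}K_{\Omega}(z,w)f(w)\,dV(w),\qquad f\in L^{2}(\Omega).
\end{equation*}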
The closure of a domain $\Omega$ in $\mathbb{C}^n$ is said to have a Stein neighborhood basis if for every neighborhood $\mathcal{U}$ of $\overline{\Omega}$ there exists a pseudoconvex domain $\mathcal{W}$ such that $\overline{\Omega}\subset\subset \mathcal{W}\subset\subset \mathcal{U}$.
The Nebenh\"ulle of $\Omega$, denoted by $\mathcal{N}(\Omega)$, is the interior of the intersection of all pseudoconvex domains
that contain $\overline{\Omega}$. We say $\Omega$ has nontrivial Nebenh\"ulle if $\mathcal{N}(\Omega)\setminus \Omega$ has interior points.
The results in this note concern domains with non-smooth boundary; therefore, we have to be careful with the definition of the function space $C^{\infty}(\overline{\Omega})$. By $f\in C^{\infty}(\overline{\Omega})$ we mean that for any multi-indices $\alpha,\beta$,
$$\sup_{z\in\Omega}\left|\frac{\partial^{\alpha+\beta}}{\partial z^{\alpha}\partial\overline{z}^{\beta}}f(z)\right| \text{ is finite.}$$
Note that this condition is weaker than requiring $f$ to be smooth in a full neighborhood of $\overline{\Omega}$. We use $A^{\infty}(\Omega)$ to denote the set of holomorphic functions that are in $C^{\infty}(\overline{\Omega})$. We denote the $L^2$ Sobolev spaces of forms by $W^k_{(0,q)}(\Omega)$ for integer values of $k$.
\subsection{Background}
The Hartogs triangle is an example of a pseudoconvex domain with nontrivial Nebenh\"ulle and the worm domain in \cite{DiedFoar77a} is an example of a smooth bounded pseudoconvex domain with nontrivial Nebenh\"ulle.
It is clear that if the Nebenh\"ulle is nontrivial, then the domain cannot have a Stein neighborhood basis. On the other hand, trivial Nebenh\"ulle does not guarantee that $\Omega$ has a Stein neighborhood basis; see \cite[Proposition 1]{Sato80} for an earlier discussion and see \cite{Stensones} for a counterexample.
We refer to \cite{BedfordFornaess78, DiedFoar77a, DiederichFornaess77b, Sibony87, Sibony91, FornaessHerbig08, FornaessNagel, Harrington}
and the references within for results concerning the existence of a Stein neighborhood basis. In particular, some sufficient conditions can be listed as follows.
\begin{enumerate}
\item[(S-1)] Existence of a holomorphic vector field in a neighborhood of $b\Omega$ that is transversal to $b\Omega$ \cite{BedfordFornaess78, FornaessNagel}.
\item[(S-2)] Smallness of a certain cohomology class \cite{BedfordFornaess78}.
\item[(S-3)] Property $(P)$ or Property $(\tilde{P})$ \cite{Sibony87}.
\item[(S-4)] Existence of a plurisubharmonic defining function \cite{FornaessHerbig08}.
\end{enumerate}
A related problem in this context is to find sufficient conditions on a bounded domain $\Omega$ in $\mathbb{C}^n$ such that the $\overline{\partial}$-Neumann operator $N$, the canonical solution operator $\overline{\partial}^{\ast} N$ and the Bergman projection operator $\mathbf{B}_{\Omega}$ of $\Omega$ are exactly regular, i.e., $N$, $\overline{\partial}^{\ast} N$ and $\mathbf{B}_{\Omega}$ are bounded on Sobolev spaces $W^k(\Omega)$ (with correct form level) for all $k\geq 0$. Actually, on bounded pseudoconvex domains with smooth boundary if $N_{\Omega}$ is exactly regular (or compact) then so are $\overline{\partial}^{\ast} N$ and $\mathbf{B}_{\Omega}$ \cite{BoasStraube90, StraubeBook}.
The intriguing relationship between the problem of existence of a Stein neighborhood basis and the problem of exact regularity of $N$, $\overline{\partial}^{\ast} N$ and $\mathbf{B}_{\Omega}$ rises to the surface when we examine the known sufficient conditions. Namely, each condition (S-1) to (S-4) has a more stringent version that implies the exact regularity of $N$, $\overline{\partial}^{\ast} N$ and $\mathbf{B}_{\Omega}$. In particular, compare (S-1) to \cite{BoasStraube91a}, (S-2) to \cite{BoasStraube94}, (S-3) to \cite{BoasStraube91a} and \cite{HerbigMcNeal06}, and (S-4) to \cite{Catlin84}, \cite{Straube97} and \cite{McNeal04}.
In this note, we explore this relationship further on complete Hartogs domains.
\subsection{Hartogs Domains}
In this section, we go over the notion of Nebenh\"ulle on Hartogs domains (see also \cite{ZeytuncuNeben}). Let $\mathbb{D}$ denote the unit disc in $\mathbb{C}$ and let $\psi(z)$ be a continuous function on $\mathbb{D}$ that is bounded from below. Let us consider the domain $\Omega$ in $\mathbb{C}^2$ defined by
\begin{equation}\label{Hdomain}
\Omega=\left\{(z_1,z_2)\in \mathbb{C}^2~|~ z_1\in \mathbb{D}; |z_2|<e^{-\psi(z_1)}\right\}.
\end{equation}
The domain $\Omega$ is a bounded complete Hartogs domain. Moreover, it is known (see \cite[page 129]{VlaBook}) that $\Omega$ is a pseudoconvex domain if and only if $\psi(z)$ is a subharmonic function on $\mathbb{D}$. In order to focus on pseudoconvex domains, we assume that $\psi(z)$ is subharmonic for the rest of the note. We further assume that $\psi(z)$ is smooth in the interior of $\mathbb{D}$.
Let $\mathcal{F}$ be the set of functions $r(z)$ where $r(z)$ is a subharmonic function on a neighborhood of $\overline{\mathbb{D}}$ such that $r(z)\leq\psi(z)$ on $\mathbb{D}$.
We define the following two functions:
\begin{align*}
R(z)&=\sup_{r\in\mathcal{F}}\left\{r(z) \right\},\\
R^*(z)&=\limsup_{\mathbb{D}\ni\zeta\to z}R(\zeta).
\end{align*}
Note that $R^{*}(z)$ is upper semicontinuous and subharmonic on $\mathbb{D}$.
The following proposition from \cite[Theorem 1]{Shirinbekov86} gives the description of $\mathcal{N}(\Omega)$ for $\Omega$ a complete Hartogs domain as above.
\begin{proposition}\label{description}
$\mathcal{N}(\Omega)=\left\{(z_1,z_2)\in \mathbb{C}^2~|~ z_1\in \mathbb{D}; |z_2|<e^{-R^{*}(z_1)}\right\}.$
\end{proposition}
This description does not give much information about the interior of the set difference $\mathcal{N}(\Omega)\setminus \Omega$ or the existence of a Stein neighborhood basis. The Hartogs triangle is a Hartogs domain ($\psi$ is not continuous) with nontrivial Nebenh\"ulle. The continuity assumption on $\psi$ is not enough to avoid having nontrivial Nebenh\"ulle as seen in the following example from \cite{Diederich98}.
\noindent \textbf{Example.} Consider a sequence of points in $\mathbb{D}$ that accumulates at every boundary point of $\mathbb{D}$, and let $f$ be a nonzero holomorphic function on $\mathbb{D}$ that vanishes on this sequence.
The function defined by $\psi(z)=|f(z)|^2$ is a subharmonic function and $\Omega$, defined as above for this particular $\psi$,
is a pseudoconvex domain. On the other hand, any pseudoconvex domain that compactly contains $\Omega$ has to contain the closure of the unit polydisc
$\mathbb{D}\times\mathbb{D}$. Therefore, $\mathcal{N}(\Omega)\setminus \Omega$ has nonempty interior.
This example suggests that we must impose additional conditions on $\psi$ or $\Omega$ to have trivial Nebenh\"ulle. The following is an example of a positive result.
\begin{theorem}[\cite{ZeytuncuNeben}]\label{former}
Suppose $\Omega=\left\{(z_1,z_2)\in \mathbb{C}^2~|~ z_1\in \mathbb{D}; |z_2|<e^{-\psi(z_1)}\right\}$ is a smooth bounded pseudoconvex complete Hartogs domain.
Then $\mathcal{N}(\Omega)=\Omega$, in particular $\Omega$ has trivial Nebenh\"ulle.
\end{theorem}
Note that the smoothness assumption on the domain $\Omega$ is a stronger condition than the smoothness assumption on the function $\psi(z)$.
In this note, we prove two results on Hartogs domains $\Omega$ that ensure $\mathcal{N}(\Omega)=\Omega$ under the assumption of exact regularity of the canonical operators.
\subsection{Results} The first result holds on any Hartogs domain defined by \eqref{Hdomain}.
\begin{theorem}\label{main}
Let $\Omega=\left\{(z_1,z_2)\in \mathbb{C}^2~|~ z_1\in \mathbb{D}; |z_2|<e^{-\psi(z_1)}\right\}$ where $\psi(z)$ is a smooth subharmonic function on $\mathbb{D}$ that is bounded from below. Suppose that $\overline{\partial}^{\ast} N_1$ maps $C^{\infty}_{(0,1)}(\overline{\Omega})$ to $C^{\infty}(\overline{\Omega})$. Then $\mathcal{N}(\Omega)=\Omega$; in particular, $\Omega$ has trivial Nebenh\"ulle.
\end{theorem}
\begin{remark}
If $N_1$ maps $C^{\infty}_{(0,1)}(\overline{\Omega})$ to $C^{\infty}_{(0,1)}(\overline{\Omega})$ then it is clear that $\overline{\partial}^{\ast} N_1$ has the desired property above since $\overline{\partial}^{\ast}$ is a first order differential operator. On the other hand, there are domains where $N_1$ fails to be regular but $\overline{\partial}^{\ast} N_1$ maps $C^{\infty}_{(0,1)}(\overline{\Omega})$ to $C^{\infty}(\overline{\Omega})$ \cite{EhsaniBidisc, EhsaniProduct}.
\end{remark}
\begin{remark}
Theorem \ref{main} is still true if the canonical solution operator $\overline{\partial}^{\ast} N_1$ is replaced by any other solution operator for the $\overline{\partial}$-problem. This resonates with the results in \cite{Chaumat} and \cite{Dufresnoy} where it is shown that existence of a Stein neighborhood basis implies existence of a regular solution operator for the $\overline{\partial}$-problem.
\end{remark}
\begin{remark}
Theorem \ref{main} implies that for the domain constructed in the example above; the $\overline{\partial}$-Neumann operator $N_1$, the canonical solution operator $\overline{\partial}^{\ast} N_1$, or any solution operator for the $\overline{\partial}$-problem fails to be globally regular.
\end{remark}
The second result concerns a special family of Hartogs domains. We take a bounded holomorphic function $g$ on $\mathbb{D}$ and define the following complete Hartogs domain by using this function,
\begin{equation}\label{domain}
\Omega_{g}=\left\{(z_1,z_2)\in \mathbb{C}^2~|~z_1\in \mathbb{D} \text{ and } |z_2|<|g(z_1)|\right\}.
\end{equation}
Note that here $\psi(z)=-\log |g(z)|$. Before we state the second result we observe two things.
\begin{remark}
If $g$ is constant, then $\Omega_g$ is a bidisc; the closure of a bidisc admits a Stein neighborhood basis, and the Bergman projection of a bidisc is exactly regular.
\end{remark}
\begin{remark}
If $g$ vanishes at a point in $\mathbb{D}$ then $\overline{\Omega_g}$ does not admit a Stein neighborhood basis and $\mathbf{B}_{\Omega_g}$ is not bounded on Sobolev spaces. Indeed, $\Omega_g$ behaves like the famous Hartogs triangle at a zero of $g$ and these two properties fail around this point. See the last section for details.
\end{remark}
The remaining case is when $g$ is nonconstant and has no zeros in $\mathbb{D}$. In this case, the domain does not necessarily admit a Stein neighborhood basis. In particular, as in the example above, let us take a holomorphic function $h(z)$ on $\mathbb{D}$ with $|h(z)|<1$ on $\mathbb{D}$, such that the zero set $\left\{z\in\mathbb{D}~|~h(z)=0\right\}$ has no accumulation point in the interior of $\mathbb{D}$ but accumulates at every boundary point of $\mathbb{D}$. We can use Blaschke products to construct such a function. Then, let $g(z)=1+h(z)$ and consider the domain $\Omega_g$. We observe that $\overline{\Omega_g}$ contains $b\mathbb{D}\times\mathbb{D}$ and therefore any pseudoconvex neighborhood of
$\overline{\Omega_g}$ also contains $\mathbb{D}\times\mathbb{D}$. This shows that, for this particular choice of $g$, the closure of the domain $\Omega_g$ does not admit a Stein neighborhood basis.
On the other hand, if we assume the regularity of $\mathbf{B}_{\Omega_g}$ on Sobolev spaces then we get the existence of a Stein neighborhood basis.
\begin{theorem}\label{Sobolev}
Let $g$ be a nonconstant nonvanishing bounded holomorphic function on $\mathbb{D}$.
Suppose $\mathbf{B}_{\Omega_g}$ is bounded from $W^k(\Omega_g)$ to itself for all integers $k\geq 0$. Then the closure of $\Omega_g$ admits a Stein neighborhood basis.
\end{theorem}
\begin{remark}
It will be clear in the proof of Theorem \ref{Sobolev} that a weaker hypothesis, namely continuity from $C^{\infty}(\overline{\Omega_g})$ to itself, would suffice.
\end{remark}
\section{Proof of Theorem \ref{main}}
The proof builds on the proof of Theorem \ref{former} in \cite{ZeytuncuNeben}. For the convenience of the reader, we repeat the arguments from \cite{ZeytuncuNeben} here and highlight the new ingredients in this proof. The first difference to mention is that the boundary smoothness assumption in \cite[Theorem 5]{ZeytuncuNeben} is replaced by the assumption that the canonical solution operator is globally regular.
We start as in \cite{ZeytuncuNeben} and we suppose that $\mathcal{N}(\Omega)\not=\Omega$. Then, we take a point $p=(p_1,p_2)\in \mathcal{N}(\Omega)\setminus \Omega$ and notice that actually the set difference contains more points than this singleton. Namely, by Proposition \ref{description}, $R^{*}(p_1)<\psi(p_1)$ and by semicontinuity of $R^{*}$ and continuity of $\psi$;
there exists a neighborhood $\mathcal{U}$ of $p_1$ (compactly contained in $\mathbb{D}$) such that for all $q_1\in \mathcal{U}$, $R^{*}(q_1)<\psi(q_1)$. This neighborhood $\mathcal{U}$ guarantees that $\mathcal{N}(\Omega)$ contains a full $\mathbb{C}^2$ neighborhood of the boundary point $(p_1,e^{-\psi(p_1)}) \in b\Omega$.
Note that there exists $\delta>0$ such that $\mathcal{U}$ is contained in the disc $\left\{z_1\in\mathbb{C}~|~|z_1|<1-3\delta\right\}$. Also note that we can add an appropriate function to $\psi(z)$ to construct a pseudoconvex complete Hartogs domain $\Omega_{\delta}$ with smooth boundary such that $\Omega_{\delta}\subset \Omega$ and the two domains share the same boundary over $|z_1|<1-\delta$. We will need a smooth cut-off function $\chi_{\delta}(z_1)$ that is radially symmetric on $\mathbb{D}$, supported in $|z_1|<1-2\delta$, and identically 1 on $|z_1|<1-3\delta$.
Let $f(z_1,z_2)\in A^{\infty}(\Omega_{\delta})$ with the property that $f$ does not extend past any boundary point of $\Omega_{\delta}$; existence of such a function is guaranteed by \cite{Catlin80}. First, we extend this function to $\Omega$,
\begin{equation*}
F(z_1,z_2)=\left\{
\begin{array}{ll}
f(z_1,z_2)\chi_{\delta}(z_1) &: (z_1,z_2) \in \Omega_{\delta}\\
0 &: (z_1,z_2)\in \Omega\setminus \Omega_{\delta}
\end{array}
\right.
\end{equation*}
as a smooth function. Note that $F(z_1,z_2)\equiv f(z_1,z_2)$ for $|z_1|<1-3\delta$ and $F\in C^{\infty}({\overline{\Omega}})$.
Next, for any $q_1\in \mathcal{U}$ we define $u_{q_1}$ as
\begin{equation*}
u_{q_1}(z_1,z_2)=\overline{\partial}^{\ast} N\left(\frac{-\overline{\partial} F (z_1,z_2)}{z_1-q_1}\right).
\end{equation*}
The assumption on $\overline{\partial}^{\ast} N$ ensures that $u_{q_1}\in C^{\infty}(\overline{\Omega})$. By using this function we define $G_{q_1}(z_1,z_2)$ as
\begin{equation*}
G_{q_1}(z_1,z_2)=F(z_1,z_2)+(z_1-q_1)u_{q_1}(z_1,z_2).
\end{equation*}
Note that $G_{q_1}\in A^{\infty}(\Omega)$ and $G_{q_1}(q_1,z_2)=f(q_1,z_2)$ for all $|z_2|<e^{-\psi(q_1)}$. In the remaining part of the proof, we show that $G_{q_1}(z_1,z_2)$ extends to be a holomorphic function on the larger set $\mathcal{N}(\Omega)$ by using two lemmas from \cite{ZeytuncuNeben}. We present proofs of the lemmas for the convenience of the reader.
\begin{lemma}\label{uniform}
Suppose $p\in \mathcal{N}(\Omega)$ and $h$ is a function that is holomorphic in a neighborhood of $\overline{\Omega}$. Then $h$ has a holomorphic extension to $\mathcal{N}(\Omega)$ and $|h(p)|\leq\sup_{q\in \Omega}|h(q)|.$
\end{lemma}
\begin{proof} Note that since $h$ is holomorphic in a neighborhood of $\overline{\Omega}$, it is holomorphic on $\mathcal{N}(\Omega)$.
Next, assume the desired inequality is not true. In this case, $g(z_1,z_2)=\frac{1}{h(z_1,z_2)-h(p)}$ is a holomorphic function on some complete Hartogs domain $\Omega_1$ that compactly contains $\Omega$.
The domain $\Omega_1$ may not be pseudoconvex but its envelope of holomorphy, denoted by $\widetilde{\Omega_1}$, which is a single-sheeted (schlicht) and complete Hartogs domain,
is pseudoconvex (see \cite[page 183]{VlaBook}).
Moreover, by definition any function holomorphic on $\Omega_1$ extends to a holomorphic function on the envelope of holomorphy $\widetilde{\Omega_1}$.
In particular, $g(z_1,z_2)$ extends as a holomorphic function on $\widetilde{\Omega_1}$, and therefore the point $p$ cannot be in $\widetilde{\Omega_1}$. But this is impossible since $p$ is a point in $\mathcal{N}(\Omega)$. \end{proof}
The next lemma, from \cite{ZeytuncuNeben}, is an approximation result that is similar to the one in \cite{BarrettFornaess}. Take a function $h$ holomorphic on $\Omega$ and expand it as follows: $$h(z_1,z_2)=\sum_{k=0}^{\infty}a_k(z_1)z_{2}^k,$$ where each $a_k(z_1)$ is a holomorphic function on $\mathbb{D}$. Next, define the following functions (that are polynomials in $z_2$) for any $N\in\mathbb{N}$,
\begin{equation}
\mathcal{P}_N(z_1,z_2)=\sum_{k=0}^{N}a_k\left(\frac{z_1}{1+\frac{1}{N}}\right)z_2^k.
\end{equation}
Clearly each $\mathcal{P}_N$ is holomorphic in a neighborhood of $\overline{\Omega}$.
\begin{lemma}\label{approximation}
If $h \in A^{\infty}(\Omega)$, then the sequence of functions $\left\{\mathcal{P}_N\right\}$ converges uniformly to $h$ on $\overline{\Omega}$.
\end{lemma}
\begin{proof}
Note that $h$ easily extends to the boundary of each fiber over an interior point of $\mathbb{D}$. For $(z_1,z_2)\in \Omega$ and $k\geq 2$, by Cauchy's inequalities,
\begin{align*}
|a_k(z_1)z_2^k|&=\left|\frac{1}{k!}\frac{(k-2)!}{2\pi i}z_2^k\int_{|\zeta|=e^{-\psi(z_1)}}\frac{\frac{\partial^2}{\partial \zeta^2}h(z_1,\zeta)}{\zeta^{k-1}}d\zeta\right|\\
&\leq \frac{1}{2\pi k(k-1)}\left(e^{-\psi(z_1)}\right)^k2\pi e^{-\psi(z_1)}\left(\sup_{\Omega}\left|\frac{\partial^2}{\partial z_2^2}h\right|\right)\frac{1}{(e^{-\psi(z_1)})^{k-1}}\\
&\leq \frac{C}{k^2}
\end{align*}
for some global constant $C$ (depending on $\psi$, and hence on the domain $\Omega$, and on the function $h$). This estimate is enough for the uniform convergence.
\end{proof}
Recall each $\mathcal{P}_N$ is holomorphic on a neighborhood of $\overline{\Omega}$ and consequently on $\mathcal{N}(\Omega)$. But by Lemma \ref{uniform}, the uniform convergence
percolates onto $\mathcal{N}(\Omega)$ and therefore any function in $A^{\infty}(\Omega)$ extends to a holomorphic function on $\mathcal{N}(\Omega)$.
When we apply the argument above to $G_{q_1}$, we note that $G_{q_1}$ is a holomorphic function on $\mathcal{N}(\Omega)$. In particular, $G_{q_1}(q_1,z_2)=f(q_1,z_2)$ extends to a larger disc $|z_2|<e^{-R^*(q_1)}$.
Recall $q_1\in \mathcal{U}$ is an arbitrary point; therefore, for any point $q_1\in \mathcal{U}$, the holomorphic function $f(q_1,z_2)$ extends from the disc $|z_2|<e^{-\psi(q_1)}$ to the strictly larger disc $|z_2|<e^{-R^*(q_1)}$. This implies that $f(z_1,z_2)$ extends as a holomorphic function (by the joint analyticity lemma of Hartogs) past the boundary point $(p_1,e^{-\psi(p_1)})$, but this is a contradiction since $f$ is assumed to be non-extendable. Hence we conclude that indeed, $$\mathcal{N}(\Omega)=\Omega.$$
\section{Proof of Theorem \ref{Sobolev}}
Let $\chi(z)$ be a nonzero compactly supported smooth radial function on $\mathbb{D}$. The key observation is the following lemma.
\begin{lemma}\label{rep}
$\mathbf{B}_{\Omega_g}\left(\frac{\chi(z_1)}{g(z_1)}\right)(z_1,z_2)=\frac{c}{g(z_1)}$
for some nonzero constant $c$.
\end{lemma}
\begin{proof}
Take a holomorphic function $h(z_1,z_2)$ that is in $L^2(\Omega_g)$ and consider the following two inner products on $\Omega_g$.
\begin{align*}
\left<\frac{\chi(z_1)}{g(z_1)},h(z_1,z_2)\right>&=\pi \int_{\mathbb{D}} \frac{\chi(z_1)}{g(z_1)}\overline{h(z_1,0)}|g(z_1)|^2dA(z_1)\\
&=\pi \int_{\mathbb{D}} \chi(z_1)\overline{h(z_1,0)g(z_1)} dA(z_1)\\
&=c_1\overline{h(0,0)g(0)}\\
\left<\frac{1}{g(z_1)},h(z_1,z_2)\right>&=\pi \int_{\mathbb{D}} \frac{1}{g(z_1)}\overline{h(z_1,0)}|g(z_1)|^2dA(z_1)\\
&=\pi \int_{\mathbb{D}} \overline{h(z_1,0)g(z_1)} dA(z_1)\\
&=c_2\overline{h(0,0)g(0)}
\end{align*}
Note that $c_1,c_2$ are nonzero, and $\frac{\chi(z_1)}{g(z_1)}-\frac{c_1}{c_2g(z_1)}$ is perpendicular to all square integrable holomorphic functions on $\Omega_g$; since $\frac{1}{g(z_1)}$ is holomorphic on $\Omega_g$ and, by the computation above, square integrable, the claim follows with $c=c_1/c_2$.
\end{proof}
It is clear that $\frac{\chi(z_1)}{g(z_1)} \in W^k(\Omega_g)$ for all $k\geq 0$ and the regularity assumption implies that $\frac{1}{g(z_1)}=\frac{c_2}{c_1}\mathbf{B}_{\Omega_g}\left(\frac{\chi}{g}\right) \in W^k(\Omega_g)$ for all $k\geq 0$. This means for any $k\geq 0$,
\begin{equation}\label{derivatives}
\int_{\mathbb{D}}\left| \frac{\partial^k}{\partial z^k}\left(\frac{1}{g(z)}\right)\right|^2~|g(z)|^2dA(z)
\end{equation}
is finite. We use these estimates to conclude that $g(z)$ is uniformly continuous on $\mathbb{D}$.
First, we show that $|g(z)|$ does not decay too fast. Note that there exists a holomorphic function $h(z)$ on $\mathbb{D}$ such that $$g(z)=e^{h(z)}.$$
The case $k=1$ in \eqref{derivatives} gives that $h'(z)$ is square integrable on $\mathbb{D}$ (indeed, $\left|\frac{\partial}{\partial z}\frac{1}{g}\right|^2|g|^2=|h'|^2$), and if we let $h(z)=\sum_{n=0}^{\infty}h_nz^n$ we obtain the following.
\begin{align*}
|h(z)|^2&=\left|\sum_{n=0}^{\infty}h_nz^n\right|^2\\
&\leq \left(\sum_{n=1}^{\infty}|h_n|^2n\right)\left(\sum_{n=1}^{\infty}\frac{|z|^{2n}}{n}\right) + |h(0)|^2\\
&\lesssim ||h'||^2\log\left(\frac{1}{1-|z|^2}\right) + |h(0)|^2.
\end{align*}
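The last step uses the Cauchy--Schwarz inequality together with the elementary series identity
\begin{equation*}
\sum_{n=1}^{\infty}\frac{t^{n}}{n}=\log\frac{1}{1-t},\qquad 0\leq t<1,
\end{equation*}
applied with $t=|z|^2$.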
This implies that there exists $M>0$ such that,
\begin{align*}
|\Re h(z)|&\leq|h(z)|\\
&\leq M\log\left(\frac{1}{1-|z|^2}\right) + |h(0)|.
\end{align*}
Without loss of generality, we can assume $|g(z)|<1$, which gives $\Re h(z)<0$ on $\mathbb{D}$. On the other hand, by using the previous estimate we get
\begin{equation*}
\Re h(z)\geq -|h(0)|-M\log\left(\frac{1}{1-|z|^2}\right).
\end{equation*}
This gives us a lower bound on $g(z)$,
\begin{equation*}
|g(z)|=e^{\Re h(z)}\geq C(1-|z|^2)^M.
\end{equation*}
We modify \eqref{derivatives} by using this lower bound and we get for any $k\geq 0$,
\begin{equation*}
\int_{\mathbb{D}}\left| \frac{\partial^k}{\partial z^k}\left(\frac{1}{g(z)}\right)\right|^2~(1-|z|^2)^{2M}dA(z)
\end{equation*}
is finite.
Consider the Taylor series expansion of $\frac{1}{g(z)}=\sum_{n=0}^{\infty}b_nz^n$. By using the orthogonality and Beta functions we get
\begin{align*}
\int_{\mathbb{D}}\left| \frac{\partial^k}{\partial z^k}\left(\frac{1}{g(z)}\right)\right|^2~(1-|z|^2)^{2M}dA(z)&=\int_{\mathbb{D}}\left| \sum_{n=0}^{\infty}b_{n+k}\frac{(n+k)!}{n!}z^{n}\right|^2~(1-|z|^2)^{2M}dA(z)\\
&=\sum_{n=0}^{\infty}|b_{n+k}|^2\frac{((n+k)!)^2}{(n!)^2}\int_{\mathbb{D}}|z|^{2n}(1-|z|^2)^{2M}dA(z)\\
&\geq C_k \sum_{n=0}^{\infty}|b_{n+k}|^2n^{2k}\frac{1}{n^{2M+1}}
\end{align*}
for any $k\geq 0$. Therefore, $\lim_{n\to \infty}n^j|b_n|=0$ for any $j\geq0$.
In particular, this implies $\frac{1}{g(z)}$ is bounded on $\mathbb{D}$ and hence $g(z)$ is bounded from below. We also conclude that
$\frac{1}{g(z)}\in W^2(\mathbb{D})$. These last two implications indicate that $g(z)$ is uniformly continuous on $\mathbb{D}$.
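For completeness, the Beta-function evaluation used in the estimate above reads, in polar coordinates,
\begin{equation*}
\int_{\mathbb{D}}|z|^{2n}(1-|z|^2)^{2M}\,dA(z)=\pi\int_{0}^{1}t^{n}(1-t)^{2M}\,dt=\pi B(n+1,2M+1)\geq \frac{C_M}{n^{2M+1}},
\end{equation*}
since $B(n+1,2M+1)=\frac{\Gamma(n+1)\Gamma(2M+1)}{\Gamma(n+2M+2)}$ decays like $n^{-(2M+1)}$ as $n\to\infty$.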
Furthermore, for any $\epsilon>0$, there exists $\delta(\epsilon)=\delta>0$ such that
\begin{equation*}
|g(z)|< (1+\epsilon)\left|g\left(\frac{z}{1+\delta}\right)\right|
\end{equation*}
for all $z\in\mathbb{D}$.
Now let us define the following domains for any $\epsilon>0$
\begin{equation*}
\Omega_{\epsilon}=\left\{(z_1,z_2)\in \mathbb{C}^2~|~ |z_1|< 1+\delta \text{ and } |z_2|<(1+\epsilon)\left|g\left(\frac{z_1}{1+\delta}\right)\right|\right\}.
\end{equation*}
It is clear that each $\Omega_{\epsilon}$ is a pseudoconvex domain and each one compactly contains $\Omega_g$. Furthermore,
\begin{equation*}
\bigcap_{\epsilon>0}\Omega_{\epsilon}=\overline{\Omega_g}~.
\end{equation*}
This shows the existence of a Stein neighborhood basis and finishes the proof of Theorem \ref{Sobolev}.
\subsection*{Zeros in $\mathbb{D}$}
Suppose $g(z)$ is a nonconstant holomorphic function on $\mathbb{D}$ and suppose $g(z_0)=0$ for some $z_0\in\mathbb{D}$. Then $\Omega_g$ behaves like the
Hartogs triangle around this point and therefore the closure does not admit a Stein neighborhood basis.
On the other hand, we note that $\mathbf{B}_{\Omega_g}$ is not exactly regular. Suppose first that $z_0=0$; the general case follows by composing with an automorphism of the unit disc, which preserves the regularity properties of $\mathbf{B}_{\Omega_g}$. We factor $g(z)$ as $z^jh(z)$ for some integer $j$ and some holomorphic function $h(z)$ that is zero-free in a neighborhood of $0$. We also take a radially symmetric real cut-off function $\chi$ that is supported in a small enough
neighborhood of $0$. We consider the function $$\chi(z_1)\frac{g(z_1)\overline{g(z_1)}^2}{|h(z_1)|^4}=\chi(z_1)\frac{|z_1|^{2j}\overline{z_1}^j}{h(z_1)}.$$
This function belongs to $W^k(\Omega_g)$ for any integer $k\geq 0$. However, Lemma \ref{rep} implies that
\begin{equation*}
\mathbf{B}_{\Omega_g}\left(\chi(z_1)\frac{g(z_1)\overline{g(z_1)}^2}{|h(z_1)|^4}\right)=\frac{c}{g(z_1)}
\end{equation*}
for some constant $c$. It is clear that $\frac{c}{g(z_1)}\not\in W^1(\Omega_g)$. Therefore, $\mathbf{B}_{\Omega_g}$ is not exactly regular.
\section*{Acknowledgments} I'd like to thank H.P. Boas, S. \c{S}ahuto\~{g}lu and S. Ravisankar for useful remarks on this paper. I'd also like to thank the anonymous referee for valuable comments that significantly improved the overall quality of the paper.
\vskip 1cm
\bibliographystyle{amsalpha}
\section{Introduction}
Owing to multidisciplinary applications in seismic exploration \cite{weglein2003inverse}, medical imaging \cite{zhou2020bayesian}, and beyond, inverse problems of partial differential equations (PDEs) have undergone enormous development over the past few decades \cite{arridge2019solving}.
In inverse problems of PDEs, uncertainties are ubiquitous, e.g., measurement uncertainty and epistemic uncertainty.
The Bayesian inverse approach provides a flexible framework that solves inverse problems by transforming them into statistical inference problems, thereby making it feasible to analyze the uncertainty of the solutions to the inverse problems.
Inverse problems of PDEs are generally defined on infinite-dimensional spaces \cite{Kirsch2011Book}, which are not compatible with the well-studied finite-dimensional Bayesian inference approach \cite{bishop2006pattern,kaipio2006statistical}.
To overcome this obstacle, two main approaches are usually employed:
\begin{itemize}
\item Discretize-then-Bayesianize:
The PDEs are first discretized to approximate the original problem in some finite-dimensional space, and the reduced,
approximated problem is then solved using Bayes' method \cite{kaipio2006statistical}.
\item Bayesianize-then-discretize:
Bayes' formula and the algorithms are first constructed on the infinite-dimensional space, and after the infinite-dimensional
algorithm is built, some finite-dimensional approximation is carried out \cite{dashti2013bayesian}.
\end{itemize}
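As a concrete (and purely illustrative) instance of the first approach, consider a finite-dimensional linear model $\bm{d} = Hu + \bm{\epsilon}$ with Gaussian prior and noise; the posterior is then a conjugate Gaussian update. The sketch below is our own minimal example (all names, sizes, and the prior decay are our choices, not taken from the cited references):

```python
import numpy as np

# Minimal discretize-then-Bayesianize sketch: discretize u on n coefficients,
# place a Gaussian prior N(0, C0) on them, and compute the conjugate Gaussian
# posterior for the linear-Gaussian model d = H u + eps, eps ~ N(0, I/tau).
rng = np.random.default_rng(0)
n, m, tau = 50, 10, 100.0                      # parameter dim, data dim, noise precision
H = rng.standard_normal((m, n)) / np.sqrt(n)   # discretized forward operator
C0 = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)  # decaying (trace-class-like) prior covariance
u_true = C0 @ rng.standard_normal(n)
d = H @ u_true + rng.standard_normal(m) / np.sqrt(tau)

# Posterior is N(m_post, C_post) with C_post = (C0^{-1} + tau H^T H)^{-1}.
C_post = np.linalg.inv(np.linalg.inv(C0) + tau * H.T @ H)
m_post = tau * C_post @ H.T @ d
```

The posterior mean interpolates between the prior mean (zero) and the data; with a large noise precision it fits the data closely.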
The above two approaches both have advantages and disadvantages.
The first approach enables us to employ all of the Bayesian inference methods developed in the statistical literature \cite{kaipio2006statistical} to solve the inverse problems.
However, since the original problems are defined on infinite-dimensional spaces, issues such as deteriorating convergence rates and mesh dependence arise, which creates an inevitable barrier to employing this approach. The \emph{Bayesianize-then-discretize} approach, in contrast, has the following advantages:
\begin{itemize}
\item A better understanding of the function-space structures is important for designing optimal numerical schemes for PDEs,
especially ones exploiting the gradient information employed in \cite{hinze2008optimization}.
\item Formulating infinite-dimensional theory rigorously can avoid inappropriate intuitive ideas from the finite-dimensional inverse approach, e.g., total variation prior \cite{Lassas2004IP}.
\item Pushing the discretization to the last step usually generates \emph{mesh independence} algorithms, which means that
the sampling efficiency will not depend strongly on the dimension of the discretization \cite{petra2014computational}.
\end{itemize}
Based on these advantages, the \emph{Bayesianize-then-discretize} approach has attracted numerous researchers' attention in recent years \cite{bui2013computational,Cotter2009IP,Stuart2010ActaNumerica}.
One critical issue for applying the Bayesian approach is efficiently extracting posterior information.
Based on the \emph{Bayesianize-then-discretize} perspective, an infinite-dimensional Markov chain Monte Carlo (MCMC) type algorithm named preconditioned Crank-Nicolson (pCN) has been proposed and analyzed in detail \cite{cotter2013,Pillai2014SPDE}, which has consistent sampling efficiency under different discretizations.
Besides the pCN algorithm, other types of infinite-dimensional sampling algorithms have been proposed, e.g.,
infinite-dimensional sequential Monte Carlo algorithm \cite{Beskos2015SC} and
infinite-dimensional importance sampling algorithm \cite{Agapiou2017SS}.
In order to enhance the sampling efficiency, infinite-dimensional MCMC type sampling algorithms
with gradient and geometric informative proposals have been constructed, e.g., infinite-dimensional Metropolis-adjusted Langevin algorithm \cite{Thanh2016IPI}, and geometric pCN algorithm \cite{Beskos2017JCP}.
Although these algorithms have mesh independence property, they can hardly be employed to solve large-scale inverse problems of PDEs such as full waveform inversion \cite{Fichtner2011Book}.
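For orientation, the pCN method mentioned above uses the proposal $v' = \sqrt{1-\beta^2}\,v + \beta\xi$ with $\xi \sim \mathcal{N}(0,\mathcal{C}_0)$, accepted with probability $\min\{1, \exp(\Phi(v)-\Phi(v'))\}$; the prior is invariant under the proposal, which is what makes the rule dimension-robust. A minimal finite-dimensional sketch follows (the misfit $\Phi$, covariance, and sizes are our own toy choices):

```python
import numpy as np

# Minimal pCN sketch: proposals v' = sqrt(1 - beta^2) v + beta * xi,
# xi ~ N(0, C0), accepted with probability min(1, exp(Phi(v) - Phi(v'))).
rng = np.random.default_rng(2)
n, beta = 20, 0.3
C0_sqrt = np.diag(1.0 / (1.0 + np.arange(n)))  # C0 = C0_sqrt^2 (toy diagonal choice)

def phi(v):
    # Toy data-misfit functional acting on the first two components only
    return 0.5 * np.sum(v[:2] ** 2)

v = C0_sqrt @ rng.standard_normal(n)           # start from a prior draw
accepted = 0
for _ in range(2000):
    xi = C0_sqrt @ rng.standard_normal(n)
    v_prop = np.sqrt(1 - beta**2) * v + beta * xi
    if np.log(rng.uniform()) < phi(v) - phi(v_prop):
        v, accepted = v_prop, accepted + 1
```

Note that the acceptance rule involves only the misfit $\Phi$, not the prior density, so it remains well defined as the discretization is refined.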
Variational inference (VI), an efficient approximate sampling method, has been widely investigated in machine learning for quantifying the uncertainties of learning models \cite{blei2017variational,zhang2018advances}.
Various approximate probability measures have frequently been used for training deep neural networks in finite-dimensional spaces, so VI methods are usually constructed for finite-dimensional problems.
Some studies on VI methods for inverse problems of PDEs are based on the \emph{discretize-then-Bayesianize} perspective on finite-dimensional spaces.
For example, a mean-field assumption based VI approach was employed to solve finite-dimensional inverse problems with hyper-parameters in prior and noise distributions \cite{Guha2015JCP, Jin2012JCP, jin2010hierarchical}.
Projected Stein variational gradient descent methods were constructed to solve inverse problems with low intrinsic dimension \cite{Chen2021SISC, Chen2019NIPS}.
However, under the infinite-dimensional settings, the VI methods are much less studied for solving inverse problems of PDEs from the \emph{Bayesianize-then-discretize} perspective.
Among the existing works, when the approximate measures are restricted to be Gaussian, a novel Robbins-Monro algorithm was developed in \cite{pinski2015algorithms,Pinski2015SIAMMA} from the calculus-of-variations viewpoint.
In order to employ the non-Gaussian approximated measures,
a general \textbf{i}nfinite-dimensional \textbf{m}ean-\textbf{f}ield \textbf{VI} (iMFVI) theoretical framework was established in \cite{jia2021variational}.
Based on this general theory, a generative deep neural network model named VINet was constructed in \cite{Jia2022VINet} by analyzing the approximated measures.
The infinite-dimensional Stein variational gradient descent approach was proposed to solve the nonlinear inverse problems in \cite{jia2021stein}.
Besides the problem of efficient sampling, another critical issue for applying the Bayesian approach is that we sometimes can hardly provide an appropriate prior measure.
In particular, it is often hard to specify the scale of the variance, which corresponds to the regularization parameter in the classical regularization methods \cite{Agapiou2013SPTA,Dashti2013IP}.
The hierarchical Bayesian inference approach has been widely adopted to specify the hyper-parameters, e.g., the scales of the covariance operator.
In the finite-dimensional space, a good introduction is provided in Chapter 3 of \cite{kaipio2006statistical}.
A hierarchical Bayesian model, providing an efficient iterative algorithm for calculating the maximum a posteriori estimates, has been constructed in \cite{calvetti2009conditionally, Calvetti2008hypermodels}, with the probability density of the hyper-parameter being a Gamma distribution.
Furthermore, in order to extract the posterior information based on the hierarchical model, the Gibbs sampler method has been studied in \cite{papaspiliopoulos2008stability, papaspiliopoulos2003non, papaspiliopoulos2007general}, which provides an efficient sampling method, and introduces the non-centered parameterization.
In this work, we focus on a similar hierarchical Bayesian model defined on infinite-dimensional space that has been investigated in \cite{agapiou2014analysis,chen2018dimension,dunlop2017hierarchical} under the framework of infinite-dimensional MCMC sampling methods.
As indicated in \cite{agapiou2014analysis,chen2018dimension}, how to introduce hyper-parameters in the prior measure in the infinite-dimensional case is very different from the finite-dimensional case since the infinite-dimensional Gaussian measures tend to be singular with each other (see also Chapter 2 of \cite{Prato2014book} for details).
In order to overcome the singular issue of infinite-dimensional Gaussian measures, the prior covariance operator $\mathcal{C}_0$ has been represented by its eigendecomposition in \cite{jia2021variational}, and the hyper-parameter is only introduced to a finite number of eigenbasis.
As stated in Subsection 3.1 of \cite{jia2021variational}, this strategy will lead to an inappropriate Bayesian model that cannot adequately incorporate information encoded in data.
Recently, a detailed analysis has been given in \cite{Dunlop2020SMAIJCM}, which indicates that centered parameterization is suitable for evaluating the maximum a posteriori estimate, but non-centered parameterization is more appropriate for using the Gibbs sampler method.
To the best of our knowledge, there are few studies of infinite-dimensional hierarchical Bayesian models under the infinite-dimensional VI framework that can overcome the singularity issue without truncating the eigenexpansions.
Hence, we intend to formulate the iMFVI method with a non-centered parameterization (NCP) approach, which yields a new VI method that allows us to introduce the hyper-parameters for the whole eigenexpansion of the prior covariance operator.
The new VI method is constructed rigorously (see Subsections \ref{subsec2.2} and \ref{subsec2.3}), and its relations with the centered-parameterization based method are discussed in detail (see Subsection \ref{subsec2.4}).
Finally, we provide numerical discretization strategies based on the low-rank structure of the posterior measure (see Subsections \ref{subsec2.5} and \ref{subsec2.6}).
In summary, this work mainly contains four contributions:
\begin{itemize}
\item For the hierarchical Bayesian model, a \textbf{n}on-\textbf{c}entered \textbf{p}arameterization based iMFVI (NCP-iMFVI) approach is proposed.
Compared with the iMFVI approach proposed in Section 3.1 of \cite{jia2021variational}, we can introduce the hyper-parameter for the whole covariance operator of the prior measure rather than for the truncated finite number of components based on the eigenbasis.
\item Different from the iMFVI method proposed in \cite{jia2021variational}, we transform the unknown parameter into a new one by taking the non-centered parameterization and formulating a new VI problem instead of the original VI problem.
We carry out detailed discussions about the relationships between these two problems, which yield compelling reasons for employing the NCP formulation to solve the hierarchical inference problem in the infinite-dimensional space.
\item Through a detailed structural analysis of the posterior measure of the hyper-parameter, we transform the complicated trace calculation into solving PDEs.
Based on scalable PDE solvers and the ideas of low-rank approximation \cite{bui2013computational,Ghattas2021ActaNumerica}, the proposed method will also be scalable that can be employed for large-scale problems.
\item We verify the accuracy of the NCP-iMFVI approach on a one-dimensional elliptic inverse problem.
In addition, we demonstrate the scalability of the NCP-iMFVI approach for the number of parameters by the inverse source problem of the Helmholtz equation and the inverse permeability of the steady-state Darcy flow equation.
\end{itemize}
The outline of this paper is as follows.
In Section $\ref{sec2}$, we construct the non-centered VI method based on the framework under the linear case and verify the essential conditions in VI theory.
In Subsection $\ref{subsec2.1}$, we introduce the iMFVI theory proposed in \cite{jia2021variational}, and illustrate the critical problems of the infinite-dimensional hierarchical approach.
In Subsection $\ref{subsec2.2}$, we propose the non-centered Bayesian formulation.
In Subsection $\ref{subsec2.3}$, we construct the general theory of NCP-iMFVI approach.
In Subsection $\ref{subsec2.4}$, we discuss the relationships between the NCP formulation and the approach introduced in Subsection $\ref{subsec2.1}$, and provide convincing results to explain the reason for employing the NCP formulation.
In Subsections $\ref{subsec2.5}$ and $\ref{subsec2.6}$, the computational details are provided.
In Section $\ref{sec3}$, we employ the developed non-centered VI method on three inverse problems, governed by a simple smooth equation, the Helmholtz equation, and the steady-state Darcy flow equation, with both the noise and the hyper-parameter being Gaussian.
Furthermore, in each numerical simulation, we illustrate the mesh independence expected of the \emph{Bayesianize-then-discretize} approach.
In Section $\ref{sec4}$, we summarize our achievements, point out some deficiencies, and discuss directions for further investigation.
\section{Non-centered infinite-dimensional VI method}\label{sec2}
This section provides a non-centered infinite-dimensional VI method to solve the hierarchical Bayesian inverse problem.
Different from the iMFVI theory proposed in \cite{jia2021variational}, such a method avoids the obstacle that occurred in the iMFVI theory and has an explicit relationship with the centered VI problem.
Furthermore, under the settings in our article, we can clarify the relationship between these two VI problems.
Based on the analysis, an iterative algorithm is then proposed.
\subsection{Critical problems of infinite-dimensional hierarchical approach}\label{subsec2.1}
In this subsection, we first introduce the infinite-dimensional hierarchical Bayesian problem.
Let $\mathcal{H}_u$ be a separable Hilbert space and $N_d$ be a positive integer.
Denote that $\mathcal{N}(u, \mathcal{C})$ is a Gaussian measure with mean $u$ and covariance operator $\mathcal{C}$.
The linear inverse problem can be described as follows:
\begin{align}\label{eq:ca}
\bm{d} = Hu + \bm{\epsilon},
\end{align}
where $\bm{d}\in \mathbb{R}^{N_d}$ is the measurement data, $u\in \mathcal{H}_u$ is the parameter of interest, $H$ is a bounded linear operator from $\mathcal{H}_u$ to $\mathbb{R}^{N_d}$, and $\bm{\epsilon}$ is a Gaussian random vector with zero mean and covariance matrix $\bm{\Gamma}_{\text{noise}} := \tau^{-1}\textbf{I}$ ($\tau$ is a fixed positive number), which means
\begin{align}\label{eq:noise}
\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}}).
\end{align}
We adopt a hierarchical Bayesian approach with the unknown parameter $u \sim \mu^{u, \lambda}_0 = \mathcal{N}(0, \lambda^{2}\mathcal{C}_0)$, where $\mathcal{C}_0:\mathcal{H}_u\rightarrow \mathcal{H}_u$ is a positive definite, symmetric, trace-class operator, and $\lambda^2$ is the amplitude of the prior variance.
Let $\lbrace \alpha_k, e_k \rbrace_{k=1}^{\infty}$ be the eigensystem of the operator $\mathcal{C}_0$ such that
$\mathcal{C}_0 e_k = \alpha_k e_k$ for $k=1,2,\cdots$.
Without loss of generality, we assume that the eigenvectors $\lbrace e_k \rbrace ^{\infty}_{k=1}$ are orthonormal and the eigenvalues $\lbrace \alpha_k \rbrace ^{\infty}_{k=1}$ are in descending order.
Assume that $\lambda$ is a Gaussian random variable with mean $\bar{\lambda}$ and variance $\sigma > 0$, i.e., $\lambda \sim \mu^{\lambda}_0$.
According to ($\ref{eq:ca}$), let us denote $\Phi(u) = \frac{1}{2}\lVert Hu - \bm{d} \rVert^2_{\bm{\Gamma}_{\text{noise}}}$ to be the potential function, where $\lVert \cdot \rVert^2_{\bm{\Gamma}_{\text{noise}}} = \lVert \bm{\Gamma}^{-1/2}_{\text{noise}}\cdot \rVert^2_{l^2}$, with $\lVert \cdot \rVert_{l^2}$ denoting the usual $l^2$-norm.
For notational convenience, we denote $\lVert \cdot \rVert_{l^2}$ by $\lVert \cdot \rVert$.
Then, based on Bayes' formula, the posterior measure $\widetilde{\mu}$ satisfies
\begin{align}\label{eq:bayeca}
\widetilde{\mu}(du, d\lambda) \varpropto \exp(-\Phi(u))\widetilde{\mu}_0(du, d\lambda),
\end{align}
where $\widetilde{\mu}_0(du, d\lambda) = \mu_0^{u,\lambda}(du)\mu_0^{\lambda}(d\lambda)$.
For employing the iMFVI theory proposed in \cite{jia2021variational}, we need the conditional prior measures with different
hyper-parameters, i.e., $\mu^{u, \lambda}_0$, to be equivalent to each other.
However, according to Remark 2.10 in \cite{da2006introduction}, we know that the Gaussian measures $\mu^{u, \lambda}_{0}$ and
$\mu^{u, \lambda^{\prime}}_{0}$ are singular with each other if the hyper-parameters $\lambda \neq \lambda'$.
In order to avoid this singularity problem, the prior covariance operators with hyper-parameters introduced in \cite{jia2021variational} are as follows:
\begin{align}\label{CKlam}
\mathcal{C}^{K}_0(\lambda) := \sum^{K}_{j=1} \lambda^2 \alpha_j e_j \otimes e_j + \sum^{\infty}_{j=K+1} \alpha_j e_j \otimes e_j,
\end{align}
where $K$ is a pre-specified positive integer.
This formulation has also been employed in \cite{feng2018adaptive}, and it is appropriate if the information in the data
is only effective on the first $K$ terms of the prior covariance eigensystem.
However, if the data is particularly informative and far from the prior, the prior covariance operator specified in (\ref{CKlam}) will lead to a Bayesian inference model that lacks the ability to incorporate the data information.
In the following, we will construct a new approach to overcome the difficult singularity problem and, at the same time, add hyper-parameter to all terms of the eigensystem of the prior covariance operator.
\subsection{Non-centered formulation}\label{subsec2.2}
From Subsection $\ref{subsec2.1}$, we know that the priors $\mu^{u, \lambda}_0$ are mutually singular
for different values of $\lambda$,
which conflicts with the basic assumptions of the iMFVI theory \cite{jia2021variational}.
As indicated in \cite{agapiou2014analysis},
similar difficulties were also encountered for employing the two-component Metropolis-within-Gibbs (MwG) algorithms.
In the literature of MwG algorithms \cite{chen2018dimension,papaspiliopoulos2007general}, sampling $u$ and $\lambda$ iteratively is usually called the \emph{centered parameterization} (CP), which suffers from increasingly slow convergence as the discretization level increases.
Hence, the \emph{non-centered parameterization} (NCP) methods are proposed in the investigations of MwG type algorithms.
In the following, we introduce the NCP method into the formulation of the iMFVI theory to overcome the mutually singular problem of probability measures.
Specifically, we reparameterize the prior $\mu^{\prime}_0(du, d\lambda) = \mu_0^{u,\lambda}(du)\mu_0^{\lambda}(d\lambda)$ by writing $u = \lambda v$ with $ v \sim \mu^{v}_0 = \mathcal{N}(0, \mathcal{C}_0)$.
This parameterization does not change the original assumptions on $u$ since the parameter $u=\lambda v$ is still distributed according to $\mathcal{N}(0, \lambda^2 \mathcal{C}_0)$.
By working in variables $(v, \lambda)$ rather than $(u, \lambda)$, the measures $\mu^{v}_0$ and $\mu^{\lambda}_0$ are priori independent, which avoids the lack of robustness arising from mutual singularity and the non-informative issue arising from the truncated approach (\ref{CKlam}).
As for this non-centered parameterization, we employ the prior probability measure as follows:
\begin{align}\label{eq:prior}
\mu_0 = \mu^v_0 \times \mu^{\lambda}_0,
\end{align}
and the forward model becomes
\begin{align}\label{eq:nc}
\bm{d} = H(\lambda v) + \bm{\epsilon},
\end{align}
where $\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}})$ is the Gaussian noise.
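A simple sanity check (our own illustration, with arbitrary toy sizes and data) is that the NCP potential agrees with the centered potential evaluated at $u = \lambda v$, so the reparameterization changes the variables but not the model:

```python
import numpy as np

# For fixed data d and forward map H, the NCP potential Phi(v, lam) equals the
# CP potential Phi(u) at u = lam * v: Phi(v, lam) = 0.5 * tau * ||d - lam H v||^2.
rng = np.random.default_rng(1)
n, m, tau = 30, 8, 50.0
H = rng.standard_normal((m, n))   # toy forward operator
d = rng.standard_normal(m)        # toy data

def phi_cp(u):
    r = d - H @ u
    return 0.5 * tau * r @ r

def phi_ncp(v, lam):
    r = d - lam * (H @ v)
    return 0.5 * tau * r @ r

v = rng.standard_normal(n)
lam = 0.7
```

Here `phi_cp(lam * v)` and `phi_ncp(v, lam)` coincide; what the NCP changes is the prior factorization $\mu^v_0 \times \mu^{\lambda}_0$, not the likelihood.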
Concerned with the NCP formulation, i.e., formulas (\ref{eq:prior}) and (\ref{eq:nc}),
we need to answer the following two fundamental questions:
\begin{itemize}
\item Whether the Bayes' formula holds rigorously under the current NCP setting;
\item What is the relationship between CP and NCP formulations for constructing iMFVI methods.
\end{itemize}
The first question is addressed in the following Theorem \ref{BayesTheoremNCP}. To answer the second question,
we need to give a brief illustration of the general VI theory.
Hence, the statements are postponed to Subsection \ref{subsec2.4}.
\begin{theorem}\label{BayesTheoremNCP}
Let $\mathcal{H}_u$ be a separable Hilbert space and $N_d$ be a positive integer. Let $\mu_0 = \mu^v_0 \times \mu^{\lambda}_0 $ be a Gaussian measure defined on $\mathcal{H}_u \times \mathbb{R}$, and set the Gaussian noise $\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}})$.
Let $\Phi : \mathcal{H}_u \times \mathbb{R} \rightarrow \mathbb{R}$ be defined as
\begin{align}\label{eq:likelihood}
\Phi(v, \lambda) = \frac{1}{2}\lVert \bm{d} - \lambda Hv\rVert ^{2}_{\bm{\Gamma}_{\text{noise}}},
\end{align}
where $\bm{\Gamma}_{\text{noise}} = \tau^{-1}\textbf{I}$ and $\tau$ is a positive constant.
Then $\mu \ll \mu_0$ is a well-defined probability measure on $\mathcal{H}_u \times \mathbb{R}$, with Radon-Nikodym derivative
\begin{align}\label{eq:post}
\qquad\,\,\frac{d\mu}{d\mu_0}(v, \lambda) = \frac{1}{Z_{\mu}}\exp (-\Phi(v, \lambda) ),
\end{align}
where $Z_{\mu}$ is a positive and finite constant given by
\begin{align*}
Z_{\mu} = \int_{\mathcal{H}_u \times \mathbb{R}} \exp (-\Phi(v, \lambda))\mu^v_0(dv)\mu^{\lambda}_0(d\lambda).
\end{align*}
\end{theorem}
\begin{proof}
Since $H$ is a bounded linear operator, $\Phi(v, \lambda)$ is continuous with respect to the variables $v$ and $\lambda$, and we have $\lVert Hv \rVert \leqslant M\lVert v \rVert_{\mathcal{H}_u}$, where $M$ is a fixed constant.
Through a simple calculation, we have
\begin{align*}
\bigg\lvert \Phi(v, \lambda;\bm{d}_1) - \Phi(v, \lambda;\bm{d}_2) \bigg\rvert &= \frac{1}{2}\bigg\lvert \lVert \bm{d}_1 - \lambda Hv\rVert ^2_{\bm{\Gamma}_{\text{noise}}} - \lVert \bm{d}_2 - \lambda Hv\rVert ^2_{\bm{\Gamma}_{\text{noise}}} \bigg\rvert \\
&= \frac{\tau}{2} \bigg \lvert \lVert \bm{d}_1 \rVert^2 - \lVert \bm{d}_2 \rVert^2 -2 \langle \bm{d}_1 - \bm{d}_2, \lambda Hv \rangle_{\mathbb{R}^{N_d}} \bigg \rvert \\
&\leqslant \tau (\lVert \bm{d}_1 \rVert + \lVert \bm{d}_2 \rVert + \lVert \lambda Hv \rVert)\lVert \bm{d}_1 - \bm{d}_2 \rVert \\
&\leqslant \tau (\lVert \bm{d}_1 \rVert + \lVert \bm{d}_2 \rVert + \lvert \lambda\rvert M\lVert v \rVert_{\mathcal{H}_u})\lVert \bm{d}_1 - \bm{d}_2 \rVert.
\end{align*}
Hence, Assumption 1 and the conditions of Theorems 15-16 in \cite{dashti2013bayesian} (provided in the Appendix) are satisfied.
Then the Bayes' formula ($\ref{eq:post}$) is well-defined.
Moreover, $Z_{\mu}$ is positive and finite.
\end{proof}
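In finite dimensions, the Lipschitz-type estimate in the proof above can be checked numerically. The following Python sketch is illustrative only: a random matrix stands in for the bounded operator $H$, $M$ is taken as its operator norm, and the weighted norm uses $\bm{\Gamma}_{\text{noise}} = \tau^{-1}\textbf{I}$ as in ($\ref{eq:likelihood}$).

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 2.0
H = rng.standard_normal((5, 8))   # finite-dimensional stand-in for H
M = np.linalg.norm(H, 2)          # operator norm, so ||Hv|| <= M ||v||

def Phi(v, lam, d):
    # Phi(v, lambda; d) = (tau/2) ||d - lambda H v||^2, since Gamma_noise = tau^{-1} I
    r = d - lam * H @ v
    return 0.5 * tau * r @ r

v = rng.standard_normal(8)
lam = 1.3
d1 = rng.standard_normal(5)
d2 = rng.standard_normal(5)

# Bound from the proof: |Phi(.;d1) - Phi(.;d2)| <= tau (||d1|| + ||d2|| + |lam| M ||v||) ||d1 - d2||
lhs = abs(Phi(v, lam, d1) - Phi(v, lam, d2))
rhs = tau * (np.linalg.norm(d1) + np.linalg.norm(d2)
             + abs(lam) * M * np.linalg.norm(v)) * np.linalg.norm(d1 - d2)
assert lhs <= rhs
```

The inequality holds for any choice of $v$, $\lambda$, $\bm{d}_1$, $\bm{d}_2$, since it only uses the triangle and Cauchy-Schwarz inequalities.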
\subsection{General theory of iMFVI}\label{subsec2.3}
Based on the preparations provided in Subsections \ref{subsec2.1} and \ref{subsec2.2},
we will apply the iMFVI theory based on the general mean-field assumption, developed in \cite{jia2021variational}, to our NCP setting.
Specifically, we will prove a theorem, which provides the foundation for constructing practical iterative algorithms.
Let $\mathcal{H}$ be a separable Hilbert space, $\mathcal{M}(\mathcal{H})$ be the set of Borel probability measures on $\mathcal{H}$, and $\mathcal{A} \subset \mathcal{M}(\mathcal{H})$ be a set of ``simpler'' measures that can be efficiently calculated.
Following Subsection $\ref{subsec2.2}$, we define the prior measure $\mu_0$ on $\mathcal{H}$, which decomposes as $\mu_0 = \mu^v_0 \times \mu^{\lambda}_0$.
Let $\mu$ be the posterior measure on $\mathcal{H}$ with respect to $\mu_0$;
then Bayes' formula ($\ref{eq:post}$) holds.
For any $\nu \in \mathcal{M}(\mathcal{H})$ that is absolutely continuous with respect to $\mu$,
the Kullback-Leibler (KL) divergence is defined as
\begin{align*}
D_{\text{KL}}(\nu || \mu) &= \int_{\mathcal{H}} \log \bigg(\frac{d\nu}{d\mu}(u) \bigg)\frac{d\nu}{d\mu}(u)\mu(du)
=\mathbb{E}^{\mu} \bigg[\log \bigg(\frac{d\nu}{d\mu}(u) \bigg) \frac{d\nu}{d\mu}(u) \bigg].
\end{align*}
Here, the notation $\mathbb{E}^{\mu}$ means taking the expectation with respect to the probability measure $\mu$.
If $\nu$ is not absolutely continuous with respect to $\mu$, the KL divergence is defined as $+\infty$.
The iMFVI methods aim to find the closest probability measure $\nu$ to the posterior measure $\mu$ with respect to the KL divergence from the set $\mathcal{A}$, i.e., solving the following minimization problem:
\begin{align}\label{expre:problem}
\mathop{\arg\min}\limits_{{\nu \in \mathcal{A}}} D_{{\text{KL}}} (\nu \Vert \mu).
\end{align}
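As a concrete illustration of the objective in ($\ref{expre:problem}$), the Python sketch below compares a Monte Carlo estimate of $D_{\text{KL}}(\nu \Vert \mu)$, written as the expectation $\mathbb{E}^{\nu}[\log(d\nu/d\mu)]$, against the closed form for one-dimensional Gaussians; all numerical values here are illustrative and unrelated to the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 1-D Gaussians: nu = N(m1, s1^2), mu = N(m2, s2^2)
m1, s1, m2, s2 = 0.5, 1.0, 0.0, 2.0

def log_density(x, m, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - m) ** 2 / (2 * s**2)

# Monte Carlo estimate of D_KL(nu || mu) = E_nu[log(dnu/dmu)]
x = rng.normal(m1, s1, size=200_000)
kl_mc = np.mean(log_density(x, m1, s1) - log_density(x, m2, s2))

# Closed form for Gaussians
kl_exact = np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

assert abs(kl_mc - kl_exact) < 1e-2
```

The same expectation formulation is what the mean-field factorization below exploits: under $\nu = \nu^v \times \nu^{\lambda}$ the KL objective splits into terms that can be minimized coordinate-wise.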
The mean-field assumption means that all components of the parameters are assumed to be independent.
For the current setting, we assume that the random variables $v$ and $\lambda$ are independent of each other.
Hence, the space $\mathcal{H}$ and the set $\mathcal{A}$ can be decomposed as follows
\begin{align*}
\mathcal{H}=\mathcal{H}_u \times \mathcal{H}_{\lambda}, \qquad
\mathcal{A}=\mathcal{A}_v \times \mathcal{A}_{\lambda},
\end{align*}
where $\mathcal{H}_u$ is a separable Hilbert space, $\mathcal{H}_{\lambda}$ is simply $\mathbb{R}$, $\mathcal{A}_v \subset \mathcal{M}(\mathcal{H}_u)$, and $\mathcal{A}_{\lambda} \subset \mathcal{M}(\mathcal{H}_{\lambda}) = \mathcal{M}(\mathbb{R})$.
In addition, we assume that the approximated probability measure $\nu$ is equivalent to $\mu_0$ defined by
\begin{align}\label{DefApproxMeasure}
\frac{d\nu}{d\mu_0}(v, \lambda) = \frac{1}{Z_{\nu}}\exp (-\Phi_v(v)-\Phi_{\lambda}(\lambda)),
\end{align}
where $\Phi_{v}(\cdot)$ and $\Phi_{\lambda}(\cdot)$ are two potential functions,
and $Z_\nu$ is the normalization constant.
Obviously, the approximated measure $\nu$ can be decomposed into two components that are absolutely continuous with respect to the corresponding components of the prior measure, i.e.,
\begin{align}
\frac{d\nu^v}{d\mu^v_0}(v) &= \frac{1}{Z^v_{\nu}}\exp (-\Phi_v(v))\label{DefApproxMeasure1}, \\
\frac{d\nu^{\lambda}}{d\mu^{\lambda}_0}({\lambda}) &= \frac{1}{Z^{\lambda}_{\nu}}\exp (-\Phi_{\lambda}(\lambda))\label{DefApproxMeasure2},
\end{align}
with ${Z^v_{\nu}} = \mathbb{E}^{\mu^v_0} \left[\exp (-\Phi_v(v)) \right]$ and
${Z^{\lambda}_{\nu}} = \mathbb{E}^{\mu^{\lambda}_0} \left[\exp (-\Phi_{\lambda}({\lambda}))\right]$.
With these assumptions, the problem ($\ref{expre:problem}$) can be written specifically as
\begin{align}\label{optimProb}
\mathop{\arg\min}\limits_{\substack{\nu^v \in \mathcal{A}_{v} \\
\nu^{\lambda} \in \mathcal{A}_{\lambda}}}D_{{\text{KL}}}
\bigg (\nu^v \times \nu^{\lambda} \bigg \Vert \mu \bigg ).
\end{align}
For the finite-dimensional theory, the sets $\mathcal{A}_{v}$ and $\mathcal{A}_{\lambda}$ can be taken to be the sets of all probability density functions. For the infinite-dimensional theory developed in \cite{jia2021variational},
we need further specifications of the sets $\mathcal{A}_v$ and $\mathcal{A}_{\lambda}$, which ensure that the measures
obtained by the iterations remain in appropriate admissible sets.
\begin{assumption}\label{assump1}
Let $\nu^v$ and $\nu^{\lambda}$ be the approximate measures defined in $(\ref{DefApproxMeasure1})$ and $(\ref{DefApproxMeasure2})$ respectively.
Let us define $T^v_N = \lbrace v|1/N \leqslant \lVert v\rVert_{\mathcal{H}_u} \leqslant N\rbrace$ that satisfies $\sup_N \mu ^v_0(T^v_N)=1$,
and define $T^{\lambda}_N = \lbrace \lambda|-N \leqslant \lambda \leqslant N \rbrace$ that satisfies $\sup_N\mu ^{\lambda}_0(T^{\lambda}_N)=1$.
Furthermore we assume that
\begin{align}\label{expre:condi}
\begin{split}
T_1 &:= \sup \limits_{v \in T^v_N} \int_{\mathbb{R}}\Phi(v, \lambda)\cdot 1_{A}(v, \lambda)\nu^{\lambda}(d\lambda) < \infty\\
T_2 &:= \sup \limits_{\lambda \in T^{\lambda}_N} \int_{\mathcal{H}_u}\Phi(v, \lambda)\cdot 1_{A}(v, \lambda)\nu^{v}(dv) < \infty\\
T_3 &:= \int_{\mathcal{H}_u} \exp \bigg (-\int_{\mathbb{R}}\Phi(v, \lambda)\cdot \nu^{\lambda}(d\lambda) \bigg )\max(1,\lVert v \rVert^{2}_{\mathcal{H}_u})\mu^{v}_0(dv) < \infty\\
T_4 &:= \int_{\mathbb{R}} \exp \bigg (-\int_{\mathcal{H}_u}\Phi(v, \lambda)\cdot \nu^{v}(dv) \bigg ) \max(1, \lambda^2)\mu^{\lambda}_0(d\lambda) < \infty,
\end{split}
\end{align}
where the set $A := \lbrace v, \lambda \vert \Phi(v, \lambda) \geqslant 0 \rbrace$.
\end{assumption}
Let
\begin{align}\label{expre:R}
\begin{split}
R^1_v &= \bigg \lbrace \Phi_v \bigg | \sup \limits_{v \in T^v_N} \Phi_v(v) < \infty, \quad \forall N > 0 \bigg \rbrace, \\
R^2_v &= \bigg \lbrace \Phi_v \bigg | \int_{\mathcal{H}_u}\exp(-\Phi_v(v))\max(1, \lVert v\rVert^2_{\mathcal{H}_u})\mu^v_0(dv) < \infty \bigg \rbrace, \\
R^1_{\lambda} &= \bigg \lbrace \Phi_{\lambda} \bigg | \sup \limits_{{\lambda} \in T^{\lambda}_N} \Phi_{\lambda}({\lambda}) < \infty, \quad \forall N > 0 \bigg \rbrace, \\
R^2_{\lambda} &= \bigg \lbrace \Phi_{\lambda} \bigg | \int_{\mathbb{R}}\exp(-\Phi_{\lambda}({\lambda}))\max(1, \lambda^2)\mu^{\lambda}_0(d\lambda) < \infty\bigg \rbrace.
\end{split}
\end{align}
Now we define:
\begin{align}\label{expreAv}
\mathcal{A}_v = \left\{
\begin{tabular}{l|l}
\multirowcell{2}[0pt][l]{$\nu^v \in \mathcal{M}(\mathcal{H}_u)$} &
\multirowcell{2}[0pt][l]{$\nu^v$ is equivalent to $\mu^v_0$ with ($\ref{DefApproxMeasure1}$) holding true,\\
and $\Phi_v \in R^1_v \bigcap R^2_v$} \\
&
\end{tabular}
\right\},
\end{align}
\begin{align}\label{expreAlam}
\mathcal{A}_{\lambda} = \left\{
\begin{tabular}{l|l}
\multirowcell{2}[0pt][l]{$\nu^{\lambda} \in \mathcal{M}(\mathcal{H}_{\lambda})$} &
\multirowcell{2}[0pt][l]{$\nu^{\lambda}$ is equivalent to $\mu^{\lambda}_0$ with ($\ref{DefApproxMeasure2}$) holding true,\\
and $\Phi_{\lambda} \in R^1_{\lambda} \bigcap R^2_{\lambda}$} \\
&
\end{tabular}
\right\}.
\end{align}
\begin{remark}
\itshape
Notice that the definitions of $R^1_v$ and $R^1_{\lambda}$ are different from the corresponding definitions given in \cite{jia2021variational}.
Here, we employ a modified version of the general theory provided in the Appendix of \cite{Jia2022VINet}, which relaxes the uniform bounds, making the theory more applicable to concrete problems.
For the reader's convenience, we briefly introduce the general theory in the Appendix.
\end{remark}
Now, we give the following key theorem that provides formulas for calculating the potential functions
$\Phi_v$ and $\Phi_{\lambda}$ introduced in (\ref{DefApproxMeasure}).
\begin{theorem}\label{the:posterior}
Let the prior measure $\mu_0$, the noise measure, and the posterior measure $\mu$ be defined as in $(\ref{eq:prior})$, $(\ref{eq:noise})$, and $(\ref{eq:post})$, respectively, and let $\Phi(v, \lambda)$ be defined as in $(\ref{eq:likelihood})$.
If the approximate probability measure in problem $(\ref{optimProb})$ satisfies Assumption $\ref{assump1}$, then problem $(\ref{optimProb})$ possesses a solution $\nu = \nu^v \times \nu^{\lambda} \in \mathcal{M}(\mathcal{H})$ with the following form:
\begin{align}
\frac{d\nu}{d\mu_0}(v, \lambda) \varpropto \exp \bigg(-(\Phi_v(v) + \Phi_{\lambda}(\lambda)) \bigg),
\end{align}
where
\begin{align}\label{PotenFun1}
\begin{split}
\Phi_v(v) = \int_{\mathbb{R}}\Phi (v, \lambda)\nu^{\lambda}(d\lambda) + \text{Const}, \quad
\Phi_{\lambda}(\lambda) = \int_{\mathcal{H}_u}\Phi (v, \lambda)\nu^v(dv) + \text{Const},
\end{split}
\end{align}
and here ``Const'' denotes constants that are independent of the parameters of interest.
Furthermore, we have
\begin{align}
\begin{split}
\nu^v(dv) \varpropto \exp (-\Phi_v(v))\mu^v_0(dv), \quad
\nu^{\lambda}(d\lambda) \varpropto \exp (-\Phi_{\lambda}(\lambda))\mu^{\lambda}_0(d\lambda).
\end{split}
\end{align}
\end{theorem}
\begin{proof}
To prove the theorem, we need to verify the conditions $(\ref{expre:condi})$ given in Assumption $\ref{assump1}$.
Since $H$ is a bounded linear operator, we have $\lVert Hv \rVert \leqslant M\lVert v \rVert_{\mathcal{H}_u}$.
For term $T_1$, we have
\begin{align*}
T_1 &= \sup \limits_{v \in T^v_N}\int_{\mathbb{R}}\Phi(v, \lambda)\cdot 1_{A}(v, \lambda)\nu^{\lambda}(d\lambda) \\
&= \sup \limits_{v \in T^v_N}\int_{\mathbb{R}}\frac{1}{2}\lVert \bm{d} - \lambda Hv \rVert ^2_{\bm{\Gamma}_{\text{noise}}}\nu^{\lambda}(d\lambda) \\
&= \sup \limits_{v \in T^v_N}\int_{\mathbb{R}}\frac{\tau}{2}\bigg (\lVert \bm{d} \rVert ^2 + \lVert \lambda Hv \rVert ^2 - 2\langle \bm{d}, \lambda Hv \rangle \bigg )\nu^{\lambda}(d\lambda) \\
&\leqslant \sup \limits_{v \in T^v_N}\int_{\mathbb{R}}\tau\bigg (\lVert \bm{d} \rVert ^2 + \lambda^2\lVert Hv \rVert ^2 \bigg )\nu^{\lambda}(d\lambda) \\
&\leqslant \sup \limits_{v \in T^v_N} 2\tau M^2\lVert v \rVert^2_{\mathcal{H}_u} \int_{\mathbb{R}}\lambda^2\nu^{\lambda}(d\lambda) + \text{Const} \\
&\leqslant (2\tau M^2N^2) \cdot \int_{\mathbb{R}}\exp(-\Phi_{\lambda}({\lambda}))\max(1, \lambda^2)\mu^{\lambda}_0(d\lambda) + \text{Const}.
\end{align*}
Recalling the set $R^2_{\lambda}$ defined in $(\ref{expre:R})$, since $\nu^{\lambda} \in \mathcal{A}_{\lambda}$, we have
\begin{align*}
\int_{\mathbb{R}}\exp(-\Phi_{\lambda}({\lambda}))\max(1, \lambda^2)\mu^{\lambda}_0(d\lambda) < \infty.
\end{align*}
Then we obtain $T_1 < \infty$.
For term $T_2$, we have
\begin{align*}
T_2 &= \sup \limits_{\lambda \in T^{\lambda}_N}\int_{\mathcal{H}_u}\Phi(v, \lambda)\cdot 1_{A}(v, \lambda)\nu^{v}(dv) \\
&= \sup \limits_{\lambda \in T^{\lambda}_N} \int_{\mathcal{H}_u}\frac{1}{2}\lVert \bm{d} - \lambda Hv \rVert ^2_{\bm{\Gamma}_{\text{noise}}}\nu^{v}(dv) \\
&\leqslant \sup \limits_{\lambda \in T^{\lambda}_N}\int_{\mathcal{H}_u}\tau\bigg (\lVert \bm{d} \rVert ^2 + \lambda^2\lVert Hv \rVert ^2 \bigg )\nu^v(dv) \\
&\leqslant \sup \limits_{\lambda \in T^{\lambda}_N} 2\tau M^2\lambda^2 \int_{\mathcal{H}_u}\lVert v \rVert ^2_{\mathcal{H}_u}\nu^v(dv) + \text{Const} \\
&\leqslant (2\tau M^2N^2) \cdot \int_{\mathcal{H}_u}\exp(-\Phi_v(v))\max(1, \lVert v\rVert^2_{\mathcal{H}_u})\mu^v_0(dv) + \text{Const}.
\end{align*}
Recalling the set $R^2_v$ defined in $(\ref{expre:R})$, since $\nu^v \in \mathcal{A}_v$, we have
\begin{align*}
\int_{\mathcal{H}_u}\exp(-\Phi_v(v))\max(1, \lVert v\rVert^2_{\mathcal{H}_u})\mu^v_0(dv) < \infty.
\end{align*}
Then we obtain $T_2 < \infty$.
For term $T_3$, we notice $-\int_{\mathbb{R}}\Phi(v, \lambda)\cdot \nu^{\lambda}(d\lambda) \leqslant 0$.
Then the term $T_3$ can be estimated as follows:
\begin{align*}
T_3 &\leqslant \int_{\mathcal{H}_u} \max(1,\lVert v \rVert^{2}_{\mathcal{H}_u})\mu^{v}_0(dv) < \infty.
\end{align*}
For term $T_4$, we use the same strategy to obtain that
\begin{align*}
T_4 &\leqslant \int_{\mathbb{R}} \max(1, \lambda^2)\mu^{\lambda}_0(d\lambda) < \infty.
\end{align*}
The proof is completed by combining the estimates of $T_1, T_2, T_3$, and $T_4$.
\end{proof}
Theorem \ref{the:posterior} above provides explicit expressions of the potential functions $\Phi_{v}$ and $\Phi_{\lambda}$ for the parameters $v$ and $\lambda$, respectively.
Although the equalities (\ref{PotenFun1}) are not closed-form solutions, they yield a practical iterative scheme, which will be explicitly formulated in Subsection \ref{subsec2.5} below.
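To make the alternating structure of (\ref{PotenFun1}) concrete, the Python sketch below runs the implied coordinate updates for a small finite-dimensional linear-Gaussian toy model. The specific choices here, a Gaussian $\mathcal{N}(0, \sigma^2_{\lambda})$ prior on $\lambda$ and an identity covariance for $v$, are our simplifying assumptions for illustration, not choices made in the paper: each mean-field factor turns out Gaussian, and its moments are updated from the other factor's current moments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model d = lambda * H v + noise; priors v ~ N(0, I), lambda ~ N(0, sigma_lam^2)
# are illustrative assumptions for this sketch.
n, nd, tau, sigma_lam = 4, 8, 50.0, 2.0
H = rng.standard_normal((nd, n))
v_true = rng.standard_normal(n)
lam_true = 1.5
d = lam_true * H @ v_true + rng.normal(0.0, 1.0 / np.sqrt(tau), nd)

HtH, Htd = H.T @ H, H.T @ d

# Alternate the Gaussian updates implied by
#   Phi_v(v)       = E_{nu^lambda}[Phi(v, lambda)],
#   Phi_lambda(l)  = E_{nu^v}[Phi(v, lambda)].
m_lam, s2_lam = 1.0, 1.0
for _ in range(50):
    E_lam2 = m_lam**2 + s2_lam
    Sigma_v = np.linalg.inv(tau * E_lam2 * HtH + np.eye(n))   # covariance of nu^v
    m_v = Sigma_v @ (tau * m_lam * Htd)                       # mean of nu^v
    E_Hv2 = m_v @ HtH @ m_v + np.trace(HtH @ Sigma_v)
    s2_lam = 1.0 / (tau * E_Hv2 + 1.0 / sigma_lam**2)         # variance of nu^lambda
    m_lam = s2_lam * tau * (d @ H @ m_v)                      # mean of nu^lambda

# Only the product u = lambda * v is identifiable; compare it with the truth.
u_est, u_true = m_lam * m_v, lam_true * v_true
rel_err = np.linalg.norm(u_est - u_true) / np.linalg.norm(u_true)
assert rel_err < 0.5
```

Note the scale indeterminacy $\lambda v = (c\lambda)(v/c)$: the individual factors are only weakly pinned down by the priors, but the product $\lambda v$, the quantity of interest under the NCP reparameterization, converges.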
\subsection{Relationships of CP and NCP formulations}\label{subsec2.4}
In this subsection, we discuss the relationships between the CP and NCP formulations.
The choice of CP and NCP formulations depends on the methods employed, and both formulations have advantages and disadvantages.
For example, the CP formulation is more appropriate when computing maximum a posteriori estimates \cite{Dunlop2020SMAIJCM}.
However, the NCP formulation is preferable when the Metropolis-within-Gibbs algorithm is employed \cite{agapiou2014analysis}.
Before constructing an iterative scheme, let us first clarify the relationships between the CP and NCP formulations under the circumstances of developing iMFVI methods.
Due to the mutually singular problem discussed in Subsection $\ref{subsec2.1}$, we cannot formulate the CP problem within the current iMFVI theory.
To avoid this obstacle, we need to introduce the covariance operator $\mathcal{C}^K_0$.
Thus, it is hard to reveal the connection between the CP and NCP formulations directly in infinite-dimensional spaces.
In order to state the relationships more clearly, we start with the finite-dimensional case,
and then clarify the relationships in the infinite-dimensional space with the help of the finite-dimensional conclusions.
In this subsection, we shall prove that
\begin{itemize}
\item The VI problems formulated by CP and NCP attain the same minimal value;
\item One minimum point (probability density function) of each problem can be transformed into a minimum point of the other by taking a reparameterization.
\end{itemize}
Recall that $\lbrace \alpha_k, e_k\rbrace_{k=1}^{\infty}$ is the eigensystem of the operator $\mathcal{C}_0$ such that $\mathcal{C}_0 e_k = \alpha_k e_k$, as we stated in Subsection $\ref{subsec2.1}$.
We denote by $P^N$ the orthogonal projection of $\mathcal{H}_u$ onto $\mathcal{H}^N_u$, that is $\mathcal{H}^N_u = P^N\mathcal{H}_u := \text{span}\lbrace e_1, e_2, \cdots, e_N \rbrace$.
Let $\bm{u}^N = P^Nu = \sum^{N}_{k=1} u_ke_k \in \mathcal{H}^N_u$, and define $\bm{\mathcal{C}}^N_0 = P^N \mathcal{C}_0 P^N$, so that $\bm{u}^N \sim \mu^{(\bm{u}^N, \lambda)}_0 = \mathcal{N}(0, \lambda^2\bm{\mathcal{C}}^N_0)$.
Let us consider the finite-dimensional linear inverse problem:
\begin{align*}
\bm{d} = H\bm{u}^N + \bm{\epsilon},
\end{align*}
where $\bm{d} \in \mathbb{R}^{N_d}$, and $\bm{\epsilon} \in \mathbb{R}^{N_d}$ is random Gaussian noise with mean zero and covariance $\bm{\Gamma}_{\text{noise}} = \tau^{-1}\textbf{I}$.
We take the prior measure $\widetilde{\mu}^N_0 = \mu^{(\bm{u}^N, \lambda)}_0 \times \mu^{\lambda}_0$, and the potential function
\begin{align*}
\Phi(\bm{u}^N) = \frac{1}{2}\lVert H\bm{u}^N - \bm{d} \rVert^2_{\bm{\Gamma}_{\text{noise}}}.
\end{align*}
According to Bayes' formula, the posterior measure $\widetilde{\mu}^N$ is given by the Radon-Nikodym derivative
\begin{align*}
\frac{d\widetilde{\mu}^N}{d\widetilde{\mu}^N_0}(\bm{u}^N, \lambda) = \frac{1}{\widetilde{Z}^N}\exp(-\Phi(\bm{u}^N)),
\end{align*}
where $\widetilde{Z}^N = \int_{\mathcal{H}^N\times \mathbb{R}}\exp(-\Phi(\bm{u}^N))\widetilde{\mu}^N_0(d\bm{u}^N, d\lambda)$ is the normalization constant.
Taking the NCP formulation $\bm{u}^N = \lambda \bm{v}^N$, we reformulate the linear inverse problem as
\begin{align*}
\bm{d} = H(\lambda\bm{v}^N) + \bm{\epsilon}.
\end{align*}
Since $\bm{u}^N \sim \mu^{(\bm{u}^N, \lambda)}_0$, it follows that $\bm{v}^N \sim \mu^{\bm{v}^N}_0 = \mathcal{N}(0, \bm{\mathcal{C}}^N_0)$, and $\bm{v}^N = P^Nv \in \mathcal{H}^N$.
We have the prior measure $\mu^N_0 = \mu^{\bm{v}^N}_0 \times \mu^{\lambda}_0$, and the potential function
\begin{align*}
\Phi(\bm{v}^N, \lambda) = \frac{1}{2}\lVert H(\lambda \bm{v}^N) - \bm{d} \rVert^2_{\bm{\Gamma}_{\text{noise}}}.
\end{align*}
Then the posterior measure $\mu^N$ is given by the Radon-Nikodym derivative
\begin{align}\label{eq:ncfin}
\frac{d\mu^N}{d\mu^N_0}(\bm{v}^N, \lambda) = \frac{1}{Z^N}\exp(-\Phi(\bm{v}^N, \lambda)),
\end{align}
where $Z^N = \int_{\mathcal{H}^N\times \mathbb{R}}\exp(-\Phi(\bm{v}^N, \lambda))\mu^N_0(d\bm{v}^N, d\lambda)$ is the normalization constant.
For notational convenience, in this subsection we will use the same symbol for a probability measure and the probability density function of the corresponding measure.
For example, we use the symbol $\mu^N$ to denote the probability measure $\mu^N$ and the probability density function of $\mu^N$.
Before introducing the finite-dimensional VI problems, we need to clarify that:
\begin{itemize}
\item In the CP case, we aim to find the approximated probability density $\widetilde{\nu}^{\dagger}(\bm{u}^N, \lambda)$ to minimize the KL divergence between $\widetilde{\nu}^N(\bm{u}^N, \lambda)$ and $\widetilde{\mu}^N(\bm{u}^N, \lambda)$.
Under the mean-field assumption, the approximated probability density could be written as $\widetilde{\nu}^N(\bm{u}^N, \lambda) = \widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)$, where $\widetilde{\nu}^N_1(\bm{u}^N)$ and $\widetilde{\nu}^N_2(\lambda)$ are the probability density functions.
\item In the NCP case, the aim is to find the approximated probability density $\nu^{\dagger}(\bm{v}^N, \lambda)$ to minimize the KL divergence between $\nu^N(\bm{v}^N, \lambda)$ and $\mu^N(\bm{v}^N, \lambda)$.
Taking the assumptions stated in Subsection $\ref{subsec2.3}$, $\nu^N(\bm{v}^N, \lambda)$ could be written as $\nu^N(\bm{v}^N, \lambda) = \nu^N_1(\bm{v}^N)\nu^N_2(\lambda)$, where $\nu^N_1(\bm{v}^N)$ and $\nu^N_2(\lambda)$ are the probability density functions.
\end{itemize}
Based on these goals, we introduce the sets
\begin{align*}
\widetilde{\mathcal{A}} &:= \bigg \lbrace \widetilde{\nu}^N(\bm{u}^N, \lambda) \ \bigg \vert \ \widetilde{\nu}^N(\bm{u}^N, \lambda) = \widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda) \bigg \rbrace, \\
\mathcal{A} &:= \bigg \lbrace \nu^N(\bm{v}^N, \lambda) \ \bigg \vert \ \nu^N(\bm{v}^N, \lambda) = \nu^N_1(\bm{v}^N)\nu^N_2(\lambda) \bigg \rbrace.
\end{align*}
Then, the VI problem formulated by CP can be written as:
\begin{align}\label{eq:vicafin}
\begin{split}
&\mathop{\arg\min}\limits_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}} D_{{\text{KL}}} (\widetilde{\nu}^N \Vert \widetilde{\mu}^N)\\
= &\mathop{\arg\min}\limits_{\widetilde{\nu}^N_1, \widetilde{\nu}^N_2}\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)}{\widetilde{\mu}^N(\bm{u}^N, \lambda)}\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)d\bm{u}^Nd\lambda.
\end{split}
\end{align}
Similarly, the VI problem formulated by NCP has the following form:
\begin{align}\label{eq:vincfin}
\begin{split}
&\mathop{\arg\min}\limits_{\nu^N \in \mathcal{A}} D_{{\text{KL}}} (\nu^N \Vert \mu^N) \\
= &\mathop{\arg\min}\limits_{\nu^N_1, \nu^N_2}\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\nu^N_1(\bm{v}^N)\nu^N_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\nu^N_1(\bm{v}^N)\nu^N_2(\lambda)d\bm{v}^Nd\lambda.
\end{split}
\end{align}
Before discussing the relationship between problems $(\ref{eq:vicafin})$ and $(\ref{eq:vincfin})$, we need to clarify the relationship between the probability densities $\widetilde{\mu}^N(\bm{u}^N, \lambda)$ and $\mu^N(\bm{v}^N, \lambda)$, which are given by
\begin{align*}
\widetilde{\mu}^N(\bm{u}^N, \lambda) &\varpropto \exp \bigg (-\frac{1}{2}\lVert H\bm{u}^N - \bm{d} \rVert^2_{\bm{\Gamma}_{\text{noise}}} \bigg )\widetilde{\mu}^N_0(\bm{u}^N, \lambda), \\
\mu^N(\bm{v}^N, \lambda) &\varpropto \exp \bigg (-\frac{1}{2}\lVert H(\lambda \bm{v}^N) - \bm{d} \rVert^2_{\bm{\Gamma}_{\text{noise}}} \bigg )\mu^N_0(\bm{v}^N, \lambda).
\end{align*}
We know that
\begin{align*}
\widetilde{\mu}^N_0(\bm{u}^N, \lambda) = \frac{1}{(2\pi)^{\frac{N}{2}}\lambda^N\det(\bm{\mathcal{C}}^N_0)^{\frac{1}{2}}} \exp \bigg (-\frac{1}{2}\lVert \lambda^{-1}\bm{\mathcal{C}}^{-1/2}_0\bm{u}^N\rVert^2 \bigg )\mu^{\lambda}_0(\lambda),
\end{align*}
and
\begin{align*}
\mu^N_0(\bm{v}^N, \lambda) = \frac{1}{(2\pi)^{\frac{N}{2}}\det(\bm{\mathcal{C}}^N_0)^{\frac{1}{2}}}\exp \bigg (-\frac{1}{2}\lVert \bm{\mathcal{C}}^{-1/2}_0\bm{v}^N\rVert^2 \bigg )\mu^{\lambda}_0(\lambda).
\end{align*}
Taking $\bm{u}^N = \lambda \bm{v}^N$, it is clear that $\lambda^{-N}\mu^N_0(\bm{v}^N, \lambda) = \widetilde{\mu}^N_0(\bm{u}^N, \lambda)$, and furthermore
\begin{align}\label{eq:postrelate}
\lambda^{-N}\mu^N(\bm{v}^N, \lambda) = \widetilde{\mu}^N(\bm{u}^N, \lambda).
\end{align}
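The density relation above is just the change-of-variables (Jacobian) factor $\lambda^N$ for the map $\bm{u}^N = \lambda \bm{v}^N$, and it can be checked directly in a small example. The Python sketch below is illustrative: an arbitrary diagonal covariance stands in for $\bm{\mathcal{C}}^N_0$, and it verifies the prior-density identity $\lambda^{-N}\mu^{\bm{v}^N}_0(\bm{v}^N) = \widetilde{\mu}^N_0(\bm{u}^N \,\vert\, \lambda)$ at matched points (the $\mu^{\lambda}_0$ factor cancels on both sides).

```python
import numpy as np

rng = np.random.default_rng(3)
N, lam = 3, 1.7
C0 = np.diag([1.0, 0.5, 0.25])   # stand-in for the projected covariance C_0^N

def gauss_density(x, cov):
    # Density of N(0, cov) at x.
    k = len(x)
    return np.exp(-0.5 * x @ np.linalg.solve(cov, x)) / \
        np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))

v = rng.standard_normal(N)
u = lam * v                      # CP variable u^N = lambda v^N

# mu_0^N is the density of N(0, C0) at v; the conditional CP prior is N(0, lam^2 C0) at u.
lhs = lam ** (-N) * gauss_density(v, C0)
rhs = gauss_density(u, lam**2 * C0)
assert np.isclose(lhs, rhs)
```

The same $\lambda^{N}$ factor is what relates the approximating densities $\widetilde{\nu}^N_1$ and $\nu^N_1$ in the theorem that follows.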
Then equation $(\ref{eq:vicafin})$ can be written as
\begin{align}\label{eq:vicafinn}
\begin{split}
&\mathop{\arg\min}\limits_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}} D_{{\text{KL}}} (\widetilde{\nu}^N \Vert \widetilde{\mu}^N)\\
= &\mathop{\arg\min}\limits_{\widetilde{\nu}^N_1, \widetilde{\nu}^N_2}\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)}{\widetilde{\mu}^N(\bm{u}^N, \lambda)}\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)d\bm{u}^Nd\lambda \\
= &\mathop{\arg\min}\limits_{\widetilde{\nu}^N_1, \widetilde{\nu}^N_2}\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\lambda^N\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\lambda^N\widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda)d\bm{v}^Nd\lambda.
\end{split}
\end{align}
Next, we assume that there exist $\widetilde{\nu}^{*}_1(\bm{v}^N), \nu^{*}_1(\bm{u}^N)$ satisfying
\begin{align*}
\widetilde{\nu}^{*}_1(\bm{v}^N) := \lambda^N \widetilde{\nu}^N_1(\bm{u}^N), \quad \nu^{*}_1(\bm{u}^N) := \lambda^{-N}\nu^N_1(\bm{v}^N),
\end{align*}
and set
\begin{align*}
\widetilde{\mathcal{A}}^{*} &:= \bigg \lbrace \widetilde{\nu}^N \ \bigg \vert \ \widetilde{\nu}^N(\bm{u}^N, \lambda) = \widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda), \widetilde{\nu}^N_1(\bm{u}^N) = \lambda^{-N}\widetilde{\nu}^{*}_1(\bm{v}^N) \bigg \rbrace, \\
\mathcal{A}^{*} &:= \bigg \lbrace \nu^N \ \bigg \vert \ \nu^N(\bm{v}^N, \lambda) = \nu^N_1(\bm{v}^N)\nu^N_2(\lambda), \nu^N_1(\bm{v}^N) = \lambda^N\nu^{*}_1(\bm{u}^N) \bigg \rbrace.
\end{align*}
Now we provide a theorem that illustrates the relationship between CP and NCP formulations in the finite-dimensional spaces.
\begin{theorem}\label{the:minimal}
The minimal values of problems $(\ref{eq:vicafin})$ and $(\ref{eq:vincfin})$ are the same.
That is
\begin{align*}
\min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}} D_{{\text{KL}}} (\widetilde{\nu}^N \Vert \widetilde{\mu}^N) = \min_{\nu^N \in \mathcal{A}} D_{{\text{KL}}} (\nu^N \Vert \mu^N).
\end{align*}
Assume that $\nu^{\dagger} \in \mathcal{A}$ and $\widetilde{\nu}^{\dagger} \in \widetilde{\mathcal{A}}$ are minimizers of these two problems, respectively.
If $\nu^{\dagger} \in \mathcal{A}^{*}$ and $\widetilde{\nu}^{\dagger} \in \widetilde{\mathcal{A}}^{*}$, then $\nu^{\dagger}$ and $\widetilde{\nu}^{\dagger}$ can be transformed into solutions of problems $(\ref{eq:vicafin})$ and $(\ref{eq:vincfin})$, respectively, by taking a reparameterization.
\end{theorem}
\begin{remark}
\itshape
Let $\widehat{\nu} = \widehat{\nu}^v \times \widehat{\nu}^{\lambda}$ be the measure given by Theorem $\ref{the:posterior}$, which is the solution of the NCP-iMFVI problem $(\ref{expre:problem})$.
Let $\widehat{\nu}^N$ be the pushforward of the measure $\widehat{\nu}$ defined on space $\mathcal{H}^N_u \times \mathbb{R}$, i.e., $\widehat{\nu}^N = \widehat{\nu} \circ (P^N)^{-1}$.
From Theorem $\ref{the:minimal}$, we know that $\widehat{\nu}^N \in \mathcal{A}^{*}$.
That is, the measure calculated by Theorem $\ref{the:posterior}$ can be transformed to the solution of problem $(\ref{eq:vicafin})$ formulated by CP in finite-dimensional space.
\end{remark}
\begin{proof}
Consider the following problems:
\begin{align}
\label{prob:caA}\min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}^{*}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N), \\
\label{prob:ncA}\min_{\nu^N \in \mathcal{A}^{*}} D_{{\text{KL}}} (\nu^N \Vert \mu^N).
\end{align}
For notational convenience, let
\begin{align*}
\widetilde{\nu}^{\dagger}(\bm{u}^N, \lambda) = \widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda) \in \widetilde{\mathcal{A}}^{*}, \quad
\nu^{\dagger}(\bm{v}^N, \lambda) = \nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda) \in \mathcal{A}^{*}
\end{align*}
be the solutions to problems $(\ref{prob:caA})$ and $(\ref{prob:ncA})$, respectively.
Then we have
\begin{align*}
D_{{\text{KL}}} (\widetilde{\nu}^{\dagger} \Vert \widetilde{\mu}^N)
= &\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\lambda^N\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\lambda^N\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)d\bm{v}^Nd\lambda \\
= &\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\widetilde{\nu}^{\dagger*}_1(\bm{v}^N)\widetilde{\nu}^{\dagger}_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\widetilde{\nu}^{\dagger*}_1(\bm{v}^N)\widetilde{\nu}^{\dagger}_2(\lambda)d\bm{v}^Nd\lambda \\
\geq &\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)d\bm{v}^Nd\lambda \\
= &\min_{\nu^N \in \mathcal{A}^{*}} D_{{\text{KL}}} (\nu^N \Vert \mu^N),
\end{align*}
where we use equation $(\ref{eq:postrelate})$ in the first equality.
Using the same strategy, we obtain that
\begin{align*}
D_{{\text{KL}}} (\nu^{\dagger} \Vert \mu^N) \geq \min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}^{*}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N).
\end{align*}
Combining these two inequalities, we have
\begin{align*}
\min_{\nu^N \in \mathcal{A}^{*}} D_{{\text{KL}}} (\nu^N \Vert \mu^N) \geq \min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}^{*}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N) \geq \min_{\nu^N \in \mathcal{A}^{*}} D_{{\text{KL}}} (\nu^N \Vert \mu^N),
\end{align*}
which indicates
\begin{align}\label{conclu:Astar}
\min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}^{*}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N) = \min_{\nu^N \in \mathcal{A}^{*}} D_{{\text{KL}}} (\nu^N \Vert \mu^N).
\end{align}
Now these two VI problems have the same minimal value.
Due to the settings of the prior measures and noise, the measures $\widetilde{\nu}^N_1(d\bm{u}^N)$ and $\nu^N_1(d\bm{v}^N)$ are Gaussian by Theorem $\ref{the:posterior}$.
Since similar discussions are provided in Subsection $\ref{subsec2.5}$, we omit the details here.
Without loss of generality, we assume that $\widetilde{\nu}^N_1 = \mathcal{N}(\widetilde{\bm{a}}, \widetilde{\bm{\Sigma}})$, and $\nu^N_1 = \mathcal{N}(\bm{a}, \bm{\Sigma})$, then we have
\begin{align*}
\widetilde{\nu}^N_1(\bm{u}^N) &= \frac{1}{(2\pi)^{\frac{N}{2}}\det(\widetilde{\bm{\Sigma}})^{\frac{1}{2}}}\exp \bigg (-\frac{1}{2}\lVert \widetilde{\bm{\Sigma}}^{-1/2}(\bm{u}^N - \widetilde{\bm{a}})\rVert^2 \bigg ) \\
&= \frac{1}{(2\pi)^{\frac{N}{2}}\det(\widetilde{\bm{\Sigma}})^{\frac{1}{2}}}\exp \bigg (-\frac{1}{2}\lVert \lambda\widetilde{\bm{\Sigma}}^{-1/2}(\bm{v}^N - \widetilde{\bm{a}}/\lambda)\rVert^2 \bigg ),
\end{align*}
and
\begin{align*}
\nu^N_1(\bm{v}^N) &= \frac{1}{(2\pi)^{\frac{N}{2}}\det(\bm{\Sigma})^{\frac{1}{2}}}\exp \bigg (-\frac{1}{2}\lVert \bm{\Sigma}^{-1/2}(\bm{v}^N - \bm{a})\rVert^2 \bigg ) \\
&= \frac{1}{(2\pi)^{\frac{N}{2}}\det(\bm{\Sigma})^{\frac{1}{2}}}\exp \bigg (-\frac{1}{2}\lVert \lambda^{-1}\bm{\Sigma}^{-1/2}(\bm{u}^N - \lambda\bm{a})\rVert^2 \bigg ).
\end{align*}
Then measures $\widetilde{\nu}^{N\prime}_1 = \mathcal{N}(\widetilde{\bm{a}}/\lambda, \widetilde{\bm{\Sigma}}/\lambda^2)$, and $\nu^{N\prime}_1 = \mathcal{N}(\lambda\bm{a}, \lambda^2\bm{\Sigma})$ satisfy
\begin{align*}
\widetilde{\nu}^N_1(\bm{u}^N) = \lambda^{-N}\widetilde{\nu}^{N\prime}_1(\bm{v}^N) , \quad
\nu^N_1(\bm{v}^N) = \lambda^N\nu^{N\prime}_1(\bm{u}^N),
\end{align*}
which illustrates that
\begin{align*}
\widetilde{\nu}^N(\bm{u}^N ,\lambda) = \widetilde{\nu}^N_1(\bm{u}^N)\widetilde{\nu}^N_2(\lambda) \in \widetilde{\mathcal{A}}^{*}, \quad
\nu^N(\bm{v}^N, \lambda) = \nu^N_1(\bm{v}^N)\nu^N_2(\lambda) \in \mathcal{A}^{*}.
\end{align*}
As a result, since the measures $\widetilde{\nu}^N_1(d\bm{u}^N)$ and $\nu^N_1(d\bm{v}^N)$ are Gaussian, for all $\widetilde{\nu}^N(\bm{u}^N ,\lambda) \in \widetilde{\mathcal{A}}$ and $\nu^N(\bm{v}^N, \lambda) \in \mathcal{A}$ we have
\begin{align*}
\widetilde{\nu}^N(\bm{u}^N ,\lambda) \in \widetilde{\mathcal{A}}^{*}, \quad
\nu^N(\bm{v}^N, \lambda) \in \mathcal{A}^{*}.
\end{align*}
That is, $\widetilde{\mathcal{A}}\subset \widetilde{\mathcal{A}}^{*}, \mathcal{A} \subset \mathcal{A}^{*}$.
Together with $(\ref{conclu:Astar})$, this indicates that
\begin{align*}
\min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N) = \min_{\nu^N \in \mathcal{A}} D_{{\text{KL}}} (\nu^N \Vert \mu^N).
\end{align*}
Then we have
\begin{align*}
\min_{\widetilde{\nu}^N \in \widetilde{\mathcal{A}}} D_{{\text{KL}}} (\widetilde{\nu}^N\Vert \widetilde{\mu}^N)
= &\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\lambda^N\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\lambda^N\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)d\bm{v}^Nd\lambda \\
= &\int_{\mathcal{H}^N\times \mathbb{R}} \log \frac{\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)}{\mu^N(\bm{v}^N, \lambda)}\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)d\bm{v}^Nd\lambda \\
= &\min_{\nu^N \in \mathcal{A}} D_{{\text{KL}}} (\nu^N \Vert \mu^N).
\end{align*}
This indicates that the probability densities $\lambda^N\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)$ and $\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)$ are both minimizers of problem $(\ref{eq:vincfin})$.
Using the same strategy, the probability densities $\lambda^{-N}\nu^{\dagger}_1(\bm{v}^N)\nu^{\dagger}_2(\lambda)$ and $\widetilde{\nu}^{\dagger}_1(\bm{u}^N)\widetilde{\nu}^{\dagger}_2(\lambda)$ are both minimizers of problem $(\ref{eq:vicafinn})$.
Combining these two conclusions, we obtain that $\lambda^{-N}\nu^{\dagger}(\bm{v}^N, \lambda)$ and $\lambda^N\widetilde{\nu}^{\dagger}(\bm{u}^N, \lambda)$ are minimizers of problems $(\ref{eq:vicafinn})$ and $(\ref{eq:vincfin})$, respectively, and they are obtained from $\nu^{\dagger}(\bm{v}^N, \lambda)$ and $\widetilde{\nu}^{\dagger}(\bm{u}^N, \lambda)$ by the stated reparameterization.
That is, the minimizers of problems $(\ref{eq:vicafinn})$ and $(\ref{eq:vincfin})$ can be transformed into each other by this reparameterization.
This completes the proof.
\end{proof}
With the help of Theorem $\ref{the:minimal}$, we accomplish the two goals mentioned at the beginning of this subsection.
As mentioned in the studies of \cite{bishop2006pattern, jia2021variational, Pinski2015SIAMMA},
we cannot expect that the two VI problems $(\ref{eq:vicafin})$ and $(\ref{eq:vincfin})$ have a unique solution.
That is to say, the results stated in Theorem \ref{the:minimal} cannot ensure that all of the solutions to the two minimization problems can be transformed into each other. In this sense, the conclusions of Theorem \ref{the:minimal} are the best results we can hope for.
We can hardly employ the iMFVI theory under the CP formulation in the infinite-dimensional space due to the singularity issue recalled in Subsection \ref{subsec2.1}.
In Subsections \ref{subsec2.2} and \ref{subsec2.3}, we illustrate that iMFVI theory can be applied under the NCP formulation without the critical singularity problems.
In the finite-dimensional setting, the iMFVI theory (reduced to the corresponding finite-dimensional theory) can be constructed under both formulations, and the two constructions are intimately related to each other, as demonstrated in Theorem \ref{the:minimal}.
As a result, it is natural to use NCP formulation to solve the hierarchical inference problem in the infinite-dimensional case.
\subsection{Iterative algorithm}\label{subsec2.5}
We construct an iterative algorithm using Theorem $\ref{the:posterior}$, when the parameter $v$ belongs to the separable Hilbert space $\mathcal{H}_u$.
Before starting our algorithm, we need to introduce two operator norms and verify some properties of the operator $H^{*}H$.
Following \cite{reed2012methods}, assume that the operator $T_1$ is of trace-class and the operator $T_2$ is a Hilbert-Schmidt operator, both defined on a Hilbert space $\mathcal{H}_0$.
Then we define the operator norms:
\begin{align*}
\lVert T_1\rVert_{\text{Tr}} = \text{Tr}\sqrt{T^{*}_1T_1} < \infty, \quad \lVert T_2\rVert_{\text{HS}} = \sqrt{\text{Tr}(T^{*}_2T_2)} < \infty,
\end{align*}
where $T^{*}_1$ and $T^{*}_2$ denote the adjoint operators of $T_1$ and $T_2$, respectively, and $\text{Tr}(\cdot)$ denotes the trace of an operator.
We now introduce some properties.
\begin{lemma}\label{lemma1}
Let the operator $H:\mathcal{H}_u \rightarrow \mathbb{R}^{N_d}$ be a bounded linear operator. Then:
\begin{enumerate}
\item $H^{*}H$ is a Hilbert-Schmidt operator.
\item There exists a complete orthonormal basis $\lbrace \phi_k\rbrace^{\infty}_{k=1}$ for $\mathcal{H}_u$, which satisfies $H^{*}H\phi_k = \xi_k\phi_k$, and $\xi_k$ is the $k$-th eigenvalue of $H^{*}H$.
\item The Hilbert-Schmidt operator norm of $H^{*}H$ can be controlled by
\begin{align*}
\lVert H^{*}H \rVert_{\text{HS}} \leq \sqrt{\lVert HH^{*} \rVert_{\text{Op}}}\lVert H^{*} \rVert_{\text{HS}},
\end{align*}
where $\lVert\cdot \rVert_{\text{Op}}$ denotes the operator norm.
\end{enumerate}
\end{lemma}
\begin{proof}
According to the definition of the Hilbert-Schmidt operator in Appendix C of \cite{da2014stochastic}, we have $\lVert H \rVert^2_{\text{HS}} = \lVert H^{*} \rVert^2_{\text{HS}} = \sum^{N_d}_{k=1}\lVert H^{*}f_k \rVert^2_{\mathcal{H}_u} < \infty$, where $\lbrace f_k \rbrace^{N_d}_{k=1}$ denotes a complete orthonormal basis of $\mathbb{R}^{N_d}$.
Hence the operators $H$ and $H^{*}$ are both Hilbert-Schmidt.
Based on Proposition C.4 in \cite{da2014stochastic}, the operator $H^{*}H$ defined on $\mathcal{H}_u$ is of trace-class, and in particular a Hilbert-Schmidt operator, which proves the first conclusion.
According to Theorem VI.22 of \cite{reed2012methods}, $H^{*}H$ is compact.
Since $H^{*}H$ is also self-adjoint, the second conclusion follows immediately from the Hilbert-Schmidt theorem (Theorem VI.16 of \cite{reed2012methods}).
For notational convenience, we will use the symbol $\langle \cdot, \cdot \rangle$ to denote the canonical inner product $\langle \cdot, \cdot \rangle_{\mathbb{R}^{N_d}}$ defined on the space $\mathbb{R}^{N_d}$.
Writing $\lVert H^{*}H \rVert^2_{\text{HS}} = \text{Tr}(H^{*}HH^{*}H)$, we notice that, for all $x \in \mathbb{R}^{N_d}$,
\begin{align*}
\langle HH^{*}x, x\rangle_{\mathbb{R}^{N_d}} \leq \lVert x\rVert \lVert HH^{*}x\rVert \leq \lVert x\rVert \lVert HH^{*}\rVert_{\text{Op}} \lVert x\rVert = \lVert HH^{*}\rVert_{\text{Op}} \langle x, x\rangle.
\end{align*}
That is $HH^{*} \leq \lVert HH^{*}\rVert_{\text{Op}}\text{I}$, where $\text{I}$ denotes the identity operator.
For all $x \in \mathcal{H}_u$, it is clear that
\begin{align*}
\langle H^{*}HH^{*}Hx, x\rangle = \langle HH^{*}Hx, Hx\rangle \leq \lVert HH^{*}\rVert_{\text{Op}} \langle Hx, Hx\rangle.
\end{align*}
We derive that $H^{*}HH^{*}H \leq \lVert HH^{*}\rVert_{\text{Op}} H^{*}H$.
For the term $\text{Tr}(H^{*}HH^{*}H)$, we have
\begin{align*}
\text{Tr}(H^{*}HH^{*}H) \leq \lVert HH^{*} \rVert_{\text{Op}} \text{Tr}(H^{*}H) = \lVert HH^{*} \rVert_{\text{Op}}\lVert H \rVert^2_{\text{HS}}.
\end{align*}
Then we derive
\begin{align*}
\lVert H^{*}H \rVert_{\text{HS}} \leq \sqrt{\lVert HH^{*} \rVert_{\text{Op}}}\lVert H^{*} \rVert_{\text{HS}}.
\end{align*}
\end{proof}
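As a quick finite-dimensional sanity check of the bound in Lemma $\ref{lemma1}$, the following Python sketch (ours, not part of the paper; numpy is assumed) verifies the inequality for a random matrix stand-in for $H$, where the Hilbert-Schmidt norm reduces to the Frobenius norm:

```python
import numpy as np

# Finite-dimensional illustration of the bound in Lemma 1:
#   ||H*H||_HS <= sqrt(||HH*||_Op) * ||H*||_HS,
# with ||.||_HS the Frobenius norm and ||.||_Op the spectral norm.
rng = np.random.default_rng(0)
N_d, n = 5, 40                      # few observations, larger state space
H = rng.standard_normal((N_d, n))   # stand-in for the bounded operator H

hs_HstarH = np.linalg.norm(H.T @ H, "fro")   # ||H*H||_HS
op_HHstar = np.linalg.norm(H @ H.T, 2)       # ||HH*||_Op (largest singular value)
hs_Hstar = np.linalg.norm(H.T, "fro")        # ||H*||_HS

assert hs_HstarH <= np.sqrt(op_HHstar) * hs_Hstar + 1e-10
```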
\begin{remark}
\itshape
As shown in Lemma $\ref{lemma1}$, once the operator $H:\mathcal{H}_u \rightarrow \mathbb{R}^{N_d}$ is assumed to be a bounded linear operator, the conclusions hold without any additional assumptions.
However, if the range of $H$ changes to an infinite-dimensional space, the conclusions in Lemma $\ref{lemma1}$ still hold when the operator $H^{*}H$ is assumed to be a Hilbert-Schmidt operator, see \cite{engl1996regularization}.
\end{remark}
\textbf{Calculate $\Phi_v(v)$}:
Applying the formula $(\ref{PotenFun1})$ in Theorem $\ref{the:posterior}$, we obtain
\begin{align*}
\Phi_v(v) &= \int_{\mathbb{R}} \bigg(\frac{1}{2} \lVert \bm{d}-H\lambda v \rVert ^2_{\bm{\Gamma}_{\text{noise}}} \bigg)\nu^{\lambda}(d\lambda)+\text{Const} \\
&= \frac{\tau}{2} \bigg(\mathcal{C}_{\lambda} \lVert Hv\rVert^2 + (\lambda^{*})^2\lVert Hv\rVert^2 - 2 \langle\lambda^{*}Hv, \bm{d} \rangle \bigg)+\text{Const},
\end{align*}
where
\begin{align*}
\lambda^{*} = \mathbb{E}^{\nu^{\lambda}}[\lambda] = \int_{\mathbb{R}}\lambda\nu^{\lambda}(d\lambda), \quad
\mathcal{C}_{\lambda} = \mathbb{E}^{\nu^{\lambda}}[(\lambda-\lambda^{*})^2] =
\int_{\mathbb{R}} (\lambda-\lambda^{*})^2\nu^{\lambda}(d\lambda).
\end{align*}
We derive that
\begin{align*}
\frac{d\nu^v}{d\nu^v_0} \varpropto \exp \bigg(-\frac{\tau}{2} \bigg(\mathcal{C}_{\lambda} \lVert Hv\rVert^2 + (\lambda^{*})^2\lVert Hv\rVert^2 - 2\langle\lambda^{*}Hv, \bm{d}\rangle \bigg) \bigg).
\end{align*}
On the basis of Bayes' formula, the probability measure $\nu^v = \mathcal{N}(v^{*}, \mathcal{C}_v)$ is Gaussian with
\begin{align}\label{post:v}
\mathcal{C}^{-1}_v=\tau (\mathcal{C}_{\lambda}+(\lambda^{*})^2 )H^{*}H+\mathcal{C}^{-1}_0, \quad
v^{*}=\tau\mathcal{C}_{v}(\lambda^{*}H^{*}\bm{d}).
\end{align}
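In a finite-dimensional discretization, the Gaussian update $(\ref{post:v})$ reduces to a linear-algebra computation. The following Python sketch (the function name and dense-matrix setting are our assumptions, not the paper's) illustrates it:

```python
import numpy as np

# Finite-dimensional sketch of the Gaussian update for nu^v:
#   C_v^{-1} = tau * (C_lam + lam*^2) H^T H + C_0^{-1},
#   v*       = tau * C_v (lam* H^T d).
def update_v(H, d, C0_inv, tau, lam_star, C_lam):
    Cv_inv = tau * (C_lam + lam_star**2) * (H.T @ H) + C0_inv
    Cv = np.linalg.inv(Cv_inv)        # dense inverse: fine for a sketch
    v_star = tau * Cv @ (lam_star * H.T @ d)
    return v_star, Cv
```

For example, with $H$ the identity, unit prior precision, $\tau = 1$, $\lambda^{*} = 1$ and $\mathcal{C}_{\lambda} = 0$, the posterior mean is simply $\bm{d}/2$.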
\textbf{Calculate $\Phi_{\lambda}(\lambda)$}:
Similarly, we have
\begin{align*}
\Phi_{\lambda}(\lambda) = &\int_{\mathcal{H}_u}\frac{1}{2} \bigg( \lVert \bm{d}-H\lambda v\rVert ^2_{\bm{\Gamma}_{\text{noise}}} \bigg ) \nu^{v}(dv)+\text{Const} \\
= &\frac{\tau}{2}\int_{\mathcal{H}_u} \lVert \bm{d}-\lambda Hv^{*}\rVert ^2 + \lVert H(v-v^{*}) \rVert ^2 \lambda^2 \\
&- 2\lambda \langle \bm{d}-\lambda Hv^{*},H(v-v^{*}) \rangle \, \nu^v(dv)+\text{Const}.
\end{align*}
Then we derive that
\begin{align}\label{eq:tr1}\nonumber
\int_{\mathcal{H}_u} \lVert H(v-v^{*})\rVert^{2}\nu^{v}(dv)
= &\int_{\mathcal{H}_u} \bigg \langle H(v-v^{*}), H(v-v^{*}) \bigg \rangle\nu^{v}(dv) \nonumber \\
= &\int_{\mathcal{H}_u} \bigg \langle H\sum^{\infty}_{k=1}(v_k - v^{*}_k)\phi_k,
H\sum^{\infty}_{j=1}(v_j - v^{*}_j)\phi_j \bigg \rangle \nu^{v}(dv) \nonumber\\
= &\sum^{\infty}_{k=1}\sum^{\infty}_{j=1}\int_{\mathcal{H}_u} \bigg \langle H(v_k - v^{*}_k)\phi_k, H(v_j - v^{*}_j)\phi_j \bigg \rangle \nu^{v}(dv) \\
= &\sum^{\infty}_{k=1}\sum^{\infty}_{j=1} \int_{\mathcal{H}_u} (v_k - v^{*}_k)(v_j - v^{*}_j) \langle H^{*}H\phi_k, \phi_j \rangle \nu^{v}(dv) \nonumber.
\end{align}
Using the first conclusion of Lemma $\ref{lemma1}$, we have
\begin{align*}
\sum^{\infty}_{k=1}\sum^{\infty}_{j=1}\langle H^{*}H\phi_k, \phi_j \rangle
= \sum^{\infty}_{k=1}\sum^{\infty}_{j=1}\langle \xi_k\phi_k, \phi_j \rangle
= \sum^{\infty}_{k=1}\langle \xi_k\phi_k, \phi_k \rangle
= \sum^{\infty}_{k=1}\xi_k.
\end{align*}
Then equation $(\ref{eq:tr1})$ turns to
\begin{align*}
\int_{\mathcal{H}_u} \lVert H(v-v^{*})\rVert^{2}\nu^{v}(dv)
= &\sum^{\infty}_{k=1} \int_{\mathcal{H}_u} \xi_k(v_k - v^{*}_k)(v_k - v^{*}_k) \nu^{v}(dv)\\
= &\sum^{\infty}_{k=1} \langle \xi_k\phi_k, \mathcal{C}_v\phi_k \rangle = \sum^{\infty}_{k=1} \langle H^{*}H\phi_k, \mathcal{C}_v\phi_k \rangle \\
= &\sum^{\infty}_{k=1} \langle \mathcal{C}_vH^{*}H\phi_k, \phi_k \rangle = \text{Tr}(\mathcal{C}_{v}H^{*}H).
\end{align*}
In the last line, we use the property that operator $\mathcal{C}_v$ is self-adjoint.
According to Theorem VI.3 \cite{reed2012methods}, since $\mathcal{C}^{-1}_v$ is self-adjoint, we know that $\mathcal{C}_v$ is also self-adjoint.
Then the function $\Phi_{\lambda}(\lambda)$ can be written as
\begin{align*}
\Phi_{\lambda}(\lambda) = \frac{\tau}{2}\lVert \bm{d}-\lambda Hv^{*}\rVert^2+\frac{\tau}{2}\text{Tr}(\mathcal{C}_vH^{*}H)\lambda^2 + \text{Const},
\end{align*}
where
\begin{align*}
v^{*} = \mathbb{E}^{\nu^{v}}[v] = \int_{\mathcal{H}_u}v\nu^{v}(dv), \quad
\mathcal{C}_{v} = \int_{\mathcal{H}_u} (v - v^{*})\otimes (v - v^{*}) \nu^{v}(dv).
\end{align*}
This implies
\begin{align*}
\frac{d\nu^{\lambda}}{d\nu^{\lambda}_0} \varpropto \exp \bigg (-\frac{\tau}{2} \bigg(\text{Tr}(\mathcal{C}_vH^{*}H)+\lVert Hv^{*}\rVert^{2} \bigg)\lambda^2+\tau\lambda \langle v^{*}, H^{*}\bm{d}\rangle \bigg).
\end{align*}
Therefore, $\nu^{\lambda} = \mathcal{N}(\lambda^{*}, \mathcal{C}_{\lambda})$ is a Gaussian measure with
\begin{align}\label{post:lam}
\mathcal{C}^{-1}_{\lambda}=\tau \bigg (\text{Tr}(\mathcal{C}_vH^{*}H)+\lVert Hv^{*}\rVert^{2} \bigg )+\frac{1}{\sigma}, \quad
\lambda^{*}=\mathcal{C}_{\lambda}\bigg (\tau \langle v^{*}, H^{*}\bm{d}\rangle+\frac{1}{\sigma}\bar{\lambda}\bigg ).
\end{align}
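The update $(\ref{post:lam})$ is scalar once the trace term is available. A finite-dimensional Python sketch (our naming; dense matrices for illustration):

```python
import numpy as np

# Finite-dimensional sketch of the Gaussian update for nu^lambda:
#   C_lam^{-1} = tau * (Tr(C_v H^T H) + ||H v*||^2) + 1/sigma,
#   lam*       = C_lam * (tau * <v*, H^T d> + lam_bar / sigma).
def update_lambda(H, d, v_star, Cv, tau, lam_bar, sigma):
    trace_term = np.trace(Cv @ H.T @ H)
    C_lam = 1.0 / (tau * (trace_term + np.sum((H @ v_star) ** 2)) + 1.0 / sigma)
    lam_star = C_lam * (tau * (v_star @ H.T @ d) + lam_bar / sigma)
    return lam_star, C_lam
```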
We notice that the key point of calculating the posterior of $\lambda$ is to calculate $\text{Tr}(\mathcal{C}_vH^{*}H)$ efficiently.
Based on \cite{bui2012analysis}, we propose a theorem to transform the original form of $\text{Tr}(\mathcal{C}_vH^{*}H)$, which is
\begin{align}\label{eq:tr}
\begin{split}
\text{Tr} \bigg(\mathcal{C}_vH^{*}H \bigg) &= \text{Tr} \bigg( \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)H^{*}H+\mathcal{C}^{-1}_0 \bigg)^{-1}H^{*}H \bigg) \\
&= \text{Tr} \bigg(\mathcal{C}^{1/2}_0 \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id} \bigg)^{-1}\mathcal{C}^{1/2}_0H^{*}H \bigg),
\end{split}
\end{align}
into a form that can be computed conveniently.
The advantages of the form stated in the following theorem are discussed more precisely in Subsection $\ref{subsec2.6}$.
\begin{theorem}\label{the:caltr}
Let the operator $H^{*}H$ be a Hilbert-Schmidt operator defined on the Hilbert space $\mathcal{H}_u$. Then we have
\begin{align*}
\text{Tr} (\mathcal{C}_vH^{*}H) = \text{Tr} \bigg( \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id} \bigg)^{-1}\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0 \bigg) < \infty.
\end{align*}
\end{theorem}
\begin{proof}
Let us denote
\begin{align*}
\mathcal{A} := \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id} \bigg)^{-1}\negmedspace\mathcal{C}^{1/2}_0H^{*}H,
\quad \mathcal{B} := \mathcal{C}^{1/2}_0,
\end{align*}
then we have
\begin{align*}
\text{Tr} (\mathcal{C}_vH^{*}H ) = \text{Tr} (\mathcal{B}\mathcal{A}).
\end{align*}
We aim to prove $\text{Tr} (\mathcal{B}\mathcal{A}) = \text{Tr} (\mathcal{A}\mathcal{B})$, that is
\begin{align*}
\text{Tr} \bigg(\mathcal{C}_vH^{*}H \bigg)
= \text{Tr} \bigg( \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id} \bigg)^{-1}\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0 \bigg).
\end{align*}
According to Theorem VI.25(c) in \cite{reed2012methods}, we need to demonstrate that the operator $\mathcal{B}$ is bounded and $\mathcal{A}$ is of trace-class.
Since the operator $\mathcal{C}_0$ is assumed to be trace-class, we know that $\mathcal{B} = \mathcal{C}^{1/2}_0$ is a Hilbert-Schmidt operator.
This demonstrates that operator $\mathcal{B}$ is bounded.
Our task now is to prove that $\mathcal{A}$ is of trace-class.
Setting
\begin{align*}
\mathcal{P} = \mathcal{C}^{1/2}_0H^{*}H, \quad
\mathcal{Q} = \bigg(\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id} \bigg)^{-1},
\end{align*}
we know that $\mathcal{A} = \mathcal{Q}\mathcal{P}$.
According to Theorem VI.19(b) in \cite{reed2012methods}, we need to prove that $\mathcal{P}= \mathcal{C}^{1/2}_0H^{*}H$ is of trace-class, i.e., $\lVert \mathcal{P} \rVert_{\text{Tr}} < \infty$, and that $\mathcal{Q}$ is a bounded linear operator.
By the trace-class property of $\mathcal{C}_0$, it is clear that $\lVert \mathcal{C}^{1/2}_0\rVert_{\text{HS}} < \infty$.
Since $ H^{*}H$ is a Hilbert-Schmidt operator, we have
\begin{align}
\lVert \mathcal{C}^{1/2}_0H^{*}H\rVert_{\text{Tr}} \leqslant \lVert \mathcal{C}^{1/2}_0\rVert_{\text{HS}}\lVert H^{*}H\rVert_{\text{HS}} < \infty,
\end{align}
where we used the conclusion stated in Chapter VI \cite{reed2012methods}.
This claims that $\mathcal{P}= \mathcal{C}^{1/2}_0H^{*}H$ is a trace-class operator.
Then we need to illustrate that the operator $\mathcal{Q}$ is bounded.
We first show that $\mathcal{Q}^{-1}$ is injective; for any $u \in \mathcal{H}_u$,
\begin{align*}
\langle \mathcal{Q}^{-1}u, u \rangle_{\mathcal{H}_u} &= \bigg \langle (\tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0+\text{Id})u, u \bigg \rangle_{\mathcal{H}_u} \\
&= \tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2) \langle H\mathcal{C}^{1/2}_0u, H\mathcal{C}^{1/2}_0u \rangle_{\mathcal{H}_u} + \langle u, u \rangle_{\mathcal{H}_u} \\
&\geqslant \lVert u \rVert^{2}_{\mathcal{H}_u}.
\end{align*}
Hence, if $\mathcal{Q}^{-1}u = 0$, then $0 = \langle \mathcal{Q}^{-1}u, u\rangle_{\mathcal{H}_u} \geqslant \lVert u \rVert^{2}_{\mathcal{H}_u} \geqslant 0$, so $u = 0$, which indicates that the operator $\mathcal{Q}^{-1}$ is injective.
To illustrate that $\mathcal{Q}^{-1}$ is surjective, we need to show that $\text{Ran}(\mathcal{Q}^{-1})$ is closed, and $\text{Ran}(\mathcal{Q}^{-1})^{\perp} = 0$.
For all $y \in \overline{\text{Ran}(\mathcal{Q}^{-1})}$, there exists $\lbrace u_k \rbrace^{\infty}_{k=1} \subset \mathcal{H}_u$, such that $y = \lim_{k \rightarrow \infty}\mathcal{Q}^{-1}u_k$.
Since, by the estimate above and the Cauchy-Schwarz inequality,
\begin{align*}
\lVert u_k - u_m \rVert^{2}_{\mathcal{H}_u} &\leq \langle \mathcal{Q}^{-1}(u_k - u_{m}), u_k - u_{m} \rangle_{\mathcal{H}_u} \\
&\leq \lVert u_k - u_{m} \rVert_{\mathcal{H}_u}\lVert \mathcal{Q}^{-1}(u_k - u_{m}) \rVert_{\mathcal{H}_u},
\end{align*}
we obtain $\lVert u_k - u_m \rVert_{\mathcal{H}_u} \leq \lVert \mathcal{Q}^{-1}(u_k - u_m) \rVert_{\mathcal{H}_u} \rightarrow 0$ as $k, m \rightarrow \infty$, because $\lbrace \mathcal{Q}^{-1}u_k \rbrace^{\infty}_{k=1}$ converges and is therefore Cauchy.
Hence $\lbrace u_k \rbrace^{\infty}_{k=1}$ is a Cauchy sequence; its limit $u$ satisfies $\mathcal{Q}^{-1}u = y$, that is, $\text{Ran}(\mathcal{Q}^{-1})$ is closed.
For any $y \in \overline{\text{Ran}(\mathcal{Q}^{-1})}^{\perp}$, we have $\langle y, \mathcal{Q}^{-1}u \rangle = 0$ for all $u \in \mathcal{H}_u$.
Taking $u = y$, we obtain
\begin{align*}
\lVert y \rVert^{2}_{\mathcal{H}_u} \leq \langle \mathcal{Q}^{-1}y, y \rangle_{\mathcal{H}_u} = 0,
\end{align*}
which indicates $\overline{\text{Ran}(\mathcal{Q}^{-1})}^{\perp} = \lbrace 0 \rbrace$.
Then we derive that $\mathcal{Q}^{-1}$ is surjective.
Since $\mathcal{Q}^{-1}$ is bounded, according to the inverse mapping theorem (Theorem III.11 in \cite{reed2012methods}), operator $\mathcal{Q}$ is a bounded linear operator defined on $\mathcal{H}_u$.
That is, operator $\mathcal{A}$ is of trace-class.
Here we complete the proof.
\end{proof}
It is worth mentioning that the infinite-dimensional formulation provides a general framework for conducting appropriate discretizations.
Combining Theorem $\ref{the:posterior}$ and the discussion in this section, we can provide an iterative algorithm to calculate the posterior measures explicitly, see Algorithm $\ref{alg A}$.
In order to keep dimensional independence, the discretization of Algorithm $\ref{alg A}$ should be done carefully, e.g., the adjoint operator $H^{*}$ is usually not trivially equal to the transpose of the discrete approximation of $H$, see \cite{bui2012analysis, jia2021stein}.
\begin{algorithm}
\caption{The NCP-iMFVI algorithm}
\label{alg A}
\begin{algorithmic}[1]
\STATE{Initialize $\lambda_0 = \bar{\lambda}$ and $\mathcal{C}_{\lambda_0} = 0$, specify the tolerance $tol$, and set $k=1$;}
\REPEAT
\STATE{Update $\nu^v_k = \mathcal{N}(v_k, \mathcal{C}_{v_k})$, \\
where $\mathcal{C}^{-1}_{v_k} = \tau (\mathcal{C}_{\lambda_{k-1}}+\lambda^2_{k-1})H^{*}H + \mathcal{C}^{-1}_{0}, \quad v_k = \tau \mathcal{C}_{v_k}\lambda_{k-1}H^{*}\bm{d}$;}
\STATE{Calculate $\text{Tr}(\mathcal{C}_{v_k}H^{*}H)$;}
\STATE{Update $\nu^{\lambda}_k = \mathcal{N}(\lambda_k, \mathcal{C}_{\lambda_k})$, \\
where $\mathcal{C}^{-1}_{\lambda_k} = \tau \bigg(\text{Tr}(\mathcal{C}_{v_k}H^{*}H)+\lVert Hv_k\rVert^2 \bigg)+\frac{1}{\sigma},
\quad \lambda_k=\mathcal{C}_{\lambda_k}\bigg(\tau \bm{d}^{T}Hv_k + \frac{\bar{\lambda}}{\sigma}\bigg)$;}
\UNTIL{$\max(\lVert \lambda_kv_k - \lambda_{k-1}v_{k-1} \rVert / \lVert \lambda_{k}v_{k}\rVert,
\lVert \lambda_{k} - \lambda_{k-1} \rVert / \lVert \lambda_{k-1}\rVert) \leqslant tol $;}
\STATE{Return the approximate probability measure $\nu(dv, d\lambda) = \nu^v_k(dv)\nu^{\lambda}_k(d\lambda)$.}
\end{algorithmic}
\end{algorithm}
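A compact dense-matrix sketch of the whole iteration might look as follows. All names are ours, the hyper-parameter is initialized at the prior mean with zero variance (a choice of ours), and the trace is computed exactly here rather than with a low-rank approximation, which is what one would use at scale:

```python
import numpy as np

# Dense-matrix sketch of the NCP-iMFVI iteration (illustrative, not the
# paper's PDE-based implementation).
def ncp_imfvi(H, d, C0_inv, tau, lam_bar, sigma, tol=1e-8, max_iter=100):
    lam, C_lam = lam_bar, 0.0            # lam_0 = prior mean, zero variance
    v = np.zeros(C0_inv.shape[0])
    Cv = np.linalg.inv(C0_inv)
    for _ in range(max_iter):
        # v-step: nu^v_k = N(v_k, C_{v_k})
        Cv = np.linalg.inv(tau * (C_lam + lam**2) * (H.T @ H) + C0_inv)
        v_new = tau * Cv @ (lam * H.T @ d)
        # lambda-step: nu^lambda_k = N(lam_k, C_{lam_k})
        tr = np.trace(Cv @ H.T @ H)      # exact trace; low-rank in practice
        C_lam = 1.0 / (tau * (tr + np.sum((H @ v_new) ** 2)) + 1.0 / sigma)
        lam_new = C_lam * (tau * (v_new @ H.T @ d) + lam_bar / sigma)
        # stopping criterion on u = lam * v and on lam
        converged = (
            np.linalg.norm(lam_new * v_new - lam * v)
            <= tol * np.linalg.norm(lam_new * v_new)
            and abs(lam_new - lam) <= tol * abs(lam)
        )
        lam, v = lam_new, v_new
        if converged:
            break
    return v, Cv, lam, C_lam
```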
\subsection{Some numerical details}\label{subsec2.6}
According to Theorem $\ref{the:caltr}$, we have a form of $\text{Tr} (\mathcal{C}_vH^{*}H)$, which can be calculated more conveniently.
Based on \cite{bui2013computational}, with the help of a low rank approximation of operators, we provide an efficient method for calculating the trace of $\mathcal{C}_vH^{*}H$.
We consider a finite-dimensional subspace $V_h$ of $L^2(\Omega)$ originating from a finite-element discretization with continuous Lagrange basis functions $\lbrace \phi_j \rbrace^{n}_{j=1}$, which correspond to the nodal points $\lbrace \bm{x}_j \rbrace^{n}_{j=1}$, such that
\begin{align*}
\phi_j(\bm{x}_i) = \delta_{ij}, \quad \text{for} \ i, j \in \lbrace 1, \cdots, n\rbrace,
\end{align*}
where $\Omega$ is the domain of some specific problem.
For all $m \in L^2(\Omega)$, we define the approximation $m_h = \sum^{n}_{j=1}m_j\phi_j \in V_h$.
Then, we introduce the mass matrix $\bm{M}$ similar to \cite{bui2012analysis}.
For any $m_1, m_2 \in L^2(\Omega)$, observe that $\langle m_1, m_2 \rangle_{L^2(\Omega)} \approx \langle m_{1h}, m_{2h} \rangle = \langle \bm{m}_1, \bm{m}_2 \rangle_{\bm{M}} := \bm{m}^T_1\bm{M}\bm{m}_2$, where $\bm{M}$ is defined by
\begin{align*}
\bm{M}_{ij} = \int_{\Omega}\phi_i(\bm{x})\phi_j(\bm{x})d\bm{x}.
\end{align*}
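For a uniform mesh of $[0,1]$ with linear Lagrange elements, the mass matrix can be assembled from the standard local element matrix $(h/6)\cdot[[2,1],[1,2]]$. A Python sketch (the function name is ours):

```python
import numpy as np

# Assembly of the 1D mass matrix M_ij = \int_0^1 phi_i(x) phi_j(x) dx for
# continuous piecewise-linear Lagrange basis functions on a uniform mesh.
def mass_matrix_1d(n_nodes):
    h = 1.0 / (n_nodes - 1)                       # uniform element length
    # local element mass matrix for linear elements: (h/6) [[2, 1], [1, 2]]
    local = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    M = np.zeros((n_nodes, n_nodes))
    for e in range(n_nodes - 1):                  # element [x_e, x_{e+1}]
        M[e:e + 2, e:e + 2] += local
    return M
```

Since $\sum_i \phi_i \equiv 1$, the entries of $\bm{M}$ sum to the measure of the domain, which gives a quick consistency check.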
For simplicity of notation, we shall use the boldface symbol to denote the matrix representation of the operators.
Define the constant $\rho := \tau(\mathcal{C}_{\lambda}+(\lambda^{*})^2)$.
Based on Theorem $\ref{the:caltr}$, setting
\begin{align*}
\bm{\mathcal{G}} := \bm{\mathcal{C}}^{1/2}_0\bm{H^{*}H\mathcal{C}}^{1/2}_0,
\end{align*}
we have
\begin{align*}
\text{Tr} \bigg(\bm{\mathcal{C}_vH^{*}H} \bigg) = \text{Tr} \bigg( (\rho\bm{\mathcal{G}}+\bm{I})^{-1}\bm{\mathcal{G}} \bigg).
\end{align*}
Let $\lbrace \xi_i, \bm{v}_i\rbrace^{n}_{i=1}$ be the eigenpairs of $\bm{\mathcal{G}}$, and $\bm{\Xi}=\diag(\xi_1, \cdots, \xi_n) \in \mathbb{R}^{n\times n}$, then denote $\bm{V} \in \mathbb{R}^{n\times n}$ such that its columns are the eigenvectors $\bm{v}_i$ of $\bm{\mathcal{G}}$.
Now we replace $\bm{\mathcal{G}}$ by its spectral decomposition
\begin{align}\label{eq:tr calbm}
\text{Tr} \bigg(\bm{\mathcal{C}_vH^{*}H} \bigg) = \text{Tr} \bigg((\rho \bm{V\Xi V^{\diamondsuit}+I)}^{-1}\bm{V\Xi V^{\diamondsuit}} \bigg),
\end{align}
where $\bm{V}^{\diamondsuit}$ is the adjoint of $\bm{V}$ defined as $\bm{V}^{\diamondsuit} = \bm{V}^T\bm{M}$, see \cite{bui2013computational}, and $\bm{M}$ is the mass matrix.
Since the eigenvalues of $\bm{\mathcal{G}}$ decay rapidly for many practical inverse problems \cite{bui2013computational}, it is reasonable to construct a low-rank approximation of $\bm{\mathcal{G}}$ by computing only the $r$ largest eigenvalues
\begin{align*}
\bm{\mathcal{G}} = \bm{V_r\Xi_r V^{\diamondsuit}_r} + \mathcal{O} \bigg(\sum^{n}_{i=r+1}\xi_i \bigg),
\end{align*}
where $\bm{V_r} \in \mathbb{R}^{n\times r}$ contains $r$ eigenvectors of $\bm{\mathcal{G}}$ corresponding to the $r$ largest eigenvalues, and $\bm{\Xi_r} = \diag(\xi_1, \cdots, \xi_r) \in \mathbb{R}^{r\times r}$. We can use the Sherman-Morrison-Woodbury formula to simplify ($\ref{eq:tr calbm}$), then we have
\begin{align*}
\bigg(\rho \bm{V\Xi V^{\diamondsuit}+I} \bigg)^{-1} = \bm{I - V_rD_rV^{\diamondsuit}_r} + \mathcal{O} \bigg(\sum^{n}_{i=r+1}\frac{\rho\xi_i}{\rho\xi_i+1} \bigg),
\end{align*}
where
\begin{align*}
\bm{D_r} := \diag(d_1, \cdots, d_r) = \diag(\rho\xi_1/(\rho\xi_1+1), \cdots, \rho\xi_r/(\rho\xi_r+1)).
\end{align*}
Thus, we have
\begin{align}
\begin{split}
\text{Tr} \bigg(\bm{\mathcal{C}_vH^{*}H} \bigg) &= \frac{1}{\rho}\text{Tr} \bigg((\rho \bm{V\Xi V^{\diamondsuit} + I)}^{-1}\rho \bm{V\Xi V^{\diamondsuit}} \bigg) \\
&= \frac{1}{\rho}\text{Tr} \bigg((\rho \bm{V\Xi V^{\diamondsuit} + I)}^{-1}(\rho \bm{V\Xi V^{\diamondsuit} + I - I)} \bigg) \\
&= \frac{1}{\rho}\text{Tr} \bigg(\bm{I} - (\rho \bm{V\Xi V^{\diamondsuit} + I)}^{-1} \bigg) \\
&= \frac{1}{\rho}\text{Tr} \bigg(\bm{I - (I - V_rD_rV^{\diamondsuit}_r}
+ \mathcal{O} (\sum^{n}_{i=r+1}\frac{\rho\xi_i}{\rho\xi_i+1})) \bigg) \\
&= \frac{1}{\rho}\text{Tr} \bigg(\bm{V_rD_rV^{\diamondsuit}_r} \bigg)
- \mathcal{O} \bigg(\sum^{n}_{i=r+1}\frac{\xi_i}{\rho\xi_i+1} \bigg).
\end{split}
\end{align}
In order to obtain an accurate approximation of $\text{Tr} (\bm{\mathcal{C}_vH^{*}H})$, following the strategy in \cite{bui2013computational}, we have these two options:
\begin{itemize}
\item If $\rho < 1$, when the eigenvalues $\xi_i$ are smaller than $1/(1-\rho)$, the tail terms can be neglected.
\item If $\rho > 1$, we notice that $\frac{\xi_i}{\rho\xi_i+1} < 1$ naturally. Then we can neglect the tail terms when the eigenvalues $\xi_i$ are smaller than $1$.
\end{itemize}
Neglecting the tail terms, we have
\begin{align}
\text{Tr} \bigg(\bm{\mathcal{C}_vH^{*}H} \bigg) \approx \frac{1}{\rho}\sum^{r}_{i=1}d_i = \sum^{r}_{i=1}\frac{\xi_i}{\rho\xi_i+1}.
\end{align}
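The truncated sum is straightforward to implement once the eigenvalues of $\bm{\mathcal{G}}$ are available. The following Python sketch (our naming; dense matrices purely for illustration) computes $\sum_{i\leq r}\xi_i/(\rho\xi_i+1)$, and with no truncation it reproduces $\text{Tr}(\bm{\mathcal{C}_vH^{*}H})$ exactly:

```python
import numpy as np

# Truncated trace formula: the r largest eigenvalues xi_i of
# G = C0^{1/2} H^T H C0^{1/2} give
#   Tr(C_v H^T H) ~= sum_{i <= r} xi_i / (rho * xi_i + 1).
def lowrank_trace(H, C0_sqrt, rho, r):
    G = C0_sqrt @ H.T @ H @ C0_sqrt
    xi = np.sort(np.linalg.eigvalsh((G + G.T) / 2.0))[::-1]  # descending
    return float(np.sum(xi[:r] / (rho * xi[:r] + 1.0)))
```

When $H$ has only $N_d$ rows, $\bm{\mathcal{G}}$ has rank at most $N_d$, so the truncation at $r = N_d$ is already exact.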
\begin{remark}
\itshape
Usually, it is time-consuming to calculate the eigenvalues directly if the eigenvalue problem $\bm{H^{*}Hv}_i = \xi_i \bm{\mathcal{C}}^{-1}_0\bm{v}_i$ is of large scale.
The matrix $\bm{H^{*}H} \in \mathbb{R}^{n \times n}$ is symmetric positive semi-definite, $\bm{\mathcal{C}}^{-1}_0 \in \mathbb{R}^{n \times n}$ is symmetric positive definite, and $\lbrace \xi_i, \bm{v}_i \rbrace^{n}_{i=1}$ is the eigensystem.
Since $\bm{H^{*}H}$ is a large dense matrix, it is hard to store such a matrix in our computational procedure.
In order to calculate the eigenvalues, we employ a randomized SVD algorithm, the double-pass algorithm \cite{saibaba2016randomized}, in which $\bm{H^{*}H}$ only operates on random vectors, saving computational resources without forming $\bm{H^{*}H}$ explicitly.
This method will significantly reduce the computational cost required for eigenvalue decomposition.
Under this circumstance, the computational procedure only requires $\mathcal{O}(k)$ matrix vector product, where $k$ denotes the number of eigenvalues we need to calculate.
\end{remark}
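A minimal matrix-free sketch of a double-pass randomized eigensolver in the spirit of the cited algorithm might look as follows (names are ours; `apply_A` is a black-box routine applying the symmetric operator to a block of vectors, so the matrix itself is never formed or stored):

```python
import numpy as np

# "Double pass" randomized eigensolver sketch for a symmetric positive
# semi-definite operator accessed only through matrix-vector products.
def double_pass_eigs(apply_A, n, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + oversample))
    Y = apply_A(Omega)                  # first pass: sample the range of A
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for that sample
    T = Q.T @ apply_A(Q)                # second pass: project A onto the basis
    w, S = np.linalg.eigh((T + T.T) / 2.0)
    idx = np.argsort(w)[::-1][:k]       # keep the k largest eigenvalues
    return w[idx], Q @ S[:, idx]
```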
\section{Numerical Examples}\label{sec3}
We now present numerical simulations supporting our results in Section $\ref{sec2}$.
Here we apply our general theory to three inverse problems, including two linear problems and one non-linear problem.
In Subsection $\ref{subsec3.1}$, we consider a simple smooth model with Gaussian noise and compare the computational results with a classical sampling method, the MCMC method.
In Subsection $\ref{subsec3.2}$, we solve a large-scale inverse source problem of the Helmholtz equation.
In Subsection $\ref{subsec3.3}$, we work with a non-linear inverse problem of the Darcy-flow equation.
\subsection{The simple elliptic equation with Gaussian noise}\label{subsec3.1}
\subsubsection{Basic settings}\label{subsec3.1.1}
Consider an inverse source problem of the elliptic equation
\begin{align}\label{prob1}
\begin{split}
-\alpha \Delta w + w &= u \quad \text{in}\ \Omega, \\
w &= 0 \quad \text{on}\ \partial \Omega,
\end{split}
\end{align}
where $\Omega = [0, 1] \subset \mathbb{R}$ and $\alpha > 0$ is a constant.
The forward operator is defined as follows:
\begin{align}
Hu = (w(\bm{x}_1), w(\bm{x}_2), \cdots, w(\bm{x}_{N_d}))^T,
\end{align}
where $u \in \mathcal{H}_u := L^2(\Omega)$, $w$ denotes the solution to $(\ref{prob1})$, and $\bm{x}_i \in \Omega$ for $i = 1, \cdots, N_d$.
With these notations, the problem can be written abstractly as:
\begin{align}
\bm{d} = Hu + \bm{\epsilon},
\end{align}
where $\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}})$ is the random Gaussian noise, $\bm{\Gamma}_{\text{noise}} = \tau^{-1}\textbf{I}$, and $\tau^{-1} = (0.05\max(Hu))^2$.
In our implementations, the measurement points $\lbrace \bm{x}_i \rbrace^{N_d}_{i=1}$ are taken at the coordinates $\lbrace \frac{i}{20} \rbrace^{20}_{i = 1}$.
To avoid the inverse crime \cite{kaipio2006statistical}, we discretize the elliptic equation by the finite element method on a regular mesh (the grid points are uniformly distributed on the domain $\Omega$) with the number of grid points being equal to $1 \times 10^4$.
In our experiments, the prior measure of $u$ is a Gaussian probability measure $\mu^{u, \lambda}_0$ with mean zero and covariance $\lambda^2\mathcal{C}_0$.
The prior measure of the hyper-parameter $\lambda$ is a one-dimensional Gaussian probability measure $\mu^{\lambda}_0$, unless otherwise mentioned, with mean $\bar{\lambda} = \sqrt{10}$ and variance $\sigma = 25$.
For clarity, we list the specific choices for some parameters introduced in this subsection as follows:
\begin{itemize}
\item Let the domain $\Omega$ be the interval $[0, 1]$ with $\partial \Omega = \lbrace 0, 1 \rbrace$. The available data are assumed to be $\lbrace w(\bm{x}_i) \,|\, i = 1, 2, \cdots, 20 \rbrace$.
\item We assume that the data are produced from the underlying true signal $u^{\dagger}(x) = 10 \cdot (\cos 4\pi x+1)$.
\item The operator $\mathcal{C}_0$ is given by $\mathcal{C}_0 = (\text{I} - \alpha \Delta)^{-2},$ where $\alpha = 0.05$ is a fixed constant. Here the Laplace operator is defined on $\Omega$ with zero Neumann boundary condition.
\item In order to avoid the inverse crime \cite{kaipio2006statistical}, the data are generated on a fine mesh with the number of grid points equal to $10^4$, and we use meshes of different sizes $n = \lbrace 100, 200, 500, 700, 900 \rbrace$ in the inversion stage.
\end{itemize}
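To make the setting concrete, the following Python sketch (ours; a finite-difference discretization is used instead of the paper's finite elements, purely for brevity) builds the forward map of $(\ref{prob1})$ and generates noisy synthetic data:

```python
import numpy as np

# Finite-difference stand-in for the forward map: solve
#   -alpha * w'' + w = u on [0, 1], w(0) = w(1) = 0,
# then observe w at the measurement points {i/20, i = 1..20}.
def forward_map(u, alpha=0.05, x_obs=None):
    n = u.size                              # grid includes both boundary nodes
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    # discretization of (-alpha * d^2/dx^2 + I) on the interior nodes
    main = np.full(n - 2, 2.0 * alpha / h**2 + 1.0)
    off = np.full(n - 3, -alpha / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    w = np.zeros(n)
    w[1:-1] = np.linalg.solve(A, u[1:-1])   # homogeneous Dirichlet BC
    if x_obs is None:
        x_obs = np.arange(1, 21) / 20.0     # 20 uniform observation points
    return np.interp(x_obs, x, w)

# synthetic data with 5% relative Gaussian noise, as in the experiments
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 1000)
u_true = 10.0 * (np.cos(4.0 * np.pi * x) + 1.0)
Hu = forward_map(u_true)
d = Hu + 0.05 * np.max(Hu) * rng.standard_normal(Hu.size)
```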
To illustrate the effectiveness of NCP-iMFVI, we compare it with the non-centered pCN within Gibbs sampling method proposed in \cite{chen2018dimension}.
In the numerical experiment, we intend to compare the posterior covariance of $u$ obtained by both algorithms, thus showing that posterior distributions are very similar.
We assume that the hyper-parameter $\lambda$ defines the map $u = \lambda v$, with $v \sim \mu^v_0$ and $\lambda \sim \mu^{\lambda}_0$.
The likelihood thus depends on both $\lambda$ and $v$, and the Bayes' formula takes the form:
\begin{align*}
\frac{d\mu}{d\mu_0}(v, \lambda) \varpropto \exp (-\Phi(v, \lambda)).
\end{align*}
Because the Gibbs sampling method is a widely used algorithm, we will not discuss it in this subsection; the method is provided as pseudocode in the Appendix.
\subsubsection{Numerical results}
In this subsection, we compare the computational results of NCP-iMFVI and non-centered Gibbs sampler methods.
Here we briefly discuss the computational cost of the methods in this subsection.
It is clear that the sampling method is computationally expensive.
In \cite{agapiou2014analysis, jin2010hierarchical}, the iteration numbers of the sampling method are chosen as $10^4$ and $5\times 10^5$, respectively.
Thus the maximum iteration number $N_{\max}$ is chosen on the order of $2 \times10^4$, and we need to generate $4\times 10^4$ samples for sampling the parameter $v$ and the hyper-parameter $\lambda$.
As for the NCP-iMFVI method, calculating the posterior measure of $v$ in each iteration step requires solving one adjoint PDE (corresponding to applying $H^{*}$) and $2N_{ite}$ PDEs (corresponding to computing $\mathcal{C}_{v_k}$), with $N_{ite}$ being the maximum number of inner iterations.
In the current settings, we assume that $N_{ite} = 30$ to ensure computational accuracy.
Thus we need to solve $61$ PDEs to calculate the mean function of the posterior measure of $v$.
Next, based on Subsection $\ref{subsec2.5}$, it is unavoidable to calculate the eigenvalues and eigenvectors when we calculate $\text{Tr}(\mathcal{C}^{(k)}_vH^{*}H)$.
In order to obtain an accurate approximation of the trace, we use the strategy stated in Subsection $\ref{subsec2.6}$.
That is, if $\rho^k < 1$, the tail terms can be neglected when eigenvalue $\xi_k < 1/(1-\rho^k)$; if $\rho^k > 1$, the tail terms can be neglected when eigenvalue $\xi_k < 1$.
Under this prerequisite, in sub-figure (c) of Figure $\ref{fig:Error}$, we see that the horizontal red dashed line $\xi = 1$ shows the reference values for the truncation of the eigenvalues, and the number of the corresponding eigenvalues is less than $10$, which indicates that the maximum number of eigenvalues $N_{eig}$ is taken as $10$.
By taking the double-pass algorithm in \cite{villa2021hippylib}, for calculating each eigenvalue, it is required to solve two forward problems and two adjoint problems.
As a result, we need to solve $4N_{eig}=40$ PDEs to calculate the eigenvalues.
At last, for solving the posterior measure of hyper-parameter $\lambda$, we need to calculate $2$ forward PDEs.
Each iteration step is required to solve $2N_{ite} + 4N_{eig} + 3 = 103$ PDEs.
Moreover, the VI method converges within $15$ steps in our experiments, thus we choose the maximum number of outer iterations to be $15$.
In summary, we need to calculate at most $1545$ PDEs during the iterative procedure.
On the other hand, for the non-centered Gibbs sampler method, it is required to calculate $4 \times 10^4$ PDEs.
We see that the computational cost of the NCP-iMFVI method is much less than the cost of the non-centered Gibbs sampler method.
In sub-figure (a) of Figure $\ref{fig:Error}$, the estimated function of $u$ obtained by the NCP-iMFVI method, non-centered Gibbs sampler method, and the background truth are drawn in blue solid line, orange dotted line and green dashed line, respectively.
We see no significant difference between the curves obtained by these two methods and the background truth, which means both methods provide a reliable estimate.
Before we provide the detailed comparisons, the trace plot of the non-centered Gibbs sampler method is our concern, as shown in sub-figure (b) of Figure $\ref{fig:Error}$.
We see that the whole sampling procedure explores the entire sample space completely.
The hyper-parameter $\lambda$ given by NCP-iMFVI provides a reliable estimate of the parameter in the covariance of the prior measure.
We show the estimated posterior mean and covariance obtained by each discrete level $ n = \lbrace 100, 300, 500, 700, 900 \rbrace$ in Table $\ref{table:lambda}$.
The posterior means in all discretized dimensions are around $0.91$.
The posterior variances are stable as the discretization level increases, and the posterior densities become increasingly similar to one another.
As a result, this illustrates that the posterior density of $\lambda$ is discretization-invariant.
Moreover, we care about the posterior covariance of the parameter $u$ given by both algorithms, as shown in sub-figures (a) and (b) of Figure $\ref{fig:Covariance}$, respectively.
We show that the covariance obtained by NCP-iMFVI is similar to that obtained by the non-centered Gibbs sampler method.
In both sub-figures, except for the four corners and the diagonal part, the other regions are much flatter and show no significant trend.
In sub-figures (c) and (d) of Figure $\ref{fig:Covariance}$, we show the posterior measures of $\lambda$ given by the NCP-iMFVI and non-centered Gibbs sampler method, respectively.
We see that for both parameters $u$ and $\lambda$, the NCP-iMFVI method provides reliable estimates.
Furthermore, we provide a detailed comparison of the variance and covariances functions in Figure $\ref{fig:Variance}$.
In all the sub-figures of Figure $\ref{fig:Variance}$, the variance and covariance functions obtained by NCP-iMFVI and the non-centered Gibbs sampler method are drawn in blue solid lines and orange dashed lines, respectively.
In sub-figure (a) of Figure $\ref{fig:Variance}$, we show the variance function calculated on all the mesh points with $n = 900$.
In sub-figures (b) and (c) of Figure $\ref{fig:Variance}$, we show the covariance functions calculated on the pairs of points $\lbrace (x_i, x_{i+50})\rbrace^{n-50}_{i=1}$, and $\lbrace (x_i, x_{i+100})\rbrace^{n-100}_{i=1}$, respectively.
The functions obtained by the NCP-iMFVI are visually similar to those provided by the Gibbs sampler method.
These figures confirm that the estimated posterior measure of $u$ given by NCP-iMFVI is as good as that given by the Gibbs sampler method.
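For reference, the benchmark sampler alternates two Gaussian conditionals, $v \mid \lambda, \bm{d}$ and $\lambda \mid v, \bm{d}$, for the linear model $\bm{d} = \lambda H v + \bm{\epsilon}$. The following is a toy finite-dimensional sketch of such a scheme; the matrix $H$, the problem sizes, and the hyper-parameters are illustrative placeholders, not the paper's discretization:

```python
import numpy as np

# Toy Gibbs sampler for the hierarchical linear model d = lambda * H v + eps.
# All sizes, H, and hyper-parameters below are illustrative, not the paper's.
rng = np.random.default_rng(0)
n, m = 20, 15
H = rng.standard_normal((m, n))        # toy forward map
tau = 100.0                            # noise precision
lam_bar, sig2 = 1.0, 25.0              # hyper-prior lambda ~ N(lam_bar, sig2)

v_true = rng.standard_normal(n)
lam_true = 0.9
d = lam_true * H @ v_true + rng.standard_normal(m) / np.sqrt(tau)

lam, lam_samples = 1.0, []
for _ in range(2000):
    # v | lam, d: Gaussian with precision lam^2 * tau * H^T H + C0^{-1} (C0 = I here)
    cov = np.linalg.inv(lam**2 * tau * H.T @ H + np.eye(n))
    v = rng.multivariate_normal(cov @ (lam * tau * H.T @ d), cov)
    # lam | v, d: one-dimensional Gaussian conditional
    hv = H @ v
    prec = tau * hv @ hv + 1.0 / sig2
    lam = (tau * hv @ d + lam_bar / sig2) / prec + rng.standard_normal() / np.sqrt(prec)
    lam_samples.append(lam)

post_mean_lam = float(np.mean(lam_samples[500:]))   # discard burn-in
```

With the seed fixed as above, the posterior mean of $\lambda$ concentrates near the true scale of the product $\lambda v$.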
To give more detailed comparisons, we present the relative errors of the estimated means in the $L^2$-norm of the NCP-iMFVI method in sub-figure (a) of Figure $\ref{fig:inde}$. The relative error is defined as follows:
\begin{align}
\text{relative error} = \lVert u - u^{\dagger} \rVert^2 / \lVert u^{\dagger} \rVert^2,
\end{align}
where $u$ is the estimated function generated by NCP-iMFVI and $u^{\dagger}$ is the true source function.
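On a uniform mesh, this relative error can be evaluated with a simple discrete approximation of the $L^2$-norm; a minimal sketch, in which the mesh and the constant-factor perturbation are purely illustrative:

```python
import numpy as np

def relative_error(u, u_true, h):
    # squared discrete L^2 norms with uniform quadrature weight h
    return (np.sum((u - u_true) ** 2) * h) / (np.sum(u_true ** 2) * h)

x = np.linspace(0.0, 1.0, 101)
u_true = np.sin(np.pi * x)
u_est = 1.1 * u_true                  # an over-estimate by a constant factor
err = relative_error(u_est, u_true, h=x[1] - x[0])
# err is (0.1)^2 = 0.01 up to rounding
```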
Here we compare three different discretized dimensions, $100$, $500$, and $900$, drawn as a blue solid line with stars, an orange solid line with circles, and a green solid line with triangles, respectively.
The relative error decreases to $10\%$ or less within $5$ steps at each discrete level, and the convergence rate is independent of the discretized dimension.
At last, we illustrate the mesh independence of the NCP-iMFVI method, as expected for the ``Bayesianize-then-discretize'' approach.
We define the step norm in the $L^2$-norm as follows:
\begin{align*}
\text{the k-th step norm} = \lVert u_{k+1} - u_{k} \rVert^2 / \lVert u_{k} \rVert^2.
\end{align*}
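The step norms used for the mesh-independence study can be computed directly from the stored iterates; a minimal sketch under the same uniform-mesh assumption (the toy iterates are illustrative):

```python
import numpy as np

def step_norms(iterates, h):
    # k-th step norm: ||u_{k+1} - u_k||^2 / ||u_k||^2 in the discrete L^2 sense
    return [
        (np.sum((u1 - u0) ** 2) * h) / (np.sum(u0 ** 2) * h)
        for u0, u1 in zip(iterates[:-1], iterates[1:])
    ]

# three toy "iterates" on a 10-point uniform mesh
us = [np.ones(10), 2.0 * np.ones(10), 2.0 * np.ones(10)]
norms = step_norms(us, h=0.1)
```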
In sub-figure (b) of Figure $\ref{fig:inde}$, we draw the step norms computed by the ``Discretize-then-Bayesianize'' approach with different discretized dimensions; we can see that this approach can hardly preserve the infinite-dimensional nature of the problem.
Readers can find more details about this approach in \cite{jin2010hierarchical}.
We can see that the step norms decay rapidly when the dimension grows.
This indicates that the convergence speed depends highly on the discretized dimension.
On the contrary, the logarithm curves of step norms under the NCP-iMFVI method are almost the same for different discretizations, as shown in sub-figure (c) of Figure $\ref{fig:inde}$.
Thus we say that the convergence speed is not affected by discretized dimensions, demonstrating that the NCP-iMFVI method has mesh independence properties.
\begin{figure}
\centering
\subfloat[Comparison]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figscompare.eps}}
\subfloat[Trace of non-centered Gibbs]{
\includegraphics[ keepaspectratio=true, width=0.32\textwidth, clip=true]{figsTrace.eps}}
\subfloat[Eigenvalues (Logarithm)]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figseigenvalues.eps}}
\caption{\emph{\small (a): The comparison of the estimated function of $u$ obtained by NCP-iMFVI, non-centered Gibbs sampler method and the background truth of $u$, respectively; (b): The trace plot of non-centered Gibbs sampler method under the same mesh size of $900$ with $\sigma = 25$; (c): Logarithm of the eigenvalues of the operator $\mathcal{C}^{1/2}_0H^{*}H\mathcal{C}^{1/2}_0$. The horizontal red dashed line $\xi = 1$ shows the reference values for the truncation of the eigenvalues.}}
\label{fig:Error}
\end{figure}
\begin{table}
\centering
\caption{\emph{\small The posterior mean and variance of $\lambda$ under different discrete meshes.}} \label{table:lambda}
\begin{tabular}{c|ccccc}
\hline $\text{Mesh level}$ & $100$ & $300$ & $500$ & $700$ & $900$ \\
\hline $\text{Mean}$ & $0.910900$ & $0.910361$ & $0.909650$ & $0.909389$ & $0.909918$ \\
\hline $\text{Variance} (\times 10^{-4})$ & $1.412$ & $1.720$ & $1.619$ & $1.722$ & $1.714$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\subfloat[NCP-iMFVI]{
\includegraphics[ keepaspectratio=true, width=0.40\textwidth, clip=true]{figsCovarianceSamp.eps}}
\subfloat[non-centered Gibbs]{
\includegraphics[ keepaspectratio=true, width=0.40\textwidth, clip=true]{figsCovarianceMCMC.eps}} \\
\subfloat[NCP-iMFVI]{
\includegraphics[ keepaspectratio=true, width=0.40\textwidth, clip=true]{figsSampLam.eps}}
\subfloat[non-centered Gibbs]{
\includegraphics[ keepaspectratio=true, width=0.40\textwidth, clip=true]{figsMCMCLam.eps}}
\caption{\emph{\small The comparison of posterior covariance of $u$, and the comparison of the posterior measure of $\lambda$, with the mesh size $n=900$ respectively. (a): The covariance given by NCP-iMFVI method; (b): The covariance given by the non-centered Gibbs sampler method; (c): The posterior measure of $\lambda$ obtained by NCP-iMFVI method; (d): The posterior measure of $\lambda$ obtained by the non-centered Gibbs sampler method.}}
\label{fig:Covariance}
\end{figure}
\begin{figure}
\centering
\subfloat[Variance]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true, trim=120pt 100pt 120pt 100pt]{figsvariance.eps}}
\subfloat[Covariance]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true, trim=120pt 100pt 120pt 100pt]{figsvariance50.eps}}
\subfloat[Covariance]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true, trim=120pt 100pt 120pt 100pt]{figsvariance100.eps}}
\caption{\emph{\small The estimated variances and covariances by the NCP-iMFVI (blue solid line) and MCMC method (orange dashed line).
(a): The estimated variances $\lbrace var_u(x_i) \rbrace^{n}_{i=1}$ on all mesh points with $n = 900$;
(b): The estimated covariances $\lbrace cov_u(x_i, x_{i+50})\rbrace^{n-50}_{i=1}$ on the mesh point pairs $\lbrace (x_i, x_{i+50})\rbrace^{n-50}_{i=1}$;
(c): The estimated covariances $\lbrace cov_u(x_i, x_{i+100})\rbrace^{n-100}_{i=1}$ on the mesh point pairs $\lbrace (x_i, x_{i+100})\rbrace^{n-100}_{i=1}$.}}
\label{fig:Variance}
\end{figure}
\begin{figure}
\centering
\subfloat[Relative errors]{
\includegraphics[keepaspectratio=true, width=0.302\textwidth, clip=true ]{figsRelative.eps}}
\subfloat[Step norms (Logarithm)]{
\includegraphics[keepaspectratio=true, width=0.30\textwidth, clip=true ]{figsfin.eps}}
\subfloat[Step norms (Logarithm)]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figsinde.eps}} \\
\caption{\emph{\small (a): Relative errors of the estimated posterior means of $u$ in the $L^2$-norm of the NCP-iMFVI method with different mesh sizes $n = \lbrace 100, 500, 900 \rbrace$; (b): Logarithm of the step norms computed by the ``Discretize-then-Bayesianize'' approach with different discretized dimensions $n = \lbrace 100, 300, 500, 700, 900 \rbrace$; (c): Logarithm of the step norms computed by the NCP-iMFVI method with different discretized dimensions $n = \lbrace 100, 300, 500, 700, 900 \rbrace$.}}
\label{fig:inde}
\end{figure}
\subsection{Inverse source problem of Helmholtz equation}\label{subsec3.2}
\subsubsection{Basic settings}
The inverse source problem we study in this section is borrowed from \cite{Bao_2010, Bao_2015}; it determines the unknown current density function from measurements of the radiated fields at multiple wavenumbers.
Let us consider the two-dimensional Helmholtz equation:
\begin{align*}
\Delta w + \kappa ^2 w &= u \quad \text{in} \ \mathbb{R}^2, \\
\partial_r w - i\kappa w &= o(r^{-1/2}) \quad \text{as} \ r=\lvert x \rvert \rightarrow \infty,
\end{align*}
where $\kappa$ is the wavenumber, $w$ is the acoustic field, and $u$ is the source supported in a bounded domain $\Omega = [0, 1]^2$.
For this two-dimensional case, to simulate the problem defined on $\mathbb{R}^2$, we use the uniaxial perfectly matched layer (PML) technique to truncate the open domain into a bounded domain.
Denoting $\bm{x} = (x, y) \in \mathbb{R}^2$, let $D$ be a rectangle containing $\Omega$ and let $d_1, d_2$ be the thickness of the PML layers along the $x$ and $y$ coordinates, respectively.
Denote by $\partial D$ the boundary of the domain $D$. Let $s_1(x) = 1 + i\sigma_1(x)$ and $s_2(y) = 1 + i\sigma_2(y)$ be the model medium property, where $\sigma_j$ are the positive continuous even functions and satisfy $\sigma_j(x) = 0$ in $\Omega$.
Readers can find more details about the PML in \cite{Bao_2010, jia2019recursive}.
Following the general idea in designing PML absorbing layers, we may deduce the truncated PML problem: find the PML solution from the following system
\begin{align}
\begin{split}
\nabla \cdot (s\nabla w) + \kappa ^2s_1s_2w &= u \quad \text{in} \ D, \\
w &= 0 \quad \text{on} \ \partial D,
\end{split}
\end{align}
where $s = \diag(s_2(y) / s_1(x), s_1(x) / s_2(y))$ is a diagonal matrix. The forward operator related to $\kappa$ is defined by the Helmholtz equation $H_{\kappa}(u) = (w(\bm{x}_1), \cdots, w(\bm{x}_{N_d}))^T$ with $\lbrace \bm{x}_i \rbrace ^{N_d}_{i=1} \in \partial \Omega$ and $u \in \mathcal{H}_u := L^2(\Omega)$.
Since we consider the multi-frequency case, i.e., a series of wavenumbers $0 < \kappa_1 < \kappa_2 < \cdots < \kappa_{N} < \infty$, the forward operator has the following form:
\begin{align}
Hu = (H_{\kappa_1}(u), \cdots, H_{\kappa_N}(u)) \in \mathbb{R}^{N_d \times N}.
\end{align}
Similar to the simple smooth model, with these notations, we can write an abstract form of this problem:
\begin{align}
\bm{d} = Hu + \bm{\epsilon},
\end{align}
where $\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}})$ is the random Gaussian noise, $\bm{\Gamma}_{\text{noise}} = \tau ^{-1}\bm{I}$ and $\tau^{-1} = (0.05\max(Hu))^2$.
Parameter $v$ and hyper-parameter $\lambda$ are generated by the Gaussian measures $\mathcal{N}(0, \mathcal{C}_0)$ and $\mathcal{N}(\bar{\lambda}, \sigma)$, where $\bar{\lambda} = \sqrt{10}, \sigma = 25$, respectively.
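The data-generation step, stacking the per-wavenumber observations and adding noise at the level $\tau^{-1} = (0.05\max(Hu))^2$, can be sketched as follows. The forward maps below are random linear placeholders for the solvers $H_{\kappa_j}$, and the fields are kept real-valued for simplicity (the actual Helmholtz fields are complex):

```python
import numpy as np

def make_data(forward_ops, u, noise_ratio=0.05, rng=None):
    """Stack per-wavenumber observations H_k(u) column-wise and add Gaussian
    noise with std = noise_ratio * max|Hu|, mimicking
    tau^{-1} = (0.05 max(Hu))^2 and Gamma_noise = tau^{-1} I."""
    rng = rng or np.random.default_rng(0)
    Hu = np.column_stack([Hk(u) for Hk in forward_ops])   # shape (N_d, N)
    sigma = noise_ratio * np.max(np.abs(Hu))
    return Hu + sigma * rng.standard_normal(Hu.shape), sigma

# random linear maps standing in for the solvers H_{kappa_j}
rng = np.random.default_rng(1)
ops = [lambda u, A=rng.standard_normal((80, 50)): A @ u for _ in range(5)]
d, sigma = make_data(ops, rng.standard_normal(50))
```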
For clarity, we list the specific choices for some parameters introduced in this subsection as follows:
\begin{itemize}
\item We assume that the data are produced from the underlying true signal
\begin{align*}
u^{\dagger} = \ &0.3(4-6x_1)^2u_0(x_1, x_2) - 0.03u_0(x_1, x_2) \\
&- ((1.2x_1-0.6)-(6x_1-3)^3-(6x_2-3)^5)u_0(x_1, x_2),
\end{align*}
where $u_0(x_1, x_2) = \exp(-(6x_1-3)^2 - (6x_2-2)^2)$.
\item The operator $\mathcal{C}_0$ is given by $\mathcal{C}_0 = (\text{I} - \alpha\Delta)^{-2}$, where $\alpha = 0.05$ is a fixed constant.
Here the Laplace operator is defined on $\Omega$ with zero Neumann boundary condition.
\item Let the domain $\Omega$ be the bounded area $[0, 1]^2$.
The data are measured at $80$ points distributed uniformly on the boundary.
\item The wavenumber series are specified with $\kappa_j = j (j = 1, 2, \cdots ,30)$, and the thickness of PML layers is set as $d_1 = d_2 = 0.1$.
\item In order to avoid the inverse crime, a fine mesh with the number of grid points equal to $500 \times 500$ is employed for generating the data.
For the inverse stage, a mesh with the number of grid points equal to $200 \times 200$ is employed, as the wavenumbers are below $30$ according to \cite{wong2011exact}.
\end{itemize}
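Sampling from the prior $\mathcal{N}(0, \mathcal{C}_0)$ with $\mathcal{C}_0 = (\text{I} - \alpha\Delta)^{-2}$ and zero Neumann boundary conditions can be done through a truncated Karhunen--Lo\`eve expansion. A one-dimensional sketch follows; the paper's prior is two-dimensional, and the truncation level here is illustrative:

```python
import numpy as np

def sample_prior_1d(n_grid, n_terms=200, alpha=0.05, rng=None):
    """One draw from N(0, C0), C0 = (I - alpha*Laplacian)^{-2}, with zero
    Neumann boundary conditions on [0, 1], via a truncated Karhunen-Loeve
    expansion (a 1-D analogue of the paper's 2-D prior)."""
    rng = rng or np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, n_grid)
    v = np.zeros(n_grid)
    for k in range(1, n_terms + 1):
        lam_k = (1.0 + alpha * (k * np.pi) ** 2) ** (-2)   # eigenvalue of C0
        phi_k = np.sqrt(2.0) * np.cos(k * np.pi * x)       # Neumann eigenfunction
        v += np.sqrt(lam_k) * rng.standard_normal() * phi_k
    return x, v

x, v = sample_prior_1d(101)
```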
\begin{remark}\label{remark3}
\itshape
To employ sampling-type methods such as the MCMC method, researchers carefully parameterize the unknown source function to reduce the dimension, e.g., by assuming that the sources are point sources and parameterizing the source function by their number, locations, and amplitudes.
If we employed an MCMC algorithm \cite{cotter2013, feng2018adaptive} in our setting, the computational complexity would be unacceptable for two reasons: calculations with many wavenumbers are needed for multi-frequency problems, and a large number of samples needs to be generated for each wavenumber.
Similar to \cite{jia2021variational}, we could hardly compare our NCP-iMFVI method with the MCMC sampling method in this case.
\end{remark}
\subsubsection{Numerical results}
Firstly, in sub-figures (a) and (b) of Figure $\ref{fig:Helmcomparison}$, we show the estimated posterior mean function and the background truth of $u$.
Sub-figure (c) of Figure $\ref{fig:Helmcomparison}$ illustrates the estimated variance function of the posterior measure of $u$.
The estimated posterior mean function of $u$ is similar to the true function.
Combined with the variance function, this shows that the NCP-iMFVI method provides a reliable estimate.
In sub-figure (d) of Figure $\ref{fig:Helmcomparison}$, we illustrate the trend of relative error in $L^2$ norm.
During the iteration procedure, it converges in $5$ steps with relative error going under $10 \%$ and is stable at around $7 \%$.
In sub-figure (e) of Figure $\ref{fig:Helmcomparison}$, we show the step values of $\lambda$ given by the NCP-iMFVI method.
The curve stabilizes around the $10$th iteration, following the same trend as the relative error curve.
In sub-figure (f) of Figure $\ref{fig:Helmcomparison}$, we draw the step norms computed by the NCP-iMFVI method with different discretized dimensions $n = \lbrace 230\times 230, 240\times 240, 250\times 250, 260\times 260, 270\times 270 \rbrace$.
Although there are differences between the step norm curves, they exhibit the same descending trend for the different discretized dimensions.
Thus we say that the convergence speed is not affected by discretized dimensions, demonstrating that the NCP-iMFVI method has mesh independence properties.
Furthermore, because of the reparameterization $u = \lambda v$, we expect the posterior measure of $u$ to remain the same even if we take different prior measures of $v$.
We want to show that the posterior mean of $\lambda$ is self-adjustable when the prior measure of $v$ changes.
Based on the settings in Subsection $\ref{subsec3.1.1}$, the prior measure of $v$ is $\mu^v_0 = \mathcal{N}(0, \mathcal{C}_0)$, where $\mathcal{C}_0 = (\text{I} - \alpha\Delta)^{-2}$.
Then sub-figure (a) of Figure $\ref{fig:Helmlambda}$ shows the posterior measure of $\lambda$, and the posterior mean $m_{\mu^v_0}$ is around $0.194$.
Next, we set the prior measure of $v$ as $(\mu^{v}_0)^{\prime} = 0.5\mu^v_0 = \mathcal{N}(0, 0.25\mathcal{C}_0)$.
Then sub-figure (b) of Figure $\ref{fig:Helmlambda}$ shows the posterior measure of $\lambda$, and the posterior mean $m_{0.5\mu^v_0}$ is around $0.3695$.
We can see that $m_{0.5\mu^v_0} \approx 2\times m_{\mu^v_0}$.
That means the posterior mean of $\lambda$ is self-adjustable, which illustrates our expectations.
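This self-adjusting behaviour admits a simple heuristic explanation: only the product $u = \lambda v$ enters the likelihood, so rescaling the prior of $v$ must be compensated by the reciprocal rescaling of $\lambda$. Informally,

```latex
\begin{align*}
v' \sim \mathcal{N}(0, c^2\mathcal{C}_0) \ \Longleftrightarrow\ v' \overset{d}{=} c\, v,
\qquad
u = \lambda v = \lambda' v' \ \Longrightarrow\ \lambda' \approx \lambda / c,
\end{align*}
```

so taking $c = 1/2$, i.e., the prior $0.5\mu^v_0 = \mathcal{N}(0, 0.25\,\mathcal{C}_0)$, gives $\lambda' \approx 2\lambda$, consistent with the observed $m_{0.5\mu^v_0} \approx 2\, m_{\mu^v_0}$.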
\begin{figure}
\centering
\subfloat[Estimated mean]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figsmean2.eps}}
\subfloat[Background truth]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figstruth2.eps}}
\subfloat[Estimated variance]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figsVarianceHelm2.eps}} \\
\subfloat[Relative errors]{
\includegraphics[ keepaspectratio=true, width=0.285\textwidth, clip=true]{figsRelative2.eps}}
\subfloat[Step values of $\lambda$]{
\includegraphics[ keepaspectratio=true, width=0.280\textwidth, clip=true]{figsLam2.eps}}
\subfloat[Step norms]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figsinde2.eps}}
\caption{\emph{\small (a): The estimated posterior mean function of $u$ obtained by NCP-iMFVI; (b): The background truth of $u$; (c): The estimated variance function of the posterior measure of $u$ obtained by NCP-iMFVI; (d): Relative error of the estimated posterior means in the $L^2$-norm under the mesh size $200 \times 200$; (e): The step values of $\lambda$ obtained by NCP-iMFVI; (f): The step norms computed by the NCP-iMFVI method with different discretized dimensions $n = \lbrace 230\times 230, 240\times 240, 250\times 250, 260\times 260, 270\times 270 \rbrace$. }}
\label{fig:Helmcomparison}
\end{figure}
\begin{figure}
\centering
\subfloat[Posterior measure of $\lambda$ with $\mu^v_0$]{
\includegraphics[ keepaspectratio=true, width=0.49\textwidth, clip=true]{figsoneprior2.eps}}
\subfloat[Posterior measure of $\lambda$ with $0.5\mu^v_0$]{
\includegraphics[ keepaspectratio=true, width=0.49\textwidth, clip=true]{figsquarterprior2.eps}} \\
\caption{\emph{\small Comparison of posterior measures of $\lambda$ calculated by different prior measure of $v$. (a): the posterior measure of $\lambda$ calculated by prior $\mu^{v}_0 = \mathcal{N}(0, \mathcal{C}_0)$; (b): the posterior measure of $\lambda$ calculated by prior $0.5\mu^{v}_0 = \mathcal{N}(0, 0.25\cdot \mathcal{C}_0)$. }}
\label{fig:Helmlambda}
\end{figure}
\subsection{Non-linear inverse problem with steady-state Darcy flow equation}\label{subsec3.3}
\subsubsection{Basic settings}
In this section, we concentrate on the inverse problem of estimating the permeability distribution in a porous medium from a discrete set of pressure measurements, studied in \cite{calvetti2018iterative}. Consider the following steady-state Darcy flow equation:
\begin{align}
\begin{split}
-\nabla \cdot (e^u\nabla w) &= f \quad x \in \Omega, \\
w &= 0 \quad x \in \partial \Omega,
\end{split}
\end{align}
where $f \in H^{-1}(\Omega)$ is the source function, $u \in \mathcal{X} := L^{\infty}(\Omega)$ is the log-permeability on the computational area $\Omega = [0, 1]^2$, and we denote $\bm{x} = (x, y) \in \mathbb{R}^2$.
Then the forward operator has the following form:
\begin{align}
Hu = (w(\bm{x}_1), w(\bm{x}_2), \cdots, w(\bm{x}_{N_d}))^T,
\end{align}
where $\bm{x}_i \in \Omega$ for $i = 1, \cdots, N_d$.
Let $\mathcal{F}$ denote the solution operator, $w = \mathcal{F}(u)$, and assume that $\mathcal{F}$ is Fr\'{e}chet differentiable.
We linearize $\mathcal{F}(u)$ around $u_{\text{MAP}}$ to obtain
\begin{align}
w \approx \mathcal{F}(u_{\text{MAP}}) + F^{\prime}(u - u_{\text{MAP}}),
\end{align}
where $F^{\prime}$ is the Fr\'{e}chet derivative of $\mathcal{F}(u)$ evaluated at $u_{\text{MAP}}$. Consequently, we transform the non-linear problem into the linear form, and we are able to employ our NCP-iMFVI method.
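Such a linearization can be sanity-checked numerically: the Taylor remainder $\lVert \mathcal{F}(u + \epsilon\,\delta u) - \mathcal{F}(u) - \epsilon F^{\prime}\delta u \rVert$ should decay like $O(\epsilon^2)$. A sketch on a toy smooth map standing in for the PDE solution operator (the map and test vectors are illustrative):

```python
import numpy as np

def taylor_remainder(F, dF, u, du, eps):
    # norm of F(u + eps*du) - F(u) - eps*dF(u)[du]; O(eps^2) if dF is correct
    return np.linalg.norm(F(u + eps * du) - F(u) - eps * dF(u, du))

# toy smooth map standing in for u -> w = F(u)
c = np.array([1.0, 2.0, 3.0])
F = lambda u: np.exp(u) * c
dF = lambda u, du: np.exp(u) * du * c    # its Frechet derivative at u

u0 = np.array([0.1, -0.2, 0.3])
du = np.array([1.0, 1.0, -1.0])
r1 = taylor_remainder(F, dF, u0, du, eps=1e-3)
r2 = taylor_remainder(F, dF, u0, du, eps=1e-4)
# shrinking eps by 10 shrinks the remainder by about 100 (second order)
```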
We now denote $\delta w := F^{\prime}(\delta u)$, $w_0 = \mathcal{F}(u_{\text{MAP}})$, and $\delta u := u - u_{\text{MAP}} = \lambda v$.
Then we obtain the linearized equation as follows:
\begin{align}
\begin{split}
-\nabla \cdot (e^{u_{\text{MAP}}}\,\delta u\, \nabla w_0) &= \nabla \cdot (e^{u_{\text{MAP}}} \nabla \delta w) \quad x \in \Omega, \\
\delta w &= 0 \quad x \in \partial \Omega,
\end{split}
\end{align}
and the abstract form turns to
\begin{align}
\begin{split}
\bm{d} &= \mathcal{F}(u_{\text{MAP}}) + \delta w + \bm{\epsilon},
\end{split}
\end{align}
where $\bm{\epsilon} \sim \mathcal{N}(0, \bm{\Gamma}_{\text{noise}})$ is the random Gaussian noise, $\bm{\Gamma}_{\text{noise}} = \tau ^{-1}\bm{I}$ and $\tau^{-1} = (0.05\max(Hu))^2$.
The parameters $v$ and $\lambda$ are generated by the Gaussian measures $\mathcal{N}(0, \mathcal{C}_0)$ and $\mathcal{N}(\bar{\lambda}, \sigma)$, where $\bar{\lambda} = \sqrt{10}, \sigma = 25$, respectively.
For clarity, we list the specific choices for some parameters introduced in this subsection as follows:
\begin{itemize}
\item We assume that the data are produced from the underlying log-permeability
\begin{align*}
u^{\dagger} = \ &\exp(-20(x_1-0.3)^2 - 20(x_2-0.4)^2) \\
&+ \exp(-20(x_1-0.7)^2 - 20(x_2-0.6)^2).
\end{align*}
\item Let domain $\Omega$ be a bounded area $[0, 1]^2$. The available data is discretized by the finite element method on a regular mesh with the number of grid points equal to $20 \times 20$.
\item The operator $\mathcal{C}_0$ is given by $\mathcal{C}_0 = (\text{I} - \alpha\Delta)^{-2}$, where $\alpha = 0.05$ is a fixed constant. Here the Laplace operator is defined on $\Omega$ with zero Neumann boundary condition.
\item In order to avoid the inverse crime, a fine mesh with the number of grid points equal to $500 \times 500$ is employed for generating the data. For the inversion, a mesh with a number of grid points equal to $200 \times 200$ is employed.
\end{itemize}
\begin{figure}
\centering
\subfloat[Estimated mean]{
\includegraphics[ keepaspectratio=true, width=0.30\columnwidth, clip=true]{figsmean1.eps}}
\subfloat[Background truth]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figstruth1.eps}}
\subfloat[Estimated variance]{
\includegraphics[ keepaspectratio=true, width=0.30\textwidth, clip=true]{figsVarianceDarcy1.eps}} \\
\subfloat[Relative error]{
\includegraphics[ keepaspectratio=true, width=0.285\textwidth, clip=true]{figsRelative1.eps}}
\subfloat[Step values of $\lambda$]{
\includegraphics[ keepaspectratio=true, width=0.28\textwidth, clip=true]{figsLam1.eps}}
\subfloat[Step norms]{
\includegraphics[ keepaspectratio=true, width=0.3\textwidth, clip=true]{figsinde1.eps}}
\caption{\emph{\small (a): The estimated posterior mean function of $u$ obtained by NCP-iMFVI; (b): The background truth of $u$; (c): The estimated variance function of the posterior measure of $u$ obtained by NCP-iMFVI; (d): Relative error of the estimated posterior means in the $L^2$-norm under the mesh size $200 \times 200$; (e): The step values of $\lambda$ obtained by NCP-iMFVI; (f): The step norms computed by the NCP-iMFVI method with different discretized dimensions $n = \lbrace 230\times 230, 240\times 240, 250\times 250, 260\times 260, 270\times 270 \rbrace$. }}
\label{fig:Darcycomparison}
\end{figure}
\begin{figure}
\centering
\subfloat[Posterior density with $\mu^{v}_0$]{
\includegraphics[ keepaspectratio=true, width=0.49\textwidth, clip=true]{figsoneprior1.eps}}
\subfloat[Posterior density with $0.5\mu^{v}_0$]{
\includegraphics[ keepaspectratio=true, width=0.49\textwidth, clip=true]{figsquarterprior1.eps}} \\
\caption{\emph{\small Comparison of posterior distributions of $\lambda$ due to different priors. (a): the posterior density of $\lambda$ obtained by prior $\mu^{v}_0 = \mathcal{N}(0, \mathcal{C}_0)$; (b): the posterior density of $\lambda$ obtained by prior $0.5\mu^{v}_0 = \mathcal{N}(0, 0.25\cdot \mathcal{C}_0)$. }}
\label{fig:Darcylambda}
\end{figure}
\subsubsection{Numerical results}
Firstly, we show the estimated posterior mean function and the background truth of $u$, see sub-figures (a) and (b) of Figure $\ref{fig:Darcycomparison}$.
Sub-figure (c) shows the estimated variance function of the posterior measure of $u$.
The estimated posterior mean function of $u$ is similar to the true function.
Combined with the variance function, we know that the NCP-iMFVI method gives a reliable estimate.
In sub-figure (d) of Figure $\ref{fig:Darcycomparison}$, we illustrate the curve of relative error in $L^2$ norm.
During the iteration procedure, the curve converges in $5$ steps with relative error going under $10 \%$ and is stable at around $3 \%$ in the remaining steps.
In sub-figure (e) of Figure $\ref{fig:Darcycomparison}$, we show the step values of $\lambda$ given by the NCP-iMFVI method.
This curve stabilizes around the $5$th iteration, following the same trend as the relative error curve.
In sub-figure (f) of Figure $\ref{fig:Darcycomparison}$, we draw the step norms computed by the NCP-iMFVI method with different discretized dimensions $n = \lbrace 230\times 230, 240\times 240, 250\times 250, 260\times 260, 270\times 270 \rbrace$, respectively.
Although there are differences between the curves, they exhibit the same descending trend for different discretized dimensions.
Thus we say that all step norm curves decline similarly, and the convergence speed is not affected by discretized dimensions.
This demonstrates that the NCP-iMFVI method has mesh independence property.
Similar to the discussions in Subsection $\ref{subsec3.2}$, we illustrate that the posterior mean of $\lambda$ is self-adjustable when the prior of $v$ changes.
The prior measure of $v$ is $\mu^v_0 = \mathcal{N}(0, \mathcal{C}_0)$, where $\mathcal{C}_0 = (\text{I} - \alpha\Delta)^{-2}$.
Then sub-figure (a) of Figure $\ref{fig:Darcylambda}$ shows the posterior measure of $\lambda$, and the posterior mean $m_{\mu^v_0}$ is around $0.0096$.
Next, we set the prior measure of $v$ as $(\mu^{v}_0)^{\prime} = 0.5\mu^v_0 = \mathcal{N}(0, 0.25\mathcal{C}_0)$.
Then sub-figure (b) of Figure $\ref{fig:Darcylambda}$ shows the posterior measure of $\lambda$, and the posterior mean $m_{0.5\mu^v_0}$ is around $0.0218$.
We can see that $m_{0.5\mu^v_0} \approx 2\times m_{\mu^v_0}$.
That means the posterior mean of $\lambda$ is self-adjustable, which is exactly what we expect.
\section{Conclusion}\label{sec4}
In this paper, we generalize the NCP-iMFVI method to the infinite-dimensional space, providing an efficient computational method for applying the iMFVI approach based on the hierarchical Bayesian model.
The NCP-iMFVI method avoids the obstacle of the priors being mutually singular, which is caused by changes in the hyper-parameter $\lambda$.
The established NCP-iMFVI approach is applied to linear inverse problems with Gaussian noises, deriving an explicit form of the posterior measure.
We employed this approach to three inverse problems of the simple smooth equation, the multi-frequency Helmholtz equation, and the steady-state Darcy flow problem, respectively.
The numerical experiments show that the NCP-iMFVI method efficiently generates reliable estimates.
Moreover, we also show that the posterior measure of the hyper-parameter $\lambda$ is self-adjustable, and the discretized dimension does not affect the convergence speed.
The current NCP-iMFVI method is based on an analysis of the hierarchical linear problem.
For solving the non-linear inverse problems, the NCP-iMFVI method can only manage the linearized part and provide an approximated posterior measure, as shown in Subsection $\ref{subsec3.3}$.
As a result, the approximated probability measure could be inaccurate for some highly non-linear problems.
In finite-dimensional spaces, the NCP formulation has been employed to solve a non-linear geostatistical inverse problem \cite{papaspiliopoulos2003non}.
Thus developing the NCP-iMFVI method for solving non-linear inverse problems is worth investigating in future work.
On the other hand, under our mean-field assumption, the parameters are assumed to be independent of each other, so the dependence among the parameters cannot be accurately captured.
To ameliorate this problem, the hierarchical VI, which is able to capture dependencies between parameters, has been studied in \cite{tran2015variational} under the finite-dimensional setting.
It is worthwhile developing the infinite-dimensional hierarchical VI methods that can reserve the dependencies of the parameters.
\section{Appendix}
\subsection{The infinite-dimensional variational inference theory}
In this section, we intend to offer an introduction to the infinite-dimensional variational inference theory, which is a brief version of \cite{jia2021variational}. We should highlight that we improve the statement of Theorem 11 in \cite{jia2021variational} in order to make the theory more suitable.
Let the Bayesian formula on the Hilbert space be defined by
\begin{align}
\frac{d\mu}{d\mu_0}(x) = \frac{1}{Z_{\mu}} \exp (-\Phi(x)),
\end{align}
where $\Phi(x):\mathcal{H} \rightarrow \mathbb{R}$ is a continuous function, and $\exp(-\Phi(x))$ is integrable with respect to $\mu_0$.
Here we denote that $\mu_0$ represents the prior measure, and $\mu$ is the posterior we intend to estimate on some Hilbert space $\mathcal{H}$.
$Z_{\mu}$ is a normalization constant ensuring that $\mu$ is indeed a probability measure.
As our aim is to choose the closest measure $\nu$ in $\mathcal{A}$ to approximate $\mu$, the variational inference problem can be formulated as
\begin{align}\label{eq1}
\arg\min \limits_{\nu \in \mathcal{A}} D_{KL} (\nu \Arrowvert \mu),
\end{align}
where $\mathcal{A}\subset \mathcal{M}(\mathcal{H})$ is a set of ``simpler'' measures that can be calculated efficiently, and $\mathcal{M}(\mathcal{H})$ is the set of Borel measures on $\mathcal{H}$.
For a fixed integer $M$, we write the variable as $x = (x_1, x_2, \cdots, x_M)$. In particular, in the main text we have $M = 2$, with $x_1 = \lambda$ and $x_2 = v$. We specify the Hilbert space $\mathcal{H}$ and the subset $\mathcal{A}$ as
\begin{align}
\mathcal{H} = \prod^M_{j=1}\mathcal{H}_j, \quad \mathcal{A} = \prod^M_{j=1}\mathcal{A}_j
\end{align}
where $\mathcal{H}_j$, $j = 1, \cdots, M$, are separable Hilbert spaces and $\mathcal{A}_j \subset \mathcal{M}(\mathcal{H}_j)$. Let $\nu := \prod^M_{i=1}\nu^i$ be a probability measure such that $\nu(dx) = \prod^M_{i=1}\nu^i(dx_i)$. With these assumptions, the minimization problem ($\ref{eq1}$) can be rewritten as
\begin{align}\label{eq3}
\arg\min \limits_{\nu^i \in \mathcal{A}_i}D_{KL} \bigg (\prod^M_{i=1}\nu^i \bigg \Arrowvert \mu \bigg )
\end{align}
for suitable sets $\mathcal{A}_i$ with $i = 1, 2, \cdots, M$. Here we introduce the approximate probability measure $\nu$ given in (\ref{eq1}), which is equivalent to $\mu_0$ and defined by
\begin{align}
\frac{d\nu}{d\mu_0}(x) = \frac{1}{Z_{\nu}} \exp (-\Phi_{\nu}(x)).
\end{align}
A natural way to introduce an independence assumption is to assume that the potential $\Phi_{\nu}(x)$ can be decomposed as
\begin{align}
\exp (-\Phi_{\nu}(x)) = \prod^M_{i=1} \exp \bigg (-\Phi^i_{\nu}(x_i) \bigg ),
\end{align}
where $x = (x_1, \cdots, x_M)$. Given these considerations, the following assumption is introduced.
\begin{assumption}\label{ass1}
Let us introduce a reference probability measure
\begin{align}
\mu_r(dx) = \prod^M_{i=1}\mu^i_r(dx_i),
\end{align}
which is equivalent to the prior probability measure with the following relation:
\begin{align}
\frac{d\mu_0}{d\mu_r}(x) = \frac{1}{Z_0} \exp (-\Phi^0(x)).
\end{align}
For each $i=1, 2, \cdots, M$, there is a predefined continuous function $a_i(\epsilon, x_i)$, where $\epsilon$ is a positive number and $x_i \in \mathcal{H}_i$. Concerning these functions, we assume that $\mathbb{E}^{\mu^i_r}[a_i(\epsilon, \cdot)] < \infty$ for $\epsilon \in [0, \epsilon^i_0)$, $i = 1, \cdots, M$, where $\epsilon^i_0$ is a small positive number fixed in advance. We also assume that the approximate probability measure $\nu$ is equivalent to the reference measure $\mu_r$ and that the Radon-Nikodym derivative of $\nu$ with respect to $\mu_r$ takes the following form:
\begin{align}
\frac{d\nu}{d\mu_r}(x) = \frac{1}{Z_r} \exp \bigg (-\sum^M_{i=1}\Phi^r_i(x_i) \bigg ).
\end{align}
\end{assumption}
Following Assumption $\ref{ass1}$, we know that the approximation measure can be decomposed as $\nu(dx) = \prod^M_{i=1}\nu^i(dx_i)$ with
\begin{align}\label{eq2}
\frac{d\nu^i}{d\mu^i_r}(x) = \frac{1}{Z^i_r} \exp (-\Phi^r_i(x_i)).
\end{align}
Here $Z^i_r = \mathbb{E}^{\mu^i_r}[\exp(-\Phi^r_i(x_i))]$, which ensures that $\nu^i$ is indeed a probability measure. For $i = 1, 2, \cdots, M$, let $\mathcal{Z}_i$ be a Hilbert space embedded in $\mathcal{H}_i$; then we introduce
\begin{align*}
R^1_i &= \bigg \lbrace \Phi^r_i \bigg | \sup \limits_{1/N \leqslant \lVert x_i \rVert \leqslant N} \Phi^r_i(x_i) < \infty, \quad \text{for all} \ N > 0 \bigg \rbrace, \\
R^2_i &= \bigg \lbrace \Phi^r_i \bigg | \int_{\mathcal{H}_i} \exp (-\Phi^r_i(x_i))\max(1, a_i(\epsilon, x_i))\mu^i_r(dx_i) < \infty, \quad \text{for all} \ \epsilon \in [0, \epsilon^i_0) \bigg \rbrace,
\end{align*}
where $\epsilon^i_0$ and $a_i(\cdot, \cdot)$ are defined as in Assumption $\ref{ass1}$. With these preparations, we can define $\mathcal{A}_i, i = 1, 2, \cdots, M$ as follows:
\begin{align}
\mathcal{A}_i = \left\{
\begin{tabular}{l|l}
\multirowcell{2}[0pt][l]{$\nu^i \in \mathcal{M}(\mathcal{H}_i)$} &
\multirowcell{2}[0pt][l]{$\nu^i$ is equivalent to $\mu^i_r$ with (\ref{eq2}) holding true,\\
and $\Phi^r_i \in R^1_i \bigcap R^2_i$} \\
&
\end{tabular}
\right\}
\end{align}
Now, we are able to state the main theorem that yields practical iterative algorithms:
\begin{theorem}\label{the1}
Assume that the approximate probability measure in problem $(\ref{eq3})$ satisfies Assumption $\ref{ass1}$. For $i = 1, 2, \cdots, M$, we denote $T^i_N = \lbrace x_i | 1/N \leqslant \lVert x_i \rVert_{\mathcal{Z}_i} \leqslant N \rbrace$, with $N$ being an arbitrary positive constant. For each reference measure $\mu^i_r$, we assume that $\sup_N \mu^i_r(T^i_N)=1$. In addition, we assume
\begin{align}
\sup \limits_{x_i \in T^i_N}\int_{\prod_{j \neq i}\mathcal{H}_j} \bigg (\Phi^0(x)+\Phi(x) \bigg )1_A(x)\prod \limits_{j \neq i}\nu^j(dx_j) < \infty
\end{align}
and
\begin{align}
\int_{\mathcal{H}_i} \exp \bigg (-\int_{\prod_{j \neq i}\mathcal{H}_j}(\Phi^0(x)+\Phi(x))1_{A^c}(x)\prod \limits_{j \neq i}\nu^j(dx_j) \bigg )M_i(x)\mu^i_r(dx_i) < \infty,
\end{align}
where $A := \lbrace x | \Phi^0(x) + \Phi(x) \geqslant 0 \rbrace$, and $M_i := \max(1, a_i(\epsilon, x_i))$ with $i, j = 1, 2, \cdots, M$. Then problem $(\ref{eq3})$ possesses a solution $\nu = \prod^M_{i=1}\nu^i \in \mathcal{M}(\mathcal{H})$ of the following form
\begin{align}
\frac{d\nu}{d\mu_r}(x) \varpropto \exp \bigg (-\sum^M_{i=1}\Phi^r_i(x_i) \bigg),
\end{align}
where
\begin{align}
\Phi^r_i(x_i) = \int_{\prod_{j \neq i}\mathcal{H}_j} \bigg (\Phi^0(x) + \Phi(x) \bigg )\prod \limits_{j \neq i}\nu^j(dx_j) + \text{Const}
\end{align}
and
\begin{align}
\nu^i(dx_i) \varpropto \exp (-\Phi^r_i(x_i))\mu^i_r(dx_i).
\end{align}
\end{theorem}
We should point out that Theorem $\ref{the1}$ and the definitions of $R^1_i$ and $R^2_i$, $i = 1, 2, \cdots, M$, are slightly different from the statements given in \cite{jia2021variational}.
The proof of Theorem 11 in \cite{jia2021variational} can be carried over step by step to prove Theorem $\ref{the1}$.
Actually, Theorem $\ref{the1}$ can be regarded as a refinement of Theorem 11 in \cite{jia2021variational} whose conditions can be verified more easily for practical problems.
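The coordinate updates produced by Theorem $\ref{the1}$ can be iterated in practice. The following sketch runs them on a toy two-factor example with Gaussian reference measures, where each update has a closed form; the quadratic coupling potential and all numerical values are illustrative assumptions of ours, not taken from the paper.

```python
# Mean-field coordinate iteration for nu = nu^1 x nu^2 with reference
# measure mu_r = N(0, I_2).  We take Phi0(x) = -b1*x1 - b2*x2 and the
# coupling Phi(x) = a*x1*x2; all numbers below are illustrative only.
a, b1, b2 = 0.5, 1.0, -2.0
m1 = m2 = 0.0                     # means of the Gaussian factors nu^1, nu^2
for _ in range(100):
    # Phi^r_1(x1) = E_{nu^2}[Phi0 + Phi] = (a*m2 - b1)*x1 + const,
    # hence nu^1 = N(b1 - a*m2, 1); symmetrically for nu^2.
    m1 = b1 - a * m2
    m2 = b2 - a * m1
# Closed-form fixed point of the two linear updates:
m1_star = (b1 - a * b2) / (1 - a ** 2)
m2_star = (b2 - a * b1) / (1 - a ** 2)
print(m1, m2)   # -> approximately (2.6667, -3.3333)
```

For $|a| < 1$ the alternating updates contract, so the iterates converge to the unique fixed point of the two linear equations, mirroring the fixed-point structure of the theorem.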
\subsection{Assumption 1 and Theorems 15--16 in \cite{dashti2013bayesian}}
Here we provide Assumption 1 and Theorems 15--16 of \cite{dashti2013bayesian}, which are needed in the proof of Theorem 2.1; they are restated as Assumption $\ref{assump1app}$ and Theorems $\ref{theorem1app}$ and $\ref{theorem2app}$, respectively.
Note that the symbols of \cite{dashti2013bayesian} have been changed to the notation of our paper in the assumption and theorems below.
\begin{assumption}\label{assump1app}
(Assumption 1 in \cite{dashti2013bayesian})
Let us denote $X = \mathcal{H}_u\times \mathbb{R}$, and assume $\Phi \in C(X\times \mathbb{R}^{N_d}; \mathbb{R})$.
Assume further that there are functions $M_i:\mathbb{R}^{+}\times \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}, i=1, 2$, monotonic non-decreasing separately in each argument, and with $M_2$ strictly positive, such that for all $v \in \mathcal{H}_u$, $\bm{d}, \bm{d}_1, \bm{d}_2 \in B_{\mathbb{R}^{N_d}}(0, r)$,
\begin{align*}
\Phi(v, \lambda; \bm{d}) &\geq -M_1(r, |\lambda|\lVert v \rVert_{\mathcal{H}_u}), \\
\lvert \Phi(v, \lambda; \bm{d}_1) - \Phi(v, \lambda; \bm{d}_2)\rvert &\leq M_2(r, |\lambda|\lVert v \rVert_{\mathcal{H}_u})\lVert \bm{d}_1 - \bm{d}_2 \rVert_{\mathbb{R}^{N_d}}.
\end{align*}
\end{assumption}
\begin{theorem}\label{theorem1app}
(Theorem 15 in \cite{dashti2013bayesian})
Let Assumption $\ref{assump1app}$ hold.
Assume that $\mu_0(X) = 1$, and that $\mu_0(X \cap B) > 0$ for some bounded set $B$ in $X$.
Assume additionally that, for every fixed $r>0$,
\begin{align*}
\exp(M_1(r, |\lambda|\lVert v \rVert_{\mathcal{H}_u})) \in L^1_{\mu_0}(X; \mathbb{R}),
\end{align*}
where $L_{\mu_0}^1(X;\mathbb{R})$ denotes the space of real-valued functions on $X$ that are integrable with respect to the measure $\mu_0$.
Then for every $\bm{d} \in \mathbb{R}^{N_d}$, $Z_{\mu} = \int_X \exp(-\Phi(v, \lambda;\bm{d}))\mu^v_0(dv)\mu^{\lambda}_0(d\lambda)$ is positive and finite, and the probability measure $\mu$ is well defined, which is given by
\begin{align*}
\frac{d\mu}{d\mu_0}(v, \lambda) = \frac{1}{Z_{\mu}}\exp(-\Phi(v, \lambda; \bm{d})).
\end{align*}
\end{theorem}
\begin{theorem}\label{theorem2app}
(Theorem 16 in \cite{dashti2013bayesian})
Let Assumption $\ref{assump1app}$ hold.
Assume that $\mu_0(X) = 1$ and that $\mu_0(X \cap B) > 0$ for some bounded set $B$ in $X$.
Assume additionally that, for every fixed $r > 0$,
\begin{align*}
\exp(M_1(r, |\lambda|\lVert v \rVert_{\mathcal{H}_u}))(1 + M_2(r, |\lambda|\lVert v \rVert_{\mathcal{H}_u})^2) \in L^1_{\mu_0}(X; \mathbb{R}).
\end{align*}
Then there is $C = C(r) > 0$ such that, for all $\bm{d}, \bm{d}^{\prime} \in B_{\mathbb{R}^{N_d}}(0, r)$,
\begin{align*}
\sqrt{\frac{1}{2}\int \bigg (1 - \sqrt{\frac{d\mu^{\prime}}{d\mu}} \bigg )^2d\mu} \leq C\lVert \bm{d} - \bm{d}^{\prime} \rVert_{\mathbb{R}^{N_d}},
\end{align*}
where measures $\mu^{\prime}$, $\mu$ represent the posterior measure according to $\bm{d}^{\prime}$, $\bm{d}$, respectively.
\end{theorem}
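To illustrate the Lipschitz-type stability asserted by Theorem $\ref{theorem2app}$, the sketch below computes the Hellinger distance between the posteriors of a one-dimensional conjugate Gaussian model for two nearby data values; the model and all numbers are our own illustrative assumptions, not taken from \cite{dashti2013bayesian}.

```python
import math

# 1-D conjugate Gaussian illustration of the well-posedness bound:
# prior v ~ N(0, 1), data model d = v + N(0, s2), so the posterior is
# N(d / (1 + s2), s2 / (1 + s2)).  All numbers are illustrative.
s2 = 0.5
post_var = s2 / (1 + s2)

def hellinger(d1, d2):
    # Hellinger distance between two equal-variance Gaussians:
    # H^2 = 1 - exp(-(m1 - m2)^2 / (8 * var))
    m1, m2 = d1 / (1 + s2), d2 / (1 + s2)
    return math.sqrt(1 - math.exp(-(m1 - m2) ** 2 / (8 * post_var)))

for eps in (1.0, 0.1, 0.01):
    # the ratio stays bounded as eps -> 0, as the theorem predicts
    print(eps, hellinger(0.0, eps) / eps)
```

The ratio stabilizes near a constant as $\epsilon \to 0$, consistent with the bound $d_{\mathrm{Hell}}(\mu, \mu^{\prime}) \leq C \lVert \bm{d} - \bm{d}^{\prime} \rVert$.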
\subsection{Non-centered Gibbs sampler in Subsection 3.1 of the main text}
Here we provide the Non-centered Gibbs sampler method as pseudocode, and readers can seek more details in \cite{chen2018dimension}.
\begin{algorithm}
\caption{Non-centered pCN within Gibbs}
\label{alg B}
\begin{algorithmic}[1]
\STATE{Fix $\beta \in (0, 1]$, initialize $\lambda_0 = \bar{\lambda}$, $v_0 \sim \mu^v_0$ and set $k=0$ and maximum iteration number $N_{\max}$;}
\REPEAT
\STATE{Propose $\hat{v}_k=\sqrt{1-\beta^2}\,v_k+\beta \zeta_k, \quad \zeta_k \sim \mathcal{N}(0, \mathcal{C}_0)$;}
\STATE{Set $v_{k+1} = \hat{v}_k$ with probability
\begin{align*}
\min \bigg\lbrace 1, \exp \bigg(\Phi(v_k, \lambda_k) - \Phi(\hat{v}_k, \lambda_k) \bigg)\bigg\rbrace
\end{align*}
or else set $v_{k+1} = v_k$;}
\STATE{Propose $\hat{\lambda}_k \sim q(\lambda_k, \cdot)$, where $q(\lambda_k, \cdot)$ is the proposal measure of $\hat{\lambda}_k$;}
\STATE{Set $\lambda_{k+1} = \hat{\lambda}_k$ with probability
\begin{align*}
\min \bigg\lbrace 1, \exp \bigg(\Phi(v_k, \lambda_k) - \Phi(v_k, \hat{\lambda}_k) \bigg)
\frac{q(\hat{\lambda}_k, \lambda_k)\mu^{\lambda}_0(\hat{\lambda}_k)}{q(\lambda_k, \hat{\lambda}_k)\mu^{\lambda}_0(\lambda_k)} \bigg\rbrace
\end{align*}
or else set $\lambda_{k+1}=\lambda_k$;}
\STATE{Set $k = k+1$;}
\UNTIL{$k = N_{\max}$.}
\end{algorithmic}
\end{algorithm}
Inspired by \cite{agapiou2014analysis}, the proposal measure of $\lambda$ in step 5 of Algorithm $\ref{alg B}$ can be specified based on our settings.
Because of the Gaussian noise and prior measures, the posterior measure of $\lambda$ can be calculated explicitly since it is also a Gaussian measure.
According to the discussion in Subsection 2.5, we provide the posterior measure $\mathcal{N}(\bar{\lambda}_k, \sigma_k)$ of $\lambda_k$ as the proposal measure at each step, where
\begin{align*}
\frac{1}{\sigma_k} = \tau\lVert Hv_k\rVert^{2} + \frac{1}{\sigma}, \quad
\bar{\lambda}_k = \sigma_k \bigg (\tau\langle d, Hv_k\rangle + \frac{\bar{\lambda}}{\sigma} \bigg ),
\end{align*}
where $\bar{\lambda}$ and $\sigma$ are the mean and the variance of the prior measure $\mu^{\lambda}_0$, respectively.
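A minimal finite-dimensional sketch of Algorithm $\ref{alg B}$ with this Gaussian proposal is given below; the linear data model, the dimensions, and all numerical values are illustrative assumptions of ours rather than the paper's actual infinite-dimensional setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-dimensional stand-in for the model d = lambda * H v + noise
# (sizes and values are illustrative; noise precision tau, Gaussian priors).
n, tau = 5, 100.0
H = np.eye(n)
lam_true = 2.0
v_true = rng.normal(size=n)
d = lam_true * (H @ v_true) + rng.normal(size=n) / np.sqrt(tau)

lam_bar, sigma = 0.0, 4.0           # prior N(lam_bar, sigma) on lambda
beta, n_iter = 0.3, 2000            # pCN step size and chain length

def Phi(v, lam):                    # data misfit (negative log-likelihood)
    r = d - lam * (H @ v)
    return 0.5 * tau * (r @ r)

v, lam = rng.normal(size=n), lam_bar
phi_init, phi_best = Phi(v, lam), np.inf
for _ in range(n_iter):
    # pCN step for v (prior covariance C0 = I here): the acceptance
    # probability involves only the difference of the misfits.
    v_hat = np.sqrt(1 - beta**2) * v + beta * rng.normal(size=n)
    if np.log(rng.uniform()) < Phi(v, lam) - Phi(v_hat, lam):
        v = v_hat
    # lambda-step: propose from its Gaussian full conditional
    # N(lam_bar_k, sig_k), so the Metropolis-Hastings ratio equals one.
    Hv = H @ v
    sig_k = 1.0 / (tau * (Hv @ Hv) + 1.0 / sigma)
    lam_bar_k = sig_k * (tau * (d @ Hv) + lam_bar / sigma)
    lam = lam_bar_k + np.sqrt(sig_k) * rng.normal()
    phi_best = min(phi_best, Phi(v, lam))
```

Because the $\lambda$-proposal coincides with its Gaussian full conditional, the acceptance probability in step 6 of Algorithm $\ref{alg B}$ is identically one; any other proposal $q$ would require the stated Metropolis--Hastings correction.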
\section*{Acknowledgments}
This work was supported by the NSFC grant 12271428, the Major Projects of the NSFC grants 12090020 and 12090021, and the National Key R\&D Program of the Ministry of Science and Technology of China grant 2020YFA0713403.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:introduction}
In the graph drawing literature, the problem of finding
aesthetically pleasant drawings of graphs has been extensively
studied. The graph drawing community has introduced and studied
several criteria that judge the quality of a graph drawing, such as
the number of crossings among pairs of edges, the number of edge
bends, the maximum edge length, the total area occupied by the
drawing and so on (see the books~\cite{DBTT94,KW01}).
Motivated by the fact that the edge crossings have negative impact
on the human understanding of a graph drawing
\cite{P00,PCA02,CPCM02}, a great amount of research effort has been
devoted on the problem of finding drawings with minimum number of
edge crossings. Unfortunately, this problem is $\mathcal{NP}$-complete in
general \cite{GJ83}. However, recent eye-tracking experiments by
Huang et al.\ \cite{Hu07,HHE08} indicate that the negative impact of
an edge crossing is eliminated in the case where the crossing angle
is greater than $70$ degrees. These results motivated the study of a
new class of drawings, called \emph{right angle crossing drawings}, or
\emph{RAC drawings} for short \cite{ACBDFKS09,DGDLM10,DEL09,DEL10}.
A RAC drawing of a graph is a polyline drawing in which every pair
of crossing edges intersects at right angle.
Didimo, Eades and Liotta \cite{DEL09} proved that it is always
feasible to construct a RAC drawing of a given graph with at most
three bends per edge. In this work, we prove that the problem of
determining whether an input graph admits a straight-line RAC
drawing is $\mathcal{NP}$-hard.
\subsection{Related Work}
\label{sec:related-work}
Didimo et al.\ \cite{DEL09} initiated the study of RAC drawings and
showed that any straight-line RAC drawing with $n$ vertices has at
most $4n-10$ edges and that any graph admits a RAC drawing with at
most three bends per edge. A slightly weaker bound on the number of
edges of an $n$-vertex RAC drawing was given by Arikushi et al.\
\cite{AFKMT10}, who proved that any straight-line RAC drawing with
$n$ vertices has at most $4n-8$ edges. Angelini et al.\
\cite{ACBDFKS09} showed that the problem of determining whether an
acyclic planar digraph admits a straight-line upward RAC drawing is
$\mathcal{NP}$-hard. Furthermore, they constructed digraphs admitting
straight-line upward RAC drawings, that require exponential area. Di
Giacomo et al.\ \cite{DGDLM10} studied the interplay between the
crossing resolution, the maximum number of bends per edges and the
required area. Didimo et al.\ \cite{DEL10} presented a
characterization of complete bipartite graphs that admit a
straight-line RAC drawing. Arikushi et al.\ \cite{AFKMT10} studied
polyline RAC drawings in which each edge has at most one or two
bends and proved that the number of edges is at most $O(n)$ and
$O(n\log^2{n})$, respectively. Dujmovic et al.\ \cite{DGMW10}
studied \emph{$\alpha$~Angle Crossing} (or \emph{$\alpha$AC} for
short) drawings, i.e., drawings in which the smallest angle formed
by an edge crossing is at least $\alpha$. In their work, they
presented upper and lower bounds on the number of edges. Van Kreveld
\cite{vK10} studied how much better (in terms of area required,
edge-length and angular resolution) a RAC drawing of a planar graph
can be than any planar drawing of the same graph.
Closely related to the RAC drawing problem, is the angular
resolution maximization problem, i.e., the problem of maximizing the
smallest angle formed by any two adjacent edges incident to a common
vertex. Note that both problems correlate the resolution of a graph
with the visual distinctiveness of the edges in a graph drawing.
Formann et al.\ \cite{FHHKLSWW93} introduced the notion of the
angular resolution of straight-line drawings. In their work, they
proved that determining whether a graph of maximum degree $d$ admits
a drawing of angular resolution $\frac{2 \pi}{d}$ (i.e., the obvious
upper bound) is $\mathcal{NP}$-hard. They also presented upper and lower
bounds on the angular resolution for several types of graphs of
maximum degree $d$. Malitz and Papakostas \cite{MP92} proved that
for any planar graph of maximum degree $d$, it is possible to
construct a planar straight-line drawing with angular resolution
$\Omega(\frac{1}{7^d})$.
Garg and Tamassia \cite{GT94} presented a continuous tradeoff
between the area and the angular resolution of planar straight-line
drawings. For the case of connected planar graphs with $n$ vertices
and maximum degree $d$, Gutwenger and Mutzel \cite{GM98} presented a
linear time algorithm that constructs planar polyline grid drawings
on a $(2n-5)\times(\frac{3}{2}n-\frac{7}{2})$ grid with at most
$5n-15$ bends and minimum angle greater than $\frac{2}{d}$.
Bodlaender and Tel \cite{BT04} showed that planar graphs with
angular resolution at least $\frac{\pi}{2}$ are rectilinear.
Recently, Lin and Yen \cite{LY05} presented a force-directed
algorithm based on edge-edge repulsion that constructs drawings with
high angular resolution. Argyriou et al.\ \cite{ABS10} studied a
generalization of the crossing and angular resolution maximization
problems, in which the minimum of these quantities is maximized and
presented optimal algorithms for complete graphs and a
force-directed algorithm for general graphs.
The rest of this paper is structured as follows: In
Section~\ref{sec:preliminaries}, we introduce preliminary properties
and notation. In Section~\ref{sec:racgraphs}, we present a class of
graphs with unique RAC combinatorial embedding. In
Section~\ref{sec:np}, we show that the straight-line RAC drawing
problem is $\mathcal{NP}$-hard. We conclude in Section~\ref{sec:conclusions}
with open problems.
\section{Preliminaries}
\label{sec:preliminaries}
Let $G=(V,E)$ be a simple, undirected graph drawn in the plane. We
denote by $\Gamma(G)$ the drawing of $G$. Given a drawing
$\Gamma(G)$ of a graph $G$, we denote by $\ell_{u,v}$ the line
passing through vertices $u$ and $v$. By $\ell_{u,v}'$, we refer to
the semi-line that emanates from vertex $u$, towards vertex $v$.
Similarly, we denote by $\ell_{u,v,w}$ ($\ell_{u,v,w}'$) the line
(semi-line) that coincides (emanates from) vertex $u$ and is
perpendicular to edge $(v,w)$. The following properties are used in
the rest of this paper.
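Numerically, the right-angle condition on a crossing can be checked directly: two properly crossing straight-line segments form a RAC crossing exactly when their direction vectors are orthogonal. The helper below is a purely illustrative sketch, not part of the paper's machinery.

```python
def crossing_angle_is_right(p1, p2, q1, q2, eps=1e-9):
    """Return True iff open segments p1p2 and q1q2 properly cross at 90 degrees.

    Points are (x, y) tuples; an illustrative helper, not from the paper.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1])

    def cross(a, b):
        return a[0] * b[1] - a[1] * b[0]

    d1, d2 = sub(p2, p1), sub(q2, q1)
    denom = cross(d1, d2)
    if abs(denom) < eps:                        # parallel: no proper crossing
        return False
    t = cross(sub(q1, p1), d2) / denom          # parameter along p1p2
    s = cross(sub(q1, p1), d1) / denom          # parameter along q1q2
    if not (eps < t < 1 - eps and eps < s < 1 - eps):
        return False                            # segments do not properly cross
    return abs(d1[0] * d2[0] + d1[1] * d2[1]) < eps   # perpendicular directions
```

For example, the two diagonals of an axis-aligned square cross at a right angle, whereas a diagonal and a vertical chord do not.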
\begin{property}[Didimo, Eades and Liotta \cite{DEL09}]
In a straight-line RAC drawing there cannot be three mutually
crossing edges. \label{prp:three-crossing-edges}
\end{property}
\begin{property}[Didimo, Eades and Liotta \cite{DEL09}]
In a straight-line RAC drawing there cannot be a triangle
$\mathcal{T}$ and two edges $(a,b)$ and $(a,b')$, such that $a$ lies
outside $\mathcal{T}$ and $b$, $b'$ lie inside $\mathcal{T}$.
\label{prp:triangle-edges}
\end{property}
\section{A Class of Graphs with Unique RAC Combinatorial Embedding}
\label{sec:racgraphs}
The $\mathcal{NP}$-hardness proof employs a reduction from the well-known
$3$-SAT problem \cite{GJ79}. However, before we proceed with the
reduction details, we first provide a graph, referred to as
\emph{augmented square antiprism graph}, which has the following
property: All RAC drawings of this graph have two ``symmetric''
combinatorial embeddings. Figures~\ref{fig:basic-gadget-2} and
\ref{fig:basic-gadget-2_2} illustrate this property. Observe that the augmented
square antiprism graph consists of a ``central'' vertex $v_0$, which
is incident to all vertices of the graph, and two quadrilaterals
(refer to the dashed and bold drawn squares in
Figure~\ref{fig:basic-gadget-2_2}), that are denoted by $\mathcal{Q}_1$~and
$\mathcal{Q}_2$~in the remainder of this paper. Removing the central vertex,
the remaining graph corresponds to the skeleton of a square
antiprism, and, it is commonly referred to as \emph{square antiprism
graph}.
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.32\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-2}{}]
{\includegraphics[width=\textwidth]{images/basic-gadget-2}}
\end{minipage}
\hfill
\begin{minipage}[b]{.32\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-2_2}{}]
{\includegraphics[width=\textwidth]{images/basic-gadget-2_2}}
\end{minipage}
\begin{minipage}[b]{.32\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-3}{}]
{\includegraphics[width=\textwidth]{images/basic-gadget-3}}
\end{minipage}
\caption{(a)-(b)~Two different RAC drawings of the augmented square antiprism graph with different combinatorial embeddings.
(a)-(c) Two different RAC drawings with the same combinatorial embedding.}
\label{fig:basic-gadget-1}
\end{figure}
If we replace the two quadrilaterals with two triangles, then the
implied graph is the \emph{augmented triangular antiprism graph}.
Didimo et al.\ \cite{DEL09}, who showed that any $n$-vertex graph
which admits a RAC-drawing can have at most $4n-10$ edges, used the
augmented triangular antiprism graph as an example of a graph that
achieves the bound of $4n-10$ edges (see Figure~1.c in
\cite{DEL09}). In contrast to the augmented triangular antiprism
graph, the augmented square antiprism graph does not achieve this
upper bound. In general, the class of \emph{the augmented $k$-gon
antiprism graphs, $k \geq 3$}, is a class of non-planar graphs, that
all admit RAC drawings. Recall that any planar graph on $n$ vertices
has at most $3n-6$ edges; since an augmented $k$-gon antiprism graph
has $2k+1$ vertices and $5k$ edges, the graphs of this class are not
planar.
\begin{lemma}
There does not exist a RAC drawing of the augmented square antiprism
graph in which the central vertex $v_0$ lies on the exterior of
quadrilateral $\mathcal{Q}_i$, $i=1,2$, and an edge connecting $v_0$ with a
vertex of $\mathcal{Q}_i$~crosses an edge of $\mathcal{Q}_i$. \label{lem:v0nocross}
\end{lemma}
\begin{proof}
Let $\mathcal{Q}$~be one of quadrilaterals $\mathcal{Q}_i$, $i=1,2$ and let $v_a$,
$v_b$, $v_c$ and $v_d$ be its vertices, consecutive along
quadrilateral $\mathcal{Q}$. Assume to the contrary that vertex $v_0$ lies on
the exterior of quadrilateral $\mathcal{Q}$~and there exists an edge, say
$(v_0,v_a)$, that emanates from vertex $v_0$ towards a vertex of
quadrilateral $\mathcal{Q}$, such that it crosses an edge, say
$(v_b,v_c)$\footnote{The case where it crosses edge $(v_c,v_d)$ is
symmetric.}, of quadrilateral $\mathcal{Q}$. Vertices $v_b$ and $v_c$ have
the following properties: (a)~they are both connected to vertex
$v_0$, and (b)~they have a common neighbor $v_{bc}$, which is incident
to vertex $v_0$ and $v_{bc} \notin\ \mathcal{Q}$~(see
Figure~\ref{fig:basic-gadget-1}).
Observe that if vertex $v_{bc}$ lies in the non-colored regions of
Figure~\ref{fig:v0nocross-regions}, then at least one of the edges
incident to $v_{bc}$ crosses either $(v_0,v_a)$ or $(v_b,v_c)$,
which are already involved in a right-angle crossing. This leads to
a situation where three edges mutually cross, which, by
Property~\ref{prp:three-crossing-edges} is not permitted. Hence,
vertex $v_{bc}$ should lie in the interior of the dark-gray colored
regions $R_1$, $R_2$ or $R_3$ of Figure~\ref{fig:v0nocross-regions}.
We consider each of these cases separately in the following. Note
that there exist cases where $R_2 \cup R_3 = \emptyset$ (i.e.,
vertex $v_0$ is close to the intersection point of $(v_0,v_a)$ and
$(v_b,v_c)$), or $R_2 = \emptyset$ (i.e., vertex $v_c$ is close to
the intersection point of $(v_0,v_a)$ and $(v_b,v_c)$), or $R_3 =
\emptyset$ (i.e., vertex $v_b$ is close to the intersection point of
$(v_0,v_a)$ and $(v_b,v_c)$).
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{images/v0nocross-regions}
\caption{Vertex $v_{bc}$ should lie in the interior of $R_1$ or $R_2$ or $R_3$.}
\label{fig:v0nocross-regions}
\end{figure}
\begin{description}
\item [Case i:] \emph{Vertex $v_{bc}$ is in the interior of $R_1$}. This case is
depicted in Figure~\ref{fig:v0nocross-a-1}. Let $T_{v_{bc}}$ be the
region formed by vertices $v_{bc}$, $v_b$ and $v_c$ (i.e., the
dark-gray colored region of Figure~\ref{fig:v0nocross-a-1}). Vertex
$v_d$, which has to be connected to vertices $v_a$ and $v_c$, and
the central vertex $v_0$, cannot lie within $T_{v_{bc}}$, since edge
$(v_a,v_d)$ would have to cross edge $(v_b,v_c)$, which is already
involved in a right-angle crossing. Since vertex $v_d$ has to be
connected to vertex $v_0$, it has to lie on semi-line
$\ell_{v_0,v_c,v_{bc}}'$, as illustrated in
Figure~\ref{fig:v0nocross-a-1}. However, under this restriction, the
common neighbor $v_{cd}$ of vertices $v_c$ and $v_d$ cannot be
connected to vertex $v_0$, since edge $(v_0,v_{cd})$ should be
perpendicular to one of the edges of $T_{v_{bc}}$, which cannot be
accomplished without introducing an edge overlap with edge
$(v_0,v_d)$.
\begin{figure}[htb]
\centering
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0nocross-a-1}{}]
{\includegraphics[width=.85\textwidth]{images/v0nocross-a-1}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0nocross-b}{}]
{\includegraphics[width=\textwidth]{images/v0nocross-b}}
\end{minipage}
\caption{(a)~Vertex $v_{bc}$ lies in the interior of $R_1$. (b)~Vertex $v_{bc}$ lies in the interior of $R_3$.}
\label{fig:v0nocross}
\end{figure}
\item [Case ii:] \emph{Vertex $v_{bc}$ is in the interior of either $R_2$ or $R_3$.}
Say without loss of generality that vertex $v_{bc}$ is in the
interior of $R_3$. This case is depicted in
Figure~\ref{fig:v0nocross-b}. Let $u$ be a vertex of the augmented
antiprism graph (except $v_a$) and assume that $u$ lies in the
interior of the triangular face, say $T_{v_{bc}}$, formed by
vertices $v_b$, $v_c$ and $v_{bc}$. Vertex $u$ has to be connected
to the central vertex $v_0$. Edge $(v_0,u)$ should not be involved
in crossings with edges $(v_0,v_a)$ and $(v_b,v_c)$, since they are
already involved in a right-angle crossing. If edge $(v_0,u)$
crosses edge $(v_0, v_{bc})$, then the three vertices $v_b$, $v_c$
and $v_{bc}$ that define triangle $T_{v_{bc}}$ must be collinear,
which leads to a contradiction. Therefore, triangle $T_{v_{bc}}$
cannot accommodate any other vertex (except $v_a$). Now observe that
each vertex of quadrilateral $\mathcal{Q}$~has degree five and there do not
exist three vertices of quadrilateral $\mathcal{Q}$~that have a common
neighbor (see Figure~\ref{fig:basic-gadget-1}). These properties
trivially hold for vertex $v_a$, since $v_a\in \mathcal{Q}$. Based
on the above properties, each neighbor of vertex $v_a$ can lie
either in the interior of the dark-gray region of
Figure~\ref{fig:v0nocross-b}, or, on the external face of the
already constructed drawing (along the dashed semi-lines
$\ell_{v_a,v_c,v_{bc}}'$ and $\ell_{v_a,v_b,v_{bc}}'$ of
Figure~\ref{fig:v0nocross-b}, respectively). This implies that only
four of the vertices incident to vertex $v_a$ can be placed, i.e., one
of them should lie in the light-gray colored region of
Figure~\ref{fig:v0nocross-b} and thus cannot be connected to
vertex $v_0$. \qed
\end{description}
\end{proof}
\begin{lemma}
In any RAC drawing of the augmented square antiprism graph,
quadrilateral $\mathcal{Q}_i$, $i=1,2$ is drawn planar. \label{lem:p6Planar}
\end{lemma}
\begin{proof}
Let $\mathcal{Q}$~be one of quadrilaterals $\mathcal{Q}_i$, $i=1,2$, and let, as in the
previous lemma, $v_a$, $v_b$, $v_c$ and $v_d$ be its vertices,
consecutive along quadrilateral $\mathcal{Q}$. Assume to the contrary that in
a RAC drawing of the augmented square antiprism graph, quadrilateral
$\mathcal{Q}$~is not drawn planar, and say that edges $(v_a,v_b)$ and
$(v_c,v_d)$ form a right-angle crossing. This case is illustrated in
Figure~\ref{fig:q4planar}. In the following, we derive a
contradiction for the cases where central vertex $v_0$ lies (i)~in the
interior of a triangular face of quadrilateral $\mathcal{Q}$, and (ii)~on
the external face of quadrilateral $\mathcal{Q}$. Note that a non-planar
RAC drawing of a quadrilateral cannot contain more than one
(right-angle) crossing. Hence, its bounded faces are triangular.
\begin{itemize}
\item \textbf{Case i:} \emph{Vertex $v_0$ lies in the interior of a triangular face of quadrilateral $\mathcal{Q}$.}
Assume without loss of generality that vertex $v_0$ (which is
incident to all vertices of quadrilateral $\mathcal{Q}$) lies in the interior
of the triangular face formed by vertices $v_b$, $v_c$ and the
intersection point of edges $(v_a,v_b)$ and $(v_c,v_d)$, as in
Figure~\ref{fig:q4planar-2}. In this case, edges $(v_0,v_a)$,
$(v_a,v_b)$ and $(v_c,v_d)$ mutually cross, which leads to a
contradiction, due to Property~\ref{prp:three-crossing-edges}.
\begin{figure}[h!tb]
\begin{minipage}[b]{.44\textwidth}
\subfloat[\label{fig:q4planar-2}{Vertex $v_0$ lies in the interior of a triangular face.}]
{\includegraphics[width=\textwidth]{images/q4planar-2}}
\end{minipage}
\hfill
\begin{minipage}[b]{.52\textwidth}
\centering
\subfloat[\label{fig:q4planar-3}{Vertex $v_0$ lies on the external face of quadrilateral $\mathcal{Q}$.}]
{\includegraphics[width=\textwidth]{images/q4planar-3}}
\end{minipage}
\caption{Quadrilateral $\mathcal{Q}$~is not drawn planar.}
\label{fig:q4planar}
\end{figure}
\item \textbf{Case ii:} \emph{Vertex $v_0$ lies on the external face of
quadrilateral $\mathcal{Q}$.} This case is illustrated in
Figure~\ref{fig:q4planar-3}. Recall that by
Lemma~\ref{lem:v0nocross}, vertex $v_0$ cannot introduce additional
crossings on quadrilateral $\mathcal{Q}$. We will first show that the common
neighbor $v_{ab}$ of vertices $v_a$ and $v_b$ cannot lie in the
region ``above'' line $\ell_{v_b,v_c}$. In the case, where vertex
$v_{ab}$ lies in the region ``above'' $\ell_{v_b,v_c}$ and to the
``left'' of both edge $(v_c,v_d)$ and semi-line $\ell_{v_0,v_d}'$,
edge $(v_b,v_{ab})$ would cross edge $(v_c,v_d)$, which is not
permitted by Property~\ref{prp:three-crossing-edges}. Symmetrically,
vertex $v_{ab}$ cannot lie in the region ``above'' $\ell_{v_b,v_c}$
and to the ``right'' of both edge $(v_a,v_b)$ and semi-line
$\ell_{v_0,v_a}'$. If vertex $v_{ab}$ lies within the left
gray-colored unbounded region of Figure~\ref{fig:q4planar-3} (that
is formed by semi-lines $\ell_{v_0,v_a}'$ and $\ell_{v_b,v_a}'$), then
edge $(v_{ab}, v_b)$ crosses two non-parallel edges, $(v_0,v_a)$ and
$(v_a,v_d)$. In the case where $v_{ab}$ lies in the right
gray-colored unbounded region of Figure~\ref{fig:q4planar-3} (that
is formed by semi-lines $\ell_{v_0,v_d}'$, $\ell_{v_c,v_d}'$), then
$(v_a,v_{ab})$ either crosses both $(v_a,v_d)$ and $(v_c,v_d)$ which
are non-parallel, or crosses edge $(v_0,v_d)$ forming a non-right
angle crossing. In the case where vertex $v_{ab}$ lies in the
interior of the triangle formed by vertices $v_b$, $v_c$ and the
intersection point of edges $(v_a,v_b)$ and $(v_c,v_d)$, edge
$(v_a,v_{ab})$ would cross edge $(v_c,v_d)$, which leads to a
violation of Property~\ref{prp:three-crossing-edges}. Therefore, vertex
$v_{ab}$ should be ``below'' $\ell_{v_b,v_c}$.
We continue our reasoning on vertex $v_{ab}$. Vertex $v_{ab}$ cannot
lie to the ``left'' of edge $(v_0,v_a)$, since edge $(v_b,v_{ab})$
or $(v_a,v_{ab})$ would cross more than one (non-parallel) edge
incident to vertex $v_0$. If vertex $v_{ab}$ lies to the ``right''
of edge $(v_0,v_b)$, then edge $(v_a,v_{ab})$ either crosses edge
$(v_c,v_d)$, which is not permitted by
Property~\ref{prp:three-crossing-edges}, or both edges $(v_0,v_b)$
and $(v_0,v_c)$, which are non-parallel. We complete our reasoning on
vertex $v_{ab}$ with the case where it lies in the triangle formed by
vertices $v_0$, $v_b$ and $v_c$. In this case, $(v_a,v_{ab})$ should
be perpendicular to edge $(v_0, v_c)$. This suggests that the angle
formed by edges $(v_c,v_d)$ and $(v_0,v_c)$ is greater than
$180^\circ$ and therefore,
edge $(v_0,v_d)$ either crosses $(v_a,v_b)$, or another edge of
quadrilateral $\mathcal{Q}$, which trivially leads to a contradiction, due to
Property~\ref{prp:three-crossing-edges}, or due to
Lemma~\ref{lem:v0nocross}, respectively. Based on the above, vertex
$v_{ab}$ should be within the left gray-colored region of
Figure~\ref{fig:q4planar-3}, along semi-line $\ell_{v_b,v_c,v_0}'$.
Following a similar reasoning scheme, as for vertex $v_{ab}$, we can
prove that the common neighbor $v_{cd}$ of vertices $v_c$ and $v_d$
should lie within the right light-gray colored region of
Figure~\ref{fig:q4planar-3}, along semi-line $\ell_{v_c,v_0,v_b}'$.
However, in this case, a common neighbor of vertices $v_{ab}$ and
$v_{cd}$, say $v_{bc}$, should lie on the intersection of semi-lines
$\ell_{v_b,v_0,v_c}'$ and $\ell_{v_c,v_0,v_b}'$, which leads to edge
overlaps. Thus, there exists no feasible placement for vertex
$v_{bc}$. \qed
\end{itemize}
\end{proof}
\begin{lemma}
In any RAC drawing of the augmented square antiprism graph, the
central vertex $v_0$ lies in the interior of quadrilateral $\mathcal{Q}_i$,
$i=1,2$. \label{lem:v0internal}
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:p6Planar}, it follows that quadrilateral
$\mathcal{Q}_i$~should be drawn planar, for each $i=1,2$. In order to prove
this lemma, we assume to the contrary that central vertex $v_0$ lies
on the exterior of one of the two quadrilaterals. Say, w.l.o.g., on
the exterior of quadrilateral $\mathcal{Q}_1$. Let $v_a$, $v_b$, $v_c$ and
$v_d$ be $\mathcal{Q}_1$'s vertices, consecutive along quadrilateral $\mathcal{Q}_1$.
Then, by Lemma~\ref{lem:v0nocross}, vertex $v_0$ cannot contribute
additional crossings on quadrilateral $\mathcal{Q}_1$. This suggests that the
drawing of the graph induced by quadrilateral $\mathcal{Q}_1$~and vertex $v_0$
will resemble the one depicted in Figure~\ref{fig:v0internal-gen}.
We denote by $T_{v_0}$ the triangle formed by vertex $v_0$ and the
two vertices, which are on the convex hull of $\mathcal{Q}_1 \cup \lbrace v_0 \rbrace$~(refer
to the gray-colored triangle of Figure~\ref{fig:v0internal-gen}).
\begin{figure}[htb]
\centering
\includegraphics[width=.35\textwidth]{images/v0internal-gen}
\caption{Any drawing of the graph induced by $\mathcal{Q}_1$~and $v_0$ has to resemble this one.}
\label{fig:v0internal-gen}
\end{figure}
Before we proceed with the detailed proof of this lemma, we recall
some properties of the augmented square antiprism graph. Two
consecutive vertices of $\mathcal{Q}_1$~($\mathcal{Q}_2$)~share a common vertex of
quadrilateral $\mathcal{Q}_2$~($\mathcal{Q}_1$). Each vertex of quadrilateral
$\mathcal{Q}_1$~($\mathcal{Q}_2$)~should be connected to two consecutive vertices of
quadrilateral $\mathcal{Q}_2$~($\mathcal{Q}_1$). We will prove that (i)~no vertex of
$\mathcal{Q}_2$~lies outside $T_{v_0}$, (ii)~$\mathcal{Q}_2$~cannot entirely lie in the
interior of $\mathcal{Q}_1$, (iii)~$\mathcal{Q}_2$~cannot entirely lie in the interior
of a triangular face of $T_{v_0}$, (iv)~$\mathcal{Q}_2$~cannot entirely lie
within two adjacent triangular faces of $T_{v_0}$, (v)~$\mathcal{Q}_2$~cannot
cross $\mathcal{Q}_1$, such that some of the vertices of $\mathcal{Q}_2$~reside within a
triangular face of $T_{v_0}$, whereas the remaining ones within
$\mathcal{Q}_1$. Note that quadrilateral $\mathcal{Q}_2$~cannot entirely lie within
three triangular faces of $T_{v_0}$ incident to vertex $v_0$.
\begin{description}
\item [Case i:] We prove that no vertex of quadrilateral $\mathcal{Q}_2$~lies on
the external face of the graph induced by quadrilateral $\mathcal{Q}_1$~and
vertex $v_0$, i.e., outside $T_{v_0}$. For the sake of
contradiction, assume that there exists a vertex of quadrilateral
$\mathcal{Q}_2$, say $v_{ab}$, that lies on the external face of the graph
induced by quadrilateral $\mathcal{Q}_1$~and vertex $v_0$ (see
Figure~\ref{fig:q4external}). Vertex $v_{ab}$ should be connected to
vertices $v_a$ and $v_b$ of quadrilateral $\mathcal{Q}_1$, and to the central
vertex $v_0$. If both vertices $v_a$ and $v_b$ are inside triangle
$T_{v_0}$, then vertex $v_{ab}$, which is assumed to lie on the
external face of this graph, would violate
Property~\ref{prp:triangle-edges}, since vertices $v_a$ and $v_b$
would lie in the interior of $T_{v_0}$, whereas vertex $v_{ab}$
outside. Therefore, at least one of vertices $v_a$ and $v_b$ should
be a corner of $T_{v_0}$. Then, vertex $v_{ab}$ contributes either no
crossing (see Figure~\ref{fig:q4external-2}), or a single right-angle
crossing (see Figure~\ref{fig:q4external-1}).
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.46\textwidth}
\centering
\subfloat[\label{fig:q4external-1}{}]
{\includegraphics[width=\textwidth]{images/q4external-1}}
\end{minipage}
\hfill
\begin{minipage}[b]{.50\textwidth}
\centering
\subfloat[\label{fig:q4external-2}{}]
{\includegraphics[width=\textwidth]{images/q4external-2}}
\end{minipage}
\begin{minipage}[b]{.55\textwidth}~\end{minipage}
\caption{Vertex $v_{ab}$ of quadrilateral $\mathcal{Q}_2$~lies on the external face of the graph induced by quadrilateral $\mathcal{Q}_1$~and vertex $v_0$.}
\label{fig:q4external}
\end{figure}
Now let $v_{bc}$ be a vertex of quadrilateral $\mathcal{Q}_2$, which is
incident to vertex $v_{ab}$. Vertex $v_{bc}$ is also incident to two
consecutive vertices of quadrilateral $\mathcal{Q}_1$, i.e., $v_b$ and $v_c$.
We first turn our attention to the case where $v_{ab}$ contributes a
single right-angle crossing on the graph induced by quadrilateral
$\mathcal{Q}_1$~and vertex $v_0$ (see Figure~\ref{fig:q4external-1}). Then, by
Property~\ref{prp:triangle-edges}, vertex $v_{bc}$ should lie in the
interior of triangle $T_{v_0}$. This immediately leads to a
contradiction, since edge $(v_{ab},v_{bc})$ should cross edge
$(v_0,v_a)$, which is already involved in a right-angle crossing
(see Figure~\ref{fig:q4external-1}).
Consider now the case where vertex $v_{ab}$ contributes no crossing
on $T_{v_0}$. Observe that vertex $v_{bc}$, which is adjacent to
vertex $v_{ab}$, vertices $v_b$ and $v_c$ of $\mathcal{Q}_1$, and the central
vertex $v_0$, cannot lie in the dark-gray region of
Figure~\ref{fig:q4external-2}, since in this case, edge $(v_a,v_b)$
would be crossed by more than one (non-parallel) edge incident to
$v_{bc}$. The case where vertex $v_{bc}$ lies within the light-gray
colored region of Figure~\ref{fig:q4external-2} leads to a
situation similar to the one depicted in
Figure~\ref{fig:q4external-1}. Therefore, vertex $v_{bc}$ should lie
``somewhere'' in the interior of $T_{v_0}$. Let $v_{ad}$ be the
common neighbor of vertices $v_a$ and $v_d$ of $\mathcal{Q}_1$, and vertex
$v_{ab}$. This vertex cannot lie within the dark-gray region of
Figure~\ref{fig:q4external-2}, for the same reason that vertex
$v_{bc}$ could not. In addition, vertex $v_{ad}$ cannot lie in the
interior of $T_{v_0}$, since in this case, both vertices $v_{bc}$
and $v_{ad}$ (that are in $T_{v_0}$), should be connected to
$v_{ab}$ (that is not in $T_{v_0}$), which trivially violates
Property~\ref{prp:triangle-edges}. Therefore, vertex $v_{ad}$ should
be on the external face of the graph induced by $\mathcal{Q}_1$~and vertices
$v_0$ and $v_{ab}$, along semi-line $\ell_{v_d,v_0,v_a}'$. However,
in this case, we are also led to a situation similar to the one
depicted in Figure~\ref{fig:q4external-1}, and subsequently, to a
contradiction.
\item [Case ii:] Say that quadrilateral
$\mathcal{Q}_2$~entirely lies within quadrilateral $\mathcal{Q}_1$~(see
Figure~\ref{fig:q4internal-1}). In this case, its vertices should be
connected to vertex $v_0$. For three vertices of quadrilateral
$\mathcal{Q}_2$, this can be accomplished using the three available edges of
quadrilateral $\mathcal{Q}_1$~(refer to the dotted edges of
Figure~\ref{fig:q4internal-1}), such that the right-angle crossings
occur along them. However, the fourth vertex cannot be connected to
vertex $v_0$, since only three edges of quadrilateral $\mathcal{Q}_1$~can be
used to realize connections with vertex $v_0$ (see the topmost edge
of Figure~\ref{fig:q4internal-1}).
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.30\textwidth}
\centering
\subfloat[\label{fig:q4internal-1}{}]
{\includegraphics[width=\textwidth]{images/q4internal-1}}
\end{minipage}
\hfill
\begin{minipage}[b]{.33\textwidth}
\centering
\subfloat[\label{fig:q4internal-2}{}]
{\includegraphics[width=\textwidth]{images/q4internal-2}}
\end{minipage}
\hfill
\begin{minipage}[b]{.33\textwidth}
\centering
\subfloat[\label{fig:q4internal-3}{}]
{\includegraphics[width=\textwidth]{images/q4internal-3}}
\end{minipage}
\caption{Quadrilateral $\mathcal{Q}_2$~lies (a)~in the interior of $\mathcal{Q}_1$, (b)~in the interior of a triangular face
$T_{Q_1}$ incident to vertex $v_0$, and (c)~within two adjacent triangular faces incident to vertex $v_0$.}
\label{fig:q4internal-a}
\end{figure}
\item [Case iii:] Assume now that quadrilateral $\mathcal{Q}_2$~entirely lies within a
triangular face, say $T_{Q_1}$, incident to vertex $v_0$ (see
Figure~\ref{fig:q4internal-2}). Then, there exists at least one
vertex of quadrilateral $\mathcal{Q}_1$, say vertex $q$, which is incident to
two vertices of quadrilateral $\mathcal{Q}_2$, and is not identified with a
vertex at the corners of $T_{Q_1}$ (see
Figure~\ref{fig:q4internal-2}). Vertex $q$ has to be connected to
two vertices of quadrilateral $\mathcal{Q}_2$. However, vertex $q$ is external
to triangle $T_{Q_1}$, whereas its two incident vertices lie in the
interior of this triangle, which leads to a violation of
Property~\ref{prp:triangle-edges}.
\item [Case iv:] Say that quadrilateral $\mathcal{Q}_2$~entirely lies within two adjacent
triangular faces, say $T_{Q_1}^1$ and $T_{Q_1}^2$, incident to
vertex $v_0$ (see Figure~\ref{fig:q4internal-3}). Then,
quadrilateral $\mathcal{Q}_2$~should be ``perpendicular'' to the common edge
of $T_{Q_1}^1$ and $T_{Q_1}^2$. Recall that two consecutive vertices
of quadrilateral $\mathcal{Q}_2$~share a common vertex of quadrilateral $\mathcal{Q}_1$.
Hence, we can find a vertex $q$ of quadrilateral $\mathcal{Q}_1$, which is not
identified with the common vertex of $T_{Q_1}^1$ and $T_{Q_1}^2$,
and is incident to a pair of vertices of quadrilateral $\mathcal{Q}_2$, that
do not lie in the same triangular face (i.e., the topmost vertices
of quadrilateral $\mathcal{Q}_2$~or the bottommost vertices of quadrilateral
$\mathcal{Q}_2$~in Figure~\ref{fig:q4internal-3}). This leads to a
contradiction, since the common edge of $T_{Q_1}^1$ and $T_{Q_1}^2$
cannot be crossed, as it is already involved in a right-angle
crossing (refer to the dotted-edges of
Figure~\ref{fig:q4internal-3}).
\item [Case v:] We consider the case where quadrilateral $\mathcal{Q}_2$~crosses quadrilateral
$\mathcal{Q}_1$, such that some of the vertices of quadrilateral $\mathcal{Q}_2$~reside
within a triangular face of $T_{v_0}$, whereas the remaining ones
within quadrilateral $\mathcal{Q}_1$. We will derive a contradiction in each
of the following cases: (i)~Two vertices of $\mathcal{Q}_2$~lie in the interior of a
single triangular face incident to $v_0$, (ii)~two vertices of
$\mathcal{Q}_2$~lie in the interior of two adjacent triangular faces,
(iii)~three vertices of $\mathcal{Q}_2$~lie in the interior of two adjacent
triangular faces and two of them lie in the same triangular face of
$T_{v_0}$, (iv)~three vertices of $\mathcal{Q}_2$~lie in the interior of three
pairwise-adjacent triangular faces incident to vertex $v_0$. Recall
that none of the vertices of $\mathcal{Q}_2$~lies in the external face of the
graph induced by quadrilateral $\mathcal{Q}_1$~and vertex $v_0$. Let, with a
slight abuse of notation, $q_a$, $q_b$, $q_c$ and $q_d$ be the
vertices of quadrilateral $\mathcal{Q}_2$. Assume first that vertices $q_a$
and $q_b$ are in the interior of a single triangular face, whereas
vertices $q_c$ and $q_d$ in the interior of quadrilateral $\mathcal{Q}_1$~(see
Figure~\ref{fig:q4internal-4}). In this case, edges $(q_a,q_d)$ and
$(q_b,q_c)$ should perpendicularly cross quadrilateral $\mathcal{Q}_1$. The
connections between vertices $q_c$ and $q_d$ with vertex $v_0$ can
be accomplished using two of the available edges of quadrilateral
$\mathcal{Q}_1$, such that the right-angle crossings occur along them (refer
to dotted edges of Figure~\ref{fig:q4internal-4}). Thus, the
triangular faces that are adjacent to the one that accommodates
vertices $q_a$ and $q_b$ (refer to the light-gray faces of
Figure~\ref{fig:q4internal-4}) cannot be further used to connect
vertices of quadrilateral $\mathcal{Q}_1$~to vertices of quadrilateral $\mathcal{Q}_2$.
Then, there exists a vertex of quadrilateral $\mathcal{Q}_1$, say $q$, that
is not identified with any of the vertices of the face that
accommodates vertices $q_a$ and $q_b$, and either $q_a$ or $q_b$ has
to be connected to vertex $q$. However, this cannot be accomplished,
since the edge from either $q_a$ or $q_b$ to vertex $q$ would cross
more than one non-parallel edge (refer to the dashed edge of
Figure~\ref{fig:q4internal-4}).
\begin{figure}[htb]
\centering
\includegraphics[width=.55\textwidth]{images/q4internal-4}
\caption{Vertices $q_a$ and $q_b$ are in the interior of a single triangular face incident to $v_0$.}
\label{fig:q4internal-4}
\end{figure}
Say now that vertices $q_a$ and $q_b$ are in the interior of two
adjacent triangular faces incident to vertex $v_0$, whereas vertices
$q_c$ and $q_d$ lie within quadrilateral $\mathcal{Q}_1$. This case is illustrated
in Figure~\ref{fig:q4internal-5}. Then, one of the vertices that lie
in the interior of quadrilateral $\mathcal{Q}_1$, say vertex $q_d$, can be
connected to vertex $v_0$ using one of the available edges of
quadrilateral $\mathcal{Q}_1$~(refer to the dotted edge of
Figure~\ref{fig:q4internal-5}). However, vertex $q_c$ cannot be
connected to vertex $v_0$, since only three of the edges of
quadrilateral $\mathcal{Q}_1$~can be used to realize connections from the
vertices that lie within quadrilateral $\mathcal{Q}_1$, to vertex $v_0$.
Consider now the case where three vertices, say $q_a$, $q_b$ and
$q_c$ of quadrilateral $\mathcal{Q}_2$~are in the interior of two adjacent
triangular faces incident to vertex $v_0$, and two of vertices
$q_a$, $q_b$ and $q_c$, say w.l.o.g., $q_b$ and $q_c$, lie in the
same triangular face (see Figure~\ref{fig:q4internal-7}). Then,
vertex $q_d$, as in the previous case, can be connected to vertex
$v_0$ using the ``last'' available edge of quadrilateral
$\mathcal{Q}_1$~(refer to the dotted edge of Figure~\ref{fig:q4internal-7}).
However, in this case, there exists a vertex of quadrilateral $\mathcal{Q}_1$,
say $q$, that is not identified with any of the vertices of the
face that accommodates vertices $q_a$, $q_b$ and $q_c$, which has to
be connected to one of the vertices $q_a$, $q_b$ or $q_c$. However,
this cannot be accomplished, since an edge from either vertex $q_a$,
or $q_b$, or $q_c$, to vertex $q$ would cross more than one
non-parallel edge (refer to the dashed edge of
Figure~\ref{fig:q4internal-7}).
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.28\textwidth}
\centering
\subfloat[\label{fig:q4internal-5}{}]
{\includegraphics[width=\textwidth]{images/q4internal-5}}
\end{minipage}
\hfill
\begin{minipage}[b]{.29\textwidth}
\centering
\subfloat[\label{fig:q4internal-7}{}]
{\includegraphics[width=\textwidth]{images/q4internal-7}}
\end{minipage}
\hfill
\begin{minipage}[b]{.39\textwidth}
\centering
\subfloat[\label{fig:q4internal-6}{}]
{\includegraphics[width=\textwidth]{images/q4internal-6}}
\end{minipage}
\caption{(a)~Vertices $q_a$ and $q_b$ are in the interior of two adjacent triangular faces incident to vertex $v_0$. Vertices $q_a$ and $q_b$ are not in the same
triangular face incident to vertex $v_0$. (b)~Vertices $q_a$, $q_b$ and $q_c$ are in the interior of two adjacent triangular faces incident to vertex $v_0$, and $q_b$
and $q_c$ lie in the same triangular face. (c)~Vertices $q_a$, $q_b$ and $q_c$ are in the interior of three pairwise-adjacent triangular faces incident to vertex $v_0$.}
\label{fig:q4internal-d}
\end{figure}
The last case we have to consider is the one where three vertices,
say $q_a$, $q_b$ and $q_c$ of quadrilateral $\mathcal{Q}_2$~are in the
interior of three pairwise-adjacent triangular faces incident to
vertex $v_0$, whereas the fourth vertex $q_d$ resides within
quadrilateral $\mathcal{Q}_1$~(see Figure~\ref{fig:q4internal-6}). In this
case, vertex $q_d$ has to use the fourth edge of quadrilateral
$\mathcal{Q}_1$~to reach vertex $v_0$, which leads to a contradiction, since
only three of the edges of quadrilateral $\mathcal{Q}_1$~can be used to
realize connections from the vertices that lie within quadrilateral
$\mathcal{Q}_1$, to vertex $v_0$.
\end{description}
Thus, we have considered all possible placements of $\mathcal{Q}_2$, with
vertex $v_0$ outside of $\mathcal{Q}_1$, and each has led to a contradiction.
We conclude that vertex $v_0$ is in the interior of quadrilateral
$\mathcal{Q}_1$~(and symmetrically in the interior of $\mathcal{Q}_2$, too). \qed
\end{proof}
\begin{lemma}
There does not exist a RAC drawing of the augmented square antiprism
graph where an edge emanating from vertex $v_0$ towards a vertex of
quadrilateral $\mathcal{Q}_i$, $i=1,2$, crosses quadrilateral $\mathcal{Q}_i$.
\label{lem:v0internalnocross}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:v0internal}, vertex $v_0$ should lie in the
interior of quadrilateral $\mathcal{Q}_i$, $i=1,2$, which is drawn planar due
to Lemma~\ref{lem:p6Planar}. Assume to the contrary that in a RAC
drawing of the augmented square antiprism graph, an edge emanating
from vertex $v_0$ towards a vertex of quadrilateral $\mathcal{Q}_1$, say
$v_a$, crosses an edge, say $(v_c,v_d)$, of quadrilateral $\mathcal{Q}_1$~(see
Figure~\ref{fig:planar-convex}). Consider vertex $v_{cd}$ of
quadrilateral $\mathcal{Q}_2$, which is incident to vertices $v_c$ and $v_d$. Vertex
$v_{cd}$ cannot lie ``above'' line $\ell_{v_c,v_d}$ and to the
``left'' of semi-line $\ell_{v_d,v_a}'$, since it cannot be
connected to vertex $v_0$. In addition, it cannot lie ``above'' line
$\ell_{v_c,v_d}$ and to the ``right'' of semi-line
$\ell_{v_d,v_a}'$, since in this case it cannot be connected to
either vertex $v_c$ or $v_0$. Furthermore, vertex $v_{cd}$ cannot be
in the interior of the triangle formed by vertices $v_0$, $v_c$ and
$v_d$, as it would not be feasible to connect it to either vertex
$v_c$ or $v_d$, since in either case the connecting edge crosses edge $(v_0,v_a)$.
Also, $v_{cd}$ cannot be in the region formed by line
$\ell_{v_c,v_d}$ and edges $(v_0,v_d)$ and $(v_0,v_b)$, as it could
not be connected to vertex $v_c$. Thus, vertex $v_{cd}$ should lie
in the light-gray triangular face of Figure~\ref{fig:planar-convex},
along semi-line $\ell_{v_d,v_c,v_b}$. Following a similar reasoning
scheme, we can prove that vertex $v_{ad}$, which is incident to
vertices $v_a$, $v_d$ of quadrilateral $\mathcal{Q}_1$~and vertex $v_{ab}$ of
quadrilateral $\mathcal{Q}_2$, due to its adjacency with $v_a$ and $v_d$, can only lie
in the face formed by vertices $v_a$, $v_b$, $v_d$ and the
intersection point of edges $(v_d,v_{cd})$ and $(v_0,v_b)$. However,
under this restriction, vertex $v_{ad}$ cannot be connected to
vertex $v_{cd}$, without crossing edge $(v_0,v_b)$, which is already
involved in a right-angle crossing (refer to the dashed edge of
Figure~\ref{fig:planar-convex}). \qed
\end{proof}
\begin{figure}[h!tb]
\centering
\includegraphics[width=.7\textwidth]{images/planar-convex}
\caption{An edge emanating from vertex $v_0$ towards a vertex of $\mathcal{Q}_1$~cannot cross
$\mathcal{Q}_1$.}
\label{fig:planar-convex}
\end{figure}
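The case analysis above repeatedly asks whether two segments properly cross and, if so, whether the crossing is at a right angle. The following minimal computational sketch makes these two conditions explicit; the function and variable names are ours, not from the paper, and exact arithmetic (integer coordinates) is assumed.

```python
def orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a):
    # > 0 if a, b, c make a left turn, < 0 for a right turn, 0 if collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def rac_crossing(p1, p2, q1, q2):
    # Segments (p1,p2) and (q1,q2) cross properly iff the endpoints of each
    # segment lie strictly on opposite sides of the other's supporting line.
    crosses = (orient(q1, q2, p1) * orient(q1, q2, p2) < 0 and
               orient(p1, p2, q1) * orient(p1, p2, q2) < 0)
    # The crossing is at a right angle iff the direction vectors are orthogonal.
    dot = (p2[0] - p1[0]) * (q2[0] - q1[0]) + (p2[1] - p1[1]) * (q2[1] - q1[1])
    return crosses and dot == 0

print(rac_crossing((0, -1), (0, 1), (-1, 0), (1, 0)))   # True: perpendicular crossing
print(rac_crossing((0, -1), (0, 1), (-1, -1), (1, 1)))  # False: crossing, but not at 90 degrees
```

Note that both conditions are needed: two perpendicular segments that do not intersect, such as $(0,1)$--$(0,2)$ and $(-1,0)$--$(1,0)$, do not form a right-angle crossing.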
\begin{lemma}
There does not exist a RAC drawing of the augmented square antiprism
graph in which quadrilateral $\mathcal{Q}_1$~intersects $\mathcal{Q}_2$.
\label{lem:quahexposition}
\end{lemma}
\begin{proof}
From Lemmata~\ref{lem:p6Planar}, \ref{lem:v0internal} and
\ref{lem:v0internalnocross}, it follows that the graph induced by
quadrilateral $\mathcal{Q}_1$~and vertex $v_0$ is drawn planar with vertex
$v_0$ in the interior of both quadrilaterals $\mathcal{Q}_1$~and $\mathcal{Q}_2$.
Therefore, it should resemble the one illustrated in
Figure~\ref{fig:v0internalq4external-1}.
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-1}{}]
{\includegraphics[width=.7\textwidth]{images/v0internalq4external-1}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-6}{}]
{\includegraphics[width=.7\textwidth]{images/v0internalq4external-6}}
\end{minipage}
\caption{(a)~The graph induced by quadrilateral $\mathcal{Q}_1$~and vertex $v_0$ is drawn
planar with vertex $v_0$ in the interior of quadrilateral $\mathcal{Q}_1$. (b)~$\mathcal{Q}_1$~and $\mathcal{Q}_2$~cross and none of the vertices of $\mathcal{Q}_2$~is in the interior of $\mathcal{Q}_1$.}
\label{fig:v0internalq4external-a}
\end{figure}
In order to prove this lemma, we will derive a contradiction in each of the following
cases: (i)~$\mathcal{Q}_1$~and $\mathcal{Q}_2$~cross and none of the vertices of
$\mathcal{Q}_2$~is in the interior of quadrilateral $\mathcal{Q}_1$, (ii)~two vertices
of $\mathcal{Q}_2$~lie in the interior of $\mathcal{Q}_1$~and $\mathcal{Q}_2$~crosses either a
single edge of $\mathcal{Q}_1$, or two edges of $\mathcal{Q}_1$, (iii)~three vertices of
$\mathcal{Q}_2$~lie in the interior of $\mathcal{Q}_1$, (iv)~only one vertex of
$\mathcal{Q}_2$~lies in the interior of $\mathcal{Q}_1$. We first assume that
quadrilateral $\mathcal{Q}_1$~and quadrilateral $\mathcal{Q}_2$~cross and none of the
vertices of quadrilateral $\mathcal{Q}_2$~is in the interior of quadrilateral
$\mathcal{Q}_1$~(see Figure~\ref{fig:v0internalq4external-6}). In this case,
an edge of quadrilateral $\mathcal{Q}_2$, say $e_q$, which is involved in the
crossing, divides quadrilateral $\mathcal{Q}_1$~into two regions, say
$\mathcal{Q}_1^1$~and $\mathcal{Q}_1^2$. Obviously, edge $e_q$ should cross parallel edges
of quadrilateral $\mathcal{Q}_1$. Then, vertex $v_0$, which lies in the
interior of quadrilateral $\mathcal{Q}_1$~and is incident to all vertices of
quadrilateral $\mathcal{Q}_1$~cannot reside in either of $\mathcal{Q}_1^1$~and $\mathcal{Q}_1^2$
without introducing a non-right-angle crossing with edge $e_q$.
We proceed to consider the case where quadrilaterals $\mathcal{Q}_1$~and
$\mathcal{Q}_2$~cross and some of the vertices of quadrilateral $\mathcal{Q}_2$~are in
the interior of quadrilateral $\mathcal{Q}_1$, whereas the remaining ones on
its exterior. Let $q_a$, $q_b$, $q_c$ and $q_d$ be the vertices of
quadrilateral $\mathcal{Q}_2$. Assume that $q_a$ and $q_b$ lie within
quadrilateral $\mathcal{Q}_1$, whereas $q_c$ and $q_d$ on its external face,
such that edges $(q_a,q_d)$ and $(q_b,q_c)$ are perpendicular either
to one edge of quadrilateral $\mathcal{Q}_1$~(see
Figure~\ref{fig:v0internalq4external-2}), or to two edges of
$\mathcal{Q}_1$~(see Figure~\ref{fig:v0internalq4external-3}). Note that edges
$(q_a,q_d)$ and $(q_b,q_c)$ cannot be crossed by any other edge
incident to both quadrilaterals, since they are already involved in
right-angle crossings. However, all vertices of quadrilateral
$\mathcal{Q}_1$~have to be connected to vertex $v_0$. Assuming that one vertex
of quadrilateral $\mathcal{Q}_1$~can utilize the ``last'' available edge of
quadrilateral $\mathcal{Q}_2$~(i.e., edge $(q_a,q_b)$) to reach vertex $v_0$
(refer to the dotted edges of
Figures~\ref{fig:v0internalq4external-2}~and~\ref{fig:v0internalq4external-3}),
there exists at least one vertex of $\mathcal{Q}_1$, say vertex $q$, that
cannot be connected to $v_0$ without introducing a non-right-angle
crossing (refer to the dashed edges of
Figures~\ref{fig:v0internalq4external-2}~and~\ref{fig:v0internalq4external-3}).
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-2}{}]
{\includegraphics[width=.9\textwidth]{images/v0internalq4external-2}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-3}{}]
{\includegraphics[width=.7\textwidth]{images/v0internalq4external-3}}
\end{minipage}
\caption{Vertices $q_a$ and $q_b$ are in the interior of $\mathcal{Q}_1$~and $\mathcal{Q}_2$~crosses
(a)~a single edge of $\mathcal{Q}_1$, or (b)~two edges of $\mathcal{Q}_1$.}
\label{fig:v0internalq4external-b}
\end{figure}
Following a similar reasoning scheme as for the previous cases, we
can prove that the cases where (i)~three vertices of $\mathcal{Q}_2$, say
w.l.o.g., $q_a$, $q_b$ and $q_c$, lie in the interior of $\mathcal{Q}_1$~(see
Figure~\ref{fig:v0internalq4external-8}), and (ii)~only one vertex
of $\mathcal{Q}_2$, say w.l.o.g., vertex $q_b$, lies in the interior of
$\mathcal{Q}_1$~(see Figure~\ref{fig:v0internalq4external-9}), also lead to a
contradiction. \qed
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-8}{}]
{\includegraphics[width=.7\textwidth]{images/v0internalq4external-8}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:v0internalq4external-9}{}]
{\includegraphics[width=.6\textwidth]{images/v0internalq4external-9}}
\end{minipage}
\caption{(a)~Vertices $q_a$, $q_b$ and $q_c$ are in the interior of $\mathcal{Q}_1$. (b)~Vertex $q_b$ is in the interior of $\mathcal{Q}_1$.}
\label{fig:v0internalq4external-c}
\end{figure}
\end{proof}
\begin{theorem}
\label{thm:uniqueness} Any straight-line RAC drawing of the
augmented square antiprism graph realizes one of exactly two combinatorial embeddings.
\end{theorem}
\begin{proof}
So far, we have managed to prove that both quadrilaterals $\mathcal{Q}_1$~and
$\mathcal{Q}_2$~are drawn planar, do not cross, and have central vertex $v_0$
to their interior. This suggests that either quadrilateral $\mathcal{Q}_1$~is
in the interior of $\mathcal{Q}_2$, or quadrilateral $\mathcal{Q}_2$~is in the interior
of $\mathcal{Q}_1$. However, in both cases, vertex $v_0$, which has to be
connected to the four vertices of the ``external'' quadrilateral,
should inevitably perpendicularly cross the four edges of the
``internal'' quadrilateral, and this trivially implies only two
feasible combinatorial embeddings. \qed
\end{proof}
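For reference, the graph under discussion is small enough to enumerate explicitly. The sketch below (the string labels are ours, mirroring the notation $v_a,\ldots,v_{ad}$ used above) builds the adjacency of the augmented square antiprism graph and confirms the counts underlying the proofs: nine vertices, twenty-four edges, a central vertex of degree eight, and degree five everywhere else, which stays below the well-known $4n-10$ upper bound on the number of edges of a straight-line RAC graph.

```python
# Vertex labels: quadrilateral Q1 = {a, b, c, d}, quadrilateral Q2 = {ab, bc, cd, ad}
# (vertex "xy" is adjacent to "x" and "y"), plus the central vertex v0.
Q1 = ["a", "b", "c", "d"]
Q2 = ["ab", "bc", "cd", "ad"]

edges = set()
for i in range(4):
    edges.add(frozenset((Q1[i], Q1[(i + 1) % 4])))   # the 4-cycle of Q1
    edges.add(frozenset((Q2[i], Q2[(i + 1) % 4])))   # the 4-cycle of Q2
for q in Q2:                                         # antiprism edges
    edges.add(frozenset((q, q[0])))
    edges.add(frozenset((q, q[1])))
for v in Q1 + Q2:                                    # spokes of the central vertex
    edges.add(frozenset(("v0", v)))

vertices = Q1 + Q2 + ["v0"]
degree = {v: sum(v in e for e in edges) for v in vertices}
print(len(vertices), len(edges))        # 9 24
print(degree["v0"], degree["a"])        # 8 5
assert len(edges) <= 4 * len(vertices) - 10   # RAC edge-count upper bound
```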
We extend the augmented square antiprism graph by appropriately
``glueing'' multiple instances of it, the one next to the other,
either horizontally or vertically.
Figure~\ref{fig:basic-gadget-extension} demonstrates how a
horizontal extension of two instances, say $G$ and $G'$, is
realized, i.e., by identifying two ``external'' vertices, say $v$
and $v'$, of $G$ with two ``external'' vertices of $G'$ (refer to
the gray-colored vertices of
Figure~\ref{fig:basic-gadget-extension}), and by employing an
additional edge (refer to the dashed drawn edge of
Figure~\ref{fig:basic-gadget-extension}), which connects an
``internal'' vertex, say $u$, of $G$ with the corresponding
``internal'' vertex, say $u'$, of $G'$. Let $G \oplus G'$ be the
graph produced by a horizontal or vertical extension of $G$ and
$G'$. Since each of $G$ and $G'$ has two RAC combinatorial
embeddings, one would expect that $G \oplus G'$ would have four
possible RAC combinatorial embeddings. We will show that this is not
true and, more precisely, that only a single RAC
combinatorial embedding exists.
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-extension}{}]
{\includegraphics[width=\textwidth]{images/basic-gadget-extension}}
\end{minipage}
\vfill
\begin{minipage}[b]{.48\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-extension-no-cross}{}]
{\includegraphics[width=.85\textwidth]{images/basic-gadget-extension-no-cross}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\begin{minipage}[b]{\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-extension-internal}{}]
{\includegraphics[width=.65\textwidth]{images/basic-gadget-extension-internal}}
\end{minipage}
\begin{minipage}[b]{\textwidth}
\centering
\subfloat[\label{fig:basic-gadget-turned}{}]
{\includegraphics[width=.85\textwidth]{images/basic-gadget-turned}}
\end{minipage}
\end{minipage}
\caption{(a)~Horizontal extension of two instances of the augmented square antiprism graph.
(b)~The additional (dashed) edge does not permit the second instance to be drawn in the interior of the first one.
(c)~The vertices which are identified during a horizontal or vertical extension ($v$ and $v'$ in the figure) should be on the external face of each augmented square antiprism graph.
(d)~At each extension step the new instance of the augmented square antiprism graph may introduce a ``turn''.}
\label{fig:basic-gadget-attributes}
\end{figure}
\begin{theorem}
Let $G$ and $G'$ be two instances of the augmented square antiprism
graph. Then, $G \oplus G'$ has a unique RAC combinatorial embedding.
\end{theorem}
\begin{proof}
Assume first that in a RAC drawing of $G \oplus G'$, vertices $v$
and $v'$ are on the external quadrilateral of $G$ and graph $G'$ is
drawn completely in the interior of $G$ (see
Figure~\ref{fig:basic-gadget-extension-no-cross}; since $v$ and $v'$
are on the external face of $G'$, vertices $\alpha$ and $\beta$ in
Figure~\ref{fig:basic-gadget-extension-no-cross} should also be on
the external face of $G'$). First observe that vertex $u'$ of $G'$,
which is incident to vertices $v$ and $v'$, cannot reside to the
``left'' of both edges $(u,v)$ and $(u,v')$ (refer to the bold drawn
edges of Figure~\ref{fig:basic-gadget-extension-no-cross}), since
this would lead to a situation where three edges mutually cross and,
subsequently, to a violation of
Property~\ref{prp:three-crossing-edges} (see the gray-colored square
vertex of Figure~\ref{fig:basic-gadget-extension-no-cross}).
Therefore, vertex $u'$ should lie within the triangular face of $G$
formed by vertices $u$, $v$ and $v'$. The same holds for
the central vertex of $G'$, which is also incident to vertices $v$
and $v'$. By Property~\ref{prp:triangle-edges}, any common neighbor
of vertices $u'$ and $v$ should also lie within the same triangular
face of $G$, which progressively implies that the entire graph $G'$
should reside within this face, as in
Figure~\ref{fig:basic-gadget-extension-no-cross}. However, in this
case and since $u'$ is incident to $v$ and $v'$, edge $(u,u')$,
which is used on a horizontal or a vertical extension, crosses the
interior of $G'$, which is not permitted. This suggests that graph
$G'$ should be on the exterior of $G$.
Now assume that vertices $v$ and $v'$, which are identified, during
a horizontal or vertical extension, are along the internal
quadrilateral of $G$ in a RAC drawing of $G \oplus G'$. This is
illustrated in Figure~\ref{fig:basic-gadget-extension-internal}.
Then, the edge, say $e$, which perpendicularly crosses edge $(v,v')$
and emanates from the external quadrilateral towards the central
vertex of $G$ (refer to the bold solid edge of
Figure~\ref{fig:basic-gadget-extension-internal}) will be involved
in crossings with $G'$. More precisely, we focus on vertex $u'$ of
$G'$, which is incident to vertices $v$ and $v'$. The edges $(u',v)$ and $(u',v')$ will
inevitably introduce non-right-angle crossings, since one of them
should cross edge $e$. Therefore, the vertices that are identified,
during a horizontal or vertical extension, should always be on the
external face of each augmented square antiprism graph and,
subsequently, the drawing of the graph produced by a horizontal or
vertical extension will resemble the one of
Figure~\ref{fig:basic-gadget-extension}, i.e., it has a unique
embedding. \qed
\end{proof}
Note that the extension given in
Figure~\ref{fig:basic-gadget-extension} is an idealized one. In the general
case, at each extension step the new instance of the augmented
square antiprism graph may introduce a ``turn'', as in
Figure~\ref{fig:basic-gadget-turned}. We observe that by ``glueing'' a new
instance of the augmented square antiprism graph on $G \oplus G'$ either by
a horizontal or a vertical extension, we obtain another graph with a unique RAC combinatorial embedding.
In this way, we can define an infinite class of graphs with a unique RAC combinatorial embedding. This is
summarized in the following theorem.
\begin{theorem}
There exists an infinite class of graphs with a unique RAC combinatorial embedding.
\end{theorem}
\section{The Straight-Line RAC Drawing Problem is NP-hard}
\label{sec:np}
\begin{theorem}
It is $\mathcal{NP}$-hard to decide whether an input graph admits a
straight-line RAC drawing.
\end{theorem}
\begin{proof}
We will reduce the well-known $3$-SAT problem \cite{GJ79} to the
straight-line RAC drawing problem. In a $3$-SAT instance, we are
given a formula $\phi$ in conjunctive normal form with variables
$x_1, x_2, \ldots, x_n$ and clauses $C_1, C_2, \ldots, C_m$, each
with three literals. We show how to construct a graph $G_\phi$ that
admits a straight-line RAC drawing $\Gamma(G_\phi)$ if and only if
formula $\phi$ is satisfiable.
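As a reminder of the source problem of the reduction, a $3$-SAT instance can be decided by exhaustive search over all $2^n$ truth assignments. The sketch below encodes literals as signed variable indices; the formula used here is a hypothetical toy instance of our own, not one arising from the construction.

```python
from itertools import product

def satisfiable(clauses, n):
    # Literal k > 0 stands for x_k; literal k < 0 for the negation of x_|k|.
    for bits in product((False, True), repeat=n):
        def holds(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula holds iff every clause contains at least one true literal.
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# phi = (x1 or not x2 or x3) and (not x1 or x2 or x3) and (not x1 or not x2 or not x3)
phi = [(1, -2, 3), (-1, 2, 3), (-1, -2, -3)]
print(satisfiable(phi, 3))            # True (e.g. x1 = x2 = True, x3 = False)
print(satisfiable([(1,), (-1,)], 1))  # False: x1 and not x1 cannot both hold
```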
Figure~\ref{fig:gadgets} illustrates the gadgets of our
construction. Each gray-colored square in these drawings corresponds
to an augmented square antiprism graph. Adjacent gray squares form
an extension (refer, for example, to the topmost gray squares of
Figure~\ref{fig:gadgets}a, which form a ``horizontal'' extension).
There also exist gray squares that are not adjacent, but connected
with edges. The legend in Figure~\ref{fig:gadgets} describes how the
connections are realized.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{images/gadget}
\caption{Gadgets of our construction: (a)~Variable gadget, (b)~Dummy variable gadget, (c)~Clause gadget}
\label{fig:gadgets}
\end{figure}
The gadget that encodes variable $x_i$ of formula $\phi$ is given in
Figure~\ref{fig:gadgets}a. The gadget of variable $x_i$ consists of
a combination of augmented square antiprism graphs, and,
``horizontal'' and ``vertical'' edges, which form a tower, whose RAC
drawing has a unique combinatorial embedding. One side of the tower
accommodates multiple vertices that correspond to literal $x_i$,
whereas its opposite side accommodates vertices that correspond to
literal $\overline{x_i}$ (refer to vertices $x_{i,1}, \ldots
,x_{i,m}$ and $\overline{x}_{i,1}, \ldots ,\overline{x}_{i,m}$ in
Figure~\ref{fig:gadgets}a). These vertices are called \emph{variable
endpoints}. Then, based on whether, in the final drawing, the negated
vertices appear to the ``left'' or to the ``right'' side of the
tower, we will assign a true or a false value to variable $x_i$,
respectively. Pairs of consecutive endpoints $x_{i,j}$ and
$x_{i,j+1}$ are separated by a \emph{corridor} (see
Figure~\ref{fig:gadgets}a), which allows perpendicular edges to pass
through it (see the bottommost dashed arrow of
Figure~\ref{fig:gadgets}a). Note that this is not possible through a
``corridor'' formed on a variable endpoint, since there exist four
non-parallel edges that ``block'' any other edge passing through
them (see the topmost dashed arrow of Figure~\ref{fig:gadgets}a).
The corridors can have variable height. In the variable gadget of
variable $x_i$, there are also two vertices (they are drawn as gray
circles in Figure~\ref{fig:gadgets}a), which have degree four. These
vertices serve as ``\emph{connectors}'' among consecutive variable
gadgets, i.e., these vertices should be connected to their
corresponding vertices on the variable gadgets of variables
$x_{i-1}$ and $x_{i+1}$. Note that the connector vertices of the
variable gadgets associated with variables $x_1$ and $x_n$ are
connected to connectors of the variable gadgets that correspond to
variables $x_2$ and $x_{n-1}$, respectively, and to connectors of
\emph{dummy variable gadgets}.
Figure~\ref{fig:gadgets}b illustrates a dummy variable gadget, which
(similarly to the variable gadget) consists of a combination of
augmented square antiprism graphs, and, ``horizontal'' and
``vertical'' edges, which form a tower. Any RAC drawing of this
gadget also has a unique combinatorial embedding. A dummy variable
gadget does not support vertices that correspond to literals.
However, it contains connector vertices (they are drawn as gray
circles in Figure~\ref{fig:gadgets}b). In our construction, we use
exactly two dummy variable gadgets. The connector vertices of each
dummy variable gadget should be connected to their corresponding
connector vertices on the variable gadgets associated with variables
$x_1$ and $x_n$, respectively.
The gadget that encodes the clauses of formula $\phi$ is illustrated
in Figure~\ref{fig:gadgets}c and resembles a valve. Let $C_i=(x_j
\vee x_k \vee x_l)$ be a clause of $\phi$. As illustrated in
Figure~\ref{fig:gadgets}c, the gadget which corresponds to clause
$C_i$ contains three vertices\footnote{With slight abuse of
notation, the same term is used to denote variables of $\phi$ and
vertices of $G_\phi$.}, say $x_j$, $x_k$, and $x_l$, such that:
$x_j$ has to be connected to $x_{j,i}$, $x_k$ to $x_{k,i}$ and $x_l$
to $x_{l,i}$ \underline{by paths of length two}. These vertices,
referred to as the \emph{clause endpoints}, encode the literals of
each clause. Obviously, if a clause contains a negated literal, it
should be connected to the negated endpoint of the corresponding
variable gadget. The clause endpoints are incident to a vertex
``trapped'' within two parallel edges (refer to the bold drawn edges
of Figure~\ref{fig:gadgets}c). Therefore, in a RAC drawing of
$G_{\phi}$, only two of them can perpendicularly cross these edges,
one from top (\emph{top endpoint}) and one from bottom (\emph{bottom
endpoint}). The other one (\emph{right endpoint}) should remain in
the interior of the two parallel edges. The one that will remain
``trapped'' on the final drawing will correspond to the true literal
of this clause.
The gadgets, which correspond to variables and clauses of $\phi$,
are connected together by the skeleton of graph $G_{\phi}$, which is
depicted in Figure~\ref{fig:skeleton}a. The skeleton consists of
two main parts, i.e., one ``horizontal'' and one ``vertical''. The
vertical part accommodates the clause gadgets (see
Figure~\ref{fig:skeleton}a). The horizontal part will be used in
order to ``plug'' the variable gadgets. The long edges that
perpendicularly cross (refer to the crossing edges slightly above
the horizontal part in Figure~\ref{fig:skeleton}a) imply that the
vertical part should be perpendicular to the horizontal part. The
horizontal part of the skeleton is separately illustrated in
Figure~\ref{fig:skeleton}b. Observe that it contains one set of
horizontal lines.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{images/skeleton}
\caption{Illustration of the skeleton of the construction.}
\label{fig:skeleton}
\end{figure}
Figure~\ref{fig:example} shows how the variable gadgets are attached
to the skeleton. More precisely, this is accomplished by a single
edge, which should perpendicularly cross the set of the horizontal
edges of the horizontal part. Therefore, each variable gadget is
perpendicularly attached to the skeleton, as in
Figure~\ref{fig:example}. Note that each variable gadget should be
drawn completely above these horizontal edges, since otherwise
the connections among variable endpoints and clause endpoints would
not be feasible. The connector vertices of the dummy variable
gadgets, the variable gadgets and the vertical part of the
construction, ensure that the variable gadgets will be parallel to
each other (i.e., they are not allowed to bend) and parallel to the
vertical part of the construction.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/example}
\caption{The reduction from $3$-SAT to the straight-line RAC drawing problem.
The input formula is $\phi=(x_1 \vee x_2 \vee x_3)\wedge(\overline{x}_1 \vee
\overline{x}_2 \vee \overline{x}_3)\wedge(\overline{x}_1 \vee
\overline{x}_2 \vee x_3)$. The drawing corresponds to the truth
assignment $x_1$=$x_3$=true, $x_2$=false.}
\label{fig:example}
\end{figure}
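As a sanity check on the reduction, the truth assignment stated in the caption of Figure~\ref{fig:example} can be verified programmatically. The following is a minimal sketch (the formula is hard-coded; all names are illustrative):

```python
# Verify that the truth assignment x1 = x3 = true, x2 = false
# satisfies the formula phi from the figure caption.
# A literal is a pair (variable_index, negated).
phi = [
    [(1, False), (2, False), (3, False)],  # (x1 v x2 v x3)
    [(1, True),  (2, True),  (3, True)],   # (~x1 v ~x2 v ~x3)
    [(1, True),  (2, True),  (3, False)],  # (~x1 v ~x2 v x3)
]
assignment = {1: True, 2: False, 3: True}

def satisfies(formula, assign):
    # phi is satisfied iff every clause contains at least one true literal
    return all(any(assign[v] != neg for v, neg in clause)
               for clause in formula)

print(satisfies(phi, assignment))  # True
```

This mirrors the argument in the proof: each clause must contain at least one true literal, realized geometrically by the ``trapped'' right endpoint of its clause gadget.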
We now proceed to investigate some properties of our construction.
Any path of length two that emanates from a top- or bottom-clause
endpoint can reach a variable endpoint either on the left or on the
right side of its associated variable gadget. The first edge of this
path should perpendicularly cross the vertical edges of the vertical
part of the construction and pass through some corridors\footnote{In
Figure~\ref{fig:example}, the corridors are the gray-colored regions
that reside at each variable gadget.}, whereas the second edge will
be used to realize the ``final'' connection with the variable gadget
endpoint (see Figure~\ref{fig:example}). However, the same does not
hold for the paths that emanate from a right-clause endpoint. These
paths can only reach variable endpoints on the right side of their
associated variable gadgets. More precisely, the first edge of the
$2$-length path should cross one of the two parallel edges (refer to
the bold drawn edges of Figure~\ref{fig:gadgets}c) that ``trap'' it,
whereas the other one should be used to reach (passing through
variable corridors) its variable endpoint (see
Figure~\ref{fig:example}).
Our construction ensures that up to translations, rotations and
stretchings any RAC drawing of $G_\phi$ resembles the one of
Figure~\ref{fig:skeleton}. It is clear that the construction can be
completed in $O(nm)$ time. Assume now that there is a RAC drawing
$\Gamma(G_\phi)$ of $G_\phi$. If the negated vertices of the
variable gadget that corresponds to $x_i$, $i=1,2,\ldots,n$, lie to
the ``left'' side in $\Gamma(G_\phi)$, then variable $x_i$ is set to
true, otherwise $x_i$ is set to false. We argue that this assignment
satisfies $\phi$. To realize this, observe that there exist three
paths that emanate from each clause gadget. The one that emanates
from the right endpoint of each clause gadget can never reach a
false value. Therefore, each clause of $\phi$ must contain at least
one true literal, which implies that $\phi$ is satisfiable.
Conversely, suppose that there is a truth assignment that satisfies
$\phi$. We proceed to construct a RAC drawing $\Gamma(G_\phi)$ of
$G_\phi$, as follows: In the case where, in the truth assignment,
variable $x_i$, $i=1,2,\ldots,n$ is set to true, we place the
negated vertices of the variable gadget that corresponds to $x_i$,
to its left side in $\Gamma(G_\phi)$, otherwise to its right side.
Since each clause of $\phi$ contains at least one true literal, we
choose this as the right endpoint of its corresponding clause
gadget. As mentioned above, it is always feasible to connect it to
its variable gadget by a path of length two. This completes our
proof. \qed
\end{proof}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we proved that it is $\mathcal{NP}$-hard to decide whether a
graph admits a straight-line RAC drawing. Didimo et al.\
\cite{DEL09} proved that it is always feasible to construct a RAC
drawing of a given graph with at most three bends per edge. If we
permit two bends per edge, does the problem remain $\mathcal{NP}$-hard? It is
also interesting to continue the study on the interplay between the
number of edges and the required area in order to fill the gaps
between the known upper and lower bounds.
\bibliographystyle{abbrv}
\section{INTRODUCTION}
In 1968, OAO2, the first satellite capable of UV observations, was launched. Until then, due to atmospheric extinction, astronomical studies in the ultraviolet spectral region were not possible. Later on, other satellites such as TD-1, the Astronomical Netherlands Satellite (ANS) and the International Ultraviolet Explorer (IUE) were launched. IUE has provided a wealth of data on interstellar extinction in the UV region. We study the wavelength dependence of interstellar extinction in the UV region, 1200-3200\AA~, observed by the IUE satellite towards several stars in various interstellar environments, viz. the diffuse interstellar medium, HII regions, OB associations, reflection nebulae, dense media and HI sources.
Recent studies of interplanetary, cometary and interstellar dust indicate that the cosmic dust grains are inhomogeneous viz. porous, fluffy and composite. The collected interplanetary particles are also porous and composite \citep{brown1987}.
\citet{mathis96}, \citet{dwek97}, \citet{greenli98} and \citet{zubko94}
have proposed composite grain models consisting of silicates and amorphous carbon to explain the observed wavelength dependence of interstellar extinction, polarization, albedo, IR emission and the observed
elemental depletion. They have used the effective medium theory (EMT). \citet{iati04} have
studied optical properties of the composite grains using the transition matrix
approach. \citet{vosh06} and \citet{vosh08} have
used the layered sphere method to study the extinction properties of the porous
grains. Very recently, \citet{sieb2013} have used a dust model, consisting of a mixture of large spheroidal amorphous carbon (AMC) and silicate grains. Small grains of graphite, silicates and polycyclic aromatic hydrocarbons (PAHs) are also included to explain the extinction, emission, linear and circular polarization in the diffuse interstellar medium. \citet{clayton03MEM} have used Maximum Entropy Method (MEM) and EMT for
2-component (silicates and graphite) and 3-component (silicates, graphite and
amorphous carbon) spherical grain models to study the extinction properties in the
Milky Way galaxy and the Magellanic clouds. In EMT, the inhomogeneous particle is replaced
by a homogeneous one with some `average effective dielectric function'. The
effects related to the fluctuations of the dielectric function within the inhomogeneous
structure cannot be treated by the EMT approach.
We have used the Discrete Dipole Approximation (DDA), which allows irregular shape effects,
surface roughness and internal structure within the grain to be taken into account \citep{wolff94,wolff98}. Since there is no exact theory for these porous and composite particles, their electromagnetic
scattering must be modeled with approximate methods such as EMT and DDA. We have used DDA to calculate the extinction cross sections of composite grains in the UV spectral region, 1200\AA~-3200\AA~, and compared the model extinction curves with those derived from the IUE satellite
observations. For a discussion and comparison of EMT and DDA see e.g. \citet{Ossenkopf91} and \citet{wolff98}. Earlier, \citet{vaidya01,gupta07} used composite grain models to interpret the average observed interstellar extinction. Moreover, the recent results of \citet{katyal11} show that the composite grain model is more efficient than bare silicate/graphite grain models in producing the extinction, while also relaxing the cosmic abundance constraints. Composite dust grain models are also being employed to analyze IR emission; recently, \citet{vaidya11} used the composite grain model to interpret the observed IR emission from circumstellar dust.
\citet{massa83} performed spectrophotometric measurements for a sample of stars judged
by \citet{meyer81} to have highly peculiar UV extinction, as inferred from
the broad-band Astronomical Netherlands Satellite (ANS) data. These observations showed a discrete bump feature at 2175\AA~ \citep{stecher65,stecher69}.
This feature has been ascribed to small graphite
grains \citep{stecherdonn65,draine93}.
Another possible candidate for the spectral bump at 2175\AA~ is polycyclic aromatic hydrocarbons (PAHs), as discussed by \citet{li2001} and \citet{malloci2008}. \citet{sieb2013} have also argued that strong electronic transitions of both graphite and PAHs at 2175\AA~ could be responsible for the bump feature. \citet{green83} have
found a strong correlation between the strength of the `2175\AA~' feature
and the visible extinction. They obtained a poor correlation between the far ultraviolet
(FUV) extinction, the strength of the feature and the visible extinction, concluding that
a wide spectrum of grain sizes is needed to explain the average observed
interstellar extinction curve. \citet{xiang2011} have shown that the carriers responsible for the 2175\AA~ feature and for the extinction in the UV might not be the same.
Wavelength-dependent studies of interstellar extinction curves are the best tool for understanding the environment around these stars. The most commonly used technique for deriving the wavelength dependence
of interstellar extinction is the ``pair method" \citep{massa83}.
Basically, the ratio of the fluxes of the reddened and comparison stars gives a direct measurement of the dust extinction towards the reddened star. The resultant ratio, after
normalization, is referred to as the `extinction curve'. Errors resulting from poorly
matched pairs can dominate the uncertainties of individual extinction curves.
\citet{massa86,massa88,fitz90} analyzed several IUE extinction curves
and found that all these curves could be fitted extremely well by a single
analytical expression with six parameters.
With the availability of much more observational data, earlier dust models were revised, since the extinction of light depends strongly on the interstellar environments through which it passes. \citet{clayton88,clayton89} (hereafter the CCM method) found that, in general, the properties of UV extinction curves are correlated with the extinction in the optical/IR region. They characterized this dependence, from the UV through the IR, by a single parameter, $R_{v}^{-1}$, where $R_{v}$ is the ratio of total to selective extinction, defined as $R_{v}$=A(V)/E(B-V).
However, the CCM method has its limitations,
both from the standpoint of understanding dust grain properties and dereddening
energy distributions. While the UV curve shapes indeed correlate in general
with $R_{v}$, the $R_{v}^{-1}$ dependence adopted by CCM is insufficient to describe the
behavior over the entire range of observed $R_{v}$ values, and breaks down at small $R_{v}$.
Further, the CCM formula does not provide particularly good fits to individual extinction
curves. Evidently, factors other than $R_{v}$, e.g. chemical composition, processing
history, ambient radiant field play important roles in determining the extinction
properties. Hence, based on different interstellar environments of the stars, \citet{aiello} have presented a collection of 115 extinction curves derived from low dispersion IUE spectra.
The atlas includes extinction originating in the diffuse medium and several major
nebulae and dense clouds. The data can be easily accessed and used for various extinction studies.
The shapes of the extinction curves are substantially different for different $R_{v}$ values,
and hence changes in the size distributions are also expected.
As \citet{cardelli91} have pointed out, lines of sight with large $R_{v}$ are ideal
for examining processes that modify the grain properties in dense clouds. A good correlation between the strength of the `2175\AA' UV bump feature and the visual extinction was also noted by \citet{green83}.
\citet{Wein} have constructed size distributions for spherical carbonaceous and silicate grain
populations in different regions of the Milky Way, the LMC and the SMC to account for the observed near-IR and microwave emission from the diffuse interstellar environment, using a fairly simple functional form characterized by various parameters. They have shown that these variations can be well parameterized by $R_{v}$.
Another study, by \citet{kim94}, found that denser environments with high $R_{v}$ (=5.3) contain grains with a larger mean size, though all dense regions may not necessarily have high $R_{v}$.
In the present study, we have used the `pair method', described in Section 2.1. The main purpose of this study is to infer the size distribution, shape and composition of interstellar dust grains in various interstellar environments (for different values of $R_{v}$) which are consistent with the observed extinction. We use composite grain models to compare extinction toward these stars as observed by the IUE satellite. In Section 2, we tabulate the list of selected stars and describe the pair method used to generate the extinction curves in the UV spectral regime. The DDA technique and the generation of composite grain models are illustrated in Section 3. In Section 4, we present the computed model curves, compare them with the observed extinction curves, and analyze the results in detail, comparing the inferred grain sizes and compositions with those obtained by others. Our conclusions from this study are summarized in Section 5.
\section{Preliminary data reduction}
\subsection{PAIR METHOD}
The standard Pair method technique is used for a set of IUE stars to generate the extinction curves.
The technique involves selecting a highly reddened star
and comparing it with a star (flux standard) which has negligible reddening and
whose spectral features closely match those of the reddened star.
An extinction curve is then constructed using the standard relation \citep{massa83}:
\begin{equation}
\frac{E(\lambda-V)}{E(B-V)}=\frac{m(\lambda-V)-m(\lambda-V)_{o}}{(B-V)-(B-V)_{o}}
\end{equation}
where the subscript `$o$' refers to the unreddened star and unsubscripted quantities refer to the reddened star.
Here $E(B-V)$ is the difference in extinction between the $B$ and $V$ bands and
corresponds to the color excess. The resultant extinction curve $E(\lambda-V)/E(B-V)$
is then plotted versus $1/\lambda$ for selected IUE stars.
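The relation above translates directly into code. The sketch below is illustrative only; the function and variable names are assumptions, not part of any existing reduction pipeline:

```python
import numpy as np

def extinction_curve(m_red, m_comp, V_red, V_comp, B_red, B_comp):
    """Pair method: E(lambda-V)/E(B-V) from the UV magnitudes of a
    reddened star and a matching unreddened comparison star, both
    sampled on a common wavelength grid."""
    numerator = (m_red - V_red) - (m_comp - V_comp)  # m(lambda-V) - m(lambda-V)_o
    ebv = (B_red - V_red) - (B_comp - V_comp)        # E(B-V) of the pair
    return numerator / ebv

# Synthetic two-point example (not real IUE data):
m_red = np.array([10.0, 10.4])
m_comp = np.array([5.0, 5.0])
curve = extinction_curve(m_red, m_comp, V_red=9.0, V_comp=5.0,
                         B_red=9.5, B_comp=5.1)
print(curve)  # [2.5 3.5]
```

Normalizing by $E(B-V)$ in this way removes the dependence on the absolute amount of reddening, so curves toward different stars can be compared directly.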
\subsection{Object Selection Criteria}
\noindent
We have selected 26 ``program stars" (listed in Table 1)
from \citet{massa88}, \citet{fitz09} and IUE spectral atlas by \citet{wu}. The $R_{v}$ values of the sample reddened stars are taken from \citet{valencic04}.
The spectral types of these 26 stars lie in the range O7-B5. We have selected
reddened and dereddened stars on the basis of their visible spectral type and the
luminosity class. Spectral type mismatch error larger than one
luminosity subclass is avoided (see Table 1) in order to account for spectral type
uncertainties between reddened and the dereddened stars. Late type stars are excluded because their ultraviolet energy distributions are very strong functions of their spectral type - thus amplifying the magnitudes of error associated with the spectral mismatching between the reddened and unreddened stars.
\citet{massa83} and \citet{massa84} give the identification of the
features useful in matching B stars near the main sequence. Most of the sample stars are selected along different lines of sight and are previously known to produce extinction curves that deviate considerably from the average Milky Way curve ($R_{v}$=3.1).
The lowest value of
color excess E(B-V) for unreddened stars sample is 0.01 and the highest value of E(B-V)
for reddened stars sample is 0.95. The stars selected represent a range of environments;
viz. diffuse interstellar medium (DIF); HII region (HII); OB type association (OB);
reflection nebulae (RN); dense medium (DEN); and radio or HI source (H/RADIO), as listed in the second column of Table 1. The environment type for each star is taken from \citet{massa88,green93} and the SIMBAD astronomical database. It is to be noted that the selected stars span values of
$R_{v}$, i.e. the ratio of total to selective extinction, from $\sim$2.0 to 5.0 (see
Table 1), representing the physical environments in the Galaxy
from a diffuse medium to a very dense medium, so that the effects
on the corresponding extinction curves can be studied. Other properties of the sample stars, such as distances (in kpc) and neutral hydrogen column densities, are given in \citet{fitz90}.
Column (1) of Table 1 gives the HD number of the program star, column (2) the environment type, and column (3) the
HD number of the comparison star followed by its visible spectral type; columns (4) and (5) give the
magnitudes in the V and B bands respectively, column (6) the color index B-V, column (7) the color excess E(B-V), and column (8)
the value of the observed $R_{v}$.
\noindent
\begin{table*}
\caption{Extinction curve details for the program stars.}
\begin{tabular}{l l l c c c c c }
\hline
HD \# (Sp Type) & ENV$^a$ &Flux Std (Sp Type) & V & B & B-V & E(B-V) & $R_{v} $ \\
\hline
239693 (B5 V) &DIF & 25350 (B5 V) & 9.54 & 9.77 & 0.23 & 0.41 & 2.37 \\
185418 (B0.5V)&DEN & 55857 (B0.5 V) & 7.45 & 7.67 & 0.22 & 0.50 & 2.54 \\
123335 (B5 IV)&HII & 147394 (B5 IV) & 6.31 & 6.37 & 0.06 & 0.24 & 2.60 \\
18352 (B1 V) &DIF & 31726 (B1 V) & 6.80 & 7.03 & 0.23 & 0.47 & 2.66 \\
54439 (B1.5V)&HII & 74273 (B1.5V) & 7.72 & 7.72 & 0.00 & 0.29 & 2.73\\
179406 (B3 V) &HII & 190993 (B3 V) & 5.33 & 5.46 & 0.13 & 0.35 & 2.73\\
24432 (B3 II)&HII & 79447 (B3 III) & 6.93 & 7.51 & 0.58 & 0.51 & 2.77\\
217086 (O7 V) &OB & 47839 (O7 Vf) & 7.65 & 8.28 & 0.63 & 0.95 & 2.80\\
46660 (B1 V) &HII & 31726 (B1 V) & 8.04 & 8.35 & 0.31 & 0.56 & 2.82\\
281159 (B5 V) &HI/Radio & 25350 (B5 V) & 8.53 & 9.21 & 0.68 & 0.86 & 2.85\\
21483 (B3 III)&DEN & 79447 (B3 III) & 7.03 & 7.38 & 0.35 & 0.55 & 2.89\\
53974 (B0.5IV)&RN & 149881 (B0.5IV) & 5.38 & 5.41 & 0.03 & 0.31 & 2.94\\
38131 (B0.5V)&RN & 55857 (B0.5 V) & 8.19 & 8.40 & 0.21 & 0.49 & 3.01\\
217061 (B1 V) &RN & 31726 (B1 V ) & 8.77 & 9.46 & 0.69 & 0.95 & 3.03\\
205794 (B5 V) &RN & 25350 (B5 V) & 8.43 & 8.77 & 0.34 & 0.62 & 3.09\\
46202 (O9 V) &DIF & 38666 (O9.5 V) & 8.20 & 8.36 & 0.16 & 0.48 & 3.12\\
216658 (B0.5V)&RN & 55857 (B0.5 V) & 8.89 & 9.59 & 0.70 & 0.98 & 3.14\\
149452 (O9 V) &RN & 214680 (O9 V) & 9.07 & 9.65 & 0.58 & 0.84 & 3.37\\
34078 (O9.5V)&DEN & 38666 (O9.5 V) & 5.99 & 6.18 & 0.19 & 0.54 & 3.42\\
37367 (B2.5V)&DIF & 37129 (B2.5 V) & 5.98 & 6.11 & 0.13 & 0.40 & 3.55 \\
252325 (B1 V) &RN & 31726 (B1 V) &10.79 & 11.36& 0.57 & 0.87 & 3.55\\
147701 (B5 V) &DEN & 4180 (B5 III) & 8.36 & 8.92 & 0.56 & 0.76 & 3.86\\
147889 (B2 IV)&DEN & 3360 (B2 IV) & 7.10 & 7.42 & 0.32 & 1.10 & 3.95\\
37903 (B1.5V)&RN & 74273 (B1.5 V) & 7.84 & 7.91 & 0.07 & 0.35 & 4.11\\
37061 (B1 IV)&Or N & 34816 (B1 IV) & 6.83 & 7.09 & 0.26 & 0.56 & 4.29\\
93222 (O7IIIf)&OB & 47839 (O7 Vf) & 8.10 & 8.15 & 0.05 & 0.37 & 4.76\\
\hline
\multicolumn{8}{l}{$^a$ DIF, Diffuse interstellar medium; DEN, Dense interstellar medium;} \\
\multicolumn{8}{l}{HII, HII region; OB, OB association; RN, Reflection nebula }\\
\multicolumn{8}{l}{Or N, Orion Nebula; HI/Radio source.}\\
\multicolumn{8}{l}{ENV type are taken from \citet{massa88,green93} and SIMBAD astronomical} \\
\multicolumn{8}{l}{database.}
\end{tabular}
\end{table*}
Table 2 gives the observational data for the flux standards which are the comparison stars.
\begin{table*}
\caption{Observational data for flux standard stars}
\begin{tabular}{l l c c c c }
\hline
HD \# & Sp type & V & B & B-V & E(B-V)\\
\hline
47839 & O7 Vf & 4.65 & 4.41 & -0.24 & 0.08\\
214680& O9 V & 4.88 & 4.64 & -0.24 & 0.11\\
38666 & O9.5 V& 5.17 & 4.89 & -0.28 & 0.02\\
55857 & B0.5 V& 6.11 & 5.85 & -0.26 & 0.02\\
63922 & B0 III& 4.11 & 3.92 & -0.19 & 0.11\\
149881& B0.5 IV&7.00 & 6.84 & -0.16 &0.09\\
75821 & B0 IV & 5.09 & 4.88 & -0.21 &0.08\\
36512 & B0 V & 4.62 & 4.36 & -0.26 &0.04\\
34816 & B1 IV & 4.29 & 4.02 & -0.27 & 0.01\\
31726 & B1 V & 6.15 & 5.94 & -0.21 & 0.05\\
74273 & B1.5 V& 5.87 & 5.69 & -0.18 & 0.03\\
3360 & B2 IV & 3.66 & 3.46 & -0.20 & 0.04\\
37129 & B2.5 V& 7.13 & 6.99 & -0.14 & 0.07\\
79447 & B3 III& 3.96 & 3.77 & -0.19 & 0.01\\
190993& B3 V & 5.07 & 4.89 & -0.18 & 0.02\\
147394& B5 IV & 3.90 & 3.75 & -0.15 & 0.01\\
25350 & B5 V & 5.28 & 5.13&-0.15&0.01\\
\hline
\end{tabular}
\end{table*}
\subsection{MERGING OF SPECTRAL BANDS FOR THE PAIR METHOD}
Each program star's spectrum consists of two separate images,
one for the shorter and another for the longer wavelengths.
Data were taken from the following cameras: Short Wavelength Prime
(SWP, 1150\AA~$< \lambda < 1978$\AA~), Long Wavelength Redundant
(LWR, 1851\AA~$< \lambda < 3348$\AA~) and Long Wavelength Prime
(LWP, 1851\AA~$< \lambda < 3347$\AA~). The spectra of each reddened and unreddened star are taken from the IUE archives. For each program and comparison star, the SWP and
LWR/LWP fluxes were merged at the instrumental resolution, i.e. $\sim$6\AA~, and the resultant
fluxes were converted to magnitudes $m(\lambda)$, with $\lambda$
covering the wavelength range 1150\AA~-3348\AA~.
The magnitudes were further interpolated in the range 1153-3201\AA~ with a
binning of 1\AA~. Extinction curves are then generated using the standard pair method discussed in Section 2.1.
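The merging and rebinning step can be sketched as follows. This is an illustrative re-implementation under stated assumptions, not the actual reduction software; all names are hypothetical:

```python
import numpy as np

def merge_and_bin(wl_swp, flux_swp, wl_lw, flux_lw,
                  wl_min=1153.0, wl_max=3201.0, step=1.0):
    """Concatenate SWP and LWR/LWP fluxes (keeping SWP shortward of
    the camera overlap), convert to magnitudes with an arbitrary
    zero point, and interpolate onto a uniform 1 Angstrom grid."""
    keep = wl_lw > wl_swp.max()            # drop overlap from LWR/LWP
    wl = np.concatenate([wl_swp, wl_lw[keep]])
    flux = np.concatenate([flux_swp, flux_lw[keep]])
    order = np.argsort(wl)
    mag = -2.5 * np.log10(flux[order])     # flux -> magnitude
    grid = np.arange(wl_min, wl_max + step, step)
    return grid, np.interp(grid, wl[order], mag)

# Flat synthetic spectra covering the SWP and LWR ranges:
grid, mags = merge_and_bin(np.linspace(1150, 1978, 200),
                           np.full(200, 1e-12),
                           np.linspace(1851, 3348, 200),
                           np.full(200, 1e-12))
```

For a flat input spectrum the output magnitudes are constant, which makes the merging easy to check end to end.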
\section{Discrete Dipole Approximation (DDA) }
An approximate technique called the discrete dipole approximation (DDA) was proposed by \citet{purcell73}. It is a powerful numerical technique for calculating the optical properties, such as absorption and scattering, of a target, and is designed for targets of arbitrary and irregular shape. As an approximation, the continuum target is replaced by an array of $N$ dipoles. To each dipole, a polarizability can be assigned for a particular target composition. The polarizability depends in general on the dielectric properties, such as the complex refractive index $m=n+ik$, of the material inside the target. The polarizability and the refractive index of the material are related by the well known Clausius-Mossotti relation. Each dipole interacts with the neighboring dipoles on the application of an electric field. After evaluating the polarization $P$ of all $N$ dipoles inside the target, we can solve for the absorption and extinction cross sections of the target. The two criteria for the validity of DDA are:
1) The value of $|m|kd$ should be $<1$, where $m$ is the complex refractive index of the material, $k=2\pi/\lambda$ is the wave number in vacuum and $d$ is the spacing between the dipoles.
2) The dipole spacing $d$ should be small enough that the number of dipoles $N$ is large enough to describe the target shape satisfactorily.
For more detailed calculations, see \citet{draine88}. We have used the discrete dipole scattering code version 6.1 (DDSCAT6.1$\footnote{http://code.google.com/p/ddscat}$) for the present study. \citet{manual7} may be consulted for a more detailed description of the code.
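The two validity criteria and the Clausius-Mossotti relation can be sketched in a few lines. This is a simplified illustration (DDSCAT itself uses a more refined lattice-dispersion polarizability prescription), and the function names are assumptions:

```python
import math

def dda_valid(m, wavelength, d):
    """Criterion 1: |m| k d < 1, with k = 2*pi/lambda.
    wavelength and d must be in the same units."""
    k = 2.0 * math.pi / wavelength
    return abs(m) * k * d < 1.0

def clausius_mossotti(m, d):
    """Clausius-Mossotti polarizability of one dipole of lattice
    spacing d, for a material with refractive index m (eps = m**2)."""
    eps = m * m
    return (3.0 * d**3 / (4.0 * math.pi)) * (eps - 1.0) / (eps + 2.0)

# E.g. a graphite-like index near the 2175 A bump with a 10 A spacing:
print(dda_valid(1.7 + 0.03j, 2175.0, 10.0))  # True
print(clausius_mossotti(1.0, 1.0))           # 0.0 (vacuum dipole)
```

Criterion 2 is then a matter of choosing enough dipoles $N$ at that spacing to resolve the target shape.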
\subsection{Composite grain models using DDA}
For this study, we have used the DDSCAT6.1 code \citep{draineflat03}, which
has been modified and developed by \citet{dobbie99} to generate the composite grain
models. The code first carves out an outer sphere (or spheroid) from a lattice
of dipole sites. Sites outside the sphere are vacuum and sites inside are
assigned to the host material. Once the host grain is formed, the code locates
centers for internal spheres to form the inclusions. The inclusions are of a single
radius and their centers are chosen randomly. The code then outputs a three-dimensional matrix specifying the material type at each dipole site, which is then
read by the DDSCAT program. In the present study the axial ratios (hereafter AR) of the composite spheroidal grains are taken to be AR=1.33, 2.0 and 1.50, with numbers of dipoles N=9640, 14440 and 25896 respectively. The dipole sites are either silicate, graphite or vacuum. The optical constants of silicate and graphite are taken from \citet{draine85} and \citet{draine87}.
The spheroidal composite grain consists of silicate as the host and graphite as the inclusions. In order to study the effect of the volume fraction of graphite inclusions, we use three different volume fractions `$f$', viz. $f$=0.1, 0.2 and 0.3.
Table 3 shows the number of dipoles for each grain model along
with the axial ratio and number of dipoles per inclusion with the number of inclusions
for each fraction. The extinction cross sections of the target depend in general upon its orientation; hence, we average over 27 orientations of the target in all the DDA calculations. For more details on the composite grain models and the modified code, see \citet{gupta07}.
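The carving procedure described above can be mimicked in a few lines. The sketch below is an illustrative re-implementation with assumed names, not the actual modified DDSCAT code:

```python
import numpy as np

def carve_composite(nx, ny, nz, n_inclusions, r_incl, seed=0):
    """Carve a composite spheroidal grain on a dipole lattice:
    0 = vacuum, 1 = silicate host, 2 = graphite inclusion."""
    rng = np.random.default_rng(seed)
    x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                          indexing="ij")
    cx, cy, cz = (nx - 1) / 2, (ny - 1) / 2, (nz - 1) / 2
    # Host spheroid with semi-axes nx/2, ny/2, nz/2 (axial ratio nx/ny).
    inside = (((x - cx) / (nx / 2)) ** 2 + ((y - cy) / (ny / 2)) ** 2
              + ((z - cz) / (nz / 2)) ** 2) <= 1.0
    material = np.where(inside, 1, 0)
    placed = 0
    while placed < n_inclusions:
        # Random centre for a spherical inclusion of radius r_incl.
        c = rng.uniform([r_incl] * 3,
                        [nx - r_incl, ny - r_incl, nz - r_incl])
        if not inside[tuple(np.round(c).astype(int))]:
            continue  # centre must lie within the host spheroid
        ball = ((x - c[0]) ** 2 + (y - c[1]) ** 2
                + (z - c[2]) ** 2) <= r_incl ** 2
        material[ball & inside] = 2
        placed += 1
    return material

# E.g. an N=9640-type model: a 32x24x24 lattice (AR = 1.33) with six
# inclusions, loosely matching the f = 0.1 row of Table 3.
grain = carve_composite(32, 24, 24, n_inclusions=6, r_incl=3)
```

The resulting material matrix plays the role of the three-dimensional matrix handed to DDSCAT, with one material index per dipole site.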
\begin{table}[!ht]
\caption{Number of dipoles per inclusion for each grain model, along with the axes lengths of the spheroid in the x, y, z directions for the host (H) and inclusion (I). The number of inclusions is given in brackets in columns 3, 4 \& 5 for each volume fraction $f$ of inclusions.}
\small
\begin{tabular}{lllll}
\\
\hline
N (AR)& $N_{x}/N_{y}/N_{z}$ & \multicolumn{3}{c}{No. of dipoles per inclusion} \\
& &\multicolumn{3}{c}{(No. of inclusions)}\\
& & f=0.1 & f=0.2 & f=0.3\\
\hline
9640(1.33) &H:32/24/24& 152(6) & 152(11) & 152(16) \\
& I: 8/6/6 & & & \\
25896(1.50) & H:48/32/32 & 432(7) & 432(13) & 432(19) \\
& I:12/8/8 & & &\\
14440(2.00) &H:48/24/24 &224(6) & 224(11) & 224(16) \\
& I:12/6/6 & & &\\
\hline
\\
\end{tabular}\\
\end{table}
Fig. 1 illustrates a composite grain model with N=9640 dipoles composed of silicates as host (in green) and graphite as inclusion (in red). The inclusions can be seen clearly in Fig. 2. There are eleven such inclusions consisting of 152 dipoles per inclusion. This model represents a composite dust grain with volume fraction of graphite $f=0.2$.
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig1.eps}
\caption{A non-spherical composite dust grain consisting of host (in green) and inclusion (in red) with a
total of N=9640 dipoles where the inclusions embedded in the host spheroid are shown such that only the ones placed at the outer periphery are seen. }
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig2.eps}
\caption{This figure shows the inclusions of the composite grain. The volume fraction $f$ of graphite inclusions is 0.2. The number of inclusions is 11, with 152 dipoles per inclusion.}
\end{figure}
\section{RESULTS AND DISCUSSIONS}
The following are the principal results of this work:
\subsection{Extinction efficiencies of the composite grain}
Though the exact composition of interstellar dust is still uncertain, graphite and silicates are the materials most often used in cosmic dust models
(see for example \citet{mathis77}; \citet{draine84}). We have checked the extinction of graphite and amorphous carbon (AMC) as possible candidates
for explaining the UV feature at 2175\AA~. Figure 3 shows the extinction curves for very small AMC and graphite grains of radius $a$=0.01$\mu m$. It is seen that AMC does not show any peak at 2175\AA~, whereas graphite prominently shows this feature. Amorphous carbon is also highly absorbing at very long wavelengths and would provide most of the extinction longward of 0.3$\mu m$ (3.3$\mu m^{-1}$), as shown by \citet{draineIAU89} and \citet{Weig01}. Grain models with AMC are also not favored
by \citet{zubko04}. Instead, large polycyclic aromatic hydrocarbons (PAHs) molecules are likely candidates
as carriers of the 2175\AA~ feature -- a natural extension of the graphite hypothesis \citep{joblin92,li2001}. \citet{clayton03PAH} have also considered PAHs as one of the constituents in the
dust model to explain the interstellar extinction in the UV.
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig3.eps}
\caption{Extinction efficiencies for amorphous carbon (AMC) and graphite grains of very small size (0.01$\mu m$). The peak in the graphite curve at 2175\AA~ explains why graphite is used as the inclusion in our composite grain model; no such peak is seen in the AMC curve.}
\end{figure}
We calculate the extinction efficiencies $Q_{ext}$ for a composite grain model consisting of a host silicate spheroid with graphite inclusions, with the number of dipoles being $N=9640$, 14440 and 25896. The extinction efficiencies are calculated for the targets, which are prolate spheroids in our case. The volume fraction $f$ of the graphite inclusions in the composite grain is varied as $f$=0.1, 0.2 and 0.3. The extinction efficiencies for the composite grain model having $N=9640$ dipoles (AR=1.33), with the variation in the volume fraction of graphite inclusions, are shown in Fig. 4. The corresponding variations for the models with $N=14440$ (AR=2.0) and $N=25896$ (AR=1.50) dipoles can be seen in Figs. 5 and 6 respectively.
It is clearly noted that the extinction efficiencies and the shape of the extinction
curves vary considerably as the grain size increases. The 2175\AA~ feature is clearly seen for
small grains, viz. a=0.01$\mu m$ and 0.05$\mu m$, whereas for the larger sizes (a=0.1$\mu m$ and 0.2$\mu m$),
the feature disappears. For both the models, we see a shift in the peak
wavelength at 2175\AA~ as the volume fraction of the inclusion increases. Further,
the extinction efficiency is seen to vary with the variation in the volume
fraction of graphite inclusion.
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig4.eps}
\caption{The figure shows the extinction efficiencies for a composite grain model with number of dipoles $N=9640$ for volume fractions of graphite inclusions $f$=0.1, 0.2, 0.3 and 0.4. These extinction curves clearly show a shift in the peak wavelength `2175\AA~' (4.57$\mu m^{-1}$) and a variation in the extinction efficiency as the volume fraction of graphite varies. It is also to be noted that the `2175\AA~' feature vanishes for large grains with a=0.2$\mu m$.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig5.eps}
\caption{The figure shows extinction efficiencies for a composite grain model with number of dipoles $N=14440$. The shift in the peak wavelength and variation in extinction efficiency with the volume fraction variation of graphite is seen.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[height=7.4cm]{fig6.eps}
\caption{In this figure, the extinction efficiencies for a composite grain model with number of dipoles $N=25896$ and volume fractions of graphite $f=0.1$, 0.2 and 0.3 are shown. A shift in the peak wavelength and a variation in the extinction efficiency as the volume fraction of graphite varies is seen.}
\end{figure}
\subsection{Interstellar extinction curve}
The interstellar extinction curve (i.e. the variation of the extinction with wavelength)
is usually expressed by the ratio $E(\lambda-V)/E(B-V)$ vs $1/\lambda$. A power law for the grain size distribution, $n(a) \sim a^{-3.5}$ with $a_{min}< a <a_{max}$ \citep{mathis77}, is used for evaluating the interstellar
extinction curve for a given grain size distribution. We calculate the extinction efficiencies of the grain models using the above power law. It must be noted that we have used two size distributions:
(i) a=0.001-0.100$\mu m$ (denoted as $a100$ henceforth) and (ii) a=0.005-0.250$\mu m$
(denoted as $a250$ henceforth).
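To illustrate how the extinction efficiencies enter the power-law size averaging, the following Python sketch (not the authors' code; the flat $Q_{ext}$ values are hypothetical placeholders for the DDA efficiencies at a single wavelength) computes a size-averaged extinction cross section over the two size distributions $a100$ and $a250$, weighted by $n(a)\sim a^{-3.5}$:

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mean_extinction(q_ext, a_grid, p=-3.5):
    """Size-averaged extinction cross section for the MRN power law:
    <C_ext> = Int Q_ext(a) pi a^2 a^p da / Int a^p da,
    with q_ext sampled on a_grid (grain radii in microns)."""
    weights = a_grid ** p
    c_ext = q_ext * np.pi * a_grid ** 2   # geometric cross section times Q_ext
    return _trapz(c_ext * weights, a_grid) / _trapz(weights, a_grid)

# the two size distributions used in the text
a100 = np.linspace(0.001, 0.100, 400)   # a100: 0.001-0.100 micron
a250 = np.linspace(0.005, 0.250, 400)   # a250: 0.005-0.250 micron

# hypothetical flat Q_ext = 1, for illustration only
v100 = mean_extinction(np.ones_like(a100), a100)
v250 = mean_extinction(np.ones_like(a250), a250)
```

In an actual model `q_ext` would hold the computed $Q_{ext}(a,\lambda)$ at each wavelength; note that the steep $a^{-3.5}$ weighting makes the average strongly dominated by the grains near the lower size cutoff.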
Earlier, we have used the porous \citep{vaidya97,vaidya99} and the composite spheroidal grain models \citep{gupta07} to interpret the average
observed extinction curve in the wavelength range 0.1$\mu m$-3.4$\mu m$ \citep{gupta07}.
In this paper, we use the composite spheroidal grain models to interpret the observed
extinction in the UV, in several directions towards individual stars, selected from
various galactic environments \citep{fitz90,valencic04}.
Subsequently, in the case of the composite grain models, each interstellar extinction curve of an observed IUE star is compared with a model curve formed from a $\chi^2$-minimized, best fit linear combination of the composite grains (contributory fraction $p$)
and solid graphite grains (contributory fraction $q$). By varying $p$ and $q$ each from 0.1 to 1.0 in steps of 0.1, a set of 20 model curves is generated, and on comparing these model curves with the observed extinction curve of each star, a set of reduced $\chi^2$ values is obtained. The minimum $\chi^2$ of this set determines the best linear combination of $p$ and $q$; hence, we obtain a net model interstellar extinction curve as the linear combination of $p$ and $q$ which gives the minimum $\chi^2$ value. We use the following formula to obtain the set of reduced $\chi^2$ values \citep{beving}:
$$
\chi^2_j = \frac {\sum_{i=1}^n (S_{i}^j-T_{i}^k)^2} {pp}
$$
where $pp$ is the number of degrees of freedom, $S_{i}^{j}(\lambda_{i})$
is the $j$th model curve for the
corresponding linear combination of $p$ and $q$ of the composite grains and bare graphite
grains, $T_{i}^{k}(\lambda_{i})$ is the observed curve, and
$\lambda_{i}$, $i=1,\ldots,n$, are the $n=12$ wavelength points of the extinction curves.
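The grid search described above can be sketched as follows. This is an illustrative Python reimplementation with synthetic curves, not the actual IUE fitting code; the model and observed curves are hypothetical, and the degrees of freedom `ndof` are taken here as the number of wavelength points minus the two fitted fractions, which is an assumption:

```python
import numpy as np

def best_fit_pq(obs, comp, graph, ndof):
    """Grid search over contributory fractions p (composite grains) and
    q (bare graphite), each 0.1..1.0 in steps of 0.1, minimizing the
    reduced chi^2 between the model p*comp + q*graph and the observed curve."""
    best_chi2, best_p, best_q = np.inf, None, None
    for p in np.arange(0.1, 1.01, 0.1):
        for q in np.arange(0.1, 1.01, 0.1):
            model = p * comp + q * graph
            chi2 = np.sum((model - obs) ** 2) / ndof
            if chi2 < best_chi2:
                best_chi2, best_p, best_q = chi2, round(p, 1), round(q, 1)
    return best_chi2, best_p, best_q

# synthetic curves at n = 12 wavelength points (3.17-7.87 inverse microns)
lam = np.linspace(3.17, 7.87, 12)
comp_curve = np.exp(-0.1 * lam)                     # hypothetical composite curve
graph_curve = 0.5 + 0.05 * lam                      # hypothetical graphite curve
obs_curve = 0.3 * comp_curve + 0.4 * graph_curve    # built from known p, q
chi2, p, q = best_fit_pq(obs_curve, comp_curve, graph_curve, ndof=12 - 2)
```

With the synthetic "observed" curve built from $p=0.3$, $q=0.4$, the search recovers exactly those fractions with a vanishing reduced $\chi^2$.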
Table 4 shows the best fit parameters along with the minimized $\chi^{2}$ values for the composite grain models using DDA for the 26 IUE stars.
\begin{table*}
\caption{Best fit $\chi^{2}$ values and other parameters for different composite grain models generated using DDA technique.}
\small
\begin{tabular}{l c c c c c c }
\hline
HD \#&$\chi^{2}$&p &q &N&$f_{Gr}$&a$(\mu m)$\\
\hline
239693& 0.1552& 0.2& 0.4 & 14440& 0.1& 0.001-0.100 \\
185418& 0.2032& 0.1& 0.6 & 14440& 0.2& 0.001-0.100 \\
123335& 0.3719& 0.2& 0.4 & 14440& 0.1& 0.005-0.250 \\
18352 & 0.0535& 0.2& 0.5 & 14440& 0.1& 0.005-0.250 \\
54439 & 0.4001& 0.1& 0.4 & 9640 & 0.1& 0.001-0.100 \\
179406& 0.2239& 0.3& 0.3 & 14440& 0.1& 0.001-0.100 \\
24432 & 0.3149& 0.4& 0.4 & 9640 & 0.1& 0.001-0.100 \\
217086& 0.1722& 0.2& 0.4 & 14440& 0.1& 0.001-0.100\\
46660 & 0.1488& 0.1& 0.5 & 14440& 0.1& 0.001-0.100 \\
281159& 0.1477& 0.2& 0.4 & 14440& 0.1& 0.001-0.100 \\
21483 & 0.1714& 0.3& 0.3 & 9640 & 0.1& 0.001-0.100 \\
53974 & 0.0963& 0.2& 0.3 & 9640& 0.1& 0.005-0.250 \\
38131 & 0.2747& 0.5& 0.3 & 14440& 0.1& 0.005-0.250 \\
217061& 0.0552& 0.2& 0.4 & 14440& 0.1& 0.005-0.250 \\
205794& 0.1625& 0.1& 0.4 & 9640& 0.1& 0.005-0.250 \\
46202 & 0.2304& 0.4& 0.4 & 14440& 0.1& 0.005-0.250 \\
216658& 0.2200& 0.4& 0.4 & 14440& 0.1& 0.005-0.250 \\
149452& 0.0778& 0.4& 0.4 & 14440& 0.1& 0.005-0.250 \\
34078 & 0.0920& 0.3& 0.4 & 14440& 0.1& 0.005-0.250 \\
37367 & 0.1760& 0.1& 0.5 & 14440& 0.1& 0.005-0.250 \\
252325& 0.2172& 0.2& 0.4 & 14440& 0.3& 0.005-0.250 \\
147701& 0.0926& 0.3& 0.2 & 9640& 0.1& 0.005-0.250 \\
147889& 0.0652& 0.2& 0.4 & 14440& 0.3& 0.005-0.250 \\
37903 & 0.0852& 0.3& 0.3 & 14440& 0.1& 0.005-0.250 \\
37061 & 0.1372& 0.2& 0.2 & 14440& 0.3& 0.005-0.250 \\
93222 & 0.0969& 0.3& 0.2 & 14440& 0.3& 0.005-0.250 \\
\hline
\end{tabular}
\end{table*}
Figs. 7 and 8 show the comparison of the observed interstellar extinction curves with the best fit models for composite grains generated using the DDA technique. From Tables 1 \& 4 it is seen that the grain models with the size
distribution a=0.001-0.100 $\mu m$ fit the observed extinction curves towards stars with low $R_{v}$ ($\sim$2-3), whereas stars with high $R_{v}$ ($\sim$4-6) are fitted by grains with the size distribution a=0.005-0.250$\mu m$.
\begin{figure}[!ht]
\centering
\includegraphics[height=11.4cm]{fig7.eps}
\caption{Comparison of the observed interstellar extinction curves with the best fit composite grain model extinction curves (generated using DDA) in the wavelength range 3.17-7.87 $\mu m^{-1}$ (3200-1200\AA~). The observed $R_{v}$, environment type and the best fit grain size distribution for the sample is mentioned inside the figure.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[height=11.4cm]{fig8.eps}
\caption{Comparison of the observed interstellar extinction curves with the best fit
composite grain model extinction curves (generated using DDA) in the wavelength range 3.17-7.87 $\mu m^{-1}$ (3200-1200 \AA~). The observed $R_{v}$, environment type and the best fit grain size distribution for the sample is mentioned inside the figure. }
\end{figure}
Our results on the composite spheroidal grain models (Table 4, Figs. 7 and 8) show the best fit parameters for the grains in the interstellar medium towards the 26 selected stars observed by the IUE satellite: the size distributions 0.001-0.100 $\mu m$ ($a100$) and 0.005-0.250 $\mu m$ ($a250$), the shape (axial ratio 1.33-2.0), and the composition (volume fraction of the graphite inclusions $f=0.1$, 0.2 and 0.3).
\subsection{Environmental effects}
In order to examine how the extinction properties are influenced by the various dust environments,
we have analyzed the extinction curves for stars in seven different galactic environments, as shown in Table 1. A variation in the strength and width of the 2175\AA~ feature is seen across environments,
i.e. from dense regions and reflection nebulae to diffuse clouds (Figs. 7 and 8). It can be clearly seen that the dust in the dense quiescent environments and reflection nebulae produces broad bumps of larger widths,
whereas stars lying in the diffuse environment show narrower bumps of average widths.
Stars around HII regions and/or belonging to OB associations show weak bumps of
average widths. These results are in accordance with \citet{massa86}, who have shown that the observed width of the bump is strongly subject to environmental
influences by calculating the widths of the bump and the area under the bump through an
analytical parameterization scheme.
In Figs. 7 and 8, we show the fitting of the extinction curves for stars in the
HII region (HII), reflection nebula (RN) and dense medium (DEN + DC) with our models. The ratio $R_{v}$ also varies, from 2.37 for HD239693 to 4.76 for HD93222. These curves further highlight the star-to-star variation in the extinction, demonstrating the sensitivity to local conditions. In particular, a large variation in the strength and width of the 2175\AA~ feature is seen in the extinction curves for the stars lying in the dense region, i.e. HD37903 ($R_{v}$=4.11), HD37061 ($R_{v}$=4.29) and HD93222 ($R_{v}$=4.76). The shape of the extinction curves for the stars in the HII region (Figs. 7 and 8) shows variation in the spectral region shortward of 1500\AA~. In particular, see the steep rise in the extinction for HD24432 ($R_{v}$=2.77). This star, lying in the diffuse clouds, is fitted by the composite dust grain model of N=9640 with the $a100$ size distribution. Thus, each extinction curve contains unique information about the grains along its sight line. Using the analytic parameterization method, Fitzpatrick and Massa (1986, 1988) have also shown that the observed width of the 2175\AA~ feature is strongly subject to environmental influences.
Our results on the composite grains show that the parameter $R_{v}$ varies from $\sim$2 for small grains (a=0.01$\mu m$) to $\sim$5 for the larger grains (a=0.2$\mu m$). These results also show consistency for the denser medium where $R_{v}$ has a small value (presence of small grains) and for the diffuse regions where $R_{v}$ has a high value (presence of larger grains). These results are also consistent with the strength of the 2175\AA~ feature viz., for the larger grains (a=0.2$\mu m$), the feature is suppressed. See for example, Fig. 7 in \citet{gupta07}.
We would like to discuss here the distinct contribution of the type of medium to the spectral band feature, viz. the bump at 2175\AA~. We have seen that the contribution to this bump feature is due to the presence of very small graphite grains. The size of the grain has a tremendous effect on the extinction cross sections. Hence, a weakening of the bump feature can be attributed to the removal of very small grains from the dust population of the medium, for example in clumpy media and in unresolved sources which are classified as extended and inhomogeneous media. Using effective medium theory (Bruggeman mixing rule), \citet{kruegel1994} have shown a variation in the 2175\AA~ bump feature and a flattening of the UV extinction curve for fluffy dust aggregates of silicate, carbon and ice with increasing grain sizes.
The extinction measured for a region is directly related to the optical depth along the line of sight. For a few of the sample stars with clumpy and dense molecular media, a weakening of the 2175\AA~ peak along with a flattening of the extinction curve is seen. This effect is attributed to the influence of scattering on the extinction properties, especially the bump feature, of the stars. \citet{natta1984} have observed a suppression of the 2175\AA~ peak followed by a flattening of the far UV curve, with increasing optical depth of the medium. They have also shown that an inhomogeneous layer of high optical depth (high $R_{v}$) tends to produce gray extinction.
Similar studies for media of low optical depth have been conducted by \citet{kruegel2009}, who have investigated the influence of scattering on the extinction curves of stars. They have computed the effective optical depth $\tau_{eff}$ for a variety of idealized geometrical configurations (spheres, slabs and blocks) with varying optical depth $\tau$, and analyzed the dependence of $\tau_{eff}$ on the various measurable optical properties of the dust, including $\tau$. They also found that the results for standard dust are sensitive to the spatial resolution and the structure of the medium (clumpiness, foreground/background). The extinction cross sections calculated by them, taking the scattering effects into account for clumpy and homogeneous media and spatially unresolved stars, show marked differences from the standard reddening curve.
\citet{mathis89} have fitted certain sight lines ($\rm R_{v}$=3.02, 4.83)
using effective medium approximation (Bruggeman mixing rule).
They used composite grains (silicates and amorphous carbon) and obtained large size
grains as the best fit parameter for sight lines with higher $\rm R_{v}$
values. \citet{wolff93} have used composite grains to model interstellar
polarization towards eight lines of sight.
They have used DDA to model the composite grains. However, their composite grain model
with silicates and amorphous carbon/ organic refractory material failed to reproduce
the observed polarization curve.
Several other groups have presented studies on size and composition of dust grains in interstellar medium using various techniques. For example, \citet{zubko04} have presented a dust model consisting of various components:
PAHs, bare silicates and AMC, as well as composite particles containing silicate,
organic refractory material, water ice and voids. They have used the method of
regularization (MR) to solve for the optimal grain size distribution of each
dust component knowing the observational constraints and the dust constituents
and properties. \citet{clayton03PAH} have employed a modified version of the MEM fitting algorithm, developed by \citet{kim94} to fit the observed extinction in eight preferred sightlines/directions. They have used a 3-component homogeneous spherical grain model consisting of silicates, graphite and AMC as well as composite grain model consisting of pyroxenes, AMC and vacuum. \citet{clayton03PAH} adopted solar abundances and used EMT (extension of Bruggeman rule) to compute the optical constants of composite grains. With the 3-component homogeneous grain, they obtained the upper size cutoffs of 0.3$\mu m$ for graphite and AMC and 1$\mu m$ for silicate
grains. With the composite grain models, \citet{clayton03PAH} obtained the fit to the average observed galactic extinction curve with 0.80 (Solar) Si (Pyroxene) and 0.36 (Solar) C abundance and found the upper size cutoff size for composite grain to be 1$\mu m$. Clearly, both these grain models of \citet{clayton03PAH} show deficit of small silicate and graphite grains. On the other hand, we have found all the fits with smaller size cutoffs of 0.100$\mu m$ and 0.250$\mu m$ as compared to size cutoffs of \citet{clayton03PAH}. Although \citet{clayton03MEM} have used AMC as a third component, we did not use it since
AMC exhibits absorption at about 2500\AA~. It is also highly absorbing
at very long wavelengths and thus would provide most of the extinction longward of 0.3$\mu m$ \citep{draineIAU89}. Recently, \citet{gordan09} have analyzed
FUSE+IUE extinction curves for 75 sightlines and have compared these curves with three different
dust grain models given by \citet{Weig01,clayton03MEM} and \citet{zubko04}. \citet{gordan09} found that
the models of \citet{clayton03MEM} and \citet{zubko04} provide much better fits than \citet{Weig01} model.
It is clear that the variation in the grain size distribution with the variation in the environment indicates that the small sized grains coagulate onto large grains in relatively dense environments, as expected \citep{draine1985,draine1990}.
\citet{Wein} have fitted a specific case of extinction toward HD21021, with a small value of $R_{v}$=2.1, with a small grain size distribution of graphite/silicate grains using a simple functional fitting form.
The shape of the grain is an important factor in determining the interstellar extinction. \citet{gupta05} have calculated the extinction efficiencies for various shapes of silicate and graphitic spheroidal grains, such as oblates, prolates and spheres, using T-matrix theory. They have described in detail the considerable variation in the extinction due to grains of different axial ratios as compared to the simple case of a sphere. They also find that the best fit to the observed extinction is obtained with a grain size distribution a=0.005-0.250$\mu m$ having an axial ratio of AR=1.33. However, in this work, we have used a more realistic composite dust grain model generated using DDA. We find that most of the observed directions are well fitted by an axial ratio (AR) equal to 2.0. Hence, we conclude that the shape of the grain plays an important role in determining the observed extinction properties of the stars observed by the IUE satellite.
The composite grain models with silicate as the host material and graphite inclusions, presented in this study, are found to fit the observed extinction curves towards stars lying in various interstellar environments. It must also be emphasized here that we have used the more realistic DDA method to calculate the extinction efficiencies for the spheroidal composite grains. It must also be noted that \citet{perrin90}, \citet{sivan90}, \citet{wolff94,wolff98}, \citet{gupta07} and \citet{vaidya11} have shown that DDA is more accurate than the EMT based grain models.
We plan to use the composite grain model with other carbonaceous materials as inclusions, such as PAHs or SiC \citep{Weig01,clayton03PAH}, for obtaining better fits in the UV region, 1500\AA~-1200\AA~. We also plan to interpret the extinction towards some more stars observed by the IUE satellite.
\section{CONCLUSIONS}
We have used the more realistic DDA method to calculate the extinction efficiencies of the spheroidal composite grains
made up of a host silicate and graphite inclusions in the wavelength region 1200\AA~-3200\AA~. We have then used the extinction efficiencies of the composite grains with a power law size distribution \citep{mathis77} to evaluate the interstellar extinction curves in the wavelength range 1200\AA~-3200\AA~. In the present study, we have used two size distributions, viz. (i) a=0.001-0.100 $\mu m$ and (ii) a=0.005-0.250 $\mu m$. These extinction curves for the spheroidal composite grains are compared with the observed extinction curves obtained from the IUE
satellite data to infer the parameters such as size, shape and composition of grains. The important implications of the obtained results in terms of these physical parameters (as compared to the earlier studies) are discussed in the previous section. This study made use of a more sophisticated technique for modeling a composite dust model with various parameters that are able to characterize the actual physical dust parameters for a sample of stars, lying in different interstellar environments. The main conclusions of our study are as follows:
(i) The extinction properties of the composite grains vary considerably with
the variation in the volume fraction of the inclusions. In particular, the extinction peak at `2175\AA~' shifts and broadens with variation in the graphite inclusions.
(ii) The composite spheroidal grain models, with axial ratios 1.33 and 2.0 and
volume fraction of inclusions $f=0.1-0.3$, fit the observed extinction curves
reasonably well.
(iii) The ratio $\rm R_{v}=A(V)/E(B-V)$ is seen to be well correlated with the
`2175\AA~' feature. From the sample of 26 IUE stars, those lying in the dense regions with high $R_{v}$ ($\sim$4-5) show a weakening of the bump feature at 2175\AA~ followed by a flattening of the far UV extinction curve, whereas stars in the diffuse interstellar medium with low $R_{v}$ ($\sim$2-3) show a distinct bump at this particular wavelength. This study clearly indicates how the extinction properties of the grains vary with the optical depth of the media (which is related to $R_{v}$) and also with the grain size. It is to be noted that scattering off many unresolved stellar sources also flattens the extinction curve at this wavelength.
These results are consistent as suggested earlier by \citet{natta1984}, \citet{kruegel1994} and \citet{kruegel2009}.
In this study, we have presented the composite grain model, consisting of a host silicate and graphite inclusions, and have used the results obtained for these composite grain models to infer the size distributions, the shape of the grain and the volume fraction of the graphite inclusions of the interstellar dust towards 26 stars situated in various interstellar environments. Further, the composite grain models presented in this paper simultaneously explain the observed interstellar extinction \citep{vaidya01,gupta07}, the infrared emission from circumstellar dust \citep{vaidya11}, the scattering by cometary dust \citep{gupta06} and the cosmic abundances \citep{gupta07}.
\section{Acknowledgments}
The authors acknowledge the ISRO-RESPOND project (No. ISRO/RES/2/2007-08) for funding this research.
\def\aj{AJ}%
\def\actaa{Acta Astron.}%
\def\araa{ARA\&A}%
\def\apj{ApJ}%
\def\apjl{ApJ}%
\def\apjs{ApJS}%
\def\ao{Appl.~Opt.}%
\def\apss{Ap\&SS}%
\def\aap{A\&A}%
\def\aapr{A\&A~Rev.}%
\def\aaps{A\&AS}%
\def\azh{AZh}%
\def\baas{BAAS}%
\def\bac{Bull. astr. Inst. Czechosl.}%
\def\caa{Chinese Astron. Astrophys.}%
\def\cjaa{Chinese J. Astron. Astrophys.}%
\def\icarus{Icarus}%
\def\jcap{J. Cosmology Astropart. Phys.}%
\def\jrasc{JRASC}%
\def\mnras{MNRAS}%
\def\memras{MmRAS}%
\def\na{New A}%
\def\nar{New A Rev.}%
\def\pasa{PASA}%
\def\pra{Phys.~Rev.~A}%
\def\prb{Phys.~Rev.~B}%
\def\prc{Phys.~Rev.~C}%
\def\prd{Phys.~Rev.~D}%
\def\pre{Phys.~Rev.~E}%
\def\prl{Phys.~Rev.~Lett.}%
\def\pasp{PASP}%
\def\pasj{PASJ}%
\def\qjras{QJRAS}%
\def\rmxaa{Rev. Mexicana Astron. Astrofis.}%
\def\skytel{S\&T}%
\def\solphys{Sol.~Phys.}%
\def\sovast{Soviet~Ast.}%
\def\ssr{Space~Sci.~Rev.}%
\def\zap{ZAp}%
\def\nat{Nature}%
\def\iaucirc{IAU~Circ.}%
\def\aplett{Astrophys.~Lett.}%
\def\apspr{Astrophys.~Space~Phys.~Res.}%
\def\bain{Bull.~Astron.~Inst.~Netherlands}%
\def\fcp{Fund.~Cosmic~Phys.}%
\def\gca{Geochim.~Cosmochim.~Acta}%
\def\grl{Geophys.~Res.~Lett.}%
\def\jcp{J.~Chem.~Phys.}%
\def\jgr{J.~Geophys.~Res.}%
\def\jqsrt{J.~Quant.~Spec.~Radiat.~Transf.}%
\def\memsai{Mem.~Soc.~Astron.~Italiana}%
\def\nphysa{Nucl.~Phys.~A}%
\def\physrep{Phys.~Rep.}%
\def\physscr{Phys.~Scr}%
\def\planss{Planet.~Space~Sci.}%
\def\procspie{Proc.~SPIE}%
\let\astap=\aap
\let\apjlett=\apjl
\let\apjsupp=\apjs
\let\applopt=\ao
\bibliographystyle{apj}
\section{Introduction}
Let $n\geq 2$, and $d\geq 1$. The \emph{configuration space} of $n$ points in
the $d$-dimensional Euclidean space $E=\mathbb{R}^d$ is defined as
\[
\conf{n}{E} = \{ \simbolovettore{q} \in E^n : \simbolovettore{q}_i \neq \simbolovettore{q}_j \text{ for } i \neq j \},
\]
where $\simbolovettore{q} = (\simbolovettore{q}_1, \simbolovettore{q}_2, \ldots, \simbolovettore{q}_n)\in E^n$ and $\simbolovettore{q}_j\in E$ for all $j$.
Given a positive parameter $\alpha>0$, and $n$ positive masses $m_j>0$,
the \emph{potential function} $U\colon \conf{n}{E} \to \mathbb{R}$ is defined
as
\[
U(\simbolovettore{q}) = \sum_{1\leq i < j \leq n} \dfrac{m_im_j}{\norm{\simbolovettore{q}_i - \simbolovettore{q}_j}^\alpha}.
\]
A \emph{central configuration} is a configuration
that yields a relative equilibrium solution
of the Newton equations of the $n$-body problem with potential function $U$,
and can be shown (cf.
\cite{Moultonstraightlinesolutions1910},
\cite{moeckelCentralConfigurations1990},
\cite{AlbouyInverseProblemCollinear2000},
\cite{FerrarioFixedpointindices2015},
\cite{ferrarioCentralConfigurationsMorse2017},
\cite{ferrarioCentralConfigurationsMutual2017})
that it is a solution of the following $n$ equations
\begin{equation}
\lambda m_j \simbolovettore{q}_j = - \alpha \sum_{k\neq j} m_j m_k \dfrac{\simbolovettore{q}_j - \simbolovettore{q}_k}{\norm{\simbolovettore{q}_j - \simbolovettore{q}_k}^{\alpha+2}}.
\end{equation}
Such configurations have center of mass $\sum_{j=1}^n m_j \simbolovettore{q}_j = \boldsymbol{0}\in E$,
and the parameter $\lambda$ turns out to be equal to
\(
\lambda = -\alpha \dfrac{U(\simbolovettore{q})}{\sum_{j=1}^n m_j \norm{\simbolovettore{q}_j}^2 }.
\)
A generic central configuration (with center of mass
\(
\simbolovettore{q}_0 = \dfrac{\sum_{j=1}^n m_j \simbolovettore{q}_j}{M}
\)
not necessarily $\boldsymbol{0}$,
where $M=\sum_{j=1}^n m_j$)
satisfies the equation
\begin{equation}
\label{eq:CC}
\lambda m_j (\simbolovettore{q}_j-\simbolovettore{q}_0) = - \alpha \sum_{k\neq j} m_j m_k \dfrac{\simbolovettore{q}_j - \simbolovettore{q}_k}{\norm{\simbolovettore{q}_j - \simbolovettore{q}_k}^{\alpha+2}}.
\end{equation}
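As a quick numerical illustration of equation \eqref{eq:CC} (a sketch added for exposition, not part of the original argument), the following Python snippet checks that three equal masses equally spaced on a line form a central configuration for $\alpha=1$, and that the multiplier agrees with the stated formula $\lambda = -\alpha\, U(\simbolovettore{q})/\sum_j m_j \norm{\simbolovettore{q}_j}^2$:

```python
import numpy as np

def cc_residual(q, m, lam, alpha=1.0):
    """Residual of the central-configuration equations
    lam*m_j*q_j + alpha*sum_{k!=j} m_j*m_k*(q_j-q_k)/|q_j-q_k|^(alpha+2)
    for collinear positions q with center of mass at the origin."""
    res = np.zeros(len(q))
    for j in range(len(q)):
        rhs = 0.0
        for k in range(len(q)):
            if k != j:
                d = q[j] - q[k]
                rhs -= alpha * m[j] * m[k] * d / abs(d) ** (alpha + 2)
        res[j] = lam * m[j] * q[j] - rhs
    return res

# three equal masses at -1, 0, 1: the equal-mass collinear (Euler) configuration
q = np.array([-1.0, 0.0, 1.0])
m = np.ones(3)
alpha = 1.0
U = sum(m[i] * m[j] / abs(q[i] - q[j]) ** alpha
        for i in range(3) for j in range(i + 1, 3))
lam = -alpha * U / np.sum(m * q ** 2)   # lambda from the stated identity
```

Here $U=5/2$ and $\sum_j m_j q_j^2=2$, so $\lambda=-5/4$, and the residuals of all three equations vanish identically.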
Now, if for each pair $j,k$ we denote
\[
\simbolovettore{Q}_{jk} = \dfrac{\simbolovettore{q}_j - \simbolovettore{q}_k}{\norm{\simbolovettore{q}_j - \simbolovettore{q}_k}^{\alpha+2}},
\]
equation \eqref{eq:CC}
can be written as
\footnote{%
In the notation of \cite{AlbouyInverseProblemCollinear2000},
$\simbolovettore{q}_j = X_j$, $\simbolovettore{q}_0=c$,
$A_j=\sum_{k\neq j} m_k \simbolovettore{Q}_{kj}$,
so that the equation \eqref{eq:CC} reads as equation (3) of \cite{AlbouyInverseProblemCollinear2000}
$\alpha A_j - \lambda(\simbolovettore{q}_j-\simbolovettore{q}_0) =\boldsymbol{0}$, $j=1,\ldots, n$,
for some constant $\lambda<0$.
}
\begin{equation}
\label{eq:CC2}
\simbolovettore{q}_j = M^{-1} \sum_{k=1}^n m_k \simbolovettore{q}_k - \dfrac{\alpha}{\lambda} \sum_{k\neq j} m_k \simbolovettore{Q}_{jk} ,
\quad j=1,\ldots, n.
\end{equation}
The \emph{inverse problem}, introduced by Moulton
\cite{Moultonstraightlinesolutions1910} (see also
Buchanan
\cite{Buchanancertaindeterminantsconnected1909}), and considered by Albouy and Moeckel
in \cite{AlbouyInverseProblemCollinear2000},
can be phrased as follows: given the positions $\simbolovettore{q}_j$
(or, equivalently, the mutual differences $\simbolovettore{q}_i-\simbolovettore{q}_j$)
to find the (positive) masses $m_j$ and $\lambda<0$ such that \eqref{eq:CC2} holds.
As it is, the equation is not linear in the $(n+1)$-tuple $(m_1,\ldots, m_n,\lambda)$,
but can be transformed into the following equation
\begin{equation}
\label{eq:CC3}
\simbolovettore{q}_j = \hat\simbolovettore{c} + \sum_{k\neq j} \hat m_k \simbolovettore{Q}_{jk} ,
\quad j=1,\ldots, n,
\end{equation}
because of the following lemma.
\begin{lemma}
\label{eq:CC3-CC2}
Given $\simbolovettore{q}\in \conf{n}{E}$, there exists $(m_1,\ldots, m_n,\lambda)$,
with $m_j>0$ satisfying
$\eqref{eq:CC2}$ if and only if there exists
$(\hat m_1,\ldots, \hat m_n, \hat \simbolovettore{c}) \in \mathbb{R}^{n+d}$ such that
\eqref{eq:CC3} holds
and $\hat m_j >0 $ for each $j$.
\end{lemma}
\begin{proof}
If \eqref{eq:CC2} holds for $(m_1,\ldots, m_n,\lambda)$ with positive masses,
then $\lambda<0$ and simply by setting
\(
\hat \simbolovettore{c} =
M^{-1} \sum_{k=1}^n m_k \simbolovettore{q}_k ~, \quad
\hat m_k = - \dfrac{\alpha}{\lambda} m_k
\)
one has that \eqref{eq:CC3} holds.
Conversely, assume that $(\hat m_1,\ldots, \hat m_n,\hat \simbolovettore{c})$ satisfies
\eqref{eq:CC3}, with $\hat m_j>0$. Then by putting
\(
m_k = \hat m_k, \quad k=1,\ldots, n ~,\quad
\lambda = -\alpha %
\)
it follows, multiplying by $m_j$ (and setting as above $M=\sum_{j=1}^n m_j$)
and summing for $j=1,\ldots, n$
\[
\simbolovettore{q}_j = \hat\simbolovettore{c} -\dfrac{\alpha}{\lambda} \sum_{k\neq j} m_k \simbolovettore{Q}_{jk}, \quad
\implies \quad
\sum_{j=1}^n m_j \simbolovettore{q}_j = M \hat \simbolovettore{c} + \boldsymbol{0},
\]
and hence
\eqref{eq:CC2}.
\end{proof}
\begin{remark}
Multiplying each equation by $\hat m_j (\simbolovettore{q}_j - \hat\simbolovettore{c})$, and summing for $j=1,\ldots, n$, it follows that
\(
\sum_{j=1}^n \hat m_j \norm{\simbolovettore{q}_j-\hat \simbolovettore{c}}^2 %
= \sum_{j=1}^n \sum_{k\neq j} \hat m_j \hat m_k \simbolovettore{Q}_{jk} \cdot (\simbolovettore{q}_j - \hat \simbolovettore{c}) %
=
\sum_{1\leq j < k \leq n}\hat m_j \hat m_k \norm{\simbolovettore{q}_j -\simbolovettore{q}_k}^{-\alpha}. %
\)
Hence whenever \eqref{eq:CC2} or \eqref{eq:CC3} holds (for positive masses),
the corresponding $\lambda$ is in any case negative.
Moreover, \eqref{eq:CC2} holds for $(m_1,\ldots, m_n,\lambda)$ if and only if
it holds for $(t m_1,\ldots, t m_n,t\lambda)$ for any $t>0$, so that equations \eqref{eq:CC2}
and \eqref{eq:CC3} are %
equivalent.
\end{remark}
\begin{defi}
For each $\simbolovettore{q}\in \conf{n}{E}$, let
$\Psi(\simbolovettore{q}), \tilde\Psi(\simbolovettore{q}) \subset E^n$ be the subsets
\[
\begin{aligned}
\Psi(\simbolovettore{q}) & = \{ \simbolovettore{q} : \simbolovettore{q}_j =
\hat\simbolovettore{c} + \sum_{k\neq j} \hat m_k \simbolovettore{Q}_{jk} :
\hat \simbolovettore{c} \in E, \hat m_j >0, j=1,\ldots, n
\} \\
\subseteq
\tilde \Psi(\simbolovettore{q}) & = \{ \simbolovettore{q} : \simbolovettore{q}_j =
\hat\simbolovettore{c} + \sum_{k\neq j} \hat m_k \simbolovettore{Q}_{jk} :
\hat \simbolovettore{c} \in E, \hat m_j \in \mathbb{R}, j=1,\ldots, n
\}.
\end{aligned}
\]
Hence, given $\simbolovettore{q}\in \conf{n}{E}$, there exists a solution of \eqref{eq:CC3} if and only
if $\simbolovettore{q} \in \Psi(\simbolovettore{q})$; furthermore, if $\simbolovettore{q}\in \Psi(\simbolovettore{q})$ then
$\simbolovettore{q}\in \tilde\Psi(\simbolovettore{q})$.
\end{defi}
We will now deal with the collinear case. First, we will follow Albouy--Moeckel
\cite{AlbouyInverseProblemCollinear2000}
and consider the inverse problem with real masses;
then we will consider the problem with positive masses, and
follow Ouyang--Xie \cite{OuyangCollinearCentralConfiguration2005} (for $n=4$
bodies and $\alpha=1$) and Davis et al. \cite{DavisInverseproblemcentral2018a} (for $n=5$
bodies and $\alpha=1$),
in understanding in which regions the inverse problem has no solutions.
\section{The case $d=1$: collinear configurations and Pfaffians}
For $d=1$, all configurations are on a line, therefore
$E=\mathbb{R}$, $c=\simbolovettore{c}$, and $\simbolovettore{q} \in \tilde\Psi(\simbolovettore{q})$ if and only if
there exists
$(m_1,...,m_n,c) \in \mathbb{R}^{n+1}$ such that
\begin{equation}
\label{eq:d1}
\begin{bmatrix}
0 & Q_{12} & Q_{13} & \ldots & Q_{1n} \\
-Q_{12} & 0 & Q_{23} & \ldots & Q_{2n} \\
\vdots & \vdots & \vdots &\ddots & \vdots \\
-Q_{1n} & -Q_{2n} & \ldots & -Q_{n-1,n} & 0
\end{bmatrix}
\begin{bmatrix}
m_1 \\ m_2 \\ \vdots \\ m_n
\end{bmatrix}
+
c
\begin{bmatrix}
1\\1\\\vdots\\1
\end{bmatrix}
=
\begin{bmatrix}
q_1\\q_2\\\vdots\\q_n
\end{bmatrix}
\end{equation}
where $Q_{ij} =(q_i - q_j)\abs{q_i - q_j}^{-\alpha-2}$,
for $i,j=1,\ldots, n$; equivalently,
\begin{equation}\label{eq:main1d}
Q \simbolovettore{m} + c \simbolovettore{L} = \simbolovettore{q}~,
\end{equation}
where $Q$ is the $n\times n$ skew-symmetric matrix with entries $Q_{ij}$,
$\simbolovettore{m}$ the vector of masses, and $\simbolovettore{L}$ the vector with constant
components $1$.
Recall %
that if $n$ is odd and $A$ is an anti-symmetric
$n\times n$ matrix, then $A^{\mkern-1.5mu\mathsf{T}} = - A \implies \det(A) = \det(A^{\mkern-1.5mu\mathsf{T}}) = \det(-A) = (-1)^n \det(A) \implies \det(A) = 0$.
If $n$ is even, then
$\det A = \left(
\operatorname{Pf} A
\right)^2$
(cf. for example the combinatorial approach of \cite{godsilAlgebraicCombinatorics1993}, Chap. 7,
or the multi-linear algebra approach of \cite{northcottMultilinearAlgebra1984}, from page 100).
The \term{pfaffian} $\operatorname{Pf} Q$ of a skew-symmetric matrix $Q$ (for even $n$) is defined as follows
(in Moulton's 1910 notation):
\setlength{\arrayrulewidth}{1pt}
\[
\operatorname{Pf} Q =
\begin{array}{cccc|}
\multicolumn{1}{|c}{Q_{12}} & Q_{13} & \cdots & Q_{1n}\\
& Q_{23} & \cdots & Q_{2n} \\
& & \ddots & \vdots \\
&& & Q_{n-1,n}
\end{array}
=
\sum_{\sigma} (-1)^\sigma Q_{r_1,s_1} Q_{r_2,s_2} \ldots Q_{r_k,s_k}
\]
where $n=2k$, and the permutation
$\sigma$ runs over all \emph{perfect matchings} of $\simbolovettore{n}=\{1,\ldots, n=2k\}$:
a perfect matching $\sigma$ is a fixpoint-free involution of $\simbolovettore{n}$,
which can also be represented as
a partition of $\simbolovettore{n}$ into pairs $[r_1,s_1, r_2,s_2,\ldots, r_k,s_k]$.
The sign $(-1)^\sigma$ is the parity of this permutation.
In the notation of D. Knuth and Cayley
\cite{knuthOverlappingPfaffians1996,cayley1849determinants}
\( \operatorname{Pf} A = A[1,2,\ldots, n] \).
The following recursive identity is the analogue of the Laplace expansion for the determinant:
\begin{equation}\label{eq:pfaff1}
A[1,2,\ldots,n]=\sum_{j=1}^{n-1} (-1)^{j+1} A_{jn}
A[1,\ldots, \hat j, \ldots, \hat n],
\end{equation}
where $A[1,\ldots, \hat j, \ldots, \hat n]$ denotes the Pfaffian of the matrix
with the $j$-th and $n$-th rows and columns canceled out.
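The recursive identity \eqref{eq:pfaff1} translates directly into a short program. The following Python sketch (ours, purely illustrative and not part of the proofs; all names are ours) computes the Pfaffian of a skew-symmetric matrix by this recursion, and can be checked against the closed formula $\operatorname{Pf} A = A_{12}A_{34}-A_{13}A_{24}+A_{14}A_{23}$ for $n=4$.

```python
def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix A (list of lists), computed by the
    Laplace-type recursion  A[1..n] = sum_j (-1)^(j+1) A_{jn} A[1..^j..^n]."""
    n = len(A)
    if n == 0:
        return 1            # Pfaffian of the empty matrix
    if n % 2 == 1:
        return 0            # odd-size skew-symmetric matrices are singular
    total = 0
    for j in range(n - 1):  # 0-based j; index j+1 is paired with index n
        minor = [[A[r][c] for c in range(n - 1) if c != j]
                 for r in range(n - 1) if r != j]
        total += (-1) ** j * A[j][n - 1] * pfaffian(minor)
    return total

# closed form for n = 4: Pf A = A12 A34 - A13 A24 + A14 A23
A = [[0, 1, 2, 3],
     [-1, 0, 4, 5],
     [-2, -4, 0, 6],
     [-3, -5, -6, 0]]
assert pfaffian(A) == 1 * 6 - 2 * 5 + 3 * 4   # = 8
```

The routine works over any commutative ring, in particular with exact rationals, which is how it is used in the checks below.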
An elementary property of Pfaffians is the following: if $A$ is a skew-symmetric matrix,
and $B$ the matrix obtained by swapping the $i$-th and $j$-th columns and the $i$-th
and $j$-th rows, then
\begin{equation}
\label{eq:AminusB}
\operatorname{Pf} A = - \operatorname{Pf} B.
\end{equation}
\begin{lemma}[Halton]
\label{lemma:halton}
Let $A$ be an $n\times n$ skew-symmetric matrix, and $i<j$, with $n$ even.
If $A_{ij}$ denotes the matrix $A$ with row $i$ and column $j$ removed, then
\begin{equation}\label{eq:halton}
\det A_{ij} = -
A[1,\ldots, \hat i, \ldots, \hat j, \ldots, n] \, \operatorname{Pf} A.
\end{equation}
\end{lemma}
\begin{remark}
See for example lemma 3.2 at page 118 of \cite{godsilAlgebraicCombinatorics1993}, for a proof,
where it is used to prove the recursive relation of Pfaffians.
See also
\cite{stembridgeNonintersectingPathsPfaffians1990a},
\cite{DressSimpleProofIdentity1995},
\cite{HamelPfaffianIdentitiesCombinatorial2001} for other interesting combinatorial identities
for pfaffians.
\end{remark}
\begin{remark}[Buchanan Albouy--Moeckel Conjecture]
Buchanan%
, in his 1909 article \cite{Buchanancertaindeterminantsconnected1909},
proves a proposition which can be rephrased as follows:
\emph{for each even $n$, $\alpha=1$,
for each $\simbolovettore{q}\in \conf{n}{\mathbb{R}}$, the Pfaffian is non-zero: $\operatorname{Pf} A_n \neq 0$.}
As found by Albouy and Moeckel in \cite{AlbouyInverseProblemCollinear2000},
Buchanan's proof uses an incorrect argument, and cannot be repaired. They
therefore conjecture the statement to be true, in what is now known as the
\emph{Albouy--Moeckel Conjecture}: the Pfaffians are non-zero for all
configurations.
The partial steps towards a complete proof are the following:
it is true for $n\leq 4$ and $\alpha>0$,
and, with a computer-assisted argument, for $\alpha=1$ and $n\leq 6$
(Albouy--Moeckel 2000 \cite{AlbouyInverseProblemCollinear2000});
an analytic proof for $n\leq 6$ and $\alpha=1$ was later given by Xie
(2014 \cite{Xieanalyticalproofcertain2014}).
\end{remark}
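The conjecture is easy to probe numerically for small $n$. The following Python sketch (ours, purely illustrative) evaluates the Pfaffian exactly, with rational arithmetic, for $\alpha=1$ (where $Q_{ij}=q_{ij}^{-2}$) at randomly chosen ordered integer configurations, and checks that it is positive for $n=4$ and $n=6$.

```python
from fractions import Fraction
from random import Random

def pf(A):
    # Pfaffian via the recursive Laplace-type expansion
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0
    return sum((-1) ** j * A[j][n - 1]
               * pf([[A[r][c] for c in range(n - 1) if c != j]
                     for r in range(n - 1) if r != j])
               for j in range(n - 1))

def config_matrix(q):
    # skew-symmetric Q with Q_ij = q_ij^{-2} for i < j (alpha = 1, exact)
    n = len(q)
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = Fraction(1, (q[i] - q[j]) ** 2)
            A[j][i] = -A[i][j]
    return A

rng = Random(0)
for n in (4, 6):
    for _ in range(25):
        gaps = [rng.randint(1, 9) for _ in range(n - 1)]
        q = [sum(gaps[i:]) for i in range(n - 1)] + [0]   # q_1 > ... > q_n
        assert pf(config_matrix(q)) > 0                    # non-zero, and positive
```

Random sampling of course proves nothing; it merely illustrates the positivity that the lemmas below establish rigorously for small $n$.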
The following lemma generalizes Theorem 2.4.(1-2) of
\cite{Xieanalyticalproofcertain2014};
the main conclusion
follows from Proposition 5 of \cite{AlbouyInverseProblemCollinear2000}.
\begin{lemma}
\label{lemma:Pf>0}
If $q_1>q_2>q_3>q_4$ and as above
$Q_{ij} =q_{ij}\abs{q_{ij}}^{-\alpha-2}$, with $q_{ij}=q_i-q_j$,
then
$Q_{12}Q_{34}>Q_{13}Q_{24}$,
and $ Q_{23}Q_{14} > Q_{13} Q_{24}$, and hence
\[
Q_{12} Q_{34} - Q_{13}Q_{24} + Q_{23}Q_{14} > 0.
\]
\end{lemma}
\begin{proof}
\[
\begin{aligned}
Q_{23} Q_{14} > Q_{13} Q_{24} \iff &
(q_{23} q_{14} )^{-\alpha-1} > (q_{13} q_{24})^{-\alpha-1}
\\
\iff &
q_{23} q_{14} < q_{13} q_{24} \\
\iff &
q_{23} (q_{13} + q_{34}) < q_{13} (q_{23} + q_{34})
\\
\iff &
\dfrac{ q_{13} + q_{34} }{ q_{13}} < \dfrac{q_{23} + q_{34}}{ q_{23} } \\
\iff &
1 + \dfrac{q_{34} }{ q_{13}} < 1+ \dfrac{q_{34}}{ q_{23} }~, \\
\end{aligned}
\]
and the last inequality holds true since $q_{13} > q_{23}$.
The inequality $Q_{12}Q_{34}>Q_{13}Q_{24}$ follows in the same way, since
$q_{13}q_{24} = (q_{12}+q_{23})(q_{23}+q_{34}) > q_{12}q_{34}$.
Together, the two inequalities imply
\[
Q_{12}Q_{34} - Q_{13} Q_{24} + Q_{23} Q_{14} > Q_{12}Q_{34} > 0 ~.
\]
\end{proof}
The following lemma generalizes Theorem 2.4.(3) of
\cite{Xieanalyticalproofcertain2014}.
\begin{lemma}
\label{lemma:Pf'>0}
Assume $q_1>q_2>q_3>q_4$,
and as above
$Q_{ij} =q_{ij}\abs{q_{ij}}^{-\alpha-2}$.
The function $f(q_4) = \operatorname{Pf} A_4 = Q_{14}Q_{23} -Q_{24}Q_{13} +Q_{34} Q_{12}$
is monotone increasing in $(-\infty,q_3)$, with $q_1,q_2,q_3$ fixed.
The function $g(q_1) = \operatorname{Pf} A_4$ is monotone decreasing in $(q_2,+\infty)$,
with $q_2,q_3,q_4$ fixed.
\end{lemma}
\begin{proof}
\[
\begin{aligned}
\dfrac{d (\operatorname{Pf} A_4) }{d q_4} & = (\alpha+1) \left( q_{14}^{-\alpha-2} Q_{23} - q_{24}^{-\alpha-2} Q_{13}
+ q_{34}^{-\alpha-2} Q_{12} \right) \\
& = (\alpha+1) \left(
Q_{12}\dfrac{Q_{34}}{q_{34}} - Q_{13} \dfrac{Q_{24}}{q_{24}} + Q_{23} \dfrac{Q_{14}}{q_{14}}
\right)
\\
\end{aligned}
\]
Since $Q_{12}Q_{34}>Q_{13}Q_{24}$
and $ Q_{23}Q_{14} > Q_{13} Q_{24}$ by \ref{lemma:Pf>0},
\[
\begin{aligned}
Q_{12}\dfrac{Q_{34}}{q_{34}} - Q_{13} \dfrac{Q_{24}}{q_{24}} + Q_{23} \dfrac{Q_{14}}{q_{14}} & >
\dfrac{1}{q_{34}} Q_{13}Q_{24} - \dfrac{1}{q_{24}} Q_{13} Q_{24} + \dfrac{1}{q_{14}} Q_{23}Q_{14} \\
& = \left( \dfrac{1}{q_{34}} - \dfrac{1}{q_{24}} + \dfrac{1}{q_{14}} \right) Q_{13} Q_{24} >0.
\end{aligned}
\]
The second part of the statement follows by considering that if $q_1>q_2>q_3>q_4$,
then one can define $y_1=-q_4>y_2=-q_3>y_3=-q_2>y_4=-q_1$, and the Pfaffian of the
corresponding matrix $Y_{ij} = (y_{ij})\abs{y_{ij}}^{-\alpha-2}$,
with $y_{ij} = y_i - y_j$, is equal to
\[
\begin{array}{ccc|}
\multicolumn{1}{|c}{Y_{12}} & Y_{13} & Y_{14}\\
& Y_{23} & Y_{24} \\
&& Y_{34}
\end{array}
=
\begin{array}{ccc|}
\multicolumn{1}{|c}{Q_{34}} & Q_{24} & Q_{14}\\
& Q_{23} & Q_{13} \\
&& Q_{12}
\end{array}
=
\begin{array}{ccc|}
\multicolumn{1}{|c}{Q_{12}} & Q_{13} & Q_{14}\\
& Q_{23} & Q_{24} \\
&& Q_{34}
\end{array}~.
\]
Since $f(y_4)$ is monotonically increasing in $(-\infty,y_3)$, and $y_3=-q_2$, the function $g(q_1) = f(-y_4)$
is monotonically decreasing in $(q_2,+\infty)$.
\end{proof}
The following lemma is inspired by the proof of Theorem 2.5 of
\cite{Xieanalyticalproofcertain2014},
and in fact generalizes it.
\begin{lemma}
\label{lemma:crisscross}
If $\simbolovettore{q}\in \mathbb{R}^n$ is a (collinear) configuration with $q_1>q_2>\ldots>q_n$, and $Q$
denotes the skew-symmetric matrix with entries $Q_{ij}$, then
\[
\operatorname{Pf} Q =
\begin{array}{cccc|}
\multicolumn{1}{|c}{Q_{12}} & Q_{13} & \cdots & Q_{1n}\\
& Q_{23} & \cdots & Q_{2n} \\
& & \ddots & \vdots \\
&& & Q_{n-1,n}
\end{array}
=
\left(
\prod_{j=1}^{n-1} Q_{jn}
\right) \cdot \left(
\begin{array}{cccc|}
\multicolumn{1}{|c}{\tilde Q_{12}} & \tilde Q_{13} & \cdots & 1 \\
& \tilde Q_{23} & \cdots & 1 \\
& & \ddots & \vdots \\
&& & 1
\end{array} \right) ~,
\]
where for each $1\leq i<j \leq n-1$
\[
\tilde Q_{ij} =
\left( q^{-1}_{jn}-q^{-1}_{in} \right)^{-\alpha-1}.
\]
Hence,
if the configuration $\tilde \simbolovettore{q} \in \mathbb{R}^{n-1}$ is
defined by $\tilde q_j = - q^{-1}_{jn}$ for each $j=1,\ldots, n-1$, it satisfies
\[
\tilde q_1 >
\tilde q_2 >
\ldots >
\tilde q_{n-1}
\]
and, as for $Q$, with $\tilde q_{ij} = \tilde q_i - \tilde q_j$,
$\tilde Q_{ij} =\tilde q_{ij}\abs{\tilde q_{ij}}^{-\alpha-2}$.
\end{lemma}
\begin{proof}
By multiplying on the left and the right the matrix $Q$ with the $n\times n$
matrix with diagonal $(Q_{1n}^{-1},Q_{2n}^{-1},\ldots, Q_{n-1,n}^{-1},1)$,
one obtains a matrix $\tilde Q$ with entries
\[
\tilde Q_{ij} = \begin{cases}
\frac{Q_{ij}}{Q_{in}Q_{jn}} & \text{ if } 1\leq i,j \leq n-1~, \\
1 & \text{ if } j=n \text{ and } i\leq n-1~, \\
-1 & \text{ if } i=n \text{ and } j\leq n-1~, \\
0 & \text{ if } i=j=n~,
\end{cases}
\]
so that $\operatorname{Pf} \tilde Q = \left( \prod_{j=1}^{n-1} Q_{jn}^{-1} \right) \operatorname{Pf} Q$;
and the proof follows
from the fact that if $i<j$ then
\[
\frac{Q_{ij}}{Q_{in}Q_{jn}}
= \left( \frac{q_{ij}}{q_{in}q_{jn}}\right)^{-\alpha-1} =
\left( \frac{q_{in}-q_{jn} }{q_{in}q_{jn}}\right)^{-\alpha-1} =
\left( q^{-1}_{jn}-q^{-1}_{in} \right)^{-\alpha-1}.
\]
\end{proof}
Given an $n\times n$ skew-symmetric matrix $Q$,
let $\bordered{Q}$ denote the $(n+1)\times(n+1)$ skew-symmetric bordered matrix
\[
\bordered{Q} =
\begin{bmatrix}
0 & Q_{12} & Q_{13} & \ldots & Q_{1n} & 1 \\
-Q_{12} & 0 & Q_{23} & \ldots & Q_{2n} & 1\\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots\\
-Q_{1n} & -Q_{2n} & \ldots & -Q_{n-1,n} & 0 & 1 \\
-1 & -1 & \ldots & -1 & -1 & 0 \\
\end{bmatrix}~.
\]
With this notation, lemma \ref{lemma:crisscross} can be written as
\(
\operatorname{Pf} Q = \left(
\prod_{j=1}^{n-1} Q_{jn}
\right) \operatorname{Pf} \bordered{\tilde Q} \).
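For $n=4$ and $\alpha=1$ this identity can be checked exactly. The following Python sketch (ours, for illustration only) builds both sides with rational arithmetic for a sample ordered configuration.

```python
from fractions import Fraction

def pf(A):
    # Pfaffian via the recursive expansion of the Laplace type
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0
    return sum((-1) ** j * A[j][n - 1]
               * pf([[A[r][c] for c in range(n - 1) if c != j]
                     for r in range(n - 1) if r != j])
               for j in range(n - 1))

q = [Fraction(v) for v in (3, 2, 1, 0)]   # ordered configuration, alpha = 1
n = len(q)

Q = [[Fraction(0)] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        Q[i][j] = (q[i] - q[j]) ** -2
        Q[j][i] = -Q[i][j]

# tilde q_j = -1/q_{jn}; bordered matrix of tilde Q, with a last column of ones
tq = [-1 / (q[j] - q[n - 1]) for j in range(n - 1)]
m = n - 1
B = [[Fraction(0)] * (m + 1) for _ in range(m + 1)]
for i in range(m):
    for j in range(i + 1, m):
        B[i][j] = (tq[i] - tq[j]) ** -2
        B[j][i] = -B[i][j]
    B[i][m] = Fraction(1)
    B[m][i] = Fraction(-1)

prod = Fraction(1)
for j in range(n - 1):
    prod *= Q[j][n - 1]

assert pf(Q) == prod * pf(B)   # Pf Q = (prod_j Q_jn) Pf bordered(tilde Q)
```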
\begin{propo}
\label{propo:odd}
If $n$ is odd, and for $\simbolovettore{q}\in \conf{n}{\mathbb{R}}$ the product of pfaffians satisfies
\[
\bordered{Q}[1,\ldots, n,n+1] \, Q[1,\ldots, n-1,\hat n] \neq 0~,
\]
then equation \eqref{eq:main1d} has solutions.
\end{propo}
\begin{proof}
Observe that equation \eqref{eq:main1d} has solutions if
the rank of the $n\times (n+1)$ matrix
\[
\begin{bmatrix}
0 & Q_{12} & Q_{13} & \ldots & Q_{1n} & 1 \\
-Q_{12} & 0 & Q_{23} & \ldots & Q_{2n} & 1\\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots\\
-Q_{1n} & -Q_{2n} & \ldots & -Q_{n-1,n} & 0 & 1 \\
\end{bmatrix}
\]
is equal to $n$, which happens if for some $j \in \{1,\ldots, n\}$ the $n\times n$ square matrix
obtained by removing the $j$-th column is non-singular.
Now, this is the same as the matrix obtained by
removing the $(n+1)$-th row and the $j$-th column of
the bordered matrix $\bordered{Q}$. By \eqref{eq:halton} (on transposed matrices)
its determinant is equal to
\[
Q[1,\ldots, \hat j, \ldots, n] \operatorname{Pf} \bordered{Q}.
\]
By taking $j=n$ the conclusion follows.
\end{proof}
Note that the statement holds
with $j$ chosen as any index from $1$ to $n$, instead of $n$;
moreover, because of \eqref{eq:pfaff1}, there exists $j$ such that
$ \bordered{Q}[1,\ldots, n,n+1] Q[1,\ldots, \hat j, \ldots, n] \neq 0$
if and only if
$ \bordered{Q}[1,\ldots, n,n+1] \neq 0$.
See also Theorem 1 of \cite{AlbouyInverseProblemCollinear2000}, %
where
shorter proofs or more general results are presented,
using exterior algebra as a computational device.
Let $n$ be odd and $\simbolovettore{q}$ a configuration. Then the corresponding
$Q_n$ is a $n\times n$ singular matrix. The two matrices in
\ref{propo:odd} are the $(n-1)\times (n-1)$ skew-symmetric matrix $Q_{n-1}$ corresponding to
the configuration with the $n$-th body removed, and the $(n+1)\times(n+1)$ matrix
$\bordered{Q_n}$. Because of \ref{lemma:crisscross},
the pfaffian $\operatorname{Pf} Q_{n-1}$ is non-zero if and only if the pfaffian
of the corresponding $\bordered{\tilde Q_{n-1}}$ is non-zero. But
$\tilde Q_{n-1}$ is
an $(n-2)\times(n-2)$ matrix.
So, for odd $n$ the existence of solutions to \eqref{eq:main1d} follows
from the calculation of pfaffians of the even-dimensional matrices $\bordered{Q_{n}}$
and $\bordered{\tilde Q_{n-1}}$
(the existence of solutions for $n=5$ was proven in Theorem 2.6 of
\cite{Xieanalyticalproofcertain2014} in a different way).
On the other hand, let $n$ be even, and $\simbolovettore{q}$ a configuration and $Q_n$ as above.
By \ref{lemma:crisscross} the existence of solutions to \eqref{eq:main1d} follows
from the calculation of the pfaffian of $\bordered{\tilde Q_{n}}$,
where $\tilde Q_n$ is a matrix with odd size.
\begin{theo}
For all $\alpha>0$, and any $n\leq 6$,
the pfaffian of $Q$ (for even $n$) or of $\bordered{Q}$ (for odd $n$)
is non-zero,
hence for each configuration $\simbolovettore{q}$ equation \eqref{eq:main1d} has solutions
with real masses $m_j$.
\end{theo}
\begin{proof}
By lemma \ref{lemma:crisscross}, as explained before, the pfaffian of the matrix
corresponding to a collinear configuration $\simbolovettore{q}\in \conf{n}{\mathbb{R}}$
with $n$ even is non-zero whenever the pfaffian of the bordered
matrix $\bordered{Q}$ corresponding to a collinear configuration of $n-1$ bodies is non-zero.
For $n=5$ one can apply \eqref{eq:pfaff1} and obtain,
given that $\bordered{Q}_{j6} = 1$ for $j=1,\ldots, 5$,
\[
\begin{aligned}
\operatorname{Pf} \bordered{Q} & =
Q[\hat 1, 2,3,4,5]
- Q[1,\hat 2, 3,4,5] \\
& + Q[1,2,\hat 3, 4,5]
- Q[1,2,3,\hat 4, 5]
+ Q[1,2,3,4,\hat 5].
\end{aligned}
\]
Without loss of generality one can assume $q_1>q_2>\ldots > q_5$:
since by lemma \ref{lemma:Pf'>0} the pfaffian $Q[2,3,4,5]$ is decreasing in $q_2$, and $q_1>q_2$,
one has $Q[1,3,4,5] < Q[2,3,4,5]$;
since $Q[1,2,3,4]$ is increasing in $q_4$, and $q_4>q_5$,
$Q[1,2,3,4] > Q[1,2,3,5]$. Therefore $\operatorname{Pf} \bordered{Q} > Q[1,2,\hat 3, 4,5]$,
which is strictly positive by \ref{lemma:Pf>0}.
\end{proof}
\begin{remark}
Such a nice argument, introduced already by Xie in
\cite{Xieanalyticalproofcertain2014}, unfortunately does not work as it is for $n>6$:
when $n\geq 8$, in the (symmetric) sum of seven or more terms only the two
consecutive terms at each endpoint
can be estimated by monotonicity. It is very interesting that, at least for $\alpha=1$,
when the pfaffian is a rational function of the mutual distances,
it is possible to prove its positivity by checking that \emph{all the coefficients}
of the corresponding polynomial are positive. This was found by Albouy and Moeckel in
\cite{AlbouyInverseProblemCollinear2000}: in the following we show how we computed
the polynomial for $n=8$ and $10$, finding that it has all positive coefficients.
It is maybe worth noting that in the notation of
\cite{AlbouyInverseProblemCollinear2000} the following equalities
hold: if $n=2k$ then
$ K_n = k! \operatorname{Pf} Q$
while if $n=2k+1$, then
$K_n^L = k! \operatorname{Pf} \bordered{Q}$.
\end{remark}
\begin{lemma}
\label{lemma:polynomial}
Let $\alpha=1$, let $\simbolovettore{q}\in \conf{n}{\mathbb{R}}$ be an ordered collinear configuration (with
$q_1>q_2>\ldots>q_n$, and as above $q_{ij}=q_i-q_j$), and let $n$ be even.
Let $P$ be the skew-symmetric matrix
defined for each $i<j$ by letting $P_{ij}$ be the product of all $q_{ab}$ with $a<b$ such that
$\{a,b\}\cap\{i,j\}\neq\emptyset$ and $(a,b)\neq(i,j)$:
\[
P_{ij} = \prod_{\substack{1\leq a<b\leq n\\ \{a,b\}\cap \{i,j\}\neq\emptyset \\ (a,b) \neq (i,j)}} q_{ab}.
\]
Its pfaffian and the pfaffian of
the anti-symmetric matrix with terms $Q_{ij} = q_{ij}^{-2}$ for $i<j$
satisfy the identity
\[
\operatorname{Pf} P = \left( \prod_{1\leq i<j\leq n} q_{ij}^2 \right) \operatorname{Pf} Q.
\]
\end{lemma}
\begin{proof}
Let $P'$ denote the matrix obtained by
multiplying the $j$-th row and column of $Q$ by the factor
$\displaystyle(-1)^{j-1}\prod_{\substack{1\leq i \leq n\\ i\neq j}} q_{ij}$, for $j=1,\ldots, n$.
It follows that
\[\begin{aligned}
\operatorname{Pf} P' & =
\left( \prod_{1\leq j \leq n} (-1)^{j-1} \prod_{\substack{1\leq i \leq n\\ i\neq j}} q_{ij}\right)
\operatorname{Pf} Q %
= %
\left(
\prod_{1\leq i<j\leq n} q_{ij}^2
\right)
\operatorname{Pf} Q\\
\end{aligned}\]
since
\[
\prod_{\substack{1\leq i \leq n\\ i\neq j}} q_{ij} =
\left( \prod_{1\leq i < j} q_{ij} \right)
\left( \prod_{j<i\leq n} q_{ij} \right)
=
(-1)^{n-j}
\left( \prod_{1\leq i < j} q_{ij} \right)
\left( \prod_{j<i\leq n} q_{ji} \right).
\]
This implies also that
the $ij$-entry of $P'$ is equal to
\[
\begin{aligned}
P'_{ij} & = q_{ij}^{-2}
\left( \prod_{\substack{1\leq a<b\leq n\\ i\in \{a,b\} } } q_{ab} \right)
\left( \prod_{\substack{1\leq a<b\leq n\\ j\in \{a,b\} } } q_{ab} \right) %
= P_{ij}~.
\end{aligned}
\qedhere
\]
\end{proof}
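For $n=4$ and a concrete rational configuration, the identity of the lemma can be verified exactly. The Python sketch below (ours, illustrative) builds $P$ from its combinatorial definition and compares $\operatorname{Pf} P$ with $\left(\prod_{i<j} q_{ij}^2\right)\operatorname{Pf} Q$.

```python
from fractions import Fraction
from itertools import combinations

def pf(A):
    # Pfaffian via the recursive Laplace-type expansion
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0
    return sum((-1) ** j * A[j][n - 1]
               * pf([[A[r][c] for c in range(n - 1) if c != j]
                     for r in range(n - 1) if r != j])
               for j in range(n - 1))

q = [Fraction(v) for v in (5, 2, 1, -1)]     # ordered configuration, alpha = 1
n = len(q)
pairs = list(combinations(range(n), 2))
qd = {p: q[p[0]] - q[p[1]] for p in pairs}   # mutual differences q_ab > 0, a < b

# P_ij = product of all q_ab (a < b) meeting {i, j}, except q_ij itself
P = [[Fraction(0)] * n for _ in range(n)]
Q = [[Fraction(0)] * n for _ in range(n)]
for (i, j) in pairs:
    v = Fraction(1)
    for (a, b) in pairs:
        if (a, b) != (i, j) and {a, b} & {i, j}:
            v *= qd[(a, b)]
    P[i][j], P[j][i] = v, -v
    Q[i][j] = qd[(i, j)] ** -2
    Q[j][i] = -Q[i][j]

prod_sq = Fraction(1)
for p in pairs:
    prod_sq *= qd[p] ** 2

assert pf(P) == prod_sq * pf(Q)   # the identity of the lemma, exactly
```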
\begin{remark}
\label{remark:polynomial}
For even $n$, if the matrix $P$ of \ref{lemma:polynomial} is computed starting
from
the matrix $\bordered{\tilde Q}$
of \ref{lemma:crisscross} instead of $Q$, it can be renamed $\tilde P$:
its pfaffian is a polynomial in the $n-2$ variables
$\tilde {x}_j=\tilde q_{j}-\tilde q_{j+1}=\tilde q_{j,j+1}$ for $j=1,\ldots, n-2$,
where $\tilde q_j = -q_{jn}^{-1}$,
and for each $1\leq i<j < n$ the equality $\tilde q_{ij} = \frac{q_{ij}}{q_{in}{q_{jn}}}$ holds,
and $\tilde q_{in} = 1$.
Note that $\tilde q_n$ is not defined,
and $\tilde q_{in}$ is \emph{not} $\tilde q_{i} - \tilde q_{n}$;
hence
$\tilde q_{in} = 1$,
for $i=1,\ldots, n-1$, does not imply $\tilde q_1 = \ldots = \tilde q_{n-1}$.
\end{remark}
\begin{theo}
\label{theo:code}
The pfaffian of the matrix $P$, defined in \ref{lemma:polynomial}, is a polynomial
with non-negative integer coefficients, for each even $n\leq 8$,
with respect to the variables $x_1,\ldots, x_{n-1}$,
defined as $x_j=q_{j}-q_{j+1}=q_{j,j+1}$ for $j=1,\ldots, n-1$.
The pfaffian of the matrix $\tilde P$, defined in \ref{remark:polynomial},
is a polynomial with non-negative integer coefficients, for each even $n\leq 10$,
with respect to the variables $\tilde x_1,\ldots, \tilde x_{n-2}$,
defined as $\tilde x_j=\tilde q_{j}-\tilde q_{j+1}=\tilde q_{j,j+1}$ for $j=1,\ldots, n-2$,
where $\tilde q_j = - q_{jn}^{-1}$.
As a consequence, for each even $n\leq 10$ the pfaffian of $Q$ is positive.
\end{theo}
\begin{proof}[Proof (computer assisted)]
The proof is a direct computer computation, performed with computer algebra systems.
The output numbers for the first cases are as follows.
For $P$:
$n=4$: minimum of coefficients $=1$, maximum of coefficients $=19$.
Total of 25 non-zero coefficients in the $n-1$ variables $x_1,x_2,x_3$.
$n=6$:
minimum of coefficients $=1$, maximum of coefficients =
6217712.
Polynomial of degree 24 in 5 variables with 7993 non-zero coefficients.
$n=8$:
minimum of coefficients $=1$, maximum of coefficients =
1974986029814430328.
Polynomial of degree 48 in 7 variables with
8863399
non-zero coefficients.
For $\tilde P$:
$n=4$: minimum of coefficients $=1$, maximum of coefficients=2.
Total of 5 non-zero coefficients in the $n-2$ variables $\tilde x_1,
\tilde x_2$.
The pfaffian is the polynomial of degree $(n-2)^2 = 4$
\[
\tilde x_1^4 + 2 \tilde x_1^3 \tilde x_2 + \tilde x_1^2 \tilde x_2^2 + 2 \tilde x_1 \tilde x_2^3 + \tilde x_2^4.
\]
$n=6$: minimum of coefficients $=1$, maximum of coefficients = 3018.
Total of 519 non-zero coefficients in the $4$ variables of degree $(n-2)^2=16$.
$n=8$: minimum of coefficients $=1$, maximum of coefficients = 922577565632.
Total of 306016 non-zero coefficients in the $n-2$ variables.
Degree = $(n-2)^2 = 36$.
If $n=10$, then the number of perfect matchings is
$\dfrac{10!}{2^{5}\, 5!}=945$: for each one a polynomial of degree $64$ in
8 variables is added. So, in principle, even computations with dense multivariate
polynomials with integer coefficients could fit into the memory of a normal computer.
The minimum of the coefficients is $=1$, the maximum is 818182204944918819340996488.
There are a total of 488783941 non-zero coefficients (the runtime was approximately 10 days).
\end{proof}
For $n=12$, an empirical estimate of the time needed to perform the calculation
with this algorithm would be of the order of 4-5 years on the same computer.
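For $n=4$ the whole computation fits in a few lines. The Python sketch below (ours, a toy version of the computer-algebra runs described in the proof) expands $\operatorname{Pf} P$ as a polynomial in $x_1,x_2,x_3$ and recovers the counts reported above: 25 non-zero coefficients, minimum 1, maximum 19.

```python
from itertools import product, combinations

# sparse polynomials in x1, x2, x3: dict {(e1, e2, e3): integer coefficient}
def pmul(p, q):
    r = {}
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        e = tuple(a + b for a, b in zip(e1, e2))
        r[e] = r.get(e, 0) + c1 * c2
    return {e: c for e, c in r.items() if c}

def padd(p, q, sign=1):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + sign * c
    return {e: c for e, c in r.items() if c}

x = [{(1, 0, 0): 1}, {(0, 1, 0): 1}, {(0, 0, 1): 1}]
# mutual differences q_ij = x_i + ... + x_{j-1}, for 1 <= i < j <= 4
qd = {}
for i, j in combinations(range(1, 5), 2):
    s = {}
    for k in range(i - 1, j - 1):
        s = padd(s, x[k])
    qd[(i, j)] = s

def P(i, j):
    # product of all q_ab (a < b) meeting {i, j}, except q_ij itself
    r = {(0, 0, 0): 1}
    for (a, b), poly in qd.items():
        if (a, b) != (i, j) and {a, b} & {i, j}:
            r = pmul(r, poly)
    return r

# Pf P = P12 P34 - P13 P24 + P14 P23  (the three perfect matchings of {1,2,3,4})
pfP = padd(padd(pmul(P(1, 2), P(3, 4)), pmul(P(1, 3), P(2, 4)), -1),
           pmul(P(1, 4), P(2, 3)))

coeffs = list(pfP.values())
assert all(c > 0 for c in coeffs)
assert (len(coeffs), min(coeffs), max(coeffs)) == (25, 1, 19)
```

The same dictionary representation scales, in principle, to the $n=8$ and $n=10$ runs, although a serious implementation would need the optimizations hinted at in the text.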
\section{Positive masses}
Consider now the inverse problem with real and positive masses:
let $X_0 \subset E^n=\mathbb{R}^n$ be the subset
$X_0 = \{ \simbolovettore{q} \in E^n : \sum_{j=1}^n q_j = 0 \}$,
which is the orthogonal complement of $\simbolovettore{L}$ in $E^n$.
The $n$ columns of the anti-symmetric matrix $Q$ (which can
be denoted as $\simbolovettore{Q}_1,\ldots, \simbolovettore{Q}_n$) generate a subspace of dimension $n$
(for even $n$) or $n-1$ (for odd $n$) in $E^n$.
Let $\Pi$ denote the orthogonal projection of $E^n$ onto $X_0$:
then if $\simbolovettore{x}\in X_0$, equation \eqref{eq:main1d} is equivalent to
\begin{equation}\label{eq:round1}
Q(\simbolovettore{x}) \simbolovettore{m} + c \simbolovettore{L} = \simbolovettore{x} \iff \simbolovettore{x} = \Pi Q(\simbolovettore{x}) \simbolovettore{m}.
\end{equation}
In fact, if $\simbolovettore{x} = Q(\simbolovettore{x}) \simbolovettore{m} + c \simbolovettore{L} $, then by projecting
one obtains $\Pi \simbolovettore{x} = \simbolovettore{x} = \Pi Q(\simbolovettore{x}) \simbolovettore{m}$ since $\Pi \simbolovettore{L} = \boldsymbol{0}$.
Conversely, if $\simbolovettore{x} = \Pi Q(\simbolovettore{x}) \simbolovettore{m}$,
then $\Pi Q(\simbolovettore{x})\simbolovettore{m} - Q(\simbolovettore{x})\simbolovettore{m} \in \ker \Pi =\operatorname{Span}(\simbolovettore{L})$, since $\Pi^2=\Pi$, and hence
there exists $c\in \mathbb{R}$ such that $\Pi Q(\simbolovettore{x})\simbolovettore{m} - Q(\simbolovettore{x}) \simbolovettore{m} = c\simbolovettore{L}$,
that is $\simbolovettore{x} = Q(\simbolovettore{x}) \simbolovettore{m} + c\simbolovettore{L}$.
For a different set of variables, see
Ouyang--Xie \cite{OuyangCollinearCentralConfiguration2005} (for $n=4$
bodies and $\alpha=1$) and Davis et al. \cite{DavisInverseproblemcentral2018a} (for $n=5$
bodies and $\alpha=1$); for the general problem with positive masses,
see again \cite{AlbouyInverseProblemCollinear2000}.
Now, define the following coefficients, for $i=1,\ldots, n$ and $j=0,\ldots, n-1$:
\begin{equation}
\label{eq:betas}
\beta_{ij} =\begin{cases}
1 & \text{ if } j=0~; \\
1-\frac{j}{n} & \text{ if } i\leq j~; \\
-\frac{j}{n} & \text{ if } i > j~.
\end{cases}
\end{equation}
Consider the $n$ variables $x_0,x_1,\ldots, x_{n-1}$, where as above
$x_j = q_j - q_{j+1}$ for $j=1,\ldots, n-1$,
and $x_0 = \dfrac{1}{n}(q_1+\ldots+q_n)$.
Note that for each $i=1,\ldots, n$ and $j=2,\ldots, n-1$ one has
\[
\beta_{ij}-\beta_{i,j-1} =
\begin{cases}
-\frac{j}{n} + \frac{j-1}{n} = -\frac{1}{n} & \text{ if } j < i \\
\frac{n-1}{n} & \text{ if } j = i \\
-\frac{1}{n} & \text{ if } j > i \\
\end{cases}
\]
and therefore,
since $\beta_{i0}=1$, for each $i=1,\ldots, n$
the following identities hold:
\begin{equation}
\label{eq:betas2}
q_i = \sum_{j=0}^{n-1} \beta_{ij} x_j~,
\qquad\text{where}\quad x_0=\frac{1}{n}\sum_{i=1}^{n} q_i
\quad\text{and}\quad x_j = q_j - q_{j+1} \text{ for } j>0.
\end{equation}
Equation \eqref{eq:betas2} can be written in matrix form as follows.
\begin{lemma}
\label{lemma:betas2}
Let $B$ be the matrix with coefficients $b_{ij} = \beta_{i,j-1}$
defined above,
$\simbolovettore{x}$ the column vector with components $x_0,\ldots, x_{n-1}$
and $\simbolovettore{q}$ the column vector with components $q_1,\ldots, q_n$.
Then $B$ is an invertible matrix such that $\simbolovettore{q} = B \simbolovettore{x}$.
\end{lemma}
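The lemma can be checked mechanically. A short Python sketch (ours, illustrative) with exact rationals verifies $\simbolovettore{q} = B\simbolovettore{x}$ entrywise for a sample configuration with $n=5$.

```python
from fractions import Fraction

n = 5
q = [Fraction(v) for v in (7, 4, 2, -1, -3)]   # any configuration works

def beta(i, j):
    # the coefficients of the definition of beta_ij; here i = 1..n, j = 0..n-1
    if j == 0:
        return Fraction(1)
    return Fraction(n - j, n) if i <= j else Fraction(-j, n)

# x_0 = mean of the q_i; x_j = q_j - q_{j+1} for j = 1..n-1
x = [sum(q) / n] + [q[j] - q[j + 1] for j in range(n - 1)]

# q = B x, entrywise
for i in range(1, n + 1):
    assert sum(beta(i, j) * x[j] for j in range(n)) == q[i - 1]
```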
Given equation \eqref{eq:round1}, and the permutation symmetries of the potential,
we can restrict the problem to the cone
\[
X_0^+ = \{ \simbolovettore{q} \in X_0 : q_1>q_2> \ldots > q_n\},
\]
which in coordinates $\simbolovettore{x}$ can be written as
\[
X_0^+ = \{ \simbolovettore{x} : x_0 = 0, x_i>0, i=1,\ldots, n-1\}.
\]
In such coordinates, equation \eqref{eq:round1} is transformed into
\begin{equation}
\label{eq:round2}
x_i = (B^{-1}Q\simbolovettore{m})_i, \quad i=1,\ldots, n-1,
\end{equation}
with suitable substitutions in the expressions of $Q$.
For example, if $n=3$ one has to consider only the second and third rows of the
following equation
\[
\begin{aligned}
\begin{bmatrix}
x_0\\x_1\\x_2
\end{bmatrix}
&
=
\begin{bmatrix}
1/3 & 1/3 & 1/3 \\
1 & -1 & 0 \\
0 & 1 & -1
\end{bmatrix}
\begin{bmatrix}
0 & Q_{12} & Q_{13} \\
-Q_{12} & 0 & Q_{23} \\
-Q_{13} & -Q_{23} & 0
\end{bmatrix}
\begin{bmatrix}
m_1\\m_2\\m_3
\end{bmatrix}
,
\end{aligned}
\]
which turns out to be
\[
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
Q_{12} &
Q_{12} &
Q_{13} - Q_{23}
\\
-Q_{12} +Q_{13} &
Q_{23} &
Q_{23}
\\
\end{bmatrix}
\begin{bmatrix}
m_1\\m_2\\m_3
\end{bmatrix}~.
\]
As above, $Q_{ij} = q_{ij}^{-\alpha-1}$, for $i<j$, and hence
the last equation can be written as
\[
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}
=
\begin{bmatrix}
x_1^{-\alpha-1} &
x_1^{-\alpha-1} &
(x_1+x_2)^{-\alpha-1} - x_2^{-\alpha-1}
\\
-x_1^{-\alpha-1} + (x_1+x_2)^{-\alpha-1}
&
x_2^{-\alpha-1} &
x_2^{-\alpha-1}
\\
\end{bmatrix}
\begin{bmatrix}
m_1\\m_2\\m_3
\end{bmatrix}~.
\]
Another way of writing equation \eqref{eq:round2} is as follows: if now $\simbolovettore{x}$ denotes
the $(n-1)$-dimensional vector of positive coordinates $x_1,\ldots, x_{n-1}>0$,
\begin{equation}
\label{eq:round3}
\simbolovettore{x} = \sum_{k=1}^n m_k \simbolovettore{Y}_k \text{ with } m_k >0 ,
\end{equation}
where $\simbolovettore{Y}_k$ is the $(n-1)$-dimensional vector with components
\(
Y_{ik} = Q_{i,k} - Q_{i+1,k}
\)
for $i=1,\ldots, n-1$ and $k=1,\ldots, n$.
Given that for each $k$
\[
\sum_{i=1\ldots n-1} Y_{ik} =
\sum_{i=1\ldots n-1} (Q_{i,k}-Q_{i+1,k}) =
Q_{1,k} + Q_{k,n} > 0~,
\]
both $\simbolovettore{x}$ and all the $\simbolovettore{Y}_k$ belong to the half-space $x_1+x_2+\ldots + x_{n-1}>0$,
and can be centrally projected on the hyperplane
$x_1+x_2+\ldots + x_{n-1} = 1$.
Let $\Delta^{n-2}$ denote the standard euclidean simplex in coordinates $x_i$,
and $X_1$ the affine subspace $X_1 = \{ \simbolovettore{x} \in X_0 : x_1+x_2\ldots + x_{n-1} = 1 \}$.
Let $p$ denote the central projection $p(\simbolovettore{x}) = \dfrac{\simbolovettore{x}}{\sum_{i=1\ldots n-1}x_i } $,
partially defined as $p\colon X_0 \to X_1$.
\begin{lemma}
\label{lemma:sumYikpositive}
The vector $\simbolovettore{x}$ is a solution of \eqref{eq:round3}
if and only if its projection $p(\simbolovettore{x})$ is a solution of
\begin{equation}
\label{eq:round4}
\simbolovettore{x} = \sum_{k=1}^n m'_k p(\simbolovettore{Y}_k) \text{ with } m'_k >0,
\end{equation}
with $\sum_{k} m_k =1$ and $\simbolovettore{x}\in X_1$.
\end{lemma}
\begin{proof}
As we have seen, $p$ is well defined on $\simbolovettore{x}$ (since all $x_j$ are positive) and
on all $\simbolovettore{Y}_k$.
If
\(
\simbolovettore{x} = \sum_{k=1}^n m_k \simbolovettore{Y}_k(\simbolovettore{x})
\)
then by homogeneity if we let $\lambda_0=x_1+\ldots+x_{n-1}$
and $\lambda_k =
Q_{1,k} + Q_{k,n} > 0$ for each $k$,
\[
\begin{aligned}
\sum_{k=1}^n m'_k p( \simbolovettore{Y}_k(\lambda_0^{-1} \simbolovettore{x}) ) & =
\sum_{k=1}^n m'_k p( \simbolovettore{Y}_k(\simbolovettore{x}) ) =
\sum_{k=1}^n m'_k \lambda_k^{-1} \simbolovettore{Y}_k(\simbolovettore{x}) \\
\implies
\sum_{k=1}^n m'_k p( \simbolovettore{Y}_k(\lambda_0^{-1} \simbolovettore{x}) ) & = p(\simbolovettore{x}) = \lambda_0^{-1}\simbolovettore{x} \iff
m'_k \lambda_k^{-1} \lambda_0 = m_k.
\end{aligned}
\]
Now, if $\simbolovettore{x}$ and all $\simbolovettore{Y}_k$ belong to $X_1$,
\[
1 = \sum_{j=1}^{n-1} x_j =
\sum_{j=1}^{n-1}\sum_{k=1}^n m_k Y_{jk} =
\sum_{k=1}^n
m_k \sum_{j=1}^{n-1}
Y_{jk} =
\sum_{k=1}^n m_k .
\]
\end{proof}
We can summarize the above facts in the following theorem.
\begin{theo}
\label{theo:mainpositive}
Let $f\colon \Delta^{n-2} \multimap X_1$ be the multi-valued map defined as follows:
$f(\simbolovettore{x})=\operatorname{CH}[p(\simbolovettore{Y}_1(\simbolovettore{x})),\ldots, p(\simbolovettore{Y}_n(\simbolovettore{x}))] $ is
the convex hull of the projections in $X_1$ of the $n$ points $\simbolovettore{Y}_1,\ldots, \simbolovettore{Y}_n$.
Then $\simbolovettore{x} \in f(\simbolovettore{x})$ if and only if any corresponding configuration
$\simbolovettore{q}$ solves the inverse central configuration problem.
\end{theo}
\begin{example}
The case $n=3$, as expected, is rather simple: given that $x_1+x_2=1$, the matrix $Y$ turns
out to be
\[
\begin{bmatrix}
x_1^{-\alpha-1} &
x_1^{-\alpha-1} &
1 - x_2^{-\alpha-1}
\\
1 -x_1^{-\alpha-1}
&
x_2^{-\alpha-1} &
x_2^{-\alpha-1}
\\
\end{bmatrix},
\]
and the projections $p(\simbolovettore{Y}_k)$ on $X_1$ are the columns of the following matrix
\[
\begin{bmatrix}
x_1^{-\alpha-1} &
\dfrac{x_1^{-\alpha-1}}{x_1^{-\alpha-1} + x_2^{-\alpha-1}} &
1 - x_2^{-\alpha-1}
\\
1 -x_1^{-\alpha-1}
&
\dfrac{x_2^{-\alpha-1}}{x_1^{-\alpha-1} + x_2^{-\alpha-1}} &
x_2^{-\alpha-1}
\\
\end{bmatrix}~.
\]
Given that for each $x_1\in (0,1)$
\[
1-x_2^{-\alpha-1} < 0< x_1 < 1 < x_1^{-\alpha-1},
\]
for each $\simbolovettore{x}=(x_1,x_2)\in \Delta^{1}$
one has
$\simbolovettore{x} \in \operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_3] \subset f(\simbolovettore{x})$,
and hence there are positive masses solving the inverse central configuration problem.
\end{example}
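The three-body case can also be verified end to end. The Python sketch below (ours, with $\alpha=1$ and exact rationals) picks $x_1\in(0,1)$, solves $\simbolovettore{x}=m_1\simbolovettore{Y}_1+m_3\simbolovettore{Y}_3$ on the segment $\operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_3]$, reconstructs the centered configuration, and checks that $Q\simbolovettore{m}+c\simbolovettore{L}=\simbolovettore{q}$ for some $c$, with non-negative masses.

```python
from fractions import Fraction

# alpha = 1, n = 3; any x1 in (0, 1), with x2 = 1 - x1
x1 = Fraction(1, 3)
x2 = 1 - x1

# first components of the columns Y_1 and Y_3 (both lie on the line X_1)
y1 = x1 ** -2        # = x1^{-alpha-1}
y3 = 1 - x2 ** -2

# solve x1 = m1*y1 + m3*y3 with m1 + m3 = 1; take m2 = 0
m1 = (x1 - y3) / (y1 - y3)
m = [m1, Fraction(0), 1 - m1]
assert m[0] > 0 and m[2] > 0

# centered configuration with q1 - q2 = x1, q2 - q3 = x2, sum q_i = 0
q = [(2 * x1 + x2) / 3, (x2 - x1) / 3, (-x1 - 2 * x2) / 3]
assert q[0] - q[1] == x1 and q[1] - q[2] == x2 and sum(q) == 0

# Q_ij = q_ij |q_ij|^{-alpha-2}; the residual r = Q m - q must be constant
Q = [[Fraction(0) if i == j else (q[i] - q[j]) * abs(q[i] - q[j]) ** -3
      for j in range(3)] for i in range(3)]
r = [sum(Q[i][k] * m[k] for k in range(3)) - q[i] for i in range(3)]
assert r[0] == r[1] == r[2]    # so c = -r[0] solves Q m + c L = q
```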
\begin{example}
Consider the case $n=4$, and $\alpha>0$. The matrix $Y$,
given that $x_1+x_2+x_3=1$, is
\[\scriptsize
\begin{aligned}
Y & =
\begin{bmatrix}
Q_{11} - Q_{21} & Q_{12} - Q_{22} & Q_{13}-Q_{23} & Q_{14}-Q_{24} \\
Q_{21} - Q_{31} & Q_{22} - Q_{32} & Q_{23}-Q_{33} & Q_{24}-Q_{34} \\
Q_{31} - Q_{41} & Q_{32} - Q_{42} & Q_{33}-Q_{43} & Q_{34}-Q_{44} \\
\end{bmatrix}\\
&=
\begin{bmatrix}
x_1^{-\alpha-1} & x_{1}^{-\alpha-1} & (x_1+x_2)^{-\alpha-1} - x_2^{-\alpha-1} & 1 -(x_2+x_3)^{-\alpha-1} \\
-x_1^{-\alpha-1} + (x_1+x_2)^{-\alpha-1} &
x_2^{-\alpha-1} & x_2^{-\alpha-1} & (x_2+x_3)^{-\alpha-1} -x_3^{-\alpha-1} \\
1 - (x_1+x_2)^{-\alpha-1} & -x_2^{-\alpha-1} + (x_2+x_3)^{-\alpha-1} & x_3^{-\alpha-1} & x_3^{-\alpha-1} \\
\end{bmatrix}\\
\end{aligned}
\]
The projections on $X_1$ are
\[
p(\simbolovettore{Y}_1) = \simbolovettore{Y}_1, \quad p(\simbolovettore{Y}_4) = \simbolovettore{Y}_4
\]
and
\[
p(\simbolovettore{Y}_2) = \frac{\simbolovettore{Y}_2 }{ x_1^{-\alpha-1} +(x_2+x_3)^{-\alpha-1}}, \quad
p(\simbolovettore{Y}_3) = \frac{\simbolovettore{Y}_3 }{ x_3^{-\alpha-1} +(x_1+x_2)^{-\alpha-1}}.
\]
Note that the second components of $p(\simbolovettore{Y}_1)$ and $p(\simbolovettore{Y}_4)$ are negative:
\[
-x_1^{-\alpha-1} + (x_1+x_2)^{-\alpha-1} < 0, \quad
(x_2+x_3)^{-\alpha-1} -x_3^{-\alpha-1} < 0 .
\]
The second components of $p(\simbolovettore{Y}_2)$ and $p(\simbolovettore{Y}_3)$ are
\[
\dfrac{ x_2^{-\alpha-1} } { x_1^{-\alpha-1} +(x_2+x_3)^{-\alpha-1} } \quad \text{ and }
\dfrac{ x_2^{-\alpha-1} } { x_3^{-\alpha-1} +(x_1+x_2)^{-\alpha-1} }.
\]
If $x_2>\frac{1}{2}$, then $x_2^{-\alpha-1} < 2^{\alpha+1}$; since $x_1+x_2+x_3=1$,
and by convexity
\[
\begin{aligned}
x_1^{-\alpha-1} +(x_2+x_3)^{-\alpha-1} & = x_1^{-\alpha-1} + (1-x_1)^{-\alpha-1} > 2^{\alpha+2} \\
x_3^{-\alpha-1} +(x_1+x_2)^{-\alpha-1} & = x_3^{-\alpha-1} + (1-x_3)^{-\alpha-1} > 2^{\alpha+2}. \\
\end{aligned}
\]
Hence, if $x_2>1/2$ the second components of $p(\simbolovettore{Y}_2)$ and $p(\simbolovettore{Y}_3)$
satisfy the inequalities
\[
\begin{aligned}
\dfrac{ x_2^{-\alpha-1} } { x_1^{-\alpha-1} +(x_2+x_3)^{-\alpha-1} } & <
\dfrac{2^{\alpha+1}}{2^{\alpha+2}} = 2^{-1} \\
\dfrac{ x_2^{-\alpha-1} } { x_3^{-\alpha-1} +(x_1+x_2)^{-\alpha-1} } & < 2^{-1}~.
\end{aligned}
\]
But this means that for any $\simbolovettore{x}$ with $x_2>1/2$, the second component of $p(\simbolovettore{Y}_k)$ is smaller
than $1/2$ for each $k$, and hence
$\simbolovettore{x} \not\in \operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_3,\simbolovettore{Y}_4]$: the
inverse problem does not have solutions in this region.
For $\alpha=1$, a plot of the region where the inverse problem has solutions is
represented in figure \ref{fig:main}. The four simplices are represented in figure \ref{fig:four}.
The plane $x_1+x_2+x_3=1$ is projected to the $x_1x_2$-plane.
The symmetry $(x_1,x_2,x_3) \mapsto (x_3,x_2,x_1)$, which comes from the
symmetry $(q_1,\ldots, q_n) \mapsto (-q_n,\ldots, -q_1)$ %
is projected to the affine reflection $(x_1,x_2) \mapsto (1-x_1-x_2,x_2)$.
Note that if $x_1>\frac{1}{2}$, then $x_2+x_3<1/2$, hence $(x_2+x_3)^{-\alpha-1} >2^{\alpha+1}$,
and the following inequalities hold true:
\begin{equation}\label{eq:inequals}
\begin{aligned}
(x_1+x_2)^{-\alpha-1} & < x_1^{-\alpha-1} \\
(x_2+x_3)^{-\alpha-1} & < x_3^{-\alpha-1} \\
(x_2+x_3)^{-\alpha-1} & > 1 \\
x_1^{-\alpha-1} - (x_2+x_3)^{-\alpha-1} & = x_1^{-\alpha-1} -(1-x_1)^{-\alpha-1} < 0 \\
\end{aligned}
\end{equation}
Now write the projections $p(\simbolovettore{Y}_1), p(\simbolovettore{Y}_2), p(\simbolovettore{Y}_4)$ in barycentric coordinates
with respect to the affine frame $P_1'=(1,0,0)$, $P_2'=(1/2,1/2,0)$, $P_3'=(1/2,0,1/2)$ in $X_1$:
\[
\begin{aligned}
p(\simbolovettore{Y}_1) & = \simbolovettore{Y}_1 =
(2 x_1^{-\alpha-1} - 1)
\begin{bmatrix}1\\0\\0 \end{bmatrix} +
2 ((x_1+x_2)^{-\alpha-1} - x_1^{-\alpha-1} )
\begin{bmatrix}0\\1\\0 \end{bmatrix} + \\& +
2(1-(x_2+x_3)^{-\alpha-1})
\begin{bmatrix}0\\0\\1 \end{bmatrix} ;
\\
p(\simbolovettore{Y}_4) & = \simbolovettore{Y}_4 =
(1- 2 (x_2+x_3)^{-\alpha-1} )
\begin{bmatrix}1\\0\\0 \end{bmatrix} + \\& +
2 ((x_2+x_3)^{-\alpha-1} - x_3^{-\alpha-1} )
\begin{bmatrix}0\\1\\0 \end{bmatrix} +
2(x_3^{-\alpha-1})
\begin{bmatrix}0\\0\\1 \end{bmatrix} ;
\\
\lambda
p(\simbolovettore{Y}_2) & = \simbolovettore{Y}_2
=
(x_1^{-\alpha-1} - (x_2+x_3)^{-\alpha-1} )
\begin{bmatrix}1\\0\\0 \end{bmatrix} + \\&+
2 (x_2^{-\alpha-1})
\begin{bmatrix}0\\1\\0 \end{bmatrix} +
2(-x_2^{-\alpha-1} + (x_2+x_3)^{-\alpha-1})
\begin{bmatrix}0\\0\\1 \end{bmatrix} ;
\\
\end{aligned}
\]
where $\lambda= {x_2^{-\alpha-1} + (x_2+x_3)^{-\alpha-1} } > 0$.
Now, by inequalities \eqref{eq:inequals}, the signs of the barycentric coordinates
are $\simbolovettore{Y}_1 \mapsto (+,-,-)$, $\simbolovettore{Y}_2 \mapsto (-,+,-)$, $\simbolovettore{Y}_4 \mapsto (-,-,+)$,
and hence the $2$-simplex $\sigma$ with vertices $P'_1$, $P'_2$ and $P'_3$ is contained in
$\operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_4]$, which means that
the inverse problem has solutions for each $\simbolovettore{x}\in \sigma$.
In fact, consider the $3\times 3$ matrix whose columns are the coordinates of
$p\simbolovettore{Y}_1$, $p\simbolovettore{Y}_2$, $p\simbolovettore{Y}_4$. It is of the form
\[
A =
\begin{bmatrix}
a_{11} & -a_{12} & -a_{13} \\
-a_{21} & a_{22} & -a_{23} \\
-a_{31} & -a_{32} & a_{33} \\
\end{bmatrix}
\]
where the entries of each column sum to $1$.
Hence, if $D$ is the matrix with diagonal $(a_{11},a_{22},a_{33})$,
$A=(I-\tilde A) D$, where $\tilde A$ is
\[
\tilde A =
\begin{bmatrix}
0 & b_{12} & b_{13} \\
b_{21} & 0 & b_{23} \\
b_{31} & b_{32} & 0 \\
\end{bmatrix}
\]
with all $b_{ij}>0$ and each column of $\tilde A$ summing to less than $1$.
Therefore $A^{-1} = D^{-1} \sum_{k=0}^\infty \tilde A^k$; the series converges since $\norm{\tilde A}_1 <1$,
and all entries of $A^{-1}$ are positive.
This implies that, for each $\simbolovettore{x}$ with $x_1>1/2$, the vertices of the
triangle $x_1>1/2$ are in the interior of the $2$-simplex
$\operatorname{CH}[p\simbolovettore{Y}_1,p\simbolovettore{Y}_2,p\simbolovettore{Y}_4]$ (because their barycentric coordinates are proportional
to the columns of $A^{-1}$).
\end{example}
\begin{remark}
Because of the homogeneity, one can use the following procedure to check if $\simbolovettore{x} \in \operatorname{CH}[p\simbolovettore{Y}_1,\ldots, p\simbolovettore{Y}_n]$:
for each $j=1\ldots n$, compute the inverse $C_j^{-1}$ of the square matrix $C_j$ of order $n-1$
obtained by removing the first row and the $j$-th column of the matrix $B^{-1}Q$ (written
in terms of coordinates $x_i$). Then $\simbolovettore{x}\in X_1$ satisfies
$\simbolovettore{x} \in \operatorname{CH}[p\simbolovettore{Y}_1,\ldots, \widehat{p\simbolovettore{Y}_j}, \ldots, p\simbolovettore{Y}_n]$ (with the $j$-th
entry removed) if and only if the vector
$C_j^{-1} \simbolovettore{x}$ has all $n-1$ components positive,
which correspond to multiples of the barycentric coordinates of $\simbolovettore{x}$
with respect to the vertices
of $\operatorname{CH}[p\simbolovettore{Y}_1,\ldots, \widehat{p\simbolovettore{Y}_j}, \ldots, p\simbolovettore{Y}_n]$.
\end{remark}
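The membership test in the remark reduces to solving a square linear system and checking signs. The following sketch (illustrative helper name, plain NumPy; it does not use the paper's $B^{-1}Q$ matrices, only the generic barycentric criterion) checks whether a point lies in the simplex spanned by a set of kept vertices:

```python
import numpy as np

def barycentric_membership(V, x):
    """Columns of the square matrix V are the kept vertices (each with
    coordinates summing to 1); x lies on the same affine hyperplane.
    Solve V @ lam = x: x is in the simplex iff all lam are positive."""
    lam = np.linalg.solve(V, x)
    return bool(np.all(lam > 0)), lam

# toy check on the standard 2-simplex: columns of the identity are its vertices
V = np.eye(3)
inside, lam = barycentric_membership(V, np.array([1/3, 1/3, 1/3]))
```

Applying this to each matrix obtained by dropping one column of the full coordinate matrix reproduces the procedure of the remark.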
\begin{figure}
\centering
\includegraphics[width=0.24\textwidth]{mathfig1.pdf}
\caption{The region of $X_1$ where $\simbolovettore{x} \in f(\simbolovettore{x})$: $\simbolovettore{q} \in [\simbolovettore{Y}_2,\simbolovettore{Y}_3,\simbolovettore{Y}_4] \cup [\simbolovettore{Y}_1,\simbolovettore{Y}_3,\simbolovettore{Y}_4]
\cup [\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_4] \cup [\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_3]$ }
\label{fig:main}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{mathfig1p1.pdf}
\caption{$\simbolovettore{q} \in \operatorname{CH}[\simbolovettore{Y}_2,\simbolovettore{Y}_3,\simbolovettore{Y}_4]$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{mathfig1p2.pdf}
\caption{$\simbolovettore{q}\in \operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_3,\simbolovettore{Y}_4]$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{mathfig1p3.pdf}
\caption{$\simbolovettore{q}\in \operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_4]$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{mathfig1p4.pdf}
\caption{$\simbolovettore{q}\in \operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_3]$}
\end{subfigure}%
\caption{The four regions covered by the four $2$-simplices of $\operatorname{CH}[\simbolovettore{Y}_1,\simbolovettore{Y}_2,\simbolovettore{Y}_3,\simbolovettore{Y}_4]$}
\label{fig:four}
\end{figure}
\begin{theo}\label{theo:main}
Let $\simbolovettore{q} \in \conf{n}{\mathbb{R}}$ be a collinear configuration such that
$q_1>q_2>\ldots> q_n$.
If for an index $j$ with $2\leq j \leq n-2$ the inequality
$2(q_j-q_{j+1}) > q_1 - q_n$ holds true, then
the inverse problem does not
have solutions for this configuration $\simbolovettore{q}$:
no positive masses $m_j$ exist such that
$\simbolovettore{q}$ is a central configuration with respect to the masses $m_j$.
\end{theo}
\begin{proof}
The assertion follows if we prove that if for some $i$ such that $2\leq i \leq n-2$
the inequality $x_i> 1/2 $ holds for the point $\simbolovettore{x} \in X_1$
defined by the coordinates $x_i = \dfrac{q_i-q_{i+1}}{q_1-q_n}$, then $\simbolovettore{x}$ does not belong
to $\operatorname{CH}[p\simbolovettore{Y}_1,\ldots, p\simbolovettore{Y}_n]$.
In fact, consider the matrix $\bar Y$ with columns the vectors $p\simbolovettore{Y}_k$:
its coefficients are, for $j=1,\ldots, n-1$ and $k=1,\ldots, n$,
\[
\bar Y_{jk} = \dfrac{ Q_{j,k} - Q_{j+1,k}}{Q_{1k}+Q_{kn}}.
\]
If $x_i > \frac{1}{2}$ for some $2\leq i\leq n-2$,
then consider the terms
$\bar Y_{ik}$:
if $k\in \{i,i+1\}$, then
$Q_{1k} + Q_{kn} = (x_1+\ldots+x_{k-1})^{-\alpha-1} + (x_k + \ldots + x_{n-1})^{-\alpha-1} > 2 ^{\alpha+1}$
by convexity,
and $Q_{i,i+1}= x_i^{-\alpha-1} < 2 ^ {\alpha+1} $ by monotonicity;
hence the following inequalities hold
\[
\bar Y_{ik} = \dfrac{Q_{ik} - Q_{i+1,k}}{Q_{1k}+Q_{kn}} =
\begin{cases}
\dfrac{ -Q_{ki} + Q_{k,i+1}}{Q_{1k}+Q_{kn}} < 0< \frac{1}{2} & \text{ if } k< i \\
\dfrac{Q_{i,i+1}}{Q_{1i}+Q_{in}} < \frac{1}{2} & \text{ if } k=i \\
\dfrac{Q_{i,i+1}}{Q_{1,i+1}+Q_{i+1,n}} < \frac{1}{2} & \text{ if } k=i+1 \\
\dfrac{Q_{ik}-Q_{i+1,k} }{Q_{1k}+Q_{kn}} < 0 < \frac{1}{2} & \text{ if } k>i+1. \\
\end{cases}
\]
Since all the $i$-th coordinates of the points $p\simbolovettore{Y}_k$ are less than $\frac{1}{2}$,
while $x_i>\frac{1}{2}$, the point $\simbolovettore{x}$ does not belong to
$\operatorname{CH}[p\simbolovettore{Y}_1,\ldots, p\simbolovettore{Y}_n]$.
\end{proof}
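The key estimate of the proof can be checked numerically. The sketch below (illustrative names; it assumes the antisymmetric kernel $Q_{jk}=\operatorname{sign}(q_j-q_k)\,|q_j-q_k|^{-\alpha-1}$ with $Q_{jj}=0$, consistent with the sign analysis above) builds row $i$ of $\bar Y$ for a configuration with a large central gap and verifies that every entry is below $1/2$:

```python
import numpy as np

def ybar_row(q, i, alpha=1.0):
    """Row i (0-based) of the matrix Ybar for a strictly decreasing
    collinear configuration q, assuming the antisymmetric kernel
    Q_{jk} = sign(q_j - q_k) * |q_j - q_k|**(-alpha - 1), Q_{jj} = 0."""
    n = len(q)
    def Q(j, k):
        if j == k:
            return 0.0
        d = q[j] - q[k]
        return np.sign(d) * abs(d) ** (-alpha - 1.0)
    # denominator Q_{1k} + Q_{kn} is positive since q is decreasing
    return np.array([(Q(i, k) - Q(i + 1, k)) / (Q(0, k) + Q(k, n - 1))
                     for k in range(n)])

# configuration with a central gap: 2(q_2 - q_3) > q_1 - q_4, so x_2 > 1/2
q = np.array([3.0, 2.6, 0.4, 0.0])
row = ybar_row(q, i=1)   # i = 2 in the 1-based notation of the theorem
```

All entries of `row` are strictly less than $1/2$, matching the case analysis in the proof.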
\section{Introduction}
Optimization is a way of finding the best solutions to many of the problems encountered in real life. On a regular basis we encounter problems where we try to minimize effort and maximize outcome \cite{ABUALIGAH2021113609}, whether it is driving to work on a specific road at a specific time to minimize the time required to reach the destination, or decreasing speed to increase mileage. Beyond day-to-day applications, optimization is also used on a larger scale, such as designing cars to minimize wind resistance and maximize speed and handling, or designing products to minimize material cost and maximize quality and profit. A variety of optimization methods inspired by nature have been developed to solve these problems \cite{inbook, optiintro}. These algorithms can be classified into four major categories: biology-inspired/bio-inspired, swarm intelligence, socio-inspired and physics/chemistry-based \cite{Fister2013ABR}.
\begin{enumerate}
\item {\textit{Bio-inspired Intelligence Techniques}: These algorithms are inspired by biological evolution and species. The most well-known and widely used bio-inspired algorithm is the Genetic Algorithm (GA)\cite{holland1992adaptation}, based on the Darwinian theory of survival of the fittest\cite{genetic}. The algorithm relies on three important operators, mutation, crossover and selection, to approach better-quality solutions. Other examples are Covariance Matrix Adaptation Evolution Strategy (CMA-ES)\cite{10.1162/evco.2007.15.1.1}, based on basic genetic rules, Backtracking Search Algorithm (BSA)\cite{CIVICIOGLU20138121}, Evolution Strategies (ES)\cite{Beyer2004EvolutionS}, Evolutionary Programming\cite{Fogel:2011} and Differential Evolution (DE)\cite{DE,Qin2005SelfadaptiveDE}, inspired by biological evolutionary strategies such as reproduction, mutation, recombination and selection\cite{michalewicz1996evolutionary}. BSA is amongst the recently proposed algorithms; it generates a trial individual using basic genetic operators (selection, mutation and crossover). Evolutionary programming is one of the first evolutionary algorithms developed; however, it differs from the standard GA in that the focus is on the behaviour of individuals, and thus no crossover is used. DE is a population-based stochastic function minimizer that iterates a population towards a quality goal. JDE\cite{JDE}, JADE\cite{5208221} and SADE\cite{Qin2010} are recent versions of DE.}
\item {\textit{Swarm Based Intelligence Techniques}: Swarm intelligence (SI) refers to a subset of bio-inspired techniques. The individuals in the swarm collectively organize themselves to achieve a common goal \cite{IGLESIAS2020273}. Particle Swarm Optimization (PSO) is one of the popular swarm intelligence methods \cite{PSO}, inspired by the schooling behaviour of fish. PSO starts with a random initialisation of the population and searches for optima by updating generations. PSO uses parameters of social and individual behaviour, as opposed to the evolution operators used in GA. CLPSO\cite{CLPSO} and PSO2011\cite{particle} are updated versions of the standard PSO. Other examples of swarm intelligence include Cuckoo Search (CS)\cite{cuckoo}, Bat Algorithm (BA)\cite{bat}, Ant Colony Optimization (ACO)\cite{antcolony}, Firefly Algorithm (FA)\cite{yang2010nature,2013} and Artificial Bee Colony (ABC)\cite{KARABOGA2009108}. ACO is based on the excretion of pheromones by ants, which helps guide the way for other ants in the system. In FA, all the fireflies are unisexual and are attracted towards the higher intensity (brightness) of the flash signals produced, moving towards better search space and decreasing the distance between them. ABC is based on the behaviour of honey bees when discovering food sources.}
\item {\textit{Physics Based Intelligence Techniques}:
Some algorithms are nature-inspired but are based on principles of physics, such as Newton's laws \cite{Natureinspired}. Existing physics-based algorithms include Colliding Bodies Optimisation (CBO)\cite{CBO}, formulated based on Newton's laws of motion; Gravitational Search Algorithm (GSA)\cite{GSA}, Central Force Optimisation (CFO) \cite{CFO}, Space Gravitation Optimisation (SGO) \cite{SGO} and the algorithm of \cite{GIO}, formulated based on Newton's gravitational force; Big Bang--Big Crunch search (BB--BC) \cite{BBBC}, Galaxy-based Search Algorithm \cite{GBSA} and Artificial Physics-based Optimisation (APO) \cite{APO}, formulated based on celestial mechanics and astronomy; Ray Optimisation (RO) \cite{RO}, based on optics; Harmony Search Algorithm (HSA) \cite{HSA}, formulated based on acoustics; and the Simulated Annealing (SA) algorithm, based on thermodynamic principles \cite{SA}.}
\item {\textit{Socio-inspired Intelligence Techniques}:
The Cultural/Social Algorithm is a subset of evolutionary-based intelligence. In a society, humans learn from one another by following them, which eventually helps them evolve and achieve their goals together\cite{socioevo}. Based on these motives, many researchers began to develop social/socio-inspired algorithms, such as Society and Civilization Optimization Algorithm (SCO)\cite{SCO}, Imperialist Competitive Algorithm (ICA) \cite{ICA}, League Championship Algorithm (LCA) \cite{LCA}, Cultural Evolution Algorithm (CEA) \cite{CEA}, Cohort Intelligence (CI) \cite{CI}, Social Learning Optimization (SLO) \cite{SLO}, Social Group Optimization (SGO) \cite{SGO2} and Ideology Algorithm (IA), etc.}
\end{enumerate}
In this manuscript, the work is based on the competitive behaviour of individuals within a group in a competitive environment, a behaviour that has existed in human society for ages, be it at an academic or corporate level. The ultimate goal of an individual in a group is to establish his/her position, also known as rank, by competing with the other individuals within the group while moving towards promising directions.\\
This manuscript introduces a novel socio-inspired optimization algorithm referred to as LAB: a Leader-Advocate-Believer-based optimization algorithm. The individuals of the society are divided into groups and are assigned certain roles. These groups and roles guide the individuals towards their goals: each individual competes with the individuals within the corresponding group while moving towards a promising search space. LAB is motivated by this competitive trait of individuals in a group. Every group leader moves in a certain direction, which motivates the individuals to compete with it in order to lead the group towards a more promising search space. Not only does every individual compete with its associated leader, but it also competes with the individuals within its group with the goal of improving and being promoted to a higher rank; the short-term goal of an individual, however, is to get as close as possible to its local leader. Furthermore, every local group leader desires to be the global best leader; thus, it competes with the other group leaders to become the global leader while also competing with the rising individuals within its associated group. This competitive behaviour, which increases an individual's chances of improving and climbing up in the group while moving towards promising search spaces, is modelled here. This mechanism enabled LAB to solve several benchmark problems as well as real-world problems from the manufacturing domain. The performance of the LAB algorithm was better in terms of objective function value as well as computational cost compared to the existing algorithms.
The rest of the manuscript is structured as follows: Section 2 describes the methodology of the LAB algorithm with its flowchart (fig.\ref{fig:flowchart}). Section 3 discusses the benchmark test problems and real-world machining problems, as well as the individual problem formulations and a description of the processes. The performance analysis and comparison of algorithms are discussed in Section 4. Concluding remarks and future directions are provided in Section 5.
\section{LAB Algorithm}
\nomenclature{\(\textit{\textbf{P}}\)}{population of society}
\nomenclature{\(n\)}{number of individuals in each group}
\nomenclature{\(\textit{\textbf{G}}\)}{total number of groups}
\nomenclature{\(f(\textit{\textbf{X}})\)}{objective function}
\nomenclature{\(\psi^l\)}{lower bound}
\nomenclature{\(\psi^u\)}{upper bound}
\nomenclature{\(p\)}{individual}
\nomenclature{\(p_{L_{g}}\)}{leader for the $g^{th}$ group}
\nomenclature{\(p_{L_{1}}^{*}\)}{global best leader}
\nomenclature{\(p_{B_i}^{p_{L_{g}}}\)}{$i^{th}$ believer associated to the leader of the $g^{th}$ group}
\nomenclature{\(p_{A}^{p_{L_{g}}}\)}{advocate associated to the leader of the $g^{th}$ group}
\nomenclature{\(\textit{\textbf{p}}_{B}^{p_{L}}\)}{set of believers}
\nomenclature{\(\textit{\textbf{P}}_L\)}{set of leaders}
\nomenclature{\(\textit{\textbf{P}}_A\)}{set of advocates}
\nomenclature{\(R_a\)}{surface roughness}
\nomenclature{\(kerf\)}{taper angle}
\nomenclature{\(MRR\)}{material remove rate}
\nomenclature{\(REWR\)}{electrode wear rate}
\nomenclature{\(f_b\)}{flank wear}
\nomenclature{\(M_t\)}{machining time}
\nomenclature{\(B_h\)}{burr height}
\nomenclature{\(B_t\)}{burr thickness}
\nomenclature{\(\phi\)}{approach angle}
\nomenclature{\(V_c\)}{cutting speed}
\nomenclature{\(f\)}{feed rate}
\nomenclature{\(F_c\)}{tangential force}
\nomenclature{\(V_{Bmax}\)}{tool wear}
\nomenclature{\(L\)}{tool-chip contact length}
\nomenclature{\(w\)}{weight}
\printnomenclature
In the proposed LAB algorithm, every individual in a group competes with every other individual within the group to become the best individual. The position of an individual depends on its fitness/objective function value. The individual with the best fitness value in a group is assigned as the local group leader for that group, and the individuals within the associated group follow its direction. The second best individual is assigned as the advocate to the leader, and the remaining individuals in the group are referred to as believers. Each local leader also competes with all the other local leaders to become the global best leader; all the other local leaders follow the direction of the global best leader while competing with one another. The local rankings motivate each group leader to explore promising search spaces, and the global ranking forces all the leaders to explore promising search spaces in order to remain the global best leader while competing with the other leaders. Thus every individual within a group competes with the others, motivating it to grow and search for better solutions.
\begin{figure}[H]
\centering
\includegraphics[width=0.84\linewidth]{diagrams/Visual.drawio.pdf}
\caption{Visual Abstract of the LAB Algorithm}
\label{fig:visual}
\end{figure}
Consider a general optimization problem as follows:
\begin{center}
Minimize $\;\;\; f(\textit{\textbf{X}}) = f(x_1,\ldots,x_i,\ldots,x_N)$\\[4pt]
s.t. $\;\;\; \psi_{i}^l \leq x_i \leq \psi_{i}^u, \;\;\; i=1,\ldots,N$
\end{center}
The procedure begins with generating a society of population \textit{\textbf{P}} with individuals $p = 1,\ldots,P$ randomly within the associated search space [$\psi_i^{l}$, $\psi_i^{u}$], and the associated objective functions are evaluated. The rest of the steps in the algorithm are explained below along with a flowchart (refer to fig.\ref{fig:flowchart}).
\textbf{Step 1 (Assigning Groups and establishing Roles) :}
Every group is assigned with an equal number of randomly selected individuals. Each group consists of $n$ number of individuals,
\begin{center}
where $n = \dfrac{\text{total number of individuals } (P)}{\text{total number of groups } (G)},$
\end{center}
thus ensuring an equal number of individuals in each group. After being assigned a group, individuals are locally ranked according to the fitness of their solution (objective function value) and arranged accordingly: the individual with the best fitness is referred to as the Leader ($p_{L}$), followed by the second best individual, referred to as the Advocate ($p_{A}^{p_{L}}$), and the remaining $n-2$ individuals (since the first two have been assigned), with worse fitness, referred to as Believers ($p_{B_i}^{p_{L}}$). The local best individuals/leaders from the corresponding groups compete with one another; the leader with the best fitness is assigned as the Global Best Leader ($p_{L}^*$), the associated group is assigned as Group 1, and all the other leaders follow its direction. A visual representation for a society of groups with an equal number of individuals is shown below in fig.\ref{fig:sets}.
\begin{figure}[H]
\input{diagrams/sets.tex}
\caption{Visual representation of Groups and Roles of Individuals for a Society}
\label{fig:sets}
\end{figure}
\textbf{Step 2 (Individual Search Direction) :}
With every iteration, a new search direction for each individual is calculated; the formulation of the search direction varies with the individual's role, as shown below:
\begin{itemize}
\item Leader : The search direction of every leader $p_L \in \textit{\textbf{P}}_L$ is influenced by the global leader $p_{L}^* \in \textbf{\textbf{P}}_L$, the corresponding advocate individual ${p}_{A}^{p_{L}}$, every associated believer $p_{B_{i}}^{p_{L}} \in {\textit{\textbf{p}}}_{B}^{p_{L}}$ and the associated randomly generated weights such that $w_{1}^{*}>w_{2}>w_{3} \in [0,1]$ and $w_{1}^{*} + w_{2} + w_{3} = 1$ as follows :
\end{itemize}
$$\forall x_{i}^{p_{L}}\;\; x_{i}^{p_{L}} = w_{1}^{*} \times x_{i}^{p_L^{*}} \:+\: w_{2} \times x_{i}^{p_{A}^{p_{L}}} \:+\: w_{3} \times {\frac {x_{i}^{p_{B_1}^{p_L}}+x_{i}^{p_{B_2}^{p_L}}+\cdots +x_{i}^{p_{B_{n-2}}^{p_L}}}{n-2}}, \;\;p_{L} \in \textit{\textbf{P}}_{L} $$
\begin{itemize}
\item Advocate : The search direction of every advocate $p_A \in \textit{\textbf{P}}_A$ is influenced by its corresponding leader $p_L \in \textit{\textbf{P}}_L$, every associated believer $p_B \in {\textit{\textbf{p}}}_{B}^{p_{L}}$ and the associated randomly generated weights such that $w_{1}^{*}>w_{2} \in [0,1]$ and $w_{1}^{*} + w_{2} = 1$ as follows :
\end{itemize}
$$\forall x_{i}^{p_{A}}\;\; x_{i}^{p_{A}} = w_{1}^{*} \times x_{i}^{p_L} \:+\: w_{2} \times {\frac {x_{i}^{p_{B_1}^{p_L}}+x_{i}^{p_{B_2}^{p_L}}+\cdots +x_{i}^{p_{B_{n-2}}^{p_L}}}{n-2}}, \;\;p_{A} \in \textit{\textbf{P}}_{A} $$
\begin{itemize}
\item Believers : The search direction of every believer $p_B \in \textit{\textbf{P}}_B$ is influenced by its corresponding leader $p_L \in \textit{\textbf{P}}_L$, its corresponding advocate $p_{A}^{p_{L}}$ and the associated randomly generated weights such that $w_{1}^{*}>w_{2} \in [0,1]$ and $w_{1}^{*} + w_{2} = 1$ as follows :
\end{itemize}
$$\forall x_{i}^{p_{B}}\;\; x_{i}^{p_{B}} = w_{1}^{*} \times x_{i}^{p_L} \:+\: w_{2} \times x_{i}^{p_{A}^{p_{L}}}, \;\;p_{B} \in \textit{\textbf{P}}_{B} $$
\textbf{Step 3 (Updation: Global and Local Ranking) :}
After the corresponding search directions are calculated, the individuals are updated with their new search directions; the individuals within each group are locally ranked and positions are assigned accordingly, followed by a global ranking based on the fitness values of the leaders of the corresponding groups. The group with the Global Best Leader ($p_{L}^{*}$) is assigned as Group 1.
\textbf{Step 4 (Convergence) :}
Stop if there is no significant improvement in the global and local group leaders, or if the maximum number of iterations is reached; else continue to \textbf{Step 2}.
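The four steps above can be sketched in a few dozen lines. The following is a minimal illustrative implementation (plain NumPy, not the authors' reference code); in particular, the way the ordered random weights are sampled (sorting uniform draws and normalizing) is an assumption, since the manuscript only requires $w_1^* > w_2 > w_3$ with unit sum:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective (Sphere function)."""
    return float(np.sum(x ** 2))

def lab_minimize(f, lb, ub, n_groups=4, group_size=5, iters=200):
    """Minimal sketch of the LAB update rules described above."""
    dim = len(lb)
    pop = rng.uniform(lb, ub, size=(n_groups * group_size, dim))

    def weights(k):
        # k random weights sorted so that w1 > w2 > ..., summing to 1
        w = np.sort(rng.random(k))[::-1]
        return w / w.sum()

    for _ in range(iters):
        # Steps 1/3: rank within each group (leader, advocate, believers),
        # then rank groups so that group 0 holds the global best leader.
        groups = pop.reshape(n_groups, group_size, dim)
        groups = np.stack([g[np.argsort([f(x) for x in g])] for g in groups])
        groups = groups[np.argsort([f(g[0]) for g in groups])]
        global_leader = groups[0, 0]

        # Step 2: role-dependent search directions
        new = np.empty_like(groups)
        for g in range(n_groups):
            leader, advocate = groups[g, 0], groups[g, 1]
            b_mean = groups[g, 2:].mean(axis=0)   # mean of the n-2 believers

            w = weights(3)   # leader: global leader, advocate, believer mean
            new[g, 0] = w[0] * global_leader + w[1] * advocate + w[2] * b_mean
            w = weights(2)   # advocate: leader and believer mean
            new[g, 1] = w[0] * leader + w[1] * b_mean
            for b in range(group_size - 2):   # believers: leader and advocate
                w = weights(2)
                new[g, 2 + b] = w[0] * leader + w[1] * advocate

        pop = np.clip(new.reshape(-1, dim), lb, ub)

    return min(pop, key=f)

best = lab_minimize(sphere, lb=np.array([-5.0] * 3), ub=np.array([5.0] * 3))
```

Note that all updates are convex combinations of current positions, so the population contracts towards the leaders, mirroring the competitive mechanism described in the text.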
\begin{figure}[!htb]
\centering
\includegraphics[width=0.50\linewidth]{diagrams/Flowchart.drawio.pdf}
\caption{LAB Algorithm flowchart}
\label{fig:flowchart}
\end{figure}
\FloatBarrier
\section{Problem description and formulations}
\subsection{Benchmark Test Problems}
The LAB was tested by solving 27 well-studied benchmark problems (Table \ref{tab:Functions})\cite{KARABOGA2009108,ABC2}. The results are compared with contemporary algorithms.\\
\pagebreak
\input{diagrams/functions_table}
\subsection{Manufacturing And Machining Problems}
Engineering problems are generally complex in nature and may involve several local optima. The complexity grows when the associated objective function involves coupled variables. This necessitates the development of approximation algorithms which can efficiently jump out of local optima and search for the global optimum \cite{GANDOMI2020112917,LEE20053902}. The LAB algorithm's performance was tested by solving four types of engineering problems in the machining domain, namely Abrasive Water Jet Machining, Electric Discharge Machining, micro-machining processes and process parameter optimization for turning of titanium alloy.
\subsubsection{Abrasive Water Jet Machining (AWJM)}
AWJM is an extended version of water jet cutting, in which a jet of water impinges on the work material to produce a cut. It can also be used for machining heat-sensitive materials, as the heat generated is very low, and cutting is up to 10 times faster than conventional methods.
Four critical parameters are $u_1$ (in $mm$) as workpiece thickness, $u_2$ (in $mm$) as nozzle diameter, $u_3$ (in $mm$) as standoff distance and $u_4$ (in $mm/min$) as cutting head speed/traverse speed \cite{Kecha,SHANMUGAM2008923,SHUKLA2017212},
for which the associated responses are surface roughness $R_a$ and taper angle $kerf$ \cite{dhanawade}. It is evident that all the process parameters interact with one another, affecting the precision of the cuts; hence, an optimum combination of the above process parameters is required for optimum results. The regression model of the AWJM process formulated in \cite{Kecha,SHUKLA2017212} is adopted here. The function being nonlinear and nonseparable makes the problem complex and increases the chances of getting stuck in a local minimum. The adopted regression model is as shown below:
\begin{equation}
\begin{split}\label{Ra_AWJM}
\textrm{Minimize} \;\; R_a = -23.309555 + 16.6968 u_{1} + 26.9296 u_{2} + 0.0587 u_{3} + 0.0146 u_{4} - 5.1863 u_{2}^2\\
- 10.4571 u_{1} u_{2} - 0.0534 u_{1} u_{3} - 0.0103 u_{1} u_{4} + 0.0113 u_{2} u_{3} - 0.0039 u_{2} u_{4}
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{kerf_AWJM}
\textrm{Minimize} \;\; kerf = -1.15146+0.70118 u_1+2.72749 u_2+0.00689 u_3-0.00025 u_4\\
+0.00386 u_2 u_3-0.93947 u_2^2-0.25711 u_1 u_2-0.00314 u_1 u_3\\
-0.00249 u_1 u_4+0.00196 u_2 u_4-0.00002 u_3 u_4-0.00001 u_3^2
\end{split}
\end{equation}
where\;
$0.9 \leq u_1 \leq 1.25$,\;
$0.95 \leq u_2 \leq 1.5$,\;
$20 \leq u_3 \leq 96$,\;
$200 \leq u_4 \leq 600$
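Regression models like the one above are cheap to evaluate, so any of the optimizers discussed here can be benchmarked against a crude baseline. The sketch below (plain NumPy; random search is used as a stand-in for LAB or any other metaheuristic) transcribes the $R_a$ model and minimizes it over the stated box:

```python
import numpy as np

rng = np.random.default_rng(1)

def Ra(u):
    """AWJM surface-roughness regression model, transcribed from the text."""
    u1, u2, u3, u4 = u
    return (-23.309555 + 16.6968*u1 + 26.9296*u2 + 0.0587*u3 + 0.0146*u4
            - 5.1863*u2**2 - 10.4571*u1*u2 - 0.0534*u1*u3
            - 0.0103*u1*u4 + 0.0113*u2*u3 - 0.0039*u2*u4)

# bounds on (u1, u2, u3, u4) from the problem statement
lb = np.array([0.9, 0.95, 20.0, 200.0])
ub = np.array([1.25, 1.5, 96.0, 600.0])

# crude random-search baseline over the feasible box
samples = rng.uniform(lb, ub, size=(20000, 4))
best_u = min(samples, key=Ra)
```

The same pattern applies to the $kerf$ model and to the EDM and micro-machining models in the following subsections.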
\subsubsection{Electric Discharge Machining (EDM)}
EDM is an electro-thermal non-traditional machining process, which uses electrical sparks (thermal energy) to erode unwanted material in order to create a desired shape. It is a controlled metal-removal process in which metal is removed by means of electric spark erosion: a pulsating (ON/OFF) electrical charge of high-frequency current is applied through the electrode to the workpiece. In the gap between the tool and the workpiece, a difference in the applied potential establishes an electric field, due to which the loose electrons on the tool gain high velocity and energy under the electrostatic forces; these free electrons then collide with the dielectric molecules, which results in ionization. As more electrons are accelerated, more positive ions and electrons are generated, increasing the concentration of electrons and ions. The energy released causes electrode wear \cite{muthu,gopal} and results in case hardening of the workpiece.\\
In order to control surface roughness, the process parameters $v_1$ (in $A$) as discharge current, $v_2$ (in $V$) as gap voltage, $v_3$ (in $\mu s$) as pulse on-time and $v_4$ (in $\mu s$) as pulse off-time need to be optimized for the EDM process. The process responses for material removal, surface finish and electrode wear of the machined component are $MRR$, $R_a$ and relative electrode wear rate $REWR$, respectively.
The regression model for the above process is adopted here\cite{SHUKLA2017212,TzengCJ}:
\begin{equation}
\begin{split}\label{MRR_EDM}
\textrm{Maximize} \;\; MRR = -235.15 + 39.7v_1 + 4.277v_2 + 1.569v_3 - 1.375v_4 - 0.0059v_3^2 - 0.536v_1v_2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Ra_EDM}
\textrm{Minimize} \;\; R_a = 30.347 - 0.618v_1 - 0.438v_2 + 0.059v_3 - 0.59v_4 + 0.019v_1v_4 + 0.0075v_2v_4
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{REWR_EDM}
\textrm{Minimize} \;\; REWR = 196.564 - 24.19v_1 - 3.135v_2 - 1.781v_3 + 0.153v_4 + 0.464v_1v_2 + 0.158v_1v_3\\
+ 0.025v_1v_4 + 0.029v_2v_3 - 0.017v_2v_4 - 0.003385v_1v_2v_3+0.093v_1^2 \\
+ 0.001491v_3^2 + 0.005265v_4^2
\end{split}
\end{equation}
where $\;
7.5 \leq v_1 \leq 12.5, \;
45 \leq v_2 \leq 55, \;
50 \leq v_3 \leq 150, \;
40 \leq v_4 \leq 60$
\subsubsection{Micro-machining processes}
Machining refers to the various processes of cutting raw materials to specific dimensions through controlled material removal.
The machining process usually involves a cutting tool, a machine tool and a workpiece\cite{EZUGWU20051353}. Machinability refers to the ease with which any type of material can be cut, in minimum cost and time, into a specific shape and dimension with a given tolerance, surface quality, etc. \cite{DeGarmo}.
Micro-turning is a micro-machining process which uses solid micro-tools to remove material from the workpiece and is almost similar to the conventional turning operation.
The micro-tools used to remove workpiece material have characteristics which significantly affect the size reduction\cite{Qin2010}.
$w_1$ (in $m/min$) as cutting speed, $w_2$ (in $\mu m/rev$) as feed and $w_3$ (in $\mu m$) as depth of cut are the process parameters for micro-turning. The performance responses are flank wear ($f_b$) and surface roughness ($R_a$).
The formulated regression model for the above process is adopted here\cite{DURAIRAJ2013878} :
\begin{equation}
\begin{split}\label{fb_6}
\textrm{Minimize} \;\; f_b = 0.004 w_1^{0.495} w_2^{0.545} w_3^{0.763}
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Ra_7}
\textrm{Minimize} \;\; R_a = 0.048w_1^{-0.062}w_2^{0.445}w_3^{0.516}
\end{split}
\end{equation}
where $25 \leq w_1 \leq 37,\; 5 \leq w_2 \leq 15,\; 30 \leq w_3 \leq 70 $
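Both responses above are monomials over a box, so their single-objective minima sit at corners: $f_b$ has positive exponents in all three variables and is minimized at the lower bounds, while $R_a$ has a small negative exponent in $w_1$ and is minimized at $w_1=37$, $w_2=5$, $w_3=30$. A quick grid check (plain NumPy sketch, names illustrative) confirms this:

```python
import numpy as np

def f_b(w1, w2, w3):
    # flank-wear model: all exponents positive, so increasing in each variable
    return 0.004 * w1**0.495 * w2**0.545 * w3**0.763

def R_a(w1, w2, w3):
    # roughness model: negative exponent in w1, positive in w2 and w3
    return 0.048 * w1**(-0.062) * w2**0.445 * w3**0.516

# grid over the feasible box (endpoints included)
W1, W2, W3 = np.meshgrid(np.linspace(25, 37, 13),
                         np.linspace(5, 15, 11),
                         np.linspace(30, 70, 9), indexing="ij")
fb_grid_min = float(np.min(f_b(W1, W2, W3)))
ra_grid_min = float(np.min(R_a(W1, W2, W3)))
```

This makes the single-objective cases analytically trivial; the interest for a metaheuristic lies in the coupled multi-objective formulations.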
Process parameters $f_1$ (in $rpm$) as cutting speed and $f_2$ (in $mm/min$) as feed, and process responses surface roughness $R_a$ and machining time $M_t$, are considered for the micro-milling process with two milling cutters of diameters $0.7\,mm$ and $1\,mm$ \cite{aniket}. The formulated regression model for the above process is adopted here:
Tool with diameter $0.7 mm$
\begin{equation}
\begin{split}\label{Ra_8}
\textrm{Minimize} \;\; R_a = -0.455378 + 0.00027f_1 + 0.16422f_2 - 0.000077f_1f_2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Mt_9}
\textrm{Minimize} \;\; M_t = 17.71644 - 0.0002f_1 - 4.8404f_2 + 0.0001f_1f_2
\end{split}
\end{equation}
Tool with diameter $1 mm$
\begin{equation}
\begin{split}\label{Ra_10}
\textrm{Minimize} \;\; R_a = -0.208871 + 0.000144f_1 + 0.019571f_2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Mt_11}
\textrm{Minimize} \;\; M_t = 20.2906 - 0.0015f_1 - 5.8369f_2 + 0.0006f_1f_2
\end{split}
\end{equation}
where $\; 1500 \leq f_1$ $\leq 2500,\; 1 $ $\leq f_2 \leq 3$\\
Burr height $B_h$ and burr thickness $B_t$ are the performance responses for four drill diameters in the micro-drilling process: $0.5\,mm$, $0.6\,mm$, $0.8\,mm$ and $0.9\,mm$. The process parameters are $g_1$ (in $rpm$) as cutting speed and $g_2$ (in $mm/min$) as feed. The formulated regression model of the above process is adopted here for each of the tools used, as follows \cite{pansari}:
Tool with diameter $0.5 mm$
\begin{equation}
\begin{split}\label{Bh_12}
\textrm{Minimize} \;\; B_h = 420.94 - 0.234g_1 - 99.91g_2 + 6.55 \times 10^{-5}g_1^2 + 22.152g_2^2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Bt_13}
\textrm{Minimize} \;\; B_t = 90.57 - 0.049g_1 - 27.12g_2 + 1.32 \times 10^{-5}g_1^2 + 5.54g_2^2
\end{split}
\end{equation}
Tool with diameter $0.6 mm$
\begin{equation}
\begin{split}\label{Bh_14}
\textrm{Minimize} \;\; B_h = 369.67 - 0.028g_1 - 156.79g_2 + 6.64 \times 10^{-6}g_1^2 + 23.162g_2^2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Bt_15}
\textrm{Minimize} \;\; B_t = 35.34 - 0.019g_1 - 0.59g_2 + 6.44 \times 10^{-6}g_1^2 + 0.51g_2^2
\end{split}
\end{equation}
Tool with diameter $0.8 mm$
\begin{equation}
\begin{split}\label{Bh_16}
\textrm{Minimize} \;\; B_h = 106.116 + 0.13g_1 - 6.62g_2 + 1.49 \times 10^{-6}g_1^2 + 4.75g_2^2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Bt_17}
\textrm{Minimize} \;\; B_t = 59.79 - 0.024g_1 - 11.3g_2 + 7.78 \times 10^{-6}g_1^2 + 2.18g_2^2
\end{split}
\end{equation}
Tool with diameter $0.9 mm$
\begin{equation}
\begin{split}\label{Bh_18}
\textrm{Minimize} \;\; B_h = 450.7 - 0.09g_1 - 34.48g_2 + 2.34 \times 10^{-5}g_1^2 + 5.03g_2^2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{Bt_19}
\textrm{Minimize} \;\; B_t = 80.07 - 0.040g_1 - 14.81g_2 + 1.516 \times 10^{-5}g_1^2 + 4.65g_2^2
\end{split}
\end{equation}
where $1000 \leq g_1 \leq 2500,\; 1 \leq g_2 \leq 4$.
\subsubsection{Process parameter optimization for turning of titanium alloy (MQL environment)}
Minimum Quantity Lubrication (MQL) has increasingly been adopted in the manufacturing domain over the past few years, owing to its ability to reduce costs and material waste compared with traditional methods. In MQL, a small quantity of cutting fluid, such as a sustainable lubricant (vegetable oil), is applied to the tool-chip interface together with compressed air, acting as an alternative to conventional coolant fluids. This cuts costs by avoiding the use of huge amounts of coolant, addressing the generated heat directly rather than flooding the surface with coolant, and results in increased tool life \cite{GUPTA201767}.
Feed rate $k_1$ (in $mm/rev$), approach angle $k_2$ (in degrees) and cutting speed $k_3$ (in $m/min$) are considered as process parameters, and the performance responses are tangential force $F_c$, tool wear $V_{Bmax}$, surface roughness $R_a$ and tool-chip contact length $L$. The formulated regression models of the above process are adopted here \cite{Gupta2016,SNNiknam}:
\begin{equation}
\begin{split}\label{MQL_Fc}
\textrm{Minimize} \;\; F_c = -202.01471 + 1.28250 \times k_3 + 3225 \times k_1 - 0.74167 \times k_2 - 9.4 \times k_3 \times k_1
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{MQL_VBmax}
\textrm{Minimize} \;\; V_{Bmax} = -0.27368 + 0.001575 \times k_3 + 2.4 \times k_1 - 0.0010833 \times k_2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{MQL_Ra}
\textrm{Minimize} \;\; R_a = -0.16294 + 0.001425 \times k_3 + 3.7 \times k_1 - 0.000416667 \times k_2
\end{split}
\end{equation}
\begin{equation}
\begin{split}\label{MQL_L}
\textrm{Minimize} \;\; L = 0.96302 - 0.00215931 \times k_3 + 0.92703 \times k_1 + 0.00152807 \times k_2
\end{split}
\end{equation}
where $0.1 \leq k_1 \leq 0.2,\; 60 \leq k_2 \leq 90,\; 200 \leq k_3 \leq 300$
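As a quick consistency check, the four response models can be evaluated at a mid-range operating point. The snippet below is an illustrative Python sketch; the point ($k_1=0.15$ $mm/rev$, $k_2=75^\circ$, $k_3=250$ $m/min$) is chosen for demonstration only, pairing each bound with the parameter whose stated unit matches it:

```python
# Response models from Eqs. (MQL_Fc)-(MQL_L).
# Assumed ranges: feed k1 in [0.1, 0.2] mm/rev, approach angle k2 in [60, 90] deg,
# cutting speed k3 in [200, 300] m/min (matching the units stated for each parameter).
def mql_responses(k1, k2, k3):
    Fc    = -202.01471 + 1.28250*k3 + 3225*k1 - 0.74167*k2 - 9.4*k3*k1
    VBmax = -0.27368 + 0.001575*k3 + 2.4*k1 - 0.0010833*k2
    Ra    = -0.16294 + 0.001425*k3 + 3.7*k1 - 0.000416667*k2
    L     = 0.96302 - 0.00215931*k3 + 0.92703*k1 + 0.00152807*k2
    return Fc, VBmax, Ra, L

# Evaluate at the mid-range point of the assumed bounds.
Fc, VBmax, Ra, L = mql_responses(0.15, 75, 250)
```

With this pairing the models return physically plausible magnitudes (a cutting force of a few hundred newtons and sub-millimetre wear), which supports the stated parameter roles.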
\section{Tests and Validations}
The LAB algorithm was coded in Python 3 on the Google Colab platform with an Intel(R) Xeon(R) processor @ 2.30 GHz and 12 GB RAM. In the initialization step, individuals were generated and randomly assigned to groups. The selected LAB parameters are: number of groups $G=4$, number of individuals in each group $n=5$, and maximum number of iterations $=100$.
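The initialization step described above can be sketched as follows. This is an illustrative Python fragment under stated assumptions: the grouping and leader-selection details are simplified stand-ins for the full LAB mechanics, and the micro-turning objective $f_b$ serves as a placeholder fitness:

```python
import random

random.seed(0)

G, n = 4, 5                                  # groups and individuals per group, as stated
bounds = [(25, 37), (5, 15), (30, 70)]       # e.g. the micro-turning bounds for w1, w2, w3

def objective(w):                            # placeholder fitness (Eq. fb_6)
    return 0.004 * w[0]**0.495 * w[1]**0.545 * w[2]**0.763

# Generate G*n individuals uniformly within the bounds and randomly assign them
# to groups; each group's best member acts as its leader, and the best leader
# overall is the global leader.
population = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(G * n)]
random.shuffle(population)
groups = [population[i * n:(i + 1) * n] for i in range(G)]
leaders = [min(g, key=objective) for g in groups]
global_leader = min(leaders, key=objective)
```

In subsequent iterations the individuals would update their positions by following their group leader, and the leaders by following the global leader, as described in the algorithm section.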
\subsection{Benchmark Problems}
LAB is validated by solving 27 benchmark test functions, and the results are compared with those of other algorithms, which are likewise stochastic in nature.
The criteria for comparison are the mean and best solutions, the standard deviation, and the runtime of the algorithms (refer to Table \ref{tab:TestTable}).
A statistical analysis is performed by executing two-sided and pairwise Wilcoxon signed-rank tests (refer to Tables \ref{tab:StatTable} and \ref{tab:multiprob}). In the two-sided comparison, the optimum solutions obtained from 30 independent runs of LAB on a benchmark test problem are compared with those of the other algorithms solving the same benchmark test problem.
The significance level $\alpha$ was chosen as 0.05 with the null hypothesis H0 that the medians of the solutions obtained by algorithms A and B are equal. In order to verify whether an alternative hypothesis holds, i.e. whether algorithm B performs better than algorithm A or the other way around, the sizes of the ranks provided by the Wilcoxon signed-rank test (the T+ and T- values) were thoroughly examined \cite{CIVICIOGLU20138121}.
At the bottom of Table \ref{tab:StatTable} the counts of significant cases (+/-/=) are given. The obtained results exhibit the superior performance of the LAB algorithm as compared to the other algorithms. In the pairwise comparison, the average of the best solutions obtained by the algorithms over 30 runs on the benchmark test problems is compared.
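The two-sided test described above can be reproduced with SciPy; the paired run data below are randomly generated for illustration only and are not the paper's results:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
best_A = rng.normal(1.00, 0.05, 30)   # best solutions over 30 independent runs of algorithm A
best_B = rng.normal(0.90, 0.05, 30)   # best solutions over 30 independent runs of algorithm B

# Two-sided Wilcoxon signed-rank test on the paired per-run optima.
stat, p = wilcoxon(best_A, best_B, alternative="two-sided")
reject_H0 = bool(p < 0.05)            # H0: the medians of A and B are equal
```

When H0 is rejected, the direction of the difference (which algorithm is better) is read off from the signed-rank sums T+ and T-, as done in Table \ref{tab:StatTable}.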
The convergence plots of a few selected functions, namely Booth (unimodal), Hartmann6 (multimodal), Matyas (unimodal) and Six-hump camelback (multimodal), are presented in Figures \ref{fig:booth}–\ref{fig:sixhump}.
These plots exhibit the competitive behaviour of individuals within a group while reaching the optimum solution. It is also evident from the plots that the individuals in a group follow their leader. During every iteration, the group leaders of the corresponding groups compete with the global best leader and succeed at times, and the group associated with the global best leader is assigned as Group 1, thus changing the search direction of the local leaders following the global best leader. The abrupt changes in the graphs exhibit this phenomenon of competitiveness among individuals. The convergence highlights the significance of the LAB approach in quickly reaching the optimum solution.
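Convergence plots of this kind track the best objective value found up to each iteration. A minimal sketch of how such a monotone best-so-far trace is recorded, using a random-search stand-in for LAB and the Booth function as the objective:

```python
import random

random.seed(2)

def booth(x, y):
    # Booth function (F10): unimodal, global minimum 0 at (1, 3).
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

# Record the best objective value found up to each of 100 iterations
# (matching the maximum-iteration setting used in the tests).
trace, best = [], float("inf")
for it in range(100):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    best = min(best, booth(x, y))
    trace.append(best)        # monotone non-increasing best-so-far curve
```

Plotting `trace` against the iteration index gives the kind of convergence curve shown in Figures \ref{fig:booth}–\ref{fig:sixhump}; for LAB the sampled points would come from the group-wise position updates rather than uniform sampling.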
\begin{landscape}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Statistical solutions of algorithms for Benchmark test problems\\(\textit{Mean = mean solution; Std. Dev. = standard deviation; Best = best solution; Runtime = mean runtime in seconds})}
\includegraphics[width=0.85\linewidth]{Tables/Test1_1.pdf}
\centering
\label{tab:TestTable}
\end{table}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption*{\textbf{Table \ref{tab:TestTable} Continued}}
\includegraphics[width=0.85\linewidth]{Tables/Test1_2.pdf}
\centering
\end{table}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption*{\textbf{Table \ref{tab:TestTable} Continued}}
\includegraphics[width=0.85\linewidth]{Tables/Test1_3.pdf}
\centering
\end{table}
\end{landscape}
\begin{table}[H]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Statistical results for Benchmark test problems using the two-sided Wilcoxon signed-rank test ($\alpha$ = 0.05)}
\includegraphics[width=0.99\linewidth]{Tables/WilcoxonTest1.pdf}
\centering
\label{tab:StatTable}
\end{table}
\begin{table}[H]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption*{\textbf{Table \ref{tab:StatTable} Continued}}
\includegraphics[width=0.98\linewidth]{Tables/WilcoxonTest2.pdf}
\centering
\end{table}
\begin{table}[H]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Statistical pairwise comparison.}
\includegraphics[width=0.80\linewidth]{Tables/Comparative.pdf}
\centering
\label{tab:multiprob}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\linewidth]{images/boothsub.png}
\caption{Convergence: Booth Function (F10)}
\label{fig:booth}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\linewidth]{images/hartmann6sub.png}
\caption{Convergence: Hartmann6 Function (F20)}
\label{fig:hartmann6}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\linewidth]{images/matyassub.png}
\caption{Convergence: Matyas Function (F25)}
\label{fig:matyas}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.99\linewidth]{images/sixhumpsub.png}
\caption{Convergence: Six-hump Camelback Function (F43)}
\label{fig:sixhump}
\end{figure}
\FloatBarrier
\subsection{Solutions to AWJM and EDM}
Table \ref{tab:AWJM1} contains the best and mean solutions, along with their associated standard deviations, obtained for $R_a$ and $kerf$ of AWJM using LAB, Multi-CI, GA, SA and PSO; a comparison with the variations of CI is shown in Table \ref{tab:AWJM2}. In the LAB approach, individuals are randomly assigned to a group and its associated leader in the first iteration. In every following iteration, the individuals follow the local best individual (local leader), while the local leader follows the global best individual (global leader); this helps explore better solutions, due to which the individuals avoid local minima. Hence, LAB yielded better solutions for $kerf$ compared with FA, the experimental and regression approaches, and PSO. \\
LAB was able to outperform RSM, BPNN, FA, $f_{best}$, $f_{better}$ and alienation in terms of solution quality when solving for $MRR$ in the EDM problems, due to its strong exploration and exploitation mechanisms, as is evident from Table \ref{tab:CompareEDM1}.\\
It is evident in Table \ref{tab:AWJM1} that the results for LAB are less robust than those of GA and Multi-CI. Compared to SA and PSO, LAB outperformed them by achieving 8\% and 23\% minimization of $kerf$ in AWJM, as is evident in Tables \ref{tab:AWJM1} and \ref{tab:AWJM2}. Compared to $f_{best}$, $f_{better}$ and alienation, LAB achieved 78\%, 79\% and 47\% maximization of $MRR$ for EDM, as is evident in Table \ref{tab:CompareEDM1}.
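The percentage figures quoted in these comparisons can be read as the relative reduction of a competitor's best value by LAB's best value; a minimal helper with illustrative numbers (not the tabulated ones):

```python
# Relative improvement of LAB's best value over a competitor's best value,
# for a minimization objective such as kerf.
def pct_improvement(competitor_best, lab_best):
    return 100.0 * (competitor_best - lab_best) / competitor_best

# e.g. a competitor best of 1.00 against a LAB best of 0.92 is an 8% minimization.
improvement = pct_improvement(1.00, 0.92)
```

For maximization responses such as $MRR$, the roles are reversed: the improvement is the relative increase of LAB's best value over the competitor's.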
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Solutions to $R_a$ and $kerf$ of AWJM}
\includegraphics[width=0.6\linewidth]{Tables/AWJM_Table.pdf}
\centering
\label{tab:AWJM1}
\end{table}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Overall solutions to $R_a$ and $kerf$ of AWJM}
\includegraphics[width=0.95\linewidth]{Tables/AWJM_Table2.pdf}
\centering
\label{tab:AWJM2}
\end{table}
The best-solution plots for every iteration of LAB when solving the AWJM and EDM problems are exhibited in Fig. \ref{fig:plot_AWJM} and Fig. \ref{fig:plot_EDM} a–c, respectively, and the solution comparison is exhibited in Table \ref{tab:AWJM2}.
\begin{landscape}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Solutions to $R_a$, $MRR$, $REWR$ of EDM}
\includegraphics[width=0.96\linewidth]{Tables/EDM1.pdf}
\label{tab:CompareEDM1}
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Overall solutions to $R_a$, $MRR$, $REWR$ of EDM}
\includegraphics[width=0.95\linewidth]{Tables/Compar_MRR_REWR_RA.pdf}
\centering
\label{tab:CompareEDM2}
\end{table}
\end{landscape}
\begin{figure}
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/AWJM_Ra.png}
\caption{$R_a$}
\label{fig:AWJM_Ra}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/AWJM_kerf.png}
\caption{$kerf$}
\label{fig:AWJM_kerf}
\end{subfigure}
\caption{Convergence: AWJM}
\label{fig:plot_AWJM}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/EDM_MRR.png}
\caption{$MRR$}
\label{fig:MRR}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/EDM_Ra.png}
\caption{$R_a$}
\label{fig:Ra_EDM}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/EDM_REWR.png}
\caption{$REWR$}
\label{fig:REWR}
\end{subfigure}
\caption{Convergence: EDM}
\label{fig:plot_EDM}
\end{figure}
\subsection{Solutions to Micro-machining problems}
Comparison Tables \ref{tab:Turning}, \ref{tab:Milling} and \ref{tab:Drilling} exhibit
the mean and best solutions, along with the standard deviation, over 30 trials of each objective function of the algorithms for solving the micro-turning, micro-milling and micro-drilling processes.
For the micro-drilling processes, LAB obtained results comparable with Multi-CI, GA, SA and the variations of CI, and outperformed GA, SA and PSO in convergence rate. However, for micro-turning and for machining time ($M_t$) in micro-milling with the $0.7 mm$ and $1 mm$ tool diameters, LAB could compete with the other algorithms but could not produce superior results.
\begin{table}[H]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Solutions to Micro-Turning processes}
\includegraphics[width=0.95\linewidth]{Tables/MicroTurning.pdf}
\centering
\label{tab:Turning}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/fb6.png}
\caption{$f_b$}
\label{fig:fb}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Ra7.png}
\caption{$R_a$}
\label{fig:Ra}
\end{subfigure}
\caption{Convergence: Micro-Turning}
\label{fig:micro-turning}
\end{figure}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Solutions to Micro-Milling processes}
\includegraphics[width=0.95\linewidth]{Tables/Micro_Milling.pdf}
\centering
\label{tab:Milling}
\end{table}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Ra8.png}
\caption{$R_a(0.7mm)$}
\label{fig:Ra0.7}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Mt9.png}
\caption{$M_t(0.7mm)$}
\label{fig:Mt0.7}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Ra10.png}
\caption{$R_a(1mm)$}
\label{fig:Ra1}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Mt11.png}
\caption{$M_t(1mm)$}
\label{fig:Mt1}
\end{subfigure}
\caption{Convergence: Micro-Milling}
\label{fig:micro-milling}
\end{figure}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Solutions to Micro-Drilling processes}
\includegraphics[width=0.95\linewidth]{Tables/MicroDrilling1.pdf}
\centering
\label{tab:Drilling}
\end{table}
\pagebreak
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption*{\textbf{Table \ref{tab:Drilling} Continued}}
\includegraphics[width=0.95\linewidth]{Tables/MicroDrilling2.pdf}
\centering
\end{table}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bh12.png}
\caption{$B_h(0.5mm)$}
\label{fig:Bh0.5}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bt13.png}
\caption{$B_t(0.5mm)$}
\label{fig:Bt0.5}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bh14.png}
\caption{$B_h(0.6mm)$}
\label{fig:Bh0.6}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bt15.png}
\caption{$B_t(0.6mm)$}
\label{fig:Bt0.6}
\end{subfigure}
\label{fig:micro-drilling}
\end{figure}
\FloatBarrier
\begin{figure}[!htb]
\centering
\ContinuedFloat
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bh16.png}
\caption{$B_h(0.8mm)$}
\label{fig:Bh0.8}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bt17.png}
\caption{$B_t(0.8mm)$}
\label{fig:Bt0.8}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bh18.png}
\caption{$B_h(0.9mm)$}
\label{fig:Bh0.9}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/Bt19.png}
\caption{$B_t(0.9mm)$}
\label{fig:Bt0.9}
\end{subfigure}
\caption{Convergence: Micro-Drilling}
\label{fig:micro-drilling2}
\end{figure}
The LAB solutions exhibited a higher standard deviation for the micro-turning (Table \ref{tab:Turning}), micro-milling (Table \ref{tab:Milling}) and micro-drilling (Table \ref{tab:Drilling}) problems; for the convergence plots refer to Figs. \ref{fig:micro-turning}, \ref{fig:micro-milling} and \ref{fig:micro-drilling2}, respectively. This is because the individuals in LAB are updated at every iteration after computing the individual search directions, so as to simultaneously obtain updated solutions and rankings. This iterative updating of the individuals after computing the individual search directions, i.e. the local as well as the global ranking, results in less robustness and a higher standard deviation.
When compared with other algorithms for solving the micro-machining problems, LAB resulted in a lower run time but showed less robustness. However, LAB outperformed SA, $f_{best}$ and $f_{better}$ by achieving 76\%, 85\% and 75\% minimization of $R_a$, respectively, for micro-milling with the 0.7 mm tool diameter. LAB achieved 81\%, 72\% and 85\% minimization of $R_a$ when compared to SA, $f_{best}$ and $f_{better}$ for the 1 mm tool diameter. LAB also achieved 24\% and 34\% minimization of $B_h$ and $B_t$, respectively, as compared to SA for micro-drilling with a tool diameter of 0.5 mm. For tool diameters of 0.8 mm and 0.9 mm, 16\% and 3\% minimization of $B_t$, respectively, were achieved as compared to SA (exhibited in Tables \ref{tab:Turning}, \ref{tab:Milling} and \ref{tab:Drilling}).
\subsection{Solution to Turning of Titanium Alloy}
Table \ref{tab:SN1} includes the best solutions obtained for the cutting force $F_c$, tool wear $V_{Bmax}$, tool-chip contact length $L$ and surface roughness $R_a$ produced by the variations of CI, Multi-CI and LAB, with their corresponding mean solutions, standard deviations and run times.
Table \ref{tab:SN2} contains an additional comparison with the solutions of the experimental work, the desirability approach and PSO. In Table \ref{tab:SN3} the optimum values yielded by the variations of CI, Multi-CI and LAB for the cutting speed $V_c$, feed $f$ and tool angle $\phi$ are shown. As is evident from Eq. \ref{MQL_Fc}, which is non-separable, multimodal and nonlinear in nature, an algorithm needs well-balanced exploration and exploitation abilities to find the global optimum solution. The plots in Figs. \ref{fig:SN_Fc}, \ref{fig:VBmax}, \ref{fig:L} and \ref{fig:SNRa} present the best solutions of LAB for the cutting force $F_c$, tool wear $V_{Bmax}$, tool-chip contact length $L$ and surface roughness $R_a$, respectively. The efforts of the individuals to climb up the rankings by competing to be the best are evident in Fig. \ref{fig:SN}.
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Comparison of statistical solutions for Turning in MQL environment}
\includegraphics[width=0.95\linewidth]{Tables/SN1.pdf}
\centering
\label{tab:SN1}
\end{table}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Comparison of algorithms}
\includegraphics[width=0.95\linewidth]{Tables/SN2.pdf}
\centering
\label{tab:SN2}
\end{table}
\begin{table}[!htb]
\captionsetup{singlelinecheck = false, format= hang, justification=raggedright, font=footnotesize, labelsep=space}
\caption{Comparison of optimum values for the solutions of $V_c$, $f$, and $\phi$}
\includegraphics[width=0.95\linewidth]{Tables/SN3.pdf}
\centering
\label{tab:SN3}
\end{table}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/SN_Fc.png}
\caption{$F_c$}
\label{fig:SN_Fc}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/SN_Vbmax.png}
\caption{$V_{Bmax}$}
\label{fig:VBmax}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/SN_L.png}
\caption{$L$}
\label{fig:L}
\end{subfigure}
\begin{subfigure}{0.99\textwidth}
\includegraphics[width=\textwidth]{micro-machining/SN_Ra.png}
\caption{$R_a$}
\label{fig:SNRa}
\end{subfigure}
\caption{Convergence Plots for optimal values of $F_c, V_{Bmax}, L, R_a$}
\label{fig:SN}
\end{figure}
\FloatBarrier
\section{Conclusions and future directions}
In this manuscript, a novel socio-inspired algorithm, named the LAB algorithm, is introduced, based on how individuals in a group with certain personality traits follow, make decisions and compete within the group in society. The proposed algorithm was examined by solving 27 benchmark test problems from CEC 2005, and a statistical comparison using the Wilcoxon signed-rank test was conducted. LAB performed slightly better in terms of best solution, mean solution, robustness and computational time when compared to CMAES and IA, and outperformed PSO2011, CMAES, ABC, JDE, CLPSO and SADE in computational time. LAB demonstrated low robustness but exceedingly low computational time.
The algorithm was also validated by solving 23 real-world problems consisting of the AWJM, EDM, parameter tuning of titanium-alloy turning and advanced manufacturing process problems, in order to compare exploitation, exploration, computational cost and convergence rate with other well-known and recent algorithms: experimental (Kechigas, 2012), regression (Kechigas, 2012), FA, variations of CI (roulette wheel, $f_{best}$, $f_{better}$, alienation), GA, SA, PSO and Multi-CI.
Problems for minimization of surface roughness $R_a$ for AWJM, EDM and micro-machining processes namely micro-turning and micro-milling for Advanced Manufacturing Processes were solved.
Minimization of burr thickness $B_t$ and burr height $B_h$, relative electrode wear rate $REWR$ for EDM and taper angle $kerf$ for AWJM in micro-drilling was executed.
In micro-turning process flank wear $f_b$ and in micro-milling processes machining time $M_t$ were minimized.
The micro-drilling process utilized four drill diameters: $0.5 mm$, $0.6 mm$, $0.8 mm$ and $0.9 mm$. In the micro-milling processes, two cutter diameters, $0.7 mm$ and $1 mm$, were utilized. The results of LAB were then compared with multiple algorithms consisting of the variations of CI, the Multi-CI algorithm and experimental results, and also with relatively modern algorithms such as SA, PSO, GA, BPNN, RSM and FA.
LAB performed exceedingly well compared to FA, SA, PSO, the experimental results and the regression solutions for the $kerf$ of the AWJM problem in terms of solution quality. LAB results were comparable with GA and PSO for solving the EDM and micro-machining problems. LAB was able to outperform the variations of CI and the regression, RSM, FA, SA and BPNN approaches in terms of the solutions obtained. The run time of LAB is considerably lower than that of the other algorithms for the majority of the problems, because in LAB all the individuals compete and interact with one another simultaneously, and updating the individuals at every iteration helps the algorithm gain more exploration and exploitation capability; however, this resulted in a higher standard deviation, which exhibits its low robustness.
Several enhancements can be made to the algorithm for better and faster computation when solving complex, higher-dimensional problems, for instance by introducing a mechanism that triggers the algorithm when it is stuck at a local minimum, which may help LAB solve a wider range of complex, higher-dimensional real-life problems. Moreover, the LAB algorithm can be modified to solve multi-objective problems by making the competing groups handle different objectives.
\pagebreak
\begin{large}
\textbf{Acknowledgments}
\end{large}
This work was supported by the Fundamental Research Grant Scheme (FRGS) under the Ministry of Higher Education (MOHE) with project number FRGS/1/2020/ICT02/MUSM/03/6.
\printbibliography
\end{document}
\section{Introduction}
\label{intro}
The Wigner function was originally introduced in the deformation quantization of classical mechanics as the substitute for the probability density of ordinary quantum mechanics. However, it may take negative values, so it is considered a quasiprobability distribution which provides quantum corrections to classical statistical mechanics. It is built with the wave functions satisfying the Schr\"{o}dinger equation \cite{Wig-rev}. This quantum mechanical formulation can be generalized to field theory by employing quantum fields to construct the Wigner function \cite{Groot}. For spin-$1/2$ fermions the constituent fields satisfy the Dirac equation. The Wigner function method has turned out to be a powerful tool in constructing relativistic quantum kinetic theories of spin-$1/2$ fermions \cite{Heinz83-1, egv,vge,zh1}.
In heavy-ion collisions a quark-gluon plasma is created in which the constituent quarks are approximately massless \cite{Gyulassy:2004zy, Shuryak:2004cy}. Their properties can be inspected within the chiral kinetic theory (CKT), which leads to an intuitive understanding of the chiral magnetic effect (CME) \cite{kmw,fkw,kz} and the chiral separation effect (CSE) \cite{mz,jkr} for a chiral plasma subjected to external electromagnetic fields. When the chiral plasma is considered as a fluid, the chiral transport equations are also useful for studying the chiral vortical effect (CVE) \cite{ss} and the local polarization effect (LPE) \cite{lw,bpr,glpww}.
The main characteristics of the quarks in the quark-gluon plasma are acquired by considering them as massless particles. However, this is an approximation; therefore, the mass corrections should be studied. The Wigner function formalism of massive spin-$1/2$ fermions yields several different covariant kinetic equations \cite{wsswr,hhy}, in contrast to the massless case. This is due to the fact that for massive fermions there is more than one way of eliminating the irrelevant set of field equations derived from the quantum kinetic equation (QKE).
Although the four-dimensional (4D) approach has some advantages, like being manifestly Lorentz invariant, a non-relativistic or equal-time formalism is needed if one would like to solve a transport equation starting from an initial distribution function provided by the field equations \cite{bbgr,zh2,oh}.
Integrating the 4D transport equation over the zeroth component of momentum is the customary method of constructing the related three-dimensional (3D) transport equation \cite{zh1,oh}.
The Wigner function of spin-$1/2$ fermions coupled to gauge fields is constructed to be invariant under the gauge transformations which leave the Dirac equation intact. The gauge invariant Wigner function satisfies the QKE, which depends explicitly on the field strength \cite{egv,vge}. When one deals with a relativistic plasma as a fluid, the vorticity effects should also be taken into account. Although the magnetic and vortical effects are similar, the QKE does not explicitly depend on the rotational properties of the fluid, in contrast to the electromagnetic interactions. Hence, noninertial effects like the Coriolis force are absent. To overcome these difficulties we proposed modifying the QKE by means of the enthalpy current, for either massless or massive fermions \cite{dk,dk-m}. We obtained the relativistic transport equations and studied the 3D theories which they generate. The chiral formulation was successful in generating a consistent 3D CKT which does not depend explicitly on the position vector and also addresses noninertial effects like the Coriolis force correctly. The modification of the QKE also gives satisfactory results for massive fermions. However, the modified QKE did not follow from an action, in contrast to the electromagnetic part. Here, we present an underlying Lagrangian which naturally yields the aforementioned modification of the QKE.
The 3D CKT with the Coriolis force was first presented in \cite{SteYin} by making use of the resemblance between the magnetic field and the angular velocity. Then this formulation was derived in \cite{husad} from first principles in a rotating coordinate frame. In \cite{husad} it was also shown that the spatial coordinate dependence appearing in some CKTs can be removed by appropriate phase space coordinate transformations.
In \cite{lgmh} the CKT in curved spacetime has been derived from the QKE. There it was shown that the noninertial effects and the CVE arise when the observer is in the comoving frame of the fluid \cite{lgmh}. In this case our modification terms vanish identically, as we have already discussed in detail in \cite{dk2019}. However, in the flat spacetime formalism, to generate the noninertial effects one should consider the modified QKE. There is another issue which should be clarified: obviously the covariant formalism without the modification leads to the CVE correctly \cite{glpww}, so our modification may seem to overcount the CVE. However, this is not the case. Within the modified QKE formalism of \cite{dk} in 4D, the first order solutions of the chiral fields are different from the ordinary ones, and they generate the CVE correctly, as was verified explicitly in \cite{dk}. When one integrates the 4D CKE over the zeroth component of momentum to get the nonrelativistic CKT, the modification terms turn out to be essential in acquiring the CVE correctly. In fact, as will be explained at the end of section \ref{sec-cke}, the consistent 3D CKT which results from the 4D formalism would not generate the CVE correctly without the modification terms.
We deal with relativistic plasma as a fluid whose constituents are fermions obeying the Dirac equation.
We will introduce the vector field $\eta_\mu (x)$, which is minimally coupled to the Dirac fields like the electromagnetic vector potential $A_\mu(x).$
However, it is not a $U(1)$ gauge field. The equations of motion of the new vector potential follow from an action which describes fluid dynamics in terms of a self-interacting scalar field. Although there are some crucial differences, the action which we consider was mainly introduced in \cite{bb}. It is invariant under a gauge transformation which is not the customary $U(1)$ symmetry.
The Dirac equation coupled to the new vector potential is also invariant under this gauge transformation when the Dirac field is transformed appropriately.
To derive the equation of motion of the Wigner function one employs the equation satisfied by the Dirac field coupled to the vector potential, together with its gauge invariance \cite{vge}. As far as the Dirac Lagrangian is concerned, the forms of the gauge transformations related to $\eta_\mu (x)$ and $A_\mu(x)$ are similar. Hence, we generalize the procedure of \cite{vge} to derive the QKE satisfied by the Wigner function when both the $\eta_\mu (x)$ and $A_\mu(x)$ gauge fields are present. Then, we decompose the Wigner function in the Clifford algebra basis and obtain the equations satisfied by the fields which are the coefficients of the Clifford algebra generators. These equations depend explicitly on the field strengths. Some of these equations can be eliminated and the rest can be used to obtain kinetic transport equations (KTE). In general, until the KTE are acquired, the electromagnetic field strength is defined in terms of the vector potentials. Once the KTE are established, it is expressed through the electric and magnetic fields which satisfy the Maxwell equations. However, as will be discussed, for the field strength related to the fluid one may proceed in two different ways. The first option is to
require that the field strength related to fluid is expressed in terms of vorticity and fluid velocity before deriving KTE. The second option is to establish KTE first and then express the fluid related field strength in terms of vorticity and fluid velocity. When the former method is adopted we find the KTE proposed in \cite{dk,dk-m}. The latter method which is similar to the electromagnetic case will be the subject of this work. In this case the massless and massive KTE can be obtained by generalizing the kinetic equations established, respectively, in \cite{hpy1,hsjlz} and \cite{hhy}.
We will acquire 3D kinetic equations by integrating the covariant equations over the zeroth component of four-momentum. For chiral fermions we will show that a novel 3D CKT is accomplished in the presence of both an external electromagnetic field and fluid vorticity which does not depend explicitly on the spatial coordinates. Moreover, this theory possesses the Coriolis force term and is consistent with the chiral anomaly. It generates the chiral magnetic and vortical effects correctly. When one deals with massive fermions, the kinetic equations of the vector and axial-vector components of the Wigner function depend on the spin four-vector $a^\mu$ \cite{hhy}. We provide mass corrections to the 3D chiral effects by letting $a^\mu$ be given by the free Dirac equation for small mass values.
We start with presenting the action which is considered to establish transport equations of a fermionic fluid interacting with electromagnetic fields.
In section \ref{sec-fluid} we focus on the part of action which we claim to govern the dynamical evolution of fermionic fluid.
In section \ref{sec-QKE} we present an outline of the derivation of the QKE satisfied by the Wigner function of Dirac fields coupled to two independent vector fields. Section \ref{sec-cke} is devoted to the study of the kinetic equations of chiral fermions in external electromagnetic fields, taking into account the noninertial properties of the fluid. The relativistic and the 3D chiral transport equations are established. The massive fermions are studied in section \ref{sec-mass}, where the relativistic equations are integrated over the zeroth component of momentum in the small mass limit by approximating the spin four-vector adequately. Discussions of our results are presented in section \ref{sec-conc}.
\section{Action}
\label{sec-Act}
To establish quantum kinetic equation for fermionic fluids in the presence of electromagnetic fields we propose the action
\begin{equation}
\label{Stot}
S= S_\ssD +S_Q+S_{\ssE\ssM}+S_\zeta +S_\phi .
\end{equation}
The first term is the Dirac action,
\begin{equation}
S_\ssD = \frac{1}{2}\int d^4x\, \bar{\psi} \left(i\hbar \gamma^\mu \partial_\mu -m\right)\psi .
\end{equation}
We consider two vector potentials coupled to Dirac fields. One of them is the $U(1)$ gauge field $A_\mu,$
\begin{eqnarray}
S_\ssQ & =& -Q\int d^4x \ \bar{\psi} \gamma^\mu A_\mu \psi ,
\end{eqnarray}
whose dynamical equations are generated by
\begin{eqnarray}
S_{\ssE\ssM}&=&-\frac{1}{4}\int d^4x \ F^{\mu \nu} F_{\mu\nu} . \label{SEM}
\end{eqnarray}
$Q$ is the electric charge and
\begin{equation}
F_{\mu\nu}=\partial_{\mu}A_\nu -\partial_{\nu} A_\mu, \label{fmn}
\end{equation}
is the field strength of the $U(1)$ gauge field which is
invariant under the gauge transformation
\begin{equation}
\label{Ag}
A_\mu (x) \rightarrow A_\mu (x) -\partial_\mu \Lambda (x).
\end{equation}
The other one is the real four-vector field $\eta_\alpha,$ whose coupling constant is $\zeta,$
\begin{eqnarray}
\label{szeta}
S_\zeta = \zeta \int d^4x \bar{\psi}\gamma^\alpha\eta_\alpha \psi.
\end{eqnarray}
$\eta_\alpha$ is also coupled to the complex scalar field $\phi,$ as follows,
\begin{equation}
\label{SPhi}
S_\phi =\frac{1}{2} \int d^4x \left[\left(\partial^\alpha \phi -i\eta^\alpha\phi\right)^\star \left(\partial_\alpha \phi -i\eta_\alpha\phi\right)-V\left(\phi^\star \phi\right)\right].
\end{equation}
In the next section we will clarify how the scalar field $\phi$ and the vector field $\eta_\alpha$ represent the fluid.
We work in Minkowski spacetime with $g_{\mu\nu}=\diag (1,-1,-1,-1).$
\section{Fluid}
\label{sec-fluid}
In this section we will describe how the variation of the action
\begin{equation}
\label{sF}
S_\ssF=S_\phi + S_\zeta ,
\end{equation}
with respect to the $\eta_\alpha$ and $\phi$ fields effectively generates the Euler equation of a fluid with a Coriolis force, whose constituent particles are Dirac fermions. { The scalar field $\phi$ is the mean field which represents the fluid. The vector field $\eta_\alpha$ is an auxiliary field which will be fixed as in (\ref{main1}) below; hence it is considered only at the classical level.} Our formulation is mainly inspired by \cite{bb}, where a covariant action was proposed to describe magnetohydrodynamics as a field theory. However, we are interested in expressing the vector field $\eta_\alpha$ in terms of fluid variables like the enthalpy current. Contrary to \cite{bb}, in our formulation hydrodynamical quantities are not considered as variables. They will be related to the independent field variables $\eta_\alpha,\phi$ by other means, as we will discuss. Therefore we do not need the constraints considered in \cite{bb}.
Let us first write the complex scalar field in terms of two real fields:
\begin{equation}
\label{phi}
\phi=\sigma e^{i\varphi}.
\end{equation}
Plugging this definition into the action (\ref{SPhi}) yields
\begin{equation}
\label{Sreal}
S_\phi =\frac{1}{2} \int d^4x \left[\partial^\alpha \sigma \partial_\alpha \sigma+\sigma^2\left(\partial^\alpha \varphi -
\eta^\alpha\right) \left( \partial_\alpha \varphi -\eta_\alpha\right)-V(\sigma^2)\right].
\end{equation}
Observe that it is invariant under the gauge transformation
\begin{equation}
\label{etag}
\eta_\alpha (x)\rightarrow \eta_\alpha (x) -\partial_\alpha \lambda (x),\ \ \ \varphi (x)\rightarrow \varphi (x)-\lambda (x).
\end{equation}
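This invariance can be checked symbolically in a one-dimensional toy setting (our illustration, not part of the original text): the combination $\partial_\alpha \varphi -\eta_\alpha$ appearing in (\ref{Sreal}) is unchanged under (\ref{etag}).

```python
import sympy as sp

# One-dimensional toy model of the gauge transformation (etag):
# eta -> eta - d(lambda)/dx,  varphi -> varphi - lambda.
x = sp.symbols('x')
varphi = sp.Function('varphi')(x)
lam = sp.Function('lambda')(x)
eta = sp.Function('eta')(x)

before = sp.diff(varphi, x) - eta
after = sp.diff(varphi - lam, x) - (eta - sp.diff(lam, x))
# The gauge-invariant combination is unchanged:
assert sp.simplify(after - before) == 0
```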
After expressing $S_\phi $ as in (\ref{Sreal}), variation of $S_\ssF$ with respect to $\varphi$ generates the following equation of motion
\begin{equation}
\label{em2}
\partial_\alpha [\sigma^2\left(\partial^\alpha \varphi -\eta^\alpha\right)] =0.
\end{equation}
On the other hand variation of $S_\ssF$ with respect to the $\eta_\alpha$ yields
\begin{equation}
\label{main0}
\sigma^2\left(\partial^\alpha \varphi -\eta^\alpha\right) -\zeta \bar{\psi}\gamma^\alpha \psi =0.
\end{equation}
{ The mean value of the particle number current density operator of the Dirac particles can be expressed in the form
\begin{equation}
\label{pnc}
j^\alpha =\langle :\!\bar{\hat{\psi}} \gamma^\alpha \hat{\psi }\! :\rangle =\int d^4q\, q^\alpha f(x,q) .
\end{equation}
Here, $\hat{\psi},\ \bar{\hat{\psi}}$ are operators and the colons denote normal ordering. The exact form of the distribution function
$f(x,q)$ can be obtained from the Wigner function satisfying the QKE \cite{Groot}.
One can also describe this system as a fluid. For this purpose let us introduce the fluid four-velocity $u_\alpha\equiv dx_\alpha (\tau)/d\tau ,$ where $\tau$ is the proper time and $x_\alpha (\tau)$ is the worldline of a fluid element, so that it satisfies
\begin{equation}
\label{u2}
u^\alpha u_\alpha=1.
\end{equation}
It can be used to decompose the momentum four-vector as $q^\alpha=(u\cdot q)u^\alpha+ q_{\perp}^{\alpha},$ where
$q_{\perp}\cdot u=0.$ Then, due to the momentum integral in (\ref{pnc}), one gets
\begin{equation}
\label{nu1}
j^\alpha = nu^\alpha ,
\end{equation}
where $n$ is the particle number density. In principle, due to the motion of the medium, the particle number current density can have anomalous parts. For example, due to rotation there would be a term which depends linearly on the vorticity of the medium.
However, we ignore the anomalous contributions to the current because we consider only classical fields.
In the classical field (mean field) approximation the quantum field $\hat{\psi}$ can be considered as a collection of wave packets constructed from the solutions of the Dirac equation. Then, when we deal with fluids composed of spin-1/2 particles, a fluid element which contains a large number of particles but is compact enough that they behave homogeneously can be taken to coincide with one of the wave packets.
Therefore, in the mean field approximation we can write
\begin{equation}
\label{nu}
\bar{\psi}\gamma^\alpha \psi \equiv nu^\alpha .
\end{equation}}
Now (\ref{main0}) can equivalently be written as
\begin{equation}
\label{main}
nu^\alpha =\zeta^{-1} \sigma^2\left(\partial^\alpha \varphi -\eta^\alpha\right) ,
\end{equation}
and (\ref{em2}) states that the particle number current density is conserved:
\begin{equation}
\label{em21}
\partial_\alpha \left(nu^\alpha\right) =0.
\end{equation}
In fact, in relativistic fluid dynamics particle number current density without dissipation is given with (\ref{nu1}) \cite{LanLif}.
Equipped with these relations we may now discuss why we consider the action (\ref{Stot}). The kinetic theory of a neutral plasma can be described in terms of relativistic scalar electrodynamics with two
scalar fields $\phi_1$ and $\phi_2,$ which are macroscopic wave functions representing the negative and positive charge carriers \cite{fa-ni,ffnr}:
\begin{equation}
S_{np}=\int d^4x \left[|\partial_\alpha \phi_1 +iQA_\alpha\phi_1|^2 +|\partial_\alpha \phi_2 -iQA_\alpha\phi_2|^2-V\left(\phi_1, \phi_2\right)\right]+S_{\ssE\ssM}.
\end{equation}
If one would like to represent the neutral plasma in terms of one scalar field $\phi$ instead of $\phi_1$ and $\phi_2,$ it cannot couple minimally to $A_\mu.$ Then, it should interact with the electromagnetic fields in a complicated way \cite{bb}. In fact (\ref{main0}) represents this interaction, because the variation of the action (\ref{Stot}) with respect to $A_\mu$ shows that (\ref{pnc}) is the current which describes how the electromagnetic fields will evolve in the plasma:
\begin{equation}
\label{r21}
\partial^\mu F_{\mu\nu}= Q\bar{\psi}\gamma_\nu \psi .
\end{equation}
By inspecting (\ref{main}) we see that the interaction of the fluid with electromagnetic fields is made explicit by setting
\begin{equation}
\label{r22}
\eta^\alpha= \partial^\alpha \varphi -\frac{\zeta}{Q\sigma^2}\partial_\mu F^{\mu\alpha}.
\end{equation}
In fact by inserting (\ref{r21}) and (\ref{r22}) into the action (\ref{sF}) we get
\begin{equation}
\label{sFr}
S_\ssF= \int d^4x \Big\{\frac{1}{2} \left[\partial^\alpha \sigma \partial_\alpha \sigma-V(\sigma^2) \right]
+\frac{\zeta}{Q}(\partial_\mu F^{\mu\alpha})\partial_{\alpha }\varphi-\frac{\zeta^2}{2Q^2\sigma^2} (\partial_\mu F^{\mu\alpha})(\partial^\nu F_{\nu\alpha})
\Big\}.
\end{equation}
This shows that the scalar field components couple to the derivatives of the electromagnetic fields, which would be calculated from the Maxwell equations (\ref{r21}) whose charge and current distributions are due to the charged fermions. Hence, the interactions between the scalar field and the electromagnetic fields are sensitive to how the electric and magnetic fields change spatially and temporally.
Variation of $S_\ssF$ with respect to $\sigma$ leads to
\begin{equation}
\label{em3}
\partial^\alpha\partial_\alpha\sigma+\sigma\left(\partial^\alpha \varphi -
\eta^\alpha\right) \left( \partial_\alpha \varphi -\eta_\alpha\right) -\sigma V^\prime (\sigma^2)=0,
\end{equation}
where $V^\prime$
is the derivative of $V$ with respect to its argument. As in \cite{bb} we assume that the amplitude of $\phi$ varies slowly, so that the first term is negligible compared to the second one. Thus we get
\begin{equation}
\label{eqm2}
\left(\partial^\alpha \varphi -
\eta^\alpha\right) \left( \partial_\alpha \varphi -\eta_\alpha\right) =\nu^2,
\end{equation}
where we defined
$$
\nu^2 \equiv V^\prime (\sigma^2).
$$
Using (\ref{main}) in (\ref{eqm2}) yields
\begin{equation}
n^2\zeta^2=\sigma^4\nu^2,
\end{equation}
so that one can express $n$ as a function of the real scalar field $\sigma$ as
\begin{equation}
\label{ssoz}
n=\frac{\sigma^2 \nu}{\zeta}.
\end{equation}
Plugging (\ref{ssoz}) back into (\ref{main}) leads to
\begin{equation}
\label{main1}
\partial^\alpha \varphi -\eta^\alpha = \nu u^\alpha .
\end{equation}
By taking the derivative of (\ref{eqm2}) and employing (\ref{main1}), we attain
\begin{equation}
\label{2der}
\nu u^\alpha (\partial_\beta \partial_\alpha \varphi -\partial_\beta \eta_\alpha )=\nu \partial_\beta \nu.
\end{equation}
Let us write it as
\begin{equation}
\label{22der}
\nu u^\alpha (\partial_\alpha\partial_\beta \varphi -\partial_\alpha\eta_\beta -W_{\alpha \beta} - w_{\beta \alpha} )=\nu \partial_\beta \nu,
\end{equation}
where we introduced
\begin{equation}
w_{\beta \alpha}=\partial_\beta\eta_\alpha -\partial_\alpha \eta_\beta,
\label{wmn}
\end{equation}
and
$$
W_{\alpha \beta}= \partial_{\alpha} \partial_{\beta } \varphi - \partial_{\beta} \partial_{\alpha } \varphi .
$$
Obviously $W_{\alpha \beta}$ vanishes for ordinary functions, but it can also be different from zero \cite{bb}: $\varphi$ is the phase of the scalar field $\phi,$ (\ref{phi}), so that it is defined up to some functions $\theta (x)=(\theta_1 (x),\theta_2 (x), \cdots)$ satisfying
$$
e^{i\theta_k (x)}=1.
$$
Hence, along some curves $\theta_k (x)=0,$ the mixed partial derivatives of $\varphi$ can fail to be continuous. We can ignore the singular curves and set $W_{\alpha \beta}=0$ without loss of generality. Nevertheless, we can also consider singular cases where the following condition is satisfied,
\begin{equation}
\label{W0}
u^\alpha W_{\alpha \beta}=0.
\end{equation}
Now by taking the derivative of (\ref{main1}),
$$
\partial_\alpha\partial_\beta \varphi -\partial_\alpha\eta_\beta=u_\beta \partial_\alpha \nu +\nu \partial_\alpha u_\beta ,
$$
and using it in (\ref{22der}) we obtain
\begin{equation}
\label{23der}
\nu u^\alpha (u_\beta \partial_\alpha \nu+\nu \partial_\alpha u_\beta - w_{\beta \alpha} )=\nu \partial_\beta \nu .
\end{equation}
As far as $\nu \neq 0,$ it yields
\begin{equation}
\label{24der}
\nu u^\alpha \partial_\alpha u_\beta = \partial_\beta \nu -u^\alpha u_\beta \partial_\alpha \nu+u^\alpha w_{\beta \alpha} .
\end{equation}
The acceleration, which is defined as the derivative of the velocity $u_\alpha$ with respect to the proper time $\tau ,$ can be calculated by making use of (\ref{24der}):
\begin{equation}
\label{25der}
\frac{d u_\beta}{d \tau}= u^\alpha \partial_\alpha u_\beta = \frac{\partial_\beta \nu}{\nu } -u^\alpha u_\beta \frac{\partial_\alpha \nu}{\nu}+
\frac{u^\alpha w_{\beta \alpha} }{\nu}.
\end{equation}
{ Let us compare (\ref{25der}) with the (hydrodynamic) Euler equations
\begin{equation}
\label{euler}
\frac{d u_\beta}{d \tau}= \frac{\partial_\beta P}{\rho +P } -u^\alpha u_\beta \frac{\partial_\alpha P}{\rho +P}-\frac{F_{\beta }}{\rho +P},
\end{equation}
where $\rho$ is the energy density and $P$ is the pressure. They are related to the specific enthalpy $h$ as $nh=\rho +P.$ Here $F_\alpha$ is an external force,
which can be the gravitational force, the electromagnetic force, or a ``fictitious'' force such as the centrifugal or Coriolis force \cite{ReZo}.
Except their last terms, (\ref{25der}) and (\ref{euler}) are identical if
\begin{equation}
\label{equ}
\frac{dP}{\rho +P}=\frac{d\nu}{\nu}.
\end{equation}
{From the first law of thermodynamics one knows that
\begin{equation}
\label{the}
\frac{d\rho}{dn}=\frac{\rho +P}{n},
\end{equation}
for a fluid of only one kind of particle whose total number is conserved, in the absence of heat exchange. }
Suppose that (\ref{equ}) is satisfied, then by making use of (\ref{the}) we have
\begin{equation}
\label{nls}
\frac{d (\rho +P)}{dn}=\frac{\rho +P}{n}+\frac{dP}{d\nu}\frac{d\nu}{dn}=\frac{\rho +P}{n}+\frac{\rho +P}{\nu} \frac{d\nu}{dn}.
\end{equation}
It can be expressed as
\begin{equation}
\label{nls1}
\frac{d (\rho +P)}{\rho +P}=\frac{dn}{n}+ \frac{d\nu}{\nu}.
\end{equation}
By integrating it
\begin{equation}
\label{nuent}
\nu=\xi\frac{\rho +P}{n}=\xi h
\end{equation}
follows, where $\xi$ is a positive constant of integration.
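As a cross-check (ours, not part of the original derivation), one can verify symbolically that the combination $\nu n/(\rho+P)$ is constant once (\ref{equ}) and (\ref{the}) are imposed, which is precisely the content of (\ref{nuent}). A minimal sketch using sympy, treating $\rho,\ P,\ \nu$ as functions of $n$:

```python
import sympy as sp

# Treat rho, P, nu as functions of the particle number density n.
n = sp.symbols('n', positive=True)
rho = sp.Function('rho', positive=True)(n)
P = sp.Function('P')(n)
nu = sp.Function('nu', positive=True)(n)

# Impose (equ): dP/(rho+P) = dnu/nu   and   (the): drho/dn = (rho+P)/n.
subs = {sp.Derivative(nu, n): nu * sp.Derivative(P, n) / (rho + P),
        sp.Derivative(rho, n): (rho + P) / n}

# Then nu*n/(rho+P) has vanishing derivative, i.e. nu = xi*(rho+P)/n = xi*h.
expr = sp.diff(nu * n / (rho + P), n).subs(subs)
assert sp.simplify(expr) == 0
```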
We need to express $\nu$ analytically in terms of the fluid's parameters by respecting the equality (\ref{nuent}). Let us consider the equation of state $P=(\gamma -1)\rho ,$ where $\gamma>1$ is the adiabatic index. This relation can be taken as the definition of
an ideal fluid \cite{ReZo} and it is consistent with the equation of state resulting from the field equations by choosing $V(\sigma^2)$ adequately \cite{bb}. Then, we can write
\begin{equation}
\label{nhr}
h= \gamma \frac{\rho }{n}.
\end{equation}
By plugging it into (\ref{nuent}) we get
\begin{equation}
\nu =\xi^\prime \frac{\rho }{n},
\end{equation}
with $\xi^\prime =\xi \gamma.$ In the mean field approximation we identified a fluid element with a wave packet. Hence the energy per particle, $\rho/n,$ can be parametrized in terms of the momentum $p^\mu$ of the wave packet center as
\begin{equation}
\frac{\rho }{n}= u\cdot p.
\end{equation}
Therefore, we write
\begin{equation}
\label{nup}
\nu =\xi^\prime u\cdot p.
\end{equation}
From (\ref{main1}) by setting $W_{\alpha \beta}=0$ or employing the condition (\ref{W0}), we get
\begin{equation}
\label{aaa}
u^\alpha w_{ \alpha \beta} =u^\alpha \left[ \partial_\beta(\nu u_\alpha)- \partial_\alpha(\nu u_\beta)\right] \equiv \xi^\prime {\cal K}_\beta.
\end{equation}
Let us demonstrate that ${\cal K}_\beta \equiv u^\alpha\left[ \partial_\beta(u\cdot p\, u_\alpha)- \partial_\alpha(u\cdot p\, u_\beta)\right]$ is the relativistic generalization of the Coriolis force per particle.
We consider vanishing linear acceleration
\begin{equation}
\label{vanac}
u^\alpha \partial_\alpha u_\beta=0,
\end{equation}
so that the fluid velocity can be taken to satisfy
\begin{equation}
\partial_\alpha u_\beta =-\partial_\beta u_\alpha .
\end{equation}
Indeed, antisymmetry is consistent with (\ref{vanac}), since $u^\alpha \partial_\alpha u_\beta =-u^\alpha \partial_\beta u_\alpha = -\frac{1}{2}\partial_\beta (u^\alpha u_\alpha)=0.$ Thus we write
$$
{\cal K}_\beta = p^\alpha\partial_\beta u_\alpha .
$$
Fluid vorticity four-vector is defined as
\begin{equation}
\omega^\mu =\frac{1}{2} \epsilon^{ \mu \nu \alpha \beta}\Omega_{\alpha \beta}u_\nu,
\end{equation}
where
\begin{equation}
\Omega_{\alpha \beta}= \frac{1}{2}(\partial_\alpha u_\beta -\partial_\beta u_\alpha ) ,
\end{equation}
is the kinematic vorticity tensor.
Therefore we get
$$
{\cal K}_\beta =\epsilon_{ \beta \alpha \mu \nu}u^\mu \omega^\nu p^\alpha.
$$
In the frame $u^\alpha=(1,\bm 0),\ \omega^\alpha =(0, \bm \omega),$ it becomes ${\cal K}_\beta =(0, \bm{{\cal K}})$ with
$$
\bm{{\cal K}}=\bm p \times \bm \omega.
$$
Hence, we conclude that the last terms of (\ref{25der}) and (\ref{euler}) are identical, where
$$F_\alpha = n\gamma\, {\cal K}_\alpha ,$$
is a relativistic extension of the Coriolis force in a rotating coordinate frame.
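This rest-frame reduction can be verified symbolically. The following sketch (our illustration, with arbitrary symbols for $\bm p$ and $\bm \omega$) contracts ${\cal K}_\beta =\epsilon_{ \beta \alpha \mu \nu}u^\mu \omega^\nu p^\alpha$ with $u^\mu=(1,\bm 0)$ and $\omega^\nu =(0,\bm \omega),$ using the convention $\epsilon_{0123}=+1$:

```python
import sympy as sp
from sympy import LeviCivita

E, px, py, pz = sp.symbols('E p_x p_y p_z')
wx, wy, wz = sp.symbols('omega_x omega_y omega_z')
u = [1, 0, 0, 0]          # comoving frame: u^mu = (1, 0)
p = [E, px, py, pz]       # p^alpha
w = [0, wx, wy, wz]       # omega^nu = (0, omega)

# K_beta = eps_{beta alpha mu nu} u^mu omega^nu p^alpha, with eps_{0123} = +1
K = [sp.expand(sum(LeviCivita(b, a, m, n) * p[a] * u[m] * w[n]
                   for a in range(4) for m in range(4) for n in range(4)))
     for b in range(4)]

cross = [py*wz - pz*wy, pz*wx - px*wz, px*wy - py*wx]  # (p x omega)_i
assert K[0] == 0
assert all(sp.simplify(K[i+1] - cross[i]) == 0 for i in range(3))
```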
From (\ref{aaa}) and (\ref{nup}) we have
\begin{equation}
\label{wab}
w_{ \alpha \beta} =\xi^\prime \left[ (\partial_\beta u\cdot p) u_\alpha-(\partial_\alpha u\cdot p) u_\beta \right] +\kappa\, u\cdot p \left(\partial_\beta u_\alpha- \partial_\alpha u_\beta \right).
\end{equation}
On the other hand, due to the vanishing of linear acceleration, (\ref{vanac}), we observe that
$$
u^\alpha \left(\partial_\beta u_\alpha- \partial_\alpha u_\beta \right)=0.
$$
Hence, (\ref{aaa}) is satisfied for an arbitrary constant $\kappa,$ which can even be zero.
By introducing
\begin{eqnarray}
\label{wmuC}
w^{\mu\nu}_\ssC&= &(\partial^\mu u^\alpha ) p_\alpha u^\nu - (\partial^\nu u^\alpha ) p_\alpha u^\mu,
\end{eqnarray}
$w_{\mu\nu}$ can be expressed as
\begin{equation}
\label{wab1}
w^{\mu\nu}=\xi^\prime w^{\mu\nu}_\ssC +\kappa (u\cdot p) \Omega^{\mu\nu}.
\end{equation}
It is worth noting that $\xi$ and $\kappa$ are arbitrary constants, and $w^{\mu\nu}$ is the circulation (vorticity) tensor for $ \kappa=2\gamma $
and $ \xi=1,$ i.e. $\xi^\prime =\gamma ,$ \cite{ReZo}.
Therefore, we come to the conclusion that the scalar field $\phi$ and the vector field $\eta_\alpha$ represent the fluid composed of the Dirac particles. Moreover, the field strength of $\eta_\alpha$ is given by (\ref{wab1}) when the equations of motion resulting from the variation of $S_\ssF$ with respect to $\phi$ and $\eta_\alpha$ are satisfied.}
\section{Quantum Kinetic Equation}
\label{sec-QKE}
Let us return to the initial action (\ref{Stot}) without imposing the equations of motion derived from (\ref{SEM}) and (\ref{sF}). Now, we would like to examine the action of Dirac field coupled to the vector fields,
\begin{equation}
\label{DAE}
S_\psi\equiv S_\ssD +S_Q+S_\zeta = \frac{1}{2}\int d^4x\, \bar{\psi} [ \gamma^\mu (i\hbar \partial_\mu - \zeta \eta_\mu -QA_\mu ) -m] \psi .
\end{equation}
It generates the Dirac equation
\begin{equation}
\label{deq}
[ \gamma^\mu (i\hbar \partial_\mu - \zeta \eta_\mu -QA_\mu ) -m] \psi =0,
\end{equation}
which is invariant under the gauge transformations (\ref{Ag}) and (\ref{etag}), when the spinor field transforms as
\begin{eqnarray}
\psi (x)\rightarrow e^{i (\zeta\lambda (x) +Q\Lambda (x) )/ \hbar}\psi (x). \label{G1}
\end{eqnarray}
The Wigner operator is defined by
\begin{equation}
\label{wo1}
\hat{W}(x,p) = \int d^4y\, e^{-ip\cdot y/\hbar} \bar{\psi}(x) e^{\frac{1}{2}y\cdot\partial^\dagger}\otimes e^{-\frac{1}{2}y\cdot\partial} \psi(x).
\end{equation}
Here, $\psi (x)$ and $\bar{\psi} (x)$ are operators and $\otimes$ represents the tensor product.
The Wigner function is defined as the ensemble average of the normal ordered Wigner operator:
\begin{equation}
\label{wfun}
W(x,p)=\langle :\!\hat{W}(x,p)\!: \rangle .
\end{equation}
To derive the kinetic equation satisfied by the Wigner function we proceed as it was done in \cite{vge}:
The Wigner operator defined in (\ref{wo1}) fails to be invariant under the gauge transformations (\ref{Ag}), (\ref{etag}) and (\ref{G1}). To define the gauge invariant Wigner operator one introduces the gauge link
\begin{equation}
\label{link}
U(A,\eta; x_1,x_2)\equiv \exp \left[-iQ\gamma^\mu \int_0^1 ds A_\mu (x_2+sy)\right] \exp \left[-i \zeta \gamma^\mu \int_0^1 ds \eta_\mu (x_2+sy)\right],
\end{equation}
where $x_1^\mu\equiv x^\mu+y^\mu /2,$ $x_2^\mu\equiv x^\mu-y^\mu /2,$ and insert it into (\ref{wo1}):
\begin{equation}
\label{wo2}
\hat{W}(x,p)=\int d^4y\, e^{-ip\cdot y/\hbar} \bar{\psi}(x_1) U(A,\eta; x_1,x_2) \otimes \psi (x_2).
\end{equation}
By making use of the Dirac equation (\ref{deq}), one can show that the Wigner operator, (\ref{wo2}), satisfies
\begin{eqnarray}
[ \gamma \cdot (p-\frac{1}{2}i\hbar \partial ) -m ] \hat{W}(x,p)=-i\hbar\partial_p^\mu
\int d^4y\, e^{-ip\cdot y/\hbar} \bar{\psi}(x_1) \otimes
\nonumber\\
\Big[ \int_{0}^{1} ds (1-s)
\Big(QF_{\mu\nu}(x+sy-y/2)+ \zeta w_{\mu\nu} (x+sy-y/2) \Big) \label{qeq}\\
U(A,\eta; x_1,x_2)\Big]\gamma_\nu\psi (x_2), \nonumber
\end{eqnarray}
where $\partial_p^\mu \equiv \partial / \partial p_\mu .$ $F_{\mu\nu}$ and $w_{\mu\nu}$ are defined by (\ref{fmn}) and (\ref{wmn}). We consider the mean field approximation, so that the field strengths $F_{\mu\nu},\ w_{\mu\nu} $ are c-valued fields. Following \cite{vge} we write
$$
f(x+sy-y/2) =e^{(s-1/2)y\cdot\partial }f(x)
$$
and employ the relation
$$
\int d^4y\, e^{-ip\cdot y/\hbar}f(y)g(y)=f(i\hbar \partial_p) \int d^4y\, e^{-ip\cdot y/\hbar}g(y),
$$
to express the right-hand-side of (\ref{qeq}) as
\begin{eqnarray}
&i\hbar \int_{0}^{1} ds (s-1)e^{(s-1/2)i\hbar \partial_p\cdot \partial}
\Big(QF_{\mu\nu}(x)+ \zeta w_{\mu\nu} (x) \Big)\partial_p^\mu
\int d^4y\, e^{-ip\cdot y/\hbar} \bar{\psi}(x_1) \otimes U(A,\eta; x_1,x_2)\gamma_\nu\psi (x_2) .\nonumber
\end{eqnarray}
One expands the exponential as a power series and performs the $s$ integration. Then,
by taking the ensemble average of (\ref{qeq}),
the quantum kinetic equation satisfied by the Wigner function is established as
\begin{equation}
\left[\gamma \cdot \left(\pi+ \frac{i\hbar}{2} D \right)-m \right] W(x,p) = 0,
\label{qke}
\end{equation}
with
\begin{eqnarray}
D^{\mu} &\equiv & \partial^{\mu}-j_{0}(\Delta)\left[ QF^{\mu\nu} + \zeta w^{\mu\nu} \right] \partial_{p \nu} , \label{Df}\nonumber\\
\pi^{\mu} &\equiv &p^{\mu}-\frac{\hbar}{2} j_{1}(\Delta) \left[ Q F^{\mu\nu} +\zeta w^{\mu\nu} \right] \partial_{p \nu} .\label{Pf} \nonumber
\end{eqnarray}
\(j_{0}(x)\) and \(j_{1}(x)\)
are spherical Bessel functions of \(\Delta \equiv \frac{\hbar}{2} \partial_{p} \cdot \partial_{x}\). The space-time derivative \(\partial_{\mu}\) contained in \(\Delta\) acts on \(\left[ Q F^{\mu\nu} + \zeta w^{\mu\nu} \right] ,\) but not on the Wigner function. Conversely, $ \partial_{p}^{\nu}$ acts on the Wigner function, but not on \(\left[ Q F^{\mu\nu} +\zeta w^{\mu\nu} \right] .\)
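The operator substitution underlying the derivation above, namely replacing $y$ by $i\hbar\partial_p$ under the Fourier kernel so that the field strengths can be pulled out of the $y$ integral, can be checked for the monomial $f(y)=y$ (a minimal illustration of ours):

```python
import sympy as sp

p, y, hbar = sp.symbols('p y hbar', real=True)
kernel = sp.exp(-sp.I * p * y / hbar)

# i*hbar*d/dp acting on the Fourier kernel reproduces multiplication by y,
# which is the identity behind f(i*hbar*d_p) acting outside the integral.
lhs = sp.I * hbar * sp.diff(kernel, p)
assert sp.simplify(lhs - y * kernel) == 0
```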
The Wigner function can be decomposed by means of the 16 generators of the Clifford algebra
\begin{equation}
W=\frac{1}{4}\left(\mathcal{F}+i \gamma^{5} \mathcal{P}+\gamma^{\mu} \mathcal{V}_{\mu}+\gamma^{5} \gamma^{\mu} \mathcal{A}_{\mu}+\frac{1}{2} \sigma^{\mu \nu} \mathcal{S}_{\mu \nu}\right),
\label{wigner}
\end{equation}
where the coefficients $\mathcal{C}\equiv \{\mathcal{F},\mathcal{P},\mathcal{V}_{\mu},\mathcal{A}_{\mu},\mathcal{S}_{\mu \nu}\},$ respectively, are the scalar, pseudoscalar, vector, axial-vector, and antisymmetric tensor fields.
We expand them in powers of the Planck constant and keep the leading and next-to-leading order terms in $\hbar.$ Hence in (\ref{qke})
one sets $\pi_\mu=p_\mu$ and substitutes $D_\mu$ with
\begin{eqnarray}
\tilde{\nabla}^\nu &\equiv & \partial^{\nu}-\left[ QF^{\nu\beta} +\zeta w^{\nu\beta} \right] \partial_{p \beta} .\end{eqnarray}
Plugging (\ref{wigner}) into (\ref{qke}) yields complex-valued equations
whose real parts are
\begin{eqnarray}
p\cdot \mathcal{V}-m \mathcal{F} =0, \label{real1} \\
{p_{\mu} \mathcal{F}-\frac{\hbar}{2} \tilde{\nabla}^{\nu} \mathcal{S}_{\nu \mu}-m \mathcal{V}_{\mu}=0}, \label{real2} \\
{-\frac{\hbar}{2} \tilde{\nabla}_{\mu} \mathcal{P}+\frac{1}{2} \epsilon_{\mu \nu \alpha \beta} p^{\nu} \mathcal{S}^{\alpha \beta}+m \mathcal{A}_{\mu}=0}, \label{real3} \\
{\frac{\hbar}{2} \tilde{\nabla}_{[\mu} \mathcal{V}_{\nu]}-\epsilon_{\mu \nu \alpha \beta} p^{\alpha} \mathcal{A}^{\beta}-m \mathcal{S}_{\mu \nu}=0}, \label{real4} \\
\frac{\hbar}{2} \tilde{\nabla} \cdot \mathcal{A}+m \mathcal{P} =0,
\label{real5}
\end{eqnarray}
and the imaginary parts are
\begin{eqnarray}
{\hbar \tilde{\nabla} \cdot \mathcal{V}=0}, \label{imag1} \\
{p\cdot \mathcal{A}=0}, \label{imag2} \\
{\frac{\hbar}{2} \tilde{\nabla}_{\mu} \mathcal{F}+p^{\nu} \mathcal{S}_{\nu \mu}=0}, \label{imag3} \\
{p_{\mu} \mathcal{P}+\frac{\hbar}{4} \epsilon_{\mu \nu \alpha \beta} \tilde{\nabla}^{\nu} \mathcal{S}^{\alpha \beta}=0}, \label{imag4} \\
{p_{[\mu} \mathcal{V}_{\nu]}+\frac{\hbar}{2} \epsilon_{\mu \nu \alpha \beta} \tilde{\nabla}^{\alpha} \mathcal{A}^{\beta}=0}.
\label{imag5}
\end{eqnarray}
To derive the QKE (\ref{qke}), we started from the Dirac equation (\ref{deq}). Then, in getting (\ref{qeq}), we had to introduce $F_{\mu\nu}$ and $w_{\mu\nu},$
which are defined in terms of the gauge fields as in (\ref{fmn}) and (\ref{wmn}).
The equations of motion following from the action (\ref{SEM}) give the Maxwell equations, where
one expresses the field strength in terms of the electromagnetic fields $E_\mu, B_\mu$ by
\begin{equation}
\label{fEM}
F^{\mu\nu}=E^\mu u^\nu -E^\nu u^\mu +\epsilon^\mnar u_\alpha B_\rho,
\end{equation}
where $u_\mu$ is the fluid 4-velocity. Similarly, when the equations of motion discussed in section \ref{sec-fluid} are imposed, one deals with the Euler equations (\ref{euler}), where $w_{\mu\nu}$ is expressed in terms of vorticity and {energy per particle} as in (\ref{wab1}).
However, when should the equations of motion following from (\ref{SEM}) and (\ref{sF}) be imposed? We have two options: $i)$ Obtain the kinetic equations which the field components of the Wigner function, $\mathcal{C},$ satisfy and then impose the equations of motion.
$ii)$ Impose the equations of motion from the beginning and then derive the kinetic equations satisfied by the fields $\mathcal{C}.$ Our previous works \cite{dk,dk-m} are consistent with the latter option for $\eta_\mu.$ Although we kept $F_{\mu\nu}$ as in (\ref{fmn}), $w_{\mu\nu}$ was expressed in terms of the enthalpy current, (\ref{wab}), and then the kinetic equations satisfied by the fields $\mathcal{C}$ were derived. Here, we adopt the former option for both of them: we use the definition of $w_{\mu\nu}$ in terms of the $\eta_\mu$ fields, (\ref{wmn}), to establish the kinetic equations and then apply the equations of motion yielding (\ref{wab}). Similarly, $F_{\mu\nu}$ is given by (\ref{fmn}) until the KTE are derived. Afterwards, we express it in terms of the electromagnetic fields, (\ref{fEM}). Once we choose this method the semiclassical kinetic equations can directly be read from the known ones \cite{hpy1,hpy2,hsjlz,hhy}, by substituting $QF_{\mu\nu}$ with $QF_{\mu\nu}+\zeta w_{\mu\nu},$ as will be discussed in the subsequent sections.
\section{Chiral Kinetic Equations}
\label{sec-cke}
For vanishing mass the equations of $\mathcal{A}_\mu$ and $\mathcal{V}_\mu$ decouple from the rest. They are given by (\ref{real1}),(\ref{real4}),(\ref{real5}) with $m=0$ and (\ref{imag1}),(\ref{imag2}),(\ref{imag5}).
Let us introduce the chiral vector fields
\begin{equation}
{\cal J}^\mu_\sschi = \frac{1}{2} ({\cal V}^\mu + \chi {\cal A}^\mu), \nonumber
\end{equation}
where $\chi =1,$ and $\chi =-1,$ correspond to the right-handed and the left-handed fermions.
They need to satisfy
\begin{eqnarray}
p_\mu {\cal J}_\sschi^\mu & = & 0,
\label{1st,0} \\
\tilde{\nabla}^\mu {\cal J}_{\sschi \mu }& = & 0,
\label{2nd,0}\\
\hbar \epsilon_{\mu \nu \alpha \rho} \tilde{\nabla}^\alpha {\cal J}^\rho_\sschi&=& - 2 \chi (p_\mu {\cal J}_{\sschi \nu} - p_\nu {\cal J}_{\sschi\mu}) .
\label{third,0}
\end{eqnarray}
The chiral kinetic equation which results from (\ref{1st,0})-(\ref{third,0}) can be acquired by generalizing the formalism given in \cite{hpy1,hpy2,hsjlz}.
First, one can verify that the solution of (\ref{1st,0}) and (\ref{third,0}) is
\begin{eqnarray}
{\cal J}^{\mu}_{\sschi} &=& p^\mu f_\sschi \delta(p^2) + \frac{\hbar}{2}\chi \epsilon^{\mu \nu \alpha \beta}\left(QF_{\alpha \beta} +\zeta w_{\alpha \beta}\right) p_\nu f^{0}_{\sschi} \delta^\prime (p^2) \nonumber\\
&+& \hbar \chi S^{\mu \nu}_\ssbfn (\tilde{\nabla}_{ \nu} f^{0}_{\sschi}) \delta(p^2),
\label{generalform}
\end{eqnarray}
where $ \delta^\prime (p^2) = - \delta(p^2)/p^2$ and $f_\chi (x,p)\equiv f^{0}_\sschi (x,p)+\hbar f^{1}_\sschi (x,p)$ is the distribution function.
$n^\mu$ is an arbitrary vector satisfying $n^2=1$ and
$$
S^{\mu \nu}_\ssbfn=
\frac{1}{ 2 n \cdot p} \epsilon^{\mu \nu \rho \sigma} p_\rho n_\sigma .
$$
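As a quick consistency remark (our observation, not part of the original derivation), contracting (\ref{generalform}) with $p_\mu$ gives zero term by term,
$$
p_\mu {\cal J}^{\mu}_{\sschi} = p^2 f_\sschi \delta(p^2) + \frac{\hbar}{2}\chi\, \epsilon^{\mu \nu \alpha \beta} p_\mu p_\nu \left(QF_{\alpha \beta} +\zeta w_{\alpha \beta}\right) f^{0}_{\sschi} \delta^\prime (p^2) + \frac{\hbar \chi}{2 n\cdot p}\, \epsilon^{\mu \nu \rho \sigma} p_\mu p_\rho n_\sigma (\tilde{\nabla}_{\nu} f^{0}_{\sschi}) \delta(p^2) =0,
$$
because $p^2\delta (p^2)=0$ and the remaining terms vanish by the antisymmetry of $\epsilon^{\mu\nu\alpha\beta}$ under the symmetric contractions $p_\mu p_\nu$ and $p_\mu p_\rho.$ This verifies that (\ref{generalform}) solves (\ref{1st,0}).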
Then, by inserting (\ref{generalform}) into (\ref{2nd,0}) one acquires the chiral kinetic equation:
\begin{eqnarray}
&& \delta\left( p^2 + \hbar \chi Q \frac{n_\mu \tilde{F}^{\ssmn} p_\nu }{n \cdot p}+ \hbar \chi \zeta \frac{n_\mu \tilde{w}^{\ssmn} p_\nu}{n \cdot p} \right)
\{ p \cdot \tilde{\nabla} \nonumber\\
&& + \frac{\hbar \chi Q }{n \cdot p} S^{\mu \nu}_\ssbfn \ F_{\mu\rho}n^\rho \tilde{\nabla}_\nu
- \frac{\hbar \chi}{2n \cdot p} \epsilon^{\mu \nu \lambda \rho } (\partial_\mu n_\nu )p_\lambda \tilde{\nabla}_\rho
\label{CKTn}\\
&&
+\frac{\hbar \chi \zeta }{n \cdot p} S^{\mu \nu}_\ssbfn w_{ \mu\rho} n^\rho\tilde{\nabla}_\nu
-\frac{\hbar \chi }{n \cdot p} S^{\mu \nu}_\ssbfn (\partial_\mu n_\alpha)p^\alpha\tilde{\nabla}_\nu
\} f_\chi =0,\nonumber
\end{eqnarray}
where $\tilde{F}_{\ssmn}=\frac{1}{2}\epsilon_{\mu \nu \alpha \rho}F^{\alpha \rho}$ and
$\tilde{w}_{\ssmn}=\frac{1}{2}\epsilon_{\mu \nu \alpha \rho}w^{\alpha \rho},$ are the dual field strengths.
Until now we have $w_{\mu\nu}=\partial_\mu\eta_\nu - \partial_\nu\eta_\mu, $ because $\eta_\mu$ was off-shell.
Now, we impose the equations of motion (\ref{main})-(\ref{em3}), so that $w_{\mu\nu}$ is given by (\ref{wab1}). In the rest frame of massive fermions { the energy per particle is $m.$} Therefore, in the massless limit we set $\kappa=0,$ and write
\begin{equation}
\zeta w^{\mu\nu}|_{m=0}\equiv k w^{\mu\nu}_\ssC =k\left[(\partial^\mu u^\alpha ) p_\alpha u^\nu - (\partial^\nu u^\alpha ) p_\alpha u^\mu\right],
\end{equation}
where $k=-\zeta\kappa $ is an arbitrary constant which will be fixed. Now, (\ref{CKTn}) is the chiral kinetic equation where the vorticity and electromagnetic tensors are treated on the same footing.
\subsection{3D Chiral kinetic theory}
To establish the 3D chiral kinetic theory we write the covariant transport equation (\ref{CKTn}) in the comoving frame by setting $n^\mu=u^\mu,$
\begin{eqnarray}
&& \delta\left( p^2 + \hbar \chi Q \frac{u_\mu \tilde{F}^{\ssmn} p_\nu }{u \cdot p}+k \hbar \chi u_\mu \tilde{w}^{\ssmn}_\ssC p_\nu \right)
\{ p \cdot \tilde{\nabla} \nonumber\\
&& + \frac{\hbar \chi Q }{u \cdot p} S^{\mu \nu}F_{\mu \rho}u^\rho \tilde{\nabla}_\nu
- \frac{\hbar \chi}{u \cdot p} p_\mu \tilde{\Omega}^{\ssmn} \tilde{\nabla}_\nu
\label{cke1}\\
&&
+k \frac{\hbar \chi }{u \cdot p} S^{\mu \nu} w_{\ssC \mu\beta} u^\beta\tilde{\nabla}_\nu
-\frac{\hbar \chi }{u \cdot p} S^{\mu \nu}(\partial_\mu u_\alpha)p^\alpha\tilde{\nabla}_\nu
\} f_\chi =0. \nonumber
\end{eqnarray}
We defined $\tilde{w}_{\ssC \mu \nu }=\frac{1}{2}\epsilon_{\mu \nu \alpha \rho} w^{\alpha \rho}_\ssC$ and
\begin{equation}
\label{smunu}
S^{\mu \nu}=
\frac{1}{ 2 u \cdot p} \epsilon^{\mu \nu \rho \sigma} p_\rho u_\sigma .
\end{equation}
Obviously,
$u_\mu \tilde{w}_\ssC^{\ssmn} =0,$ and, since the linear acceleration vanishes, (\ref{vanac}), we have
$$
w_{\ssC \mu \beta }u^\beta =(\partial_\mu u_\alpha)p^\alpha .
$$
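Explicitly, lowering the indices in $w^{\mu \nu}_\ssC$ and contracting with $u^\beta$ gives
$$
w_{\ssC \mu \beta }u^\beta = (\partial_\mu u_\alpha) p^\alpha\, u^2 - (u\cdot \partial) u_\alpha\, p^\alpha\, u_\mu = (\partial_\mu u_\alpha) p^\alpha ,
$$
where we used $u^2=1$ and the vanishing of the linear acceleration, $(u\cdot \partial) u_\alpha=0.$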
Thus, (\ref{cke1}) can be written as
\begin{equation}
\delta\left( p^2 - \hbar \chi Q\frac{ p \cdot B}{u\cdot p}\right)
\{ p \cdot \tilde{\nabla} + \frac{\hbar \chi Q }{u \cdot p} S^{\mu \nu}E_{\mu} \tilde{\nabla}_\nu
- \frac{\hbar \chi}{u \cdot p} p_\mu \tilde{\Omega}^{\ssmn} \tilde{\nabla}_\nu
+(k-1)\frac{\hbar \chi }{u \cdot p} S^{\mu \nu} \Omega_{\mu \alpha}p^\alpha\tilde{\nabla}_\nu
\} f_\chi =0, \label{cke12}
\end{equation}
where $E_\mu$ and $B_\mu$ are the external electromagnetic fields. The kinetic vorticity tensor $\Omega_{\mu \nu}$ can be expressed as
\begin{equation}
\label{Ommunu}
\Omega_{\mu \nu}=\epsilon_{\mu \nu\alpha \beta }u^\alpha \omega^\beta
\end{equation}
and its dual is ${\tilde{\Omega}}^{\mu\nu}=\omega_\mu u_\nu -u_\mu \omega_\nu . $
To establish a 3D CKT we would like to integrate (\ref{cke12}) over $p_0.$ To perform this integration we decompose
the distribution function into particle $s=1$ and antiparticle $s=-1$ parts,
\begin{equation}
f_{\sschi} (x,p)= \sum_{s=\pm 1} \theta(s u\cdot p) f^{s}_{\sschi} (x,p).
\label{f_fd}
\end{equation}
Moreover, we choose the frame
\begin{equation}
\label{cuo}
u^\alpha=(1,\bm 0)\ \ \mathrm{and}\ \ \omega^\alpha =(0, \bm \omega),
\end{equation}
In this frame the delta function yields the dispersion relations
\begin{equation}
{\cal E}^\sschi_s=s|\bm p|\left( 1 -s\hbar \chi Q\frac{\bm B \cdot \bm p}{2|\bm p|^3}\right).
\end{equation}
Let us also introduce the canonical velocity
\begin{equation}
\label{canvel}
\bm v^\sschi_s\equiv \frac{\partial {\cal E}^\sschi_s}{\partial \bm p}=s \hat{\mathbf p}(1+s\hbar \chi Q \frac{\bm B \cdot \bm p}{|\bm p|^3})- \hbar \chi Q \frac{\bm B}{2|\bm p|^2}.
\end{equation}
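Note that, because $s^2=1,$ the dispersion relation is equivalently ${\cal E}^\sschi_s= s|\bm p| -\hbar \chi Q\, \bm B \cdot \bm p /2|\bm p|^2,$ so that
$$
\frac{\partial {\cal E}^\sschi_s}{\partial \bm p}= s\hat{\mathbf p} - \hbar \chi Q \frac{\bm B}{2|\bm p|^2} + \hbar \chi Q \frac{(\bm B \cdot \bm p)\, \bm p}{|\bm p|^4},
$$
which reproduces (\ref{canvel}) term by term.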
As we show in appendix \ref{appe}, integrating (\ref{cke12}) over $p_0$ leads to the transport equation
\begin{equation}
\Big( \sqrt{\eta}^{\, \sschi }_s \frac{\partial }{\partial t} + (\sqrt{\eta} \dot{{\bm x}})^\sschi_s \cdot \frac{\partial }{\partial \bm{x}} + (\sqrt{\eta} \dot{\bm p})^\sschi_s \cdot\frac{\partial }{\partial \bm{p}}
+\hbar s\chi Q\frac{\bm \omega \cdot \bm E}{|\bm p|^2} -2 \hbar s\chi Q\frac{\bm p \cdot \bm\omega\, \bm p \cdot \bm E}{|\bm p|^4}\Big) f^s_{\sschi} (t,\bm x,\bm p)=0, \label{CKT3}
\end{equation}
where
\begin{eqnarray}
\sqrt{\eta}_{\, s}^{\, \sschi } &=& 1 + \hbar s \chi \frac{\bm p\cdot \bm \omega}{|\bm p|^2} +\hbar \chi Q \frac{\bm B \cdot \bm p}{2|\bm p|^3}, \label{3e1} \\
(\sqrt{\eta} \dot{\bm x})^\sschi_s &=&\bm v^\sschi_s + \hbar s\chi (\frac{1}{2}+\frac{k}{2}) \frac{\bm \omega}{|\bm p|} +\hbar s\chi (\frac{1}{2}-\frac{k}{2})\frac{\bm p\cdot \bm \omega}{|\bm p|^3}\bm p \nonumber \\
&& + \hbar \chi Q\frac{\bm B}{2|\bm p|^2} + \hbar \chi Q\frac{\bm E \times \bm p}{2|\bm p|^3} , \label{3e2}\\
(\sqrt{\eta} \dot{\bm p})^\sschi_s&=& sQ\bm E+ k |\bm p|\bm v_s^\sschi \times \bm \omega +s Q \bm v_s^\sschi \times \bm B \nonumber \\
& & - (k-1) \hbar s \chi Q \frac{\bm p\cdot \bm \omega}{|\bm p|^3}\bm p \times \bm B \nonumber \\
&& +\hbar s\chi Q^2\bm E \cdot \bm B \frac{\bm p}{2|\bm p|^3} + \hbar Q\chi \frac{\bm p\cdot \bm \omega}{|\bm p|^2}\bm E .
\label{3e3}
\end{eqnarray}
We ignore the ${\cal O}(\omega^2)$ terms.
The chiral
particle (antiparticle) number and current densities are defined by
\begin{eqnarray}
n^\chi_s& = & \int \frac{d^3p}{(2\pi\hbar)^3} (\sqrt{\eta})_s^\sschi f^{eq,s}_{\sschi},\label{nil} \\
\bm j^\chi_s & = & \int \frac{d^3p}{(2\pi\hbar)^3}(\sqrt{\eta} \dot{\bm x})^\sschi_s f^{eq,s}_{\sschi}+
\bm \nabla \times \int \frac{d^3p}{(2\pi\hbar)^3} \frac{s{\cal E}_s^\sschi \bm p}{2 |\bm p|^3} f^{eq,s}_{\sschi} ,\label{jil}
\end{eqnarray}
where $f_{\sschi}^s \equiv f_{\sschi}^s (t,\bm x,\bm p).$
To establish the continuity equation satisfied by the 4-current density $j^{\sschi\mu}_s\equiv(n^\chi_s , \bm j^\sschi_s),$ let us calculate
\begin{eqnarray}
C&\equiv&\int d^3p\Big\{ \frac{\partial }{\partial t} \left[\sqrt{\eta}_{\, s}^{\, \sschi } f_{\sschi}^s\right]+ \frac{\partial }{\partial \bm{x}}\left[(\sqrt{\eta} \dot{{\bm x}})^\sschi_s f_{\sschi}^s\right] + \frac{\partial }{\partial \bm{p}} \left[(\sqrt{\eta} \dot{\bm p})^\sschi_s f_{\sschi}^s\right] \Big\}\label{c} .
\end{eqnarray}
Observe that (\ref{3e1}) and (\ref{3e2}) do not depend on time and spatial coordinates explicitly, so that we have
\begin{eqnarray}
C &=&\int d^3p\Big\{ \Big( \sqrt{\eta}_{\, s}^{\, \sschi } \frac{\partial }{\partial t} + (\sqrt{\eta} \dot{{\bm x}})^\sschi_s \cdot \frac{\partial }{\partial \bm{x}} + (\sqrt{\eta} \dot{\bm p})^\sschi_s \cdot\frac{\partial }{\partial \bm{p}} \Big) f_{\sschi}^s+\frac{\partial (\sqrt{\eta} \dot{\bm p})^\sschi_s}{\partial \bm{p}} f_{\sschi}^s\Big\}\label{cc} . \nonumber
\end{eqnarray}
By employing the transport equation (\ref{CKT3}) one gets
\begin{eqnarray}
C&=&\int d^3p \left[\frac{\partial (\sqrt{\eta} \dot{\bm p})^\sschi_s}{\partial \bm{p}}
-\hbar s\chi Q\Big(\frac{\bm \omega \cdot \bm E}{|\bm p|^2} -2 \frac{\bm p \cdot \bm\omega\, \bm p \cdot \bm E}{|\bm p|^4}\Big)\right] f_{\sschi}^s .\label{c1}
\end{eqnarray}
The derivative of (\ref{3e3}) leads to
\begin{eqnarray}
\int d^3p\ \frac{\partial (\sqrt{\eta} \dot{\bm p})^\sschi _s}{\partial \bm{p}} f_{\sschi}^s&=&\hbar s\chi Q
\int d^3p \left(\frac{\bm \omega \cdot \bm E}{|\bm p|^2}-2\frac{\bm p \cdot \bm\omega\, \bm p \cdot \bm E}{|\bm p|^4} \right)
f_{\sschi}^s \nonumber \\
&&+ 2\pi \hbar Q^2\chi \bm E\cdot \bm B\int d^3p \delta (\bm p) f_{\sschi}^s\label{pdot} .
\end{eqnarray}
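The delta function in (\ref{pdot}) originates from the Berry monopole located at $\bm p =\bm 0$: the last term of (\ref{3e3}) is proportional to $\bm p /2|\bm p|^3,$ and
$$
\frac{\partial}{\partial \bm p} \cdot \frac{\bm p}{2|\bm p|^3} = 2\pi \delta^3 (\bm p),
$$
so that only the origin of momentum space contributes to this term.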
By substituting the first term in (\ref{c1}) with (\ref{pdot}) one finally obtains
\begin{eqnarray}
C&=& 2\pi\hbar Q^2\chi \bm E\cdot \bm B\int d^3p \delta (\bm p) f^s_{\sschi} (t,\bm x,\bm p) .\label{pdott}
\end{eqnarray}
On the other hand, the last term of (\ref{c}) vanishes because it is a total derivative; thus the continuity equation is deduced:
\begin{equation}
\label{ceqD0}
\frac{\partial n_s^\sschi}{\partial t} + \bm {\nabla} \cdot \bm j_s^\sschi= \frac{\chi Q^2}{(2\pi\hbar)^2} \bm{{ E}}\cdot \bm{B} \ f^{eq,s}_{\sschi} (t,\bm x,\bm 0) .
\end{equation}
Let us introduce the vector and axial-vector currents
\begin{equation}
\bm j_{\ssV}=\bm j_\ssR +\bm j_\ssL, \ \ \ \bm j_{\ssA}=\bm j_\ssR -\bm j_\ssL, \label{rlc3}
\end{equation}
where $\bm j_{\ssR}=\sum_{s=\pm 1}\bm j^{1}_s $ and $\bm j_{\ssL}=\sum_{s=\pm 1}\bm j^{-1}_s .$
Let us choose the equilibrium distribution function as
\begin{equation}
f^{eq,s}_{\sschi} = \frac{1}{ e^{s({\cal E}_s^\sschi - \mu_\sschi )/ T } +1 } \cdot \label{norot}
\end{equation}
Here $\mu_\chi =\mu + \chi \mu_5,$ where $\mu,\mu_5$ are the total and axial chemical potentials.
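Note that at $\bm p =\bm 0,$ where ${\cal E}^\sschi_s=0,$ the equilibrium distribution (\ref{norot}) satisfies
$$
\sum_{s=\pm 1} f^{eq,s}_{\sschi} (t,\bm x,\bm 0)= \frac{1}{e^{-\mu_\sschi / T}+1}+\frac{1}{e^{\mu_\sschi / T}+1}=1 .
$$
Hence, summing (\ref{ceqD0}) over $s$ yields the chiral anomaly in its familiar form,
$$
\frac{\partial n^\sschi}{\partial t} + \bm {\nabla} \cdot \bm j^\sschi= \frac{\chi Q^2}{4\pi^2\hbar^2}\, \bm{E}\cdot \bm{B},
$$
where $n^\sschi=\sum_s n^\sschi_s$ and $\bm j^\sschi=\sum_s \bm j^\sschi_s.$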
By inspecting (\ref{3e2}) one observes that the currents are linear in the magnetic field and vorticity:
\begin{eqnarray}
\bm j_\ssV &= & \xi_\ssB \bm B+ \xi \bm \omega ,\label{3DjV}\\
\bm j_\ssA &= & \xi_{\ssB 5} \bm B+ \xi_5 \bm \omega . \label{3DjA}
\end{eqnarray}
The coefficients of the magnetic field are calculated as
\begin{equation}
\xi_{{\ssB}} =\frac{Q\mu_{\ssbes} }{2 \pi^2 \hbar^2} , \ \
\xi_{\ssBb} = \frac{Q\mu}{2 \pi^2 \hbar^2} .\nonumber
\end{equation}
Thus the magnetic terms in (\ref{3DjV}) and (\ref{3DjA}), respectively, generate the chiral magnetic and chiral separation effects. The vorticity terms in (\ref{3DjV}) and (\ref{3DjA}), respectively, generate the chiral vortical and local polarization effects correctly for
\begin{equation}
\xi = \frac{\mu \mu_{\ssbes}}{\pi^2\hbar^2} , \ \ \xi_{\ssbes} = \frac{T^2}{6 \hbar^2} + \frac{\mu^2+{\mu_{\ssbes}}^2}{2 \pi^2 \hbar^2} .\label{omco}
\end{equation}
However, the coefficients $\xi$ and $\xi_5$ depend on $k.$ One can show that (\ref{omco}) results provided that the condition
\begin{equation}
\frac{2}{3}+\frac{k}{3}=1,
\label{k1}
\end{equation}
is satisfied. This yields $k=1.$ This value of $k$ is in accord with the formalism considered in \cite{dk}.
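The structure of (\ref{omco}) can be traced to the standard fermionic integral
$$
\int_0^\infty dp\ p \left[ \frac{1}{e^{(p-\mu_\sschi)/T}+1}+\frac{1}{e^{(p+\mu_\sschi)/T}+1}\right] = \frac{\mu_\sschi^2}{2}+\frac{\pi^2 T^2}{6} .
$$
Denoting by $\sigma^\omega_\sschi$ the coefficient of $\bm \omega$ in the current of chirality $\chi,$ for $k=1$ each chirality contributes $\sigma^\omega_\sschi=\chi \left( \mu_\sschi^2/4\pi^2+T^2/12\right)/\hbar^2,$ and the combinations $\xi=\sigma^\omega_{\ssR}+\sigma^\omega_{\ssL}$ and $\xi_\ssbes=\sigma^\omega_{\ssR}-\sigma^\omega_{\ssL},$ with $\mu_{\ssR , \ssL}=\mu \pm \mu_\ssbes,$ reproduce (\ref{omco}).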
We do not deal with the equilibrium distribution function for a rotating fluid because in constructing the CKE (\ref{cke1}) we have not considered rotation of the reference frame. However, in the absence of modification terms one should work with the equilibrium distribution function of a rotating fluid in the comoving frame of the fluid \cite{soncollisionckt,hpy2}
\begin{equation}
\label{frc}
f^{eq,s}_{\sschi} =\frac{1}{e^{s\left(u\cdot p -\mu_\chi +\frac{\hbar }{2}S^{\mu\nu}\partial_\mu u_\nu\right)/T} +1 },
\end{equation}
where $S^{\mu\nu}$ is given in (\ref{smunu}). In the frame $u^\alpha=(1,\bm 0)$ and $\omega^\alpha =(0, \bm \omega),$ the distribution function (\ref{frc}) yields
\begin{equation}
f^{eq,s}_{\sschi} = \frac{1}{ e^{s({\cal E}_s^\sschi - \mu_\sschi -\hbar \sschi \hat{\bm p} \cdot \bm \omega /2)/ T } +1 } .
\end{equation}
If one adopts this equilibrium distribution function
the values in (\ref{omco}) are acquired when the condition
\begin{equation}
\frac{5}{6}+\frac{k}{3}=1,
\label{k2}
\end{equation}
is fulfilled, which yields $k=1/2.$ Obviously, for the original CKE where $k=0,$ the condition (\ref{k2}) cannot be satisfied. Therefore, we conclude that without the modification the 3D formalism obtained from (\ref{cke1}) does not generate the CVE correctly. This result is pertinent to the transport equation in Minkowski space with the condition $n_\mu =u_\mu,$ where $u_\mu$ and $\omega_\mu$ are given as in (\ref{cuo}). Obviously, one can choose either $n_\mu$ or $u_\mu$ in a different manner, e.g. $u_\mu=(1, \bm{x}\times \bm{\omega}).$ For each choice one should derive the resulting 3D transport equations by integrating (\ref{CKTn}) over $p_0.$
As mentioned in the Introduction, there also exists a curved spacetime formulation of the chiral kinetic equation \cite{lgmh} where the Coriolis force and CVE are generated correctly without a need for modification. However, this formalism leads to a 3D kinetic theory which depends on $\bm x$ explicitly \cite{dk2019}, in contrast to the 3D kinetic theories obtained here (\ref{3e1})-(\ref{3e3}) or as derived in \cite{dk}.
\section{Kinetic Equations of Massive Fermions}
\label{sec-mass}
The equations (\ref{real1})-(\ref{imag5}) which should be satisfied by the components of Wigner function in Clifford algebra basis are reducible: The field equations (\ref{real1}), (\ref{real4}) and (\ref{real5}) can be employed to express the fields $\mathcal{F},\, \mathcal{P},\, \mathcal{S}_{\mu \nu}$ in terms of the vector and axial-vector fields, $\mathcal{V}_{\mu},\, \mathcal{A}_{\mu}.$ Following the procedure given in \cite{oh} and \cite{hhy} the rest of field equations (\ref{real1})-(\ref{imag5}), can be shown to yield
\begin{eqnarray}
\tilde{\nabla} \cdot \mathcal{V}&=&0, \label{mathcalV}\\
{p\cdot \mathcal{A} }&=&0, \label{I} \\
\left(p^{2}-m^{2}\right) \mathcal{V}_{\mu} &=&-\hbar (Q\tilde{F}_{\mu \nu}+\zeta \tilde{w}_{\mu \nu}) \mathcal{A}^{\nu}, \label{II} \\
p_{\nu} \mathcal{V}_{\mu}-p_{\mu} \mathcal{V}_{\nu}&=&\frac{\hbar}{2} \epsilon_{\mu \nu \rho \sigma} \tilde{\nabla}^{\rho} \mathcal{A}^{\sigma} ,\label{III}\\
\left(p^{2}-m^{2}\right) \mathcal{A}^{\mu}&=&\frac{\hbar}{2} \epsilon^{\mu \nu \rho \sigma} p_{\sigma} \tilde{\nabla}_{\nu} \mathcal{V}_{\rho}, \label{IV} \\
p \cdot \tilde{\nabla} \mathcal{A}^{\mu}+(QF^{\nu \mu}+\zeta w^{\nu \mu}) \mathcal{A}_{\nu}&=&\frac{\hbar}{2} \epsilon^{\mu \nu \rho \sigma}\partial_{\sigma}\left(Q F_{\beta \nu}+\zeta w_{\beta \nu}\right) \partial_{p}^{\beta} \mathcal{V}_{\rho} \label{mathcalA},
\end{eqnarray}
where at most ${\mathcal O}(\hbar)$ terms are taken into account.
We can derive the
semiclassical kinetic equations resulting from (\ref{mathcalV})-(\ref{mathcalA}) by substituting $QF_{\mu \nu}$ with $QF_{\mu \nu}+\zeta w_{\mu \nu}$ in the formalism which has been given in \cite{hhy} for $\zeta =0.$
First one solves (\ref{II}), (\ref{III}) for ${\mathcal V}_\mu$ up to $\hbar$-order, then uses it in (\ref{mathcalV}) and gets
the kinetic equation of the vector field:
\begin{eqnarray}
\begin{aligned}
& \delta\left(p^{2}-m^{2}\right) p \cdot \tN f_{V}+ \hbar \delta\left(p^{2}-m^{2}\right) \Biggl\{ \Big(\frac{n^\alpha}{p \cdot n} S_{a(n)}^{\mu \nu} (Q F_{\mu \alpha}+\zeta w_{\mu \alpha}) + \partial_{\mu} S_{a(n)}^{\mu \nu} \Big) \tN_{\nu} \\
&+S_{a(n)}^{\mu \nu}\partial_{\mu} (QF_{\rho \nu} +\zeta w_{\rho \nu}) \partial_{p}^{\rho} +\frac{ \epsilon^{\mu \nu \alpha \beta}}{2} \Biggr[ \tN_{\mu}\left(\frac{n_{\beta}}{p \cdot n}\right)\left[\tN_{\nu} a_{\alpha}+Q F_{\nu \alpha}+ \zeta w_{\nu \alpha}\right]\\
&+\frac{n_{\beta}}{p \cdot n}\left( \partial_{\mu} \left( QF_{\rho \nu}+ \zeta w_{\rho \nu} \right)\partial_{p}^{\rho} a_{\alpha}+\left[\tN_{\nu} a_{\alpha}-(Q F_{\rho \nu} + \zeta w_{\rho \nu})\partial_{p}^{\rho} a_{\alpha}\right] \tN_{\mu}\right) \Biggr]\Biggr\} f_{A}\\
&-\hbar \frac{\delta^{\prime}\left(p^{2}-m^{2}\right)}{2 (p \cdot n)} \epsilon_{\mu \nu \alpha \rho} n^\nu (Q F^{\alpha \rho}+ \zeta w^{\alpha \rho}) [p \cdot \tN ({a}^{\mu} f_{A})+F^{\sigma \mu} {a}_{\sigma} f_{A}] =0.
\end{aligned}
\label{SKE}
\end{eqnarray}
Here $f_V,f_A$ are vector and axial distribution functions.
\begin{equation}
S^{\mu \nu}_{\ssa \ssbfn}=
\frac{1}{ 2 n \cdot p} \epsilon^{\mu \nu \rho \sigma} a_\rho n_\sigma
\label{S_a}
\end{equation}
is the spin tensor and $a_\mu$ is the spin four-vector which is defined to satisfy the constraint
\begin{equation}
\label{adq}
a\cdot p =p^2-m^2.
\end{equation}
To derive the other kinetic equations which are needed to determine the dynamical degrees of freedom $f_V, f_A, a_\mu,$ one solves (\ref{I}) and (\ref{IV}) for ${\mathcal A}^\mu$ up to ${\mathcal O}(\hbar)$ with the help of Wigner function of the quantized free Dirac fields \cite{hhy}. Plugging this solution into (\ref{mathcalA}) leads to
\begin{eqnarray}
\begin{aligned}
& \delta\left(p^{2}-m^{2}\right) \left(p \cdot \tN \left(a^{\mu} f_{A}\right)+(Q F^{\nu \mu}+\zeta w^{\nu \mu}) a_{\nu} f_{A}\right)\\
&+\hbar \delta\left(p^{2}-m^{2}\right) \Biggl\{ p^{\mu} S_{m(n)}^{\rho \nu}\partial_{\rho} \left(Q F_{\beta \nu} + \zeta w_{\beta \nu}\right) \partial_{p}^{\beta}+p^{\mu} \Big(\partial_{\alpha} S_{m(n)}^{\alpha \nu} +\frac{S_{m(n)}^{\alpha \nu} (Q F_{\alpha \beta}+\zeta w_{\alpha \beta}) n^\beta}{p \cdot n+m} \Big)\tN_{\nu} \\
&+ \frac{\epsilon^{\mu \nu \alpha \beta}}{2(p \cdot n+m)} \Big[\frac{m^2 n_{\beta}+mp_{\beta}}{p \cdot n+m} \left[(Q F_{\alpha \sigma}+\zeta w_ {\alpha \sigma})n^\sigma - \partial_{\alpha }(p \cdot n)\right] + m^2 \left(\partial_{\alpha} n_{\beta}\right) \Big]\tN_{\nu} \Biggr\}f_{V}\\
&- \hbar \frac{\delta^{\prime}\left(p^{2}-m^{2}\right) }{(p \cdot n+m)} \Big(p^{\mu} p_\alpha n_\beta (Q \tilde{F}^{\alpha \beta}+\zeta \tilde{w}^{\alpha \beta})-\left(m^2 n_{\beta}+mp_{\beta}\right) (Q\tilde{F}^{\mu \beta}+\zeta \tilde{w}^{\mu \beta})\Big) p \cdot \tN f_{V} =0,
\end{aligned}
\label{AKE}
\end{eqnarray}
where
\begin{equation}
S^{\mu \nu}_{\ssm \ssbfn}=
\frac{1}{ 2( n \cdot p +m)} \epsilon^{\mu \nu \rho \sigma} p_\rho n_\sigma .
\label{S_p}
\end{equation}
Once we get these kinetic equations, we impose the equations of motion resulting from (\ref{SEM}) and (\ref{sF}). Thus, from now on we deal with $F_{\mu \nu}$ expressed in terms of the electromagnetic fields $E_\mu, B_\mu$ as in (\ref{fEM}) and the field strength $ w^{\mu\nu}$ expressed in terms of the vorticity $\omega_\mu$ and the energy per particle $u\cdot p,$ as in (\ref{wab1}). To keep the discussion general let us introduce
\begin{equation}
\zeta w^{\mu\nu} \equiv \varLambda w^{\mu\nu}_\ssC+2\varepsilon u\cdot p\,\Omega^{\mu\nu} .
\label{zetaw}
\end{equation}
How to fix the constants $\varepsilon$ and $\varLambda$ will be discussed later.
By making use of (\ref{Ommunu}) and the Schouten identity, (\ref{zetaw}) can be written equivalently as
\begin{equation}
\zeta w^{\mu\nu}= \epsilon^{\mu \nu \sigma \rho} \omega_\rho (u\cdot p\, u_\sigma (2 \varepsilon+\varLambda) - \varLambda p_\sigma).
\label{zetaw-son}
\end{equation}
The kinetic equations (\ref{SKE}) and (\ref{AKE}) form the 4D collisionless kinetic theory of massive fermions in the presence of electromagnetic fields and vorticity.
We would like to derive the nonrelativistic kinetic theory arising from this covariant formulation. For this purpose let us extract the scalar kinetic equation stemming from (\ref{AKE}) by projecting it on $n_\mu:$
\begin{eqnarray}
&&\delta (p^2-m^2)n_\mu\left[ p\cdot \tN (a^\mu f_A) -(QF^{\mu \sigma}+\zeta w^{\mu \sigma})a_\sigma f_A\right] +\hbar (p\cdot n +m) \delta (p^2-m^2)\Bigg[(\partial_{\sigma }S^{\sigma\nu}_{\ssm \ssbfn}) \tN_\nu \nonumber\\
&&+\frac{n^\sigma }{p\cdot n +m}(QF_{\mu \sigma}+\zeta w_{\mu \sigma})S^{\mu \nu}_{\ssm \ssbfn} \tN_\nu + S^{\mu \nu}_{\ssm \ssbfn} [\partial_\mu (QF_{\alpha \nu}+\zeta w_{\alpha \nu})]\partial_p^\alpha \Bigg] f_V \label{SAKE} \\
&& +\hbar m\frac{\delta (p^2-m^2)}{ 2( n \cdot p +m)}\epsilon^{\mu \nu \alpha \rho}(mn_\mu +p_\mu)(\partial_\alpha n_\rho)\tN_\nu f_V -\hbar \delta^\prime (p^2-m^2) p_\alpha n_\beta (Q\tilde{F}^{\alpha \beta}+\zeta \tilde{w}^{\alpha \beta})p \cdot \tN f_V =0. \nonumber
\end{eqnarray}
We will show that (\ref{SKE}) and (\ref{SAKE}) can be combined to derive scalar kinetic equations which generate correct dispersion relations of Dirac fermions coupled to electromagnetic fields in a frame rotating with the angular velocity $\bm \omega .$
\subsection{3D Massive kinetic theory}
To establish the 3D kinetic theory arising from the relativistic kinetic equations (\ref{SKE}) and (\ref{SAKE}), one should determine the form of $a_\mu ,$ consistent with the kinetic equations and the constraint (\ref{adq}). However, we are only interested in obtaining small mass corrections to the chiral effects. Hence, let us deal with $a_\mu$
derived from the Wigner function of the free Dirac fields \cite{hhy}. In this case it is defined to satisfy
\begin{eqnarray}
&a\cdot n f_A=(p\cdot n +m)f_A,& \ \ \ a_{\perp \mu} f_A=p_{\perp \mu}f_A-m S_\mu,
\end{eqnarray}
where $S_\mu$ is the spin four-vector of the Dirac wave-function and $a_{\perp}\cdot n= p_{\perp}\cdot n=0 .$
Thus, it can be expressed as
\begin{equation}
a_\mu=mn_\mu+p_\mu -mS_\mu \nonumber .
\end{equation}
Since
$$
S\cdot p =p\cdot n +m,
$$
one finds $a\cdot p= m\, n\cdot p+p^2-m(p\cdot n +m)=p^2-m^2,$ so that (\ref{adq}) is fulfilled.
Moreover, let $S_\mu$ be given as in the massless case:
$$
S^\mu= -\frac{p_{\perp}^{\,\mu}}{p\cdot n} f_A.
$$
Therefore, we set
\begin{equation}
\label{amu}
a_\mu =(1+\frac{m}{p\cdot n}) p_\mu.
\end{equation}
Because we deal with $a_\mu$ at zeroth order in electromagnetic fields and vorticity, we need to keep the terms which are at most linear in $F_{\mu\nu}$ and $w_{\mu\nu}$ in the kinetic equations (\ref{SKE}) and (\ref{SAKE}). For simplicity, we consider the fields satisfying $\partial_{\mu}F_{\nu \sigma}=0$ and $\partial_\mu \omega_\nu =0.$
To establish the 3D kinetic theory by integrating over $p_0,$ we have to work in the comoving frame by setting
\begin{equation}
n_\mu=u_\mu.
\label{cmf}
\end{equation}
Let us work out the kinetic equations following from (\ref{SKE}) and (\ref{SAKE}) in the comoving frame (\ref{cmf}), by keeping at most the terms linear in $E_{\mu},$ $B_{\mu}$ and $\omega_\mu.$
The kinetic equation of the vector field (\ref{SKE}) reads
\begin{eqnarray}
&&\delta (p^2-m^2) p\cdot \tN f_V \nonumber\\
&&+\hbar \delta (p^2-m^2)\Bigg[ (\partial_{\sigma }S^{\sigma\nu}_{\ssa \ssbfu}) +\frac{ S^{\mu \nu}_{\ssa \ssbfu} }{p\cdot u}(Q E_{\mu}+\varLambda \, \Omega_{\mu \alpha} p^\alpha) -\hbar m \ \frac{\epsilon^{\nu \mu \alpha \beta}}{ 2( u \cdot p)^3}u_\beta p_\alpha (\partial_\mu u_\sigma) p^\sigma
\Bigg]\partial_\nu f_A \nonumber\\
&& - \frac{\delta^\prime (p^2-m^2)}{ 2( u \cdot p )^2} (u \cdot p +m) \big[Q B \cdot p+ 2 \varepsilon (u\cdot p) (\omega \cdot p)\big]p\cdot \partial f_A =0,
\label{SKE-son}
\end{eqnarray}
with the spin tensor
\begin{equation}
S^{\mu \nu}_{\ssa \ssbfu}= \frac{1}{ 2 u \cdot p } \epsilon^{\mu \nu \rho \sigma} p_\rho u_\sigma
+\frac{m}{ 2 (u \cdot p)^2 } \epsilon^{\mu \nu \rho \sigma} p_\rho u_\sigma. \nonumber
\end{equation}
The left-hand side of (\ref{SAKE}) can be written with an overall $ (u\cdot p +m) $ factor which is obviously nonvanishing. Hence (\ref{SAKE}) yields the kinetic equation
\begin{eqnarray}
&&\delta (p^2-m^2) \Bigg[p\cdot \tN f_A -\frac{m}{p\cdot u (u\cdot p +m)} \Big( (\partial_{\alpha }u_\beta) p^\alpha p^\beta- p_\mu u_\nu QF^{\mu \nu} \Big) f_A \Bigg] \nonumber\\
&&+\hbar \delta (p^2-m^2) \Bigg[\partial_{\sigma }S^{\sigma\nu}_{\ssm \ssbfu} +\frac{ S^{\mu \nu}_{\ssm \ssbfu} }{p\cdot u +m}(Q E_{\mu}+\varLambda \, \Omega_{\mu \alpha} p^\alpha) +\frac{\epsilon^{\mu \nu \alpha \rho}(\partial_\alpha u_\rho) }{ 2( u \cdot p +m)^2}(m^2 u_\mu +m p_\mu)
\Bigg] \partial_\nu f_V \nonumber\\
&& - \frac{\delta^\prime (p^2-m^2)}{ 2( u \cdot p +m)}\big[Q B \cdot p+ 2 \varepsilon (u\cdot p) (\omega \cdot p)\big] p\cdot \partial f_V=0. \label{SAKE-son1}
\end{eqnarray}
By expanding the denominators for small mass with $m/u\cdot p\ll1,$ and ignoring the $m^2/(u\cdot p)^2$ and higher order terms, (\ref{SAKE-son1}) can be written as
\begin{eqnarray}
&&\delta (p^2-m^2) \Bigg[p\cdot \tN f_A +\frac{m}{(u\cdot p )^2} QE \cdot p f_A \Bigg] \nonumber\\
&&+\hbar \delta (p^2-m^2) \Bigg[ \partial_{\sigma }S^{\sigma\nu}_{\ssm \ssbfu} +\left(1-\frac{m}{u\cdot p}\right)\frac{S^{\mu \nu}_{\ssm \ssbfu} }{p\cdot u }(Q E_{\mu}+\varLambda \,\Omega_{\mu \alpha} p^\alpha)+\frac{ m p_\mu}{ ( u \cdot p)^2} \tilde{\Omega}^{\mu \nu}
\Bigg] \partial_\nu f_V \nonumber\\
&& - \hbar \left(1-\frac{m}{u\cdot p}\right)\frac{\delta^\prime (p^2-m^2)}{ u \cdot p } \big[Q B \cdot p+ 2 \varepsilon (u\cdot p) (\omega \cdot p)\big] p\cdot \partial f_V=0 .\label{SAKE-son}
\end{eqnarray}
In the small mass limit we have
\begin{equation}
S^{\mu \nu}_{\ssm \ssbfu} \approx \frac{1}{ 2 u \cdot p } \epsilon^{\mu \nu \rho \sigma} p_\rho u_\sigma
-\frac{m}{ 2 (u \cdot p)^2 } \epsilon^{\mu \nu \rho \sigma} p_\rho u_\sigma. \nonumber
\end{equation}
Then, by summing and subtracting (\ref{SKE-son}) and (\ref{SAKE-son}) we acquire the kinetic equations
\begin{eqnarray}
&&\delta \left( p^2-m^2 -\chi \hbar \frac{Qp\cdot B}{u\cdot p} -2 \varepsilon \chi \hbar p\cdot \omega\right) \Biggl\{ p\cdot \tN \nonumber\\
&&+\chi \frac{\hbar}{2(u \cdot p)^2} \epsilon^{\mu \nu \alpha \beta} p_\alpha \Big[Qu_\beta E_\mu + u \cdot p\, \Omega_{\mu \beta}+(\varLambda-1) u_\beta \Omega_{\mu \sigma} p^\sigma \Big] \partial_\nu
\label{eom-fR} \\
&&- \chi \frac{\hbar}{4 (u \cdot p)^3} \epsilon^{\mu \nu \alpha \beta} m \Big[ p_\alpha u_\beta ( Q E_\mu + \varLambda \ \Omega_{\mu \sigma} p^\sigma)- u \cdot p\, p_\mu \Omega_{\alpha \beta} \Big] \partial_\nu \nonumber\\
&&- \frac{\hbar}{4(u \cdot p)^3} \epsilon^{\mu \nu \alpha \beta} m \ p_\alpha u_\beta \Omega_{\mu \sigma} p^\sigma\partial_\nu
- \frac{m}{2(u \cdot p)^2} Q E\cdot p \Biggr\} f_{\chi} =C_{(-\chi)}, \nonumber
\end{eqnarray}
where
\begin{eqnarray}
C_{\chi}=&&- \chi \frac{\hbar \delta \left( p^2-m^2 \right)}{2(u \cdot p)^2} \epsilon^{\mu \nu \alpha \beta} \Biggr\{ \ m \ p_\alpha \Big[\Omega_{\mu \beta} -2\,\Omega_{\mu \sigma} p^\sigma u_\beta \Big] \partial_\nu \nonumber\\
&&- \frac{1}{4(u \cdot p)^3} m \Big[ p_\alpha u_\beta (Q E_\mu + \varLambda \, \Omega_{\mu \sigma} p^\sigma)- u \cdot p\, p_\mu\Omega_{\alpha \beta} \Big] \partial_\nu \nonumber\\
&&+ \frac{m }{2(u \cdot p)^2} Q E \cdot p \Biggr\} f_{\chi} \nonumber\\
&& - m \delta^\prime (p^2-m^2) ( Q p\cdot B + 2\varepsilon (u\cdot p) (p \cdot \omega)) p \cdot \partial f_{\chi} .
\label{C_R/L}
\end{eqnarray}
Here $f_\chi \equiv f_{R/L}=(f_V+\chi f_A)/2$ are the right-handed and left-handed distribution functions. The right-hand side of (\ref{eom-fR}) vanishes for both $m=0$ and $\hbar=0,$ while its left-hand side is nonvanishing even when $\hbar =0$ or $m=0.$
Moreover, the distribution functions appearing on the left- and the right-hand sides possess opposite chirality. Hence we can consider (\ref{eom-fR}) as the transport equation of the $f_\chi$ appearing on its left-hand side, whose right-hand side arises from the presence of the opposite-chirality distribution function $f_{-\chi}$ for massive fermions.
To derive the nonrelativistic kinetic equations by integrating (\ref{eom-fR}) over $p_0,$ we consider distribution functions composed as in (\ref{f_fd}) and choose the frame $u_\mu=(1, \bm 0),$ $\omega^\mu=(0, \bm \omega).$ In this frame the Dirac delta function yields the dispersion relations
\begin{equation}
\label{edr}
p_{0s}^{\chi}=sE_p\left[1-s\chi \hbar \ \bm b \cdot \big(Q\bm B+2 \varepsilon E_p \bm \omega\big)\right],
\end{equation}
where $E_p=\sqrt{\boldsymbol{p}^2+m^2}$ and \mbox{$\bm b \equiv \bm p/2 E_p^3.$ }
Calculation of the $p_0$ integral yields
\begin{equation}
\big( \sqrt{\eta}_{\, \chi}^{\, s} \frac{\partial }{\partial t} + (\sqrt{\eta} \dot{{\bm x}})_{\chi}^{s} \cdot \frac{\partial }{\partial \bm{x}} + (\sqrt{\eta} \dot{\bm p})_{\chi}^{s} \cdot\frac{\partial }{\partial \bm{p}}\big) f_{\chi}^{s} (t,\bm x,\bm p) + \frac{m }{2 E_p^3} sQ\, \bm E \cdot \bm p\, f_{\chi}^s (t,\bm x,\bm p) =C_{(-\chi)},
\label{3d-keR}
\end{equation}
where
\begin{eqnarray}
\sqrt{\eta}_{\, \chi}^{\, s}&=& 1+ \hbar \chi \bm b \cdot (Q\bm B + s(2E_p+m) \bm \omega), \\
(\sqrt{\eta} \dot{{\bm x}})_{\chi}^{s}
&=& \frac{\bm p}{E_p} \Big\{1+2 \hbar \chi Q\bm b \cdot \bm B + \frac{\hbar s}{2} [2 \chi E_p (2 \varepsilon +(1-\varLambda) )+( \chi +1)m] \ \bm b \cdot \bm \omega \Big\} \nonumber \\
&& +\hbar \chi Q \Big(1-\frac{m}{E_p}\Big) \bm E \times \bm b+\hbar \chi s \Big(1-\frac{m}{2 E_p}\Big) \frac{\bm \omega}{E_p} \nonumber \\
&&-\hbar \bm \omega (\bm b \cdot \bm p) \Big( \frac{ s m}{2 E_p} ( \chi \varLambda +1) + \chi (1-\varLambda) \Big), \label{xdot-m}\\
(\sqrt{\eta} \dot{\bm p})_{\chi}^{s}&=& s Q\bm E+ \frac{\bm p}{E_p} \times \left[sQ \bm B +(2\varepsilon+\varLambda)E_p \bm \omega\right]. \label{pdm}
\end{eqnarray}
The right-hand side of (\ref{3d-keR}) which vanishes for $m=0$ can be considered as the correction terms due to the presence of opposite chirality fermions in the massive case. Terms which vanish for $m=0$ can also be interpreted as collision terms due to interaction of electromagnetic fields and vorticity with the spin of massive fermions \cite{Wang_2021}.
Let us fix the values of $\varepsilon$ and $\varLambda .$ Comparing the dispersion relations (\ref{edr}) with the Hamiltonian of Dirac particles coupled to the magnetic field in rotating coordinates in the helicity basis \cite{dky,dk-m}, one observes that $\varepsilon=1/2.$ Similar energy relations were also obtained for chiral particles in \cite{cssyy,lgmh}. Now, by inspecting (\ref{pdm}) we see that $\varLambda=1$ reproduces the Coriolis force correctly. This choice is also consistent with the massless case considered in section \ref{sec-cke}.
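Explicitly, for $\varepsilon=1/2$ and $\varLambda=1$ the force equation (\ref{pdm}) reduces to
$$
(\sqrt{\eta} \dot{\bm p})_{\chi}^{s}= s Q\bm E+ sQ\, \frac{\bm p}{E_p} \times \bm B + 2\, \bm p \times \bm \omega ,
$$
whose last term is the Coriolis force written in terms of the kinetic momentum.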
Inserting the spatial velocity (\ref{xdot-m}) into the following definition
\begin{equation}
\bm j^\chi_s = \int \frac{d^3p}{(2\pi\hbar)^3}(\sqrt{\eta} \dot{\bm x})^\sschi_s f^{eq,s}_{\sschi},
\label{j-1}
\end{equation}
where $f^{eq,s}_{\sschi}$ is given in (\ref{norot}) by substituting ${\cal E}_s^\sschi $ with (\ref{edr}), we write the axial-vector and vector particle number current densities as
\begin{equation}
\bm j _\ssA =\sum_{s\chi}\chi \bm j^\chi_s,\ \ \ \bm j _\ssV =\sum_{s\chi} \bm j^\chi_s .
\end{equation}
They can be decomposed as follows
\begin{equation}
\boldsymbol{j}_{\ssA, \ssV}^{B, \omega}(\boldsymbol{x}, t)=\overline{\sigma}_{\ssA, \ssV}^{B} \boldsymbol{B}+\overline{\sigma}_{\ssA, \ssV}^{\omega} \boldsymbol{\omega},
\end{equation}
with
\begin{eqnarray}
\begin{aligned}
\bar{\sigma}_{\ssA, \ssV}^{B}=&\frac{Q}{6 \pi^{2} \hbar^{2}} \int d|\bm{p}|\left\{\frac{|\bm{p}|^{4}}{E_p^{4}} f_{\ssA, \ssV}^{0}-\frac{|\boldsymbol{p}|^{4}}{2 E_p^{3}} \frac{\partial f_{\ssA, \ssV}^{0}}{\partial E_p}\right\} \\
\bar{ \sigma}_{\ssA, \ssV}^{\omega}=&\frac{1}{2 \pi^{2} \hbar^{2}} \int d|\bm{p}| \Bigg( \Big(\frac{|\bm{p}|^{2}}{E_p}+\frac{|\bm{p}|^{4} (\varepsilon+\varLambda-1)}{3 E_p^3} -\frac{ (3\varLambda-1)|\bm{p}|^{4} m}{12 E_p^4}-\frac{m |\bm{p}|^{2}}{2E_p^2} \Big) f_{\ssA, \ssV}^{0}\\
&- \frac{|\bm{p}|^{4} m}{6 E_p^4} f_{V,A}^{FD}-\frac{\varepsilon|\bm{p}|^{4}}{3 E_p^2} \frac{\partial f_{\ssA, \ssV}^{0}}{\partial E_p}\Bigg), \label{sigma_Bw}
\end{aligned}
\end{eqnarray}
where $f_{\ssA, \ssV}^{0}$ are defined by
\begin{equation}
f_{\ssA, \ssV}^{0}=\sum_{s}\left\{\frac{1}{e^{s\left[E_p-\mu_{R}\right] / T}+1} \pm \frac{1}{e^{s\left[E_p-\mu_{L}\right] / T}+1}\right\}.
\end{equation}
For the sake of simplicity let us deal with $\mu_{\ssR}=\mu_{\ssL}=\mu$. While $\bar{\sigma}_{\ssV}^{B}$ vanishes under this condition, $\bar{ \sigma}_{\ssV}^{\omega}$ is
\begin{equation}
\bar{ \sigma}_{\ssV}^{\omega} \underset{T = 0}{=}-\frac{m}{6\pi^2\hbar^2}\sqrt{\mu^2-m^2}\left(1+\frac{m^2}{2 \mu^2}\right)\theta(\mu-m),
\end{equation}
at zero temperature. Performing the integrals at zero temperature, we calculate the coefficients of the axial-vector current density as
\begin{eqnarray}
\begin{aligned}
\bar{\sigma}_{\ssA}^{B} \underset{T = 0}{=} &\ \frac{1}{2 \pi^{2} \hbar^{2}} \sqrt{\mu^2-m^2} \theta(\mu-m) , \\
\bar{\sigma}_{A}^{\omega} \underset{T= 0}{=} &\ \frac{1}{ \pi^{2} \hbar^{2}} \Big[ \mu \sqrt{\mu^2-m^2} \Big(\frac{2+3\varepsilon+\varLambda}{6}-(3\varLambda-1)\frac{m^3}{24 \mu^3} \\
&-(3\varLambda+5)\frac{m}{12 \mu} +(\varepsilon+\varLambda-1) \frac{m^2}{3 \mu^2} \Big) \\
&-m^2 \ln \left(\mu/m + \sqrt{\mu^2/ m^2-1}\right) \Big]\theta(\mu-m).
\end{aligned}
\end{eqnarray}
$ \bar{\sigma}_{A}^{B}$ is in accord with the one derived by means of the Kubo formula \cite{gmsw,bgb1,lykubo}. For $\varepsilon=1/2, \varLambda=1$ we have
\begin{eqnarray}
\begin{aligned}
\bar{\sigma}_{A}^{\omega} \underset{T= 0}{=} &\ \frac{1}{ \pi^{2} \hbar^{2}} \Big[ \mu \sqrt{\mu^2-m^2} \Big(\frac{3}{4}-\frac{m^3}{12 \mu^3} -\frac{2m}{3 \mu} +\frac{m^2}{6 \mu^2} \Big) \\
&-m^2 \ln \left(\mu/m + \sqrt{\mu^2/ m^2-1}\right) \Big]\theta(\mu-m).
\end{aligned}
\end{eqnarray}
Note that when the massless limit is considered one should also set $\varepsilon=0.$
For instance, for small $m,$ setting $\varepsilon=0$ one obtains
\begin{equation}
\bar{\sigma}_{\ssA}^{\omega} \approx \frac{1}{ 2 \pi^{2} \hbar^{2}} \left(\mu^2 -\frac{m^2}{2}\right).
\end{equation}
This is the correction obtained in \cite{FlFu,lykubo} in the $\mu=0$ limit.
As we have already mentioned, there is not a unique method of defining kinetic transport equations for massive fermions beginning with (\ref{real1})-(\ref{imag5}). Hence there is no consensus on the mass corrections to the chiral magnetic and vortical effects. Here we established the massive kinetic equation from the covariant formulation which directly generates the chiral transport equation when one sets $a^\mu=p^\mu$ and $m=0.$ In fact we have chosen the spin four-vector as in (\ref{amu}), which is consistent with the massless limit. Therefore the corrections to the chiral effects are apparent, in contrast to the other formalisms, which are suitable for discussing the large mass limit \cite{wsswr,dk}.
The massless limit can also be discussed starting from (\ref{SKE}) and (\ref{SAKE}). For $m=0$ we need to set $\varepsilon =0$ in (\ref{zetaw}) and fix the spin four-vector as $a_\mu=p_\mu.$ One can then observe that (\ref{SAKE}) yields a scalar kinetic equation up to an overall $p^\mu$ factor. A similar scalar kinetic equation follows from (\ref{SKE}). By adding and subtracting these scalar kinetic equations one obtains the chiral kinetic equation (\ref{cke1}) for $n_\mu=u_\mu$ and ${\varLambda} =k.$ Therefore the 3D chiral theory coincides with the one obtained in section 5, in particular particle number current density satisfies the anomalous divergence as in (\ref{ceqD0}).
\section{Conclusions}
\label{sec-conc}
We demonstrated that the scalar field $\phi$ and the vector-field $\eta_\mu$ represent the fermionic fluid in the presence of the Coriolis force due to the nonvanishing vorticity. This is achieved by showing that the equations of motion acquired by variation of the action (\ref{sF}) with respect to $\phi$ and $\eta_\alpha$ are equivalent to relativistic Euler equations of a fluid with the Coriolis force. Moreover, this formalism provided the field strength of the vector field $\eta_\alpha$ in terms of the specific enthalpy and fluid vorticity.
Then we considered the action of Dirac spinors coupled to the vector fields $A_\mu$ and $\eta_\mu.$ By virtue of its gauge invariances we derived the QKE satisfied by the Wigner function, (\ref{qke}), by generalizing the formalism of \cite{vge}. In fact, one of the main results accomplished in this work is to show that when one deals with the fluids the original QKE \cite{vge} should be modified adequately. This modification has already been introduced in \cite{dk,dk-m} in an ad hoc manner. Here we obtained it in a systematic way from an underlying action.
The Clifford algebra generators are employed to decompose the Wigner function in terms of field components, and the semiclassical equations of the component fields which follow from the QKE are presented. Then, to derive the KTE, one can proceed in two different ways depending on when the on-shell conditions of the fields representing the fluid are imposed. In \cite{dk,dk-m} we derived the semiclassical kinetic equations by imposing the on-shell conditions from the start. In contrast, here we first acquired the semiclassical transport equations and then let the fields $\eta_\alpha,\, \phi$ be on-shell, so that $w_{\mu \nu}$ is expressed in terms of the vorticity and fluid 4-velocity. This approach furnishes novel kinetic transport equations for both massless and massive fermions.
As usual the equations satisfied by vector and axial-vector decouple from the other fields when one considers massless fermions. The kinetic equation in the presence of unique gauge field is well known \cite{hpy1,hsjlz,hpy2}. We generalize it and obtain the semiclassical chiral kinetic equation where the electromagnetic fields and the vorticity are treated on the same footing.
By integrating the semiclassical relativistic kinetic equation over $p_0,$ we established a 3D CKT which does not depend explicitly on the spatial coordinates. It is consistent with the chiral anomaly and takes into account the noninertial properties of the rotating reference frame. Moreover, the chiral magnetic and vortical effects are correctly generated.
Transport equations of massive fermions are also studied. The semiclassical kinetic equations of the vector and axial-vector fields are derived by extending the formalism given in \cite{hhy}. The related 3D kinetic transport equation is obtained by choosing the spin 4-vector to be adequate for discussing the small-mass limit. We showed that the Coriolis force and the dispersion relation are correctly generated. We obtained the particle number current density in terms of the equilibrium distribution function and calculated it at zero temperature. The similarities to and differences from the other approaches are discussed.
For massive fermions we obtained the 3D transport equations by adopting the definition of spin vector as in (\ref{amu}) and keeping only the linear terms in the electromagnetic fields and vorticity. For having a better understanding of the mass corrections to the chiral effects, one needs to find a solution of the spin vector depending on the electromagnetic fields and vorticity.
Although we considered the collisionless case, kinetic transport equations are mainly needed in the presence of collisions. They can be introduced in the current formalism by generalizing the methods given in \cite{hpy1} and \cite{yhh-col}. Obviously collisions can also be studied within the 3D kinetic transport theories.
The zilch vortical effect \cite{maximZilch} has recently been studied within kinetic theories in \cite{hmss,hhyy}. In \cite{hmss} it was shown that the zilch current in a rotating system can equivalently be derived in chiral kinetic theory by employing the chiral current.
It would be interesting to inspect if this construction of zilch vortical effect can be studied by means of our approach for example by modifying the chiral current. This may provide an alternative way of studying the rotation of reference frame for photonic media.
\acknowledgments{ E.K. is partially supported by the Bogazici University Research Fund under Grant No. 20B03SUP3.}
\section{Background and Motivation}
Clustered sensor networks can be classified into two broad types: homogeneous and heterogeneous. In a homogeneous network all sensor nodes are identical in terms of energy and hardware complexity. With purely static clustering in a homogeneous network, the CHs are over-loaded with long-range transmissions to the remote sink, and extra processing is necessary for protocol coordination and data aggregation. As a result, the CHs tend to die before the other nodes. To ensure that all nodes die at about the same time, so that only a minor amount of residual energy is wasted when the system expires, one method is to rotate the role of cluster head periodically and randomly over all the nodes. The downside of role rotation in a homogeneous network is that every node must be capable of acting as a CH, and therefore requires the necessary hardware capabilities. In a heterogeneous sensor network, on the other hand, two or more types of sensor nodes with different energy levels are used. The extra energy and complex hardware can then be embedded in a few CH nodes, reducing the hardware cost of the entire sensor network.
In LEACH the sensor nodes are equipped with the same amount of energy. The protocol selects CHs periodically and consumes energy uniformly; each node decides by itself whether or not to become a CH, based on a probability [1]. SEP is based on two-level heterogeneity, and CH election in SEP is based on a weighted election probability. A fraction $m$ of advanced nodes among a total of $n$ nodes is provided with an additional energy factor $\alpha$, so the stability period is increased by the advanced nodes, although CH selection is done in the same way as in LEACH. ESEP is an extension of SEP that considers three types of nodes, as discussed in [2,3]. In [4], DEEC estimates the ideal network lifetime and uses it to compute the reference energy that each node should expend during a round.
\begin{figure}
\begin{center}
\includegraphics[height=5cm,width=10cm,angle=0]{obaid.eps}
\caption{\small \sl Cluster Formation in WSN.\label{fig:Stupendous}}
\end{center}
\end{figure}
In WSNs, current research deals with efficient power utilization of sensor nodes. Due to the small battery capacity of these nodes, the WSN may not survive for long; smart utilization of the sensor nodes is therefore crucial to prolong the lifetime and stability of the WSN. Most current protocols, such as SEP and ESEP, are stability-oriented protocols that minimize the energy utilization of the network by using a clustering approach. The stability period is the time interval before the death of the first node in the WSN. In clustering, data is transmitted to the sink in the form of clusters, and every cluster has a CH responsible for transmitting data to the sink.
The existing protocol SEP describes the impact of heterogeneity on heterogeneity-aware protocols and the instability of protocols such as LEACH in the presence of heterogeneous sensor nodes. SEP is based on weighted election probabilities assigned to each node to become CH according to its initial energy. The rotating epoch and the election probability are directly correlated with the initial energy of the nodes rather than their residual energy. Because advanced nodes become CHs more frequently, it may happen after some rounds that the energy of the advanced nodes drops below that of the normal nodes. To overcome this drawback, we introduce a new CH selection scheme for SEP based on the ECR of each node. With this criterion, SEP increases the stability and lifetime of the network.
\section{SEP}
SEP improves the stable region of a WSN by using the heterogeneity parameters, namely the fraction of advanced nodes $m$ and the additional energy factor $\alpha$ between the normal and advanced nodes. To prolong the stability region of a network, SEP maintains the constraint of well-balanced energy consumption.\\
In SEP, the advanced nodes initially become CH more often than the normal nodes.
Suppose that $E_0$ is the initial energy of each normal node and $E_0(1+\alpha)$ is the energy of each advanced node in a WSN. The total energy of the new heterogeneous network in [2] is equal to $n(1-m)E_0 + nmE_0(1+\alpha) = nE_0(1+\alpha m)$; that is, the total energy of the system is increased by a factor of $1+\alpha m$. In order to increase the stability of the system, the new epoch must equal $\frac{1}{p_{opt}}(1+\alpha m)$, because the system has virtually $\alpha m$ more nodes and $\alpha m$ more energy.
Initially, the probability of each node becoming CH is $p_{opt}$; on average, $n \times p_{opt}$ nodes must become CHs per round per epoch. The nodes elected CH in the current round can no longer become CH in the same epoch. Nodes that have not been elected CH belong to the set $G$, in order to maintain a steady number of CHs per round. The probability of a node $s \in G$ becoming CH increases after each round of the same epoch. The decision is made at the beginning of each round by each node $s \in G$ independently choosing a random number in $[0,1]$. If the random number is less than a threshold $T(s)$, the node becomes a cluster head in the current round.
The threshold is set in [2] as:
\begin{eqnarray}
T(s) = \left\{ \begin{array}{rl}
\frac{p_{opt}}{1-p_{opt}\left[r \bmod \frac{1}{p_{opt}}\right]} &\mbox{ if $s \in G$} \\
0 &\mbox{ otherwise}
\end{array} \right.
\end{eqnarray}
where $r$ is the current round number.
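For concreteness, the threshold-based election described above can be sketched as follows (a minimal illustration in Python; the bookkeeping of the set $G$ across the epoch is left implicit, and the function names are our own):

```python
import random

def threshold(p_opt, r):
    """SEP/LEACH threshold T(s) for a node s still in the set G at round r."""
    epoch_len = round(1 / p_opt)  # length of one rotating epoch
    return p_opt / (1 - p_opt * (r % epoch_len))

def becomes_ch(p_opt, r, rng=random.random):
    """A node in G elects itself CH when its draw in [0,1) falls below T(s)."""
    return rng() < threshold(p_opt, r)
```

With $p_{opt}=0.1$ the threshold grows from $0.1$ in the first round of an epoch to $1$ in its last round, so every node still in $G$ is guaranteed to serve as CH exactly once per epoch.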
SEP increases the stable region of a network if the following conditions are fulfilled.\\
a. Each normal node becomes a CH once every $\frac {1}{p_{opt}}(1 + \alpha m)$ rounds per epoch.\\
b. Each advanced node becomes a CH $1 + \alpha$ times every $\frac {1}{p_{opt}}(1 + \alpha m)$ rounds per epoch. \\
c. The average number of CHs per round per epoch is equal to $n \times p_{opt}$.\\
If at the end of an epoch the number of times an advanced node has become CH is not exactly $1+\alpha$, then the energy is not well distributed and the average number of CHs per epoch per round is not equal to $n \times p_{opt}$. The weighted election probabilities are:
\begin{equation}
p_{nrm}=\frac{p_{opt}}{1+\alpha m}
\end{equation}
\begin{equation}
p_{adv}=\frac{p_{opt}(1+\alpha)}{1+\alpha m}
\end{equation}
\section{ECRSEP}
In ECRSEP, CH selection is based on the Energy Consumption Rate (ECR),
defined mathematically as
$ ECR = \frac {E_{int}-E_r}{r-1}$,\\
where $E_{int}$ is the initial energy, $E_r$ is the residual energy of the node, and $r$ is the current round.
CH selection in the next round is based on the ECR of the previous round: a node with a low ECR in the previous round is selected as CH in the next round. A CH of the previous round is not selected again in the next round, because its ECR is very high compared to the non-CH nodes.
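A minimal sketch of this ECR-based selection (in Python; the node representation as `(e_init, e_res)` pairs is hypothetical and only illustrates the rule stated above):

```python
def ecr(e_init, e_res, r):
    """Energy consumption rate after r-1 completed rounds: (E_int - E_r)/(r - 1)."""
    return (e_init - e_res) / (r - 1)

def select_ch(nodes, r):
    """Pick as CH the index of the node with the lowest ECR in the previous round.

    `nodes` is a list of (e_init, e_res) pairs; a node that served as CH
    has spent much more energy, so its ECR is high and it is skipped."""
    return min(range(len(nodes)),
               key=lambda i: ecr(nodes[i][0], nodes[i][1], r))
```

For example, a node starting at $0.5\,$J with $0.4\,$J left after 10 rounds has $ECR = 0.1/10 = 0.01\,$J/round.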
\subsection{Radio Model}
In the radio model, the electronics dissipate $ E_{elec} = 50nJ/bit $ to run the receiver and transmitter circuitry, and the transmitter amplifier dissipates $ E_{amp} = 100pJ/bit/m^2 $.
The equations used to calculate the receiving and transmitting cost for a $k$-bit message over a distance $d$ are modeled in [4] as shown below:\\
Transmitter Energy
\begin{equation}
E_{T}(k,d)= E_{T-elec}(k)+E_{T-amp}(k,d)
\end{equation}
\begin{equation}
E_{T}(k,d)=(E_{elec}\times k) + (E_{amp}\times k \times d^2)
\end{equation}
Receiving Energy
\begin{equation}
E_{R}(k)=E_{R-elec}(k)
\end{equation}
\begin{equation}
E_{R}(k)=E_{elec}\times k
\end{equation}
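Using the constants above, the two costs can be sketched directly (a small Python helper with the function names our own; values in joules, with the $d^2$ path loss of Eqs. (5)-(7)):

```python
E_ELEC = 50e-9    # J/bit: transmitter/receiver electronics
E_AMP = 100e-12   # J/bit/m^2: transmit amplifier

def tx_energy(k, d):
    """Energy to transmit a k-bit message over distance d metres."""
    return E_ELEC * k + E_AMP * k * d ** 2

def rx_energy(k):
    """Energy to receive a k-bit message."""
    return E_ELEC * k
```

For a 4000-bit packet over 50 m this gives $2\times10^{-4} + 10^{-3} = 1.2$ mJ to transmit and $0.2$ mJ to receive, which shows why long-range CH-to-sink links dominate the energy budget.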
\subsection{Network Model}
In this section, we discuss the network model for ECRSEP. Assume that $N$ sensor nodes are deployed within an $M \times M$ region and organized into a clustering hierarchy. Every cluster has a CH, responsible for transmitting data directly to the sink. We suppose that our network is stationary.\\
In our network we consider two levels of heterogeneity in terms of energy: there are two types of sensor nodes, namely normal nodes and advanced nodes. $E_o$ is the initial energy of a normal node and $m$ is the fraction of advanced nodes. Advanced nodes have $\alpha$ times more energy than normal nodes, so there are $mN$ advanced nodes with initial energy $E_o(1+\alpha)$ and $(1-m)N$ normal nodes with initial energy $E_o$.
The total energy of the two-level heterogeneous network is given in [2] as:
\begin{equation}
E_{total}=N(1-m)E_o+NmE_o(1+\alpha)=NE_o(1+\alpha m)
\end{equation}
So, the two-level heterogeneous network has $\alpha m$ times more energy than a homogeneous network.
\subsection{CH selection in ECRSEP Protocol}
In this section, we describe the CH selection method of the ECRSEP protocol, in which CH selection is based on the energy consumption rate.
Let $n$ be the number of rounds for a participating node $S$ to become CH; we refer to it as the rotating epoch.
Let $p=\frac{1}{n}$ be the average probability of becoming CH during $n$ rounds. When the nodes have the same amount of energy in each epoch, choosing $p$ equal to $p_{opt}$ ensures that there are $p_{opt}N$ cluster heads in every round. We have
\begin{equation}
p=p_{opt} \times {ECR}
\end{equation}
The total number of CHs per epoch is equal to:
\begin{equation}
\sum_{i=1}^N p_i=Np_{opt}
\end{equation}
In two-level heterogeneous networks, $p_{opt}$ is replaced by weighted probabilities for advanced and normal nodes, modeled in [4] as:
\begin{equation}
p_{nrm}=\frac{p_{opt}\frac{E(i)-E(r)}{r-1}}{1+\alpha m}
\end{equation}
\begin{equation}
p_{adv}=\frac{p_{opt}(1+\alpha)\frac{E(i)-E(r)}{r-1}}{1+\alpha m}
\end{equation}
Therefore, $p_{(i)}$ becomes:
\begin{eqnarray}
p_{(i)} = \left\{ \begin{array}{rl}
\frac{p_{opt}\frac{E(i)-E(r)}{r-1}}{(1+\alpha m)} &\mbox{ if $S(i)$ is a normal node} \\
\frac{p_{opt}(1+\alpha)\frac{E(i)-E(r)}{r-1}}{(1+\alpha m)} &\mbox{ if $S(i)$ is an advanced node} \\
\end{array} \right.
\end{eqnarray}
This yields the probability threshold used to elect CHs; the threshold is thus directly correlated with the energy consumption rate of each node.\\
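The weighted election rule of Eq. (14) may be sketched as follows (Python; `ecr_val` stands for the per-node factor $\frac{E(i)-E(r)}{r-1}$, and the function name is ours):

```python
def election_prob(p_opt, alpha, m, ecr_val, advanced):
    """ECRSEP election probability p(i), weighted by node type and ECR.

    Normal nodes get p_opt * ECR / (1 + alpha*m); advanced nodes get an
    extra (1 + alpha) factor, matching the threshold formula in the text."""
    base = p_opt * ecr_val / (1 + alpha * m)
    return base * (1 + alpha) if advanced else base
```

A lower ECR lowers the denominator term not at all but scales the whole probability, so nodes that consumed energy slowly in earlier rounds are favoured for election.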
\section{Simulations Result}
We evaluate the performance of our protocol using MATLAB. We arrange a WSN with $N=100$ nodes distributed randomly in a $100m\times 100m$ field, and we assume in our simulations that the sink is at the center of the sensing region. To compare the performance of ECRSEP with the other protocols, the effect of interference and signal collision in the wireless channel is not considered. Our goal is to compare the performance of ECRSEP with the SEP, ESEP, LEACH, and DEEC protocols on the basis of energy dissipation and network longevity.
\begin{figure}[!t]
\centering
\subfigure[Dead Nodes, $\alpha = 2$ and $m = 0.2$]{\includegraphics[height=4.5 cm,width=7 cm]{fig3.eps}}
\subfigure[Alive Nodes,$\alpha = 2 $ and $m = 0.2$]{\includegraphics[height=4.5 cm,width=7 cm]{fig2.eps}}
\subfigure[Throughput,$\alpha = 2 $ and $m = 0.2$]{\includegraphics[height=4.5 cm,width=7 cm]{fig1.eps}}
\subfigure[Alive Nodes,$\alpha = 2 $ and $m = 0.3$]{\includegraphics[height=4.5 cm,width=7 cm]{figg2.eps}}
\subfigure[Dead Nodes,$\alpha = 2 $ and $m = 0.3$]{\includegraphics[height=4.5 cm,width=7 cm]{figg3.eps}}
\subfigure[Throughput,$\alpha = 2 $ and $m = 0.3$]{\includegraphics[height=4.5 cm,width=7 cm]{figg1.eps}}
\caption{Performance Evaluation of ECRSEP}
\end{figure}
We use following parameters in our simulation. $ E_{elect} = 50nJ/bit$, $E_{DA} = 5nJ/bit/message$, $\epsilon_{fs} = 10pJ/bit/m^2$ , $\epsilon_{mp} = 0.0013pJ/bit/m^4$, $E_o = 0.5J$, $K = 4000 $, $P_{opt} = 0.1$, $n = 100$, $\alpha = 1$ and $m = 0.1$.
The MATLAB simulations show that ECRSEP has an enhanced stability period compared to all the other protocols, and its network lifetime is increased as well. However, DEEC outperforms all protocols in terms of throughput.
The graphs in Fig. 2 (a,b,c) show results for the case $\alpha = 2$ and $m = 0.2$: they compare the protocols SEP, LEACH, ESEP, DEEC, and ECRSEP with respect to dead nodes, relative to the number of rounds. Among these protocols, the probability-based protocols SEP and LEACH result in approximately equal stability periods; since CH selection in both is probability-based, a large difference would appear only if LEACH were considered with homogeneity. ESEP, a probability-based protocol with three levels of heterogeneity, obviously shows better results than SEP and LEACH: due to the availability of more nodes with extra energy, ESEP yields an increased stability period. The first node of our proposed protocol dies after 5000 rounds, achieving greater stability than all the protocols discussed in this paper.
Fig. 2 (a) shows that the network lifetime of LEACH is lower than that of all the other protocols, as it is very sensitive to heterogeneity. The results show that ECRSEP achieves the maximum network lifetime: all of its nodes die only after 10000 rounds, so the network lifetime increases.
Fig. 2 (b) shows the stable region of the WSN. In a network of heterogeneous nodes, LEACH enters unstable operation sooner, as it is very sensitive to this type of heterogeneity. SEP extends the stability period by being aware of heterogeneity: it assigns CH election probabilities weighted by the relative initial energy. Due to the extended stability, the throughput of SEP is higher than that of LEACH, and SEP yields a longer stability period thanks to the extra energy of the advanced nodes. ESEP has three levels of heterogeneity, so its stability period is longer than SEP's. Our proposed protocol outperforms them all in terms of network stability.
Fig. 2 (c) compares these protocols with respect to throughput, relative to the number of rounds, defined as the data sent from the CHs to the base station. The throughput of SEP is greater than that of LEACH in both the stable and the unstable region, and the throughput of ESEP is greater than that of SEP because of its three levels of heterogeneity. Our proposed protocol ECRSEP beats all of these; however, the results show that DEEC attains the maximum throughput among all protocols.
Fig. 2 (d,e,f) shows results for $\alpha = 2$ and $m = 0.3$. Fig. 2 (d) shows that ECRSEP achieves the maximum network lifetime compared to all the other protocols: all nodes die only after 23000 rounds in our proposed protocol, whereas the network lifetime of the other protocols is no more than 8000 rounds.
Fig. 2 (e) shows that nodes die more slowly in ECRSEP, which means that its stability period is increased. LEACH enters unstable operation sooner, as it is very sensitive to this type of heterogeneity. SEP extends the stability period by assigning CH election probabilities weighted by the relative initial energy, and yields a longer stability period due to the extra energy of the advanced nodes. ESEP has three levels of heterogeneity, so its stability period is longer than SEP's. Our proposed protocol outperforms them all in terms of network stability.
Fig. 2 (f) shows that DEEC outclasses all the protocols in terms of throughput, relative to the number of rounds, defined as the data sent from the CHs to the base station. Due to their support of heterogeneity, the throughput of SEP, ESEP, and ECRSEP is higher than that of LEACH.
\section{Conclusion}
In a WSN the nodes are not always homogeneous; they may be heterogeneous, which increases the network complexity. Clustering is a key technique in WSNs for increasing stability and reducing energy consumption. In this paper, we proposed the ECRSEP protocol and compared it with other WSN protocols such as SEP, ESEP, LEACH, and DEEC. We conclude that ECRSEP is the most suitable choice when network lifetime and stability are the main concerns.
\section{Introduction}
In his seminal work \cite{G1}, Greenberg laid out a conjectural
Iwasawa theory for a motive $M$ at an {\it ordinary} prime $p$. His
ordinary hypothesis had the effect of drastically simplifying the
$p$-adic Hodge theory of $M$, while on the other hand being expected
to hold for a dense set of primes $p$.
Although our knowledge began to improve immediately after the time of
Greenberg's work, we learned that Iwasawa theory is, in comparison,
very complicated at nonordinary primes. For example, Bloch--Kato
found in \cite{BK} the right definition of the general Selmer group,
and in \cite{PR} Perrin-Riou $p$-adically interpolated the Bloch--Kato
dual exponential map, providing a close link between Euler systems
(which bound Selmer groups) and $p$-adic $L$-functions. While these
developments require no ordinary hypothesis, they rely heavily on
difficult crystalline techniques, and do not lead to a convenient
statement of the main conjecture punctually, let alone variationally.
Much more recently, there has been a major shift in the methods
underlying $p$-adic Hodge theory. The work of many people has shown
that, essentially, all the important information attached to a
$p$-adic representation $V$ of the absolute Galois group $G_K$ of a
$p$-adic local field $K$ can be read rather easily from its
$(\vphi,\Ga_K)$-module, an invariant originally associated to $V$ by
Fontaine in \cite{F}, and subsequently refined by several authors.
(See \S\ref{sect-phigamma} for numerous details and references.)
Notably, the $(\vphi,\Ga_K)$-module of $V$ over the Robba ring may be
dissected into subquotients in ways that are not readily visible on
the level of the $p$-adic representation $V$ itself. This was first
harnessed by Colmez, who called $V$ {\it trianguline} if its
$(\vphi,\Ga_K)$-module is a successive extension of $1$-dimensional
objects, and the latter notion has played a crucial role in our
burgeoning understanding of the $p$-adic local Langlands
correspondence for $\GL_2(\bbQ_p)$.
In this paper we use $(\vphi,\Ga_K)$-modules to give a natural
weakening of Greenberg's ordinary hypothesis. We identify those
representations whose $(\vphi,\Ga_K)$-module is the same as that of an
ordinary representation (except possibly as regards the $\vphi$-slopes
of its ordinary filtration), and call them {\it triangulordinary}. We
show how the $\vphi$-slopes are only rarely ever used when analyzing
the $p$-adic Hodge theory of such $V$.
We present two pieces of evidence that our hypothesis is natural and
timely. First, as our main result we show that the natural analogue
of Greenberg's Selmer groups coincide with those defined by
Bloch--Kato. This generalizes a result of Flach (see \cite[Lemma
2]{Fl}), which was proved using Poitou--Tate local duality and
Euler--Poincar\'e characteristic computations, to the case of
arbitrary perfect residue field. Second, we propose a variational
program to extend our theory to {\it define} the Selmer module of the
universal finite-slope eigenform over a dense open subset of the
Coleman--Mazur eigencurve (and the eigensurface obtained from it by
cyclotomic twisting); such a definition has been hitherto unknown.
Our program would encompass results of Kisin, which provide Selmer
groups for all the individual overconvergent eigenforms in the family.
After the writing of this article, we found that many of our technical
results appear in \cite{BC2}. The works have slightly different aims,
so let us briefly note how they differ. First, throughout \cite{BC2}
one has $K=\bbQ_p$, so that, in particular, $\vphi$ is a linear
operator with well-defined eigenvalues; our theory does not even
assume that $K/\bbQ_p$ is finite, which is necessary for all
crystalline representations to be trianguline. As concerns Selmer
groups, they only explicitly treat those associated to {\it adjoint}
representations, by measuring when trianguline deformations are
crystalline. Our work is valid even when there is no
deformation-theoretic interpretation available. In any case, their
methods can easily show that $H^1_\tord \subseteq H^1_f$; we explain
when equality holds.
This work would not even have been attempted, were it not for the
influence of many people. We owe particular thanks to Laurent Berger
and Kiran Kedlaya for introducing us to this subject, and for their
patience in explaining its ideas to us. Similarly, we would like to
thank the organizers of the 2005 ``Atelier sur les Repr\'esentations
$p$-adiques'' at CRM in Montreal, as well as the 2006 ``Special
Semester on Eigenvarieties'' at Harvard---our experiences there
incited us to take up a serious study of the ideas required to write
this article. We thank Barry Mazur for his enthusiasm and
encouragement throughout this project. We are indebted to Ruochuan
Liu and Ga\"etan Chenevier for extremely helpful conversations. Jan
\Nekovar\ arranged for our stay in Paris, during which time much of
this work was hammered out. Finally, we heartily thank the NSF for
its support through the MSGRFP, under which all this work was
completed, and l'Institut de Math\'ematiques de Jussieu for its
hospitality.
Let us conclude by describing the contents of the paper. In the
following section, we gather in one place the facts about
$(\vphi,\Ga_K)$-modules, Galois cohomology, and $p$-adic Hodge theory
that will be required in the sequel. Our aim is to provide a precise
resum\'e and guide to the literature. In \S\ref{sect-local} we
present our results concerning individual Galois representations.
Here the reader will find the definition of triangulordinary
representations and proofs of their basic properties, including the
comparison of Selmer local conditions. The section concludes by
describing the relationship to the notions of ordinary and
trianguline, and discussing examples arising in nature, including
abelian varieties and modular forms. In the \S\ref{sect-global}, we
propose a program to define Selmer modules for general variations of
$p$-adic Galois representations, and show how this would apply to the
eigencurve and overconvergent $p$-adic modular forms.
\section{Local theory}\label{sect-local}
In this section we define triangulordinary representations, and prove
that they have many amenable properties. We go on to define Selmer
groups of representations that are triangulordinary at $p$, and give
examples.
We continue with the notations set forth in \S\ref{sect-phigamma}.
\subsection{Triangulordinary $(\vphi,\Ga_K)$-modules and
Selmer groups}\label{sect-definitions}
When in doubt, $D$ refers to an object in
$\bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$. In this subsection we set $L
= \wh{K^\unr}$, and remind the reader of the meaning of $D_L$: the
compositum $\ov{L}:=\ov{K}.L$ is an algebraic closure of $L$, and $G_L
= \Gal(\ov{L}/L) \stackrel{\sim}{\to} I_K \subset G_K$, the inertia
subgroup. Thus, for $V \in {\bf Rep}_\bbQp(G_K)$, one has
$\bfD^\dag_\rig(V)_L = \bfD^\dag_\rig(V|_{I_K})$.
We say that $D$ is {\it triangulordinary} if there exists a
decreasing, separated and exhaustive, $(\vphi,\Ga_K)$-stable
filtration $F^* \subseteq D$ by $B^\dag_{\rig,K}$-direct summands,
such that each $(\Gr_F^n)_L$ is $\Ga_L$-isomorphic to
$(t^nB^\dag_{\rig,L})^{\oplus d_n}$. The $d_n$ are called the
multiplicities of the weights $n$ in $D$; for example, $D$ has weights
$\geq 1$ if and only if $D = F^1$.  For equivalent definitions see
Corollary \ref{cor-crys-HT-n} and \S\ref{sect-compare}, and for
examples see \S\ref{sect-local-ex}. We do not give these immediately,
because discussing them rigorously requires several tools.
Given $D$ and a triangulordinary filtration $F^*$, we put
\begin{equation}\label{eqn-tord-selmer-cond}
H^1_\tord(D) = H^1_\tord(D;F^*) = \ker\left[ H^1(D) \to
H^1((D/F^1)_L) \right],
\end{equation}
and call it the {\it triangulordinary local condition}. It is
possible that $D$ be triangulordinary with respect to more than one
filtration (see \S\ref{sect-local-ex}), hence the need for the
``$F^*$'' in the notation. However, we will usually have a fixed
filtration in mind, and therefore we will usually drop it from sight.
Here is the main theorem of this section.
\begin{thm}\label{thm-local-main}
Let $D$ be triangulordinary with filtration $F^*$. Then the following
claims hold.
\begin{enumerate}
\item $D$ is de~Rham (and moreover +de~Rham if and only if $F^1 = 0$),
and even semistable.
\item Suppose for all $n \leq 0$ that $n-1$ is not a $\vphi$-slope on
$\Gr_F^n$. Then $H^1_\tord(D;F^*)$ coincides with $H^1_{g+}(D)$,
defined in \S\ref{sect-galois-descent}.
\end{enumerate}
\end{thm}
We prove this theorem in \S\ref{sect-local-proof}; the intervening
sections involve preparatory material.
For the remainder of this subsection, let us break with the running
notation. Let $K/\bbQ$ be a finite extension, and let $S$ be a finite
set of places of $K$ containing all primes above $p$ and $\infty$.
Write $K_S$ for a maximal extension of $K$ unramified outside $S$, and
$G_{K,S} = \Gal(K_S/K)$. Choose algebraic closures $\ov{K}_v$ of
$K_v$, and write $G_v = \Gal(\ov{K}_v/K)$, for each finite place $v
\in S$; choose embeddings $K_S \hookrightarrow \ov{K}_v$, which amount
to maps $G_v \to G_{K,S}$. Also, for $v \in S$ with $v \nmid
p\infty$, denote by $I_v \subset G_v$ the inertia subgroup, and write
$G_{\bbF_v} = G_v/I_v$.
Let $V$ be a finite-dimensional $\bbQ_p$-vector space equipped with a
continuous, linear action of $G_{K,S}$. Assume that, for each place
$v$ of $K$ with $v \mid p$, $\bfD^\dag_\rig(V|_{G_v})$ is equipped
with a triangulordinary filtration $F_v^*$. We define the {\it
triangulordinary local conditions} as above: they are the respective
subgroups $H^1_\tord(K_v,V)$ corresponding, under the identifications
$H^1(K_v,V) \cong H^1(\bfD^\dag_\rig(V|_{G_{K_v}}))$, to the subgroups
$H^1_\tord(\bfD^\dag_\rig(V|_{G_{K_v}}))$. Then, following the
customary pattern, we define the {\it triangulordinary Selmer group}
to be
\begin{equation}\label{eqn-tord-selmer}
H^1_\tord(K,V) = \ker\left[ H^1(G_{K,S},V) \to
\bigoplus_{\substack{v \in S\\ v \nmid p\infty}}
\frac{H^1(K_v,V)}{H^1(G_{\bbF_v},V^{I_v})}
\oplus
\bigoplus_{\substack{v \in S\\ v \mid p}}
\frac{H^1(K_v,V)}{H^1_\tord(K_v,V)} \right].
\end{equation}
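For instance, if $K = \bbQ$ and $S = \{p,\infty\}$, then the first
direct sum in Equation \ref{eqn-tord-selmer} is empty, and the
definition reduces to
\[
H^1_\tord(\bbQ,V) = \ker\left[ H^1(G_{\bbQ,S},V) \to
\frac{H^1(\bbQ_p,V)}{H^1_\tord(\bbQ_p,V)} \right].
\]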
After proving Theorem \ref{thm-local-main}, we will see how this
definition generalizes Selmer groups defined by Greenberg, and agrees
with those defined by Bloch--Kato.
\subsection{Galois descent}\label{sect-galois-descent}
In this section and the next we show that the de~Rham property is
rather flexible: it is easy to equate the validity of this property
for one $(\vphi,\Ga_K)$-module to its validity for another one. The
instance in this section concerns the ability to discern that $D$ is
de~Rham (resp.\ crystalline), given that the restriction $D_L$ of $D$
to some possibly large overfield $L \supseteq K$ has the same
property. I suspect that these facts are known to the experts, but I
give precise statements and proofs because they do not appear in the
literature in the generality of possibly non-\'etale
$(\vphi,\Ga_K)$-modules.
By a {\it complete unramified extension} $L$ of $K$, we mean the
$p$-adic completion of an unramified (but possibly infinite) algebraic
extension of $K$. Using an appropriate variant of the Witt vectors
formalism, such fields $L$ lie in a natural bijection with algebraic
extensions of $k$, and hence with closed subgroups $H$ of $G_k$. When
$H$ is normal in $G_K$, we call $L$ {\it normal} and set $\Gal(L/K) =
G_k/H$. For example, the maximal complete unramified extension of $K$
is $L = \wh{K^\unr}$, with $\Gal(L/K) = G_k$.
By a {\it complete discretely valued extension} $L$ of $K$, we mean a
finite extension of a complete unramified extension. These are the
same as CDVFs into which $K$ embeds continuously, such that the
embedding induces an algebraic extension of residue fields. The
complete discretely valued extensions $L$ of $K$ are in a natural
bijection with closed subgroups $H$ of $G_K$ for which $H \cap I_K$
has finite index in $I_K$. If $H$ is normal in $G_K$, we call $L$
{\it normal} and we set $\Gal(L/K) = G_K/H$. The class of complete
discretely valued extensions is closed under finite composita, but
does not admit a maximal element.
\begin{ppn}\label{ppn-derham-descent}
Let $D \in \bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$, and let $L$ be a
complete discretely valued extension of $K$. Then $\dim_K D_\dR^{(+)}
= \dim_L (D_L)_\dR^{(+)}$.
\end{ppn}
\begin{rem}
In the down-to-earth terms of a Galois representation $V$, the
proposition says that the de~Rham periods of $V$ essentially only
depend on the action upon $V$ of an arbitrarily small finite index
subgroup of the inertia group $I_K \subseteq G_K$.
\end{rem}
\begin{proof}
It is clear that $D_\dR^{(+)} \otimes_K L \hookrightarrow
(D_L)_\dR^{(+)}$, which shows the inequality $\leq$. This proof
consists of showing the reverse inequality. Notice that, for any
particular $L$, and any complete discretely valued extension $L'/L$,
we know the inequality $\leq$ for $L'/L$, and if we know $\geq$ for
the extension $L'/K$, then we know as a result the inequality $\geq$
for $L/K$ and $L'/L$. Therefore, it suffices to prove the proposition
with $L'$ in place of $L$, i.e.\ it never hurts to enlarge the $L$ in
question. In particular, by passing to the (completed) normal
closure, we may assume that $L/K$ is normal.
We first establish the proposition in the case when $L/K$ is {\it
finite}. The idea is to harness the ability to enlarge $L$ in order
to really only treat two cases: when $L$ and $K_\infty$ are linearly
disjoint over $K$, and when $L \subset K_\infty$. Let $L_0 = L \cap
K_\infty$, so that $L$ and $K_\infty$ are linearly disjoint over $L_0$
(because $L$ and $K_\infty$ are both normal over $K$).
We treat the extension $L/L_0$ first. We have
\[
(D_L)_\dif^{(+)} = (D_{L_0})_\dif^{(+)}
\otimes_{L_{0,\infty}[\![t]\!]} L_\infty[\![t]\!].
\]
Notice that $\Gal(L/L_0)$ acts only on the right hand factor of this
expression, and it commutes with the $\Ga_L$-action (since $L$ and
$K_\infty$ are linearly disjoint over $L_0$). Thus, the
$\Gal(L/L_0)$-invariants of $(D_L)_\dif^{(+)}$ are the $\Ga_L =
\Ga_{L_0}$-module $(D_{L_0})_\dif^{(+)}$. In other words,
$(D_{L_0})_\dR^{(+)} = ((D_L)_\dR^{(+)})^{\Gal(L/L_0)}$. But
$(D_L)_\dR^{(+)}$ is an $L$-vector space of dimension $\rank D$
equipped with a continuous, semilinear action of $\Gal(L/L_0)$, and
Hilbert's Theorem 90 for finite, Galois field extensions implies that
all finite-dimensional, semilinear $\Gal(L/L_0)$-modules over $L$ are
trivial. Thus, $(D_L)_\dR^{(+)}$ admits a basis of elements fixed
under $\Gal(L/L_0)$, which shows that $(D_{L_0})_\dR^{(+)}$ has the
same dimension as $(D_L)_\dR^{(+)}$, and we have $\geq$ in
this case.
We may now assume that $L = L_0$, so that $L$ is contained in
$K_\infty$. But now we have $(D_L)_\dif^{(+)} = D_\dif^{(+)}$ {\it as
$K_\infty[\![t]\!]$-modules}, and the $\Ga_L$-action on the left hand
side obtained by restricting the $\Ga_K$-action on the right hand
side. Since $\Ga_K$ is abelian, its action on $(D_L)_\dif^{(+)}$
commutes with the $\Ga_L$-action and induces a semilinear $\Ga_K/\Ga_L
= \Gal(L/K)$-action over $L$ on $D_\dif^{(+)}$. We again apply
Hilbert's Theorem 90 to deduce the desired inequality.
Now we turn to the infinite case, and make use of the proposition in
the finite case to simplify things. Suppose we are given a tower $K
\subseteq L_0 \subseteq L$, with $L_0/K$ complete unramified and
$L/L_0$ finite. Applying the finite case to $L/L_0$, we see that we
have equality for $L_0/K$ if and only if we have equality for $L/K$.
Thus we are reduced to proving the proposition when $L = L_0$, so that
$L/K$ is complete unramified.
Next consider the extension $K'/K$. Since it is finite unramified,
the extension $(K'.L)/L$ is also finite unramified. By the finite
case of the proposition, we have equality for the extension
$(K'.L)/L$, so we are reduced to the case where $L$ contains $K'$.
Considering the tower $K \subseteq K' \subseteq L$, we have equality
for $L/K$ if and only if we have equality for $L/K'$. Thus we can
assume that $K = K'$ and $L/K$ is complete unramified.
Note that $K_\infty/K$ is now totally ramified, and hence also
linearly disjoint from (any algebraic subextension of) $L/K$. This
implies the following facts. First, $\Ga_L = \Ga_K$. Moreover, the
actions of $\Ga_L$ and $\Gal(L/K)$ on $L_\infty$ commute with one
another. Finally, we infer that $L_\infty/L$ is totally ramified, or,
equivalently, that $L' = L$ (the left hand side being the maximal
unramified extension of $L$ in $L_\infty$).
Consider the module $(D_L)_\dif^{(+)}$, which can be written as
$D_\dif^{(+)} \otimes_{K_\infty[\![t]\!]} L_\infty[\![t]\!]$. Notice
that $\Gal(L/K)$ acts only on the right hand factor of this expression
for $(D_L)_\dif^{(+)}$, and it commutes with the $\Ga_L$-action.
Thus, the $\Gal(L/K)$-invariants of $(D_L)_\dif^{(+)}$ are the
$\Ga_L=\Ga_K$-module $D_\dif^{(+)}$. In other words, $D_\dR^{(+)} =
((D_L)_\dR^{(+)})^{\Gal(L/K)}$. So, $(D_L)_\dR^{(+)}$ is an
$L$-vector space with a continuous, semilinear action of $\Gal(L/K)$.
Invoking Hilbert's Theorem 90 for complete unramified extensions
(obtained by limits and d\'evissage from the traditional theorem for
$k_L/k$), we see that $(D_L)_\dR^{(+)}$ admits a basis of
$\Gal(L/K)$-invariants. Thus we have equality for $L/K$, as was
desired.
\end{proof}
\begin{cor}\label{cor-derham-descent}
Let $D$ be a $(\vphi,\Ga_K)$-module, and let $L/K$ be an extension, as
in the preceding theorem. Then $D$ is (+)de~Rham if and only if $D_L$
is (+)de~Rham (the latter considered as a $(\vphi,\Ga_L)$-module).
\end{cor}
\begin{proof}
By the theorem, $\dim_K D_\dR^{(+)} = \rank D$ if and only if $\dim_L
(D_L)_\dR^{(+)} = \rank D$.
\end{proof}
Our main use of the proposition is in the case of extension classes.
Let us describe how this occurs. Because the cohomology groups
$H^*(D)$ defined in \S\ref{sect-phigamma-cohom} coincide with Yoneda
groups, to every $c \in H^1(D)$ there corresponds a class of
extensions
\[
0 \to D \to E_c \to {\bf 1} \to 0.
\]
Since the functor $D \mapsto D_\dif^{(+)}$ corresponds to changing the
base rings of finite free objects (over Bezout domains), it preserves
short exact sequences. Therefore, we can hit the exact sequence above
with this functor to obtain an exact sequence of $\Ga_K$-modules over
$K_\infty[\![t]\!]$ or $K_\infty(\!(t)\!)$. We thus obtain a map
$H^1(D) \to H^1(\Ga_K,D_\dif^{(+)})$ by $[E_c] \mapsto
[(E_c)_\dif^{(+)}]$.
Also, if $L/K$ is an extension as in the proposition
above, then $[E_c] \mapsto [(E_c)_L]$ defines a map denoted $\alpha_L
\cn H^1(D) \to H^1(D_L)$.
The {\it Bloch--Kato ``g(+)'' local condition} is the subgroup of
$H^1(D)$ determined by
\[
H^1_{g(+)}(D) = \ker\left[ H^1(D) \to H^1(\Ga_K,D_\dif^{(+)}) \right].
\]
Proposition \ref{ppn-derham-descent} now gives the following descent
result for these subgroups.
\begin{cor}\label{cor-BK-descent}
For any $D$ and $L/K$ as in the statement of Proposition
\ref{ppn-derham-descent}, one has $H^1_{g(+)}(D) = \alpha_L\inv
H^1_{g(+)}(D_L)$, where $\alpha_L$ is defined above.
In particular, taking $L = \wh{K^\unr}$, we see that $H^1_\unr(D) :=
\ker \alpha_{\wh{K^\unr}} \subseteq H^1_{g(+)}(D)$.
\end{cor}
\begin{proof}
A $\Ga_K$-fixed vector of $(E_c)_\dR^{(+)}$ not belonging to the
subspace $D_\dR^{(+)}$ is the same thing as a $\Ga_K$-equivariant
splitting of the map $(E_c)_\dif^{(+)} \to {\bf 1}_\dif^{(+)}$. Thus,
$(E_c)_\dif^{(+)}$ is $\Ga_K$-split if and only if $\dim_K
(E_c)_\dR^{(+)} = \dim_K D_\dR^{(+)} + 1$. By the theorem, this holds
if and only if the corresponding claim holds with $K$, $D$, and $E_c$
replaced by $L$, $D_L$, and $(E_c)_L$, respectively. Thus
$(E_c)_\dif^{(+)}$ is $\Ga_K$-split if and only if
$((E_c)_L)_\dif^{(+)}$ is $\Ga_L$-split. In other words, we have $c
\in H^1_{g(+)}(D)$ if and only if $\alpha_L(c) \in H^1_{g(+)}(D_L)$,
as was desired.
\end{proof}
Suppose that $D$ is (+)de~Rham, and $c \in H^1(D)$. We see from the
above proof that $(E_c)_\dif^{(+)}$ is $\Ga_K$-split if and only if
\[
\dim_K (E_c)_\dR^{(+)} = \dim_K D_\dR^{(+)} + 1 = \rank D + 1 = \rank
E,
\]
which occurs if and only if $E_c$ is itself (+)de~Rham. Thus, in this
case, the Bloch--Kato ``g(+)'' local condition can be interpreted as
the subgroup of $H^1(D)$ determined by
\[
H^1_{g(+)}(D) = \{c \in H^1(D) \mid E_c \text{ is (+)de~Rham}\}.
\]
We also prove a descent result for the crystalline property. Notice
that it includes the case of finite unramified extensions $L/K$. For
a complete discretely valued extension $L/K$, we write $F_L$ for the
maximal absolutely unramified subfield of $L$, i.e.\ the field that
would be called $F$ if we replaced $K$ with $L$.
\begin{ppn}\label{ppn-crys-descent}
Let $D \in \bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$, and let $L/K$ be a
complete unramified extension. Then $\dim_F D_\crys = \dim_{F_L}
(D_L)_\crys$. In particular, $D$ is crystalline if and only if $D_L$
is.
\end{ppn}
\begin{proof}
Since $D_\crys \otimes_F F_L \hookrightarrow (D_L)_\crys$ gives the
inequality $\leq$, it suffices to show the inequality $\geq$.
We may make reductions just as in the proof of Proposition
\ref{ppn-derham-descent}; namely, it suffices to assume that $L$ is
Galois over $K$ and contains $K'$, and to just treat independently the
cases where $K=K'$ and $L=K'$.
Assume $L=K'$. Then since $(K')' = K'$ (see
\S\ref{sect-phigamma-rings}), one has $D_{K'} = D$ {\it as sets}, and
we see that
\[
(D_{K'})_\crys = D_{K'}[t\inv]^{\Ga_{K'}} = D[t\inv]^{\Ga_{K'}}
\]
is a semilinear $\Ga_K/\Ga_{K'} = \Gal(F'/F)$-module over $F'$
satisfying $(D_{K'})_\crys^{\Ga_K/\Ga_{K'}} = D_\crys$. It must be
trivial, by Hilbert's Theorem 90, and hence $D_\crys$ has the desired
$F$-dimension.
Now assume $K=K'$. Since $K_\infty/K$ is totally ramified, and $L/K$
is unramified, the two extensions are linearly disjoint. In
particular, $\Ga_L = \Ga_K$ and $L' = L$. We have
\[
D_L[t\inv] = D[t\inv] \otimes_{B^\dag_{\rig,K}} B^\dag_{\rig,L},
\]
and by linear disjointness the $\Gal(L/K)$-action on the right hand
factor commutes with the $\Ga_K$-action on the tensor product. This,
combined with the fact that $\Ga_L=\Ga_K$, shows that
$(D_L)_\crys^{\Gal(L/K)} = D_\crys$. We know that $(D_L)_\crys$ is a
semilinear $\Gal(L/K) = \Gal(F_L/F)$-module over $F_L$. Applying
Hilbert's Theorem 90, it admits a basis of invariants, and hence
$D_\crys$ has the desired rank.
\end{proof}
\begin{cor}\label{cor-crys-HT-n}
Write $L = \wh{K^\unr}$. For a $(\vphi,\Ga_K)$-module $D$, the
following claims are equivalent:
\begin{itemize}
\item $D$ is $\Ga_K$-isomorphic to $(t^nB^\dag_{\rig,K})^{\oplus d}$.
\item $D_L$ is $\Ga_L$-isomorphic to $(t^nB^\dag_{\rig,L})^{\oplus
d}$.
\item $D$ is crystalline, and all its Hodge--Tate weights equal $n$.
\end{itemize}
\end{cor}
\begin{proof}
The first condition clearly implies the second.
Assuming the second condition, $D_L$ is crystalline (since
$(D_L)_\crys = (t^{-n}D_L)^{\Ga_L}$ is clearly large enough), whence
Proposition \ref{ppn-crys-descent} shows that $D$ is crystalline. And
$\bfD_\dif^+(D) = (t^nK_\infty[\![t]\!])^{\oplus d}$, showing that the
Hodge--Tate weights are all $n$.
Now assume that the third condition holds. The fact that $D$ is
$\Ga_K$-isomorphic to $(t^nB^\dag_{\rig,K})^{\oplus d}$ results
directly from a check of the construction $D_\pst \mapsto D$ given in
\cite[\S II]{B2}.
\end{proof}
This corollary allows us to restate the condition that a filtration
$F^* \subseteq D$ be triangulordinary. The hypothesis becomes: each
$\Gr_F^n$ is crystalline, with all Hodge--Tate weights equal to $n$.
\subsection{Irrelevance of $\vphi$-structure}
\label{sect-local-phi}
Next we state and prove a precise version of Remark \ref{rem-moral}.
\begin{ppn}\label{ppn-phi-irr}
The $\Ga_K$-isomorphism class of $D_\dif^+$ does not depend on which
$\vphi$-structure $D$ is equipped with (although the existence of
$\vphi$ is necessary to define $D_\dif^+$). The same claim holds for
$D_\dif$.
\end{ppn}
\begin{proof}
The construction of $D^r$ in the proof of \cite[Th\'eor\`eme
I.3.3]{B2} shows that, for {\it any} $B^\dag_{\rig,K}$-basis $e =
\{e_1,\ldots,e_d\}$, there exists $0 < r(e) < r(D)$ such that the
$\Ga_K$-action on $e$ is defined over $B^{\dag,r(e)}_{\rig,K}$ and if
$0 < r \leq r(e)$ then $D^r$ is $B^{\dag,r}_{\rig,K}$-spanned by $e$.
Moreover, all the modules $D^r \otimes_{B^{\dag,r}_{\rig,K},\iota_n}
K_\infty[\![t]\!]$ for $0 < r < r(D)$, $n \geq n(K)$, and $rp^n \geq
1$ are isomorphic as $\Ga_K$-modules. Thus, if we consider $D_\dif^+$
as the $K_\infty[\![t]\!]$-span of $e$ with $\Ga_K$-action piped
through $\iota_n$, then the resulting isomorphism class stabilizes for
$n \gg 0$. We simply take $n$ large enough to ensure this. (In other
words, $\vphi$ only guarantees that the $\Ga_K$-structures are
equivalent and chooses the equivalence, but they are all equivalent no
matter which $\vphi$ is used to show it.)
The claim for $D_\dif$ follows from the claim for $D_\dif^+$ after
inverting $t$.
\end{proof}
\begin{cor}
If $D$ and $D'$ are two $(\vphi,\Ga_K)$-modules, and $D \cong D'$ as
$\Ga_K$-modules, then $D$ is (+)de~Rham if and only if $D'$ is
(+)de~Rham.
\end{cor}
We conclude this section by applying Proposition \ref{ppn-phi-irr}
to $(\vphi,\Ga_K)$-modules $D$ having the property that, as
$\Ga_K$-modules, they are isomorphic to $(t^nB^\dag_{\rig,K})^{\oplus
d}$. Such $D$ is crystalline, with $D_\crys = (t^{-n}D)^{\Ga_K}$ a
$\vphi$-module over $F$ from which we can recover $D$ completely: $D =
t^nB^\dag_{\rig,K} \otimes_F D_\crys$.
Although we know being crystalline implies being de~Rham, one can also
see that $D$ is de~Rham by way of the proposition: $D_\dif^+$ is
$\Ga_K$-isomorphic to $(t^nK_\infty[\![t]\!])^{\oplus d}$, and
therefore $D_\dif$ is $\Ga_K$-isomorphic to $K_\infty(\!(t)\!)^{\oplus
d}$, which clearly has enough $\Ga_K$-invariants. Moreover, one finds
that
\begin{align}
\label{eqn-phi-irr1}
\dim_K H^q(\Ga_K,D_\dif^+)
& = d \cdot \dim_K H^q(\Ga_K,t^nK_\infty[\![t]\!]) =
\begin{cases}
d & \text{if }n \leq 0 \\
0 & \text{if }n \geq 1
\end{cases} \\
\intertext{and}
\label{eqn-phi-irr2}
\dim_K H^q(\Ga_K,D_\dif)
& = d \cdot \dim_K H^q(\Ga_K,t^nK_\infty(\!(t)\!)) = d,
\end{align}
for both $q = 0,1$.
\subsection{Cohomology of triangulordinary $(\vphi,\Ga_K)$-modules}
\label{sect-local-proof}
As in \S\ref{sect-definitions}, we set $L = \wh{K^\unr}$. In this
subsection we prove Theorem \ref{thm-local-main}.
\begin{proof}[Proof of Theorem \ref{thm-local-main}]
We first apply the techniques of Galois descent.
We assume the theorem holds for $K=L$. Given a triangulordinary $D$,
the theorem over $L$ shows that $D_L$ is (+)de~Rham, and therefore by
Corollary \ref{cor-derham-descent} we know that $D$ is (+)de~Rham.
Moreover, since the $(\Gr_F^n)_L$ are unconditionally crystalline by
the comments concluding \S\ref{sect-local-phi}, we can apply
Proposition \ref{ppn-crys-descent} to deduce that the $\Gr_F^n$
themselves are crystalline, and hence semistable. Since $D$ is
simultaneously de~Rham and a successive extension of semistable
pieces, \cite[Th\'eor\`eme 6.2]{B1} asserts that $D$ is semistable.
(Note that its proof applies without change to the non-\'etale case.)
Thus we have proved: if $D_L$ is (+)de~Rham, then $D$ is (+)de~Rham
and even semistable, in all cases.
By Corollary \ref{cor-BK-descent}, one has
\[
H^1_{g(+)}(D) = \alpha_L\inv H^1_{g(+)}(D_L).
\]
On the other hand, by its very definition,
\[
H^1_\tord(D;F^*) = \alpha_L\inv H^1_\tord(D_L;F^*_L).
\]
Therefore, if $H^1_\tord(D_L;F^*_L) = H^1_{g(+)}(D_L)$, then
$H^1_\tord(D;F^*) = H^1_{g(+)}(D)$.
The upshot is that we only need to prove (2) and the (+)de~Rham claim
of (1), assuming that
\[
K = L = \wh{K^\unr}, \text{ i.e.\ } k \text{ is algebraically closed.}
\]
Under this assumption, we develop a number of properties of
triangulordinary $(\vphi,\Ga_K)$-modules, organized under the
following lemma. (In particular, part (3) of the lemma finishes part
(1) of the theorem.)
\begin{lem}\label{lem-tord}
Let $D$ be triangulordinary. Then the following claims hold.
\begin{enumerate}
\item If $F^1=D$, then $H^q(\Ga_K,D_\dif^+) = 0$ for $q=0,1$, and
\[
H^1_{g+}(D) = H^1(D) = H^1_\tord(D).
\]
\item One has a decomposition $D_\dif^+ = \bigoplus_n
(\Gr_F^n)_\dif^+$ as $\Ga_K$-modules.
\item $D$ is de~Rham, and it is +de~Rham if and only if $F^1 = 0$.
\item The natural map $H^1(\Ga_K,D_\dif^+) \to
H^1(\Ga_K,(D/F^1)_\dif^+)$ is an isomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
(1) To prove the first claim, we proceed by d\'evissage and induction
on the length of $F^*$, and are immediately reduced to the case where
$D = \Gr^n_F$. But in this situation, Equations
\ref{eqn-phi-irr1}--\ref{eqn-phi-irr2} provide exactly what we desire.
As for the second claim, the first equality follows from the first
claim, while the second equality follows from the fact that $F^1 = D$.
(2) We induct on the length of $F^*$, the case of length $1$ being
trivial. By twisting, we can assume that $F^0 = D$ and $F^1 \neq D$,
and we must show that the extension class
\[
0 \to (F^1)_\dif^+ \to D_\dif^+ \to (D/F^1)_\dif^+ \to 0
\]
is split. By Equation \ref{eqn-phi-irr1}, $(D/F^1)_\dif^+ \cong
K_\infty[\![t]\!]^{\oplus d_0}$. Therefore, as an extension class, we
have
\[
[D_\dif^+] \in \Ext^1_{\Ga_K}({\bf 1}^{\oplus d_0},(F^1)_\dif^+) =
\Ext^1_{\Ga_K}({\bf 1},(F^1)_\dif^+)^{\oplus d_0} =
H^1(\Ga_K,(F^1)_\dif^+)^{\oplus d_0} = 0,
\]
by part (1), since $F^1$ is triangulordinary of weights $\geq 1$.
Hence, the desired extension class is split.
(3) Invoking the decomposition in (2), we have
\[
D_\dif = D_\dif^+[t\inv] = \bigoplus_n (t^nK_\infty[\![t]\!])^{\oplus
d_n}[t\inv] = K_\infty(\!(t)\!)^{\oplus \rank D}.
\]
Therefore, $\dim_K D_\dR = \rank D$, and $D$ is de~Rham. And,
applying $\Ga_K$-invariants directly to the decomposition in (2),
Equation \ref{eqn-phi-irr1} shows that $D$ is +de~Rham if and only if
$d_n = 0$ for all $n \geq 1$, i.e.\ $F^1 = 0$.
(4) This follows by applying $H^1(\Ga_L,\cdot)$ to the decomposition
of (2), and noting Equation \ref{eqn-phi-irr1}.
\end{proof}
The rest of this section is thus aimed at proving claim (2) of the
theorem. Note that it is precisely at this point that we must work
with the ``g+'' condition and not the ``g'' condition.
Consider the commutative diagram
\[\xymatrix{
H^1(D) \ar[r] \ar[d]_\beta & H^1(\Ga_K,D_\dif^+)
\ar[d]^{\protect\rotatebox[origin=c]{90}{$\sim$}} \\
H^1(D/F^1) \ar[r] & H^1(\Ga_K,(D/F^1)_\dif^+)
},\]
where the isomorphism is by (4) of Lemma \ref{lem-tord}. The kernel
of the top row is $H^1_{g+}(D)$, so that $H^1_{g+}(D) = \beta\inv
H^1_{g+}(D/F^1)$. Also, directly from the definition, $H^1_\tord(D) =
\beta\inv H^1_\tord(D/F^1)$. Therefore, we have reduced to the case
where $F^1 = 0$, i.e.\ $D$ only has weights $\leq 0$. In this case
(recall $K = \wh{K^\unr}$), by definition $H^1_\tord(D) = 0$, and so
we must show that $H^1_{g+}(D) = 0$ as well, i.e.\ that $H^1(D)
\hookrightarrow H^1(\Ga_K,D_\dif^+)$.
By part (1) of the theorem, assuming that $F^1=0$ means that $D$ is
+de~Rham. Therefore, $H^1_{g+}(D)$ has an interpretation in terms of
extension classes that are also +de~Rham. We will harness this
interpretation.
Considering the commutative diagram with exact rows
\[\xymatrix{
& H^1(F^j) \ar[r] \ar[d] & H^1(D) \ar[r] \ar[d] & H^1(D/F^j) \ar[d] \\
0 \ar[r] & H^1(\Ga_K,(F^j)_\dif^+) \ar[r] & H^1(\Ga_K,D_\dif^+) \ar[r]
& H^1(\Ga_K,(D/F^j)_\dif^+) \ar[r] & 0
},\]
an easy diagram chase shows that if the outer vertical arrows are
injective, then so is the middle. This allows us to induct on the
length of the filtration $F^*$, and reduce to the case where $F^n = D$
and $F^{n+1} = 0$. In other words, we may assume that, as a
semilinear $\Ga_K$-module, we have $D \cong
(t^nB^\dag_{\rig,K})^{\oplus d}$. As mentioned in the concluding
comments of \S\ref{sect-local-phi}, such objects are easy to classify:
they are of the form $D = t^nB^\dag_{\rig,K} \otimes_F D_\crys$, where
\[
D_\crys = (D[t\inv])^{\Ga_K} = D^{\Ga_K} = (t^{-n}D)^{\Ga_K}
\]
(remember $n \leq 0$) is a semilinear $\vphi$-module over $F$.
We may further simplify, by reducing the case of general $n$ to the
case when $n=0$ by a descending induction on $n$. So, we assume the
claim holds for $n+1 \leq 0$, and show it holds for $n$. We consider
the following commutative diagram with exact rows:
\[\xymatrix@=1.45pc{
H^0(D/tD) \ar[r] \ar[d] & H^1(tD) \ar[r] \ar[d]_a
& H^1(D) \ar[r] \ar[d]_b & H^1(D/tD) \ar[d] \\
H^0(\Ga_K,D_\dif^+/tD_\dif^+) \ar[r] & H^1(\Ga_K,tD_\dif^+) \ar[r]
& H^1(\Ga_K,D_\dif^+) \ar[r] & H^1(\Ga_K,D_\dif^+/tD_\dif^+)
}.\]
In the bottom row, the first and last groups are respectively
isomorphic to
\[
H^q(\Ga_K,K_\infty t^n)^{\oplus d} \text{ for } q=0,1,
\]
and are thus classically seen to be trivial; hence, the bottom middle
arrow is an isomorphism. Using \cite[Lemma 3.2(1--2)]{L}, we have
\begin{multline}\label{eqn-local-KL}
D/tD \cong (t^nB^\dag_{\rig,K}/t^{n+1}B^\dag_{\rig,K})^{\oplus d} \\
= \rlim_r (t^nB^{\dag,r}_{\rig,K}/t^{n+1}B^{\dag,r}_{\rig,K})^{\oplus d}
= \rlim_r \prod_{m \geq n(r)} (K'_m t^n)^{\oplus d}.
\end{multline}
Examining the Herr complex
\[
(D/tD)^{\Delta_K} \xrightarrow{(\vphi-1,\ga-1)} (D/tD)^{\Delta_K}
\oplus (D/tD)^{\Delta_K} \xrightarrow{(1-\ga) \oplus (\vphi-1)}
(D/tD)^{\Delta_K},
\]
we immediately deduce from Equation \ref{eqn-local-KL} that
$H^0(D/tD)$ vanishes, because $n \neq 0$. On the other hand, one
easily uses Equation \ref{eqn-local-KL} and \cite[Lemma 3.2(3--4)]{L}
to calculate that $H^1(D/tD)$ is isomorphic to $\rlim_m
\left[((K'_m)^{\Delta_K}t^n)/(\ga-1)\right]^{\oplus d}$, and each term
of this limit is zero, since $n \neq 0$. Hence, the top middle arrow
in our commutative diagram is an isomorphism too. Notice that $n-1$
is not a $\vphi$-slope on $D$ if and only if $n$ is not a
$\vphi$-slope on $tD$ (which is triangulordinary of all weights
$n+1$). Therefore, the inductive hypothesis applies to $tD$, and $b$
is injective; we conclude that $a$ is also injective. Thus it
suffices to treat the case where $n=0$.
Recall that we want to show that $H^1_{g+}(D) = 0$. In other words,
given a class $c \in H^1(D)$, represented by an extension $E_c$, we
want to show that if $E_c$ is +de~Rham then it must be split. If it
is +de~Rham then, invoking \cite[Th\'eor\`eme 6.2]{B1} again, it must
be semistable. We will show that every semistable extension $E_c$ is
crystalline, {\it under our hypothesis on the $\vphi$-slopes of $D$}.
Then, we will show that every crystalline extension is split.
So let $E = E_c$ be given, and assumed to be semistable. We write
$E_\st = \bfD_\st(E)$ for the associated filtered $(\vphi,N)$-module;
showing that $E$ is crystalline is tantamount to showing that $N=0$ on
$E_\st$. In fact, since $D$ is crystalline, one has $N = 0$ on the
corank $1$ subspace $D_\crys \subset E_\st$. To treat the remainder
of $E_\st$, consider the $\vphi$-modules over $F$ that underlie the
exact sequence
\[
0 \to D_\crys \to E_\st \to {\bf 1}_\crys \to 0.
\]
By the Dieudonn\'e--Manin theorem, the category of $\vphi$-modules
over $F$ is semisimple (recall that $k$ is algebraically closed), so
we may split this extension of $\vphi$-modules, i.e.\ choose a
$\vphi$-fixed vector $e \in E_\st$ that spans the complement of
$D_\crys$. We will know that $N=0$ on $E_\st$ as soon as we know that
$N(e)=0$. To see this, remember that $D$ is $\Ga_K$-isomorphic to
$(B^\dag_{\rig,K})^{\oplus d}$, and so the $\vphi$-slopes (\`a la
Dieudonn\'e--Manin) on $D_\crys$ are equal to the $\vphi$-slopes (\`a
la Kedlaya) on $D$, which by hypothesis are not equal to $-1$, while
the $\vphi$-slope of $e$ is $0$. Writing $E_\st^{(\la)}$ for the
slope-$\la$ part of $E_\st$, recall that $N\vphi = p\vphi N$, and
hence $N(E_\st^{(\la)}) \subseteq E_\st^{(\la-1)}$. On the other
hand, we have
\[
N(e) \in N(E_\st^{(0)}) \subseteq E_\st^{(-1)} = 0.
\]
Therefore, $E$ is crystalline.
Given an extension $E$ of $D$ that is crystalline, we show that it is
trivial. In fact, applying Dieudonn\'e--Manin just as above, we find
that $E_\st = E_\crys$ is split as a $\vphi$-module:
\[
E_\crys = D_\crys \oplus {\bf 1}_\crys.
\]
But we may recover $E$ {\it as a $(\vphi,\Ga_K)$-module} from
$E_\crys$. In fact,
\begin{multline*}
E = B^\dag_{\rig,K} \otimes_F E_\crys = B^\dag_{\rig,K} \otimes_F
(D_\crys \oplus {\bf 1}_\crys) \\ = (B^\dag_{\rig,K} \otimes_F
D_\crys) \oplus (B^\dag_{\rig,K} \otimes_F {\bf 1}_\crys) = D \oplus
{\bf 1},
\end{multline*}
with $\vphi$ acting diagonally and $\Ga_K$ acting on the left
$\otimes$-factors. This shows that $E = E_c$ is split as an extension
of $(\vphi,\Ga_K)$-modules, and its corresponding class $c \in H^1(D)$
is trivial. This completes the proof.
\end{proof}
\begin{rem}
A curious byproduct of the final step of the argument is that, for $D$
in the special form treated there, if an extension $E$ of $D$ as a
$\Ga_K$-module admits any $\vphi$-structure, then it admits only one
$\vphi$-structure.
\end{rem}
\begin{rem}
The hypothesis on the $\vphi$-slopes of $D$ really is necessary when
working with general $(\vphi,\Ga_K)$-modules, as the following example
shows. The trouble seems to be that when we have left the category of
Galois representations, i.e.\ {\it \'etale} $(\vphi,\Ga_K)$-modules
and {\it admissible} filtered $(\vphi,N,G_K)$-modules, there are
simply too many objects to be handled via ordinary-theoretic
techniques. Cf.\ the failure of the above proof to apply to the ``g''
local condition.
\end{rem}
\begin{exmp}\label{exmp-bad}
Consider the filtered $(\vphi,N)$-module $E_\st =
\text{span}(e_0,e_{-1})$, with $\Fil^0 = E_\st$, $\Fil^1 = 0$, and
$\vphi$ and $N$ given by
\[
\vphi(e_0)=e_0,\ \vphi(e_{-1})=p\inv e_{-1},
\qquad \text{and} \qquad
N(e_0)=e_{-1},\ N(e_{-1})=0.
\]
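In the ordered basis $(e_0,e_{-1})$, these operators have matrices
\[
\vphi = \begin{pmatrix} 1 & 0 \\ 0 & p\inv \end{pmatrix},
\qquad
N = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]
and one checks directly that $N\vphi = p\vphi N$, as required.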
Then $E_\st$ corresponds to a $(\vphi,\Ga_K)$-module $E$ of rank $2$
which is not \'etale, is semistable of Hodge--Tate weights both $0$,
and is {\it not crystalline}. Such an object is unheard of in the
classical setting.
Notice that $D_\st = \text{span}(e_{-1})$ corresponds to a subobject
$D$ of $E$, with quotient object isomorphic to ${\bf 1}_\crys$ (taking
$e_0$ to the standard basis element). Thus, $E$ represents a
nontrivial extension class in $H^1(D)$, which is actually +de~Rham
because its Hodge--Tate weights are all $0$. On the other hand, $D$
is only triangulordinary with respect to $F^0=D$, $F^1=0$, and so if
(for example) $K=\wh{K^\unr}$ then $H^1_\tord(D) = 0$, and
$H^1_\tord(D) \subsetneq H^1_{g+}(D)$.
\end{exmp}
The manner of the reduction steps in the proof of the theorem shows
that all counterexamples to its conclusion, outside the context of the
slope hypothesis, arise by some manipulation (twisting, extensions,
descent) from the above example.
We will see in \S\ref{sect-local-ex} that the above counterexample
really does occur as a graded piece within \'etale
$(\vphi,\Ga_K)$-modules arising in nature, namely in the setting of a
modular form $f$ with good reduction at $p$ and slopes $1,k-2$ (where $k$
is the weight of $f$). It is unclear to the author whether its
obstruction can be worked around, even in explicit examples.
\subsection{Comparison with Bloch--Kato, ordinary and trianguline}
\label{sect-compare}
Since the reader might be wondering what ``triangulordinary'' means,
we explain how triangulordinary representations and Selmer groups
relate to common notions due to Bloch--Kato, Greenberg, and Colmez.
All the examples that motivate this work take place when $D$ is
\'etale, so that $D = \bfD^\dag_\rig(V)$ for a {\it bona fide}
$p$-adic representation $V$ of $G_K$. In order to place Theorem
\ref{thm-local-main} into context, we recall the following facts,
which are essentially due to Bloch--Kato \cite{BK}.
\begin{ppn}[\cite{BK}]\label{ppn-local-BK} Let $V$ be de~Rham. Then
the following claims hold.
\begin{enumerate}
\item One always has $H^1_g(V) = H^1_{g+}(V)$, these two items being
defined in \S\ref{sect-galois-descent}.
\item If $V$ is semistable and $\bfD_\crys(V)^{\vphi=p\inv} = 0$, then
$H^1_g(V) = H^1_f(V)$.
\end{enumerate}
In the second item, the local condition $H^1_f(V)$ consists of those
extension classes that are split after tensoring with $B_\crys$, and
provides the correct Selmer group with which to state Bloch--Kato's
conjectural analytic class number formula.
\end{ppn}
Thus, in the triangulordinary setting, the local condition
$H^1_\tord(V;F)$ usually measures $H^1_f(V)$, and hence the
triangulordinary Selmer group computes the Bloch--Kato Selmer group,
which is of intrinsic interest.
We find it helpful to have access to the following equivalent
formulation.
\begin{altdefn}\label{alt-defn}
We remind the reader that by Theorem \ref{thm-local-main}(1), every
triangulordinary $(\vphi,\Ga_K)$-module is semistable. Thus, there is
no loss of generality in assuming this is the case from the outset.
Given a semistable $(\vphi,\Ga_K)$-module $D$, the discussion at the
conclusion of \S\ref{sect-phigamma-monodromy} relates semistable
subobjects of $D$ to subobjects of $D_\st$. We find that the
triangulordinary filtrations $F^* \subseteq D$ are in a natural
correspondence with filtrations $F^* \subseteq D_\st$ by
$(\vphi,N)$-stable subvector spaces (each equipped with its Hodge
filtration induced by $D_\st$), such that each $\Gr_F^n D_\st$ has all
its induced Hodge--Tate weights equal to $n$ (i.e., its induced Hodge
filtration is concentrated in degree $-n$).
Given a corresponding pair $F^* \subseteq D$ and $F^* \subseteq
D_\st$, the gradeds $\Gr_F^n D$ and $\Gr_F^n D_\st$ are linked by the
formula
\[
\Gr_F^n D \cong t^nB^\dag_{\rig,K} \otimes_F \Gr_F^n D_\st
\]
as $(\vphi,\Ga_K)$-modules. Therefore, the $\vphi$-slopes on $\Gr_F^n
D$ are obtained by adding $n$ to those on $\Gr_F^n D_\st$.  As concerns
Theorem \ref{thm-local-main}(2), the requirement that for all $n \leq
0$, $n-1$ not be a $\vphi$-slope on $\Gr_F^n D$ becomes the condition
that, for all such $n$, the graded $\Gr_F^n D_\st$ does not contain
the $\vphi$-slope $-1$.
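The slope shift can be checked on eigenvectors: if, after extending
scalars so that eigenvectors exist, $v \in \Gr_F^n D_\st$ satisfies
$\vphi(v) = \lambda v$, then, since $\vphi(t) = pt$, one has
\[
\vphi(t^n \otimes v) = p^n\lambda\,(t^n \otimes v),
\]
so the slope $\ord_p \lambda$ on $\Gr_F^n D_\st$ contributes the slope
$n + \ord_p \lambda$ on $\Gr_F^n D$.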
\end{altdefn}
\begin{exmp}[Relation to ordinary representations] Let us
see how ordinary representations, defined by Greenberg in \cite{G1},
fit into our context. We are given a Galois representation $V$, so
that $D = \bfD^\dag_\rig(V)$ is \'etale. The ordinary hypothesis is
that $V$ admits a decreasing filtration $F^* \subseteq V$ by
$G_K$-stable subspaces, such that, for each $n$, the representation
$\chi_\cycl^{-n} \otimes \Gr_F^n V$ is unramified. Applying
$\bfD^\dag_\rig$, we obtain a decreasing, $(\vphi,\Ga_K)$-stable
filtration $F^* \subseteq D$ by $B^\dag_{\rig,K}$-direct summands,
such that each $\Gr_F^n D_L$ is \'etale and $\Ga_L$-isomorphic to
$(t^nB^\dag_{\rig,L})^{\oplus d_n}$. Conversely, given such a
filtration on $D$, Theorem \ref{thm-phigamma-equiv} produces an
ordinary filtration on $V$. Thus, given an \'etale $D$, the ordinary
hypothesis is a strengthening of the triangulordinary hypothesis to
require all the graded pieces to be \'etale.
In the language of filtered $(\vphi,N)$-modules, a triangulordinary
filtration $F^* \subseteq D_\st$ corresponds to an ordinary filtration
precisely when all the $\Gr_F^n D_\st$ are admissibly filtered, which
means here that each $\Gr_F^n D_\st$ is of pure $\vphi$-slope $-n$.
Moreover, Theorem \ref{thm-local-main}(2) always applies to ordinary
representations. Namely, for all $n \leq 0$, the number $n-1$ (which
is $\leq -1$) never occurs as a $\vphi$-slope on $\Gr_F^n D$ because
the latter is \'etale. Thus, this theorem provides a generalization
of Flach's result \cite[Lemma 2]{Fl} from the case where $K=\bbQ_p$ to
the case of arbitrary perfect residue field.
We alert the reader to the fact that, although $V$ admits at most one
ordinary filtration, it may admit many different triangulordinary
filtrations, as we will see below.
\end{exmp}
Before discussing trianguline representations, we point out that our
entire theory works perfectly well with $E$-coefficients replacing
the $\bbQ_p$-coefficients of Galois representations, for any finite
extension $E/\bbQ_p$.
\begin{exmp}[Relation to trianguline representations] We now
determine when a triangulordinary $(\vphi,\Ga_K)$-module is
trianguline, and give some comments about the converse. (More
precisely, we discuss when one can modify a triangulordinary
filtration into a trianguline one, and vice versa.) Recall that,
following Colmez \cite[\S0.3]{C}, a $(\vphi,\Ga_K)$-module $D$ is {\it
trianguline} if it is a successive extension of rank $1$ objects,
i.e.\ if there exists a decreasing, separated and exhaustive
filtration $F^* \subseteq D$ by $(\vphi,\Ga_K)$-stable
$B^\dag_{\rig,K}$-direct summands, with each graded of rank $1$. We
call the latter a {\it trianguline filtration}. When $D$ is
semistable, these correspond precisely to {\it refinements} in the
sense of Mazur: complete flags in $D_\st$ by $(\vphi,N)$-stable
subspaces.
A triangulordinary $D$ is trianguline precisely when the gradeds
$\Gr_F^*$ of the triangulordinary filtration $F^*$ are themselves
trianguline. Sufficiency is clear. For necessity, note that $D$ is
semistable, so assume given $D_\st$, and think of triangulordinary
filtrations as being $(\vphi,N)$-stable ones on $D_\st$. Given our
triangulordinary filtration and any (other) trianguline filtration on
$D$, taking the intersections of the two filtrations gives a refinement
of the triangulordinary filtration with rank one gradeds. (One could
avoid using filtered $(\vphi,N)$-modules by converting this last step
into the language of B\'ezout domains.)
In any case, with $D$ triangulordinary, each of the $\Gr_F^*$ is
crystalline (by Corollary \ref{cor-crys-HT-n}), so $D$ is trianguline
if and only if the $(\Gr_F^*)_\crys$ admit refinements. Clearly, this
is the case precisely when $D_\crys$ is an extension of
one-dimensional $\vphi$-stable subspaces. If the residue field $k$ is
finite, then one can always replace the coefficient field $E$ by a
finite extension in order to achieve this.
As for the converse, since triangulordinary $D$ are always semistable,
we ask when a semistable trianguline $D$ is triangulordinary. It
turns out that {\it not all} such $D$ are triangulordinary.  For an
example of a semistable trianguline object that is not
triangulordinary, consider $E$ as in Example \ref{exmp-bad}.  Being
constructed out of $E_\st$,
it is semistable; being an extension of $D$ by ${\bf 1}$ it is
trianguline. Both its Hodge--Tate weights are $0$, so a putative
triangulordinary filtration $F^* \subseteq E$ would have $E =
\Gr_F^0$; by Corollary \ref{cor-crys-HT-n}, in order for $E$ to be
triangulordinary it must be crystalline. But $E$ is not crystalline.
The above example shows that, roughly, having a nonzero monodromy
operator acting within a fixed Hodge--Tate weight part is an
obstruction to being triangulordinary. Let us assume that this is not
the case for $D$, and suppose we are given a trianguline filtration
$F^* \subseteq D$. In order for $D$ to be triangulordinary, we must
be able to arrange that the Hodge--Tate weights of the $\Gr_F^*$ are
nondecreasing, because then, weakening $F^*$ so that each $\Gr_F^n$
has all Hodge--Tate weights equal to $n$, the resulting gradeds must
also be crystalline (by our rough assumption), hence
$\Ga_L$-isomorphic to $(t^nB^\dag_{\rig,L})^{\oplus d_n}$ by Corollary
\ref{cor-crys-HT-n}. In order to rearrange $F^*$ to have Hodge--Tate
weights in nondecreasing order, we must be able to break up any
extension between adjacent gradeds that are in the wrong order. Given
the intermediate extension of filtered $(\vphi,N)$-modules
\[
0 \to (\Gr_F^{n+1})_\crys \to (F^n/F^{n+2})_\st \to (\Gr_F^n)_\crys
\to 0
\]
whose Hodge--Tate weights are in decreasing order, one easily checks
that any $(\vphi,N)$-equivariant splitting will do. But
$(\vphi,N)$-equivariant splittings, in turn, might not exist. The
$\vphi$-structure itself could be nonsemisimple; by
Dieudonn\'e--Manin, this would require the crystalline $\vphi$-slopes
on the adjacent gradeds to be equal, and the $\vphi$-extension would
only necessarily split upon restriction to $\wh{K^\unr}$. Assuming
otherwise, that one can find a $\vphi$-eigenvector $v \in
(F^n/F^{n+2})_\st$ mapping onto a basis for $(\Gr_F^n)_\crys$, the
extension is split as $(\vphi,N)$-module if and only if $N(v) = 0$,
which might or might not hold.
In summary, the trianguline condition is roughly more general than the
triangulordinary condition. Triangulordinary representations are
semistable, and are trianguline when their filtrations may be further
subdivided to have gradeds of rank $1$; the latter always happens
after an extension of coefficients when $k$ is finite.
Trianguline $D$ may be highly nonsemistable due to continuous
variation of Sen weights. When they are semistable, they may fail to
be triangulordinary, if they have nontrivial extensions with the wrong
ordering of Hodge--Tate weights, or if they have extensions of common
Hodge--Tate weight that are semistable but not crystalline.
\end{exmp}
\subsection{Examples of triangulordinary representations}
\label{sect-local-ex}
In this section we explain when abelian varieties and modular forms
are triangulordinary. In passing, we gather for easy reference
descriptions of the invariants of the cyclotomic character and modular
forms. Since many different normalizations are used in the
literature, we have made an effort to organize them systematically.
Let us begin with some discussion of normalizations. The general
rules are summarized in the following table. The initial column says
what kind of motive we are dealing with: one cut out of homology or
cohomology. The first property is which Frobenius operator on
$\ell$-adic realizations has $\ell$-adic {\it integer} eigenvalues.
Next is which power of crystalline Frobenius, $\vphi$ or $\vphi\inv$,
has $p$-adic integer eigenvalues. Then come the degrees in which we
expect to see jumps in the Hodge filtration. Finally, we see which
powers of the cyclotomic character tend to appear in the action of
$\Ga_\bbQp$ on basis elements of the $\bfD_\dif^+$ (which is a rough
indication of the $\Ga_\bbQp$-action on $\bfD^\dag_\rig$); these are
the jumps we expect to see in a triangulordinary filtration.
\vskip 12pt
\begin{tabular}{|l|l|l|}
\hline
& homological & cohomological \\
\hline
\hline
$\ell$-adic $\Frob$ & arithmetic & geometric \\
\hline
\hline
crystalline $\vphi$ & $\vphi\inv$ & $\vphi$ \\
\hline
Hodge jumps & nonpositive & nonnegative \\
\hline
\hline
$\Ga_\bbQp$ on $\bfD_\dif^+$ & nonnegative & nonpositive \\
\hline
$\nabla$ord jumps & nonnegative & nonpositive \\
\hline
\end{tabular}
\vskip 12pt
We give three reminders: the $\vphi$ on $\bfD^\dag_\rig$ is always
\'etale, the cyclotomic character and Tate modules of abelian varieties
are {\it homological} objects, and this table is invariant under the
choice of sign of the Hodge--Tate weight of the cyclotomic character.
(In this text, the cyclotomic character has Hodge--Tate weight $+1$.)
\begin{exmp}[The cyclotomic character] We consider the $p$-adic
cyclotomic character $\chi_\cycl$ as a $1$-dimensional $\bbQ_p$-vector
space $\bbQ_p \cdot e_{\chi_\cycl}$, equipped with a $\bbQ_p$-linear
$G_K$-action via $g(e_{\chi_\cycl}) = \chi_\cycl(g)e_{\chi_\cycl}$.
One has $\bfD^\dag_\rig(\chi_\cycl^n) = B^\dag_{\rig,K} \cdot (1
\otimes e_{\chi_\cycl}^{\otimes n})$ and $\bfD_\crys(\chi_\cycl^n) = F
\cdot (t^{-n} \otimes e_{\chi_\cycl}^{\otimes n})$.  From these, we derive the
following table, giving actions on the basis vectors just mentioned.
\vskip 12pt
\begin{tabular}{|l||l|l||l|l|}
\hline
$\ell$-adic $\Frob_\ell^\text{arith}$
& $\vphi$ on $\bfD_\crys$
& $\Gr^? \neq 0$
& $\vphi$ on $\bfD^\dag_\rig$
& $\Ga_\bbQp$ on $\bfD^\dag_\rig$ \\
\hline
$p^n$ & $p^{-n}$ & $-n$ & $1$ & $\chi_\cycl^n$ \\
\hline
\end{tabular}
\vskip 12pt
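As a consistency check on the table: since $\vphi(t) = pt$ and $\vphi$
fixes $e_{\chi_\cycl}$, one computes
\[
\vphi(t^{-n} \otimes e_{\chi_\cycl}^{\otimes n})
= p^{-n}\,(t^{-n} \otimes e_{\chi_\cycl}^{\otimes n}),
\]
in agreement with the $\bfD_\crys$ column, while $\ga \in \Ga_\bbQp$
multiplies $1 \otimes e_{\chi_\cycl}^{\otimes n}$ by
$\chi_\cycl(\ga)^n$, in agreement with the last column.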
Finally, we point out that the powers of the cyclotomic character are
all ordinary, hence triangulordinary. Since they are one-dimensional,
the ordinary filtration is the only choice of triangulordinary
filtration.
\end{exmp}
\begin{exmp}[Abelian varieties]
Take a semistable abelian variety $B$ over $K$ of dimension $d \geq
1$, and consider $D_\st = \bfD_\st(V)$, with $V = T_pB \otimes \bbQ$
the $p$-adic Tate module up to isogeny. Thus, when dealing with
abelian varieties, we are primarily concerned with homology. It is
well-known that the Hodge--Tate weights of $B$ are $0$ and $1$, each
with multiplicity $d$, and this tells us that the Hodge filtration
$H^* \subseteq D_\dR$ satisfies $\dim_K \Gr_H^0 = \dim_K \Gr_H^{-1} =
d$, and our triangulordinary filtration $F^* \subseteq D_\st$ must
satisfy $\rank \Gr_F^0 = \rank \Gr_F^1 = d$, and all other gradeds are
trivial. So, $F^*$ consists of the single datum of a
$(\vphi,N)$-stable $F$-subspace $F^1 \subset D_\st$ of dimension $d$.
Weak admissibility, here, means: nonzero slopes do not meet the Hodge
filtration $H$. Ordinary means that half these slopes are $0$ (can
lie anywhere), and half are $-1$ (cannot lie in $H$: span a weakly
admissible submodule). Triangulordinary means, one can find half of
these slopes, $\vphi$-stably, not contained in $H$.  This means that the
corresponding subspace $F^1$ has induced Hodge filtration concentrated
in degree $-1$, which automatically forces $\Gr_F^0 = D_\st/F^1$ to
have induced Hodge filtration concentrated in degree $0$.
Thus, in short, a triangulordinary filtration for $V$ consists of a
$d$-dimensional $(\vphi,N)$-stable subspace $F^1 \subseteq D_\st$ such
that $F^1 \otimes_F K$ is complementary to the Hodge filtration $H^0
\subset D_\dR$.
We stress that, because we are in a homological situation, the
$\vphi$-slopes on $\bfD_\st(V)$ are nonpositive. In order for Theorem
\ref{thm-local-main}(2) to apply, all we need is that $-1$ does not
occur as a $\vphi$-slope on the quotient $\Gr_F^0 D_\crys =
D_\crys/F^1$, or, equivalently, that every instance of slope $-1$
occurs within $F^1$. (This hypothesis is a variant of a ``noncritical
slope'' condition.)
Let us illustrate the above with some examples, assuming, for
simplicity, that $B$ has good reduction and our coefficients are
$E=\bbQ_p$:
Suppose $B$ has slopes $-1,-2/3,-1/3,0$. Then $B$ is nonordinary, and
always triangulordinary: for the triangulordinary filtration, one can
take either of the two spaces with slopes $(-1,-2/3)$, $(-1,-1/3)$.
When the $0$-slope is not in $H$, one gets two more options. The
theorem applies to the first two of these, but not to the possible
latter two.
If $B$ has slopes $-1,-1/2,-1/2,0$, then it is nonordinary, and always
triangulordinary: its filtrations include the two slope $(-1,-1/2)$
spaces, and, if the slope $0$ space is not in $H$, then the two slope
$(-1/2,0)$ spaces are also valid. (In particular, having slopes equal
to $-1/2$ is {\it not} necessarily an obstruction.) The theorem
applies to the first filtrations, and not to the second ones.
Let $B$ have slopes $-1,-1,-1,-1/2,-1/2,0,0,0$, and assume $\vphi$
acts irreducibly on its pure-slope spaces. Then $B$ is not
triangulordinary, simply because there are no $\vphi$-stable subspaces
with half the total dimension.
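For instance, consider the simplest case $d = 1$, an elliptic curve
with good reduction and $E = \bbQ_p$.  In the ordinary case the slopes
are $0,-1$; the slope $-1$ line is $\vphi$-stable and, by weak
admissibility, does not meet $H$, so it furnishes a triangulordinary
(indeed, the ordinary) filtration $F^1$.  In the supersingular case
the slopes are $-1/2,-1/2$, and when $\vphi$ acts irreducibly (as
happens, for example, when $a_p = 0$) there is no $\vphi$-stable line
at all, so the curve is not triangulordinary over $\bbQ_p$.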
We leave it to the reader to examine, when the residue field $k$ is
finite, what additional possibilities occur after enlarging the
coefficient field $E$ to break the pure-slope spaces into extensions
of one-dimensional $\vphi$-stable spaces.
\end{exmp}
In the following example, fix a coefficient field $E$.
\begin{exmp}[Elliptic modular eigenforms]\label{exmp-MFs}
Let $f \in S_k(\Ga_1(M),\psi;E)$ be a normalized elliptic modular
cuspidal eigenform such that $k \geq 2$, having $q$-expansion $\sum
a_nq^n$. Deligne has associated to $f$ a $2$-dimensional $E$-valued
representation of the absolute Galois group of $\bbQ$, which is
unramified away from $Mp\infty$ and de~Rham at $p$. It is absolutely
irreducible, so it is characterized (up to a scalar multiple) by its
characteristic polynomials; by Chebotarev, it is enough to know the
polynomials of the Frobenius elements $\Frob_\ell$ at primes $\ell
\nmid Mp$. For such $\ell$, one has
\[
\text{trace}(\Frob_\ell) = a_\ell
\quad\text{ and }\quad
\det(\Frob_\ell) = \ell^{k-1}\psi(\ell).
\]
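Equivalently, for $\ell \nmid Mp$ the characteristic polynomial of
$\Frob_\ell$ is
\[
X^2 - a_\ell X + \ell^{k-1}\psi(\ell),
\]
and one knows that, in the crystalline case $p \nmid M$, the
$\vphi$-eigenvalues $\la,\mu$ considered below are the roots of the
analogous polynomial $X^2 - a_pX + p^{k-1}\psi(p)$ at $p$.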
One ambiguity that typically makes these matters confusing is whether
$\Frob_\ell$ refers to the arithmetic or the geometric Frobenius. In
the case where the above equations describe the arithmetic Frobenius,
we say that the representation is the {\it homological} normalization,
and denote it $V_f^\text{hom}$. When they apply to the geometric
Frobenius, we say that the representation is the {\it cohomological}
variant, and we denote it $V_f^\text{coh}$. These names originate in
whether $V_f^?$ is found within the \'etale homology or cohomology of
Kuga--Sato varieties, respectively. In either case, we only consider
the restriction of the $G_\bbQ$-action to a decomposition group
$G_\bbQp$.
In what follows, we make the following hypothesis: $f$ is semistable
at $p$, and the operator $\vphi$ on $\bfD_\st(V_f^\text{coh})$ has
{\it distinct} eigenvalues $\la,\mu$ lying in $E$. (This is
equivalent to requiring the same on $\bfD_\st(V_f^\text{hom})$, with
the roots $\la\inv,\mu\inv \in E$.)  We order the roots so that
$\ord_p \la \leq \ord_p \mu$; one has $\ord_p \la + \ord_p \mu = k-1$,
with $\ord_p \la = 0$ if and only if $f$ is ordinary at $p$.
The cohomological normalization has
\[
\bfD_\st(V_f^\text{coh}) = E \cdot e_\la \oplus E \cdot e_\mu
\quad \text{with} \quad
\left\{\begin{array}{ll}
\vphi(e_\nu) = \nu e_\nu, \\
\bfD = F^0 \supsetneq F^1 = \cdots = F^{k-1} \supsetneq F^k = 0.
\end{array}\right.
\]
The ``weak admissibility'' condition means that $e_\la \notin F^1$,
and $e_\mu \notin F^1$ unless possibly if $\ord_p \mu = k-1$ (in which
case $f$ is split ordinary at $p$). The monodromy operator $N$ is
only nonzero when $p \mid M$; if $N \neq 0$ then $\ord_p \mu = \ord_p
\la + 1$, and $N$ is determined by $N(e_\mu) = e_\la$ and $N(e_\la) =
0$ (up to rescaling $e_\la$).
Since the nonzero gradings for the Hodge filtration are $0,k-1$, each
with one-dimensional graded, the nonzero triangulordinary gradings are
$1-k,0$, each with one-dimensional graded. Thus we consider the
rank-one $\vphi$-stable subspaces of $\bfD_\st(V_f^\text{coh})$, and
their corresponding $(\vphi,\Ga_\bbQp)$-modules. Let $\nu$ be one of
$\la$ or $\mu$ and let $\nu'$ be the other; let $D_\nu = E \cdot e_\nu
\subseteq \bfD_\st$ and $D'_\nu = \bfD_\st/D_\nu$. Then, for
comparison to the cyclotomic character, one has the following table.
The parenthetical values are used precisely when $f$ is split ordinary
at $p$ and $\nu = \mu$.
\vskip 12pt
\begin{tabular}{|l|l||l|l|}
\hline
$\vphi$ on $D_\nu$
& $\Gr^? \neq 0$
& $\vphi$ on $(D_\nu)^\dag_\rig$
& $\Ga_\bbQp$ on $(D_\nu)^\dag_\rig$ \\
\hline
$\nu$ & $0$ ($k-1$) & $\nu$ (${\nu'}\inv$) & $1$ ($\chi_\cycl^{1-k}$) \\
\hline
\hline
$\vphi$ on $D'_\nu$
& $\Gr^? \neq 0$
& $\vphi$ on $(D'_\nu)^\dag_\rig$
& $\Ga_\bbQp$ on $(D'_\nu)^\dag_\rig$ \\
\hline
$\nu'$ & $k-1$ ($0$) & $\nu\inv$ ($\nu'$) & $\chi_\cycl^{1-k}$ ($1$)\\
\hline
\end{tabular}
\vskip 12pt
\noindent The triangulordinary hypothesis on $F^*$ requires that $F^0$
not meet the Hodge filtration $H^1 = H^{k-1} \subset D_\st \otimes_F
K$, and, if this holds, then one obtains for free that $\Gr^{1-k}
D_\st = D_\st/F^0$ has induced Hodge--Tate weight $k-1$, as is
required. Examining the above table, we see that $e_\nu$ always
defines a trianguline filtration, and that $e_\nu$ defines a
triangulordinary filtration except in the parenthetical (split
ordinary) case. In the split ordinary case, taking $\nu = \la$ still
gives a triangulordinary filtration.  Also, Theorem
\ref{thm-local-main}(2) always applies, because the only nonzero
$\Gr^n$ with $n \leq 0$ is with $n=0$, and the only $\vphi$-slope
occurring there is nonnegative.
We obtain descriptions of the homological normalization by taking
$E$-linear duals of everything above. Namely, $\bfD_\st$ has
\[
\bfD_\st(V_f^\text{hom})
= E \cdot e_{\la\inv} \oplus E \cdot e_{\mu\inv}
\quad \text{with} \quad
\left\{\begin{array}{ll}
\vphi(e_{\nu\inv}) = \nu\inv e_{\nu\inv}, \\
\bfD = F^{1-k} \supsetneq F^{2-k} = \cdots = F^0 \supsetneq F^1 = 0.
\end{array}\right.
\]
The ``weak admissibility'' condition means that $e_{\mu\inv} \notin
F^1$, and $e_{\la\inv} \notin F^1$ unless possibly if $\ord_p \la = 0$
(in which case $f$ is split ordinary at $p$). The monodromy operator
$N$ is only nonzero when $p \mid M$; if $N \neq 0$ then $\ord_p \mu =
\ord_p \la + 1$, and $N$ is determined by $N(e_{\la\inv}) =
e_{\mu\inv}$ and $N(e_{\mu\inv}) = 0$ (after perhaps rescaling
$e_{\mu\inv}$). Note the role reversal between $\mu$ and $\la$; this
is only because $\ord_p \mu\inv \leq \ord_p \la\inv$.
Our triangulordinary filtration must have one-dimensional nonzero
gradeds in degrees $0,1$, and so we consider the rank-one
$\vphi$-stable subspaces of $\bfD_\st(V_f^\text{hom})$, and their
corresponding $(\vphi,\Ga_\bbQp)$-modules. Let $\nu$ be one of $\la$
or $\mu$ and let $\nu'$ be the other; let $D_{\nu\inv} = E \cdot
e_{\nu\inv} \subseteq \bfD_\st$ and $D'_{\nu\inv} =
\bfD_\st/D_{\nu\inv}$. Again, we have the following table. The
parenthetical values are used precisely when $f$ is split ordinary at
$p$ and $\nu\inv = \la\inv$.
\vskip 12pt
\begin{tabular}{|l|l||l|l|}
\hline
$\vphi$ on $D_{\nu\inv}$
& $\Gr^? \neq 0$
& $\vphi$ on $(D_{\nu\inv})^\dag_\rig$
& $\Ga_\bbQp$ on $(D_{\nu\inv})^\dag_\rig$ \\
\hline
$\nu\inv$ & $1-k$ ($0$) & $\nu'$ ($\nu\inv$) & $\chi_\cycl^{k-1}$ ($1$) \\
\hline
\hline
$\vphi$ on $D'_{\nu\inv}$
& $\Gr^? \neq 0$
& $\vphi$ on $(D'_{\nu\inv})^\dag_\rig$
& $\Ga_\bbQp$ on $(D'_{\nu\inv})^\dag_\rig$ \\
\hline
${\nu'}\inv$ & $0$ ($1-k$) & ${\nu'}\inv$ ($\nu$) & $1$ ($\chi_\cycl^{k-1}$)
\\
\hline
\end{tabular}
\vskip 12pt
\noindent In particular, we see that $e_{\nu\inv}$ always defines a
trianguline filtration, and that $e_{\nu\inv}$ defines a
triangulordinary filtration except in the parenthetical (split
ordinary) case. In the split ordinary case, taking $\nu = \mu\inv$
still gives a triangulordinary filtration. As concerns Theorem
\ref{thm-local-main}, $\Gr^{1-k}$ always has nonnegative slope, and
hence presents no obstruction. But $\Gr^0$ has slope $\ord_p \nu +
1-k$, which is equal to $-1$ when $\ord_p \nu = k-2$. Thus, the
theorem does not apply to modular forms with triangulordinary
filtration determined by a $\vphi$-eigenvalue of slope $k-2$.
We invite the reader to check that the above conclusions for
$V_f^\text{hom}$ agree with the conclusions made, in the case $k=2$,
for $T_pB_f \otimes \bbQ$, where $B_f$ is the corresponding modular
abelian variety.
\end{exmp}
\begin{rem}
The example above should generalize readily to Hilbert modular forms.
\end{rem}
\begin{rem}
It is {\it extremely unusual} to naturally encounter a theorem that
applies only to modular forms with a $U_p$-slope $\neq k-2$, as does
Theorem \ref{thm-local-main} for $V_f^\text{hom}$. The only special
slopes are usually $0$, $k-1$, and $\frac{k-1}{2}$.
\end{rem}
\section{Review of $(\vphi,\Ga_K)$-modules}\label{sect-phigamma}
For this entire section, we fix a complete, discretely valued field
$K$ of characteristic $0$, supposed to have a residue field $k$ that
is perfect of characteristic $p > 0$. Choose once and for all an
algebraic closure $\ov{K}$ of $K$ and set $G_K = \Gal(\ov{K}/K)$. Our
goal in this section is to review the relevant theory of
$(\vphi,\Ga_K)$-modules, which provide a means of describing
continuous $p$-adic representations of $G_K$ and their associated
invariants.
\subsection{Definitions of many rings}\label{sect-phigamma-rings}
In terms of our fixed $K$, we define a dizzying list of objects. Our
notation most closely follows that of Colmez; in particular, our $r$
varies inversely with Berger's. For any field $E$, write $E_n =
E(\mu_{p^n})$ for $n \leq \infty$. If $E$ carries a valuation, write
$\calO_E$ for its ring of integers.
{\it Fields.} Let $F = \Frac W(k)$ be the fraction field of the Witt
vectors of $k$. Then $F$ embeds canonically into $K$ as its maximal
absolutely unramified subfield, and $K/F$ is a finite, totally
ramified extension. If $k'$ denotes the residue field of $K_\infty$,
which is finite over $k$, then we define $F' = \Frac W(k')$. Then
$F'$ is the maximal unramified extension of $F$ in
$K_\infty$\footnote{One can have $F \subsetneq F'$ and $F' \nsubseteq
K$! Take, for example, $p=3$ and $K=\bbQ_3(\sqrt{3})$.}, and $K' =
K.F'$ is the maximal unramified extension of $K$ in $K_\infty$, so
that $K_\infty/K'$ is totally ramified. Observe that, since $K'
\subseteq K_\infty$, for $n \gg 0$ one has $K' \subseteq K_n$ and
hence $K'_n = K_n$ and $K'_\infty = K_\infty$, and therefore $(K')' =
K'$.
We set $H_K = \Gal(\ov{K}/K_\infty)$ and $\Ga_K = \Gal(K_\infty/K)$.
The latter group is rather simple. If $E$ is any field of
characteristic not equal to $p$, then the action of $\Gal(\ov{E}/E)$
on $\mu_{p^\infty}(E)$ is described by a uniquely determined character
$\chi_\cycl \cn \Gal(\ov{E}/E) \to \bbZ_p^\times$, called the
cyclotomic character. The fundamental theorem of Galois theory in
this case says that $\chi_\cycl$ identifies $\Gal(E_\infty/E)$ with a
closed subgroup of $\bbZ_p^\times$. In the case at hand, using the
fact that $K$ is discretely valued, one finds that $K_\infty/K$ is
infinite, so that $\chi_\cycl$ identifies $\Ga_K$ with an open
subgroup of $\bbZ_p^\times$, which by force must be procyclic or
$\{\pm1\} \times (\text{procyclic})$. One has $H_{K'} = H_K$ by our
earlier remarks, and $\Ga_{K'}$ has finite index in $\Ga_K$.
Moreover, one has
\[
\Ga_K/\Ga_{K'} = \Gal(K'/K) = \Gal(F'/F) = \Gal(k'/k).
\]
We have the following diagram, where (for readability) $\star =
\Ga_K/\Ga_{K'}$, ``ur'' means unramified, and ``tr'' means totally
ramified.
\[\xymatrix@=1.45pc{
& {\ov{K}} \\
& K_\infty \ar@{-}[u]^{H_K} \\
& & F'_\infty \ar@{-}[ul] \\
& K' \ar@{-}[uu]^{\Ga_{K'}}_{\text{tr}} \\
K \ar@{-}[ur]^\star_{\text{ur}} \ar@/^1pc/@{-}[uuur]^{\Ga_K}
& & F' \ar@{-}[ul]_{\text{tr}} \ar@{-}[uu]_{\text{tr}} \\
& F \ar@{-}[ul]_{\text{tr}} \ar@{-}[ur]^\star_{\text{ur}}
}\]
I would like to point out to the novice that dividing up $G_K$ into
$H_K$ and $\Ga_K$ is not traditional. Classically, one divides up
$G_K$ into $I_K$ and $G_k$, where $I_K \subseteq G_K$ is the inertia
subgroup and $G_k = G_K/I_K$ is the absolute Galois group of $k$.
(Note that we have a canonical algebraic closure of $k$, namely the
residue field $\ov{k}$ of $\ov{K}$.) In fact, it ends up not being
very hard to uncover traditional un/ramification information when
using $H_K$ and $\Ga_K$ instead, so this method is much more powerful,
at least in the setting of $p$-adic representations.
{\it Robba rings.} There are three main variants of
$(\vphi,\Ga_K)$-modules, but we will only need the variant that lives
over the Robba ring, so we now make a beeline for these.  All we
really need is that the ``field of norms'' construction allows one to
make a certain choice of an indeterminate $\pi_K$, and associates to
$K$ a constant $e_K\ (= \ord_{\wt{E}^+}(\ov{\pi}_K)) > 0$. When
$K=F$, there is a {\it canonical} uniformizer which is written $\pi$,
and one can calculate that $e_F = p/(p-1)$.
Berger's Robba ring $B^\dag_{\rig,K}$ is defined to be the union of
the rings $B^{\dag,r}_{\rig,K}$ for $r>0$. The latter are defined by
\[
B^{\dag,r}_{\rig,K} = \Bigg\{ f(\pi_K) = \sum_{n \in \bbZ} a_n \pi_K^n \
\Bigg|\
\begin{array}{cc}
a_n \in F',\\
f(X) \text{ convergent for } 0 < \ord_p(X) < r/e_K
\end{array} \Bigg\}.
\]
Although all these rings are non-Noetherian, they are not too
unpleasant. For example, the rings $B^{\dag,r}_{\rig,K}$ are B\'ezout
domains: they admit a theory of principal divisors, and they have a
reasonable theory of finite free modules. See \cite[\S4.2]{B1} for
details.
If $L$ is another CDVF with perfect residue field, with $K$
continuously embedded into it, there is a canonical embedding
$B^{\dag,r}_{\rig,K} \hookrightarrow B^{\dag,r}_{\rig,L}$ for $r$
sufficiently small. More specifically, one can arrange for $\pi_L$ to
satisfy an Eisenstein polynomial over a subring of $B^\dag_{\rig,K}
\otimes_{F'} F_L'$ with respect to a suitable $\pi_K$-adic valuation.
(The term $F_L'$ is the maximal absolutely unramified subfield of
$L_\infty$, analogous to $F'$.) The constants $e_K$ and $e_L$ are
normalized so that the growth conditions on power series coincide.
When $L/K$ is finite, we see that the inclusions $B^{\dag,r}_{\rig,K} \subseteq
B^{\dag,r}_{\rig,L}$ (for $r$ sufficiently small), and hence also
$B^\dag_{\rig,K} \subseteq B^\dag_{\rig,L}$, are finite ring
extensions.
A more delicate construction of these rings (as in \cite{B1}) endows
them with natural, commuting ring-endomorphism actions of $\Ga_K$ and
an operator $\vphi$. One knows that $\vphi$ acts by Witt
functoriality on $a_n \in F'$, and $\Ga_K$ acts on $a_n$ through its
quotient $\Ga_K/\Ga_{K'} = \Gal(F'/F)$. The action on $\pi_K$ is
generally not explicitly given (especially since there is some choice
in $\pi_K$), except when $K=F$, in which case $\vphi(\pi) =
(1+\pi)^p-1$ and $\ga \in \Ga_K$ obeys $\ga(\pi) =
(1+\pi)^{\chi_\cycl(\ga)}-1$. The embeddings $B^\dag_{\rig,K}
\subseteq B^\dag_{\rig,L}$ are $\vphi$- and $\Ga_L$-equivariant
(considering $\Ga_L \hookrightarrow \Ga_K$).
Finally, we point out that the series $\log(1+\pi) = \sum_{n \geq 1}
\frac{(-1)^{n-1}}{n}\pi^n$ converges in $B^{\dag,r}_{\rig,\bbQp}$ for
every $r>0$, and we call its limit $t$. By means of the above
embedding process, $t$ is an element of every $B^\dag_{\rig,K}$. One
has $\vphi(t)=pt$ and $\ga(t) = \chi_\cycl(\ga)t$ for all $\ga \in
\Ga_K$.
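These formulas can be checked directly from the action on $\pi$: one
has
\[
\vphi(t) = \log(1+\vphi(\pi)) = \log\big((1+\pi)^p\big)
= p\log(1+\pi) = pt,
\]
and likewise $\ga(t) = \log\big((1+\pi)^{\chi_\cycl(\ga)}\big) =
\chi_\cycl(\ga)\,t$.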
\subsection{$\vphi$- and $(\vphi,\Ga_K)$-modules over the Robba
ring}
Since many important facts about $(\vphi,\Ga_K)$-modules arise from
their underlying $\vphi$-modules, we first recall general properties
of $\vphi$-modules over the Robba ring.
Suppose $B$ is a ring equipped with a ring endomorphism $\vphi$. A
$\vphi$-module over $B$ is a free, finite rank $B$-module $D$ equipped
with a semilinear action of $\vphi$, satisfying the nondegeneracy
condition that $\vphi(D)$ span $D$ over $B$. The adjective
``semilinear'' indicates that one has $\vphi(bd) = \vphi(b)\vphi(d)$
for $b \in B$ and $d \in D$, rather than $\vphi(bd) = b\vphi(d)$. We
write $\bfM(\vphi)_{/B}$ for the category of $\vphi$-modules. Unless
otherwise specified, we understand that $B = B^\dag_{\rig,K}$.
It is worth noting that, in general, $\vphi(B^{\dag,r}_{\rig,K})
\nsubseteq B^{\dag,r}_{\rig,K}$, but instead
$\vphi(B^{\dag,r}_{\rig,K}) \subseteq B^{\dag,r/p}_{\rig,K}$ (the
latter of which contains $B^{\dag,r}_{\rig,K}$). With this in mind,
it does not make sense to define a $\vphi$-module over
$B^{\dag,r}_{\rig,K}$. The best we can (and will) ask for is a basis
of $D$ with respect to which the matrix for $\vphi$ lies in
$B^{\dag,r}_{\rig,K}$. In this regard, there is the following crucial
lemma of Cherbonnier.
\begin{lem}[{\cite[Th\'eor\`eme I.3.3]{B2}}] For any $\vphi$-module $D$
and any $r$ sufficiently small, say $r < r(D)$, there exists a unique
$B^{\dag,r}_{\rig,K}$-lattice $D^r \subset D$ such that
$B^{\dag,r/p}_{\rig,K} \cdot \vphi(D^r)$ contains a basis of $D^r$.
One has
\[
B^{\dag,s}_{\rig,K} \otimes_{B^{\dag,r}_{\rig,K}} D^r
\stackrel{\sim}{\to} B^{\dag,s}_{\rig,K} \cdot D^r = D^s \subset D
\]
for $0 < s \leq r < r(D)$.
\end{lem}
We will make heavy use of the slope theory for $\vphi$-modules.
Currently the best reference for this material is \cite{K}, whose
proof easily specializes to give the classical Dieudonn\'e--Manin
theorem.
\begin{thm}[\cite{K}]
There is an over-ring $\wt{B}^\dag_\rig$ of $B^\dag_{\rig,K}
\otimes_{F'} \wh{F^\unr}$, with an extension of the operator $\vphi$
to it, such that the following claims hold.
\begin{enumerate}
\item For every $\vphi$-module $D$ over $\wt{B}^\dag_\rig$, there is a
finite extension $L$ of $\wh{F^\unr}$ such that $D
\otimes_{\wh{F^\unr}} L$ admits a basis of $\vphi$-eigenvectors, with
eigenvalues in $L$. The valuations of the eigenvalues, with
multiplicity, are uniquely determined by $D$. (Call them the {\em
slopes} of $D$.)
\item There exists a unique filtration $\Fil^* \subseteq D$
obeying the conditions that:
\begin{itemize}
\item each of the $\Gr^n$ has only one slope $s_n$ (but perhaps with
multiplicity), and
\item $s_1 < s_2 < \cdots < s_\ell$.
\end{itemize}
\item If $D$ descends to $B^\dag_{\rig,K}$, then $\Fil^*$ descends
with it uniquely.
\end{enumerate}
\end{thm}
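As a minimal illustration, let $D = B^\dag_{\rig,K}e_1 \oplus
B^\dag_{\rig,K}e_2$ with $\vphi(e_1) = e_1$ and $\vphi(e_2) = pe_2$.
Then the slopes of $D$ are $0,1$, and the slope filtration is
\[
0 \subset B^\dag_{\rig,K}e_1 \subset D,
\]
with gradeds of slopes $s_1 = 0 < s_2 = 1$; in particular $D$ is not
\'etale, although it contains an \'etale submodule.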
One calls a $\vphi$-module {\it \'etale} if its only slope is $0$, and
denotes by $\bfM^\et(\vphi)_{/B^\dag_{\rig,K}} \subset
\bfM(\vphi)_{/B^\dag_{\rig,K}}$ the full subcategory of these.
A $(\vphi,\Ga_K)$-module (over $B^\dag_{\rig,K}$) is a $\vphi$-module
$D$ equipped with a semilinear action of $\Ga_K$ that is continuous
for varying $\ga \in \Ga_K$, and commutes with $\vphi$. The category
of these is denoted by $\bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$. As a
consequence of the uniqueness statements in the above theorems about
$\vphi$-modules, one finds that $\Ga_K$ stabilizes the lattices $D^r$
and the slope filtration. A $(\vphi,\Ga_K)$-module is called \'etale
if its underlying $\vphi$-module is; the category of these is written
$\bfM^\et(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$.
The following theorem, which combines work of many people, is the
reason that $(\vphi,\Ga_K)$-modules are important in the study of
$p$-adic Galois representations. Let ${\bf Rep}_\bbQp(G_K)$ denote
the category of finite-dimensional $\bbQp$-vector spaces equipped with
a continuous, linear action of $G_K$.
\begin{thm}\label{thm-phigamma-equiv}
There is a canonical fully faithful embedding
\[
\bfD^\dag_\rig \cn {\bf Rep}_\bbQp(G_K) \hookrightarrow
\bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}},
\]
whose essential image is $\bfM^\et(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$.
\end{thm}
\begin{proof}
The equivalence of Galois representations with \'etale
$(\vphi,\Ga_K)$-modules over the fraction field $B_K$ of the Cohen
ring of the field of norms is proved in \cite{F}. The overconvergence
of such an object, which means that it can be uniquely defined over
$B^\dag_K \subset B_K$, is proved in \cite{CC}. The equivalence of an
object over $B^\dag_K$ being \'etale over $B_K$ and \'etale over
$B^\dag_{\rig,K}$, as well as the unique descent of an \'etale object
over $B^\dag_{\rig,K}$ to $B^\dag_K$, follows from the slope theory of
\cite{K}.
\end{proof}
Let $L$ be another CDVF with perfect residue field, with $K$
continuously embedded into it, and let $D$ be a
$B^\dag_{\rig,K}$-module. We use the shorthand
\[
D_L := D \otimes_{B^\dag_{\rig,K}} B^\dag_{\rig,L}
\]
throughout this article. If $D$ has a $\vphi$-action, then so does
$D_L$. If $D$ has a $\Ga_K$-action, then $D_L$ has a $\Ga_L$-action.
Suppose $\ov{L}$ is an algebraic closure of $L$ containing $\ov{K}$,
and $G_L = \Gal(\ov{L}/L)$. Then for all $V \in {\bf
Rep}_\bbQp(G_K)$, one has $\bfD^\dag_\rig(V|_{G_L}) =
\bfD^\dag_\rig(V)_L$.
Can one recover the invariants of $V$ from $\bfD^\dag_\rig(V)$?
Indeed, and quite simply, as we recall in the next three sections.
\subsection{Galois cohomology}
\label{sect-phigamma-cohom}
Here we recall some results of Liu in \cite{L}, which are variants of
those of Herr, that allow one to recover the Galois cohomology of $V$
from $D = \bfD^\dag_\rig(V)$.
Recall that $\Ga_K \hookrightarrow \bbZ_p^\times$, and hence is
procyclic, except when $p=2$ and $-1 \in \img \Ga_K$, and in this case
$\Ga_K/\{\pm1\}$ is procyclic. If $\Ga_K$ is procyclic, we set
$\Delta_K = \{1\}$, and otherwise we set $\Delta_K = \{\pm1\}$. We
let $\ga \in \Ga_K/\Delta_K$ denote a topological generator.
In his thesis, Herr associated to $D$ the complex
\[
C^\bullet(D) = C_{\vphi,\ga}^\bullet(D) \cn 0 \to D^{\Delta_K}
\xrightarrow{(\vphi-1,\ga-1)} D^{\Delta_K} \oplus D^{\Delta_K}
\xrightarrow{(1-\ga) \oplus (\vphi-1)} D^{\Delta_K} \to 0,
\]
concentrated in degrees $[0,2]$. One can easily show that the
cohomology groups
\[
H^i(D) = H^i(C_{\vphi,\ga}^\bullet(D))
\]
are independent of $\ga$, and moreover that they are canonically
identified with the Yoneda groups:
\[
H^i(D) = \Ext^i_{\bfM(\vphi,\Ga_K)_{/\bfB^\dag_{\rig,K}}}({\bf 1},D),
\]
where ${\bf 1}$ denotes the unit object (i.e.\ $B^\dag_{\rig,K}$
itself as a $(\vphi,\Ga_K)$-module).
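For instance, in degree $0$ the complex immediately yields
\[
H^0(D) = \left(D^{\Delta_K}\right)^{\vphi=1,\,\ga=1} = D^{\vphi=1,\,\Ga_K=1},
\]
the second equality because $\Delta_K$ and $\ga$ topologically generate $\Ga_K$ and the action is continuous.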
In the case where $D = \bfD^\dag_\rig(V)$ is \'etale, we recover
continuous Galois cohomology. Namely, by \cite[Theorem 2.6]{L} there
is a canonical isomorphism of $\delta$-functors $H^*(G_K,V)
\stackrel{\sim}{\to} H^*(\bfD^\dag_\rig(V))$. In degree $i=1$, this
says that Galois cohomology classes of $V$ are in a natural bijection
with extension classes of the $(\vphi,\Ga_K)$-module ${\bf 1}$ by $D$.
It is these extension classes that we will be measuring later in this
article.
Similar (yet simpler) statements can be made for $\Ga_K$-modules
without $\vphi$-action: define $C_\ga^\bullet(D) \cn 0 \to
D^{\Delta_K} \xrightarrow{\ga-1} D^{\Delta_K} \to 0$, concentrated in
degrees $[0,1]$. Then the cohomology of this complex is independent
of $\ga$ and computes the Yoneda groups
$\Ext^*_{\bfM(\Ga_K)_{/\bfB^\dag_{\rig,K}}}({\bf 1},D)$, as well as
the continuous group cohomology $H^*(\Ga_K,D)$.
\subsection{de~Rham theory}
We now explain the link between $(\vphi,\Ga_K)$-modules and $p$-adic
Hodge theory, first exploited by Cherbonnier--Colmez, and later
extended to the Robba ring by Berger.
In his thesis, Berger constructed maps $\iota_n \cn
B^{\dag,p^{-n}}_{\rig,K} \to K_n[\![t]\!]$ for all $n \geq n(K)$. (In
particular, one assumes $n$ is large enough so that $K'_n[\![t]\!] =
K_n[\![t]\!]$.)
There are two ways to describe $\iota_n$. On the one hand, one first
proves that $f \in \wt{B}^\dag_\rig$ (as in the slope filtration
theorem) converges in Fontaine's ring $B_\dR^+$ if and only if $f \in
\wt{B}^{\dag,1}_\rig$. Write $\iota(f) = \img(f) \in B_\dR^+$. One
next shows that $\vphi^{-n}(\wt{B}^{\dag,p^{-n}}_\rig) \subseteq
\wt{B}^{\dag,1}_\rig$, and the image of $f \in
B^{\dag,p^{-n}}_{\rig,K}$ under $\iota_n = \iota \circ \vphi^{-n}$
lies in $K'_n[\![t]\!]$.
On the other hand, there is the following geometric picture when
$K=F$. An element of $B^{\dag,r}_{\rig,F}$ is a rigid analytic
function on an annulus around $\pi=0$. One has $r \geq 1$ if and only
if $\ep^{(1)}-1$ lies in this annulus. Since $t = t(\pi)$ vanishes to
order $1$ at every $\pi=\ep^{(n)}-1$, it serves as a uniformizing
parameter there. The map $\iota$ corresponds to the operation of
taking the formal germ at $\pi=\ep^{(1)}-1$. For $n \geq 0$, the
operator $\vphi^{-n}$ stretches the domains of functions towards the
center of the disk. After hitting a function by $\vphi\inv$ enough
times (i.e., if $f \in B^{\dag,r}_{\rig,F}$ and we ensure $rp^n \geq
1$), the point $\ep^{(1)}-1$ lies in its domain and we may localize
there. Another way of saying this is that $\iota_n$ performs
completion at $\ep^{(n)}-1$.
Hopefully, the above description motivates the following formulas.
When $K=F$, given $f = \sum_{k \in \bbZ} a_k\pi^k$ with $a_k \in F$,
we may calculate $\iota_n(f)$ by means of the following rules:
\[
\iota_n(a_k) = \vphi^{-n}(a_k) \qquad \text{and} \qquad \iota_n(\pi) =
\ep^{(n)}\exp(t/p^n)-1.
\]
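Expanding the exponential, the second rule reads
\[
\iota_n(\pi) = \big(\ep^{(n)} - 1\big) + \ep^{(n)}\frac{t}{p^n} +
\ep^{(n)}\frac{t^2}{2p^{2n}} + \cdots,
\]
so that $\iota_n(\pi) \equiv \ep^{(n)} - 1 \pmod{t}$: the constant term records the point at which $\iota_n$ takes the completion, in accordance with the geometric picture above.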
When $K$ does not necessarily equal $F$, a general $f$ has the form
$\sum_k a_k\pi_K^k$ with $a_k \in F'$. Then one still has
$\iota_n(a_k) = \vphi^{-n}(a_k)$, but $\iota_n(\pi_K)$ is not
generally explicit.
The $\iota_n$ are $\Ga_K$-equivariant, and for varying $n$ they fit
into the following diagram.
\[\xymatrix{
B^{\dag,r}_{\rig,K} \ar[r]^{\iota_n} \ar[d]_\vphi
& K_n[\![t]\!] \ar@{^{(}->}[d] \\
B^{\dag,r/p}_{\rig,K} \ar[r]^{\iota_{n+1}} & K_{n+1}[\![t]\!]
}\]
Now we come to a fundamental construction. Given a $\vphi$-module $D$
over $B^\dag_{\rig,K}$, we associate to it $D_\dif^+ = D^r
\otimes_{B^{\dag,r}_{\rig,K},\iota_n} K_\infty[\![t]\!]$, and $D_\dif
= D_\dif^+[t\inv]$. Using the $\vphi$-structure in an essential way,
one shows that this definition is independent of the choices of $r$
and $n$ satisfying $r < r(V)$, $n \geq n(K)$, and $p^nr \geq 1$.
If $D$ is actually a $(\vphi,\Ga_K)$-module, then $D_\dif^+$ and
$D_\dif$ admit $\Ga_K$-actions. Thus, we are able to define $D_\dR^+
= (D_\dif^+)^{\Ga_K}$ and $D_\dR = (D_\dif)^{\Ga_K}$. These are
$K$-vector spaces of dimension $\leq \rank D$, and they carry a
decreasing, separated, and exhaustive filtration induced by the
$t$-adic filtration on $K_\infty(\!(t)\!)$. One says that $D$ is {\it
de~Rham} (resp.\ {\it +de~Rham}) if $\dim_K D_\dR = \rank D$ (resp.\
$\dim_K D_\dR^+ = \rank D$), and denotes by
$\bfM^\dR(\vphi,\Ga_K)_{/B^\dag_{\rig,K}} \subset
\bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$ the full subcategory of de~Rham
objects.
\begin{thm}[{\cite[\S5.3\footnote{Comparing \cite[Proposition 5.7
and its proof]{B1} with [{\it loc.~cit.}, Corollaire 5.8] makes
clear a slight typo in the statement of the proposition; it is the
corrected form of the proposition that we use here.}]{B1}}] There
exist functorial identifications respecting filtrations,
$\bfD_\dR^+(V) = (\bfD^\dag_\rig(V))_\dR^+$ and $\bfD_\dR(V) =
(\bfD^\dag_\rig(V))_\dR$.
\end{thm}
\begin{cor}
A representation $V$ is de~Rham (resp.\ +de~Rham) if and only if its
$(\vphi,\Ga_K)$-module is trivialized as semilinear $\Ga_K$-module
upon base change to $K_\infty(\!(t)\!)$ (resp.\ $K_\infty[\![t]\!]$).
\end{cor}
\begin{rem}\label{rem-moral}
Therefore, morally, only the {\it existence} of a $\vphi$-structure is
needed in order to construct $D_\dif^{(+)}$, and the property of $D$
being (+)de~Rham is predominantly a condition on the $\Ga_K$-action on
$D$. This observation is the basis of our entire method; see
Proposition \ref{ppn-phi-irr} below for a precise statement.
\end{rem}
\subsection{$p$-adic monodromy}
\label{sect-phigamma-monodromy}
We will require the following results at a crucial point of the proof
of our main theorem, as well as to gain a more down-to-earth picture
of its content.
Given a $(\vphi,\Ga_K)$-module $D$, we write for brevity
\[
D[t\inv] = D \otimes_{B^\dag_{\rig,K}} B^\dag_{\rig,K}[t\inv]
\quad \text{and} \quad
D[\log(\pi),t\inv]
= D \otimes_{B^\dag_{\rig,K}} B^\dag_{\rig,K}[\log(\pi),t\inv],
\]
where $t$ is the element of $B^\dag_{\rig,K}$ defined at the
conclusion of \S\ref{sect-phigamma-rings}, and where the element
$\log(\pi)$ is a free variable over $B^\dag_{\rig,K}$ equipped with
\[
\vphi(\log(\pi)) = p\log(\pi) + \log(\vphi(\pi)/\pi^p)
\quad \text{and} \quad
\ga(\log(\pi)) = \log(\pi) + \log(\ga(\pi)/\pi),
\]
the series $\log(\vphi(\pi)/\pi^p)$ and $\log(\ga(\pi)/\pi)$ being
convergent in $B^\dag_{\rig,\bbQ_p}$. We associate to $D$ the modules
\[
D_\crys = D[t\inv]^{\Ga_K}
\quad \text{and} \quad
D_\st = D[\log(\pi),t\inv]^{\Ga_K}.
\]
These two modules are semilinear $\vphi$-modules over $F$, of
$F$-dimension $\leq \rank D$. They are related via the so-called
monodromy operator $N$. Namely, consider the unique
$B^\dag_{\rig,K}$-derivation $N \cn B^\dag_{\rig,K}[\log(\pi)] \to
B^\dag_{\rig,K}[\log(\pi)]$ satisfying $N(\log(\pi)) =
-\frac{p}{p-1}$. It satisfies $N\vphi = p\vphi N$ and commutes
with $\Ga_K$, and thus gives rise to an operator $N$ on $D_\st$ with
the property that $D_\crys = D_\st^{N=0}$.
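One can verify the commutation $N\vphi = p\vphi N$ directly on the generator $\log(\pi)$: since $N$ is a $B^\dag_{\rig,K}$-derivation, it kills $\log(\vphi(\pi)/\pi^p)$, and so
\[
N\vphi(\log(\pi)) = N\big(p\log(\pi) + \log(\vphi(\pi)/\pi^p)\big)
= -\frac{p^2}{p-1}
= p\,\vphi\big(N(\log(\pi))\big),
\]
the last equality because $\vphi$ fixes the constant $-\frac{p}{p-1}$.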
We say that $D$ is {\it crystalline} (resp.\ {\it semistable}) if
$D_\crys$ (resp.\ $D_\st$) has the maximal $F$-dimension, namely
$\dim_F D_\crys = \rank D$ (resp.\ $\dim_F D_\st = \rank D$). In
particular, $D$ is crystalline if and only if it is semistable and
$N=0$ on $D_\st$. One can show that $D_\st \otimes_F K
\hookrightarrow D_\dR$, so that crystalline implies semistable and
semistable implies de~Rham. We call $D$ {\it potentially crystalline}
(resp.\ {\it potentially semistable}) if there exists a finite
extension $L/K$ such that $D_L$ is crystalline (resp.\ semistable),
when considered as a $(\vphi,\Ga_L)$-module. The following statement
is known as Berger's $p$-adic local monodromy theorem.
\begin{thm}[\cite{B1}]
Every de~Rham $(\vphi,\Ga_K)$-module is potentially semistable.
\end{thm}
The upshot of this theorem is that, whereas in the last section
$D_\dR$ was merely a filtered $K$-vector space, now we may equip it
with much more structure. Given a de~Rham $D$, let $L/K$ be finite
Galois such that $D_L$ is semistable. Then $(D_L)_\st$ is a
$(\vphi,N)$-module over the maximal absolutely unramified subfield
$F_L$ of $L$, and $(D_L)_\st \otimes_{F_L} L = (D_L)_\dR$ is a
filtered $L$-vector space. Essentially because these data arise via
restriction from $K$, they are naturally equipped with a semilinear
action of $\Gal(L/K)$ that commutes with $\vphi$ and $N$ and preserves
the filtration. Such an object is called a {\it filtered
$(\vphi,N,\Gal(L/K))$-module over $K$}.
Given two extensions $L_i$ and filtered
$(\vphi,N,\Gal(L_i/K))$-modules $D_i$ (for $i=1,2$), we consider them
equivalent if there exists an extension $L$ containing the $L_i$ such
that the $D_i$ tensored up to $L$ are isomorphic. When we consider
objects only up to this equivalence, we call them {\it filtered
$(\vphi,N,G_K)$-modules}, thinking of the $G_K$-action as being
through an unspecified finite quotient that determines the field of
definition of the underlying vector spaces. We point out that if $D$
becomes semistable over both $L_1$ and $L_2$, then $(D_{L_1})_\st$ and
$(D_{L_2})_\st$ are equivalent. We call this equivalence class
$D_\pst$. (To avoid set-theoretic issues, one simply deals with
filtered $(\vphi,N,G_K)$-modules whose underlying $\vphi$-module is a
vector space over $F^\unr$, and underlying filtered vector space has
coefficients in $\ov{K}$, with the assumption that $G_K$ acts
discretely.) The category of filtered $(\vphi,N,G_K)$-modules is
denoted $\bfM\bfF(\vphi,N,G_K)$.
Summarizing the $p$-adic monodromy theorem in terms of the above
language, if $D$ is de~Rham then $D_\pst$ determines a filtered
$(\vphi,N,G_K)$-module over $K$. Following Fontaine, a filtered
$(\vphi,N,G_K)$-module $D_\pst$ is called {\it (weakly) admissible}
if, roughly, its Newton and Hodge polygons have the same endpoints,
and all its $(\vphi,N)$-stable submodules satisfy ``Newton on or above
Hodge''. (See \cite[{\S}I.1]{B2} for details.)
\begin{thm}[\cite{CF,B2}]
The functor $D \mapsto D_\pst$ is an equivalence of categories:
\[
\bfM^\dR(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}
\stackrel{\sim}{\longrightarrow}
\bfM\bfF(\vphi,N,G_K).
\]
A de~Rham $(\vphi,\Ga_K)$-module $D$ is \'etale if and only if
$D_\pst$ is (weakly) admissible. It is potentially crystalline if and
only if $N=0$ on $D_\pst$. It is semistable if and only if $D_\pst$
can be realized as a filtered $(\vphi,N,\Gal(K/K))$-module.
\end{thm}
The two equivalent categories above are {\it not} abelian categories.
In the first category, the coimage of a map $D \to D'$ is the
set-theoretic image, while the image is the $t$-saturation of this set
(i.e.\ the elements $x \in D'$ such that some $t^nx$ lies in the
set-theoretic image). In the second category, the coimage and image
have the same underlying $(\vphi,N,G_K)$-module, but different
filtrations. The filtration on the coimage is induced by the
surjection from $D$, and the filtration on the image is induced from
the inclusion into $D'$. Thus, $t$-saturated $(\vphi,\Ga_K)$-stable
$B^\dag_{\rig,K}$-submodules of $D$ are in a natural correspondence
with subspaces of $D_\pst$ that are stable under the
$(\vphi,N,G_K)$-actions (considered as being equipped with the
filtration induced from $D_\pst$). Furthermore, one can show that a
$t$-saturated $B^\dag_{\rig,K}$-submodule is actually a direct
summand, provided that it is $(\vphi,\Ga_K)$-stable.
Moreover, the proof in \cite{B2} explains the following facts. For
simplicity, assume that $D$ is crystalline. Consider $D$ and $D_\crys
\otimes_F B^\dag_{\rig,K}$ as $B^\dag_{\rig,K}$-lattices in
$D[t\inv]$. As one passes from the first to the second, one invokes
multiplication by various $t^n$ (with $n \in \bbZ$) in order to trivialize
the $\Ga_K$-action on some basis. But multiplying by $t^n$ shifts
$\vphi$-slopes upwards by $n$, and thus, as we change lattices, the
$\vphi$-slopes get dragged to new values. But the powers of $t$
involved, which determine the amount of dragging, more directly
determine the weights of the Hodge filtration on $D_\crys$. So, there
is a close (but complicated!) connection between the Hodge--Tate
weights on $D_\crys$ and the {\it difference} between the
$\vphi$-slopes on $D$ and the $\vphi$-slopes on $D_\crys$. In the
case where $D$ is a trivial $\Ga_K$-module, i.e.\ $D \approx
(B^\dag_{\rig,K})^{\oplus d}$ as a $\Ga_K$-module, one clearly sees
that $D$ is crystalline and that there is no change of lattice, so the
$\vphi$-slopes on $D$ {\it coincide} with the $\vphi$-slopes on
$D_\crys$.
\section{Variational program}\label{sect-global}
In this section we describe a conjectural program for obtaining
triangulordinary filtrations, and hence Selmer groups, for families of
Galois representations. Our primary guide here is Greenberg's
variational viewpoint, described in \cite{G2}.
As a main example, we consider the eigencurve of Coleman--Mazur. We
show how our program would recover results of Kisin (see \cite{Ki}),
and interpolate his Selmer groups for overconvergent modular forms of
finite slope into a Selmer module over the entire eigencurve. Note
that we use the {\it homological} normalization to maintain
consistency with Greenberg on the ordinary locus, and with the
statements of Kisin.
We retain the notations and conventions of the preceding sections,
with the additional assumption that our local fields $K$ have {\it
finite} residue fields (since this is required by Berger--Colmez in
\cite{BC}). A careful reading of Berger--Colmez might allow this
restriction to be removed.
\subsection{Interpolation of $(\vphi,\Gamma_K)$-modules}
We construct families of $(\vphi,\Ga_K)$-modules over rigid analytic
spaces corresponding to families of $p$-adic representations of $G_K$,
using the theory of Berger--Colmez. In their work, the base of the
family is a $p$-adic Banach space $S$. By an $S$-representation of
$G_K$, we mean a finite free $S$-module $V$ equipped with a
continuous, $S$-linear $G_K$-action. In order to get a $p$-adic Hodge
theory for $V$, we must assume the mild condition that $S$ is a {\it
coefficient algebra} as in \cite[\S2.1]{BC}.
We require some terminology from $p$-adic functional analysis. Given
a $p$-adic Banach algebra $S$ with norm $|\cdot|_S$ and a Fr\'echet
space $T$ with norms $\{|\cdot|_i\}_{i \in I}$, we define norms
$\{|\cdot|_{S,i}\}_{i \in I}$ on $S \otimes_\bbQp T$ by
\[
|x|_{S,i}
= \inf_{x = \sum_{k=1}^n s_k \otimes t_k}
\left(\max_k |s_k|_S \cdot |t_k|_i \right).
\]
This makes $S \otimes_\bbQp T$ into a pre-Fr\'echet space, and we
declare $S \wh{\otimes} T$ to be its Fr\'echet completion, consisting
of equivalence classes of sequences that are simultaneously Cauchy
with respect to all the norms. If $T$ is instead the direct limit of
the Fr\'echet spaces $\{T^j\}_{j \in J}$ (henceforth, we say {\it $T$
is LF}), we define $S \wh{\otimes} T$ to be the direct limit of the $S
\wh{\otimes} T^j$, each of the latter terms being defined above.
In particular, the above definitions apply to $T =
B^{\dag,r}_{\rig,K}$, which is Fr\'echet: there are norms $|\cdot|_s$
on $B^{\dag,r}_{\rig,K}$, for $0 < s \leq r$, corresponding to the
$\sup$ norms on the annuli $\ord_p(X) = s/e_K$, which can be described
easily in terms of the expansion of an element $f$ in $\pi_K$.  The definitions
also apply to $T = B^\dag_{\rig,K}$, which is the direct limit of the
$B^{\dag,r}_{\rig,K}$ for $r > 0$, and hence is LF.
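Concretely, when $K = F$ (so that $e_K = 1$) and $f = \sum_{k \in \bbZ} a_k\pi^k$ with $a_k \in F$, the norm $|\cdot|_s$ is the Gauss norm of the circle $\ord_p(X) = s$:
\[
|f|_s = \sup_{k \in \bbZ}\, |a_k|\, p^{-ks},
\]
where $|\cdot|$ is the absolute value on $F$ normalized by $|p| = p^{-1}$.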
We write $\Spm S$ for the collection of maximal ideals of $S$, and
when we have a label $x$ for an element $\fkm_x \in \Spm S$, we
abusively refer to $\fkm_x$ by $x$. If $M$ is an $S$-module, we write
$M_x$ for $M \otimes_S S/\fkm_x$ throughout this section.
Applying $\otimes_{S \wh{\otimes} B^\dag_K} (S \wh{\otimes}
B^\dag_{\rig,K})$ to \cite[Th\'eor\`eme A]{BC}, as in \S6.2 of {\it
loc.\,cit.}, we see that one can canonically associate to $V$ a
locally free $S \wh{\otimes} B^\dag_{\rig,K}$-module
$\bfD^\dag_\rig(V)$ of rank equal to $\rank_S V$, equipped with
commuting, continuous, semilinear actions of $\vphi$ and $\Ga_K$, with
the property that $\bfD^\dag_\rig(V)_x$ is canonically isomorphic to
$\bfD^\dag_\rig(V_x)$ in $\bfM(\vphi,\Ga_K)_{/B^\dag_{\rig,K}}$ for
all $x \in \Spm S$.
Let us globalize this result. Recall that our coefficient field for
Galois representations is $E$, a finite extension of $\bbQ_p$. Let
$\scrX/E$ be a reduced, separated rigid analytic space with structure
sheaf $\calO_\scrX$. The very notion of a rigid analytic space is
that $\scrX$ is built from its admissible affinoid subdomains $\scrU =
\Spm S$, so that a sheaf is determined by its restriction to an
admissible covering by admissible affinoid opens (for brevity, we will
call this a {\it good cover}), and a quasi-coherent sheaf is
determined by its {\it values} on such an open cover. Note that an
affinoid algebra is naturally a $p$-adic Banach algebra; in fact, a
reduced affinoid algebra is a coefficient algebra. A quasi-coherent
sheaf of $\calO_\scrX$-modules $M$ is said to be locally free of
finite rank (resp.\ locally free Banach, locally free Fr\'echet,
locally free LF) if $\scrX$ admits a good cover by opens $\scrU$ for
which $\Ga(\scrU,M) \approx \Ga(\scrU,\calO_\scrU) \wh{\otimes}
T_\scrU$, where $T_\scrU$ is a finite-dimensional $\bbQ_p$-vector
space (resp.\ a Banach space, a Fr\'echet space, an LF space).
For a commutative $\bbQ_p$-algebra $R$ that is finite-dimensional
(resp.\ Banach, Fr\'echet, LF) as a $\bbQ_p$-module, we define
$\calO_\scrX \wh{\otimes} R$ to be the locally free sheaf of
finite-dimensional (resp.\ Banach, Fr\'echet, LF)
$\calO_\scrX$-algebras with $T_\scrU = R$, as above, for every
affinoid subdomain $\scrU \subseteq \scrX$. A locally free
$\calO_\scrX \wh{\otimes} R$-module of finite rank is a quasicoherent
sheaf $M$ of $\calO_\scrX \wh{\otimes} R$-modules on $\scrX$ such
that, for $\scrU$ ranging over some good cover of $\scrX$, each
$\Ga(\scrU,M)$ is free of finite rank over $\Ga(\scrU,\calO)
\wh{\otimes} R$.
In particular, we have defined the sheaf of rings $\calO_\scrX
\wh{\otimes} B^\dag_{\rig,K}$, and we have a notion of locally free
$\calO_\scrX \wh{\otimes} B^\dag_{\rig,K}$-module of finite rank.
Suppose we are given $\scrX/E$ as above, and a locally free
$\calO_\scrX$-module $\scrV$ of finite rank $d$ equipped with a
continuous, $\calO_\scrX$-linear action of $G_K$.  If $\scrU = \Spm S
\subset \scrX$ is any admissible affinoid neighborhood over which
$\scrV$ is free, then we can apply the theory of Berger--Colmez to
obtain $\bfD^\dag_\rig(\scrV|_\scrU)$. One can check that the
constructions of Berger--Colmez are compatible with localization to
admissible affinoid subdomains of $\scrU$, so that the rule $\scrU
\mapsto \bfD^\dag_\rig(\scrV|_\scrU)$ on admissible affinoid
subdomains such that $\scrV|_\scrU$ is free determines a sheaf of
locally free $\calO_\scrX \wh{\otimes} B^\dag_{\rig,K}$-modules of
rank $d$ on $\scrX$, which we call $\scrD$. It is equipped with
commuting, continuous, semilinear actions of $\vphi$ and $\Ga_K$, and
satisfies $\scrD_x \cong \bfD^\dag_\rig(\scrV_x)$ for all points $x
\in \scrX$. (When writing ``$x \in \scrX$'', we always mean $x$ is a
physical point of $\scrX$, in the sense of Tate's rigid analytic
spaces. The residue field $E(x)$ at $x$ is always finite over
$\bbQ_p$, since we have assumed $E/\bbQ_p$ finite.)
\subsection{Interpolation of the triangulordinary theory}
\label{sect-variational-tord}
It is our desire, in the future, to prove that a family consisting of
triangulordinary representations admits a corresponding family of
triangulordinary filtrations. We state our goal in a preliminary
form, as the following conjecture.
Let $\scrX/E$ be a reduced, separated rigid analytic space, and let
$\scrD$ be a locally free sheaf of $(\vphi,\Ga_K)$-modules over
$\calO_\scrX \wh{\otimes} B^\dag_{\rig,K}$, of rank $d$. Let $0 < c <
d$ be an integer. Consider the functor that associates to an
$\scrX$-space $f\cn \scrU \to \scrX$ the collection of
$(\vphi,\Ga_K)$-stable $\calO_\scrU \wh{\otimes}
B^\dag_{\rig,K}$-local direct summands of $f^*\scrD$ of rank $c$.
\begin{conj}\label{conj-kisin}
The functor described above is representable by a locally finite type
morphism $p_c \cn \scrX(c) \to \scrX$. For each $x \in \scrX$, the
fiber $\scrX(c)_x$ is a finite union of quasiprojective flag varieties
over the residue field $E(x)$.
\end{conj}
\begin{rem}
This conjecture is inspired by results of Kisin, notably
\cite[Proposition 5.4]{Ki}. The statements proved there involve a
number of technical hypotheses; thus, the above conjecture may require
some slight changes. Under Kisin's hypotheses, we expect that his
methods may be translated into the $(\vphi,\Ga_K)$-module language to
establish the case where $c=d-1$. In any case, to prove the
conjecture, it would suffice to work locally: to assume $\scrX$ is
affinoid, and to construct $\scrX(c)$ with the desired property for
maps $\scrU \to \scrX$ with $\scrU$ affinoid.
\end{rem}
The underlying {\it flag} of a filtration $F$ is simply the poset of
its constituents $F^i$, forgetting the indices. By a {\it shape}
$\sigma$ (of rank $d$, and $r$ constituents), we mean a finite
sequence of dimensions $\{d = d_0 > d_1 > \cdots > d_{r+1} = 0\}$. We
say that a flag $F$ of an object $D$ has shape $\sigma$ if $\rank D =
d$ and its constituents have dimensions given precisely by the
dimensions $d_i$ of $\sigma$; a filtration has shape $\sigma$ if its
underlying flag does. We can consider the integer $c$ from above as
the shape with one constituent given by $\{d > c > 0\}$. If $\sigma$
is an arbitrary shape of rank $d$ then, by inducting on the number of
constituents, Conjecture \ref{conj-kisin} implies the existence of a
locally finite type morphism $p_\sigma \cn \scrX(\sigma) \to \scrX$
classifying $(\vphi,\Ga_K)$-stable flags in $\scrD$ with shape
$\sigma$. We write $\scrD(\sigma) := p_\sigma^*\scrD$, and denote by
$F(\sigma)$ the corresponding universal flag in $\scrD(\sigma)$ of
shape $\sigma$.
\begin{rem}
In the case $\sigma$ is the shape of a complete flag,
Bella\"iche--Chenevier give in \cite[Proposition 2.5.7]{BC2} an
affirmative answer to Conjecture \ref{conj-kisin}, at least
infinitesimally locally: they prove the representability of the
related deformation problem.  They go on to undertake a
detailed study of what amounts to the formal completion of
$\scrX(\sigma)$ at a crystalline point.
\end{rem}
We go on to explain how Conjecture \ref{conj-kisin}, in the more
general form just explained, should lead to triangulordinary
filtrations on the level of families. First, we make precise what the
latter means.
We call $\scrD$ {\it pretriangulordinary} of shape $\sigma$ if there
is a Zariski dense subset $\scrX^\alg \subset \scrX$, all of whose
points $x \in \scrX^\alg$ satisfy the following property: the
$(\vphi,\Ga_K)$-module $\scrD_x$ is triangulordinary, with some (hence
every) triangulordinary filtration of shape $\sigma$. We call $\scrD$
{\it triangulordinary} with respect to $F$, where $F^* \subseteq
\scrD$ is a decreasing, separated and exhaustive filtration by
$(\vphi,\Ga_K)$-stable $\calO_\scrX \wh{\otimes}
B^\dag_{\rig,K}$-local direct summands, if there is a Zariski dense
subset $\scrX^\alg \subset \scrX$ with the following property: for all
points $x \in \scrX^\alg$, the image $F_x$ has underlying flag equal
to the underlying flag of some triangulordinary filtration of
$\scrD_x$. (For such $x$, the choices of indices making $F_x$ into
the triangulordinary filtration are then uniquely determined by the
Hodge--Tate weights.) Clearly, if $\scrD$ is triangulordinary with
respect to $F$, and $F$ has shape $\sigma$, then $\scrD$ is
pretriangulordinary of shape $\sigma$.
When $\scrD = \bfD^\dag_\rig(\scrV)$, we say that $\scrV$ is
pretriangulordinary of shape $\sigma$ (resp.\ triangulordinary with
respect to $F$) if the said condition holds for $\scrD$.
Suppose $\scrD$ is pretriangulordinary of shape $\sigma$, and let
$p_\sigma \cn \scrX(\sigma) \to \scrX$ classify $(\vphi,\Ga_K)$-stable
flags of shape $\sigma$, as above. We let $\scrX_\tord^\alg =
\scrX_\tord^\alg(\sigma)$ be the set of $x \in p_\sigma\inv
\scrX^\alg$ such that $F(\sigma)_x$ is the underlying flag of a
triangulordinary filtration on $\scrD_x$, and we let $\scrX_\tord$ be
the Zariski closure of $\scrX_\tord^\alg$ inside $\scrX$. We write
$\scrD_\tord$ (resp.\ $F_\tord$) for the restriction of
$\scrD(\sigma)$ (resp.\ $F(\sigma)$) to $\scrX_\tord$. By
construction, $\scrD_\tord$ is a triangulordinary family over
$\scrX_\tord$ of shape $\sigma$ with respect to any choice of indices
making the flag $F_\tord$ into a filtration. Moreover, by the
hypothesis that $\scrX$ is pretriangulordinary, the restriction of
$p_\sigma$ is a surjection $\scrX_\tord^\alg \twoheadrightarrow
\scrX^\alg$, so that $\scrX_\tord$ is rather substantial in comparison
with $\scrX$.
\begin{rem}\label{rem-indices}
It is not clear from the above discussion whether the construction
singles out a choice of indices for the triangulordinary flag
$F_\tord$ on $\scrD_\tord$. We would hope for the ``most
appropriate'' choice of indexing to have the following property: for
all $x \in \scrX^\alg$, the constituent $F^1$ has image in $\scrD_x$
equal to the $F^1$ of some triangulordinary filtration on $\scrD_x$
satisfying the hypotheses of Theorem \ref{thm-local-main}(2). Thus,
the choice of indexing {\it does} affect which Selmer group is
obtained from the definition given in the next section.
Whether a ``most appropriate'' indexing exists, and (if it exists)
which indexing it is, are sensitive to the specification of
$\scrX^\alg \subset \scrX$. See Remark \ref{rem-choosing-indices} for
an example. (Although we have not stressed this, the construction of
$\scrX_\tord$ itself depends on $\scrX^\alg$.)
\end{rem}
\begin{exmp}\label{exmp-univ-deformation}
Assume the notation at the end of \S\ref{sect-definitions}, with
$K=\bbQ$ and $p>2$. Fix a $2$-dimensional, irreducible, odd
representation $\ov{\rho}$ of $G_{\bbQ,S}$ with values in the residue
field $k_E$ of $E$. We take for $\scrX$ the generic fiber of $\Spf
R^\text{univ}_S(\ov{\rho})$, where $R^\text{univ}_S(\ov{\rho})$ is the
universal $\calO_E$-valued deformation ring with ``unramified'' local
conditions away from $S$, and no conditions at $S$. We take for
$\scrV$ the universal representation on this space, and $\scrD =
\bfD^\dag_\rig(\scrV|_{G_p})$. Since the set $\scrX^\alg$ of points
$x \in \scrX$ for which $\scrV_x|_{G_p}$ is semistable with distinct
Frobenius eigenvalues is Zariski dense, one can deduce that $\scrD$ is
pretriangulordinary of shape $\sigma = \{2 > 1 > 0\}$.
Granting Conjecture \ref{conj-kisin}, we expect that
$\scrX_\tord(\sigma)$ is none other than the eigensurface of
Coleman--Mazur (discussed towards the end of
\S\ref{sect-expectations}). Its restriction to the subspace of
$\scrX$ having a vanishing Sen weight is expected to be the
eigencurve, obtained as the resolution of the infinite fern of
Gouv\^ea--Mazur \cite{GM} at its double points. Thus, our setup ought
to give a clean realization of Kisin's hope of constructing general
eigenvarieties purely Galois-theoretically.
\end{exmp}
\subsection{Selmer groups via variation}
\label{sect-variational-selmer}
In order to define Selmer groups of families of Galois
representations, we need to give a meaning to the Galois cohomology of
a family. Let $G$ be a profinite group acting continuously on a
locally free module $M$ of finite rank over a rigid analytic space
$\scrX$. We let
\[
H^i(G,M) := \Ext^i_{\calO_\scrX[G]}({\bf 1},M),
\]
the Yoneda group in the category of locally free $\calO_\scrX$-modules
with continuous $G$-actions. As is customary, when $G = G_K$ is the
absolute Galois group of a field $K$, we write $H^i(K,M)$ for
$H^i(G_K,M)$.
We now resume the notation at the end of \S\ref{sect-definitions}.
Namely, $K/\bbQ$ is a finite extension, $S$ is a finite set of places,
and we have algebraic closures $\ov{K}_v$ containing $K_S$ and maps
$G_v \to G_{K,S}$, for $v \in S$.
We let $\scrX/E$ be as in the preceding section, and let $\scrV$ be a
locally free sheaf on $\scrX$ of finite rank, equipped with a
continuous, $\calO_\scrX$-linear $G_{K,S}$-action. We assume, for
each place $v$ of $K$ with $v \mid p$, that $\scrV|_{G_v}$ is
triangulordinary with respect to some filtration $F_v^* \subseteq
\scrD_v := \bfD^\dag_\rig(\scrV|_{G_v})$, in the sense described in
\S\ref{sect-variational-tord}. (Whether or not Conjecture
\ref{conj-kisin} holds, we assume here that we are simply given the
$F_v^*$.) For such $v$ we define the local condition at $v$ to be
\begin{align}\label{eqn-selmer1}
H^1_\tord(K_v,\scrV) = \ker \Bigg[
H^1(K_v,\scrV)
&=
\Ext^1_{\calO_\scrX[G_v]}({\bf 1},\scrV) \nonumber \\
&\stackrel{\star}{\to}
\Ext^1_{\vphi,\Ga_{K_v}/\calO_\scrX \wh{\otimes}
B^\dag_{\rig,K_v}}({\bf 1}, \scrD_v) \\
&\to
\Ext^1_{\vphi,\Ga_\wh{K_v^\unr}/\calO_\scrX \wh{\otimes}
B^\dag_{\rig,\wh{K_v^\unr}}}({\bf 1}, (\scrD_v/F_v^1)_\wh{K_v^\unr})
\Bigg]. \nonumber
\end{align}
Assuming that, for $x \in \scrX^\alg$, the specialization $(F_v^1)_x$
is the $F^1$ of a triangulordinary filtration on $\scrV_x$, the above
definition provides an interpolation over all of $\scrX$ of the ``g+''
Bloch--Kato local conditions at the points of $\scrX^\alg$. By
Proposition \ref{ppn-local-BK}, at all such points this agrees with
the ``g'' local condition, and at most such points this agrees with
the ``f'' condition. Thus, it is reasonable to define the Selmer
group of $\scrV$ over $\scrX$ to be
\begin{multline}\label{eqn-selmer2}
H^1_\tord(K,\scrV) = \ker \Bigg[
H^1(G_{K,S},\scrV) \\
\to
\bigoplus_{v \in S,\ v \nmid p}
\frac{\displaystyle H^1(K_v,\scrV)}{\displaystyle H^1_\unr(K_v,\scrV)}
\oplus \bigoplus_{v \mid p}
\frac{\displaystyle H^1(K_v,\scrV)}{\displaystyle H^1_\tord(K_v,\scrV)}
\Bigg].
\end{multline}
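We recall that, for $v \nmid p$, the unramified local condition appearing above is the usual one:
\[
H^1_\unr(K_v,\scrV) = \ker\left[ H^1(K_v,\scrV) \to
H^1(K_v^\unr,\scrV) \right].
\]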
\begin{rem}
The map labeled $\star$ in Equation \ref{eqn-selmer1} is not known to
be an isomorphism. In fact, in contrast to the situation of Theorem
\ref{thm-phigamma-equiv}, the map $\bfD^\dag_\rig$ from families of
Galois representations to \'etale families of $(\vphi,\Ga_K)$-modules
is {\it not} an equivalence of categories. Chenevier has given the
following counterexample. Denote by $K\bra{\ul{T}}$ the Tate algebra
over $K$ in the variables $\ul{T}$. Let $D = \bbQ_p\bra{T,T\inv}
\wh{\otimes}_{\bbQ_p} B^\dag_{\rig,K} \cdot e$, so that $D$ has rank
$1$ with basis element $e$, with actions given by $\Ga_K \cdot e = e$
and $\vphi(e) = Te$. There is no Galois representation $V$ over
$\bbQ_p\langle T,T\inv \rangle$ for which $\bfD^\dag_\rig(V) = D$.
We ask whether it is still the case that $\star$ is an isomorphism: if
two $(\vphi,\Ga_K)$-modules over $S \wh{\otimes} B^\dag_{\rig,K}$ come
from $S$-representations of $G_K$, does every extension between them
come from an $S$-representation? In any case, for every $x \in
\scrE^0$ the diagram
\begin{equation}\label{eqn-specialize-selmer}\begin{array}{ccc}
H^1(K_v,\scrV) & \to & \Ext^1({\bf 1},\scrD) \\
\downarrow & & \downarrow \\
H^1(K_v,\scrV_x) & \stackrel{\sim}{\to} & \Ext^1({\bf 1},\scrD_x)
\end{array}\end{equation}
commutes, so we at least know that we are imposing the correct local
condition, specialization-by-specialization, everywhere we are able
to.
\end{rem}
We also obtain notions of Selmer groups $H^1_\tord(K,\scrV_x)$ for
{\it all} specializations $\scrV_x$ of $\scrV$ with $x \in \scrX$:
namely, we define $F^1_{x,v}$ to be $(F^1_v)_x$, and add subscripts
$x$ everywhere in Equations (\ref{eqn-selmer1}--\ref{eqn-selmer2}).
It follows from the commutativity of Diagram
\ref{eqn-specialize-selmer} above that there is a natural
specialization map
\[
H^1_\tord(K,\scrV)_x \to H^1_\tord(K,\scrV_x)
\]
for each $x \in \scrX$. Perhaps the most important open question in
our program is whether an analogue of Mazur's control theorem holds:
when can we bound the kernel and cokernel of the above map? Can this
bounding be achieved, uniformly for $x$ varying through a substantial
subset of $\scrX$? Although we strongly desire to check this in a
concrete setting, at present we cannot handle any particular
nonordinary case.
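To fix ideas, one possible shape of such a control statement (offered here only as an illustration, not an assertion) would be:
\[
\dim \ker\left( H^1_\tord(K,\scrV)_x \to H^1_\tord(K,\scrV_x) \right)
\leq C
\quad \text{and} \quad
\dim \mathrm{coker}\left( H^1_\tord(K,\scrV)_x \to
H^1_\tord(K,\scrV_x) \right) \leq C,
\]
with $C$ independent of $x$ as it varies through the subset in
question.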
We go on now to discuss in detail our model example: the eigencurve of
Coleman--Mazur.
\subsection{Review of the eigencurve}
\label{sect-review-eigencurve}
We continue with the notations of the end of \S\ref{sect-definitions}.
We assume $K=\bbQ$, and we fix a positive integer $N$ not divisible by
$p$, which we call the tame level. We take for $S$ the set of primes
dividing $p$ and $N$, together with the place $\infty$.
By the weight space $\scrW$ we mean the rigid analytic space over
$\bbQ_p$ arising as the generic fiber of $\Spf \bbZ_p[\![\bbZ_p^\times
\times (\bbZ/N)^\times]\!]$. Its points $\scrW(R)$ correspond to
continuous characters of the form $\bbZ_p^\times \times
(\bbZ/N)^\times \to R^\times$. By class field theory, one can
consider $\scrW$ as being equipped with a free rank $1$ bundle $\scrT$
on which $G_{\bbQ,S}$ acts through its universal character. We let
$\scrW^\alg$ consist of those points $w \in \scrW$ corresponding to
characters having the form $a \mapsto a^{k_w}$ on some open subgroup
of $\bbZ_p^\times \subseteq \bbZ_p^\times \times (\bbZ/N)^\times$,
with $k_w$ an integer. Clearly, $\scrT$ is triangulordinary of shape
$\{1 > 0\}$.
The {\it eigencurve} $\scrE = \scrE_{p,N}$, defined in \cite{CM} in
the case $p > 2$ and $N = 1$, and extended to general $p$ and $N$ by a
variety of authors, is the following object. It is a rigid analytic
space over $\bbQ_p$, locally-on-the-base finite over $\scrW \times
(\scrB_1(0) \bs \{0\})$, and locally-in-the-domain finite flat over
$\scrW$. Here, $\scrB_1(0)$ is the closed unit disk around the
origin, $\scrB_1(0) = \Spm \bbQ_p\bra{U_p}$. The map to $\scrW$ is
called the weight (or, more precisely, weight-nebentypus), the map to
$\scrB_1(0)$ is called the $U_p$-eigenvalue, and the latter's
composite with the valuation map is called the slope. Finally,
$\scrE$ parameterizes a universal rigid analytic family of pairs
$(f,\alpha)$ with $f$ a $p$-adic overconvergent elliptic modular
eigenform of tame level $N$ and $\alpha$ a nonzero (``finite slope'')
$U_p$-eigenvalue of $f$. We let $\scrE^\alg$ be the collection of
points $x \in \scrE$ corresponding to pairs $(f_x,\alpha_x)$ with
$f_x$ classical of weight $k_x \geq 2$, and with $f_x$ at worst
semistable at $p$.
We remove two types of bad points on $\scrE$. Namely, we say that $x
\in \scrE$ has {\it critical slope} if its weight $w$ lies in
$\scrW^\alg$ with $k_w$ as above, $k_w \geq 2$, and $k_w-1$ is the
slope of $x$. (This agrees with the terminology of
\S\ref{sect-local-ex}, except that it also includes some nonclassical
$x$.) We say that $x \in \scrE^\alg$ {\it does not have distinct
eigenvalues} if $\alpha_x$ is a double root of the $p$-Hecke
polynomial of (the newform associated to) $f_x$. Write $\scrE^0
\subset \scrE$ for the complement of the critical-slope and
not-distinct-eigenvalue loci, and $\scrE^{0,\alg} = \scrE^0 \cap
\scrE^\alg$. (One can do slightly better as regards critical slope,
and instead look at the complement of the points in the image of the
$\theta^{k-1}$-map for each $k \geq 2$.)
The constructions of $\scrE$ give rise to a locally free rank $2$
bundle $\scrV^0$ over $\scrE^0$, equipped with a continuous,
$\calO_{\scrE^0}$-linear action of $G_{\bbQ,S}$. This representation
has the property that, for any $x \in \scrE^{0,\alg}$, the fiber
$\scrV^0_x$ is isomorphic to the Galois representation
$V_{f_x}^\text{hom}$ associated to $f_x$ by Deligne.
\begin{rem}
The reader will note that the Galois representation
$V_{f_x}^\text{hom}$ is defined for every $x \in \scrE$ with $f_x$
classical of weight $k_x \geq 2$. Our restriction to semistable
points is because the remaining points would require modifying the
triangulordinary theory to handle representations that become
semistable only over an abelian extension. We exclude $\scrE^\alg \bs
\scrE^{0,\alg}$ from
our consideration only because Kisin does so in \cite{Ki}; we have not
tested whether our theory should make sense at these points.
\end{rem}
We write $\scrD^0 = \bfD^\dag_\rig(\scrV^0|_{G_p})$, so that for $x
\in \scrE^\alg$ one has
\[
\scrD^0_x \cong \bfD^\dag_\rig(\scrV^0_x|_{G_p}) \cong
\bfD^\dag_\rig(V_{f_x}^\text{hom}|_{G_p}).
\]
For every $x \in \scrE^{0,\alg}$ the representation
$V_{f_x}^\text{hom}|_{G_p}$ is semistable, and $\alpha_x$ is a
$\vphi\inv$-eigenvalue $\nu$ on $\bfD_\st(V_{f_x}^\text{hom}|_{G_p})$.
Since we have removed the not-distinct-eigenvalue locus from
$\scrE^0$, $\bfD_\st(V_{f_x}^\text{hom}|_{G_p})$ has distinct
$\vphi$-eigenvalues, so the $\nu$-eigenspace gives rise to a canonical
triangulordinary filtration
\begin{equation}\label{eqn-filtration-x}
\bfD^\dag_\rig(V_{f_x}^\text{hom}|_{G_p}) = F^0_x \supsetneq F^1_x =
F^{k_w-1}_x \supsetneq F^{k_w}_x = 0,
\end{equation}
as in \S\ref{exmp-MFs}, where $w$ is the weight of $x$. Therefore,
$\scrD^0$ is pretriangulordinary of shape $\{2 > 1 > 0\}$. From now
on, we denote this particular shape by $\sigma$.
\subsection{Expectations for the eigencurve}
\label{sect-expectations}
We expect that $\scrD^0$ is triangulordinary of shape $\sigma$ in the
following precise sense:
\begin{conj}\label{conj-eigencurve}
There exists a unique filtration
\[
\scrD^0 \supsetneq F^1 \supsetneq 0
\]
by a $(\vphi,\Ga_\bbQp)$-stable locally $\calO_{\scrE^0} \wh{\otimes}
B^\dag_{\rig,\bbQp}$-direct summand $F^1$ of rank $1$ with the
property that, for each $x \in \scrE^{0,\alg}$, $(F^1)_x = F^1_x$
under the identification of $\scrD^0_x \cong
\bfD^\dag_\rig(\scrV_x|_{G_p}) \cong
\bfD^\dag_\rig(V^\text{hom}_{f_x}|_{G_p})$.
\end{conj}
An equivalent way of formulating the conjecture is as follows. Let
$\mathscr{F} \subset \scrD^0$ be the subsheaf defined by
\[
\mathscr{F} := \bigcap_{x \in \scrE^{0,\alg}}
\ker\left[ \scrD^0 \to \scrD^0_x/F_x^1 \right].
\]
Then we may phrase Conjecture \ref{conj-eigencurve} as asserting the
existence of a unique $(\vphi,\Ga_\bbQp)$-stable locally
$\calO_{\scrE^0} \wh{\otimes} B^\dag_{\rig,\bbQp}$-direct summand
$F^1$ of $\scrD^0$ of rank $1$ contained in $\mathscr{F}$. If so,
then the specialization maps $\scrD^0_x \cong
\bfD^\dag_\rig(\scrV_x|_{G_p})$ automatically identify $(F^1)_x \cong
F_x^1$ for all $x \in \scrE^{0,\alg}$.
Suppose that Conjecture \ref{conj-kisin} holds with $d=2$ and $c=1$,
so that the morphism $p_\sigma \cn \scrE^0(\sigma) \to \scrE^0$
exists. Then Conjecture \ref{conj-eigencurve} says that the
assignment
\begin{eqnarray*}
\scrE^{0,\alg} & \to & \scrE^{0,\alg}_\tord \subset \scrE^0(\sigma) \\
x & \mapsto & (x,F_x^1)
\end{eqnarray*}
extends uniquely to a section $\scrE^0 \to \scrE^0(\sigma)$ of
$p_\sigma$. In fact, any such section must have image in
$\scrE^0_\tord$, and we expect that this is the only section of
$p_\sigma|_{\scrE^0_\tord}$. We envision that $\scrE^0_\tord$ can be
divided into two parts: a component mapping isomorphically onto
$\scrE^0$, and a disjoint union of points, one lying over each $x \in
\scrE^{0,\alg}$ with $f_x$ crystalline at $p$, corresponding to the
``evil twin'' of $(f_x,\alpha_x)$ (as in \cite{GM}).
The reader is advised to take note of the difference between the above
picture and Example \ref{exmp-univ-deformation}.
Conjecture \ref{conj-eigencurve} gives us a definition of the Selmer
group over the eigencurve, as well as Selmer groups for all
finite-slope overconvergent modular eigenforms, as in Equation
\ref{eqn-selmer2}.
\begin{rem}
An arbitrary (especially, nonclassical) $x \in \scrE^0$ is known to be
trianguline by work of Kisin and Colmez. Let $\bbQ_p(x)$ denote the
residue field of $\scrE^0$ at $x$, and $(f_x,\alpha_x)$ the
corresponding $\bbQ_p(x)$-valued overconvergent eigenform and
$U_p$-eigenvalue. Then \cite[Theorem 6.3]{Ki} shows the existence of
a nonzero, $G_p$-equivariant map $V_{f_x}^\text{hom} \to (B_\crys^+
\otimes_\bbQp \bbQ_p(x))^{\vphi=\alpha_x}$. This is equivalent to a
nonzero vector in
$\bfD_\crys(V_{f_x}^\text{coh}|_{G_p})^{\vphi=\alpha_x}$. Then
\cite[Proposition 5.3]{C} implies that $V_{f_x}^\text{coh}$ is
trianguline at $p$, and hence so is $V_{f_x}^\text{hom}$. The
trianguline subspace $F$ inside
$\bfD^\dag_\rig(V_{f_x}^\text{hom}|_{G_p})$ ought to coincide with the
putative triangulordinary filtration $F_x^1$ described above when
Conjecture \ref{conj-eigencurve} holds. In any case, using $F$ in
place of $F_x^1$, we obtain a definition of a local condition and
Selmer group without assuming any conjecture.
We remind the reader that, although the work of Kisin and Colmez gives
us trianguline filtrations at every point, they do not directly give
us a filtration on the family.
\end{rem}
A related example is the cyclotomic deformation of $\scrV^0$. This is
the bundle $\wt{\scrV}^0$ over the {\it eigensurface} $\wt{\scrE}^0 =
\scrE^0 \times \scrW$ determined by $p_1^* \scrV^0
\otimes_{\wt{\scrE}^0} p_2^* \scrT$, where the $p_i$ are the
projections of $\wt{\scrE}^0$ onto the respective factors. Letting
$\wt{\scrE}^{0,\alg} = \scrE^{0,\alg} \times \scrW^\alg$, we see that
$\wt{\scrV}^0$ is pretriangulordinary of shape $\sigma = \{2 > 1 >
0\}$. Assuming Conjecture \ref{conj-eigencurve} and setting $\wt{F}^1
= p_1^*F^1 \otimes_{\wt{\scrE}^0} p_2^*\scrT$, we see that
$\wt{\scrV}^0$ is also triangulordinary of shape $\sigma$.
\begin{rem}\label{rem-choosing-indices}
In the case of the eigencurve $\scrE^0$, every $\scrV_x|_{G_p}$ with
$x \in \scrE^{0,\alg}$ has Hodge--Tate weights $0$ and $k-1 > 0$.
Therefore, as seen in Equation \ref{eqn-filtration-x}, the
triangulordinary filtrations all have $F_x^1$ equal to their rank-$1$
constituent. When assigning indices to the putative triangulordinary
flag $F$ on $\scrD^0$ given by Conjecture \ref{conj-eigencurve}, this
fact forces us to take $F^1$ to be the rank-$1$ constituent. In other
words, the Galois theory provided us with a natural choice of
filtration indexing, and, by consequence, a natural choice of Selmer
group.
Consider the universal character $\scrT$ of $G_{\bbQ,S}$ over weight
space $\scrW$. The unique Hodge--Tate weight of $w \in \scrW^\alg$ is
$k_w$, and these integers vary without bound. Thus there is no ``most
appropriate'' index at which to situate the jump in the
triangulordinary filtration, compatibly over all of $\scrW^\alg$ as
defined in \S\ref{sect-review-eigencurve}. Another viewpoint is that
$\scrT$ is the cyclotomic {\it deformation} of the trivial character
$\chi_\triv$, and hence its triangulordinary filtration should be
chosen to deform the natural one for $\chi_\triv$. Since $\chi_\triv$
has Hodge--Tate weight $0$, this means taking $\Gr^0 \neq 0$, and, in
particular, $F^1 = 0$. Another way of achieving this would be to
reduce $\scrW^\alg$ to its subset consisting of those $w$ with $k_w =
0$ (which is still Zariski dense). A third option is to note that
$\scrT$ is also the cyclotomic deformation of the cyclotomic character
$\chi_\cycl$, which corresponds to replacing $\scrW^\alg$ with the
subset defined by $k_w = 1$, and which suggests taking $F^1 = \scrT$.
Thus, depending on the choice of $\scrW^\alg$, the most appropriate
indexing of the triangulordinary flag either does not exist, has
$F^1=0$, or has $F^1=\scrT$. The latter two possibilities give two
different Selmer local conditions at $p$ (respectively, they are the
unramified and empty conditions).
The ambiguity described above passes on to $\wt{\scrV}^0$: at a point
$(x,w)$, where $x$ has weight $k_x$ and $w$ has weight $k_w$, the
Hodge--Tate weights of $\wt{\scrV}^0_{(x,w)}$ are $k_w$ and $k_w+k_x-1
> k_w$, which vary roughly independently; thus $\wt{\scrE}^{0,\alg}$
does not admit a most appropriate choice of indices. Since we view
the eigensurface as the cyclotomic deformation of the eigencurve, we
expect that considering $\scrT$ as the cyclotomic deformation of the
{\it trivial} character is most appropriate (in this particular
setting). This means reducing $\wt{\scrE}^{0,\alg}$ to the subset
defined by $k_w = 0$, and taking for $\wt{F}^1$ the rank-$1$
constituent of the triangulordinary flag, which is given by $p_1^*F^1
\otimes_{\wt{\scrE}^0} p_2^*\scrT$.
\end{rem}
Since the reader is likely to be aware of the goals of Iwasawa theory,
we conclude by saying that we expect the Selmer group
$H^1_\tord(\bbQ,\scrE^0)$ (resp.\ $H^1_\tord(\bbQ,\wt{\scrE}^0)$) to
be related to the analytic standard $p$-adic $L$-function varying
along the eigencurve (resp.\ eigensurface). But, the Selmer groups
being highly non-integral (and likely non-torsion), and the $p$-adic
$L$-functions being unbounded, the precise means by which these ought
to be related is far from clear.
\section{Introduction}
\label{sec:1}
Deeply virtual exclusive pseudoscalar meson production can be described within QCD factorization,
through the convolution of Generalized Parton Distributions (GPDs) and hard scattering amplitudes (Fig.\ref{fig1}).
Although a full proof of factorization theorems was given only for longitudinal photon polarization \cite{ColFraStr},
in a series of papers \cite{AGL,GGL_pi0short,GGL_pi0} we showed that transverse photon polarization amplitudes can give substantial contributions even if they appear at the next to leading twist
through the $\pi^0$ coupling $\propto \gamma_5$, in the $\gamma^* p \rightarrow \pi^0 p'$ reaction.
Because of the $\gamma_5$ coupling, the transverse polarization amplitudes contain convolutions of the four chiral-odd GPDs namely,
$H_T, E_T, \widetilde{H}_T, \widetilde{E}_T$ \cite{Ji_odd,Diehl_odd}.
The chiral odd GPDs acquire a specific physical meaning, allowing us to explore different transverse spin configurations in the proton, when written in terms of the quark-proton transversity amplitudes, $A^{T_{Y(X)}}_{\Lambda' \lambda', \Lambda \lambda}$, where $T_{Y(X)}$ represents the spin component along the $y(x)$-axis. Sensible information is obtained when the GPDs are proportional to linear combinations of amplitudes that are diagonal in a given basis, meaning that the spin projections are the same on the LHS and RHS of Fig.\ref{fig1}. This allows us to associate each GPD with parton distribution functions carrying the same spin information, bearing in mind that even if a connection between spin configurations is established, GPDs are related to amplitudes {\it i.e}. they are not probabilities in momentum space (the quarks carry different momenta on the LHS and on the RHS of Fig.\ref{fig1}).
The connection between GPDs and Transverse Momentum Distributions (TMDs), both through their common ``parent'' distributions, the Generalized TMDs (GTMDs), has been explored in \cite{Metz1,Metz2}, and in transverse coordinate space by Burkardt \cite{Bur1,Bur2}.
In particular, one can see that $H_T$ is the off forward generalization of the proton's transversity structure functions, $h_1$, or the probability of finding a transversely polarized quark inside a transversely polarized proton.
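Explicitly, the forward limit and tensor charge relations read (a standard statement, up to conventions), for each quark flavor $q$,
\[
H_T^q(x,0,0) = h_1^q(x), \qquad
\delta q = \int_{-1}^{1} dx \, H_T^q(x,0,0) = \int_{-1}^{1} dx \, h_1^q(x),
\]
where $\delta q$ denotes the tensor charge for flavor $q$.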
\begin{figure}
\includegraphics[width=9.cm]{pi0_final_fig1.pdf}
\caption{Leading order amplitude for DV$\pi^0(\eta)$P, $\gamma^* + P \rightarrow M +P^\prime$. Crossed diagrams are not shown in the figure.}
\label{fig1}
\end{figure}
$h_1$ and its integral over $x_{Bj}$, the tensor charge, $\delta$, have notoriously been elusive quantities to extract from experiment. Being chirally odd, $h_1$ can be measured in either Semi Inclusive Deep Inelastic Scattering (SIDIS) or in the Drell Yan process in conjunction with another chiral odd partner. The tensor charge's flavor dependence was obtained only relatively recently from model dependent analyses of SIDIS single hadron and dihadron production processes in the few GeV region, and for $x_{Bj} \gtrsim 0.06$ \cite{Anselmino,Courtoy}.
The combination $2 \widetilde{H}_T + E_T$ has a helicity structure similar to that of the Boer Mulders function $h_{1}^\perp$ \cite{BoerMul}, the transversely polarized quark distribution in an unpolarized proton. Although one can show that the two distributions are generated from the same GTMD \cite{Metz1}, they have an important phase difference. As a result, $2 \widetilde{H}_T + E_T$, which involves a single spin flip, is related to the real part of the ``mother'' GTMD, thus vanishing in the forward limit (it enters the exclusive process scattering amplitudes always multiplied by the transverse momentum transfer $\Delta_\perp$), while the TMD $h_{1}^\perp$ is related to the imaginary part of the same GTMD.
In a similar way, $\widetilde{H}_T$, involves a double spin flip and its contribution therefore also vanishes in the forward limit. This GPD emerges as the difference between the canonical transversity non-flip amplitudes ($A^{T_Y}$), and the planar transversity non-flip amplitudes ($A^{T_X}$). Since in the forward direction there is no distinction between canonical transversity and planar transversity, clearly then, this GPD requires non-forward scattering to be non-zero. The target polarized transversely at an azimuthal angle different from zero or $\pi/2$ will allow this double correlation to be probed.
Finally, $\widetilde{E}_T$ is the most elusive of the chiral odd GPDs since because of Parity and Time reversal constraints it vanishes for the skewness parameter $\xi=0$, and it is therefore zero in the forward limit; its first moment in $x$ is also zero \cite{Diehl_odd}.
$\widetilde{E}_T$ can therefore be considered an entirely off-forward product. $\widetilde{E}_T(x, \xi, t)$ describes a transversely polarized quark (along the $x$ axis) in a longitudinally polarized proton. It is T-even and directly connected to the first moment of the TMD $h_{1L}^\perp$.
Our interest in $\widetilde{E}_T$ stems from the fact that it manifests a spin structure similar to that of the chiral-even twist-three GPD, $G_2$, which was shown to enter the sum rule for partonic Orbital Angular Momentum (OAM) in Refs.\cite{Penttinen,Polyakov}. Correlated, although indirect, information on OAM
can therefore be obtained from both $\widetilde{E}_T$ and $h_{1L}^\perp$ measurements.
In Ref.\cite{GGL_pi0} we evaluated the contributions of the various chiral odd GPDs to the observables in exclusive Deeply Virtual $\pi^0$ electro-Production (DV$\pi^0$P) recently measured at Jefferson Lab \cite{Kub,AvaKim}. In particular, there exist measurements for the various unpolarized scattering components, for
the beam spin asymmetry $A_{LU}$, and for the target spin asymmetries for longitudinal polarization, $A_{UL}$ (unpolarized beam) and $A_{LL}$ (polarized beam).
Our analysis uses a flexible parametrization which is based on the reggeized diquark model, that is a spectator model with variable mass of the spectator, $M_X$, which reproduces the Regge behavior in the low $x$, large $M_X$ limit \cite{GGL,newFF}. The presence of scalar and axial-vector diquarks allows us to model the $u$ and $d$ quark distributions separately, {\it i.e.} distinguishing between the different isospin projections, $ud$ and $uu$, as it follows from the use of SU(4) symmetry in the nucleon. The model's parameters were determined from quantitative fits of the chiral even GPDs using a compilation of: {\it i)} flavor separated Dirac and Pauli nucleon form factor data \cite{Cates}, and axial \cite{Schindler} and pseudo-scalar \cite{Fearing} form factor data; {\it ii)} DVCS data \cite{HallB}; and {\it iii)} reproducing the forward limit of GPDs for both the unpolarized and polarized parton distribution functions from DIS data.
For the latter we evolved the model from its initial low scale to the scale of the data using leading order Perturbative QCD (PQCD) GPD evolution equations (see {\it e.g.} Refs.\cite{MusRad,GolMar}).
For the various asymmetries, in particular, we predicted that the spin modulations which are ideal for measuring chiral odd GPDs using a longitudinally polarized target are
$A_{UL}^{\sin 2 \phi}$, and the constant in $\phi$ term, $A_{LL}$. The latter are, in fact, proportional to transverse polarization terms only, and therefore they involve only chiral odd GPDs. The other modulations, $A_{UL}^{\sin \phi}$ and $A_{LL}^{\cos \phi}$, contain contributions from both longitudinal and transverse photons; their description will always contain a mixture of chiral odd and chiral even GPDs, and they are thus
less suitable for a clean extraction of the chiral odd sector. Using our model, we found that the dominant contribution to these asymmetries comes from the chiral even sector (even in the low $Q^2$ kinematical range of Jefferson Lab).
An additional, important observation is that most of the $A_{UL}$ and $A_{LL}$ data are sensitive to $\widetilde{H}_T$, $E_T$, and $\widetilde{E}_T$, but not directly to $H_T$, and they are therefore not suitable for extracting the tensor charge.
In this paper we show that the availability of a transversely polarized target and of combined $\eta$ and $\pi^0$ exclusive electroproduction data are both crucial to extract the tensor charge and its flavor dependence. The extraction from exclusive measurements can in principle allow us to pin down this quantity more precisely than from SIDIS analyses, an extra advantage being that, as we will explain in what follows, the low $x$ dependence of transversity which dominates the integration giving $\delta_q$, $(q=u,d)$, will be constrained in exclusive measurements through the $t$ behavior of the Compton Form factors (CFFs).
Our paper is organized as follows:
in Section \ref{sec:2} we review our approach, and we outline the derivation of the helicity amplitudes entering the cross section
for deeply virtual pseudoscalar meson production; in Section \ref{sec:3} we present our results for the various observables and estimate the possibility of extraction of the tensor charge from deeply virtual exclusive experiments; in Section \ref{sec:conclusions} we draw our conclusions.
\section{Formalism}
\label{sec:2}
We summarize, in what follows, the formal steps that lead us to parametrize polarized exclusive pseudoscalar meson electroproduction in terms of chiral-odd GPDs.
\subsection{Definitions and Kinematics}
We start by defining GPDs at twist-two as the matrix elements of the following projection of the unintegrated quark-quark proton correlator (see Ref.\cite{Metz1} for a detailed overview),
\footnote{In what follows we can omit the Wilson gauge link without loss of generality \cite{Ji1}.}
\begin{eqnarray}
W_{\Lambda', \Lambda}^\Gamma(x,\Delta,P) & = & \int \frac{d z^- }{2 \pi} e^{ixP^+ z^-} \left. \langle p', \Lambda' \mid \overline{\psi}\left(-\frac{z}{2}\right) \Gamma \, \psi\left(\frac{z}{2}\right)\mid p, \Lambda \rangle \right|_{z^+=0,{\bf z}_T=0},
\label{matrix}
\end{eqnarray}
where $\Gamma=i\sigma^{i+}\gamma_5 (i=1,2)$
for the chiral odd case,
and the target's spins are $\Lambda, \Lambda^\prime$. $W_{\Lambda', \Lambda}^\Gamma$ was parametrized as \cite{Diehl_odd},
\begin{eqnarray}
\label{correlator}
W_{\Lambda', \Lambda}^{[i\sigma^{i+}\gamma_5]} & = & \overline{U}(P',\Lambda') \left[ i \sigma^{+i} H_T(x,\xi,t) +
\frac{\gamma^+ \Delta^i - \Delta^+ \gamma^i}{2M} E_T(x,\xi,t) \right. \nonumber \\
& + & \left. \frac{P^+ \Delta^i - \Delta^+ P^i}{M^2} \widetilde{H}_T(x,\xi,t) +
\frac{\gamma^+ P^i - P^+ \gamma^i}{2M} \widetilde{E}_T(x,\xi,t) \right] U(P,\Lambda) \nonumber \\
& = & (1-\xi) (\Lambda \delta_{i1} + i \delta_{i2} ) \delta_{\Lambda,-\Lambda'} H_T +
\left( \frac{\Delta_i}{2M} + i \Lambda \, \xi \epsilon^{03ji} \frac{\Delta_j}{2M} \right) \delta_{\Lambda \Lambda'} E_T \nonumber \\
& + & \left[ \frac{\Delta_i}{M} \delta_{\Lambda, \Lambda'} + (\Lambda \Delta_1+ i \Delta_2) \frac{\Delta_i}{2M^2} \delta_{\Lambda,- \Lambda'} \right] \widetilde{H}_T
+ \left[ \frac{1}{1+\xi} \left( \xi\frac{\Delta_i}{2M} + i \, \Lambda \, \epsilon^{03ji} \frac{\Delta_j}{2M} \right) \delta_{\Lambda \Lambda'}
+ \xi ( \Lambda \delta_{i1} + i \delta_{i2} ) \delta_{\Lambda,- \Lambda'} \right ] \widetilde{E}_T \nonumber \\
\end{eqnarray}
The correlator in Eqs.(\ref{matrix},\ref{correlator}) is expressed in terms of kinematical variables defined in the ``symmetric frame'', where we define $\overline{P}=(P+P')/2$, the average proton momentum, and $\Delta = P-P'$. $\overline{P}$ is along the $z$-axis with momentum $\overline{P}_3 \approx \overline{P}^+$.
The four-momenta LC components ($v \equiv (v^+,v^-,\vec{v}_T)$, where $v^\pm=1/\sqrt{2}(v_o \pm v_3)$) are:
\begin{subequations}
\label{kin:sym}
\begin{eqnarray}
\overline{P} & \equiv & \left( \overline{P}^+, \frac{M^2}{\overline{P}^+}, 0 \right) \nonumber \\
\Delta & \equiv & \left( \xi \, (2 \overline{P}^+), \frac{ t+ {\bf \Delta}_T^2}{2 \xi \overline{P}^+}, {\bf \Delta}_T \right) \\
P & \equiv & \left((1+\xi) \overline{P}^+, \frac{M^2+ {\bf \Delta}_T^2/4}{(1+\xi)\overline{P}^+}, {\bf \Delta}_T/2 \right) \nonumber \\
P' & \equiv & \left( (1-\xi) \overline{P}^+, \frac{M^2+ {\bf \Delta}_T^2/4}{(1-\xi)\overline{P}^+},- {\bf \Delta}_T/2 \right), \nonumber
\label{coord_asym}
\end{eqnarray}
\end{subequations}
where in the DGLAP region (here we consider $x>\xi$) the coordinates of the off-shell struck parton are,
\begin{subequations}
\begin{eqnarray}
k & \equiv & \left( (x+\xi)\overline{P}^+, k^-, {\bf k}_T + {\bf \Delta}_T /2 \right), \nonumber \\
k' & \equiv & \left( (x-\xi)\overline{P}^+, k'^-,{\bf k}_T - {\bf \Delta}_T/2 \right)
\end{eqnarray}
\end{subequations}
Other useful variables can be written as,
\[ \hat{s} = (k+q)^2 \approx Q^2(x-\xi)/2\xi , \;\;\;\; \hat{u} = (k^\prime -q)^2 \approx - Q^2 (x+\xi)/2 \xi, \;\;\;\; q^- \approx (Pq)/P^+ = \frac{Q^2 (1+\xi)}{4 \xi P^+}. \]
The loop diagram in Fig.\ref{fig1} integrated over the struck quark's momentum is performed using the variables: $d^4 k \equiv d k^+ d k^- d^2 k_\perp \equiv P^+ dX dk^- d^2 k_\perp$.
\subsection{Helicity Amplitudes Structure}
To describe spin dependent observables we next introduce the helicity amplitudes
(for a detailed description of the helicity amplitudes formalism in deeply virtual scattering processes see also Ref.\cite{Diehl_hab}).
For pseudoscalar meson production one has \cite{AGL,GGL},
\begin{eqnarray}
f_{\Lambda_\gamma 0}^{\Lambda \Lambda^\prime} (\xi,t, Q^2)& = & \sum_{\lambda,\lambda^\prime}
g_{\Lambda_\gamma 0}^{\lambda \lambda^\prime} (x,\xi,t,Q^2) \otimes
A_{\Lambda^\prime \lambda^\prime, \Lambda \lambda}(x,\xi,t),
\label{facto}
\end{eqnarray}
where the helicities of the virtual photon and the initial proton are, $\Lambda_\gamma$, $\Lambda$,
and the helicities of the produced pion and final proton are $0$, and $\Lambda^\prime$, respectively.
Notice that both longitudinal and transverse polarizations of the virtual photon $\gamma_{L(T)}^*$ can, in principle, contribute. While $\gamma_{L}^*$ was shown to be the leading contribution in Ref.\cite{ColFraStr}, in \cite{AGL,GGL} a possible scenario beyond collinear factorization was presented for $\gamma_T^* p \rightarrow M p$ which
explains the large transverse photon polarization contributions observed in the experimental data in terms of chiral odd GPDs.
In Eq.(\ref{facto}) we describe the factorization into a ``hard part'',
$g_{\Lambda_\gamma 0}^{\lambda \lambda^\prime}$ for the partonic subprocess
$\gamma^*_T + q \rightarrow \pi^0 + q$, which appears now at twist three,
and the quark-proton helicity amplitudes, $A_{\Lambda^\prime,\lambda^\prime;\Lambda,\lambda}$
that contain the chiral odd GPDs.
The amplitudes $A_{\Lambda^\prime \lambda^\prime, \Lambda \lambda}$ implicitly involve an integration over the unobserved quark's transverse momentum, $k_T$,
and are functions of
$x_{Bj} =Q^2/2M\nu \approx 2\xi/(1-\xi^2)$, $t$ and $Q^2$. The convolution integral in Eq.(\ref{facto})
is given by $\otimes \rightarrow \int_{-1}^1 d x$.
The connection with the correlator is carried out by considering,
\begin{eqnarray}
A_{\Lambda' \lambda', \Lambda \lambda} = \int \frac{d z^-}{2 \pi} e^{ixP^+ z^-} \left. \langle p', \Lambda' \mid {\cal O}_{\lambda' \lambda}(z) \mid p, \Lambda \rangle \right|_{z^+=0, {\bf z}_T=0},
\end{eqnarray}
where,
\begin{eqnarray}
{\cal O}_{-+}(z) & = & -i \bar{\psi}\left(-\frac{z}{2}\right) (\sigma^{+1} - i \sigma^{+2}) \psi\left(\frac{z}{2}\right) \\
{\cal O}_{+ -}(z) & = & i \bar{\psi}\left(-\frac{z}{2}\right) (\sigma^{+1} + i \sigma^{+2}) \psi\left(\frac{z}{2}\right).
\end{eqnarray}
By taking this into account in Eq.(\ref{correlator}), and by adding and subtracting the expressions corresponding to $i=1,2$, respectively, one obtains the expressions for the chiral odd helicity amplitudes in terms of
GPDs \cite{Diehl_odd,Diehl_hab},
\begin{subequations}
\label{GPDodd}
\begin{eqnarray}
A_{++,--} & = & \sqrt{1-\xi^2} \left[ { H}_ T + \frac{t_0-t}{4M^2} \widetilde{ H}_T
+ \frac{\xi^2}{1-\xi^2} { E}_T + \frac{\xi}{1-\xi^2} \widetilde{ E}_T \right] \\
A_{+-,-+} & = & - \sqrt{1-\xi^2} \, \frac{t_0-t}{4M^2} \, \widetilde{ H}_T \\
A_{++,+-} & = & \frac{\sqrt{t_0-t}}{4M} \left[ 2\widetilde{ H}_T + (1-\xi) \left({ E}_T - \widetilde{ E}_T \right) \right] \\
A_{-+,--} & = & \frac{\sqrt{t_0-t}}{4M} \, \left[ 2\widetilde{ H}_ T + (1+\xi) \left( { E}_T + \widetilde{ E}_T \right) \right].
\end{eqnarray}
\end{subequations}
Notice that $A_{+-,++}$ and $A_{++,+-}$ change sign under Parity, while $A_{--,++}$ and $A_{+-,-+}$ do not; since $g_{10}^{+-}$ also changes sign, $f_{10}^{++}$ and $f_{10}^{--}$ will not change sign under Parity, while $f_{10}^{+-}$ and $f_{10}^{-+}$ will change sign.
The chiral-odd coupling at the pion vertex for the subprocess $\gamma^* q \rightarrow \pi^0 q'$
is given by,
\begin{eqnarray}
g_{\Lambda_\gamma 0}^{\lambda \lambda^\prime} & = & g_\pi^{V(A)}(Q^2) \, q^-
\left[ \bar{u}(k^\prime,\lambda^\prime) \gamma^\mu \gamma^+ \gamma_5 u(k,\lambda) \right]
\epsilon_\mu^{\Lambda_\gamma} \left( \frac{1}{\hat{s} - i \epsilon } - \frac{1}{\hat{u} - i \epsilon} \right),
\label{g_odd}
\end{eqnarray}
where we distinguish three different contributions: from the term,
$K = q^-[1/(\hat{s}-i \epsilon) -1/(\hat{u} - i \epsilon)]$,
\begin{eqnarray}
K = \frac{Q^2}{2 x_{Bj} P^+} \, \frac{x_{Bj}}{Q^2} \, C^+ \equiv \frac{1}{2P^+} C^+ \\
C^+ = \frac{1}{x- \xi + i \epsilon } + \frac{1}{x+\xi - i \epsilon };
\end{eqnarray}
from the contraction,
\begin{eqnarray}
\bar{u}(k^\prime,\lambda^\prime) \gamma^\mu \gamma^+ \gamma_5 u(k,\lambda) \epsilon_\mu^{\Lambda_\gamma} & = &
N N^\prime \, {\rm Tr}
\left\{ (\!\not{k} +m) \, \hat{\mathcal O}_{\lambda,\lambda^\prime} (\!\not{k}^\prime + m) \gamma^\mu \gamma_5 \gamma^+ \right\} \epsilon_\mu^{\Lambda_\gamma} \nonumber \\
& \approx & -\frac{1}{\sqrt{k^{\prime \, +} k^+}} \, \left[ k^o p^{\prime \, +} -(k k^\prime) + k^+ k^{\prime \, o} \right] (\epsilon_1^{+1} - i \epsilon_2^{+1}) = \sqrt{(x-\xi)(x+\xi)} P^+
\end{eqnarray}
where $N = 1/\sqrt{P^+(x+\xi)}$ and $N^\prime=1/\sqrt{P^+(x-\xi)}$ are the quark spinor normalizations (details are given in Appendix \ref{appa}), and,
\begin{subequations}
\begin{eqnarray}
\hat{\mathcal O}_{\pm \pm } = \frac{1}{4}(1+\gamma^o) (1\pm \gamma_5 \gamma_3) && \\
\hat{\mathcal O}_{\pm \mp} = -\frac{1}{4}(1+\gamma^o) \gamma_5(\gamma_1 \mp i\gamma_2), &&
\end{eqnarray}
\end{subequations}
and finally
from the $Q^2$ dependent form factor $g_\pi^{V(A)}(Q^2)$ where we separate \cite{AGL}
the $J^{PC}=1^{- -}$ (V) and $J^{PC}=1^{+ -}$ (A), $t$-channel exchanges in the amplitudes for transverse and longitudinal virtual photons, respectively.
The two distinct contributions arise when one goes beyond a simple one gluon exchange description of the chiral odd coupling $\propto \gamma^5$.
\footnote{These two distinct configurations are the dominant terms in the two series with {\it natural parity} ($1^{--}, 3^{--}, \ldots$), labeled $V$, and
{\it unnatural parity} ($1^{+-}, 3^{+-}, \ldots$), labeled $A$. }
What makes the two contributions $\gamma^* (q \bar{q})_V \rightarrow \pi^0$ and
$\gamma^* (q \bar{q})_A \rightarrow \pi^0$ distinct
is that, in the natural parity case (V), $L$ is always the same for the initial and final states, or $\Delta L=0$,
while for unnatural parity (A), $\Delta L =1$.
We modeled this difference by replacing the collinear factorization expressions with the following expressions containing a modified kernel
\begin{eqnarray}
g^V_{\Lambda_{\gamma^*},\lambda; 0, \lambda^\prime} = \int dx_1 dy_1 \int d^2 b
\, \hat{\psi}_V(y_1,b) \, \hat{{\cal F}}_{\Lambda_{\gamma^*},\lambda; 0, \lambda^\prime}(Q^2,x_1,x_2,b) \alpha_S(\mu_R)
\exp[-S] \, \hat{\phi}_{\pi^0}(x_1,b) && \\
g^A_{\Lambda_{\gamma^*},\lambda; 0, \lambda^\prime} = \int dx_1 dy_1 \int d^2 b
\, \hat{\psi}_A(y_1,b) \, \hat{{\cal F}}_{\Lambda_{\gamma^*},\lambda; 0, \lambda^\prime}(Q^2,x_1,x_2,b) \alpha_S(\mu_R)
\exp[-S] \, \hat{\phi}_{\pi^0}(x_1,b) &&
\end{eqnarray}
where,
\begin{equation}
\hat{\psi}_{A}(y_1,b) = \int d^2 k_T J_1(k_T b) \psi_V(y_1,k_T)
\end{equation}
The higher order Bessel function describes the situation where $L$ is always larger in the initial state.
In impact parameter space this corresponds to configurations of larger radius.
By considering the dominant LC components, we see that only
$g_{10}^{+-}$ survives,
\begin{eqnarray}
\label{ampg2}
g_{10}^{+-} & \approx & g_\pi^{V(A)}(Q^2) \: C^+
\end{eqnarray}
\begin{figure}
\includegraphics[width=9.cm]{fig2_trans.pdf}
\caption{Vertex structures defining the spectator model tree level diagrams.
}
\label{fig_LCWF}
\end{figure}
Putting together all steps, we can write a calculable form for the convolution in Eq.(\ref{facto}).
For a transverse photon we obtain,
\begin{subequations}
\label{helamps_gpd2}
\begin{eqnarray}
f_{10}^{++} &= & g_\pi^{V}(Q) \frac{\sqrt{t_0-t}}{4M} \, \left[ 2\widetilde{\cal H}_ T + (1+\xi) \left( { \cal E}_T + \widetilde{\cal E}_T \right) \right]
\\
f_{10}^{+-} & = & \frac{g_\pi^{V}(Q)+ g_\pi^{A}(Q)}{2} \, \sqrt{1-\xi^2} \left[ { \cal H}_ T + \frac{t_0-t}{4M^2} \widetilde{ \cal H}_T
+ \frac{\xi^2}{1-\xi^2} {\cal E}_T + \frac{\xi}{1-\xi^2} \widetilde{\cal E}_T \right] \\
f_{10}^{-+} & = & - \frac{g_\pi^{A}(Q)- g_\pi^{V}(Q)}{2} \, \sqrt{1-\xi^2} \, \frac{t_0-t}{4M^2} \, \widetilde{\cal H}_T
\\
f_{10}^{--} & = & g_\pi^{V}(Q) \frac{\sqrt{t_0-t}}{4M} \left[ 2\widetilde{ \cal H}_T + (1-\xi) \left({\cal E}_T - \widetilde{\cal E}_T \right) \right]
\end{eqnarray}
\end{subequations}
where the matching of the $V$ and $A$ contributions to the helicity amplitudes is as follows: $f_{10}^{++}, f_{10}^{--} \propto g^V$, $f_{10}^{+-} \propto g^V+g^A$,
$f_{10}^{-+} \propto g^V-g^A$ (see Ref. \cite{GGL_pi0short,GGL_pi0}).
${\cal H}_T$, etc., are the convolutions of the GPDs with $C^+$, {\it i.e.} the Compton form factors (CFFs), which at leading order in PQCD are given by,
\begin{equation}
\label{CFF_def}
{\cal F}_T(\xi,t,Q^2) = \int_{-1}^1 dx \; C^+ \, F_T(x,\xi ,t,Q^2)
\end{equation}
with ${\cal F}_T \equiv {\cal H}_T, {\cal E}_T, \widetilde{\cal H}_T, \widetilde{\cal E}_T$ and $F_T \equiv H_T, E_T, \widetilde{H}_T, \widetilde{E}_T$.
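The structure of the convolution in Eq.(\ref{CFF_def}) can be checked with a short numerical sketch. Assuming a toy GPD $F(x)$ (purely illustrative, not our parametrization), the $i\epsilon$ prescriptions in $C^+$ split the CFF into a principal-value real part and a delta-function imaginary part, $\Im {\cal F}_T = \pi [F(-\xi)-F(\xi)]$, which can be evaluated with the standard subtraction trick:

```python
import math

def cff(F, xi, n=20001):
    """Leading-order CFF of Eq. (CFF_def) for a toy GPD F(x) at fixed (t, Q^2).

    With C^+ = 1/(x - xi + i eps) + 1/(x + xi - i eps):
      Re CFF = PV int_{-1}^{1} dx [1/(x - xi) + 1/(x + xi)] F(x)
      Im CFF = pi * [F(-xi) - F(xi)]
    Each principal value is computed via subtraction, e.g.
      PV int F/(x-xi) = int [F(x) - F(xi)]/(x - xi) dx + F(xi) ln[(1-xi)/(1+xi)].
    """
    h = 2.0 / n
    re = 0.0
    for i in range(n):                 # midpoint rule; pick xi off the grid
        x = -1.0 + (i + 0.5) * h
        re += (F(x) - F(xi)) / (x - xi) + (F(x) - F(-xi)) / (x + xi)
    re *= h
    re += F(xi) * math.log((1 - xi) / (1 + xi))
    re += F(-xi) * math.log((1 + xi) / (1 - xi))
    return re, math.pi * (F(-xi) - F(xi))
```

For a constant toy GPD the two logarithms cancel and the CFF vanishes identically, a convenient sanity check of the subtraction bookkeeping.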
\subsection{Chiral Odd GPDs from Helicity Amplitudes}
The chiral odd CFFs (or GPDs) are obtained by inverting Eqs.(\ref{helamps_gpd2}) (or Eqs.(\ref{GPDodd})).
For instance, for $\xi=0$ one has,
\footnote{Numerical calculations throughout this paper were conducted using the full $\xi$-dependent expressions; the expressions above are shown for simplicity.}
\begin{subequations}
\label{gpdsodd:eq}
\begin{eqnarray}
\label{AHTbar}
\frac{ \sqrt{t_0-t}}{2M} \left[ 2 \widetilde{H}_T(x, 0, t) + E_T(x,0,t) \right] & = & A_{++,+-} + A_{-+,--} \nonumber \\
& = & A^{T_Y}_{++,++} - A^{T_Y}_{+-,+-} + A^{T_Y}_{-+,-+} - A^{T_Y}_{--,--} \\
\label{AHTX}
H_T(x, 0, t) & = & A_{++,--} + A_{-+,+-} \nonumber \\
& = & A^{T_Y}_{++,--} - A^{T_Y}_{+-,-+} + A^{T_Y}_{--,++} - A^{T_Y}_{-+,+-} \nonumber \\
& = & A^{T_X}_{++,++} - A^{T_X}_{+-,+-} - A^{T_X}_{-+,-+} + A^{T_X}_{--,--} \\
\label{AHTtildeX}
-\frac{t_0-t}{4M^2} \widetilde{H}_T(x, 0, t) & = & A_{-+,+-} \nonumber \\
& = & A^{T_Y}_{++,++} - A^{T_Y}_{+-,+-} - A^{T_Y}_{--,--} + A^{T_Y}_{-+,-+} + A^{T_Y}_{++,--} - A^{T_Y}_{+-,-+} + A^{T_Y}_{--,++} - A^{T_Y}_{-+,+-} \nonumber \\
& = & A^{T_X}_{++,++}+A^{T_X}_{+-,+-}+A^{T_X}_{-+,-+}+A^{T_X}_{--,--} \\
- \frac{ \sqrt{t_0-t}}{2M} \widetilde{E}_T(x, 0, t) & = & A_{++,+-} - A_{-+,--} \nonumber \\
& = & A^{L,T_X}_{++,++}-A^{L,T_X}_{+-,+-}-A^{L,T_X}_{-+,-+}+A^{L,T_X}_{--,--} =0
\end{eqnarray}
\end{subequations}
where we show the GPDs calculated using the helicity amplitudes (first line of each equation), and using the transversity bases: $T_Y$, with the transverse spin orthogonal to ${\bf \Delta}$ (without loss of generality ${\bf \Delta}$ is assumed to be along the $x$ axis); the planar-transversity basis $T_X$, with transverse spin along $x$; and a mixed longitudinal and planar-transverse basis, $L,T_X$, with longitudinal and transverse (along $x$) spins in the initial and final states, respectively.
If the scattering plane for the quark+nucleon is the $x-z-$plane (with ${\bf P}$ and ${\bf \Delta}$) and the $y-$direction is along ${\bf P} \times {\bf \Delta}$, then for each particle's spin $\frac{1}{2}$ helicity state, $\sqrt{2} \mid T_X=\pm \rangle = \mid + \frac{1}{2} \rangle \pm \mid - \frac{1}{2} \rangle $ and $\sqrt{2} \mid T_Y=\pm \rangle = \mid + \frac{1}{2} \rangle \pm i \mid - \frac{1}{2} \rangle $ (Fig.\ref{fig3}).
Note that $\widetilde{E}_T(x, \xi, t)$ vanishes for $\xi \rightarrow 0$ because the two helicity amplitudes become equal due to parity and time reversal invariance.
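The $\xi=0$ inversion can be verified algebraically by feeding Eqs.(\ref{GPDodd}) into the amplitude combinations of Eqs.(\ref{gpdsodd:eq}). The numerical values below are hypothetical placeholders chosen only to exercise the algebra; they are not model output:

```python
import math

# Hypothetical values of the chiral odd GPDs at fixed (x, t), xi = 0;
# illustrative only, not a model prediction.
HT, HTtil, ET, ETtil = 0.8, -0.35, 1.1, 0.27
M, t0_t = 0.938, 0.5                      # proton mass (GeV), t_0 - t (GeV^2)
L = math.sqrt(t0_t) / (4.0 * M)           # common kinematic factor
T = t0_t / (4.0 * M ** 2)

# Helicity amplitudes from Eqs. (GPDodd) at xi = 0
A_pp_mm = HT + T * HTtil                  # A_{++,--}
A_pm_mp = -T * HTtil                      # A_{+-,-+} (= A_{-+,+-} by parity)
A_pp_pm = L * (2 * HTtil + ET - ETtil)    # A_{++,+-}
A_mp_mm = L * (2 * HTtil + ET + ETtil)    # A_{-+,--}

# Combinations appearing on the first lines of Eqs. (gpdsodd:eq)
EbarT_comb = A_pp_pm + A_mp_mm   # should give sqrt(t0-t)/(2M) [2 H~_T + E_T]
HT_rec     = A_pp_mm + A_pm_mp   # should give H_T
HTtil_comb = A_pm_mp             # should give -(t0-t)/(4M^2) H~_T
```

The same bookkeeping also shows why the $\widetilde{E}_T$ combination is antisymmetric in the two single-flip amplitudes.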
\begin{figure}
\includegraphics[width=6.cm]{fig1_transappendix.pdf}
\caption{$\Delta$ and $({\bf \Delta} \times {\bf P})/\mid {\bf P} \mid$ which defines transversity. In this paper we take $\Delta$ along the $x$-axis without loss of generality.
}
\label{fig3}
\end{figure}
In order to give a partonic interpretation, the spin structures on the RHS and LHS of Fig.\ref{fig1} need to be the same, or diagonal in spin, where the direction of transverse spin is established using the bases defined with $\Delta$. For instance, $H_T$ is diagonal in $T_X$, while $\widetilde{H}_T$ is diagonal in a $T_X$ and $T_Y$ mixed basis.
\subsubsection{$\overline{E}_T = 2 \widetilde{H}_T + E_T$}
By inspecting the spin content of Eqs.(\ref{gpdsodd:eq}) we see that Eq.(\ref{AHTbar}) corresponds to the same combination as for the Boer Mulders function $h_1^\perp$ \cite{BoerMul}.
This is well known to be vanishing at leading order, in the absence of a gauge link, owing to the ``naive'' T-odd nature of $h_1^\perp$ (see \cite{BarDraRat} for a review). The question of whether $h_1^\perp$ and $\left[ 2 \widetilde{H}_T(x, 0, t) + E_T(x,0,t) \right]$ could be related was initially posed in Ref.\cite{Bur1} within a transverse coordinate space representation.
A more general framework from which to address this question was subsequently provided in Ref.\cite{Metz1} using the GTMDs, {\it i.e.}, GPDs left unintegrated over ${\bf k}_T$. GTMDs are complex objects that can be parametrized as
\begin{equation}
X(x,k_T,\xi,t;Q^2) = X^e + i X^o
\label{GTMD}
\end{equation}
where $X$ represents any twist two GPD, and $X^e$ and $X^o$, the real and imaginary parts, are also the T-even and T-odd components, respectively. The leading twist GPDs are obtained by integrating Eq.(\ref{GTMD}); they are T-even and they can derive only from the real parts of the amplitude combinations. On the other hand, the leading twist TMDs are obtained in the forward limit ($t,\xi \rightarrow 0$), and they can also be T-odd (within the well known appropriate interpretation of the gauge links)
\footnote{A discussion of gauge links is beyond the scope of this paper and is omitted here.}
in which case they involve the imaginary part of the amplitudes combinations. In this context we see that both $h_1^\perp$ and $\overline{E}_T = 2 \widetilde{H}_T(x, \xi, t;Q^2) + E_T(x,\xi,t;Q^2) $ participate in the same equation at the GTMD level,
\begin{equation}
\overline{E}_T(x,k_T,\xi,t;Q^2) = \Re e \overline{E}_T (x,k_T,\xi,t;Q^2) + i \, \Im m \overline{E}_T (x,k_T,\xi,t;Q^2)
\label{GTMD2}
\end{equation}
with,
\begin{eqnarray}
h_1^\perp(x,k_T) \equiv \underset{t,\xi \rightarrow 0}{\rm lim} \frac{\sqrt{t_0-t}}{2M} \, \overline{E}_T (x,k_T,\xi,t;Q^2) & = & \, \Im m \overline{E}_T (x,k_T,0,0;Q^2) \\
\overline{E}_T (x,\xi,t;Q^2) \equiv \int d^2 k_T \overline{E}_T (x,k_T,\xi,t;Q^2)& = & \int d^2 k_T \Re e \overline{E}_T (x,k_T,\xi,t;Q^2).
\label{limitsGTMD}
\end{eqnarray}
(our formalism differs from Ref.\cite{Metz1}, where the GTMD $\overline{E}$ is further separated into terms arising from different Lorentz structures; for simplicity, while remaining general, we use only one term, $\overline{E}_T(x,k_T,\xi,t;Q^2)$).
We conclude that $h_1^\perp(x,k_T)$ and $\overline{E}_T (x,\xi,t;Q^2)$, although they derive from the same combinations of helicity amplitudes, will in general differ, or be unrelated. In other words, $h_1^\perp(x,k_T)$ and $\overline{E}_T (x,\xi,t;Q^2)$ provide separate information from the imaginary and real parts of the amplitudes, respectively. $\pi^0$ and $\eta$ exclusive electroproduction data allow for an independent extraction of $\overline{E}_T$, and they are, therefore, ideal for testing this aspect of the theory.
Integrating Eq.(\ref{limitsGTMD}) over $x$ at $t=0$, one obtains the tensor anomalous magnetic moment \cite{Bur1},
\begin{eqnarray}
\kappa^T_q = \int_{-1}^1 dx \overline{E}_T(x,0,0;Q^2)
\label{kappaT}
\end{eqnarray}
This can also be extracted from the analysis of exclusive $\pi^0$ and $\eta$ electroproduction data as we explain below.
\subsubsection{Transversity}
By inspecting the spin structure of Eqs.(\ref{AHTX},\ref{AHTtildeX}) we see that $H_T$ is diagonal in planar transversity, whereas ${\widetilde H}_T$
is diagonal neither in canonical transversity nor in planar transversity. However,
subtracting Eq.(\ref{AHTtildeX}) from Eq.(\ref{AHTX}) one obtains a diagonal combination of $T_Y$ amplitudes,
\begin{eqnarray}
H^\prime_T & = & H_T + \frac{t_0 - t}{2M^2} {\widetilde H}_T
= A_{++,++}^{T_Y} + A_{--,--}^{T_Y} - A_{+-,+-}^{T_Y} - A_{-+,-+} ^{T_Y}.
\label{HTprime2}
\end{eqnarray}
This is the analog of the TMD relation, $h_1(x,{\bf k}_T^2)=h_{1T}(x,{\bf k}_T^2) + ({\bf k}_T^2 / 2M^2) h_{1T}^\perp({\bf k}_T^2)$ where, owing to the fact that
$\Delta =0$, all functions that were diagonal in a given direction become diagonal on the transverse plane for any direction ({\it i.e.} $\Delta$ is no longer there to define the direction).
Therefore, in the forward direction, $H_T^\prime = H_T$, and this is diagonal in either $T_X$ or $T_Y$, consistently with the definition of transversity,
\begin{eqnarray}
H_T(x,0,0; Q^2) & = & h_1(x,Q^2)
\end{eqnarray}
Integrating over $x$ one obtains the tensor charge, at $t=0$,
\begin{eqnarray}
\delta_q = \int_{-1}^1 dx H_T(x,0,0;Q^2)
\end{eqnarray}
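Numerically, such lowest moments are straightforward quadratures. The toy density below is a hypothetical placeholder normalized to unit tensor charge; it stands in for $H_T(x,0,0;Q^2)$ only to illustrate the moment machinery, not our parametrization:

```python
import math

def x_moment(F, n=20001):
    """Lowest x-moment, int_{-1}^{1} dx F(x), by the midpoint rule."""
    h = 2.0 / n
    return h * sum(F(-1.0 + (i + 0.5) * h) for i in range(n))

# Hypothetical valence-like toy density with unit integral on [0, 1];
# illustrative only, not a model prediction for H_T.
HT_toy = lambda x: 6.0 * x * (1.0 - x) if x > 0.0 else 0.0

delta_q_toy = x_moment(HT_toy)   # tensor charge of the toy density
```

The same routine, applied to $\overline{E}_T(x,0,0;Q^2)$, gives the tensor anomalous magnetic moment $\kappa^T_q$ discussed above.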
$\widetilde{H}_T$ can also be interpreted in a mixed planar/transversity basis as the distribution of transversely polarized quarks along $x$ in a transversely polarized proton along $y$.
In fact, Eq.(\ref{AHTtildeX}) is related to the first $k_T$ moment of $h_{1T}^\perp$,
\begin{eqnarray}
\underset{t \rightarrow 0}{\rm lim} \, \widetilde{H}_T(x,0,t; Q^2) & \rightarrow & h_{1T}^{\perp (1)}(x,Q^2).
\end{eqnarray}
It is important to keep in mind that although this relation holds (modulo an $x$ dependent factor) when tested using simple spectator models \cite{Metz2}, in Ref.\cite{Metz1} it was disproven based on the GTMD substructures underlying $\widetilde{H}_T(x,0,0; Q^2)$ and $h_{1T}^{\perp (1)}(x,Q^2)$, which differ from one another. The physical origin of this discrepancy, which emerges in parametrizing the chiral odd GTMD correlator, is however unclear to date. Whether it is an artifact of the proposed parametrization, or whether it can be traced back to different spin configurations, is a subject for further exploration.
\subsubsection{$\widetilde{E}_T$}
$\widetilde{E}_T(x, \xi, t)$ describes a transversely polarized quark (along the $x$ axis) in a longitudinally polarized proton. It is T-even and directly connected to a TMD, being related to the first moment of $h_{1L}^\perp$,
\begin{equation}
\int d^2 k_T h_{1L}^\perp(x,k_T) \propto \widetilde{E}_T(x, 0, t)\mid_{t \rightarrow 0} = 0
\end{equation}
Although $\widetilde{E}_T$ vanishes at $\xi=0$ due to Parity and Time reversal constraints,
$\widetilde{E}_T(x, \xi, t)$ can nevertheless be measured at $\xi \neq 0$ (see also our paper on $\pi^0$ electroproduction, Ref.\cite{GGL_pi0}).
What makes $\widetilde{E}_T$ interesting is that its spin structure is similar to the one that appears in the chiral-even twist three GPD, $G_2$, that was shown to enter the sum rule component for partonic Orbital Angular Momentum (OAM) in Ref.\cite{Polyakov}. Several candidates among the TMDs and GPDs were proposed recently \cite{Ma,Schmidt,LorPas} as observables for the OAM component. In Ref. \cite{OAM} we illustrated the helicity amplitude structure of partonic OAM, and we confirmed from an alternative perspective, that OAM is twist three, and that it is uniquely observable through DVCS type measurements of $G_2$ (see also \cite{Hatta}). In particular, we pointed out that
based on Parity constraints, the twist two chiral even GTMD labeled $F_{14}$ in \cite{Metz1}, describing an unpolarized quark in a longitudinally polarized proton, could not contribute to OAM. We also pointed out that another possible candidate, the chiral odd TMD, $h_{1T}^\perp$ \cite{Efremov}, does not display the necessary spin structure of partonic OAM. To confirm this picture, it would be important to obtain correlated, although indirect information on OAM from both $\widetilde{E}_T$ and $h_{1L}^\perp$ measurements.
\subsection{Chiral Odd GPDs in Spectator Model}
\label{sec:gpdsodd}
Eqs.(\ref{helamps_gpd2}) provide the helicity amplitudes that enter directly the observables for pseudoscalar meson electroproduction.
However, in practical calculations we find an outstanding problem: differently from the chiral even sector, even with the available models for the chiral odd GPDs, both the normalization to the form factors and the forward limit of these quantities are largely undetermined by independent measurements (with the possible exception of transversity, $h_1$).
This makes it difficult to estimate the magnitude of the chiral odd GPDs. In the spectator model, which we describe below, one can overcome this difficulty by exploiting Parity relations among the helicity amplitudes that allow us to connect the chiral odd GPDs to the chiral even ones (analogous relations were found to hold for TMDs in Ref.\cite{BCR}). Although this is a model dependent procedure, we consider it a necessary step at an initial stage of both theoretical and experimental investigations.
To extract the chiral odd GPDs from DV$\pi^0(\eta)$P data we propose a parametrization based on the reggeized diquark model developed in Refs.\cite{AHLT1,AHLT2,GGL,newFF}. This is, in a nutshell,
a spectator model with variable mass of the spectator, $M_X$, which allows us to reproduce the Regge behavior in the limit $M_X \rightarrow \infty$ ($x \rightarrow 0$).
The model was evolved from its initial low scale to the scale of the data using leading order Perturbative QCD (PQCD) GPD evolution equations (see {\it e.g.} Refs.\cite{MusRad,GolMar}).
In the chiral even sector we determined the model's parameters from a fit using a compilation of three different data sets, namely, nucleon form factor data (flavor separated Dirac and Pauli form factors \cite{Cates}, axial \cite{Schindler} and pseudo-scalar form factor \cite{Fearing}), DVCS data \cite{HallB}, and DIS data on $F_2^p(n)$.
For the latter we accurately reproduced the valence quark PDFs obtained from global fits \cite{PDFS}.
In Ref.\cite{GGL} our approach led us to successfully reproduce data on various observables in DVCS besides the ones used in the fit, namely the charge \cite{HERMES1,HERMES2} and transverse \cite{HERMES1,HERMES2} single spin asymmetries.
In the chiral odd sector the GPDs are largely unconstrained. This is mostly due to the fact that, differently from the GPDs in the chiral even sector,
no experimental measurements from $t$ dependent form factors exist, which would provide normalizations for the GPDs. In the analysis of $\pi^0$ and $\eta$ electroproduction data it is, however, important to be able to gauge the size of the various chiral odd GPDs contributions.
We therefore developed an alternative procedure by exploiting Parity relations within the reggeized diquark model.
The extension of this model to the chiral odd sector is explained in detail in Ref.\cite{GGL_pi0} where
we obtained predictions for both the unpolarized and longitudinally polarized observables in $\pi^0$ electroproduction, namely the beam spin asymmetry, $A_{LU}$, and the polarized target asymmetries, $A_{UL}$ and $A_{LL}$.
The Parity relations between chiral even and chiral odd GPDs which are valid, in general, within a class of models including any type of spectator model with diquark spin $S=0,1$ and angular momentum $L=0$
\footnote{More complicated intermediate state configurations with $L\neq 0$ have been considered recently in Ref.\cite{Pena}. The Parity relations would be different in this case.}
yield, respectively, for $S=0$,
\begin{subequations}
\label{Amp0}
\begin{eqnarray}
A_{++,--}^{(0)} & = & A_{++,++}^{(0)} \\
A_{++,+-} ^{(0)} & = & - A_{++,-+}^{(0)*} \\
A_{+-,++}^{(0)} & = & - A_{-+,++}^{(0)*} \\
\label{f3}
A_{+-,-+}^{(0)} &= &\frac{t_0-t}{4M} \sqrt{1-\xi^2} \frac{\tilde{X}}{m+MX^\prime} \left[ E - \frac{\xi}{1-\xi^2} \widetilde{E} \right],
\end{eqnarray}
\end{subequations}
and, for $S=1$,
\begin{subequations}
\label{Amp1}
\begin{eqnarray}
A_{++,--}^{(1)} & = & \displaystyle\frac{X+X^\prime}{1+ X X^\prime} \; A_{++,++} ^{(1)} \\
A_{+-,-+}^{(1)} & = & 0 \\
A_{++,+-}^{(1)} & = & - \sqrt{ \frac{\langle \tilde{k}_\perp^2\rangle /P^{+ \, 2} }{X^{\prime \, 2} + \langle \tilde{k}_\perp^2 \rangle /P^{+ \, 2} } } \; A_{++,-+} ^{(1)*} \\
A_{+-,++}^{(1)} & = & - \sqrt{ \frac{\langle k_\perp^2 \rangle /P^{+ \, 2} }{X^2 + \langle k_\perp^2 \rangle/P^{+ \, 2} } } \; A_{-+,++}^{(1)*}.
\end{eqnarray}
\end{subequations}
The relations in Eqs.(\ref{Amp0}) are valid only if one of the two $\phi$ functions is real. By using Parity symmetry one cannot connect directly the chiral odd amplitude $A_{+-,-+}$, with its chiral even counterpart $A_{+-,+-}$ since both involve complex $\phi$ functions. Physically this corresponds to the fact that $A_{+-,-+}$
involves a double spin flip, and it must therefore be proportional to $\Delta_\perp^2 $, while $A_{+-,+-}$ is non-flip.
$A_{+-,-+}$ is, therefore, evaluated directly in Eq.(\ref{f3}).
Eqs.(\ref{Amp1}) were obtained by taking into account the angular momentum exchange between the LHS and RHS vertices in Fig.\ref{fig1} so that each amplitude on the LHS (chiral odd) can no longer be obtained as a simple product of the two vertices which give the chiral even amplitude on the RHS.
We can then obtain the chiral odd GPDs sets $F_T^{(0),(1)}$, by
inverting the expressions of the quark parton helicity amplitudes in both the chiral even \cite{Diehl_hab}, and chiral odd (\cite{Diehl_hab} and Eqs.(\ref{GPDodd})) sectors.
In Ref.\cite{GGL_pi0} we obtained, for $S=0$,
\begin{subequations}
\label{S0}
\begin{eqnarray}
\widetilde{H}_T^{(0)} & = & -\frac{1}{F} \left(E^{(0)} - \frac{\zeta}{2} \widetilde{E}^{(0)} \right) \\
E_T^{(0)} & = & \frac{(1-\zeta/2)^2}{1-\zeta} \left[E^{(0)} - 2 \widetilde{H}_T^{(0)} - (\frac{\zeta/2}{1-\zeta/2})^2 \widetilde{E}^{(0)} \right] \\
\widetilde{E}_T^{(0)} & = & \frac{\zeta/2 (1-\zeta/2)}{1-\zeta} \left[E^{(0)} - 2\widetilde{H}_T^{(0)} - \widetilde{E}^{(0)} \right] \\
H_T^{(0)} & = & \frac{H^{(0)} + \widetilde{H}^{(0)} }{2} - \frac{\zeta^2/4}{1-\zeta}\frac{E^{(0)} + \widetilde{E}^{(0)} }{2} - \frac{\zeta^2/4}{(1-\zeta/2)(1-\zeta)} E_T^{(0)} +
\frac{\zeta/4 (1-\zeta/2)}{1-\zeta} \widetilde{E}^{(0)} _T -
\frac{t_0-t}{4M^2} \frac{1}{F} \left(E^{(0)} - \frac{\zeta}{2} \widetilde{E}^{(0)} \right) \nonumber \\
\end{eqnarray}
\end{subequations}
and for $S=1$,
\begin{widetext}
\begin{subequations}
\label{S1}
\begin{eqnarray}
\widetilde{H}_T^{(1)} & = & 0 \\
E_T^{(1)} & = & \frac{1-\zeta/2}{1-\zeta} \left[ \tilde{a} \left(E^{(1)} - \frac{\zeta/2}{1-\zeta/2} \widetilde{E}^{(1)} \right) + a \left(E^{(1)} + \frac{\zeta/2}{1-\zeta/2} \widetilde{E}^{(1)} \right) \right] \\
\widetilde{E}_T^{(1)} & = & \frac{1-\zeta/2}{1-\zeta} \left[ \tilde{a} \left(E^{(1)} - \frac{\zeta/2}{1-\zeta/2} \widetilde{E}^{(1)} \right) - a \left(E^{(1)} + \frac{\zeta/2}{1-\zeta/2} \widetilde{E}^{(1)} \right) \right] \\
H_T^{(1)} & = & - G \left[ \frac{H^{(1)} + \widetilde{H}^{(1)}}{2} - \frac{\zeta^2/4}{1-\zeta}\frac{E^{(1)} + \widetilde{E}^{(1)}}{2} \right] - \frac{\zeta^2/4}{1-\zeta} E_T^{(1)} +
\frac{\zeta/4}{1-\zeta} \widetilde{E}_T^{(1)}
\end{eqnarray}
\end{subequations}
\end{widetext}
where we used the following variables: $\zeta = 2 \xi/(1+\xi)$, $X=(x+\xi)/(1+\xi)$, $X^\prime = X-\zeta$, and the various kinematical factors are,
\[ F= \left(M X^\prime + m_q \right) \, \frac{1-\zeta}{\widetilde{X} }, \;\;\;\; G= \displaystyle\frac{X+X^\prime}{1+ X X^\prime}, \;\;\;\; \widetilde{X} = \frac{1-X}{1-\zeta} \]
and
\[a = \displaystyle\sqrt{ \frac{\langle k_\perp^2 \rangle}{X^2 + \langle k_\perp^2 \rangle \left(\frac{2M\zeta^2}{Q^2}\right)^2} }, \;\;\;\;
\tilde{a} = \displaystyle\sqrt{ \frac{\langle \tilde{k}_\perp^2 \rangle}{(X-\zeta)^2 + \langle \tilde{k}_\perp^2 \rangle \left(\frac{2M\zeta^2}{Q^2}\right)^2} } \]
where $\langle k_\perp^2 \rangle^{1/2} \approx 0.3$ GeV, and $\langle \tilde{k}_\perp^2 \rangle^{1/2} = \langle ({\bf k}_\perp + (1-X) {\bf \Delta})^2 \rangle^{1/2} \approx \langle k_\perp^2 \rangle^{1/2} + (1-X) \mid {\Delta}\mid $.
The chiral even GPDs used to evaluate Eqs.(\ref{S0}) and (\ref{S1}) were taken from the parametrization developed in Ref.\cite{newFF}. In the forward limit one obtains,
\begin{subequations}
\label{S00}
\begin{eqnarray}
\widetilde{H}_T^{(0)} & = & - \frac{M(1-x)}{m+Mx} E^{(0)} \\
E_T^{(0)} & = & \frac{M(2-x) + m}{m+Mx} E^{(0)} \\
\widetilde{E}_T^{(0)} & = & 0 \\
H_T^{(0)} & = & \frac{H^{(0)} + \widetilde{H}^{(0)} }{2}
\end{eqnarray}
\end{subequations}
and for $S=1$,
\begin{subequations}
\label{S10}
\begin{eqnarray}
\widetilde{H}_T^{(1)} & = & 0 \\
E_T^{(1)} & = & 2 a E^{(1)} \\
\widetilde{E}_T^{(1)} & = & 0 \\
H_T^{(1)} & = & - \frac{2x}{1+x^2} \frac{H^{(1)} + \widetilde{H}^{(1)}}{2}
\end{eqnarray}
\end{subequations}
Note that once the chiral odd GPDs are calculated using the equations above, at a given initial scale of the model, $Q_0^2 \lesssim 1$ GeV$^2$,
their evolution to the scale of the data must proceed according to the equations developed for the chiral odd sector in \cite{odd_evol,Kumano}.
The parametric form for the chiral even GPDs was given in Ref.\cite{newFF}.
As both new DVCS and meson electroproduction data become available, it will be possible to perform a global fit using simultaneously all sets of data. At the present stage our approach guarantees a better control over the various kinematical dependences.
\subsection{Flavor Dependence}
\label{sec:flavor}
The $u$ and $d$ quark chiral odd distributions can be readily obtained from Eqs.(\ref{S0}) and (\ref{S1})
by using the SU(4) symmetry of the proton wave function,
\begin{eqnarray}
\mid p \uparrow \rangle &=& \sqrt{\frac{2}{1+a_S^2}} \left[ \frac{a_S}{\sqrt{2}} \mid u\uparrow S_0^0 \rangle + \frac{1}{3 \sqrt{2}} \mid u\uparrow T_0^0 \rangle -\frac{1}{3} \mid u\downarrow T_0^{1} \rangle
-\frac{1}{3} \mid d\uparrow T_{1}^0 \rangle + \frac{\sqrt{2}}{3} \mid d\downarrow T_{1}^{1} \rangle \right]
\end{eqnarray}
where $S_0^0 \equiv S_{I_3}^{S_3}$ is the scalar diquark with isospin 0 and spin component 0, $T_{0,1}^{0,1} \equiv T_{I_3}^{S_3}$ is the axial vector (triplet) diquark with the indicated isospin and spin components, and the parameter $a_S$ equals 1 for SU(4) symmetry and can differ from 1 to allow for symmetry breaking \cite{GolLie,BCR}. Separating out the spin dependence leaves,
\begin{eqnarray}
\mid p \uparrow \rangle &=& \sqrt{\frac{2}{1+a_S^2}} \left[ \frac{a_S}{\sqrt{2}} \mid u S_0 \rangle \mid 0,\uparrow \rangle + \left( \frac{1}{\sqrt{3}} \mid d \, T_{1} \rangle -\frac{1}{\sqrt{6}} \mid u \,T_0 \rangle \right) \left( \sqrt{\frac{2}{3}} \mid 1, \downarrow \rangle - \sqrt{\frac{1}{3}} \mid 0, \uparrow \rangle \right)
\right] .
\end{eqnarray}
When matrix elements are formed with this state and the corresponding spin down proton, then the sum over the spin states will leave the purely flavor or isospin couplings,
\begin{equation}
\mid p \rangle = \sqrt{\frac{2}{1+a_S^2}} \left[ \frac{a_S}{\sqrt{2}} \mid u \, S_0 \rangle -\frac{1}{\sqrt{6}} \mid u \, T_0 \rangle + \frac{1}{\sqrt{3}} \mid d \, T_{1} \rangle \right] .
\end{equation}
From here we can see that the spin independent nucleon distributions decompose as
\begin{eqnarray}
F^u &=& \frac{2}{1+a_S^2} \left( \frac{3}{2}a_S^2 F^{(0)} + \frac{1}{2} F^{(1)} \right) \label{Fu} \\
F^d &=& \frac{2}{1+a_S^2} F^{(1)},
\label{Fd}
\end{eqnarray}
where an overall normalization of $1/3$ for quark number has been imposed and the sum over quark spins has been taken. On the other hand, for the spin transfer GPDs, as in $g_1$ and $h_1$, only the quark spin state $\mid 0,\, \uparrow \rangle$ contributes, so $F^{(1)}$ is replaced by $-\frac{1}{3}F_T^{(1)}$,
\begin{eqnarray}
F_T^u & = & \frac{2}{1+a_S^2} \left(\frac{3}{2} a_S^2 F_T^{(0)} - \frac{1}{6} F_T^{(1)}\right) \label{FTu} \\
F_T^d & = & - \frac{2}{1+a_S^2} \frac{1}{3} F_T^{(1)},
\label{FTd}
\end{eqnarray}
where $F_T^{q} \equiv \{ H_T^q, E_T^q, \widetilde{H}_T^q, \widetilde{E}_T^q \}$, $q=u,d$.
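The spin-independent decomposition and its inversion can be sketched as follows; `flavor_decompose` and `flavor_invert` are illustrative helper names, and exact rational arithmetic makes the round trip an identity:

```python
from fractions import Fraction as Fr

def flavor_decompose(F0, F1, aS2=Fr(1)):
    """Eqs. (Fu)-(Fd): u- and d-quark distributions from the scalar (0)
    and axial-vector (1) diquark contributions; aS2 = a_S^2."""
    norm = Fr(2) / (1 + aS2)
    Fu = norm * (Fr(3, 2) * aS2 * F0 + Fr(1, 2) * F1)
    Fd = norm * F1
    return Fu, Fd

def flavor_invert(Fu, Fd, aS2=Fr(1)):
    """Invert Eqs. (Fu)-(Fd) to recover the diquark-level distributions."""
    F1 = (1 + aS2) * Fd / 2
    F0 = ((1 + aS2) * Fu / 2 - F1 / 2) * Fr(2, 3) / aS2
    return F0, F1
```

For the spin transfer GPDs the same inversion applies after replacing the axial-vector coefficient $\frac{1}{2}F^{(1)}$ by $-\frac{1}{6}F_T^{(1)}$, per Eqs.(\ref{FTu},\ref{FTd}).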
By inverting Eqs.(\ref{Fd},\ref{FTd}), and inserting the expressions for $F^{(0,1)}$, and $F_T^{(0,1)}$ in Eqs.(\ref{Amp0},\ref{Amp1}), we obtain
the chiral odd GPDs written in terms of their $u$ and $d$ quarks components.
\begin{figure}
\includegraphics[width=9.cm]{gpd_ht_etanew.pdf}
\caption{(Color online) Chiral odd GPD $H_T^q(x,\xi,t; Q^2)$, for $q=u$ (upper panel) and $q=d$ (lower panel). The dotted line is the forward limit, or transversity, $H_T^q(x,0,0;Q^2) \equiv h_1(x,Q^2)$.
All lines were obtained for $x_{Bj}=0.13$ and $Q^2=1.1$ GeV$^2$. The full line is for $-t= 0.1$ GeV$^2$, the dashed line is for $-t=0.7$ GeV$^2$. This GPD dominates the amplitude $f_{10}^{+-}$ at small $-t$.}
\label{gpdht:fig}
\end{figure}
\begin{figure}
\includegraphics[width=8.cm]{gpdeta_etbarp.pdf}
\includegraphics[width=8.cm]{gpdeta_etbarm.pdf}
\caption{(Color online) Chiral odd GPD combinations. Left: $[2\widetilde{H}_T^q+(1+\xi)E_T^q](x,\xi,t; Q^2)$, for $q=u$ (upper panel) and $q=d$ (lower panel); right: $[2\widetilde{H}_T^q+(1-\xi)E_T^q](x,\xi,t; Q^2)$, for $q=u$ (upper panel) and $q=d$ (lower panel). The dotted line is the forward limit, or $[2\widetilde{H}_T^q+ E_T^q](x,\xi,t; Q^2)$.
All lines were obtained for $x_{Bj}=0.13$ and $Q^2=1.1$ GeV$^2$. The full line is for $-t= 0.1$ GeV$^2$, the dashed line is for $-t=0.7$ GeV$^2$. These combinations dominate the helicity amplitudes, $f_{10}^{++}$ and $f_{10}^{--}$, respectively.}
\label{gpdetbar:fig}
\end{figure}
\begin{figure}
\includegraphics[width=9.cm]{gpdeta_ettil.pdf}
\caption{(Color online) Chiral odd GPD combinations. $\widetilde{E}_T^q(x,\xi,t; Q^2)$, for $q=u$ (upper panel) and $q=d$ (lower panel).
All lines were obtained for $x_{Bj}=0.13$ and $Q^2=1.1$ GeV$^2$. The full line is for $-t= 0.1$ GeV$^2$, the dashed line is for $-t=0.7$ GeV$^2$. $\widetilde{E}_T^q$ enters the helicity amplitudes, $f_{10}^{++}$ and $f_{10}^{--}$, however its contribution is smaller than the GPDs combination in Fig.\ref{gpdetbar:fig}.}
\label{gpdettil:fig}
\end{figure}
In Figures \ref{gpdht:fig}, \ref{gpdetbar:fig}, and \ref{gpdettil:fig} we show the flavor separated GPDs obtained by fixing all parameters of our model using constraints on
the chiral even GPDs as explained above. Our strategy for extracting chiral odd GPDs from exclusive electroproduction data is
to gradually loosen such constraints as more data in the chiral odd sector become available.
To extract flavor dependent chiral odd GPDs directly from the data it is important to analyze simultaneously $\pi^0$ and $\eta$ production.
In fact, from the SU(3) flavor symmetry for the pseudo-scalar meson octet applied to the chiral odd sector one has,
\begin{subequations}
\label{octet}
\begin{eqnarray}
{\cal F}_T^{\pi^0} & = & \frac{1}{\sqrt{2}} (e_u {\cal F}_T^u - e_d {\cal F}_T^d) \\
{\cal F}_T^{\eta} & = & \frac{1}{\sqrt{6}} (e_u {\cal F}_T^u + e_d {\cal F}_T^d - 2 e_s {\cal F}_T^s)
\end{eqnarray}
\end{subequations}
where $e_q$, $q=u,d,s$, is the quark's charge.
A flavor separation of the chiral odd CFFs can be performed by inverting the above equations, disregarding the contribution of the strange GPD,
\begin{eqnarray}
\label{octet2}
e_u {\cal F}_T^u & \approx & \frac{1}{\sqrt{2}} \left( {\cal F}_T^{\pi^0} + \sqrt{3} {\cal F}_T^{\eta} \right) \\
-e_d {\cal F}_T^d & \approx & \frac{1}{\sqrt{2}} \left( {\cal F}_T^{\pi^0} - \sqrt{3} {\cal F}_T^{\eta} \right)
\end{eqnarray}
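This approximate inversion can be verified numerically. The input values below are arbitrary placeholders for the $u$ and $d$ CFFs, used only to confirm that Eqs.(\ref{octet2}) undo Eqs.(\ref{octet}) when the strange contribution is set to zero:

```python
import math

eu, ed = 2.0 / 3.0, -1.0 / 3.0      # u and d quark charges

def meson_cffs(Fu, Fd):
    """SU(3) octet combinations, Eqs. (octet), with the strange CFF set to zero."""
    Fpi0 = (eu * Fu - ed * Fd) / math.sqrt(2)
    Feta = (eu * Fu + ed * Fd) / math.sqrt(6)
    return Fpi0, Feta

def flavor_separate(Fpi0, Feta):
    """Approximate inversion, Eqs. (octet2): returns (e_u F^u, -e_d F^d)."""
    return ((Fpi0 + math.sqrt(3) * Feta) / math.sqrt(2),
            (Fpi0 - math.sqrt(3) * Feta) / math.sqrt(2))
```

With a nonzero strange CFF the inversion would acquire a contamination proportional to $e_s {\cal F}_T^s$, which is why the separation is only approximate.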
\section{Cross Sections and Asymmetries}
\label{sec:3}
The CFFs, and through them the GPDs evaluated in the previous Section, can be extracted from the cross section terms for exclusive meson electroproduction which, using the notation of Ref.\cite{Bacchetta_review} (based on Ref.\cite{DieSap}), is written as,
\begin{widetext}
\begin{eqnarray}
\label{xs}
\frac{d^4\sigma}{dx_{Bj} dy d\phi dt} & = & \Gamma \left\{ \left[ F_{UU,T} + \epsilon F_{UU,L}+ \epsilon \cos 2\phi F_{UU}^{\cos 2 \phi}
+ \sqrt{\epsilon(\epsilon+1)} \cos \phi F_{UU}^{\cos \phi} +
h \, \sqrt{\epsilon(1-\epsilon)} \, \sin \phi F_{LU}^{\sin \phi} \right] \right. \nonumber \\
& + & S_{||} \left[ \sqrt{\epsilon(\epsilon+1)} \sin \phi F_{UL}^{\sin \phi} + \epsilon \sin 2 \phi F_{UL}^{\sin 2 \phi} + h \,
\left( \sqrt{1 - \epsilon^2} \, F_{LL} + \sqrt{\epsilon(1-\epsilon)} \, \cos \phi \, F_{LL}^{\cos \phi} \right) \right] \nonumber \\
& - & S_\perp \left[ \sin(\phi-\phi_S) \left(F_{UT,T}^{\sin(\phi-\phi_S)} + \epsilon F_{UT,L}^{\sin(\phi-\phi_S)} \right) +
\frac{\epsilon}{2} \left( \sin(\phi+\phi_S) F_{UT}^{\sin(\phi+\phi_S)} + \sin(3\phi-\phi_S) F_{UT}^{\sin(3\phi-\phi_S)} \right) \right. \nonumber \\
& + & \left. \sqrt{\epsilon(1+\epsilon)} \left( \sin\phi_S F_{UT}^{\sin \phi_S} + \sin(2\phi-\phi_S) F_{UT}^{\sin(2\phi-\phi_S)} \right) \right] \nonumber \\
& + & \left. S_\perp h \left[ \sqrt{1-\epsilon^2} \cos(\phi-\phi_S) F_{LT}^{\cos(\phi-\phi_S)} +
\sqrt{\epsilon(1-\epsilon)} \left(\cos \phi_S F_{LT}^{\cos\phi_S} + \cos(2\phi-\phi_S) F_{LT}^{ \cos(2\phi-\phi_S)} \right) \right] \right\} \nonumber \\
\end{eqnarray}
\end{widetext}
where
$S_{||}$ and ${\bf S}_\perp$ refer to lab frame target polarization parallel and perpendicular to the virtual photon direction, $h$ is the lepton beam helicity, $\phi$ is the azimuthal angle between the lepton plane and the hadron scattering plane, $\phi_S$ is the azimuthal angle of the transverse spin vector ${\bf S}_\perp$ and $t$ is the square of the invariant momentum transfer between the initial and final nucleon.
The photon polarization parameter $\epsilon$, the ratio of longitudinal to transverse photon flux, can be written in terms of invariants as,
\begin{equation}
\epsilon^{-1} = 1 + 2\left( 1+\frac{\nu^2}{Q^2} \right)\left(4 \dfrac{\nu^2}{Q^2} \dfrac{1-y}{y^2}-1\right)^{-1}.
\end{equation}
\noindent
$\Gamma$ is, up to a kinematic factor, given by,
\begin{equation}
\label{Gamma}
\Gamma = \frac{\alpha^2 \, y^2 (1-x_{Bj})}{2\pi x_{Bj}(1-\epsilon)Q^2}.
\end{equation}
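As a concrete check of the kinematic factors, the sketch below (illustrative; the relation $\nu = Q^2/(2 M x_{Bj})$ and all numerical values are assumptions) evaluates $\epsilon$ and $\Gamma$ at Jefferson Lab-like kinematics.

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant

def epsilon(x_bj, y, Q2, M_p=0.938):
    """Photon polarization parameter, using nu = Q^2 / (2 M x_Bj)."""
    nu = Q2 / (2.0 * M_p * x_bj)
    r = nu**2 / Q2                       # nu^2 / Q^2
    inv_eps = 1.0 + 2.0 * (1.0 + r) / (4.0 * r * (1.0 - y) / y**2 - 1.0)
    return 1.0 / inv_eps

def gamma_factor(x_bj, y, Q2):
    """Flux-like prefactor Gamma of Eq. (Gamma)."""
    eps = epsilon(x_bj, y, Q2)
    return ALPHA**2 * y**2 * (1.0 - x_bj) / (2.0 * math.pi * x_bj * (1.0 - eps) * Q2)
```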
In Ref.\cite{GGL_pi0} we analyzed the unpolarized and longitudinally polarized contributions to the cross section, {\it i.e.} the various modulations of the $F_{UU}$ type and of the $F_{UL}$, $F_{LL}$ types, respectively.
We showed that:
\noindent {\it i)} the polarized cross section terms, $F_{LU}, F_{UL}^{\sin \phi}, F_{LL}^{\cos \phi}$ are dominated by chiral even GPDs through the contribution of longitudinal photon polarization,
while the terms $F_{UL}^{\sin 2\phi}, F_{LL}$ are purely transverse, and therefore useful for the extraction of chiral odd GPDs;
\noindent {\it ii)} the cross section components with a large chiral odd contribution are dominated by the GPDs $\widetilde{H}_T$, and $E_T$, and $\widetilde{E}_T$, while $H_T$'s contribution appears only in $F_{LL}$, and it can be disentangled from the other GPDs only at very low $t$ (Figures 15 and 16 in Ref.\cite{GGL_pi0}).
For the single transversely polarized target in Eq.(\ref{xs}), however, there appear terms which are proportional to $H_T$ and which will therefore enable us to extract the tensor charge from data. From Eq.(\ref{xs}) we see that there are six structure functions for the unpolarized beam and single transversely polarized target,
\begin{eqnarray}
F_{UT,T}^{\sin(\phi-\phi_S)} &=& \Im m \, F_{11}^{+-} = \Im m \, \sum_{\Lambda'} f_{10}^{+ \Lambda' *} f_{10}^{- \Lambda'}
= \, \Im m \left[ f_{10}^{++*} f_{10}^{-+} + f_{10}^{+-*} f_{10}^{--} \right] \\
F_{UT,L}^{\sin(\phi-\phi_S)} &=& \Im m \, F_{00}^{+-} = \Im m \, \sum_{\Lambda'} f_{00}^{+ \Lambda' *} f_{00}^{- \Lambda'}
= \, \Im m \left[ f_{00}^{++*} f_{00}^{-+} + f_{00}^{+-*} f_{00}^{--} \right] \\
F_{UT}^{\sin(\phi+\phi_S)} &=& \Im m \, F_{1-1}^{+-} = \Im m \, \sum_{\Lambda'} f_{10}^{+ \Lambda' *} f_{-10}^{- \Lambda'}
= \, \Im m \left[ -f_{10}^{++*} f_{10}^{+-} + f_{10}^{+-*} f_{10}^{++} \right] \\
F_{UT}^{\sin(3\phi-\phi_S)} &=& \Im m \, F_{1-1}^{-+} = \Im m \, \sum_{\Lambda'} f_{10}^{- \Lambda' *} f_{-10}^{+ \Lambda'}
= \, \Im m \left[ f_{10}^{-+*} f_{10}^{--} - f_{10}^{--*} f_{10}^{-+} \right] \\
F_{UT}^{\sin\phi_S} &=& \Im m \, F_{10}^{+-} = \Im m \, \sum_{\Lambda'} f_{10}^{+ \Lambda' *} f_{00}^{- \Lambda'}
= \, \Im m \left[ f_{10}^{++*} f_{00}^{-+} + f_{10}^{+-*} f_{00}^{--} \right] \\
F_{UT}^{\sin(2\phi-\phi_S)} &=& \Im m \, F_{10}^{-+} = \Im m \, \sum_{\Lambda'} f_{10}^{- \Lambda' *} f_{00}^{+ \Lambda'}
= \, \Im m \left[ f_{10}^{-+*} f_{00}^{++} + f_{10}^{--*} f_{00}^{+-} \right],
\label{UT}
\end{eqnarray}
and three for the longitudinally polarized lepton and transversely polarized target,
\begin{eqnarray}
F_{LT}^{\cos(\phi-\phi_S)} &=& \Re e \, F_{11}^{+-} = \Re e \, \, \sum_{\Lambda'} f_{10}^{+ \Lambda' *} f_{10}^{- \Lambda'}
= \, \Re e \left[ f_{10}^{++*} f_{10}^{-+} + f_{10}^{+-*} f_{10}^{--} \right]
\label{LT} \\
F_{LT}^{\cos\phi_S} &=& \Re e \, F_{10}^{+-} = \Re e \, \, \sum_{\Lambda'} f_{10}^{+ \Lambda' *} f_{00}^{- \Lambda'}
= \, \Re e \left[ f_{10}^{++*} f_{00}^{-+} + f_{10}^{+-*} f_{00}^{--} \right] \\
F_{LT}^{\cos(2\phi-\phi_S)} &=& \Re e \, F_{10}^{-+} = \Re e \, \, \sum_{\Lambda'} f_{10}^{- \Lambda' *} f_{00}^{+ \Lambda'}
= \, \Re e \left[ f_{10}^{-+*} f_{00}^{++} + f_{10}^{--*} f_{00}^{+-} \right] .
\end{eqnarray}
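The pairing of real and imaginary parts of the same bilinear combination can be made explicit in code. In the sketch below (illustrative amplitude values; the dictionary layout is ours) a single complex combination $F_{11}^{+-}$ yields both $F_{UT,T}^{\sin(\phi-\phi_S)}$ (imaginary part) and $F_{LT}^{\cos(\phi-\phi_S)}$ (real part).

```python
# f10[(Lambda, Lambda')] stands for f_{10}^{Lambda Lambda'}; illustrative values
f10 = {('+', '+'): 0.8 - 0.2j, ('+', '-'): 0.3 + 0.1j,
       ('-', '+'): 0.05 + 0.02j, ('-', '-'): 0.6 - 0.3j}

def F11_pm(f10):
    """Complex bilinear F_11^{+-} = sum_{Lambda'} f_10^{+Lambda'*} f_10^{-Lambda'}."""
    return sum(f10[('+', L)].conjugate() * f10[('-', L)] for L in ('+', '-'))

F_UT_T_sin = F11_pm(f10).imag   # F_{UT,T}^{sin(phi-phi_S)}
F_LT_cos = F11_pm(f10).real     # F_{LT}^{cos(phi-phi_S)}
```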
Notice that: {\it i)} when the nucleon is polarized along the photon direction there will be no asymmetry because of Parity conservation; {\it ii)} when the nucleon is polarized along the incoming lepton direction there will be a component of nucleon polarization transverse to the photon direction as well as transverse to the nucleon plane. This produces the modulations involving both the azimuthal angle $\phi_S$ of the transverse spin vector, and the azimuthal angle $\phi$.
Below we list the asymmetries that one can form involving transverse photon polarization only, so that they are most sensitive to the tensor charge,
\begin{eqnarray}
A_{UT}^{\sin(\phi-\phi_S)} & = & \frac{- F_{UT,T}^{\sin(\phi-\phi_S)}}{F_{UU,T} +\epsilon \, F_{UU,L} } =
- \, \frac{(\Re e f_{10}^{++} \Im m f_{10}^{-+} - \Im m f_{10}^{++} \Re e f_{10}^{-+}) + ( \Re e {\bf f_{10}^{+-}} \Im m f_{10}^{--} - \Im m \, {\bf f_{10}^{+-}} \Re e f_{10}^{--}) }{d\sigma / dt }
\label{A_UTsinphim} \\
A_{UT}^{\sin(\phi+\phi_S)} & = &-\frac{\epsilon}{2} \frac{F_{UT}^{\sin(\phi+\phi_S)}}{F_{UU,T} +\epsilon \, F_{UU,L} } = - \epsilon \, \frac{ \Re e {\bf f_{10}^{+-}} \, \Im m f_{10}^{++} - \Re e f_{10}^{++} \, \Im m {\bf f_{10}^{+-}}}{d \sigma /dt}
\label{A_UTsinphip} \\
A_{LT}^{\cos(\phi-\phi_S)} & = & \sqrt{1-\epsilon^2} \, \frac{F_{LT}^{\cos(\phi-\phi_S)}}{F_{UU,T} +\epsilon \, F_{UU,L} } \nonumber \\
& = & \sqrt{1-\epsilon^2} \, \frac{(\Re e f_{10}^{++} \Re e f_{10}^{-+} + \Im m f_{10}^{++} \Im m f_{10}^{-+}) + ( \Re e {\bf f_{10}^{+-}} \Re e f_{10}^{--} + \Im m \, {\bf f_{10}^{+-}} \Im m f_{10}^{--}) }{d \sigma/dt}
\label{A_LTcosphi}
\end{eqnarray}
where we have highlighted in boldface the amplitude parts which are sensitive to transversity; the unpolarized cross section is
\begin{eqnarray}
\label{dsigT}
\frac{d \sigma }{d t} = F_{UU,T} + \epsilon F_{UU,L}
\end{eqnarray}
Notice that the asymmetry $A_{UT}^{\sin(3\phi-\phi_S)}$,
\begin{eqnarray}
A_{UT}^{\sin(3\phi-\phi_S)} & = & - \epsilon \, \frac{F_{UT}^{\sin(3\phi-\phi_S)}}{F_{UU,T} +\epsilon \, F_{UU,L} } = - \epsilon \, \frac{2 \Re e f_{10}^{--} \, \Im m f_{10}^{-+} }{d \sigma/dt},
\end{eqnarray}
also involves transverse photon polarization only, but it does not involve transversity; it is predicted to be small, being dominated by the double flip amplitude $f_{10}^{-+}$.
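The compact and expanded forms of Eq.(\ref{A_UTsinphim}) can be checked against each other numerically (illustrative amplitudes; the denominator is a stand-in for $d\sigma/dt$):

```python
f = {('+', '+'): 0.8 - 0.2j, ('+', '-'): 0.3 + 0.1j,
     ('-', '+'): 0.05 + 0.02j, ('-', '-'): 0.6 - 0.3j}   # f_{10}^{Lambda Lambda'}
dsig = 1.7   # stand-in for F_{UU,T} + eps * F_{UU,L}

# compact form: A = -Im[f++* f-+  +  f+-* f--] / (dsigma/dt)
A_compact = -(f[('+', '+')].conjugate() * f[('-', '+')]
              + f[('+', '-')].conjugate() * f[('-', '-')]).imag / dsig

def im_star(a, b):
    """Im[a* b] = Re(a) Im(b) - Im(a) Re(b)."""
    return a.real * b.imag - a.imag * b.real

# expanded form, as written out in Eq. (A_UTsinphim)
A_expanded = -(im_star(f[('+', '+')], f[('-', '+')])
               + im_star(f[('+', '-')], f[('-', '-')])) / dsig
```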
\subsection{Results}
\label{results:sec}
We now illustrate the various steps in the procedure for extracting chiral odd GPDs, and their forward limits, in particular transversity, and its integrated value, the tensor charge.
We start by showing in Figure \ref{pi0aut1}, the behavior of $A_{UT}^{\sin(\phi-\phi_S)}$, Eq.(\ref{A_UTsinphim}), $A_{UT}^{\sin(\phi+\phi_S)}$, Eq.(\ref{A_UTsinphip}), and $A_{LT}^{\cos(\phi-\phi_S)}$, Eq.(\ref{A_LTcosphi}), at kinematics which are attainable at Jefferson Lab, namely $x_{Bj}=0.2$, $Q^2=1.5$ GeV$^2$. The left panel shows $\gamma^* p \rightarrow \pi^0 p'$, and the right panel shows $\gamma^* p \rightarrow \eta p'$. The contributions from the terms $\Im m f^{+-}_{10}$ and $\Re e f^{+-}_{10}$ are indicated in the figure by long dashed and short dashed curves, respectively. The other contributions, which are not sensitive to the tensor charge but are sensitive to $\kappa^T_q$, are indicated by the dotted curve. From the figure we deduce that $A_{UT}^{\sin(\phi-\phi_S)}$ is the best quantity from which to extract the transversity GPD, $H_T$, while the two contributions from the real and imaginary parts of the amplitudes nearly cancel each other in $A_{UT}^{\sin(\phi+\phi_S)}$; $A_{LT}^{\cos(\phi-\phi_S)}$, although it is predicted to be large, is dominated by $f_{10}^{++}$, and is therefore sensitive to $\kappa^T_q$. We also observe a clear difference between the $\eta$ and $\pi^0$ curves, which can be examined in more detail by considering ratios of the observables for the two processes, as we show in what follows.
\begin{figure}
\includegraphics[width=8.cm]{pi0_autkin1.pdf}
\includegraphics[width=8.cm]{eta_autkin1.pdf}
\caption{(Color online) The asymmetries $A_{UT}^{\sin(\phi-\phi_S)}$, Eq.(\ref{A_UTsinphim}), $A_{UT}^{\sin(\phi+\phi_S)}$, Eq.(\ref{A_UTsinphip}), and $A_{LT}^{\cos(\phi-\phi_S)}$, Eq.(\ref{A_LTcosphi}) plotted vs. $-t$, at $x_{Bj}=0.2$, $Q^2=1.5$ GeV$^2$. In the left panel we show $\gamma^* p \rightarrow \pi^0 p'$; in the right panel we show $\gamma^* p \rightarrow \eta p'$. The long dashed and short dashed curves show the contributions proportional to the amplitudes, $\Im m f^{+-}_{10}$ and $\Re e f^{+-}_{10}$, which contain the GPD $H_T$ (Eq.(\ref{helamps_gpd2})), and are therefore sensitive to the tensor charge. The dotted curves show the sum of contributions from the remaining amplitudes, and the full curves show the total contribution. }
\label{pi0aut1}
\end{figure}
In Figure \ref{pi0eta:fig} we show the ratio of the unpolarized transverse cross section, $F_{UU,T}$ for $\eta$ over $\pi^0$. This type of plot can be considered a first step towards flavor separation, although many important details should be considered. First of all, the transverse cross section receives contributions from all four transverse helicity amplitudes squared \cite{GGL_pi0}
\[ F_{UU,T} \propto \mid f_{10}^{++} \mid^2 + \mid f_{10}^{+-} \mid^2 + \mid f_{10}^{-+} \mid^2 + \mid f_{10}^{--} \mid^2. \]
However, the dominant terms are $f_{10}^{++} \propto \overline{\cal E}_T$, and $f_{10}^{+-} \propto {\cal H}_T$. So each term in the ratio is given by an interplay of the
two GPDs which are related to the tensor anomalous magnetic moment and to the tensor charge, respectively. For each ($x_{Bj}$, $Q^2$) bin, the $H_T$ term dominates at low $-t$, while the $\overline{E}_T$ term dominates at larger values of $-t$. Now, as one can see from Eqs.(\ref{octet}) the ratio would be equal to $1/3$ in the absence of $d$ quark contributions.
Therefore, the behavior of the ratio at low $-t$ reflects the sign and magnitude of the $d$ quark contribution to the transversity GPD, $H_T$, while at larger $-t$ it reflects the behavior of the $d$ quarks in $\overline{E}_T$.
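The limiting value $1/3$, and its sensitivity to a negative $d$-quark component, can be verified directly (illustrative sketch; a single dominant amplitude per process is assumed and strangeness is neglected):

```python
import math

e_u, e_d = 2.0 / 3.0, -1.0 / 3.0

def ratio_eta_pi0(Fu, Fd):
    """|F^eta|^2 / |F^pi0|^2 from Eqs. (octet), as a proxy for the
    F_{UU,T} ratio when one helicity amplitude dominates."""
    F_pi0 = (e_u * Fu - e_d * Fd) / math.sqrt(2.0)
    F_eta = (e_u * Fu + e_d * Fd) / math.sqrt(6.0)
    return abs(F_eta) ** 2 / abs(F_pi0) ** 2
```

With no $d$-quark contribution the ratio is exactly $1/3$, and a small negative $d$-quark CFF pushes it above $1/3$, as discussed in the text.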
\begin{figure}
\includegraphics[width=9.cm]{pi0_eta.pdf}
\caption{(Color online) The ratio of the unpolarized transverse cross section, $F_{UU,T}$ for $\eta$ over $\pi^0$ plotted vs. $-t$ at $x_{Bj}=0.2$ and $Q^2=1.5$ GeV$^2$.
Because, as explained in the text, the ratio is given by an interplay of the two GPDs $\overline{E}_T$ and $H_T$, in the given $x_{Bj}$ and $Q^2$ bin, we plot
both the total contribution (full line), and the ratio obtained omitting the amplitude $f_{10}^{+-}$ which is dominated by $H_T$ at low $-t$ (dot-dashed line).
Since the ratio would be equal to $1/3$ in the absence of $d$ quark contributions (Eqs.(\ref{octet}), dashed line),
its behavior shows the presence of a small but negative $d$ quark component in both $H_T$ and $\overline{E}_T$.
}
\label{pi0eta:fig}
\end{figure}
We further clarify this point by showing the ratio without the contribution of $f_{10}^{+-} (H_T)$: one can see a clear difference in the behavior of the two curves at low $-t$. The fact that the curves lie higher than $1/3$ indicates that both $\overline{E}_T^d$ and $H_T^d$ are negative. As we show later on, the integral of $\overline{E}_T^d$ for $t=0$, $\kappa^T_d$, is positive (see also Fig.\ref{gpdetbar:fig}); however, in the off-forward case $\overline{E}_T^d$ can oscillate, being negative at $x=\xi$. As a consequence of this behavior, if we consider the integral of $\overline{E}_T^d$ over $x$, {\it i.e.} the quark tensor anomalous magnetic moment, this will be reduced due to the ranges of negative strength. Whether this particular picture (and setting of parameters) will be confirmed or not by the data is not of the essence here. What is important is that through combined $\eta$ and $\pi^0$ exclusive electroproduction measurements, and by selecting the appropriate observables as indicated in both this and our previous analysis \cite{GGL_pi0}, one will be able to unravel the partonic structure underlying both the tensor charge and magnetic moment.
\begin{figure}
\includegraphics[width=8.5cm]{kapu_vs_kapd.pdf}
\caption{(Color online) The tensor anomalous magnetic moment for the $d$ quark, $\kappa^T_d$, plotted vs. that for the $u$ quark, $\kappa^T_u$, extracted in our analysis.
As for the tensor charge, the error bars on our extraction derive from the extension of our parametrization to the chiral odd sector, {\it i.e.} they are obtained propagating the errors on the chiral even GPDs parameters, as explained in the text.
Also shown are the values obtained in selected recent models \cite{Waka,PasBof,Ledwig}, and in lattice QCD \cite{lattice}. All values have been evolved to $Q^2=0.8$ GeV$^2$.}
\label{kapu_vs_kapd:fig}
\end{figure}
The tensor anomalous magnetic moment is shown in Figure \ref{kapu_vs_kapd:fig} where the $d$-quark value, $\kappa^T_d$, is plotted vs. the value for the $u$ quark, $\kappa^T_u$ (Eq.(\ref{kappaT})). These values were extracted in our analysis simultaneously with the tensor charge. The error bars on our extraction derive from the extension of our parametrization to the chiral odd sector, {\it i.e.} they are obtained propagating the errors on the chiral even GPDs parameters, as explained in the previous Section. To our knowledge, no other determination making use of experimental data has been given so far.
Together with our value, we plot the values obtained in selected recent models \cite{Waka,PasBof,Ledwig}, and the lattice QCD determination from Ref.\cite{lattice}. All values were calculated by evolving to $Q^2=0.8$ GeV$^2$, noting that all chiral odd GPDs have the same anomalous dimensions as transversity \cite{Ledwig}. The values found in \cite{Waka,PasBof} will produce a ratio $\eta/\pi^0$ much higher than the one plotted in Fig.\ref{pi0eta:fig}. The $\eta/\pi^0$ type of measurements will allow us, therefore, to distinguish among models.
One can see that $\kappa^T_q$ is much less well determined than the tensor charge owing to the scarcity of measurements so far. As we pointed out in Ref.\cite{GGL_pi0}, measurements with a longitudinally polarized target of both $\pi^0$ and $\eta$ exclusive electroproduction will allow us to extract the GPD $\overline{E}_T$, and consequently the tensor anomalous magnetic moment.
In Figure \ref{delu_vs_deld:fig} we show along with our results, a compilation of the tensor charge values from recent data analyses besides our suggested one from DV$\pi^0$P and DV$\eta$P, namely the Torino group extraction \cite{Anselmino} obtained combining data on polarized SIDIS single hadron production \cite{HERMES,COMPASS}, and data on dihadron production from $e^+e^-$ annihilation \cite{Belle}; the Pavia group extraction \cite{Courtoy} obtained from dihadron production in a collinear framework, {\it i.e.} combining the ($k_T$ integrated) transversity distribution, $h_1$ with dihadron fragmentation functions; and finally, what can be considered a pioneering extraction using a combination of vector and axial vector meson couplings to the nucleon which are constrained from data on the mesons decay constants and the average parton transverse momenta \cite{GamGol}. For comparison we show also the most recent lattice results obtained for the isovector combination $\delta u - \delta d$ \cite{Engel}, and selected model calculations Refs.\cite{HeJi,Waka,Lorce}. The error bars in our extraction are the uncorrelated errors from the parameters in the chiral even sector.
The values we obtained with the variant of the Reggeized diquark model-based fit used in this paper for both $\delta_q$ and $\kappa_q^T$ are shown in Table \ref{tensor:table}.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
& $u$ & $d$ & $Q^2$ (GeV$^2$) \\
\hline
$\delta_q$ & $0.936 \pm 0.339$ & $-0.130 \pm 0.089$ & 1 \\
\hline
$\delta_q$ & $0.860 \pm 0.248 $ & $-0.119 \pm 0.060 $ & 4 \\
\hline
$\kappa^T_q$ & $3.43 \pm 0.26 $ & $1.37 \pm 0.34 $ & 0.8 \\
\hline
\end{tabular}
\caption{Values of the tensor charge and the tensor anomalous magnetic moment obtained in our analysis.}
\label{tensor:table}
\end{table}
\begin{figure}
\includegraphics[width=8.5cm]{delu_vs_deld.pdf}
\caption{(Color online) Tensor charge values for the $d$ quark, $\delta_d$, plotted vs. the $u$ quark, $\delta_u$, as obtained from our analysis of exclusive deeply virtual processes, and from other experimental extractions existing to date: dihadron electroproduction ($Q^2$= 2 GeV$^2$), Anselmino {\it et al.}, Ref.\cite{Anselmino}, and Bacchetta {\it et al.}, Ref.\cite{Courtoy} ($Q^2$= 1 GeV$^2$), and from a model describing the tensor charge through vector and axial vector meson couplings, Gamberg and Goldstein, Ref.\cite{GamGol} ($Q^2$= 1 GeV$^2$). The thin band delimited by the dotted curves is the recent lattice QCD result for the isovector component \cite{Engel} ($Q^2$= 4 GeV$^2$). For comparison, the tensor charges obtained in different models are also shown (Wakamatsu, CQSM, $Q^2$=0.8 GeV$^2$, Ref.\cite{Waka}; Lorc\'{e} {\it et al.}, CQSM, Ref.\cite{Lorce}, $Q^2$= 1 GeV$^2$; and He and Ji, Bag Model, Ref.\protect\cite{HeJi}). For our model we also show the effect of PQCD evolution from $Q^2$=1 GeV$^2$ to $Q^2$= 4 GeV$^2$.}
\label{delu_vs_deld:fig}
\end{figure}
\begin{figure}
\includegraphics[width=8.cm]{pi0_aut_tensor.pdf}
\caption{(Color online) The asymmetry $A_{UT}^{\sin(\phi-\phi_S)}$, Eq.(\ref{A_UTsinphim}), plotted vs. $-t$, at $x_{Bj}=0.2$, $Q^2=1.5$ for the $\gamma^* p \rightarrow \pi^0 p'$ reaction. The error band was obtained by varying the value of the $u$-quark tensor charge, $\delta_u$, by $\pm 0.08$. The dot-dashed curve corresponds to $\delta_u = 1.4$, and the dashed curve corresponds to $\delta_u=0.6$. The value of $\delta_d$ was kept fixed at $-0.12$. The graph shows the sensitivity of the asymmetry to variations of the tensor charge, or the precision that is needed in measurements of this quantity in order to reduce the size of the errors from the ones reported in Fig.\ref{delu_vs_deld:fig}.}
\label{AUT_tensor}
\end{figure}
Several remarks are in order.
First of all, the tensor charge is subject to a rather rapid Perturbative QCD evolution (see \cite{Waka} and references therein), as can be seen from the shift (decrease) in values from $Q^2$=1 GeV$^2$ to $Q^2$= 4 GeV$^2$ shown for our extracted values (all the other evaluations shown in the figure are in the $Q^2$ range: $0.8 - 2$ GeV$^2$).
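A minimal sketch of this evolution, assuming the standard LO non-singlet result for the tensor charge (exponent $4/27$ for $n_f = 3$) and a one-loop coupling with an illustrative value of $\Lambda_{QCD}$; the specific numbers are assumptions, not our fit:

```python
import math

LAMBDA2 = 0.25 ** 2   # Lambda_QCD^2 in GeV^2, illustrative
N_F = 3
BETA0 = 11.0 - 2.0 * N_F / 3.0

def alpha_s(Q2):
    """One-loop running coupling."""
    return 4.0 * math.pi / (BETA0 * math.log(Q2 / LAMBDA2))

def evolve_tensor_charge(dq0, Q2_0, Q2):
    """LO evolution: delta q(Q^2) = delta q(Q0^2) [alpha_s(Q^2)/alpha_s(Q0^2)]^{4/27}."""
    return dq0 * (alpha_s(Q2) / alpha_s(Q2_0)) ** (4.0 / 27.0)
```

Running $\delta_u$ from $Q^2 = 1$ GeV$^2$ to $4$ GeV$^2$ produces the few-percent decrease visible in Fig.\ref{delu_vs_deld:fig}.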
\begin{figure}
\includegraphics[width=8.5cm]{transv.pdf}
\caption{(Color online) Transversity, $h_1^q$ plotted vs. $x$ at $Q^2$= 1 GeV$^2$, for the $u$ quarks (upper panel) and for the $d$ quark (lower panel). Besides our analysis, the most recent extractions from experimental data were plotted, namely the analysis of the Pavia group extraction \cite{Courtoy} obtained from dihadron production in a collinear framework, and the Torino group extraction \cite{Anselmino} obtained combining data on polarized SIDIS single hadron production \cite{HERMES,COMPASS}, and data on dihadron production from $e^+e^-$ annihilation \cite{Belle} (see the corresponding tensor charge values in Fig.\ref{delu_vs_deld:fig}).}
\label{transv:fig}
\end{figure}
Most importantly, the graph shows how the low $x$ tail of the transversity distribution plays a fundamental role in determining the value of the charge. This can be seen by comparing the values obtained for the $u$ quarks ($d$-quark values tend to be smaller and the effects are less visible). In fact the values from Ref.\cite{Courtoy} were obtained by calculating the charge in the $x_{Bj}$ range of the fitted data ($x_{Bj} \gtrsim 0.06$), and by extrapolating to $x=10^{-5}$ (open symbol). The extrapolated result tends to agree better with our extraction. Note that our parametrization explicitly includes Regge behavior, therefore our value of the charge tends to be larger. Our Regge behavior, in turn, is constrained by the behavior of the nucleon form factors \cite{newFF}. This is why measuring the tensor charge through GPDs provides a definite advantage over all of the inclusive measurements where extrapolation procedures in the low $x$ regime need to be devised.
In Figure \ref{AUT_tensor} we demonstrate the sensitivity of pseudoscalar meson production data to the values of the tensor charge. For illustration, we show only the $\pi^0$ asymmetry, and we fix $\delta_d = -0.12$. The figure shows curves for varying $\delta_u$ in the range $0.6 -1.4$, whereas the error band was obtained by varying the tensor charge by $\pm 0.08$.
Additional information on $\delta_d$ can be obtained by comparing $\pi^0$ and $\eta$ measurements. The purpose of the calculations represented in this figure is to gain information on the kind of precision that will be needed in order to reduce the error on the tensor charge with respect to the values from the analyses reported in Fig.\ref{delu_vs_deld:fig}. From the figure we conclude that an accurate extraction of tensor charge and its flavor dependence will be possible at Jefferson Lab, 12 GeV \cite{AvaKim}.
The various transversity distributions mentioned in our discussion above are shown in Figure \ref{transv:fig}. These curves show that the tensor charge for the $u$ quark will be larger than the results from the two other analyses, while the $d$-quark tensor charge will be smaller, again showing the importance of the small $x$ behavior in our Reggeized scheme.
We conclude this section by noting that the GPD $\widetilde{E}_T$ enters the transverse non flip amplitudes, $f_{10}^{++(--)}$ and it can therefore be extracted once more accurate data are available. We postpone the discussion of this observable to future work.
\section{Conclusions and Outlook}
\label{sec:conclusions}
In conclusion, we reiterate that transverse asymmetries allow us to single out the tensor charge best. Namely, the asymmetries are sensitive to the GPD $H_T$, in contrast to the longitudinal target polarization asymmetries reported in detail in Ref.\cite{GGL_pi0} that are mostly sensitive to the GPDs $\widetilde{H}_T$, $E_T$, and $\widetilde{E}_T$, and to the integral of $2\widetilde{H}_T$ + $E_T$, or the tensor anomalous magnetic moment. A combined analysis of $\pi^0$ and $\eta$ data will allow us to perform a flavor separation of both the tensor charges and the tensor anomalous magnetic moments.
It should be noticed, however, that although the agreement of the tensor charge values reported in this paper with the very precise recent lattice results is excellent,
in our analysis the tensor charge was obtained indirectly, by using constraints from Parity relations that allow us to connect to the somewhat better constrained GPDs in the chiral even sector. The extraction we proposed can therefore be considered model dependent. Nevertheless, at this stage our study provides, on the one hand, a set of very much needed estimates and constraints on the size of the various, so far largely unexplored, chiral odd GPDs. On the other hand, it opens the way to an upcoming analysis that will be performed by fitting our functional forms directly to the combined exclusive $\eta$ and $\pi^0$ electroproduction data, once they become available.
Our ultimate goal is to determine the chiral odd GPDs from a global analysis on its own merit, using all of the pseudoscalar meson production data. Hence, this paper can be considered to be a step in this direction in that it provides
a framework with which to gauge the various contributions to all cross sections and asymmetries.
Most importantly, the suggested analysis will allow us to substantially reduce the errors on the flavor dependent $\delta_q$, and to perform, for the first time, an experimental extraction of $\kappa^T_q$.
We complete our discussion by acknowledging similar work in this direction, {\it i.e.} Refs.\cite{GolKro,Kro_new}, and the alternative method proposed in Refs.\cite{Pire1,Pire2} to access chiral odd GPDs through the electroproduction of two vector mesons.
\acknowledgments
We thank the Hall B collaboration at Jefferson Lab, in particular Harut Avakian, Francois Xavier Girod, Andrey Kim, Valery Kubarovsky and Paul Stoler for useful discussions and suggestions. We also thank Aurore Courtoy and Alexei Prokudin for discussions and for providing the calculated transversity functions from their respective collaborations' papers. This work was supported by the U.S. Department
of Energy grant DE-FG02-01ER4120.
\section{Introduction}
Let $X_A$ denote the adjacency matrix of a random $(d_b, d_w)$-biregular graph with off-diagonal blocks $A, A^{\ast}$. Here, we assume $A$ is a matrix of size $M \times N$ with $M \geq N$. We define the normalized empirical spectral distributions of $d_w^{-1} A^{\ast}A$ and $d_w^{-1/2} X_A$ to be the following macroscopic random point masses:
\begin{align}
\mu_{A^{\ast}A} \ &= \ \frac{1}{N} \sum_{\lambda \in \sigma(A^{\ast}A)} \ \delta_{d_w^{-1} \lambda}(x), \\
\mu_{X_A} \ &= \ \frac{1}{M+N} \sum_{\lambda \in \sigma(X_A)} \ \delta_{d_w^{-1/2} \lambda}(x).
\end{align}
It is known (see \cite{BS}) that the empirical spectral distribution of the normalized covariance matrix converges almost surely (in the limit $M, N \to \infty$ and $d_b, d_w \to \infty$ at suitable rates) to the Marchenko-Pastur law with parameter $\gamma := N/M$ given by the following density function:
\begin{align}
\varrho_{\infty}(x) \ \d x \ := \ \varrho_{MP}(x) \ \d x \ = \ \frac{\sqrt{(\lambda_+ - x)(x - \lambda_-)}}{2 \pi \gamma x} \mathbf{1}_{x \in [\lambda_{-}, \lambda_{+}]} \d x, \label{eq:MPlaw}
\end{align}
where we define $\lambda_{\pm} = (1 \pm \sqrt{\gamma})^2$. As noted in \cite{DJ}, this implies that the empirical spectral distribution of the normalized adjacency matrix converges almost surely to a linearization of the Marchenko-Pastur law given by the following density function:
\begin{align}
\varrho(E) \ = \ \begin{cases}
\frac{\gamma}{(1 + \gamma) \pi |E|} \sqrt{(\lambda_{+} - E^2)(E^2 - \lambda_{-})} & E^2 \in [\lambda_{-}, \lambda_{+}] \\
0 & E^2 \not\in [\lambda_{-}, \lambda_{+}]
\end{cases}.
\end{align}
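As a quick numerical sanity check, the Marchenko-Pastur density in Eq.(\ref{eq:MPlaw}) is a probability density for $\gamma \leq 1$; the sketch below (plain midpoint rule, illustrative only) verifies that it integrates to one.

```python
import math

def rho_mp(x, gamma):
    """Marchenko-Pastur density with parameter gamma = N/M <= 1."""
    lp = (1.0 + math.sqrt(gamma)) ** 2
    lm = (1.0 - math.sqrt(gamma)) ** 2
    if not (lm < x < lp):
        return 0.0
    return math.sqrt((lp - x) * (x - lm)) / (2.0 * math.pi * gamma * x)

def midpoint_integral(f, a, b, n=100000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

gamma = 0.5
lm = (1.0 - math.sqrt(gamma)) ** 2
lp = (1.0 + math.sqrt(gamma)) ** 2
total = midpoint_integral(lambda x: rho_mp(x, gamma), lm, lp)
```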
We briefly remark that in the regime $M = N$, the linearized Marchenko-Pastur density agrees exactly with the Wigner semicircle density, which is the limiting density for the empirical spectral distribution of random $d$-regular graphs on $N$ vertices in the limit $N, d \to \infty$ at suitable rates. In this regime, coincidence of the empirical spectral distribution and the limiting Wigner semicircle law was shown for intervals at the optimal scale $N^{-1 + \e}$ in \cite{BKY}. This short-scale result is crucial for understanding eigenvalue gap and correlation statistics and showing universality of eigenvalue statistics for random regular graphs compared to the GOE.
Moreover, the short-scale result is a drastic improvement over the order 1 result discussed above for biregular bipartite graphs. For these graphs, convergence of the empirical spectral distribution of $d_w^{-1} A^{\ast}A$ to the Marchenko-Pastur law was shown at scales $N^{-\e}$ for sufficiently small $\e > 0$ in \cite{DJ}. The techniques used there relied primarily on the analysis of trees and ballot sequences. This result, however, is far from the optimal scale and is thus far from sufficient for showing universality of eigenvalue statistics. The aim of this paper is to remedy this problem and obtain convergence at the optimal scale. Similar to \cite{BKY}, we bypass the analysis of trees and ballot sequences with a combinatorial operation on graphs known as \emph{switchings}, which is ubiquitous throughout graph theory. This will help us resample vertices in a random graph and will be crucial in deriving a tractable self-consistent equation for the Green's function of a random biregular bipartite graph. However, in contrast to \cite{DJ}, we aim to prove convergence at the optimal scale for the ensembles of (normalized) covariance matrices and adjacency matrices simultaneously, transferring between the analysis of each ensemble whenever convenient. In \cite{DJ}, the result for adjacency matrices was derived as a consequence of the result for covariance matrices. To the author's knowledge, this idea is original to this paper.
For random regular graphs, universality of local bulk eigenvalue correlation statistics was shown in \cite{BHKY}, which required both the local law from \cite{BKY} as a crucial ingredient as well as analysis of the Dyson Brownian Motion in \cite{LY}. In a similar spirit for random biregular bipartite graphs, we prove universality of bulk eigenvalue statistics in \cite{Y2} using the local law result in this paper and an analysis of Dyson Brownian Motion for covariance matrices in \cite{Y3}. Thus, this paper may be viewed as the first in a series of three papers on random covariance matrices resembling the papers \cite{BKY}, \cite{BHKY}, \cite{LY}, and \cite{LSY} which study Wigner matrices.
Before we proceed with the paper, we remark that, as with random regular graphs and Wigner ensembles, the covariance matrix ensemble arising from biregular bipartite graphs is a canonical example of a covariance matrix ensemble whose entries are nontrivially correlated. A wide class of covariance matrices with independent sample data entries was treated in the papers \cite{A}, \cite{BEKYY}, \cite{BKYY}, \cite{NP2}, and \cite{TV}. In these papers, local laws were derived and universality of local eigenvalue correlation statistics was proven assuming moment conditions. Because of the nontrivial correlation structure of the sample data entries and the lack of control of entry-wise moments, these papers and their methods cannot apply to our setting.
\subsection{Acknowledgements}
The author thanks H.T. Yau and Roland Bauerschmidt for suggesting the problem, referring papers, and answering the author's questions pertaining to random regular graphs. This work was partially funded by a grant from the Harvard College Research Program. This paper was written while the author was a student at Harvard University.
\subsection{Notation}
We adopt the Landau big-Oh notation, and write $a \lesssim b$ to mean $a = O(b)$. We establish the notation $[[a, b]] := [a,b] \cap \Z$. We let $[E]$ denote the underlying vertex set of a graph $E$. For vertices $v, v' \in E$, we let $vv'$ denote the edge in $E$ containing $v$ and $v'$. For a real symmetric matrix $H$, we let $\sigma(H)$ denote its (real) spectrum.
\section{Underlying Model and Main Results}
We briefly introduce the underlying graph model consisting of bipartite graphs on a prescribed vertex set.
\begin{definition} \label{definition:bipartitegraph}
Suppose $\mathscr{V} = \{ 1_b, 2_b, \ldots, M_b, 1_w, \ldots, N_w \}$ is a set of labeled vertices, and suppose $E$ is a simple graph on $\mathscr{V}$. We say the graph $E$ is \emph{bipartite} with respect to the vertex sets $(\mathscr{V}, V_b, V_w)$ if $\mathscr{V}$ admits the following decomposition:
\begin{align}
\mathscr{V} \ = \ \left\{ 1_b, 2_b, \ldots, M_b \right\} \bigcup \left\{ 1_w, 2_w, \ldots, N_w \right\} \ =: \ V_b \cup V_w,
\end{align}
such that for any vertices $v_i, v_j \in V_b$ and $v_k, v_\ell \in V_w$, the edges $v_i v_j$ and $v_k v_\ell$ are not contained in $E$.
Moreover, for fixed integers $d_b, d_w > 0$, we say that a bipartite graph $E$ is $(d_b, d_w)$-\emph{regular} if each $v \in V_b$ has $d_b$ neighbors and if each $w \in V_w$ has $d_w$ neighbors.
\end{definition}
\begin{remark}
For the remainder of this paper, we will refer to $V_b$ as the set of \emph{black} vertices and $V_w$ as the set of \emph{white} vertices. Moreover, we will refer to a $(d_b, d_w)$-regular graph simply as a \emph{biregular} graph, if the parameters $d_b, d_w$ are assumed. In particular, when referring to a biregular graph we assume a bipartite structure. Lastly, the set of $(d_b, d_w)$-regular graphs on the vertex sets $(\mathscr{V}, V_b, V_w)$ will be denoted by $\Omega$, where we suppress the dependence of the parameters $M, N, d_b, d_w$ without the risk of confusion.
\end{remark}
We now record the following identity which follows from counting the total number of edges in a biregular graph $E$:
\begin{align}
Md_b \ = \ N d_w, \label{eq:countedges}
\end{align}
where $M = M_b$ and $N = N_w$. We retain this notation for $M, N$ for the remainder of the paper.
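As a quick illustrative sanity check (not part of the argument), the identity \eqref{eq:countedges} may be verified numerically on the complete bipartite graph $K_{M,N}$, which is $(N, M)$-regular:

```python
import numpy as np

# Illustrative sanity check of the edge-counting identity M d_b = N d_w,
# using the complete bipartite graph K_{M,N}, which is (N, M)-regular.
M, N = 4, 6
A = np.ones((M, N), dtype=int)        # biadjacency matrix of K_{M,N}

row_deg = A.sum(axis=1)               # degrees of the M black vertices
col_deg = A.sum(axis=0)               # degrees of the N white vertices
d_b, d_w = int(row_deg[0]), int(col_deg[0])

assert (row_deg == d_b).all() and (col_deg == d_w).all()   # biregularity
assert M * d_b == N * d_w             # both sides count each edge once
```

Both sides of the identity count the total number of edges, once from the black side and once from the white side.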
\subsection{The Random Matrix Ensemble}
We now introduce a modification of the random matrix ensemble studied in \cite{DJ}, retaining the notation used in the introduction of this paper. We first note the adjacency matrix of a biregular graph is a block matrix with vanishing diagonal, i.e. has the following algebraic form:
\begin{align}
X_A \ = \ \begin{pmatrix} 0 & A \\ A^\ast & 0 \end{pmatrix},
\end{align}
where $A$ is a matrix of size $M \times N$. By the biregular assumption of the underlying graph, the matrix $X_A$ exhibits the following eigenvalue-eigenvector pair:
\begin{align}
\lambda_{\max} \ = \ \sqrt{d_b d_w}, \quad \mathbf{e}_{\max} \ = \ \begin{pmatrix} \mathbf{e}_b \\ \sqrt{\alpha} \mathbf{e}_w \end{pmatrix},
\end{align}
where $\mathbf{e}_b$ and $\mathbf{e}_w$ are constant $\ell^2$-normalized vectors of dimension $M$ and $N$ respectively. By the Perron-Frobenius theorem, the eigenvalue $\lambda_{\max}$ is simple.
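The top eigenpair can be checked numerically on a minimal example; here we take the 6-cycle viewed as a $(2,2)$-regular bipartite graph with $M = N = 3$ (so $\alpha = 1$), an illustrative choice not tied to the paper:

```python
import numpy as np

# The 6-cycle as a (2,2)-regular bipartite graph: biadjacency matrix A.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
d_b, d_w = 2, 2
M, N = A.shape
X_A = np.block([[np.zeros((M, M)), A],
                [A.T, np.zeros((N, N))]])

lam = np.linalg.eigvalsh(X_A)
assert np.isclose(lam.max(), np.sqrt(d_b * d_w))   # lambda_max = sqrt(d_b d_w)

# with alpha = 1, the top eigenvector (e_b, sqrt(alpha) e_w) is constant
e = np.ones(M + N) / np.sqrt(M + N)
assert np.allclose(X_A @ e, np.sqrt(d_b * d_w) * e)
```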
Using ideas from \cite{BKY}, the matrix ensemble of interest is the ensemble $\mathscr{X} = \mathscr{X}(M, N, d_b, d_w)$ of \emph{normalized} adjacency matrices given by
\begin{align}
X \ = \ \begin{pmatrix} 0 & H \\ H^\ast & 0 \end{pmatrix}, \quad H \ = \ d_w^{-1/2} \left( A - d_b \mathbf{e}_{b} \mathbf{e}_{w}^\ast \right). \label{eq:normalizedmatrix}
\end{align}
Because the eigenspace corresponding to $\lambda_{\max}$ is one-dimensional, standard linear algebra implies that upon a normalization factor of $d_w^{-1/2}$, the matrices $X$ and $X_A$ will share the same eigenvalue-eigenvector pairs orthogonal to the eigenspace corresponding to $\lambda_{\max}$. On this maximal eigenspace, the matrix $X$ will exhibit $\mathbf{e}_{\max}$ as an eigenvector corresponding to the eigenvalue $\lambda = 0$. Moreover, as noted in \cite{DJ}, the spectrum of $X$ will be compactly supported in a sense we will shortly make precise.
To complete our discussion of the random matrix ensemble of interest, we first note that $\mathscr{X}$ is clearly in bijection with the set of biregular graphs on the fixed triple $(\mathscr{V}, V_b, V_w)$, which is a finite set for each fixed $M, N, d_b, d_w$. Because $\Omega$ is finite, we may impose the uniform probability measure on it.
We now define the following fundamental parameters:
\begin{align}
\alpha \ := \ \frac{M}{N} \ = \ \frac{d_w}{d_b}, \quad \gamma \ := \ \frac{1}{\alpha} \ = \ \frac{N}{M}.
\end{align}
Here, we use the identity \eqref{eq:countedges}. As in \cite{DJ}, we now impose the following constraint on the parameters $M, N, d_b, d_w$:
\begin{align}
\lim_{M, N \to \infty} \ \alpha \ \geq \ 1. \label{eq:limitratio}
\end{align}
This assumption is not crucial, as we may relabel the vertices $\mathscr{V}$ if $M < N$ in the limit; it will, however, be convenient in our analysis of the spectral statistics. This completes our construction of the random matrix ensemble of interest.
\subsection{Random Covariance Matrices}
Strictly speaking, the random matrix ensemble studied in \cite{DJ} was the ensemble $\mathscr{X}_{\ast}$ consisting of the corresponding $N \times N$ covariance matrices:
\begin{align}
X_{\ast} \ = \ H^{\ast} H, \quad H \ = \ d_w^{-1/2}(A - d_b \mathbf{e}_b \mathbf{e}_w^{\ast}).
\end{align}
The ensemble $\mathscr{X}$ introduced in this paper may be realized as a linearization of the ensemble $\mathscr{X}_{\ast}$ of covariance matrices. This is the upshot of working with the matrix ensemble $\mathscr{X}$ whenever convenient, e.g. when studying linear perturbations of the adjacency matrix of a biregular graph. The following result shows that the spectral data is preserved when transferring between the ensembles $\mathscr{X}$ and $\mathscr{X}_{\ast}$. This result is standard in linear algebra and the analysis of compact operators, but we include it for completeness and for organizational purposes, as it does not seem to be stated precisely and formally in any standard text.
Before we give the result and its (short) proof, we define the following third matrix ensemble $\mathscr{X}_{\ast,+}$ of $M \times M$ covariance matrices:
\begin{align}
X_{\ast,+} \ = \ H H^{\ast}.
\end{align}
The ensemble $\mathscr{X}_{\ast,+}$ will not play any essential role in our analysis of random matrix ensembles and is included in this paper for the sake of completeness of our results.
\begin{prop} \label{prop:spectralcorrespondence}
Suppose $H$ is a real-valued matrix of size $M \times N$ with $M \geq N$, and suppose $X$ is a block matrix of the following form:
\begin{align}
X \ = \ \begin{pmatrix} 0 & H \\ H^{\ast} & 0 \end{pmatrix}. \label{eq:blockmatrix}
\end{align}
\begin{itemize}
\item \emph{(I).} The spectrum of $X$ admits the following decomposition:
\begin{align}
\sigma(X) \ = \ \sigma^{1/2}(H^{\ast} H) \cup \zeta(X),
\end{align}
where $\sigma^{1/2}(H^{\ast} H)$ denotes the pairs of eigenvalues $(\pm \lambda)$ such that $(\pm \lambda)^2$ is an eigenvalue of $H^{\ast} H$. Here, $\zeta(X)$ denotes the set of eigenvalues not in $\sigma^{1/2}(H^{\ast} H)$, all of which are 0.
\item \emph{(II).} The spectrum of $HH^{\ast}$ admits the following decomposition:
\begin{align}
\sigma(HH^{\ast}) \ = \ \sigma(H^{\ast} H) \cup \zeta^2(X),
\end{align}
where $\zeta^2(X)$ denotes the set of eigenvalues not in $\sigma(H^{\ast} H)$, all of which are 0.
\item \emph{(III).} Suppose $\lambda^2 \in \sigma(H^{\ast} H)$ is associated to the following $\ell^2$-normalized eigenvectors:
\begin{align}
\mathbf{v}_{\ast} \ \leftrightsquigarrow \ H^{\ast} H, \quad \mathbf{v}_{\ast, +} \ \leftrightsquigarrow \ HH^{\ast}.
\end{align}
Then $\pm \lambda$ is associated to the following $\ell^2$-normalized eigenvector pair of $X$:
\begin{align}
\pm \lambda \ \leftrightsquigarrow \ \frac{1}{\sqrt{2}} \begin{pmatrix} \mathbf{v}_{\ast,+} \\ \pm \mathbf{v}_{\ast} \end{pmatrix}.
\end{align}
\item \emph{(IV).} Conversely, any eigenvalue pair $\pm \lambda \in \sigma^{1/2}(H^{\ast} H)$ is associated to the following $\ell^2$-normalized eigenvector pair of $X$:
\begin{align}
\pm \lambda \ \leftrightsquigarrow \ \frac{1}{\sqrt{2}} \begin{pmatrix} \mathbf{v}_{\ast, +} \\ \pm \mathbf{v}_{\ast} \end{pmatrix},
\end{align}
where $\mathbf{v}_{\ast, +}$ is an $\ell^2$-normalized eigenvector of $HH^{\ast}$ with eigenvalue $\lambda^2$ and $\mathbf{v}_{\ast}$ is an $\ell^2$-normalized eigenvector of $H^{\ast} H$ with eigenvalue $\lambda^2$.
\item \emph{(V).} Suppose $\lambda = 0 \in \zeta^2(X)$ is associated to the $\ell^2$-normalized eigenvector $\mathbf{v}_{\lambda}$ of $HH^{\ast}$. Then for some $\lambda' = 0 \in \zeta(X)$, the corresponding $\ell^2$-normalized eigenvector is given by
\begin{align}
\lambda' \ \leftrightsquigarrow \ \begin{pmatrix} \mathbf{v}_{\lambda} \\ 0 \end{pmatrix}.
\end{align}
\item \emph{(VI).} Conversely, suppose $\lambda \in \zeta(X)$. Then $\lambda = 0$ is associated to the following $\ell^2$-normalized eigenvector of $X$:
\begin{align}
\lambda \ \leftrightsquigarrow \ \begin{pmatrix} \mathbf{v}_{\lambda} \\ 0 \end{pmatrix},
\end{align}
where $\mathbf{v}_{\lambda}$ is an $\ell^2$-normalized eigenvector of $HH^{\ast}$ with eigenvalue $\lambda' = 0$.
\end{itemize}
\end{prop}
\begin{remark}
We briefly note that Proposition \ref{prop:spectralcorrespondence} applies to a much more general class of covariance matrices and their linearizations, as it does not refer to any underlying graph or graph structure.
\end{remark}
\begin{proof}
Statements (I) -- (II) are consequences of the SVD (singular value decomposition) of the matrix $H$ and dimension-counting. Statements (III) -- (VI) follow from a direct calculation and dimension-counting.
\end{proof}
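Statement (I) of Proposition \ref{prop:spectralcorrespondence} admits a direct numerical check: the spectrum of the linearization consists of the pairs $\pm \lambda$ of singular values of $H$, together with $M - N$ additional zeros. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 7, 4                      # M >= N, as in the proposition
H = rng.standard_normal((M, N))

# the linearization X and its spectrum
X = np.block([[np.zeros((M, M)), H],
              [H.T, np.zeros((N, N))]])
eig_X = np.sort(np.linalg.eigvalsh(X))

# sigma^{1/2}(H^* H): the pairs (+/- lambda) with lambda a singular value
# of H, padded with the M - N zeros constituting zeta(X)
s = np.linalg.svd(H, compute_uv=False)
expected = np.sort(np.concatenate([s, -s, np.zeros(M - N)]))

assert np.allclose(eig_X, expected)
```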
\subsection{The Main Result}
We begin with notation for the Stieltjes transforms of the Marchenko-Pastur law and its linearization, respectively:
\begin{align}
m_{\infty}(z) \ &= \ \int_{\R} \ \frac{\varrho_{\infty}(x)}{x - z} \ \d x, \\
m(z) \ &= \ \int_{\R} \ \frac{\varrho(x)}{x - z} \ \d x.
\end{align}
Here, we take $z \in \C_+$ or $z \in \C_-$. We also define the following perturbed Stieltjes transforms to address the ensemble $\mathscr{X}_{\ast, +}$:
\begin{align}
m_{\infty,+}(z) \ &:= \ \gamma m_{\infty}(z) + \frac{\gamma - 1}{z} \ = \ \int_{\R} \ \frac{\gamma \varrho_{\infty}(x) + (\gamma - 1) \delta_0(x)}{x - z} \ \d x, \\
m_{+}(z) \ &:= \ \gamma m(z) + \frac{\gamma - 1}{z} \ = \ \int_{\R} \ \frac{\gamma \rho(x) + (\gamma - 1) \delta_0(x)}{x - z} \ \d x.
\end{align}
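The relation defining $m_{\infty,+}$ mirrors an exact finite-size identity between the empirical Stieltjes transforms of $H^{\ast} H$ and $H H^{\ast}$, since the larger covariance matrix carries $M - N$ additional zero eigenvalues. A sketch of this identity on a generic matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 5                       # M >= N, so gamma = N/M <= 1
H = rng.standard_normal((M, N))
gamma = N / M
z = 0.3 + 0.7j

# empirical Stieltjes transforms of H^* H (N x N) and H H^* (M x M)
s_small = np.trace(np.linalg.inv(H.T @ H - z * np.eye(N))) / N
s_large = np.trace(np.linalg.inv(H @ H.T - z * np.eye(M))) / M

# H H^* has the spectrum of H^* H plus M - N extra zero eigenvalues,
# which is the finite-size analogue of m_+ = gamma*m + (gamma - 1)/z
assert np.allclose(s_large, gamma * s_small + (gamma - 1) / z)
```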
This describes the limiting spectral behavior. For the graphs themselves, we define the following Green's functions of the matrix ensembles $\mathscr{X}$, $\mathscr{X}_{\ast}$, and $\mathscr{X}_{\ast,+}$:
\begin{align}
G(z) \ &= \ (X - z)^{-1}, \quad X \in \mathscr{X}; \\
G_{\ast}(z) \ &= \ (X_{\ast} - z)^{-1}, \quad X_{\ast} \in \mathscr{X}_{\ast}; \\
G_{\ast,+}(z) \ &= \ (X_{\ast,+} - z)^{-1}, \quad X_{\ast,+} \in \mathscr{X}_{\ast, +}.
\end{align}
We also define the Stieltjes transforms of each covariance matrix ensemble $\mathscr{X}_{\ast}$ and $\mathscr{X}_{\ast,+}$, which may be realized as the Stieltjes transforms of the corresponding empirical spectral distributions in the spirit of Proposition \ref{prop:spectralcorrespondence}:
\begin{align}
s_{\ast}(z) \ = \ \frac{1}{N} \on{Tr} G_{\ast}(z), \quad s_{\ast,+}(z) \ = \ \frac{1}{M} \on{Tr} G_{\ast,+}(z).
\end{align}
Lastly, for the ensemble $\mathscr{X}$, we define instead the \emph{partial} Stieltjes transforms which average only over the diagonal terms of a specified color (black or white):
\begin{align}
s_b(z) \ = \ \frac{1}{M} \sum_{i = 1}^M \ G_{ii}(z), \quad s_w(z) \ = \ \frac{1}{N} \sum_{k = 1}^N \ G_{kk}(z).
\end{align}
We now introduce domains in the complex plane on which we study the Green's functions of each matrix ensemble. These domains are engineered to avoid the singularities in the Green's functions near the origin, and in the case of the linearized Marchenko-Pastur law, the edge of the support. To this end, we establish notation for the following subsets of the complex plane for any fixed $\e > 0$:
\begin{align}
U_{\e, \pm} \ &:= \ \left\{ z = E + i \eta: \ |E| > \e, \ \eta > 0 \right\}, \\
U_{\e} \ &:= \ U_{\e,+} \cup U_{\e, -}.
\end{align}
We will also need to define the following control parameters:
\begin{align}
D \ &:= \ d_b \wedge \frac{N^2}{d_b^3}, \\
\Phi(z) \ &:= \ \frac{1}{\sqrt{N \eta}} + \frac{1}{\sqrt{D}}, \\
F_z(r) \ = \ F(r) \ &:= \ \left[ \left(1 + \frac{1}{\sqrt{(\lambda_+ - z)(z - \lambda_-)}} \right) r \right] \ \wedge \ \sqrt{r}.
\end{align}
We now present the main result of this paper.
\begin{theorem} \label{theorem:locallaw}
Suppose $\xi = \xi_N$ is a parameter chosen such that the following growth conditions on $D$ and $\eta$ hold:
\begin{align}
\xi \log \xi \ \gg \ \log^2 N, \quad |\eta| \ \gg \ \frac{\xi^2}{N}, \quad D \ \gg \ \eta^2. \label{eq:growthconditionslocallaw}
\end{align}
Then for any fixed $\e > 0$, we have the following estimates with probability at least $1 - e^{-\xi \log \xi}$, uniformly over all $z = E + i \eta \in U_{\e}$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}:
\begin{align}
\max_{i} \left| [G_{\ast}(z)]_{ii} - m_{\infty}(z) \right| \ = \ O \left( F_z(\xi \Phi) \right), \ \ \ \max_{i \neq j} \left| [G_{\ast}(z)]_{ij} \right| \ = \ O\left( \frac{\xi \Phi(z^2)}{z} \right). \label{eq:GFlocallawsmall}
\end{align}
Similarly, for any fixed $\e > 0$, we have the following estimates with probability at least $1 - e^{-\xi \log \xi}$, uniformly over all parameters $z = E + i \eta \in U_{\e}$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}:
\begin{align}
\max_{i} \left| [G_{\ast,+}(z)]_{ii} - m_{\infty,+}(z) \right| \ = \ O\left( F_z( \xi \Phi) \right), \ \ \ \max_{i \neq j} \left| [G_{\ast,+}(z)]_{ij} \right| \ = \ O \left( \frac{\xi \Phi(z^2)}{z} \right). \label{eq:GFlocallawlarge}
\end{align}
Conditioning on the estimates \eqref{eq:GFlocallawsmall} and \eqref{eq:GFlocallawlarge}, respectively, uniformly over $z = E + i \eta \in U_{\e}$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}, we have
\begin{align}
\left| s_{\ast}(z) - m_{\infty}(z) \right| \ = \ O \left( F_z(\xi \Phi) \right), \quad \left| s_{\ast,+}(z) - m_{\infty}(z) \right| \ = \ O \left( F_z(\xi \Phi) \right). \label{eq:STlocallawcov}
\end{align}
Conditioning on the estimates \eqref{eq:GFlocallawsmall}, uniformly over all $z = E + i \eta$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}, we have
\begin{align}
\max_{k > M} \left| [G(z)]_{kk} - m(z) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right), \label{eq:GFlocallawlinearsmalldiagonal} \\
\max_{M < k < \ell} \ \left| [G(z)]_{k \ell} \right| \ &= \ O \left(\xi \Phi \right).\label{eq:GFlocallawlinearsmalloffdiagonal}
\end{align}
Moreover, conditioning on the estimates \eqref{eq:GFlocallawlarge}, for any fixed $\e > 0$, uniformly over all $z = E + i \eta \in U_{\e}$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}, we have
\begin{align}
\max_{i \leq M} \left| [G(z)]_{ii} - m_{+}(z) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right), \label{eq:GFlocallawlinearlargediagonal} \\
\max_{i < j \leq M} \ \left| [G(z)]_{ij} \right| \ &= \ O(\xi \Phi).\label{eq:GFlocallawlinearlargeoffdiagonal}
\end{align}
Conditioning on \eqref{eq:GFlocallawsmall} and \eqref{eq:GFlocallawlarge}, we have the following estimates uniformly over all $z = E + i \eta$ with $\eta$ satisfying the growth condition in \eqref{eq:growthconditionslocallaw}:
\begin{align}
\left| s_b(z) - m_{+}(z) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right), \label{eq:partialSTblack} \\
\left| s_w(z) - m(z) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right). \label{eq:partialSTwhite}
\end{align}
Lastly, the estimates \eqref{eq:GFlocallawsmall} and \eqref{eq:STlocallawcov} hold without the condition $|E| > \e$ if $\alpha > 1$. The estimates \eqref{eq:GFlocallawlinearlargediagonal} and \eqref{eq:GFlocallawlinearlargeoffdiagonal} hold without the condition $|E| > \e$ if $\alpha = 1$.
\end{theorem}
\begin{remark}
We briefly remark on the repulsion assumption $|E| > \e$ in Theorem \ref{theorem:locallaw}. The removal of this assumption, discussed at the end of the statement of Theorem \ref{theorem:locallaw}, is a direct consequence of studying how the singularities of the Green's functions and Stieltjes transforms at the origin depend on the structural parameter $\alpha$. For example, $m_{\infty}$ has a singularity at the origin exactly when $\alpha = 1$. Moreover, the singularities of the empirical Stieltjes transforms and the singularity of $m_{\infty,+}$ at the origin cancel each other out, allowing for a regularization at the origin.
\end{remark}
\begin{remark}
We last remark that if $\alpha = 1$, the covariance matrices $X_{\ast}$ and $X_{\ast,+}$ are equal in law. This comes from symmetry of the bipartite graph between the two vertex sets $V_b$ and $V_w$, i.e. the graph statistics are unchanged upon relabeling the graph. This allows us to remove the assumption $|E| > \e > 0$ for certain estimates in Theorem \ref{theorem:locallaw} in the regime $\alpha = 1$.
\end{remark}
We now discuss important consequences of Theorem \ref{theorem:locallaw}, the first of which is the following result on eigenvector delocalization, i.e. an estimate on the $\ell^{\infty}$-norm of an eigenvector in terms of its $\ell^2$-norm. The proof of this delocalization result will be delegated to a later section after we study in more detail the spectral data of covariance matrices and their linearizations.
\begin{corollary} \label{corollary:delocalization}
\emph{(Eigenvector Delocalization).}
Assume the setting of Theorem \ref{theorem:locallaw}, and suppose $\mathbf{u}$ is an eigenvector of $X_{\ast}$ with eigenvalue $\lambda$. Then with probability at least $1 - e^{-\xi \log \xi}$, we have
\begin{align}
\| \mathbf{u} \|_{\ell^{\infty}} \ = \ O \left( \frac{\xi}{\sqrt{N}} \| \mathbf{u} \|_{\ell^2} \right).
\end{align}
\end{corollary}
We briefly remark that the eigenvector delocalization fails for the larger covariance matrix $X_{\ast,+}$.
\begin{proof}
First, we note that in the case $\mathbf{u} \in \on{Span}(\mathbf{e}_w)$, the result holds trivially. Moreover, by Proposition \ref{prop:spectralcorrespondence}, it suffices to prove the claim for eigenvectors of the linearization $X$, replacing the $\ell^{\infty}$-norm by a supremum over indices $k > M$.
We now take for granted $|zm_{\infty}(z^2)| = O(1)$ uniformly for $z = E + i \eta \in \C_+$; this follows from an elementary analysis of the Stieltjes transform discussed in the appendix of this paper. This allows us to obtain the following string of inequalities with probability at least $1 - e^{-\xi \log \xi}$ and any index $k > M$:
\begin{align}
\left| \mathbf{u}(k) \right|^2 \ &\leq \ \sum_{\beta} \ \frac{\eta^2 \left| \mathbf{v}_\beta(k) \right|^2}{(\lambda_\beta - \lambda)^2 + \eta^2} \\
&= \ \eta \on{Im} [G(\lambda + i \eta)]_{kk} \\
&\leq \ \eta \left| zm_{\infty}(z^2) \right| + O(\eta \sqrt{\xi \Phi}) \\
&\leq \ 2 \eta,
\end{align}
where the sum runs over all $\ell^2$-normalized eigenvector--eigenvalue pairs $(\mathbf{v}_{\beta}, \lambda_{\beta})$ of $X$, so that the term with $\mathbf{v}_{\beta} = \mathbf{u}$ alone equals $|\mathbf{u}(k)|^2$, and where we used the local law for the linearization $X$ to pass from the second line to the third. This completes the derivation of the eigenvector delocalization.
\end{proof}
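The proof above rests on the standard spectral-decomposition identity $\sum_{\beta} \eta |\mathbf{v}_{\beta}(k)|^2 / \big( (\lambda_{\beta} - E)^2 + \eta^2 \big) = \on{Im} \, [G(E + i \eta)]_{kk}$, which holds for any real symmetric matrix. A minimal numerical check on a generic example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, n))
X = (X + X.T) / 2                       # a generic real symmetric matrix
lam, V = np.linalg.eigh(X)              # columns of V are eigenvectors

E, eta = 0.5, 0.1
z = E + 1j * eta
G = np.linalg.inv(X - z * np.eye(n))    # Green's function at z

k = 3
# spectral decomposition of the imaginary part of the diagonal entry
lhs = np.sum(eta * V[k, :] ** 2 / ((lam - E) ** 2 + eta ** 2))
assert np.allclose(lhs, G[k, k].imag)
```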
We conclude this preliminary discussion concerning consequences of Theorem \ref{theorem:locallaw} with the following weak rigidity estimates. We briefly remark that the proof relies heavily upon the Helffer-Sjostrand formula and functional calculus and, beyond these tools, the local law in Theorem \ref{theorem:locallaw}. To state the result, we first introduce the following definition.
\begin{definition}
For each $i \in [[1, N]]$, we define the $i$-th \emph{classical location}, denoted $\gamma_i$, by the following quantile formula:
\begin{align}
\frac{i}{N} \ = \ \int_{-\infty}^{\gamma_i} \ \varrho_{\infty}(E) \ \d E,
\end{align}
where we recall $\varrho_{\infty}$ denotes the density function of the Marchenko-Pastur law.
\end{definition}
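The quantile formula may be inverted numerically. As an illustration only, the sketch below assumes the Marchenko-Pastur density with aspect ratio one, $\varrho(x) = \sqrt{(4-x)/x} \, / \, (2\pi)$ supported on $(0, 4]$, which need not match the normalization used in this paper:

```python
import numpy as np

# Illustrative computation of classical locations by inverting the CDF.
# Assumed density (aspect ratio one, for this sketch only):
#     rho(x) = sqrt((4 - x) / x) / (2 * pi)  on (0, 4].
def rho(x):
    return np.sqrt((4.0 - x) / x) / (2.0 * np.pi)

N = 100
xs = np.linspace(0.0, 4.0, 400001)
mids = (xs[1:] + xs[:-1]) / 2.0
# midpoint rule handles the integrable 1/sqrt(x) singularity at zero
cdf = np.concatenate([[0.0], np.cumsum(rho(mids) * np.diff(xs))])

# gamma_i solves i/N = int_{-infty}^{gamma_i} rho(E) dE
classical = np.interp(np.arange(1, N + 1) / N, cdf, xs)

assert np.all(np.diff(classical) > 0)            # locations are increasing
assert 0.0 < classical[0] and classical[-1] <= 4.0
```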
The following consequence of Theorem \ref{theorem:locallaw} will compare the classical location $\gamma_i$ to the $i$-th eigenvalue $\lambda_i$ of the covariance matrix $X_{\ast}$, where the ordering on the eigenvalues is the increasing order.
\begin{corollary}
For any fixed $\kappa > 0$ and index $i \in [[\kappa N, (1-\kappa)N]]$, we have, with probability at least $1 - e^{-\xi \log \xi}$,
\begin{align}
\left| \lambda_i - \gamma_i \right| \ = \ O \left( \frac{\xi^2}{D^{1/4}} \right).
\end{align}
\end{corollary}
For details of the proof, we refer to Section 5 in \cite{BHKY} and Section 7 in \cite{LY}.
We now give an outline for the derivation of the local law. The proof will roughly consist of the following steps:
\begin{itemize}
\item (I). The first step will be to adapt the methods in \cite{BKY} to define and study a method of resampling biregular graphs in $\Omega$. The resampling will be generated by local operations on a given graph known as \emph{switchings}, which we will define more precisely in a later section. The local nature of the resampling method will help us derive equations exploiting the probabilistic stability of the Green's function under these switchings.
\item (II). The second step will be to study the Green's functions of the three matrix ensembles simultaneously. This includes both a preliminary analysis and a further analysis using the switching dynamics established in the previous step. In particular, we derive an approximate self-consistent equation for the diagonal entries of the Green's function and study its stability properties. As in \cite{BKY}, this will help us compare the diagonal of the Green's function to the associated Stieltjes transform. The equation in \cite{BKY}, however, contains a constant leading-order coefficient, whereas for covariance matrices the leading-order coefficient is nonconstant. We adapt the methods suitably to handle this nonlinearity.
\end{itemize}
\section{Switchings on Bipartite Graphs}
We begin by introducing notation necessary to define switchings on biregular graphs. Switchings will be local operations on the biregular graphs, so we will establish notation for vertices and edges containing said vertices as follows.
\begin{notation}
A generic vertex in $V_b$ (resp. $V_w$) will be denoted by $v_b$ (resp. $v_w$).
For a fixed graph $E \in \Omega$, we will denote the edges in $E$ containing $v_b$ by $\{ e_{b, \mu} \}_{\mu = 1}^{d_b}$. Moreover, for a fixed edge $e_{b, \mu}$ containing $v_b$, we will denote the neighboring vertex by $v_{b,\mu}$, so that $e_{b, \mu} = v_b v_{b, \mu}$.
Similarly, the edges in $E$ containing $v_w$ will be denoted by $\{ e_{w, \nu} \}_{\nu = 1}^{d_w}$. For a fixed edge $e_{w,\nu}$ containing $v_w$, we will denote the neighboring vertex by $v_{w, \nu}$.
For a fixed vertex $v_b \in V_b$, we establish the notation for the set of edges not containing $v_b$:
\begin{align}
U_{v_b} \ := \ \left\{ \on{edges} \ e \in E: \ v_b \not\in e \right\}.
\end{align}
Similarly for a fixed vertex $v_w \in V_w$, we define the following set of edges not containing $v_w$:
\begin{align}
U_{v_w} \ := \ \left\{ \on{edges} \ e \in E: \ v_w \not\in e \right\}.
\end{align}
\end{notation}
We may now begin to define a switching on a generic graph $E \in \Omega$. To this end, we fix a black vertex $v_b \in V_b$ and an edge $e_{b, \mu}$ for some $\mu \in [[1, d_b]]$. We define the following space of subgraphs of $E$:
\begin{align}
\mathbf{S}_{v_b, \mu, E} \ := \ \left\{ S \subset E: \ S \ = \ \{ e_{b, \mu}, p_{b, \mu}, q_{b,\mu} \}, \ p_{b,\mu} \neq q_{b, \mu} \in U_{v_b} \right\}.
\end{align}
In words, the set $\mathbf{S}_{v_b, \mu, E}$ is the set of graphs consisting of the edges $e_{b,\mu}$ and any two distinct edges $p_{b,\mu}$ and $q_{b,\mu}$, neither of which contains the vertex $v_b$. Similarly, we may define for a fixed white vertex $v_w \in V_w$ and edge $e_{w, \nu}$, for some $\nu \in [[1, d_w]]$, the same set of graphs:
\begin{align}
\mathbf{S}_{v_w, \nu, E} \ := \ \left\{ S \subset E: \ S \ = \ \{ e_{w, \nu}, p_{w, \nu}, q_{w, \nu} \}, \ p_{w, \nu} \neq q_{w,\nu} \in U_{v_w} \right\}.
\end{align}
\begin{notation}
A generic graph in $\mathbf{S}_{v_b, \mu, E}$ will be denoted by $S_{b, \mu}$. A generic graph in $\mathbf{S}_{v_w, \nu, E}$ will be denoted by $S_{w, \nu}$.
\end{notation}
The set $\mathbf{S}_{v_b, \mu, E}$ contains the edge-local data along which switchings on graphs will be defined. To make this precise, we need to introduce the following indicator functions. First, we define the following configuration vectors for fixed vertices $v_b \in V_b$ and $v_w \in V_w$:
\begin{align}
\mathbf{S}_{v_b} \ := \ \left( S_{b, \mu} \right)_{\mu = 1}^{d_b}, \quad \mathbf{S}_{v_w} \ := \ \left( S_{w, \nu} \right)_{\nu = 1}^{d_w}.
\end{align}
With this notation, we define the following indicator functions and label set, which detect graph properties in $S_{b, \mu}$ and $S_{w, \nu}$:
\begin{align}
I(S_{b, \mu}) \ &= \ \mathbf{1} \left( |[S_{b, \mu}]| = 6 \right), \label{eq:I} \\
J(\mathbf{S}_{v_b}, \mu) \ &= \ \prod_{\mu' \neq \mu} \ \mathbf{1} \left( [S_{b, \mu}] \cap [S_{b, \mu'}] = \{ v_b \} \right), \label{eq:J} \\
W(\mathbf{S}_{v_b}) \ &= \ \left\{ \mu: \ I(S_{b, \mu}) J(\mathbf{S}_{v_b}, \mu) = 1 \right\}. \label{eq:W}
\end{align}
For white vertices $v_w \in V_w$, the functions $I, J$ and $W$ retain the same definition upon replacing $b$ with $w$ and $\mu$ with $\nu$.
We now define the augmented probability spaces $\wt{\Omega}$ which will make the switchings systematic from the perspective of Markovian dynamics. For a fixed black vertex $v_b \in V_b$ and a fixed white vertex $v_w \in V_w$, we define the following augmented space:
\begin{align}
\wt{\Omega} \ &= \left\{ \left( E, \mathbf{S}_{v_b}, \mathbf{S}_{v_w} \right), \quad E \in \Omega, \ \mathbf{S}_{v_b} \in \prod_{\mu = 1}^{d_b} \mathbf{S}_{v_b, \mu, E}, \ \mathbf{S}_{v_w} \in \prod_{\nu = 1}^{d_w} \mathbf{S}_{v_w, \nu, E} \right\}.
\end{align}
We now precisely define switchings by defining dynamics on $\wt{\Omega}$. To this end we define switchings on configuration vectors $\mathbf{S}_{v_b}$ and $\mathbf{S}_{v_w}$; we first focus on the configuration vectors for black vertices.
Fix a label $\mu$ and consider a component $S_{b, \mu}$ of a uniformly sampled configuration vector $\mathbf{S}_{v_b}$. Precisely, the underlying graph $E \in \Omega$ is sampled uniformly, and conditioned on $E$, the components of $\mathbf{S}_{v_b}$ are sampled independently, each uniformly from $\mathbf{S}_{v_b, \mu, E}$. We now define the following map:
\begin{align}
T_b: \prod_{\mu = 1}^{d_b} \ \mathbf{S}_{v_b, \mu, E} \ \longrightarrow \ \prod_{\mu = 1}^{d_b} \ \mathbf{S}_{v_b, \mu, E'}
\end{align}
where $E' \in \Omega$ is possibly different from $E$. The map is given as follows: for any $\mu$, we define the map $T_{b,\mu}$ by
\begin{align}
T_{b,\mu}(S_{b, \mu}) \ = \ \begin{cases}
S_{b, \mu} & \mu \not\in W(\mathbf{S}_{v_b}) \\
(S_{b, \mu}, s_{b, \mu}) & \mu \in W(\mathbf{S}_{v_b})
\end{cases}.
\end{align}
We define the graph $(S_{b, \mu}, s_{b,\mu})$ as follows; this is where we now introduce randomness into the dynamics $T_b$. Suppose $\mu \in W(\mathbf{S}_{v_b})$, in which case $S_{b,\mu}$ is 1-regular and bipartite with respect to the vertex sets $([S_{b,\mu}], V_1, V_2)$. Consider the set of 1-regular bipartite graphs with respect to the vertex sets $([S_{b,\mu}], V_1, V_2)$; in words, this is the set of 1-regular graphs on $[S_{b, \mu}]$ such that, upon replacing $S_{b,\mu}$ with any such graph, the global graph $E$ remains biregular. We now define $(S_{b, \mu}, s_{b,\mu})$ to be drawn from this set uniformly at random, conditioned on the event $(S_{b, \mu}, s_{b, \mu}) \neq S_{b,\mu}$. Lastly, we define the following global dynamics:
\begin{align}
T_b \ = \ \prod_{\mu = 1}^{d_b} \ T_{b,\mu},
\end{align}
where the product is taken as composition. We note this product is independent of the order of composition; this is a consequence of the definition of the functions $I, J$ and $W$. For white vertices $v_w \in V_w$, we define the map $T_w$ by interchanging all black indices $b$ with white indices $w$ (and the labels $\mu$ with $\nu$).
We note that the maps $T_b$ and $T_w$ define maps on $\wt{\Omega}$, because we are allowed to change the underlying graph $E$ when varying over the space $\wt{\Omega}$; this is the utility of the almost-product representation of $\wt{\Omega}$. This allows us to finally define switchings of a biregular graph.
\begin{definition}
For a fixed black vertex $v_b \in V_b$ and a fixed label $\mu \in [[1, d_b]]$, the \emph{local switching} at $v_b$ along $\mu$ is the map $T_{b,\mu}$. The \emph{global switching} is the map $T_b$.
Similarly, for a fixed white vertex $v_w \in V_w$ and a fixed label $\nu \in [[1, d_w]]$, the \emph{local switching} at $v_w$ along $\nu$ is the map $T_{w, \nu}$. The \emph{global switching} at $v_w$ is the map $T_w$.
\end{definition}
\begin{remark}
We note that our construction, technically, implies the mappings $T_{b, \mu}$ and $T_{w, \nu}$ are random mappings on the augmented space $\wt{\Omega}$. Via this construction, we obtain a probability measure on $\wt{\Omega}$ induced by the uniform measure and a uniform sampling of switchings. To obtain an honest mapping on the original space $\Omega$, we may instead construct deterministic mappings by averaging over the random switchings. For precise details, we cite \cite{BKY}.
\end{remark}
\subsection{Switchings on Adjacency Matrices}
We now aim to translate the combinatorics of graph switchings into analysis of adjacency matrices. Suppose $E \in \Omega$ is a biregular graph with adjacency matrix $A$. We will fix the following notation.
\begin{notation}
For an edge $e = ij$ on the vertex set $\mathscr{V}$, we let $\Delta_{ij}$ denote the adjacency matrix of the graph on $\mathscr{V}$ consisting only of the edge $e$. In particular, $\Delta_{ij}$ is the matrix whose entries are given by
\begin{align}
(\Delta_{ij})_{k\ell} \ = \ \delta_{ik} \delta_{j \ell} + \delta_{i \ell} \delta_{jk}.
\end{align}
\end{notation}
In the context of switchings on biregular graphs, the matrices $\Delta_{ij}$ are perturbations of adjacency matrices. This is made precise in the following definition.
\begin{definition}
Fix a black vertex $v_b \in V_b$ and a label $\mu \in [[1, d_b]]$. A \emph{local} switching of $A$, denoted $T_{b,\mu}$, at $v_b$ along $\mu$ is given by the following formula:
\begin{align}
T_{b,\mu}(A) \ = \ A - S_{b,\mu} + (S_{b, \mu}, s_{b,\mu}), \label{eq:switchmatrix}
\end{align}
where $S_{b,\mu}$ is a component of a random, uniformly sampled configuration vector $\mathbf{S}_{v_b}$. A \emph{global} switching of $A$, denoted $T_b$, is the composition of the random mappings $T_{b,\mu}$.
Similarly, we may define local switchings and global switchings of adjacency matrices for white vertices by replacing the black subscript $b$ with the white subscript $w$, and replacing the label $\mu$ with $\nu$.
\end{definition}
Clearly, a local or global switching of an adjacency matrix is the adjacency matrix corresponding to a local or global switching of the underlying graph. To realize the matrices $\Delta_{ij}$ as perturbations, we will rewrite the formula defining $T_{b,\mu}$ as follows. As usual, we carry out the discussion for black vertices $v_b \in V_b$, though the details for white vertices $v_w$ follow analogously.
First, we recall the following notation for a component $S_{b,\mu} \in \mathbf{S}_{v_b, \mu, E}$ of a configuration vector $\mathbf{S}_{v_b}$:
\begin{align}
S_{b,\mu} \ := \ \left\{ e_{b,\mu}, p_{b,\mu}, q_{b,\mu} \right\},
\end{align}
subject to the constraint that $S_{b,\mu}$ contains three distinct edges.
\begin{notation}
We will denote the vertices of $p_{b,\mu}$ by $a_{b,p,\mu} \in V_b$ and $a_{w,p,\mu} \in V_w$. Similarly, we will denote the vertices of $q_{b,\mu}$ by $a_{b,q,\mu} \in V_b$ and $a_{w,q,\mu} \in V_w$.
\end{notation}
With this notation, we may rewrite the random mapping $T_{b,\mu}$ as follows:
\begin{align}
T_{b,\mu}(A) \ = \ A - \left( \Delta_{v_b, v_{b,\mu}} + \Delta_{a_{b,p,\mu} a_{w,p,\mu}} + \Delta_{a_{b,q,\mu} a_{w,q,\mu}} \right) + \left( \Delta_{v_b, x} + \Delta_{a_{b,p,\mu}, y} + \Delta_{a_{b,q,\mu}, z} \right), \label{eq:perturbswitchmatrix}
\end{align}
where we recall $e_{b,\mu} = v_b v_{b,\mu}$. Here, the variables $x, y, z$ are three distinct vertices sampled from the set of white vertices $\{ v_{b,\mu}, a_{w,p,\mu}, a_{w,q,\mu}\}$, subject to the following constraint on ordered triples:
\begin{align}
(x, y, z) \ \neq \ (v_{b,\mu}, a_{w,p,\mu}, a_{w,q,\mu}).
\end{align}
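To fix ideas, note that when the three white endpoints $v_{b,\mu}, a_{w,p,\mu}, a_{w,q,\mu}$ are pairwise distinct, the ordered triple $(x,y,z)$ ranges over the non-identity permutations of $(v_{b,\mu}, a_{w,p,\mu}, a_{w,q,\mu})$, of which there are
\begin{align}
3! - 1 \ = \ 5, \nonumber
\end{align}
so that, beyond the identity assignment, a local switching admits only finitely many reassignments of the white endpoints of the three removed edges.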
\subsection{Probability Estimates on Vertices}
In this discussion, we obtain estimates on the distribution of graph vertices \emph{after} performing switchings. The main estimates here show that the vertices are approximately uniformly distributed, which we make precise in the following definition.
\begin{definition} \label{definition:approxuniform}
Suppose $S$ is a finite set and $X$ is an $S$-valued random variable. We say (the distribution of) $X$ is \emph{approximately uniform} if the following bound on total variation holds:
\begin{align}
\sum_{s \in S} \ \left| \mathbb{P} \left( X = s \right) - \frac{1}{|S|} \right| \ \leq \ O \left( \frac{1}{\sqrt{d_w D}} \right).
\end{align}
\end{definition}
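We note the following immediate consequence of Definition \ref{definition:approxuniform}: if $X$ is approximately uniform and $F: S \to \C$ is bounded, then
\begin{align}
\E F(X) \ = \ \frac{1}{|S|} \sum_{s \in S} \ F(s) \ + \ O \left( \frac{\| F \|_{\infty}}{\sqrt{d_w D}} \right), \nonumber
\end{align}
which follows upon bounding $\left| \E F(X) - |S|^{-1} \sum_{s} F(s) \right|$ by $\| F \|_{\infty}$ times the total variation distance.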
We now introduce the following $\sigma$-algebras on $\wt{\Omega}$. These $\sigma$-algebras will allow us to focus on edge-local features of the switching data upon conditioning on the global graph $E \in \Omega$.
\begin{definition}
For a fixed label $\mu \in [[0, d_b]]$, we define the following $\sigma$-algebras:
\begin{align}
\mathscr{F}_{\mu} \ &:= \ \sigma \left( E, (S_{b,1}, s_{b,1}), \ldots, (S_{b,\mu}, s_{b,\mu}) \right), \\
\mathscr{G}_{\mu} \ &:= \ \sigma \left( E, \left( S_{b, \mu'}, s_{b, \mu'} \right)_{\mu' \neq \mu} \right).
\end{align}
We similarly define the $\sigma$-algebras $\mathscr{F}_{\nu}$ and $\mathscr{G}_{\nu}$, for labels $\nu \in [[1, d_w]]$, associated with white vertices.
\end{definition}
In particular, conditioning on $\mathscr{F}_0$ corresponds to conditioning on the graph $E$ only. The last piece of probabilistic data we introduce is the following notation, which will allow us to compare i.i.d. switchings on biregular graphs.
\begin{notation}
Suppose $X$ is a random variable on the graph data $E, \{ (S_{b, \mu}, s_{b,\mu}) \}_{\mu}, \{(S_{w, \nu}, s_{w, \nu}) \}_{\nu}$. Then $\wt{X}$ denotes the corresponding random variable on the variables $\wt{E}, \{ (\wt{S}_{b, \mu}, \wt{s}_{b,\mu}) \}_{\mu}, \{ (\wt{S}_{w, \nu}, \wt{s}_{w,\nu}) \}_{\nu}$, where the tildes on the graph data denote i.i.d. resamplings.
\end{notation}
\begin{notation}
For notational simplicity, by $p_{\mu}, q_{\mu}$ or $p_{\nu}, q_{\nu}$, we will refer to either $p_{b, \mu}, q_{b, \mu}$ or $p_{w, \nu}, q_{w, \nu}$, respectively, whenever the discussion applies to both situations.
\end{notation}
We now focus on obtaining an estimate on the distribution of the pair of edges $(p_{\mu}, q_{\mu})$, and similarly for $(p_{\nu}, q_{\nu})$. As with all results concerning switchings from here on, the details of the proofs resemble those of Section 6 in \cite{BKY}, so we omit them whenever redundant.
\begin{lemma} \label{lemma:approxuniformedges}
Conditioned on $\mathscr{G}_{\mu}$, the pair $(p_\mu, q_\mu)$ is approximately uniform, i.e., for any bounded symmetric function $F$, we have
\begin{align}
\E_{\mathscr{G}_\mu} F(p_\mu, q_\mu) \ = \ \frac{1}{(N d_w)^2} \sum_{p,q \in E} \ F(p,q) \ + \ O \left( \frac{1}{N} \| F \|_{\infty} \right). \label{eq:approxuniformjointpair}
\end{align}
Similarly, for any bounded function $F$, we have
\begin{align}
\E_{\mathscr{G}_{\mu}, q_\mu} F(p_\mu) \ = \ \frac{1}{N d_w} \sum_{p \in E} \ F(p) \ + \ O \left( \frac{1}{N} \| F \|_{\infty} \right). \label{eq:approxuniformmarginal}
\end{align}
\end{lemma}
\begin{proof}
Assume we resample about $v_b \in V_b$; the case for $v_w \in V_w$ follows analogously. By definition, we have
\begin{align}
\E_{\mathscr{G}_{\mu}} F(p_{\mu}, q_\mu) \ = \ \frac{1}{(Nd_w - d_w)(Nd_w - d_w - 1)} \sum_{\substack{p, q \in E_{v_b} \\ p \neq q}} \ F(p,q),
\end{align}
where $E_{v_b}$ is the set of edges in $E$ that are not incident to $v_b$. Then, \eqref{eq:approxuniformjointpair} follows from the following estimate
\begin{align}
\frac{1}{(Nd_w - d_w)(Nd_w - d_w - 1)} \ = \ \frac{1}{(N d_w)^2} \ + \ O \left( \frac{1}{N^3 d_w^2} \right)
\end{align}
as well as the estimate $| E_{v_b} |^2 \leq (N d_w)^2$ on the number of pairs, and lastly the estimate $|E_{v_b}^C| \leq Nd_w$. This last upper bound follows combinatorially; for details, see the proof of Lemma 6.2 in \cite{BKY}. The estimate \eqref{eq:approxuniformmarginal} follows from a similar argument.
\end{proof}
Because an edge is uniquely determined by its vertices in the graph, we automatically deduce from Lemma \ref{lemma:approxuniformedges} the following approximately uniform estimate for resampled vertices as well.
\begin{corollary} \label{corollary:switchingverticesapproxuniform}
Conditioned on $\mathscr{G}_\mu$, the pair $(p_{\mu}(b), q_{\mu}(b))$ (resp. $(p_{\mu}(w), q_{\mu}(w))$) is approximately uniform.
Similarly, conditioned on $\mathscr{G}_\mu$ and $q_{\mu}(b)$ (resp. $q_{\mu}(w)$), the random variable $p_{\mu}(b)$ (resp. $p_{\mu}(w)$) is approximately uniform.
\end{corollary}
\begin{proof}
This follows immediately upon applying Lemma \ref{lemma:approxuniformedges} to the function $F(p_{\mu}, q_{\mu}) = f(p_{\mu}(b), q_\mu(b))$.
\end{proof}
To fully exploit the resampling dynamics, we need a lower bound on the probability that a local switching $(S_{b, \mu}, s_{b, \mu})$ around a vertex $v_b \in V_b$ does not leave the graph fixed. In particular, we need an estimate on the probability of the event $\mu \in W(\mathbf{S}_{v_b})$, where $\mu$ is fixed and the set $W$ is viewed as random. As discussed in \cite{BKY}, the naive approach of estimating this probability conditioned on $\mathscr{G}_\mu$ fails on an exceptional set. Precisely, suppose the $\mu$-th neighbor $v_{b, \mu}$ of $v_b$ lies in $[S_{b, \mu'}]$ for some $\mu' \neq \mu$. In this case, almost surely, the intersection $[S_{b, \mu}] \cap [S_{b, \mu'}]$ is nontrivial. It turns out this is the only obstruction, so we aim to show that the event $v_{b, \mu} \in [S_{b, \mu'}]$ occurs with low probability for any $\mu' \neq \mu$.
Formally, we define the following indicator random variable which detects this exceptional set:
\begin{align}
h(\mathbf{S}_{v_b}, \mu) \ = \ \prod_{\mu' \neq \mu} \ \mathbf{1} \left( v_{b, \mu} \notin [S_{b, \mu'}] \right).
\end{align}
Thus, the estimates we need are given in the following result.
\begin{lemma} \label{lemma:distributionneighbors}
For any neighbor index $\mu$, we have
\begin{align}
\mathbb{P}_{\mathscr{G}_\mu} \left[ I(S_{b, \mu}) J(\mathbf{S}_{v_b},\mu) \ = \ h(\mathbf{S}_{v_b}, \mu) \right] \ \geq \ 1 - O \left( \frac{d_b}{N} \right). \label{eq:distributionneighborsestimate1}
\end{align}
Moreover, we have
\begin{align}
\mathbb{P}_{\mathscr{F}_0} \left[ h(\mathbf{S}_{v_b}, \mu) = 1 \right] \ \geq \ 1 - O \left( \frac{d_b}{N} \right). \label{eq:distributionneighborsestimate2}
\end{align}
\end{lemma}
\begin{proof}
We first note that, conditioned on the event $h = 0$, the estimate \eqref{eq:distributionneighborsestimate1} follows immediately; it therefore suffices to treat the event $h = 1$. On this event, the lower bound \eqref{eq:distributionneighborsestimate1} follows from a combinatorial analysis of the underlying graph using the following union bound:
\begin{align}
\mathbb{P}_{\mathscr{G}_\mu, h = 1} \left[ I(S_{b, \mu}) J(\mathbf{S}_{v_b}, \mu) = 0 \right] \ \leq \ &\mathbb{P}_{\mathscr{G}_\mu} \left[ I(S_{b, \mu}) = 0 \right] \nonumber \\
&\quad \ + \ \mathbb{P}_{\mathscr{G}_\mu, h = 1} \left[ J(\mathbf{S}_{v_b}, \mu) = 0 \right].
\end{align}
Similarly, \eqref{eq:distributionneighborsestimate2} follows from the union bound
\begin{align}
\mathbb{P}_{\mathscr{F}_0} \left[ h(\mathbf{S}_{v_b}, \mu) = 0 \right] \ \leq \ \sum_{\mu' \neq \mu} \ \mathbb{P}_{\mathscr{F}_0} \left[ v_{b, \mu} \in [S_{b, \mu'}] \right].
\end{align}
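Heuristically, each summand on the right-hand side is $O(1/N)$: the set $[S_{b, \mu'}]$ spans only $O(1)$ vertices, and each of the two uniformly sampled edges in $S_{b, \mu'}$ is incident to the fixed white vertex $v_{b, \mu}$, which has degree $d_w$ among the $N d_w$ edges of $E$, with probability
\begin{align}
O \left( \frac{d_w}{N d_w} \right) \ = \ O \left( \frac{1}{N} \right). \nonumber
\end{align}
Summing over the $d_b - 1$ labels $\mu' \neq \mu$ then yields the claimed bound $O(d_b/N)$.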
For details, we refer back to \cite{BKY}.
\end{proof}
We conclude this section with an estimate that compares independent resamplings. Recall that $\wt{W}, W$ are i.i.d. copies of the random variable $W(\mathbf{S}_{v_b})$. The following result bounds the fluctuation in $W(\mathbf{S}_{v_b})$ from independent resamplings.
\begin{lemma} \label{lemma:howtocomputeswitchperturbations}
Almost surely, we have
\begin{align}
\# \left( W \Delta \wt{W} \right) \ = \ O(1),
\end{align}
where the implied constant is independent of $N$. Moreover, we also have
\begin{align}
\mathbb{P}_{\mathscr{G}_\mu} \left[ W \Delta \wt{W} \neq \emptyset \right] \ \leq \ O \left( \frac{d_b}{N} \right).
\end{align}
\end{lemma}
The proof follows the argument for Lemma 6.3 in \cite{BKY} almost identically, so we omit it. We now present the final estimate on adjacency matrices, comparing switched matrices under i.i.d. switchings in the sense of matrix perturbations. This will allow us to perform and control resamplings of biregular graphs, in particular via the resolvent perturbation identity.
\begin{lemma} \label{lemma:switchperturbmatrix}
Under the setting of the resampling dynamics, we have
\begin{align}
\wt{A} - A \ = \ T_{b, \mu}(A) - \wt{T}_{b,\mu}(\wt{A}) \label{eq:highprobabilityperturbationestimate}
\end{align}
with probability at least $1 - O(d_b/N)$. Almost surely, we have
\begin{align}
\wt{A} - A \ = \ \sum_{x, y = 1}^{O(1)} \ \Delta_{xy}, \label{eq:almostsureperturbationestimate}
\end{align}
such that either, conditioned on $\mathscr{G}_\mu$, the random indices $x, y$ are approximately uniform in the corresponding vertex set $V_b$ or $V_w$, or, conditioned on $\mathscr{G}_{\mu}, p_{\mu}, \wt{p}_{\mu}$, at least one of the random indices $x, y$ is approximately uniform in the appropriate vertex set.
Lastly, the statement remains true upon switching instead at a white vertex $v_w$ along an edge label $\nu$.
\end{lemma}
\begin{proof}
The result follows from unfolding Lemma \ref{lemma:howtocomputeswitchperturbations} and the following deterministic identity:
\begin{align}
\wt{A} - A \ = \ &\mathbf{1}_{\mu \in \wt{W}} \left[ \wt{T}_{b, \mu}(A) - A \right] \ - \ \mathbf{1}_{\mu \in W} \left[ T_{b, \mu}(A) - A \right] \nonumber \\
&\quad \ + \ \sum_{\mu' \in \wt{W} \Delta W} \ \pm \left[ T_{b,\mu'}(A) - A \right],
\end{align}
where the sign corresponds to which of the random sets $W$ or $\wt{W}$ contains the indexing label $\mu'$.
\end{proof}
\section{Green's Function Analysis}
\subsection{Preliminary Resolvent Theory}
Here we record the following fundamental identities for studying the Green's functions of adjacency matrices in the ensemble $\mathscr{X}$. These identities are standard and follow from elementary linear algebra. First, because these identities hold for the Green's function of any real symmetric matrix, we fix the following notation.
\begin{notation}
Suppose $F$ is a function of the Green's function or matrix entries of matrices belonging to any one of the matrix ensembles $X$, $X_{\ast}$, or $X_{\ast,+}$. We then write $F_{\star}$ for the function obtained upon restricting to the matrix ensemble $X_{\star}$, where $\star$ is either blank, $\star = \ast$, or $\star = \ast, +$.
\end{notation}
\begin{lemma} \label{lemma:resolventidentity}
\emph{(Resolvent Identity)}
Suppose $A$ and $B$ are invertible matrices. Then we have
\begin{align}
A^{-1} - B^{-1} \ = \ A^{-1}(B - A)B^{-1}. \label{eq:generalresolventidentity}
\end{align}
In particular, if $H$ and $\wt{H}$ denote real symmetric or complex Hermitian matrices with Green's functions $G(z)$ and $\wt{G}(z)$, respectively, for $z \not\in \R$, then
\begin{align}
G(z) - \wt{G}(z) \ = \ \left[ G \left( \wt{H} - H \right) \wt{G} \right](z). \label{eq:GFresolventidentity}
\end{align}
\end{lemma}
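We note that \eqref{eq:generalresolventidentity} is verified by direct expansion:
\begin{align}
A^{-1}(B - A)B^{-1} \ = \ A^{-1} B B^{-1} - A^{-1} A B^{-1} \ = \ A^{-1} - B^{-1}, \nonumber
\end{align}
and \eqref{eq:GFresolventidentity} follows upon taking $A = H - z$ and $B = \wt{H} - z$.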
As an immediate consequence of applying \eqref{eq:GFresolventidentity} with $\wt{H} = H$ and $\wt{G}(z) = G(\bar{z}) = \overline{G(z)}$, we deduce the following off-diagonal averaging identity.
\begin{corollary} \label{corollary:ward}
\emph{(Ward Identity)}
Suppose $H$ is a real symmetric matrix of size $N$ with Green's function $G(z)$. Then for any fixed row index $i \in [[1, N]]$,
\begin{align}
\sum_{k = 1}^N \ |G_{ik}(E + i \eta)|^2 \ = \ \frac{\on{Im} G_{ii}(E + i \eta)}{\eta}. \label{eq:ward}
\end{align}
In particular, we obtain the following a priori estimate for any matrix index $(i,j)$:
\begin{align}
\left| G_{ij}(E + i \eta) \right| \ \leq \ \frac{1}{\eta}, \label{eq:apGF}
\end{align}
and thus for any matrix index $(i,j)$, the function $G_{ij}(z)$ is locally Lipschitz with constant $\eta^{-2}$.
\end{corollary}
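For the reader's convenience, we sketch the derivation of \eqref{eq:ward}. Applying \eqref{eq:generalresolventidentity} with $A = H - z$ and $B = H - \bar{z}$, and using $G(\bar{z}) = G(z)^{\ast}$, we obtain
\begin{align}
G(z) - G(z)^{\ast} \ = \ (z - \bar{z}) \, G(z) G(z)^{\ast} \ = \ 2 i \eta \, G(z) G(z)^{\ast}. \nonumber
\end{align}
Taking the $(i,i)$-entry of both sides yields $2 i \on{Im} G_{ii}(z) = 2 i \eta \sum_{k} |G_{ik}(z)|^2$, which is precisely \eqref{eq:ward}.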
The third preliminary result is the following representation of the Green's function $G(z;H)$ in terms of the spectral data of $H$, a result that is also important in compact operator and PDE theory. This spectral representation will be indispensable for exploiting the spectral correspondence between covariance matrices and their linearizations.
\begin{lemma} \label{lemma:GFspectral}
\emph{(Spectral Representation)}
Suppose $H$ is a real symmetric or complex Hermitian matrix with eigenvalue-eigenvector pairs $\left\{ (\lambda_{\alpha}, \mathbf{u}_{\alpha}) \right\}_{\alpha}$, and let $G(z)$ denote its Green's function. Then for any matrix index $(i,j)$, we have
\begin{align}
G_{ij}(z) \ = \ \sum_{\alpha = 1}^N \ \frac{\mathbf{u}_{\alpha}(i) \overline{\mathbf{u}_{\alpha}}(j)}{\lambda_{\alpha} - z}, \label{eq:GFspectral}
\end{align}
where the overline notation denotes the complex conjugate of the vector entry. In particular, the Green's function satisfies $G(z)^{\ast} = G(\bar{z})$.
\end{lemma}
We conclude this preliminary discussion of the Green's function $G(z;H)$ with the following local regularity result concerning a maximal Green's function. The proof of this result may be found as Lemma 2.1 in \cite{BKY}. To state it, we now define the maximal Green's functions of interest, which may be viewed as control parameters for the purposes of this paper:
\begin{align}
\Gamma(E + i \eta) \ &= \ \left[ \max_{i,j} \ |G_{ij}(z)| \right] \vee 1, \nonumber \\
\Gamma^{\ast}(E + i \eta) \ &= \ \sup_{\eta' \geq \eta} \Gamma(E + i \eta'). \nonumber
\end{align}
\begin{lemma} \label{lemma:GFLipschitzscaling}
For any $z = E + i \eta \in \C_+$, the function $\Gamma(z)$ is locally Lipschitz continuous in $\eta$ with the following bound on its almost-everywhere derivative:
\begin{align}
\left| \partial_{\eta} \Gamma(z) \right| \ \leq \ \frac{\Gamma(z)}{\eta}. \label{eq:GFLipschitz}
\end{align}
In particular, for any $\kappa > 1$ and $z = E + i \eta \in \C_+$, we have
\begin{align}
\Gamma \left( E + i \frac{\eta}{\kappa} \right) \ \leq \ \kappa \Gamma(E + i \eta). \label{eq:GFscaling}
\end{align}
\end{lemma}
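We note that the second estimate \eqref{eq:GFscaling} follows from the first by integration: since $\Gamma \geq 1$, the bound \eqref{eq:GFLipschitz} gives $|\partial_{\eta'} \log \Gamma(E + i \eta')| \leq 1/\eta'$ almost everywhere, and hence
\begin{align}
\log \frac{\Gamma(E + i \eta/\kappa)}{\Gamma(E + i \eta)} \ \leq \ \int_{\eta/\kappa}^{\eta} \ \frac{\mathrm{d} \eta'}{\eta'} \ = \ \log \kappa. \nonumber
\end{align}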
\subsection{Reductions of the Proof of Theorem \ref{theorem:locallaw}}
We now return to the setting of biregular bipartite graphs, i.e. the ensembles $\mathscr{X}$, $\mathscr{X}_{\ast}$, and $\mathscr{X}_{\ast,+}$. We begin with the following consequence of Lemma \ref{lemma:GFspectral}, which relates the Green's function entries of matrices from each of the three matrix ensembles of interest.
\begin{lemma} \label{lemma:GFrelations}
Suppose $X$ is a block matrix of the form \eqref{eq:blockmatrix}, and suppose $i,j \in [[1, M+N]]$ are indices chosen such that either $i,j \leq M$ or $i,j > M$. Then, for any $z = E + i \eta \in \C_+$, we have
\begin{align}
G_{ij}(z) \ = \ \begin{cases}
z \left[ G_{\ast,+}(z^2) \right]_{ij} & i,j \leq M \\
z \left[ G_{\ast}(z^2) \right]_{(i-M)(j-M)} & i, j > M
\end{cases}. \label{eq:GFrelations}
\end{align}
\end{lemma}
\begin{proof}
For simplicity, we suppose $X$ is real symmetric, as the proof for complex Hermitian matrices is similar. First suppose $i, j \leq M$. By the spectral representation in \eqref{eq:GFspectral} and Proposition \ref{prop:spectralcorrespondence}, we obtain
\begin{align}
G_{ij}(z) \ &= \ \sum_{\alpha} \ \frac{\mathbf{u}_{\alpha}(i) \mathbf{u}_{\alpha}(j)}{\lambda_{\alpha} - z} \ = \ \sum_{\lambda_{\alpha} \in \sigma(HH^{\ast})} \ \frac12 \left( \frac{\mathbf{u}_{\alpha}(i) \mathbf{u}_{\alpha}(j)}{\sqrt{\lambda_{\alpha}} - z} \ + \ \frac{\mathbf{u}_{\alpha}(i) \mathbf{u}_{\alpha}(j)}{-\sqrt{\lambda_{\alpha}} - z} \right),
\end{align}
where the last equality holds by abuse of notation for eigenvectors of the covariance matrix $HH^{\ast}$ versus the linearization $X$. This completes the derivation for the case $i, j \leq M$. The proof for the case $i, j > M$ follows by the exact same calculation, but instead taking a summation over $\sigma(H^{\ast} H)$ and noting the eigenvector terms $\mathbf{u}_{\alpha}(i) \mathbf{u}_{\alpha}(j)$ vanish for $\lambda_{\alpha} \in \zeta(X)$ by Statements (V) and (VI) in Proposition \ref{prop:spectralcorrespondence}.
\end{proof}
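The computation in the proof above is completed by the elementary identity
\begin{align}
\frac12 \left( \frac{1}{\sqrt{\lambda_{\alpha}} - z} + \frac{1}{-\sqrt{\lambda_{\alpha}} - z} \right) \ = \ \frac{z}{\lambda_{\alpha} - z^2}, \nonumber
\end{align}
which, upon summation over $\sigma(HH^{\ast})$, yields $G_{ij}(z) = z \sum_{\alpha} \mathbf{u}_{\alpha}(i) \mathbf{u}_{\alpha}(j) / (\lambda_{\alpha} - z^2)$, matching \eqref{eq:GFrelations}.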
Lemma \ref{lemma:GFrelations} now gives the first reduction of the proof of Theorem \ref{theorem:locallaw}.
\begin{lemma} \label{lemma:reduction1}
In the setting of Theorem \ref{theorem:locallaw}, the following two estimates are equivalent:
\begin{itemize}
\item \emph{(I).} For any fixed $\e > 0$, we have with probability at least $1 - e^{-\xi \log \xi}$, uniformly over $z = E + i \eta \in U_{\e}$ with $\eta \gg \xi^2/N$,
\begin{align}
\max_{i} \left| [G_{\ast}(z)]_{ii} - m_{\infty}(z) \right| \ = \ O \left( F_z(\xi \Phi) \right), \ \ \ \max_{i \neq j} \left| [G_{\ast}(z)]_{ij} \right| \ = \ O(\xi \Phi).
\end{align}
\item \emph{(II).} For any fixed $\e > 0$, we have with probability at least $1 - e^{-\xi \log \xi}$, uniformly over $z = E + i \eta \in U_{\e}$ with $\eta \gg \xi^2/N$,
\begin{align}
\max_{k > M} \left| [G(z)]_{kk} - zm_{\infty}(z^2) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right), \\
\max_{M < k < \ell} \ \left| [G(z)]_{k \ell} \right| \ &= \ O \left(z \xi \Phi(z^2) \right).
\end{align}
\end{itemize}
Similarly, the above equivalence holds replacing $G_{\ast}$ with $G_{\ast,+}$ and taking the maximums over $i \leq M$ and $i,j \leq M$.
\end{lemma}
From Lemma \ref{lemma:GFrelations}, we also deduce the next reduction.
\begin{lemma} \label{lemma:STlinearequivalence}
In the setting of Theorem \ref{theorem:locallaw}, the following estimates are equivalent for any $z \in \C_+$:
\begin{align}
\left| s_b(z) - zm_{\infty,+}(z^2) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right), \\
\left| s_w(z) - z m_{\infty}(z^2) \right| \ &= \ O \left( z F_{z^2}(\xi \Phi(z^2)) \right).
\end{align}
Similarly, the following estimates are equivalent for any $z \in \C_+$:
\begin{align}
\left| s_{\ast}(z) - m_{\infty}(z) \right| \ &= \ O \left( F_z(\xi \Phi) \right), \\
\left| s_{\ast,+}(z) - m_{\infty,+}(z) \right| \ &= \ O \left( F_z(\xi \Phi) \right).
\end{align}
\end{lemma}
We briefly note that Lemma \ref{lemma:STlinearequivalence} improves upon Lemma \ref{lemma:reduction1} in that it removes the restriction $|E| > \e$ on the energy.
We are now in a position to make our final reduction of the proof of the local laws in Theorem \ref{theorem:locallaw}, which exploits the first reduction in Lemma \ref{lemma:reduction1} and thus allows us to focus on the covariance matrices $X_{\ast}$ and $X_{\ast,+}$. The reduction will depend on the following result, for which we need to define the following spectral domain.
\begin{align}
\mathbf{D}_{N, \delta, \xi} \ = \ \{ z = E + i \eta: \ |E| \leq N^{\delta}, \ \xi^2/N \ \leq \ \eta \ \leq \ N \}.
\end{align}
\begin{prop} \label{prop:selfimprovingestimate}
Suppose $\xi, \zeta > 0$ and $D \gg \xi^2$. If, for a fixed $z \in \mathbf{D}_{N, \delta, \xi}$, we have
\begin{align}
\mathbb{P} \left( \Gamma^*_{\star}(z) = O(1) \right) \ \geq \ 1 - e^{- \zeta},
\end{align}
then, with probability at least $1 - e^{- (\xi \log \xi) \wedge \zeta + O(\log N)}$, we have
\begin{align}
\max_i \ |[G_{\star}(z)]_{ii} - m(z)| \ = \ O(F_z(\xi \Phi(z))), \ \ \ \max_{i \neq j} \ |[G_{\star}(z)]_{ij}| \ = \ O \left( \frac{\xi \Phi(z^2)}{z} \right). \label{eq:locallawconditioned}
\end{align}
Here, $\star$ can take the values $\star = \ast$ and $\star = \ast, +$.
\end{prop}
To deduce Theorem \ref{theorem:locallaw} from Proposition \ref{prop:selfimprovingestimate}, we follow exactly the argument used to deduce Theorem 1.1 from Proposition 2.2 in \cite{BKY}. The only estimates we need to check in order to apply the same argument are the following bounds:
\begin{align}
\left| m_{\infty}(z) \right|, \ \left| m_{\infty, +}(z) \right| \ \leq \ C,
\end{align}
for some constant $C = O(1)$. These estimates are proven in the appendix of this paper. We may also extend this argument to remove the energy repulsion assumption $|E| > \e$, which we state precisely in the following lemma.
\begin{lemma} \label{lemma:bootstrapenergy}
Suppose Proposition \ref{prop:selfimprovingestimate} holds. Then the estimates \emph{(4.10)} and \emph{(4.11)} hold without the assumption $|E| > \e$. Consequently, the estimates \emph{(4.14)} and \emph{(4.15)} hold without the assumption $|E| > \e$. Moreover, if $\alpha > 1$, then the estimate \emph{(4.7)} holds without the assumption $|E| > \e$.
\end{lemma}
\begin{proof}
We may again apply the iteration scheme used in proving Theorem 1.1 from Proposition 2.2 in \cite{BKY}. Here, we need to check the estimate $m(z) = O(1)$ and that in the regime $\alpha > 1$, we have the estimate $m_{\infty}(z) = O(1)$. These are similarly derived in the appendix of this paper.
\end{proof}
Thus, contingent on estimates derived in the appendix, to prove Theorem \ref{theorem:locallaw} it suffices to prove Proposition \ref{prop:selfimprovingestimate}. This will be the focus for the remainder of the paper. In particular, we may now work with an explicitly smaller domain $\mathbf{D}_{N, \delta, \xi}$ and an a priori estimate on the maximal Green's function.
\subsection{Switchings on Green's Functions}
The main result in this subsection consists of the following estimates comparing Green's function entries to index-wise averages. These estimates will be fundamental to controlling terms that show up naturally in the derivation of the self-consistent equation. Before we can state the result, we need to first introduce a notion of high probability used throughout the remainder of this paper.
\begin{definition}
Fix a parameter $t = t_N \gg \log N$, and a probability space $\Omega$. We say an event $\Xi \subset \Omega$ holds with $t$-\emph{high probability}, or $t$-HP for short, if
\begin{align}
\mathbb{P} \left( \Xi^{C} \right) \ \leq \ e^{-t + O(\log N)}.
\end{align}
\end{definition}
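We note that this notion is stable under polynomially many intersections: if the events $\Xi_1, \ldots, \Xi_m$ each hold with $t$-HP and $m \leq N^{O(1)}$, then by a union bound
\begin{align}
\mathbb{P} \left( \bigcup_{i=1}^m \Xi_i^{C} \right) \ \leq \ m \, e^{-t + O(\log N)} \ = \ e^{-t + O(\log N)}, \nonumber
\end{align}
so the intersection $\bigcap_{i} \Xi_i$ again holds with $t$-HP.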
As suggested by Theorem \ref{theorem:locallaw}, we will take the parameter $t = (\xi \log \xi) \wedge \zeta$. We now state the main estimates.
\begin{lemma} \label{lemma:conditionalexpectationscederivation}
Fix a vertex $i = v_b \in V_b$ and an edge label $\mu \in [[1, d_b]]$. Suppose $z = E + i \eta \in \C_+$ satisfies the following constraints for a fixed $\e > 0$:
\begin{align}
|E| \ > \ \e, \quad \eta \ \gtrsim \ \frac{1}{N}. \label{eq:Uelevel}
\end{align}
Suppose further that $\Gamma = O(1)$ holds with $t$-HP. Then for all fixed indices $j, \ell, r$ we have
\begin{align*}
\E_{\mathscr{F}_0} \left( G_{v_{b,\mu} j} - \frac{1}{N} \sum_{k = M+1}^{M+N} \ G_{k j} \right) \ &= \ - \E_{\mathscr{F}_0} \left( d_w^{-1/2} s_w G_{ij} \right) + O(d_w^{-1/2} \Phi), \\
\E_{\mathscr{F}_0} \left[ G_{\ell r}\left( G_{v_{b,\mu} j} - \frac{1}{N} \sum_{k = M+1}^{M+N} \ G_{k j} \right) \right] \ &= \ -\E_{\mathscr{F}_0} \left( G_{\ell r} d_w^{-1/2} s_w G_{ij} \right) + O(d_w^{-1/2} \Phi).
\end{align*}
Similarly, fix a vertex $k = v_w \in V_w$ and a label $\nu \in [[1, d_w]]$, and suppose $z \in \C_+$ satisfies the constraints \eqref{eq:Uelevel}. Further suppose $\Gamma = O(1)$ with $t$-HP. Then for all fixed indices $j, \ell, r$, we have
\begin{align*}
\E_{\mathscr{F}_0} \left( G_{v_{w,\nu} j} - \frac{1}{M} \sum_{i = 1}^{M} \ G_{i j} \right) \ &= \ - \E_{\mathscr{F}_0} \left( d_w^{-1/2} s_b G_{kj} \right) + O(d_w^{-1/2} \Phi), \\
\E_{\mathscr{F}_0} \left[ G_{\ell r}\left( G_{v_{w,\nu} j} - \frac{1}{M} \sum_{i = 1}^{M} \ G_{i j} \right) \right] \ &= \ -\E_{\mathscr{F}_0} \left( G_{\ell r} d_w^{-1/2} s_b G_{kj} \right) + O(d_w^{-1/2} \Phi).
\end{align*}
\end{lemma}
Before we provide a proof of this result, we will need an auxiliary estimate comparing Green's function entries from i.i.d. samples of graphs. To state this auxiliary estimate, we define first the conditional maximal Green's functions for any fixed edge label $\mu \in [[1, d_b]]$ or $\nu \in [[1, d_w]]$:
\begin{align}
\Gamma_{\mu} \ = \ \Gamma_{\mu}(z) \ := \ \| \Gamma(z) \|_{L^{\infty}(\mathscr{G}_{\mu})},
\end{align}
where the notation $L^{\infty}(\mathscr{G}_{\mu})$ in the norm denotes the $L^{\infty}$-norm conditioning on the $\sigma$-algebra $\mathscr{G}_{\mu}$.
\begin{lemma} \label{lemma:preliminaryestimategreensfunction}
For any fixed indices $i, j \in [[1, M+N]]$ and any label $\mu \in [[1, d_b]]$, we have
\begin{align}
G_{ij} - \wt{G}_{ij} \ = \ O(d_w^{-1/2} \Gamma_{\mu} \Gamma). \label{eq:perturbresample}
\end{align}
Moreover, suppose $x, y$ are random variables such that, conditioned on $\mathscr{G}_\mu$ and $x$, the random variable $y$ is approximately uniform. Then we have
\begin{align}
\E_{\mathscr{G}_{\mu}} \left| G_{xy} \right|^2 \ = \ O(\Gamma_{\mu}^4 \Phi^2). \label{eq:corollaryofperturbresample}
\end{align}
The estimate \eqref{eq:perturbresample} also holds for any fixed label $\nu \in [[1,d_w]]$ of a white vertex.
\end{lemma}
The proof of this result follows from the proof of Lemma 3.9 in \cite{BKY}, so we omit it. We now record the following consequence of Lemma \ref{lemma:preliminaryestimategreensfunction} and then proceed to the proof of Lemma \ref{lemma:conditionalexpectationscederivation}.
\begin{corollary} \label{corollary:takemaxperturbgreensfunction}
In the setting of Lemma \ref{lemma:preliminaryestimategreensfunction}, we have
\begin{align}
\Gamma \ = \ \Gamma_\mu \ + \ O \left( d_w^{-1/2} \Gamma_\mu \Gamma \right).
\end{align}
\end{corollary}
\begin{proof}
(of Lemma \ref{lemma:conditionalexpectationscederivation}).
We prove the first estimate only for $v_b \in V_b$; the proof of the second estimate and the estimates for $v_w \in V_w$ are analogous. In expectation, we first note
\begin{align}
\E_{\mathscr{F}_0} G_{v_{b,\mu} j} \ = \ \E_{\mathscr{F}_0} \wt{G}_{\wt{v}_{b,\mu} j}. \nonumber
\end{align}
Moreover, because $G(z)$ is independent of the random variable $\wt{v}_{b,\mu}$, and because, conditioned on $\mathscr{G}_\mu$, the random variable $\wt{v}_{b,\mu} \in V_w$ is approximately uniform, we also know
\begin{align}
\E_{\mathscr{F}_0} \left( \frac{1}{N} \sum_{k = M+1}^{M+N} \ G_{kj} \right) \ = \ \E_{\mathscr{F}_0} G_{\wt{v}_{b,\mu} j} + O(d_b^{-1/2} \Phi). \nonumber
\end{align}
Thus, it suffices to compute
\begin{align}
- \E_{\mathscr{F}_0} \left( G_{\wt{v}_{b,\mu} j} - \wt{G}_{\wt{v}_{b,\mu} j} \right). \nonumber
\end{align}
By the resolvent identity, we have the following equation holding in expectation:
\begin{align}
\E_{\mathscr{F}_0} \left( G_{\wt{v}_{b,\mu} j} - \wt{G}_{\wt{v}_{b,\mu} j} \right) \ &= \ \E_{\mathscr{F}_0} \left( \sum_{k, \ell} \ G_{\wt{v}_{b,\mu} k} (\wt{X} - X)_{k \ell} \wt{G}_{\ell j} \right).\label{eq:perturbgreensexpectation}
\end{align}
Unfolding the high-probability estimate of Lemma \ref{lemma:switchperturbmatrix}, we have, with probability at least $1 - O(d_w^{-1/2} \Phi)$ conditioned on $\mathscr{F}_0$,
\begin{align}
\wt{X} - X \ = \ d_w^{-1/2} \left( \Delta_{v_b \wt{v}_{b,\mu}} - \Delta_{v_b v_{b, \mu}} + \Sigma_b \right) \ + \ d_w^{-1/2} \left( \Delta_{\wt{v}_{b,\mu} v_b} - \Delta_{v_{b,\mu} v_b} + \Sigma_b^* \right).\label{eq:perturbresampleexpectation}
\end{align}
Here, we recall that $v_{b,\mu}$ (resp. $\wt{v}_{b,\mu}$) is the vertex adjacent to $v_b$ in $S_{b, \mu}$ (resp. $\wt{S}_{b, \mu}$) \emph{after} resampling. Also, $\Sigma_b$ is a matrix given by a sum of terms $\pm \Delta_{xy}$, where one of the following two conditions holds:
\begin{itemize}
\item Conditioned on $\mathscr{G}_\mu, \wt{p}_\mu$, the random variable $x$ is approximately uniform; or
\item Conditioned on $\mathscr{G}_{\mu}$, the random variable $y$ is approximately uniform.
\end{itemize}
Thus, upon unfolding the RHS of \eqref{eq:perturbgreensexpectation}, we see that one term is given, in expectation, by
\begin{align}
\E_{\mathscr{F}_0} \left[ d_w^{-1/2} G_{\wt{v}_{b,\mu} \wt{v}_{b,\mu}} \wt{G}_{ij} \right] \ &= \ \E_{\mathscr{F}_0} \ \left[ d_w^{-1/2} s_w \wt{G}_{ij} \right] + O(d_w^{-1/2} \Phi) \nonumber \\
&= \ \E_{\mathscr{F}_0} \ \left[ d_w^{-1/2} s_w G_{ij} \right] + O(d_w^{-1/2} \Phi), \nonumber
\end{align}
where the first equality holds because $\wt{v}_{b,\mu}$ is approximately uniform by Corollary \ref{corollary:switchingverticesapproxuniform}, and the second holds since, for any fixed indices $i, j$, we have $G_{ij} \sim \wt{G}_{ij}$ conditioned on $\mathscr{G}_\mu$. In particular, we have
\begin{align}
\E_{\mathscr{F}_0} \left( G_{\wt{v}_{b,\mu} \wt{v}_{b, \mu}} \wt{G}_{ij} \right) \ &= \ \E_{\mathscr{F}_0} \left( G_{\wt{v}_{b,\mu} \wt{v}_{b,\mu}} G_{ij} \right) \ + \ \E_{\mathscr{F}_0} \left[ G_{\wt{v}_{b,\mu} \wt{v}_{b,\mu}} \left( \wt{G}_{ij} - G_{ij} \right) \right] \nonumber \\
&= \ \E_{\mathscr{F}_0} \left( G_{\wt{v}_{b,\mu} \wt{v}_{b,\mu}} G_{ij} \right) + O \left( \frac{1}{\sqrt{D}} \right), \nonumber
\end{align}
where the second equality holds by Lemma \ref{lemma:preliminaryestimategreensfunction}. Thus, it remains to bound the remaining terms in \eqref{eq:perturbgreensexpectation}. By \eqref{eq:perturbresampleexpectation}, it suffices to estimate the expectation of terms of the form $G_{\wt{v}_{b,\mu} x} \wt{G}_{yj}$. By the second result in Lemma \ref{lemma:preliminaryestimategreensfunction} and the Schwarz inequality, in the case where $y$ is approximately uniform conditioned on $\mathscr{G}_{\mu}$, we have, with high probability,
\begin{align}
\E_{\mathscr{F}_0} \left| G_{\wt{v}_{b,\mu} x} \wt{G}_{yj} \right| \ \leq \ \E_{\mathscr{F}_0} |\wt{G}_{yj}|^2 + O(D^{-1/2}) \ \leq \ O(\Phi). \nonumber
\end{align}
Thus, by the assumption $\Gamma = O(1)$, Lemma \ref{lemma:conditionalexpectationscederivation} follows after accumulating the finitely many events, each holding with probability at least $1 - O(d_w^{-1/2} \Phi)$.
\end{proof}
\section{The Self-Consistent Equation and Proof of Theorem \ref{theorem:locallaw}}
\subsection{Derivation of the Self-Consistent Equation}
We begin by introducing the following two pieces of notation for random vectors $Z = (Z_i)_{i \in [[1, M]]}$ and $\wt{Z} = (\wt{Z}_k)_{k \in [[M+1, M+N]]}$:
\begin{align}
\E_{(i)} Z \ = \ \frac{1}{M} \sum_{i = 1}^M \ Z_i, \quad \E_{(k)} \wt{Z} \ = \ \frac{1}{N} \sum_{k = M+1}^{M+N} \ \wt{Z}_k.
\end{align}
We now record the following concentration estimate, which will allow us to take expectations and exploit the estimates in Lemma \ref{lemma:conditionalexpectationscederivation}. To do so, we introduce the following notation.
\begin{definition}
Suppose $X$ is an $L^1$-random variable, and suppose $\sigma(\cdot)$ is a $\sigma$-algebra with respect to which $X$ is measurable. We define the $\sigma(\cdot)$-\emph{fluctuation} of $X$ to be the following centered random variable:
\begin{align}
X_{\sigma(\cdot)} \ := \ X - \E_{\sigma(\cdot)} X.
\end{align}
\end{definition}
\begin{prop} \label{prop:concentrationestimate}
Suppose that $z = E + i \eta \in \mathbf{D}_{N, \delta, \xi} \cap U_{\e}$, that $\zeta > 0$, and that $\Gamma = O(1)$ with probability at least $1 - e^{-\zeta}$. Fix $k = O(1)$ and pairs of indices $I_k = \{ (i_1, j_1), \ldots, (i_k, j_k)\} $. Define the random variable
\begin{align}
X_{I_k}(z) \ = \ G_{i_1 j_1} \ldots G_{i_k j_k}. \nonumber
\end{align}
Then, for any $\xi = \xi(N)$ satisfying $\xi \to \infty$ as $N \to \infty$, we have the following pointwise concentration estimate:
\begin{align}
\mathbb{P} \left[ \left(X_{I_k}(z) \right)_{\mathscr{F}_0} \ = \ O( \xi \Phi) \right] \ \geq \ 1 - e^{- \left[ (\xi \log \xi) \wedge \zeta \right] + O(\log N)}.
\end{align}
\end{prop}
The proof of Proposition \ref{prop:concentrationestimate} follows from the proof of Proposition 4.1 in \cite{BKY}, so we omit it.
We now consider the matrix equation $HG = zG + \on{Id}$ and compute the diagonal entries of both sides. The $(i,i)$-entry of the RHS is clearly given by $zG_{ii} + 1$. We now study the LHS, considering the $(k,k)$-entry for $k \in [[M+1, M+N]]$. By matrix multiplication we have
\begin{align}
(HG)_{kk} \ = \ \sum_{i = 1}^M \ H_{ki} G_{ik} \ &= \ d_w^{-1/2} \sum_{i = 1}^M \ \sum_{\nu = 1}^{d_w} \left( \delta_{i, v_{w, \nu}} - \frac{1}{M} \right) G_{ik} \\
&= \ d_w^{-1/2} \sum_{\nu = 1}^{d_w} \ \left( G_{v_{w, \nu} k} - \E_{(i)} G_{ik} \right), \label{eq:laststepsce}
\end{align}
where we used the relation $Md_b = Nd_w$. Appealing to Lemma \ref{lemma:conditionalexpectationscederivation} we deduce the following identity:
\begin{align}
\E_{\mathscr{F}_0} (HG)_{kk} \ = \ - \E_{\mathscr{F}_0} \left( \sum_{\nu = 1}^{d_w} \ \left[ \frac{1}{d_w} \ s_b G_{kk} + O(d_w^{-1} \Phi) \right] \right) \ = \ - \E_{\mathscr{F}_0} s_b G_{kk} + O(\Phi).
\end{align}
Taking an expectation conditioning on $\mathscr{F}_0$ in the matrix equation $HG = zG + \on{Id}$, we see
\begin{align}
1 + z \E_{\mathscr{F}_0} G_{kk} \ = \ - \E_{\mathscr{F}_0} s_b G_{kk} + O \left( \Phi \right).
\end{align}
Using Proposition \ref{prop:concentrationestimate} to account for the $\mathscr{F}_0$-fluctuation of the Green's function terms, we ultimately deduce a stability equation for the diagonal $(k,k)$-entries of $G$, with $k > M$. We may run a similar calculation for indices $i \in [[1, M]]$ and derive the following system of equations:
\begin{align}
1 + (z + \gamma s_w) G_{ii} \ &= \ O \left( (1 + |z|) \xi \Phi \right), \\
1 + (z + s_b) G_{kk} \ &= \ O \left((1 + |z|) \xi \Phi \right).
\end{align}
Although this system is a priori coupled, we now appeal to Lemma \ref{lemma:GFrelations} to decouple the equations. More precisely, we deduce the following system of \emph{decoupled} equations:
\begin{align}
1 + \left( z + s_b + \frac{1 - \gamma}{z} \right) G_{ii} \ &= \ O \left( (1 + |z|) \xi \Phi \right), \\
1 + \left( z + \gamma s_w + \frac{\gamma - 1}{z} \right) G_{kk} \ &= \ O \left( (1 + |z|) \xi \Phi \right).
\end{align}
From here, we may proceed in two fashions. First, we may use Lemma 7.3 to deduce stability equations for the Green's functions $G_{\ast}$ and $G_{\ast,+}$, relating the diagonal entries of these Green's functions to the Stieltjes transforms $s_{\ast}$ and $s_{\ast,+}$. On the other hand, we may also average over the diagonal entries and deduce self-consistent equations for the Stieltjes transforms $s_b, s_w$ and $s_{\ast}, s_{\ast,+}$. We summarize these estimates in the following proposition.
\begin{prop} \label{prop:sces}
Suppose $\Gamma = O(1)$ with $t$-HP, and let $z = E + i \eta \in U_{\e}$ satisfy $\eta \gg N^{-1}$. Then for any $i \in [[1, M]]$ and $k \in [[M+1, M+N]]$, we have the following equations uniformly over such $z$ with $t$-HP:
\begin{align}
1 + \left( z + s_b + \frac{1 - \gamma}{z} \right) G_{ii} \ &= \ O((1 + |z|) \xi \Phi), \label{eq:individualsceupper} \\
1 + \left( z + \gamma s_w + \frac{\gamma - 1}{z} \right) G_{kk} \ &= \ O((1 + |z|) \xi \Phi), \label{eq:individualscelower} \\
1 + \left( z + 1 - \gamma + z s_{\ast,+} \right) [G_{\ast,+}]_{ii} \ &= \ O((1 + |z|^{1/2}) \xi \Phi) \ = \ O((1 + |z|) \xi \Phi), \label{eq:individualscelargecov} \\
1 + \left( z + \gamma - 1 + \gamma z s_{\ast} \right) [G_{\ast}]_{kk} \ &= \ O((1 + |z|^{1/2}) \xi \Phi) \ = \ O((1 + |z|) \xi \Phi). \label{eq:individualscesmallcov}
\end{align}
Moreover, we have the following averaged equations uniformly over such $z$ with $t$-HP:
\begin{align}
1 + \left( z + s_b + \frac{1 - \gamma}{z} \right) s_b \ &= \ O((1 + |z|) \xi \Phi), \label{eq:averagedsceupper} \\
1 + \left( z + \gamma s_w + \frac{\gamma - 1}{z} \right) s_w \ &= \ O((1 + |z|) \xi \Phi), \label{eq:averagedscelower} \\
1 + \left( z + 1 - \gamma + z s_{\ast,+} \right) s_{\ast,+} \ &= \ O((1 + |z|^{1/2}) \xi \Phi) \ = \ O((1 + |z|)\xi \Phi), \label{eq:averagedscelargecov} \\
1 + \left( z + \gamma - 1 + \gamma z s_{\ast} \right) s_{\ast} \ &= \ O((1 + |z|^{1/2}) \xi \Phi) \ = \ O((1 + |z|) \xi \Phi). \label{eq:averagedscesmallcov}
\end{align}
\end{prop}
\begin{proof}
The equations \eqref{eq:individualsceupper} -- \eqref{eq:averagedscesmallcov} were derived above pointwise in $z$; it remains to upgrade them to hold simultaneously over all such $z = E + i \eta$ with $t$-HP. To this end, we appeal to the Lipschitz continuity of the Green's function entries on a sufficiently dense lattice, as in the proof of Lemma 8.5.
\end{proof}
\subsection{Analysis of the self-consistent equation}
From a direct calculation, we note the Stieltjes transform $m_{\infty}$ of the Marchenko-Pastur law with parameter $\gamma \leq 1$ is given by the following self-consistent equation:
\begin{align}
\gamma z m_{\infty}^2(z) + \left( \gamma + z - 1 \right) m_{\infty}(z) + 1 \ = \ 0. \label{eq:sceMP}
\end{align}
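For later reference, solving the quadratic equation \eqref{eq:sceMP} explicitly gives the familiar closed form
\begin{align}
m_{\infty}(z) \ = \ \frac{1 - \gamma - z + \sqrt{(\gamma + z - 1)^2 - 4 \gamma z}}{2 \gamma z},
\end{align}
where the branch of the square root is fixed by requiring $m_{\infty}(z) \sim -1/z$ as $|z| \to \infty$, so that $m_{\infty}$ maps $\C_+$ to $\C_+$.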
For the augmented Stieltjes transform $m_{\infty,+}$, we may similarly deduce a self-consistent equation. In our analysis, we will be concerned with providing full details for the Stieltjes transform $m_{\infty}$ \emph{only}, as the estimate for the augmented transform $m_{\infty,+}$ will follow from the estimate on $m_{\infty}$. We now note Proposition \ref{prop:sces} implies the Stieltjes transform $s_{\ast}$ solves the same self-consistent equation with an error of $o(1)$ throughout the domain $\mathbf{D}_{N, \delta, \xi} \cap U_{\e}$, with $t$-HP. Our goal will be to use the stability of the self-consistent equation \eqref{eq:sceMP} under $o(1)$ perturbations to compare $s_{\ast}$ and $m_{\infty}$. This is the content of the following result.
\begin{prop} \label{prop:stabilityestimateaverage}
Let $m: \C_+ \to \C_+$ be the unique solution to the following equation:
\begin{align}
\gamma zm^2 + (\gamma + z - 1)m + 1 \ = \ 0.
\end{align}
Suppose $s: \C_+ \to \C_+$ is continuous and let
\begin{align}
R := \gamma zs^2 + (\gamma + z - 1)s + 1.
\end{align}
Fix an energy $E \in \R \setminus [-\varepsilon, \varepsilon]$ for $\varepsilon > 0$ small and scales $\eta_0 < C(E)$ and $\eta_{\infty} \leqslant N$, where $C(E) = O_E(1)$ is a constant to be determined. Suppose we have
\begin{align}
|R(E + i \eta)| \ \leqslant \ (1 + |z|) r(E + i \eta)
\end{align}
for a nonincreasing function $r: [\eta_0, \eta_{\infty}] \to [0,1]$. Then for all $z = E + i \eta$ for $\eta \in [\eta_0, \eta_{\infty}]$, we have the following estimate for sufficiently large $N$:
\begin{align}
| m - s | \ = \ O(F(r)). \label{eq:stabilityestimateaverage}
\end{align}
Here, the constant $C(E)$ is determined by
\begin{align}
\on{Im} \left( \frac{1 - \gamma}{E + i \eta} \right) \ > \ 3 \alpha^{1/2} \varepsilon^{-1/2}
\end{align}
for all $\eta \leqslant C(E)$.
\end{prop}
Before we proceed with the proof of Proposition \ref{prop:stabilityestimateaverage}, we introduce the following notation.
\begin{notation}
We denote the solutions to the equation \eqref{eq:sceMP} by $m_{\pm}$, where $m_+$ maps the upper-half plane to itself, and $m_-$ maps the upper-half plane to the lower-half plane.
Moreover, we define the following error functions:
\begin{align}
v_{\pm} \ = \ \left| m_{\pm} - s \right|.
\end{align}
\end{notation}
Having established this notation, because $m_{+}$ takes values in the upper-half plane, we deduce the following upper bound on the imaginary part of $m_{-}$:
\begin{align}
\on{Im}(m_-(z)) \ \leqslant \ - \on{Im} \left( \frac{1 - \gamma}{E + i \eta} \right) \ < \ - 3 \alpha^{1/2} \varepsilon^{-1/2}
\end{align}
for scales $\eta < C(E)$. We now proceed to derive an a priori estimate on the error functions $v_{\pm}$.
\begin{lemma} \label{lemma:easyregimemin}
Under the assumptions and setting of Proposition \ref{prop:stabilityestimateaverage}, we have
\begin{align}
|v_+| \wedge |v_-| \ \leqslant \ 3 \alpha^{1/2} \varepsilon^{-1/2} F(r).
\end{align}
\end{lemma}
\begin{proof}
We appeal to the following inequality which holds for any branch of the complex square root $\sqrt{\cdot}$ and any complex parameters $w, \zeta$ for which the square root is defined:
\begin{align}
| \sqrt{w + \zeta} - \sqrt{w} | \wedge | \sqrt{w + \zeta} + \sqrt{w}| \ &\leqslant \ \frac{|\zeta|}{\sqrt{|w|}} \wedge \sqrt{|\zeta|}.
\end{align}
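This bound follows from the elementary factorization
\begin{align}
\left( \sqrt{w + \zeta} - \sqrt{w} \right) \left( \sqrt{w + \zeta} + \sqrt{w} \right) \ = \ \zeta. \nonumber
\end{align}
Indeed, the moduli of the two factors multiply to $|\zeta|$, so the smaller of the two is at most $\sqrt{|\zeta|}$; moreover, by the parallelogram law the larger factor is at least $\sqrt{|w|}$, so the smaller is at most $|\zeta|/\sqrt{|w|}$.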
In particular, this implies the following string of inequalities:
\begin{align}
|v_+| \wedge |v_-| \ &\leqslant \ \frac{1}{2\gamma |z|} \left( \frac{|4\gamma zR|}{\sqrt{|(\gamma + z - 1)^2 - 4\gamma z|}} \wedge \sqrt{|4 \gamma z R|}\right) \\
&\leqslant \ \frac{2|R|}{\sqrt{|(\gamma + z - 1)^2 - 4 \gamma z|}} \wedge \sqrt{\varepsilon \alpha R} \\
&\leqslant \ \frac{2 \varepsilon^{1/2}(1 + |z|)r(E + i \eta)}{\sqrt{|(\gamma + z - 1)^2 - 4\gamma z|}} \wedge \sqrt{\varepsilon \alpha (1 + |z|) r(E + i \eta)}
\end{align}
where the second inequality follows from the assumption $|z| \geqslant |E| \geqslant \varepsilon$ and the last bound follows if we choose $\varepsilon \leqslant 1$. But this is bounded by $3 \alpha^{1/2} \varepsilon^{-1/2} F(r)$ for any $r \in [0,1]$.
\end{proof}
\begin{proof}
(of Proposition \ref{prop:stabilityestimateaverage}).
We consider two different regimes. First consider the regime where $|m_+ - m_-| > (1 + |z|) r(\eta)$. Precisely, this is the regime defined by $\eta > C(E)$ and the energy-dependent constant $D(E)$ such that
\begin{align}
(1 + |z|) r(\eta) \ < \ \frac{|(\gamma + z - 1)^2 - 4 \gamma z|}{D(E)};
\end{align}
the constant $D(E)$ will be determined later. We note by Lemma \ref{lemma:easyregimemin}, in this regime it suffices to prove the following bound:
\begin{align}
|v_-| \ > \ |v_-| \wedge |v_+|.
\end{align}
We now choose an energy-dependent constant $\kappa(E)$ such that for all $\eta \in [C(E), \eta_{\infty}]$, we have the bound
\begin{align}
\alpha^{1/2} (1+|z|)r(\eta) \ \leqslant \ \kappa(E) (1 + |E + i C(E)|).
\end{align}
Moreover, note $|(\gamma + z - 1)^2 - 4\gamma z|$ is increasing in $\eta$, as seen by translating $z = w + 1 - \gamma$ and computing
\begin{align}
|(\gamma + z - 1)^2 - 4\gamma z| \ = \ |w^2 - 4\gamma w + X| \ = \ |w(w-4 \gamma) + X|, \nonumber
\end{align}
where $X \in \R$. Because $r(\eta)$ is non-increasing in $\eta$, for all $z = E + i \eta$ with $\eta \in [C(E), \eta_{\infty}]$, we have
\begin{align}
(1+|z|)r(\eta) \ < \ \frac{\kappa(E)}{D(E)} |(\gamma + z - 1)^2 - 4\gamma z|.
\end{align}
We first compute a uniform lower bound on the difference term as follows:
\begin{align}
|m_+ - m_-| \ &= \ \frac{\sqrt{|(\gamma + z - 1)^2 - 4 \gamma z|}}{2 \gamma |z|} \nonumber \\
&\geqslant \ \frac{1}{2 \gamma |E + i \eta_{\infty}|} \left( \frac{D(E)}{\kappa(E)} \frac{(1 + |z|) r(\eta)}{\sqrt{|(\gamma + z - 1)^2 - 4\gamma z|}} \ \wedge \ \sqrt{\frac{D(E)}{\kappa(E)}} \sqrt{\alpha (1 + |z|) r(\eta) }\right) \nonumber \\
&> \ 0 \nonumber.
\end{align}
By continuity of $s$ and the estimate Lemma \ref{lemma:easyregimemin}, choosing $D(E)$ large enough as a function of $\kappa(E), E, \eta_{\infty}, \varepsilon$, it suffices to prove the estimate \eqref{eq:stabilityestimateaverage} for some $\eta \in [C(E), \eta_{\infty}]$. But this follows from Lemma \ref{lemma:easyregimemin}; in particular, we have at $\eta = C(E)$ and $N$ sufficiently large,
\begin{align}
|v_-| \ \geqslant \ | \on{Im}(s) - \on{Im}(m_-)| \ \geqslant \ |\on{Im}(m_-)| \ > \ 3 \alpha^{1/2} \varepsilon^{-1/2} \ \geqslant \ 3 \alpha^{1/2} \varepsilon^{-1/2} F(r) \ \geqslant \ |v_+| \wedge |v_-|. \nonumber
\end{align}
Thus, we have $|v_+| = |v_+| \wedge |v_-|$ in this first regime, implying the stability estimate \eqref{eq:stabilityestimateaverage}.
Now, we take the regime where the a priori estimate
\begin{align}
|(\gamma + z - 1)^2 - 4\gamma z| \ = \ O((1 + |z|)r(\eta))
\end{align}
holds. Thus, we know
\begin{align}
|v_-| \ &\leqslant \ |v_+| + \frac{\sqrt{|(\gamma + z - 1)^2 - 4 \gamma z|}}{2 \varepsilon} \ = \ |v_+| + O \left( \frac{(1+|z|)r(\eta)}{\sqrt{|(\gamma + z - 1)^2 - 4 \gamma z|}} \wedge \sqrt{(1 + |z|) r} \right) \nonumber \\
&= \ |v_+| + O(F(r)), \nonumber
\end{align}
implying the estimate in the second regime as well.
\end{proof}
We conclude the discussion of the self-consistent equation by noting that Lemma \ref{lemma:GFrelations} allows us to deduce the following local law for the Stieltjes transform $s_{\ast,+}$:
\begin{align}
\left| s_{\ast,+} - m_{\infty,+} \right| \ = \ O(F(r)).
\end{align}
This estimate holds in the regime $z = E + i \eta$ for $\eta \in [\eta_0, \eta_{\infty}]$.
\subsection{Final estimates}
Fix an index $k \in [[1, N]]$, and for notational convenience, \emph{for this calculation only}, we let $G$ denote the Green's function $G_{\ast}$. Consider the following approximate stability equation for the diagonal entry $G_{kk}$:
\begin{align}
1 + (z + \gamma - 1 + \gamma z s_{\ast}) G_{kk} \ = \ O \left( (1 + |z|) \xi \Phi \right).
\end{align}
We now appeal to the following estimate which will allow us to study the stability of this equation upon the replacement $s_{\ast} \to m_{\infty}$ that holds with $t$-HP:
\begin{align}
|z G_{kk}| \ = \ O(1).
\end{align}
Indeed, if $\eta \gg 1$, we have
\begin{align}
\left| z G_{kk} \right| \ \leq \ \left| \frac{E + i \eta}{\eta} \right| \ = \ O(1).
\end{align}
If $E \gg 1$, we appeal to the spectral representation of $G_{kk}$ and deduce
\begin{align}
\left| zG_{kk} \right| \ &\leq \ C \left| \frac{1}{N} \sum_{\lambda} \ \frac{|\mathbf{u}_{\lambda}(k)|^2 \times E}{E - \lambda + i \eta} \right| \\
&\leq \ \frac{C}{N} \sum_{\lambda} \ \left| \frac{E}{E - \lambda + i \eta} \right| \\
&= \ O(1),
\end{align}
where we used the uniform a priori bound $\lambda = O(1)$. If $\eta, E \lesssim 1$, then we appeal to the a priori bound $\Gamma = O(1)$ with $t$-HP and the trivial bound $z = O(1)$. Thus, by Proposition \ref{prop:stabilityestimateaverage}, we have
\begin{align}
1 + (z + \gamma - 1 + \gamma z m_{\infty}) G_{kk} \ = \ O \left((1 + |z|) \xi \Phi \right) + O(F(\xi \Phi)).
\end{align}
On the other hand, the self-consistent equation \eqref{eq:sceMP} implies the Green's function term on the LHS may be written as $-G_{kk}/m_{\infty}$. Moreover, because $m_{\infty} = O(1)$ uniformly on the domain $U_{\e}$, we establish the following estimate that holds over all $z \in \mathbf{D}_{N, \delta, \xi} \cap U_{\e}$ with $t$-HP:
\begin{align}
m_{\infty} - G_{kk} \ = \ O \left( F(\xi \Phi) \right),
\end{align}
where we use the estimate $(1 + |z|) m_{\infty} = O(1)$. This completes the proof of the local law along the diagonal of $G = G_{\ast}$.
To derive the estimate for the off-diagonal entries, we appeal to the Green's function $G(z) = (X - z)^{-1}$ of the linearization $X$. Note this is \emph{no longer} the Green's function $G_{\ast}$ of the covariance matrix $X_{\ast}$. In particular, we appeal to the following entry-wise representation of a matrix equation (for indices $i, j > M$):
\begin{align}
G_{ij}(HG)_{ii} - G_{ii} (HG)_{ij} \ = \ G_{ij}.
\end{align}
As in the derivation of the stability equations in Proposition \ref{prop:sces}, by Lemma \ref{lemma:conditionalexpectationscederivation} the expectation of the LHS is given by
\begin{align}
\E_{\mathscr{F}_0} \left[ G_{ij}d_w^{-1/2} \underline{s} G_{ii} \right] \ - \ \E_{\mathscr{F}_0} \left[ G_{ii} d_w^{-1/2} \underline{s} G_{ij} \right] \ + \ O(\Phi) \ = \ O(\Phi). \nonumber
\end{align}
Thus, at the cost of a concentration estimate in Proposition \ref{prop:concentrationestimate}, we deduce
\begin{align}
|G_{ij}| \ = \ O(\xi \Phi),
\end{align}
which yields the estimate for the off-diagonal entries. By Lemma 7.3, this gives the desired estimate for the off-diagonal entries of the Green's function $G_{\ast}$ with $t$-HP. An analogous calculation proves the desired entry-wise estimates for the Green's function $G_{\ast,+}$, which completes the proof of Theorem 4.1.
\section{Appendix}
\subsection{Estimates on the Stieltjes transforms on the upper-half plane}
Here, we want to control the growth of the Stieltjes transforms on the upper-half plane. This is summarized in the following lemma.
\begin{lemma} \label{lemma:STMPlinearbound}
Uniformly over $z \in \C_{+}$, we have
\begin{align}
m(z) \ = \ O(1). \label{eq:STMPlinearbound}
\end{align}
\end{lemma}
We proceed by considering the two regimes $\gamma = 1$ and $\gamma < 1$, which we refer to as the \emph{square regime} and \emph{rectangular regime}, respectively. \newline
\noindent {\bf{Square Regime.}}
If $\gamma = 1$, we rewrite the density $\varrho(E)$ as
\begin{align}
\varrho_{\gamma = 1}(E) \ = \ \frac{\sqrt{4 - E^2}}{2 \pi} \mathbf{1}_{E \in [-2, 2]},
\end{align}
which is the well-studied semicircle density, whose Stieltjes transform is given by
\begin{align}
m(z) \ = \ \frac{-z + \sqrt{z^2 - 4}}{2}.
\end{align}
For a reference on the semicircle law and its Stieltjes transform, we cite \cite{AGZ}, \cite{BHKY}, \cite{BKY}, \cite{ESY}, \cite{EYY}, and \cite{LY}. We note the branch of the square root is taken so that $\sqrt{z^2 - 4} \sim z$ for large $z$, in which case the bound \eqref{eq:STMPlinearbound} follows immediately. \newline
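Alternatively, the bound in the square regime admits a short direct proof which avoids the explicit formula. The two roots of $m^2 + zm + 1 = 0$ multiply to $1$, so the companion root of $m$ is $1/m$, and summing the two roots gives
\begin{align}
m + \frac{1}{m} \ = \ -z, \qquad \hbox{so} \qquad \on{Im}(m) \left( 1 - \frac{1}{|m|^2} \right) \ = \ - \eta \ < \ 0. \nonumber
\end{align}
Since $\on{Im}(m) > 0$ on $\C_{+}$, this forces $|m(z)| < 1$ there. \newline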
\noindent {\bf{Rectangular Regime.}}
Fix constants $\Lambda > 0$ and $\e > 0$ to be determined. Suppose $|E| \in [\e, \Lambda]$. By the representation $m(z) = zm_{\infty}(z^2)$ and the explicit formula for $m_{\infty}(z)$ as given in \eqref{eq:STMP}, the bound \eqref{eq:STMPlinearbound} follows immediately in this energy regime, where the implied constant in \eqref{eq:STMPlinearbound} may be taken independent of $\eta$.
Suppose now that $|E| > \Lambda$. Again by the representation $m(z) = zm_{\infty}(z^2)$, we have
\begin{align}
|m(z)| \ &= \ O \left( -z^2 + i \sqrt{(\lambda_{+} - z^2)(z^2 - \lambda_{-})}\right) \\
&= \ O \left(-z^2 + \sqrt{(z^2 - \lambda_{+})(z^2 - \lambda_{-})} \right) \\
&= \ O(1),
\end{align}
since the square root is, again, chosen so that $\sqrt{z^4 + O(z^2)} \sim z^2$ for large $z$.
Lastly, suppose $|E| < \e$. By definition of $m(z)$ as the Stieltjes transform of $\varrho$, we obtain
\begin{align}
|m(z)| \ = \ \left| \int_{x^2 \in [\lambda_-, \lambda_+]} \ \frac{\gamma}{(1 + \gamma) \pi |x| (x - z)} \sqrt{(\lambda_+ - x^2)(x^2 - \lambda_-)} \ dx \right| \ = \ O \left(\frac{1}{\sqrt{\lambda_{-}} - \e} \right),
\end{align}
where the implied constant depends only on fixed data $\gamma, \lambda_{\pm}$. Choosing $\e = \sqrt{\lambda_{-}}/100 > 0$, we obtain the desired bound. We note this choice of $\e$ is positive if and only if $\alpha > 1$. This completes the proof of Lemma \ref{lemma:STMPlinearbound}. \hfill\ensuremath{\square}
\section{\bf Introduction}
A quantum equivalence between the six-dimensional $N=(2,0)$ theory of multiple
fivebranes compactified on a circle $S^1$
and five-dimensional maximally supersymmetric Yang Mills has been conjectured
by Douglas and Lambert {\it et al.}
in \cite{Douglas, Lambert}. In this paper we will study an abelian
version of the conjecture where the common five-manifold is a five-torus
$T^5$ with a general flat metric, and find an equivalence only in the weak
coupling limit.
The physical degrees of freedom
of a single fivebrane are described by an $N=(2,0)$ tensor supermultiplet
which includes a chiral two-form field potential, so even a single
fivebrane has no fully covariant action. In order to investigate its
quantum theory we were thus led in \cite{DN} to compute the partition
function instead, which we carried out on the six-torus $T^6$.
We will use this calculation to investigate the partition function
of the self-dual three-form field strength restricted to $S^1\times T^5$
and compare it with the partition function of the five-dimensional
Maxwell theory on a twisted five-torus quantized via Dirac constraints
in radiation gauge.
Because both the theory and the manifold are so simple, we do not
use the localization techniques that are fruitful for non-abelian theories
and their partition functions on spheres \cite{WittenBeasley}-\cite{Tachikawa}.
The five-dimensional Maxwell partition function on $T^5$
is defined\footnote{
Related work is
\cite{BG} which appeared after an earlier version of this paper. See
also \cite{Gustavsson},\cite{Dijkgraaf}.}
as in string theory \cite{GSW},
\begin{align} Z^{5d, Maxwell}&\equiv
tr e^{-2\pi H^{5d} + i 2\pi \gamma^i P^{5d}_i} =
Z^{5d}_{\rm zero\;modes}\;\cdot Z^{5d}_{\rm osc},\cr
H^{5d}&= {R_6\over g^2_{5YM}}\int_0^{2\pi} d\theta^2d\theta^3d\theta^4
d\theta^5 \sqrt{g}\;
\big( {1\over 2 R^2_6} g^{ii'}\,F_{6i}F_{6i'}
+ {1\over 4} g^{ii'}g^{jj'} F_{ij}F_{i'j'} \big),\cr
&P^{5d}_i = {1\over g^2_{5YM} R_6}
\int_0^{2\pi} d\theta^2 d\theta^3 d\theta^4 d\theta^5
\sqrt{g}\;
g^{jj'}\,F_{6j'}F_{ij},
\label{wholepf}\end{align}
in terms of the gauge field strength $F_{\tilde m\tilde n}
(\theta^2,\theta^3,\theta^4,\theta^5,\theta^6),$
and constant metric $g^{ij}, R_6, \gamma^i$.
The partition function of the abelian chiral two-form on a space circle times
the five-torus is
\begin{align}
Z^{6d, chiral} &=tr \,e^{-2\pi R_6{\cal H} + i2\pi\gamma^i{\cal P}_i}
= Z^{6d}_{\rm zero \; modes} \cdot Z^{6d}_{\rm osc},\cr
{\cal H} &= {1\over 12}\int_0^{2\pi} d\theta^1 \ldots d\theta^5
{\sqrt G_5}{G_5}^{ll'}
{G_5}^{mm'}{G_5}^{nn'} H_{lmn}(\vec\theta,\theta^6)\,
H_{l'm'n'}(\vec\theta,\theta^6),\cr
{\cal P}_i&= -{1\over 24}{\int_0}^{2\pi}
d\theta^1...d\theta^5 \epsilon^{rsumn} H_{umn}(\vec\theta,\theta^6)\,
H_{irs}(\vec\theta,\theta^6)
\label{6dpf}\end{align}
where $\theta^1$ is the direction of the circle $S^1$.
The time direction $\theta^6$ we will use for quantization
is common to both theories, and the
angles between the circle and the five-torus denoted by $\alpha,\beta^i$ in
\cite{DN} have been set to zero. The final
results are given in (\ref{wpfyetagain}), (\ref{wpfmore}).
We use (\ref{wholepf},\ref{6dpf})
to compute both the zero mode and
oscillator contributions, and find an exact equivalence
between the zero mode contributions,
\begin{align}
Z^{6d}_{\rm zero \; modes}
= Z^{5d}_{\rm zero \; modes}.
\label{zerocomp}\end{align}
Not surprisingly, we find the oscillator traces differ by the absence in
$Z^{5d}_{\rm osc}$ of the
Kaluza-Klein modes generated in $Z^{6d}_{\rm osc}$
from compactification on the circle $S^1$.
The Kaluza-Klein modes have been associated with instantons in
the five-dimensional non-abelian gauge theory in \cite{Douglas,Lambert,
KimyeongLee, CollieTong}, with additional comments given for the abelian limit.
It would be interesting to find a systematic way to incorporate
these modes in a generalized five-dimensional partition function along the
lines of a character, in order to match the partition functions
exactly, but we have not done that here. Rather
our explicit expressions show an equivalence between the oscillator traces
of the two theories only in the limit where the compactification
radius $R_1$ of the circle is small compared to the five-torus $T^5$.
Other approaches to $N=(2,0)$ theories
formulate fields for non-abelian chiral two-forms \cite{Singh}-\cite{Lee}
which would be useful if
the non-abelian six-dimensional theory has
a classical description and if the quantum theory
can be described in terms of fields. On the other hand the
partition functions on various manifolds \cite{Nekrasov}-\cite{Vandoren}
can demonstrate aspects of
the six-dimensional finite quantum conformal theory presumed
responsible for features of four-dimensional gauge theory \cite{Witten}.
In section 2, the contribution of the zero modes to the partition function
for the
chiral theory on a circle times a five-torus is computed as a sum over the ten
integer eigenvalues, and its relation to that of the gauge theory
is shown via a fiber bundle approach.
In section 3, the abelian gauge theory is quantized on
a five-torus using Dirac constraints, and the Hamiltonian and momenta
are computed in terms of the oscillator modes.
In section 4, we construct the oscillator trace contribution to the
partition function for the gauge theory
and compare it with that of the chiral two-form. Section 5 contains
discussion and conclusions. Appendix A presents details of the Dirac
quantization and Appendix B verifies the Hamilton equations of motion.
Appendix C regularizes the vacuum energy.
Appendix D proves the $SL(5,{\cal Z})$ invariance of both partition functions.
\section{\bf Zero Modes}
\label{zeromodes}
The $N=(2,0)$ 6d world volume theory of the fivebrane
contains five scalars, two four-spinors
and a chiral two-form $B_{MN}$,
which has a self-dual three-form field strength
$H_{LMN}= \partial_LB_{MN} + \partial_MB_{NL} + \partial_NB_{LM}$
with $1\le L,M,N \le 6$,
\begin{align}
H_{LMN}(\vec\theta,\theta^6)= {1\over{6\sqrt{-G}}}G_{LL'}G_{MM'}G_{NN'}
\epsilon^{L'M'N'RST}H_{RST}(\vec\theta,\theta^6).
\label{sd3form}\end{align}
(\ref{sd3form}) gives
$H_{LMN}(\vec\theta,\theta^6)= {{\textstyle i}
\over{6\sqrt{|G|}}}G_{LL'}G_{MM'}G_{NN'}
\epsilon^{L'M'N'RST}H_{RST}(\vec\theta,\theta^6)$ for a
Euclidean signature metric.
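As a check on the counting, an unconstrained three-form in six dimensions has $\binom{6}{3}=20$ independent components, and the self-duality relation (\ref{sd3form}) determines the ten components carrying the index 6 in terms of the ten that do not, leaving
\begin{align}
\underbrace{6}_{H_{1jk}} \ + \ \underbrace{4}_{H_{ijk}}, \qquad 2\le i,j,k\le 5, \nonumber
\end{align}
independent components; these reappear in section \ref{zeromodes} as the ten zero-mode integers $n_1,\ldots,n_{10}$.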
In the absence of a covariant Lagrangian, the partition function
of the chiral field is defined via a trace over the Hamiltonian \cite{DN}
as is familiar from string calculations. We display this expression in
(\ref{6dpf}) where the metric has been restricted to describe
the line element for $S^1\times T^5$,
\begin{align}
ds^2 &= {R_1}^2(d\theta^1)^2 + {R_6}^2(d\theta^6)^2
+ \sum_{i,j=2...5}g_{ij}(d\theta^i -\gamma^id\theta^6)
(d\theta^j - \gamma^jd\theta^6)
\label{lineelem}\end{align}
with $0\le\theta^I\le 2\pi$, $1\le I\le 6$.
The parameters $R_1$ and $R_6$ are the radii for
directions 1 and 6,
$g_{ij}$ is a 4d metric, and $\gamma^j$ are the angles between
directions 6 and $j$. So from (\ref{lineelem}),
\begin{align}
&{G}_{ij}= g_{ij}\,;
\;G_{11}={R_1}^2;\quad
G_{i1}= 0\,;\quad
G_{66}= {R_6}^2 + g_{ij}\gamma^i\gamma^j;
\quad G_{i6}=-g_{ij}\gamma^j\,;
\quad G_{16}=0;
\label{sixmetric}\end{align}
and the inverse metric is\begin{align}
&G^{ij}= g^{ij}+{\gamma^i\gamma^j\over R_6^2};
\quad G^{11}={1\over R^2_1};\quad
G^{1i}=0;\quad
G^{66}={1\over R_6^2};
\quad G^{i6}={\gamma^i\over R_6^2};\quad G^{16}=0.
\label{sixmetricinverse}\end{align}
We want to keep the time direction $\theta^6$ common to both theories,
so in the 5d expressions (\ref{wholepf}) the indices are on
$2\le\tilde m,\tilde n\le 6$;
whereas the Hamiltonian and momenta in (\ref{6dpf})
sum on $1\le m,n\le 5$.
The common space index is labeled $2\le i,j\le 5$.
To this end, for the metric
$G_{MN}$ in (\ref{sixmetric}) we introduce
the 5-dimensional inverse (in directions 1,2,3,4,5)
\begin{align}{G_5}^{ij}= g^{ij};\qquad
{G_5}^{i1}=0;\qquad
{G_5}^{11}= {1\over R_1^2};\end{align}
and the 5-dimensional inverse (in directions 2,3,4,5,6)
for the five-torus $T^5$, \begin{align}
{\widetilde G_5}^{ij}=& g^{ij} + {\gamma^i\gamma^j\over R_6^2}\,;\qquad
{\widetilde G_5}^{i6}={\gamma^i\over R_6^2}\,;\qquad
{\widetilde G_5}^{66}= {1\over R_6^2}.
\label{tildeg5}\end{align}
The determinants of the metrics are related simply by
$\sqrt{G} = R_6\sqrt{G_5} = R_1\sqrt{\widetilde G_5} = R_6 R_1\sqrt{g}$\,,
and $\epsilon_{23456} \equiv \widetilde G_5 \,
\epsilon^{23456} =\widetilde G_5$,
with corresponding epsilon tensors related by $G$, $G_5$, $g$.
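The determinant relation can be verified directly from (\ref{sixmetric}) by a Schur complement in the $\theta^6$ direction:
\begin{align}
\det G \ = \ R_1^2 \,\det g \, \big( G_{66} - G_{6i}\, g^{ij} G_{j6} \big)
\ = \ R_1^2 \,\det g \, \big( R_6^2 + g_{ij}\gamma^i\gamma^j - g_{kl}\gamma^k\gamma^l \big)
\ = \ R_1^2 R_6^2 \det g, \nonumber
\end{align}
since $G_{6i}\, g^{ij} G_{j6} = g_{ik}\gamma^k g^{ij} g_{jl}\gamma^l = g_{kl}\gamma^k \gamma^l$, which gives $\sqrt{G} = R_1 R_6 \sqrt{g}$.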
To compute $Z_{\rm zero\, modes}^{6d}$ we neglect the integrations
in (\ref{6dpf}) and get
\begin{align}
-2\pi R_6{\cal H}
&=-{\pi \over 6} R_6R_1{\sqrt g}g^{ii'}g^{jj'}g^{kk'}
H_{ijk}H_{i'j'k'}
-{\pi\over 4} {R_6\over R_1}{\sqrt g}(g^{jj'}g^{kk'}
- g^{jk'}g^{kj'}) H_{1jk}H_{1j'k'},\cr
i2\pi\gamma^i \P_i &=
-{i\pi \over 2} \gamma^i \epsilon^{jkj'k'} H_{1jk} H_{ij'k'}
= {i\pi\over 3} \gamma^i \epsilon^{jj'kk'} H_{j'kk'}H_{1ij},
\label{pmo}\end{align}
\vskip-15pt
where the zero modes of the four fields $H_{ijk}$ are labeled by the integers
$n_7,\ldots , n_{10}$. The six fields $H_{1jk}$ have zero mode eigenvalues
$H_{123}=n_1$, $H_{124}=n_2$, $H_{125}=n_3$, $H_{134}=n_4$, $H_{135}=n_5$,
$H_{145}=n_6$, and the trace on the zero mode operators in (\ref{6dpf}) is
\begin{align}
Z_{\rm zero\, modes}^{6d}&=
\sum_{n_1,\ldots,n_6}
\exp\{-{\pi\over 4} {R_6\over R_1}{\sqrt g}
(g^{jj'}g^{kk'} - g^{jk'}g^{kj'}) H_{1jk}H_{1j'k'}\}
\cr
&\hskip10pt\cdot
\sum_{n_7,\ldots,n_{10}}
\exp\{-{\pi \over 6} R_6R_1{\sqrt g}g^{ii'}g^{jj'}g^{kk'}
H_{ijk}H_{i'j'k'}
-{i\pi \over 2}\gamma^i \epsilon^{jkj'k'} H_{1jk} H_{ij'k'}\}. \cr
\label{chzm}\end{align}
The same sum is obtained from the 5d Maxwell theory
(\ref{wholepf}) where the gauge coupling is identified\footnote{
See for example arXiv:1012.2882, p5 \cite{Lambert}.} with the
radius of the circle $g^2_{5YM} = 4\pi^2 R_1$, as follows.
The zero modes of the gauge theory are eigenvalues of operator-valued
fields that satisfy Maxwell equations with no sources. Even classically
these solutions have constant
$F_{ij}$ which lead to non-zero flux through closed two-surfaces that
are not a boundary of a three-dimensional submanifold in $T^5$.
Working in $A_6=0$ gauge,
if we consider the $U(1)$ gauge field $A_i$
at any time $\theta^6$ as a connection on a principal U(1) bundle
with base manifold $T^4$, then the curvature
$F_{ij}=\partial_iA_j-\partial_jA_i$\break for $2\le i,j\le 5$ must have
integer flux \cite{Wittentwo,Verlinde}, in the sense that
\begin{align}
n_I = {1\over 2\pi}\int_{\Sigma_2^I} F\equiv {1\over 2\pi}\int_{\Sigma_2^I}
{1\over 2} F_{ij} \, d\theta^i\wedge d\theta^j,\qquad n_I\in \Z,
\; \hbox{for each} \; 1\le I\le 6.\label{coho}
\end{align}
In $T^4$, the six representative two-cycles $\Sigma_2^I$ are each a 2-torus
constructed by the six ways of combining the four $S^1$ of $T^4$ two at a time,
given by the cohomology class, $\dim H_2(T^4) = 6$. Relabeling
$n_I$ as $n_{i,j}$ and $\Sigma_2^I$ as $\Sigma_2^{i,j}$, $2\le i< j\le 5$, we have
$\int_{\Sigma_2^{g,h}} d\theta^i\wedge d\theta^j
= (2\pi)^2 (\delta^i_g\delta^j_h
- \delta^i_h\delta^j_g)$. So (\ref{coho}) is
\begin{align}
F_{ij} = {n_{i,j}\over 2\pi}, \qquad n_{i,j}\in\Z \; \hbox{for $i<j$}.
\label{Fint}\end{align}
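As a simple illustration, the flux (\ref{Fint}) is carried for instance by the linear connection
\begin{align}
A_j(\vec\theta) \ = \ {1\over 4\pi} \, n_{i,j}\, \theta^i, \qquad n_{j,i} \equiv -n_{i,j},
\nonumber\end{align}
for which $F_{ij} = \partial_i A_j - \partial_j A_i = n_{i,j}/2\pi$; this $A_j$ is not single valued on $T^4$, but shifts by a gauge transformation when any $\theta^i$ crosses $2\pi$, consistent with the bundle description above.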
Furthermore we show how the zero mode
eigenvalues of $F_{6i}$ are found\footnote{This point of view is discussed
in \cite{Henningson}. See also \cite{BG}.}
from those of the conjugate momentum $\Pi^i$.
In section 3 we derive the form of $H^{5d}$ and $P^{5d}_i$ given in
(\ref{wholepf}) from
a canonical quantization using a Lorentzian signature metric.
In (\ref{conjmomL}) the conjugate momentum is defined as
\begin{align}
\Pi^i &= {\sqrt{g}\over 4\pi^2 R_1R_6} g^{ii'}F_{6i'}.
\label{fmom}\end{align}
From the commutation relations (\ref{thcr}) we can compute its commutator
with the holonomy $\int_{\Sigma_1^k} A \equiv
\int_{\Sigma_1^k} A_i(\vec\theta,\theta^6) d\theta^i$
where $\Sigma_1^k$ are the four representative one-cycle circles in $T^4$,
\begin{align}
\left[\int_{\Sigma_1^k} A_i(\vec\theta,\theta^6) d\theta^i,
\int {d^4\theta'\over 2\pi} \Pi^j(\vec\theta',\theta^6)\right]
&= {i\over 2\pi} \int_{\Sigma_1^k} d\theta^j = i \,\delta_k^j.
\end{align}
Hence an eigenstate $\psi$ of the zero mode operator
${1\over 2\pi} \int d^4\theta'
\Pi^k(\vec\theta',\theta^6)$ with eigenvalue $\lambda$ is
\begin{align}
\psi = e^{i \lambda\int_{\Sigma_1^k}A} \, |0\rangle,\qquad
\Big({1\over 2\pi}\int d^4\theta' \Pi^k(\vec\theta',\theta^6)\Big) \;
e^{i \lambda\int_{\Sigma_1^k}A} \, |0\rangle
= \lambda \; \, e^{i \lambda\int_{\Sigma_1^k}A} \, |0\rangle.
\nonumber\end{align}
Since the holonomy is defined mod $2\pi$, which allows $A$ to vary by
gauge transformations when crossing neighborhoods while ensuring that
$e^{i \int_{\Sigma_1^k}A}$ is a single-valued element of
the structure group $U(1)$, the states
\begin{align}
e^{i \lambda \int_{\Sigma_1^k}A} \, |0\rangle \qquad \hbox{and}\qquad
e^{i \lambda \big( 2\pi + \int_{\Sigma_1^k}A\big)} \, |0\rangle
\end{align}
must be equivalent, so the eigenvalue $\lambda$ of
the operator ${1\over 2\pi} \int d^4\theta' \Pi^k(\vec\theta',\theta^6)$
must have integer values $n^{(k)}$,
\begin{align}
\Pi^k(\vec\theta',\theta^6)
= {n^{(k)}\over (2\pi)^3},\qquad n^{(k)}\in \Z.
\label{piint}\end{align}
This normalization of the zero mode eigenvalues for the gauge theory
takes the $d\theta^i$ space integrations into account.
So (\ref{wholepf}) gives
\begin{align}
&- 2\pi H^{5d} + i 2\pi \gamma^i P_i^{5d}\cr
&= \Big( - {\pi\sqrt{g}\over R_1R_6} g^{ii'} F_{6i} F_{6i'}
-{\pi R_6\over 2 R_1}\sqrt{g} g^{ii'}g^{jj'} F_{ij}
F_{i'j'} + 2\pi i \gamma^i {\sqrt{g}\over R_1R_6} g^{jj'}
F_{6j'}F_{ij}\Big) \, (2\pi)^2.\cr
\label{maxwell5}
\end{align}
We can use the identity
\begin{align}
-{1\over 4} \epsilon^{jkj'k'} H_{1jk}H_{ij'k'} &=
{1\over 6} \epsilon^{jj'kk'} H_{j'kk'}H_{1ij},
\nonumber\end{align}
to rewrite the last term in (\ref{chzm}) as
\begin{align}
-{i\pi \over 2} \gamma^i \epsilon^{jkj'k'} H_{1jk} H_{ij'k'}
&= {i\pi \over 3} \gamma^i \epsilon^{jj'kk'} H_{j'kk'}H_{1ij},
\nonumber\end{align}
which is equal to the last term in (\ref{maxwell5}) if we identify
\begin{align}
{1\over 6} \epsilon^{jj'kk'} H_{j'kk'} = {2\pi \sqrt{g}\over R_1R_6}
g^{jj'} F_{6j'},\qquad H_{1ij} = 2\pi F_{ij}.
\label{newid}\end{align}
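As a numerical aside (not part of the original derivation), the three-form identity used above can be checked componentwise for random antisymmetric tensors; in the script below the Python indices $0$ to $3$ stand for the coordinate labels $2$ to $5$, and all names are illustrative:

```python
import itertools
import random

def eps4(i, j, k, l):
    """Permutation symbol with eps4(0,1,2,3) = 1 (indices 0-3 stand for 2-5)."""
    p = [i, j, k, l]
    if len(set(p)) != 4:
        return 0
    s = 1
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                s = -s
    return s

random.seed(1)

# H1[j][k] = H_{1jk}: antisymmetric in (j, k)
H1 = [[0.0] * 4 for _ in range(4)]
for j in range(4):
    for k in range(j + 1, 4):
        H1[j][k] = random.uniform(-1.0, 1.0)
        H1[k][j] = -H1[j][k]

# H_{ijk}: totally antisymmetric, built from its independent components
comp = {c: random.uniform(-1.0, 1.0) for c in itertools.combinations(range(4), 3)}

def H3(i, j, k):
    if len({i, j, k}) != 3:
        return 0.0
    order = tuple(sorted((i, j, k)))
    perm = [order.index(x) for x in (i, j, k)]
    s = 1
    for a in range(3):
        for b in range(a + 1, 3):
            if perm[a] > perm[b]:
                s = -s
    return s * comp[order]

# -(1/4) eps^{jkj'k'} H_{1jk} H_{ij'k'} = (1/6) eps^{jj'kk'} H_{j'kk'} H_{1ij}
for i in range(4):
    lhs = -0.25 * sum(eps4(j, k, jp, kp) * H1[j][k] * H3(i, jp, kp)
                      for j, k, jp, kp in itertools.product(range(4), repeat=4))
    rhs = (1.0 / 6.0) * sum(eps4(j, jp, k, kp) * H3(jp, k, kp) * H1[i][j]
                            for j, jp, k, kp in itertools.product(range(4), repeat=4))
    assert abs(lhs - rhs) < 1e-10
print("three-form identity verified componentwise")
```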
Then, from (\ref{newid}) we have
that the first term in (\ref{maxwell5}) becomes\footnote{
\begin{align}
&g_{jg} \epsilon^{jj'kk'} \epsilon^{gg'hh'}
= g( g^{j'g'}g^{kh}g^{k'h'} - g^{j'g'}g^{k'h}g^{kh'}
- g^{k'g'}g^{kh}g^{j'h'} + g^{k'g'}g^{j'h}g^{kh'}
- g^{k g'}g^{j'h}g^{k'h'} + g^{kg'}g^{k'h}g^{j'h'}),\cr
&\epsilon^{2345} =1 \;\hbox{and}\; \epsilon_{2345} = g \epsilon^{2345} = g.
\nonumber\end{align}}
\begin{align}
-{4\pi^3\sqrt{g}\over R_1R_6} g^{ii'} F_{6i}F_{6i'}
&= -{\pi\over 6}\sqrt{g}R_1R_6 \;g^{j'g'}g^{kh}g^{k'h'} H_{j'kk'}H_{g'hh'}.
\nonumber\end{align}
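The epsilon contraction identity quoted in the footnote can also be verified numerically for a random positive-definite metric (a sanity check added here, not part of the original text; function and variable names are illustrative):

```python
import itertools
import random

def eps(i, j, k, l):
    """Permutation symbol with eps(0,1,2,3) = 1."""
    p = [i, j, k, l]
    if len(set(p)) != 4:
        return 0
    s = 1
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                s = -s
    return s

def det4(g):
    # determinant via the permutation-sum definition
    return sum(eps(*p) * g[0][p[0]] * g[1][p[1]] * g[2][p[2]] * g[3][p[3]]
               for p in itertools.permutations(range(4)))

def inv4(g):
    # inverse via the adjugate (cofactor) formula
    d = det4(g)
    out = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            m = [[g[r][c] for c in range(4) if c != j]
                 for r in range(4) if r != i]
            det3 = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                    - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                    + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
            out[j][i] = ((-1) ** (i + j)) * det3 / d
    return out

random.seed(0)
A = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]
# random symmetric positive-definite metric g = A^T A + 1
g = [[sum(A[k][i] * A[k][j] for k in range(4)) + (1.0 if i == j else 0.0)
      for j in range(4)] for i in range(4)]
gi = inv4(g)
d = det4(g)

# g_{jm} eps^{j j'kk'} eps^{m g'hh'} = det(g) * (six inverse-metric terms)
for jp, k, kp, gp, h, hp in itertools.product(range(4), repeat=6):
    lhs = sum(g[j][m] * eps(j, jp, k, kp) * eps(m, gp, h, hp)
              for j in range(4) for m in range(4))
    rhs = d * (gi[jp][gp] * gi[k][h] * gi[kp][hp]
               - gi[jp][gp] * gi[kp][h] * gi[k][hp]
               - gi[kp][gp] * gi[k][h] * gi[jp][hp]
               + gi[kp][gp] * gi[jp][h] * gi[k][hp]
               - gi[k][gp] * gi[jp][h] * gi[kp][hp]
               + gi[k][gp] * gi[kp][h] * gi[jp][hp])
    assert abs(lhs - rhs) < 1e-8
print("epsilon contraction identity verified")
```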
Thus with the identifications in (\ref{newid}),
the 5d Maxwell expression in (\ref{maxwell5}) is equal to
the 6d chiral exponent in (\ref{chzm}),
\begin{align}
-2\pi H^{5d} + i2\pi\gamma^i P^{5d}_i &= \Big(
-{\pi \sqrt{g}\over R_1 R_6} g^{ii'}\,F_{6i}F_{6i'}
-{\pi R_6\sqrt{g}\over 2 R_1} g^{ii'}g^{jj'} F_{ij}F_{i'j'}
+ {i 2\pi \sqrt{g}\over R_1 R_6} \gamma^i g^{jj'}\,F_{6j'}F_{ij}\Big)
(2\pi)^2 \cr
= -t{\cal H} + i2\pi\gamma^i\P_i
&= -{\pi \over 6} R_6R_1{\sqrt g}g^{ii'}g^{jj'}g^{kk'}
H_{ijk}H_{i'j'k'}
-{\pi\over 4}{R_6\over R_1}\sqrt{g} (g^{jj'}g^{kk'}-g^{jk'}g^{j'k})
H_{1jk}H_{1j'k'}\cr
&\hskip15pt -{i\pi \over 2} \gamma^i \epsilon^{jkj'k'} H_{1jk} H_{ij'k'}.
\nonumber\end{align}
We now discuss the sum over integers in (\ref{chzm}). From
(\ref{newid}),
if $H_{1jk}$ are integers, then $2\pi \, F_{ij}$ are integers.
If $H_{ijk}$ are integers, then ${1\over 6}\epsilon^{jj'kk'} H_{j'kk'}$
are also integers. This implies, again from (\ref{newid}), that
${2\pi\,\sqrt{g}\over R_1R_6} g^{jj'} F_{6j'}$ should be integers, which we
justify in (\ref{Fint}) and (\ref{piint}) with (\ref{fmom}).
Thus the Maxwell zero mode trace can be written as
\begin{align}
Z^{5d}_{\rm zero\;modes}&= \sum_{n_1\ldots n_6}
\,\exp\{-{2\pi^3}{R_6\sqrt{g}\over R_1} g^{ii'}g^{jj'}F_{ij}F_{i'j'}
\} \cr
&\hskip40pt\cdot
\sum_{n^7\ldots n^{10}} \exp\{-{4\pi^3\sqrt{g}\over R_1R_6} g^{ii'}
F_{6i} F_{6i'}
+ {i (2\pi)^3 \sqrt{g}\over R_1 R_6} \gamma^i g^{jj'}\,F_{6j'}F_{ij} \}
\label{pfi}\end{align}
where the integer eigenvalues are
$n_1= 2\pi F_{23}, n_2=2\pi F_{24}, n_3= 2\pi F_{25}, n_4=2\pi F_{34},
\hfill\break
n_5=2\pi F_{35}, n_6=2\pi F_{45}$;
$(n^7,n^8,n^9,n^{10}) \equiv (n^{(2)},n^{(3)},n^{(4)},n^{(5)})$,\hfill\break
for
$n^{(k)}\equiv {2\pi\sqrt{g}\over R_1R_6} g^{ki'}F_{6i'}\in\Z^4.$
So we have
proved the relation (\ref{zerocomp})
\begin{align}
Z^{6d}_{\rm zero \; modes} = Z^{5d}_{\rm zero \; modes}
\label{zerocompa}\end{align}
and the explicit expression is given by (\ref{chzm}) or (\ref{pfi}).
\section{\bf Dirac Quantization of Maxwell Theory on a Five-torus}
\label{diracquant}
To evaluate the oscillator contribution to the partition function
in (\ref{wholepf}), we will
first quantize the abelian gauge theory on the five-torus with
a general metric. The equation of motion is
$\partial^{\tilde m} F_{\tilde m\tilde n} = 0.$ For
$ F_{\tilde m\tilde n} = \partial_{\tilde m} A_{\tilde n}
- \partial_{\tilde n} A_{\tilde m}$, a solution
is given by a solution to
\begin{align}
\partial^{\tilde n}\partial_{\tilde n} A_{\tilde m} = 0,\qquad
\partial^{\tilde m}A_{\tilde m} = 0.
\label{eomagt}\end{align}
These have a plane wave solution
$A_{\tilde m}(\vec\theta,\theta^6) = f_{\tilde m}(k) e^{ik\cdot\theta}
+ (f_{\tilde m}(k) e^{ik\cdot\theta})^\ast$\;
when
\begin{align}
\widetilde G_L^{\tilde m \tilde n} k_{\tilde m}k_{\tilde n} = 0,
\qquad k^{\tilde m} f_{\tilde m} =0.
\label{5dg}\end{align}
In order for the operator formalism (\ref{wholepf}) to reproduce
a path integral quantization with spacetime metric (\ref{tildeg5}), we
must canonically quantize $H^{5d}$ and $P^{5d}_i$ via a metric that has
zero angles with the time
direction, {\it i.e.} $\gamma^i=0$, and insert $\gamma^i$ in the partition
function merely as the coefficient of $P^{5d}_i$ \cite{GSW}.
Furthermore, a Lorentzian signature metric is needed for the
canonical quantization, so we modify the metric on the five-torus
(\ref{sixmetric}), (\ref{tildeg5}) to be
\begin{align}&{\widetilde G}_{L\,ij}= g_{ij}\,;\;
\widetilde G_{L\,66}= -{R_6}^2;
\; \widetilde G_{L\,i6}=0\,;\quad
\widetilde G_L^{ij}= g^{ij};\;
\widetilde G_L^{66}=-{1\over R_6^2};
\;\widetilde G_L^{i6}=0,\quad \widetilde G_L = \det \widetilde
G_{L\,\tilde m\tilde n}.
\label{Lfivemetric}\end{align}
Solving for $k_6$ from
(\ref{5dg}) we find
\begin{align}
k_6= \pm {\sqrt{-\widetilde G_L^{66}}\over \widetilde G_L^{66}}\,
|k|, \label{k6}\end{align}
where $2\le i,j\le 5,$ and $|k| \equiv \sqrt{g^{ij} k_ik_j}.$
Use the gauge invariance $f_{\tilde m}\rightarrow f'_{\tilde m}
= f_{\tilde m} + k_{\tilde m} \lambda$ to fix $f'_6=0,$
which is the gauge choice $$A_6=0.$$
This reduces the number of components of $A_{\tilde m}$ from 5 to 4.
To satisfy (\ref{5dg}), we can use the $\partial^{\tilde m} F_{\tilde m 6}
= -\partial_6\partial^i A_i=0$
component of the equation of motion to eliminate $f_5$ in terms of the three
components $f_2,f_3,f_4$,
\begin{align}
f_5= -{1\over k^5}(k^2f_2+k^3f_3+k^4f_4),
\nonumber\end{align}
leaving just three independent polarization vectors
corresponding to the physical degrees of freedom of the 5d one-form
with Spin(3) content 3.
From the Lorentzian Lagrangian
\begin{align}
\cL= -{1\over 4}{\sqrt{-\widetilde G_L}\over g^2_{5YM}}
\widetilde G_L^{\tilde m\tilde {m'}}
\widetilde G_L^{\tilde n\tilde {n'}} F_{\tilde m\tilde n}
F_{\tilde m'\tilde n'}
= {R_6\sqrt{g}\over 4\pi^2R_1} \Big(-{1\over 4 }g^{ii'}g^{jj'} F_{ij} F_{i'j'}
- {1\over 2} \widetilde G_L^{66}
g^{jj'} F_{6j} F_{6j'}\Big),\label{5dLor}
\end{align}
the energy-momentum tensor
\begin{align}
\T^{\,m}_{\hskip8pt n} &= {\delta\cL\over \delta \partial_m A_p}
\partial_n A_p - \delta^m_{\hskip4pt_n}\,\cL
\end{align} leads to the Hamiltonian and momenta operators
\begin{align}
H_c &\equiv \int d^4\theta \, \T^{\,6}_{\hskip8pt6} = \int d^4\theta\,
\Big( {R_6\sqrt{g}\over 4\pi^2 R_1}\big(
-{1\over 2} \widetilde G_L^{66}g^{ii'}\,F_{6i}F_{6i'}
+ {1\over 4} g^{ii'}g^{jj'} F_{ij}F_{i'j'}
- F^{6i}\partial_i A_6\big) + \Pi^6 \partial_6 A_6 \Big),\label{firstHC}\\
P_i&\equiv\int d^4\theta \,\T^{6}_{\hskip8pt i} =
\int d^4\theta \Big( {R_6\sqrt{g}\over 4\pi^2 R_1}
\big (-\widetilde G_L^{66}g^{jj'} \,F_{6j'}F_{ij} - F^{6j}\partial_j A_i \big)
+ \Pi^6 \partial_i A_6 \Big) \label{firstP},\cr
\end{align}
where the conjugate momentum is
\begin{align}
\Pi^i = {\delta\cL\over \delta \partial_6 A_i} = -{R_6\sqrt{g}\over 4\pi^2 R_1} F^{6i} = - {R_6\sqrt{g}\over 4\pi^2 R_1} \widetilde G_L^{66} g^{ii'} F_{6i'},
\qquad \Pi^6 = {\delta\cL\over \delta \partial_6 A_6} = 0.
\label{conjmomL}
\end{align}
We quantize the Maxwell field on the five-torus with the metric
(\ref{Lfivemetric}) in radiation gauge
using Dirac constraints \cite{Dirac, Das}.
The theory has a primary constraint $\Pi^6(\vec \theta, \theta^6) \approx 0.$
We can express the Hamiltonian (\ref{firstHC}) in terms of the conjugate
momentum as
\begin{align}
H_{can}&= \int d^4\theta \Big( -{ 2\pi^2 R_1 \over R_6\sqrt{g}
\widetilde G_L^{66}}\, g_{ii'}\Pi^i\,\Pi^{i'} +
{R_6\sqrt{g}\over 16\pi^2 R_1} g^{ii'}g^{jj'} F_{ij} F_{i'j'}
- \partial_i\Pi^i \,A_6 \Big),
\label{Hcan}\end{align}
where the last term has been integrated by parts.
The primary Hamiltonian is defined by
\begin{align}
H_p&= \int d^4\theta \Big( -{ 2\pi^2 R_1 \over R_6\sqrt{g}
\widetilde G_L^{66}}\, g_{ii'}\Pi^i\,\Pi^{i'} +
{R_6\sqrt{g}\over 16\pi^2 R_1} g^{ii'}g^{jj'} F_{ij} F_{i'j'}
- \partial_i\Pi^i \,A_6 + \lambda_1\Pi^6\Big),
\label{Hp}\end{align}
with $\lambda_1$ as a Lagrange multiplier.
In Appendix A, we use the Dirac method of quantizing with constraints
for the radiation gauge conditions $A_6\approx 0$,
$\partial^i A_i\approx 0$, and find the equal time commutation relations
(\ref{DirCom}), (\ref{othcom}):
\begin{align}
&[\Pi^j(\vec\theta, \theta^6), A_i(\vec\theta',\theta^6)] =
- i \Big( \delta^j_i - g^{jj'}
(\partial_i{1\over g^{kk'}\partial_k\partial_{k'}}
\partial_{j'})\Big) \;\delta^4 (\theta-\theta'),\cr
& [ A_i(\vec\theta,\theta^6), A_j(\vec\theta',\theta^6)]=0,
\qquad [\Pi^i(\vec\theta,\theta^6),\Pi^j(\vec\theta',\theta^6)]=0.
\label{thcr}\end{align}
Appendix B shows that the Hamiltonian (\ref{Hp})
gives the correct equations of motion.
In $A_6=0$ gauge,
the free quantum vector field on the torus is expanded as
\begin{align}
A_i(\vec\theta,\theta^6) = \;{\rm zero\, modes}\; + \sum_{\vec k\ne0, \vec k
\in {\cal Z}^4}
(f_i^{\kappa} a_{\vec k}^{\kappa} e^{ik\cdot \theta}
+ f_i^{\kappa\ast} a_{\vec k}^{\kappa\dagger} e^{-ik\cdot \theta}),
\nonumber\end{align}
where $1\le \kappa\le 3$, $2\le i\le 5$, and $k_6$ is defined in (\ref{k6}).
The sum is on the dual lattice $\vec k = k_i\in {\cal Z}^4\ne\vec 0.$
Having computed the zero mode contribution in (\ref{pfi}),
here we consider\footnote{In this mode expansion, we shall pick the plus sign
in (\ref{k6}) for the root $k_6$ which solves (\ref{5dg}).}
\begin{align}
A_i(\vec\theta,\theta^6) = \sum_{\vec k\ne 0}
(a_{\vec k\, i} e^{ik\cdot \theta}
+ a_{\vec k\, i}^{\dagger} e^{-ik\cdot \theta}),
\label{vecfield}\end{align}
with polarizations absorbed in
\begin{align}
a_{\vec k\,i}= f_i^\kappa a_{\vec k}^\kappa.\label{polarize}
\end{align}
From (\ref{thcr}) the commutator in terms of the oscillators is
\begin{align}
\int {d^4\theta d^4\theta'\over (2\pi)^8}
e^{- i k_i\theta^i} e^{- i {k'}_i{\theta'}^i}
[ A_i(\vec\theta, 0), A_j(\vec\theta', 0)]
= [(a_{\vec k\, i} + a_{-\vec k\, i}^\dagger),
(a_{\vec k'\, j} + a_{-\vec k'\, j}^\dagger)] = 0.
\label{Acom}\end{align}
The conjugate momentum $\Pi^j(\vec\theta,\theta^6)$ in (\ref{conjmomL})
is expressed in terms of $a_{\vec k\,i}, a_{-\vec k\,i}^\dagger$ by
\begin{align}
\Pi^j(\vec\theta,\theta^6)
&= -i{R_6\sqrt{g}\over 4\pi^2 R_1} \widetilde G_L^{66} g^{jj'}
\sum_{\vec k} k_6
\, (a_{\vec k\, j'} e^{ik\cdot\theta}
- a_{\vec k\, j'}^\dagger e^{-ik\cdot\theta}).
\label{pie}\end{align}
Then taking the Fourier transform of $\Pi^j(\vec\theta,\theta^6)$
at $\theta^6=0$, we have
\begin{align}
\int {d^4\theta\over (2\pi)^4} e^{-ik_i\theta^i} \Pi^j(\vec\theta, 0)
&= - i{R_6\sqrt{g}\over 4\pi^2 R_1} \widetilde G_L^{66} g^{jj'}
k_6 \, (a_{\vec k\, j'} - a_{-\vec k\, j'}^\dagger).
\label{pia}\end{align}
From (\ref{pia}) and the commutators (\ref{thcr}) and (\ref{Acom}), we find
\begin{align}
&\int {d^4\theta d^4\theta'\over (2\pi)^8} e^{-ik_i\theta^i}
e^{-i{k'}_i{\theta'}^i} [\Pi^j(\vec\theta, 0),
A_i(\vec\theta', 0)] \cr&= -i(\delta^j_i - {g^{jj'}
k_ik_{j'}\over g^{kk'}k_kk_{k'}}) \delta_{\vec k,-\vec k'}\;
{1\over (2\pi)^4}
= - i{R_6\sqrt{g} \over 4\pi^2 R_1} \widetilde G_L^{66} g^{jj'}
k_6 \,
[(a_{\vec k\, j'} - a_{-\vec k\, j'}^\dagger),
(a_{\vec k'\, i} + a_{-\vec k'\, i}^\dagger)].\cr
\label{piatwo}\end{align}
To reach the oscillator commutator (\ref{aacom}), we define
\begin{align}
& A_{\vec k\, i} \equiv a_{\vec k\,i} + a^\dagger_{-\vec k\,i}
= A^\dagger_{-\vec k\, i},\qquad
E_{\vec k\, i} \equiv a_{\vec k\,i} - a^\dagger_{-\vec k\,i}
= -E^\dagger_{-\vec k\, i},
\label{EA}\\
&a_{\vec k\,i} = {1\over 2} (A_{\vec k\,i} + E_{\vec k\,i}),
\qquad
a^\dagger_{\vec k\,i} = {1\over 2} (A^\dagger_{\vec k\,i} +
E^\dagger_{\vec k\,i}) = {1\over 2} (A_{-\vec k\,i} -
E_{-\vec k\,i}).
\label{aa}\end{align}
Now inverting (\ref{piatwo}) we have
\vskip-30pt \begin{align}
[E_{\vec k\, j}, A_{\vec k'\, i}] =
{R_1\over R_6\sqrt{g}\widetilde G_L^{66} k_6} {1\over (2\pi)^2}
\big( g_{ji} - {k_jk_i\over g^{kk'}k_kk_{k'}} \big) \delta_{\vec k,-\vec k'},
\label{piathree}\end{align}
and from (\ref{pia}) and the relations (\ref{thcr}) and (\ref{Acom}),
\begin{align}
[A_{\vec k\, i}, A_{\vec k'\, j}] = 0,\qquad
[E_{\vec k\, i}, E_{\vec k'\, j}] &= 0.
\label{aaee}\end{align}
Using (\ref{aa}),
\begin{align}
[a_{\vec k\,i}, a^\dagger_{\vec k'\,j}] &=
{1\over 4}\Big( [A_{\vec k\,i}, A_{-\vec k'\,j}] -
[E_{\vec k\,i}, E_{-\vec k'\,j}] - [A_{\vec k\,i}, E_{-\vec k'\,j}]
+ [E_{\vec k\,i}, A_{-\vec k'\,j}]\Big),
\end{align}
together with (\ref{piathree}), (\ref{aaee})
we find the oscillator commutation relations
\begin{align}
[a_{\vec k\,i}, a^\dagger_{\vec k'\,j}]
&= {R_1\over R_6\sqrt{g}\widetilde G_L^{66} k_6} {1\over 2 (2\pi)^2}
\big( g_{ij} - {k_ik_j\over g^{kk'}k_kk_{k'}} \big)
\delta_{\vec k,\vec k'},\cr
[a_{\vec k\,i}, a_{\vec k'\,j}]&=0,\qquad
[a^\dagger_{\vec k\,i}, a^\dagger_{\vec k'\,j}]=0.\label{aacom}
\end{align}
In the gauge $\partial^i A_i(\vec\theta,\theta^6) =0$, we have
$k^ia_{\vec k\,i} = g^{ij}k_j a_{\vec k\,i} = 0, \;k^ia^\dagger_{\vec k\,i}
= g^{ij} k_j a^\dagger_{\vec k\, i} = 0$, as in (\ref{5dg}),
and these conditions are consistent
with the commutator (\ref{aacom}).
We will use this commutator
to proceed with the evaluation of the
Hamiltonian and momenta in (\ref{firstHC},\ref{firstP}).
In $A_6=0$ gauge,
\begin{align}
H_c &= \int d^4\theta\, {R_6\sqrt{g}\over 4\pi^2 R_1}\Big(
-{1\over 2} \widetilde G_L^{66}g^{ii'}\,\partial_6 A_i \partial_6 A_{i'}
+ {1\over 4} g^{ii'}g^{jj'} F_{ij}F_{i'j'}\Big),
\label{Hq}\end{align}
which is the Hamiltonian $H^{5d}$ in (\ref{wholepf}).
In (\ref{firstP}), after integrating by parts, we also impose the
second constraint described in Appendix A, $\partial_i\Pi^i= 0$,
to find
\begin{align}
P_i = {1\over 4\pi^2 R_1 R_6}
\int_0^{2\pi} d\theta^2 d\theta^3 d\theta^4 d\theta^5
\sqrt{g}\;
g^{jj'}\,F_{6j'}F_{ij},
\label{Pq}\end{align}
which are the momenta $P^{5d}_i$ in (\ref{wholepf}).
From (\ref{Hq}), in terms of the normal mode expansion (\ref{vecfield}),
\begin{align}
H_c &=
(2\pi)^2{R_6\sqrt{g}\over R_1}\sum_{\vec k\in\Z^4 \ne \vec 0}
\big( {1\over 2} \widetilde G_L^{66}g^{ii'}
k_6 k_6 + {1\over 2} (g^{ii'}g^{jj'}- g^{ij'}g^{ji'}) k_jk_{j'} \big)
(a_{\vec k\,i} a_{-\vec k\,i'}
e^{2 i k_6\theta^6}
+ a^\dagger_{\vec k\,i}a^\dagger_{-\vec k\,i'}
e^{- 2 i k_6\theta^6})\cr
&\hskip10pt + (2\pi)^2{R_6\sqrt{g}\over R_1}
\sum_{\vec k\in\Z^4 \ne \vec 0}\big(-{1\over 2} \widetilde G_L^{66}g^{ii'}
k_6 k_6 + {1\over 2} (g^{ii'}g^{jj'}- g^{ij'}g^{ji'}) k_jk_{j'} \big)
(a_{\vec k\,i} a^\dagger_{\vec k\,i'}
+ a^\dagger_{\vec k\,i}a_{\vec k\,i'}),
\end{align}
with the delta function
\begin{align}
\int {d^4\theta\over (2\pi)^4} e^{i(k_i-k'_i)\theta^i}=\delta_{\vec k,\vec k'},
\end{align}
and where $k_6$ is given in $(\ref{k6})$.
From the on-shell and transverse conditions (\ref{5dg}), \hfill\break
$\widetilde G_L^{66} k_6k_6 +|k|^2 =0,$ and $k^ia_{\vec k\, i} =
k^ia^\dagger_{\vec k\, i}=0,$ so the time-dependence of
$H_c$ on $\theta^6$ cancels and
\begin{align}
H_c &= (2\pi)^2 {R_6\sqrt{g}\over R_1}\sum_{\vec k\in\Z^4 \ne \vec 0}
g^{ii'} |k|^2 \;
(a_{\vec k\,i} a^\dagger_{\vec k\,i'}
+ a^\dagger_{\vec k\,i}a_{\vec k\,i'}).
\end{align}
Similarly the momenta from (\ref{Pq}) become
\begin{align}
P_i&= - {R_6\sqrt{g}\over R_1}
\widetilde G_L^{66} g^{jj'} (2\pi)^2\sum_{\vec k\in\Z^4 \ne \vec 0}
k_6 k_i \,\big(a_{\vec k\,j'}a^\dagger_{\vec k\,j}
+ a^\dagger_{\vec k\,j'}a_{\vec k\,j}\big).
\end{align}
Then
\begin{align}
- H_c + i\gamma^i P_i & = \mp
\sqrt{-\widetilde G_L^{66}} {R_6\sqrt{g}\over R_1}\;(2\pi)^2
\sum_{\vec k\in\Z^4 \ne \vec 0} |k| \; \big ( \pm {|k| \over
\sqrt{-\widetilde G_L^{66}}}
+ i\gamma^i k_i \big) g^{jj'}
\big (a_{\vec k\,j} a^\dagger_{\vec k\,j'}
+ a^\dagger_{\vec k\,j}a_{\vec k\,j'}\big)\cr
&= \mp i
\sqrt{-\widetilde G_L^{66}} {R_6\sqrt{g}\over R_1}\;(2\pi)^2
\sum_{\vec k\in\Z^4 \ne \vec 0} |k| \; \big( \pm i
{\sqrt{-\widetilde G_L^{66}}\over \widetilde G_L^{66}} \; |k|
+ \gamma^i k_i \big) g^{jj'}
\big (a_{\vec k\,j} a^\dagger_{\vec k\,j'}
+ a^\dagger_{\vec k\,j}a_{\vec k\,j'}\big).
\end{align}
Since we are using a Lorentzian signature metric at this point,
$-\widetilde G_L^{66} >0$.
Then rewriting in terms of a real Euclidean radius $R_6$,
and making the upper sign choice in (\ref{k6}), we have
\begin{align}
- H_c + i\gamma^i P_i & = - i {1\over R_6} \; {R_6\sqrt{g}\over R_1}
\,(2\pi)^2
\sum_{\vec k\in\Z^4 \ne \vec 0} |k| \big( - i R_6 |k|
+\gamma^i k_i \big) g^{jj'}
\big (a_{\vec k\,j} a^\dagger_{\vec k\,j'}
+ a^\dagger_{\vec k\,j}a_{\vec k\,j'}\big).
\end{align}
Inserting the polarizations as $a_{\vec k\,i} = f_i^\kappa a^\kappa_{\vec k}
$ and $a^\dagger_{\vec k\,i} = f_i^{\lambda\ast}
a^{\lambda\dagger}_{\vec k}$ from (\ref{polarize}) in the
commutator (\ref{aacom}) gives
\begin{align}
[a_{\vec k\,i}, a^\dagger_{\vec k'\,j}]
&={R_1\over R_6\sqrt{g}} {R_6 \over |k|}
{1\over 2 (2\pi)^2} \Big( g_{ij} - {k_ik_j \over |k|^2}\Big)
\delta_{\vec k,\vec k'}
= f_i^\kappa f_j^{\lambda\ast}
[a^\kappa_{\vec k}, a^{\lambda\dagger}_{\vec k'}],
\end{align}
where we choose the normalization
\begin{align}
[a^\kappa_{\vec k}, a^{\lambda\dagger}_{\vec k'}]
= \delta^{\kappa\lambda} \delta_{\vec k,\vec k'}.
\label{akappa}\end{align}
Then the polarization vectors satisfy
\begin{align}
f_i^\kappa f_j^{\lambda\ast} \delta^{\kappa\lambda}
&={R_1\over \sqrt{g}\,|k|}
{1\over 2 (2\pi)^2} \Big( g_{ij} - {k_ik_j \over |k|^2}\Big),\qquad
g^{jj'} f_j^\kappa f_{j'}^{\lambda\ast} \delta^{\kappa\lambda} =
{R_1\over \sqrt{g}\, |k|}
{1\over 2 (2\pi)^2} \cdot 3,\cr
g^{jj'} f_j^\kappa f_{j'}^{\lambda\ast} &= \delta^{\kappa\lambda}
{R_1\over \sqrt{g}\, |k|} {1\over 2 (2\pi)^2}.
\nonumber\end{align}
So the exponent in (\ref{wholepf}) is given by
\begin{align}
-H_c + i\gamma^i P_i &= - i {1\over R_6} \; {R_6\sqrt{g}\over R_1}(2\pi)^2
\sum_{\vec k\in\Z^4 \ne \vec 0} |k| \big( - i R_6 |k|
+ \gamma^i k_i \big) g^{jj'}
\big ( 2 a^\dagger_{\vec k\,j}a_{\vec k\,j'} +
[ a_{\vec k\,j}, a^\dagger_{\vec k\,j'}]\, \big)\cr
& = -i \sum_{\vec k\in\Z^4 \ne \vec 0} \big(
\gamma^i k_i - i R_6 |k|\,\big) \, a_{\vec k}^{\kappa\dagger}
a_{\vec k}^\kappa \quad -{i\over 2} \sum_{\vec k\in\Z^4 \ne \vec 0}
\big( -i R_6 |k|\,\big)\,\delta^{\kappa\kappa}.
\label{HplusP}\end{align}
Then the partition function is
\begin{align}
Z^{5d, Maxwell} &\equiv tr \;\exp\{2\pi (- H_c + i\gamma^i P_i)\}
= Z^{5d}_{\rm zero\, modes} \; Z^{5d}_{\rm osc},
\end{align} where from (\ref{HplusP}),
\begin{align}
Z^{5d}_{\rm osc} &
= tr \; e^{ -2\pi i \sum_{\vec k\in\Z^4 \ne \vec 0} \big(
\gamma^i k_i - i R_6 |k|\,\big) \, a_{\vec k}^{\kappa\dagger}
a_{\vec k}^\kappa \;\; - \, \pi R_6 \sum_{\vec k\in\Z^4 \ne \vec 0}
|k| \,\delta^{\kappa\kappa}}.
\label{ZHplusP}\end{align}
\section{\bf Comparison of Oscillator Traces $Z^{5d}_{\rm osc}$
and $Z^{6d}_{\rm osc}$}
In order to compare the partition functions of the two theories,
we first review the calculation for the 6d chiral field from \cite{DN},
setting the angles $\alpha, \beta^i$ between the circle and the five-torus to zero.
The oscillator trace is evaluated by rewriting (\ref{6dpf}) as
\begin{align} -2\pi R_6{\cal H} +i2\pi\gamma^i\P_i&=
{i\pi\over 12}\int_0^{2\pi} d^5\theta H_{lrs}
\epsilon^{lrsmn} H_{6mn}
= {i\pi\over 2}\int_0^{2\pi} d^5\theta \sqrt{-G}
H^{6mn} H_{6mn}\cr
&={-i\pi \int_0^{2\pi} d^5\theta
(\Pi^{mn} H_{6mn} + H_{6mn} \Pi^{mn})}
\label{chone}\end{align}
where the definitions
$H^{6mn} = {1\over 6\sqrt{-G}}\epsilon^{mnlrs} H_{lrs}$ and
$H_{6mn}= {1\over 6 \sqrt{-G} G^{66}}\epsilon_{mnlrs}H^{lrs}$ follow from
the self-dual equation of motion (\ref{sd3form}).
$\Pi^{mn}(\vec\theta,\theta^6)$,
the field conjugate to $B_{mn}(\vec\theta,\theta^6)$, is defined
from the Lagrangian
for a general (non-self-dual) two-form
$I_6=\int d^6\theta (-{\sqrt{-G}\over 24})H_{LMN} H^{LMN}$, so
$\Pi^{mn}\nobreak={\textstyle{\delta I_6\over \delta \partial_6
B_{mn}}} = -{\sqrt{-G}\over 4} H^{6mn}\,.$
The commutation relations of the two-form
and its conjugate field $\Pi^{mn}(\vec\theta,\theta^6)$ are
\begin{align}
[\Pi^{rs}(\vec\theta,\theta^6),
B_{mn}(\vec\theta',\theta^6)]
=& -i\delta^5 (\vec\theta - \vec\theta')
(\delta^r_m\delta^s_n - \delta^r_n\delta^s_m),\cr
[\Pi^{rs}(\vec\theta,\theta^6),
\Pi^{mn}(\vec\theta',\theta^6)]
=& [B_{rs}(\vec\theta,\theta^6),
B_{mn}(\vec\theta',\theta^6)] =0.\nonumber\end{align}
{}From the Bianchi identity $\partial_{[L}H_{MNP]} = 0$ and the fact
that (\ref{sd3form}) implies $\partial^L H_{LMN} =0$,
a solution to (2.1) is given by a solution to the homogeneous equations
$\partial^L\partial_L B_{MN}=0,$ $\partial^L B_{LN} =0\,.$
These have a plane wave solution
\begin{align}B_{MN}(\vec\theta,\theta^6)
&=f_{MN}(p) e^{ip\cdot\theta} + (f_{MN}(p) e^{ip\cdot\theta})^\ast;
\qquad G^{LN}p_L p_N =0\,;\qquad p^L f_{LN} =0;\label{ceom}\end{align}
and quantum tensor field expansion
\begin{align}
B_{mn} (\vec\theta, \theta^6) = {\rm zero\, modes} \quad +
\sum_{\vec p= p_l \in {\cal Z}^5\ne \vec 0}
( f_{mn}^\kappa b_{\vec p}^\kappa e^{ip\cdot\theta}
+ f_{mn}^{\kappa\ast} b_{\vec p}^{\kappa\dagger} e^{-ip^\ast\cdot\theta})
\end{align}
for the three physical polarizations of the 6d chiral two-form
\cite{DN}, $1\le\kappa\le 3$.
Because oscillators with different polarizations commute,
each polarization can be treated separately and the result then cubed.
Without the zero mode term,
\begin{align}
B_{mn}(\vec\theta, \theta^6) =
\sum_{\vec p\ne 0} ( b_{\vec p mn}\,e^{ip\cdot\theta}
+ b_{\vec p mn}^{\dagger} e^{-ip^\ast\cdot\theta})\,,\end{align}
for $b_{\vec pmn}= f_{mn}^1 b_{\vec p}^1$ for example, with a similar
expansion for $\Pi^{mn}(\vec\theta, \theta^6)$ in terms of
$c^{6mn\dagger}_{\vec p}$.
From (\ref{ceom}) the momentum $p_6$ is
\begin{align}
p_6 = -\gamma^ip_i -iR_6\sqrt{g^{ij} p_ip_j + {p_1^2\over R_1^2}}.
\end{align}
For the gauge choice $B_{6n}=0$, the exponent (\ref{chone}) becomes
\begin{align}
&-i\pi (2\pi)^5
\sum_{\vec p=p_l\in {\cal Z}^5\ne 0} i p_6 ({\cal C}_{\vec p}^{6mn\dagger}
B_{\vec p mn} +
B_{\vec p mn} {\cal C}_{\vec p}^{6mn\dagger} )\cr
&= -2 i \pi\sum_{\vec p \ne 0} p_6
{\cal C}_{\vec p}^{\kappa\dagger} B_{\vec p}^\lambda
f^{\kappa mn}(p) f^\lambda_{mn}(p)
- i\pi \sum_{\vec p \ne 0} p_6 f^{\kappa mn}(p) f^\kappa_{mn}(p)\cr
&= -2 i \pi\sum_{\vec p \ne 0} p_6
{\cal C}_{\vec p}^{\kappa\dagger} B_{\vec p}^\kappa
-i \pi \sum_{\vec p \ne 0} p_6 \delta^{\kappa\kappa},
\end{align}
with $B_{\vec p \,mn} \equiv b_{\vec p \,mn} + b_{-\vec p \,mn}^\dagger $,
${\cal C}_{\vec p}^{6mn\dagger} \equiv
c_{-\vec p}^{6mn} + c_{\vec p}^{6mn\dagger}$.
The polarization tensors have been restored where $1\le\kappa,\lambda\le 3$ and
the oscillators $B_{\vec p}^\kappa, {\cal C}_{\vec p}^{\lambda\dagger}$
satisfy the commutation relation
\begin{align}
[B_{\vec p}^\kappa, {\cal C}_{\vec p'}^{\lambda\dagger}]=
\delta^{\kappa\lambda} \; \delta_{\vec p,\vec p'}.
\label{BCcr}\end{align}
So restricting the manifold to a circle times a five-torus as in \cite{DN},
we have
\begin{align}
&-2\pi R_6{\cal H} +i2\pi\gamma^iP_i\cr&=
-2i\pi \sum_{\vec p \in {\cal Z}^5\ne 0}
\Big(-\gamma^ip_i -iR_6\sqrt{g^{ij} p_ip_j + {p_1^2\over R_1^2}}\Big)
{\cal C}_{\vec p}^{\kappa\dagger} B_{\vec p}^\kappa
-\pi R_6 \sum_{\vec p \in {\cal Z}^5}
\sqrt{g^{ij} p_i p_j + {p_1^2\over R_1^2}}\;
\delta^{\kappa\kappa}
\label{HPexp}\end{align}
The oscillator trace (\ref{6dpf}) is
\begin{align}
Z^{6d}_{\rm osc} &= tr \, e^{-t{\cal H} + i2\pi\gamma^iP_i}
= tr \, e^{-2i\pi \sum_{\vec p \ne 0} p_6
{\cal C}_{\vec p}^{\kappa\dagger} B_{\vec p}^\kappa
-\pi R_6 \sum_{\vec p } \sqrt{g^{ij}p_ip_j + {p^2_1\over R_1^2}}\;\,
\delta^{\kappa\kappa}},\cr
Z^{6d, chiral} &= Z^{6d}_{\rm zero\,modes}\cdot\bigl ( e^{-\pi R_6
\sum_{\vec n \in {\cal Z}^5}
\sqrt{g^{ij} n_i n_j + {n^2_1\over R_1^2}}}\, \prod_{\vec n \in {\cal Z}^5
\ne 0}
{1\over 1-e^{-2\pi R_6\sqrt{g^{ij} n_i n_j + {n^2_1\over R_1^2}}
\; +i2\pi\gamma^in_i}}\bigr)^3.\cr
\label{f6doscpf}\end{align}
\vskip-10pt
Regularizing the vacuum energy as in \cite{DN}, the chiral
field partition function (\ref{6dpf}) becomes
\begin{align}
Z^{6d, chiral}&
= Z^{6d}_{\rm zero\,modes} \cdot
\Bigl ( e^{ R_6 \pi^{-3} \sum_{\vec n\ne \vec 0} {\sqrt{G_5}\over
(g_{ij} n^in^j + R_1^2 (n^1)^2)^{\,3}}}\,
\prod_{\vec n\in {\cal Z}^5\ne \vec 0}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j + {(n_1)^2\over R_1^2}}
+ i 2\pi \gamma^i n_i}}}\Bigr )^3,
\label{oscb}\end{align}
\vskip-10pt
where $Z^{6d}_{\rm zero\,modes}$ is given in (\ref{chzm}).
Lastly we compute the 5d Maxwell partition function (\ref{wholepf})
from (\ref{ZHplusP}),
\begin{align}
Z^{5d, Maxwell}&= Z^{5d}_{\rm zero\;modes} \cdot
\, tr \; e^{-2i\pi\sum_{\vec k\ne \vec 0} (\gamma^i k_i
-iR_6\sqrt{g^{ij}k_ik_j})
\, a_{\vec k}^{\kappa\dagger} \,a_{\vec k}^{\kappa}
- \pi\sum_{\vec k\ne \vec 0}
(R_6\sqrt{g^{ij} k_ik_j}) \, \delta^{\kappa\kappa}},\label{wpf2}
\end{align}
where $\vec k = k_i = n_i\in {\cal Z}^4$ on the torus.
From the standard Fock space argument
$$tr\;\omega^{\sum_p p a^\dagger_p a_p}
=\prod_p\sum_{k=0}^\infty \langle k |\omega^{p a^\dagger_p a_p} | k\rangle
=\prod_p {\textstyle 1\over {1 - \omega^p}},$$
we perform the trace on the oscillators,
\begin{align}
Z^{5d}_{\rm osc} &=
\, \Big( e^{-\pi R_6\sum_{\vec n\in {\cal Z}^4} \sqrt{g^{ij}n_in_j}}
\; \; \prod_{\vec n\in {\cal Z}^4\ne \vec 0} {1\over 1 - e^{-i2\pi
(\gamma^in_i -i R_6\sqrt{g^{ij} n_in_j})}}\Big)^3,\label{opf}\\
Z^{5d, Maxwell} &= Z^{5d}_{\rm zero\;modes} \cdot
\, \Big( e^{-\pi R_6\sum_{\vec n\in {\cal Z}^4} \sqrt{g^{ij}n_in_j}}
\; \; \prod_{\vec n\in {\cal Z}^4\ne \vec 0} {1\over 1 - e^{-2\pi
R_6\sqrt{g^{ij} n_in_j} -2\pi i\gamma^in_i}} \Big)^3,
\label{wpf3}\end{align}
where $Z^{5d}_{\rm zero\;modes}$ is given in (\ref{pfi}).
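As a small side check (not in the original text), the single-mode Fock space trace formula used above, $tr\,\omega^{p\,a^\dagger a} = 1/(1-\omega^p)$, can be verified numerically by truncating the geometric series; the weight chosen below is illustrative:

```python
import cmath

def fock_trace(omega, p, nmax=2000):
    # tr omega^{p a^dagger a} for one oscillator, truncated at occupation nmax
    w = omega ** p
    return sum(w ** k for k in range(nmax))

# sample weight with |omega| < 1 so the trace converges
omega = cmath.exp(-0.3 + 0.2j)
for p in (1, 2, 3):
    closed_form = 1.0 / (1.0 - omega ** p)
    assert abs(fock_trace(omega, p) - closed_form) < 1e-10
print("Fock trace matches 1/(1 - omega^p)")
```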
Both (\ref{wpf3}) and (\ref{f6doscpf}) are manifestly
$SL(4,{\cal Z})$ invariant,
due to the underlying $SO(4)$ invariance of the directions we have labeled $i=2,3,4,5$.
We use the $SL(4,{\cal Z})$ invariant regularization of the
vacuum energy reviewed in Appendix C to obtain
\begin{align}
Z^{5d, Maxwell}&= Z^{5d}_{\rm zero\;modes} \cdot
\, \Big( e^{{3 \over 8}R_6\pi^{-2}
\sum_{\vec n\ne 0} {\sqrt{g}\over ({g_{ij}n^in^j})^{5\over 2}}}
\; \; \prod_{\vec n\in {\cal Z}^4\ne \vec 0} {1\over 1 - e^{-2\pi
R_6\sqrt{g^{ij} n_in_j} - 2\pi i \gamma^in_i}} \Big)^3,
\label{wpf4}\end{align}
where the sum is on the original lattice $\vec n = n^i\in {\cal Z}^4\ne \vec 0,$
and the product is on the dual lattice $\vec n = n_i\in {\cal Z}^4\ne \vec 0.$
In Appendix D we prove that the product of the zero mode contribution
and the oscillator
contribution in (\ref{wpf4}) is $SL(5,{\cal Z})$ invariant.
In (\ref{wpf4again}) we give an equivalent expression,
\begin{align}
Z^{5d, Maxwell}&= Z^{5d}_{\rm zero\;modes} \cdot
\Big( e^{\pi R_6\over 6 R_2} \prod_{n\ne 0} {1\over
1 - e^{-2\pi {R_6\over R_2}|n| +2\pi i \gamma^2 n}}\Big )^3
\cr& \hskip10pt \cdot
\Big( \prod_{n_\alpha\in \Z^3\ne (0,0,0)} e^{-2\pi R_6 <H>_{p_\perp}}
\; \; \prod_{n_2\in {\cal Z}} {1\over 1 - e^{-2\pi
R_6\sqrt{g^{ij} n_in_j} + 2\pi i\gamma^i n_i}} \Big)^3,\cr
\label{wpfyetagain}\end{align}
with $ <H>_{p_\perp}$ defined in (\ref{BES5}).
In Appendix D we also prove the $SL(5,\Z)$ invariance of
the 6d chiral partition function
(\ref{oscb}), using the equivalent form (\ref{a6dpf}),
\begin{align}
Z^{6d, chiral} &= Z^{6d}_{\rm zero\,modes}\cdot\bigl (e^{\pi R_6\over 6 R_2}
\prod_{n\in\Z\ne 0} {1\over
1 - e^{- 2\pi {R_6\over R_2} |n| + 2\pi i \,\gamma^2 \,n}}\bigr )^3\cr
&\hskip20pt \cdot
\bigl(\prod_{n_\perp\in\Z^4\ne (0,0,0,0)}
e^{-2\pi R_6 < H>^{6d}_{p_\perp}} \prod_{n_2\in {\cal Z}}
{1\over 1-e^{-2\pi R_6\sqrt{g^{ij} n_i n_j + {n^2_1\over R_1^2}}
\; +i2\pi\gamma^in_i}}\bigr)^3\label{wpfmore}\end{align}
with $< H>^{6d}_{p_\perp}$ in (\ref{BES6}).
Thus the partition functions of the two theories are both $SL(5,\Z)$ invariant,
but they are not equal.
The comparison of the 6d chiral theory on
$S^1\times T^5$ and the abelian gauge theory on $T^5$ shows that the
exponents of the oscillator contributions to the partition functions,
for the 6d theory (\ref{HPexp}),
\begin{align}
&-2\pi R_6{\cal H} +i2\pi\gamma^i\P_i\cr&=
-2\pi \sum_{\vec p \in {\cal Z}^5\ne 0}
\Big( -i\gamma^ip_i + R_6\sqrt{g^{ij} p_ip_j + {p_1^2\over R_1^2}}\,\Big)
\; {\cal C}_{\vec p}^{\kappa\dagger} B_{\vec p}^\kappa
-\pi R_6 \sum_{\vec p \in {\cal Z}^5}
\sqrt{g^{ij} p_i p_j + {p_1^2\over R_1^2}}\;\;
\delta^{\kappa\kappa},
\cr\label{HPexpb}\end{align}
and for the gauge theory
(\ref{HplusP}),
\begin{align}
- 2\pi H^{5d} + 2\pi i \gamma^i P^{5d}_i &
= -2\pi\, \sum_{\vec k\in {\cal Z}^4 \ne 0}
\Big(i\gamma^i k_i + R_6\sqrt{g^{ij}k_ik_j} \Big) \,
\, a_{\vec k}^{\kappa\dagger} \,a_{\vec k}^{\kappa}
- \pi R_6 \sum_{\vec k\in {\cal Z}^4}
\sqrt{g^{ij} k_ik_j} \, \delta^{\kappa\kappa},
\label{addon5b}
\end{align}
differ only by the sum over the Kaluza-Klein modes $p_1$ of $S^1$, since
in the chiral case $\vec p\in {\cal Z}^5$, while in the Maxwell case
$\vec k\in {\cal Z}^4$.
Both theories have three polarizations, $1\le \kappa\le 3$, and
from (\ref{BCcr}), (\ref{akappa})
the oscillators have the same commutation relations,
\begin{align}
[B_{\vec p}^\kappa, {\cal C}_{\vec p'}^{\lambda\dagger}]=
\delta^{\kappa\lambda} \; \delta_{\vec p,\vec p'},\qquad
[a_{\vec k}^\kappa, a_{\vec k'}^{\lambda\dagger} ]
&= \delta^{\kappa\lambda} \; \delta_{\vec k,\vec k'}.
\end{align}
If we discard the
Kaluza-Klein modes $p_1$ in the usual limit \cite{Witten}, where
the radius $R_1$ of the circle is
very small with respect to the radii and angles $g_{ij}, R_6$
of the five-torus, then the oscillator products in
(\ref{wpfmore}) and (\ref{wpfyetagain}) are equivalent.
This holds as a precise limit
since we can
separate the product on $ n_\perp = (n_1,n_\alpha)\ne 0_\perp $
in (\ref{wpfmore}),
into
$(n_1=0,n_\alpha\ne (0,0,0))$ and $(n_1\ne 0, \hbox{all}\; n_\alpha)$,
to find at fixed $n_2$,
\begin{align}
&\prod_{n_\perp\in{\cal Z}^4\ne (0,0,0,0)}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j + {(n_1)^2\over R_1^2}}
+ 2\pi i\gamma^in_i}}}\cr
&\hskip30pt
= \prod_{n_\alpha\in \Z^3\ne (0,0,0)}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j } + 2\pi i \gamma^in_i
}}}\cdot \prod_{n_1\ne 0, n_\alpha\in {\cal Z}^3}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j + {(n_1)^2\over R_1^2}}
+ 2\pi i\gamma^in_i}}}.\cr
\end{align}
In the limit of small $R_1$ the last product reduces to unity; thus,
for $S^1$ smaller than $T^5$,
\begin{align}
&\prod_{n_\perp\in{\cal Z}^4\ne (0,0,0,0)}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j + {(n_1)^2\over R_1^2}}
+2\pi i\gamma^in_i}}}
\rightarrow \prod_{n_\alpha\in {\cal Z}^3\ne (0,0,0)}
{\textstyle 1\over{1- e^{-2\pi R_6 \sqrt{g^{ij} n_in_j } + 2\pi i\gamma^in_i
}}}.
\label{65lim}\end{align}
Inspecting the regularized vacuum energies $<H>_{p_\perp}$ and
$<H>_{p_\perp}^{6d}$ in (\ref{BES5}),(\ref{BES6}),
\begin{align}
<H>_{p_\perp\ne 0}&=-{\pi}^{-1}\; |p_\perp|
\sum_{n=1}^\infty\cos(p_\alpha\kappa^\alpha 2\pi n)
{K_1(2\pi n R_2 |p_\perp|)\over n}, \quad
\hbox{for}\quad
|p_\perp|\equiv \sqrt{\widetilde g^{\alpha\beta} n_\alpha n_\beta},\cr
<H>^{6d}_{p_\perp\ne 0}&=-{\pi}^{-1}\; |p_\perp|
\sum_{n=1}^\infty\cos(p_\alpha\kappa^\alpha 2\pi n)
{K_1(2\pi n R_2 |p_\perp|)\over n}, \quad
\hbox{for}\quad|p_\perp|\equiv \sqrt{{(n_1)^2\over R_1^2}
+ \widetilde g^{\alpha\beta} n_\alpha n_\beta},
\end{align}
we see that they have the same form in terms of the modified Bessel
function $K_1$, but their arguments differ by the Kaluza-Klein modes.
Again separating the product on $n_\perp = (n_1,n_\alpha)$ in (\ref{wpfmore}),
into \hfill\break
$(n_1=0,n_\alpha\ne (0,0,0))$ and $(n_1\ne 0, \hbox{all}\; n_\alpha)$
we have
\begin{align}
\prod_{n_\perp\in\Z^4\ne (0,0,0,0)}
e^{-2\pi R_6 < H>^{6d}_{p_\perp}} =
\bigl(\prod_{n_\alpha\in\Z^3\ne (0,0,0)}
e^{-2\pi R_6 < H>_{p_\perp}}\bigr)\cdot
\bigl(\prod_{n_1\ne 0, n_\alpha\in\Z^3}
e^{-2\pi R_6 < H>^{6d}_{p_\perp}}\bigr).
\label{relH}\end{align}
In the limit $R_1\rightarrow 0$, the last product is unity because
for $n_1\ne 0$, \begin{align}
&\lim_{R_1\rightarrow 0} \sqrt{{(n_1)^2\over R_1^2}+ \widetilde
g^{\alpha\beta} n_\alpha n_\beta} \sim {|n_1|\over R_1},\cr
&\lim_{R_1\rightarrow 0} |p_\perp| \; K_1(2\pi n R_2 |p_\perp|)
=\lim_{R_1\rightarrow 0} {|n_1|\over R_1}\: K_1
\big (2\pi n R_2 {|n_1|\over R_1}\big)
=0,
\end{align}
since $x K_1(x) \sim \sqrt{\pi x/2}\; e^{-x} \rightarrow 0$
as $x\rightarrow\infty$ \cite{AS}. So (\ref{relH}) leads to
\begin{align}
\lim_{R_1\rightarrow 0} \prod_{n_\perp\in\Z^4\ne (0,0,0,0)}
e^{-2\pi R_6 < H>^{6d}_{p_\perp}} =
\prod_{n_\alpha\in\Z^3\ne (0,0,0)} e^{-2\pi R_6 < H>_{p_\perp}}.
\end{align}
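The decay of $x K_1(x)$ used in taking this limit can be verified numerically. The sketch below (illustrative only, not part of the derivation; Python standard library) evaluates $K_1$ from its integral representation $K_1(x)=\int_0^\infty e^{-x\cosh t}\,\cosh t\,dt$ and compares $x K_1(x)$ with its leading-order asymptote:

```python
import math

def k1(x, tmax=12.0, n=4000):
    # Modified Bessel function of the second kind, K_1(x), from the
    # integral representation K_1(x) = int_0^inf exp(-x cosh t) cosh t dt,
    # evaluated with the composite Simpson rule on [0, tmax].
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * h / 3.0

# x K_1(x) ~ sqrt(pi x / 2) exp(-x) -> 0, as used in the limit above
for x in (1.0, 5.0, 20.0):
    print(x, x * k1(x), math.sqrt(math.pi * x / 2.0) * math.exp(-x))
```

The printed values show the rapid exponential suppression of the Kaluza-Klein terms as the effective argument grows.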
Thus in the limit where the radius of the circle $S^1$ is
small with respect to $T^5$, which is the
limit of weak coupling $g_{5YM}^2$, we have proved
\begin{align}
\lim_{R_1\rightarrow 0} & \prod_{n_\perp\in\Z^4\ne (0,0,0,0)}
e^{-2\pi R_6 < H>^{6d}_{p_\perp}} \prod_{n_2\in {\cal Z}}
{1\over 1-e^{-2\pi R_6\sqrt{g^{ij} n_i n_j + {n^2_1\over R_1^2}}
\; +i2\pi\gamma^in_i}}\cr&
= \prod_{n_\alpha\in \Z^3\ne (0,0,0)} e^{-2\pi R_6 <H>_{p_\perp}}
\; \; \prod_{n_2\in {\cal Z}} {1\over 1 - e^{-2\pi
R_6\sqrt{g^{ij} n_in_j} + 2\pi i\gamma^i n_i}}.
\end{align}
So together with
(\ref{zerocomp}), we have shown the partition functions
of the chiral theory on $S^1\times T^5$ and of Maxwell theory on $T^5$,
which we computed in (\ref{wpfmore}) and (\ref{wpfyetagain}),
are equal only in the weak coupling limit,
\begin{align}
\lim_{R_1\rightarrow 0} \; Z^{6d, chiral} = Z^{5d, Maxwell}.
\end{align}
\section{\bf Discussion and Conclusions}
We have addressed a conjecture of the quantum equivalence between
the six-dimensional conformally invariant $N=(2,0)$ theory
compactified on a circle and the five-dimensional maximally
supersymmetric Yang-Mills theory. In this paper we consider the abelian case
without supersymmetry, where the five-dimensional manifold is a twisted torus.
We compute the partition
functions for the chiral tensor field $B_{LN}$ on $S^1\times T^5$,
and for the Maxwell field $A_{\tilde m}$ on $T^5$. We prove the two
partition functions are each $SL(5,\Z)$ invariant, but are
equal only in the limit of weak coupling
$g^2_{5YM}$, a parameter which is proportional to $R_1$, the radius of
the circle $S^1$.
To carry out the computations we first
restricted an earlier calculation \cite{DN} of the chiral partition function
on $T^6$ to $S^1\times T^5$. Then we used an operator quantization
to compute the Maxwell partition function on $T^5$ as defined in (\ref{wholepf})
which inserts non-zero $\gamma^i$ as the coefficient of $P^{5d}_i$,
but otherwise quantizes the theory in a 5d Lorentzian signature metric
that has zero angles with its time direction, {\it i.e.} $\gamma^i=0,\;
2\le i\le 5,$
\cite{GSW}. We used this metric and form (\ref{wholepf})
to derive both the zero mode and oscillator contributions.
The Maxwell field theory was thus quantized on $T^5$,
with the Dirac method of constraints resulting in
the commutation relations in (\ref{aacom}).
Comparing the partition function of the Maxwell field on a twisted
five-torus $T^5$ with that of a two-form potential with a self-dual three-form
field strength on $S^1\times T^5$, where the radius of the circle
is $R_1\equiv g^2_{5YM}/ 4\pi^2$, we find the two theories are not equivalent
as quantum theories, but are equal only
in the limit where $R_1$ is small relative to the metric parameters of
the five-torus, a limit which effectively removes the Kaluza-Klein modes
from the 6d partition sum.
How to incorporate these modes rigorously in the 5d theory,
where they are possibly interpreted as instantons with appropriate dynamics
in the non-abelian version of the gauge theory,
remains difficult \cite{Seibergone}-\cite{Bern}.
This suggests that the 6d finite conformal $N=(2,0)$ theory on a circle
is an ultraviolet completion of the 5d maximally supersymmetric
gauge theory rather than an exact quantum equivalence.
Furthermore, it would be compelling
to find how expressions for the partition function of the 6d $N=(2,0)$
conformal quantum theory computed on various manifolds using localization
should reduce to the expression in \cite{DN} in an appropriate limit,
providing a check that localization is equivalent to canonical quantization.
\section*{Acknowledgments}
We are grateful to Michael Douglas, Peter Goddard, Sergei Gukov and
Edward Witten for discussions.
LD thanks the Institute for Advanced Study at Princeton for its hospitality.
LD and YS are partially supported by the U.S. Department of Energy, Grant No.
DE-FG02-06ER-4141801, Task A.
\vfill\eject
\section{Introduction}
Cohesive zone models (CZMs) were pioneered by Dugdale \cite{174} and Barenblatt \cite{173} and have been extensively employed to study the delamination process. In CZMs traction-separation laws are employed to describe the interface interactions as well as any associated dissipation. These models can be coupled or uncoupled. In an uncoupled model, the surface tractions are only dependent on their corresponding gap value \cite{162}. However, Savkoor and Briggs \cite{124} experimentally showed that in adhesive contact problems there is an interaction between different components of surface forces. As a result, in the last decades, increasing attention has been devoted to the use of coupled mixed-mode CZMs. The advantages of these methods over other approaches are their computational efficiency and versatility for numerical implementation \cite{83}.
In the relevant literature, there is a large variety of cohesive zone models. Most of them can be categorized into the following groups: polynomial \cite{164}, piece-wise linear \cite{165}, exponential \cite{88}, and rigid-linear cohesive zone models \cite{166} (for more details on each model see \cite{81}). Among them, the exponential type is one of the most popular CZMs due to the following advantages: First of all, a phenomenological description of contact is automatically achieved in normal compression. Secondly, the tractions and their derivatives are continuous, which is favourable from an implementation and computational point of view. Thirdly, the exponential models originate from the universal relationship between binding energies and atomic separation of bimetallic interfaces \cite{167}.
In addition to mixed-mode separation, CZMs have also been used to model mixed-mode closure. The term closure refers to the phenomenon whereby contacting surfaces push into one another under a compressive load, e.g. in an indentation contact problem \cite{163}. However, physically realistic closure behaviour is not trivially achieved by most CZM formulations \cite{87}.
In the present study, we focus on the exponential CZMs because of the advantages mentioned above. First, some existing models are reviewed. Then, it is shown that the application of these models is limited to separation and/or two-dimensional cases. Finally, a coupled mixed-mode CZM in a three-dimensional setting is introduced which is applicable to the cases of separation and in particular closure.
\section{Some existing models and their range of applicability}
Among the widening class of exponential CZMs, the 2D model of Xu-Needleman (XN) is one of the most frequently used \cite{88}. In this model, at the cohesive surface, interfacial tractions in normal ($T_{1}$) and transverse directions ($T_{2}$) are defined as follows:
\begin{equation}
\setlength{\jot}{10pt}
\begin{split}
T_{1} &= \dfrac{\phi_1}{\delta_1} ~ \text{exp} (- \dfrac{\Delta_1}{\delta_1})~\left\lbrace \dfrac{\Delta_1}{\delta_1} ~ \text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}} ) + \dfrac{1-q}{r-1} \left[ 1 - \text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}} ) \right] \left[ r - \dfrac{\Delta_1}{\delta_1} ~ \right] \right\rbrace, \\
T_{2} &=2~ \dfrac{\phi_2}{\delta_2} ~ \dfrac{\Delta_2}{\delta_2} ~ \left[ q + \dfrac{r-q}{r-1} ~ \dfrac{\Delta_1}{\delta_1} \right] \text{exp} ( - \dfrac{\Delta_1}{\delta_1} ) ~\text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}}),
\end{split}
\label{XNn}
\end{equation}
where $\phi_i$, $\delta_i$ and $\Delta_i$ are the work of separation, characteristic length and gap value in direction $i=\{1, 2\}$, respectively. Coupling in this model is controlled through the parameters $q = \phi_2 / \phi_1$ and $r = \Delta^{*}_1 / \delta_1$, where $\Delta^{*}_1$ is the value of $\Delta_1$ after complete separation in the transverse direction takes place under the condition $T_{1} = 0$.
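The role of the coupling parameters can be probed directly from Eq. (\ref{XNn}). The sketch below (normalised gaps, $\delta_i = \phi_1 = 1$; the parameter values are purely illustrative) evaluates the XN normal traction after essentially complete transverse separation:

```python
import math

def xn_normal_traction(d1, d2, q, r, phi1=1.0):
    # Normal traction T_1 of the XN model, with gaps normalised by the
    # characteristic lengths: d1 = Delta_1/delta_1, d2 = Delta_2/delta_2.
    g2 = math.exp(-d2 ** 2)
    return phi1 * math.exp(-d1) * (
        d1 * g2 + (1.0 - q) / (r - 1.0) * (1.0 - g2) * (r - d1)
    )

# After (near-)complete transverse separation (large d2), T_1 vanishes
# only for q = 1; for q != 1 a residual normal traction survives.
print(xn_normal_traction(0.5, 10.0, q=0.5, r=0.0))  # nonzero residual
print(xn_normal_traction(0.5, 10.0, q=1.0, r=0.0))  # essentially zero
```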
Since Xu and Needleman introduced their exponential CZM in 1993, their model has been altered and extended by several authors. In 2006, a comprehensive study on the coupling parameters $r$ and $q$ was performed by van den Bosch \textit{et al.} \cite{81}. They demonstrated that only for $r = q$ the required transverse traction increases with increasing compression in the normal direction. Moreover, they showed that the XN model yields unrealistic results for $r \neq 1$, since even after complete separation in the transverse direction, $T_{1}$ does not become zero. Hence, they concluded that only for $r = q = 1$ a physical coupling behaviour is obtained. Nevertheless, setting $q$ equal to unity causes other issues. First, choosing $q = 1$ implies that $\phi_1 = \phi_2$. This assumption is often made in the literature; however, multiple experiments show that the works of separation in different directions are not equal \cite{93,94}. To solve this issue, an adjusted model based on the XN model was proposed by van den Bosch \textit{et al.} \cite{81}, the BSG model:
\begin{equation}
\setlength{\jot}{10pt}
\begin{split}
T_{1} &= \dfrac{\phi_1}{\delta_1} ~\dfrac{\Delta_1}{\delta_1}~ \text{exp} (- \dfrac{\Delta_1}{\delta_1})~ \text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}} ) , \\
T_{2} &=2~ \dfrac{\phi_2}{\delta_2} ~ \dfrac{\Delta_2}{\delta_2} ~ \left[ 1 + \dfrac{\Delta_1}{\delta_1} \right] \text{exp} ( - \dfrac{\Delta_1}{\delta_1} ) ~\text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}}).
\end{split}
\label{BSG}
\end{equation}
The BSG model works perfectly for problems with mixed-mode separation. However, setting $q$ equal to unity also raises an issue in problems involving mixed-mode closure: the work done in the transverse direction decreases with increasing compressive load in the normal direction. Consequently, for large values of closure $\Delta_{1}/\delta_{1}<-1$, a negative work in the transverse direction is computed.
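This sign change of the transverse work can be made explicit numerically. The sketch below (normalised units, $\delta_i = \phi_2 = 1$; gap values are illustrative) integrates the BSG transverse traction of Eq. (\ref{BSG}) over the full transverse separation, for which the closed form is $\phi_2 (1+\Delta_1/\delta_1)\,e^{-\Delta_1/\delta_1}$:

```python
import math

def bsg_transverse_traction(d1, d2, phi2=1.0):
    # T_2 of the BSG model with gaps normalised by the characteristic lengths
    return 2.0 * phi2 * d2 * (1.0 + d1) * math.exp(-d1) * math.exp(-d2 ** 2)

def transverse_work(d1, n=2000, d2max=6.0):
    # midpoint-rule integral of T_2 over the full transverse separation
    h = d2max / n
    return sum(bsg_transverse_traction(d1, (i + 0.5) * h) * h for i in range(n))

for d1 in (0.0, -1.0, -1.5):
    print(d1, transverse_work(d1))  # negative for closure beyond d1 = -1
```

The work is positive for separation, zero at $\Delta_1/\delta_1 = -1$, and negative for deeper closure, reproducing the non-physical behaviour described above.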
In order to correct the non-physical behaviour of the BSG model in mixed-mode closure problems, McGarry \textit{et al.} \cite{87} proposed a modified form of the BSG traction–separation relationship, the so-called NP1 model. This is done by simply removing the term $[ 1+\Delta_{1}/\delta_{1} ]$ from the definition of $T_{2}$ in Eq. (\ref{BSG}), without changing the relation for $T_{1}$:
\begin{equation}
\setlength{\jot}{10pt}
\begin{split}
T_{1} &= \dfrac{\phi_1}{\delta_1} ~\dfrac{\Delta_1}{\delta_1}~ \text{exp} (- \dfrac{\Delta_1}{\delta_1})~ \text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}} ) , \\
T_{2} &=2~ \dfrac{\phi_2}{\delta_2} ~ \dfrac{\Delta_2}{\delta_2} ~ \text{exp} ( - \dfrac{\Delta_1}{\delta_1} ) ~\text{exp} ( - \dfrac{\Delta_2^{2}}{\delta_2^{2}}).
\end{split}
\label{NP1}
\end{equation}
This new formulation is able to obtain correct coupling in both separation and closure problems.
So far, only a few CZMs from the large body of literature have been addressed here, all in a two-dimensional setting. Contrary to the 2D cases, only a relatively small number of studies have been devoted to 3D cohesive surfaces \cite{168,169,170}. In one class of these models, an effective gap value $\Delta_{\text{eff}}$ is employed:
\begin{equation}
\Delta_{\text{eff}} = \sqrt{\Delta_{1}^{2}+\beta^{2}(\Delta_{2}^{2}+\Delta_{3}^{2})},
\label{delta_eff}
\end{equation}
where $\beta$ is a scalar and assigns a different weight to opening displacements for transverse and normal directions. Subsequently, the calculated $\Delta_{\text{eff}}$ is employed to obtain the traction value. Its merits include simplicity and a straightforward formulation in 3D. However, whether these models can be applied in mixed-mode problems remains an issue \cite{171}. Besides, to the best of our knowledge, these models have merely been applied to crack-opening problems, i.e. single-mode separation cases.
In another class of 3D CZMs, e.g. \cite{171,172}, all three components of the gap function are explicitly incorporated into the calculation of surface tractions. These models follow the framework of Xu and Needleman. Hence, they are very similar to Eq. (\ref{XNn}) or (\ref{BSG}) and work perfectly in separation problems. However, they show unrealistic behaviour in indentation-induced problems, as discussed earlier in the BSG model.
\section{Extension to 3D contact problems: 3DC model}
In an effort to overcome the above-mentioned problems of the existing models, we propose a phenomenological three-dimensional coupled (3DC) mixed-mode CZM. Since the NP1 model \cite{87} of Eq. (\ref{NP1}) works perfectly in both problems of separation and closure, the proposed 3DC model will be obtained by extending NP1 to 3D contact problems.
In the 3DC model, the traction-separation relationships $T_{i}$ in the normal ($\text{x}_{1}$) and transverse ($\text{x}_{2},\text{x}_{3}$) directions have the following expressions:
\begin{equation}
T_{i} = \dfrac{\phi_{i}}{\delta_{i}}~\dfrac{\Delta_{i}}{\delta_{i}}~\text{exp}\left[ - \sum_{j=1}^{d} (\dfrac{\Delta_{j}}{\delta_{j}})^{\alpha} \right]; \hspace{8mm} i=[1,d ],\hspace{5mm} \alpha =
\left\{ \begin{array}{lcl}
1 \hspace{8mm} j=1 \\
2 \hspace{8mm} j \neq 1
\end{array}
\right.
,
\label{3DC}
\end{equation}
where $d=3$ is the dimension of the problem. Under the assumption of transverse isotropy, the work of separation and the characteristic length are the same for both transverse directions:
\begin{equation}
\setlength{\jot}{10pt}
\begin{split}
\phi_2 &= \phi_3 \coloneqq \phi_\text{t} , \\
\delta_2 &= \delta_3 \coloneqq \delta_\text{t}.
\end{split}
\label{istp}
\end{equation}
Note that by limiting the dimension and setting $d=2$ for a 2D case, the proposed model in Eq. (\ref{3DC}) almost coincides with the formulation of Eq. (\ref{NP1}). Hence, it provides a physically realistic representation of the cases of mixed-mode separation and in particular mixed-mode closure. Moreover, it preserves all the essential features of the original XN model. Similar to the XN model, the 3DC model is built upon different independent parameters, i.e. $\phi_{i}$ and $\delta_{i}$, which have physical meaning and can be determined via mode-I (opening mode) and mode-II (shearing mode) experiments.
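This reduction can be checked numerically. In the sketch below (normalised units, $\delta_i=\phi_i=1$, illustrative gap values), the normal tractions of the two models coincide, while the transverse tractions share the same functional form and differ only by the constant prefactor 2 appearing in Eq. (\ref{NP1}):

```python
import math

def np1_tractions(d1, d2):
    # NP1 model of Eq. (NP1) in normalised units
    e = math.exp(-d1) * math.exp(-d2 ** 2)
    return d1 * e, 2.0 * d2 * e

def three_dc_tractions(gaps):
    # 3DC model: alpha = 1 for the normal (j = 1) gap, alpha = 2 otherwise
    s = gaps[0] + sum(g ** 2 for g in gaps[1:])
    return [g * math.exp(-s) for g in gaps]

t1_np1, t2_np1 = np1_tractions(0.3, 0.7)
t1_3dc, t2_3dc = three_dc_tractions((0.3, 0.7))
print(abs(t1_np1 - t1_3dc) < 1e-12)          # identical normal tractions
print(abs(t2_np1 - 2.0 * t2_3dc) < 1e-12)    # transverse: same shape, factor 2
```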
The maximum value of the traction $T_{i,\text{max}}$ and its corresponding gap value is given as:
\begin{equation}
T_{i,\text{max}} \vert_{\Delta_{i} = \delta_{i} / \eta} = \sigma_{i,\text{max}}~\text{exp}\left[ - \sum_{j=1, ~ j\neq i}^{d} (\dfrac{\Delta_{j}}{\delta_{j}})^{\alpha} \right]; \hspace{8mm} \eta =
\left\{ \begin{array}{lcl}
\hspace{1.5mm}1 \hspace{9mm} i=1 \\
\sqrt{2} \hspace{7mm} i \neq 1
\end{array}
\right.
,
\label{3DC_max}
\end{equation}
where $\sigma_{i,\text{max}}$ is the strength of the interface in the $i$-direction without considering separation in the other directions, i.e. the uncoupled strength of each individual mode. The following relation holds for $\sigma_{i,\text{max}}$:
\begin{equation}
\sigma_{i,\text{max}} = \dfrac{1}{\lambda} ~ \dfrac{\phi_{i}}{\delta_{i}} ; \hspace{8mm} \lambda =
\left\{ \begin{array}{lcl}
\hspace{2.5mm}e \hspace{9.9mm} i=1 \\
\sqrt{2e} \hspace{7.5mm} i \neq 1
\end{array}
\right.
,
\label{3DC_max_uncpl}
\end{equation}
where $e = \text{exp}(1)$ and $\sigma_{2,\text{max}} = \sigma_{3,\text{max}} \coloneqq \sigma_{\text{t},\text{max}}$. The coupled tractions versus corresponding gap values are graphically shown in Fig.~\ref{3DCCZM}.
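The uncoupled maxima in Eqs. (\ref{3DC_max}) and (\ref{3DC_max_uncpl}) can be cross-checked with a direct numerical scan (normalised units, $\phi_i/\delta_i = 1$; a sketch, not part of the model definition):

```python
import math

def normal_traction(d1):
    # uncoupled 3DC normal traction (all other gaps zero)
    return d1 * math.exp(-d1)

def transverse_traction(d2):
    # uncoupled 3DC transverse traction (all other gaps zero)
    return d2 * math.exp(-d2 ** 2)

grid = [i * 1e-4 for i in range(1, 40001)]
d1_star = max(grid, key=normal_traction)
d2_star = max(grid, key=transverse_traction)
print(d1_star, 1.0 / math.e)                   # maximum at d1 = 1, value 1/e
print(d2_star, 1.0 / math.sqrt(2.0 * math.e))  # maximum at d2 = 1/sqrt(2), value 1/sqrt(2e)
```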
\begin{figure}[!htp]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{NormalTracCouple3D.pdf}
\caption{\centering normal direction: $\text{x}_{1}$}
\label{norm}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{LatTracCoupled3D.pdf}
\caption{\centering transverse direction: $\text{x}_{2}$ ($\text{x}_{3}$)}
\label{lat}
\end{subfigure}
\caption{Graphical representation of normalized coupled tractions versus normalized gap values as given by Eq. (\ref{3DC}).}
\label{3DCCZM}
\end{figure}
For modelling a general anisotropic material, the proposed 3DC model must be extended to accurately account for mode-III (tearing mode) in addition to mode-I and mode-II, as well as their mode-mixity; this, however, is beyond the scope of the current work.
\section{Concluding remarks}
We have focused our attention on the coupled mixed-mode exponential cohesive zone models (CZMs). A phenomenological three-dimensional coupled (3DC) CZM is proposed which is applicable to separation and in particular closure mixed-mode problems. To this end, an improved version of the well established Xu and Needleman model has been extended to 3D contact problems.
The proposed 3DC model provides the possibility to explicitly account for all three components of the gap function. This model, similar to the XN model, is built upon different independent parameters, i.e. interface properties. All the cohesive zone parameters can be determined using mode-I and mode-II experiments. The proposed model can be applied to study the delamination process, crack propagation, thin film peeling, and several other applications where mode-mixity comes into play.
Although this model provides a fundamental platform for modelling the interface in three-dimensional contact problems, further verification is still needed through experimental tests and computer simulations.
{\small
\bibliographystyle{ieeetr}
\section{Status of ASACUSA's antihydrogen program}
ASACUSA (Atomic Spectroscopy And Collisions Using Slow Antiprotons)
is one of several collaborations
studying antimatter at the antiproton decelerator at CERN.
The majority of experiments in this area compare antimatter properties
to those of their matter counterparts
for a precise test of CPT symmetry.
Recent results on the magnetic moment of the antiproton\cite{BASE}
by BASE
as well as the 1S--2S,\cite{Ahmadi2018-1S2S} 1S--2P,\cite{Ahmadi2018-1S2P}
and hyperfine transitions\cite{Ahmadi2017}
of antihydrogen ($\overline{\mathrm{H}}$) by ALPHA\cite{ALPHA}
demonstrate the rapid progress in this field.
All existing measurements confirm CPT symmetry
within their respective uncertainties.
Thus,
the quest for ever higher precision tests
by the antiproton-decelerator community continues.
In this spirit,
ASACUSA is working toward a hyperfine-splitting determination
based on Rabi spectroscopy.
The SME coefficients,
which can be tested or constrained by this specific experiment
have been discussed
in the proceedings of the previous meeting in this series.\cite{Widmann2016}
The approach followed by ASACUSA
requires the formation of a beam of $\overline{\mathrm{H}}$\
and offers the advantage that the actual measurement,
i.e.,
the interaction with microwaves,
takes place in a well-controlled environment
far away from the strong fields and field gradients
of the trap for $\overline{\mathrm{H}}$\ formation.\cite{Widmann2013,Mohri2003}
The observation of extracted antiatoms
was reported in Ref.\ \refcite{Kuroda2014},
and the quantum-state distribution of this beam
was later characterized in Ref.\ \refcite{Malbrunot2018}.
Ground-state $\overline{\mathrm{H}}$\ has clearly been observed.
However,
a rate increase is still needed
before spectroscopic measurements with reasonable acquisition times can commence.
Due to the current long shutdown at CERN,
antiproton physics is on hold until 2021.
Then,
the Extra-Low ENergy Antiproton ring (ELENA)
will be in operation and the $\overline{\mathrm{H}}$\ experiment of ASACUSA
will receive a dedicated beamline.
Therefore,
the multi-trap setup does not have to be removed from the zone anymore
in order to make space for other antimatter studies pursued by ASACUSA.
Currently,
the setup is being installed at its final position
and matter studies
(i.e., mixing of protons and electrons)
are planned to continue optimizing the formation process during the shutdown.
\section{Transitions within the hyperfine sublevels}
The two transitions that occur between a low-field seeking triplet state
($F \! = \! 1$)
and the singlet state
($F \! = \! 0$)
are called $\sigma_1$ and $\pi_1$ transitions
(see Table\ \ref{tab:hfsublevels}).
Those are accessible in a Rabi-type experiment
and approach the value of interest at zero field.
In the interaction volume,
one has to provide both an oscillating magnetic field $B_\text{osc}$
at $1.42\,$GHz to stimulate hyperfine transitions,
and an external static magnetic field $B_\text{ext}$
to control the Zeeman shift.
A key difference between the two transitions is
that they need different relative orientations of $B_\text{osc}$ and $B_\text{ext}$.
The component of $B_\text{osc}$ aligned parallel to $B_\text{ext}$
stimulates the $\sigma_1$ transition,
and the orthogonal one the $\pi_1$ transition.
In the ASACUSA setup,
these fields are provided by a cavity of strip-line geometry
with a large acceptance and surrounding coils.
\begin{table}
\tbl{Properties of the hyperfine sublevels of ground-state $\overline{\mathrm{H}}$ . }
{
\begin{tabular}{@{}rllcc@{}}
\toprule
state: $ | F, M_F \rangle $ & \ Zeeman shift & Stern--Gerlach & $\sigma_1$ & $\pi_1$ \\
\colrule
$ | \ 1,-1 \ \rangle $ & \ 1$^\text{st}O$ \ \ (linear) & low-field seeker & & initial \\
$ | \ 1, \hspace{0.23cm} 0 \ \rangle $ & \ 2$^\text{nd}O$ \ (hyperbolic) & low-field seeker & initial & \\
$ | \ 1, \hspace{0.23cm} 1 \ \rangle $ & \ 1$^\text{st}O$ \ \ (linear) & high-field seeker & & \\
$ | \ 0, \hspace{0.23cm} 0 \ \rangle $ & \ 2$^\text{nd}O$ \ (hyperbolic) & high-field seeker & final & final \\
\botrule
\end{tabular}
}
\label{tab:hfsublevels}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=4.3in]{Fig1_ASACUSA-Cavities.pdf}
\end{center}
\caption{ASACUSA's devices for providing an oscillating $B_\text{osc}$ and external static magnetic field $B_\text{ext}$ in the interaction region of the Rabi experiment using microwave cavities and coil configurations.
}
\label{fig:cavities}
\end{figure}
In the low-field regime,
the $\sigma_1$ transition has a weak magnetic-field sensitivity
making it more robust against systematic effects.
However,
it is~also insensitive to SME coefficients,\cite{Kostelecky2015}
which relates to the fact that $\Delta M_F \! = \! 0$.
In contrast,
the $\pi_1$ transition probes the nonrelativistic spherical SME coefficients
${g_w}_{(2q)10}^{\text{NR}(0\text{B})}$, ${g_w}_{(2q)10}^{\text{NR}(1\text{B})}$
(CPT odd)
and ${H_w}_{(2q)10}^{\text{NR}(0\text{B})}$, ${H_w}_{(2q)10}^{\text{NR}(1\text{B})}$ (CPT even).\cite{Widmann2016,Kostelecky2015}
Control over systematic effects
is a much stronger experimental challenge
due to the first-order shift with $B_\text{ext}$ of $14\,$Hz/nT.
Helmholtz coils produce a sufficiently homogeneous static field
to investigate the $\sigma_1$ transition
($\sigma_{|B|} / \bar{|B|} < 5 \%$).
The cavity providing the oscillating field
is placed within those coils in such a way
that the required parallel alignment is achieved,
as shown in the left panel of Fig.\ \ref{fig:cavities}.
For more details and a description of the entire H-beam setup
for commissioning the hyperfine spectrometer,
see Ref.\ \refcite{Malbrunot2019}.
A determination of the H\ hyperfine structure with $2.7\,$ppm precision
in agreement with the literature value
was accomplished\cite{Diermaier2017} by measuring the $\sigma_1$ transition
at various static-field values
and extrapolating to zero field.
However,
as mentioned before the $\sigma_1$ transition
is not the first choice for a CPT test,
which motivated upgrades of the interaction apparatus
for investigations of the $\pi_1$ transition.
We designed McKeehan-like coils\cite{McKeehan1936}
to provide a more homogeneous magnetic field
($\sigma_{|B|} / \bar{|B|} < 0.1 \%$).
The cavity can be rotated in steps of $45^\circ$
within the coil arrangement
thereby providing flexible alignment of $B_\text{osc}$ to $B_\text{ext}$.
A $45^\circ$ alignment enables simultaneous access to both transitions.
The upgraded device is shown in the right panel of Fig.~\ref{fig:cavities}.
\section{Interference effect and future measurements and devices}
At small $B_\text{ext}$,
the separation of the $\sigma_1$ and $\pi_1$ transitions
becomes comparable to the line widths,
which is on the order of $10\,$kHz
as the interaction time is typically $100\,\mu$s
(beam velocity $\sim 1\,$km/s,
cavity length $\sim 100\,$mm).
For example,
at $4.6\,\mu$T
the separation decreases to $65\,$kHz,
and the interference leads to asymmetric line shapes
and systematic shifts of the extracted central frequencies,
if one applies the symmetric fit function
for a two-level system in this regime.
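The numbers quoted in this paragraph can be reproduced at the order-of-magnitude level from the $14\,$Hz/nT sensitivity given earlier and a Fourier-limited linewidth of order $1/T$ (a rough sketch; exact line-shape prefactors are ignored):

```python
# First-order Zeeman sensitivity of the pi_1 transition quoted in the text
SHIFT_HZ_PER_NT = 14.0

def pi1_zeeman_shift_hz(b_ext_nt):
    # First-order shift of pi_1; sigma_1 shifts only in second order,
    # so this also approximates the sigma_1 / pi_1 line separation.
    return SHIFT_HZ_PER_NT * b_ext_nt

print(pi1_zeeman_shift_hz(4600.0))  # 64400.0 Hz, consistent with the ~65 kHz quoted

# Fourier-limited linewidth for a ~100 us transit time
transit_time_s = 100e-6
print(1.0 / transit_time_s)         # ~10 kHz, comparable to the low-field separation
```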
Currently,
we apply corrections for the effect,
and the development of a fit procedure
based on the complete four-level system of the ground-state hyperfine states
is under consideration.
On the other hand,
the interference effect and resulting systematic shifts can be avoided
with purely parallel or orthogonal alignment of the fields.
This clean solution will be realized in a Ramsey apparatus
that is currently in its final design phase.\cite{HyDRA}
In principle,
the respective alignment can also be adjusted in steps of $45^\circ$
with the present Rabi apparatus.
However,
a change requires breaking the vacuum.
In this design,
the advantageous opportunity
for interleaved measurements of $\pi_1$ and $\sigma_1$ transitions
is tied to the alignment of the fields
that also gives rise to the interference effect.
By operating at sufficiently high $B_\text{ext}$
(e.g., $23/69/115\,\mu$T as in Ref.\ \refcite{Widmann2018})
the systematic shifts and related uncertainties
can easily be kept below $\sim 10\,$ppb,
an acceptable level
for the anticipated first-stage results on $\overline{\mathrm{H}}$ .
A measurement of a single pair of a $\pi_1$ and $\sigma_1$ transitions
in this regime of $B_\text{ext}$
provides a reliable way to determine the zero-field splitting
from a calculation,\cite{Widmann2018}
i.e.,
without extrapolating to zero field by a fit.
Thus,
the present apparatus is well suited
for a first Rabi-type SME-sensitive $\overline{\mathrm{H}}$\ hyperfine-structure measurement.
\section*{Acknowledgments}
This work was funded by the European Research Council (grant no.\ 291242),
the Austrian Ministry of Science and Research, and the Austrian Science Fund (FWF) through DKPI (W1252).
\section{Introduction}
Quantum entanglement, which is one of the quintessential features of quantum theory and a manifestation of nonlocality of quantum mechanics~\cite{R01,R02}, shows stronger correlations than classically explainable~\cite{R03,R04}. It is of great importance for quantum information science and fundamental issues of quantum theory. For example, Bell-basis states as important entangled states have been widely used in quantum information processing tasks such as quantum computation~\cite{R05}, quantum teleportation~\cite{R06,R07,R08}, entanglement swapping~\cite{R09,R10,R11,R12,R13}, quantum dense coding~\cite{R14,R15,R16}, quantum key distribution~\cite{R17,R18}, and quantum secure direct communication~\cite{R19,R20}.
For a two-particle entangled state, neither particle possesses its own well-defined state before performing the measurement, while once the state of one particle is measured the state of the other particle is also defined instantaneously. Photon entanglement has in fact been realized in various degrees of freedom. Entangled photon states have hitherto been mostly realized by the two orthogonal polarization states of photons. To enable more efficient use of communication channels in quantum cryptography, multi-dimensional entanglement is of great importance. Higher-order entanglement has been suggested via multiport beam splitters~\cite{R21,R22}.
Orbital angular momentum (OAM) carried by photons with helical phase structures~\cite{R23}, as a fundamentally controllable degree of freedom of photons, has attracted extensive attention and academic interest in quantum foundations and quantum information. The OAM as a degree of freedom has many novel properties and important applications, for instance, uncertainty relations of angular position-OAM~\cite{R24}, violation of the Bell inequality~\cite{R25}, preparation of the four Bell states~\cite{R26}, and qutrit quantum communication protocols~\cite{R27}. As the quanta of OAM carried by a single photon has no theoretical upper limit, which defines an infinitely dimensional discrete Hilbert space, the OAM offers a practical way to create multi-dimensional entanglement~\cite{R28,R29,R30,R31}.
Here we focus on quantum entangled vector-polarization states of photons~\cite{R32,R33}. We present vector-polarization entangled Bell states, which use the spatial modes of the vector fields with space-variant polarization distribution~\cite{R34,R35,R36}. We propose a scheme of creating the vector-polarization entangled Bell states based on a Sagnac interferometer~\cite{R37}. We also design an analyzer for identifying the vector-polarization entangled Bell states. Because the vector fields can be considered as a combination of a pair of orthogonally polarized fields (photons) carrying the opposite-handedness quantized OAMs~\cite{R34}, it is foreseeable that our approach provides another practical route to define an infinitely dimensional discrete Hilbert space and to extend in the future to multi-dimensional multi-particle entanglement, enabling more efficient use of communication channels in quantum cryptography. Furthermore, due to the combination of polarization and OAM, the vector-polarization entanglement may increase security with respect to the polarization or OAM entanglement in quantum cryptography. It is conceivable that the vector-polarization Bell states could be of considerable importance in quantum communication, information, cryptography and teleportation, making them versatile and potentially suitable for future technologies.
\section{Vector-Polarization Bell States}
Polarization, momenta and phase are regarded as the controllable degrees of freedom of photons. For the polarization degree of freedom, the maximally entangled Bell states of two photons are written as
\begin{subequations}
\begin{align}\label{01}
| \psi_{\mp}\rangle = & \frac{1}{\sqrt{2}} \left( | H \rangle | V \rangle \mp | V \rangle | H \rangle \right), \\
| \phi_{\mp}\rangle = & \frac{1}{\sqrt{2}} \left( | H \rangle | H \rangle \mp | V \rangle | V \rangle \right),
\end{align}
\end{subequations}
\noindent where $| H \rangle$ and $| V \rangle$ indicate the horizontal and vertical polarization eigenstates, respectively.
As is well known, it is easy to transform one Bell state into another one. For instance, for $| \psi_{+}\rangle$, (i) $| \psi_{+}\rangle$ can be changed into $| \phi_{+} \rangle$ with the aid of the polarization exchange ($| H \rangle \Rightarrow | V \rangle $ and $| V \rangle \Rightarrow | H \rangle $) which can be realized by a half-wave plate (HWP), (ii) $| \psi_{+}\rangle$ can be transformed into $| \psi_{-}\rangle$ with the aid of the polarization-dependent phase shift which is generated by a quarter-wave plate (QWP)~\cite{R15}, and (iii) $| \psi_{+}\rangle$ can be converted into $|\phi_{-}\rangle$ with the aid of both the polarization exchange and the polarization-dependent phase shift. It should be pointed out that $| V \rangle = \sigma_1 | H \rangle$ and $| H \rangle = \sigma_1 |V \rangle$, where $\sigma_1$ is the well-known Pauli matrix, being both Hermitian and unitary, which can also describe the Jones matrix of the HWP with its fast axis at $- \pi/4$ with respect to the horizontal axis.
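The action of $\sigma_1$ on the Jones vectors can be verified with a few lines of linear algebra. The following sketch (an illustrative numerical check, not part of the original derivation) confirms that $\sigma_1$ exchanges $| H \rangle$ and $| V \rangle$ and is both Hermitian and unitary:

```python
import numpy as np

# Jones vectors: |H> = (1, 0)^T, |V> = (0, 1)^T
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

# Pauli matrix sigma_1: the Jones matrix of a HWP with fast axis at -pi/4
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

assert np.allclose(sigma1 @ H, V)   # |V> = sigma_1 |H>
assert np.allclose(sigma1 @ V, H)   # |H> = sigma_1 |V>
assert np.allclose(sigma1, sigma1.conj().T)              # Hermitian
assert np.allclose(sigma1 @ sigma1.conj().T, np.eye(2))  # unitary
```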
The optical field with the helical phase structure of $\exp [j (m \varphi + \varphi_0)]$ (shown in the left panel of Fig.~1a) carries the OAM of $ m \hbar$ per photon, where $m$ is referred to as the topological charge or winding number and can take any integer value~\cite{R23,R38}. The OAM degree of freedom can be regarded as a kind of degree of freedom of azimuth-variant phase. Although the helical structure exhibits the phase change in two dimensions in the Cartesian coordinate system, it is in fact the azimuthal phase change in one dimension in the polar coordinate system. During the propagation, the optical fields carrying the OAM exhibit the intertwined (or helical) phase front with its handedness depending on the sign of $m$ (a-1 and a-2 in Fig.~1a). Differently from the spin angular momentum (SAM) or polarization, which constructs a two-dimensional Hilbert space only, the OAM as a new degree of freedom can define an infinitely dimensional discrete (the quantization of OAM) Hilbert space~\cite{R29,R30,R31}. We have known that the vector fields with the space-variant distribution of states of polarization can be produced by combining a pair of orthogonally polarized optical fields carrying the opposite OAMs (or the helical phases with the opposite handedness)~\cite{R34,R35}. In recent years, the technique of generating the vector fields has become very mature~\cite{R36}. Due to their novel properties, the vector fields have many potential applications in various realms~\cite{R39,R40,R41}, including quantum information~\cite{R32,R33}. Since the vector field is always associated with a pair of OAMs with the opposite handedness, it is possible to define an infinitely dimensional discrete (the quantization of vector-polarization states) Hilbert space (Fig.~1b), which provides an opportunity to extend the vector-polarization states, as a degree of freedom, from a two-dimensional Hilbert space to a higher-dimensional one. Here we focus on the quantum aspect of the vector-polarization states.
\begin{figure}[bht]
\centerline{\includegraphics[width=8.5cm]{Fig1.eps}}
\caption{ \textbf{(a)} Vortex optical field with an azimuthal phase term $\exp [j (m \varphi + \varphi_0)]$, carrying an OAM of $m \hbar$, where $\varphi_0$ is the initial phase of an OAM state. The helical phase front (left), and the helical structures of two examples $| m \rangle = |+ 2 \rangle$ (a-1) and $| m \rangle = | -2 \rangle$ (a-2) during the propagation. \textbf{(b)} Polarization distributions of vector-polarization states described by Eqs.~(3) or (4). The helical phase front where $\phi_0$ is the initial phase of a vector-polarization state (left). b-1, b-2, b-3 and b-4 indicate polarization distributions of four vector-polarization states as examples, $| U ^{+1}_R \rangle$, $| U ^{+1}_A \rangle$, $| U ^{-1}_R \rangle$ and $| U ^{-1}_A \rangle$, respectively. $| U ^{+1}_R \rangle$ (b-1) and $| U ^{+1}_A \rangle$ (b-2) for $m = +1$ and $\phi_0 = 0$ are a pair of orthogonal vector-polarization states. $| U ^{-1}_R \rangle$ (b-3) and $| U ^{-1}_A \rangle$ (b-4) for $m = -1$ and $\phi_0 = 0$ are another pair of orthogonal vector-polarization states.}
\end{figure}
We will first construct the vector-polarization entangled Bell states. We then also propose schemes for generating and analyzing the vector-polarization Bell states. For local linearly-polarized vector-polarization states, there are two typical groups of basic states as follows
\begin{subequations}
\begin{align}\label{02}
| U ^{+ m}_R \rangle & = \sin (+ m \phi + \phi_0) | V \rangle + \cos (+ m \phi + \phi_0) | H \rangle, \\
| U ^{+ m}_A \rangle & = \cos (+ m \phi + \phi_0) | V \rangle - \sin (+ m \phi + \phi_0) | H \rangle, \\
| U ^{- m}_R \rangle & = \sin (- m \phi + \phi_0) | V \rangle + \cos (- m \phi + \phi_0) | H \rangle, \\
| U ^{- m}_A \rangle & = \cos (- m \phi + \phi_0) | V \rangle - \sin (- m \phi + \phi_0) | H \rangle,
\end{align}
\end{subequations}
where $m$ is still the topological charge which is the same as in the OAM. We can affirm from Eq.~(2) that ($|U ^{+m}_R \rangle$ and $|U ^{+m}_A \rangle$) and ($|U ^{-m}_R \rangle$ and $|U ^{-m}_A \rangle$) are two pairs of orthogonal vector-polarization states. As examples, when the topological charge $m=1$, four basic vector-polarization states are shown by b-1, b-2, b-3 and b-4 in Fig.~1b, respectively. They are easily interchanged by using two HWPs~\cite{R33}. In fact, the four basic vector-polarization states in Eq.~(2) can be equivalently rewritten as
\begin{subequations}
\begin{align}\label{03}
| U ^{+m}_R \rangle & = \frac{1}{\sqrt{2}} (| R \rangle | - m \rangle + | L \rangle | + m \rangle), \\
| U ^{+m}_A \rangle & = \frac{1}{j \sqrt{2}} (| R \rangle | - m \rangle - | L \rangle | + m \rangle), \\
| U ^{- m}_R \rangle & = \frac{1}{\sqrt{2}} (| R \rangle | + m \rangle + | L \rangle | - m \rangle), \\
| U ^{- m}_A \rangle & = \frac{1}{j \sqrt{2}} (| R \rangle | + m \rangle - | L \rangle | - m \rangle),
\end{align}
\end{subequations}
where $|+ m \rangle$ and $|- m \rangle$ stand for the paraxial spatial modes carrying OAMs of $+ m \hbar$ and $- m \hbar$, and $|R\rangle = \frac{1}{\sqrt{2}} (| H \rangle + j | V \rangle)$ and $| L \rangle = \frac{1}{\sqrt{2}} (| H \rangle - j | V \rangle)$ are the right and left circularly polarized states, respectively. In the form of Eq.~(3), $| U ^{+m}_R \rangle$, $| U ^{+m}_A \rangle$, $| U ^{- m}_R \rangle$ and $| U ^{- m}_A \rangle$ are also regarded as single-photon ``hybrid" entangled states~\cite{R32}, and any one of them, as a whole state, represents a kind of vector-polarization state~\cite{R32}.
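The equivalence of the linear-polarization form in Eq.~(2) and the circular-polarization form in Eq.~(3) can be checked numerically. The sketch below (a hedged illustration: it sets $\phi_0 = 0$ and represents the spatial modes $|\pm m\rangle$ only through their azimuthal phase factors $\exp(\pm j m \phi)$, ignoring the radial profile) verifies $| U ^{+m}_R \rangle$ at sample azimuthal angles:

```python
import numpy as np

# Jones vectors and circular-polarization basis
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
R = (H + 1j * V) / np.sqrt(2)   # |R> = (|H> + j|V>)/sqrt(2)
L = (H - 1j * V) / np.sqrt(2)   # |L> = (|H> - j|V>)/sqrt(2)

m, phi0 = 1, 0.0   # sample topological charge; initial phase set to zero
for phi in np.linspace(0.0, 2 * np.pi, 17):
    # Eq. (2a): local linear-polarization form of |U_R^{+m}>
    u_linear = np.sin(m * phi + phi0) * V + np.cos(m * phi + phi0) * H
    # Eq. (3a): |R>|-m> + |L>|+m>, with |-m> and |+m> contributing the
    # azimuthal phase factors exp(-j m phi) and exp(+j m phi) at angle phi
    u_circular = (R * np.exp(-1j * m * phi) + L * np.exp(+1j * m * phi)) / np.sqrt(2)
    assert np.allclose(u_linear, u_circular)
```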
Like the traditional polarization Bell states and the OAM Bell states, two-photon vector-polarization Bell states constructed by the orthogonal bases $| U ^{+ m}_R \rangle$ and $| U ^{+ m}_A \rangle$ can be written as follows
\begin{subequations}
\begin{align}\label{04}
| \Psi^{+m}_{-}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{+m}_R \rangle | U ^{+m}_A \rangle - | U ^{+m}_A \rangle | U ^{+m}_R \rangle \right), \\
| \Psi^{+m}_{+}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{+m}_R \rangle | U ^{+m}_A \rangle + | U ^{+m}_A \rangle | U ^{+m}_R \rangle \right), \\
| \Phi^{+m}_{-}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{+m}_R \rangle | U ^{+m}_R \rangle - | U ^{+m}_A \rangle | U ^{+m}_A \rangle \right), \\
| \Phi^{+m}_{+}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{+m}_R \rangle | U ^{+m}_R \rangle + | U ^{+m}_A \rangle | U ^{+m}_A \rangle \right).
\end{align}
\end{subequations}
Of course, another pair of orthogonal bases $| U ^{-m}_R \rangle$ and $| U ^{-m}_A \rangle$ can also construct another two-photon vector-polarization Bell states as follows
\begin{subequations}
\begin{align}\label{05}
| \Psi^{-m}_{-}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{-m}_R \rangle | U ^{-m}_A \rangle - | U ^{-m}_A \rangle | U ^{-m}_R \rangle \right), \\
| \Psi^{-m}_{+}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{-m}_R \rangle | U ^{-m}_A \rangle + | U ^{-m}_A \rangle | U ^{-m}_R \rangle \right), \\
| \Phi^{-m}_{-}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{-m}_R \rangle | U ^{-m}_R \rangle - | U ^{-m}_A \rangle | U ^{-m}_A \rangle \right), \\
| \Phi^{-m}_{+}\rangle = & \frac{1}{\sqrt{2}} \left( | U ^{-m}_R \rangle | U ^{-m}_R \rangle + | U ^{-m}_A \rangle | U ^{-m}_A \rangle \right).
\end{align}
\end{subequations}
\section{Scheme for Preparing Vector-Polarization Bell States}
Our scheme for preparing two-photon vector-polarization Bell states is shown in Fig.~2. By using degenerate spontaneous parametric down-conversion via noncollinear type-II phase matching in a nonlinear crystal (NLC), the traditional four polarization-entangled Bell states described by Eq.~(1) can be prepared~\cite{R15} (Fig.~2a). The HWP and the QWP in path $p_1$ are used to implement the interchanges among the four polarization-entangled Bell states described in Eq.~(1). Then the polarization-entangled photons $a_1$ and $a_2$ in paths $p_1$ and $p_2$ enter the respective generation systems of vector-polarization states (Fig.~2b), which are composed of a Sagnac interferometer and some optical elements~\cite{R37}. Correspondingly, the polarization-entangled photons $a_1$ and $a_2$ in paths $p_1$ and $p_2$ are converted into the vector-polarization entangled photons $b_1$ and $b_2$, respectively. The procedure of preparing the vector-polarization states is described below.
Before entering the Sagnac interferometer, photons $a_1$ ($a_2$) in path $p_1$ ($p_2$) first pass through a HWP whose fast axis forms an angle of $-\pi /8$ with the horizontal direction (Fig.~2b). The operator of this HWP, $\mathcal{J}_{HWP}$, can be written by the Jones calculus as
\begin{equation}\label{06}
\mathcal{J}_{HWP} = \frac{1}{\sqrt{2}}
\left[ \begin{array}{cc}
1 & 1\\
1 & -1\end{array} \right].
\end{equation}
This is in fact a Walsh-Hadamard matrix, being both Hermitian and unitary. Then the horizontally and vertically polarized components of $a_1$ ($a_2$), separated by a polarizing beam splitter (PBS), counter-propagate along a common path in the Sagnac interferometer. A space-variant phase plate (SVPP) with a helical phase front makes the two counter-propagating fields (or photons) feel the opposite chirality of the helical phases of $\exp (\pm j m\phi)$ and then carry the opposite OAMs of $ \pm m \hbar$. Thus the operator of the SVPP can be expressed concisely by the Jones calculus as
\begin{equation}\label{07}
\mathcal{J}_{SVPP} =
\left[ \begin{array}{cc}
e^{+ j m \phi} & 0\\
0 & e^{- j m \phi}\end{array} \right].
\end{equation}
This matrix is always unitary but is Hermitian only for $m = 0$.
In our scheme (Fig.~2b), a geometric-phase shifter (GPS) composed of three polarization elements (a sandwich structure, i.e. one HWP sandwiched between two QWPs) is used to control the geometric-phase shift between the two counter-propagating orthogonally polarized components. The fast axes of the two QWPs are parallel to each other and are fixed at an angle of $ \pi /4$ with respect to the horizontal direction, while the HWP is allowed to rotate. When the two orthogonally polarized components pass through the GPS, both dynamic phases have no change even though the HWP is rotated~\cite{R42}. Instead, the rotation of the HWP introduces a controllable geometric-phase shift between the two counter-propagating orthogonally polarized components. The operator of this GPS can be described by the Jones calculus as
\begin{equation}\label{08}
\mathcal{J}_{GPS} (\theta) =
\left[ \begin{array}{cc}
e^{j 2 \theta} & 0\\
0 & - e^{-j 2 \theta}\end{array} \right],
\end{equation}
where $\theta$ is the angle formed by the fast axis of the HWP with the horizontal direction. This matrix is always unitary but is Hermitian only when $2 \theta$ is an integral multiple of $\pi$. This scheme is robust against the dynamic phase fluctuation caused by the environmental turbulence in such a common closed loop.
After photons leave the Sagnac interferometer, another QWP, whose fast axis forms an angle of $\vartheta$ with the horizontal direction, is used to introduce a geometric phase. The operator of the QWP, $\mathcal{J}_{QWP}$, can be described as
\begin{equation}\label{09}
\mathcal{J}_{QWP} (\vartheta) =
\left[ \begin{array}{cc}
\cos ^2 \vartheta - j \sin ^2 \vartheta & - (j + 1) \sin \vartheta \cos \vartheta \\
- (j + 1) \sin \vartheta \cos \vartheta & \sin ^2 \vartheta - j \cos^2 \vartheta \end{array} \right].
\end{equation}
This matrix is always unitary; note, however, that unlike the previous two operators it is not Hermitian.
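The unitarity and Hermiticity properties of the three Jones operators can be checked numerically. The sketch below (an illustrative check, not part of the original text) evaluates $\mathcal{J}_{SVPP}$, $\mathcal{J}_{GPS}$ and $\mathcal{J}_{QWP}$ at sample parameters:

```python
import numpy as np

def J_SVPP(m, phi):
    # space-variant phase plate, Eq. (7)
    return np.diag([np.exp(+1j * m * phi), np.exp(-1j * m * phi)])

def J_GPS(theta):
    # geometric-phase shifter, Eq. (8)
    return np.diag([np.exp(2j * theta), -np.exp(-2j * theta)])

def J_QWP(t):
    # quarter-wave plate with fast axis at angle t, Eq. (9)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c**2 - 1j * s**2, -(1j + 1) * s * c],
                     [-(1j + 1) * s * c, s**2 - 1j * c**2]])

I2 = np.eye(2)
unitary = lambda M: np.allclose(M @ M.conj().T, I2)
hermitian = lambda M: np.allclose(M, M.conj().T)

# all three operators are unitary for generic parameters
assert unitary(J_SVPP(2, 0.7)) and unitary(J_GPS(0.3)) and unitary(J_QWP(0.4))
# but generically not Hermitian ...
assert not hermitian(J_SVPP(2, 0.7))
assert not hermitian(J_GPS(0.3))
assert not hermitian(J_QWP(0.4))
# ... except in the special cases noted in the text
assert hermitian(J_SVPP(0, 0.7))     # m = 0
assert hermitian(J_GPS(np.pi / 2))   # 2*theta = pi
```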
\begin{figure*}[bht]
\centerline{\includegraphics[width=15.0cm]{Fig2.eps}}
\caption{Schemes for preparing and for analysing the vector-polarization Bell states. \textbf{(a)} Scheme for preparing the polarization-entangled states. A continuous-wave shorter-wavelength laser can be used to pump a nonlinear crystal (NLC) to produce the polarization-entangled states based on a noncollinear, degenerate type-II spontaneous parametric down-conversion. By controlling the HWP and QWP, any one of the four polarization-entangled Bell states described by Eq.~(1) can be prepared. \textbf{(b)} Scheme for preparing the two-photon vector-polarization Bell states. \textbf{(c)} Scheme for analysing the two-photon vector-polarization Bell states. RATP is a right-angled triangle prism which can be used to adjust the optical path length. D is a detection system consisting of a single-mode fiber and a single-photon detector. (See text for details).}
\end{figure*}
With Eqs.~(7) and (8), the accumulated operator of the SVPP and GPS units in the Sagnac interferometer can be written as $\mathcal{J}_{GPS} (\theta) \mathcal{J}_{SVPP}$. After passing through the generation system of vector-polarization states, the evolutions of the eigenstates $| H \rangle$ and $| V \rangle$ for $\theta = \pi/8$ and $\vartheta = \pi/4$ are as follows
\begin{subequations}
\begin{align}\label{10}
| H \rangle \Rightarrow & \mathcal{J}_{QWP} (\pi/4) \mathcal{J}_{GPS} (\pi/8) \mathcal{J}_{SVPP} \mathcal{J}_{HWP} | H \rangle = e^{+ j \pi /4} | U ^{+m}_R \rangle, \\
| V \rangle \Rightarrow &\mathcal{J}_{QWP} (\pi/4) \mathcal{J}_{GPS} (\pi/8) \mathcal{J}_{SVPP} \mathcal{J}_{HWP} | V \rangle = e^{- j \pi /4} | U ^{+m}_A \rangle.
\end{align}
\end{subequations}
Then the traditional four polarization Bell states described by Eq.~(1) will be converted into the four vector-polarization Bell states described by Eq.~(4) as $| \psi_{\mp} \rangle \Rightarrow | \Psi^{+m}_{\mp} \rangle$ and $| \phi_{\mp} \rangle \Rightarrow | \Phi^{+m}_{\pm} \rangle$.
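The cascaded evolution in Eq.~(10) can be verified by multiplying the four Jones matrices. The sketch below (an illustrative check, with $\phi_0 = 0$ and the spatial modes represented by their azimuthal phase factors; agreement is asserted only up to a global phase, since overall phase conventions may differ) confirms that $| H \rangle$ is mapped to $| U ^{+m}_R \rangle$ and $| V \rangle$ to $| U ^{+m}_A \rangle$:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

J_HWP = np.array([[1, 1], [1, -1]]) / np.sqrt(2)                        # Eq. (6)
J_SVPP = lambda m, phi: np.diag([np.exp(1j*m*phi), np.exp(-1j*m*phi)])  # Eq. (7)
J_GPS = lambda th: np.diag([np.exp(2j*th), -np.exp(-2j*th)])            # Eq. (8)
def J_QWP(t):                                                           # Eq. (9)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c**2 - 1j*s**2, -(1j+1)*s*c],
                     [-(1j+1)*s*c, s**2 - 1j*c**2]])

m = 1
for phi in np.linspace(0.0, 2 * np.pi, 13):
    cascade = J_QWP(np.pi/4) @ J_GPS(np.pi/8) @ J_SVPP(m, phi) @ J_HWP
    # Eq. (2) targets with phi0 = 0, sampled at azimuthal angle phi
    U_R = np.sin(m*phi) * V + np.cos(m*phi) * H
    U_A = np.cos(m*phi) * V - np.sin(m*phi) * H
    overlap_H = np.vdot(U_R, cascade @ H)   # |H> -> |U_R^{+m}>
    overlap_V = np.vdot(U_A, cascade @ V)   # |V> -> |U_A^{+m}>
    # unit-magnitude overlap of unit vectors = equality up to a global phase
    assert np.isclose(abs(overlap_H), 1.0)
    assert np.isclose(abs(overlap_V), 1.0)
```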
When $\theta=\pi/8$ and $\vartheta = -\pi/4$, the evolutions of the eigenstates $| H \rangle$ and $| V \rangle$ can be written as follows
\begin{subequations}
\begin{align}\label{11}
| H \rangle \Rightarrow &\mathcal{J}_{QWP} (- \pi/4) \mathcal{J}_{GPS} (\pi/8) \mathcal{J}_{SVPP} \mathcal{J}_{HWP} | H \rangle = e^{+ j \pi /4} | U ^{-m}_A \rangle, \\
| V \rangle \Rightarrow &\mathcal{J}_{QWP} (- \pi/4) \mathcal{J}_{GPS} (\pi/8) \mathcal{J}_{SVPP} \mathcal{J}_{HWP} | V \rangle = e^{- j \pi/4} | U ^{-m}_R \rangle.
\end{align}
\end{subequations}
In this case, the traditional four polarization Bell states described by Eq.~(1) will be converted into the four vector-polarization Bell states described by Eq.~(5) as $| \psi_{\mp} \rangle \Rightarrow | \Psi^{-m}_{\mp} \rangle$ and $| \phi _{\mp} \rangle \Rightarrow | \Phi^{-m}_{\pm} \rangle$. After leaving the generation system of vector-polarization states, photons $a_1$ and $a_2$ are converted into photons $b_1$ and $b_2$, respectively. By manipulating the polarization Bell states, therefore, the preparation of the vector-polarization Bell states can be realized.
\section{Analyzer of Vector-Polarization Bell States}
We present a vector-polarization Bell-state analyzer (Fig.~2c) which is similar to the analyzer in Ref.~\cite{R32}. Photons $b_1$ ($b_2$) will meet a fork-shaped phase grating (FSPG). When the FSPG has the same topological charge as the SVPP mentioned above, an incoming photon in the state $ | U_{X}^{+m}\rangle$ $( | U_{X}^{-m}\rangle )$, where $X = R$ or $A$, will be transformed into two equal-probability diffraction beams: one is the right-circularly polarized state $| R \rangle$ in the $+1$st ($-1$st) diffraction order and the other is the left-circularly polarized state $| L \rangle$ in the $-1$st ($+1$st) diffraction order, and both beams carry no OAM. A QWP and a HWP are then used to transform $| R \rangle$ and $| L \rangle$ into $| V \rangle$ and $| H \rangle$, respectively. The two equal-probability diffraction beams will merge on a PBS. Photons in the states $ | U_{X}^{+m} \rangle$ ($ | U_{X}^{-m} \rangle$) exit in path $p^+$ $(p^-)$ behind the PBS as shown in Fig.~2c. By using two HWPs in paths $p^+$ and $p^-$ (whose fast axes form an angle of $- \pi / 8$ with the horizontal direction), the four vector-polarization states, $| U ^{+m}_R \rangle$, $| U ^{+m}_A \rangle$, $| U ^{- m}_R \rangle$ and $| U ^{- m}_A \rangle$, can be distinguished with two other PBSs. Finally, the analysis of the vector-polarization-entangled Bell states with joint measurement can be realized.
\section{Discussion}
It should be noted that there are some similarities between the OAM states and the vector-polarization states. For example, both of them use the same parameter, the topological charge $m$, to describe the two-dimensional spatial distributions, as shown in Fig.~1. However, they are two completely independent degrees of freedom. The OAM states are associated with the phase, while the vector-polarization states are related to the polarization. Photons in the vector-polarization state carry the information of both the OAM and the space-variant polarization. Compared with the OAM states, the vector-polarization states have some advantages: for a given topological charge $m$, there is only one OAM state $| m \rangle$, while there are two vector-polarization states $| U _R ^{m} \rangle$ and $| U _A ^{m} \rangle$. A major challenge is to ultimately confirm the entanglement in experiment, which may be a Bell inequality experiment generalized to more states. For a linearly-polarized pump field with zero angular momentum (including spin and orbital angular momentum), the emitted state can be represented by
\begin{align}\label{12}
| \psi \rangle = & \sum^{+\infty}_{m=0} (C^{+m,+m}_{R,R} |U^{+m}_R \rangle |U^{+m}_R \rangle + C^{+m,+m}_{R,A} |U^{+m}_R \rangle |U^{+m}_A \rangle \nonumber \\
& \quad + C^{+m,-m}_{R,R} |U^{+m}_R \rangle |U^{-m}_R \rangle + C^{+m,-m}_{R,A} |U^{+m}_R \rangle |U^{-m}_A \rangle \nonumber \\
& \quad + C^{+m,+m}_{A,R} |U^{+m}_A \rangle |U^{+m}_R \rangle + C^{+m,+m}_{A,A} |U^{+m}_A \rangle |U^{+m}_A \rangle \nonumber \\
& \quad + C^{+m,-m}_{A,R} |U^{+m}_A \rangle |U^{-m}_R \rangle + C^{+m,-m}_{A,A} |U^{+m}_A \rangle |U^{-m}_A \rangle \nonumber \\
& \quad + C^{-m,+m}_{R,R} |U^{-m}_R \rangle |U^{+m}_R \rangle + C^{-m,+m}_{R,A}|U^{-m}_R \rangle |U^{+m}_A \rangle \nonumber \\
& \quad + C^{-m,-m}_{R,R} |U^{-m}_R \rangle |U^{-m}_R \rangle + C^{-m,-m}_{R,A} |U^{-m}_R \rangle |U^{-m}_A \rangle \nonumber \\
& \quad + C^{-m,+m}_{A,R} |U^{-m}_A \rangle |U^{+m}_R \rangle + C^{-m,+m}_{A,A} |U^{-m}_A \rangle |U^{+m}_A \rangle \nonumber \\
& \quad + C^{-m,-m}_{A,R} |U^{-m}_A \rangle |U^{-m}_R \rangle + C^{-m,-m}_{A,A} |U^{-m}_A \rangle |U^{-m}_A \rangle ),
\end{align}
\noindent where $C^{i,j}_{P,Q}$ denote the corresponding probability amplitudes for measuring $|U^{i}_P \rangle |U^{j}_Q \rangle$ ($i, j = +m, -m$ and $P, Q = R, A$). The photonic state~(12) is an infinite-dimensional entangled state of two photons, meaning that neither photon possesses a well-defined vector-polarization state after parametric down-conversion. The measurement of one photon defines its vector-polarization state, and projects the second one into the corresponding vector-polarization state. The state in Eq.~(12) is composed of an infinite-dimensional vector-polarization basis, which forms an infinite-dimensional Hilbert space.
In summary, we have proposed the concept of the vector-polarization Bell state, extending the traditional polarization state by the analogy between the OAM and the phase, and have extended the Hilbert space of the polarization degree of freedom from two dimensions to infinite dimensions. The vector-polarization entangled Bell states have been constructed. By using some optical elements and a Sagnac interferometer, an effective generation system for vector-polarization states has been presented. We have also designed an analyzer to distinguish the vector-polarization Bell states by using linear optical elements only. The similarities and differences between OAM states and vector-polarization states have been pointed out. The vector-polarization state is a degree of freedom independent of the OAM states, although it is always associated with a pair of opposite OAMs. Differently from the traditional polarization states, the vector-polarization states are also ``quantized", because they are always associated with a pair of quantized OAMs with opposite handedness. These vector-polarization states could in fact be extended to higher-dimensional multi-particle entanglement. They can be anticipated to find many applications in quantum cryptography, quantum teleportation, quantum communication, and quantum information, due to their higher dimensions and larger flux of information. These photon states should be versatile and potentially suitable for future photonic technologies.
\section*{ACKNOWLEDGMENTS}
This work is supported by the National Basic Research Program (973 Program) of China (No. 2012CB921900), National Natural Science Foundation of China (No. 11274183 and No. 11374166), 111 Project (No. B07013), the National scientific instrument and equipment development project (No. 2012YQ17004), and Tianjin research program of application foundation and advanced technology (No. 13JCZDJC33800 and No. 12JCYBJC10700).
\section{Introduction}
The graded center of a $k$-linear triangulated category $({\mathcal{C}};\Sigma)$ over a commutative
ring $k$ is the graded $k$-module $Z^*({\mathcal{C}}) = Z^*({\mathcal{C}};\Sigma)$ which in degree $n\in {\mathbb{Z}}$
consists of all $k$-linear natural transformations $\varphi : \mathrm{Id}_{\mathcal{C}} \rightarrow
\Sigma^n$ satisfying
$\Sigma\varphi = (-1)^n\varphi\Sigma$; this becomes a graded
commutative $k$-algebra with multiplication essentially induced by composition in ${\mathcal{C}}$.
We refer to \cite{Li2} for more details.
By \cite{Hap}, the stable category $\modbar(A)$ of finitely generated modules over a
finite-dimensional self-injective algebra $A$ is a triangulated category with shift
functor the inverse of the Heller operator. Its graded center has been calculated for
Brauer tree algebras \cite{KeLi} and in particular also uniserial algebras
\cite{KrYe}. These calculations suggest that Tate cohomology rings of blocks and the
graded centers of their stable module categories are closely related. Since the Tate
cohomology of a block is an invariant of the fusion system of the block, a good
understanding of graded centers might shed some light on the question to what extent the
fusion system of a block is determined by its stable module category. These calculations
also suggest that in order to determine the graded center of the stable module category
of a block one will need to do this first for a defect group algebra of the block, hence
for finite $p$-group algebras. This is what motivates the present paper. The following
result shows that $Z^0(\modbar(A))$ need not be finite-dimensional, answering a question
raised in \cite{Li2}.
\begin{Theorem} \label{2groupsstablecenter}
Let $P$ be a finite $2$-group of rank at least $2$ and $k$ an algebraically closed field
of characteristic~$2$. Evaluation at the trivial $kP$-module
induces a surjective homomorphism of graded $k$-algebras
$Z^*(\modbar(kP)) \longrightarrow \hat H^*(P;k)$
whose kernel ${\mathcal{I}}$ is a nilpotent homogeneous ideal which is infinite-dimensional in each
degree; in particular,
$Z^0(\modbar(kP))$ has infinite dimension.
\end{Theorem}
For odd $p$ we have a slightly weaker statement:
\begin{Theorem} \label{pgroupsstablecenter}
Let $p$ be an odd prime, $P$ a finite $p$-group of rank at least
$2$ and $k$ an algebraically closed field of characteristic~$p$. Evaluation at the
trivial $kP$-module
induces a surjective homomorphism of graded $k$-algebras
$Z^*(\modbar(kP)) \longrightarrow \hat H^*(P;k)$
whose kernel ${\mathcal{I}}$ is a nilpotent homogeneous ideal which is infinite-dimensional in each
odd degree. \end{Theorem}
It is easy to see that the canonical
map $Z^*(\modbar(kP))\rightarrow \hat H^*(P;k)$ is surjective
with nilpotent kernel ${\mathcal{I}}$ (see Lemma \ref{tatestablecenter} below). The point of the
above
theorem is that this kernel tends to have infinite dimension in
each degree (if $p=2$) and at least in each odd degree if $p>2$.
Note though that for $p$ odd there is no known example with
$Z^0(\modbar(kP))$ having infinite dimension.
The proof shows more precisely that these dimensions have as lower
bound the cardinality of the field $k$. Technically,
the proofs of the above theorems are based on the fact that for $A$ a
symmetric algebra, an almost split sequence ending in an indecomposable
non projective $A$-module $U$ determines an {\it almost vanishing
morphism} $\zeta_U : U \rightarrow \Omega(U)$ which in turn provides elements of degree
$-1$ in the graded center of the stable category; see e.g. \cite[Proposition 1.4]{Li2} or
section \ref{background} below. Using modules with appropriate periods, these can then
be ``shifted" to all other degrees if the underlying characteristic is $2$ and all other
odd degrees if the characteristic is odd. The elements of $Z^*(\modbar(A))$ obtained in
this way will be
called {\it almost vanishing}; see \ref{avdef} for details.
For Klein four groups, almost split sequences
turn out to be the only way to obtain elements in the graded center
of its stable module category beyond Tate cohomology. This can be seen using the
classification of indecomposable modules over Klein four groups and leads to a slightly
more precise statement.
\begin{Theorem} \label{kleinfour}
Let $P$ be a Klein four group and let $k$ be an algebraically closed field
of characteristic $2$. Then the evaluation at the trivial $kP$-module
induces a surjective homomorphism of graded $k$-algebras
$Z^*(\modbar(kP)) \longrightarrow \hat H^*(P;k)$
whose kernel ${\mathcal{I}}$ is a homogeneous ideal which is infinite-dimensional in each degree.
Moreover, we have ${\mathcal{I}}^2 = \{0\}$ and all elements in ${\mathcal{I}}$
are almost vanishing. \end{Theorem}
This raises the question for which finite $p$-groups $P$ the graded center
$Z^*(\modbar(kP))$ is generated by Tate cohomology
and almost vanishing elements. Another interesting question underlying
some of the technical details below is the following. Given a periodic module
$U$ of period $n$ of a symmetric algebra $A$ over a field $k$, any
isomorphism $\alpha : U\cong \Omega^n(U)$ induces an algebra automorphism
of the stable endomorphism algebra $\Endbar_A(U)$ sending $\varphi$ to
$\alpha^{-1}\circ\Omega(\varphi)\circ\alpha$. When is this an inner
automorphism? Equivalently, when can $\alpha$ be chosen in such a way
that this automorphism is the identity? D. J. Benson \cite{Bencomm}
observed that the answer is positive if $\alpha$ is induced by an
element in Tate cohomology (or the Tate analogue of Hochschild cohomology)
because then $\alpha$ is the evaluation at $U$ of a natural transformation from the
identity functor on $\modbar(A)$ to the functor $\Omega^n$.
\section{Background material} \label{background}
Let $A$ be a finite-dimensional algebra over a field $k$. The stable module
category $\modbar(A)$ has as objects the finitely generated $A$-modules
and as morphisms, for any two finitely generated $A$-modules $U$, $V$,
the $k$-vector space $\Hombar_A(U,V) = \mathrm{Hom}_A(U,V)/\mathrm{Hom}_A^{pr}(U,V)$,
where $\mathrm{Hom}_A^{pr}(U,V)$ is the $k$-vector space of $A$-homomorphisms from
$U$ to $V$ which factor through a projective $A$-module, with composition
of morphisms induced by the usual composition of $A$-homomorphisms
in the category of finitely generated $A$-modules $\mathrm{mod}(A)$.
The Heller operator $\Omega_A$ sends an $A$-module $U$ to the kernel
of a projective cover $P_U\rightarrow U$ of $U$; this induces a
functor, still denoted $\Omega_A$, on $\modbar(A)$, which is unique up to unique
isomorphism of functors. If $A$ is self-injective then $\Omega_A$ is an equivalence,
and $\modbar(A)$ becomes a triangulated category with shift functor
$\Sigma_A = \Omega_A^{-1}$, sending an $A$-module to a cokernel of an
injective hull of $U$, with exact triangles induced by short exact
sequences in $\mathrm{mod}(A)$. See e.g. \cite{Hap} for details.
If $A$ is symmetric - that is, $A$ is
isomorphic, as $A$-$A$-bimodule, to its $k$-dual $\mathrm{Hom}_k(A,k)$ -
then $A$ is in particular self-injective.
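For a concrete feel for the Heller operator, the following small sketch (an illustration, not part of the paper, under the standard identification $kP \cong k[x,y]/(x^2,y^2)$ for $P$ a Klein four group and $k = \mathbb{F}_2$) computes $\dim \Omega^n(k)$ for the trivial module by taking kernels of projective covers, recovering the well-known dimensions $2n+1$:

```python
import numpy as np

# Syzygies over the Klein four group algebra kP = k[x,y]/(x^2, y^2), k = GF(2).
# A module is encoded by commuting action matrices X, Y with X^2 = Y^2 = 0.

def rref(M):
    """Row-reduce M over GF(2); return (reduced matrix, pivot column list)."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if M[i, c]), None)
        if pr is None:
            continue
        M[[r, pr]] = M[[pr, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        pivots.append(c)
        r += 1
    return M, pivots

def nullspace(M):
    """Basis of the kernel of M over GF(2), returned as columns."""
    Rm, piv = rref(M)
    free = [c for c in range(M.shape[1]) if c not in piv]
    basis = []
    for f in free:
        v = np.zeros(M.shape[1], dtype=int)
        v[f] = 1
        for i, c in enumerate(piv):
            v[c] = Rm[i, f]
        basis.append(v)
    return np.array(basis, dtype=int).T if basis else np.zeros((M.shape[1], 0), dtype=int)

def solve(A, b):
    """Solve A x = b over GF(2) (assumed consistent; free variables set to 0)."""
    Rm, piv = rref(np.concatenate([A % 2, b.reshape(-1, 1) % 2], axis=1))
    x = np.zeros(A.shape[1], dtype=int)
    for i, c in enumerate(piv):
        x[c] = Rm[i, -1]
    return x

def heller(X, Y):
    """One application of Omega: kernel of a projective cover, with its actions."""
    d = X.shape[0]
    I = np.eye(d, dtype=int)
    rad = np.concatenate([X, Y], axis=1) % 2           # radical = xM + yM
    piv = rref(np.concatenate([rad, I], axis=1))[1]    # complete rad to a basis of M
    gens = [I[:, p - rad.shape[1]] for p in piv if p >= rad.shape[1]]
    g = len(gens)                                      # g = dim M/rad(M) (Nakayama)
    # projective cover P = A^g with basis blocks (1_i, x_i, y_i, xy_i)
    C = np.zeros((d, 4 * g), dtype=int)
    for i, mvec in enumerate(gens):
        C[:, 4*i:4*i+4] = np.array([mvec, X @ mvec, Y @ mvec, X @ Y @ mvec]).T % 2
    XP = np.zeros((4*g, 4*g), dtype=int)
    YP = np.zeros((4*g, 4*g), dtype=int)
    for i in range(g):
        XP[4*i+1, 4*i] = XP[4*i+3, 4*i+2] = 1          # x.1_i = x_i, x.y_i = xy_i
        YP[4*i+2, 4*i] = YP[4*i+3, 4*i+1] = 1          # y.1_i = y_i, y.x_i = xy_i
    K = nullspace(C)                                   # Omega(M) sitting inside P
    Xn = np.array([solve(K, XP @ K[:, j]) for j in range(K.shape[1])], dtype=int).T
    Yn = np.array([solve(K, YP @ K[:, j]) for j in range(K.shape[1])], dtype=int).T
    return Xn, Yn

# trivial module k: x and y act as zero on a 1-dimensional space
X = np.zeros((1, 1), dtype=int)
Y = np.zeros((1, 1), dtype=int)
dims = []
for n in range(1, 4):
    X, Y = heller(X, Y)
    dims.append(X.shape[0])
assert dims == [3, 5, 7]   # dim Omega^n(k) = 2n + 1 for the Klein four group
```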
The literature on finite-dimensional algebras tends to privilege $\Omega_A$ from a
notational point of view, while the one on triangulated categories would rather use
$\Sigma_A$; we sometimes use both.
If $A$ is clear from the context and no confusion arises, we write $\Omega$ and $\Sigma$
instead of $\Omega_A$ and $\Sigma_A$, respectively.
If $B$ is a subalgebra of $A$ we have a canonical isomorphism
$\Omega_A^n(A\otimes_B V)\cong A\otimes_B \Omega_B^n(V)$ in $\modbar(A)$, where $V$ is a finitely
generated $B$-module and $n$ a positive integer.
If moreover $A$ is projective as $B$-module, we also have a canonical
isomorphism $\Omega_B^n(\mathrm{Res}^A_B(U))\cong\mathrm{Res}^A_B(\Omega_A^n(U))$ in $\modbar(B)$, where
$U$ is a finitely generated $A$-module, and the canonical adjunction isomorphism
$\mathrm{Hom}_A(A\otimes_B V,U)
\cong$ $\mathrm{Hom}_B(V,\mathrm{Res}^A_B(U))$ induces an isomorphism $\Hombar_A(A\otimes_B V,U)
\cong$ $\Hombar_B(V,\mathrm{Res}^A_B(U))$, where $U$ is an $A$-module and $V$ is a
$B$-module. The following well-known lemma states that the Heller translates commute
with the adjunction isomorphisms (we sketch a proof for the convenience of the reader).
\begin{Lemma}\label{OmegaAdjComm} Let $A$ be a finite-dimensional algebra over a field
$k$ and $B$ a subalgebra
such that $A$ is projective as $B$-module. Let $n$ be a positive integer. The
following diagram of canonical isomorphisms is commutative:
$$\xymatrix{
\Hombar_A(A\otimes_B\Omega_B^n(V),\Omega_A^n(U))\ar[r]_\simeq\ar[d]_\simeq
&\Hombar_B(\Omega_B^n(V),\mathrm{Res}^A_B(\Omega_A^n(U)))\ar[d]^\simeq\\
\Hombar_A(\Omega_A^n(A\otimes_B V),\Omega_A^n(U))
&\Hombar_B(\Omega_B^n(V),\Omega_B^n(\mathrm{Res}^A_B(U)))\\
\Hombar_A(A\otimes_B V,U)\ar[r]^\simeq\ar[u]^{\Omega_A^n}_{\simeq}
&\Hombar_B(V,\mathrm{Res}^A_B(U))\ar[u]_{\Omega_B^n}^{\simeq}
}$$
\end{Lemma}
\begin{proof}
We sketch the argument in the case $n=1$; the general case follows either
by induction, or directly with short exact sequences in the two diagrams below
replaced by exact sequences with $n+2$ terms of which all but possibly the first and last
are projective.
The Heller translates $\Omega_A$ and $\Omega_B$ are defined by the following two
commutative diagrams with exact rows, where $P_X$ denotes a projective cover of the
module $X$:
$$\xymatrix{
0\ar[r] &A\otimes_B\Omega_B(V)\ar[r]\ar[d]_\simeq &A\otimes_B P_V\ar[r]_\nu\ar[d]_\delta &A\otimes_B
V\ar[r]\ar@{=}[d] &0\\
0\ar[r] &\Omega_A(A\otimes_B V)\ar[r]\ar[d]_{\Omega_\alpha} &P_{A\otimes_B
V}\ar[r]\ar[d]_\gamma &A\otimes_B V\ar[r]\ar[d]_\alpha &0\\
0\ar[r] &\Omega_A(U)\ar[r] &P_U\ar[r]_\mu &U\ar[r] &0
}$$
$$\xymatrix{
0\ar[r] &\Omega_B(V)\ar[r]\ar[d]_{\Omega_\beta} &P_V\ar[r]_\rho\ar[d]_\phi &
V\ar[r]\ar[d]_\beta &0\\
0\ar[r] &\Omega_B(\mathrm{Res}_B^A(U))\ar[r]\ar[d]_\simeq &P_{\mathrm{Res}_B^A(U)}\ar[r]\ar[d]_\psi
&\mathrm{Res}_B^A(U)\ar[r]\ar@{=}[d] &0\\
0\ar[r] &\mathrm{Res}_B^A(\Omega_A U)\ar[r] &\mathrm{Res}_B^A(P_U)\ar[r]_\sigma &\mathrm{Res}_B^A(U)\ar[r]&0
}$$
Here $\mu$, $\rho$ are chosen surjective morphisms and
$\sigma = \mathrm{Res}^A_B(\mu)$, $\nu = \mathrm{Id}_A\otimes \rho$.
We have that $\mu\circ\gamma\circ\delta=\alpha\circ\nu$. Since $\alpha$, $\beta$
correspond to each other through the adjunction
isomorphism, the two diagrams imply that the images of
$\Omega_\alpha$ and $\Omega_\beta$ correspond to each other through
the adjunction $\Hombar_B(\Omega_B(V),\mathrm{Res}^A_B(\Omega_A(U)))
\cong$ $\Hombar_A(A\otimes_B \Omega_B(V), \Omega_A(U))$. \end{proof}
A morphism $\alpha : U\rightarrow V$ in $\modbar(A)$ is called
{\it almost vanishing} if $\alpha\neq 0$ and if for any morphism
$\varphi : X \rightarrow U$ in $\modbar(A)$ which is not a split epimorphism we have
$\alpha\circ\varphi = 0$. Since endomorphism
algebras of indecomposable $A$-modules are (not necessarily split) local, it follows from
\cite[I.4.1]{Hap} that this condition is equivalent to its dual; that is, for any
morphism $\psi : V \rightarrow Y$ in $\modbar(A)$ which is not a
split monomorphism we have $\psi\circ\alpha = 0$. In order to follow almost vanishing
morphisms through the standard
adjunction isomorphisms we collect a few elementary statements
on split epimorphisms. In statements and proofs
where we consider both $A$-homomorphisms and their classes in the stable
category we adopt the notational convention that if $\varphi : U\to V$
is a homomorphism of $A$-modules then $\underline\varphi$ denotes the
class of $\varphi$ in $\Hombar_A(U,V)$. As in any triangulated category,
epimorphisms in $\modbar(A)$ are split epimorphisms, and those are
essentially induced by split epimorphisms in $\mathrm{mod}(A)$:
\begin{Lemma} \label{stablesplitEpi}
Let $A$ be a finite-dimensional $k$-algebra and $\varphi : U\rightarrow
V$ an $A$-homomorphism. Suppose that $V$ is indecomposable non projective.
Then $\varphi$ is a split epimorphism in $\mathrm{mod}(A)$ if and only if
its class $\underline\varphi : U\rightarrow V$ is a split epimorphism in $\modbar(A)$.
\end{Lemma}
\begin{proof}
Suppose $\underline\varphi$ is a split epimorphism in $\modbar(A)$.
Let $\sigma:V\rightarrow U$ be an $A$-homomorphism such that
$\underline\varphi\circ\underline\sigma = \underline{\mathrm{Id}_V}$.
Thus $\varphi\circ\sigma-\mathrm{Id}_V$ factors through a projective
module. Since $V$ is indecomposable non projective this implies that
$\varphi\circ\sigma-\mathrm{Id}_V\in J(\mathrm{End}_A(V))$, hence $\rho = \varphi\circ\sigma$ is an
automorphism of $V$ and $\sigma\circ\rho^{-1}$ a section for $\varphi$. The converse is
trivial.
\end{proof}
\begin{Lemma}\label{splitEpi}
Let $A$ be a $k$-algebra and $B$ a subalgebra of $A$. Suppose that $B$ has a complement
in $A$ as a $B$-$B$-bimodule. Let $\varphi:V\to W$ be a homomorphism of left $B$-modules
and set
$\psi = \mathrm{Id}_A\otimes\varphi : A\otimes_B V\rightarrow A\otimes_B W$.
Then $\psi$ is a split epimorphism if and only if $\varphi$ is a split epimorphism.
\end{Lemma}
\begin{proof}
Suppose that $\psi$ is a split epimorphism; that is, there is
an $A$-homomorphism $\sigma : A\otimes_B W\rightarrow A\otimes_B V$ satisfying $\psi\circ\sigma =
\mathrm{Id}_{A\otimes_B W}$. Let $C$ be a complement of $B$ in $A$ as a $B$-$B$-bimodule; that is,
$A = B\oplus C$ as a $B$-$B$-bimodule.
Denote by $\pi : A\otimes_B V\rightarrow V$ and $\tau : A\otimes_B W\rightarrow W$
the canonical projections with kernel $C\otimes_B V$ and $C\otimes_B W$,
respectively, induced by the projection $A\rightarrow B$ with kernel $C$. Explicitly, for
$v\in V$ and $a\in A$
we have $\pi(a\otimes v) = av$ if $a\in B$ and $\pi(a\otimes v) = 0$ if
$a\in C$; similarly for $\tau$. Since $\psi = \mathrm{Id}_A\otimes \varphi$
we have $\varphi\circ\pi = \tau\circ\psi$. Define $\rho : W\rightarrow V$
by $\rho(w) = \pi(\sigma(1\otimes w))$, for any $w\in W$. Then
$\varphi(\rho(w)) =$ $\varphi(\pi(\sigma(1\otimes w))) = $ $\tau(\psi(\sigma(1\otimes w))) =$ $
\tau(1\otimes w) = w$, hence $\varphi$ is
a split epimorphism with section $\rho$. The converse is trivial.
\end{proof}
\begin{Lemma} \label{Omegainvariance}
Let $A$ be a finite-dimensional algebra over an algebraically closed
field $k$ and $B$ a subalgebra such that $A$ is projective as left and right $B$-module
and such that $B$ has
a complement in $A$ as a $B$-$B$-bimodule. Let $V$ be a finitely
generated indecomposable non projective $B$-module such that $U=A\otimes_B V$ is
indecomposable. Then $U$ is non projective.
Let $\zeta_V : V\rightarrow \Omega_B(V)$ and $\zeta_U : U\rightarrow \Omega_A(U)$ be
almost vanishing homomorphisms
in $\modbar(B)$ and $\modbar(A)$, respectively.
Suppose that there is an isomorphism $\beta : \Omega_B^n(V)\cong V$ in $\modbar(B)$ such
that $\Omega_B(\beta)\circ\Omega_B^n(\zeta_V)=(-1)^n\zeta_V\circ\beta$.
Then $\Omega_A^n(U)\cong U$ and for any isomorphism
$\alpha : \Omega_A^n(U)\cong U$ in $\modbar(A)$ we have
$\Omega_A(\alpha)\circ\Omega_A^n(\zeta_U)=(-1)^n\zeta_U\circ\alpha$.
\end{Lemma}
\begin{proof}
We first point out where we use the hypothesis on $k$ being algebraically closed.
If we can find {\it some} isomorphism
$\alpha : \Omega_A^n(U)\cong U$ in $\modbar(A)$ such that
$\Omega_A(\alpha)\circ\Omega_A^n(\zeta_U)=(-1)^n\zeta_U\circ\alpha$ then
this holds for {\it any} isomorphism because $\mathrm{End}_A(U)$ is split local.
Moreover, $\zeta_U$ is unique up to a nonzero scalar.
Similarly for $V$ and $\zeta_V$.
By the assumptions, $A = B\oplus C$ for some $B$-$B$-submodule
$C$ of $A$ which is finitely generated projective as left and right
$B$-module. Thus $\mathrm{Res}^A_B$ sends any projective $A$-module to a
projective $B$-module, and $V$ is a direct summand of $\mathrm{Res}^A_B(U)$.
Since $V$ is non projective this implies that $U$ is non projective.
By adjunction we have
$$\Hombar_{A}(U,U) \cong \Hombar_{B}(V,\mathrm{Res}^A_B(A\otimes_B V)).$$ We prove that the image
$\eta$ of $\zeta_U$ under this adjunction is a morphism
which, when composed with any projection of $\mathrm{Res}^A_B(A\otimes_B V)$ onto an
indecomposable direct summand is either zero or almost vanishing in $\modbar(B)$. This is
equivalent to showing that $\eta$ precomposed with any morphism
ending at $V$ which is not a split epimorphism in $\modbar(B)$ yields zero.
Let $X$ be a $B$-module and $\varphi\in\mathrm{Hom}_B(X,V)$ such that the class
$\underline\varphi$ in $\Hombar_B(X,V)$ is not a split epimorphism.
Then by Lemma \ref{stablesplitEpi} and Lemma \ref{splitEpi} the morphism $\psi=\mathrm{Id}_A\otimes
\varphi$ in $\Hombar_A(A\otimes_B X,U)$ is not a split epimorphism and hence
$\zeta_U\circ\psi$ is zero in $\Hombar_{A}(A\otimes_B X,\Omega_A(U))$. Applying the adjunction to
$\zeta_U\circ\psi$ we get that $\eta\circ\varphi$ is zero in
$\Hombar_{B}(X,\mathrm{Res}^A_B(A\otimes_B V))$. Thus all components of $\eta$
to the indecomposable direct summands of $\mathrm{Res}^A_B(A\otimes_B V)$ are
either zero or a scalar multiple of $\zeta_V$.
The hypothesis on $\zeta_V$ implies that $\Omega^n_{B}(\eta)=(-1)^n\eta$, modulo
appropriate identifications.
It follows from Lemma \ref{OmegaAdjComm} that $\Omega_A^n(\zeta_U)=(-1)^n\zeta_U$,
again modulo appropriate identifications, which proves the result.
\end{proof}
The existence of
an almost vanishing morphism $\alpha : U\rightarrow V$ in $\modbar(A)$
forces that $U$, $V$ are indecomposable non projective $A$-modules,
and that the exact triangle in $\modbar(A)$ of the form
$$\xymatrix{\Omega(V) \ar[r] & E \ar[r] & U \ar[r]^{\alpha} & V }$$ is {\it almost split}
in the sense of \cite[I.4.1.]{Hap}, hence induced
by an almost split exact sequence of $A$-modules of the form
$$\xymatrix{0 \ar[r] & \Omega(V) \ar[r] & E \ar[r] & U \ar[r] & 0 }$$
which in turn forces that $V\cong \Omega(U)$
as $A$ is symmetric (cf. \cite[4.12.8]{Ben}). This shows that if there is an almost
vanishing
morphism $U \rightarrow \Sigma^n(U)$ in $\modbar(A)$ for some integer $n$
then $\Sigma^n(U)\cong \Sigma^{-1}(U)$, and hence either $n = -1$ or $U$ is periodic of
period dividing $n+1$. As mentioned earlier, almost vanishing
morphisms define elements in the graded center. It is convenient to extend the
terminology of almost vanishing morphisms to elements in the graded center:
\begin{Definition} \label{avdef}
Let $A$ be a symmetric algebra over a field $k$.
For any integer $n$ we say that an element
$\varphi\in Z^n(\modbar(A))$ is {\it almost vanishing} if for any
indecomposable non projective $A$-module $U$ either $\varphi(U) = 0$
or $\varphi(U) : U \rightarrow \Sigma^n(U)$ is almost vanishing.
An element in $Z^*(\modbar(A))$ is called {\it almost vanishing}
if all of its homogeneous components are almost vanishing.
We denote by ${\mathcal{I}}_A$ the set of all almost vanishing elements
in $Z^*(\modbar(A))$.
\end{Definition}
This definition makes sense for arbitrary triangulated categories.
Note that the zero element of $Z^*(\modbar(A))$ is
contained in ${\mathcal{I}}_A$; this slight departure from the definition of
almost vanishing morphisms has the following obvious advantage:
\begin{Lemma} \label{avideal}
Let $A$ be a symmetric algebra over a field $k$. Then ${\mathcal{I}}_A$ is an ideal in
$Z^*(\modbar(A))$ whose square is zero. Moreover, ${\mathcal{I}}_A$ annihilates
every ideal consisting of nilpotent elements in $Z^*(\modbar(A))$.
\end{Lemma}
\begin{proof} If $\varphi\in Z^n(\modbar(A))$ is nilpotent, where $n$
is an integer, then for any indecomposable non projective $A$-module
$U$ the morphism $\varphi(U) : U\rightarrow \Sigma^n(U)$ is not an
isomorphism. Since $U$, $\Sigma^n(U)$ are both indecomposable,
$\varphi(U)$ is neither a split epimorphism nor a split monomorphism.
Thus $\varphi(U)$ composed with any almost vanishing
morphism yields zero. The result follows.
\end{proof}
In the statements of Theorem \ref{pgroupsstablecenter} and
Theorem \ref{kleinfour}, in order to show that $Z^*(\modbar(A))$ is not
finite-dimensional in a particular degree we actually show that ${\mathcal{I}}_A$ is not
finite-dimensional in that degree. We will need the following lemma, which is well-known
and strictly analogous to \cite[Proposition 1.3]{Li2}, for instance.
\begin{Lemma} \label{tatestablecenter}
Let $p$ be a prime, $P$ a finite $p$-group and $k$ a field of
characteristic $p$. Evaluation at the trivial $kP$-module $k$
induces a surjective graded $k$-algebra homomorphism $Z^*(\modbar(kP))\rightarrow \hat H^*(P;k)$
whose kernel ${\mathcal{I}}$ is a
nilpotent ideal.
\end{Lemma}
\begin{proof}
This is the same proof as that of \cite[Proposition 1.3]{Li2},
with $H^*(P;k)$, $HH^*(kP)$, $D^b(kP)$ replaced by Tate cohomology $\hat H^*(P;k)$, the
Tate analogue of Hochschild cohomology
$\hat{HH}^*(kP)$ and $\modbar(kP)$, respectively.
\end{proof}
\section{On some modules of period $1$ and $2$}
In order to construct elements in a given degree $n$ of the graded center of the stable
category of a symmetric algebra $A$ over a field $k$ it is not sufficient to construct
$A$-modules with period dividing $n+1$; we also need to have some control over the effect
of the shift on endomorphisms in order to verify the additional compatibility condition
with the shift functor of natural transformations belonging to $Z^n(\modbar(A))$. This is
the purpose of the elementary observations in this section. We use without further
comment that if $k$ has positive characteristic
$p$ and if $x\in Z(A)$ such that $x^{p-1}\neq 0$, $x^p = 0$, then
$1+x$ is an invertible element in $A$ of order $p$ and generates
a subalgebra which can be identified with the truncated polynomial
algebra $k[x]/(x^p)$ or the group algebra $k\langle 1+x\rangle$,
whichever is more convenient. The algebra $A$ may or may not be projective as $k\langle
1+x\rangle$-module; this will depend on $x$. If $A = kP$ is the group algebra of an
elementary abelian $p$-group $P$ with
minimal generating set $\{u_1,u_2,\ldots,u_r\}$, then any nonzero $k$-linear
combination $x = \sum_{i=1}^r\ \lambda_i(1-u_i)$ has the property that
$A$ is projective as $k\langle 1+x\rangle$-module. The subgroups
generated by $1+x$ for those $x$ are called cyclic shifted subgroups of~$A$.
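For concreteness, the basic computation behind this can be recorded in the special case $x = 1-u$, for a single generator $u$ of order $p$, using the congruence $\binom{p-1}{i}\equiv (-1)^i \pmod p$:
$$(1-u)^p\ =\ 1-u^p\ =\ 0\ , \qquad (1-u)^{p-1}\ =\ \sum_{i=0}^{p-1}\binom{p-1}{i}(-u)^i\ =\ \sum_{i=0}^{p-1} u^i\ \neq\ 0\ ,$$
the last sum being the norm element of $k\langle u\rangle$. For a general combination $x = \sum_{i=1}^r\ \lambda_i(1-u_i)$ we still have $x^p = \sum_{i=1}^r\ \lambda_i^p(1-u_i)^p = 0$, since the elements $1-u_i$ commute and $k$ has characteristic $p$.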
\begin{Lemma} \label{cyclicmodify}
Let $A$ be a finite-dimensional algebra over a field $k$
of prime characteristic $p$ and $x$ an element in $A$ such
that $x^p=0$, $x^{p-1}\neq 0$ and such that $A$ is projective as a
$k\langle 1+x\rangle$-module. Let $u\in A^\times$ such that $ux = xu$.
Then $(ux)^p=0$, $(ux)^{p-1}\neq 0$ and $A$ is projective as a
$k\langle 1+ux\rangle$-module.
\end{Lemma}
\begin{proof} Since $x$ and $u$ commute we clearly have $(ux)^p=0$ and $(ux)^{p-1}\neq
0$. The hypothesis that $A$ is projective
as $k\langle 1+x\rangle$-module is equivalent to $\dim_k(x^{p-1}A) =
\frac{1}{p}\dim_k(A)$. As $u$ is invertible and commutes with $x$, we have $(ux)^{p-1}A =
x^{p-1}A$, and hence $A$ is projective as
$k\langle 1+ux\rangle$-module.
\end{proof}
\begin{Lemma} \label{periodonemodules}
Let $A$ be a finite-dimensional algebra over a field $k$ of prime characteristic $p>0$.
Let $x\in Z(A)$ such that $x^{p-1}\neq 0$, $x^p = 0$ and such that $A$ is free as right
$k\langle 1+x\rangle$-module. Set $M = Ax^{p-1}$.
\smallskip\noindent (i)
If $p=2$ there is an isomorphism $\alpha : M\cong \Omega(M)$ in $\modbar(A)$
such that $\alpha\circ\underline\varphi = \Omega(\underline\varphi)\circ\alpha$
for any $\varphi\in\mathrm{End}_A(M)$.
\smallskip\noindent (ii)
If $p>2$ there is an isomorphism $\beta : M\cong \Omega^2(M)$ in $\modbar(A)$
such that $\beta\circ\underline\varphi = \Omega^2(\underline\varphi)\circ\beta$
for any $\varphi\in\mathrm{End}_A(M)$.
\smallskip\noindent (iii)
We have $\Endbar_A(M) \cong \mathrm{End}_A(M)$.
\end{Lemma}
\begin{proof}
Since $A$ is free as right $k\langle 1+x\rangle$-module
we have an exact sequence of $A$-modules of the form
$$\xymatrix{ 0 \ar[r] & M \ar[r] & A \ar[r]^{\epsilon} & A \ar[r]^{\pi} & M\ar[r] &0
}$$
where $\epsilon$ is given by right multiplication with $x$ and $\pi$
is given by right multiplication with $x^{p-1}$. Thus for this choice
of projective covers we have $\Omega(M) = Ax$ and $\Omega^2(M) = M$.
Let $\varphi\in\mathrm{End}_A(M)$. Then $\varphi$ lifts to an endomorphism
$\psi\in\mathrm{End}_A(A)$ satisfying $\pi\circ\psi = \varphi\circ\pi$.
Let $y\in A$ such that $\psi(a) = ay$ for all $a\in A$. Thus
$\varphi(ax^{p-1}) = ayx^{p-1} = ax^{p-1}y$ for all $a\in A$.
Also, since $x$ is central we have $\epsilon\circ\psi = \psi\circ\epsilon$.
Thus $\psi$ restricts to the endomorphism $\varphi$ of $M$, which
shows (i) and (ii). Suppose now that $\varphi$ factors through a projective $A$-module.
Then $\varphi$ factors through the inclusion $M\subseteq A$, or equivalently, there is
$y\in A$ such that $\varphi(m) = my$ for all $m\in M$ and such that $Ay\subseteq M$.
But then in particular $y\in M = Ax^{p-1}$, hence
$y = ux^{p-1}$ for some $u\in A$. Since any $m\in M$ is of the form $m = ax^{p-1}$ for
some $a\in A$ we get that $\varphi(m) = \varphi(ax^{p-1}) = ax^{p-1}ux^{p-1} =
aux^{2(p-1)} = 0$,
whence (iii).
\end{proof}
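As a minimal illustration, not needed in the sequel, one may take $A = k[x]/(x^p) = k\langle 1+x\rangle$ itself, so that $M = Ax^{p-1} = \mathrm{soc}(A)\cong k$. The exact sequence in the proof specializes to
$$\xymatrix{ 0 \ar[r] & Ax^{p-1} \ar[r] & A \ar[r]^{\cdot x} & A \ar[r]^{\cdot x^{p-1}} & Ax^{p-1} \ar[r] & 0 }$$
so $\Omega(M) = Ax$ has dimension $p-1$ and $\Omega^2(M) = M$; for $p = 2$ we have $Ax = Ax^{p-1} = M$, recovering period $1$.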
The next observation shows that if the center of a finite $p$-group
$P$ has rank at least $2$ then the construction principle for modules of period $1$ and
$2$ over the group algebra $A = kP$ in the
two previous lemmas yields infinitely many isomorphism classes
whenever the field $k$ is infinite.
\begin{Proposition} \label{isomod}
Let $p$ be a prime, $P$ a finite $p$-group and $k$ a field of
characteristic $p$. Set $A = kP$. Suppose that $Z(P)$ has an
elementary abelian subgroup $E = \langle s\rangle \times \langle t\rangle$ of rank $2$,
for
some elements $s, t\in Z(P)$ of order $p$. Let $(\lambda,\mu), (\lambda',\mu')\in
k^2-\{(0,0)\}$. Set
$x = \lambda(s-1) + \mu(t-1)$ and $x' = \lambda'(s-1) + \mu'(t-1)$.
Then $Ax$ is absolutely indecomposable as left $A$-module, and
the following statements are equivalent.
\smallskip\noindent (i)
$Ax \cong Ax'$ as left $A$-modules.
\smallskip\noindent (ii)
$Ax = Ax'$.
\smallskip\noindent (iii)
The images of $(\lambda,\mu)$ and $(\lambda',\mu')$ in ${\mathbb{P}}^1(k)$
are equal.
\end{Proposition}
\begin{proof}
Since $A = kP$ is split local, any quotient of $A$ as $A$-module
is absolutely indecomposable.
Suppose (i) holds. Since $A$ is symmetric, any isomorphism
$Ax\cong Ax'$ extends to an endomorphism of $A$,
which is given by right multiplication with an element $y\in A$.
In particular, $Axy = Ax'$. But $x\in Z(A)$ implies $Axy = Ayx
\subseteq Ax$. Thus $Ax'\subseteq Ax$. Exchanging the roles of $x$
and $x'$ shows $Ax = Ax'$, whence (ii) holds. The converse is
trivial. Suppose now that (ii) holds. Arguing by contradiction,
suppose that the images of $(\lambda,\mu)$, $(\lambda',\mu')$ in
${\mathbb{P}}^1(k)$ are different, or equivalently, that $(\lambda,\mu)$, $(\lambda',\mu')$ are
linearly independent in $k^2$. Since $Ax = Ax'$ contains any $k$-linear combination of
$x$ and $x'$ it
follows that $Ax$ contains any $k$-linear combination of $s-1$ and
$t-1$. But then $A(s-1) = Ax = A(t-1)$ because all involved modules
have the same dimension $\frac{p-1}{p}\dim_k(A)$ as $A$ is projective
as right $kE$-module. However, clearly $kE(s-1)\neq kE(t-1)$, so
this is impossible. This shows that (ii) implies (iii). The converse
is again trivial.
\end{proof}
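As an example with $p=2$, take $A = kV_4$ for a Klein four group $V_4 = \langle s\rangle\times\langle t\rangle$ and $x = s-1$. A routine verification, recorded here for orientation, gives
$$A(s-1)\ =\ k(s-1)\ \oplus\ k\,t(s-1)\ , \qquad \dim_k A(s-1)\ =\ 2\ =\ \tfrac{p-1}{p}\dim_k(A)\ ,$$
and the proposition yields exactly one isomorphism class of modules $Ax$ for each point of ${\mathbb{P}}^1(k)$.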
\begin{Corollary} \label{isomodcor}
Let $p$ be a prime, $P$ a finite $p$-group of rank at least $2$ and
$k$ an infinite field of characteristic $p$.
If $p=2$ then $kP$ has infinitely many isomorphism classes of absolutely
indecomposable modules of dimension $\frac{|P|}{2}$ whose period is $1$,
and if $p>2$ then $kP$ has infinitely many isomorphism classes of absolutely
indecomposable modules of dimension $\frac{|P|}{p}$ whose period is $2$.
\end{Corollary}
\begin{proof}
Let $E$ be an elementary abelian subgroup of rank $2$ of $P$.
By Proposition \ref{isomod} and Lemma \ref{periodonemodules}, the result holds for $kE$
instead of $kP$. Induction $\mathrm{Ind}^P_E$ is an exact
functor, preserving periodicity, and by Mackey's formula, for any indecomposable
$kP$-module $M$ there are, up to isomorphism, only finitely many indecomposable
$kE$-modules $N$ satisfying $\mathrm{Ind}^P_E(N) \cong M$, which implies the statement.
\end{proof}
\section{Proof of Theorem \ref{2groupsstablecenter} and
Theorem \ref{pgroupsstablecenter}}
Let $p$ be a prime and $P$ a finite $p$-group of
rank at least $2$. Let $k$ be an algebraically closed
field of characteristic $p$. By Lemma \ref{tatestablecenter}, evaluation at the trivial
$kP$-module induces a surjective graded algebra homomorphism $Z^*(\modbar(kP))\rightarrow
\hat H^*(P;k)$
with nilpotent kernel ${\mathcal{I}}$. Let $E = \langle s\rangle \times \langle t\rangle$ be an
elementary abelian subgroup of $Z(P)$ of rank $2$ for some elements
$s$, $t$ in $Z(P)$ of order $p$, and let $(\lambda,\mu)\in k^2-\{(0,0)\}$.
Set $A = kP$, $x = \lambda(s-1) + \mu(t-1)$ and $M = Ax^{p-1}$.
By Lemma \ref{periodonemodules} the module $M$ has
period $1$ if $p=2$ and period $2$ if $p$ is odd.
The almost split exact sequence ending in $M$ is of the form
$$\xymatrix{ 0 \ar[r] & \Omega^2(M) \ar[r] & E \ar[r] & M \ar[r] & 0}$$
hence represented by
an almost vanishing morphism $\zeta_M : M \rightarrow \Omega(M)$.
By \cite[Proposition 1.4]{Li2}, this defines a natural transformation $\varphi_M : \mathrm{Id}
\rightarrow \Omega$ as follows.
Set $\varphi_M(M) = \zeta_M$, set $\varphi_M(N) = 0$ if $N$ is indecomposable
non projective and not isomorphic to $M$ or $\Omega(M)$, and in the case
$p$ odd, set $\varphi_M(\Omega(M)) = -\Omega(\zeta_M)$.
Then, thanks to Lemma \ref{periodonemodules} again,
$\varphi_M$ is indeed a natural transformation belonging to
$Z^{-1}(\modbar(kP))$. If $p=2$ (resp. $p > 2$) then for any integer $n$ (resp. even
integer $n$) we have
$M \cong \Sigma^n(M)$, and hence $\varphi_M$ determines also an
element in $Z^{n-1}(\modbar(kP))$ for any integer $n$ (resp.
even integer $n$). By Proposition
\ref{isomod}, this implies that $Z^{n-1}(\modbar(kP))$ is infinite-dimensional
for any integer $n$ if $p=2$ and for any even integer $n$
if $p>2$. This proves Theorem \ref{2groupsstablecenter}
and Theorem \ref{pgroupsstablecenter}.
\begin{Remark}
D. J. Benson \cite{Bencomm} pointed out that it is easy to construct indecomposable
modules for finite $p$-group algebras with period
one for odd $p$. One might therefore wonder whether
the stronger statement of Theorem \ref{2groupsstablecenter}
holds for $p$ odd as well. \end{Remark}
\section{Proof of Theorem \ref{kleinfour}}
Let $k$ be an algebraically closed field of characteristic $2$. Denote by $V_4 = \langle
s, t\rangle$ a Klein four group, with commuting involutions $s, t$. By results of
Ba\v{s}ev \cite{Bas}, Heller and Reiner \cite{HeRe} (see also \cite[4.3.3]{Ben} for an
expository account), for any positive integer $n$
and any $\lambda \in k \cup \{\infty\}$ there is, up to isomorphism, a unique
indecomposable $kV_4$-module $M^\lambda_n$ of dimension $2n$,
having a $k$-basis $\{a_1, a_2,\ldots,a_n, b_1,b_2,\ldots, b_n\}$ on which the
elements $x = s-1$ and $y = t-1$ act by $$xa_i=b_i\ (1\leq i\leq n),\ ya_i = \lambda b_i
+b_{i+1}\ (1\leq i <n),\ ya_n = \lambda b_n,\ xb_i = 0 = yb_i\ (1\leq i\leq n)$$
provided that $\lambda\in k$, and by
$$xa_i = b_{i+1}\ (1\leq i<n),\ xa_n=0,\ ya_i=b_i\ (1\leq i\leq n),\ xb_i =
0 = yb_i\ (1\leq i\leq n)$$
if $\lambda = \infty$. Then $M^\lambda_n$, with $\lambda\in k\cup\{\infty\}$, is
a set of representatives of the isomorphism classes of indecomposable
$kV_4$-modules of dimension $2n$. This set can be indexed by the elements
in ${\mathbb{P}}^1(k)$ with $M^\lambda_n$ corresponding to the image of $(\lambda,1)$
in ${\mathbb{P}}^1(k)$ if $\lambda\in k$, and with $M^\infty_n$ corresponding to
the image of $(1,0)$ in ${\mathbb{P}}^1(k)$. The natural action of $GL_2(k)$ on
${\mathbb{P}}^1(k)$ translates to a transitive action of $GL_2(k)$ on the set of
isomorphism classes of $2n$-dimensional indecomposable $kV_4$-modules.
The group $GL_2(k)$ can be identified with a group of $k$-algebra automorphisms
of $kV_4$, with $g = \left(\begin{array}{cc} \alpha & \beta\\ \gamma &
\delta\end{array}\right)$
acting on $kV_4$ by $g(x) = \alpha x+\beta y$ and $g(y) = \gamma x+\delta y$.
In this way, any $kV_4$-module $M$ gives rise via restriction along the
automorphism $g$ to a $kV_4$-module ${^gM}$, which
is equal to $M$ as $k$-vector space, with $z\in kV_4$ acting as $g(z)$ on $M$.
This defines another action of $GL_2(k)$ on the set of isomorphism classes of
$2n$-dimensional indecomposable $kV_4$-modules, and, of course, these actions
are compatible: one checks that for $\lambda\in k$ we have
$M^\lambda_n\cong {^g(M^0_n)}$, where
$g = \left(\begin{array}{cc} 1 & 0 \\ \lambda & 1 \end{array}\right)$,
and that $M^\infty_n \cong {^h(M^0_n)}$, where $h = \left(\begin{array}{cc} 0 & 1 \\ 1 &
0 \end{array}\right)$.
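In the smallest case $n=1$ the second of these isomorphisms can be checked directly: in ${^h(M^0_1)}$ the element $x$ acts as $h(x) = y$ and $y$ acts as $h(y) = x$, so on the basis $\{a_1, b_1\}$ of $M^0_1$ we get
$$x\cdot a_1 = 0\ , \qquad y\cdot a_1 = b_1\ , \qquad x\cdot b_1 = 0 = y\cdot b_1\ ,$$
which is precisely the defining action of $M^\infty_1$.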
The point of these remarks is that in order to calculate (stable)
endomorphism algebras of the modules $M^\lambda_n$, identify almost vanishing
morphisms and check their invariance under the Heller operator, it
suffices to consider the case $\lambda = 0$.
We collect some basic facts on endomorphisms of the modules $M_n^\lambda$,
many of which are, of course, well-known (such as the fact that $M_n^\lambda$
has period $1$). In what follows we set $M_n = M_n^0$, for any positive
integer $n$.
\begin{Lemma}\label{zeroendomorphisms}
Let $n$ be a positive integer and denote by $\{a_1,a_2,\ldots,a_n,b_1,b_2,\ldots,b_n\}$ a
$k$-basis
$k$-basis
of $M_n$ such that $xa_i = b_i$ for $1\leq i\leq n$,
$ya_i = b_{i+1}$ for $1\leq i < n$, $ya_n = 0$ and $xb_i = 0= yb_i$
for $1\leq i\leq n$. Let $\varphi\in\mathrm{End}} \def\Endbar{\underline{\mathrm{End}}_{kV_4}(M_n)$.
\smallskip\noindent (i)
We have $\Omega(M_n) \cong M_n$.
\smallskip\noindent (ii)
If $\varphi\in\mathrm{End}_{kV_4}^{pr}(M_n)$ then $\mathrm{Im}(\varphi)\subseteq \mathrm{soc}(M_n)$.
\smallskip\noindent (iii)
If $\mathrm{Im}(\varphi) = k\cdot b_1$ then $\varphi\not\in \mathrm{End}_{kV_4}^{pr}(M_n)$.
\smallskip\noindent (iv) For $1\leq i\leq n$ denote by $\varphi^n_i$ the endomorphism of
$M_n$
sending $a_i$ to $b_1$ and all other basis elements of $M_n$ to zero.
Then the set $\{\underline\varphi^n_i\}_{1\leq i\leq n}$ is linearly
independent in $\Endbar_{kV_4}(M_n)$.
\smallskip\noindent (v)
We have $\dim_k(\mathrm{End}_{kV_4}^{pr}(M_n)) = n^2-n$.
\end{Lemma}
\begin{proof}
For $1\leq i\leq n$ denote by $s_i$ the element in
$(kV_4)^n$ whose $i$-th component is $1$ and whose other components are $0$.
Then $\{s_1,s_2,\ldots,s_n\}$ is a basis of the free $kV_4$-module $(kV_4)^n$.
There is an injective $kV_4$-homomorphism
$$\iota : M_n \longrightarrow (kV_4)^n$$
such that $\iota(a_i) = ys_i+xs_{i+1}$ for $1\leq i < n$ and $\iota(a_n) = ys_n$, and
there is a surjective $kV_4$-homomorphism
$$\pi : (kV_4)^n \longrightarrow M_n$$
such that $\pi(s_i) = a_i$ for $1\leq i\leq n$. One checks that $\mathrm{Im}(\iota) = \mathrm{ker}(\pi)$,
which shows that $\Omega(M_n)
\cong M_n$, whence (i). If $\varphi\in \mathrm{End}_{kV_4}^{pr}(M_n)$,
the inclusion $\mathrm{Im}(\varphi)\subset \mathrm{rad}(M_n)$ is a general fact, and
we have $\mathrm{rad}(M_n) = \mathrm{soc}(M_n)$, which shows (ii). Since $\varphi$ factors through a
projective module, it factors through both
$\iota$ and $\pi$; thus there is $\psi\in\mathrm{End}_{kV_4}((kV_4)^n)$ such that
$\varphi = \pi\circ\psi\circ\iota$. For $1\leq i, j \leq n$ let
$\alpha_{i,j}\in k$ and $r_{i,j}\in \mathrm{rad}(kV_4) = xkV_4+ykV_4$,
viewed as element in the $j$-th component of $(kV_4)^n$, such that
$$\psi(s_i) = \sum_{j=1}^n\ (\alpha_{i,j}\ s_j\ +\ r_{i,j})$$
Since $\mathrm{Im}(\iota) \subset \mathrm{rad}((kV_4)^n)$ one easily sees that
we may assume $r_{i,j} = 0$. Two short verifications show that
$$\varphi(a_n) = \sum_{j=2}^n\ \alpha_{n,j-1}\ b_j$$
and, for $1\leq i < n$,
$$\varphi(a_i) = \alpha_{i+1, 1} b_1\ +\ \sum_{j=2}^n\
(\alpha_{i,j-1}+\alpha_{i+1,j})b_j$$
which also shows again that $\mathrm{Im}(\varphi)\subseteq\mathrm{soc}(M_n)$.
Moreover, this calculation shows that if $\mathrm{Im}(\varphi)\subseteq kb_1$,
then the equation for $\varphi(a_n)$ implies $\alpha_{n,j} = 0$
for $1\leq j < n$, and the equations for the $\varphi(a_i)$ then
imply inductively that $\alpha_{i,j} = 0$ for $1\leq j<i\leq n$, which
in turn yields $\varphi = 0$. This proves (iii). Statement (iv) is an
immediate consequence of (iii). For any pair $(i,j)$, with $1\leq i,j\leq n$
there is a unique endomorphism of $M_n$ sending $a_i$ to $b_j$ and all
other basis elements to zero. Thus the space of endomorphisms of $M_n$
whose image is contained in $\mathrm{soc}(M_n)$ has dimension $n^2$. It follows
from (iv) that $\dim_k(\mathrm{End}_{kV_4}^{pr}(M_n)) \leq n^2-n$.
Setting $\varphi_{i,j} = \pi\circ\psi_{i,j}\circ\iota$, where $\psi_{i,j}$
is the unique endomorphism determined by the coefficients $\alpha_{i,j} = 1$
and $\alpha_{r,s} = 0$ for $(r,s) \neq (i,j)$, an easy verification
shows that the set
$\{\varphi_{i,j}\ |\ 1\leq i\leq n,\ 1\leq j < n\}$ is linearly
independent in $\mathrm{End}_{kV_4}^{pr}(M_n)$, whence the equality in (v).
\end{proof}
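In the smallest nontrivial case $n = 2$ the formulas in the proof specialize to
$$\varphi(a_2)\ =\ \alpha_{2,1}\,b_2\ , \qquad \varphi(a_1)\ =\ \alpha_{2,1}\,b_1\ +\ (\alpha_{1,1}+\alpha_{2,2})\,b_2\ ,$$
so $\mathrm{End}_{kV_4}^{pr}(M_2)$ is spanned by the two endomorphisms $\varphi_{1,1}$ and $\varphi_{2,1}$, in accordance with (v): $\dim_k(\mathrm{End}_{kV_4}^{pr}(M_2)) = 2^2-2 = 2$.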
\begin{Lemma} \label{morphisms}
For any positive integer $n$ denote by
$\{a^n_1,a^n_2,\ldots,a^n_n,b^n_1,b^n_2,\ldots,b^n_n\}$
a $k$-basis of $M_n$ such that $xa^n_i = b^n_i$ for $1\leq i\leq n$,
$ya^n_i=b^n_{i+1}$ for $1\leq i\leq n-1$, $ya^n_n = 0$, and such
that $xb^n_i = 0 = yb^n_i$ for $1\leq i\leq n$.
For any positive integers $m<n$ there is a homomorphism
$\zeta_m^n:M_m\to M_n$ such that $\zeta_m^n(a^m_i) = a^n_{n-m+i}$ for
$1\leq i\leq m$, and a homomorphism $\xi^n_m : M_n\to M_m$ such that
$\xi_m^n(a^n_i)=a^m_i$ for $1\le i\le m$ and $\xi_m^n(a^n_i)=0$ for $m+1\le i\le n$.
Then the sequence
$$\xymatrix{ 0 \ar[r] & M_m \ar[r]^{\zeta_m^n} & M_n \ar[r]^{\xi_{n-m}^n} &
M_{n-m}\ar[r] &0 }$$
is exact. In particular $\zeta_m^n$ is a non-split monomorphism and $\xi_m^n$ is a
non-split epimorphism.
\end{Lemma}
\begin{proof} Straightforward verification.
\end{proof}
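For instance, in the case $m = 1$, $n = 2$ the sequence reads
$$\xymatrix{ 0 \ar[r] & M_1 \ar[r]^{\zeta_1^2} & M_2 \ar[r]^{\xi_1^2} & M_1 \ar[r] & 0 }$$
with $\zeta_1^2(a^1_1) = a^2_2$, hence $\zeta_1^2(b^1_1) = b^2_2$, while $\xi_1^2(a^2_1) = a^1_1$, $\xi_1^2(a^2_2) = 0$, $\xi_1^2(b^2_1) = b^1_1$ and $\xi_1^2(b^2_2) = 0$; the sequence does not split since $M_2$ is indecomposable.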
\begin{Lemma}\label{kV4AuslanderReiten}
Let $n$ be a positive integer. The following hold.
\smallskip\noindent (i)
The endomorphism $\underline\varphi^n_1$ in $\Endbar_{kV_4}(M_n)$ is an almost vanishing
morphism.
\smallskip\noindent (ii)
For any isomorphism $\alpha : M_n\cong\Omega(M_n)$ in $\modbar(kV_4)$
we have $\alpha\circ\underline\varphi^n_1 = $
$\Omega(\underline\varphi^n_1)\circ\alpha$.
\end{Lemma}
\begin{proof}
Since $M_n$ has period $1$, any almost vanishing morphism starting at
$M_n$ is an endomorphism of $M_n$.
Let $\psi:M_n\to M_n$ be a representative of an almost vanishing endomorphism.
We use the notation of the previous lemma, except that we drop the superscripts of basis
elements.
First compose $\psi$ with $\zeta_n^{n+1}$. Given that $\underline\psi$ is almost
vanishing and $\zeta_n^{n+1}$ is not a split monomorphism, $\zeta_n^{n+1}\circ\psi$ has
to be zero in $\modbar(kV_4)$. But any $kV_4$-homomorphism between
indecomposable non projective modules
that factors through a projective module has its image contained in the socle of the
target module. As $\zeta_n^{n+1}$ is a monomorphism, this implies that the image of $\psi$
has to be contained in $\mathrm{soc}(M_n)$.
Using Lemma \ref{zeroendomorphisms} (iv), $\psi$ can be chosen in the $k$-vector space
generated by $\{\varphi^n_i\,|\,1\le i\le n\}$. So there exist $\alpha_i\in k,\,1\le i\le
n$ such that $\psi=\sum_{i=1}^n \alpha_i \varphi^n_i$. The aim is to prove that
$\alpha_i=0$ for all $2\le i\le n$.
To do that we pre-compose $\psi$ with $\zeta_j^n$ where $j=n-i+1$. Again the class of
this composition has to be a zero morphism in $\modbar(kV_4)$.
The proof is by reverse induction on $i$, which amounts to proceeding
by induction over $j=n-i+1$.
If $j=1$ then $\psi\circ\zeta_1^n(a_1)=\alpha_n\varphi_n^n(a_n)=\alpha_n b_1$,
where the notation of basis elements of $M_n$ is as at the beginning of the
proof of Lemma \ref{zeroendomorphisms}. Pre-composing with $\xi_1^n$ one gets an
endomorphism $\psi\circ\zeta_1^n\circ\xi_1^n$ of $M_n$ that is equal to
$\alpha_n\varphi^n_1$. Since the class of this morphism has to be $0$ in $\modbar(kV_4)$
we get that $\alpha_n=0$.
Suppose now that $\alpha_i=0$ for all $i>n-j+1$. Pre-compose $\psi$ with $\zeta_j^n$; as
before the class of $\psi\circ\zeta_j^n$ in $\modbar(kV_4)$ is zero. By direct
computation $\psi\circ\zeta_j^n(a_1)=$ $\alpha_{n-j+1}\varphi^n_{n-j+1}(a_{n-j+1})=$
$\alpha_{n-j+1} b_1$ and $\psi\circ\zeta_j^n(a_i)=$
$\alpha_{n-j+1}\varphi^n_{n-j+i}(a_{n-j+1})=0$. Pre-compose now with $\xi_j^n$ and get an
endomorphism $\psi\circ\zeta_j^n\circ\xi_j^n$ of $M_n$ that is equal to
$\alpha_{n-j+1}\varphi^n_1$. The class of the latter morphism is zero in $\modbar(kV_4)$
if and only if $\alpha_{n-j+1}=0$.
The induction procedure stops at $j=n-1$, or equivalently, at $i = 2$,
whence (i). In order to prove (ii), note first that if the statement holds
for some isomorphism $\alpha$ then it holds for any such isomorphism because
$\mathrm{End}_{kV_4}(M_n)$ is split local and $\underline\varphi^n_1$ is
almost vanishing. We use the notation introduced at the beginning of
the proof of Lemma \ref{zeroendomorphisms}; in particular, we have an injective
homomorphism $\iota : M_n\to (kV_4)^n$ and a surjective homomorphism
$\pi : (kV_4)^n\to M_n$. Define $\theta:(kV_4)^n\to(kV_4)^n$ by $\theta(s_1)=xs_1$ and
$\theta(s_j)=0$ for $2\le j\le n$. A straightforward verification shows that the
following diagram commutes:
$$\xymatrix{
0\ar[r] & M_n\ar[r]^\iota\ar[d]_{\varphi^n_1} &(kV_4)^n\ar[r]^\pi\ar[d]_\theta
&M_n\ar[r]\ar[d]_{\varphi^n_1} &0\\
0\ar[r] & M_n\ar[r]^\iota &(kV_4)^n\ar[r]^\pi &M_n\ar[r]
&0
}$$
This shows that for some, and hence any, identification $M_n=\Omega(M_n)$
we have $\Omega(\underline\varphi^n_1) = \underline\varphi^n_1$, which completes
the proof.
\end{proof}
In conjunction with the remarks at the beginning of this section, this shows
that for any even dimensional indecomposable non projective $kV_4$-module $M^\lambda_n$
there exists, up to multiplication by a scalar, a unique element of the graded center in
any degree that is an almost vanishing endomorphism of $M^\lambda_n$ and zero on all
indecomposable modules not isomorphic to $M^\lambda_n$. Now let $\eta$ be a natural
transformation belonging to the kernel ${\mathcal{I}}$
of the canonical evaluation map $Z^*(\modbar(kV_4))\to \hat H^*(V_4;k)$.
Then $\eta$ is zero when evaluated at the trivial module, hence zero at any odd
dimensional indecomposable $kV_4$-module, as these are precisely the $\Omega$-orbit of
$k$, up to isomorphism. In order to conclude the
proof of Theorem \ref{kleinfour} we need to show that for any even-dimensional
indecomposable $kV_4$-module $M$ the evaluation $\eta(M)$ is either zero
or almost vanishing. Thanks to the remarks at the beginning of this section, it
suffices to show this for the modules $M_n$, for $n$ a positive integer,
and this is what the following lemma does:
\begin{Lemma}
Let $\eta\in{\mathcal{I}}$ and $n$ a positive integer.
Then $\eta(M_n)$ is a scalar multiple of $\underline\varphi^n_1$.
\end{Lemma}
\begin{proof}
The proof is by induction on $n$.
For $n=1$ the assertion is trivial. Suppose that $n>1$ and that $\eta(M_l)$ is a
multiple of $\underline\varphi^l_1$ for all $1\le l<n$. By Lemma
\ref{zeroendomorphisms}, there are coefficients $\alpha_i\in k$ such that
$\eta(M_n)=\sum_{i=1}^n \alpha_i \underline\varphi^n_i$. We prove that, for any
$k$ such that $n\geq k>1$, if $\eta(M_n) = \sum_{i=1}^k \alpha_i \underline\varphi^n_i$
then $\alpha_k=0$.
The following diagram in $\modbar(kV_4)$ is commutative:
$$\xymatrix{
M_n\ar[r]^{\underline\xi_{n-k+1}^n\,\,}\ar[d]_{\eta(M_n)} &
M_{n-k+1}\ar[d]^{\,\eta(M_{n-k+1})}\\
M_n\ar[r]_{\underline\xi_{n-k+1}^n\,\,} & M_{n-k+1}
}$$
By induction $\eta(M_{n-k+1})$ is an almost vanishing morphism or zero, so on one hand,
$\eta(M_{n-k+1})\circ\underline\xi_{n-k+1}^n$ is zero in $\modbar(kV_4)$. On the other
hand,
an easy verification shows that
$$\underline\xi_{n-k+1}^n\circ\eta(M_n)\circ\underline\zeta_{n-k+1}^n \ =\
\alpha_k\underline\varphi^{n-k+1}_1.$$ This forces $\alpha_k= 0$, whence the result.
\end{proof}
Combining the above lemmas yields a proof of Theorem \ref{kleinfour}.
\section{Rank 1}
For completeness we include some remarks on the graded center
of the stable category of a finite $p$-group $P$ of
rank $1$. If $k$ is a field of odd characteristic $p$
then $P$ is cyclic, the algebra $kP$ has finite
representation type, and the structure of $\modbar(kP)$ is
calculated in \cite{KrYe} and \cite{KeLi}. If $p=2$
and $Q$ is a non cyclic finite $2$-group of rank $1$ then
$Q$ is a generalised quaternion group.
\begin{Theorem} \label{Qstablecenter}
Let $k$ be an algebraically closed field of characteristic $2$ and
let $Q$ be a generalised quaternion $2$-group.
The kernel ${\mathcal{I}}$ of the canonical map $Z^*(\modbar(kQ)) \longrightarrow \hat H^*(Q;k)$
given by evaluation at the trivial $kQ$-module is infinite dimensional in odd degrees.
\end{Theorem}
The idea of the proof of Theorem \ref{Qstablecenter} is to reduce this to the
case where $Q = Q_8$ is quaternion of order $8$, and then to show that the
$2$-dimensional modules over the Klein four quotient group $V_4$ of $Q_8$ yield infinitely
many isomorphism classes of $kQ_8$-modules of period $2$ with the property that
$\Omega^2$ acts trivially on the almost vanishing morphisms starting at these
modules; this implies that
the almost vanishing morphisms determine infinitely many linearly independent
elements in each odd degree of the graded center of the
stable module category. In what follows, $k$ is a (not necessarily algebraically closed)
field of characteristic $2$ and $Q_8$ a quaternion group of order $8$, with generators
$s$, $t$ of order $4$ satisfying $s^2 = t^2$ and $tst^{-1} = s^{-1}$. Set $z = s^2$; this
is the unique (and hence central) involution of $Q_8$. We start with a few elementary
observations.
\begin{Lemma} \label{kQ8center}
We have $kQ_8(z-1)\subseteq Z(kQ_8)$.
\end{Lemma}
\begin{proof}
We have $s(z-1) = s^3 + s$; this is the conjugacy class sum of $s$, hence
contained in $Z(kQ_8)$. The same argument applies to all elements of order
$4$ in $Q_8$, whence the result.
\end{proof}
\begin{Lemma} \label{cyclicshifted4}
Let $(\lambda,\mu)\in k^2$ and set $x = \lambda (s-1) + \mu (t-1)$.
Suppose that $\lambda^2+\lambda\mu+\mu^2\neq 0$.
Then $1+x$ is invertible of order $4$ in $(kQ_8)^\times$, and $kQ_8$ is
projective as left or right $k\langle 1+x \rangle$-module.
\end{Lemma}
\begin{proof}
We have $x^2 = (\lambda^2+\mu^2)(z-1) + \lambda\mu(s-1)(t-1) + \lambda\mu(t-1)(s-1)$. A
short calculation shows that
$(s-1)(t-1)+(t-1)(s-1) = st(z-1)$. This is an element in $Z(kQ_8)$, by
Lemma \ref{kQ8center}. Thus $x^2 = u(z-1)$, where $u = (\lambda^2+\mu^2)\cdot 1 +
\lambda\mu\cdot(st)$. The hypotheses on
$\lambda$ and $\mu$ imply that $u$ is invertible. Thus $\langle 1+x\rangle$
has order $4$ and Lemma \ref{cyclicmodify}
implies that $kQ_8$ is projective as $k\langle 1+x^2\rangle$-module. Since
$\langle 1+x^2\rangle$ is the unique subgroup of order $2$ of $\langle 1+x \rangle$
it follows that $kQ_8$ is projective as left or right $k\langle 1+x\rangle$-module.
\end{proof}
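The two identities used in the last two proofs are concrete enough to be checked mechanically. The following is a small sketch of my own (not part of the argument): it models the group algebra $kQ_8$ over $k=\mathrm{GF}(2)$, writing group elements as $s^at^b$ with $a$ mod $4$, $b$ mod $2$, subject to $t^2=s^2$ and $tst^{-1}=s^{-1}$, and verifies $(s-1)(t-1)+(t-1)(s-1)=st(z-1)$, the centrality of $s(z-1)$, and $x^2=u(z-1)$ for the sample point $(\lambda,\mu)=(1,1)$, where $u=st$.

```python
# Sanity-check sketch (my own, not from the paper): the group algebra kQ_8
# over GF(2). Group elements are pairs (a, b) meaning s^a t^b; algebra
# elements are dicts mapping group elements to coefficients in GF(2).

def gmul(g, h):
    (a, b), (c, d) = g, h
    e = a + (c if b == 0 else -c)   # t^b s^c = s^{(-1)^b c} t^b
    f = b + d
    if f == 2:                      # relation t^2 = s^2
        e, f = e + 2, 0
    return (e % 4, f % 2)

def amul(u, v):
    """Product in kQ_8 (convolution of coefficient dicts, mod 2)."""
    w = {}
    for g, cg in u.items():
        for h, ch in v.items():
            p = gmul(g, h)
            w[p] = (w.get(p, 0) + cg * ch) % 2
    return {g: c for g, c in w.items() if c}

def aadd(u, v):
    w = dict(u)
    for g, c in v.items():
        w[g] = (w.get(g, 0) + c) % 2
    return {g: c for g, c in w.items() if c}

one, s, t = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}
z, st = amul(s, s), amul(s, t)          # z = s^2, the central involution
sm1, tm1, zm1 = aadd(s, one), aadd(t, one), aadd(z, one)   # s-1, t-1, z-1

# (s-1)(t-1) + (t-1)(s-1) = st(z-1)
assert aadd(amul(sm1, tm1), amul(tm1, sm1)) == amul(st, zm1)

# s(z-1) = s^3 + s is central: it commutes with the generator t
szm1 = amul(s, zm1)
assert amul(szm1, t) == amul(t, szm1)

# For (lambda, mu) = (1, 1): x = (s-1)+(t-1) = s+t and u = st, so x^2 = st(z-1)
x = aadd(sm1, tm1)
assert amul(x, x) == amul(st, zm1)
```

Over $\mathrm{GF}(2)$ every nonzero pair $(\lambda,\mu)$ satisfies the hypothesis $\lambda^2+\lambda\mu+\mu^2\neq 0$, so $(1,1)$ is a legitimate sample point.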
The hypothesis $\lambda^2+\lambda\mu+\mu^2\neq 0$ in the previous lemma
means that at least one of $\lambda$, $\mu$ is nonzero, and if they are both
nonzero then $\frac{\lambda}{\mu}$ is not a cube root of unity.
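This reading of the hypothesis can be illustrated over $\mathrm{GF}(4)$, the smallest field of characteristic $2$ containing primitive cube roots of unity. The sketch below is my own illustration (not part of the proof): it models $\mathrm{GF}(4)$ as pairs $(a,b)=a+bw$ over $\mathrm{GF}(2)$ with $w^2=w+1$, and checks by brute force that $\lambda^2+\lambda\mu+\mu^2$ vanishes exactly at $(0,0)$ and at the pairs where $\lambda/\mu$ is a primitive cube root of unity.

```python
# Illustration sketch (my own): the vanishing locus of q(l,m) = l^2+lm+m^2
# over GF(4), modelled as pairs (a, b) = a + b*w with w^2 = w + 1.

def f4mul(x, y):
    (a, b), (c, d) = x, y
    # (a+bw)(c+dw) = ac + bd*w^2 + (ad+bc)w, with w^2 = w + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def f4add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def q(l, m):
    return f4add(f4add(f4mul(l, l), f4mul(l, m)), f4mul(m, m))

GF4 = [(a, b) for a in (0, 1) for b in (0, 1)]
zero, w = (0, 0), (0, 1)
w2 = f4mul(w, w)                    # w^2 = w + 1, the other primitive cube root

zeros = {(l, m) for l in GF4 for m in GF4 if q(l, m) == zero}

# Expected locus: (0,0) together with (c*w, c) and (c*w^2, c) for c nonzero,
# i.e. the pairs with l/m a primitive cube root of unity
expected = {(zero, zero)} | {(f4mul(c, r), c)
                             for c in GF4 if c != zero for r in (w, w2)}
assert zeros == expected            # 7 of the 16 pairs in total
```

In particular, none of the $\mathrm{GF}(2)$-rational pairs other than $(0,0)$ lies in the locus, consistent with the remark that over the prime field the only obstruction is $\lambda=\mu=0$.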
\begin{Proposition} \label{Q8period2}
Let $(\lambda,\mu)\in k^2$ and set $x = \lambda (s-1) + \mu (t-1)$.
Suppose that $\lambda^2+\lambda\mu+\mu^2\neq 0$.
Set $M = kQ_8(z-1)x$ and $N = kQ_8x$. Then $M$, $N$ are absolutely indecomposable, and
the following hold.
\smallskip\noindent (i) We have isomorphisms
$\alpha : M \cong \Omega^2(M)$ and $\beta : \Omega(M) \cong N$
in $\modbar(kQ_8)$.
\smallskip\noindent (ii) We have
$M \subseteq N^{\langle z\rangle} = kQ_8(z-1)$.
\smallskip\noindent (iii)
For any $\varphi\in\mathrm{Hom}_{kQ_8}(M,N)$ there is
an element $c\in kQ_8$ such that $\varphi(m) = cm$ for all $m\in M$;
in particular, $\mathrm{Im}(\varphi)\subseteq M$.
\smallskip\noindent (iv) Set $\gamma = \Omega(\beta)\circ\Omega(\alpha)\circ\beta^{-1} :
N\cong \Omega^2(N)$. Then, for any $\varphi\in\mathrm{Hom}_{kQ_8}(M,N)$ we have
$\Omega^2(\underline\varphi) \circ \alpha = \gamma\circ\underline\varphi$.
\end{Proposition}
\begin{proof} The modules $M$, $N$ are cyclic modules over the split local algebra
$kQ_8$, hence absolutely indecomposable.
Since $x^2 = u(z-1)$ for some $u\in (kQ_8)^\times$ we have
$x^2(z-1) = 0$. Thus $N = kQ_8x$ is contained in the kernel of the surjective
homomorphism $kQ_8\rightarrow M$ given by right multiplication with $(z-1)x$, hence equal
to this kernel because both have dimension $6$. The same argument
shows that the endomorphism $kQ_8 \rightarrow kQ_8$ given by right multiplication with
$x$ has kernel equal to $M$. This proves that for this choice of projective covers of $M$
and $N$ we get $\Omega(M) = N$ and
$\Omega^2(M) = M$, whence (i).
The inclusion $M\subseteq N$ is trivial. Since $z-1$ annihilates $M$, we have $M
\subseteq N^{\langle z\rangle} \subseteq kQ_8(z-1)$, where the
second inclusion uses the fact that $kQ_8$ is projective as right $k\langle
z\rangle$-module. Clearly also
$\mathrm{Im}(\varphi) \subseteq N^{\langle z\rangle}$. Now $kQ_8$ is projective as a right
$k\langle 1+x\rangle$-module by Lemma \ref{cyclicshifted4}, thus $kQ_8$ is free of rank $2$ as
a $k\langle 1+x \rangle$-module. It follows that $kQ_8x$, when
viewed as right $k\langle 1+x\rangle$-module, is isomorphic to the direct sum
of two copies of the unique (up to isomorphism) $3$-dimensional
$k\langle 1+x\rangle$-module. The restriction of this $3$-dimensional module
to $\langle 1+x^2\rangle$ is of the form $k\oplus k\langle 1+x^2\rangle$, so
the space annihilated by $x^2$ has dimension $2 + 2 = 4$. Since $x^2 = u(z-1)$
for some invertible element $u$ it follows that the subspace of $N$
annihilated by $z-1$ has also dimension $4$, hence is equal to $kQ_8(z-1)$.
This proves (ii).
Let $\varphi\in\mathrm{Hom}_{kQ_8}(M,N)$.
It follows from (ii) that $\mathrm{Im}(\varphi) \subseteq kQ_8(z-1)$.
Thus we may view $\varphi$ as a homomorphism from the submodule $M$ of
$kQ_8(z-1)$ to $kQ_8(z-1)$. Both
modules are annihilated by $z-1$, so we may view this as a homomorphism of
$kV_4$-modules, where $V_4 = Q_8/\langle z\rangle$. But since $M$ is
a submodule of $kQ_8(z-1)\cong kV_4$ as $kV_4$-modules and since
$kV_4$ is commutative it follows that $\varphi$ is actually induced
by {\it left} multiplication with an element in $kV_4$. Pulling this back
to $kQ_8$ shows that $\varphi$, as a homomorphism of $kQ_8$-modules, is induced
by left multiplication on $M$ with some element $c\in kQ_8$, composed
with the inclusion $M\subseteq N$. This proves (iii). In order to prove (iv), we choose
notation such that $\alpha$, $\beta$ are
equalities (that is, we identify $N = \Omega(M)$ and $M = \Omega^2(M)$
in the obvious way described at the beginning of this proof), and consider a commutative
diagram of $kQ_8$-modules
of the form
$$\xymatrix{
0\ar[r] & M\ar[r]^\iota\ar[d]_{\varphi'} &kQ_8\ar[r]^{x}\ar[d]_{v}
&kQ_8\ar[r]^{x(z-1)}\ar[d]_{w} &M\ar[r]\ar[d]_{\varphi} &0\\
0\ar[r] & N\ar[r]_\kappa &kQ_8\ar[r]_{x(z-1)} &kQ_8\ar[r]_{x} &N\ar[r]
&0
}$$
where $\iota$, $\kappa$ are the inclusions, $v$, $w$ are suitable elements in $kQ_8$ and
where
an arrow labelled by an element in $kQ_8$ means the homomorphism induced by right
multiplication with that element. We need to show $\underline\varphi' =
\underline\varphi$. For $a\in kQ_8$, the commutativity
of the right square in this diagram means that $\varphi(ax(z-1)) = awx$.
The commutativity
of the middle square is equivalent to $axw = avx(z-1)$, hence $xw = vx(z-1)$. The
commutativity of the left square implies that $\varphi'(ax(z-1)) = $ $ax(z-1)v =$
$avx(z-1) =axw$, where we used that $x(z-1)\in Z(kQ_8)$ by Lemma \ref{kQ8center}. In
order to show that $\varphi=\varphi'$
it suffices to show that $w$ can be chosen in $Z(kQ_8)$. By (iii) there is an element
$c\in kQ_8$ such that
$\varphi'(x(z-1)) = cx(z-1) = xc(z-1)$, and hence we may take
$w = c(z-1)$, which is an element in $Z(kQ_8)$ by Lemma \ref{kQ8center}.
\end{proof}
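The dimension count in the first part of the proof ($\dim N = 6$, and hence $\dim M = 2$, with $\dim kQ_8(z-1) = 4$) can also be cross-checked computationally. The following is my own sketch, again over $k=\mathrm{GF}(2)$ with the sample point $(\lambda,\mu)=(1,1)$, so $x=(s-1)+(t-1)=s+t$: it computes the ranks of right multiplication by $x$, $(z-1)x$ and $z-1$ on $kQ_8$, whose images are $N$, $M$ and $kQ_8(z-1)$ respectively.

```python
# Cross-check sketch (my own, not from the paper): dimensions of
# N = kQ_8*x, M = kQ_8*(z-1)*x and kQ_8*(z-1) over GF(2), for x = s + t.
# Group elements are pairs (a, b) meaning s^a t^b in Q_8.

def gmul(g, h):
    (a, b), (c, d) = g, h
    e = a + (c if b == 0 else -c)   # t^b s^c = s^{(-1)^b c} t^b
    f = b + d
    if f == 2:                      # relation t^2 = s^2
        e, f = e + 2, 0
    return (e % 4, f % 2)

def amul(u, v):
    w = {}
    for g, cg in u.items():
        for h, ch in v.items():
            p = gmul(g, h)
            w[p] = (w.get(p, 0) + cg * ch) % 2
    return {g: c for g, c in w.items() if c}

def aadd(u, v):
    w = dict(u)
    for g, c in v.items():
        w[g] = (w.get(g, 0) + c) % 2
    return {g: c for g, c in w.items() if c}

elems = [(a, b) for b in (0, 1) for a in range(4)]   # a basis of kQ_8
idx = {g: i for i, g in enumerate(elems)}

def right_mult_rows(v):
    """Matrix rows of right multiplication by v on kQ_8, over GF(2)."""
    rows = []
    for g in elems:
        row = [0] * 8
        for h, c in v.items():
            j = idx[gmul(g, h)]
            row[j] = (row[j] + c) % 2
        rows.append(row)
    return rows

def rank_gf2(rows):
    rows, r = [list(row) for row in rows], 0
    for col in range(8):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

one, s, t = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}
z = amul(s, s)
x = aadd(aadd(s, one), aadd(t, one))    # x = (s-1)+(t-1) = s + t over GF(2)
zm1 = aadd(z, one)                      # z - 1
zx = amul(zm1, x)                       # (z-1)x = s^3 + s^2 t + s + t

dimN = rank_gf2(right_mult_rows(x))     # N is the image of right mult by x
dimM = rank_gf2(right_mult_rows(zx))    # M is the image of right mult by (z-1)x
dimI = rank_gf2(right_mult_rows(zm1))   # kQ_8(z-1), isomorphic to kV_4
assert (dimM, dimN, dimI) == (2, 6, 4)
```

This matches the identifications in the proof: the projective cover $kQ_8\to M$ has kernel $N$ of dimension $6$, so $\dim M = 8 - 6 = 2$, while $kQ_8(z-1)\cong kV_4$ has dimension $4$.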
\begin{proof} [Proof of Theorem \ref{Qstablecenter}]
We use the notation and assumptions of Theorem \ref{Qstablecenter};
in particular, $k$ is now algebraically closed.
We consider first the case where $Q = Q_8$.
It follows from Proposition \ref{isomod} applied to the Klein four group
$V_4 = Q_8/\langle z\rangle$ and from Proposition \ref{Q8period2} (i)
that if $(\lambda,\mu)\in k^2$ runs over a set of
representatives of elements in ${\mathbb{P}}^1(k)$ satisfying $\lambda^2+\lambda\mu+\mu^2\neq 0$
then $M_{\lambda,\mu} = kQ_8(z-1)(\lambda (s-1)+\mu (t-1))$ runs over pairwise
nonisomorphic indecomposable $kQ_8$-modules of period $2$.
It follows from Proposition \ref{Q8period2} (iv) that the almost vanishing morphisms
starting at these modules determine elements in the graded center of the stable module
category of $kQ_8$ in any odd degree, whence the result in the case $Q = Q_8$. For $Q$ a
generalised quaternion $2$-group we may identify $Q_8$ with a subgroup of $Q$, and then the
induction functor $\mathrm{Ind}^Q_{Q_8}$ sends the indecomposable modules of period $2$
considered in Proposition \ref{Q8period2} to indecomposable $kQ$-modules of period $2$.
Moreover, it follows from Proposition \ref{Q8period2} (iv) in conjunction with Lemma
\ref{Omegainvariance}
that the almost vanishing morphisms starting at these modules are
again invariant under $\Omega_{kQ}^2$, hence define elements in the
graded center of the stable category of $kQ$-modules.
\end{proof}