Eufisky - The lost book

Good problems from foreign forums

Sylvester equation: https://en.wikipedia.org/wiki/Sylvester_equation (relevant to the second problem below)
Find the maximum value of $S=x_1x_2+x_2x_3+\cdots+x_nx_1$ on the domain
$$x_1+x_2+\cdots+x_n=0 \qquad\text{and}\qquad x_1^2+x_2^2+\cdots+x_n^2=1.$$
 
Addendum: this is the Ky Fan-Taussky-Todd inequality; see "Converses of two inequalities of Ky Fan, O. Taussky, and J. Todd", as well as here.
 
I have made some trivial observations:
 
1) $S\in[-1,1)$ by the rearrangement inequality.
 
2) We can make $S$ arbitrarily close to $1$ by increasing $n$.
 
3) An equivalent problem is to minimise $(x_1-x_2)^2+\cdots+(x_n-x_1)^2$.
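
Indeed, expanding the squares (indices mod $n$) and using both constraints gives
$$\sum_j(x_j-x_{j+1})^2=\sum_j x_j^2-2\sum_j x_jx_{j+1}+\sum_j x_{j+1}^2=2-2S\ ,$$
so maximising $S$ is the same as minimising the cyclic sum of squared differences.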
 
But does the maximum have a meaningful closed form  for each $n$?
I propose to do a discrete Fourier transform. To this end put $\omega:=e^{2\pi i/n}$. In the following  all sums are over ${\mathbb Z}_n$, unless indicated otherwise. Let
$$y_k:={1\over n}\sum_l x_l\omega^{-kl}\ .$$
Since the $x_l$ are real we have $$y_{-k}=\overline{y_k}\tag{1}$$ for all $k$, furthermore $y_0={1\over n}\sum_lx_l=0$. One has Parseval's formula
$$n\sum_k y_k\overline{ y_k}=\sum_l x_l^2=1\tag{2}$$
and the inversion formula
$$x_j=\sum_k y_k\omega^{jk}\ .\tag{3}$$
Using $(3)$ one computes
$$S=\sum_j x_jx_{j+1}=\ldots=n\sum_k|y_k|^2\omega^k\ .\tag{4}$$
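For completeness, the computation elided in $(4)$ combines $(3)$ with the orthogonality relation $\sum_j\omega^{j(k+k')}=n$ when $k'\equiv-k$ (mod $n$) and $0$ otherwise:
$$S=\sum_j x_jx_{j+1}=\sum_{k,k'}y_ky_{k'}\,\omega^{k'}\sum_j\omega^{j(k+k')}=n\sum_k y_ky_{-k}\,\omega^{-k}\ .$$
By $(1)$ one has $y_ky_{-k}=|y_k|^2$, and replacing $k$ by $-k$ in the sum over ${\mathbb Z}_n$ gives $n\sum_k|y_k|^2\omega^k$.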
At this point we have to distinguish the cases (a) $n=2m$, and (b) $n=2m+1$.
 
(a) If $n=2m$ then $\omega^m=-1$, and $(4)$ gives
$$\eqalign{S&=n\left(\sum_{k=1}^{m-1}|y_k|^2(\omega^k+\omega^{-k}) \ +|y_m|^2\omega^m\right)\cr
&=n\left(\sum_{k=1}^{m-1}|y_k|^2\>2\cos{2k\pi\over n} \ -|y_m|^2\right)\ .\cr}$$
Given the conditions $(1)$ and $(2)$ it is easily seen that the optimal admissible choice of the $y_k$ is $$y_1=y_{-1}={1\over\sqrt{2n}}\>,\qquad y_k=0\quad(k\ne\pm1)\ .\tag{5}
$$  This leads to
$$S_{\rm opt}=\cos{2\pi\over n}\ .$$
In particular when $n=4$ one obtains $S_{\rm opt}=0$, as indicated in Zubzub's answer.
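Transforming the choice $(5)$ back with the inversion formula $(3)$ gives the explicit maximiser (it works in case (b) below as well)
$$x_j={1\over\sqrt{2n}}\left(\omega^j+\omega^{-j}\right)=\sqrt{2\over n}\,\cos{2\pi j\over n}\ ,$$
which for $n\ge3$ indeed satisfies $\sum_j x_j=0$, $\sum_j x_j^2=1$, and $\sum_j x_jx_{j+1}=\cos{2\pi\over n}$.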
 
(b) If $n=2m+1$ then $(4)$ gives
$$S=2n\sum_{k=1}^m |y_k|^2\cos{2k\pi\over n}\ ,$$
and the choice $(5)$ leads again to
$$S_{\rm opt}=\cos{2\pi\over n}\ .$$

In particular when $n=3$ one obtains $S_{\rm opt}=-{1\over2}$, and when $n=5$ one obtains $S_{\rm opt}=\cos{2\pi\over5}\doteq0.309$, as indicated in Zubzub's answer.
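
As a numerical sanity check, here is a minimal sketch (not from the original answer; the function name is mine). It recasts the problem as maximising the quadratic form $x^{\mathsf T}Cx$ of the symmetric cyclic-adjacency matrix $C$ over the unit sphere inside the hyperplane $\sum_j x_j=0$; the all-ones vector is the eigenvector of $C$ for the top eigenvalue $1$, so the constrained maximum is the second-largest eigenvalue of $C$.

```python
import numpy as np

def s_max(n):
    """Maximum of S = sum_j x_j x_{j+1} (indices mod n) subject to
    sum_j x_j = 0 and sum_j x_j^2 = 1, computed as the second-largest
    eigenvalue of the cyclic-adjacency matrix C with x^T C x = S."""
    C = np.zeros((n, n))
    for j in range(n):
        C[j, (j + 1) % n] += 0.5
        C[(j + 1) % n, j] += 0.5
    return np.sort(np.linalg.eigvalsh(C))[-2]

for n in (3, 4, 5, 12):
    print(n, s_max(n), np.cos(2 * np.pi / n))
```

For $n=3,4,5$ the two printed columns agree with the values $-{1\over2}$, $0$ and $\cos{2\pi\over5}\doteq0.309$ quoted above.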

See: https://math.stackexchange.com/questions/2213960/optimising-x-1x-2x-2x-3-cdotsx-nx-1-given-certain-constraints


Let $$
A=\left[ \begin{matrix}
A_1& A_3\\
0& A_2\\
\end{matrix} \right]\quad \text{and} \quad B=\left[ \begin{array}{c}
B_1\\
B_2\\
\end{array} \right],
$$and for any eigenvalue $s$ of $A$, we have
$$
\mathrm{rank}\,\left[ A-sI_n,B \right] =n.
$$
Prove that there exist a real matrix $K\in \mathbb{R}^{r\times n}$ and an invertible matrix $T\in \mathbb{R}^{n\times n}$ such that
$$
T\left( A+BK \right) T^{-1}=\left[ \begin{matrix}
A_1& 0\\
0& \bar{A}_2\\
\end{matrix} \right] ,\qquad TB=\left[ \begin{array}{c}
\bar{B}_1\\
B_2\\
\end{array} \right],
$$
where $A\in \mathbb{R}^{n\times n}$ and $B\in \mathbb{R}^{n\times r}$. Here $\bar{A}_2$ and $\bar{B}_1$ are real matrices of the appropriate dimensions, and $\bar{A}_2$ and $A_1$ have no eigenvalues in common.
 
This problem is about controllability of linear systems and pole assignment, so I think we might try the controllable canonical form, or write $K=(K_1,K_2)$ and then determine $K_1$, $K_2$, and $T$, but it seems difficult.

The statement about the rank of $\left[A - s\,I, B\right]$ implies that the pair $(A,B)$ is [controllable](https://en.wikipedia.org/wiki/Hautus_lemma). Therefore the poles of the closed-loop matrix can be placed anywhere, and the similarity transformation should allow us to separate the eigenvalues/modes and thus achieve the stated goal
 
$$
T \left(A + B\,K\right) T^{-1} = 
\begin{bmatrix}
A_1 & 0 \\ 0 & \bar{A}_2
\end{bmatrix}, \quad 
T\,B = 
\begin{bmatrix}
\bar{B}_1 \\ B_2
\end{bmatrix},
$$
 
given that
 
$$
A = 
\begin{bmatrix}
A_1 & A_3 \\ 0 & A_2
\end{bmatrix}, \quad
B = 
\begin{bmatrix}
B_1 \\ B_2
\end{bmatrix}.
$$
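
As a side check (not part of the argument), the rank condition above, i.e. the Hautus test, is easy to verify numerically for any concrete pair of matrices; a minimal sketch with numpy:

```python
import numpy as np

def hautus_controllable(A, B, tol=1e-9):
    """Hautus test: (A, B) is controllable iff
    rank([A - s*I, B]) == n for every eigenvalue s of A."""
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([A - s * np.eye(n), B]), tol=tol) == n
        for s in np.linalg.eigvals(A)
    )
```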
 
To solve this it is easier to split the problem into smaller subproblems. To that end I will partition $T$ and $K$ as
 
$$
T = 
\begin{bmatrix}
T_1 & T_2 \\ T_3 & T_4
\end{bmatrix}, \quad
K = 
\begin{bmatrix}
K_1 & K_2
\end{bmatrix}.
$$
 
The last goal requires
 
$$
T\,B = 
\begin{bmatrix}
T_1\,B_1 + T_2\,B_2 \\ T_3\,B_1 + T_4\,B_2
\end{bmatrix} = 
\begin{bmatrix}
\bar{B}_1 \\ B_2
\end{bmatrix}.
$$
 
The bottom half of this goal might have infinitely many solutions if there is any overlap between the spans of $B_1$ and $B_2$. But since the problem should be solvable in general, the choice $T_3=0$ and $T_4=I$ should always work. With this choice the inverse of $T$ can be shown to be
 
$$
T^{-1} = 
\begin{bmatrix}
T_1 & T_2 \\ 0 & I
\end{bmatrix}^{-1} = 
\begin{bmatrix}
T_1^{-1} & -T_1^{-1}\,T_2 \\ 0 & I
\end{bmatrix}.
$$
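
This inverse can be checked directly by block multiplication:

$$
\begin{bmatrix}
T_1 & T_2 \\ 0 & I
\end{bmatrix}
\begin{bmatrix}
T_1^{-1} & -T_1^{-1}\,T_2 \\ 0 & I
\end{bmatrix} =
\begin{bmatrix}
T_1\,T_1^{-1} & -T_1\,T_1^{-1}\,T_2 + T_2 \\ 0 & I
\end{bmatrix} =
\begin{bmatrix}
I & 0 \\ 0 & I
\end{bmatrix}.
$$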
 
Since nothing is specified about $\bar{B}_1$, the blocks $T_1$ and $T_2$ can be anything for now, as long as $T_1$ is invertible. The left-hand side of the first goal can now be written as
 
$$
T \left(A + B\,K\right) T^{-1} = 
\begin{bmatrix}
T_1\,A_1\,T_1^{-1} + \bar{B}_1\,K_1\,T_1^{-1} & T_1\,A_3 + T_2\,A_2 - T_1\,A_1\,T_1^{-1}\,T_2 + \bar{B}_1\left(K_2 - K_1\,T_1^{-1}\,T_2\right) \\
B_2\,K_1\,T_1^{-1} & A_2 + B_2\left(K_2 - K_1\,T_1^{-1}\,T_2\right)
\end{bmatrix}.
$$
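
For reference, the intermediate steps of this expansion (using $T_3=0$, $T_4=I$ and $\bar{B}_1 = T_1\,B_1 + T_2\,B_2$) are

$$
A + B\,K =
\begin{bmatrix}
A_1 + B_1\,K_1 & A_3 + B_1\,K_2 \\
B_2\,K_1 & A_2 + B_2\,K_2
\end{bmatrix},
\quad
T\left(A + B\,K\right) =
\begin{bmatrix}
T_1\,A_1 + \bar{B}_1\,K_1 & T_1\,A_3 + T_2\,A_2 + \bar{B}_1\,K_2 \\
B_2\,K_1 & A_2 + B_2\,K_2
\end{bmatrix},
$$

and the expression above follows by multiplying on the right by $T^{-1}$.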
 
The top-left and bottom-left blocks can be made equal to the goal by using $T_1=I$ and $K_1=0$. With this choice (so that $\bar{B}_1 = B_1 + T_2\,B_2$), the top-right and bottom-right blocks of the first goal equation simplify to
 
$$
\begin{bmatrix}
A_3 + T_2\,A_2 - A_1\,T_2 + \bar{B}_1\,K_2 \\
A_2 + B_2\,K_2
\end{bmatrix} = 
\begin{bmatrix}
0 \\
\bar{A}_2
\end{bmatrix}.
$$
 
For the bottom half of this equation you can use a pole placement algorithm to find $K_2$ such that none of the poles of $\bar{A}_2 = A_2 + B_2\,K_2$ coincide with the eigenvalues of $A_1$. This is possible because the pair $(A_2,B_2)$ is itself controllable: the bottom block rows of $\left[A - s\,I, B\right]$ are $\left[0,\ A_2 - s\,I,\ B_2\right]$, so the rank condition forces $\left[A_2 - s\,I, B_2\right]$ to have full row rank for every eigenvalue $s$ of $A_2$. The top half, after substituting $\bar{B}_1 = B_1 + T_2\,B_2$ and $\bar{A}_2 = A_2 + B_2\,K_2$, can then be rewritten as
 
$$
A_3 + T_2\,\bar{A}_2 - A_1\,T_2 + B_1\,K_2 = 0,
$$
 
which can be transformed into a [Sylvester equation](https://en.wikipedia.org/wiki/Sylvester_equation)
 
$$
A\,X + X\,B = C,
$$
 
with $X = T_2$, $A = -A_1$, $B = \bar{A}_2$ and $C = -A_3 - B_1\,K_2$. This equation has a unique solution for $X$ when $A$ and $-B$ have no common eigenvalue, that is, when $A_1$ and $\bar{A}_2$ have no common eigenvalue. This is exactly the constraint mentioned in the problem statement.
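
In SciPy this step is a one-liner; a minimal sketch (the helper name is mine, and it assumes $\bar{A}_2 = A_2 + B_2\,K_2$ has already been formed from the pole placement; `scipy.linalg.solve_sylvester(a, b, q)` solves $a\,X + X\,b = q$):

```python
from scipy.linalg import solve_sylvester

def solve_for_T2(A1, Abar2, A3, B1, K2):
    """Solve A3 + T2 Abar2 - A1 T2 + B1 K2 = 0 for T2, i.e. the
    Sylvester equation (-A1) T2 + T2 Abar2 = -(A3 + B1 K2)."""
    return solve_sylvester(-A1, Abar2, -(A3 + B1 @ K2))
```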
 
So to solve this problem you can first do pole placement with the pair $(A_2,B_2)$ to find $K_2$, avoiding the eigenvalues of $A_1$, and then solve the Sylvester equation with this $K_2$ substituted in to obtain $T_2$. With these values the final solution can be expressed as
 
$$
T = 
\begin{bmatrix}
I & T_2 \\ 0 & I
\end{bmatrix}, \quad
K = 
\begin{bmatrix}
0 & K_2
\end{bmatrix}.
$$
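
Putting everything together, here is a minimal end-to-end numerical sketch of the construction; the block sizes, random matrices and placed poles are placeholders chosen purely for illustration, and the placed poles are assumed not to coincide with the eigenvalues of $A_1$.

```python
import numpy as np
from scipy.signal import place_poles
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n1, n2, r = 3, 4, 2                       # sizes of A1, A2 and number of inputs
A1 = rng.standard_normal((n1, n1))
A2 = rng.standard_normal((n2, n2))
A3 = rng.standard_normal((n1, n2))
B1 = rng.standard_normal((n1, r))
B2 = rng.standard_normal((n2, r))
A = np.block([[A1, A3], [np.zeros((n2, n1)), A2]])
B = np.vstack([B1, B2])

# 1) Pole placement on (A2, B2), avoiding eig(A1).  place_poles returns a gain
#    F with eig(A2 - B2 F) = poles, so K2 = -F gives Abar2 = A2 + B2 K2.
poles = -1.0 - np.arange(n2)              # assumed disjoint from eig(A1)
K2 = -place_poles(A2, B2, poles).gain_matrix
Abar2 = A2 + B2 @ K2

# 2) Sylvester equation (-A1) T2 + T2 Abar2 = -(A3 + B1 K2) for T2.
T2 = solve_sylvester(-A1, Abar2, -(A3 + B1 @ K2))

# 3) Assemble T and K, then verify both goals.
T = np.block([[np.eye(n1), T2], [np.zeros((n2, n1)), np.eye(n2)]])
K = np.hstack([np.zeros((r, n1)), K2])
closed = T @ (A + B @ K) @ np.linalg.inv(T)
assert np.allclose(closed[:n1, :n1], A1)      # top-left block equals A1
assert np.allclose(closed[:n1, n1:], 0)       # top-right block is zero
assert np.allclose(closed[n1:, n1:], Abar2)   # bottom-right block equals Abar2
assert np.allclose((T @ B)[n1:, :], B2)       # bottom block of T B equals B2
```

The assertions check exactly the two goals: $T(A+BK)T^{-1}$ is block diagonal with $A_1$ in the upper-left block and $\bar{A}_2$ in the lower-right block, and the bottom block of $TB$ is $B_2$.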