Christian Marks

Ecumenical dispatches from the London Library

A co-topological category


This post describes a nearly useless category that reminds me of the topologies associated with categories of coalgebras for a set endofunctor.

Let ${X, Y}$ be topological spaces. A not-necessarily continuous map ${f:X\rightarrow Y}$ is open if for every open set ${U\subset X}$, ${f[U]}$ is open in ${Y}$. The category ${\mathbf{Opt}}$ of topological spaces and open maps has the same isomorphism types as the category ${\mathbf{Top}}$ of topological spaces and continuous maps.

Proof: A map is an isomorphism in ${\mathbf{Opt}}$ if and only if it is a homeomorphism in ${\mathbf{Top}}$. $\Box$

Coproducts exist in ${\mathbf{Opt}}$. In general, products do not exist in ${\mathbf{Opt}}$. However, something analogous to powers of a single space exists.

For a topological space ${X}$, ${\square X}$ (“the square of ${X}$”) denotes the topological space with underlying set ${X\times X}$ and topology generated by the basis

$\displaystyle \{ U\times V, \Delta_X[U] : U,V\,\mathrm{open}\,\mathrm{in}\,X\}$

where ${\Delta_X:X\rightarrow X\times X}$ is the diagonal. This topology is the smallest topology on ${X\times X}$ making the diagonal map open (and continuous).
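The definition can be sanity-checked on a finite space. The sketch below is a minimal Python model, assuming ${X}$ is the two-point Sierpiński space (my illustrative choice, not taken from the post): it builds the basis ${\{U\times V, \Delta_X[U]\}}$, generates the topology of ${\square X}$ as all unions of basis elements, and verifies that the diagonal map is open and continuous.

```python
from itertools import product

# A finite sanity check of the square construction. X is the two-point
# Sierpinski space -- an illustrative choice, not taken from the post.
X = frozenset({0, 1})
opens_X = {frozenset(), frozenset({1}), X}

def box(U, V):
    """The open box U x V in X x X."""
    return frozenset(product(U, V))

def diag(U):
    """The image of U under the diagonal map Delta_X."""
    return frozenset((x, x) for x in U)

# Basis for the topology of the square of X: open boxes plus diagonal images.
basis = {box(U, V) for U in opens_X for V in opens_X} | {diag(U) for U in opens_X}

# All unions of basis elements (folding over the basis hits every subset).
topology = {frozenset()}
for b in basis:
    topology |= {t | b for t in topology}

# The diagonal map is open: images of open sets are open ...
assert all(diag(U) in topology for U in opens_X)
# ... and continuous: preimages of basic opens are open in X.
assert all(frozenset(x for x in X if (x, x) in b) in opens_X for b in basis)
```

The continuity check works because the preimage of ${U\times V}$ under the diagonal is ${U\cap V}$ and the preimage of ${\Delta_X[U]}$ is ${U}$, both open.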

The ${\square}$ construction defines an endofunctor ${\square}$ on ${\mathbf{Opt}}$ together with a natural transformation ${\delta_X:X\rightarrow\square X}$ (the same underlying map as ${\Delta_X}$) from the identity functor ${1_\mathbf{Opt}}$ to ${\square}$. There is a one-to-one correspondence between maps ${f:X\rightarrow Y}$ and maps ${g:X\rightarrow\square Y}$ such that ${\pi_1 g= \pi_2 g}$, where ${\pi_i:\square Y\rightarrow Y}$, ${i=1,2}$, are the projections (which are open).

Proof: Given an open map ${f:X \rightarrow Y}$, the open map ${\delta_Y f}$ satisfies ${\pi_1\delta_Y f = f = \pi_2 \delta_Y f}$. Conversely, a map ${g:X\rightarrow \square Y}$ with ${\pi_1 g= \pi_2 g}$ defines ${f=\pi_1 g}$. Then

$\displaystyle \delta_Y f = (\pi_1 g \times \pi_1 g) \delta_X = (\pi_1 g \times \pi_2 g) \delta_X = (\pi_1 \times \pi_2)\, \delta_{\square Y}\, g = g$

$\Box$

Written by Christian Marks

July 6, 2011 at 3:37 AM

Posted in Bagatelle

The tragedy of asymptotic density

If $A\subseteq\mathbb{N}$, we define $A(n) = A\cap[1,n]$ and say that $A$ has asymptotic density if $\lim_{n\rightarrow\infty} |A(n)|/{n}$ exists. The collection of sets that have asymptotic density is not closed under intersection, and so does not form an algebra of sets.

The set $A = \{4,6, 9, 11, 13, 15, 16, 18, 20, 22, 24, 26, 28, 30, 33,\ldots\}$ has asymptotic density 1/2. This is illustrated by the blue graph below. The red graph is the intersection of $A$ with the set $2\mathbb{N}$ of even numbers. Call this intersection $B$. The upper density of $B$ is $\limsup_{n\rightarrow\infty} |B(n)|/{n}=1/3$ and the lower density of $B$ is $\liminf_{n\rightarrow\infty} |B(n)|/{n}=1/6$. (This can be established with certain geometric series.) The two sets $A$ and $2\mathbb{N}$ have asymptotic density (in this case, 1/2), but their intersection $B=A\cap2\mathbb{N}$ does not.

[Figure: running densities $|A(n)|/n$ (blue) and $|B(n)|/n$ (red) for $A = \{4, 6, 9, 11, 13, 15, 16, 18, 20, 22, \ldots, 30, 33, 35, \ldots, 61, 63, 64, \ldots\}$]

This implies that asymptotic density cannot be used as a probability measure: a probability measure must be defined on an algebra of events, and the sets with asymptotic density do not form one.
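The oscillation can be checked numerically. The sketch below assumes a natural reading of the construction of $A$, inferred from the listed elements rather than stated in the post: take the even numbers on dyadic blocks $[2^{2k}, 2^{2k+1})$ and the odd numbers on $[2^{2k+1}, 2^{2k+2})$. It then evaluates the running densities at block boundaries, where $|B(n)|/n$ is near its extremes.

```python
def in_A(m):
    """Membership in A, assuming the dyadic alternating-parity reading:
    evens on [2^(2k), 2^(2k+1)), odds on [2^(2k+1), 2^(2k+2))."""
    if m < 4:
        return False            # A starts at 4 in the post
    k = m.bit_length() - 1      # m lies in the block [2^k, 2^(k+1))
    return m % 2 == k % 2       # keep evens on even blocks, odds on odd blocks

def in_B(m):
    """B is the intersection of A with the even numbers."""
    return in_A(m) and m % 2 == 0

def density(pred, n):
    """Running density |{m <= n : pred(m)}| / n."""
    return sum(1 for m in range(1, n + 1) if pred(m)) / n

print(density(in_A, 2**18))     # ~ 1/2
print(density(in_B, 2**17))     # ~ 1/3, near the upper density
print(density(in_B, 2**18))     # ~ 1/6, near the lower density
```

Each even-parity block contributes a geometric series of even numbers to $B$, which is why the exact upper and lower densities come out to $1/3$ and $1/6$.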

Written by Christian Marks

May 31, 2010 at 1:17 AM

Posted in Bagatelle

Spivak’s mistaken problem

In “Calculus on Manifolds,” Michael Spivak incorrectly states conditions for a linear operator on a finite dimensional real inner product space to be angle preserving, on the assumption that the space has a basis consisting of eigenvectors of the operator.

Let $T$ be an injective linear operator on a finite dimensional real inner product space $V$. The operator $T$ is angle preserving if and only if for all nonzero $x,y\in V$,

$\frac {\langle T x | T y\rangle} {|T x| |T y|} = \frac {\langle x | y\rangle} { |x| |y| }$

Spivak writes that, assuming there exists a basis $x_1,\ldots,x_n$ of $V$ and real numbers $\lambda_i,\, 1\le i\le n$ such that $T x_i = \lambda_i x_i$, then $T$ is angle preserving if and only if the $\lambda_i$ all have the same absolute value.

This is incorrect: the eigenbasis must be orthogonal. One can produce linear operators on $\mathbb{R}^2$ with real eigenvalues $\pm 1$ but which do not preserve angles. An example is an operator with the following matrix in the standard basis:

$\left(\begin{array}{cc}-1 & 0\\-2 & 1\end{array}\right).$

An eigenbasis for this operator is

$\left\{\left(\begin{array}{c}0\\1\end{array}\right),\,\left(\begin{array}{c}1\\1\end{array}\right)\right\}.$

We can use the eigenbasis to diagonalize the matrix:

$\left(\begin{array}{cc}-1 & 0\\-2 & 1\end{array}\right)=\left(\begin{array}{cc}0 & 1\\ 1 & 1\end{array}\right)\left(\begin{array}{cc}1 & 0\\ 0 & -1\end{array}\right)\left(\begin{array}{cc}-1 & 1\\ 1 & 0\end{array}\right).$

The possibility that some other, orthogonal eigenbasis has been overlooked is ruled out by the spectral theorem: a real finite dimensional inner product space V has an orthonormal basis consisting of eigenvectors of a linear operator T if and only if T is self-adjoint. The matrix above is not symmetric, so there is no orthogonal basis of $\mathbb{R}^2$ consisting of its eigenvectors.
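The counterexample is easy to confirm numerically. The NumPy sketch below checks that the matrix has eigenvalues $\pm 1$ yet sends the orthogonal pair $e_1, e_2$ to vectors that are not orthogonal, so it does not preserve angles.

```python
import numpy as np

# The counterexample matrix from the post.
T = np.array([[-1.0, 0.0],
              [-2.0, 1.0]])

# Its eigenvalues are +1 and -1, all of the same absolute value ...
print(sorted(np.linalg.eigvals(T)))     # approximately [-1.0, 1.0]

# ... yet T does not preserve the right angle between e1 and e2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(float(e1 @ e2))                   # 0.0
print(float((T @ e1) @ (T @ e2)))       # -2.0
```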

Let’s prove that if T is angle preserving, then all of its eigenvalues have the same absolute value. If x and y are eigenvectors of T with distinct eigenvalues a and b, there are two cases to consider.

Case one: $\langle x | y\rangle \ne 0$. We may assume that x and y have norm 1. A single Gram-Schmidt step produces from y a vector orthogonal to x:

$\langle x | y - \langle x| y\rangle x\rangle=\langle x | y\rangle - \langle x | y\rangle \langle x | x\rangle = 0.$

Since T is angle preserving, it follows that

$0=\left\langle T x | T (y - \langle x| y\rangle x)\right\rangle = a b\langle x| y\rangle - a^2\langle x| y\rangle.$

Dividing by $a \langle x | y\rangle$, which is nonzero (T is injective, so 0 is not an eigenvalue, and we are in case one), we conclude that a = b. This contradicts the assumption that a and b are distinct, so case one cannot occur.

Case two: $\langle x | y\rangle = 0$. In that case $\langle x + y| x-y\rangle = 0$, and since T is angle preserving, $\langle T(x + y)|T( x-y)\rangle = 0$. This implies that $a^2 = b^2$, so that a and b have the same absolute value.

Hence any two distinct eigenvalues have the same absolute value, so the eigenvalues of T satisfy $\lambda_i=\pm\lambda$ for a single $\lambda$. We may write

$V = \ker (T-\lambda I)\oplus\ker (T+\lambda I)$

where $\ker (T+\lambda I)=\ker (T-\lambda I)^\perp$, and apply the Gram-Schmidt orthonormalization process to bases of each summand to produce an orthonormal basis of V consisting of eigenvectors of T.

The converse, on the assumption that T has an orthonormal eigenbasis, is a straightforward calculation.

Written by Christian Marks

July 6, 2009 at 4:38 AM

Posted in Bagatelle

Skew symmetric matrices

The determinant of a skew symmetric matrix of odd order vanishes. Suppose that A is a skew symmetric matrix of even order $n$. If B is the matrix obtained from A by adding the same number $\lambda$ to each of the entries of A, then A and B have the same determinant. The determinant of A equals the following.

$\left|\begin{array}{cccc}1&0&\cdots&0\\ 0\\ \vdots&&A\\ 0 \end{array}\right|=\left|\begin{array}{cccc}1&\lambda&\cdots&\lambda\\ 0 \\ \vdots&&A\\ 0 \end{array}\right|=\left|\begin{array}{cccc}1&\lambda&\cdots&\lambda\\1\\ \vdots&&A+\Lambda\\1\end{array}\right|.$

where $\Lambda$ is the $n\times n$ matrix with all entries equal to $\lambda$. The last determinant equals the following sum, by multilinearity of the determinant in the first column.
$\left|\begin{array}{cccc}0&\lambda & \cdots&\lambda\\1\\ \vdots&&A+\Lambda\\1\end{array}\right|+\left|\begin{array}{cccc}1&\lambda&\cdots&\lambda\\ 0\\ \vdots & & A+\Lambda \\ 0 \end{array}\right|$

We may assume $\lambda \ne 0$, since the claim is trivial otherwise. In the first term, add $-\lambda$ times the first column to each of the remaining columns, then multiply the first column (and hence the determinant) by $-\lambda$. The result is a skew symmetric matrix of odd order, whose determinant is zero, so the first term vanishes. In the second term, subtract $\lambda$ times the first column from each of the remaining columns to clear the first row.
$\left|\begin{array}{cccc}0& \lambda & \cdots & \lambda\\ -\lambda\\ \vdots & &A \\ -\lambda \end{array}\right|+\left|\begin{array}{cccc}1&0&\cdots&0\\ 0\\ \vdots & & A+\Lambda \\ 0 \end{array}\right|=|A+\Lambda|$

Hence $|A| = |A+\Lambda| = |B|$, as claimed.
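Both facts are easy to confirm numerically. The NumPy sketch below (with an arbitrary random seed) checks that an odd-order skew symmetric matrix has vanishing determinant, and that adding a constant to every entry of an even-order one leaves the determinant unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(n):
    """Random n x n skew symmetric matrix."""
    M = rng.standard_normal((n, n))
    return M - M.T                      # (M - M.T).T == -(M - M.T)

# Odd order: the determinant vanishes (up to floating point noise).
print(abs(np.linalg.det(skew(5))) < 1e-9)               # True

# Even order: adding lambda to every entry preserves the determinant.
A = skew(6)
lam = 2.5
B = A + lam * np.ones_like(A)
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
```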

Written by Christian Marks

July 5, 2009 at 7:23 AM

Posted in Bagatelle