Archive for the ‘Bagatelle’ Category
This post describes a nearly useless category that reminds me of the topologies associated with categories of coalgebras for a set endofunctor.
Let $X$ and $Y$ be topological spaces. A not-necessarily continuous map $f \colon X \to Y$ is open if for every open set $U \subseteq X$, the image $f(U)$ is open in $Y$. The category of topological spaces and open maps has the same isomorphism types as the category of topological spaces and continuous maps.
Proof: A map is an isomorphism in the category of spaces and open maps if and only if it is a homeomorphism, i.e., an isomorphism in the category of spaces and continuous maps: a bijection $f$ is open exactly when $f^{-1}$ is continuous, so $f$ and $f^{-1}$ are both open exactly when $f$ and $f^{-1}$ are both continuous.
Coproducts exist in the category of spaces and open maps (disjoint unions, as usual). In general, products do not exist. However, something analogous to powers of a single space exists.
For a topological space $X$, let $DX$ ("the square of $X$") denote the topological space with underlying set $X \times X$ and topology generated by the basis
$$\{\, \delta(U) : U \text{ open in } X \,\} \cup \{ X \times X \},$$
where $\delta \colon X \to X \times X$, $\delta(x) = (x, x)$, is the diagonal map, so that each $\delta(U) = \{(x, x) : x \in U\}$ is a subset of the diagonal. This topology is the smallest topology on $X \times X$ making the diagonal map open (and continuous).
The construction $X \mapsto DX$ defines an endofunctor $D$ on the category of spaces and open maps, together with a natural transformation from the identity functor to $D$ whose component at $X$ has the same underlying map as the diagonal $\delta$. There is a one-to-one correspondence between open maps $g \colon X \to Y$ and open maps $f \colon DX \to Y$ such that $f = (f \circ \delta) \circ \pi_1$, where $\pi_1 \colon DX \to X$ is the first projection (projections are open, and a composite of open maps is open).
Proof: Given an open map $g \colon X \to Y$, the open map $f = g \circ \pi_1$ (open as a composite of open maps) satisfies $f \circ \delta = g$, hence $(f \circ \delta) \circ \pi_1 = g \circ \pi_1 = f$. Conversely, a map $f \colon DX \to Y$ with $f = (f \circ \delta) \circ \pi_1$ defines $g = f \circ \delta$, which is open since $g(U) = f(\delta(U))$. Then $g \circ \pi_1 = (f \circ \delta) \circ \pi_1 = f$, so the two assignments are mutually inverse.
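Since these definitions are easy to get wrong, here is a small sanity check in Python on a finite example (the Sierpiński space); the helper names and the choice of example are mine, not the post's.

```python
from itertools import combinations

def generated_topology(basis, universe):
    """All unions of subfamilies of the basis, plus the whole space (finite case)."""
    sets = [frozenset(s) for s in basis] + [frozenset(universe)]
    opens = set()
    for r in range(len(sets) + 1):
        for combo in combinations(sets, r):
            opens.add(frozenset().union(*combo))
    return opens

# Sierpinski space: points {0, 1}, open sets {}, {0}, {0, 1}
X = {0, 1}
TX = {frozenset(), frozenset({0}), frozenset(X)}

# DX: underlying set X x X, topology generated by delta(U) for U open, plus X x X
basis = [frozenset((x, x) for x in U) for U in TX]
TDX = generated_topology(basis, {(a, b) for a in X for b in X})

# the diagonal map is open: delta(U) is open in DX for every open U in X
assert all(frozenset((x, x) for x in U) in TDX for U in TX)
# the diagonal map is continuous: preimages of opens in DX are open in X
assert all(frozenset(x for x in X if (x, x) in V) in TX for V in TDX)
# the first projection DX -> X is open
assert all(frozenset(a for (a, b) in V) in TX for V in TDX)
```

On this example the opens of $DX$ are just the sets $\delta(U)$ together with the whole square, which is what makes the off-diagonal part of $DX$ so coarse.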
If $A \subseteq \mathbb{N}$, we define $A(n) = |A \cap \{1, \dots, n\}|$ and say that $A$ has asymptotic density $d(A)$ if $d(A) = \lim_{n \to \infty} A(n)/n$ exists. The collection of sets that have asymptotic density does not form an algebra.
The set $A$ in question has asymptotic density 1/2. This is illustrated by the blue graph below. The red graph shows the intersection of $A$ with the set $E$ of even numbers; call this intersection $B$. The upper density $\limsup_n B(n)/n$ of $B$ and the lower density $\liminf_n B(n)/n$ of $B$ differ, so $B$ has no asymptotic density. (This can be established with certain geometric series.) The two sets $A$ and $E$ each have asymptotic density (in this case, 1/2), but their intersection does not.
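Since the graphs themselves are not reproduced here, the following Python sketch uses a standard set of this type (my choice of example, not necessarily the one graphed in the post): take the evens in each block $[4^k, 2 \cdot 4^k)$ and the odds in each block $[2 \cdot 4^k, 4^{k+1})$. The set has density 1/2, but its intersection with the evens has oscillating partial densities.

```python
def in_A(n):
    """A density-1/2 set: evens in [4^k, 2*4^k), odds in [2*4^k, 4^(k+1))."""
    k = 0
    while 4 ** (k + 1) <= n:
        k += 1
    if n < 2 * 4 ** k:           # even half of the block
        return n % 2 == 0
    return n % 2 == 1            # odd half of the block

def partial_density(pred, n):
    """A(n)/n: the proportion of {1, ..., n} satisfying pred."""
    return sum(1 for m in range(1, n + 1) if pred(m)) / n

in_B = lambda m: in_A(m) and m % 2 == 0   # B = A intersect the evens

print(partial_density(in_A, 4 ** 6))      # exactly 1/2 at n = 4^6
print(partial_density(in_B, 2 * 4 ** 5))  # near 1/3 (close to the upper density)
print(partial_density(in_B, 4 ** 6))      # near 1/6 (close to the lower density)
```

For this particular construction the partial densities of $B$ oscillate between roughly 1/6 and 1/3 along the block endpoints, so the limit does not exist.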
This shows that asymptotic density cannot serve as a probability measure: its domain is not even an algebra of sets, let alone a $\sigma$-algebra.
In “Calculus on Manifolds,” Michael Spivak incorrectly states conditions for a linear operator on a finite-dimensional real inner product space to be angle preserving, under the assumption that the space has a basis consisting of eigenvectors of the operator.
Let $T$ be an injective linear operator on a finite-dimensional real inner product space $V$. The operator $T$ is angle preserving if and only if for all nonzero $x, y \in V$,
$$\frac{\langle Tx, Ty \rangle}{\|Tx\| \, \|Ty\|} = \frac{\langle x, y \rangle}{\|x\| \, \|y\|}.$$
Spivak writes that if there exist a basis $x_1, \dots, x_n$ of $V$ and real numbers $\lambda_1, \dots, \lambda_n$ such that $Tx_i = \lambda_i x_i$, then $T$ is angle preserving if and only if all the $\lambda_i$ have the same absolute value.
This is incorrect: the eigenbasis must be orthogonal. One can produce linear operators on $\mathbb{R}^2$ with real eigenvalues of the same absolute value but which do not preserve angles. An example (one of many; any non-symmetric matrix with eigenvalues $\pm 1$ will do) is the operator with the following matrix in the standard basis:
$$A = \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}.$$
An eigenbasis for this operator is
$$x_1 = (1, 0), \qquad x_2 = (1, -1),$$
with $Ax_1 = x_1$ and $Ax_2 = -x_2$; note that $\langle x_1, x_2 \rangle = 1 \neq 0$. We can use the eigenbasis to diagonalize the matrix:
$$A = PDP^{-1}, \qquad P = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
The eigenvalues $\pm 1$ have the same absolute value, yet $A$ sends the perpendicular vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$ to $(1, 0)$ and $(2, -1)$, which are not perpendicular, so $A$ is not angle preserving.
The possibility that there may be some other, orthogonal eigenbasis that we have overlooked is ruled out by the spectral theorem for linear operators. The spectral theorem for a linear operator $T$ on a real finite-dimensional inner product space $V$ states that $V$ has an orthonormal basis consisting of eigenvectors of $T$ if and only if $T$ is self-adjoint. In the example above, the matrix is not symmetric, so there is no orthogonal basis of $\mathbb{R}^2$ consisting of eigenvectors of the operator.
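As a numerical check, the following Python snippet uses a representative matrix of this kind (my reconstruction: eigenvalues $1$ and $-1$, non-orthogonal eigenvectors) and confirms that it fails to preserve a right angle.

```python
import math

# a non-symmetric matrix with eigenvalues 1 and -1 (same absolute value);
# its eigenvectors (1, 0) and (1, -1) are not orthogonal
A = [[1, 2], [0, -1]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def cos_angle(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

# eigenvector check: A fixes (1, 0) and negates (1, -1)
assert apply(A, (1, 0)) == (1, 0)
assert apply(A, (1, -1)) == (-1, 1)

# the standard basis vectors are perpendicular, but their images are not
x, y = (1, 0), (0, 1)
print(cos_angle(x, y))                      # 0.0
print(cos_angle(apply(A, x), apply(A, y)))  # nonzero: angles are not preserved
```

The image vectors are $(1, 0)$ and $(2, -1)$, whose cosine is $2/\sqrt{5}$, so the right angle is destroyed even though both eigenvalues have absolute value 1.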
Let's prove that if $T$ is angle preserving, then all eigenvalues of $T$ have the same absolute value. If $x$ and $y$ are two eigenvectors of $T$ with distinct eigenvalues $a$ and $b$, then there are two cases to consider.
Case one: $\langle x, y \rangle \neq 0$. We may assume that $x$ and $y$ have norm 1, and write $c = \langle x, y \rangle$. Applying the Gram–Schmidt procedure to $x$ and $y$ (up to scale) produces $z = y - cx$, which is orthogonal to $x$ (if $z = 0$ then $y = \pm x$, forcing $a = b$). We have that
$$\langle Tx, Tz \rangle = \langle ax, \, by - acx \rangle = abc - a^2 c = ac(b - a).$$
Since $T$ is angle preserving and $\langle x, z \rangle = 0$, it follows that
$$ac(b - a) = 0.$$
Dividing by $ac$, which is nonzero ($T$ is injective and cannot have $0$ as an eigenvalue, and we are in case one), we have that $a = b$, contradicting the assumption that $a$ and $b$ are distinct. So case one cannot occur: eigenvectors with distinct eigenvalues are orthogonal.
Case two: $\langle x, y \rangle = 0$. Again assume $x$ and $y$ have norm 1. In that case $\langle x + y, x - y \rangle = \|x\|^2 - \|y\|^2 = 0$, and since $T$ is angle preserving, $\langle T(x + y), T(x - y) \rangle = \langle ax + by, ax - by \rangle = a^2 - b^2 = 0$. This implies that $a^2 = b^2$, so that $a$ and $b$ have the same absolute value.
Hence in either case, the eigenvalues of $T$ satisfy $|a| = |b|$; write $\lambda$ for their common absolute value. We may write
$$V = E_{\lambda} \oplus E_{-\lambda},$$
where $E_{\pm\lambda} = \ker(T \mp \lambda I)$ (one summand may be trivial), and the two summands are orthogonal by case one. Applying the Gram–Schmidt orthonormalization process to bases of each summand produces an orthonormal basis of $V$ consisting of eigenvectors for $T$.
The converse, under the assumption that $T$ has an orthonormal eigenbasis with eigenvalues of equal absolute value, is a straightforward calculation.
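The converse can also be sanity-checked numerically. The snippet below (my illustration) builds a symmetric operator with an orthonormal eigenbasis and eigenvalues $\pm 2$ (twice a reflection matrix) and verifies that cosines of angles are preserved on random vectors.

```python
import math
import random

# 2 * (reflection matrix): symmetric, orthonormal eigenbasis, eigenvalues +2 and -2
theta = 0.7
T = [[2 * math.cos(theta), 2 * math.sin(theta)],
     [2 * math.sin(theta), -2 * math.cos(theta)]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def cos_angle(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

random.seed(0)
for _ in range(100):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    # a scaled orthogonal map preserves cosines of angles
    assert abs(cos_angle(apply(T, x), apply(T, y)) - cos_angle(x, y)) < 1e-9
```

Here $T = 2R$ for an orthogonal reflection $R$, so $\langle Tx, Ty \rangle = 4\langle x, y \rangle$ and $\|Tx\| = 2\|x\|$; the factors of 2 cancel in the cosine, which is exactly the "straightforward calculation."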
The determinant of a skew-symmetric matrix of odd order vanishes: if $A^{T} = -A$ and $A$ has odd order $n$, then $\det A = \det A^{T} = \det(-A) = (-1)^n \det A = -\det A$. Suppose that $A$ is a skew-symmetric matrix of even order $n$. If $B$ is the matrix obtained from $A$ by adding the same number $t$ to each of the entries of $A$, then $A$ and $B$ have the same determinant. The determinant of $B$ equals the following bordered determinant of order $n + 1$:
$$\det B = \det \begin{pmatrix} 1 & \mathbf{0}^{T} \\ \mathbf{1} & B \end{pmatrix},$$
where $\mathbf{1}$ is the column vector with all entries equal to $1$. Adding $-t$ times the first column to each of the remaining columns turns $B$ back into $A$ and fills the rest of the first row with $-t$'s; multiplying the first column (hence the determinant) by $t$ then gives
$$t \det B = \det \begin{pmatrix} t & -t\mathbf{1}^{T} \\ t\mathbf{1} & A \end{pmatrix}.$$
The last determinant equals the following sum, using linearity in the first column:
$$t \det B = \det \begin{pmatrix} 0 & -t\mathbf{1}^{T} \\ t\mathbf{1} & A \end{pmatrix} + \det \begin{pmatrix} t & -t\mathbf{1}^{T} \\ \mathbf{0} & A \end{pmatrix}.$$
The first term is the determinant of a skew-symmetric matrix, which must be zero since it has odd order $n + 1$; the second term equals $t \det A$ (expand along the first column). Hence $t \det B = t \det A$; for $t \neq 0$ divide by $t$ (the case $t = 0$ is trivial) to conclude $\det B = \det A$.
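The identity is easy to test numerically. Here is a short Python check in exact arithmetic (the helper names are mine) on random skew-symmetric matrices.

```python
import random
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def random_skew(n):
    """Random skew-symmetric matrix with small integer entries."""
    A = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = Fraction(random.randint(-9, 9))
            A[j][i] = -A[i][j]
    return A

random.seed(1)
A = random_skew(5)
assert det(A) == 0             # odd order: the determinant vanishes

A = random_skew(4)
t = Fraction(7)
B = [[entry + t for entry in row] for row in A]
assert det(B) == det(A)        # even order: adding t to every entry preserves det
```

Exact rationals avoid any floating-point ambiguity in the equality checks; for a 4-by-4 skew-symmetric matrix one can also confirm the value against the square of the Pfaffian.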