## Saturday, November 29, 2014

### Learning project #1: a theorem

0th post in this series
1st post (updated with some more facts about IA structures at the end)
(edited 12/10/2014 to improve clarity)

We'll explain the proof of Theorem 27 of FMS:

Theorem 27: Suppose $\kappa$ is a supercompact cardinal and $\mu<\kappa$ is regular. Then in $V^{\rm{Col}(\mu,<\kappa)}$, the nonstationary ideal on $\mu$ is precipitous.

We'll follow the proof in FMS, trying to use Foreman's chapter for some new insights. I think the material is generally covered much better in the Handbook chapter than in the article.

Now that we have Lemma 1 (from last time) as a criterion for proving ideals precipitous, we can carry out the following game plan.

Let $G$ be generic for $\rm{Col}(\mu,<\kappa)$, and work in the generic extension $V[G]$. We will show:

Main Claim: Let $\mathfrak{A}$ be an expansion of $\langle H(\lambda),\in,\Delta\rangle$ (where $\lambda$ is some regular cardinal above, say, $2^{2^{2^\mu}}$). Then for almost all (in the sense of the nonstationary ideal) $N\prec\mathfrak{A}$ with $N\in \rm{IA}$, $|N|<\mu$, and $N\cap \mu$ an ordinal:

For all maximal antichains $\mathcal{A} \in N$ of $P(\mu)/NS$, there is an $A\in \mathcal{A}$ such that

1. $N\cap \mu\in A$ and
2. letting $N^A$ be the Skolem hull of $N\cup \{A\}$ in $\mathfrak{A}$, we have $N^A\cap \mu=N\cap \mu$.

Let's explain the usefulness of these conditions. To apply our Lemma 1, let $\mathcal{A}=\langle A_s:s\in T \rangle$ be a tree of maximal antichains for $P(\mu)/\rm{NS}$ with underlying tree $T\subseteq (\mu^+)^{<\omega}$. We will construct a branch $f:\omega\rightarrow \mu^+$ so that $\bigcap_n A_{f\upharpoonright n}\neq \emptyset$.

We'll construct the branch $f$ in $\omega$ stages.

Take $N_0$ containing $\mathcal{A}$ as an element. Let $s_0=\langle\alpha_0\rangle$ be so that $A_{s_0}$ witnesses (1) and (2) with antichain equal to the first level of $\mathcal{A}$. From now on, use the notation $s_i:=\langle \alpha_0,\ldots, \alpha_{i}\rangle$.

At the $(i+1)$st stage, let $N_{i+1}:=(N_i)^{\alpha_i}$ (the Skolem hull of $N_i\cup\{\alpha_i\}$ in $\mathfrak{A}$). Pick $\alpha_{i+1}$ so that (1) and (2) are satisfied for $N=N_{i+1}$ and $s=\langle \alpha_0,\ldots, \alpha_{i+1}\rangle$ in the maximal antichain consisting of $\langle A_s:s \textrm{ an immediate successor of }s_i\textrm{ in }T\rangle$ together with some set not containing the single point $N\cap \mu$. (To carry out this construction, we need to show that the property $N_i\in\rm{IA}$ is maintained, which is Lemma 2 from last time.)

We have $N_i\cap \mu\in A_{s_i}$ for all $i$, and by (2), $N_i\cap\mu$ is always equal to $N_0\cap \mu$, so $N_0\cap \mu$ lies in the intersection of all of the $A_{s_i}$. The key point of absorbing the index of the member of the antichain into the structure was that the construction can continue in a coherent way, ensuring we are actually building a branch of $T$. (Foreman's chapter further develops these ideas, leading to the notion of "catching antichains".)

Let us now prove the main claim. Suppose towards a contradiction that there is a stationary set $S\subseteq \rm{IA}$ of structures for which the conclusion of the main claim fails. The idea of the proof is that the supercompactness of $\kappa$ gives us enough reflection of $S$ to catch the antichain. Let $\mathbb{P}=\rm{Col}(\mu,<\kappa)$ and let $j:V\rightarrow M$ be an $|H(\lambda)|$-supercompact embedding. Since $\mathbb{P}$ is $\kappa$-c.c., we can force to find a generic $H\subseteq j(\mathbb{P})$ which contains $G$ as a subset, so we can extend $j$ to $\hat{j}:V[G]\rightarrow M[H]$ in $V[H]$.

Subclaim: $S$ reflects to a set of size $\mu$, i.e., there is $Y\subseteq H(\lambda)$ with $\mu\subseteq Y$ and $|Y|=\mu$ so that $S\cap P(Y)$ is stationary in $P(Y)$.

Proof of Subclaim: By Lemma 3 from last time, $\hat{j}(S)\cap P(\hat{j}''H(\lambda))$ is stationary in $V[H]$. Now because of the collapse, $|\hat{j}''H(\lambda)|=\mu$, and $\mu\subseteq \hat{j}''H(\lambda)$ since $\kappa$ is the critical point. By closure of $M$ (and, as can be seen, of $M[H]$), $\hat{j}''H(\lambda)\in M[H]$. So ($\hat{j}$ of) the statement of the subclaim holds in $M[H]$, so by elementarity the subclaim holds in $V[G]$. $\Box$

Returning to the proof of the main claim, let $Y$ be given by the subclaim and let $f:\mu\rightarrow Y$ be a bijection. Let $T:=\{\delta<\mu: f''\delta\in S\textrm{ and }\delta=f''\delta\cap\mu\}$. Then it is easy to see that $T$ is stationary, using the fact that $S$ reflects to $Y$. By maximality, there is $A\in\mathcal{A}$ such that $T\cap A$ is stationary. So we can find an IA structure $N'\prec \mathfrak{A}$ containing $A$ and $f$ with $N'\cap \mu \in T\cap A$. Take $N$ to be $f''(N' \cap\mu)$, so $N\in S$ by definition of $T$. By the second property in the definition of $T$, we also have $N\cap \mu=N'\cap \mu$, which belongs to $A$, so (1) holds for $N$. But $N'$ contains $A$ (and $N\subseteq N'$), so the Skolem hull of $N\cup\{A\}$ is contained in $N'$ and therefore has the same intersection with $\mu$ as $N$, so (2) holds as well. This contradicts $N\in S$.

This finishes the proof.

## Thursday, November 20, 2014

### Spencinar #6: The Galvin-Prikry Theorem

Today Zach Norwood talked about the Galvin-Prikry Theorem.

Prerequisites: basic topology, notions of category, familiarity with Ramsey theory is helpful for motivation. Last week's Spencinar post will be referenced occasionally but only in inessential remarks.

The Galvin-Prikry Theorem is a Ramsey result that is perhaps surprising because it is an example of an "infinite exponent" partition property. This means it deals with colorings of infinite subsets of the underlying space. With the Axiom of Choice, none of the usual partition relations hold with infinite exponents.

The Galvin-Prikry Theorem gets around this by only worrying about definable partitions of $[\omega]^\omega$. In order to state the theorem, let's make some definitions:

Definition: $Y\subseteq [\omega]^\omega$ is Ramsey if there is $A\in [\omega]^\omega$ with $[A]^\omega\subseteq Y$ or $[A]^\omega\subseteq Y^c$. (Here $Y^c$ is the complement of $Y$.) I'll informally say that $A$ in the first case is $Y$-homogeneous and $A$ in the second case is $Y^c$-homogeneous.

$Y$ can also be thought of as a coloring $[\omega]^\omega\rightarrow 2$. As in last week's post, we think of $[X]^\omega$ as the set of all increasing infinite sequences from $X$, or sometimes, in a slight abuse of notation, as the set of infinite subsets of $X$.
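As a contrast with the infinite-exponent situation above, the finite-exponent partition relations are unproblematic and can even be verified by brute force in tiny cases. The sketch below (my own illustration, not part of the talk) checks the classical fact $R(3,3)=6$: every 2-coloring of pairs from a 6-element set has a monochromatic triangle, while 5 points do not suffice.

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    # coloring maps each sorted pair from {0,...,n-1} to a color 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

pairs = list(combinations(range(6), 2))
# finite Ramsey: every 2-coloring of pairs from 6 points has a mono triangle
assert all(has_mono_triangle(dict(zip(pairs, bits)), 6)
           for bits in product([0, 1], repeat=len(pairs)))

pairs5 = list(combinations(range(5), 2))
# ...but some 2-coloring of pairs from 5 points has none, so R(3,3) = 6
assert not all(has_mono_triangle(dict(zip(pairs5, bits)), 5)
               for bits in product([0, 1], repeat=len(pairs5)))
```

The Galvin-Prikry Theorem replaces pairs by infinite subsets, where no such exhaustive check is possible and definability becomes essential.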

Galvin-Prikry Theorem: If $Y$ is Borel, then $Y$ is Ramsey.

Here the topology on $[\omega]^\omega$ is generated by sets of the form $\{X: s \textrm{ is an initial segment of }X\}$ for $s\in[\omega]^{<\omega}$. The proof we give, due to Ellentuck, goes through a finer topology which is more suited to the situation at hand. Notice the similarity with the Mathias forcing from last week.

Definition: The Ellentuck topology is generated by sets of the form
$$[s,B]^\omega:=\{X\in [\omega]^\omega: s \textrm{ is an initial segment of } X, X\setminus s\subseteq B\},$$
where $s\in [\omega]^{<\omega}$ and $B\in [\omega]^\omega$.
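The membership condition defining $[s,B]^\omega$ is concrete enough to sketch in code. The following (my own finite illustration; an infinite $X$ is of course replaced by a finite truncation, and the names are mine) tests the two clauses of the definition:

```python
def in_basic_open(X, s, B):
    # Ellentuck condition, on a finite truncation of X:
    # s is an initial segment of X, and X \ s is contained in B
    X, s, B = sorted(X), sorted(s), set(B)
    return X[:len(s)] == s and all(x in B for x in X[len(s):])

# the basic open set [s, B]^omega with stem s = {0, 2} and B = evens above 2
s, B = [0, 2], range(4, 100, 2)
assert in_basic_open([0, 2, 4, 8, 16], s, B)   # stem matches, tail lies in B
assert not in_basic_open([0, 2, 5, 8], s, B)   # 5 is not in B
assert not in_basic_open([0, 4, 8], s, B)      # wrong initial segment
```

Note that $[s,B]^\omega$ is nonempty only when $B\setminus(\max(s)+1)$ is infinite, which is why one usually assumes $\max(s)<\min(B)$.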

We will use "$\ast$-" to indicate the use of the Ellentuck topology over the usual one. Notice that all Borel sets are $\ast$-Borel.

Fix $Y\subseteq [\omega]^\omega$. We say $B\in[\omega]^\omega$ accepts $s$ (into $Y$) if $[s,B]^\omega\subseteq Y$. We say $B$ rejects $s$ if no infinite subset of $B$ accepts $s$. Informally, $B$ accepts $s$ if $s$ is long enough and $B$ is thin enough that every set extending $s$ into $B$ lies in $Y$; $B$ rejects $s$ if $s$ is too short or $B$ concentrates on the wrong place for acceptance to occur. Though perhaps it's best to think just in terms of the Ellentuck topology.

Some easy remarks:
• if $B$ accepts $s$, then so do all infinite subsets of $B$.
• if $B$ rejects $s$, then so do all infinite subsets of $B$.

The main part of the proof is the following lemma, which clarifies the relationship between acceptance and rejection.

Galvin-Prikry Lemma: For $Y\subseteq [\omega]^\omega$, exactly one of the following holds:
1. There is $B\in[\omega]^\omega$ that accepts $\emptyset$, or
2. there is $B \in [\omega]^\omega$ which rejects all of its finite subsets.

A $B$ that accepts $\emptyset$ is just one that is $Y$-homogeneous, and a $B$ which rejects all of its finite subsets is exactly one for which no $\ast$-open subset of $[B]^\omega$ is $Y$-homogeneous. For those familiar with forcing and last week's seminar, this is a special version of the Prikry lemma for the empty condition in Mathias forcing. Not having read the original source, I wonder if Prikry noticed the analogy with Prikry forcing?

Proof: Assume that no $B$ accepts $\emptyset$, so $\omega$ rejects $\emptyset$. We will construct recursively
• infinite subsets of $\omega$: $B_0\supseteq B_1\supseteq B_2\supseteq\cdots$, and
• natural numbers $a_1<a_2<\cdots$
so that $a_i\in B_i$ and $B_i$ rejects all subsets of $\{a_1,\ldots,a_{i-1}\}$. If this can be done, then $\{a_1,a_2,\ldots\}$ rejects all of its finite subsets. This kind of construction is typical in Ramsey theory.

By our assumption, $B_0=\omega$ works. At the $k$th stage of the construction, suppose that $B_0,\ldots, B_k$ and $a_1,\ldots,a_{k-1}$ have already been constructed. We want to define $a_k>a_{k-1}$ in $B_k$ and $B_{k+1}\subseteq B_k$ so that $B_{k+1}$ rejects all subsets of $\{a_1,\ldots,a_k\}$.

If such $a_k, B_{k+1}$ did not exist, then we will define a sequence $C_1\supseteq C_2\supseteq\ldots$ of subsets of $B_k$, $b_1<b_2<\ldots$ with $b_{i+1}\in C_i$ and a finite subset $\bar{s}$ of $B_k$ so that $\{b_1,b_2,\ldots\}$ accepts $\bar{s}$. But this contradicts the inductive assumption on $B_k$. (This auxiliary construction is another version of the "typical" Ramsey construction building a kind of homogeneous object in the other direction.)

Choose $b_1>a_{k-1}$ in $B_k$. $B_k$ doesn't reject all subsets of $\{a_1,\ldots,a_{k-1},b_1\}$ (otherwise we could take $a_k=b_1$ and $B_{k+1}=B_k$), so we can take $C_1\in [B_k]^{\omega}$ and $s_1\subseteq\{a_1,\ldots,a_{k-1},b_1\}$ so that $C_1$ accepts $s_1$. We must have $b_1\in s_1$ since we assumed that $B_k$ rejects all subsets of $\{a_1,\ldots,a_{k-1}\}$. Put $\bar{s}_1=s_1-\{b_1\}$.

Continuing the construction, at stage $i+1$ pick $b_{i+1}>b_i$ in $C_i$, $C_{i+1}\in[C_i]^\omega$, and $s_{i+1}\subseteq \{a_1,\ldots,a_{k-1},b_{i+1}\}$ so that $C_{i+1}$ accepts $s_{i+1}$. Let $\bar{s}_{i+1}=s_{i+1}-\{b_{i+1}\}$.

Now the $\bar{s}_i$ take values among the finitely many subsets of $\{a_1,\ldots,a_{k-1}\}$, so some value $\bar{s}$ occurs for infinitely many $i$. Then $\{b_i:\bar{s}_i=\bar{s}\}$ accepts $\bar{s}$, contradicting the inductive assumption on $B_k$. $\Box$

After this proof, we stopped using the language of acceptance and rejection, which made it easier for me to follow since I don't think that "rejection" is that easy to grasp in the flow of things.

As a first application of the Galvin-Prikry lemma,

Lemma 1: $\ast$-open sets are Ramsey.

Proof: Let $Y\subseteq [\omega]^\omega$ be $\ast$-open, and take $B$ satisfying the conclusion of the Galvin-Prikry lemma. In the first case, $B$ accepts $\emptyset$, so we're done. In the second case, $B$ rejects all of its finite subsets; we'll show that $B$ is $Y^c$-homogeneous. Otherwise, there is an infinite subset $A$ of $B$ with $A\in Y$. Since $Y$ is $\ast$-open, there is a $\ast$-neighborhood of $A$, say $[s,A]^\omega$, with $[s,A]^\omega\subseteq Y$. But then $A$ accepts $s$, and $s$ is a finite subset of $B$, contradicting that $B$ rejects $s$. $\Box$

Now we introduce a strengthening of the Ramsey property:

Definition: A set $Y\subseteq [\omega]^\omega$ is completely Ramsey if for every $s\in[\omega]^{<\omega}$ and every $B\in[\omega]^\omega$, there exists $C\in [B]^\omega$ so that either $[s,C]^\omega\subseteq Y$ or $[s,C]^\omega\subseteq Y^c$. If the second possibility holds for all $s,B$, then $Y$ is called Ramsey null.

Thus, completely Ramsey means that a local Ramsey property holds on any $\ast$-open set. This is useful because topological properties are local in this sense; a key idea of this proof will be to show that certain Ramsey properties are equivalent to topological properties in the Ellentuck topology.

Lemma 2: $\ast$-open sets are completely Ramsey. As a corollary, $\ast$-closed sets are completely Ramsey.

Proof: Let $s\in [\omega]^{<\omega}$ and $B\in [\omega]^\omega$ be arbitrary. The idea of the proof is to use a homeomorphism from $[\omega]^\omega$ to $[s,B]^\omega$ to transfer the result of Lemma 1. Enumerate $B=b_1<b_2<b_3<\cdots$. Then we define $f:[\omega]^\omega\rightarrow [s,B]^\omega$ by $X\mapsto s\cup \{b_i:i\in X\}$ (let's make the harmless assumption that $\min(B)>\max(s)$). It is easy to see that $f$ is $\ast$-continuous, so $f^{-1}Y$ is $\ast$-open, hence Ramsey by Lemma 1. So there is $C'\in [\omega]^\omega$ so that $[C']^\omega\subseteq f^{-1}Y$ or $[C']^\omega\subseteq (f^{-1}Y)^c$. We can take $C=f(C')\setminus s$ to witness complete Ramseyness at $[s,B]^\omega$. $\Box$
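The transfer map in this proof is simple enough to sketch on finite truncations. The code below (my own illustration; $B$ is given as a finite sorted list standing in for an infinite set, and I index from $0$ rather than $1$) computes $X\mapsto s\cup\{b_i:i\in X\}$ and checks that the image lands in $[s,B]^\omega$:

```python
def f(X, s, B):
    # the map X |-> s ∪ {b_i : i ∈ X} on finite truncations, where B is
    # enumerated as a sorted list b_0 < b_1 < ... and min(B) > max(s)
    return sorted(set(s) | {B[i] for i in X})

s = [0, 1]
B = [5, 7, 9, 11, 13]                        # b_0 = 5, b_1 = 7, ...
assert f([0, 2, 4], s, B) == [0, 1, 5, 9, 13]
# the image lands in [s, B]^omega: stem s, remainder inside B
Y = f([1, 3], s, B)
assert Y[:len(s)] == s and all(y in set(B) for y in Y[len(s):])
```

The inverse simply reads off which indices of $B$ were used, which is why $f$ is a homeomorphism onto $[s,B]^\omega$ in the Ellentuck topology.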

We begin to realize the "key idea" mentioned above.

Lemma 3: $Y\subseteq[\omega]^\omega$ is Ramsey null iff it is $\ast$-nowhere dense. (Reminder: a set is nowhere dense iff its closure has empty interior.)

Proof: The forward direction follows easily from the definitions: $\ast$-nowhere dense just means that for any basic $\ast$-open set $[s,B]^\omega$, there is a basic $\ast$-open subset $[t,C]^\omega$ disjoint from $Y$. Ramsey null is the prima facie stronger assertion where we cannot change the "stem" part $s$.

Conversely, take $[s,B]^\omega$ a basic $\ast$-open set. The $\ast$-closure $\bar{Y}$ of $Y$ is completely Ramsey by Lemma 2, so there is $C\in [B]^\omega$ with $[s,C]^\omega\subseteq \bar{Y}$ or $[s,C]^\omega\subseteq \bar{Y}^c$. If $[s,C]^\omega\subseteq \bar{Y}$, then $[s,C]^\omega$ would be a subset of the $\ast$-interior of $\bar{Y}$, contradicting $\ast$-nowhere density. So $[s,C]^\omega\subseteq \bar{Y}^c\subseteq Y^c$, as required. $\Box$

Let's recall some of the basic notions of category, the topological notion of largeness. The meager sets are countable unions of nowhere dense sets. It's a basic result (the Baire category theorem) that under some flexible hypotheses, a space is nonmeager in its own topology, so the category notions are nontrivial.

It turns out that Lemma 3 can be improved to $\ast$-meager sets:

Lemma 4: If $Y$ is $\ast$-meager, then it is Ramsey null.

Proof: Write $Y=\bigcup_{n=0}^\infty Y_n$, where each $Y_n$ is $\ast$-nowhere dense. The proof is roughly a diagonal argument, similar to some of the earlier ones. Let $s,B$ be arbitrary. We use several applications of Lemma 3 to construct $B_0\supseteq B_1\supseteq B_2\supseteq \cdots$ subsets of $B$ and an increasing sequence of $x_i\in B_i$ so that for each $n$,

$[s\cup F,B_n]^\omega\subseteq Y_n^c$ for all $F\subseteq\{x_0,\ldots,x_{n-1}\}$.

So at the $n$th stage, we use Lemma 3 successively to shrink $B_{n-1}$ so that the statement above holds for each $F\subseteq\{x_0,\ldots,x_{n-1}\}$ (one application of Lemma 3 for each $F$).

Then $C=\{x_0,x_1,\ldots\}$ satisfies $[s,C]^\omega\subseteq Y^c$, witnessing Ramsey nullity for $s,B$. $\Box$

So $\ast$-nowhere dense, $\ast$-meager, and Ramsey null are all the same thing.

We can finally put it all together to get the main theorem of this talk, which implies the Galvin-Prikry Theorem as a corollary (since Borel sets in the usual topology are easily $\ast$-Baire, being in the $\sigma$-algebra generated by the $\ast$-open sets). Recall that a set $Y$ is Baire (or has the property of Baire) iff there is a meager set $W$ and an open set $Z$ so that $Y= Z\triangle W$ (these are the "measurable" sets in terms of category).

Theorem: $Y$ is $\ast$-Baire iff it is completely Ramsey.

Proof: For the forward direction, suppose $Y$ is $\ast$-Baire, so there is a $\ast$-meager set $W$ and $\ast$-open $Z$ with $Y=Z\triangle W$. Take $s\in[\omega]^{<\omega}$ and $B\in [\omega]^\omega$ arbitrary. By Lemma 4, there is $C\in [B]^\omega$ so that $[s,C]^\omega\subseteq W^c$. By Lemma 2, there is $D\in[C]^\omega$ so that $[s,D]^\omega\subseteq Z$ or $Z^c$. By choice of $C$, this means that $[s,D]^\omega\subseteq Z-W\subseteq Y$ or $[s,D]^\omega\subseteq Z^c-W\subseteq Y^c$.

Conversely, assume $Y$ is completely Ramsey. We will show that $Y-\rm{int}(Y)$ is $\ast$-meager; it suffices to show that it's Ramsey null. Take $s\in[\omega]^{<\omega}$ and $B\in [\omega]^\omega$ arbitrary. There is $C\in [B]^\omega$ so that $[s,C]^\omega\subseteq Y$ or $Y^c$. In the first case, $[s,C]^\omega$ is contained in the interior of $Y$, and in the second case, $[s,C]^\omega$ is disjoint from $Y$; in either case $[s,C]^\omega\subseteq (Y-\rm{int}(Y))^c$. $\Box$

Zach ended by giving an open question. Mathias proved that if it is consistent that an inaccessible cardinal exists, then it is consistent that projective sets are Ramsey (this statement holds in Solovay's model, obtained by collapsing an inaccessible). But is the inaccessible necessary? Does the statement that projective sets are Ramsey have consistency strength above just ZFC?

## Sunday, November 16, 2014

### Learning project #1, part 1: preliminaries

0th post in this series (updated with some more motivation):
(other relevant posts for reference: Spencinar #3: precipitous ideals)

Let's begin! Remember that one of our goals is

Theorem: $\rm{Col}(\mu,<\kappa)$ makes the nonstationary ideal restricted to $[\mu]^{<\mu}$ precipitous.

First, let's define notions of stationarity on sets of the form $[\lambda]^{<\kappa}=\{x\subseteq \lambda: |x|<\kappa\}$. We will have $\kappa$ regular.

There are two different notions here. If $X$ is a set and $C\subseteq P(X)$, then $C$ is strongly club if there is a structure $\mathfrak{A}$ on $X$ with countably many function symbols so that $C$ is the set of elementary substructures of $\mathfrak{A}$ (or equivalently, the set of substructures closed under a single function $X^{<\omega}\rightarrow X$). $C$ is J-club if it is unbounded in the $\subseteq$ order on $[\lambda]^{<\kappa}$ and closed under increasing unions of size less than $\kappa$. It turns out the two notions are closely connected: a result of Kueker says that the filter generated by the J-club sets is the filter generated by the strongly club sets together with all sets of the form $\{x\in [\lambda]^{<\kappa}:x\cap \kappa \in \kappa\}$ (i.e., $x\cap\kappa$ is an ordinal, so the ordinals of $x$ below $\kappa$ form an initial segment of $\kappa$).

We can correspondingly define weakly stationary (resp. J-stationary) sets as those that intersect every strongly club (resp. J-club) set. For our purposes, strongly club and weakly stationary sets are the right notions, and we will refer to them from now on without the adverbs.

In this post, we will get a simple criterion for precipitousness. Read the Spencinar #1 or #3 posts for basic information about generic ultrapowers.

Our notation will be that $Z$ is a set and $I$ is an ideal on $Z$. We will use $I^+$ to denote the collection of $I$-positive sets, i.e., the set $P(Z)-I$. We will use $I^\wedge$ to mean the dual filter of $I$, i.e., the collection of sets of the form $Z-A$ for some $A\in I$. The equivalence class of $S\subseteq Z$ in $P(Z)/I$ will be denoted by $[S]$.
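The notation $I$, $I^+$, $I^\wedge$ can be illustrated on a toy finite base set (my own sketch; in the post $Z$ is infinite and $I$ is something like the nonstationary ideal, so this only illustrates the bookkeeping, not the combinatorics). Here $I$ is the principal ideal of subsets of $W=\{0,1\}$ inside $Z=\{0,1,2,3\}$:

```python
from itertools import combinations

Z = frozenset(range(4))

def powerset(X):
    # all subsets of X, as frozensets
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(sorted(X), r)]

# a toy principal ideal on Z: all subsets of W = {0, 1}
W = frozenset({0, 1})
I = {A for A in powerset(Z) if A <= W}
I_plus = [A for A in powerset(Z) if A not in I]   # the I-positive sets I^+
I_dual = {Z - A for A in I}                       # the dual filter I^wedge

assert all(B in I for A in I for B in powerset(A))   # downward closed
assert all((A | B) in I for A in I for B in I)       # closed under unions
assert all((Z - W) <= A for A in I_dual)             # dual-filter sets contain Z - W
assert all(A & (Z - W) for A in I_plus)              # positive sets meet Z - W
```

The last two assertions are the point: the dual filter consists of the "almost everything" sets, and the positive sets are exactly those that are not small, i.e., that meet every set in the dual filter.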

Definition: A tree of maximal antichains $\mathcal{A}$ for $P(Z)/I$ below $S$ is an underlying tree $T\subseteq \mathrm{ON}^{<\omega}$ labeled with $\langle A_s: s\in T\rangle$ elements of $P(Z)$ such that

1. $A_\emptyset = S$,
2. The levels of the tree form increasingly refined maximal antichains of $P(Z)/I$ restricted to $S$, i.e., for each $s\in T$, the set $\{[A_{s\ast \alpha}]:s\ast\alpha\in T\}$ is a maximal antichain below $A_s$ ($\ast$ denotes concatenation). Further, let us assume that $A_{s\ast \alpha}\subseteq A_s$. Denote the levels by $\mathcal{A}_n$, $n<\omega$.

Lemma 1: (FMS p. 31, originally Jech--Prikry) $I$ is precipitous iff for every $S\in I^+$ and every tree $\mathcal{A}$ of maximal antichains for $P(Z)/I$ restricted to $S$, there is a branch $f$ of the tree so that $\bigcap_{n<\omega} A_{f\upharpoonright n}$ is nonempty.

Proof: If $I$ is precipitous, and $\langle A_s: s\in T$ is a tree of maximal antichains below some $S\in P(Z)$, then consider the generic ultrapower (see older posts for references) by $P(Z)/I$ restricted to $[S]$. This is a map
$$j:V\rightarrow M\subseteq V[G]$$
defined in the forcing extension by $G$ which is generic for $P(Z)/I$ restricted to $[S]$. We will think of $G$ as a $V$-ultrafilter extending the dual filter of $I$.

Let's review the construction of this ultrapower. The elements are represented by functions $F: Z\rightarrow V$ in $V$, and we have Łoś's theorem, which says that for a formula $\varphi$ in the language of set theory and $F_0,\ldots,F_n$, $M\vDash \varphi([F_0],\ldots,[F_n])$ if and only if $\{z:V\vDash \varphi(F_0(z),\ldots,F_n(z))\}\in G$.
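Łoś's theorem can be sketched in the one degenerate case a program can handle: a principal ultrafilter on a finite index set (my own toy; the names `in_U`, `los_holds`, `F1`, `F2` are invented, and of course a generic ultrafilter is never principal — for a principal ultrafilter the ultrapower is isomorphic to $V$ itself):

```python
Z = range(5)
z0 = 3                            # the ultrafilter's concentration point

def in_U(A):
    # the principal ultrafilter U = {A ⊆ Z : z0 ∈ A}
    return z0 in A

F1 = {z: z * z for z in Z}        # two "functions Z -> V"
F2 = {z: z + 1 for z in Z}

def los_holds(phi, *funcs):
    # Los: phi holds of the classes [F_i] iff {z : phi(F_0(z), ...)} ∈ U
    return in_U({z for z in Z if phi(*(f[z] for f in funcs))})

# for a principal ultrafilter, Los's theorem reduces to evaluation at z0:
assert los_holds(lambda a, b: a > b, F1, F2) == (F1[z0] > F2[z0])
assert los_holds(lambda a: a % 2 == 1, F1) == (F1[z0] % 2 == 1)
```

The interesting case in the proof is precisely the non-principal one, where "$\varphi$ holds $U$-often" is no longer evaluation at a single point.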

In $M$, let $\mathcal{B}=j(\mathcal{A})$ and $i$ be the element represented in the ultrapower by the identity function on $Z$.

For each $n<\omega$, there must be a unique $A_{s_n}\in \mathcal{A}_n$ so that $i\in j(A_{s_n})$: this $A_{s_n}$ is the one in $G$ (such an $A_{s_n}$ must exist by a basic density argument using the fact that $\mathcal{A}_n$ is a maximal antichain, and it must be unique). The sequence $\langle j(A_{s_n}):n<\omega\rangle$ can be computed in $V[G]$ and is a branch through $\mathcal{B}$. Then there must be a branch in $M$ through the subtree of $\mathcal{B}$ consisting of those nodes which are labeled by a set containing $i$ as an element. (Otherwise, in $M$ there would be a rank function $f$ on this subtree taking ordinal values such that $f(B_t)<f(B_s)$ whenever $t$ properly extends $s$; this $f$ would also be in $V[G]$, so there couldn't be a branch in $V[G]$, a contradiction. This is an important pattern of argument that recurs in many different places in set theory.)

Now this branch is a branch through $\mathcal{B}$ with nonempty intersection (containing $i$, for one) so by elementarity there is such a branch through $\mathcal{A}$.

Conversely, assume that $I$ is not precipitous. Then there is $S\subseteq Z$ such that $[S]$ forces that the generic ultrapower is ill-founded. By maximality of the forcing language, we can find names $\dot{F}_n$ such that

1. $[S]\Vdash \dot{F}_n:Z\rightarrow V$,
2. $[S]\Vdash \dot{F}_n\in V$,
3. $[S]\Vdash M \vDash [\dot{F}_{n+1}]\in [\dot{F}_n]$.

We can build a tree of maximal antichains $\mathcal{A}$ below $[S]$ such that

• a condition $A_s$ on the $n$th level decides the value of $\dot{F}_n$, say as $f^s_n$,
• if $s$ is the predecessor of $t$ in the underlying tree, then $f^t_{n+1}(z)\in f^s_n(z)$ for all $z\in A_t$.

If we have a tree of maximal antichains with just the first property, we can throw out sets in $I$ from each node to get the second property.

This tree has no branch with nonempty intersection, otherwise this would give an infinite descending sequence in $V$. $\Box$

Another thing we will need is the notion of an internally approachable (IA) structure. Let $\lambda$ be a sufficiently large regular cardinal and consider the structure $\mathfrak{A}=\langle H(\lambda),\in,\Delta,f_i\rangle$ where the $f_i$ are countably many function symbols and $\Delta$ is a fixed well-order of $H(\lambda)$ needed for certain Skolem hull arguments. An elementary substructure $N\prec \mathfrak{A}$ is internally approachable iff it can be written as $N=\bigcup_{\alpha<\delta}N_\alpha$ for some $\delta$ such that for all $\beta<\delta$, $\langle N_\alpha:\alpha<\beta\rangle\in N$ (this definition is slightly weaker than other common definitions of IA structures). Sometimes such an $N$ will be said to be internally approachable of length $\delta$.

Because of the way these structures are built in layers, an IA structure can do some pretty "meta" computations (as we will see). This enables us to do some inductive constructions, where the IA condition gives a kind of strengthened inductive hypothesis.

First we have a basic lemma about IA structures which is useful in many contexts.

Lemma 2: Let $N\prec \langle H(\lambda),\in,\Delta\rangle$ be IA of length $\delta$ for $\delta<\lambda$. Suppose $a\in y$ for some $y\in N$. Then the Skolem hull of $N\cup\{a\}$ computed in $H(\lambda)$ is IA of length $\delta$.

Proof: Let $\langle N_\alpha:\alpha<\delta\rangle$ witness approachability. For each $\alpha<\delta$, let $N^*_\alpha$ be the closure of $N_\alpha\cup\{a\}$ under all functions $H(\lambda)\rightarrow H(\lambda)$ in $N_{\alpha+1}$. The $N^*_\alpha$ are elementary substructures of $H(\lambda)$ that witness approachability for the Skolem hull of $N\cup\{a\}$: if $\beta<\delta$, then $\langle N^*_\alpha:\alpha<\beta\rangle$ can be computed from $a$ and the sequence $\langle N_\alpha:\alpha\le \beta\rangle$. $\Box$

Next is the main importance of IA structures in this project. It is very typical of applications of approachability:

Lemma 3: Suppose $S\subseteq \rm{IA}$ is a stationary subset of $[H(\lambda)]^{<\mu}$, for some uncountable regular $\mu<\lambda$. Then $S$ remains stationary in $[H(\lambda)^V]^{<\mu}$ after forcing with $\mathbb{P}=\rm{Col}(\mu,<\kappa)$ for any ordinal $\kappa$.

Proof: Let $\dot{F}$ be a name for a function $H(\lambda)^{<\omega}\rightarrow H(\lambda)$ in the extension. Let $\lambda^*$ be a sufficiently large regular cardinal, and take $N\prec \langle H(\lambda^*),\in,\Delta,\mathbb{P},\dot{F},S\rangle$ with $|N|<\mu$ and $N\cap H(\lambda)\in S$.

Then $N\cap H(\lambda)$ is IA of some length $\delta<\mu$, witnessed by $\langle N_\alpha:\alpha<\delta\rangle$. We will define a decreasing sequence $\langle p_\alpha:\alpha<\delta\rangle$ of conditions in $\mathbb{P}$ so that

1. For every $\beta<\delta$, there is $M_\beta\in N$ containing $N_\beta$ and each of the previous $M_\alpha$, $\alpha<\beta$, so that for each finite sequence $x$ from $\bigcup_{\alpha<\beta}M_\alpha$, $p_\beta$ forces that $\dot{F}(x)\in M_\beta$.
2. For every $\beta<\delta$, $\langle p_\alpha:\alpha<\beta\rangle\in N$.

We just pick the $\Delta$-least $p_\beta$ so that there is an $M_\beta$ satisfying the above, and then pick the $\Delta$-least such $M_\beta$. These things all exist in $N$ since $N$ is IA, and we can get (2) since this computation can be done inside $N$.

Now use the closure of $\mathbb{P}$ to find a $p$ extending all of the $p_\alpha$. This $p$ forces that $N\cap H(\lambda)$ is closed under $\dot{F}$, so $S$ remains stationary. $\Box$

## Friday, November 14, 2014

### Spencinar #5: Happy and mad families, part 2

Now we get to Asger's proof.

Definition: If $A$ is an almost disjoint family and $B_0,B_1\subseteq A$, then $B_0$, $B_1$ have uniformly bounded intersection by $k$ if for any $x\in B_0$, $y\in B_1$, $x\cap y\subseteq k$.

Definition: A polarization of $A$ is a countable partition $\{F_n\}_{n\in\omega}$ of $\{(x,y)\in A^2:x\neq y\}$ such that for each $n$, $\mathrm{proj}_0 F_n$ and $\mathrm{proj}_1 F_n$ have uniformly bounded intersection by some $k$.

Here $\mathrm{proj}_0$ and $\mathrm{proj}_1$ are the projections to the left and right coordinates, respectively.
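The "uniformly bounded intersection" condition is a finite check once the sets are truncated. The sketch below (my own illustration with invented names; real members of an almost disjoint family are infinite) verifies it for two small collections:

```python
def uniformly_bounded_by(B0, B1, k):
    # every x in B0 and y in B1 satisfy x ∩ y ⊆ {0, ..., k-1};
    # the sets here are finite stand-ins for members of an a.d. family
    return all(max(x & y, default=-1) < k for x in B0 for y in B1)

B0 = [{0, 1, 10, 20}, {2, 11, 21}]
B1 = [{0, 2, 30, 40}, {1, 31, 41}]
assert uniformly_bounded_by(B0, B1, 3)       # every intersection lies in {0, 1, 2}
assert not uniformly_bounded_by(B0, B1, 1)   # e.g. {0,1,10,20} ∩ {1,31,41} = {1}
```

The point of a polarization is that the single bound $k$ works simultaneously for all pairs from $\mathrm{proj}_0 F_n$ and $\mathrm{proj}_1 F_n$, not just pair by pair.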

The polarization will help us get to a countable situation where we can diagonalize. We will show the following lemmas:

Lemma 1: $\mathbf{\Sigma}^1_1$ almost disjoint families have polarizations.

Lemma 2: If an almost disjoint family has a polarization, then it is not maximal.

This would prove Mathias's result again.

To prove Theorem 2, we can use Lemma 2 plus the following (which we won't prove here):

Lemma 3: $\mathrm{OCA}_\infty$ + every set of reals has the perfect set property implies that every almost disjoint family has a polarization. ($\mathrm{OCA}_\infty$ is a strengthening of the open coloring axiom)

Lemma 4:  $\mathrm{OCA}_\infty$ holds in Solovay's model.

Proof of Lemma 1: Take $T\subseteq [\omega]^{<\omega}\times [\omega]^{<\omega}\times [\omega]^{<\omega}$ a tree such that $p[T]:=\{(x,y)\in ([\omega]^{\omega})^2: \exists t \, (t,x,y)\textrm{ is a branch of }T\}$ is equal to $\{(x,y)\in A^2:x\neq y\}$; such a $T$ exists since that set is analytic (this is a basic descriptive set theory representation of an analytic set).

Consider the set of nodes $(\tau, r,s)$ such that every extension $(\tau',r',s')$ has $r'\cap s'=r\cap s$. Given such $(\tau,r,s)$, let $F=p[T_{(\tau,r,s)}]$. Then $\mathrm{proj}_0 F$ and $\mathrm{proj}_1 F$ have uniformly bounded intersection by $\max(r\cap s)$.

We can perform a countable-length derivative process, iteratively removing such nodes from $T$. This terminates in countably many steps (since there are only countably many nodes), and the tree with no such nodes left cannot have infinite branches, as this would contradict the almost disjointness of $A$. The countably many sets $F=p[T_{(\tau,r,s)}]$ removed along the way therefore cover $\{(x,y)\in A^2:x\neq y\}$, and disjointifying gives the polarization. $\Box$

Proof of Lemma 2: First we make the following observation: if $A$ is an uncountable almost disjoint family and polarized by $\{F_n\}_{n\in \omega}$, then there is a sequence $B_0,B_1,\ldots$ of uncountable subsets of $A$ such that $B_i$ has uniformly bounded intersection with $\bigcup_{j>i} B_j$ and $B_0=\mathrm{proj}_0 F_n$ for some $n$.

To prove the observation, take $n$ such that $\mathrm{proj}_0 F_n$ and $\mathrm{proj}_1 F_n$ are both uncountable. Take $B_0=\mathrm{proj}_0 F_n$. Now $\mathrm{proj}_1 F_n$ is polarized by $\{F_m\cap (\mathrm{proj}_1 F_n)^2\}_{m\neq n}$. So we can iterate this process for $\omega$ steps, giving the observation.

Suppose the $B_i$ are as above. Set $X_i=\bigcup B_i$. Note that the $X_i$ form an almost disjoint family, because of the uniform bound on the intersections of sets from $B_i, B_j$ for any given $i\neq j$.

This process is the first step in an iterative procedure: $A_0=A$, $B_{0,i}=B_i$, $X_{0,i}=X_i$. The iteration will extend transfinitely. At stage $\alpha$, let $A_\alpha$ be $A$ minus the free ideal generated by all of the $X_{\beta,i}$, $\beta<\alpha$. Define $B_{\alpha,i}$ and $X_{\alpha,i}$ in the same way as above. This terminates after countably many stages, since each $B_{\alpha,i}$ is a different $\mathrm{proj}_0 F_n$. Let $\lambda$ be the length of this procedure; the termination condition is that $A_\lambda$ is countable. Now we can use the usual diagonalization to find a set almost disjoint from $A_\lambda$ and each of the $X_{\alpha,i}$. This set will be almost disjoint from every member of $A$, so $A$ cannot be maximal. $\Box$
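The "usual diagonalization" against a countable family can be sketched on finite truncations. The code below (my own illustration; the prefix-coding family and all names are invented for the sketch) builds an almost disjoint family from branches of the binary tree, where $A_r$ is the set of codes of finite prefixes of the branch $r$, and then picks one point from each member while avoiding all earlier members:

```python
def code(bits):
    # code a finite 0/1 string as a natural number, prefix-faithfully
    return int('1' + ''.join(map(str, bits)), 2)

def branch_set(r, depth):
    # finite truncation of A_r = {codes of nonempty finite prefixes of r};
    # distinct branches share only codes of their common prefix, so the
    # (infinite) family {A_r} is almost disjoint
    return {code(r[:n]) for n in range(1, depth + 1)}

depth = 12
branches = [[(i >> j) & 1 for j in range(depth)] for i in range(6)]
family = [branch_set(r, depth) for r in branches]

# diagonalization: from each A_n pick a point outside A_0, ..., A_{n-1};
# this is possible because each intersection A_n ∩ A_i is finite
D = set()
for n, A in enumerate(family):
    D.add(min(A - set().union(set(), *family[:n])))

assert all(D & A for A in family)   # D meets every member of the family
assert len(D) == len(family)        # the chosen points are all distinct
```

In the infinite setting one picks $x_n\in A_n\setminus(A_0\cup\cdots\cup A_{n-1})$ above all previous choices; the resulting set meets each $A_n$ in a finite set, hence is almost disjoint from the whole countable family (the transfinite bookkeeping with the $X_{\alpha,i}$ in the proof above is what reduces the uncountable case to this one).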

## Thursday, November 13, 2014

### Spencinar #5: Happy and mad families, part 1

Yesterday Andrew Marks gave the Spencinar. I was really excited to hear that Andrew would be coming to UCLA, since all of the talks I heard him give in various seminars and meetings inspired me (and everyone else in the audience as well). What makes his talks so special is that he emphasizes things with the most mathematical substance and sprinkles his presentation with insightful remarks, yet manages to stay completely rigorous. Hopefully, these notes reflect some of the energy and clarity he spoke with at the seminar.

Prerequisites: some descriptive set theory (tree representation of analytic sets, Shoenfield absoluteness), and basic knowledge of forcing is helpful.

Andrew was motivated by some problems he was thinking about under the broad program of showing that there are no definable examples of certain kinds of objects whose constructions use the axiom of choice. An almost disjoint family is a collection of infinite subsets of $\omega$ with finite pairwise intersections. A mad family is a maximal such family, which exists by using Zorn's Lemma. The main result of this presentation was:

Theorem 1 (Mathias 1977): There are no $\mathbf{\Sigma}^1_1$ mad families.

Andrew gave two different proofs of Theorem 1: one was the original argument using what is now known as Mathias forcing, together with an absoluteness argument; the other was a recent proof due to Asger Törnquist (which he talked about in the colloquium last year, see these slides). The second proof is both purely elementary and generalizes well to other situations (even those not involving almost disjoint sets). Törnquist used these methods to prove:

Theorem 2 (Törnquist 2014): There are no mad families in Solovay's model.

We will sketch Mathias's proof. You can skip this part if you are not familiar with the method of forcing (but if you are familiar with it, you will find these arguments extremely elegant!).

Definition: Mathias forcing $\mathbb{M}$ is the poset of pairs $(s, X)$ where $s\in[\omega]^{<\omega}$, $X\in[\omega]^{\omega}$ and $\max(s)<\min(X)$. The ordering is defined by $(r,Y)\le (s, X)$ iff $r$ end-extends $s$, $r\setminus s\subseteq X$, and $Y\subseteq X$.
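The ordering is mechanical enough to sketch (my own finite illustration; the infinite side sets are truncated, the names are invented, and I include the standard requirement $Y\subseteq X$ on the infinite parts):

```python
def leq(c2, c1):
    # Mathias ordering on finite stand-ins (infinite parts truncated):
    # (r, Y) <= (s, X) iff r end-extends s, r \ s ⊆ X, and Y ⊆ X
    (r, Y), (s, X) = c2, c1
    return (r[:len(s)] == s and set(r[len(s):]) <= set(X)
            and set(Y) <= set(X))

s, X = [1, 3], set(range(4, 50))
assert leq(([1, 3, 5, 8], set(range(10, 50))), (s, X))   # a valid extension
assert not leq(([1, 4], set(range(5, 50))), (s, X))      # [1,4] doesn't end-extend [1,3]
```

So extending a condition means growing the finite stem into the reservoir $X$ while shrinking the reservoir, exactly as in Prikry forcing.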

So this is like a version of Prikry's forcing at $\omega$. It satisfies some similar properties which we will take as black boxes (proofs may be given in next week's seminar):

Lemma:
1. (Mathias property) If $G$ is Mathias generic (thought of as a member of $[\omega]^\omega$), and $G'\in [\omega]^\omega$ is a subset of $G$, then $G'$ is also generic for $\mathbb{M}$.
2. (Prikry property) If $\varphi$ is a formula in the forcing language and $(s,X)$ is a condition, then there is $Y\subseteq X$ infinite such that $(s,Y)\Vdash \varphi$ or $(s,Y)\Vdash \neg\varphi$ (we say that $(s,X)$ decides $\varphi$).
As a digression, this forcing can be used to show that

Proposition: Every analytic set is Ramsey. ($A\subseteq [\omega]^{\omega}$ is Ramsey iff there is an $X\in [\omega]^{\omega}$ so that for all infinite subsets $Y\subseteq X$, $Y\in A$ iff $X\in A$. In other words, either all infinite subsets of $X$ are in $A$, or all of them are not in $A$.)

Proof: If $A\subseteq [\omega]^{\omega}$ is analytic, then there is $X\in [\omega]^{\omega}$ such that $(\emptyset, X)$ decides $\dot{G}\in A$. Note that the $A$ appearing in the forcing language is shorthand for the analytic definition of $A$ (which we evaluate in the extension as $A^{V[G]}$); it is not a name for the ground model set $A$. Let $G$ be the generic obtained by forcing with $\mathbb{M}$ below $(\emptyset, X)$. Then any infinite subset $G'$ of $G$ is still $\mathbb{M}$-generic containing $(\emptyset, X)$, so the question of whether $G'\in A^{V[G]}$ is decided the same way as for $G$ (this uses Shoenfield absoluteness between $V[G']$ and $V[G]$). So $A^{V[G]}$ is Ramsey in $V[G]$. By absoluteness, $A$ must be Ramsey in the ground model. $\Box$

Mathias's proof uses a variant of Mathias forcing. We'll need some definitions first.

Definitions:
• A free ideal on $\omega$ is an ideal on $\omega$ containing all finite sets. A free filter is the dual of a free ideal.
• A free family is the complement of a free ideal $I$ in $P(\omega)$.
Informally, we think of a free ideal as the collection of "measure zero" sets, its dual filter as the collection of "full measure" sets, and the corresponding free family as the collection of "positive measure" sets.

The main definition of this part is of a kind of free family where a certain kind of diagonal intersection can be taken:

Definition:
A happy family is a free family $\mathcal{H}$ such that whenever $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ so that every finite intersection of the $X_s$ is in $\mathcal{H}$, then there is an $X\in\mathcal{H}$ diagonalizing the $X_s$: that is, for every $s\in[\omega]^{<\omega}$,
• if $\max(s)\in X$, then $\{n\in X:n>\max(s)\}\subseteq X_s$.
Proposition: $[\omega]^\omega$ is happy.

Proof: A diagonal argument. Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections infinite. Pick the first element to be any member of $X_\emptyset$. At stage $i+1$, pick the next element to be greater than all previously chosen elements and a member of the intersection of all $X_s$ with $\max(s)$ less than the element chosen at stage $i$ (this is possible since that intersection is infinite). The set constructed this way diagonalizes the family. $\Box$
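To see the construction in action, here is a small sketch in Python. The family below is my own toy example (not from the talk): membership in $X_s$ depends only on $\max(s)$, so the finitely many constraints at each stage collapse to a single lcm and no search is needed.

```python
from math import lcm

# Toy family: X_s = multiples of max(s) + 2.  All finite intersections
# are the multiples of an lcm, hence infinite.
def in_X(s_max, n):
    return n % (s_max + 2) == 0

def diagonalize(steps):
    """Run the diagonal construction: each new element exceeds all
    previous ones and lies in every X_s with max(s) <= the last element
    chosen (a harmless strengthening of "less than" from the proof)."""
    picked = []
    for _ in range(steps):
        if not picked:
            picked.append(0)                      # any element of X_emptyset
            continue
        L = lcm(*range(2, picked[-1] + 3))        # constraints with max(s) <= picked[-1]
        picked.append((picked[-1] // L + 1) * L)  # least multiple of L above picked[-1]
    return picked

xs = diagonalize(4)
print(xs)  # [0, 2, 12, 360360]
# diagonalization property: if max(s) is in the set, later elements lie in X_s
assert all(in_X(m, later) for i, m in enumerate(xs) for later in xs[i + 1:])
```

The rapid growth (the fourth element is already $\mathrm{lcm}(2,\ldots,14)=360360$) shows how strongly the diagonal set is constrained.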

Definition: The forcing $\mathbb{M}_\mathcal{H}$ is the subposet of $\mathbb{M}$ of conditions whose second coordinate is in $\mathcal{H}$.

The "happy" property is what is needed so that the proofs of the Lemma on Mathias forcing go through.

Now we relate mad and happy families.

Lemma: If $A$ is a mad family, and $I$ is the free ideal generated by $A$, then $\mathcal{H}=P(\omega)\setminus I$ is a happy family.

Proof. Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections in $\mathcal{H}$. Let $X^0$ be constructed as in the proof of the previous proposition. If $X^0\in \mathcal{H}$, then we are done. Otherwise, there is $Y^0\in A$ such that $Y^0\cap X^0$ is infinite. Let $X^1_s=X_s\setminus Y^0$, let $X^1$ diagonalize the $X^1_s$ as in the proof of the previous proposition, and so on.

At the end of this construction, we have $X^i$ all diagonalizing the collection of $X_s$ (in the sense of the family $[\omega]^\omega$), and so that $X^i\cap Y^i$ is infinite for all $i$. Take $X$ to be the set $\{x_i:i<\omega\}$ (enumerated in increasing order) so that $x_i\in X^{f(i)}\cap Y^{f(i)}\cap \bigcap\{X_s: \max(s)<x_{i-1} \}$ (forgetting the last intersection if $i=0$), where $f:\omega\rightarrow\omega$ is a function so that each $n\in\omega$ has infinite preimage. The point of the function $f$ is to ensure that $X$ has infinite intersection with each $Y^i$, so $X\in\mathcal{H}$. $\Box$

Sketch of the proof of Theorem 1:  If $A$ is a $\mathbf{\Sigma^1_1}$ mad family, then let $\mathcal{H}$ be the associated happy family. Force with $\mathbb{M}_\mathcal{H}$.

Claim: It is forced that $\dot{G}\in \mathcal{H}$ (again, $\mathcal{H}$ used here in the forcing language is the name for the set in the extension with the same definition as $\mathcal{H}$).

Suppose otherwise. Then there is $p=(s,X)$ and a name $\dot{a}$ for a member of $A$ so that $p$ forces that $\dot{G}$ is almost contained in $\dot{a}$, say, above some $n$. But it is forced that $X$ is not almost contained in $\dot{a}$, so we can strengthen $p$ to force some natural number larger than $n$ into $\dot{G}\setminus \dot{a}$, a contradiction.

So if $G$ is generic, any infinite subset $G'\subseteq G$ must also be in $\mathcal{H}^{V[G]}$. So $G$ is almost disjoint from every set in $A^{V[G]}$, hence by absoluteness $A$ is not mad, contradiction. $\Box$

I wasn't following along 100%, so some of the argument I used in the last few proofs might not be optimal (I just pieced them together to fill gaps in my notes).

## Tuesday, November 11, 2014

### Learning project #1: Foreman-Magidor-Shelah "Martin's Maximum, saturated ideals, and nonregular ultrafilters I"

The paper "Martin's Maximum, saturated ideals, and nonregular ultrafilters I" by Foreman-Magidor-Shelah seems to be perhaps the most influential set theory paper in recent times (let's use a loose definition of "recent"). In this paper, they introduce the "maximal" forcing axiom known as Martin's Maximum, and then prove some consistency results from large cardinals about the saturation of "natural" ideals. We will focus on the latter; for example:

1. Forcing with $\rm{Col}(\mu,<\kappa)$ makes the nonstationary ideal restricted to $[\mu]^{<\mu}$ precipitous.

2. Forcing with  $\rm{Col}(\omega_1,<\kappa)$ gives an $\aleph_2$-saturated ideal on $\omega_1$.

3. To make the nonstationary ideal on $\omega_1$ saturated, we must force with something more than the collapse. But this is also possible and can be done with $\kappa$-c.c. semiproper forcing.

All of the conclusions above are also consequences of Martin's Maximum.

Recall the definition and motivation of precipitous ideals from the beginning of a previous post. In that post, we gave the Jech--Magidor--Mitchell--Prikry construction of a precipitous ideal on $\omega_1$. These authors in fact proved that the nonstationary ideal on $\omega_1$ can be precipitous. The method was by an iterated forcing that at each step destroyed the stationarity of sets in the ideal constructed so far.

However, the FMS paper managed to find a different consistency proof of precipitousness of the nonstationary ideal on $\omega_1$. In fact, the proof generalizes to other regular cardinals, unlike the JMMP construction, which uses something special about $\omega_1$ for destroying stationarity (although generalizations of that construction to other cardinals have been carried out in work of Gitik). The drawback is that the FMS construction uses quite large cardinals, whereas Gitik found the equiconsistency.

I plan to at least cover (1.) and (2.) in a forthcoming series of posts. The plan is to prove (3.) sometime in a future learning project dealing with semiproper forcing, RCS iterations, and the other parts of the FMS paper.

The posts will come out slowly because I want to make sure I fully understand each step. The references will be the FMS paper and Section 8 of Foreman's Handbook chapter.

## Thursday, November 6, 2014

### Spencinar #4: Weakenings of Martin's Axiom

Spencer talked today about weakenings of Martin's Axiom of the form "All c.c.c. posets have a nice property." The nice properties were things like "$\omega_1$-Knaster" or "$\sigma$-centered." Actually, Todorcevic--Velickovic proved that $MA_{\aleph_1}$ is equivalent to "all c.c.c. posets of size $\aleph_1$ are $\sigma$-centered". We saw how Martin's Axiom implies these other principles.

The main thing Spencer talked about was that "All c.c.c. posets $\mathbb{P}$ have $\mathbb{P}\times\mathbb{P}$ c.c.c." implies that the continuum hypothesis fails. To prove this, he used CH to construct an entangled linear order, and showed that the poset of chains of this entangled linear order is c.c.c. but its square is not. The proof was very involved, and I'll try to write up some more about this when I have some time to think about the main underlying ideas. I remember reading about entangled linear orders from Shelah's book on cardinal arithmetic, but not being very motivated by it. This talk showed me the power of these linear orders when proving that the poset of chains is c.c.c.

### Yuval Peres DLS

This week, Yuval Peres gave a Distinguished Lecture Series, which is a series of three talks for a general mathematical audience by a prominent mathematician visiting UCLA. I felt really compelled to attend these talks, since Peres seems to be working on very fundamental problems that have some interest for me. I read some of his work when I was in high school, and I was pleased to finally attend some of his lectures!

This blog post deals with the talks on Tuesday and Wednesday. I won't give a comprehensive account, because my comprehension wasn't 100%.

The first talk was about Search Games and Optimal Kakeya sets, based on joint work with Babichenko, Peretz, Sousi, and Winkler. He defined a game $G_n$ with two players "Hunter" and "Rabbit" on a cycle with $n$ vertices. At time $0$, each player chooses a position on the cycle. Then for each time step, the hunter moves to an adjacent position (or stays at the same position). The rabbit simultaneously moves to any position on the cycle. The players cannot see each other (Peres said to imagine the game is played in the dark). The game ends when Hunter and Rabbit occupy the same location.

The punchline was that the optimal winning strategies for this game can be used to construct an "optimal" Kakeya set. This Kakeya set is the union of a certain number of triangles, and has the smallest area among all such Kakeya sets with the same number of triangles. A Kakeya set is just a subset of the plane that has a unit segment in every direction. The first construction of a Kakeya set was by Besicovitch, and these sets have some importance in harmonic analysis.

But I was really fascinated by this simple game for its own interest.

For technical reasons, it is easier to consider the game $G^*_n$, which runs for $n$ timesteps, with the Hunter and Rabbit moving as above. The payoff for Hunter is 1 if he is successful, 0 otherwise; to make it a zero-sum game, the payoff for Rabbit is -1 if Hunter is successful, 0 otherwise. Von Neumann's minimax theorem asserts the existence of "optimal" strategies for both players in such games. A strategy is a rule which tells a player what move to make, each move assigned a probability depending on the situation (previous moves and information known to the player). Then von Neumann's theorem (as I understood it from the talk) says that there is a value $v$ such that $v$ is both:

• the maximum possible expected payoff for any of I's strategies, assuming that II plays optimally against that particular strategy (so the max over all strategies of I of (the min over all strategies of II of the expected payoff of running the strategies against each other))
• the minimum possible expected payoff for I for any of II's strategies, assuming that I plays optimally against that particular strategy (so $-v$ has the same role for II as $v$ had for I above).

I tried to say it in a way which I think can be easily made precise. It's very nice that you can formulate this notion of optimality, and prove that optimal strategies exist. It seems like this definition of optimality can often involve being as unpredictable as possible, because if the opponent knows your strategy, he can use that information to predict your moves and play against that. So, barring situations where for example there is a sequence of moves with no counterplay, it seems to be greatly beneficial to move around randomly.
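To make the minimax statement concrete, here is a toy check (my own example, not from the talk): in the one-shot game on a 3-cycle where both players pick a vertex and Hunter scores 1 on a collision, the uniform mixed strategy certifies the value from both sides.

```python
from fractions import Fraction

# One-shot toy game on C_3: Hunter and Rabbit each pick a vertex;
# Hunter's payoff is 1 on a collision, 0 otherwise.
n = 3
payoff = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

uniform = [Fraction(1, n)] * n

# Payoff the Hunter guarantees playing uniformly (worst case over Rabbit's pure replies):
lower = min(sum(uniform[i] * payoff[i][j] for i in range(n)) for j in range(n))
# Payoff the Rabbit concedes playing uniformly (best case over Hunter's pure replies):
upper = max(sum(uniform[j] * payoff[i][j] for j in range(n)) for i in range(n))

# The two bounds meet, so the value of this game is 1/3.
print(lower, upper)  # 1/3 1/3
```

When the guarantee of one player's strategy meets the concession of the other's, both strategies are optimal and the common number is the value $v$; being maximally unpredictable (uniform) is exactly what achieves it here.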

What's the connection between $G_n$ and $G^*_n$? Peres answered this in his next slides: applying the minimax theorem to our situation, there is a value $p_n$ for the game $G^*_n$, which can be interpreted as the probability of capture under optimal play. Now we argue that the mean capture time in $G_n$ under optimal play is between $n/p_n$ and $2n/p_n$. The rabbit can just play his $G^*_n$ strategy for every block of time $n$, so he can ensure that the mean capture time is $\ge n/p_n$ (using linearity of expectation). The hunter can play his $G^*_n$ strategy for every other block of time $n$, using the odd blocks to move to the starting position for his next strategy. This ensures a mean capture time $\le 2n/p_n$ (again using linearity of expectation).
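The two bounds come from a geometric series; this is my reconstruction of the computation, not what was on the slides. If the rabbit restarts an optimal $G^*_n$ strategy at the start of each block of $n$ timesteps, the capture probability within each block is at most $p_n$ no matter what happened before. Writing $B$ for the index of the block in which capture occurs and $T$ for the capture time,

$$\mathbb{E}[B]=\sum_{k\ge 0}\Pr[B>k]\ \ge\ \sum_{k\ge 0}(1-p_n)^k=\frac{1}{p_n},\qquad \mathbb{E}[T]\ \ge\ n\,(\mathbb{E}[B]-1)\ \ge\ n\Big(\frac{1}{p_n}-1\Big)\approx\frac{n}{p_n}.$$

For the hunter, each even-numbered block succeeds with probability at least $p_n$, so the number of two-block rounds is dominated by a geometric variable with mean $1/p_n$, and each round costs $2n$ steps:

$$\mathbb{E}[T]\ \le\ 2n\cdot\frac{1}{p_n}=\frac{2n}{p_n}.$$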

He then went over some sample strategies for the players, involving staying still, moving in some direction at constant speed, etc. It was amazing because he involved the audience, asking them to try to figure out different strategies. I forgot exactly what $p_n$ turned out to be: I think it was something like $n^{-1/2}$? But the optimal strategy for the hunter turned out to be going at a random speed, like I suspected. The rabbit's optimal strategy actually involved not moving around too much: I stopped taking notes here because I was struggling to keep up with his arguments at this point. Someone in the audience gave an intuitive explanation of this, but I must admit I wasn't sure about this explanation either.

I didn't know the minimax theorem before, so this talk was very eye-opening. I wonder how the proof of that theorem goes, since evidently it doesn't construct the optimal strategies (which seem to be found from methods special to the game at hand). If the argument is nice, I'll try to write a blog post about it. I wonder if these probabilistic strategies can be somehow generalized to the infinite situations that set theorists are interested in (probably just nonsense, I know!)
--------------------

The DLS on Wednesday was about random walks on groups (which seems to just be a random walk on the Cayley graph of a group, each generator has different weights). It was a bit hard to follow for me, but he proved some nice results on the blackboard bounding the expected distance in the graph between the starting point and the $n$th step of the walk (this is the rate of escape). It reminded me a lot of IST 1 class from Caltech, since a lot of inequalities I learned there were used in the talk. Unfortunately, I won't post my notes about this talk.

## Saturday, November 1, 2014

### UCLA Logic Colloquium: Elementary amenable groups, chain conditions, and descriptive set theory (Jay Williams)

The second talk of the Friday doubleheader was Jay Williams's colloquium talk about chain conditions in group theory, which was based on joint work with Philip Wesolek.

Jay and his coauthor proved that the class $\mathcal{M}_C$ of groups that satisfy the minimal condition on centralizers is coanalytic but not Borel. These definability notions are relative to the space of countable groups, which can be realized as the set of normal subgroups of $\mathbb{F}_\omega$, the free group on countably many generators, equipped with the subspace topology from $2^{\mathbb{F}_\omega}$.

A group satisfies the minimal condition on centralizers iff there is no infinite strictly decreasing sequence of subgroups, each of which can be realized as the centralizer of some subset of the group.

Given a countable group $G$, enumerate the elements and form the tree whose nodes correspond to finite sequences of elements of $G$. Identify a node with its parent if they have the same centralizer, giving a new tree $T_G$. This gives a map $G\mapsto T_G$ from the space of countable groups to the set of trees so that $G\in \mathcal{M}_C$ iff $T_G\in \mathrm{WF}$, where $\mathrm{WF}$ is the set of well-founded trees. Therefore, $\mathcal{M}_C$ is coanalytic. To show it is not Borel, the Boundedness Theorem is invoked: the trees in a Borel subset of $\mathrm{WF}$ must have bounded ranks. For this, constructions were given to find a group $B\in \mathcal{M}_C$ with larger rank than an arbitrary group $A\in \mathcal{M}_C$ (take its product with a nonabelian group) and to find a $B\in \mathcal{M}_C$ whose rank is larger than that of an arbitrary countable collection of groups in $\mathcal{M}_C$ (take the free product).
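As a finite toy version of the rank computation (my own illustration, not from the talk; for a finite group the minimal condition holds trivially, but the longest strictly decreasing chain of centralizers plays the role of the rank of $T_G$), one can compute in $S_3$:

```python
from itertools import permutations, combinations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))   # the symmetric group S_3

def centralizer(S):
    return frozenset(g for g in G
                     if all(compose(g, s) == compose(s, g) for s in S))

# every subgroup of S_3 realizable as the centralizer of a subset
cents = {centralizer(S) for r in range(len(G) + 1)
         for S in combinations(G, r)}

def longest_chain(top):
    """Length of the longest strictly decreasing chain of centralizers
    starting at `top` (`c < top` is proper-subset comparison)."""
    return 1 + max((longest_chain(c) for c in cents if c < top), default=0)

print(longest_chain(centralizer([])))  # 3: S_3 > C((0 1)) > {id}
```

The chain $S_3\supsetneq C((0\;1))\supsetneq \{\mathrm{id}\}$ realizes the maximum; the constructions in the talk are what push this finite quantity past any fixed countable ordinal.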

Another result in this vein is that $\mathcal{M}_C$ restricted to finitely generated groups is coanalytic but not Borel. This uses a construction of Karrass and Solitar (reading about their friendship on the MacTutor archive is really inspiring!)

The maximal condition on normal subgroups (max-n) is a property of groups that holds iff there is no infinite strictly increasing chain of normal subgroups of $G$. Hall proved that a metabelian group (an abelian extension of an abelian group) satisfies max-n iff it is finitely generated, so max-n is a natural condition in a group theory sense. This class is also coanalytic but not Borel, using similar constructions together with a theorem of Olshanskii.

The second half of the talk was to show that a special class of amenable groups, the elementary amenable groups (EG) is not equal to the class of all amenable groups. The elementary amenable groups are constructed from abelian and finite groups by taking closure under countable unions and abelian extensions. To do this, they show that the class EG is coanalytic but not Borel. The method is similar to above, where the construction of the tree associated to $G$ goes roughly by taking commutator subgroups (this handles increasing unions) intersected with all subgroups of a certain index at each level (this handles the abelian extensions). The construction for unbounded rank depends on a lemma due to Neumann--Neumann and Hall.

Jay noted that Grigorchuk gave an explicit example of a finitely generated amenable group that is not EG. The method discussed in the talk gives an elementary, but nonconstructive, proof of this fact.

### UCLA Logic Seminar: Baire measurable paradoxical decompositions via matchings (Andrew Marks)

Today's logic seminar was by Andrew Marks, entitled Baire measurable paradoxical decompositions via matchings. The work, joint with Spencer Unger, showed the following theorem:

Main Result: Suppose a group $\Gamma$ acts on a Polish space $X$ by Borel automorphisms (i.e., $x\mapsto \gamma\cdot x$ is Borel for every $\gamma\in \Gamma$). Then if this action has a paradoxical decomposition, it has one where the pieces have the Baire property.

If $a$ is a group action of $\Gamma$ on a space $X$, a paradoxical decomposition of $a$ is a partition of $X$ into finitely many pieces $\{A_1,\ldots, A_n, B_1,\ldots, B_m\}$ such that there exists $\alpha_1,\ldots, \alpha_n, \beta_1,\ldots,\beta_m\in \Gamma$ so that $\alpha_1A_1,\ldots, \alpha_nA_n$ and $\beta_1B_1,\ldots,\beta_mB_m$ each partition $X$.

As an example, the Banach-Tarski paradox corresponds to a paradoxical decomposition of the action of the group of rotations on the unit ball in $\mathbb{R}^3$ (minus the origin, for technical reasons).

The amenable groups, those which have a finitely additive translation invariant probability measure, cannot have paradoxical decompositions of their free actions. Tarski showed that the left translation action of a nonamenable group on itself admits a paradoxical decomposition.
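The canonical example is the free group $F_2$: the classical four-piece decomposition (ignoring the standard adjustment needed for the identity element) can be checked mechanically on words of bounded length. The sketch below is my own illustration, verifying $F_2 = W(a)\sqcup a\cdot W(a^{-1})$ on a finite truncation, where $W(x)$ is the set of reduced words starting with $x$.

```python
def reduced_words(n):
    """All reduced words of length <= n in a, b and their inverses
    (encoded as characters: 'A' stands for a^-1, 'B' for b^-1)."""
    inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    words, frontier = [''], ['']
    for _ in range(n):
        frontier = [w + g for w in frontier for g in 'aAbB'
                    if not w or inv[w[-1]] != g]
        words += frontier
    return set(words)

def mult_a(w):
    """Left-multiply the reduced word w by the generator a."""
    return w[1:] if w.startswith('A') else 'a' + w

N = 6
F = reduced_words(N)
Wa = {w for w in F if w.startswith('a')}    # W(a)
Wai = {w for w in F if w.startswith('A')}   # W(a^-1)

# F_2 = W(a) disjoint-union a.W(a^-1), checked on all words of length <= N-1
small = reduced_words(N - 1)
assert {mult_a(w) for w in Wai} == small - Wa
```

By symmetry the same holds with $b$ in place of $a$, and the four sets $W(a), W(a^{-1}), W(b), W(b^{-1})$ give the paradoxical pieces.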

Marczewski asked in 1930 whether there is a Banach-Tarski decomposition using pieces with the Baire Property, which was answered affirmatively by Dougherty-Foreman. Andrew's talk extended this result (with different proof methods) to all nonamenable groups.

Someone in the audience asked whether the Baire decomposition can always use the minimum number of pieces of a paradoxical decomposition. Andrew stated a result of Mycielski that the answer is no, but the present work gives a bound for the number of Baire pieces needed (something like $n^2$, where $n$ is the least number of pieces in any paradoxical decomposition).

To do this, they constructed a graph whose perfect matchings exactly correspond to paradoxical decompositions. Suppose $a$ is an action of a group $\Gamma$ on $X$. Given a finite set $S\subseteq \Gamma$ closed under inverses, form the graph $G_p(a,S)$ with vertex set $\{0,1,2\}\times X$ (three disjoint copies of $X$), where $(i,x)\sim (j,y)$ iff exactly one of $i,j$ is equal to 0 and there is some $s\in S$ such that $sx=y$. The fact that $S$ is closed under inverses makes this relation symmetric.

Then he recalled Hall's theorem: a locally finite bipartite graph has a perfect matching iff for every finite set $F$ on either side of the bipartition, $|N(F)|\ge|F|$, where $N(F)$ is the set of vertices adjacent to a vertex in $F$.

The condition that Hall's theorem shows is equivalent to the existence of a perfect matching is called the Hall condition.
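In the finite setting both the condition and the matching are easy to compute; here is a quick sketch (the graph is a toy example of my own choosing):

```python
from itertools import combinations

def max_matching(adj):
    """Maximum matching via augmenting paths; adj maps each left
    vertex to a list of its right neighbours."""
    match = {}                        # right vertex -> matched left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in adj:
        augment(u, set())
    return match

def hall_condition(adj):
    """Brute-force check of |N(F)| >= |F| for every set F of left vertices."""
    left = list(adj)
    return all(len({v for u in F for v in adj[u]}) >= k
               for k in range(1, len(left) + 1)
               for F in combinations(left, k))

adj = {0: [0, 1], 1: [1, 2], 2: [0, 2], 3: [2, 3]}   # toy bipartite graph
assert hall_condition(adj)
assert len(max_matching(adj)) == len(adj)   # the matching saturates the left side
```

The point of the Borel results is precisely that this greedy/augmenting argument has no definable analogue in general, which is why the strengthened $(1+\epsilon)$ condition below is needed.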

Laczkovich showed that there is no Borel version of Hall's theorem, and Conley--Kechris found more examples. I had encountered these results earlier: around 2009 during the time of the Conley--Kechris results, I had finished working on a project in (finite) matching theory and became interested in "descriptive" combinatorics, and Kechris shared with me these two papers that gave negative answers to the natural Borel questions for matchings.

But it turns out that if you require a Borel perfect matching only on a Borel $G$-invariant (i.e., a union of connected components) comeager set, then it can be done, provided the Hall condition is suitably strengthened.

Theorem: Suppose $G$ is a locally finite bipartite Borel graph, and there is $\epsilon>0$ such that for every finite $F$ in one part of this bipartition,
$$|N(F)|\ge (1+\epsilon)|F|$$
then $G$ has a Borel perfect matching on a Borel $G$-invariant comeager set.

In response to a question, Andrew noted that this means "isoperimetric constant is greater than 1". The theorem was proven using nice elementary arguments.

To apply it to the graphs of the form $G_p(a,S)$, for $S$ a finite set of generators witnessing the paradoxical decomposition and closed under inverses, notice that changing $S$ to $S^2$ results in $|N_{G_p(a,S^2)}(F)|\ge 2|F|$. We may assume $F=\{1,2\}\times F'$ (the other case is trivial). Now if $N_{G_p(a,S)}(\{1,2\}\times F')=\{0\}\times F''$, then
$$|N_{G_p(a,S^2)}(F)|\ge |N_{G_p(a,S)}(\{1,2\}\times F'')|$$
$$\ge|\{1,2\}\times F''|$$
$$= 2|F''|\ge 2|F|$$
where the second inequality holds since $G_p(a,S)$ satisfies the Hall condition, and the last holds since $|F''|=|N_{G_p(a,S)}(F)|\ge |F|$, again by the Hall condition. This gives the strong Hall condition needed to apply the theorem (with $\epsilon=1$).