Prerequisites: some descriptive set theory (tree representation of analytic sets, Shoenfield absoluteness), and basic knowledge of forcing is helpful.
Andrew was motivated by some problems he was thinking about under the broad program of showing that there are no definable examples of certain kinds of objects whose usual constructions use the axiom of choice. An almost disjoint family is a collection of infinite subsets of $\omega$ with finite pairwise intersections. A mad family is a maximal such family (following the usual convention, we require mad families to be infinite, to rule out trivial examples such as a finite partition of $\omega$ into infinite pieces); these exist by Zorn's Lemma. The main result of this presentation was:
Theorem 1 (Mathias 1977): There are no $\mathbf{\Sigma^1_1}$ mad families.
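Note that definability is no obstruction to having large almost disjoint families: for instance, identifying $\omega$ with the set $2^{<\omega}$ of finite binary strings, the family of branches $\{\,\{x\upharpoonright n: n\in\omega\}: x\in 2^{\omega}\,\}$ is a closed almost disjoint family of size continuum (two distinct branches agree only below the first coordinate where they differ, so their intersection is finite). It is maximality that cannot be achieved in a $\mathbf{\Sigma^1_1}$ way.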
Andrew gave two different proofs of Theorem 1: one is the original argument using what is now known as Mathias forcing together with an absoluteness argument; the other is a recent proof due to Asger Törnquist (which he talked about in the colloquium last year; see these slides). The second proof is purely elementary and generalizes well to other situations (even those not involving almost disjoint sets). Törnquist used these methods to prove:
Theorem 2 (Törnquist 2014): There are no mad families in Solovay's model.
We will sketch Mathias's proof. You can skip this part if you are not familiar with the method of forcing (but if you are familiar with it, you will find these arguments extremely elegant!).
Definition: Mathias forcing $\mathbb{M}$ is the poset of pairs $(s, X)$ where $s\in[\omega]^{<\omega}$, $X\in[\omega]^{\omega}$ and $\max(s)<\min(X)$. The ordering is defined by $(r,Y)\le (s, X)$ iff $r$ end-extends $s$, $r\setminus s\subseteq X$, and $Y\subseteq X$.
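For instance, taking $X$ to be the set of odd numbers greater than $3$, the pair $(\{1,3\},X)$ is a condition, and $(\{1,3,5,9\},\{n\in X:n>9\})$ extends it: the stem has grown by finitely many elements of $X$, and the infinite side set has shrunk inside $X$. Intuitively, the stem $s$ is the finite part of the generic real that has already been decided, while $X$ constrains which numbers may be added to it later.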
So this is like a version of Prikry's forcing at $\omega$. It satisfies some similar properties which we will take as black boxes (proofs may be given in next week's seminar):
Lemma:
- (Mathias property) If $G$ is Mathias generic (thought of as a member of $[\omega]^\omega$; see the remark after the lemma), and $G'\in [\omega]^\omega$ is a subset of $G$, then $G'$ is also generic for $\mathbb{M}$.
- (Prikry property) If $\varphi$ is a formula in the forcing language and $(s,X)$ is a condition, then there is $Y\subseteq X$ infinite such that $(s,Y)\Vdash \varphi$ or $(s,Y)\Vdash \neg\varphi$ (we say that $(s,Y)$ decides $\varphi$).
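Here "$G$ thought of as a member of $[\omega]^\omega$" refers to the Mathias real $\bigcup\{s: (s,X)\in G\text{ for some }X\}$, the union of the stems of conditions in the generic filter; the Mathias property says that every infinite subset of this real is again the Mathias real of some generic filter.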
As a digression, this forcing can be used to show the following:
Proposition: Every analytic set is Ramsey. ($A\subseteq [\omega]^{\omega}$ is Ramsey iff there is an $X\in [\omega]^{\omega}$ so that for all infinite subsets $Y\subseteq X$, $Y\in A$ iff $X\in A$. In other words, either all infinite subsets of $X$ are in $A$, or none of them are.)
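To illustrate the definition: the clopen set $A=\{Y\in[\omega]^{\omega}:\min(Y)\text{ is even}\}$ is Ramsey, as witnessed by $X=$ the set of even numbers, since every infinite subset of $X$ has even minimum and hence belongs to $A$.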
Proof: If $A\subseteq [\omega]^{\omega}$ is analytic, then there is $X\in [\omega]^{\omega}$ such that $(\emptyset, X)$ decides $\dot{G}\in A$. Note that $A$ used in the forcing language is really shorthand for the analytic definition of $A$, which we evaluate in the extension as $A^{V[G]}$; it is not a name for the ground model set $A$. Let $G$ be the generic obtained by forcing with $\mathbb{M}$ below $(\emptyset, X)$. Then any infinite subset $G'$ of $G$ is still $\mathbb{M}$-generic with $(\emptyset, X)$ in its filter, so the question of whether $G'\in A^{V[G]}$ is decided the same way as the question for $G$ (this uses Shoenfield absoluteness between $V[G']$ and $V[G]$). So $G$ witnesses that $A^{V[G]}$ is Ramsey in $V[G]$, and by absoluteness $A$ must be Ramsey in the ground model. $\Box$
Mathias's proof uses a variant of Mathias forcing. We'll need some definitions first.
Definitions:
- A free ideal on $\omega$ is a proper ideal on $\omega$ containing all finite sets. A free filter is the dual of a free ideal.
- A free family is the complement of a free ideal $I$ in $P(\omega)$.
Informally, we think of a free ideal as the collection of "measure zero" sets, its dual filter as the collection of "full measure" sets, and the corresponding free family as the collection of "positive measure" sets.
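The simplest example is the ideal of finite subsets of $\omega$: its dual is the Fréchet filter of cofinite sets, and the corresponding free family is $[\omega]^\omega$ itself. The example relevant below is the ideal generated by a mad family $A$ together with the finite sets; its free family consists of those sets that are not almost covered by finitely many members of $A$.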
The main definition of this part is a kind of free family in which a certain diagonal intersection can always be taken:
Definition:
A happy family is a free family $\mathcal{H}$ with the following property: whenever $\{X_s:s\in [\omega]^{<\omega}\}$ is a collection of infinite subsets of $\omega$ all of whose finite intersections lie in $\mathcal{H}$, there is an $X\in\mathcal{H}$ diagonalizing the $X_s$; that is,
- for every $s\in[\omega]^{<\omega}$ with $\max(s)\in X$, we have $\{n\in X:n>\max(s)\}\subseteq X_s$.
Proposition: $[\omega]^\omega$ is happy.
Proof: A diagonal argument. Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections infinite. Pick the first element to be in $X_\emptyset$. At stage $i+1$, pick the next element to be greater than all previously chosen elements and to lie in the intersection of all $X_s$ with $\max(s)$ at most the element chosen at stage $i$ (this is possible since that intersection is a finite intersection of the $X_s$, hence infinite). The set so constructed diagonalizes the $X_s$; the recursion is spelled out below. $\Box$
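To spell out the recursion: pick $x_0\in X_\emptyset$, and given $x_0<\dots<x_i$, pick $x_{i+1}>x_i$ in $\bigcap\{X_s:\max(s)\le x_i\}$ (a finite intersection, hence infinite). If $\max(s)=x_j\in X$ and $x_k\in X$ with $x_k>x_j$, then $x_k$ was chosen inside $X_s$ (since $\max(s)\le x_{k-1}$), so $X=\{x_i:i<\omega\}$ diagonalizes the family.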
Definition: The forcing $\mathbb{M}_\mathcal{H}$ is the subposet of $\mathbb{M}$ consisting of those conditions whose second coordinate is in $\mathcal{H}$.
The "happy" property is what is needed so that the proofs of the Lemma on Mathias forcing go through.
Now we relate mad and happy families.
Lemma: If $A$ is a mad family, and $I$ is the free ideal generated by $A$, then $\mathcal{H}=P(\omega)\setminus I$ is a happy family.
Proof: Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections in $\mathcal{H}$. Let $X^0$ be constructed as in the proof of the previous proposition. If $X^0\in \mathcal{H}$, then we are done. Otherwise, there is $Y^0\in A$ such that $Y^0\cap X^0$ is infinite. Let $X^1_s=X_s\setminus Y^0$, let $X^1$ diagonalize the $X^1_s$ as in the proof of the previous proposition, and so on.
At the end of this construction we have sets $X^i$, each diagonalizing the collection of the $X_s$ (in the sense of the family $[\omega]^\omega$), such that $X^i\cap Y^i$ is infinite for every $i$. Take $X$ to be the set $\{x_i:i<\omega\}$ (enumerated in increasing order) where $x_i\in X^{f(i)}\cap Y^{f(i)}\cap \bigcap\{X_s: \max(s)\le x_{i-1} \}$ (omitting the last intersection when $i=0$), and $f:\omega\rightarrow\omega$ is a function such that each $n\in\omega$ has infinite preimage. The point of the function $f$ is to ensure that $X$ has infinite intersection with each $Y^i$, so $X\in\mathcal{H}$. $\Box$
Sketch of the proof of Theorem 1: If $A$ is a $\mathbf{\Sigma^1_1}$ mad family, then let $\mathcal{H}$ be the associated happy family. Force with $\mathbb{M}_\mathcal{H}$.
Claim: It is forced that $\dot{G}\in \mathcal{H}$ (again, $\mathcal{H}$ used here in the forcing language is the name for the set in the extension with the same definition as $\mathcal{H}$).
Suppose otherwise. Then there are a condition $p=(s,X)$, a name $\dot{a}$ for a member of $A$, and an $n$ such that $p$ forces that $\dot{G}$ is contained in $\dot{a}$ above $n$. But it is also forced that $X$ is not almost contained in $\dot{a}$, so we can strengthen $p$ to force some natural number larger than $n$ into $\dot{G}\setminus \dot{a}$, a contradiction.
So if $G$ is generic, any infinite subset $G'\subseteq G$ must also be in $\mathcal{H}^{V[G]}$. Hence $G$ is almost disjoint from every set in $A^{V[G]}$: if $G\cap a$ were infinite for some $a\in A^{V[G]}$, then $G\cap a$ would be an infinite subset of $G$ lying in the ideal, hence outside $\mathcal{H}^{V[G]}$. By absoluteness, $A$ is not mad in the ground model, a contradiction. $\Box$
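To make the last absoluteness step explicit: the statement "there is an infinite set almost disjoint from every member of $A$", i.e. $\exists G\,(G\text{ is infinite}\ \wedge\ \forall Y\,(Y\in A\rightarrow Y\cap G\text{ is finite}))$, is $\Sigma^1_2$ in a code for the $\mathbf{\Sigma^1_1}$ set $A$ (the part after $\exists G$ is $\Pi^1_1$), so Shoenfield absoluteness transfers it from $V[G]$ down to $V$, where it says exactly that $A$ is not maximal.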
I wasn't following along 100%, so some of the arguments in the last few proofs might not be optimal (I just pieced them together to fill gaps in my notes).