Tuesday, December 30, 2014

Good and bad terminology in mathematics

Happy holidays! I hope everyone is having a restful vacation. Here's a post which is just for a bit of fun---don't take it too seriously!

A lot of people complain about bad terminology in mathematics. I don't really consider myself to be one of those people, but I have to admit they have a point. The choice of a word we use to represent a concept greatly influences the way we read and perceive it. Here are examples of terminology I like and dislike, with some whimsical alternatives for fun. It turned out to be much harder than I expected to come up with good terminology. Leave a comment if you can think of something to add to these lists.

  • Adjoint: This word seems to only exist in mathematics. It captures a general pattern that appears frequently in different contexts; several formally related concepts share this terminology. Appropriately, it suggests pairing or duality.
  • Club set: Short for "closed and unbounded". This neat portmanteau helps emphasize that the conjunction of the two properties is somehow more than the sum of its parts.
  • Compactness: This is such a carefully nuanced word for a subtle and fundamental property which occurs in many different contexts in mathematics, although I think it can take some getting used to. It really captures the idea of some kind of finitary character, or a strong notion of "smallness".
  • Mouse, weasel: Inner model theory words that give a mischievous flavor to a difficult subject.
  • Mixing: from ergodic theory. It's a gerund, which helps describe this dynamical situation well.
  • Rational numbers: This is a nice play on words. It makes $\mathbb{Q}$ feel very concrete and friendly and is probably derived from the representation of these numbers as ratios of integers. 
  • Supercompact cardinal: sometimes terminology doesn't totally make sense, it just sounds cool. I have to admit that hearing this word early made me curious about set theory.

  • Amenable group: This seems to be a pun that only makes sense with British pronunciations. According to Wikipedia, the original name given by von Neumann was "messbar", or "measurable" in English. Of course, this name is even worse. Alternatives: Maybe something like "well-measured" would be better.
  • Antichain: It's actually a catchy word, if only it didn't have two closely related but nonequivalent meanings. In order theory people use it to mean a collection of pairwise incomparable elements of a poset, while set theorists would say it is a collection of pairwise incompatible elements. Let the order theory people take back the outdated term "Sperner system".
  • Cardinal collapsing: This term from set theory is really convenient and probably no one else in the world has a problem with it. My gripe is that it's somewhat inaccurate and easily replaced by simpler terms. "Collapsing" a cardinal refers to a forcing extension where a particular ordinal is no longer a cardinal, in other words, its cardinality is decreased (as measured by ordinals) in the extension. The word "collapse" is a bit overloaded (it seems better suited to describe things like Mostowski collapse). Alternatives: "decardinalizing", "cardinality decreasing".
  • Commutative diagram: To commute just means to go from one place to another. This got twisted into the property that $ab=ba$; I guess I can kind of see that they are moving past each other. So we can also use this for functions $f,g:X\rightarrow X$ to say $fg=gf$ (a special kind of commutative square). But what about commutative triangles, i.e., functions satisfying $f=gh$? Alternatives: "freely composing", or maybe something made-up and descriptive like "ambicompositional".
  • Countable chain condition (c.c.c.): This one from set theory is unpopular among some people I know, since it's a property of posets that means there are no uncountable antichains (and I don't mean Sperner systems here). I saw one paper (Abraham--Shelah: Forcing closed unbounded sets) which called this the c.a.c., for countable antichain condition, which seems like a reasonable alternative. I recall that there's actually a good historical reason for this terminology, though I'm not sure exactly what that was.
  • Maths: OK, so it's not really in the spirit of the other things, and maybe it's just my American crudeness, but this sounds totally wrong to me.
  • One-to-one: I thought it was OK until someone pointed out to me that it sounds like each member of the domain maps to a unique member of the codomain (what some people call single-valued). This has been the source of some confusion in classes I've TA'ed. Alternatives: just use the already existing technical-sounding words "injective" and "surjective."
  • Proper forcing: It's just a bit too vague and authoritarian for me. Alternatives: Maybe something like "internally generic forcing".

Sunday, December 14, 2014

Spencinar flashback: Approachability and stationary reflection at $\aleph_\omega$

The Spencinar has been continuing as usual since the Thanksgiving break. Two weeks ago, Spencer talked about diamond principles. I'm working on these notes still. Last week, I talked about precipitousness of the nonstationary ideal (the subject of the learning project #1).

I'm about to retire this old seminar notebook, so I thought I would record here a Spencinar that I gave over the summer about the approachability property. This will also provide some useful background for future posts.

This material comes mostly from Eisworth's friendly handbook chapter, where I was originally inspired to pursue the line of research that I'm on. I'm going to focus on the following result of Shelah's (interpreted through Eisworth's chapter):

Main Theorem: If $\aleph_\omega$ is a strong limit, then $\rm{Refl}(\aleph_{\omega+1})$ implies the approachability property at $\aleph_\omega$.

This result is unexpected since approachability is a rather weak "non-reflection" principle. It's interesting since Magidor's model of $\rm{Refl}(\aleph_{\omega+1})$ satisfies the approachability property at $\aleph_\omega$, and in fact the proof uses this to show that stationarity is preserved after forcing. It's also interesting since the proof seems to use special properties of $\aleph_\omega$, giving an example of the phenomenon that this singular cardinal in particular seems to have a lot of ZFC structure. This is nice to understand, because there is an important open question asking whether the failure of the singular cardinals hypothesis at $\aleph_\omega$ implies the approachability property (and this is consistently false for larger singular cardinals).

1. $\mathrm{AP}_\mu$: review

Shelah defined the ideal $I[\lambda]$ for a regular cardinal $\lambda$ by:

$S\in I[\lambda]$ iff there is

  1. a sequence $\langle a_\alpha:\alpha<\lambda\rangle$ of bounded subsets of $\lambda$ and
  2. $C\subseteq \lambda$ club,

 so that for every $\delta\in S\cap C$, there is $A_\delta\subseteq \delta$ unbounded of order-type $\rm{cf}(\delta)$ so that

  • $\{A_\delta\cap \beta:\beta<\delta\}\subseteq \{a_\beta:\beta<\delta\}$ (in words, all initial segments of $A_\delta$ are enumerated in $\bar{a}$ at a stage before $\delta$).
Such $\delta$ is said to be approachable with respect to $\bar{a}$. It can be shown (cf. Eisworth's handbook chapter) that $I[\lambda]$ is a normal ideal on $\lambda$.

For $\mu$ a singular cardinal, the approachability property $\rm{AP}_\mu$ is the statement $\mu^+\in I[\mu^+]$. (Some other authors denote this by $\rm{AP}_{\mu^+}$.) Restating the definitions, this means there is $\bar{a}$ and a club of points $\delta<\mu^+$ which are approachable with respect to $\bar{a}$.

2. Elementary substructures

We will define a $\lambda$-approximating sequence to be a $\subseteq$-increasing sequence $\mathcal{M}=\langle M_\alpha:\alpha<\lambda\rangle$ of elementary substructures of $(H(\theta); \in, \triangleleft)$ for sufficiently large regular $\theta$ so that $\lambda\in M_0$ and for all $\alpha$, $|M_\alpha|<\lambda$ and $M_\alpha\cap \lambda\in \lambda$. Most importantly, we require that $\langle M_\alpha:\alpha\le \beta\rangle\in M_{\beta+1}$. This is a slight strengthening of the concept of "IA sequence" which we saw in the learning project.

If $\mathcal{M}$ is a $\lambda$-approximating sequence, then define $S[\mathcal{M}]$ to be the set of $\delta<\lambda$ such that $M_\delta\cap\lambda=\delta$ and there is a cofinal $a\subseteq \delta$ of order-type $\rm{cf}(\delta)$ such that every initial segment of $a$ is in $M_\delta$.

This gives a useful characterization of approachability: $I[\lambda]$ is just the ideal generated by the nonstationary sets together with sets of the form $S[\mathcal{M}]$. For a given $\bar{a}$, let $\mathcal{M}$ be a $\lambda$-approximating sequence with $\bar{a}\in M_0$ and let $E\subseteq\lambda$ be a club so that $M_\alpha\cap\lambda=\alpha$ for all $\alpha\in E$. On this club, the set of points approachable with respect to $\bar{a}$ is contained in $S[\mathcal{M}]\cap E$.

Conversely, given a sequence $\mathcal{M}$, let $\bar{a}$ enumerate the bounded subsets of $\lambda$ in $\bigcup_\alpha M_\alpha$, and let $E$ be the club of $\delta$ so that $\{a_\alpha:\alpha<\delta\}$ enumerates the bounded subsets of $\lambda$ lying in $M_\delta$.

3. Colorings

We can also rephrase notions of approachability in terms of colorings, given some extra assumptions. Here $S^\lambda_\kappa$ denotes the set of points of cofinality $\kappa$ below $\lambda$.

Theorem 1: Suppose $\kappa<\lambda$ are regular with $2^{<\kappa}<\lambda$. If $d:[\lambda]^2\rightarrow \omega$ and $\mathcal{M}$ is a $\lambda$-approximating sequence containing $\{\kappa, d\}$, then for every $\delta\in S[\mathcal{M}]\cap S^\lambda_\kappa$, there is a cofinal $H\subseteq \delta$ homogeneous for $d$.

Proof: If $\delta\in S[\mathcal{M}]\cap S^\lambda_\kappa$, let $\{\alpha_i:i<\kappa\}$ be cofinal in $\delta$ such that $\{\alpha_i:i<\zeta\}\in M_\delta$ for all $\zeta<\kappa$. By induction on $i<\kappa$, we can define $\epsilon_i,f_i$ so that:

  1. $\epsilon_0=\alpha_0$.
  2. $f_i:i\rightarrow \omega$, $f_i(j)=d(\epsilon_j,\delta)$.
  3. $\epsilon_i$ is the least $\alpha$ so that $\alpha>\alpha_j,\epsilon_j$ and $d(\epsilon_j,\alpha)=d(\epsilon_j,\delta)$ for all $j<i$, if such an $\alpha$ exists. The construction ends if no such $\alpha$ exists.
This last condition says that the function $d(\epsilon_i,\epsilon_j)$ depends only on $i$, where $i<j$, and uses $\delta$ as a "reference point". Let $i^*$ be the length of the construction.

We show the construction goes up to $\delta$. By part of the definition of $S[\mathcal{M}]$, $M_\delta\cap\lambda=\delta$. Since $\kappa\in M_\delta$, $M_\delta$ must be ${<}\kappa$-closed, so it contains $f_i$ for all $i<i^*$ (this uses $2^{<\kappa}<\lambda$). This means the construction above can be carried out inside $M_\delta$, and therefore $\epsilon_i<\delta$ for all $i<i^*$, since $\delta$ itself witnesses the condition in 3.

The proof concludes by finding $\xi$ so that $\{i<\kappa:d(\epsilon_i,\delta)=\xi\}$ is unbounded, giving a homogeneous set. $\Box$

This motivates yet another conception of "approachability".

Definition: For $d:[\lambda]^2\rightarrow\chi$ (some $\chi$), define $S(d)$ to be the set of $\delta<\lambda$ for which there is a cofinal $H_\delta\subseteq \delta$ homogeneous for $d$, and $S^*(d)=\lambda-S(d)$.

So if the approachability property holds at the predecessor of $\lambda$, then $S(d)$ contains almost every point of $S^\lambda_\kappa$ for every $d$ (under the hypotheses of Theorem 1).

Actually, under those hypotheses, the approachability ideal is generated by a single set over the nonstationary ideal. We can see a version of this result by using colorings: it turns out there is a certain class of colorings whose members have "all of the complexity" of the general case.

Definition: Let $d:[\lambda]^2\rightarrow\omega$ be a coloring as above, with $\lambda=\mu^+$. We say $d$ is normal if $|\{\beta<\alpha:d(\beta,\alpha)<i\}|<\mu$ for all $\alpha<\lambda$ and $i<\rm{cf}(\mu)$. We say $d$ is transitive if $d(\alpha,\gamma)\le \mathrm{max}\{d(\alpha,\beta),d(\beta,\gamma)\}$ for all $\alpha<\beta<\gamma<\lambda$.

Normal transitive colorings are not difficult (but a little tricky) to construct. We'll just take this as an unproven fact (this construction was used, for example, in John's Spencinar talk about Martin's Maximum).

Proposition: If $\mu$ is strong limit and $d$ is a normal, transitive coloring of $\mu^+$, then there is $\bar{a}$ so that almost every point of $S(d)$ is approachable with respect to $\bar{a}$.

Proof: Let $\langle a_\xi:\xi<\lambda\rangle$ enumerate all subsets of the form $\{\beta<\alpha:d(\beta,\alpha)<i\}$ in some order. This is fine since there are only $\lambda$ many such sets. Then for any $\delta\in S(d)$ of uncountable cofinality there exists a cofinal $H_\delta\subseteq\delta$ homogeneous for $d$, say with constant color $c$. In fact, $d(\beta,\delta)$ is bounded in $\omega$ for $\beta\in H_\delta$: since $\delta$ has uncountable cofinality, some value $n$ occurs as $d(\beta',\delta)$ for cofinally many $\beta'\in H_\delta$, and then for any $\beta\in H_\delta$ we can pick such a $\beta'>\beta$ and use transitivity to get $d(\beta,\delta)\le \max\{d(\beta,\beta'),d(\beta',\delta)\}=\max\{c,n\}$. So almost every point of $S(d)$ is approachable. $\Box$

So the proposition gives a really nice characterization of the approachability property under the hypotheses of Theorem 1: $\mathrm{AP}_\mu$ holds iff for some (equivalently, for every) normal transitive coloring $d$, $S(d)$ contains a club of points of uncountable cofinality.

4. Proof of Main Theorem

Now we start working towards the proof of the theorem. The first result is that $S^*(d)$ can only reflect in itself.

Lemma 2: If $S^*(d)\cap \alpha$ is stationary in $\alpha$, then $\alpha\in S^*(d)$.

Proof: Otherwise $\alpha\in S(d)$, so let $H\subset \alpha$ be cofinal in $\alpha$ and homogeneous for $d$. Then every limit point of $H$ must be in $S(d)$ by the definition of $S(d)$, so $S^*(d)$ is disjoint from a club in $\alpha$. $\Box$

So if the approachability property fails, then there is a coloring $d$ (namely, any normal transitive one) for which $S^*(d)\cap \mathrm{cof}(>\omega)$ is stationary. Take $n<\omega$ such that $S=S^*(d)\cap S^{\aleph_{\omega+1}}_{\aleph_{n+1}}$ is stationary.

The next claim is the crucial fact giving a non-reflecting stationary set.

Claim: $S$ cannot reflect at points of cofinality $>2^{\aleph_n}$.

Proof of Claim: Suppose $S$ reflects at $\tau$ with $\mathrm{cf}(\tau)>2^{\aleph_n}$; we may as well assume $\tau$ is regular (otherwise work with a club in $\tau$ of order-type $\mathrm{cf}(\tau)$). Then $\tau^{<\aleph_{n+1}}=\tau$. So we can construct a $\tau$-approximating sequence containing all the relevant parameters which is ${<}\aleph_{n+1}$-closed. Then for any $\delta\in S^\tau_{\aleph_{n+1}}$ with $M_\delta\cap\tau=\delta$, any $A\subseteq \delta$ of order-type $\aleph_{n+1}$ has all of its initial segments in $M_\delta$. This shows that $S^\tau_{\aleph_{n+1}}\in I[\tau]$. This is impossible by Theorem 1, which says that almost every point of $S^\tau_{\aleph_{n+1}}$ is then in $S(d)$, so $S\cap\tau$ cannot be stationary. $\Box$

Now fix $k<\omega$ so that $2^{\aleph_n}=\aleph_{n+k}$. Such a $k$ exists since $\mu$ is a strong limit. Define $S_0=S$, and $S_{i+1}$ to be the set of reflection points of $S_i$. Observe that if $\delta\in S_i$, then $\mathrm{cf}(\delta)\ge \aleph_{n+1+i}$, since the set of points below $\delta$ of cofinality less than $\mathrm{cf}(\delta)$ contains a club in $\delta$. Together with the Claim, this implies that $S_k$ is empty, so let $i^*<k$ be maximal so that $S_{i^*}$ is stationary. Then $S_{i^*}$ is stationary but its set of reflection points $S_{i^*+1}$ is nonstationary; intersecting $S_{i^*}$ with a club disjoint from $S_{i^*+1}$ gives a nonreflecting stationary set, contradicting $\rm{Refl}(\aleph_{\omega+1})$. Hence we have proven the main theorem.

5. Normal scales?

Let's try to apply some of these ideas. This section requires a basic knowledge of pcf theory. Let $\langle f_\alpha:\alpha<\mu^+\rangle$ be a scale.

The "difference function" of the scale is a transitive function $d:[\mu^+]^2\rightarrow\omega$ defined so that $d(\alpha,\beta)$ for $\alpha<\beta$ is the supremum of all $n$ so that $f_\alpha(n)\ge f_\beta(n)$. In light of the previous explorations on approachability, it is natural to ask if it can ever be normal.

Proposition: Normal scales don't exist.

Proof: Let $\vec{f}$ be a scale on $\langle \mu_i:i<\omega\rangle$ and suppose its difference function is normal. Pick $\gamma_1<\mu_1$ so that $B_1=\{\alpha:f_\alpha(1)<\gamma_1\}$ has size $\mu^+$. Let $A_0$ be a set of $\mu_0$ many $\alpha<\mu^+$ in $B_1$.

For $n>0$, pick $\gamma_{n+1}<\mu_{n+1}$ so that $\gamma_{n+1}>\sup_{\alpha\in A_0\cup\cdots\cup A_{n-1}} f_\alpha(n+1)$ and $B_{n+1}=\{\alpha\in B_n: f_\alpha(n+1)<\gamma_{n+1}\}$ has size $\mu^+$. Let $A_n$ be a set of $\mu_{n}$ many elements of $B_{n+1}$.

Then $A=\bigcup_n A_n$ has size $\mu$ and $f_\alpha(n)<\gamma_n$ for every $\alpha\in A$ and $n\ge 1$. Now pick $\beta$ above $A$ with $f_\beta(n)>\gamma_n$ for all $n$ above some $N<\omega$ (possible since $n\mapsto\gamma_n$ is in the product and the scale is cofinal and increasing modulo finite). Then $d(\alpha,\beta)\le N$ for all $\alpha\in A$, so $|\{\alpha<\beta:d(\alpha,\beta)<N+1\}|\ge\mu$, contradicting normality. $\Box$

Saturday, November 29, 2014

Learning project #1: a theorem

0th post in this series
1st post (updated with some more facts about IA structures at the end)
(edited 12/10/2014 to improve clarity)

We'll explain the proof of Theorem 27 of FMS:

Theorem 27: Suppose $\kappa$ is a supercompact cardinal and $\mu<\kappa$ is regular. Then in $V^{\rm{Col}(\mu,<\kappa)}$, the nonstationary ideal on $\mu$ is precipitous.

We'll follow the proof in FMS, trying to use Foreman's chapter for some new insights. I think the material is generally covered much better in the Handbook chapter than in the article.

Now that we have Lemma 1 for proving ideals are precipitous, we can carry out this game plan.

Let $G$ be a generic for $\rm{Col}(\mu,<\kappa)$, and let's think in the generic extension $V[G]$. We will show:

 Main Claim: Let $\mathfrak{A}$ be an expansion of $\langle H(\lambda),\in,\Delta\rangle$ (where $\lambda$ is some regular cardinal above, say, $2^{2^{2^\mu}}$). Then for almost all (in the sense of the nonstationary ideal) $N\prec\mathfrak{A}$ with $N\in \rm{IA}$, $|N|<\mu$, and $N\cap \mu$ an ordinal:

For all maximal antichains $\mathcal{A} \in N$ of $P(\mu)/NS$, there is an $A\in \mathcal{A}$ such that 

  1. $N\cap \mu\in A$ and
  2. letting $N^A$ equal the Skolem hull of $N\cup \{A\}$ in $\mathfrak{A}$, $N^A\cap \mu=N\cap \mu$.
Let's explain the usefulness of these conditions. To apply our Lemma 1, let $\mathcal{A}=\langle A_s:s\in T \rangle$ be a tree of maximal antichains for $P(\mu)/\rm{NS}$ with underlying tree $T\subseteq (\mu^+)^{<\omega}$. We will construct a branch $f:\omega\rightarrow \mu^+$ so that $\bigcap_n A_{f\upharpoonright n}\neq \emptyset$.

We'll construct the branch $f$ in $\omega$ stages. 

Take $N_0$ containing $\mathcal{A}$ as an element. Let $s_0=\langle\alpha_0\rangle$ be so that $A_{s_0}$ witnesses (1) and (2) with antichain equal to the first level of $\mathcal{A}$. From now on, use the notation $s_i:=\langle \alpha_0,\ldots, \alpha_{i}\rangle$. 

At the $i+1$st stage, let $N_{i+1}:=(N_i)^{\alpha_i}$ (the Skolem hull of $N_i\cup\{\alpha_i\}$ in $\mathfrak{A}$). Pick $\alpha_{i+1}$ so that (1) and (2) are satisfied for $N=N_{i+1}$ and $s=\langle \alpha_0,\ldots, \alpha_{i+1}\rangle$ in the maximal antichain consisting of $\langle A_s:s \textrm{ extends }s_i\rangle$ together with some set not containing the single point $N\cap \mu$ (in order to do this construction, we need to show that the property $N_i\in\rm{IA}$ is maintained, which is Lemma 2 from last time). 

We have $N_i\cap \mu\in A_{s_i}$ for all $i$, but $N_i\cap\mu$ was always just equal to $N_0\cap \mu$, so $N_0\cap \mu$ is in the intersection of all of the $A_{s_i}$. The key point of absorbing the index of the member of the antichain was so that the construction could continue in a coherent way, ensuring we are actually building a branch of $T$. (Foreman's chapter further develops these ideas, leading to the notion of "catching antichains".)

Let us now prove the main claim. Suppose there is a stationary $S\subseteq \rm{IA}$ of structures for which the conclusion of the main claim fails. The idea of the proof is that the supercompact gives us enough reflection of $S$ to catch the antichain. Let $\mathbb{P}=\rm{Col}(\mu,<\kappa)$ and let $j:V\rightarrow M$ be an $|H(\lambda)|$-supercompact embedding. Since $\mathbb{P}$ is $\kappa$-c.c., we can force to find $H\subseteq j(\mathbb{P})$ which contains $G$ as a subset, so we can extend $j$ to $\hat{j}:V[G]\rightarrow M[H]$ in $V[H]$.  

Subclaim: $S$ reflects to a set of size $\mu$, i.e., there is $Y\subseteq H(\lambda)$ with $\mu\subseteq Y$ and $|Y|=\mu$ so that $S\cap P(Y)$ is stationary in $P(Y)$.

Proof of Subclaim: By Lemma 3 from last time, $\hat{j}(S)\cap P(\hat{j}``H(\lambda))$ is stationary in $V[H]$. Now because of the collapse, $|\hat{j}``H(\lambda)|=\mu$ and $\mu\subseteq \hat{j}``H(\lambda)$ since $\kappa$ is the critical point. By closure of $M$ (and, it can be seen, of $M[H]$), $\hat{j}``H(\lambda)\in M[H]$. So ($\hat{j}$ of) the statement of the subclaim holds in $M[H]$, so by elementarity the subclaim holds in $V[G]$.

Returning to the proof of the main claim, let $Y$ be given by the subclaim and let $f:\mu\rightarrow Y$ be a bijection. Let $T:=\{\delta<\mu: f``\delta\in S\textrm{ and }\delta=f``\delta\cap\mu\}$. Then it is easy to see that $T$ is stationary, using the fact that $S$ reflects to $Y$. By maximality, there is $A\in\mathcal{A}$ such that $T\cap A$ is stationary. So we can find an IA structure $N'\prec \mathfrak{A}$ containing $A, f$ and with $N'\cap \mu \in T\cap A$. Take $N$ to be $f``(N' \cap\mu)$, so $N\in S$ by definition of $T$. By the second property in the definition of $T$, we also have $N\cap \mu=N'\cap \mu$. But $N'$ contains $A$, so the Skolem hull of $N\cup\{A\}$ is contained in $N'$ and therefore $N^A\cap\mu=N\cap\mu$, which would show that $N$ cannot be in $S$.

This finishes the proof. 

Thursday, November 20, 2014

Spencinar #6: The Galvin-Prikry Theorem

Today Zach Norwood talked about the Galvin-Prikry Theorem.

Prerequisites: basic topology, notions of category, familiarity with Ramsey theory is helpful for motivation. Last week's Spencinar post will be referenced occasionally but only in inessential remarks.

The Galvin-Prikry Theorem is a Ramsey result that is perhaps surprising because it is an example of an "infinite exponent" partition property. This means it deals with colorings of infinite subsets of the underlying space. With the Axiom of Choice, none of the usual partition relations hold with infinite exponents.

The Galvin-Prikry Theorem gets around this by only worrying about definable partitions of $[\omega]^\omega$. In order to state the theorem, let's make some definitions:

Definition: $Y\subseteq [\omega]^\omega$ is Ramsey if there is $A\in [\omega]^\omega$ with $[A]^\omega\subseteq Y$ or $[A]^\omega\subseteq Y^c$. (Here $Y^c$ is the complement of $Y$.) I'll informally say that $A$ in the first case is $Y$-homogeneous and $A$ in the second case is $Y^c$-homogeneous.

$Y$ can also be thought of as a coloring $[\omega]^\omega\rightarrow 2$. As in last week's post, we think of $[X]^\omega$ as the set of all increasing infinite sequences of $X$, or sometimes in a slight abuse of notation as the set of infinite subsets of $X$.
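For intuition about homogeneous sets, here is a finite analogue (my own illustration, not from the talk): by the finite Ramsey theorem $R(3,3)=6$, every 2-coloring of pairs from a 6-element set admits a 3-element homogeneous subset, and a brute-force search finds one.

```python
# Brute-force search for a homogeneous subset under a 2-coloring of pairs.
# This is a finite stand-in for the infinite-exponent colorings in the text.
from itertools import combinations

def find_homogeneous(points, color, size):
    """Return a size-element subset all of whose pairs get one color, or None."""
    for subset in combinations(points, size):
        colors = {color(a, b) for a, b in combinations(subset, 2)}
        if len(colors) == 1:
            return subset
    return None

# An arbitrary 2-coloring of pairs from {0, ..., 5}.
parity = lambda a, b: (a * b) % 2

H = find_homogeneous(range(6), parity, 3)
assert H is not None  # guaranteed: R(3, 3) = 6
print(H)
```

With 6 points a homogeneous triple always exists; with fewer points, or with infinite exponents and the Axiom of Choice, the search can fail, which is the point of the restriction to definable sets.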

Galvin-Prikry Theorem: If $Y$ is Borel, then $Y$ is Ramsey.

Here the topology on $[\omega]^\omega$ is generated by sets of the form $\{X: s \textrm{ is an initial segment of }X\}$ for $s\in[\omega]^{<\omega}$. The proof given here, due to Ellentuck, will actually go through a finer topology which is more suited to the situation at hand. Notice the similarity with the Mathias forcing from last week. 

Definition: The Ellentuck topology is generated by sets of the form 
$$[s,B]^\omega:=\{X\in [\omega]^\omega: s \textrm{ is an initial segment of } X,\ X\setminus s\subseteq B\},$$
where $s\in [\omega]^{<\omega}$ and $B\in [\omega]^\omega$.
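The membership condition for a basic set $[s,B]^\omega$ is concrete enough to code. A small Python sketch (the function name is mine; infinite sets are replaced by finite samples, so this only checks the definition on finite approximations):

```python
# Check the [s, B] condition on finite approximations: s must be an initial
# segment of X, and the remainder of X must lie inside B.
def in_neighborhood(X, s, B):
    X, s = sorted(X), sorted(s)
    return X[:len(s)] == s and set(X[len(s):]) <= set(B)

evens = range(0, 100, 2)          # finite sample standing in for an infinite B
X = [1, 3, 4, 8, 20]              # finite approximation of some X

print(in_neighborhood(X, [1, 3], evens))   # True: X \ s lies in the evens
print(in_neighborhood(X, [1], evens))      # False: 3 is odd, so X \ s is not inside B
```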

We will use "$\ast$-" to indicate the use of the Ellentuck topology over the usual one. Notice that all Borel sets are $\ast$-Borel.

Fix $Y\subseteq [\omega]^\omega$. We say $B\in[\omega]^\omega$ accepts $s$ (into $Y$) if $[s,B]^\omega\subseteq Y$. We say $B$ rejects $s$ if no infinite subset of $B$ accepts $s$. Informally, $B$ accepts $s$ if $s$ is long enough and $B$ is thin enough so that every subset of $B-\max(s)$ compatible with $s$ is in $Y$, and $B$ rejects $s$ if $s$ is too short or $B$ concentrates on the wrong place for acceptance to occur. Although perhaps it's best to think just in terms of the Ellentuck topology.

Some easy remarks:
  • if $B$ accepts $s$, then so do all infinite subsets of $B$.
  • if $B$ rejects $s$, then so do all infinite subsets of $B$.
The main part of the proof is the following lemma, which clarifies the relationship between acceptance and rejection.

Galvin-Prikry Lemma: For $Y\subseteq [\omega]^\omega$, exactly one of the following holds:
  1. There is $B\in[\omega]^\omega$ that accepts $\emptyset$, or
  2. there is $B \in [\omega]^\omega$ which rejects all of its finite subsets.
A $B$ that accepts $\emptyset$ is just one that is $Y$-homogeneous, and a $B$ which rejects all of its finite subsets is exactly one for which no $\ast$-open subset of $[B]^\omega$ is $Y$-homogeneous. For those familiar with forcing and last week's seminar, this is a special version of the Prikry lemma for the empty condition in Mathias forcing. Not having read the original source, I wonder if Prikry noticed the analogy with Prikry forcing?

Proof: Assume that no $B$ accepts $\emptyset$, so $\omega$ rejects $\emptyset$. We will construct recursively
  • infinite subsets of $\omega$: $B_0\supseteq B_1\supseteq B_2\supseteq\cdots$, and
  • natural numbers $a_1<a_2<\cdots$
so that $a_i\in B_i$ and $B_i$ rejects all subsets of $\{a_1,\ldots,a_{i-1}\}$. If this can be done, then $\{a_1,a_2,\ldots\}$ rejects all of its finite subsets. This kind of construction is typical in Ramsey theory.

By our assumption, $B_0=\omega$ works. At the $k$th stage of the construction, suppose that $B_0,\ldots, B_k$ and $a_1,\ldots,a_{k-1}$ have already been constructed. We want to define $a_k>a_{k-1}$ in $B_k$ and $B_{k+1}\subseteq B_k$ so that $B_{k+1}$ rejects all subsets of $\{a_1,\ldots,a_k\}$. 

If such $a_k, B_{k+1}$ did not exist, we could define a sequence $C_1\supseteq C_2\supseteq\ldots$ of subsets of $B_k$, natural numbers $b_1<b_2<\ldots$ with $b_{i+1}\in C_i$, and a finite subset $\bar{s}$ of $\{a_1,\ldots,a_{k-1}\}$ so that $\{b_1,b_2,\ldots\}$ accepts $\bar{s}$. But this contradicts the inductive assumption on $B_k$. (This auxiliary construction is another version of the "typical" Ramsey construction, building a kind of homogeneous object in the other direction.)

Choose $b_1>a_{k-1}$ in $B_k$. $B_k$ doesn't reject all subsets of $\{a_1,\ldots,a_{k-1},b_1\}$ (otherwise we could take $a_k=b_1$ and $B_{k+1}=B_k$), so we can take $C_1\in [B_k]^{\omega}$ and $s_1\subseteq\{a_1,\ldots,a_{k-1},b_1\}$ so that $C_1$ accepts $s_1$. We must have $b_1\in s_1$ since we assumed that $B_k$ rejects all subsets of $\{a_1,\ldots,a_{k-1}\}$. Put $\bar{s}_1=s_1-\{b_1\}$. 

Continuing the construction, at stage $i+1$ pick $b_{i+1}>b_i$ in $C_i$, $C_{i+1}\in[C_i]^\omega$, and $s_{i+1}\subseteq \{a_1,\ldots,a_{k-1},b_{i+1}\}$ so that $C_{i+1}$ accepts $s_{i+1}$. Let $\bar{s}_{i+1}=s_{i+1}-\{b_{i+1}\}$.

Now the $\bar{s}_i$ range over the finitely many subsets of $\{a_1,\ldots,a_{k-1}\}$, so there is a constant value $\bar{s}$ taken for infinitely many $i$. Then the set of $b_i$ for those $i$ accepts $\bar{s}$. $\Box$

After this proof, we stopped using the language of acceptance and rejection, which made it easier for me to follow since I don't think that "rejection" is that easy to grasp in the flow of things.

As a first application of the Galvin-Prikry lemma,

Lemma 1: $\ast$-open sets are Ramsey.

Proof: Let $Y\subseteq [\omega]^\omega$ be $\ast$-open, and take $B$ satisfying the conclusion of the Galvin-Prikry lemma. In the first case, $B$ accepts $\emptyset$, so we're done. In the second case, $B$ rejects all of its finite subsets. We'll show that $B$ is $Y^c$-homogeneous. Otherwise, there is $A$ an infinite subset of $B$ so that $A\in Y$. Using that $Y$ is $\ast$-open, it follows that there is a $\ast$-neighborhood of $A$, say $[s,A]^\omega$, with $[s,A]^\omega\subseteq Y$. But this means that $A$ accepts $s$, contradicting that $B$ rejects $s$. $\Box$

Now we introduce a strengthening of the Ramsey property:

Definition: A set $Y\subseteq [\omega]^\omega$ is completely Ramsey if for every $s\in[\omega]^{<\omega}$ and every $B\in[\omega]^\omega$, there exists $C\in [B]^\omega$ so that either $[s,C]^\omega\subseteq Y$ or $[s,C]^\omega\subseteq Y^c$. If the second possibility holds for all $s,B$, then $Y$ is called Ramsey null.

Thus, completely Ramsey means that a local Ramsey property holds on any $\ast$-open set. This is useful because topological properties are local in this sense; a key idea of this proof will be to show that certain Ramsey properties are equivalent to topological properties in the Ellentuck topology.

Lemma 2: $\ast$-open sets are completely Ramsey. As a corollary, $\ast$-closed sets are completely Ramsey.

Proof: Let $s\in [\omega]^{<\omega}$ and $B\in [\omega]^\omega$ be arbitrary. The idea of the proof is to use a homeomorphism from $[\omega]^\omega$ to $[s,B]^\omega$ to transfer the result of Lemma 1. Enumerate $B=b_1<b_2<b_3<\cdots$. Then we define $f:[\omega]^\omega\rightarrow [s,B]^\omega$ by $X\mapsto s\cup \{b_i:i\in X\}$ (let's make the harmless assumption that $\min(B)>\max(s)$). It is easy to see that $f$ is $\ast$-continuous, so $f^{-1}Y$ is $\ast$-open, and therefore Ramsey by Lemma 1. So there is $C'\in [\omega]^\omega$ with $[C']^\omega\subseteq f^{-1}Y$ or $[C']^\omega\subseteq (f^{-1}Y)^c$. We can take $C=f(C')\setminus s$ to witness the completely Ramsey property at $[s,B]^\omega$. $\Box$

We begin to realize the "key idea" mentioned above.

Lemma 3: $Y\subseteq[\omega]^\omega$ is Ramsey null iff it is $\ast$-nowhere dense. (Reminder: a set is nowhere dense iff its closure has empty interior.)

Proof: The forward direction follows easily from the definitions: $\ast$-nowhere dense just means that for any basic $\ast$-open set $[s,B]^\omega$, there is a basic $\ast$-open subset $[t,C]^\omega$ disjoint from $Y$. Ramsey null is the prima facie stronger assertion where we cannot change the "stem" part $s$.

Conversely, take $[s,B]^\omega$ a basic $\ast$-open set. The $\ast$-closure $\bar{Y}$ of $Y$ is completely Ramsey by Lemma 2, so there is $C\in [B]^\omega$ with $[s,C]^\omega\subseteq \bar{Y}$ or $[s,C]^\omega\subseteq \bar{Y}^c$. The first is impossible: $[s,C]^\omega$ would be a subset of the $\ast$-interior of $\bar{Y}$, contradicting $\ast$-nowhere density. So $[s,C]^\omega\subseteq \bar{Y}^c\subseteq Y^c$, witnessing that $Y$ is Ramsey null. $\Box$

Let's recall some of the basic notions of category, the topological notion of largeness. The meager sets are countable unions of nowhere dense sets. It's a basic result (the Baire category theorem) that under some flexible hypotheses, a space is nonmeager in its own topology, so the category notions are nontrivial.

It turns out that Lemma 3 can be improved to $\ast$-meager sets:

Lemma 4: If $Y$ is $\ast$-meager, then it is Ramsey null.

Proof: Write $Y=\bigcup_{n=0}^\infty Y_n$ with each $Y_n$ $\ast$-nowhere dense. The proof is roughly a diagonal argument, similar to some of the earlier ones. Let $s,B$ be arbitrary. We use several applications of Lemma 3 to construct subsets $B_0\supseteq B_1\supseteq B_2\supseteq \cdots$ of $B$ and an increasing sequence of $x_i\in B_i$ so that for each $n$,

$[s\cup F,B_n]^\omega\subseteq Y_n^c$ for all $F\subseteq\{x_0,\ldots,x_{n-1}\}$.

So at the $n$th stage, we use Lemma 3 successively to shrink $B_{n-1}$ so that the statement above holds for each $F\subseteq\{x_0,\ldots,x_{n-1}\}$ (one application of Lemma 3 for each $F$).

Then $C=\{x_0,x_1,\ldots\}$ witnesses the Ramsey null property for $s,B$: every element of $[s,C]^\omega$ avoids each $Y_n$. $\Box$

So $\ast$-nowhere dense, $\ast$-meager, and Ramsey null are all the same thing.

We can finally put it all together to get the main theorem of this talk, which implies the Galvin-Prikry Theorem as a corollary (since Borel sets in the usual topology are $\ast$-Borel, and the $\ast$-Baire sets form a $\sigma$-algebra containing the $\ast$-open sets). Recall that a set $Y$ is Baire (or has the property of Baire) iff there is a meager set $W$ and an open set $Z$ so that $Y= Z\triangle W$ (these are the "measurable" sets in terms of category).

Theorem: $Y$ is $\ast$-Baire iff it is completely Ramsey.

Proof: For the forward direction, suppose $Y$ is $\ast$-Baire, so there is a $\ast$-meager set $W$ and $\ast$-open $Z$ with $Y=Z\triangle W$. Take $s\in[\omega]^{<\omega}$ and $B\in [\omega]^\omega$ arbitrary. By Lemma 4, $W$ is Ramsey null, so there is $C\in [B]^\omega$ so that $[s,C]^\omega\subseteq W^c$. By Lemma 2, there is $D\in[C]^\omega$ so that $[s,D]^\omega\subseteq Z$ or $[s,D]^\omega\subseteq Z^c$. By choice of $C$, this means that $[s,D]^\omega\subseteq Z-W\subseteq Y$ or $[s,D]^\omega\subseteq Z^c-W\subseteq Y^c$.

Conversely, assume $Y$ is completely Ramsey. We will show that $Y-\mathrm{int}(Y)$ is $\ast$-meager (the interior taken with respect to the $\ast$-open sets); since $Y=\mathrm{int}(Y)\triangle(Y-\mathrm{int}(Y))$, this gives that $Y$ is $\ast$-Baire. It suffices to show that $Y-\mathrm{int}(Y)$ is Ramsey null. Take $s\in[\omega]^{<\omega}$ and $B\in [\omega]^\omega$ arbitrary. There is $C\in [B]^\omega$ so that $[s,C]^\omega\subseteq Y$ or $[s,C]^\omega\subseteq Y^c$. In the first case, $[s,C]^\omega$ is a $\ast$-open subset of $Y$, so it is contained in $\mathrm{int}(Y)$; in the second case, $[s,C]^\omega$ is disjoint from $Y$. In either case, $[s,C]^\omega\subseteq (Y-\mathrm{int}(Y))^c$. $\Box$

Zach ended by giving an open question. Mathias proved that if it is consistent that an inaccessible cardinal exists, then it is consistent that projective sets are Ramsey (this statement holds in Solovay's model, obtained by collapsing an inaccessible). But is the inaccessible necessary? Does the statement that projective sets are Ramsey have consistency strength above just ZFC?

Sunday, November 16, 2014

Learning project #1, part 1: preliminaries

0th post in this series (updated with some more motivation):
(other relevant posts for reference: Spencinar #3: precipitous ideals)

Let's begin! Remember that one of our goals is

Theorem: $\rm{Col}(\mu,<\kappa)$ makes the nonstationary ideal restricted to $[\mu]^{<\mu}$ precipitous.

First, let's define notions of stationarity on sets of the form $[\lambda]^{<\kappa}=\{x\subseteq \lambda: |x|<\kappa\}$. We will have $\kappa$ regular.

There are two different notions here. If $X$ is a set and $C\subseteq P(X)$, then $C$ is strongly club if there is a structure $\mathfrak{A}$ on $X$ with countably many function symbols so that $C$ is the set of elementary substructures of $\mathfrak{A}$ (or equivalently, the set of subsets closed under a function $X^{<\omega}\rightarrow X$). A set $C\subseteq[\lambda]^{<\kappa}$ is J-club if it is unbounded in the $\subseteq$ order on $[\lambda]^{<\kappa}$ and closed under increasing unions of length less than $\kappa$. The two notions are closely connected: a result of Kueker says that the filter generated by the J-club sets is the filter generated by the strongly club sets together with all sets of the form $\{x\in [\lambda]^{<\kappa}:x\cap \kappa \in \kappa\}$ (the condition $x\cap\kappa\in\kappa$ says that $x\cap\kappa$ is itself an ordinal, i.e., the ordinals of $x$ below $\kappa$ form an initial segment of $\kappa$).
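Concretely, the strongly club filter is generated by sets of the following form (the notation $C_f$ is mine): for a function $f:X^{<\omega}\rightarrow X$,
$$C_f=\{x\in P(X): f''x^{<\omega}\subseteq x\},$$
the collection of subsets of $X$ closed under $f$.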

We can correspondingly define weakly stationary (resp. J-stationary) sets as those that intersect every strongly club (resp. J-club) set. For our purposes, strongly club and weakly stationary sets are the right notions, and we will refer to them from now on without the adverbs.

In this post, we will get a simple criterion for precipitousness. Read the Spencinar #1 or #3 posts for basic information about generic ultrapowers.

Our notation will be that $Z$ is a set and $I$ is an ideal on $Z$. We will use $I^+$ to denote the collection of $I$-positive sets, i.e., the set $P(Z)-I$. We will use $I^\wedge$ to mean the dual filter of $I$, i.e., the collection of sets of the form $Z-A$ for some $A\in I$. The equivalence class of $S\subseteq Z$ in $P(Z)/I$ will be denoted by $[S]$.
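For a standard concrete instance (and the case we care about here): if $Z=\omega_1$ and $I=\mathrm{NS}_{\omega_1}$ is the nonstationary ideal, then
$$I^\wedge=\{A\subseteq\omega_1: A\textrm{ contains a club}\},\qquad I^+=\{A\subseteq\omega_1: A\textrm{ is stationary}\}.$$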

Definition: A tree of maximal antichains $\mathcal{A}$ for $P(Z)/I$ below $S$ consists of an underlying tree $T\subseteq \mathrm{ON}^{<\omega}$ together with labels $\langle A_s: s\in T\rangle$ from $P(Z)$ such that

  1. $A_\emptyset = S$,
  2. The levels of the tree form increasingly refined maximal antichains of $P(Z)/I$ restricted to $S$, i.e., for each $s\in T$, the set $\{[A_{s\ast \alpha}]:s\ast\alpha\in T\}$ is a maximal antichain below $A_s$ ($\ast$ denotes concatenation). Further, let us assume that $A_{s\ast \alpha}\subseteq A_s$. Denote the levels by $\mathcal{A}_n$, $n<\omega$. 
Lemma 1: (FMS p. 31, originally Jech--Prikry) $I$ is precipitous iff for every $S\in I^+$ and every tree $\mathcal{A}$ of maximal antichains for $P(Z)/I$ restricted to $S$, there is a branch $f$ of the tree so that $\bigcap_{n<\omega} A_{f\upharpoonright n}$ is nonempty.

Proof: If $I$ is precipitous, and $\langle A_s: s\in T\rangle$ is a tree of maximal antichains below some $S\in I^+$, then consider the generic ultrapower (see older posts for references) by $P(Z)/I$ restricted to $[S]$. This is a map
$$j:V\rightarrow M\subseteq V[G]$$
defined in the forcing extension by $G$ which is generic for $P(Z)/I$ restricted to $[S]$. We will think of $G$ as a $V$-ultrafilter extending the dual filter of $I$.

Let's review the construction of this ultrapower. The elements are represented by functions $F: Z\rightarrow V$ in $V$, and we have Łoś's theorem, which says that for a formula $\varphi$ in the language of set theory and functions $F_0,\ldots,F_n$, $M\vDash \varphi([F_0],\ldots,[F_n])$ if and only if $\{z:V\vDash \varphi(F_0(z),\ldots,F_n(z))\}\in G$.

In $M$, let $\mathcal{B}=j(\mathcal{A})$ and $i$ be the element represented in the ultrapower by the identity function on $Z$.

For each $n<\omega$, there must be a unique $A_{s_n}\in \mathcal{A}_n$ so that $i\in j(A_{s_n})$: this $A_{s_n}$ is the one lying in $G$ (such an $A_{s_n}$ must exist by a basic density argument using the fact that $\mathcal{A}_n$ is a maximal antichain, and it must be unique). The sequence of these $\langle A_{s_n}:n<\omega\rangle$ can be computed in $V[G]$ and is a branch through $\mathcal{B}$. Then there must be a branch in $M$ through the subtree of $\mathcal{B}$ consisting of those nodes which are labeled by a set containing $i$ as an element. (Otherwise, in $M$ there would be a rank function $f$ on this subtree taking ordinal values, with $f(B_t)<f(B_s)$ whenever $t$ properly extends $s$; this rank function would still witness well-foundedness in $V[G]$, so there couldn't be a branch in $V[G]$ either, a contradiction. Note that this is an important pattern of argument that is very common in many different places in set theory.)

Now this branch is a branch through $\mathcal{B}$ with nonempty intersection (containing $i$, for one) so by elementarity there is such a branch through $\mathcal{A}$.

Conversely, assume that $I$ is not precipitous. Then there is $S\in I^+$ such that $[S]$ forces that the generic ultrapower is ill-founded. By maximality of the forcing language, we can find names $\dot{F}_n$ such that

  1. $[S]\Vdash \dot{F}_n:Z\rightarrow V$,
  2. $[S]\Vdash \dot{F}_n\in V$,
  3. $[S]\Vdash ``M \vDash [\dot{F}_{n+1}]\in [\dot{F}_n]''$. 

We can build a tree of maximal antichains $\mathcal{A}$ below $[S]$ such that

  • a condition $A_s$ on the $n$th level decides the value of $\dot{F}_n$, say as $f^s_n$,
  • if $s$ is the predecessor of $t$ in the underlying tree, then $f^t_{n+1}(z)\in f^s_n(z)$ for all $z\in A_t$.
If we have a tree of maximal antichains with just the first property, we can throw out sets in $I$ from each node to get the second property.

This tree has no branch with nonempty intersection: if $z$ belonged to the intersection along a branch, then the decided values $\langle f^s_n(z):n<\omega\rangle$ along that branch would form an infinite descending $\in$-sequence in $V$. $\Box$

Another thing we will need is the notion of an internally approachable (IA) structure. Let $\lambda$ be a sufficiently large regular cardinal and consider the structure $\mathfrak{A}=\langle H(\lambda),\epsilon,\Delta,f_i\rangle$ where the $f_i$ are countably many function symbols and $\Delta$ is a fixed well-order of $H(\lambda)$ needed for certain Skolem hull arguments. An elementary substructure $N\prec \mathfrak{A}$ is internally approachable iff it can be written as $N=\bigcup_{\alpha<\delta}N_\alpha$ for some $\delta$ such that for all $\beta<\delta$, $\langle N_\alpha:\alpha<\beta\rangle\in N$ (this definition is slightly weaker than other common definitions of IA structures). Sometimes such an $N$ will be said to be internally approachable of length $\delta$.

Because of the way these structures are built in layers, an IA structure can do some pretty "meta" computations (as we will see). This enables us to do some inductive constructions, where the IA condition gives a kind of strengthened inductive hypothesis.

First we have a basic lemma about IA structures which is useful in many contexts.

Lemma 2: Let $N\prec \langle H(\lambda),\epsilon,\Delta\rangle$ be IA of length $\delta$ for $\delta<\lambda$. Suppose $a\in y$ for some $y\in N$. Then the Skolem hull of $N\cup\{a\}$ computed in $H(\lambda)$ is IA of length $\delta$.

Proof: Let $\langle N_\alpha:\alpha<\delta\rangle$ witness approachability. For each $\alpha<\delta$, let $N^*_\alpha$ be the closure of $N_\alpha\cup\{a\}$ under all functions $H(\lambda)\rightarrow H(\lambda)$ in $N_{\alpha+1}$. The $N^*_\alpha$ are elementary substructures of $H(\lambda)$ that witness approachability for the Skolem hull of $N\cup\{a\}$: if $\beta<\delta$, then $\langle N^*_\alpha:\alpha<\beta\rangle$ can be computed from $a$ and the sequence $\langle N_\alpha:\alpha\le \beta\rangle$. $\Box$

Next is the main importance of IA structures in this project. It is very typical of applications of approachability:

Lemma 3: Let $S\subseteq \mathrm{IA}$ be a stationary subset of $[H(\lambda)]^{<\mu}$, for some uncountable regular $\mu<\lambda$. Then $S$ remains stationary in $[H(\lambda)^V]^{<\mu}$ after forcing with $\mathbb{P}=\mathrm{Col}(\mu,<\kappa)$ for any ordinal $\kappa$.

Proof: Let $\dot{F}$ be a name for a function $H(\lambda)^{<\omega}\rightarrow H(\lambda)$ in the extension. Let $\lambda^*$ be a sufficiently large regular cardinal, and $N\prec \langle H(\lambda^*),\in,\Delta,\mathbb{P},\dot{F},S\rangle$ with $|N|<\mu$ and $N\cap H(\lambda)\in S$.

Then $N\cap H(\lambda)$ is IA of some length $\delta<\mu$, witnessed by $\langle N_\alpha:\alpha<\delta\rangle$. We will define a decreasing sequence $\langle p_\alpha:\alpha<\delta\rangle$ of conditions in $\mathbb{P}$ so that

  1. For every $\beta<\delta$, there is $M_\beta\in N$ containing $N_\beta$ and each of the previous $M_\alpha$, $\alpha<\beta$, so that for each $x\in N_\beta\cup\bigcup_{\alpha<\beta}M_\alpha$, $p_\beta$ forces that $\dot{F}(x)\in M_\beta$.
  2. For every $\beta<\delta$, $\langle p_\alpha:\alpha<\beta\rangle\in N$.
We just pick the $\Delta$-least $p_\beta$ so that there is a $M_\beta$ satisfying the above, and then pick the $\Delta$-least such $M_\beta$. These things all exist in $N$ since $N$ is IA, and we can get (2) since this computation can be done inside $N$.

Now use the $<\mu$-closure of $\mathbb{P}$ to find a $p$ extending all of the $p_\alpha$. This $p$ forces that $N\cap H(\lambda)$ is closed under $\dot{F}$, so $S$ remains stationary. $\Box$

Friday, November 14, 2014

Spencinar #5: Happy and mad families, part 2

Now we get to Asger's proof.

Definition: If $A$ is an almost disjoint family and $B_0,B_1\subseteq A$, then $B_0$, $B_1$ have uniformly bounded intersection by $k$ if for any $x\in B_0$, $y\in B_1$ with $x\neq y$, $x\cap y\subseteq k$.

Definition: A polarization of $A$ is a countable partition $\{F_n\}_{n\in\omega}$ of $\{(x,y)\in A^2:x\neq y\}$ such that for each $n$, $\mathrm{proj}_0 F_n$ and $\mathrm{proj}_1 F_n$ have uniformly bounded intersection by some $k$.

Here $\mathrm{proj}_0$ and $\mathrm{proj}_1$ are the projections to the left and right coordinates, respectively.

The polarization will help us get to a countable situation where we can diagonalize. We will show the following lemmas:

Lemma 1: $\mathbf{\Sigma}^1_1$ almost disjoint families have polarizations.

Lemma 2: If an almost disjoint family has a polarization, then it is not maximal.

This would prove Mathias's result again.

To prove Theorem 2, we can use Lemma 2 plus the following (which we won't prove here):

Lemma 3: $\mathrm{OCA}_\infty$ + every set of reals has the perfect set property implies that every almost disjoint family has a polarization. ($\mathrm{OCA}_\infty$ is a strengthening of the open coloring axiom)

Lemma 4:  $\mathrm{OCA}_\infty$ holds in Solovay's model.

Proof of Lemma 1: Take a tree $T\subseteq [\omega]^{<\omega}\times [\omega]^{<\omega}\times [\omega]^{<\omega}$ such that $p[T]:=\{(x,y)\in ([\omega]^{\omega})^2: \exists t \, (t,x,y)\textrm{ is a branch of }T\}$ is equal to $\{(x,y)\in A^2:x\neq y\}$; such a tree exists since that set is analytic (this is the basic tree representation of an analytic set).

Consider the set of nodes $(\tau, r,s)$ such that every extension $(\tau',r',s')$ has $r'\cap s'=r\cap s$. Given such $(\tau,r,s)$, let $F=p[T_{(\tau,r,s)}]$. Then $\mathrm{proj}_0 F$ and $\mathrm{proj}_1 F$ have uniformly bounded intersection by $\max(r\cap s)+1$.

We can perform a countable-length derivative process, iteratively removing such nodes from $T$. This will terminate in countably many steps (since there are only countably many nodes), and the tree remaining at the end cannot have infinite branches: along any branch $(t,x,y)$, the intersection $x\cap y$ is finite by the almost disjointness of $A$, so the values $r\cap s$ stabilize along the branch, and the node where they stabilize would have been removed. So the countably many sets $F$ removed along the way cover $\{(x,y)\in A^2:x\neq y\}$, giving a polarization.

Proof of Lemma 2: First we make the following observation: if $A$ is an uncountable almost disjoint family and polarized by $\{F_n\}_{n\in \omega}$, then there is a sequence $B_0,B_1,\ldots$ of uncountable subsets of $A$ such that $B_i$ has uniformly bounded intersection with $\bigcup_{j>i} B_j$ and $B_0=\mathrm{proj}_0 F_n$ for some $n$.

To prove the observation, take $n$ such that $\mathrm{proj}_0 F_n$ and $\mathrm{proj}_1 F_n$ are both uncountable. Take $B_0=\mathrm{proj}_0 F_n$. Now $\mathrm{proj}_1 F_n$ is polarized by $\{F_m\cap (\mathrm{proj}_1 F_n)^2\}_{m\neq n}$. So we can iterate this process for $\omega$ steps, giving the observation.

Suppose $B_i$ are as above. Set $X_{B_i}=\bigcup B_i$. Note that the $X_{B_i}$ form an almost disjoint family, because of the uniform bound on the intersections of sets from $B_i, B_j$ for any given $i,j$.

This process is the first step in an iterative procedure: $A_0=A$, $B_{0,i}=B_i$, $X_{0,i}=X_{B_i}$. The iteration will extend transfinitely. At stage $\alpha$, let $A_\alpha$ be $A$ minus the free ideal generated by all of the $X_{\beta,i}$, $\beta<\alpha$. Define $B_{\alpha,i}$ and $X_{\alpha,i}$ in the same way as above. This terminates after countably many steps, since each $B_{\alpha,i}$ is a different $\mathrm{proj}_0 F_n$. Let $\lambda$ be the length of this procedure: the termination condition is that $A_\lambda$ is countable. Now we can use the usual diagonalization to find a set almost disjoint from $A_\lambda$ and each of the $X_{\alpha,i}$. This set will be almost disjoint from every member of $A$, so $A$ cannot be maximal. $\Box$

Thursday, November 13, 2014

Spencinar #5: Happy and mad families, part 1

Yesterday Andrew Marks gave the Spencinar. I was really excited to hear that Andrew would be coming to UCLA, since all of the talks I heard him give in various seminars and meetings inspired me (and everyone else in the audience as well). What makes his talks so special is that he emphasizes things with the most mathematical substance and sprinkles his presentation with insightful remarks, yet manages to stay completely rigorous. Hopefully, these notes reflect some of the energy and clarity he spoke with at the seminar.

Prerequisites: some descriptive set theory (tree representation of analytic sets, Shoenfield absoluteness), and basic knowledge of forcing is helpful.

Andrew was motivated by some problems he was thinking about under the broad program of showing that there are no definable examples of certain kinds of objects whose constructions use the axiom of choice. An almost disjoint family is a collection of infinite subsets of $\omega$ with finite pairwise intersections. A mad family is a maximal infinite such family, which exists by Zorn's Lemma. The main result of this presentation was:

Theorem 1 (Mathias 1977): There are no $\mathbf{\Sigma^1_1}$ mad families.

Andrew gave two different proofs of Theorem 1: one was the original argument using what is now known as Mathias forcing, together with an absoluteness argument; the other was a recent proof due to Asger Törnquist (which he talked about in the colloquium last year, see these slides). The second proof is purely elementary and generalizes well to other situations (even those not involving almost disjoint sets). Törnquist used these methods to prove:

Theorem 2 (Törnquist 2014): There are no mad families in Solovay's model.

We will sketch Mathias's proof. You can skip this part if you are not familiar with the method of forcing (but if you are familiar with it, you will find these arguments extremely elegant!).

Definition: Mathias forcing $\mathbb{M}$ is the poset of pairs $(s, X)$ where $s\in[\omega]^{<\omega}$, $X\in[\omega]^{\omega}$ and $\max(s)<\min(X)$. The ordering is defined by $(r,Y)\le (s, X)$ iff $r$ end-extends $s$, $r\setminus s\subseteq X$, and $Y\subseteq X$.

So this is like a version of Prikry's forcing at $\omega$. It satisfies some similar properties which we will take as black boxes (proofs may be given in next week's seminar):

  1. (Mathias property) If $G$ is Mathias generic (thought of as a member of $[\omega]^\omega$), and $G'\in [\omega]^\omega$ is a subset of $G$, then $G'$ is also generic for $\mathbb{M}$.
  2. (Prikry property) If $\varphi$ is a formula in the forcing language and $(s,X)$ is a condition, then there is $Y\subseteq X$ infinite such that $(s,Y)\Vdash \varphi$ or $(s,Y)\Vdash \neg\varphi$ (we say that $(s,Y)$ decides $\varphi$).
As a digression, this forcing can be used to show that 

Proposition: Every analytic set is Ramsey. ($A\subseteq [\omega]^{\omega}$ is Ramsey iff there is an $X\in [\omega]^{\omega}$ so that for all infinite subsets $Y\subseteq X$, $Y\in A$ iff $X\in A$. In other words, either all infinite subsets of $X$ are in $A$, or none of them are.) 

Proof: If $A\subseteq [\omega]^{\omega}$ is analytic, then by the Prikry property there is $X\in [\omega]^{\omega}$ such that $(\emptyset, X)$ decides $\dot{G}\in A$. Note that $A$ used in the forcing language is actually shorthand for the analytic definition of $A$ (which we will evaluate in the extension to be $A^{V[G]}$); it is not the name for the ground model set $A$. Let $G$ be the generic obtained by forcing with $\mathbb{M}$ below $(\emptyset, X)$. Then any infinite subset $G'$ of $G$ is still $\mathbb{M}$-generic containing $(\emptyset, X)$, so the question of $G'\in A^{V[G]}$ is decided the same way as the question for $G$ (this uses Shoenfield absoluteness between $V[G']$ and $V[G]$). So $A^{V[G]}$ is Ramsey in $V[G]$. By absoluteness, $A$ must be Ramsey in the ground model. $\Box$

Mathias's proof uses a variant of Mathias forcing. We'll need some definitions first.

  • A free ideal on $\omega$ is an ideal on $\omega$ containing all finite sets. A free filter is the dual of a free ideal.
  • A free family is the complement of a free ideal $I$ in $P(\omega)$.
Informally, we think of a free ideal as the collection of "measure zero" sets, its dual filter as the collection of "full measure" sets, and the corresponding free family as the collection of "positive measure" sets.
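The simplest example (standard, and not specific to this talk) is the ideal $\mathrm{Fin}=[\omega]^{<\omega}$ of finite sets: its dual filter is the filter of cofinite sets, and the corresponding free family is
$$P(\omega)\setminus\mathrm{Fin}=[\omega]^{\omega},$$
the infinite subsets of $\omega$.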

The main definition of this part is of a kind of free family where a certain kind of diagonal intersection can be taken:

A happy family is a free family $\mathcal{H}$ such that whenever $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ so that any finite intersections of the $X_s$ are in $\mathcal{H}$, then there is an $X\in\mathcal{H}$ diagonalizing the $X_s$: that is,
  • if $\max(s)\in X$, then $\{n\in X:n>\max(s)\}\subseteq X_s$.
Proposition: $[\omega]^\omega$ is happy. 

Proof: A diagonal argument. Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections infinite. Pick the first element to be in $X_\emptyset$. At stage $i+1$, pick the next element greater than all previously chosen elements and lying in the intersection of all $X_s$ with $\max(s)$ at most the element chosen at stage $i$ (this is possible since only finitely many $s$ qualify, so that intersection is infinite). The resulting set $X$ diagonalizes the family: if $\max(s)\in X$, then every element of $X$ beyond $\max(s)$ was chosen inside $X_s$. $\Box$
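Here is a runnable toy instance of this diagonal construction (entirely my own illustration; the particular family is invented for the example). Take $X_s$ to be the multiples of $\max(s)+2$, with $X_\emptyset$ the even numbers, so all finite intersections are infinite; since this $X_s$ depends only on $\max(s)$, stage $i+1$ just needs a common multiple of $2,3,\ldots,x_i+2$ larger than $x_i$.

```python
from math import lcm

# Toy run of the diagonalization in the proof that [omega]^omega is happy.
# Invented family: X_s = multiples of max(s) + 2, X_emptyset = even numbers,
# so every finite intersection is infinite (it contains multiples of an lcm).

def diagonalize(steps):
    xs = [0]  # stage 0: the first element of X_emptyset
    for _ in range(steps - 1):
        prev = xs[-1]
        # The next element must lie in every X_s with max(s) <= prev,
        # i.e. be a common multiple of 2, 3, ..., prev + 2.
        L = lcm(*range(2, prev + 3))
        xs.append((prev // L + 1) * L)  # least multiple of L above prev
    return xs

print(diagonalize(4))  # [0, 2, 12, 360360]
```

The elements grow extremely fast, which matches the intuition that the diagonalizing set is very thin.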

Definition: The forcing $\mathbb{M}_\mathcal{H}$ is the subposet of $\mathbb{M}$ of conditions whose second coordinate is in $\mathcal{H}$.

The "happy" property is exactly what is needed for the proofs of the lemmas on Mathias forcing above to go through for $\mathbb{M}_\mathcal{H}$. 

Now we relate mad and happy families.

Lemma: If $A$ is a mad family, and $I$ is the free ideal generated by $A$, then $\mathcal{H}=P(\omega)\setminus I$ is a happy family.

Proof. Suppose $\{X_s:s\in [\omega]^{<\omega}\}$ are infinite subsets of $\omega$ with all finite intersections in $\mathcal{H}$. Let $X^0$ be constructed as in the proof of the previous proposition. If $X^0\in \mathcal{H}$, then we are done. Otherwise, there is $Y^0\in A$ such that $Y^0\cap X^0$ is infinite. Let $X^1_s=X_s\setminus Y^0$, let $X^1$ diagonalize the $X^1_s$ as in the proof of the previous proposition, and so on.

At the end of this construction, we have $X^i$ all diagonalizing the collection of $X_s$ (in the sense of the family $[\omega]^\omega$), and so that $X^i\cap Y^i$ is infinite for all $i$. Take $X$ to be the set $\{x_i:i<\omega\}$ (enumerated in increasing order) so that $x_i\in X^{f(i)}\cap Y^{f(i)}\cap \bigcap\{X_s: \max(s)<x_{i-1} \}$ (omitting the last intersection if $i=0$), where $f:\omega\rightarrow\omega$ is a function so that each $n\in\omega$ has infinite preimage. The point of the function $f$ is to ensure that $X$ has infinite intersection with each $Y^i$, so $X\in\mathcal{H}$. $\Box$

Sketch of the proof of Theorem 1:  If $A$ is a $\mathbf{\Sigma^1_1}$ mad family, then let $\mathcal{H}$ be the associated happy family. Force with $\mathbb{M}_\mathcal{H}$.

Claim: It is forced that $\dot{G}\in \mathcal{H}$ (again, $\mathcal{H}$ used here in the forcing language is the name for the set in the extension with the same definition as $\mathcal{H}$). 

Suppose otherwise. Then there is $p=(s,X)$ and a name $\dot{a}$ for a member of $A$ so that $p$ forces that $\dot{G}$ is almost contained in $\dot{a}$, say, above some $n$. But it is forced that $X$ is not almost contained in $\dot{a}$, so we can strengthen $p$ to force some natural number larger than $n$ into $\dot{G}\setminus \dot{a}$.

So if $G$ is generic, any infinite subset $G'\subseteq G$ must also be in $\mathcal{H}^{V[G]}$. So $G$ is almost disjoint with any set in $A^{V[G]}$, hence by absoluteness $A$ is not mad, contradiction. $\Box$

I wasn't following along 100%, so some of the arguments in the last few proofs might not be optimal (I pieced them together to fill gaps in my notes).

Tuesday, November 11, 2014

Learning project #1: Foreman-Magidor-Shelah "Martin's Maximum, saturated ideals, and nonregular ultrafilters I"

The paper "Martin's Maximum, saturated ideals, and nonregular ultrafilters I" by Foreman-Magidor-Shelah is perhaps the most influential set theory paper in recent times (let's use a loose definition of "recent"). In this paper, they introduce the "maximal" forcing axiom known as Martin's Maximum, and then prove some consistency results from large cardinals about the saturation of "natural" ideals. We will focus on the latter; for example:

1. Forcing with $\rm{Col}(\mu,<\kappa)$ makes the nonstationary ideal restricted to $[\mu]^{<\mu}$ precipitous.

2. Forcing with  $\rm{Col}(\omega_1,<\kappa)$ gives an $\aleph_2$-saturated ideal on $\omega_1$.

3. To make the nonstationary ideal on $\omega_1$ saturated, we must force with something more than the collapse. But this is also possible and can be done with $\kappa$-c.c. semiproper forcing.

All of the conclusions above are also consequences of Martin's Maximum.

Recall the definition and motivation of precipitous ideals from the beginning of a previous post. In that post, we gave the Jech--Magidor--Mitchell--Prikry construction of a precipitous ideal on $\omega_1$. These authors in fact proved that the nonstationary ideal on $\omega_1$ can consistently be precipitous. The method was an iterated forcing that at each step destroyed the stationarity of sets in the ideal constructed so far.

However, the FMS paper managed to find a different consistency proof of the precipitousness of the nonstationary ideal on $\omega_1$. In fact, the proof works just as well for other regular cardinals, unlike the JMMP construction, which uses something special about $\omega_1$ for destroying stationarity (although generalizations of that construction to other cardinals have been carried out in work of Gitik). The drawback is that the FMS construction uses quite large cardinals, whereas Gitik found the equiconsistency.

I plan to at least cover (1) and (2) in a forthcoming series of posts. The plan is to prove (3) sometime in a future learning project dealing with semiproper forcing, RCS iterations, and the other parts of the FMS paper.

The posts will come out slowly because I want to make sure I fully understand each step. The references will be the FMS paper and Section 8 of Foreman's Handbook chapter.

Thursday, November 6, 2014

Spencinar #4: Weakenings of Martin's Axiom

Spencer talked today about weakenings of Martin's Axiom of the form "All c.c.c. posets have a nice property." The nice properties were things like "$\omega_1$-Knaster" or "$\sigma$-centered." Actually, Todorcevic--Velickovic proved that $MA_{\aleph_1}$ is equivalent to "all c.c.c. posets of size $\aleph_1$ are $\sigma$-centered". We saw how Martin's Axiom implies these other principles.

The main thing Spencer talked about was that "All c.c.c. posets $\mathbb{P}$ have $\mathbb{P}\times\mathbb{P}$ c.c.c." implies that the continuum hypothesis fails. To prove this, he used CH to construct an entangled linear order and showed that the poset of chains of this entangled linear order is c.c.c. but its square is not. The proof was very involved, and I'll try to write up some more about this when I have some time to think about the main underlying ideas. I remember reading about entangled linear orders in Shelah's book on cardinal arithmetic, but not being very motivated by them. This talk showed me the power of these linear orders when proving that the poset of chains is c.c.c.

Yuval Peres DLS

This week, Yuval Peres gave a Distinguished Lecture Series, which is a series of three talks for a general mathematical audience by a prominent mathematician visiting UCLA. I felt really compelled to attend these talks, since Peres seems to be working on very fundamental problems that have some interest for me. I read some of his work when I was in high school, and I was pleased to finally attend some of his lectures!

This blog post deals with the talks on Tuesday and Wednesday. I won't give a comprehensive account, because my comprehension wasn't 100%.

The first talk was about Search Games and Optimal Kakeya sets, based on joint work with Babichenko, Peretz, Sousi, and Winkler. He defined a game $G_n$ with two players "Hunter" and "Rabbit" on a cycle with $n$ vertices. At time $0$, each player chooses a position on the cycle. Then for each time step, the hunter moves to an adjacent position (or stays at the same position). The rabbit simultaneously moves to any position on the cycle. The players cannot see each other (Peres said to imagine the game is played in the dark). The game ends when Hunter and Rabbit occupy the same location.

The punchline was that the optimal winning strategies for this game can be used to construct an "optimal" Kakeya set. This Kakeya set is a union of a certain number of triangles, and has the smallest area among all Kakeya sets that are unions of the same number of triangles. A Kakeya set is just a subset of the plane that contains a unit segment in every direction. The first construction of a Kakeya set was by Besicovitch, and these sets have some importance in harmonic analysis.

But I was really fascinated by this simple game for its own interest.

For technical reasons, it is easier to consider the game $G^*_n$, which runs for $n$ time steps, with Hunter and Rabbit moving as above. The payoff for Hunter is 1 if he's successful, 0 otherwise. To make it a zero-sum game, the payoff for Rabbit is $-1$ if Hunter is successful, 0 otherwise. Von Neumann's minimax theorem asserts the existence of "optimal" strategies for both players in such games. A strategy is a rule which tells a player what move to make, each move assigned a probability depending on the situation (previous moves and information known to the player). Then von Neumann's theorem (as I understood it from the talk) says that there is a value $v$ such that $v$ is both:

  • the maximum possible expected payoff for any of I's strategies, assuming that II plays optimally against that particular strategy (so the max over all strategies of I of (the min over all strategies of II of the expected payoff of running the strategies against each other))
  • the minimum possible expected payoff for I for any of II's strategies, assuming that I plays optimally against that particular strategy (so $-v$ has the same role for II as $v$ had for I above).

I tried to say it in a way which I think can be easily made precise. It's very nice that you can formulate this notion of optimality, and prove that optimal strategies exist. It seems like this definition of optimality can often involve being as unpredictable as possible, because if the opponent knows your strategy, he can use that information to predict your moves and play against that. So, barring situations where for example there is a sequence of moves with no counterplay, it seems to be greatly beneficial to move around randomly.
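In symbols (my notation, for a finite zero-sum game): if $x$ and $y$ range over mixed strategies of players I and II, and $u(x,y)$ is the expected payoff to I, then the theorem says
$$v=\max_x\min_y u(x,y)=\min_y\max_x u(x,y).$$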

What's the connection between $G_n$ and $G^*_n$? Peres answered this in his next slides: applying the minimax theorem to our situation, there is a value $p_n$ for the game $G^*_n$, which can be interpreted as the probability of capture under optimal play. Now we argue that the mean capture time in the original game $G_n$ under optimal play is between $n/p_n$ and $2n/p_n$. The rabbit can just play his $G^*_n$ strategy afresh in every block of $n$ time steps, so he can ensure that the mean capture time is $\ge n/p_n$ (using linearity of expectation). The hunter can play his $G^*_n$ strategy in every other block of $n$ time steps, using the odd blocks to move to the starting position for his next strategy. This ensures a mean capture time $\le 2n/p_n$ (again using linearity of expectation).

He then went over some sample strategies for the players, involving staying still, moving in some direction at constant speed, etc. It was amazing because he involved the audience, asking them to try to figure out different strategies. I forgot exactly what $p_n$ turned out to be: I think it was something like $n^{-1/2}$? But the optimal strategy for the hunter turned out to be going at a random speed, like I suspected. The rabbit's optimal strategy actually involved not moving around too much: I stopped taking notes here because I was struggling to keep up with his arguments at this point. Someone in the audience gave an intuitive explanation of this, but I must admit I wasn't sure about this explanation either.
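The rules of $G^*_n$ are simple enough to simulate. Here is a toy Monte Carlo sketch (my own code; both strategies below are naive placeholders — a lazy random walk for the hunter and uniform jumps for the rabbit — not the optimal strategies from the talk):

```python
import random

# Toy Monte Carlo model of the game G*_n on the n-cycle.
# Naive placeholder strategies: hunter does a lazy random walk,
# rabbit jumps to a uniformly random vertex each step.

def play(n, rng):
    """Run one n-step game; return True if the hunter catches the rabbit."""
    hunter = rng.randrange(n)
    rabbit = rng.randrange(n)
    for _ in range(n):
        if hunter == rabbit:
            return True
        hunter = (hunter + rng.choice((-1, 0, 1))) % n  # step to a neighbor or stay
        rabbit = rng.randrange(n)                       # rabbit may jump anywhere
    return hunter == rabbit

def capture_probability(n, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(play(n, rng) for _ in range(trials)) / trials

print(capture_probability(16))
```

Swapping in other strategies here is a fun way to build intuition for why unpredictability helps both players.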

I didn't know the minimax theorem before, so this talk was very eye-opening. I wonder how the proof of that theorem goes, since evidently it doesn't construct the optimal strategies (which seem to be found from methods special to the game at hand). If the argument is nice, I'll try to write a blog post about it. I wonder if these probabilistic strategies can be somehow generalized to the infinite situations that set theorists are interested in (probably just nonsense, I know!)

The DLS on Wednesday was about random walks on groups (which seem to just be random walks on the Cayley graph of a group, with different weights on the generators). It was a bit hard for me to follow, but he proved some nice results on the blackboard bounding the expected distance in the graph between the starting point and the $n$th step of the walk (the rate of escape). It reminded me a lot of the IST 1 class from Caltech, since a lot of inequalities I learned there were used in the talk. Unfortunately, I won't post my notes about this talk.

Saturday, November 1, 2014

UCLA Logic Colloquium: Elementary amenable groups, chain conditions, and descriptive set theory (Jay Williams)

The second talk of the Friday doubleheader was Jay Williams's colloquium talk about chain conditions in group theory, which was based on joint work with Philip Wesolek.

Jay and his coauthor proved that the class $\mathcal{M}_C$ of groups that satisfy the minimal condition on centralizers is coanalytic but not Borel. These definability notions are relative to the space of countable groups, which can be realized as the set of normal subgroups of $\mathbb{F}_\omega$, the free group on countably many generators, equipped with the subspace topology from $2^{\mathbb{F}_\omega}$.

A group satisfies the minimal condition on centralizers iff there is no infinite strictly decreasing sequence of subgroups, each of which is the centralizer of some subset of the group.

Given a countable group $G$, enumerate the elements and form the tree whose nodes correspond to finite sequences of elements of $G$. Identify a node with its parent if they have the same centralizer, giving a new tree $T_G$. This gives a map $G\mapsto T_G$ from the space of countable groups to the set of trees so that $G\in \mathcal{M}_C$ iff $T_G\in \mathrm{WF}$, where $\mathrm{WF}$ is the set of well-founded trees. Therefore, $\mathcal{M}_C$ is coanalytic. To show it's not Borel, the Boundedness Theorem is invoked: a Borel subset of $\mathrm{WF}$ must have bounded rank functions. For this, constructions were given to find a group $B\in \mathcal{M}_C$ with larger rank than an arbitrary group $A\in \mathcal{M}_C$ (take its product with a nonabelian group) and to find a $B\in \mathcal{M}_C$ whose rank is larger than that of an arbitrary countable collection of groups in $\mathcal{M}_C$ (take the free product).

Another result in this vein is that $\mathcal{M}_C$ restricted to finitely generated groups is coanalytic but not Borel. This uses a construction of Karrass and Solitar (reading about their friendship on the MacTutor archive is really inspiring!)

The maximal condition on normal subgroups (max-n) is the property of a group $G$ that holds iff there is no infinite strictly increasing chain of normal subgroups of $G$. Hall proved that a metabelian group (an abelian extension of an abelian group) satisfies max-n iff it is finitely generated, so the max-n condition is natural from a group-theoretic point of view. This class is also coanalytic but not Borel, by similar constructions together with a theorem of Olshanskii.

The second half of the talk showed that a special class of amenable groups, the elementary amenable groups (EG), is not equal to the class of all amenable groups. The elementary amenable groups are constructed from abelian and finite groups by taking closure under countable unions and abelian extensions. To do this, they show that the class EG is coanalytic but not Borel. The method is similar to the above, where the construction of the tree associated to $G$ goes roughly by taking commutator subgroups (this handles increasing unions) intersected with all subgroups of a certain index at each level (this handles the abelian extensions). The construction for unbounded rank depends on a lemma due to Neumann--Neumann and Hall.

Jay noted that Grigorchuk gave an explicit example of a finitely generated amenable group that is not EG. The method discussed in the talk gives an elementary, but nonconstructive, proof of this fact.

UCLA Logic Seminar: Baire measurable paradoxical decompositions via matchings (Andrew Marks)

Today's logic seminar was by Andrew Marks, entitled Baire measurable paradoxical decompositions via matchings. The work, joint with Spencer Unger, showed the following theorem:

Main Result: Suppose a group $\Gamma$ acts on a Polish space $X$ by Borel automorphisms (i.e., $x\mapsto \gamma\cdot x$ is Borel for every $\gamma\in \Gamma$). Then if this action has a paradoxical decomposition, it has one where the pieces have the Baire property.

If $a$ is a group action of $\Gamma$ on a space $X$, a paradoxical decomposition of $a$ is a partition of $X$ into finitely many pieces $\{A_1,\ldots, A_n, B_1,\ldots, B_m\}$ such that there exist $\alpha_1,\ldots, \alpha_n, \beta_1,\ldots,\beta_m\in \Gamma$ so that $\alpha_1A_1,\ldots, \alpha_nA_n$ and $\beta_1B_1,\ldots,\beta_mB_m$ each partition $X$.

As an example, the Banach-Tarski paradox corresponds to a paradoxical decomposition of the action of the group of rotations on the unit ball in $\mathbb{R}^3$ (minus the origin, for technical reasons).

The amenable groups, those which have a finitely additive translation invariant probability measure, cannot have paradoxical decompositions of their free actions. Tarski showed that the left translation action of a nonamenable group on itself admits a paradoxical decomposition.
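The standard example behind Tarski's theorem is the free group $F_2=\langle a,b\rangle$: writing $W(x)$ for the set of reduced words beginning with $x$, one checks that $F_2 = W(a)\sqcup aW(a^{-1})$, and likewise with $b$, which (after the usual bookkeeping with the identity) yields a paradoxical decomposition of the left translation action. A brute-force check of the first identity on words of bounded length (my own illustration, not from the talk):

```python
GENS = "aAbB"          # A denotes a^{-1}, B denotes b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    # all freely reduced words over {a, a^{-1}, b, b^{-1}} of length <= max_len
    words, frontier = {""}, [""]
    for _ in range(max_len):
        frontier = [w + g for w in frontier for g in GENS
                    if not (w and INV[w[-1]] == g)]
        words.update(frontier)
    return words

def left_mul(g, w):
    # left-multiply the reduced word w by the generator g, with cancellation
    return w[1:] if w and w[0] == INV[g] else g + w

N = 6
all_words = reduced_words(N)
W_a = {w for w in all_words if w.startswith("a")}
W_A = {w for w in all_words if w.startswith("A")}
a_W_A = {left_mul("a", w) for w in W_A}

# W(a) and a·W(a^{-1}) are disjoint, and together they cover every reduced
# word (checked up to length N-1, since left translation can shorten a word)
shorter = {w for w in all_words if len(w) <= N - 1}
assert W_a.isdisjoint(a_W_A)
assert shorter <= W_a | a_W_A
```

Translating the single piece $W(a^{-1})$ by $a$ thus doubles $W(a)\cup W(a^{-1})$ into all of $F_2$, which is exactly the failure of any finitely additive invariant probability measure.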

Marczewski asked in 1930 whether there is a Banach-Tarski decomposition using pieces with the Baire Property, which was answered affirmatively by Dougherty-Foreman. Andrew's talk extended this result (with different proof methods) to all nonamenable groups.

Someone in the audience asked whether the Baire decomposition can always use the minimum number of pieces of a paradoxical decomposition. Andrew stated a result of Mycielski that the answer is no, but the present work gives a bound for the number of Baire pieces needed (something like $n^2$, where $n$ is the least number of pieces in any paradoxical decomposition).

To do this, they constructed a graph whose perfect matchings exactly correspond to paradoxical decompositions. Suppose $a$ is an action of a group $\Gamma$ on $X$. Given a finite set $S\subseteq \Gamma$ closed under inverses, form the graph $G_p(a,S)$ with vertex set $\{0,1,2\}\times X$ (three disjoint copies of $X$), where $(i,x)\sim (j,y)$ iff exactly one of $i,j$ is equal to 0 and there is some $s\in S$ such that $sx=y$. The fact that $S$ is closed under inverses makes this relation symmetric.

Then he recalled Hall's theorem: a locally finite bipartite graph $G$ has a perfect matching iff for every finite set $F$ on one side of the bipartition of $G$, $|N(F)|\ge|F|$, where $N(F)$ is the set of vertices adjacent to some vertex in $F$.

The condition that Hall's theorem shows equivalent to the existence of a perfect matching is called the Hall condition.
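To make the finite statement concrete (my own toy example, not from the talk), here is a brute-force check of the Hall condition together with the standard augmenting-path search for a perfect matching on a small bipartite graph:

```python
from itertools import combinations

# adjacency: left vertex -> list of right vertices
adj = {
    0: [0, 1],
    1: [0, 2],
    2: [1, 3],
    3: [2, 3],
}
left = list(adj)

def hall_condition():
    # brute force: every finite F on the left has |N(F)| >= |F|
    for k in range(1, len(left) + 1):
        for F in combinations(left, k):
            NF = set().union(*(adj[u] for u in F))
            if len(NF) < len(F):
                return False
    return True

def perfect_matching():
    match = {}  # right vertex -> left vertex
    def augment(u, seen):
        # try to match left vertex u, rematching along an augmenting path
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    for u in left:
        if not augment(u, set()):
            return None
    return {u: v for v, u in match.items()}  # left vertex -> right vertex

assert hall_condition()
m = perfect_matching()
assert m is not None and len(set(m.values())) == len(left)
```

In the Borel setting the greedy/augmenting argument breaks down (this is roughly the content of the Laczkovich and Conley--Kechris counterexamples), which is why the theorem below needs the strengthened $(1+\epsilon)$ Hall condition.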

Laczkovich showed that there is no Borel version of Hall's theorem, and Conley--Kechris found more examples. I had encountered these results earlier: around 2009 during the time of the Conley--Kechris results, I had finished working on a project in (finite) matching theory and became interested in "descriptive" combinatorics, and Kechris shared with me these two papers that gave negative answers to the natural Borel questions for matchings.

But it turns out that if you only require a Borel perfect matching on a Borel $G$-invariant comeager set (one that is a union of connected components), then it can be done, provided the Hall condition is strengthened.

Theorem: Suppose $G$ is a locally finite bipartite Borel graph and there is $\epsilon>0$ such that for every finite set $F$ in one part of the bipartition,
$$|N(F)|\ge (1+\epsilon)|F|.$$
Then $G$ has a Borel perfect matching on a Borel $G$-invariant comeager set.

In response to a question, Andrew noted that this hypothesis says the isoperimetric constant of the graph is greater than $1$. The theorem was proven using nice elementary arguments.

To apply this to the graphs of the form $G_p(a,S)$, for $S$ a finite set of generators witnessing the paradoxical decomposition and closed under inverses, notice that replacing $S$ by $S^2$ yields $|N_{G_p(a,S^2)}(F)|\ge 2|F|$. We may assume $F=\{1,2\}\times F'$ (the other case is trivial). Now if $N_{G_p(a,S)}(\{1,2\}\times F')=\{0\}\times F''$, then
$$|N_{G_p(a,S^2)}(F)|\ge |N_{G_p(a,S)}(\{1,2\}\times F'')|$$
$$\ge|\{1,2\}\times F''|$$
$$=2|F''|\ge 2 |F|$$
where the second inequality is the Hall condition for $G_p(a,S)$ applied to $\{1,2\}\times F''$, and the last uses $|F''|\ge|F|$, the Hall condition applied to $F$ itself. This gives the strong Hall condition needed to apply the theorem (with $\epsilon=1$).

Tuesday, October 28, 2014

Spencinar #3: Saturated ideals (part 2)

Now we'll construct an $\aleph_2$-saturated ideal on $\omega_1$. The construction is originally due to Kunen.

The idea is similar to the previous construction. We will look at the ideal induced by a large cardinal embedding $j$ after collapsing the large cardinal $\kappa$ to become $\omega_1$. We know the quotient algebra is equivalent to the image of the collapse, so we hope that this is $\kappa^+$-c.c. To do this requires a huge cardinal, which gives a situation where $j(\kappa)$ is also the amount of closure of the target model in $V$. We want to collapse to turn $j(\kappa)$ into $\kappa^+$, but in order to have a master condition to extend the embedding to the extension by this collapse, we cannot use the Levy collapse. Instead, we will use the Silver collapse. The first poset collapsing $\kappa$ to become $\omega_1$ must absorb the Silver collapse and so also cannot be the Levy collapse: we will construct a poset which will fulfill this purpose by design.

(Drinking game: consume a designated amount of an alcoholic beverage every time you read the word "collapse" in this post...)

Definition: The Silver collapse $S(\kappa,\lambda)$ is the forcing whose elements are partial functions $p: \kappa\times \lambda\rightarrow\lambda$ with domain of size $\le \kappa$, such that the domain is contained in $\eta\times \lambda$ for some $\eta<\kappa$, and $p(\alpha,\beta)<\beta$ for all $(\alpha,\beta)$ in the domain.

Like the Levy collapse $\mathrm{Col}(\kappa,<\lambda)$, the Silver collapse adds surjections from $\kappa$ onto every ordinal less than $\lambda$, with uniformly bounded domains, but here we also allow some conditions with support of size equal to $\kappa$. It is $\lambda$-c.c. and $<\kappa$-closed, and every regular cardinal of the ground model between $\kappa$ and $\lambda$ is collapsed to have cofinality $\kappa$.

Start with a huge cardinal $\kappa$ and a huge embedding $j:V\rightarrow M$ with critical point $\kappa$ so that $j(\kappa)=\lambda$ and $M$ is closed under $\lambda$-sequences. We will define a "universal collapse" $\mathbb{P}$ with the $\kappa$-c.c. which will turn $\kappa$ into $\omega_1$, and then use the Silver collapse in the extension to turn $\lambda$ into $\omega_2$.

The crucial property of $\mathbb{P}$ is the following: if $\mathbb{Q}$ completely embeds into $\mathbb{P}$ and has inaccessible cardinality $\alpha$, then there is a complete embedding $i:\mathbb{Q}\ast S(\alpha,\kappa)\rightarrow \mathbb{P}$ extending the embedding of $\mathbb{Q}$. Thus, we are preparing $\mathbb{P}$ to absorb the Silver collapse.

The construction of $\mathbb{P}$ is an iteration with finite support starting with 0th stage $\mathrm{Col}(\omega,<\kappa)$, and at inaccessible stages $\alpha$, forcing with the Silver collapse of $V^\mathbb{Q}$ for some poset $\mathbb{Q}$ which completely embeds into $\mathbb{P}_\alpha$. By a suitable bookkeeping, we can achieve the crucial property. By the chain condition of the Silver collapse, it is easy to see that $\mathbb{P}$ has the $\kappa$-c.c.

Since $\mathbb{P}$ is $\kappa$-c.c., $\mathbb{P}$ completely embeds into $j(\mathbb{P})$ via $j$ (any maximal antichains of $\mathbb{P}$ have size $<\kappa$, so they are fixed by $j$ and remain maximal in $j(\mathbb{P})$).

So, letting $G$ be generic for $\mathbb{P}$ over $V$, we can force further to obtain $\hat{G}$ which is $V[G]$-generic for $j(\mathbb{P})$ and extend $j$ to $j^+:V[G]\rightarrow M[\hat{G}]$. Note that $M[\hat{G}]$ is $\lambda$-closed in $V[\hat{G}]$.

Therefore, using the crucial property of $j(\mathbb{P})$, there is a complete embedding $i:\mathbb{P}\ast S(\kappa,\lambda)\rightarrow j(\mathbb{P})$ extending $j$ restricted to $\mathbb{P}$. Let $\mathbb{Q}$ be this Silver collapse, and let $H\in M[\hat{G}]$ be the generic for $\mathbb{Q}$ over $V[G]$ induced by $\hat{G}$ via the complete embedding (using closure of $M[\hat{G}]$). We can extend the embedding $j^+$ to $V[G\ast H]$, since $\bigcup j``H$ is a condition in $j(\mathbb{Q})$. This is where we used the Silver collapse--for the Levy collapse, $\bigcup j``H$ would have too large a domain, and it does not interact well with the huge embedding. The usual large cardinal techniques allow us to extend to $\hat{j}:V[G\ast H]\rightarrow M[\hat{G}\ast\hat{H}]$.

Now consider the $V[G\ast H]$-ultrafilter $U(\hat{j},\kappa)$ (defined in $V[\hat{G}\ast\hat{H}]$). We can use the $<\lambda$-closure of $j(\mathbb{Q})$ to find a $\tilde{U}$ in $V[\hat{G}]$ which is a $V[G\ast H]$-ultrafilter, normal, and $V[G\ast H]$-$\kappa$-complete. The construction works by taking a decreasing $\lambda$-sequence of conditions which together decide $\kappa\in \hat{j}(X)$ for every $X$ in $V[G\ast H]$, additionally meeting dense sets which ensure that, for any $V[G\ast H]$-sequence of fewer than $\kappa$-many such $X$ forced to be in $U(\hat{j},\kappa)$, the intersection is also in the ultrafilter.

In $V[G\ast H]$, $\kappa=\omega_1$ and $\lambda=\omega_2$. Let $I=\{X\subseteq \kappa: 1\Vdash X\in \dot{\tilde{U}}\}$. Then it can be checked that $I$ is normal and $\kappa$-complete, and since (using similar arguments as in Part 1) $j(\mathbb{P})/G\ast H$ is $\lambda$-c.c. and equivalent to the quotient algebra of $I$, $I$ is $\aleph_2$-saturated.

Spencinar #3: Precipitous ideals (part 1)

This week I'll continue the theme of Spencinar #1 by introducing some more concepts around the idea of generic ultrapowers by ideals. The emphasis will be on using large cardinals to construct ideals on small cardinals. The material comes mostly from Foreman's comprehensive chapter in the Handbook.

Prerequisites: Basic theory of large cardinals, forcing, and ultrapowers, some ideas in the first Spencinar post. The Boolean algebra approach to forcing will be helpful.

Recall the basic situation. If $I$ is an ideal on a set $Z$, then we can force with the poset $P(Z)/I$ as defined in Spencer's talk. Its elements are the mod-$I$ equivalence classes of the positive subsets of $Z$, ordered by the subset relation (also taken mod $I$). We denote the class of $S\subseteq Z$ by $[S]_I$. This poset is often called the quotient algebra of $I$. The generic object $G$ obtained by forcing with this poset is (equidefinable with) a $V$-ultrafilter, and so in the extension $V[G]$ we can take the ultrapower of $V$ by $G$. We denote this ultrapower by $V^Z/G$.

The basic definition, due to Jech and Prikry, is:

Definition: An ideal $I$ on a set $Z$ is precipitous if it is forced by $P(Z)/I$ that:

$V^Z/G$ is well-founded (in V[G]).

There are several combinatorial characterizations of precipitousness for an ideal $I$.

Let's give some nontrivial examples of this. A very basic result is that saturated ideals are precipitous.

Proposition: Suppose $I$ is $\kappa$-complete and $\kappa^+$-saturated. Then $I$ is precipitous.

Proof: Suppose that $I$ is not precipitous, so we can find $[S]_I$ forcing that $V^Z/G$ is ill-founded. Then let $\langle\dot{F}_n:n<\omega\rangle$ be $P(Z)/I$-names for functions $Z\rightarrow V$ in $V$ so that $[S]_I$ forces that for every $n$, $[\dot{F}_{n+1}]_G \in [\dot{F}_n]_G$ in the ultrapower.

The key is that one can replace the $\langle\dot{F}_n:n<\omega\rangle$ by actual functions in $V$ without changing $S$. Fix $n<\omega$. Find a maximal antichain $\mathcal{A}$ in $P(Z)/I$ below $[S]_I$ deciding the value of $\dot{F}_n$. By a "disjointifying" argument using saturation and completeness, we can find a system of pairwise disjoint representatives $\langle A_i:i\in \mathcal{A}\rangle$ for the elements of this antichain. Define $f_n(z)$ to be the value of $\dot{F}_n$ forced by $[A_i]_I$ if $z\in A_i$, and $0$ otherwise. Then we can see that $[S]_I$ forces $[\dot{F}_n]_G = [f_n]_G$.

But now (using countable completeness of $I$), in $V$, for almost every $z\in S$ we have $f_{n+1}(z)\in f_n(z)$ for every $n$, giving an ill-founded sequence in $V$, a contradiction. $\Box$

So the generic ultrapowers we considered last time (which didn't exist) were well-founded.

An immediate question is: can there be a precipitous ideal on a small cardinal, e.g., $\omega_1$? The generic ultrapower construction and absoluteness of constructibility show that the existence of any precipitous ideal implies that it is consistent that there is a nontrivial elementary embedding $L\rightarrow L$, so the existence of a precipitous ideal has consistency strength at least that of the existence of $0^\#$.

We construct a precipitous ideal in a forcing extension starting from a measurable cardinal. The ideal is a fundamental one--start from a large cardinal ideal $J$, force to collapse the large cardinal to a small cardinal, and show that the ideal $I$ generated by $J$ in the extension retains some of the nice properties of $J$.

Actually, we'd like our precipitous ideal to be a natural ideal, e.g., the nonstationary ideal on $\omega_1$. This is possible. However, today we will only get precipitousness for an induced (read: unnatural) ideal, that is, an ideal defined as the collection of subsets forced not to be in a particular ultrafilter of the form $U(\hat{j},i)=\{X:i\in \hat{j}(X)\}$ derived from a certain generic embedding $\hat{j}$. (We'll see this in action below.)

Theorem: Suppose $\kappa$ is a measurable cardinal and let $U$ be a $\kappa$-complete normal ultrafilter on $\kappa$. Let $\mathbb{P}=\mathrm{Col}(\omega, <\kappa)$ be the Levy poset collapsing $\kappa$ to $\omega_1$. Then the ideal $I$ in $V^\mathbb{P}$ generated by the dual of $U$ is precipitous in $V^\mathbb{P}$.

Proof. By the basic theory of elementary embeddings and large cardinals, the ultrapower embedding $j:V\rightarrow M$ given by $U$ can be extended to $j^+:V[G]\rightarrow M[G\ast H]$, where $G$ is generic for $\mathbb{P}$ and $G\ast H$ is $V$-generic for $j(\mathbb{P})$ (here we used that $\mathbb{P}$ completely embeds into $j(\mathbb{P})=\mathrm{Col}(\omega, <j(\kappa))^M$, and we'll write $j(\mathbb{P})=\mathbb{P}\ast \mathbb{Q}$). The extension takes the object $\mathbb{P}$-named by $\dot{\tau}$ (evaluated by $G$) to the object $j(\mathbb{P})$-named by $j(\dot{\tau})$ (evaluated by $G\ast H$).

The correct way to think of this is that forcing with $\mathbb{Q}$ over $V$ adds the embedding $j^+$.

First we show that (in $V[G]$) 

Claim: $I$ is equal to the ideal $\{X \subset \kappa : 1\Vdash  \kappa \not\in  j^+(X) \}$. 

Clearly, $I$ is contained in this ideal. Conversely, work in $V$ and suppose $\dot{X}$ is a $\mathbb{P}$-name for a subset of $\kappa$, and $p\in \mathbb{P}$ forces that $1\Vdash_{\mathbb{Q}} \kappa \not\in  j^+(\dot{X})$. Define $A=\{\alpha:p\Vdash\alpha\not\in \dot{X}\}$. Then $A\in U$ (and so $p$ forces that $\dot{X}\in \dot{I}$, finishing the proof); otherwise we can define $F:(\kappa \setminus A)\rightarrow \mathbb{P}/p$ so that $F(\alpha)$ forces $\alpha\in \dot{X}$. But then in $M$, $F$ represents a condition below $j(p)=p$ which forces $\kappa$ into $j^+(\dot{X})$, contradicting the choice of $p$. This proves the claim.

Now we show the precipitousness. Work in $V[G]$. Define a Boolean algebra embedding $P(\kappa)\rightarrow \mathcal{B}(\mathbb{Q})$ (here the codomain is the Boolean completion of $\mathbb{Q}$) which sends a set $X$ to the truth value $\lVert \kappa\in j^+(X)\rVert$. The use of the Boolean completion is so that this map is defined--in general, many different members of $\mathbb{Q}$ could force $\kappa\in j^+(X)$, but in the Boolean completion there is a greatest such member. Since $I$ is the kernel of this embedding, we see that the embedding factors to $\iota:P(\kappa)/I\rightarrow \mathcal{B}(\mathbb{Q})$. We will show that $\iota$ is a dense embedding of the quotient algebra of $I$ into $\mathcal{B}(\mathbb{Q})$.

Claim: The range of $\iota$ is dense in $\mathcal{B}(\mathbb{Q})$. 

Since $\mathbb{Q}$ is densely embedded in $\mathcal{B}(\mathbb{Q})$, it suffices to show that for any $q\in\mathbb{Q}$ there is $X\subseteq \kappa$ with $\iota(X)=q$. Let $q=j(F)(\kappa)$, where $F\in V$ is a function such that for all $\alpha$, $F(\alpha)\in \mathrm{Col}(\omega, <\kappa)$. Define $X=\{\alpha: F(\alpha)\in G\}$. Now for any $H$ which is $V[G]$ generic for $\mathbb{Q}$, we have

$\iota(X)\in H$ iff $\kappa \in j^+(X)$ (definition of $\iota$)
iff $j(F)(\kappa)\in j^+(G)$ (definition of $X$)
iff $q\in j^+(G)$ (definition of $F$)
iff $q\in H$

so by separativity, $\iota(X)=q$, proving the claim.

So forcing with the quotient algebra of $I$ is equivalent to forcing with $\mathbb{Q}$; generic filters are equidefinable using the map $\iota$. So let $H$ be the $\iota$-image of an arbitrary $V[G]$-generic filter for $P(\kappa)/I$.

Now in $V[G\ast H]$, let $U^+=\{X\subseteq \kappa: \kappa\in j^+(X)\}$ be the ultrafilter derived from $j^+$, so we know from general considerations that the ultrapower of $V[G]$ by $U^+$ embeds into $M[G\ast H]$ (the embedding takes the element represented by a function $f$ to $j^+(f)(\kappa)$). But $U^+$ is exactly the filter generated by the $\iota$-preimage of $H$, by the definition of $\iota$, so it is the original generic for $P(\kappa)/I$.

Wednesday, October 22, 2014

Spencinar #2: Martin's Maximum and Weak Square

Today John Susice gave an excellent presentation about a paper of Cummings and Magidor entitled "Martin's Maximum and Weak Square". Since the content of the talk was pretty close to the paper (which is well-written and available online), I won't blog about the precise details of the proof. Instead, I'll try to summarize what I see are the main points, and offer some soft speculation on future possibilities. I realize that what I write below might be unreadable, but I hope it conveys my general understanding of things, which might be helpful in a different way from careful writeups of proofs.

First, let me explain the two title principles. Martin's Maximum (MM) is a very strong forcing axiom, i.e., a principle which, for posets belonging to certain classes, asserts the existence of a filter on each poset meeting any family of dense sets up to a certain size. We informally call this filter, meeting the appropriate choice of dense sets, a pseudogeneric. Martin's Axiom is perhaps the most well-known example of a forcing axiom; MM is the analogue of this for the class of posets whose generic extensions preserve the stationarity of ground model stationary subsets of $\omega_1$ (such posets are usually simply called stationary preserving), and we may meet any $\omega_1$-many dense sets at a time. John made an amusing remark that the name "non-forcing axiom" might be more appropriate, since the pseudogeneric objects such axioms provide exist in the ground model, without any actual forcing taking place.

Weak square is a combinatorial principle introduced by Jensen. Generally, square principles assert the existence of sequences $\langle C_\alpha: \alpha<\lambda^+\rangle$ so that each $C_\alpha$ is club in $\alpha$, and there is some coherence between the $C_\alpha$. The "weak" part means that we will also allow multiple clubs at each level. I won't get into it too much, because we will really be working with a consequence of weak square: the existence of good scales.

Let $\lambda$ be a singular cardinal, and $\langle \lambda_i : i<\mathrm{cf}(\lambda)\rangle$ a cofinal sequence of regular cardinals less than $\lambda$. In this case, a scale is a sequence of functions $\langle g_\alpha: \alpha<\lambda^+\rangle$ in $\prod_{i<\mathrm{cf}(\lambda)} \lambda_i$ which is increasing and unbounded in the eventual domination ordering. Shelah proved that these scales exist for any singular cardinal (for some careful choice of $\langle \lambda_i : i<\mathrm{cf}(\lambda)\rangle$).
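Spelling out the ordering explicitly (my gloss; the definitions were left informal above): for $f,g\in \prod_{i<\mathrm{cf}(\lambda)}\lambda_i$, eventual domination and the scale property read as follows.

```latex
% Eventual domination: f <* g iff f(i) < g(i) from some point on.
\[
  f <^* g \iff \exists i_0 < \mathrm{cf}(\lambda)\;
               \forall i \ge i_0 \;\; f(i) < g(i).
\]
% A scale of length \lambda^+ is <*-increasing and <*-cofinal:
\[
  \alpha < \beta < \lambda^+ \implies g_\alpha <^* g_\beta,
  \qquad
  \forall f \in \textstyle\prod_{i} \lambda_i \;
  \exists \alpha < \lambda^+ \;\; f <^* g_\alpha.
\]
```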

A good point of the scale is an $\alpha_0<\lambda^+$ of cofinality $>\mathrm{cf}(\lambda)$ for which the ordering on $\langle g_\alpha: \alpha<\alpha_0\rangle$ is particularly nice: there is an unbounded set $A\subseteq\alpha_0$ and some $i_0$ so that $\langle g_\alpha: \alpha\in A\rangle$ is increasing at each coordinate above $i_0$. A good scale is just a scale with club many good points. If you have some experience with the definitions, it's not too hard to show that a weak square sequence at $\lambda$ implies that every scale at $\lambda$ is good.

John went over the proof of the following theorem:

Theorem: If Martin's Maximum holds, then there are no good scales on $\lambda$ for any singular $\lambda$ with $\mathrm{cf}(\lambda)=\omega$. In particular, weak square fails at $\lambda$.

The proof, of course, went by defining a poset $\mathbb{P}$, proving that it is stationary preserving, and then applying MM. Suppose there is a good scale on $\langle \lambda_n:n<\omega\rangle$. The poset was a version of Namba forcing: its conditions are trees $T$ whose nodes are sequences of ordinals of countable cofinality, with $n$th coordinate less than $\lambda_n$, which (above some finite stem) split into stationary sets; i.e., for every $t\in T$ above the stem, there are stationarily many $i$ (in $\lambda_n$, where $n$ is the length of $t$) such that $t$ extended by $i$ is in $T$.

The heart of the argument is the stationary preserving property, which was proven by appealing to the Gale-Stewart theorem for closed games. It was a very clever argument that involved showing that for any name for a club $\dot{C}\subset \lambda^+$, any ground model stationary set $S$, and any condition $T$ in the poset, you could find $\delta\in S$ so that:

$T$ can be thinned to a stationarily splitting subtree $T'$ so that the nodes at the $i$th level of $T'$ force points of $\dot{C}$ into the interval $(\delta_i, \delta)$, where $\langle\delta_i : i<\omega\rangle$ is a fixed cofinal sequence in $\delta$.

This ensures that in the generic extension obtained by forcing with $T'$, $\delta\in S\cap C$. This is shown with a "Namba combinatorics" argument. A family of two-player games of length $\omega$ was defined, with a parameter $\delta<\lambda^+$. In the game with parameter $\delta$, player II builds a branch through $T'$ over the course of $\omega$ moves. Player I's $i$th move consists of blocking a nonstationary set of points below $\lambda_i$, and player II responds by naming the next coordinate of his branch, with the condition that he must play outside of the set that player I just blocked. Player II wins if at each stage $i$, taking $T'$ with his branch-so-far as the stem forces a certain member of $C$ into the interval $(\delta_i,\delta)$. It was shown that player II wins this game for almost every $\delta$.

Now, in the generic extension by $\mathbb{Q}=\mathbb{P}\ast \mathrm{Col}(\omega_1, \lambda^+)$ (which is a stationary preserving poset), it is not hard to see by a density argument that the generic element $h$ of $\prod \lambda_i$ added by $\mathbb{Q}$ is an upper bound of every ground model member of $\prod \lambda_i$, under the eventual domination ordering. In fact it is an exact upper bound in the sense that any element of $\prod \lambda_i$ which is below $h$ is eventually dominated by a ground model function. Furthermore, every coordinate of $h$ has countable cofinality.

Back to the ground model. For any club $C\subset \lambda^+$, we can use MM to find a pseudogeneric $h$ which (VERY roughly) has some of the properties of the actually generic $h$ of the previous paragraph, but just working with the scale functions indexed below some cofinality $\omega_1$ ordinal $\gamma\in C$. Through some arguments which I omit, you can show that $\gamma$ is not good. So the set of nongood points is stationary!

Some vague parting thoughts: Are there any other constructions of such nongood points $\gamma$ for a scale, without MM? (Part of this question involves understanding the essential properties of the point $\gamma$, which I did not talk about). Are they similar to the nongood points you get from a supercompact cardinal? (I think the answer is no.) More generally, can we classify nongood points somehow? On the technical side, is there a general form of the Namba lemma which we can directly apply to see that $\mathbb{P}$ is stationary preserving?