Potential Automorphy for GL(n)

Fresh on the arXiv, a nice new paper by Lie Qian proving potential automorphy results for ordinary Galois representations

\(\rho: G_F \rightarrow \mathrm{GL}_n(\mathbf{Q}_p)\)

of regular weight \([0,1,\ldots,n-1]\) for arbitrary CM fields \(F\). The key step in light of the 10-author paper is to construct suitable auxiliary compatible families of Galois representations for which:

  1. The mod-\(p\) representation coincides with the one coming from \(\rho\),
  2. The compatible family can itself be shown to be potentially automorphic.

The main result then follows by an application of the p-q switch. Something similar was done by Harris–Shepherd-Barron–Taylor in the self-dual case. They ultimately found the motives inside the Dwork family. Perhaps surprisingly, Qian also finds his motives in the same Dwork family, except now taken from a part of the cohomology which is not self-dual!

This result doesn’t *quite* have immediate implications for the potential modularity of compatible families: If you take a (generically irreducible) compatible family with Hodge-Tate weights \([0,1,\ldots,n-1]\) then one certainly expects (with some assumption on the monodromy group) that the representations are generically ordinary, but this is a notorious open problem even in the analogous case of modular forms of high weight. One way to try to avoid this would be by proving analogous results for non-ordinary representations. But then you run into genuine difficulties trying to find such arbitrary residual representations inside the Dwork family over extensions unramified at \(p\). This difficulty also arises in the self-dual situation, and the ultimate fix in BLGGT was to bypass such questions by applying Khare-Wintenberger lifting style results. However, such lifting results can’t immediately be adapted to the \(l_0 > 0\) situation under discussion here.

On the other hand, I guess one should be OK for very small \(n\): If \(M\) is (say) a rank three motive over \(\mathbf{Q}\) with HT weights \([0,1,2]\), determinant \(\varepsilon^3\), and coefficients in some CM quadratic field \(E\) (you have to allow coefficients since otherwise the motive is automatically self-dual, see here), then one is probably in good shape. For example, the characteristic polynomial of Frobenius has roots \(\alpha,\beta,\gamma\) which are Weil numbers of absolute value \(p\), and it will have (as noted in the blog post linked to in the previous sentence) the shape

\(X^3 - a_p X^2 + \overline{a_p} p X - p^3,\)

and now for primes \(p\) which split in \(E\), the corresponding \(v\)-adic representation will be ordinary for at least one of the \(v|p\) unless \(a_p\) is divisible by \(p\), which by purity forces \(a_p \in \{-3p,-2p,-p,0,p,2p,3p\}\). From the usual arguments, one sees that there is at least one ordinary \(v\) for almost all split primes \(p\). The rest of the Taylor-Wiles hypotheses should also be generically satisfied assuming the monodromy of \(M\) is \(\mathrm{GL}(3)\), potential modularity in any other case surely being more or less easy to handle directly. Qian's result thus proves that such motives are potentially automorphic. A funny thing about this game is that actually finding examples of non-self dual motives is very difficult, but in this case, van Geemen and Top studied a family of such motives \(S_t\) occurring inside \(H^2\) of the surface

\(z^2 = xy(x^2 - 1)(y^2 - 1)(x^2 - y^2 + t x y)\)

for varying \(t\) (they note that this family was first considered by Ash and Grayson; also, apologies for changing the notation slightly from the paper, but I prefer to denote the parameter of the base by \(t\)). They then compare their particular motive when \(t=2\) to an explicit non-self dual form for \(\mathrm{GL}(3)/\mathbf{Q}\) of level \(128\). I’m sure by this time (after HLTT and Scholze) someone has verified using the Faltings–Serre method that \(S_2\) is automorphic, but now by Qian’s result we know that the \(S_t\) are potentially automorphic for all \(t\).
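This is no part of Qian’s argument, but if you want to play with the family yourself, the point counts of these surfaces over \(\mathbf{F}_p\) take only a few lines to compute. The sketch below (a naive count of my own, not taken from any of the papers; the function name is mine) counts affine points using the quadratic character. Isolating the three-dimensional piece \(S_t\) of \(H^2\) from such counts requires subtracting off the rest of the cohomology of the surface, which I won’t attempt here.

```python
def count_points(p, t):
    """Count affine F_p-points on z^2 = xy(x^2-1)(y^2-1)(x^2-y^2+txy).

    For each (x, y), the number of z with z^2 = f(x, y) is 1 + chi(f),
    where chi is the quadratic character of F_p (with chi(0) = 0).
    """
    squares = {(z * z) % p for z in range(p)}

    def chi(a):
        a %= p
        if a == 0:
            return 0
        return 1 if a in squares else -1

    total = 0
    for x in range(p):
        for y in range(p):
            f = x * y * (x * x - 1) * (y * y - 1) * (x * x - y * y + t * x * y)
            total += 1 + chi(f)
    return total

print(count_points(5, 2))
```

Note that over \(\mathbf{F}_3\) the factor \(x(x^2-1) = x^3 - x\) vanishes identically, so the count at \(p = 3\) is just \(9\) for every \(t\).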

Posted in Mathematics | 1 Comment

Don’t cite my paper!

The process of publishing a paper is an extremely long one, and it is not atypical to take several years from the first submission to the paper finally being accepted. The one part of the process that happens extremely quickly, however, is the moment when the journal sends you the galley proofs of the paper and then gives you 48 hours to make any final minor corrections. Despite the journal having taken up to several years to referee the
paper, these messages often come with breathless warnings that failure to respond within the time window puts your paper in danger of not being published at all. I remember in 2019 being given the (relatively generous) span of two weeks to look over the (96 page) galley proofs of my Duke paper with David Geraghty. Except that two week period happened to coincide with the holidays (the requested return date was December 24), and overlapped with a period where I was particularly busy. Moreover, I was also about to go on a trip to Australia (which I ultimately had to cancel because of the bushfires). I told them that I should be able to get around to going through the paper by February. The journal responded by telling me, and I quote, that “While we appreciate the fact that this is a long article and that it will take some time to review, a delay of two months to handle this seems excessive to us.” In response, I gently mentioned that the journal had taken 864 days to accept my paper followed by a further 147 days before they produced the galley proofs and that this seemed a little excessive to me, but that I would do what I could within my own time constraints. They followed up with a note that they looked forward to receiving my answers in February.

But at least the copy-editing done by Duke on this occasion made a few genuine improvements and did not detract from the paper. What is surprising is when a for-profit journal makes the paper categorically worse by adding errors and not even telling the author about it. And surprise surprise, it always seems to be the for-profit journals that do this. (My very best copy-editing experiences have been with MSP and with the AMS journals.) If you are anything like me, when you try to read over your own paper for typos, your brain is very good at seeing what it expects rather than what is actually there (the standard example is not picking up on “the the” in the middle of a sentence). In particular, the actual probability of finding any error in a 100+ page paper with a 48 hour window by looking at the galley proofs is vanishingly small. The two worst experiences I have had in this regard are at Springer journals. A bare minimum requirement for a galley proof where changes have been made to the original paper is that the journal should tell you what changes they actually made. What they should really do is send a diff file comparing your original .tex to their new version. What they should not do is make subtle changes that alter the mathematical meaning of the paper and are impossible to pick up on a quick reading, all without telling the author. To take an example, I just found out that in my Inventiones paper with David Geraghty, every one of the 47 occurrences of “[[” and “]]” was replaced by single brackets “[” and “]”. The rings \(\mathbf{Z}_p[[X]]\) and \(\mathbf{Z}_p[X]\) are very very different — one is a complete local ring, the other is not. So now in the published paper we patch modules over \(S_{\infty}\), which is now a polynomial ring; modular forms over a field \(K\) have \(q\)-expansions in \(K[q]\) and hence are polynomials, and so on. I think that every one of those 47 occurrences introduced an error of this magnitude.
But at the same time, picking this up is close to impossible when looking at the paper because the mind naturally “corrects” to what it should be, especially if you know how the Taylor-Wiles method works already (which, if you are one of the authors of this paper, is certainly the case). What’s particularly annoying is the stupidity of this process — unilaterally making the change, not telling the author, and then giving them (in this case) 48 hours to look at the paper.
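For what it’s worth, if you can get your hands on the publisher’s final source, making such a diff yourself takes only a few lines. Here is a sketch using Python’s difflib, with made-up one-line sources illustrating the bracket corruption described above; in particular, silently dropped double brackets are easy to count mechanically.

```python
import difflib

# Hypothetical one-line sources illustrating the "[[" -> "[" corruption.
original = [r"modules over $S_\infty = \mathbf{Z}_p[[X_1,\ldots,X_q]]$"]
published = [r"modules over $S_\infty = \mathbf{Z}_p[X_1,\ldots,X_q]$"]

# A unified diff makes the silent change visible.
for line in difflib.unified_diff(original, published, lineterm=""):
    print(line)

# Count how many power-series brackets the copy-editor destroyed.
lost = sum(s.count("[[") for s in original) - sum(s.count("[[") for s in published)
print(lost)
```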

So what is to be done? The good news is that I am in the luxurious position that citations (or lack thereof) make no difference to me. (Hat tip to both the University of Chicago and the NSF for not being obsessed by such metrics.) So clearly the solution is that anyone who wants to cite my paper should cite the latest version on the arXiv rather than the published version. If a journal insists that you cite the published version, simply point out that the published version is riddled with misprints and thus should not be cited. You have my blessing to do this!

Update Aug 2: I did decide to email the journal and ask if they could republish the online version. The mathematicians involved were perhaps not surprisingly very apologetic and upset as well, and have put pressure on Springer to fix the problem. We will see what actually happens.

Posted in Rant | 18 Comments

The Arbeitsgemeinschaft has returned!

An update on this post: the Arbeitsgemeinschaft on derived Galois deformation rings and the cohomology of arithmetic groups will now be taking place the week of April 5th. Here is some practical information if you are curious.

Is there somewhere I can watch the lectures even though I am not a participant? No, the workshop is invitation only.

Is there somewhere I can watch the lectures as a virtual participant? I assume so, but I don’t know the exact details. I predict you will find out at the same time I do.

Is anyone attending in person? I believe so, but I’m not sure how many; I’m guessing they will mostly be coming from Germany. I think that Gerd Faltings and a few graduate students from Bonn will be there in person, for example.

What is the schedule of lectures? Am I going to have to wake up at 3:00AM to watch them from the USA? Ah, on this point I do have some useful information. The schedule of lectures is as follows, all times are local German afternoon time. Please bear in mind that German Daylight Savings time begins this weekend, so a talk at 3:00PM Oberwolfach time will be at 8:00AM in Chicago, 9:00AM on the East Coast, and 6:00AM on the West Coast.

Monday:

3-3:45 A1
4:15-5 A2
5:15-6 A3
8-8:45 A4

Tuesday:

3-3:45 B1
4:30-5:15 B2
8-8:45 C1

Wednesday:

4:15-5 C2
5:15-6 B3
8-8:45 B4

Thursday:

3-3:45 C3
4:15-5 C4
5:15-6 D1
8-8:45 D2

Friday:

3-3:45 D3
4:15-5 D4

Posted in Mathematics | 2 Comments

Test Your Intuition: p-adic local Langlands edition

Taking a page from Gil Kalai, here is a question to test your intuition about 2-dimensional crystalline deformation rings.

Fix a representation:

\(\rho: G_{\mathbf{Q}_p} \rightarrow \mathrm{GL}_2(\overline{\mathbf{F}}_p)\)

After twisting, let me assume that this representation has a crystalline lift of weight \([0,k]\) for some \(1 \le k \le p\). Let \(R\) denote the universal framed local deformation ring with fixed determinant. Now consider positive integers \(n \equiv k \bmod p-1\), and let \(R_n\) denote the Kisin crystalline deformation ring of weight \([0,n]\), also with fixed determinant. Global considerations suggest that for \(n \equiv m \equiv k \bmod p-1\) and \(n \ge m\), there should be a surjection \(R_n/p \rightarrow R_m/p\), and quite possibly one even knows this to be true. Global considerations also suggest that any representation can be seen in high enough weight, which leads to the following problem:

Question: How large does \(n\) have to be to see the entire tangent space of the unrestricted local deformation ring \(R\)? That is, how large does \(n\) have to be for the map

\(R/(p,\mathfrak{m}^2) \rightarrow R_n/(p,\mathfrak{m}^2)\)

to be an isomorphism? Naturally, one can also ask the same question with \(\mathfrak{m}^2\) replaced by \(\mathfrak{m}^j\) for any \(j \ge 2\).

The first question came up in a discussion with my student Chengyang. I made a guess, and then we proceeded (during our meeting) to do a test computation in Magma, where my prediction utterly failed, but in retrospect my computation itself may have been dodgy, so now I’m doubly confused.

Matt remarked that this question is not entirely unrelated in spirit to the Breuil-Mezard conjecture. Instead of counting multiplicities of geometric cycles, one is measuring the Hilbert-Samuel function and its “convergence” to that of the free module. Also, if you know everything about \(\mathrm{GL}_2(\mathbf{Q}_p)\) and \(2\)-dimensional Galois representations then you should be able to answer this question too.

Of course I could have re-done the initial computation for this blog post, but I think at least some readers are happier when I ask questions for which I don’t know the answer…

Posted in Mathematics | 12 Comments

Fermat Challenge

A challenge inspired by a question of Doron Zeilberger. Do there exist arbitrarily large integers \(n\) with the following properties:

  1. There exists an ordered field \(F\) such that \(x^n+ y^n = z^n\) has solutions in \(F\) with \(xyz \ne 0\).
  2. The only solutions in \(F\) to \(x^m + y^m = z^m\) for \(3 \le m < n\) satisfy \(xyz = 0\).

To give a somewhat looser phrasing, you might try to prove Fermat over \(\mathbf{Q}\) by an inductive argument that only relies on positivity of squares together with the fact that Fermat was classically known for some small values of \(n\). This question asks whether you can rule out such a proof.

This might be tricky. Quite possibly taking \(F = \mathbf{Q}(2^{1/n}) \subset \mathbf{R}\) will work for infinitely many integers \(n\): the first condition holds trivially, since \(1^n + 1^n = (2^{1/n})^n\), but the second is not obvious. Indeed, since any ordered field \(F\) will always contain \(\mathbf{Q}\), any proof that arbitrarily large \(n\) with the properties above exist will also prove Fermat over \(\mathbf{Q}\). That said, there might be simple constructions of such \(F\) assuming Fermat is true over \(\mathbf{Q}\), which we fortunately know to be true.

Posted in Mathematics | 6 Comments

Ramanujan Machine Redux

I had no intention to discuss the Ramanujan Machine again, but over the past few days there has been a flurry of (attempted) trollish comments on that post, so after taking a brief look at the latest version, I thought I would offer you my updates. (I promise for the last time.)

Probably the nicest thing I have to say about the updated paper is that it is better than the original. My complaints about the tone of the paper remain the same, but I don’t think it is necessary for me to revisit them here.

Concerning the intellectual merit, I think it is worth making the following remarks. First, I am only addressing the contributions to mathematics. Second, what counts as a new conjecture is not really as obvious as it sounds. Since continued fractions are somewhat recherché, it might be more helpful to give an analogy with infinite series. Suppose I claimed it was a new result that

\( \displaystyle{ 2G = \sum_{n=0}^{\infty} a_n = 1 + \frac{1}{2} + \frac{5}{36} + \frac{5}{72} + \frac{269}{3600} + \frac{269}{7200} - \frac{1219}{705600} + \ldots } \)

where for \(n \ge 4\) one has

\(2 n^2 a_n = n^2 a_{n-1} - 2 (n-2)^2 a_{n-2} + (n-2)^2 a_{n-3}.\)

How can you evaluate this claim? Quite probably this is the first time this result has been written down, and you will not find it anywhere in the literature. But it turns out that

\( \displaystyle{ \left(\sum_{n=0}^{\infty} \frac{x^n}{2^n} \right) \times \left(\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)^2} \right)
= \sum_{n=0}^{\infty} a_n x^n}\)

and letting \(x=1\) recovers the identity above and immediately explains how to prove it. To a mathematician, it is clear that the proof explains not only why the original identity is true, but also why it is not at all interesting. It arises as more or less a formal manipulation of a definition, with a few minor things thrown in like the sum of a geometric series and facts about which functions satisfy certain types of ordinary differential equations. The point is that the identities produced by the Ramanujan Machine have all been of this type. That is, upon further scrutiny, they have not yet revealed any new mathematical insights, even if any particular example, depending on what you know, may be more or less tricky to compute.
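The identity is also trivial to check numerically. One caveat on indexing: the recurrence and the listed terms fit together only if one takes \(a_0 = 0\) (the product has no constant term), so that the displayed terms are \(a_1, a_2, \ldots\); that is the convention in the sketch below.

```python
from fractions import Fraction

# Seed the recurrence with the first few coefficients of the product;
# a_0 = 0 since the second factor has no constant term.
a = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(5, 36)]
for n in range(4, 200):
    a.append((n**2 * a[n-1] - 2 * (n-2)**2 * a[n-2] + (n-2)**2 * a[n-3])
             / (2 * n**2))

G = 0.915965594177219  # Catalan's constant
print(float(sum(a)), 2 * G)
```

The recurrence regenerates the displayed terms exactly (for instance \(a_4 = 5/72\) and \(a_5 = 269/3600\)), and the partial sums converge to \(2G\).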

What then about the improved irrationality measures for the Catalan constant? I think that is a polite way of describing a failed attempt to prove that Catalan’s constant is irrational. It’s something that would be only marginally publishable in a mathematics journal even with a proof. Bounds on the irrationality measure strong enough to imply irrationality have genuine implications for the arithmetic of the relevant numbers, but these results do not.

What then about the new continued fractions developed over the last year — maybe these are now deeper? Here you have to remember that continued fractions, especially of the kind considered in this paper, are more or less equivalent to questions about certain types of ordinary differential equations and their related periods. (But importantly, not conversely: most of these interesting ODEs have nothing to do with continued fractions since they are associated with recurrences of length greater than two.) For your sake, dear reader, I voluntarily chose to give up an hour or two of my life and took a closer look at one of their “new conjectures.” I deliberately chose one that they specifically highlighted in their paper, namely:

Where \(G\) here is Catalan’s constant \(L(2,\chi_4)\). As you might find unsurprising, once you start to unravel what is going on you find that, just as in the example above, the mystery of these numbers goes away. This example can be generalized in a number of ways without much change to the argument. Let \(p_0=1\) and \(q_0 = 0\), and otherwise let

\(\displaystyle{\frac{p_n}{q_n} = \frac{3}{1}, \frac{33}{13}, \frac{765}{313}, \frac{30105}{12453}, \frac{1790775}{743403}, \ldots} \)

denote the (non-reduced) continued fraction convergents. If

\( \displaystyle{ P(z) = \sum \frac{4^n p_n z^n}{n!^2} = 1 + 12z + 132 z^2 + \ldots
\quad Q(z) = \sum \frac{4^n q_n z^n}{n!^2} = 4z +52 z^2 + \ldots} \)

Then, completely formally, \(DP(z) = 0\) where

\( \displaystyle{ D = z(8z-1)(4z-1) \frac{d^2}{dz^2} + (160 z^2 - 40 z + 1) \frac{d}{dz} + 12(8z - 1)}\)

and \(DQ(z) = 4\). If \(K\) and \(E\) denote the standard complete elliptic integrals, one observes that \(P(z)\) is nothing but the hypergeometric function

But now one is more or less done! The argument is easily finished with a little help from Mathematica. Another solution to \(DF(z) = 0\) is of course

\( \displaystyle{ R(z) = \frac{ 2 E((1-8z)^2) - 2 K((1-8z)^2) }{(1 - 8z)^2} = \log(z) + 2 + \ldots } \)

and knowing both homogeneous solutions allows one to write \(Q(z) = u(z) P(z) + v(z) R(z)\) and then easily compute that

\(\displaystyle{ \lim_{n \rightarrow \infty} \frac{p_n}{q_n}
= \lim_{z \rightarrow 1/8} \frac{P(z)}{Q(z)} = \frac{2}{-1 + 2G}.}\)

as desired. For those playing at home, note that a convenient choice of \(u(z)\) and \(v(z)\) can be given by

\( \displaystyle{ v(z) = \int \frac{ E(16 z(1-4z))}{\pi} \, dz = 4 z - 8 z^2 + \ldots }\)
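And for those who would rather check than compute: the convergents listed above do indeed drift toward \(2/(2G-1) \approx 2.40405\), as the claimed limit predicts.

```python
# Convergents p_n/q_n as listed above, compared to the claimed limit.
G = 0.915965594177219  # Catalan's constant
target = 2 / (2 * G - 1)

p = [3, 33, 765, 30105, 1790775]
q = [1, 13, 313, 12453, 743403]
errors = [abs(pn / qn - target) for pn, qn in zip(p, q)]
print(errors)
```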

Posted in Mathematics, Rant | 8 Comments

Hire my students!

I have three students graduating this year: Shiva Chidambaram, Eric Stubley, and Noah Taylor. In light of the last post, I should give them a boost by reminding you of their (numerous) results which have been discussed on this blog. You can read about Shiva’s work here, here, and here, about Eric’s work here and here, and Noah’s work here and here. Alternatively, you can always click on the work of my students link.

But even this link is not complete! Here’s a result from Noah’s thesis which I haven’t discussed before:

Let \(N\) be prime, and let \(\mathbf{T}\) denote the \(\mathbf{Z}_2\)-Hecke algebra generated by \(T_l\) for \(l\) prime to \(2\), and let \(\widetilde{\mathbf{T}}\) denote the Hecke algebra where \(T_2\) is also included. These Hecke algebras are famously not the same in general. For example, when \(N = 23\), the space of cusp forms is \(2\)-dimensional and has a pair of conjugate cusp forms as follows:

\(\displaystyle{q - \frac{\sqrt{5}+1}{2} q^2 + \sqrt{5} q^3 + \frac{\sqrt{5} - 1}{2} q^4 - (1 + \sqrt{5})q^5 + \ldots}\)

So \(\mathbf{T} = \mathbf{Z}[\sqrt{5}]\) whereas \(\widetilde{\mathbf{T}} = \displaystyle{\mathbf{Z} \left[ \frac{\sqrt{5}+1}{2} \right]}\). Noah gives a formula for the index:

Theorem: Let \(N\) be prime. Then the index \([\widetilde{\mathbf{T}}:\mathbf{T}]\) is given by the order of the space

\(S_1(\Gamma_0(N),\mathbf{F}_2)\)

of Katz modular forms of weight one and level \(\Gamma_0(N)\).

In particular, the index at level \(23\) is coming from the fact that there is a classical weight one form of this level. From this one sees that the index is non-trivial for all primes \(N \equiv 3 \bmod 4\) except for \(N = 3,7,11,19,43,67\) and \(163\). For primes \(N \equiv 1 \bmod 4\), on the other hand, I might guess that there would be a positive density of primes for which either the index was trivial or non-trivial. The question more or less hinges on the expected number of \(\mathbf{SL}_2(\mathbf{F}_{2^n})\) representations of \(\mathbf{Q}\) (with \(n \ge 2\)) which become unramified at all finite places over \(\mathbf{Q}(\sqrt{N})\).
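In the \(N = 23\) example, the index is just the index of \(\mathbf{Z}[\sqrt{5}]\) inside \(\mathbf{Z}[(1+\sqrt{5})/2]\), which one can read off as the determinant of the change-of-basis matrix:

```python
# Z[(1+sqrt(5))/2] has Z-basis (1, phi) with phi = (1+sqrt(5))/2, and
# Z[sqrt(5)] has Z-basis (1, sqrt(5)).  Writing 1 = 1 and
# sqrt(5) = -1 + 2*phi expresses the smaller lattice in the larger one.
M = [[1, 0],
     [-1, 2]]
index = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0])
print(index)
```

The answer, \(2\), is consistent with the theorem: the classical weight one form at level \(23\) gives a mod-\(2\) Katz form, so the space \(S_1(\Gamma_0(23),\mathbf{F}_2)\) has order at least \(2\).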

Posted in Mathematics, Work of my students | Leave a comment

The upcoming jobs bloodbath

Universities are losing lots of money this year. Even those schools with a sizeable endowment are very restricted in how those funds can be used, and the result is that many places will have hiring freezes. This is surely going to have an immediate impact in the jobs market in mathematics, at every level. In a usual year, Chicago hires as many as ten Dickson instructors (our named postdoctoral position). This year, I find it hard to imagine that we would hire half that number. In part, this is because we have moved to protect a number of our final year postdocs by extending their position for another year, although if enough other places do something similar then next year is going to be tough as well.

There are rumors that a number of places (including really top places) are not going to admit any graduate students in mathematics next year, and that others will have at the very least significantly smaller classes. I fully expect that we will have an incoming class but I don’t know how large it is going to be.

Does your department expect to reduce (significantly or moderately) your postdoc hiring this year? What about tenure track lines or graduate students? Let me know! During the last financial crisis, both the Simons Foundation and the NSF (via the stimulus bill) had a real effect by adding more money to the postdoc pool — hopefully something like that will happen again.

Posted in Politics | 9 Comments

En Passant IX (I’m a Gnu)

One feature of having an electric piano is the ability to record the accompaniment to songs which (for reasons of timing or otherwise) are quite hard to play and sing at the same time. A possible downside, however, is that this accompaniment is now available at a moment’s notice, and hence subject to the whims of any household member who perhaps does not appreciate what you wish to play and instead wants to listen to yet another rendition of the Gnu. And this is why the following song is the only live music performed at our house at the moment:

Everyone pretty much knows all the words at this point! (Hat tip to Martin Rutherford for playing “Ill Wind” during music class in 1990.)

In these times I recommend that everyone relax by taking a deep breath. And I mean the type of breath necessary for the following oboe part assuming that circular breathing (exhaling and breathing in at the same time) is not part of your daily repertoire:

Finally, a few tips on Australian fusion cuisine. If for some reason you find yourself going for long periods of time between trips to the grocery store, you might just consider opening that jar of Vegemite on the shelf, and then start eating it every day for breakfast. You may know the basic Vegemite tip (use buttered toast, don’t use too much), but you might be unsure what to do if your standard Italian bread is not available. This is advice for those times:

  1. Challah: This is not a good match. Toasted challah does not have the required firmness and it just doesn’t work. When eating toasted Challah, always stick to marmalade.
  2. Tortillas: A disaster: melted Vegemite on a tortilla running down the side in little puddles. Avoid.
  3. Injera: Jackpot! A perfect match. Probably a buckwheat crepe would also do in a pinch as a substitute.
Posted in Food, Music, Waffle | 1 Comment

The eigencurve is (still) proper

Although I don’t think about it so much anymore, the eigencurve of Coleman-Mazur was certainly one of my first loves. I can’t quite say I learnt about \(p\)-adic modular forms at my mother’s knee, but I did spend a formative summer before starting university thinking about (with Matthew Emerton) what in effect was the \(2\)-adic eigendecomposition of the (inverse) hauptmodul \(f = q \prod (1 +q^n)^{24}\) of \(X_0(2)\). I remember that we had a massive file called “tee-hee” which contained an absolutely huge number of Fourier coefficients which tested the memory limits of the University of Melbourne computer system (it was 10MB).
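That computation is easy to recreate in miniature today. Here is a sketch (not, needless to say, the original code) of the \(q\)-expansion of \(f\) together with the \(U = U_2\) operator, which acts on \(q\)-expansions by \(\sum a(n) q^n \mapsto \sum a(2n) q^n\):

```python
# q-expansion of f = q * prod_{n>=1} (1 + q^n)^24, truncated at O(q^N).
N = 32
f = [0] * N
f[1] = 1  # the leading factor q
for n in range(1, N):
    for _ in range(24):  # multiply by (1 + q^n), 24 times
        f = [f[m] + (f[m - n] if m >= n else 0) for m in range(N)]

# U_2 acts on q-expansions by a(n) -> a(2n).
U2f = [f[2 * m] for m in range(N // 2)]
print(f[1:6])
print(U2f[1:4])
```

The expansion begins \(f = q + 24 q^2 + 300 q^3 + 2624 q^4 + \ldots\), and iterating \(U\) on such truncated expansions is exactly the kind of thing one can do with a vastly more generous memory limit than 10MB.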

Jumping forward in time, I learnt about Kevin Buzzard’s Arizona Winter School project on a special case of his slope conjectures. This turned out to be closely related to the explicit computations I had done when I was younger. I got in touch and we managed to solve the first special case of his conjectures. Kevin and I continued collaborating over the next few years on a number of papers related to the geometry of the eigencurve.

In the abstract theory of the eigencurve, it is not important how overconvergent a modular form is but merely that it is overconvergent. However, it has always seemed to me that the analytic theory of overconvergent modular forms deep into the supersingular annuli has many unrevealed mysteries. One problem Kevin and I thought about was whether the eigencurve was “proper” in the sense of whether any punctured disc of finite slope eigenforms could be filled in at the central point. (Coleman and Mazur raise this question in their original paper.) At one point we thought we had proved it — the idea was that (by Buzzard’s analytic continuation theorem) any finite slope eigenform would converge uniformly far into the supersingular annuli, and since this property would hold uniformly for all points on the punctured disc of finite slope eigenforms it would follow that the limiting form at the centre was also highly overconvergent. However, if that form had infinite slope, it would lie in the kernel of the \(U\) operator, and now there was an elementary argument to show that any such form had a natural radius which was not (as) highly overconvergent. Done! Except there was a problem: the results on overconvergence were only proved for forms of integral weight since they relied on geometric constructions, particularly on the fact that one could make sense of \(\omega^k\) (for an integer \(k\)) on the entire modular curve. Coleman’s definition of overconvergent forms of general weight used a trick involving Eisenstein series. The notion of radius of convergence arising from this construction ended up being related to ratios of certain Eisenstein series in weight zero, and these ratios are not very overconvergent for the most general weights. This meant that (for general weights) the radius only made sense in a small overconvergent region — in particular smaller than the radius necessary to rule out elements in the kernel of \(U\) — and the idea didn’t work.
In some cases (for example the \((N,p)=(1,2)\) eigencurve) there were workarounds one could make to give ad hoc definitions of the radius in order to push things through (a proof of concept as it were), but the situation was otherwise not so great.

Some time later, Hansheng Diao and Ruochuan Liu proved that the eigencurve was indeed proper. Their argument was completely different, and used local arguments and period rings. It was a very nice result, and possibly their argument should even be considered the “correct” one. However, to my delight, Lynnelle Ye has just posted to the arXiv a new proof of the properness of the eigencurve which does indeed proceed by exploiting the radius of overconvergence for finite slope forms and proving that it is inconsistent with the radius of convergence of elements in the kernel of \(U\). As mentioned above, the immediate stumbling block for Kevin and me was that the definition of overconvergent forms of a general weight \(\kappa\) was not geometric. However, thanks to Pilloni and Andreatta-Iovita-Stevens there are now such definitions available. Ye takes these constructions and then pushes them further into the supersingular annuli. These efforts are then indeed enough to turn what was merely a heuristic into a completely rigorous proof!

Posted in Mathematics | 1 Comment