In this chapter we explain how to generalize the notion of modular symbols given in Chapter Modular Forms of Weight 2 to higher weight and more general level. We define Hecke operators on modular symbols and describe their relation to modular forms via the integration pairing. We omit many difficult proofs that modular symbols have certain properties and instead focus on how to compute with them. For more details see the references given in this chapter (especially [Mer94]) and [Wie05].
Modular symbols are a formalism that makes it elementary to compute with homology or cohomology related to certain Kuga-Sato varieties (these are , where is a modular curve and is the universal elliptic curve over it). It is not necessary to know anything about these Kuga-Sato varieties in order to compute with modular symbols.
This chapter is about spaces of modular symbols and how to compute with them. It is by far the most important chapter in this book. The algorithms that build on the theory in this chapter are central to all the computations we will do later in the book.
This chapter closely follows Loïc Merel’s paper [Mer94]. First we define modular symbols of weight . Then we define the corresponding Manin symbols and state a theorem of Merel-Shokurov, which gives all relations between Manin symbols. (The proof of the Merel-Shokurov theorem is beyond the scope of this book but is presented nicely in [Wie05].) Next we describe how the Hecke operators act on both modular and Manin symbols and how to compute trace and inclusion maps between spaces of modular symbols of different levels.
Not only are modular symbols useful for computation, but they have been used to prove theoretical results about modular forms. For example, certain technical calculations with modular symbols are used in Loïc Merel’s proof of the uniform boundedness conjecture for torsion points on elliptic curves over number fields (modular symbols are used to understand linear independence of Hecke operators). Another example is [Gri05], which distills hypotheses about Kato’s Euler system in of modular curves to a simple formula involving modular symbols (when the hypotheses are satisfied, one obtains a lower bound on the Shafarevich-Tate group of an elliptic curve).
We recall from Chapter Modular Forms of Weight 2 the free abelian group of modular symbols. We view these as elements of the relative homology of the extended upper half plane relative to the cusps. The group is the free abelian group on symbols with
modulo the relations
for all , and all torsion. More precisely,
where is the free abelian group on all pairs and is the subgroup generated by all elements of the form . Note that is a huge free abelian group of countable rank.
For any integer , let be the abelian group of homogeneous polynomials of degree in two variables .
Remark 1.22
Note that is isomorphic to as a group, but certain natural actions are different. In [Mer94], Merel uses the notation for what we denote by .
Now fix an integer . Set
which is a torsion-free abelian group whose elements are sums of expressions of the form . For example,
Fix a finite index subgroup of . Define a left action of on as follows. If and , let
Note that if we think of as a column vector, then
since . The reason for the inverse is so that this is a left action instead of a right action, e.g., if , then
Recall that we let act on the left on by
where acts via linear fractional transformations, so if , then
For example, useful special cases to remember are that if , then
(Here we view as in order to describe the action.)
We now combine these two actions to obtain a left action of on , which is given by
For example,
We will often write for .
Definition 1.23
Let be an integer and let be a finite index subgroup of . The space of weight modular symbols for is the quotient of by all relations for , , and by any torsion.
Note that is a torsion-free abelian group, and it is a nontrivial fact that has finite rank. We denote modular symbols for in exactly the same way we denote elements of ; the group will be clear from context.
The space of modular symbols over a ring is
Let be a finite index subgroup of and an integer. Just as in Chapter Modular Forms of Weight 2, it is possible to compute using a computer, even though, as defined above, is the quotient of one infinitely generated abelian group by another. This section is about Manin symbols, which are a distinguished subset of that lead to a finite presentation for . Formulas written in terms of Manin symbols are frequently much easier to evaluate on a computer than formulas in terms of modular symbols.
Suppose and . Then the Manin symbol associated to this pair of elements is
Notice that if , then , since the symbol is invariant under the action of on the left (by definition, since it is a modular symbol for ). Thus for a right coset it makes sense to write for the symbol for any . Since has finite index in , the abelian group generated by Manin symbols is of finite rank, generated by
where run through representatives for the right cosets .
We next show that every modular symbol can be written as a -linear combination of Manin symbols, so they generate .
Proposition 1.24
The Manin symbols generate .
Proof
The proof is very similar to that of Proposition 3.11, except we introduce an extra twist to deal with the polynomial part. Suppose that we are given a modular symbol and wish to represent it as a sum of Manin symbols. Because
it suffices to write in terms of Manin symbols. Let
denote the continued fraction convergents of the rational number . Then
If we let , then and
Since and has integer coefficients, the polynomial also has integer coefficients, so we introduce no denominators.
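The continued fraction step in this proof is easy to make concrete. The following Python sketch (the function name is ours) computes the convergents of a rational number; the determinant identity satisfied by consecutive convergents is exactly what lets the proof break the path from 0 to a/b into unimodular pieces.

```python
def convergents(a, b):
    """Continued-fraction convergents (p_k, q_k) of a/b, with b > 0.

    Consecutive convergents satisfy p_k*q_{k-1} - p_{k-1}*q_k = (-1)^(k-1),
    so each pair of columns (p_k, q_k), (p_{k-1}, q_{k-1}) forms a matrix
    of determinant +-1.
    """
    p_prev, q_prev = 0, 1   # (p_{-2}, q_{-2})
    p, q = 1, 0             # (p_{-1}, q_{-1})
    out = []
    while b != 0:
        a_k, r = divmod(a, b)            # partial quotient and remainder
        p_prev, p = p, a_k * p + p_prev  # p_k = a_k p_{k-1} + p_{k-2}
        q_prev, q = q, a_k * q + q_prev  # q_k = a_k q_{k-1} + q_{k-2}
        out.append((p, q))
        a, b = b, r
    return out

# For example, convergents(9, 4) returns [(2, 1), (9, 4)].
```

When a/b is in lowest terms, the final convergent is a/b itself, and the successive determinants are all ±1, matching the telescoping used in the proof.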
Now that we know the Manin symbols generate , we next consider the relations between Manin symbols. Fortunately, the answer is fairly simple (though the proof is not). Let
Define a right action of on Manin symbols as follows. If , let
This is a right action because both and are right actions.
Theorem 1.25
If is a Manin symbol, then
Moreover, these are all the relations between Manin symbols, in the sense that the space of modular symbols is isomorphic to the quotient of the free abelian group on the finitely many symbols (for and ) by the above relations and any torsion.
Proof
First we prove that the Manin symbols satisfy the above relations. We follow Merel’s proof (see [Mer94, Section 1.2]). Note that
Writing , we have
Also,
Finally,
where we use that acts trivially via linear fractional transformations. This proves that the listed relations are all satisfied.
That the listed relations are all relations is more difficult to prove. One approach is to show (as in [Mer94, Section 1.3]) that the quotient of Manin symbols by the above relations and torsion is isomorphic to a space of Shokurov symbols, which is in turn isomorphic to . A much better approach is to apply some results from group cohomology, as in [Wie05].
If is a finite index subgroup and we have an algorithm to enumerate the right cosets and to decide which coset an arbitrary element of belongs to, then Theorem 1.25 and the algorithms of Chapter Linear Algebra yield an algorithm to compute . Note that if , then the relation is automatic.
Remark 1.26
The matrices and do not commute, so in computing , one cannot first quotient out by the two-term relations, then quotient out only the remaining free generators by the relations, and get the right answer in general.
The following is analogous to Proposition 3.10:
Proposition 1.27
The right cosets are in bijection with pairs where and . The coset containing a matrix corresponds to .
Proof
This proof is copied from [Cre92, pg. 203], except in that paper Cremona works with the analogue of in , so his result is slightly different. Suppose , for . We have
which is in if and only if
(1)
and
(2)
Since the have determinant , if , then the congruences (1)–(2) hold. Conversely, if (1)–(2) hold, then
and likewise
Thus we may view weight Manin symbols for as triples of integers , where and with . Here corresponds to the Manin symbol , where and are congruent to . The relations of Theorem 1.25 become
Recall that Proposition 3.10 gives a similar description for .
Suppose . Define an action of diamond-bracket operators , with , on Manin symbols as follows:
Let
be a Dirichlet character, where is an root of unity and is the order of . Let be the quotient of by the relations (given in terms of Manin symbols)
for all , , and by any -torsion. Thus is a -module that has no torsion when viewed as a -module.
Suppose is a subgroup of of level that contains . Just as for modular forms, there is a commutative Hecke algebra , which is the subring of generated by all Hecke operators. Let
where we omit if . Then the Hecke operator on is given by
Notice that when , is defined by summing over matrices that correspond to the subgroups of of index . This is exactly how we defined on modular forms in Definition 2.26.
Let be a finite index subgroup of and suppose
is a set such that and is finite. For example, satisfies this condition. Also, if , then for any positive integer , the set
also satisfies this condition, as we will now prove.
Lemma 1.28
We have
and
where , the union is disjoint and with , , and . In particular, the set of cosets is finite.
Proof
(This is Lemma 1 of [Mer94, Section 2.3].) If and , then
Thus , and since is a group, ; likewise .
For the coset decomposition, we first prove the statement for , i.e., for . If is an arbitrary element of with determinant , then using row operations on the left with determinant , i.e., left multiplication by elements of , we can transform into the form , with and . (Just imagine applying the Euclidean algorithm to the two entries in the first column of . Then is the of the two entries in the first column, and the lower left entry is . Next subtract from until .)
Next suppose is arbitrary. Let be such that
is a disjoint union. If is arbitrary, then as we showed above, there is some , so that , with and , and . Write , with . Then
It follows that
Since and , there is such that
We may then choose . Thus every is of the form , with and suitably bounded. This proves the second claim.
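For the full modular group, the coset representatives in the lemma are the upper triangular matrices with determinant n, and the row reduction in the proof is a finite Euclidean computation. A Python sketch (function names ours; a matrix is a pair of rows):

```python
def coset_reps(n):
    """Matrices [[a, b], [0, d]] with a*d = n and 0 <= b < d: representatives
    for the left SL2(Z)-cosets of integer matrices of determinant n.
    There are sigma_1(n) (the sum of the divisors of n) of them."""
    return [((a, b), (0, n // a))
            for a in range(1, n + 1) if n % a == 0
            for b in range(n // a)]

def reduce_to_rep(m):
    """Reduce a matrix of determinant n > 0 to its coset representative by
    left multiplication by elements of SL2(Z), as in the proof."""
    (a, b), (c, d) = m
    while c != 0:                    # Euclid on the first column
        q = a // c
        a, b = a - q * c, b - q * d  # row1 -= q * row2    (determinant 1)
        a, b, c, d = -c, -d, a, b    # signed row swap     (determinant 1)
    if a < 0:                        # multiply by -identity (determinant 1)
        a, b, d = -a, -b, -d
    return ((a, b % d), (0, d))      # b % d uses [[1, k], [0, 1]]
```

Each step multiplies on the left by a determinant-1 matrix, so the output represents the same coset as the input; for instance `reduce_to_rep(((2, 1), (4, 3)))` gives `((2, 0), (0, 1))`.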
Let any element act on the left on modular symbols by
(Until now we had only defined an action of on modular symbols.) For , let
(3)
Note that . Also, , where we set
Suppose and are as above. Fix a finite set of representatives for . Let
be the linear map
This map is well defined because if and , then
where we have used that , and acts trivially on .
Let and . Then the Hecke operator is , and by Lemma 1.28,
where are as in Lemma 1.28.
Given this definition, we can compute the Hecke operators on as follows. Write as a modular symbol , compute as a modular symbol, and then convert to Manin symbols using continued fraction expansions. This is extremely inefficient; fortunately Loïc Merel (following [Maz73]) found a much better way, which we now describe (see [Mer94]).
If is a subset of , let
where is as in (3). Also, for any ring and any subset , let denote the free -module with basis the elements of , so the elements of are the finite -linear combinations of the elements of .
One of the main theorems of [Mer94] is that for any satisfying the condition at the beginning of Section General Definition of Hecke Operators, if we can find and a map
that satisfies certain conditions, then for any Manin symbol , we have
The paper [Mer94] contains many examples of and that satisfy all of the conditions.
When , the complicated list of conditions becomes simpler. Let be the set of matrices with determinant . An element
satisfies condition if for every , we have that
(4)
If satisfies condition , then for any Manin symbol , Merel proves that
(5)
Here corresponds via Proposition 1.27 to a coset of in , and if and , then we omit the corresponding summand.
For example, we will now check directly that the element
satisfies condition . We have, as in the proof of Lemma 1.28 (but using elementary column operations), that
To verify condition , we consider in turn each of the three elements of and check that (4) holds. We have that
and
Thus if , the left side of (4) is , as required. If , then the left side of (4) is
Finally, for we also have , as required. Thus by (5) we can compute on any Manin symbol, by summing over the action of the four matrices .
Proposition 1.29
The element
satisfies condition .
Merel’s two-page proof of Proposition 1.29 is fairly elementary.
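Merel's family is easy to enumerate by brute force, since the conditions a > b ≥ 0, d > c ≥ 0, and ad − bc = n force a + d ≤ n + 1 and hence a, d ≤ n. A sketch (our function name):

```python
def merel_matrices(n):
    """Integer matrices [[a, b], [c, d]] of determinant n with
    a > b >= 0 and d > c >= 0, as in Proposition 1.29."""
    mats = []
    for a in range(1, n + 1):
        for d in range(1, n + 1):
            for b in range(a):          # 0 <= b < a
                for c in range(d):      # 0 <= c < d
                    if a * d - b * c == n:
                        mats.append(((a, b), (c, d)))
    return mats
```

For n = 2 this recovers exactly the four matrices used above to compute the Hecke operator on Manin symbols.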
Remark 1.30
In [Cre97a, Section 2.4], Cremona discusses the work of Merel and Mazur on Heilbronn matrices in the special cases and weight . He gives a simple proof that the action of on Manin symbols can be computed by summing the action of some set of matrices of determinant . He then describes the set and gives an efficient continued fractions algorithm for computing it (but he does not prove that his satisfy Merel’s hypothesis).
Merel gives another family of matrices that satisfy condition , and he proves that as ,
where is the sum of the divisors of . Thus for a fixed space of modular symbols, one can compute using arithmetic operations. Note that we have fixed , so we ignore the linear algebra involved in computation of a presentation; also, adding elements takes a bounded number of field operations when the space is fixed. Thus, using Manin symbols the complexity of computing , for prime, is field operations, which is exponential in the number of digits of .
There is a trick of Basmaji (see [Bas96]) for computing a matrix of on , when is very large, and it is more efficient than one might naively expect. Basmaji’s trick does not improve the big-oh complexity for a fixed space, but it does improve the complexity by a constant factor of the dimension of . Suppose we are interested in computing the matrix for for some massive integer and that has large dimension. The trick is as follows. Choose a list
of Manin symbols such that the map given by
is injective. In practice, it is often possible to do this with “very small”. Also, we emphasize that is a -vector space of dimension .
Next find Hecke operators , with small, whose images form a basis for the image of . Now with the above data precomputed, which only required working with Hecke operators for small , we are ready to compute with huge. Compute , for each , which we can do using Heilbronn matrices since each is a Manin symbol. We thus obtain . Since we have precomputed Hecke operators such that generate , we can find such that . Then since is injective, we have , which gives the full matrix of on .
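As a toy illustration of the shape of Basmaji's trick (all names and data here are invented for illustration, not taken from the text): suppose an operator T can be evaluated only coordinate-by-coordinate through a map Phi, but we have precomputed full matrices T1, T2 whose Phi-images span the image of the Hecke algebra. Writing Phi(T) as a linear combination of the Phi(Ti) then recovers the full matrix of T.

```python
from fractions import Fraction

def solve2(col1, col2, rhs):
    """Solve x*col1 + y*col2 = rhs for 2-vectors, by Cramer's rule."""
    det = col1[0] * col2[1] - col2[0] * col1[1]
    x = Fraction(rhs[0] * col2[1] - col2[0] * rhs[1], det)
    y = Fraction(col1[0] * rhs[1] - rhs[0] * col1[1], det)
    return x, y

# Precomputed operators with known full matrices (toy 2x2 example).
T1 = [[1, 0], [0, 1]]
T2 = [[0, 1], [1, 0]]
phi = lambda T: (T[0][0], T[0][1])   # Phi = first row only (assumed injective)

# Pretend T is expensive: all we can get is Phi(T), say (3, 2).
phi_T = (3, 2)
x, y = solve2(phi(T1), phi(T2), phi_T)

# Reconstruct the full matrix of T from the precomputed operators.
T = [[x * T1[i][j] + y * T2[i][j] for j in range(2)] for i in range(2)]
```

Here the injectivity of Phi is what guarantees that the reconstructed matrix really is T; in the modular symbols setting, Phi is evaluation on the chosen short list of Manin symbols.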
Let be the free abelian group on symbols , for , and set
Define a left action of on by
for . For any finite index subgroup , let be the quotient of by the relations for all and by any torsion. Thus is a torsion-free abelian group.
The boundary map is the map
given by extending the map
linearly. The space of cuspidal modular symbols is the kernel
so we have an exact sequence
One can prove that when , this sequence is exact on the right.
Next we give a presentation of in terms of “boundary Manin symbols”.
We give an explicit description of the boundary map in terms of Manin symbols for , then describe an efficient way to compute the boundary map.
Let be the equivalence relation on given by
for any . Denote by the finite-dimensional -vector space with basis the equivalence classes . The following two propositions are proved in [Mer94].
Proposition 1.31
The map
is well defined and injective. Here and are assumed coprime.
Thus the kernel of is the same as the kernel of .
Proposition 1.32
Let and . We have
We next describe how to explicitly compute by generalizing the algorithm of [Cre97a, Section 2.2]. To compute the image of , with , we must compute the class of and of . Instead of finding a canonical form for cusps, we use a quick test for equivalence modulo scalars. In the following algorithm, by the symbol we mean the basis vector for a basis of . This basis is constructed as the algorithm is called successively. We first give the algorithm, and then prove the facts used by the algorithm in testing equivalence.
Algorithm 1.33
Given a boundary Manin symbol this algorithm outputs an integer and a scalar such that is equivalent to times the symbol found so far. (We call this algorithm repeatedly and maintain a running list of cusps seen so far.)
The case considered in Cremona’s book [Cre97a] involves only the trivial character, so no cusp classes are forced to vanish. Cremona gives the following two criteria for equivalence.
Proposition 1.34
Consider , , with integers such that for each .
There exists such that if and only if
There exists such that if and only if
Algorithm 1.35
Suppose and are equivalent modulo . This algorithm computes a matrix such that .
Proof
See the proof of [Cre97a, Prop. 8.13].
The relations can make the situation more complicated, since it is possible that but
One way out of this difficulty is to construct the cusp classes for , and then quotient out by the additional relations using Gaussian elimination. This is far too inefficient to be useful in practice because the number of cusp classes can be unreasonably large. Instead, we give a quick test to determine whether or not a cusp vanishes modulo the -relations.
Lemma 1.36
Suppose and are integers such that . Then there exist integers and , congruent to and modulo , respectively, such that .
Proof
By Exercise 1.12 the map is surjective. By the Euclidean algorithm, there exist integers , and such that . Consider the matrix and take , to be the bottom row of a lift of this matrix to .
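The lemma is constructive, and a naive search also works in practice. The following Python sketch (our function name) keeps the first coordinate (replacing it by N when it is 0) and shifts the second by multiples of N; the CRT-style argument in the proof guarantees a suitable shift exists within the search range.

```python
from math import gcd

def lift_to_coprime(a, b, N):
    """Given gcd(a, b, N) = 1, return (a1, b1) congruent to (a, b) mod N
    with gcd(a1, b1) = 1.  A naive search sketch of Lemma 1.36."""
    if gcd(gcd(a, b), N) != 1:
        raise ValueError("need gcd(a, b, N) = 1")
    a1 = a if a != 0 else N          # keep a, but avoid the pair (0, b)
    for t in range(abs(a1) + 1):     # a valid shift exists in this range
        if gcd(a1, b + t * N) == 1:
            return (a1, b + t * N)
```

For each prime p dividing a1: if p also divides N, then p cannot divide b (else gcd(a, b, N) > 1); if p does not divide N, then b + tN runs over all residues mod p, so some shift avoids p. This is why the loop terminates with a coprime pair.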
Proposition 1.37
Let be a positive integer and a Dirichlet character of modulus . Suppose is a cusp with and coprime. Then vanishes modulo the relations
if and only if there exists , with , such that
Proof
First suppose such an exists. By Lemma 1.36 there exists lifting such that . The cusp has coprime coordinates so, by Proposition 1.34 and our congruence conditions on , the cusps and are equivalent by an element of . This implies that . Since and , we have .
Conversely, suppose . Because all relations are two-term relations and the -relations identify -orbits, there must exist and with
Indeed, if this did not occur, then we could mod out by the relations by writing each in terms of , and there would be no further relations left to kill . Next observe that
Applying Proposition 1.34 and noting that shows that satisfies the properties of the " " in the statement of the proposition.
We enumerate the possible appearing in Proposition 1.37 as follows. Let and list the , for , such that .
In this section we define a pairing between modular symbols and modular forms that the Hecke operators respect. We also define an involution on modular symbols and study its relationship with the pairing. This pairing is crucial in much that follows, because it gives rise to period maps from modular symbols to certain complex vector spaces.
Fix an integer weight and a finite index subgroup of . Let denote the space of holomorphic modular forms of weight for , and let denote its cuspidal subspace. Following [Mer94, Section 1.5], let
(6)
denote the space of antiholomorphic cusp forms. Here is the function on given by .
Define a pairing
(7)
by letting
and extending linearly. Here the integral is a complex path integral along a simple path from to in (so, e.g., write , where traces out the path and consider two real integrals).
Proposition 1.38
The integration pairing is well defined, i.e., if we replace by an equivalent modular symbol (equivalent modulo the left action of ), then the integral is the same.
Proof
This follows from the change of variables formulas for integration and the fact that and . For example, if , and , then
where because is a weight modular form. For the case of arbitrary weight , see Exercise 1.14.
The integration pairing is very relevant to the study of special values of -functions. The -function of a cusp form is
(8)
The equality of the integral and the Dirichlet series follows by switching the order of summation and integration, which is justified using standard estimates on (see, e.g., [Kna92, Section VIII.5]).
For each integer with , we have, setting and making the change of variables in (8), that
(9)
The integers as above are called critical integers. When is an eigenform, they have deep conjectural significance (see [BK90, Sch90]). One can approximate to any desired precision by computing the above pairing explicitly using the method described in Chapter Computing Periods. Alternatively, [Dok04] contains methods for computing special values of very general -functions, which can be used for approximating for arbitrary , in addition to just the critical integers .
Theorem 1.39
The pairing
is a nondegenerate pairing of complex vector spaces.
Corollary 1.40
We have
The pairing is also compatible with Hecke operators. Before proving this, we define an action of Hecke operators on and on . The definition is similar to the one we gave in Section Hecke Operators for modular forms of level . For a positive integer , let be a set of coset representatives for from Lemma 1.28. For any and set
Also, for , set
Then for ,
and for ,
This agrees with the definition from Section Hecke Operators when .
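On q-expansions at level 1 this action takes the classical shape: the Hecke operator at p sends a form with coefficients a_n to the form with coefficients a_{pn} + p^(k-1) a_{n/p}, the second term present only when p divides n. A quick Python check against the weight 12 cusp form Δ, whose first coefficients τ(n) are well known:

```python
def hecke_tp_qexp(a, p, k):
    """T_p on a weight-k, level-1 q-expansion given by coefficients a[n]
    (a[0] is the constant term).  Returns b with
    b[n] = a[p*n] + p^(k-1) * a[n/p], the second term only when p | n."""
    prec = (len(a) - 1) // p + 1     # largest n with a[p*n] available
    b = [0] * prec
    for n in range(prec):
        b[n] = a[p * n]
        if n % p == 0:               # includes n = 0: constant term scales
            b[n] += p ** (k - 1) * a[n // p]
    return b

# tau(1), ..., tau(9): the first coefficients of Delta (weight 12, level 1).
tau = [0, 1, -24, 252, -1472, 4830, -6048, -16744, 84480, -84480]
```

Since Δ is an eigenform with a_2 = −24, applying this to the list of τ values returns −24 times the original coefficients, which is a convenient sanity check for any implementation.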
Remark 1.41
If is an arbitrary finite index subgroup of , then we can define operators on for any with and finite. For concreteness we do not do the general case here or in the theorem below, but the proof is exactly the same (see [Mer94, Section 1.5]).
Finally we prove the promised Hecke compatibility of the pairing. This proof should convince you that the definition of modular symbols is sensible, in that they are natural objects to integrate against modular forms.
Theorem 1.42
If
and , then for any ,
Proof
We follow [Mer94, Section 2.1] (but with more details) and will only prove the theorem when , the proof in the general case being the same.
Let , , and for , set
Let be any positive integer, and let be a set of coset representatives for from Lemma 1.28.
We have
Now for each summand corresponding to the , make the change of variables . Thus we make changes of variables. Also, we will use the notation
for . We have
We have , since a linear fractional transformation is unchanged by a nonzero rescaling of a matrix that induces it. Thus by the quotient rule, using that has determinant , we see that
We next show that
(10)
From the definitions, and again using that , we see that
which proves that (10) holds. Thus
Next we use that
To see this, note that . Using this we see that
Now substituting for , we see that
as required. Thus finally
Suppose that is a finite index subgroup of such that if , then
For example, satisfies this condition. There is an involution on given by
(11)
which we call the star involution. On Manin symbols, is
(12)
Let be the eigenspace for on , and let be the eigenspace. There is also a map on modular forms, which is adjoint to .
Remark 1.43
Notice the minus sign in front of in (11). This sign is missing in [Cre97a], which is a potential source of confusion (this is not a mistake, but a different choice of convention).
We now state the final result about the pairing, which explains how modular symbols and modular forms are related.
Theorem 1.44
The integration pairing induces nondegenerate Hecke compatible bilinear pairings
Remark 1.45
We make some remarks about computing the boundary map of Section Boundary Manin Symbols when working in the quotient. Let be a sign, either or . To compute , it is necessary to replace by its quotient modulo the additional relations for all cusps . Algorithm 1.33 can be modified to deal with this situation as follows. Given a cusp , proceed as in Algorithm 1.33 and check if either or is equivalent (modulo scalars) to any cusp seen so far. If not, use the following trick to determine whether the and -relations kill the class of : use the unmodified Algorithm 1.33 to compute the scalars and indices , associated to and , respectively. The -relation kills the class of if and only if but .
In this section, we describe natural maps between spaces of modular symbols with character of different levels. We consider spaces with character, since they are so important in applications.
Fix a positive integer and a Dirichlet character . Let be a positive divisor of that is divisible by the conductor of , in the sense that factors through via the natural map composed with some uniquely defined character . For any positive divisor of , let and fix a choice of coset representatives for .
Remark 1.46
Note that [Mer94, Section 2.6] contains a typo: The quotient " " should be replaced by " ".
Proposition 1.47
For each divisor of there are well-defined linear maps
Furthermore, is multiplication by
Proof
To show that is well defined, we must show that for each and , we have
We have
so
We next verify that is well defined. Suppose that and ; then , so
This computation shows that the definition of does not depend on the choice of coset representatives. To finish the proof that is well defined, we must show that, for , we have so that respects the relations that define . Using that does not depend on the choice of coset representative, we find that for ,
To compute , we use that :
The scalar factor of appears instead of , because is acting on as an element of and not as an element of .
Definition 1.48
The space of new modular symbols is the intersection of the kernels of the as runs through all positive divisors of and runs through positive divisors of strictly less than and divisible by the conductor of . The subspace of old modular symbols is the subspace generated by the images of the where runs through all positive divisors of and runs through positive divisors of strictly less than and divisible by the conductor of . The new and old subspaces of cuspidal modular symbols are the intersections of the above spaces with .
Example 1.49
The new and old subspaces need not be disjoint, as the following example illustrates! (This contradicts [Mer94, pg. 80].) Consider, for example, the case , , and trivial character. The spaces and are each of dimension , and each is generated by the modular symbol . The space is of dimension and is generated by the three modular symbols , , and . The space generated by the two images of under the two degeneracy maps has dimension , and likewise for . Together these images generate , so is equal to its old subspace. However, the new subspace is nontrivial because the two degeneracy maps are equal, as are the two degeneracy maps
In particular, the intersection of the kernels of the degeneracy maps has dimension at least (in fact, it equals ). We verify some of the above claims using Sage.
sage: M = ModularSymbols(Gamma0(6)); M
Modular Symbols space of dimension 3 for Gamma_0(6)
of weight 2 with sign 0 over Rational Field
sage: M.new_subspace()
Modular Symbols subspace of dimension 1 of Modular
Symbols space of dimension 3 for Gamma_0(6) of weight
2 with sign 0 over Rational Field
sage: M.old_subspace()
Modular Symbols subspace of dimension 3 of Modular
Symbols space of dimension 3 for Gamma_0(6) of weight
2 with sign 0 over Rational Field
In this section we explicitly compute for various and . We represent Manin symbols for as triples of integers , where , and corresponds to in the usual notation. Also, recall from Proposition 3.10 that corresponds to the right coset of that contains a matrix with as elements of , i.e., up to rescaling by an element of .
In this section we give an algorithm to compute a canonical representative for each element of . This algorithm is extremely important because modular symbols implementations use it a huge number of times. A more naive approach would be to store all pairs and a fixed reduced representative, but this wastes a huge amount of memory. For example, if , we would store an array of
Another approach to enumerating is described at the end of [Cre97a, Section 2.2]. It uses the fact that it is easy to test whether two pairs define the same element of ; they do if and only if we have equality of cross terms (see [Cre97a, Prop. 2.2.1]). So we consider the -based list of elements
(13)
concatenated with the list of nonequivalent elements for and , checking each time we add a new element to our list (of ) whether we have already seen it.
Given a random pair , the problem is then to find the index in our list of the equivalent representative. We use the following algorithm, which finds a canonical representative for each element of . Given an arbitrary , we first find the canonical equivalent elements . If , then the index is . If , we find the corresponding element in an explicit sorted list, e.g., using binary search.
In the following algorithm, denotes the residue of modulo that satisfies . Note that we never create and store the list (13) itself in memory.
Algorithm 1.50
Given and and a positive integer , this algorithm outputs a pair such that as elements of and such that . Moreover, the element does not depend on the class of , i.e., for any with the input also outputs . If is not in , this algorithm outputs .
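When implementing Algorithm 1.50 it is useful to have a slow but obviously correct normalization to test against. The following naive Python sketch (our names) scans all units modulo N and takes the lexicographically smallest unit multiple as the canonical representative; the point of Algorithm 1.50 is to produce a canonical form without any such scan.

```python
from math import gcd

def p1_normalize_naive(N, u, v):
    """Canonical representative of (u : v) in P^1(Z/NZ): the lexicographically
    smallest pair (s*u mod N, s*v mod N) over units s mod N.  Returns (0, 0)
    if (u, v) does not define a point of P^1(Z/NZ); for N = 1 the unique
    point is also represented by (0, 0)."""
    if N == 1:
        return (0, 0)
    u, v = u % N, v % N
    if gcd(gcd(u, v), N) != 1:
        return (0, 0)
    return min((s * u % N, s * v % N)
               for s in range(1, N) if gcd(s, N) == 1)

# All points of P^1(Z/6Z); the count is 6 * (1 + 1/2) * (1 + 1/3) = 12.
reps = {p1_normalize_naive(6, u, v) for u in range(6) for v in range(6)}
reps.discard((0, 0))
```

For example, (2 : 3) and (4 : 3) differ by the unit 5 modulo 6, and both normalize to the same representative; comparing such outputs against a fast implementation exercises exactly the invariance property demanded of Algorithm 1.50.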
Remark 1.51
In the above algorithm, there are many gcd computations with , so one should create a table of the gcds of with .
Remark 1.52
Another approach is to instead use that
where , and that it is relatively easy to enumerate the elements of for a prime power .
Algorithm 1.53
Given an integer , this algorithm makes a sorted list of the distinct representatives of with , as output by Algorithm 1.50.
Explicit detailed examples are crucial when implementing modular symbols algorithms from scratch. This section contains a number of such examples.
In this section, we compute explicitly in a few cases.
Example 1.54
We compute . Because and , we expect to have dimension and for each integer the Hecke operator to have eigenvalue the sum of the cubes of positive divisors of .
The Manin symbols are
The relation matrix is
where the first two rows correspond to -relations and the last three to -relations. Note that we do not include all -relations, since it is obvious that some are redundant, e.g., and are the same since has order .
The echelon form of the relation matrix is
where we have deleted the zero rows from the bottom. Thus we may replace the above complicated list of relations with the following simpler list of relations:
from which we immediately read off that the second generator is and . Thus has dimension , with basis the equivalence class of (or of ).
Next we compute the Hecke operator on . The Heilbronn matrices of determinant from Proposition 1.29 are
To compute , we apply each of these matrices to , then reduce modulo the relations. We have
Summing we see that in . Notice that
The Heilbronn matrices of determinant from Proposition 1.29 are
We have
Summing we see that
Notice that
Example 1.55
Next we compute explicitly. The Manin symbol generators are
The relation matrix is as follows, where the -relations are above the line and the -relations are below it:
In weight , two out of three -relations are redundant, so we do not include them. The reduced row echelon form of the relation matrix is
From the echelon form we see that every symbol is equivalent to a combination of , , and . (Notice that columns , , and are the pivot columns, where we index columns starting at .)
To compute , we apply each of the Heilbronn matrices of determinant from Proposition 1.29 to , then to , and finally to . The matrices are as in Example 1.54 above. We have
Applying to , we get
Applying to , we get
Thus the matrix of with respect to this basis is
where we write the matrix as an operator on the left on vectors written in terms of , , and . The matrix has characteristic polynomial
The factor corresponds to the weight Eisenstein series, and the factor corresponds to the elliptic curve , which has
Example 1.56
In this example, we compute , which illustrates both weight greater than and level greater than . We have the following generating Manin symbols:
The relation matrix is already very large for . It is as follows, where the -relations are before the line and the -relations after it:
The reduced row echelon form of the relation matrix, with zero rows removed, is
Since these relations are equivalent to the original relations, we see how can be expressed in terms of , , , and . Thus has dimension . For example,
Notice that the number of relations is already quite large. It is perhaps surprising how complicated the presentation is already for . Because there are denominators in the relations, the above calculation is only a computation of . Computing involves finding a -basis for the kernel of the relation matrix (see Exercise 7.5).
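A first step toward the integral computation mentioned above is a basis for the kernel of the relation matrix over Q. The following minimal routine (our own code) computes such a basis with exact fractions; note that merely clearing denominators in its output yields only a finite-index sublattice of the integral kernel, so the saturation step of Exercise 7.5 is still required.

```python
from fractions import Fraction

def kernel_basis(rows, ncols):
    """Q-basis for the right kernel of an integer matrix, via reduced
    row echelon form over exact rationals."""
    M = [[Fraction(e) for e in r] for r in rows]
    pivots, rank = [], 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        M[rank] = [e / M[rank][col] for e in M[rank]]   # normalize pivot row
        for i in range(len(M)):
            if i != rank and M[i][col]:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        pivots.append(col)
        rank += 1
    # one basis vector per free (non-pivot) column
    free = [c for c in range(ncols) if c not in pivots]
    basis = []
    for f0 in free:
        v = [Fraction(0)] * ncols
        v[f0] = Fraction(1)
        for r, c in enumerate(pivots):
            v[c] = -M[r][f0]
        basis.append(v)
    return basis
```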
As before, we find that with respect to the basis , , , and
Notice that there are denominators in the matrix for with respect to this basis. It is clear from the definition of acting on Manin symbols that preserves the -module , so there is some basis for such that is given by an integer matrix. Thus the characteristic polynomial of will have integer coefficients; indeed,
Note the factor , which comes from the two images of the Eisenstein series of level . The factor comes from the cusp form
By computing more Hecke operators , we can find more coefficients of . For example, the charpoly of is , and the matrix of is
which has characteristic polynomial
The matrix of is
with characteristic polynomial
One can put this information together to deduce that
Example 1.57
Consider , which has dimension . With respect to the symbols
the matrix of is
which has characteristic polynomial
There is one Eisenstein series and there are three cusp forms with and .
Example 1.58
To compute , we first make a list of the
elements using Algorithm 1.50. The list looks like this:
For each of the symbols , we consider the -relations and -relations. Ignoring the redundant relations, we find -relations and -relations. It is simple to quotient out by the -relations, e.g., by identifying with for some (or setting if ). Once we have taken the quotient by the -relations, we take the image of all of the -relations modulo the -relations and quotient out by those relations. Because and do not commute, we cannot quotient out only by the -relations involving the basis obtained after quotienting out by the -relations. The relation matrix has rank , so has dimension .
If we instead compute the quotient of by the subspace of elements , we include relations , where . There are now 2016 -relations, 2024 -relations, and 1344 -relations. Again, it is relatively easy to quotient out by the -relations by identifying and . We then take the image of all -relations modulo the -relations, and again directly quotient out by the -relations by identifying with for some , where by we mean the class of modulo the -relations. Finally, we quotient out by the -relations, which involves sparse Gaussian elimination on a matrix with rows and at most three nonzero entries per row. The dimension of is .
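To make this concrete in the simplest setting, the following self-contained sketch (our own code, using dense exact linear algebra rather than the sparse techniques of this section) builds the weight-2 Manin symbol presentation for Gamma_0(N): generators indexed by P^1(Z/NZ), with the two-term relations x + x*sigma = 0 and the three-term relations x + x*tau + x*tau^2 = 0. It then computes the dimension of the weight-2 modular symbols space as the corank of the relation matrix. All function names are our own.

```python
from fractions import Fraction
from math import gcd

def p1_reps(N):
    """Canonical representatives for P^1(Z/NZ): pairs (u, v) up to unit scaling."""
    units = [s for s in range(1, N) if gcd(s, N) == 1]
    canon = lambda u, v: min(((s * u) % N, (s * v) % N) for s in units)
    reps = sorted({canon(u, v) for u in range(N) for v in range(N)
                   if gcd(gcd(u, v), N) == 1})
    return reps, canon

def dim_M2_Gamma0(N):
    """Dimension of the space of weight-2 modular symbols for Gamma_0(N), N > 1."""
    reps, canon = p1_reps(N)
    idx = {x: i for i, x in enumerate(reps)}
    def act(x, g):                 # right action of [[a,b],[c,d]] on row vector (u, v)
        a, b, c, d = g
        return canon((a * x[0] + c * x[1]) % N, (b * x[0] + d * x[1]) % N)
    sigma = (0, -1, 1, 0)          # two-term relation  x + x*sigma = 0
    tau = (0, -1, 1, -1)           # three-term relation x + x*tau + x*tau^2 = 0
    rows = []
    for x in reps:
        r = [0] * len(reps)
        r[idx[x]] += 1; r[idx[act(x, sigma)]] += 1
        rows.append(r)
        r = [0] * len(reps)
        y = act(x, tau); z = act(y, tau)
        r[idx[x]] += 1; r[idx[y]] += 1; r[idx[z]] += 1
        rows.append(r)
    # dimension = number of generators minus rank of the relation matrix
    M = [[Fraction(e) for e in r] for r in rows]
    rank = 0
    for col in range(len(reps)):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                f = M[i][col] / M[rank][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        rank += 1
    return len(reps) - rank
```

This dense approach is fine for small N; for levels like 389, let alone the higher-weight spaces above, the sparse staged quotients described in Algorithm 1.59 below are essential.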
Algorithm 1.59
This is an algorithm to compute or , which requires generic sparse linear algebra only for the three-term -relations.
Let be a list of all Manin symbols.
Quotient out the two-term -relations and, if the quotient is desired, by the two-term -relations. (Note that this is more subtle than just “identifying symbols in pairs”, since complicated relations can unexpectedly force some generators to equal 0.) Let denote the class of after this quotienting process.
Create a sparse matrix with columns, whose rows encode the relations
For example, there are about such rows when . The number of nonzero entries per row is at most . Note that we must include rows for all , since even if , it need not be the case that , since the matrices and do not commute. However, we have an a priori formula for the dimension of the quotient by all these relations, so we could omit many relations and just check at the end that there are enough; if there are not, we add more.
Compute the reduced row echelon form of using Algorithm 7.6. For , this is the echelon form of a matrix with size about .
Use what we have done above to read off a sparse matrix that expresses each of the Manin symbols in terms of a basis of Manin symbols, modulo the relations.
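The subtlety in the two-term quotienting step, namely that the relations can force a generator to vanish, can be handled with a union-find structure that records a sign along with each identification. The following sketch is our own illustration of that bookkeeping, not code from the text; it assumes the base ring is one in which 2 is invertible, so that a relation implying 2x = 0 forces x = 0.

```python
class SignedUnionFind:
    """Track identifications x_i = s * x_j (s = +1 or -1) among generators,
    flagging classes that the relations force to be zero."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.sign = [1] * n        # invariant: x_i = sign[i] * x_{parent[i]}
        self.zero = [False] * n    # meaningful at root elements

    def find(self, i):
        """Return (root, s) with x_i = s * x_root, compressing the path."""
        if self.parent[i] == i:
            return i, 1
        root, s = self.find(self.parent[i])
        self.parent[i], self.sign[i] = root, s * self.sign[i]
        return root, self.sign[i]

    def identify(self, i, j, s):
        """Impose a two-term relation x_i = s * x_j."""
        ri, si = self.find(i)
        rj, sj = self.find(j)
        if ri != rj:
            # x_ri = (1/si) x_i = (s * sj / si) x_rj, and 1/si = si for si = +-1
            self.parent[ri] = rj
            self.sign[ri] = si * s * sj
            self.zero[rj] = self.zero[rj] or self.zero[ri]
        elif si != s * sj:         # the relation reads x_ri = -x_ri, so 2 x_ri = 0
            self.zero[ri] = True
```

The two-term -relations x + x*sigma = 0 would be fed in as `identify(i, j, -1)`; a class flagged `zero` corresponds to a generator that unexpectedly equals 0, exactly the subtlety noted in step 2 above.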
We sketch some of the ways in which we will apply the modular symbols algorithms of this chapter later in this book.
Cuspidal modular symbols are in Hecke-equivariant duality with cuspidal modular forms, and as such we can compute modular forms by computing systems of eigenvalues for the Hecke operators acting on modular symbols. By the Atkin-Lehner-Li theory of newforms (see, e.g., Theorem 9.4), we can construct for any , any , and using this method. See Chapter Modular Forms for more details.
Once we can compute spaces of modular symbols, we move to computing the corresponding modular forms. We define inclusion and trace maps from modular symbols of one level to modular symbols of level a multiple or divisor of . Using these, we compute the quotient of the new subspace of cuspidal modular symbols on which a “star involution” acts as . The Hecke operators act by diagonalizable commuting matrices on this space, and computing the systems of Hecke eigenvalues is equivalent to computing newforms . In this way, we obtain a list of all newforms (normalized eigenforms) in for any , , and .
In Chapter Computing Periods, we compute with the period mapping from modular symbols to attached to a newform . When and has rational Fourier coefficients, this gives a method to compute the period lattice associated to a modular elliptic curve attached to a newform (see Section All Elliptic Curves of Given Conductor). In general, computation of this map is important when finding equations for modular -curves, CM curves, and curves with a given modular Jacobian. It is also important for computing special values of the -function at integer points in the critical strip.
Modular symbols were introduced by Birch [Bir71] for computations in support of the Birch and Swinnerton-Dyer conjecture. Manin [Man72] used modular symbols to prove rationality results about special values of -functions.
Merel’s paper [Mer94] builds on work of Shokurov (mainly [Sho80a]), which develops a higher-weight generalization of Manin’s work partly to understand rationality properties of special values of -functions. Cremona’s book [Cre97a] discusses how to compute the space of weight modular symbols for , in connection with the problem of enumerating all elliptic curves of given conductor, and his article [Cre92] discusses the case and computation of modular symbols with character.
There have been several Ph.D. theses about modular symbols. Basmaji’s thesis [Bas96] contains tricks to efficiently compute Hecke operators , with very large (see Section Basmaji’s Trick), and also discusses how to compute spaces of half-integral weight modular forms building on what one can get from modular symbols of integral weight. The author’s Ph.D. thesis [Ste00] discusses higher-weight modular symbols and applies modular symbols to study Shafarevich-Tate groups (see also [Aga00]). Martin’s thesis [Mar01] is about an attempt to study an analogue of analytic modular symbols for weight . Gabor Wiese’s thesis [Wie05] uses modular symbols methods to study weight 1 modular forms modulo . Lemelin’s thesis [Lem01] discusses modular symbols for imaginary quadratic fields in the context of -adic analogues of the Birch and Swinnerton-Dyer conjecture. See also the survey paper [FM99], which discusses computation with weight modular symbols in the context of modular abelian varieties.
The appendix of this book is about analogues of modular symbols for groups besides finite-index subgroups of , e.g., for subgroups of higher-rank groups such as . There has also been work on computing Hilbert modular forms, e.g., by Lassina Dembélé [Dem05]. Hilbert modular forms are functions on a product of copies of , and is replaced by a group of matrices with entries in a totally real field.
Glenn Stevens, Robert Pollack and Henri Darmon (see [DP04]) have worked for many years to develop an analogue of modular symbols in a rigid analytic context, which is helpful for questions about computing with overconvergent -adic modular forms or proving results about -adic -functions.
Finally we mention that Barry Mazur and some other authors use the term “modular symbol” in a different way than we do. They use the term in a way that is dual to our usage; for example, they attach a “modular symbol” to a modular form or elliptic curve. See [MTT86] for an extensive discussion of modular symbols from this point of view, where they are used to construct -adic -functions.
Exercise 1.11
Suppose is an integer multiple of . Prove that the natural map is surjective.
Exercise 1.12
Prove that is surjective (see Lemma 1.36).
Exercise 1.13
Compute . List each Manin symbol and the relations it satisfies, compute the quotient, etc. Find the matrix of . (Check: The dimension of is , and the characteristic polynomial of is .)
Exercise 1.14
Finish the proof of Proposition 1.38.
Exercise 1.15