Sorry, I wrote when I meant .

Jason, the discrepancy of that sequence is indeed bounded, but that’s OK, because the ratio is about .

Wouldn’t the discrepancy of the sequence

1,1,-1,-1,0,1,1,-1,-1,0 …

be bounded by 2?
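For what it’s worth, this is easy to check numerically. Here is a quick sketch (the helper name and the cutoff 1000 are mine, not from the thread) that computes the largest partial sum of the sequence along any homogeneous AP:

```python
def hap_discrepancy(x, N):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all HAPs inside [1, N]."""
    best = 0
    for d in range(1, N + 1):
        s = 0
        for n in range(d, N + 1, d):
            s += x(n)
            best = max(best, abs(s))
    return best

# The periodic sequence 1, 1, -1, -1, 0 repeating; index by n mod 5.
period = [0, 1, 1, -1, -1]
x = lambda n: period[n % 5]

print(hap_discrepancy(x, 1000))  # -> 2
```

Along each common difference the sequence is again periodic with period 5 and sums to zero over each period, which is why the partial sums never drift.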

Thanks, that works. I’ve made a minor edit. (In case anyone isn’t aware, by clicking on ‘history’ at the top of the document and selecting two revisions you can see how they differ.)

OK I’ve done that. You can get to it via the following sequence of links: Polymath5 wiki — General proof strategies — representation of diagonals — after which it is clear what to go for.

Yes, it does seem to be a limit of one on the free account, unfortunately. I’ve done a bit of searching and found a couple of other options (latexlab and monkeytex), but I haven’t got either to work. I suppose a partial solution would be simply to post the LaTeX source on the wiki.

ONE?!?

I was going to do just that, but I now discover that the limited-numbers issue has kicked in already: if I have a free account then I’m allowed up to … er … one collaborator per project.

I’ve just created a Scribtex account — could you add me, please? My username is ‘obtext’.

Well, you’re added now. If the limited numbers become a problem we can think again about what to do.

My scribtex username is “obryant”. I haven’t used scribtex in a while; it looks much more polished now than last summer. It also looks like the number of collaborators might be severely limited. /Something/ like this should be around and useful for the write-up stage, instead of “passing the token” and the wiki.

Kevin, I’ve followed as much of your suggestion as I can, meaning that I’ve got myself a free scribtex account and uploaded the file as it currently is. I’m happy to share the file with anyone who’s interested.

One way to handle a shared tex file is to use scribtex. This still requires some action on your part to share the file, so it isn’t as free-range as a wiki would be. (To get access, I’d need a free scribtex account, and you’d need to add my username to the file’s share list.)

I was hoping to avoid that, but have given up the attempt. The link is now to the corrected file. (I hope I won’t have to keep doing this every time I update the file.)

One crude fix to the file update issue would be to use a different filename.

I hope this link will work. It’s very much a first draft, so I’ve been a bit lazy in places (not with the details of the proof, but with the accompanying explanations of what is going on). At some point I’ll update it, or if anyone else feels like doing so they are welcome and I can send the source file.

I’ve just checked, and the link does work. I haven’t mentioned in the write-up that I used the extra flexibility that is provided by an observation of Moses, which is that it is not necessary to represent a diagonal matrix — it’s enough to find an efficient representation of a matrix D + P, where D is diagonal and P is positive semidefinite. It turned out to be very helpful not to have to make the decomposition exactly equal to the identity. I haven’t thought about how this would affect any generalization to an HAP version with down the diagonal instead of 1.
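A small numerical illustration of why that extra flexibility is safe (a sketch with made-up matrices; the names D, P and the sizes are my choices): the positive-semidefinite part can only increase the quadratic form at a ±1 sequence, so a cheap representation of D + P still bounds the D part from below.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30

D = np.diag(rng.uniform(0.5, 1.0, size=N))  # the diagonal part
G = rng.normal(size=(N, N))
P = G @ G.T                                  # G G^T is positive semidefinite

# For any +-1 sequence x we have x^T P x >= 0, hence
#   x^T (D + P) x >= x^T D x,
# so a lower bound obtained from a representation of D + P
# is also a lower bound for the quadratic form of D alone.
for _ in range(100):
    x = rng.choice([-1.0, 1.0], size=N)
    assert x @ P @ x >= -1e-9
    assert x @ (D + P) @ x >= x @ D @ x - 1e-9
print("the PSD part never decreases the quadratic form")
```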

I also have a question: is this proof essentially the same as Roth’s original argument? I don’t know because I haven’t looked at the original argument.

Update: I’ve noticed that the statement of Lemma 3.1 is wrong (though the proof is a proof of the correct statement). The answer should be 1 rather than 1/N. I’ve tried to replace the file by a corrected version, but it’s failing to work, for reasons that I simply don’t understand. I’ll try some crude fix later.

I hope I haven’t made a silly mistake, but m plays a dual role in the proof. In order to approximate a trigonometric function e(αn), I convolve it with (a multiple of) an arithmetic progression with common difference d. For this to work, we choose d such that ||dα|| is at most 1/m, and that allows us to take the length of the progression to be m. It’s because we need ||dα|| to be at most 1/m that d has to be allowed to be as large as m.

Thus, in brief, m is actually given as the bound on the common differences, and it requires a comparable bound on the lengths for the proof to work.

I look forward to the writeup. In reference to your earlier comment here, I’m not sure I follow your intuition for why getting the lengths of the APs to tend to infinity slowly is useful for getting a result for HAP discrepancy. Maybe I’m being daft (and admittedly I haven’t worked through the details of the Roth proof carefully), but why does having small lengths ensure that the common differences are small too?

I’m doing this because it is necessary to sort out what happens with small lengths if there is to be any hope of modifying the proof to give something for HAPs rather than APs.

I’ve thought about this a bit further, and this is where I’ve got to.

In the representation-of-diagonals approach to Roth’s AP-discrepancy theorem, one does a trivial estimate, which results in no improvement to the trivial bound, and then one exploits cancellation to gain on the trivial estimate. Looking at it more closely, I see that the fact that the trivial estimate gives rise to the trivial bound is actually a coincidence: if you take (the lengths of the APs into which you decompose) to be smaller than then you no longer get the trivial estimate but something *worse* than the trivial estimate.

Just to clarify, in this proof we are trying to represent the identity as a linear combination of AP products with absolute values of coefficients adding up to less than . The trivial estimate allows us to do so with absolute values adding up to around , which just happens to equal when . The extra cancellation then gains a factor of , which for Roth’s theorem leads to a bound of for the sum of the coefficients. The discrepancy is obtained by dividing N by this sum and taking the square root, giving the bound. Unfortunately, when it gives us nothing.
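To spell out the bookkeeping in the trivial case (a sketch; numpy and the small N = 50 are my choices, not from the comment): decomposing the identity into singleton “AP products” uses coefficients whose absolute values sum to N, and dividing N by that sum and taking the square root then gives only the trivial discrepancy bound of 1.

```python
import numpy as np

N = 50
# Trivial decomposition: I = sum_n e_n e_n^T, i.e. every "AP" is a
# singleton {n} with coefficient 1, so the coefficient sum is N.
coeffs = [1.0] * N
aps = [[n] for n in range(N)]

M = np.zeros((N, N))
for lam, P in zip(coeffs, aps):
    v = np.zeros(N)
    v[P] = 1.0
    M += lam * np.outer(v, v)
assert np.allclose(M, np.eye(N))   # it really is the identity

# If I = sum_i lam_i 1_{P_i} 1_{P_i}^T, then for a +-1 sequence x,
#   N = x.x = sum_i lam_i (sum_{n in P_i} x_n)^2 <= (sum_i |lam_i|) * disc^2,
# so disc >= sqrt(N / sum_i |lam_i|).
C = sum(abs(c) for c in coeffs)
print((N / C) ** 0.5)              # -> 1.0: the trivial bound
```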

However, we know that the correct bound for Roth is , and it did look as though it should be possible to work out the cancellations more efficiently and obtain an improvement to the trivial bound by a factor of , rather than just . If this is indeed possible, then it would give a bound of for the sum of the coefficients, and a discrepancy bound of , which makes more sense, since it interpolates correctly between the results we know for and strongly suspect for . So I now feel that it is becoming a priority to stop being lazy and to try to push the representation-of-diagonals approach through for Roth until it matches the correct bound. (An additional motive for this is that it would be an interesting result in itself — perhaps even interesting enough to publish.)

If all that can be got to work, here’s how I imagine we go from there. (The word “imagine” is carefully chosen, as the argument I have in mind may well be purely imaginary.) We decompose the identity efficiently into a sum of AP products of length for some that tends to infinity very slowly. Note the crucial fact that the common differences of these APs will also be bounded above by a number that tends to infinity very slowly. This gives us a very small but non-trivial AP-discrepancy lower bound. We then modify the decomposition to get a decomposition of the diagonal matrix with down the diagonal (in the manner described in the comment to which this is a reply). Now some of the APs used in this decomposition will be (segments of) HAPs, and others won’t. But if is small enough, so that the common differences are very small — only just bigger than constant — then should give very much smaller weight, on average, to the non-homogeneous parts. Indeed, the fraction ought to be smaller than . So I’m hoping we can replace the non-homogeneous APs we use by trivial decompositions into singletons and not suffer for it.

At the moment, I don’t have a convincing argument that tells me that this approach couldn’t work. So I’m feeling as though we need to start to get a bit technical. Of course, I don’t mean that absolutely everything needs to be proved — at this stage it would be fantastic if we could prove a result conditional on the Roth thing working out and various plausible facts about being true. Then we would have reduced the whole problem to a couple of technical lemmas.

Moses, that was an interesting thought and I’ve been thinking about it, but I haven’t come up with anything worth saying in this comment. Perhaps it would be worth working out the SVD of that matrix for some large N, just to see what it looks like. A slightly similar thought I had was to take the inner product I defined earlier (in the weighted space using as weights) and do Gram-Schmidt orthogonalization. But I don’t have a good candidate for the basis that should be orthogonalized.

I haven’t quite figured out where this is headed, but I just wanted to recall a comment that Tim made when he first suggested this particular function as a possible candidate for the diagonal. Consider the matrix whose nth column is the HAP of common difference n, and remove the diagonal part except for the (1,1) entry. Tim pointed out that this function is an eigenvector of this matrix.

Given this connection and the fact that this matrix is defined in terms of HAPs, is there any useful information in this matrix for constructing the diagonal representation we are hoping for? E.g. does the singular value decomposition of this matrix suggest anything?
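In case it helps anyone experiment, here is a quick way to look at that matrix numerically (a sketch; N = 100 and the use of numpy are my choices): build the matrix whose columns are HAP indicators, strip the diagonal except for the (1,1) entry, and compute its singular values.

```python
import numpy as np

N = 100
# A[i, j] = 1 iff (j+1) divides (i+1): column j is the indicator of the
# HAP with common difference j+1 (0-based indices standing for 1..N).
A = np.array([[1.0 if (i + 1) % (j + 1) == 0 else 0.0
               for j in range(N)] for i in range(N)])

# Remove the diagonal part except for the (1,1) entry.
B = A - np.eye(N)          # every n divides itself, so diag(A) is all ones
B[0, 0] = 1.0

U, s, Vt = np.linalg.svd(B)
print(s[:5])               # the leading singular values
```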

Let me say very slightly more about that last suggestion. The idea would be this. We would start with the decomposition where for every and . As in the proof of Roth’s discrepancy theorem, written out here with an bound by Kevin O’Bryant, we then decompose each efficiently into characteristic functions of APs. However, in this case, instead of trying to make them as long as possible, we give them lengths that tend slowly to infinity.

As in the Roth proof, there should be plenty of cancellation in the coefficients, which is crucial to the success of that proof and obviously would be here too. But here we face an additional difficulty, which is that most of the APs we have decomposed into are not HAPs.

What I am hoping, though it may be a rather desperate hope, is that if instead we do something very similar but for the functions , then the total weight of the non-homogeneous APs that we use will be small enough that we can replace that part of the decomposition by something much more trivial (such as decomposition into HAPs of length 1).

Everything depends in a crucial way on the numbers, and at the time of writing I don’t have an intuitive idea of how that is likely to go. (Naturally, the most likely outcome is that a first attempt at getting the numbers to work will fail dismally, but if we were lucky then it might suggest a refined approach with a better chance of working.)
