SelectedPapers.net is a free, open-source project aimed at improving the way people find, read, and share academic papers. By looking at the intersection of people and ideas, we can help deliver content specifically tuned to your interests. But we're just getting started, and we can use your help. This is an initial, alpha release of SelectedPapers.net. Please give us your feedback, whether general comments, specific bug reports, or feature suggestions.

Try out our new People, Topics and Comments search options above! We have a new discussion forum. Also please subscribe for news about spnet. If your Google+ post is failing to appear here, click here for a quick solution.


Recent Activity on SelectedPapers.net

commented on Communication: Charge-Population Based Dispersion Interactions for Molecules and Materials (Apr. 18, 2016)
[PARTIALLY NOT ORIGINAL IDEA] #spnetwork arXiv:1604.02493
Hello,
I am sure that the same approximation for computing the C6 coefficient of an atom in a molecule has already been used in the paper <<Petraglia, R., Steinmann, S. N., & Corminboeuf, C. (2015). A fast charge-dependent atom-pairwise dispersion correction for DFTB3. International Journal of Quantum Chemistry, 115(18), 1265–1272. doi:10.1002/qua.24887>>. That paper is only cited marginally in your work; it would be nice if you gave proper credit for the approximation \(C_6^{AA}/C_6^{AA,\mathrm{free}} \approx (h_A/Z_A)^2\).
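
For reference, the scaling quoted above is simple to state in code. This is only an illustrative sketch of the formula as written, assuming h_A is the atom-in-molecule electron population and Z_A the free-atom electron count; the function name and the numbers are invented for the example and are not taken from either paper.

```python
def scaled_c6(c6_free, population, nuclear_charge):
    """Scale a free-atom C6 coefficient by the squared population ratio,
    C6_AA ~= C6_AA_free * (h_A / Z_A)**2 (the approximation quoted above;
    illustrative sketch only)."""
    return c6_free * (population / nuclear_charge) ** 2

# Illustrative numbers only: an atom with Z_A = 6 carrying a population of 5.8 electrons.
print(scaled_c6(c6_free=46.6, population=5.8, nuclear_charge=6.0))
```
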
David Roberts commented on Note on the construction of globular weak omega-groupoids from types, topological spaces etc (Apr. 4, 2016)
This paper has a very nice and concise definition of a weak ∞-groupoid à la Grothendieck. Starting from general category-theoretic background, it sets out the relevant material and then gives the definition, all in two pages. And it really isn't something like "worble worble worble, ninky-nonk, iggle-piggle Grothendieck ∞-groupoid", where all those words are contained in other sources and require serious reading; everything is in there. The only problem then is to see how that definition captures the subtle idea of an ∞-groupoid, which is deferred to the cited references.

http://arxiv.org/abs/1602.07962

Title: Note on the construction of globular weak omega-groupoids from types, topological spaces etc
Author: John Bourke

Abstract: A short introduction to Grothendieck weak omega-groupoids is given. Our aim is to give evidence that, in certain contexts, this simple language is a convenient one for constructing globular weak omega-groupoids. To this end, we give a short reworking of van den Berg and Garner's construction of a Batanin weak omega-groupoid from a type using the language of Grothendieck weak omega-groupoids.

#arXiv  arXiv:1602.07962 #spnetwork   #categorytheory  
#MochizukiABC new topic created (Jan. 16, 2016)
Rongmin Lu replied RE: Effectivity in Mochizuki's work on the $abc$-conjecture (Jan. 16, 2016)
Thanks for sharing!
David Roberts commented on Effectivity in Mochizuki's work on the $abc$-conjecture (Jan. 16, 2016)
If Mochizuki's work is correct, this removes the ineffectivity in the proof arising from the use of proof-by-contradiction in one of the earlier precursor papers (see http://mathbabe.org/2015/12/15/notes-on-the-oxford-iut-workshop-by-brian-conrad/ for discussion of this point). This is hugely important, as, conditionally, Mochizuki's work at present only proves the existence of some constant giving the needed bound. If people can unwind the claimed IUTT proof, this should make it possible to get actual estimates that would give upper bounds on the number of rational points on algebraic curves, which Faltings's proof of the Mordell conjecture only shows to be finite.

http://arxiv.org/abs/1601.03572

Title: Effectivity in Mochizuki's work on the abc-conjecture
Author: Vesselin Dimitrov

Abstract: This note outlines a constructive proof of a proposition in Mochizuki's paper "Arithmetic elliptic curves in general position," making a direct use of computable non-critical Belyi maps to effectively reduce the full abc-conjecture to a restricted form. Such a reduction means that an effective abc-theorem is implied by Theorem 1.10 of Mochizuki's final IUT paper ("Inter-universal Teichmuller theory IV: log-volume computations and set-theoretic foundations").

#arXiv arXiv:1601.03572 #spnetwork
#MochizukiABC
Joel David Hamkins replied RE: Non Standard Analysis as a Functor, as Local, as Iterated (Jan. 7, 2016)
This looks like it could be interesting, but I wonder whether there is anything more than translating the usual elementary-embedding understanding of the transfer principle into category-theoretic language? If not, then I'm skeptical. On the other hand, the requirements on the functor seem to me like they might be weaker than full elementarity, although the cardinality calculations and use of GCH are just the kind of thing one expects for ordinary ultrapowers of the set-theoretic universe, which I would expect to be a central example here.
Luke Sciarappa replied RE: Non Standard Analysis as a Functor, as Local, as Iterated (Jan. 6, 2016)
Does the ultrafilter functor even preserve finite limits?
Luke Sciarappa replied RE: Non Standard Analysis as a Functor, as Local, as Iterated (Jan. 6, 2016)
Ooh, that does sound interesting. Isn't x -> ultrafilters on x the initial or final such functor?
#SetTheory new topic created (Jan. 6, 2016)
David Roberts commented on Non Standard Analysis as a Functor, as Local, as Iterated (Jan. 6, 2016)
A non-standard analysis is an endofunctor of Set preserving finite limits and restricting to a self-equivalence of the full subcategory of finite sets.

Looks like an interesting definition, though I haven't read the rest of the paper. Immediately after Definition 1 it takes a material stance on sets by considering {a} for every set a, and its image under such a functor *.

http://arxiv.org/abs/1601.00488

Title: Non Standard Analysis as a Functor, as Local, as Iterated
Author: Eliahu Levy

Abstract: This note has several aims. Firstly, it portrays a non-standard analysis as a functor, namely a functor * that maps any set A to the set *A of its non-standard elements. That functor, from the category of sets to itself, is postulated to be an equivalence on the full subcategory of finite sets onto itself and to preserve finite projective limits (equivalently, to preserve finite products and equalizers). Secondly, "Local" non-standard analysis is introduced as a structure which I call lim-rim, in particular exact lim-rims. The interplay between these, and ultrafilters and ultrapowers, and also cardinality relations and notions depending on a cardinality such as saturation and what I call "confinement" and "exactness", are investigated. 
In particular, one constructs non-standard analyses, with "good" kinds of lim-rims. In these one may say that *A - "the adjunction of all possible limits from A" - plays a role analogous to that of the algebraic closure of a field - "the adjunction of all roots of polynomials". Then in the same spirit as with the latter, one has uniqueness up to isomorphism, and also universality and homogeneity, provided one has enough General Continuum Hypothesis. The cardinality of *A will be something like \(2^{2^{|A|}}\), and one has a high degree of saturation. 
Also, one notes that the functor * can be applied again to *A, giving **A, ***A, and so forth. In particular, I focus on the two different embeddings of *A into **A and prove some of their properties, with some applications.

#arXiv  arXiv:1601.00488 #spnetwork   #SetTheory   #CategoryTheory  
Richard Green commented on Applications of Markov Chain Analysis to Integer Complexity (Dec. 13, 2015)
The complexity of integers

The complexity of an integer n is defined to be the smallest number of 1s required to build the integer using parentheses, together with the operations of addition and multiplication. 

For example, the complexity of the integer 10 is 7, because we can write 10=1+(1+1+1)x(1+1+1), or as (1+1+1+1+1)x(1+1), but there is no way to do this using only six occurrences of 1. You might think that the complexity of the number 11 would be 2, but it is not, because pasting together two 1s to make 11 is not an allowable operation. It turns out that the complexity of 11 is 8.
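
To make the definition concrete, here is a minimal dynamic-programming sketch (my own, not from the paper under discussion; the function name is made up) that computes f(n) for small n by trying every additive split and every factorisation. It reproduces f(10) = 7 and f(11) = 8.

```python
def integer_complexity(N):
    """Return a list f with f[n] = complexity of n for 1 <= n <= N, using the
    recurrence f(n) = min over a+b = n and a*b = n of f(a) + f(b)."""
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))   # additive splits
        for d in range(2, int(n ** 0.5) + 1):                       # multiplicative splits
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

f = integer_complexity(20)
print(f[10], f[11])   # 7 8
```

The additive loop tries every split a + b = n and the divisor loop every factorisation, which suffices because the top-level operation of an optimal expression is either a sum or a product.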

The complexity, f(n), of an integer n was first defined by K. Mahler and J. Popken in 1953, and it has since been rediscovered by various other people. A natural problem that some mathematicians have considered is that of finding upper and lower bounds for f(n) in terms of n.

John Selfridge found a lower bound for f(n) by proving that f(n) is always greater than or equal to 3 log_3(n), where log_3(n) denotes the base 3 logarithm of n. This lower bound is sharp: it cannot be improved because the bound is achieved when n is a power of 3. For example, if n=81, we have n=3^4, which (by the definition of logarithm) means that log_3(n)=4. Selfridge's lower bound for f(n) is then 3x4=12. We can write 81=(1+1+1)x(1+1+1)x(1+1+1)x(1+1+1), which uses twelve occurrences of 1, and Selfridge's result shows that there is no way to achieve this using eleven or fewer 1s. Note that we are only allowed to use addition and multiplication in our expressions; using exponentiation is not allowed.
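
Here is a small check (mine, not from the post) of the sharpness claim: the product of k copies of (1+1+1) equals 3^k and uses exactly 3k = 3 log_3(3^k) ones, so together with Selfridge's lower bound the value 3 log_3(n) is attained at every power of 3.

```python
from math import log

# The product of k copies of (1+1+1) equals 3**k and uses exactly 3k ones,
# matching Selfridge's lower bound 3*log_3(n) at powers of three.
for k in range(1, 6):
    expr = "*".join(["(1+1+1)"] * k)
    n = eval(expr)
    ones = expr.count("1")
    print(n, ones, 3 * log(n, 3))   # ones == 3k, and 3*log_3(n) == 3k (up to rounding)
```
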

The problem of finding an upper bound is more complicated. Richard K. Guy found an upper bound for f(n), showing that it is bounded above by 3 log_2(n), which works out at about 4.755 log_3(n). (The 4.755 is an approximation to 3log(3)/log(2).) Guy found this bound using Horner's method, which is explained in the appendix below. The worst case of Guy's bound is achieved in the case that n has no zeros when it is expressed in base 2. 

More generally, it turns out to be difficult to improve the upper bound for f(n) because numbers whose binary digits contain a very uneven balance of 0s and 1s tend to cause problems. An example of this is the number 1439, which is 10110011111 in binary. It turns out that f(1439)=26, and an optimal expression for 1439 is 1+2(1+2(1+1+3((2)(2)(2)(2)+1)((3)(2)+1))), where 2 and 3 are shorthands for (1+1) and (1+1+1), respectively. This works out as about 3.928 log_3(1439).
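
As a quick check of the arithmetic in the previous paragraph, the quoted expression can be evaluated directly. The string below is just that expression with multiplication made explicit, each digit d standing for d ones; this is my own verification snippet, not taken from the paper.

```python
from math import log

# The quoted optimal expression for 1439, with the shorthands 2 = (1+1) and
# 3 = (1+1+1) left as digits and multiplication written explicitly.
expr = "1+2*(1+2*(1+1+3*(2*2*2*2+1)*(3*2+1)))"

value = eval(expr)                               # 1439
ones = sum(int(c) for c in expr if c.isdigit())  # each digit d stands for d ones: 26
print(value, ones, ones / log(value, 3))         # 1439 26 3.928...
```
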

However, it is possible to replace this value of 3.928 by a lower number for “almost all” integers n; in particular, recent work of J. Arias de Reyna and J. van de Lune shows that it can be replaced by 3.655 for a set of natural numbers of density 1. The recent paper Applications of Markov Chain Analysis to Integer Complexity  (http://arxiv.org/abs/1511.07842) by Christopher E. Shriver improves this upper bound to 3.529 for suitably generic numbers. The paper mentions that extensive numerical computations by other authors suggest that it ought to be possible to improve the generic bound to 3.37.

Relevant links

The On-Line Encyclopedia of Integer Sequences lists the complexities f(n) of the first few natural numbers (https://oeis.org/A005245) and mentions that Guy conjectured that f(p) = f(p–1) + 1 whenever p is prime. Guy's conjecture turned out to be false, but the smallest counterexample, which was found by Martin Fuller in 2008, is surprisingly big: the smallest such prime is 353942783.

The OEIS is an extremely useful resource for researchers in discrete mathematics. They are currently running their annual appeal for donations, and you can donate on this page: http://oeisf.org/

The “extensive numerical computations” mentioned above are discussed in the paper by Iraids et al, which you can find at http://arxiv.org/abs/1203.6462

Appendix: proof of Guy's upper bound

Horner's method works by expressing a (nonzero) number n in base 2 as a sequence of binary digits a_0 a_1 ... a_k, where a_0 is the units digit and we may assume that the leading digit a_k is equal to 1. Since 2^k ≤ n < 2^{k+1}, it follows that log_2(n) is greater than or equal to k and less than k+1. It is also immediate that n can be expressed as the polynomial
a_0 + x a_1 + x^2 a_2 + ... + x^k a_k
when x is replaced by 2. Rearranging, we find that
n = a_0 + 2(a_1 + 2(a_2 + ... + 2(a_{k-1} + 2)...)),
because we assumed that a_k was equal to 1. We then replace each occurrence of 2 with “1+1”, and replace each occurrence of a_i with either 0 or 1. Finally, we remove all the occurrences of “0+”. The number of 1s in the result is at most 3k, as required; the bound is achieved whenever n is one less than a power of 2.

For example, n=42 is 101010 in binary, which can be written as 0+2(1+2(0+2(1+2(0+2)))). This expands to (1+1)(1+(1+1)(1+1)(1+(1+1)(1+1))), which uses 12 ones. Since 42 lies between 2^5=32 and 2^6=64, we have k=5, so 3 log_2(42) is more than 15, comfortably more than the 12 ones used, as required.
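
The construction in this appendix is easy to mechanise. The sketch below is my own code (the function name is made up); it builds the nested expression from the binary digits, working from the leading digit inward, and counts the 1s it uses, reproducing the 12-one expression for 42.

```python
def horner_expression(n):
    """Build an addition/multiplication expression for n >= 2 from its binary
    digits (Horner's method, as in the appendix) and count the 1s it uses."""
    bits = bin(n)[2:]                 # most significant bit first; bits[0] == "1"
    expr, ones = "1+1", 2             # the leading 1, doubled: 1*2 is written as (1+1)
    if bits[1] == "1":                # absorb the second bit before the next doubling
        expr, ones = expr + "+1", ones + 1
    for b in bits[2:]:
        expr, ones = "(" + expr + ")*(1+1)", ones + 2    # double
        if b == "1":
            expr, ones = expr + "+1", ones + 1           # add the next bit
    return expr, ones

expr, ones = horner_expression(42)
print(expr, "=", eval(expr), "using", ones, "ones")   # value 42, 12 ones
```

Each doubling contributes two 1s and each nonzero lower-order digit one more, so the total never exceeds 3k, which is Guy's bound.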

#mathematics #sciencesunday #spnetwork arXiv:1511.07842
Terence Tao replied RE: Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
+Timothy Gowers I've added the tag. I did try to post through the spnetwork system first, but the bookmark I had for it no longer worked. I see though now that the network web page is still up, so hopefully this post will get detected by it.

+Mark Lewko As Ben says, without the explicit dependence of constants on k one probably doesn't immediately get new bounds on zeta; for instance even the deep results of Wooley in recent years have not (to my knowledge) budged the classical Vinogradov-Korobov zerofree region on zeta, which is proven using the classical Vinogradov mean value theorem. But perhaps some variant of the methods can say something new. Recently for instance Bourgain used the decoupling theorem to slightly improve the upper bound on zeta on the critical line (making a little bit of progress towards the Lindelof hypothesis). Naively one could perhaps hope to somehow combine the decoupling methods with the efficient congruencing methods to prove some super-theorem that could have further applications, though currently the methods look very incompatible (the latter relies very much on the arithmetic structure, while the former heavily relies on generalising to the non-arithmetic setting for induction purposes).
Ben Green replied RE: Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
They defer to Wooley's paper for a discussion of the implications for zeta, which would have to do with the zero-free region if I understand correctly. My vague impression, not having looked at the matter in any detail, is that explicit implied constants will be required in the estimates - or at least some indication of the form of the dependence of these constants on k, s and eps. From what I saw from a glance through the preprint this morning, that might be a slightly tricky undertaking.
Mark Lewko replied RE: Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
This is certainly a spectacular breakthrough. I'm curious what the implications to zeta are? 
Holden Lee commented on Ramanujan Coverings of Graphs (Dec. 8, 2015)
Notes on Doron Puder's talk "Ramanujan coverings of graphs" at IAS/CSDM seminar today: S6 in https://dl.dropboxusercontent.com/u/27883775/wiki/math/pdfs/csdm.pdf #spnetwork arXiv:1506.02335
Timothy Gowers replied RE: Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
This sounds amazing, and perhaps worth an #spnetwork  tag, unless that idea has basically died.
Richard Elwes replied RE: Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
Should it read "they have their most spectacular result yet"?
Terence Tao commented on Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three (Dec. 8, 2015)
In the last few years, Bourgain and Demeter have been pursuing various applications of their decoupling theorem in restriction theory, which allows for the accurate estimation of various exponential integrals or exponential sums. In a paper appearing today on the arXiv with Guth, they have their most spectacular conjecture application yet - settling in full generality the main conjecture of Vinogradov on mean values of exponential sums. There has been much recent work on this by Wooley and his coauthors using the "efficient congruencing" method, but the methods of Bourgain-Demeter-Guth look to be quite different. There should be applications of this result to Waring's problem and to upper bounds on the Riemann zeta function. Anyway, certainly a paper I intend to read through in the coming weeks... #spnetwork arXiv:1512.01565