> My concern is that the word “elementary” in the title carries a much broader meaning in standard mathematical usage, and in this meaning, the paper’s title does not hold.
> Elementary functions typically include arbitrary polynomial roots, and EML terms cannot express them.
If you take a real analysis class, the elementary functions will be defined exactly as the author of the EML paper does.
I've actually just learnt that some consider roots of arbitrary polynomials to be part of the elementary functions; but I'm a physicist and only ever took some undergraduate mathematics classes.
Nonetheless, calling these elementary feels like a bit of a stretch, considering that the word literally means the basic stuff, something that a beginner will learn first.
No. It's code for the thickest, densest book on the subject that you're ever gonna not read, as it actually assumes you're experienced in the subject and goes into everything except intro level topics.
The definition of "elementary function" typically includes functions which solve polynomials, like the Bring radical. The definition was developed and is most fitting in algebraic contexts where algebraic structure is meaningful, like Liouvillian structure theorems, algorithmic integration, and computer algebra. See e.g. Ritt's "Integration in Finite Terms: Liouville's Theory of Elementary Methods" (1948).
There appears to be a typo in that example; I assume "Essentially elementary functions are the functions that can be built from ℂ and f(x) = x" should say something more like "the functions that can be built from ℂ and f(x) = y".
Not a typo! Think of f(x) = x as a seed function that can be used to build other functions. It's one way to avoid talking about "variables" as a "data type" and just keep everything about functions. We can make a function like x + x*exp(log(x)) by "formally" writing
f + f*(exp∘log)
where + and * are understood to produce new functions. Sort of Haskell-y.
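That "formal" style is easy to mimic in code; here is a minimal sketch (the `Fn` wrapper and the use of `@` for composition are my own choices, not anything from the paper):

```python
import math

class Fn:
    """Wrap a callable so that +, * and @ (composition) build new
    functions, keeping everything point-free, sort of Haskell-y."""
    def __init__(self, call):
        self.call = call
    def __call__(self, x):
        return self.call(x)
    def __add__(self, other):
        return Fn(lambda x: self(x) + other(x))
    def __mul__(self, other):
        return Fn(lambda x: self(x) * other(x))
    def __matmul__(self, other):          # (f @ g)(x) = f(g(x))
        return Fn(lambda x: self(other(x)))

f = Fn(lambda x: x)                       # the seed function f(x) = x
exp, log = Fn(math.exp), Fn(math.log)

g = f + f * (exp @ log)                   # "formally" x + x*exp(log(x))
assert abs(g(3.0) - 12.0) < 1e-9          # 3 + 3*3, up to rounding
```

No "variable" object ever appears: `g` is assembled purely from the seed `f` and the combinators.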
Jargon is words used not with their typical layman's meaning but with a specific one from the domain in question.
If a written piece is intended for an audience who knows the jargon, then it's fine to use it - in fact it's appropriate and succinct. If it's intended for laypeople, then jargon is inappropriate.
But it seems you're lamenting that this jargon is wrong and that it shouldn't be jargon!?
I don't know if I read this right, but I thought it's proven that "elementary functions" can't solve 5th-degree or higher polynomials, so I'm confused about how that's to be interpreted if elementary functions also include arbitrary polynomial roots. Or are these different elementary functions?
That theorem is not formulated about "elementary functions".
It says that polynomial equations of the 5th degree or higher cannot, in general, be solved using "radicals".
While something like "polynomials" or "radicals" has a clear meaning, which are the "elementary functions" is a matter of convention.
The usual convention is to include all algebraic functions and a few selected transcendental functions.
"All algebraic functions" includes the rational functions, the radicals, and the functions that compute solutions of arbitrary polynomial equations.
Some conventions for "elementary functions" describe the expressions you can use to write them, in which case not all algebraic functions are included - only those written by combining rational functions with radicals.
For an algebraic function that computes a solution of a general polynomial equation that cannot be expressed with radicals, you cannot write an explicit formula; you can write the function only implicitly, via the corresponding polynomial equation.
So the difference between the two kinds of conventions about which functions are "elementary" usually comes down to whether only explicitly-written functions are considered, or implicit functions as well.
The term 'elementary function' doesn't really have a single universally agreed on strict definition.
Definitions are either a bit fuzzy, or not universally agreed on.
Though interestingly https://en.wikipedia.org/wiki/Elementary_function says "More generally, in modern mathematics, elementary functions comprise the set of [...]". So at least Wikipedia thinks that 'modern mathematics' has a consensus; of course, there's no guarantee that whoever you are talking to uses the 'modern mathematics' definition that Wikipedia brings up.
The original article explicitly acknowledged this limitation, that while in "the classical differential-algebraic setting, one often works with a broader notion of elementary function, defined relative to a chosen field of constants and allowing algebraic adjunctions, i.e., adjoining roots of polynomial equations," the author works with the less general definition.
Neither the present article nor the original one has much mathematical originality, though: Odrzywolek's result is immediately obvious, while this blog post is a rehash of Arnold's proof of the unsolvability of the quintic.
Yes, this article is kicking in open doors, the original article was quite clear about the scope.
The present article could rather have spent time arguing why this isn't like NAND gate functional completeness.
I would have thought the differences lie in the other direction: not that trees of EML and 1 can describe too little, but that they can already describe too much. It's decidable whether two NAND circuits implement the same function; I'm pretty sure it's not decidable whether two EML trees describe the same function [1].
[1] https://en.wikipedia.org/wiki/Richardson%27s_theorem
Arnold (as reported by Goldmakher [1]) does prove the unsolvability of the quintic in finite terms of arithmetic and single-valued continuous functions (which does not include the complex logarithm). TFA's result is stronger, which is something about the solvability of the monodromy groups of all EML-derived functions. So it doesn't seem to be a "rehash", even if their specific counterexample could have been achieved either in fewer steps or with less machinery.
Arnold's proof can be used to show that certain classes of functions are insufficient to express a quintic formula.
These classes can always safely include all single-valued continuous functions (you cannot even write the _quadratic_ formula in terms of arithmetic and single-valued continuous functions!), but also plenty of non-single-valued functions (e.g. the +-sqrt function which appears in the well-known quadratic formula).
Applying Arnold's proof to the class given by arithmetic and all complex nth root functions (also multivalued) gives the usual Abel-Ruffini theorem. But Arnold's proof applies to the class "all elm-expressible functions" without modification.
And it's depressing that when rare actual progress is made, a collection of jealous practitioners comes to party-poop all over the place for bringing the insights that make the result, from then on, immediately obvious.
This may or may not be true; but the burden of proof should not lie with the reader.
Please provide (in absence of which every reader can draw their own conclusions) a reference which simultaneously:
1) predates Odrzywolek's result
2) and demonstrates the other unary and binary operations typically tacitly assumed can be expressed in terms of a single binary operation and a constant.
(in other news: I can spontaneously levitate, I just don't feel like demonstrating it to you right now...)
Related is the paper [What is a closed-form number?], which explores the field E, defined as the smallest subfield of ℂ closed under exp and log. I believe the set of numbers that can be generated using exp-minus-log is a strict subset of this.
In a similar vein to this post, the paper points out that general polynomials do not have solutions in E, so of course exp-minus-log is similarly incomplete.
What is intriguing is that we don’t even know whether many simple equations like exp(-x) = x (i.e. the [omega constant]) have solutions in E. We of course suspect they don’t, but this conjecture is not proven: https://en.wikipedia.org/wiki/Schanuel%27s_conjecture
> Related is the paper [What is a closed-form number?], which explores the field E, defined as the smallest subfield of ℂ closed under exp and log. I believe the set of numbers that can be generated using exp-minus-log is a strict subset of this.
is that a typo / accidental mis-phrasing?
exp-minus-log construction is closed for the operations it supports, and spans both exp and log, so E must be either identical to or a subset of exp-minus-log; not the other way around.
2) EML is spanned by a single binary operator, while the article you reference ("What is a closed-form number?") tacitly assumes +, -, x, / are available for free, so even in this sense the EML construction is superior. Since EML can construct the larger presumed basic operations of E, E must be contained in it; but since E implicitly has +, - besides exp(x) and ln(x), the reverse can also be said, so the sets of functions spanned by E and EML should be equivalent. So what is novel? Precisely what the recent article describes: all the tacitly assumed (+, -, x, /) and explicitly assumed (exp and ln) operations can be spanned with just one (non-unique) binary operation; and on top of that:
3) the recent article describes freely available code to conduct such searches: finding alternative binary operations, and searching for functions or constants.
The EML paper provides code and machinery to conduct a search for the value x in exp(-x)=x : use a multiprecision library to get an arbitrarily precise representation, and search for some EML expression to find candidates.
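The numeric half of that search is easy to sketch. Below, a plain Newton iteration finds the value of x with exp(-x) = x; this is my own illustrative sketch in ordinary floats, not the paper's code - swapping the float for a multiprecision type (e.g. mpmath's mpf) gives the arbitrarily precise representation the search needs:

```python
import math

def solve_exp_eq_x(tol=1e-15):
    """Newton's method on g(x) = exp(-x) - x, with g'(x) = -exp(-x) - 1.
    The derivative is always <= -1, so the step is defined everywhere."""
    x = 0.5  # crude starting guess
    for _ in range(50):
        g = math.exp(-x) - x
        x -= g / (-math.exp(-x) - 1.0)
        if abs(g) < tol:
            break
    return x

omega = solve_exp_eq_x()                    # the omega constant W(1)
assert abs(math.exp(-omega) - omega) < 1e-12
```

The resulting digits (0.567143...) are then what you'd feed to a search over EML expressions looking for matching candidates.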
> exp-minus-log construction is closed for the operations it supports, and spans both exp and log, so E must be either identical to or a subset of exp-minus-log; not the other way around.
Since E is by definition closed under exp, log and subtraction, it is clearly also closed under EML.
SabrinaJewson claims it is a STRICT subset: EML ⊂ E
I'd point to the trivial results that both E ⊆ EML and EML ⊆ E, and hence EML = E -
apart from construction, which is minimal for EML but highly redundant for E.
The EML paper shows that this minimal construction for EML is not unique, so other binary operations may be found with perhaps more interesting properties, or ones admitting shorter binary trees for commonly used functions and values (which may reflect the subjective "simplification" of expressions in mathematics).
That's kind of a weak criticism. What functions are considered elementary was always going to be arbitrary; picking the set you can generate from exp, log, and some complex algebra is not the worst choice.
If nothing else you could solve simple differential equations with them. And it gives you the 'power' function.
The very fact that the set of functions is largely arbitrary is a much bigger issue. Or at least it limits the use of the fact that you can represent those functions.
Edit: I feel the need to add that just because it is a weak critique doesn't mean the argument itself is not interesting.
When I first read the exp-minus-log paper, I found it extremely surprising - even shocking that such a function could exist.
But the fact that a single function can represent a large number of other functions isn't that surprising at all.
It's probably obvious to anyone (it wasn't initially to me), but given enough arguments I can represent any arbitrary set of n+1 functions (they don't even have to be functions on the reals - just as long as the domain has a multiplicative zero available) as a sort of "selector":
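One way to fill in that selector (a hedged reconstruction - the Lagrange-style indicator below is my own choice of how to get a 1-or-0 coefficient out of nothing but arithmetic and the multiplicative zero):

```python
# Any fixed list of n+1 functions (here n+1 = 3 of them):
fs = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]

def indicator(s, j, n):
    """Lagrange-style product: 1.0 when s == j, 0.0 for the other
    integers 0..n-1 - built purely from arithmetic, relying only on
    the domain having a multiplicative zero."""
    p = 1.0
    for k in range(n):
        if k != j:
            p *= (s - k) / (j - k)
    return p

def U(s, x):
    """The selector: U(s, x) = fs[s](x), one function covering all of fs."""
    return sum(indicator(s, j, len(fs)) * f(x) for j, f in enumerate(fs))

assert U(0, 5) == 6.0    # fs[0](5) = 5 + 1
assert U(1, 5) == 10.0   # fs[1](5) = 2 * 5
assert U(2, 5) == 25.0   # fs[2](5) = 5 * 5
```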
When you may use functions of 3 or more arguments, it becomes trivial to find a single function that can be used to express large classes of other functions.
These tricks break when you are restricted to use one binary function, like in the EML paper.
The second argument cannot be used as a selector, because you cannot make binary functions from unary functions (while from binary functions you can make functions with an arbitrary number of parameters, by composing them in a tree).
If you used an argument as a function selector in a binary function, which transforms the binary function into a family of unary functions, then you would need at least one other auxiliary binary function, to be able to make functions with more than one parameter.
The auxiliary binary function could be something like addition or subtraction, or at the minimum a function that makes a tuple from its arguments, like the function CONS of LISP I.
The EML paper can also be understood as showing that the elementary functions, as it defines them, can be expressed using a small family of unary functions (exponential, logarithm and negation) together with one binary function: addition.
Then this set of 4 simple functions is reduced to one complex function, which can regenerate any of those 4 functions by composition with itself.
This is the same trick used to reduce the set of 2 simple functions, AND & NOT, which are sufficient to write any logical function, to a single function, NAND, which can generate both simpler functions.
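The NAND reduction is easy to spell out as a quick truth-table check (classical bits modeled as Python booleans):

```python
def nand(a, b):
    return not (a and b)

def not_(a):          # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a, b):       # AND(a, b) = NOT(NAND(a, b))
    return nand(nand(a, b), nand(a, b))

def or_(a, b):        # OR(a, b) = NAND(NOT a, NOT b), by De Morgan
    return nand(not_(a), not_(b))

# Exhaustively verify NAND regenerates NOT, AND and OR:
for a in (False, True):
    assert not_(a) == (not a)
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```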
And if you want something truly surprising, Riemann's zeta function can approximate any holomorphic function arbitrarily well on the critical strip. So technically you need only _one_ argument.
The author essentially says that the quintic has no closed form solution which is true regardless of the exp-minus-log function. The purpose of this blog post is lost on me.
Can anyone please explain this further? It seems like he’s moving the goalposts.
"The quintic has no closed form solution" is a theorem that is more precisely stated (in the usual capstone Galois proof) as follows: The quintic has no closed form solution in terms of arbitrary compositions of rational numbers, arithmetic, and Nth roots. We can absolutely express closed form solutions to the quintic if we broaden our repertoire of functions, such as with the Bring radical.
The post's argument is different than the usual Galois theory result about the unsolvability of the quintic, in that it shows a property that must be true about all EML(x,y)-derived functions, and a hypothetical quintic-solver-function does not have that property, so no function we add to our repertoire via EML will solve it (or any other function, elementary or not, that lacks this property).
This fundamental "cheat" gave rise to some of the most important pure and applied mathematics known.
Can't solve the equation x^2 - a = 0? Why not just introduce a function sqrt(a) as its solution! Problem solved.
Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.
A lot of 19th century mathematics was essentially this: discover which equations had solutions in terms of things we already knew about, and if they didn't and it seemed important or interesting enough, make a new name. This is the whole field of so-called "special functions". It's where we also get the elliptic functions, Bessel functions, etc.
The definition of "elementary function" comes exactly from this line of inquiry: define a set of functions we think are nice and algebraically tractable, and ask what we can express with them. The biggest classical question was:
Do integrals of elementary functions give us elementary functions?
The answer is "no" and Liouville gave us a result which tells us what the answer does look like when the result is elementary.
Risch gave us an algorithm to compute the answer, when it exists in elementary form.
The Bring radical has a great geometric interpretation: BR(a) is where the curve x^5 + x + a crosses the x axis.
Like sine or exp, it also has a nice series representation:
sum(k = 0 to inf) binom(5k,k) (-1)^(k+1) a^(4k+1) / (4k+1)
We can compute its digits with the very rapidly convergent Newton iteration
x <- x - (x^5 + x + a)/(5x^4 + 1)
and so on.
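The quoted Newton iteration turns into a runnable sketch in a few lines (the starting guess x = -a is my own choice; for small |a| the root is near -a):

```python
def bring_radical(a, tol=1e-15):
    """Real root of x^5 + x + a = 0 via the Newton iteration
    x <- x - (x^5 + x + a) / (5x^4 + 1).  The derivative 5x^4 + 1
    is always >= 1, so the step is defined everywhere, and the
    polynomial is strictly increasing, so the real root is unique."""
    x = -a
    for _ in range(100):
        step = (x**5 + x + a) / (5 * x**4 + 1)
        x -= step
        if abs(step) < tol:
            break
    return x

r = bring_radical(1.0)           # BR(1) ≈ -0.754878
assert abs(r**5 + r + 1.0) < 1e-12
```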
Why not invite it to the table of functions?
Ellipses are simple and beautiful figures known to every child, but why do we rarely invite the elliptic integrals to the table too?
I guess my point is that "nice geometric interpretation" is a little subjective and hasn't led to much consistency in our choice of which functions are popular or obscure.
> This fundamental "cheat" gave rise to some of the most important pure and applied mathematics known.
> Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.
But that's not how sine was introduced. It's been around since classical geometry. It was always easy to solve the differential equation y'' = -y, because the sine had that property, and we knew that.
Heck, you can tell this just by looking at the names of the functions you mentioned. "Sine" is called "sine", which appears to have originated as an attempted calque of a Sanskrit term (referring to the same function) meaning "bowstring".
"Square root" is named after the squaring function that was used to define it.
Introducing an answer-by-definition gives us negative numbers, rational numbers, imaginary numbers, and nth roots... but not sines, come on. You can just measure sines.
You can calculate, measure, draw, construct, write a power series for, express as hypergeometric function, etc. the Bring radical too.
All of these concepts, from sine to real numbers, Bring radicals to complex exponentials, can be defined in different, equivalent ways. What is interesting are the properties invariant under these definitions.
It still doesn't seem to me that a square root should be any more or less contrived than a Bring radical. Maybe we should call it an ultraradical instead?
For me, what makes the square root more “natural” is that, although it’s usually introduced as an “answer by definition”, it can also be arrived at by wondering what happens if you take something to the halfth power.
Can anyone provide a link for "Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this", or anything similarly grandiose?
I am a professional mathematician, though nowhere near this kind of thing. The result seems amusing enough, but it doesn't really strike me as something that would be surprising. I confess that this thread is the first I've heard of it...
I still consider the article important, as it demonstrates techniques to conduct searches, and emphasizes the very early stage of the research (establishes non-uniqueness for example), openly wonders which other binary operators exist and which would have more desirable properties, etc.
Sometimes articles are important not for their immediate result, but for the tools and techniques developed to solve (often artificial or constrained) problems. The history of mathematics is filled with mathematicians studying at-the-time-rather-useless constructions which centuries or millennia later become profoundly important to everyday life. Think of the "value" of Euclid's greatest common divisor algorithm: what starts out as a curiosity with zero immediate relevance for society is now routinely used by everyone who enjoys the world wide web without their government or others MitM'ing a webpage.
If the result was the main claimed importance for the article, there would be more emphasis on it than on the methodology used to find and verify candidates, but the emphasis throughout the article is on the methodology.
It is far from obvious that the tricks used would have converged at all. Before this result, a lot of people would have been skeptical that it is even possible to search for candidates this way. While the gradual early-out tightening in verification could speed up the results, many might have argued that the approach carries no assurance that the false-positive rate wouldn't be excessively high (i.e. many would have said "verifying candidates does not ensure finding a solution; reality may turn out to be that 99.99999999999999999% of candidates fail deeper inspection").
It is certainly noteworthy to publish these results as they establish the machinery for automated search of such operations.
The argument is that a universal basis would be capable of expressing arbitrary polynomial roots. The rest is an argument that the group constructed by EML is solvable, and hence EML cannot reach all the standard elementary functions.
It wouldn't be a math discussion without people using at least two wildly different definitions.
His claim is that exp-minus-log cannot compute the root of an arbitrary quintic. If you consider the root of an arbitrary quintic "elementary", then exp-minus-log can't represent all elementary functions.
I think it really comes down to what set of functions you are calling "elementary".
The author discusses this in his third paragraph, and states explicitly in his fourth that he considers the result faulty for its unrealistically narrow definition of elementarity.
(I'm not a mathematician, so don't expect me to have an opinion as far as that goes. But the author also writes well in English, and that language we do share.)
> In layman’s terms, I do not consider the “Exp-Minus-Log” function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.
But is there actually a combination of NANDs that find the roots of an arbitrary quintic? I always thought the answer was no but admittedly this is above my math level.
Combinations of the NAND gate can express any Boolean function. The Toffoli (CCNOT) or Fredkin (CSWAP) can express any reversible Boolean function, which is important in quantum computing where all gates must be unitary (and therefore reversible). The posited analog is that EML would be the "universal operator" for continuous functions.
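Both properties of the Toffoli gate - reversibility, and universality via an ancilla bit - can be checked exhaustively on classical bits (a model of the gate's truth table, not actual quantum machinery):

```python
def toffoli(a, b, c):
    """CCNOT: flips the target c exactly when both controls a, b are 1."""
    return a, b, c ^ (a & b)

# Reversible - in fact its own inverse - on all 8 inputs:
for bits in range(8):
    a, b, c = (bits >> 2) & 1, (bits >> 1) & 1, bits & 1
    assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# Universality: with the ancilla c fixed to 1, the third output
# is NAND(a, b), so Toffoli circuits subsume ordinary Boolean logic.
for a in (0, 1):
    for b in (0, 1):
        assert toffoli(a, b, 1)[2] == 1 - (a & b)
```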
I would agree: that makes them anything but elementary. I am honestly not even sure whether there is a finite constructible basis of functions that can express every solution of single-variable integer polynomials.
And for multivariate polynomials, the roots are uncomputable, by the MRDP theorem.
It is not known, and the model problem for this is Hilbert's 13th [1].
Nonetheless, "elementary function" is a technical term dating back to the 19th century; it's very much not a general adjective whose synonym is "basic".
It's news to me that "elementary functions" include roots of arbitrary polynomials, but the wiki article in fact says that they're included at least some of the time. I remember reading about the Risch algorithm (for finding closed form antiderivatives) a long time ago and elementary functions were just the ordinary ones found on calculators.
Interestingly, the abs (absolute value) function is non-elementary. I wonder if exp-minus-log can represent it.
EML can represent the real absolute value, so long as we agree with the original author's proviso that we define log(0) and exp(-∞): by way of sqrt(x^2), take f(x) = exp((1/2)·log(x^2)). Traditionally, log(0) isn't defined, but the original author stipulated it to be -∞, and that all arithmetic works over the "extended reals", which makes
abs(0)
= f(0)                 ; by defn
= exp(1/2 · log(0^2))  ; by defn
= exp(1/2 · log 0)     ; 0^2 = 0
= exp(-∞/2)            ; log 0 rule
= exp(-∞)              ; extended real arith
= 0                    ; exp(-∞) rule
If we don't agree with this, then abs() could be defined with a hole punched out of the real line. The logarithm function isn't exactly elegant in this regard with its domain restrictions. :)
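In floating point the stipulation is easy to mimic, since IEEE arithmetic already treats -inf the way the extended reals do (math.log(0) raises in Python, so the log(0) = -∞ rule is coded explicitly; the helper names are mine):

```python
import math

def xlog(x):
    # the original author's stipulation: log(0) = -infinity
    return -math.inf if x == 0.0 else math.log(x)

def abs_via_exp_log(x):
    # |x| = sqrt(x^2) = exp((1/2) * log(x^2)); math.exp(-inf) is 0.0,
    # matching the stipulated exp(-∞) = 0 rule
    return math.exp(0.5 * xlog(x * x))

assert abs_via_exp_log(0.0) == 0.0
assert abs(abs_via_exp_log(-3.0) - 3.0) < 1e-12
```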
I think the issue might be the branch cut in the sqrt function. Per the wiki article, elementary functions have to be differentiable in the complex plane at all but a finite number of points.
This is a bit like invalidating a result based on 0^0 := 1 because you work in a field of mathematics where 0^0 is an indeterminate form. Not very interesting.
AFAIU the original paper is a result in the field of symbolic regression. What definition of elementary function do they use?
I only skimmed the article, but I think the idea is to use some variation on:
f(a,b,c,d,e) = the largest real solution x of the quintic equation x^5 + ax^4 + bx^3 + cx^2 + dx + e = 0
There's not a simple formula for this function (which is the basic point), but certainly it is a function: you feed it five real numbers as input, and it spits out one number as output. The proof that you can't generate this function using the single one given looks like some fairly routine Galois theory.
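Numerically the function is perfectly concrete; here is a hedged, dependency-free sketch that finds all five roots with the Durand-Kerner iteration (the seed points, iteration count and tolerances are my own choices) and picks the largest real one:

```python
def quintic_roots(a, b, c, d, e, iters=200):
    """All complex roots of x^5 + a x^4 + b x^3 + c x^2 + d x + e,
    via the Durand-Kerner simultaneous-root iteration."""
    def p(x):
        r = 1.0
        for co in (a, b, c, d, e):
            r = r * x + co          # Horner evaluation
        return r
    roots = [(0.4 + 0.9j) ** k for k in range(5)]  # standard distinct seeds
    for _ in range(iters):
        for i in range(5):
            den = 1.0
            for j in range(5):
                if j != i:
                    den *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / den
    return roots

def largest_real_root(a, b, c, d, e, tol=1e-8):
    """Largest real solution; odd degree guarantees at least one exists."""
    return max(r.real for r in quintic_roots(a, b, c, d, e)
               if abs(r.imag) < tol)

# x^5 - 1 = 0: the only real root is 1
assert abs(largest_real_root(0, 0, 0, 0, -1) - 1.0) < 1e-6
```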
Whether this function is "considered elementary" depends on who you ask. Most people would not say this is elementary, but the author would like to redefine the term to include it, which would make the theorem not true anymore.
Why any of this would shake the foundations of computer engineering I do not know.
I've thought something like that, but I'm interested more in details of the argument.
As for why this could be important... we sometimes find new ways of solving old problems when we formulate them in a different language. I remember how surprised I was to learn that representing numbers as tuples (ordered lists of remainders modulo pairwise-coprime divisors, one entry per divisor) reduces the size of the lookup tables for the division operation, so hardware that performs the operation with those tables may use significantly less memory. Here we might find some other interesting advantages.
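That remainder-tuple representation (a residue number system) can be sketched in a few lines; the moduli 3, 5, 7 and the brute-force decoder are my own illustrative choices, and multiplication is shown here because the componentwise property is easiest to see (division in a residue system is subtler):

```python
MODULI = (3, 5, 7)  # pairwise coprime: tuples represent 0..104 uniquely

def encode(n):
    return tuple(n % m for m in MODULI)

def mul(u, v):
    # arithmetic is componentwise: no carries between components, and each
    # component only needs a tiny times-table modulo its small modulus
    return tuple((a * b) % m for a, b, m in zip(u, v, MODULI))

def decode(t):
    # brute-force Chinese-remainder reconstruction, fine for a demo
    for n in range(3 * 5 * 7):
        if encode(n) == t:
            return n

assert decode(mul(encode(12), encode(8))) == 96  # 12 * 8 = 96 < 105
```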
But can you even express this function with the elementary operator symbols, exp, log, power and trig functions? It seems to me like no, you can't express "largest real solution" with those (and what's the intended result for complex inputs?)
At least EML can express the quintic itself, just like the above-mentioned operators can.
Author and EML are using different definitions of elementary functions, EML's definition being the school textbooks' one (polynomials, sin, exp, log, arcsin, arctan, closed under multiplication, division and composition). The author's definition I've never met before, it apparently includes some multi-valued functions, which are quite unusual.
> More generally, in modern mathematics, elementary functions comprise the set of functions previously enumerated, all algebraic functions (not often encountered by beginners), and all functions obtained by roots of a polynomial whose coefficients are elementary. [...] This list of elementary functions was originally set forth by Joseph Liouville in 1833.
I feel that saying that EML can't generate all the elementary functions because it can't express the solution of the quintic is like saying that NAND gates can't be the basis of modern computing because they can't be used to solve Turing's halting problem.
As is usual with these kinds of "structure theorems" (as they're often called), we need to precisely define what set of things we seek to express.
A function which solves a quintic is reasonably ordinary. We can readily compute it to arbitrary precision using any number of methods, just as we can do with square roots or cosines. Not just the quintic, but any polynomial with rational coefficients can be solved. But the solutions can't be expressed with a finite number of draws from a small repertoire of functions like {+, -, *, /}.
So the question is, does admitting a new function into our "repertoire" allow us to express new things? That's what a structure theorem might tell us.
The blog post is exploring this question: Does a repertoire of just the EML function, which has been shown by the original author to be able to express a great variety of functions (like + or cosine or ...) also allow us to express polynomial roots?
On a tangent: I've tried to connect Euclid's Elements with quantifier elimination theorems. It looks like most of the geometry follows from QE of real-closed fields. Some of the number theory relates to Presburger arithmetic. Some other number theory, including the irrationality of sqrt(2), is down to Skolem. The Pythagorean triples relate to extending Skolem to the Gaussian integers. I suspect some of the "embryonic" integral calculus could be related to holonomic functions, which seem like they admit a form of QE.
Don't have anything for the perfect numbers though.
There's also the opposite in physics, though: "modern" means from the '60s, with square roots drawn in manually.
See e.g. Petzold, et al.
- Page 2 and the following example of https://billcookmath.com/courses/math4010-spring2016/math401... (2016)
- Ritt's Integration in Finite Terms: Liouville's Theory of Elementary Methods (1948)
It's not common for analysis books to define the class of elementary functions rigorously; more often they refer to examples of them informally.
Maybe. But I found it a nice piece of recreational mathematics nevertheless.
[1] https://web.williams.edu/Mathematics/lg5/394/ArnoldQuintic.p...
These classes can always safely include all single-valued continuous functions (you cannot even write the _quadratic_ formula in terms of arithmetic and single-valued continuous functions!), but also plenty of non-single-valued functions (e.g. the ±sqrt function which appears in the well-known quadratic formula).
Applying Arnold's proof to the class given by arithmetic and all complex nth root functions (also multivalued) gives the usual Abel-Ruffini theorem. But Arnold's proof applies to the class "all EML-expressible functions" without modification.
Many things that in retrospect seem immediately obvious weren't obvious before, let alone immediately obvious.
This may or may not be true; but the burden of proof should not lie with the reader.
Please provide (in absence of which every reader can draw their own conclusions) a reference which simultaneously:
1) predates Odrzywolek's result
2) and demonstrates that the other unary and binary operations typically tacitly assumed can be expressed in terms of a single binary operation and a constant.
(in other news: I can spontaneously levitate, I just don't feel like demonstrating it to you right now...)
In a similar vein to this post, the paper points out that general polynomials do not have solutions in E, so of course exp-minus-log is similarly incomplete.
What is intriguing is that we don’t even know whether many simple equations like exp(-x) = x (i.e. the [omega constant]) have solutions in E. We of course suspect they don’t, but this conjecture is not proven: https://en.wikipedia.org/wiki/Schanuel%27s_conjecture
What is a closed-form number?: http://timothychow.net/closedform.pdf omega constant: https://en.wikipedia.org/wiki/Omega_constant
> Related is the paper [What is a closed-form number?], which explores the field E, defined as the smallest subfield of ℂ closed under exp and log. I believe the set of numbers that can be generated using exp-minus-log is a strict subset of this.
is that a typo / accidental mis-phrasing?
The exp-minus-log construction is closed under the operations it supports, and spans both exp and log, so E must be either identical to or a subset of exp-minus-log; not the other way around.
2)
EML is spanned by a single binary operator, while the article you reference ("What is a closed-form number?") just tacitly assumes +, -, ×, / are available for free, so even in just this sense the EML construction is superior. Since EML can construct the larger set of presumed basic operations of E, E must be contained in it; but since E implicitly has +, - besides exp(x) and ln(x), the reverse can also be said, so the sets of functions spanned by E and EML should be equivalent. So what is novel? Precisely what the recent article describes: all the tacitly (+, -, ×, /) and explicitly assumed (exp and ln) operations can be spanned with just one (non-unique) binary operation; and on top of that:
3)
the recent article describes freely available code to conduct such searches: find alternative binary operations, and search for functions or constants.
The EML paper provides code and machinery to conduct a search for the value x in exp(-x) = x: use a multiprecision library to get an arbitrarily precise representation, then search for some EML expression to find candidates.
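A minimal Python sketch of the numeric half of that setup (my own illustration, not the paper's code): pin down the omega constant to machine precision with Newton iteration, after which one could hunt for short EML expressions matching its digits.

```python
import math

def omega_constant(tol=1e-15, max_iter=50):
    """Newton's method for the omega constant: the unique real
    solution of exp(-x) = x (equivalently x*e^x = 1, i.e. W(1))."""
    # Solve f(x) = x - exp(-x) = 0, with f'(x) = 1 + exp(-x).
    x = 0.5  # any positive starting point converges quickly
    for _ in range(max_iter):
        step = (x - math.exp(-x)) / (1 + math.exp(-x))
        x -= step
        if abs(step) < tol:
            break
    return x

w = omega_constant()
print(w)  # ≈ 0.5671432904097838
```

A multiprecision library (e.g. mpmath) would replace `math` here for the "arbitrarily precise representation" the comment mentions; floats are enough to show the shape of the search.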
Since E is by definition closed under exp, log and subtraction, it is clearly also closed under EML.
I note the trivial results that both E ⊆ EML and EML ⊆ E, and hence EML = E,
apart from construction: which is minimal for EML but highly redundant for E.
The EML paper shows that this minimal construction for EML is not unique, so other binary operations may be found, perhaps with more interesting properties, or admitting shorter binary trees for commonly used functions and values (which may reflect the subjective "simplification" of expressions in mathematics).
If nothing else you could solve simple differential equations with them. And it gives you the 'power' function.
The very fact that the set of functions is largely arbitrary is a much bigger issue. Or at least it limits the use of the fact that you can represent those functions.
Edit: I feel the need to add that just because it is a weak critique doesn't mean the argument itself is not interesting.
But the fact that a single function can represent a large number of other functions isn't that surprising at all.
It's probably obvious to anyone (it wasn't initially to me), but given enough arguments I can represent any arbitrary set of n+1 functions (they don't even have to be functions on the reals - just as long as the domain has a multiplicative zero available) as a sort of "selector":
g(x_0, c_0, x_1, c_1, ... , x_n, c_n) = c_0 * f_0(x_0) + ... + c_n * f_n(x_n)
The trick is to minimize the number of arguments and the complexity of the RHS - but note that there's a trivial upper bound (in terms of the number of arguments).
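A toy Python version of the selector g above (the names are my own choosing): zeroing out every coefficient but one reproduces any single function from the fixed set.

```python
import math

def make_selector(funcs):
    """Build g(x0, c0, x1, c1, ...) = c0*f0(x0) + c1*f1(x1) + ...
    over a fixed list of unary functions."""
    def g(*args):
        pairs = zip(args[0::2], args[1::2])  # (x_i, c_i) pairs
        return sum(c * f(x) for f, (x, c) in zip(funcs, pairs))
    return g

g = make_selector([math.sin, math.exp, abs])
# Select f1 = exp by setting its coefficient to 1 and the rest to 0:
result = g(0.0, 0, 2.0, 1, 0.0, 0)
print(result)  # equals math.exp(2.0)
```

All the trick needs from the domain is a multiplicative zero, as the comment says.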
These tricks break when you are restricted to use one binary function, like in the EML paper.
The second argument cannot be used as a selector, because you cannot make binary functions from unary functions (while from binary functions you can make functions with an arbitrary number of parameters, by composing them in a tree).
If you used an argument as a function selector in a binary function, which transforms the binary function into a family of unary functions, then you would need at least one other auxiliary binary function, to be able to make functions with more than one parameter.
The auxiliary binary function could be something like addition or subtraction, or at the minimum a function that makes a tuple from its arguments, like the function CONS of LISP I.
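For illustration, the tree-composition point above in Python: a single binary function extends to any arity by folding it over the arguments (a left-leaning composition tree), which is exactly what unary functions cannot do.

```python
from functools import reduce

def n_ary_from_binary(f):
    """Extend a binary function to arbitrary arity by left fold:
    ((a0 f a1) f a2) f ... - one shape of composition tree."""
    def g(*args):
        return reduce(f, args)
    return g

nary_sum = n_ary_from_binary(lambda a, b: a + b)
print(nary_sum(1, 2, 3, 4))  # 10
```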
The EML paper can also be read as saying that the elementary functions, as it defines them, can be expressed using a small family of unary functions (exponential, logarithm and negation) together with one binary function: addition.
Then this set of 4 simple functions is reduced to one complex function, which can regenerate any of those 4 functions by composition with itself.
This is the same trick used to reduce the set of 2 simple functions, AND & NOT, which are sufficient to write any logical function, to a single function, NAND, which can generate both simpler functions.
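The Boolean half of that analogy, as a quick Python check: NAND alone regenerates NOT and AND by self-composition.

```python
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)        # NAND(a, a) = NOT a

def and_(a, b):
    return not_(nand(a, b))  # NOT(NAND(a, b)) = a AND b

assert [not_(a) for a in (False, True)] == [True, False]
assert and_(True, True) and not and_(True, False)
```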
And if you want something truly surprising, Riemann's zeta function can approximate any non-vanishing holomorphic function arbitrarily well on compact subsets of the critical strip (Voronin's universality theorem). So technically you need only _one_ argument.
Can anyone please explain this further? It seems like he’s moving the goalposts.
The post's argument is different from the usual Galois theory result about the unsolvability of the quintic: it shows a property that must hold of all EML(x,y)-derived functions, and a hypothetical quintic-solver function does not have that property, so no function we add to our repertoire via EML will solve it (nor any other function, elementary or not, that lacks this property).
You can't solve an equation? Why not just introduce a function that is equal to the solution of the equation! Problem solved.
Can't solve the equation x^2 - a = 0? Why not just introduce a function sqrt(a) as its solution! Problem solved.
Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.
A lot of 19th century mathematics was essentially this: discover which equations had solutions in terms of things we already knew about, and if they didn't and it seemed important or interesting enough, make a new name. This is the whole field of so-called "special functions". It's where we also get the elliptic functions, Bessel functions, etc.
The definition of "elementary function" comes exactly from this line of inquiry: define a set of functions we think are nice and algebraically tractable, and ask what we can express with them. The biggest classical question was: does every elementary function have an elementary antiderivative?
The answer is "no", and Liouville gave us a structure theorem which tells us what the answer looks like when it is elementary. Risch gave us an algorithm to compute the answer, when it exists in elementary form.
Eg you can get complex numbers from matrices.
But if you want to go in your direction: you can say we get fractions and negative numbers this way.
Bring radicals don't. They're just defined as a solution to this particular quintic.
Kinda the similar story with the Lambert function.
Like sine or exp, it also has a nice series representation.
We can compute its digits with a very rapidly convergent Newton iteration, and so on. Why not invite it to the table of functions?
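For instance, that Newton iteration fits in a few lines of Python (using the normalization where BR(a) is the unique real root of x^5 + x + a = 0; other sources normalize the Bring radical differently):

```python
def bring_radical(a, tol=1e-15, max_iter=100):
    """Newton iteration for the unique real root of x^5 + x + a = 0.
    The polynomial is strictly increasing, so the real root is unique."""
    x = 0.0
    for _ in range(max_iter):
        step = (x**5 + x + a) / (5 * x**4 + 1)
        x -= step
        if abs(step) < tol:
            break
    return x

r = bring_radical(1.0)
print(r**5 + r + 1.0)  # ~0: r solves the quintic
```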
Ellipses are simple and beautiful figures known to every child, but why do we rarely invite the elliptic integrals to the table too?
I guess my point is that "nice geometric interpretation" is a little subjective and hasn't led to much consistency in our choice of which functions are popular or obscure.
> Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.
But that's not how sine was introduced. It's been around since classical geometry. It was always easy to solve the differential equation y'' = -y, because the sine had that property, and we knew that.
Heck, you can tell this just by looking at the names of the functions you mentioned. "Sine" is called "sine", which appears to have originated as an attempted calque of a Sanskrit term (referring to the same function) meaning "bowstring".
"Square root" is named after the squaring function that was used to define it.
Introducing an answer-by-definition gives us negative numbers, rational numbers, imaginary numbers, and nth roots... but not sines, come on. You can just measure sines.
All of these concepts, from sine to real numbers, Bring radicals to complex exponentials, can all be defined in different, equivalent ways. What is interesting are the properties invariant to these definitions.
It still doesn't seem to me that a square root should be any more or less contrived than a Bring radical. Maybe we should call it an ultraradical instead?
I am a professional mathematician, though nowhere near this kind of thing. The result seems amusing enough, but it doesn't really strike me as something that would be surprising. I confess that this thread is the first I've heard of it...
Some of my favorites:
DoctorOetker: "I'm still reading this, but if this checks out, this is one of the most significant discoveries in years."
cryptonektor: "Given this amazing work, an efficient EML operator HW implementation could revolutionize a bunch of things."
zephen: "This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding."
[1] https://news.ycombinator.com/item?id=47746610
[2] https://www.reddit.com/r/math/comments/1sk63n5/all_elementar...
I still consider the article important, as it demonstrates techniques to conduct searches, and emphasizes the very early stage of the research (establishes non-uniqueness for example), openly wonders which other binary operators exist and which would have more desirable properties, etc.
Sometimes articles are important not for their immediate result, but for the tools and techniques developed to solve (often artificial or constrained) problems. The history of mathematics is filled with mathematicians studying at-the-time-rather-useless constructions which, centuries or millennia later, become profoundly useful. Think of the "value" of Euclid's greatest common divisor algorithm: what started out as a curiosity with zero immediate relevance for society is now routinely used by everyone who enjoys the world wide web without their government or others MitM'ing a webpage.
If the result was the main claimed importance for the article, there would be more emphasis on it than on the methodology used to find and verify candidates, but the emphasis throughout the article is on the methodology.
It is far from obvious that the tricks used would have converged at all. Before this result, a lot of people would have been skeptical that it is even possible to search for candidates this way. While the gradual early-out tightening in verification could speed up the results, many might have argued that the approach offers no assurance that the false-positive rate wouldn't be excessively high (i.e. many would have said "verifying candidates does not ensure finding a solution; reality may turn out that 99.99999999999999999% of candidates fail deeper inspection").
It is certainly noteworthy to publish these results as they establish the machinery for automated search of such operations.
> If this is true, then this blog post debunking EML is going to up-end all of mathematics for the next century.
This is very concerning for mathematics in general.
1: https://news.ycombinator.com/item?id=47775105
It wouldn't be a math discussion without people using at least two wildly different definitions.
I think it really comes down to what set of functions you are calling "elementary".
(I'm not a mathematician, so don't expect me to have an opinion as far as that goes. But the author also writes well in English, and that language we do share.)
> In layman’s terms, I do not consider the “Exp-Minus-Log” function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.
But is there actually a combination of NANDs that find the roots of an arbitrary quintic? I always thought the answer was no but admittedly this is above my math level.
However, by the same token, couldn't you use the same brute-force approach with exp-minus-log?
What I'm really asking is: are NAND gates really different here?
Compare https://arxiv.org/abs/1108.1791 and why computational complexity is often more interesting than computability.
Admittedly this may be above my math level, but this just seems like a bad definition of elementary functions, given the context.
And for multivariate polynomials, the roots are uncomputable due to the MRDP theorem.
Nonetheless, "elementary function" is a technical term dating back to the 19th century; it's very much not a general adjective whose synonym is "basic".
[1] https://en.wikipedia.org/wiki/Hilbert%27s_thirteenth_problem
Interestingly, the abs (absolute value) function is non-elementary. I wonder if exp-minus-log can represent it.
AFAIU the original paper is a result in the field of symbolic regression. What definition of elementary function do they use?
Also I'd be glad to see a specific example of a function, considered elementary, which is not representable by EML.
It could be hard, and in any case, thanks for the article. I wish it would be more accessible to me.
f(a,b,c,d,e) = the largest real solution x of the quintic equation x^5 + ax^4 + bx^3 + cx^2 + dx + e = 0
There's not a simple formula for this function (which is the basic point), but certainly it is a function: you feed it five real numbers as input, and it spits out one number as output. The proof that you can't generate this function using the single one given looks like some fairly routine Galois theory.
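A hedged numerical sketch of that f in Python (my own illustration): a degree-5 polynomial always has a real root, the Cauchy bound confines all roots, and a coarse scan plus bisection locates the largest one. This doesn't touch the Galois-theory point; it just shows the function itself is perfectly computable.

```python
def largest_real_quintic_root(a, b, c, d, e, tol=1e-12):
    """Largest real root of x^5 + a*x^4 + b*x^3 + c*x^2 + d*x + e."""
    def p(x):
        return ((((x + a) * x + b) * x + c) * x + d) * x + e
    # Cauchy bound: every root has |x| < 1 + max|coefficient|.
    bound = 1 + max(abs(a), abs(b), abs(c), abs(d), abs(e))
    # Scan downward from the top for the first sign change
    # (a fine-enough scan brackets the largest real root).
    step = 2 * bound / 20000
    lo = hi = None
    x = bound
    while x > -bound:
        if p(x - step) * p(x) <= 0:
            lo, hi = x - step, x
            break
        x -= step
    if lo is None:
        raise ValueError("scan too coarse: no sign change found")
    while hi - lo > tol:  # bisection refinement
        mid = (lo + hi) / 2
        if p(lo) * p(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# x^5 - x - 1 = 0 has a single real root near 1.1673
root = largest_real_quintic_root(0, 0, 0, -1, -1)
print(root)
```

The scan resolution is a crude heuristic; a serious implementation would isolate roots properly (e.g. Sturm sequences).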
Whether this function is "considered elementary" depends on who you ask. Most people would not say this is elementary, but the author would like to redefine the term to include it, which would make the theorem not true anymore.
Why any of this would shake the foundations of computer engineering I do not know.
As for why this could be important: we sometimes find new ways of solving old problems when we formulate them in a different language. I remember being surprised to learn how representing numbers as a tuple of remainders modulo pairwise coprime divisors (as many divisors as there are elements in the tuple) reduces the size of the tables for the division operation, so hardware doing the operation with those tables may use significantly less memory. Here we might find other interesting advantages.
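A small Python sketch of that residue representation (my own toy moduli): arithmetic works component-wise on small per-modulus digits with no carries, and the Chinese Remainder Theorem reconstructs the value.

```python
from math import prod

MODULI = (3, 5, 7)  # pairwise coprime; representable range is 3*5*7 = 105

def to_residues(n):
    return tuple(n % m for m in MODULI)

def mul(r1, r2):
    # Multiplication touches only small per-modulus digits: no carries.
    return tuple((a * b) % m for a, b, m in zip(r1, r2, MODULI))

def from_residues(r):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    total = 0
    for a, m in zip(r, MODULI):
        Mi = M // m
        total += a * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return total % M

answer = from_residues(mul(to_residues(8), to_residues(9)))
print(answer)  # 72
```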
At least EML can express the quintic itself, just like the above-mentioned operators can.
> More generally, in modern mathematics, elementary functions comprise the set of functions previously enumerated, all algebraic functions (not often encountered by beginners), and all functions obtained by roots of a polynomial whose coefficients are elementary. [...] This list of elementary functions was originally set forth by Joseph Liouville in 1833.
which seems to be what the blog post references.
A function which solves a quintic is reasonably ordinary. We can readily compute it to arbitrary precision using any number of methods, just as we can do with square roots or cosines. Not just the quintic, but any polynomial with rational coefficients can be solved. But the solutions can't be expressed with a finite number of draws from a small repertoire of functions like {+, -, *, /} and nth roots.
So the question is, does admitting a new function into our "repertoire" allow us to express new things? That's what a structure theorem might tell us.
The blog post is exploring this question: Does a repertoire of just the EML function, which has been shown by the original author to be able to express a great variety of functions (like + or cosine or ...) also allow us to express polynomial roots?
Don't have anything for the perfect numbers though.
https://en.wikipedia.org/wiki/Template:Mathematical_expressi...
"For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions."