Watch my YouTube video; it relates to what you are talking about. My video will show a better alternative solution for logarithms. I believe my solution to be unique.

My suggestion would be to teach log and exp as functions that give one information about a given tree diagram. Let’s say this tree exists out there in the Platonic realm, and log and exp are instruments that show us different aspects of it.

Exp is a function that says: given a branch factor b and the number of levels x, tell me how many leaf nodes y are on the tree.

Log is a function that says: given a branch factor b and the number of leaf nodes y on the tree, tell me how many levels x there are.

It then becomes clear that the number of levels x (returned by log()) grows arithmetically with respect to the branch factor b (think vertically), while the number of leaf nodes y (returned by exp()) grows geometrically with respect to the branch factor b (think horizontally).

Of course, this becomes less clean when you are dealing with reals instead of integers.
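The tree picture can be sketched in a few lines of Python (a hypothetical illustration of the idea, not taken from the video; the function names `leaves` and `levels` are my own):

```python
import math

def leaves(b: int, x: int) -> int:
    """exp: given branch factor b and number of levels x,
    return the number of leaf nodes y = b**x."""
    return b ** x

def levels(b: int, y: float) -> float:
    """log: given branch factor b and number of leaf nodes y,
    return the number of levels x = log_b(y)."""
    return math.log(y, b)

# A tree with branch factor 2 and 3 levels has 8 leaves;
# asking "how many levels hold 8 leaves?" inverts the question.
print(leaves(2, 3))          # 8
print(round(levels(2, 8)))   # 3
```

Note that `math.log` already handles the "less clean" real-valued case: `levels(2, 10)` returns roughly 3.32 levels, which no literal tree has.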

Please access and comment.

Dan Umbarger

Oh, I love this kind of discussion. On William's comments about equations: when I was in second year of high school, in a kind of outback fishing village in the Philippines, one of the math subjects was algebra, and there I discovered my first love, numbers. My impression of education there at that time, about 30 years ago, is that it was pretty much old-fashioned American style: the basic three R's (reading, writing and arithmetic) and lots of drill exercises. So my teacher introduced equations for the first time, along with the first axioms relating to them, namely the axioms of addition, subtraction, multiplication and division. What you do on the left-hand side of the equation you do on the right, and everything will be fine.

Really reminds me of democracy: you do the same thing on both sides of the equation, and you can start at the very beginning, 1 = 1. But 1 has many guises and faces, and that's the start of the story.

I did my Electrical Engineering degree in Manila just to get close to math and my love of science. Now I live in Sydney, having done my grad dip in Computer Science at the University of New South Wales. Life passes by, and now I am back to my first love, numbers.

Sorry to comment so late, but seeing that makes me feel better about how embarrassingly long it took me to realise this! I remember my school teacher saying 'and now we add 2 to both sides' or something and finally actually taking it in, and everyone looking at me like I was a moron cos I was supposed to be the clever one – but no, I'd just been chucking the variables 'over the fence' for years.

I like maths for the rare semantic insights, but it takes me so painfully long that I'm resigned to huge amounts of syntactic muddling along.

“In response to William on manipulating equations, I’d like to mention that I remember well from my own experience that I didn’t truly understand why it worked until I’d been doing it for years. The thing that made me finally understand THAT THE TWO THINGS ARE EQUAL SO OF COURSE YOU CAN DO THE SAME!!! was realizing that I had not really stopped to think why it was OK to add two equations together when solving simultaneous equations.”

Gosh, if I ever needed proof that my brain is wired differently from that of a top mathematician, here it is (btw, I am not even a mediocre mathematician).

How on earth could you go on for years just relying on applying “syntactic” rules without relying on the “semantic” fact you capitalized?

I would have flunked out of middle school!

A generation later: I have been relying very heavily on the capitalized fact to explain things to my nine-year-old daughter, and not only is it working fine: it seems to be the only thing that works.

Simpler rule – 'Mathematica rules'.

There has to be some way to stop all those potential advanced math users moving into media studies or Macdonald University.

Hide the plumbing?

‘Counting on’ is clearly a simple rule that you could program a computer to follow, thus ‘syntactic’ in one sense.

(A) a^n = a1.a2.a3…an

The digits following the a’s are meant to be subscripts, and simply ‘count’ the number of times we are applying the operator ‘times’, represented by the dot. The mental grasp required here is to understand what the ‘n’ represents, and is not any different from understanding multiplication. Indeed, if we replaced the caret by a ‘times’ sign, and the times sign by a plus, then we have a definition of multiplication.

(B) a^(m+n) = a1.a2.a3…a(m+n)

The move to (B) requires only a mental grasp of substitution. We don’t have to know what ‘m+n’ represents at this point.

(C) a1.a2.a3…a(m+n) = a1.a2.a3…a(m) . a1.a2.a3…a(n)

Now we move to the equivalence in question. How is it a ‘semantic’ move? Surely it’s no different to the move that young children make, called ‘counting on’. We have a sum, say

5 + 4

‘Counting on’ is something they don’t grasp until reception or year 1 (age 5–6), perhaps later. They have to get the knack of saying ‘6 7 8 9’ while at the same time counting ‘1 2 3 4’. I remember from my brood how difficult this was. But (C) is the same. You count m times, applying the times operator as you go (it doesn’t matter what this operator means). Then you count from m+1 to m+n, simultaneously ‘counting on’ from 1 to n in parallel.
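The 'counting on' reading of (C) can be made mechanical, which is rather the point: here is a sketch in Python (my own illustration; the function name is hypothetical) that computes a^(m+n) by counting m applications of 'times' and then counting on from m+1 to m+n while tallying 1 to n in parallel.

```python
def power_by_counting_on(a: int, m: int, n: int) -> int:
    """Compute a^(m+n) the 'counting on' way, mirroring move (C)."""
    result = 1
    for _ in range(1, m + 1):          # count 1, 2, ..., m,
        result *= a                    # applying 'times' as we go
    tally = 0
    for _ in range(m + 1, m + n + 1):  # count on m+1, ..., m+n...
        result *= a
        tally += 1                     # ...while tallying 1..n in parallel
    assert tally == n                  # the parallel count lands on n
    return result

print(power_by_counting_on(2, 3, 4))   # 128, i.e. 2**3 * 2**4 = 2**7
```

Nothing in the loop bodies depends on what 'times' means; replace `*=` with `+=` (and start from 0) and the same counting scheme yields a*(m+n) = a*m + a*n.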

So the question is, in what sense is the first ‘traditional’ explanation a ‘semantic’ one?

Now please excuse a little experiment:

I would like to *emphasize* things from time to time. Thanks: a huge step taken towards true maths bloggership.

Although you are undoubtedly in a better position to know than I am, I’m nevertheless surprised that you refer to the “semantic” approach as “traditional”; my impression is just the opposite — that the “meanings” of mathematical objects have traditionally been given very short shrift.

The most striking (and, I would say, egregious) example of this is perhaps a subject that has already been alluded to, viz. linear algebra (and its applications in multivariable calculus). In the “traditional” approach, one performs formal computations with matrices, without any acknowledgement (at least at first) of the fact that the matrices represent homomorphisms of vector spaces. As a consequence, the definition of matrix multiplication seems like an artifice cooked up out of thin air — an excessively complicated one at that — rather than a natural, inevitable consequence of the fact that we’re talking about the composition of linear maps. The “traditional” proof of the existence of eigenvalues, invoking the characteristic polynomial defined in terms of determinants, is virtually a caricature of itself (it would have been a good candidate for inclusion in that famous book *Mathematics Made Difficult*) — as is the presentation of the multivariable chain rule (where one is presented with a near-incomprehensible formula involving hordes of indices rather than being told that “the derivative of a composition is the composition of the derivatives”), and the inverse/implicit function theorem (with talk about Jacobian determinants being nonzero rather than about derivatives being isomorphisms), etc.

My distaste for this approach, in addition to deriving from my interest in things infinite-dimensional, where e.g. determinants simply aren’t involved, probably has a lot to do with the fact that the semantics are what I care about in the first place. (I wouldn’t be interested in matrices, for example, but for their interpretation as linear transformations.) The beauty of pure syntax resides in the fact that it allows one to contemplate multiple semantic interpretations, and thus to make previously unsuspected connections between apparently separate ideas; this is for me the point of category theory, for example. To be useful for this purpose, however, syntax has to be kept simple, lest the manipulative difficulties get in the way of semantic contemplation.

I actually don’t think your point about logarithms has to do with syntax versus semantics so much as with different choices of semantics. In effect, I read you as advocating that the logarithm be conceived abstractly as a homomorphism relating the multiplicative structure of the real numbers to the additive structure — rather than having that property follow as a consequence of another, more concrete, definition. (Or at least, that one should adopt the abstract conception as soon as one has seen that the property in question is a consequence of the concrete definition.)
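The homomorphism property in question — that log carries products to sums — can be spot-checked in a couple of lines (a throwaway illustration of mine, using Python's standard `math` module):

```python
import math

# log is a homomorphism from (R_{>0}, *) to (R, +):
# log(x * y) == log(x) + log(y) for all positive reals x, y.
for x, y in [(2.0, 8.0), (0.5, 3.7), (12.25, 0.04)]:
    assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
print("log(x*y) == log(x) + log(y) holds on all samples")
```

Of course a few numerical samples prove nothing; the point is only that the abstract property is the one doing the work, whichever concrete definition one starts from.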

Being a sophomore in high school now, I only understood the reasoning behind many mathematical concepts last year, when I took a college computer science course and Algebra II simultaneously. Taking a computer science class taught me to think more logically instead of just going through the processes that I’d learned years before. With my computer science instructor forcing me to map out every process in my code and explain how and why each step in the process was crucial, I was finally able to understand the meanings behind the rules I had to memorize in maths.

Teachers no longer teach logic in maths class; they just tell students to memorize a rule, but don’t explain why it’s important. So, when students make a mistake, it really is because they don’t understand the concept. Your second suggestion of teaching might be harder to implement at first, but it definitely pays off later.
