Ask HN: How did you learn math notation?


It sounds like you’re trying to read papers that assume a certain level of mathematical sophistication without having reached that level. Typical engineering papers will assume at least what’s taught in 2 years of college-level mathematics, mainly calculus and linear algebra, and no, they aren’t going to explain the notation used at that level.

But it isn’t just about the notation. You also need to understand the concepts the notation represents, and there aren’t really any shortcuts to that.

These days there are online courses (many freely available) in just about every area of mathematics from pre-high school to intro graduate level.

It’s possible for a sufficiently motivated person to learn all of that mathematics on their own from online resources and books, but it isn’t going to be an easy task or one that you can complete in a few weeks/months.


I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.

Yes, conventions have emerged, people tend to use the same sort of notation in a given context, but in the main, the notation should be regarded as an aide memoire, something to guide you.

You say that you’re struggling because of “the math notations and zero explanation of it in the context.” Can you give us some examples? Maybe getting a start on it with a careful discussion of a few examples will unblock the difficulty you’re having.


> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.

Just to back up this point: In probably every university-level math book I’ve read, they introduce and explain all the notation used. In the preface and/or as concepts are introduced.

There are lists at wikipedia [1] and other places, but I’m not sure how valuable they are out of context.

[1] https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo…


> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.

One main cause for this belief is that in programming there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.

I dislike maths notation as I find it lacks rigour.


> I dislike maths notation as I find it lacks rigour.

I see this a lot from programmers, but in essence, you seem to be complaining that maths notation isn’t what you want it to be, but is instead something else that mathematicians (and physicists and engineers) find useful.


As someone who’s studied math and CS extensively: it’s not that mathematicians don’t need that rigor; it’s that only certain sub-fields have a culture of this kind of notational rigor. You absolutely see little bubbles of research, 2-4 professors, get sealed off from the rest of the research community because their notational practices are so sloppy that no one wants to bother, whereas others make it easy to understand their work.

CS as a field just seems to have a higher base standard for explaining their notation and ideas. It helps in cross-collaboration by making it significantly easier to self study.


Came here to say the same thing harshly and laced with profanity. I guess I can back off a bit from that now.

I was filled with crushing disappointment when I learned mathematical notation is “shorthand” and there isn’t a formal grammar. Same goes for learning that writers take “shortcuts” with the expectation the reader will “fill in the gaps”. Ostensibly this is so the writer can do “less writing” and the reader can do “less reading”.

There’s so much “pure” and “universal” about math, but the humans who write about it are too lazy to write about it in a rigorous manner.

I can’t write software w/ the expectation the computer “just knows” or that it will “fill in the gaps”. Sure– I can call libraries, write in a higher-level language to let the compiler make machine language for me, etc. I can inspect and understand the underlying implementations if I want to, though. Nothing relies on the machine “just knowing”.

It feels like the same goddamn laziness that plagues every other human endeavor outside of programming. People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.

“We’ll have a face-to-face to discuss this; there’s too much here to put in an email.”


You seem to be complaining that math isn’t programming, that it’s something different, and you’ve discovered that you don’t like how mathematicians do math.

Math notation is the way it is because it’s what mathematicians have found useful for the purpose of doing and communicating math. If you are upset and disappointed that that’s how it is then there’s not a lot we can do about it. If there was a better way of doing it, people would be jumping on it. If a different way of doing it would let you achieve more, people would be doing it.

It’s not laziness, and I think you very much have got the wrong idea of how it works, why it works, and why it is as it is. Your anger comes across very clearly, and I’m saddened that your experience has left you feeling that way.

Maths is very much about communicating what the results are and why they are true, then giving enough guidance to let someone else work through the details should they choose. Simply giving someone absolutely all the details is not really communicating why something is true.

I’m not good at this, but let me try an analogy. A computer doesn’t have to understand why a program gives the result it does, it just has to have the exact algorithm to execute. On the other hand, if I want you to understand why when n is an integer greater than 1, { n divides (n-1)!+1 } if and only if { n is prime } then I can sketch the idea and let you work through it. Giving you all and every step of a proof using Peano axioms isn’t going to help you understand.

Similarly, I can express in one of the computer proof assistants the proof that when p is an odd prime, { x^2=-1 has a solution mod p } if and only if { p = 4k+1 for some k }, but that doesn’t give a sense of why it’s true. But I can sketch a reason why it works, and you can then work out the details, and in that way I’m letting you develop a sense of why it works that way.
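
If you want to convince yourself that those two statements at least hold before working out why, they’re easy to brute-force for small numbers. Here’s a throwaway check I knocked up (plain Python, nothing clever, and the names are just mine):

    from math import factorial

    def is_prime(n):
        # trial division; fine for the tiny ranges below
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Claim 1: for an integer n > 1, n divides (n-1)! + 1 if and only if n is prime.
    for n in range(2, 50):
        assert ((factorial(n - 1) + 1) % n == 0) == is_prime(n)

    # Claim 2: for an odd prime p, x^2 = -1 has a solution mod p iff p = 4k + 1.
    for p in range(3, 200):
        if is_prime(p):
            has_root = any((x * x + 1) % p == 0 for x in range(p))
            assert has_root == (p % 4 == 1)

That proves nothing, of course; it just gives you something concrete to poke at while you work through the sketch of why it’s true.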

Math isn’t computing, and complaining that the notation isn’t like a computer program is expressing your disappointment (which I’m not trying to minimise, and is probably very real) but is missing the point.

Math isn’t computing, and “Doing Math” is not “Writing Programs”.


I really, really appreciate your reply and its tone. Thank you for that. You’ve given me some things to think about.

I often wish people were more like computers. It probably wouldn’t make the world better but it would make it more comprehensible.


Thanks for the pingback … I appreciate that. And thanks for acknowledging that I’m trying to help.

It might also help to think of “scope” in the computing sense. Often you have a paragraph in a math paper using symbols one way, then somewhere else the same symbols crop up with a different meaning. But the scope has changed, and when you practise, you can recognise the change of scope.

We reuse variable names in different scopes, and when something is introduced exactly here, only here, and only persists for a short time, sometimes it’s not worth giving it a long, descriptive name. That’s also similar to what happens in math. If I have a loop counting from 1 to 10, sometimes it’s not worth doing more than:

    for x in [1..10] {
      /* five lines of code */
    }

If you want to know what “x” means then it’s right there, and giving it a long descriptive name might very well hamper reading the code rather than making it clearer. That’s a judgement call, but it brings the same issues to mind.

I hope that helps. You may still not like math, or the notation, but maybe it gives you a handle on what’s going on.

PS: There are plenty of mathematicians who complain about some traditional notations too, but not generally the big stuff.


> There’s so much “pure” and “universal” about math, but the humans who write about it are too lazy to write about it in a rigorous manner.

Are you sure it’s laziness? Maybe it’s a result of there not actually being any universal notation (not even within subfields), or maybe the exactness you refer to really isn’t necessary. This doesn’t mean that unclear exposition is a good thing. Mathematical writing (as with all writing) should strive towards clarity. But clarity doesn’t require the kind of minutely perfect, consistent notation a computer would need, because humans are better than computers at handling exactly those kinds of situations.

> People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.

I think you have it wrong. People can’t be bothered to be as exact because they don’t need to be. People can understand things even if they are inexact. So can mathematicians. Honestly this is a feature. If computers would just intuitively understand what I tell them to do, like a human assistant would, that would be a step up, not a step down, in human-computer interfaces.


> there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.

This is such a disingenuous take. How many of the source code files you write are 100% self-contained and well defined? I’d bet not a single one of them is. You reference libraries, you depend on specific compiler/runtime/OS versions, you reference other files, etc. If you take a look at any of these scientific papers you call “badly defined”, did you really go through all of the referenced papers and check whether they defined the things you didn’t get? If not, then you can’t be sure that the paper uses undefined notation. If you argue that it is too much work to go through that many references, well, that is what you would have to do to understand one of your program files.


One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can’t work without being specified.

When you get down to the microcode of the CPU that implements the ISA you might have an issue if it’s ill-specified. You might be talking about an ISA like RISC-V, though, specified at a level sufficient to go down to the gates. You might be talking about an ISA like 6502 where the gate-level implementations have been reverse-engineered.

You can take programming all the way down to boolean logic if you need to, and the tools are readily available. They don’t rely on you “just knowing” something.


> because programs can’t work without being specified.

Someone hasn’t read the C spec, with all the behavior it explicitly leaves undefined.

Programs working on real systems is very different from those systems being formally specified. I suspect that if you only had access to the pile of documentation and no real computer system – if you were an alien trying to reconstruct it, for example – you’d hit serious problems.


Undefined behavior isn’t a feature. A spec isn’t an implementation, either.

All behavior in an implementation can be teased-out if given sufficient time.

> if you were an alien trying to reconstruct it, for example – you’d hit serious problems.

I can’t speak to alien minds. Considering the feats of reverse-engineering I’ve seen in the IT world (software security, semiconductor reverse-engineering) or cryptography (the breaking of the Japanese Purple cipher in WWII, for example), I think it’s safe to say humans are really, really good at reverse-engineering other human-created systems from close-to-nothing. Starting with documentation would be a step up.


> One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can’t work without being specified.

I doubt you actually can do that and understand it all. A computer can do it, but I doubt you, the human, can do that and get a perfect picture of any non-trivial program without making errors. Human math is a human language first and foremost; its grammar is natural language, which is used to define things and symbols. This lets us write things that humans can actually read and understand in their entirety, unlike a million lines of code or CPU instructions.

Show me a program written by 10 programmers over 10 years and I doubt anyone really understands all of it. But we have mathematical fields that hundreds of mathematicians have written over centuries, and people still are able to understand it all perfectly. It is true that a computer can easily read a computer program, but since we are arguing about teaching humans you would need to show evidence that humans can actually read and understand complex code well.


Formulas would also be easier to read if they did not name all their variables and functions with a single character.

If programmers wrote code like that (even Fortran programmers use 3 characters), no one would be able to understand the code…


As someone trained in mathematics, I can tell you that using single character variables allows one to focus better on the concepts abstractly which is one of the goals of mathematics. That is to say, it is a practice well-suited to mathematics.

It doesn’t carry over to programming where explicit variables are better suited. In mathematics one is dealing with relatively few concepts compared to a typical program so assigning a single letter (applied consistently) to each is not a problem. This is not so in programming, except for a few cases like using i and j for loop variables (back when programs had explicit loops).


As far as programmers, forget about the names. Does every C source file that uses pointer arithmetic include an explanation of how it works? Nope. They just use it and assume the reader understands it or is clever enough to ask for help or read up on the language.

Mathematical writing is similar. At some point you have to assume an audience, which may be more or less mathematically literate. If you’re writing for graduate students or experts in a domain, you don’t include a tutorial and description of literally every term, you can assume they’re familiar with the domain jargon (just like C programmers can assume that others who read their program understand pointers and other program elements). Whenever something is being used that is unique to the context, a definition is typically provided, at least if the writer is halfway decent.

If the audience is assumed to be less mathematically literate (like a Calculus course textbook audience), then more terms will be defined (chapter 1 of most Calculus books includes a definition of “function”). But a paper on some Calculus topic shouldn’t have to define the integral; it should be able to use it, because the audience will be expected to understand Calculus.


I’m glad I’m not the only person like this. I’ve never liked traditional math notation and found it about as useful as traditional musical notation, that is, hard to read for the layman, and for no other reason than “this is how people have been doing it for a long time”. Maybe I’m in the minority, but when I read a CS paper I mostly ignore the maths and then go to the source code or pseudocode to see how the algorithm was implemented.


> …for no other reason than “this is how people have been doing it for a long time”.

I disagree. Math notation has evolved to be as it is because it is useful for the purpose of doing math. If there were some way of doing it better, people would already be moving to it.

In some ways they are … people are using computer algebra packages more for a lot of the grunt work, and are using proof assistants to verify some things, but there’s a lot of math that’s still done by sketching why something is true and letting the reader work through it. Math notation isn’t about executing algorithms, it’s about communicating what the result is, and why it works.

“Doing Math” is not “Writing Programs”, so math notation is different.


What you’re looking at is calculus, specifically differentiation. This is pretty core to understanding physics, because so much of physics depends on the time-evolving state of things. That’s fundamentally what’s happening here.

The triangle, for example, is the upper-case greek letter delta, which in calculus represents ‘change of’. You might have heard of ‘delta-T’ with respect to ‘change of time’.

In calculus, upper-case delta means ‘change over a finite time’ vs lower-case delta meaning ‘instantaneous change’. The practical upshot, for example, is that the lower-case is the instantaneous rate-of-change at an instant in time, whereas the upper-case is the change over a whole time (e.g. the average rate of change per second for time = 0 seconds to time = 3 seconds).
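
If it helps to see that difference with actual numbers, here’s a toy example of my own (nothing to do with the paper in question): say the distance you’ve covered grows like x(t) = t², in metres.

    # Toy example (mine, not from any paper): position x(t) = t^2 metres.
    def x(t):
        return t * t

    # Upper-case delta: change over a finite interval, here t = 0 s to t = 3 s.
    delta_x = x(3) - x(0)                  # 9 metres
    delta_t = 3 - 0                        # 3 seconds
    print(delta_x / delta_t)               # 3.0 m/s, the average over the interval

    # The instantaneous rate at t = 3 s is the derivative dx/dt = 2t, i.e. 6 m/s,
    # which is quite different from the 3 m/s average.

Same symbols, two different questions: the change averaged over a stretch of time versus the rate right at one instant.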

If you are trying to grok this, I would suggest an introductory calculus or pre-calculus resource. It doesn’t have to be a uni textbook – higher-level high school maths usually teaches this. In this particular case, Khan Academy would be my recommendation because it is about the right level (we’re not talking esoteric higher-level university knowledge here) and it is eminently accessible. For example, this link may be a good starter in this instance:

https://www.youtube.com/watch?v=MeU-KzdCBps


Everyone is talking about the Δ symbol, but the real problem you’ll encounter will be later in the paper, where they start talking about H(ω), which is the Fourier transform of the impulse response (equation 4 and following). You’ll need to know a fair bit about Fourier transforms and impulse responses and filter design to get through this section. The notation is the least of the problems.

One place to start is https://en.wikipedia.org/wiki/Impulse_response


You say “There’s a formula with a triangle …” without telling me where. That’s not real helpful, and you’re making me do the work to find out what you’re talking about. If you want assistance to get started, you need to be more explicit.

However, I have done that work, so I’ve looked, and in the second column of page 210 there’s a “formula with a triangle”:

t_c = 5 · 10^{-5} sqrt( V / Dt )

… where the “D” I’ve used is where the triangle appears in the formula.

But that can’t be it, because just two lines above it we have:

“For a pulse of width Dt, the critical time …”

So that’s stating that “Dt” is the width of the pulse, and should be thought of as a single term.

So maybe that’s the wrong formula, or maybe it was just a bad example. So trying to be more helpful, the “triangle” is a Greek capital delta and means different things in different places. However, it is often used to mean “a small change in”.

https://en.wikipedia.org/wiki/%CE%94T

FWIW … at a glance I can’t see where that result is derived, it appears simply to be stated without explanation. I might be wrong, I’ve not read the rest of the paper.


I feel you’re coming at this without appreciating your body of prior knowledge. Intended or not, your statement “But that can’t be it, because just two lines above it we have…” assumes a whole lot of knowledge.

You and I both know that it reads as one term, but someone unfamiliar with calculus yet exposed to algebra has been drilled to understand separate graphemes as separate items, because the algebraic ‘multiply’ is so often implied, e.g. 3x = 3 × x, two individual ‘things’.

I think there’s merit in explaining the concept of delta representing change, because it’s not obvious. For example, when I was taught the concept in school, my teacher explicitly started with doing a finite change with numbers, then representing it in terms of ‘x’ and ‘y’, then merged them into the delta symbol. That’s a substantial intuitive stepping stone and I think it’s pretty reasonable that someone may not find this immediately apparent.


I agree completely that I’m coming at this with a lot of background knowledge, but if I’m reading in an unfamiliar field and I see a symbol I don’t recognise, I look in the surrounding text to see if the symbol appears nearby. As I say, “Δt” appears immediately above … that’s a clue. As you say, it’s drilled in at school that everything is represented by a single glyph, and if these are juxtaposed then it means multiplication, and that is another thing to unlearn.

But I think the problem isn’t the specifics of the “Δ”, it’s the meta-problem of believing that symbols have a “one true meaning” instead of being defined by the scope.

I agree that explaining the delta notation would be helpful, but that’s like giving someone a fish, or making them a fire. They are fed for one day, or warm for one night, it’s the underlying misconceptions that need addressing so they can learn to fish and be fed, or set on fire and be warm, for the remainder of their life.


I absolutely agree with your comments regarding teaching the underlying approach to digesting a paper. You definitely raise good points, especially the ‘one true meaning’ comment. I should state that I’m not discounting the value of your point, especially given this clarification; however, when I reflect on my own experience learning this, the times I learnt best were via an initial explanation, then a worked example, then the customary warning of corner-cases and here-be-dragons.

e: I also think, on reflection, that a significant part of your ability to grok a new paper per your comments is your comfort in approaching these concepts due to your familiarity. Think of learning a new language – once you have a feel for it, you’re likely more comfortable exploring new concepts within it, whereas when you’re faced with it from the start you probably feel very lost and apprehensive.

I feel that understanding calculus is a fairly fundamental step in the ‘language of maths’, teaching that symbols don’t necessarily represent numbers but can represent concepts (e.g. delta being change). This isn’t something you encounter until then, but once you do, you begin to understand the characters associated with integrals, matrices, etc. in a way that you may not have previously with algebra alone.


I think that this is indeed the formula in GP’s question. And indeed, sometimes math notation is obtuse like that. It looks like 2 terms, but the triangle goes together with the t as a single term. At other times it might be written “dt”, and despite looking like a multiplication of 2 variables (d and t, or triangle and t in this case), it’s just a single variable with a name made of 2 characters.

The important thing here is that “For a pulse of width Dt” is the definition of this variable, but this can be easily missed if you’re not used to this naming convention.


That’s because “Δ” means “a change of” or “an interval of”. So, Δt is “an interval of time”. It is like a compound word, really. It conveys more information than giving it an arbitrary, single-letter name.

This convention is used in a whole bunch of scientific fields, like quantum mechanics, chemistry, biology, mechanics, thermodynamics, etc.

It’s also very useful in how it relates to derivatives, which is a crucial concept in just about any kind of science you could care to mention.

So yes, there is a learning curve, but we write things this way for good reasons, most of the time.

Multiplication should be represented by a (thin) space in good typography, to avoid this sort of thing. Not doing it is sloppy and invites misreading. Same with omitting parentheses around a function’s argument most of the time (e.g. sin 2πθ instead of sin(2 π θ)).


> it’s just a single variable with a name made of 2 characters.

I have this same problem with programming, when I have to deal with code written by non-mathematicians. They tend to use all these stupid variables with more than one letter and that confuses the heck out of me.


Sorry, I didn’t mean to make you work for me, but it’s a PDF and I didn’t know how to better describe the position (maybe I should have told you the first formula on page X).

For you it was a D, for me it was a triangle and I didn’t get the meaning of that Dt. Maybe it’s just a too advanced paper for my knowledge.


BTW … you say:

> Maybe it’s just a too advanced paper for my knowledge.

Maybe it is for now … the point being that if you start at the beginning, chip away at it, search for terms on the ‘net, read multiple times, try to work through it, and then ask people when you’re really stuck, that’s one way of making progress.

You can, instead, enroll in an on-line course, or night-school, and learn all this stuff from the ground up, but it will almost certainly take longer. Your knowledge would be better grounded and more secure, but learning how to read, investigate, search, work, then ask, is a far greater skill than “taking a course”.

Others have answered your specific question about the delta symbol, but there are deeper processes/problems/questions here:

Not all concepts or values are represented by a single glyph; sometimes there are multi-glyph “symbols”, such as “Δt” in your example.

When you see a symbol you don’t recognise, read the surrounding text. The symbol will almost always be referenced or described.

The notation isn’t universal. Often it’s an aid to your memory, to write in a succinct form the thing that has been described elsewhere.

In these senses, it’s very much a language more akin to natural languages than computer languages. The formulas are things used to express a meaning, not things to be executed.

Specific questions about specific notation can be answered more directly, but to really get along with mathematical notation you need to “read like math” and not “read like a novel”.

None of this is meant to be precise; all of it is intended to give you a sense of how to make progress.


I’m just saying “D” because I can’t immediately type the symbol here and it was easier just to use that. Not least, I didn’t know if that was the formula you meant.

But as I say, immediately above the formula it says:

“For a pulse of width ∆t, the critical time …”

So that really is saying exactly what that cluster of symbols means. There will be things like this everywhere as you read stuff. Things are rarely completely undefined, but you are expected to be reading along.

And you need to work. I just typed this into DDG:

“What does ∆t mean?”

The very first hit is this:

https://en.wikipedia.org/wiki/Delta_%28letter%29

That gives you a lot of context for what the symbol means, and this is the sort of thing you’ll need to do. You need to stop, look at the thing you don’t understand, read around in the nearby text, then type a question (or two, or three) into a search engine.


I’ll use this as an example for the point I’m trying to make in my comment https://news.ycombinator.com/item?id=29341727

Please don’t take this the wrong way. It is not meant to be demeaning, and it is not meant to be gatekeeping (quite the contrary!). But: If you do not know what a derivative is, then learning that that symbol means derivative (assuming that it does, I have not actually looked at what you link to) will help you next to nothing. OK, you’ll have something to google, but if you don’t already have some idea what that is, there is no way you will get through the paper that way.

I hope you take this as motivation to take the time to properly learn the fundamentals of mathematics (such as for example calculus for the topic of derivatives).


The triangle, or “delta”, is used to indicate a tiny change in the following variable.

Let’s say you go on a journey, and the distance you’ve travelled so far is “x” and the time so far is “t”.

Then your average velocity since the beginning is x / t .

But, if you want to know your current velocity, that would be delta x divided by delta t .

The delta is usually used in a “limiting” sense – you can get a more accurate measurement of your velocity by measuring the change in x during a tiny time interval. The tinier the interval, the more accurate the estimate of current velocity.
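
To make that limiting idea concrete, here’s a throwaway numerical sketch (my own made-up example, using x(t) = t³ for the distance travelled):

    # Estimate the current velocity at t = 2 by shrinking the time interval.
    def x(t):
        return t ** 3

    t = 2.0
    for dt in (1.0, 0.1, 0.01, 0.001):
        print(dt, (x(t + dt) - x(t)) / dt)
    # Prints estimates of roughly 19, 12.61, 12.06, 12.006 (give or take
    # floating-point noise), closing in on the true instantaneous velocity
    # 3t^2 = 12 at t = 2.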

What I’m talking about here is the first steps in learning differential calculus. You could look for that at khanacademy.org. You might also benefit from looking at their “precalculus” courses.

Just keep plugging away at it; the concepts take a while to seep in. Attaining mathematical maturity takes years.


Looks like you need to grind through an elementary calculus book, with the exercises. You may think you can build intuition by reading just the definitions, but half of the understanding is tacit and you get it through the exercises.

If you’re trying to get into signal processing, it’ll involve calculus with complex numbers, and knowledge of that is often gained by plodding through proofs and exercises over and over.


Most textbooks come with a list of definitions.

Try to read it aloud.

“The Probability Lifesaver” has a lot of good tips (some not even mathematics-related), most of which are not probability-specific. It’s a goldmine.


As a starting point you can check out the notation appendices from my books:
https://minireference.com/static/excerpts/noBSmathphys_v5_pr…
https://minireference.com/static/excerpts/noBSLA_v2_preview….
You can also see this excerpt here on set notation https://minireference.com/static/excerpts/set_notation.pdf

That covers most of the basics, but I think your real question is how to learn all those concepts, not just the notation for them, which will require learning/reviewing relevant math topics. If you’re interested in post-high-school topics, I would highly recommend linear algebra, since it is a very versatile subject with lots of applications (more so than calculus).

As ColinWright pointed out, there is no one true notation and sometimes authors of textbooks will use slightly different notation for the same concepts, especially for more advanced topics. For basic stuff though, there is kind of a “most common” notation, that most books use and in fact there is a related ISO standard you can check out: https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag…

Good luck on your math studies. There’s a lot of stuff to pick up, but most of it has “nice APIs” and will be fun to learn.


the notation you need to know should be defined somewhere in the book or paper you’re reading

if it’s not, try intuition

if that fails, email your mathematician friend and ask

don’t have a mathematician friend? there’s your next goal, go make one.


There is no single authoritative source for mathematical notation. That said, there are a lot of common conventions. You could do worse than this NIST document if it’s just a notation question:

https://dlmf.nist.gov/front/introduction

Of course, if the real problem is that you need to learn some mathematical constructs, that is a different problem. The good news is that there’s a lot of material online, the bad news is that not all of it is good… I often like Khan Academy when it covers the topic.

I wish you luck!


For about $5 you can find an old (around 1960-1969) edition of the “CRC Handbook of Standard Mathematical Tables”. I’ve owned two copies of the 17th edition, published in 1969, because back then hand calculators didn’t exist and many of the functions used in mathematics had to be looked up in books, like what is the square root of 217. Engineers used these handbooks extensively back then.

Now, of course, you have the internet and it can tell you what the square root of 217 is. Consequently, the value of these used CRC handbooks is low and many are available on eBay for a few dollars. Pick up a cheap one and in it you will find many useless pages of tables covering square roots and trigonometry, but you will also find pages of formulas and explanations of mathematical terms and symbols.

Don’t pay too much for these books, because the internet and handheld calculators have pretty much removed the need for them, but that is how I first learned the meanings of many mathematical symbols and formulas.

You might also look for books of “mathematical formulas” in your local bookstores. Math is an old field and the notations you are stumbling over have likely been used for 100 years, like the triangle you were wondering about. (Actually the triangle is the upper-case Greek letter delta. Delta T refers to an amount of time, usually called an interval of time.)

Unfortunately, because math is an old subject it is a big subject. So big that no one person is expert in every part of math. The math covered in high school is kind of the starting point. All branches of mathematics basically start from there and spread out. If you feel you are rusty on your high school math, start there and look for a review book or study guide in those subjects, usually called Algebra 1 and Algebra 2. If you recall your Algebra 1 and 2, take a look at the books on pre-calculus. The normal progression is one year for each of the following courses in order, Algebra 1, Geometry, Algebra 2, Pre-Calculus, and Calculus. This is just the beginning of math proficiency, but by the time you get through Calculus you will be able to read the paper you referenced.

Is it really a year for each of those subjects? It can be done faster but math proficiency is a lot of work. Like learning to be a good golfer, it would be unusual to become a 10 handicap in less than 5 years of doing hours of golf each and every week.

Calculus is kind of the dividing line between high-school math and college level math. Calculus is the prerequisite for almost all other higher level math. With an understanding of Calculus one can go on to look into a wide range of mathematical subjects.

Some math is focused on its use to solve problems in specific areas; this is called applied math. In applied math there are subjects like Differential Equations, Linear Algebra, Probability and Statistics, Theory of Computation, Information & Coding Theory, and Operations Research.

Alternatively, there are areas of math that are studied because they have wider implications but not because they are trying to solve a specific kind of problem; this is called pure math. In pure math there are subjects like Number Theory, Abstract Algebra, Analysis, Topology & Geometry, Logic, and Combinatorics.

All of these areas start off easy and keep getting harder and harder. So you can take a peek at any of them, once you are through Calculus, and decide what to study next.


Naively, I would say the following:

1) Search youtube for multiple videos by different people on the topic you want to learn. Watch them without expecting to understand them at first. There is a delayed effect. Each content creator will explain it slightly differently and you will find that it will make sense once you’ve heard it explained several different times and ways.

I will read the chapter summary of a 1k-page math book repeatedly until I understand the big picture. Then I will repeatedly skim the chapters I least understand until I understand their big picture. I need to know the terms and concepts before I try to understand the formulas. I will do this until I get too confused to read more, then I will take a break for a few hours/days and start again.

2) You have to rewrite the formulas in your own language. At first you will use a lot of long descriptions but quickly you will get tired and you will start to abbreviate. Eventually, you get the point where you will prefer the terse math notation because it is just too tedious to write it out in longer words.

3) You might have to pause the current topic you are struggling with and learn the math that underlies it. This means a topic that should take 1 month to learn might actually take 1 year because you need to understand all that it is based on.

4) Try to find an applied implementation. For example, photogrammetry applies a lot of linear algebra. It is easier to learn linear algebra if you find an implementation of photogrammetry and try to rewrite it. This forces you to completely understand how the math works. You should read the parts of the math books that you need.


Maybe a problem is trying to learn it by reading it.

I was a college math major, and I admit that I might have flunked out had I been told to learn my math subjects by reading them from the textbooks without the support of the classroom environment. It may be that the books are “easy to read if a teacher is teaching them to you.”

Talking and writing math also helped me. Maybe it’s easier to learn a “language” if it’s a two way street and involves more of the senses.

Perhaps a substitute to reading the stuff straight from a book might be to find some good video lectures. Also, work the chapter problems, which will get your brain and hands involved in a more active way.

As others might have mentioned, there’s no strict formal math notation. It’s the opposite of a compiled programming language. In fact, math people who learn programming are first told: “The computer is stupid, it only understands exactly what you write.” In math, you’re expected to read past and gloss over the slight irregularities of the language and fill in gaps or react to sudden introduction of a new symbol or notational form by just rolling with it.


I think a good first resource would be the book and lecture notes from an introductory university course treating the specific domain you are interested in, because often a lot of the notation is domain-specific. There are lots of good open university lectures out there; if you’re not sure where to start, MIT OpenCourseWare used to be a good first guess for accessing materials.

As a side note, I have an MSc in Physics with a good dollop of maths involved, and I am quite clueless when looking at a new domain, so it’s not as if a university degree in a non-related subject would be of any help…


I think the problem is that there is no authoritative text, that I know of, and as ColinWright says, the same ideas can be notated differently by different fields or sometimes by different authors in the same field (though often they converge if they are in the same community).

Wikipedia has been helpful sometimes but otherwise I have found reading a lot of papers on the same topic has been useful. However, this is kind of an “organic” and slow way of learning notation common to a specific field.


The Greek alphabet would like to thank all the scholars for the centuries of overloading and offer a “tee hee hee” to all of the students tormented by attendant ambiguities.

Tough love, kids.


Could it be that you are trying to read things that are a bit too advanced? Maybe look for some first year university lecture notes? In general, if you cannot follow something, try to find some other materials on the same subject, preferably more basic ones.


Mathematics is a lingo and notations are mostly convention. Luckily people generally follow the same conventions, so my best advice, if you want to learn about a specific topic, is to work through the introductory texts! If you want to learn calculus, find an introductory college text. Statistics? There are traditional textbooks like Introduction to Statistical Learning. The introductory texts generally do explain notation, which then becomes assumed knowledge for more advanced texts, or, as you seem to want to read, academic papers. If those texts are still too difficult, then maybe move down to a high school text first.

Think about it this way. A scientist, wanting to communicate their ideas with fellow academics, is not going to spend more than half the paper on pedantry, explaining notation which everyone in their field would understand. Else what is the purpose of creating the notations? They might as well write their formulas and algorithms COBOL style!

Ultimately mathematics, like most human-invented languages, is highly tribal and has no fixed rules. And I believe we are much richer for it! Mathematicians constantly invent new syntax to express new ideas. If there was some formal reference they had to keep on hand every time they need to write an equation that would hamper their speed of thought and creativity. How would one even invent something new if you need to get the syntax approved first!

TL;DR: Treat math notation as any other human language. Find some introductory texts on the subject matter you are interested in to be “inducted” into the tribe


It can be quite provincial. Could you please post a link to a paper or website that has notation you’d like to understand? Which domains are you interested in particularly?


Math notation feels like a write-only language somehow.

I can read and understand undocumented code with relative ease. Reading math notation without any documentation seems pretty much impossible, otoh.


You get better at it the more you do. A tip is also to actually change a mathematical exposition into a form you better understand (e.g. by writing it in a different notation and/or expanding it out in words to make the existing notation less dense). Basically convert the presentation into the way you would personally like to see it.

If you do this enough, the process becomes easier and the original notation becomes easier to understand. But it takes a lot of time and patience (as I’m sure it did for you to understand undocumented code as well).


I learned it by asking peers in grad school what stuff meant. And by working through the math myself (it was a slog at first) and then writing it out in LaTeX. When one is forced to learn something because one needs to take courses and to graduate, the human brain somehow figures out a way.

A lot of it is convention, so you do need a social approach – ie asking others in your field. For me it was my peers, but these days there’s Math stack exchange, google, and math forums. Also, first few chapters of an intro Real Analysis text is usually a good primer to most common math notation.

When I started grad school I didn’t know many math social norms, like the unstated one that vectors (say x) are usually in column form by convention unless otherwise stated (in undergrad calc and physics, vectors were usually in row form). I spent a lot of time being stymied by why matrix and vector sizes were wrong and why x’ A x worked. Or that the dot product was x’x (in undergrad it was x.x). It sounds like I lacked preparation, but the reality was no one told me these things in undergrad. (I should also note that I was not a math major; the engineering curriculum didn’t expose me much to advanced math notation. Math majors will probably have a different experience.)
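
For anyone hitting the same wall, here’s roughly what that convention looks like once you spell it out in code. The numbers are made up and numpy is just for convenience; the only point is the shapes.

    import numpy as np

    # Convention: a vector x is a column, i.e. an n x 1 matrix, so x' A x means
    # (1 x n) @ (n x n) @ (n x 1) and comes out as a 1 x 1 "scalar".
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    x = np.array([[1.0],
                  [2.0]])        # explicit column vector, shape (2, 1)

    print(x.T @ A @ x)           # x' A x, the quadratic form  -> [[18.]]
    print(x.T @ x)               # x' x, the dot product       -> [[5.]]

Treat x as a row instead and the same expression blows up with a shape mismatch, which is exactly the “why are my sizes wrong” experience.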


First, just to state the obvious, if you can accurately describe a notation in words, you can do an Internet search for it.

When that fails, math.stackexchange.com is a very active and helpful resource. You can ask what certain notation means, and upload a screenshot since it’s not always easy to describe math notation in words.

If you don’t want to wait for a human response, Detexify (https://detexify.kirelabs.org/classify.html) is an awesome site where you can hand draw math notation and it’ll tell you the LaTeX code for it. That often gives a better clue for what to search for.

For example, you could draw an upside-down triangle and see that one of the ways to express this in LaTeX is \nabla. Then you can look up the Wikipedia article on the nabla symbol. (Of course, in this case you could easily have just searched “math upside down triangle symbol”, and the first result is a Math StackExchange thread answering this.)


Practice, just like you learned programming.
“The Context” gives you the meaning for the notation, sadly. You have to kind of know it to understand the notation properly.


You can also get sufficiently angry and just write out linear algebra books and whatnot in Agda / Coq / Lean, if it pisses you off that much (I’ve done a bunch of exercises in Coq).


I like the approach they took in Structure and Interpretation of Classical Mechanics, where the whole book is done in Scheme:

    ;; The Lagrange equations d/dt(∂L/∂qdot) - ∂L/∂q, built as operations on a path q:
    (define ((Lagrange-equations Lagrangian) q)
      (- (D (compose ((partial 2) Lagrangian) (Gamma q)))   ; d/dt of ∂L/∂velocity along the path
         (compose ((partial 1) Lagrangian) (Gamma q))))     ; ∂L/∂coordinate along the path


Through a really nice and helpful math prof who took time out of her day to explain it to those in the “im in trouble” additional course. Forever grateful for that, would have failed otherwise.

Math notation becomes very readable as soon as the teacher writes an example out on the blackboard, and that is why I will never forgive Wikipedia / Wolfram / LaTeX for not having an interactive “notation to example” expansion. They had such a chance to reform the medium – to make it more accessible to beginners – and basically forgot about them.


Do you mean all the introductory mathematics books you tried fail to properly explain the notation ?

Or that the notation differs from books to books ?

(In my case, I learned the notation via French math textbooks, and in the first day of college/uni we literally went back to “There is a set of things called natural numbers, and we call this set N, and there is this one thing called 0, and there is a notion of successor, and if you keep taking the successor it’s called ‘+’, and…” etc.

But then, the French, Bourbaki-style way of teaching math is veeeeeeeery strict on notations.)


You might be better off picking an area and trying to work out the notation relating to that area, e.g. vectors / matrices / calculus etc. As Colin says below, there are often multiple equivalent ways of representing things across different fields and timeframes. I seem to remember the maths I studied in Elec Eng looking different from, but equivalent to, the way it was represented in other disciplines.


Well, the real fun is deciphering a lower-case xi – ξ – when written on the blackboard (or whiteboard), especially compared to a lower-case zeta – ζ (fortunately way less commonly used).

As all the others have already told you, you don’t learn by reading alone.


Related question, does anyone know of any websites/books that have mathematical notation vs the computer code representing the same formula side by side? I find that seeing it in code helps me grasp it very quickly.


I’ve run into this problem as well and it’s put me off learning TLA+ and information theory, which bums me out. I assume there’s a Khan Academy class that would help but it’s hard to find.


Khan academy and Schaum’s Outlines are your friends.

Then some textbooks with exercises (e.g. Axler on lin alg).

The notation is usually an expression of a mental model, so just approaching via notation may cause some degree of confusion.


If math was a programming language, all mathematicians would be fired for terrible naming conventions and horrible misuse of syntax freedom.

Honestly, most math formulas can be turned into something that looks like C/C++/C#/Java/JavaScript/TypeScript code and become infinitely more readable and understandable.
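
As a rough illustration of what I mean (Python here just for brevity; a C-family version has the same shape), take the textbook formula for sample variance, s^2 = (1/(n-1)) * sum_i (x_i - mean)^2, and spell it out with names instead of single letters:

    # s^2 = (1 / (n - 1)) * sum over i of (x_i - mean)^2, written out longhand.
    def sample_variance(samples):
        n = len(samples)
        mean = sum(samples) / n
        squared_deviations = [(value - mean) ** 2 for value in samples]
        return sum(squared_deviations) / (n - 1)

    print(sample_variance([2, 4, 4, 4, 5, 5, 7, 9]))  # 4.571...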

Sadly, TypeScript is one of the languages that is attempting to move back to idiocy by having generics named a single letter. Bastards.


I hear this question asked quite often, particularly on HN. I think the question is quite backwards. There is little value alone in learning “math notation”, even ignoring what many people point out (there is no one “math notation”). “Math notation”, at best, translates into mathematical concepts. Words, if you will, but words with very specific meaning. Understanding those concepts is the crux of the matter! That is what takes effort – and the effort needed is that of learning mathematics. After that, one may still struggle with bad (or “original”, or “different”, or “overloaded”, or “idiotic”, or…) notation, of course, but there is little use in learning said notation(s) on their own.

I’ve been repeatedly called a gatekeeper for this stance here on HN, but really: notation is a red herring. To understand math written in “math notation”, you first have to understand the math at hand. After that, notation is less of an issue (even though it may still be present). Of course the same applies to other fields, but I suspect that the question crops up more often regarding mathematics because it has a level of precision not seen in any other field. Therefore a lot more precision tends to hide behind each symbol than the casual observer may be aware of.


> I find it really hard to read anything because of the math notations and zero explanation of it in the context.

So many answers and no correct one yet. Read and solve “How to Prove It: A Structured Approach”, Velleman. This is the best introduction I’ve seen so far. After finishing you’ll have enough maturity to read pretty much any math book.


I sometimes think math notation is a conspiracy against the clever but lazy.
Being able to pronounce the Greek alphabet is a start, as you can use your ear and literary mind once you have that, but when you encounter <...>, as in an unpronounceable symbol, the meaningless abstraction becomes a black box and destroys information for you.

Smart people often don’t know the difference between an elegant abstraction that conveys a concept and a black box shorthand for signalling pre-shared knowledge to others. It’s the difference between compressing ideas into essential relationships, and using an exclusive code word.

This fellow does a brilliant job at explaining the origin of a constant by taking you along the path of discovery with him, whereas many “teachers” would start with a definition like “Feigenbaum means 4.669,” which is the least meaningful aspect to someone who doesn’t know why. https://www.veritasium.com/videos/2020/1/29/this-equation-wi…

It wasn’t until decades after school that it clicked for me that a lot of concepts in math aren’t numbers at all, but refer to relationships and relative proportions and the interactions of different types of things, which are in effect just shapes, but ones we can’t draw simply, and so we can only specify them using notations with numbers. I think most brains have some low level of natural synesthesia, and the way we approach math in high school has in effect imposed a three-legged race on anyone who tries it.

Pi is a great example, as it’s a proportion in a relationship between a regular line you can imagine and the circle made from it. There isn’t much else important about it other than that it applies to everything, and it’s the first irrational number we found. You can speculate that a line is just a stick some ancients found on the ground and so its unit is “1 stick” long, which makes it an integer, but when you rotate the stick around one end, the circular path it traces has a constant proportion to its length, because it’s the stick and there is nothing else acting on it, yet amazingly the proportion that describes that relationship pops out of the single integer dimension and yields a whole new type of number that is no longer an integer. The least interesting or meaningful thing about pi is that it is 3.141 etc. High school math teaching conflates computation and reasoning, and invents gumption traps by going depth-first into ideas that make much more sense in their breadth-first contexts and relationships to other things, which also seems like a conspiracy to keep people ignorant.

Just yesterday I floated the idea of a book club salon for “Content, Methods, and Meaning,” where, starting from any level, each session 2-3 participants pick and learn the same chapter separately and do their best to give a 15-minute explanation of it to the rest of the group. It’s on the first-year syllabus of a few universities, and it’s a breadth-first approach to a lot of the important foundational ideas.

The intent is that I think we only know anything as well as we can teach it, so the challenge is to learn by teaching, and you have to teach it to someone smart but without the background. Long comment, but keep at it; dumber people than you have got further with mere persistence.
