Yamato
Legendary member
Our modern word "sine" is derived from the Latin word sinus, which means "bay", "bosom" or "fold", translating Arabic jayb. The Arabic term is in origin a corruption of Sanskrit jīvā "chord". Sanskrit jīvā in learned usage was a synonym of jyā "chord", originally the term for "bow-string". Sanskrit jīvā was loaned into Arabic as jiba.[1][2] This term was then transformed[2] into the genuine Arabic word jayb, meaning "bosom, fold, bay", either by the Arabs or by a mistake[1] of the European translators such as Robert of Chester (perhaps because the words were written without vowels[1]), who translated jayb into Latin as sinus.[3] Particularly Fibonacci's sinus rectus arcus proved influential in establishing the term sinus.[4]
This is one awesome link, right there. I think I will need both, but for portfolio theory I will need more probability, because I want to know how likely the goddamn risk of ruin is, which keeps on persecuting me (because I push my risk, because I want money).

1. Probability deals with predicting the likelihood of future events; statistics involves the analysis of the frequency of past events.
2. Probability is primarily a theoretical branch of mathematics, which studies the consequences of mathematical definitions; statistics is primarily an applied branch of mathematics, which tries to make sense of observations in the real world.
3. Probability theory enables us to find the consequences of a given ideal world; statistical theory enables us to measure the extent to which our world is ideal.
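Since risk of ruin is exactly this kind of "probability model" question, here is a minimal Monte Carlo sketch of it. Every number in it (the 55% win rate, the per-trade risk, the 50% ruin threshold) is a hypothetical placeholder, not a claim about any real system:

```python
import random

def risk_of_ruin(win_prob, win_pct, loss_pct, n_trades,
                 ruin_level=0.5, n_paths=5_000, seed=1):
    """Monte Carlo estimate of the chance that compounded equity ever
    drops to `ruin_level` of starting capital within `n_trades` trades.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        equity = 1.0
        for _ in range(n_trades):
            if rng.random() < win_prob:
                equity *= 1.0 + win_pct
            else:
                equity *= 1.0 - loss_pct
            if equity <= ruin_level:
                ruined += 1
                break
    return ruined / n_paths

# Same 55% edge, very different ruin risk depending on bet size:
low = risk_of_ruin(0.55, 0.02, 0.02, n_trades=500)   # risk 2% per trade
high = risk_of_ruin(0.55, 0.20, 0.20, n_trades=500)  # risk 20% per trade
print(low, high)
```

The point of the sketch is the comparison, not the exact numbers: with the same edge, pushing the bet size is what turns a tiny ruin probability into a near-certain one.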
Rule 4 describes what my job as a system trader should be: to identify the "probability model" based on past statistics, and use it to predict future probability. Of course I've been doing these things all along (badly), but it helps to know where I stand and what I am doing, objectively. So I know where to find help.

In probability, we start with a model describing what events we think are going to occur, with what likelihoods...
[...]
The statistician turns this around:
[...]
- Rules ← data: Given only the data, try to guess what the rules were. That is, some probability model controlled what data came out, and the best we can do is guess — or approximate — what that model was. We might guess wrong; we might refine our guess as we get more data.
- Statistics is about looking backward.
- Statistics is an art. It uses mathematical methods, but it is more than math.
- Once we make our best statistical guess about what the probability model is (what the rules are), based on looking backward, we can then use that probability model to predict the future. (This is, in part, why I say that probability doesn't need statistics, but statistics uses probability.)
Here's my favorite example to illustrate. Suppose I give you a list of heads and tails. You, as the statistician, are in the following situation:
- You do not know ahead of time that the coin is fair. Maybe you've been hired to decide whether the coin is fair (or, more generally, whether a gambling house is committing fraud).
- You may not even know ahead of time whether the data come from a coin-flipping experiment at all.
Suppose the data are three heads. Your first guess might be that a fair coin is being flipped, and these data don't contradict that hypothesis. Based on these data, you might hypothesize that the rules governing the experiment are that of a fair coin: your probability model for predicting the future is that heads and tails each occur with 50% likelihood.
If there are ten heads in a row, though, or twenty, then you might start to reject that hypothesis and replace it with the hypothesis that the coin has heads on both sides. Then you'd predict that the next toss will certainly be heads: your new probability model for predicting the future is that heads occur with 100% likelihood, and tails occur with 0% likelihood.
If the data are “heads, tails, heads, tails, heads, tails”, then again, your first fair-coin hypothesis seems plausible. If on the other hand you have heads alternating with tails not three pairs but 50 pairs in a row, then you reject that model. It begins to sound like the coin is not being flipped in the air, but rather is being flipped with a spatula. Your new probability model is that if the previous result was tails or heads, then the next result is heads or tails, respectively, with 100% likelihood.
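The ten-or-twenty-heads reasoning can be made quantitative: under the fair-coin model a run of n heads has probability 0.5^n, so the data never contradict the model outright; they just make it wildly improbable relative to the two-headed-coin model. A quick sketch:

```python
# Under the fair-coin model, a run of n heads has probability 0.5**n.
# Under the two-headed-coin model the same run has probability 1, so
# each extra head doubles the evidence (likelihood ratio) for the
# trick-coin hypothesis.
for n in (3, 10, 20):
    p_fair = 0.5 ** n
    ratio = 1.0 / p_fair  # likelihood ratio: two-headed vs fair
    print(f"{n:2d} heads: p under fair coin = {p_fair:.7g}, "
          f"ratio = {ratio:,.0f}:1")
```

Three heads is unremarkable (1 in 8); twenty heads is about one in a million under the fair coin, which is why the hypothesis gets replaced.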
I think the distinction you want is that probability theory is pure math, while statistical theory is applied math. Statistics is the application of probability theory to the real world; it's a science, like physics, where you gather data, perform experiments, make predictions, and so on. So just as a physicist might use calculus to predict the path of a moving object, a statistician might use probability theory to predict the weather.
Simply put, probability deals with what SHOULD occur; statistics deals with what HAS occurred. It's simply a matter of when.
In a certain sense, probability and statistics are opposites. Probability starts from a given probability distribution, with given parameters, and gives the chances that a specific outcome will happen. Statistics starts with specific outcomes (the sample) and gives the parameters for the probability distribution.
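The two opposite directions fit side by side in a few lines, using the coin again (the flip sequence is made up for illustration):

```python
from math import comb

# Probability direction: the model is given (a fair coin, p = 0.5);
# compute the chance of a specific outcome, e.g. exactly 2 heads
# in 3 flips, via the binomial formula.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(2, 3, 0.5))  # 3/8 = 0.375

# Statistics direction: the outcomes are given; estimate the parameter.
flips = "HTHHTHHH"  # invented sample
p_hat = flips.count("H") / len(flips)
print(p_hat)  # 0.75 -- the sample-frequency estimate of p
```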
In these notes I view Probability as a predictive science analogous to Physics.
The focus of Statistics (as an intellectual discipline) is on using quantitative data to try to answer questions you don't yet know the answer to.
...the problems considered by probability and statistics are inverse to each other. In probability theory we consider some underlying process which has some randomness or uncertainty modeled by random variables, and we figure out what happens. In statistics we observe something that has happened, and try to figure out what underlying process would explain those observations.
There is a lot of overlap. They are both studies of randomness, but they approach it from different perspectives. Probability generally deals with simple random events, statistics deals with complex real-world events that aren't completely understood. Probability is more math-flavored, statistics is more engineering-flavored. I also think statistics is a little more difficult, but that's just an opinion.
A simple probability problem would deal with a uniform distribution, such as drawing a card from a perfectly shuffled deck. A simple statistics problem deals with random quantities that are more difficult (or impossible) to predict, such as the average height of a group of people.
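Both toy problems fit in a few lines; the heights below are invented numbers purely for illustration:

```python
# Probability side: a perfectly shuffled 52-card deck is a uniform
# distribution, so any named card has probability 1/52, and any of the
# four aces 4/52.
p_named_card = 1 / 52
p_ace = 4 / 52

# Statistics side: heights are not predictable one by one, but a
# sample can be summarised (illustrative numbers, in cm).
heights = [162.0, 175.5, 168.2, 181.0, 170.3]
avg_height = sum(heights) / len(heights)
print(p_ace, avg_height)
```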
I think a big part of the fear and confusion of these subjects lies in notation difficulties. You are required to use parameters, constants, and more than one type of variable all at once. It's easy to get confused if you don't understand what's happening very clearly. And teachers unfortunately tend to give formulas instead of explaining in clear language. But that's only a part of the reason...
Don't forget that these sciences are studying the future and hypothetical situations, which are always tinged with emotion and confusion. Consider gambling and investments.
There are also some mild controversies over probability and statistics. How can you know for sure that a deck is perfectly shuffled? What does that even mean? How can you measure the amount of randomness in an event? Can you claim that a certain event has a certain probability distribution, if you never actually measure it an infinite number of times?
Probability is a theory built on assumptions, based on some fact or data which may not necessarily be true.
Statistics works with authentic data, through which a fair analysis can be made of the chances of getting things done.
So I definitely will have to cover inferential statistics.

Probability is the foundation of inferential statistics.
There are two branches of statistics - descriptive and inferential.
Descriptive statistics does not require a working knowledge of probability.
The concept of probability is well defined, more so than that of statistics.
Statistics is descriptive and inferential.
Probability is neither.
Probability can be viewed either as the long-run frequency of occurrence or as a measure of the plausibility of an event given incomplete knowledge - but not both.
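A small sketch of the descriptive/inferential split (invented sample data): the descriptive summary never mentions probability, while the inferential step, here a rough normal-approximation confidence interval, leans on it.

```python
import statistics

# Descriptive statistics: summarise the sample itself; no probability
# theory required.
sample = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.6, 4.3]  # illustrative data
mean = statistics.fmean(sample)
sd = statistics.stdev(sample)

# Inferential statistics: say something about the population the sample
# came from -- and that step needs probability. A rough 95% confidence
# interval for the population mean uses the normal approximation
# (1.96 standard errors; a t-based interval would be more careful
# for a sample this small).
half_width = 1.96 * sd / len(sample) ** 0.5
ci = (mean - half_width, mean + half_width)
print(mean, ci)
```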
Statistics are functions of the observations (data) that often have useful and even surprising properties.
[...]
From the observations we compute statistics that we use to estimate population parameters, which index the probability density, from which we can compute the probability of a future observation from that density.
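That whole chain fits in a short sketch, assuming (as a modelling choice, not a fact) a normal density; the data are invented for illustration:

```python
from math import erf, sqrt
import statistics

# The chain in the quote, in miniature:
# observations -> statistics (sample mean and sd) -> parameter
# estimates for an assumed normal density -> probability of a
# future observation from that density.
data = [9.8, 10.1, 10.4, 9.7, 10.3, 10.0, 9.9, 10.2]  # illustrative
mu_hat = statistics.fmean(data)
sigma_hat = statistics.stdev(data)

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Estimated probability that the next observation exceeds 10.5,
# conditional on the normal model being right.
p_next_above = 1 - normal_cdf(10.5, mu_hat, sigma_hat)
print(mu_hat, sigma_hat, p_next_above)
```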