n-th moment | m_{n} = ∫ x^{n} P(x) dx | (1) |
Norm | m_{0} = ∫ P(x) dx = 1 | (2) |
Mean | μ = m_{1} | (3) |
n-th central moment | μ_{n} = ∫ (x − μ)^{n} P(x) dx | (4) |
Variance | σ^{2} = μ_{2} | (5) |
Std deviation | σ = √μ_{2} | (6) |
Skewness | μ_{3} / σ^{3} | (7) |
Kurtosis | μ_{4} / σ^{4} | (8) |
Excess kurtosis | μ_{4} / σ^{4} − 3 | (9) |
μ_{2} = m_{2} − m_{1}^{2} | (10) |
μ_{3} = m_{3} − 3 m_{2} m_{1} + 2 m_{1}^{3} | (11) |
μ_{4} = m_{4} − 4 m_{3} m_{1} + 6 m_{2} m_{1}^{2} − 3 m_{1}^{4} | (12) |
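The identities (10)-(12) can be verified numerically: compute the raw moments m_{n} and the central moments μ_{n} of the same sample directly, then check that the conversion formulas agree. (An illustrative sketch, not part of the original text; the exponential sample is an arbitrary choice.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # any skewed sample works

# Raw moments m_0..m_4 and central moments mu_0..mu_4 of the sample.
m = [np.mean(x**n) for n in range(5)]
mu = [np.mean((x - m[1])**n) for n in range(5)]

assert np.isclose(mu[2], m[2] - m[1]**2)                                    # (10)
assert np.isclose(mu[3], m[3] - 3*m[2]*m[1] + 2*m[1]**3)                    # (11)
assert np.isclose(mu[4], m[4] - 4*m[3]*m[1] + 6*m[2]*m[1]**2 - 3*m[1]**4)   # (12)
```

Because both sides are computed from the same sample averages, the identities hold exactly (up to floating-point rounding), not just in expectation.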
[Figure: power-exponential functions illustrating negative skew (left-skewed) and positive skew (right-skewed).]
See more examples in Distributions.
A high-kurtosis distribution has a sharper "peak" and heavier "tails", while a low-kurtosis distribution has a more rounded peak with thinner tails and wider "shoulders".
[Figure: the Laplace distribution (high kurtosis) and the uniform distribution (low kurtosis).]
See more examples in Distributions.
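The comparison above can be checked by sampling: the Laplace distribution has excess kurtosis +3, while the uniform distribution has −1.2. (A sketch using NumPy's samplers; the estimator below is the plug-in form of (9).)

```python
import numpy as np

def excess_kurtosis(x):
    """Plug-in excess kurtosis mu_4 / mu_2^2 - 3 of a sample."""
    mu = np.mean(x)
    m2 = np.mean((x - mu)**2)
    m4 = np.mean((x - mu)**4)
    return m4 / m2**2 - 3.0

rng = np.random.default_rng(1)
lap = rng.laplace(size=1_000_000)
uni = rng.uniform(size=1_000_000)

print(excess_kurtosis(lap))  # near +3  (sharp peak, heavy tails)
print(excess_kurtosis(uni))  # near -1.2 (flat top, no tails)
```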
In practice, the density function P(x) is often not available, and only a limited number of observations of a random variable is at hand: x_{i}, i = 1, . . . , N. So-called sample estimators can then be used instead of (1):
n-th moment | m_{n} = (1/N) ∑_{i=1}^{N} x_{i}^{n} | (13) |
Note that m_{n}, being a sum over random variables, is a random variable itself. It has its own expectation (mean) value and standard deviation. An estimator is called "unbiased" if its expectation value is equal to the exact (population) value. For example, E(m_{1}) = μ, so m_{1} is an unbiased estimator of the mean; E(m_{2} − m_{1}^{2}) = σ^{2} (N − 1)/N, so it is biased (although it tends to σ^{2} as N → ∞).
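The bias of the plug-in variance is easy to see by simulation: averaging m_{2} − m_{1}^{2} over many samples of size N approaches σ^{2} (N − 1)/N rather than σ^{2}, and the gap closes as N grows. (An illustrative Monte Carlo sketch; the standard-normal population and sample size are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 5, 200_000          # small N makes the bias clearly visible

# trials independent samples of size N from a population with sigma^2 = 1.
samples = rng.normal(0.0, 1.0, size=(trials, N))
plug_in_var = samples.var(axis=1)   # m_2 - m_1^2 for each sample

print(plug_in_var.mean())           # near sigma^2 * (N-1)/N = 0.8, not 1.0
```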
Mean | m_{1} | (14) |
Variance | N/(N − 1) (m_{2} − m_{1}^{2}) | (15) |
Skewness | (m_{3} − 3 m_{2} m_{1} + 2 m_{1}^{3}) / (m_{2} − m_{1}^{2})^{3/2} | (16) |
Excess kurtosis | (m_{4} − 4 m_{3} m_{1} + 6 m_{2} m_{1}^{2} − 3 m_{1}^{4}) / (m_{2} − m_{1}^{2})^{2} − 3 | (17) |
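The sample estimators (13)-(17) can be collected into a single routine. This sketch uses the plug-in forms of skewness and excess kurtosis mirroring definitions (7)-(9), and the N/(N − 1) correction for the variance discussed above; the function name is an arbitrary choice.

```python
import numpy as np

def sample_stats(x):
    """Return (mean, unbiased variance, skewness, excess kurtosis)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    mean = x.mean()                                  # (14): m_1
    mu2 = np.mean((x - mean)**2)                     # plug-in m_2 - m_1^2
    var = mu2 * N / (N - 1)                          # (15): bias-corrected
    sigma = np.sqrt(mu2)
    skew = np.mean((x - mean)**3) / sigma**3         # (16)
    ex_kurt = np.mean((x - mean)**4) / sigma**4 - 3  # (17)
    return mean, var, skew, ex_kurt

# A symmetric sample has zero skewness:
print(sample_stats([1, 2, 3, 4, 5]))  # (3.0, 2.5, 0.0, -1.3)
```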
© Nikolai Shokhirev, 2001 - 2017