Sharpe measure = (1.30 - 0.25) / 7.30 = 0.144
The S&P 500 earned 0.144% of excess return per unit of risk, where risk is measured by standard deviation.
o: Describe the relative locations of the mean, median, and mode for a nonsymmetrical distribution.
For a symmetrical distribution, the mean, median, and mode are equal.
For a positively skewed distribution, the mode is less than the median, which is less than the mean. Recall that the mean is affected by outliers. In a positively skewed distribution, there are large, positive outliers which will tend to "pull" the mean upward.
For a negatively skewed distribution, the mean is less than the median, which is less than the mode. In this case, there are large, negative outliers which tend to "pull" the mean downward.
p: Define and interpret skewness and explain why a distribution might be positively or negatively skewed.
Skewness refers to a distribution that is not symmetrical.
A positively skewed distribution is characterized by many outliers in its upper or right tail. Recall that an outlier is defined as an extraordinarily large outcome in absolute value. Positively skewed distributions have long right tails.
A negatively skewed distribution is the opposite of a positively skewed distribution. A negatively skewed distribution has a disproportionately large amount of outliers on its left side. In other words, a negatively skewed distribution is said to have a long tail on its left side.
q: Define and interpret kurtosis and explain why a distribution might have positive excess kurtosis.
Kurtosis measures whether a distribution is more or less "peaked" than a normal distribution.
A distribution that is more peaked than normal is leptokurtic. A leptokurtic return distribution will have more returns clustered around the mean and more returns with large deviations from the mean (fatter tails).
A distribution that is less peaked, or flatter, than normal is said to be platykurtic.
For all normal distributions, kurtosis is equal to three. Statisticians, however, sometimes report excess kurtosis, which is defined as kurtosis minus three. A normal distribution has excess kurtosis equal to zero, a leptokurtic distribution has excess kurtosis greater than zero, and a platykurtic distribution has excess kurtosis less than zero.
r: Explain why a semi-logarithmic scale is often used for return performance graphs.
Semi-logarithmic scales use an arithmetic scale on the horizontal axis, but use a logarithmic scale on the vertical axis. Hence, values on the vertical axis are spaced according to their logarithms. On a semi-logarithmic scale, equal movements on the vertical axis reflect equal percentage changes.
1.C: Probability Concepts
a: Define a random variable.
A random variable is a quantity whose outcomes are uncertain. A realized random variable is a number associated with the outcome of an experiment. When rolling a conventional six-sided die, the random variable might be the number that faces up when the die stops rolling.
b: Explain the two defining properties of probability.
The probability of any event "i" is between zero and one.
If a set of events E1, E2, ..., En is mutually exclusive and exhaustive, then the sum of the probabilities of those events equals one.
Mutually exclusive means that the events do not share any outcomes. Knowing that you have an outcome in one event excludes the possibility of an outcome in another event.
Exhaustive means that a given list of events represent all possible outcomes.
c: Distinguish among empirical, a priori, and subjective probabilities.
We can assign probabilities to events three ways:
We calculate an empirical probability by analyzing past data.
We calculate an a priori probability by using formal reasoning and inspection.
A subjective probability is less formal and involves personal judgment.
d: Describe the investment consequences of probabilities that are inconsistent.
With respect to investment opportunities, when two assets are priced based upon different probabilities being assigned to the same event, this is called inconsistent probabilities. It is best explained by a general example.
Example: Event E will increase the return of both stock A and stock B. The price of stock A incorporates a higher probability of E than does the price of stock B. All other things equal, stock A is overpriced compared to stock B. Therefore, an investor should reduce holdings of stock A and increase holdings of stock B. An investor who is not too risk averse might engage in a pairs arbitrage trade, short selling A and using the proceeds to buy stock B.
e: Distinguish between unconditional and conditional probabilities.
An unconditional probability is also called a marginal probability, and it is the most basic type of probability. It is the probability of an event where the occurrence of other events is not important. We might be concerned with the probability of an economic recession where we do not care about interest rates, inflation, etc. In such a case, we would be concerned with the unconditional probability of a recession.
A conditional probability is one where the knowledge of some other event is important. We might be concerned about the probability of a recession given that the monetary authority increases interest rates. This is a conditional probability. The key thing to look for is "the probability of A given B." This is noted by a vertical bar symbol.
f: Define a joint probability.
A joint probability is the probability that both events occur at the same time, but neither is certain or a given. We write the probability of A and B as P(AB). Unless both A and B occur, it does not qualify as the event "A and B."
g: Calculate, using the multiplication rule, the joint probability of two events.
There is a relationship between the expressions P(AB) and P(A | B). It is called the multiplication rule for probabilities:
P(AB) = P(A | B) * P(B)
In words, this is: "the probability of A and B is the probability of A given B times the unconditional probability of B."
We can manipulate this to give the following representation for a conditional probability: P(A | B) = P(AB) / P(B)
Example: We will assume the probabilities in the list below:
The probability of the monetary authority increasing interest rates "I" is 40%: P(I) = .4
The probability of a recession "R" given an increase in interest rates is 70%: P(R given I) = .7
The probability of "R" without an increase in interest rates is 10%: P(R given IC) = .1
Without additional information, we can assume that the events "increase in interest rates" and "no increase in interest rates" are the only possible events. They are mutually exclusive and exhaustive, and since there are only two events, they are called complements. The superscript "C" stands for complement.
P(IC) = 1 - P(I) = .60
What is the probability of "recession and an increase in interest rates?"
P(RI) = P(R given I) * P(I) = .7 * .4 = .28
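The multiplication-rule calculation above can be sketched in a few lines of Python, using the probabilities from the example:

```python
# Joint probability via the multiplication rule,
# using the interest-rate/recession example above.
p_i = 0.4          # P(I): probability of an interest rate increase
p_r_given_i = 0.7  # P(R | I): probability of recession given an increase

p_ri = p_r_given_i * p_i  # P(RI) = P(R | I) * P(I)
print(round(p_ri, 2))     # 0.28
```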
h: Calculate, using the addition rule, the probability that at least one of two events will occur.
The general rule of addition states that if two events A and B are not mutually exclusive, you must account for the joint probability of the events, that is, the possibility that the two events occur at the same time. In a Venn diagram, the joint probability appears as the overlap of the two event circles.
P (A or B) = P (A) + P(B) – P(A and B), where P(A and B) is the joint probability of A and B.
The joint probability [P(A and B)] is defined as the probability that measures the likelihood that 2 or more events will happen concurrently.
P(A and B) = P(A)*P(B) for independent events, or
P(A and B) = P(A)*P(B given that A occurs) for dependent events.
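As a quick sketch, the addition rule can be checked with illustrative numbers (the probabilities below are assumed for the example, not taken from the text):

```python
# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
p_a = 0.5
p_b = 0.4
p_a_and_b = 0.2  # assumed joint probability

p_a_or_b = p_a + p_b - p_a_and_b
print(round(p_a_or_b, 2))  # 0.7
```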
i: Distinguish between dependent and independent events.
Independent events are a list of events where knowledge of one has no influence on the other. That is easily expressed using conditional probabilities. A and B are independent if:
P(A | B) = P(A), and P(B | A) = P(B)
The best examples of independent events are found with the a priori probabilities of dice throws or coin flips. A die has no memory; therefore, the event of a "4" on a second throw of a die is independent of a "4" on the first throw.
j: Calculate a joint probability of any number of independent events.
The multiplication rule for independent events is:
P(A | B) * P(B) = P(A) * P(B) = P(AB)
P(B | A) * P(A) = P(B) * P(A) = P(AB)
Example: On the roll of two dice, the probability of getting two "4s" is:
P(4 on first die and 4 on second die) = P(4 on first die) * P(4 on second die)
P(4 on first die and 4 on second die) = (1/6) *(1/6) = 1/36 = .0278.
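The dice calculation can be reproduced with exact fractions:

```python
from fractions import Fraction

# Independent events: the probability of a "4" on each die multiplies.
p_four = Fraction(1, 6)
p_both = p_four * p_four
print(p_both)         # 1/36
print(float(p_both))  # ≈ 0.0278
```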
k: Calculate, using the total probability rule, an unconditional probability.
The total probability rule is used to demonstrate how joint probabilities tie in with unconditional probabilities. If we continue with our example from LOS 1.C.g about interest rates and recession, and assume that the events "I" and "IC" are mutually exclusive and exhaustive, then a recession can only occur with either of these two events. In that case, the sum of these two joint probabilities is the unconditional probability of a recession:
P(R) = P(R given I) * P(I) + P(R given IC) * P(IC)
P(R) = P(RI) + P(RIC)
P(R) = .28 + .06 = .34
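The total probability calculation above, as a short sketch:

```python
# Total probability rule: P(R) = P(R | I)*P(I) + P(R | IC)*P(IC)
p_i, p_ic = 0.4, 0.6
p_r_given_i, p_r_given_ic = 0.7, 0.1

p_r = p_r_given_i * p_i + p_r_given_ic * p_ic
print(round(p_r, 2))  # 0.34
```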
l: Define and calculate expected value.
The expected value is the probability-weighted average of the possible outcomes of the random variable.
E(X) = ∑xi*P(xi) = x1*P(x1) + x2*P(x2) + … + xn*P(xn)
Here, the "E" denoted expected value. The symbol x1 is the first realization of random variable X. The symbol x2 is the second realization, etc. In the long run, the realizations should average to the expected value. This is most easily seen using the a priori probabilities associated with a coin toss. On the flip of one coin, we might designate the event "head" as letting the random variable equal one. Alternatively, the event "tail" means the random variable equals zero. A statistician would write:
If head, then X = 1
If tail, then X = 0
For a fair coin where P(head) = P(X = 1) = 0.5 and P(tail) = P(X = 0) = 0.5, the probability weighted average or expected value is:
E(X) = P(X = 0) * 0 + P(X = 1) * 1 = 0.5
For the coin flip, X cannot assume a value of 0.5 in any single experiment. Over the long term, however, the average of all the outcomes should be 0.5.
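A minimal expected-value helper makes the calculation concrete (the function name is ours, for illustration):

```python
def expected_value(outcomes, probs):
    """Probability-weighted average of the possible outcomes."""
    return sum(x * p for x, p in zip(outcomes, probs))

# Fair coin: X = 1 for heads, X = 0 for tails.
print(expected_value([1, 0], [0.5, 0.5]))  # 0.5
```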
m: Define, calculate, and interpret variance and standard deviation.
The variance is the expected value of the squared deviations of each observation from the random variable's expected value. As an expected value, the variance uses the probability of each observation xi to weight the associated squared deviation: [xi - E(X)]2. The formula for variance is:
σ2(X) = ∑ [xi - E(X)]2 * P(xi)
The standard deviation is the positive square root of the variance. It may be represented by σ(X) or just σ.
Example:
Event                  | xi   | P(xi) | xi * P(xi)   | [xi - E(X)]2 * P(xi)
Fall short of forecast | -.03 | .20   | -.0060       | .000361
Meet forecast          | .01  | .45   | .0045        | .000003
Exceed forecast        | .04  | .35   | .0140        | .000265
                       |      |       | E(X) = .0125 | σ2(X) = .000629
The value σ2(X) = .000629 is in "squared" units and is very difficult to interpret. The standard deviation is more useful: σ = (.000629).5 = .0251 or 2.51%.
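The table's arithmetic can be reproduced with a short sketch:

```python
# Variance and standard deviation for the forecast example above.
x = [-0.03, 0.01, 0.04]  # outcomes
p = [0.20, 0.45, 0.35]   # probabilities

mean = sum(xi * pi for xi, pi in zip(x, p))               # E(X) ≈ 0.0125
var = sum((xi - mean) ** 2 * pi for xi, pi in zip(x, p))  # ≈ 0.000629
std = var ** 0.5                                          # ≈ 0.0251
print(round(mean, 4), round(std, 4))
```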
n: Explain the use of conditional expectation in investment applications.
As we know, other factors can play a role in how a stock reacts to a given set of news. A conditional expected value is a refined forecast that uses additional or new information to appropriately adjust the probabilities that make up a forecast.
Example: Suppose the probability of falling short of, meeting, or exceeding expectations depends upon some external event like weather conditions. If the weather in the relevant time period has been "good," the probabilities are P(fall short | good) = 0.10, P(meet | good) = 0.50, P(exceed | good) = 0.40. If the weather has been "poor," the corresponding probabilities are 0.30, 0.40, 0.30. For each type of weather, the conditional expected value is:
Good weather: E(X | good) = -.03 * .10 + .01 * .50 + .04 * .40 = .018
Poor weather: E(X | poor) = -.03 * .30 + .01 * .40 + .04 * .30 = .007
o: Calculate an expected value using the total probability rule.
The total probability rule for expected value says that the unconditional expected value is the probability-weighted average of the conditional expected values.
Example: Continuing with our good vs. poor weather example from LOS 1.C.n:
E(X) = E(X | good) * P(good) + E(X | poor) * P(poor)
E(X) = [.018 * 0.5] + [.007 * 0.5] = .0125
We can apply this procedure to any set of mutually exclusive and exhaustive scenarios S1, S2, ..., Sn. The total probability rule for expected value is then represented by:
E(X) = E(X | S1) * P(S1) + E(X | S2) * P(S2) + E(X | S3) * P(S3) + ... + E(X | Sn) * P(Sn)
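The weather example, restated in code:

```python
# Total probability rule for expected value (weather example above).
e_given_good, e_given_poor = 0.018, 0.007
p_good, p_poor = 0.5, 0.5

e_x = e_given_good * p_good + e_given_poor * p_poor
print(round(e_x, 4))  # 0.0125
```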
p: Define, calculate, and interpret covariance.
The covariance is the most basic measure of how two assets move together. The covariance is the expected value of the product of the deviations of the two random variables around their respective means. The symbol for the covariance between X and Y is Cov(X,Y). Since the formula is often applied to the returns of assets, the formula below has been written in terms of the covariance of the return of asset "i" and the return of asset "j:"
Cov(Ri,Rj) = E{[Ri - E(Ri)] * [Rj - E(Rj)]}
Example: The economy can experience one of the following three states "S" next year: boom, normal, or slow economic growth. An expert source has calculated that P(boom) = 0.30, P(normal) = 0.50, and P(slow) = 0.20. The corresponding returns for stock A and stock B are in the designated columns in the table below. The last column is the product of each stock's deviation from its expected return, weighted by the respective probability.
Event  | P(S) | RA           | RB           | [RA - E(RA)] * [RB - E(RB)] * P(S)
Boom   | 0.3  | 0.20         | 0.30         | .00336
Normal | 0.5  | 0.12         | 0.10         | .00020
Slow   | 0.2  | 0.05         | 0.00         | .00224
       |      | E(RA) = 0.13 | E(RB) = 0.14 | Cov(RA,RB) = 0.0058
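The covariance calculation above can be reproduced as follows:

```python
# Covariance of returns for the three economic states.
p  = [0.3, 0.5, 0.2]     # P(boom), P(normal), P(slow)
ra = [0.20, 0.12, 0.05]  # returns of stock A
rb = [0.30, 0.10, 0.00]  # returns of stock B

e_ra = sum(pi * r for pi, r in zip(p, ra))  # E(RA) = 0.13
e_rb = sum(pi * r for pi, r in zip(p, rb))  # E(RB) = 0.14
cov = sum(pi * (x - e_ra) * (y - e_rb) for pi, x, y in zip(p, ra, rb))
print(round(cov, 4))  # 0.0058
```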
q: Explain the relationship among covariance, standard deviation, and correlation.
The covariance is a more general representation of the same concept as the variance. The variance measures how a random variable moves with itself. The covariance measures how one random variable moves with another random variable.
The covariance of RA with itself is equal to the variance of RA.
The covariance can be zero and even negative. For example, the returns of a stock and of a put option on the stock would have a negative covariance.
The covariance is difficult to interpret by itself. For this reason, we usually divide the covariance by the standard deviations of the two random variables to get the correlation between the two random variables. The correlation is a measure of the strength of the linear relationship between two random variables.
r: Calculate the expected return and the variance for return on a portfolio.
An analyst can determine the expected value and variance of a portfolio of assets using the corresponding properties of the assets in the portfolio. To do this, we must first introduce the concept of portfolio weights:
wi = market value of investment in asset i / market value of the portfolio
For the exam, memorize the formulas for the two-stock portfolio:
E(Rp) = wA * E(RA) + wB * E(RB)
Var(Rp) = wA2 * Var(RA) + wB2 * Var(RB) + 2 * wA * wB * Cov(RA,RB)
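A sketch of the two-stock expected return and variance with illustrative inputs (the weights, returns, variances, and covariance below are assumed, not from the text):

```python
# Two-stock portfolio: expected return and variance.
w_a, w_b = 0.6, 0.4        # portfolio weights (assumed)
e_a, e_b = 0.10, 0.08      # expected returns (assumed)
var_a, var_b = 0.04, 0.09  # variances (assumed)
cov_ab = 0.012             # covariance (assumed)

e_p = w_a * e_a + w_b * e_b
var_p = w_a**2 * var_a + w_b**2 * var_b + 2 * w_a * w_b * cov_ab
print(round(e_p, 4), round(var_p, 5))  # 0.092 0.03456
```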
s: Calculate covariance given a joint probability function.
Example: A covariance matrix contains both the covariances and the variances (recall that the covariance of an asset with itself is the variance - the terms along the diagonal in the table below are the variances). This is the simplest case because the most tedious calculations have already been performed. To make this example more interesting, let's assume that we have a portfolio that consists of a stock "S" and a put option on the stock "O." We are given wS = 0.90, wO = 0.10, and the covariance table below.
Covariances | RS      | RO
RS          | 0.0011  | -0.0036
RO          | -0.0036 | 0.016
We simply place the values into the formula for the variance:
Var(Rp) = 0.902 * 0.0011 + 0.102 * 0.016 + 2 * 0.9 * 0.10 * (-0.0036).
Var(Rp) = 0.000403
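The same computation in code, using the covariance matrix values:

```python
# Portfolio variance for the stock (S) and put option (O) example.
w_s, w_o = 0.90, 0.10
var_s, var_o = 0.0011, 0.016  # diagonal of the covariance matrix
cov_so = -0.0036              # off-diagonal term

var_p = w_s**2 * var_s + w_o**2 * var_o + 2 * w_s * w_o * cov_so
print(round(var_p, 6))  # 0.000403
```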
Since options are a zero-sum game, we might assume E(RO) = 0. Then the expected return of the portfolio is 0.9 * E(RS).
t: Calculate an updated probability, using Bayes' formula.
Bayes' formula says that given a set of prior probabilities for an event of interest, when you receive new information, the rule for updating the probability of the event is:
Updated probability = (Probability of new information given event * Prior probability of event) / (Unconditional probability of new information)
Example: Electcomp Corporation manufactures electronic components for computers and other devices. There is speculation that Electcomp will soon announce a major expansion into overseas markets. Electcomp would only do this if its managers estimated the demand to be sufficient to support the sales. If demand is sufficient, Electcomp would also be more likely to raise prices. For ease of notation, let expand overseas = "O" and let increase prices = "I."
An industry analyst determines the following probabilities:
P(I) = 0.3 and P(IC) = 0.7
P(O given I) = 0.6 and P(O given IC) = 0.2
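With these inputs, the updated probability follows directly from Bayes' formula; the sketch below completes the calculation using only the numbers given:

```python
# Bayes' formula: update P(I) after observing the overseas expansion O.
p_i, p_ic = 0.3, 0.7
p_o_given_i, p_o_given_ic = 0.6, 0.2

p_o = p_o_given_i * p_i + p_o_given_ic * p_ic  # total probability: 0.32
p_i_given_o = p_o_given_i * p_i / p_o          # (0.6 * 0.3) / 0.32
print(round(p_i_given_o, 4))  # 0.5625
```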