There are 1332 items in this version of the glossary, dated November 27, 2005.
Copyright 1997-2005, Peter B. Meyer.
2SLS:
an abbreviation for two stage least squares, an instrumental
variables estimation technique.
Contexts: econometrics; estimation
3SLS:
A kind of simultaneous equations estimation. Made up of 2SLS
followed by SUR.
First proposed by Zellner and Theil, Econometrica, 1962, pp
54-78.
Contexts: econometrics; estimation
a fortiori:
Latin, meaning roughly "with even stronger reason." Used when a claim
follows even more readily than one already established, for example in
comparing two theorems or proofs.
Contexts: phrases
a priori:
It is always used as the phrase "a priori", often shown in italics because
it is not English, but comes from Latin.
In the economics context "a priori" means "it is assumed in advance".
It means: "we think it is logical that . . . " or "we had to assume
something, and we assumed this, without evidence."
The writer is also implying "I do not cite evidence here because I do not
know it or do not wish to discuss it."
I do not know why they do not say this in English. It may come from the
formal logic of proof in mathematics, developed over hundreds of years by
people who knew Latin. It may have also a more precise meaning than I
said there but I am sure this is clear enough to help. Or maybe they like
"a priori" because it is so short.
Contexts: phrases
A-D equilibrium:
abbreviation for Arrow-Debreu equilibrium.
AAEA:
American Agricultural Economics Association.
See their web site at http://www.aaea.org.
abnormal returns:
Used in the context of stock returns; means the return to a portfolio in
excess of the return to a market portfolio. Contrast excess returns
which means something else. Note that abnormal returns can be negative.
Example: Suppose average market return to a stock was 10% for some calendar
year, meaning stocks overall were 10% higher at the end of the year than at
the beginning, and suppose that stock S had risen 12% in that period. Then
stock S's abnormal return was 2%.
Contexts: finance
absolute risk aversion:
An attribute of a utility function. See Arrow-Pratt measure.
Contexts: micro theory; finance
absorptive capacity:
A limit to the rate or quantity of scientific or technological information
that a firm can absorb. If such limits exist they provide one explanation
for why firms develop internal R&D capacities. R&D departments can not only
conduct development along lines they are already familiar with, but they have
formal training and external professional connections that make it possible
for them to evaluate and incorporate externally generated technical knowledge
into the firm better than others in the firm can. In other words a partial
explanation for R&D investments by firms is to work around the absorptive
capacity constraint.
This term comes from Cohen and Levinthal (1990).
Source: Cohen W., and D. Levinthal. 1990. "Absorptive capacity: a new
perspective on learning and innovation." Administrative Science
Quarterly 35(1) pp 128-152.
Contexts: IO; organizations; theory of the firm
abstracting from:
a phrase that generally means "leaving out". A model abstracts from
some elements of the real world in its demonstration of some specific
force.
Contexts: phrases
accelerator principle:
That it is the growth of output that induces continuing net investment. That
is, net investment is a function of the change in output not its
level.
Source: Branson
Contexts: macro
acceptance region:
Occurs in the context of hypothesis testing. Let T be a test statistic.
Possible values of T can be divided into two regions, the acceptance region
and the rejection region. If the value of T comes out to be in the
acceptance region, the null hypothesis being tested is not rejected.
If T falls in the rejection region, the null hypothesis is rejected.
The terms 'acceptance region' and 'rejection region' may also refer to the
subsets of the sample space that would produce statistics T in the
acceptance region or rejection region as defined above.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; statistics; estimation
ACIR:
Advisory Commission on Intergovernmental Relations, in the U.S.
Contexts: organizations
active measures:
In the context of combating unemployment: policies designed to improve the
access of the unemployed to the labor market and jobs, job-related skills,
and the functioning of the labor market. Contrast passive
measures.
Source: John P. Martin, D16 readings book
Contexts: labor; macro
adapted:
The stochastic process {X_t} is adapted to the sequence of information sets
{Y_t} if each X_t is measurable with respect to Y_t; that is, the value of
X_t is known once the information in Y_t is available.
Contexts: statistics; econometrics
AEA:
American Economic Association
AER:
An abbreviation for the American Economic Review.
Contexts: journals
affiliated:
From Milgrom and Weber (Econometrica, 1982, page 1096): Bidders' valuations
of a good being auctioned are affiliated if, roughly: "a high value of one
bidder's estimate makes high values of the others' estimates more
likely."
There may well be good reasons not to use the word
correlated in place of affiliated. This editor is
advised that there is some mathematical difference.
Source: Milgrom and Weber, Econometrica, 1982, p 1096.
Contexts: auctions; micro theory; modelling
affine:
adjective, describing a function with a constant slope. Distinguished from
linear which sometimes is meant to imply that the function has no
constant term; that it is zero when the independent variables are zero. An
affine function may have a nonzero value when the independent variables are
zero.
Examples: y = 2x is linear in x, whereas y = 2x + 7 is an affine function of
x.
And y = 2x + z^2 is affine in x but not in z.
Contexts: real analysis
affine pricing:
A pricing schedule where there is a fixed cost or benefit to the consumer for
buying more than zero, and a constant per-unit cost per unit beyond that.
Formally, the mapping from quantity purchased to total price is an affine
function of quantity.
Using, mostly, Tirole's notation, let q be the quantity in units purchased,
T(q) be the total price paid, p be a constant price per unit, and k be the
fixed cost, an example of an affine price schedule is T(q)=k+pq.
For alternative ways of pricing see linear pricing schedule and
nonlinear pricing.
Source: Tirole, p 136
Contexts: IO
AFQT:
Armed Forces Qualification Test -- a test given to new recruits in the
U.S. armed forces. Results from this test are used in regressions of labor
market outcomes on possible causes of those outcomes, to control for other
causes.
Contexts: data; labor
AGI:
An abbreviation for Adjusted Gross Income, a line item which appears on the
U.S. taxpayer's tax return and is sometimes used as a measure of income which
is consistent across taxpayers. AGI does not include any accounting for
deductions from income that reduce the tax due, e.g. for family size.
Contexts: public finance; labor
agricultural economics:
"Agricultural Economics is an applied social science that deals with how
producers, consumers, and societies use scarce resources in the production,
processing, marketing, and consumption of food and fiber products." (from
Penson, Capps, and Rosson (1996), as cited by Hallam 1998).
Source: Penson, Capps, and Rosson, 1996; Hallam,
1998
Contexts: agricultural economics; fields
AIC:
abbreviation for Akaike's Information Criterion
Contexts: econometrics; time series; estimation
AJS:
An abbreviation for the American Journal of Sociology.
Contexts: journals
Akaike's Information Criterion:
A criterion for selecting among nested econometric models. The AIC is
a number associated with each model:
AIC = ln(s_m^2) + 2m/T
where m is the number of parameters in the model, and s_m^2 is (in an
AR(m) example) the estimated residual variance: s_m^2 = (sum of squared
residuals for model m)/T. That is, the average squared residual for
model m.
The criterion may be minimized over choices of m to form a tradeoff between
the fit of the model (which lowers the sum of squared residuals) and the
model's complexity, which is measured by m. Thus an AR(m) model versus an
AR(m+1) can be compared by this criterion for a given batch of data.
An equivalent formulation is this one: AIC = T ln(RSS) + 2K
where K is the number of regressors, T the number of observations, and RSS
the residual sum of squares; minimize over K to pick K.
Source: Watson's compressed notes, p. 23; RATS manual pg. 5-18
Contexts: econometrics; time series
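The comparison above can be sketched in a few lines of code (a hypothetical illustration, not from the glossary's sources; the residual sums of squares are made up):

```python
import math

def aic(rss, m, T):
    """Akaike's Information Criterion, ln(s_m^2) + 2m/T, where
    s_m^2 = RSS/T is the average squared residual of model m."""
    return math.log(rss / T) + 2 * m / T

# Made-up residual sums of squares for AR(1) and AR(2) fits to the
# same T = 100 observations:
T = 100
rss = {1: 52.0, 2: 50.9}

# Pick the lag order m that minimizes the criterion.
best_m = min(rss, key=lambda m: aic(rss[m], m, T))
```

In this made-up case the small improvement in fit from the extra lag just outweighs the 2m/T complexity penalty.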
alienation:
A Marxist term. Alienation is the subjugation of people by the artificial
creations of people "which have assumed the guise of independent things."
Because products are thought of as commodities with money prices, the social
process of trade and exchange becomes driven by forces operating independently
of human will like natural laws.
almost surely:
With probability one. In particular, the statement that a sequence
{W_n} converges to W almost surely as n goes to infinity means that
Pr{W_n -> W} = 1.
Contexts: probability; statistics; econometrics
alternative hypothesis:
"The hypothesis that the restriction or set of restrictions to be tested
does NOT hold." Often denoted H1. Synonym for 'maintained
hypothesis.'
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; statistics; estimation
Americanist:
A member of a certain subfield of political science.
Contexts: political science
AMEX:
American Stock Exchange, which is in New York City
Contexts: organizations
aML:
A programming language/environment for maximum likelihood estimation, allowing
complicated error specifications.
Contexts: estimation
Amos:
A statistical data analysis program, discussed at
http://www.smallwaters.com/amos.
Contexts: data
analytic:
Often means 'algebraic', as opposed to 'numeric'. E.g., in the context of
taking a derivative, which could sometimes be calculated numerically on a
computer, but is usually done analytically by finding an algebraic expression
for the derivative.
Contexts: phrases
annihilator operator:
Denoted []+ with a lag operator polynomial in the brackets.
Has the effect of removing the terms with an L to a negative power; that is,
future values in the expression. Their expected value is assumed to be zero
by whoever applies the operator.
Contexts: models
Annuity formula:
If annuity payments over time are (0,P,P,...P) for n periods, and the constant
interest rate r>0, then the net present value to the recipient of the
annuity can be calculated this way:
NPV(A) = (1 - (1+r)^(-n))P/r
Contexts: finance
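The closed form can be checked directly against a brute-force sum of discounted payments (a minimal sketch; the payment, rate, and horizon are made up):

```python
def annuity_npv(P, r, n):
    """Net present value of the payment stream (0, P, P, ..., P):
    P per period for n periods, first payment one period from now,
    discounted at the constant interest rate r > 0."""
    return (1 - (1 + r) ** -n) * P / r

# Example: 100 per period for 10 periods at r = 5%.
npv = annuity_npv(100.0, 0.05, 10)
```

The formula agrees with summing P/(1+r)^t over t = 1, ..., n.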
ANOVA:
Stands for analysis-of-variance, a statistical model meant to analyze
data. Generally the variables in an ANOVA analysis are categorical, not
continuous. The term main effect is used in the ANOVA context. The
main effect of x seems to mean the result of an F test to see if the
different categories of x have any detectable effect on the dependent variable
on average.
ANOVA is used often in sociology, but rarely in economics as far as this
editor can tell. The terms ANCOVA and ANOCOVA mean
analysis-of-covariance.
From Kennedy, 3rd edition, pp226-227:
"Analysis of variance is a statistical technique designed to determine
whether or not a particular classification of the data is meaningful. The
total variation of the dependent variable (the sum of squared differences
between each observation and the overall mean) can be expressed as the sum
of the variation between classes (the sum of the squared differences
between the mean of each class and the overall mean, each times the number
of observations in that class) and the variation within each class (the
sum of the squared difference between each observation and its class
mean). This decomposition is used to structure an F test to test the
hypothesis that the between-class variation is large relative to the
within-class variation, which implies that the classification is
meaningful, i.e., that there is a significant variation in the dependent
variable between classes. If dummy variables are used to capture these
classifications and a regression is run, the dummy variable coefficients
turn out to be the class means, the between-class variation is the
regression's "explained" variation, the within-class variation is the
regression's "unexplained" variation, and the analysis of variance F test
is equivalent to testing whether or not the dummy variable coefficients
are significantly different from one another. The main advantage of the
dummy variable regression is that it provides estimates of the magnitudes
of class variation influences on the dependent variables (as well as
testing whether or not the classification is meaningful).
"Analysis of covariance is an extension of analysis of variance to handle
cases in which there are some uncontrolled variables that could not be
standardized between classes. These cases can be analyzed by using dummy
variables to capture the classifications and regressing the dependent
variable on these dummies and the uncontrollable variables. The analysis
of covariance F tests are equivalent to testing whether the coefficients of
the dummies are significantly different from one another. These tests can
be interpreted in terms of changes in the residual sums of squares caused
by adding the dummy variables. Johnston (1972, pp 192-207) has a good
discussion."
Kennedy also says: "In light of the above, it can be concluded that anyone
comfortable with regression analysis and dummy variables can eschew
analysis of variance and covariance techniques." [But one needs to understand
the academic work out there, not just write one's own. -ed.]
Source: Stata manuals; Kennedy, 1992
Contexts: statistics; sociology
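Kennedy's decomposition of total variation into between-class and within-class parts can be verified on a toy example (hypothetical data, not from the sources above):

```python
# Hypothetical observations grouped into two classes.
groups = {"A": [2.0, 3.0, 4.0], "B": [6.0, 7.0, 8.0]}
all_obs = [x for xs in groups.values() for x in xs]
grand_mean = sum(all_obs) / len(all_obs)

# Class means -- these equal the dummy-variable regression coefficients.
class_means = {g: sum(xs) / len(xs) for g, xs in groups.items()}

# Between-class variation: squared distance of each class mean from the
# grand mean, weighted by class size.
between = sum(len(xs) * (class_means[g] - grand_mean) ** 2
              for g, xs in groups.items())
# Within-class variation: squared distance of each observation from its
# class mean.
within = sum((x - class_means[g]) ** 2
             for g, xs in groups.items() for x in xs)
# Total variation: squared distance of each observation from the grand mean.
total = sum((x - grand_mean) ** 2 for x in all_obs)
# total equals between + within, as in the quoted passage.
```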
APT:
Arbitrage Pricing Theory; from Stephen Ross, 1976-78. Quoting Sargent,
"Ross posited a particular statistical process for asset returns, then
derived the restrictions on the process that are implied by the hypothesis
that there exist no arbitrage possibilities."
The APT includes multiple risk factors, unlike the CAPM.
Source: Sargent, 1987, p 112; Ross,
1976
Contexts: finance; models
AR:
Stands for "autoregressive." Describes a stochastic process
(denoted here e_t) that can be described by a weighted sum of its
previous values and a white noise error. An AR(1) process is a
first-order one, meaning that only the immediately previous value has a direct
effect on the current value:
e_t = r*e_{t-1} + u_t
where r is a constant that has absolute value less than one, and u_t is
drawn from a distribution with mean zero and finite variance, often a
normal distribution.
An AR(2) would have the form:
e_t = r_1*e_{t-1} + r_2*e_{t-2} + u_t
and so on. In theory a process might be represented by an
AR(infinity).
Contexts: time series; econometrics; statistics
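A minimal simulation of the AR(1) above (an illustration only; the coefficient and sample size are arbitrary):

```python
import random

def simulate_ar1(rho, T, seed=0):
    """Simulate e_t = rho * e_{t-1} + u_t with u_t drawn iid N(0, 1),
    starting from e_0 = 0. |rho| < 1 makes the process stationary."""
    rng = random.Random(seed)
    e, path = 0.0, []
    for _ in range(T):
        e = rho * e + rng.gauss(0.0, 1.0)
        path.append(e)
    return path

path = simulate_ar1(0.9, 500)
```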
AR(1):
A first-order autoregressive process. See AR for
details.
Contexts: statistics
ARCH:
Stands for Autoregressive Conditional Heteroskedasticity. It's a technique
used in finance to model asset price volatility over time.
It is observed in much time series data on asset prices that there are
periods when variance is high and periods where variance is low. The ARCH
econometric model for this (introduced by Engle (1982)) is that the variance
of the series itself is an AR (autoregressive) time series, often a linear
one.
Formally, per Bollerslev et al 1992 and Engle (1982):
An ARCH model is a discrete time stochastic process {e_t} of the form:
e_t = z_t*s_t
where the z_t's are iid over time, E(z_t) = 0, var(z_t) = 1, and s_t is
positive and time-varying. Usually s_t is further modeled to be an
autoregressive process.
According to Andersen and Bollerslev 1995/6/7, "ARCH models are usually
estimated by maximum likelihood techniques." They almost always give a
leptokurtic distribution of asset returns even if one assumes that each
period's returns are normal, because the variance is not the same each period.
Even ARCH models, however, do not usually generate enough kurtosis in equity
returns to match U.S. stock data.
Source: Engle, 1982
Contexts: finance; statistics; time series
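A sketch of an ARCH(1) process in the notation above, using the common specification s_t^2 = omega + alpha*e_{t-1}^2 (the parameter values here are made up):

```python
import random

def simulate_arch1(omega, alpha, T, seed=0):
    """Simulate e_t = z_t * s_t with z_t iid N(0, 1) and conditional
    variance s_t^2 = omega + alpha * e_{t-1}^2 (omega > 0, 0 <= alpha < 1)."""
    rng = random.Random(seed)
    e_prev, path = 0.0, []
    for _ in range(T):
        s = (omega + alpha * e_prev ** 2) ** 0.5
        e_prev = rng.gauss(0.0, 1.0) * s
        path.append(e_prev)
    return path

returns = simulate_arch1(0.2, 0.5, 1000)
```

Calm periods (small recent e_t) keep the variance low; a large shock raises the next period's variance, producing the volatility clustering described above.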
ARIMA:
Describes a stochastic process or a model of one. Stands for
"autoregressive integrated moving-average". An ARIMA process is
made up of sums of autoregressive and moving-average
components, and may not be stationary.
Source: Enders, 1996, p 23
Contexts: time series; econometrics
ARMA:
Describes a stochastic process or a model of one. Stands for
"autoregressive moving-average". An ARMA process is a
stationary one made up of sums of autoregressive and
moving-average components.
Source: Enders, 1996, p 23
Contexts: time series; econometrics
Arrovian uncertainty:
Measurable risk, that is, measurable variation in possible outcomes, on
the basis of knowledge or believed assumptions in advance. Contrast
Knightian uncertainty.
Source: Used in Rosenberg (1996) in Mosaic of Economic Growth.
Arrow-Debreu equilibrium:
Means, in practice, competitive equilibrium of the kind shown in Debreu's
Theory of Value.
The Arrow-Debreu reference may be to a particular paper: "Existence of an
Equilibrium for a Competitive Economy", Econometrica. Vol 22 July
1954, pp 265-290. I haven't checked that out.
Source: Debreu; Arrow and Debreu, 1954
Arrow-Pratt measure:
An attribute of a utility function.
Denote a utility function by u(c). The Arrow-Pratt measure of absolute risk
aversion is defined by:
R_A = -u''(c)/u'(c)
This is a measure of the curvature of the utility function. This measure
is invariant to affine transformations of the utility function, which is a
useful attribute because such transformations do not affect the
preferences expressed by u().
If R_A() is decreasing in c, then u() displays decreasing absolute risk
aversion. If R_A() is increasing in c, then u() displays increasing
absolute risk aversion. If R_A() is constant with respect to changes in c,
then u() displays constant absolute risk aversion.
Source: Huang and Litzenberger, 1988, p 21; for Arrow (1970)
and Pratt (1964).
Contexts: finance; micro theory
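The measure can be computed numerically for any smooth utility function by finite differences (a sketch, not from the sources above; the step size h is an arbitrary choice):

```python
import math

def arrow_pratt(u, c, h=1e-4):
    """Numerical Arrow-Pratt measure R_A = -u''(c)/u'(c), using central
    finite differences for the first and second derivatives."""
    u1 = (u(c + h) - u(c - h)) / (2 * h)
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2
    return -u2 / u1

# For log utility, R_A(c) = 1/c, so u displays decreasing absolute
# risk aversion.
r_at_2 = arrow_pratt(math.log, 2.0)
```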
ASQ:
An abbreviation for the journal Administrative Science Quarterly which
tends to be closer to sociology than to economics.
Contexts: journals
ASR:
An abbreviation for the journal American Sociological Review.
Contexts: journals
asset pricing models:
A way of mapping from abstract states of the world into the prices of
financial assets like stocks and bonds. The prices are always conceived of as
endogenous; that is, the states of the world cause them, not the other way
around, in an asset pricing model.
Several general types are discussed in the research literature. The
CAPM is one, distinguished from three that Fama (1991) identifies: (a)
the Sharpe-Lintner-Black class of models, (b) the multifactor models like the
APT of Ross (1976), and (c) the consumption based models such as Lucas
(1978).
An asset pricing model might or might not include the possibility of
fads or bubbles.
Source: Fama, 1991, p 1590-1599
Contexts: finance
asset-pricing function:
maps the state of the economy at time t into the price of a capital asset at
time t.
Source: Sargent, 1987, Ch 3
Contexts: macro; finance; models
asymptotic:
An adjective meaning 'of a probability distribution as some variable or
parameter of it (usually, the size of the sample from another distribution)
goes to infinity.'
In particular, see asymptotic distribution.
Contexts: econometrics
asymptotic normality:
An estimator is asymptotically normal if, suitably scaled (usually by
n^(1/2)), it has a normal limiting distribution as the sample size n goes
to infinity; the limiting distribution of most standard estimators is
normal in this sense.
This is usually proven with a mean value expansion of the score at the
estimated parameter value.
asymptotic variance:
Definition of the asymptotic variance of an estimator may vary from author to
author or situation to situation. One standard definition is given in Greene,
p 109, equation (4-39) and is described there as "sufficient for nearly
all applications." It's
asy var(t_hat) = (1/n) * lim_{n->infinity} E[ (t_hat -
lim_{n->infinity} E[t_hat])^2 ]
Source: Greene, 1993, p 109
Contexts: econometrics
asymptotically equivalent:
Estimators are asymptotically equivalent if they have the same asymptotic
distribution.
Contexts: econometrics
asymptotically unbiased:
"There are at least three possible definitions of asymptotic
unbiasedness:
1. The mean of the limiting distribution of n^(1/2)(t_hat - t) is zero.
2. lim_{n->infinity} E[t_hat] = t.
3. plim t_hat = t."
Usually an estimator will have all three of these or none of them. Cases
exist however in which left hand sides of those three are different.
"There is no general agreement among authors as to the precise meaning of
asymptotic unbiasedness, perhaps because the term is misleading at the outset;
asymptotic refers to an approximation, while unbiasedness is an
exact result. Nonetheless the majority view seems to be that (2) is the
proper definition of asymptotic unbiasedness. Note, though, that this
definition relies upon quantities that are generally unknown and that may not
exist." -- Greene, p 107
Source: Greene, 1993, p 107
Contexts: econometrics
attractor:
a kind of steady state in a dynamical system. There are three types of
attractor: stable steady states, cyclical attractors, and
chaotic attractors.
Source: J. Montgomery, social networks paper
Contexts: macro; models
augmented Dickey-Fuller test:
A test for a unit root in a time series sample. An augmented
Dickey-Fuller test is a version of the Dickey-Fuller test for a larger
and more complicated set of time series models.
(Ed.: what follows is only my best understanding.) The augmented
Dickey-Fuller (ADF) statistic, used in the test, is a negative number. The
more negative it is, the stronger the rejection of the hypothesis that there
is a unit root at some level of confidence. In one example, with three
lags, a value of -3.17 constituted rejection at the p-value of
.10.
Source: Greene, 1997
With thanks to: Don Watson (as of 1999/03/31: drw@matilda.vm.edu.au)
Contexts: econometrics; time series
Austrian economics:
A school of thought which "takes as its central concern the problem of
human coordination, through which order emerges not from a dictator, but from
the decisions and judgments of numerous individuals in a world of highly
dispersed and sometimes only tacit knowledge." -- Cass R. Sunstein,
"The Road from Serfdom" The New Republic Oct 20, 1997, p
42.
Well-known authors along this line include Carl Menger, Ludwig von Mises, and
Friedrich von Hayek. See
Deborah L. Walker's essay for a clear account.
Source:
Walker's essay at
http://econlib.org/library/Enc/AustrianEconomics.html
Contexts: political
autarky:
The state of an individual who does not trade with anyone.
Contexts: modelling
autocorrelation:
the jth autocorrelation of a covariance-stationary process is defined as its
jth autocovariance divided by its variance.
In a sample, the kth autocorrelation is the OLS estimate that results from the
regression of the data on the kth lags of the data.
Below is Gauss code to calculate autocorrelations from a sample.
/* This function calculates autocorrelation estimates for lag k */
proc autocor(series, k);
local rowz,y,x,rho;
rowz = rows(series);
y = series[k+1:rowz];
x = series[1:rowz-k];
rho = inv(x'x)*x'y; /* compute autocorrelation by OLS */
retp(rho);
endp;
Contexts: econometrics; time series
autocovariance:
The jth autocovariance of a stochastic process y_t is the covariance
between its time t value and the value at time t-j. It is denoted g below,
and E[] means expectation, or mean:
g_{jt} = E[(y_t - Ey)(y_{t-j} - Ey)]
In that equation the process is assumed to be covariance stationary.
If there is a trend, then the second Ey should be E(y_{t-j}).
Contexts: econometrics; time series
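A sample analog of the definition can be written in a few lines (one common convention; authors differ on whether to divide by the number of pairs or by the full sample size):

```python
def autocovariance(y, j):
    """Sample jth autocovariance: average of (y_t - ybar)(y_{t-j} - ybar)
    over the t for which both terms exist, with ybar the sample mean."""
    ybar = sum(y) / len(y)
    pairs = zip(y[j:], y[:len(y) - j])
    return sum((a - ybar) * (b - ybar) for a, b in pairs) / (len(y) - j)
```

At j = 0 this is just the sample variance.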
autocovariance matrix:
Defined for a vector random process, denoted y_t here. The ij'th element
of the kth autocovariance matrix is cov(y_{i,t}, y_{j,t-k}).
Contexts: econometrics; time series
autoregressive process:
See AR.
Contexts: econometrics; statistics; time series
avar:
abbreviation or symbol for the operation of taking the asymptotic
variance of an expression, thus: avar().
Contexts: econometrics
average treatment effect:
In a treatment model where some observations receive the treatment (for
example, a training program) and some do not, the average treatment effect is
the difference between the conditional expectation of the dependent variable
with the treatment effect and the conditional expectation of the dependent
variable without the treatment effect. This is the average benefit from the
treatment.
Often this term is abbreviated ATE.
Source: source: John Pencavel, lecture notes for Econ 247, Stanford
University, circa 2000-2005.
Contexts: estimation; labor
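The sample analog is just a difference in group means (a sketch; this estimates the ATE only under assumptions such as random assignment of the treatment):

```python
def average_treatment_effect(treated, untreated):
    """Difference between the mean outcome of the treated group and the
    mean outcome of the untreated group."""
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

# Hypothetical outcomes after a training program:
ate = average_treatment_effect([3.0, 5.0], [1.0, 3.0])
```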
b:
b(n,q) is notation for a binomial
distribution with parameters n and q, where n
is the number of draws and q is the probability
that each is a one; the value of X~b(n,q) is a
count of the number of ones drawn.
Contexts: statistics
B1:
B1 denotes the Borel sigma-algebra of the real line. It
will contain every open interval by definition, which implies that it contains
every closed interval and every countable union of open, half-open, and closed
intervals.
What won't it contain? In practice, only obscure sets. Here's an example:
Define the equivalence relation ~ on the real line such that x~y (read: x
is equivalent to y) if x-y is a rational number. Now consider a set
containing exactly one number from [0,1] out of each equivalence class, so
that no two of its members are equivalent. Such a set has uncountably many
members, and it is not in B1.
Contexts: math; measure theory; real analysis
balance of payments:
A country's balance of payments is the quantity of its own currency flowing
out of the country (for purchases, for example, but also for gifts and
intrafirm transfers) minus the amount flowing in.
[Ed: this next part is partly speculation; feel free to correct it.] For some
purposes this term refers to a stock value and for others a flow value. It is
well defined over a period in the sense that it has changed from time A to
time B.
balanced growth:
A macro model exhibits balanced growth if consumption, investment, and
capital grow at a constant rate while hours of work per time period stay
constant.
Source: Cooley, 1995, p 16
Contexts: macro; modelling
Banach space:
Any complete normed vector space is a Banach space.
Contexts: real analysis
bandwidth:
In kernel estimation, a scalar argument to the kernel function that determines
what range of the nearby data points will be heavily weighted
in making an estimate. The choice of bandwidth represents a tradeoff between
bias (which is intrinsic to a kernel estimator, and which increases with
bandwidth), and variance of the estimates from the data (which decreases with
bandwidth).
Cross-validation is one way to choose the bandwidth as a function of
the data.
Has a variety of similar definitions in spectral analysis. Generally, a
bandwidth is some way of defining the range of frequencies that will be
included by the estimation process. In some estimations it is an argument to
the estimation process.
Source: Hardle, 1990, especially p 148
Contexts: econometrics; statistics
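The role of the bandwidth can be seen in a minimal kernel-weighted average (a sketch using a Gaussian kernel; the tradeoff described above comes from how fast the weights die off with distance):

```python
import math

def kernel_estimate(xs, ys, x0, bandwidth):
    """Kernel-weighted average of y near x0: each observation gets weight
    exp(-0.5 * ((x - x0)/bandwidth)^2), so a larger bandwidth lets more
    distant points influence the estimate."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

With a very large bandwidth every point gets nearly equal weight (more bias, less variance); with a tiny one only the nearest points matter.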
bank note:
In periods of free banking, such as most states in the U.S. from
1839-1863, banks could issue their own money, called bank notes. A bank note
was a risky, perpetual debt claim on a bank which paid no interest, and could
be redeemed on demand at the original bank, usually in gold. There was a risk
that the bank would not be able or willing to redeem it.
Contexts: money; history
barter economy:
An economy that does not have a medium of exchange, or money, and where
trade occurs instead by exchanging useful goods for useful goods.
Source: McCallum, 1983
Contexts: money; models
base point pricing:
The practice of firms setting prices as if their transportation costs to all
locations were the same, even if all the vendors are distant from one another
and have substantially different costs of transportation to each location.
One might interpret this as a form of monitored collusion between the vendor
firms.
Contexts: IO
basin of attraction:
the region of states, in a dynamical system, around a particular stable steady
state, that lead to trajectories going to the stable steady state. (E.g. the
region inside the event horizon around a black hole.)
Source: James Montgomery, social networks paper
Contexts: macro; models
basis point:
One-hundredth of a percentage point. Used in the context of interest
rates.
Contexts: finance; business
basket:
A known set of fixed quantities of known goods, needed for defining a price
index.
Contexts: macro; price indexes
Bayesian analysis:
"In Bayesian analysis all quantities, including the parameters, are
random variables. Thus, a model is said to be identified in probability if
the posterior distribution for [the parameter to be estimated] is
proper."
Source: Hsiao, The New Palgrave: Econometrics, p 98
Contexts: econometrics; statistics
Bellman equation:
Any value or flow value equation. For a discrete problem it can generally be
of the form:
v(k) = max over k' of { u(k,k') + b*v(k') }
where:
u() is the one-period return function (e.g., a utility function) and
v() is the value function and
k is the current state and
k' is the state to be chosen and
b is a scalar real parameter, the discount factor,
generally slightly less than one.
Contexts: dynamic optimization; macro; models
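For a finite state space, the equation can be solved by value iteration: start from any guess and repeatedly apply the right-hand side (a sketch; the two-state payoffs below are made up):

```python
def value_iteration(states, u, b, tol=1e-10):
    """Iterate v(k) <- max over k' of { u(k, k') + b * v(k') } until the
    values stop changing. With 0 < b < 1 the update is a contraction, so
    it converges to the unique fixed point of the Bellman equation."""
    v = {k: 0.0 for k in states}
    while True:
        v_new = {k: max(u(k, kp) + b * v[kp] for kp in states)
                 for k in states}
        if max(abs(v_new[k] - v[k]) for k in states) < tol:
            return v_new
        v = v_new

# Hypothetical one-period returns: moving to state 1 pays 1, else 0.
states = [0, 1]
v = value_iteration(states, lambda k, kp: 1.0 if kp == 1 else 0.0, b=0.5)
```

Here every state's best choice is k' = 1, so v(k) = 1/(1 - b) = 2 in both states.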
Bertrand competition:
A bidding war in which the bidders end up at a zero-profit price. See
Bertrand game.
Contexts: game theory; IO
Bertrand duopoly:
The two firms producing in a market modeled as a Bertrand game.
Contexts: IO
Bertrand game:
Model of a bidding war between firms each of which can offer to sell a certain
good (say, widgets), but no other firms can. Each firm may choose a price to
sell widgets at, and must then supply as many as are demanded. Consumers are
assumed to buy the cheaper one, or to purchase half from each if the prices
are the same.
Best for the firms (both collectively and individually) is to cooperate,
charge monopoly price, and split the profits. Each firm could seize the whole
market by lowering price slightly, however, and the noncooperative Nash
equilibrium outcome of a Bertrand game is that both charge a zero-profit
price.
Contexts: game theory; IO
Beveridge curve:
The graph of the inverse relation of unemployment to job vacancies.
Contexts: labor; macro
BHHH:
A numerical optimization method from Berndt, Hall, Hall, and Hausman (1974).
Used in Gauss, for example.
The following discussion of BHHH was posted to the newsgroup sci.econ.research
by Paul L. Schumann, Ph.D., Professor of Management at
Minnesota State University, Mankato (formerly Mankato State University).
It is included here without any explicit permission whatsoever.
BHHH usually refers to the procedure explained in Berndt, E., Hall, B.,
Hall, R., & Hausman, J. (1974), "Estimation and Inference in Nonlinear
Structural Models," Annals of Economic and Social Measurement, 3/4: 653-665.
BHHH provides a method of estimating the asymptotic covariance matrix of a
Maximum Likelihood Estimator. In particular, the covariance matrix for a MLE
depends on the second derivatives of the log-likelihood function. However,
the second derivatives tend to be complicated nonlinear functions. BHHH
estimates the asymptotic covariance matrix using first derivatives instead
of analytic second derivatives. Thus, BHHH is usually easier to compute than
other methods.
In addition to the original BHHH article referenced above, BHHH is also
discussed in Greene, W.H., Econometric Analysis, 3rd Edition, Prentice-Hall,
1997. Greene's econometric software program, LIMDEP, uses BHHH for some of
the estimation routines.
Someone (perhaps BHHH themselves?) wrote a Fortran subroutine in the 1970's
to do BHHH. I do not have a copy of this subroutine at the present time. You
may want to check out Greene's econometric software, LIMDEP, to see if it
will do what you require, rather than writing your own program to use an
existing BHHH subroutine. The Web address for LIMDEP is:
http://www.limdep.com/index.htm
Source: Gauss Applications: Maximum Likelihood;
Berndt, Hall, Hall, and Hausman (1974)
With thanks to: Paul L. Schumann, Ph.D., Professor of Management, Minnesota
State University, Mankato
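To illustrate the idea in the quoted explanation -- first derivatives in place of second derivatives -- here is a rough Python sketch of the BHHH (outer-product-of-gradients) covariance estimate. The score vectors are made up for illustration; this is not the original Fortran routine.

```python
def bhhh_covariance(scores):
    # BHHH estimate of the asymptotic covariance matrix of an MLE:
    # the inverse of the sum of outer products of per-observation
    # score (first-derivative) vectors. scores: list of k-vectors.
    k = len(scores[0])
    H = [[sum(g[i] * g[j] for g in scores) for j in range(k)]
         for i in range(k)]
    # Invert the k x k matrix by Gauss-Jordan elimination
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(k)]
           for i, row in enumerate(H)]
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(k):
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

# Two-parameter example with made-up score vectors
scores = [[0.5, -0.2], [-0.3, 0.4], [0.1, 0.1], [-0.3, -0.3]]
V = bhhh_covariance(scores)
```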
Contexts: numerical methods; estimation
BHPS:
British Household Panel Survey.
A British government database going back to 1990.
Web page:
http://www.iser.essex.ac.uk/bhps/index.php
Contexts: data; labor
bias:
the difference between the expected value of the estimator of a parameter and
the parameter itself.
Contexts: econometrics; estimation
bidding function:
In an auction analysis, a bidding function (often denoted b()) is a function
whose value is the bid that a particular player should make. Often it is a
function of the player's value, v, of the good being auctioned. Thus the
common notation b(v).
Contexts: micro theory; IO
bill of exchange:
From the late Middle Ages. A contract entitling an exporter to receive
immediate payment in the local currency for goods that would be shipped
elsewhere. Time would elapse between payment in one currency and repayment in
another, so the interest rate would also be brought into the
transaction.
Source: Glasner, p. 23
Contexts: history; money
billon:
A mixture of silver and copper, from which small coins were made in medieval
Europe. Larger coins were made of silver or gold.
Source: Thomas J Sargent and Francois R Velde, 1997, "The Evolution of Small
Change", unpublished paper, p. 6
bimetallism:
A commodity money regime in which there is concurrent circulation of coins
made from each of two metals and a fixed exchange rate between them.
Historically the metals have almost always been gold and silver. Bimetallism
was tried many times with varying success but since about 1873 the practice
has been generally abandoned.
Source: Velde, Francois R., and Warren E. Weber. 1998. "A Model of
Bimetallism." Working paper, Federal Reserve Bank of Chicago, Federal
Reserve Bank of Minneapolis, and University of Minnesota. page 2.
Contexts: money
BJE:
Bell Journal of Economics, the previous name of the RAND Journal of
Economics or RJE.
Contexts: journals
Black-Scholes equation:
An equation for option securities prices on the basis of an assumed
stochastic process for stock prices.
The Black-Scholes algorithm can produce an estimate of the value of a call on a
stock, using as input:
-- an estimate of the risk-free interest rate now and in the near future
-- current price of the stock
-- exercise price of the option (strike price)
-- expiration date of the option
-- an estimate of the volatility of the stock's price
From the Black-Scholes equation one can derive the price of an option.
A simplified derivation is also possible under an assumption of
risk-neutrality.
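As an illustration of the inputs listed above, here is a Python sketch of the standard Black-Scholes call price formula; the parameter values in the usage line are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cdf, via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    # S: stock price, K: strike, r: risk-free rate,
    # sigma: volatility, T: years to expiration
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: 5% rate, 20% volatility, one year to expiry
price = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```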
Contexts: finance; business
BLS:
Abbreviation for the U.S. government's Bureau of
Labor Statistics, in the Labor Department.
Contexts: data
Bonferroni criterion:
Suppose a certain treatment of a patient has no effect. If one runs a test of
statistical significance on enough randomly selected subsets of the patient
base, one would find some subsets in which statistically significant
differences were apparently distinguished by the treatment.
The Bonferroni criterion is a redefinition of the statistical significance
criterion for tests of many subgroups: e.g., if there are five subgroups
and one of them shows an effect of the treatment at the .01 significance
level, the overall finding is significant at the .05 level.
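The arithmetic of that example can be sketched directly; the adjustment simply divides the overall significance level evenly across the tests:

```python
def bonferroni_alpha(overall_alpha, n_tests):
    # Per-test significance level that keeps the family-wise
    # error rate at or below overall_alpha across n_tests tests
    return overall_alpha / n_tests

# Five subgroups: each must clear .01 for an overall .05 finding
per_test = bonferroni_alpha(0.05, 5)
```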
This is discussed in more detail (and probably more correctly) in Bland and
Altman (1995) in the statistics notes of the British Medical Journal.
Source: British Medical Journal, statistics notes by Bland and Altman.
Contexts: statistics; epidemiology
bootstrapping:
The activity of applying estimators to each of many subsamples of a data
sample, in the hope that the distribution of the estimator applied to these
subsamples is similar to the distribution of the estimator when applied to the
distribution that generated the sample.
It is a method that gives a sense of the sampling variability of an estimator.
"After the set of coefficients b0 is computed, M randomly drawn samples
of T observations are drawn from the original data set with
replacement. T may be less than or equal to n, the sample size. With
each such sample the ... estimator is recomputed." -- Greene, p
658-9.
The properties of this distribution of estimates of b0 can then be
characterized, e.g. its variance. If the estimates are highly variable, the
investigator knows not to think of the estimate of b0 as precise.
Bootstrapping could also be used to estimate by simulation, or empirically,
the variance of an estimation procedure for which no algebraic expression for
the variance exists.
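A minimal Python sketch of the procedure Greene describes, using the sample mean as the estimator; the data values are made up:

```python
import random
import statistics

def bootstrap_se(sample, estimator, n_draws=1000, seed=0):
    # Apply the estimator to n_draws resamples drawn with
    # replacement; the spread of the resulting estimates
    # approximates the estimator's sampling variability
    rng = random.Random(seed)
    T = len(sample)
    estimates = [estimator(rng.choices(sample, k=T))
                 for _ in range(n_draws)]
    return statistics.stdev(estimates)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7]
se_mean = bootstrap_se(data, statistics.mean)
```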
Source: Greene, 1993, p 658-9
Contexts: econometrics; estimation; statistics
Borel set:
Any element of a Borel sigma-algebra.
Contexts: math; measure theory; real analysis
Borel sigma-algebra:
The Borel sigma-algebra of a set S is the smallest sigma-algebra of S
that contains all of the open balls in S. Any element of a Borel
sigma-algebra is a Borel set.
Example: The set B1 is the Borel sigma-algebra of the real line,
and thus contains every open interval.
Example: Consider a filled circle in the unit square. It can be constructed
as a countable union of open rectangles (a sequence of such rectangles can be
defined that covers every point in the circle but no point outside of it).
Therefore it is in the smallest sigma-algebra containing the open subsets of
the unit square.
Contexts: math; measure theory; real analysis
bounded rationality:
Models of bounded rationality are defined in a recent book by Ariel Rubinstein
as those in which some aspect of the process of choice is explicitly
modeled.
Source: Rubinstein, Ariel. 1998. Modeling Bounded Rationality.
Contexts: game theory; micro theory
Box-Cox transformation:
The Box-Cox transformation, below, can be applied to a regressor, a
combination of regressors, and/or to the dependent variable in a regression.
The objective of doing so is usually to make the residuals of the regression
more homoskedastic and closer to a normal distribution:
y(l) = (y^l - 1)/l   for l not equal to zero
y(l) = log(y)        for l = 0
Box and Cox (1964) developed the transformation.
Estimation of any Box-Cox parameters is by maximum likelihood.
Box and Cox (1964) offered an example in which the data had the form of
survival times but the underlying biological structure was of hazard rates,
and the transformation identified this.
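The transformation itself is easy to sketch in Python; note how the case l near zero approaches the log transform, as in the definition above:

```python
from math import log

def box_cox(y, lam):
    # Box-Cox transform of a positive observation y
    if lam == 0:
        return log(y)
    return (y ** lam - 1.0) / lam

shifted = box_cox(5.0, 1.0)    # lam=1 merely shifts the data
near_log = box_cox(5.0, 1e-8)  # lam near 0 approaches log(5)
```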
Source: Box and Cox, 1964; Stata 7 manual entry for
boxcox;
Davidson and Mackinnon, 1993, pp 481-488.
Contexts: econometrics; statistics
Box-Jenkins:
A "methodology for identifying, estimating, and forecasting"
ARMA models. (Enders, 1996, p 23). The reference
in the name is to Box and Jenkins, 1976.
Source: Enders, 1996, p 23
Contexts: time series; econometrics
Box-Pierce statistic:
Defined on a time series sample for each natural number k as T (the number of
observations) times the sum of the squares of the first k sample
autocorrelations. Denoting the sth sample autocorrelation by r_s:
BP(k) = T * (r_1^2 + r_2^2 + ... + r_k^2)
Used to test whether a time series exhibits significant serial correlation.
Below is Gauss code with a procedure that calculates the Box-Pierce statistic
for a set of residuals.
/* A series of residuals eps_hat[] is generated from a regression, e.g.: */
eps_hat = y - X*betaols;
/* Then the Box-Pierce statistic for each k can be calculated this way: */
print "Box-Pierce statistic for k=1 is" BP(eps_hat,1);
print "Box-Pierce statistic for k=2 is" BP(eps_hat,2);
print "Box-Pierce statistic for k=3 is" BP(eps_hat,3);
proc BP(series, k);
local beep, rho;
beep = 0;
do until k < 1;
rho = autocor(series, k);
beep = beep + rho * rho;
k = k - 1;
endo;
beep = beep * rows(series); /* BP = T* (the sum) */
retp(beep);
endp;
/* This function calculates autocorrelation estimates for lag k */
proc autocor(series, k);
local rowz,y,x,rho;
rowz = rows(series);
y = series[k+1:rowz];
x = series[1:rowz-k];
rho = inv(x'x)*x'y; /* compute autocorrelation by OLS */
retp(rho);
endp;
Contexts: finance, time series
BPEA:
An abbreviation for the Brookings Papers on Economic Activity.
Brent method:
An algorithm for choosing the step lengths when numerically calculating
maximum likelihood estimates.
Source: Gauss Applications: Maximum Likelihood
;
Brent, 1972
Contexts: numerical methods; estimation
Bretton Woods system:
The international monetary framework of fixed exchange rates after World War
II. Drawn up by the U.S. and Britain in 1944. Keynes was one of the
architects. The meetings occurred at Bretton Wood, New Hampshire, in the
U.S., in July 1944. The International Bank for Reconstruction and
Development, now called the World Bank, was planned at the meetings.
So was the International Monetary Fund or IMF.
The system ended on August 15, 1971, when President Richard Nixon ended
trading of gold at the fixed price of $35/ounce. At that point for the first
time in history, formal links between the major world currencies and real
commodities were severed.
Source: Glasner, p 157-160;
The International Forum on Globalization. Alternatives to Economic
Globalization. 2002. p.18
Contexts: money; history
Breusch-Pagan statistic:
A diagnostic test of a regression. It is a statistic for testing whether
dependent variable y is heteroskedastic as a function of regressors X.
If it is, that suggests use of GLS or SUR estimation in place of
OLS. The test statistic is always nonnegative. Large values of test
statistic reject the hypothesis that y is homoskedastic in X. The
meaning of 'large' varies with the number of variables in X.
Quoting almost directly from the Stata manual: The Breusch and Pagan
(1980) chi-squared statistic -- a Lagrange multiplier statistic -- is given
by
lambda = T * SUM(m=2..M) SUM(n=1..m-1) [ r_mn^2 ]
where r_mn is the estimated correlation between the residuals of equations m
and n, M is the number of equations, and T is the number of observations. It
has a chi-squared distribution with M(M-1)/2 degrees of freedom.
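The arithmetic of the statistic (though not the surrounding SUR estimation) can be sketched in Python; the residual series below are made up:

```python
def bp_lm_statistic(residuals):
    # lambda = T * sum over pairs of equations of the squared
    # correlation between their residual series.
    # residuals: list of M equal-length lists of residuals.
    def corr(a, b):
        T = len(a)
        ma, mb = sum(a) / T, sum(b) / T
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5
    M = len(residuals)
    T = len(residuals[0])
    return T * sum(corr(residuals[m], residuals[n]) ** 2
                   for m in range(1, M) for n in range(m))

# Two perfectly correlated residual series: the statistic equals T
e1 = [1.0, -2.0, 0.5, 0.5]
lam = bp_lm_statistic([e1, [2.0 * x for x in e1]])
```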
Source: Breusch, T. and A. Pagan. 1980. "The LM test and its
applications to model specification in econometrics." Review of
Economic Studies. 47: 239-254.
StataCorp. 1999. Stata statistical software release 6.0 manual, vol
4., page 14.
Contexts: estimation; econometrics
bubble:
A substantial movement in market price away from a price determined by
fundamental value. In practice, "bubble" always refers to a
situation where the market price is higher than the conjectured fundamentally
supported price. The idea of a fundamental value requires some model or
outside knowledge of what the security (or other good) is worth.
Bubbles are often described as speculative and it is conjectured that bubbles
could be risky ventures for speculators who earn a fair rate of return on
them. [ed: I believe these are "rational" bubbles.]
There exist statistical models of bubbles. For example, stochastic
collapsing bubbles are cited to Blanchard and Watson (1982) -- in this form,
"the bubble continues with a certain conditional probability and
collapses otherwise."
Source: Bollerslev and Hodrick (1992), p 15;
For more discussion of the definition and a history of examples, see:
Garber, Peter M. 2000. Famous First Bubbles. MIT Press.
especially its introduction. And academic articles by Garber too.
Contexts: finance
budget:
A budget is a description of a financial plan. It is a list of estimates of
revenues to and expenditures by an agent for a stated period of time.
Normally a budget describes a period in the future not the past.
budget line:
A consumer's budget line characterizes on a graph the maximum amounts of goods
that the consumer can afford. In a two good case, we can think of quantities
of good X on the horizontal axis and quantities of good Y on the vertical
axis. The term is often used when there are many goods, and without reference
to any actual graph.
Contexts: micro theory; phrases
budget set:
The set of bundles of goods an agent can afford. This set is a function of
the prices of goods and the agent's endowment.
Assuming the agent cannot have a negative quantity of any good, the budget set
can be characterized this way. Let e be a vector representing the
quantities of the agent's endowment of each possible good, and p be a
vector of prices for those goods. Let B(p,e) be the budget set.
Let x be an element of R_+^L, the space of nonnegative real vectors of
dimension L, where L is the number of possible goods. Then:
B(p,e) = {x in R_+^L : p.x <= p.e}
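A small Python illustration of membership in the budget set; the prices and endowment are arbitrary:

```python
def in_budget_set(x, p, e):
    # True if the nonnegative bundle x costs no more than
    # the value of the endowment e at prices p
    wealth = sum(pi * ei for pi, ei in zip(p, e))
    cost = sum(pi * xi for pi, xi in zip(p, x))
    return all(xi >= 0 for xi in x) and cost <= wealth

p = [2.0, 1.0]   # prices
e = [3.0, 4.0]   # endowment, worth 10 at these prices
affordable = in_budget_set([1.0, 8.0], p, e)   # costs exactly 10
too_costly = in_budget_set([4.0, 3.0], p, e)   # costs 11
```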
Contexts: general equilibrium; models
bureaucracy:
A form of organization in which officeholders have defined positions and
(usually) titles. Formal rules specify the duties of the officeholders.
Personalistic distinctions are usually discouraged by the rules.
Burr distribution:
Has density function (pdf):
f(x) = ckx^(c-1)(1+x^c)^(-(k+1)) for constants
c>0, k>0, and for x>0.
Has distribution function (cdf):
F(x) = 1 - (1+x^c)^(-k).
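One can check numerically that the pdf is the derivative of the cdf; a Python sketch with arbitrary parameter values:

```python
def burr_pdf(x, c, k):
    return c * k * x ** (c - 1) * (1 + x ** c) ** (-(k + 1))

def burr_cdf(x, c, k):
    return 1 - (1 + x ** c) ** (-k)

# The pdf should match the numerical derivative of the cdf
c, k, x, h = 2.0, 3.0, 1.5, 1e-6
slope = (burr_cdf(x + h, c, k) - burr_cdf(x - h, c, k)) / (2 * h)
pdf_val = burr_pdf(x, c, k)
```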
Source: Maddala, 1983/96, p 10-11; Burr,
1942
Contexts: econometrics
business:
Relevant terms: basis point,
Black-Scholes equation,
call option,
conglomerate,
coupon strip,
EBITDA,
ex dividend date,
NASDAQ,
NYSE,
option,
principal strip,
pro forma,
put option,
reinsurance.
Contexts: fields
business cycle frequency:
Three to five years. Called the business cycle frequency by Burns and
Mitchell (1946), and this became standard language.
Source: Cooley, 1995, p 28
Contexts: macro
BVAR:
Bayesian VAR (Vector Autoregression)
Contexts: time series; econometrics; estimation
CAGR:
Cumulative Average Growth Rate
calculus of voting:
A model of political voting behavior in which a citizen chooses to vote if the
costs of doing so are outweighed by the strength of the citizen's preference
for one candidate weighted by the anticipated probability that the citizen's
vote will be decisive in the election.
Source: Downs, 1957; Riker and Ordeshook,
1968
Contexts: political science
calibration:
NOT SURE WHICH OF THESE (IF EITHER) IS RIGHT:
1. The estimation of some parameters of a model, under the assumption
that the model is correct, as a middle step in the study of other parameters.
Use of this word suggests that the investigator wishes to give those other
parameters of the model a 'fair chance' to describe the data, not to get stuck
in a side discussion about whether the calibrated parameters are ideally
modeled or estimated.
2. Taking parameters that have been estimated for a similar model into one's
own model, solving one's own model numerically, and simulating. Attributed to
Edward Prescott.
Contexts: econometrics; estimation
call option:
A call option conveys the right to buy a specified quantity of an underlying
security.
Contexts: finance; business
capital:
Something owned which provides ongoing services. In the national
accounts, or to firms, capital is made up of durable investment goods,
normally summed in units of money. Broadly: land plus physical structures
plus equipment. The idea is used in models and in the national
accounts.
See also human capital and social capital.
Contexts: macro; IO
capital consumption:
In national accounts, this is the amount by which gross investment exceeds net
investment. It is the same as replacement investment.
-- Oulton (2002, p. 13)
Source: Oulton, Nicholas. 2002. "Productivity versus welfare: or GDP versus
Weitzman's NDP." Bank of England. On the web.
Contexts: macro; measurement; government
capital deepening:
Increase in capital intensity, normally in a macro context where it is
measured by something analogous to the capital stock available per labor hour
spent. In a micro context, it could mean the amount of capital available for
a worker to use, but this use is rare.
Capital deepening is a macroeconomic concept, of a faster-growing
magnitude of capital in production than in labor. Industrialization
involved capital deepening - that is, more and more expensive equipment
with a lesser corresponding rise in wage expenses.
Capital deepening has been measured by a rising ratio of some kind of capital
in production, or services provided by capital to production, to total output.
Capital may include land, structures, or equipment, or the relevant capital
may be a more narrowly defined input (e.g., computer equipment).
Source: Oulton, Nicholas. 2002. "Productivity versus welfare: or GDP versus
Weitzman's NDP." Bank of England. page 31. On the Web.
Margo, Atack, and others on US national growth 1850-1880. ~2003 NBER
paper.
Contexts: macro
capital intensity:
Amount of capital per unit of labor input.
capital ratio:
A measure of a bank's capital strength used by U.S. regulatory
agencies.
Contexts: money; banking
capital structure:
The capital structure of a firm is broadly made up of its amounts of
equity and debt.
Contexts: finance
capital-augmenting:
One of the ways in which an effectiveness variable could be included in a
production function in a Solow model. If effectiveness A is multiplied
by capital K but not by labor L, then we say the effectiveness variable is
capital-augmenting.
For example, in the model of output Y where Y=(AK)aL1-a
the effectiveness variable A is capital-augmenting but in the model
Y=AKaL1-a it is not.
Another example would be a capital utilization variable as measured say by
electricity usage. (E.g., as in Eichenbaum).
-----------------
An example: in the context of a railroad, automatic railroad signaling,
track-switching, and car-coupling devices are capital-augmenting.
From Moses Abramovitz and Paul A. David, 1996. "Convergence and Deferred
Catch-up: productivity leadership and the waning of American exceptionalism."
In Mosaic of Economic Growth, edited by Ralph Landau, Timothy Taylor,
and Gavin Wright.
Source: Romer, 1996, p 7
Contexts: macro
capitation:
The system of payment for each customer served, rather than by service
performed. Both are used in various ways in U.S. medical care.
Source: Weisbrod's class circa 5/21/97
Contexts: public
CAPM:
Capital Asset Pricing Model
Contexts: finance; models
CAR:
stands for Cumulative Average Return.
A portfolio's abnormal return (AR) at each time t is the average across the N
securities: AR_t = (1/N) * SUM(i=1..N) ar_it, where ar_it is the abnormal
return at time t of security i.
Over a window from t=1 to T, the CAR is the sum of all the ARs.
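A short Python sketch of the computation; the abnormal returns below are invented for illustration:

```python
def cumulative_average_return(abnormal):
    # abnormal: one list per period, each holding the abnormal
    # returns of the N securities in that period
    ars = [sum(period) / len(period) for period in abnormal]
    return sum(ars)  # CAR over the window

# Three periods, two securities
car = cumulative_average_return([[0.02, 0.00],
                                 [0.01, 0.03],
                                 [-0.01, 0.01]])
```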
Contexts: finance
CARA utility:
A class of utility functions. Also called exponential utility. Has the form,
for some positive constant a:
u(c) = -(1/a)e^(-ac)
"Under this specification the elasticity of marginal utility is equal to
-ac, and the instantaneous elasticity of substitution is equal to
1/ac."
The coefficient of absolute risk aversion is a; thus the abbreviation CARA for
Constant Absolute Risk Aversion. "Constant absolute risk aversion is
usually thought of as a less plausible description of risk aversion than
constant relative risk aversion" (that's the CRRA, which see), but
it can be more analytically convenient.
Source: Blanchard and Fischer, p. 44
Contexts: models
CARs:
cumulative average adjusted returns
Contexts: finance
cash-in-advance constraint:
A modeling idea. In a basic Arrow-Debreu general equilibrium there is no need
for money because exchanges are automatic, through a Walrasian
auctioneer. To study monetary phenomena, a class of models was made in
which money was required to make purchases of other goods. In such a model
the budget constraint is written so that the agent must have enough cash on
hand to make any consumption purchase. Using this mechanism money can have a
positive price in equilibrium and monetary effects can be seen in such models.
Contrast money-in-the-utility function for an alternative modeling
approach.
Source: Ostroy and Starr, 1990, pp 6-7
Contexts: money; models
catch-up:
"'Catch-up' refers to the long-run process by which productivity laggards
close the proportional gaps that separate them from the productivity leader
.... 'Convergence,' in our usage, refers to a reduction of a measure of
dispersion in the relative productivity levels of the array of countries under
examination." Like Barro and Sala-i-Martin (92)'s "sigma-convergence", a
narrowing of the dispersion of country productivity levels over time.
Source: From Moses Abramovitz and Paul A. David, 1996. "Convergence and
Deferred Catch-up: productivity leadership and the waning of American
exceptionalism." In Mosaic of Economic Growth, edited by Ralph Landau,
Timothy Taylor, and Gavin Wright.
Contexts: international; macro
Cauchy distribution:
Has thicker tails than a normal distribution.
density function (pdf): f(x) = 1/[pi*(1+x^2)].
distribution function (cdf): F(x) = .5 + (arctan x)/pi.
Cauchy criterion:
A sequence satisfies the Cauchy criterion iff for each positive real epsilon
there exists a natural number N such that the distance between any two
elements of the sequence past the Nth element is less than epsilon.
'Distance' must be defined in context by the user of the term.
One sometimes hears the construction: 'The sequence is Cauchy' if the sequence
satisfies the definition.
Source: Stokey and Lucas, 1989
Contexts: real analysis
CCAPM:
Stands for Consumption-based Capital Asset Pricing Model.
A theory of asset prices. Formulated in Lucas, 1978, and Breeden,
1979.
Source: Lucas, 1978; Breeden, 1979
Contexts: finance; macro
CDE:
Stands for Corporate Data Exchange, an organization which has data on the
shareholdings of large U.S. companies.
Source: Weisbach, 1988, p 448
Contexts: finance
cdf:
cumulative distribution function. This function describes a statistical
distribution. It has the value, at each possible outcome, of the probability
of receiving that outcome or a lower one. A cdf is usually denoted in capital
letters.
Consider for example some F(x), with x a real number is the probability of
receiving a draw less than or equal to x. A particular form of F(x) will
describe the normal distribution, or any other unidimensional
distribution.
Contexts: econometrics; statistics
CDFC:
Stands for Concavity of distribution function condition.
Contexts: micro theory
censored dependent variable:
A dependent variable in a model is censored if observations of it cannot be
seen when it takes on values in some range. That is, the independent
variables are observed for such observations but the dependent variable is
not.
A natural example is that if we have data on consumers and prices paid for
cars, if a consumer's willingness-to-pay for a car is negative, we will see
observations with consumer information but no car price, no matter how low car
prices go in the data. Price observations are then censored at zero.
Contrast truncated dependent variables.
Contexts: econometrics; estimation
central bank:
A government bank; a bank for banks.
Source: Mark Witte, (mwitte@nwu.edu).
Contexts: money; macro
certainty equivalence principle:
Imagine that a stochastic objective function is a function only of output and
output-squared. Then the solution to the optimization problem of choosing
output will have the special characteristic that only the conditional means of
the future forcing variables appear in the first order
conditions. (By conditional means is meant the set of means for each
state of the world.) Then the solution has the "certainty
equivalence" property. "That is, the problem can be separated into
two stages: first, get minimum mean squared error forecasts of the exogenous
[variables], which are the conditional expectations...; second, at time t,
solve the nonstochastic optimization problem," using the mean in place of
the random variable. "This separation of forecasting from
optimization.... is computationally very convenient and explains why quadratic
objective functions are assumed in much applied work. For general [functions]
the certainty equivalence principle does not hold, so that the forecasting and
opt problems do not 'separate.'"
Source: Sargent, 1979, Ch 14, p 396
Contexts: macro; finance; models
certainty equivalent:
The amount of payoff (e.g. money or utility) that an agent would have to
receive to be indifferent between that payoff and a given gamble is called
that gamble's 'certainty equivalent'. For a risk averse agent (as most are
assumed to be) the certainty equivalent is less than the expected value of the
gamble because the agent prefers to reduce uncertainty.
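A small Python illustration with a square-root utility; the gamble and the utility function are chosen arbitrarily, and note the certainty equivalent comes out below the expected value, as the definition predicts for a risk averse agent:

```python
from math import sqrt

def certainty_equivalent(outcomes, probs, u, u_inv):
    # The sure payoff whose utility equals the gamble's
    # expected utility
    expected_u = sum(p * u(x) for x, p in zip(outcomes, probs))
    return u_inv(expected_u)

# Risk-averse agent, u(x) = sqrt(x), 50/50 gamble over 0 and 100
ce = certainty_equivalent([0.0, 100.0], [0.5, 0.5],
                          sqrt, lambda v: v ** 2)
ev = 0.5 * 0.0 + 0.5 * 100.0   # expected value of the gamble is 50
```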
Contexts: micro theory; finance
CES production function:
CES stands for constant elasticity of substitution. This is a function
describing production, usually at a macroeconomic level, with two inputs which
are usually capital and labor. As defined by Arrow, Chenery, Minhas, and
Solow, 1961 (p. 230), it is written this way:
V = (b*K^(-r) + a*L^(-r))^(-1/r)
where V = value-added (though y for output is more common),
K is a measure of capital input,
L is a measure of labor input,
and a, b, and r are constants. Normally a>0, b>0, and r>-1. For more details see the source
article.
In this function the elasticity of substitution between capital and
labor is constant for any value of K and L. It is (1+r)^(-1).
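A quick Python sketch of the function, illustrating its constant returns to scale (doubling both inputs doubles output); the parameter values are arbitrary:

```python
def ces_output(K, L, a, b, r):
    # V = (b*K^(-r) + a*L^(-r))^(-1/r), with a>0, b>0, r>-1, r != 0
    return (b * K ** (-r) + a * L ** (-r)) ** (-1.0 / r)

# Constant returns to scale: doubling both inputs doubles output
v1 = ces_output(K=4.0, L=9.0, a=0.6, b=0.4, r=0.5)
v2 = ces_output(K=8.0, L=18.0, a=0.6, b=0.4, r=0.5)
```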
Source: Defined and discussed in Arrow, Chenery, Minhas, and Solow,
1961.
Contexts: macro; models
CES technology:
Example, adapted from Caselli and Ventura:
For capital k, labor input n, and constant b < 0:
f(k,n) = (k^b + n^b)^(1/b)
Here the elasticity of substitution between capital and labor is less than
one, i.e. 1/(1-b) < 1.
Source: "A Representative Consumer Theory of Distribution" by Francesco
Caselli and Jaume Ventura, working paper dated April, 1996 presented at Summer
Macro Conference at Northwestern University circa July 28, 1996
Contexts: models
CES utility:
Stands for Constant Elasticity of Substitution, a kind of utility function. A
synonym for CRRA or isoelastic utility function. Often written this way,
presuming a constant g not equal to one:
u(c) = c^(1-g)/(1-g)
This limits to u(c) = ln(c) as g goes to one.
The elasticity of substitution between consumption at any two points in time
is constant, equal to 1/g. "The elasticity of marginal utility is equal
to" -g. g can also be said to be the coefficient of relative risk
aversion, defined as -u"(c)c/u'(c), which is why this function is also
called the CRRA (constant relative risk aversion) utility function.
Source: Blanchard and Fischer, p. 44
Contexts: macro; finance; models
ceteris paribus:
means "assuming all else is held constant". The author is
attempting to distinguish an effect of one kind of change from any
others.
Contexts: phrases
CEX:
Abbreviation for the U.S. government's
Consumer Expenditure Survey
Contexts: data
CFTC:
The U.S. government's Commodities and Futures Trading Commission.
CGE:
An occasional abbreviation for "computable general equilibrium"
models.
Contexts: models
chained:
Describes an index number that is frequently reweighted. An example is an
inflation index made up of prices weighted by the frequency with which they
are paid; frequent recomputation of the weights makes it a chained index.
Source: Hulten, 2000
Contexts: index numbers
chaotic:
A description of a dynamic system that is very sensitive to initial conditions
and may evolve in wildly different ways from slightly different initial
conditions.
Source: Devaney, 1992, p 1-2
Contexts: mathematics; dynamic optimization
characteristic equation:
polynomial whose roots are eigenvalues
Contexts: linear algebra
characteristic function:
Denoted here PSI(t) or PSI_X(t). It is defined for any random variable X with
a pdf f(x): PSI(t) is defined to be E[e^(itX)], which is the integral from
minus infinity to infinity of e^(itx)f(x) dx.
The log of the characteristic function is sometimes called the cumulant
generating function (cgf).
"Every distribution has a unique characteristic function; and to each
characteristic function there corresponds a unique distribution of
probability." -- Hogg and Craig, p 64
Source: Hogg and Craig, 1995, p 64
Contexts: econometrics; statistics
characteristic root:
Synonym for eigenvalue.
Contexts: linear algebra
chartalism:
or "state theory of money" -- 19th century monetary theory, based
more on the idea that legal restrictions or customs can or should maintain the
value of money, not intrinsic content of valuable metal.
Source: Thomas J Sargent and Francois R Velde, 1997, "The Evolution of Small
Change", unpublished paper, p. 27
chi-square distribution:
A continuous distribution, with natural number parameter r. Is the
distribution of sums of squares of r standard normal variables.
Mean is r, variance is 2r, the pdf and cdf are difficult to express
in html, and the moment-generating function (mgf) is (1-2t)^(-r/2).
From older definition in this same database:
If n random values z1, z2, ..., zn are drawn
from a standard normal distribution, squared, and summed, the resulting
statistic is said to have a chi-squared distribution with n degrees of
freedom:
z1^2 + z2^2 + ... + zn^2 ~ X^2(n)
This is a one-parameter family of distributions, and the parameter, n, is
conventionally labeled the degrees of freedom of the distribution.
-- quoted and paraphrased from Johnston
See also noncentral chi-squared distribution
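The construction above (sum of squares of standard normals) can be simulated directly; the sample size in this Python sketch is arbitrary:

```python
import random

def chi_square_draw(rng, r):
    # Sum of squares of r standard normal draws
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(r))

rng = random.Random(42)
r = 5
draws = [chi_square_draw(rng, r) for _ in range(20000)]
sample_mean = sum(draws) / len(draws)   # should be near r
```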
Source: Hogg and Craig; Johnston (p. 530 in older edition?)
Contexts: statistics
Chicago School:
Refers to a perspective on economics of the University of Chicago circa 1970.
Variously interpreted to imply:
1) A preference for models in which information is perfect, and an associated
search for empirical evidence that choices, not institutional limitations, are
what result in outcomes for people. (E.g., that committing crime is a career
choice; that smoking represents an informed tradeoff between health risk and
immediate gratification.)
2) That antitrust law is rarely necessary, because potential competition will
limit monopolist abuses.
Contexts: phrases
choke price:
The lowest price at which the quantity demanded is zero.
Cholesky decomposition:
Given a symmetric positive definite square matrix X, the Cholesky
decomposition of X is the factorization X=U'U, where U is the square root
matrix of X, and satisfies:
(1) U'U = X
(2) U is upper triangular (that is, it has all zeros below the diagonal)
Once U has been computed, one can calculate the inverse of X more easily,
because X-1 = U-1(U')-1, and the inverses of
U and U' are easier to compute.
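A pure-Python sketch of the factorization as defined above; it computes the usual lower-triangular factor L with X = L L' and returns its transpose U, so that U'U = X. The matrix in the example is arbitrary.

```python
def cholesky_upper(X):
    # Upper-triangular U with U'U = X, for a symmetric positive
    # definite matrix X given as a list of rows
    n = len(X)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (X[i][i] - s) ** 0.5
            else:
                L[i][j] = (X[i][j] - s) / L[j][j]
    return [[L[j][i] for j in range(n)] for i in range(n)]

X = [[4.0, 2.0], [2.0, 3.0]]
U = cholesky_upper(X)
# Reconstruct X as U'U to check the factorization
recon = [[sum(U[k][i] * U[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
```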
Source: Greene, 1993, p 36; Gauss help system, under
CHOL(), which finds U given X
Contexts: econometrics; linear algebra
Cholesky factorization:
Same as Cholesky decomposition.
Source: Greene, 1993, p 36
Contexts: econometrics
Chow test:
A particular test for structural change; an econometric test to determine
whether the coefficients in a regression model are the same in separate
subsamples. In reference to a paper of G.C. Chow (1960), "the standard F
test for the equality of two sets of coefficients in linear regression
models" is called a Chow test. See derivation and explanation in
Davidson and MacKinnon, p. 375-376. More info in Greene, 2nd edition, p
211-2.
Homoskedasticity of errors is assumed although this can be dubious since we
are open to the possibility that the parameter vector (b) has changed.
RSSR = the sum of squared residuals from a linear regression in which b1 and b2 are assumed to be the same
SSR1 = the sum of squared residuals from a linear regression of
sample 1
SSR2 = the sum of squared residuals from a linear regression of
sample 2
b has dimension k, and there are n observations in
total
Then the F statistic is:
((RSSR - SSR1 - SSR2)/k) / ((SSR1 + SSR2)/(n - 2k)).
That test statistic is the Chow test.
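Once the three sums of squared residuals are in hand the statistic is simple arithmetic; a Python sketch with made-up numbers:

```python
def chow_f(rssr, ssr1, ssr2, k, n):
    # F statistic from pooled and subsample sums of squared
    # residuals; k coefficients, n total observations
    numerator = (rssr - ssr1 - ssr2) / k
    denominator = (ssr1 + ssr2) / (n - 2 * k)
    return numerator / denominator

# Illustrative numbers: the pooled fit is much worse than the
# two subsample fits, suggesting a structural break
f_stat = chow_f(rssr=120.0, ssr1=40.0, ssr2=50.0, k=3, n=60)
```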
Source: Davidson and MacKinnon, 1993, p 375
Contexts: econometrics; estimation
circulating capital:
flows of value within a production organization. Includes stocks of raw
material, work in process, finished goods inventories, and cash on hand needed
to pay workers and suppliers before products are sold.
Source: unpublished dissertation, Thomas Geraghty, at Northwestern Univ,
1998
With thanks to: Thomas M. Geraghty, (t-geraghty@nwu.edu)
Contexts: IO
CJE:
An abbreviation for the Canadian Journal of Economics.
CLAD:
Stands for the "Censored Least Absolute Deviations" estimator. If
errors are symmetric (with median of zero), this estimator is unbiased and
consistent though not efficient. The errors need not be homoskedastic or
normally distributed to have those attributes.
CLAD may have been defined for the first time in Powell,
1984.
Source: statalist [email discussion list for Stata] and other unpublished
sources
Contexts: econometrics
classical:
According to Lucas (1998), a classical theory would have no explicit reference
to preferences. Contrast neoclassical.
Source: Lucas (1998)
Contexts: phrases; macro theory
Clayton Act:
A 1914 U.S. law on the subject of antitrust and price discrimination.
Section two prohibits price discrimination.
Section three prohibits sales based on an exclusive dealing contract
requirement that may have the effect of lessening competition.
Section seven prohibits mergers where "the effect of such acquisition may
be substantially to lessen competition, or tend to create a monopoly" in
any line of commerce.
Source: lectures and handouts of Michael Whinston at Northwestern U in
Economics D50, Winter 1998
Contexts: IO; antitrust; regulation
clears:
A verb. A market clears if the vector of prices for goods is such that the
excess demand at those prices is zero. That is, the quantity demanded of
every good at those prices equals the quantity supplied.
Contexts: general equilibrium; modelling
climacteric:
Critical stage, period, or turning point, usually away from an upward,
expansive, or optimistic path into a downward or quiescent direction. Has
been used in the context of declining British economic success after
1890.
Source: dictionary
Contexts: economic history; phrases
cliometrics:
the study of economic history; the 'metrics' suffix was added to emphasize
(perhaps humorously) the frequent use of regression estimation.
"The cliometric contribution was the application of a systematic body of
theory -- neoclassical theory -- to history and the application of
sophisticated, quantitative techniques to the specification and testing of
historical models." -- North (1990/1993) p 131.
Source: North, 1990
Contexts: history; fields
clustered data:
Data whose observations are not iid but rather come in clusters within
which observations are correlated -- e.g. a data set of individuals some of
whom are siblings of others, and are therefore similar demographically.
Contexts: data
Coase theorem:
Informally: that in presence of complete competitive markets and the absence
of transactions costs, an efficient set of inputs to production and outputs
from production will be chosen by agents regardless of how property rights
over the inputs were assigned to the agents.
A detailed discussion is in the Encyclopedia of Law and
Economics, online.
Contexts: public economics
Cobb-Douglas production function:
A standard production function which is applied to describe much output
two inputs into a production process make. It is used commonly in both macro
and micro examples.
For capital K, labor input L, and constants a, b, and c, the Cobb-Douglas
production function is
f(k,n) = bkanc
If a+c=1 this production function has constant returns to scale.
(Equivalently, in mathematical language, it would then be linearly
homogenous.) This is a standard case and one often writes (1-a) in place
of c.
Log-linearization simplifies the function, meaning just that taking
logs of both sides of a Cobb-Douglass function gives one better separation of
the components.
In the Cobb-Douglass function the elasticity of substitution between
capital and labor is 1 for all values of capital and labor.
With thanks to: Nelson Noya
Contexts: models
cobweb model:
A theoretical model of an adjustment process that on a price/quantity or
supply/demand graph spirals toward equilibrium.
Example, from Ehrenberg and Smith: Suppose the equilibrium labor market wage
for engineers is stable over a ten-year period, but at the beginning of that
period the wage is above equilibrium for some reason. Operating on the
assumption, let's say, that engineering wages will remain that high, too many
students then go into engineering. The wage falls suddenly from oversupply
when that population graduates. Too few students then choose engineering.
Then there is a shortage following their graduation. Adjustment to
equilibrium could be slow.
"Critical to cobweb models is the assumption that workers form myopic
expectations about the future behavior of wages." "Also critical to
cobweb models is that the demand curve be flatter than the supply curve; if it
is not, the cobweb 'explodes' when demand shifts and an equilibrium wage is
never reached."
Source: Ehrenberg and Smith, 1994, p 292-3
Contexts: labor; models
Cochrane-Orcutt estimation:
An algorithm for estimating a time series linear regression in the presence of
autocorrelated errors. The implicit citation is to Cochrane-Orcutt (1949).
The procedure is nicely explained in the SHAZAM manual section online
at the SHAZAM web
site. Their procedure includes an improvement, attributed to Prais and
Winsten, that retains the first observation. A summary of their excellent
description is below. This version of the algorithm can handle only
first-order autocorrelation, but the Cochrane-Orcutt method can handle more.
Suppose we wish to regress y[t] on X[t] in the presence of autocorrelated
errors. Run an OLS regression of y on X and construct a series of
residuals e[t]. Regress e[t] on e[t-1] to estimate the autocorrelation
coefficient, denoted p here. Then construct series y* and X* by:
y*_1 = sqrt(1 - p^2) * y_1,
X*_1 = sqrt(1 - p^2) * X_1,
and, for t > 1:
y*_t = y_t - p * y_{t-1},
X*_t = X_t - p * X_{t-1}
One estimates b in y=bX+u by applying this procedure iteratively -- renaming
y* to y and X* to X at each step, until estimates of p
have converged satisfactorily.
Using the final estimate of p, one can construct an estimate of the covariance
matrix of the errors, and apply GLS to get an efficient estimate of b.
Transformed residuals, the covariance matrix of the estimate of b,
R^2, and so forth can be calculated; see source.
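A minimal sketch of the iteration described above (numpy; the function name is illustrative, and this follows the description only loosely rather than being a canonical implementation):

```python
import numpy as np

def cochrane_orcutt(y, X, tol=1e-8, max_iter=200):
    """Iterated Cochrane-Orcutt with the Prais-Winsten first-observation
    transform. Handles first-order autocorrelation only."""
    p = 0.0
    for _ in range(max_iter):
        ys, Xs = np.empty_like(y), np.empty_like(X)
        ys[0] = np.sqrt(1 - p**2) * y[0]       # Prais-Winsten first observation
        Xs[0] = np.sqrt(1 - p**2) * X[0]
        ys[1:] = y[1:] - p * y[:-1]            # y*_t = y_t - p * y_{t-1}
        Xs[1:] = X[1:] - p * X[:-1]
        b = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        e = y - X @ b                          # residuals in original units
        p_new = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])  # regress e_t on e_{t-1}
        converged = abs(p_new - p) < tol
        p = p_new
        if converged:
            break
    return b, p
```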
Source: SHAZAM
manual
Contexts: estimation; time series; econometrics
coefficient of absolute risk aversion:
This is a measure of the responsiveness to risk implied by a utility function
of consumption, for each consumption level. Thus it is an attribute of a
model, not an empirical measure usually.
It is defined by RA(c) = -u''(c) / u'(c).
If RA(c) is constant for all c, and there are only two possible
investments, a risky one and a risk-free one, the amount of investment
in the one risky asset is constant for all c.
See also coefficient of relative risk aversion, which is c * RA(c).
Source: Huang and Litzenberger, p. 20
Contexts: finance; models; utility
coefficient of determination:
Same as R-squared.
Source: Greene, 1993, p 72
Contexts: econometrics
coefficient of relative risk aversion:
This is a measure of the responsiveness to risk implied by a utility function
of consumption, for each consumption level. Thus it is an attribute of a
model, not an empirical measure usually.
It is defined by RR(c) = -c * u''(c) / u'(c).
If RR(c) is constant for all c, and there are only two possible
investments, a risky one and a risk-free one, the proportion of
investment in the one risky asset is constant for all c.
See also coefficient of absolute risk aversion, which is
RR(c)/c.
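As an illustrative check (Python; the CRRA utility and the parameter g are an assumed example, not from the source), constant relative risk aversion utility u(c) = c^(1-g)/(1-g) has RR(c) = g at every consumption level:

```python
def RR(c, g=3.0, h=1e-5):
    """Numerical coefficient of relative risk aversion for CRRA utility.
    u'(c) = c**-g; u''(c) is approximated by a central difference of u'."""
    u1 = lambda x: x ** (-g)
    u2 = (u1(c + h) - u1(c - h)) / (2 * h)
    return -c * u2 / u1(c)
```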
Source: Huang and Litzenberger, p. 20
Contexts: finance; models; utility
coefficient of variation:
An attribute of a distribution: its standard deviation divided by its mean.
Example: In a series of wage distributions over time, the standard deviation
may rise over time with inflation, but the coefficient of variation may not,
and thus the fundamental inequality may not.
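The wage example can be checked numerically (illustrative Python with made-up data): scaling every wage by a common inflation factor raises the standard deviation but leaves the coefficient of variation unchanged.

```python
import numpy as np

wages = np.array([10.0, 20.0, 30.0, 40.0])     # a toy wage distribution
cv = wages.std() / wages.mean()

inflated = 1.5 * wages                         # uniform inflation
cv_inflated = inflated.std() / inflated.mean()
```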
Source: Atkinson, 1970, p 252
Contexts: statistics
cohort:
A sub-population going through some specified stage in a process. The term is
often applied to describe a population of persons going through some life
stage, like a first year in a new school.
Contexts: data; labor
cointegration:
"An (n x 1) vector time series yt is said to be
cointegrated if each of the series taken individually is ... nonstationary
with a unit root, while some linear combination of the series
a'y is stationary ... for some nonzero (n x 1) vector
a."
Hamilton uses the phrasing that yt is cointegrated with
a', and offers a couple of examples. One was that although consumption
and income time series have unit roots, consumption tends to be a roughly
constant proportion of income over the long term, so (ln income) minus (ln
consumption) looks stationary.
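An illustrative simulation (numpy; not from Hamilton): x is a random walk and y tracks x, so each series is nonstationary, but the combination y - x (that is, a'(y, x) with a = (1, -1)) is stationary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = np.cumsum(rng.normal(size=n))    # random walk: nonstationary, unit root
y = x + rng.normal(size=n)           # wanders with x, so also nonstationary
spread = y - x                       # the stationary linear combination
```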
Source: Hamilton, p. 571
Contexts: econometrics; time series; data
commercial paper:
commoditized short-term corporate debt.
Contexts: finance
common pool resource:
A common pool resource is one which can be used by many users at once, and use
by each one reduces the benefits available to the others. Examples are the
radio spectrum, ocean fisheries, public roads and parks.
Common pool resources are different from public goods such as
information, for which use by one user does not reduce its availability to
others. (Kruse, 2002, p. 664.)
Source: Kruse, Elizabeth F. "From free privilege to regulation: Wireless
firms and the competition for spectrum rights before World War I"
Business History Review, Winter 2002, 76:4.
Contexts: public economics
compact:
A set (in Euclidean space) is compact if it is closed and bounded.
The concept comes up most often in economics in the context of a theory in
which a function must be maximized. Continuous functions that are well
defined on a compact domain attain a maximum and a minimum; this is the
Weierstrass theorem. Discontinuous functions, or functions on a
noncompact domain, may not.
Contexts: real analysis; micro theory
comparative advantage:
To illustrate the concept of comparative advantage requires at least two goods
and at least two places where each good could be produced with scarce
resources in each place. The example drawn here is from Ehrenberg and Smith
(1997), page 136. Suppose the two goods are food and clothing, and that "the
price of food within the United States is 0.50 units of clothing and the price
of clothing is 2 units of food. [Suppose also that] the price of food in
China is 1.67 units of clothing and the price of clothing is 0.60 units of
food." Then we can say that "the United States has a comparative advantage in
producing food and China has a comparative advantage in producing clothing."
It follows that in a trading relationship the U.S. should allocate at least
some of its scarce resources to producing food and China should allocate at
least some of its scarce resources to producing clothing, because this is the
most efficient allocation of the scarce resources and allows the price of food
and clothing to be as low as possible.
Famous economist David Ricardo illustrated this in the early 1800s using
cloth from England and wine from Portugal as examples. The comparative
advantage concept
seems to be one of the really challenging, novel, and useful abstractions in
economics.
Source: Ehrenberg and Smith, Modern Labor Economics, sixth edition
Contexts: trade
compensating variation:
The price a consumer would need to be paid, or the price the consumer would
need to pay, to be just as well off after (a) a change in prices of products
the consumer might buy, and (b) time to adapt to that change.
It is assumed the consumer does not benefit or lose from producing the
product.
Source: Hicks, John R. 1942. "Consumers' Surplus and Index Numbers."
Review of Economic Studies 9(2). pp 126-137.
as cited in:
Brynjolfsson, Erik, Michael D. Smith, Yu (Jeffrey) Hu. "Consumer Surplus in
the Digital Economy: Estimating the Value of Increased Product Variety." p.6.
On the net as of Jan 7, 2003.
Contexts: IO
competency trap:
The position of an organization which uses a suboptimal procedure because it
is good enough in the short run and so does not switch to a better one.
Becker (2004, p. 653) quotes Levitt and March (1988, p. 322) thus:
"favorable performance with an inferior procedure leads an organization
to accumulate more experience with it, thus keeping experience with a superior
procedure inadequate to make it rewarding to use".
Source: Becker, Markus C. "Organizational routines: a review of the
literature." Industrial and Corporate Change. Vol 13, no. 4 (August
2004), pp. 643-677.
Levitt, B., and J. March. 1988. "Organizational learning,"
Annual Review of Sociology, vol. 14, pp. 319-340.
Contexts: management; sociology; organizations
complete:
(economics theory definition) A model's markets are complete if agents can
buy insurance contracts to protect them against any future time and state of
the world.
(statistics definition) In a context where a distribution is known except for
a parameter q, a minimal sufficient statistic is
complete if there is only one unbiased estimator of q using that statistic.
Contexts: modelling; statistics
complete market:
One in which the complete set of possible gambles on future
states-of-the-world can be constructed with existing assets.
This is a theoretical ideal against which reality can be found more or less
wanting. It is a common assumption in finance or macro models, where the set
of states-of-the-world is formally defined.
Another phrasing: "a complete set of state contingent claim markets." (HL, p.
124).
Source: Huang and Litzenberger, 1988, p. 124
Contexts: finance; models
Compustat:
a data set used in finance
Contexts: finance; data
concavity of distribution function condition:
A property of a distribution function-utility function pair.
(At least, it MAY require specification of the utility function; this editor
can't tell well.) It is assumed to hold in some principal-agent models
so as to make certain conclusions possible.
Contexts: micro theory
concentration ratio:
A way of measuring the concentration of market share held by particular
suppliers in a market. "It is the percentage of total market sales
accounted for by a given number of leading firms." Thus a four-firm
concentration ratio is the total market share of the four firms with the
largest market shares. (Sometimes this particular statistic is called the
CR4.)
Source: Greer, 1992, p. 176
Contexts: IO
condition number:
A measure of how close a matrix is to being singular. Relevant in
estimation: if the matrix of regressors is nearly singular, the data are
nearly collinear, and (a) it will be hard to compute an accurate or precise
inverse, and (b) a linear regression will have large standard errors.
The condition number is computed from the characteristic roots or
eigenvalues of the matrix. If the largest characteristic root is
denoted L and the smallest characteristic root is S (both being presumed to be
positive here, that is, the matrix being diagnosed is presumed to be positive
definite), then the condition number is:
g = (L/S)^0.5
Values larger than 20, according to Greene (93), are observed if and
only if the matrix is 'nearly singular'.
Greene cites Belsley et al (1980) for this term and the number 20.
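A small numerical illustration (numpy; the matrix is a made-up example): two nearly collinear columns give a large condition number.

```python
import numpy as np

X = np.array([[1.0, 0.999],
              [0.999, 1.0]])           # nearly singular (nearly collinear)
eigs = np.linalg.eigvalsh(X)           # characteristic roots, ascending order
g = np.sqrt(eigs[-1] / eigs[0])        # condition number, (L/S)^0.5
```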
Source: Greene, 1993, p 33; cites Belsley et al 1980.
Contexts: estimation; econometrics
conditional:
has a special use in finance when used without other modifiers; often means
'conditional on time and previous asset returns'. In that context, one might
read 'returns are conditionally normally distributed.'
Contexts: finance
conditional factor demands:
a collection of functions that give the optimal demands for each of several
inputs as a function of the output expected, and the prices of inputs. Often
the prices are taken as given, and incorporated into the functions, and so
they are only functions of the output.
Usual forms:
x1(w1, w2, y) is a conditional factor
demand for input 1, given input prices w1 and w2, and
output quantity y
Source: Varian, 1992
Contexts: models; micro
conditional variance:
Shorthand often used in finance to mean, roughly, "variance at time t
given that many events up through time t-1 are known."
For example, it has been useful in studying aggregate stock prices, which go
through periods of high volatility and periods of low volatility, to model
them econometrically with a variance at time t that follows an
AR process. This is the ARCH idea. In such a statistical
model, the conditional variance is generally different from the unconditional
variance. That is, the unconditional variance is the variance of the whole
process, whereas the 'conditional variance' can be better estimated since in
this phrasing it is assumed that we can estimate the immediately previous
values of variance.
Contexts: finance
conformable:
A matrix may not have the right dimension or shape to fit into some particular
operation with another matrix. Take matrix addition -- the matrices are
supposed to have the same dimensions to be summed. If they don't, we can say
that they are not conformable for addition. The most common application of
the term comes in the context of multiplication. Multiplying an M x N matrix
A by an R x S matrix B directly can only be done if N=R. Otherwise the
matrices are not conformable for this purpose. If instead M=R, then the
intended operation may be to take the transpose of A and multiply it by B.
This operation would properly be denoted A'B, where the prime denotes the
transpose of A.
Contexts: econometrics; linear algebra
conglomerate:
A firm operating in several industries.
Contexts: business; finance
consistent:
An estimator for a parameter is consistent iff the estimator converges in
probability to the true value of the parameter; that is, the plim of the
estimator, as the sample size goes to infinity, is the parameter itself.
Another phrasing: an estimator is consistent if it has asymptotic
power of one.
"Consistency", without a modifier, is synonymous with weak
consistency.
From Davidson and Mackinnon, p. 79: If for any possible value of the
parameter q in a region of a parameter space the
power of a test goes to one as sample size n goes to infinity, that
test is said to be consistent against alternatives in that region of the
parameter space. That is, if as the sample size increases we can in the limit
reject every false hypothesis about the parameter, the test is consistent.
How does one prove that an estimator is consistent? Here are two ways.
(1) Prove directly that if the model is correct, the estimator has
power one in the limit to reject any alternative but the true
parameter.
(2) Sufficient conditions for proving that an estimator is consistent are (i)
that the estimator is asymptotically unbiased and (ii) that its variance
collapses to zero as the sample size goes to infinity. This method of proof
is usually easier than (1) and is commonly used.
The existence of a consistent estimator for a parameter is proof that
the parameter is identified. But a parameter could be identified
without there being a consistent estimator. For more on this see comment on consistency and identification.
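Method (2) can be illustrated with the sample mean (illustrative Python; the distribution and sample sizes are made up): it is unbiased for the population mean and its variance sigma^2/n collapses as n grows, so it is consistent.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 5.0
# absolute error of the sample mean at a small and at a large sample size
err_small = abs(rng.normal(mu, 2.0, 10).mean() - mu)
err_large = abs(rng.normal(mu, 2.0, 100_000).mean() - mu)
```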
Contexts: econometrics; statistics; estimation
constant returns to scale:
An attribute of a production function. A production function exhibits
constant returns to scale if changing all inputs by a positive proportional
factor has the effect of increasing outputs by that factor. This may be true
only over some range, in which case one might say that the production function
has constant returns over that range.
Contexts: models
Consumer Expenditure Survey:
Conducted by the U.S. government. See its
Web site.
Contexts: data
consumption beta:
"A security's consumption beta is the slope in the regression of its
return on per capita consumption."
Source: Fama 1991 p 1596
Contexts: finance
consumption set:
The set of affordable consumption bundles. One way to define a consumption
set is by a set of prices, one for each possible good, and a budget. Or a
consumption set could be defined in a model by some other set of restrictions
on the set of possible consumption bundles.
E.g. if consumer i can consume nonnegative quantities of all goods, it is
standard to define x_i as i's consumption set, a member of R_+^L where L is
the number of goods. Normally if the agent is endowed with a set of goods,
the endowment is in the consumption set.
Contexts: general equilibrium; models
contingent valuation:
The use of questionnaires about valuation to estimate the willingness of
respondents to pay for public projects or programs.
Often the question is framed, "Would you accept a tax of x to pay for the
program?" Any such survey must be carefully done, and even so there is
dispute about the value of the basic method, as is discussed in the issue of
the JEP with the Portney (1994) article.
Source: Portney, 1994
Contexts: public finance
contract curve:
Same as Pareto set, with the implication that it is drawn in an
Edgeworth box.
Source: Varian, 1992, p 324
Contexts: micro theory; general equilibrium; models
contraction mapping:
Given a metric space S with distance measure d(), and T:S->S mapping
S into itself, T is a contraction mapping if for some modulus b in the
range (0,1), d(Tx,Ty) is less than or equal to b*d(x,y) for all x and y in S.
One often abbreviates the phrase 'contraction mapping' by saying simply that T
is a contraction.
The function resulting from the applications of a contraction could slope the
opposite way of the original function as long as it is less steeply sloped.
A standard way to prove that an operator T is a contraction is to prove that
it satisfies Blackwell's conditions.
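A one-line example (illustrative Python, not from the source): T(x) = x/2 + 1 is a contraction on the real line with modulus b = 1/2, and iterating it from any starting point converges to the fixed point x = 2.

```python
def T(x):
    """A contraction on R: |T(x) - T(y)| = 0.5 * |x - y|."""
    return 0.5 * x + 1.0

x = 100.0
for _ in range(60):   # the distance to the fixed point halves at each step
    x = T(x)
```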
Source: Stokey and Lucas, 1989
Contexts: macro; models
contractionary fiscal policy:
A government policy of reducing spending and raising taxes.
In the language of some first courses in macroeconomics, it shifts the IS
curve (investment/saving curve) to the left.
Contexts: macro
contractionary monetary policy:
A government policy of raising interest rates charged by the central
bank.
In the language of some first courses in macroeconomics, it shifts the LM
curve (liquidity/money curve) to the left.
Contexts: macro
control for:
As used in the following way: "The effect of X on Y disappears when we
control for Z", the phrase means to regress Y on both X and Z, together,
and to interpret the direct effect of X as the only effect. Here the effect
of Z has been "controlled for". It is implied that X is not
causing changes in Z.
Contexts: phrases; econometrics
control variable:
A variable in a model controlled by an agent in order to optimize
something.
Contexts: models
convergence:
Multiple meanings: (1) a mathematical property of a sequence or series that
approaches a limiting value;
(2) in macro:
"'Catch-up' refers to the long-run process by which productivity laggards
close the proportional gaps that separate them from the productivity leader
.... 'Convergence,' in our usage, refers to a reduction of a measure of
dispersion in the relative productivity levels of the array of countries under
examination." Like Barro and Sala-i-Martin (92)'s "sigma-convergence", a
narrowing of the dispersion of country productivity levels over time.
Source: From Moses Abramovitz and Paul A. David, 1996. "Convergence and
Deferred Catch-up: productivity leadership and the waning of American
exceptionalism." In Mosaic of Economic Growth, edited by Ralph Landau,
Timothy Taylor, and Gavin Wright.
convergence in quadratic mean:
A kind of convergence of random variables. If x_t converges in
quadratic mean it converges in probability, but it does not necessarily
converge almost surely.
The following is a best guess, not known to be correct.
Let e_t be a stochastic process and F_t be an information
set at time t uncorrelated with e_t:
E[e_t | F_{t-m}] converges in quadratic mean to zero as m goes
to infinity IFF:
E[ (E[e_t | F_{t-m}])^2 ] converges to zero as m goes to
infinity.
Contexts: probability; econometrics
convolution:
The convolution of two functions U(x) and V(x) is the function:
(U*V)(x) = (integral from 0 to x of) U(t)V(x-t) dt
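A numerical sketch (illustrative Python): approximating the integral by a Riemann sum for the assumed example U(t) = 1 and V(t) = t, whose convolution works out to x^2/2.

```python
def convolve_at(U, V, x, n=10_000):
    """Riemann-sum approximation of (U*V)(x) = integral_0^x U(t) V(x-t) dt."""
    h = x / n
    return sum(U(i * h) * V(x - i * h) for i in range(n)) * h

approx = convolve_at(lambda t: 1.0, lambda t: t, 2.0)   # exact value: 2**2 / 2
```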
Source: Derrick, 1984
Contexts: calculus; complex analysis; real analysis; time series
Cook's distance:
A metric for deciding whether a particular point alone affects regression
estimates much. After a regression is run one can consider for each data
point how far it is from the means of the independent variables and the
dependent variable. If it is far from the means of the independent variables
it may be very influential and one can consider whether the regression results
are similar without it.
[Need to add the equation defining the Cook's d here.]
Source: Stephen Brown (stephenb@nwu.edu as of Aug 25, 1999)
With thanks to: Stephen Brown (stephenb@nwu.edu as of Aug 25, 1999)
Contexts: estimation
cooperative game:
A game structure in which the players have the option of planning as a group
in advance of choosing their actions. Contrast noncooperative
game.
Contexts: game theory
core:
Defined in terms of an original allocation of goods among agents with
specified utility functions. The core is the set of possible reallocations
such that no subset of agents could break off from the others and all do
better just by trading among themselves.
Equivalently: The intersection of individually rational allocations with the
Pareto efficient allocations. Individually rational, here, means the
allocations such that no agent is worse off than with his endowment in the
original allocation.
Contexts: general equilibrium; models
corner solution:
A choice made by an agent that is at a constraint, and not at the tangency of
two classical curves on a graph, one characterizing what the agent could
obtain and the other characterizing the imaginable choices that would attain
the highest reachable value of the agent's objective.
A classic example is the intersection between a consumer's budget line
(characterizing the maximum amounts of good X and good Y that the consumer can
afford) and the highest feasible indifference curve. If the agent's best
available choice is at a constraint -- e.g. among affordable bundles of good X
and good Y the agent prefers quantity zero of good X -- that choice is often
not at a tangency of the indifference curve and the budget line, but at a
"corner"
Contrast interior solution.
Contexts: micro theory; phrases
correlation:
Two random variables are positively correlated if high values of one are
likely to be associated with high values of the other. They are negatively
correlated if high values of one are likely to be associated with low values
of the other.
Formally, a correlation coefficient is defined between the two random
variables (x and y, here). Let s_x and s_y denote the standard deviations
of x and y. Let s_xy denote the covariance of x and y. The correlation
coefficient between x and y, denoted sometimes r_xy, is defined by:
r_xy = s_xy / (s_x * s_y)
Correlation coefficients are between -1 and 1, inclusive, by definition. They
are greater than zero for positive correlation and less than zero for negative
correlations.
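A quick numerical illustration (numpy; the data are made up): an exact increasing linear relation gives r = 1, and negating one variable flips the sign.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1                              # perfectly positively related to x
sxy = np.cov(x, y, bias=True)[0, 1]        # covariance of x and y
r = sxy / (x.std() * y.std())              # correlation coefficient
```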
Source: Greene, 1997, page 102-3
Contexts: statistics; econometrics
cost curve:
A graph of total costs of production as a function of total quantity
produced.
Contexts: IO; micro
cost function:
is a function of input prices and output quantity. Its value is the cost of
making that output given those input prices.
A common form:
c(w1, w2, y) is the cost of making output quantity y
using inputs that cost w1 and w2 per unit.
Source: Varian, 1992
Contexts: models
cost-benefit analysis:
An approach to public decisionmaking. Quotes below from Sugden and
Williams, 1978
p. 236, with some reordering:
"Cost-benefit analysis is a 'scientific' technique, or a way of organizing
thought, which is used to compare alternative social states or courses of
action." "Cost-benefit analysis shows how choices should be made so as to
pursue some given objective as efficiently as possible." "It has two
essential characteristics, consistency and explicitness. Consistency is the
principle that decisions between alternatives should be consistent with
objectives....Cost-benefit analysis is explicit in that it seeks to
show that particular decisions are the logical implications of
particular, stated, objectives."
"The analyst's skill is his ability to use this technique. He is hired to
use this skill on behalf of his client, the decision-maker..... [The
analyst] has the right to refuse offers of employment that would require him
to use his skills in ways that he believes to be wrong. But to accept the
role of analyst is to agree to work with the client's objectives."
p. 241: Two functions of cost-benefit analysis: It "assists the
decision-maker to pursue objectives that are, by virtue of the community's
assent to the decision-making process, social objectives. And by making
explicit what these objectives are, it makes the decision-maker more
accountable to the community."
"This view of cost-benefit analysis, unlike the narrower value-free
interpretation of the decision-making approach, provides a justification for
cost-benefit analysis that is independent of the preferences of the
analyst's immediate client. An important consequence of this is that the
role of the analyst is not completely subservient to that of the
decision-maker. Because the analyst has some responsibility of principles
over and above those held by the decision-maker, he may have to ask
questions that the decision-maker would prefer not to answer, and which
expose to debate conflicts of judgement and of interest that might otherwise
comfortably have been concealed."
Source: Sugden and Williams, 1978
Contexts: public
cost-of-living index:
A cost-of-living price index measures the changing cost of a constant standard
of living. The index is a scalar measure for each time period. Usually it is
a positive number which rises over time to indicate that there was inflation.
Two incomes can be compared across time by seeing whether the incomes changed
as much as the index did.
Contexts: macro; prices
costate:
A costate variable is, in practice, a Lagrangian multiplier, or
Hamiltonian multiplier.
Contexts: models
countable additivity property:
the third of the properties of a measure.
Contexts: math; probability; measure theory
coupon strip:
A bond can be split into two parts that can be thought of as components: (1)
a principal component that is the right to receive the principal at the end
date, and (2) the right to receive the coupon payments. The components are
called strips. The right to receive coupon payments is the coupon
strip.
Contexts: finance; business
Cournot duopoly:
A pair of firms who split a market, modeled as in the Cournot
game.
Contexts: IO; models
Cournot game:
A game between two firms. Both produce a certain good, say, widgets. No
other firms do. The price they receive is a decreasing function of the total
quantity of widgets that the firms produce. That function is known to both
firms. Each chooses a quantity to produce without knowing how much the other
will produce.
Contexts: game theory; IO
Cournot model:
A generalization of the Cournot game to describe industry structure.
Each of N firms will choose a quantity of output. Price is a commonly known
decreasing function of total output. All firms know N and take the output of
the others as given. Each firm has a cost function
ci(qi). Usually the cost functions are treated as
common knowledge. Often the cost functions are assumed to be the same for all
firms.
The prediction of the model is that the firms will choose Nash
equilibrium output levels.
Formally, from notes given by Michael Whinston to the Economics D50-1 class at
Northwestern U. on Sept 23, 1997:
Denote x_i as a quantity that firm i considers,
X as the total quantity (the sum of the x_i's),
x_i* and X* as the Nash equilibrium levels of those quantities,
X_{-i} as the total quantity chosen by all firms other than firm i,
and p(X) as the function mapping total quantity to price in the market.
Each firm i solves:
max over x_i of: p(x_i + X_{-i}) * x_i - c_i(x_i)
The first order conditions are, for i from 1 to N:
p'(x_i* + X_{-i}) * x_i* + p(X*) - c_i'(x_i*) = 0
Assuming x_i* is greater than 0 for all i, the Nash equilibrium
output levels are characterized by the N equations:
p'(X*) * x_i* + p(X*) = c_i'(x_i*) for each i.
Source: handout of Michael Whinston, 9/23/97.
Contexts: IO
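As a worked illustration (not from the glossary; linear demand and a common constant marginal cost are assumed here), with inverse demand p(X) = a - bX and marginal cost c, the first order conditions give the symmetric equilibrium xi* = (a - c)/(b(N+1)):

```python
# Symmetric Cournot-Nash equilibrium under linear inverse demand
# p(X) = a - b*X and a common constant marginal cost c (all values
# illustrative; the function names are my own).
def cournot_symmetric(a, b, c, N):
    """Each firm's equilibrium output: x* = (a - c) / (b * (N + 1))."""
    return (a - c) / (b * (N + 1))

def foc_residual(a, b, c, N):
    """Residual of the first order condition p'(X*)x* + p(X*) - ci'(x*)."""
    x = cournot_symmetric(a, b, c, N)
    X = N * x
    return -b * x + (a - b * X) - c   # p'(X) = -b, ci'(x) = c

x_duopoly = cournot_symmetric(a=100, b=1, c=10, N=2)   # 30.0 per firm
```

With N = 1 the same formula recovers the monopoly quantity, and as N grows, output per firm shrinks toward the competitive limit.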
covariance stationary:
A stochastic process is covariance stationary if neither its
mean nor its autocovariances depend on the time or spatial
index. For an empirical purpose, one might formally make the assumption that
a time series was covariance stationary, then use the data to estimate the
mean, variance, and autocovariances.
Formally the definition can be written this way. A stochastic process
{yt} is covariance stationary if there exists a constant mean m, a constant
variance s2, and a series of constant autocovariances gj such that
(using E as the mean or expectations operator):
(1) E[yt] = m for all integers t;
(2) E[(yt-m)2] = s2 for all integers t; and
(3) E[(yt-m)(yt+j-m)] = E[(ys-m)(ys+j-m)] for all integers s, t, and j.
Contrast strict stationarity which is usually stricter but includes
processes which do not have finite variances. Covariance stationary means the
same as weakly stationary and generally the same as just
stationary.
Source: Enders, 1995, p. 69
Contexts: econometrics; time series
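As an illustrative check (my own sketch, not from the source), the sample mean and autocovariances of a series can be computed directly; under covariance stationarity the lag-j sample autocovariance estimates a quantity that depends only on the lag j:

```python
# Sample autocovariance of a series at lag j; under covariance
# stationarity this estimates the constant g_j (lag-only dependence).
def sample_autocovariance(y, j):
    T = len(y)
    m = sum(y) / T                       # estimate of the constant mean
    return sum((y[t] - m) * (y[t + j] - m) for t in range(T - j)) / T

y = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0]       # illustrative data
gamma0 = sample_autocovariance(y, 0)      # the sample variance
gamma1 = sample_autocovariance(y, 1)      # negative: the series alternates
```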
covered:
Covered employment is that set of U.S. jobs which pay into the state
unemployment insurance systems and therefore the holders of the jobs will
receive insurance payments if they are laid off.
Contexts: government
Cowles Commission:
An American research group in econometrics, active especially in the 1940s and 1950s, which focused attention on
the problem of simultaneous equations. In some tellings of the history this
had an impact on the field -- other problems such as errors-in-variables
(measurement errors in the independent variables), were set aside or given
lower priority elsewhere too because of the prestige and influence of the
Cowles Commission.
Source: The New Palgrave: Econometrics (e.g. p.82)
Contexts: econometrics
CPI:
The Consumer Price Index, which is a measure of the cost of goods purchased by
an average U.S. household. It is calculated by the U.S. government's Bureau of
Labor Statistics.
As a pure measure of inflation, the CPI has some flaws:
1) new product bias (new products are not counted for a while after they
appear)
2) discount store bias (consumers who care won't pay full price)
3) substitution bias (variations in price can cause consumers to respond by
substituting on the spot, but the basic measure holds their consumption of
various goods constant)
4) quality bias (product improvements are under-counted)
5) formula bias (overweighting of sale items in sample rotation)
Source: Message from Louis Crandall of Wrightson Associates on
sci.econ.research circa 10/24/96.
Contexts: macro; labor; data
CPI-U:
The U.S. government's "Consumer Price Index for All Urban
Consumers."
Contexts: data
CPI-W:
The U.S. government's "Consumer Price Index for Urban Wage Earners and
Clerical Workers."
Contexts: data
CPS:
The Current Population Survey (of the U.S.) is compiled by the U.S. Bureau of
the Census, which is in the Dept of Commerce. The CPS is the source of
official government statistics on employment and unemployment in the U.S.
Each month 56,500-59,500 households are interviewed about their average weekly
earnings and average hours worked. The households are selected by area to
represent the states and the nation. "Each household is interviewed once
a month for four consecutive months in one year and again for the
corresponding time period a year later" to make month-to-month and
year-to-year comparisons possible.
The March CPS is special. For one thing the respondents are asked about
insurance then.
Source: Blanchflower and Oswald, Ch 4, p. 171; Freeman,
1991
Contexts: data
Cramer-Rao lower bound:
Whenever the Fisher information I(b) is a well-defined matrix or
number, the variance of an unbiased estimator B for b is at
least as large as [I(b)]-1.
Source: Greene, 1993, p 96
Contexts: econometrics; statistics; estimation
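A concrete one-parameter case (my own illustration; the Bernoulli setup is not in the source): for n iid Bernoulli(p) observations the Fisher information is n/(p(1-p)), so any unbiased estimator of p has variance at least p(1-p)/n, which the sample mean attains:

```python
# Cramer-Rao lower bound for n iid Bernoulli(p) observations.
# Fisher information: I(p) = n / (p * (1 - p)); bound = 1 / I(p).
def crlb_bernoulli(p, n):
    fisher_info = n / (p * (1 - p))
    return 1.0 / fisher_info

# The sample mean is unbiased with variance p(1-p)/n, attaining the bound.
bound = crlb_bernoulli(p=0.3, n=100)
```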
creative destruction:
The phenomenon of old industries being wiped out and new ones arising through
the process of changing opportunities (like new technology) under capitalism.
The term is attributed by Mancusi, 2003, p. 272 to Schumpeter, 1912.
Source: Mancusi, Maria Luisa. "Geographical concentration and the dynamics
of countries' specialization in technologies" Economics of Innovation and
New Technology 2003, vol 12:3, pp. 269-291.
Schumpeter, Joseph. 1912. The Theory of Economic Development.
Contexts: phrases
criterion function:
Synonym for loss function. Used in reference to econometrics.
Contexts: econometrics; estimation
critical region:
synonym for rejection region. This describes the subset of the sample
space which would cause rejection of the hypothesis being tested.
Source: Davidson and MacKinnon, 1993, p 78-79; Hogg and Craig,
5th edition, p. 282
Contexts: econometrics; estimation
Cronbach's alpha:
A test for a model or survey's internal consistency, sometimes called a
'scale reliability coefficient'. The remainder of this definition is
partial and unconfirmed.
Cronbach's alpha assesses the reliability of a rating summarizing a group of
test or survey answers which measure some underlying factor (e.g., some
attribute of the test-taker). A score is computed from each test item and the
overall rating, called a 'scale' is defined by the sum of these scores over
all the test items. Then reliability a is defined
to be the square of the correlation between the measured scale and the
underlying factor the scale was supposed to measure. (Which implies that one
has another measure in test cases of that underlying factor, or that it's
imputed from the test results.)
(In Stata's examples it remains unclear what the scale is, and how it's
measured; apparently alpha can be generated without having a measure of the
underlying factor.)
Source: StataCorp. 1999 Stata Statistical Software: Release 6.0.
College Station, TX: Stata Corporation. pages 20-24 of Reference Volume
1.
Contexts: surveys
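The standard computational formula (widely used, though not given in the source) is alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A minimal sketch with illustrative data:

```python
# Cronbach's alpha for k items scored over N respondents, using the
# standard formula: alpha = k/(k-1) * (1 - sum(item vars) / var(scale)).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores over N people."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # each person's scale
    item_var_sum = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two identical items are perfectly consistent, so alpha = 1.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```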
cross-section data:
Parallel data on many units, such as individuals, households, firms, or
governments. Contrast panel data or time series data.
Contexts: econometrics; estimation
cross-validation:
A way of choosing the window width for a kernel estimation. The
method is to select, from a set of possible window widths, one that minimizes
the sum of errors made in predicting each data point by using kernel
regression on the others.
Formally, let J be the number of data points, j an index to each one, from one
to J, Xj the independent variables for observation j, Yj the
dependent variable for observation j, and {hi} for i=1 to n the set of
candidate window widths.
The hi's might be a set of equally spaced values on a grid. The
algorithm for choosing one of the hi's is:
For each candidate window width hi
{
..For each j from 1 to J
..{
....Drop the data point (Xj, Yj) from the sample
temporarily
....Run a kernel regression to estimate Yj using the remaining X's
and Y's
....Keep track of the square of the error made in that prediction
..}
..Sum the squares of the errors for every j to get a score for candidate
window width hi
..Record that in a list as the score for hi
}
Select as the outcome h of this algorithm the hi with the lowest
score
The grid approach is necessary because the problem is not concave. Otherwise
one might try a simpler maximization e.g., with the first order
conditions.
Note however that a complete execution of the cross-validation method can be
very slow because it requires as many kernel regressions as there are data
points. E.g. in this author's experience, the cross-validation computation
for one window width on 500 data points on a Pentium-90 in Gauss took about
five seconds, 1000 data points took circa seventeen seconds, but for 15000
data points it took an hour. (Then it takes another hour to check another
window width; so even the very simplest choice, between two window widths,
takes two hours.)
Source: Hardle, 1990
Contexts: nonparametrics; estimation; econometrics; statistics
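The algorithm above can be sketched as follows, assuming a Nadaraya-Watson kernel regression with a Gaussian kernel (names, data, and the candidate grid are illustrative):

```python
import math

# Leave-one-out cross-validation for the window width (bandwidth) of a
# Nadaraya-Watson kernel regression with a Gaussian kernel.
def nw_predict(x0, xs, ys, h):
    """Kernel-weighted average of ys, weighted by distance of xs from x0."""
    ws = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def cv_score(xs, ys, h):
    """Sum of squared leave-one-out prediction errors for bandwidth h."""
    score = 0.0
    for j in range(len(xs)):
        xs_j = xs[:j] + xs[j + 1:]        # drop point j temporarily
        ys_j = ys[:j] + ys[j + 1:]
        score += (ys[j] - nw_predict(xs[j], xs_j, ys_j, h)) ** 2
    return score

def choose_bandwidth(xs, ys, grid):
    """Pick the candidate window width with the lowest CV score."""
    return min(grid, key=lambda h: cv_score(xs, ys, h))

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]       # an illustrative linear relation
h_star = choose_bandwidth(xs, ys, [0.3, 1.0, 3.0])
```

Each candidate width requires J kernel regressions, which is the source of the slowness the entry describes.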
CRRA:
Stands for Constant Relative Risk Aversion, a property of some utility
functions, also said to have isoelastic form. CRRA is a synonym for
CES.
Example: for any real a<1 with a not equal to 0, u(c)=ca/a is a CRRA utility function.
It is a vNM utility function.
Source: Blanchard and Fischer, pp 43-44.
Contexts: models; finance
CRS:
Stands for Constant Returns to Scale.
CRSP:
Center for Research in Security Prices, a standard database of finance
information at the University of Chicago. Has daily returns on NYSE, AMEX,
and NASDAQ stocks.
Started in early 1970s by Eugene Fama among others. The data there was so
much more convenient than alternatives that it drove the study of security
prices for decades afterward. It did not have volume data which meant that
volume/volatility tests were rarely done.
Contexts: finance; data
cubic spline:
A particular nonparametric estimator of a function. Given a data set
{Xi, Yi} it estimates values of Y for X's other than
those in the sample. The process is to construct a function that balances the
twin needs of (1) proximity to the actual sample points, (2) smoothness. So a
'roughness penalty' is defined. See Hardle's equation 3.4.1 near p. 56 for
exact equation. The cubic spline seems to be the most common kind of spline
smoother.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
current account balance:
The difference between a country's savings and its investment. "[If]
positive, it measures the portion of a country's saving invested abroad; if
negative, the portion of domestic investment financed by foreigners'
savings."
Defined by the sum of the value of exports of goods and services plus net
returns on investments abroad, minus the value of imports of goods and
services, where all these elements are measured in the domestic
currency.
Source: Maurice Obstfeld, "The global capital market: benefactor or
menace?", Journal of Economic Perspectives, vol 12, no. 4, Fall
1998, page 11.
Contexts: trade; international; macro
DARA:
decreasing absolute risk aversion
data:
Relevant terms: AFQT,
Amos,
BHPS,
BLS,
CEX,
clustered data,
cohort,
cointegration,
Compustat,
Consumer Expenditure Survey,
CPI,
CPI-U,
CPI-W,
CPS,
CRSP,
DataDesk,
EconLit,
ExecuComp,
FASB,
filter,
FIPS,
Freddie Mac,
Gauss,
GDP deflator,
GSOEP,
High School and Beyond,
HSB,
INSEE,
IPUMS,
Limdep,
longitudinal data,
M1,
March CPS,
Matlab,
Minitab,
MSA,
natural experiment,
NELS,
NIPA,
NLREG,
NLS,
NLSY,
NLSYW,
NORC,
OES,
poverty,
PSID,
RATS,
ridit scoring,
S-Plus,
SAS,
SHAZAM,
SIC,
SIPP,
SLID,
SMSA,
Solas,
SPSS,
SSEP,
SSRN,
Stata,
Statistica,
SUDAAN,
top-coded,
tutorial,
unemployment,
urban ghetto,
WesVar,
X-11 ARIMA.
Contexts: fields
DataDesk:
Data analysis software, discussed at http://www.datadesk.com.
Contexts: data; software
decision rule:
Either (1) a function that maps from the current state to the agent's decision
or choice or (2) a mapping from the expressed preferences of each of a group
of agents to a group decision. The first is more relevant to decision theory
and dynamic optimization; the second is relevant to game theory.
The phrase allocation rule is sometimes used to mean the same thing as
decision rule. The term strategy-proof has been defined in both
contexts.
Contexts: macro; models; game theory
decomposition theorem:
Synonym for FWL theorem or Frisch-Waugh-Lovell theorem.
Contexts: econometrics
deductive:
Characterizing a reasoning process of logical reasoning from stated
propositions. Contrast inductive.
Contexts: philosophy
deep:
A capital market may be said to be deep if it has great depth (which
see).
May less formally be used to describe a market with large total market
capitalization.
Contexts: finance
delta:
As used with respect to options: The rate of change of a financial
derivative's price with respect to changes in the price of the underlying
asset. Formally this is a partial derivative.
A derivative is perfectly delta-hedged if it is in a portfolio with a delta of
zero. Financial firms make some effort to construct delta-hedged
portfolios.
Source: Hull, 1997, p 312
Contexts: finance
delta method:
Gives the approximate distribution of a function of random variables whose
distribution is known. In particular, for the function g(b,l), where b and l
are estimators for true values b0 and l0:
g(b,l) ~ N(g(b0,l0), g'(b,l) var(b,l) g'(b,l)')
Contexts: statistics; econometrics
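A one-dimensional sketch (my own illustration, not from the source): if b is approximately N(b0, v), then g(b) is approximately N(g(b0), g'(b0)^2 * v). Taking g = exp:

```python
import math

# Univariate delta method: g(b) is approx N(g(b0), g'(b0)**2 * v)
# when b is approx N(b0, v).  Here g = exp, so g' = exp as well.
def delta_method_variance(g_prime_at_b0, v):
    return g_prime_at_b0 ** 2 * v

b0, v = 1.0, 0.04                                     # illustrative values
approx_mean = math.exp(b0)                            # g(b0)
approx_var = delta_method_variance(math.exp(b0), v)   # exp(1)**2 * 0.04
```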
demand:
A relation between each possible price and the quantity demanded at that
price.
[Aspects of the population doing the demanding are often left implicit. An
actual supply is not necessary to conceive of demand because demand involves
hypothetical quantities.]
Contexts: macro; micro theory
demand curve:
For a given good, the demand curve is a relation between each possible price
of the good and the quantity that would be bought at market sale at that
price.
Drawn in introductory classes with this arrangement of the axes, although
price is thought of as the independent variable:
Price | \
| \
| \
| \ Demand
|________________________
Quantity
Contexts: micro
demand deposits:
The money stored in the form of checking accounts at banks.
Contexts: macro; money
demand set:
In a model, the set of the most-preferred bundles of goods an agent can
afford. This set is a function of the preference relation for this agent, the
prices of goods, and the agent's endowment.
Assuming the agent cannot have a negative quantity of any good, the demand set
can be characterized this way:
Define L as the number of goods the agent might receive an allocation of. An
allocation to the agent is an element of the space
R+L; that is, the space of nonnegative real
vectors of dimension L.
Define >p as a weak preference relation over goods; that is,
x>px' states that the allocation vector x
is weakly preferred to x' .
Let e be a vector representing the quantities of the agent's endowment
of each possible good, and p be a vector of prices for those goods.
Let D(>p,p,e) denote the demand set. Then:
D(>p,p,e) = {x: px <= pe and
x >p x' for all affordable bundles
x'}.
Contexts: general equilibrium; models
democracy:
Literally "rule by the people". This is a dictionary definition and
is not considered sharp enough for academic use. Schumpeter (1942) contrasts
these two definitions below and regards only the second one as useful and
plausible enough to work with:
"The eighteenth-century philosophy of democracy may be couched in the
following definition: the democratic method is that institutional arrangement
for arriving at political decisions which realizes the common good by making
the people itself decide issues through the election of individuals who are to
assemble in order to carry out its will." (p 250)
This "classical" definition has the problem that the will of the
people is not clearly defined here (e.g. consider voting paradoxes) or known
(perhaps even to the people at the time), and this can lead to ambiguity about
whether a given political system is democratic. The following definition is
preferred for its clarity but has a modern feel that is at some distance from
the original dictionary definition. Political representation is assumed to be
necessary here.
"[T]he democratic method is that institutional arrangement for arriving
at political decisions in which individuals acquire the power to decide by
means of a competitive struggle for the people's vote." (p 269) More
clearly: the democratic method is one in which people campaign competitively
for the people's votes to achieve the power to make public decisions. This
definition is the sharpest.
Source: Schumpeter, Joseph R. 1950. Capitalism, Socialism, and
Democracy, third edition. (First edition 1942.) Harper & Row. New
York.
Contexts: political economy
demography:
The study of the size, growth, and age and geographical distribution of human
populations, and births, deaths, marriages, and migrations.
density function:
A synonym for pdf.
Contexts: econometrics; statistics
depreciation:
The decline in price of an asset over time attributable to
deterioration, obsolescence, and impending retirement. Applies
particularly to physical assets like equipment and structures.
Source: Hulten, 2000, p. 8
Contexts: macro; accounting
depth:
An attribute of a market.
In securities markets, depth is measured by "the size of an order flow
innovation required to change prices a given amount." (Kyle, 1985, p
1316).
Source: Kyle, 1985, p 1316
Contexts: finance
derivatives:
Securities whose value is derived from some other time-varying quantity.
Usually that other quantity is the price of some other asset such as bonds,
stocks, currencies, or commodities. It could also be an index, or the
temperature. Derivatives were created to support an insurance market against
fluctuations.
Contexts: finance
deterioration:
The process or occurrence of an asset's declining productivity as it ages.
This is a component of depreciation.
determinant:
An operator defined on square matrices or the value of that operator. For a
matrix B the determinant is denoted |B|. Its value is a unique scalar.
Calculation of the value of the determinant is discussed in linear algebra
books.
Source: Chiang, 1984, p 93
Contexts: linear algebra
deterministic:
Not random. A deterministic function or variable often means one that is not
random, in the context of other variables available.
That is, those other variables determine the variable in question unerringly,
by a function that would give the same value every time those other variables
were given to it as arguments, unlike a random one which with some probability
would give different answers.
Contexts: phrases
development:
The study of industrialization.
Relevant terms: Kuznets curve.
Contexts: fields
Dickey-Fuller test:
A Dickey-Fuller test is an econometric test for whether a certain kind of time
series data has an autoregressive unit root.
In particular in the time series econometric model
y[t] =
by[t-1] + e[t], where
t is an integer greater than zero indexing time, and
b=1, let
bOLS
denote the OLS estimate of
b from a particular sample.
Let T be the sample size.
Then the test statistic T*(bOLS
-1) has a known, documented distribution. Its value in a particular sample
can be compared to that distribution to determine a probability that the
original sample came from a unit root autoregressive process; that is,
one in which
b=1.
Source: Greene, 1997, Dickey and Fuller (1979) and (1981)
(which are cited by Greene).
Contexts: econometrics; time series
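A minimal simulation sketch (illustrative; the function names are my own): compute T*(bOLS - 1) for a simulated random walk. The resulting value must still be compared against Dickey and Fuller's tabulated critical values, which are not reproduced here.

```python
import random

# Dickey-Fuller statistic T*(b_ols - 1) for the no-intercept regression
# of y[t] on y[t-1], applied to a simulated random walk.
def df_statistic(y):
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    b_ols = num / den                  # OLS slope without an intercept
    T = len(y) - 1                     # number of usable observations
    return T * (b_ols - 1.0)

random.seed(0)
y = [0.0]
for _ in range(500):
    y.append(y[-1] + random.gauss(0.0, 1.0))   # y[t] = y[t-1] + e[t]
stat = df_statistic(y)   # compare to the tabulated DF distribution
```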
dictator game:
A formal game with two players: Allocator A and Recipient R. They have
received a windfall of, say, $1. The allocator, moving first, proposes a
split so that A would receive x and R would receive 1-x. The recipient then
accepts, no matter what A proposed. In a subgame perfect equilibrium, A would
offer R nothing. In experiments with human subjects, however, in which A and
R do not know one another, A offers relatively large shares to R (often
50-50). See also Ultimatum Game.
Contexts: game theory; models
diffuse prior:
In Bayesian statistics the investigator has to specify a prior distribution
for a parameter, before the experiment or regression that is to update that
distribution. A diffuse prior is a distribution of the parameter with equal
probability for each possible value, coming as close as possible to
representing the notion that the analyst hasn't a clue about the value of the
parameter being estimated.
Source: Posts to the newsgroup sci.econ.research by moderator AK and
ezivot@u.washington.edu, responding to a question by Herbert M Gintis circa
Feb 5, 1999.
Contexts: statistics
discount factor:
In a multi-period model, agents may have different utility functions for
consumption (or other experiences) in different time periods. Usually in such
models they value future experiences, but to a lesser degree than present
ones. For simplicity the factor by which they discount next period's utility
may be a constant between zero and one, and if so it is called a discount
factor. One might interpret the discount factor not as a reduction in the
appreciation of future events but as a subjective probability that the agent
will die before the next period, and so discounts the future experiences not
because they aren't valued, but because they may not occur.
A present-oriented agent discounts the future heavily and so has a LOW
discount factor. Contrast discount rate and
future-oriented.
In a discrete time model where agents discount the future by a factor of b,
one usually lets b=1/(1+r) where r is the discount rate.
Contexts: models
discount rate:
At least two meanings:
(1) The interest rate at which an agent discounts future events in
preferences in a multi-period model. Often denoted r.
A present-oriented agent discounts the future heavily and so has a HIGH
discount rate. Contrast 'discount factor'. See also 'future-oriented'.
In a discrete time model where agents discount the future by a factor of b,
one finds r=(1-b)/b, following from b=1/(1+r).
(2) The Discount Rate is the name of the rate at which U.S. banks can borrow
from the U.S. Federal Reserve.
Contexts: finance; models; institutions
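The conversion between the two notions, b = 1/(1+r) and r = (1-b)/b, can be checked directly (a trivial sketch; names are my own):

```python
# Conversion between a discount rate r and a discount factor b:
# b = 1 / (1 + r)  and inversely  r = (1 - b) / b.
def factor_from_rate(r):
    return 1.0 / (1.0 + r)

def rate_from_factor(b):
    return (1.0 - b) / b

b = factor_from_rate(0.05)     # about 0.952: next period is valued less
r = rate_from_factor(b)        # recovers the 5% discount rate
```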
discrete choice linear model:
An econometric model: Pr(yi=1) = F(Xi'b) =
Xi'b
Contexts: econometrics; estimation
discrete choice model:
An econometric model in which the actors are presumed to have made a choice
from a discrete set. Their decision is modeled as endogenous. Often the
choice is denoted yi.
Contexts: econometrics; estimation
discrete regression models:
Econometric models in which the dependent variable assumes discrete
values.
Source: Maddala, p. 13
Contexts: econometrics
diseconomies of scale:
Like economies of scale but with the implication that they are
negative, so larger scale would increase cost per unit.
disintermediation:
Prevention, as an effect of regulations, of banks from channeling money from
savers to borrowers; e.g., the U.S. home mortgage market is partly blocked
from banks and left to savings and loan institutions.
Source: Branson
Contexts: macro
dismal science:
Refers to the field of economics. The term continues to be used perhaps
because economics is so often about tradeoffs and is therefore said to be
depressing to study.
This pejorative term was coined in the 1800s partly because of the assumption
by economists such as J.S. Mill that persons are similar and their differences
in behavior can often be traced to institutions and incentives. This was
thought to be a dismal attitude by a person who believed that races of people
had very different inborn capabilities and attitudes. See
http://www.econlib.org/library/Columns/LevyPeartdismal.html for more of
the history.
Contexts: phrases
distribution function:
A synonym for cdf.
Contexts: econometrics; statistics
Divisia index:
A continuous-time index number. "The Divisia index is a weighted sum of
growth rates, where the weights are the components' shares in total
value." -- Hulten (1973, p. 1017)
See also
http://www.geocities.com/jeab_cu/paper2/paper2.htm.
Source: Hulten, 1973;
Hulten, 2000;
Richter, 1966
Contexts: index numbers; macro
DOJ:
Abbreviation for the U.S. national Department of Justice, which conducts,
among other things, investigations into violations of antitrust law. See also
FTC.
Contexts: IO; regulation; antitrust
dollarization:
Widespread use, or routine official government use, of US dollars in a country
in place of that country's own currency.
Source: Officer, Lawrence H., "Review of Kurt Schuler Currency Boards and
Dollarization" Economic History Services, Nov 14, 2003, URL:
http://www.eh.net/bookreviews/library/0705.shtml;
http://www.dollarization.org;
Domar aggregation:
This seems to be the principle that the growth rate of an aggregate is the
weighted average of the growth rates of its components, where each component
is weighted by the share of the aggregate it makes up. The idea comes up in
the context of national accounts and national statistics.
Contexts: macro; measurement; government
dominant design:
After a technological innovation and a subsequent era of ferment in an
industry, a basic architecture of product or process that becomes the accepted
market standard. From Abernathy & Utterback 1978, cited by A&T 1991. Dominant
designs may not be better than alternatives nor innovative. They have the
benchmark features to which subsequent designs are compared. Examples include
the IBM 360 computer series and Ford's Model T automobile, and the IBM
PC.
Source: Abernathy. 1978.;
Philip Anderson and Michael L. Tushman, Research-Technology Management,
May/June 1991, pp 26-31.
Contexts: IO; business history; technology; management
Donsker's theorem:
Synonymous with Functional Central Limit Theorem (FCLT).
Source: Richardson and Stock (1989)
Contexts: time series
double coincidence of wants:
phrasing from Jevons (1893). "[T]he first difficulty in barter is to
find two persons whose disposable possessions mutually suit each other's
wants. There may be many people wanting, and many possessing those things
wanted; but to allow of an act of barter there must be a double coincidence,
which will rarely happen." That is, paraphrasing Ostroy and Starr, 1990,
p 26, the double coincidence is the situation where the supplier of good A
wants good B and the supplier of good B wants good A.
The point is that the institution of money gives us a more flexible approach
to trade than barter, which has the double coincidence of wants
problem.
Source: Ostroy and Starr, 1990, p 26
Contexts: money; models
dummy variable:
In an econometric model, a variable that marks or encodes a particular
attribute. A dummy variable has the value zero or one for each observation,
e.g. 1 for male and 0 for female. Same as indicator variables or binary
variables.
Contexts: econometrics
dumping:
An informal name for the practice of selling a product in a foreign country
for less than either (a) the price in the domestic country, or (b) the cost of
making the product. It is illegal in some countries to dump certain products
into them, because they want to protect their own industries from such
competition.
Contexts: trade
Durbin's h test:
An algorithm for detecting autocorrelation in the errors of a time series
regression. The implicit citation is to Durbin (1970). The h statistic is
asymptotically normally distributed under the hypothesis that there is no
autocorrelation.
Source: SHAZAM
manual
Contexts: estimation; time series; econometrics
Durbin-Watson statistic:
A test for first-order serial correlation in the residuals of a time series
regression. A value of 2.0 for the statistic indicates that there is no
serial correlation. For tables to interpret the statistic see Greene pgs
738-743, and context discussing them is on pages 424-425.
This result is biased toward the finding that there is no serial
correlation if lagged values of the regressors are in the regression.
Formally, the statistic is:
d = (sum from t=2 to t=T of: (et-et-1)2) / (sum
from t=1 to t=T of: et2)
where the series of et are the residuals from a regression.
Source: Greene, 1993, p 423-4
Contexts: time series; estimation; econometrics
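The statistic is straightforward to compute from a series of residuals (an illustrative sketch; the residual series below are made up):

```python
# Durbin-Watson statistic from a list of regression residuals:
# d = sum_{t=2..T} (e_t - e_{t-1})^2  /  sum_{t=1..T} e_t^2.
def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(x ** 2 for x in e)
    return num / den

# Persistent residuals give d below 2 (positive serial correlation);
# alternating residuals give d above 2 (negative serial correlation).
d_pos = durbin_watson([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])   # 2/3
d_neg = durbin_watson([1.0, -1.0, 1.0, -1.0])              # 3.0
```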
dyadic map:
synonym for dyadic transformation.
Source: Domowitz and Muus, 1992, p 2849
Contexts: dynamical systems; chaos
dyadic transformation:
For whole numbers t and initial value x0 in [0,1], consider the
mapping:
xt+1 = (2xt) mod 1
"This law of motion is a standard example of chaotic
dynamics. It is commonly known as the dyadic transformation. It is
mixing (and hence also ergodic)."
-- Domowitz and Muus, 1992, p 2849
All the xt's will be in [0,1]. Their distribution will depend on
the initial value x0. If x0 is rational, the mapping
will eventually become periodic (for large enough values of t). If
x0 is irrational, the mapping is never periodic.
Source: Domowitz and Muus, 1992, p 2849
Contexts: dynamical systems; chaos
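The rational/irrational distinction can be checked with exact arithmetic (my own sketch; ordinary floating point would shift bits away and collapse every orbit to 0):

```python
from fractions import Fraction

# Iterate the dyadic transformation x_{t+1} = (2 * x_t) mod 1 using
# exact rational arithmetic, so periodicity is visible.
def dyadic_orbit(x0, steps):
    orbit = [x0]
    for _ in range(steps):
        orbit.append((2 * orbit[-1]) % 1)
    return orbit

orbit = dyadic_orbit(Fraction(1, 3), 4)
# 1/3 -> 2/3 -> 1/3 -> 2/3 -> 1/3 : periodic, as for any rational x0
```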
dynamic:
means "changing over time".
dynamic inconsistency:
A possible attribute of a player's strategy in a dynamic decision-making
environment (such as a game).
When the best plan that a player can make for some future period will not be
optimal when that future period arrives, the plan is dynamically inconsistent.
In one stylized example, addicted smokers face this problem -- each day, their
best plan is to smoke today, and to quit (and suffer) tomorrow in order to get
health benefits subsequently. But the next day, that is once again the best
plan, so they do not quit then either. (In a model this can come about if the
planner values the present much more than the near future, -- that is, has a
low short-run discount factor -- but has a higher discount factor between two
future periods.)
Monetary policy is sometimes said to suffer from a dynamic inconsistency
problem. Government policymakers are best off to promise that there will be
no inflation tomorrow. But once agents and firms in the economy have fixed
nominal contracts, the government would get seigniorage revenues from
raising the level of inflation.
Source: Cukierman, 1992; Kydland and Prescott,
1977
Contexts: macro; money; game theory; dynamic optimization
dynamic multipliers:
The impulse responses in a distributed lag model.
Source: M.W. Watson, Ch 47, Handbook of Econometrics, p, 2899.
Contexts: econometrics; macro
dynamic optimization:
Relevant terms: Bellman equation,
chaotic,
dynamic inconsistency,
dynamic optimizations,
dynamic programming,
dynamical systems,
hyperbolic discounting,
quasi-hyperbolic discounting,
time inconsistency.
Contexts: fields
dynamic optimizations:
maximization problems to which the solution is a function; equivalently,
optimization problems in infinite-dimensional spaces.
Source: Stokey and Lucas, 1989
Contexts: macro; models; dynamic optimization
dynamic programming:
The study of dynamic optimization problems through the analysis of
functional equations like value equations.
This phrase is normally used, analogously to linear programming to
describe the study of discrete problems; e.g. those for which a decision must
be made at times t for t=1,2,3,...
Source: Stokey and Lucas, 1989, p 14
Contexts: macro; models; dynamic optimization
dynamical systems:
The branch of mathematics describing processes in motion. Some are
predictable and others are not. Two reasons a process might be unpredictable
are that it might be random, and it might be chaotic.
Source: Devaney, 1992, p 1-2
Contexts: mathematics; dynamic optimization
EBIT:
Stands for "earnings before interest and taxes" which is used as a
measure of earnings performance of firms that is not clouded by changes in
debt or equity types, or tax rules.
Contexts: accounting; finance
EBITDA:
An accounting measure of a private company's overall financial performance in
a period of time. Used in the U.S. but may not be used elsewhere [ed.: I
don't know]. Stands for Earnings (or loss) before Interest, Taxes,
Depreciation, and Amortization. The figure is in currency units.
Source: balance sheets
Contexts: accounting; business
EconLit:
An electronic bibliography of economics literature organized by the American
Economics Association, derived partly from the Journal of Economic Literature.
EconLit is made available through libraries and universities.
See http://www.econlit.org for more
information.
Source: http://www.econlit.org
Contexts: data; journals
econometric model:
An economic model formulated so that its parameters can be estimated if one
makes the assumption that the model is correct.
Contexts: estimation; econometrics
Econometrica:
A journal whose web site is at
http://www.econometricsociety.org/es/journal.html
.
Contexts: journals
econometrics:
Relevant terms: 2SLS,
3SLS,
acceptance region,
adapted,
AIC,
Akaike's Information Criterion,
almost surely,
alternative hypothesis,
AR,
ARIMA,
ARMA,
asymptotic,
asymptotic variance,
asymptotically equivalent,
asymptotically unbiased,
augmented Dickey-Fuller test,
autocorrelation,
autocovariance,
autocovariance matrix,
autoregressive process,
avar,
bandwidth,
Bayesian analysis,
bias,
bootstrapping,
Box-Cox transformation,
Box-Jenkins,
Breusch-Pagan statistic,
Burr distribution,
BVAR,
calibration,
Cauchy distribution,
cdf,
censored dependent variable,
characteristic function,
Cholesky decomposition,
Cholesky factorization,
Chow test,
CLAD,
Cochrane-Orcutt estimation,
coefficient of determination,
cointegration,
condition number,
conformable,
consistent,
control for,
convergence in quadratic mean,
correlation,
covariance stationary,
Cowles Commission,
Cramer-Rao lower bound,
criterion function,
critical region,
cross-section data,
cross-validation,
cubic spline,
decomposition theorem,
delta method,
density function,
Dickey-Fuller test,
discrete choice linear model,
discrete choice model,
discrete regression models,
distribution function,
dummy variable,
Durbin's h test,
Durbin-Watson statistic,
dynamic multipliers,
econometric model,
efficiency,
eigenvalue decomposition,
Epanechnikov kernel,
ergodic,
error-correction model,
essentially stationary,
estimator,
exclusion restrictions,
expectation,
expected value,
exponential family,
F distribution,
F test,
FGLS,
FIML,
Fisher consistency,
Fisher information,
Fisher transformation,
fixed effects estimation,
FWL theorem,
Gaussian,
Gaussian kernel,
Gaussian white noise process,
generalized linear model,
GEV,
Gibbs sampler,
GLS,
GMM,
Granger causality,
Grenander conditions,
Hansen's J test,
Hausman test,
hedonic,
heterogeneous process,
heteroscedastic,
heteroskedastic,
homoscedastic,
homoskedastic,
Huber standard errors,
Huber-White standard errors,
idempotent,
identification,
IIA,
iid,
ILS,
impulse response function,
inadmissible,
incidental parameters,
independent,
indicator variable,
information matrix,
information number,
instrumental variables,
instruments,
integrated,
inverse Mills ratio,
invertibility,
is consistent for,
IV,
J statistic,
jackknife estimator,
k-nearest-neighbor estimator,
Kalman filter,
Kalman gain,
kernel estimation,
kernel function,
kitchen sink regression,
KLIC,
knots,
Kolmogorov's Second Law of Large Numbers,
Kronecker product,
Kruskal's theorem,
kurtosis,
LAD,
LAN,
large sample,
likelihood function,
limited dependent variable,
LIML,
Lindeberg-Levy Central Limit Theorem,
linear model,
linear probability models,
linear regression,
link function,
locally identified,
log-concave,
log-convex,
logistic distribution,
logit model,
lognormal distribution,
loss function,
m-estimators,
MA,
MA(1),
main effect,
maintained hypothesis,
MAR,
marginal significance level,
martingale,
martingale difference sequence,
maximum score estimator,
mean square error,
mean squared error,
method of moments,
MGF,
mixing,
MLE,
moment-generating function,
Monte Carlo simulations,
Moore-Penrose inverse,
MSE,
multinomial,
multinomial logit,
multinomial probit,
multivariate,
Nadaraya-Watson estimator,
NLLS,
noncentral chi-squared distribution,
nonergodic,
nonparametric estimation,
normal distribution,
null hypothesis,
ocular regression,
OLS,
omitted variable bias,
Op(1),
order condition,
order of a kernel,
order of a sequence,
Ox,
p value,
panel data,
parametric,
Pareto chart,
Pareto distribution,
partially linear model,
partition,
pdf,
Phillips-Perron test,
polychotomous choice,
power,
Prais-Winsten transformation,
precision,
predetermined variables,
probability function,
probit model,
pseudoinverse,
Q-statistic,
QLR,
QML,
quartic kernel,
quasi-differencing,
quasi-maximum likelihood,
R-squared,
random,
random effects estimation,
random process,
random walk,
Rao-Cramer inequality,
reduced form,
regression function,
rejection region,
restricted estimate,
restriction,
Riemann-Stieltjes integral,
robust smoother,
roughness penalty,
Sargan test,
scatter diagram,
scedastic function,
Schwarz Criterion,
score,
second moment,
semi-nonparametric,
semilog,
sieve estimators,
significance,
significance level,
simultaneous equation system,
size,
SLLN,
SMA,
smoothers,
smoothing,
SNP,
sparse,
spatial autocorrelation,
spectral decomposition,
spectrum,
spline function,
spline regression,
spline smoothing,
statistic,
stochastic,
strict stationarity,
strictly stationary,
strong law of large numbers,
strongly consistent,
strongly dependent,
strongly ergodic,
structural break,
structural change,
structural moving average model,
structural parameters,
structure,
SUR,
SURE,
survival function,
SVAR,
symmetric,
t distribution,
t statistic,
test for structural change,
test of identifying restrictions,
test statistic,
time-varying covariates,
tobit model,
trace,
translog,
transpose,
treatment effects,
triangular kernel,
truncated dependent variable,
Tukey boxplot,
two stage least squares,
type I error,
type I extreme value distribution,
type II error,
unbalanced data,
unbiased,
uncorrelated,
under the null,
uniform kernel,
uniform weak law of large numbers,
unit root,
unit root test,
univariate,
univariate binary model,
unrestricted estimate,
UVAR,
UWLLN,
VAR,
variance,
variance decomposition,
vec,
Wallis statistic,
wavelet,
weak law of large numbers,
weak stationarity,
weakly consistent,
weakly dependent,
weakly ergodic,
weighted least squares,
white noise process,
White standard errors,
within estimator,
WLLN,
Wold decomposition,
Wold's theorem.
Contexts: fields
economic discrimination:
in labor markets: the presence of different pay for workers of the same
ability who are in different groups, e.g. black, white; male,
female.
Source: Aigner and Cain, editor's comments, p 175
economic environment:
In a model, a specification of preferences, technology, and the stochastic
processes underlying the forcing variables.
Source: Hansen and Singleton, 1982
Contexts: models
economic growth:
Paraphrasing directly from Mokyr, 1990: Economic growth has four basic
causes:
1) Investment, meaning increases in the capital stock (Solovian growth)
2) Increases in trade (Smithian growth)
3) Size or scale effects, e.g. by overcoming fixed costs, or achieving
specialization
4) Increases in knowledge, most of which is called technological progress
(Schumpeterian growth).
Further elaboration is in Mokyr's book.
Source: Mokyr, 1990, p. 4-6.
Contexts: history; macro
economic sociology:
Piore (1996) writes of two definitions of economics, a narrow one organized
around optimization and a broad one organized around scarcity, and suggests
that the subjects included by the larger one but not in the smaller one are
the subjects of economic sociology discussed in the Handbook
(1994).
More specifically, the broad definition of economics is "the study of how
people employ scarce resources and distribute them over time and among
competing demands" paraphrasing Paul Samuelson (1961). The narrower
definition is from Gary Becker (1976): "The combined assumptions of
maximizing behavior, market equilibrium, and stable preferences, used
relentlessly and unflinchingly . . . [B]ehavior [of] participants who maximize
their utility from a stable set of preferences and accumulate an optimal
amount of information and other inputs in a variety of markets."
A bit more specifically -- optimization and formal equilibrium are not natural
subjects or methods of economic sociology, but the general subjects of
economics are. Economic sociology is more likely than economics to use groups
or organizations rather than individuals as units of analysis. The practical
definition seems to be evolving over time.
Source: Piore, Michael J. "Review of The Handbook of Economic
Sociology," Journal of Economic Literature XXXIV (June 1996),
pp. 741-754, esp. p 741-2.
Samuelson, Paul A. 1961. Economics, an introductory analysis. 5th
edition. New York: McGraw-Hill. p. 5.
Becker, Gary. 1976. The economic approach to human behavior.
Chicago: U. of Chicago Press.
Contexts: sociology; fields
economies of scale:
Usually one says there are economies of scale in production if cost per unit
made declines with the number of units produced. It is a descriptive,
quantitative term. One measure of the economies of scale is the cost per unit
made. There can be analogous economies of scale in marketing or distribution
of a product or service too. The term may apply only to certain ranges of
output quantity.
Contexts: production theory
ECU:
European Currency Unit
Editor's comment on time series:
A frequent and dangerous mistake for those not familiar with this language is
to think that discussions of 'time series' are about data values in a sample.
Actually, they are about probability distributions. It has taken this author
years to get used to that, which may just be normal.
An example of the error is to think that a discussion about E[Xt]
is testable or measurable. Usually it's not. It's assumed in the discussion.
A sample has a computable mean, but whether a time series has a trend,
or a unit root, or heteroskedasticity are statements about a conjectured
process, not statements about data.
With thanks to: P B Meyer, pbmeyer@nwu.edu
education production function:
Usually a function mapping quantities of measured inputs to a school and
student characteristics to some measure of school output, like the test scores
of students from the school.
For empirical purposes one might assume this function is linear and generate
the linear regression:
Y = X'b + S'c + e
where Y is a measure of school outputs like a vector of student test scores, X
is a set of measures of student attributes (collectively or individually), S
is a vector of measures of the schools those students attend, b and c are
coefficients, and e is a disturbance term.
Source: I am advised that one should look for a survey in the JEL around 1985
by Eric Hanushek.
Contexts: education; labor
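A sketch of estimating b and c by least squares on simulated data (all names and figures below are illustrative, not from any study):

```python
import numpy as np

# Simulate hypothetical data: an outcome Y (e.g. test scores) that is a
# linear function of one student attribute X and one school measure S.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)                 # student attribute
S = rng.normal(size=n)                 # school attribute
e = rng.normal(scale=0.1, size=n)      # disturbance term
Y = 2.0 * X + 0.5 * S + e              # true b = 2.0, c = 0.5

# Estimate the coefficients b and c by ordinary least squares.
Z = np.column_stack([X, S])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
b_hat, c_hat = coef                    # close to 2.0 and 0.5
```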
EEH:
An abbreviation for the journal Explorations in Economic
History.
Contexts: journals
EER:
An abbreviation for European Economic Review.
Contexts: journals
effective labor:
In the context of a Solow model, if labor time is denoted L and labor's
effectiveness, or knowledge, is A, then by effective labor we mean AL. In
general it means 'efficiency units' of labor or 'productive effort' as opposed
to time spent.
Source: Romer, 1996, p 7
Contexts: macro
efficiency:
Has several meanings. Sometimes used in a theoretical context as a synonym
for Pareto efficiency. Below is the econometric/statistical
definition.
Efficiency is a criterion by which to compare unbiased estimators. For scalar
parameters, one estimator is said to be more efficient than another if the
first has smaller variance. For multivariate estimators, one estimator is
said to be more efficient than another if the covariance matrix of the second
minus the covariance matrix of the first is a positive semidefinite matrix.
Sometimes properties of the most efficient estimator can be computed; see
efficiency bound.
Computation of efficiency is defined on the basis of assumed distributions of
errors ('disturbance terms'). It is not calculated directly on the basis of
sample information unless the sample information comes from a simulation where
the actual error distribution was known.
Source: Davidson and MacKinnon, 1993, p 95; Greene,
1993, p 93
Contexts: econometrics; estimation; statistics
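A Monte Carlo sketch of the comparison, using a standard textbook example assumed here: for normal data the sample mean is a more efficient estimator of the location parameter than the sample median.

```python
import numpy as np

rng = np.random.default_rng(1)
reps, n = 5000, 100
# Many simulated samples from a known error distribution.
draws = rng.normal(loc=0.0, scale=1.0, size=(reps, n))

# Sampling variance of each estimator across the simulated samples.
var_mean = np.var(draws.mean(axis=1))          # about 1/n
var_median = np.var(np.median(draws, axis=1))  # about (pi/2)/n, larger

# The mean has the smaller variance, so it is the more efficient of the two.
```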
efficiency bound:
The minimum possible variance for an estimator given the statistical
model in which it applies. An estimator which achieves this variance is
called efficient.
Source: Tripathi, 1996, p 4
Contexts: econometrics, statistics
efficiency units:
Usually interpretable as "output per worker per hour."
More generally: An abstract measure of the amount produced for a constant
production technology by a worker in some time period. Often the context is
theoretical and the time period and production technology do not have to be
specified.
But efficiency units can be conceived of (and theorized about) as a function
of each worker's characteristics, of the vintage of equipment, of the date in
history, of the production technology, and so forth.
Contexts: labor; macro
efficiency wage hypothesis:
The hypothesis that workers' productivity depends positively on their wages.
(For reasons this might be the case see the entry on efficiency
wages.)
This could explain why employers in some industries pay workers more than
employers in other industries do, even if the workers have apparently
comparable qualifications and jobs. A contrasting explanation is that of
hedonic models in which these differentials are explained by quality
differences in the jobs.
Source: Lawrence F. Katz, "Efficiency Wage Theories: A Partial
Evaluation", NBER Macroeconomic Annual 1986, p 235.
Contexts: labor; models
efficiency wages:
A higher than market-clearing wage set by employers to, for example:
-- discourage shirking by raising the cost of being fired
-- encourage worker loyalty
-- raise group output norms
-- improve the applicant pool
-- raise morale
Labor productivity in efficiency wage models is positively related to
wage.
By contrast, consider models in which the wage is equal to labor productivity
in equilibrium, or models in which wages are set to reduce the likelihood of
unionization (union threat models). In these, productivity is not a function
of the wage.
Contexts: labor; models
efficient:
A description of either:
-- an allocation that is Pareto efficient
or
-- an estimator that has the minimum possible variance given the
statistical model; see efficiency bound.
Source: Tripathi, 1996, p 4
Contexts: econometrics, statistics
efficient markets hypothesis:
"A market in which prices always 'fully reflect' available information is
called 'efficient.'" -- Fama, p. 383
Source: "Efficient Capital Markets: a review of theory and empirical
work," Journal of Finance, 1970, pp. 383-417
Contexts: finance
EGARCH:
Exponential GARCH. The EGARCH(p,q) model is attributed to Nelson
(1991).
Source: Nelson, 1991
Contexts: finance; statistics
eigenvalue:
An eigenvalue or characteristic root of a square matrix A is a scalar L that
satisfies the equation:
det [ A - LI ] = 0
where "det" is the operator that takes a determinant of its
argument, and I is the identity matrix with the same dimensions as
A.
Contexts: linear algebra
eigenvalue decomposition:
Same as spectral decomposition.
Source: Greene, 1993, p 34
Contexts: econometrics
eigenvector:
For each eigenvalue L of a square matrix A there is an associated right
eigenvector, denoted b that has the dimension of the number of rows of A. The
right eigenvector satisfies:
Ab = Lb
Contexts: linear algebra
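Both definitions can be checked numerically; a small sketch with numpy (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues L solve det(A - L*I) = 0; for this matrix they are 3 and 1.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` is a right eigenvector b for the
# corresponding eigenvalue L, satisfying A b = L b.
for L, b in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ b, L * b)
```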
EJ:
An occasional abbreviation for the British academic journal Economic
Journal.
Contexts: journals
elasticity:
A measure of responsiveness. The responsiveness of behavior measured by
variable Z to a change in environment variable Y is the change in Z observed
in response to a change in Y. Specifically, this approximation is
common:
elasticity = (percentage change in Z) / (percentage change in Y)
The smaller the percentage change in Y that is practical, the better the
measure is and the closer it is to the intended theoretically perfect measure.
Elasticities are often negative, but are sometimes reported in absolute value
(perhaps for brevity) in which case the author is depending on the reader
knowing, or quickly applying, some theory. Usually the theory is the theory
of supply and demand.
Among the elasticities that show up in the economics literature are:
elasticity of quantity demanded of some product in response to a change in
price of that product -- I think this is "elasticity of demand" or "price
elasticity of demand". These are ordinarily negative, and when an author
reports a positive figure it is usually just an absolute value. A reader has
to decide whether the true value is negative; hopefully this is obvious.
elasticity of supply, which is analogous
elasticity of quantity demanded in response to a change in the potential
consumer's income -- called "income elasticity of demand". These are
normally positive.
Inventing another kind of elasticity is plausible. Doing so implies a partial
theory of behavior -- e.g. that Y creates a reason for the agent to change
behavior Z.
An elasticity can sometimes be measured in a regression. Dunn (2004)
discusses a regression of the log of a son's lifetime earnings on the log of
his father's lifetime income:
y[son i] = beta*y[father i] + epsilon
Here the beta can be called an estimate of the elasticity of the sons'
earnings to the fathers' earnings, in the population. Dunn writes, "Viewed
across individuals [beta] is the fraction of the earnings difference between
fathers that is typically observed between their sons." One might or might
not want to view this relationship as causal.
Source: Dunn, Christopher. Jan 2004. "Intergenerational Transmission of
Lifetime Earnings: New Evidence from Brazil". Working paper from University
of Michigan Department of Economics and Population Studies Center.
Contexts: micro; macro; measurement
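The percentage-change approximation can be computed directly; a sketch with hypothetical demand figures:

```python
def elasticity(z0, z1, y0, y1):
    """Approximate elasticity: percentage change in Z over
    percentage change in Y, between two observed points."""
    return ((z1 - z0) / z0) / ((y1 - y0) / y0)

# Hypothetical example: price rises from 10 to 11 (+10 percent) and
# quantity demanded falls from 100 to 95 (-5 percent), so the price
# elasticity of demand is -0.05 / 0.10 = -0.5.
print(elasticity(100, 95, 10, 11))
```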
elasticity of substitution:
As measured in Broda and Weinstein (2005):
An elasticity of substitution is a scalar equal to or greater than one which
measures the effect on consumption of each of two goods if the price of the
other changes. (See elasticity for a definition of its
measurement.)
If an elasticity is much larger than one, it suggests the two goods are nearly
interchangeable; they are close substitutes. If it is near one, they are not
close substitutes, perhaps because they are substantively different, or differ
greatly in quality, or (empirically) because goods have not actually been
classified as the econometrician has assumed.
Sometimes an elasticity of substitution is assumed in a demand function
without being measured, but for purposes of building a theory.
Source: Broda and Weinstein. 2005. Globalization and the gains from variety.
Aug 2005 working paper. especially circa p.14
Contexts: demand; estimation
EMA:
An occasional abbreviation for the journal Econometrica.
Contexts: journals
embedding effect:
The tendency of some contingent valuation survey responses to be
similar across different survey questions in conflict with theories about what
is valued in the utility function.
An example from Diamond and Hausman (1994): A survey might come up with a
willingness-to-pay amount that was the same for either (a) one lake or (b)
five lakes which include the one that was asked about individually. If lakes
have some utility value to the respondent, one would have expected that five
lakes would be worth more than one. Possibly the difference arises because
the respondent was not expressing a specific preference for the first lake,
and/or was not taking a budget constraint into account. Diamond and Hausman
argue that for this reason among others contingent valuation surveys cannot
arrive at good estimates for values of public goods.
Source: Kahneman and Knetsch, 1992; Diamond and Hausman,
1994
Contexts: public finance
embodied:
An attribute of the way technological progress affects productivity. In Solow
(1956), any improvement in technology instantaneously affects the productivity
of all factors of production. In Solow (1960), however, productivity
improvements were a property only of new capital investment. In the second
case we say the technologies are embodied in the new equipment, but in the
first case they are disembodied.
Source: Mortensen, Job Reallocation paper, Feb 1997
Contexts: macro
employment-at-will:
Describes an employment contract which gives the employer the
authority to end the employment relationship at any time without specific
justification.
EMS:
European Monetary System -- founded in 1979, its purpose was to reduce
currency fluctuations, and evolved toward offering a common currency.
Contexts: organizations; money
EMU:
European Monetary Union.
endogenous:
A variable is endogenous in a model if it is at least partly a function of
other parameters and variables in the model. Contrast exogenous.
Contexts: phrases
endogenous growth model:
An endogenous growth macro model is one in which the long-run growth rate of
output per worker is determined by variables within the model, not an
exogenous rate of technological progress as in a neoclassical growth
model like those following from Ramsey (1928), Solow (1956), Swan (1956),
Cass (1965), Koopmans (1965).
Influential early endogenous growth models are Romer (1986), Lucas (1988), and
Rebelo (1991). See the sources for this entry for more information.
Hulten (2000) says "What is new in endogenous growth theory is the assumption
that the marginal product of (generalized) capital is constant, rather than
diminishing as in classical theories." Generalized capital includes the
result of investments in research and development (R&D).
Source: Barro and Sala-i-Martin, 1995, pp. 10-12;
Romer, 1996, p 100;
Hulten, 2000, p 37
Contexts: macro; growth
endowment:
In a general equilibrium model, an individual's endowment is a vector made up
of quantities of every possible good that the individual starts out
with.
Contexts: general equilibrium; models
energy intensity:
energy consumption relative to total output (GDP or GNP).
Source: Rosenberg, Nathan. 1994. Exploring the Black Box. p 167-8.
Engel curve:
On a graph with good 1 on the horizontal axis and good 2 on the vertical axis,
envision a convex indifference curve, and a diagonal budget constraint that
meets it at one point. Now move the budget constraint in and out and mark the
points where the tangencies with indifference curves are. The locus of such
points is the Engel curve -- it's the mapping from wealth into the space of
the two goods. That is, the Engel curve is (x(w), y(w)) where w is wealth and
x() and y() are the amounts of each of the goods purchased at those levels of
wealth.
Hardle (1990) p 18 defines the Engel curve as the graph of average expenditure
(e.g. on food) as a function of income. And on p 118, defines food
expenditure as a function of total expenditure.
The name refers to 19th century Prussian statistician Ernst Engel, according
to Fogel (1979).
Source: Hardle, 1990, p 18, 118
R. W. Fogel, "Notes on the social saving controversy," Journal of
Economic History vol XXXIX, No. 1 (March 1979) page 2.
Contexts: models
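A worked sketch for Cobb-Douglas utility, a standard case assumed here for illustration: with u(x, y) = x^a * y^(1-a) and prices p1, p2, the demands are x(w) = a*w/p1 and y(w) = (1-a)*w/p2, so the Engel curve is a ray from the origin.

```python
# Illustrative parameters, not from the text.
a, p1, p2 = 0.3, 1.0, 2.0

def engel_point(w):
    """The point (x(w), y(w)) on the Engel curve at wealth w,
    for Cobb-Douglas demands."""
    return (a * w / p1, (1 - a) * w / p2)

# Tracing the locus as wealth varies: each point lies on the same ray.
points = [engel_point(w) for w in (10.0, 20.0, 30.0)]
```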
Engel effects:
Changes in commodity demands by people because their incomes are rising. A
generalization of Engel's law.
Source: Williamson and Lindert, 1980, p 179
Contexts: labor; macro; micro
Engel's law:
The observation that "the proportion of a family's budget devoted to food
declines as the family's income increases."
See also Engel effects.
Source: Timmer, Falcon, and Pearson, 1983/1985, p 56
Contexts: micro; stylized facts
entrenchment:
A possible description of the actions of managers of firms. Managers can make
investments that are more valuable under themselves than under alternative
managers. Those investments might not maximize shareholder value. So
shareholders have a moral hazard in contracting with managers.
Or, in the phrasing of Weisbach (1988): "Managerial entrenchment occurs
when managers gain so much power that they are able to use the firm to further
their own interests rather than the interests of shareholders."
The abstract to Shleifer and Vishny, 1989, p 123, is nicely
explicit: "By making manager-specific investments, managers can reduce
the probability of being replaced, extract higher wages and larger
perquisities from shareholders, and obtain more latitude in determining
corporate strategy."
Source: Shleifer and Vishny, 1989, p 123; Weisbach,
1988; Demsetz, 1983
Contexts: corporate finance; theory of the firm
EOE:
European Options Exchange
Contexts: organizations
Epanechnikov kernel:
The Epanechnikov kernel is this function: (3/4)(1-u²) for
-1<u<1 and zero for u outside that range. Here u=(x-xi)/h,
where h is the window width, the xi are the values of the
independent variable in the data, and x is the value of the scalar
independent variable for which one seeks an estimate.
For kernel estimation.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
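A sketch of the kernel and of a kernel-weighted average at one evaluation point; the data and window width are illustrative, and the weighting scheme follows the Nadaraya-Watson form:

```python
import numpy as np

def epanechnikov(u):
    """(3/4)(1 - u^2) on -1 < u < 1, zero outside."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) < 1, 0.75 * (1.0 - u**2), 0.0)

def kernel_estimate(x, xi, yi, h):
    """Kernel-weighted average of yi at evaluation point x,
    with window width h."""
    w = epanechnikov((x - xi) / h)
    return np.sum(w * yi) / np.sum(w)

# Illustrative data:
xi = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
yi = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
est = kernel_estimate(1.0, xi, yi, h=1.0)  # 3.0 here, by symmetry
```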
epistemic:
"Of, relating to; or involving knowledge or the act of knowing." An
economic theory might take aspects of human understanding or belief as
fundamental to economic processes or outcomes.
Source: American Heritage Dictionary, 1982, p 460
Contexts: micro theory; philosophy
epistemology:
"1. The division of philosophy that investigates the nature and origin
of knowledge. 2. A theory of the nature of knowledge."
Source: American Heritage Dictionary, 1982, p 460
Contexts: philosophy
epsilon-equilibrium:
(Usually written with a true epsilon character.)
In a noncooperative game, for any small positive number epsilon, an
epsilon-equilibrium is a profile of totally mixed strategies such that
each player gives more probability weight than epsilon only to strategies that
are best responses to the profile of strategies the others are playing.
For a more formal definition see sources. This is a rough paraphrase.
Source: Pearce, 1984, p 1037
Contexts: game theory
epsilon-proper equilibrium:
In a noncooperative game, a profile of strategies is an epsilon-proper
equilibrium if "every player is giving his better responses much more
probability weight than his worse responses (by a factor 1/epsilon), whether
or not those 'better' responses are 'best'."
-- Myerson (1978), p 78.
For a more formal definition see sources. This is a rough paraphrase.
Source: Myerson, 1978, p 78, as cited by Pearce,
1984, p 1037
Contexts: game theory
equilibrium:
Some balance that can occur in a model, which can represent a prediction if
the model has a real-world analogue. The standard case is the price-quantity
balance found in a supply and demand model. If the term is not otherwise
qualified it often refers to the supply and demand balance.
But there also exist Nash equilibria in games, search equilibria in
search models, and so forth.
Contexts: models
equity premium puzzle:
Real returns to investors from the purchases of U.S. government bonds have
been estimated at one percent per year, while real returns from stock
("equity") in U.S. companies have been estimated at seven percent
per year (Kocherlakota, 1996). General utility-based theories of asset prices
have difficulty explaining (or fitting, empirically) why the first rate is so
low and the second rate so high, not only in the U.S. but in other countries
too. The phrase equity premium puzzle comes from the framing of this
problem (why is the difference so great?) and the attention focused on it by
Mehra and Prescott (1985); sometimes the phrase risk free rate puzzle
is used to describe the closely related question: why is the bonds rate so
low? The problem can be inverted to ask: why do investors not reject the
low-returning bonds in order to buy stocks, which would then raise the price
of stocks and lower their subsequent returns?
The above is drawn from the excellent review by Kocherlakota (1996) which
surveys the substantial literature on this subject. Abbreviating further from
it: the theories against which the evidence constitutes a "puzzle"
(or paradox, which see) tend to have these aspects in common: (1)
standard preferences described by standard utility functions, (2)
contractually complete asset markets (against possible time- and
state-of-the-world contingencies), and (3) costless asset trading (in terms of
taxes, trading fees, and presumably information).
Overwhelmingly the discussion in the economics literature has focused on
expansions to the formal theory and on refinements and expansions of data
sources, rather than survey evidence. A survey of U.S. households would
answer (has answered?) the question of why they invest so little in
stocks.
[Editorial comment follows.] It is likely (but this is conjecture) that large
fractions of the population do not seriously consider investing in stocks, and
are thus not rejecting stocks because their returns are low, but rather
because they do not know how and think there are some barriers to learning
how; and/or they perceive the risks of stocks to be higher than they have
historically been; and/or they believe their savings are insufficient to
invest. These explanations suggest that as stock trading becomes easier (e.g.
over the Web, with heavy marketing and easy interfaces) the theories will fit
better because more of the population will buy stocks. Indeed, this has been
observed over the last few years. Another class of likely explanations is
that people are highly impatient to spend their income (which would conflict
with standard constant-discount-rate utility functions, but agree with the
evidence; see hyperbolic discounting). Seen this way, the puzzle is
not why the evidence looks the way it does, but the hard theoretical problem
of getting these factors into the asset pricing models.
Source: Kocherlakota, Narayana R. "The equity premium: it's still a
puzzle," Journal of Economic Literature vol XXXIV (March 1996), pp
42-71.
Mehra, Rajnish and Edward C. Prescott. "The equity premium: a
puzzle," Journal of Monetary Economics 15(2) (March 1985), pp
145-161.
Contexts: finance; macro; phrases
ergodic:
Informally: a stochastic process is ergodic if no sample helps meaningfully to
predict values that are very far away in time from that sample. Another way
to say that is that the time path of the stochastic process is not sensitive
to initial conditions.
Two events A and B (e.g. possible sets of states of the process) are ergodic
iff, taking the limit as h goes to infinity:
lim (1/h) SUM from i=1 to i=h of |Pr(A ∩ L^(-i)B) - Pr(A)Pr(B)| = 0
Here L is the lag operator. This definition is like that of 'mixing on
average'. A stochastic process is ergodic, I believe, if all possible events
in it are ergodic by this definition.
If a random process is mixing, it is ergodic.
Priestley, p 340: A process is ergodic iff 'time averages' over a single
realization of the process converge in mean square to the corresponding
'ensemble averages' over many realizations.
Example 1: Suppose xt (for integer t=0 to infinity) is drawn
iid from a standard normal distribution. Then knowing the value
of x1 doesn't help predict the value of x2, because they
are independently drawn. This time series process is ergodic.
Example 2: Suppose the process is xt=k+sin(t)+et where
k is unknown and et is a white noise error. Then any sample of
xt for a known t gives information about k and that is enough
information to make predictions at remote times in the future that are just as
good as predictions at nearby times. This process is not ergodic.
Contexts: time series; econometrics
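Priestley's characterization can be illustrated by simulation. The non-ergodic process below, a level drawn once per realization, is a standard textbook example substituted for the entry's own, to keep the contrast sharp:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000

# Ergodic: iid standard normals. The time average over one realization
# converges to the ensemble mean, 0.
iid = rng.normal(size=T)
time_avg_iid = iid.mean()

# Non-ergodic: x_t = Z + e_t, with Z drawn once and held fixed within
# the realization. The ensemble mean of x_t is 0, but the time average
# converges to Z instead.
Z = rng.normal()
nonergodic = Z + rng.normal(size=T)
time_avg_nonergodic = nonergodic.mean()
```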
ergodic properties:
means persistent properties
ergodic set:
In the context of a stochastic processes {xt}, set E is an ergodic
set if:
(i) it is a subset of the state space S of possible values of
xt,
(ii) if xt is in E, then Pr(xt+1 is in E)=1, and
(iii) no proper subset of E has the property in (ii).
Source: Stokey and Lucas, 1989, p 321
Contexts: macro; stochastic processes
ERISA:
The Employee Retirement Income Security Act of 1974, a major U.S. law which
guaranteed certain categories of employees a pension after some period at
their employer; there had been more ambiguity before about what rules an
employer could put on which employees could get a pension. Also ERISA changed
the perceived rules about whether pensions could be invested in venture
capital.
Contexts: institutions
error-correction model:
A dynamic model in which "the movement of the variables in any period is
related to the previous period's gap from long-run
equilibrium."
Source: Enders, 1995
Contexts: time series; econometrics; modelling
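A deterministic sketch of the mechanism (parameters are illustrative): each period the change in y closes a fraction of the previous period's gap from the long-run relation y = beta * x.

```python
beta = 2.0    # long-run relation: y = 2x
alpha = -0.5  # adjustment: each period closes half the gap

x_bar = 10.0  # hold the driving variable fixed
path = [0.0]  # start away from the equilibrium y = beta * x_bar = 20
for _ in range(20):
    gap = path[-1] - beta * x_bar        # previous period's gap
    path.append(path[-1] + alpha * gap)  # error-correction step

# The path converges toward the long-run equilibrium y = 20.
```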
essentially stationary:
A time series process {xt} is essentially stationary iff
E[xt²] is uniformly bounded. (from Wooldridge)
This definition may not be standard or widely used.
I believe this means that even if the variance wanders around and is different
for different t, there is a finite bound to those variances. The variance of
the distribution of xt is never infinite for any t and indeed never
exceeds that finite bound. Thus an ARCH-type process might be essentially
stationary even though its variance is not constant for all t.
Note that there are strictly stationary processes that have infinite
second moments; such processes are not essentially stationary.
Source: Wooldridge, 1995, p 2643
Contexts: time series; econometrics; statistics
estimation:
Relevant terms: 2SLS,
3SLS,
acceptance region,
AIC,
alternative hypothesis,
aML,
average treatment effect,
BHHH,
bias,
bootstrapping,
Brent method,
Breusch-Pagan statistic,
BVAR,
calibration,
censored dependent variable,
Chow test,
Cochrane-Orcutt estimation,
condition number,
consistent,
Cook's distance,
Cramer-Rao lower bound,
criterion function,
critical region,
cross-section data,
cross-validation,
cubic spline,
discrete choice linear model,
discrete choice model,
Durbin's h test,
Durbin-Watson statistic,
econometric model,
efficiency,
elasticity of substitution,
Epanechnikov kernel,
estimator,
F distribution,
F test,
FE,
FGLS,
FIML,
Fisher transformation,
fixed effects estimation,
Gaussian kernel,
Granger causality,
Hausman test,
heteroscedastic,
heteroskedastic,
Hodrick-Prescott filter,
homoscedastic,
homoskedastic,
Huber standard errors,
Huber-White standard errors,
ideal,
ILS,
impulse response function,
incidental parameters,
indicator variable,
instrumental variables,
instruments,
integrated,
IV,
jackknife estimator,
k-nearest-neighbor estimator,
Kalman filter,
kernel estimation,
kernel function,
kitchen sink regression,
likelihood function,
limited dependent variable,
LIML,
linear probability models,
linear regression,
locally identified,
logit model,
loss function,
m-estimators,
maintained hypothesis,
marginal significance level,
maximum score estimator,
mean squared error,
method of moments,
MLE,
MSE,
Nadaraya-Watson estimator,
NLLS,
NLREG,
noncentral chi-squared distribution,
nonparametric estimation,
null hypothesis,
output elasticity,
p value,
panel data,
parametric,
partially linear model,
piecewise linear,
power,
Prais-Winsten transformation,
probit model,
Q-statistic,
QML,
quartic kernel,
quasi-differencing,
quasi-maximum likelihood,
random effects estimation,
Rao-Cramer inequality,
RATS,
reduced form,
regression function,
rejection region,
restricted estimate,
restriction,
ridit scoring,
robust smoother,
roughness penalty,
S-Plus,
SAS,
scatter diagram,
score,
SHAZAM,
significance,
significance level,
simulated annealing,
simultaneous equation system,
size,
smoothers,
smoothing,
Solas,
spatial autocorrelation,
spline smoothing,
Stata,
statistic,
Statistica,
structural parameters,
SUDAAN,
SUR,
SURE,
survival function,
t distribution,
t statistic,
test for structural change,
test of identifying restrictions,
test statistic,
time-varying covariates,
tobit model,
triangular kernel,
truncated dependent variable,
TSP,
two stage least squares,
type I error,
type II error,
unbalanced data,
uniform kernel,
univariate binary model,
unrestricted estimate,
UVAR,
VAR,
variance decomposition,
VARs,
Wallis statistic,
White standard errors,
X-11 ARIMA.
Contexts: fields
estimator:
A function of data that produces an estimate for an unknown parameter of the
distribution that produced the data.
The way estimators are often discussed, they can be thought of as chosen
before the data are seen. This can be hard to understand for the person new
to the term. Properties of estimators (such as unbiasedness in finite
samples, asymptotic unbiasedness, efficiency, and consistency) are discussed
without considering any particular sample, by making assumptions about the
distribution of the data, and considering the estimator in the context of the
distributions.
Contexts: econometrics; estimation; statistics
Euler equation:
A first order condition that is across a time or state boundary. (Across a
state boundary means a tradeoff between uncertain events.)
That is, a first order condition that is a relation between a variable that
has different values in different periods or different states.
E.g.
k_t = b(1+r)k_{t+1}
is an Euler equation, but
2n_t^2 - 3k_t = 0
is not.
Contexts: macro; models
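For illustration (not from the glossary entry itself), the canonical consumption-savings case shows where such a condition comes from; b is the discount factor and r the interest rate:

```latex
% Two-period problem:  max_{c_t}  u(c_t) + b E_t[u(c_{t+1})]
%                      s.t.       c_{t+1} = (1+r)(w - c_t)
% The first-order condition in c_t links marginal utilities across the
% time boundary, so it is an Euler equation:
u'(c_t) = b\,(1+r)\,E_t\!\left[u'(c_{t+1})\right]
```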
Euler's constant:
May refer to either the natural logarithm base e, approximately
2.71828, or to the Euler-Mascheroni constant, which is approximately
0.57721566.
Contexts: mathematics
Eurodollar:
"Originally, it was a dollar-denominated deposit created either in a
European bank or in the European subsidiary of an American bank, usually
located in London." Here's why: (1) Americans overseas might want their
deposits in dollars; (2) the dollar being the most common international
currency, borrowers and lenders internationally may want to make their
accounts in it; (3) the Eurodollar market was "exempt from reserve
requirements and other regulatory costs imposed on domestic American banks.
Superior terms in the Eurodollar market attracted American borrowers and
depositors who would have otherwise patronized domestic institutions."
An example of such regulation was the US Regulation Q which limited interest
banks could pay.
Source: Glasner, p. 162
Contexts: money; history; finance
Eurosclerosis:
a name for the 'disease' of rigid, slow-moving labor markets in Europe in
contrast to fast-moving markets, e.g. in North America.
Contexts: labor; macro
even function:
A function f() is even iff f(x)=f(-x).
Contexts: real analysis
event studies:
Empirical study of prices of an asset just before and after some event, like
an announcement, merger, or dividend. Can be used to discuss whether the
market priced the information efficiently, whether there was private
information, etc.
This method was developed by Fama, Fisher, Jensen, and Roll
(1969), according to Weisbach, 1988, p 455.
Contexts: finance
evolutionary game theory:
Describes game models in which players choose their strategies through a
trial-and-error process in which they learn over time that some strategies
work better than others.
Source: Attributed to Samuelson, Larry. 1997. Evolutionary Games and
Equilibrium Selection.
Contexts: game theory; micro theory
ex ante:
Latin for "beforehand". In models where there is uncertainty that
is resolved during the course of events, the ex ante values (e.g. of expected
gain) are those that are calculated in advance of the resolution of
uncertainty.
Contexts: theory; models
ex dividend date:
Firms pay dividends to those who are shareholders on a certain date. The next
day is called the ex dividend date. People who do not own shares until the ex
dividend date do not receive the dividend. The price of the stock is often
adjusted downward before the start of trading on the ex dividend date to
compensate for this.
Contexts: finance; business
ex post:
Latin for "after the fact". In models where there is uncertainty
that is resolved during the course of events, the ex post values (e.g. of
expected gain) are those that are calculated after the uncertainty has been
resolved.
Contexts: theory; models
exact:
excess kurtosis:
Sample kurtosis minus 3, which means when 'excess kurtosis' is positive, there
is greater kurtosis than in the normal distribution.
Source: Campbell, Lo, and MacKinlay, p 17
Contexts: finance; statistics
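A small sketch of the computation (sample values are made up): the fourth standardized moment of the sample, minus 3.

```python
# Sample excess kurtosis: fourth standardized moment minus 3,
# the kurtosis of the normal distribution.
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

# A two-point sample at -1 and +1 has kurtosis 1,
# so its excess kurtosis is 1 - 3 = -2 (thinner tails than normal).
print(excess_kurtosis([-1, 1, -1, 1]))  # -2.0
```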
excess returns:
Asset returns in excess of the risk-free rate. Used especially in the context
of the CAPM. Excess returns are negative in those periods in which
returns are less than the risk-free rate. Contrast abnormal
returns.
Contexts: finance
exclusion restrictions:
In a simultaneous equation system -- the hypothesis that some of the exogenous
variables do not appear in some of the equations; often this idea is expressed
by saying the coefficient on that exogenous variable is zero. This way of
putting it may make the restriction testable, and may make a simultaneous
equation system identified.
Contexts: econometrics
exclusive dealing:
A requirement in a contract that the buyer will only buy goods of a certain
type from the stated seller.
Source: lectures and handouts of Michael Whinston at Northwestern U in
Economics D50, Winter 1998
Contexts: IO; antitrust; regulation
ExecuComp:
data set from Standard and Poors on compensation to American corporate
executives, including stock and options ownership.
Contexts: data; finance
existence value:
The value that individuals may attach to the mere knowledge of the existence
of something, as opposed to having direct use of that thing. Synonymous with
nonuse value.
For example, knowledge of the existence of rare and diverse species and unique
natural environments may have value to environmentalists who do not actually
see them.
Source: Portney, 1994; Krutilla,
1967
Contexts: public finance
exogenous:
A variable is exogenous to a model if it is not determined by other parameters
and variables in the model, but is set externally and any changes to it come
from external forces. Contrast endogenous.
Contexts: phrases
expectation:
There are several, overlapping definitions:
1) The mean of a probability distribution. If the probability distribution
function is F(x) then the mean would be calculated by integrating dF(x) over
the domain of the probability distribution function. The expectation
operator, E[], is a linear operator per Hogg and Craig,
1995, page 55.
2) In a model, the agents may have to anticipate the value of variables whose
realizations may occur in the future. The values they anticipate are often
called their expectations. The agents may generalize only from past
realizations in a way that we can call "adaptive expectations" or
they may have other information from which they hypothesize a distribution
from which the realization will be drawn. From such a distribution they can
calculate the mean value, and variance, and so forth. This process is one of
"rational expectations."
---
Note: the notation Ex[] means the expectation of the expression
taken over the random variable X. The result of the expression could still be
a random variable if there are other random variables in the
expression.
Contexts: probability; econometrics; macro
expected utility hypothesis:
That the utility of an agent facing uncertainty is calculated by considering
utility in each possible state and constructing a weighted average, where the
weights are the agent's estimate of the probability of each state.
Arrow, 1963 attributes to Daniel Bernoulli (1738) the
earliest known written statement of this hypothesis.
Source: Arrow, 1963
Contexts: modelling; utility theory
expected value:
The expected value of a random variable is the mean of its distribution.
In its technical use this word does not have exactly the same meaning as in
ordinary English. For example, people buying a lottery ticket that has a
1/10,000 chance of paying $10,000 can expect to get zero since that is
overwhelmingly the likely outcome. They can be certain they won't get $1.
But the expected value of their winnings is $1.
Having said this, it is a standard implementation of 'rational expectations'
to assume that agents behave in response to the expected values of the
distributions they face.
Contexts: econometrics; statistics
expenditure function:
e(p,u) -- the minimum income necessary for a consumer to achieve utility level
u given a vector of prices for goods p. (The consumer is presumed to get
utility from the goods.)
Source: Varian, 1992
experience:
In the context of studies of employees, length of time employed anywhere.
Sometimes narrowed to include only length of time employed in relevant jobs.
Contrast tenure.
Contexts: labor; corporate finance
exponential distribution:
A particular functional form for a continuous distribution with parameter k, a
real scalar greater than zero. It has pdf f(x) = k e^(-kx) for x >= 0.
The mean is E[x] = 1/k, and the variance is var(x) = 1/k^2. The
moment-generating function is (1 - t/k)^(-1).
Contexts: statistics
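A quick way to check these moments is to simulate. The sketch below draws from the distribution by the inverse-transform method (the rate k is illustrative) and compares the sample mean to 1/k.

```python
import math
import random

# Inverse-transform sampling: if U ~ Uniform(0,1), then x = -ln(U)/k
# is exponential with rate k. The rate value is illustrative.
k = 2.0
random.seed(0)  # fixed seed so the check is reproducible
draws = [-math.log(random.random()) / k for _ in range(200_000)]
sample_mean = sum(draws) / len(draws)
print(round(sample_mean, 3))  # close to the theoretical mean 1/k = 0.5
```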
exponential family:
A distribution is a member of the exponential family of distributions if its
log-likelihood function can be written in the form below.
ln L(q | X) = a(X) + b(q) + c_1(X)s_1(q) + c_2(X)s_2(q) + ... + c_K(X)s_K(q)
where a(), b(), and c_j() and s_j() for each j=1 to K are
functions; q is the vector of all parameters; X is
the matrix of observable data; and L() is the likelihood function as defined
by the maximum likelihood procedure.
The members of the exponential family vary from each other in a(), b(), and
the c_j()'s and s_j()'s. Most common named distributions
are members of the exponential family.
Quoting from Greene, 1997, page 149: "If the
log-likelihood function is of this form, then the functions c_j()
are called sufficient statistics [and] the method of
moments estimator(s) will be functions of them." Those estimators
will be the maximum likelihood estimators, which are asymptotically efficient
here.
Source: Greene, 1997
Contexts: statistics; econometrics
exponential utility:
A particular functional form for the utility function. Some versions of it
are used often in finance.
Here is the simplest version. Define U() as the utility function and w as
wealth. a is a positive scalar parameter.
U(w) = -e^(-aw)
is the exponential utility function.
Now consider events over time. An agent might have a utility function
mapping possible streams of consumption into utility values. Here is one way
this is often parameterized:
Define b as a constant discount factor known to the
agent. It's a scalar that is between zero and one, and usually thought of as
near one.
Define t as a time subscript that starts at zero and increases over the
integers, either to some fixed T or to infinity.
Define c(t) as the amount the agent gets to consume at each t, and {c(t)} as
the series of consumptions for all relevant t. c(t) is random here; its value
is not known, but its distribution is assumed known to the agent.
Let E[] be the expectations operator that takes means of
distributions.
Using this notation a common dynamic version of exponential utility is:
u({c(t)}) = the sum over all t of b^t E[-e^(-a c(t))]
Whether this utility function describes observed investment decisions is
discussable and testable. It is not often discussed, however. If clear
information on that becomes known to this author, it will be added here.
Most uses of the exponential utility function in finance are driven by these
aspects: (a) its analytic tractability; e.g. that it can be differentiated
with respect to choice variables that affect future wealth w or consumption
c(t); (b) for some applications it aggregates usefully, meaning that if every
agent has this exact utility function and they can buy securities then a
representative agent can be defined which also has this analytically
convenient form and for whom the securities prices would be the same. It's
convenient for computing securities prices in some abstract economies to use
that representative agent. There are "no wealth effects" -- that is, the
amount of risky securities that the agent wants to hold is not a function of
his own wealth, as long as he can borrow infinitely (which is often assumed
for tractability in these models).
Source: Huang and Litzenberger, 1988
Contexts: modelling; finance
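The "no wealth effects" point can be illustrated numerically. In the sketch below (the gambles and the parameter a are made up), E[-e^(-a(w0+x))] factors as e^(-a w0) * E[-e^(-ax)], so the ranking of gambles is the same at every initial wealth w0.

```python
import math

# Under U(w) = -exp(-a*w), the positive factor exp(-a*w0) scales every
# expected utility equally, so initial wealth cannot change the ranking.
a = 0.1  # coefficient of absolute risk aversion (illustrative)

def expected_utility(w0, gamble):
    # gamble: list of (payoff, probability) pairs
    return sum(p * -math.exp(-a * (w0 + x)) for x, p in gamble)

safe  = [(1.0, 1.0)]                 # 1 for sure
risky = [(-1.0, 0.5), (4.0, 0.5)]    # a coin flip with higher mean

prefers_risky = [expected_utility(w0, risky) > expected_utility(w0, safe)
                 for w0 in (0.0, 10.0, 1000.0)]
print(prefers_risky)  # same preference at every wealth level
```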
extended reals:
Or, extended real numbers, or extended real line. The set of reals plus the
elements infinity and minus infinity. Addition and multiplication can
generally be extended to this set; see Royden, p. 36
Source: Royden, p. 36
Contexts: real analysis
extensive margin:
Refers to the range to which a class of resources is allocated to a production
process.
Example: the number of employees an employer has.
Contrast intensive margin.
Contexts: micro; labor
externality:
An effect of a purchase or use decision by one set of parties on others who
did not have a choice and whose interests were not taken into account.
Classic example of a negative externality: pollution, generated by some
productive enterprise, affecting others who had no choice and were
probably not taken into account.
Example of a positive externality: purchasing a car of a certain model
increases demand for, and thus the availability of, mechanics who know that
kind of car, which improves the situation for others owning that model.
F distribution:
The F distribution is defined in terms of two independent chi-squared
variables. Let u and v be independently distributed chi-squared variables
with u_1 and v_1 degrees of freedom, respectively.
Then the statistic F = (u/u_1)/(v/v_1) has an F
distribution with (u_1, v_1) degrees of freedom.
As can be computed from the definition of the t distribution, the square of a
t statistic may be written
t^2 = (z^2/1)/(v/v_1), where z^2, being
the square of a standard normal variable, has a chi-squared distribution.
Thus the square of a t variable with v_1 degrees of freedom is an F variable
with (1, v_1) degrees of freedom, that is:
t^2 = F(1, v_1).
Source: Johnston, p 530-1
Contexts: econometrics; statistics; estimation
F test:
Normally a test for the joint hypothesis that a number of coefficients are
zero. Large values (greater than two?) generally reject the hypothesis,
depending on the level of significance required.
Contexts: estimation; econometrics
f.o.b.:
Indicates which services are included in a quoted price. Stands for "free on
board." Describes a price which includes the goods plus the service of loading
those goods onto some vehicle or vessel at a named location, sometimes given
in parentheses after "f.o.b."
Source: William P. Rogers. 1978. The Coal Primer.
factor analysis:
An approach to finding what mixture of underlying variables produces most of
the variation in the dependent variable.
For a more complete discussion see
http://www.statsoftinc.com/textbook/stfacan.html
factor loadings:
"A security's factor loadings are the slopes in a multiple regression of
its return on the factors."
Source: Fama 1991 p 1594
Contexts: finance
factor price equalization:
An effect observed in models of international trade -- that the prices of
inputs to ("factors of") production in different countries, like
wages, are driven towards equality in the absence of barriers to trade. This
happens among other reasons because price incentives cause countries to choose
to specialize in the production of goods whose factors of production are
abundant there, which raises the prices of the factors towards equality with
the prices in countries where those factors are not abundant. Shocks to
factor availability in a country would cause only a temporary departure from
factor price equality.
The basic theorem of this kind is attributed to Samuelson (1948) by Hanson and
Slaughter (1999) who also cite Blackorby, Schworm, and Venables (1993).
The context of the theorem is a Heckscher-Ohlin model.
Source: Hanson, Gordon H., and Matthew J. Slaughter, "The Rybczinski
theorem,
factor-price equalization, and immigration: evidence from U.S. states,"
NBER working paper 7074, April 1999. On Web at http://www.nber.org/papers/w7074
Samuelson, Paul A. 1948. "International trade and the equalization of
factor prices." Economic Journal 48: 163-184.
Blackorby, Charles, William Schworm, and Anthony Venables. 1993.
"Necessary and sufficient conditions for factor price-equalization."
Review of Economic Studies 60: 413-434.
Contexts: trade; international
factory system:
The production process of having many manufacturing steps done together in a
big building, not spread out in many places.
Background: The making of manufactured articles like clothing in Britain
before 1800 was done in a distributed way, with goods shipped around a lot
between houses and other places for the next step in the processing to get
done. Increasingly it became more common to have a building with all the
workers together, not at home, but rather a workplace where the many steps
could be done in the same space. This had several effects that raised
efficiency, meaning it produced more output for a given set of inputs:
(a) it reduced transportation costs, and
(b) it made hierarchical or mutual monitoring of the workers easier, and
(c) it allowed quicker responsive adaptation when the situation changed (e.g.
someone was sick or a machine broke down or a new machine came in).
Among the effects of the rise of the factory system was that more people
commuted from home to work. They had a separate workplace.
Contexts: history; production
fads:
The conjecture that market prices for securities take long swings away from
their fundamental values and tend to return to them.
In a time series of data this suggests that "the market price differs
from the fundamental price by a highly serially correlated fad."
This formulation attributed to Shiller(1981, 1994), Summers (1986) and Poterba
and Summers (1988) by Bollerslev and Hodrick (1992) p. 13.
Source: Bollerslev and Hodrick (1992), p 13-14.
Contexts: finance
fair trader:
Contrasted with free trader, a holder of the point of view that
one's country's government must prevent foreign companies from having
artificial advantages over domestic ones.
The term dates at least as far back as 1886 Britain, where tariffs were
recommended by one point of view expressed in a Royal Commission report "not
to countervail any natural and legitimate advantage which foreign
manufacturers may possess, but simply to prevent our own industries being
placed at an artificial disadvantage by the interference of either home or
foreign legislation...." (Carr and Taplin, p 122)
Source: Carr, J.C., and W. Taplin, assisted by A.E.G. Wright. 1962.
History of the British Steel Industry. Oxford: Basil Blackwell.
Contexts: political economy; trade
Fama-MacBeth regression:
A panel study of stocks to estimate CAPM or APT parameters.
Contexts: finance
family:
two or more persons related by blood, marriage, or adoption, and residing
together.
Source: Census Bureau, cited in Glen Cain's 1976 Handbook article
Contexts: labor; sociology
FASB:
Financial Accounting Standards Board, a private non-profit organization which
sets accounting standards for the US.
Contexts: accounting; corporate finance; data
fat-tailed:
describes a distribution with excess kurtosis.
Contexts: finance; statistics
Fatou's lemma:
Let {Xn} for n=1,2,3,... be a sequence of nonnegative real random
variables.
Then liminf_{n->infinity} E[X_n] ≥ E[liminf_{n->infinity} X_n].
Source: Durrett, 1996, p 16
Contexts: probability
FCLT:
stands for 'functional central limit theorem', and is synonymous with
Donsker's theorem.
Briefly: if {e_t} is a series of independent and mean zero random
variables, suitably scaled partial sums (from 1 to T) of the e's converge to a
standard Brownian motion process on [0,1] as T goes to infinity. See other
sources for a proper formal statement.
Source: Richardson and Stock (1989)
Contexts: time series
FDI:
Foreign Direct Investment, a component of a country's national financial
accounts. Foreign direct investment is investment of foreign assets into
domestic structures, equipment, and organizations. It does not include
foreign investment into the stock markets. Foreign direct investment is
thought to be more useful to a country than investments in the equity of its
companies because equity investments are potentially "hot money"
which can leave at the first sign of trouble, whereas FDI is durable and
generally useful whether things go well or badly.
Contexts: accounting; macro
FE:
stands for Fixed Effects estimator. That is, a linear regression in which
certain kinds of differences are subtracted out so that one can estimate the
effects of another kind of difference.
Contexts: estimation
Fed Funds Rate:
The interest rate at which U.S. banks lend to one another their excess
reserves held on deposit at the U.S. Federal Reserve.
Contexts: money; banking; institutions
FGLS:
Feasible GLS. That is, the generalized least squares estimation procedure
(see GLS), but with an estimated covariance matrix, not an assumed
one.
Contexts: econometrics; estimation
fiat money:
Money that is intrinsically useless and is used only as a medium of exchange.
Contexts: money; macro
fields:
Most terms are in one of these categories. You can click on one to see a list
of terms relevant to it.
Relevant terms: agricultural economics,
business,
cliometrics,
data,
development,
dynamic optimization,
econometrics,
economic sociology,
estimation,
finance,
game theory,
general equilibrium,
history,
information,
IO,
journals,
labor,
linear algebra,
macro,
measure theory,
models,
organizations,
phrases,
probability,
public finance,
real analysis,
statistics,
stylized facts,
time series,
transition economics.
filter:
A filter is a way of treating or adjusting data before it is analyzed.
Examples are the Hodrick-Prescott filter or Kalman filter.
More exactly, a filter is an algorithm or mathematical operation that is
applied to a time series sample to get another sample, often called the
'filtered' data. For example a filter might remove some high-frequency
effects from the data; or detrend it; or remove seasonal frequencies but leave
monthly frequencies in.
Contexts: time series; data
FIML:
Full Information Maximum Likelihood, an approach to the estimation of
simultaneous equations.
As portrayed in Johnston's book: Define A as the matrix of coefficients in
the multiple-equation model, u as the vector of residuals for each choice of
A, and s as the covariance matrix E(uu'). FIML
consists of maximizing ln(L(A, s)) with respect to
the elements of A and s.
Source: Johnston p 490-492
Contexts: econometrics; estimation
finance:
The study of securities, borrowing, and ownership.
Relevant terms: abnormal returns,
absolute risk aversion,
AGI,
Annuity formula,
APT,
ARCH,
Arrow-Pratt measure,
asset pricing models,
asset-pricing function,
basis point,
Black-Scholes equation,
bubble,
call option,
capital structure,
CAPM,
CAR,
CARs,
CCAPM,
CDE,
certainty equivalence principle,
certainty equivalent,
CES utility,
coefficient of absolute risk aversion,
coefficient of relative risk aversion,
commercial paper,
complete market,
Compustat,
conditional,
conditional variance,
conglomerate,
consumption beta,
contingent valuation,
coupon strip,
CRRA,
CRSP,
deep,
delta,
depth,
derivatives,
discount rate,
EBIT,
efficient markets hypothesis,
EGARCH,
embedding effect,
entrenchment,
equity premium puzzle,
Eurodollar,
event studies,
ex dividend date,
excess kurtosis,
excess returns,
ExecuComp,
existence value,
experience,
exponential utility,
factor loadings,
fads,
Fama-MacBeth regression,
FASB,
fat-tailed,
firm,
Fisherian criterion,
Freddie Mac,
free cash flow,
gamma (of options),
GARCH,
generalized Wiener process,
GMM,
Gordon model,
hold-up problem,
ICAPM,
IGARCH,
Ito process,
Jensen's inequality,
JF,
JFE,
LBO,
Lerman ratio,
leverage ratio,
liquid,
Ljung-Box test,
log utility,
Lucas critique,
market capitalization,
market for corporate control,
market price of risk,
MBO,
Modigliani-Miller theorem,
NASDAQ,
no-arbitrage bounds,
noise trader,
nonuse value,
NPV,
NYSE,
option,
par,
PDV,
portmanteau test,
precautionary savings,
principal strip,
pro forma,
put option,
put-call parity,
Q ratio,
quasi rents,
rents,
residual claimant,
resiliency,
risk free rate puzzle,
Roll critique,
SCF,
semi-strong form,
senior,
Sharpe ratio,
short rate,
stable distributions,
state price,
state price vector,
straddle,
strip financing,
strips,
strong form,
submartingale,
subordinated,
Survey of Consumer Finances,
team production,
tenure,
term spreads,
theta,
tightness,
Tobin tax,
variance ratio statistic,
vega,
volatility clustering,
weak form,
white noise process.
Contexts: fields
FIPS:
Federal Information Processing Standards. These are encodings defined by the
U.S. government and used to encode some data (like states and counties) in
U.S. data sets. Listings can be found at the NIST FIPS site.
Source: NIST FIPS web site
Contexts: data; organizations
firm:
Defined by Alchian and Demsetz (1972) this way: "The essence of the
classical firm is identified here as a contractual structure with: 1) joint
input production [see team production]; 2) several input owners [e.g.
the workers]; 3) one party [the firm or its owners] who is common to all the
contracts of the joint inputs; 4) who has rights to renegotiate any
input's contract independently of contracts with other input owners; 5) who
holds the residual claim; and 6) who has the right to sell his central
contractual residual status. The central agent is called the firm's owner and
the employer. No authoritarian control is involved; the arrangement is
simply a contractual structure subject to continuous renegotiation with
the central agent. The contractual structure arises as a means of enhancing
efficient organization of team production."
----------
Alternatively: a firm is a hierarchical organization attempting to make
profits.
Source: Alchian and Demsetz, 1972, p 794
Contexts: theory of the firm; IO; corporate finance
First Welfare Theorem:
The statement that a Walrasian equilibrium is weakly Pareto
optimal. Such a theorem is true in a large and important class of general
equilibrium models (usually static ones). The standard case: if
every agent has a positive quantity of every good, and every agent has a
utility function that is convex, continuous, and strictly increasing, then
the First Welfare Theorem holds.
Contexts: general equilibrium; models
first-order stochastic dominance:
Usually means stochastic dominance.
fiscalist view:
An extreme Keynesian view, that money doesn't matter at all as aggregate
demand policy. Assumes that investment demand does not respond to interest
rate changes. Relevant only in depression conditions (Branson, p
386).
Source: Branson
Contexts: macro
Fisher consistency:
This is a necessary condition for maximum likelihood estimation to be
consistent. Maximizing the likelihood function L gives an estimate for
parameter b that is Fisher-consistent if:
E[d(ln L)/db]=0 at b=b0, where b0 is the true value of
b.
Another interpretation or phrasing: "An estimation procedure is Fisher
consistent if the parameters of interest solve the population analog of the
estimation problem." (Wooldridge).
Source: Wooldridge, 1995, p 2648.
Contexts: econometrics
Fisher effect:
That in a model where inflation is expected to be steady, the nominal interest
rate changes one-for-one with the inflation rate; see Fisher equation.
The empirical analogy is the Fisher hypothesis.
Contexts: macro; money
Fisher equation:
nominal rate of interest = real rate of interest + inflation
Contexts: macro; money
Fisher hypothesis:
That the real rate of interest is constant. So the nominal rate moves with
inflation.
The real rate of interest would be determined by the time preferences of the
public and technological constraints determining the return on real
investment.
Source: G. Thomas Woodward, Review of Economics and Statistics, 1992, p
315
Contexts: macro; money
Fisher Ideal Index:
The "geometric mean of the fixed-weighted Paasche and Laspeyres
indexes." Proposed as a price index by Irving Fisher in 1922. This is a
superlative index number formula. -- Triplett, 1992.
Source: Triplett, 1992, p. 50
Contexts: index theory; macro; prices
Fisher index:
A price index, computed for a given period by taking the square root of
the product of the Paasche index value and the Laspeyres
index value.
Source:
http://www.geocities.com/jeab_cu/paper2/paper2.htm;
Gordon, 1990, p. 5
Contexts: index numbers
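A minimal computation with made-up prices and quantities for two goods, showing the Laspeyres and Paasche indexes and their geometric mean, the Fisher index.

```python
import math

# Illustrative data: prices p and quantities q for two goods in a
# base period (0) and a current period (1).
p0, q0 = [1.0, 2.0], [10.0, 5.0]
p1, q1 = [1.5, 2.5], [8.0, 6.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

laspeyres = dot(p1, q0) / dot(p0, q0)  # current prices at base quantities
paasche   = dot(p1, q1) / dot(p0, q1)  # current prices at current quantities
fisher    = math.sqrt(laspeyres * paasche)

print(round(laspeyres, 4), round(paasche, 4), round(fisher, 4))
```

The Fisher index always lies between the Laspeyres and Paasche values.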
Fisher information:
The Fisher information is an attribute or property of a distribution with
known form but uncertain parameter values. It is only well-defined for
distributions satisfying certain assumptions. It is a (k x k) matrix, where k
is the number of elements in a vector of parameters b. Thus, for
parameter b of pdf f(x):
I(b) = E{ [f'(x)/f(x)]^2 | b }
That's from DeGroot. I think this is the same as in Greene p 96:
I(b) = E[{d/db ln L(b)}^2]
     = -E[d^2/db^2 ln L(b)]
If the Fisher information is 'large' then the estimated distribution will
change radically as new data (x) are incorporated into the estimate of the
distribution by maximum likelihood.
The Fisher information is the main ingredient in the Cramer-Rao lower
bound, and in some maximum likelihood estimators.
Source: DeGroot; Greene, 1993, p 96
Contexts: econometrics
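As a numerical sanity check (the parameter value and step size are illustrative), the sketch below computes -E[d^2/db^2 ln f] for a single Bernoulli(p) observation by finite differences and compares it to the known closed form 1/(p(1-p)).

```python
import math

# Bernoulli log-density: ln f(x; p) = x ln p + (1-x) ln(1-p), x in {0, 1}.
def log_f(x, p):
    return x * math.log(p) + (1 - x) * math.log(1 - p)

def fisher_information(p, h=1e-5):
    info = 0.0
    for x in (0, 1):
        # central-difference second derivative of the log-likelihood in p
        d2 = (log_f(x, p + h) - 2 * log_f(x, p) + log_f(x, p - h)) / h**2
        prob = p if x == 1 else 1 - p
        info += -d2 * prob  # take the expectation over x exactly
    return info

p = 0.3
print(abs(fisher_information(p) - 1 / (p * (1 - p))) < 1e-3)  # True
```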
Fisher transformation:
Hypotheses about the value of rho, the correlation
coefficient between variables x and y in the underlying population, can be
tested using the Fisher transformation of a sample's correlation coefficient
r. Let N be the sample's size.
This transformation is defined by:
z = 0.5 * ln( (1+r)/(1-r) )
z is approximately normally distributed with mean 0.5 * ln( (1+rho)/(1-rho) )
and standard error 1/((N-3)^0.5).
This is a common way of testing whether a correlation coefficient is
significantly different from 0, and hence ascribing a p-value.
------
[Editor: We suspect that for x and y bivariate normal the distribution works
exactly in all sample sizes, otherwise only asymptotically.]
[See Kennedy, p 369. Bickel and Doksum, "Mathematical Statistics: Basic Ideas
and Selected Topics," page 221 also gives a derivation, but makes no mention of
any distribution requirements.]
Source: stephenb@nwu.edu
With thanks to: Stephen Brown (as of 4/25/99: stephenb@nwu.edu)
Contexts: estimation; econometrics
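[Editor: a Python sketch of the test described above, using only the standard
library; the function name and the sample values in the test are this editor's
invention.]

```python
import math

def fisher_z_test(r, n, rho0=0.0):
    """Test H0: population correlation = rho0, given sample correlation r
    and sample size n, via the Fisher transformation.
    Returns the z statistic and a two-sided p-value."""
    z = 0.5 * math.log((1 + r) / (1 - r))       # transform of the sample r
    z0 = 0.5 * math.log((1 + rho0) / (1 - rho0))  # transform of rho0
    se = 1.0 / math.sqrt(n - 3)
    stat = (z - z0) / se
    # standard normal two-sided p-value via the error function
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(stat) / math.sqrt(2.0))))
    return stat, p
```

For example, a sample correlation of 0.5 with N=28 gives a z statistic of
about 2.75 and rejects H0: rho=0 at conventional levels.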
Fisherian criterion:
for optimal investment by a firm -- that it should invest in real assets until
their marginal internal rate of return equals the appropriately risk-adjusted
rate of return on securities
Source: Miller and Rock, Journal of Finance Sept 1985, p. 1032
Contexts: models; finance
fixed effects estimation:
A method of estimating parameters from a panel data set. The fixed
effects estimator is obtained by OLS on the deviations from the means of each
unit or time period. This approach is relevant when one expects that the
averages of the dependent variable will be different for each cross-section
unit, or each time period, but the variance of the errors will not. In such a
case random effects estimation would give inconsistent estimates of b in the
model: y = Xb + e
The fixed effects estimator is: (X'QX)^(-1)X'Qy
where Q is the matrix that "partials out" the averages from the
groups that have different means.
Example: Define L as I_N x 1_T, where x is the Kronecker
product operator, T is the number of time periods, and N is the number
of cross-section units (individuals, say). Take Q = I - L(L'L)^(-1)L', which
subtracts each unit's mean. Now individual effects can be
screened out by premultiplying the model's equation by Q and running OLS, or
equivalently using the estimator equation above. Thus estimating b.
Contexts: econometrics; estimation
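[Editor: a Python sketch of the within (demeaning) estimator for a
one-regressor panel; the tiny data set is invented so that the true slope
is exactly 2.]

```python
# Within (fixed effects) estimator for a one-regressor panel.
# Invented data: y_it = a_i + 2*x_it, with a_1 = -2 and a_2 = 5.
panel = {
    1: [(1.0, 0.0), (2.0, 2.0), (3.0, 4.0)],    # unit 1: (x, y) pairs
    2: [(1.0, 7.0), (2.0, 9.0), (4.0, 13.0)],   # unit 2: (x, y) pairs
}

num = 0.0
den = 0.0
for obs in panel.values():
    # demean x and y within each cross-section unit
    xbar = sum(x for x, _ in obs) / len(obs)
    ybar = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2

b_fe = num / den  # slope estimated after sweeping out the unit means
```

The unit intercepts a_i never need to be estimated directly; demeaning
removes them.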
flexible-accelerator model:
A macro model in which there is a variable relationship between the growth
rate of output and the level of net investment. The relation between the
change in output and the level of net investment is the accelerator
principle.
Source: Branson, Ch 13
Contexts: macro
fob:
An occasional compressed form of f.o.b..
Folk theorem:
The theorem is that in repeated games with sufficiently patient
players, Pareto optimal payoffs can be reached in a
Nash equilibrium.
(Fudenberg and Tirole, p 150, describes the achievable payoffs as the
individually rational ones, not the Pareto optimal ones.)
The strategies that achieve this often have the pattern that they 'punish'
the other player at length for any defection from the Pareto optimal choice.
In equilibrium that encourages the other player not to defect for a short
term gain.
Source: Fudenberg and Tirole, _Game Theory_
Contexts: game theory
Frechet derivative:
Informally: A derivative (slope) defined for mappings from one vector space
to another.
The first e in Frechet should have an accent aigu.
Formally (this taken more or less directly from Tripathi, 1996):
Let T be a transformation defined on an open domain U in a
normed space X and mapping to a range in a normed space Y.
(Does normed space mean normed vector space? Or might it not?)
Holding fixed an x in U and for each h in X, if a
linear and continuous operator L (mapping from X to Y)
exists such that:
lim_{||h|| -> 0} (1/||h||) * ||T(x+h) - T(x) - L(h)|| = 0
Then the operator L, often denoted T'(x), is the Frechet
derivative of T() and we can say T is Frechet
differentiable at x.
(Ed.: I believe any such L is unique.)
Source: Tripathi, 1996, p 7
Contexts: mathematics; real analysis
Frechet differentiable:
Informally: A possible property of mappings from one space to another. For
such a transformation, a Frechet derivative may exist at each point and
if so we say the transformation is Frechet differentiable at that
point.
Properly the first e in Frechet should have an accent aigu.
See the entry at Frechet derivative for a formal definition.
Source: Tripathi, 1996, p 7
Contexts: mathematics; real analysis
Freddie Mac:
Shorthand for U.S. Federal Home Loan Mortgage Corporation.
Contexts: data; finance
free cash flow:
cash flow to a firm in excess of that required to fund all projects that have
positive net present values when discounted at the relevant cost of
capital.
Free cash flow can be a source of principal-agent conflict between
shareholders and managers, since shareholders would probably want it paid out
in some form to them, and managers might want to control it, e.g. to use it
for unprofitable projects, for perquisites, to make acquisitions, to create
jobs for friends and allies, and so forth. A possible partial solution to the
conflict for the shareholders is for the company to have heavy debts on which
frequent, heavy payments are due. Those payments keep the managers focused on
delivering consistent revenues and clear out the extra cash.
Source: Jensen (86)
Contexts: micro; finance
free entry condition:
An assumption posited in a search and matching model of a market. The
assumption is that there is no institutional constraint on firms entering the
market (e.g. to hire workers). There is no fixed number of firms. The number
of firms is determined in equilibrium, by the costs of starting up.
Contexts: macro; labor
free reserves:
excess reserves minus borrowed reserves (Branson, p 353).
Source: Branson, p 353
Contexts: money
free trader:
Holder of the political point of view that the best policy is to allow free
trade into one's own country.
Contexts: political economy; trade
frequency function:
The frequency function is the probability of drawing each particular value
from a discrete distribution: p(x) = Pr(X=x).
Here X is the random variable and x is one of its possible values.
Contexts: statistics
frictional unemployment:
Unemployment that comes from people moving between jobs, careers, and
locations. Contrast structural unemployment.
Source: Baumol & Blinder
Contexts: labor; macro
Friedman rule:
In a cash-in-advance model of a monetary system, the Friedman rule for
monetary policy is to deflate so that it is not costly to those who have money
to continue to hold it. Then the cash-in-advance constraint isn't binding on
them.
Contexts: money
FTC:
Abbreviation for the U.S. national Federal Trade Commission, which rules in
some circumstances on some antitrust regulations. See also
FTC Act.
Contexts: IO; regulation; antitrust
FTC Act:
A 1914 U.S. law creating a regulatory body for antitrust, price
discrimination, and regulation. Section five says "Unfair methods of
competition in or affecting commerce, and unfair or deceptive acts or
practices in or affecting commerce, are hereby declared
unlawful."
Source: lectures and handouts of Michael Whinston at Northwestern U in
Economics D50, Winter 1998
Contexts: IO; antitrust; regulation
functional:
a mapping from functions to the reals (e.g. a value function defined
by a mapping from possible paths of choices)
Contexts: real analysis
functional equation:
an equation where the unknown is a function. Example: a value function is the
solution to the equation that sets the value function equal to the present
discounted value of the current period's utility and the discounted value
function of next period's state.
Source: Stokey and Lucas, 1989, p 14
Contexts: macro; models
fungible:
"Being of such a nature or kind that one unit or part may be exchanged or
substituted for another unit or equal part to discharge an
obligation."
Examples: money or grain. Not examples: works of art.
Source: American Heritage Dictionary, 1982
Contexts: money
future-oriented:
A future-oriented agent discounts the future lightly and so has a LOW discount
rate, or equivalently a HIGH discount factor. See also
present-oriented, discount rate, and discount
factor.
Contexts: models
FWL theorem:
Given a statistical model y = X1*b1 + X2*b2 + e
where
y is a vector of values of a dependent variable,
the X's are
linearly independent matrices of predetermined variables, and
e is a vector of
errors, we could premultiply the equation by
M1 = I - X1(X1'X1)^(-1)X1'
which projects vectors in the space spanned by X1 to zero, and run
OLS on the resulting equation M1*y = M1*X2*b2 + M1*e
and (the theorem says) we would get exactly the same estimate of b2
that OLS on the first equation would have given.
This use of premultiplying is used in the derivation of many estimators:
notably IV estimators and FE estimators.
Contexts: econometrics
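[Editor: a Python sketch illustrating the theorem on a tiny invented data set
with no intercept; both routes should produce the same estimate of b2.]

```python
# Frisch-Waugh-Lovell in miniature: partialling x1 out of y and x2, then
# regressing the residuals, recovers the same b2 as the full OLS.
# Invented data; y happens to equal exactly 1*x1 + 2*x2.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0]
y = [5.0, 4.0, 11.0, 10.0, 17.0]

def simple_ols(x, z):
    """Slope of a no-intercept regression of z on x."""
    return sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)

def resid(x, z):
    """Residual of z after a no-intercept regression on x (i.e. M_x z)."""
    g = simple_ols(x, z)
    return [b - g * a for a, b in zip(x, z)]

# Route 1: full OLS b2 via the normal equations for two regressors.
s11 = sum(a * a for a in x1)
s12 = sum(a * b for a, b in zip(x1, x2))
s22 = sum(a * a for a in x2)
s1y = sum(a * b for a, b in zip(x1, y))
s2y = sum(a * b for a, b in zip(x2, y))
det = s11 * s22 - s12 * s12
b2_full = (s11 * s2y - s12 * s1y) / det

# Route 2 (FWL): regress M1*y on M1*x2.
b2_fwl = simple_ols(resid(x1, x2), resid(x1, y))
```

Both b2_full and b2_fwl equal 2 here, as the theorem promises.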
game:
A game is a model with (1) players who make (2) strategy (or action)
choices in a (3) predefined time order, and then (4) receive payoffs, which
are usually conceived of in money or utility terms. Classic games are the
Prisoner's Dilemma,
Matching Pennies,
the Battle of the Sexes,
the dictator game,
the ultimatum game,
the Bertrand game,
and the Cournot game.
Contexts: game theory; models
game theory:
Relevant terms: Bertrand competition,
Bertrand game,
bounded rationality,
cooperative game,
Cournot game,
decision rule,
dictator game,
dynamic inconsistency,
epsilon-equilibrium,
epsilon-proper equilibrium,
evolutionary game theory,
Folk theorem,
game,
implementable,
implicit contract,
interim efficient,
Markov perfect,
Markov strategy,
Matching Pennies,
mechanism design,
Nash equilibrium,
Nash product,
Nash strategy,
NBS,
NE,
noncooperative game,
normal form,
payoff matrix,
PBE,
perfect Bayesian equilibrium,
perfect equilibrium,
principal-agent,
principal-agent problem,
Prisoner's Dilemma,
proper equilibrium,
rationalizable,
screening game,
sharing rule,
signaling game,
solution concept,
SPE,
strategic form,
strategy-proof,
subgame perfect equilibrium,
tit-for-tat,
totally mixed strategy,
trembling hand perfect equilibrium,
ultimatum game,
winner's curse,
zero-sum game.
Contexts: fields
gamma (of options):
As used with respect to options: The rate of change of the portfolio's
delta with respect to the price of the underlying asset. Formally this
is a partial derivative.
A portfolio is gamma-neutral if it has zero gamma.
Source: Hull, 1997, p 323
Contexts: finance
gamma distribution:
A distribution relevant to, for example, waiting times. Expression of its pdf
requires reference to the gamma function which will be called GAMMA(a)
here.
(When HTML supports math a better display will be possible.)
The gamma distribution's pdf has parameters a>0 and b>0, and
GAMMA(a) is also greater than zero. The support is on x>0:
f(x) = [x^(a-1) e^(-x/b)] / [GAMMA(a) b^a]
Source: Hogg and Craig, 1995, p 132
Contexts: statistics
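[Editor: a Python sketch of this pdf, using the standard library's math.gamma
for GAMMA(a); the parameter values in the usage note are invented.]

```python
import math

def gamma_pdf(x, a, b):
    """pdf of the gamma distribution with shape a > 0 and scale b > 0:
    f(x) = x^(a-1) e^(-x/b) / (GAMMA(a) b^a), with support x > 0."""
    if x <= 0:
        return 0.0
    return x ** (a - 1) * math.exp(-x / b) / (math.gamma(a) * b ** a)
```

With a=1 the gamma distribution reduces to the exponential distribution
with mean b, so gamma_pdf(x, 1, b) equals exp(-x/b)/b.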
gamma function:
A function of a real a>0. It is the integral over y from zero to
infinity of y^(a-1) e^(-y) dy. This integral is the gamma
function of a, GAMMA(a). (When HTML supports math a better display will be
possible.) The gamma distribution's pdf includes the
gamma function.
Source: Hogg and Craig, 1995, p 131
Contexts: statistics
GARCH:
Generalized ARCH. First paper may have been Bollerslev, 1986, Journal of
Econometrics
Source: Bollerslev, 1986, Journal of Econometrics
Contexts: finance; statistics
GARP:
abbreviation for the Generalized Axiom of Revealed Preference.
Source: Varian, 1992
Contexts: models
Gauss:
A matrix programming language and programming environment. Made by Aptech.
Contexts: data; simulation
Gaussian:
an adjective that describes a random variable, meaning it has a
normal distribution.
Contexts: statistics; econometrics
Gaussian kernel:
The Gaussian kernel is this function:
(2*PI)^(-0.5) * exp(-u^2/2). Here u = (x - x_i)/h, where h
is the window width, the x_i are the values of the independent
variable in the data, and x is the value of the independent variable for which
one seeks an estimate. Unlike most kernel functions this one is unbounded in
x; so every data point will be brought into every estimate in theory, although
outside three standard deviations they make hardly any difference.
For kernel estimation.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
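[Editor: a Python sketch of kernel estimation with this kernel, here in
Nadaraya-Watson regression form; the function names and the data in the test
are this editor's invention.]

```python
import math

def gaussian_kernel(u):
    """The Gaussian kernel: (2*pi)^(-0.5) * exp(-u^2/2)."""
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

def kernel_regression(x, xs, ys, h):
    """Nadaraya-Watson estimate of E[y|x] at the point x, given data
    (xs, ys) and window width h, with Gaussian kernel weights."""
    w = [gaussian_kernel((x - xi) / h) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

Because the Gaussian kernel is everywhere positive, every data point gets
some weight at every evaluation point, as the entry notes.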
Gaussian white noise process:
A white noise process with a normal distribution.
Contexts: models; statistics; econometrics; time series
GDP:
Gross domestic product. For a region, the GDP is "the market value of
all the goods and services produced by labor and property located in" the
region, usually a country. It equals GNP minus the net inflow of labor
and property incomes from abroad. -- Survey of Current Business
A key example helps. A Japanese-owned automobile factory in the US counts in
US GDP but in Japanese GNP.
"GDP can be calculated from output statistics, income statistics or as the sum
of private expenditures, public expenditures and net exports." (Bohlin,
2003). Measured output from one sector that immediately became an input into
another productive sector is not supposed to count; one subtracts them out, to
get only the value-added by each productive sector.
Source: Survey of Current Business;
Bohlin, Jan. "Swedish historical national accounts: The fifth
generation." European Review of Economic History, 7:1 (April, 2003):
73-97.
Contexts: macro; government
GDP deflator:
A measure of the cost of goods purchased by U.S. households, government, and
industry. Differs conceptually from the CPI measure of inflation, but
not by much in practice.
Contexts: macro; labor; data
GEB:
An abbreviation for the journal Games and Economic Behavior.
Contexts: journals
general equilibrium:
Relevant terms: budget set,
clears,
consumption set,
contract curve,
core,
demand set,
endowment,
First Welfare Theorem,
government failure,
individually rational,
locally nonsatiated,
market failure,
netput,
offer curve,
Pareto set,
production set,
real externality,
Second Welfare Theorem,
social planner,
SPO,
strongly Pareto Optimal,
subdifferential,
Walrasian equilibrium,
Walrasian model,
WE,
weakly Pareto Optimal,
WPO.
Contexts: fields
generalized linear model:
A model of the form y=g(b'x) where y is a vector
of dependent variables, x is a column vector of independent variables,
b' is a row vector of parameters (that is, b is not a function of x)
and g() is a possibly random function called a link function.
Examples: linear regression (y = b'x + errors) and logistic regression
(y = 1/(1 + e^(-b'x)) + errors).
An example that is not in the class of generalized linear models is:
y = x1*x2.
Source: Rabe-Hesketh, Sophia, and Brian Everitt. 1999. A Handbook of
Statistical Analyses using Stata. Chapman & Hall / CRC. pp 91-93.
Contexts: econometrics
Generalized Method of Moments:
See GMM.
generalized Ozaki cost function:
This is a very general form of cost function defined by Nakamura (1990), p.
650, citing earlier work by Ozaki. Nakamura's article details its
many virtues, one of which is that homotheticity is not imposed --
that is, if the optimal use of inputs is not the same at different scales
of output, this function can express that. (The mainline case of this is
when there is a fixed input requirement at the beginning, or the input
comes in indivisible chunks.)
Equation (3) in that article defines the cost function which is displayed
here as best this editor can do it. The Generalized-Ozaki cost function's
expression for the cost of producing output y, at date t, given a vector of
m input prices p[] which are indexed by i=1 to m, is:
c(p,y,t) = SUM_i b_ii y^(b_yi) exp(b_ti*t) p_i
+ SUM_{i<>j} b_ij (p_i*p_j)^0.5 y^(b_y) exp(b_t*t)
[your editor's comments, clarifications, and interpretations follow,
with imperfectly suppressed frustration.]
The notation is murderously difficult here. I do not fully understand
it but can clarify some things. First, any element with a 'b' in it is a
parameter that could be estimated. In particular:
(a) 'b_yi' is a single parameter name: a b subscripted by both y and i,
appearing inside a superscript. Neither HTML (apparently)
nor even the 1990 Review of Economics and Statistics could put
subscripts in a superscript. (TeX can, so one might be wise to use that,
if jumping off this particular ledge.)
One worries, though: how many parameters does b_yi represent? m, I think.
The y is hard-coded, doesn't index. And i goes from 1 to m. In Nakamura's
example, m=3, by the way. So perhaps m is never meant to be big.
(b) Analogously, 'b_y' is probably a scalar
parameter. Probably b_t is also a scalar.
The b_ti are probably an m-vector collectively called
b_t.
And the b_ij and b_ii could be combined in an m x m matrix.
(c) exp(z) means the constant e taken to the power of z. Expressing this
as e^z would have been nice but the typeface is already cracking
under the strain.
(d) The notation SUM_{i<>j} is not obvious.
(<> means not-equal here; the original article has a nice not-equals sign.)
I'd guess it means the same as
SUM_{i=1..m} { SUM_{j=1..m, j<>i} (expression) }
I grasp that's not transparent either. But the index variable is ambiguous
in the original expression. It's possible that we were supposed to think
the second sum was part of the first (though
parentheses would have been required to make that clear) and that the
index variable on the second sum was
j.
Source: Nakamura, Shinichiro. 1990. "A nonhomothetic generalized
Leontief cost function based on pooled data". Review of Economics and
Statistics 72:4 (Nov, 1990), 649-656.
http://links.jstor.org/sici?sici=0034-6535%28199011%2972%3A4%3C649%3AANGLCF%3E
2.0.CO%3B2-0
Ozaki, Iwao, "Economics of Scale and Input Output Coefficients," in A. Carter
and A. Brody (eds.) Applications of Input Output Analysis (Amsterdam:
1969), North-Holland, 280-302.
Ozaki, Iwao, "The Effects of Technological Change on the Economic Growth of
Japan," in K. Polenske and J. Skolka (eds.) Advances in Input Output
Analysis (Cambridge, MA: 1976), Ballinger, 93-111.
Contexts: production theory
generalized Tobit:
Synonym for Heckit.
generalized Wiener process:
A continuous-time random walk with a drift and random jumps at every
point in time (roughly speaking). Algebraically:
dx = a(x,t) dt + b(x,t) c (dt)^0.5
describes a generalized Wiener process, where:
a and b are deterministic functions
t is a continuous index for time
x is a set of exogenous variables that may change with time
dt is a differential in time
c is a random draw from a standard normal distribution at each
instant
Source: Hull, 1997
Contexts: time series; finance; statistics; models
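[Editor: a Python sketch of an Euler discretization of such a process; the
function name and the drift/diffusion functions in the test are placeholders
of this editor's invention.]

```python
import random

def simulate_generalized_wiener(x0, a, b, dt, n, seed=0):
    """Euler discretization of dx = a(x,t) dt + b(x,t) dW, starting at x0,
    with step size dt for n steps. a and b are functions of (x, t);
    the Wiener increment is a N(0,1) draw scaled by dt**0.5."""
    random.seed(seed)
    x, t = x0, 0.0
    path = [x0]
    for _ in range(n):
        x += a(x, t) * dt + b(x, t) * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
        path.append(x)
    return path
```

With the diffusion function b set to zero the path is deterministic, so
ten steps of size 0.1 under unit drift move x from 0 to (about) 1.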
generator function:
in a dynamical system, the generator function maps the old state N_t
into the new state N_{t+1}.
E.g. N_{t+1} = F(N_t).
A steady state would be an N* such that F(N*) =
N*.
Source: J. Montgomery, social networks paper
Contexts: macro
geometric mean:
Geometric mean is a kind of average of a set of numbers that is different from
the arithmetic average. The geometric mean is well defined only for sets of
positive real numbers.
Geometric mean of A and B is the square root of (A*B).
The geometric mean of A, B, and C is the cube root of (A*B*C).
And so forth.
Contrast this to the arithmetic means, which are .5*(A+B) and
.333*(A+B+C).
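[Editor: a Python sketch; computing in log space avoids overflow when the
list of numbers is long.]

```python
import math

def geometric_mean(values):
    """Geometric mean of a set of positive numbers: the n-th root of their
    product, computed here as exp of the mean of the logs."""
    if any(v <= 0 for v in values):
        raise ValueError("geometric mean is defined only for positive numbers")
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

For example, the geometric mean of 4 and 9 is 6 (the square root of 36),
while their arithmetic mean is 6.5.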
GEV:
abbreviation for Generalized Extreme Value distribution. The difference
between two draws of GEV type 1 variables has a logistic distribution, which
is why a GEV distribution for errors gets assumed in certain binary
econometric models.
Contexts: statistics; econometrics
GGH preferences:
Refers to a paper by Greenwood, Hercowitz, and Huffman (1988) with utility
functions across agents and across time given by:
u(C_it, N_it) = C_it - a*N_it^b
where a>0 and b>1 are constants, and C_it and N_it
stand for consumption and hours worked by each agent i at date t.
-- this utility function has Gorman form and so it aggregates
-- it has been successful at matching cross-section data relative to other
functions that do.
Source: Greenwood, J., Z. Hercowitz, and G. Huffman, (1988), "Investment,
Capacity Utilization, and the Real Business Cycles", AER, 78
402-417.
Contexts: models; macro
Gibbs sampler:
A way to generate empirical distributions of two variables from a model. Say
the model defines probability distributions F(X|Y) and G(Y|X). Then start
with a random set of possible X's, draw Y's from G(), then use those Y's to
draw X's, and so on indefinitely. Keep track of the X's and Y's seen, and
this will give samples enough to find the unconditional distributions of X and
Y.
Source: talk by Moshe Buchinsky at Northwestern 10/29/1996 regarding research
of his with Phillip Leslie
Contexts: econometrics; simulation
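[Editor: a Python sketch for the classic bivariate normal case, where both
conditional distributions are known exactly; the function name and parameter
values are this editor's invention.]

```python
import random

def gibbs_bivariate_normal(rho, n_draws, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    The exact conditionals are X|Y=y ~ N(rho*y, 1-rho^2) and symmetrically
    for Y|X, so each sweep draws from each conditional in turn."""
    random.seed(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    xs, ys = [], []
    for i in range(n_draws + burn_in):
        x = random.gauss(rho * y, sd)  # draw X from F(X|Y)
        y = random.gauss(rho * x, sd)  # draw Y from G(Y|X)
        if i >= burn_in:               # discard early, non-stationary draws
            xs.append(x)
            ys.append(y)
    return xs, ys
```

The retained draws approximate the unconditional joint distribution, so
their sample correlation should be close to rho.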
Gibrat's law:
A descriptive relationship between size and growth -- that the size of units
and their growth percentage statistics are statistically independent.
Sometimes Gibrat's law is thought to apply to large firms, and sometimes to
cities (Gabaix, May 1999 American Economic Review, page 130).
Gini coefficient:
This is another name for the Gini index. This editor prefers
"Gini index" because the word "coefficient" implies that
the number's meaning depends on multiplying it by something, or that it came
out of a regression.
Gini index:
A number between zero and one that is a measure of inequality. An example is
the concentration of suppliers in a market or industry.
The Gini index is the ratio of the area between the diagonal and the Lorenz
curve to the area under the diagonal on a graph of the Lorenz curve; the
area under the diagonal is 5000 if both axes
have percentage units. The meaning of the Gini index: if the suppliers in a
market have near-equal market share, the Gini index is near zero. If most of
the suppliers have very low market share but there exist one or a few suppliers
providing most of the market share then the Gini index is near one.
In labor economics, inequality of the wage distribution can be discussed in
terms of a Gini index, where the wages of subgroups are fractions of the total
wage bill.
The Gini index is sometimes called the Gini coefficient.
Source: Greer, 1992, p 174
Contexts: IO; labor
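[Editor: a Python sketch computing the index from a list of shares or incomes,
using trapezoids under the Lorenz curve; the function name is this editor's.]

```python
def gini(values):
    """Gini index of a list of nonnegative values with a positive total
    (market shares, incomes, etc.): the ratio of the area between the
    diagonal and the Lorenz curve to the area under the diagonal.
    0 means perfect equality; values near 1 mean extreme concentration."""
    xs = sorted(values)           # Lorenz curve orders units from smallest
    n = len(xs)
    total = sum(xs)
    cum = 0.0
    lorenz_area = 0.0
    for x in xs:
        prev = cum
        cum += x / total          # cumulative share held so far
        lorenz_area += (prev + cum) / (2.0 * n)  # trapezoid under the curve
    return 1.0 - 2.0 * lorenz_area
```

Equal shares give an index of 0; one unit holding everything gives
(n-1)/n, which approaches 1 as n grows.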
Glass-Steagall Act:
A 1933 United States national law separating investment banking and commercial
banking firms. Also prohibited banks from owning corporate stock. It was
designed to confront the problem that banks in the Great Depression collapsed
because they held a lot of stock.
Source: Glasner, p 198
Contexts: history; IO
GLS:
Generalized Least Squares. A generalization of the OLS procedure to make an
efficient linear regression estimate of a parameter from a sample in which the
disturbances are heteroskedastic. That is, in
y = Xb + e (equation 1)
that the e's vary in magnitude with the X's.
The estimator of b is: (X'O^(-1)X)^(-1) X'O^(-1) y
(equation 2)
where O, standing for omega, is the covariance matrix. (As you see in the
estimator, the covariance matrix is assumed to be invertible.)
The procedure to derive this is to multiply through the first equation by the
square root of the inverse of the covariance matrix (which is assumed to be
known; if it is estimated, one calls this procedure FGLS, for feasible GLS).
Then take OLS of the resulting equation.
Contexts: econometrics
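[Editor: a Python sketch of the diagonal-covariance case, where GLS reduces
to weighted least squares; the data are invented so that the slope is
exactly 2.]

```python
# GLS with a known diagonal covariance matrix: divide each observation by
# its error standard deviation, then run OLS on the transformed data.
# One-regressor, no-intercept sketch with invented numbers.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]       # exactly y = 2x, so the slope is 2
sigma = [1.0, 1.0, 2.0, 2.0]   # heteroskedastic error standard deviations

# transform: this is premultiplication by the square root of O^(-1)
xt = [xi / s for xi, s in zip(x, sigma)]
yt = [yi / s for yi, s in zip(y, sigma)]

# OLS on the transformed equation
b_gls = sum(a * b for a, b in zip(xt, yt)) / sum(a * a for a in xt)
```

Observations with larger error variances get proportionally less weight
in the estimate.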
GMM:
Stands for Generalized Method of Moments, an econometric framework of Hansen,
1982. It is an approach to estimating parameters in an economic model from
data. Used often to figure out what standard errors on parameter estimates
should be.
Source: Hansen, 1982
Contexts: econometrics; macro; finance
GNP:
Gross national product. The GNP is "the market value of all the goods
and services produced by labor and property" belonging to the region, usually
a country. It equals GDP plus the net inflow of labor and property
incomes from abroad. A Japanese-owned automobile factory in the US counts in
US GDP but in Japanese GNP.
Source: Survey of Current Business
Contexts: macro
Golden Rule capital rate:
f'(k*)=(1+n) where k* is optimal capital stock, f() is
the aggregate production function, and n is population growth rate. f(k)-k is
consumed by the population. 'Golden Rule' may refer to a Solow fairy
tale.
Contexts: macro
good:
A good is a desired commodity.
goodwill:
The accounting term to describe the premium that acquiring companies pay over
the book value of the firm being acquired. Goodwill can include value for R&D
and trademarks.
Contexts: accounting
Gordon model:
Of a stock price. From M. J. Gordon (1962). This model is sometimes used as
a baseline for comparison or for intuition.
Assume a constant rate of return r, and a constant dividend growth rate g.
Define Pt to be the price of the stock in period t, and
Dt to be its dividend in period t. Implication is that price of
stock Pt = Dt/(r-g).
Source: Bollerslev-Hodrick 1992; Gordon 1962 ref'd directly there
Contexts: finance
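[Editor: a Python sketch of the pricing formula; the dividend and rates in
the usage note are invented.]

```python
def gordon_price(dividend, r, g):
    """Gordon model price of a stock: P_t = D_t / (r - g), where r is the
    constant required rate of return and g the constant dividend growth
    rate. The formula requires r > g."""
    if r <= g:
        raise ValueError("the Gordon model requires r > g")
    return dividend / (r - g)
```

For example, a dividend of 2.0 with r = 8% and g = 3% prices the stock
at 2.0/0.05 = 40.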
Gorman form:
A utility function or indirect utility function is in Gorman form if it is
affine with respect to some argument. Which argument should be clear
from context. E.g.:
U_i(x_i, z) = A(z)*x_i + B_i(z)
Here the utility U_i for individual i is affine in the argument
x_i.
A critical implication is that the sum of Gorman form utility functions for
individuals is a well-defined aggregate utility function under some
conditions.
Source: Varian, 1992
Contexts: models; utility
government failure:
A situation, usually discussed in a model not in the real world, in which the
behavior of optimizing agents in a market with a government would not produce
a Pareto optimal allocation. The point is not that a particular
government had, or would have, failed at something, but that the problem
abstractly put cannot be perfectly solved by the government. The most common
source of government failures in models is private information among the
agents.
Contexts: general equilibrium; public
Granger causality:
Informally, if one time series helps predict another, we can say it Granger
causes the other. The original definition, for linear predictors, is in
Granger, 1980.
From Sargent: A stochastic process z_t is said NOT to Granger-cause
a random process x_t if
E(x_{t+1} | x_t, x_{t-1}, ..., z_t, z_{t-1}, ...) =
E(x_{t+1} | x_t, x_{t-1}, ...)
*** NOTE in J Pehkonen, Applied Economics, 1991, 23, 1559-1568, p.
1560.
*** Expert treatment of this subject and more formal, less ambiguous
definitions are in Chamberlain, Econometrica, May 82
Source: Sargent, 1987, Ch 3
Contexts: econometrics; time series; estimation
Grenander conditions:
Conditions on the regressors under which the OLS estimator will be
consistent.
The Grenander conditions are weaker than the assumption on the regressor X
that lim_{n -> infinity} (X'X)/n is a fixed positive definite matrix,
which is a common starting assumption.
See Greene, 2nd ed, 1993, p 295.
Source: Greene, 1993
Contexts: econometrics
Gresham's Law:
Some version of "Bad money will drive out good."
I think the context is that if there are two suppliers of the same money (e.g.
if one of them is a counterfeiter) or of two monies with a fixed exchange rate
between them (per Hayek, Denationalization of Money, 1976 p. 39), there will
be a tendency for overproduction and that the actual money stock will be made
up of the bad, or less valuable, one. (Another situation is if one supplier
makes coins that are 90% gold and the other has the option of making coins
with less gold, Bertrand competition for coins would drive the gold fraction
down over time.)
Contexts: money
GSOEP:
German Socio-Economic Panel.
A German government database going back to at least 1984.
Contexts: data; labor
H index:
Stands for Herfindahl-Hirschman index, which is a way of measuring the
concentration of market share held by particular suppliers in a market. It is
the sum of squares of the percentages of the market shares held by the firms
in a market. If there is a monopoly -- one firm with all sales, the H index
is 10000. If there is perfect competition, with an infinite number of firms
with near-zero market share each, the H index is approximately zero. Other
industry structures will have H indices between zero and 10000.
Tirole's version is bounded between zero and one because each of the market
shares is between zero and one.
Source: Greer, 1992, p 177; Tirole, _The Theory of
Industrial Organization_
Contexts: IO
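[Editor: a Python sketch using percentage market shares, per Greer's
convention above; dividing by 10000 would give Tirole's zero-to-one version.]

```python
def herfindahl(shares_pct):
    """Herfindahl-Hirschman index: the sum of squared market shares,
    here expressed in percent. A monopoly gives 10000; perfect
    competition (many firms with near-zero shares) approaches 0."""
    return sum(s * s for s in shares_pct)
```

For example, four firms with 25% each give an index of 2500.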
Habakkuk thesis:
That high wages and labor scarcity stimulated technological progress in the
U.S. in the 1800s, and in particular brought about the American system of
manufacturing based on interchangeable parts.
(This description from Mokyr, 1990; idea from Habakkuk, 1962).
Source: Mokyr, 1990, p 165; Habakkuk, 1962, American and
British Technology in the Nineteenth Century
Contexts: history
Hahn problem:
Hahn's (1965) question: when does there exist an equilibrium in a model in
which money has positive value?
Contexts: money; models
Hansen's J test:
See J statistic
Source: Ogaki, 1993
Contexts: econometrics
Harrod-neutral:
The effect on a production function description of certain kinds of technical
change. Harrod-neutral is a synonym for labor-augmenting, in practice.
Uzawa (1961), pp. 117-8, has perhaps the earliest use, which cites 1937 and
1948 works of Harrod for the idea.
Source: Romer, 1996, p 7, and in the bibliography where
Harrod's works are cited directly.
Hamermesh, 1993,1996, p 349
Uzawa, H. "Neutral Inventions and the Stability of Growth Equilibrium."
The Review of Economic Studies 28:2 (Feb., 1961), 117-124.
JSTOR link to Uzawa (1961)
Contexts: macro; technology; production
Hausman test:
Given a model and data in which fixed effects estimation would be appropriate,
a Hausman test tests whether random effects estimation would be almost as
good. In a fixed-effects kind of case, the Hausman test is a test of
H0: that random effects would be consistent and efficient, versus
H1: that random effects would be inconsistent. (Note that fixed
effects would certainly be consistent.) The test statistic is a quadratic form
in the difference between the two estimates of the k-dimensional parameter
b (k = dim(b)), and it is distributed
chi-square(k) under H0. So if the Hausman test statistic is large, one must use FE.
If the statistic is small, one may get away with RE.
Contexts: econometrics; estimation
hazard rate:
escape rate; rate of transition out of current state
Heaviside function:
Is a mapping from the real line to {0, 1}, denoted (at least sometimes) hv(x).
hv(x) is zero for x<0, and is one for x>=0.
Source: Cox and Hinkley (1974, 1995), p 167
Contexts: statistics
Heckit:
An occasional name for generalized Tobit. This approach allows a
different set of explanatory variables to predict the binary choice from those
which predict the continuous choice. (The data environment is one in which
the continuous choice is measured only when the binary choice is nonzero --
e.g., if we have data on people, whether they bought a car, and how expensive
it was, we can estimate a statistical model of how expensive a car other
people would buy, but only on the basis of the ones who did buy a car in the
data sample.) A regular, non-generalized Tobit constrains the two sets of
variables to be the same, and the signs of their effects to be the same in the
two estimated equations. 'Heck' is for James Heckman.
-- Christopher Baum, Boston College economics department, 20 May 2000, in a
broadcast to the statalist, the email list of people interested in the
software Stata.
Heckman two-step estimation:
A way of estimating treatment effects when the treated sample is
self-selected and so the effects of the treatment are confounded with the
population that chose it because they expected it would help -- the classic
example is that college educations may be selected by those most likely to
benefit.
Taking that example, we wish to advance past the following regression:
w_i = a + b*X_i + d*C_i + e_i
where i indexes people, w_i is the wage (or other outcome variable)
for agent i, the X_i are variables predicting i's wage, and
C_i is 1 if i went to college and 0 if not. e_i is the
remaining error after least squares estimation of a, b, and d.
Source: Greene, 1997; James Heckman, "Sample
selection bias as specification error", Econometrica, 47, 1979, pp
153-161.
Heckscher-Ohlin model:
A model of the effects of international trade. "The Heckscher-Ohlin
framework typically is presented as a two-country, two-good, two-factor model.
The two countries are assumed to share identical, homothetic tastes for the
two substitutable goods and identical, constant-returns-to-scale technologies
with some factor substitutability. Perfect competition prevails in each
market with zero transport costs and no artificial barriers to international
trade in goods, although factors are internationally immobile. In this
framework, each country will (incompletely) specialize in production and
export the good using intensively in production the factor that the country
has in relative abundance." A related result, the factor-price
equalization theorem, holds that trade in goods tends to equalize factor
prices across countries; it is sometimes used to explain how rising
international trade could lead to greater income inequality in the most
developed countries. (from Bergstrand, Cosimano, Houck, and Sheehan,
1994, p 3)
The reference in the name is to "Scandinavian economists Eli Heckscher
and Bertil Ohlin early in [the twentieth century]" in work that is rarely
cited directly. (from Bluestone, 1994, p 336).
Contexts: trade; international
hedonic:
of or relating to utility. (Literally, pleasure-related.) A hedonic
econometric model is one where the independent variables measure attributes of
what is to be exchanged; e.g. various qualities of a product that one might
buy or of a job one might take. The measured qualities form a bundle of
attributes which are combined in the resulting product. Each attribute can be
thought of as affecting the price, either independently or in particular
combinations with other attributes, and these effects can be
estimated.
A hedonic model of wages might correspond to the idea that there are
compensating differentials -- that workers would get higher wages for jobs
that were more unpleasant.
"A product that meets several needs, or has a variety of features ...
generates a number of hedonic services. Each one of these services can be
thought of as generating its own demand, along with a resulting hedonic price.
Although each separate component is not observable, the aggregation of all the
components results in the observed product demand and equilibrium price....
[Q]uality improvements will appear to an observer as an outward shift of the
product demand curve, as consumers are willing to purchase more at the
prevailing price." -- William J. White, "A Hedonic Index of Farm
Tractor Prices: 1910-1955", Ohio State University working paper, October
1998, pp. 3-4.
Contexts: econometrics
help:
A list of fields contained here is below. There is some other advice at this
help page: http://econterms.com/help.html
Most terms are in one of these categories. You can click on one to see a list
of terms relevant to it.
Relevant terms: agricultural economics,
business,
cliometrics,
data,
development,
dynamic optimization,
econometrics,
economic sociology,
estimation,
finance,
game theory,
general equilibrium,
history,
information,
IO,
journals,
labor,
linear algebra,
macro,
measure theory,
models,
organizations,
phrases,
probability,
public finance,
real analysis,
statistics,
stylized facts,
time series,
transition economics.
Herfindahl-Hirschman index:
See 'H index'.
Contexts: IO
Hermite polynomials:
The Hermite polynomials are a series of polynomials defined for each natural
number r, used for statistical approximations I believe.
Contexts: statistics
Hessian:
The matrix of second derivatives of a multivariate function. That is, the
gradient of the gradient of a function. Properties of the Hessian matrix at
an optimum of a differentiable function are relevant in many places in
economics:
1) In maximum likelihood estimation, the information matrix is (-1) times the
expected Hessian of the log-likelihood function.
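As a concrete illustration (my own, not from the source), the Hessian of a smooth function can be approximated numerically by central differences:

```python
import numpy as np

def hessian(f, x, h=1e-4):
    """Central-difference approximation to the Hessian of f at the point x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # four-point stencil for the (i, j) second partial derivative
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

# For f(x) = x0^2 + 3*x0*x1 + 2*x1^2 the Hessian is [[2, 3], [3, 4]] everywhere.
f = lambda v: v[0]**2 + 3.0*v[0]*v[1] + 2.0*v[1]**2
```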
Contexts: optimization; linear algebra
heterogeneous process:
A stochastic process is heterogeneous if it is not identically
distributed every period.
Contexts: time series; econometrics; statistics
heteroscedastic:
An alternate spelling of heteroskedastic. McCulloch (1985) argues that
the spelling with the k is preferred, on the basis of the pronunciation and
etymology (Greek not French derivation) of the term.
Source: J. Huston McCulloch. 1985. "On heteros*edasticity"
Econometrica 53:2 (March, 1985), p 483.
With thanks to: Shazam web site which cited McCulloch
Contexts: econometrics; estimation
heteroskedastic:
An adjective describing a data sample or data-generating process in which the
errors are drawn from different distributions for different values of the
independent variables.
Most commonly this takes the form of changes in variance with the magnitude of
X. That is, in
y = Xb + e
that the e's vary in magnitude with the X's. (An example is that variance of
income across individuals is systematically higher for higher income
individuals.)
If the errors are drawn from different distributions, or if higher moments of
the error distributions vary systematically, these are also forms of
heteroskedasticity.
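To make the definition concrete, here is a small simulation (entirely my own construction) in which the error's standard deviation grows with x, as in the income example above:

```python
import numpy as np

# Simulate y = 2 + 3x + e where sd(e) grows with x: a heteroskedastic sample.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=2000)
e = rng.normal(scale=x)             # error spread proportional to x
y = 2.0 + 3.0 * x + e
resid = y - (2.0 + 3.0 * x)         # residuals around the true line
spread_low = resid[x < 4.0].std()   # residual spread where x is small...
spread_high = resid[x > 7.0].std()  # ...versus where x is large
```

Plotting resid against x would show the classic "fan" shape of heteroskedastic data.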
Contexts: econometrics; estimation
Hicks-Kaldor criterion:
For whether a cost-benefit analysis supports a public project. The criterion
is that the gainers from the project could in principle compensate the losers.
That is, that total gains from the project exceed the losses. The criterion
does not go so far as the Pareto criterion, according to which the gainers
would in fact have to compensate the losers.
Source: Layard and Glaister, p 6
Contexts: public
Hicks-neutral:
An attribute of an effectiveness variable in a production function. The
attribute is that it does not affect labor differently from the way it affects
capital.
The canonical example is the Solow model production function Y=AF(K,L).
There Y is output, L labor, K capital, F a production function, and A
represents some kinds of effectiveness variable. In Y=F(AK,L) the
effectiveness variable affects capital but not labor. In Y=F(K,AL) it affects
labor but not capital. These two cases can be described as Hicks-biased. In
Y=AF(K,L) it is Hicks-neutral.
Source: Romer, 1996, p 7, Hulten,
2000
Contexts: macro
Hicks-neutral technical change:
Given a production function AF(K,L) changes in A are Hicks-neutral, meaning
that they do not affect the optimal choice of K or L. The subject comes up in
practice only for aggregate production functions.
Uzawa, H. "Neutral Inventions and the Stability of Growth Equilibrium,"
The Review of Economic Studies 28:2 (Feb., 1961), 117-124
contains the first known published use of the adjective 'Harrod neutral'.
According to it, the criterion of Harrod-neutrality comes from
Harrod, Roy F., "Review of Joan Robinson's Essays in the Theory of
Employment," Economic Journal, vol. 47 (1937), 326-330.
Uzawa also proves that AF(K,L) and F(K,AL) are the right functional forms to
meet Hicks and Harrod-neutrality, and that only the Cobb-Douglas form
accomplishes both.
Contexts: models; macro
Hicksian demand function:
h(p,u) -- the amount of a good that is demanded by a consumer given that it costs
p per unit and that the consumer will have utility u from all goods. h(p,u)
is the cost-minimizing amount.
Source: Varian, 1992
Contexts: micro
High School and Beyond:
A panel data set on U.S. high school students.
Contexts: data
high-powered money:
reserves plus currency
Source: Branson
Contexts: money
Hilbert space:
A complete normed metric space with an inner product. So the Hilbert spaces
are also Banach spaces. L^2 is an example of a Hilbert space. Any
R^n with n finite is another.
Source: Royden p. 245
Contexts: real analysis
history:
The subject of economic history is anything in history that is subject to
economic explanations. Application of formal theory or statistical analysis
of data may be relevant, although it is possible to make a contribution
without either, e.g. with a case study or a contextual reinterpretation.
Historians tend to be focused on what happened, how, and why, not on the
question of whether a model fits the evidence.
Relevant terms: bank note,
bill of exchange,
Bretton Woods system,
climacteric,
cliometrics,
dominant design,
economic growth,
Eurodollar,
factory system,
Glass-Steagall Act,
Habakkuk thesis,
Industrial Revolution,
industrialization,
institution,
mass production,
modernization,
morbidity,
mortality,
natural experiment,
new institutionalism,
path dependence,
path dependency,
real bills doctrine,
Regulation Q,
Robinson-Patman Act,
Schumpeterian growth,
shakeout,
Smithian growth,
Solovian growth,
specie,
welfare capitalism.
Contexts: fields
HLM:
Statistical software for Hierarchical Linear Modeling, from
Scientific Software
International.
Contexts: software
Hodrick-Prescott filter:
Algorithm for choosing smoothed values for a time series. The H-P filter
chooses smooth values {s_t} for the series {x_t} of T
elements (t=1 to T) that solve the following minimization problem:
min over {s_t} of: sum_(t=1..T) (x_t - s_t)^2
+ l * sum_(t=2..T-1) [(s_(t+1) - s_t) - (s_t - s_(t-1))]^2
Parameter l>0 is the penalty on variation, where
variation is measured by the average squared second difference. A larger
value of l makes the resulting {s_t}
series smoother; less high-frequency noise. The commonly applied value of
l is 1600.
For the study of business cycles one uses not the smoothed series, but the
jagged series of residuals from it. See Cooley, 1995, p
27-29. H-P filtered data shows less fluctuation than first-differenced
data, since the H-P filter pays less attention to high frequency movements.
H-P filtered data also shows more serial correlation than first-differenced
data. For l=1600: "if the series were
stationary, then [this choice] would eliminate fluctuations at frequencies
lower than about thirty-two quarters, or eight years."
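A minimal sketch of the filter (the function name and linear-algebra formulation are mine): minimizing the penalized sum of squares leads to the first-order condition (I + l*D'D)s = x, where D is the (T-2) x T second-difference operator.

```python
import numpy as np

def hp_filter(x, lam=1600.0):
    """Hodrick-Prescott filter: returns (smooth trend s, cyclical residual x - s).
    Solves the first-order condition (I + lam * D'D) s = x, where D is the
    (T-2) x T second-difference operator."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]   # second difference at t+1
    s = np.linalg.solve(np.eye(T) + lam * D.T @ D, x)
    return s, x - s
```

A pure linear trend passes through unchanged, since all of its second differences are zero.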
Contexts: macro; estimation
hold-up problem:
One of a certain class of contracting problems.
Imagine a situation where there is profit to be made if agents A and B work
together, so they consider an agreement to do so after A buys the necessary
equipment. The hold-up problem (in this context) is that A might not be
willing to take that agreement, even though the outcome would be Pareto
efficient, because after A has made that investment, B might decide to demand
a larger share of the profits. A is by then deeply invested in the project
but B is not, so B has bargaining power that wasn't there before the
investment. B could demand all of the profits, in fact, since A's
alternative is to lose the investment entirely.
Other hold-up problems are analogous to this one.
Contexts: theory of the firm; corporate finance
Holder continuous:
An attribute of a function g:R^d->R. g can be
said to be Holder continuous if there exist constants C>0 and 0<E<=1 such
that for all u and v in R^d:
|g(u)-g(v)| <= C||u-v||^E
If g is Holder continuous with exponent E=1 then it is Lipschitz continuous.
And if g is Holder continuous then it is continuous.
Source: Hardle, 1990
Contexts: real analysis;nonparametrics
homoscedastic:
An alternate spelling of homoskedastic. McCulloch (1985) argues that
the spelling with the k is preferred, on the basis of the pronunciation and
etymology (Greek not French derivation) of the term.
Source: J. Huston McCulloch. 1985. "On heteros*edasticity"
Econometrica 53:2 (March, 1985), p 483.
With thanks to: Shazam web site which cited McCulloch
Contexts: econometrics; estimation
homoskedastic:
An adjective describing a statistical model in which the errors are drawn from
the same distribution for all values of the independent variables. Contrast
heteroskedastic.
This is a strong assumption, and includes in particular the assumption in a
linear regression, for example,
y = Xb + e
that the variance of the e's is the same for all X's.
(The observed variance will differ in almost any sample. But if one believes
that the data-generating process does not in principle have greater variances
for different values of the independent variable, one would describe the
sample as homoskedastic anyway.)
Contexts: econometrics; estimation
homothetic:
Let u(x) be a function homogeneous of degree one in x. Let g(z) be a function
of one argument that is monotonically increasing in z.
Then g(u(x)) is a homothetic function of x.
So a function is homothetic if it can be decomposed into an inner
function that is homogeneous of degree one and an outer function that is
monotonically increasing in its argument.
In consumer theory there are some useful analytic results that can come from
studying homothetic utility functions of consumption.
Contexts: micro theory.
HRS:
Health and Retirement Study, a longitudinal panel of older Americans studied
by the Survey Research Center at the University of Michigan. Their Web site
is at
http://www.umich.edu/~hrswww.
Contexts: labor
HSB:
High School and Beyond, a panel data set on U.S. high school
students.
Contexts: data
Huber standard errors:
Same as Huber-White standard errors.
Contexts: econometrics; statistics; estimation
Huber-White standard errors:
Standard errors which have been adjusted for specified assumed-and-estimated
correlations of error terms across observations.
The implicit citations are to Huber, 1967, White,
1980, and White, 1982.
Contexts: econometrics; statistics; estimation
human capital:
The attributes of a person that are productive in some economic context.
Often refers to formal educational attainment, with the implication that
education is investment whose returns are in the form of wage, salary,
or other compensation. These are normally measured and conceived of as
private returns to the individual but can also be social returns.
"'Human capital' was invented by the economist Theodore Schultz in 1960 to
refer to all those human capacities, developed by education, that can be used
productively -- the capacity to deal in abstractions, to recognize and adhere
to rules, to use language at a high level. Human capital, like other forms of
capital, accumulates over generations; it is a thing that parents 'give' to
their children through their upbringing, and that children then successfully
deploy in school, allowing them to bequeath more human capital to their own
children." -- Traub (2000)
Source: Traub, James. The New York Times. January 16, 2000. Sunday,
Late Edition, final. Article in section 6, starting page 52, column 1,
Magazine Desk.
With thanks to: Isaac McFarlin for finding this definition
hyperbolic discounting:
A way of accounting in a model for the difference in the preferences an agent
has over consumption now versus consumption in the future.
For a and g scalar real parameters greater than zero, under hyperbolic
discounting events t periods in the future are discounted by the factor
(1+at)^(-g/a).
That expression describes the "class of generalized hyperbolas".
This formulation comes from a 1999 working paper of C. Harris and D. Laibson,
which cites Ainslie (1992) and Loewenstein and Prelec (1992).
In dynamic models it is common to use the more convenient assumption that
agents have a common discount rate applying for any t-period forecast,
starting now or starting in the future. Hyperbolic discounting is less
convenient but fits the psychological evidence better, and when contrasted to
the constant-discount-rate assumption can get models to fit the noticeable
fall in consumption that U.S. workers are observed to experience when they
retire. In a constant-discount-rate model the worker would usually have
forecast the fall in income and their consumption expenses would be
smooth.
One reason hyperbolic preferences are less convenient in a model is not only
that there are more parameters but that the agent's decisions are not
time-consistent as they are with a constant discount rate. That is,
when planning for time two (two periods ahead) the agent might prepare for
what looks like the optimal consumption path as seen from time zero; but at
time two his preferences would be different.
Contrast quasi-hyperbolic discounting.
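The discount factor is easy to compute directly (a sketch; the function name is mine, and the parameter names follow the entry):

```python
def hyperbolic_discount(t, a=1.0, g=1.0):
    """Generalized-hyperbolic discount factor (1 + a*t)**(-g/a) applied to an
    event t periods in the future, for parameters a, g > 0."""
    return (1.0 + a * t) ** (-g / a)
```

With a = g = 1, an event one period ahead is discounted by 1/2 and two periods ahead by 1/3; compared to a constant discount rate, the factor falls quickly at short horizons and slowly at long ones.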
Source: "Dynamic choices of hyperbolic consumers", working paper by
Christopher Harris and David Laibson.
Contexts: models; macro; dynamic optimization
hysteresis:
a hypothesized property of unemployment rates -- that there is a ratcheting
effect, so a short-term rise in unemployment rates tends to persist.
Theories that would lead to hysteresis:
-- an insider/outsider model of decisionmaking about employment; insiders such
as the unionized workers ratchet up wage rates beyond where it is profitable
to hire the unemployed; outsiders who are unemployed don't get to be part of
the negotiation process.
-- behavioral and human capital changes among the unemployed, such as
forgetting the details of work or work behavior, or losing interest or skill
in getting new jobs, could lead to declining chances of becoming
employed.
Source: Blank, "Changes in Inequality and unemployment over the
1980s", J Population Economics, 1995 8:1-21 pg 5
Contexts: macro; labor; models
I(0):
A stochastic process, say {y_t}, is I(0), or "integrated
of order zero", if it is covariance stationary. Contrast
I(1).
Source: Hamilton
Contexts: time series; stochastic processes
I(1):
A stochastic process, say {y_t}, is I(1), or "integrated
of order one", if it is not covariance stationary but the
series created by taking the first differences of its elements
(e.g. x_t = y_t - y_(t-1)) is covariance
stationary.
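For illustration (a simulated sketch of my own): a random walk is I(1), and differencing it recovers the covariance-stationary I(0) shocks.

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.normal(size=5000)   # iid shocks: covariance stationary, I(0)
y = np.cumsum(e)            # random walk y_t = y_(t-1) + e_t: I(1)
d = np.diff(y)              # first differences recover the stationary shocks
```

The level series wanders without a fixed variance, while the differenced series looks like white noise.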
Source: Hamilton
Contexts: time series; stochastic processes
IARA:
increasing absolute risk aversion
IC constraint:
IC stands for "incentive compatible".
When solving a principal-agent maximization problem for a contract that meets
various criteria, the IC constraints are those that require agents to
prefer to act in accordance with the solution. If the IC constraint
were not imposed, the solution to the problem might be economically
meaningless, insofar as it produced an outcome that met some criterion of
optimality but which an agent would choose not to act in accord with.
See also IR constraint.
Contexts: principal-agent problems; micro theory; models
ICAPM:
Intertemporal CAPM. From Merton, 1973.
Source: Campbell, Lo, and MacKinlay 1996, pp 219-221;
Merton, 1973
Contexts: finance
ideal:
Broda and Weinstein (2005) write: "As explained in Sato (1976), a price index
P that is dual to a quantum index, Q, in the sense that PQ=E and shares an
identical weighting formula with Q is defined as 'ideal'. Fischer (1922) was
the first to use the term ideal to characterize a price index. He noted that
the geometric mean of the Paasche and Laspayres indices is ideal."
E there probably stands for expenditure.
Quantum probably means quantity.
Laspayres is the same as Laspeyres.
[Ed: I infer that the Paasche and Laspeyres indexes are not themselves
ideal.]
Source: Broda and Weinstein. 2005. Globalization and the gains from variety.
Aug 2005 working paper. especially circa p.14;
Diewert, W. Erwin. 1976. Exact and Superlative Index Numbers. Journal of
Econometrics. pp. 115-145.
Sato, Kazuo. 1976. The ideal log-change index number. Review of Economics
and Statistics. pp. 223-8.
Vartia, Yrjo. 1976. Ideal log-change index numbers. Scandinavian Journal of
Statistics. pp. 121-126.
Contexts: demand; estimation
idempotent:
A matrix M is idempotent if MM=M. (M times M equals M.)
Example: the identity matrix, denoted I.
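A standard example from econometrics (a numeric sketch of my own): the OLS projection ("hat") matrix is idempotent.

```python
import numpy as np

# P = X (X'X)^{-1} X' projects onto the column space of X, and P times P = P.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))          # any full-column-rank design matrix
P = X @ np.linalg.inv(X.T @ X) @ X.T
```

Applying the projection twice changes nothing, which is exactly the idempotency property.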
Contexts: econometrics; linear algebra
identification:
A parameter in a model is identified if and only if complete knowledge of the
joint distribution of the observed variables would be enough information to
calculate the parameter exactly.
If the model has been written in such a way that its parameters can be
consistently estimated from the observables, then the parameters are
identified. There exist cases (mostly obscure) where parameters are
identified but consistent estimators are not possible, as shown in
this discussion drawn from the elegant paper of
Gabrielsen (1978).
A model is identified if there is no observationally equivalent model. That
is, potentially observable random variables in the model have different
distributions for different values of the parameter.
Formally:
Let h* be a vector of unknown functions and distributions in an
econometric model.
Let H denote a set which h* is known to belong. H is defined by
the model's restrictions.
Let P(h) denote the joint distribution of observable variables of the model
for various elements of h in H. The distribution for the actual data will be
assumed to be P(h*).
Now, vector h* is identified within H if for all h in H such that
h<>h* it is true that P(h)<>P(h*).
Note: Linear models are either globally identified or there are an infinite
number of observationally equivalent ones. But for models that are nonlinear in
parameters, "we can only talk about local properties." Thus the
idea of locally identified models, which can be distinguished in data
from any other 'close by' model.
"An identification problem occurs when a specified set of assumptions
combined with unlimited observations drawn by a specified sampling process
does not reveal a distribution of interest." -- Manski, Charles F.
"Identification problems and decisions under ambiguity: empirical
analysis of treatment response and normative analysis of treatment
choice" Northwestern University Department of Economics and Institute for
Policy Research, September 1998, p. 2
Source: The New Palgrave: Econometrics, p 96; and Gabrielsen,
1978
Contexts: econometrics
identity matrix:
An identity matrix is a square matrix of any dimension whose elements are ones
on its northwest-to-southeast diagonal and zeroes everywhere else. Any square
matrix multiplied by the identity matrix with those dimensions equals itself.
One usually says 'the' identity matrix since in most contexts the dimension is
unambiguous. It is standard to denote the identity matrix by I.
Contexts: linear algebra
idle:
Sometimes used to name the state of people who are not in school but also not
working. Context is usually industrialized countries with established labor
markets, and the idle are often poor.
Contexts: labor; poverty
IER:
An abbreviation for the journal International Economic Review.
Contexts: journals
iff:
abbreviation for "if and only if"
Contexts: math
IGARCH:
Integrated GARCH, a kind of econometric model of a stochastic process
in which there is a unit root in a GARCH environment.
The IGARCH(p,q) process was proposed in Engle and Bollerslev (1986).
Source: EB86
Contexts: finance
IIA:
stands for Independence of Irrelevant Alternatives, an assumption in a model.
In a discrete choice setting, the multinomial logit model is appropriate only
if the introduction or removal of a choice has no effect on the proportion of
probability assigned to each of the other choices.
This is a strong assumption; a standard example where IIA is not an
appropriate assumption is if one compares a model of transportation choices
between a car and a red bus, then introduces a blue bus. The blue bus is
functionally like the red bus, so presumably its introduction draws ridership
more heavily from the red bus than from the car.
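The red bus/blue bus point can be seen directly in the multinomial logit probabilities (a sketch with made-up utilities):

```python
import numpy as np

def mnl_probs(v):
    """Multinomial-logit choice probabilities for a utility vector v (a softmax)."""
    e = np.exp(v - np.max(v))   # subtract the max for numerical stability
    return e / e.sum()

p2 = mnl_probs(np.array([1.0, 0.5]))        # choices: car, red bus
p3 = mnl_probs(np.array([1.0, 0.5, 0.5]))   # add a blue bus identical to the red
```

Under IIA the odds car : red bus are unchanged by the blue bus, so the car loses ridership in proportion too, contrary to the intuition that the blue bus should draw almost entirely from the red bus.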
Contexts: models; econometrics
iid:
An abbreviation for "independently and identically distributed."
One would say this about two or more random variables to describe their joint
distribution. A common use is to describe ongoing disturbances to a
stochastic process, indicating that they are not correlated to one
another.
Contexts: statistics; econometrics
IJIO:
An occasional abbreviation for the academic journal International Journal
of Industrial Organization.
Contexts: journals
ILS:
Indirect Least Squares, an approach to the estimation of simultaneous
equations models. Steps:
1) Rearrange the structural form equations into reduced form
2) Estimate the reduced form parameters
3) Solve for the structural form parameters in terms of the reduced form
parameters, and substitute in the estimates of the reduced form parameters to
get estimates for the structural ones.
Contexts: econometrics; estimation
IMF:
International Monetary Fund -- an international organization which makes loans
to maintain financial stability. The IMF makes loans to governments, not to
other institutions [ed.: as far as I know]. The IMF's objectives have to do
with minimizing the damage of financial crises, not with promoting economic
growth. The IMF has a substantial web site and supports its own
research department. See http://www.imf.org.
Contexts: macro; monetary; trade
implementable:
A decision rule (a mapping from expressed preferences by each of a
group of agents to a common decision) "is implementable (in Nash
equilibrium) if there exists a game form whose Nash equilibrium outcome is the
desired outcome for the true preferences."
Source: Miyagawa, 1998, p 2
Contexts: game theory
implicit contract:
A non-contractual agreement that corresponds to a Nash equilibrium to the
repeated bilateral trading game other than the sequence of Nash equilibria to
the one-shot trading game.
In the labor market -- an implicit contract is formally represented by a
series of games in which the firm pays a salary and the employee works
effectively because they expect to play the game again (continue the
agreement) if it goes well, not because they have an explicit, enforceable
contract.
That is, "by implicit contracts is meant nonbinding commitments from
employers to offer ... continuity of wages, employment, and working
conditions, and from employees to forgo such temptations as shirking and
quitting for better opportunities." -- Granovetter, Ch 9
Source: paraphrased from Clive Bull, (1987)
Contexts: labor; game theory; models
impossibility theorem:
One of a class of theorems following Arrow (1951) showing that social welfare
functions cannot have certain collections of desirable attributes in
common.
Source: Arrow, 1951;
Sen, Amartya. "Rationality and social choice". American
Economic Review, Vol 85:1, March 1995, pages 1-2.
Contexts: public economics; models
impulse response function:
Consider a shock to a system. A graph of the response of the system over time
after the shock is an impulse response function graph. One use is in models
of monetary systems. One graphs for example the percentage deviations in
output or consumption over time after a one-time one percent increase in the
money stock.
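A minimal illustration (my own, not from the source): the impulse response of an AR(1) process, for which the response to a unit shock decays geometrically.

```python
import numpy as np

def irf_ar1(phi, horizon):
    """Impulse response of y_t = phi * y_(t-1) + e_t to a one-time unit shock
    at t=0. With no further shocks, the response at horizon h is phi**h."""
    y = np.zeros(horizon + 1)
    y[0] = 1.0                      # the unit impulse
    for t in range(1, horizon + 1):
        y[t] = phi * y[t - 1]
    return y
```

Plotting the returned array against the horizon gives the impulse response function graph described above.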
Contexts: macro; econometrics; estimation
Inada conditions:
A function f() satisfies the Inada conditions if: f(0) = 0, f'(0) =
infinity, and f'(infinity) = 0. f() is usually a production function in this
context.
Source: Blanchard & Fischer, p. 38
Contexts: macro; models
inadmissible:
A possible action by a player in a game may be said to be inadmissible if it
is dominated by another feasible action.
The term comes from the view of a game as a math problem. An action is or is
not admissible as a candidate solution to the problem of choosing a
utility-maximizing strategy for the game player.
As used in Manski, Charles F. "Identification problems and decisions
under ambiguity: empirical analysis of treatment response and normative
analysis of treatment choice" Northwestern University Department of
Economics and Institute for Policy Research, September 1998, p. 2
Contexts: econometrics
incidental parameters:
Parameters of an estimation problem which only a limited number of data
observations tell us about. This poses a problem for estimating the other
parameters, even if every data observation tells us something about those.
To illustrate, suppose we have a statistical model that we are interested in
applying to data which is clustered in small groups and there is a parameter
associated with each small group -- classically, fixed effects. E.g.
we wish to measure the effect of neighborhoods on children's education, but
also wish for each family to have a fixed effect on the educational outcome,
and families have ten or fewer members each. Or we wish to measure worker
practices on safety outcomes, and to take into account that each employer has
an effect, but have (for some reason we cannot work around) fewer than twenty
observations on each employer.
Consider the consistency question: if the number of observations were to grow
to an infinite number and there were plenty of variation in the data, would it
be possible to estimate the structural parameters perfectly? A feature of the
problem here is that the groups (families or employers, in the examples) are
always small, so when we say infinite data, we mean more and more individuals,
but also more and more families or companies. These groups remain an
estimation problem no matter how many observations there are. Even as our
data grows in principle to infinite numbers of observations, the number of
parameters to be estimated also grows to an infinite number. In the language
of Neyman and Scott (1948, p. 2) who defined this term, the per-group
parameters, which grow infinitely, are the incidental parameters and
the parameters on which every observation sheds light, and which are the
actual subject of interest, are structural parameters. The structural
parameters in the examples above are the effects of various neighborhoods on
children, and the effect of various work practices on safety and
health.
In these models, consistent estimation of the incidental parameters is not
possible, in their language, because the data available on each of them is
finite. In some cases the structural parameters can be consistently
estimated, and in other cases they cannot. If consistency is possible, the
efficiency of estimation is sometimes impaired, and sometimes not. Maximum
likelihood estimation is sometimes biased or inconsistent (Neyman and Scott,
p. 7-8, and p. 26).
Source: Ed Johnson's working paper, "Ordered Logit with Fixed
Effects."
Neyman, J., and Elizabeth L. Scott. "Consistent Estimates Based on Partially
Consistent Observations" Econometrica, vol. 16, No. 1 (Jan. 1948), pp
1-32.
JSTOR copy of the article
Contexts: econometrics; estimation
income elasticity:
When used without another referent, appears to mean 'of consumption'. That is
for income I and consumption C:
income elasticity = (I/C)*(dC/dI)
One paper showed estimates of .2 to .6 for a random sample of middle-class
people in industrialized countries.
For more details see elasticity.
Contexts: labor; macro
indemnity:
A kind of insurance, in which payment is made (often in previously determined
amounts) for injuries suffered, not for the costs of recovery. The payment is
designed not to be dependent on anything the patient can control. From the
point of view of the insurer, this mechanism avoids the moral hazard problem
of the victim spending too much on recovery.
Source: Weisbrod's class circa 5/21/97
Contexts: public
independent:
Two random variables X and Y are statistically independent if and only if
their joint density (pdf) is the product of their marginal densities, that is
if f(x,y)=fx(x)fy(y).
If two random variables are independent they are also
uncorrelated.
Source: Greene, 1993, p. 64
Contexts: econometrics
indicator variable:
In a regression, a variable that is one if a condition is true, and zero if it
is false. Approximately synonymous with dummy variable, binary
variable, or flag.
Contexts: econometrics; estimation
indifference curve:
Represented for example on a graph whose horizontal and vertical axes are
quantities of goods an individual might consume, an indifference curve
represents a contour along which utility for that individual is constant. The
curve represents a set of possible consumption bundles between which the
individual is indifferent. Normally, with desirable goods on both axes (say,
income today and income tomorrow) the curve slopes downward and is convex to
the origin.
Contexts: micro; modelling
indirect utility function:
Denoted v(p, m) where p is a vector of prices for goods, and m is a budget in
the same units as the prices. This function takes the value of the maximum
utility that can be achieved by spending the budget m on the consumption goods
with prices p.
Source: Varian, 1992, Ch 15
Contexts: models
individually rational:
An allocation is individually rational if no agent is worse off in that
allocation than with his endowment.
Contexts: general equilibrium; models
inductive:
Characterizing a reasoning process of generalizing from facts, instances, or
examples. Contrast deductive.
Contexts: philosophy
Industrial Revolution:
A period commonly dated 1760-1830 in Britain (as in Mokyr, 1993, p 3 and
Ashton, 1948). Characterized by: "a complex of technological advances:
the substitution of machines for human skills and strength; the development of
inanimate sources of power (fossil fuels and the steam engine); the invention,
production, and use of new materials (iron for wood, vegetable for animal
matter, mineral for vegetable matter); and the introduction and spread of a
new mode of production, known by contemporaries as the factory system."
-- Landes (1993b) p 137.
Source: Mokyr, 1993, p 3; Landes,
1993b, p 137
Contexts: history
industrialization:
A historical phase and experience. The overall change in circumstances
accompanying a society's movement population and resources from farm
production to manufacturing production and associated services.
Source: Kemp, Thomas. 1985. Industrialization in Nineteenth-Century Europe.
page xi.
Contexts: history
inf:
Stands for 'infimum', the greatest lower bound of a set: every element of the
set is at least as large as the infimum, and no larger value has that
property. An infimum can exist in contexts where a minimum does not, because
(say) the set is open; e.g. the set (0,1) has no minimum but 0 is its infimum.
inf is a mathematical operator that maps from a set to a value that is
syntactically like the members of that set, although the value may not
actually be a member of the set.
Contexts: real analysis
inflation:
A reduction in the purchasing power of a currency, equivalently a rise in the
general price level. Often measured as the percentage increase in the general
price level per year.
Contexts: macro; money
information:
Relevant terms: meet,
stopping rule.
Contexts: fields
information matrix:
In maximum likelihood estimation, the variance of the score vector. It's a k
x k matrix, where k is the dimension of the vector of parameters being
estimated. The vector of parameters is denoted q
here:
I(q) = var S(q) = E[(S(q) - ES(q))(S(q) - ES(q))'] = E[S(q)S(q)']
where the score is S(q) = d ln L(q)/dq.
The information matrix can also be calculated as (-1) times the expectation of
the Hessian of the log-likelihood function.
Contexts: econometrics
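A numerical sketch (a hypothetical Bernoulli(p) example, not from the glossary): for a single Bernoulli observation, the information computed from the expected squared score equals the information computed from the expected negative Hessian, and both match the analytic value 1/(p(1-p)).

```python
# For one Bernoulli(p) observation: score s = (x - p)/(p(1-p)),
# negative second derivative of the log-likelihood: x/p^2 + (1-x)/(1-p)^2.
p = 0.3
info = 1.0 / (p * (1.0 - p))          # analytic Fisher information

def score(x, p):                       # d/dp of log[p^x (1-p)^(1-x)]
    return (x - p) / (p * (1.0 - p))

def neg_hessian(x, p):                 # -d2/dp2 of the log-likelihood
    return x / p**2 + (1.0 - x) / (1.0 - p)**2

# exact expectations over the two outcomes x in {0, 1}
e_score_sq = p * score(1, p)**2 + (1 - p) * score(0, p)**2
e_neg_hess = p * neg_hessian(1, p) + (1 - p) * neg_hessian(0, p)
print(round(e_score_sq, 6), round(e_neg_hess, 6), round(info, 6))
```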
information number:
Synonym for Fisher information (which see).
Contexts: econometrics; statistics
informational cascade:
"An informational cascade occurs when it is optimal for an individual,
having observed the actions of those ahead of him, to follow [that is,
imitate] the behavior of the preceding individual without regard to his own
information."
-- Bikhchandani, Hirshleifer, and Welch, 1992, p 992
Source: Bikhchandani, Hirshleifer, and Welch, 1992, p 992
INSEAD:
An American-style business school near Paris. Operates in English.
INSEE:
The economic statistics agency of the French government. Stands for Institut
National de la Statistique et des Etudes Economiques.
Its web site.
Contexts: data
inside money:
Any debt that is used as money. It is a liability to the issuer, so the net
amount of inside money in an economy is zero. Contrast outside money.
Contexts: money; models
institution:
There are several definitions. Here's one: "An institution is a social
mechanism through which men work together for common or like ends. It is a
necessary arrangement wherever regulated group behavior over a broad field of
activity is found. It is opposed in sociological thought to 'face to face'
grouping and to local community forms of life ..." (Ware, p. 6)
For more see new institutionalism.
Source: Norman J. Ware. Labor in Modern Industrial Society.
1935. D.C Heath and Company.
Contexts: sociology; history; organizations
instrumental variables:
Either (1) an estimation technique, often abbreviated IV, or (2) the exogenous
variables used in the estimation technique.
Suppose one has a model:
y = Xb + e
Here y is a T x 1 vector of dependent variables and X is a T x k matrix of
independent variables, both of which come from some data source. b is a k x 1
vector of parameters to estimate, and e is a T x 1 vector of errors made by
the model in its predictions of y.
Suppose in the environment being modelled that the matrix of independent
variables X may be correlated to the e's. One could run OLS, but
because of the correlation between X and e, the OLS estimator is biased and
inconsistent. However, using a T x k matrix of independent variables
Z, correlated to the X's but uncorrelated to the e's, one can construct an IV
estimator that will be consistent:
bIV = (Z'X)^-1 Z'y
The two stage least squares estimator is an important extension of this
idea.
In that discussion above, the exogenous variables Z are called instrumental
variables.
With thanks to: Jonathan Meer; Masahito Yoshida
Contexts: econometrics; estimation
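A single-regressor simulation sketch (hypothetical data-generating process, not from the glossary): with k = 1 the estimator (Z'X)^-1 Z'y reduces to sum(z*y)/sum(z*x). Here x is correlated with e, so OLS is biased upward, while the instrument z is correlated with x but not with e.

```python
import random

random.seed(42)
T, b_true = 20000, 2.0
z = [random.gauss(0, 1) for _ in range(T)]   # instrument: exogenous
e = [random.gauss(0, 1) for _ in range(T)]   # structural error
# x is endogenous: it loads on both the instrument and the error
x = [zi + ei + random.gauss(0, 1) for zi, ei in zip(z, e)]
y = [b_true * xi + ei for xi, ei in zip(x, e)]

b_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
b_iv = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * xi for zi, xi in zip(z, x))
print(round(b_ols, 3), round(b_iv, 3))  # OLS biased upward; IV near 2.0
```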
instruments:
When regressors are correlated to errors in a model, one may be able to
replace the regressors by estimates for these regressors that are not
correlated to the errors. This is the technique of instrumental
variables, and the replacement regressors are called
instruments.
The replacement regressors are constructed by running regressions of the
original regressors on exogenous variables that are called the instrumental
variables.
There are two conditions for a variable to be a viable instrument. First, it
must be uncorrelated with the errors. This is the exogeneity
condition. Second, it must be correlated with the endogenous variables. This
is the relevance condition. This second condition can be tested through the
correlation between the instrument and the endogenous variables (or the
coefficient on the instrument in the first stage of a two stage least squares
regression). The exogeneity condition, on the other hand, cannot generally be
directly tested, and an intuitive argument must be made. If there are more
instruments than endogenous variables, then an overidentification test can be
used to test exogeneity of the instruments.
Weak instruments which do not satisfy the relevance condition well can often
cause more harm than good. (On this last point, see John Bound, David A.
Jaeger and Regina Baker's "The Cure Can Be Worse than the Disease: A
Cautionary Tale Regarding Instrumental Variables," NBER Working Paper
T0137.)
Contexts: econometrics; estimation
integrated:
Said in reference to a random process. A random process is said to be
'integrated of order d' (sometimes denoted I(d)) for some natural number d if
the series that would remain after one took first differences d times would be
covariance stationary.
Example: a random walk is I(1).
Example: "Most macroeconomic flows and stocks that relate to population
size, such as output or employment, are I(1)." They are growing.
Example: "An I(2) series [might] be growing at an ever-increasing
rate."
Source: Greene, 1993, p 559
Contexts: econometrics; time series; estimation
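A simulation sketch (hypothetical example, not from the glossary): a random walk is I(1) because taking first differences once recovers the iid, covariance-stationary innovation series.

```python
import random

random.seed(1)
eps = [random.gauss(0, 1) for _ in range(1000)]   # iid innovations
walk, total = [], 0.0
for shock in eps:
    total += shock
    walk.append(total)                             # y_t = y_{t-1} + eps_t
diff = [b - a for a, b in zip(walk, walk[1:])]     # first differences
# differencing recovers the innovations (up to floating-point rounding)
print(all(abs(d - s) < 1e-9 for d, s in zip(diff, eps[1:])))
```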
intensive margin:
Refers to the degree (intensity) to which a resource is utilized or applied.
For example, the effort put in by a worker or the number of hours the worker
works. Contrast extensive margin.
inter alia:
"Among other things"
Source: American Heritage Dictionary, 1982
Contexts: phrases
inter vivos:
From Latin, "between lives". Used to describe gifts between
people, usually from one generation to the next, which are like
bequests except that both parties are alive. Quantities and timing of such
gifts are studied empirically in the same way that quantities and purposes
of bequests are subjects of empirical study.
Contexts: labor; family; tax
interim efficient:
Defined, apparently, in Holmstrom and Myerson (1983) with reference to
Rothschild and Stiglitz (1976). In Inderst (2000) this term is used to
characterize the set ('family') of Rothschild-Stiglitz contracts in a
particular model setting.
Source: Holmstrom, Bengt, and Roger Myerson. "Efficient and durable
decision rules with incomplete information." Econometrica 51
(1983), 1799-1819.
Inderst, Roman. "Markets with simultaneous signaling and
screening." Jan 2000 working paper from University of Mannheim.
See
http://www.vwl.uni-mannheim.de/modovan/roman.html.
Rothschild, M., and J Stiglitz. "Equilibrium in competitive insurance
markets: an essay on the economics of imperfect information."
Quarterly Journal of Economics 90 (1976), 629-650.
Contexts: game theory
interior solution:
A choice made by an agent that can be characterized as an optimum located at a
tangency of two curves on a graph.
A classic example is the tangency between a consumer's budget line
(characterizing the maximum amounts of good X and good Y that the consumer can
afford) and the highest possible indifference curve. At that tangency:
(marginal utility of X)/(price of X) = (marginal utility of Y)/(price of Y)
Contrast corner solution.
Contexts: micro theory; phrases
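A numerical sketch under an assumed Cobb-Douglas utility u = x*y with hypothetical prices px = 1, py = 2 and budget m = 12 (none of these numbers are from the glossary): at the interior optimum the tangency condition above holds exactly.

```python
px, py, m = 1.0, 2.0, 12.0
# For u = x*y the demands spend half the budget on each good
x, y = m / (2 * px), m / (2 * py)
mu_x, mu_y = y, x                    # marginal utilities of u = x*y
print(mu_x / px == mu_y / py)        # tangency: MU_x/p_x = MU_y/p_y
```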
internal knowledge spillover:
positive learning or knowledge externalities between programs or plants within
a production organization.
internal labor markets:
Refers to the process of reallocating workers within an organization, as
opposed to reallocating them to and from the outside, external labor market.
Relevant institutions include:
- hierarchical job ladders, in which promotions are orderly and routine;
- limited entry points to the organization, e.g. because it hires only for
the bottom jobs, or hires only recent college graduates;
- inducements to stay on the job, such as benefits earned with time like
extra vacation or stock options.
(Drawn from Stone, 1974, pp 163-4.)
There are others; to be added here as discovered by this editor. One reason
an employer would want to encourage internal labor markets is that workers
learn over time how to work effectively with particular technologies and in
particular organizations and losing this background to an outside employer
when a worker leaves is costly; substitute workers from outside would not know
the same things.
Source: Stone, Katherine. 1974. "The Origin of Job Structures in the
Steel Industry" Review of Radical Political Economics. pp
113-173.
Contexts: labor
inverse demand function:
A function p(q) that maps from a quantity of output to a price in the market;
one might model the demand a firm faces by positing an inverse demand function
and imagining that the firm chooses a quantity of output.
Contexts: IO; modelling; micro
inverse Mills ratio:
Usually denoted lambda(Z), and defined by lambda(Z) = phi(Z)/PHI(Z), where
phi() is the standard normal pdf and PHI() is the standard normal cdf.
Contexts: econometrics
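A direct sketch of the definition above, with the standard normal cdf computed via the error function:

```python
import math

def std_normal_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills(z):
    # lambda(Z) = phi(Z) / PHI(Z)
    return std_normal_pdf(z) / std_normal_cdf(z)

print(round(inverse_mills(0.0), 4))  # phi(0)/PHI(0) = 0.3989/0.5, about 0.7979
```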
invertibility:
In context of time series processes, represented for example by a lag
polynomial, inverting means to solve for the e's (epsilons) in terms of the
y's.
One inverts moving average (MA) processes to get AR representations.
Source: Watson's compressed time series notes, p. 32
Contexts: econometrics; time series; linear algebra
investment:
Any use of resources intended to increase future production output or
income.
Contexts: macro
IO:
stands for 'Industrial Organization', the field of industry structure,
conduct, and performance. By structure we usually mean the size of the firms
in the industry -- e.g. whether firms have monopoly power.
Relevant terms: absorptive capacity,
affine pricing,
base point pricing,
Bertrand competition,
Bertrand duopoly,
Bertrand game,
bidding function,
capital,
circulating capital,
Clayton Act,
compensating variation,
concentration ratio,
cost curve,
Cournot duopoly,
Cournot game,
Cournot model,
DOJ,
dominant design,
exclusive dealing,
firm,
FTC,
FTC Act,
Gini index,
Glass-Steagall Act,
H index,
Herfindahl-Hirschman index,
inverse demand function,
Lerner index,
linear pricing schedule,
Lorenz curve,
market power,
market power theory of advertising,
Markov strategy,
monopoly,
monopoly power,
monopsony,
network externalities,
nonlinear pricing,
oligopsony,
predatory pricing,
price complements,
price substitutes,
pricing schedule,
product differentiation,
Regulation Q,
Robinson-Patman Act,
shakeout,
Sherman Act,
SIC,
team production,
theory of the firm,
tying,
X-inefficiency model.
Contexts: fields
IPO:
Stands for "initial public offering", the event of a firm's first
sale of stock shares.
Contexts: finance
IPUMS:
Integrated Public Use Microdata Series. These are collections of U.S. Census
data, adapted for easy use by the University of Minnesota Social History
Research Laboratory, at its Web site
http://www.ipums.umn.edu.
Contexts: data
IR constraint:
IR stands for "individually rational".
When solving a principal-agent maximization problem for a contract that meets
various criteria, the IR constraints are those that require agents to prefer
to sign the contract than not to. If the IR constraint were not imposed, the
solution to the problem might be economically meaningless, insofar as it was a
contract that met some criterion of optimality but which an agent would refuse
to sign.
See also IC constraint.
Contexts: models; micro theory; principal-agent problems
IRS:
The United States national tax collection agency, called the Internal Revenue
Service.
is consistent for:
means "is a consistent estimator of"
Contexts: phrases; econometrics
isoquant:
Given a production function, an isoquant is "the locus of input
combinations that yield the same output level." (Chiang, p. 360) There is an
isoquant set for each possible output level. Mathematically the isoquant is a
level curve of the production function.
Examples and discussion is at Martin Osborne's web page:
http://www.chass.utoronto.ca/~osborne/2x3/tutorial/ISOQUANT.HTM.
Source: Chiang, 1984
Contexts: production theory; micro
Ito process:
A stochastic process: a generalized Wiener process with normally
distributed jumps.
Contexts: time series; finance; models; statistics
IV:
abbreviation for instrumental variables, an estimation technique.
Contexts: econometrics; estimation
J statistic:
In a GMM context, when there are more moment conditions than parameters
to be estimated, a chi-square test can be used to test the overidentifying
restrictions. The test statistic can be called the J statistic.
In more detail: Say there are q moment conditions and p parameters to be
estimated. Let the weighting matrix be the inverse of the asymptotic
covariance matrix. Let T be the sample size. Then T times the minimized
value of the objective function, T·J_T(b_T), is asymptotically distributed with a
chi-square distribution with (q-p) degrees of freedom.
Source: Ogaki, Handbook of Statistics, Vol 11, chapter 17, p 458;
Hansen, 1982
Contexts: econometrics
jackknife estimator:
Has multiple, overlapping definitions numbered below:
(1) kind of nonparametric estimator for a regression function. A jackknife
estimator is a linear combination of kernel estimators with different window
widths. Jackknife estimators have higher variance but less bias than kernel
estimators. (Hardle, p. 145.)
(2) creates a series of statistics, usually a parameter estimate, from a
single data set by generating that statistic repeatedly on the data set
leaving one data value out each time. This produces a mean estimate of the
parameter and a standard deviation of the estimates of the parameter.
(Nick Cox, in an email broadcast to Stata users on statalist, circa
7/5/2000.)
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
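A sketch of definition (2), on a hypothetical sample (not from the glossary): recompute a statistic leaving out one observation at a time, and use the spread of the leave-one-out estimates to get a standard error.

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # hypothetical sample

def jackknife(stat, data):
    n = len(data)
    # leave-one-out recomputations of the statistic
    loo = [stat(data[:i] + data[i+1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    # standard jackknife variance formula
    var = (n - 1) / n * sum((v - mean_loo) ** 2 for v in loo)
    return mean_loo, math.sqrt(var)

mean_hat, se = jackknife(lambda xs: sum(xs) / len(xs), data)
print(round(mean_hat, 3), round(se, 3))
```

For the sample mean, the jackknife standard error reproduces the usual s/sqrt(n) formula, which makes this a convenient sanity check.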
JE:
An occasional abbreviation for the academic journal Journal of
Econometrics.
Contexts: journals
JEH:
An abbreviation for the Journal of Economic History.
Contexts: journals
JEL:
Journal of Economic Literature. See also JEL classification
codes.
Contexts: journals
JEL classification codes:
These define a classification system for books and journal articles relevant
to the economic researcher. The list has three levels of precision:
categories A-Z, subcategories like A0-A2 (these are used to classify books),
and sub-subcategories like A10-A14 (which are used to classify journal
articles). The second level is detailed here; for the complete set of
possible JEL codes see any issue, e.g. in the Sept 1997 issue, pages
1609-1620. The list below comes from that same issue, pages 1437-1439. A
more up-to-date list is online at
http://www.aeaweb.org/journal/elclasjn.html
A. General Economics and Teaching (A0 General, A1 General Economics, A2
Teaching of Economics)
B. Methodology and History of Economic Thought (B0 General, B1 History of
Economic Thought through 1925, B2 History of Economic Thought since 1925, B3
History of Thought: Individuals, B4 Economic Methodology)
C. Mathematical and Quantitative Methods (C0 General, C1 Econometric and
Statistical Methods: General, C2 Econometric and Statistical Methods: Single
Equation Models, C3 Econometric and Statistical Methods: Multiple Equation
Models, C4 Econometric and Statistical Methods: Special Topics, C5 Econometric
Modeling, C6 Mathematical Methods and Programming, C7 Game Theory and
Bargaining Theory, C8 Data Collection and Data Estimation Methodology;
Computer Programs, C9 Design of Experiments)
D. Microeconomics (D0 General, D1 Household Behavior and Family Economics, D2
Production and Organizations, D3 Distribution, D4 Market Structure and
Pricing, D5 General Equilibrium and Disequilibrium, D6 Economic Welfare, D7
Analysis of Collective Decision-Making, D8 Information and Uncertainty, D9
Intertemporal Choice and Growth)
E. Macroeconomics and Monetary Economics (E0 General, E1 General Aggregative
Models, E2 Consumption, Saving, Production, Employment, and Investment, E3
Prices, Business Fluctuations, and Cycles, E4 Money and Interest Rates, E5
Monetary Policy, Central Banking and the Supply of Money and Credit, E6
Macroeconomic Aspects of Public Finance, Macroeconomic Policy, and General
Outlook)
F. International Economics (F0 General, F1 Trade, F2 International Factor
Movements and International Business, F3 International Finance, F4
Macroeconomic Aspects of International Trade and Finance)
G. Financial Economics (G0 General, G1 General Financial Markets, G2 Financial
Institutions and Services, G3 Corporate Finance and Governance)
H. Public Economics (H0 General, H1 Structure and Scope of Government, H2
Taxation and Subsidies, H3 Fiscal Policies and Behavior of Economic
Agents)
Source: JEL XXXV: 3 (Sept 1997), pp 1437-1439
Contexts: journals
JEMS:
An abbreviation for the Journal of Economics and Management
Strategy.
Contexts: journals
Jensen's inequality:
If X is a real-valued random variable with E(|X|) finite and the function g()
is convex, then E[g(X)] >= g(E[X]).
One application: By Jensen's inequality, E[X^2] >= (E[X])^2. Since the
difference between these is the variance, we have just shown that any random
variable for which E[X^2] is finite has a variance and a mean.
This is the inequality one can refer to when showing that an investor with a
concave utility function prefers a certain return to the same expected return
with uncertainty.
Contexts: probability; statistics; finance
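A numerical sketch of the application above (the distribution is hypothetical): for the convex function g(x) = x^2, E[g(X)] >= g(E[X]), and the gap is exactly the variance.

```python
xs = [1.0, 2.0, 6.0]     # support of a hypothetical discrete X
ps = [0.5, 0.3, 0.2]     # probabilities

e_x = sum(p * x for p, x in zip(ps, xs))           # E[X]
e_x2 = sum(p * x * x for p, x in zip(ps, xs))      # E[X^2]
# E[X^2] >= (E[X])^2, and the gap is var(X)
print(e_x2, e_x ** 2, e_x2 - e_x ** 2)
```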
JEP:
An abbreviation for the Journal of Economic Perspectives.
Contexts: journals
JET:
An abbreviation for the Journal of Economic Theory.
Contexts: journals
JF:
Journal of Finance
Contexts: finance; journals
JFE:
Journal of Financial Economics
Contexts: finance; journals
JFI:
Journal of Financial Intermediation, at
http://www.bus.umich.edu/jfi/
JHR:
Journal of Human Resources
Contexts: journals
JIE:
An abbreviation for the
Journal of Industrial Economics
.
Contexts: journals
JLE:
An abbreviation for the Journal of Law and Economics.
Contexts: journals
JLEO:
An abbreviation for the Journal of Law, Economics and
Organization.
Contexts: journals
job lock:
Describes the situation of a person with a U.S. job who is not free to leave
for another job because the first job has medical benefits associated with it
that the person needs, and the second one would not, perhaps because
'pre-existing conditions' are often not covered under U.S. health
insurance.
JOE:
The monthly US publication
Job Openings for
Economists.
Contexts: publications
journals:
In the context of research economics these are academic periodicals, usually
with peer-reviewed contents. An amazingly complete list of hyperlinks to
journals is at the WebEc web site.
Some are also in this glossary directly, below.
Relevant terms: AER,
AJS,
ASQ,
ASR,
BJE,
EconLit,
Econometrica,
EEH,
EER,
EJ,
EMA,
GEB,
IER,
IJIO,
JE,
JEH,
JEL,
JEL classification codes,
JEMS,
JEP,
JET,
JF,
JFE,
JHR,
JIE,
JLE,
JLEO,
JPAM,
JPE,
JPubE,
JRE,
Kyklos,
QJE,
ReStat,
ReStud,
RJE.
Contexts: fields
JPAM:
Journal of Policy Analysis and Management
Contexts: journals
JPE:
Abbreviation for the Journal of Political
Economy
Contexts: journals
JPubE:
Journal of Public Economics
Contexts: journals
JRE:
An abbreviation for the Journal of Regulatory Economics.
Contexts: journals
k percent rule:
A monetary policy rule of keeping the growth of money at a fixed rate of k
percent a year. This phrase is often used as stated, without specifying the
percentage.
Contexts: money; macro
k-nearest-neighbor estimator:
A kind of nonparametric estimator of a function. Given a data set
{Xi, Yi} it estimates values of Y for X's other than
those in the sample. The process is to choose the k values of Xi
nearest the X for which one seeks an estimate, and average their Y values.
Here k is a parameter to the estimator. The average could be weighted, e.g.
with the closest neighbor having the most impact on the estimate.
Source: Hardle, 1990
Contexts: econometrics; estimation
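A sketch of the unweighted variant described above, on hypothetical data (not from the glossary): average the Y values of the k sample points whose X values are nearest the query point.

```python
def knn_estimate(x0, data, k):
    # data: list of (x, y) pairs; average the y's of the k nearest x's
    nearest = sorted(data, key=lambda pt: abs(pt[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0), (3.0, 8.0), (4.0, 10.0)]
print(knn_estimate(1.5, data, k=2))  # averages y at x=1 and x=2, giving 2.5
```

A weighted version would replace the plain average with weights that decline in distance from x0, as the entry notes.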
Kalman filter:
The Kalman filter is an algorithm for sequentially updating a linear
projection for a dynamic system that is in state-space representation.
Application of the Kalman filter transforms a system of the following
two-equation kind into a more solvable form:
x_{t+1} = A x_t + C w_{t+1}
y_t = G x_t + v_t
in which:
A, C, and G are matrices known as functions of a parameter q about which
inference is desired (this is the problem to be solved),
t is a whole number, usually indexing time,
x_t is a true state variable, hidden from the econometrician,
y_t is a measurement of x with scalings G and measurement errors v_t,
w_t are innovations to the hidden x_t process,
E[w_{t+1} w_{t+1}'] = I by normalization,
E[v_t v_t'] = R, an unknown matrix, estimation of which is necessary but
ancillary to the problem of interest, which is to get an estimate of q.
The Kalman filter defines two matrices S_t and K_t such that the system
described above can be transformed into the one below, in which estimation and
inference about q and R is more straightforward, possibly even by OLS:
z_{t+1} = A z_t + K a_t
y_t = G z_t + a_t
where
z_t is defined to be E_{t-1}[x_t],
a_t is defined to be y_t - E_{t-1}[y_t],
K is defined to be lim K_t as t goes to infinity.
The definition of those two matrices S_t and K_t is itself most of the
definition of the Kalman filter:
K_t = A S_t G' (G S_t G' + R)^-1
S_{t+1} = (A - K_t G) S_t (A - K_t G)' + CC' + K_t R K_t'
K_t is called the Kalman gain.
It's not yet clear to me what specific examples there are of problems that the
Kalman filter solves.
Source: Hamilton p 372; Sargent lecture 5/8/97
Contexts: macro; econometrics; estimation
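A scalar numerical sketch of the gain and covariance recursions above (all parameter values are hypothetical, not from the glossary): iterate K_t and S_t until the Kalman gain settles at its limit K.

```python
# Scalar system: A = 0.9, G = 1, C = 1, R = 0.5 (hypothetical numbers)
A, G, C, R = 0.9, 1.0, 1.0, 0.5
S = 1.0                                   # initial covariance guess
for _ in range(200):
    K = A * S * G / (G * S * G + R)       # gain:  K_t = A S_t G'(G S_t G'+R)^-1
    S = (A - K * G) * S * (A - K * G) + C * C + K * R * K   # S_{t+1}
print(round(K, 4), round(S, 4))           # converged gain and covariance
```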
Kalman gain:
One of the two equations that characterizes the application of the Kalman
filter process defines an expression sometimes denoted Kt,
which is called the Kalman gain.
That equation, using notation from Sargent's lectures, is:
K_t = A S_t G' (G S_t G' + R)^-1
Contexts: macro; econometrics
keiretsu system:
The framework of relationships among postwar Japan's big banks and big firms.
Related companies are organized around a big bank (like Mitsui, Mitsubishi,
and Sumitomo); they own a lot of equity in one another and in the bank and do
much business with one another. This system has the virtue of maintaining long
term business relationships and stability in suppliers and customers. It has
the disadvantage of reacting slowly to outside events since the players are
partly protected from the external market. (p 412)
Source: Landau, Ralph. 1996. "Strategy for Economic Growth: lessons from the
Chemical Industry." In The Mosaic of Economics Growth, edited by Ralph
Landau, Timothy Taylor, and Gavin Wright. Stanford University Press.
kernel estimation:
Kernel estimation means the estimation of a regression function or probability
density function. Such estimators are consistent and asymptotically normal if
as the number of observations n goes to infinity, the bandwidth (window width)
h goes to zero, and the product nh goes to infinity. In practice, kernel
estimation may mean use of the Nadaraya-Watson estimator.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
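Since in practice kernel estimation may mean the Nadaraya-Watson estimator, here is a minimal sketch using the Epanechnikov kernel (the data and bandwidth are hypothetical, not from the glossary):

```python
def epanechnikov(u):
    # a standard kernel: K(u) = 0.75(1 - u^2) on |u| <= 1, zero elsewhere
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def nadaraya_watson(x0, data, h):
    # weighted average of y's with weights K((x0 - x_i)/h); h is the bandwidth
    # (assumes at least one observation falls within the bandwidth of x0)
    weights = [epanechnikov((x0 - x) / h) for x, _ in data]
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, data)) / total

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]  # y = x^2 sampled
print(round(nadaraya_watson(1.5, data, h=2.0), 3))
```

Shrinking h localizes the average; the consistency conditions in the entry require h to shrink slowly enough that each window still contains many points.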
kernel function:
A weighting function used in nonparametric function estimation. It gives the
weights of the nearby data points in making an estimate. In practice kernel
functions are piecewise continuous, bounded, symmetric around zero, concave at
zero, real valued, and for convenience often integrate to one. They can be
probability density functions. Often they have a bounded domain like
[-1,1].
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
Keynes effect:
As prices fall, a given nominal amount of money will be a larger real amount.
Consequently the interest rate would fall and investment demand would rise. This
Keynes effect disappears in the liquidity trap. Contrast the
Pigou effect.
Another phrasing: a change in interest rates affects spending more than it
affects saving.
Source: James Tobin. "Keynesian Models of Recession and
Depression"
Contexts: macro; models
kitchen sink regression:
Describes a regression where, in the opinion of the writer, the regressors
are not thoroughly 'justified' by an argument or a theory. Often used
pejoratively; at other times it describes an exploratory regression.
Contexts: estimation; econometrics
KLIC:
Kullback-Leibler Information Criterion. An unpublished paper by Kitamura
(1997) describes this as a distance between probability measures. It is
defined in that paper thus. The KLIC between probability measures P and Q
is:
I(P||Q) = [integral of] ln(dP/dQ) dP   if P << Q
        = infinity                     otherwise
Contexts: econometrics
Knightian uncertainty:
Unmeasurable risk. Contrast risk, which is measurable.
Source: Used in Rosenberg (1996) in Mosaic of Economic Growth.
knots:
If a regression will be run to estimate different linear slopes for different
ranges of the independent variables, it's a spline regression, and the
endpoints of the ranges are called knots.
The spline regression is designed so that the resulting spline function,
estimating the dependent variable, is continuous at the knots.
Source: Greene, 1993, p 237
Contexts: econometrics
Kolmogorov's Second Law of Large Numbers:
If {wt} is a sequence of iid draws from a distribution and
Ewt exists (call it mu) then the average of the wt's
goes 'almost surely' to mu as t goes to infinity.
Same as strong law of large numbers, I believe.
Source: Bruce Meyer's D80-3 notes
Contexts: econometrics; statistics
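A simulation sketch (hypothetical Uniform(0,1) draws, with mu = 0.5; the seed is fixed for reproducibility): the running average approaches mu as t grows.

```python
import random

random.seed(0)
draws = [random.random() for _ in range(100000)]   # iid Uniform(0,1), mu = 0.5
for t in (100, 10000, 100000):
    avg = sum(draws[:t]) / t       # running average through observation t
    print(t, round(avg, 4))
```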
Kronecker product:
This is an operator that takes two matrix arguments. It is denoted by a small
circle with an x in it, but will be denoted here by 'o'. Let A be an M x N
matrix, and B be an R x S matrix. Then AoB is an MR x NS matrix, formed from
A by multiplying each element of a by the entire matrix B and putting it in
the place of the element of A, e.g.:
a11B a12B ... a1nB
. . . . . .
. . . . . .
aM1B aM2B ... aMnB
Kronecker products have the following useful properties:
(AoB)(CoD)=ACoBD
(AoB)^-1 = A^-1 o B^-1
(AoB)' = A'oB'
(AoB)+(AoC)=Ao(B+C)
AoC+BoC = (A+B)oC
Contexts: econometrics; linear algebra
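A direct implementation of the definition above (each element a_ij scales the whole matrix B), with a check of the transpose property on hypothetical matrices:

```python
def kron(A, B):
    # A is M x N, B is R x S; the result is MR x NS
    M, N = len(A), len(A[0])
    R, S = len(B), len(B[0])
    out = [[0] * (N * S) for _ in range(M * R)]
    for i in range(M):
        for j in range(N):
            for r in range(R):
                for s in range(S):
                    # block (i, j) of the result is a_ij * B
                    out[i * R + r][j * S + s] = A[i][j] * B[r][s]
    return out

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(kron(A, B))
```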
Kruskal's theorem:
Let X be a set of regressors, y be a vector of dependent variables, and the
model be: y=Xb+e where E[ee'] is the matrix OMEGA.
The theorem is that if the column space of (OMEGA)X is the same as the column
space of X, then the GLS estimator of b is the same as the
OLS estimator of b.
Contexts: econometrics
kurtosis:
An attribute of a distribution, describing 'peakedness'. Kurtosis is
calculated as E[(x-mu)^4]/s^4 where mu is the mean and s
is the standard deviation.
Source: Hogg and Craig, p 57
Contexts: econometrics; statistics
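A sketch of the formula above, using population moments on a hypothetical sample (not from the glossary):

```python
def kurtosis(xs):
    # E[(x - mu)^4] / s^4 with population (divide-by-n) moments
    n = len(xs)
    mu = sum(xs) / n
    s2 = sum((x - mu) ** 2 for x in xs) / n    # population variance
    m4 = sum((x - mu) ** 4 for x in xs) / n    # fourth central moment
    return m4 / s2 ** 2

print(kurtosis([2.0, 2.0, 4.0, 6.0, 6.0]))
```

For reference, a normal distribution has kurtosis 3, so values below 3 indicate a flatter-than-normal distribution.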
Kuznets curve:
A graph with measures of increased economic development (presumed to correlate
with time) on the horizontal axis, and measures of income inequality on the
vertical axis hypothesized by Kuznets (1955) to have an inverted-U-shape.
That is, Kuznets made the proposition that when an economy is primarily
agricultural it has a low level of income inequality, that during early
industrialization income inequality increases over time, and that at some
critical point it starts to decrease over time. Kuznets (1955) showed
evidence for
this.
Source: Kuznets, 1955, p 16-17
Contexts: development; macro
Kyklos:
A journal, whose Web site is at
http://www.kyklos-review.ch/kyklos/index.html.
Contexts: journals
L1:
The set of Lebesgue-integrable real-valued functions on [0,1].
Source: Royden, p 118
Contexts: real analysis; models
L2:
A Hilbert space with inner product (x,y) = integral of x(t)y(t) dt.
Equivalently, L2 is the space of real-valued random variables that
have variances. This is an infinite dimensional space.
Source: Royden, p 118
Contexts: real analysis; models
Ln:
is the set of continuous bounded functions with domain
Rn
Source: Lucas (78) "Asset Pricing" paper
Contexts: real analysis; models
labor:
"[L]abor economics is primarily concerned with the behavior of employers
and employees in response to the general incentives of wages, prices, profits,
and nonpecuniary aspects of the employment relationship, such as working
conditions."
Relevant terms: active measures,
AFQT,
AGI,
average treatment effect,
Beveridge curve,
BHPS,
cobweb model,
cohort,
CPI,
education production function,
efficiency units,
efficiency wage hypothesis,
efficiency wages,
Engel effects,
Eurosclerosis,
experience,
extensive margin,
family,
free entry condition,
frictional unemployment,
GDP deflator,
Gini index,
GSOEP,
HRS,
hysteresis,
idle,
implicit contract,
inter vivos,
internal labor markets,
labor market outcomes,
labor-leisure tradeoff,
Lerman ratio,
natural rate of unemployment,
NLS,
NLSY,
NLSYW,
oligopsony,
passive measures (to combat unemployment),
price complements,
price substitutes,
PSID,
regrettables,
reservation wage property,
SCF,
SES,
skill,
SLID,
social capital,
statistical discrimination,
structural unemployment,
Survey of Consumer Finances,
tenure,
tightness,
treatment effects,
unemployment,
union threat model,
wage curve,
welfare capitalism,
yellow-dog contract.
Contexts: fields
labor market outcomes:
Shorthand for worker (never employer) variables that are often considered
endogenous in a labor regression. Variables so determined, which may
nonetheless appear on the right side of such regressions, include wage rates,
employment indicators, and employment rates.
Contexts: labor
labor productivity:
Quantity of output per time spent or numbers employed. Could be measured in,
for example, U.S. dollars per hour.
Contexts: macro
labor theory of value:
"Both Ricardo and Marx say that the value of every commodity is (in
perfect equilibrium and perfect competition) proportional to the quantity of
labor contained in the commodity, provided this labor is in accordance with
the existing standard of efficiency of production (the 'socially necessary
quantity of labor'). Both measure this quantity in hours of work and use the
same method in order to reduce different qualities of work to a single
standard." And neither accounts well for monopoly or imperfect
competition. (Schumpeter, p 23)
Source: Schumpeter, Joseph R. 1950. Capitalism, Socialism, and
Democracy, third edition. (First edition 1942.) Harper & Row. New
York.
labor-augmenting:
One of the ways in which an effectiveness variable could be included in a
production function in a Solow model. If effectiveness A is multiplied
by labor L but not by capital K, then we say the effectiveness variable is
labor-augmenting.
See also Harrod-neutral, a near-synonym. It is suggested in the
literature using the term Harrod-neutral that some inventions or other
technical changes might be measured to be labor-augmenting.
Source: Romer, 1996, p 7
Contexts: macro
labor-leisure tradeoff:
In a model of how people spend their time and effort, a classic design is to
label time at work 'labor' and time not at work 'leisure'.
The agents being modeled may choose to work more time and earn more money,
or to work less and earn less. If they desire both money and leisure, but
receive diminishing returns from each, then they might make an interior
choice in the model, working neither zero nor 24 hours a day. If so the
model has succeeded in the sense that it made a prediction which could be
tested.
In such a model one might use a utility function u(i,e) to describe the
agent's behavior, where i is income and e is leisure time. The assumption of
diminishing returns takes this form: let e and i be nonnegative, with e less
than 24 hours; the first partial derivatives satisfy ui>0 and ue>0, and the
second partial derivatives satisfy uii<0 and uee<0. These requirements are
not yet enough for there to be an interior solution, but this is the main
path followed by the thinking of an author who makes reference to the
labor-leisure tradeoff.
Contexts: labor
LAD:
Stands for 'Least absolute deviations' estimation.
LAD estimation can be used to estimate a smooth conditional median function;
that is, an estimator for the median of the process given the data. Say the
data are stationary {xt, yt}. The dependent variable is
y and the independent variable is x. The criterion function to be minimized
in LAD estimation for each observation t is:
q(xt,yt,θ) = |yt - m(xt,θ)|
where m() is a guess at the conditional median function and θ is its
parameter.
Under conditions specified in Wooldridge, p 2657, the LAD estimator is
Fisher-consistent for the parameters of the conditional median
function.
Source: Wooldridge 1995, p 2657
Contexts: econometrics
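A minimal numerical sketch (the data and the grid search are invented for
illustration): for a constant-only model the LAD criterion is minimized at
the sample median, which is why LAD estimates conditional medians, while
least squares is pulled toward the mean.

```python
# Sketch of LAD estimation for a location model y_t = theta + error_t.
# The LAD criterion sum_t |y_t - theta| is minimized at the sample median.
y = [1.0, 2.0, 2.5, 3.0, 50.0]  # invented data with one large outlier

def lad_loss(theta, ys):
    """Sum of absolute deviations |y_t - theta|."""
    return sum(abs(v - theta) for v in ys)

# Crude grid search over candidate values of theta.
grid = [i / 100.0 for i in range(0, 6000)]
theta_lad = min(grid, key=lambda t: lad_loss(t, y))

# Least squares would instead deliver the mean, dragged up by the outlier.
theta_ols = sum(y) / len(y)
```

Here theta_lad comes out at the median, 2.5, while theta_ols is 11.7,
illustrating the robustness of the LAD criterion to outliers.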
lag operator:
Denoted L. Operates on an expression by moving the subscripts on a time
series back one period, so:
L et = et-1
Why? Well, it can make some expressions easier to manipulate. For example it
turns out one can write an MA(2) process (which see) to look like this,
in lag polynomials (which see):
et = (1 + p1L + p2L2)ut
and then divide both sides by the lag polynomial, and get a legal, meaningful,
correct expression.
Contexts: macro; time series; models
lag polynomial:
A polynomial expression in lag operators (which see).
Example: (1 - p1L + p2L2)
where L2 = LL, or the lag operator L applied twice.
These are useful for manipulating time series. For example, one can quickly
show an AR(1) is equivalent to an MA(infinity) by dividing both sides by the
lag polynomial (1-pL).
Contexts: models
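The AR(1)-to-MA(infinity) manipulation mentioned above can be checked
numerically. A sketch, with an arbitrary coefficient p=0.6 and simulated
shocks (all values invented for illustration):

```python
# Dividing an AR(1), (1 - pL)e_t = u_t, by the lag polynomial (1 - pL)
# gives e_t = (1 + pL + p^2 L^2 + ...)u_t, an MA(infinity). Check that the
# AR(1) recursion and the MA(infinity) sum of the same shocks agree.
import random

random.seed(0)
p, T = 0.6, 200
u = [random.gauss(0, 1) for _ in range(T)]

# AR(1) recursion: e_t = p*e_{t-1} + u_t, started at e_0 = u_0.
e = [0.0] * T
e[0] = u[0]
for t in range(1, T):
    e[t] = p * e[t - 1] + u[t]

# Direct MA sum: e_t = sum_{j=0}^{t} p^j u_{t-j}.
e_ma = [sum(p ** j * u[t - j] for j in range(t + 1)) for t in range(T)]
max_gap = max(abs(a - b) for a, b in zip(e, e_ma))
```

The two constructions coincide up to floating-point rounding.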
Lagrangian multiplier:
An algebraic term that arises in the context of problems of mathematical
optimization subject to constraints, which in economics contexts is sometimes
called a shadow price.
A long example: Suppose x represents a quantity of something that an
individual might consume, u(x) is the utility (satisfaction) gained by that
individual from the consumption of quantity x. We could model the
individual's choice of x by supposing that the consumer chooses x to maximize
u(x):
x = arg maxx u(x)
Suppose however that the good is not free, so the choice of x must be
constrained by the consumer's income. That leads to a constrained
optimization problem ............
[Ed.: this entry is unfinished]
Contexts: micro theory; optimization
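The entry above is unfinished; here is a hedged sketch of how the example
might continue, assuming (this is not from the source) log utility over two
goods and a linear budget constraint. The variable lam below is the
Lagrangian multiplier, interpretable as the shadow price of income:

```python
# Assume u(x1,x2) = ln(x1) + ln(x2), maximized subject to p1*x1 + p2*x2 = m.
# The first-order conditions of the Lagrangian u(x) - lam*(p1*x1+p2*x2-m)
# give x_i = m/(2*p_i) and lam = 2/m, the marginal utility of income.
import math

p1, p2, m = 2.0, 5.0, 100.0       # invented prices and budget
x1, x2 = m / (2 * p1), m / (2 * p2)
lam = 2.0 / m

def u(a, b):
    return math.log(a) + math.log(b)

# The multiplier should equal the numerical derivative of maximized
# utility with respect to the budget m (the "shadow price" of income).
h = 1e-4
u_star = u(m / (2 * p1), m / (2 * p2))
u_star_h = u((m + h) / (2 * p1), (m + h) / (2 * p2))
lam_numeric = (u_star_h - u_star) / h
```

The numerical derivative of maximized utility with respect to income matches
lam, illustrating the shadow-price interpretation mentioned above.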
laissez faire:
A government policy posture of letting market processes proceed without
intervention or regulation. Often implies tolerance of monopoly.
LAN:
stands for "locally asymptotically normal", a characteristic of many ("a
family of") distributions.
Contexts: statistics; econometrics
large sample:
Usually a synonym for 'asymptotic' rather than a reference to an actual sample
magnitude.
Contexts: econometrics
Laspeyres index:
A price index following a particular algorithm.
It is calculated from a set ("basket") of fixed quantities of a finite list of
goods. We are assumed to know the prices in two different periods. Let the
price index be one in the first period, which is then the base period. Then
the value of the index in the second period is equal to this ratio: the total
price of the basket of goods in period two divided by the total price of
exactly the same basket in period one.
As for any price index, if all prices rise the index rises, and if all prices
fall the index falls.
Source:
http://www.geocities.com/jeab_cu/paper2/paper2.htm.
Contexts: price indices; macro
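The algorithm above can be sketched in a few lines (the prices and the
basket are invented for illustration):

```python
# Laspeyres index: cost of a fixed base-period basket at period-two prices,
# divided by its cost at period-one (base) prices. Index = 1 in the base.
base_quantities = {"bread": 10, "milk": 5}
prices_period1 = {"bread": 1.00, "milk": 2.00}   # base period
prices_period2 = {"bread": 1.10, "milk": 2.50}

def basket_cost(prices, quantities):
    """Total price of the fixed basket at the given prices."""
    return sum(prices[g] * quantities[g] for g in quantities)

laspeyres = (basket_cost(prices_period2, base_quantities)
             / basket_cost(prices_period1, base_quantities))
```

With these numbers the basket costs 20 in the base period and 23.5 in period
two, so the index is 1.175.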
Law of iterated expectations:
Often exemplified by EtEt+1(.) = Et(.). That
is, "one cannot use limited information [at time t] to predict the
forecast error one would make if one had superior information [at t+1]."
-- Campbell, Lo, and MacKinlay, p 23.
Source: Sargent, 1987, Ch 3
Contexts: macro; models
LBO:
Leveraged buy-out. The act of taking a public company private by buying it
with the proceeds of bond issues, and then using the revenues of the company
to pay off the bonds.
Contexts: finance
least squares learning:
The kind of learning that an agent in a model exhibits by adapting to
past data by running least squares on it to estimate a hypothesized
parameter and behaving as if that parameter were correct.
Contexts: macro
leisure:
In some models, individuals spend some time working and the rest is lumped
into a category called leisure, the details of which are usually left
out.
Contexts: models
lemons model:
Describes models like that of Akerlof's 1970 paper, in which the fact that a
good is available suggests that it is of low quality. For example, why are
used cars for sale? In many cases because they are "lemons," that
is, they were problematic to their previous owners.
Source: George A Akerlof "The Market for Lemons..." QJE 1970
Contexts: models
Leontief production function:
Has the form q=min{x1,x2} where q is a quantity of output and x1 and x2 are
quantities of inputs or functions of the quantities of inputs.
Contexts: models; production
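A small illustration of the fixed-proportions property implied by the min
(values invented):

```python
# With q = min(x1, x2), the inputs are perfect complements:
# adding more of only one input does not raise output.
def leontief(x1, x2):
    return min(x1, x2)

q0 = leontief(3.0, 3.0)    # balanced inputs
q1 = leontief(10.0, 3.0)   # extra x1 is wasted; x2 is the bottleneck
```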
leptokurtic:
An adjective describing a distribution with high kurtosis. 'High' means the
fourth central moment is more than three times the square of the second
central moment (the variance); such a distribution has greater kurtosis than
a normal distribution. This term is
used in Bollerslev-Hodrick 1992 to characterize stock price returns.
Lepto- means 'slim' in Greek and refers to the central part of the
distribution.
Source: Davidson and MacKinnon, 1993, p 62
Contexts: statistics
Lerman ratio:
A government benefit to the underemployed will presumably reduce their hours
of work. The ratio of the actual increase in income to the benefit is the
Lerman ratio, which is ordinarily between zero and one. Moffitt (1992)
estimates it in regard to the U.S. AFDC program at about .625.
Source: Robert Moffitt, "Incentive Effects of the U.S. Welfare System: A
Review", JEL March 1992, p. 17.
Contexts: public finance; labor
Lerner index:
A measure of the profitability of a firm that sells a good: (price - marginal
cost) / price.
One estimate, from Domowitz, Hubbard, and Petersen (1988) is that the average
Lerner index for manufacturing firms in their data was .37.
Source: Domowitz, Hubbard, and Petersen (1988), p 57-58
Contexts: IO
leverage ratio:
Meaning differs by context. Often: the ratio of debts to total assets. Can
also be the ratio of debts (or long-term debts in particular, excluding for
example accounts payable) to equity.
Normally used to describe a firm's accounts, but could describe the accounts
of some other organization, or an individual, or a collection of
organizations.
Contexts: finance; accounting
Leviathan:
The all-powerful kind of state that Hobbes thought "was necessary to
solve the problem of social order." -- Cass R. Sunstein, "The Road
from Serfdom" The New Republic Oct 20, 1997, p 37.
Contexts: political
likelihood function:
In maximum likelihood estimation, the likelihood function (often denoted L())
is the joint probability function of the sample, given the probability
distributions that are assumed for the errors. For independently drawn
observations that function is constructed by multiplying the pdfs of the
data points together:
L(θ) = L(θ; X) = f(X; θ) = f(X0;θ)f(X1;θ)...f(XN;θ)
Contexts: econometrics; estimation
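A sketch of the construction for an i.i.d. N(θ,1) sample (the data are
invented): maximizing the product of the pdfs over a grid recovers the
sample mean, which is the usual maximum likelihood estimate in this case.

```python
# Likelihood of an i.i.d. N(theta, 1) sample: the product of the pdfs.
import math

data = [1.2, 0.8, 1.5, 1.1, 0.9]   # invented sample

def normal_pdf(x, theta):
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

def likelihood(theta, xs):
    """L(theta; X) = product of f(x_i; theta) over the sample."""
    L = 1.0
    for x in xs:
        L *= normal_pdf(x, theta)
    return L

# Crude grid maximization of the likelihood.
grid = [i / 1000.0 for i in range(0, 3000)]
theta_hat = max(grid, key=lambda t: likelihood(t, data))
sample_mean = sum(data) / len(data)
```

Here theta_hat lands on the sample mean, 1.1.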
Limdep:
A program for the econometric study of limited dependent variables. Limdep's
web site is at "http://www.limdep.com".
Contexts: data
limited dependent variable:
A dependent variable in a model is limited if it is discrete (can take on only
a countable number of values) or if it is not always observed because it is
truncated or censored.
Contexts: econometrics; estimation
LIML:
stands for Limited Information Maximum Likelihood, an estimation idea
Contexts: econometrics; estimation
Lindahl pricing:
A theoretical pricing schedule for a public good which prices the good at a
level which would extract all the consumer surplus from each type of consumer.
Consumers with different levels of demand for the good would each find the
good priced just high enough that each one is indifferent between paying for
it and not paying for it.
Contexts: public economics; theory
Lindeberg-Levy Central Limit Theorem:
Let {wt} be an iid sequence, with mean
E[wt]=μ and variance
var(wt)=σ2.
Let W denote the average of T of the wt's. Then as T goes to infinity,
T.5(W-μ)/σ will converge in distribution
to a standard normal distribution, N(0,1).
Source: Bruce Meyer's D80-3 notes
Contexts: econometrics; statistics; time series
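A simulation sketch of the theorem using Uniform(0,1) draws, which have mean
1/2 and variance 1/12 (the sample sizes are arbitrary):

```python
# Scaled, centered averages of i.i.d. uniforms should look standard normal.
import math
import random

random.seed(1)
T, reps = 400, 2000
mu = 0.5
sigma = math.sqrt(1.0 / 12.0)   # variance of Uniform(0,1) is 1/12

z = []
for _ in range(reps):
    w = [random.random() for _ in range(T)]
    wbar = sum(w) / T
    z.append(math.sqrt(T) * (wbar - mu) / sigma)

# The z's should have roughly mean 0 and variance 1.
z_mean = sum(z) / reps
z_var = sum((v - z_mean) ** 2 for v in z) / reps
```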
linear algebra:
Relevant terms: characteristic equation,
characteristic root,
Cholesky decomposition,
conformable,
determinant,
eigenvalue,
eigenvector,
Hessian,
idempotent,
identity matrix,
invertibility,
Kronecker product,
symmetric,
trace,
transpose.
Contexts: fields
linear model:
An econometric model is linear if it is expressed in an equation in which the
parameters enter linearly, whether or not the data require nonlinear
transformations to get to that equation.
Source: Greene, 1993, p 240
Contexts: econometrics
linear pricing schedule:
Say the number of units, or quantity, paid for is denoted q, and the total
paid is denoted T(q), following the notation of Tirole. A linear pricing
schedule is one that can be characterized by T(q)=pq for some price-per-unit
p.
For alternative pricing schedules see nonlinear pricing or affine pricing
schedule.
Source: Tirole, p 136
Contexts: IO
linear probability models:
Econometric models in which the dependent variable is a probability between
zero and one. These are easier to estimate than probit or logit
models but usually have the problem that some predictions will not be in the
range of zero to one.
Contexts: econometrics; estimation
linear regression:
A regression in which dependent variable y could be predicted by a linear
function of the independent variables X, and thus of the form y=Xb+e. Other
possible forms of regression might look like this: y=f(Xb)+e, or y=f(X,b)+e,
or y=f(X,b)e.
Contexts: estimation; econometrics
link function:
Defined in the context of the generalized linear model, which
see.
Source: Rabe-Hesketh, Sophia, and Brian Everitt. 1999. A Handbook of
Statistical Analyses using Stata. Chapman & Hall / CRC. pp 91-93.
Contexts: econometrics
Lipschitz condition:
A function g:R->R satisfies a Lipschitz
condition if
|g(t1)-g(t2)| <= C|t1-t2|
for some constant C. For a fixed C we could say this is "the Lipschitz
condition with constant C."
A function that satisfies the Lipschitz condition for a finite C is said to be
Lipschitz continuous, which is a stronger condition than regular continuity;
it means that the slope is never so steep as to be outside the range (-C,
C).
Source: Kolmogorov and Fomin
Contexts: real analysis; nonparametrics
Lipschitz continuous:
A function is Lipschitz continuous if it satisfies the Lipschitz
condition for a finite constant C. Lipschitz continuity is a stronger
condition than regular continuity. It means that the slope is never outside
the range (-C, C).
Contexts: real analysis; nonparametrics
liquid:
A liquid market is one in which it is not difficult or costly to buy or
sell.
More formally, Kyle (1985), following Black (1971), describes a liquid market
as "one which is almost infinitely tight, which is not infinitely
deep, and which is resilient enough so that prices eventually
tend to their underlying value."
Source: Kyle, 1985, p 1317
Contexts: finance
liquidity:
A property of a good: a good is liquid to the degree it is easily convertible,
through trade, into other commodities. Liquidity is not a property of the
commodity itself but something established in trading arrangements.
Source: Ostroy and Starr, 1990
Contexts: money
liquidity constraint:
Many households, e.g. young ones, cannot borrow to consume or invest as much
as they would want; imperfect capital markets constrain their spending to
their current income.
Contexts: money; macro
liquidity trap:
A Keynesian idea. When expected returns from investments in securities or
real plant and equipment are low, investment falls, a recession begins, and
cash holdings in banks rise. People and businesses then continue to hold cash
because they expect spending and investment to be low. This is a
self-fulfilling trap.
See also Keynes effect and Pigou effect.
Source: Hughes, Jonathan, and Louis P. Cain. 1994. American Economic
History, fourth edition. HarperCollins College Publishers. p 463.
Contexts: macro
Ljung-Box test:
Same as portmanteau test.
Contexts: finance; time series
locally identified:
Linear models are either globally identified or there are an infinite number
of observably equivalent ones. But for models that are nonlinear in
parameters, "we can only talk about local properties." Thus the
idea of locally identified models, which can be distinguished in data
from any other 'close by' model. "A sufficient condition for local
identification is that" a certain Jacobian matrix is of full column
rank.
Source: Hsiao, The New Palgrave: Econometrics, p 96-98
Contexts: econometrics; estimation
locally nonsatiated:
An agent's preferences are locally nonsatiated if arbitrarily close to any
consumption bundle there is another bundle that is strictly preferred.
Preferences that are continuous and strictly increasing in all goods satisfy
this condition.
Contexts: general equilibrium; models
log:
In the context of economics, log usually means 'natural log', that is
loge, where e is the natural constant that is approximately
2.718281828. So x=log y <=> ex=y.
log utility:
The logarithmic utility function, usually of consumption or wealth. Here is
the simplest version. Define U() as the utility function and w as wealth.
Then
U(w) = ln(w)
is the log utility function.
Contexts: modelling; finance
log-concave:
A function f(w) is said to be log-concave if its natural log, ln(f(w)) is a
concave function; that is, assuming f is differentiable,
f''(w)f(w) - f'(w)2 <= 0
Since log is a strictly concave function, any concave function is also
log-concave.
A random variable is said to be log-concave if its density
function is log-concave. The uniform, normal, beta, exponential, and extreme
value distributions have this property. If pdf f() is log-concave, then so is
its cdf F() and 1-F(). The truncated version of a log-concave function is
also log-concave. In practice the intuitive meaning of the assumption that a
distribution is log-concave is that (a) it doesn't have multiple separate
maxima (although it could be flat on top), and (b) the tails of the density
function are not "too thick".
An equivalent definition, for vector-valued random variables, is in
Heckman and Honore, 1990, p 1127. Random vector X is
log-concave iff its density f() satisfies the condition that
f(ax1+(1-a)x2)≥[f(x1)
]a[f(x2)](1-a) for all
x1, and x2 in the support of X and
all a satisfying 0≤a≤1.
Source: Heckman and Honore, 1990, p 1127; BMW Ch 2, Job Search
Theory
Contexts: statistics; econometrics; models
log-convex:
A random variable is said to be log-convex if its density
function is log-convex. Pareto distributions with finite means and variances
have this property, and so do gamma densities with a coefficient of variation
greater than one. [Ed.: I do not know the intuitive content of the
definition.]
A log-convex random vector is one whose density f() satisfies the condition
that f(ax1+(1-a)x2) ≤
[f(x1)]a[f(x2)](1-a)
for all x1, and x2 in the support of
X and all a satisfying 0≤a≤1.
Source: Heckman and Honore, 1990, p 1132
Contexts: statistics; econometrics; models
logistic distribution:
Has the cdf F(x) = 1/(1+e-x)
This distribution is quicker to calculate than the normal distribution but is
very similar. Another advantage over the normal distribution is that it has a
closed form cdf.
pdf is f(x) = ex(1+ex)-2 = F(x)F(-x)
Source: The New Palgrave: Econometrics; Davidson and MacKinnon, 1993, p 515
Contexts: econometrics; statistics
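The closed forms above are easy to verify numerically (the evaluation points
are arbitrary):

```python
# Logistic cdf F(x) = 1/(1+e^{-x}) and the pdf identity f(x) = F(x)F(-x).
import math

def F(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(x):
    # pdf written as e^x (1+e^x)^{-2}, as in the entry above
    return math.exp(x) * (1.0 + math.exp(x)) ** -2

checks = [f(x) - F(x) * F(-x) for x in (-3.0, -1.0, 0.0, 0.5, 2.0)]
```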
logit model:
A univariate binary model. That is, for dependent variable yi that
can be only one or zero, and a continuous independent variable xi,
that:
Pr(yi=1)=F(xi'b)
Here b is a parameter to be estimated, and F is the logistic cdf.
The probit model is the same but with a different cdf for F.
Source: Takeshi Amemiya, "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; estimation
lognormal distribution:
Let X be a random variable with a standard normal distribution. Then the
variable Y=eX has a lognormal distribution.
Example: Yearly incomes in the United States are roughly log-normally
distributed.
If random variable X is distributed N(m, v), then the random variable
Y=eX has a lognormal distribution with mean em+.5v and
variance e2m+v(ev-1). A proof is shown by John Norstad;
see sources for that.
Source: A series of proofs relevant to this distribution is at
Norstad's finance proofs.
Contexts: statistics; econometrics
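A simulation sketch of the mean formula above (the parameters are
arbitrary), separate from Norstad's analytical proof:

```python
# If X ~ N(m, v) then Y = e^X has mean e^{m + v/2}.
import math
import random

random.seed(2)
m, v = 0.3, 0.5
n = 200_000
ys = [math.exp(random.gauss(m, math.sqrt(v))) for _ in range(n)]

mean_sim = sum(ys) / n
mean_theory = math.exp(m + 0.5 * v)
```

The simulated mean lands close to the theoretical value e^0.55.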
longitudinal data:
a synonym for panel data
Contexts: data
Lorenz curve:
Used to discuss concentration of suppliers (firms) in a market. The
horizontal axis is divided into as many pieces as there are suppliers. Often
it is given a percentage scale going from 0 to 100. The firms are in order of
decreasing size. On the vertical axis are the market sales in percentage
terms from 0 to 100. The Lorenz curve is a graph of the sales of all the
firms to the right of each point on the horizontal axis.
So (0,0) and (100,100) are the endpoints on the Lorenz curve and it is weakly
convex, and piecewise linear, between. See also Gini
coefficient.
Source: Greer, 1992, p. 173
Contexts: IO
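A sketch of the construction with four invented firms. Accumulating shares
from the smallest firms up (equivalently, the sales of all firms to the
right of each point under the decreasing-size ordering above) yields the
weakly convex curve from (0,0) to (100,100):

```python
# Lorenz curve points: each firm is an equal step on the horizontal axis;
# the vertical axis is the cumulative percentage of market sales.
sales = [50.0, 30.0, 15.0, 5.0]   # invented firm sales
total = sum(sales)
ordered = sorted(sales)           # smallest cumulative shares first

points = [(0.0, 0.0)]
cum = 0.0
for i, s in enumerate(ordered, start=1):
    cum += s
    points.append((100.0 * i / len(ordered), 100.0 * cum / total))
```

With these numbers the curve passes through (25,5), (50,20), (75,50), and
(100,100); the gap below the diagonal reflects concentration, which the Gini
coefficient summarizes.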
loss function:
Or, 'criterion function.' A function that is minimized to achieve a desired
outcome. Often econometricians minimize the sum of squared errors in making
an estimate of a function or a slope; in this case the loss function is the
sum of squared errors. One might also think of agents in a model as
minimizing some loss function in their actions that are predicated on
estimates of things such as future prices.
Contexts: econometrics; models; estimation
lower hemicontinuous:
A property of correspondences (set-valued functions). Intuitively: no points
suddenly appear in the image set; every point in the image of x can be
approached by points in the images of arguments near x. Formally, F is lower
hemicontinuous at x if for every y in F(x) and every sequence xn converging
to x, there exist yn in F(xn) converging to y.
Contexts: real analysis; models
LRD:
Longitudinal Research Database, at the U.S. Bureau of the Census. Used in the
study of labor and productivity. The data is not publicly available without
special certification from the Census. The LRD extends back to 1982.
Contexts: data sets
Lucas critique:
A criticism of econometric evaluations of U.S. government policy as they
existed in 1973, made by Robert E. Lucas. "Keynesian models consisted of
collections of decision rules for consumption, investment in capital,
employment, and portfolio balance. In evaluating alternative policy rules for
the government,.... those private decision rules were assumed to be fixed....
Lucas criticized such procedures [because optimal] decision rules of private
agents are themselves functions of the laws of motion chosen by the
government.... policy evaluation procedures should take into account the
dependence of private decision rules on the government's ... policy
rule."
In Cochrane's language: "Lucas argued that policy evaluation must be
performed with models specified at the level of preferences ... and technology
[like discount factor beta and permanent consumption c* and
exogenous interest rate r], which presumably are policy invariant, rather than
decision rules which are not."
[I believe the canonical example is: what happens if government changes
marginal tax rates? Is the response of tax revenues linear in the change, or
is there a Laffer curve to the response? Thus stated, this is an empirical
question.]
Source: Sargent, 1979, Ch 14, p. 398; Cochrane, Econ 330
notes
Contexts: macro; public finance
m-estimators:
Estimators that maximize a sample average. The 'm' means
'maximum-likelihood-like'. (from Newey-McFadden)
The term was introduced by Huber (1967). "The class of M-estimators
included the maximum likelihood estimator, the quasi-maximum likelihood
estimator, multivariate nonlinear least squares" and others. (from
Wooldridge, p 2649)
I think all m-estimators have scores.
Source: Newey-McFadden, Handbook of Econometrics, Ch 36, p 2115;
Wooldridge, 1995, p 2649
Contexts: econometrics; estimation
M1:
A measure of total money supply. M1 includes currency in circulation and
checkable demand deposits.
Contexts: money; macro; data
M2:
A measure of total money supply. M2 includes everything in M1 and also
savings and other time deposits.
Contexts: money
MA:
Stands for "moving average." Describes a stochastic process
(here, et) that can be described by a weighted sum of a white
noise error and the white noise error from previous periods. An MA(1)
process is a first-order one, meaning that only the immediately previous value
has a direct effect on the current value:
et = ut + put-1
where p is a constant (more often denoted θ) that
has absolute value less than one, and ut is drawn from a
distribution with mean zero and finite variance, often a normal distribution.
An MA(2) would have the form:
et = ut + p1ut-1 +
p2ut-2
and so on. In theory a process might be represented by an
MA(infinity).
Contexts: econometrics; statistics; time series
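A simulation sketch of an MA(1): its lag-1 autocorrelation is p/(1+p2) and
its higher-order autocorrelations are zero. The coefficient and sample size
are arbitrary:

```python
# MA(1): e_t = u_t + p*u_{t-1} with white-noise shocks u_t.
import random

random.seed(3)
p, T = 0.5, 100_000
u = [random.gauss(0, 1) for _ in range(T)]
e = [u[t] + p * u[t - 1] for t in range(1, T)]

def autocorr(xs, lag):
    """Sample autocorrelation at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    cov = sum((xs[t] - mean) * (xs[t - lag] - mean)
              for t in range(lag, n)) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return cov / var

r1 = autocorr(e, 1)   # theory: p/(1+p^2) = 0.4 for p = 0.5
r2 = autocorr(e, 2)   # theory: 0 for an MA(1)
```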
MA(1):
A first-order moving average process. See MA for details.
Contexts: econometrics
macro:
Relevant terms: accelerator principle,
active measures,
asset-pricing function,
attractor,
balance of payments,
balanced growth,
basin of attraction,
basket,
Bellman equation,
Beveridge curve,
business cycle frequency,
capital,
capital consumption,
capital deepening,
capital-augmenting,
catch-up,
CCAPM,
central bank,
certainty equivalence principle,
CES production function,
CES utility,
contraction mapping,
contractionary fiscal policy,
contractionary monetary policy,
cost-of-living index,
CPI,
current account balance,
decision rule,
demand deposits,
depreciation,
disintermediation,
Divisia index,
Domar aggregation,
dynamic inconsistency,
dynamic multipliers,
dynamic optimizations,
dynamic programming,
economic growth,
effective labor,
efficiency units,
elasticity,
embodied,
endogenous growth model,
Engel effects,
equity premium puzzle,
ergodic set,
Euler equation,
Eurosclerosis,
expectation,
FDI,
fiat money,
fiscalist view,
Fisher effect,
Fisher equation,
Fisher hypothesis,
Fisher Ideal Index,
flexible-accelerator model,
free entry condition,
frictional unemployment,
functional equation,
GDP,
GDP deflator,
generator function,
GGH preferences,
GMM,
GNP,
Golden Rule capital rate,
Harrod-neutral,
Hicks-neutral,
Hicks-neutral technical change,
Hodrick-Prescott filter,
hyperbolic discounting,
hysteresis,
IMF,
impulse response function,
Inada conditions,
inflation,
investment,
k percent rule,
Kalman filter,
Kalman gain,
Keynes effect,
Kuznets curve,
labor productivity,
labor-augmenting,
lag operator,
Laspeyres index,
Law of iterated expectations,
least squares learning,
liquidity constraint,
liquidity trap,
Lucas critique,
M1,
markup,
metaproduction function,
monetarism,
monetarist view,
monetary base,
monetary rule,
multi-factor productivity,
NAIRU,
natural rate of unemployment,
neoclassical growth model,
neutrality,
New Classical view,
new growth theory,
NNP,
nondivisibility of labor,
numeraire,
passive measures (to combat unemployment),
Phillips curve,
physical depreciation,
Pigou effect,
price index,
pricing kernel,
productivity,
productivity paradox,
property income,
putty-putty,
Q ratio,
quasi-hyperbolic discounting,
Ramsey equilibrium,
Ramsey outcome,
rational expectations,
RBC,
real business cycle theory,
recession,
reservation wage property,
Ricardian proposition,
risk free rate puzzle,
RMPY,
saddle point,
Schumpeterian growth,
sink,
Smithian growth,
Solovian growth,
Solow growth model,
Solow residual,
source,
stabilization policy,
stable steady state,
state price vector,
state-space approach to linearization,
stochastic difference equation,
Stolper-Samuelson theorem,
structural unemployment,
substitution bias,
superneutrality,
technical change,
technology shocks,
tertiary sector,
TFP,
time consistency,
time deposits,
Tobin's marginal q,
total factor productivity,
trajectory,
transient,
transversality condition,
unemployment,
value function,
VARs,
vintage model,
Walrasian model,
welfare capitalism,
Wold's theorem.
Contexts: fields
main effect:
As contrasted to interaction effect.
In the regression
yi = aXi + bXZi + cZi + errors
The bXZi term measures the interaction effect. The main effect is
cZi.
This term is usually used in an ANOVA context, where its meaning is presumably
analogous but this editor has not verified that.
Source: Phrasing example drawn from
Hamermesh, Daniel S. "The Art of Labormetrics." NBER Working paper 6927.
February 1999.
http://www.nber.org/papers/w6927
Contexts: statistics; econometrics
maintained hypothesis:
An ambiguous term. Probably it is wise to avoid it.
Most common meaning: synonym for alternative hypothesis. "The
model in which the restrictions do not hold is usually called the alternative
hypothesis, or sometimes the maintained hypothesis, and may be denoted
H1." (Davidson and MacKinnon, 1993, p. 78).
However Greene, 1993/7, third edition, defines it to mean precisely the
opposite. From p. 155: "The formal procedure involves a statement of the
hypothesis, usually in terms of a 'null' or maintained hypothesis and an
'alternative,' conventionally denoted H0 and H1,
respectively."
Source: Davidson and MacKinnon, 1993, p 78-79.
Greene, 1993, 3rd edition, p 155.
Contexts: econometrics; estimation
Malmquist index:
An index number enabling a productivity comparison between economy A and
economy B.
Imagine that we have an aggregate production function
QAA=fA(KA,LA) that describes
economy A and an aggregate production
QBB=fB(KB,LB) that describes
economy B. K and L stand for capital and labor inputs. We substitute the
inputs of B into the production function of A to compute
QAB=fA(KB,LB). We also compute
QBA=fB(KA,LA) with the inputs from
country A.
The Malmquist index of A with respect to B is the geometric mean of
QAA/QBA and QAB/QBB. It will be
greater than one if A's aggregate production technology is better than
B's.
Source: Hulten, 2000, pp 26-27.
Contexts: index numbers
MANOVA:
This is a generalization of the ANOVA statistical model. It allows
multiple dependent variables. Output is reported as one or more multivariate
test statistics which are asymptotically equivalent but differ in their small
sample properties. These tests include Wilks's lambda, Pillai's trace,
Lawley-Hotelling trace, and Roy's largest root. If the test statistics are
significantly different from zero, the hypothesis that there is no difference
between the multidimensional mean vectors of the categories is
rejected.
Source: Stata manuals
Contexts: statistics; sociology
mantissa:
Fractional part of a real number.
MAR:
a rare abbreviation, for moving-average representation
Contexts: econometrics
March CPS:
Also known as the Annual Demographic File. Conducted in March of each year by
the Census Bureau in the U.S. Gets the information from the regular monthly
CPS survey, plus additional data on work experience, income, noncash benefits,
and migration.
Source: Blanchflower and Oswald, Ch 4, p. 171
Contexts: data
marginal significance level:
a synonym for 'P value'
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
market:
An organized exchange between buyers and sellers of a good or service.
Contexts: models
market capitalization:
Total number of shares times the market price of each. May be said of a
firm's shares, or of all the shares on an equity market.
Contexts: finance
market failure:
A situation, usually discussed in a model not in the real world, in which the
behavior of optimizing agents in a market would not produce a Pareto
optimal allocation. Sources of market failures:
-- monopoly. Monopoly or oligopoly producers have incentives to underproduce
and to price above marginal cost, which then gives consumers incentives to buy
less than the Pareto optimal allocation.
-- externalities
Source: Layard and Glaister, 1972, p 15
Contexts: general equilibrium; public
market for corporate control:
Shares of public firms are traded, and in large enough blocks this means
control over corporations is traded. That puts some pressure on managers to
perform, otherwise their corporation can be taken over.
Source: Jensen and Ruback, 1983
Contexts: corporate finance
market power:
Power held by a firm over price, and the power to subdue competitors.
Contexts: IO
market power theory of advertising:
That established firms use advertising as a barrier to entry through product
differentiation. Such a firm's use of advertising differentiates its brand
from other brands to a degree that consumers see its brand as a slightly
different product, not perfectly substituted by existing or potential
competitors. This makes it hard for new competitors to gain consumer
acceptance.
Contexts: IO
market price of risk:
Synonym for Sharpe ratio.
Contexts: finance
Markov chain:
A stochastic process is a Markov chain if:
(1) time is discrete, meaning that the time index t has a finite or countably
infinite number of values;
(2) the set of possible values of the process at each time is finite or
countably infinite; and
(3) it has the Markov property of memorylessness.
Source: Hoel, Port, and Stone, 1987, pg v and pg 1
Contexts: time series
Markov perfect:
A characteristic of some Nash equilibria. "A Markov perfect
equilibrium (MPE) is a profile of Markov strategies that yields a Nash
equilibrium in every proper subgame." A Markov strategy is one that does
not depend at all on variables that are functions of the history of the game
except those that affect payoffs.
A tiny change to payoffs can discontinuously change the set of Markov perfect
equilibria, because a state variable with a tiny effect on payoffs can be part
of a Markov perfect strategy, but if its effect drops to zero, it cannot be
included in a strategy; that is, such a change makes many strategies disappear
from the set of Markov perfect strategies.
Source: Fudenberg and Tirole, 1991/1993, p 501-2; originally
defined in Maskin, E., and J. Tirole (1988), "A Theory of Dynamic
Oligopoly: I & II," Econometrica 56:3, 549-600.
Contexts: game theory
Markov process:
A stochastic process where all the values are drawn from a discrete set. In a
first-order Markov process only the most recent draw affects the distribution
of the next one; all such processes can be represented by a Markov transition
density matrix. That is,
Pr{xt+1 is in A | xt, xt-1,...} =
Pr{xt+1 is in A | xt}
Example 1: xt+1 = a + bxt + et is a Markov
process
For a=0, b=1 it is a martingale.
A Markov process can be periodic only if it is of higher than first
order.
Contexts: models; statistics
Markov property:
A property that a set of stochastic processes may have. The system has the
Markov property if the present state predicts future states as well as the
whole history of past and present states does -- that is, the process is
memoryless.
Source: Hoel, Port, and Stone, 1987, pg v and pg 1
Contexts: time series
Markov strategy:
In a game, a Markov strategy is one that does not depend at all on
state variables that are functions of the history of the game except those
that affect payoffs.
[Ed.: I believe random elements can be in a Markov strategy: e.g. a mixed
strategy could be a Markov strategy.]
Source: Tirole, 1988/1993, p 343; Fudenberg and
Tirole, 1991/1993, p 502
Contexts: game theory; IO
Markov transition matrix:
A square matrix describing the probabilities of moving from one state to
another in a dynamic system. In each row are the probabilities of moving
from the state represented by that row, to the other states. Thus the rows of
a Markov transition matrix each add to one. Sometimes such a matrix is
denoted something like Q(x' | x), which can be understood this way: Q is a
matrix, x is the existing state, x' is a possible future state, and for any x
and x' in the model, the probability of going to x' given that the existing
state is x is an entry of Q.
Example: a two-state weather model in which tomorrow is sunny with
probability .9 if today is sunny, and with probability .5 if today is rainy,
has transition matrix rows (.9, .1) and (.5, .5).
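[Ed.: a small Python sketch of a hypothetical two-state weather chain,
iterating the transition matrix to find the long-run distribution:]

```python
def evolve(dist, Q, steps):
    """Push a distribution over states through transition matrix Q.
    Q[i][j] = Pr(next state is j | current state is i); rows sum to one."""
    for _ in range(steps):
        dist = [sum(dist[i] * Q[i][j] for i in range(len(Q)))
                for j in range(len(Q[0]))]
    return dist

# States: 0 = sunny, 1 = rainy (hypothetical numbers).
Q = [[0.9, 0.1],
     [0.5, 0.5]]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in Q)  # rows sum to one

# Iterating from any starting state converges to the stationary
# distribution, here (5/6, 1/6).
long_run = evolve([1.0, 0.0], Q, 200)
```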
Contexts: models
Markov's inequality:
Quoting almost strictly from Goldberger, 1994, p
31:
If Y is a nonnegative random variable, that is, if Pr(Y<0)=0, and k is any
positive constant, then E(Y) ≥ kPr(Y ≥ k).
The proof is amazingly quick. See Goldberger page 31 or Hogg and Craig page
68.
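[Ed.: the inequality is also easy to check numerically; a Python sketch on
simulated draws:]

```python
import random

rng = random.Random(1)
# A nonnegative random variable: Y = |Z| with Z standard normal.
ys = [abs(rng.gauss(0.0, 1.0)) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)

# Markov's inequality, E(Y) >= k * Pr(Y >= k), holds exactly for the
# empirical distribution as well.
for k in (0.5, 1.0, 2.0):
    tail_prob = sum(y >= k for y in ys) / len(ys)
    assert mean_y >= k * tail_prob
```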
Source: Goldberger, 1994, p 31; Hogg and Craig, p 68
Contexts: statistics
markup:
In macro, the ratio of price to marginal cost. Can be used as a measure of
market power across firms, industries, or economies.
Contexts: macro; models
Marshallian demand function:
x(p,m) -- the amount of a good demanded by a consumer, given that prices are
p and the consumer's income is m. It comes from maximizing utility subject to
the budget constraint. p and x can be vectors.
Source: Varian, 1992
Contexts: consumer theory; micro
martingale:
A stochastic process whose expected next value, conditional on all present
and past values, equals the current value: E[X_{t+1} | X_t, X_{t-1}, ...] =
X_t. The first difference of a martingale is a martingale difference
sequence; the two terms are not interchangeable.
Contexts: statistics; econometrics
martingale difference sequence:
A stochastic process {X_t} is a martingale difference sequence with respect
to information {Y_t} if and only if:
(i) E|X_t| < infinity
(ii) E[X_{t+1} | Y_0, Y_1, ... , Y_t] = 0
Equivalently, the partial sums of a martingale difference sequence form a
martingale.
Martingale differences are uncorrelated but not necessarily
independent.
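[Ed.: a Python sketch of a martingale difference sequence that is not
independent -- an ARCH-type process with hypothetical coefficients. Its level
is unforecastable, but its squares are autocorrelated:]

```python
import random

# X_{t+1} = z_{t+1} * sqrt(0.5 + 0.5 * X_t^2), z iid standard normal.
# E[X_{t+1} | past] = 0, so {X_t} is a martingale difference sequence,
# yet the X's are dependent: large values cluster together.
rng = random.Random(11)
xs = [0.0]
for _ in range(100_000):
    xs.append(rng.gauss(0.0, 1.0) * (0.5 + 0.5 * xs[-1] ** 2) ** 0.5)

n = len(xs) - 1
lag_cov = sum(xs[t] * xs[t + 1] for t in range(n)) / n  # near zero
sq_cov = sum((xs[t] ** 2 - 1.0) * (xs[t + 1] ** 2 - 1.0)
             for t in range(n)) / n                     # clearly positive
```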
Contexts: statistics; econometrics
mass production:
"A production system characterized by mechanization, high wages, low prices,
and large-volume output." (Hounshell, p.305) Usually refers to factory
processes on metalwork, not to textiles or agriculture. The term came into
use in the 1920s and referred to production approaches analogous to those of
the Ford Motor Company in the US.
Source: Hounshell, David. 1984. From the American System to Mass
Production, 1800-1932. Johns Hopkins University Press. p. 305.
Contexts: history
Matching Pennies:
A zero-sum game with two players. Each shows either heads or tails from a
coin. If both are heads or both are tails then player One wins, otherwise Two
wins. The payoff matrix is below; in each cell, player One's payoff is
listed first.
                 Player Two
                  C        D
Player One   C   1,-1    -1,1
             D  -1,1      1,-1
There is no Nash equilibrium to this game in pure strategies.
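[Ed.: there is a Nash equilibrium in mixed strategies, with each player
randomizing 50-50. A Python sketch verifying the equilibrium conditions:]

```python
# Payoffs to player One (the game is zero-sum); One wins on a match.
payoff_one = {("C", "C"): 1, ("C", "D"): -1,
              ("D", "C"): -1, ("D", "D"): 1}

def expected_payoff(p, q):
    """Player One's expected payoff when One plays C with probability p
    and Two plays C with probability q."""
    probs = {("C", "C"): p * q, ("C", "D"): p * (1 - q),
             ("D", "C"): (1 - p) * q, ("D", "D"): (1 - p) * (1 - q)}
    return sum(probs[cell] * payoff_one[cell] for cell in probs)

# At p = q = 1/2 the value of the game is zero, and player One is
# indifferent between the pure strategies -- the equilibrium condition.
value = expected_payoff(0.5, 0.5)
```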
Source: Varian, 1992, Ch 15
Contexts: game theory
Matlab:
A matrix programming language and programming environment. Used more by
engineers but increasingly by economists. There's a very brief tutorial at
Tutorial: Matlab.
The software is made by The Mathworks,
Inc.
Contexts: data; code
maximand:
In a maximization problem, the maximand is the function to be maximized. In
the problem being referred to, the operation is to select some parameters or
choices so as to obtain the highest feasible value of this maximand.
Contexts: optimization
maximin principle:
A justice criterion proposed by the philosopher Rawls. A principle about the
just design of social systems -- e.g., rights and duties. According to this
principle the system should be designed to maximize the position of those who
will be worst off in it.
"The basic structure is just throughout when the advantages of the more
fortunate promote the well-being of the least fortunate, that is, when a
decrease in their advantages would make the least fortunate even worse off
than they are. The basic structure is perfectly just when the prospects of the
least fortunate are as great as they can be." -- Rawls,
1973, p 328
Source: Rawls, 1973, p 328
Contexts: public
maximum score estimator:
A nonparametric estimator of certain coefficients of a binary choice model.
Avoids assumptions about the distribution of errors that would be made by a
probit or logit model in the same circumstances.
In the econometric model the dependent variable y_i is either zero or one,
and the regressors X_i are multiplied by a parameter vector b. y_i often
represents which of two choices was selected by a respondent. b is estimated
to maximize the objective function:
max_b sum_{i=1 to N} (y_i - 0.5) sign(X_i b)
where i indexes observations, of which there are N, and the function sign()
has value one if its argument is greater than or equal to zero, and has value
zero otherwise.
b chosen this way has the property that it maximizes the number of correct
predictions of y_i given the information in X. Notice that although the
maximum value of the maximand may be well defined, b is not usually uniquely
estimated in a finite data set, because values of b near the maximizing value
would make the same predictions. Often, however, b is estimated within a
narrow range.
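[Ed.: a Python sketch of the estimator on simulated data. The data-generating
process, the grid search, and the normalization of the slope coefficient to
one are all illustrative assumptions:]

```python
import random

def score(b, X, y):
    """Maximum score objective: sum_i (y_i - 0.5) * sign(X_i . b),
    with sign() equal to one if its argument is >= 0 and zero otherwise."""
    total = 0.0
    for xi, yi in zip(X, y):
        xb = sum(p * q for p, q in zip(xi, b))
        total += (yi - 0.5) * (1.0 if xb >= 0 else 0.0)
    return total

# Simulated binary choices: y = 1 exactly when x1 - 0.5 + u >= 0.
# The scale of b is not identified, so normalize the slope on x1 to one;
# the intercept to recover is then -0.5.
rng = random.Random(0)
X, y = [], []
for _ in range(500):
    x1 = rng.uniform(0.0, 1.0)
    u = rng.uniform(-0.25, 0.25)
    X.append((x1, 1.0))
    y.append(1 if x1 - 0.5 + u >= 0 else 0)

grid = [-i / 100 for i in range(101)]  # candidate intercepts in [-1, 0]
best = max(grid, key=lambda c: score((1.0, c), X, y))
```

The maximizer is typically not unique -- any intercept between two adjacent
data points earns the same score -- which illustrates the non-uniqueness noted
above.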
Source: Greene, 1993, p 658;
Greene, 1997, p 902;
Manski, 1975
Contexts: econometrics; discrete choice models; estimation
MBO:
Stands for Management Buy-Out, the purchase of a company by its management.
Sometimes means Management By Objectives, a goal-oriented personnel evaluation
approach.
Contexts: finance
mean square error:
A criterion for choosing an estimator: choose the one that minimizes the sum
of the estimator's variance and its squared bias.
Source: Kennedy, 1992, p. 16
Contexts: econometrics
mean squared error:
The mean squared error of an estimator b of true parameter vector B is:
MSE(b) = E[(b - B)(b - B)']
which is also
MSE(b) = var(b) + bias(b)bias(b)'
where bias(b) = E(b) - B. For a scalar parameter this reduces to
MSE(b) = E[(b - B)^2] = var(b) + bias(b)^2.
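[Ed.: the decomposition can be verified by simulation; a Python sketch using a
deliberately biased (shrunken) sample-mean estimator, all numbers
hypothetical:]

```python
import random

rng = random.Random(42)
true_mu = 2.0
shrink = 0.8  # makes the estimator biased on purpose

estimates = []
for _ in range(20_000):
    sample = [rng.gauss(true_mu, 1.0) for _ in range(10)]
    estimates.append(shrink * sum(sample) / len(sample))

mean_b = sum(estimates) / len(estimates)
mse = sum((b - true_mu) ** 2 for b in estimates) / len(estimates)
var = sum((b - mean_b) ** 2 for b in estimates) / len(estimates)
bias = mean_b - true_mu
# The identity MSE = variance + bias^2 holds exactly for the
# empirical moments, up to floating-point rounding.
```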
Source: Greene, 1993, p 94
Contexts: econometrics; estimation
measurable:
If (S, A) is a measurable space, elements of A are
A-measurable.
Contexts: math; measure theory; real analysis
measurable space:
(S, A) is a measurable space if S is a set and A is a
sigma-algebra of S.
Elements of A are said to be A-measurable.
Contexts: math; measure theory; real analysis
measure:
A noun, in the mathematical language of measure theory: a measure is a
function from sets to the real line. Probability is a common kind of measure
in economic models. Other measures are the counting measure, which is the
number of elements in the set, the length measure, the area measure, and the
volume measure. Length, area, and volume are defined along lines, planes, and
spaces just as one would expect, and they have the natural meanings.
Formally: a measure is a mapping m from a sigma algebra A to the
extended real line such that
(i) m(null) = 0
(ii) m(B) >= 0 for all B in A
(iii) m(any countable union of disjoint sets in A) = the sum of m(each
of those sets)
The third property is called the countable additivity property.
An example: imagine probability mass distributed evenly on a unit square.
Probability is then defined on any area within the square. The measure
(probability, here) is the size (area) of the subset.
The kinds of subsets on which measures such as probability are defined are
called sigma-algebras (which see).
Contexts: math; measure theory; real analysis
measure theory:
Relevant terms: B1,
Borel set,
Borel sigma-algebra,
countable additivity property,
measurable,
measurable space,
measure,
sigma-algebra.
Contexts: fields
measurement error:
The data used in an econometric problem may have been measured with some
error, and if so this violates a basic condition of the abstract
environment in which OLS is validly derived. This turns out not to be
seriously problematic if the dependent variable is affected by an iid
mean-zero measurement error, but if the regressors have been measured with a
mean-zero iid error the estimates can be biased. There are standard
approaches to this problem, notably the use of instrumental variables.
Paraphrasing from Schennach, 2000, p 1: In a linear econometric
specification, a measurement error on the regressors can be viewed as a
particular type of endogeneity problem causing the disturbance to be
correlated with the regressors, which is precisely the problem addressed by
standard IV techniques.
Source: Schennach, Susanne M. "Estimation of nonlinear models with
measurement error." MIT Department of Economics working paper, dated Jan
14, 2000.
mechanism design:
A certain class of principal-agent problems are called mechanism design
problems. In these, a principal would like to condition her own actions on
the private information of agents. The principal must offer incentives for
the agents to reveal information.
Examples from the theoretical literature are auction design,
monopolistic price discrimination, and optimal taxation. In an auction
the seller would like to set a price just below the highest valuation of a
potential buyer, but does not know that price, and an auction is a mechanism
to at least partially reveal it. In a price discrimination, the seller would
like to offer the product at different prices to groups with different
valuations but may not be able to identify which group an agent is a member of
in advance.
Source: Fudenberg and Tirole, 1993, p 243
Contexts: game theory
medium of exchange:
A distinguishing characteristic of money is that it is taken as a medium of
exchange, that is, in the language of Wicksell (1935) p. 17, that it is
"habitually, and without hesitation, taken by anybody in exchange for any
commodity."
Source: Bennett T. McCallum, "Comments on 'A Model of Commodity Money' by
Thomas J. Sargent and Neil Wallace", Journal of Monetary Economics
12, July 1983, pp. 189-196.
Contexts: money
meet:
Given a space of possible events, the meet is the finest common coarsening of
the information sets of all the players. The meet is the finest partition of
the space of possible events such that all players have beliefs about the
probabilities of the elements of the partition.
Contexts: micro theory; information
mesokurtic:
An adjective describing a distribution with kurtosis of 3, like the normal
distribution. See by contrast leptokurtic and platykurtic.
Source: Davidson and MacKinnon, 1993, p 62
Contexts: statistics
metaproduction function:
Means best-practice production function -- depending on context, either the
most efficient feasible practice, or most efficient actual practice of the
existing entities converting inputs X into output y. Often in practice y is
an agricultural output, and data from a sample of farms, and the
meta-production function could be estimated by estimating production functions
for the farms and choosing among the most efficient ones. In the (macro)
context of the quote below, the entities are not farms but countries,
producing GDP.
"The term 'meta-production function' is due to Hayami and Ruttan (1970, 1985).
For an exposition of the meta-production function approach, see Lau and
Yotopoulos (1989) and Boskin and Lau (1990).... The two most important
maintained hypotheses [of this approach] are: (1) that the aggregate
production functions of all countries are identical in terms of
'efficiency-equivalent' units of output and inputs; and (2) that technical
progress in all countries can be represented in the commodity-augmentation
form, with constant geometric augmentation factors...." The framework allows
"the researcher to consider and potentially to reject the maintained
hypotheses of traditional growth accounting [such as] (1) constant returns to
scale, (2) neutrality of technical progress; and (3) profit maximization."
(p66) An assumption related to the second maintained hypothesis above, which
the theory depends on (p69) is that "the measured outputs and inputs of the
different countries may be converted into unobservable standardized, or
'efficiency-equivalent,' quantities of output and inputs by multiplicative
country- and output- and input-specific time-varying augmentation factors...."
(where "time-varying" seems to conflict with the requirement, above, that the
augmentation factors be "constant".) (p69) In this approach "countries may
differ in the quantities of their factor inputs and intensities and possibly
in the qualities and efficiencies of their inputs and outputs, but they do not
differ with regard to the technological opportunities .... [T]hey are assumed
to have equal access to technologies."
From p66, 69, 73 of Lau (1996).
Source: Lau, Lawrence J. 1996. "The Sources of Long-Term Economic Growth:
Observations from the Experience of Developed and Developing Countries." In
Mosaic of Economic Growth, edited by Ralph Landau, Timothy Taylor, and
Gavin Wright. Stanford University Press.
With thanks to: Randal Kinoshita
Contexts: international; macro; agricultural
metatheorem:
An informal term for a proposition that can be proved in a class of economic
model environments.
Contexts: models
method of moments:
A way of generating estimators: set the distribution moments equal to the
sample moments, and solve the resulting equations for the parameters of the
distribution.
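[Ed.: a Python sketch for a normal(mu, sigma^2) distribution -- match the
first two moments and solve; the true parameter values are hypothetical:]

```python
import random

rng = random.Random(7)
data = [rng.gauss(3.0, 2.0) for _ in range(50_000)]

# Distribution moments: E[X] = mu and E[X^2] = sigma^2 + mu^2.
# Set them equal to the sample moments and solve for (mu, sigma).
m1 = sum(data) / len(data)
m2 = sum(x * x for x in data) / len(data)

mu_hat = m1
sigma_hat = (m2 - m1 ** 2) ** 0.5
```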
Contexts: econometrics; estimation
MFP:
Abbreviation for multi-factor productivity.
MGF:
stands for 'moment generating function', which see.
Contexts: econometrics; statistics
Minitab:
Data analysis software, discussed at http://www.minitab.com.
Contexts: data; software
mixing:
In the context of stochastic processes, events A and B (that is, subsets of
possible outcomes of the process) "are mixing" if they are
asymptotically independent in the following way.
Let L be a lag operator that moves all time subscripts back by one (e.g.
replacing t by t-1). A and B are mixing if and only if, taking the limit as h
goes to infinity:
lim Pr(A intersected with L^h B) = Pr(A)Pr(B).
The event L^h B is the event B, but h periods ago; it is NOT some kind
of stochastic ancestor of B.
If two events are independent, they are mixing.
If two events are mixing, they are ergodic.
I *believe* that a stochastic process is mixing iff all pairs of possible
values it can take, taken as events, are mixing.
Contexts: probability; econometrics; time series
MLE:
maximum likelihood estimator
Contexts: econometrics; estimation
MLRP:
Abbreviation for monotone likelihood ratio property of a statistical
distribution.
Contexts: micro theory
models:
Generally means theoretical or structural models. Can also mean
econometric models which in this glossary are listed separately.
Relevant terms: annihilator operator,
APT,
asset-pricing function,
attractor,
barter economy,
basin of attraction,
Bellman equation,
budget set,
CAPM,
CARA utility,
cash-in-advance constraint,
certainty equivalence principle,
CES production function,
CES technology,
CES utility,
CGE,
Cobb-Douglas production function,
cobweb model,
coefficient of absolute risk aversion,
coefficient of relative risk aversion,
complete market,
conditional factor demands,
constant returns to scale,
consumption set,
contract curve,
contraction mapping,
control variable,
core,
cost function,
costate,
Cournot duopoly,
CRRA,
decision rule,
demand set,
dictator game,
discount factor,
discount rate,
double coincidence of wants,
dynamic optimizations,
dynamic programming,
economic environment,
efficiency wage hypothesis,
efficiency wages,
endowment,
Engel curve,
equilibrium,
Euler equation,
ex ante,
ex post,
First Welfare Theorem,
Fisherian criterion,
functional equation,
future-oriented,
game,
GARP,
Gaussian white noise process,
generalized Wiener process,
GGH preferences,
Gorman form,
Hahn problem,
Hicks-neutral technical change,
hyperbolic discounting,
hysteresis,
IC constraint,
IIA,
implicit contract,
impossibility theorem,
Inada conditions,
indirect utility function,
individually rational,
inside money,
IR constraint,
Ito process,
Keynes effect,
L,
lag operator,
lag polynomial,
Law of iterated expectations,
leisure,
lemons model,
Leontief production function,
locally nonsatiated,
log-concave,
log-convex,
loss function,
lower hemicontinuous,
market,
Markov process,
Markov transition matrix,
markup,
maximum score estimator,
metatheorem,
Modigliani-Miller theorem,
monetized economy,
money,
money-in-the-utility-function models,
Nash product,
NBS,
NE,
netput,
neutrality,
numeraire,
offer curve,
OLG,
outside money,
parametric,
Pareto optimal,
Pareto set,
payoff matrix,
phase portrait,
Pigou effect,
PO,
Poisson process,
precautionary savings,
present-oriented,
principle of optimality,
production set,
putty-putty,
Q ratio,
quasi-hyperbolic discounting,
quasilinear,
random process,
random walk,
RBC,
real business cycle theory,
Ricardian proposition,
risk,
saddle point,
Schwarz Criterion,
Second Welfare Theorem,
sharing rule,
Shubik model,
single-crossing property,
sink,
social planner,
social welfare function,
source,
SPE,
SPO,
stable steady state,
staggered contracting,
state price,
state price vector,
state-space approach to linearization,
stochastic difference equation,
stochastic process,
strict version of Jensen's inequality,
strongly Pareto Optimal,
subdifferential,
subgame perfect equilibrium,
superneutrality,
technology shocks,
total factor productivity,
Townsend inefficiency,
trajectory,
transversality condition,
uncertainty,
upper hemicontinuous,
utility curve,
value function,
WACM,
Walrasian auctioneer,
Walrasian equilibrium,
WAPM,
WE,
weakly Pareto Optimal,
white noise process,
WPO.
Contexts: fields
modernization:
Quoting from Landes: "Modernization comprises such developments as
urbanization (the concentration of the population in cities that serve as
nodes of industrial production, administration, and intellectual and artistic
activity); a sharp reduction in both death rates and birth rates from
traditional levels (the so-called demographic transition); the establishment
of an effective, fairly centralized bureaucratic government; the creation of
an educational system capable of training and socializing the children of the
society to a level compatible with their capacities and best contemporary
knowledge; and, of course, the acquisition of the ability and means to use an
up-to-date technology."
Source: Landes, 1969/1993, p 6
Contexts: history
Modigliani-Miller theorem:
that the total value of the bonds and equities issued by a firm in a
model is independent of the number of bonds outstanding or their interest
rate.
The theorem was shown by Modigliani and Miller, 1958 in a
particular context with no fixed costs, transactions costs, asymmetric
information, and so forth. Analogous theorems are shown in various contexts.
The assumptions made by such theorems offer a way of organizing what it would
be that makes corporations choose to offer various levels of bonds. The
choice of numbers and types of bonds and stocks a corporation offers is the
choice of capital structure. Among the factors affecting the capital
structure of a firm are taxes, bankruptcy costs, agency costs, signalling,
bargaining position in litigation, and differences between firms and investors
in access to capital markets.
Source: Sargent, 1987, Ch 3;
Modigliani and Miller, 1958
Contexts: finance; models
moment-generating function:
Denoted M(t) or MX(t), and describes a probability distribution. A
moment-generating function is defined for any random variable X with a pdf
f(x).
M(t) is defined to be E[e^{tX}], which is the integral from minus
infinity to infinity of e^{tx}f(x) dx.
A use for these is that the nth moment of X is M^(n)(0),
that is, the nth derivative of M() at zero.
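[Ed.: a Python check using the exponential distribution with rate 2, whose
MGF is M(t) = 2/(2 - t); numerical derivatives at zero recover E[X] = 1/2 and
E[X^2] = 1/2:]

```python
# MGF of an exponential(rate) random variable: M(t) = rate / (rate - t),
# defined for t < rate.
rate = 2.0
M = lambda t: rate / (rate - t)

h = 1e-4
first_deriv = (M(h) - M(-h)) / (2 * h)             # ~ M'(0)  = E[X]
second_deriv = (M(h) - 2 * M(0) + M(-h)) / h ** 2  # ~ M''(0) = E[X^2]
```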
Contexts: econometrics; statistics
monetarism:
The view that monetary policy is a prime source of the business cycle, and
that the time path of the money stock is a good index of monetary policy. As
presented by Milton Friedman and Anna Schwartz, monetarism emphasizes the
relation between the level of the money stock and the level of output without
a detailed theory of why changes in the money stock are not neutral in the
short run. Later versions posed an explicit basis for nonneutrality in the
form of barriers to information flow about prices.
In policy terms monetarists, notably Friedman, advocated a monetary
rule, that is, a steady growth in the money supply to match economic
growth, without allowing central banks room for discretion. If the rule is
credible, public expectations of inflation be low, and thus inflation itself,
if high, would fall almost immediately.
Source: Sims, "Comparison of Interwar and Postwar Business Cycles:
Monetarism
Reconsidered"
Contexts: macro; money; policy
monetarist view:
In extreme form: that only the quantity of money matters by way of aggregate
demand policy. Relevant only in an overheated economy (Branson p
391).
Source: Branson
Contexts: money; macro
monetary base:
In a modern industrialized monetary economy, the monetary base is made up of
(1) the currency held by individuals and firms and (2) bank reserves
kept within a bank or on deposit at the central bank.
Contexts: money; macro
monetary regime:
"A monetary regime can be thought of as a set of rules governing the
objectives and the actions of the monetary authority."
Examples: (1) "A gold standard is one example of a monetary regime --
the monetary authority is obligated to maintain instant convertibility between
its liabilities and the gold. The monetary authority may have considerable
room to maneuver in that monetary regime, but it can do nothing that would
cause it to violate its commitment."
(2) "The same remarks would apply to a monetary regime obligating the
monetary authority to maintain a fixed exchange rate between its own and
another currency."
(3) "A monetary regime of a very different sort could be based on a
Monetarist rule specifying the rate of growth of some monetary aggregate. The
basic distinction is between regimes based on a convertibility or redemption
principle and those based on a quantity principle."
Source: Glasner, p 206
Contexts: money
monetary rule:
See the policy discussion in monetarism.
Contexts: macro; money
monetized economy:
A model economy that has a medium of exchange: money.
Source: Bennett T. McCallum, "Comments on 'A Model of Commodity Money' by
Thomas J. Sargent and Neil Wallace", Journal of Monetary Economics 12,
July 1983, pp. 189-196.
Contexts: money; models
money:
A good that acts as a medium of exchange in transactions. Classically it is
said that money acts as a unit of account, a store of value, and
a medium of exchange. Most authors find that the first two are
nonessential properties that follow from the third. In fact, other goods are
often better than money at being intertemporal stores of value, since most
monies degrade in value over time through inflation or the overthrow of
governments.
Theory: Ostroy and Starr, 1990, p. 25, define money in certain models
"as a commodity of positive price and zero transaction cost that does not
directly enter in production or consumption."
History: See this Web site on the History of Money.
Relevant terms: bank note,
barter economy,
bill of exchange,
bimetallism,
Bretton Woods system,
capital ratio,
cash-in-advance constraint,
central bank,
demand deposits,
double coincidence of wants,
dynamic inconsistency,
EMS,
Eurodollar,
Fed Funds Rate,
fiat money,
Fisher effect,
Fisher equation,
Fisher hypothesis,
free reserves,
Friedman rule,
fungible,
Gresham's Law,
Hahn problem,
high-powered money,
inflation,
inside money,
k percent rule,
liquidity,
liquidity constraint,
M1,
M2,
medium of exchange,
monetarism,
monetarist view,
monetary base,
monetary regime,
monetary rule,
monetized economy,
money,
money illusion,
money-in-the-utility-function models,
neutrality,
outside money,
seignorage,
Shubik model,
specie,
speculative demand,
storable,
superneutrality,
time deposits,
Townsend inefficiency,
transactions demand.
Contexts: money; models
money illusion:
"the belief that money [that is, a particular currency] represents a constant
value"
Source: Hayek, 1978, Ch 3
Contexts: money
money-in-the-utility-function models:
A modeling idea. In a basic Arrow-Debreu general equilibrium there is no need
for money because exchanges are automatic, through a 'Walrasian auctioneer'.
To study monetary phenomena, a class of models was made in which money was a
good that brought direct utility to the agent holding it; e.g., a utility
function took the form u(x,m) where x is a vector of other commodities, and m
is a scalar quantity of real money held by the agent. Using this mechanism
money can have a positive price in equilibrium and monetary effects can be
seen in such models. Contrast 'cash-in-advance constraint' for an alternative
approach.
Source: Ostroy and Starr, 1990, pp 6-7
Contexts: money; models
monopoly:
If a certain firm is the only one that can produce a certain good, it has a
monopoly in the market for that good.
Contexts: IO
monopoly power:
The degree of power held by the seller to set the price for a good. In U.S.
antitrust law monopoly power is not measured by market share.
(Salon
magazine, 1998/11/11)
Contexts: IO
monopsony:
A state in which demand comes from one source. If there is only one customer
for a certain good, that customer has a monopsony in the market
for that good.
Analogous to monopoly, but on the demand side not the supply side.
A common theoretical implication is that the price of the good is pushed
down near the cost of production. The price is not predicted to go to zero,
because if it went below the level at which the suppliers are willing to
produce, they would stop producing.
Market power is a continuum from perfectly competitive to monopsony, and
there is an extensive practice/industry/science of measuring the degree of
market power.
Examples: For workers in an isolated company town, created by and dominated
by one employer, that employer is a monopsonist for some kinds of employment.
For some kinds of U.S. medical care, the government program Medicare is a
monopsony.
Contexts: IO
monotone likelihood ratio property:
A property of a set of pdfs which is assumed in theoretical models to
characterize risk and uncertainty because it makes more conclusions feasible
and is often plausible.
Example: Let e ("effort") be an input variable into a stochastic production
function, and y be the random variable that represent output. Let f(y | e) be
the pdf of y for each e. Then the statement that f() has the monotone
likelihood ratio property (MLRP) is the same as the statement that:
for e_2 > e_1, f(y|e_2)/f(y|e_1) is increasing in y.
This says that output is positively related to effort, and something stronger,
something like: of two outcomes or ranges of outcomes, the worse one will not
become relatively more likely than the better one if effort were to rise. By
relatively more likely is meant that the likelihood ratio, above,
rises.
The set of pdfs for which the MLRP is assumed above is the set of f()'s
indexed by values of e. Each holds that specified relationship to the others.
In practice the MLRP assumption tends to rule out multimodal classes of
distributions, and this is its main effect. (By multimodal we mean those with
multiple-peaked pdfs.)
Normally e is scalar, taking on either discrete or continuous sets of values.
An analogous definition, for a multidimensional (vector) e, is feasible.
Whether it is used in existing models is not known to this author.
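[Ed.: a Python check of the MLRP for a normal location family f(y|e) = N(e,1),
in which higher effort shifts the output distribution upward; this family does
satisfy the property:]

```python
import math

def normal_pdf(y, mean):
    return math.exp(-0.5 * (y - mean) ** 2) / math.sqrt(2.0 * math.pi)

e1, e2 = 0.0, 1.0  # low and high effort (hypothetical values)
ys = [i / 10 for i in range(-50, 51)]
ratios = [normal_pdf(y, e2) / normal_pdf(y, e1) for y in ys]

# MLRP: the likelihood ratio f(y|e2)/f(y|e1) is increasing in y.
is_monotone = all(a < b for a, b in zip(ratios, ratios[1:]))
```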
Source: Milgrom, 1981, p 383
Contexts: micro theory
monotone operator:
An operator that preserves inequalities of its arguments. That is, if T is a
monotone operator, then x>y if and only if Tx>Ty, and x<y if and only if
Tx<Ty.
Same basic meaning as monotone transformation.
The most common monotone operator is the natural log function. For example in
maximum likelihood estimation, one usually maximizes the log of the
likelihood function, not the likelihood function itself, because this is more
tractable and the log is a monotone operator so it doesn't change the
answer.
monotone transformation:
A transformation that preserves inequalities of its arguments. That is, if T
is a monotone transformation, then x>y if and only if Tx>Ty, and x<y if
and only if Tx<Ty.
Same basic meaning as monotone operator.
monotonic transformation:
A transformation that preserves inequalities of its arguments. That is, if T
is a monotone transformation, then x>y if and only if Tx>Ty, and x<y if
and only if Tx<Ty.
Same as monotone operator (which see, for more details) or monotone
transformation.
Monte Carlo simulations:
These are data obtained by simulating a statistical model in which all
parameters are numerically specified.
One might use Monte Carlo simulations to test how an estimation procedure
would behave, for example under conditions when exact analytic descriptions of
the performance of the estimation are not algebraically feasible, or when one
wants to verify that one's analytic calculation for a confidence interval is
correct.
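[Ed.: an example of the second use: a Python sketch checking by Monte Carlo
that the textbook 95% confidence interval for a normal mean with known
variance has roughly its nominal coverage:]

```python
import random

rng = random.Random(3)
mu, sigma, n, reps = 0.0, 1.0, 25, 4000
half_width = 1.96 * sigma / n ** 0.5

covered = 0
for _ in range(reps):
    # Draw a fresh sample, form the interval xbar +/- half_width, and
    # record whether it contains the true mean.
    xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
    if abs(xbar - mu) <= half_width:
        covered += 1
coverage = covered / reps  # should be close to 0.95
```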
Contexts: statistics; econometrics
Moore-Penrose inverse:
Same as pseudoinverse.
Source: Greene, 1993, p 37
Contexts: econometrics
morbidity:
Incidence of ill health. It is measured in various ways, often by the
probability that a randomly selected individual in a population at some date
and location would become seriously ill in some period of time. Contrast to
mortality.
Contexts: demography; history
mortality:
Incidence of death in a population. It is measured in various ways, often by
the probability that a randomly selected individual in a population at some
date and location would die in some period of time. Contrast to
morbidity.
Contexts: demography; history
MSA:
Same as SMSA.
Contexts: data
MSE:
mean squared error (which see)
Contexts: econometrics; estimation
multi-factor productivity:
Same as total factor productivity, a certain type of Solow
residual.
MFP growth = d(ln f)/dt = d(ln Y)/dt - s_L*d(ln L)/dt - s_K*d(ln K)/dt
where f is the global production function; Y is output; t is time;
s_L is the share of input costs attributable to labor expenses;
s_K is the share of input costs attributable to capital expenses; L
is a dollar quantity of labor; K is a dollar quantity of capital.
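[Ed.: a discrete-time version of the formula, computed in Python on
hypothetical numbers -- log growth of output minus share-weighted log growth
of inputs:]

```python
import math

Y0, Y1 = 100.0, 105.0  # output in two periods
L0, L1 = 50.0, 51.0    # labor input
K0, K1 = 200.0, 206.0  # capital input
s_L, s_K = 0.7, 0.3    # cost shares, summing to one

mfp_growth = (math.log(Y1 / Y0)
              - s_L * math.log(L1 / L0)
              - s_K * math.log(K1 / K0))
# Output grew about 4.9%; the inputs account for about 2.3 points of
# that, leaving MFP growth of roughly 2.6%.
```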
Source: paraphrased from Dean & Harper, Feb. 1998
Contexts: macro
multinomial:
In the context of discrete choice models, multinomial means there are more
than two possible values of the dependent variable, the choice, which is a
scalar.
For specific constructions see multinomial logit and multinomial
probit.
Contexts: econometrics
multinomial logit:
Relatively easy to compute but has the problematic IIA property by
construction. Multinomial probit with correlation between structural
residuals does not suffer from the IIA problem but is computationally
expensive. (Ed.: I don't know why the IIA problem gets sucked into this when
the actual difference between logit and probit is the functional form.)
Multinomial logit is available in more software packages than is multinomial
probit.
Source: Broadcast by Clint Cummins of TSP International to rec.econ.research
circa Jun 28, 2000.
Contexts: econometrics
multinomial probit:
Multinomial probit with correlation between structural residuals does not
suffer from the IIA problem but is computationally expensive. Multinomial
logit which solves a similar problem is relatively easy to compute but has the
problematic IIA property by construction. (Ed.: I don't know why the
IIA problem gets sucked into this when the actual difference between logit and
probit is the functional form.) Multinomial logit is available in more
software packages than is multinomial probit.
Source: Broadcast by Clint Cummins of TSP International to rec.econ.research
circa Jun 28, 2000.
Contexts: econometrics
multivariate:
A discrete choice model in which the choice is made from a set with more than
one dimension is said to be a multivariate discrete choice model.
Contexts: econometrics; statistics
Mundell-Tobin effect:
That nominal interest rates would rise less than one-for-one with inflation
because in response to inflation the public would hold less in money balances
and more in other assets, which would drive interest rates down.
Source: G. Thomas Woodward, The Review of Economics and Statistics, 1992, p
316
mutatis mutandis:
"The necessary changes having been made; substituting new
terms."
Source: American Heritage Dictionary, 1982, p 825
Contexts: phrases
MVN:
An abbreviation for 'multivariate normal' distribution.
Contexts: statistics
Nadaraya-Watson estimator:
Used to estimate regression functions based on data {Xi,
Yi}. See the equation in the middle of Hardle's page 25. The
equation produces an estimate for Y at any requested value of X (not only the
values of X in the source data), using as input (1) the data set
{Xi, Yi}, and (2) a kernel function describing
the weights to be put on values in the data set near X in estimating Y. The
kernel function itself can be parameterized by the choice of its functional
form and its 'bandwidth' which scales its width in the X-direction.
Source: Hardle, 1990
Contexts: econometrics; estimation; nonparametrics
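For illustration (not from Hardle), here is a minimal Python sketch of a Nadaraya-Watson estimate using a Gaussian kernel; the data set and bandwidth h are invented for the example:

```python
import math

def nadaraya_watson(x_data, y_data, x, h):
    """Estimate E[Y | X = x] as a kernel-weighted average of the y's,
    using a Gaussian kernel with bandwidth h (the kernel's width in
    the X-direction)."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in x_data]
    return sum(w * yi for w, yi in zip(weights, y_data)) / sum(weights)

# Data lying on the line y = 2x; the estimate at x = 2 is a weighted
# average of nearby observations, here exactly 4 by symmetry.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * xi for xi in xs]
print(round(nadaraya_watson(xs, ys, 2.0, 0.5), 3))   # → 4.0
```

Note that the estimate can be requested at any x, not only at values of X in the source data.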
NAICS:
North American Industry Classification System, a set of industry categories
standardized between the U.S. and Canada. In the U.S. it is taking over from
the SIC code system.
For details see
http://www.census.gov/epcd/naics02/naicod02.htm.
Contexts: IO; data
NAIRU:
Non-Accelerating Inflation Rate of Unemployment. That is, a steady state
unemployment rate above which inflation would fall and below which inflation
would rise. By some estimates the NAIRU is 6% in the U.S.
NAIRU is approximately a synonym for the natural rate of unemployment.
Paraphrased from Eisner's article: The essential hypotheses of the theory that
there is a stable NAIRU are that (1) an existing rate of inflation
self-perpetuates by generating expectations of future inflation; (2)
higher unemployment reduces inflation and lower unemployment raises
inflation.
Source: Robert Eisner, "Nothing to Fear but Fear of Good News",
Wall Street Journal, July 9, 1996.
Contexts: macro
narrow topology:
Synonym for weak topology.
Source: Christopher Harris and Patrick Bolton, 1997, "The continuous-time
principal-agent problem: first-best risk-sharing contracts and their
decentralization", unpublished paper, p. 6
Contexts: real analysis; micro theory
NASDAQ:
National Association of Securities Dealers Automated Quotations market. A
mostly-electronic market of stocks in the United States. There is no 'pit' --
market makers in each stock offer buy and sell prices which are
different.
Contexts: finance; business
Nash equilibrium:
Sets of strategies for players in a noncooperative game such that no
single one of them would be better off switching strategies unless others did.
Formally: Using the normal form definitions, let utility functions as
functions of payoffs for the n players u1() ... un() and
sets of possible actions A=A1 x ... x An, be common
knowledge to all the players. Also define a-i as the vector of
actions of the other players besides player i. Then a Nash equilibrium is an
array of actions a* in A such that ui(a*) >=
ui(ai, a-i*) for all i and all
ai in Ai.
In a two-player game that can be expressed in a payoff matrix, one can
generally find Nash equilibria if there are any by, first, crossing out
strictly dominated strategies for each player. After crossing out any
strategy, consider again all the strategies for the other player. When done
crossing out strategies, consider which of the remaining cells fail to meet
the criteria above, and cross them out too. At the end of the process, each
player must be indifferent among his remaining choices, GIVEN the action of
the others.
In most noncooperative games of interest, each player has to calculate what
the strategies of the others will be before his own Nash equilibrium strategy
can become clear. Introspection may also be needed to envision his own
payoffs. This approach tends to presume that the payoffs are known, or
knowable, and that the players are rational. An alternative line of thought
with its own detailed theory, is that the players can arrive at Nash
equilibria by repeated experimentation, searching for an optimal strategy.
Theories of learning and evolutionary game theory are related.
A Nash equilibrium represents a prediction if there is a real world analog to
the game.
Source: Notes from Asher Wolinsky
Contexts: game theory
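The equilibrium criterion above can be checked by brute force for a two-player game in payoff-matrix form; a Python sketch (the payoff dictionary and action labels are invented for the example):

```python
def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoffs[(i, j)] = (u1, u2): payoffs when player 1 plays action i
    and player 2 plays action j.  A cell is a Nash equilibrium if
    neither player can gain by deviating unilaterally."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    equilibria = []
    for i in rows:
        for j in cols:
            u1, u2 = payoffs[(i, j)]
            best_row = all(payoffs[(k, j)][0] <= u1 for k in rows)
            best_col = all(payoffs[(i, k)][1] <= u2 for k in cols)
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: (Defect, Defect) is the unique Nash equilibrium.
pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
print(pure_nash_equilibria(pd))   # → [('D', 'D')]
```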
Nash product:
The maximand of the Nash Bargaining Solution:
(s1-d1)(s2-d2)
where d1 and d2 are the threat points, and s1
and s2 are the shares of the good to be divided.
Contexts: game theory; models
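As an illustrative sketch (not from the source), a grid search over divisions of a unit pie recovers the familiar split-the-surplus maximizer of the Nash product; the threat-point values are invented:

```python
def nash_bargaining_share(d1, d2, steps=100000):
    """Player 1's share of a unit pie that maximizes the Nash product
    (s1 - d1)(s2 - d2) subject to s1 + s2 = 1, found by grid search."""
    best_s1, best_val = 0.0, float('-inf')
    for k in range(steps + 1):
        s1 = k / steps
        val = (s1 - d1) * ((1 - s1) - d2)
        if val > best_val:
            best_s1, best_val = s1, val
    return best_s1

# With threat points 0.1 and 0.3, player 1 gets his threat point plus
# half the surplus: 0.1 + (1 - 0.1 - 0.3)/2 = 0.4.
print(nash_bargaining_share(0.1, 0.3))   # → 0.4
```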
Nash strategy:
The strategy of a player in a Nash equilibrium.
Contexts: game theory
national accounts:
A measure of macroeconomic categories of production and purchase in a nation.
The production categories are usually defined to be output in currency units
by various industry categories, plus imports. (Output is usually
approximately the same as industry revenue.) The purchase categories are
usually government, investment, consumption, and exports, or subsets of these.
The amount produced is supposed to be approximately equal to the amount
purchased. Measures are in practice made by national governments.
a different definition, by Peter Wardley:
national accounts: a measure of all the income received by economic actors
within an economy. It can be measured as expenditure (on investment and
consumption), income (wages, salaries, profits and rent) or as the value of
output (expenditure of all goods and services). Inevitably these three
different methods of estimating national accounts will produce different
results but these discrepancies are usually relatively small.
With thanks to: Peter Wardley
national innovation systems:
A research topic in which it is investigated how differences in national
institutions and characteristics of the economic environment produce different
kinds of innovation and adoption of technologies in different
countries.
Examples of institutions which differ between industrialized countries include
universities, government funding of research, and the role of military
research. Structural differences can include the country's size, openness to
trade, and idea flow.
This framing, drawn from Mancusi (2003), p. 272, has been attributed to Nelson
(1993).
(Ed.: It does not seem to be acknowledged in this literature that the
relation between innovations made in country X and the technology used by the
economic actors in country X can be quite weak, since technologies diffuse so
much. The idea that the technologies used in the macroeconomy of a tiny
country are closely related to the government's sponsorship of research in
that country breaks down almost completely if the country is tiny enough and
open enough. This effect may have become more extreme in recent years in
which international trade is so great and the Web so easy to use.)
Source: Mancusi, Maria Luisa. "Geographical concentration and the dynamics
of countries' specialization in technologies" Economics of Innovation and
New Technology 2003, vol 12:3, pp. 269-291.
Nelson, Richard R. 1993. National Innovation Systems: A Comparative
Analysis. Oxford University Press.
natural experiment:
If economists could experiment they could test some theories more quickly and
thoroughly than is now possible. Sometimes an isolated change occurs in one
aspect of the economic environment and economists can study the effects of
that change as if it were an experiment; that is, by assuming that every other
exogenous input was held constant.
An interesting example is that of the U.S. ban on the television and radio
broadcast of cigarette advertising which took effect on Jan 2, 1971. The ban
seems to have had substantial effects on industry profitability, the rate of
new entrants, the rate of consumers switching brands and types of cigarettes,
and so forth. The ban can be used as a natural experiment to test theories of
the effects of advertising.
Contexts: history; data
natural increase:
population increase due to more births and fewer deaths
natural rate of unemployment:
"The natural rate of unemployment is the level which would be ground out
by the Walrasian system of general equilibrium equations, provided that there
is [e]mbedded in them the actual structural characteristics of the labor and
commodity markets, including market imperfections, stochastic variability in
demands and supplies, the cost of gathering information about job vacancies
and labor availabilities, the costs of mobility, and so on."
--
Milton Friedman, "The Role of Monetary Policy" AER March 1968 1-21
This is a long-run rate. Transitory shocks could move unemployment away from
the natural rate. Real wages would increase with productivity as long as
unemployment were kept at the natural rate.
Source: Friedman, AER 1968
Contexts: macro; labor
NBER:
The U.S. National Bureau of Economic Research. At 1050 Massachusetts Avenue,
Cambridge, MA 02138, USA. Focuses on macroeconomics. Data source by ftp:
ftp nber.harvard.edu.
NBER web site
Contexts: organizations
NBS:
Nash Bargaining Solution
Contexts: game theory; models
NE:
Nash Equilibrium
Contexts: game theory; models
NELS:
National Educational Longitudinal Survey, a U.S. survey administered to 24,599
eighth grade students from 1052 schools in 1988, with follow-up surveys to the
same students every two years afterward. Many similar questions were asked of
the parents of the students as well to obtain more accurate
information.
Contexts: data
neoclassical growth model:
A macro model in which the long-run growth rate of output per worker is
determined by an exogenous rate of technological progress, like those
following from Ramsey (1928), Solow (1956), Swan (1956), Cass (1965), and
Koopmans (1965).
Source: Barro and Sala-i-Martin, 1995, pp. 10-12
Contexts: macro
neoclassical model:
Often means Walrasian general equilibrium model.
Describes a model in which firms maximize profits and markets are perfectly
competitive.
Contexts: phrases
neoclassical:
According to Lucas (1998), neoclassical theory has explicit reference to
preferences. Contrast classical.
Source: Lucas (1998)
Contexts: phrases; macro theory
nests:
We say "model A nests model B" if every version of model B is a
special case of model A. This can be said of either structural (theoretical)
or estimated (econometric) models.
Example: Model B is "Nominal wage is an affine function of the age of
the worker." Model A is "Nominal wage is an affine function of the
age and education of the worker." Here model A nests model B.
Contexts: phrases
netput:
Stands for "net output". A quantity, in the context of production,
that is positive if the quantity is output by the production process and
negative if it is an input to the production process.
A technology is often defined in a model by restrictions on the vector of
netputs, whose dimension is the number of goods.
Contexts: general equilibrium; models
network externalities:
The effects on a user of a product or service of others using the same or
compatible products or services. Positive network externalities exist if the
benefits are an increasing function of the number of other users. Negative
network externalities exist if the benefits are a decreasing function of the
number of other users.
Katz and Shapiro, 1985 consider two types of positive network
externalities. A communication externality or direct externality describes a
communication network in which the more subscribers there are the greater the
services provided by the network (e.g. the telephone system or the Internet).
An indirect externality or hardware-software externality exists if a durable
good (e.g. computer) is compatible with certain complementary goods or
services (e.g. software) and the owner of the durable good benefits if their
system is compatible with a large pool of such complementary goods.
Liebowitz and Margolis, 1994 have an insightful commentary on
this subject, and offer among other things the following example: "if a
group of breakfast-eaters joins the network of orange juice drinkers, their
increased demand raises the price of orange juice concentrate, and thus most
commonly effects a transfer of wealth from their fellow network members to the
network of orange growers." The new group negatively affects the old
group without compensation, but it is through the price system and is
therefore a pecuniary externality. These authors strongly make the
case that big network externalities are not often observed, and cite evidence
against two common examples, the QWERTY and VHS standards.
Contexts: IO
neutral technological change:
Refers to the behavior of technological change in models. Barro and
Sala-i-Martin (1995), page 33, refer to three types:
A technological innovation is Hicks neutral (following Hicks (1932)) if
the ratio of capital's marginal product to labor's marginal product is
unchanged for a given capital to labor ratio. That is: Y=T(t)F(K,L).
A technological innovation is Harrod neutral (following Harrod (1942))
if the technology is labor-augmenting; that is: Y=F(K, T(t)L).
A technological innovation is Solow neutral if the technology is
capital-augmenting; that is: Y=F(T(t)K, L). (See Barro and Sala-i-Martin,
p. 33.)
neutrality:
"Money is said to be neutral [in a model] if changes in the level of
nominal money have no effect on the real equilibrium." -- Blanchard and
Fischer, p. 207.
Money might not be neutral in a model if changes in the level of nominal money
induce self-fulfilling expectations or interact with real frictions like fixed
nominal wages, fixed nominal prices, information asymmetries, or slow
reactions by households to adjust their money holding quickly. (This list
from a talk by Martin Eichenbaum, 11/11/1996.)
Source: Blanchard and Fischer p. 207
Contexts: macro; money; models
New Classical view:
On policy -- that no systematic (that is, predictable) monetary policy
matters.
Source: Branson
Contexts: macro
New Economy:
A proper noun, describing one of several aspects of the late 1990s. Lipsey
(2001) has discerned these meanings:
(1) An economy characterized by the absence of business cycles or inflations.
(2) The industry sectors producing computers and related goods and presumably
services such as e-commerce.
(3) An economy characterized by an accelerated rate of productivity growth.
(4) The "full effects on social, economic, and political systems of the
[information and communications technologies] revolution" centered on the
computer. This is Lipsey's meaning.
Source: Lipsey, Richard G. "The productivity paradox: a case of the
emperor's new clothes." July 2001.
Contexts: phrases
new growth theory:
Study of economic growth. Called 'new' because unlike previous attempts to
model the phenomenon, the new theories treat knowledge as at least partly
endogenous. R&D is one path. Hulten (2000) says that the new growth
theories have the new assumption that the marginal product of capital is
constant rather than diminishing as in the neoclassical theories of growth.
Capital often in the new growth models includes investments in knowledge,
research and development of products, and human capital.
Source: Hulten (2000), p. 37
Contexts: macro
new institutionalism:
A school of thought in economic history, linked to the work of Douglass North.
New institutionalist. "This body of literature has claimed that, in history,
institutions matter, and in empirical analyses of history, institutions
typically refer to those provided by the state: a currency, stock market,
property rights, legal system, patents, insurance schemes, and so on." The
literature Hopcroft cites includes: North 1990b; North 1994; North and Thomas
1973; North and Weingast 1989; Bates 1990, p. 52; Campbell and Lindberg 1990;
Eggertson 1990, pp 247-8; Cameron 1993, p. 11.
p 35: "Using the terminology of the new institutionalism, field systems in
preindustrial Europe were products of local institutions. Institution
is defined as a system of social rules, accompanied by some sort of
enforcement mechanism. Rules may be formal in nature -- for example,
legislation, constitutions, legal specifications of property rights, and so on
(Coase 1960; Barzel 1989; North 1982: 23) -- or informal in nature -- for
example, cultural norms, customs, and mores (North 1990a: 192; Knight 1992) .
. . ."
Source: Hopcroft, Rosemary L. "Local Institutions and Rural Development in
European History" Social Science History 27:1 (spring 2003), pp 25-74.
Contexts: history
NIPA:
Stands for the National Income and Product Accounts. This is a GDP account
for the United States.
Contexts: data
NLLS:
Stands for nonlinear least squares, an estimation technique.
The technique is to choose the parameter vector b of an assumed regression
function f() to minimize this expression: the sum over all i of
(yi-f(xi, b))2
where the xi's are the independent data and the yi's are the
dependent data.
Contexts: econometrics; estimation
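A minimal sketch of the technique (not from the source), assuming a single-parameter model and a sum of squares that is unimodal in b; the exponential model, data, and search interval are invented for the example:

```python
import math

def nlls_scalar(xs, ys, f, lo, hi, iters=200):
    """Nonlinear least squares for a single parameter b: choose b to
    minimize sum_i (y_i - f(x_i, b))^2, here by ternary search over
    [lo, hi], assuming the sum of squares is unimodal in b there."""
    def ssr(b):
        return sum((y - f(x, b)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if ssr(m1) < ssr(m2):
            hi = m2       # minimum lies in [lo, m2]
        else:
            lo = m1       # minimum lies in [m1, hi]
    return (lo + hi) / 2

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]              # true b = 0.7, no noise
b_hat = nlls_scalar(xs, ys, lambda x, b: math.exp(b * x), 0.0, 2.0)
print(round(b_hat, 4))                            # → 0.7
```

In practice statistical packages use more robust minimizers (e.g. Gauss-Newton) that handle parameter vectors rather than scalars.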
NLREG:
Stands for Nonlinear Statistical Regression program, discussed at
http://www.sandh.com/sherrod/nlreg.html.
Contexts: data; estimation
NLS:
National Longitudinal Survey, done at
the U.S. Bureau of Labor Statistics.
Contexts: labor; data
NLSY:
"The National Longitudinal Survey of Youth is a detailed survey of more
than 12,000 young people from 1979 through 1987. The original 1979 sample
contained 12,686 youths age 14 to 21, of whom 6,111 represented the entire
population of youths and 5,295 represented an oversampling of civilian
Hispanic, black, and economically disadvantaged non-Hispanic, nonblack youth.
An additional 1,280 were in the military. [ed.: meaning, their parents were?]
The survey had a remarkably low attrition rate -- 4.9 percent through 1984 --
and thus represents the largest and best available longitudinal data set on
youths in the period under study."
NLS web site.
Source: Freeman, 1991, p 103-4.
Contexts: labor; data
NLSYW:
National Longitudinal Survey of Young
Women, done at the U.S. Bureau of Labor Statistics.
Contexts: data; labor
NNP:
Net National Product. "Net national product is the net market value of
the goods and services produced by labor and property located in [a nation].
Net national product equals GNP [minus] the capital consumption
allowances, which are deducted from gross private domestic fixed investment to
express it on a net basis." -- Survey of Current Business
Source: Survey of Current Business
Contexts: macro
no-arbitrage bounds:
Describes the outer limits on a price in a model where that price must meet a
no-arbitrage condition.
In many models a price is completely determined by a no-arbitrage condition,
but if some frictions are modeled -- transactions costs or liquidity
constraints, for example -- then a no-arbitrage condition defines a range of
possible prices, because tiny variations from the theoretical no-arbitrage
price are not large enough to make arbitrage profits feasible. The range of
possible prices is bounded by the "no-arbitrage bounds".
Source: McDonald, Robert L. 1998. "Dividend Tax Credits, the Ex-day,
and Cross-Border Tax Arbitrage: The Case of Germany", Working paper,
Kellogg School of Management's Finance department, Northwestern University.
Page 4 has an example of this term in use.
Contexts: finance
noise trader:
In models of asset trading, a noise trader is one who doesn't have any special
information but trades for exogenous reasons; e.g., to raise cash.
Such trades make a market liquid for other traders; that is, they give
a given trader someone to exchange with.
Contexts: finance
noncentral chi-squared distribution:
If n random values z1, z2, ..., zn are drawn
from normal distributions with known nonzero means and constant
variance s2, then each is divided by s, squared, and summed, the
resulting statistic is said to have a noncentral chi-squared distribution
with n degrees of freedom:
(z1/s)2 + (z2/s)2 + ... + (zn/s)2 ~ X2(n, q)
This is a two-parameter family of distributions. Parameter n is
conventionally labeled the degrees of freedom of the distribution. Parameter
q is the noncentrality parameter. It is
related to the means mi and variance
s2 of the normal distributions thus:
q=(sum for i=1 to n) of (mi2 / s2).
The mean of a distribution that is X2(n, q) is (n+q). The
variance of that distribution is (2n+4q).
Source: Hogg and Craig, p 288-291
Contexts: statistics; econometrics; estimation
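As a rough numerical check (not from Hogg and Craig), one can simulate sums of squared standardized normals and compare the sample mean to the n+q formula; the means, variance, and sample size below are invented for the example:

```python
import random

random.seed(0)

def noncentral_chisq_draw(means, s):
    """One draw of sum_i (z_i/s)^2 where z_i ~ N(m_i, s^2)."""
    return sum((random.gauss(m, s) / s) ** 2 for m in means)

means = [1.0, 2.0, 0.0]                      # n = 3 normal draws per sum
s = 2.0
q = sum(m * m for m in means) / s ** 2       # noncentrality: 5/4
draws = [noncentral_chisq_draw(means, s) for _ in range(200000)]
mean = sum(draws) / len(draws)
print(mean)                                  # close to n + q = 4.25
```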
noncooperative game:
A game structure in which the players do not have the option of planning as a
group in advance of choosing their actions. It is not the players who are
uncooperative, but the game they are in.
Contexts: game theory
nondivisibility of labor:
If one models labor as contractible in continuous units, workers as identical,
and workers' utility functions as concave in leisure and income, an optimal
outcome is often for all workers to work some fraction of the time. Then none
are unemployed. We do not observe this.
If instead one presumes that labor cannot be effectively contracted in
continuous units but must be purchased in blocks (e.g. of eight hours per day,
or forty per week), this aspect can generate unemployed workers in the model
while others work long schedules, even if the workers are otherwise identical.
Labor may have to be sold in such blocks for several observed reasons: (a)
because there are fixed costs to the employer of employing each worker; (b)
because there are fixed costs (e.g. transportation; dressing for work) to the
employee of each job. This idea of labor as nondivisible has been used in
macro models by Gary Hansen (1985) and Richard Rogerson (1988).
Contexts: macro
nonergodic:
A time series process {xt} is nonergodic if it is so strongly
dependent that it does not satisfy the law of large numbers.
(Paraphrased straight from Wooldridge.)
Source: Discussed in Wooldridge, 1995, p 2647
Contexts: time series; econometrics
nonlinear pricing:
A pricing schedule where the mapping from quantity purchased to total price is
not a strictly linear function. An example is affine pricing.
Source: Tirole, p 136
Contexts: IO
nonparametric:
In the context of production theory (e.g., Hulten 2000, circa p. 21) a
nonparametric index number would not be derived from a specific functional
form of the production function.
See also nonparametric estimation.
nonparametric estimation:
Allows the functional form of the regression function to be flexible.
Parametric estimation, by contrast, makes assumptions about the functional
form of the regression function (e.g. that it is linear in the independent
variables) and the estimate is of those parameters that are free.
Source: Hardle, 1990
Contexts: econometrics; estimation
nonprofit:
A nonprofit organization is one that has committed legally not to distribute
any net earnings (profits) to individuals with control over it such as
members, officers, directors, or trustees. It may pay them for services
rendered and goods provided.
Source: Hansmann, Henry B. "The Role of Nonprofit Enterprise."
The Yale Law Journal, Vol 89, No 5, April 1980.
Contexts: public economics; law and economics
nonuse value:
Synonym for existence value.
Source: Portney, 1994; Krutilla,
1967
Contexts: public finance
NORC:
National Opinion Research Center at
the University of Chicago.
Contexts: data
normal distribution:
A continuous distribution of major importance. Cdf is often
denoted by capital F(x). Pdf is often
denoted by little f(x).
The cdf and pdf are not representable in html. The distribution has two
parameters, mean m and variance s2. Has moment-generating function
M(t)=exp(m*t + .5*s2t2).
normal form:
A way of writing out a game. Synonymous with strategic form.
Formally:
let n be the number of players,
let Ai be the set of possible actions (or strategies) of player
i,
and let ui:A1 x A2 x ... x An ->
R represent the payoff function (or utility function) for player i.
That is, once all players have chosen their actions, the payoff for
player i is the value of that function.
Then the normal form of the game is characterized by G = (A1,
A2, ... An, u1, ... , un)
Contexts: game theory
normality:
When used to describe a random variable, means that it has a normal
distribution.
notation:
Unusual notation, hard to put in glossary for definition, is listed
here:
2A has a particular meaning. For a finite set A, the expression
2A means "the set of all subsets of A". If as is
standard we denote the number of elements in set A by |A|, the number of
elements in 2A is 2|A|.
Contexts: set theory; mathematics; probability
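The set 2A can be enumerated directly; a small Python sketch (the example set is invented):

```python
from itertools import chain, combinations

def powerset(a):
    """All subsets of the finite set a -- the set denoted 2^A."""
    items = list(a)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = powerset({1, 2, 3})
print(len(subsets))   # 2^|A| = 8
```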
NPV:
Net Present Value. Same as PDV (present discounted value).
Contexts: finance
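A minimal sketch of a present-discounted-value calculation (the rate and cashflows are invented for the example):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received t periods from now,
    discounted at a constant per-period rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Pay 100 today, receive 60 in each of the next two periods, at 10%:
print(round(npv(0.10, [-100, 60, 60]), 2))   # → 4.13
```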
NSF:
The U.S. National Science Foundation, which funds much economic
research.
Contexts: organizations
null hypothesis:
The set of restrictions on parameters that is formally being tested.
"The hypothesis that the restriction or set of restrictions to be tested
does in fact hold." (Davidson and MacKinnon, 1993, p. 78) It is standard
to use the notation H0 for the null hypothesis. The
alternative hypothesis, that the restrictions do not hold is denoted
H1.
Often H1 is the conjecture of interest to the investigator. (Hogg
and Craig, p. 281) The problem gets framed this way so that the data and
statistical methods can potentially reject H0.
The term is from formal statistical language describing the test to which the
restrictions are being subjected. The real hypothesis of interest to the
investigator may not be either H0 nor H1. Even if it is
H1, the statistical rejection of H0 may not be strong
and specific evidence.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
numeraire:
The money unit of measure within an abstract macroeconomic model in which
there is no actual money or currency. A standard use is to define one unit of
some kind of goods output as the money unit of measure for wages.
Source: An example is in Arrow, Chenery, Minhas, and Solow,
1961
Contexts: macro; models
NYSE:
New York Stock Exchange, the largest physical exchange in the U.S. Is in New
York City.
Contexts: finance; business
obsolescence:
An object's attribute of losing value because the outside world has changed.
This is a source of price depreciation.
ocular regression:
A term, generally intended to be amusing, for the practice of looking at the
data to estimate by eye how data variables are related. Contrast formal
statistical regressions like OLS.
Contexts: phrases; econometrics
ODE:
Abbreviation for "ordinary differential equation".
Contexts: math
OECD:
Organization for Economic Cooperation and Development; includes about 25
industrialized democracies.
OES:
The Occupational Employment Statistics survey of the United States, conducted
by its Bureau of Labor Statistics. It surveys approximately 400,000
establishments each year, excluding those in agriculture, forestry,
fishing, and the national government.
Source: Jeffrey A. Groen working paper, 2003.
BLS web site.
Contexts: U.S.; data
offer curve:
Consider an agent in a general equilibrium (e.g., an Edgeworth box).
Assume that agent has a fixed known budget and known preferences which predict
what set (or possible sets) of quantities that agent will demand at various
relative prices. The offer curve is the union of those sets, for all
relative prices, and can be drawn in an Edgeworth box.
Source: Varian, 1992, p 316
Contexts: micro theory; general equilibrium; models
OLG:
Abbreviation for overlapping generations model, in which agents live a
finite length of time, overlapping for at least one period with the next
generation of agents.
Contexts: models
oligopsony:
The situation in which a few, possibly collusive, buyers are the only ones who
buy a certain good.
Has the same relation to monopsony that oligopoly has to monopoly.
Contexts: IO; labor
OLS:
Ordinary Least Squares, the standard linear regression procedure. One
estimates the parameter vector b from data by applying the linear model
y = Xb + e
where y is the dependent variable or vector, X is a matrix of independent
variables, b is a vector of parameters to be estimated, and e is a vector of
errors with mean zero that make the equations equal.
The estimator of b is: (X'X)-1X'y
A common informal derivation of this estimator from the model equation is:
y = Xb + e
Premultiply by X':
X'y = X'Xb + X'e
The residuals of the fitted model are constructed to be orthogonal to the
X's, so the X'e term is set to zero:
X'y = X'Xb
Premultiply by (X'X)-1:
b = (X'X)-1X'y
Since the X's and y's are data, the estimate of b can be calculated.
Contexts: econometrics
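For the special case of one regressor plus a constant, (X'X)-1X'y reduces to a familiar closed form; a small sketch (data invented for the example):

```python
def ols_simple(x, y):
    """OLS slope and intercept for y = a + b*x + e: the closed-form
    special case of (X'X)^(-1) X'y with a constant and one regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# An exact line y = 1 + 2x is recovered when the errors are zero:
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
print(ols_simple(x, y))   # → (1.0, 2.0)
```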
omitted variable bias:
There is a standard expression for the bias that appears in an
estimate of a parameter if the regression run does not have the appropriate
form and data for other parameters.
Define: y as a vector of N dependent variable observations, X1
as an (N by K1) matrix of regressors, X2 as an (N by
K2 matrix of additional regressors), and e as an (N by 1) vector
of disturbance terms with sample mean zero.
Suppose the true regression is:
y = X1b1 + X2b2 + e
for fixed values of b1 and b2. (If "true regression"
seems ambiguous, imagine for the rest of the description that the values of
X1, X2, b1, and b2 were chosen
in advance by the econometrician and e will be chosen by a random number
generator with expectation zero, and y is determined by these choices; in
this framework we can be certain what the true regression is and can study
the behavior of possible estimators.)
Suppose given the data above one ran the OLS regression
y = X1c1 +errors
Would E[c1]=b1 despite the absence of
X2b2? It will turn out in the following derivation
that in most cases the answer is no and the difference between the two
values is called the omitted variable bias.
The OLS estimator for c1 will be:
c1OLS = (X1'X1)-1X1'y
= (X1'X1)-1X1'(X1b1 + X2b2 + e)
= (X1'X1)-1X1'X1b1 + (X1'X1)-1X1'X2b2 + (X1'X1)-1X1'e
= b1 + (X1'X1)-1X1'X2b2 + (X1'X1)-1X1'e
So since E[X1'e] = 0, taking expectations of both sides gives:
E[c1] = b1 + (X1'X1)-1X1'X2b2
In general c1OLS will be a biased estimator of
b1. The omitted variable bias is
(X1'X1)-1X1'X2b2.
An exception occurs if X1'X2=0. Then the estimator is unbiased.
There is more to be learned from the omitted variable bias expression.
Leaving off the final b2, the expression
(X1'X1)-1X1'X2
is the matrix of OLS coefficient estimates from a regression of X2 on
X1.
Source: Greene, 1993, p 245-246
Contexts: econometrics
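The bias expression can be verified numerically; in this sketch (data invented, disturbances set to zero for clarity, scalar regressors with no constant) the short-regression coefficient equals b1 plus the bias term exactly:

```python
def no_intercept_ols(x, y):
    """OLS coefficient from regressing y on a single regressor x with
    no constant: the scalar case of (X'X)^(-1) X'y."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

b1, b2 = 2.0, 5.0                       # true parameters (invented)
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 1.0, 2.0, 2.0]               # correlated with x1
y = [b1 * a + b2 * c for a, c in zip(x1, x2)]   # disturbances e = 0

c1 = no_intercept_ols(x1, y)            # short regression, X2 omitted
bias = no_intercept_ols(x1, x2) * b2    # (X1'X1)^(-1) X1'X2 b2
print(c1, b1 + bias)                    # the two agree
```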
Op(1):
Statistical notation meaning "bounded in probability." Any sequence of
random variables that converges in distribution is Op(1); for example, a
sample average that converges in distribution is bounded in probability.
Contexts: statistics; econometrics
open:
An economy is said to be open if it has trade with other economies.
(Implicitly these are usually assumed to be countries.)
One measure of a country's openness is the fraction of its GDP devoted
to imports and exports.
option:
A contract that gives the holder the right, but not the duty, to make a
specified transaction for a specified time.
The most common option contracts give the holder the right to buy a specific
number of shares of the underlying security (equity or index) at a fixed price
(called the exercise price or strike price) for a given period of time. Other
option contracts allow the holder to sell.
This is its most common practical business meaning, and the use in theoretical
economics is analogous -- e.g. that owning a plant gives a firm the option to
manufacture in it at any time or to sell it at any time.
Contexts: finance; business
order condition:
In an econometric system of simultaneous equations, each equation may satisfy
the order condition, or not do so. If it does not, its parameters are not all
identified.
The order condition is often easy to verify. Often the econometrician
verifies that the order condition is satisfied and assumes with this
justification that the equation is identified, although formally a stronger
requirement, the rank condition, must be satisfied. For each equation there
must be enough instrumental variables available for the equation to have as
many instruments as there are parameters.
The system can satisfy a form of the order condition: that there be as many
exogenous variables in the reduced form of the system as there are
parameters.
Contexts: econometrics
order of a kernel:
The order of a kernel function is defined as the first nonzero moment.
Source: Hardle and Linton paper, June 1993, "Applied Nonparametric
Methods"
Contexts: econometrics; nonparametrics
order of a sequence:
Two relevant concepts are denoted O() and o().
Let cn be a random sequence. Quoting from Greene, p 110:
"cn is of order 1/n, denoted O(1/n), if plim ncn is
a nonzero constant."
And
"cn is of order less than 1/n, denoted o(1/n), if plim
ncn equals 0."
Source: Greene, 1993, p 110
Contexts: econometrics
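As a numeric illustration, here is a sketch with deterministic sequences standing in for the random case, so ordinary limits replace plims (the particular sequences are invented for the example):

```python
# Numeric illustration of O(1/n) and o(1/n) for deterministic sequences.
# (For random sequences, plim replaces the ordinary limit.)

def c(n):
    # c_n = 3/n + 5/n**2 is O(1/n): n*c_n tends to the nonzero constant 3.
    return 3.0 / n + 5.0 / n**2

def d(n):
    # d_n = 1/n**1.5 is o(1/n): n*d_n tends to 0.
    return n ** -1.5

for n in (10, 1000, 100000):
    print(n, n * c(n), n * d(n))
```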
order statistic:
The first order statistic of a random sample is the smallest element of the
sample. The second order statistic is the second smallest. And the
nth order statistic in a sample of size n is the largest element.
The pdf of the order statistics can be derived from the pdf from which
the random sample was drawn.
Source: Hogg and Craig, 1995, pp. 193-200.
Contexts: statistics
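A small sketch (sample values invented) showing the order statistics of a sample, plus a numeric check of a textbook fact about the maximum of uniform draws:

```python
# Order statistics of a sample, and a check of a known fact: for m iid
# Uniform(0,1) draws, the pdf of the maximum is m*x^(m-1) on [0,1],
# so E[max] = m/(m+1).
sample = [0.9, 0.1, 0.7, 0.4, 0.2]
order_stats = sorted(sample)      # k-th order statistic is order_stats[k-1]
smallest = order_stats[0]         # first order statistic
largest = order_stats[-1]         # n-th order statistic in a sample of size n

m, N = 3, 100000                  # midpoint-rule numeric integration of E[max]
e_max = sum(((i + 0.5) / N) * m * ((i + 0.5) / N) ** (m - 1)
            for i in range(N)) / N
print(smallest, largest, e_max)   # e_max is close to m/(m+1) = 3/4
```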
organizational capital:
"whatever makes a collection of people and assets more productive together
than apart. Firm-specific human capital (Becker 1962), management
capital (Prescott and Visscher 1980), physical capital (Ramey and Schapiro
1996), and a cooperative disposition in the firm's workforce (Eeckhout 2000
and Rob and Zemsky 1997) are examples of organizational capital."
Source: Boyan Jovanovic and Peter L. Rousseau, Sept 20 2000, "Technology and
the Stock Market: 1885-1998" NYU and Vanderbilt University, working paper
Contexts: organizations
organizational routines:
Almost all of this is condensed from the literature review in Becker (2004).
Organizational routines are recurrent sequences of behavior, or related
cognitive patterns such as rules and recipes, in organizations. These are
context-specific ways of coordinating multiple actors. Generally they
achieve, or enable, a systematic response to events or input by the
organization. Routines therefore store operational or tacit
knowledge.
Organizational routines tend:
- to have path-dependent, local histories
- to satisfice, that is, reach a kind of level of aspiration, and not to be
tested for optimality
- to be triggered, perhaps by performance below that level of aspiration,
which may be implicit
- to exert control in the context of uncertainty, and by being standardized
across events, to enable measurement of processes within the organization
- to reduce surprises and the need for error handling and emergency handling of inputs
- to create or tolerate some internal delays by definition
- to economize on cognitive resources like the information processing and
decision capacity of agents within the organization, by defining structures of
response
- to guide inexperienced members of the organization
- to provide stability and predictability, but also to evolve with time
Organizational routines may serve as heuristics or guidelines, not
rules.
Becker (2004) treats Nelson and Winter (1982), as a central source work in
this area, and cites a variety of empirical and theoretical academic
literature on organizational routines.
Organizational routines are so common that no single example can capture the
whole subject, but the following seems representative. Suppose an
organization maintains software, and converts complaints and bug reports about
this software into records in a database (classically, a "bugbase") which will
be addressed in a substantively appropriate way by specialists. This mode of
behavior is an organizational routine that satisfies the above description.
The substance and addressee of each bug report record will vary; the substance
includes the uncertain input from outside, plus various tacit information like
the appropriate addressee.
Source: Becker, Markus C. "Organizational routines: a review of the
literature." Industrial and Corporate Change. Vol 13, no. 4 (August
2004), pp. 643-677.
Nelson, R.R., and S.G. Winter. 1982. An Evolutionary Theory of Economic
Change. Belknap Press/Harvard University Press: Cambridge, MA.
Contexts: management; sociology; organizations
organizations:
Relevant terms: absorptive capacity,
ACIR,
AMEX,
competency trap,
EMS,
EOE,
FIPS,
institution,
NBER,
NSF,
organizational capital,
organizational routines,
SOFFEX,
TRIPs,
WIPO,
World Bank.
Contexts: fields
output elasticity:
"The output elasticity of an input is roughly the percentage increase in
output for a 1 percent [increase] in that input, holding other factors
constant. If [an] input has a 'normal' rate of return, then [its] output
elasticity should equal its share of total inputs." -- Hitt and
Brynjolfsson (2002), p. 85.
The output elasticity idea presumes that somebody is monitoring a production
function and adjusting production to maximize something, perhaps output,
revenue, or profit. It also presumes the input and output are continuous so
that small changes in inputs and outputs are well-defined (e.g., we are not
talking about a slight increase over one hand-crafted violin per year; the
abstraction of output elasticity of inputs applies poorly in a case like
that). For more on elasticity and concepts analogous to output elasticity,
see elasticity.
Source: Hitt, Lorin M., and Erik Brynjolfsson. 2002. "Information
Technology, Organizational Transformation, and Business Performance." Chapter
2 of Productivity, Inequality, and the Digital Economy, edited by
Nathalie Greenan, Yannick L'Horty, and Jacques Mairesse. The MIT Press.
Contexts: micro; estimation; industrial organization
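The idea can be sketched with a hypothetical Cobb-Douglas production function, for which the output elasticity of an input equals its exponent (the function and all numbers here are invented for illustration):

```python
import math

def output(K, L, A=1.0, alpha=0.3):
    # hypothetical Cobb-Douglas production function Y = A * K^alpha * L^(1-alpha)
    return A * K**alpha * L**(1 - alpha)

K, L = 100.0, 50.0
# output elasticity of capital: d ln Y / d ln K, holding L constant,
# measured here as the log change in Y for a 1 percent increase in K
el_K = (math.log(output(1.01 * K, L)) - math.log(output(K, L))) / math.log(1.01)
print(el_K)   # for Cobb-Douglas this equals the exponent alpha
```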
outside money:
monetary base. It is held in net positive amounts in an economy and is not a
liability of anyone. E.g., gold or cash. Contrast inside
money.
Contexts: money; models
overshooting:
Describes "a situation where the initial reaction of a variable to a
shock is greater than its long-run response."
Source: Romer, 1996, p 7
own:
This word is used in a very particular way in the discussion of time series
data. In the context of a discussion of a particular time series it refers to
previous values of that time series. E.g. 'own temporal dependence' as in
Bollerslev-Hodrick 92 p 8 refers to the question of whether values of the time
series in question were detectably a function of previous values of that same
time series.
Contexts: time series
Ox:
An object-oriented matrix language sometimes used for econometrics. Details
are at
http://hicks.nuff.ox.ac.uk/Users/Doornik/doc/ox/.
Contexts: econometrics; time series
p value:
A p-value is associated with a test statistic. It is "the probability,
if the test statistic really were distributed as it would be under the null
hypothesis, of observing a test statistic [as extreme as, or more extreme
than] the one actually observed."
The smaller the P value, the more
strongly the test rejects the hypothesis being tested (the null
hypothesis).
A p-value of .05 or less rejects the null hypothesis "at the 5%
level"; that is, the statistical assumptions used imply that only 5% of
the time would the supposed statistical process produce a finding this extreme
if the null hypothesis were true.
5% and 10% are common significance levels to which p-values are
compared, and in academic seminars a p-value of .05 is treated as strong
evidence that the result is a signal, not just noise.
Source: Davidson and MacKinnon, 1993, p 78-79 ; and with
relieved thanks to Ed DeMattia and Steven Hetzler for pointing out that this
definition was mistaken before fall 2005.
With thanks to: Ed De Mattia;
Contexts: econometrics; estimation
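A minimal sketch of the computation, for a test statistic that is standard normal under the null (the normal null is an assumption of this example, not part of the definition):

```python
import math

def two_sided_p(z):
    # two-sided p-value for a statistic z that is N(0,1) under the null:
    # P(|Z| >= |z|) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

print(two_sided_p(1.96))   # about .05: rejects at the 5% level
print(two_sided_p(1.00))   # about .32: does not reject at conventional levels
```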
Paasche index:
A kind of index number. The official method for US price deflators computes
them as a Paasche index. The algorithm is just like the Laspeyres
index but the base quantities are chosen from the second, later
period.
See also
http://www.geocities.com/jeab_cu/paper2/paper2.htm.
From Aizcorbe (2004), p.9: If one assumes there is a representative
consumer, then the Paasche index can be, and has been, interpreted as a
lower bound for the representative consumer's cost of living increase. That
is, it shows the change in income needed to make the representative consumer
indifferent between purchasing current quantities at the base and current time
periods.
Source:
http://www.geocities.com/jeab_cu/paper2/paper2.htm;
Gordon, 1990, p. 5
Aizcorbe, Ana. "Price Indexes for Intermittent Purchases and an
Application to Price Deflators for High Technology Goods" working paper,
Bureau of Economic Analysis, U.S. Department of Commerce.
Contexts: index numbers
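A two-good numeric sketch (prices and quantities invented) contrasting the two algorithms; the Laspeyres index weights by base-period quantities, the Paasche index by current-period quantities:

```python
# Hypothetical two-good example comparing Laspeyres and Paasche indexes.
p0 = {'A': 1.0, 'B': 2.0}   # base-period prices
p1 = {'A': 2.0, 'B': 2.0}   # current-period prices
q0 = {'A': 10, 'B': 5}      # base-period quantities
q1 = {'A': 5, 'B': 10}      # current-period quantities (substitution away from A)

laspeyres = sum(p1[g] * q0[g] for g in p0) / sum(p0[g] * q0[g] for g in p0)
paasche   = sum(p1[g] * q1[g] for g in p0) / sum(p0[g] * q1[g] for g in p0)
print(laspeyres, paasche)   # 1.5 and 1.2 -- Paasche <= Laspeyres here
```

The gap between the two reflects substitution toward the good whose relative price fell, which is why the Paasche index is interpreted as a lower bound.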
panel data:
Data from a (usually small) number of observations over time on a (usually
large) number of cross-sectional units like individuals, households, firms, or
governments.
Contexts: econometrics; estimation
par:
Can be a synonym for 'face value' as in the expression "valuing a bond at
par".
Contexts: finance
paradox:
This word is used in a particular way within the literature of economics --
not to describe a situation in which facts are apparently in conflict, but to
describe situations in which apparent facts are in conflict with models or
theories to which some class of people holds allegiance. This use of the word
implies strong belief in both the measured facts and the theory. Resolutions
to economic paradoxes tend to take one of these forms: the data do not fit the
model; the data are mismeasured; or (the most common case) the model or theory
does not fit the environment measured.
In some ways the term paradox is awkward in economics since the data
are so poorly measured, the models so brutally simplified, and the mapping
between environment and evidence so stochastic. So this editor avoids the
term where possible, but often it is a compact and vigorous way of telling the
reader the context of the subsequent discussion.
A list of these that an economist may be expected to recognize includes:
Allais paradox, Ellsberg paradox, Condorcet voting
paradox, Scitovsky paradox, and productivity
paradox.
Contexts: phrases
parametric:
adjective. A function is 'parametric' in a given context if its functional
form is known to the economist.
Example 1: One might say that the utility function in a given model is
increasing and concave in consumption. But it only becomes parametric once
one says that u(c)=ln(c) or u(c)=c^(1-A)/(1-A). At this point only
parameters such as A remain to be specified or estimated.
Example 2: In an econometric model one often imposes assumptions such as that
the relationship being estimated is linear, thence to do a linear
regression. These are parametric assumptions. One might also make some
estimates of the 'regression function' (the relationship) without such
parametric assumptions. This field is called nonparametric
estimation.
Contexts: econometrics; estimation; models
Pareto chart:
The message below was posted to a Stata listserv and is reproduced here
without any permission whatsoever.
Date: Thu, 28 Jan 1999 08:59:57 -0500
From: "Steichen, Thomas"
Subject: RE: statalist: Re: Pareto diagrams
[snip]
Pareto charts are bar charts in which the bars are arranged in descending
order, with the largest to the left. Each bar represents a problem. The chart
displays the relative contribution of each sub-problem to the total problem.
Why: This technique is based on the Pareto principle, which states that a
few of the problems often account for most of the effect. The Pareto
chart makes clear which "vital few" problems should be addressed first.
How: List all elements of interest. Measure the elements, using the same
unit of measurement for each element. Order the elements according to
their measure, not their classification. Create a cumulative distribution
for the number of items and elements measured and make a bar and line
graph. Work on the most important elements first.
Reference: Wadsworth, Stephens and Godfrey. Modern Methods for Quality
Control and Improvement, New York: John Wiley, 1986 and Kaoru Ishikawa,
Guide To Quality Control, Asian Productivity Organization, 1982, Quality
Resources, 1990.
(Note: above info "borrowed" from a web page)
Source: Stata listserv
With thanks to: Statalist and Thomas Steichen , as of
4/15/1999
Contexts: econometrics
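The ordering and cumulative-share computation behind a Pareto chart can be sketched as follows (category names and counts invented):

```python
# Tally of hypothetical problem categories, e.g. complaint counts.
counts = {'billing': 120, 'shipping': 45, 'packaging': 20, 'other': 15}

total = sum(counts.values())
# bars sorted in descending order, largest first, as in a Pareto chart
bars = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for name, n in bars:
    cumulative += n / total
    print(f"{name:10s} {n:4d}  cumulative share {cumulative:.2f}")
```

The first few cumulative shares identify the "vital few" problems to address first.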
Pareto distribution:
Has cdf H(x) = 1 - x^(-a) where x>=1 and a>0. This distribution
is unbounded above. (A slightly different version, with two parameters, is
shown in Hogg and Craig on p. 207.)
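A sketch of drawing from this distribution by inverse-cdf sampling and checking the cdf empirically (the shape value a=2 and the sample size are arbitrary choices for the example):

```python
import random

random.seed(0)
a = 2.0   # shape parameter

# Inverse-cdf sampling: if U ~ Uniform(0,1), then X = (1-U)^(-1/a) has
# cdf H(x) = 1 - x^(-a) on x >= 1.
draws = [(1.0 - random.random()) ** (-1.0 / a) for _ in range(100000)]

frac_below_2 = sum(x <= 2.0 for x in draws) / len(draws)
print(frac_below_2)   # close to H(2) = 1 - 2**-2 = 0.75
```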
Pareto optimal:
In an endowment economy, an allocation of goods to agents is Pareto Optimal if
no other allocation of the same goods would be preferred by every agent.
Pareto optimal is sometimes abbreviated as PO.
Optimal is the descriptive adjective, whereas optimum is a noun.
A Pareto optimal allocation is one that is a Pareto optimum. There may be
more than one such optimum.
Contexts: models
Pareto set:
The set of Pareto-efficient points, usually in a general
equilibrium setting.
Source: Varian, 1992, p 324
Contexts: micro theory; general equilibrium; models
partially linear model:
Refers to a particular econometric model which is between a linear regression
model and a completely nonparametric model:
y=b'X+f(Z)+e
where X and Z are known matrices of independent variables, y is a known vector
of the dependent variable, f() is not known but often some assumptions are
made about it, and b is a parameter vector. Assumptions are often made on e
such as that e~N(0,s^2 I) and that E(e|X,Z)=0.
The project at hand is to estimate b and/or to estimate f() in a
non-parametric way, e.g. with a kernel estimator.
Source: possibly first discussed in Robinson, P., 1988,
"Root-n-consistent semiparametric regression", Econometrica
vol 56, pp 931-954.
Contexts: econometrics; estimation
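A minimal simulation sketch of Robinson's idea, substituting crude bin means over Z for a kernel estimator; all data, parameter values, and the choice f(z)=sin(3z) are made up for the example:

```python
import math
import random

random.seed(1)
n, b_true, bins = 2000, 2.0, 20
z = [random.random() for _ in range(n)]
x = [zi + 0.5 * random.gauss(0, 1) for zi in z]        # x correlated with z
y = [b_true * x[i] + math.sin(3 * z[i])                # f(z) = sin(3z)
     + 0.1 * random.gauss(0, 1) for i in range(n)]

# Crude nonparametric step: estimate E[y|z] and E[x|z] by bin means over z.
idx = [min(int(zi * bins), bins - 1) for zi in z]
def bin_means(v):
    sums, cnts = [0.0] * bins, [0] * bins
    for i in range(n):
        sums[idx[i]] += v[i]
        cnts[idx[i]] += 1
    return [sums[j] / cnts[j] for j in range(bins)]

ym, xm = bin_means(y), bin_means(x)

# OLS of (y - E[y|z]) on (x - E[x|z]) recovers b -- the partialling-out idea.
ry = [y[i] - ym[idx[i]] for i in range(n)]
rx = [x[i] - xm[idx[i]] for i in range(n)]
b_hat = sum(ry[i] * rx[i] for i in range(n)) / sum(v * v for v in rx)
print(b_hat)   # close to the true b of 2.0
```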
partition:
"[A] partition of a finite set (capital omega) is a collection of
disjoint subsets of (capital omega) whose union is (capital omega)." --
Fudenberg and Tirole p 55
Source: Fudenberg and Tirole, 1991/1993, p 55
Contexts: information theory; econometrics; micro theory
passive measures (to combat unemployment):
unemployment and related social benefits and early retirement benefits.
(contrast active)
Source: John P. Martin, D16 readings book
Contexts: labor; macro
path dependence:
Following David (97): describes allocative stochastic processes. Refers to
the way the history of the process relates to the limiting distribution of the
process.
"Processes that are non-ergodic, and thus unable to shake free of
their history, are said to yield path dependent outcomes." (p. 13)
"A path-dependent stochastic process is one whose asymptotic distribution
evolves as a consequence" of the history of the process. (p. 14)
The term is relevant to the outcome of economic processes through history.
For example, the QWERTY keyboard standard would not be the standard if it had
not been chosen early; thus the keyboard standard evolved through a
path-dependent process.
Source: David, 1997, p 13-14
Contexts: history; stochastic processes
path dependency:
The view that technological change in a society depends quantitatively and/or
qualitatively on its own past. "A variety of mechanisms for the
autocorrelation can be proposed. One of them, due to David (1975) is that
technological change tends to be 'local,' that is, learning occurs primarily
around techniques in use, and thus more advanced economies will learn more
about advanced techniques and stay at the cutting edge of progress."
(Mokyr, 1990, p 163)
A noted example of technological path dependence is the QWERTY keyboard, which
would not be in use today except that it happened to be chosen a hundred years
ago. A special interest in the research literature was taken in the question
of whether technological path dependence has been observed to lead to
noticeably Pareto-inferior outcomes later. Liebowitz and Margolis in a series
of papers (e.g. in the JEP) have made the case that it has not -- that is that
the QWERTY keyboard is not especially inferior to alternatives in
productivity, and that the VHS videotapes were not especially inferior to Beta
videotapes at the time consumers chose between them.
Source: Mokyr, 1990, p 163
Contexts: stochastic processes; history
payoff matrix:
In a game with two players, the payoffs to each player can be shown in a
matrix. The one below is from the classic Prisoner's Dilemma game:
               Player Two
                C     D
  Player One C  3,3   0,4
             D  4,0   1,1
Here, player one's strategy choices (shown, conventionally, on the left) are C
and D, and player two's, shown on the top, are also C and D. The payoffs of
each possible choice of strategy pairs is in each cell of the matrix. The
first number is the payoff to player one, and the second is the payoff to
player two.
Contexts: game theory; models
PBE:
abbreviation for perfect Bayesian equilibrium.
Contexts: game theory
pdf:
probability density function. This function describes a statistical
distribution. It has the value, at each possible outcome, of the probability
(or, for continuous distributions, the density) of that outcome. A pdf is
usually denoted in lower case letters. Consider for example some f(x), where
for each real number x, f(x) is the density of a draw of x. A particular form
of f(x) will describe the normal distribution, or any other unidimensional
distribution.
Contexts: econometrics; statistics
PDV:
Present Discounted Value
Contexts: finance
pecuniary externality:
An effect of production or transactions on outside parties through prices but
not real allocations.
perfect Bayesian equilibrium:
A perfect Bayesian equilibrium is a game-theoretic concept. It is very
like a Nash equilibrium but in which each player's beliefs are also
defined and are integrated into the definition.
A perfect Bayesian equilibrium (PBE) is defined to exist in a game in
which payoffs and players are stated and are common knowledge within the
game. It can be described by this set of things taken together:
- a profile of strategies and
- a profile of belief functions
To be a PBE they also satisfy these rules:
- the strategies satisfy sequential rationality rule and
- the beliefs are updated over time according to Bayes rule
whenever possible.
[Ed.: Normally one also assumes that the players know with certainty that
they are in a PBE. Without this assumption it may not be possible to
locate any PBE of the game, but with this assumption this equilibrium is
less likely to describe the real world.]
-- Fudenberg and Tirole, 1991/1993, pp 321-333
Fudenberg and Tirole also describe separating and pooling equilibria as
subsets of PBE. There is a theorem in Kreps that all sequential equilibria are
also subgame perfect.
Contexts: game theory
perfect equilibrium:
In a noncooperative game, a profile of strategies is a perfect equilibrium if
it is a limit of epsilon-equilibria as epsilon goes to zero.
There can be more than one perfect equilibrium in a game.
For a more formal definition see sources. This is a rough paraphrase.
Source: Pearce, 1984, p 1037
Contexts: game theory
PERT:
Program Evaluation and Review Technique
(is this used?)
Source: Hogg and Craig, 163-4
phase portrait:
graph of a dynamical system, depicting the system's trajectories (with arrows)
and stable steady states (with dots) and unstable steady states (with circles)
in a state space. The axes are of state variables.
Contexts: models
Phillips curve:
A relation between inflation and unemployment. Follows from William Phillips'
1958 "The relation between unemployment and the rate of change of money
wage rates in the United Kingdom, 1861-1957" in _Economica_.
In the subsequent discussion the relation was thought to be a negative one --
high unemployment would correlate with low inflation. That stylized fact lost
empirical support with the stagflation of the U.S. in the 1970s, in which high
inflation and high unemployment occurred together. More recent evidence
suggests that over the long term, across countries, there is a POSITIVE
correlation between inflation and unemployment. Discussion continues on which
of these is more 'causal to' the other and less 'caused by' the other.
In recent use, "[T]he 'Phillips curve' has become a generic term for any
relationship between the rate of change of a nominal price or wage and
the level of a real indicator of the intensity of demand in the
economy, such as the unemployment rate." -- Gordon, Robert G.,
"Foundations of the Goldilocks Economy" for Brookings Panel on
Economic Activity, Sept 4, 1998.
Contexts: macro
Phillips-Perron test:
A test of a unit root hypothesis on a data series.
(Ed.: what follows is my best, but imperfect, understanding.) The
Phillips-Perron statistic, used in the test, is a negative number. The more
negative it is, the stronger the rejection of the hypothesis that there is a
unit root at some level of confidence. In one example a value of -4.49
constituted rejection at the p-value of .10.
Source: cites of Phillips, Peter C.B. and Pierre Perron. "Testing for a
Unit Root in Time Series Regression." Biometrika June 1988, 75,
pp 335-346.
With thanks to: Don Watson (as of 1999/03/31: drw@matilda.vm.edu.au)
Contexts: econometrics; time series
phrases:
Relevant terms: a fortiori,
a priori,
abstracting from,
analytic,
budget line,
ceteris paribus,
Chicago School,
classical,
climacteric,
control for,
corner solution,
creative destruction,
deterministic,
dismal science,
endogenous,
equity premium puzzle,
exogenous,
inter alia,
interior solution,
is consistent for,
mutatis mutandis,
neoclassical model,
neolassical,
nests,
New Economy,
ocular regression,
paradox,
priori,
proof,
quango,
rational,
rational ignorance,
rationalize,
risk free rate puzzle,
significance,
solution concept,
stylized facts,
the standard model,
transition economics,
under the null,
unity.
Contexts: fields
physical depreciation:
Decline in ability of assets to produce output. For example, computers, light
bulbs, and cars have low physical depreciation; they work until they expire.
Could be said to be made up of deterioration and exhaustion.
Contexts: macro; capital measurement
piecewise linear:
A set of line segments each of which represents a linear relationship between
two variables over some subset of their domains. Usually the relationship
referred to is between what the writer is characterizing as an independent
variable and a dependent variable.
Contexts: mathematics; theory; estimation
Pigou effect:
The wealth effect on consumption as prices fall. A lower price level leads to
a greater existing private wealth of nominal value, leading to a rise in
consumption. Contrast the Keynes effect.
Source: James Tobin. "Keynesian Models of Recession and
Depression"
Contexts: macro; models
plant:
a plant is an integrated workplace, usually all in one location.
platykurtic:
An adjective describing a distribution with low kurtosis. 'Low' means the
fourth central moment is less than three times the square of the second
central moment; such a distribution has less kurtosis than a normal
distribution.
Platy- means 'broad' or 'flat' in Greek and refers to the shape of the central
part of the distribution. Platykurtic distributions are not as common as
leptokurtic ones.
Source: Davidson and MacKinnon, 1993, p 62-64
Contexts: statistics
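A numeric check that the uniform distribution is platykurtic: its kurtosis is 1.8, below the normal distribution's 3. This is a midpoint-rule integration sketch, not a definition:

```python
# Kurtosis (fourth central moment over the squared second central moment)
# of the uniform distribution on [0,1], by midpoint-rule numeric integration.
N = 100000
xs = [(i + 0.5) / N for i in range(N)]
mean = sum(xs) / N
m2 = sum((x - mean) ** 2 for x in xs) / N   # second central moment
m4 = sum((x - mean) ** 4 for x in xs) / N   # fourth central moment
kurtosis = m4 / m2**2
print(kurtosis)   # 1.8 for the uniform, below the normal's 3: platykurtic
```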
PO:
Pareto Optimal
Contexts: models
Poisson distribution:
A discrete distribution. Possible values for x are the nonnegative integers
0,1,2,...
Denoting the mean as mu, the Poisson distribution has mean mu, variance mu,
and pdf (e^(-mu) mu^x)/x!. The moment-generating function
(mgf) is exp(mu(e^t-1)).
Source: Hogg and Craig
Contexts: statistics
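A quick numeric check of these formulas (the choice mu=3 is arbitrary):

```python
import math

mu = 3.0
def poisson_pmf(x, mu=mu):
    # pdf of the Poisson distribution: e^(-mu) mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

# Check that the pmf sums to 1 and has mean mu and variance mu.
support = range(100)   # the tail beyond 100 is negligible for mu = 3
total = sum(poisson_pmf(x) for x in support)
mean = sum(x * poisson_pmf(x) for x in support)
var = sum((x - mean) ** 2 * poisson_pmf(x) for x in support)
print(total, mean, var)   # close to 1, 3, 3
```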
Poisson process:
In such a process, let n be the number of events that occur in a given time.
n will have a Poisson distribution.
Contexts: statistics; models
political science:
The academic subject centering on the relations between governments and other
governments, and between governments and peoples.
polity:
Group with an organized governance. Normally a politically organized
population or can be a religious one.
Or, form of governance.
Examples needed. Use is very context-sensitive; that is, the definition is
not too informative without examples.
Contexts: political science
polychotomous choice:
Multiple choice. In the context of discrete choice econometric models, means
that the dependent variable has more than two possible values.
Source: Maddala, 1983, 1996, p 275
Contexts: econometrics
pooling of interests:
One of two ways to do the accounting for a U.S. firm after a merger. The
alternative is purchase accounting.
A pooling of interests is the method usually taken for all-stock
deals.
Contexts: accounting
poor:
In poverty, which see.
Source: Blank, ITAN
Contexts: poverty
portmanteau test:
a test for serial correlation in a time series, not just of one period back
but of many. Standard reference is Ljung and Box (1978).
The equation characterizing this test is given on page 18, footnote 15, of
Bollerslev-Hodrick 1992 and will go in here when html has an equation
format.
Source: Bollerslev-Hodrick 92 circa p 8
Contexts: finance; time series
poverty:
As commonly defined by U.S. researchers: the state of living in a family with
income below the federally defined poverty line.
Relevant terms: idle,
poor,
poverty,
urban ghetto.
Source: Blank, ITAN
Contexts: data; poverty
power:
"The power of a test statistic T is the probability that T will reject
the null hypothesis when the hypothesis is not true.
Formally, it is
the probability that a draw of T is in the rejection region given that
the hypothesis is not true.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; statistics; estimation
power distribution:
A continuous distribution with a parameter that we will denote k.
Pdf is kx^(k-1) for 0<=x<=1. Mean is k/(k+1). Variance is
k/[(1+k)^2(2+k)].
This distribution has not been found to correspond to natural or economic
phenomena, but is useful in practice problems because it is algebraically
tractable.
Source: Hogg and Craig
Contexts: statistics
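A numeric check of the stated moments (the choice k=3 is arbitrary):

```python
# Check the moments of the power distribution, pdf k*x^(k-1) on [0,1],
# by midpoint-rule numeric integration with k = 3.
k, N = 3.0, 100000
xs = [(i + 0.5) / N for i in range(N)]
pdf = [k * x ** (k - 1) for x in xs]
mean = sum(x * p for x, p in zip(xs, pdf)) / N
var = sum((x - mean) ** 2 * p for x, p in zip(xs, pdf)) / N
print(mean, var)   # close to k/(k+1) = 0.75 and k/[(1+k)^2 (2+k)] = 0.0375
```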
PPF:
Short for Production Possibilities Frontier.
PPP:
Stands for purchasing power parity, a criterion for an appropriate exchange
rate between currencies. It is a rate such that a representative basket
of goods in country A costs the same as in country B if the currencies are
exchanged at that rate.
Actual exchange rates vary from the PPP levels for various reasons, such as
the demand for imports or investments between countries.
Contexts: international
Prais-Winsten transformation:
An improvement to the original Cochrane-Orcutt algorithm for estimating
time series regressions in the presence of autocorrelated errors. The
implicit reference is to Prais-Winsten (1954).
The Prais-Winsten transformation makes it possible to include the first
observation in the estimation.
Source: SHAZAM
manual
Contexts: estimation; time series; econometrics
pre-fisc:
Means before taking account of the government's fiscal policy. Usually refers
to personal incomes before taxes and government transfers between people. For
example a researcher might take more interest in pre-fisc income inequality
than in post-fisc income inequality because the effects of government
transfers are designed specifically to reduce inequality.
Contexts: public
precautionary savings:
Savings accumulated by an agent to prepare for future periods in which the
agent's income is low.
Contexts: models; finance
precision:
reciprocal of the variance
Source: Hamilton, p 355
Contexts: econometrics
predatory pricing:
The practice of selling a product at low prices in order to drive competitors
out, discipline them, weaken them for possible mergers, and/or to prevent
firms from entering the market. It is an expensive strategy.
In the United States there is no legal (statutory) definition of predatory
pricing, but pricing below marginal cost (the Areeda-Turner test) has been
used by the Supreme Court in 1993 as a criterion for pricing that is
predatory. (Salon
magazine, 1998/11/11)
Contexts: IO
predetermined variables:
Those that are known at the beginning of the current time period. In an
econometric model, means exogenous variables and lagged endogenous
variables.
Source: Johnston, p 440
Contexts: econometrics
preferences:
A property of the agents (individuals) in many economics models. The property
is the ability to compare any two bundles of goods and to prefer one over the
other. Usually this ability is presumed to be stable over time. The ability
is presumed to be behind the observed behavior of individuals who make
choices, and we can then make theories about their preferences.
Preferences are often implicitly assumed to exist in a model by declaring that
there is a utility function characterizing the preferences of the
agents in the model. A continuous utility function exists which
perfectly describes the preferences if the preferences are complete,
reflexive, transitive, continuous, and strongly monotonic.
[ed: yes, someday I will define those terms too.]
Standard disputes about models in which perfectly defined preferences exist
are about whether the agent is perfectly informed about the various bundles;
about whether there are risks; and how stable over time the preference
relations (responses to comparisons) are.
Contexts: utility theory
present-oriented:
A present-oriented agent discounts the future heavily and so has a HIGH
discount rate, or equivalently a LOW discount factor. See also
'future-oriented', 'discount rate', and 'discount factor'.
Contexts: models
price ceiling:
Law requiring that a price for a certain good be kept below some level. May
lead to shortage and a black market.
price complements:
Two inputs i and j to a production function can be "price complements in
production". Assume the demand for the output is decreasing in its
price. Inputs i and j are price complements if when the price of i goes down
the profit-maximizing use of both i and j goes up.
Contexts: IO; labor
price elasticity:
A measure of responsiveness of some other variable to a change in price. See
elasticity for the general equation.
Contexts: micro theory
price floor:
Law requiring that a price for a certain good be kept above some
level.
price index:
A single number summarizing price levels.
A larger number conventionally represents higher prices. A variety of
algorithms are possible and a precise specification (which is rare) requires
both an algorithm (an example of which is a Laspeyres index) and a set
of goods with fixed known quantities of each (the basket).
Contexts: macro; price indexes
price substitutes:
Inputs i and j to a production function are "price substitutes in
production" if when the price of i goes down the use of j goes
up.
Contexts: IO; labor
pricing kernel:
same as "stochastic discount factor" in a model of asset
prices.
Source: Campbell, Lo, and MacKinlay p 294
Contexts: finance, macro
pricing schedule:
A mapping from quantity purchased to total price paid
Contexts: IO
principal components:
An approach to finding what mixture of underlying variables produces most of
the variation in the dependent variable.
For a more complete discussion see
http://www.statsoftinc.com/textbook/stfacan.html
principal strip:
A bond can be resold into parts that can be thought of as components: a
principal component that is the right to receive the principal at the end
date, and the right to receive the coupon payments. The components are called
strips. The principal component is the principal strip.
Contexts: finance; business
principal-agent:
The general name for a class of games faced by a player, called the principal,
who by the nature of the environment does not act directly but instead by
giving incentives to other players, called agents, who may have different
interests.
Contexts: phrasing; game theory
principal-agent problem:
A particular game-theoretic description of a situation. There is a player
called a principal, and one or more other players called agents with utility
functions that are in some sense different from the principal's. The
principal can act more effectively through the agents than directly, and must
construct incentive schemes to get them to behave at least partly according to
the principal's interests. The principal-agent problem is that of designing
the incentive scheme. The actions of the agents may not be observable so it
is not usually sufficient for the principal just to condition payment on the
actions of the agents.
Contexts: game theory
principle of optimality:
The basic principle of dynamic programming, which was developed by Richard
Bellman: that an optimal path has the property that whatever the initial
conditions and control variables (choices) over some initial period, the
control (or decision variables) chosen over the remaining period must be
optimal for the remaining problem, with the state resulting from the early
decisions taken to be the initial condition.
Contexts: models
priori:
Not used separately; see phrase a priori.
Contexts: phrases
Prisoner's Dilemma:
A classic game with two players. Imagine that the two players are
criminals being interviewed separately by police. If either gives information
to the police, the other will get a long sentence. Either player can
Cooperate (with the other player) or Defect (by giving information to the
police). Here is an example payoff matrix for a Prisoner's Dilemma game:
                  Player Two
                 C         D
  Player One  C | 3,3  |  0,4 |
              D | 4,0  |  1,1 |
(D,D) is the Nash equilibrium, but (C,C) is the Pareto
optimum. That difference has been discussed extensively
for various games in the research literature. Analogies to the prisoner's
dilemma or some other game can support an argument about why in the real world
some Pareto optima are observed not to be achieved.
If this same game is repeated more than once with a high enough discount
factor, there exist Nash equilibria in which (C,C) is a possible outcome of
the early stages.
Source: Varian, 1992, Ch 15
Contexts: game theory
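The (D,D) equilibrium can be verified mechanically. A minimal Python sketch, using the payoff numbers from the example above, checks every pure strategy pair for the Nash property (no player gains from a unilateral deviation):

```python
import numpy as np

# Payoffs with actions ordered (C, D): p1[i, j] is Player One's payoff
# and p2[i, j] is Player Two's, where i is Player One's action and j is
# Player Two's.
p1 = np.array([[3, 0],
               [4, 1]])
p2 = np.array([[3, 4],
               [0, 1]])

def is_pure_nash(i, j):
    """(i, j) is a pure Nash equilibrium if each action is a best
    response to the other: neither player can gain by deviating alone."""
    return p1[i, j] >= p1[:, j].max() and p2[i, j] >= p2[i, :].max()

nash = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(i, j)]
```

The only pair passing the check is (1, 1), i.e. (D,D), even though (C,C) gives both players more: the gap between the Nash equilibrium and the Pareto optimum discussed above.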
pro forma:
describes a presentation of data, typically financial statements, where the
data reflect the world on an 'as if' basis. That is, as if the state of the
world were different from that which is in fact the case.
For example, a pro forma balance sheet might show the balance sheet as if a
debt issue under consideration had already been issued. A pro forma income
statement might report the transactions of a group on the basis that a
subsidiary acquired partway through the reporting period had been a part of
the group for the whole period. This latter approach is often adopted in
order to ensure comparability between financial statements of the year of
acquisition with those of subsequent years.
Source: Stephen Brown (stephenb@nwu.edu at the time) 7/24/2000, by email
Contexts: accounting; finance; business
probability:
Relevant terms: almost surely,
convergence in quadratic mean,
countable additivity property,
expectation,
Fatou's lemma,
Jensen's inequality,
mixing,
notation,
strong law of large numbers,
support.
Contexts: fields
probability function:
synonym for pdf.
Source: Newey-McFadden, Ch 36, Handbook of Econometrics
Contexts: econometrics; statistics
probit model:
An econometric model in which the dependent variable yi can be only
one or zero, and the continuous independent variables xi determine the
probability that yi equals one:
Pr(yi=1) = F(xi'b)
Here b is a parameter to be estimated, and F is the normal cdf.
The logit model is the same but with a different cdf for F.
Source: Takeshi Amemiya, "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; estimation
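A rough illustration of probit estimation with a single regressor, using a crude grid search in place of the Newton-type maximizer a real estimator would use; the data are synthetic and all names are invented:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal cdf via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_loglik(b, x, y):
    """Log-likelihood of Pr(y_i = 1) = F(x_i * b) for scalar b."""
    F = np.clip(np.array([norm_cdf(v) for v in x * b]), 1e-12, 1 - 1e-12)
    return float(np.sum(y * np.log(F) + (1 - y) * np.log(1 - F)))

# Synthetic data with true coefficient 1.0: y_i = 1 when a standard
# normal shock falls below x_i, so that Pr(y_i = 1) = F(x_i * 1.0).
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = (rng.normal(size=1000) < x).astype(float)

# Crude maximum likelihood by grid search over candidate values of b.
grid = np.linspace(0.5, 1.5, 101)
b_hat = grid[np.argmax([probit_loglik(b, x, y) for b in grid])]
```

With 1000 observations the maximizer lands close to the true coefficient of 1.0.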
process:
see "stochastic process"
Contexts: statistics
product differentiation:
This is a product market concept. Chamberlin (1933) defined it thus: "A
general class of product is differentiated if any significant basis exists for
distinguishing the goods of one seller from those of another."
Source: Chamberlin, E. 1933. The Theory of Monopolistic Competition.
Harvard University Press. Cambridge, MA. 8th edition, 1962.
as cited in:
Brynjolfsson, Erik, Michael D. Smith, Yu (Jeffrey) Hu. "Consumer Surplus in
the Digital Economy: Estimating the Value of Increased Product Variety." p.6.
On the net as of Jan 7, 2003.
Contexts: IO
production function:
Describes a mapping from quantities of inputs to quantities of an output as
generated by a production process. Standard example is:
y = f(x1, x2)
Where f() is the production function, the x's are inputs, and the y is an
output quantity.
Contexts: micro
production possibilities frontier:
A standard graph of the maximum amounts of two possible outputs that can be
made from a given list of input resources.
A basic outline of how to draw one.
production set:
The set of possible input and output combinations. Often put into the
notation of netputs, so that this set can be defined by restrictions on a
collection of vectors with the dimension of the number of goods, one element
for each kind of good, and a positive or negative real quantity in each
element.
Contexts: general equilibrium; models
productivity:
A measure relating a quantity or quality of output to the inputs required to
produce it.
Often means labor productivity, which can be measured by quantity of output
per unit of time spent or per worker employed. It could be measured in, for
example, U.S. dollars per hour.
For some more detail see
this site.
Contexts: macro
productivity paradox:
Standard measures of labor productivity in the U.S. suggest that
computers, at least until 1995, were not improving productivity. The paradox
is the question: why, then, were U.S. employers investing more and more
heavily in computers?
Resolving the paradox probably requires an understanding of the gap between
what the productivity statistics measure and the goals of the U.S.
organizations getting computers. Sichel (1990), pp 33-36 lists these six:
- the mismanagement hypothesis is that managers underestimate the
costs of new computer technology, such as training, and therefore buy too many
computers for optimum short-run profitability
- the redistribution hypothesis is that private rates of return on
computers are high enough, but the effect is only to compete over business
with other firms in the same industry, which does not overall show greater
productivity; the analogy is to an arms race, in which both players invest
heavily but the overall effect is not to increase security
- the long learning-lags hypothesis is that information technology
will generate a substantial productivity effect when society is organized
around its availability, but it is too soon for that
- the mismeasurement hypothesis is that national economic accounts do
not tend to measure the services brought by information technology such as
quality, variety, customization, and convenience
- the offsetting factors hypothesis is that other factors unrelated
to computers have dragged down productivity measures
- the small share of computers in the capital stock hypothesis is
just that computers are too small a share of plant and equipment to make a
difference.
Two other hypotheses on this subject are:
- the externalities hypothesis is that computers in organization A
improve the long-run productivity of organization B but this is not
attributable in the national accounts to the computers in A.
- the reorganization hypothesis is that computers in a firm do not
raise much the quantity of capital stock but they cause a more productive long
run organization of the capital stock within that firm and a more efficient
split of tasks between that firm and other organizations.
Technophiles (such as this writer, or venture capitalists, or Silicon Valley
publications) and technology historians tend to believe in the long
learning-lags hypothesis, the mismeasurement hypothesis, the
externalities/network-effects hypothesis, and the reorganization hypothesis.
The gap in beliefs and understandings between technophiles and national
accounts and pricing experts, such as Sichel and Robert J. Gordon (see e.g.
the 1996 paper) is astonishing as of early 1999. They talk past one another.
The national accounts experts tend to take the labor/capital models more
seriously, and technology history less seriously, than do the technophiles.
The Federal Reserve Bank under Greenspan has piloted between these views.
Note, March 2002: The national accounts experts have now come around to the
view of the technophiles, and it is now commonly thought that the productivity
measure lags the other indicators in the boom.
Source: Sichel, Daniel E. 1997. The computer revolution: an economic
perspective. Brookings Institution Press, Washington D.C.
Paul David, May 1990, American Economic Review, p 355.
Contexts: macro; technology
proof:
A mathematical derivation from axioms, often in principle in the form of a
sequence of equations, each derived by a standard rule from the one
above.
Contexts: mathematics; phrases; modelling
propensity score:
An estimate of the probability that an observed entity, such as a person,
would undergo the treatment. This probability is itself sometimes a predictor
of outcomes.
Contexts: empirical
proper equilibrium:
Any limit of epsilon-proper equilibria as epsilon goes to zero.
-- Myerson (1978), p 78
Source: Myerson, 1978, p 78, as cited by Pearce,
1984, p 1037
Contexts: game theory
property income:
Nominal revenues minus expenses for variable inputs including labor, purchased
materials, and purchased services. Property income can serve as an
approximation to the services rendered by capital.
It contains the returns to national wealth. It can be thought to include
technology and organizational components as well as 'pure' returns to
capital.
Source: Harper, 1999, p. 330
Contexts: macro; capital measurement
pseudoinverse:
Also called Moore-Penrose inverse. The pseudoinverse of any matrix exists, is
unique and satisfies four conditions shown on p 37 of Greene (1993).
Perhaps the most important case is when the matrix X has more rows than
columns, and X is of full column rank. Then the pseudoinverse of X is:
(X'X)^(-1)X'. Notice how much this equation looks like the equation
for the OLS estimator.
Source: Greene, 1993, p 37
Contexts: econometrics
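The coincidence with the OLS-style formula in the full-column-rank case can be checked directly; a small numpy sketch with a random illustrative matrix:

```python
import numpy as np

# X with more rows than columns and full column rank: the Moore-Penrose
# pseudoinverse should coincide with the formula (X'X)^(-1) X'.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))          # 10 observations, 3 columns

pinv_formula = np.linalg.inv(X.T @ X) @ X.T   # (X'X)^(-1) X'
pinv_numpy = np.linalg.pinv(X)                # numpy's pseudoinverse
```

The two 3x10 matrices agree to numerical precision, which is the sense in which the pseudoinverse "looks like" the OLS estimator formula.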
PSID:
Panel Study of Income Dynamics. Data set often used in labor economics
studies. Data is from U.S. and is put together at the University of
Michigan.
Since 1968 the PSID has followed and interviewed annually a national sample
that began with about 5000 families. Low-income families were over-sampled in
the original design. Interviews are usually conducted with the 'head' of each
family.
Includes a lot of income and employment variables, and continues to track
children who grow up and move out.
For more information see the PSID's Web site at http://www.isr.mich.edu/src
/psid/index.html
Contexts: labor; data
public economics:
A subfield that includes public goods and common pool resources,
and public finance meaning taxes and public borrowing and
spending.
Contexts: fields
public finance:
Relevant terms: AGI,
contingent valuation,
embedding effect,
existence value,
Lerman ratio,
Lucas critique,
nonuse value,
SCF,
Survey of Consumer Finances,
Tobin tax.
Contexts: fields
purchase accounting:
One of two ways to do the accounting for a U.S. firm after a merger. The
alternative is the pooling of interests.
Contexts: accounting
put option:
A put option is a security which conveys the right to sell a specified
quantity of an underlying asset at a specified price, at or before a fixed
date.
Contexts: finance; business
put-call parity:
A relationship between the price of a put option and a call
option on a stock according to a standard model.
Define:
r as the risk-free interest rate, constant over time, in an environment with
no liquidity constraints
S as a stock's price
t as the current date
T as the expiration date of a put option and a call option
K as the strike price of the put option and call option
C(S,t) as the price of the call option when the current stock price is S and
the current date is t
P(S,t) as the price of the put option when the current stock price is S and
the current date is t
Then the relationship is:
P(S,t) = C(S,t) - S + K*e^(-r(T-t))
The relationship is derived from the fact that combinations of options can
make portfolios that are equivalent to holding the stock through time T, and
that they must return exactly the same amount or an arbitrage would be
available to traders.
Contexts: finance
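A minimal sketch of the relationship, with made-up option and stock numbers:

```python
from math import exp

def put_from_call(C, S, K, r, T, t):
    """Put price implied by put-call parity:
    P = C - S + K * exp(-r * (T - t))."""
    return C - S + K * exp(-r * (T - t))

# Illustrative (invented) numbers: call worth 10 on a stock at 100,
# strike 100, 5% risk-free rate, one year to expiration.
P = put_from_call(C=10.0, S=100.0, K=100.0, r=0.05, T=1.0, t=0.0)
```

Under these numbers the parity-implied put price is about 5.12; any traded put price far from this would open an arbitrage of the kind described above.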
putting-out system:
"A condition for the putting-out system to exist was for labor to be paid a
piece wage, since working at home made the monitoring of time impossible."
-- Joel Mokyr, NU working paper: "The rise and fall of the factory system:
technology, firms, and households since the industrial revolution"
Carnegie-Rochester Conference on macroeconomics, Nov 17-19, 2000.
Source: Joel Mokyr, NU working paper: "The rise and fall of the factory
system: technology, firms, and households since the industrial revolution"
Carnegie-Rochester Conference on macroeconomics, Nov 17-19, 2000.
putty-putty:
As in Romer, JPE, Oct 1990. This describes an attribute of capital in some
models. Putty-putty capital can be transformed into durable goods then back
into general, flexible capital. This contrasts
with putty-clay capital which if I understand correctly can be converted
into durable goods but which cannot then be converted back into
re-investable capital. The algebraic modeler chooses one of these to make
an argument or arrive at a conclusion within the model. The term is not
normally interpreted empirically although empirical analogues to each kind
of capital exist.
Source: Romer, 1990
Contexts: macro; models
Q ratio:
Or, "Tobin's Q". The ratio of the market value of a firm to the
replacement cost of everything in the firm. In Tobin's model this was the
driving force behind investment decisions.
Contexts: macro; finance; models
Q-statistic:
Of Ljung-Box. A test for higher-order serial correlation in residuals from a
regression.
Source: RATS manual, p. 1-15
Contexts: time series; estimation; econometrics
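A sketch of the statistic as usually written, Q = n(n+2) * sum over k=1..m of r_k^2/(n-k), where r_k is the lag-k sample autocorrelation of the residuals; variable names here are invented:

```python
import numpy as np

def ljung_box_q(resid, m):
    """Ljung-Box Q over lags 1..m; approximately chi-squared with m
    degrees of freedom under the null of no serial correlation."""
    e = np.asarray(resid, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.sum(e ** 2)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.sum(e[k:] * e[:-k]) / denom   # lag-k autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
q_noise = ljung_box_q(rng.normal(size=500), m=10)         # modest under the null
q_cycle = ljung_box_q(np.sin(0.1 * np.arange(500)), m=10) # strongly correlated
```

White-noise residuals give a Q in the range of a chi-squared(10) draw, while the strongly autocorrelated sine series gives a huge Q, rejecting the null.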
QJE:
Quarterly Journal of Economics
Contexts: journals
QLR:
quasi-likelihood ratio statistic
Contexts: econometrics; time series
QML:
Stands for quasi-maximum likelihood.
Contexts: econometrics; estimation
quango:
Stands for quasi-non-governmental organization, such as the U.S. Federal
Reserve. The term is British.
Contexts: phrases
quantity theory of money:
There are multiple versions and interpretations of quantity theories of money.
A central idea is that if there is more money in an economic system, prices
will go up. Here, money includes cash and various kinds of credit
which serve as intermediaries in purchases.
A basic expression of this theory is in an equation described by Patinkin
(?) in Leeson(2003) as cited by Samuels (2005). The quantity theory relates
these variables:
- the quantity of money (M)
- its velocity (V) (that is the speed with which money changes hands on
average)
- thus producing the aggregate demand for goods and services (MV)
- the price level (P)
- and the level of output of goods and services (T)
by this equation: MV=PT.
An implication of this construction is that the monetary authorities have some
control over prices, in a way that is not direct but through a whole system of
purchases, production, and consumption.
Contexts: monetary
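The comparative-statics content of MV = PT can be made concrete with a toy computation; all numbers below are invented for illustration:

```python
def price_level(M, V, T):
    """Rearranging MV = PT: the price level implied by money stock M,
    velocity V, and real output T."""
    return M * V / T

# Doubling M with V and T held fixed doubles P.
P1 = price_level(M=1000.0, V=4.0, T=8000.0)
P2 = price_level(M=2000.0, V=4.0, T=8000.0)
```

This is the sense in which "more money in the system" maps into higher prices, provided velocity and output do not adjust.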
quartic kernel:
The quartic kernel is this function: (15/16)(1-u^2)^2 for
-1<u<1 and zero for u outside that range. Here u=(x-xi)/h,
where h is the window width and xi are the values of the
independent variable in the data, and x is the value of the independent
variable for which one seeks an estimate.
For kernel estimation.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
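A direct implementation of the function, with a numerical check that it integrates to one, as a kernel must:

```python
import numpy as np

def quartic_kernel(u):
    """Quartic (biweight) kernel: (15/16)(1 - u^2)^2 for |u| < 1, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) < 1.0, (15.0 / 16.0) * (1.0 - u ** 2) ** 2, 0.0)

# Midpoint Riemann sum over [-1, 1] to verify the unit integral.
du = 1e-4
mids = np.arange(-1.0 + du / 2, 1.0, du)
area = float(np.sum(quartic_kernel(mids)) * du)
```

In kernel estimation the function is evaluated at u = (x - xi)/h for each data point xi, so the window width h controls how many observations get nonzero weight.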
quasi rents:
returns in excess of the short-run opportunity cost of the resources devoted
to the activity
Source: Jensen (86)
Contexts: micro; finance
quasi-differencing:
a process that makes GLS easier, computationally, in a fixed-effects kind of
case. One generates a (delta) with an equation [see B. Meyer's notes,
installment 2, page 3] then subtracts delta times the average of each
individual's x from the list of x's, and delta times each individual's y from
the list of y's, and can run OLS on that. The calculation of delta requires
some estimate of the idiosyncratic (epsilon) error variance and the individual
effects (mu) error variance.
Contexts: econometrics; estimation
quasi-hyperbolic discounting:
A way of accounting in a model for the difference in the preferences an agent
has over consumption now versus consumption in the future.
Let b and d be scalar real parameters greater than zero and less than one.
Events t periods in the future are discounted by the factor b*d^t.
This formulation comes from a 1999 working paper of C. Harris and D. Laibson
which cites Phelps and Pollak (1968) and Zeckhauser and Fels (1968) for this
function.
Contrast hyperbolic discounting, and see more information on discount
rates at that entry.
Source: "Dynamic choices of hyperbolic consumers"", working paper by
Christopher Harris and David Laibson.
Contexts: models; macro; dynamic optimization
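A small sketch of the resulting sequence of discount factors, treating the current period (t = 0) as undiscounted, which is the usual convention; the parameter values are invented:

```python
def qh_factors(b, d, horizon):
    """Quasi-hyperbolic discount factors: 1 for today, then b * d**t
    for an event t periods ahead (t = 1, ..., horizon)."""
    return [1.0] + [b * d ** t for t in range(1, horizon + 1)]

# With b < 1, every future period takes a one-time extra discount b on
# top of exponential discounting d**t, capturing a bias toward present
# consumption.
factors = qh_factors(b=0.7, d=0.95, horizon=3)
```

The jump from 1.0 at t = 0 down to b*d at t = 1 is much larger than the smooth decline between later periods, which is what distinguishes this from ordinary exponential discounting.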
quasi-maximum likelihood:
Often abbreviated QML. Maximum likelihood estimation can't be applied to an
econometric model which has no assumption about error distributions, and may
be difficult if the model has assumptions about error distributions but the
errors are not normally distributed. Quasi-maximum likelihood is maximum
likelihood applied to such a model with the alteration that errors are
presumed to be drawn from a normal distribution. QML can often make
consistent estimates.
QML estimators converge to what can be called a quasi-true estimate; they have
a quasi-score function which produces quasi-scores, and a quasi-information
matrix. Each has maximum likelihood analogues.
Contexts: econometrics; estimation
quasiconcave:
A function f(x) mapping from the reals to the reals is quasiconcave if it is
nondecreasing for all values of x below some x0 and nonincreasing
for all values of x above x0. x0 can be infinity or
negative infinity: that is, a function that is everywhere nonincreasing or
nondecreasing is quasiconcave.
Quasiconcave functions have the property that for any two points in the
domain, say x1 and x2, the value of f(x) on all points
between them satisfies:
f(x) >= min{f(x1), f(x2)}.
Equivalently, f() is quasiconcave iff -f() is quasiconvex.
Equivalently, f() is quasiconcave iff for any constant real k, the set of
values x in the domain of f() for which f(x) >= k is a convex set.
The most common use in economics is to say that a utility function is
quasiconcave, meaning that in the relevant range it is nondecreasing.
A function that is concave over some domain is also quasiconcave over that
domain. (Proven in Chiang, p 390).
A strictly quasiconcave utility function is equivalent to a strictly
convex set of preferences, according to Brad Heim and Bruce Meyer (2001) p.
17.
Source: Simon and Blume, Chiang, pp 387-399
Contexts: real analysis
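The betweenness property above can be checked numerically on a grid. This sketch tests only a necessary condition (grid points, not all reals), and the function names are invented:

```python
import numpy as np

def looks_quasiconcave(f, grid):
    """Check f(x) >= min(f(x1), f(x2)) for every grid point between
    every pair of grid points (a necessary condition on the grid)."""
    vals = np.array([f(x) for x in grid])
    for i in range(len(vals)):
        for j in range(i + 2, len(vals)):
            lo = min(vals[i], vals[j])
            if np.any(vals[i + 1:j] < lo - 1e-12):
                return False
    return True

grid = np.linspace(-3.0, 3.0, 61)
```

On this grid, -x^2 (concave, hence quasiconcave) passes, a monotone function like x passes, and x^2 fails, since its interior values dip below the minimum of its endpoint values.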
quasiconvex:
A function f(x) mapping from the reals to the reals is quasiconvex if it is
nonincreasing for all values of x below some x0 and nondecreasing
for all values of x above x0. x0 can be infinity or
negative infinity: that is, a function that is everywhere nonincreasing or
nondecreasing is quasiconvex.
Quasiconvex functions have the property that for any two points in the domain,
say x1 and x2, the value of f(x) on all points between
them satisfies:
f(x) <= max{f(x1), f(x2)}.
Equivalently, f() is quasiconvex iff -f() is quasiconcave.
Equivalently, f() is quasiconvex iff for any constant real k, the set of
values x in the domain of f() for which f(x) <= k is a convex set.
A function that is convex over some domain is also quasiconvex over that
domain. (Proven in Chiang, p 390).
Source: Simon and Blume, Chiang, pp 387-399
Contexts: real analysis
quasilinear:
A utility function U() is quasilinear in one of its arguments, c, if a
monotonic transformation of U() has this form for some v():
U(c,x1,x2,...,xk) = c + v(x1,x2,...,xk)
[ed: Your editor no longer recalls why one would care if a utility function
has this property or not.]
Contexts: models; utility
R&D intensity:
Sometimes defined to be the ratio of expenditures by a firm on research and
development to the firm's sales.
Source: Levin, Richard C., Alvin K. Klevorick, Richard R. Nelson, Sidney G.
Winter. 1987. "Appropriating the Returns from Industrial Research and
Development." Brookings Papers on Economic Activity, voume 1987, issue
3. pp 783-820. (see p. 812 for the definition of this term.)
R-squared:
Usually written R^2. It is the square of the correlation coefficient
between the dependent variable and the estimate of it produced by the
regressors, or equivalently defined as the ratio of regression variance
to total variance.
Source: Kennedy, 1992; Greene,
1993, p 72
Contexts: econometrics
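The equivalence of the two definitions (for OLS with an intercept) can be verified on simulated data; a minimal numpy sketch with invented numbers:

```python
import numpy as np

# Simulated regression: true slope 2, noise standard deviation 1,
# so the population R^2 is 4/5.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
X = np.column_stack([np.ones_like(x), x])   # include an intercept

b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b

# Definition 1: squared correlation between y and its fitted values.
r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2
# Definition 2: ratio of regression variance to total variance.
r2_var = np.var(y_hat) / np.var(y)
```

The two numbers agree to machine precision, and both sit near the population value of 0.8 for this design.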
Ramsey equilibrium:
Results from a government's choice in certain kinds of models. Suppose that
the government knows how private sector producers will respond to any economic
environment, and that the government moves first, choosing some aspect of the
environment. Suppose further that the government makes its choice in order to
maximize a utility function for the population. Then the government's choice
is a Ramsey problem and its solution pays off with the Ramsey
outcome.
Contexts: macro
Ramsey outcome:
The payoffs from a Ramsey equilibrium.
Contexts: macro
Ramsey problem:
See Ramsey equilibrium.
random:
Not completely predetermined by the other variables available.
Examples: Consider the function plus(x,y) which we define to have the value
x+y. Every time one applies this function to a given x and y, it would give
the same answer. Such a function is deterministic, that is,
nonrandom.
Consider by contrast the function N(0,1) which we define to give back a draw
from a standard normal distribution. This function does not return the
same value every time, even when given the same parameters, 0 and 1. Such a
function is random, or stochastic.
Contexts: statistics; econometrics; time series
random effects estimation:
The GLS procedure in the context of panel data.
Fixed effects and random effects are forms of linear regression whose
understanding presupposes an understanding of OLS.
In a fixed effects regression specification there is a binary variable
(also called dummy or indicator variable) marking cross section units
and/or time periods. If there is a constant in the regression, one cross
section unit must not have its own binary variable marking it.
From Kennedy, 1992, p. 222:
"In the random effects model there is an overall intercept and an error
term with two components: e_it + u_i. The e_it
is the traditional error term unique to each observation. The u_i
is an error term representing the extent to which the intercept of the
ith cross-sectional unit differs from the overall intercept. . . . .
This composite error term is seen to have a particular type of
nonsphericalness that can be estimated, allowing the use of EGLS for
estimation. Which of the fixed effects and the random effects models is
better? This depends on the context of the data and for what the results are
to be used. If the data exhaust the population (say observations on all firms
producing automobiles), then the fixed effects approach, which produces
results conditional on the units in the data set, is reasonable. If the data
are a drawing of observations from a large population (say a thousand
individuals in a city many times that size), and we wish to draw inferences
regarding other members of that population, the fixed effects model is no
longer reasonable; in this context, use of the random effects model has the
advantage that it saves a lot of degrees of freedom. The random effects model
has a major drawback, however: it assumes that the random error associated
with each cross-section unit is uncorrelated with the other regressors,
something that is not likely to be the case. Suppose, for example, that wages
are being regressed on schooling for a large set of individuals, and that a
missing variable, ability, is thought to affect the intercept; since schooling
and ability are likely to be correlated, modeling this as a random effect will
create correlation between the error and the regressor schooling (whereas
modeling it as a fixed effect will not). The result is bias in the
coefficient estimates from the random effect model."
[Kennedy asserts, then, that fixed and random effects often produce very
different slope coefficients.]
The Hausman test is one way to distinguish which one makes
sense.
Source: Kennedy, 1992
Contexts: estimation; econometrics
random process:
Synonym for stochastic process.
Contexts: statistics; models; econometrics; time series
random variable:
A nondeterministic function. See random.
Contexts: statistics
random walk:
A random walk is a random process y_t like:
y_t = m + y_(t-1) + e_t
where m is a constant (the trend, often zero) and
e_t is white noise.
A random walk has infinite variance and a unit root.
Source: Greene, 1993, p 559
Contexts: econometrics; statistics; time series; models
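A simulation sketch of the process as defined above (the helper name is invented); note that first-differencing the series recovers the stationary white-noise shocks:

```python
import numpy as np

def random_walk(n, m=0.0, sigma=1.0, seed=0):
    """Simulate y_t = m + y_(t-1) + e_t from y_0 = 0, with e_t
    white noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    return np.cumsum(m + rng.normal(scale=sigma, size=n))

y = random_walk(1000)
d = np.diff(y)   # differences are just the e_t shocks: stationary
```

The level series y wanders without bound (its variance grows with t, reflecting the unit root), while the differenced series d has constant variance near sigma^2.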
Rao-Cramer inequality:
defines the Cramer-Rao lower bound, which see.
(would like to put equation from Hogg and Craig p 372 here)
Source: Hogg and Craig p 372
Contexts: econometrics; statistics; estimation
rational:
An adjective. Has several definitions.:
(1) characterizing behavior that purposefully chooses means to achieve ends
(as in Landes, 1969/1993, p 21).
(2) characterizing preferences which are complete and transitive, and
therefore can be represented by a utility function (e.g. Mas-Colell).
(3) characterizing a thought process based on reason; sane; logical. Can be
used in regard to behavior. (e.g. American Heritage
Dictionary, p 1028)
Contexts: phrases; modelling
rational expectations:
An assumption in a model: that the agent under study uses a forecasting
mechanism that is as good as is possible given the stochastic processes and
information available to the agent.
Often in essence the rational expectations assumption is that the agent knows
the model, and fails to make absolutely correct forecasts only because of the
inherent randomness in the economic environment.
Contexts: macro
rational ignorance:
The option of an agent not to acquire or process information about some realm.
Ordinarily used to describe a citizen's choice not to pay attention to
political issues or information, because paying attention has costs in time
and effort, and the effect a citizen would have by voting per se is usually
zero.
Source: Downs (1957): An economic theory of democracy, according to
G.Miller, "The impact of economics on contemporary political
science", Sept 1997 JEL.
Contexts: phrases; political
rationalizable:
In a noncooperative game, a strategy of player i is rationalizable iff it is a
best response to a possible set of actions of the other players, where
those actions are best responses given beliefs that those other players might
have.
By rationalizable we mean that i's strategy can be justified in terms of the
other players choosing best responses to some beliefs (subjective probability
distributions) that they may be conjectured to have.
Nash strategies are rationalizable.
For a more formal definition see sources. This is a rough paraphrase.
Source: Bernheim, 1984, p 1014
Contexts: game theory
rationalize:
verb, meaning: to take an observed or conjectured behavior and find a model
environment in which that behavior is an optimal solution to an optimization
problem.
Contexts: phrases; modelling
RATS:
A computer program for the statistical analysis of data, especially time
series. Name stands for Regression Analysis of Time Series.
First chapter of its manual has a nice tutorial.
The software is made by Estima
Corp.
Contexts: data; estimation; code
RBC:
stands for Real Business Cycle (which see) -- a class of macro
theories
Contexts: models; macro
real analysis:
Relevant terms: affine,
B1,
Banach space,
Borel set,
Borel sigma-algebra,
Cauchy sequence,
compact,
convolution,
even function,
extended reals,
Frechet derivative,
Frechet differentiable,
functional,
Hilbert space,
Holder continuous,
inf,
L,
L,
L,
Lipschitz condition,
Lipschitz continuous,
lower hemicontinuous,
measurable,
measurable space,
measure,
narrow topology,
quasiconcave,
quasiconvex,
Riemann-Stieltjes integral,
sigma-algebra,
sup,
topological space,
topology,
upper hemicontinuous,
Weierstrauss Theorem.
Contexts: fields
real bills doctrine:
In the Great Depression of the 1930s, the Federal Reserve did not provide
sufficient money to prevent a series of banking panics. "The Fed erred, says
Meltzer, because it followed a theory (named 'real bills') that called for it
to create money only in response to higher loan demand; because loan demand
had collapsed, the Fed was too passive."
from "Greenspan's Finest Hour?" by Robert J. Samuelson, published in the Dec
15, 2003 Newsweek, page 39, which cites Allan Meltzer's book A
History of the Federal Reserve.
Source: from "Greenspan's Finest Hour?" by Robert J. Samuelson, published in
the Dec 15, 2003 Newsweek, page 39, which cites Allan Meltzer's book A
History of the Federal Reserve.
Contexts: monetary; economic history
real business cycle theory:
A class of theories explored first by John Muth (1961), and associated most
with Robert Lucas. The idea is to study business cycles with the assumption
that they were driven entirely by technology shocks rather than by
monetary shocks or changes in expectations.
Shocks in government purchases are another kind of shock that can appear in a
pure real business cycle (RBC) model. Romer, 1996, p
151
Source: Among others: Romer, 1996, p 151
Contexts: macro; models
real externality:
An effect of production or transactions on outside parties that affects
something entering their production or utility functions directly.
Contexts: general equilibrium
recession:
A recession is defined to be a period of two quarters of negative GDP growth.
Thus: a recession is a national or world event, by definition. And
statistical aberrations or one-time events can almost never create a
recession; e.g. if there were to be movement of economic activity (measured or
real) around Jan 1, 2000, it could create the appearance of only one quarter
of negative growth. For a recession to occur the real economy must
decline.
Contexts: macro
reduced form:
The reduced form of an econometric model has been rearranged algebraically so
that each endogenous variable is on the left side of one equation, and only
predetermined variables (exogenous variables and lagged endogenous variables)
are on the right side.
Source: Johnston, p 440
Contexts: econometrics; estimation
regression function:
A regression function describes the relationship between dependent variable Y
and explanatory variable(s) X. One might estimate the regression function
m() in the econometric model
Yi = m(Xi) + ei
where the ei are the residuals or errors. As presented that is a
nonparametric or semiparametric model, with few assumptions about m(). If one
were to assume also that m(X) is linear in X one would get to a standard
linear regression model:
Yi = (Xi)b + ei
where the vector b could be estimated.
Source: Hardle, 1990
Contexts: econometrics; estimation
regrettables:
consumption items that do not directly produce utility, such as health
maintenance, transportation to work, and "waiting times"
Source: Glen Cain, Handbook article
Contexts: labor
Regulation Q:
A U.S. Federal Reserve System rule limiting the interest rates that U.S. banks
and savings and loan institutions could pay on deposits.
Source: Glasner, p. 162
Contexts: history; IO
reinsurance:
Insurance purchased by an insurer, often to protect against especially large
risks or risks correlated to other risks the insurer faces.
Contexts: business
rejection region:
In hypothesis testing. Let T be a test statistic. Possible values of T can
be divided into two regions, the acceptance region and the rejection region.
If the value of T comes out to be in the acceptance region, the null
hypothesis H0 (the set of restrictions being tested) is
accepted, or at any rate not rejected. If T falls in the rejection region,
the null hypothesis is rejected.
The terms 'acceptance region' and 'rejection region' may also refer to the
subsets of the sample space that would produce statistics T that go into the
acceptance region or rejection region as defined above.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
rent-seeking:
Rent-seeking means a search for extraordinary profits, beyond the normal
returns to investment. It often implies putting someone else at a
disadvantage. See rents and quasi-rents.
rents:
Rents are returns in excess of the opportunity cost of the resources devoted
to the activity.
Source: Jensen (86)
Contexts: micro; finance
resale price maintenance:
The effect of rules imposed by a manufacturer on wholesale or retail resellers
of its own products, to prevent them from competing too fiercely on price and
thus driving profits down from the reselling activity. The manufacturer may
do this because it wishes to keep resellers profitable. Such contract
provisions are usually legal under US law but have not always been allowed
since they formally restrict free trade.
With thanks to: Jonathan G. Powers (jgp423@northwestern.edu as of 12 July
2000)
Contexts: industrial organization
reservation wage property:
A model has the reservation wage property if agents seeking employment in the
model accept all jobs paying wages above some fixed value and reject all jobs
paying less.
Contexts: labor; macro
residual claimant:
The agent who receives the remainder of a random amount once predictable
payments are made.
The most common example: consider a firm with revenues, suppliers, and holders
of bonds it has issued, and stockholders. The suppliers receive the
predictable amount they are owed. The bondholders receive a predictable
payout -- the debt, plus interest. The stockholders can claim the residual,
that is, the amount left over. It may be a negative amount, but it may be
large. The same idea of a residual claimant can be applied in analyzing other
contracts.
There is a historical link to theories about wages; see
http://britannica.com/bcom/eb/article/9/0,5716,109009+6+106209,00.html
Contexts: corporate finance
resiliency:
An attribute of a market.
In securities markets, resiliency is measured by "the speed with which prices
recover from a random, uninformative shock" (Kyle, 1985, p 1316).
Source: Kyle, 1985, p 1316
Contexts: finance
ReStat:
An abbreviation for the Review of Economics and Statistics.
Contexts: journals
restricted estimate:
An estimate of parameters taken with the added requirement that some
particular hypothesis about the parameters is true. Note that the variance of
a restricted estimate can never exceed that of the corresponding unrestricted
estimate, though a restricted estimate is biased if the restriction is false.
Contexts: econometrics; estimation
restriction:
assumption about parameters in a model
Contexts: econometrics; estimation
ReStud:
An abbreviation for the journal
Review of Economic Studies.
Contexts: journals
revelation principle:
That truth-telling, direct revelation mechanisms can generally be designed to
achieve the Nash equilibrium outcome of other mechanisms; this can be
proven in a large category of mechanism design cases.
Relevant to a modelling (that is, theoretical) context with:
-- two players, usually firms
-- a third party (usually the government) managing a mechanism to
achieve a desirable social outcome
-- incomplete information -- in particular, the players have types that are
hidden from the other player and from the government.
Generally a direct revelation mechanism (that is, one in which the strategies
are just the types a player can reveal about himself) in which telling the
truth is a Nash equilibrium outcome can be proven to exist and be equivalent
to any other mechanism available to the government. That is the revelation
principle. It is used most often to prove something about the whole class of
mechanism equilibria, by selecting the simple direct revelation mechanism,
proving a result about that, and applying the revelation principle to assert
that the result is true for all mechanisms in that context.
Source: Kreps, 1990, p 691, 694
Contexts: micro theory
Ricardian proposition:
that tax financing and bond financing of a given stream of government
expenditures lead to equivalent allocations. This is the Modigliani-Miller
theorem applied to the government.
Contexts: macro; models
ridit scoring:
A way of recoding variables in a data set so that one has a measure not of
their absolute values but their positions in the distribution of observed
values. Defined in this broadcast to the list of Stata users:
Date: Sat, 20 Feb 1999 14:13:35 +0000
From: Ronan Conroy
Subject: Re: statalist: Standardizing Variables
Paul Turner said (19/2/99 9:54 pm)
>I have two variables--X1 and X2--measured on ordinal scales. X1 ranges
>from 0 to 10; X2 ranges from 0 to 12. What I want to do is to standardize
>X1 and X2 to a common metric in order to explore how differences between
>the two affect the dependent variable of interest. Converting values to
>percentages of the maximum values (10 and 12) is the first approach that
>occurs to me, but I don't know if there's something I'm forgetting
This sort of thing is possible, and called ridit scoring. You replace
each of the original scale points with the percentage (or proportion) of
the sample who scored at or below that value. This gives the scales a
common interpretation as percentiles of the sample, and means that they
are now expressed on an interval metric, though the data are still grainy.
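A minimal sketch of the scoring the post describes (the function name is illustrative; note that the classical ridit instead uses the proportion scoring below a value plus half the proportion scoring at it):

```python
def ridit_scores(values):
    """Replace each ordinal value with the proportion of the sample
    scoring at or below that value, as the quoted post describes."""
    n = len(values)
    return [sum(1 for v in values if v <= x) / n for x in values]
```

For example, ridit_scores([1, 2, 2, 3]) maps the scale points to [0.25, 0.75, 0.75, 1.0].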
Source: Ronan Conroy (rconroy@rcsi.ie)
Contexts: data; estimation
Riemann-Stieltjes integral:
A generalization of regular Riemann integration.
Let | denote the integral sign. Quoting from Priestly:
"...when we have two deterministic functions g(t),F(t), the
Riemann-Stieltjes integral
R = |ab g(t) dF(t)
is defined as the limiting value of the discrete summation"
(sum from i=1 to i=n of)
g(t_i)[F(t_i) - F(t_{i-1})]
for t_1=a and t_n=b as n goes to infinity and "as
max(t_i - t_{i-1}) -> 0."
If F(t) is differentiable, then the above integral is the same as the regular
integral R = |ab g(t)F'(t) dt, but the Riemann-Stieltjes
integral can be defined in many cases even when F() is not
differentiable.
One of the most common uses is when F() is a cdf.
Examples: The expectation of a random variable can be written:
mu=| xf(x) dx
if f(x) is the pdf. It can also be written:
mu=| x dF(x)
where F(x) is the cdf. The two are equivalent for a continuous distribution,
but notice that for a discrete one (e.g. a coin flip, with X=0 for heads and
X=1 for tails) the second, Riemann-Stieltjes, formulation is well defined but
no pdf exists to calculate the first one.
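The coin-flip example can be checked numerically with a sketch of the defining sum (an assumed illustration, not from Priestly):

```python
def rs_integral(g, F, grid):
    """Riemann-Stieltjes sum over a fixed grid: the sum of
    g(t_i) * [F(t_i) - F(t_{i-1})], approximating | g(t) dF(t)."""
    return sum(g(grid[i]) * (F(grid[i]) - F(grid[i - 1]))
               for i in range(1, len(grid)))

def coin_cdf(x):
    """cdf of a fair coin flip: X = 0 for heads, X = 1 for tails.
    A step function, so no pdf exists."""
    return 0.0 if x < 0 else (0.5 if x < 1 else 1.0)

# The mean mu = | x dF(x) is well defined even without a pdf:
mu = rs_integral(lambda x: x, coin_cdf, [-1.0, 0.0, 1.0])
```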
Source: Priestly, 1981, 1994, p 155
Contexts: econometrics; time series; real analysis; statistics
risk:
If outcomes will occur with known or estimable probability the decisionmaker
faces a risk. Certainty is a special case of risk in which this probability
is equal to zero or one. Contrast uncertainty.
Source: J. Montgomery, social networks paper
Contexts: models
risk free rate puzzle:
See equity premium puzzle.
Contexts: finance; macro; phrases
RJE:
An abbreviation for the RAND Journal of Economics, which was previously
called the Bell Journal of Economics.
Contexts: journals
RMPY:
Stands for a standard VAR run on standard data, with interest rates (R), money
stock (M), inflation (P), and output (Y). In Faust and Irons (1996), these
are operationalized by the three-month Treasury bill rate, M2, the CPI, and
the GNP.
Source: Jon Faust and John Irons (1996), "Money, politics, and the
post-War business cycle"
Contexts: macro
Robinson-Patman Act:
U.S. legislation of 1936 which made rules against price discrimination by
firms. Agitation by small grocers was a principal cause of the law. They
were under competitive pressure and displaced by the arrival of chain stores.
The Act is thought by many to have prevented reasonable price competition,
since it made many pricing actions illegal per se. For many of its
provisions, 'good faith' was not a permitted defense. So it can be argued
that it was confusing, vague, unnecessarily restrictive, and designed to
prevent some competitors in retailing from being driven out rather than to
further social welfare generally, e.g. by allowing pricing decisions that
would benefit consumers.
Other causes: glitches in an earlier law, the Clayton Act.
Contexts: IO; history
robust smoother:
A robust smoother is a smoother (an estimator of a regression function) that
gives lower weights to datapoints that are outliers in the
y-direction.
Source: Hardle, 1990
Contexts: econometrics; estimation
Roll critique:
That the CAPM may appear to be rejected in tests not because it is
wrong but because the proxies for the market return are not close enough to
the true market portfolio available to investors.
Contexts: finance
roughness penalty:
A loss function that one might incorporate into an estimate of a function to
discourage estimated functions that match the data closely only at the cost
of jerkiness. See 'spline smoothing' and 'cubic spline' for example uses.
An example roughness penalty would be L I [m''(u)]^2 du, where L is
a 'smoothing parameter', I stands for the integral sign, m''() is the
second derivative of the estimated function, and u is a dummy variable that
ranges over the domain of the estimated function.
Source: Hardle, 1990, circa page 56
Contexts: econometrics; estimation
Rybczynski theorem:
Paraphrasing from Hanson and Slaughter (1999): In the context of a
Heckscher-Ohlin model of international trade, open trade between
regions means changes in relative factor supplies between regions can lead to
an adjustment in quantities and types of outputs between regions that would
return the system toward equality of production input prices like wages across
countries (the state of factor price equalization).
Such theorems are named this way by analogy to Rybczynski (1955), and refer to
that part of the mechanism that has to do with output adjustments.
Source: Hanson, Gordon H., and Matthew J. Slaughter, "The Rybczynski theorem,
factor-price equalization, and immigration: evidence from U.S. states,"
NBER working paper 7074, April 1999. On Web at http://www.nber.org/papers/w7074
Rybczynski, T.M. 1955. "Factor endowments and relative commodity
prices." Economica 22: 336-341.
Contexts: trade; international
S-Plus:
Statistical software published by
Mathsoft.
Contexts: data; estimation
s.t.:
An abbreviation meaning "subject to" or "such that", where
constraints follow.
In a common usage:
max_x f(x) s.t. g(x)=0
The above expression, in words, means: "The value of f(x) that is
greatest among all those for which the argument x satisfies the constraint
that g(x)=0." (Here f() and g() are fixed, possibly known, real-valued
functions of x.)
Contexts: notation
saddle point:
In a second-order [linear difference equation] system, ... if one root has
absolute value greater than one, and the other root has absolute value less
than one, then the steady state of the system is called a saddle point.
In this case, the system is unstable for almost all initial conditions. The
exception is the set of initial conditions that begin on the eigenvector
associated with the stable eigenvalue.
Source: Farmer, p. 30
Contexts: macro; dynamical systems; models
Sargan test:
A test of the validity of instrumental variables. It is a test of the
overidentifying restrictions. The hypothesis being tested is that the
instrumental variables are uncorrelated to some set of residuals, and
therefore they are acceptable, healthy, instruments.
If the null hypothesis is confirmed statistically (that is, not rejected), the
instruments pass the test; they are valid by this criterion.
In the Shi and Svensson working paper (which shows that elected national
governments in 1975-1995 had larger fiscal deficits in election years,
especially in developing countries), the Sargan statistic was asymptotically
distributed chi-squared if the null hypothesis were true.
See test of identifying restrictions, which is not exactly the same
thing, I think.
Source: Shi and Svensson; Harvard University working paper circa 2000
Contexts: econometrics
SAS:
Statistical analysis software. SAS web
site
Contexts: data; estimation
scale economies:
Same as economies of scale.
Contexts: production theory
scatter diagram:
A graph of unconnected points of data. If there are many of them the result
may be 'clouds' of data which are hard to interpret; in such a case one might
want to use a nonparametric technique to estimate a regression
function.
Source: Greene, 1993, p 88
Contexts: econometrics; estimation
scedastic function:
Given an independent variable x and a dependent variable y, the scedastic
function is the conditional variance of y given x. That variance of the
conditional distribution is:
var[y|x] = E[(y - E[y|x])^2 | x]
= integral or sum of (y - E[y|x])^2 f(y|x) dy
= E[y^2|x] - (E[y|x])^2.
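For a discrete conditional distribution, the formula can be sketched directly (illustrative code, not from Greene):

```python
def scedastic(dist):
    """Conditional variance var[y|x] = E[y^2|x] - (E[y|x])^2, where
    dist lists (y, probability) pairs of the conditional distribution
    of y for one fixed value of x."""
    Ey = sum(y * p for y, p in dist)
    Ey2 = sum(y * y * p for y, p in dist)
    return Ey2 - Ey ** 2
```

With dist = [(0, 0.5), (1, 0.5)], for example, the conditional variance is 0.25.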
Source: Greene, 1993, pp 68-69
Contexts: econometrics
SCF:
Stands for Survey of Consumer Finances.
Contexts: public finance; labor
Schumpeterian growth:
Paraphrasing from Mokyr (1990): Schumpeterian growth is economic growth
brought about by increases in knowledge, most of which is called technological
progress.
Source: Mokyr, 1990, p. 4-6
Contexts: history; macro
Schwarz Criterion:
A criterion for selecting among formal econometric models. The Schwarz
Criterion is a number:
T ln (RSS) + K ln(T)
The criterion is minimized over choices of K to form a tradeoff between the
fit of the model (which lowers the sum of squared residuals) and the model's
complexity, which is measured by K. Thus an AR(K) model versus an AR(K+1) can
be compared by this criterion for a given batch of data.
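A sketch of such a comparison (the RSS numbers are made up for illustration):

```python
import math

def schwarz_criterion(rss, T, K):
    """Schwarz Criterion as given above: T * ln(RSS) + K * ln(T).
    Lower values are preferred; K measures model complexity."""
    return T * math.log(rss) + K * math.log(T)

# Hypothetical AR(1) vs AR(2) comparison on T = 100 observations:
sc_ar1 = schwarz_criterion(rss=52.0, T=100, K=1)
sc_ar2 = schwarz_criterion(rss=51.5, T=100, K=2)
# Here the small improvement in fit does not justify the extra
# parameter, so the criterion prefers the AR(1) model.
```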
Source: RATS manual, p. 5-18
Contexts: econometrics; time series; models
Scitovsky paradox:
The problem that some ways of aggregating social welfare may make it possible
that a switch from allocation A to allocation B seems like an improvement in
social welfare, but so does a move back. (An example may be Condorcet's
voting paradox.)
Scitovsky, T., 1941, "A Note on Welfare Propositions in Economics", Review
of Economic Studies, Vol 9, Nov 1941, pp 77-88.
The Scitovsky criterion (for a social welfare function?) is that the Scitovsky
paradox not exist.
Source: Boadway 1974; Scitovsky
Contexts: public
score:
In maximum likelihood estimation, the score vector is the gradient of the
log-likelihood function with respect to the parameters. So it has the same number
of elements as the parameter vector does (often denoted k). The score is a
random variable; it's a function of the data. It has expectation zero, and is
set to zero exactly for a given sample in the maximum likelihood estimation
process.
Denoting the score as S(q), and the log-likelihood
function as ln L(q), where in both cases the data are
also implied arguments:
S(q) = d ln L(q)/dq
Example: In OLS regression of Y_t = X_t b + e_t, the score for each possible
parameter value, b, is
X_t'e_t(b), summed over t.
The variance of the score is E[score^2] - (E[score])^2,
which is E[score^2] since E[score] is zero. E[score^2] is
also called the information matrix and is denoted I(q).
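The OLS example can be sketched as follows (illustrative code; the function name is not standard):

```python
def ols_score(X, y, b):
    """Score of the linear model Y_t = X_t b + e_t at parameter vector b:
    the sum over t of X_t' * e_t(b), where e_t(b) = y_t - X_t b.
    X is a list of rows; returns a k-vector."""
    k = len(X[0])
    score = [0.0] * k
    for xt, yt in zip(X, y):
        et = yt - sum(xj * bj for xj, bj in zip(xt, b))
        for j in range(k):
            score[j] += xt[j] * et
    return score
```

At the OLS estimate the residuals are orthogonal to the regressors, so the score is exactly zero there; at other parameter values it is generally nonzero.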
Contexts: econometrics; estimation
screening game:
A game in which an uninformed player offers a menu of choices to the
player with private information (the informed player). The selection
of the elements of that menu (which might be, for example, employment
contracts containing pairs of pay rates and working hours) is a choice for the
uninformed player to optimize on the basis of expectations about the possible
types of the informed player.
Contexts: game theory
second moment:
The second moment of a random variable is the expected value of the square of
the draw of the random variable. That is, the second moment is
E[X^2]. Same as the 'uncentered second moment', as distinguished from the
variance, which is the 'centered second moment.'
Contexts: econometrics; statistics
Second Welfare Theorem:
A Pareto efficient allocation can be achieved by a Walrasian equilibrium if
every agent has a positive quantity of every good, and preferences are convex,
continuous, and strictly increasing.
(My best understanding of 'convex preferences' is that it means 'concave
utility function'.)
Contexts: general equilibrium; models
secular:
an adjective meaning "long term" as in the phrase "secular
trends." Outside the research context its more common meaning is 'not
religious'.
seigniorage:
Alternate spelling for seignorage.
seignorage:
"The amount of real purchasing power that [a] government can extract from
the public by printing money." -- Cukierman 1992
Explanation: When a government prints money, it is in essence borrowing
interest-free since it receives goods in exchange for the money, and must
accept the money in return only at some future time. It gains further if
issuing new money reduces (through inflation) the value of old money by
reducing the liability that the old money represents. These gains to a
money-issuing government are called "seignorage" revenues.
The original meaning of seignorage was the fee taken by a money issuer (a
government) for the cost of minting the money. Money itself, at that time,
was intrinsically valuable because it was made of metal.
Source: Cukierman, 1992
Contexts: money
self-generating:
Given an operator B() that operates on sets, a set W is self-generating if W
is contained in B(W).
This definition is in Sargent (98) and may come from Abreu, Pearce, and
Stacchetti (1990).
semi-nonparametric:
synonym for semiparametric.
Contexts: econometrics
semi-strong form:
Can refer to the semi-strong form of the efficient markets hypothesis, which
is that any public information about a security is fully reflected in its
current price.
Fama (1991) says that a more common and current name for tests of the
semi-strong form hypothesis is 'event studies.'
Source: Fama, 1970, p 404
Contexts: finance
semilog:
The semilog equation is an econometric model:
Y = e^(a+bX+e)
or equivalently
ln Y = a + bX + e
Commonly used to describe exponential growth curves. (Greene 1993, p
239)
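Since ln Y is linear in X, the model can be fit by OLS on the logged data; a minimal sketch (assuming all Y are positive; the names are illustrative):

```python
import math

def fit_semilog(X, Y):
    """Fit ln Y = a + b X by ordinary least squares on the logged Y."""
    lnY = [math.log(y) for y in Y]
    n = len(X)
    xbar = sum(X) / n
    ybar = sum(lnY) / n
    b = (sum((x - xbar) * (ly - ybar) for x, ly in zip(X, lnY))
         / sum((x - xbar) ** 2 for x in X))
    a = ybar - b * xbar
    return a, b

# An exact exponential growth curve Y = e^(1 + 0.5 X) is recovered:
xs = [0.0, 1.0, 2.0, 3.0]
a, b = fit_semilog(xs, [math.exp(1.0 + 0.5 * x) for x in xs])
```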
Source: Greene, 1993, p 239
Contexts: econometrics
semiparametric:
An adjective that describes an econometric model with some components that are
unknown functions, while others are specified as unknown finite dimensional
parameters.
An example is the partially linear model.
Source: Tripathi, 1996, p 2
Contexts: econometrics, statistics
senior:
Debts may vary in the order in which they must legally be paid in the event of
bankruptcy of the individual or firm that owes the debt. The debts that must
be paid first are said to be senior debts.
Contexts: finance
SES:
socioeconomic status
Contexts: labor; sociology
shadow price:
In the context of a maximization problem with a constraint, the shadow price
on the constraint is the amount by which the objective function of the
maximization would increase if the constraint were relaxed by one unit.
The value of a Lagrangian multiplier is a shadow price.
This is a striking and useful fact, but takes some practice to
understand.
Source: Layard and Glaister, p 8
shakeout:
A period when the failure rate or exit rate of firms from an industry is
unusually high.
Source: Philip Anderson and Michael L. Tushman, Research-Technology
Management, May/June 1991, pp. 26-31.
Contexts: IO; business history
sharing rule:
A function that defines the split of gains between a principal and agent. The
gains are usually profits, and the split is usually a linear rule that gives a
fraction to the agent.
For example, suppose profits are x, which might be a random variable. The
principal and agent might agree, in advance of knowing x, on a sharing rule
s(x). Here s(x) is the amount given to the agent, leaving the principal with
the residual gain x-s(x).
Source: very roughly taken from Holmstrom (82)
Contexts: game theory; micro theory; models
Sharpe ratio:
Computed in the context of the Sharpe-Lintner CAPM. Defined for an asset
portfolio a with mean return m_a, standard deviation s_a, and risk-free rate
r_f, by:
[m_a - r_f]/s_a
Higher Sharpe ratios are more desirable to the investor in this model.
The Sharpe ratio is a synonym for the "market price of risk."
Empirically, for the NYSE, the Sharpe ratio is in the range of .30 to
.40.
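A minimal sketch of the computation (illustrative names; the returns below are a toy series, not NYSE data):

```python
def sharpe_ratio(returns, risk_free_rate):
    """Mean excess return over the risk-free rate, divided by the
    standard deviation of the portfolio's returns."""
    n = len(returns)
    mean = sum(returns) / n
    sd = (sum((r - mean) ** 2 for r in returns) / n) ** 0.5
    return (mean - risk_free_rate) / sd
```

For returns of 10% and 20% with a 5% risk-free rate, the ratio is (0.15 - 0.05) / 0.05 = 2.0.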
Contexts: finance
SHAZAM:
Econometric software published at the University of British Columbia. See
http://shazam.econ.ubc.ca.
Contexts: data; estimation
Shephard's lemma:
That the cost-minimizing demand for a production input equals the derivative
of the cost function with respect to the price of that input.
Source: Shephard (1953) cited in Zegeye (2001)
Sherman Act:
1890 U.S. antitrust law. It has been described as vague, leading to ambiguous
interpretations over the years.
Section one of the law forbids certain joint actions: "Every contract,
combination in the form of trust or otherwise, or conspiracy, in restraint of
trade or commerce among the several states, or with foreign nations, is hereby
declared illegal...."
Section two of the law forbids certain individual actions: "Every person
who shall monopolize, or attempt to monopolize, or combine or conspire with
any other person or persons, to monopolize any part of the trade or commerce
among the several states, or with foreign nations, shall be deemed guilty of a
felony..."
The reasons for the passage of the Sherman Act:
(1) To promote competition to benefit consumers,
(2) Concern for injured competitors,
(3) Distrust of concentration of power.
Source: lectures and handouts of Michael Whinston at Northwestern U in
Economics D50, Winter 1998
Contexts: IO; antitrust; regulation
short rate:
Abbreviation for 'short term interest rate'; that is, the interest rate
charged (usually in some particular market) for short term loans.
Contexts: finance
Shubik model:
A theoretical model designed to study the behavior of money. There are N
goods traded in N(N-1) markets, one for each possible combination of good i
and good j that could be exchanged. One assumes that only N of these markets
are open; that good 0, acting as money, is traded for each of the other
commodities but they are not exchanged for one another. Then one can study
the behavior of the money good.
Contexts: money; models
SIC:
Standard Industrial Classification code -- a four-digit number assigned to
U.S. industries and their products. By "two-digit industries" we
mean a coarser categorization, grouping the industries whose first two digits
are the same.
Contexts: IO; data
sieve estimators:
flexible basis functions to approximate a function being estimated. It may be
that orthogonal series, splines, and neural networks are examples. Donald
(1997) and Gallant and Nychka (1987) may have more information.
Source: Chen, Xiaohong, and Timothy G. Conley, "A semiparametric spatial model
for panel time series." Preliminary Northwestern University working
paper, January 1999.
Donald, Stephen G. 1997. "Inference concerning the number of factors in
a multivariate nonparametric relationship" Econometrica
65:103-132.
Gallant, A.R. and Nychka, D. 1987. "Semi-non-parametric maximum
likelihood estimation" Econometrica 55:363-390.
Contexts: econometrics
sigma-algebra:
A collection of sets that satisfy certain properties with respect to their
union. (Intuitively, the collection must include any result of
complementations, unions, and intersections of its elements. The effect is to
define properties of a collection of sets such that one can define probability
on them in a consistent way.) Formally:
Let S be a set and A be a collection of subsets of S.
A is a sigma-algebra of S if:
(i) the null set and S itself are members of A
(ii) the complement of any set in A is also in A
(iii) countable unions of sets in A are also in A.
It follows from these that a sigma-algebra is closed under countable
complementation, unions, and intersections.
Contexts: math; measure theory; real analysis
signaling game:
A game in which a player with private information (the informed
player) sends a signal of his private type to the uninformed player before
the uninformed player makes a choice. An example: a candidate worker might
suggest to the potential employer what wage is appropriate for himself in a
negotiation.
Contexts: game theory
significance:
A finding in economics may be said to be of economic significance (or
substantive significance) if it shows a theory to be useful or not
useful, or if it has implications for scientific interpretation or policy
practice (McCloskey and Ziliak, 1996). Statistical significance is a property
of the probability that a given finding was produced by a stated model but at
random; see significance level.
These meanings are different but sometimes overlap. McCloskey and Ziliak
(1996) have a substantial discussion of them. Ambiguity is common in
practice, but not hard to avoid. (Editorial comment follows.) When the
second meaning is intended, use the phrase "statistically
significant" and refer to a level of statistical significance or a
p-value. Avoid the aggressive word "insignificant" unless it
is clear whether the word is to be taken to mean substantively
insignificant or not statistically significant.
Source: McCloskey, Deirdre N., and Stephen T. Ziliak. "The standard
error of regressions," Journal of Economic Literature vol XXXIV
(March 1996), pp 97-114.
Contexts: phrases; statistics; econometrics; estimation
significance level:
The significance level of a test is the probability that the test statistic
will reject the null hypothesis when the [hypothesis] is true. Significance
is a property of the distribution of a test statistic, not of any particular
draw of the statistic.
Synonymous with statistical 'size' of a test.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
simulated annealing:
A method of finding optimal values numerically. Simulated annealing is a
search method, as opposed to a gradient-based algorithm. It chooses a new
point, and (for maximization) all uphill points are accepted while some
downhill points are accepted depending on a probabilistic criterion.
Unlike the simplex search method provided by Matlab, simulated annealing may
allow "bad" moves, thereby allowing escape from a local max. The
value of a move is evaluated according to a temperature criterion (which
essentially determines whether the algorithm is in a "hot" area of
the function).
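A minimal sketch of the idea, written as minimization (the cooling schedule, step size, and names are all illustrative assumptions, not Matlab's implementation):

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, temp0=1.0, seed=0):
    """Minimal simulated-annealing minimizer: always accept downhill
    moves; accept uphill ('bad') moves with probability exp(-delta/T),
    where the temperature T is gradually lowered."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(steps):
        T = temp0 * (1.0 - i / steps) + 1e-9  # linear cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A function with several local minima; pure downhill search started
# at x0 = 5 could stall in the wrong basin:
bumpy = lambda x: (x - 1.0) ** 2 + 2.0 * math.sin(5.0 * x)
x_opt, f_opt = simulated_annealing(bumpy, 5.0)
```

The occasional acceptance of uphill moves early on (when T is high) is what lets the search escape local minima before the falling temperature freezes it into the best basin found.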
Source: Mark Manuszak (as of 5/25/99, mdm080@nwu.edu of Northwestern Univ);
Matlab documentation
Contexts: optimization; estimation; numerical techniques
simultaneous equation system:
By "system" is meant that there are multiple, related, estimable equations.
By simultaneous is meant that two quantities are jointly determined at time t
by one another's values at time t-1 and possibly at t also.
Example, from Greene, (1993, p. 579), of market
equilibrium:
q_d = a_1 p + a_2 + e_d (demand equation)
q_s = b_1 p + e_s (supply equation)
q_d = q_s = q
Here the quantity supplied is q_s, the quantity demanded is q_d, price is p,
the e's are errors or residuals, and the a's and b's are parameters to be
estimated. We have data on p and q; the quantities supplied and demanded are
not directly observed.
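Given values for the parameters and errors, the equilibrium implied by Greene's example can be solved directly (an illustrative sketch; the parameter values below are made up):

```python
def market_equilibrium(a1, a2, b1, ed=0.0, es=0.0):
    """Solve the demand/supply system above for (p, q): setting
    qd = qs gives a1*p + a2 + ed = b1*p + es, so
    p = (a2 + ed - es) / (b1 - a1)."""
    p = (a2 + ed - es) / (b1 - a1)
    q = b1 * p + es
    return p, q

# Downward-sloping demand (a1 < 0), upward-sloping supply (b1 > 0):
p, q = market_equilibrium(a1=-1.0, a2=10.0, b1=1.0)
```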
Contexts: econometrics; estimation
single-crossing property:
Distributions with cdfs F and G satisfy the single-crossing property if there
is an x_0 such that:
F(x) >= G(x) for x <= x_0
and
G(x) >= F(x) for x >= x_0
Contexts: models; statistics
sink:
"In a second-order [linear difference equation] system, if both roots are
positive and less than one, then the system converges monotonically to the
steady state. If the roots are complex and lie inside the unit circle then
the system spirals into the steady state. If at least one root is negative,
but both roots are less than one in absolute value, then the system will flip
from one side of the steady state to the other as it converges. In all of
these cases the steady state is called a sink." Contrast
'source'.
Source: Farmer, p. 29
Contexts: macro; dynamical systems; models
SIPP:
The U.S.
Survey of Income and Program
Participation, which is conducted by the U.S. Census Bureau.
A tutorial is at:
http://www.bls.census.gov/sipp/tutorial/SIPP_Tutorial_Beta_version/LAUNCHtutorial.html
Source:
http://www.sipp.census.gov/sipp
With thanks to: Dan Levy
Contexts: data
size:
A synonym for significance level.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
skewness:
An attribute of a distribution. A distribution that is symmetric around its
mean has skewness zero, and is 'not skewed'. Skewness is calculated as
E[(x-mu)^3]/s^3, where mu is the mean and s is the standard
deviation.
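A direct sketch of the formula using sample (divide-by-n) moments:

```python
def skewness(xs):
    """E[(x - mu)^3] / s^3 computed from a sample, with mu the mean and
    s the standard deviation (population form, dividing by n)."""
    n = len(xs)
    mu = sum(xs) / n
    s = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return sum((x - mu) ** 3 for x in xs) / n / s ** 3
```

Symmetric data give zero; a long right tail gives a positive value.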
Source: Hogg and Craig, p 57
Contexts: statistics
skill:
In regular English usage means "proficiency". Sometimes used in
economics papers to represent experience and formal education. (Ed.: in
this editor's opinion that is a dangerously misleading use of the term; it
invites errors of thought and understanding.)
Contexts: labor
SLID:
Stands for Survey of Labour and Income Dynamics. A Canadian government
database going back to 1993 at least.
Web pages on this subject can be searched from:
http://www.statcan.ca/english/search/index.htm.
Contexts: data; labor
SLLN:
Stands for strong law of large numbers.
Contexts: econometrics
SMA:
Structural Moving Average model, which see
Contexts: econometrics
Smithian growth:
Paraphrasing directly from Mokyr, 1990: Economic growth brought about by
increases in trade.
Source: Mokyr, 1990, pp 4-6
Contexts: history; macro
smoothers:
Smoothers are estimators that produce smooth estimates of regression
functions. They are nonparametric estimators. The most common and
implementable types are kernel estimators, k-nearest-neighbor estimators, and
cubic spline smoothers.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
smoothing:
Smoothing of a data set {Xi, Yi} is the act of
approximating m() in a regression such as:
Yi = m(Xi) + ei
The result of a smoothing is a smooth functional estimate of m().
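One common smoother is a kernel smoother; a minimal sketch with Gaussian weights (an illustrative example, not from Hardle):

```python
import math

def kernel_smoother(xs, ys, bandwidth):
    """Return a smooth estimate m_hat of the regression function:
    m_hat(x) is a weighted average of the y_i, with Gaussian weights
    centered at x (a Nadaraya-Watson style kernel estimator)."""
    def m_hat(x):
        w = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return m_hat
```

The bandwidth controls the smoothness of the estimate: a large bandwidth averages over many points (smooth but possibly biased), a small one tracks the data closely (jerky).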
Source: Hardle, 1990
Contexts: econometrics; estimation
SMR:
Standardized mortality ratio
Contexts: demography; epidemiology
SMSA:
Stands for Standard Metropolitan Statistical Area, a U.S. term for the
standard boundaries of urban regions for purpose of measurement. Defined by
the U.S. Census Bureau. There are 250-300 of these.
Contexts: data
SNP:
abbreviation for 'seminonparametric', which means the same thing as
semiparametric.
Contexts: econometrics
social capital:
The relationships of a person which lead to economically productive
consequences. E.g., they may produce something analogous to investment
returns to that person, or socially productive consequences to a larger
society.
"'Social capital' refers to the benefits of strong social bonds. [Sociologist
James] Coleman defined the term to take in 'the norms, the social networks,
the relationships between adults and children that are of value for the
children's growing up.' The support of a strong community helps the child
accumulate social capital in myriad ways; in the [1990s U.S.] inner city,
where institutions have disintegrated, and mothers often keep children locked
inside out of fear for their safety, social capital hardly exists." -- Traub
(2000)
Source: Traub, James. The New York Times. January 16, 2000. Sunday,
Late Edition, final. Article in section 6, starting page 52, column 1,
Magazine Desk.
A World Bank publication suggests that James Coleman has a "classic 1987
article" called "Social Capital in the Creation of Human
Capital" which was fundamental to the use of the term social capital in
the social sciences.
With thanks to: Isaac McFarlin for finding this definition
Contexts: labor; sociology; education
social planner:
One solving a Pareto optimality problem. The problem faced by a social
planner will have as an answer an allocation, without prices.
Also, "the social planner is subject to the same information limitations
as the agents in the economy." -- Cooley and Hansen p 185
That is, the social planner does not see information that is hidden by the
rules of the game from some of the agents. If an agent happens not to know
something, but it is not hidden from him by the rules of the game, then the
social planner DOES see it.
Contexts: general equilibrium; models
social savings:
A measure of the contribution of a new technology, discussed in Crafts
(2002): "How much more did [a new technology] contribute than an alternative
investment might have yielded?" Crafts cites Fogel (1979).
social welfare function:
A mapping from allocations of goods or rights among people to the real
numbers.
Such a social welfare function (abbreviated SWF) might describe the
preferences of an individual over social states, or might describe outcomes of
a process that made allocations, whether or not individuals had preferences
over those outcomes.
Contexts: models; public economics
SOFFEX:
Swiss Options and Financial Futures Exchange
Contexts: organizations
Solas:
Software for imputing values to missing data, published by
Statistical Solutions.
Contexts: data; estimation
Solovian growth:
Paraphrasing from Mokyr (1990): Economic growth brought about by investment,
meaning increases in the capital stock.
Source: Mokyr, 1990, p. 4-6.
Contexts: history; macro
Solow growth model:
Paraphrasing pretty directly from Romer, 1996, p 7:
The Solow model is meant to describe the production function of an entire
economy, so all variables are aggregates. The date or time is denoted t.
Output or production is denoted Y(t). Capital is K(t). Labor time is denoted
L(t). Labor's effectiveness, or knowledge, is A(t). The production function
is denoted F() and is assumed to have constant returns to scale. At
each time t, the production function is:
Y = F(K, AL)
which can be written:
Y(t) = F(K(t), A(t)L(t))
AL is effective labor.
Note the variants in the way A enters the production function. This one is
called labor-augmenting or Harrod-neutral. Others are
capital-augmenting, e.g. Y=F(AK,L), or Hicks-neutral, e.g.
Y=AF(K,L).
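As a concrete illustration (mine, not Romer's), the Cobb-Douglas special case
Y = K^a (AL)^(1-a) with labor-augmenting technology can be sketched in Python;
the capital share a = 0.3 is an arbitrary illustrative value:

```python
# Cobb-Douglas special case of the Solow production function, with
# labor-augmenting (Harrod-neutral) technology: Y = K^a (A L)^(1-a).
# The capital share a = 0.3 is illustrative, not from the sources cited.

def solow_output(K, A, L, a=0.3):
    """Output for capital K, labor effectiveness A, labor L, capital share a."""
    return K**a * (A * L)**(1 - a)

# Constant returns to scale: doubling K and L (holding A fixed) doubles Y.
Y1 = solow_output(K=100.0, A=2.0, L=50.0)   # here K = AL = 100, so Y1 = 100
Y2 = solow_output(K=200.0, A=2.0, L=100.0)
assert abs(Y2 - 2 * Y1) < 1e-9
```

Doubling K and L while holding A fixed doubles Y, which is the
constant-returns-to-scale assumption stated above.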
--------------------
From _Mosaic of Economic Growth_:
DEFN of Solow-style growth models: They come from the seminal Solow (1956).
"In Solow-style models, there exists a unique and globally stable growth path
to which the level of labor productivity (and per capita output) will
converge, and along which the rate of advance is fixed (exogenously) by the
rate of technological progress." Many subsequent models of aggregate growth
(like Romer 1986) have abandoned the assumption that all forms of capital
accumulation run into diminishing marginal returns, and get different global
convergence implications. (p22)
Source: Romer, 1996, p 7;
Solow, 1956; that is:
Solow, Robert. "A contribution to the theory of economic growth."
Quarterly Journal of Economics. Feb. 1956
Contexts: macro
Solow residual:
A measure of the change in total factor productivity in a
Solow growth model. This is a way of doing growth accounting
empirically either for an industry or more commonly for a macroeconomy.
Formally, roughly following Hornstein and Krusell (1996):
Suppose that in year t an economy produces output quantity yt with
exactly two inputs: capital quantity kt and labor quantity
lt. Assume perfectly competitive markets and that production has
constant returns to scale. Let capital's share of income be fixed over
time and denoted a. Then the change in total factor productivity
between period t and period t+1, which is the Solow residual, is defined
by:
Solow residual = (log TFPt+1) - (log TFPt)
               = (log yt+1) - (log yt)
                 - a[(log kt+1) - (log kt)]
                 - (1-a)[(log lt+1) - (log lt)]
Analogous definitions exist for more complicated models (with other factors
besides capital and labor) or on an industry-by-industry basis, or with
capital's share varying by time or by industry.
The equation may look daunting but the derivations are not difficult and
students are sometimes asked to practice them until they are routine.
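The growth-accounting computation can be sketched in Python; the capital share
a = 0.3 and the growth rates below are made-up illustrative numbers:

```python
from math import exp, log

def solow_residual(y0, y1, k0, k1, l0, l1, a=0.3):
    """Change in log TFP between two periods; a is capital's share of income
    (the 0.3 default is illustrative, not from the sources cited)."""
    return ((log(y1) - log(y0))
            - a * (log(k1) - log(k0))
            - (1 - a) * (log(l1) - log(l0)))

# Output grows 3% (in logs), capital 2%, labor 1%; with a = 0.3 the
# residual is 0.03 - 0.3*0.02 - 0.7*0.01 = 0.017, i.e. 1.7% TFP growth.
g = solow_residual(1.0, exp(0.03), 1.0, exp(0.02), 1.0, exp(0.01))
assert abs(g - 0.017) < 1e-9
```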
Hulten (2000) says about the residual that:
-- it measures shifts in the implicit aggregate production function.
-- it is a nonparametric index number which measures that shift in a
computation that uses prices to measure marginal products.
-- the factors causing the measured shift include technical innovation,
organizational and institutional changes, fluctuations in demand, changes in
factor shares (where factors are capital, labor, and sometimes measures of
energy use, materials use, and purchased services use), and measurement
errors.
From an informal discussion by this editor, it looks like the residual
contains these empirical factors, among others: public goods like highways;
externalities from networks like the Internet; some externalities and losses
of capital services from disasters like September 11; theft; shirking; and
technical / technological change.
Source: Hornstein, Andreas, and Per Krusell. 1996. "Can technology
improvements cause productivity slowdowns?" NBER Macroeconomics
Annual 1996. MIT Press. pp 214-215.
Hulten, 2000
Contexts: macro
solution concept:
A phrase relevant to game theory. A game has a 'solution' which may represent
a model's prediction. The modeler often must choose one of several
substantively different solution methods, or solution concepts, which can lead
to different game outcomes. A solution concept may be preferred to others in
contexts where it makes a unique prediction. Game solution concepts
include:
iterative elimination of strictly dominated strategies
Nash equilibrium
subgame perfect equilibrium
perfect Bayesian equilibrium
Contexts: phrases; game theory; modelling
source:
"In a second-order [linear difference equation] system, ... if both roots
are positive and greater than one, then the system diverges monotonically
to plus or minus infinity. If the roots are complex and [lie] outside the
unit circle then the system spirals out away from the steady state. If at
least one root is negative, but both roots are greater than one in absolute
value, then the system will flip from one side of the steady state to the
other as it diverges to infinity. In each of these cases the steady state is
called a source." Contrast 'sink'.
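A rough Python sketch of this classification (my own illustration, not from
Farmer) for the system x_{t+2} = a*x_{t+1} + b*x_t: the roots of the
characteristic polynomial r^2 - a*r - b = 0 determine whether the steady state
is a source or a sink.

```python
import cmath

def classify_steady_state(a, b):
    """Steady state of x_{t+2} = a*x_{t+1} + b*x_t: 'source' if both roots of
    r^2 - a*r - b = 0 lie outside the unit circle, 'sink' if both lie inside,
    else 'saddle'. Roots exactly on the unit circle are not handled here."""
    disc = cmath.sqrt(a * a + 4 * b)       # works for complex roots too
    roots = [(a + disc) / 2, (a - disc) / 2]
    outside = sum(abs(r) > 1 for r in roots)
    if outside == 2:
        return "source"   # diverges (spirals out if the roots are complex)
    if outside == 0:
        return "sink"
    return "saddle"

# Roots 3 and 2 (both outside the unit circle): a source.
assert classify_steady_state(5, -6) == "source"
# Roots 0.5 and 0.2 (both inside): a sink.
assert classify_steady_state(0.7, -0.1) == "sink"
```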
Source: Farmer, p. 29
Contexts: macro; dynamical systems; models
sparse:
A matrix is sparse if many of its values are zero. A division of sample data
into discrete bins (that is into a multinomial table) is sparse if many of the
bins have no data in them.
Contexts: econometrics
spatial autocorrelation:
Usually autocorrelation means correlation among the data from different time
periods. Spatial autocorrelation means correlation among the data from
different locations. Spatial autocorrelation can act along many dimensions,
unlike autocorrelation between time periods, which acts along only one.
Nick J. Cox wrote, in a broadcast to a listserv
discussing the software Stata, this discussion of spatial
autocorrelation. It is quoted here without any explicit permission
whatsoever. (Parts clipped out are marked by 'snip'.) If 'Moran measure' and
'Geary measure' are standard terms used in economics I'll add them to the
glossary.
Date: Thu, 15 Apr 1999 12:29:10 GMT
From: "Nick Cox"
Subject: statalist: Spatial autocorrelation
[snip...]
First, the kind of spatial data considered here is data in two-dimensional
space, such as rainfall at a set of stations or disease incidence in a set of
areas, not three-dimensional or point pattern data (there is a tree or a
disease case at coordinates x, y). Those of you who know time series might
expect from the name `spatial autocorrelation' estimation of a function,
autocorrelation as a function of distance and perhaps direction. What is
given here are rather single-value measures that provide tests of
autocorrelation for problems where the possibility of local influences is of
most interest, for example, disease spreading by contagion. The set-up is that
the value for each location (point or area) is compared with values for its
`neighbours', defined in some way.
The names Moran and Geary are attached to these measures to honour the pioneer
work of two very fine statisticians around 1950, but the modern theory is due
to the statistical geographer Andrew Cliff and the statistician Keith Ord.
For a vector of deviations from the mean z, a vector of ones 1, and a matrix
describing the neighbourliness of each pair of locations W, the Moran measure
for example is
(z' W z) / (z' z)
I = -----------------
(1' W 1) / (1' 1)
where ' indicates transpose. This measure is for raw data, not regression
residuals.
[snip; and the remainder discusses a particular implementation of a spatial
autocorrelation measuring function in Stata.]
For n values of a spatial variable x defined for various locations,
which might be points or areas, calculate the deviations
z = x - xbar
and for pairs of locations i and j, define a matrix
W = (w_ij)
describing which locations are neighbours in some precise sense.
For example, w_ij might be assigned 1 if i and j are contiguous areas
and 0 otherwise; or w_ij might be a function of the distance between
i and j and/or the length of boundary shared by i and j.
The Moran measure of autocorrelation is
n ( SUM_i SUM_j z_i w_ij z_j ) / ( 2 (SUM_i SUM_j w_ij) SUM_i z_i^2 )
and the Geary measure of autocorrelation is
(n-1) ( SUM_i SUM_j w_ij (z_i - z_j)^2 ) / ( 4 (SUM_i SUM_j w_ij) SUM_i z_i^2 )
where each sum runs from 1 to n. These measures may be used to test the null
hypothesis of no spatial
autocorrelation, using both a sampling distribution assuming that x
is normally distributed and a sampling distribution assuming randomisation,
that is, we treat the data as one of n! assignments of the n values to
the n locations.
In a toy example:
area 1 neighbours 2, 3 and 4 and has value 3
area 2 neighbours 1 and 4 and has value 2
area 3 neighbours 1 and 4 and has value 2
area 4 neighbours 1, 2 and 3 and has value 1
This would be matched by the data
^_n^ (obs no) ^value^ (numeric variable) ^nabors^ (string variable)
- ----------- ------------------------ ------------------------
1 3 "2 3 4"
2 2 "1 4"
3 2 "1 4"
4 1 "1 2 3"
That is, ^nabors^ contains the observation numbers of the neighbours of
the location in the current observation, separated by spaces. Therefore,
the data must be in precisely this sort order when ^spautoc^ is called.
Note various assumptions made here:
1. The neighbourhood information can be fitted into at most a ^str80^
variable.
2. If i neighbours j, then j also neighbours i and both facts are
specified.
By default this data structure implies that those locations listed
have weights in W that are 1, while all other pairs of locations are not
neighbours and have weights in W that are 0.
If the weights in W are not binary (1 or 0), use the ^weights^ option.
The variable specified must be another string variable.
^_n^ (obs no) ^nabors^ (string variable) ^weight^ (string variable)
- ----------- ------------------------ ------------------------
1 "2 3 4" ".1234 .5678 .9012"
etc.
that is, w_12 = 0.1234, and so forth. w_ij need not equal w_ji.
[snip]
References
- ----------
Cliff, A.D. and Ord, J.K. 1973. Spatial autocorrelation. London: Pion.
Cliff, A.D. and Ord, J.K. 1981. Spatial processes: models and
applications. London: Pion.
Author
- ------
Nicholas J. Cox, University of Durham, U.K.
n.j.cox@durham.ac.uk
- ------------------------- end spautoc.hlp
Nick
n.j.cox@durham.ac.uk
Source: statalist, Nick J. Cox (N.J.Cox@durham.ac.uk, as of 4/15/99)
With thanks to: Nicholas J. Cox, University of Durham, U.K.
(n.j.cox@durham.ac.uk)
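As a sketch (in Python rather than Stata, and mine rather than Cox's), the
ratio form of the Moran measure quoted above can be computed directly; the
data are the toy example from the quoted help file, with w_ij = 1 for each
listed neighbour pair and 0 otherwise:

```python
# Moran measure in the ratio form I = [(z'Wz)/(z'z)] / [(1'W1)/(1'1)],
# for deviations-from-mean z and a binary neighbour matrix W.
def moran(x, W):
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]                       # deviations from the mean
    zWz = sum(z[i] * W[i][j] * z[j] for i in range(n) for j in range(n))
    zz = sum(v * v for v in z)
    W1 = sum(W[i][j] for i in range(n) for j in range(n))
    return (zWz / zz) / (W1 / n)

# Toy example: 4 areas with values 3, 2, 2, 1 and the neighbour pairs listed.
x = [3, 2, 2, 1]
W = [[0, 1, 1, 1],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [1, 1, 1, 0]]
I_toy = moran(x, W)   # negative here: high and low values are neighbours
```

For this toy example the measure works out to -0.4, reflecting the negative
association between the values of areas 1 and 4, which are neighbours.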
Contexts: statistics; econometrics; estimation
SPE:
Abbreviation for: Subgame perfect equilibrium
Contexts: game theory; models
specie:
A commodity metal backing money; historically specie was gold or
silver.
Contexts: money; history
spectral decomposition:
The factorization of a positive definite matrix A into A=CLC' where L is a
diagonal matrix of eigenvalues, and the C matrix has the eigenvectors. That
decomposition can be written as a sum of outer products:
A = (sum from i=1 to N of) L_i c_i c_i'
where c_i is the ith column of C and L_i is the ith eigenvalue.
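A small numeric check of the outer-product form (my illustration, not from
Greene): rebuilding a 2x2 positive definite matrix from known eigenvalues and
unit eigenvectors.

```python
# Rebuild A = sum_i L_i c_i c_i' from eigenvalues L_i and unit eigenvectors c_i.
def outer_sum(eigvals, eigvecs):
    n = len(eigvals)
    A = [[0.0] * n for _ in range(n)]
    for L, c in zip(eigvals, eigvecs):
        for i in range(n):
            for j in range(n):
                A[i][j] += L * c[i] * c[j]      # L_i times the outer product
    return A

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with unit eigenvectors
# (1,1)/sqrt(2) and (1,-1)/sqrt(2); the sum of outer products recovers A.
s = 2 ** -0.5
A = outer_sum([3.0, 1.0], [[s, s], [s, -s]])
```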
Source: Greene, 1993, p 34
Contexts: econometrics
spectrum:
Summarizes the periodicity properties of a time series or time series sample
x_t. Often represented in a graph with frequency (often denoted little
omega) on the horizontal axis, and S_x(omega), which is defined below, on the
vertical axis. S_x is zero for frequencies that are not found in the time
series or sample, and is increasingly positive for frequencies that are more
important in the data.
S_x(omega) = (2 pi)^(-1) (sum for j from -infinity to +infinity of) g_j e^(-i j omega)
where g_j is the jth autocovariance, omega is in the range [-pi, pi], and i is
the square root of -1.
Example 1: If xt is white noise, the spectrum is flat. All cycles
are equally important. If they were not, the series would be
forecastable.
Example 2: If xt is an AR(1) process, with coefficient in (0, 1),
the spectrum has a peak at frequency zero and declines monotonically with
distance from zero. This process does not have an observable cycle.
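A Python sketch of the definition (mine, not from the source), truncating the
infinite sum and using the symmetry of autocovariances g_j = g_-j, so the sum
collapses to cosines. The AR(1) autocovariances g_j = phi^j / (1 - phi^2) for
unit-variance shocks are standard, and the coefficient phi = 0.5 is
illustrative:

```python
from math import cos, pi

def spectrum(gammas, omega):
    """Truncated S_x(omega) = (2 pi)^-1 sum_j g_j e^(-i j omega), where
    gammas = [g_0, g_1, ...]; using g_j = g_{-j} this equals
    (g_0 + 2 * sum_{j>=1} g_j cos(j omega)) / (2 pi)."""
    g0, rest = gammas[0], gammas[1:]
    return (g0 + 2 * sum(g * cos((j + 1) * omega)
                         for j, g in enumerate(rest))) / (2 * pi)

# Example 2 in the text: AR(1) with coefficient phi in (0, 1) has a spectrum
# peaking at frequency zero and declining monotonically away from it.
phi = 0.5
g = [phi**j / (1 - phi**2) for j in range(200)]   # truncation is harmless here
assert spectrum(g, 0.0) > spectrum(g, pi / 2) > spectrum(g, pi)

# Example 1: white noise (g_0 = 1, other g_j = 0) has a flat spectrum.
assert abs(spectrum([1.0], 0.3) - spectrum([1.0], 2.0)) < 1e-12
```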
Source: Wouter J. den Haan, April 1996, "The Comovements between real activity
and prices at different business cycle frequencies", as presented at
Northwestern, April 21, 1997. Possibly a UCSD or NBER working paper.
Supported by NSF grant SBR-9514813. Another good reference is Priestley,
1994.
Contexts: time series; econometrics
speculative demand:
The speculative demand for money is inversely related to the interest
rate.
Source: Branson
Contexts: money
spline function:
The kind of estimate produced by a spline regression in which the
slope varies for different ranges of the regressors. The spline function is
continuous but usually not differentiable.
Source: Greene, 1993, p 237
Contexts: econometrics
spline regression:
A regression which estimates different linear slopes for different ranges of
the independent variables. The endpoints of the ranges are called
knots.
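A minimal sketch of a one-knot linear spline fit by ordinary least squares (my
own illustration; the knot location and the data are made up):

```python
# Linear spline with one knot: y = b0 + b1*x + b2*max(x - knot, 0).
# The fitted function is continuous at the knot but its slope changes there.
def design_row(x, knot):
    return [1.0, x, max(x - knot, 0.0)]

def fit_ls(X, y):
    """Ordinary least squares via the normal equations, solved by naive
    Gaussian elimination (no pivoting; fine for this small toy problem)."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]                       # A = X'X
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]  # X'y
    for i in range(k):                            # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k                              # back substitution
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Data generated with slope 1 below the knot at x = 5 and slope 3 above it.
xs = list(range(10))
ys = [1 + x + 2 * max(x - 5, 0) for x in xs]
beta = fit_ls([design_row(x, 5.0) for x in xs], ys)
# beta recovers approximately [1, 1, 2]: intercept, base slope, slope change.
```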
Source: Greene, 1993, p 237
Contexts: econometrics
spline smoothing:
A particular nonparametric estimator of a function. Given a data set
{Xi, Yi} it estimates values of Y for X's other than
those in the sample. The process is to construct a function that balances the
twin needs of (1) proximity to the actual sample points, (2) smoothness. So a
'roughness penalty' is defined. See Hardle's equation 3.4.1 near p. 56 for
the 'cubic spline' which seems to be the most common.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
SPO:
stands for Strongly Pareto Optimal, which see.
Contexts: general equilibrium; models
SPSS:
Stands for 'Statistical Product and Service Solutions', a corporation at www.spss.com
Contexts: data
SSEP:
Social Science Electronic Publishing, Inc.
Contexts: data
SSRN:
Social Science Research Network
Contexts: data
stabilization policy:
"Macroeconomic stabilization policy consists of all the actions taken by
governments to (1) keep inflation low and stable; and (2) keep the short-run
(business cycle) fluctuations in output and employment small." Includes
monetary and fiscal policies, international and exchange rate policy, and
international coordination. (p129 in Taylor (1996)).
Source: Taylor, John B. 1996. "Stabilization Policy and Long-Term Economic
Growth." In The Mosaic of Economic Growth, edited by Ralph Landau,
Timothy Taylor, and Gavin Wright. Stanford University Press.
Contexts: macro
stable distributions:
See Campbell, Lo, and MacKinlay pp 17-18, which refer to the French
probability theorist Levy. The normal and Cauchy distributions are special
cases. Except for the normal distribution, stable distributions have infinite
variance.
There has been some study of whether continuously compounded asset returns
could fit a stable distribution, given that their kurtosis is too high for a
normal distribution.
Source: Campbell, Lo, and MacKinlay 1996, pp 17-18
Contexts: finance; statistics
stable steady state:
in a dynamical system with deterministic generator function F() such that
Nt+1=F(Nt), a steady state is stable if, loosely, all
nearby trajectories go to it.
Source: J. Montgomery, social networks paper
Contexts: macro; dynamical systems; models
staggered contracting:
A model can be constructed in which some agents, usually firms, cannot change
their prices at will. They make a contract at some price for a specified
duration, then when that time is up can change the price. If the terms of the
contracts overlap, that is they do not all end at the same time, we say the
contracts are staggered.
An important paper on this topic was Taylor (1980) which showed that staggered
contracts can have an effect of persistence -- that is, that one-time shocks
can have effects that are still evolving for several periods. This is a
version of a new Keynesian, sticky-price model.
Contexts: models
standard normal:
Refers to a normal distribution with mean of zero and variance of
one.
Contexts: statistics
Stata:
Statistical analysis software.
Contexts: data; estimation
state price:
the price at time zero of a state-contingent claim that pays one unit of
consumption in a certain future state.
Source: Huang and Litzenberger, 1988, pp 123-4
Contexts: finance; models
state price vector:
the vector of state prices for all states.
Contexts: macro; finance; models
state-space approach to linearization:
Approximating decision rules by linearizing the Euler equations of the
maximization problem around the stationary steady state and finding a unique
solution to the resulting system of dynamic equations.
Source: explained in King, Plosser, Rebelo (87); ref Merz 94, p 21
footnote
Contexts: macro; models
stationarity:
The attribute of being covariance stationary, with reference to a
stochastic process. Note that strict stationarity is neither a stronger nor
a weaker condition than covariance stationarity, but a distinct one (see
strictly stationary).
stationary:
When used to describe a stochastic process, this is usually a synonym
for covariance stationary.
Source: Enders, Walter. 1995. Applied Econometric Time Series. John
Wiley & Sons, Inc.
Hamilton, J. Time Series Analysis.
Contexts: stochastic processes
statistic:
a function of one or more random variables that does not depend upon any
unknown parameter.
(The distribution of the statistic may depend on one or more unknown
parameters, but the statistic can be calculated without knowing them just from
the realizations of the random variables, e.g. the data in a sample.)
In general a statistic could be a vector of values, but often it is a
scalar.
Source: adapted from Hogg & Craig.
Contexts: statistics; econometrics; estimation
Statistica:
Statistical software. See
http://www.statsoft.com.
Contexts: data; estimation
statistical discrimination:
A theory of why minority groups are paid less when hired. The theory is
roughly that managers, who are of one type (say, white), are more culturally
attuned to the applicants of their own type than to applicants of another type
(say, black), and therefore they have a better measure of the likely
productivity of the applicants of their own type. (There is uncertainty in
the manager's predictions about blacks and probably of whites too, but more
uncertainty for blacks.) Because the managers are risk averse they bid more
for a white applicant of a given apparent productivity than for a black one,
since their measure of the white's productivity is better. This theory
predicts that white managers would offer black applicants lower starting wages
than whites of the same apparent ability, even if the manager is not
prejudiced against the blacks.
Contexts: labor
statistics:
Relevant terms: acceptance region,
adapted,
almost surely,
alternative hypothesis,
ANOVA,
AR,
AR(1),
ARCH,
autoregressive process,
b,
bandwidth,
Bayesian analysis,
Bonferroni criterion,
bootstrapping,
Box-Cox transformation,
Cauchy distribution,
cdf,
characteristic function,
chi-square distribution,
coefficient of variation,
complete,
consistent,
correlation,
Cramer-Rao lower bound,
cross-validation,
delta method,
density function,
diffuse prior,
distribution function,
efficiency,
efficiency bound,
efficient,
EGARCH,
essentially stationary,
estimator,
excess kurtosis,
expected value,
exponential distribution,
exponential family,
F distribution,
fat-tailed,
frequency function,
gamma distribution,
gamma function,
GARCH,
Gaussian,
Gaussian white noise process,
generalized Wiener process,
GEV,
Heaviside function,
Hermite polynomials,
heterogeneous process,
Huber standard errors,
Huber-White standard errors,
iid,
information number,
Ito process,
Jensen's inequality,
Kolmogorov's Second Law of Large Numbers,
kurtosis,
LAN,
leptokurtic,
Lindeberg-Levy Central Limit Theorem,
log-concave,
log-convex,
logistic distribution,
lognormal distribution,
MA,
main effect,
MANOVA,
Markov process,
Markov's inequality,
martingale,
martingale difference sequence,
mesokurtic,
MGF,
moment-generating function,
Monte Carlo simulations,
multivariate,
MVN,
noncentral chi-squared distribution,
normal distribution,
Op(1),
order statistic,
Pareto distribution,
pdf,
platykurtic,
Poisson distribution,
Poisson process,
power,
power distribution,
probability function,
process,
random,
random process,
random variable,
random walk,
Rao-Cramer inequality,
Riemann-Stieltjes integral,
second moment,
semiparametric,
significance,
single-crossing property,
skewness,
spatial autocorrelation,
stable distributions,
standard normal,
statistic,
stochastic,
stochastic process,
strict stationarity,
strictly stationary,
strong law of large numbers,
strongly consistent,
Student t,
sufficient statistic,
support,
t distribution,
t statistic,
tangent cone,
Tukey boxplot,
type I error,
type I extreme value distribution,
type II error,
uniform distribution,
uniform weak law of large numbers,
unit root,
unit root test,
UWLLN,
variance,
wavelet,
weak law of large numbers,
weak stationarity,
weakly consistent,
weakly dependent,
Weibull distribution,
white noise process,
White standard errors,
WLLN,
Wold's theorem.
Contexts: fields
stochastic:
synonym for random.
Contexts: statistics; econometrics; time series
stochastic difference equation:
A linear difference equation with random forcing variables on the right hand
side. Here is a stochastic difference equation in k:
kt+1 + kt = wt
where the k's and w's are scalars, and time t goes from 0 to infinity. The
w's are exogenous forcing variables. Or:
A kt+1 + B kt + C kt-1 = D wt + et
where the k's are vectors, the w's and e's are exogenous vectors, and A, B, C,
and D are constant matrices.
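The first, scalar example can be simulated directly; a Python sketch with
made-up iid normal shocks as the forcing variables:

```python
import random

# Simulate the scalar equation k_{t+1} + k_t = w_t, i.e. k_{t+1} = -k_t + w_t,
# with iid standard normal forcing variables w_t (an illustrative choice).
def simulate(k0, T, seed=0):
    rng = random.Random(seed)           # fixed seed for reproducibility
    k = [k0]
    for t in range(T):
        w = rng.gauss(0.0, 1.0)         # exogenous shock at date t
        k.append(-k[-1] + w)
    return k

path = simulate(k0=1.0, T=100)          # k_0 through k_100
```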
Source: Sargent, 1979
Contexts: macro; models
stochastic dominance:
An abbreviation for first-order stochastic dominance. A possible
comparison relationship between two stochastic distributions. Let the
possible returns from assets A and B be described by statistical distributions
A and B. Payoff distribution A first-order stochastically dominates payoff
distribution B if, for every payoff level, the probability of receiving at
least that much is at least as large under A as under B.
Much more is in Huang and Litzenberger (1988), chapter 2.
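For discrete distributions on a common grid of payoffs, the definition reduces
to comparing cumulative distribution functions pointwise; a Python sketch with
made-up probabilities:

```python
# First-order stochastic dominance for discrete distributions over the same
# ascending grid of payoffs: A dominates B iff cdf_A(x) <= cdf_B(x) everywhere
# (a lower cdf means more probability on high payoffs).
def first_order_dominates(pA, pB):
    cA = cB = 0.0
    for a, b in zip(pA, pB):            # accumulate the two cdfs in step
        cA += a
        cB += b
        if cA > cB + 1e-12:             # small tolerance for float error
            return False
    return True

# A puts more probability on higher payoffs than B, so A dominates B.
pA = [0.1, 0.3, 0.6]    # probabilities over payoffs, say, 0, 1, 2
pB = [0.3, 0.4, 0.3]
assert first_order_dominates(pA, pB)
assert not first_order_dominates(pB, pA)
```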
Source: Huang and Litzenberger, (1988), p. 40
stochastic kernel:
Another name for stochastic transition function.
stochastic process:
A stochastic process is an ordered collection of random variables. The term
is synonymous with random process. Discrete ones are indexed, often by
the subscript t for time, e.g., yt, yt+1, although such
a process could be spatial instead of temporal. Continuous ones can be
described as continuous functions of time, e.g. y(t).
A stochastic process is specified by properties of the joint distribution for
those random variables. Examples:
-- the random variables are independently and identically distributed
(iid).
-- the process is a Markov process
-- the process is a martingale
-- the process is white noise
-- the process is autoregressive (e.g. AR(1))
-- the process has a moving average (e.g. see MA(1))
Contexts: models; statistics
stochastic transition function:
A generalization of the Markov transition matrix which describes the
probability of a transition by a system from one set of states to another set
of states. A matrix may not describe the situation if the subsets are
complicated or infinite in number.
Stolper-Samuelson theorem:
In some models of international trade, trade lowers the real wage of the
scarce factor of production, and protection from trade raises it.
That is a Stolper-Samuelson effect, by analogy to their (1941) theorem in a
Heckscher-Ohlin model context.
A notable case is when trade between a modernized economy and a developing one
would lower the wages of the unskilled in the modernized economy because the
developing country has so many of the unskilled.
Source: MIT Dictionary of Modern Economics, edited by David W
Pearce;
Gordon H. Hanson and Matthew J. Slaughter, "The Rybczynski theorem,
factor-price equalization, and immigration: evidence from U.S. states,"
NBER working paper 7074, April 1999. On Web at http://www.nber.org/papers/w7074
Stolper, Wolfgang, and Paul A. Samuelson. 1941. "Protection and real
wages." Review of Economic Studies 9(1): 58-73.
Contexts: trade; macro
stopping rule:
A stopping rule, in the context of search theory, is a mapping from histories
of draws to one of two decisions: stop at this draw, or continue
drawing.
Source: Wolinsky's D14 notes, Jan 1997
Contexts: information; search
storable:
A good is storable to the degree that it does not degrade or lose its value
over time. In models of money, storable goods dominate less storable goods as
media of exchange.
Contexts: money
straddle:
An options trading strategy of buying a call option and a put
option on the same stock with the same strike price and expiration date.
Such a strategy would result in a profitable position if the stock price is
far enough from the strike price.
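The payoff at expiration can be sketched as follows (my illustration, with
made-up numbers, not from Hull):

```python
# Payoff at expiration of a straddle: one call plus one put, same strike.
def straddle_payoff(stock_price, strike):
    call = max(stock_price - strike, 0.0)
    put = max(strike - stock_price, 0.0)
    return call + put                   # equals |stock_price - strike|

# The position pays more the further the price ends from the strike; it is
# profitable once |S - K| exceeds the combined premiums paid for the options.
assert straddle_payoff(120.0, 100.0) == 20.0
assert straddle_payoff(85.0, 100.0) == 15.0
assert straddle_payoff(100.0, 100.0) == 0.0
```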
Source: Hull, 1997, p 187
Contexts: finance
strategic form:
Synonym for normal form display of a game.
Source: Varian, 1992
Contexts: game theory
strategy-proof:
A decision rule (a mapping from expressed preferences by each of a
group of agents to a common decision) "is strategy-proof if in its
associated revelation game, it is a dominant strategy for each agent to
reveal its true preferences."
Source: Miyagawa, 1998, p 2
Contexts: game theory
strict stationarity:
Describes a stochastic process whose joint distribution of observations is not
a function of time. Contrast weak stationarity.
Source: Hoel, Port, and Stone, 1972, p. 122.
Contexts: statistics; econometrics; time series
strict version of Jensen's inequality:
Quoting directly from Newey-McFadden: "[I]f a(y) is a strictly concave
function [e.g. a(y)=ln(y)] and Y is a nonconstant random variable, then
a(E[Y]) > E[a(Y)]."
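A quick numeric check with a(y) = ln(y) and a made-up nonconstant Y:

```python
from math import log

# Strict Jensen's inequality: for strictly concave a(y) = ln(y) and a
# nonconstant random variable Y, a(E[Y]) > E[a(Y)].
values = [1.0, 2.0, 4.0]                # equally likely outcomes of Y
EY = sum(values) / len(values)          # E[Y] = 7/3
lhs = log(EY)                           # a(E[Y]) = ln(7/3)
rhs = sum(log(v) for v in values) / len(values)   # E[a(Y)] = ln(8)/3
assert lhs > rhs                        # strict, since Y is nonconstant
```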
Source: Newey-McFadden, Ch 36, Handbook of Econometrics, p. 2124
Contexts: models
strictly stationary:
A random process {xt} is strictly stationary if the joint
distribution of elements is not a function of the index t. In one sense this
is a stronger condition than covariance stationarity because it
requires also that the third and higher moments of the distributions be
stationary. But a process can be strictly stationary without being covariance
stationary if it does not have a finite variance.
Contexts: time series; econometrics; statistics
strip financing:
Corporate financing by selling "stapled" packages of securities
together that cannot be sold separately. E.g., a firm might sell bonds
only in a package that includes a standard proportion of senior subordinated
debt, convertible debt, preferred, and common stock. A benefit is reduced
conflict. In principle bondholders and stockholders have different interests
and that can impose costs on the firm. After a strip financing, however,
those groups are each made up of all the same people, so their interests
coincide.
Source: Jensen (86)
Contexts: finance
strips:
securities made up of standardized proportions of other securities from the
same firm. See strip financing.
U.S. Treasury bonds can be split into principal and interest components, and
the standard name for the resulting securities is STRIPS (Separate Trading of
Registered Interest and Principal of Securities). See coupon strip and
principal strip.
Source: Jensen (86)
Contexts: finance
strong form:
Can refer to the strong form of the efficient markets hypothesis, which is
that any public or private information known to anyone about a security is
fully reflected in its current price.
Fama (1991) renames tests of the strong form of the hypothesis to be 'tests
for private information.' Roughly -- If individuals with private information
can make trading gains with it, the strong form hypothesis does not
hold.
Source: Fama, 1970.
Dow and Gorton (1996) cite this paper. Possibly it defined the term for the
first time:
Roberts, Harry V. 1967. "Statistical versus clinical prediction of the
stock market," working paper, University of Chicago.
Contexts: finance
strong incentive:
An incentive that encourages maximization of an objective.
For example, payment per unit of output produced encourages maximum
production. Useful in design of a contract if the buyer knows exactly what is
desired. Contrast weak incentive.
Source: Weisbrod's class 5/23/97
Contexts: public economics
strong law of large numbers:
If {Zt} is a sequence of n iid random variables drawn from a
distribution with mean MU, then with probability one, the limit of sample
averages of the Z's goes to MU as sample size n goes to infinity.
Strong laws of large numbers are typically proved by combining a Chebyshev- or
Markov-type moment inequality with the Borel-Cantelli lemma. (The proof is
rarely shown; in most contexts in economics one can simply assume laws of
large numbers.)
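A simulation (not a proof) of the statement, with made-up uniform draws on
[0, 1], for which the population mean MU = 0.5:

```python
import random

# Sample averages of iid draws settle near the population mean as n grows.
rng = random.Random(42)                 # fixed seed for reproducibility
draws = [rng.random() for _ in range(100_000)]   # uniform on [0, 1], MU = 0.5
avg_small = sum(draws[:100]) / 100
avg_large = sum(draws) / len(draws)
assert abs(avg_large - 0.5) < 0.01      # the large-sample average is close
```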
Contexts: statistics; econometrics; probability
strongly consistent:
An estimator for a parameter is strongly consistent if the estimator goes to
the true value almost surely as the sample size n goes to infinity. This is a
stronger condition than weak consistency; that is, all strongly consistent
estimators are weakly consistent but the reverse is not true.
Contexts: econometrics; statistics
strongly dependent:
A time series process {xt} is strongly dependent if it is not
weakly dependent; that is, if it is strongly autocorrelated, either
positively or negatively.
Example 1: A random walk, whose autoregressive root is one, is strongly
dependent.
Example 2: An iid process is not strongly dependent.
Source: Discussed in Wooldridge, 1995, p 2646 which
cites Robinson 1991b in J of econometrics.
Contexts: time series; econometrics
strongly ergodic:
A stochastic process may be strongly ergodic even if it is nonstationary. A
strongly ergodic process is also weakly ergodic.
Contexts: time series; econometrics
strongly Pareto Optimal:
A strongly Pareto optimal allocation is one such that no other allocation
would be both (a) as good for everyone and (b) strictly preferred by
some.
Contexts: general equilibrium; models
strongly stationary:
Synonym for strictly stationary, regarding a stochastic
process.
Source: Hoel, Port, and Stone, 1972
Contexts: time series
structural break:
A structural change detected in a time series sample.
Contexts: time series; econometrics
structural change:
A change in the parameters of a structure generating a time series.
There exist tests for whether the parameters changed. One is the Chow
test.
Examples: (planned)
Contexts: time series; econometrics
structural moving average model:
The model is a multivariate, discrete time, dynamic econometric model. Let
yt be an ny x 1 vector of observable economic variables,
C(L) be an ny x ne matrix of lag polynomials, and
et be a vector of exogenous unobservable shocks, e.g. to labor
supply, the quantity of money, and labor productivity. Then:
yt=C(L)et
is a structural moving average model.
Source: M.W. Watson, Ch 57, Handbook of Econometrics, p 2899.
Contexts: econometrics
structural parameters:
Underlying parameters in a model or class of models.
If a theoretical model explains two effects of variable x on variable
y, one of which is positive and one negative, they are structurally separate.
In another model, in which only the net effect of x on y is relevant, one
structural parameter for the effect may be sufficient.
So a parameter is structural if a theoretical model has a distinct structure
for its effect. The definition is not absolute, but relative to a model or
class of models which are sometimes left implicit.
Contexts: estimation; econometrics
structural unemployment:
Unemployment that arises from an absence of demand for the workers that are
available. Contrast frictional unemployment.
Source: Baumol & Blinder
Contexts: labor; macro
structure:
A model with its parameters fixed. One can discuss properties of a model with
various parameters, but 'structural' properties are those that are fixed
unless parameters change.
Source: Davidson and MacKinnon, 1993, I think, but can't find
the exact page.
Contexts: econometrics
Student t:
Synonym for the t distribution. The name came about because the
original researcher who described the t distribution wrote under the pseudonym
'Student'.
Contexts: statistics
stylized facts:
Observations that have been made in so many contexts that they are widely
understood to be empirical truths, to which theories must fit. Used
especially in macroeconomic theory. Considered unhelpful in economic history
where context is central.
Relevant terms: Engel's law.
Contexts: fields; phrases
subdifferential:
A class of slopes. By example -- consider the top half of a stop sign as a
function graphed on the xy-plane. It has well-defined derivatives except at
the corners; at those points of differentiability the subdifferential consists
of only one slope, the derivative. At the corners there are many 'tangents'
which define lines that are everywhere above the stop sign except at the
corner. The slopes of those lines are members of the subdifferential at those
points.
In general equilibrium usage, the subdifferential can be a class of prices.
It's the set of prices such that expanding the total endowment constraint
would not cause buying and selling, because the agents have optimized
perfectly with respect to the prices. So if a set of prices is possible for a
Walrasian equilibrium, it is in the subdifferential of that allocation.
Contexts: general equilibrium; models
subgame perfect equilibrium:
An equilibrium in which the strategies are a Nash equilibrium, and, within
each subgame, the parts of the strategies relevant to the subgame make a Nash
equilibrium of the subgame.
Contexts: game theory; models
submartingale:
A kind of stochastic process; one in which the expected value of
next period's value, as projected on the basis of the current period's
information, is greater than or equal to the current period's value.
This kind of process could be assumed for securities prices.
Source: Fama, 1970, p 386
Contexts: finance; time series
subordinated:
Adjective. A particular debt issue is said to be subordinated if it was
senior but because of a subsequent issue of debt by the same firm is no longer
senior. One says, 'subordinated debt'.
Contexts: finance
substitution bias:
A possible problem with a price index. Consumers can substitute goods in
response to price changes. For example when the price of apples rises but the
price of oranges does not, consumers are likely to switch their consumption a
little bit away from apples and toward oranges, and thereby avoid experiencing
the entire price increase. A substitution bias exists if a price index does
not take this change in purchasing choices into account, e.g. if the
collection ("basket") of goods whose prices are compared over time is fixed.
"For example, when used to measure consumption prices between 1987 and 1992, a
fixed basket of commodities consumed in 1987 gives too much weight to the
prices that rise rapidly over the timespan and too little weight to the prices
that have fallen; as a result, using the 1987 fixed basket overstates the
1987-92 cost-of-living change. Conversely, because consumers substitute, a
fixed basket of commodities consumed in 1992 gives too much weight to the
prices that have fallen over the timespan and too little to the prices that
have risen; as a result, the 1992 fixed basket understates the 1987-92
cost-of-living change." (Triplett, 1992)
Source: Triplett, 1992, p. 50
Contexts: macro; prices
SUDAAN:
A statistical software program designed especially to analyze clustered
data and data from sample surveys. The SUDAAN Web site is at
http://www.rti.org/patents/sudaan/sudaan.html.
Contexts: data; estimation; code
sufficient statistic:
Suppose one has samples from a distribution, does not know exactly what that
distribution is, but does know that it comes from a certain
set of distributions that is determined partly or wholly by a certain
parameter, q. A statistic is sufficient for
inference about q
if and only if the values of any sample from that distribution give no
more information about q than does the value of the
statistic on
that sample.
E.g. if we know that a distribution is normal with variance 1 but has
an unknown mean, the sample average is a sufficient statistic for the
mean.
Contexts: statistics
sunk costs:
Unrecoverable past expenditures. These should not normally be taken into
account when determining whether to continue a project or abandon it, because
they cannot be recovered either way. It is a common instinct to count them,
however.
sup:
Stands for 'supremum'. The supremum of a set is the least value that is at
least as large as every element of the set. A supremum exists in contexts
where a maximum does not, because (say) the set is open; e.g. the set (0,1)
has no maximum but 1 is its supremum.
sup is a mathematical operator that maps from a set to a value that is
syntactically like an element of that set, although it may not actually be a
member of the set.
Contexts: real analysis
superlative index numbers:
"What Diewert called 'superlative' index numbers were those that provide a
good approximation to a theoretical cost-of-living index for large classes of
consumer demand and utility function specifications. In addition to the
Tornqvist index, Diewert classified Irving Fisher's 'Ideal' index as belonging to
this class." -- Gordon, 1990, p. 5
From Harper (1999, p. 335): "The term 'superlative index number' was coined
by W. Erwin Diewert (1976) to describe index number formulas which generate
aggregates consistent with flexible specifications of the production
function."
Two examples of superlative index number formulas are the Fisher Ideal
Index and the Tornqvist index. These indexes "accommodate
substitution in consumer spending while holding living standards constant,
something the Paasche and Laspeyres indexes do not do." (Triplett, 1992, p.
50).
Source: Diewert, 1976;
Gordon, 1990, p. 5;
Triplett, 1992, p. 50
Contexts: index numbers
superneutrality:
Money in a model 'is said to be superneutral if changes in [nominal] money
growth have no effect on the real equilibrium.' Contrast neutrality.
Source: Blanchard & Fischer, p. 207
Contexts: money; macro; models
supply curve:
For a given good, the supply curve is a relation between each possible price
of the good and the quantity that would be supplied for market sale at that
price.
Drawn in introductory classes with this arrangement of the axes, although
price is thought of as the independent variable:
Price | / Supply
| /
| /
| /
|________________________
Quantity
Contexts: micro
support:
of a distribution. Informally, the domain of the probability function;
includes the set of outcomes that have positive probability.
A little more exactly: a set of values that a random variable may take, such
that the probability is one that it will take one of those values. Note that
a support is not unique, because it could include outcomes with zero
probability.
Source: Farmer, p. 235
Contexts: probability; statistics
SUR:
Stands for Seemingly Unrelated Regressions. The situation is one where the
errors of several regression equations are thought to be correlated across
equations, and one would like to use this information to improve the
estimates. One makes an SUR estimate by estimating the cross-equation error
covariance matrix (typically from equation-by-equation OLS residuals), then
running GLS.
The term comes from Arnold Zellner and may have been used first in Zellner
(1962).
Source: StataCorp. 1999. Stata statistical software release 6.0
manual, vol 4., pages 8 and 14.
Zellner, A. 1962. "An efficient method of estimating seemingly unrelated
regressions and tests for aggregation bias." Journal of the American
Statistical Association 57: 348-368.
Zellner, A. 1963. "Estimators for seemingly unrelated regression
equations: Some exact finite sample results." Journal of the American
Statistical Association 58: 977-992.
Zellner, A., and D.S. Huang. 1962. "Further properties of efficient
estimators for seemingly unrelated regression equations."
International Economic Review 3: 300-313.
Contexts: econometrics; estimation
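A minimal two-equation sketch of this two-step procedure, using only numpy; the data, true coefficients, and error correlation are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Two regressions whose errors are correlated across equations (the SUR setting)
x1 = rng.standard_normal((n, 1))
x2 = rng.standard_normal((n, 1))
Sigma_true = np.array([[1.0, 0.7], [0.7, 1.0]])
u = rng.multivariate_normal([0, 0], Sigma_true, size=n)
y1 = 1.0 + 2.0 * x1[:, 0] + u[:, 0]
y2 = -0.5 + 1.5 * x2[:, 0] + u[:, 1]

X1 = np.column_stack([np.ones(n), x1])   # regressors, equation 1
X2 = np.column_stack([np.ones(n), x2])   # regressors, equation 2

# Step 1: equation-by-equation OLS to get residuals
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma_hat = R.T @ R / n                  # estimated cross-equation covariance

# Step 2: GLS on the stacked system using the estimated covariance
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(Sigma_hat), np.eye(n))   # (Sigma^-1 kron I)
beta_sur = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

beta_sur stacks the two equations' coefficients: (intercept, slope) for equation 1, then for equation 2.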
SURE:
same as SUR estimation.
Contexts: econometrics; estimation
Survey of Consumer Finances:
There is a U.S. survey and a Canadian survey by this name.
The U.S. one is a survey of U.S. households by the Federal Reserve which
collects information on their assets and debt. The survey oversamples high
income households because that's where the wealth is. The survey has been
conducted every three years since 1983.
The Canadian one is an annual supplement to the Labor Force Survey that
is carried out every April.
Source: For the Canadian definition, see page 297 of:
Kevin M. Murphy, W. Craig Riddell, and Paul M. Romer. 1998. "Wages,
skills, and technology in the United States and Canada", Chapter 11 of
General purpose technologies and economic growth, edited by Elhanan
Helpman. MIT Press.
Contexts: public finance; labor
survival function:
From a model of durations between events (which are indexed here by i).
Probability that an event has not happened since event (i-1), as a function of
time.
E.g. denote that probability by S_i():
S_i(t | t_{i-1}, t_{i-2}, ...)
Contexts: econometrics; estimation
SVAR:
Structured VAR (Vector Autoregression).
The SVAR representation of an SMA model comes from inverting the matrix of lag
polynomials C(L) (see the SMA definition) to get:
A(L)y_t = e_t
The SVAR is useful for (1) estimating A(L), and (2) reconstructing the shocks
e_t if A(L) is known.
Source: M.W. Watson, Ch 47, Handbook of Econometrics, p 2901.
Contexts: econometrics; time series
symmetric:
A matrix M is symmetric if for every row i and column j, element M[i,j] =
M[j,i].
Contexts: econometrics; linear algebra
t distribution:
Defined in terms of a normal variable and a chi-squared variable. Let
z ~ N(0,1) and v ~ chi-squared(n). (That is, v is drawn from
a chi-squared distribution with n degrees of freedom.)
Then t = z(n/v)^(1/2)
has a t distribution with n
degrees of freedom. The t distribution is a one-parameter family of
distributions; n is that parameter here. The t distribution is symmetric
around zero and asymptotically (as n goes to infinity) approaches the standard
normal distribution.
Its mean is zero, and its variance is n/(n-2) (for n > 2).
Source: Johnston, p. 530
Contexts: econometrics; statistics; estimation
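The construction above can be checked by simulation. This is only a sketch; the degrees of freedom, sample size, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10                        # degrees of freedom
draws = 200_000
z = rng.standard_normal(draws)
v = rng.chisquare(n, draws)
t = z * np.sqrt(n / v)        # t = z*(n/v)^(1/2)

sample_var = t.var()
theoretical_var = n / (n - 2)   # variance of a t with n degrees of freedom
```

The simulated mean should be near zero and the simulated variance near n/(n-2) = 1.25.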
t statistic:
After an estimation of a coefficient, the t-statistic for that coefficient is
the ratio of the coefficient to its standard error. That can be tested
against a t distribution (which see) to determine how probable it is that the
true value of the coefficient is really zero.
Contexts: econometrics; statistics; estimation
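A minimal sketch of computing the t-statistic for a regression slope. The simulated data and the true slope of 0.8 are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.standard_normal(n)
y = 0.8 * x + rng.standard_normal(n)       # true slope 0.8 (illustrative)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)               # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)          # estimated covariance of beta
t_stat = beta[1] / np.sqrt(cov[1, 1])      # coefficient / standard error
```

Since the true slope here is far from zero, the t-statistic comes out large, and the hypothesis that the coefficient is zero would be rejected.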
tangent cone:
Informally: a set of vectors that is tangent to a specified point.
Source: Tripathi, 1996, p 8 and on
Contexts: mathematics, statistics
team production:
Defined by Alchian and Demsetz (1972) this way: "Team productive
activity is that in which a union, or joint use, of inputs yields a larger
output than the sum of the products of the separately used inputs." (p.
794)
Source: Alchian and Demsetz, 1972, p 794
Contexts: theory of the firm; IO; corporate finance
technical change:
A change in the amount of output produced from the same inputs. Such a change
is not necessarily technological; it might be organizational, or the result of
a change in a constraint such as regulation, prices, or quantities of
inputs.
According to Jorgenson and Stiroh (May 1999 American Economic Review p
110), sometimes total factor productivity (TFP) can be a synonym for technical
change. A possible measure is output per unit of factor input.
Jorgenson and Stiroh also explain how it is definitionally possible for a
technological revolution not to lead to technical change as measured in these
ways.
Contexts: macro
technological change:
A change in the set of feasible production possibilities.
Contrast technical change.
technology shocks:
An event, in a macro model, that changes the production function. Commonly
this is modeled with an aggregate production function that has a scaling
factor, e.g.:
F(K_t, N_t) = A_t K_t^a N_t^(1-a)
where A_t is a time series of technology shocks whose values can be
estimated or whose stochastic process (joint distribution) might be
conjectured to have certain properties.
By this definition the oil shocks of the 1970s were technology shocks -- that
is, for any given aggregate capital stock or labor stock, production was more
expensive after an oil shock because energy would be more expensive. This
interpretation explains why real business cycle theory drew interest in
economics in the 1970s after the oil shocks had such a dramatic impact on
Western economies.
Contexts: macro; models
tenure:
In the context of studies of employees, length of time with current employer
in current job. Contrast experience.
Contexts: labor; corporate finance
term spreads:
"long-term minus short-term interest rates"
Source: Fama 1991 p 1609
Contexts: finance
terms of trade:
An index of the price of a country's exports in terms of its imports. The
terms of trade are said to improve if that index rises. (Obstfeld and
Rogoff, p 25)
An analogous use is when comparing relative prices. If the cost of
agricultural goods in terms of industrial goods goes up, one might say the
"terms of trade ... shifted in favor of agricultural products." (North and
Thomas, p 108).
Source: Obstfeld and Rogoff, 1996, p 25
North, Douglass C. and Robert Paul Thomas. 1973/1992. The Rise of the
Western World: A New Economic History. Cambridge University Press.
Contexts: trade; international
tertiary sector:
Literally, 'third sector'. Per Landes, 1969/1993, p 9,
refers to the "administrative and service sector of the economy".
In the context of Williamson and Lindert, 1980, p 172, it is defined
more specifically as the sector of production outside of agriculture and
industry, and includes construction, trade, finance, real estate, private
services, government, and sometimes transportation.
Source: Landes, 1969/1993, p 9; Williamson and
Lindert, 1980, p 172
Contexts: macro
test for structural change:
An econometric test to determine whether the coefficients in a regression
model are the same in separate subsamples. Often the subsamples come from
different time periods. See Chow test.
Source: Davidson and MacKinnon, 1993, p 375
Contexts: estimation; econometrics
test of identifying restrictions:
synonym for Hausman test, in practice. Only overidentifying
restrictions (assumptions) can be tested.
Contexts: econometrics; estimation
test statistic:
"A random variable [T, in this example] of which the probability
distribution is known, either exactly or approximately, under the null
hypothesis. We then see how likely the observed value of T is to have
occurred, according to that probability distribution. If T is a number that
could easily have occurred by chance [under the tested hypothesis], then we
have no evidence against the null hypothesis H0. However if it is
a number that would occur by chance only rarely, we do have evidence against
the null, and may well decide to reject it."
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
TFP:
Abbreviation for Total Factor Productivity.
Contexts: macro
the standard model:
Has a variety of meanings, and can be a confusing phrase to outsiders to a
discussion. Often implicitly contrasts the model at hand to a simpler,
earlier one in the same literature, sometimes with the implication that
variations from the earlier one ought, in the speaker's opinion, to be
justified explicitly.
A standard model of a firm is one in which it is strictly and always profit
maximizing. Often 'profit' is interpreted in a short term way, but depending
on context it may refer to a long run present-discounted value kind of profit.
A standard model of individuals seeking jobs is that they are strictly
consumption maximizing, and therefore wage maximizing. Occasionally a long
run present discounted value of wages is the objective. If time away from
work is relevant, the consumer maximizes some combination of consumption/wage
and time away from work, or 'leisure'.
A standard model of international trade is one in which countries specialize
toward their comparative advantages.
A standard model of a product market is one in which (1) all producers (called
firms) and consumers (thought of as individuals) are price takers and
variations in any one actor's production or consumption have no effect on the
price; (2) the demand curve is strictly decreasing (that is, price and
quantity demanded are negatively related); (3) the supply curve is strictly
increasing (that is, price and quantity supplied are positively related); (4) the
good is infinitely divisible.
Contexts: phrases
theory of the firm:
Subject is: What are the nature, extent, and purposes of firms? This
organization of the answers comes from Hart's book.
Categories of answers:
Neoclassical theories of the firm identify it with its production technology,
and usually define the driving objective of the firm as maximizing its profits
given its technology.
Principal-agent theories of firms -- that firms are organized to divide work
among many people in ways that minimize principal agent problems.
Transaction cost theories -- that comprehensive contracts with workers are
unrealistic and that the structure of a firm (e.g., a hierarchical one) is
useful for efficiently doing a job. First academic paper of this kind was
Coase, 1937.
Property rights theories -- that ownership is a source of power ...
---------
theory of the firm: firm organization substitutes for
contracts, firms reduce uncertainty and opportunistic
behavior, and set incentives to elicit efficient responses
from agents.
-- Mokyr's rise and fall paper
Source: Hart, 1995; Coase, 1937
Contexts: IO
theta:
As used with respect to options: The rate of change of the value of a
portfolio of derivatives with respect to time with all else held constant.
Formally this is a partial derivative.
Source: Hull, 1997, p 321
Contexts: finance
tightness:
An attribute of a market.
In securities markets, tightness is "the cost of turning around a
position over a short period of time." (Kyle, 1985, p 1316). [Does
'cost' mean trading costs, alone? So does 'turning around' just mean
'trading'?]
A labor market is said to be tight if employers have trouble filling jobs, or
if there is a long wait to fill an available job. It is not evidence that the
labor market is tight if potential employees have trouble finding jobs or must
wait to get one.
Source: Kyle, 1985, p 1316
Contexts: finance; labor
time consistency:
Opposite of time inconsistency or dynamic inconsistency.
Contexts: macro
time deposits:
The money stored in the form of savings accounts at banks.
Contexts: macro; money
time inconsistency:
Same as dynamic inconsistency.
Contexts: dynamic optimization
time preference:
A utility function may or may not have the property of time preference.
Time preference is an intense preference to receive goods or services
immediately.
The preference to avoid delay must be more than multiplicatively linear in
the length of the delay, or one would not use this term to describe the
utility function. In theory this attribute is
analytically distinct from other reasons to want something sooner, such as
interest rates; the bounded rationality problem of remembering how and
when to consume the good later; or discounting of future events for reasons of
opportunity, risk, or uncertainty (e.g., the chance of surviving to a later
time).
There is evidence that human behavior exhibits great impatience which might be
modeled well by time preference and can perhaps be distinguished from
these other factors. So one may read references to empirical
observations of time preference, though as far as this editor can tell the
concept is quite theoretical and some jump is required to leave all other
explanations aside and link it directly to an observation.
Contexts: micro theory; utility
time series:
A stochastic process where the time index takes on a finite or countably
infinite set of values. Denoted, e.g., {X_t : t an integer}.
Relevant terms: AIC,
Akaike's Information Criterion,
AR,
ARCH,
ARIMA,
ARMA,
augmented Dickey-Fuller test,
autocorrelation,
autocovariance,
autocovariance matrix,
autoregressive process,
Box-Jenkins,
Box-Pierce statistic,
BVAR,
Cochrane-Orcutt estimation,
cointegration,
convolution,
covariance stationary,
Dickey-Fuller test,
Donsker's theorem,
Durbin's h test,
Durbin-Watson statistic,
ergodic,
error-correction model,
essentially stationary,
FCLT,
filter,
Gaussian white noise process,
generalized Wiener process,
Granger causality,
heterogeneous process,
I(0),
I(1),
integrated,
invertibility,
Ito process,
lag operator,
Lindeberg-Levy Central Limit Theorem,
Ljung-Box test,
MA,
Markov chain,
Markov property,
mixing,
nonergodic,
own,
Ox,
Phillips-Perron test,
portmanteau test,
Prais-Winsten transformation,
Q-statistic,
QLR,
random,
random process,
random walk,
Riemann-Stieltjes integral,
Schwarz Criterion,
spectrum,
stochastic,
strict stationarity,
strictly stationary,
strongly dependent,
strongly ergodic,
strongly stationary,
structural break,
structural change,
submartingale,
SVAR,
trend stationary,
uniform weak law of large numbers,
unit root,
unit root test,
UVAR,
UWLLN,
VAR,
variance decomposition,
variance ratio statistic,
VARs,
volatility clustering,
Wallis statistic,
weak law of large numbers,
weak stationarity,
weakly dependent,
weakly ergodic,
WLLN,
Wold decomposition,
Wold's theorem.
See Editor's comment on time series.
Contexts: fields
time-varying covariates:
Means the same thing as time-dependent covariates; that the covariates
(regressors, probably) change over time.
Source: statalist, general discussion
Contexts: econometrics; estimation
tit-for-tat:
A strategy in a repeated game (or a series of similar games). When a
Prisoner's dilemma game is repeated between the same players, the
tit-for-tat strategy is to choose the 'cooperate' action unless in the
previous round, one's opponent chose to defect, in which case one responds by
choosing to defect this round. This tends to induce cooperative behavior
against an attentive opponent.
Contexts: game theory
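A small simulation of the strategy in a repeated prisoner's dilemma. The payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5/0 for defecting against a cooperator) are the conventional illustrative values, not from this entry.

```python
# Payoffs: both cooperate (3,3); both defect (1,1); lone defector gets 5, victim 0
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate in round 1; afterwards copy the opponent's previous move
    return 'C' if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

coop_scores = play(tit_for_tat, tit_for_tat)     # mutual cooperation: (30, 30)
mixed_scores = play(tit_for_tat, always_defect)  # TFT loses only round 1: (9, 14)
```

Against another tit-for-tat player, cooperation is sustained every round; against a constant defector, tit-for-tat is exploited only once before switching to defection.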
Tobin tax:
A tax on foreign currency exchanges.
Contexts: public finance
Tobin's marginal q:
The ratio of the change in the value of the firm to the added capital cost for
a small increment to the capital stock. If the firm is in equilibrium, its
marginal q is one; all investments that add more to the value of the firm than
their cost have already been undertaken, and if we knew the replacement cost
of capital we could look up the stock market value of a firm and calculate its
average q directly.
Source: Branson, Ch 13
Contexts: macro
Tobin's q:
This description comes from Dow and Gorton (1996): The ratio of the current
market value of a firm's assets to their cost. If q is greater than 1, the
firm should increase its capital stock. It follows that, according to
"Fischer and Merton (1984), 'the stock market should be a predictor of
the rate of corporate investment' (p 84-85)" -- that is, "rising
stock prices cause higher investment [by firms]. The empirical evidence is
consistent with this view: investment in plant and equipment increases
following a rise in stock prices in all countries that have been studied. In
fact, lagged stock returns outperform q in predicting investment [at both] the
macroeconomic level and in cross-sections of firms. See Barro (1990),
Bosworth (1975), and Welch (1994)."
Source: Quoted from:
James Dow and Gary Gorton. 1996. "Stock market efficiency and economic
efficiency: is there a connection?" European University Institute,
London Business School, the Wharton School, and NBER. Working paper.
Barro, Robert J. 1990. "The stock market and investment,"
Review of Financial Studies 3: 115-131.
Bosworth, Barry. 1975. "The stock market and the economy,"
Brookings Papers on Economic Activity 2: 257-300.
Fischer, Stanley, and Robert C. Merton. 1984. "Macroeconomics and
finance: the role of the stock market," Carnegie-Rochester Conference
Series on Public Policy 21: 57-108.
Tobin, James. 1969. "A general equilibrium approach to monetary
theory" Journal of Money, Credit, and Banking 1: 15-29.
Welch, Ivo. 1994. "The cross-sectional determinants of corporate
capital expenditures: a multinational comparison." UCLA working
paper.
tobit model:
An econometric model in which the dependent variable is censored; in the
original model of Tobin (1958), for example, the dependent variable was
expenditures on durables, and the censoring occurs because values below zero
are not observed.
The model is:
y_i* = b x_i + u_i, where u_i ~ N(0, s^2)
But y_i* (e.g., durable goods desired by the consumer
described by variables x_i) is not observed. What is observed is:
y_i = y_i* if y_i* > y_0, and y_i = y_0 otherwise.
y_0 is known. s^2 is often treated as known.
The x_i's are observed for all i.
Contexts: econometrics; estimation
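A sketch of estimating b by maximum likelihood in a simulated tobit with y_0 = 0 and s treated as known, as the entry mentions is common. The data, the seed, and the true b = 1 are invented for the illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1000
x = rng.standard_normal(n)
b_true, sigma = 1.0, 1.0
y_star = b_true * x + rng.normal(0, sigma, n)   # latent variable y_i*
y = np.maximum(y_star, 0.0)                     # censored at y_0 = 0

def neg_loglik(b):
    mu = b * x
    cens = y <= 0
    # Censored observations contribute P(y* <= 0); the rest the normal density
    ll = np.where(cens,
                  norm.logcdf(-mu / sigma),
                  norm.logpdf(y, mu, sigma))
    return -ll.sum()

b_hat = minimize(neg_loglik, x0=0.5, method='Nelder-Mead').x[0]
```

Running OLS of y on x here would be biased toward zero because of the censoring; the likelihood above accounts for it explicitly.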
top-coded:
For data recorded in groups, e.g. 0-20, 21-50, 51-100, 101-and-up, we do not
know the average or distribution of the top category, just its lower bound and
quantity. That data is "top-coded." We may adjust for it by scaling up the
top-code and calling that the average.
Contexts: data
topological space:
A pair of sets (X, t) such that t is a topology in X.
See topology.
Source: Kolmogorov and Fomin, p 78
Contexts: real analysis
topology:
Is defined with respect to a set X. A 'topology in X' is a set of subsets of
X satisfying several criteria. Let t denote a
topology in X. The sets in t are by definition
'open sets' with respect to t, and sets outside of
t are not. t satisfies
the following:
(1) X and the null set are in t.
(2) Finite or infinite unions of open sets (that is, elements of t) are also in t.
(3) Finite intersections of open sets are in t.
Comments and related definitions:
More than one topology in X may be possible for a given set X.
The complement of a set in t is said to be a
'closed set'.
Elements of X may be called 'points'.
A 'neighborhood' of a point x is any open set containing x.
Let M be a subset of X. A point x in X is a 'contact point' of M if every
neighborhood of x contains at least one point of M; and x would be a 'limit
point' of M if every neighborhood of x contained infinitely many points of M.
The set of all contact points of M is the 'closure' of M.
A 'topological space' is a pair of sets (X, t)
satisfying the above.
All metric spaces are topological spaces. The sets one would call open in a
metric space satisfy the criteria above; one could also label all subsets of X
as open for the purpose of listing the members of the topology, and they would
then satisfy the definition above.
Given two topologies t1 and t2 on the same set X, we say that 't1 is stronger than t2', or
equivalently that 't2 is weaker than t1' if every set in t2 is in
t1. A stronger topology thus has at least as many
elements as a weaker one.
Source: Kolmogorov and Fomin, p 78
Contexts: real analysis
Tornqvist index:
A kind of index number which can accumulate various kinds of capital into a
single number. Averages between periods fill in the quantities of capital and
labor.
Defined in Hulten (2000) to be a discrete-time approximation to a Divisia
index.
Can handle a production function as complicated as a translog, not just
a Cobb-Douglas. It can handle a Cobb-Douglas production function in
which the shares change over time.
The Tornqvist index is a superlative index formula. It was developed
in the 1930s at the Bank of Finland, according to Triplett (1992).
Defined at length in Dean & Harper, 1998, pages 8-9.
Source: paraphrased from Dean & Harper, Feb. 1998;
Hulten, 2000;
Triplett, 1992
Contexts: government; measurement
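A sketch of the Tornqvist quantity index for two inputs over two periods, using the standard formula ln Q = sum_i (1/2)(s_i0 + s_i1) ln(q_i1/q_i0), where the s's are cost shares. All numbers below are invented.

```python
import math

# Hypothetical two-period data: quantities and prices for two inputs
q0 = {'capital': 100.0, 'labor': 200.0}
q1 = {'capital': 110.0, 'labor': 190.0}
p0 = {'capital': 1.0,   'labor': 2.0}
p1 = {'capital': 1.1,   'labor': 2.1}

def cost_shares(p, q):
    total = sum(p[i] * q[i] for i in p)
    return {i: p[i] * q[i] / total for i in p}

s0, s1 = cost_shares(p0, q0), cost_shares(p1, q1)

# Tornqvist index: weights are the average of the two periods' cost shares
log_index = sum(0.5 * (s0[i] + s1[i]) * math.log(q1[i] / q0[i]) for i in q0)
tornqvist = math.exp(log_index)
```

Averaging the two periods' cost shares is what lets the index accommodate shares that change over time, unlike a fixed-weight formula.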
total factor productivity:
Given the macro model Y_t = Z_t F(K_t, L_t), Total Factor Productivity (TFP)
is defined to be Y_t / F(K_t, L_t).
Likewise, given Y_t = Z_t F(K_t, L_t, E_t, M_t), TFP
is Y_t / F(K_t, L_t, E_t, M_t).
The Solow residual is a measure of TFP. TFP presumably changes over time.
There is disagreement in the literature over the question of whether the Solow
residual measures technology shocks. Efforts to change the inputs, like
Kt, to adjust for utilization rate and so forth, have the effect of
changing the Solow residual and thus the measure of TFP. But the idea of TFP
is well defined for each model of this kind.
TFP is not necessarily a measure of technology since the TFP could be a
function of other things like military spending, or monetary shocks, or the
political party in power.
"Growth in total-factor productivity (TFP) represents output growth not
accounted for by the growth in inputs." -- Hornstein and Krusell (1996).
Disease, crime, and computer viruses have small negative effects on TFP using
almost any measure of K and L, although with absolutely perfect measures of K
and L they might disappear.
Reason: crime, disease, and computer viruses make people AT WORK less
productive.
Source: Conversation with Martin Eichenbaum, 8/19/96
Robert E Hall. "The Relation between Price and Marginal Cost in U.S.
Industry", Journal of Political Economy, April 1996.
Hornstein, Andreas, and Per Krusell. 1996. "Can technology improvements
cause productivity slowdowns?" NBER Macroeconomics Annual 1996.
MIT Press. p 214.
Contexts: macro; models
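A sketch with an assumed Cobb-Douglas F and invented numbers, showing that TFP growth computed directly from the definition equals the Solow residual. The capital share a = 0.3 and the data are arbitrary assumptions.

```python
import math

# Hypothetical data; Cobb-Douglas F with capital share a = 0.3 assumed
a = 0.3
Y, K, L = 110.0, 105.0, 102.0        # this period's output, capital, labor
Y0, K0, L0 = 100.0, 100.0, 100.0     # last period's values

def tfp(Y, K, L):
    # TFP = Y / F(K, L) with F(K, L) = K^a * L^(1-a)
    return Y / (K**a * L**(1 - a))

tfp_growth = math.log(tfp(Y, K, L)) - math.log(tfp(Y0, K0, L0))
# Equivalently, the Solow residual: dlnY - a*dlnK - (1-a)*dlnL
solow = (math.log(Y / Y0) - a * math.log(K / K0)
         - (1 - a) * math.log(L / L0))
```

The two computations agree exactly for this functional form, which is why the Solow residual is described above as a measure of TFP.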
totally mixed strategy:
In a noncooperative game, a totally mixed strategy of a player is a mixed
strategy giving positive probability weight to every pure strategy
available to the player.
For a more formal definition see Pearce, 1984, p 1037.
This is a rough paraphrase.
Source: Pearce, 1984, p 1037
Contexts: game theory
Townsend inefficiency:
A possible property of monetary exchange. One of the parties evaluates the
value of the money he gets in the transaction, not the utility he generated
in production.
Contexts: money; models
trace:
The trace of a square matrix A is the sum of the elements on its diagonal.
Has the property that tr(AB)=tr(BA).
Source: Greene, 1993, p 33
Contexts: econometrics; linear algebra
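The property tr(AB) = tr(BA) holds even when A and B are not square, as long as both products are defined; a quick numerical check (random matrices, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

tr_AB = np.trace(A @ B)   # trace of a 3x3 product
tr_BA = np.trace(B @ A)   # trace of a 4x4 product -- same value
```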
tract:
A geographical unit of the U.S. defined by the U.S. Census Bureau, usually
having a population between 2,500 and 8,000. Zip codes are about five
times larger. Census-defined "blocks" are a smaller unit than tracts.
Source: Working paper by Joel Elvery; on these questions it cites this
book:
U.S. Dept of Commerce, Bureau of the Census. Geographic Areas Reference
Manual. 1994. Washington, DC.
Tragedy of the commons:
A metaphor for the public goods problem that it is hard to coordinate and pay
for public goods. The term comes from Hardin (1968). The commons is a
pasture held by a group. Each individual owns sheep and has the incentive to
put more and more sheep on the pasture to gain, privately. The overall effect
of many individuals doing this overwhelms the carrying capacity of the pasture
and the sheep cannot all survive.
Source: Hardin, 1968
Contexts: public
trajectory:
A series of states in a dynamical system: {N_0, N_1, N_2, ...}.
For a deterministic generator function F() such that N_{t+1} = F(N_t),
one has N_1 = F(N_0), N_2 = F(F(N_0)), etc.
Source: J. Montgomery, social networks paper
Contexts: macro; models
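A sketch of iterating a generator function to produce a trajectory; the logistic map used here is an arbitrary, hypothetical choice of F().

```python
def trajectory(F, N0, steps):
    # Iterate N_{t+1} = F(N_t) starting from the initial state N0
    states = [N0]
    for _ in range(steps):
        states.append(F(states[-1]))
    return states

# Example generator: a logistic map (an illustrative choice, not from the entry)
logistic = lambda N: 3.2 * N * (1 - N)
path = trajectory(logistic, 0.5, 5)   # [N_0, N_1, ..., N_5]
```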
transactions costs:
Made up of three types per North and Thomas (1973) p 93:
-- search costs (the costs of locating information about opportunities for
exchange)
-- negotiation costs (costs of negotiating the terms of the exchange)
-- enforcement costs (costs of enforcing the contract)
Source: North, Douglass C., and Robert Paul Thomas. 1973/1992. The Rise
of the Western World: A New Economic History. Cambridge University
Press.
transactions demand:
The transactions demand for money is positively related to income and
negatively related to the interest rate.
Source: Branson
Contexts: money
transient:
In the context of stochastic processes, "A state is called transient if
there is a positive probability of leaving and never returning." --
Stokey and Lucas, p 322
Source: Stokey and Lucas, 1989, p 322
Contexts: macro; stochastic processes
transition economics:
Since about 1992 this term has come to mean the subject of the transition of
post-Soviet economies toward a Western free market model.
It almost never refers to other kinds of transitions that economies might
undergo, nor to the subject labeled development economics.
Contexts: phrases; fields
translog:
The translog production function is a generalization of the
Cobb-Douglas production function. The name stands for
'transcendental logarithmic'.
See Greene, 2nd edition, p 209-210. Cited to Berndt and Christensen (1972);
elsewhere, said to have been introduced by Christensen, Jorgenson, and
Lau, 1971, p 255-6. Applied to a case like Y=f(K,L), where f()
is replaced by the translog. Its use always seems to be in estimation not in
theory. Avoids strong assumptions about the functional form of the production
function; can approximate any other production function to second degree. The
regression run is, e.g. (from Greene p 209):
ln Y = b1 + b2 (ln L) + b3 (ln K) + b4 (ln L)^2/2
+ b5 (ln K)^2/2 + b6 (ln L)(ln K) + e
The Cobb-Douglas estimation is like this but with the restriction that
b4=b5=b6=0.
Greene, p 210 says that the elasticity of output with respect to capital is in
this model:
(d ln Y)/(d ln K) = b3 + b5 (ln K) + b6 (ln L)
--------------------
From Lau (1996) in _Mosaic_:
Flexible functional forms such as the translog production function allow "the
production [function?] elasticities to change with differing relative factor
proportions." (p76)
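As a sketch of the estimation described above, the following simulates Cobb-Douglas data (so the true b4=b5=b6=0) and runs the translog regression from Greene by OLS; all parameter values here are assumptions for illustration.

```python
import numpy as np

# Simulate Cobb-Douglas data and fit the translog specification by OLS:
# ln Y = b1 + b2 lnL + b3 lnK + b4 (lnL)^2/2 + b5 (lnK)^2/2 + b6 lnL*lnK + e
rng = np.random.default_rng(0)
n = 5000
lnL = rng.uniform(0.0, 2.0, n)
lnK = rng.uniform(0.0, 2.0, n)
lnY = 1.0 + 0.6 * lnL + 0.3 * lnK + rng.normal(0.0, 0.01, n)  # true b4=b5=b6=0

X = np.column_stack([np.ones(n), lnL, lnK,
                     lnL ** 2 / 2, lnK ** 2 / 2, lnL * lnK])
b, *_ = np.linalg.lstsq(X, lnY, rcond=None)
# b[3], b[4], b[5] come out near zero, reproducing Cobb-Douglas; the
# elasticity of output w.r.t. capital at a point is b[2] + b[4]*lnK + b[5]*lnL
```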
Source: Christensen, L.R., D.W. Jorgenson, and L.J. Lau. 1973.
"Transcendental Logarithmic Production Frontiers." Review of Economics and
Statistics. 55: pp 28-45.
Greene, 1993
Contexts: econometrics
transpose:
A matrix operation. The transpose of an M x N matrix A is an N x M matrix,
denoted A' or A^T, in which the top row of A has been made into the
first column of A', the second row of A has been made into the second column
of A', and so forth.
Contexts: econometrics; linear algebra
transversality condition:
Limits solutions to an infinite period dynamic optimization problem.
Intuitively, it rules out those that involve accumulating, for example,
infinite debt.
The transversality condition (TC) can be obtained by considering a finite,
T-period horizon version of the problem of maximizing present value, obtaining
the first-order condition for nt+T, and then taking the limit of
this condition as T goes to infinity.
The form is often:
(TC) lim (T goes to infinity) b^T ... = 0
Contexts: macro; models
treatment effects:
In the language of experiments, a treatment is something done to a person that
might have an effect. In the absence of experiments, discerning the effect of
a treatment like a college education or a job training program can be clouded
by the fact that the person made the choice to be treated. The outcomes are a
combined result of the person's propensity to choose the treatment, and the
effects of the treatment itself. Measuring the treatment's effect while
screening out the effects of the person's propensity to choose it is the
classic treatment effects problem.
A standard way to do this is to regress the outcome on other predictors that
do not vary with time, as well as whether the person took the treatment or
not. An example is a regression of wages not only on years-of-education but
also on test scores meant to measure abilities or motivation. Both
years-of-education and test scores are positively correlated with subsequent
wages, and when interpreting the findings the coefficient found on years of
education has been partly cleansed of the factors predicting which people
would have chosen to have more education.
A more advanced method is the Heckman two-step.
Source: Greene, 1997, p 981-2
Contexts: econometrics; labor
trembling hand perfect equilibrium:
Defined by Selten (1975). Now perfect equilibrium is considered a
synonym.
Source: Selten, 1975, as cited by Pearce,
1984, p 1037
Contexts: game theory
trend stationary:
A time series process is trend stationary if after trends were removed it
would be stationary.
Following Phillips and Xiao (1998): iff a time series process yt
can be decomposed into the sum of other time series as below, it is trend
stationary:
y_t = g'x_t + s_t
where g is a k-vector of constants, x_t is a k-vector of deterministic
trends, and s_t is a stationary time series.
Phillips and Xiao (1998), p. 2, say that x_t may be "more
complex than a simple time polynomial. For example, time polynomials with
sinusoidal factors and piecewise time polynomials may be used. The latter
corresponds to a class of models with structural breaks in the deterministic
trend."
Whether all researchers would include statistical models with structural
breaks in the class of those that are trend stationary, as Phillips and Xiao
do, is not known to this writer.
Note that this definition is designed to discuss the question of whether a
statistical model is trend stationary. To decide if one should think of a
particular time series sample as trend stationary requires imposing a
statistical model first.
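A small simulation may make the definition concrete: below, y_t is a linear time trend plus a stationary AR(1), so removing the fitted trend by OLS leaves an estimate of the stationary component. The trend slope and AR parameter are assumptions for illustration.

```python
import numpy as np

# y_t = g*t + s_t with s_t stationary AR(1); detrend by regressing y on (1, t).
rng = np.random.default_rng(6)
T = 1000
t = np.arange(T, dtype=float)
s = np.zeros(T)
for i in range(1, T):
    s[i] = 0.5 * s[i - 1] + rng.normal()   # stationary component
y = 0.2 * t + s                            # trend-stationary series

X = np.column_stack([np.ones(T), t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef                       # estimate of s_t: stationary
```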
Source: Phillips, Peter C.B. and Zhijie Xiao, "A primer on unit root
testing" Journal of Economic Surveys Vol 12, No. 5, 1998.
and/or: Cowles Foundation paper No. 972, Yale University, 1999
Contexts: time series
triangular kernel:
The triangular kernel is this function: (1-|u|) for -1<u<1 and zero for
u outside that range. Here u=(x-x_i)/h, where h is the window width
and x_i are the values of the independent variable in the data, and
x is the value of the independent variable for which one seeks an
estimate.
For kernel estimation.
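A minimal Nadaraya-Watson-style sketch of kernel estimation with the triangular kernel; the quadratic test data and window width are assumptions for illustration.

```python
import numpy as np

# Triangular kernel: K(u) = 1 - |u| for |u| < 1, zero otherwise.
def triangular_kernel(u):
    return np.where(np.abs(u) < 1, 1 - np.abs(u), 0.0)

# Kernel-weighted average of the y_i near x, with u = (x - x_i)/h.
def kernel_estimate(x, xi, yi, h):
    w = triangular_kernel((x - xi) / h)
    return np.sum(w * yi) / np.sum(w)

xi = np.linspace(0.0, 10.0, 201)            # data: y_i = x_i^2
yi = xi ** 2
est = kernel_estimate(5.0, xi, yi, h=0.5)   # close to 25 for smooth data
```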
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
TRIPs:
The Agreement on Trade-Related Aspects of Intellectual Property Rights,
a 1994 international treaty on intellectual property administered by the
World Trade Organization.
Contexts: intellectual property; organizations
truncated dependent variable:
A dependent variable in a model is truncated if observations cannot be seen
when it takes on values in some range. That is, both the independent and the
dependent variables are not observed when the dependent variable is in that
range.
A natural example: in data on consumption purchases, if a consumer's
willingness-to-pay for a certain product is negative, we will never
see evidence of it no matter how low the price goes. Price observations are
truncated at zero, along with identifying characteristics of the consumer in
this kind of data.
Contrast censored dependent variables.
Contexts: econometrics; estimation
TSP:
Time series econometrics software
Contexts: estimation
Tukey boxplot:
A way of showing a distribution on a line, so that distributions can be
compared easily in a single diagram. Used more in statistics than in
econometrics. A thin box marks out the 25th to 75th percentiles; a dash
within that box marks the median; a line marks the outer part of the
distribution, and outside dots or stars mark outliers. (The exact range of
the line is also derived from the location of the quartiles; its exact
definition I do not understand from Quah, 1997; maybe is clear in Cleveland
1993.)
A rough example; consider two continuous distributions that ranges from 0 to
4:
0 1 2 3 4
|--[==+===]---| * * <= the first distribution
|-[=+==]---| <= the second distribution
The first distribution has a median around 1.3, and the main part of it ranges
from .3 to 3.0. There are some outliers at the top. The second distribution
has a median near 2.0, and is more narrowly concentrated than the first, with
few outliers.
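A sketch of the numbers behind such a plot. The whisker rule left vague above is, in the usual Tukey convention, that whiskers reach the most extreme data point within 1.5 times the interquartile range of the quartiles; whether Quah or Cleveland use exactly this variant is an assumption here.

```python
import numpy as np

# Boxplot ingredients: quartiles, median, whiskers under the common
# 1.5*IQR rule, and outliers beyond the whiskers.
data = np.array([0.3, 0.8, 1.0, 1.2, 1.3, 1.6, 2.1, 2.8, 3.0, 3.8, 4.0, 9.0])
q1, med, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
whisker_lo = data[data >= lo_fence].min()
whisker_hi = data[data <= hi_fence].max()
outliers = data[(data < lo_fence) | (data > hi_fence)]   # the dots/stars
```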
Source: Quah, 1997; Cleveland,
1993
Contexts: statistics; econometrics
tutorial: Matlab:
From a Unix shell one can just type 'matlab' as a command on any computer that
has it, and start to type interactive statements such as those below. One
could also put them in a file with the .m extension to run them from within
matlab with 'run file.m', or from the shell with 'matlab < file.m'. This
tutorial covers very little but you can see something of the language.
% The percent sign begins comments.
% The statements below can be typed interactively one per line to get
% clear responses from Matlab. No need to type the comment part at the
% end of the lines. Make sure to use upper and lower case in the
% same way as in the statements shown.
A=[1 2;3 4] % defines matrix A as a 2x2 with first line [1 2]
B=A' % transpose
B=A+A % sum, element by element
Ainv=inv(A) % takes inverse of a matrix
A*Ainv % calculates and prints the result of a matrix multiplication
B=[A;A] % stacked so B has twice as many rows as A
B=[A A] % the A's are side by side. B has twice as many columns as A.
B=A(1,1) % B is a scalar now, the upper left element of A
B=A'*A % matrix multiplication
B=A(:,1) % B is set to first column of A
B=A.*A % element by element multiplication
B=B./A % element by element division
A=zeros(3,3) % special definition of a matrix of zeros
B=ones(3,1) % defines a matrix of ones
A=eye(5) % defines identity matrix
B=A(1:2,1:3) % takes part of matrix
more on % may not be needed; prevents help screen from scrolling off
help * % shows sample of the help available
Source: Chris Taber, Econ D83 at Northwestern 1996-7, Matlab tutorial
handout
Contexts: data
two stage least squares:
An instrumental variables estimation technique. Extends the IV idea to a
situation where one has more instruments than independent variables in the
model. Suppose one has a model:
y = Xb + e
Here y is a T x 1 vector of dependent variables, X is a T x k matrix of
independent variables, b is a k x 1 vector of parameters to estimate, and e is
a T x 1 vector of errors.
But the matrix of independent variables X may be correlated to the e's. Then
using a matrix of instruments Z, uncorrelated to the e's, that is T
x r, where r>k:
Stage 1: By OLS, regress the X's on the Z's to get Xhat =
Z(Z'Z)^(-1)Z'X
Stage 2: By OLS, regress y on the Xhat's. This gives a consistent estimate of
b.
The stages can be combined into one step:
b = (X'P_Z X)^(-1) X'P_Z y
where P_Z, the projection matrix of Z, is defined to be:
P_Z = Z(Z'Z)^(-1)Z'
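A numpy sketch of the one-step formula on simulated data where X is correlated with the error; the data-generating numbers are assumptions. OLS is biased here while 2SLS recovers the true coefficient.

```python
import numpy as np

# b_2sls = (X' Pz X)^(-1) X' Pz y, with Pz = Z (Z'Z)^(-1) Z'.
rng = np.random.default_rng(1)
T = 500
z1, z2 = rng.normal(size=T), rng.normal(size=T)  # r=2 instruments, k=1 regressor
e = rng.normal(size=T)
x = z1 + 0.5 * z2 + 0.8 * e       # endogenous: correlated with e
y = 2.0 * x + e                   # true b = 2.0
X = x.reshape(-1, 1)
Z = np.column_stack([z1, z2])

Pz = Z @ np.linalg.inv(Z.T @ Z) @ Z.T
b_2sls = np.linalg.inv(X.T @ Pz @ X) @ (X.T @ Pz @ y)   # near 2.0
b_ols = np.linalg.inv(X.T @ X) @ (X.T @ y)              # biased upward here
```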
Contexts: econometrics; estimation
two-factor model:
suggests a production model with two factors of production, labor L and
capital K.
tying:
Tying is the vendor practice of requiring customers of one product to buy
others.
Tying can be said to impede trade in that the customer's choices are
restricted. If the customer were free to buy the product without further
conditions, the customer would apparently be better off than if the product
has strings attached. Tying could, however, be efficiency-enhancing by (1)
reducing the number of market transactions (an efficiency of scale), or by (2)
enabling a work-around of a regulation, such as offering a bargain in
conjunction with a price-controlled product.
A historical example: years ago lessees of IBM mainframes had to agree to buy
punch cards only from IBM. Those punch cards were sold at a higher price than
on the open market. So the customer would have been better off with the same
contract minus this clause. But one could argue that tying the products this
way improved competition. It could be that IBM was trying to charge heavy
users of the computer more than light users by putting a surcharge on the
punch cards. If so, IBM found a way to bill customers for one of its costs,
computer maintenance. The practice would theoretically encourage customers to
optimize their use of the computer rather than use it excessively. In this
case the practice might be pro-competitive.
Contexts: IO
type I error:
That is, 'type one error.' The error, in testing a hypothesis, of
rejecting the null hypothesis when it is true.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation; statistics
type I extreme value distribution:
Has the cdf F(x)=exp(-exp(-x)).
(Devine and Kiefer write F(x)=exp(-exp(-x)); the difference may be in the
range of x? must write this out)
Source: Amemiya, Takeshi. "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; statistics
type II error:
That is, 'type two error.' The error, in testing a hypothesis, of
failing to reject the null hypothesis when it is false.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation; statistics
ultimatum game:
An experiment. There are two players, an allocator A and a recipient R, who
in the experiment do not know one another. They have received a windfall,
e.g., of $1. The allocator, moving first, proposes to split the windfall by
proposing to take share x, so that A receives x and R receives 1-x. The
recipient can accept this allocation, or reject it in which case both get
nothing. The subgame perfect equilibrium outcome is that A would offer the
smallest possible amount to R, e.g., the share $.99 for A and $.01 for R, and
that the recipient should accept. The experimental evidence, however, is that
A offers a relatively large share to R, often 50-50, and that R would often
reject smaller positive amounts. We may interpret R's behavior as
willingness to pay a cost to punish "unfair" splits. With regard
to A's behavior -- does A care about fairness too? Or is A income-maximizing
given R's likely behavior? See also Dictator Game.
Contexts: game theory; experimental
unbalanced data:
In a panel data set, there are observations across cross-section units
(e.g. individuals or firms), and across time periods. Often such a data set
can be represented by a completely filled in matrix of N units and T periods.
In the "unbalanced data" case, however, the number of observations
per time period varies. (Equivalently one might say that the number of
observations per unit is not always the same.) One might handle this by
letting T be the total number of time periods and N_t be the number
of observations in each period.
Contexts: econometrics; estimation
unbiased:
An estimator b of a distribution's parameter B is unbiased if the mean of b's
sampling distribution is B. Formally, if: E[b] = B.
Source: Greene, 1993, p 93
Contexts: econometrics
uncertainty:
If outcomes will occur with a probability that cannot even be estimated, the
decisionmaker faces uncertainty. Contrast risk.
This meaning to uncertainty is attributed to Frank Knight, and is sometimes
referred to as Knightian uncertainty.
The decisionmaker can apply game theory even in such a circumstance, e.g. the
choice of a dominant strategy.
Kreps (1988), p 31, writes that three standard ways of modeling choices made
under conditions of uncertainty are with von Neumann-Morgenstern expected
utility over objective uncertainty, the Savage axioms for modeling subjective
uncertainty, and the Anscombe-Aumann theory which is a middle course between
them.
A recent ad for a new book edited by Haim Levy (Stochastic Dominance:
Investment Decision Making under Uncertainty) considers three ways of
modeling investment choices under uncertainty: by tradeoffs between mean and
variance, by choices made by stochastic dominance, and non-expected
utility approaches using prospect theory.
Source: J. Montgomery, social networks paper;
Kreps, 1988
Contexts: models
uncorrelated:
Two random variables X and Y are uncorrelated if E(XY)=E(X)E(Y). Note that
this does not guarantee they are independent.
Source: Greene, 1993, p 64-65
Contexts: econometrics
under the null:
Means "assuming the hypothesis formally being tested is true." See
null hypothesis.
Contexts: phrases; econometrics
uniform distribution:
A distribution with constant density over a range, which we will denote [a,b].
Pdf is 1/(b-a); cdf is (x-a)/(b-a). Mean is .5*(a+b). Variance is
(1/12)(b-a)^2.
Contexts: statistics
uniform kernel:
The uniform kernel function is 1/2, for -1<u<1 and zero outside that
range. Here u=(x-x_i)/h, where h is the window width and
x_i are the values of the independent variable in the data, and x is
the value of the independent variable for which one seeks an estimate. Like
most kernel functions this one is bounded, so only data points within one
window width of x enter the estimate at x.
For kernel estimation.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
uniform weak law of large numbers:
See Wooldridge chapter, p 2651. The UWLLN applies to a non-random criterion
function q_t(w_t, theta), if the
sample average of q_t() for a sample {w_t} from a random
time series is a consistent estimator for E[q_t()].
A law like this is proved with Chebyshev's inequality.
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
union threat model:
"Firms may find it profitable to pay wages above the market clearing
level to try to prevent unionization." In a model this could lead to job
rationing and unemployment, just as efficiency wage models can.
Source: Katz, "Efficiency Wage Theories: A Partial Evaluation" NBER Macro
Annual 1986, p 250
Contexts: labor
unit root:
An attribute of a statistical model of a time series whose autoregressive
parameter is one. In a data series y[t] modeled by:
y[t+1] = y[t] + other terms
the series y[] has a unit root.
Contexts: statistics; econometrics; time series
unit root test:
A statistical test for the proposition that in a autoregressive statistical
model of a time series, the autoregressive parameter is one. In a data series
y[t], where t a whole number, modeled by:
y[t+1] = ay[t] + other terms
where a is an unknown constant, a unit root test would be a test of the
hypothesis that a=1, usually against the alternative that |a|<1.
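A sketch of the estimation step on simulated data; note that under the unit-root null the usual t-statistic has a nonstandard distribution, so an actual test needs Dickey-Fuller critical values (this sketch shows only the OLS estimate of a).

```python
import numpy as np

# OLS estimate of a in y[t+1] = a*y[t] + e, for a random walk (a=1)
# and a stable AR(1) (a=0.5). Numbers are illustrative assumptions.
rng = np.random.default_rng(2)
T = 2000

def ar1_estimate(a_true):
    e = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a_true * y[t - 1] + e[t]
    ylag, ynext = y[:-1], y[1:]
    return (ylag @ ynext) / (ylag @ ylag)   # OLS slope, no constant

a_hat_unit = ar1_estimate(1.0)    # near 1: candidate unit root
a_hat_stable = ar1_estimate(0.5)  # near 0.5
```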
Contexts: statistics; econometrics; time series
unity:
A synonym for the number 'one'.
Contexts: phrases
univariate:
A discrete choice model in which the choice is made from a one-dimensional set
is said to be a univariate discrete choice model.
Contexts: econometrics
univariate binary model:
For dependent variable y_i that can be only one or zero, and a
continuous independent scalar variable x_i, that:
Pr(y_i=1)=F(x_i'b)
Here b is a parameter to be estimated, and F is a distribution function. See
probit and logit models for examples.
Source: Takeshi Amemiya, "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; estimation
unrestricted estimate:
An estimate of parameters taken without constraining the parameters. See
"restricted estimate."
Contexts: econometrics; estimation
upper hemicontinuous:
Informally: no disappearing points. Roughly, a correspondence is upper
hemicontinuous if points of its graph cannot 'disappear' in the limit; for
compact-valued correspondences this amounts to a closed-graph condition.
Contexts: real analysis; models
urban ghetto:
As commonly defined by U.S. researchers: areas where 40 percent or more of
residents are poor.
Source: Blank, _ITAN_, p. 41
Contexts: poverty; data
utilitarianism:
A moral philosophy, generally operating on the principle that the utility
(happiness or satisfaction) of different people can not only be measured but
also meaningfully summed over people and that utility comparisons between
people are meaningful. That makes it possible to achieve a well-defined
societal optimum in allocations, production, and other decisions, and achieve
the goal utilitarian British philosopher Jeremy Bentham described as "the
greatest good for the greatest number."
This form of utilitarianism is thought of as extreme, now, partly because it
is widely believed that there exists no generally acceptable way of summing
utilities across people and comparing between them. Utility functions that
can be compared and summed arithmetically are cardinal utility
functions; utility functions that only represent the choices that would be
made by an individual are ordinal.
Contexts: philosophy
utility curve:
synonym for indifference curve.
Contexts: models
utility function:
A function which describes an individual agent's preferences. It is a
mathematical relation from various quantitatively measurable goods given to an
individual, or attributes of that individual's environment, to the level of
satisfaction those bring to the individual. We have no measure of the level
of satisfaction, or utility level, experienced by the agent but we can make a
hypothesis about the individual's utility function which could then be
disproved by the individual's behavior. Because they *could* be disproved,
the utility functions economists normally use have survived something of a
selection process.
A standard utility function is log utility which can be a function of a
one-dimensional measure of consumption or wealth.
Contexts: utility theory
UVAR:
Unstructured VAR (Vector Autoregression)
Contexts: econometrics; time series; estimation
UWLLN:
Uniform weak law of large numbers
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
value added:
A measure of output. Value added by an organization or industry is, in
principle:
revenue - non-labor costs of inputs
where revenue can be imagined to be price*quantity, and costs are usually
described by capital (structures, equipment, land), materials, energy,
and purchased services.
Treatment of taxes and subsidies can be nontrivial.
Value-added is a measure of output which is potentially comparable
across countries and economic structures.
value function:
Often denoted v() or V(). Its value is the present discounted value, in
consumption or utility terms, of the choice represented by its arguments.
The classic example, from Stokey and Lucas, is:
v(k) = max_{k'} { u(k, k') + bv(k') }
where k is current capital,
k' is the choice of capital for the next (discrete time) period,
u(k, k') is the utility from the consumption implied by k and k',
b is the period-to-period discount factor,
and the agent is presumed to have a time-separable function, in a discrete
time environment, and to make the choice of k' that maximizes the given
function.
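A sketch of solving such a functional equation numerically by value iteration, using the standard log-utility growth example u(k,k') = ln(A k^alpha - k'); the parameter values and grid are assumptions, and this example is chosen because it has the known closed-form policy k' = alpha*b*A*k^alpha to check against.

```python
import numpy as np

# Value iteration for v(k) = max_{k'} { ln(A*k^alpha - k') + b*v(k') }.
A, alpha, b = 1.0, 0.3, 0.9
grid = np.linspace(0.05, 0.5, 200)               # discrete grid for k and k'
v = np.zeros(len(grid))
c = A * grid[:, None] ** alpha - grid[None, :]   # consumption for each (k, k')
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
for _ in range(500):                             # apply the Bellman operator
    v = (u + b * v[None, :]).max(axis=1)
policy = grid[(u + b * v[None, :]).argmax(axis=1)]   # maximizing k' for each k
# Closed form for comparison: k' = alpha*b*A*k**alpha
```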
Source: Stokey and Lucas, 1989
Contexts: macro; models
VAR:
Vector Autoregression, a kind of model of related time series.
In the simplest example, the vector of data points at each time t
(y_t) is thought of as a parameter matrix (say, phi_1) times the
previous value of the data vector, plus a vector of errors about which some
distribution is assumed. Such a model may have autoregression going back
further in time than t-1 too.
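A sketch: simulate a bivariate VAR(1) and recover the coefficient matrix by OLS; the parameter matrix below is an assumption for illustration.

```python
import numpy as np

# y_t = Phi1 @ y_{t-1} + e_t; estimate Phi1 by regressing y_t on y_{t-1}.
rng = np.random.default_rng(3)
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])    # stable: eigenvalues inside the unit circle
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = Phi1 @ y[t - 1] + rng.normal(size=2)

Ylag, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(Ylag, Y, rcond=None)   # solves Ylag @ B ≈ Y
Phi1_hat = B.T                                 # so Phi1_hat ≈ Phi1
```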
Contexts: econometrics; time series; estimation
var():
An operator returning the variance of its argument
Contexts: notation
variance:
The variance of a distribution is the average squared distance of values
drawn from the distribution from its mean:
var(x) = E[(x-Ex)^2].
Also called 'centered second moment.'
Nick Cox attributes the term to R.A. Fisher, 1918.
Contexts: econometrics; statistics
variance decomposition:
In a VAR, the variance decomposition at horizon h is the set of
R^2 values associated with the dependent variable y_t and
each of the shocks h periods prior.
Source: M.W. Watson, Handbook of Econometrics, Ch 47, p. 2900
Contexts: econometrics; estimation; time series
variance ratio statistic:
discussed thoroughly in Bollerslev-Hodrick 1992 p. 19. Equations and
estimation there.
Contexts: finance; time series
VARs:
Vector Autoregressions. "Vector autoregressive models are _atheoretical_
models that use only the observed time series properties of the data to
forecast economic variables." Unlike structural models there are no
assumptions/restrictions that theorists of different stripes would object to.
But a VAR approach tests only LINEAR relations among the time series.
Source: Hakkio & Morris: Vector Autoregressions, a User's Guide
Contexts: macro; time series; estimation
vec:
An operator. For a matrix C, vec(C) is the vector constructed by stacking all
of the columns of C, the second below the first and so on. So if C is n x k,
then vec(C) is nk x 1.
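In numpy this column-stacking can be sketched with a column-major reshape:

```python
import numpy as np

# vec(C): stack the columns of C into a single nk x 1 vector.
C = np.array([[1, 2],
              [3, 4],
              [5, 6]])                  # n=3, k=2
vecC = C.reshape(-1, 1, order="F")      # 'F' = column-major (Fortran) order
# vecC holds [1, 3, 5, 2, 4, 6] as a (6, 1) column
```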
Contexts: econometrics; notation
vega:
As used with respect to options: "The vega of a portfolio of derivatives
is the rate of change of the value of the portfolio with respect to the
volatility of the underlying asset." -- Hull (1997) p 328.
Formally this is a partial derivative.
A portfolio is vega-neutral if it has a vega of zero.
Source: Hull, 1997, p 328
Contexts: finance
verifiable:
Observable to outsiders, in the context of a model of information.
Models commonly assume that the values of some variables are known to
both of the parties to a contract but are NOT verifiable, by which we mean
that outsiders cannot see them and so references to those variables in a
contract between the two parties cannot be enforced by outside authorities.
Examples: .....
vintage model:
One in which technological change is 'embodied' in Solow's
language.
Source: Mortensen, Job Reallocation paper, Feb 1997
Contexts: macro
vNM:
Abbreviation for von Neumann-Morgenstern, which describes attributes of
some utility functions.
Contexts: modelling; micro theory
volatility clustering:
In a time series of stock prices, it is observed that the variance of returns
or log-prices is high for extended periods and then low for extended periods.
(E.g. the variance of daily returns can be high one month and low the next.)
This occurs to a degree that makes an iid model of log-prices or
returns unconvincing. This property of time series of prices can be called
'volatility clustering' and is usually approached by modeling the price
process with an ARCH-type model.
Source: Bollerslev-Hodrick 92 circa p 8
Contexts: finance; time series
von Neumann-Morgenstern utility:
Describes a utility function (or perhaps a broader class of preference
relations) that has the expected utility property: the agent is indifferent
between receiving a given bundle or a gamble with the same expected
value.
There may be other, or somewhat stronger or weaker assumptions in the vNM
phrasing but this is a basic and important one. It does not seem to be the
case that such a utility representation is required to be increasing in all
arguments or concave in all arguments, although these are also common
assumptions about utility functions.
The name refers to John von Neumann and Oskar Morgenstern's Theory of Games
and Economic Behavior. Kreps (1990), p 76, says that this kind of utility
function predates that work substantially, and was used in the 1700s by Daniel
Bernoulli.
Source: Kreps (1990)
Contexts: micro theory; modelling
WACM:
abbreviation for the Weak Axiom of Cost Minimization
Source: Varian, 1992
Contexts: models
wage curve:
A graph of the relation between the local rate of unemployment, on the
horizontal axis, and the local wage rate, on the vertical axis. Blanchflower
and Oswald show that this relation is downward sloping.
That is, locally high wages and locally low unemployment are
correlated.
Source: Blanchflower and Oswald, The Wage Curve, Ch 4
Contexts: labor
wage equation:
An equation in which a wage is the dependent variable. Usually this is an
empirical, econometric, linear regression equation. It may however be
nonlinear, or an abstract equation within a structural model.
Contexts: labor economics
Wallis statistic:
A test for fourth-order serial correlation in the residuals of a
regression, from Wallis (1972) Econometrica 40:617-636. Fourth-order serial
correlation comes up in the context of quarterly data; e.g., seasonality.
Formally, the statistic is:
d4 = [sum from t=5 to t=T of (e_t - e_{t-4})^2] /
[sum from t=1 to t=T of e_t^2]
where the series of e_t are the residuals from a regression.
Tables for interpreting the statistic are in Wallis (1972).
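A sketch of computing d4 from a residual series (0-based indexing); the white-noise benchmark near 2 is analogous to the Durbin-Watson statistic, and actual inference still requires the tables in Wallis (1972).

```python
import numpy as np

# d4 = sum_{t=5..T} (e_t - e_{t-4})^2 / sum_{t=1..T} e_t^2
def wallis_d4(e):
    e = np.asarray(e, dtype=float)
    return np.sum((e[4:] - e[:-4]) ** 2) / np.sum(e ** 2)

# Like Durbin-Watson, d4 is near 2 for uncorrelated residuals and falls
# toward 0 under positive fourth-order serial correlation.
rng = np.random.default_rng(4)
d4_white = wallis_d4(rng.normal(size=4000))   # near 2
```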
Source: Greene, 1993, p 424
Contexts: time series; econometrics; estimation
Walrasian auctioneer:
A hypothetical market-maker who matches suppliers and demanders to get a
single price for a good. One imagines such a market-maker when modeling a
market as having a single price at which all parties can trade.
Such an auctioneer makes the process of finding trading opportunities perfect
and cost free; consider by contrast a "search problem" in which
there is a stochastic cost of finding a partner to trade with and transactions
costs when one does meet such a partner.
Contexts: models
Walrasian equilibrium:
An allocation vector pair (x,p), where x are the quantities held of each good
by each agent, and p is a vector of prices for each good, is a Walrasian
equilibrium if (a) it is feasible, and (b) each agent is choosing optimally,
given that agent's budget. In a Walrasian equilibrium, if an agent prefers
another combination of goods, the agent can't afford it.
Contexts: general equilibrium; models
Walrasian model:
A competitive markets equilibrium model "'without any externalities,
asymmetric information, missing markets, or other imperfections."
(Romer, 1996, p 151)
"In this general equilibrium model, commodities are identical, the market is
concentrated at a single point [location] in space, and the exchange is
instantaneous. [Individuals] are fully informed about the exchange commodity
and the terms of trade are known to both parties. [No] effort is required to
effect exchange other than to dispense with the appropriate amount of cash.
[Prices are] a sufficient allocative device to achieve highest value uses."
(North, 1990, p. 30.)
Source: Romer, 1996, p 151; North, 1990
Contexts: models
wavelet:
A wavelet is a function which (a) maps from the real line to the real line,
(b) has an average value of zero, (c) has values very near zero except over a
bounded domain, and (d) is used for the purpose, analogous to Fourier
analysis, implied by the following paragraphs.
Unlike sine waves, wavelets tend to be irregular, asymmetric, and to have
values that die out to zero as one approaches positive and negative infinity.
"Fourier analysis consists of breaking up a signal into sine waves of
various frequencies. Similarly, wavelet analysis is the breaking up of
a signal into shifted and scaled versions of the original (or mother)
wavelet."
By decomposing a signal into wavelets one hopes not to lose local features of
the signal and information about timing. These contrast with Fourier
analysis, which tends to reproduce only repeated features of the original
function or series.
Source: Michel Misiti, Yves Misiti, Georges Oppenheim, Jean-Michel Poggi.
MATLAB Wavelet Toolbox User's Guide, version 1. March 1996. Copyright
The Mathworks, Inc. page 1-7.
Contexts: econometrics; statistics
WE:
Walrasian Equilibrium
Contexts: general equilibrium; models
weak form:
Can refer to the weak form of the efficient markets hypothesis, which
is that any information in the past prices of a security is fully reflected
in its current price.
Fama (1991) broadens the category of tests of the weak form hypothesis under
the name of 'test for return predictability.'
Source: Fama, 1970
Contexts: finance
weak incentive:
An incentive that does not encourage maximization of an objective, because
it is ambiguous or satisfice-able.
For example, payment of weekly wages is a weak incentive since by construction
it does not encourage maximum production, but rather the minimal performance
of showing up every work day. This can be the best kind of incentive in a
contract if the buyer doesn't know exactly what he wants or if output is not
straightforwardly measurable.
Contrast strong incentive.
Source: Weisbrod's class 5/23/97
Contexts: public economics
weak law of large numbers:
Quoted right from Wooldridge chapter:
A sequence of random variables {zt} for t=1,2,... satisfies the
weak law of large numbers if these three conditions hold:
(1) E[|zt|] is finite for all t,
(2) as T goes to infinity, the limit of the average of the first T elements
of {zt} 'exists' [unknown: that means it's fixed and finite,
right?],
(3) as T goes to infinity, the probability limit of the average of the first
T elements of the series [zt - E(zt)] is zero.
The most important point (I think) is that the weak law of large numbers holds
iff the sample average is a consistent estimate for the mean of the process.
Laws of large numbers are proved with Chebyshev's inequality.
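The consistency point can be illustrated numerically: the dispersion of the sample average around the true mean shrinks as T grows. The exponential distribution used here is an assumed example.

```python
import numpy as np

# Spread (standard deviation across replications) of the sample average
# of T draws; by the WLLN this shrinks toward zero as T grows.
rng = np.random.default_rng(5)

def avg_spread(T, reps=2000):
    draws = rng.exponential(scale=1.0, size=(reps, T))   # true mean 1
    return np.std(draws.mean(axis=1))

s_10, s_1000 = avg_spread(10), avg_spread(1000)
# Chebyshev's inequality, P(|avg - 1| > eps) <= var/(T*eps^2), is
# consistent with the observed roughly 1/sqrt(T) shrinkage.
```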
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
weak stationarity:
synonym for covariance stationarity. A random process is weakly stationary if
and only if it is covariance stationary.
Contexts: statistics; time series; econometrics
weakly consistent:
synonym for consistent in the context of evaluating
estimators.
Contexts: econometrics; statistics
weakly dependent:
A time series process {xt} is weakly dependent iff these four
conditions hold:
(1) {xt} is essentially stationary, that is, E[xt^2] is uniformly bounded.
In any such process, the following 'variance of partial sums' is well
defined, and it will be used in the following conditions. Define sT^2 to
be the variance of the sum from t=1 to t=T of xt.
(2) sT^2 is O(T).
(3) sT^(-2) is O(1/T).
(4) The asymptotic distribution of the sum from t=1 to t=T of
(xt-E(xt))/sT is N(0,1).
These conditions rule out random processes which are serially correlated too
positively or negatively or whose partial sums are near zero.
Example 1: An iid process IS weakly dependent. (Domowitz, in class
4/14/97.)
Example 2: A stable AR(1) (autoregressive coefficient less than 1 in
absolute value) with iid innovations.
Source: Wooldridge, 1995, p 2643-44
Contexts: time series; econometrics; statistics
weakly ergodic:
A stochastic process may be weakly ergodic without being strongly
ergodic.
Contexts: time series; econometrics
weakly Pareto Optimal:
An allocation is weakly Pareto optimal (WPO) if there is no feasible
reallocation that would be strictly preferred by all agents.
WPO <=> SPO if preferences are continuous and strictly increasing (which
implies they are locally nonsatiated).
Contexts: general equilibrium; models
WebEc:
A Web site with indexes to World Wide Web Resources in Economics.
wedge:
The gap between the price paid by the buyer and price received by the seller
in an exchange. Might be caused by a tax paid to a third party.
Contexts: micro theory
Weibull distribution:
in at least one 'standard' specification, has pdf
f(x) = q x^(q-1) exp(-x^q) for x >= 0, and f(x) = 0 for x < 0,
where q > 0 is a shape parameter. q=1 is the simplest case, in which the
distribution reduces to the standard exponential.
Contexts: statistics
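A small Python check of this density (the shape value q = 2 and the sample size are arbitrary choices): draws are generated by inverting the cdf F(x) = 1 - exp(-x^q), and the sample mean is compared with the known mean Gamma(1 + 1/q) of this standard Weibull.

```python
import math
import random

random.seed(0)
q = 2.0  # shape parameter (arbitrary choice for the illustration)

def weibull_draw():
    # Invert the cdf F(x) = 1 - exp(-x^q): x = (-log(1 - u))^(1/q).
    u = random.random()
    return (-math.log(1.0 - u)) ** (1.0 / q)

n = 200000
sample_mean = sum(weibull_draw() for _ in range(n)) / n
true_mean = math.gamma(1.0 + 1.0 / q)  # mean of the standard Weibull
print(sample_mean, true_mean)
```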
Weierstrass Theorem:
that a continuous function on a closed and bounded set will have a maximum and
a minimum.
This theorem is often used implicitly, in the assumption that some set is
compact, meaning closed and bounded. Examples that may help
clarify:
Example 1: Consider a set which is unbounded, like the real line. Say
variable x has any value on the real line, and we wish to maximize the
function f(x)=2x. It doesn't have a maximum or minimum because values of x
further from zero have more and more extreme values of f(x).
Example 2: Consider a set which is not closed, like (0,1). Again, let f(x)
be 2x. Again this function has no maximum or minimum because there is no
largest or smallest value of x in the set.
Contexts: real analysis
weighted least squares:
A criterion for choosing an estimator: make a weighted tradeoff between the
error in an estimator due to bias and that due to variance.
Putting equal weights on the two gives the mean square error
criterion.
Source: Kennedy, 1992, p. 16
Contexts: econometrics
welfare capitalism:
The practice of employers' voluntary provision of nonwage benefits to their
blue collar employees.
Source: Moriguchi, Chiaki. 2000. TITLE?
Contexts: labor; macro; history; comparative; political economy; sociology
WesVar:
A software program for computing estimates and variance estimates from
potentially complicated survey data. Made by
Westat.
Source: Westat, Inc.
Contexts: data; software
white noise process:
a random process of random variables that are uncorrelated, have
mean zero, and a finite variance (denoted s^2 below).
Formally, {et} is a white noise process if E(et) = 0,
E(et^2) = s^2, and
E(et ej) = 0 for t<>j, where all those expectations are
taken prior to times t and j.
A common, slightly stronger condition is that the variables are independent
from one another; such a process is an "independent white noise process."
Often one assumes a normal distribution for the variables, in which case the
distribution is completely specified by the mean and variance; these are
"normally distributed" or "Gaussian" white noise
processes.
Contexts: econometrics; finance; statistics; models
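A small Python illustration of a Gaussian white noise process (the unit variance and sample size are arbitrary choices); its sample mean and lag-one autocovariance should both be near zero:

```python
import random

random.seed(1)
n = 100000
# Gaussian white noise: iid Normal(0, 1) draws, hence uncorrelated over time.
e = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(e) / n
variance = sum(x * x for x in e) / n
# Sample autocovariance at lag 1; near zero for a white noise process.
autocov1 = sum(e[t] * e[t - 1] for t in range(1, n)) / (n - 1)
print(mean, variance, autocov1)
```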
White standard errors:
Same as Huber-White standard errors.
Contexts: econometrics; statistics; estimation
Wiener process:
A continuous-time analogue of a random walk: a stochastic process with
continuous paths and independent, normally distributed increments over every
time interval (roughly speaking).
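A rough Python discretization (the step count and path count are arbitrary choices): summing independent Normal(0, dt) increments over [0, 1] gives an endpoint that is approximately Normal(0, 1) across simulated paths.

```python
import random

random.seed(2)
n_steps = 100
dt = 1.0 / n_steps
n_paths = 20000

endpoints = []
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, dt ** 0.5)  # increment ~ Normal(0, dt)
    endpoints.append(w)

# Across paths, W(1) should be approximately Normal(0, 1).
mean = sum(endpoints) / n_paths
variance = sum((w - mean) ** 2 for w in endpoints) / n_paths
print(mean, variance)
```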
window width:
Synonym for bandwidth in the context of kernel estimation
Contexts: econometrics, nonparametrics
winner's curse:
That a winner of an auction may have overestimated the value of the good
auctioned.
"The winner's curse arises in an auction when the good being sold has a
common value to all the bidders (such as an oil field) and each bidder has a
privately known unbiased estimate of the value of the good (such as from a
geologist's report): the winning bidder [may] be the one who most
overestimated the value of the good; this bidder's estimate itself may be
unbiased but the estimate conditional on the knowledge that it is the highest
of n unbiased estimates is not." -- Gibbons and Katz
Source: Gibbons and Katz, "Layoffs and Lemons", Journal of Labor
Economics, 1991
Contexts: game theory; auctions
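A small Python simulation of the Gibbons-Katz point (the common value, number of bidders, and noise scale are arbitrary choices): each bidder's estimate is unbiased, but the highest of the n estimates is biased upward.

```python
import random

random.seed(3)
common_value = 100.0
n_bidders = 5
n_auctions = 20000

winning_estimates = []
for _ in range(n_auctions):
    # Each bidder's estimate is unbiased: common value plus zero-mean noise.
    estimates = [common_value + random.gauss(0.0, 10.0)
                 for _ in range(n_bidders)]
    # The winning bidder is the one with the highest estimate.
    winning_estimates.append(max(estimates))

avg_winning_estimate = sum(winning_estimates) / n_auctions
print(avg_winning_estimate)  # systematically above the common value
```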
WIPO:
World Intellectual Property Organization, a United Nations agency that
administers international intellectual property treaties.
Contexts: intellectual property; organizations
within estimator:
synonym for fixed effects estimator
Contexts: econometrics
WLLN:
Weak law of large numbers
Contexts: econometrics; statistics; time series
WLOG:
abbreviation for "without loss of generality". This phrase is
relevant in the context of a proof or derivation in which the notation becomes
simpler, or there are fewer cases to demonstrate, by making an innocuous
assumption, for example that the data are in a certain order.
Contexts: mathematics; proofs
Wold decomposition:
Any zero mean, covariance stationary process {xt} can be represented as a
moving average of a white noise process {et}, plus a linearly deterministic
component kt that is a function of the index t:
xt = et + b1*e(t-1) + b2*e(t-2) + ... + kt
where the coefficients bj are square-summable. That form of expressing the
process is its Wold decomposition.
Source: Hamilton, p. 109
Contexts: econometrics; time series
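As a concrete case, a stable AR(1), xt = phi*x(t-1) + et, has Wold coefficients bj = phi^j. The Python sketch below (phi and the sample size are arbitrary choices) builds the AR(1) recursion and the truncated moving-average sum from the same innovations and confirms they coincide.

```python
import random

random.seed(4)
phi = 0.6
n = 500
e = [random.gauss(0.0, 1.0) for _ in range(n)]

# AR(1) recursion started from its own first innovation.
x = [0.0] * n
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Truncated MA(infinity) sum using the same innovations:
# x_t = sum over j of phi^j * e_{t-j}.
def ma_value(t, n_terms=60):
    return sum(phi ** j * e[t - j] for j in range(min(n_terms, t + 1)))

max_gap = max(abs(x[t] - ma_value(t)) for t in range(n))
print(max_gap)  # tiny: the two representations agree
```

The gap is only truncation and rounding error, since phi^60 is negligible.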
Wold's theorem:
That any covariance stationary stochastic process with mean zero
has a moving average representation, called its Wold
decomposition. Let {xt} be that process. See Sargent,
1987, p 286-288 for the complete theorem, assumptions, and
proof.
Source: Sargent, 1987, p 286-288
Contexts: macro; time series; econometrics; statistics
World Bank:
A collection of international organizations to aid countries in their process
of economic development with loans, advice, and research. It was founded in
the 1940s to aid Western European countries after World War II with
capital.
See the World Bank web site at http://www.worldbank.org.
Contexts: organizations
world systems theory:
[What follows is the editor's best understanding, but not definitive.] A
category of sociological/historical description and analysis in which aspects
of the world's history are thought of as byproducts of the world being an
organic whole. Key categories are core and periphery. Core
countries, economies, or societies are richer, have more capital-intensive
industry, skilled labor and relatively high profits. In a way they exploit
the poorer peripheral societies but it may not be a deliberate
collusion.
Source: Gray's review of O'Hearn's book at http://www.eh.net/BookReview, first
paragraph.
http://eh.net/bookreviews/library/0595.shtml
WPO:
stands for Weakly Pareto Optimal
Contexts: general equilibrium; models
X-11 ARIMA:
A nonparametric method or program for seasonal adjustment, developed at the
Census Bureau, and used by US national agencies such as the Federal
Reserve.
Source: conversations with Ish, Jan 18, 2004; Stuart Scott, Aug 2, 2005
Contexts: data; estimation
X-inefficiency model:
A model in which there is a best-practice technology, and a unit (firm or
country, for example) either has that technology or one not as good. No
random factor could make a firm's production function better than that
best-practice one.
Is an organization perfectly x-efficient if it produces the maximum output
possible from its inputs? Or is there some connection between its choice
of output levels and types and its x-efficiency?
Sources of x-inefficiency discussed in the academic literature:
-- inertia in process; that is, doing things to minimize internal redesign
from the way they were done last time, rather than in the most efficient
way for current circumstances
-- prisoner's dilemma situations where an individual's effort is
unobservable; lack of trust and lack of communication can contribute to
this. It is hard for any individual to coordinate the agreement necessary
to raise effort. (Leibenstein, Sept 1983 AER comment.)
-- absence of knowledge (I haven't seen this discussed but it has to be out
there.)
Source: Leibenstein and others
Contexts: IO
yellow-dog contract:
A requirement by a firm that the worker agree not to engage in collective
labor action. Such contracts are not enforceable in the U.S.
Source: Katz, "Efficiency Wage Theories...", NBER Annual 1986, p 250
Contexts: labor
zero-sum game:
A game in which total winnings and total losses sum to zero for each possible
outcome.
Contexts: game theory