Defined on a time series sample, for each natural number k, as the sum of the
squares of the first k sample autocorrelations. Denoting the s^{th} sample
autocorrelation by r_{s}:
BP(k) = sum_{s=1}^{k} r_{s}^{2}
Used to tell if a time series is nonstationary.
Below is GAUSS code with a procedure that calculates the Box-Pierce statistic
for a set of residuals.
/* A series of residuals eps_hat[] is generated from a regression, e.g.: */
eps_hat = y - X*betaols;
/* Then the Box-Pierce statistic for each k can be calculated this way: */
print "Box-Pierce statistic for k=1 is" BP(eps_hat,1);
print "Box-Pierce statistic for k=2 is" BP(eps_hat,2);
print "Box-Pierce statistic for k=3 is" BP(eps_hat,3);
proc BP(series, k);
  local beep, rho;
  beep = 0;
  do until k < 1;
    rho = autocor(series, k);
    beep = beep + rho * rho;
    k = k - 1;
  endo;
  beep = beep * rows(series);  /* BP = T * (the sum) */
  retp(beep);
endp;
/* This function calculates the autocorrelation estimate for lag k */
proc autocor(series, k);
  local rowz, y, x, rho;
  rowz = rows(series);
  y = series[k+1:rowz];
  x = series[1:rowz-k];
  rho = inv(x'x)*x'y;  /* compute autocorrelation by OLS */
  retp(rho);
endp;
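For readers without GAUSS, the same calculation can be sketched in plain Python
(the function names box_pierce and autocor are my own; as in the GAUSS code
above, the lag-k autocorrelation is approximated by the slope of a no-intercept
OLS regression of the series on its k-th lag):

```python
def autocor(series, k):
    """OLS estimate of the lag-k autocorrelation: regress x[t] on x[t-k]."""
    y = series[k:]
    x = series[:-k]
    # slope of the no-intercept regression, (x'x)^{-1} x'y
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def box_pierce(series, k):
    """Box-Pierce statistic: T times the sum of the first k squared
    sample autocorrelations."""
    T = len(series)
    return T * sum(autocor(series, s) ** 2 for s in range(1, k + 1))
```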
Contexts: finance, time series
BPEA:
Stands for Brookings Papers on Economic Activity, as of
4/15/1999.
Contexts: econometrics
Pareto distribution:
Has cdf H(x) = 1 - x^{(-a)} where x>=1, a>0. This distribution
is unbounded above. (A slightly different version, with two parameters, is
shown in Hogg and Craig on p. 207.)
In an endowment economy, an allocation of goods to agents is Pareto Optimal if
no other allocation of the same goods would be preferred by every agent.
Pareto optimal is sometimes abbreviated as PO.
Optimal is the descriptive adjective, whereas optimum is a noun.
A Pareto optimal allocation is one that is a Pareto optimum. There may be
more than one such optimum.
Contexts: models
Pareto set:
The set of Pareto-efficient points, usually in a general
equilibrium setting.
Source: Varian, 1992, p 324
Contexts: micro theory; general equilibrium; models
partially linear model:
Refers to a particular econometric model which is between a linear regression
model and a completely nonparametric model:
y=b'X+f(Z)+e
where X and Z are known matrices of independent variables, y is a known vector
of the dependent variable, f() is not known but often some assumptions are
made about it, and b is a parameter vector. Assumptions are often made on e
such as that e~N(0,s^{2}I) and that E(e|X,Z)=0.
The project at hand is to estimate b and/or to estimate f() in a
non-parametric way, e.g. with a kernel estimator.
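As an illustration of that last step, here is a minimal Nadaraya-Watson kernel
estimator of f at a point z, which could be applied to the partial residuals
y - Xb once b is in hand. This is only a sketch of one simple kernel estimator,
not Robinson's procedure; the names and the Gaussian kernel choice are mine:

```python
import math

def kernel_fit(z, Z, resid, h):
    """Nadaraya-Watson estimate of f(z): a kernel-weighted average of the
    partial residuals resid[i] observed at the points Z[i], bandwidth h."""
    w = [math.exp(-0.5 * ((z - zi) / h) ** 2) for zi in Z]
    return sum(wi * ri for wi, ri in zip(w, resid)) / sum(w)
```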
Source: possibly first discussed in Robinson, P., 1988,
"Root-n-consistent semiparametric regression", Econometrica
vol 56, pp 931-954.
Contexts: econometrics; estimation
partition:
"[A] partition of a finite set (capital omega) is a collection of
disjoint subsets of (capital omega) whose union is (capital omega)." --
Fudenberg and Tirole p 55
Source: Fudenberg and Tirole, 1991/1993, p 55
Contexts: information theory; econometrics; micro theory
passive measures (to combat unemployment):
unemployment and related social benefits and early retirement benefits.
(contrast active)
Source: John P. Martin, D16 readings book
Contexts: labor; macro
path dependence:
Following David (97): describes allocative stochastic processes. Refers to
the way the history of the process relates to the limiting distribution of the
process.
"Processes that are non-ergodic, and thus unable to shake free of
their history, are said to yield path dependent outcomes." (p. 13)
"A path-dependent stochastic process is one whose asymptotic distribution
evolves as a consequence" of the history of the process. (p. 14)
The term is relevant to the outcome of economic processes through history.
For example, the QWERTY keyboard standard would not be the standard if it had
not been chosen early; thus the keyboard standard evolved through a
path-dependent process.
Source: David, 1997, p 13-14
Contexts: history; stochastic processes
path dependency:
The view that technological change in a society depends quantitatively and/or
qualitatively on its own past. "A variety of mechanisms for the
autocorrelation can be proposed. One of them, due to David (1975) is that
technological change tends to be 'local,' that is, learning occurs primarily
around techniques in use, and thus more advanced economies will learn more
about advanced techniques and stay at the cutting edge of progress."
(Mokyr, 1990, p 163)
A noted example of technological path dependence is the QWERTY keyboard, which
would not be in use today except that it happened to be chosen a hundred years
ago. A special interest in the research literature was taken in the question
of whether technological path dependence has been observed to lead to
noticeably Pareto-inferior outcomes later. Liebowitz and Margolis in a series
of papers (e.g. in the JEP) have made the case that it has not -- that is that
the QWERTY keyboard is not especially inferior to alternatives in
productivity, and that the VHS videotapes were not especially inferior to Beta
videotapes at the time consumers chose between them.
Source: Mokyr, 1990, p 163
Contexts: stochastic processes; history
payoff matrix:
In a game with two players, the payoffs to each player can be shown in a
matrix. The one below is from the classic Prisoner's Dilemma game:
                 Player Two
                  C       D
 Player One  C   3,3     0,4
             D   4,0     1,1
Here, player one's strategy choices (shown, conventionally, on the left) are C
and D, and player two's, shown on the top, are also C and D. The payoffs for
each possible pair of strategies are shown in the cells of the matrix. The
first number is the payoff to player one, and the second is the payoff to
player two.
Contexts: game theory; models
PBE:
abbreviation for perfect Bayesian equilibrium.
Contexts: game theory
pdf:
probability distribution function. This function describes a statistical
distribution. It has the value, at each possible outcome, of the probability
of receiving that outcome. A pdf is usually denoted in lower case letters.
Consider for example some f(x): for a real number x, f(x) is the probability
of receiving a draw of x. A particular form of f(x) will describe the normal
distribution, or any other unidimensional distribution.
Contexts: econometrics; statistics
PDV:
Present Discounted Value
Contexts: finance
pecuniary externality:
An effect of production or transactions on outside parties through prices but
not real allocations.
perfect Bayesian equilibrium:
A perfect Bayesian equilibrium is a game-theoretic concept. It is much
like a Nash equilibrium, but each player's beliefs are also
defined and are integrated into the definition.
A perfect Bayesian equilibrium (PBE) is defined to exist in a game in
which payoffs and players are stated and are common knowledge within the
game. It can be described by this set of things taken together:
- a profile of strategies and
- a profile of belief functions
To be a PBE they also satisfy these rules:
- the strategies satisfy the sequential rationality rule and
- the beliefs are updated over time according to Bayes rule
whenever possible.
[Ed.: Normally one also assumes that the players know with certainty that
they are in a PBE. Without this assumption it may not be possible to
locate any PBE of the game, but with this assumption this equilibrium is
less likely to describe the real world.]
-- F&T 1993? pp 321-333
The source also describes separating and pooling equilibria as subsets of PBE.
There is a theorem in Kreps that all sequential equilibria are also subgame
perfect.
Contexts: game theory
perfect equilibrium:
In a noncooperative game, a profile of strategies is a perfect equilibrium if
it is a limit of epsilon-equilibria as epsilon goes to zero.
There can be more than one perfect equilibrium in a game.
For a more formal definition see sources. This is a rough paraphrase.
Source: Pearce, 1984, p 1037
Contexts: game theory
PERT:
Program Evaluation and Review Technique
(is this used?)
Source: Hogg and Craig, 163-4
phase portrait:
graph of a dynamical system, depicting the system's trajectories (with arrows)
and stable steady states (with dots) and unstable steady states (with circles)
in a state space. The axes are of state variables.
Contexts: models
Phillips curve:
A relation between inflation and unemployment. Follows from William Phillips'
1958 "The relation between unemployment and the rate of change of money
wage rates in the United Kingdom, 1861-1957" in _Economica_.
In the subsequent discussion the relation was thought to be a negative one --
high unemployment would correlate with low inflation. That stylized fact lost
empirical support with the stagflation of the U.S. in the 1970s, in which high
inflation and high unemployment occurred together. More recent evidence
suggests that over the long term, across countries, there is a POSITIVE
correlation between inflation and unemployment. Discussion continues on which
of these is more 'causal to' the other and less 'caused by' the other.
In recent use, "[T]he 'Phillips curve' has become a generic term for any
relationship between the rate of change of a nominal price or wage and
the level of a real indicator of the intensity of demand in the
economy, such as the unemployment rate." -- Gordon, Robert J.,
"Foundations of the Goldilocks Economy" for Brookings Panel on
Economic Activity, Sept 4, 1998.
Contexts: macro
Phillips-Perron test:
A test of a unit root hypothesis on a data series.
(Ed.: what follows is my best, but imperfect, understanding.) The
Phillips-Perron statistic, used in the test, is a negative number. The more
negative it is, the stronger the rejection of the hypothesis that there is a
unit root at some level of confidence. In one example a value of -4.49
constituted rejection at the p-value of .10.
Source: cites of Phillips, Peter C.B. and Pierre Perron. "Testing for a
Unit Root in Time Series Regression." Biometrika June 1988, 75,
pp 335-346.
With thanks to: Don Watson (as of 1999/03/31: drw@matilda.vm.edu.au)
Contexts: econometrics; time series
phrases:
Relevant terms: a fortiori,
a priori,
abstracting from,
analytic,
budget line,
ceteris paribus,
Chicago School,
classical,
climacteric,
control for,
corner solution,
creative destruction,
deterministic,
dismal science,
endogenous,
equity premium puzzle,
exogenous,
inter alia,
interior solution,
is consistent for,
mutatis mutandis,
neoclassical model,
neoclassical,
nests,
New Economy,
ocular regression,
paradox,
priori,
proof,
quango,
rational,
rational ignorance,
rationalize,
risk free rate puzzle,
significance,
solution concept,
stylized facts,
the standard model,
transition economics,
under the null,
unity.
Contexts: fields
physical depreciation:
Decline in ability of assets to produce output. For example, computers, light
bulbs, and cars have low physical depreciation; they work until they expire.
Could be said to be made up of deterioration and exhaustion.
Contexts: macro; capital measurement
piecewise linear:
A set of line segments each of which represents a linear relationship between
two variables over some subset of their domains. Usually the relationship
referred to is between what the writer is characterizing as an independent
variable and a dependent variable.
Contexts: mathematics; theory; estimation
Pigou effect:
The wealth effect on consumption as prices fall. A lower price level raises
the real value of privately held nominal wealth, leading to a rise in
consumption. Contrast the Keynes effect.
Source: James Tobin. "Keynesian Models of Recession and
Depression"
Contexts: macro; models
plant:
a plant is an integrated workplace, usually all in one location.
platykurtic:
An adjective describing a distribution with low kurtosis. 'Low' means the
fourth central moment is less than three times the square of the second
central moment; such a distribution has less kurtosis than a normal
distribution.
Platy- means 'broad' or 'flat' in Greek and refers to the central part of the
distribution. Platykurtic distributions are not as common as leptokurtic
ones.
Source: Davidson and MacKinnon, 1993, p 62-64
Contexts: statistics
PO:
Pareto Optimal
Contexts: models
Poisson distribution:
A discrete distribution. Possible values for x are the nonnegative integers
0,1,2,...
Denoting the mean as mu, the Poisson distribution has mean mu, variance mu, and
pdf (e^{-mu}mu^{x})/x!. Moment-generating function
(mgf) is exp(mu(e^{t}-1)).
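A quick numerical check of these formulas (a sketch; the infinite sums are
truncated at x=99, which is far into the negligible tail for the mu chosen
here):

```python
import math

mu = 3.0
def pmf(x):
    return math.exp(-mu) * mu ** x / math.factorial(x)

total = sum(pmf(x) for x in range(100))                  # should be 1
mean = sum(x * pmf(x) for x in range(100))               # should equal mu
var = sum((x - mean) ** 2 * pmf(x) for x in range(100))  # should equal mu
```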
Source: Hogg and Craig
Contexts: statistics
Poisson process:
In such a process, the number n of events that occur in a fixed interval of
time has a Poisson distribution.
Contexts: statistics; models
political science:
The academic subject centering on the relations between governments and other
governments, and between governments and peoples.
polity:
Group with an organized governance. Normally a politically organized
population, though it can be a religious one.
Or, form of governance.
Examples needed. Use is very context-sensitive; that is, the definition is
not too informative without examples.
Contexts: political science
polychotomous choice:
Multiple choice. In the context of discrete choice econometric models, means
that the dependent variable has more than two possible values.
Source: Maddala, 1983, 1996, p 275
Contexts: econometrics
pooling of interests:
One of two ways to do the accounting for a U.S. firm after a merger. The
alternative is purchase accounting.
A pooling of interests is the method usually taken for all-stock
deals.
Contexts: accounting
poor:
In poverty, which see.
Source: Blank, ITAN
Contexts: poverty
portmanteau test:
a test for serial correlation in a time series, not just of one period back
but of many. Standard reference is Ljung and Box (1978).
The equation characterizing this test is given on page 18, footnote 15, of
Bollerslev-Hodrick 1992 and will go in here when html has an equation
format.
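Pending that, the standard Ljung-Box (1978) form of the statistic,
Q = T(T+2) sum_{k=1}^{m} r_{k}^{2}/(T-k), where r_{k} is the sample
autocorrelation at lag k, can be sketched as follows (this is the textbook
formula, not necessarily the exact equation in Bollerslev-Hodrick):

```python
def ljung_box(r, T):
    """Ljung-Box Q statistic from sample autocorrelations r[0..m-1]
    (for lags 1..m) and sample size T.  A large Q is evidence of
    serial correlation at some lag up to m."""
    m = len(r)
    return T * (T + 2) * sum(r[k] ** 2 / (T - (k + 1)) for k in range(m))
```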
Source: Bollerslev-Hodrick 92 circa p 8
Contexts: finance; time series
poverty:
As commonly defined by U.S. researchers: the state of living in a family with
income below the federally defined poverty line.
Relevant terms: idle,
poor,
poverty,
urban ghetto.
Source: Blank, ITAN
Contexts: data; poverty
power:
"The power of a test statistic T is the probability that T will reject
the null hypothesis when the hypothesis is not true."
Formally, it is
the probability that a draw of T is in the rejection region given that
the hypothesis is not true.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; statistics; estimation
power distribution:
A continuous distribution with a parameter that we will denote k.
Pdf is kx^{k-1} for 0<x<1. Mean is k/(k+1). Variance is
k/[(1+k)^{2}(2+k)].
This distribution has not been found to correspond to natural or economic
phenomena, but is useful in practice problems because it is algebraically
tractable.
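The mean formula can be checked numerically (a sketch, integrating x times the
pdf over (0,1) by a midpoint Riemann sum, with k = 3):

```python
k = 3.0
n = 100_000
dx = 1.0 / n
# midpoint Riemann sum of x * pdf(x) = x * k * x^{k-1} over (0, 1)
mean = sum(((i + 0.5) * dx) * k * ((i + 0.5) * dx) ** (k - 1)
           for i in range(n)) * dx
# analytic mean is k / (k + 1) = 0.75
```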
Source: Hogg and Craig
Contexts: statistics
PPF:
Short for Production Possibilities Frontier.
PPP:
Stands for purchasing power parity, a criterion for an appropriate exchange
rate between currencies. It is a rate such that a representative basket
of goods in country A costs the same as in country B if the currencies are
exchanged at that rate.
Actual exchange rates vary from the PPP levels for various reasons, such as
the demand for imports or investments between countries.
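The arithmetic, with made-up numbers: if the basket costs 100 units of A's
currency and 12,000 units of B's currency, the PPP exchange rate is 120 units
of B per unit of A:

```python
basket_cost_A = 100.0     # price of the basket in country A's currency
basket_cost_B = 12000.0   # price of the same basket in country B's currency

ppp_rate = basket_cost_B / basket_cost_A   # units of B per unit of A
# at this rate the basket costs the same in either currency
assert basket_cost_A * ppp_rate == basket_cost_B
```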
Contexts: international
Prais-Winsten transformation:
An improvement to the original Cochrane-Orcutt algorithm for estimating
time series regressions in the presence of autocorrelated errors. The
implicit reference is to Prais-Winsten (1954).
The Prais-Winsten transformation makes it possible to include the first
observation in the estimation.
Source: SHAZAM
manual
Contexts: estimation; time series; econometrics
pre-fisc:
Means before taking account of the government's fiscal policy. Usually refers
to personal incomes before taxes and government transfers between people. For
example a researcher might take more interest in pre-fisc income inequality
than in post-fisc income inequality, because government transfers are
designed specifically to reduce inequality.
Contexts: public
precautionary savings:
Savings accumulated by an agent to prepare for future periods in which the
agent's income is low.
Contexts: models; finance
precision:
reciprocal of the variance
Source: Hamilton, p 355
Contexts: econometrics
predatory pricing:
The practice of selling a product at low prices in order to drive competitors
out, discipline them, weaken them for possible mergers, and/or to prevent
firms from entering the market. It is an expensive strategy.
In the United States there is no legal (statutory) definition of predatory
pricing, but pricing below marginal cost (the Areeda-Turner test) was
used by the Supreme Court in 1993 as a criterion for pricing that is
predatory. (Salon
magazine, 1998/11/11)
Contexts: IO
predetermined variables:
Those that are known at the beginning of the current time period. In an
econometric model, means exogenous variables and lagged endogenous
variables.
Source: Johnston, p 440
Contexts: econometrics
preferences:
A property of the agents (individuals) in many economics models. The property
is the ability to compare any two bundles of goods and to prefer one over the
other. Usually this ability is presumed to be stable over time. The ability
is presumed to be behind the observed behavior of individuals who make
choices, and we can then make theories about their preferences.
Preferences are often implicitly assumed to exist in a model by declaring that
there is a utility function characterizing the preferences of the
agents in the model. A continuous utility function exists which
perfectly describes the preferences if the preferences are complete,
reflexive, transitive, continuous, and strongly monotonic.
[ed: yes, someday I will define those terms too.]
Standard disputes about models in which perfectly defined preferences exist
are about whether the agent is perfectly informed about the various bundles;
about whether there are risks; and how stable over time the preference
relations (responses to comparisons) are.
Contexts: utility theory
present-oriented:
A present-oriented agent discounts the future heavily and so has a HIGH
discount rate, or equivalently a LOW discount factor. See also
'future-oriented', 'discount rate', and 'discount factor'.
Contexts: models
price ceiling:
Law requiring that a price for a certain good be kept below some level. May
lead to shortage and a black market.
price complements:
Two inputs i and j to a production function can be "price complements in
production". Assume the demand for the output is decreasing in its
price. Inputs i and j are price complements if, when the price of i goes down,
the profit-maximizing uses of both i and j go up.
Contexts: IO; labor
price elasticity:
A measure of responsiveness of some other variable to a change in price. See
elasticity for the general equation.
Contexts: micro theory
price floor:
Law requiring that a price for a certain good be kept above some
level.
price index:
A single number summarizing price levels.
A larger number conventionally represents higher prices. A variety of
algorithms are possible, and a precise specification (which is rare) requires
both an algorithm (an example of which is a Laspeyres index) and a set
of goods with fixed known quantities of each (the basket).
Contexts: macro; price indexes
price substitutes:
Inputs i and j to a production function are "price substitutes in
production" if when the price of i goes down the use of j goes
up.
Contexts: IO; labor
pricing kernel:
same as "stochastic discount factor" in a model of asset
prices.
Source: Campbell, Lo, and MacKinlay p 294
Contexts: finance, macro
pricing schedule:
A mapping from quantity purchased to total price paid
Contexts: IO
principal components:
An approach to finding which linear combinations of a set of underlying
variables account for most of the variation in the data.
For a more complete discussion see
http://www.statsoftinc.com/textbook/stfacan.html
principal strip:
A bond can be separated into parts that can be thought of as components: a
principal component that is the right to receive the principal at the end
date, and the right to receive the coupon payments. The components are called
strips. The principal component is the principal strip.
Contexts: finance; business
principal-agent:
The general name for a class of games faced by a player, called the principal,
who by the nature of the environment does not act directly but instead by
giving incentives to other players, called agents, who may have different
interests.
Contexts: phrasing; game theory
principal-agent problem:
A particular game-theoretic description of a situation. There is a player
called a principal, and one or more other players called agents with utility
functions that are in some sense different from the principal's. The
principal can act more effectively through the agents than directly, and must
construct incentive schemes to get them to behave at least partly according to
the principal's interests. The principal-agent problem is that of designing
the incentive scheme. The actions of the agents may not be observable so it
is not usually sufficient for the principal just to condition payment on the
actions of the agents.
Contexts: game theory
principle of optimality:
The basic principle of dynamic programming, which was developed by Richard
Bellman: that an optimal path has the property that whatever the initial
conditions and control variables (choices) over some initial period, the
control (or decision variables) chosen over the remaining period must be
optimal for the remaining problem, with the state resulting from the early
decisions taken to be the initial condition.
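A minimal illustration with made-up costs: backward induction on a small
layered graph. The cost-to-go values V embody the principle: whatever was
chosen earlier, the remaining path must be optimal given the current node.

```python
# cost[s][i][j]: cost of moving from node i in layer s to node j in layer s+1
cost = [
    [[1, 4], [2, 1]],   # layer 0 -> layer 1
    [[3, 2], [1, 5]],   # layer 1 -> terminal layer
]

V = [0, 0]  # cost-to-go at the terminal layer
for stage in reversed(cost):
    # Bellman step: best immediate cost plus optimal cost-to-go thereafter
    V = [min(c + v for c, v in zip(row, V)) for row in stage]

best = V[0]  # optimal total cost starting from node 0 of layer 0
```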
Contexts: models
priori:
Not used separately; see phrase a priori.
Contexts: phrases
Prisoner's Dilemma:
A classic game with two players. Imagine that the two players are
criminals being interviewed separately by police. If either gives information
to the police, the other will get a long sentence. Either player can
Cooperate (with the other player) or Defect (by giving information to the
police). Here is an example payoff matrix for a Prisoner's Dilemma game:
                 Player Two
                  C       D
 Player One  C   3,3     0,4
             D   4,0     1,1
(D,D) is the Nash equilibrium, but (C,C) is the Pareto
optimum. That difference has been discussed extensively
for various games in the research literature. Analogies to the prisoner's
dilemma or some other game can support an argument about why in the real world
some Pareto optima are observed not to be achieved.
If this same game is repeated more than once with a high enough discount
factor, there exist Nash equilibria in which (C,C) is a possible outcome of
the early stages.
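A small sketch verifying the one-shot claims above: with this payoff matrix,
(D,D) is a Nash equilibrium, (C,C) is not, and (C,C) Pareto dominates (D,D).

```python
# payoff[s1][s2] = (payoff to player one, payoff to player two); 0 is C, 1 is D
payoff = [[(3, 3), (0, 4)],
          [(4, 0), (1, 1)]]

def is_nash(s1, s2):
    """True if neither player can gain by deviating unilaterally."""
    p1_ok = all(payoff[d][s2][0] <= payoff[s1][s2][0] for d in (0, 1))
    p2_ok = all(payoff[s1][d][1] <= payoff[s1][s2][1] for d in (0, 1))
    return p1_ok and p2_ok
```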
Source: Varian, 1992, Ch 15
Contexts: game theory
pro forma:
describes a presentation of data, typically financial statements, where the
data reflect the world on an 'as if' basis. That is, as if the state of the
world were different from that which is in fact the case.
For example, a pro forma balance sheet might show the balance sheet as if a
debt issue under consideration had already been issued. A pro forma income
statement might report the transactions of a group on the basis that a
subsidiary acquired partway through the reporting period had been a part of
the group for the whole period. This latter approach is often adopted in
order to ensure comparability between financial statements of the year of
acquisition with those of subsequent years.
Source: Stephen Brown (stephenb@nwu.edu at the time) 7/24/2000, by email
Contexts: accounting; finance; business
probability:
Relevant terms: almost surely,
convergence in quadratic mean,
countable additivity property,
expectation,
Fatou's lemma,
Jensen's inequality,
mixing,
notation,
strong law of large numbers,
support.
Contexts: fields
probability function:
synonym for pdf.
Source: Newey-McFadden, Ch 36, Handbook of Econometrics
Contexts: econometrics; statistics
probit model:
An econometric model in which the dependent variable y_{i} can be only
one or zero, and the continuous independent variables x_{i} enter
through:
Pr(y_{i}=1)=F(x_{i}'b)
Here b is a parameter vector to be estimated, and F is the normal cdf.
The logit model is the same but with a different cdf for F.
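A sketch of evaluating the probit probability for a given x and b, with the
standard normal cdf built from the error function (the data here are made up):

```python
import math

def normal_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_prob(x, b):
    """Pr(y = 1) = F(x'b), F the standard normal cdf."""
    return normal_cdf(sum(xi * bi for xi, bi in zip(x, b)))
```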
Source: Takeshi Amemiya, "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; estimation
process:
see "stochastic process"
Contexts: statistics
product differentiation:
This is a product market concept. Chamberlin (1933) defined it thus: "A
general class of product is differentiated if any significant basis exists for
distinguishing the goods of one seller from those of another."
Source: Chamberlin, E. 1933. The Theory of Monopolistic Competition.
Harvard University Press. Cambridge, MA. 8th edition, 1962.
as cited in:
Brynjolfsson, Erik, Michael D. Smith, Yu (Jeffrey) Hu. "Consumer Surplus in
the Digital Economy: Estimating the Value of Increased Product Variety." p.6.
On the net as of Jan 7, 2003.
Contexts: IO
production function:
Describes a mapping from quantities of inputs to quantities of an output as
generated by a production process. Standard example is:
y = f(x_{1}, x_{2})
Where f() is the production function, the x's are inputs, and the y is an
output quantity.
Contexts: micro
production possibilities frontier:
A standard graph of the maximum amounts of two possible outputs that can be
made from a given list of input resources.
production set:
The set of possible input and output combinations. Often put into the
notation of netputs, so that this set can be defined by restrictions on a
collection of vectors with the dimension of the number of goods, one element
for each kind of good, and a positive or negative real quantity in each
element.
Contexts: general equilibrium; models
productivity:
A measure relating a quantity or quality of output to the inputs required to
produce it.
Often means labor productivity, which can be measured by quantity of output
per time spent or numbers employed. Could be measured in, for example, U.S.
dollars per hour.
Contexts: macro
productivity paradox:
Standard measures of labor productivity in the U.S. suggest that
computers, at least until 1995, were not improving productivity. The paradox
is the question: why, then, were U.S. employers investing more and more
heavily in computers?
Resolving the paradox probably requires an understanding of the gap between
what the productivity statistics measure and the goals of the U.S.
organizations getting computers. Sichel (1990), pp 33-36 lists these six:
- the mismanagement hypothesis is that managers underestimate the
costs of new computer technology, such as training, and therefore buy too many
computers for optimum short-run profitability
- the redistribution hypothesis is that private rates of return on
computers are high enough, but the effect is only to compete over business
with other firms in the same industry, which does not overall show greater
productivity; the analogy is to an arms race, in which both players invest
heavily but the overall effect is not to increase security
- the long learning-lags hypothesis is that information technology
will generate a substantial productivity effect when society is organized
around its availability, but it is too soon for that
- the mismeasurement hypothesis is that national economic accounts do
not tend to measure the services brought by information technology such as
quality, variety, customization, and convenience
- the offsetting factors hypothesis is that other factors unrelated
to computers have dragged down productivity measures
- the small share of computers in the capital stock hypothesis is
just that computers are too small a share of plant and equipment to make a
difference.
Two other hypotheses on this subject are:
- the externalities hypothesis is that computers in organization A
improve the long-run productivity of organization B but this is not
attributable in the national accounts to the computers in A.
- the reorganization hypothesis is that computers in a firm do not
raise much the quantity of capital stock but they cause a more productive long
run organization of the capital stock within that firm and a more efficient
split of tasks between that firm and other organizations.
Technophiles (such as this writer, or venture capitalists, or Silicon Valley
publications) and technology historians tend to believe in the long
learning-lags hypothesis, the mismeasurement hypothesis, the
externalities/network-effects hypothesis, and the reorganization hypothesis.
The gap in beliefs and understandings between technophiles and national
accounts and pricing experts, such as Sichel and Robert J. Gordon (see e.g.
the 1996 paper) is astonishing as of early 1999. They talk past one another.
The national accounts experts tend to take the labor/capital models more
seriously, and technology history less seriously, than do the technophiles.
The Federal Reserve Bank under Greenspan has piloted between these views.
Note, March 2002: The national accounts experts have now come around to the
view of the technophiles, and it is now commonly thought that the productivity
measure lags the other indicators in the boom.
Source: Sichel, Daniel E. 1997. The computer revolution: an economic
perspective. Brookings Institution Press, Washington D.C.
Paul David, May 1990, American Economic Review, p 355.
Contexts: macro; technology
proof:
A mathematical derivation from axioms, often in principle in the form of a
sequence of equations, each derived by a standard rule from the one
above.
Contexts: mathematics; phrases; modelling
propensity score:
An estimate of the probability that an observed entity, like a person, would
undergo the treatment. This probability is itself sometimes a predictor of
outcomes.
Contexts: empirical
proper equilibrium:
Any limit of epsilon-proper equilibria as epsilon goes to zero.
-- Myerson (1978), p 78
Source: Myerson, 1978, p 78, as cited by Pearce,
1984, p 1037
Contexts: game theory
property income:
Nominal revenues minus expenses for variable inputs including labor, purchased
materials, and purchased services. Property income can serve as an
approximation to the services rendered by capital.
It contains the returns to national wealth. It can be thought to include
technology and organizational components as well as 'pure' returns to
capital.
Source: Harper, 1999, p. 330
Contexts: macro; capital measurement
pseudoinverse:
Also called Moore-Penrose inverse. The pseudoinverse of any matrix exists, is
unique and satisfies four conditions shown on p 37 of Greene (1993).
Perhaps the most important case is when the matrix X has more rows than
columns, and X is of full column rank. Then the pseudoinverse of X is:
(X'X)^{-1}X'. Notice how much this equation looks like the equation
for the OLS estimator.
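A sketch in plain Python of that important case: a 3x2 full-column-rank X, its
pseudoinverse A = (X'X)^{-1}X', and a check of one Moore-Penrose condition,
XAX = X (the matrix values are made up):

```python
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]   # more rows than columns, full column rank

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Xt = [list(row) for row in zip(*X)]
(a, b), (c, d) = matmul(Xt, X)              # X'X, a 2x2 matrix
det = a * d - b * c
XtX_inv = [[d / det, -b / det], [-c / det, a / det]]

A = matmul(XtX_inv, Xt)                     # the pseudoinverse (X'X)^{-1} X'
XAX = matmul(matmul(X, A), X)               # Moore-Penrose: XAX should be X
```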
Source: Greene, 1993, p 37
Contexts: econometrics
PSID:
Panel Study of Income Dynamics. A data set often used in labor economics
studies. The data are from the U.S. and are put together at the University of
Michigan.
Since 1968 the PSID has followed and interviewed annually a national sample
that began with about 5000 families. Low-income families were over-sampled in
the original design. Interviews are usually conducted with the 'head' of each
family.
Includes a lot of income and employment variables, and continues to track
children who grow up and move out.
For more information see the PSID's Web site at
http://www.isr.mich.edu/src/psid/index.html
Contexts: labor; data
public economics:
A subfield that includes public goods and common pool resources,
and public finance meaning taxes and public borrowing and
spending.
Contexts: fields
public finance:
Relevant terms: AGI,
contingent valuation,
embedding effect,
existence value,
Lerman ratio,
Lucas critique,
nonuse value,
SCF,
Survey of Consumer Finances,
Tobin tax.
Contexts: fields
purchase accounting:
One of two ways to do the accounting for a U.S. firm after a merger. The
alternative is the pooling of interests.
Contexts: accounting
put option:
A put option is a security which conveys the right to sell a specified
quantity of an underlying asset at a fixed price (the strike price) at or
before a fixed date.
Contexts: finance; business
put-call parity:
A relationship between the price of a put option and a call
option on a stock according to a standard model.
Define:
r as the risk-free interest rate, constant over time, in an environment with
no liquidity constraints
S as a stock's price
t as the current date
T as the expiration date of a put option and a call option
K as the strike price of the put option and call option
C(S,t) as the price of the call option when the current stock price is S and
the current date is t
P(S,t) as the price of the put option when the current stock price is S and
the current date is t
Then the relationship is:
P(S,t) = C(S,t) - S + Ke^{-r(T-t)}
The relationship is derived from the fact that combinations of options can
make portfolios that are equivalent to holding the stock through time T, and
that they must return exactly the same amount or an arbitrage would be
available to traders.
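The relationship can be illustrated in two steps (editor's Python sketch; the numbers are made up): at expiry the payoffs themselves satisfy put = call - S_T + K for any terminal price S_T, and before expiry the same identity holds with K discounted:

```python
import math

# At expiry T, for any terminal stock price S_T:
#   max(K - S_T, 0) = max(S_T - K, 0) - S_T + K
def call_payoff(S_T, K):
    return max(S_T - K, 0.0)

def put_payoff(S_T, K):
    return max(K - S_T, 0.0)

K = 100.0
for S_T in [60.0, 100.0, 137.5]:
    assert put_payoff(S_T, K) == call_payoff(S_T, K) - S_T + K

# Before expiry, the parity-implied put price given a call price C:
#   P = C - S + K * exp(-r * (T - t))
def put_from_parity(C, S, K, r, T_minus_t):
    return C - S + K * math.exp(-r * T_minus_t)
```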
Contexts: finance
putting-out system:
"A condition for the putting-out system to exist was for labor to be paid a
piece wage, since working at home made the monitoring of time impossible."
Source: Joel Mokyr, NU working paper: "The rise and fall of the factory
system: technology, firms, and households since the industrial revolution"
Carnegie-Rochester Conference on macroeconomics, Nov 17-19, 2000.
putty-putty:
As in Romer, JPE, Oct 1990. This describes an attribute of capital in some
models. Putty-putty capital can be transformed into durable goods then back
into general, flexible capital. This contrasts
with putty-clay capital, which can be converted into durable goods
but cannot then be converted back into
re-investable capital. The algebraic modeler chooses one of these to make
an argument or arrive at a conclusion within the model. The term is not
normally interpreted empirically although empirical analogues to each kind
of capital exist.
Source: Romer, 1990
Contexts: macro; models
Q ratio:
Or, "Tobin's Q". The ratio of the market value of a firm to the
replacement cost of everything in the firm. In Tobin's model this was the
driving force behind investment decisions.
Contexts: macro; finance; models
Q-statistic:
Of Ljung-Box. A test for higher-order serial correlation in residuals from a
regression.
Source: RATS manual, p. 1-15
Contexts: time series; estimation; econometrics
QJE:
Quarterly Journal of Economics
Contexts: journals
QLR:
quasi-likelihood ratio statistic
Contexts: econometrics; time series
QML:
Stands for quasi-maximum likelihood.
Contexts: econometrics; estimation
quango:
Stands for quasi-non-governmental organization, such as the U.S. Federal
Reserve. The term is British.
Contexts: phrases
quantity theory of money:
There are multiple versions and interpretations of quantity theories of money.
A central idea is that if there is more money in an economic system, prices
will go up. Here, money includes cash and various kinds of credit
which serve as intermediaries in purchases.
A core basic expression of this theory is in an equation described by Patinkin
(?) in Leeson(2003) as cited by Samuels (2005). The quantity theory relates
these variables:
- the quantity of money (M)
- its velocity (V) (that is the speed with which money changes hands on
average)
- thus producing the aggregate demand for goods and services (MV)
- the price level (P)
- and the level of output of goods and services (T)
by this equation: MV=PT.
An implication of this construction is that the monetary authorities have some
control over prices, in a way that is not direct but through a whole system of
purchases, production, and consumption.
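The equation MV=PT can be rearranged to express the price level implied by the other three variables; a small Python illustration (editor's sketch, with made-up numbers):

```python
# Quantity theory: MV = PT, so the implied price level is P = MV / T.
def price_level(M, V, T):
    return M * V / T

# If the money stock doubles while velocity and output are unchanged,
# the implied price level doubles.
P0 = price_level(M=1000.0, V=4.0, T=8000.0)
P1 = price_level(M=2000.0, V=4.0, T=8000.0)
```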
Contexts: monetary
quartic kernel:
The quartic kernel is this function: (15/16)(1-u^{2})^{2} for
-1<u<1 and zero for u outside that range. Here u=(x-x_{i})/h,
where h is the window width and x_{i} are the values of the
independent variable in the data, and x is the value of the independent
variable for which one seeks an estimate.
For kernel estimation.
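A Python sketch of the kernel and its use as a local weight on observations (editor's illustration; the window width h and data are assumptions):

```python
# Quartic (biweight) kernel: (15/16)(1 - u^2)^2 on -1 < u < 1, else 0.
def quartic(u):
    return (15.0 / 16.0) * (1.0 - u * u) ** 2 if -1.0 < u < 1.0 else 0.0

# Weight of each observation x_i when estimating at the point x,
# with window width h.
def kernel_weights(x, xs, h):
    return [quartic((x - xi) / h) for xi in xs]
```

As a kernel should, the function integrates to one over (-1, 1) and gives zero weight to observations farther than h from x.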
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
quasi rents:
returns in excess of the short-run opportunity cost of the resources devoted
to the activity
Source: Jensen (86)
Contexts: micro; finance
quasi-differencing:
a process that makes GLS easier, computationally, in a fixed-effects kind of
case. One generates a (delta) with an equation [see B. Meyer's notes,
installment 2, page 3] then subtracts delta times the average of each
individual's x from the list of x's, and delta times each individual's y from
the list of y's, and can run OLS on that. The calculation of delta requires
some estimate of the idiosyncratic (epsilon) error variance and the individual
effects (mu) error variance.
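The standard random-effects form of the weight (editor's Python sketch; the exact expression in Meyer's notes may differ, and the variance inputs are assumptions) and the quasi-differencing step can be written:

```python
import math

# With T periods per individual, idiosyncratic error variance s2_eps, and
# individual-effect variance s2_mu, a standard quasi-differencing weight is
#   delta = 1 - sqrt(s2_eps / (s2_eps + T * s2_mu)).
def delta(T, s2_eps, s2_mu):
    return 1.0 - math.sqrt(s2_eps / (s2_eps + T * s2_mu))

# Subtract delta times the individual's own mean from each observation;
# OLS on the transformed x's and y's is then the GLS estimate.
def quasi_difference(series, d):
    mean = sum(series) / len(series)
    return [v - d * mean for v in series]
```

Note the limiting cases: with no individual effect (s2_mu = 0) delta is 0 and the procedure reduces to OLS; as s2_mu grows, delta approaches 1 and the procedure approaches the within (fixed-effects) transformation.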
Contexts: econometrics; estimation
quasi-hyperbolic discounting:
A way of accounting in a model for the difference in the preferences an agent
has over consumption now versus consumption in the future.
Let b and d be scalar real parameters greater than zero and less than one.
Events t periods in the future are discounted by the factor bd^{t}.
This formulation comes from a 1999 working paper of C. Harris and D. Laibson
which cites Phelps and Pollak (1968) and Zeckhauser and Fels (1968) for this
function.
Contrast hyperbolic discounting, and see more information on discount
rates at that entry.
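A Python sketch of the discount factors (editor's illustration; the convention that the factor at t=0 is 1 follows the usual beta-delta formulation, and the parameter values are made up):

```python
# Quasi-hyperbolic ("beta-delta") discounting: weight 1 on the present,
# b * d**t on events t >= 1 periods ahead, with 0 < b < 1 and 0 < d < 1.
def qh_factor(t, b=0.7, d=0.95):
    return 1.0 if t == 0 else b * d ** t
```

The b term produces short-run impatience: the one-period discount ratio between today and tomorrow (b*d) is smaller than the ratio between any two later adjacent periods (d).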
Source: "Dynamic choices of hyperbolic consumers", working paper by
Christopher Harris and David Laibson.
Contexts: models; macro; dynamic optimization
quasi-maximum likelihood:
Often abbreviated QML. Maximum likelihood estimation can't be applied to an
econometric model which has no assumption about error distributions, and may
be difficult if the model has assumptions about error distributions but the
errors are not normally distributed. Quasi-maximum likelihood is maximum
likelihood applied to such a model with the alteration that errors are
presumed to be drawn from a normal distribution. QML can often make
consistent estimates.
QML estimators converge to what can be called a quasi-true estimate; they have
a quasi-score function which produces quasi-scores, and a quasi-information
matrix. Each has maximum likelihood analogues.
Contexts: econometrics; estimation
quasiconcave:
A function f(x) mapping from the reals to the reals is quasiconcave if it is
nondecreasing for all values of x below some x_{0} and nonincreasing
for all values of x above x_{0}. x_{0} can be infinity or
negative infinity: that is, a function that is everywhere nonincreasing or
nondecreasing is quasiconcave.
Quasiconcave functions have the property that for any two points in the
domain, say x_{1} and x_{2}, the value of f(x) on all points
between them satisfies:
f(x) >= min{f(x_{1}), f(x_{2})}.
Equivalently, f() is quasiconcave iff -f() is quasiconvex.
Equivalently, f() is quasiconcave iff for any constant real k, the set of
values x in the domain of f() for which f(x) >= k is a convex set.
The most common use in economics is to say that a utility function is
quasiconcave, meaning that in the relevant range it is nondecreasing.
A function that is concave over some domain is also quasiconcave over that
domain. (Proven in Chiang, p 390).
A strictly quasiconcave utility function is equivalent to a strictly
convex set of preferences, according to Brad Heim and Bruce Meyer (2001) p.
17.
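The between-points property can be checked numerically; a Python sketch (editor's illustration, with made-up example functions):

```python
# Check the definitional property on a grid: for all x between x1 and x2,
# f(x) >= min(f(x1), f(x2)).
def is_quasiconcave_between(f, x1, x2, n=200):
    lo = min(f(x1), f(x2))
    return all(f(x1 + (x2 - x1) * i / n) >= lo - 1e-12 for i in range(n + 1))

f = lambda x: -(x - 1.0) ** 2          # concave, hence quasiconcave
h = lambda x: x ** 4 - 2.0 * x ** 2    # W-shaped: not quasiconcave
```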
Source: Simon and Blume, Chiang, pp 387-399
Contexts: real analysis
quasiconvex:
A function f(x) mapping from the reals to the reals is quasiconvex if it is
nonincreasing for all values of x below some x_{0} and nondecreasing
for all values of x above x_{0}. x_{0} can be infinity or
negative infinity: that is, a function that is everywhere nonincreasing or
nondecreasing is quasiconvex.
Quasiconvex functions have the property that for any two points in the domain,
say x_{1} and x_{2}, the value of f(x) on all points between
them satisfies:
f(x) <= max{f(x_{1}), f(x_{2})}.
Equivalently, f() is quasiconvex iff -f() is quasiconcave.
Equivalently, f() is quasiconvex iff for any constant real k, the set of
values x in the domain of f() for which f(x) <= k is a convex set.
A function that is convex over some domain is also quasiconvex over that
domain. (Proven in Chiang, p 390).
Source: Simon and Blume, Chiang, pp 387-399
Contexts: real analysis
quasilinear:
A utility function U() is quasilinear in one of its arguments, c, if a
monotonic transformation of U() has this form for some v():
U(c,x_{1},x_{2},...x_{k}) = c +
v(x_{1},x_{2},...x_{k})
[ed: Your editor no longer recalls why one would care if a utility function
has this property or not.]
Contexts: models; utility
R&D intensity:
Sometimes defined to be the ratio of expenditures by a firm on research and
development to the firm's sales.
Source: Levin, Richard C., Alvin K. Klevorick, Richard R. Nelson, Sidney G.
Winter. 1987. "Appropriating the Returns from Industrial Research and
Development." Brookings Papers on Economic Activity, volume 1987, issue
3. pp 783-820. (see p. 812 for the definition of this term.)
R-squared:
Usually written R^{2}. Is the square of the correlation coefficient
between the dependent variable and the estimate of it produced by the
regressors, or equivalently defined as the ratio of regression variance
to total variance.
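The two definitions can be verified to agree on a small made-up data set; a Python sketch (editor's illustration):

```python
# For simple OLS with an intercept, R^2 computed as (a) regression variance
# over total variance equals (b) the squared correlation of y with y_hat.
def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [a + b * xi for xi in x]

def r_squared(y, yhat):
    my = sum(y) / len(y)
    ss_reg = sum((f - my) ** 2 for f in yhat)   # regression sum of squares
    ss_tot = sum((yi - my) ** 2 for yi in y)    # total sum of squares
    return ss_reg / ss_tot

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / (sum((a - mu) ** 2 for a in u) ** 0.5 *
                  sum((b - mv) ** 2 for b in v) ** 0.5)
```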
Source: Kennedy, 1992; Greene,
1993, p 72
Contexts: econometrics
Ramsey equilibrium:
Results from a government's choice in certain kinds of models. Suppose that
the government knows how private sector producers will respond to any economic
environment, and that the government moves first, choosing some aspect of the
environment. Suppose further that the government makes its choice in order to
maximize a utility function for the population. Then the government's choice
is a Ramsey problem and its solution pays off with the Ramsey
outcome.
Contexts: macro
Ramsey outcome:
The payoffs from a Ramsey equilibrium.
Contexts: macro
Ramsey problem:
See Ramsey equilibrium.
random:
Not completely predetermined by the other variables available.
Examples: Consider the function plus(x,y) which we define to have the value
x+y. Every time one applies this function to a given x and y, it would give
the same answer. Such a function is deterministic, that is,
nonrandom.
Consider by contrast the function N(0,1) which we define to give back a draw
from a standard normal distribution. This function does not return the
same value every time, even when given the same parameters, 0 and 1. Such a
function is random, or stochastic.
Contexts: statistics; econometrics; time series
random effects estimation:
The GLS procedure in the context of panel data.
Fixed effects and random effects are forms of linear regression whose
understanding presupposes an understanding of OLS.
In a fixed effects regression specification there is a binary variable
(also called dummy or indicator variable) marking cross section units
and/or time periods. If there is a constant in the regression, one cross
section unit must not have its own binary variable marking it.
From Kennedy, 1992, p. 222:
"In the random effects model there is an overall intercept and an error
term with two components: e_{it} + u_{i}. The e_{it}
is the traditional error term unique to each observation. The u_{i}
is an error term representing the extent to which the intercept of the
ith cross-sectional unit differs from the overall intercept. . . . .
This composite error term is seen to have a particular type of
nonsphericalness that can be estimated, allowing the use of EGLS for
estimation. Which of the fixed effects and the random effects models is
better? This depends on the context of the data and for what the results are
to be used. If the data exhaust the population (say observations on all firms
producing automobiles), then the fixed effects approach, which produces
results conditional on the units in the data set, is reasonable. If the data
are a drawing of observations from a large population (say a thousand
individuals in a city many times that size), and we wish to draw inferences
regarding other members of that population, the fixed effects model is no
longer reasonable; in this context, use of the random effects model has the
advantage that it saves a lot of degrees of freedom. The random effects model
has a major drawback, however: it assumes that the random error associated
with each cross-section unit is uncorrelated with the other regressors,
something that is not likely to be the case. Suppose, for example, that wages
are being regressed on schooling for a large set of individuals, and that a
missing variable, ability, is thought to affect the intercept; since schooling
and ability are likely to be correlated, modeling this as a random effect will
create correlation between the error and the regressor schooling (whereas
modeling it as a fixed effect will not). The result is bias in the
coefficient estimates from the random effect model."
[Kennedy asserts, then, that fixed and random effects often produce very
different slope coefficients.]
The Hausman test is one way to distinguish which one makes
sense.
Source: Kennedy, 1992
Contexts: estimation; econometrics
random process:
Synonym for stochastic process.
Contexts: statistics; models; econometrics; time series
random variable:
A nondeterministic function. See random.
Contexts: statistics
random walk:
A random walk is a random process y_{t} like:
y_{t}=m+y_{t-1}+e_{t}
where m is a constant (the trend, often zero) and
e_{t} is white noise.
A random walk has infinite variance and a unit root.
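A Python sketch simulating this process (editor's illustration; the drift and noise scale are arbitrary choices):

```python
import random

# Simulate y_t = m + y_{t-1} + e_t starting from y_0 = 0,
# with e_t drawn as Gaussian white noise.
def random_walk(T, m=0.0, sigma=1.0, seed=0):
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(T):
        y = m + y + rng.gauss(0.0, sigma)
        path.append(y)
    return path
```

With sigma set to zero the path reduces to the deterministic trend m, m+m, 3m, ...; with noise, the variance of y_t grows with t without bound.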
Source: Greene, 1993, p 559
Contexts: econometrics; statistics; time series; models
Rao-Cramer inequality:
defines the Cramer-Rao lower bound, which see.
(would like to put equation from Hogg and Craig p 372 here)
Source: Hogg and Craig p 372
Contexts: econometrics; statistics; estimation
rational:
An adjective. Has several definitions:
(1) characterizing behavior that purposefully chooses means to achieve ends
(as in Landes, 1969/1993, p 21).
(2) characterizing preferences which are complete and transitive, and
therefore can be represented by a utility function (e.g. Mas-Colell).
(3) characterizing a thought process based on reason; sane; logical. Can be
used in regard to behavior. (e.g. American Heritage
Dictionary, p 1028)
Contexts: phrases; modelling
rational expectations:
An assumption in a model: that the agent under study uses a forecasting
mechanism that is as good as is possible given the stochastic processes and
information available to the agent.
Often in essence the rational expectations assumption is that the agent knows
the model, and fails to make absolutely correct forecasts only because of the
inherent randomness in the economic environment.
Contexts: macro
rational ignorance:
The option of an agent not to acquire or process information about some realm.
Ordinarily used to describe a citizen's choice not to pay attention to
political issues or information, because paying attention has costs in time
and effort, and the effect a citizen would have by voting per se is usually
zero.
Source: Downs (1957): An economic theory of democracy, according to
G.Miller, "The impact of economics on contemporary political
science", Sept 1997 JEL.
Contexts: phrases; political
rationalizable:
In a noncooperative game, a strategy of player i is rationalizable iff it is a
best response to a possible set of actions of the other players, where
those actions are best responses given beliefs that those other players might
have.
By rationalizable we mean that i's strategy can be justified in terms of the
other players choosing best responses to some beliefs (subjective probability
distributions) that they may be conjectured to have.
Nash strategies are rationalizable.
For a more formal definition see sources. This is a rough paraphrase.
Source: Bernheim, 1984, p 1014
Contexts: game theory
rationalize:
verb, meaning: to take an observed or conjectured behavior and find a model
environment in which that behavior is an optimal solution to an optimization
problem.
Contexts: phrases; modelling
RATS:
A computer program for the statistical analysis of data, especially time
series. Name stands for Regression Analysis of Time Series.
First chapter of its manual has a nice tutorial.
The software is made by Estima
Corp.
Contexts: data; estimation; code
RBC:
stands for Real Business Cycle (which see) -- a class of macro
theories
Contexts: models; macro
real analysis:
Relevant terms: affine,
B1,
Banach space,
Borel set,
Borel sigma-algebra,
Cauchy sequence,
compact,
convolution,
even function,
extended reals,
Frechet derivative,
Frechet differentiable,
functional,
Hilbert space,
Holder continuous,
inf,
L,
L,
L,
Lipschitz condition,
Lipschitz continuous,
lower hemicontinuous,
measurable,
measurable space,
measure,
narrow topology,
quasiconcave,
quasiconvex,
Riemann-Stieltjes integral,
sigma-algebra,
sup,
topological space,
topology,
upper hemicontinuous,
Weierstrauss Theorem.
Contexts: fields
real bills doctrine:
In the Great Depression of the 1930s, the Federal Reserve did not provide
sufficient money to prevent a series of banking panics. "The Fed erred, says
Meltzer, because it followed a theory (named 'real bills') that called for it
to create money only in response to higher loan demand; because loan demand
had collapsed, the Fed was too passive."
Source: from "Greenspan's Finest Hour?" by Robert J. Samuelson, published in
the Dec 15, 2003 Newsweek, page 39, which cites Allan Meltzer's book A
History of the Federal Reserve.
Contexts: monetary; economic history
real business cycle theory:
A class of theories building on the rational expectations idea (Muth, 1961)
and associated most with Finn Kydland and Edward Prescott. The idea is to
study business cycles with the assumption that they are driven entirely by
technology shocks rather than by monetary shocks or changes in expectations.
Shocks in government purchases are another kind of shock that can appear in a
pure real business cycle (RBC) model. Romer, 1996, p
151
Source: Among others: Romer, 1996, p 151
Contexts: macro; models
real externality:
An effect of production or transactions on outside parties that affects
something entering their production or utility functions directly.
Contexts: general equilibrium
recession:
A recession is conventionally defined as a period of at least two consecutive
quarters of negative real GDP growth.
Thus: a recession is a national or world event, by definition. And
statistical aberrations or one-time events can almost never create a
recession; e.g. a movement of economic activity (measured or real) around
Jan 1, 2000 could create the appearance of at most one quarter of negative
growth. For a recession to occur the real economy must decline.
Contexts: macro
reduced form:
The reduced form of an econometric model has been rearranged algebraically so
that each endogenous variable is on the left side of one equation, and only
predetermined variables (exogenous variables and lagged endogenous variables)
are on the right side.
Source: Johnston, p 440
Contexts: econometrics; estimation
regression function:
A regression function describes the relationship between dependent variable Y
and explanatory variable(s) X. One might estimate the regression function
m() in the econometric model
Y_{i} = m(X_{i}) + e_{i}
where the e_{i} are the residuals or errors. As presented that is a
nonparametric or semiparametric model, with few assumptions about m(). If one
were to assume also that m(X) is linear in X one would get to a standard
linear regression model:
Y_{i} = (X_{i})b + e_{i}
where the vector b could be estimated.
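One standard nonparametric way to estimate m() is a kernel average of the y's near each point (the Nadaraya-Watson estimator); a minimal Python sketch with a uniform kernel (editor's illustration, not taken from Hardle's text; the bandwidth h is an assumption):

```python
# Nadaraya-Watson estimate of m(x): the average of y_i over observations
# whose x_i lie within bandwidth h of the evaluation point x.
def nw_estimate(x, xs, ys, h):
    w = [1.0 if abs(x - xi) <= h else 0.0 for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

With a very large h the estimate collapses to the overall mean of y; with a small h it tracks the data locally.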
Source: Hardle, 1990
Contexts: econometrics; estimation
regrettables:
consumption items that do not directly produce utility, such as health
maintenance, transportation to work, and "waiting times"
Source: Glen Cain, Handbook article
Contexts: labor
Regulation Q:
A U.S. Federal Reserve System rule limiting the interest rates that U.S. banks
and savings and loan institutions could pay on deposits.
Source: Glasner, p. 162
Contexts: history; IO
reinsurance:
Insurance purchased by an insurer, often to protect against especially large
risks or risks correlated to other risks the insurer faces.
Contexts: business
rejection region:
In hypothesis testing. Let T be a test statistic. Possible values of T can
be divided into two regions, the acceptance region and the rejection region.
If the value of T comes out to be in the acceptance region, the null
hypothesis H_{0} (the set of restrictions being tested) is
accepted, or at any rate not rejected. If T falls in the rejection region,
the null hypothesis is rejected.
The terms 'acceptance region' and 'rejection region' may also refer to the
subsets of the sample space that would produce statistics T that go into the
acceptance region or rejection region as defined above.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
rent-seeking:
Rent-seeking means a search for extraordinary profits, beyond the normal
returns to investment. It often implies putting someone else at a
disadvantage. See rents and quasi-rents.
rents:
Rents are returns in excess of the opportunity cost of the resources devoted
to the activity.
Source: Jensen (86)
Contexts: micro; finance
resale price maintenance:
The effect of rules imposed by a manufacturer on wholesale or retail resellers
of its own products, to prevent them from competing too fiercely on price and
thus driving profits down from the reselling activity. The manufacturer may
do this because it wishes to keep resellers profitable. Such contract
provisions are usually legal under US law but have not always been allowed
since they formally restrict free trade.
With thanks to: Jonathan G. Powers (jgp423@northwestern.edu as of 12 July
2000)
Contexts: industrial organization
reservation wage property:
A model has the reservation wage property if agents seeking employment in the
model accept all jobs paying wages above some fixed value and reject all jobs
paying less.
Contexts: labor; macro
residual claimant:
The agent who receives the remainder of a random amount once predictable
payments are made.
The most common example: consider a firm with revenues, suppliers, and holders
of bonds it has issued, and stockholders. The suppliers receive the
predictable amount they are owed. The bondholders receive a predictable
payout -- the debt, plus interest. The stockholders can claim the residual,
that is, the amount left over. It may be a negative amount, but it may be
large. The same idea of a residual claimant can be applied in analyzing other
contracts.
There is a historical link to theories about wages; see
http://britannica.com/bcom/eb/article/9/0,5716,109009+6+106209,00.html
Contexts: corporate finance
resiliency:
An attribute of a market.
In securities markets, depth is measured by "the speed with which prices
recover from a random, uninformative shock." (Kyle, 1985, p
1316).
Source: Kyle, 1985, p 1316
Contexts: finance
ReStat:
An abbreviation for the Review of Economics and Statistics.
Contexts: journals
restricted estimate:
An estimate of parameters taken with the added requirement that some
particular hypothesis about the parameters is true. Note that the fit of a
restricted estimation can never be better than that of an unrestricted one;
e.g. the restricted residual sum of squares can never be lower.
Contexts: econometrics; estimation
restriction:
assumption about parameters in a model
Contexts: econometrics; estimation
ReStud:
An abbreviation for the journal
Review of Economic Studies.
Contexts: journals
revelation principle:
That truth-telling, direct revelation mechanisms can generally be designed to
achieve the Nash equilibrium outcome of other mechanisms; this can be
proven in a large category of mechanism design cases.
Relevant to a modelling (that is, theoretical) context with:
-- two players, usually firms
-- a third party (usually the government) managing a mechanism to
achieve a desirable social outcome
-- incomplete information -- in particular, the players have types that are
hidden from the other player and from the government.
Generally a direct revelation mechanism (that is, one in which the strategies
are just the types a player can reveal about himself) in which telling the
truth is a Nash equilibrium outcome can be proven to exist and be equivalent
to any other mechanism available to the government. That is the revelation
principle. It is used most often to prove something about the whole class of
mechanism equilibria, by selecting the simple direct revelation mechanism,
proving a result about that, and applying the revelation principle to assert
that the result is true for all mechanisms in that context.
Source: Kreps, 1990, p 691, 694
Contexts: micro theory
Ricardian proposition:
that tax financing and bond financing of a given stream of government
expenditures lead to equivalent allocations. This is the Modigliani-Miller
theorem applied to the government.
Contexts: macro; models
ridit scoring:
A way of recoding variables in a data set so that one has a measure not of
their absolute values but their positions in the distribution of observed
values. Defined in this broadcast to the list of Stata users:
Date: Sat, 20 Feb 1999 14:13:35 +0000
From: Ronan Conroy
Subject: Re: statalist: Standardizing Variables
Paul Turner said (19/2/99 9:54 pm)
>I have two variables--X1 and X2--measured on ordinal scales. X1 ranges
>from 0 to 10; X2 ranges from 0 to 12. What I want to do is to standardize
>X1 and X2 to a common metric in order to explore how differences between
>the two affect the dependent variable of interest. Converting values to
>percentages of the maximum values (10 and 12) is the first approach that
>occurs to me, but I don't know if there's something I'm forgetting
This sort of thing is possible, and called ridit scoring. You replace
each of the original scale points with the percentage (or proportion) of
the sample who scored at or below that value. This gives the scales a
common interpretation as percentiles of the sample, and means that they
are now expressed on an interval metric, though the data are still grainy.
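The recoding described in the post can be sketched in Python (editor's illustration; this follows the 'at or below' convention quoted above, whereas some definitions instead count only half of the ties):

```python
# Ridit scoring: replace each value with the proportion of the sample
# scoring at or below that value.
def ridit(values):
    n = len(values)
    return [sum(1 for w in values if w <= v) / n for v in values]
```

For example, the sample [1, 2, 2, 3] recodes to [0.25, 0.75, 0.75, 1.0], so two scales with different ranges become comparable as sample percentiles.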
Source: Ronan Conroy (rconroy@rcsi.ie)
Contexts: data; estimation
Riemann-Stieltjes integral:
A generalization of regular Riemann integration.
Let | denote the integral sign. Quoting from Priestly:
"...when we have two deterministic functions g(t),F(t), the
Riemann-Stieltjes integral
R = |_{a}^{b} g(t)dF(t)
is defined as the limiting value of the discrete summation"
(sum from i=1 to i=n of)
g(t_{i})[F(t_{i})-F(t_{i-1})]
for t_{1}=a and t_{n}=b as n goes to infinity and "as
max(t_{i}-t_{i-1})->0."
If F(t) is differentiable, then the above integral is the same as the regular
integral R=|_{a}^{b} g(t)F'(t) dt, but the Riemann-Stieltjes
integral can be defined in many cases even when F() is not
differentiable.
One of the most common uses is when F() is a cdf.
Examples: The expectation of a random variable can be written:
mu=| xf(x) dx
if f(x) is the pdf. It can also be written:
mu=| x dF(x)
where F(x) is the cdf. The two are equivalent for a continuous distribution,
but notice that for a discrete one (e.g. a coin flip, with X=0 for heads and
X=1 for tails) the second, Riemann-Stieltjes, formulation is well defined but
no pdf exists to calculate the first one.
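The defining discrete sum can be computed directly; a Python sketch (editor's illustration) that handles both a differentiable F and the coin-flip cdf from the example above:

```python
# Discrete sum defining the Riemann-Stieltjes integral of g with respect
# to F over [a, b], on a uniform grid of n steps.
def rs_integral(g, F, a, b, n=20000):
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(g(ts[i]) * (F(ts[i]) - F(ts[i - 1])) for i in range(1, n + 1))

# cdf of a fair coin flip with X=0 for heads and X=1 for tails;
# no pdf exists, but the Riemann-Stieltjes mean is still defined.
def coin_cdf(t):
    return 0.0 if t < 0.0 else (0.5 if t < 1.0 else 1.0)
```

For differentiable F(t)=t^2 with g(t)=t on [0,1] the sum approximates the ordinary integral of 2t^2, i.e. 2/3; for the coin cdf it picks up the jumps and returns the mean, 1/2.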
Source: Priestly, 1981, 1994, p 155
Contexts: econometrics; time series; real analysis; statistics
risk:
If outcomes will occur with known or estimable probability the decisionmaker
faces a risk. Certainty is a special case of risk in which this probability
is equal to zero or one. Contrast uncertainty.
Source: J. Montgomery, social networks paper
Contexts: models
risk free rate puzzle:
See equity premium puzzle.
Contexts: finance; macro; phrases
RJE:
An abbreviation for the RAND Journal of Economics, which was previously
called the Bell Journal of Economics.
Contexts: journals
RMPY:
Stands for a standard VAR run on standard data, with interest rates (R), money
stock (M), inflation (P), and output (Y). In Faust and Irons (1996), these
are operationalized by the three-month Treasury bill rate, M2, the CPI, and
the GNP.
Source: Jon Faust and John Irons (1996), "Money, politics, and the
post-War business cycle"
Contexts: macro
Robinson-Patman Act:
U.S. legislation of 1936 which made rules against price discrimination by
firms. Agitation by small grocers was a principal cause of the law. They
were under competitive pressure and displaced by the arrival of chain stores.
The Act is thought by many to have prevented reasonable price competition,
since it made many pricing actions illegal per se. For many of its
provisions, 'good faith' was not a permitted defense. So it can be argued
that it was confusing, vague, unnecessarily restrictive, and designed to
prevent some competitors in retailing from being driven out rather than to
further social welfare generally, e.g. by allowing pricing decisions that
would benefit consumers.
Other causes: glitches in an earlier law, the Clayton Act.
Contexts: IO; history
robust smoother:
A robust smoother is a smoother (an estimator of a regression function) that
gives lower weights to datapoints that are outliers in the
y-direction.
Source: Hardle, 1990
Contexts: econometrics; estimation
Roll critique:
That the CAPM may appear to be rejected in tests not because it is
wrong but because the proxies for the market return are not close enough to
the true market portfolio available to investors.
Contexts: finance
roughness penalty:
A loss function that one might incorporate into an estimate of a function to
prevent the estimated function from matching the data closely but at the cost
of jerkiness. See 'spline smoothing' and 'cubic spline' for example uses.
An example roughness penalty would be LI[m"(u)]^{2}du, where L is
a 'smoothing parameter', I stands for the integral sign, m"() is the
second derivative of the estimated function, and u is a dummy variable that
ranges over the domain of the estimated function.
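A finite-difference Python sketch of such a penalty (editor's illustration; the grid, the smoothing parameter, and the example functions are made up):

```python
# Numerical version of the penalty L * integral of m''(u)^2 du over [a, b],
# using a central finite-difference approximation to the second derivative.
def roughness(m, a, b, L, n=2000):
    h = (b - a) / n
    total = 0.0
    for i in range(1, n):
        u = a + i * h
        m2 = (m(u + h) - 2.0 * m(u) + m(u - h)) / (h * h)  # approx m''(u)
        total += m2 * m2 * h
    return L * total
```

A straight line has zero penalty (no curvature), while m(u) = u^2 on [0,1] has m'' = 2 everywhere, giving a penalty of about 4L.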
Source: Hardle, 1990, circa page 56
Contexts: econometrics; estimation
Rybczynski theorem:
Paraphrasing from Hanson and Slaughter (1999): in the context of a
Heckscher-Ohlin model of international trade with open trade between
regions, changes in relative factor supplies can lead to adjustments in the
quantities and types of outputs across regions that return the system toward
equality of production input prices, such as wages, across countries (the
state of factor price equalization).
Such theorems are named this way by analogy to Rybczynski (1955), and refer to
that part of the mechanism that has to do with output adjustments.
Source: Hanson, Gordon H., and Matthew J. Slaughter, "The Rybczinski
theorem,
factor-price equalization, and immigration: evidence from U.S. states,"
NBER working paper 7074, April 1999. On Web at http://www.nber.org/papers/w7074
Rybczynski, T.M. 1955. "Factor endowments and relative commodity
prices." Economica 22: 336-341.
Contexts: trade; international
S-Plus:
Statistical software published by
Mathsoft.
Contexts: data; estimation
s.t.:
An abbreviation meaning "subject to" or "such that", where
constraints follow.
In a common usage:
max_{x} f(x) s.t. g(x)=0
The above expression, in words, means: "The value of f(x) that is
greatest among all those for which the argument x satisfies the constraint
that g(x)=0." (Here f() and g() are fixed, possibly known, real-valued
functions of x.)
Contexts: notation
saddle point:
In a second-order [linear difference equation] system, ... if one root has
absolute value greater than one, and the other root has absolute value less
than one, then the steady state of the system is called a saddle point.
In this case, the system is unstable for almost all initial conditions. The
exception is the set of initial conditions that begin on the eigenvector
associated with the stable eigenvalue.
Source: Farmer, p. 30
Contexts: macro; dynamical systems; models
Sargan test:
A test of the validity of instrumental variables. It is a test of the
overidentifying restrictions. The hypothesis being tested is that the
instrumental variables are uncorrelated with some set of residuals, and are
therefore acceptable instruments.
If the null hypothesis is confirmed statistically (that is, not rejected), the
instruments pass the test; they are valid by this criterion.
In the Shi and Svensson working paper (which shows that elected national
governments in 1975-1995 had larger fiscal deficits in election years,
especially in developing countries), the Sargan statistic was asymptotically
distributed chi-squared if the null hypothesis were true.
See test of identifying restrictions, which is not exactly the same
thing, I think.
Source: Shi and Svensson; Harvard University working paper circa 2000
Contexts: econometrics
SAS:
Statistical analysis software. SAS web
site
Contexts: data; estimation
scale economies:
Same as economies of scale.
Contexts: production theory
scatter diagram:
A graph of unconnected points of data. If there are many of them the result
may be 'clouds' of data which are hard to interpret; in such a case one might
want to use a nonparametric technique to estimate a regression
function.
Source: Greene, 1993, p 88
Contexts: econometrics; estimation
scedastic function:
Given an independent variable x and a dependent variable y, the scedastic
function is the conditional variance of y given x. That variance of the
conditional distribution is:
var[y|x] = E[(y-E[y|x])^{2}|x]
= integral or sum of (y-E[y|x])^{2}f(y|x) dy
= E[y^{2}|x] - (E[y|x])^{2}.
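The equivalence of the two expressions can be checked numerically. Below is a Python sketch by this editor (not from Greene); the discrete joint distribution is hypothetical:

```python
# Hypothetical discrete joint distribution: P(x, y)
joint = {(0, 1.0): 0.2, (0, 3.0): 0.2, (1, 2.0): 0.3, (1, 6.0): 0.3}

def scedastic(joint, x):
    # var[y|x] computed both ways, from f(y|x) = P(x, y) / P(x)
    px = sum(p for (xi, _), p in joint.items() if xi == x)
    f = {y: p / px for (xi, y), p in joint.items() if xi == x}
    e_y = sum(y * p for y, p in f.items())                # E[y|x]
    v1 = sum((y - e_y) ** 2 * p for y, p in f.items())    # E[(y-E[y|x])^2|x]
    v2 = sum(y * y * p for y, p in f.items()) - e_y ** 2  # E[y^2|x]-(E[y|x])^2
    return v1, v2

v1, v2 = scedastic(joint, 1)
print(v1, v2)  # the two expressions agree: 4.0 4.0
```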
Source: Greene, 1993, pp 68-69
Contexts: econometrics
SCF:
Stands for Survey of Consumer Finances.
Contexts: public finance; labor
Schumpeterian growth:
Paraphrasing from Mokyr (1990): Schumpeterian growth is economic growth
brought about by increases in knowledge, most of which is called
technological progress.
Source: Mokyr, 1990, p. 4-6
Contexts: history; macro
Schwarz Criterion:
A criterion for selecting among formal econometric models. The Schwarz
Criterion is a number:
T ln (RSS) + K ln(T)
The criterion is minimized over choices of K to form a tradeoff between the
fit of the model (which lowers the sum of squared residuals) and the model's
complexity, which is measured by K. Thus an AR(K) model versus an AR(K+1) can
be compared by this criterion for a given batch of data.
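The criterion is easy to compute. A Python sketch by this editor (the RSS figures are hypothetical: the larger model lowers the sum of squared residuals slightly, but not by enough to justify the extra parameter):

```python
import math

def schwarz_criterion(rss, T, K):
    # T ln(RSS) + K ln(T); lower values are preferred
    return T * math.log(rss) + K * math.log(T)

# Hypothetical fits on T = 100 observations
sc_small = schwarz_criterion(rss=50.0, T=100, K=2)
sc_large = schwarz_criterion(rss=49.5, T=100, K=3)
print(sc_small < sc_large)  # True: the smaller model is preferred here
```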
Source: RATS manual, p. 5-18
Contexts: econometrics; time series; models
Scitovsky paradox:
The problem that some ways of aggregating social welfare may make it possible
that a switch from allocation A to allocation B seems like an improvement in
social welfare, but so does a move back. (An example may be Condorcet's
voting paradox.)
Scitovsky, T., 1941, "A Note on Welfare Propositions in Economics", Review
of Economic Studies, Vol 9, Nov 1941, pp 77-88.
The Scitovsky criterion (for a social welfare function?) is that the Scitovsky
paradox not exist.
Source: Boadway 1974; Scitovsky
Contexts: public
score:
In maximum likelihood estimation, the score vector is the gradient of the
log-likelihood function with respect to the parameters. So it has the same
number of elements as the parameter vector does (often denoted k). The score
is a random variable; it's a function of the data. It has expectation zero,
and is set to zero exactly for a given sample in the maximum likelihood
estimation process.
Denoting the score as S(q), and the log-likelihood
function as ln L(q), where in both cases the data are
also implied arguments:
S(q) = d ln L(q)/dq
Example: In OLS regression of Y_{t}=X_{t}b+e_{t}, the score for each possible parameter
value, b, is
X_{t}'e_{t}(b).
The variance of the score is E[score^{2}]-(E[score])^{2},
which is E[score^{2}] since E[score] is zero. E[score^{2}] is
also called the information matrix and is denoted I(q).
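The OLS example can be illustrated by summing X_{t}'e_{t}(b) over observations. A Python sketch by this editor, assuming normal errors (so the score in b is proportional to X'e(b)); the data are made up:

```python
# Made-up data satisfying y = 1 + 2x exactly
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1.0, 3.0, 5.0, 7.0]

def score(b, X, y):
    # S(b) = sum over t of X_t' e_t(b), with e_t(b) = y_t - X_t b
    k = len(b)
    s = [0.0] * k
    for Xt, yt in zip(X, y):
        e = yt - sum(x * bj for x, bj in zip(Xt, b))
        for j in range(k):
            s[j] += Xt[j] * e
    return s

print(score([1.0, 2.0], X, y))  # at the ML/OLS estimate the score is zero
```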
Contexts: econometrics; estimation
screening game:
A game in which an uninformed player offers a menu of choices to the
player with private information (the informed player). The selection
of the elements of that menu (which might be, for example, employment
contracts containing pairs of pay rates and working hours) is a choice for the
uninformed player to optimize on the basis of expectations about the
possible types of the informed player.
Contexts: game theory
second moment:
The second moment of a random variable is the expected value of the square of
the draw of the random variable. That is, the second moment is
EX^{2}. Same as 'uncentered second moment' as distinguished from the
variance which is the 'centered second moment.'
Contexts: econometrics; statistics
Second Welfare Theorem:
A Pareto efficient allocation can be achieved by a Walrasian equilibrium if
every agent has a positive quantity of every good, and preferences are convex,
continuous, and strictly increasing.
(My best understanding of 'convex preferences' is that it means 'concave
utility function'.)
Contexts: general equilibrium; models
secular:
an adjective meaning "long term" as in the phrase "secular
trends." Outside the research context its more common meaning is 'not
religious'.
seigniorage:
Alternate spelling for seignorage.
seignorage:
"The amount of real purchasing power that [a] government can extract from
the public by printing money." -- Cukierman 1992
Explanation: When a government prints money, it is in essence borrowing
interest-free since it receives goods in exchange for the money, and must
accept the money in return only at some future time. It gains further if
issuing new money reduces (through inflation) the value of old money by
reducing the liability that the old money represents. These gains to a
money-issuing government are called "seignorage" revenues.
The original meaning of seignorage was the fee taken by a money issuer (a
government) for the cost of minting the money. Money itself, at that time,
was intrinsically valuable because it was made of metal.
Source: Cukierman, 1992
Contexts: money
self-generating:
Given an operator B() that operates on sets, a set W is self-generating if W
is contained in B(W).
This definition is in Sargent (98) and may come from Abreu, Pearce, and
Stacchetti (1990).
semi-nonparametric:
synonym for semiparametric.
Contexts: econometrics
semi-strong form:
Can refer to the semi-strong form of the efficient markets hypothesis, which
is that any public information about a security is fully reflected in its
current price.
Fama (1991) says that a more common and current name for tests of the
semi-strong form hypothesis is 'event studies.'
Source: Fama, 1970, p 404
Contexts: finance
semilog:
The semilog equation is an econometric model:
Y = e^{a+bX+e}
or equivalently
ln Y = a + bX + e
Commonly used to describe exponential growth curves. (Greene 1993, p
239)
Source: Greene, 1993, p 239
Contexts: econometrics
semiparametric:
An adjective that describes an econometric model with some components that are
unknown functions, while others are specified as unknown finite dimensional
parameters.
An example is the partially linear model.
Source: Tripathi, 1996, p 2
Contexts: econometrics, statistics
senior:
Debts may vary in the order in which they must legally be paid in the event of
bankruptcy of the individual or firm that owes the debt. The debts that must
be paid first are said to be senior debts.
Contexts: finance
SES:
socioeconomic status
Contexts: labor; sociology
shadow price:
In the context of a maximization problem with a constraint, the shadow price
on the constraint is the amount that the objective function of the maximization
would increase by if the constraint were relaxed by one unit.
The value of a Lagrangian multiplier is a shadow price.
This is a striking and useful fact, but takes some practice to
understand.
Source: Layard and Glaister, p 8
shakeout:
A period when the failure rate or exit rate of firms from an industry is
unusually high.
Source: Philip Anderson and Michael L. Tushman, Research-Technology
Management, May/June 1991, pp. 26-31.
Contexts: IO; business history
sharing rule:
A function that defines the split of gains between a principal and agent. The
gains are usually profits, and the split is usually a linear rule that gives a
fraction to the agent.
For example, suppose profits are x, which might be a random variable. The
principal and agent might agree, in advance of knowing x, on a sharing rule
s(x). Here s(x) is the amount given to the agent, leaving the principal with
the residual gain x-s(x).
Source: very roughly taken from Holmstrom (82)
Contexts: game theory; micro theory; models
Sharpe ratio:
Computed in the context of the Sharpe-Lintner CAPM. Defined for an asset
portfolio a that has mean m_{a}, standard
deviation s_{a}, and with risk-free rate
r_{f} by:
[m_{a}-r_{f}]/s_{a}
Higher Sharpe ratios are more desirable to the investor in this model.
The Sharpe ratio is a synonym for the "market price of risk."
Empirically, for the NYSE, the Sharpe ratio is in the range of .30 to
.40.
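The computation is a one-liner. A Python sketch by this editor; the portfolio figures below are hypothetical:

```python
def sharpe_ratio(m_a, r_f, s_a):
    # [m_a - r_f] / s_a
    return (m_a - r_f) / s_a

# Hypothetical portfolio: 11% mean return, 5% risk-free rate, 15% stdev
print(round(sharpe_ratio(0.11, 0.05, 0.15), 2))  # 0.4, the top of the
# .30-.40 range cited above
```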
Contexts: finance
SHAZAM:
Econometric software published at the University of British Columbia. See
http://shazam.econ.ubc.ca.
Contexts: data; estimation
Shephard's lemma:
The cost-minimizing demand for a factor equals the derivative of the cost
function with respect to that factor's price:
x_{i}(w,y) = dc(w,y)/dw_{i}
where c(w,y) is the minimum cost of producing output y at factor price
vector w, and x_{i}(w,y) is the conditional demand for factor i.
Source: Shephard (1953) cited in Zegeye (2001)
Sherman Act:
1890 U.S. antitrust law. It has been described as vague, leading to ambiguous
interpretations over the years.
Section one of the law forbids certain joint actions: "Every contract,
combination in the form of trust or otherwise, or conspiracy, in restraint of
trade or commerce among the several states, or with foreign nations, is hereby
declared illegal...."
Section two of the law forbids certain individual actions: "Every person
who shall monopolize, or attempt to monopolize, or combine or conspire with
any other person or persons, to monopolize any part of the trade or commerce
among the several states, or with foreign nations, shall be deemed guilty of a
felony..."
The reasons for the passage of the Sherman Act:
(1) To promote competition to benefit consumers,
(2) Concern for injured competitors,
(3) Distrust of concentration of power.
Source: lectures and handouts of Michael Whinston at Northwestern U in
Economics D50, Winter 1998
Contexts: IO; antitrust; regulation
short rate:
Abbreviation for 'short term interest rate'; that is, the interest rate
charged (usually in some particular market) for short term loans.
Contexts: finance
Shubik model:
A theoretical model designed to study the behavior of money. There are N
goods traded in N(N-1) markets, one for each possible combination of good i
and good j that could be exchanged. One assumes that only N of these markets
are open; that good 0, acting as money, is traded for each of the other
commodities but they are not exchanged for one another. Then one can study
the behavior of the money good.
Contexts: money; models
SIC:
Standard Industrial Classification code -- a four-digit number assigned to
U.S. industries and their products. By "two-digit industries" we
mean a coarser categorization, grouping the industries whose first two digits
are the same.
Contexts: IO; data
sieve estimators:
Estimators that use flexible basis functions to approximate the function
being estimated. Orthogonal series, splines, and neural networks may be
examples. Donald (1997) and Gallant and Nychka (1987) may have more
information.
Source: Chen, Xiaohong, and Timothy G. Conley, "A semiparametric spatial
model for panel time series," preliminary Northwestern University working
paper, January 1999.
Donald, Stephen G. 1997. "Inference concerning the number of factors in
a multivariate nonparametric relationship" Econometrica
65:103-132.
Gallant, A.R. and Nychka, D. 1987. "Semi-non-parametric maximum
likelihood estimation" Econometrica 55:363-390.
Contexts: econometrics
sigma-algebra:
A collection of sets that satisfy certain properties with respect to their
union. (Intuitively, the collection must include any result of
complementations, unions, and intersections of its elements. The effect is to
define properties of a collection of sets such that one can define probability
on them in a consistent way.) Formally:
Let S be a set and A be a collection of subsets of S.
A is a sigma-algebra of S if:
(i) the null set and S itself are members of A
(ii) the complement of any set in A is also in A
(iii) countable unions of sets in A are also in A.
It follows from these that a sigma-algebra is closed under complementation
and under countable unions and intersections.
Contexts: math; measure theory; real analysis
signaling game:
A game in which a player with private information (the informed
player) sends a signal of his private type to the uninformed player before
the uninformed player makes a choice. An example: a candidate worker might
suggest to the potential employer what wage is appropriate for himself in a
negotiation.
Contexts: game theory
significance:
A finding in economics may be said to be of economic significance (or
substantive significance) if it shows a theory to be useful or not
useful, or if has implications for scientific interpretation or policy
practice (McCloskey and Ziliak, 1996). Statistical significance is a
property of the probability that a given finding could have been produced by
a stated model at random: see significance level.
These meanings are different but sometimes overlap. McCloskey and Ziliak
(1996) have a substantial discussion of them. Ambiguity is common in
practice, but not hard to avoid. (Editorial comment follows.) When the
second meaning is intended, use the phrase "statistically
significant" and refer to a level of statistical significance or a
p-value. Avoid the aggressive word "insignificant" unless it
is clear whether the word is to be taken to mean substantively
insignificant or not statistically significant.
Source: McCloskey, Deirdre N., and Stephen T. Ziliak. "The standard
error of regressions," Journal of Economic Literature vol XXXIV
(March 1996), pp 97-114.
Contexts: phrases; statistics; econometrics; estimation
significance level:
The significance level of a test is the probability that the test statistic
will reject the null hypothesis when the [hypothesis] is true. Significance
is a property of the distribution of a test statistic, not of any particular
draw of the statistic.
Synonymous with statistical 'size' of a test.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
simulated annealing:
A method of finding optimal values numerically. Simulated annealing is a
search method as opposed to a gradient based algorithm. It chooses a new
point, and (for optimization) all uphill points are accepted while some
downhill points are accepted depending on a probabilistic criterion.
Unlike the simplex search method provided by Matlab, simulated annealing may
allow "bad" moves, thereby allowing escape from a local max. The
value of a move is evaluated according to a temperature criterion (which
essentially determines whether the algorithm is in a "hot" area of
the function).
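A minimal Python sketch by this editor (not from the cited sources; the objective function, proposal rule, and geometric cooling schedule are all made-up choices for illustration):

```python
import math
import random

def anneal(f, x0, temp=1.0, cooling=0.999, steps=4000, seed=0):
    # Maximize f: always accept uphill moves; accept downhill moves with
    # probability exp((f(y) - f(x)) / temp), so the search can escape a
    # local max while the temperature is still "hot".
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        y = x + rng.uniform(-1.0, 1.0)   # propose a nearby point
        fy = f(y)
        if fy >= fx or rng.random() < math.exp((fy - fx) / temp):
            x, fx = y, fy                # move, possibly downhill
        if fx > fbest:
            best, fbest = x, fx
        temp *= cooling                  # cool down gradually
    return best, fbest

# Two local maxima, near x = -1 and x = +1; the global max is near +1
f = lambda x: 2.0 - (x * x - 1.0) ** 2 + 0.5 * x
best, fbest = anneal(f, x0=-1.0)
print(best)  # should land near the global max, around x = 1.06
```

Starting at the lesser local max near x = -1, the search can accept downhill moves early on and cross the valley to the global max, which a pure hill-climber would miss.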
Source: Mark Manuszak (as of 5/25/99, mdm080@nwu.edu of Northwestern Univ);
Matlab documentation
Contexts: optimization; estimation; numerical techniques
simultaneous equation system:
By "system" is meant that there are multiple, related, estimable equations.
By "simultaneous" is meant that the quantities are jointly determined, each
appearing in the equations that determine the others.
An example, from Greene (1993, p. 579), of market
equilibrium:
q_{d}=a_{1}p+a_{2}+e_{d} (Demand equation)
q_{s}=b_{1}p+e_{s} (Supply equation)
q_{d}=q_{s}=q
Here the quantity supplied is q_{s}, quantity demanded is
q_{d}, price is p, the e's are errors or residuals, and the a's and
b's are parameters to be estimated. We have data on p and q, but the
quantities supplied and demanded are not separately observed.
Contexts: econometrics; estimation
single-crossing property:
Distributions with cdfs F and G satisfy the single-crossing property if there
is an x_{0} such that:
F(x) >= G(x) for x<=x_{0}
and
G(x) >= F(x) for x>=x_{0}
Contexts: models; statistics
sink:
"In a second-order [linear difference equation] system, if both roots are
positive and less than one, then the system converges monotonically to the
steady state. If the roots are complex and lie inside the unit circle then
the system spirals into the steady state. If at least one root is negative,
but both roots are less than one in absolute value, then the system will flip
from one side of the steady state to the other as it converges. In all of
these cases the steady state is called a sink." Contrast
'source'.
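The classifications in this entry and in 'source' and 'saddle point' can be summarized in a short Python sketch (an illustration by this editor, not from Farmer):

```python
def classify(root1, root2):
    # Classify the steady state of a second-order linear difference
    # equation system by the moduli of its two roots; abs() gives the
    # modulus, so complex roots are handled too.
    a, b = abs(root1), abs(root2)
    if a < 1 and b < 1:
        return "sink"
    if a > 1 and b > 1:
        return "source"
    if a < 1 < b or b < 1 < a:
        return "saddle point"
    return "borderline"

print(classify(0.5, -0.8))               # sink
print(classify(1.2, 0.5))                # saddle point
print(classify(0.3 + 0.4j, 0.3 - 0.4j))  # sink: complex, modulus 0.5
```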
Source: Farmer, p. 29
Contexts: macro; dynamical systems; models
SIPP:
The U.S.
Survey of Income and Program
Participation, which is conducted by the U.S. Census Bureau.
A tutorial is at:
http://www.bls.census.gov/sipp/tutorial/SIPP_Tutorial_Beta_version/LAUNCHtutorial.html
Source:
http://www.sipp.census.gov/sipp
With thanks to: Dan Levy
Contexts: data
size:
A synonym for significance level.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
skewness:
An attribute of a distribution. A distribution that is symmetric around its
mean has skewness zero, and is 'not skewed'. Skewness is calculated as
E[(x-mu)^{3}]/s^{3} where mu is the mean and s is the standard
deviation.
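The formula is straightforward to compute. A Python sketch by this editor, using population moments on made-up data:

```python
def skewness(xs):
    # E[(x - mu)^3] / s^3, with mu the mean and s the standard deviation
    n = len(xs)
    mu = sum(xs) / n
    s = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return sum((x - mu) ** 3 for x in xs) / n / s ** 3

print(skewness([-2.0, -1.0, 0.0, 1.0, 2.0]))  # symmetric data: 0.0
print(skewness([1.0, 1.0, 1.0, 5.0]) > 0)     # a long right tail: True
```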
Source: Hogg and Craig, p 57
Contexts: statistics
skill:
In regular English usage means "proficiency". Sometimes used in
economics papers to represent experience and formal education. (Ed.: in
this editor's opinion that is a dangerously misleading use of the term; it
invites errors of thought and understanding.)
Contexts: labor
SLID:
Stands for Survey of Labour and Income Dynamics. A Canadian government
database going back to 1993 at least.
Web pages on this subject can be searched from:
http://www.statcan.ca/english/search/index.htm.
Contexts: data; labor
SLLN:
Stands for strong law of large numbers.
Contexts: econometrics
SMA:
Structural Moving Average model, which see
Contexts: econometrics
Smithian growth:
Paraphrasing directly from Mokyr, 1990: Economic growth brought about by
increases in trade.
Source: Mokyr, 1990, pp 4-6
Contexts: history; macro
smoothers:
Smoothers are estimators that produce smooth estimates of regression
functions. They are nonparametric estimators. The most common and
implementable types are kernel estimators, k-nearest-neighbor estimators, and
cubic spline smoothers.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
smoothing:
Smoothing of a data set {X_{i}, Y_{i}} is the act of
approximating m() in a regression such as:
Y_{i} = m(X_{i}) + e_{i}
The result of a smoothing is a smooth functional estimate of m().
Source: Hardle, 1990
Contexts: econometrics; estimation
SMR:
Standardized mortality ratio
Contexts: demography; epidemiology
SMSA:
Stands for Standard Metropolitan Statistical Area, a U.S. term for the
standard boundaries of urban regions for purpose of measurement. Defined by
the U.S. Census Bureau. There are 250-300 of these.
Contexts: data
SNP:
abbreviation for 'seminonparametric', which means the same thing as
semiparametric.
Contexts: econometrics
social capital:
The relationships of a person which lead to economically productive
consequences. E.g., they may produce something analogous to investment
returns to that person, or socially productive consequences to a larger
society.
"'Social capital' refers to the benefits of strong social bonds. [Sociologist
James] Coleman defined the term to take in 'the norms, the social networks,
the relationships between adults and children that are of value for the
children's growing up.' The support of a strong community helps the child
accumulate social capital in myriad ways; in the [1990s U.S.] inner city,
where institutions have disintegrated, and mothers often keep children locked
inside out of fear for their safety, social capital hardly exists." -- Traub
(2000)
Source: Traub, James. The New York Times. January 16, 2000. Sunday,
Late Edition, final. Article in section 6, starting page 52, column 1,
Magazine Desk.
A World Bank publication suggests that James Coleman has a "classic 1987
article" called "Social Capital in the Creation of Human
Capital" which was fundamental to the use of the term social capital in
the social sciences.
With thanks to: Isaac McFarlin for finding this definition
Contexts: labor; sociology; education
social planner:
One solving a Pareto optimality problem. The problem faced by a social
planner will have as an answer an allocation, without prices.
Also, "the social planner is subject to the same information limitations
as the agents in the economy." -- Cooley and Hansen p 185
That is, the social planner does not see information that is hidden by the
rules of the game from some of the agents. If an agent happens not to know
something, but it is not hidden from him by the rules of the game, then the
social planner DOES see it.
Contexts: general equilibrium; models
social savings:
A measure of the contribution of a new technology, discussed in Crafts
(2002), who asks "How much more did [a new technology] contribute than an
alternative investment might have yielded?" and cites Fogel (1979).
social welfare function:
A mapping from allocations of goods or rights among people to the real
numbers.
Such a social welfare function (abbreviated SWF) might describe the
preferences of an individual over social states, or might describe outcomes of
a process that made allocations, whether or not individuals had preferences
over those outcomes.
Contexts: models; public economics
SOFFEX:
Swiss Options and Financial Futures Exchange
Contexts: organizations
Solas:
Software for imputing values to missing data, published by
Statistical Solutions.
Contexts: data; estimation
Solovian growth:
Paraphrasing from Mokyr (1990): Economic growth brought about by investment,
meaning increases in the capital stock.
Source: Mokyr, 1990, p. 4-6.
Contexts: history; macro
Solow growth model:
Paraphrasing pretty directly from Romer, 1996, p 7:
The Solow model is meant to describe the production function of an entire
economy, so all variables are aggregates. The date or time is denoted t.
Output or production is denoted Y(t). Capital is K(t). Labor time is denoted
L(t). Labor's effectiveness, or knowledge, is A(t). The production function
is denoted F() and is assumed to have constant returns to scale. At
each time t, the production function is:
Y = F(K, AL)
which can be written:
Y(t) = F(K(t), A(t)L(t))
AL is effective labor.
Note variants of the way A enters into the production function. This one is
called labor-augmenting or Harrod-neutral. Others are
capital-augmenting, as in Y=F(AK,L), or Hicks-neutral, as in
Y=AF(K,L).
--------------------
From _Mosaic of Economic Growth_:
DEFN of Solow-style growth models: They come from the seminal Solow (1956).
"In Solow-style models, there exists a unique and globally stable growth path
to which the level of labor productivity (and per capita output) will
converge, and along which the rate of advance is fixed (exogenously) by the
rate of technological progress." Many subsequent models of aggregate growth
(like Romer 1986) have abandoned the assumption that all forms of capital
accumulation run into diminishing marginal returns, and get different global
convergence implications. (p 22)
Source: Romer, 1996, p 7;
Solow, 1956; that is:
Solow, Robert. "A contribution to the theory of economic growth."
Quarterly Journal of Economics. Feb. 1956
Contexts: macro
Solow residual:
A measure of the change in total factor productivity in a
Solow growth model. This is a way of doing growth accounting
empirically either for an industry or more commonly for a macroeconomy.
Formally, roughly following Hornstein and Krusell (1996):
Suppose that in year t an economy produces output quantity y_{t} with
exactly two inputs: capital quantity k_{t} and labor quantity
l_{t}. Assume perfectly competitive markets and that production has
constant returns to scale. Let capital's share of income be fixed over
time and denoted a. Then the change in total factor productivity
between period t and period t+1, which is the Solow residual, is defined
by:
Solow residual = (log TFP_{t+1}) - (log TFP_{t})
= [(log y_{t+1}) - (log y_{t})]
- a[(log k_{t+1}) - (log k_{t})]
- (1-a)[(log l_{t+1}) - (log l_{t})]
Analogous definitions exist for more complicated models (with other factors
besides capital and labor) or on an industry-by-industry basis, or with
capital's share varying by time or by industry.
The equation may look daunting but the derivations are not difficult and
students are sometimes asked to practice them until they are routine.
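A small worked example in Python by this editor (the growth figures are hypothetical, not from the sources):

```python
import math

def solow_residual(y0, y1, k0, k1, l0, l1, a):
    # (log y_{t+1} - log y_t) - a(log k_{t+1} - log k_t)
    #                         - (1-a)(log l_{t+1} - log l_t)
    dy = math.log(y1 / y0)
    dk = math.log(k1 / k0)
    dl = math.log(l1 / l0)
    return dy - a * dk - (1 - a) * dl

# Hypothetical economy: output grows 4%, capital 3%, labor 1%, a = 0.3
r = solow_residual(100, 104, 300, 309, 50, 50.5, a=0.3)
print(round(r, 3))  # about 0.023, i.e. roughly 2.3% TFP growth
```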
Hulten (2000) says about the residual that:
-- it measures shifts in the implicit aggregate production function.
-- it is a nonparametric index number which measures that shift in a
computation that uses prices to measure marginal products.
-- the factors causing the measured shift include technical innovation,
organizational and institutional changes, fluctuations in demand, changes in
factor shares (where factors are capital, labor, and sometimes measures of
energy use, materials use, and purchased services use), and measurement
errors.
From an informal discussion by this editor, it looks like the residual
contains these empirical factors, among others: public goods like highways;
externalities from networks like the Internet; some externalities and losses
of capital services from disasters like September 11; theft; shirking; and
technical / technological change.
Source: Hornstein, Andreas, and Per Krusell. 1996. "Can technology
improvements cause productivity slowdowns?" NBER Macroeconomics
Annual 1996. MIT Press. pp 214-215.
Hulten, 2000
Contexts: macro
solution concept:
A phrase relevant to game theory. A game has a 'solution' which may represent
a model's prediction. The modeler often must choose one of several
substantively different solution methods, or solution concepts, which can lead
to different game outcomes. A solution concept may be preferred to others in
contexts where it makes a unique prediction. Game solution concepts
include:
iterative elimination of strictly dominated strategies
Nash equilibrium
subgame perfect equilibrium
perfect Bayesian equilibrium
Contexts: phrases; game theory; modelling
source:
"In a second-order [linear difference equation] system, ... if both roots
are positive and greater than one, then the system diverges monotonically
to plus or minus infinity. If the roots are complex and [lie] outside the
unit circle then the system spirals out away from the steady state. If at
least one root is negative, but both roots are greater than one in absolute
value, then the system will flip from one side of the steady state to the
other as it diverges to infinity. In each of these cases the steady state is
called a source." Contrast 'sink'.
Source: Farmer, p. 29
Contexts: macro; dynamical systems; models
sparse:
A matrix is sparse if many of its values are zero. A division of sample data
into discrete bins (that is into a multinomial table) is sparse if many of the
bins have no data in them.
Contexts: econometrics
spatial autocorrelation:
Usually autocorrelation means correlation among the data from different time
periods. Spatial autocorrelation means correlation among the data from
locations. There could be many dimensions of spatial autocorrelation, unlike
autocorrelation between periods.
Nick J. Cox (n.j.cox@durham.ac.uk) wrote, in a broadcast to a listserv
discussing the software Stata, this discussion of spatial
autocorrelation. It is quoted here without any explicit permission
whatsoever. (Parts clipped out are marked by 'snip'.) If 'Moran measure' and
'Geary measure' are standard terms used in economics I'll add them to the
glossary.
Date: Thu, 15 Apr 1999 12:29:10 GMT
From: "Nick Cox"
Subject: statalist: Spatial autocorrelation
[snip...]
First, the kind of spatial data considered here is data in two-dimensional
space, such as rainfall at a set of stations or disease incidence in a set of
areas, not three-dimensional or point pattern data (there is a tree or a
disease case at coordinates x, y). Those of you who know time series might
expect from the name `spatial autocorrelation' estimation of a function,
autocorrelation as a function of distance and perhaps direction. What is
given here are rather single-value measures that provide tests of
autocorrelation for problems where the possibility of local influences is of
most interest, for example, disease spreading by contagion. The set-up is that
the value for each location (point or area) is compared with values for its
`neighbours', defined in some way.
The names Moran and Geary are attached to these measures to honour the pioneer
work of two very fine statisticians around 1950, but the modern theory is due
to the statistical geographer Andrew Cliff and the statistician Keith Ord.
For a vector of deviations from the mean z, a vector of ones 1, and a matrix
describing the neighbourliness of each pair of locations W, the Moran measure
for example is
I = [(z' W z) / (z' z)] / [(1' W 1) / (1' 1)]
where ' indicates transpose. This measure is for raw data, not regression
residuals.
[snip; and the remainder discusses a particular implementation of a spatial
autocorrelation measuring function in Stata.]
For n values of a spatial variable x defined for various locations,
which might be points or areas, calculate the deviations
z = x - xbar
and for pairs of locations i and j, define a matrix
W = (w_{ij})
describing which locations are neighbours in some precise sense.
For example, w_{ij} might be assigned 1 if i and j are contiguous areas
and 0 otherwise; or w_{ij} might be a function of the distance between
i and j and/or the length of boundary shared by i and j.
The Moran measure of autocorrelation is
n (sum over i,j from 1 to n of z_{i}w_{ij}z_{j})
/ ( 2 (sum over i,j of w_{ij}) (sum over i of z_{i}^{2}) )
and the Geary measure of autocorrelation is
(n-1) (sum over i,j of w_{ij}(z_{i}-z_{j})^{2})
/ ( 4 (sum over i,j of w_{ij}) (sum over i of z_{i}^{2}) )
These measures may be used to test the null hypothesis of no spatial
autocorrelation, using both a sampling distribution assuming that x
is normally distributed and a sampling distribution assuming randomisation,
that is, we treat the data as one of n! assignments of the n values to
the n locations.
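The matrix form of the Moran measure quoted above is easy to compute directly. A Python translation by this editor (this is not the Stata ^spautoc^ program being described), applied to the toy example given below:

```python
def moran(x, W):
    # I = [(z' W z) / (z' z)] / [(1' W 1) / (1' 1)], where z holds
    # deviations from the mean and W describes neighbourliness
    n = len(x)
    xbar = sum(x) / n
    z = [xi - xbar for xi in x]
    zWz = sum(W[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    zz = sum(zi * zi for zi in z)
    W1 = sum(W[i][j] for i in range(n) for j in range(n))
    return (zWz / zz) / (W1 / n)

# Toy example: area 1 neighbours 2, 3 and 4 (value 3); areas 2 and 3
# each neighbour 1 and 4 (value 2); area 4 neighbours 1, 2 and 3 (value 1)
W = [[0, 1, 1, 1],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [1, 1, 1, 0]]
print(moran([3.0, 2.0, 2.0, 1.0], W))  # -0.4: neighbours tend to differ
```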
In a toy example, area 1 neighbours areas 2, 3 and 4 and has value 3;
areas 2 and 3 each neighbour areas 1 and 4 and have value 2; and area 4
neighbours areas 1, 2 and 3 and has value 1.
This would be matched by the data
^_n^ (obs no)   ^value^ (numeric variable)   ^nabors^ (string variable)
- -----------   --------------------------   --------------------------
1               3                            "2 3 4"
2               2                            "1 4"
3               2                            "1 4"
4               1                            "1 2 3"
That is, ^nabors^ contains the observation numbers of the neighbours of
the location in the current observation, separated by spaces. Therefore,
the data must be in precisely this sort order when ^spautoc^ is called.
Note various assumptions made here:
1. The neighbourhood information can be fitted into at most a ^str80^
variable.
2. If i neighbours j, then j also neighbours i and both facts are
specified.
By default this data structure implies that those locations listed
have weights in W that are 1, while all other pairs of locations are not
neighbours and have weights in W that are 0.
If the weights in W are not binary (1 or 0), use the ^weights^ option.
The variable specified must be another string variable.
^_n^ (obs no)   ^nabors^ (string variable)   ^weight^ (string variable)
-------------   --------------------------   --------------------------
1               "2 3 4"                      ".1234 .5678 .9012"
etc.
that is, w_{12} = 0.1234, and so forth. In general w_{ij} need not
equal w_{ji}.
[snip]
References
- ----------
Cliff, A.D. and Ord, J.K. 1973. Spatial autocorrelation. London: Pion.
Cliff, A.D. and Ord, J.K. 1981. Spatial processes: models and
applications. London: Pion.
Author
- ------
Nicholas J. Cox, University of Durham, U.K.
n.j.cox@durham.ac.uk
Source: statalist, Nick J. Cox (N.J.Cox@durham.ac.uk, as of 4/15/99)
With thanks to: Nicholas J. Cox, University of Durham, U.K.
(n.j.cox@durham.ac.uk)
Contexts: statistics; econometrics; estimation
SPE:
Abbreviation for: Subgame perfect equilibrium
Contexts: game theory; models
specie:
A commodity metal backing money; historically specie was gold or
silver.
Contexts: money; history
spectral decomposition:
The factorization of a positive definite matrix A into A=CLC' where L is a
diagonal matrix of eigenvalues, and the C matrix has the eigenvectors. That
decomposition can be written as a sum of outer products:
A = (sum from i=1 to i=N of) L_{i}c_{i}c_{i}'
where c_{i} is the i^{th} column of C.
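A quick numerical check of both forms of the decomposition, sketched in Python with a small symmetric positive definite matrix chosen for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric positive definite
lam, C = np.linalg.eigh(A)              # L: eigenvalues; C: eigenvectors in columns
A_factored = C @ np.diag(lam) @ C.T     # A = CLC'
# equivalently, the sum of outer products L_i c_i c_i'
A_summed = sum(lam[i] * np.outer(C[:, i], C[:, i]) for i in range(len(lam)))
```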
Source: Greene, 1993, p 34
Contexts: econometrics
spectrum:
Summarizes the periodicity properties of a time series or time series sample
x_{t}. Often represented in a graph with frequency, or period, (often
denoted little omega) on the horizontal axis, and S_{x}(omega), which
is defined below, on the vertical axis. S_{x} is zero for frequencies
that are not found in the time series or sample, and is increasingly positive
for frequencies that are more important in the data.
S_{x}(omega) = (2pi)^{-1}(sum for j from -infinity to
+infinity of) g_{j}e^{-ijomega}
where g_{j} is the jth
autocovariance, omega is in the range [-pi, pi], and i is the square
root of -1.
Example 1: If x_{t} is white noise, the spectrum is flat. All cycles
are equally important. If they were not, the series would be
forecastable.
Example 2: If x_{t} is an AR(1) process, with coefficient in (0, 1),
the spectrum has a peak at frequency zero and declines monotonically with
distance from zero. This process does not have an observable cycle.
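Both examples can be illustrated with the closed-form spectral density of an AR(1), which sums the series of autocovariances above; a Python sketch (the innovation variance sigma2 is an assumption for the example):

```python
import numpy as np

def ar1_spectrum(omega, phi, sigma2=1.0):
    # S_x(omega) for an AR(1) with coefficient phi:
    # (sigma2 / 2 pi) / (1 - 2 phi cos(omega) + phi^2),
    # the closed form of the sum over autocovariances g_j.
    return sigma2 / (2 * np.pi) / (1 - 2 * phi * np.cos(omega) + phi ** 2)

w = np.linspace(0, np.pi, 200)
s_ar1 = ar1_spectrum(w, phi=0.8)   # peak at zero, monotone decline (Example 2)
s_wn = ar1_spectrum(w, phi=0.0)    # white noise: flat spectrum (Example 1)
```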
Source: Wouter J. den Haan, April 1996, "The Comovements between real activity
and prices at different business cycle frequencies", as presented at
Northwestern, April 21, 1997. Possibly a UCSD or NBER working paper.
Supported by NSF grant SBR-9514813. Another good reference is Priestley,
1994.
Contexts: time series; econometrics
speculative demand:
The speculative demand for money is inversely related to the interest
rate.
Source: Branson
Contexts: money
spline function:
The kind of estimate produced by a spline regression, in which the
slope varies for different ranges of the regressors. The spline function is
continuous but usually not differentiable.
Source: Greene, 1993, p 237
Contexts: econometrics
spline regression:
A regression which estimates different linear slopes for different ranges of
the independent variables. The endpoints of the ranges are called
knots.
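One common implementation estimates a continuous linear spline by adding the regressor max(x - knot, 0) to an ordinary regression; a sketch in Python on hypothetical data with a single knot:

```python
import numpy as np

# Hypothetical data with slope 1 below the knot at x = 5 and slope 3 above it
x = np.linspace(0, 10, 21)
y = np.where(x < 5, x, 5 + 3 * (x - 5))

knot = 5.0
# Basis: intercept, x, and the hinge term max(x - knot, 0)
X = np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
# b[1] is the slope below the knot; b[1] + b[2] is the slope above it
```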
Source: Greene, 1993, p 237
Contexts: econometrics
spline smoothing:
A particular nonparametric estimator of a function. Given a data set
{X_{i}, Y_{i}} it estimates values of Y for X's other than
those in the sample. The process is to construct a function that balances the
twin needs of (1) proximity to the actual sample points, (2) smoothness. So a
'roughness penalty' is defined. See Hardle's equation 3.4.1 near p. 56 for
the 'cubic spline' which seems to be the most common.
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
SPO:
Stands for Strongly Pareto Optimal, which see.
Contexts: general equilibrium; models
SPSS:
Stands for 'Statistical Product and Service Solutions', a corporation at www.spss.com
Contexts: data
SSEP:
Social Science Electronic Publishing, Inc.
Contexts: data
SSRN:
Social Science Research Network
Contexts: data
stabilization policy:
"Macroeconomic stabilization policy consists of all the actions taken by
governments to (1) keep inflation low and stable; and (2) keep the short-run
(business cycle) fluctuations in output and employment small." Includes
monetary and fiscal policies, international and exchange rate policy, and
international coordination. (p129 in Taylor (1996)).
Source: Taylor, John B. 1996. "Stabilization Policy and Long-Term Economic
Growth." In The Mosaic of Economic Growth, edited by Ralph Landau,
Timothy Taylor, and Gavin Wright. Stanford University Press.
Contexts: macro
stable distributions:
See Campbell, Lo, and MacKinlay pp 17-18, which refers to the French
probability theorist Levy. The normal and Cauchy distributions are special
cases. Except for the normal distribution, stable distributions have
infinite variance.
There has been some study of whether continuously compounded asset returns
could fit a stable distribution, given that their kurtosis is too high for a
normal distribution.
Source: Campbell, Lo, and MacKinlay 1996, pp 17-18
Contexts: finance; statistics
stable steady state:
in a dynamical system with deterministic generator function F() such that
N_{t+1}=F(N_{t}), a steady state is stable if, loosely, all
nearby trajectories go to it.
Source: J. Montgomery, social networks paper
Contexts: macro; dynamical systems; models
staggered contracting:
A model can be constructed in which some agents, usually firms, cannot change
their prices at will. They make a contract at some price for a specified
duration, then when that time is up can change the price. If the terms of the
contracts overlap, that is they do not all end at the same time, we say the
contracts are staggered.
An important paper on this topic was Taylor (1980) which showed that staggered
contracts can have an effect of persistence -- that is, that one-time shocks
can have effects that are still evolving for several periods. This is a
version of a new Keynesian, sticky-price model.
Contexts: models
standard normal:
Refers to a normal distribution with mean of zero and variance of
one.
Contexts: statistics
Stata:
Statistical analysis software; see the Stata web site.
Contexts: data; estimation
state price:
the price at time zero of a state-contingent claim that pays one unit of
consumption in a certain future state.
Source: Huang and Litzenberger, 1988, pp 123-4
Contexts: finance; models
state price vector:
the vector of state prices for all states.
Contexts: macro; finance; models
state-space approach to linearization:
Approximating decision rules by linearizing the Euler equations of the
maximization problem around the stationary steady state and finding a unique
solution to the resulting system of dynamic equations
Source: explained in King, Plosser, Rebelo (87); ref Merz 94, p 21
footnote
Contexts: macro; models
stationarity:
The attribute of being covariance stationary, with reference to a
stochastic process. Note that strict stationarity is not a
superset or subset, but a different thing.
stationary:
When used to describe a stochastic process, this is usually a synonym
for covariance stationary.
Source: Enders, Walter. 1995. Applied Econometric Time Series. John
Wiley & Sons, Inc.
Hamilton, J. Time Series Analysis.
Contexts: stochastic processes
statistic:
a function of one or more random variables that does not depend upon any
unknown parameter.
(The distribution of the statistic may depend on one or more unknown
parameters, but the statistic can be calculated without knowing them just from
the realizations of the random variables, e.g. the data in a sample.)
In general a statistic could be a vector of values, but often it is a
scalar.
Source: adapted from Hogg & Craig.
Contexts: statistics; econometrics; estimation
Statistica:
Statistical software. See
http://www.statsoft.com.
Contexts: data; estimation
statistical discrimination:
A theory of why minority groups are paid less when hired. The theory is
roughly that managers, who are of one type (say, white), are more culturally
attuned to the applicants of their own type than to applicants of another type
(say, black), and therefore they have a better measure of the likely
productivity of the applicants of their own type. (There is uncertainty in
the manager's predictions about blacks and probably of whites too, but more
uncertainty for blacks.) Because the managers are risk averse they bid more
for a white applicant of a given apparent productivity than for a black one,
since their measure of the white's productivity is better. This theory
predicts that white managers would offer black applicants lower starting wages
than whites of the same apparent ability, even if the manager is not
prejudiced against the blacks.
Contexts: labor
statistics:
Relevant terms: acceptance region,
adapted,
almost surely,
alternative hypothesis,
ANOVA,
AR,
AR(1),
ARCH,
autoregressive process,
b,
bandwidth,
Bayesian analysis,
Bonferroni criterion,
bootstrapping,
Box-Cox transformation,
Cauchy distribution,
cdf,
characteristic function,
chi-square distribution,
coefficient of variation,
complete,
consistent,
correlation,
Cramer-Rao lower bound,
cross-validation,
delta method,
density function,
diffuse prior,
distribution function,
efficiency,
efficiency bound,
efficient,
EGARCH,
essentially stationary,
estimator,
excess kurtosis,
expected value,
exponential distribution,
exponential family,
F distribution,
fat-tailed,
frequency function,
gamma distribution,
gamma function,
GARCH,
Gaussian,
Gaussian white noise process,
generalized Wiener process,
GEV,
Heaviside function,
Hermite polynomials,
heterogeneous process,
Huber standard errors,
Huber-White standard errors,
iid,
information number,
Ito process,
Jensen's inequality,
Kolmogorov's Second Law of Large Numbers,
kurtosis,
LAN,
leptokurtic,
Lindeberg-Levy Central Limit Theorem,
log-concave,
log-convex,
logistic distribution,
lognormal distribution,
MA,
main effect,
MANOVA,
Markov process,
Markov's inequality,
martingale,
martingale difference sequence,
mesokurtic,
MGF,
moment-generating function,
Monte Carlo simulations,
multivariate,
MVN,
noncentral chi-squared distribution,
normal distribution,
Op(1),
order statistic,
Pareto distribution,
pdf,
platykurtic,
Poisson distribution,
Poisson process,
power,
power distribution,
probability function,
process,
random,
random process,
random variable,
random walk,
Rao-Cramer inequality,
Riemann-Stieltjes integral,
second moment,
semiparametric,
significance,
single-crossing property,
skewness,
spatial autocorrelation,
stable distributions,
standard normal,
statistic,
stochastic,
stochastic process,
strict stationarity,
strictly stationary,
strong law of large numbers,
strongly consistent,
Student t,
sufficient statistic,
support,
t distribution,
t statistic,
tangent cone,
Tukey boxplot,
type I error,
type I extreme value distribution,
type II error,
uniform distribution,
uniform weak law of large numbers,
unit root,
unit root test,
UWLLN,
variance,
wavelet,
weak law of large numbers,
weak stationarity,
weakly consistent,
weakly dependent,
Weibull distribution,
white noise process,
White standard errors,
WLLN,
Wold's theorem.
Contexts: fields
stochastic:
synonym for random.
Contexts: statistics; econometrics; time series
stochastic difference equation:
A linear difference equation with random forcing variables on the right hand
side. Here is a stochastic difference equation in k:
k_{t+1} + k_{t} = w_{t}
where the k's and w's are scalars, and time t goes from 0 to infinity. The
w's are exogenous forcing variables. Or:
Ak_{t+1} + Bk_{t} + Ck_{t-1} = Dw_{t} +
e_{t}
where the k's are vectors, the w's and e's are exogenous vectors, and A, B, C,
and D are constant matrices.
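A sketch of simulating the first (scalar) equation in Python, with standard normal draws standing in for the exogenous forcing variables (an assumption for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20
w = rng.standard_normal(T)      # exogenous forcing variables w_t
k = np.zeros(T + 1)             # initial condition k_0 = 0 (assumed)
for t in range(T):
    k[t + 1] = w[t] - k[t]      # k_{t+1} + k_t = w_t
```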
Source: Sargent, 1979
Contexts: macro; models
stochastic dominance:
An abbreviation for first-order stochastic dominance. A possible
comparison relationship between two stochastic distributions. Let the
possible returns from assets A and B be described by statistical distributions
A and B. Payoff distribution A first-order stochastically dominates payoff
distribution B if, for every possible payoff level, the probability of
getting at least that payoff is never higher under B than under A.
Much more is in Huang and Litzenberger (1988), chapter 2.
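For discrete distributions the condition is easy to verify directly: A first-order stochastically dominates B if the cdf of A never lies above the cdf of B. An illustrative Python sketch with made-up payoff distributions:

```python
import numpy as np

def fosd(pA, pB):
    # pA, pB: probabilities over the same payoffs, ordered from lowest
    # to highest payoff.  A dominates B iff F_A(x) <= F_B(x) everywhere.
    FA, FB = np.cumsum(pA), np.cumsum(pB)
    return bool(np.all(FA <= FB + 1e-12))

pA = [0.1, 0.3, 0.6]   # hypothetical: mass shifted toward high payoffs
pB = [0.3, 0.4, 0.3]
```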
Source: Huang and Litzenberger, (1988), p. 40
stochastic kernel:
Another name for stochastic transition function.
stochastic process:
A stochastic process is an ordered collection of random variables. The term
is synonymous with random process. Discrete ones are indexed, often by
the subscript t for time, e.g., y_{t}, y_{t+1}, although such
a process could be spatial instead of temporal. Continuous ones can be
described as continuous functions of time, e.g. y(t).
A stochastic process is specified by properties of the joint distribution for
those random variables. Examples:
-- the random variables are independently and identically distributed
(iid).
-- the process is a Markov process
-- the process is a martingale
-- the process is white noise
-- the process is autoregressive (e.g. AR(1))
-- the process has a moving average (e.g. see MA(1))
Contexts: models; statistics
stochastic transition function:
A generalization of the Markov transition matrix which describes the
probability of a transition by a system from one set of states to another set
of states. A matrix may not describe the situation if the subsets are
complicated or infinite in number.
Stolper-Samuelson theorem:
In some models of international trade, trade lowers the real wage of the
scarce factor of production, and protection from trade raises it.
That is a Stolper-Samuelson effect, by analogy to their (1941) theorem in a
Heckscher-Ohlin model context.
A notable case is when trade between a modernized economy and a developing one
would lower the wages of the unskilled in the modernized economy because the
developing country has so many of the unskilled.
Source: MIT Dictionary of Modern Economics, edited by David W
Pearce;
Gordon H. Hanson and Matthew J. Slaughter, "The Rybczinski theorem,
factor-price equalization, and immigration: evidence from U.S. states,"
NBER working paper 7074, April 1999. On Web at http://www.nber.org/papers/w7074
Stolper, Wolfgang, and Paul A. Samuelson. 1941. "Protection and real
wages." Review of Economics and Statistics 9(1): 58-73.
Contexts: trade; macro
stopping rule:
A stopping rule, in the context of search theory, is a mapping from histories
of draws to one of two decisions: stop at this draw, or continue
drawing.
Source: Wolinsky's D14 notes, Jan 1997
Contexts: information; search
storable:
A good is storable to the degree that it does not degrade or lose its value
over time. In models of money, storable goods dominate less storable goods as
media of exchange.
Contexts: money
straddle:
An options trading strategy of buying a call option and a put
option on the same stock with the same strike price and expiration date.
Such a strategy would result in a profitable position if the stock price is
far enough from the strike price.
Source: Hull, 1997, p 187
Contexts: finance
strategic form:
Synonym for normal form display of a game.
Source: Varian, 1992
Contexts: game theory
strategy-proof:
A decision rule (a mapping from expressed preferences by each of a
group of agents to a common decision) "is strategy-proof if in its
associated revelation game, it is a dominant strategy for each agent to
reveal its true preferences."
Source: Miyagawa, 1998, p 2
Contexts: game theory
strict stationarity:
Describes a stochastic process whose joint distribution of observations is not
a function of time. Contrast weak stationarity.
Source: Hoel, Port, and Stone, 1972, p. 122.
Contexts: statistics; econometrics; time series
strict version of Jensen's inequality:
Quoting directly from Newey-McFadden: "[I]f a(y) is a strictly concave
function [e.g. a(y)=ln(y)] and Y is a nonconstant random variable, then
a(E[Y]) > E[a(Y)]."
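A numerical illustration with a(y) = ln(y) and a two-point nonconstant Y (the values are chosen arbitrarily):

```python
import math

vals, probs = [1.0, 3.0], [0.5, 0.5]   # Y = 1 or 3 with equal probability
EY = sum(p * v for p, v in zip(probs, vals))
a_of_EY = math.log(EY)                                       # a(E[Y]) = ln(2)
E_of_aY = sum(p * math.log(v) for p, v in zip(probs, vals))  # E[a(Y)]
```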
Source: Newey-McFadden, Ch 36, Handbook of Econometrics, p. 2124
Contexts: models
strictly stationary:
A random process {x_{t}} is strictly stationary if the joint
distribution of elements is not a function of the index t. In one sense this
is a stronger condition than covariance stationarity because it
requires also that the third and higher moments of the distributions be
stationary. But a process can be strictly stationary without being covariance
stationary if it does not have a finite variance.
Contexts: time series; econometrics; statistics
strip financing:
Corporate financing by selling "stapled" packages of securities
together that cannot be sold separately. E.g., a firm might sell bonds
only in a package that includes a standard proportion of senior subordinated
debt, convertible debt, preferred, and common stock. A benefit is reduced
conflict. In principle bondholders and stockholders have different interests
and that can impose costs on the firm. After a strip financing, however,
those groups are each made up of all the same people, so their interests
coincide.
Source: Jensen (86)
Contexts: finance
strips:
securities made up of standardized proportions of other securities from the
same firm. See strip financing.
U.S. Treasury bonds can be split into principal and interest components, and
the standard name for the resulting securities is STRIPS (Separate Trading of
Registered Interest and Principal of Securities). See coupon strip and
principal strip.
Source: Jensen (86)
Contexts: finance
strong form:
Can refer to the strong form of the efficient markets hypothesis, which is
that any public or private information known to anyone about a security is
fully reflected in its current price.
Fama (1991) renames tests of the strong form of the hypothesis to be 'tests
for private information.' Roughly -- If individuals with private information
can make trading gains with it, the strong form hypothesis does not
hold.
Source: Fama, 1970.
Dow and Gorton (1996) cite this paper, which possibly defined the term for
the first time:
Roberts, Harry V. 1967. "Statistical versus clinical prediction of the
stock market," working paper, University of Chicago.
Contexts: finance
strong incentive:
An incentive that encourages maximization of an objective.
For example, payment per unit of output produced encourages maximum
production. Useful in design of a contract if the buyer knows exactly what is
desired. Contrast weak incentive.
Source: Weisbrod's class 5/23/97
Contexts: public economics
strong law of large numbers:
If {Z_{t}} is a sequence of n iid random variables drawn from a
distribution with mean MU, then with probability one, the limit of sample
averages of the Z's goes to MU as sample size n goes to infinity.
I believe that strong laws of large numbers are generally, or perhaps always,
proved using some version of Chebyshev's inequality. (The proof is rarely
shown; in most contexts in economics one can simply assume laws of large
numbers).
Contexts: statistics; econometrics; probability
strongly consistent:
An estimator for a parameter is strongly consistent if the estimator goes to
the true value almost surely as the sample size n goes to infinity. This is a
stronger condition than weak consistency; that is, all strongly consistent
estimators are weakly consistent but the reverse is not true.
Contexts: econometrics; statistics
strongly dependent:
A time series process {x_{t}} is strongly dependent if it is not
weakly dependent; that is, if it is strongly autocorrelated, either
positively or negatively.
Example 1: A random walk with correlation 1 between observations is strongly
dependent.
Example 2: An iid process is not strongly dependent.
Source: Discussed in Wooldridge, 1995, p 2646 which
cites Robinson 1991b in J of econometrics.
Contexts: time series; econometrics
strongly ergodic:
A stochastic process may be strongly ergodic even if it is nonstationary. A
strongly ergodic process is also weakly ergodic.
Contexts: time series; econometrics
strongly Pareto Optimal:
A strongly Pareto optimal allocation is one such that no other allocation
would be both (a) as good for everyone and (b) strictly preferred by
some.
Contexts: general equilibrium; models
strongly stationary:
Synonym for strictly stationary, regarding a stochastic
process.
Source: Hoel, Port, and Stone, 1972
Contexts: time series
structural break:
A structural change detected in a time series sample.
Contexts: time series; econometrics
structural change:
A change in the parameters of a structure generating a time series.
There exist tests for whether the parameters changed. One is the Chow
test.
Examples: (planned)
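A minimal sketch of the Chow test in Python, on simulated data whose slope changes at a known break point (all numbers here are made up for illustration):

```python
import numpy as np

def ssr(X, y):
    # sum of squared residuals from an OLS fit
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r

rng = np.random.default_rng(0)
n1 = n2 = 100
x = rng.standard_normal(n1 + n2)
y = np.concatenate([1 + 2 * x[:n1], 1 + 4 * x[n1:]])    # slope 2, then 4
y = y + 0.5 * rng.standard_normal(n1 + n2)

X = np.column_stack([np.ones(n1 + n2), x])
k = X.shape[1]
s_pooled = ssr(X, y)
s1, s2 = ssr(X[:n1], y[:n1]), ssr(X[n1:], y[n1:])
# F statistic; compare against an F(k, n1 + n2 - 2k) distribution
F = ((s_pooled - s1 - s2) / k) / ((s1 + s2) / (n1 + n2 - 2 * k))
```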
Contexts: time series; econometrics
structural moving average model:
The model is a multivariate, discrete time, dynamic econometric model. Let
y_{t} be an n_{y} x 1 vector of observable economic variables,
C(L) is a n_{y} x n_{e} matrix of lag polynomials, and
e_{t} be a vector of exogenous unobservable shocks, e.g. to labor
supply, the quantity of money, and labor productivity. Then:
y_{t}=C(L)e_{t}
is a structural moving average model.
Source: M.W. Watson, Ch 57, Handbook of Econometrics, p 2899.
Contexts: econometrics
structural parameters:
Underlying parameters in a model or class of models.
If a theoretical model explains two effects of variable x on variable
y, one of which is positive and one negative, they are structurally separate.
In another model, in which only the net effect of x on y is relevant, one
structural parameter for the effect may be sufficient.
So a parameter is structural if a theoretical model has a distinct structure
for its effect. The definition is not absolute, but relative to a model or
class of models which are sometimes left implicit.
Contexts: estimation; econometrics
structural unemployment:
Unemployment that arises from an absence of demand for the kinds of workers
that are available. Contrast frictional unemployment.
Source: Baumol & Blinder
Contexts: labor; macro
structure:
A model with its parameters fixed. One can discuss properties of a model with
various parameters, but 'structural' properties are those that are fixed
unless parameters change.
Source: Davidson and MacKinnon, 1993, I think, but can't find
the exact page.
Contexts: econometrics
Student t:
Synonym for the t distribution. The name came about because the
original researcher who described the t distribution wrote under the pseudonym
'Student'.
Contexts: statistics
stylized facts:
Observations that have been made in so many contexts that they are widely
understood to be empirical truths, to which theories must fit. Used
especially in macroeconomic theory. Considered unhelpful in economic history
where context is central.
Relevant terms: Engel's law.
Contexts: fields; phrases
subdifferential:
A class of slopes. By example -- consider the top half of a stop sign as a
function graphed on the xy-plane. It has well-defined derivatives except at
the corners. The subdifferential is made up of only one slope, the
derivative, at those points. At the corners there are many 'tangents' which
define lines that are everywhere above the stop sign except at the corner.
The slopes of those lines are members of the subdifferential at those points.
In general equilibrium usage, the subdifferential can be a class of prices.
It's the set of prices such that expanding the total endowment constraint
would not cause buying and selling, because the agents have optimized
perfectly with respect to the prices. So if a set of prices is possible for a
Walrasian equilibrium, it is in the subdifferential of that allocation.
Contexts: general equilibrium; models
subgame perfect equilibrium:
An equilibrium in which the strategies are a Nash equilibrium, and, within
each subgame, the parts of the strategies relevant to the subgame make a Nash
equilibrium of the subgame.
Contexts: game theory; models
submartingale:
A kind of stochastic process; one in which the expected value of
next period's value, as projected on the basis of the current period's
information, is greater than or equal to the current period's value.
This kind of process could be assumed for securities prices.
Source: Fama, 1970, p 386
Contexts: finance; time series
subordinated:
Adjective. A particular debt issue is said to be subordinated if it was
senior but because of a subsequent issue of debt by the same firm is no longer
senior. One says, 'subordinated debt'.
Contexts: finance
substitution bias:
A possible problem with a price index. Consumers can substitute goods in
response to price changes. For example when the price of apples rises but the
price of oranges does not, consumers are likely to switch their consumption a
little bit away from apples and toward oranges, and thereby avoid experiencing
the entire price increase. A substitution bias exists if a price index does
not take this change in purchasing choices into account, e.g. if the
collection ("basket") of goods whose prices are compared over time is fixed.
"For example, when used to measure consumption prices between 1987 and 1992, a
fixed basket of commodities consumed in 1987 gives too much weight to the
prcies that rise rapidly over the timespan and too little weight to the prices
that have fall; as a result, using the 1987 fixed basket overstates the
1987-92 cost-of-living change. Conversely, because consumers substitute, a
fixed basket of commodities consumed in 1992 gives too much weight to the
prices that have fallen over the timespan and to little to the prices that
have risen; as a result, the 1992 fixed based understates the 1987-92
cost-of-living change." (Triplett, 1992)
Source: Triplett, 1992, p. 50
Contexts: macro; prices
SUDAAN:
A statistical software program designed especially to analyze clustered
data and data from sample surveys. The SUDAAN Web site is at
http://www.rti.org/patents/sudaan/sudaan.html.
Contexts: data; estimation; code
sufficient statistic:
Suppose one has samples from a distribution, does not know exactly what that
distribution is, but does know that it comes from a certain
set of distributions that is determined partly or wholly by a certain
parameter, q. A statistic is sufficient for
inference about q
if and only if the values of any sample from that distribution give no
more information about q than does the value of the
statistic on
that sample.
E.g. if we know that a distribution is normal with variance 1 but has
an unknown mean, the sample average is a sufficient statistic for the
mean.
Contexts: statistics
sunk costs:
Unrecoverable past expenditures. These should not normally be taken into
account when determining whether to continue a project or abandon it, because
they cannot be recovered either way. It is a common instinct to count them,
however.
sup:
Stands for 'supremum'. A value is a supremum with respect to a set if it is
at least as large as any element of that set. A supremum exists in context
where a maximum does not, because (say) the set is open; e.g. the set (0,1)
has no maximum but 1 is a supremum.
sup is a mathematical operator that maps from a set to a value that is
syntactically like an element of that set, although it may not actually be a
member of the set.
Contexts: real analysis
superlative index numbers:
"What Diewert called 'superlative' index numbers were those that provide a
good approximation to a theoretical cost-of-living index for large classes of
consumer demand and utility function specifications. In addition to the
Tornqvist index, Diewert classified Irving Fisher's 'Ideal' index as
belonging to this class." -- Gordon, 1990, p. 5
From Harper (1999, p. 335): "The term 'superlative index number' was coined
by W. Erwin Diewert (1976) to describe index number formulas which generate
aggregates consistent with flexible specifications of the production
function."
Two examples of superlative index number formulas are the Fisher Ideal
Index and the Tornqvist index. These indexes "accommodate
substitution in consumer spending while holding living standards constant,
something the Paasche and Laspeyres indexes do not do." (Triplett, 1992, p.
50).
Source: Diewert, 1976;
Gordon, 1990, p. 5;
Triplett, 1992, p. 50
Contexts: index numbers
superneutrality:
Money in a model 'is said to be superneutral if changes in [nominal] money
growth have no effect on the real equilibrium.' Contrast neutrality.
Source: Blanchard & Fischer, p. 207
Contexts: money; macro; models
supply curve:
For a given good, the supply curve is a relation between each possible price
of the good and the quantity that would be supplied for market sale at that
price.
Drawn in introductory classes with this arrangement of the axes, although
price is thought of as the independent variable:
Price | / Supply
| /
| /
| /
|________________________
Quantity
Contexts: micro
support:
of a distribution. Informally, the domain of the probability function;
includes the set of outcomes that have positive probability.
A little more exactly: a set of values that a random variable may take, such
that the probability is one that it will take one of those values. Note that
a support is not unique, because it could include outcomes with zero
probability.
Source: Farmer, p. 235
Contexts: probability; statistics
SUR:
Stands for Seemingly Unrelated Regressions. The situation is one where the
errors across observations are thought to be correlated, and one would like to
use this information to improve estimates. One makes an SUR estimate by
calculating the covariance matrix, then running GLS.
The term comes from Arnold Zellner and may have been used first in Zellner
(1962).
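A two-step feasible version of the estimator can be sketched in Python (simulated data; the true coefficients and the error correlation are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x1, x2 = rng.standard_normal(T), rng.standard_normal(T)
# errors correlated across the two equations
e = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=T)
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -1.0 + 0.5 * x2 + e[:, 1]
X1 = np.column_stack([np.ones(T), x1])
X2 = np.column_stack([np.ones(T), x2])

# Step 1: OLS equation by equation to estimate the error covariance
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
U = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma = U.T @ U / T

# Step 2: GLS on the stacked system, with covariance kron(Sigma, I_T)
X = np.block([[X1, np.zeros_like(X2)],
              [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(T))
b_sur = np.linalg.solve(X.T @ Om_inv @ X, X.T @ Om_inv @ y)
# b_sur stacks (intercept, slope) for equation 1, then equation 2
```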
Source: StataCorp. 1999. Stata statistical software release 6.0
manual, vol 4., pages 8 and 14.
Zellner, A. 1962. "An efficient method of estimating seemingly unrelated
regressions and tests for aggregation bias." Journal of the American
Statistical Association 57: 348-368.
Zellner, A. 1963. "Estimators for seemingly unrelated regression
equations: Some exact finite sample results." Journal of the American
Statistical Association 58: 977-992.
Zellner, A., and D.S. Huang. 1962. "Further properties of efficient
estimators for seemingly unrelated regression equations."
International Economic Review 3: 300-313.
Contexts: econometrics; estimation
SURE:
same as SUR estimation.
Contexts: econometrics; estimation
Survey of Consumer Finances:
There is a U.S. survey and a Canadian survey by this name.
The U.S. one is a survey of U.S. households by the Federal Reserve which
collects information on their assets and debt. The survey oversamples high
income households because that's where the wealth is. The survey has been
conducted every three years since 1983.
The Canadian one is an annual supplement to the Labor Force Survey that
is carried out every April.
Source: For the Canadian definition, see page 297 of:
Kevin M. Murphy, W. Craig Riddell, and Paul M. Romer. 1998. "Wages,
skills, and technology in the United States and Canada", Chapter 11 of
General purpose technologies and economic growth, edited by Elhanan
Helpman. MIT Press.
Contexts: public finance; labor
survival function:
From a model of durations between events (which are indexed here by i).
Probability that an event has not happened since event (i-1), as a function of
time.
E.g. denote that probability by S_{i}():
S_{i}(t | t_{i-1}, t_{i-2}, ...)
Contexts: econometrics; estimation
SVAR:
Structured VAR (Vector Autoregression).
The SVAR representation of a SMA model comes from inverting the matrix of lag
polynomials C(L) (see the SMA definition) to get:
A(L)y_{t}=e_{t}
The SVAR is useful for (1) estimating A(L), (2) reconstructing the shocks
e_{t} if A(L) is known.
Source: M.W. Watson, Ch 47, Handbook of Econometrics, p 2901.
Contexts: econometrics; time series
symmetric:
A matrix M is symmetric if for every row i and column j, element M[i,j] =
M[j,i].
Contexts: econometrics; linear algebra
t distribution:
Defined in terms of a normal variable and a chi-squared variable. Let
z~N(0,1) and v~X^{2}_{(n)}. (That is, v is drawn from
a chi-squared distribution with n degrees of freedom.)
Then t = z(n/v)^{1/2}
has a t distribution with n
degrees of freedom. The t distribution is a one-parameter family of
distributions. n is that parameter here. The t distribution is symmetric
around zero and asymptotically (as n goes to infinity) approaches the standard
normal distribution.
Mean is zero, and variance is n/(n-2).
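The construction can be checked numerically; a Python sketch (sample size and degrees of freedom are arbitrary choices) that draws z and v as above and confirms the sample variance is near n/(n-2):

```python
import random
import math

random.seed(0)

def t_draw(n):
    """One draw from a t distribution with n degrees of freedom,
    built as in the definition: t = z * (n/v)**0.5 with z ~ N(0,1)
    and v ~ chi-squared(n)."""
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))  # chi-squared(n)
    return z * math.sqrt(n / v)

n = 10
draws = [t_draw(n) for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)

print(round(mean, 2))  # close to 0
print(round(var, 2))   # close to n/(n-2) = 1.25
```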
Source: Johnston, p. 530
Contexts: econometrics; statistics; estimation
t statistic:
After an estimation of a coefficient, the t-statistic for that coefficient is
the ratio of the coefficient to its standard error. That can be tested
against a t distribution (which see) to determine how probable it is that the
true value of the coefficient is really zero.
Contexts: econometrics; statistics; estimation
tangent cone:
Informally: a set of vectors that is tangent to a specified point.
Source: Tripathi, 1996, p 8 and on
Contexts: mathematics, statistics
team production:
Defined by Alchian and Demsetz (1972) this way: "Team productive
activity is that in which a union, or joint use, of inputs yields a larger
output than the sum of the products of the separately used inputs." (p.
794)
Source: Alchian and Demsetz, 1972, p 794
Contexts: theory of the firm; IO; corporate finance
technical change:
A change in the amount of output produced from the same inputs. Such a change
is not necessarily technological; it might be organizational, or the result of
a change in a constraint such as regulation, prices, or quantities of
inputs.
According to Jorgenson and Stiroh (May 1999 American Economic Review p
110), sometimes total factor productivity (TFP) can be a synonym for technical
change. A possible measure is output per unit of factor input.
Jorgenson and Stiroh also have an explanation of how it is definitionally
possible for a technological revolution not to lead to technical change as
measured in these ways.
Contexts: macro
technological change:
A change in the set of feasible production possibilities.
Contrast technical change.
technology shocks:
An event, in a macro model, that changes the production function. Commonly
this is modeled with an aggregate production function that has a scaling
factor, e.g.:
F(K_{t},N_{t}) = A_{t}K^{a}N^{(1-a)}
where A_{t} is a time series of technology shocks whose values can be
estimated or whose stochastic process (joint distribution) might be
conjectured to have certain properties.
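The role of A_{t} can be sketched numerically; a Python example with hypothetical parameter values:

```python
# A minimal sketch: output under a Cobb-Douglas production function
# scaled by a technology shock A_t. All numbers are hypothetical.
def output(A_t, K_t, N_t, a=0.36):
    """Y_t = A_t * K_t^a * N_t^(1-a); a is capital's share."""
    return A_t * K_t**a * N_t**(1 - a)

# A negative shock (A falls from 1.0 to 0.9) lowers output for the
# same capital and labor inputs -- the sense in which an oil shock
# can be read as a technology shock.
y_before = output(1.0, 100.0, 50.0)
y_after = output(0.9, 100.0, 50.0)
print(y_after < y_before)  # True
```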
By this definition the oil shocks of the 1970s were technology shocks -- that
is, for any given aggregate capital stock or labor stock, production was more
expensive after an oil shock because energy would be more expensive. This
interpretation explains why real business cycle theory drew interest in
economics in the 1970s after the oil shocks had such a dramatic impact on
Western economies.
Contexts: macro; models
tenure:
In the context of studies of employees, length of time with current employer
in current job. Contrast experience.
Contexts: labor; corporate finance
term spreads:
"long-term minus short-term interest rates"
Source: Fama 1991 p 1609
Contexts: finance
terms of trade:
An index of the price of a country's exports in terms of its imports. The
terms of trade are said to improve if that index rises. (Obstfeld and
Rogoff, p 25)
An analogous use is when comparing relative prices. If the cost of
agricultural goods in terms of industrial goods goes up, one might say the
"terms of trade ... shifted in favor of agricultural products." (North and
Thomas, p 108).
Source: Obstfeld and Rogoff, 1996, p 25
North, Douglass C. and Robert Paul Thomas. 1973/1992. The Rise of the
Western World: A New Economic History. Cambridge University Press.
Contexts: trade; international
tertiary sector:
Literally, 'third sector'. Per Landes, 1969/1993, p 9,
refers to the "administrative and service sector of the economy".
In the context of Williamson and Lindert, 1980, p 172, it is defined
more specifically to be the sector of production outside of agriculture and
industry, and includes construction, trade, finance, real estate, private
services, government, and sometimes transportation.
Source: Landes, 1969/1993, p 9; Williamson and
Lindert, 1980, p 172
Contexts: macro
test for structural change:
An econometric test to determine whether the coefficients in a regression
model are the same in separate subsamples. Often the subsamples come from
different time periods. See Chow test.
Source: Davidson and MacKinnon, 1993, p 375
Contexts: estimation; econometrics
test of identifying restrictions:
synonym for Hausman test, in practice. Only overidentifying
restrictions (assumptions) can be tested.
Contexts: econometrics; estimation
test statistic:
"A random variable [T, in this example] of which the probability
distribution is known, either exactly or approximately, under the null
hypothesis. We then see how likely the observed value of T is to have
occurred, according to that probability distribution. If T is a number that
could easily have occurred by chance [under the tested hypothesis], then we
have no evidence against the null hypothesis H_{0}. However if it is
a number that would occur by chance only rarely, we do have evidence against
the null, and may well decide to reject it."
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation
TFP:
Abbreviation for Total Factor Productivity.
Contexts: macro
the standard model:
Has a variety of meanings, and can be a confusing phrase to outsiders to a
discussion. Often implicitly contrasts the model at hand to a simpler,
earlier one in the same literature, sometimes with the implication that
variations from the earlier one ought, in the speaker's opinion, to be
justified explicitly.
A standard model of a firm is one in which it is strictly and always profit
maximizing. Often 'profit' is interpreted in a short term way, but depending
on context it may refer to a long run present-discounted value kind of profit.
A standard model of individuals seeking jobs is that they are strictly
consumption maximizing, and therefore wage maximizing. Occasionally a long
run present discounted value of wages is the objective. If time away from
work is relevant, the consumer maximizes some combination of consumption/wage
and time away from work, or 'leisure'.
A standard model of international trade is one in which countries specialize
toward their comparative advantages.
A standard model of a product market is one in which (1) all producers (called
firms) and consumers (thought of as individuals) are price takers and
variations in any one actor's production or consumption have no effect on the
price; (2) the demand curve is strictly decreasing (that is, price and
quantity demanded are negatively correlated); (3) the supply curve is strictly
increasing (that is, price and quantity supplied are positively correlated);
(4) the good is infinitely divisible.
Contexts: phrases
theory of the firm:
Subject is: What are the nature, extent, and purposes of firms? This
organization of the answers comes from Hart's book.
Categories of answers:
Neoclassical theories of the firm identify it with its production technology,
and usually define the driving objective of the firm as maximizing its profits
given its technology.
Principal-agent theories of firms -- that firms are organized to divide work
among many people in ways that minimize principal agent problems.
Transaction cost theories -- that comprehensive contracts with workers are
unrealistic and that the structure of a firm (e.g., a hierarchical one) is
useful for efficiently doing a job. First academic paper of this kind was
Coase, 1937.
Property rights theories -- that ownership is a source of power ...
---------
theory of the firm: firm organization substitutes for
contracts, firms reduce uncertainty and opportunistic
behavior, and set incentives to elicit efficient responses
from agents.
-- Mokyr's rise and fall paper
Source: Hart, 1995; Coase, 1937
Contexts: IO
theta:
As used with respect to options: The rate of change of the value of a
portfolio of derivatives with respect to time with all else held constant.
Formally this is a partial derivative.
Source: Hull, 1997, p 321
Contexts: finance
tightness:
An attribute of a market.
In securities markets, tightness is "the cost of turning around a
position over a short period of time." (Kyle, 1985, p 1316). [Does
'cost' mean trading costs, alone? So does 'turning around' just mean
'trading'?]
A labor market is said to be tight if employers have trouble filling jobs, or
if there is a long wait to fill an available job. It is not evidence that the
labor market is tight if potential employees have trouble finding jobs or must
wait to get one.
Source: Kyle, 1985, p 1316
Contexts: finance; labor
time consistency:
Opposite of time inconsistency or dynamic inconsistency.
Contexts: macro
time deposits:
The money stored in the form of savings accounts at banks.
Contexts: macro; money
time inconsistency:
Same as dynamic inconsistency.
Contexts: dynamic optimization
time preference:
A utility function may or may not have the property of time preference.
Time preference is an intense preference to receive goods or services
immediately.
The discounting to avoid delay must be more than multiplicatively linear in
the time elapsed, or one would not use this term to describe the utility
function. In theory this attribute is
analytically distinct from other reasons to want something sooner, such as
interest rates; the bounded rationality problem of remembering how and
when to consume the good later; or discounting of future events for reasons of
opportunity, risk, or uncertainty (e.g., the chance of surviving to a later
time).
There is evidence that human behavior exhibits great impatience, which might
be modeled well by time preference and can perhaps be distinguished from
these other factors. So one may read references to empirical
observations of time preference, though as far as this editor can tell the
concept is quite theoretical and some jump is required to leave all other
explanations aside and link it directly to an observation.
Contexts: micro theory; utility
time series:
A stochastic process where the time index takes on a finite or countably
infinite set of values. Denoted, e.g. {X_{t} | for all integers t}.
Relevant terms: AIC,
Akaike's Information Criterion,
AR,
ARCH,
ARIMA,
ARMA,
augmented Dickey-Fuller test,
autocorrelation,
autocovariance,
autocovariance matrix,
autoregressive process,
Box-Jenkins,
Box-Pierce statistic,
BVAR,
Cochrane-Orcutt estimation,
cointegration,
convolution,
covariance stationary,
Dickey-Fuller test,
Donsker's theorem,
Durbin's h test,
Durbin-Watson statistic,
ergodic,
error-correction model,
essentially stationary,
FCLT,
filter,
Gaussian white noise process,
generalized Wiener process,
Granger causality,
heterogeneous process,
I(0),
I(1),
integrated,
invertibility,
Ito process,
lag operator,
Lindeberg-Levy Central Limit Theorem,
Ljung-Box test,
MA,
Markov chain,
Markov property,
mixing,
nonergodic,
own,
Ox,
Phillips-Perron test,
portmanteau test,
Prais-Winsten transformation,
Q-statistic,
QLR,
random,
random process,
random walk,
Riemann-Stieltjes integral,
Schwarz Criterion,
spectrum,
stochastic,
strict stationarity,
strictly stationary,
strongly dependent,
strongly ergodic,
strongly stationary,
structural break,
structural change,
submartingale,
SVAR,
trend stationary,
uniform weak law of large numbers,
unit root,
unit root test,
UVAR,
UWLLN,
VAR,
variance decomposition,
variance ratio statistic,
VARs,
volatility clustering,
Wallis statistic,
weak law of large numbers,
weak stationarity,
weakly dependent,
weakly ergodic,
WLLN,
Wold decomposition,
Wold's theorem.
See Editor's comment on time series.
Contexts: fields
time-varying covariates:
Means the same thing as time-dependent covariates; that the covariates
(regressors, probably) change over time.
Source: statalist, general discussion
Contexts: econometrics; estimation
tit-for-tat:
A strategy in a repeated game (or a series of similar games). When a
Prisoner's dilemma game is repeated between the same players, the
tit-for-tat strategy is to choose the 'cooperate' action unless in the
previous round, one's opponent chose to defect, in which case one responds by
choosing to defect this round. This tends to induce cooperative behavior
against an attentive opponent.
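The strategy can be sketched as a short Python function (the move labels 'C' for cooperate and 'D' for defect are an illustrative convention):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter copy the opponent's
    previous move ('C' = cooperate, 'D' = defect)."""
    if not opponent_history:
        return 'C'
    return opponent_history[-1]

# Play against an opponent who defects once, in round 3.
opponent_moves = ['C', 'C', 'D', 'C', 'C']
my_moves = []
for t in range(len(opponent_moves)):
    my_moves.append(tit_for_tat(opponent_moves[:t]))

print(my_moves)  # ['C', 'C', 'C', 'D', 'C'] -- the defection is punished once
```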
Contexts: game theory
Tobin tax:
A tax on foreign currency exchanges.
Contexts: public finance
Tobin's marginal q:
The ratio of the change in the value of the firm to the added capital cost for
a small increment to the capital stock. If the firm is in equilibrium, its
marginal q is one; all investments that add more to the value of the firm than
their cost have already been undertaken, and if we knew the replacement cost
of capital we could look up the stock market value of a firm and calculate its
average q directly.
Source: Branson, Ch 13
Contexts: macro
Tobin's q:
This description comes from Dow and Gorton, (1996): The ratio of the current
market value of a firm's assets to their cost. If q is greater than 1, the
firm should increase its capital stock. It follows that, according to
"Fischer and Merton (1984), 'the stock market should be a predictor of
the rate of corporate investment' (p 84-85)" -- that is, "rising
stock prices cause higher investment [by firms]. The empirical evidence is
consistent with this view: investment in plant and equipment increases
following a rise in stock prices in all countries that have been studied. In
fact, lagged stock returns outperform q in predicting investment [at both] the
macroeconomic level and in cross-sections of firms. See Barro (1990),
Bosworth (1975), and Welch (1994)."
Source: Quoted from:
James Dow and Gary Gorton. 1996. "Stock market efficiency and economic
efficiency: is there a connection?" European University Institute,
London Business School, the Wharton School, and NBER. Working paper.
Barro, Robert J. 1990. "The stock market and investment,"
Review of Financial Studies 3: 115-131.
Bosworth, Barry. 1975. "The stock market and the economy,"
Brookings Papers on Economic Activity 2: 257-300.
Fischer, Stanley, and Robert C. Merton. 1984. "Macroeconomics and
finance: the role of the stock market," Carnegie-Rochester Conference
Series on Public Policy 21: 57-108.
Tobin, James. 1969. "A general equilibrium approach to monetary
theory" Journal of Money, Credit, and Banking 1: 15-29.
Welch, Ivo. 1994. "The cross-sectional determinants of corporate
capital expenditures: a multinational comparison." UCLA working
paper.
tobit model:
An econometric model in which the dependent variable is censored; in the
original model of Tobin (1958), for example, the dependent variable was
expenditures on durables, and the censoring occurs because values below zero
are not observed.
The model is:
y_{i}^{*}=bx_{i}+u_{i} where
u_{i}~N(0,s^{2})
But y_{i}^{*} (e.g., durable goods desired by the consumer
described by variables x_{i}) is not observed.
y_{i}=y_{i}^{*} if
y_{i}^{*}>y_{0}, and y_{i}=y_{0}
otherwise
y_{i} is observed.
y_{0} is known. s^{2} is often treated as known.
x_{i}'s are observed for all i.
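The censoring mechanism can be simulated; a Python sketch (the parameter values b, s, and y_0 are hypothetical):

```python
import random
random.seed(1)

b, s, y0 = 2.0, 1.0, 0.0   # hypothetical parameter values
x = [random.uniform(-2, 2) for _ in range(1000)]
# Latent variable y* = b*x + u, with u ~ N(0, s^2)
y_star = [b * xi + random.gauss(0.0, s) for xi in x]
# Observed y is censored at y0: y = y* if y* > y0, else y0
y = [yi if yi > y0 else y0 for yi in y_star]

n_censored = sum(1 for yi in y if yi == y0)
print(min(y) == y0)    # True: the observed sample is bounded below at y0
print(n_censored > 0)  # True: some observations are censored
```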
Contexts: econometrics; estimation
top-coded:
For data recorded in groups, e.g. 0-20, 21-50, 50-100, 101-and-up, we do not
know the average or distribution of the top category, just its lower bound and
quantity. That data is "top-coded." We may adjust for it by scaling up the
top-code and calling that the average.
Contexts: data
topological space:
A pair of sets (X, t) such that t is a topology in X.
See topology.
Source: Kolmogorov and Fomin, p 78
Contexts: real analysis
topology:
Is defined with respect to a set X. A 'topology in X' is a set of subsets of
X satisfying several criteria. Let t denote a
topology in X. The sets in t are by definition
'open sets' with respect to t, and sets outside of
t are not. t satisfies
the following:
(1) X and the null set are in t.
(2) Finite or infinite unions of open sets (that is, elements of t) are also in t.
(3) Finite intersections of open sets are in t.
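For a finite set the axioms can be checked mechanically; a Python sketch (the example sets are arbitrary):

```python
from itertools import combinations

def is_topology(X, t):
    """Check the three axioms for t (a set of frozensets) to be a
    topology in the finite set X. With X finite, checking all pairs
    covers arbitrary unions and finite intersections by induction."""
    t = set(t)
    # (1) X and the empty set are in t.
    if frozenset() not in t or frozenset(X) not in t:
        return False
    # (2) unions of open sets are open; (3) intersections are open.
    for a, b in combinations(t, 2):
        if a | b not in t or a & b not in t:
            return False
    return True

X = {1, 2, 3}
t_good = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)}
t_bad = {frozenset(), frozenset({1}), frozenset({2}), frozenset(X)}

print(is_topology(X, t_good))  # True
print(is_topology(X, t_bad))   # False: {1} U {2} = {1,2} is not in t
```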
Comments and related definitions:
More than one topology in X may be possible for a given set X.
The complement of a set in t is said to be a
'closed set'.
Elements of X may be called 'points'.
A 'neighborhood' of a point x is any open set containing x.
Let M be a subset of X. A point x in X is a 'contact point' of M if every
neighborhood of x contains at least one point of M; and x would be a 'limit
point' of M if every neighborhood of x contained infinitely many points of M.
The set of all contact points of M is the 'closure' of M.
A 'topological space' is a pair of sets (X, t)
satisfying the above.
All metric spaces are topological spaces. The sets one would call open in a
metric space satisfy the criteria above; one could also label all subsets of X
as open for purpose of listing the members of the topology and they would then
satisfy the definition above.
Given two topologies t1 and t2 on the same set X, we say that 't1 is stronger than t2', or
equivalently that 't2 is weaker than t1' if every set in t2 is in
t1. A stronger topology thus has at least as many
elements as a weaker one.
Source: Kolmogorov and Fomin, p 78
Contexts: real analysis
Tornqvist index:
A kind of index number which can accumulate various kinds of capital into a
single number. Averages between periods fill in the quantities of capital and
labor.
Defined in Hulten to be a discrete-time approximation to a Divisia index.
Can handle a production function as complicated as a translog, not just
a Cobb-Douglas. It can handle a Cobb-Douglas production function in
which the shares change over time.
The Tornqvist index is a superlative index formula. It was developed
in the 1930s at the Bank of Finland, according to Triplett (1992).
Defined at length in Dean & Harper, 1998, pages 8-9.
Source: paraphrased from Dean & Harper, Feb. 1998;
Hulten, 2000;
Triplett, 1992
Contexts: government; measurement
total factor productivity:
Given the macro model: Y_{t} =
Z_{t}F(K_{t},L_{t}), Total Factor Productivity (TFP)
is defined to be Y_{t}/F(K_{t},L_{t})
Likewise, given Y_{t} =
Z_{t}F(K_{t},L_{t},E_{t},M_{t}), TFP
is Y_{t}/F(K_{t},L_{t},E_{t},M_{t})
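A minimal numerical sketch in Python, with hypothetical values and a Cobb-Douglas F():

```python
# TFP as the ratio Y_t / F(K_t, L_t). The functional form and all
# numbers here are hypothetical.
def F(K, L, a=0.3):
    """Cobb-Douglas aggregator; a is capital's share."""
    return K**a * L**(1 - a)

Y_t, K_t, L_t = 120.0, 200.0, 80.0
tfp = Y_t / F(K_t, L_t)
print(round(tfp, 3))

# If output doubles with inputs unchanged, measured TFP doubles too.
print(abs((2 * Y_t) / F(K_t, L_t) - 2 * tfp) < 1e-9)  # True
```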
The Solow residual is a measure of TFP. TFP presumably changes over time.
There is disagreement in the literature over the question of whether the Solow
residual measures technology shocks. Efforts to change the inputs, like
K_{t}, to adjust for utilization rate and so forth, have the effect of
changing the Solow residual and thus the measure of TFP. But the idea of TFP
is well defined for each model of this kind.
TFP is not necessarily a measure of technology since the TFP could be a
function of other things like military spending, or monetary shocks, or the
political party in power.
"Growth in total-factor productivity (TFP) represents output growth not
accounted for by the growth in inputs." -- Hornstein and Krusell (1996).
Disease, crime, and computer viruses have small negative effects on TFP using
almost any measure of K and L, although with absolutely perfect measures of K
and L they might disappear.
Reason: crime, disease, and computer viruses make people AT WORK less
productive.
Source: Conversation with Martin Eichenbaum, 8/19/96
Robert E Hall. "The Relation between Price and Marginal Cost in U.S.
Industry", Journal of Political Economy, April 1996.
Hornstein, Andreas, and Per Krusell. 1996. "Can technology improvements
cause productivity slowdowns?" NBER Macroeconomics Annual 1996.
MIT Press. p 214.
Contexts: macro; models
totally mixed strategy:
In a noncooperative game, a totally mixed strategy of a player is a mixed
strategy giving positive probability weight to every pure strategy
available to the player.
For a more formal definition see Pearce, 1984, p 1037.
This is a rough paraphrase.
Source: Pearce, 1984, p 1037
Contexts: game theory
Townsend inefficiency:
A possible property of monetary exchange: one of the parties evaluates the
value of the money he gets in the transaction, not the utility he generated
in production.
Contexts: money; models
trace:
The trace of a square matrix A is the sum of the elements on its diagonal.
Has the property that tr(AB)=tr(BA).
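The tr(AB) = tr(BA) property can be verified directly; a Python sketch with arbitrary 2x2 matrices:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    """Sum of the elements on the diagonal of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

print(trace(matmul(A, B)) == trace(matmul(B, A)))  # True: tr(AB) = tr(BA)
```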
Source: Greene, 1993, p 33
Contexts: econometrics; linear algebra
tract:
A geographical unit of the U.S. defined by the U.S. Census Bureau, usually
having a population between 2500 and 8000. Zip codes are about five
times larger. Census-defined "blocks" are a smaller unit than tracts.
Source: Working paper by Joel Elvery; it cites on these questions this
book:
U.S. Dept of Commerce, Bureau of the Census. Geographic Areas Reference
Manual. 1994. Washington, DC.
Tragedy of the commons:
A metaphor for the public goods problem that it is hard to coordinate and pay
for public goods. The term comes from Hardin (1968). The commons is a
pasture held by a group. Each individual owns sheep and has the incentive to
put more and more sheep on the pasture to gain, privately. The overall effect
of many individuals doing this overwhelms the carrying capacity of the pasture
and the sheep cannot all survive.
Source: Hardin, 1968
Contexts: public
trajectory:
A series of states in a dynamical system {N_{0}, N_{1},
N_{2}, ...}.
For a deterministic generator function F() such that N_{t+1} =
F(N_{t}), we have N_{1}=F(N_{0}),
N_{2}=F(F(N_{0})), etc.
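A sketch in Python, using the logistic map as an illustrative (not canonical) choice of generator function F():

```python
# Trajectory generated by a deterministic map F; the logistic map
# F(N) = r*N*(1-N) is a hypothetical example, not from the source.
def F(N, r=2.0):
    return r * N * (1 - N)

N = 0.1  # N_0
trajectory = [N]
for _ in range(5):
    N = F(N)          # N_{t+1} = F(N_t)
    trajectory.append(N)

print([round(x, 4) for x in trajectory])  # converges toward 0.5 for r = 2
```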
Source: J. Montgomery, social networks paper
Contexts: macro; models
transactions costs:
Made up of three types per North and Thomas (1973) p 93:
-- search costs (the costs of locating information about opportunities for
exchange)
-- negotiation costs (costs of negotiating the terms of the exchange)
-- enforcement costs (costs of enforcing the contract)
Source: North, Douglass C., and Robert Paul Thomas. 1973/1992. The Rise
of the Western World: A New Economic History. Cambridge University
Press.
transactions demand:
The transactions demand for money is positively related to income and
negatively related to the interest rate.
Source: Branson
Contexts: money
transient:
In the context of stochastic processes, "A state is called transient if
there is a positive probability of leaving and never returning." --
Stokey and Lucas, p 322
Source: Stokey and Lucas, 1989, p 322
Contexts: macro; stochastic processes
transition economics:
Since about 1992 this term has come to mean the subject of the transition of
post-Soviet economies toward a Western free market model.
It almost never refers either to other kinds of transitions economies might
undergo, nor to the subject labeled development economics.
Contexts: phrases; fields
translog:
The translog production function is a generalization of the
Cobb-Douglas production function. The name stands for
'transcendental logarithmic'.
See Greene, 2nd edition, p 209-210. Cited to Berndt and Christensen (1972);
elsewhere, said to have been introduced by Christensen, Jorgenson, and
Lau, 1971, p 255-6. Applied to a case like Y=f(K,L), where f()
is replaced by the translog. Its use always seems to be in estimation not in
theory. Avoids strong assumptions about the functional form of the production
function; can approximate any other production function to second degree. The
regression run is, e.g. (from Greene p 209):
ln Y = b1 + b2 (ln L) + b3 (ln K) + b4 (ln L)^{2}/2
+ b5 (ln K)^{2}/2 + b6 (ln L)(ln K) + e
The Cobb-Douglas estimation is like this but with the restriction that
b4=b5=b6=0.
Greene, p 210 says that the elasticity of output with respect to capital is in
this model:
(d ln Y)/(d ln K) = b3 + b5 (ln K) + b6 (ln L)
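The nesting of Cobb-Douglas inside the translog can be demonstrated by regression on simulated data; a Python sketch (assumes numpy; the data are generated from a hypothetical Cobb-Douglas, so the second-order coefficients should come out zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
lnL = rng.uniform(0, 2, n)
lnK = rng.uniform(0, 2, n)
# Data generated from an exact Cobb-Douglas: ln Y = 1 + 0.6 lnL + 0.4 lnK
lnY = 1.0 + 0.6 * lnL + 0.4 * lnK

# Translog regressors, following the specification quoted from Greene
X = np.column_stack([np.ones(n), lnL, lnK,
                     lnL**2 / 2, lnK**2 / 2, lnL * lnK])
b, *_ = np.linalg.lstsq(X, lnY, rcond=None)

print(np.round(b, 6))
# For Cobb-Douglas data the fitted b4, b5, b6 are zero, which is
# exactly the restriction b4 = b5 = b6 = 0 named above.
```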
--------------------
From Lau (1996) in _Mosaic_:
Flexible functional forms such as the translog production function allow "the
production [function?] elasticities to change with differing relative factor
proportions." (p76)
Source: Christensen, L.R., D.W. Jorgenson, and L.J. Lau. 1973.
"Transcendental Logarithmic Production Frontiers." Review of Economics and
Statistics. 55: pp 28-45.
Greene, 1993
Contexts: econometrics
transpose:
A matrix operation. The transpose of an M x N matrix A is an N x M matrix,
denoted A' or A^{T}, in which the top row of A has been made into the
first column of A', the second row of A has been made into the second column
of A', and so forth.
Contexts: econometrics; linear algebra
transversality condition:
Limits solutions to an infinite period dynamic optimization problem.
Intuitively, it rules out those that involve accumulating, for example,
infinite debt.
The transversality condition (TC) can be obtained by considering a finite,
T-period horizon version of the problem of maximizing present value, obtaining
the first-order condition for n_{t+T}, and then taking the limit of
this condition as T goes to infinity.
The form is often:
(TC) lim b^{T}.... = 0
Contexts: macro; models
treatment effects:
In the language of experiments, a treatment is something done to a person that
might have an effect. In the absence of experiments, discerning the effect of
a treatment like a college education or a job training program can be clouded
by the fact that the person made the choice to be treated. The outcomes are a
combined result of the person's propensity to choose the treatment, and the
effects of the treatment itself. Measuring the treatment's effect while
screening out the effects of the person's propensity to choose it is the
classic treatment effects problem.
A standard way to do this is to regress the outcome on other predictors that
do not vary with time, as well as whether the person took the treatment or
not. An example is a regression of wages not only on years-of-education but
also on test scores meant to measure abilities or motivation. Both
years-of-education and test scores are positively correlated with subsequent
wages, and when interpreting the findings the coefficient found on years of
education has been partly cleansed of the factors predicting which people
would have chosen to have more education.
A more advanced method is the Heckman two-step.
Source: Greene, 1997, p 981-2
Contexts: econometrics; labor
trembling hand perfect equilibrium:
Defined by Selten (1975). Now perfect equilibrium is considered a
synonym.
Source: Selten, 1975, as cited by Pearce,
1984, p 1037
Contexts: game theory
trend stationary:
A time series process is trend stationary if after trends were removed it
would be stationary.
Following Phillips and Xiao (1998): iff a time series process y_{t}
can be decomposed into the sum of other time series as below, it is trend
stationary:
y_{t} = gx_{t} + s_{t}
where g is a k-vector of constants, x_{t} is a vector of deterministic
trends, and s_{t} is a stationary time series.
Phillips and Xiao (1998), p. 2, say that x_{t} may be "more
complex than a simple time polynomial. For example, time polynomials with
sinusoidal factors and piecewise time polynomials may be used. The latter
corresponds to a class of models with structural breaks in the deterministic
trend."
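A sketch in Python of the simplest case, a linear time trend plus i.i.d. noise: fitting and removing the deterministic trend leaves a stationary residual.

```python
import random
random.seed(0)

# y_t = g*t + s_t, with s_t stationary (here i.i.d. N(0,1) noise).
# The trend coefficient g is a hypothetical value.
T = 500
g = 0.5
y = [g * t + random.gauss(0.0, 1.0) for t in range(T)]

# Fit the linear trend by OLS: slope = cov(t, y) / var(t)
t_mean = (T - 1) / 2
y_mean = sum(y) / T
slope = sum((t - t_mean) * (y[t] - y_mean) for t in range(T)) / \
        sum((t - t_mean) ** 2 for t in range(T))
resid = [y[t] - slope * t - (y_mean - slope * t_mean) for t in range(T)]

print(round(slope, 2))             # close to g = 0.5
print(abs(sum(resid) / T) < 1e-9)  # detrended residuals have mean ~0
```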
Whether all researchers would include statistical models with structural
breaks in the class of those that are trend stationary, as Phillips and Xiao
do, is not known to this writer.
Note that this definition is designed to discuss the question of whether a
statistical model is trend stationary. To decide if one should think of a
particular time series sample as trend stationary requires imposing a
statistical model first.
Source: Phillips, Peter C.B. and Zhijie Xiao, "A primer on unit root
testing" Journal of Economic Surveys Vol 12, No. 5, 1998.
and/or: Cowles Foundation paper No. 972, Yale University, 1999
Contexts: time series
triangular kernel:
The triangular kernel is this function: (1-|u|) for -1<u<1 and zero for
u outside that range. Here u=(x-x_{i})/h, where h is the window width
and x_{i} are the values of the independent variable in the data, and
x is the value of the independent variable for which one seeks an
estimate.
For kernel estimation.
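The kernel written out as a Python function:

```python
def triangular_kernel(u):
    """(1 - |u|) for -1 < u < 1, and zero outside that range."""
    return 1.0 - abs(u) if abs(u) < 1 else 0.0

# With u = (x - x_i)/h, observations within one window width h of x
# get weight declining linearly with distance; others get zero.
print(triangular_kernel(0.0))   # 1.0 at the point of estimation
print(triangular_kernel(0.5))   # 0.5
print(triangular_kernel(1.5))   # 0.0 outside the window
```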
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
TRIPs:
A recent international treaty on intellectual property.
Contexts: intellectual property; organizations
truncated dependent variable:
A dependent variable in a model is truncated if observations cannot be seen
when it takes on values in some range. That is, both the independent and the
dependent variables are not observed when the dependent variable is in that
range.
A natural example is that if we have data on consumption purchases, if a
consumer's willingness-to-pay for a certain product is negative, we will never
see evidence of it no matter how low the price goes. Price observations are
truncated at zero, along with identifying characteristics of the consumer in
this kind of data.
Contrast censored dependent variables.
Contexts: econometrics; estimation
TSP:
Time series econometrics software
Contexts: estimation
Tukey boxplot:
A way of showing a distribution on a line, so that distributions can be
compared easily in a single diagram. Used more in statistics than in
econometrics. A thin box marks out the 25th to 75th percentiles; a dash
within that box marks the median; a line marks the outer part of the
distribution, and outside dots or stars mark outliers. (The exact range of
the line is also derived from the location of the quartiles; its exact
definition I do not understand from Quah, 1997; maybe is clear in Cleveland
1993.)
A rough example: consider two continuous distributions that range from 0 to
4:
0 1 2 3 4
|--[==+===]---| * * <= the first distribution
|-[=+==]---| <= the second distribution
The first distribution has a median around 1.3, and the main part of it ranges
from .3 to 3.0. There are some outliers at the top. The second distribution
has a median near 2.0, and is more narrowly concentrated than the first, with
few outliers.
Source: Quah, 1997; Cleveland,
1993
Contexts: statistics; econometrics
tutorial: Matlab:
From a Unix shell one can just type 'matlab' as a command on any computer that
has it, and start to type interactive statements such as those below. One
could also put them in a file with the .m extension to run them from within
matlab with 'run file.m' or from the shell with 'matlab < file.m'. This
tutorial covers very little but you can see something of the language.
% The percent sign begins comments.
% The statements below can be typed interactively one per line to get
% clear responses from Matlab. No need to type the comment part at the
% end of the lines. Make sure to use upper and lower case in the
% same way as in the statements shown.
A=[1 2;3 4] % defines matrix A as a 2x2 with first line [1 2]
B=A' % transpose
B=A+A % sum, element by element
Ainv=inv(A) % takes inverse of a matrix
A*Ainv % calculates and prints the result of a matrix multiplication
B=[A;A] % stacked so B has twice as many rows as A
B=[A A] % the A's are side by side. B has twice as many columns as A.
B=A(1,1) % B is a scalar now, the upper left element of A
B=A'*A % matrix multiplication
B=A(:,1) % B is set to first column of A
B=A.*A % element by element multiplication
B=B./A % element by element division
A=zeros(3,3) % special definition of a matrix of zeros
B=ones(3,1) % defines a matrix of ones
A=eye(5) % defines identity matrix
B=A(1:2,1:3) % takes part of matrix
more on % may not be needed; prevents help screen from scrolling off
help * % shows sample of the help available
Source: Chris Taber, Econ D83 at Northwestern 1996-7, Matlab tutorial
handout
Contexts: data
two stage least squares:
An instrumental variables estimation technique. Extends the IV idea to a
situation where one has more instruments than independent variables in the
model. Suppose one has a model:
y = Xb + e
Here y is a T x 1 vector of dependent variables, X is a T x k matrix of
independent variables, b is a k x 1 vector of parameters to estimate, and e is
a T x 1 vector of errors.
But the matrix of independent variables X may be correlated to the e's. Then
using a matrix of independent variables Z, uncorrelated to the e's, that is T
x r, where r>k:
Stage 1: By OLS, regress the X's on the Z's to get the fitted values Xhat =
Z(Z'Z)^{-1}Z'X.
Stage 2: By OLS, regress y on the Xhat's. This gives a consistent estimate of
b.
The two stages can be combined into a single expression:
b = (X'P_{z}X)^{-1}X'P_{z}y
where P_{z}, the projection matrix of Z, is defined to be:
P_{z} = Z(Z'Z)^{-1}Z'
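Below is a short Python sketch (using NumPy; the data and parameter values are illustrative) of the one-expression form b = (X'P_{z}X)^{-1}X'P_{z}y, computed without forming the T x T projection matrix explicitly:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """b = (X'Pz X)^{-1} X'Pz y with Pz = Z(Z'Z)^{-1}Z', without forming Pz."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)    # stage 1: fitted values of X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)  # stage 2, in one-step form

# Illustrative data: one regressor correlated with the error, two instruments.
rng = np.random.default_rng(0)
T = 5000
u = rng.normal(size=T)                     # structural error
Z = rng.normal(size=(T, 2))                # T x r instruments, r=2 > k=1
x = Z @ np.array([1.0, 1.0]) + u + rng.normal(size=T)   # correlated with u
X = x.reshape(T, 1)
y = X @ np.array([2.0]) + u                # true b = 2
b_2sls = two_stage_least_squares(y, X, Z)  # should be close to 2
```

OLS on the same data would be inconsistent here, because X is correlated with the error; 2SLS recovers b because Z is not.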
Contexts: econometrics; estimation
two-factor model:
A production model with two factors of production, typically labor L and
capital K.
tying:
Tying is the vendor practice of requiring customers of one product to buy
others.
Tying can be said to impede trade in that the customer's choices are
restricted. If the customer were free to buy the product without further
conditions, the customer would apparently be better off than if the product
had strings attached. Tying could, however, be efficiency-enhancing by (1)
reducing the number of market transactions (an efficiency of scale), or by (2)
enabling a work-around of a regulation, such as offering a bargain in
conjunction with a price-controlled product.
A historical example: years ago lessees of IBM mainframes had to agree to buy
punch cards only from IBM. Those punch cards were sold at a higher price than
on the open market. So the customer would have been better off with the same
contract minus this clause. But one could argue that tying the products this
way improved competition. It could be that IBM was trying to charge heavy
users of the computer more than light users by putting a surcharge on the
punch cards. If so, IBM found a way to bill customers for one of its costs,
computer maintenance. The practice would theoretically encourage customers to
optimize their use of the computer rather than use it excessively. In this
case the practice might be pro-competitive.
Contexts: IO
type I error:
That is, 'type one error.' This is the error, in testing a hypothesis, of
rejecting the hypothesis when it is in fact true.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation; statistics
type I extreme value distribution:
Has the cdf F(x)=exp(-exp(-x)).
(Devine and Kiefer write F(x)=exp(-exp(-x)); the difference may be in the
range of x? must write this out)
Source: Amemiya, Takeshi. "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; statistics
type II error:
That is, 'type two error.' This is the error, in testing a hypothesis, of
failing to reject the hypothesis when it is in fact false.
Source: Davidson and MacKinnon, 1993, p 78-79
Contexts: econometrics; estimation; statistics
ultimatum game:
An experiment. There are two players, an allocator A and a recipient R, who
in the experiment do not know one another. They have received a windfall,
e.g., of $1. The allocator, moving first, proposes a split of the windfall
under which A receives share x and R receives 1-x. The
recipient can accept this allocation, or reject it in which case both get
nothing. The subgame perfect equilibrium outcome is that A would offer the
smallest possible amount to R, e.g., the share $.99 for A and $.01 for R, and
that the recipient should accept. The experimental evidence, however, is that
A offers a relatively large share to R, often 50-50, and that R would often
reject smaller positive amounts. We may interpret R's behavior as
willingness to pay a cost to punish "unfair" splits. With regard
to A's behavior -- does A care about fairness too? Or is A income-maximizing
given R's likely behavior? See also Dictator Game.
Contexts: game theory; experimental
unbalanced data:
In a panel data set, there are observations across cross-section units
(e.g. individuals or firms), and across time periods. Often such a data set
can be represented by a completely filled in matrix of N units and T periods.
In the "unbalanced data" case, however, the number of observations
per time period varies. (Equivalently one might say that the number of
observations per unit is not always the same.) One might handle this by
letting T be the total number of time periods and N_{t} be the number
of observations in each period.
Contexts: econometrics; estimation
unbiased:
An estimator b of a distribution's parameter B is unbiased if the mean of b's
sampling distribution is B. Formally, if: E[b] = B.
Source: Greene, 1993, p 93
Contexts: econometrics
uncertainty:
If outcomes will occur with a probability that cannot even be estimated, the
decisionmaker faces uncertainty. Contrast risk.
This meaning of uncertainty is attributed to Frank Knight, and is sometimes
referred to as Knightian uncertainty.
The decisionmaker can apply game theory even in such a circumstance, e.g. the
choice of a dominant strategy.
Kreps (1988), p 31, writes that three standard ways of modeling choices made
under conditions of uncertainty are with von Neumann-Morgenstern expected
utility over objective uncertainty, the Savage axioms for modeling subjective
uncertainty, and the Anscombe-Aumann theory which is a middle course between
them.
A recent ad for a new book edited by Haim Levy (Stochastic Dominance:
Investment Decision Making under Uncertainty) considers three ways of
modeling investment choices under uncertainty: by tradeoffs between mean and
variance, by choices made by stochastic dominance, and non-expected
utility approaches using prospect theory.
Source: J. Montgomery, social networks paper;
Kreps, 1988
Contexts: models
uncorrelated:
Two random variables X and Y are uncorrelated if E(XY)=E(X)E(Y). Note that
this does not guarantee that they are independent.
Source: Greene, 1993, p 64-65
Contexts: econometrics
under the null:
Means "assuming the hypothesis formally being tested is true." See
null hypothesis.
Contexts: phrases; econometrics
uniform distribution:
A distribution in which every value in a range, which we will denote [a,b],
is equally likely. The pdf is 1/(b-a) on that range; the cdf is (x-a)/(b-a).
Mean is .5*(a+b). Variance is (1/12)(b-a)^{2}.
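A quick simulation check of the mean and variance formulas in Python (the values of a and b are arbitrary):

```python
import numpy as np

# Simulation check of the uniform[a,b] moment formulas; a=2, b=5 are arbitrary.
a, b = 2.0, 5.0
rng = np.random.default_rng(1)
draws = rng.uniform(a, b, size=1_000_000)

mean_formula = 0.5 * (a + b)          # .5*(a+b)
var_formula = (b - a) ** 2 / 12.0     # (1/12)(b-a)^2
```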
Contexts: statistics
uniform kernel:
The uniform kernel function is 1/2, for -1<u<1 and zero outside that
range. Here u=(x-x_{i})/h, where h is the window width and
x_{i} are the values of the independent variable in the data, and x is
the value of the independent variable for which one seeks an estimate. Unlike
the Gaussian kernel, the uniform kernel has bounded support, so only the data
points within one window width of x enter the estimate at x.
For kernel estimation.
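A minimal Python sketch of a density estimate built from the uniform kernel (the bandwidth h and the data are illustrative):

```python
import numpy as np

def uniform_kernel(u):
    """K(u) = 1/2 for -1 < u < 1, zero outside that range."""
    return np.where(np.abs(u) < 1, 0.5, 0.0)

def kernel_density_estimate(x, data, h):
    """f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h)."""
    u = (x - data) / h
    return uniform_kernel(u).sum() / (len(data) * h)

rng = np.random.default_rng(2)
data = rng.normal(size=100_000)            # a standard normal sample
f_hat = kernel_density_estimate(0.0, data, h=0.2)
# The true standard normal density at 0 is 1/sqrt(2*pi), about 0.3989.
```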
Source: Hardle, 1990
Contexts: econometrics; nonparametrics; estimation
uniform weak law of large numbers:
See Wooldridge chapter, p 2651. The UWLLN applies to a criterion
function q_{t}(w_{t},theta) if the
sample average of q_{t}() for a sample {w_{t}} from a random
time series is a consistent estimator for E(q_{t}()), uniformly in the
parameter theta.
A law like this is proved with Chebyshev's inequality.
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
union threat model:
"Firms may find it profitable to pay wages above the market clearing
level to try to prevent unionization." In a model this could lead to job
rationing and unemployment, just as efficiency wage models can.
Source: Katz, "Efficiency Wage Theories: A Partial Evaluation" NBER Macro
Annual 1986, p 250
Contexts: labor
unit root:
An attribute of a statistical model of a time series whose autoregressive
parameter is one. In a data series y[t] modeled by:
y[t+1] = y[t] + other terms
the series y[] has a unit root.
Contexts: statistics; econometrics; time series
unit root test:
A statistical test for the proposition that in an autoregressive statistical
model of a time series, the autoregressive parameter is one. In a data series
y[t], where t is a whole number, modeled by:
y[t+1] = ay[t] + other terms
where a is an unknown constant, a unit root test would be a test of the
hypothesis that a=1, usually against the alternative that |a|<1.
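A minimal Python sketch of estimating a by OLS. This is not a full unit root test, since under a=1 the usual t statistic has a nonstandard (Dickey-Fuller) distribution, but it shows the quantity being tested:

```python
import numpy as np

def estimate_ar1(y):
    """OLS estimate of a in y[t+1] = a*y[t] + e[t+1] (no other terms)."""
    x, y_next = y[:-1], y[1:]
    return (x @ y_next) / (x @ x)

rng = np.random.default_rng(3)
e = rng.normal(size=10_000)

random_walk = np.cumsum(e)            # a = 1: a unit root process
stationary = np.empty(e.size)         # a = 0.5: no unit root
stationary[0] = 0.0
for t in range(1, e.size):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

a_rw = estimate_ar1(random_walk)      # close to 1
a_st = estimate_ar1(stationary)       # close to 0.5
```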
Contexts: statistics; econometrics; time series
unity:
A synonym for the number 'one'.
Contexts: phrases
univariate:
A discrete choice model in which the choice is made from a one-dimensional set
is said to be a univariate discrete choice model.
Contexts: econometrics
univariate binary model:
For a dependent variable y_{i} that can take only the values one or zero, and
a continuous independent variable x_{i}, the model is that:
Pr(y_{i}=1)=F(x_{i}'b)
Here b is a parameter to be estimated, and F is a distribution function. See
probit and logit models for examples.
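A sketch taking F to be the logistic cdf (the logit case); the simulated data and the crude grid-search maximization of the likelihood are purely illustrative:

```python
import numpy as np

def logistic_cdf(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulate Pr(y_i = 1) = F(x_i * b) with b = 1.5, then recover b by
# maximizing the log likelihood over a grid (crude but transparent).
rng = np.random.default_rng(4)
b_true = 1.5
x = rng.normal(size=20_000)
y = (rng.uniform(size=x.size) < logistic_cdf(x * b_true)).astype(float)

def log_likelihood(b):
    p = logistic_cdf(x * b)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

grid = np.linspace(0.5, 2.5, 201)
b_hat = grid[np.argmax([log_likelihood(b) for b in grid])]  # near b_true
```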
Source: Takeshi Amemiya, "Discrete Choice Models," The New Palgrave:
Econometrics
Contexts: econometrics; estimation
unrestricted estimate:
An estimate of parameters taken without constraining the parameters. See
"restricted estimate."
Contexts: econometrics; estimation
upper hemicontinuous:
Informally: "no disappearing points." Roughly speaking, a correspondence is
upper hemicontinuous if its graph does not lose points in the limit: for a
closed-valued correspondence into a compact set, if x_n -> x, y_n -> y, and
each y_n is in the image of x_n, then y must be in the image of x.
Contexts: real analysis; models
urban ghetto:
As commonly defined by U.S. researchers: areas where 40 percent or more of
residents are poor.
Source: Blank, _ITAN_, p. 41
Contexts: poverty; data
utilitarianism:
A moral philosophy, generally operating on the principle that the utility
(happiness or satisfaction) of different people can not only be measured but
also meaningfully summed over people and that utility comparisons between
people are meaningful. That makes it possible to achieve a well-defined
societal optimum in allocations, production, and other decisions, and achieve
the goal utilitarian British philosopher Jeremy Bentham described as "the
greatest good for the greatest number."
This form of utilitarianism is thought of as extreme, now, partly because it
is widely believed that there exists no generally acceptable way of summing
utilities across people and comparing between them. Utility functions that
can be compared and summed arithmetically are cardinal utility
functions; utility functions that only represent the choices that would be
made by an individual are ordinal.
Contexts: philosophy
utility curve:
synonym for indifference curve.
Contexts: models
utility function:
A function that describes an individual agent's preferences. It is a
mathematical mapping from quantitatively measurable goods given to an
individual, or attributes of that individual's environment, to the level of
satisfaction they bring to that individual. We have no measure of the level
of satisfaction, or utility level, experienced by the agent but we can make a
hypothesis about the individual's utility function which could then be
disproved by the individual's behavior. Because they *could* be disproved,
the utility functions economists normally use have survived something of a
selection process.
A standard utility function is log utility which can be a function of a
one-dimensional measure of consumption or wealth.
Contexts: utility theory
UVAR:
Unstructured VAR (Vector Autoregression)
Contexts: econometrics; time series; estimation
UWLLN:
Uniform weak law of large numbers
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
value added:
A measure of output. Value added by an organization or industry is, in
principle:
revenue - non-labor costs of inputs
where revenue can be imagined to be price*quantity, and costs are usually
described by capital (structures, equipment, land), materials, energy,
and purchased services.
Treatment of taxes and subsidies can be nontrivial.
Value-added is a measure of output which is potentially comparable
across countries and economic structures.
value function:
Often denoted v() or V(). Its value is the present discounted value, in
consumption or utility terms, of the choice represented by its arguments.
The classic example, from Stokey and Lucas, is:
v(k) = max_{k'} { u(k, k') + bv(k') }
where k is current capital,
k' is the choice of capital for the next (discrete time) period,
u(k, k') is the utility from the consumption implied by k and k',
b is the period-to-period discount factor,
and the agent is presumed to have a time-separable function, in a discrete
time environment, and to make the choice of k' that maximizes the given
function.
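The fixed point of this equation can be computed by value function iteration; below is a minimal Python sketch using the illustrative utility u(k,k') = log(k^alpha - k'), for which the optimal policy is known in closed form:

```python
import numpy as np

# Value function iteration for v(k) = max_{k'} { u(k,k') + b*v(k') },
# with the illustrative utility u(k,k') = log(k^alpha - k').
alpha, b = 0.3, 0.9
grid = np.linspace(0.05, 0.5, 200)           # capital grid
v = np.zeros(grid.size)

c = grid[:, None] ** alpha - grid[None, :]   # consumption for each (k, k') pair
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

for _ in range(1000):
    v_new = (u + b * v[None, :]).max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-8:
        v = v_new
        break
    v = v_new

policy = grid[(u + b * v[None, :]).argmax(axis=1)]  # optimal k' at each k
```

For this example the true policy is k' = alpha\*b\*k^alpha, so the grid-based policy can be checked against the closed form.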
Source: Stokey and Lucas, 1989
Contexts: macro; models
VAR:
Vector Autoregression, a kind of model of related time series.
In the simplest example, the vector of data points at each time t
(y_{t}) is thought of as a parameter matrix (say, Phi1) times the
previous value of the data vector, plus a vector of errors about which some
distribution is assumed. Such a model may also have autoregressive terms going
further back in time than t-1.
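A sketch of the simplest case, a two-variable VAR(1) y_t = Phi1*y_{t-1} + e_t, simulated and then re-estimated by OLS (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
Phi1 = np.array([[0.5, 0.1],
                 [0.0, 0.3]])         # stable: eigenvalues inside the unit circle
T = 20_000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = Phi1 @ y[t - 1] + rng.normal(size=2)   # y_t = Phi1 y_{t-1} + e_t

# Estimate Phi1 by OLS: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
Phi1_hat = np.linalg.solve(X.T @ X, X.T @ Y).T    # close to Phi1
```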
Contexts: econometrics; time series; estimation
var():
An operator returning the variance of its argument
Contexts: notation
variance:
The variance of a distribution is the average of the squared distances of the
values drawn from the distribution from its mean:
var(x) = E[(x-Ex)^{2}].
Also called 'centered second moment.'
Nick Cox attributes the term to R.A. Fisher, 1918.
Contexts: econometrics; statistics
variance decomposition:
In a VAR, the variance decomposition at horizon h is the set of
R^{2} values associated with the dependent variable y_{t} and
each of the shocks h periods prior.
Source: M.W. Watson, Handbook of Econometrics, Ch 47, p. 2900
Contexts: econometrics; estimation; time series
variance ratio statistic:
Discussed thoroughly in Bollerslev-Hodrick 1992 p. 19, with equations and
estimation details there.
Contexts: finance; time series
VARs:
Vector Autoregressions. "Vector autoregressive models are _atheoretical_
models that use only the observed time series properties of the data to
forecast economic variables." Unlike structural models there are no
assumptions/restrictions that theorists of different stripes would object to.
But a VAR approach tests only LINEAR relations among the time series.
Source: Hakkio & Morris: Vector Autoregressions, a User's Guide
Contexts: macro; time series; estimation
vec:
An operator. For a matrix C, vec(C) is the vector constructed by stacking all
of the columns of C, the second below the first and so on. So if C is n x k,
then vec(C) is nk x 1.
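In NumPy terms this is column-major flattening; a small sketch:

```python
import numpy as np

C = np.array([[1, 2, 3],
              [4, 5, 6]])             # n x k = 2 x 3

vec_C = C.flatten(order="F")          # stack the columns of C
# vec_C is [1, 4, 2, 5, 3, 6], of length nk = 6
```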
Contexts: econometrics; notation
vega:
As used with respect to options: "The vega of a portfolio of derivatives
is the rate of change of the value of the portfolio with respect to the
volatility of the underlying asset." -- Hull (1997) p 328.
Formally this is a partial derivative.
A portfolio is vega-neutral if it has a vega of zero.
Source: Hull, 1997, p 328
Contexts: finance
verifiable:
Observable to outsiders, in the context of a model of information.
Models commonly assume that the values of some variables are known to
both of the parties to a contract but are NOT verifiable, by which we mean
that outsiders cannot see them and so references to those variables in a
contract between the two parties cannot be enforced by outside authorities.
Examples: .....
vintage model:
One in which technological change is 'embodied' in Solow's
language.
Source: Mortensen, Job Reallocation paper, Feb 1997
Contexts: macro
vNM:
Abbreviation for von Neumann-Morgenstern, which describes attributes of
some utility functions.
Contexts: modelling; micro theory
volatility clustering:
In a time series of stock prices, it is observed that the variance of returns
or log-prices is high for extended periods and then low for extended periods.
(E.g. the variance of daily returns can be high one month and low the next.)
This occurs to a degree that makes an iid model of log-prices or
returns unconvincing. This property of time series of prices can be called
'volatility clustering' and is usually approached by modeling the price
process with an ARCH-type model.
Source: Bollerslev-Hodrick 92 circa p 8
Contexts: finance; time series
von Neumann-Morgenstern utility:
Describes a utility function (or perhaps a broader class of preference
relations) that has the expected utility property: the utility of a gamble is
the expected value of the utilities of its possible outcomes, so the agent is
indifferent between any two gambles with the same expected
utility.
There may be other, or somewhat stronger or weaker assumptions in the vNM
phrasing but this is a basic and important one. It does not seem to be the
case that such a utility representation is required to be increasing in all
arguments or concave in all arguments, although these are also common
assumptions about utility functions.
The name refers to John von Neumann and Oskar Morgenstern's Theory of Games
and Economic Behavior. Kreps (1990), p 76, says that this kind of utility
function predates that work substantially, and was used in the 1700s by Daniel
Bernoulli.
Source: Kreps (1990)
Contexts: micro theory; modelling
WACM:
abbreviation for the Weak Axiom of Cost Minimization
Source: Varian, 1992
Contexts: models
wage curve:
A graph of the relation between the local rate of unemployment, on the
horizontal axis, and the local wage rate, on the vertical axis. Blanchflower
and Oswald show that this relation is downward sloping.
That is, locally high wages and locally low unemployment are
correlated.
Source: Blanchflower and Oswald, The Wage Curve, Ch 4
Contexts: labor
wage equation:
An equation in which a wage is the dependent variable. Usually this is an
empirical, econometric, linear regression equation. It may however be
nonlinear, or an abstract equation within a structural model.
Contexts: labor economics
Wallis statistic:
A test for fourth-order serial correlation in the residuals of a
regression, from Wallis (1972) Econometrica 40:617-636. Fourth-order serial
correlation comes up in the context of quarterly data; e.g., seasonality.
Formally, the statistic is:
d_{4} = (sum from t=5 to t=T of: (e_{t}-e_{t-4})^{2}) / (sum from t=1 to
t=T of: e_{t}^{2})
where the series of e_{t} are the residuals from a regression.
Tables for interpreting the statistic are in Wallis (1972).
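A short Python sketch of the computation (the residual series here is just illustrative white noise, for which d_4 should be near 2, as with the Durbin-Watson statistic):

```python
import numpy as np

def wallis_d4(e):
    """d4 = sum_{t=5}^{T} (e_t - e_{t-4})^2 / sum_{t=1}^{T} e_t^2."""
    diffs = e[4:] - e[:-4]                  # e_t - e_{t-4} for t = 5..T
    return np.sum(diffs ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(6)
e = rng.normal(size=100_000)                # serially uncorrelated residuals
d4 = wallis_d4(e)                           # near 2 under no 4th-order correlation
```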
Source: Greene, 1993, p 424
Contexts: time series; econometrics; estimation
Walrasian auctioneer:
A hypothetical market-maker who matches suppliers and demanders to get a
single price for a good. One imagines such a market-maker when modeling a
market as having a single price at which all parties can trade.
Such an auctioneer makes the process of finding trading opportunities perfect
and cost free; consider by contrast a "search problem" in which
there is a stochastic cost of finding a partner to trade with and transactions
costs when one does meet such a partner.
Contexts: models
Walrasian equilibrium:
An allocation vector pair (x,p), where x are the quantities held of each good
by each agent, and p is a vector of prices for each good, is a Walrasian
equilibrium if (a) it is feasible, and (b) each agent is choosing optimally,
given that agent's budget. In a Walrasian equilibrium, if an agent prefers
another combination of goods, the agent can't afford it.
Contexts: general equilibrium; models
Walrasian model:
A competitive markets equilibrium model "without any externalities,
asymmetric information, missing markets, or other imperfections."
(Romer, 1996, p 151)
"In this general equilibrium model, commodities are identical, the market is
concentrated at a single point [location] in space, and the exchange is
instantaneous. [Individuals] are fully informed about the exchange commodity
and the terms of trade are known to both parties. [No] effort is required to
effect exchange other than to dispense with the appropriate amount of cash.
[Prices are] a sufficient allocative device to achieve highest value uses."
(North, 1990, p. 30.)
Source: Romer, 1996, p 151; North, 1990
Contexts: models
wavelet:
A wavelet is a function which (a) maps from the real line to the real line,
(b) has an average value of zero, (c) has values very near zero except over a
bounded domain, and (d) is used for the purpose, analogous to Fourier
analysis, implied by the following paragraphs.
Unlike sine waves, wavelets tend to be irregular, asymmetric, and to have
values that die out to zero as one approaches positive and negative infinity.
"Fourier analysis consists of breaking up a signal into sine waves of
various frequencies. Similarly, wavelet analysis is the breaking up of
a signal into shifted and scaled versions of the original (or mother)
wavelet."
By decomposing a signal into wavelets one hopes not to lose local features of
the signal and information about timing. These contrast with Fourier
analysis, which tends to reproduce only repeated features of the original
function or series.
Source: Michel Misiti, Yves Misiti, Georges Oppenheim, Jean-Michel Poggi.
MATLAB Wavelet Toolbox User's Guide, version 1. March 1996. Copyright
The Mathworks, Inc. page 1-7.
Contexts: econometrics; statistics
WE:
Walrasian Equilibrium
Contexts: general equilibrium; models
weak form:
Can refer to the weak form of the efficient markets hypothesis, which
is that any information in the past prices of a security is fully reflected
in its current price.
Fama (1991) broadens the category of tests of the weak form hypothesis under
the name of 'test for return predictability.'
Source: Fama, 1970
Contexts: finance
weak incentive:
An incentive that does not encourage maximization of an objective, because
it is ambiguous or satisfice-able.
For example, payment of weekly wages is a weak incentive since by construction
it does not encourage maximum production, but rather the minimal performance
of showing up every work day. This can be the best kind of incentive in a
contract if the buyer doesn't know exactly what he wants or if output is not
straightforwardly measurable.
Contrast strong incentive.
Source: Weisbrod's class 5/23/97
Contexts: public economics
weak law of large numbers:
Quoted right from Wooldridge chapter:
A sequence of random variables {z_{t}} for t=1,2,... satisfies the
weak law of large numbers if these three conditions hold:
(1) E[|z_{t}|] is finite for all t,
(2) as T goes to infinity, the limit of the average of the first T
expectations E(z_{t}) exists; that is, it is a fixed, finite number,
(3) as T goes to infinity, the probability limit of the average of the first
T elements of the series [z_{t} - E(z_{t})] is zero.
The most important point (I think) is that the weak law of large numbers holds
iff the sample average is a consistent estimate for the mean of the process.
Laws of large numbers are proved with Chebyshev's inequality.
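A simulation sketch of the consistency property in the simplest (iid) case, with illustrative exponential draws:

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 3.0
z = rng.exponential(scale=mu, size=1_000_000)   # iid draws with E(z_t) = 3

# The sample average of the first T elements approaches mu as T grows:
running_avg = np.cumsum(z) / np.arange(1, z.size + 1)
final_gap = abs(running_avg[-1] - mu)           # small at T = 1,000,000
```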
Source: Wooldridge, 1995, p 2651
Contexts: econometrics; statistics; time series
weak stationarity:
synonym for covariance stationarity. A random process is weakly stationary if
and only if it is covariance stationary.
Contexts: statistics; time series; econometrics
weakly consistent:
synonym for consistent in the context of evaluating
estimators.
Contexts: econometrics; statistics
weakly dependent:
A time series process {x_{t}} is weakly dependent iff these four
conditions hold:
(1) {x_{t}} is essentially stationary; that is,
E[x_{t}^{2}] is uniformly bounded. In any such process, the
following 'variance of partial sums' is well defined, and it will be used in
the following conditions. Define s_{T}^{2} to be the variance of the sum
from t=1 to t=T of x_{t}.
(2) s_{T}^{2} is O(T).
(3) s_{T}^{-2} is O(1/T).
(4) The asymptotic distribution of the sum from t=1 to t=T of
(x_{t}-E(x_{t}))/s_{T} is
N(0,1).
These conditions rule out random processes which are serially correlated too
positively or negatively or whose partial sums are near zero.
Example 1: An iid process IS weakly dependent. (Domowitz, in class
4/14/97.)
Example 2: A stable AR(1) (|r|<1) with iid innovations is also weakly
dependent.
Source: Wooldridge, 1995, p 2643-44
Contexts: time series; econometrics; statistics
weakly ergodic:
A stochastic process may be weakly ergodic without being strongly
ergodic.
Contexts: time series; econometrics
weakly Pareto Optimal:
An allocation is weakly Pareto optimal (WPO) if no feasible reallocation would
be strictly preferred by all agents.
WPO <=> SPO if preferences are continuous and strictly increasing (that
is, locally nonsatiated).
Contexts: general equilibrium; models
WebEc:
A Web site with indexes to World Wide Web Resources in Economics.
wedge:
The gap between the price paid by the buyer and price received by the seller
in an exchange. Might be caused by a tax paid to a third party.
Contexts: micro theory
Weibull distribution:
In at least one 'standard' specification, has pdf:
f(x)=ax^{a-1}exp(-x^{a}) for x>=0, where a>0 is a shape parameter.
a=1 is the simplest case, in which the pdf reduces to the exponential
exp(-x); the pdf is zero for x<0.
Contexts: statistics
Weierstrass Theorem:
that a continuous function on a closed and bounded set will have a maximum and
a minimum.
This theorem is often used implicitly, in the assumption that some set is
compact, meaning closed and bounded. Examples that may help
clarify:
Example 1: Consider a set which is unbounded, like the real line. Say
variable x has any value on the real line, and we wish to maximize the
function f(x)=2x. It doesn't have a maximum or minimum because values of x
further from zero have more and more extreme values of f(x).
Example 2: Consider a set which is not closed, like (0,1). Again, let f(x)
be 2x. Again this function has no maximum or minimum because there is no
largest or smallest value of x in the set.
Contexts: real analysis
weighted least squares:
A way of choosing an estimator. Makes a weighted tradeoff between the error
in an estimator due to bias and that due to variance.
Putting equal weights on the two is the mean square error
criterion.
Source: Kennedy, 1992, p. 16
Contexts: econometrics
welfare capitalism:
The practice of employers' voluntary provision of nonwage benefits to their
blue collar employees.
Source: Moriguchi, Chiaki. 2000. TITLE?
Contexts: labor; macro; history; comparative; political economy; sociology
WesVar:
A software program for computing estimates and variance estimates from
potentially complicated survey data. Made by
Westat.
Source: Westat, Inc.
Contexts: data; software
white noise process:
a random process of random variables that are uncorrelated, have
mean zero, and a finite variance (which is denoted s^{2} below).
Formally, e_{t} is a white noise process if E(e_{t}) = 0,
E(e_{t}^{2}) = s^{2}, and
E(e_{t}e_{j}) = 0 for t<>j, where all those expectations are
taken prior to times t and j.
A common, slightly stronger condition is that they are independent from one
another; this is an "independent white noise process."
Often one assumes a normal distribution for the variables, in which case the
distribution is completely specified by the mean and variance; these are
"normally distributed" or "Gaussian" white noise
processes.
Contexts: econometrics; finance; statistics; models
White standard errors:
Same as Huber-White standard errors.
Contexts: econometrics; statistics; estimation
Wiener process:
A continuous-time random walk with random jumps at every point in time
(roughly speaking).
window width:
Synonym for bandwidth in the context of kernel estimation
Contexts: econometrics, nonparametrics
winner's curse:
That a winner of an auction may have overestimated the value of the good
auctioned.
"The winner's curse arises in an auction when the good being sold has a
common value to all the bidders (such as an oil field) and each bidder has a
privately known unbiased estimate of the value of the good (such as from a
geologist's report): the winning bidder [may] be the one who most
overestimated the value of the good; this bidder's estimate itself may be
unbiased but the estimate conditional on the knowledge that it is the highest
of n unbiased estimates is not." -- Gibbons and Katz
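The conditional bias described in the quotation is easy to see by simulation: each bidder's estimate is unbiased, but the maximum of n unbiased estimates overstates the common value (all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
true_value = 100.0
n_bidders, n_auctions = 10, 50_000

# Each bidder's estimate = true value + zero-mean noise: unbiased individually.
estimates = true_value + rng.normal(scale=10.0, size=(n_auctions, n_bidders))

avg_estimate = estimates.mean()              # near 100: each estimate unbiased
avg_highest = estimates.max(axis=1).mean()   # well above 100: winner's curse
```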
Source: Gibbons and Katz, "Layoffs and Lemons", Journal of Labor
Economics, 1991
Contexts: game theory; auctions
WIPO:
World Intellectual Property Organization, a United Nations agency that
administers international intellectual property treaties.
Contexts: intellectual property; organizations
within estimator:
synonym for fixed effects estimator
Contexts: econometrics
WLLN:
Weak law of large numbers
Contexts: econometrics; statistics; time series
WLOG:
abbreviation for "without loss of generality". This phrase is
relevant in the context of a proof or derivation in which the notation becomes
simpler, or there are fewer cases to demonstrate, by making an innocuous
assumption, for example that the data are in a certain order.
Contexts: mathematics; proofs
Wold decomposition:
Any zero mean, covariance stationary process x_{t} can be represented as a
moving average of a white noise process plus a linearly deterministic
component that is a function of the index t. That form of expressing the
process is its Wold decomposition:
x_{t} = (sum from j=0 to infinity of: b_{j}e_{t-j}) + k_{t}
where e_{t} is white noise, b_{0}=1, the b_{j} are square-summable, and
k_{t} is the linearly deterministic component.
Source: Hamilton, p. 109
Contexts: econometrics; time series
Wold's theorem:
That any covariance stationary stochastic process with mean zero
has a moving average representation, called its Wold
decomposition. Let {x_{t}} be that process. See Sargent,
1987, p 286-288 for the complete theorem, assumptions, and
proof.
Source: Sargent, 1987, p 286-288
Contexts: macro; time series; econometrics; statistics
World Bank:
A collection of international organizations to aid countries in their process
of economic development with loans, advice, and research. It was founded in
the 1940s to aid Western European countries after World War II with
capital.
Contexts: organizations
world systems theory:
[What follows is the editor's best understanding, but not definitive.] A
category of sociological/historical description and analysis in which aspects
of the world's history are thought of as byproducts of the world being an
organic whole. Key categories are core and periphery. Core
countries, economies, or societies are richer, have more capital-intensive
industry, skilled labor and relatively high profits. In a way they exploit
the poorer peripheral societies but it may not be a deliberate
collusion.
Source: Gray's review of O'Hearn's book at http://www.eh.net/BookReview, first
paragraph.
http://eh.net/bookreviews/library/0595.shtml
WPO:
stands for Weakly Pareto Optimal
Contexts: general equilibrium; models
X-11 ARIMA:
A nonparametric method or program for seasonal adjustment, developed at the
Census Bureau, and used by US national agencies such as the Federal
Reserve.
Source: conversations with Ish, Jan 18, 2004; Stuart Scott, Aug 2, 2005
Contexts: data; estimation
X-inefficiency model:
A model in which there is a best-practice technology, and a unit (firm or
country, for example) either has that technology or one not as good. No
random factor could make a firm's production function better than that
best-practice one.
An organization is perfectly x-efficient if it produces the maximum output
possible from its inputs? Or is there some connection between its choice
of output levels and types and its x-efficiency?
Sources of x-inefficiency discussed in the academic literature:
(1) Inertia in process; that is, doing things to minimize internal redesign
from the way they were done last time, rather than in the most efficient
way for current circumstances.
(2) Prisoner's dilemma situations where an individual's effort is
unobservable; lack of trust and lack of communication can contribute to
this. It is hard for any individual to coordinate the agreement necessary
to raise effort. (Leibenstein, Sept 1983 AER comment.)
(3) Absence of knowledge. (I haven't seen this discussed but it has to be
out there.)
Source: Leibenstein and others
Contexts: IO
yellow-dog contract:
A requirement by a firm that the worker agree not to engage in collective
labor action. Such contracts are not enforceable in the U.S.
Source: Katz, "Efficiency Wage Theories...", NBER Annual 1986, p 250
Contexts: labor
zero-sum game:
A game in which total winnings and total losings sum to zero for each possible
outcome.
Contexts: game theory