Since Campbell and Deaton, macroeconomists have known that aggregate consumption exhibits “excess smoothness” compared to benchmark models. But a large literature has found no evidence of such smoothness in microeconomic data for individual households. We show that the conflict can be explained by a model in which consumers have accurate knowledge of their personal circumstances but ‘sticky expectations’ about the macroeconomy. In our model, the persistence of aggregate consumption growth reflects consumers’ imperfect attention to aggregate shocks. Our proposed degree of (macro) inattention has negligible utility costs, because aggregate shocks constitute only a tiny proportion of the uncertainty that consumers face. In contrast with models in the existing literature, our model is consistent with both micro and macro stylized facts about consumption dynamics.
Consumption, Expectations, Habits, Inattention
D83, D84, E21, E32
1Carroll: Department of Economics, Johns Hopkins University, http://www.econ2.jhu.edu/people/ccarroll/ 2Crawley: Department of Economics, Johns Hopkins University 3Slacalek: DG Research, European Central Bank, http://www.slacalek.com/ 4Tokuoka: Ministry of Finance, Japan 5White: Department of Economics, University of Delaware
The macroeconomics literature typically measures the degree of “excess smoothness” in aggregate consumption with a parameter conventionally labeled the ‘habit formation coefficient.’ A recent comprehensive meta-analysis of 597 published estimates (Havranek et al.) reports that studies based on aggregate data find a substantially positive coefficient on average; see Figure 1.2 A careful reading of the literature suggests that the coefficient is higher, perhaps 0.75, in papers where the data are better measured.
But empirical studies using household-level data reject the existence of any substantial degree of excess smoothness. The modal estimate from Havranek et al.’s survey of the micro literature is 0; the mean estimate is about 0.1 (see Figure 1).3
It is difficult to see how the micro evidence can be reconciled with an interpretation in which the aggregate parameter reflects ‘habits’ that are an actual characteristic of consumers’ individual utility functions.
One route to explaining aggregate smoothness that does not rely on individual habits has been to suppose that smoothness arises from some form of information friction, either ‘noisy information’ (cf. Pischke) or because the difficulty of gathering information makes consumers ‘rationally inattentive’ (cf. Reis [2006a] and Maćkowiak and Wiederholt).4 Later in the paper, we show that neither of these approaches is capable of simultaneously explaining the micro and the macro evidence. We propose a simple alternative. Instead of facing information frictions like those modeled in previous work, consumers in our framework perfectly (‘frictionlessly’) perceive their own personal circumstances (employment status, wage rate, wealth, etc.); but information about macroeconomic quantities like aggregate productivity growth arrives only occasionally (as in the Calvo model of firms’ price updating). Specifically, households’ macroeconomic expectations are “sticky,” as in Mankiw and Reis and Carroll.
Consumption sluggishness à la Campbell and Deaton arises as follows. Even with accurate knowledge of their personal circumstances, a household whose beliefs about the state of the aggregate economy are out of date will behave in the way that would have been macroeconomically appropriate (for the household’s currently observed level of wealth, etc.) at the time of their last perception of macroeconomic circumstances. The lag in perception generates a lag in the response of aggregate spending to aggregate developments; the amount of sluggishness depends on the frequency with which consumers update. When our model’s updating frequency is calibrated to match estimates of the degree of inattention to other aggregate variables (e.g., inflation) obtained from direct expectations data (surveys of households), the model’s implied persistence in aggregate consumption growth matches the estimates of the ‘excess smoothness’ of consumption growth in the macro literature.
Despite aggregate sluggishness, at the level of individual households, high-frequency consumption growth has negligible predictability (aside from what arises from the standard mechanisms of the model without habits, e.g. precautionary motives or intertemporal substitution). The lack of micro predictability can be reconciled with aggregate smoothness because the rationally appropriate contribution of the consumer’s perception of the macroeconomic environment to their individual spending choices is swamped by the importance of fluctuations in idiosyncratic components of income, which our consumers have no difficulty observing (and to which we assume they are perfectly attentive).5
Our sticky updating of beliefs about the aggregate economy takes the same form (and has the same magnitude) as proposed in Carroll  as a microfoundation for the Mankiw and Reis  model of inflation expectations. An advantage of our context compared to those papers is that because we are using an optimizing model, we are able to calculate an explicit utility cost of stickiness. Consistent with a theme in the literature on inattentiveness all the way back to Akerlof and Yellen , we find that the utility penalty from inattention is low, so that, under our calibrated parameters, our consumers would be willing to pay very little for even the most perfect information about the macroeconomic state. (The murky information actually available, for example from professional forecasters, would be even less valuable to them).
Our results are essentially the same in a partial equilibrium model (in which factor prices are constant) and a heterogeneous-agents DSGE model with aggregate shocks (which affect factor prices).6 Data simulated from our models match what we take to be the main stylized facts about individual and aggregate consumption dynamics.
When estimated on simulated individual data (corresponding to microeconomic evidence), regressions in the spirit of Hall  and Campbell and Mankiw  find that consumption growth exhibits little persistence. This result is essentially identical across all variants of our models: partial or general equilibrium, with or without inattention. It comports well with the conclusions of a micro literature that was already large when Deaton  surveyed it and has remained consistent since then in finding little persistence. In this respect (and all others), the micro implications of the model are essentially indistinguishable from the implications of standard models of consumption that dominate the microeconomic literature (models with uninsurable uncertainty as well as precautionary saving and perhaps liquidity constraints).7 Because our model is perfectly standard in these respects, we confine our analysis of the micro implications of the model to showing that it matches the specific evidence on the lack of ‘excess smoothness’ in micro data.
We then analyze Hall /Campbell and Mankiw -style regressions with simulated aggregate data. Thanks to the law of large numbers, the idiosyncratic shocks that dominate the household data cancel out upon aggregation, leaving only the residual systematic factors, which generate predictability of consumption growth in aggregate that is absent in idiosyncratic data. Campbell and Mankiw proposed that such predictability arises because some people just spend all of their income, while the habit formation literature has argued instead that predictability reflects the sluggishness of consumption growth itself. Horserace regressions that pit these two possibilities against each other produce a clear winner: Both in our model and in empirical data, almost all of the predictability of consumption growth is explained by its correlation with lagged consumption growth; only a small portion comes from the predictable component of aggregate income growth.
After a brief review of the extensive relevant literature, we begin explaining our ideas with a ‘toy model’ (section 3) in which the key mechanisms can be derived analytically, thanks to extreme simplifying assumptions like quadratic utility and constant factor prices. We next (section 4) present the full versions of our models, which abide by the more realistic assumptions (CRRA utility, aggregate as well as individual shocks, time-varying factor prices, etc.) that have become conventional in the micro and macro literatures respectively.
After calibrating the model (section 5), we describe the stylized facts from both the micro and macro literatures that need to be explained by a good microfounded macroeconomic model of consumption, and show that all of the various versions of our model (partial versus general equilibrium, etc) robustly reproduce those facts (section 6). This robustness indicates that our results are not a fragile implication of any highly specific framework but instead flow from the underlying structure of inattention that is the common element across all versions of our model (including the quadratic utility ‘toy model’ where the consequences can be seen most clearly). We then (section 7) calculate how much a fully informed consumer would be willing to pay at birth to enjoy instantaneous and perfect knowledge of aggregate developments as they live their life (not much, it turns out).
With our model’s quantitative results in hand, we describe its quantitative and qualitative differences with the other ‘imperfect information’ approaches to explaining aggregate consumption smoothness that have been explored in the prior literature (section 8), and argue that no prior model can explain both micro and macro data. Our conclusion suggests directions for future research.
No review of the empirical literature is needed; Havranek et al.  have done an admirable job. Our only critique is that they have followed much of the prior literature in referring to the parameter of interest as the ‘habit coefficient.’ A better choice would have been to call it the ‘excess smoothness’ coefficient; ours is not the first paper to suggest that habits are not the only possible explanation for consumption smoothness.
Our ‘sticky expectations’ approach is related to several strands of the burgeoning literature on models of imperfect information processing. There are two contributions we make with respect to that literature: First, we simultaneously explain the micro and the macro evidence on the excess smoothness of consumption. (In contrast, existing literature has mostly focused on capturing consumption smoothness in aggregate data only, using models that microeconomic data reject). Second, our setup employs realistic assumptions about utility (CRRA rather than quadratic or CARA utility) and the structure of the income process (permanent and transitory components, aggregate uncertainty à la Krusell and Smith). Using this setup we investigate how our model is able to quantitatively match stylized facts about consumption smoothness both in micro and in macro data.
A major strand in that literature is models of ‘rational inattention’ in the spirit of Sims, in which agents have a limited ability to pay attention and allocate it optimally, recently embodied (for example) in a series of papers by Maćkowiak and Wiederholt. They study a DSGE model with inattentive consumers and firms using a simple New Keynesian framework in which they replace all sources of slow adjustment (habit formation, Calvo pricing and wage setting) with rational inattention. The setup with rational inattention can match the sluggish responses observed in aggregate data, in response both to monetary policy shocks and to technology shocks.
A challenge to this approach has been the extraordinary complexity of solving models that aim to work out the full implications of Sims-like rational inattention in environments as rich as those that can be handled with perfect attention. For this reason, the literature on rational inattention has adopted extreme simplifying assumptions like quadratic utility (Luo) or a highly stylized setup of idiosyncratic and aggregate income shocks. To our knowledge, no one has so far solved a rational inattention model in the context of the full Krusell and Smith framework.
As a halfway house, Gabaix  has recently proposed a framework that is much simpler than the full rational inattention framework of Sims , but aims to capture much of its essence. This approach is relatively new, and while it does promise to be more tractable than the full-bore Simsian framework, even the simplified Gabaix approach would be formidably difficult to embed in a model with a rich treatment of transitory and persistent income shocks, precautionary motives and other complexities entailed in modern models of microeconomic consumption decisions. It would be similarly challenging to determine how to apply the approaches of Woodford  or Morris and Shin  to our question.8
Another way to dial back the complexity of the rational inattention approach is to radically simplify the model’s assumptions about the decisionmaker’s problem. In that spirit, Reis [2006a] considers a model in which consumers with a linear consumption function and a conveniently simple environment optimally choose to be inattentive because of explicit (fixed monetary) costs of attention.9 In this framework, Reis [2006a] is able to calculate an explicit analytical formula for the tradeoff between the disutility from the increase in uncertainty caused by inattention and the monetary savings from infrequent payment of the cost of information. Reis shows that in his model, inattention manifests in the fact that his consumers gather new information (and therefore update their consumption) only at fixed intervals whose length depends on the cost of obtaining information versus the cost of remaining ignorant.
Inattention is not the only alternative to habits as an explanation for excess smoothness. Information itself can be imperfect, even for a perfectly attentive consumer. The seminal work contemplating this possibility was by Muth, whose most direct descendant in the consumption literature is Pischke (building also on Lucas; see also Ludvigson and Michaelides). The idea is that (perfectly attentive) consumers face a signal extraction problem in determining whether a shock to income is transitory or permanent. When a permanent shock occurs, the immediate adjustment to the shock is only partial, since agents’ best guess is that the shock is partly transitory and partly permanent. With the right calibration, such a model could in principle explain any amount of excess smoothness. But we argue that when a model of this kind is calibrated realistically, it generates only a modest amount of excess smoothness, far less than is found in aggregate data.
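The mechanics of this signal-extraction logic can be sketched in a few lines of code. The shock variances below are illustrative (not Pischke’s calibration), and `kappa` is the standard steady-state Kalman gain for a random-walk-plus-noise process:

```python
import numpy as np

# Income = random-walk permanent component + iid transitory noise.
# The optimal (Muth) estimate of the permanent component updates by a
# constant steady-state Kalman gain kappa each period.
sigma_perm, sigma_tran = 1.0, 1.0            # illustrative shock std. devs.
q = (sigma_perm / sigma_tran) ** 2           # signal-to-noise ratio
kappa = (-q + np.sqrt(q**2 + 4 * q)) / 2     # steady-state Kalman gain

rng = np.random.default_rng(0)
T = 10_000
p = np.cumsum(sigma_perm * rng.normal(size=T))   # permanent income
y = p + sigma_tran * rng.normal(size=T)          # observed income
p_hat = np.zeros(T)
for t in range(1, T):
    p_hat[t] = p_hat[t - 1] + kappa * (y[t] - p_hat[t - 1])

# With quadratic utility, consumption tracks p_hat, so a permanent shock is
# absorbed only at rate kappa per period: some smoothness, but the amount is
# pinned down by the variance ratio rather than being a free parameter.
print(f"steady-state gain kappa = {kappa:.3f}")
```

Under these illustrative values (equal permanent and transitory variances), `kappa` is about 0.62, so most of a permanent shock is absorbed on impact; generating substantially more smoothness would require a much larger transitory variance.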
One of our objectives is to faithfully match microeconomic data. A large empirical literature has over the last several decades documented the importance of modeling precautionary saving behavior under uncertainty. Rather than replicating the key results, which are well known, we simply refer to them. For example, in micro data there is incontrovertible evidence—most recently from millions of datapoints from the Norwegian population registry examined by Fagereng et al.—that the consumption function is not linear. It is concave, as the general theory suggests (Carroll and Kimball), and this concavity matters greatly for matching the main micro facts. In addition, nothing in the micro data resembles the Reis model’s prediction of extended periods in which consumption does not change at all, punctuated by occasional dates of adjustment at which it moves a lot before remaining anchored at the new level for another extended period. This critique applies generically to models that incorporate a convex cost of adjustment—whether to the consumer’s stock of information (Reis [2006a]) or to the level of consumption as in Chetty and Szeidl. All such models imply counterfactually ‘jerky’ behavior of spending at the microeconomic level.10
To better match the micro data, we use the now-conventional microeconomic formulation in which utility takes the Constant Relative Risk Aversion form and uncertainty is calibrated to match micro estimates. Our assumption that consumers can perfectly observe the idiosyncratic components of their income allows us to use essentially the same solution methods as in the large recent literature exploring models of this kind; our assumption that macroeconomic expectations are sticky makes no material difference to the solution of the model.11 Implementing the state of the art in the micro literature adds a great deal of complexity and precludes a closed form solution for consumption like the one used by Reis; its virtue is that the model is quantitatively plausible enough that, for example, it might actually be usable by policymakers who wanted to assess the likely aggregate dynamics entailed by alternative fiscal policy options.
Given our choice to embrace the challenge of matching micro data, it was essential to keep the rest of the model as simple as possible, in the spirit of Akerlof and Yellen, Cochrane, and Mankiw and Reis, and as forcefully advocated by Browning and Crossley. In pursuit of such simplicity, we adopt the Calvo-like framework of Carroll in which updating is a Poisson event.12
Moving from theory to evidence, there is an interesting and growing literature that uses expectations data from surveys in an attempt to directly measure sluggishness in expectations dynamics.13 For example, Coibion and Gorodnichenko  find that the implied degree of information rigidity in inflation expectations is high, with an average duration of six to seven months between information updates. Fuhrer [2017b] and Fuhrer [2017a] find that even for professional forecasters, forecast revisions are explainable using lagged information, which would not be the case under perfect information processing.
Here we briefly introduce concepts and notation and motivate the key result using a simple framework with quadratic utility. We start with the classic Hall random walk model, with the standard assumption of time-separable utility and geometric discounting by factor β. Overall wealth w (the sum of human and nonhuman wealth) evolves according to the dynamic budget constraint
w_{t+1} = (w_t − c_t)R + ζ_{t+1},
where R is the interest factor and ζ_{t+1} is a shock to (total) wealth.
With no informational frictions, the usual derivations lead to the standard Euler equation
u′(c_t) = Rβ 𝔼_t[u′(c_{t+1})],
where 𝔼_t denotes the expectation of a consumer with instantaneous, perfect, frictionless updating of all information. Quadratic utility and Rβ = 1 imply Hall’s random walk proposition:
c_{t+1} = c_t + ε_{t+1},
where ε_{t+1} is a mean-zero expectational error.
Now suppose consumers update their information about wealth, and therefore their behavior, only occasionally. A consumer who updates in period t obtains precisely the same information that a consumer in a frictionless model would receive, forms the same expectations, and makes the same choices. Nonupdaters, however, behave as though their former expectations had actually come true (since by definition these are the persons who have learned nothing to disconfirm their prior beliefs). For example, consider a consumer who updates in period t and again in period t + n, but not in between. Designating ŵ as the consumer’s perception of wealth:
The economy is populated by consumers indexed by i, distributed uniformly along the unit interval. Aggregate (or equivalently, per capita) consumption is
C̄_t = ∫_0^1 c_{i,t} di.
Whether the consumer at location i updates in period t is determined by the realization of the binary random variable π_{i,t}, which takes the value 1 if consumer i updates in period t and 0 otherwise. Each period’s updaters are chosen randomly such that a constant proportion Π update in each period:
Aggregate consumption is the population-weighted average of per-capita consumption of updaters (C̄ᵘ_t) and nonupdaters (C̄ⁿ_t):
C̄_t = Π C̄ᵘ_t + (1 − Π) C̄ⁿ_t,     (2)
where per-capita consumption of nonupdaters satisfies C̄ⁿ_t = C̄_{t−1}, because the nonupdaters at time t are a random subset of the population at time t − 1. The first difference of (2) yields
ΔC̄_t = (1 − Π) ΔC̄_{t−1} + Π ε_t,
and Appendix C.1 shows that ε_t is approximately mean zero.14 Thus, in the quadratic utility framework the serial correlation of aggregate per-capita consumption changes is an approximate measure of 1 − Π, the proportion of nonupdaters.
This is the mechanism behind the exercises presented in Section 6. While the details of the informational friction are different in the more realistic models we set up in Section 4, the same logic and quantitative result hold: the serial correlation of consumption growth approximately equals the proportion of nonupdaters.
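This result is easy to verify numerically. The sketch below (with an assumed updating probability Π = 0.25, not an estimate from the paper) applies the aggregation identity directly: updaters jump to the frictionless random-walk target, while nonupdaters stay put:

```python
import numpy as np

rng = np.random.default_rng(0)
Pi = 0.25                    # updating probability (assumed, for illustration)
T, burn = 100_000, 100

eps = rng.normal(size=T)     # iid news about aggregate wealth
c_star = np.cumsum(eps)      # frictionless (random-walk) consumption target

# Aggregation identity: C_t = Pi * c*_t + (1 - Pi) * C_{t-1}, because the
# nonupdaters (a random subset) keep last period's average consumption.
C = np.empty(T)
C[0] = c_star[0]
for t in range(1, T):
    C[t] = Pi * c_star[t] + (1 - Pi) * C[t - 1]

dC = np.diff(C)[burn:]
rho = np.corrcoef(dC[1:], dC[:-1])[0, 1]
print(f"serial correlation of aggregate consumption growth: {rho:.3f}")
print(f"proportion of nonupdaters (1 - Pi):                 {1 - Pi:.3f}")
```

The two printed numbers coincide up to simulation noise, even though each individual’s consumption is a pure random walk between updates.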
Note further that the model does not introduce any explicit reason that consumption growth should be related to the predictable component of income growth à la Campbell and Mankiw. In a regression of consumption growth on the predictable component of income growth (and nothing else), the coefficient on income growth would derive entirely from whatever correlation predictable income growth might have with lagged consumption growth. This is the pattern we will find below, both in our theoretical and our empirical work.
One of the lessons of the consumption literature after Hall is that his simplifying assumptions (quadratic utility, perfect capital markets, Rβ = 1) are far from innocuous; more plausible assumptions can lead to very different conclusions. In particular, a host of persuasive theoretical and empirical considerations has led to the now-standard assumption of constant relative risk aversion utility. When utility is not quadratic, solution of the model requires specification of the exact stochastic structure of the income and transition processes.
Below, we present two models that will be used to simulate the economy under frictionless and sticky expectations. First, we specify a small open economy (SOE, or partial equilibrium) model with a rich and empirically realistic calibration of idiosyncratic and aggregate risk but exogenous interest rates and wages. Second, we extend the SOE model to a heterogeneous-agents dynamic stochastic general equilibrium (closed-economy) model that endogenizes factor returns, at the cost of considerably more computation.15
Several features are common across all our models. A continuum of agents care about expected lifetime utility derived from CRRA preferences over a unitary consumption good; they geometrically discount future utility flows by discount factor β. These agents inelastically supply one unit of labor, and their only decision in each period is how to divide their market resources between consumption and saving in a single asset. We assume agents are Blanchard “perpetual youth” consumers: They have a constant probability of death between periods, and upon death they are immediately replaced, while their assets are distributed among surviving households in proportion to the recipient’s wealth.
Output is produced by a Cobb–Douglas technology using capital K and (effective) labor L; capital depreciates at rate δ immediately after producing output, leaving portion (1 − δ) intact, and as usual the effectiveness of labor depends on the level of aggregate labor productivity.
We represent both aggregate and idiosyncratic productivity levels as having both transitory and permanent components. Large literatures have found that this representation is difficult to improve upon much in either context, and the simplicity of this description yields considerable benefits both in the tractability of the model, and in making its mechanics as easy to understand as possible.
In more detail, aggregate permanent labor productivity grows by a stochastic factor, subject to mean-one iid aggregate permanent shocks, so the aggregate productivity state evolves according to a finite Markov chain:
where j and k index the states. The productivity growth factor follows a bounded random walk, as in (for example) Edge et al., part of a literature whose aim is to capture in a simple statistical way the fact that underlying rates of productivity growth seem to vary substantially over time (e.g., fast in the 1950s, slow in the 1970s and 1980s, moderate in the 1990s, and so on; see also Jorgenson et al.).16 We introduce these slow-moving productivity growth rates not just for realism but also because we need to perform, in our simulated data, exercises like those Campbell and Mankiw performed in empirical data, in which consumption growth is regressed on the component of income growth that was predictable using data lagged several quarters. We therefore need a model in which there is some predictability of income growth several quarters into the future.
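A minimal sketch of such a process (the bounds and step size below are illustrative values, not our calibration) shows why it delivers predictability at horizons of several quarters:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50_000
lo, hi = -0.005, 0.015          # illustrative bounds on quarterly growth
sigma = 0.0005                  # illustrative step size of the random walk

# Productivity growth follows a random walk, kept within the bounds.
phi = np.empty(T)
phi[0] = 0.005
for t in range(1, T):
    phi[t] = np.clip(phi[t - 1] + sigma * rng.normal(), lo, hi)

# Growth today remains informative about growth several quarters ahead,
# which is what Campbell-Mankiw-style instruments need.
r4 = np.corrcoef(phi[:-4], phi[4:])[0, 1]
print(f"corr(growth_t, growth_t+4) = {r4:.2f}")
```

Because the growth rate moves slowly relative to its bounds, its level this quarter is a strong predictor of its level a year from now.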
The transitory component of productivity in any period is represented by a mean-one variable, so the overall level of aggregate productivity in a given period is the product of its permanent and transitory components.
Similarly, each household has an idiosyncratic labor productivity level, which (conditional on survival) evolves according to:
and like their aggregate counterparts, idiosyncratic permanent productivity shocks are mean-one iid.17 Total labor productivity for the individual is determined by the interaction of transitory idiosyncratic, transitory aggregate, permanent idiosyncratic, and permanent aggregate factors; when the household supplies one unit of labor, effective labor is the product of these components. The idiosyncratic transitory shock can be thought of as reflecting, for example, individual unemployment spells, while the aggregate transitory shock captures, e.g., disruptions in output due to bad weather. Just like its aggregate counterpart, the idiosyncratic transitory shock is mean one and iid. It has a minimum possible value of 0 (corresponding to an unemployment spell), which occurs with a small finite probability; this has the effect of imposing a ‘natural borrowing constraint’ (cf. Zeldes [1989b]) at zero.
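The transitory idiosyncratic shock can be sketched as follows (the 5 percent unemployment probability and the lognormal spread are illustrative, not our calibration); the employed draw is rescaled so the unconditional mean is exactly one:

```python
import numpy as np

rng = np.random.default_rng(0)
p_unemp = 0.05                # illustrative probability of an unemployment spell
N = 1_000_000

# Mean-one lognormal draw for the employed, rescaled by 1/(1 - p_unemp) so
# that the shock's unconditional mean (including the zeros) is one.
employed_draw = rng.lognormal(mean=-0.005, sigma=0.1, size=N) / (1 - p_unemp)
theta = np.where(rng.random(N) < p_unemp, 0.0, employed_draw)

print(f"mean of transitory shock: {theta.mean():.3f}")   # ~1 by construction
print(f"share at the zero floor:  {(theta == 0).mean():.3f}")
```

The mass point at zero is what generates the natural borrowing constraint: a consumer who might receive zero income can never rationally borrow against future labor income.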
For understanding the decisions of an individual consumer in a frictionless (i.e., perfect information) world, the aggregate and idiosyncratic transitory shocks can be combined into a single overall transitory shock, and the aggregate and idiosyncratic levels of permanent income can be combined into a single overall level (and likewise for the combined permanent shock); we write such combined objects in boldface. However, a key feature of the models used here is that a household does not necessarily know the true value of the aggregate productivity state variables, as it might not have (stochastically) observed them in the current period. Instead, each household has perceptions about the aggregate state. Our key behavioral assumption is twofold:
Given the assumption that productivity growth follows a random walk, the second part of the behavioral assumption says that an agent who last observed the true aggregate state n periods ago perceives:
That is, our assumed random walk in productivity growth means that the household believes that aggregate productivity has grown at the last observed growth rate for the past n periods.18 For households who observed the true aggregate state this period, n = 0, and thus (6) says that their perception coincides with the truth. The household perceives that their overall permanent productivity level is the product of the (correctly observed) idiosyncratic component and the (perceived) aggregate component.
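In code, this belief-formation rule is a one-liner (the names below are ours, chosen for clarity rather than to match the paper’s notation):

```python
def perceived_productivity(P_last, Phi_last, n):
    """Belief of a household that last observed the true aggregate state n
    periods ago: the last observed productivity level P_last, extrapolated
    forward at the last observed growth factor Phi_last."""
    return P_last * Phi_last ** n

# A household that saw productivity 1.00 growing at factor 1.005, three
# quarters ago, perceives the current level to be 1.005**3:
print(perceived_productivity(1.00, 1.005, 3))
# A household that updated this period (n = 0) perceives the truth:
print(perceived_productivity(1.00, 1.005, 0))
```

Misperception therefore grows geometrically with the time since the last update, but resets completely the moment the household updates.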
Households in our models always correctly observe the level of all household-specific variables—they are able to read their bank statement and paycheck. But (as will be shown below) consumers’ optimal behavior in the frictionless model depends on the ratios of those household-specific variables to productivity. That is, for some level variable (like market wealth), the optimal choice depends on its ratio to the corresponding measure of productivity; our notational convention is that when a level variable has been normalized in this way, it loses its boldness. The same applies to aggregate variables.
When a household’s perception of productivity differs from actual productivity, we denote the perceived ratio with a hat; the perceived ratio is the true level divided by perceived productivity, reflecting our assumption that the household perceives the idiosyncratic component of its productivity without error.
The behavior of a ‘sticky expectations’ consumer thus differs from that of a frictionless consumer only to the extent that the ‘sticky expectations’ consumer’s perception of aggregate productivity is out of date.
Infinitely-lived households with a productivity process like (4) would generate a nonergodic distribution of idiosyncratic productivity—as individuals accumulated ever more shocks to their permanent productivities, those productivities would spread out indefinitely over time. To avoid this inconvenience, we make the Blanchard assumption: Each consumer faces a constant probability of mortality (with the complementary survival probability). We track death events using a binary indicator:
We refer to this henceforth as a ‘replacement’ event, since the consumer who dies is replaced by an unrelated newborn who happens to inhabit the same location on the number line. The ex ante probability of death is identical for each consumer, so the aggregate mass of consumers who are replaced is time invariant.
Under the assumption that ‘newborns’ have the population-average productivity level, the population mean of the idiosyncratic component of permanent income is always equal to one.19 Our earlier equation (4) for the idiosyncratic productivity transition rule for the inhabitant of location i on the number line is thus adjusted to:20
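A quick simulation (with illustrative mortality and shock parameters, not our calibration) confirms that replacement keeps the cross-sectional distribution of idiosyncratic permanent productivity ergodic, with mean one:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50_000, 600
D = 0.02                        # illustrative per-period death probability
sigma_psi = 0.05                # illustrative sd of log permanent shock

p = np.ones(N)                  # idiosyncratic permanent productivity levels
for t in range(T):
    # mean-one lognormal permanent shock
    psi = np.exp(sigma_psi * rng.normal(size=N) - 0.5 * sigma_psi**2)
    p *= psi
    dead = rng.random(N) < D
    p[dead] = 1.0               # replacement: 'newborns' start at the mean

print(f"cross-sectional mean of p: {p.mean():.3f}")   # stays near 1
print(f"cross-sectional var of p:  {p.var():.3f}")    # bounded (ergodic)
```

Without the replacement step, the cross-sectional variance would grow without bound as surviving agents accumulated ever more permanent shocks.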
Along with its productivity level, the household’s primary state variable when the consumption decision is made is the level of market resources, which captures both current-period labor income (the wage rate times the household’s effective labor supply) and the resources that come from the agent’s capital stock (the value of the capital itself plus the value of the capital income it yields):
The transition process for market resources is broken up, for convenience of analysis, into three steps. ‘Assets’ at the end of the period are market resources minus consumption. Next period’s capital is determined from this period’s assets via
where the first row’s division of assets by the survival probability reflects returns to survivors from the Blanchardian insurance scheme in which the dying agents’ assets are distributed to the survivors. More compactly we can write:
The foregoing assumptions permit straightforward aggregation of individual-level variables. Aggregate capital is the population integral of (9):
The third equality holds because replacement events are independent of asset holdings. Because each household supplies one unit of labor and the idiosyncratic components of productivity average to one, aggregate effective labor supply is simply the aggregate productivity level.
Aggregate market resources can be written as per-capita resources of the survivors times their population mass, plus per-capita resources of the newborns times their population mass. This identity can also be derived directly as the population integral of (7).
The productivity-normalized version of (12) says that
Because the households in our model do not necessarily observe the true aggregate productivity level, their perception of normalized aggregate market resources is
We will sometimes refer to the ratio of true to perceived aggregate productivity as the household’s ‘productivity misperception,’ the scaling factor between actual and perceived market resources. As discussed below, this same misperception factor applies to individual market resources as well.
Our first realistic model considers a small open economy with perfect international capital mobility, so that factor prices are exogenously determined at constant values. These assumptions permit a partial equilibrium analysis using only the solution to the individual households’ problem. The frictionless consumer’s state variables are simply normalized market resources and the aggregate productivity growth state. Because we assume that the sticky-expectations consumer behaves according to the decision rules that are optimal for the frictionless consumer, but using perceived rather than true values of the state variables, we need only solve for the frictionless solution.
The household’s problem in levels can be written in Bellman form as:21
Our assumption that the aggregate and idiosyncratic productivity levels both reflect a combination of transitory and purely permanent components now permits us to make a transformation that considerably simplifies analysis and solution of the model: When the utility function is in the CRRA class, the problem can be simplified by dividing by while converting to normalized variables as above (e.g., ).22 This yields the normalized form of the problem, which has only and as state variables:
Defining , the main requirement for this problem to have a solution is an impatience condition:23
Designating the converged normalized consumption function that solves (16) as , the level of consumption for the frictionless consumer can be obtained24 from
Following the same notation as in the motivating Section 3, we define an indicator variable for whether household updates their perception to the true aggregate state in period :25
The Bernoulli random variable is iid for each household each period, with a probability of returning 1. Consistent with (6), household beliefs about the aggregate state evolve according to:
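A minimal simulation of this belief process, with hypothetical names (the household is assumed to start life with correct perceptions):

```python
import random

def simulate_perceptions(true_log_prod, update_prob, mean_growth, seed=0):
    """Sketch of the sticky-expectations updating rule: each period the
    household observes the true aggregate state with probability
    `update_prob` (an iid Bernoulli draw); otherwise it extrapolates its
    old perception at the expected growth rate. Names are illustrative."""
    rng = random.Random(seed)
    perceived = [true_log_prod[0]]        # assume the household starts informed
    for t in range(1, len(true_log_prod)):
        if rng.random() < update_prob:    # update: perception jumps to truth
            perceived.append(true_log_prod[t])
        else:                             # no update: extrapolate at trend
            perceived.append(perceived[-1] + mean_growth)
    return perceived
```

Setting `update_prob` to 1 reproduces the frictionless consumer, as noted below.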
Under the assumption that consumers treat their belief about the aggregate state as if it were the truth, the relevant inputs for the normalized consumption function are the household’s perceived normalized market resources and perceived aggregate productivity growth . The household chooses their level of consumption by:
The behavior of the ‘sticky expectations’ consumer converges to that of the frictionless consumer as approaches 1.
Because households in our model never misperceive the level of their own market resources (), they can never choose consumption that would violate the budget constraint. Households observe both their level of income and its idiosyncratic components and . If they wanted to do so, households could therefore calculate the aggregate component , which would correspond with the reports of a statistical agency; but they do not observe or separately (because statistical agencies do not report these objects). Our assumption is simply that households neither perceive nor attempt to extract an estimate of the decomposition of that aggregate state into transitory and permanent components. Section 8 analyzes the alternative model in which households DO perform such a signal extraction, and shows that the dynamics of aggregate consumption under this assumption do not match the dynamics that are observed in the aggregate data. Consumers’ misperceptions of aggregate permanent income do cause them to make systematic errors – but below we present calculations showing that for the value of that we estimate, those errors are small.
Our second model relaxes the simplifying assumption of a frictionless global capital market. In this closed economy, factor prices and are determined in the usual way from the aggregate production function and aggregate state variables, including the stochastic aggregate shocks, putting the model in the (small, but growing) class of heterogeneous agent DSGE models.
We make the standard assumption that markets are competitive, so factor prices are the marginal products of (effective) labor and capital, respectively. Denoting capital’s share as , so that , this yields the usual wage and interest rates:

Net of depreciation, the return factor on capital is .
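Assuming the production function is Cobb–Douglas in capital and effective labor, as the capital-share notation suggests, the factor-price computation can be sketched as follows (the parameter values are illustrative stand-ins for the calibration, and `k` is capital per effective worker):

```python
def factor_prices(k, alpha=0.36, delta=0.015):
    """Competitive factor prices when output per effective worker is
    y = k**alpha. alpha (capital's share) and delta (quarterly
    depreciation) are illustrative values, not the paper's table."""
    r = alpha * k ** (alpha - 1.0)   # interest rate: marginal product of capital
    w = (1.0 - alpha) * k ** alpha   # wage: marginal product of effective labor
    R = 1.0 - delta + r              # return factor net of depreciation
    return w, r, R
```

With `k = 1`, factor payments exhaust output: `w + r*k == k**alpha == 1`.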
An agent’s relevant state variables at the time of the consumption decision include the levels of household and aggregate market resources , as well as household and aggregate labor productivity and the aggregate growth rate . We assume that agents correctly understand the operation of the economy, including the production and shock processes, and have beliefs about aggregate saving—how aggregate market resources become aggregate assets (equivalently, next period’s aggregate capital ). Following Krusell and Smith  and Carroll et al. , we assume that households believe that the aggregate saving rule is linear in logs, conditional on the current aggregate growth rate:
The growth-rate-conditional parameters and are exogenous to the individual’s (partial equilibrium) optimization problem, but are endogenous to the general equilibrium of the economy. Taking the aggregate saving rule as given, the household’s problem can be written in Bellman form as:26
As in the SOE model, the household’s problem can be normalized by the combined productivity level , reducing the state space by two continuous dimensions. Dividing (21) by and substituting normalized variables, the reduced problem is:

Because household beliefs about the aggregate saving rule are linear in logs, (20) holds for normalized market resources and aggregate assets just as it does in levels.
The equilibrium of the HA-DSGE model is characterized by a (normalized) consumption function and an aggregate saving rule such that when all households believe , the solution to their individual problem (22) is ; and when all agents act according to , the best log-linear fit of on (conditional on ) is . The model is solved using a method similar to Krusell and Smith .27
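The log-linear fitting step inside that equilibrium loop can be sketched as follows (a simplified illustration assuming a single growth state; `s0` and `s1` are our labels for the rule’s intercept and slope):

```python
def fit_loglinear_saving_rule(log_M, log_A):
    """One step of the Krusell-Smith-style loop: given a simulated history
    of aggregate market resources M and end-of-period aggregate assets A
    (for one value of the growth state), recover the perceived log-linear
    saving rule  log A = s0 + s1*log M  by ordinary least squares."""
    n = len(log_M)
    mx = sum(log_M) / n
    my = sum(log_A) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(log_M, log_A))
    var = sum((x - mx) ** 2 for x in log_M)
    s1 = cov / var
    s0 = my - s1 * mx
    return s0, s1
```

In the full algorithm this fit is recomputed, conditional on each growth state, until the rule households believe matches the rule their behavior generates.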
The treatment of sticky beliefs in the HA-DSGE model is the natural extension of what we did in the SOE model presented in section 4.2.2: Because the level of now affects future wages and interest rates, a consumer’s perceptions of that variable now matter. Households in the DSGE model choose their level of consumption using their perception of their normalized state variables:
Households who misperceive the aggregate productivity state will incorrectly predict aggregate saving at the end of the period, and thus aggregate capital and the distribution of factor prices next period.28
Because households who misperceive the aggregate productivity state make (slightly) different consumption–saving decisions than they would if fully informed, aggregate saving behavior differs under sticky versus frictionless expectations, and consequently so does the equilibrium aggregate saving rule. When the HA-DSGE model is solved under sticky expectations, we implicitly assume that all households understand that all other households also have sticky expectations; the equilibrium aggregate saving rule is the one that emerges from this belief structure.
We begin by calibrating market-level and preference parameters by standard methods, then specify additional parameters to characterize the idiosyncratic income shock distribution.
We assume a coefficient of relative risk aversion of . The quarterly depreciation rate is calibrated by assuming annual depreciation of 6 percent, i.e., . Capital’s share in aggregate output takes its usual value of .
We set the variances of the quarterly transitory and permanent shocks at the approximate values respectively:
To finish the calibration, we consider a simple perfect foresight model (PF-DSGE), with all aggregate and idiosyncratic shocks turned off. We set the perfect foresight steady state aggregate capital-to-output ratio to 12 on a quarterly basis (corresponding to the usual ratio of 3 for capital divided by annual income). Along with the calibrated values of and , this choice implies values for the other steady-state characteristics of the PF-DSGE model:30
A perfect foresight representative agent would achieve this steady state if his discount factor satisfied . For the HA-DSGE model, we thus set the discount factor to , roughly matching the target capital-to-output ratio.31 For the SOE model we choose a much lower value of (). This results in agents with wealth holdings around the median observed in the data.32 The two values of are chosen to span the rather wide range of calibrations found in the micro and macro literatures. Experimentation has indicated that our results are not sensitive to such choices.
The annual-rate idiosyncratic transitory and permanent shocks are assumed to be
These figures are conservative in comparison with standard raw estimates from the micro data;33 using data from the Panel Study of Income Dynamics, for example, Carroll and Samwick  estimate and ; Storesletten, Telmer, and Yaron (2004) estimate , with varying estimates of the transitory component. But recent work by Low et al.  suggests that controlling for job mobility and participation decisions reduces estimates of the permanent variance somewhat; and using very well-measured Danish administrative data, Nielsen and Vissing-Jorgensen  estimate and , which presumably constitute lower bounds for plausible values for the truth in the U.S. (given the comparative generosity of the Danish welfare state).
Since the variance of the annual permanent innovation is four times the variance of the quarterly innovation, this calibration implies that the variance of the idiosyncratic permanent innovations at the quarterly frequency is about 100 times the variance of the aggregate permanent innovations (0.004 divided by 0.00004). This is a point worth emphasizing: Idiosyncratic uncertainty is approximately two orders of magnitude larger than aggregate uncertainty. Reasonable people could differ a bit with our calibration of either the aggregate or the idiosyncratic risk, but no plausible calibration of either magnitude will change the fundamental point that the aggregate component of risk is tiny compared to the idiosyncratic component. This is why it is plausible to assume that people do not pay close attention to the macroeconomic environment.34
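A quick check of this arithmetic (the aggregate quarterly figure appears in the text; the annual idiosyncratic figure is the one implied by the stated ratio, so treat it as illustrative):

```python
# Back-of-the-envelope check of the variance comparison in the text.
agg_perm_var_quarterly = 0.00004            # aggregate permanent variance, quarterly
idio_perm_var_annual = 0.016                # implied annual idiosyncratic figure
idio_perm_var_quarterly = idio_perm_var_annual / 4   # annual variance = 4 x quarterly
ratio = idio_perm_var_quarterly / agg_perm_var_quarterly
```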
We assume that the probability of unemployment is 5 percent per quarter. This approximates the historical mean unemployment rate in the U.S., but model unemployment differs from real unemployment in (at least) two important ways. First, the model does not incorporate unemployment insurance, so labor income of the unemployed is zero. Second, model unemployment shocks last only one quarter, so their duration is shorter than the typical U.S. unemployment spell (about 6 months). The idea of the calibration is that a single quarter of unemployment with zero benefits is roughly as bad as two quarters of unemployment with an unemployment insurance payment of half of permanent labor income (a reasonable approximation to the typical situation facing unemployed workers). The model could be modified to permit a more realistic treatment of unemployment spells; this is a promising topic for future research, but would involve a considerable increase in model complexity because realism would require adding the individual’s employment situation as a state variable.
The probability of mortality is set at 0.005, which implies an expected working life of 50 years; results are not sensitive to plausible alternative values of this parameter, so long as the expected length of life is short enough to permit a stationary distribution of idiosyncratic permanent income.
We calibrate the probability of updating at 0.25 per quarter, for several reasons. First, this is the parameter value assumed for the speed of expectations updating by Mankiw and Reis  in their analysis of the consequences of sticky expectations for inflation. They argue that an average frequency of updating of once a year is intuitively plausible. Second, Carroll  estimates an empirical process for the adjustment process for household inflation expectations in which the point estimate of the corresponding parameter is 0.27 for inflation expectations and 0.32 for unemployment expectations; the similarity of these figures suggests 0.25 is a reasonable benchmark, and provides some insulation against the charge that the model is ad hoc: It is calibrated in a way that corresponds to estimates of the stickiness of expectations in a fundamentally different context. Finally, empirical results presented below will also suggest a speed of updating for U.S. consumption dynamics of about 0.25 per quarter.
This section briefly describes some equilibrium characteristics of the solutions to the models under the parameters specified above. Results are reported in Table 2.
Note first the considerable difference between the mean level of assets in the HA-DSGE and SOE models (first row of the table). As indicated above, this reflects our goal of presenting results that span the full range of calibrations in the micro and macro literatures; the micro literature has often focused on trying to explain the wealth holdings of the median household, which are much smaller than average wealth holdings.
The table suggests a broad generalization that we have confirmed with extensive experimentation: With respect to cross-section statistics, mean outcomes, and idiosyncratic consumption dynamics, the frictionless expectations and sticky expectations models are virtually indistinguishable using microeconomic data, and very similar in most aggregate implications aside from the dynamics of aggregate consumption.
The calibrated models can now be used to evaluate the effects of sticky expectations on consumption dynamics. We begin this section with an empirical benchmark on U.S. data that will guide our investigation of the implications of the model. We then demonstrate that simulated data from the sticky expectations models quantitatively and qualitatively reproduce the key patterns of aggregate and idiosyncratic consumption data.
The random walk model provides the framework around which both the micro and macro consumption literatures have been organized. Reinterpreted to incorporate CRRA utility and permit time-varying interest rates, the random walk proposition has frequently been formulated as a claim about regressions of the form:

where is any variable whose value was known to consumers when the period- consumption decision was made, and is white noise.
For macroeconomic models (including the HA-DSGE setup in Section 4.3), our simulation analysis35 shows that the relationship between the normalized asset stock and the expected interest rate is nearly linear, so (23) can be reformulated with no loss of statistical power as

This reformulation is convenient because the literatures on precautionary saving and liquidity constraints since at least Zeldes [1989a,b] have argued that the effects of capital market imperfections can be captured by incorporating a lagged measure of resources like in consumption growth regressions.
Campbell and Mankiw  famously proposed a modification of this model in which a proportion of income goes to rule-of-thumb consumers who spend in every period. They argued that can be estimated by incorporating the predictable component of income growth as an additional regressor. Finally, Dynan  and Sommer  show that in standard habit formation models, the size of the habit formation parameter can be captured by including lagged consumption growth as a regressor. These considerations lead to a benchmark specification of the form:
There is an extensive existing literature on aggregate consumption dynamics, but Sommer  is the only paper we are aware of that estimates an equation of precisely this form in aggregate data. Sommer  interprets the serial correlation of consumption growth as reflecting habit formation.36 However, Sommer’s choice of instruments, estimation methodology, and tests do not precisely suit our purposes here, so we have produced our own estimates using U.S. data.
First, while the existing empirical literature has tended to focus on spending on nondurables and services, there are reasons to be skeptical about the measurement of quarterly dynamics (or lack of such dynamics) in large portions of the services component of measured spending.38 Hence, we report results both for the traditional measure of nondurables and services spending, and for the more restricted category of nondurables spending alone. Fortunately, as the table shows, our results are robust to the measure of spending. Indeed, similar results hold even when the measure of spending is the broader measure of total personal consumption expenditures, or for an even stricter version of nondurables spending.
Second, Sommer  emphasizes the importance of taking account of the effects of measurement error and transitory shocks on high-frequency consumption data. In principle, measurement error in the level of consumption could lead to a severe downward bias in the estimated serial correlation of measured consumption growth, as distinct from ‘true’ consumption growth. The simplest solution to this problem is the classic response to measurement error in any explanatory variable: Instrumental variables estimation. This point is illustrated by the fact that instrumenting drastically increases the estimated serial correlation of consumption growth.
Finally, we needed to balance the desire for the empirical exercise to match the theory with the need for sufficiently powerful instruments. This would not be a problem if, in empirical work, we could use once-lagged instruments as is possible for the theoretical model. However, empirical consumption data are subject to time aggregation bias (Working , Campbell and Mankiw ), which can be remedied by lagging the time-aggregated instruments an extra period. To increase the predictive power of the lagged instruments, we augmented with two variables traditionally known to have predictive power: The Federal Funds rate and the expectations component of the University of Michigan’s Index of Consumer Sentiment (cf. Carroll et al. ).39
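Both mechanisms in the last two paragraphs, attenuation from level measurement error and the need for sufficiently lagged instruments, can be illustrated in a stylized simulation (this is our illustration, not the paper’s estimator; all parameter values are arbitrary):

```python
import random

def ols_slope(y, x):
    """Univariate OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def iv_slope(y, x, z):
    """Just-identified IV slope: cov(z, y) / cov(z, x)."""
    mz, my, mx = sum(z) / len(z), sum(y) / len(y), sum(x) / len(x)
    return (sum((c - mz) * (a - my) for c, a in zip(z, y))
            / sum((c - mz) * (b - mx) for c, b in zip(z, x)))

def measurement_error_demo(chi=0.75, noise_sd=0.5, T=100_000, seed=7):
    """'True' consumption growth is AR(1) with coefficient chi; the LEVEL
    of consumption carries classical measurement error, so measured growth
    carries an MA(1) error. OLS on measured growth is biased toward zero;
    IV with an instrument lagged far enough to be uncorrelated with the
    MA(1) error recovers chi."""
    rng = random.Random(seed)
    dc = [0.0]
    for _ in range(T):
        dc.append(chi * dc[-1] + rng.gauss(0.0, 1.0))
    u = [rng.gauss(0.0, noise_sd) for _ in range(T + 1)]
    dc_meas = [dc[t] + u[t] - u[t - 1] for t in range(1, T + 1)]
    ols = ols_slope(dc_meas[1:], dc_meas[:-1])
    iv = iv_slope(dc_meas[3:], dc_meas[2:-1], dc_meas[:-3])
    return ols, iv
```

In this sketch the once-lagged regressor is instrumented with growth three periods back, far enough that the differenced measurement error drops out of both IV covariances.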
The table demonstrates three main points. First, when lagged consumption growth is excluded from the regression equation, the classic Campbell and Mankiw  result holds: Consumption growth is strongly related to predictable income growth. Second, when predictable income growth is excluded but lagged consumption growth is included, the serial correlation of consumption growth is estimated to be in the range of 0.7–0.8, consistent with Havranek et al. ’s survey of the ‘habits’ literature and very far from the benchmark random walk coefficient of zero. Finally, in the ‘horse race’ regression that pits predictable income growth against lagged consumption growth, lagged consumption growth retains its statistical significance and large point estimate, while the predictable income growth term becomes statistically insignificant (and economically small).
None of these points is a peculiarity of the U.S. data. Carroll et al.  performed similar exercises for all eleven countries for which they could obtain the required data, and robustly obtained similar results across almost all of those countries.
Havranek et al. ’s meta-analysis of the micro literature is consistent with Dynan ’s early finding that there is little evidence of serial correlation in household-level consumption growth. Such a lack of serial correlation is a direct implication of the canonical Hall  certainty-equivalent model with quadratic utility. But in principle, even without habits, a more modern model like ours with precautionary saving motives predicts that there will be some positive serial correlation in consumption growth. To see why, think of the behavior of a household whose wealth, leading up to date , was near its target value (for a proof that such a target value will exist in models of the class we are using, see Carroll ). Now in period this household experiences a large negative transitory shock to income, pushing buffer stock wealth far below its target. The model says the household will cut back sharply on consumption to rebuild its buffer stock, and during that period of rebuilding the expected growth rate of consumption will be persistently above its long-term rate (but declining asymptotically toward that rate). That is, in a univariate analysis, consumption growth will exhibit serial correlation.
But as the foregoing discussion suggests, the model says there is a much more direct indicator than lagged consumption growth for current consumption growth: The lagged value of , the buffer stock of assets.
The same fundamental point holds for a model in which there is an explicit liquidity constraint (our model has no such constraint, but the precautionary motive induces something that looks like a ‘soft’ liquidity constraint). Zeldes [1989a] pointed out long ago that the Euler equation on which the random walk proposition is based fails to hold for consumers who are liquidity constrained; if consumers with low levels of wealth (relative to their permanent income) are more likely to be constrained, then low wealth consumers will experience systematically faster consumption growth than otherwise-similar high-wealth consumers. Zeldes found empirical evidence of such a pattern, as has a large subsequent literature.
It is less clear whether models in this class imply that any residual serial correlation will remain once the lagged level of assets has been controlled for. In numerical models like ours, such quantitative questions can be answered only by numerically solving and simulating the model, which is what we do here.
The model predicts that the relationship between and will be nonlinear and downward sloping, but theory does not imply any specific functional form. We experimented with a number of ways of capturing the role of , but will spare the reader the unedifying discussion of those experiments because they all reached conclusions similar to those of a particularly simple case, inspired by the original analysis of Zeldes [1989a]: We simply include a dummy variable that indicates whether last period’s is low. Specifically, we define as 0 if household ’s level of in period is in the bottom 1 percent of the distribution, and 1 otherwise. (We could have chosen, say, 10 or 20 percent, with qualitatively similar, though less quantitatively impressive, results.)
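A minimal construction of such an indicator (our code, with the sign convention of the text: 0 for the bottom of the distribution):

```python
def wealth_indicator(m_values, pct=0.01):
    """Illustrative Zeldes-style indicator: 0 for households whose
    normalized market resources fall in the bottom `pct` of the period's
    cross-sectional distribution, 1 otherwise. Ties at the cutoff are
    all flagged, which is good enough for a sketch."""
    ranked = sorted(m_values)
    cutoff_count = max(1, round(pct * len(m_values)))
    cutoff = ranked[cutoff_count - 1]
    return [0 if m <= cutoff else 1 for m in m_values]
```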
So, in data simulated from our SOE model, we estimate regressions of the form:40
Results for the frictionless model are presented in the upper panel of Table 4.41 For our purposes, the most important conclusion is that the predictable component of idiosyncratic consumption growth is very modest. In the version of the model that corresponds to the thought experiment above, in which consumption growth should have some positive serial correlation, the magnitude of that correlation is only 0.019.42
The second row of the table presents the results of a Campbell and Mankiw -type exercise regressing . From our definitions above,
The existing micro literature has typically found much larger Campbell–Mankiw coefficients than ours. However, much of that literature has made little effort to determine the extent to which the predictable component of income growth reflects permanent underlying growth rates like versus the extent to which that predictability comes from purely transitory movements. If we were to use instruments that had no power for the transitory component but did have power for , our estimated coefficient would be close to 1 (because consumption growth in models of this kind settles down in the long run to something close to the underlying growth rate of permanent income). Thus, our view is that little can be learned from the micro-empirical literature on the magnitude of the coefficient.43
The third row confirms the proposition articulated above: For people with very low levels of wealth, the model implies rapid consumption growth as they dig themselves out of their hole.
The final row presents the results when all three terms are present. Interestingly, the coefficient on lagged consumption growth actually increases, to about 0.06, when we control for the other two terms. But this is still easily in the range of estimates from 0.0 to 0.1 that Havranek et al.  indicate characterizes the micro literature.
The final point to note from the frictionless model is the very small values of the ’s. Even the version of the model including all three explanatory variables can explain only about 2 percent of the variation in consumption growth.
The table’s lower panel contains results from estimating the same regressions on the sticky expectations version of the model. These results are virtually indistinguishable from those obtained for the frictionless expectations model. As before, aside from the precautionary component captured by , idiosyncratic consumption growth is largely unpredictable.
Table 3 presents the results that an econometrician would obtain from estimating an equation like (24) using aggregate data generated by the same models whose micro results are presented in Table 4. In short, it shows that even though simulated households with sticky expectations do not exhibit any meaningful predictability of idiosyncratic consumption growth, aggregate consumption growth in an economy populated by such consumers exhibits a high degree of serial correlation (similar to that in empirical data).
To generate these results, we simulate the small open economy model for 200 quarters, tracking aggregate dynamics to generate a dataset whose size is similar to the 57 years of NIPA data used for Table 3. Because there is some variation in coefficient estimates depending on the random number generator’s seed, we repeat the simulation exercise 100 times. Table 5 reports average point estimates and standard errors across those 100 samples.
Given the relatively long time frame of each sample, and that the idiosyncratic shocks to income are washed away by the law of large numbers, it is feasible to use instrumental variables techniques to obtain the coefficient on the expected growth term. This is the appropriate procedure for comparison with empirical results in any case, since instrumental variables estimation is the standard way of estimating the benchmark Campbell–Mankiw model. As instruments, we use lags of consumption growth, income growth, the wealth–permanent income ratio, and income growth over a two-year span.44
Finally, for comparison to empirical results, we take into account Sommer ’s argument (based on Wilcox ) that transitory components of aggregate spending45 (hurricanes, etc.) and high-frequency measurement problems introduce transitory components into measured NIPA consumption expenditure data. Sommer finds that measurement error produces a severe downward bias in the empirical estimate of the serial correlation in consumption growth, relative to the ‘true’ serial correlation coefficient. To make the simulated data comparable to the measurement-error-distorted empirical data, we multiply our model’s simulated aggregate spending data by a white noise error :
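A sketch of this adjustment (the error is drawn as a multiplicative white-noise factor; the standard deviation used here is illustrative, not the paper’s calibration):

```python
import random

def add_measurement_error(c_agg, sd=0.005, seed=0):
    """Multiply each observation of simulated aggregate spending by an
    iid white-noise factor, mimicking Sommer-type measurement error."""
    rng = random.Random(seed)
    return [c * (1.0 + rng.gauss(0.0, sd)) for c in c_agg]
```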
The top panel of Table 5 estimates (24) on simulated data for the frictionless economy. The second and third rows indicate that consumption growth is moderately predictable by (instrumented versions of) both its own lag and expected income growth, of comparable magnitude to the empirical benchmark. However, the ‘horse race’ regression in the bottom row reveals that neither variable is significantly predictive of consumption growth when both are present as regressors – contrary to the robust empirical results from the U.S. and other countries (cf. Carroll et al. ). The problem is that for both consumption growth and income growth, most of the predictive power of the instruments stems from the serial correlation of productivity growth in the model, so the instrumented versions of the variables are highly correlated with each other. Thus neither has distinct statistical power when they are both included.
In the sticky expectations specification (the lower panel of the table), the second-stage ’s are all much higher than in the frictionless model, and more in keeping with the corresponding statistics in NIPA data. This is because high frequency aggregate consumption growth is being driven by the predictable sticky expectations dynamics. The first two rows show that when we introduce measurement error as described above, the OLS estimate is biased downward significantly. As suggested by the analysis of our ‘toy model’ above, the IV estimate of in the second row is close to the figure that measures the proportion of consumers who do not adjust their expectations in any given period; thus the intuition derived from the toy model survives all the subsequent complications and elaborations. The third row reflects what would have been found by Campbell and Mankiw had they estimated their model on data produced by the simulated ‘sticky expectations’ economy: The coefficient on the predictable component of perceived income growth is large and highly statistically significant.
The last row of the table presents the ‘horse race’ between the Campbell–Mankiw model and the sticky expectations model, and shows that the dynamics of consumption are dominated by the serial correlation in the predictable component of consumption growth stemming from the stickiness of expectations. This can be seen not only from the magnitude of the coefficients, but also by comparison of the second-stage ’s, which indicate that the contribution of predictable income growth to the predictability of consumption growth is negligible, increasing the from 0.261 to 0.263.
Table 6 reports the results of estimating regression (24) on data generated from the HA-DSGE model of Section 4.3; results are substantially the same as the previous analysis for the SOE model.46
The model with frictionless expectations (top panel) implies aggregate consumption growth that is moderately (but not statistically significantly) serially correlated when examined in isolation (second row), but the effect “washes out” when expected income growth and the aggregate wealth-to-income ratio are included in the horse race regression (fourth row). As expected in a closed economy model, the aggregate wealth-to-income ratio is negatively correlated with consumption growth, but its predictive power is so slight that it is statistically insignificant in samples of only 200 quarters.
The model with sticky expectations (bottom panel) again implies a serial correlation coefficient of consumption growth not far from 0.75 in the univariate IV regression (second row). As in the SOE simulation, the horse race regression (fifth row) indicates that the apparent success of the Campbell–Mankiw specification (third row) reflects the correlation of predicted current income growth with instrumented lagged consumption growth.
To this point, we have taken to be exogenous (though reasonably calibrated); here, we examine the results if the probability of updating depends on costs and benefits, as in ‘rational’ inattention models. We briefly examine the tradeoffs by imagining that newborns make a once-and-for-all choice of their idiosyncratic value of , yielding an intuitive approximating formula for the optimal updating frequency.47 We then conduct a numerical exercise to compute the cost of stickiness for the calibrated models. The utility costs of having equal to our calibrated value of , rather than updating every period, are on the order of one two-thousandth of lifetime consumption, so that even small informational costs would justify updating aggregate information only occasionally. (Benefits of updating would be even smaller if the update yielded imperfect information about the true state of the macroeconomy; see below).
In the first period of life, we assume that the consumer is employed and experiences no transitory shocks, so that market resources are nonstochastically equal to ; value can therefore be written as . There is no analytical expression for ; but, fixing all parameters aside from the variance of the permanent aggregate shock, theoretical considerations suggest (and numerical experiments confirm) that the consequences of permanent uncertainty for value can be well approximated by:

where is the value that would be generated by a model with no aggregate permanent shocks and is a constant of approximation that captures the cost of aggregate permanent uncertainty (effectively, it is the coefficient on a first-order Taylor expansion of the model around the point ).
Suppose now (again confirmed numerically—see Figure 2 below) that the effect of sticky expectations is approximately to reduce value by an amount proportional to the inverse of the updating probability:
This assumption has appropriate scaling properties in three senses:
Now imagine that newborns make a once-and-for-all choice of their updating probability; a higher probability (faster updating) is assumed to have a linear cost in units of normalized value.48 The newborn’s objective is therefore to choose the updating probability that solves the resulting maximization problem. The first order condition is:
Thus, the speed of updating should be related directly to the utility cost of permanent uncertainty, inversely to the cost of information (cheaper information induces faster updating), and linearly to the standard deviation of permanent aggregate shocks.
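Under the approximations above, the tradeoff can be sketched numerically. In the sketch below, all symbols and parameter values are illustrative assumptions rather than the paper's calibration: `omega` stands in for the per-unit-variance utility cost of aggregate permanent uncertainty, `sigma2` for the variance of the aggregate permanent shock, and `kappa` for the linear cost of updating speed.

```python
import numpy as np

def value_of_updating(pi, vbar, omega, sigma2, kappa):
    # Newborn's approximate normalized value from choosing updating
    # probability pi: baseline value vbar, minus the cost of aggregate
    # permanent uncertainty (scaled up by 1/pi under sticky
    # expectations), minus a linear information cost kappa * pi.
    return vbar - omega * sigma2 / pi - kappa * pi

def optimal_pi(omega, sigma2, kappa):
    # First order condition omega*sigma2/pi**2 = kappa implies
    # pi* = sigma * sqrt(omega/kappa): linear in the standard
    # deviation of aggregate permanent shocks.
    return np.sqrt(omega * sigma2 / kappa)

# Check the closed form against a grid search
vbar, omega, sigma2, kappa = -10.0, 2.0, 0.0001, 0.001
grid = np.linspace(0.001, 1.0, 100_000)
pi_grid = grid[np.argmax(value_of_updating(grid, vbar, omega, sigma2, kappa))]
```

Note that the closed form is linear in the shock standard deviation: quadrupling the variance doubles the optimal updating probability.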
Our calibrated models can be used to numerically calculate the welfare loss from our specification of sticky expectations as an agent’s willingness to pay at birth to avoid having sticky expectations for his entire lifetime.49 Specifically, we calculate the percentage loss of permanent income that would make a newborn indifferent between being frictionless while taking the loss versus having sticky expectations.50
Using notation from the theoretical exercise above, define a newborn’s average lifetime (normalized) value at birth under frictionless and sticky expectations, respectively:
where the expectation is taken over the distribution of state variables (other than the level of permanent income, by which value is normalized) that an agent might be born into (as well as the wage rate, in the HA-DSGE model). We compute these quantities by averaging the discounted sum of consumption utilities experienced by households over their simulated lifetimes. A newborn’s willingness to pay (as a fraction of permanent income) to avoid having sticky expectations can then be calculated as:
The bottom row of Table 2 reports the cost of stickiness for the SOE and HA-DSGE models. A newborn in either model is willing to give up about 0.05 percent of his permanent income to remain frictionless. These values are comparable to the findings of Maćkowiak and Wiederholt , who construct a model in which, as in Reis [2006a], agents optimally choose how much attention to pay to economic shocks by weighing costs against benefits. They find (p. 1519) that the cost of suboptimal tracking of aggregate shocks is 0.06 percent of steady state consumption.
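The willingness-to-pay calculation can be illustrated with a minimal sketch that uses the CRRA homotheticity argument formalized in the appendix; the value numbers and `rho` below are toy inputs for illustration, not the models' computed values.

```python
# Willingness to pay (as a fraction of permanent income) to avoid sticky
# expectations, via the homotheticity of the CRRA problem; rho and the
# value numbers below are toy inputs, not the models' computed values.
rho = 2.0                # coefficient of relative risk aversion
v_frictionless = -100.0  # average lifetime normalized value, frictionless
v_sticky = -100.1        # average lifetime normalized value, sticky

# Indifference: (1 - omega)**(1 - rho) * v_frictionless = v_sticky
omega = 1.0 - (v_sticky / v_frictionless) ** (1.0 / (1.0 - rho))
print(f"cost of stickiness: {100 * omega:.3f} percent of permanent income")
```

With these toy values the cost comes out to roughly a tenth of a percent; the calibrated models deliver a smaller figure because the value gap between the two specifications is tiny.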
Now that we have explained how to compute the cost of stickiness numerically, we can test our supposition in equation (28) that the cost of stickiness has a roughly inverse linear relationship to the updating probability. Figure 2 plots the numerically computed cost for various values of the updating probability; the relationship is close to linear in the inverse of the probability, as we speculated.
Our preferred interpretation is not that households deliberately choose their updating probability optimally in response to a cost of updating, but instead that the probability is exogenous and represents the speed with which macroeconomic news arrives “for free” from the news media (this could explain why the parameter seems to work well for inflation, unemployment expectations, and consumption). An objection to this interpretation is that a household that has not updated for several years would face a substantially larger loss from continuing to be oblivious and would deliberately look up some aggregate facts. At the cost of a large computational and theoretical investment, we could modify the model to allow consumers to behave in this way, but it seems clear that the ex ante benefit would be extremely small, because the likelihood of being sufficiently out of date to make costly mistakes is negligible: After three years, only about 3 percent of households will be in this position. Furthermore, simple calculations show that a model in which households automatically update after three years barely changes aggregate dynamics (the estimated serial correlation of consumption growth increases slightly from 0.660 to 0.667 in the small open economy model).
Now that our calibrations and results have been presented, we are in a position to make some quantitative comparisons of our model to the two principal alternatives to habit formation (and to our model) for explaining excess smoothness in consumption growth.
The longest-standing rival to habit formation as an explanation of consumption sluggishness is what we will call the Muth–Lucas–Pischke (henceforth, MLP) framework. The idea is not that agents are inattentive, but instead that they have imperfect information on which they (perfectly attentively) perform an optimal signal extraction problem.
Muth ’s agents could observe only the level of their income, but not the split between its permanent and transitory components. He derived the optimal (mean-squared-error-minimizing) method for estimating the level of permanent income from the observed signal about the level of actual income. Lucas  applied the same mathematical toolkit to solve a model in which firms are assumed to be unable to distinguish idiosyncratic from aggregate shocks. Pischke  combines the ideas of Muth and Lucas and applies the result to micro consumption data: His consumers have no ability at all to perceive whether income shocks that hit them are aggregate or idiosyncratic, transitory or permanent. They see only their income, and perform signal extraction on it.
Pischke calibrates his model with micro data in which he calculates that transitory shocks vastly outweigh permanent shocks.51 So, when a shock arrives, consumers always interpret it as being almost entirely transitory and change their consumption by little. However, macroeconometricians have long known that aggregate income shocks are close to permanent. When an aggregate permanent shock comes along, Pischkian consumers spend very little of it, confounding the aggregate permanent shock’s effect on their income with the mainly transitory idiosyncratic shocks that account for most of the total variation in their income. This misperception causes sluggishness in aggregate consumption dynamics in response to aggregate shocks. (See below for a more precise formulation of this point).
In its assumption that consumers fail to perceive aggregate shocks immediately and fully, Pischke’s model resembles ours. However, few papers in the literature after Pischke  have adopted his assumption that households have no idea, when an idiosyncratic income shock occurs, whether it is transitory or permanent. Especially in the last decade or so, the literature instead has almost always assumed that consumers can perfectly perceive the transitory and permanent components of their income.52
Granting our choice to assume that consumers correctly perceive the events that are idiosyncratic to them (job changes, lottery winnings, etc), there is still a potential role for application of the MLP framework: Instead of assuming sticky expectations, we could instead have assumed that consumers perform a signal extraction exercise on only the aggregate component of their income, because they cannot perceive the transitory/permanent split for the (tiny) part of their income change that reflects aggregate macroeconomic developments.
In principle, such confusion could generate excess smoothness. To see how, note that in the Muth framework, agents update their estimate of permanent income according to an equation of the form:53
We can now consider the dynamics of aggregate consumption in response to the arrival of an aggregate shock that (unbeknownst to the consumer) is permanent. The consumer spends only the updating fraction of the shock in the first period, leaving the remainder unspent because that remainder reflects the average transitory component of an undifferentiated shock. However, since the shock really was permanent, income next period does not fall back as the consumer guessed it would on the basis of the mistaken belief that part of the shock was transitory. The next-period consumer treats this surprise as a positive shock relative to expected income, and spends the same proportion out of the perceived new shock. These dynamics continue indefinitely, with each successive perceived shock (and therefore each consumption increment) being smaller than the last by a constant proportion. Thus, after a true permanent shock, the full-information prediction of the expected dynamics of future consumption changes is a geometrically declining sequence.54
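The geometric adjustment described above can be verified with a few lines of simulation; `lam`, the fraction of a perceived shock attributed to permanent income and spent, is an illustrative value rather than a calibrated one.

```python
# Consumption adjustment to a unit permanent shock when the consumer
# applies Muth-style signal extraction to each period's surprise.
# lam is illustrative, not a calibrated value.
lam = 0.25
T = 40
unrecognized = 1.0   # portion of the shock not yet built into beliefs
increments = []      # consumption increment in each period
for t in range(T):
    increments.append(lam * unrecognized)
    unrecognized *= 1.0 - lam
# Each increment is smaller than the last by the factor (1 - lam),
# and the cumulative response converges to the full unit shock.
```

The first-period response is `lam`, the next is `lam * (1 - lam)`, and so on, which is exactly the predictability in consumption growth that a better-informed econometrician would detect.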
At first blush, this predictability in consumption growth would appear to violate Hall’s proof that, for consumers who make rational estimates of their permanent income, consumption must be a random walk. The reconciliation is that Hall proves consumption must be a random walk with respect to the knowledge the consumer has. The random walk proposition remains true for consumers whose knowledge base contains only the perceived level of aggregate income. Our thought experiment was to ask how much predictability would be found by an econometrician who knows more than the consumer about the level of aggregate permanent income.
The in-principle reconciliation between econometric evidence of predictability/excess smoothness in consumption growth and the random walk proposition is therefore that the econometricians who forecast aggregate consumption growth use additional variables (beyond the lagged history of aggregate income itself), and that those variables have useful predictive power.55
We now turn to the question of whether the Muth–Lucas–Pischke story is a good quantitative explanation of the size of aggregate excess smoothness. Appendix C.4 shows that, defining the signal-to-noise ratio as the relative size of permanent and transitory shocks, Muth’s derivations imply that the optimal updating coefficient is:56
Plugging our calibrations from section 5 into (33), the model yields a predicted value very far below the average estimate from Havranek et al.  and even farther below our estimate for U.S. data. This reflects the well-known fact that aggregate income is hard to distinguish from a random walk; if it were perceived to be a perfect random walk with no transitory component at all, the serial correlation in its growth would be zero.57
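For readers who want to reproduce the logic, one standard formulation equates the optimal Muth updating coefficient with the steady-state Kalman gain of the local-level (“random walk plus noise”) model. The sketch below assumes the signal-to-noise ratio is defined as the variance ratio of permanent to transitory shocks; the numerical inputs are illustrative, not the paper's calibration.

```python
import math

def muth_gain(q):
    # Steady-state Kalman gain for the local-level ("random walk plus
    # noise") model, with signal-to-noise ratio q defined here as
    # var(permanent shock) / var(transitory shock).
    f = (q + math.sqrt(q * q + 4.0 * q)) / 2.0  # prior variance ratio
    return f / (f + 1.0)

# Aggregate income: permanent shocks dominate (large q), so the optimal
# updating coefficient is near 1 and little smoothing is predicted.
# Micro income (Pischke's setting): transitory shocks dominate (small q),
# so shocks are treated as mostly transitory.
print(muth_gain(10.0), muth_gain(0.05))
```

The gain rises monotonically in the signal-to-noise ratio, which is why an MLP consumer facing near-random-walk aggregate income exhibits almost no excess smoothness.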
Considerations similar to the foregoing apply, at least to some degree, to the Reis [2006a] model. Moreover, that model has a further disadvantage relative to the other three stories (habits, MLP, or our model). In Reis’s model consumers update their information on a regular schedule; under a plausible calibration of the model, once a year. One implication is that the change in consumption at the next reset is unpredictable; aggregate consumption growth would therefore be unpredictable at any horizon beyond, say, one year.58 But business cycle analysts felt compelled to incorporate some source of sluggishness into macroeconomic models in large part to explain the fact that consumption growth is forecastable over extended periods – empirical impulse response functions indicate that a macroeconomically substantial component of the adjustment to shocks takes place well beyond the one year horizon. A calibration of the Reis model in which consumers update once a year therefore leaves much of the original puzzle in place.59
Using a traditional utility function that does not incorporate habits, the literature on the microfoundations of consumption behavior has made great strides over the past couple of decades in constructing models that are faithful to many of the microeconomic facts about consumption, income dynamics, and the distribution of wealth. But over roughly the same interval, habit formation has gone from an exotic hypothesis to a standard assumption in the representative agent macroeconomics literature, because habits allow representative agent models to match the measured smoothness in aggregate consumption growth. This conflict, thrown into sharp focus by the recent meta-analysis of both literatures by Havranek et al. , is arguably the most important puzzle in the microfoundations of macroeconomic consumption dynamics.
We show that this conflict can be resolved by applying insights from the literature on ‘inattention’ that has developed robustly since the early contributions of Sims , Woodford , Mankiw and Reis , and others. In the presence of such inattention, aggregation of the behavior of microeconomic consumers without habits generates aggregate consumption dynamics that match the ‘excess smoothness’ facts that have induced the representative agent literature to embrace habits.
The sticky expectations assumption is more attractive for modeling consumption than for other areas where it has been more widely applied, because in the consumption context there is a well-defined utility-based metric for calculating the cost of sticky expectations (in contrast, say, with models in which households’ inflation expectations are sticky; the cost of misperceiving the inflation rate is unclear). The cost to consumers of our proposed degree of macroeconomic inattention is quite modest, for reasons that will be familiar to anyone who has worked with both micro and macro data: Idiosyncratic variation is vastly greater than aggregate variation. This means that the small imperfections in macroeconomic perceptions proposed here have very modest utility consequences. So long as consumers respond appropriately to their idiosyncratic shocks (which we assume they do), the failure to keep completely up-to-date with aggregate developments simply does not matter much.
While a number of previous papers have mooted the idea that inattention (or imperfect information) might generate excess smoothness, the modeling question is a quantitative one (‘how much excess smoothness can a sensible model explain?’). We argue that the imperfect information models and mechanisms proposed in the prior literature cannot simultaneously match the micro and macro facts, while our model matches the main stylized facts from both literatures.
In future work, it would be interesting to enrich the model so that it has plausible implications for how the degree of attention might vary over time or across people, and to connect the model to the available expectations data (for example, measures of consumer sentiment, or measures of uncertainty constructed from news sources, cf Baker et al. ). Such work might be particularly useful in any attempt to understand how behavioral dynamics change between normal times (in which news coverage of macroeconomic dynamics is not front-page material) and crisis times (when it is).
Fernando Alvarez, Luigi Guiso, and Francesco Lippi. Durable consumption and asset management with transaction and observation costs. American Economic Review, 102(5):2272–2300, August 2012. URL https://ideas.repec.org/a/aea/aecrev/v102y2012i5p2272-2300.html.
John Campbell and Angus Deaton. Why is consumption so smooth? The Review of Economic Studies, 56(3):357–373, July 1989. URL http://www.jstor.org/stable/2297552.
John Y. Campbell and N. Gregory Mankiw. Consumption, income, and interest rates: Reinterpreting the time-series evidence. In Olivier J. Blanchard and Stanley Fischer, editors, NBER Macroeconomics Annual, 1989, pages 185–216. MIT Press, Cambridge, MA, 1989. URL http://www.nber.org/papers/w2924.pdf.
Christopher D. Carroll. Macroeconomic expectations of households and professional forecasters. Quarterly Journal of Economics, 118(1):269–298, 2003. URL http://www.econ2.jhu.edu/people/ccarroll/epidemiologyQJE.pdf.
Christopher D. Carroll. Theoretical foundations of buffer stock saving. Manuscript, Department of Economics, Johns Hopkins University, 2016. URL http://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory.pdf.
Christopher D. Carroll and Miles S. Kimball. On the concavity of the consumption function. Econometrica, 64(4):981–992, 1996. URL http://www.econ2.jhu.edu/people/ccarroll/concavity.pdf.
Christopher D. Carroll and Andrew A. Samwick. The nature of precautionary wealth. Journal of Monetary Economics, 40(1):41–71, 1997. URL http://www.econ2.jhu.edu/people/ccarroll/papers/nature.pdf.
Christopher D. Carroll, Jeffrey C. Fuhrer, and David W. Wilcox. Does consumer sentiment forecast household spending? If so, why? American Economic Review, 84(5):1397–1408, 1994. URL http://www.econ2.jhu.edu/people/ccarroll/SentAERCarrollFuhrerWilcox.pdf.
Christopher D. Carroll, Martin Sommer, and Jiri Slacalek. International evidence on sticky consumption growth. Review of Economics and Statistics, 93(4):1135–1145, October 2011. doi: 10.1162/REST_a_00122. URL http://www.econ2.jhu.edu/people/ccarroll/papers/cssIntlStickyC/.
Christopher D. Carroll, Jiri Slacalek, and Kiichi Tokuoka. Buffer-stock saving in a Krusell–Smith world. Economics Letters, 132:97–100, 2015. doi: 10.1016/j.econlet.2015.04.021. URL http://www.econ2.jhu.edu/people/ccarroll/papers/cstKS/. Extended version available as ECB Working Paper 1633, https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1633.pdf.
Christopher D. Carroll, Jiri Slacalek, Kiichi Tokuoka, and Matthew N. White. The distribution of wealth and the marginal propensity to consume. Quantitative Economics, 8:977–1020, November 2017. doi: 10.3982/QE694. URL http://www.econ2.jhu.edu/people/ccarroll/papers/cstwMPC.
Raj Chetty and Adam Szeidl. Consumption commitments and habit formation. Econometrica, 84:855–890, March 2016. URL https://ideas.repec.org/a/wly/emetrp/v84y2016ip855-890.html.
Jeffrey C. Fuhrer. Habit formation in consumption and its implications for monetary policy models. American Economic Review, 90(3):367–390, June 2000. URL http://www.jstor.org/stable/117334.
Xavier Gabaix. A sparsity-based model of bounded rationality. The Quarterly Journal of Economics, 129(4):1661–1710, 2014. URL https://ideas.repec.org/a/oup/qjecon/v129y2014i4p1661-1710.html.
Robert E. Hall. Stochastic implications of the life-cycle/permanent income hypothesis: Theory and evidence. Journal of Political Economy, 86(6):971–987, 1978. URL http://www.stanford.edu/~rehall/Stochastic-JPE-Dec-1978.pdf.
Tomas Havranek, Marek Rusnak, and Anna Sokolova. Habit formation in consumption: A meta-analysis. European Economic Review, 95:142–167, 2017. doi: 10.1016/j.euroecorev.2017.03.009.
David S. Johnson, Jonathan A. Parker, and Nicholas S. Souleles. Household expenditure and the income tax rebates of 2001. American Economic Review, 96(5):1589–1610, December 2006. URL http://ideas.repec.org/a/aea/aecrev/v96y2006i5p1589-1610.html.
Fatih Karahan, Sean Mihaljevich, and Laura Pilossoph. Understanding permanent and temporary income shocks. Manuscript, 2017.
Bartosz Maćkowiak and Mirko Wiederholt. Optimal sticky prices under rational inattention. American Economic Review, 99(3):769–803, June 2009. URL https://ideas.repec.org/a/aea/aecrev/v99y2009i3p769-803.html.
Bartosz Maćkowiak and Mirko Wiederholt. Business cycle dynamics under rational inattention. The Review of Economic Studies, 82(4):1502–1532, 2015. doi: 10.1093/restud/rdv027.
Stephen Morris and Hyun Song Shin. Inertia of forward-looking expectations. The American Economic Review, 96(2):152–157, 2006. URL http://www.jstor.org/stable/30034632.
Christopher Sims. Implications of rational inattention. Journal of Monetary Economics, 50(3):665–690, 2003. URL http://ideas.repec.org/a/eee/moneco/v50y2003i3p665-690.html.
Douglas Staiger, James H. Stock, and Mark W. Watson. Prices, wages and the US NAIRU in the 1990s. In Alan B. Krueger and Robert Solow, editors, The Roaring Nineties: Can Full Employment Be Sustained? The Russell Sage Foundation and Century Press, New York, 2001.
Kjetil Storesletten, Chris I. Telmer, and Amir Yaron. Consumption and risk sharing over the life cycle. Journal of Monetary Economics, 51(3):609–633, April 2004. URL http://www.sciencedirect.com/science/article/B6VBW-4BWMTRW-2/1/4934de112177c84dc55a3f37dbde0e16.
Michael Woodford. Imperfect common knowledge and the effects of monetary policy. In P. Aghion, R. Frydman, J. Stiglitz, and M. Woodford, editors, Knowledge, Information and Expectations in Modern Macroeconomics. Princeton University Press, Princeton, 2002. URL http://EconPapers.repec.org/RePEc:nbr:nberwo:8673.
Stephen P. Zeldes. Consumption and liquidity constraints: An empirical investigation. Journal of Political Economy, 97:305–346, 1989. URL http://www.jstor.org/stable/1831315.
This appendix presents a representative agent model for analyzing the consequences of sticky expectations in a DSGE framework while abstracting from idiosyncratic income shocks and the death (and replacement) of households. It builds upon the modeling assumptions in Section 4.1 to formulate the representative agent model, then presents simulated results analogous to Section 6. The primary advantage of this model is that it allows fast analysis of sticky expectations in a closed economy, yielding very similar results to the heterogeneous agents DSGE model with less than a minute of computation, rather than a few hours. However, the model is not truly “representative agent” under sticky expectations, as the representative household’s perception of the aggregate state is “smeared” over the state space. As presented below, the realized level of consumption represents the average level of consumption chosen by the “multiple minds” of the representative household.
The representative agent’s state variables at the time of its consumption decision are the level of market resources, the productivity of labor, and the growth rate of productivity. Idiosyncratic productivity shocks do not exist, and the possibility of death is irrelevant; aggregate permanent and transitory productivity shocks are distributed as usual.
The representative agent’s problem can be written in Bellman form as:60
Normalizing the representative agent’s problem by the productivity level as in the SOE and HA-DSGE models, the problem’s state space can be reduced to:61
The representative agent model can be solved using the endogenous grid method, following the same procedure as for the SOE model described in Appendix B.1, yielding a normalized consumption function.62
The typical interpretation of a representative agent model is that it represents a continuum of households that face no idiosyncratic shocks, and thus all find themselves with the same state variables; idiosyncratic decisions are equivalent to aggregate, representative agent decisions. Once we introduce sticky expectations of aggregate productivity, this no longer holds: different households will have different perceptions of productivity, and thus make different consumption decisions.
To handle this departure from the usual representative agent framework, we take a “multiple minds” or quasi-representative agent approach. That is, we model the representative agent as being made up of a continuum of households who all correctly perceive the level of aggregate market resources, but might have different perceptions of the aggregate productivity state. Each household chooses their level of consumption based on their perception of the productivity state; the realized level of aggregate consumption is simply the sum across all households.
Formally, we track the distribution of perceptions about the aggregate productivity state as a stochastic vector over the discretized values of the current growth rate, representing the fraction of households who perceive each value, and a vector representing the average perceived productivity level among households who perceive each growth rate. As in our other models, agents update their perception of the true aggregate productivity state with the updating probability; likewise, the distinction between frictionless and sticky expectations is simply whether that probability equals one or falls short of it.
Defining the indicator vector with zeros in all elements but the one corresponding to the true growth rate, the distribution of population perceptions of the growth rate evolves according to:
That is, the updating proportion of households who perceive each growth rate revise their perception to the true state, while the remaining proportion of households maintain their prior belief (which might already be correct).
The vector of average perceptions of aggregate productivity for each growth rate can then be calculated as:
That is, the average perception of productivity in each growth state is the weighted average of updaters and non-updaters who perceive that growth rate.63
Households who perceive each growth rate act as a partial representative agent, choosing their level of consumption according to their perception of normalized market resources. Defining perceived normalized market resources for households who perceive each aggregate growth rate, aggregate consumption is:
This represents the weighted average of per-state consumption levels of the partial representative agents.
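A minimal sketch of the perception updating and averaging steps (equations (36) and (37)) may make the mechanics concrete; the dimension, updating probability, and productivity values below are illustrative assumptions.

```python
import numpy as np

# One updating step for the distribution of growth-rate perceptions
# (eq. (36)) and average perceived productivity (eq. (37)).
Pi = 0.25                  # probability of updating to the true state
n = 5                      # number of discretized growth rates
i_true = 2                 # index of the realized growth rate
P_true = 1.05              # true aggregate productivity level

p = np.full(n, 1.0 / n)    # share of households perceiving each rate
tilde = np.full(n, 1.0)    # avg perceived productivity, by growth rate

e = np.zeros(n); e[i_true] = 1.0
p_next = (1.0 - Pi) * p + Pi * e        # updaters jump to the truth

tilde_next = tilde.copy()               # non-updaters keep their belief
tilde_next[i_true] = ((1.0 - Pi) * p[i_true] * tilde[i_true]
                      + Pi * P_true) / p_next[i_true]
```

Aggregate consumption in equation (38) is then just the `p_next`-weighted sum of the per-state consumption choices of the partial representative agents.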
When the representative agent frictionlessly updates its information every period, equations (36) and (37) say that all perception mass sits on the true growth rate and the perceived productivity level equals the true one (with irrelevant values in the other vector elements), so that the representative agent is truly representative. When expectations are sticky, the representative agent’s perceptions of the growth rate become “smeared” across its past realizations; its perceptions of the productivity level likewise deviate from the true value, even for the part of the representative agent who perceives the true growth rate.64
We calibrate the RA model using the same parameters as for the HA-DSGE model (see Section 5.1 and Table 1), except that there are no idiosyncratic income shocks and the possibility of death is irrelevant. After solving the model, we utilize the same simulation procedure described in Section 6, taking 100 samples of 200 quarters each; average coefficients and standard errors across the samples are reported in Table 7.
The upper panel of Table 7 shows that under frictionless expectations, consumption growth in the representative agent model cannot be predicted to any statistically significant degree under any specification. The lower panel, under sticky expectations, yields results that are strikingly similar to the SOE model in Table 5. Both (instrumented) lagged consumption growth and expected income growth are significant predictors of aggregate consumption growth, but the ‘horse race’ regression reveals that the predictability is dominated by serially correlated consumption growth, confirming the results of the two heterogeneous agents models.
Consider the household’s normalized problem in the SOE model, given in (16). Substituting the latter two constraints into the maximand, this problem has one first order condition, which is sufficient to characterize the solution:
We use the endogenous grid method to solve the model by iterating on the first order condition. Eliding some uninteresting complications, our procedure is straightforward:
The numerically computed consumption function can then be used to simulate a population of households, as described in Appendix B.2.
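The core of the procedure can be sketched as a single endogenous-grid-method backward step. The sketch below assumes deterministic income of 1 and illustrative parameters, and omits the integration over income shock distributions that the actual solution method performs.

```python
import numpy as np

# One backward step of the endogenous grid method for a stylized CRRA
# consumption problem with deterministic income of 1 (the real models
# integrate the Euler equation over income shock distributions).
rho, beta, R = 2.0, 0.96, 1.03   # illustrative parameters

def egm_step(a_grid, c_next_func):
    m_next = R * a_grid + 1.0            # next-period market resources
    c_next = c_next_func(m_next)
    # Invert the Euler equation u'(c) = beta * R * u'(c') for today's c
    c_now = (beta * R * c_next ** (-rho)) ** (-1.0 / rho)
    m_endog = a_grid + c_now             # endogenous current resources
    return m_endog, c_now

a_grid = np.linspace(0.01, 10.0, 50)
# Terminal period: consume all market resources
m1, c1 = egm_step(a_grid, lambda m: m)
```

The key advantage of the method is visible here: no root-finding is needed, because consumption is obtained by direct inversion of the Euler equation on a grid of end-of-period assets.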
Consider the household’s normalized problem in the HA-DSGE model, given in (22). Recalling that we are taking the aggregate saving rule as given, optimal consumption is characterized by the solution to the first-order condition:
Solving the HA-DSGE model requires a nested loop procedure in the style of Krusell and Smith , as the equilibrium of the model is a fixed point in the space of household beliefs about the aggregate saving rule. For the outer loop, searching for the equilibrium saving rule, we use the following procedure:
The inner solution loop (step 3) proceeds very similarly to the SOE solution method above, with differences in the following steps:
This appendix describes the procedure for generating a history of simulated outcomes once the household’s optimization problem has been solved to yield consumption function (or in the representative agent model). We first describe the procedure for the SOE and HA-DSGE models from the body of the text, then summarize the simulation method for the representative agent model of Appendix A.
In any given period, the simulated population contains a fixed number of households. At the very beginning of the simulation, all households are given an initial level of capital in each model (in the SOE model, as if they were newborns). Likewise, normalized aggregate capital is set to the perfect foresight steady state. At the beginning of time, all households have correct perceptions of the aggregate state, and aggregate productivity and its growth rate are initialized at their average values.
Time begins well before the reported history, which starts only after a 1000 period “burn in” phase that allows the population distribution of household states to reach its long run distribution. In each simulated period, we execute the following steps:
We simulate a total of about 21,000 periods. The time series values reported in Table 2 are calculated on the post-burn-in span of the history; the cross sectional values in this table are averaged across all within-period cross sections. The time series regressions in Tables 5 and 6 partition the history into 200 samples of 100 quarters each; the tables report average coefficients and statistics across the sample regressions.
When simulating the representative agent model of Appendix A, only a few changes are necessary to the procedure above. The vectors of perceptions are initialized so that the “entire” representative agent has correct perceptions of the aggregate state. No households are ever “replaced” in the RA simulation, and idiosyncratic shocks do not exist; only aggregate market resources are relevant. The vectors of perceptions evolve according to (36) and (37), and aggregate consumption is determined using (38).
The microeconomic (or cross sectional) regressions in Table 4 are generated using a single 4000 period sample of the history, using 5000 of the 20,000 households. After dropping observations that fail the sample selection criterion, this leaves about 19 million observations, far larger than any consumption panel dataset that we know of. Standard errors are thus vanishingly small, and have little meaning in any case, which is why we do not report them in the table summarizing our microsimulation results.
When making their forecasts of expected income growth, households are assumed to forecast that the transitory component of income will grow by the factor implied by their observation of the idiosyncratic transitory component of income. Substantively, this assumption reflects the real-world fact that essentially all of the predictable variation in income growth at the household level comes from idiosyncratic components of income.
After simulating a population of households using the procedure in Appendix B.2, we have a history of micro observations and a history of aggregate permanent productivity levels. Each household index contains the history of many agents, as the agent occupying an index dies and is replaced at the beginning of some periods. For each household index, we record the sequence of time indices at which a replacement occurs and the total number of replacement events.
A single consumer’s (normalized) discounted sum of lifetime utility is then:
Normalizing by aggregate productivity at birth is equivalent to normalizing by the consumer’s total productivity at birth because the idiosyncratic component of productivity equals one at birth by assumption.
The total number of households who are born and die in the history is:
The overall expected lifetime value at birth can then be computed as:
Because we use and , and agents live for 200 periods on average, our simulated history includes about 2 million consumer lifetimes. The standard errors on our numerically calculated and are thus negligible and not reported.
In the SOE model, we use the same random seed for the frictionless and sticky specifications, so the same sequence of replacement events and income shocks occurs in both. With no externalities or general equilibrium effects, the distribution of states that consumers are born into is likewise identical, so the “value ratio” calculation is valid.
The cost of stickiness in the HA-DSGE model is slightly more complicated. If we used the generated histories of the frictionless and sticky specifications to compute and , the calculated would represent a newborn’s willingness-to-pay for everyone to be frictionless rather than sticky. We are interested in the utility cost of just one agent having sticky expectations, so an alternate procedure is required.
We compute in the HA-DSGE model the same as in the SOE model. However, is calculated as the expected lifetime (normalized) value of a newborn who is frictionless but lives in a world otherwise populated by sticky consumers. To do this, we simulate a new history of micro observations using the consumption function for the sticky HA-DSGE economy, but with all households updating their knowledge of the aggregate state frictionlessly. Critically, we do not actually calculate each period; instead, we use the same sequence of that occurred in the ordinary sticky simulation. Thus our simulated population of households represents an infinitesimally small portion of an economy made up (almost) entirely of consumers with sticky expectations. The calculated is thus the willingness-to-pay to be the very first agent to “wake up”.
The formula for willingness-to-pay (31) arises from the homotheticity of the household’s problem with respect to permanent income. If a consumer gives up a proportion ω of their permanent income at the moment they are “born”, before receiving income that period, then their normalized market resources are unchanged, and they will make the same normalized consumption choice that they would have made had they not lost any permanent income. In fact, they will make the exact same sequence of normalized consumption choices for their entire life; the level of their consumption will simply be scaled by the factor (1 − ω) in every period. With CRRA utility, this means that utility is scaled by (1 − ω)^(1−ρ) in every period of life, which can be factored out of the lifetime summation. The indifference condition between being frictionless but losing an ω fraction of permanent income versus having sticky expectations (and losing nothing) can then be rearranged into (31).
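Spelled out under CRRA utility with risk-aversion coefficient ρ (the symbols ω, ρ, and the value-function labels are assumed to match the paper's notation), the indifference condition and its rearrangement are:

\[
(1-\omega)^{1-\rho}\, v^{\text{frictionless}} \;=\; v^{\text{sticky}}
\quad\Longrightarrow\quad
\omega \;=\; 1 - \left(\frac{v^{\text{sticky}}}{v^{\text{frictionless}}}\right)^{1/(1-\rho)} .
\]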
This appendix derives equation (3), asserted in the main text. Start with the definition of consumption for the updaters,
The text asserts (equation (3)) that
To see this, define market resources where is noncapital income in period and is the level of nonhuman assets with which the consumer ended the previous period; and define as ‘human wealth,’ the present discounted value of future noncapital income. Then write
What theory tells us is that if aggregate consumption were chosen frictionlessly in period , then this expression would be white noise; that is, we know that
So equation (3) can be rewritten as
This appendix closely follows Appendix A in the ECB working paper version of Carroll et al. It computes the dynamics and steady state of the square of the idiosyncratic component of permanent income (from which the variance can be derived). Recalling that consumers are born with idiosyncratic permanent income equal to one:
Finally, note the relation between and the variance of :
For the preceding derivations to be valid, it is necessary to impose the parameter restriction . This requires that income does not spread out so quickly among survivors as to overcome the compression of the distribution that arises because of death.
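As a numerical sketch of this recursion: with death probability D, newborn idiosyncratic permanent income of one, and mean-one permanent shocks ψ, the second moment q = E[p²] obeys q′ = (1 − D) E[ψ²] q + D, which converges exactly when (1 − D) E[ψ²] < 1 — the restriction just described. The law of motion, parameter values, and lognormal shock assumption below are illustrative, not the paper's calibration.

```python
import numpy as np

# Illustrative parameters (not the paper's calibration)
D = 0.005                  # per-period probability of death
sigma2 = 0.003             # log-variance of the permanent shock psi
E_psi2 = np.exp(sigma2)    # E[psi^2] for a mean-one lognormal psi

# Convergence requires that income not spread out among survivors
# faster than death compresses the distribution:
assert (1.0 - D) * E_psi2 < 1.0

# Iterate q' = (1 - D) E[psi^2] q + D from the newborn value q = 1
q = 1.0
for _ in range(10_000):
    q = (1.0 - D) * E_psi2 * q + D

q_star = D / (1.0 - (1.0 - D) * E_psi2)   # closed-form steady state
print(q, q_star)                           # the two agree
# Since E[p] = 1 in every period, Var(p) = q_star - 1.
```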
If the quarterly transitory shock is , define the annual transitory shock as:
Let be the quarterly permanent shock. Define the annual permanent shock as:
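One common convention, assumed here purely for illustration: the annual permanent shock is the product of the four quarterly permanent shocks within the year, so its log-variance is four times the quarterly log-variance. A quick Monte Carlo check under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2_q = 0.003                    # illustrative quarterly log-variance

# Mean-one lognormal quarterly permanent shocks, four per year
psi = rng.lognormal(mean=-sigma2_q / 2, sigma=np.sqrt(sigma2_q),
                    size=(1_000_000, 4))
psi_annual = psi.prod(axis=1)       # annual permanent shock

# Log-variance of the annual shock is four times the quarterly one
print(np.log(psi_annual).var())     # close to 4 * sigma2_q = 0.012
```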
Muth , pp. 303–304, shows that the signal-extracted estimate of permanent income is
This compares with (32) in the main text
Defining the signal-to-noise ratio , starting with equation (3.10) in Muth  we have
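As a numerical illustration (the mapping to Muth's notation is an assumption), the steady-state signal-extraction weight for a random-walk-plus-noise process is the steady-state Kalman gain of the local-level model, a function of the signal-to-noise ratio alone:

```python
import numpy as np

def muth_steady_state_gain(q):
    """Steady-state Kalman gain for the local-level (random-walk-plus-
    noise) model, the setting analyzed by Muth (1960).

    q : signal-to-noise ratio, Var(permanent innovation) / Var(noise)
    """
    # steady-state prior variance, in units of the noise variance:
    p = (q + np.sqrt(q * q + 4.0 * q)) / 2.0
    return p / (p + 1.0)

# The weight rises from 0 toward 1 as the permanent component becomes
# relatively more important:
print(muth_steady_state_gain(1.0))   # about 0.618 (reciprocal golden ratio)
```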