Forecasting mean and sd of time series

Garrett M. writes:

I had two (hopefully straightforward) questions related to time series analysis that I was hoping I could get your thoughts on:

First, much of the work I do involves “backtesting” investment strategies, where I simulate the performance of an investment portfolio using historical data on returns. The primary summary statistics I generate from this sort of analysis are mean return (both arithmetic and geometric) and standard deviation (called “volatility” in my industry). Basically the idea is to select strategies that are likely to generate high returns given the amount of volatility they experience.

However, historical market data are very noisy, with stock portfolios generating an average monthly return of around 0.8% with a monthly standard deviation of around 4%. Even samples containing 300 months of data then have standard errors of about 0.2% (4%/sqrt(300)).

My first question is, suppose I have two time series. One has a mean return of 0.8% and the second has a mean return of 1.1%, both with a standard error of 0.4%. Assuming the future will look like the past, is it reasonable to expect the second series to have a higher future mean than the first out of sample, given that it has a mean 0.3% greater in the sample? The answer might be obvious to you, but I commonly see researchers make this sort of determination, when it appears to me that the data are too noisy to draw any sort of conclusion between series with means within at least two standard errors of each other (ignoring for now any additional issues with multiple comparisons).

My second question involves forecasting standard deviation. There are many models and products used by traders to determine the future volatility of a portfolio. The way I have tested these products has been to record the percentage of the time future returns (so out of sample) fall within one, two, or three standard deviations, as forecasted by the model. If future returns fall within those buckets around 68%/95%/99.7% of the time, I conclude that the model adequately predicts future volatility. Does this method make sense?

My reply:

Regarding your first question about the two time series, I’d recommend doing a multilevel model. I bet you have more than two of these series. Model a whole bunch at once, and then estimate the levels and trends of each series. Move away from a deterministic rule of which series will be higher, and just create forecasts that acknowledge uncertainty.
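
To make this concrete, here is a minimal sketch of the partial-pooling idea, a crude empirical-Bayes version of the multilevel model rather than a recommended analysis; the simulated data and the method-of-moments estimate of the between-series variance are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical setup: J series, each with an observed mean monthly return (%)
    # and a standard error, as in the question (e.g., 0.8 and 1.1, both with se 0.4).
    J = 20
    true_means = rng.normal(0.8, 0.3, size=J)   # unknown per-series true means
    se = np.full(J, 0.4)                        # standard errors of the sample means
    obs = rng.normal(true_means, se)            # observed in-sample means

    # Method-of-moments estimate of the between-series variance tau^2:
    # Var(obs) = tau^2 + mean(se^2), so subtract the sampling noise
    # (floored at a small positive number to avoid dividing by zero).
    mu_hat = obs.mean()
    tau2 = max(obs.var(ddof=1) - np.mean(se**2), 1e-6)

    # Partial pooling: precision-weighted compromise between each series' own
    # mean and the grand mean; noisier series get pulled harder toward mu_hat.
    post_mean = (obs / se**2 + mu_hat / tau2) / (1 / se**2 + 1 / tau2)
    post_sd = np.sqrt(1 / (1 / se**2 + 1 / tau2))

    for j in range(3):
        print(f"series {j}: raw {obs[j]:.2f}, shrunk {post_mean[j]:.2f} +/- {post_sd[j]:.2f}")

The output is the point: each series gets a forecast with an uncertainty attached, rather than a winner-take-all ranking.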

Regarding your second question about standard deviation, your method might work but it also discards some information. For example, the number of cases greater than 3sd must be so low that your estimate of these tails will be noisy, so you have to be careful that you’re not in the position of those climatologists who are surprised when so-called hundred-year floods happen every 10 years. At a deeper level, it’s not clear to me that you should want to be looking at sd; perhaps there are summaries that map more closely to decisions of interest.
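
For what it's worth, here is a minimal sketch of that coverage check, with fat-tailed simulated returns standing in for a real portfolio and model (the t distribution with 4 degrees of freedom is an assumption, chosen just to make the tails interesting):

    import numpy as np

    rng = np.random.default_rng(0)

    # sigma_hat: the model's one-step-ahead sd forecast; r: realized return.
    # True returns are t with 4 df, scaled to the forecasted sd, so the
    # normal-theory 68/95/99.7 coverage is off, most visibly in the tail bucket.
    n = 300                                  # 300 months, as in the question
    sigma_hat = np.full(n, 4.0)              # forecasted monthly sd, in %
    r = sigma_hat * rng.standard_t(4, size=n) / np.sqrt(2.0)  # t_4 scaled to unit sd

    z = np.abs(r / sigma_hat)                # standardized returns
    for k, nominal in [(1, 68.3), (2, 95.4), (3, 99.7)]:
        print(f"within {k} sd: {100 * np.mean(z < k):.1f}% "
              f"(nominal {nominal}%), {int(np.sum(z >= k))} exceedances")

Under normality you would expect about one 3-sd exceedance in 300 months; here the fat tails produce a handful, and either way that bucket is judged on a few observations, which is the point about noisy tails.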

But I say these things all pretty generically as I don’t know anything about stock trading (except that I lost something like 40% of my life savings back in 2008, and that was a good thing for me).

41 thoughts on “Forecasting mean and sd of time series”

  1. It strikes me that this problem is much like the problem of studying climate change (CC) (aka global warming). In CC, there is a large number of sensors, and you have a large number of stock records. In CC, some of those records are probably well correlated (being physically close together), in stocks some are correlated by being in the same industry. In CC, the day-to-day variability is much higher than changes due to the trend. And so on. Also, CC has more than its share of frauds and charlatans…

    So it might be a good idea to look at how the CC folks handle these issues. It can get really complex. Two good blogs to go through are:

    1. Open Mind: https://tamino.wordpress.com/
    2. Skeptical Science: http://www.skepticalscience.com/

    Also, with time series, data points for successive times are often correlated, and this must be the case for stocks as well. If so, you have to correct the variances for correlation; positive autocorrelation shrinks the effective sample size, so the naive standard errors overstate your precision.
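
    For reference, under the simplest case of an AR(1) process with lag-one autocorrelation ρ (my choice of toy model), the standard large-sample correction is

        \operatorname{Var}(\bar{x}) \approx \frac{\sigma^2}{n} \cdot \frac{1+\rho}{1-\rho},
        \qquad
        n_{\mathrm{eff}} \approx n \, \frac{1-\rho}{1+\rho},

    so with ρ = 0.2 the 300 months in the post behave like roughly 200 independent observations.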

    You said “Basically the idea is to select strategies that are likely to generate high returns given the amount of volatility they experience”. This is a recipe for drawing spurious conclusions because it is essentially p-hacking. One way to combat this is to only use part of the data for arriving at a strategy. Then you test it on the other part of the data.
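
    A minimal sketch of that holdout discipline, with simulated strategies that all share the same true mean, so any in-sample “winner” is pure luck (all numbers hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)

        # 50 candidate strategies with identical true means: the in-sample winner
        # is selected noise, and the holdout half should expose that.
        n_strategies, n_months = 50, 300
        returns = rng.normal(0.8, 4.0, size=(n_strategies, n_months))  # % per month

        train, test = returns[:, :150], returns[:, 150:]
        best = np.argmax(train.mean(axis=1))   # strategy chosen on the first half

        print(f"in-sample mean of winner: {train[best].mean():.2f}%")
        print(f"out-of-sample mean:       {test[best].mean():.2f}%")

    The winner's in-sample mean is inflated by selection; its out-of-sample mean regresses back toward the common 0.8%.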

    Also, remember that if you model your data sets with several parameters and then estimate those parameters from the data (e.g., by least squares methods), then unknown and unincluded parameters will project their noise indirectly onto your estimates. You can end up with so much noise in the parameter estimates that they aren’t really helpful.

  2. Garrett, regarding evaluation of volatility forecasts, check out proper scoring rules and Patton & Sheppard “Evaluating volatility and correlation forecasts” (2009). Also, your question could be a good fit for Cross Validated or Quantitative Finance Stack Exchange.
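
    To make “proper scoring rule” concrete, here is a sketch of one loss from that literature that is robust to noisy volatility proxies, the so-called QLIKE loss, using squared returns as the (noisy) proxy for realized variance; the comparison data are simulated:

        import numpy as np

        def qlike(r2, h):
            """QLIKE loss for a variance forecast h against a proxy r2 (e.g. a
            squared return). Lower is better; its expectation is minimized
            when h equals the true variance."""
            x = r2 / h
            return x - np.log(x) - 1.0

        rng = np.random.default_rng(2)
        r = rng.normal(0.0, 4.0, size=300)   # returns with true sd 4
        h_good = np.full(300, 16.0)          # forecast variance 16 (correct)
        h_bad = np.full(300, 4.0)            # forecast variance 4 (too low)
        print(f"QLIKE, correct forecast: {qlike(r**2, h_good).mean():.3f}")
        print(f"QLIKE, sd too low:       {qlike(r**2, h_bad).mean():.3f}")

    Averaged over the months, the correct forecast scores lower; unlike the 68/95/99.7 bucket counts, every observation contributes to the score, not just the rare tail events.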

    • Yes, and this model of increasingly variable storms is far more reasonable than trying to apply an hour-by-hour weather model to the entire globe and extrapolate it for 350 years and see what happens. This is an example of what I meant below by “inner” (weather model) vs “outer” (models at the scale of decades or centuries where the frequency of certain kinds of events is driven by known causal forces on the hydrological cycle without modeling them at the scale of the PDEs of fluid and energy transport).

  3. Well before I drank the Bayesian Kool-Aid I worked at a company that made mathematical models for finance. It always seemed like there was something wrong with this industry: it had a lot of mathematical sophistication and basically no “real world” sophistication. The ideas tend to go something like “the sum of a large number of random variables is normal, the stock market consists of an enormous number of small trades each of which is as good as random, so stock returns are normal”. Following this you get to pull out the big guns in stochastic differential equations and make big bucks with your master’s degree in quantitative finance, or your physics degree or whatever.

    Of course, the truth is, it’s easily observable that stock returns are not normal… So then there’s all the stuff in quantitative finance departments about how to treat Lévy processes with long tails…

    But at its heart, all of it is down to *the stock market is an agglomeration of random number generators*. There’s even this weird concept of a “risk neutral probability distribution” which more or less says that the random number generator of the stock market is this thing that comes from a mixture model of all the individual random traders.

    Well, as someone who’s had the Bayesian Kool-Aid, I just don’t buy that this is the way to go. I suspect there are a number of people who trade in the market using Bayesian techniques but they keep their MOUTH SHUT.

    All I’ll say, since it’s a pretty generic thought, is that it seems to me the way to go is to use causal models of social processes to create Bayesian predictions of what will happen in the future, and then trade on those predictions.

    That is a nontrivial task, and certainly if one wants to make money off it, the models will stay proprietary.

    • Another way to think of what’s going on is that standard finance takes the asymptotic “inner” model (i.e., an hour’s worth of trades is as good as a normal random number generator, which is probably not bad) and extrapolates this “inner” model to t = infinity. My intuition is that what you want is a causal *outer* model that predicts on the scale of months or years, ignoring the “inner” noise.

        • Probably it does. I am not well enough educated in the area to really say, but I have always regarded the Efficient Market Hypothesis as a fairy tale anyway.

          However, I’m a psychologist, not an economist. I’ve regarded Utility Theory as nonsense since second-year university, when my girlfriend told me about it.

        • The existence of hedge funds basically says that large numbers of people think the EMH is false and are willing to put their money where their mouth is.

          The key questions are more: how do you build the model and exploit it? And how long does it take before your own actions in the market erase your advantage, so that you need to find even better models to make money?

        • I think few hedge funds take the approach I’m suggesting, most of them are all about the stochastic ODEs and high-speed trading and all that jazz.

          The approach I’m suggesting is a lot more like the one that made Warren Buffett rich. He found causal factors that made him think certain companies would do better in the future, and typically he also then bought the company and told the managers what to do ;-)

        • @Daniel

          Feature or bug? :)

          I mean, assuming I was smart enough, I could more realistically convince myself that I can write a successful high-speed trading strategy than that I could implement the approach you are suggesting.

          HST can run on temporary mismatches, inter-exchange arbitrage, low-latency links, stuff like that. To write a Buffett-like strategy, I don’t think that’s easy. Hell, I don’t even think it is possible. And I think that’s where EMH kicks in.

        • Zbicyclist:

          You write, “that the big money on Wall Street isn’t in beating the market, but in convincing others that you can beat the market.”

          Or in convincing others that you can convince others that you can beat the market.

        • I would say that the existence of trading algorithms that consistently obtain excess risk-adjusted returns net of taxes and commissions (“finding alpha,” in financial jargon) would be a substantial blow to the Efficient Market Hypothesis. But, as Daniel alluded to, if an algorithm of this sort came into the public domain the profits would be arbitraged away and shrink to risk-adjusted returns (and fairly quickly).

          But I am not so sure this matters: for some time the EMH has been thought to be an incomplete theory of how markets work.

          The two strongest forms of the EMH have been argued to be less than perfect from both an empirical viewpoint (see Shiller on the difference between the volatility of stock prices and dividends: http://www.nber.org/papers/w0456.pdf) and an analytical one (see Grossman & Stiglitz on the impossibility of informationally efficient markets: http://alex2.umd.edu/wermers/ftpsite/FAME/grossman_stiglitz_(1980).pdf).

          That is not to say that consistently obtaining excess risk-adjusted returns in the markets is a trivial task. In fact, it remains quite a difficult endeavour. The majority of funds underperform the market after fees, and on average, fund managers and investors underperform their own funds!

          The vast majority of investors would be substantially better off if they believed the EMH and invested, at regular intervals and in regular amounts, into a low-cost index fund (for example the Vanguard 500: https://personal.vanguard.com/us/funds/snapshot?FundId=0040&FundIntExt=INT).

          The same is apt to be true of quantitative specialists who focus on algorithmic trading strategies (high frequency trading not included).

        • Although I think it’s possible to do a good job of creating a predictive model that outperforms the market, I myself invest in low cost ETFs, because I don’t think it’s likely to be possible *for me* given the information resources I have available, and my limited capital.

          All this is to say that I more or less agree with what Allan has to say, and I think that Stiglitz & Grossman paper makes a huge amount of sense.

        • @Daniel:

          >>>The existence of hedge funds basically says that large numbers of people think the EMH is false and are willing to put their money where their mouth is.<<<

          But isn't that like saying that the existence of tarot card readers shows that people think astrology is real and are willing to put their money where their mouth is? The statement is more indicative of perceptions than statistical truth.

          If you aggregate over *all* hedge funds, do they generate supernormal returns? I really don't know. Any studies?

          Isn't there a selection bias here? We only hear about the star performers.

        • Exactly. As a market, this may be true. As a single hedge fund, it is not only possible but probable that many will outperform the market. If there were consistent underperformance of the market, the conclusion would be that predictive analytics actually reduces the quality of decisions, which makes little sense.

        • Rahul,

          The question that I would ask is slightly different from the one you pose. Since reward tends to correlate with risk, even if a fund consistently achieved high returns it may only be doing so by taking on more risk (with your money). The question then becomes, I think: is there any evidence that funds or managers or investors or simple strategies can consistently beat the market after adjusting for the risk profile?

          The answer to this question is that there appears to be some evidence that this is possible, but there is also some evidence that it is not. I’m only going to review literature that pertains to fundamental analysis (technical analysis is a completely different bag, as is high-frequency trading).

          On the side with evidence for a consistent ability to beat the market we have a plethora of low P/E (Price to Earnings) and low P/B (Price to Book) studies that are replicated across almost every time period and across every public exchange in the world. I don’t have one fantastic paper to point you towards that demonstrates this point, but Shiller may be a good start (see http://aida.wss.yale.edu/~shiller/data/peratio.html). Also see De Bondt and Thaler for a discussion of risk-adjusted returns for portfolios constructed of past losers/winners (http://faculty.chicagobooth.edu/richard.thaler/research/pdf/DoesStockMarketOverreact.pdf), which has more of a behavioral economics flavour.

          In general, these low p/e and low p/b results are not disputed by those in the EMH camp (see A Random Walk Down Wall Street, page 278). The EMH guys typically explain away excess returns from such strategies when the volatility of the constructed portfolios is greater than that of the market (volatility being the academics’ version of risk). However, this becomes a problem here because, in general, the volatility is lower for the low p/e and low p/b portfolios. So instead, they tend to explain the studies away by saying that the low p/e and low p/b portfolios are capturing other elements of risk that volatility alone doesn’t account for.

          A cute example of this is Fama & French and their three-factor model (see https://faculty.fuqua.duke.edu/~charvey/Teaching/BA453_2006/FF_Common_risk.pdf). In it they basically come out and say: traditional measures of risk such as beta, volatility, etc. (major components of the Capital Asset Pricing Model) do not capture returns appropriately. However, low p/e and low p/b and others do a much better job of predicting returns, so these things must be capturing an element of risk that the traditional measures are not. These all go into equations that extend the current version of the CAPM, and now we have a new one.

          Although it’s a little ridiculous to keep adding predictors to your model of capital assets (it’s now at 5, I believe) when newly discovered predictors explain price variation better than the base predictors, especially when saying (with a straight face) that this is all in the interest of incorporating measures of risk, the EMH camp does have a powerful point. If low p/e and low p/b stocks outperform the market and are less risky, why have professional arbitrageurs and investors not arbitraged the returns down to market returns? They also have another point: most fund managers underperform the market indices after fees (see A Random Walk Down Wall Street, page 178)! What the hell are fund managers doing if they are not going after easy-to-use strategies to beat the market, especially when their own performance has been suspect?!

          This is where behavioral economists have stepped in and tried to subsume the low p/e and low p/b results as being explained by their theory. The gist of it is this: there are natural limits to arbitrage (see http://www.palermo.edu/economicas/PDF_2012/PBR7/PBR_01MiguelHerschberg.pdf) that prevent investors, even institutional investors, from fully taking advantage of the situation. Combine this with phenomena such as anchoring, loss aversion, herding, and heuristic-driven bias (see Beyond Greed and Fear by Hersh Shefrin for an introduction/review of this literature) and the behavioral guys suggest that they’ve sort of solved the puzzle.

          The existence of these biases does not fully explain why institutional investors, the so-called Smart Money, consistently underperform the market, however. This has been explained in several ways, but I’ll review two. The first is basically that these biases really are that strong.

          There is an anecdote about Richard Thaler (one of the first researchers to investigate these behavioral biases). He came to the same conclusion as the studies mentioned above (low p/e and low p/b being the way to go) and said, hot dang, I’m going to make a fund for my department. The one caveat he gave people when asking them to invest was: don’t look at the companies in the fund. They all looked awful (playing to the loss-aversion bias)! People looked, and no one invested. The fund went on to beat the market by some margin. (I don’t recall reading this anecdote but rather hearing it in a lecture by Bruce Greenwald that I think is still online.)

          The second reason that has been proffered is that the combination of trying to maximise assets under management (see Kara et al., International Investment Management, for a review of fund managers’ incentives) and the control that institutional investors as a whole exert over the market has led to sub-market returns. The evidence seems to suggest that managers don’t see an outflow from their funds as long as they don’t significantly deviate from their peer group. In other words, if you run a growth fund and you do poorly on an absolute basis, but relative to your peers you do okay, you won’t have an outflow of money. This, combined with the desire to maximise assets under management, leads to herding (and back to the behavioral explanation).

          The second part of the explanation is that institutional investors account for quite a substantial part of the market. At present, they account for about 70% of the market (see http://www.q-group.org/wp-content/uploads/2014/01/Keim-InstitutionalOwnership.pdf). The remaining 30% are retail investors (read: average Joes). However, the average Joe who invests his own money typically follows sell-side research, or follows the moves of an institution by looking at its filings, or is otherwise led by the moves of Smart Money. Thus, that 30% isn’t an independent group; it’s maybe, say, 15% original thought and 15% just following the Smart Money. That means, at present, about 85% of the market is comprised of Smart Money.

          The market, being a zero-sum game, has to have winners and losers. So, if money is reasonably evenly distributed amongst firms (that is, no couple of firms corner the market in size), then the average performance of the average fund manager has to be close to the market before fees, and substantially less after fees. This is how the argument proceeds, anyways.

          So there’s all that, and then there is Warren Buffett, who came from the school of Graham and Dodd. Graham and Dodd published Security Analysis, the first real tome of the discipline, in 1934. It basically advocated for valuing financial items with intelligence, as a business person would, and buying things when they were on sale.

          Without getting too much into how Buffett has made his money (this alone is quite a topic), he made it by following the principles of Graham and Dodd. Some have called Buffett’s and Munger’s success an outlier, to which Buffett responded with his address entitled The Superinvestors of Graham-and-Doddsville (see http://gdsinvestments.com/wp-content/uploads/2015/07/The-Superinvestors-of-Graham-and-Doddsville-by-Warren-Buffett.pdf). This address speaks directly to the p-hacking-type selection bias you mention in your post. Which, by the way, is incredibly important for evaluating trading algorithms without substantive theory.

          Anyways, that’s a very brief review. In summary: some evidence for strategies that work (low p/e and behavioral economics) and some evidence that they don’t (fund managers underperform, etc.)

        • In other words, I feel there’s a good reason *not* to look for a “causal *outer* model that predicts on the scale of months or years.”

          And to just model the random walk and temporary mismatches is about the best you can do and hope to get *somewhat* consistent success.

        • I agree with Daniel. Frankly, to do a good job at selecting stocks so as to “beat the market” would in the first place be a full-time job, and most of us have better things to do with our limited time on Earth. In the second place, of those that make it a full-time job, only a very small percentage can actually do better than stock market averages for extended periods. So for decades I’ve recommended to my students the following strategy:

          1) Make sure you are adequately insured against risks that you cannot cover yourself. (Term life if you have dependents, auto, house if you own one, health; very little else, if any.)

          I also recommend paying credit card bills in full every month, and having the credit card company debit the bank account to accomplish this. This imposes a discipline that tends to keep people from running up credit card balances at high interest rates.

          2) Invest the maximum the law allows into an IRA if you have earned income. These days a Roth IRA is probably the best bet. Have the funds automatically withdrawn from your bank account on a periodic basis…if you never see the money and don’t plan on using it, you’ll never notice that it’s not there and hopefully will adjust your spending appropriately. Have the IRA invest in index funds (Vanguard is my preference) or ETFs (again Vanguard). Vanguard has the lowest fees in the business.

          3) Pay no attention to the stock market. If it goes up you’ll make money. If it goes down, the next shares you buy will be cheaper and you’ll get more of them (Dollar cost averaging). Never panic and sell if the market goes down, you’ll probably sell at the bottom (I’ve known people that did this). Don’t get excited if the market goes up, you’ll probably buy a whole bunch just before the market tanks (ditto comment). Slow and steady is the ticket.

          4) Recently “Target Date” funds have become available and are worth a look, particularly for goals that have a determined date when the funds will be needed, for example, funding a child’s college education. These funds invest in indexed stock and fixed income funds, and rebalance on a regular basis (so for example, if the market is down, some stocks will be bought more cheaply to rebalance, and if it is up, some will be sold to rebalance, automatically “buying low and selling high”.) The percentage in fixed income increases as the target date is approached, reducing the volatility of the investment as the money will be needed.

          The good thing about this general strategy is that the costs are low, the emotions are taken out so that you won’t do something stupid (buy high, sell low!) and over a lifetime satisfactory results are virtually assured. (Nothing is certain, of course).

          When we started investing for our children’s college and our retirement, index funds weren’t available, so we had to choose our mutual funds on other bases. We looked for funds invested primarily in low P/E ratios with decently low management fees (1% or less were available, unfortunately not the 0.1% or so that index funds charge today). As index funds became available we used them as much as possible. We put our kids through college ourselves without aid or loans and are comfortably retired, now 40 years later. So I feel comfortable recommending a similar strategy, suitably updated to the vehicles available today, to the students I’ve taught.

        • > They show essentially how big a piece of the money supply (blue) or how big a piece of the stock market (red) your marginal hour of work buys.

          Comparing individual wages with the aggregated total market capitalisation doesn’t make much sense: the number of people working (and the size of the economy in general) has doubled. I agree that the stock market is relatively expensive, which is likely to result in relatively lower returns, but there are simpler dimensionless ratios, like Price/Earnings or Enterprise Value / Free Cash Flow, that will tell you that.

        • I took your comment to heart while standing around in a grass field with my tiny soccer stars :-)

          Re-calculating this in terms of per-person shares gives basically the same thing, because the per-person in the numerator cancels with the per-person in the denominator.
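
          Spelled out, with W for aggregate wages, M for market capitalisation, and N for population (my notation for the claim being made):

              \frac{W/N}{M/N} = \frac{W}{M}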

          http://models.street-artists.org/2017/03/03/personal-vs-macro-dimensionless-ratios/

          You could buy 1.4 times “an equal per person share” of the stock market in 1975, in 2010 you could buy 0.3 times “an equal per person share”…

          I think this is relevant to questions I have, such as issues of wealth concentration, and inter-generational wealth transfer, as well as savings behavior today and the expected future return on education. The usefulness of a dimensionless ratio is entirely in terms of what the question is you think it answers or what the model is you will use it for.

          Although this does tell you “the stock market is expensive” what it also tells you is “labor is cheap for someone who has capital” and maybe “you’d better hope you inherit something from your parents/grandparents who bought the stock market back in the 60’s and 70’s” and also maybe “companies that offer stock options and things instead of dollar wages are secretly printing their own money” etc etc

        • This is not “basically the same thing because the per person in the numerator cancels with the per person in the denominator”. This is something that happens to be similar (but the effect is somewhat smaller), despite being wrong before and correct (or at least less wrong) now.

          There are several other factors that you may want to consider. Not only is the stock market more expensive than in 1975, it’s also larger, because a larger part of the real economy is publicly traded. And there are also more multinational companies now, which own foreign assets and generate profits abroad; but on the other hand there is also a larger share of those stocks that are owned by foreign investors. And US investors also hold foreign stocks and debt. And financial assets are only part of the wealth of households (if this is what you care about). And it’s not surprising, given that wealth increases as a fraction of GDP (the natural trend unless there is a surge in GDP or a collapse in wealth), that the ratio of total wages to total wealth goes down (wages as % of GDP are slightly down, but the largest contribution comes from the increase in wealth).

        • Carlos: I’m not sure what you’re trying to say. It feels like you are arguing against something, like perhaps you think I have a particular interpretation of things, but I don’t know what you’re arguing against. Like, maybe you think I’m claiming that I’ve come up with “the one true secret to the economy” or some such thing. Certainly that’s not how I intended all this.

          anyway, I think it’s very interesting to construct these measures and see what I get. Each one answers different questions. This isn’t a matter of “used to be wrong and now is right”. My previous measure was an equally valid measure, just of a slightly different thing (Average Hourly Wages / Stock Market Capitalization * 1 hr * Today’s Wage * Dimensionless Scale Constant is a dollar quantity so I wasn’t claiming something incorrect). You can argue all you want about how relevant it is to something, but I never claimed that it was relevant to anything more than what it is, a measure of what fraction of the total stock market you can buy for an hour at a given point in time.

          My latest graph is slightly different because it measures what fraction of “an equal share of the market for everyone” you can buy with your hour. That’s maybe more relevant to questions of inequality, the earlier one is more relevant to questions of economic control. For example, if one group is primarily the one investing in the 1970’s then they’re going to have been buying a lot more control of companies than people investing an equal quantity of work today…

          The economy is, unsurprisingly (to me) highly multi-dimensional. In fact much of the reason I’m doing this is that I’m frustrated with a small number of dimensions that are “the usual suspects” among economists: real GDP/capita, the CPI, interest rates, market cap…

          All your comments about how I should consider more things… are basically exactly the kinds of things that have been driving me to graph lots of stuff on the FRED website. So yes, I agree with you we should look at lots of these things… that’s what I’m doing!

          For example, your point about the “increase in wealth”. Scaling these by price/book ratios could be interesting to see how much of the wealth is the actual stuff that the companies own vs how much is the theoretical stuff they’re supposed to produce in the future…

          anyway, thanks for engaging at least!

        • @Daniel

          >>>The economy is, unsurprisingly (to me) highly multi-dimensional. In fact much of the reason I’m doing this is that I’m frustrated with a small number of dimensions that are “the usual suspects” among economists: real GDP/capita, the CPI, interest rates, market cap…<<<

          But how does that help? Won't you just generate a large palette of ratios & dimensionless numbers with which you then don't know what to do?

          We use a lot of dimensionless ratios in Chem. Eng. for example. I agree they are useful. But given the number of variables involved I'm sure we could come up with numerous novel dimensionless groups that no one has used before. But what use would that be?

          A large part of the utility of dimensionless ratios comes from their legacy use. Reynolds numbers are hugely useful just because there *exist* a lot of correlations that are available using them.

          I could postulate other numbers but what good is that?

        • Rahul: Well, I’m not just picking them at random. I’m designing them to help answer certain questions. I’m working on a document putting some of it together more coherently, but I’m not ready to share that yet.

          The Reynolds number isn’t helpful because of a large number of correlations. It’s helpful because the dimensionless form of Navier-Stokes for incompressible flow depends on only a single dimensionless group. The large number of correlations follows from that fact.
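
          (For readers outside fluid mechanics, the standard nondimensionalization being referred to, with velocity scale U and length scale L, is

              \frac{\partial \mathbf{u}^*}{\partial t^*} + (\mathbf{u}^* \cdot \nabla^*)\mathbf{u}^* = -\nabla^* p^* + \frac{1}{\mathrm{Re}} \nabla^{*2} \mathbf{u}^*, \qquad \mathrm{Re} = \frac{\rho U L}{\mu},

          so any two incompressible flows with the same geometry and the same Re obey the same equations; the engineering correlations follow from that.)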

          The economy as a whole may be highly multidimensional, but individual processes might depend on only a small number of groups. For example, the ratio of income after taxes to the minimum wage is relevant to whether selling labor makes sense compared to directly producing things in your home. Since tax rates are not constant with income, and the cost of living is not constant across states, you may see different expectations for family behavior in different locations.

        • @Daniel

          Right, but do you have the equivalent of a Navier-Stokes for the economics you are working on?

          In the absence of a strong phenomenological foundation, whether a dimensionless group you select is relevant or not is a subjective choice, which leads to the sort of argument you & Carlos were having.

        • @Rahul: well, Navier-Stokes is a description of the statistical aggregate of the behavior of individual molecules, as seen in the fact that lattice Boltzmann methods reproduce Navier-Stokes.

          And, yes, I’m trying to describe how statistical aggregates of microeconomic decisions aggregated across regions in the US produce certain outcomes related to employment, investment, and household welfare. And one thing I think is key is to frame the microeconomics in terms of dimensionless ratios that unify the decision making across multiple groups and locations and time.

          So, maybe?

        • > (Average Hourly Wages / Stock Market Capitalization * 1 hr * Today’s Wage * Dimensionless Scale Constant is a dollar quantity so I wasn’t claiming something incorrect).

          Irrelevant correction: that doesn’t seem right (at least if “average hourly wages” and “today’s wage” have the same dimensions).

          > I never claimed that it was relevant to anything more than what it is, a measure of what fraction of the total stock market you can buy for an hour at a given point in time.

          You used it to argue that “the average wage has gone down from $160/hr in 1975 to $30/hr in 2015.” I just pointed out that comparing the per capita income with the aggregate wealth (if market cap was a proxy for that) had a very obvious inconsistency in the “per person” dimension (which cancels out in the new calculation but didn’t before). But of course you can index hourly wages by total market capitalisation, or by number of cars, or by average weight, or by whatever measure you want.

        • Carlos: yes, sorry, a constant wage quantity (dollars/hr), which was what I was originally saying: that today, if your extra hour of work buys $30 worth of consumer goods and $30 worth of stock market share, back in the 1970s your equivalent hour bought as big a share of the stock market as you could buy if you were paid $160/hr today.

          So in terms of “ability to purchase a fraction of the stock market” wages went down. In terms of “ability to buy consumer goods” your wage went up since 1970.

          Now, you might expect that a fixed epsilon fraction of the stock market isn’t the scale to consider, and that instead a 1/N fraction of the stock market, where N is the population, is the appropriate scale bar. That gives a slightly different result, the newer result in my second post.

          As to “you can index hourly wages by total market capitalisation, or by number of cars, or by average weight, or by whatever measure you want”: to make this dimensionless you can’t use weight or cars; you can use the dollar cost of some selection of cars, or the dollar cost of the food required to gain a certain amount of weight, or to feed a person of a certain weight for a fixed time, but it won’t be dimensionally neutral unless all the units cancel. And that’s a key insight that is often ignored in economics, at least at the level of economic communication in the media and between non-specialists.

        • Carlos, I also need to discuss on my blog the “per person” dimension issue. Person is of course a count of a specific indivisible object. So, sometimes when aggregating you can treat it as an infinitely divisible / scalable dimension like moles of a gas. Other times, you can’t, like when dealing with molecules of a gas… you can’t talk about 1/16th of a molecule in any meaningful manner. So in the asymptotic limit of large N you can treat “person” as a dimension. In the asymptotic limit of small N “person” is just a dimensionless integer.

        • The real advantage of the index fund approach is the very low fees paid. Warren Buffett’s 2008 bet is an example:

          http://secure.marketwatch.com/story/warren-buffett-has-nothing-but-praise-for-this-guy-2017-03-03/print?guid=73AA3B0E-F115-4CDC-B97E-4631E4CE01C1

          The index funds have an annual management fee of <0.1% typically; the hedge funds, actively managed by smart people, generally use the 2%+20% fee strategy (2% of the money under management, plus on an annual basis 20% of gains in the fund, sorry, no return of fees if the fund loses money). It's very good for the hedge fund managers, not so good for the investor.
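
          A rough sketch of the fee drag over 20 years, under stylized assumptions of my own (constant 7% gross return, performance fee charged on all gains, no hurdle rate or high-water mark):

              def terminal_wealth(gross, years, mgmt_fee, perf_fee, start=1.0):
                  # Stylized: management fee on assets, performance fee on gains.
                  w = start
                  for _ in range(years):
                      gain = w * gross
                      w += gain - w * mgmt_fee - max(gain, 0.0) * perf_fee
                  return w

              print(f"index fund (0.1%):   {terminal_wealth(0.07, 20, 0.001, 0.0):.2f}x")
              print(f"hedge fund (2%+20%): {terminal_wealth(0.07, 20, 0.02, 0.20):.2f}x")

          Under these assumptions the 2-and-20 investor ends up with roughly half the index investor's terminal wealth, before even considering relative performance.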

          I would generally suggest investing in a mixture of a total market fund and a fund that only invests in (for example) the S&P 500. Reaching downward in the market, toward smaller capitalizations, is probably a good idea. There is also room for a portion to be invested in international index funds.

          Periodic rebalancing of the various funds to keep the fraction invested in each vehicle roughly equal (including a fixed income component) helps to "buy low and sell high".

  4. Most economic time series are not stationary, and therefore they don’t have a single, well-defined standard deviation. In general, stock returns have been found to be non-stationary. Entirely different types of techniques are needed for time series in general and non-stationary time series in particular. These techniques are well developed and comparatively accessible, but they are completely different from the ones used for series where serial correlation is generally absent.
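
    A minimal sketch of one standard check, the augmented Dickey-Fuller test from statsmodels, applied to a simulated random-walk price series and its log returns (the simulation is illustrative, not real market data):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        prices = 100 * np.exp(np.cumsum(rng.normal(0.008, 0.04, size=300)))  # random walk
        log_returns = np.diff(np.log(prices))

        for name, x in [("prices", prices), ("log returns", log_returns)]:
            stat, pvalue = adfuller(x)[:2]
            print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f}")

    A small p-value rejects a unit root; typically the price level fails and the returns pass. Note that this catches only one kind of non-stationarity: time-varying variance, the kind most relevant to volatility forecasting, needs different tools (GARCH-type models and the like).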
