Tuesday, May 31, 2011

Back to the old curtains

If you didn't read my previous post about having a party and going all crazy last-minute to fix my family room/basement, well, that pretty much sums it up.  I've always wanted to replace my Ikea curtains with double-width panels and tried out the Peytons from Pottery Barn because there was no time for custom curtains.  When I put them up, they completely changed the look of the room.  I figured I could repaint another time and switched up my rug and accessories for a quick change:

BUT it was completely the wrong feeling.  I loooove that rug but not in my family room.  I wanted fresh & fun & happy (we hang with the kids here a lot) and the rug & curtains were taking the room into a more serious, richer place.  Not okay.  So, I tried out the PB Peytons in French ivory, and with the lining and double panels, they still felt too "decorated" for me, so up went my old cotton Ikea curtains.  I added additional rings so they wouldn't sag as much and am totally fine with them now after my fiasco.  (Just need to hem them! ;)

{The two men in the photographs are my dad (top) and my husband's dad (bottom) waaaay back when.  We have very odd woodwork going on in our old 70s house so for now I'm just forgetting about it.}

I also switched up the art & accessories, FINALLY pulling in the green I was craving.  My basement was bothering me because it was so unrelated to the rest of my house but now with the added green, I'm loving the flow.  I had this old dinosaur chart and added the Peter Dunham pillows and a Dash & Albert rug we had:

{ignore the messy blanket and toys & books- sorry no time for pretty!}

And here's a pic of the natural woven shades we put up:

I'm ordering another set for the large window by the sofa to replace the white roman shade there, which will add the texture that's missing for me over there.  And here's a quick pic from the party:

I'll share pics from the party in my next post.  It was a ton of fun and took place almost entirely outside.  I'm pretty sure no one even noticed my curtains.  (Except for those who'd read my wacky post before coming!)  Ah vell.  I do have to say though, I live for this stuff.


If you'd like help creating a home you absolutely love, contact me about our design services.

How has the crisis changed the teaching of economics?

The Economist has organized a discussion on how the teaching of economics may change in the wake of the recent crisis. Around 15 prominent economists have weighed in, with few suggesting anything radical. Harvard's Alberto Alesina suggests that indeed nothing so far has changed:
As for the methods of teaching and research nothing has changed. We kept all that is good about methods in economics: theoretical and empirical rigor. But one may say we kept also what is bad: a tendency to be too fond of technical elegance and empirical perfection at the expense of enlarging the scope of analysis and its realism. Those who found our methodology good should not worry about changes. Those who did not like it should not hold their breath for any sudden change due to the crisis.
But some of the others suggest that things are changing, and that the crisis has at least stimulated a renewed interest in economic history. Indeed, for all the consternation that economists didn't see this crisis coming and didn't predict it, that isn't really the most surprising thing about the crisis. More surprising is how many economists seemed convinced that an event of this magnitude simply couldn't happen, and believed this despite centuries of history showing a never-ending string of episodic crises in countries around the world. Some economists did foresee trouble brewing, precisely because they took the past, rather than mathematical theory, as a serious guide to what could happen in the future. As Michael Pettis writes,

One of the stranger myths about the recent financial crisis is that no one saw it coming. In fact quite a lot of economists saw it coming, and for years had been writing with dread about the growing global imbalances and the necessary financial adjustments. In 2002 for example, Financial Policy published my article, “Will Globalization Go Bankrupt?” in which I compared the previous decade to earlier globalisation cycles during the past two hundred years and argued that we were about to see a major financial crisis that would result in a sharp economic contraction, bankruptcies of seemingly unassailable financial institutions, rising international trade tensions, and the reassertion of politics over finance. I even predicted that at least one financial superstar would go to jail.

How did I know? It didn’t require a very sophisticated understanding of economics, just some knowledge of history. Every previous globalisation cycle except one (the one cut short in 1914) ended that way, and nothing in the current cycle seemed fundamentally different from what had happened before. ... So how should the teaching of economics change? That’s easy. While mathematical fluency is very useful, it should not be at the heart of economics instruction. That place should be reserved for economic history.

That seems like an eminently sensible attitude. Moreover, modelling in economics ought to be much more strongly focused on understanding how past events and crises have emerged and why they appear to be inherent in the nature of economic systems -- just as natural as thunderstorms or hurricanes are in the Earth's atmosphere. This means, it would seem clear, moving outside the context of the profession's beloved general equilibrium models to study natural instabilities in a serious way.

In any event, I'm not convinced this discussion reflects adequately the deep dissatisfaction -- perhaps even embarrassment -- some economists feel over the state of their field. A good gauge of the stronger views held by a fraction of academic economists is the report written by those at the 2008 Dahlem Workshop in economics. The opening paragraph sets the tone:
The global financial crisis has revealed the need to rethink fundamentally how financial systems are regulated. It has also made clear a systemic failure of the economics profession. Over the past three decades, economists have largely developed and come to rely on models that disregard key factors—including heterogeneity of decision rules, revisions of forecasting strategies, and changes in the social context—that drive outcomes in asset and other markets. It is obvious, even to the casual observer that these models fail to account for the actual evolution of the real-world economy. Moreover, the current academic agenda has largely crowded out research on the inherent causes of financial crises. There has also been little exploration of early indicators of system crisis and potential ways to prevent this malady from developing. In fact, if one browses through the academic macroeconomics and finance literature, “systemic crisis” appears like an otherworldly event that is absent from economic models. Most models, by design, offer no immediate handle on how to think about or deal with this recurring phenomenon. In our hour of greatest need, societies around the world are left to grope in the dark without a theory. That, to us, is a systemic failure of the economics profession.

How many "lost decades"?

It is perhaps a reflection of the perceived self-importance of the financial industry -- the axis about which the world revolves -- that the term "lost decade" refers to any ten-year period in which the stock market actually declines in value. An entire decade of history is deemed to have been lost. Fortunately, such events are exceedingly rare -- or so most people think.

Not quite. Economist Blake LeBaron finds that the likelihood of a lost decade -- as assessed by the historical data for U.S. markets -- is actually around 7%. That's the historical chance for the numerical, or nominal, value of a diversified portfolio of U.S. stocks to fall over a decade. Calculated in real terms -- adjusting for inflation -- the probability is significantly higher, probably over 10%: not really an extremely unlikely event at all. The figure below (Figure 1 in LeBaron's paper) shows the calculated return over ten-year windows across the past 200 years or so, with maybe six episodes in which the real return descends into negative figures.

Not an earth-shaking result, perhaps, but a useful corrective to the widespread belief that long-term drops in the market are truly exceptional events. As LeBaron comments,
Lost decades are often treated as a kind of black swan event that is almost impossible. Results in this note show that while they are a tail event, they may not be as far out in the tail as the popular press would have us think.... A life long investor facing 6 decades of investments should consider a probability 0.35 of seeing at least one lost decade in their lifetime.
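LeBaron's lifetime figure is easy to check with a couple of lines of arithmetic, under the simplifying assumption that a lifetime consists of six non-overlapping decades, each an independent draw with a 7% chance of loss (overlapping windows in the historical data are of course correlated, so this is only a sketch):

```python
# Sketch of LeBaron's lifetime calculation, assuming six non-overlapping
# decades, each an independent draw with a 7% chance of being "lost".
p_lost = 0.07    # per-decade probability of a nominal loss
decades = 6      # a 60-year investing lifetime

p_at_least_one = 1 - (1 - p_lost) ** decades
print(round(p_at_least_one, 2))  # 0.35, matching the quoted figure
```

With the real-terms probability above 10%, the same arithmetic pushes the lifetime chance of at least one lost decade toward one half.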

Blake pointed out to me in an email that the term "lost decade" of course has much wider meanings than what I've discussed here, arising for example in discussions of Japan from 1990 to 2000 (and maybe longer).

Monday, May 30, 2011

Is California's Housing Problem That It Has Too Many Houses?


Below is a chart of residential vacancy rate by state from the 2010 US Census.

California has the second lowest vacancy rate among the 50 states and DC.  California builders may have built the wrong kind of housing in the wrong places, but overall, they did not build too many houses. 

Note that Florida and Arizona have very high rates, but their rates are always high, because so much of their housing stock is seasonal--I suspect vacation homes drive a lot of what is happening in Maine, Vermont and Alaska as well.  Nevada has had the largest increase in vacancy over the past 20 years, rising from about 10 percent to 14 percent. 

Are commercial property values rising or falling?

This might seem like a simple question.  But it is not.

The Moodys/REAL Commercial Property Price Index (CPPI), produced at MIT, says they are still falling:

Green Street's Commercial Property Price Index says they are rising:
Which one is correct matters.  If Green Street is right, and prices are only 12.6 percent off peak, then commercial properties by and large have equity (loan-to-value ratios rarely exceeded 80 percent on commercial properties).  If MIT is right, we are still in deep trouble.
Both sources do a good job explaining their methods.  For MIT:
The Moodys/REAL commercial property index (CPPI) is a periodic same-property round-trip investment price change index of the U.S. commercial investment property market based on data from MIT Center for Real Estate industry partner Real Capital Analytics, Inc (RCA). The methodology for index construction has been developed by the MIT/CRE through a project undertaken in cooperation with a consortium of firms including RCA and Real Estate Analytics, LLC (REAL). The index has been developed with the objective of supporting the trading of commercial property price derivatives. The index is designed to track same-property realized round-trip price changes based purely on the documented prices in completed, contemporary property transactions. The index uses no appraisal valuations. The methodology employed to construct the index is a repeat-sales regression (RSR), as described in detail in Geltner & Pollakowski (2007). The data source for the index is described in detail in a white paper available from RCA.

The set of indices developed so far includes a national all-property index at the monthly frequency, national quarterly indices for each of the four major property type sectors (office, apartment, industrial, retail), selected annual-frequency indices for specific property sectors in specific metropolitan areas, and primary markets quarterly indices for the top 10 metropolitan areas in the major property types. The annual indices are produced in four versions, beginning in January, April, July, and October of each year. These are respectively named the calendar year (CY) index, the fiscal year ending March (FYM) index, the fiscal year ending June (FYJ) index and the fiscal year ending September (FYS) index.

The RCA Database

The commercial property index is based on the RCA database which attempts to collect, on a timely basis, price information for every commercial property transaction in the U.S. over $2,500,000 in value. This represents one of the most extensive and intensively documented national databases of commercial property prices ever developed in the U.S.

The Moodys/REAL CPPI and the TBI

The Moodys/REAL CPPI index is a complementary information product to the transaction based index (TBI) also published on the MIT/CRE web site. Both the CPPI and the TBI are based purely on transaction price data. The TBI is based on NCREIF property sales prices data, while the CPPI is based on RCA sales prices data. Thus, the TBI is based on a smaller population of more purely institutionally held properties. The TBI is based on a hedonic regression methodology whereas the CPPI is constructed with a repeat-sales methodology. The TBI is published with history going back to 1984 but only at the quarterly frequency, and only at the national level (for the four major property types), whereas the CPPI includes monthly and annual frequencies and more geographic regional break outs. The CPPI is a variable-liquidity price-change (appreciation return) index, while the TBI includes total return and demand and supply-side indexes.
For Green Street:
Green Street’s Commercial Property Price Index is a time series of unleveraged U.S. commercial property values that captures the prices at which commercial real estate transactions are currently being negotiated and contracted.

Two features that differentiate this index are its timeliness and its ability to capture changes in the aggregate value of the commercial property sector.

• Timeliness: Other indices are based on closed transactions, and therefore convey info about market prices from several months earlier. Also, the Green Street index value for a given month is released within days of monthend, whereas other indices have a sizeable lag. As shown below, the Green Street index spots inflection points earlier than other indices.

• Weighting: This index is weighted by asset value within each property sector, and therefore it provides a gauge of changes in aggregate values. Most other indices are equally weighted.

So the big differences are: (1) MIT looks only at transactions, whereas Green Street looks at current negotiations; (2) MIT's valuation model gives equal weight to all properties, while Green Street's valuation gives greater weight to expensive properties than to cheaper properties; and (3) MIT has a much broader sample, because REITs would rarely buy properties as inexpensive as $2.5 million.
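A toy example (with made-up numbers) shows how much the weighting choice alone can matter. Suppose a market consists of just one expensive property and one cheap one:

```python
# Hypothetical two-property market: a $100m tower gains 10% while a
# $2.5m office building loses 20%.  Equal weighting (MIT-style) and
# value weighting (Green Street-style) then tell opposite stories.
values  = [100.0, 2.5]    # start-of-period values, $m
returns = [0.10, -0.20]   # period price changes

equal_weighted = sum(returns) / len(returns)
value_weighted = sum(v * r for v, r in zip(values, returns)) / sum(values)

print(round(equal_weighted, 3))   # -0.05  -> "prices fell 5%"
print(round(value_weighted, 3))   #  0.093 -> "aggregate value rose ~9%"
```

The numbers are invented, but the direction of the effect is real: a value-weighted index tracks what happens to total dollars invested, while an equal-weighted one tracks the typical property.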

So which index is correct?  It all depends on context.  While I would be a little leery of using "negotiated price" as an indicator of value (as opposed to closed transactions), the timeliness of Green Street's data does give it an advantage.  For REITs trying to determine strategy, the Green Street index is probably better.  For banks making loans on smaller properties--or for individual investors thinking of buying small office buildings--the MIT index is more relevant.

Interactive Atlas of NSW

The Atlas of New South Wales is an initiative of the Land and Property Management Authority, created with the objective of providing detailed statistics across a range of topics to educational institutions and the broader community. It is built on Bing Maps, so it provides a simple and intuitive interface -- familiar to many -- to government information.

All the information is presented as a series of thematic map overlays in four categories:

  • People (e.g. population, health, housing, religion, indigenous population, indexes of relative advantage/disadvantage, crime);
  • Economy (e.g. labour force, taxation and revenue, and production of fruit and vegetables, oils and grains, and livestock);
  • History (e.g. information on settlement, State elections and borders); and
  • Environment (including vegetation, geology and soils, and locations of national parks).

Users have a choice between satellite imagery or a road map as the base layer and can adjust the transparency level of the thematic overlays. Each overlay is accompanied by a comprehensive legend explaining the meaning of the presented data. A click on an individual region brings up a pop-up window with information about the region, presented as charts and gauges.

The Atlas of New South Wales is quite responsive considering the amount of data required to present the thematic overlays. It would benefit, though, from more legible charts and from access to source data in tabular format and/or for download. Overall, the application is well built and very simple to navigate.

Saturday, May 28, 2011

Family Room Curtains Indecision

We decided we'd have a little party this weekend to kick off summer.  Well, our "little" party grew - which is always good - and yesterday I started looking around at our basement family room, tearing it apart mentally.  (Do you do that??)  Some of our friends coming over are designers and, although all of them are supersweet & would never judge, I couldn't help but want to fix it up before tonight.  (I had a client over to the office last week for a presentation and used areas of my home for "what not to do" and why we'd be doing what we were doing. hahaha)

I bought custom bamboo shades for the DC showhouse and sized them so that they could also work in my office once the showhouse was over.  As some of you know, my family room/downstairs has been irking me for a while.  I have to admit that I love the feeling down there- light, airy, fresh & happy- and I like working in it, but I've always felt like it just doesn't jibe with the rest of my house.  Well, of course when the shades and some of my pillows came home from the showhouse, I started envisioning a new color palette for the space, using what I had.  Our basement is very coastal feeling and while I love it, it just doesn't make sense here.  (My dad lives on a lake and wants some of our paintings so I think eventually I'll give in because they'd be a better fit with him.)


Anyway- besides the coastal vibe- the main thing that really bothers me about my house/basement is the curtains.  I made some myself and did Ikea for the rest, planning to upgrade later.  I used only single panels on the door and large window, where I should have at least used double.  Curtains are one of those elements that I notice when I'm in people's homes and they can really take a room up or down a notch.  SO... since there was no time to have any custom ones made the day before the party or change the color scheme too much, and I had the day off with my 3-year-old, we headed to Pottery Barn for some last-minute curtains.  I like their linen "Peyton Drapes," which are lined and hung by drapery hooks, so I bought a bunch in "blue smoke."

Here's a picture of the curtains that drive me craaaaaazy:

{oh so bad}

And here's a pic of the Pottery Barn Peytons (not hemmed to the proper height but that will have to wait for another day)

{oh yeah, I switched the rug with another one I had too.  I have plans to sell it but am having trouble parting with it.  I have a problem.}

I'm much happier with the width and quality of the panels and loved the color with the rug.. buuuuut....  I'm losing the vibe I want down there even more.  It took a turn in another direction I'm just not after.  (Although I am now dying to do it in a room "for real.")  The rug is insanely gorgeous in person and its vibe is just a little too warm/rich/formal for what I'm after.  We use the area as the kids' play space, and while I don't want it junky, I want it to feel a bit more effortless.  (Ironic, isn't it??  All this effort for "effortless?")

{So although the vibe's off, I'm much happier with the drapery hooks hung nicely on the curtain rings...  my unlined Ikea curtains were not thick enough up top to properly support the drapery hooks I'd used, so they sagged and flopped too much.  And I'm also loving the double-widths.}

BUT... they had to go because I can't justify them if they're not where I'm trying to go.  I returned them last night & picked up some new ones in French ivory.  I also restrained myself from using the pretty rug and it's all safely packed away again.  (Who does this?!!!  I know I have problems.)  SO...  now I'm off to the family room to try out the ivories, but I have to be honest that what I really want isn't off the rack and I think I'm looking for unlined and a bit more chill.  (And I know my friends will not care in the least if the windows are bare or Ikea'd.)  BUT...  maaaybe they'll work and then I'll be thrilled.  (And if anyone from Pottery Barn is reading--- so sorry to be such a PIA!!)

Wish me luck!

Is it weird that I love this?!!! haahahah

Oh ps-   We're having crab, shrimp, and bratwurst along with a bunch of salads-  am so excited to dig into the kitchen today- and the party will be mainly outside in the backyard.  I can't wait to share pics!!


xoxo, Lauren


Deep Discounting Errors?

One of the papers currently on my list of "Breaking Research" (see right sidebar) has the potential to be unusually explosive, perhaps even world-changing. Its conclusions represent dynamite for everyone involved in the economic assessment (i.e. cost-benefit analysis) of various proposals for measures to respond to climate change, or environmental degradation more generally. All this from a bit of algebra (and good thinking). Here's why.

Five years ago, the British Government issued the so-called Stern Review of the economics of climate change, authored by economist Nicholas Stern. The review had strong conclusions:
“If we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more.”
The review recommended that governments take fast action to reduce greenhouse-gas emissions.

In response, many economists -- most prominently William Nordhaus of Yale University -- have countered the Stern Review by criticizing the way it "discounted" the value of consequences in the future. They said it didn't discount the future strongly enough. In this essay in Science in 2007, for example, Nordhaus argued that the value of future economic losses attributed to climate change (or any other concerns about the environment) should be discounted at about 7% per year, far higher than the value of 1.4% used in the Stern Review. Here is his comment on this difference, providing some context:
In choosing among alternative trajectories for emissions reductions, the key economic variable is the real return on capital, r, which measures the net yield on investments in capital, education, and technology. In principle, this is observable in the marketplace. For example, the real pretax return on U.S. corporate capital over the last four decades has averaged about 0.07 per year. Estimated real returns on human capital range from 0.06 to > 0.20 per year, depending on the country and time period (7). The return on capital is the “discount rate” that enters into the determination of the efficient balance between the cost of emissions reductions today and the benefit of reduced climate damages in the future. A high return on capital tilts the balance toward emissions reductions in the future, whereas a low return tilts reductions toward the present. The Stern Review’s economic analysis recommended immediate emissions reductions because its assumptions led to very low assumed real returns on capital.
Of course, one might wonder whether four decades of data is enough to project this analysis safely into untold centuries in the future (think of the sub-prime crisis and the widespread belief that average housing prices in the US had never fallen, based on a study going back 30 years or so). Setting that aside, however, there may be something much more fundamentally wrong with Nordhaus's critique, as well as with the method of discounting used by Stern in his review and by most economists today in almost every cost-benefit analysis involving projections into the future.

The standard method of economic discounting follows an exponential decay. Using the 7% figure, each movement of roughly 10 years into the future implies a decrease in current value by a factor of 2. With a discounting rate r, the discount factor applied at time T in the future is exp(-rT). Is this the correct way to do it? Economists have long argued that it is for several reasons. To be "rational", in particular, discounting should obey a condition known as "time consistency" -- essentially that subsequent periods of time should all contribute to the discounting in an equal way. This means that a discount over a time A+B should be equal to a discount over time A multiplied by a discount over time B. If this is true -- and it seems sensible that it should be -- then it's possible to show that exponential discounting is the only possibility. It's the rational way to discount.
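The time-consistency property is easy to verify numerically for the exponential form: with a fixed rate, the discount over A+B years factors exactly into the discount over A times the discount over B (the rate and horizons below are just illustrative):

```python
import math

# Time consistency of exponential discounting: exp(-r(A+B)) equals
# exp(-rA) * exp(-rB) for any split of the horizon.  The rate and
# horizons here are arbitrary illustrative values.
r, A, B = 0.07, 10.0, 25.0
combined = math.exp(-r * (A + B))
two_step = math.exp(-r * A) * math.exp(-r * B)
print(abs(combined - two_step) < 1e-12)  # True
```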

That would seem beyond dispute, although it doesn't settle the question of which discount rate to use. But not so fast. Physicist Doyne Farmer and economist John Geanakoplos have taken another look at the matter in the case in which the discount rate isn't fixed, but varies randomly through time (as indeed do interest rates in the market). This blog isn't a mathematics seminar so I won't get into details, but their analysis concludes that in such a (realistically) uncertain world, the exponential discounting function no longer satisfies the time consistency condition. Instead, a different mathematical form is the natural one for discounting. The proper or rational discounting factor D(T) has the form D(T) = 1/(1 + αT)^β, where α and β are constants (here ^ means "raised to the power of"). For long times T, this form has a power law tail proportional to T^-β, which falls off far more slowly than an exponential. Hence, the value of the future isn't discounted to anywhere near the same degree.
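To get a feel for how differently the two forms treat the far future, here is a quick comparison; the power-law constants α and β are hypothetical values chosen purely for illustration, not taken from the paper:

```python
import math

# Exponential vs. power-law discounting.  r matches the 7% rate
# discussed above; alpha and beta are illustrative constants only.
r = 0.07
alpha, beta = 0.07, 1.0

for T in (10, 100, 500):
    exponential = math.exp(-r * T)
    power_law = 1.0 / (1.0 + alpha * T) ** beta
    print(T, exponential, power_law, power_law / exponential)
```

At T = 500 the exponential factor is below 10^-15 while the power-law factor is still about 0.03 -- a ratio of more than ten trillion. That is exactly the kind of divergence at issue.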

Farmer and Geanakoplos illustrate the effect with several simple models. You might take the discount rate at any moment to be the current interest rate, for example. The standard model in finance for interest rate movements is the geometric random walk (the rate gets multiplied or divided at each moment by a number, say 1.1, to determine the next rate). With discount rates following this fluctuating random process, the average effective discount after a time T isn't at all like that based on the current rate projected into the future. Taking the interest rate as 4%, with a volatility of 15%, the following figure taken from their paper compares the resulting discount factors as time increases:

For the first 100 years, the numbers aren't too different. But at 500 years the exponential is already discounting values about one million times more strongly than the random process (GRW), and it gets worse after that. This is truly a significant hole in the analyses performed to date on climate policy (or steps to counter other problems where costs come in the future).
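A small Monte Carlo sketch reproduces the flavor of this result. This is my own rough simulation, not the calculation from the paper; the annual time step and the treatment of volatility are simplified:

```python
import numpy as np

# Rough Monte Carlo sketch: the short rate follows a geometric random
# walk (4% starting rate, 15% annual log-volatility), and the effective
# discount factor is the *average* of exp(-sum of realized rates).
rng = np.random.default_rng(42)
r0, vol, years, n_paths = 0.04, 0.15, 500, 20_000

shocks = rng.normal(0.0, vol, size=(n_paths, years))
rates = r0 * np.exp(np.cumsum(shocks, axis=1))     # GRW rate paths
avg_discount = np.exp(-rates.sum(axis=1)).mean()   # E[exp(-integral r)]
fixed_discount = np.exp(-r0 * years)               # constant 4% benchmark

print(avg_discount, fixed_discount, avg_discount / fixed_discount)
```

The averaged discount factor comes out many orders of magnitude larger than the constant-rate one, because the paths on which rates happen to drift low dominate the average.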

Farmer and Geanakoplos don't claim that this geometric random walk model is THE correct one; it's only illustrative (though also not obviously unreasonable). But the point is that everything about discounting depends very sensitively on the kinds of assumptions made, not only about the rate of discounting but about the very process it follows through time. As they put it:
What this analysis makes clear, however, is that the long term behavior of valuations depends extremely sensitively on the interest rate model. The fact that the present value of actions that affect the far future can shift from a few percent to infinity when we move from a constant interest rate to a geometric random walk calls seriously into question many well regarded analyses of the economic consequences of global warming. ... no fixed discount rate is really adequate – as our analysis makes abundantly clear, the proper discounting function is not an exponential.
It seems to me this is a finding of potentially staggering importance. I hope it quickly gets the attention it deserves. It's incredible that what are currently considered the best analyses of some of the world's most pressing problems hinge almost entirely on quite arbitrary -- and possibly quite mistaken -- techniques for discounting the future, for valuing tomorrow much less than today.  But it's true. In his essay in Science criticizing the Stern Review, Nordhaus makes the following quite amazing statement, which is nonetheless taken by most economists, I think, as "obviously" sensible:
In fact, if the Stern Review’s methodology is used, more than half of the estimated damages “now and forever” occur after 2800.
Can you imagine that? Most of the damage could accrue after 2800 -- i.e., in that semi-infinite expanse of the future leading forward into eternity, rather than in the 700 years between now and then. Those using standard economics are so used to the idea that the future should receive very little consideration that they find this kind of conclusion crazy. But their logic looks to me seriously full of holes.

Thursday, May 26, 2011

Food Reward: a Dominant Factor in Obesity, Part IV

What is Food Reward?

After reading comments on my recent posts, I realized I need to do a better job of defining the term "food reward".  I'm going to take a moment to do that here.  Reward is a psychology term with a specific definition: "a process that reinforces behavior" (1).  Rewarding food is not the same thing as food that tastes good, although they often occur together. 


Eugene Fama's first paper

University of Chicago financial economist Eugene Fama is famous for a number of things, perhaps foremost for his assertion in the 1960s of the Efficient Markets Hypothesis. A million people (including me) have criticized this ever-so-malleable idea as ultimately not offering a great deal of insight into markets. They're hard to predict, true, but who's surprised? Fama is still defending his hypothesis even after the recent crisis: witness his valiant if not quite convincing efforts in this interview with John Cassidy.

But the EMH isn't the only thing Fama has worked on, and he deserves great credit for a half-century of detailed empirical studies of financial markets. Way back in 1963, in fact, it was Fama who took pains in his very first published paper to bring attention to the work of Benoit Mandelbrot on what we now call "fat tails" in the distribution of financial returns. I may have known this before, but I had forgotten and only relearned it when watching this interview of Fama by Richard Roll on the Journal of Finance web site. The paper was entitled "Mandelbrot and the Stable Paretian Hypothesis." Fama gives a crystal clear description of Mandelbrot's empirical studies on price movements in commodities markets, showing a preponderance of large, abrupt movements -- far more than would be expected from the Gaussian or normal statistics assumed at the time. He explored Mandelbrot's hypothesis that the true empirical distributions might be fit by "Stable Paretian" distributions, which we today call "Stable Levy" distributions, for which statistical measures of fluctuations, such as the variance, may be formally infinite. All of this 48 years ago.

How did Fama know about Mandelbrot so early on, when the rest of the economics profession took so long to take notice (and in many cases still hasn't)? It turns out that Mandelbrot visited Chicago for several months in 1963, and he and Fama spent much time discussing the former's empirical work. As Fama says in the interview, he's always been convinced that a lot of research depends on serendipity. A good example.

Given much better data, we now know (and have for more than a decade) that the Stable Levy distributions aren't in fact adequate for describing the empirical distribution of market returns. If we define the return R(t) over some time interval t as the logarithm of the ratio of prices, s(t)/s(0) -- this makes the return roughly centered about zero -- then the distribution of R has been found in all markets studied to have power-law tails, with P(R) inversely proportional to R raised to a power α = 4, at least approximately. See this early paper, for example, as one of many finding the same pattern. Stable Levy distributions can't cope with this, as they only yield tail exponents α between 1 and 3.
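The tail exponent can be estimated directly from return data. Here's a rough sketch -- my own illustration, not from any of the papers mentioned -- using synthetic Student-t returns (whose density has the same R^-4 tail behavior) and a simple Hill estimator:

```python
import numpy as np

# Synthetic stand-in for log-returns: Student-t noise with 3 degrees of
# freedom, whose density tail falls off as R^-4, the pattern described above.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=200_000)

def hill_tail_exponent(x, k=2000):
    """Hill estimate of the density tail exponent from the k largest |x|.

    The Hill estimator gives the tail index of the survival function;
    the density exponent is that value plus one.
    """
    tail = np.sort(np.abs(x))[-k:]              # k largest absolute returns
    cdf_index = 1.0 / np.mean(np.log(tail / tail[0]))
    return cdf_index + 1.0

print(hill_tail_exponent(returns))              # typically close to 4
```

Run on real return series, the same kind of estimate lands near 4 across many markets, while a Stable Levy fit would cap the density exponent at 3.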

Given that power laws of this sort arise quite naturally in systems driven out of equilibrium (in physics, geology, biology, engineering etc), these observations don't sit comfortably with the equilibrium fixation of theoretical economics -- or with the EMH in particular. But that's another matter. Fama clearly saw the deep importance of the power law deviation from Gaussian regularity, noting that it implies a market with much more unruly fluctuations than one would expect in a Gaussian world. As he put it,
"...such a market is inherently more risky for the speculator or investor than a Gaussian market."

Free data a GFC casualty

The US government has been a proponent of free data for quite a while now, and over the years it established a number of national programs to allow easy access to vast resources of public information. However, the annual budgets for e-government initiatives were slashed by 75% last month, putting in question the survival of programs like data.gov (the repository for publicly available data that was promised as a platform to power software and analysis created by and for the public). Comments from federal CIO Vivek Kundra indicate that data.gov will not be shut down, but "…there will be no enhancements or other development to address needs for improvement". So, although the policy of free data remains unchanged, the significant cost of delivering that policy may be its ultimate "undoing".

Meantime, in Australia, the progress towards opening up government data vaults has taken another step forward. Earlier this week Australia's Information Commissioner, John McMillan, unveiled eight new rules for Federal agencies to adhere to when considering the publication of government data. These rules are:

  • Open access to information – a default position,
  • Engaging the community,
  • Effective information governance,
  • Robust information asset management,
  • Discoverable and useable information,
  • Clear reuse rights,
  • Appropriate charging for access, [So, not entirely free access!]
  • Transparent enquiry and complaints processes

The Principles are not binding on agencies, and operate alongside legal requirements about information management that are spelt out in the FOI Act, Privacy Act 1988, Archives Act 1983 and other legislation and the general law.

Despite the launch of the data.gov.au portal, there is no federal program in Australia to facilitate access to public data on a large scale (i.e., in the US style), and the onus so far is on individual agencies to manage the dissemination of public information in their possession. State and Territory governments are pursuing their own initiatives. This "piecemeal approach", although slower in implementation, may prove to be a more sustainable model for enabling access to public data, considering the vulnerability of large-scale initiatives to the budgetary pressures of the government of the day in these uncertain times.

Wednesday, May 25, 2011

Mortgage Defaulters can be good credit risks.

TransUnion performed a study showing that households whose only default is on their mortgage are pretty good credit risks. Reuters has a story about it, and I have seen the PowerPoint deck, but I can't find a link to it.

Long story short: sometimes people stop making payments not because they are deadbeats, but because the economy kicked their legs out from under them. Such people are good prospective credit risks.

An Unsurpassable Greenspan-ism

Former Chairman of the Federal Reserve Alan Greenspan has been known to say some remarkable things (and some remarkably opaque things), but he really outdid himself in a recent Financial Times editorial. Not surprisingly, he's back at it, recycling his favourite story that markets know best and that any attempt to regulate them can only be counterproductive. But wonder at the paradoxical beauty of the following sentence:
With notably rare exceptions (2008, for example), the global "invisible hand" has created relatively stable exchange rates, interest rates, prices, and wage rates.
The markets work beautifully, and all on their own, but for those rare, notable exceptions. In other words, they work wonderfully except when they fail spectacularly, bring the banking system to the brink of collapse and throw millions of people out of work and into financial misery.

But Greenspan's language of il-logic has at least spawned an amusing reaction. The blog Crooked Timber posted some further examples to illustrate how his delicate construction might be employed much more generally. For example,
"With notably rare exceptions, Russian Roulette is a fun, safe game for all the family to play."
or, from a commenter on the blog,
"With notably rare exceptions, Germany remained largely at peace with its neighbors during the 20th century."
See Crooked Timber (especially the comments) for hundreds of other impressive examples of Greenspan-ian logic.


VSCC SeeRed at Donington Park 2011 Report

VSCC SeeRed at Donington was certainly a feast for the eyes in terms of the variety of cars. I also got some painting done.

GN Spider at Shelsley Walsh c1920s
Oil on Board

Here's the demonstration piece so far. There's a real dynamic to it that works well. Most of the remaining work is on the crowd, as I want to keep this piece loosely rendered.


After almost being blown away twice on Saturday morning, I was kindly allowed to set up in the Paddock Suite, where I managed to run a painting demonstration rather than hanging on to the gazebo.
Lea Francis S Hyper
In original condition as raced by the factory.

Ford Special?

Brooklands Napier Railton

Cooper Mark I

Cooper 500cc Jap Engine



A very smart Frazer Nash with a fantastic-looking rear end!

Client's Bungalow Living Room Plan

I stopped over to check in on a client's in-progress project yesterday & just had to share with you.  My client is young & creative and when we first met had literally just moved into her adorable bungalow:

The bungalow had been lovingly restored to a certain point, but we're tailoring it to perfectly fit her lifestyle.  She had everything out & ready to show me & we assessed what she'd keep and what we'd replace.

{The house is really beautiful.  Love the doors!!}

She was ready to upgrade her Pottery Barn-type pieces for a more sophisticated yet still casual look.

My client entertains often & knows how to throw a good party.  I noticed that she was really drawn to blues & aquas and a classic-vintage look in her inspiration photos.  The rooms she loved all had a touch of old-fashioned charm yet still felt fresh & updated.  The living room isn't large but is just what she needed for a cozy 4-chair furniture arrangement by the fireplace.

The spaces my client loves also feel open & airy yet still have a warmth to them.  Many of her inspiration photos had shutters instead of curtains.  We decided to keep her existing shutters and warm up the cool-feeling space with grasscloth on the walls.  I stopped in yesterday & got a peek:

{Grasscloth by Thibaut}

Our installer, Michael DiGuiseppe, who also did the DC Showhouse room, is soooo good & incredibly fast.  By the time I was finished with our two hour meeting, he'd almost finished the living room. 

{Thanks Michael!}

Last night, my client snapped a shot of some of the furnishings in place & emailed it over:

{Completely unstyled, but you get the idea}

I'm loving the blue against the grasscloth and can't wait until everything else arrives!!  We're waiting on the upholstery and are working on a gallery wall full of personal paintings & prints. 

Will keep you posted!!

xoxo, Lauren

If you'd like help creating a home you absolutely love, contact me about our design services.

Tuesday, May 24, 2011

Healthy Skeptic Podcast

Chris Kresser has just posted our recent interview/discussion on his blog The Healthy Skeptic.  You can listen to it on Chris's blog here.  The discussion mostly centered around body fat and food reward.  I also answered a few reader questions.  Here are some highlights:
  • How does the food reward system work? Why did it evolve?
  • Why do certain flavors we don’t initially like become appealing over time?
  • How does industrially processed food affect the food reward system?
  • What’s the most effective diet used to make rats obese in a research setting? What does this tell us about human diet and weight regulation?
  • Do we know why highly rewarding food increases the set point in some people but not in others?
  • How does the food reward theory explain the effectiveness of popular fat loss diets?
  • Does the food reward theory tell us anything about why traditional cultures are generally lean?
  • What does cooking temperature have to do with health?
  • Reader question: How does one lose fat?
  • Reader question: What do I (Stephan) eat?
  • Reader question: Why do many people gain fat with age, especially postmenopausal women?
The podcast is a sneak preview of some of the things I'll be discussing in the near future.  Enjoy!

Physics Envy?

I just came across this post from late last year by Rick Bookstaber, someone I respect highly and consider well worth listening to. Last year, when I was researching an article on high-frequency trading for Wired UK, insiders in the field strongly recommended Bookstaber's book A Demon of Our Own Design: Markets, Hedge Funds and the Perils of Financial Innovation. It is indeed a great book, as Bookstaber draws on a wealth of practical Wall St. experience in describing markets in realistic terms, without resorting to the caricatures of academic finance theory.

In his post, he makes an argument that seems to contradict everything I'm writing about here. Essentially, he argues that there's already too much "physics envy" in finance, meaning too much desire to make it appear that market functions can be wrapped up in tidy equations. As he puts it,
...physics can generate useful models if there is well-parameterized uncertainty, where we know the distribution of the randomness, it becomes less useful if the uncertainty is fuzzy and ill-defined, what is called Knightian uncertainty.

I think it is useful to go one step further, and ask where this fuzzy, ill-defined uncertainty comes from. It is not all inevitable, it is not just that this is the way the world works. It is also the creation of those in the market, created because that is how those in the market make their money. That is, the markets are difficult to model, whether with the methods of physics or anything else, because those in the market make their money by having it difficult to model, or, more generally, difficult for others to anticipate and do as well.

Bookstaber goes on to argue that it is this relentless innovation and emergence of true novelty in the market which makes physics methods inapplicable:
The markets are not physical systems guided by timeless and universal laws. They are systems based on creating an informational advantage, on gaming, on action and strategic reaction, in a space without well structured rules or defined possibilities. There is feedback to undo whatever is put in place, to neutralize whatever information comes in.

The natural reply of the physicist to this observation is, “Not to worry. I will build a physics-based model that includes feedback. I do that all the time”. The problem is that the feedback in the markets is designed specifically not to fit into a model, to be obscure, stealthy, coming from a direction where no one is looking. That is, the Knightian uncertainty is endogenous. You can’t build in a feedback or reactive model, because you don’t know what to model. And if you do know – by the time you know – the odds are the market has changed.

I think this is an important and perceptive observation, yet it also strongly misrepresents what physicists -- the ones doing good work, at least -- are trying to do in modeling markets. Indeed, I think it's fair to say that much of the work in what I call the physics of finance starts from the key observation that "feedback in the markets is designed specifically not to fit into a model, to be obscure, stealthy, coming from a direction where no one is looking." The best work in no way hopes to wrap up everything in one final tidy equation (as in the cartoon version of physics, although very little real physics works like this), or even one final model solved on a computer, but to begin teasing out -- with a variety of models of different kinds -- the kinds of things that can happen and might be expected to happen in markets dense with interacting, intelligent and adaptive participants who are by nature highly uncertain and trying to go in directions no one has gone before.

A good example is a recent paper (still a pre-print) by Bence Toth and other physicists. It's a fascinating and truly novel effort to tackle in a fundamental way the long-standing mystery of market impact -- the widely observed empirical regularity that a market order of size V causes the price of an asset to rise or fall in proportion (roughly) to the square root of V. The paper doesn't start from the old efficient markets idea that all information is somehow rapidly reflected in market prices, but rather tries to actually model how this process takes place. It begins with the recognition that when an investor has valuable information, they don't just go out and release it to the market in one big trade. To avoid the adverse effects of market impact -- your buying drives the price up so you have to buy at ever higher prices -- those with large orders typically break them up into lots of pieces and try to disguise their intentions and reduce market impact. As a result, the market isn't at all a place where all information rapidly becomes evident. Most of the trading is being driven by people trying to hide their information and keep it private as long as possible.

In particular, as Toth and colleagues argue, the sharp rise of market impact for very small trades (the infinite slope of the square root form at the origin) suggests a view very different from the standard one. Many people take the concave form of the observed impact function, gradually becoming flatter for larger trades, as reflecting some kind of saturation of the impact for large volumes. Perhaps. But the extremely high impact of small trades is perhaps a more interesting phenomenon. The square root form in fact implies that the "susceptibility" of the market -- the marginal price change induced per unit of market order -- heads toward infinity in the limit of zero trade size. A singularity of this kind in physics or engineering generally signals something special -- a point where the linear response of the system (reflecting outcomes in direct proportion to the size of their causes) breaks down. The market lives in a highly unstable state.
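To see the singularity concretely, here's a tiny numerical sketch of the square-root impact law and its diverging marginal impact. The parameter values (a prefactor, a volatility and a daily volume) are invented round numbers for illustration, not fitted quantities from the paper:

```python
import numpy as np

# Illustrative square-root impact law. The prefactor Y, volatility sigma and
# daily volume V_daily are made-up values, chosen only to show the shape.
def impact(V, Y=0.5, sigma=0.02, V_daily=1e6):
    """Relative price change caused by a trade of volume V shares."""
    return Y * sigma * np.sqrt(V / V_daily)

def susceptibility(V, dV=1e-6):
    """Marginal impact dI/dV, estimated numerically."""
    return (impact(V + dV) - impact(V)) / dV

# The marginal impact per share blows up as the trade size shrinks:
for V in (10_000, 100, 1):
    print(f"V = {V:>6}: dI/dV = {susceptibility(V):.2e}")
```

The analytic derivative is Y·sigma / (2·sqrt(V·V_daily)), which grows without bound as V goes to zero -- the breakdown of linear response described above.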

Toth and colleagues go on to show that this form can be understood naturally as arising from a critical shortage of liquidity -- that is, a perpetual scarcity of available small volume trades at the best prices. I won't get into the details here (as I will return to the topic in greater detail soon), but their model depends crucially on the idea that much information remains "latent" or hidden in the market, and only gets revealed over long timescales. It remains hidden precisely because market participants don't want to give it away and incur undue costs associated with market impact. The upshot -- although this needs further study to flesh out details -- is that this perpetual hiding of information, and the extreme market sensitivity it gives rise to for small trades, might well lie behind the surprising frequency with which markets experience relatively large movements up or down, apparently without any cause.

This is just the kind of thing that anyone interested in a deeper picture of market dynamics should find valuable. It's the kind of fundamental insight that, with further development, might even suggest some very non-obvious policy steps for making markets more stable.

Much the same can be said for a variety of physics-inspired studies on complex financial networks, as reviewed recently in Nature by May and Haldane. These models also share a great resonance with ecology and the study of evolutionary biology, which Bookstaber suggests might be more appropriate fields in which to find insights of value to financial theory. These fields do have much that is valuable, but even here it is hard to get away from physics. Indeed, some of the best models in either theoretical ecology or evolutionary biology -- especially for evolution at the genetic level over long timescales -- have also been strongly inspired by thinking in physics. A case in point is a recent theory for the evolution of so-called horizontal gene flow in bacteria and other organisms, developed by famous biologist Carl Woese along with physicist Nigel Goldenfeld.  

This wide reach of physics doesn't show, I think, that physicists are smarter than anyone else -- just that physicists have inherited a very rich modeling culture and set of tools, developed in statistical physics over the past few decades, that are incredibly powerful. If there is physics envy in finance -- and Bookstaber asserts there is -- it's only a problem because the wrong model of physics is being envied. Forget the elegant old equation-based physics of quantum field theory. Nothing like that is going to be of much help in understanding markets. Think instead of the modern, much messier physics of fluid and plasma instabilities in supernovae, or here on Earth in the project to achieve inertial confinement fusion, where simple hot gases continue to find new and surprising ways to foil our best attempts to trap them long enough to produce practical fusion energy.

In his post, Bookstaber (politely) dismisses as nonsense a New York Times article about physics in finance (or so-called 'econophysics'). Along the way, the article notes that...
Macroeconomists construct elegant theories to inform their understanding of crises. Econophysicists view markets as far more messy and complex — so much so that the beauty and logic of economic theory is a poor substitute. Drawing on the tools of the natural sciences, they believe that by sorting through an enormous amount of data, they can work backward to find the underlying dynamics of economic earthquakes and figure out how to prepare for the next one.

Financial crises are difficult to predict, the econophysicists say, because markets are not, as some traditional economists believe, efficient, self-regulating and self-correcting. The periodic upheavals are the result of a cascade of events and feedback loops, much like the tectonic rumblings beneath the Earth’s surface.
As long as one doesn't push metaphors too far, I can't see anything wrong with the above, and much of it makes obvious good sense. No one working in this field thinks there's going to be a "theory of everything for finance". But we might well get a deeper understanding than we have today -- and not be so misled by silly slogans like the old efficient markets idea -- if we accept the presence of myriad instabilities in markets and begin modeling the important feedback loops and evolving systems in considerable detail.

Monday, May 23, 2011

What's Efficient About the Efficient Markets Hypothesis?

The infamous Efficient Markets Hypothesis (EMH) has been the subject of rancorous and unresolved debate for decades. It's often used to assert that markets don't need regulation or oversight because they have a remarkable power to get prices just about right (stocks, bonds and other assets have their correct "fundamental values"), and so never get too much out of balance. Somehow the idea still gets lots of attention even after the recent crisis. Financial Times columnist Tim Harford recently suggested that the EMH gets some things right (markets are "mostly efficient") even if it also supports unjustified faith in market stability. In a talk, economist George Akerlof took on the question of whether the EMH can be seen to have caused the crisis, and concluded that yes, it could have, although there are plenty of other causes as well.

Others have defended the EMH as being unfairly maligned. Jeremy Siegel, for example, argues that the EMH actually doesn't imply anything about prices being right, and insists, despite recent dramatic evidence to the contrary, that "our economy is inherently more stable" than it was before -- precisely because of modern financial engineering and the wondrous ability of markets to aggregate information into prices. Robert Lucas asserted much the same thing in The Economist, as did Alan Greenspan in the Financial Times. Lucas summed up his view (equivalent to the EMH) that the market really does know best:
The main lesson we should take away from the EMH for policy making purposes is the futility of trying to deal with crises and recessions by finding central bankers and regulators who can identify and puncture bubbles. If these people exist, we will not be able to afford them.

That debate over the EMH persists a half century after it was first stated seems to reflect tremendous confusion and disagreement over what the hypothesis actually asserts. As Andrew Lo and Doyne Farmer noted in a paper from a decade ago, it's not actually a well-defined hypothesis that would permit clear and objective testing:

One of the reasons for this state of affairs is the fact that the EMH, by itself, is not a well posed and empirically refutable hypothesis. To make it operational, one must specify additional structure: e.g., investors’ preferences, information structure, etc. But then a test of the EMH becomes a test of several auxiliary hypotheses as well, and a rejection of such a joint hypothesis tells us little about which aspect of the joint hypothesis is inconsistent with the data.

So what does the EMH assert?

In trying to bring some order to the topic, one useful technique is to identify distinct forms of the hypothesis reflecting different shades of meaning frequently in use. This was originally done in 1970 by Eugene Fama, who introduced a "weak" form, a "semi-strong" form and a "strong" form of the hypothesis. Considering these in turn is useful, and helps to expose a rhetorical trick -- a simple bait and switch -- that defenders of the EMH (such as those mentioned above) often use. One version of the EMH makes an interesting claim -- that markets always work very efficiently (and rapidly) in bringing information to bear on prices which therefore take on accurate values. This (as we'll see below) is clearly false. Another version makes the uninteresting and uncontroversial claim that markets are hard to predict. The rhetorical trick is to mix these two in argument and to defend the interesting one by giving evidence for the uninteresting one. In his Economist article, for example, Lucas cites as evidence for information efficiency the fact that markets are hard to predict, when these are very much not the same thing.

Let's look at this in a little more detail. The Weak form of the EMH merely asserts that asset prices fluctuate in a random way, so that there's no information in past prices which can be used to predict future prices. As it is, even this weak form appears to be definitively false if it is taken to apply to all asset prices. In their 1999 book A Non-Random Walk Down Wall Street, Andrew Lo and Craig MacKinlay documented a host of predictable patterns in the movements of stocks and other assets. Many of these patterns disappeared after being discovered -- presumably because some market agents began trading on these strategies -- but their existence, even for a short time, proves that markets have some predictability.

Other studies document the same thing in other ways. The simplest argument for the randomness of market movements is that any patterns that exist should be exploited by market participants to make profits, and the trading they do should act to remove these patterns. Is this true? Take a look at Figure 1 below, taken from a 2008 paper by Doyne Farmer and John Geanakoplos. Farmer and others at a financial firm called The Prediction Company identified numerous market signals they could use to try to predict market movements in the future. The figure shows the correlation between one such trading signal and market prices two weeks in advance, calculated from data over a 23-year period. In 1975, this correlation was as high as 15%, and it was still persisting at a level of roughly 5% as of 2008. This signal -- I don't know what it is, as it is proprietary to The Prediction Company -- has long been giving reliable advance information on market movements.

One might try to argue that this data shows that the pattern is indeed gradually being wiped out, but this is hardly anything like the rapid or "nearly instantaneous" action generally supposed by efficient market enthusiasts. Indeed, there's not much reason to think this pattern will be entirely wiped out for another 50 years.

This persisting memory in price movements can also be analyzed more systematically. Physicist Jean-Philippe Bouchaud and colleagues from the hedge fund Capital Fund management have explored the subtle nature of how new market orders arrive in the market and initiate trades. A market order is a request by an investor to either buy or sell a certain volume of an asset. In the view of the EMH, these orders should arrive in markets at random, driven by the randomness of arriving news. If one piece of news is positive for some stock, influencing someone to place a market buy order, there's no reason to expect that the next piece of news is therefore more likely also to be positive and to trigger another. So there shouldn't be any observed correlation in the times when buy or sell orders enter the market. But there is.

What Bouchaud and colleagues found (originally in 2003, but improved on since then) is that the arrivals of these orders are correlated, and remain so over very long times -- even over months. This means that the sequence of buy or sell market orders isn't at all a random signal, but is highly predictable. As Bouchaud writes in a recent and beautifully written review: "Conditional on observing a buy trade now, one can predict with a rate of success a few percent above 1/2 that the sign of the 10,000th trade from now (corresponding to a few days of trading) will be again positive."
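One standard way such long memory can arise -- large "parent" orders split into many same-sign child trades -- is easy to simulate. The sketch below is my own toy construction, not CFM's data: parent-order sizes are drawn from a heavy-tailed Pareto distribution, and each parent contributes a run of identical signs.

```python
import numpy as np

# Toy model of order-sign long memory: "parent" orders with heavy-tailed
# (Pareto) sizes are executed as runs of same-sign child trades.
rng = np.random.default_rng(1)
run_lengths = rng.pareto(1.5, size=50_000).astype(int) + 1
signs = np.concatenate(
    [np.full(n, rng.choice([-1, 1])) for n in run_lengths]
)

def sign_autocorr(s, lag):
    """Sample autocorrelation of a +/-1 sign series at the given lag."""
    return float(np.mean(s[:-lag] * s[lag:]))

# The correlation decays slowly and stays positive out to very long lags:
for lag in (1, 10, 100, 1000):
    print(lag, round(sign_autocorr(signs, lag), 3))
```

With a heavy tail on the parent-order sizes, the sign autocorrelation decays roughly as a power law rather than exponentially -- exactly the kind of persistence that makes the sign of a trade thousands of steps ahead slightly predictable.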

Hardly the complete unpredictability claimed by EMH enthusiasts. To look at just one more piece of evidence -- from a very long list of possibilities -- we might take an example discussed recently by Gavyn Davies in the Financial Times. He refers to a study by Andrew Haldane of the Bank of England. As Davies writes,
Andy Haldane conducts the following experiment. He estimates the results of an investment strategy in US equities which is based entirely on the past direction of the stockmarket. If the market rises in the period just ended, the strategy buys stocks for the next period, and vice versa. In other words, the strategy simply extrapolates the recent trend in the market. The result? According to Andy, if you had been wise enough to start this procedure with $1 in 1880, you would have consistently shifted in and out of stocks at the right times, and you would now possess over $50,000. Not bad for a strategy which could have been designed in a kindergarten.

Next, Andy tries an alternative strategy based on value. This calculates whether the stockmarket is fundamentally over or undervalued, and buys the market only when value gives a positive signal. The criterion for measuring value is the dividend discount model, first devised by Robert Shiller. If you had been clever enough to devise this measure of value investing in 1880, and had invested $1 at the time, the procedure would have left you with a portfolio now worth the princely sum of 11 cents.

That, according to the weak version of the EMH, shouldn't be possible.
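The mechanics of Haldane's trend-following rule are simple enough to sketch. The snippet below replays the same bookkeeping on a synthetic price series with mild return persistence -- invented data, purely illustrative, not a replication of the 1880-onward result:

```python
import numpy as np

# Synthetic market: AR(1) log-returns with slight momentum (phi > 0).
rng = np.random.default_rng(2)
n, phi, vol = 5_000, 0.1, 0.01
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi * r[t - 1] + vol * rng.standard_normal()

# Trend rule: hold the market next period iff the last return was positive.
position = (r[:-1] > 0).astype(float)       # decided at t, earns r[t+1]
trend_logwealth = float(np.sum(position * r[1:]))
buyhold_logwealth = float(np.sum(r[1:]))

print(round(trend_logwealth, 2), round(buyhold_logwealth, 2))
```

On a series with even slight positive autocorrelation, the kindergarten rule compounds a clear edge over buy-and-hold; under the weak form of the EMH, no such rule should work at all.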

If weakened still further you might salvage some form of the weak hypothesis by saying that "most or many asset prices are difficult to predict," which seems to be true. We might call this the Absurdly Weak form of the EMH, and it seems ridiculous to form such a puffed-up "hypothesis" at all. Does anyone doubt that markets are hard to predict?

But the more serious point with regard to the weak (or absurdly weak) forms of the EMH is that the word "efficient" really has no business being present at all. This word seems to go back to a famous paper by Paul Samuelson, the originator (along with Eugene Fama) of the EMH, who established that prices should fluctuate randomly and be impossible to predict in a market that is "informationally efficient," i.e. in which participants bring all possible information to bear in trying to anticipate the future. If such efficient information processing goes on in the market, then prices will fluctuate randomly. Informational efficiency is what Lucas and others claim the market does, and they take the difficulty of predicting markets as evidence. But it is not, in fact, evidence of anything of the sort.

Think carefully about this. The statement that information efficiency implies random price movements in no way implies the opposite -- that random price movements imply that information is being processed efficiently, although many people seem to want to draw this conclusion. Just suppose (to illustrate the point) that investors in some market make their decisions to buy and sell by flipping coins. Their actions would bring absolutely no information into the market, yet prices would fluctuate randomly and the market would be hard to predict. It would be far better and more honest to call the weak form of the EMH the Random Market Hypothesis or the Market Unpredictability Hypothesis. It is strictly speaking false, as we just noted, although still a useful, crude first approximation. It's about as true as it is to say that water doesn't flow uphill. Yes, mostly, but then, ordinary waves do it at the seaside every day.
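The coin-flip thought experiment takes only a few lines to check. Here it is as a quick simulation (entirely made up, of course -- that's the point):

```python
import numpy as np

# A market of coin-flippers: each trade moves the price up or down by one
# tick, carrying zero information -- yet the moves are uncorrelated and
# the price path is as unpredictable as a random walk.
rng = np.random.default_rng(3)
moves = rng.choice([-1, 1], size=100_000)      # coin-flip buy/sell decisions
price = np.cumsum(moves)                       # resulting random-walk price

lag1 = float(np.mean(moves[:-1] * moves[1:]))  # lag-1 autocorrelation of moves
print(round(lag1, 4))                          # statistically zero
```

The unpredictability here says nothing about information being processed -- there is none -- which is why random price movements can't, by themselves, be evidence of informational efficiency.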

So the weak version of the EMH isn't very useful. Perhaps it has some value in dissuading casual investors from thinking it ought to be easy to beat the market, but it's more metaphor than science.

Next up is the "semi-strong" version of the EMH. This asserts that the prices of stocks or other assets (in the market under consideration) reflect all publicly available information, so these assets have the correct values in view of this information. That is, investors quickly pounce on any new information that becomes public, buy or sell accordingly, and the supply and demand in the market works its wonders so prices take their fundamental values (instantaneously, it is often said, or at least very quickly). This version has one big advantage already over the weak form of the EMH -- it actually makes an assertion about information, and so might plausibly say something about the efficiency with which the market absorbs and processes information. However, there are many vague terms here. What do we mean precisely by "public"? How quickly are the prices supposed to reflect the new information? Minutes? Days? Weeks? This isn't specified.

Notice that a hypothesis formulated this way -- as a positive statement that a market always behaves in a certain way -- cannot possibly ever be proven. Evidence that a market works this way today doesn't mean it will tomorrow or did yesterday. Asserting that the hypothesis is true is asserting the truth of an infinite number of propositions -- efficiency for all stocks, for example, and all information at all times. No finite amount of evidence goes any distance whatsoever toward establishing this infinite set of propositions. The only thing that can be tested is whether it is sometimes -- possibly often or even frequently -- demonstrably false that a market is efficient in this sense.

This observation puts into context an enormous body of studies which purport to give "evidence for" the EMH, going back to Fama's 1970 review. What they all mean is "evidence consistent with" the EMH, but not in any sense "evidence for." In science, you test hypotheses by trying to prove they are wrong, not right, and the most useful hypotheses are those that turn out hardest to find any evidence against. This is very much not the case for the semi-strong EMH.

If markets move quickly to absorb new information, then they should settle down and remain inert in the absence of new information. This seems to be very much not the case. Nearly two decades ago, a classic economic study by Lawrence Summers and others found that of the 50 largest single-day price movements since World War II, most happened on days when there was no significant news, and that news in general seemed to account for only about a third of the overall variance in stock returns. A similar study more recently (2002) found much the same thing: "Many large stock price changes have no events associated with them."

But if we leave aside the most dramatic market events, what about price movements over short times during a single day? Here too the evidence rather strongly contradicts the semi-strong EMH. Bouchaud and his colleagues at Capital Fund Management recently used high-frequency trading data to test the alleged EMH link between news and price movements far more precisely. Their idea was to study possible links between sudden jumps in stock prices and news items appearing in electronic news feeds, which might, for example, announce new information about a company. Without entering into the technical points, they found that most sudden price jumps took place without any conceivably causal news arriving on the feeds. To be sure, arriving news did cause price movements in many cases, but most large movements happened in the absence of such news.

Finally, we can immediately also dismiss -- with the evidence just cited -- the strong version of the EMH, which claims that markets rapidly reflect not only all public information, but all private information as well. In such a market, profitable insider trading would be impossible, because insider information gives no one an advantage. If I'm a government regulator about to issue a drilling permit to Exxon for a wildly lucrative new oil field, even my personal knowledge won't permit me to profit by buying Exxon stock in advance of announcing my decision. The market, in effect, can read my mind and tell the future. This is clearly ridiculous.

So it appears that the two stronger versions of the EMH -- which make real claims about how the markets process information -- are demonstrably (or obviously) false. The weak version is also falsified by masses of data -- there are patterns in the market which can be used to make profits. People are doing it all the time.

The one statement close to the EMH which does have empirical support is that market movements are very difficult to predict because prices do move in a highly erratic, essentially random fashion. Markets sometimes and perhaps even frequently process new information fairly quickly and that information gets reflected in prices. But frequently they do not. And frequently markets move even though there appears to be no new information at all -- as if they simply have rich internal dynamics driven by the expectations, fears and hopes of market participants.

All in all, the EMH then doesn't tell us much. Perhaps Emanuel Derman, a former physicist who has worked on Wall St. as a "quant" for many years, puts it best: you shouldn't take the thing too seriously, he suggests, but only take it to assert that "it's #$&^ing difficult or well-nigh impossible to systematically predict what's going to happen next." But this, of course, has nothing at all to do with "efficiency." Many economists, lured by the desire to prove some kind of efficiency for markets, have gone a lot further, absurdly so, even trying to make a strength of their own ignorance about markets, indeed enshrining that ignorance as if it were a final, infallible theory. Derman again:
The EMH was a kind of jiu-jitsu response on the part of economists to turn weakness into strength. "I can't figure out how things work, so I'll make that a principle." 
In one sense, though, I have to admit that the word "efficient" fits here after all. Maybe the word is meant to apply to "hypothesis" rather than "markets." Measured for its ability to wrap up a universe of market complexity and rich dynamic possibilities in a sentence or two, giving the illusion of complete and final understanding on which no improvement can be made, the efficient markets hypothesis is indeed remarkably efficient.

Bridal Shower & Weekend

My cousin, Jen, is getting married this summer so my mom & I had a shower for her this weekend.  The invitations (above) were from Target and I have to tell you how easy they were to do.  They came as blank cards and I followed the link on the back of the package to an online template that I filled out with our information and printed them out at home.  I truly am not tech-savvy and it was a breeze.  (Except for the fact that I printed them all out with the wrong date the first time- eeek!!  Thank goodness the back of the cards was blue too.) 

{Jen is third from the left and she's standing next to our other cousin, Shelly}

My Aunt Allison made the most amazing bread:

{Seriously???  I want to do this!}

The wedding is in Puerto Rico so my family whipped up some authentic Puerto Rican dishes:

{I can't remember the name but really yummy seafood stew}

...And you poured the stew over these potato balls:

{Again, I forget the name!!}

And seafood salad:

And I not-so-authentically picked up a bunch of sausage-egg breakfast burritos from Anita's:

For flowers, I filled a few different milk glass urns with pinky-peach roses, tulips, Peruvian lilies and fresh white freesia:

{smell so goooood}

Jen's colors for the wedding are coral & aqua. 

{loved this "mr and mrs" paper}

Here's a picture of the ladies in my family:

{My Grandmother is in the center with her 3 daughters & all of the granddaughters}

And I loved this cup tower my cousin made.  I did knock it halfway over but she doesn't know that:

{I put it back, don't worry, I'm not evil.}

And my mom baked a rum pansy cake:

{Soooo good and too pretty to eat, but we did anyway.}

Anyway, the shower was great & we headed out that night for her bachelorette party.  I don't have any pictures of that as they were confiscated.  (haha no, totally kidding, I just didn't bring my camera.)

The DC Design House final party was also Saturday evening before the bachelorette and it was at Skip & Debbie Singleton's home in DC.  I am kicking myself for not getting any photos because their backyard is my dream yard:  Pool set right into the grass with some checkerboard slate in areas around it.  HEAVEN.  I'm so glad to have gotten the chance to be a part of the showhouse and to have met so many great new friends. 

update-  got a pic!! :


And my little sister was confirmed on Sunday!!  I can't believe how old she's getting!! (which means I'm getting even older!  We're 15 years apart.)  I was her sponsor and the Mass was beautiful.  Congrats Morgan if you're reading!!

I'm off to start the day -kids still haven't woken up & how amazing is that?!! ;) 

Hope your weekend was great!!

xoxo, Lauren

If you'd like help creating a home you absolutely love, contact me about our design services.