Wednesday, December 21, 2011

Record Long Term Treasury Returns

I mistakenly assume everyone knows that US Treasury returns have been extreme in 2011.  As we near the end of the year, I thought it would be beneficial to look at the world’s best performer while incorporating some new graphical techniques.  There is also an opinion (NOT INVESTMENT ADVICE) expressed in one of the charts.

From TimelyPortfolio

R code in GIST:

Friday, December 16, 2011

Lattice Explore Bonds

Since my fifth most popular post has been Bond Market as a Casino Game Part 1, I thought I would use Vanguard Total US Bond Market mutual fund (VBMFX) monthly returns to build our skills in the lattice R package and help visualize the unbelievable run of U.S. bonds (Calmar Ratio 1.37 over the past 20 years).
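For reference, the Calmar ratio quoted above is just annualized compound return divided by the absolute value of maximum drawdown.  A minimal base-R sketch (the `calmar` function name and the synthetic returns are my own, standing in for the VBMFX series):

```r
# Calmar ratio: compound annual return / |max drawdown|,
# computed from a vector of monthly returns
calmar <- function(monthly) {
  wealth <- cumprod(1 + monthly)                    # growth of $1
  cagr   <- tail(wealth, 1)^(12 / length(monthly)) - 1
  maxdd  <- min(wealth / cummax(wealth) - 1)        # most negative drawdown
  cagr / abs(maxdd)
}

set.seed(1)
calmar(rnorm(240, mean = 0.005, sd = 0.01))  # 20 years of synthetic monthly returns
```

A ratio above 1, as with VBMFX's 1.37, means the fund compounded faster per year than its single worst peak-to-trough loss.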

Although I have primarily graphed with R base graphics, PerformanceAnalytics charts, and ggplot, the lattice package provides an extremely powerful set of charting tools.

We can start with a basic qqplot of the entire set of monthly returns.

From TimelyPortfolio

Then, we can group by year.

From TimelyPortfolio

Or, we can also very easily split each year into its own panel.

From TimelyPortfolio
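The three qqplot views above follow directly from lattice's formula interface.  This is only a sketch, not the post's Gist code; `rets` is a synthetic stand-in for the VBMFX monthly returns:

```r
library(lattice)

# synthetic stand-in for 20 years of monthly returns
set.seed(1)
rets <- data.frame(ret  = rnorm(240, mean = 0.005, sd = 0.01),
                   year = rep(1992:2011, each = 12))

# entire set of monthly returns
qqmath(~ ret, data = rets)

# grouped by year in a single panel
qqmath(~ ret, data = rets, groups = year,
       auto.key = list(columns = 5, cex = 0.6))

# each year split into its own panel
qqmath(~ ret | factor(year), data = rets)
```

The `groups =` argument colors each year within one panel, while the `| factor(year)` conditioning term is what splits the years into separate panels.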

Here is a little different look with a density plot.

From TimelyPortfolio

Now let’s build boxplots and dotplots.

From TimelyPortfolio

Add an annual dotplot to a boxplot for the entire period.

From TimelyPortfolio

Or we can add a boxplot for each year.

From TimelyPortfolio
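The density, box, and dot views use the same formula interface.  A hedged sketch with the same kind of synthetic data frame (again, not the Gist code):

```r
library(lattice)

set.seed(1)
rets <- data.frame(ret  = rnorm(240, mean = 0.005, sd = 0.01),
                   year = rep(1992:2011, each = 12))

# density plot of all monthly returns
densityplot(~ ret, data = rets)

# horizontal boxplot for each year
bwplot(factor(year) ~ ret, data = rets)

# annual dotplots
dotplot(factor(year) ~ ret, data = rets)
```

Putting the factor on the left-hand side of the formula is what gives one box or dot row per year; custom `panel` functions can then overlay the annual dotplot on a whole-period boxplot as shown in the charts.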

See all my posts on bonds.

R code from GIST:

Thursday, December 15, 2011

With Size, Does Risk–>Return?

A basic tenet in finance is that higher risk should lead to higher return as the time horizon stretches to infinity.  However, in bonds, higher risk has not meant higher return with either credit risk (high-yield) or long duration risk (maturity > 15 years).  Based on some quick analysis of Kenneth French’s dataset on returns by market capitalization, it appears theory might better explain reality here, though not in a linear fashion.  For those more interested in risk and return on small caps, see this fascinating revelation refuting a basic tenet of finance: About That Small Cap Effect: Oops!

From TimelyPortfolio
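The underlying calculation is simply annualized return plotted against annualized standard deviation per size decile.  A hedged base-R sketch using synthetic returns in place of the French ME decile file (the `ME1`…`ME10` names are my own labels):

```r
set.seed(2)
# synthetic monthly returns for 10 size deciles over 20 years
decile_rets <- matrix(rnorm(240 * 10, mean = 0.008, sd = 0.05), ncol = 10,
                      dimnames = list(NULL, paste0("ME", 1:10)))

# annualized compound return and annualized standard deviation per decile
ann_ret <- apply(decile_rets, 2, function(r) prod(1 + r)^(12 / length(r)) - 1)
ann_sd  <- apply(decile_rets, 2, sd) * sqrt(12)

# risk/return scatter labeled by decile
plot(ann_sd, ann_ret, type = "n",
     xlab = "annualized standard deviation", ylab = "annualized return")
text(ann_sd, ann_ret, labels = names(ann_ret), cex = 0.8)
```

With the real French data, a linear risk-to-return relationship would show the deciles falling roughly on a line; the charts suggest they do not.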

R code from GIST:

Friday, December 9, 2011

A Tale of Two Frontiers

In a follow up to Evolving Domestic Frontier, I wanted to explore the efficient frontier including international indexes since 1980.  Life is great when your primary indexes (Barclays Aggregate and S&P 500) lie on the frontier as they did 1980-1999.  The situation becomes much more difficult with a frontier like 2000-now.

From TimelyPortfolio
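A rough way to draw such a frontier without an optimizer is to sample random long-only weight vectors and plot their risk/return cloud; the cloud's upper-left edge traces the efficient frontier.  A sketch with synthetic index returns (the post uses the actual Barclays Aggregate, S&P 500, and international index data):

```r
set.seed(3)
# synthetic monthly returns for 4 indexes over 20 years
R <- matrix(rnorm(240 * 4, mean = 0.006, sd = 0.03), ncol = 4)

# 2000 random long-only portfolios with weights summing to 1
w <- matrix(runif(2000 * 4), ncol = 4)
w <- w / rowSums(w)

port_ret <- as.vector(w %*% colMeans(R)) * 12       # annualized return
port_sd  <- sqrt(rowSums((w %*% cov(R)) * w) * 12)  # annualized risk (w' %*% Sigma %*% w per row)

plot(port_sd, port_ret, pch = ".",
     xlab = "Risk", ylab = "Return", main = "Random-portfolio frontier sketch")
```

Overlaying the individual indexes on this cloud shows whether they lie on or inside the frontier, which is the 1980-1999 versus 2000-now contrast in the charts.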

If we examine return, risk, and Sharpe ratio, though, 1980 to now was really not that bad, with annualized returns on all indexes, stocks and bonds alike, above 8%, even including the last decade of “equity misery”.  As the S&P 500 shifted from the top of the efficient frontier to the bottom, the focus over the last decade has been finding alternatives to the S&P 500. Strangely, the unbelievable riskless return of bonds has allowed some investors to claim bonds as one of these potential alternatives.

From TimelyPortfolio

Bonds as alternatives to equities have made even more sense when we look at correlations.

From TimelyPortfolio

However, the yield to maturity of the Barclays Aggregate index suggests, even guarantees, forward bond returns of only 2.3%, significantly below the 8%+ generously provided for 30 years.  I believe the focus on alternatives should be on alternatives to the guaranteed miserably low returns of bonds (WSJ Bond Buyers Dilemma).  The trick, though, is reducing the risk (drawdown) of these alternatives to a level acceptable to most bond investors.  Max drawdown on bonds since 1982 has been 5%, a virtually impossible constraint for any risky alternative, since even in very attractive secular buy-and-hold periods, drawdown generally runs 20% to 30%.

From TimelyPortfolio

Our options are basically limited to cash, which at 0% return is unacceptable, unless we can discover a method to reduce drawdown on the risky set of alternatives to 20%, or ideally 10%.  How do we transform risky alternatives with historical drawdowns of 40%-70% into acceptable bond alternatives?  I think it requires tactical techniques blended with cash, along with a lot of client education and hand-holding.

R code from GIST:

Wednesday, December 7, 2011

Happy 1st Birthday Timely Portfolio

On December 8, 2010, I sat down to write my first post on Timely Portfolio, Reading->Writing:

“I am determined to play not spectate. After 20 years of voracious reading, I have decided to write, and this blog represents my commitment. More than likely it will be a reflection of me, so a lot about my work/passion money management and markets but also hopefully some worthwhile thoughts and observations. I will be the writer and possibly the only reader ultimately but I know that I will benefit immensely from this project. Any benefit to others will be extremely gratifying and help resolve my debt to all the wonderful authors that have entertained and enlightened me over the years.”

After a couple of months, I was able to more specifically assess the benefit of my blogging in Why Talk My Book?

“…I blog to

1) record my thoughts with a timestamp and no benefit of hindsight

2) improve the thoroughness of my thoughts through the Hawthorne Effect. If I publish something for all the world to see, I am going to make absolute sure that my references are correct.

3) get feedback to improve my thoughts. Please criticize and bash what I write. The markets hurt my feelings all the time and do far more damage than your thoughts ever will. Please share them.

Markets have humbled me far too greatly to believe that I can or will move markets. Only the established trend of the market can attract enough attention to move the market meaningfully in my favor. Markets are not efficient, and I have no private information. I simply need to be a diligent, honest, and humble observer of the market, and I will do well.”

The year in blogging has been incredibly fun, challenging, and educational.  Thanks to all those who have joined me along the way, especially those who helped, commented, criticized, referred, and improved my blog.  I never planned for or even imagined anything close to the stats shown below.

From TimelyPortfolio

I love the interaction, so please let me know what you are thinking.

Tuesday, December 6, 2011

World Since June 2008

For a client meeting, I struggled with how best to illustrate world markets since June 2008.  I used R to produce this (I amended this post to reflect a prettier forked version, as discussed in the comments), but I’m still not completely satisfied. Does anyone have suggestions to improve it?

From TimelyPortfolio

What I thought was interesting was the US equity outperformance and the US 10y yield moving through the 2.15% financial-collapse low seen in December 2008, significantly lower than the 2.87% at the S&P 500 low in March 2009.  I think both can be explained by an illusion of control assigned to the US.  This illusion begins to unravel if the political process fails and monetary policy reaches its limits.

R code from Gist:

Sunday, December 4, 2011

Improved Moving Average?

I have been notified by the authors that the code does not perfectly reflect the improved moving average introduced in their papers.  In another post, I will explore the differences.  The authors have now released their version of the code.

When @quantfblog started following me on Twitter, I was delighted to discover their papers:

Papailias, Fotis and Thomakos, Dimitrios D., An Improved Moving Average Technical Trading Rule (September 11, 2011). Available at SSRN:

Papailias, Fotis and Thomakos, Dimitrios D., An Improved Moving Average Technical Trading Rule II - Can We Obtain Performance Improvements with Short Sales? (November 12, 2011). Available at SSRN:

backed by a nice and improving website.  I just could not resist the opportunity to port their improved moving average idea to R and run some additional tests.  The entire process was extremely pleasant due to the authors’ willingness to test, comment, and suggest throughout the implementation process.  Thanks so much to them for all their help.



From TimelyPortfolio

For fun, I thought it would be interesting to compare the "Improved Moving Average" to a Mebane Faber style 10-month moving average system.

From TimelyPortfolio
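For readers unfamiliar with the comparison benchmark: the Faber-style rule is simply long when price closes above its 10-month simple moving average, otherwise in cash.  A base-R sketch on a synthetic monthly price series (not the Gist code; variable names are my own):

```r
set.seed(4)
# synthetic monthly prices
price <- cumprod(1 + rnorm(300, mean = 0.005, sd = 0.04))

# 10-month trailing simple moving average (NA for the first 9 months)
sma10 <- as.numeric(stats::filter(price, rep(1 / 10, 10), sides = 1))

# signal observed at month-end, applied to the NEXT month's return
signal <- c(0, head(ifelse(price > sma10, 1, 0), -1))
signal[] <- 0

ret     <- c(0, diff(price) / head(price, -1))  # simple monthly returns
sys_ret <- signal * ret                          # system returns (cash earns 0 here)
```

The one-month lag on the signal avoids lookahead; the "improved moving average" papers modify how the average itself is computed, which is where the complexity discussed below comes in.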


While I enjoyed the testing, I am still not entirely sure if the “improved moving average” is significantly improved, but it certainly might fit someone’s utility curve better than the standard moving average.  More than anything, this process has proven to me a couple of things:

1) the beauty of open-source and collaboration.  The authors were incredibly generous and helpful as I worked through this process.  To even better demonstrate the power of open-source, I will use ttrTests to do additional testing and then the bt examples in future posts.

2) how even a simple moving average can become incredibly complex.  Making one very slight change markedly changes the results.

R code from GIST:


Thursday, December 1, 2011

Is Drawdown the Biggest Determinant of System Success?

In all my system development, I still have not been able to determine what universal underlying conditions significantly improve a system’s chances of outperforming buy-and-hold.  I have also found very little discussion of the question, so maybe R, with some help from ttrTests, can help answer when I should just go buy-and-hold (a very pleasant situation for a money manager).  Since starting in this business in 1998, I have often said that I dream of a day when I can just buy and hold, as with Japan stocks 1980-1990, US stocks 1990-2000, and US bonds 1982-now.

Those who follow my blog or know me already understand my obsession with drawdown, but that obsession focuses more on client/manager psychology (Investing for the Long Run) than on drawdown’s effects on tactical systems.  I do not understand the industry’s focus on standard deviation.  I have never had a client call me, or even worse fire me, because my standard deviation increased.  I know the argument is that higher standard deviation leads to higher drawdown, but as I show later in the post, this does not seem to be the case.

Clients call me or fire me because they have lost money, so if I can minimize the frequency, amplitude, and duration of drawdowns, then I can help and guide the client and reduce the worry, which is one main reason they are paying me.  I also think that focusing on minimizing drawdown can meaningfully increase the chances of achieving their long-term return objectives (Drawdown Control Can Also Determine Ending Wealth and Confidence, Ending Equity, and What I Can Do as the Money Manager), which is even more likely the reason clients pay me.

How nice would it be if drawdown also determined an objective system’s success?  To start the testing, I thought I would use the fine work of David St. John on ttrTests (ttrTests 4th and Final Test) to get 100,000 bootstrapped samples from monthly S&P 500 data and examine drawdown, standard deviation, skewness, and compound returns for buy-and-hold versus a Mebane Faber 10-month moving average system.  Since I am so biased, I will let you determine the significance of drawdown in the results.
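To show the flavor of the comparison (ttrTests does this properly at scale), here is a simple hedged sketch that resamples monthly returns with replacement and records max drawdown for buy-and-hold and for a 10-month SMA filter on each sample; all names and the synthetic return series are my own:

```r
# max drawdown of a return series (most negative peak-to-trough)
maxdd <- function(r) { w <- cumprod(1 + r); min(w / cummax(w) - 1) }

# 10-month SMA system: long next month when price > trailing 10-month SMA
sma_system <- function(r) {
  price  <- cumprod(1 + r)
  sma10  <- as.numeric(stats::filter(price, rep(1 / 10, 10), sides = 1))
  signal <- c(0, head(ifelse(price > sma10, 1, 0), -1))
  signal[] <- 0
  signal * r
}

set.seed(5)
base_ret <- rnorm(480, mean = 0.006, sd = 0.045)  # stand-in for monthly S&P 500

# 200 bootstrapped samples (the post uses 100,000)
samples <- replicate(200, sample(base_ret, replace = TRUE))
dd_bh   <- apply(samples, 2, maxdd)
dd_sys  <- apply(samples, 2, function(r) maxdd(sma_system(r)))
summary(dd_sys - dd_bh)
```

Skewness, standard deviation, and compound return can be collected per sample in exactly the same way, which is what the scatterplots below compare against system out(under)performance.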

From TimelyPortfolio

Here is where I get some confidence in my belief that higher standard deviation does not necessarily cause worse drawdowns. However, it is interesting that higher standard deviation has as high a correlation with system out(under)performance as drawdown does (bottom right).

From TimelyPortfolio

R code from GIST:

Inevitability of a Death Spiral

I think Satyajit Das’ guest post The Sovereign Debt Train Wreck, from The Big Picture by Barry Ritholtz, does a very nice job highlighting and clarifying some of my thoughts on the U.S. sovereign debt problem, which is clearly not being priced in with a US 10y Treasury at 2.10% or even 4%.  For those of you unfamiliar with Satyajit Das, his book exposed ex-ante both what caused the financial panic of 2008-2009 and the concurrent disgusting practices prevalent in finance.

For my thoughts on the death spiral and the generosity of Asian central banks, see these posts:

Generosity of Asian Central Banks

Death Spiral of a Country

Death Spiral Warning Graph

Unsustainable Gift

Nine Lives of the Fed Put

Tuesday, November 22, 2011

Risk and Return by Size/Momentum and Industry

In lots of previous posts, I have demonstrated how to use the wonderful and free Kenneth French data in R, but I have not shown a basic risk/return plot by size/momentum and industry.  Hopefully, it will just be another example that somebody somewhere will find useful.

From TimelyPortfolio

R code in GIST:

Magical RUT with GIST

In search of better ways to post my R code, I finally discovered how GIST can help make my R blogging easier.  I know I am way behind, and I apologize to my loyal readers for my shortcomings.  Here is yesterday’s Magical Russell 2000 code using GIST:

Monday, November 21, 2011

Magical Russell 2000

I have marveled at the magical Russell 2000 in Crazy RUT, but I am still surprised at its behavior through this selloff.  With a 20-day move of 30% (6% in one hour) and big outperformance versus the developed and developing world, the Russell 2000 continues its magical display.

From TimelyPortfolio

R code:

#look at distance from the 250 day minimum
#to compare the magical US Russell 2000
#to the world

require(quantmod)

tkrs <- c("^W2DOW","^RUT")
getSymbols(tkrs, from = "2010-01-01")  #start date assumed; enough for 250 days

#merge the closing values
markets <- na.omit(merge(W2DOW[,4],RUT[,4]))

#this is ugly but it works
altitude <- function(x) x/min(x)-1
mins <- as.xts(apply(markets[(NROW(markets)-250):NROW(markets),1:2],
    MARGIN=2, FUN=altitude),[(NROW(markets)-250):NROW(markets)]))
plot.zoo(mins, screens=1, col=c("gray70","black"),
    lwd=2,ylab="% from 250 day minimum",xlab=NA,
    main="Russell 2000 and DJ World ex US
    Distance from 250 Day Minimum")
legend("bottom",c("DJ World ex US","Russell 2000"),lty=1,lwd=2,
    col=c("gray70","black"))

Sunday, November 20, 2011

Cross Pollination from Systematic Investor

After reading the fine article Style Analysis from Systematic Investor and What we can learn from Bill Miller and the Legg Mason Value Trust from Asymmetric Investment Returns, I thought I should combine the two in R with the FactorAnalytics package.  Let’s explore the Legg Mason Value Trust run by Bill Miller to get some insight into the source of his returns over many years by using the Ken French momentum by size data set as factors.

From TimelyPortfolio

R code (click to download from Google Docs):

#use Ken French momentum style indexes for style analysis
#
require(PerformanceAnalytics)
require(FactorAnalytics)  #for
require(quantmod)

my.url=""
my.tempfile <- tempfile(fileext=".zip")
download.file(my.url, my.tempfile, method="auto",
	quiet = FALSE, mode = "wb",cacheOK = TRUE)
#extract the space delimited text file from the zip
my.usefile <- unzip(my.tempfile, exdir=tempdir())[1]
french_momentum <- read.table(file=my.usefile,
	header = TRUE, sep = "", = TRUE,
	skip = 12, nrows=1017)
colnames(french_momentum) <- c(paste("Small",
	colnames(french_momentum)[1:3],sep="."),
	paste("Large",colnames(french_momentum)[1:3],sep="."))

#get dates ready for xts index
datestoformat <- rownames(french_momentum)
datestoformat <- paste(substr(datestoformat,1,4),
	substr(datestoformat,5,7),"01",sep="-")

#get xts for analysis
french_momentum_xts <- as.xts(french_momentum[,1:6],
french_momentum_xts <- french_momentum_xts/100

#get price series from monthly returns
french_price <- cumprod(1+french_momentum_xts)
#check data for reasonability
plot.zoo(french_price,log="y")

#for this example let's use Bill Miller's fund
getSymbols("LMVTX",from="1896-01-01", to=Sys.Date(), adjust=TRUE)
LMVTX <- to.monthly(LMVTX)
index(LMVTX) <- as.Date(format(as.Date(index(LMVTX)),"%Y-%m-01"))
LMVTX.roc <- ROC(LMVTX[,4],type="discrete",n=1)

perfComp <- na.omit(merge(LMVTX.roc,french_momentum_xts))

chart.RollingStyle(perfComp[,1],perfComp[,2:NCOL(perfComp)],width=36,
	main="LMVTX Rolling 36mo French Momentum Weights")
#could use the packaged chart.Style but does not allow the
#flexibility I would like
#	colorset=c("darkseagreen1","darkseagreen3","darkseagreen4","slateblue1","slateblue3","slateblue4"),
#	main="LMVTX French Momentum Weights")

#get weights for the cumulative period
style.weight <- as.matrix([,1],
barplot(style.weight,
	main=paste("LMVTX French Momentum Weights
Since "
	,format(index(LMVTX)[1],"%b %Y"),sep=""))

#look at total R-squared to determine goodness of fit
style.R <-[,1],

styleR <- function(x) {,1],x[,2:NCOL(x)])$R.squared
}
#convert to matrix since I get
#error "The data cannot be converted into a time series."
#when I use xts as data
style.RollingR <- as.xts(rollapply(data=as.matrix(perfComp),
chart.TimeSeries(style.RollingR,ylab="Rolling 12-mo R",
	main=paste("LMVTX Rolling R versus French Momentum
Since "
	,format(index(LMVTX)[1],"%b %Y"),sep=""))
text(x=1,y=style.R,labels="r for entire series",adj=0,col="indianred")

Created by Pretty R at

Friday, November 18, 2011

Let the Lagging Lead

THIS IS NOT INVESTMENT ADVICE AND WILL PROBABLY WIPE OUT ALL YOUR MONEY IF PURSUED.  While exploring utilities, I discovered a strange phenomenon that I have not yet thoroughly understood but attribute to the business cycle.  If I dust off the system proposed in Unrequited lm Love, apply that signal to utilities as my total entry/exit, and then use relative strength to decide between utilities and transports (really all cyclicals work, with chemicals best), I get some magic. This is much longer than my normal simple process, but I think the result might be worth the effort.

From TimelyPortfolio

Although I use the Kenneth French data set (thanks again), the method works very similarly on the Dow Jones series easily obtained through getSymbols with Yahoo! Finance or FRED.

Sloppy R code (Click to Download from Google Docs):

#get very helpful Ken French data
#for this project we will look at Industry Portfolios
#
require(PerformanceAnalytics)
require(quantmod)
require(RColorBrewer)

#my.url will be the location of the zip file with the data
my.url=""
#this will be the temp file set up for the zip file
my.tempfile <- tempfile(fileext=".zip")
download.file(my.url, my.tempfile, method="auto",
	quiet = FALSE, mode = "wb",cacheOK = TRUE)
#my.usefile is the name of the txt file extracted from the zip
my.usefile <- unzip(my.tempfile, exdir=tempdir())[1]
#read space delimited text file extracted from zip
french_industry <- read.table(file=my.usefile,
	header = TRUE, sep = "", = TRUE,
	skip = 11, nrows=1021)

#get dates ready for xts index
datestoformat <- rownames(french_industry)
datestoformat <- paste(substr(datestoformat,1,4),
	substr(datestoformat,5,7),"01",sep="-")

#get xts for analysis
french_industry_xts <- as.xts(french_industry[,1:17],
french_industry_xts <- french_industry_xts/100

#get price series from monthly returns for utilities and transports
Utils <- cumprod(1+french_industry_xts[,14])
Trans <- cumprod(1+french_industry_xts[,13]) #use chemicals #6 for best result
Utilsmean <- runMean(Utils,n=4)

#get relative strength Utils to Transports
UtilsTrans <- Utils/Trans

width = 3
for (i in (width+1):NROW(Utils)) {
	linmod <- lm(Utils[((i-width):i),1]~index(Utils[((i-width):i)]))
	ifelse(i==width+1,signal <- coredata(linmod$residuals[length(linmod$residuals)]),
		signal <- rbind(signal,coredata(linmod$residuals[length(linmod$residuals)])))
	ifelse(i==width+1,signal2 <- coredata(linmod$coefficients[2]),
		signal2 <- rbind(signal2,coredata(linmod$coefficients[2])))
	ifelse(i==width+1,signal3 <- cor(linmod$fitted.values,Utils[((i-width):i),1]),
		signal3 <- rbind(signal3,cor(linmod$fitted.values,Utils[((i-width):i),1])))
}

signal <- as.xts(signal,[(width+1):NROW(Utils)]))
signal2 <- as.xts(signal2,[(width+1):NROW(Utils)]))
signal3 <- as.xts(signal3,[(width+1):NROW(Utils)]))
signal4 <- ifelse(Utils > Utilsmean,1,0)

#lag signals one month to avoid lookahead; column 6 (Utilsmean) assumed
price_ret_signal <- na.omit(merge(Utils, lag(signal,k=1), lag(signal2,k=1),
	lag(signal3,k=1), lag(signal4,k=1), Utilsmean,
	ROC(Utils,type="discrete",n=1)))
price_ret_signal[,2] <- price_ret_signal[,2]/price_ret_signal[,1]
price_ret_signal[,3] <- price_ret_signal[,3]/price_ret_signal[,1]

ret <- ifelse((price_ret_signal[,5] == 1) | (price_ret_signal[,5] == 0 &
	runMean(price_ret_signal[,3],n=12) > 0 & runMean(price_ret_signal[,2],n=3) < 0 ),
	1, 0) * price_ret_signal[,7]
retCompare <- merge(ret, price_ret_signal[,7])
colnames(retCompare) <- c("Linear System", "BuyHoldUtils")
#jpeg(filename="performance summary.jpg",
#	quality=100,width=6.25, height = 8, units="in",res=96)
charts.PerformanceSummary(retCompare,ylog=TRUE,
	colorset=c("black","gray70"),main="Utils System Return Comparison")

#eliminate NA at start of return series
retCompare[] <- 0
price_system <- merge(Utils,ifelse((price_ret_signal[,5] == 1) |
	(price_ret_signal[,5] == 0 &
	runMean(price_ret_signal[,3],n=12) > 0 &
	runMean(price_ret_signal[,2],n=3) < 0 ),
	NA, 1),coredata(Utils)[width+12]*cumprod(retCompare[,1]+1))
price_system[,2] <- price_system[,1]*price_system[,2]
colnames(price_system) <- c("In","Out","System")
chartSeries(price_system$System,theme="white",log=TRUE,up.col="black",
	name="Utils Linear Model System")

#let's try an easy relative strength signal to choose utilities or transports
#know I can do this better in R but here is my ugly code
#to calculate the slope of utils/trans over the trailing window
for (i in 1:(NROW(UtilsTrans)-width)) {
	linmod <- lm(UtilsTrans[i:(i+width),1]~index(UtilsTrans[i:(i+width)]))
	ifelse(i==1,indexRS <- coredata(linmod$coefficients[2]),
		indexRS <- rbind(indexRS,coredata(linmod$coefficients[2])))
}
indexRS<-xts(cbind(indexRS),[(width+1):NROW(UtilsTrans)])

price_ret_signal <- na.omit(merge(price_ret_signal, lag(indexRS,k=1), ROC(Trans,type="discrete",n=1)))
#use same linear system signal for in/out but add RS to choose utilities or transports
retRS <- ifelse((price_ret_signal[,5] == 1) | (price_ret_signal[,5] == 0 &
	runMean(price_ret_signal[,3],n=12) > 0 & runMean(price_ret_signal[,2],n=3) < 0 ),
	1, 0) * ifelse(price_ret_signal[,8]<0,price_ret_signal[,9],price_ret_signal[,7])
retCompareWithRS <- na.omit(merge(retRS,retCompare,ROC(Trans, n=1, type="discrete")))
colnames(retCompareWithRS) <- c("Linear.With.RS",colnames(retCompareWithRS)[2:3],
	"BuyHoldTrans")
#jpeg(filename="performance summary.jpg",
#	quality=100,width=6.25, height = 8, units="in",res=96)
charts.PerformanceSummary(retCompareWithRS,ylog=TRUE,
	main="Utility and Transports System Rotator", colorset=brewer.pal(4,"Paired"))
mtext("Source: Kenneth French data library", #label assumed
	side=1,adj=0,cex=0.75)

corr <- runCor(price_ret_signal[,7],price_ret_signal[,9],n=12)

Created by Pretty R at