Original paper
Abstract
To understand how practitioners operationalize evaluations of earnings quality, we obtain a proprietary dataset of 1,029 reports on aggressive reporting practices over 2003-2015 for 348 unique firms published by a research firm (RF) that sells such data to institutional clients. From these reports, we identify 121 measures of poor earnings quality under four major categories: (i) sales quality; (ii) margin quality; (iii) cash flow quality; and (iv) others. As a first cut to short-list stocks for detailed fundamental analysis, the RF appears to screen for larger, growing firms with lower barriers to arbitrage. The firms flagged by the RF also have higher M-score, F-score, and total and abnormal accruals. The average two-day (251-day) abnormal return after a stock is first flagged by the RF is -1.30 (-18.5) percent, and such return is incremental to the return attributable to mispricing of accruals. Modified Jones and Dechow-Dichev models of abnormal accruals do not appear to capture the RF-identified signals well, suggesting that such models are too coarse to pick up the nuanced fundamental analysis conducted by the RF. In out-of-sample analyses, we find that the RF signals are associated with future restatements, AAERs, and GAAP-related lawsuits after controlling for other earnings quality indicators. We develop an improved earnings quality indicator (RFSCORE) for firms in the retail, durable manufacturing, and business services sectors using the RF’s signals, which are based on granular, context- and industry-specific fundamental analysis. To the Street, our paper suggests that fundamental analysis, beyond just the magnitude of accruals, can predict future stock returns. To academics, our research demonstrates that granular, context-specific analysis of public data can supplement and improve the workhorse models used to identify poor earnings quality.
Keywords: Earnings Quality, Dechow-Dichev, F-Score, Jones Model, M-Score, Wall Street, Restatements, AAERs, Lawsuits, Sales Quality, Margin Quality, Cash Flow Quality, Stock Returns
Trading rules
- Focus: Firms with market cap > $1 Billion.
- Evaluation timeframe: yearly in April (after year-end fundamental data is reported).
- Calculate O-SCORE (0 to 5) for each company based on:
- Highest quintile of sales growth
- Lowest quintile of CFO to total assets
- Net stock issuance greater than industry median (current or prior year)
- Acquisition of another company within the last five years
- Highest quintile of PROBM (probability of manipulation)
- Short stocks with O-SCORE of 5.
- Yearly portfolio rebalancing.
- Calculate PROBM using the Beneish (1999) model with eight variables (a computation sketch follows this list):
- Days Sales in Receivables Index (DSRI)
- Gross Margin Index (GMI)
- Asset Quality Index (AQI)
- Sales Growth Index (SGI)
- Depreciation Index (DEPI)
- Sales General and Administrative Expenses Index (SGAI)
- Leverage Index (LVGI)
- Total Accruals to Total Assets (TATA)
- Update regression coefficients in PROBM equation annually
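The rules above leave the PROBM computation open. As a minimal sketch, assuming the eight Beneish indexes have already been computed and sit in a DataFrame with columns named dsri, gmi, aqi, sgi, depi, sgai, lvgi and tata (these column names are assumptions, not part of the original rules), the original Beneish (1999) probit coefficients give a starting point; the rules call for re-estimating them annually:

```python
import numpy as np
import pandas as pd
from scipy.stats import norm

# Original Beneish (1999) probit coefficients; the trading rules call for
# re-estimating these each year, so treat them only as a starting point.
BENEISH_COEFFS = {
    'const': -4.84,
    'dsri': 0.920, 'gmi': 0.528, 'aqi': 0.404, 'sgi': 0.892,
    'depi': 0.115, 'sgai': -0.172, 'lvgi': -0.327, 'tata': 4.679,
}

def probm(df: pd.DataFrame, coeffs: dict = BENEISH_COEFFS) -> pd.Series:
    """Probability of manipulation: the standard normal CDF of the M-score.

    `df` is assumed to hold one row per firm-year with the eight index
    columns named after the coefficient keys (dsri, gmi, ..., tata).
    """
    m_score = coeffs['const'] + sum(
        coeffs[k] * df[k] for k in coeffs if k != 'const'
    )
    return pd.Series(norm.cdf(m_score), index=df.index, name='probm')
```

The resulting probm column can then be ranked into quintiles for the last criterion of the O-SCORE.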
Python code
Backtrader
```python
import backtrader as bt
import pandas as pd
import numpy as np


class OvervaluedStocks(bt.Strategy):
    """Shorts firms that meet all five overvaluation criteria (O-SCORE of 5).

    Rebalances once a year, on the first trading day of April, after the
    prior fiscal year's fundamental data has been reported.
    """

    def __init__(self):
        self.last_rebalance_year = None

    def next(self):
        today = self.datas[0].datetime.date(0)
        # Rebalance yearly, on the first bar of April
        if today.month == 4 and today.year != self.last_rebalance_year:
            self.last_rebalance_year = today.year
            self.log('Rebalance')
            self.rebalance_portfolio()

    def rebalance_portfolio(self):
        shorts = self.get_overvalued_stocks()
        for data in self.datas:
            if data._name in shorts:
                # Equal-weighted short position in every flagged stock
                self.order_target_percent(data, target=-1.0 / len(shorts))
            else:
                # Flatten positions in stocks that are no longer flagged
                self.order_target_percent(data, target=0.0)

    def get_overvalued_stocks(self):
        # Get fundamental data (indexed by ticker) and score each stock
        fundamental_data = self.get_fundamental_data()
        fundamental_data['o_score'] = self.calculate_o_score(fundamental_data)
        # Short stocks that meet all five criteria (O-SCORE of 5)
        return set(fundamental_data.index[fundamental_data['o_score'] == 5])

    def get_fundamental_data(self):
        # TODO: return a DataFrame indexed by ticker with columns
        # 'sales_growth', 'cfo_to_assets', 'probm', 'net_stock_issuance',
        # 'industry_median_issuance' and 'has_acquisition' for the universe
        # (market cap > $1 billion), as of the latest reported fiscal year.
        raise NotImplementedError

    def calculate_o_score(self, df):
        # One point for each of the five criteria in the trading rules
        df['sales_growth_quintile'] = pd.qcut(df['sales_growth'], 5, labels=False)
        df['cfo_to_assets_quintile'] = pd.qcut(df['cfo_to_assets'], 5, labels=False)
        df['probm_quintile'] = pd.qcut(df['probm'], 5, labels=False)
        df['o_score'] = (
            (df['sales_growth_quintile'] == 4).astype(int) +   # top sales-growth quintile
            (df['cfo_to_assets_quintile'] == 0).astype(int) +  # bottom CFO/assets quintile
            (df['net_stock_issuance'] > df['industry_median_issuance']).astype(int) +
            (df['has_acquisition'] == 1).astype(int) +         # acquisition in last five years
            (df['probm_quintile'] == 4).astype(int)            # top PROBM quintile
        )
        return df['o_score']

    def log(self, txt):
        print(f'{self.datas[0].datetime.date(0)}, {txt}')


if __name__ == '__main__':
    cerebro = bt.Cerebro()
    cerebro.addstrategy(OvervaluedStocks)
    # TODO: add a named data feed (cerebro.adddata(feed, name=ticker)) for
    # every stock in the universe with market cap > $1 billion
    cerebro.broker.setcash(100000)
    cerebro.broker.setcommission(commission=0.001)
    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
    cerebro.run()
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
```
Please note that you still need to implement the methods that supply fundamental data and price data feeds for the investment universe, as well as the PROBM calculation using the Beneish (1999) model. The code above is a template that covers the basic structure of the strategy.
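As one illustration of how those missing pieces could be wired up — the file paths, ticker list, and CSV layout below are placeholders, not part of the original strategy — fundamentals might be loaded from a single CSV keyed by ticker, and the price feeds registered under matching names so that rebalance_portfolio can find positions via data._name:

```python
import backtrader as bt
import pandas as pd

# Hypothetical inputs: adjust paths, tickers, and column layout to your data source.
FUNDAMENTALS_CSV = 'fundamentals_latest_fiscal_year.csv'   # one row per ticker
TICKERS = ['AAA', 'BBB', 'CCC']                             # universe with market cap > $1B


def load_fundamental_data():
    # Expected columns: sales_growth, cfo_to_assets, probm, net_stock_issuance,
    # industry_median_issuance, has_acquisition (see calculate_o_score above)
    return pd.read_csv(FUNDAMENTALS_CSV, index_col='ticker')


def add_price_feeds(cerebro):
    # Feed names must match the tickers used in the fundamentals index,
    # because rebalance_portfolio looks up positions by data._name.
    for ticker in TICKERS:
        feed = bt.feeds.GenericCSVData(
            dataname=f'{ticker}.csv',
            dtformat='%Y-%m-%d',
            datetime=0, open=1, high=2, low=3, close=4, volume=5,
            openinterest=-1,
        )
        cerebro.adddata(feed, name=ticker)
```

The strategy's get_fundamental_data method could then simply call load_fundamental_data(), or the DataFrame could be passed in as a strategy parameter.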