Conservative Books and "Studies" Alleging "Liberal Media Bias"
2.10 PAPERS/BOOKS/STUDIES: Other papers,
books or studies claiming the presence of a "liberal media"
2.10A PAPER: "Is
Newspaper Coverage of Economic Events Politically Biased?" by
John Lott and Kevin Hassett (of the American Enterprise Institute)
2.10B PAPER: "Being the
New York Times: The Political Behaviour of a Newspaper" by
Riccardo Puglisi (2004)
2.10C BOOK CHAPTER: The chapter
titled "The Mass Media and Voter Information" in the
upcoming (as of 4/18/05) book "Analyzing Elections" by
2.10A PAPER: "Is
Newspaper Coverage of Economic Events Politically Biased?" by
John Lott and Kevin Hassett (of the American Enterprise Institute)
A copy of this "paper" is here
(a PowerPoint type summary from the authors is here).
The abstract of this paper says the following (bold
text is eRiposte emphasis):
Accusations of political bias in the media are often
made by members of both political parties, yet there have been few
systematic studies of such bias to date. This paper develops
an econometric technique to test for political bias in news
reports that controls for the underlying character of the news
reported. Our results suggest that American newspapers tend to
give more positive news coverage to the same economic news when
Democrats are in the Presidency than for Republicans. When all types
of news are pooled into a single analysis, our results are highly
significant. However, the results vary greatly depending upon which
economic numbers are being reported. When GDP growth is reported,
Republicans received between 16 and 24 percentage point fewer
positive stories for the same economic numbers than Democrats. For
durable goods for all newspapers, Republicans received between 15
and 25 percentage points fewer positive news stories than Democrats.
For unemployment, the difference was between zero and 21 percentage
points. Retail sales showed no difference. Among the Associated
Press and the top 10 papers, the Washington Post, Chicago Tribune,
Associated Press, and New York Times tend to be the least likely to
report positive news during Republican administrations, while the
Houston Chronicle slightly favors Republicans. Only one newspaper
treated one Republican administration significantly more positively
than the Clinton administration: the Los Angeles Times’ headlines
were most favorable to the Reagan administration, but it still
favored Clinton over either Bush administration. We also find that
the media coverage affects people’s perceptions of the economy.
Contrary to the typical impression that bad news sells, we find that
good economic news generates more news coverage and that it is
usually covered more prominently. We also present some evidence that
media treats parties differently when they control both the
presidency and the congress.
Why have I highlighted specific words? Well, when
you read about the methodology they use, you'll understand (bold
text is my emphasis):
In this paper, we attempt to overcome these problems
by objectively categorizing newspaper headlines as either
positive, negative, neutral or mixed and then comparing those
headlines to the actual economic numbers that generated those news stories.
They study newspaper headlines and not the
actual content of the articles (wow) - and I would bet that
anyone who reads the abstract of the paper could easily miss this
point, namely that their "study" is based on headlines
- not "news reports", "news coverage",
"stories", "news stories", etc. I imagine it must be particularly busy
out there at AEI where both these authors work.
Now, they do acknowledge this silliness (not
in so many words, of course), among other things (bold text is my
emphasis):
We chose headlines because they create the strongest
image of the news in readers’ minds, and because headlines are
easier to objectively classify, though the headlines we examine
may differ systematically from the stories they are associated with.
While newspapers write other news stories on the economy that do
not coincide with the specific release of economic data, one benefit
of limiting ourselves to these announcement dates is that we can
more directly link a specific set of economic data to how the media
covers that data. It is possible that these other news stories are
biased in ways that are different from stories released on
announcement dates, and thus announcement date coverage might not
give the complete picture of any partisan biases. The values for
the different economic variables were those released at the time of
the news reports.
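The classification step they describe might look something like this toy sketch. To be clear, the keyword lists and rules below are entirely my own invention, not the authors' actual coding scheme; the point is only to show what "categorizing headlines by tone" involves:

```python
# A toy headline classifier (hypothetical keywords, NOT Lott & Hassett's
# actual coding scheme). Note that whether "up" is good news depends on
# the indicator: rising GDP is good, rising unemployment is bad.
def classify(headline: str, indicator: str) -> str:
    """Classify a headline as positive/negative/neutral for the economy."""
    up_words = {"rises", "jumps", "surges", "grows"}
    down_words = {"falls", "drops", "slips", "declines"}
    words = set(headline.lower().split())
    direction = ("up" if words & up_words
                 else "down" if words & down_words
                 else None)
    if direction is None:
        return "neutral"
    # For these indicators, an increase is good news
    good_when_up = indicator in {"gdp", "durable goods", "retail sales"}
    return "positive" if (direction == "up") == good_when_up else "negative"

print(classify("Unemployment falls to 3.9 percent", "unemployment"))  # positive
print(classify("GDP growth slips in second quarter", "gdp"))          # negative
```

Even this trivial sketch shows why tone coding is indicator-dependent: the same verb ("falls") flips sign between unemployment and GDP, which is one reason any claim of "objective" classification deserves scrutiny.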
So, let's recap.
- They only look at headlines; they don't look at the actual content of articles.
- They only consider headlines associated with articles that coincide with the release of the economic data, and not any other articles that may be published about the same economic news.
- They acknowledge that "It is possible that these other news stories are biased in ways that are different from stories released on announcement dates, and thus announcement date coverage might not give the complete picture of any partisan biases."
- And predictably, they make firm conclusions from their data anyway.
Let's just say I didn't have it this easy in graduate
school. And they actually get paid big bucks to write this stuff up,
while I have to do this on my own dime.
Now, I do concede that people who only skim
through headlines may potentially be misled by badly written headlines. But
even if we set aside the fact that the headlines, however
"negative" they were, could have actually been accurate or
appropriate, Lott and Hassett's study did not even bother to assess the headlines on
all articles about the same economic news. They certainly didn't look
at the actual content either. And despite the ridiculously limited
nature of their exercise, they arrive at firm conclusions anyway. (As
a side note, as far as I know copy editors are
the ones who usually write headlines, not reporters/journalists
themselves, and there's
quite a movement to teach
them how to do it better, in general.)
At this point, people with a reasonable level of
intelligence should be wondering: "What the heck is this paper
worth?" Reasonable people already know the answer
("nothing"). But "research papers" get circulated regardless of how
poor they are (especially ones that are not peer reviewed), so, unfortunately, someone has to say "more"
about them to sound credible these days.
It turns out that whenever John Lott publishes
something, a useful place to start is usually Tim Lambert's Deltoid
blog. So I checked in, and sure enough, Lambert did cover this paper.
Let me reproduce his post because it tells you "more" about why this
paper is worthless:
Lott finds more bias
Lott has teamed up with Kevin Hassett to study whether economic
reporting is biased. The paper, Is
Newspaper Coverage of Economic Events Politically Biased?,
concludes, surprise, surprise, that the newspapers are biased against Republicans.
The trouble with their study is that the economy was stronger
under Clinton than under either Bush, so of course the reporting of
the economy under Clinton was more positive. Lott and Hassett claim
to have controlled for this with a multivariate analysis but you
should only find this persuasive if you have complete confidence in
the competence and integrity of the authors. When building such
models there are so many choices you can make that it is easy to get
the results that you want to see. In this case it is particularly
easy, since all you have to do is leave out a relevant variable so
that the state of the economy is not fully controlled for and you
will get results like Lott and Hassett report.
We probably should not have complete
confidence in Lott and Hassett. Lott’s previous
attempt to show that the media was biased seems to have involved
cherry picking of models, and careful selection of models
to get a desired result seems to be a constant characteristic of his
work. As for Hassett, I’ll let Lott tell you about him. This is
Lott’s anonymous review of Hassett’s Bubbleology:
Despite Mr. Hassett’s track record with his previous book “Dow
36,000,” I saw him appear on CNBC during the early morning show
and thought that he did well enough that I should buy the book. He
promised that you could use his book to figure out what stocks
were overvalued and which ones weren’t. A pretty important topic
given the current market environment. However, after reading this
short book I have no idea of how to actually rank stocks on the 1
to 6 scale that he uses. He doesn’t actually provide concrete
examples, only that he says that he put together this ranking and
it worked really well. My other problem is that if this approach
works so well how come he didn’t use it when his “Dow
36,000” book came out when the stock market was at its peak.
Some explanation would have been useful for why Hassett, who is
marketing this book as a full proof approach to spotting bubbles,
wasn’t able to use this approach himself over just the last
couple of years to warn people and predict which stocks were going
to crash, a period when he was supposedly writing this book.
Claiming that you use a not clearly stated formula to identify
overvalued stocks after they have already crashed seems like a
scam to me.
“seems like a scam”, indeed.
The New York Times has published what Lott calls a "[…] piece" on their study. The Times gives their
study more credence than it deserves (Atrios is disgusted
that they even did a story on him), but they do eventually mention
Mary Rosh and the 36,000 Dow prediction and finish with an apposite
quote from Brad DeLong:
To even base a story on Lott’s work at this point in time is to
demonstrate a pronounced bias toward right-wing hacks
Update: The AEI is holding an event to promote
the study, and the
official George W Bush blog is also pushing the study. The gang
at Lawyers, Guns and Money get stuck into Lott and Hassett here.
Update 2: Brad DeLong points
out that the NY Times reporter dropped the ball by
letting Lott’s deceitful statement that “the things he had said
in the guise of Ms. Rosh were, indeed, truthful” stand. You can
see them here
and count how many are plainly untrue.
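Lambert's omitted-variable point is worth making concrete. Here is a toy simulation (all numbers are invented for illustration; this is not Lott and Hassett's model or data): if headline tone really depends on the unemployment *trend*, but the regression only controls for the *level*, a spurious "party effect" appears even when the press treats both parties identically.

```python
# Toy demonstration of omitted-variable bias (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 400

party = rng.integers(0, 2, n)          # 1 = Democrat, 0 = Republican
# Suppose unemployment trends down under Democrats and up under
# Republicans, while the *level* is similar under both.
trend = np.where(party == 1, -0.3, 0.3) + rng.normal(0, 0.2, n)
level = rng.normal(5.5, 0.8, n)

# True model: headline tone depends only on level and trend, NOT on party.
tone = -0.2 * level - 2.0 * trend + rng.normal(0, 0.5, n)

def ols(columns, y):
    """Least-squares fit with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones(len(y))] + columns)
    return np.linalg.lstsq(X, y, rcond=None)[0]

with_trend = ols([party, level, trend], tone)   # trend controlled for
no_trend = ols([party, level], tone)            # trend omitted

print(f"party coefficient, trend controlled: {with_trend[1]:+.2f}")  # close to zero
print(f"party coefficient, trend omitted:    {no_trend[1]:+.2f}")    # spurious 'bias'
```

Leaving out one relevant variable is all it takes: the omitted-trend regression attributes the trend's effect to the party indicator, which is exactly the failure mode Lambert describes.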
Let's continue with a thread from the comments
section of Lambert's post above, which by itself shows why this
paper is complete nonsense and proves nothing.
The trouble with their study is that the economy was stronger
under Clinton than under either Bush...
Define "stronger." Today's unemployment rate is
identical to that of 1996.
The unemployment rate decreased under Clinton and increased under Bush.
Tim is right about unemployment; in fact he could have
put it much more forcefully.
Have a look at the numbers.
During Clinton's two terms ('93 to '01), unemployment declined
monotonically, while it rose monotonically under Bush I and
II. (If you go to the BLS, get the PDF file.) Or check the
figures for GDP.
Average annual GDP increase over the four years of Bush I was
1.9%. For the eight years of Clinton, it was 3.6%, and for the
first three years of Bush II, it has been 2.8%.
And as for the deficit -- aaargh, don't get me started.
This case is clear-cut: the US economy did extremely well
during the eight years of the Clinton presidency, so much so
that when someone argues otherwise I have to wonder if that
person is searching for any old stick, even a feeble twig, to
beat Bill Clinton with.
All right then, this picture shows exactly what is wrong with […]
[eRiposte note: This last sarcastic "Dick Cheney" comment is perhaps the best part of this thread]
But what about those hundreds of thousands of uncounted people
who make their living off Ebay?
Lambert posted an update here:
Now, here’s what Lott and Hassett say:
“In the case of unemployment, 44 percent of the headlines under
the Clinton administration were positive while that same number
was only 23 percent under Bush II. By comparison, the average
unemployment rates were fairly similar, 5.2 percent under Clinton's
eight years and 5.5 percent under Bush during the sample. There
is also a great deal of overlap (3.9 to 7.1 percent under Clinton
to 4.2 to 6.4 percent under Bush II).”
What they fail to mention and what is obvious from the graph is that
under Clinton the unemployment rate decreased from
7.1% to 3.9%, while under Bush it increased from
4.2% to 6.4%. Maybe, just maybe, that’s why the headlines were
more positive under Clinton. In fact, there seems to be evidence of
bias against Clinton—why were only 44% of the headlines about
unemployment positive when it just kept going down and down to the
lowest levels in decades? Oh, and don’t expect to see a graph of
the unemployment rate anywhere in their paper or presentation.
Now, they claim to have controlled for level and trends in
unemployment in their analysis, but of course they have not. The
only control they have for trend is the change since the previous
quarter and it is obvious that changes over longer terms will affect
the reporting. Do Lott and Hassett believe that no-one ever compares
the unemployment rate with what it was a year or two before?
Another look at the graph of the unemployment rate will reveal
the futility of the whole exercise. Is it not obvious that
unemployment did something fundamentally different under Clinton
than under either Bush? To get a meaningful comparison you would
need to compare Clinton with a Republican administration where
unemployment declined for eight years.
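The level-versus-trend distinction Lambert highlights is easy to see even from the endpoint figures quoted in his update. A quick calculation (endpoints only; a real comparison would use the full monthly BLS series):

```python
# Endpoint unemployment figures quoted above (illustrative only; the full
# monthly BLS series would be needed for a real analysis).
series = {
    "Clinton": (7.1, 3.9),   # unemployment %, start -> end of tenure
    "Bush II": (4.2, 6.4),
}
for president, (start, end) in series.items():
    print(f"{president}: endpoint average {(start + end) / 2:.2f}%, "
          f"net change {end - start:+.1f} points")
```

The averages are nearly identical, yet one series fell by more than three points while the other rose by more than two. Averaging the level, as Lott and Hassett's comparison implicitly does, erases exactly the feature that drives the headlines.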
Lambert also links to this post at Lawyers,
Guns and Money, that points out a few more obvious points:
Pay careful attention to how the [NYT]
article is constructed. The goal of the piece is to demonstrate that
economic reporters are all lefties who can't report economic matters
straight (same goal as the crappy "research" it purports
to report on), and to that end it contains howlers like this:
For instance, they said, the unemployment rate in the Clinton
administration averaged 5.2 percent, only three-tenths of a
percentage point less than it has under George W. Bush. But while
44 percent of Mr. Clinton's headlines on unemployment were
positive, only 23 percent of President Bush's headlines on the
subject have been upbeat.
Hmm. Why could that be. Must be bias. Or, it could be because the
unemployment rates went down consistently during Clinton's tenure,
which GWB has failed to do. There's a reason only one month with
solid job growth data under Bush (March 2004) is cited--they're
pretty rare--and in the context of Bush's own purported expectations
of job growth, it falls pretty short. (By the way, why don't those
liberal economic reporters harp on how far short of his own job
growth goals team-Bush has fallen?)
Through a Google search I came across this
post at Dead Parrots Society that explored the unemployment
comparison further (not specifically in the context of the
Lott/Hassett paper, but in the context of a similarly nonsensical
"media bias" post by another blogger, using the employment figures):
Glenn Reynolds, I and many
others have been reading this
Tim Blair post about media framing of unemployment figures. The
gist is that CNN described a 5.6% unemployment rate as
"low" in 1996, when Clinton was in office, but describes a
similar rate as a sign of problems for Bush. Tim's post is being
widely cited as yet more proof of media bias; in Glenn's link, he
encourages us to "Go figure." So I did.
[eRiposte note: chart similar to the one
above is shown]
The graphic is courtesy
of the BLS, and shows the unemployment rate charted over the
past 15 years. Perhaps it offers a little insight into why 5.6% was
considered "low" in early 1996, but not in 2001. Actually,
the context was right there in the excerpts Blair chose from the
1996 CNN story:
Economists didn't expect June's unemployment rate to be much
different from May's, which was an already-low 5.6 percent. But in
fact, it did fall -- to 5.3 percent. The unemployment rate
hasn't been that low since June 1990.
And from the 2001 CNN story:
The U.S. unemployment rate jumped to 5.7 percent in November - the
highest in six years - as employers cut hundreds of thousands
more jobs in response to the first recession in a decade in the
world's largest economy.
Tim Blair was right: It is all relative. But not to who's
in office, rather to where the unemployment rate has been in the
past few years. So is it appropriate for journalists to speak in
such relativistic terms? Are they at least consistent? What would be
really helpful here is a Lexis-Nexis search of media reporting on
the early part of Clinton's tenure, when unemployment was rising. Or
on the situation in 1984, when unemployment was at 8%, down from
10%. But I don't have Lexis-Nexis access.
Still, there is some context that might be helpful. The first
place we can look is right there in Tim Blair's 1996 story, to see
how the Clinton administration and economic analysts felt about the figures:
White House: But the Clinton administration was
tickled about the increase in jobs, and took credit for the
upturn. The president said the figures showed "the most solid
American economy in a generation."
Analysts: In January, analysts were concerned that growth
was so anemic that the nation was in danger of a recession. But
five straight months of strong job gains now have analysts worried
more about inflation. ... The Federal Reserve is almost guaranteed
to push interest rates up to stave off inflation.
The second place we can look for context is in Tim Blair's 2001
story, to see how the Bush administration and economic analysts felt
about that very similar unemployment figure:
White House: President Bush and his Labor Secretary,
Elaine Chao, separately expressed alarm at the data and called for
Congress to approve a package of economic stimulus. "Today's
numbers are not good news, and I think it's a clear reflection
that the attacks of Sept. 11 are still reverberating around our
economy," Chao told CNNfn's Market Call program.
Analysts: To keep consumers spending despite mounting
unemployment, the Federal Reserve has cut its target for
short-term interest rates 10 times this year and is expected to do
so again after its policy makers meet Tuesday. "Despite some
better-than-expected data over the past two weeks, this report is
sufficiently gloomy to force the Fed to ease next Tuesday and
retain their bias toward further economic weakness," said
Steven Wood, economist with FinancialOxygen.
Really, my point here doesn't have anything to do with whether a
5.6% unemployment rate is too hot, too cold or just right. Frankly,
I don't have any idea. What I do know is that journalists
weren't the only ones who looked at the unemployment figures in a
different light between 1996 and 2001. The reality is, the media saw
the data the same way as the White House, economic analysts and the
Fed. (The question Tim didn't ask is why Bush considered it
bad news when the unemployment rate hit 5.7% in 2001.) Perhaps
everyone was too excited in 1996 and too glum in 2001, but the key
word here is "everyone." This is not media bias, folks,
and casting it as such is misleading at best.
Update II: Thanks to Tim Blair for linking
to this post.
Let me note that the operative word is
not [what] "everyone" [felt]. The operative question is
whether the information portrayed in the article is
"accurate". The whole world may think something is just
dandy or something is just terrible, but that's different from
saying that what the whole world thinks is "accurate".
One of the commenters to the post above
also notes the following:
Barry Ritholtz writes ...
This is an interesting discussion -- but you kids need to learn
to drill below the headline numbers to gather the most relevant data.
The headline is always "How many jobs per month is the economy creating?"
But that's a simplistic question which fails to address the
underlying details of the economy's strengths and weaknesses.
Forget framing and the media -- let's get to the actual DATA:
How do the jobs being created compare to the jobs lost,
outsourced, or made obsolete in terms of wages? Remember, 2/3rds
of the US economy is based upon consumer spending, so wage
growth/contraction has a major impact on the next part of the equation.
How do the new jobs compare benefit-wise with the lost
positions? Again, like oil price increases, if employees are
paying for health costs entirely, they have that much less
discretionary income to spend on retail goods, autos, durables, etc.
What is the spread between wage growth and CPI? At present,
according to Bear Stearns, CPI has risen 3% faster than Unit Labor
Costs -- that makes those people with jobs "feel" like
they are losing ground. Incidentally, Bear tracks this because
this spread can be a significant danger to incumbents in
re-election campaigns (see Carter, Bush I).
Next up: How many unemployed people have stopped looking for
work? What is the actual percentage of unemployed -- as opposed to
only the # who are still eligible for Unemployment benefits?
(Greenspan called it the "Augmented unemployment rate").
It's almost always much higher in Europe (12%+) than it is here.
Here's an odd data point: What is the average age of the labor
pool? Are people working longer -- are the elderly coming back to
work, as they did after the market crash?
Lastly, how many people have given up looking for work --
dropped out of the labor pool?
One of the great failings of economics is the way it allows
school boys to grossly oversimplify the complex, the intricate, the
unseen, and believe they have "figured it out."
It's never that simple . . .
The New York Times article also had some comments from others:
Alan S. Blinder, a former vice chairman of the Federal Reserve
who also served as an economic adviser to Mr. Clinton, said that, if
anything, current economic coverage favored Mr. Bush by letting the
administration get away with blaming 9/11 for the economy's poor performance.
Jack Shafer, the media critic of Slate, the Web journal, was
skeptical of the study, saying that it was based solely on
headlines, not on an appraisal of actual news articles. "A
headline is not coverage," he said.
While the researchers of the American Enterprise Institute claim
to expose the political bias of the reporting, Mr. Carroll said, it
was unlikely that they succeeded in stripping out other factors. He
said the reporting of economic statistics depended on broad
perceptions of the state of the economy, which are influenced by
many variables. The fact that the economy did better under Mr.
Clinton than either of the Bushes probably affected the coverage
more than the researchers allowed for.
Moreover, Mr. Carroll pointed out that the results had large
statistical margins of error. "I'm not persuaded that the
results have any statistical significance," he said.
AND what of the researchers' own objectivity? Critics question
both their scholarship and their motivations in releasing this
research in the middle of a presidential campaign in which the
economy is no small issue. Mr. Hassett was an adviser to Senator
John McCain, Republican of Arizona, during his bid for the
presidency in 2000, and a co-author of "Dow 36,000," a
wildly bullish analysis of the stock market's prospects.
Mr. Lott's research supporting gun ownership as a crime deterrent
has also come under criticism. He acknowledged that he assumed a
pseudonym - Mary Rosh - to write his own praise and defend his
positions in online debate on that subject from 2000 through January 2003.
Enough said. Let's move on from this junk paper.
2.10B PAPER: "Being the
New York Times: The Political Behaviour of a Newspaper" by
Riccardo Puglisi (2004)
UPDATE: It has been brought to
my attention that the version of the paper I
had originally linked to and analyzed is not the final version of
Puglisi's paper. The latest version is available for download here.
I apologize for this inadvertent error. Given this, I
have made appropriate (minor) modifications in this section to reflect the
content and pagination in the final version of the paper. Having said that, neither Puglisi's
conclusions nor my critiques of his assumptions, data or conclusions have
changed with the latest version of his paper. (I have
preserved a copy of the older version of my critique here.) Thus,
the substance of my critique remains unchanged.
[NOTE: I have also edited the tone of my
critique to keep the focus on the subject matter of the paper, because
it is possible that some people may otherwise be distracted by the tone.]
I decided to comment on this paper because I saw it being mentioned on
the blog (sans analysis) of
conservative economists. A cursory analysis of this paper shows
six major problems (I, II,
III, IV, V,
VI) that, unfortunately, make its conclusions untenable. Like
other similar papers, the author believes his "results" are
credible even though no serious attempt is made to confirm that the
paper's fundamental assumptions (about the premise and model) are actually
correct. While I certainly have no complaints against someone doing a
lot of hard work to study the issue of media bias (which the author has
clearly done), it is disappointing that he seems to have forgotten a
basic fact about modeling (econometric modeling or otherwise): just
because one has a model and churns out some numbers it doesn't mean
that the model is correct or that the results are meaningful.
Let's start by reviewing a couple of extracts from the paper to get some context:
I analyze a dataset of news from the New York Times,
from 1946 to 1994. Controlling for the incumbent President's
activity across issues, I find that during the presidential campaign
the New York Times gives more emphasis to topics that are owned by
the Democratic party (civil rights, health care, labor and social
welfare), when the incumbent President is a Republican. This is
consistent with the hypothesis that the New York Times has a
Democratic partisanship, with some "watchdog" aspects, in
that it gives more emphasis to issues over which the (Republican)
incumbent is weak. Moreover, out of the presidential campaign, there
are more stories about Democratic topics when the incumbent
President is a Democrat.
A partial extract on the methodology used (bold text is my emphasis):
The empirical approach adopted by Lott and
Hassett is close to the one I follow here, because of this
common focus on the time series behaviour of news providers.
However, their analysis is chiefly based on the correlation
between the political affiliation of the incumbent President and the
average tone adopted by newspapers in the coverage of economic
news [eRiposte emphasis; note how a paper that looked at "headlines" is
being quoted as one that addressed news coverage]. On
the other hand, presidential elections and campaigns play a very
minor role in their econometric analysis, while they represent the
central aspect of my identification strategy.
Let's now examine why neither the assumptions nor
the conclusions of this paper are correct or meaningful.
MAJOR PROBLEM I: The first fatal
problem with this paper is that one of its basic assumptions (#2
below) is wrong
[The other assumptions are questionable too, but
let's not worry about that right now.]
Here are the assumptions stated by Puglisi:
As briefly anticipated in the introduction, the
empirical analysis performed here and the interpretation of its
findings are based on the following set of identifying assumptions:
(1) The issue ownership hypothesis holds.
(2) “All publicity is good publicity”.
(3) The relative share of Executive Orders about a
subset of issues proxies the relative intensity of the activity of
the incumbent President with respect to those issues.
The issue ownership hypothesis, which Puglisi bases on
historical polling data and mentions throughout, is the following:
Democratic topics comprise Civil Rights, Health
Care, Labor & Employment and Social Welfare. Republican topics
comprise Defense and Law & Crime.
Now, it may be convenient to assign such
ownership because it helps make the analysis more interesting, but
really, someone "owning" the issue often has little to do with whether
the publicity/coverage that person gets on that issue is good or bad
(even if one can be sure that the "issue ownership" actually holds).
Thus, the second assumption, that "All publicity is good
publicity" (referring to "owned issue" coverage for the
person who owns it) simply makes no sense.
For example, was "Health Care" coverage always "good
publicity" for Bill Clinton (Democrat)? Was "Defense"
and "Law and Crime" coverage always "good
publicity" for Richard Nixon (Republican) and Ronald Reagan
(Republican)? Was "Employment" and "Social
Security" coverage necessarily always
"bad" publicity for the Reagan administration?
In other words, the assumption that if a newspaper
reports on topics "owned" by a party, it automatically means
that party benefits, makes no sense, because such an assumption fails
to account for the fact that newspapers can and do publish reports on
"owned" topics that may not be positive at all for the party that "owns" them.
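The gap can be seen in a toy tally (the headlines and tone tags below are entirely hypothetical, invented only to illustrate the point): a topic-count metric of the kind Puglisi uses registers "Democratic" coverage regardless of whether that coverage helps or hurts Democrats.

```python
# Hypothetical stories tagged with (topic, tone). A topic-only count,
# as in the "all publicity is good publicity" assumption, cannot tell
# favorable coverage from unfavorable coverage.
stories = [
    ("health care", "negative"),   # e.g. critical coverage of a Democratic plan
    ("health care", "negative"),
    ("health care", "positive"),
    ("defense", "positive"),
]
dem_topics = {"civil rights", "health care", "labor", "social welfare"}

dem_topic_count = sum(t in dem_topics for t, _ in stories)
dem_topic_positive = sum(t in dem_topics and tone == "positive"
                         for t, tone in stories)

print(f"stories on Democratic topics: {dem_topic_count}/{len(stories)}")
print(f"...of which positive: {dem_topic_positive}")
```

By topic count alone, this toy newspaper looks heavily tilted toward "Democratic" subjects, even though most of that coverage is negative. Counting topics without reading tone (or content) pre-ordains the conclusion.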
(P.S. Consider another example. Clinton's term actually saw
a drop in crime rates. Granted, a big part of it was outside this
particular study's time period, but, if anything, stories on crime
rates would have tended to benefit the Democrats during that time, not the Republicans.)
MAJOR PROBLEM II: The second, decidedly fatal
problem with this paper is that the author's simplistic interpretation of
his results is wrong
Let's consider these
"definitions" from Puglisi:
Definition 1 A newspaper has a Democratic
(Republican) partisanship if during the presidential campaign it
devotes more space to issues owned by the Democratic (Republican)
party, at the expense of neutral or Republican (Democratic) topics.
In fact, over and above the electoral partisanship
of the newspaper, as described by definition 1, the political color
of the incumbent President could be given an interpretation within a
lapdog/watchdog dichotomy. The idea is the following: if it turns
out that -during the presidential campaign- the New York Times gives
less emphasis to Democratic topics and/or more emphasis to
Republican topics when the incumbent is a Democrat, over and above
his Democratic or Republican partisanship, this is consistent with
the fact that the newspaper acts as an electoral watchdog with
respect to the incumbent President.
Definition 2 A newspaper is an electoral
lapdog of the incumbent President if, ceteris paribus, during the
presidential campaign it devotes more space to the issues over which
the incumbent is strong, and/or less to issues over which the
incumbent is weak.
Definition 3 A newspaper acts as an
electoral watchdog if, ceteris paribus, during the presidential
campaign it dedicates more space to the issues over which the
incumbent is weak, and/or less space to the issues over which the
incumbent is strong.
These definitions are incorrect -
not only are they inconsistent with each other, the latter definitions are
incorrect in themselves. For example, I can just as well argue based on
Puglisi's Definition 1
that the newspaper is no "watchdog" but just a shill for the
candidate opposing the incumbent and is therefore displaying
"partisanship" in favor of the challenger. In fact, let's
ignore Definition 1 completely and consider Definition 3
on its own. It is Puglisi's *opinion* that the newspaper serves
as a "watchdog" by focusing on the topics that supposedly
favor the challenger. One can easily have a different *opinion*
that a newspaper doing this is a partisan supporter of the challenger
and not a "watchdog". (Thus, setting up the definitions
the way Puglisi does has the (unintentional and) unfortunate
consequence of pre-ordaining the results.)
This is the natural (and fully expected)
problem with studies of this nature which don't actually
analyze the content of the news articles. Thus, Puglisi's
assumptions and definitions are incorrect because at a very fundamental level, they
neglect the actual nature of the coverage (accurate or
inaccurate). The "study" therefore fails completely to
even set up the basic
question and data interpretation correctly. Now, this is not a
statement that is intended to criticize Puglisi per se. This is
a standard problem with many "econometric" media bias "studies"
(especially those that, conveniently, keep finding "liberal bias").
Whether they look at "words" used
in news reports (e.g., "right-wing" v.
"left-wing") or "topics" covered or
"tone" ("positive" v. "negative") or
even "headlines" (to some extent), etc., none of them really focuses on the facts (or fictions)
covered in the actual news reports and how that impacted the person
or party being covered.
So, combining Problem I and Problem II, this study
and the interpretation of its results totally break down even before
we get to the actual data. Needless
to say, this study's findings are untenable, as a result.
[AN ASIDE: To see how inapplicable the
"partisanship" assumption can be, one only needs to look at
George Bush Jr. and social security. Although Bush II was not part of the study,
let's remember that there has been significant coverage of Social
Security in the New York Times in the past three months because the
Republican President is pushing it, NOT the Democrats. If we took
Puglisi's methodology seriously, this might imply "Democratic
partisanship" by the Times.]
Sadly, the problems with this paper don't stop there.
PROBLEM III: What about topics that are not assigned an "owner"?
By Puglisi's own admission (Tables 2 and 3), when we look at "All
stories" that appeared in the New York Times in the period
1946-1994, the so-called Republican topics and so-called Democratic
topics were only 21.7% (8.37% + 13.36%) of the total. Thus,
this study claims to show "Democratic partisanship" (or
otherwise) based on
a study that essentially ignores over 78% of all stories published in
the New York Times. Stunning.
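The arithmetic behind that "over 78%" figure is worth making explicit; a minimal sketch using the two topic shares quoted above:

```python
# Topic shares quoted above from Puglisi's Tables 2 and 3
# (percent of all NYT stories, 1946-1994).
REP_SHARE = 13.36   # so-called Republican topics
DEM_SHARE = 8.37    # so-called Democratic topics

owned = REP_SHARE + DEM_SHARE    # stories with a party "owner": ~21.7%
ignored = 100.0 - owned          # stories the model says nothing about: ~78.3%
print(f"owned: {owned:.1f}%, ignored: {ignored:.1f}%")
```

In other words, the model's claims about partisanship rest on roughly a fifth of the paper's output.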
For example, "Banking, Finance and Dom. Commerce"
(14.66% of all stories) and "International Affairs" (13.22%
of all stories) are not part of Puglisi's model because they are not
"owned" by Republicans or Democrats. What category would
"taxes" or "spending" or "budget
deficits" fall under? These are among the most important topics in
all Presidential campaigns - topics that often make or break campaigns - and
there's no mention of them in the analysis. Also, what category would
draft-avoidance or alleged extra-marital affairs fall in? Other? Or is
it "Law and Crime?" There's a whole slew of topics relating
to the individuals or their policies that fall into the supposed
"non-owned" issue category and that have a habit of coming up
frequently during campaigns. It may be acceptable to ignore all that
for the purpose of creating certain limited hypotheses, but in the
absence of any serious consideration of some of these other topics, it
is not advisable to reach sweeping conclusions of the kind the author has.
PROBLEM IV: Topics may be entirely event-driven and have
nothing to do with Executive Orders
The paper does not seriously consider the fact that major events happen that
have nothing to do with the "strength" of Democrats or
Republicans. For example, George Bush Sr. started
significant cuts to defense spending at the end of the Cold War
and Bill Clinton continued this effort. When there are no major wars and when there is no
concern about national defense, there is no reason for
papers to simply keep writing more articles about "defense"
just because a Democrat is in power.
This same argument applies to every topic under the sun. It
is also obvious that many topics are raised, especially in electoral
campaigns, by the politicians who are campaigning. The New York
Times, or any other paper, may simply be reporting the issues
raised by the candidates themselves. Candidates who run for President
have to talk about most topics (not just "Republican topics"
or "Democratic topics"). It would have made no sense for
George Bush Sr. to completely avoid talking about jobs, since jobs was
one of the most important issues of his re-election campaign. Would it
be "Democratic partisanship" for the NYT to have written reports
about jobs then, even though it was one of the most important concerns
of voters at the time? Not to mention, one of Puglisi's
"findings" is that the coverage of "Republican
topics" actually goes up significantly in the campaign coverage
when the challenger is a Republican. As he says (bold text is my
emphasis):
When considering the 1961-1994 subperiod, this
effect of fewer Republican stories under a Democratic incumbent is
larger in magnitude and more precisely estimated.
Moreover, considering the same subperiod, under a
Democratic incumbent there are more stories about Republican topics
when the presidential campaign kicks in. This effect is quite strong
in magnitude and very precisely estimated when excluding the 1964
presidential campaign, namely the only year during which the defense
issue (with reference to the handling of the Vietnam War by Lyndon
Johnson) was clearly owned by the Democratic party.
This takes us right back to Problem II. Either the NYT
has "Democratic partisanship" or it doesn't. It makes no
sense to claim that it has "Democratic partisanship" and
simultaneously say that "...under a Democratic incumbent there
are more stories about Republican topics when the presidential
campaign kicks in. This effect is quite strong in magnitude...".
Why is the latter considered a "watchdog" behavior rather
than "Republican partisanship"? After all, if part of the
"results" point one way, it is sufficient for Puglisi to
label it "partisanship" of one kind; yet, when another part
of the "results" points in another direction, it is not
partisanship in the other direction - it is "watchdog"ism.
To make things even more confusing, let me also note
this discussion, which appears earlier in the paper:
Suppose that, after controlling for the presidential
activity across issues, outside of the presidential campaign there
are systematically more stories about Republican topics when the
incumbent President is a Republican. There are two plausible
explanations for this kind of behaviour.
First of all, this bias could be due to the fact
that the newspaper is acting as a pressure group with respect
to these Republican issues, and is taking into account the fact that
a Republican incumbent could be more responsive to pressures that
are related to owned issues. In other words, the idea is that the
newspaper, over and above the incumbent's behaviour, is devoting
more room to stories about Republican issues when the incumbent
President is a Republican, exactly because this incumbent could be
more responsive to implicit and explicit appeals that deal with his
"natural" issues, i.e. with the issues that are owned by
his political party.
Alternatively, this bias introduced by the newspaper
could be demand-led. Suppose that the issue ownership hypothesis
holds, and that citizens have elected a Republican President, just
because they think that the most relevant problems would arise in
the policy fields that are owned by the Republican Party. Therefore,
within a political agency framework, they want to obtain pieces of
information about what the Republican incumbent is delivering with
respect to those problems. The newspaper responds to this demand for
specific information by publishing more stories concerning...
Here, Puglisi appears to be acknowledging that there are other possible
interpretations of specific topic coverage than partisanship.
PROBLEM V: Stories on "Republican topics" vs. "Democratic topics"
As much as I was loath to spend more time on this
paper given that the earlier problems already showed that its results
were meaningless, I thought I should spend a little bit of time
scanning through at least a sample of the data. What caught my eye before I started reviewing
Puglisi's detailed analysis was this sentence:
Coming back to the comparison between Democratic and
Republican issues, the last two rows of Table 3 indeed show that the
relative advantage of stories about Republican topics over
Democratic ones, being less than 5 percent on internal pages, jumps
to more than 11 percent on the front page. [eRiposte emphasis]
Here he is talking about the "All stories" coverage in the
New York Times. What is interesting here is Puglisi's definition
of "5 percent" and "11 percent". Let's look at
the numbers and you'll see what I mean.
I have reproduced, in the first three rows of the table below, a part of the
data from Table 3 of the paper. In the bottom two rows I have
populated some results extracted from the data in the two
preceding rows, with the items in red being the
numbers cited by Puglisi above:
[Table: [% of] stories not on the front page vs. front-page stories, for REP and DEM topics, with the REP/DEM ratios in the bottom rows]
Let me draw your attention to the bottom-most row.
When Puglisi claims:
...relative advantage of stories about Republican
topics over Democratic ones, being less than 5 percent on internal
pages...
what he means is that the difference in absolute
percentages (relative to a scale of 100) is less than 5%. This DOES NOT mean that the so-called REP
topics are found only <5% more often than the so-called Democratic topics,
relative to one another. As you can see in the bottom-most row, the ratio of
REP topics covered to DEM topics covered (in internal pages) is 1.55, i.e.,
there were 55% more REP
topics than DEM topics in the inside pages of the Times, per Puglisi's
own data. For front-page coverage, the ratio is 2.2, which means there were
120% more REP topics than DEM topics on the New York Times' front
page. Let me reiterate the point of the comparisons above. Puglisi's wording is
misleading, and gives the impression that the Times barely covers the
so-called REP topics over the so-called DEM topics, when the reality
is drastically more in favor of REP topics than DEM topics.
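The distinction between a percentage-point gap and a relative ratio is easy to blur, so here is a small sketch; the shares below are hypothetical, chosen only to mimic the pattern described above, not Puglisi's actual table values:

```python
def compare_shares(dem_pct: float, rep_pct: float) -> tuple[float, float]:
    """Return (gap in percentage points, REP/DEM ratio) for two topic shares."""
    return rep_pct - dem_pct, rep_pct / dem_pct

# Hypothetical internal-page shares: a gap under 5 percentage points...
gap, ratio = compare_shares(dem_pct=8.6, rep_pct=13.3)
# ...still means roughly 55% more REP stories than DEM stories in relative terms.
print(f"gap = {gap:.1f} points, but {100 * (ratio - 1):.0f}% more REP coverage")
```

A "less than 5 percent" gap on a 100-point scale can therefore coexist with REP topics outnumbering DEM topics by more than half.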
Clearly, that doesn't sound like a "Democratic partisan"
paper by Puglisi's own definition, which is probably why
he adds a "control" using Executive Orders (as a proxy for
Presidential activity), making the
assumption that because there were more Executive Orders on REP topics
than DEM ones, the coverage in the Times must be normalized to reflect
this (this assumption is questionable in my view, but I'm going to
ignore this for now).
So, let's look at the numbers vis-a-vis
the "Executive Orders". To do this, I've combined the
relevant data from Puglisi's Tables 3 and 4 into a single table below:
[Table combining Puglisi's Tables 3 and 4: [% of] all NYT stories on DEM vs. REP topics, Executive Orders on DEM vs. REP topics, and the REP/DEM ratios]
Once again, the last row is the most important. What
is it saying? It's saying that:
There were 46% more Executive Orders on so-called REP topics than on so-called DEM topics across all Presidencies from
1946-1994, and, during the same time period,
There were 60% more stories on
REP topics in the NY Times than on DEM topics
Thus, at this
broad level, even if one makes the assumption that
Executive Orders get proportional coverage in the NY Times, the data above
suggests that even when the New York Times' topics-coverage is
normalized to Executive Orders, it provided more coverage overall on
the REP topics than on the DEM topics.
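This back-of-the-envelope normalization can be written out explicitly; a sketch using the two ratios just discussed (1.46 for Executive Orders, 1.60 for stories):

```python
def normalized_coverage_ratio(story_ratio: float, eo_ratio: float) -> float:
    """Story ratio rescaled by the Executive Order ratio.

    Under the (questionable) assumption that Executive Orders should earn
    proportional coverage, a result above 1.0 means REP topics are still
    over-covered after the normalization.
    """
    return story_ratio / eo_ratio

r = normalized_coverage_ratio(story_ratio=1.60, eo_ratio=1.46)
print(f"normalized REP/DEM coverage ratio: {r:.2f}")  # remains above 1.0
```

Even after the rescaling, the ratio stays above 1.0, which is the point being made above.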
Now, I could not find any data in Puglisi's paper
which separates out "All stories" in the NY Times based on
the Presidential administration. I don't know why that is not provided,
just like the Executive Orders are partitioned, but that data is
important. Why? Because the REP/DEM Executive Order ratio
is lower (1.26) for Democratic administrations than the average
(1.46), which in turn is lower than the ratio for Republican
administrations (1.80). So, without a similar breakup of news stories
between the two types of administrations, it is hard to make a
conclusion about whether REP topics are always over-represented
in the NY Times or only in one of the two types of administrations.
Regardless, the fact that normalizing the data to
Executive Orders doesn't reverse the extra coverage given by the Times
to the so-called REP topics argues against Puglisi's main
conclusion.
(I invite readers who are more statistics-aware to comment on whether
I made any mistakes in my assumptions/calculations because I am not a
statistician.)
PROBLEM VI: No real "control" for the study
Puglisi discusses a lot of
"controls" on the data, based on accounting for the ideology of the New
York City Mayor, New York State Governor, NYT Publisher, etc. But
the real "control" needed for the study is missing: a comparison against
a paper of known (conservative) ideology.
His comments in the conclusions of his paper hint at
this, suggesting that he does realize the need to do this, but he
probably did not appreciate the significant impact this potentially
has on his conclusions. He says (bold text is my emphasis):
Much work remains to be done. The next step is to
apply such methodology to other mass media outlets. The "first
best" would certainly be to analyse the editorial choices of a
panel of news providers, in order to estimate the fixed effects of
their positions in the ideological spectrum, as proxied by the
average issue balance of stories, over and above the study of their
time series behaviour.
At this point, one can formulate a conjecture,
according to which the behaviour of the New York Times within the
issue space is that of many other news providers. This
conjecture could be tested using data from other news providers. For
example, one could infer these news providers' political
orientations from their endorsement choices, and then consider
whether during the presidential campaign they systematically publish
more stories about issues that are owned by their preferred
candidate. In particular, a natural question to ask is whether
conservative newspapers are indeed characterised by an issue pattern
in the time series which is specular to that displayed by the New
York Times. [END]
I assume Puglisi is thinking aloud about whether conservative
papers show the reverse of the results he derived for the NYT in his study (note
the meaning of "specular": "Of, resembling, or produced by a
mirror"). This brief mention is disappointing because this is
another standard problem with a lot of these "media bias"
"studies" - they don't check out such an
obvious thing before they make claims of "liberal
media" bias or "Democratic partisanship". What I mean
is this: even if we assume that the results of this study are correct
(which they are not), how can someone claim that a paper
is partisan (based on topic coverage alone) without evaluating another paper - with an ideology
known to be conservative - to see whether that paper's topic coverage
was similar or the opposite?
To understand the importance of
this, let's review a couple of examples from Bob
Somerby at the Daily Howler:
But with Sully, the lies are just foreplay. Yesterday, he quickly
"defer[red] to a young and fearless blogger," Patrick
Ruffini, who had done "a quick statistical analysis of the use
of the term ‘right-wing’ in a couple of major papers."
Trembling over his acolyte’s brilliance, Sullivan quoted at
length:
RUFFINI, AS QUOTED BY SULLIVAN: Since 1996, the Washington Post
has used this loaded term ["right-wing"] more than twice
as frequently as "left-wing"…This disparity was even
more palpable at the New York Times, where 80.2% of the left-right
mentions on the national news pages since 1996 have spotlighted
the right. The research also found that the more loaded and
derogatory the phrase, the more likely it was to be associated
with the political right. The term "conservative"
outpolled "liberal" by 66-34% in New York Times news
page mentions, while the aforementioned "right-wing"
clocked in at 80% in a similar measure. However, the term
"right-wing extremist" was used at least six times as
frequently than "left-wing extremist" (at 87.4% since
’96 in the Times). [emphasis added]
If that didn’t prove it, nothing would. At the New York Times,
"right-wing extremist" was used much more often than
"left-wing extremist." Case closed.
But duh. Does unequal usage of those terms show a liberal bias?
We were dubious, so we did a test—we checked out the use of these
terms at the Washington Times. How many times did the Wes
Pruden rag use those terms in the last five years? Our finding? The
Washington Times reeks of liberal bias! In fact, its liberal
bias is even worse than that found in the Times of New York!
That’s right, folks. Over the past five years, NEXIS says that
"left-wing extremist" has appeared in the Washington Times
all of eight times total. But the term "right-wing
extremist" has appeared there 72 times, exactly nine times
as often. Surely this fact doesn’t mean that the paper is full
of liberal bias. But that’s the conclusion that Sullivan’s
method would force us to reach. There’s a word for such a dude.
Alas! Ruffini simply counted the use of certain expressions, then
leaped to conclusions about liberal bias. There are so many problems
with this technique that it would take a whole book to explain them.
But no matter. The Brainy Brit quickly bought his method, and soon
was broadcasting drek to the planet.
That’s right, gang. If you buy the Brainy Brit’s latest
researched technique, the Washington Times is swimming in liberal
bias. Count Ruffini will live to research again. But where in the
world—where on earth—did we find his hapless promoter?
Next: Motive mavens
The Times, D.C. and Gotham: Ruffini charted the usage of his
selected expressions "since 1996." According to NEXIS, if
you start your search at 1/1/96, here’s how the Times Two stack up:
The Washington Times:
Right-wing extremist: 86 uses
Left-wing extremist: 9 uses
The New York Times:
Right-wing extremist: 75 uses
Left-wing extremist: 9 uses
According to Sullivan’s brilliant technique, the WashTimes has
slightly more liberal bias. Question: Where in the world—where
on earth—did we ever come up with this dud?
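Somerby's point reduces to simple arithmetic on the NEXIS counts he reports: by Ruffini's own metric, the Washington Times scores as more "liberal" than the New York Times. A sketch:

```python
# NEXIS usage counts since 1/1/96, as reported by Somerby above.
counts = {
    "Washington Times": {"right-wing extremist": 86, "left-wing extremist": 9},
    "New York Times":   {"right-wing extremist": 75, "left-wing extremist": 9},
}

for paper, c in counts.items():
    # Ruffini's metric: how lopsided is the use of "right-wing extremist"?
    ratio = c["right-wing extremist"] / c["left-wing extremist"]
    print(f"{paper}: right/left ratio = {ratio:.1f}")
# The Washington Times' ratio (~9.6) exceeds the New York Times' (~8.3),
# so by this metric the conservative paper is the more "liberal" one.
```

Which is exactly the reductio ad absurdum: a term-counting metric that brands the Washington Times as liberally biased cannot prove anything about the New York Times.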
Another example from the Daily Howler:
In Chapter 9, Coulter complains about the press corps’ use of
the terms “Christian conservative” and “religious right.”
According to Coulter, “[t]he point of the phrase ‘religious
right’ or ‘Christian conservative’ is not to define but to
belittle.” And lefties, of course, get a pass:
COULTER (page 166): Despite the constant threat of the
“religious right” in America, there is evidently no such thing
as the “atheist left.” In a typical year, the New York
Times refers to either “Christian conservatives” or the
“religious right” almost two hundred times. But in a Lexis/Nexis
search of the entire New York Times archives, the phrases
“atheist liberals” or “the atheist left” do not appear
once. Only deviations from the left-wing norm merit labels.
In a footnote, Coulter extends her complaint. “In a one year
period (roughly corresponding to calendar year 2000), the New York
Times found occasion to mention either ‘Christian conservatives’
or the ‘religious right’ 187 times. Not once did the paper refer
to ‘atheist liberals’ or ‘the atheist left.’” To Coulter,
of course, this is all a sign of gruesome bias. She goes on to claim
that the terms “religious right” and “Christian
conservative” are now used “[j]ust as some people once spat out
the term ‘Jew’ as an insult.”
It certainly makes for high excitement, but does it make any
sense? Do newspapers use “Christian conservative” as an emblem
of hatred, and avoid “atheist left” due to liberal bias? If so,
we have big news to share. If Coulter’s NEXIS search has proven
these things, then the once-conservative Washington Times is
spilling with lib bias, too.
In the calendar year 2000, how often did the New York Times refer
to “Christian conservatives” or the “religious right?” A
NEXIS search of that year presents 182 references. But the Washington
Times—a much slimmer paper—had 151 such cites that same year.
And how about those other terms—“atheist liberals” or “the
atheist left?” Incredibly, Coulter was right in one of her claims;
the New York Times never used either term. But guess what? The Washington
Times never used the terms, either. If Coulter has sniffed out a
vast left-wing plot, Wes Pruden is in on it too.
Why do newspapers write about “Christian conservatives?”
Because they exist, and because they’re important.
And why don’t we read about the “atheist left?” Because the
group doesn’t exist. That’s why the New York Times
doesn’t mention the group; that’s why the Washington Times
doesn’t mention it, either. Everyone in America knows this is
true—until they read Coulter’s cracked book.
We have no idea whether Puglisi's findings will be
"mirrored" or "similar" in a rag like the
Washington Times. But it was inappropriate to make the kind of
sweeping conclusions he makes in his paper without doing such a basic comparison in the first place.
At this point I really have to stop because I've spent far more time on
this paper than I had originally intended to. But I've shown clearly
that this paper is seriously flawed and proves nothing.
2.10C BOOK CHAPTER: The chapter
titled "The Mass Media and Voter Information" in the upcoming (as of
4/18/05) book "Analyzing Elections" by Rebecca Morton
A reader informed me of an upcoming book titled "Analyzing
Elections", by Prof. Rebecca Morton (NYU), where she has a chapter
dedicated to the media titled "The Mass Media and Voter Information".
In this chapter, Morton has a section titled "Measuring Media Bias"
where she discusses some of the published research/academic papers that address
the media bias topic. I don't doubt Morton's good intentions in presenting an
overview of these papers, but her coverage is quite limited and does not address
key critiques of some of the papers that claim to prove some form of
"liberal bias" - so I will provide a brief (considering the size of
the chapter) critique.
Chapter sub-section "Just the Facts Ma'am"
Let's start with her discussion of Lott and Hassett's paper "Is Newspaper
Coverage of Economic Events Politically Biased?" which studies newspaper
coverage of economic news:
...However, they did not actually test whether information was
withheld or revealed as in Groeling and Kernell or truthful as in Ansolabehere
et al....Technically their study is not an evaluation of the “truth” of
the reporting since they did not actually have an estimate of how the
information should be revealed although they do have a clear understanding of
the fact to be reported (the statistic). Lott and Hassett found that in the
aggregate that newspapers presented economic statistics more positively under
Clinton’s administration than when either Bush Senior or Junior was
president, even controlling for the better economic conditions during the
Clinton years. The Chicago Tribune, Associated Press, New
York Times, Wall Street Journal, and Washington Post tend to
be the least likely to report positive news during Republican administrations.
Not all the newspapers demonstrated a Democratic bias however, for example the
Houston Chronicle (Bush Senior’s home paper) had a slight Republican
bias and the Los Angeles Times (Reagan’s
home paper) presented figures during Reagan’s administration more positively
than during Clinton’s.
I would certainly beg to differ on the claim that Lott and Hassett "do have a clear understanding of
the fact to be reported". More importantly, though, since there is no additional
critical coverage of the Lott/Hassett paper
in her chapter, this is likely to leave the reader with the impression that
their paper may actually have some merit, despite the general limitations that
Morton has pointed out. As readers know, the Lott/Hassett paper has received
reasonable scrutiny in the blogosphere and I have critiqued
it myself, showing how the paper is so completely flawed that its
conclusions are worthless.
Morton contrasts the Lott-Hassett paper with that of Niven (“Bias in the
News—Partisanship and Negativity in Media Coverage of Presidents George Bush
and Bill Clinton,” Harvard International Journal of Press-Politics 6(3):31-46,
2001):
Niven (2001) examines the reporting on one particular
economic statistic, unemployment, from February 1989 to September 1999 in
150 newspapers with at least two papers from every state. He analyzes news
stories on the unemployment rate for whether they explicitly mention the
incumbent president positively, neutrally, or negatively. He codes news
stories on a three-point scale (1 = positive, 2 = neutral, 3 = negative). He
finds no significant differences in the coverage of George H. W. Bush and
Clinton on unemployment, the average coverage for Bush is 2.3 while the
average for Clinton is 2.2, suggesting no bias in the reporting on unemployment. He does
find that media coverage is more likely when the unemployment rate is high,
suggesting that the media is more likely to cover bad news.
I have not read this paper by Niven (although this
other paper of his suggests he is trying to approach the media bias
topic in a thoughtful way); but let me point out one thing. Even if we were
to make an assumption that Niven's study was accurate and that his results
are reliable, the conclusion "suggesting no bias in the reporting on
unemployment" would seem odd because Bush had a substantially worse
record on employment (rising unemployment) and Clinton had a much
better record on employment (falling unemployment), as shown here.
If the coverage they received was essentially similar, then it
would suggest a media bias in favor of George H. W. Bush, not
"no bias". I'm not going to belabor this point since I have not
read Niven's paper, but I will return to the general notion of
"balance" in coverage subsequently.
Chapter sub-section "Measuring Ideological Bias"
Morton starts this sub-section with this introduction:
Although the ideal way of measuring bias is comparing truth to
reporting, requiring a measure of the “truth” severely limits the extent
we can measure bias, particularly in areas where the public is likely to be
least informed or there is a great deal of continuing uncertainty about the
truth which may never be known.
I do have a problem with the statement that "requiring a measure of
the “truth” severely limits the extent we can measure bias," but I
think what Morton is trying to say here is that there are certain areas
where news is not so much about "truth" as it is, say, about
opinions or personal beliefs. However, the example she uses to illustrate her
point is not the right one (bold text is my emphasis):
For example, George W. Bush, after reelection in 2004 began
to push for privatization of social security. He claimed that social
security was in bad financial straits and that privatization was needed to
keep the program afloat. He also argued that privatization was better for
retirees in the long run, allowing them to achieve higher returns on their
investments. Democrats contended that social security was not in financial
trouble and that privatization hurts retirees. Who was right? Both had
plenty of statistics and facts to support their point of view. Given that we
can never have two different countries under exactly the same conditions
running alternative plans, we can never know who was ultimately right.
Presentation of facts and information on this issue is arguably more
potentially consequential to voters than presentation of economic growth
rates or presidential approval ratings.
This is the unfortunate consequence of the media we have in the U.S., where
the relentless use of "he-said, she-said"
reporting has created a
myth that "we can never know who was ultimately right" even on a
topic as quantifiable as social security, where we can very easily
know who is right. People who have read my earlier work and that of
numerous others in the blogosphere, undoubtedly know that the Bush
administration's claims on social security (not to mention every other topic) are
deceptive or false claims, most often "reported"
stenographically by the media without any fact checking. This is a
long-standing strategy by Republican leaders and right-wing spokespersons - to
exploit the mainstream media's supposed striving for "balance" by
providing their "side" of the story using misleading or false
talking points. (David Brock examined the history of this in detail in his
seminal book The
Republican Noise Machine).
It is not a
complete surprise to me that Morton's views appear to have been largely
influenced by the appallingly poor media coverage. But, here's the point. If
an academic researcher/professor writing a chapter on the media can be so
easily influenced by the conservative bias and disinformation in the media -
where deceptive or false statements are treated as equal to actual
facts - that she is led, without realizing it, to conclude that "we can never know who was
ultimately right", imagine what the average reader is likely to take away
from the daily news coverage. This is just the tip of the iceberg of devastation wrought
by the mainstream American media on honest discourse in this country.
Let's continue on with her chapter:
A number of researchers have taken the approach of
estimating the ideological bias of various media sources by measuring the
ideological makeup of their readers. For example, Sutter (2004) showed that
as a region becomes more liberal, the consumption of the newsmagazines Time,
Newsweek, and U.S. News and World Report increase. Hamilton (2004) provides
an ideological rating for news outlets by the ideology of their readers as
measured by Pew and the ratings that consumers assign to the outlets. He
finds that Fox News is considered one of the most conservative outlets and
that magazines such as The Atlantic Monthly, The New Yorker,
and Harpers are rated most liberal. Hamilton also finds that
conservatives are more likely to see the media as biased than liberals are,
which he maintains suggests an overall liberal bias in the media.
Although these indirect measures are indicative of a biased
media, they do not prove that the reporting is actually biased or if so,
biased in the direction of voter ideology unless we know for sure that
voters choose media that has biases similar to their ideological...
Morton's last sentence above is confusing because she seems to suggest
that measures such as that of Hamilton "are indicative of a biased
media", but that "they do not prove that the reporting is actually
biased". I have not read Hamilton's referenced book (All
the News That’s Fit to Sell), which seems to be a somewhat expansive
review of the media from
an economist's viewpoint, so I am not able to provide a detailed review or
critique of it (you can find some on the web - like
this one for example). However, if the last sentence (above) is an
accurate reflection of (one of) Hamilton's conclusions, namely:
Hamilton also finds that conservatives are more likely to see the media
as biased than liberals are, which he maintains suggests an overall liberal
bias in the media.
then I should point out that such a conclusion is spectacularly incorrect
for a couple of reasons. First, people of one ideology may have an
intrinsic tendency to view the media's reporting colored through their
ideological lens and such a tendency may be exacerbated by prominent
spokespersons of their ideology advocating that the media is biased in the
other direction - regardless of whether the media is actually biased in
the other direction. Second, even if we ignore ideology, in
virtually every case, public opinion alone cannot be used to
prove any kind of media bias because the media's (ideal) role is
not to play to the likings or prejudices of the readers, but to report facts.
Put another way, anyone can claim bias just because they don't
like the media's coverage, even if the media's coverage is actually
unbiased. Examples abound in real life due to various
factors including, but not limited to, personal opinions or beliefs
coloring their interpretation of straight news, lack of knowledge of the
facts, being told repeatedly
by some in the media that the media is "liberal biased", etc. The
point is that public opinions (as opposed to facts) about
media bias may have nothing whatsoever to do with actual
media bias - it is the content and accuracy of news articles that
determines whether there is any media bias. Thus, the "consumption
of the newsmagazines Time, Newsweek, and U.S. News and World Report"
increasing in more liberal regions [Sutter 2004] says nothing about
whether these magazines are liberal biased. This could be because readers don't
believe there are other quality newsmagazines available, or because they perceive
these newsmagazines as being not too conservatively slanted or as being
unbiased - even though the vast majority of readers don't have much of a
clue as to how slanted the newsmagazines are in reality. (They can't know
this unless they have access to ALL the facts - and if no other source
provides them with all the facts, how can they figure out the real bias?) This
leads us to the other important corollary: even people who don't think the media has any perceptible bias may be
completely off in their judgment, if that judgment is not based on a
detailed, factual analysis of the coverage.
Morton then covers papers or studies that look for "balanced"
coverage. She says:
Most studies of media bias look for balance in terms of coverage, that is
the assumption is that if a report or a news outlet gives equal weight to
two opposing viewpoints, then the media outlet is unbiased. These types of
analyses provide mixed results. Lowry and Shidler (1995) analyze sound bites
about candidates during the 1992 presidential election and found that they
were significantly more negative towards Republicans than Democrats.
However, Domke, Fan, Fibison, Shah, Smith, and Watts (1997) analyzed a
random sample of 12,215 news stories during the 1996 campaign and found that
the ratio of positive to negative stories was 1.8 for Clinton and 1.7 for Dole.
She doesn't specifically comment on the invalidity of the claim that
"if a report or a news outlet gives equal weight to two opposing
viewpoints, then the media outlet is unbiased" or on the invalidity
of any attempts to use "positive" or "negative" coverage
to prove media bias. So, let me point out that concluding any kind of media
bias from such studies (without knowing the accuracy of the actual content) is
incorrect. I've covered this before, here,
showing how even reputable organizations that claim to study journalism fall for
this spin about "tone" of coverage (or "balance" in coverage).
At this point, Morton moves on to the Groseclose-Milyo (G-M) paper:
An alternative to checking for balance is to compare the reporting of the
news to the versions of reality presented by liberals and conservatives. If
news reports sound the same as a known liberal (conservative) public
official speaking on the issue then we can classify the news organization
producing those reports as having a liberal (conservative) bias. How can we
make such a comparison? Groseclose and Milyo (2005) devised a clever way to ...
She then proceeds to provide a fairly lengthy (and positive) review of G-M's findings.
However, she does not point out that the basic assumptions and premise of the
G-M paper are wrong, and that their conclusions are completely unreliable.
I've pointed that out, here.
Chapter sub-section "Media Bias as Agenda Setting"
Morton writes about another view of bias:
But an alternative way of thinking of bias is that there are a set of
issues to choose from in reporting, rather than a set of facts, and that a
media bias can occur by the media emphasizing issues that benefit particular
candidates or parties.
From either perspective, we can think of some issues as more advantageous to
one party. Petrocik (1996) argued that some issues are therefore “owned”
by particular parties.
And the main example of a corresponding media bias study that she cites is the paper by Riccardo Puglisi.
Her concluding comments on Puglisi's paper and on the corresponding
sub-section of the chapter are as follows (bold text is my emphasis):
Puglisi finds (controlling for other factors that might influence news
coverage) that the New York Times gives more emphasis to issues owned by the
Democrats during presidential campaigns when the incumbent president is a
Republican, suggesting that the New York Times does demonstrate some
partisan bias. He also finds that during periods when there is no
presidential campaign, the Times covers more Democratic issues when the
incumbent president is a Democrat.
The studies of Ansolabehere, et al., Groeling and Kernell, Lott and
Hassett, Sutter, Hamilton, Groseclose and Milyo, and Puglisi provide a
general picture of bias in reporting. There is evidence that most
national media outlets have a liberal bias, although the extent of the
bias varies by outlet and several outlets have a clear measurable
conservative bias.
Let me mention two things here.
First, Puglisi also found the following:
When considering the 1961-1994 subperiod, this effect of fewer Republican
stories under a Democratic incumbent is larger in magnitude and more
statistically significant.
Moreover, considering the same subperiod, under a Democratic incumbent
there are more stories about Republican topics when the presidential
campaign kicks in.
Puglisi used this to suggest that the New York Times also behaved as
a "watchdog" sometimes. That said, Puglisi's assumptions and
conclusions were also quite wrong, as I have shown here. Morton's review
does not offer a detailed critique of Puglisi's paper.
Second, when we get to Morton's statement that "There is
evidence that most national media outlets have a liberal bias", I
would like to emphasize that none of the papers cited by Morton (as I have
shown above) actually prove this. In other words, there is really
NO evidence that "most national media outlets have a liberal bias".
This is important to note. Additionally, although I happen to agree with
Morton that "several outlets
have a clear measurable conservative bias" in the U.S.,
there is very little credible evidence presented in her chapter (at least up
to this point) to support this claim.
There is a lot more I could write about Morton's chapter because,
unfortunately, there are many other problems in it. Since I don't have
much time, I'm just going to give two more examples.
First, Morton mentions the CBS 60 Minutes faux pas on the Bush TX-ANG
story in her introduction by stating that conservative "bloggers
discovered that the documents might not be what they claimed to be and the facts
had not been checked adequately." She says, later in the chapter:
CBS News was forced to correct its story on Bush’s service
based on inaccurate documents. Conservative bloggers served as a check on CBS
because opinion within the public was sufficiently diverse and therefore there
was a demand for an alternative viewpoint. Similarly, Mullainathan and
Shleifer argue that during the Clinton Lewinsky scandal, the competition
between different biased news outlets led to an overall accurate
interpretation of events.
She comments on this in
subsequent pages as well. For example, here:
Both CBS producer Mary Mapes
and reporter Dan Rather told the panel of investigators after CBS News
apologized for airing unsubstantiated claims about Bush’s National Guard
Service that they continued to believe that the memos were real. Rather
remarked that he believed the content of the documents was true because “the
facts are right on the money.”29
The day after Dan Rather reported on the controversial memos
that Mapes had gotten for him, USA Today published a story, which made the
same unsubstantiated claims.33 According to Mapes’ superiors, she
used the forthcoming USA Today story as one of the reasons why the story
should be rushed at CBS.34 Interestingly, USA Today did not receive
the same general criticism as CBS.35
Morton hits a couple of the
points of interest, but overall, her coverage is dramatically underwhelming because it leaves
the reader with the impression (even if this was unintentional on Morton's part)
that conservative bloggers and writers would love to convey: that CBS was
"liberal biased". The reality, as we all know, is pretty much the
opposite, and spectacularly so, even with the Bush TX-ANG issue.
The second example illustrates another case where Morton's fact-checking is very limited.
In January 2005, Armstrong Williams, a conservative
commentator and columnist, revealed he was paid $240,000 by the Education Department to
promote the No Child Left Behind Act, and Howard Dean's campaign reported that it paid two web bloggers to support their efforts in the Democratic primaries. However, these instances of
supplying obviously biased news reports and paying journalists have been heavily criticized as
violations of a well-understood norm.40
I hate to say it, but this example alone reflects
rather poorly on Morton's chapter. Armstrong Williams was paid a huge sum by the Bush administration (the Government) to propagandize in favor of the Government, and he
never disclosed it to his listeners/readers. The webloggers, on the other hand, were Democratic activists, not journalists - and even if they had been "journalists", they prominently disclosed on their blogs, *ahead of time*, that they were paid campaign advisors to Dean (one of them even stopped blogging during the time he worked for Dean)!
This is made clear in the same Wall Street Journal article that Morton cites.
Indeed, if Morton wanted an example of bloggers being paid money
by a politician and not disclosing it to their readers and pretending to be
"independent", she should have included the South Dakota example:
Well, read on, then, from [The National Journal] article
(reprinted, amazingly, by Van Beek's own blog, with his own editorializing
inserted in the margins [the quote below is entirely from TNJ, however]):
Lauck, Van Beek, and other conservative activists in the state also tout a
series of stories written by Jeff Gannon, the Washington bureau chief for
TalonNews.com, as their ultimate proof of bias at the Argus Leader. The series,
penned in summer 2003, alleged that Kranz, who went to college with Daschle, was
not just sympathetic to his friend but was an actual part of Daschle's larger ...
However, TalonNews is not the independent news source it purports to be. It's
run by GOPUSA, a conservative political publishing and consulting firm. While
the Bush administration has provided Gannon with press credentials, the
nonpartisan U.S. Senate Daily Press Gallery has rejected Gannon's repeated
requests for congressional press credentials because of Talon News' financial
ties to GOPUSA.
But then, this past spring, Van Beek unearthed a series of memos from the 1970s
that, according to Van Beek and Gannon, showed that Kranz had consulted on press
strategy with aides to former Rep. James Abourezk, D-S.D. In the memos, aides
refer to Kranz as a "good Democrat" whom Abourezk's office should work with.
The publication of the memos, as well as growing attention to the Daschle-Thune
race by national bloggers and conservative media outlets, prompted an angry
response from Argus Leader Executive Editor Randell Beck. On a radio call-in
show, Beck defended Kranz, called the memos "crap," and accused the
bloggers of being part of an organized right-wing effort looking to damage the
newspaper.
According to documents that the Thune campaign filed with the Federal Election
Commission, the campaign paid the two men $35,000--$27,000 to Lauck and $8,000
to Van Beek--between June and October of this year ....neither "DaschlevThune"
nor "SouthDakotaPolitics" included a disclaimer or other standing
mention during the election that Thune's campaign was employing the
authors....[and Lauck has said] "I wouldn't have had access to a lot of the
information if I hadn't been with the campaign..."....the state's bloggers
and media sources both said the campaign against the newspaper played a key role
in the GOP's message-control effort to persuade voters to elect Thune over
Daschle.
Morton's chapter attempts to provide a detailed review of the
topic of media bias (as well as its possible impact on election outcomes) and I
don't doubt her intentions. However, it largely suffers from the same problems
that so many other academic studies of the media suffer from: a general lack
of emphasis on the accuracy of media reporting. Morton and most of
the other authors she cites also don't seem to have much exposure to the widespread
media malpractice outside of what they hear from the media itself (e.g., CBS and Bush, Jayson Blair).
This is problematic for two reasons. An independent examination of a subject
should not rely too heavily on the subject's claims and underemphasize independent,
critical analysis of the subject's claims (this is the cardinal
law of any independent research). Further, considering that the media rarely,
if ever, reports its own gross inaccuracies or malpractice when the targets
are Democrats (see here for evidence), this adds a clear bias to their analysis,
one they don't seem to be cognizant of. All of these authors would benefit substantially by
widening the scope of their research to include web sites like The
Daily Howler and Media Matters.
Academics, who could do a lot to reverse the rampant inaccuracies, biases, and routine journalistic malpractice in the media, are unfortunately too hooked into the media's own discourse to realize that they are missing the main problem with the American media: it is rarely accurate on controversial topics. That is where the search for media bias should begin.