2. Conservative Books and "Studies" Alleging "Liberal Bias"

2.9 PAPER: "A Measure of Media Bias" by Tim Groseclose and Jeff Milyo, 2005

This is an updated version of my original response to the Groseclose-Milyo (G-M) paper. As I was doing a review of some other published literature on media bias on 4/16/05, I discovered that G-M had posted an updated version (HTML, PDF) of their original paper as of 2005-01-03. The revised version of their paper corrects some of the lacunae in the original version; however, the most fundamental problems with the original paper remain in this new version. [NOTE: The fact that I missed the latest version in my original critique was an unintentional oversight. The updated G-M paper does not in any way invalidate my original critique (indeed, one of the fixes they made shows that one part of my critique was right on target). I have updated my critique here to refer to their revised paper.]

For consistency, I use the word think-tank on this page in the same sense in which the authors use it - to describe not just traditional think-tanks but advocacy groups as well. This is a debatable point, but it is irrelevant to this response.

SUMMARY [detailed analysis follows the summary]

The Groseclose-Milyo (G-M) paper (HTML, PDF) attempts to assess media bias using an approach wherein adjusted ADA (Americans for Democratic Action) scores (0-to-100) are used to assess legislator ideology (archconservative-to-archliberal), and, separately, the think-tank citations of legislators are compared to the think-tank citations of a media outlet to derive the media outlet's "bias". Based on this methodology, they claim that:

Our results show a strong liberal bias.

In this critique I examine the paper from three perspectives:
1. Is the methodology used for assessing the ideology of think-tanks correct and reliable? (Section 1)
2. Is the methodology used for assessing the ideology of the media correct and reliable? (Section 2)
3. Is the definition of media bias used by the authors correct and reliable? (Section 3)

I find that the answer to each of those questions is NO.

The methodology used by the authors for assessing think-tank ideology (i.e., based on the average adjusted ADA score of the legislators citing the think-tank) is deeply flawed because it omits the public or private disagreements that legislators have with the same think-tank, and because it does not account for legislators who agree with a think-tank but never say so publicly for various reasons (e.g., they are unaware of the think-tank; the think-tank is not well known enough to cite; it is a "controversial" think-tank; there is simply no need to cite a think-tank; etc.). This can effectively skew their results in the wrong direction, to an unknown degree. For example, the fact that their methodology found the ACLU to be "conservative" was a result of the former flaw. To address this, they say:

The reason the ACLU has such a low score is that it opposed the McCain-Feingold Campaign Finance bill, and conservatives in Congress cited this often.  In fact, slightly more than one-eight of all ACLU citations in Congress were due to one person alone, Mitch McConnell (R.-Kt.), perhaps the chief critic of McCain-Feingold.  If we omit McConnell’s citations, the ACLU’s average score increases to 55.9.  Because of this anomaly, in the Appendix we report the results when we repeat all of our analyses but omit the ACLU data. 

Unfortunately, omitting McConnell's citations or the ACLU data point is the wrong approach to fix this problem. The way to fix it is by actually ADDING all those instances in which Republicans actually disagreed with the ACLU, not by incorrectly and artificially removing situations where *they agreed with the ACLU* in order to get an average score that seems more in sync with a *separately established* reality. In other words, if we already knew the ACLU is "liberal" and needed that knowledge to "adjust the data", then what is the value or point of this study?

Additionally, a legislator may cite a think tank not because he or she mostly agrees with the think tank but because that think tank's view is closer to his or her view than that of any other think-tank the legislator is aware of or cares to cite. It is very unlikely that legislators who cite a think tank agree with everything the think tank says or stands for. For example, some legislators may be in agreement with, say, only one or two or three of the think tank's positions and may cite it for that reason, repeatedly (as in the ACLU case). The bottom line is that their think-tank ideology ratings are unreliable and incorrect, as I show in detail in Section 1.

The methodology used by the authors for assessing media ideology is completely untenable. There are three principal reasons for this:

(a) The approach G-M use establishes media ideology indirectly, by using the media's think-tank citations and comparing those to think-tank citations by legislators in order to find the legislator whose citations are the closest match. Thus, if a legislator is liberal and the media's think-tank citations match those of the liberal legislator, they would declare the media to be liberal. Momentarily setting aside the fact that this definition of media bias is itself incorrect, their claim would make sense only if it could be independently proven that the think-tanks cited by the liberal legislator are actually liberal. Their study does not prove this at all, considering that their methodology for establishing think-tank ideology is itself deficient. Thus, at a fundamental level, their entire conclusion on media bias breaks down. (NOTE: It is not at all implausible that left-leaning legislators may cite more centrist think-tanks in public than progressive/liberal ones, especially considering how liberal advocacy groups and think-tanks are tarred by the GOP in the illiberal conservative media.)

(b) The use of weighted-average ADA scores (for the House and the Senate) is slightly more meaningful than the median (which they used in the original version of their paper), but even this is deficient and incorrect, because the ideological center is set not using an independent, objective measure of ideology but based on the (political) positions of the people in Congress at a given point in time. Thus, their model simultaneously assumes that ADA scores provide an absolute picture of a legislator's ideology and that media and think-tank ideology should be determined not using the same absolute reference but a relative, moving reference that is highly dependent on who holds the majority in Congress and how they think or vote. This is not an acceptable model, for, if the minority party becomes the majority party in the next election, the derived ideology of think-tanks or the media could change significantly even though their actual positions underwent ZERO change.

Put another way, if the Republican majority suddenly decides to become 100% conservative, guess what happens. The weighted-mean ADA score would drop, even if the Democrats in Congress DID NOT change at all, and even if the media outlets that are considered "liberal", by the G-M definition, remain STATIC (i.e., no change in their think-tank citation ratios and those of the corresponding "liberals" in Congress). In this case, even though the media's ideology has NOT changed at all, its adjusted ADA score(s) will artificially look more liberal compared to the lower weighted-mean ADA score. (BONUS FOR LEFTIES: This is right in line with one of the long-time Republican strategies of declaring the media (and Democrats) to be too "liberal" by moving the country to the Right.) This is not a partisan issue, though. The opposite could occur with media outlets that are considered "conservative" because they match the citations of conservative Republicans, if the Democrats decide to become 100% liberal.

(c) The final, and perhaps most serious, problem with their analysis is their attempt to derive a conclusion of media bias from this study - because their definition of media bias is, in itself, completely flawed. Their confident conclusion that they have demonstrated "liberal" media bias is wrong because the study does not examine whether the media's news reporting is accurate. Their assumption that "seldom do journalists make dishonest statements" is also fatally incorrect. The focus on think-tank citations completely ignores what the media communicates to viewers or readers when it is NOT citing think-tanks, which is a big chunk of the time. The authors' citing serial liar Brent Bozell's claim that there is "rarely a conscious attempt to distort the news" is incredibly ironic! Their claim that "the citations that they gather from experts are also very rarely dishonest or inaccurate" also suggests that they are very un-skeptical when it comes to absorbing news.

When controlled for other factors (see Appendix A), the more fundamental determinant of bias in news reporting is accuracy -- not whom the news reports cite. To the extent that news reporting could become inaccurate by citing certain think-tanks over others, one may have a case that think-tank citations could influence the accuracy of the reports. But, G-M have fallen into the trap of assuming that the part is the whole. Think-tank citations are merely one part of the whole - which is the media's accuracy in news reporting. 

DISCUSSION

SECTION 1. ASSESSING THE IDEOLOGY OF THINK TANKS

SECTION 2. ASSESSING THE IDEOLOGY OF MEDIA OUTLETS 

SECTION 3. DEFINING MEDIA BIAS 

APPENDIX A: Clarifying comments on Section 3


SECTION 1. ASSESSING THE IDEOLOGY OF THINK TANKS

1.1 The authors summarize their basic methodology (at a high level) as follows:

To compute our measure, we count the times that a media outlet cites various think tanks and other policy groups.[1]  We compare this with the times that members of Congress cite the same think tanks in their speeches on the floor of the House and Senate.  By comparing the citation patterns we can construct an ADA score for each media outlet. 

As a simplified example, imagine that there were only two think tanks, one liberal and one conservative.  Suppose that the New York Times cited the liberal think tank twice as often as the conservative one.  Our method asks:  What is the estimated ADA score of a member of Congress who exhibits the same frequency (2:1) in his or her speeches?  This is the score that our method would assign the New York Times. 

A feature of our method is that it does not require us to make a subjective assessment of how liberal or conservative a think tank is.  That is, for instance, we do we need [sic; this is a very critical phrase and unfortunately there is a typo or grammatical error here which makes its meaning unclear. Based on the content of the rest of the paragraph I infer that the authors mean to say "we do NOT need", and my discussion below is based on this understanding] to read policy reports of the think tank or analyze its position on various issues to determine its ideology.  Instead, we simply observe the ADA scores of the members of Congress who cite the think tank.  This feature is important, since an active controversy exists whether, e.g., the Brookings Institution or the RAND Corporation is moderate, left-wing, or right-wing. 
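
Before dissecting this, it helps to make the mechanics of their matching idea concrete. Below is a minimal sketch in Python. This is NOT the paper's actual estimator (they use a more elaborate statistical estimation procedure); the nearest-match comparison is only a simplification of the intuition in their 2:1 example, and all legislator names, think-tank names, counts and scores are invented.

# Minimal sketch of the citation-matching intuition. NOT the paper's actual
# estimator; all names, counts and scores below are invented.

# Adjusted ADA scores (0 = archconservative, 100 = archliberal) for some
# hypothetical legislators, estimated independently from voting records.
legislator_ada = {"Legislator A": 85.0, "Legislator B": 50.0, "Legislator C": 15.0}

# Think-tank citation counts in floor speeches (hypothetical).
legislator_citations = {
    "Legislator A": {"ThinkTankLeft": 20, "ThinkTankRight": 10},  # 2:1 ratio
    "Legislator B": {"ThinkTankLeft": 12, "ThinkTankRight": 12},  # 1:1 ratio
    "Legislator C": {"ThinkTankLeft": 5,  "ThinkTankRight": 25},  # 1:5 ratio
}

# Think-tank citation counts in a media outlet's news stories (hypothetical).
outlet_citations = {"ThinkTankLeft": 200, "ThinkTankRight": 100}  # 2:1 ratio

def citation_shares(counts):
    """Convert raw citation counts into proportions."""
    total = sum(counts.values())
    return {tank: n / total for tank, n in counts.items()}

def estimated_ada(outlet, legislators, scores):
    """Assign the outlet the ADA score of the legislator whose citation
    proportions most closely match the outlet's (squared distance)."""
    outlet_shares = citation_shares(outlet)
    def distance(leg):
        leg_shares = citation_shares(legislators[leg])
        return sum((outlet_shares[t] - leg_shares.get(t, 0.0)) ** 2
                   for t in outlet_shares)
    best = min(legislators, key=distance)
    return best, scores[best]

print(estimated_ada(outlet_citations, legislator_citations, legislator_ada))
# -> ('Legislator A', 85.0). The outlet's 2:1 ratio matches Legislator A's,
# so the outlet inherits Legislator A's score - with no independent check
# of what the cited think tanks actually stand for.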

Although the authors state that their method is such that it "does not require us to make a subjective assessment of how liberal or conservative a think tank is", their method is in fact making a highly subjective assessment. Why?

Firstly, by definition, an objective assessment is one in which we would need to actually read through the think tanks' policy briefs and compare the details in those briefs to a fixed definition of what is considered liberal, centrist or conservative (one can pick a reference like the ADA if one wishes, but the reference needs to be fixed). That is not being done here because the think tank's ideology is being derived using a (weighted) metric of who cites the think tank and how often. That metric is a function of:
  • the actual policy issues that are being debated or discussed publicly (e.g., if only 3 out of, say, 20 policy issues were discussed prominently during the period of the study, the results could be skewed because a big chunk of a think-tank's positions may never be cited or discussed by legislators)
  • the legislator citing or not citing the think tank 
  • the assumption that everyone in the sample knows about the think tank as well as its detailed positions on matters of interest to them, among other things
    • In my view, the latter is unlikely to be true for any think tank across the board; at best it might hold for one or two top think-tanks, and even that is a stretch (I know of no legislator who is fully familiar with all the key details of even one think tank's policy proposals on every topic of interest to them, let alone multiple think tanks).

In fact, one of the "anomalies" the authors found is directly attributable to the first bullet point above:

The second apparent anomaly is the RAND Corporation, which has a fairly liberal average score, 60.4.  We mentioned this finding to some employees of RAND , who told us they were not surprised.  While RAND strives to be middle-of-the-road ideologically, the more conservative scholars at RAND tend to work on military studies, while the more liberal scholars tend to work on domestic studies.  Because the military studies are sometimes classified and often more technocratic than the domestic studies, the media and members of Congress tend to cite the domestic studies disproportionately.  As a consequence, RAND appears liberal when judged by these citations.  It is important to note that this fact—that the research at RAND is more conservative than the numbers in Table 1 suggest—will not bias our results.  To see this, think of RAND as two think tanks: RAND I, the left-leaning think tank which produces the research that the media and members of Congress tend to cite, and RAND II, the conservative think tank which produces the research that they tend not to cite.  Our results exclude RAND II from the analysis.  This causes no more bias than excluding any other think tank that is rarely cited in Congress or the media. 

This is a serious flaw in their method because the partitioning of RAND into RAND I and RAND II is not only artificial, it masks RAND's ideology as a whole. The same problem is almost certainly occurring with other think-tanks, where only certain policy positions of those think-tanks were cited in the period covered, because only those were relevant to the debates in progress. Moreover, it must be noted that the artificial partitioning of RAND is done using an independent assessment of the think-tank's ideology after the fact. I will return to this point in a minute, but this is another serious problem.

Secondly, while the authors are free to use their current approach, it is, in fact, subjective, because it is very unlikely that legislators who cite a think tank agree with everything the think tank says or stands for. For example, some legislators may be in agreement with, say, only one or two or three of the think tank's positions and may cite it for that reason, repeatedly. Indeed, the authors' own ACLU example directly illustrates this problem. The ACLU appears more conservative than it is (according to the authors) because a conservative cited it repeatedly on one specific point of agreement, even though the same conservative likely disagreed with the ACLU on most other issues. While the authors did the right thing by pointing this out, this is a significant and fundamental problem that severely impacts the rest of their study and conclusions. Let's explore why this is the case.

If many legislators who cite a think tank (e.g., Brookings) do so only on two or three specific topics (among a wide gamut of topics that would encompass one's definition of ideology), and they actually disagree with Brookings' ideology on many other matters, what options do they have? Well, they can either state their disagreement publicly or keep the disagreement private and never mention it. Let's see how the authors' approach deals with these scenarios.

  • If a legislator decides to cite Brookings in a negative way, that should actually be considered in the model as one less point of support for Brookings. However, the model actually ignores this data point. As the study says: 

    Also, we omitted the instances where the member of Congress or journalist only cited the think tank so he or she could criticize it or explain why it was wrong.  About five percent of the congressional citations and about one percent of the media citations fell into this category.

    This is a serious, structural flaw in the methodology that cannot be ignored as long as the ideology of the think tank is derived from the legislators citing it.

  • Even more seriously, let's consider the other case, where the legislator is actually in disagreement with the think tank on many topics but decides not to cite the disagreement publicly (i.e., he or she keeps it private and doesn't mention the name of the think tank). If this legislator otherwise mentions the think tank positively, the method used by the authors would, again, dramatically overstate the think tank's ideology match with the legislator, because the legislator's negative views of the think-tank on other occasions are not included in the model (simply because they are private and not publicly known).
    • These two cases resonate directly with the ACLU case the authors highlight:

      While most of these averages closely agree with the conventional wisdom, two cases seem somewhat anomalous.  The first is the ACLU.  The average score of legislators citing it was 49.8.  Later, we shall provide reasons why it makes sense to define the political center at 50.1.  This suggests that the ACLU, if anything is a  right-leaning organization.  The reason the ACLU has such a low score is that it opposed the McCain-Feingold Campaign Finance bill, and conservatives in Congress cited this often.  In fact, slightly more than one-eight of all ACLU citations in Congress were due to one person alone, Mitch McConnell (R.-Kt.), perhaps the chief critic of McCain-Feingold.  If we omit McConnell’s citations, the ACLU’s average score increases to 55.9.  Because of this anomaly, in the Appendix we report the results when we repeat all of our analyses but omit the ACLU data. 

      Unfortunately, omitting McConnell's citations or the ACLU data point is the wrong approach to fix this anomaly. The way to fix it is by actually ADDING all those instances in which Republicans actually disagreed with the ACLU, not by incorrectly and artificially removing situations where *they agreed with the ACLU* in order to get an average score that seems more in sync with a *separately established* reality. In other words, if we already knew the ACLU is "liberal" and needed that knowledge to "adjust the data", then what is the value or point of this study? (A toy calculation after this list makes the mechanics of this anomaly concrete.) (NOTE: I am actually thankful to the authors for having been honest enough to provide the ACLU example because I would otherwise have had to construct a hypothetical example to illustrate my thought process.)

      • The ACLU case may seem extreme, but it is not. It is not at all difficult to fathom a situation where support for a think tank (say, Brookings) is overstated among certain (e.g., left-leaning) legislators because of the factors described above.
  • Moreover, as I said earlier, a legislator may cite a think tank not because he or she mostly agrees with the think tank but because that think tank's view is closer to his or her view than that of any other think-tank the legislator is aware of or cares to cite. Say a legislator likes MoveOn.org but fears the negative publicity he may get from his constituents through fake attack ads in the next election. Should he cite MoveOn's position, or try to pick a more centrist think-tank like Brookings that is somewhat close to his position, to make his point? Obviously some of these think-tanks do overlap in some of their positions (e.g., "No Crisis in Social Security"). This is yet another case where the authors' model would overstate the think tank's ideology match with the legislator.
  • There is an additional problem inherent in what I discussed above. A legislator may have a certain policy position but rarely cite a think tank that agrees with that position publicly, simply because (a) he or she may not be aware of the think tank itself, (b) he or she may not even be aware that the think tank's position is actually in sync with his or her position, or (c) he or she may NOT find it necessary to cite the think tank to make a point. All of this would understate the think-tank's ideological similarities with the legislator. This could cut both ways in terms of the impact on the final results.
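
To see how this plays out numerically, here is the toy calculation promised above (all numbers invented, loosely patterned on the ACLU/McConnell case quoted earlier). One legislator's repeated citations on a single narrow point of agreement drag the group's derived score below the authors' 50.1 "center", and dropping that legislator merely substitutes one distortion for another:

# Toy illustration (all numbers invented) of the citation-average method,
# loosely patterned on the ACLU/McConnell anomaly. Each entry is
# (adjusted ADA score of the citing legislator, number of approving
# citations). Critical citations are dropped from the data set entirely,
# per the paper's stated procedure, so disagreement never registers.
citations = [
    (90.0, 3),   # a few liberal legislators, citing the group occasionally
    (85.0, 4),
    (80.0, 3),
    (10.0, 10),  # one conservative, citing it repeatedly on a single
                 # narrow point of agreement (the McConnell pattern)
]

def average_score(data):
    """Citation-weighted mean ADA score of the legislators citing the group."""
    total = sum(n for _, n in data)
    return sum(score * n for score, n in data) / total

print(round(average_score(citations), 1))       # 47.5 -> "right-leaning"
                                                # vs. the authors' 50.1 center
print(round(average_score(citations[:-1]), 1))  # 85.0 -> drop one legislator
                                                # and the group flips "liberal"
# Neither number measures what the legislators actually think of the group;
# the method has no term for disagreements that are never voiced, or that
# were voiced as criticism (and hence excluded from the data).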

Thus, the very basis of this study - deriving think-tank ideology in a relative fashion (rather than an objective fashion) - makes its results dubious. While it is clear that a lot of thought went into it, and the idea pursued here is interesting, the basic assumptions required to make this model work fundamentally limit its ability to produce results that are even reasonably accurate. The net effect of the model could be to artificially push the results in the direction of making the think tank seem far closer to the legislator's ideology than it is in reality (it is also possible to mask closer ideological matches with lesser-known or more ideological think-tanks that exist in reality). Scientifically speaking, and with due respect, this flaw is so severe that the conclusions of this paper on the derived ideology of the think-tanks are completely unreliable.

Let me also add that the same arguments apply to the treatment of the media as well, but in a different sense too. It is one thing for a legislator to cite a think-tank approvingly. In a news report, though, unless the media outlet specifically states that it approves the position of the think-tank cited in the news report, assuming that the media outlet somehow overtly shares the ideology of the think-tank is a big, unwarranted leap of faith. For instance, a citation may reflect an unintentional bias in the choice of think-tank based on the reporter's knowledge base, or laziness in reporting, or simply a matter of day-to-day editorial judgment (which has to consider the timeliness of a news story, among other things), or even the fact that the media cites a think-tank because the legislator does. Broadly speaking, I would tend to agree that the relative proportion of liberal, conservative or centrist think-tanks that a media outlet cites is a piece of information worth knowing. But the methodology used in this particular paper would really not answer this question with reasonable accuracy, since it is flawed.

1.2 A minor note.

The authors say that:

Along with direct quotes of think tanks, we sometimes included sentences that were not direct quotes.  For instance, many of the citations were cases where a member of Congress noted “This bill is supported by think tank X.”  Also, members of Congress sometimes insert printed material into the Congressional Record, such as a letter, a newspaper article, or a report.  If a think tank was cited in such material or if a think tank member wrote the material, we treated it just as if the member of Congress had read the material in his or her speech.

We did the same exercise for stories that media outlets report, except with media outlets we did not record an ADA score.  Instead, our method estimates such a score.

Sometimes a legislator or journalist noted an action that a think tank had taken—e.g. that it raised a certain amount of money, initiated a boycott, filed a lawsuit, elected new officers, or held its annual convention.  We did not record such cases in our data set.  However, sometimes in the process of describing such actions, the journalist or legislator would quote a member of the think tank, and the quote revealed the think tank’s views on national policy, or the quote stated a fact that is relevant to national policy.  If so, we would record that quote in our data set.  For instance, suppose a reporter noted “The NAACP has asked its members to boycott businesses in the state of South Carolina.  `We are initiating this boycott, because we believe that it is racist to fly the Confederate Flag on the state capitol,’ a leader of the group noted.” In this instance, we would count the second sentence that the reporter wrote, but not the first. 

This section raises a question which pertains to the accuracy of the study. 

Maybe this is mentioned elsewhere in the paper and I missed it, but I don't see a specific qualification associated with a legislator (or news organization) citing a particular think-tank -- in the sense of whether the citation is accompanied by an overt or implied agreement with that think-tank on the particular item being cited. For example, if a legislator cites a think-tank to make a neutral observation (e.g., "Here's what Brookings says, but I don't know whether they are right or not"), how is that counted? In the description in the paper, the authors say: "...member of Congress would quote a member of the think tank, and the quote revealed the think tank’s views on national policy, or the quote stated a fact that is relevant to national policy. If so, we would record that quote in our data set." Does the member of Congress actually have to agree with the quoted portion for it to be recorded?


SECTION 2. ASSESSING THE IDEOLOGY OF MEDIA OUTLETS 

2.1 The authors say:

To compute our measure, we count the times that a media outlet cites various think tanks and other policy groups.[1]  We compare this with the times that members of Congress cite the same think tanks in their speeches on the floor of the House and Senate.  By comparing the citation patterns we can construct an ADA score for each media outlet. 

The authors are assessing "media bias" by extracting adjusted ADA (Americans for Democratic Action) scores for media outlets by comparing their think-tank citations to those of the legislators whose adjusted ADA scores are independently estimated using voting records. Put another way, G-M's goal is to assess the bias of the media outlet relative to that of legislators - using the common variable of think-tank citations.

However, if I understand their methodology correctly, there's an important point of detail which should be noted.

As they point out via an example:

As a simplified example, imagine that there were only two think tanks, one liberal and one conservative.  Suppose that the New York Times cited the liberal think tank twice as often as the conservative one.  Our method asks:  What is the estimated ADA score of a member of Congress who exhibits the same frequency (2:1) in his or her speeches?  This is the score that our method would assign the New York Times. 

This is perhaps the most important portion of their entire paper. Why?

What they are effectively saying is that if the nature of think-tank citations by a legislator who would be considered liberal based on his/her adjusted ADA score (including possible adjustments to reflect the "median voter") matches the nature of citations by the New York Times, then the New York Times would be deemed liberal (and assigned a similar ADA score). But is it really that straightforward? Put another way, is there any situation under which this assumption would be wrong? The answer is, resoundingly, yes. Consider a hypothetical (not really, but let's say it is a hypothetical) situation where liberal legislators have a habit of citing centrist (or, less likely, conservative) think-tanks. In this scenario, if the New York Times, in similar fashion, cites those centrist (or conservative) think-tanks, then the New York Times should decidedly not be considered liberal - it ought to be considered "centrist" (or conservative) - yet the authors' method would label it liberal anyway, because its citations match those of a liberal legislator. The problem here is that the think-tank's ideology is NOT assessed independently.

According to the authors' model, for the media to be considered liberal, the think-tanks cited by liberal legislators need to be established unequivocally as being liberal. I have already shown in Section 1 that the methodology used by the paper to determine the ideology of the think-tanks is fundamentally flawed. So, right away, we can conclude that the main conclusion of this paper - that most of the media outlets they examined are "liberal" - does not hold. 

This is another serious flaw, and the final outcome is not surprising at all, because this is the kind of difficulty one runs into when one does not assess ideology objectively but, rather, extracts it with reference to something else.

[NOTE: The above argument is not specific to the liberal perspective. If conservative legislators end up excessively citing centrist or liberal think-tanks (as established using an independent ideology analysis), one would run into the same kind of problem. In other words, this is a structural problem with the model and has nothing to do with partisan or ideological issues].

A couple of additional points: 

  • It is not at all implausible that left-leaning legislators may cite more centrist think-tanks in public than progressive/liberal ones (even today). The authors "used the period 1993 to 1999 to calculate the average adjusted ADA score for members of Congress.[15]" Recall that this was a period when Democrats were led by one of the most centrist Democratic Presidents in modern times. In part due to Clinton and the DLC, and in part due to pressure from the Republican Party, Democrats were under pressure to appear more centrist because of their congressional losses (stemming partly from the defeat of the Clinton healthcare plan, which was portrayed very negatively - with claims such as "socialism" - by the Republicans and, by extension, the media). This situation has not changed substantially since Bush came to power. Until Jan 2005, the party had been influenced far more heavily by the DLC, especially on economic matters (the mainstay of think-tanks like Brookings), but to some extent even on socio-economic issues like welfare.
  • Even today, the passage of the God-awful Bankruptcy Bill is a measure of how legislators in the Democratic party (and Republican party) decided to ignore their real liberal (and conservative) values. Needless to say, overt dalliances with liberal groups are often met with negative publicity in the media due to the Republican Noise Machine. Let me give you another hypothetical example to illustrate what I mean. A liberal senator afraid of being associated with, say, MoveOn.org may over a period of time repeatedly cite Brookings, with whom he only partially agrees (e.g., let's say, for argument, that the legislator, MoveOn.org and Brookings all agree that social security is not "in crisis", but the legislator and MoveOn.org feel that the way to address social security problems is to repeal some of Bush's tax cuts, whereas Brookings may be considering raising the retirement age as a solution (this is PURELY hypothetical and I really don't know Brookings' latest position on social security)). The legislator may simply choose to cite Brookings to make the point that there is no crisis in social security but say nothing about retirement age increases, even though he is against them. This will make Brookings seem more liberal than it is, and likewise make a media outlet seem more liberal if it cites Brookings more. But the reality would be quite different. (This example only goes to illustrate that the method used in this paper is extremely blunt, and it is no surprise that the results are unreliable.)

2.2 There is another serious problem with the authors' methodology, which concerns another fundamental underpinning of this paper. Here, I am referring to the methodology used to extract the reference ADA score, which demarcates "liberals" and "conservatives" in Congress by defining the "center". Clearly, unless this methodology is air-tight, even the determination of which legislator is "liberal" and which is not would be questionable.

2.2.1 To address this point, let's review the assumptions made by the authors in the section titled "Digression: Defining the “Center”" - with bold text being my emphasis.

While the main goal of our research is to provide a measure that allows us to compare the ideological positions of media outlets to political actors, a separate goal is to express whether a news outlet is left or right of center.  To do the latter, we must define center.  This is a little more arbitrary than the first exercise.  For instance, the results of the previous section show that the average NY Times article is approximately as liberal as the average Joe Lieberman (D-Ct.) speech.  While Lieberman is left of center in the U.S. Senate, many would claim that, compared to all persons in the entire world, he is centrist or even right-leaning.  And if the latter is one’s criterion, then nearly all of the media outlets that we examine are right of center.

However, we are more interested in defining centrist by U.S. views, rather than world views or, say, European views.  One reason is that the primary consumers for the 20 news outlets that we examine are in the U.S.   If, for example, we wish to test economic theories about whether U.S. news producers are adequately catering to the demands of their consumers, then U.S. consumers are the ones on which we should focus.  A second reason is that the popular debate on media bias has focused on U.S. views, not world views.  For instance, in Bernard Goldberg’s (2002) insider account of CBS News, he only claims that CBS is more liberal than the average American, not the average European or world citizen.

The authors present a straw-man argument here. Almost no serious critic is going to argue that "compared to all persons in the entire world, [Lieberman] is centrist or even right-leaning." If anything, the argument would be that Lieberman might be centrist or right-leaning compared to all persons in the United States. Also, the authors' serious consideration of Bernard Goldberg's claims reflects one of the other fundamental flaws in their assumptions and methodology, which I will discuss towards the end of this critique.

Given this, one of the simplest definitions of centrist is simply to use the mean or median ideological score of the U.S. House or Senate.  We focus on mean scores since the median tends to be unstable.[30]  This is due to the bi-modal nature that ADA scores have followed in recent years.  For instance, in 1999 only three senators, out of a total of 100, received a score between 33 and 67.  In contrast, 33 senators would have received scores in this range if the scores had been distributed uniformly, and the number would be even larger if scores had been distributed uni-modally.[31]

I am glad to see that the authors abandoned the median approach considering how fatally flawed it is - as I showed in my original response. Additionally, what is even more interesting is how the arguments supporting the use of mean over median scores are completely reversed from their original paper, where they said: "...Rather it is to demonstrate an arbitrariness that exists when one uses a mean score for comparison. The same arbitrariness does not exist with median scores." That's quite a change, but I am glad they fixed this error because the median score is useless. This is not to say that the use of a mean score is reliable - more on that shortly.

We are most interested in comparing news outlets to the centrist voter, who, for a number of reasons, might not have the same ideology as the centrist member of Congress.  For instance, because Washington , D.C. is not represented in Congress and because D.C. residents tend to be more liberal than the rest of the country, the centrist member of Congress should tend to be more conservative than the centrist voter. 

Another problem, which applies only to the Senate, involves the fact that voters from small states are overrepresented.  Since in recent years small states have tended to vote more conservatively than large states, this would cause the centrist member of the Senate to be more conservative than the centrist voter.

A third reason, which applies only to the House, is that gerrymandered districts can skew the relationship between a centrist voter and a centrist member of the House.  For instance, although the total votes for Al Gore and George W. Bush favored Gore slightly, the median House district slightly favored Bush.  Specifically, if we exclude the District of Columbia (since it does not have a House member), Al Gore received 50.19% of the two-party vote.  Yet in the median House district (judging by Gore-Bush vote percentages), Al Gore received only 48.96% of the two-party vote.  (Twelve districts had percentages between the median and mean percentages.)  The fact that the latter number is smaller than the former number means that House districts are drawn to favor Republicans slightly. Similar results occurred in the 1996 election.  Bill Clinton received 54.66% of the two-party vote.  Yet in the median House district he received 53.54%.

It is not obvious to me that the claim that "the latter number is smaller than the former number means that House districts are drawn to favor Republicans slightly" is true. But this is a minor point, so let's continue reading.  

It is possible to overcome each of these problems to estimate an ADA score of the centrist voter in the U.S.   First, to account for the D.C. bias, we can add phantom D.C. legislators to the House and Senate.  Of course, we necessarily do not know the ADA scores of such legislators.  However, it is reasonable to believe that they would be fairly liberal, since D.C. residents tend to vote overwhelmingly Democratic in presidential elections.  (They voted 90.5% for Gore in 2000, and they voted 90.6% for Kerry in 2004.)  For each year, we gave the phantom D.C. House member and senators the highest respective House and Senate scores that occurred that year.  Of course, actual D.C. legislators might not be quite so liberal.  However, one of our main conclusions is that the media are liberal compared to U.S. voters.  Consequently, it is better err on the side of making voters appear more liberal than they really are than the opposite.[32]

The second problem, the small-state bias in the Senate, can be overcome simply by weighting each senator’s score by the population of his or her state. The third problem, gerrymandered districts in the House, is overcome simply by the fact that we use mean scores instead of the median.[33]

In Figure 1, we list the mean House and Senate scores over the period 1947-99 when we use this methodology (i.e. including phantom D.C. legislators and weighting senators’ scores by the population of their state).  The focus of our results is for the period 1995-99.  We chose 1999 as the end year simply because this is the last year for which Groseclose, Levitt, and Snyder (1999) computed adjusted ADA scores.  However, any conclusions that we make for this period should also hold for the 2000-04 period, since in the latter period the House and Senate had almost identical party ratios.  We chose 1995 as the beginning year, because it is the first year after the historic 1994 elections, where Republicans gained 52 House seats and eight Senate seats.  This year, it is reasonable to believe, marks the beginning of separate era of American politics.  As a consequence, if one wanted to test hypotheses about the typical U.S. voter of, say, 1999, then the years 1998, 1997, 1996, and 1995 would also provide helpful data.  However, prior years would not.

Over this period the mean score of the Senate (after including phantom D.C. senators and weighting by state population) varied between 49.28 and 50.87.  The mean of these means was 49.94.  The similar figure for the House was 50.18.  After rounding, we use the midpoint of these numbers, 50.1, as our estimate of the adjusted ADA score of the centrist U.S. voter.[34]
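
For concreteness, here is a minimal sketch of the centering procedure the quoted passage describes, with invented populations and scores (the paper performs this per year over 1995-99 and then averages the results):

# Minimal sketch (invented data) of the authors' centering procedure:
# add phantom D.C. legislators at the chamber's most liberal score, then
# weight each senator's adjusted ADA score by his or her state population.
senate = [
    (34.0, 88.0),   # (population in millions, adjusted ADA score)
    (21.0, 45.0),
    (5.0, 20.0),
    (1.5, 12.0),
    (0.9, 15.0),
]

# Phantom D.C. senators receive the highest score occurring that year,
# per the quoted procedure (D.C. population roughly 0.6 million).
max_score = max(score for _, score in senate)
senate_with_dc = senate + [(0.6, max_score)]

def weighted_mean(rows):
    """Population-weighted mean adjusted ADA score."""
    total_pop = sum(pop for pop, _ in rows)
    return sum(pop * score for pop, score in rows) / total_pop

print(round(weighted_mean(senate), 2))          # 65.2  (before the D.C. fix)
print(round(weighted_mean(senate_with_dc), 2))  # 65.42 (the fix nudges the
                                                #        center leftward)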

The authors' treatment of centrism does show that a lot of thought went into it. However, their claim that they have extracted an adjusted ADA score for the centrist U.S. voter, using adjusted ADA scores of legislators and voting-related metrics, is not meaningful. The reason this is a serious problem is that this is another indirect method, one that relies on the ideologies of elected politicians (and voting stats) to extract the ideology of voters. This implicitly assumes that the (centrist or other) voter knows which politicians actually represent his or her real needs and views, and votes for/elects them. This is a fundamentally mistaken assumption because voters are bamboozled all the time by the politicians they vote for (or don't vote for) - through fake or misleading ads, through poor media coverage of their elected representative's actual positions, through the representative himself/herself misrepresenting his or her own views or positions to voters or supporters, etc. A voter's ideology may have major differences and disconnects with the ideology of the politicians the voter supports, simply because the voter is not well informed about the politician's real views, actions or voting records.

Especially considering that this is a study of media bias, clearly biases in the media can leave impressions on voters that may sway them in a direction opposed to their own ideology or beliefs, without their realizing how they have been swayed. After all, that is one of the prime reasons to be worried about media bias. Put another way, centrist voters (if they even exist - in a deeply polarized environment with entrenched beliefs, I wonder whether there is such a thing as a "centrist" voter) may not be electing candidates who actually stand for what the voters think they stand for. [This is not conjecture. During Election 2004 many voters were quite ill-informed about their candidate's positions - especially those who supported George Bush.]

G-M are free to pick this metric to set their center, but calling it the adjusted ADA score of a "centrist voter" is not correct. Having said that, there is still a major problem with a metric of this kind, as I discuss in Sec. 2.2.3.

[NOTE: In my original response, I included a section 2.2.2 to highlight the fatal flaw of using median ADA scores. Since the authors' revised paper no longer tries to make their case using median scores, this section has been removed. You can read my original critique here]. 

2.2.3 The built-in assumption in the G-M approach is that the ADA score fully represents the real tenets of liberalism, since we are using it to establish how liberal or conservative the legislators are (this assumption is debatable, but let's assume it is true). What is strange is that the authors then abandon the objective definition of the degree of liberalism as defined by the ADA scores, and base their judgments of legislator, think-tank and media ideology on a relative reference that may not have much to do with the real tenets of the ideology at all. Let me explain.

First, as I said before, we need to assume that an ADA score is indeed the correct measure of a legislator's ideology (I have no idea what the ADA's policy positions are, which is why I make this point). For the sake of argument, let us assume it is.

Let's look at a couple of cases to see how problematic this model is even with the Weighted-Mean ADA score approach. 

  • If the Republican majority suddenly decides to become 100% conservative, guess what happens. The weighted-mean ADA score would drop, even if the Democrats in Congress DID NOT change at all, and even if the media outlets that are considered "liberal", by the G-M definition, remain STATIC (i.e., no change in their think-tank citation ratios and those of the corresponding "liberals" in Congress). In this case, even though the media's ideology has NOT changed at all, its adjusted ADA score(s) will artificially look more liberal compared to the lower weighted-mean ADA score. (BONUS FOR LEFTIES: This is right in line with one of the long-time Republican strategies of declaring the media (and Democrats) to be too "liberal" by moving the country to the Right.) This is not a partisan issue, though. The opposite could occur with media outlets that are considered "conservative" because they match the citations of conservative Republicans, if the Democrats decide to become 100% liberal.

  • Another scenario is when one party suffers a major defeat in the elections and becomes the minority party, when it was previously the majority party. In an extreme case, where you end up with, say, far more members of one party than the other, the mean would simply be skewed by the presence of the majority party. While one may consider this as reflecting the intent of the voter, it may have nothing to do with the actual liberalism or conservatism of the voter per se, but instead reflect unhappiness on even a single issue. The result, though, would be a shift of the weighted-mean ADA score even though media outlets have not changed in the interim. Thus, the ideology of media outlets is defined with respect to a reference that changes with time, often significantly (the sketch below makes this concrete).
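
Here is a toy calculation of the first scenario (all numbers invented; I use a simple mean rather than their population-weighted mean, which changes nothing about the point). The Democrats and the outlet's derived score are held fixed; only the Republicans move:

# Toy illustration (all numbers invented): the derived "center" moves when
# one party's members change, relabeling a media outlet whose own citation
# behavior - and hence its derived ADA score - did not change at all.
democrats = [75.0] * 50           # adjusted ADA scores, held fixed throughout
republicans_before = [25.0] * 50
republicans_after = [5.0] * 50    # the majority turns uniformly more conservative

outlet_score = 52.0               # the outlet's derived score: STATIC in both cases

def center(*blocs):
    """Mean adjusted ADA score of the combined chamber (unweighted here)."""
    scores = [s for bloc in blocs for s in bloc]
    return sum(scores) / len(scores)

for label, gop in [("before", republicans_before), ("after", republicans_after)]:
    c = center(democrats, gop)
    print(label, "center =", c,
          "-> outlet sits", round(outlet_score - c, 1), "points left of center")
# before: center = 50.0 -> outlet sits  2.0 points left of center
# after:  center = 40.0 -> outlet sits 12.0 points left of center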

What the examples above illustrate is that the weighted-mean adjusted ADA score is also highly inappropriate for assessing think-tank and media ideology, even if one were to trust the indirect media bias extraction methodology used by the authors (which, by the way, was shown earlier on this page to be untenable).

The core of the problem here is that the model is fundamentally flawed: on the one hand it assumes that ADA scores can provide an absolute picture of a legislator's ideology, and on the other it assumes that media and think-tank ideology should be determined not using the same absolute reference but a relative, moving reference that is highly dependent on who holds the majority in Congress and how they think or vote. This is not an acceptable model, for, if the minority party becomes the majority party in the next election, the derived ideology of think-tanks or the media could change significantly even though their actual positions underwent ZERO change.

To correct this, one would need to have an absolute, objective reference for ideology. That is not just to make the model work. One needs that because that is reality! 

NOTE: What represents the ideologically pure point may actually change a bit over time, and that is perfectly acceptable. Only, in that case, the people defining what it means to be ideologically pure must make it clear that they are moving their goalposts.


SECTION 3. DEFINING MEDIA BIAS 

I have shown in sections 1 and 2 that the basic methodology used by this paper, while interesting, is so deeply flawed that the final results simply don't have a lot of significance or accuracy. This conclusion was based on the assumption that G-M's definition of media bias is as they have stated in their paper.

Perhaps the most serious flaw of this paper, though, is that their definition of media bias itself is completely incorrect. Here is what they say about their definition (bold text is my emphasis):

Before  proceeding, it is useful to clarify our definition of bias.  Most important, the definition has nothing to do with the honesty or accuracy of the news outlet.  Instead, our notion is more like a taste or preference.  For instance, we estimate that the centrist U.S. voter during the late 1990s had a left-right ideology approximately equal to that of Arlen Specter (R-Pa.) or Sam Nunn (D-Ga.).  Meanwhile, we estimate that the average New York Times article is ideologically very similar to the average speech by Joe Lieberman (D-Ct.).  Next, since vote scores show Lieberman to be more liberal than Specter or Nunn, our method concludes that the New York Times has a liberal bias.  However, in no way does this imply that the New York Times is inaccurate or dishonest—just as the vote scores do not imply that Joe Lieberman is any less honest than Sam Nunn or Arlen Specter.

In contrast, other writers, at least at times, do define bias as a matter of accuracy or honesty.  We emphasize that our differences with such writers are ones of semantics, not substance.  If, say, a reader insists that bias should refer to accuracy or honesty, then we urge him or her simply to substitute another word wherever we write “bias”.  Perhaps “slant” is a good alternative.

However, at the same time, we argue that our notion of bias is meaningful and relevant, and perhaps more meaningful and relevant than the alternative notion.  The main reason, we believe, is that only seldom do journalists make dishonest statements.  Cases such as Jayson Blair, Stephen Glass, or the falsified memo at CBS are rare; they make headlines when they do occur; and much of the time they are orthogonal to any political bias.

Instead, for every sin of commission, such as those by Glass or Blair, we believe that there are hundreds, and maybe thousands, of sins of omission—cases where a journalist chose facts or stories that only one side of the political spectrum is likely to mention.

This passage alone shows that the authors have a view of the American media that has very little to do with reality. The problems with their statements are numerous.

(a) Their assumption that "seldom do journalists make dishonest statements" is completely wrong. If they were to scroll through even the examples cited on this website, they would notice that dishonesty or inaccuracy is very, very common, and not an exception.

(b) Just because a journalist chooses "facts or stories that only one side of the political spectrum is likely to mention" does not make the reporting biased! This is true for multiple reasons:

  • The other side may have declined to comment

  • The other side does not challenge the facts cited in the article

  • The factual accuracy and completeness of the article is excellent, without having to cite the "other side"

  • The comments from the "other side" may be simply false and a journalism organization may not want to be the purveyor of blatant falsehoods. Thus, citing a think-tank says nothing about whether that think-tank is accurate or not (also see this post by Brian Montopoli at CJR Daily for some additional perspective on the issue of citations).

(c) The G-M paper does not examine anything about whether the media actually reports the positions of liberals or conservatives accurately, even if they cite both sides. 

(d) The focus on think-tank citations completely ignores what the media communicates to viewers or readers when it is NOT citing think-tanks, which is a big chunk of the time. 

Thus, I am quite stunned that the authors make such a sweeping conclusion about media bias when they have omitted from consideration so many important factors in the media bias debate. 

The authors reflect their complete disconnect from the reality of the U.S. media with this quote and statement:

Our notion of bias also seems closely aligned to the notion described by Bozell and Baker (1990, 3):

But though bias in the media exists, it is rarely a conscious attempt to distort the news.  It stems from the fact that most members of the media elite have little contact with conservatives and make little effort to understand the conservative viewpoint.  Their friends are liberals, what they read and hear is written by liberals.[20]

Similar to the facts and stories that journalists report, the citations that they gather from experts are also very rarely dishonest or inaccurate.  

The authors' citing serial liar Brent Bozell's claim that there is "rarely a conscious attempt to distort the news" is incredibly ironic! Their claim that "the citations that they gather from experts are also very rarely dishonest or inaccurate" also suggests that they are very un-skeptical when it comes to absorbing news. (Think-tanks on both sides, but much more so on the Right, are well known for their misleading claims or outright dishonesty.)

The authors also make this claim:

A final anecdote gives some compelling evidence that our method is not biased.   Note that none of the above arguments suggest a problem with the way our method ranks media outlets.  Now, suppose that there is no problem with the rankings, yet our method is plagued with a significant bias that systematically causes media outlets to appear more liberal (conservative) than they really are.  If so, then this means that the three outlets we find to be most centrist (Newshour with Jim Lehrer, Good Morning America , and Newsnight with Aaron Brown) are actually conservative (liberal).  But if this is true, why did John Kerry’s (George W. Bush’s) campaign agree to allow three of the four debate moderators to come from these outlets?

This is unfortunately quite a poor example to claim vindication. There are any number of reasons why a Democrat may allow moderators to come from these media outlets:

  • The Democrat may have no idea what the real bias of those outlets is

  • The Democrat may want to show that he is comfortable taking questions from anyone, regardless of their ideology

  • The Democrat (and his party) has a poor history of calling out conservative bias in the media, compared to the long history of fake "liberal bias" claims from the Republican party

  • etc.

So, the anecdote G-M offer is in NO WAY a vindication of their approach or their conclusions.

The authors conclude their paper with the following statements:

Rather, the main goal of our research is simply to demonstrate that it is possible to create an objective measure of the slant of the news.  Once this is done, as we hope we have demonstrated in this section, it is easy to raise a host of theoretical issues to which such a measure can be applied.

Unfortunately, they have not demonstrated this at all. Their method/approach is extraordinarily subjective and has hardly any connection to media bias at all.

Let me also add that I have been studying media bias long enough to know that there is a tendency to be overly simplistic in claiming media bias. When controlled for other factors (see Appendix A), the more fundamental determinant of bias is the accuracy of news reporting, not whom the reports cite. To the extent that news reporting could become inaccurate by citing certain think-tanks over others, one may have a case that think-tank citations could influence the accuracy of the reports. But, G-M have fallen into the trap of assuming that the part is the whole. Think-tank citations are merely one part of the whole - which is the media's accuracy in news reporting. Let's not forget that and jump to unwarranted conclusions!

P.S. Also read the comments made by Buck Turgidson in the comments section of my post about this paper at The Left Coaster.


APPENDIX A

While accuracy is the most important aspect, it is not the only one. For the purposes of this response, this clarification is not that important, but I wanted to make it clear that I wasn't entirely ignoring other parallel aspects (albeit less important in most cases) that may affect media bias; when I say "Think-tank citations are merely one part of the whole - ...accuracy" I mean that accuracy is the "whole" relative to citations (the "part").

One of the aspects outside of accuracy is the issue of the topics covered by a media outlet. Topic choice is certainly a function of editorial bias, but it is also a function of numerous other confounding factors - source credibility, events, circumstances, issues of public interest, issues of interest to politicians or policy-makers, issues of interest to the media outlet to ensure its revenues and profits in the markets it competes in, etc. So it would be much more difficult to credibly demonstrate editorial bias on topic choice by itself. G-M's study is unconcerned with topic choice, and that's fine; my response does not address this either.

I would also like to clarify that my definition of "accuracy" is broad: it encompasses the notion that coverage on a topic may not be accurate [using the reader/customer as the object of the coverage] if different, credible viewpoints are not publicized to the same degree (i.e., "unequal coverage"). Thus, the New York Times and other media outlets revving up the war drums for George Bush on their front pages, or repeating the drumbeat endlessly on top TV shows, while relegating stories (if any) challenging the Bush administration's allegations to somewhere deep in the paper (or to TV shows with lower-frequency runs or lower ratings), is something I would consider an issue of "accuracy", because the significantly disparate treatment makes it much less likely that the full picture is conveyed to the same cross-section of readers or viewers. [In fact, this has been a common complaint I have expressed to friends when I talk to them about media bias.] In some sense this is a matter of semantics. I was simply trying to point out in Sec. 3 that accuracy is a more fundamental determinant than citations.

If I wanted to be broader in my scope in Sec. 3, instead of saying "To the extent that news reporting could become inaccurate by citing certain think-tanks over others, one may have a case that think-tank citations could influence the accuracy of the reports," I could have said "To the extent that news reporting could become inaccurate by citing certain think-tanks over others or by publicizing the views of one side less prominently, one may have a case that think-tank citations could influence the accuracy of the reports." But, as I said, this adds a dimension to my response that is unnecessary in the context of G-M's paper, because they are not assessing the placement of articles/news items - only frequency.


P.S. I will address the topic of how media bias should be defined in this page.