AGENCY:
Copyright Royalty Board (CRB), Library of Congress.
ACTION:
Final allocation determination.
SUMMARY:
The Copyright Royalty Judges announce the allocation of shares of cable and satellite royalty funds for the years 2010, 2011, 2012, and 2013 among six claimant groups.
ADDRESSES:
The final distribution order is also published in eCRB at https://app.crb.gov/.
Docket: For access to the docket to read background documents, go to eCRB, the Copyright Royalty Board's electronic filing and case management system, at https://app.crb.gov/ and search for CONSOLIDATED docket number 14-CRB-0010-CD (2010-2013). For older documents not yet uploaded to eCRB, go to the agency website at https://www.crb.gov/ or contact the CRB Program Specialist.
FOR FURTHER INFORMATION CONTACT:
Anita Blaine, CRB Program Specialist, by phone at (202) 707-7658 or by email at crb@loc.gov.
SUPPLEMENTARY INFORMATION:
Final Determination of Royalty Allocation
The purpose of this proceeding is to determine the allocation of shares of the 2010-2013 cable royalty funds among six claimant groups: The Joint Sports Claimants, Commercial Television Claimants, Public Television Claimants, Canadian Claimants Group, Settling Devotional Claimants, and Program Suppliers.[1] The parties have agreed to settlements regarding the shares to be allocated to the Music Claimants and National Public Radio (NPR). Public Television Claimants Proposed Findings of Fact and Conclusions of Law (PFFCL) ¶ 1.
Between 2012 and 2015, the Judges ordered partial distributions of the 2010-2013 cable funds to the “Phase I” participants (including Music Claimants and NPR) according to allocation percentages agreed upon by the participants. Order Granting Phase I Claimants' Motion for Partial Distribution of 2010 Cable Royalty Funds, Docket No. 2012-4 CRB CD 2010 (Sept. 14, 2012); Order Granting Phase I Claimants' Motion for Partial Distribution of 2011 Cable Royalty Funds, Docket No. 2012-9 CRB CD 2011 (Mar. 13, 2013); Order Granting Motion of Phase I Claimants for Partial Distribution, Docket No. 14-CRB-0007 CD (2010-12) (Dec. 23, 2014); Order Granting Motion of Phase I Claimants for Partial Distribution, Docket No. 14-CRB-0010 CD (2013) (May 28, 2015).
In December 2016, the Judges ordered the final distribution of the settled shares from the remaining funds to Music Claimants and National Public Radio. Amended Order Granting Motion for Final Distribution of 2010-2013 Cable Royalty Funds to Music Claimants (Aug. 23, 2017); Order Granting Motion for Final Distribution of 2010-2013 Cable Royalty Funds to National Public Radio (Aug. 23, 2017). When the Judges ultimately order the final distribution of the remaining 2010-13 cable royalty funds, they will direct the Licensing Division of the Copyright Office to adjust distributions to each participant to account for partial distributions and to apply the allocation percentages determined herein.
Based on the record in this proceeding, the Judges make the following allocation of deposited royalties.[2]
Table 1—Royalty Allocations

                         2010 (%)   2011 (%)   2012 (%)   2013 (%)
Basic Fund:
  Canadian Claimants          5.0        5.0        5.0        5.5
  Commercial TV              16.8       16.8       16.2       15.3
  Devotional Programs         4.0        5.5        5.5        4.3
  Program Suppliers          26.5       23.9       21.5       19.3
  Public TV                  14.8       18.6       17.9       19.5
  Sports                     32.9       30.2       33.9       36.1
3.75% Fund:
  Canadian Claimants          5.9        6.1        6.1        6.8
  Commercial TV              19.7       20.6       19.7       19.0
  Devotional Programs         4.7        6.8        6.7        5.3
  Program Suppliers          31.1       29.4       26.2       24.0
  Public TV                   0.0        0.0        0.0        0.0
  Sports                     38.6       37.1       41.3       44.9
Syndex Fund:
  Program Suppliers           100        100        100        100

Program Suppliers filed a timely request for rehearing on November 2, 2018 (Rehearing Request). The Judges issued their ruling on the Rehearing Request on December 13, 2018 (Order on Rehearing), denying rehearing on any basis asserted by Program Suppliers in the Rehearing Request. The Initial Determination is, therefore, the Judges' Final Determination in this proceeding.
I. Background
A. Legal Context
In 1976, Congress granted cable television operators a statutory license to enable them to clear the copyrights to over-the-air television and radio broadcast programming which they retransmit to their subscribers. The license requires cable operators to submit semi-annual royalty payments, along with accompanying statements of account, to the Copyright Office for subsequent distribution to copyright owners of the broadcast programming that those cable operators retransmit. See 17 U.S.C. 111(d)(1). To determine how the collected royalties are to be distributed among the copyright owners filing claims for them, the Copyright Royalty Judges (Judges) conduct a proceeding in accordance with chapter 8 of the Copyright Act. This determination is the culmination of one of those proceedings.[3]

Proceedings for determining the distribution of the cable license royalties historically have been conducted in two phases. In Phase I, the royalties were divided among programming categories. The claimants to the royalties have previously organized themselves into eight categories of programming retransmitted by cable systems: Movies and syndicated television programming; sports programming; commercial broadcast programming; religious broadcast programming; noncommercial television broadcast programming; Canadian broadcast programming; noncommercial radio broadcast programming; and music contained on all broadcast programming. In Phase II, the royalties allotted to each category at Phase I were subdivided among the various copyright holders within that category.[4]

In the current proceeding, the Judges broke with past practice by combining Phase I and Phase II into a single proceeding in which the functions of allocating funds between program categories and distributing funds among claimants within those categories would proceed in parallel.[5] This determination addresses the Allocation Phase for royalties collected from cable operators for the years 2010, 2011, 2012 and 2013.
The statutory cable license places cable systems into three classes based upon the fees they receive from their subscribers for the retransmission of over-the-air broadcast signals. Small- and medium-sized systems pay a flat fee. See 17 U.S.C. 111(d)(1). Large cable systems (“Form 3” systems) [6] —whose royalty payments comprise the lion's share of the royalties distributed in this proceeding—pay a percentage of the gross receipts they receive from their subscribers for each distant over-the-air broadcast station signal they retransmit.[7] The amount of royalties that a cable system must pay for each broadcast station signal it retransmits depends upon how the carriage of that signal would have been regulated by the Federal Communications Commission (“FCC”) in 1976, the year in which the current Copyright Act was enacted.
The royalty scheme for large cable systems employs a statutory device known as the distant signal equivalent (DSE), which is defined at 17 U.S.C. 111(f)(5). The cable systems, other than those paying the minimum fee, pay royalties based upon the number of DSEs they retransmit. The greater the number of DSEs a cable system retransmits, the larger its total royalty payment. The cable system pays these royalties to the Copyright Office. These fees comprise the “Basic Fund.” See 17 U.S.C. 111(d)(1)(B). In addition to the Basic Fund, large cable systems also may be required to pay royalties into one of two other funds that the Copyright Office maintains: The Syndex Fund and the 3.75% Fund.
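For arithmetic illustration only, the sketch below computes a Form 3 system's Basic Fund royalty from its gross receipts and DSE count under a hypothetical declining-rate schedule; the gross receipts figure and the per-DSE rates are placeholders and are not the statutory rates.

# Hypothetical sketch of the DSE-based royalty computation: a Form 3 system
# pays a percentage of its gross receipts for each DSE it retransmits, so the
# total payment grows with the number of DSEs. All rates below are placeholders.
gross_receipts = 1_000_000.00          # semi-annual gross receipts (hypothetical)
rate_per_dse = [0.010, 0.007, 0.005]   # hypothetical declining per-DSE rates

def basic_fund_royalty(dses: float) -> float:
    """Royalty owed for the given number of DSEs under the hypothetical tiers."""
    royalty, remaining = 0.0, dses
    for tier_rate in rate_per_dse:
        portion = min(remaining, 1.0)   # each tier covers one DSE
        royalty += gross_receipts * tier_rate * portion
        remaining -= portion
        if remaining <= 0:
            break
    else:
        # any DSEs beyond the listed tiers pay the last tier's rate
        royalty += gross_receipts * rate_per_dse[-1] * remaining
    return royalty

print(basic_fund_royalty(2.5))          # more DSEs, larger total royalty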
As noted above, the utilization of the cable license is linked with how the FCC regulated the cable industry in 1976.[8] FCC rules at the time restricted the number of distant broadcast signals a cable system was permitted to carry (“the distant signal carriage rules”). National Cable Television Assoc., Inc. v. Copyright Royalty Tribunal, 724 F.2d 176, 180 (D.C. Cir. 1983). FCC rules also allowed local broadcasters and copyright holders to require cable systems to delete (or blackout) syndicated programming from imported signals if the local station had purchased exclusive rights to the programming (“syndicated exclusivity” or “syndex” rules). Id. at 187. In 1980, the FCC repealed both sets of rules. Id. at 181.
The Copyright Royalty Tribunal (CRT) initiated a cable rate adjustment proceeding to compensate copyright owners for royalties lost as a result of the FCC's repeal of the rules. Adjustment of the Royalty Rate for Cable Systems; Federal Communications Commission's Deregulation of the Cable Industry, Docket No. CRT 81-2, 47 FR 52146 (Nov. 19, 1982). The CRT adopted two new rates applicable to large cable systems making section 111 royalty payments. The first, to compensate for repeal of the distant signal carriage rules, was a 3.75% surcharge of a large cable system's gross receipts for each distant signal the carriage of which would not have been permitted under the FCC's distant signal carriage rules. Royalties paid at the 3.75% rate—sometimes referred to by the cable industry as the “penalty fee”—are accounted for by the Copyright Office in the “3.75% Fund,” which is separate from royalties kept in the Basic Fund. See id.; see also 17 U.S.C. 111(d); 37 CFR, part 387. The second rate the CRT adopted, to compensate for the FCC's repeal of its syndicated exclusivity rules, is known as the “syndex surcharge.” Large cable operators were required to pay this additional fee for carrying signals that were or would have been subject to the FCC's syndex rules. Syndex Fund fees are accounted for separately from royalties paid into the Basic Fund.[9]
Royalties in the three funds—Basic, 3.75%, and Syndex—are the royalties to be distributed to copyright owners of non-network broadcast programming in a Section 111 cable license distribution proceeding. See 37 CFR, part 387.[10]
Cable system operators are required to file Statements of Account with the Copyright Office detailing subscription revenues and specific television signals they retransmit distantly, and to deposit section 111 royalties calculated according to the reported figures. Ex. 2004, Testimony of Gregory S. Crawford ¶ 74 & n.37. As cable system operators merged, they created contiguous cable systems that were required to file consolidated Statements of Account. The consolidated systems were required to pay royalties calculated on the aggregate subscription income of the corporate operator, even though not all of the systems under the corporate umbrella, nor even all of the contiguous systems, carried or retransmitted compensable distant signals.
Between the time of the last adjudicated cable royalty allocation proceeding and the present proceeding, Congress passed the Satellite Television Extension and Localism Act of 2010 (STELA).[11] Before STELA, cable operators were required to pay for the carriage of distant signals on a system-wide basis, even though each signal was not made available to every subscriber in the cable system. U.S. Copyright Office, Frequently Asked Questions on the Satellite Television Extension and Localism Act of 2010. Distant broadcast signals that subscribers could not receive were called “phantom signals.” Id. STELA addressed the phantom-signal issue by amending section 111(d)(1) of the Copyright Act, which details the method by which cable operators can calculate royalties on a community-by-community or subscriber-group basis. Id. Beginning with the 2010/1 accounting period and for all periods thereafter, cable operators have been required to pay royalties based upon where a distant broadcast signal is offered rather than on a system-wide basis.[12] Id. As discussed below, this statutory change permitted the participants to analyze relative value at the subscriber-group level. See, e.g., Corrected Written Direct Testimony of Gregory Crawford, Ex. 2004 (Crawford CWDT) ¶ 66.
B. Posture of the Current Proceeding
In December 2014, the Copyright Royalty Board (CRB) published notice in the Federal Register announcing commencement of proceedings and seeking Petitions to Participate to determine distribution of 2010, 2011, and 2012 royalties under the cable and satellite licenses.[13] On June 5, 2015, the CRB published a notice in the Federal Register announcing commencement of a proceeding to determine distribution of 2013 royalties deposited with the Copyright Office under the cable license and the satellite license.[14] The Judges determined that controversies existed with respect to distribution of the cable (and satellite) retransmission royalties deposited for 2013, and directed interested parties to file Petitions to Participate.[15] On September 9, 2015, the Judges consolidated the proceedings regarding the cable license for the years 2010, 2011, 2012, and 2013. See Notice of Participants, Notice of Consolidation, and Order for Preliminary Action to Address Categories of Claims.
On November 25, 2015, the Judges issued a Notice of Participant Groups, Commencement of Voluntary Negotiation Period (Allocation), and Scheduling Order, in which the Judges identified eight categories of claimants for the proceeding: (1) Canadian Claimants; (2) Commercial Television Claimants; (3) Devotional Claimants; (4) Joint Sports Claimants; (5) Music Claimants; (6) National Public Radio; (7) Program Suppliers; and (8) Public Television Claimants. National Public Radio and Music Claimants reached settlements with the other claimant groups and received respective final distributions. Order Granting Motion for Final Distribution of 2010-2013 Cable Royalty Funds to Music Claimants (Aug. 11, 2017) and Order Granting Motion for Final Distribution of 2010-2013 Cable Royalty Funds to National Public Radio (Aug. 23, 2017).
With the settlement of the Music Claimants' share, only the Program Suppliers claimant group has an interest in the royalties in the Syndex Fund. Program Suppliers Proposed Conclusions of Law ¶ 2 & n.3 and references cited therein. Public TV Claimants claim a share only of the Basic Fund. Public TV PFFCL ¶ 43.
The hearing in the present proceeding commenced on February 14, 2018, and concluded on March 19, 2018.[16] During that period, the Judges heard live testimony from 23 witnesses and admitted written and designated testimony from a number of additional witnesses. The Judges admitted into the record more than 200 exhibits. Participants made closing arguments on April 24, 2018, after which time the Judges closed the record.
After reviewing the record, the Judges identified a controversy among the parties relating to the allocation of royalties held in the 3.75% Fund and requested additional briefing from the parties. Order Soliciting Further Briefing (June 29, 2018) (3.75% Order). Responding to the Judges' order, the parties submitted additional briefs and responses to address the issue framed by the Judges:
Whether the interrelationship between and among the Basic Fund, the 3.75% Fund, and the Syndex Fund affects the allocations within the Basic Fund, if at all, and, if so, how that effect should be calculated and quantified.
Id. The Judges' disposition of the 3.75% Fund and Syndex Fund issues is set forth at section VII, infra. The allocation described in Table 1 of this Determination incorporates the Judges' resolution of this issue.
C. Allocation Standard
Congress did not establish a statutory standard in section 111 for the Judges (or their predecessors) to apply when allocating royalties among copyright owners or categories of copyright owners. However, through determinations by the Judges and their predecessors (the Copyright Royalty Tribunal, the CARPs, and the Librarian of Congress), the allocation standard has evolved, and the present standard is one of “relative marketplace value.” [17] See Distribution Order, 75 FR 57063, 57065 (Sept. 17, 2010) (2004-05 Distribution Order).
“Relative marketplace values” in these proceedings have been defined as valuations that “simulate [relative] market valuations as if no compulsory license existed.” 1998-99 Librarian Order, 69 FR at 3608. Because such a market does not exist (having been supplanted by the regulatory structure), the Judges are required to construct a “hypothetical market” that generates the relative values that approximate those that would arise in an unregulated market. 2004-05 Distribution Order, 75 FR at 57065; see also Program Suppliers v. Librarian of Congress, 409 F.3d 395, 401-02 (D.C. Cir. 2005) (“[I]t makes perfect sense to compensate copyright owners by awarding them what they would have gotten relative to other owners . . . .”).
In the present proceeding, the parties disagree as to the appropriate specification of the sellers in the hypothetical market. Program Suppliers assert that the hypothetical sellers are the owners of the copyrights in the retransmitted programs. See Corrected Written Rebuttal Testimony of Jeffrey S. Gray, Trial Ex. 6037, ¶ 11 (Gray CWRT). Other parties assert that the sellers are the local stations offering for licensing the entire bundle of programs on the retransmitted signal. See Corrected Written Direct Testimony of Gregory S. Crawford, Trial Ex. 2004, ¶ 45 (Crawford CWDT) and Corrected Written Direct Testimony of Lisa George, Trial Ex. 4005, at 8 (George CWDT). After considering the record and arguments in this proceeding, the Judges find that, from an economic perspective, this is a disagreement without a difference, and therefore, consistent with prior rulings, identify the local stations as the hypothetical sellers. If the hypothetical sellers (licensors) were assumed to be the owners of the individual programs (instead of the local stations), then (as a matter of elementary economics) they, like any sellers, would attempt to maximize the royalties they receive from licensing the retransmission rights to CSOs.[18] Because the CSOs are assumed to be the buyers (licensees), they would each negotiate one-to-one with owners of the program copyrights. The corollary to the assumption that the hypothetical sellers are the individual program copyright owners is the assumption that the CSOs, as buyers, would need to create one or more new channels to bundle these programs for retransmission. That raises the economically important question of whether the transaction costs [19] that a CSO would incur to negotiate separate contracts with individual copyright owners would be so prohibitive as to preclude one-to-one negotiations from going forward. Transaction costs are relatively ubiquitous in the licensing of copyrighted products to licensees, resulting in the creation of a collective to represent the licensees, and in blanket or standardized licenses to reduce transaction costs further. See Watt, supra note 19, at 17, 164-67.
But in the present case, a “collective” of sorts already exists—the broadcaster who bundles programs for transmission within a single signal. Therefore, it remains reasonable to consider the local stations that have bundled the programs into their respective signals to be the hypothetical sellers.
As noted supra, the values of the programs in the several categories that are determined in this proceeding are “relative values,” i.e., values relative to each other, from the perspective of the CSOs, when the programs from these different categories are offered for distant retransmission in the form of bundles from local stations. Relative value is based on the preferences of the CSOs (derived from those of their subscribers). Because relative preferences are components of market demand, the CSOs' choices represent important elements of a market transaction. See generally P. Krugman & R. Wells, Microeconomics, 284-85 (2d ed. 2009) (relative “preferences” lead to buyers' “choices” and an “optimal consumption bundle”); A. Schotter, Microeconomics: A Modern Approach (2009) (revealed “preferences” allow for an analysis of how buyers “behave in markets,” and those preferences are building blocks for “individual and market demand”). Thus, any methodology based on the identification of the relative preferences and values of CSOs is indeed a market-based approach to the allocation of royalties in this proceeding.
Because the pricing of the licenses is regulated, however, it is not possible to identify the actual royalties that would be established by these ranked preferences. To identify such royalties would require an application of game theoretic/bargaining power considerations and the extent and allocation of costs attributable to the licensed programs—facts that are not in the record and likely are not reasonably or accurately ascertainable.[20] Nonetheless, the raison d'être of this section 111 proceeding is to allocate royalties that have already been paid in a manner that reflects relevant market factors. To do so, it is sufficient to relate CSOs' revealed preferences among program categories, whether through a CSO survey or a regression analysis, to the sum of all royalties paid. Prior determinations may have described the allocations that resulted as the “relative market value,” [21] but there is no doubt that royalties determined in these ways reveal “relative values” that are based on the critical market factor of identified preferences.
In the present proceeding, the parties presented five discrete analytical methodologies for the Judges to consider in determining relative market value of the programming types at issue: Regression analyses, CSO survey results, viewership measurements, a changed circumstances analysis, and a cable content analysis.
II. Regression Analyses
Regression analysis, when properly constructed and applied, “is an accurate and reliable method of determining the relationship between two or more variables, and it can be a valuable tool for resolving factual disputes.” [22] A particular approach, multiple regression analysis, “is the technique used in most econometric studies, because it is well suited to the analysis of diverse data necessary to evaluate competing theories about the relationships that may exist among a number of explanatory facts.” ABA Econometrics, supra note 22, at 4.
A regression can take one of several forms. The linear form is the most common form, though not the most appropriate for all analyses. As one court has explained:
[A] linear regression is an equation for the straight line that provides the best fit for the data being analyzed. The “best fit” is the [regression] line that minimizes the sum of the squares of the vertical distance between each data point and the line . . . . The regression equation that generates that line can be written as
Y = a + bX + u
Where Y is the dependent variable, a is the intercept [with the vertical axis], X the independent variable, b the coefficient of the independent variable (that is, the number that indicates how changes in the independent variable produces changes in the dependent variables), and u the regression residual—the part of the dependent variable that is not explained or predicted by the independent variable . . . or, in other words, what is “left over.”
ATA Airlines, Inc. v. Fed. Express Corp., 665 F.3d 882, 890 (7th Cir. 2011) (Posner, J.), cert. denied, 568 U.S. 820 (2012).[23] See Crawford CWDT ¶¶ 94-95.
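For illustration only (not part of the evidentiary record), the following minimal sketch fits such a line by ordinary least squares to simulated data; the variable names and numerical values are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data satisfying Y = a + bX + u, with a = 2.0, b = 0.5, and noise u.
X = rng.uniform(0, 10, size=200)
u = rng.normal(0, 1, size=200)
Y = 2.0 + 0.5 * X + u

# Ordinary least squares chooses the intercept and slope that minimize the sum
# of squared vertical distances between the data points and the fitted line.
results = sm.OLS(Y, sm.add_constant(X)).fit()

a_hat, b_hat = results.params   # estimated intercept and slope
residuals = results.resid       # the part of Y "left over" by the fitted line
print(f"intercept ≈ {a_hat:.2f}, slope ≈ {b_hat:.2f}")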
An economist testifying in the present proceeding, Professor Lisa George, explained how the regression approach may be useful to test economic theories, describing regression analysis as “a tool for understanding how variations in an outcome of interest . . . depends on various factors affecting that outcome . . . when the factors of interest are not separately priced or traded.” George CWDT at 2. Professor George noted a basic difference between regression analysis and survey methodology. Regression analysis, unlike survey methodology, “infers value for decisions actually made in a market.” Id.
Although regression analysis is a powerful tool, it is important to appreciate the subtle distinction between econometric correlation identified by a regression, on one hand, and economic causation explained by economic theory, on the other:
Econometrics provides a means for determining whether a correlation, which may reflect a . . . causal relationship, may exist between various events that involve complex sets of facts. The principle value of econometrics . . . lies in its use for developing an empirical foundation in order to prove or disprove assertions that are based on a particular economic theory . . . . [E]conometric evidence coupled with economic theory [may] show the likelihood of a causally-driven correlation between two events or facts. . . . [Thus] [c]orrelation is distinct from causation. . . . [T]he correlation is simply circumstantial confirmation of a hypothesized relationship. If the hypothesized relationship does not make theoretical sense, the existence of a correlation between the two variables is irrelevant.
ABA Econometrics, supra note 22, at 1, 3, 5 (emphasis added).
In the present proceeding, the economic theory that the experts put to the test via regression analysis is whether or not royalties paid are a function of (caused by) the types of program categories bundled in distantly retransmitted local stations.
A. Waldfogel-Type Regressions
Professors Crawford, Israel, and George each used a regression approach based on the one undertaken by Dr. Joel Waldfogel, an economist who appeared in the 2004-05 proceeding on behalf of the joint “Settling Parties,” including three of the present parties: The JSC, Commercial Television Claimants (CTV), and PTV. 2004-05 Distribution Order, 75 FR at 57064. The Judges' findings concerning his regression (Waldfogel regression) are instructive with regard to the Judges' analysis in the present proceeding of the “Waldfogel-type” regressions proffered by Professor Crawford, Professor George, and Professor Israel.
Several features characterize a Waldfogel-type regression. Most importantly, such an approach attempts to correlate “variation in the [program category] composition of distant signal bundles along with royalties paid to estimate the relative marketplace value of programming.” George CWDT at 6. Specifically, Dr. Waldfogel “regress[ed] observed royalty payments for the bundle on the numbers of minutes in each programming category. . . . ” Israel WDT ¶ 22. He also employed “ ‘control variables’ . . . to hold other drivers of CSO payments constant.” Id. Dr. Waldfogel's control variables included the number of subscribers, local median income, and the number of local channels. Id.
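Purely as an illustrative sketch of that general specification (the data file, column names, and control variables below are hypothetical placeholders, not the record data), a Waldfogel-type regression could be set up as follows.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per distantly retransmitted signal bundle, with
# royalties paid, minutes of programming by claimant category, and controls.
df = pd.read_csv("bundles.csv")   # hypothetical file

formula = (
    "royalties ~ min_program_suppliers + min_sports + min_commercial_tv"
    " + min_public_tv + min_devotional + min_canadian"
    " + subscribers + median_income + local_channels"   # control variables
)
fit = smf.ols(formula, data=df).fit()

# Each minutes coefficient estimates the average change in royalties associated
# with one additional minute of that category, holding the controls constant.
print(fit.params.filter(like="min_"))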
In the 2004-05 allocation proceeding, the Judges found the Waldfogel regression “helpful to some degree” in assisting the Judges “to more fully delineate all of the boundaries of reasonableness with respect to the relative value of distant signal programming.” 2004-05 Distribution Order, 75 FR at 57068. The Judges described the Waldfogel regression as an “attempt[] to analyze the relationship between the total royalties paid by cable operators for carriage of distant signals . . . and the quantity of programming minutes by programming category . . . .” Id. Conceptually, the Judges found that “Dr. Waldfogel's regression coefficients do provide some additional useful, independent information about how cable operators may view the value of adding distant signals based on the programming mix on such signals.” Id. The Judges also found Dr. Waldfogel's methodology “generally reasonable.” Id. They cautioned, however, that the wide confidence intervals around Dr. Waldfogel's coefficients limited the usefulness of his analysis in corroborating survey-based evidence in that proceeding. Id.[24]
The SDC challenge the use of Waldfogel-type regressions in this proceeding, raising the preliminary question whether the Judges' past acceptance of this regression approach is binding on the Judges in the present proceeding as a matter of what has been loosely described as “precedent.”
The Librarian and the Register considered the extent to which a CARP should be bound by prior determinations of acceptable royalty allocation methodologies in the 1998-99 Phase I cable distribution proceeding.[25] The Register acknowledged that “[t]he concept of `precedent' . . . plays an important role in [these] proceedings,” but observed that “prior decisions are not cast in stone and can be varied from when there are (1) changed circumstances from a prior proceeding; or (2) evidence on the record before it that requires prior conclusions to be modified regardless of whether there are changed circumstances.” 1998-99 Librarian Order, 69 FR at 3613-14 (citations omitted). The Register also referred to a prior Librarian's decision in which the Register had stated that a CARP “may deviate from [a prior decision] if the Panel provides a reasoned explanation of its decision to vary from precedent . . . .” Id.
The Judges understand that they have the authority and, indeed, the duty, to consider all appropriate factual presentations regarding the establishment of value in this proceeding in order to allocate royalties among the several program categories. The Judges consider the loose use of the term “precedent” in this context to be unhelpful. The concept of “precedent” typically relates to judicial deference to prior legal determinations, not factual ones.[26]
However, the 1998-99 Librarian Order clearly indicates that factual challenges to previously-accepted methodologies shall be subject to a particular evidentiary standard. Specifically, the Judges have been directed that they may disregard or modify prior methodologies only in the event of “changed circumstances” or because of evidence in the record that “requires” such a change. See Program Suppliers v. Librarian of Congress, 409 F.3d 395, 402 (D.C. Cir. 2005). The Judges understand this instruction to be in the nature of a “precedent” setting forth the legal standard for the evaluation of fact evidence.
Accordingly, the Judges evaluate the challenges in this proceeding to the application of Waldfogel-type regressions by asking whether there have been either “changed circumstances” or a presentation of other record evidence that “requires” a departure from consideration of the Waldfogel-type regressions introduced into the record in this proceeding. Absent evidence of relevant “changed circumstances” or other new evidence in the record specifically identified as such by any critics of the Waldfogel-type regression approach, the Judges will evaluate the proffered Waldfogel-type regressions consistent with their treatment of Dr. Waldfogel's analysis in the 2004-2005 allocation proceeding.
In the current proceeding, the SDC's economic expert, Dr. Erkan Erdem, leveled broad criticisms at the use of Waldfogel-type regressions by Professor Crawford, Professor George, and Dr. Israel, notwithstanding the Judges' prior contrary conclusions in the 2004-05 Determination. See Written Rebuttal Testimony of Erkan Erdem, Trial Ex. 5007, at 5-6 (Erdem WRT).[27] Dr. Erdem opined that, conceptually, “Waldfogel-type regressions do not measure relative market value” for two reasons. First, according to Dr. Erdem, CSO royalty payments are uninformative because they are determined by a statutory formula, not through free-market negotiations between CSOs and content owners; [28] and, second, in Dr. Erdem's view, the volume of programming does not necessarily equate to value. Written Direct Testimony of Erkan Erdem, Trial Ex. 5002, at 14 (Erdem WDT). Dr. Erdem thus concluded that “[o]verall, the Waldfogel-type regressions say little about relative market value” and at most are “marginally informative” as corroborative evidence. Id. at 18.
The Judges have found previously that Waldfogel-type regressions are relevant in cable distribution proceedings and find nothing in Dr. Erdem's testimony in the current proceeding to support changing that position. Therefore, the Judges reject Dr. Erdem's broad argument that Waldfogel-type regressions are not useful in establishing relative value in this proceeding.[29] Of course, this point does not mean that the Judges therefore necessarily accept all aspects of the application of the Waldfogel-type regressions by Professor Crawford, Professor George, and Dr. Israel in this proceeding. Rather, the Judges analyze infra the more granular critiques of those regressions leveled by various witnesses, to determine the weight to be accorded to each such regression.
B. Crawford Regression Analysis
1. General Principles
CTV called Professor Gregory Crawford as an economic expert witness. Professor Crawford undertook a Waldfogel-type regression, which he opined was an appropriate approach for estimating relative market value among the six allocation-phase categories. Crawford CWDT ¶ 5. Professor Crawford envisaged a hypothetical market consistent with the actual market for cable channel carriage in general. Crawford CWDT ¶¶ 8, 36. In Professor Crawford's hypothetical market, the owners of the distantly retransmitted stations (i.e., broadcasters) are the sellers of bundles of programming (their respective program lineups), and the CSOs are the buyers. Crawford CWDT ¶ 6.[30] Professor Crawford opined that CSOs are more likely to retransmit “distant signals that carry more highly-valued programming.” Id. ¶ 7. Although this reasoning appears self-evident (ceteris paribus, re-sellers prefer to sell products that are more valuable), according to Professor Crawford, this point also has a subtler meaning in connection with CSO decision-making. Id. ¶ 46. Specifically, he opined that, because such stations bundle various types of programming, there can exist across subscribers a “negative correlation” in their “Willingness to Pay” (WTP) (in other words, making the bundle relatively less preferable when a program from one category is added to the bundle, as opposed to one from another category). Id. ¶ 6 (emphasis added).
Accordingly, Professor Crawford concluded that when deciding whether to enlarge its channel lineup by distantly retransmitting a television station, a rational CSO would consider the variety, or mix, of programming on that channel in light of the existing programming mix offered by the CSO to subscribers across the channel lineup. According to Professor Crawford, to achieve an optimal programming mix a CSO would recognize that “niche taste[ ] channels are more likely to increase CSO profitability due to the likelihood that household tastes for such programming are `negatively correlated' with tastes for other components of cable bundles.” Id. ¶ 7. For example, if a channel lineup were saturated with programming from five of the six program categories, but had little or no programming in the sixth category, e.g., PTV, then a CSO might enhance its profitability through fees from new subscribers, by adding PTV programming, which may have a following among subscribers who have little or no taste for marginal increases in programming in other categories.
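The bundling intuition can be illustrated with a simple simulation (all numbers are hypothetical and not drawn from the record): when two program types' willingness-to-pay values are negatively correlated across subscribers, willingness to pay for the bundle is more uniform, so a single bundle price captures more of the available value.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def best_single_price_revenue(wtp):
    """Revenue per subscriber at the best single posted price."""
    prices = np.linspace(wtp.min(), wtp.max(), 200)
    return max(p * np.mean(wtp >= p) for p in prices)

def simulate(rho):
    # Each subscriber has a WTP for two program types with correlation rho.
    cov = [[1.0, rho], [rho, 1.0]]
    wtp = rng.multivariate_normal([5.0, 5.0], cov, size=n)
    return best_single_price_revenue(wtp.sum(axis=1))  # WTP for the bundle

# Negatively correlated tastes make bundle WTP less dispersed around its mean
# of 10.0, so a single bundle price recovers more of the total value.
print("rho = +0.8:", round(simulate(+0.8), 2))
print("rho = -0.8:", round(simulate(-0.8), 2))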
Professor Crawford's regression adopted the general concept from the Waldfogel-type regressions. Specifically, Professor Crawford concluded that the “most suitable” econometric regression would “relat[e] existing distant signal royalty payments to the minutes of programming of different types carried on distant signals under the compulsory license . . . .” Id. ¶ 46. He favored a regression model because it is a standard econometric approach utilized to establish the discrete prices of different elements in a bundle of goods, or the value of a bundle of attributes in a single good. Id. ¶ 47.[31]
Thus, Professor Crawford inferred the “average marginal value” of content type (by program category), based on the decisions CSOs made. 2/28/18 Tr. 1400-02 (Crawford). More precisely, as in any Waldfogel-type regression, he related the relative variation in royalties across categories to the relative variation in minutes of different categories of programming. Crawford CWDT ¶¶ 53-54.
In econometric terms, Professor Crawford related the natural log [32] of royalties: (1) To the minutes of claimed programming by category; and (2) to other “control” variables.[33] Id. ¶ 91. Professor Crawford's regression looked for a correlation in a subscriber group between changes in the number of minutes of programming the subscribers watched by categories and changes in the percentage of royalties the subscriber group paid while holding constant other potential explanatory variables (called control variables).[34] The variables Professor Crawford controlled for included the numbers of local and distant stations, the number of activated cable channels, and the size of the CSO. Id. ¶ 118 & App. A.
Professor Crawford first estimated the average marginal value per minute of each type of programming by subscriber group. Id. ¶ 128.[35] Econometrically, these values are referred to as the coefficients for each program-category parameter.[36] Professor Crawford then summed the marginal value of the compensable minutes each subscriber group retransmitted. Id. ¶ 131. Finally, Professor Crawford divided the total value of each given programming category by the total value of all compensated minutes, which produced a percentage reflecting the relative value of each program category as produced by his regression.
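The arithmetic of those steps can be sketched as follows; the per-minute values (coefficients) and minute counts below are placeholders, not estimates from the record.

# Minimal sketch of the share computation: multiply each category's estimated
# value per minute (its regression coefficient) by its compensable distant
# minutes, then divide by the total across categories. Numbers are hypothetical.
coef_per_minute = {
    "program_suppliers": 0.8, "sports": 3.0, "commercial_tv": 1.1,
    "public_tv": 1.6, "devotional": 0.2, "canadian": 0.9,
}
compensable_minutes = {
    "program_suppliers": 50_000, "sports": 20_000, "commercial_tv": 27_000,
    "public_tv": 20_000, "devotional": 6_000, "canadian": 8_000,
}

category_value = {c: coef_per_minute[c] * compensable_minutes[c]
                  for c in coef_per_minute}
total_value = sum(category_value.values())

for category, value in category_value.items():
    print(f"{category:>18}: {100 * value / total_value:5.1f}%")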
The percentage totals estimated by Professor Crawford, and the standard errors [37] associated with those estimates, by year and averaged across all four years, were as follows (with standard errors in parentheses):
Table 2—Implied Shares of Distant Minutes by Claimant Categories
Year      Program suppliers (%)   Sports (%)     Commercial TV (%)   Public TV (%)   Devotional (%)   Canadian (%)
2010      27.66 (1.89)            34.29 (3.78)   17.48 (1.50)        15.44 (1.01)    1.02 (0.27)      4.10 (0.33)
2011      25.44 (1.67)            32.12 (3.65)   17.93 (1.49)        19.77 (1.22)    0.71 (0.19)      4.02 (0.32)
2012      22.84 (1.64)            36.09 (3.86)   17.29 (1.52)        19.03 (1.29)    0.55 (0.15)      4.19 (0.35)
2013      20.31 (1.52)            38.00 (3.94)   16.08 (1.45)        20.51 (1.44)    0.51 (0.14)      4.59 (0.39)
2010-13   23.95 (1.68)            35.19 (3.82)   17.18 (1.49)        18.75 (1.25)    0.69 (0.18)      4.23 (0.35)

Id. ¶ 141 and Fig. 17.
Professor Crawford did not use these values, however, as his only estimates of relative market value across the six programming categories. Rather, he identified an issue with regard to network (and to a lesser extent, non-network) programming that he believed to require a further adjustment. Specifically, Professor Crawford noted that on some distantly retransmitted stations there existed programming that duplicated programming on the local channels in that market. Id. at ¶ 87. According to Professor Crawford, “[n]etwork duplication is a non-trivial issue, accounting for 4.6% of minutes carried on distant broadcast signals . . . .” Id. This issue, he noted, is particularly applicable to Big 3 (ABC, CBS, and NBC) network programming, because a number of local markets to which Big 3 affiliate stations were distantly retransmitted by a CSO already had a local Big 3 network affiliate, rendering the retransmitted network programming duplicative. Professor Crawford understood the relative percentages attributable to the six categories of programming—because they were averaged across all minutes of programming—to be distorted by these duplicative minutes. Id. ¶¶ 81, 85-87, 143. Accordingly, even though network programming is not compensable in this proceeding, Professor Crawford made this adjustment as a “deaveraging” device, stating: “I am attributing the full value of the positive non-duplicate programming just to the non-duplicate programming (and the zero value of the duplicate programming to the duplicate programming).” Id. ¶ 147.
Assuming a zero value for the duplicative network programming, Professor Crawford instructed his data analysts to remove the duplicate network programming.[38] With those duplications removed, Professor Crawford re-ran his regression and averaged the relative values of the six program categories at issue in this proceeding.
After making this adjustment, Professor Crawford estimated the following percentage allocations (with the associated standard errors in parentheses):
Table 3—Implied Shares of Distant Minutes by Claimant Categories: Non-Duplicate Minutes Analysis
Year      Program suppliers (%)   Sports (%)     Commercial TV (%)   Public TV (%)   Devotional (%)   Canadian (%)
2010      27.06 (1.97)            34.02 (3.96)   19.76 (1.48)        14.01 (1.00)    1.05 (0.25)      4.10 (0.36)
2011      24.67 (1.73)            31.78 (3.82)   20.18 (1.45)        18.64 (1.25)    0.73 (0.18)      4.00 (0.35)
2012      22.50 (1.72)            35.93 (4.06)   19.64 (1.51)        17.17 (1.27)    0.56 (0.14)      4.20 (0.38)
2013      19.74 (1.60)            38.56 (4.17)   18.44 (1.48)        18.09 (1.41)    0.53 (0.13)      4.65 (0.44)
2010-13   23.40 (1.76)            35.13 (4.02)   19.49 (1.48)        17.02 (1.23)    0.71 (0.17)      4.24 (0.38)

Id. ¶ 153 & Fig. 20.
2. The SDC Criticisms of Dr. Crawford's Analysis
a. Alleged Flaw in the Algorithm
Dr. Erkan Erdem, the SDC's economist, claimed to have identified a flaw in the algorithm Professor Crawford used to allocate royalties to minutes of programming across categories. Dr. Erdem testified that, because of this alleged flaw, Professor Crawford's model was highly sensitive to the sequencing in which data was inputted and sorted into his regression model. Erdem WRT at 2, 14.
However, Dr. Erdem acknowledged receiving additional data from CTV that pertained to this issue. When Dr. Erdem re-ran the updated data using Professor Crawford's regression model, Dr. Erdem found only “slightly different” results with regard to “implied shares of distant minute royalties by claimant categories for both the initial and nonduplicated analyses . . . presented by Professor Crawford.” Erdem WRT at 15 n.13.
Dr. Erdem further testified that he did not review and test Professor Crawford's algorithm fully because it would have taken him a week to do so. Id. at 14. Additionally, neither Dr. Erdem nor the SDC pursued this point further, either in Dr. Erdem's further testimony or in post-hearing filings and arguments.
Based on the foregoing, the Judges find this criticism to be insufficient to invalidate or call into question the evidentiary value of Professor Crawford's regression.
b. Economic Principles Allegedly Not Embodied in Crawford Regression Analysis
Dr. Erdem noted approvingly certain general economic points that Professor Crawford made. First, he agreed with Professor Crawford that it is reasonable to posit that a rational CSO would likely tend to select stations for distant retransmission that maximize the difference between anticipated revenue and the cost of acquiring the retransmission rights. Second, Dr. Erdem agreed with Professor Crawford that a “negative correlation” rationally should exist among subscribers between different categories of programs, leading CSOs to engage in strategic bundling of program categories. Id. at 12.
However, Dr. Erdem faulted Professor Crawford for failing to incorporate these economic observations into the latter's regression model. With regard to the first point—maximizing the spread between revenues and costs—Dr. Erdem noted that the royalty fees are set by statute, so this concept is not applicable in the regulated market. Id. at 12.
With regard to the second point—the negative correlation of different programming types between and among subscribers—Dr. Erdem noted that Professor Crawford did not incorporate this principle into his regression analysis. Id. Dr. Erdem acknowledged that the program bundling that results from the negative correlation between program types has “important implications,” but not implications that support Professor Crawford's regression model. Dr. Erdem asserts that the negative correlation between program types implies “that subscribers likely do not think of distant broadcasts in terms of total minutes . . . . A more natural unit would be the availability of particular programs, regardless of their duration or frequency.” Id. at 13 (emphasis added). Thus, Dr. Erdem suggested that Professor Crawford's reliance (as is the case in all Waldfogel-type regressions) on programming minutes as the independent (explanatory) variable with respect to program type valuation misses the real economic correlation pertinent to a value estimate, which is the correlation between royalties and the number of subscribers. Id.
In response to the first point, Professor Crawford noted that his regression analysis implicitly incorporated this revenue maximization principle because it identified, ranked, and estimated the relative value of program categories that maximize economic value for subscribers given the existence of retransmission costs. Written Rebuttal Testimony of Gregory Crawford, Trial Ex. 2005, ¶¶ 70-71 (Crawford WRT). With regard to the second point, Professor Crawford did not expressly state that the negative correlation between programming types applied to his results. Rather, he noted that the negative coefficients he had estimated for duplicated network programming[39] in part represented the fact that, on average, a station bundle containing duplicated network minutes would be less valuable to subscribers than one that did not. 2/28/18 Tr. 1404, 1607-08 (Crawford) (duplicate programming adds no value and might be blacked-out).[40]
The Judges agree with Dr. Erdem that Professor Crawford's regression analysis does not literally demonstrate that CSOs seek to maximize the difference between revenues and costs as they would in an unregulated market. Because royalty costs are determined independently from retransmission decisions (especially with regard to the first DSE, which is retransmitted in exchange for a mandatory minimum fee, as discussed infra), CSOs do not and cannot engage in the sort of marginal profit maximization decisions buyers/licensees would undertake in an unregulated market. However, that does not mean that CSOs do not engage in maximizing behavior through marginal analyses that weigh the relative values of adding additional programming from different program categories, notwithstanding the presence of the regulated royalty rate.
The Judges give no weight, however, to Dr. Erdem's speculation as to how subscribers value programs of varying lengths. Dr. Erdem did not undertake any affirmative analysis and presented no original methodology. Thus, even assuming arguendo there might be value in such a subscriber-based value analysis, Dr. Erdem did not present one here.
c. The “Distant Minutes” Criticism
Dr. Erdem noted that Professor Crawford's regression, because it is a Waldfogel-type regression, “assigned a predominant role” to the number of distant minutes retransmitted by each program category. Dr. Erdem thus characterized Dr. Crawford's regression as a “volume focused” approach. Erdem WDT at 14. Dr. Erdem questioned whether Professor Crawford's key variable—“distant minutes” by category—really explained a “significant share of the variation in royalty fees.” Erdem WRT at 15. To answer that question, Dr. Erdem “estimat[ed] a regression model with only total distant minutes for each claimant group as the independent (explanatory) variable.” Id. Dr. Erdem found that the number of distant minutes by claimant group explained “very little” of the variation in royalties as measured by adjusted R². Id. at 15-16.[41]
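The comparison Dr. Erdem describes can be sketched as follows (with hypothetical data and column names): it contrasts the adjusted R² of a model with total distant minutes alone against a fuller, category-level specification.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subscriber_groups.csv")   # hypothetical data set

# Model with only total distant minutes as the explanatory variable.
minutes_only = smf.ols("royalties ~ total_distant_minutes", data=df).fit()

# Category-level specification with controls, as in a Waldfogel-type model.
by_category = smf.ols(
    "royalties ~ min_program_suppliers + min_sports + min_commercial_tv"
    " + min_public_tv + min_devotional + min_canadian"
    " + distant_subscribers + local_stations", data=df).fit()

# A higher adjusted R-squared means more variation in royalties is "explained,"
# but, as discussed below, that alone does not make a model the appropriate one.
print(minutes_only.rsquared_adj, by_category.rsquared_adj)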
In response, Professor Crawford noted that his regression, like all Waldfogel-type regressions, “does not measure the relative value of a programming type using only the number of minutes of . . . programming type.” Crawford WRT ¶ 74. Rather, such regressions also “measure the average value per minute to CSOs of each programming type[,] [and then] multiply[] the average value per minute by the number of minutes of programming, giv[ing] the total value of each program type.” Id. ¶ 75. Then, the total value of each program type is converted to a relative share of the total value of all claimants' programming via Professor Crawford's regression (and, indeed, any Waldfogel-type regression). As Professor Crawford opined, it is the “variation in the royalties paid by CSOs” across each programming category that allows the regression “to infer the average value per minute” of each programming category, and “[t]hese estimated average values per minute are the estimated coefficients” in the regression. Id. ¶ 76.
The Judges find that Dr. Erdem's analysis, although apparently accurate, is off-point and does not diminish the value of Professor Crawford's regression (or any similarly-constructed Waldfogel-type regression). The Judges recognize that the two elements multiplied in such a regression—the volume of total minutes per program category and the value-per-minute—are not both functions of volume. The former, the volume of minutes per program category, is facially a volume metric. Professor Crawford recognized that if a regression measured only volume, then it would be properly subject to criticism. Crawford WRT ¶ 74. But the latter factor in the product, the value-per-minute, is not subject to the same criticism. The value-per-minute factor is a metric for relative value, estimating the CSOs' relative demand for different categories of programming. To criticize the product as related to volume, therefore, misses the mark, because it is relative value that the Judges must determine in this proceeding.
With regard to Dr. Erdem's rebuttal critique, in which he found the R² calculation to demonstrate little correlation between categorical programming minutes and royalties, Professor Crawford had a persuasive rejoinder. Professor Crawford explained that it would be as uninformative as it would be unsurprising that the number of distant minutes alone—as Dr. Erdem found—would better estimate the royalties paid (via a higher R²). Professor Crawford explained that the purpose of his regression is to demonstrate the “effect” of different programming (by category) on the relative royalties, not simply to find the regressor (independent variable) that best “predicts” the level of royalties. Crawford WRT ¶¶ 91-95. Thus, Professor Crawford opined, his regression is relevant to the economic issue at hand: The relative value of program categories.[42]
The Judges do not agree that Dr. Erdem's calculation of a higher R² alone for his alternative approach demonstrated a deficiency in Professor Crawford's regression. As one econometric expert has explained:
[A] low R² does not necessarily imply a poor model (or vice versa) . . . What level of R², if any, should lead to a conclusion that the model is satisfactory? Unfortunately, there is no clear cut answer to this question, since the magnitude of R² depends on the characteristics of the data series being studied . . . . [A] high R² does not by itself mean that the variables included in the model are the appropriate ones. . . . As a general rule, courts should be reluctant to rely on a statistic such as R² to choose one model over another.
Rubinfeld, supra note 36, at 425, 457.
Dr. Rubinfeld's emphasis on identifying the “appropriate” variables leads to Professor Crawford's next response to Dr. Erdem's critique. According to Professor Crawford, from the perspective of economic analysis (as opposed to purely econometric analysis), Dr. Erdem's critique failed to address the institutional and economic concerns in this proceeding, viz., how to determine the relative value of the different program categories in an allocation proceeding. Crawford WRT ¶ 95. Professor Crawford maintained that his regression properly identifies the relative relationships at issue in this proceeding.
d. Alleged Failure To Focus on Impact of the “Number of Distant Subscribers”
Dr. Erdem asserted that a control variable in Professor Crawford's regression—the “number of distant subscribers”—was statistically significant and accounted for a large share of the variability in the royalties. Erdem WRT at 17. Accordingly, Dr. Erdem concluded that Professor Crawford's regression inaccurately and wrongly emphasized a correlation between program minutes (across categories) and royalty variability, when the more significant correlation was between the number of distant subscribers and the variability of royalties. Id.
In response, Professor Crawford explained that Dr. Erdem had failed to use the proper measure of “distant subscribers,” which led Dr. Erdem in essence to double-count the number of distant subscribers, thus invalidating his argument. Crawford WRT ¶ 104.[43] Dr. Erdem was compelled to concede at the hearing that his manipulations in his Models numbered 1 through 6 should all be ignored. 3/8/18 Tr. 2779-80 (Erdem).
Accordingly, the Judges do not give any weight to this criticism.[44]
e. The Zero Minutes Issue
Dr. Erdem pointed out that Professor Crawford's two models contained numerous zeros (i.e., instances when there was no distant content being retransmitted for a particular claimant category). More particularly, Dr. Erdem noted that for the duplicated analysis, the Canadian distant programming minutes had about 94 percent zeros, followed by PTV with approximately 59 percent, the JSC with approximately 10 percent, and between 5-8 percent for the remaining categories. (These percentages remain essentially unchanged for the nonduplicated analysis.) Erdem WRT at 17-18.
Dr. Erdem asserted that because zero represented a floor on the number of minutes any programming category could have offered, Professor Crawford's failure to control for the presence of a non-trivial number of zeros has the “potential” to skew the coefficients Professor Crawford estimated in his models. In an attempt to address this issue, Dr. Erdem reworked Professor Crawford's regression approach by including “indicator variables” for instances in which the distant minute variables were zero. He then re-estimated Professor Crawford's two models, creating what he called “Model 3.” Dr. Erdem's Model 3 cumulatively reworked Professor Crawford's duplicated and nonduplicated regressions to incorporate, inter alia, the distant subscriber instances and the zero-minutes indicator issue. Erdem WRT at 38, 40.
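Dr. Erdem's adjustment amounts to adding, for each category, a dummy (indicator) regressor marking the observations with zero distant minutes, roughly as sketched below with hypothetical column names.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subscriber_groups.csv")   # hypothetical data set

categories = ["program_suppliers", "sports", "commercial_tv",
              "public_tv", "devotional", "canadian"]

# Indicator variables: 1 when a category has zero distant minutes, else 0.
for cat in categories:
    df[f"zero_{cat}"] = (df[f"min_{cat}"] == 0).astype(int)

terms = [f"min_{c}" for c in categories] + [f"zero_{c}" for c in categories]
refit = smf.ols("royalties ~ " + " + ".join(terms), data=df).fit()
print(refit.params)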
Dr. Erdem found that, relative to Professor Crawford's regression model, adding the indicators for instances with zero distant minutes increased the PS and PTV shares by approximately 6 percentage points and 1-2 percentage points, respectively. The Devotional share increased by approximately 1 percentage point while the CTV share decreased by approximately 10 percentage points. The JSC share increased by approximately 1 percentage point, and the Canadian share decreased by approximately 0.4-0.5 percentage points. Id.
Because these revised percentages also incorporate Dr. Erdem's erroneous adjustment for his “distant subscriber instances” variable, his “Model 3” must be ignored. 3/8/18 Tr. 2779-80 (Erdem). Further, as a separate problem with Dr. Erdem's critique, he did not opine that Professor Crawford's treatment of the number of zeros was improper or that it had caused a skewing of the coefficients; rather, Dr. Erdem testified only that such skewing was a “potential” problem—one that Dr. Erdem would have elected to address with the use of an indicator variable.[45] The Judges understand this point to indicate that although Dr. Erdem would have undertaken a different approach, he did not opine that Professor Crawford's approach was unreasonable. Accordingly, the Judges are unpersuaded that this criticism served to undermine the usefulness of Professor Crawford's regression analysis.[46]
f. Sensitivity of Nonduplicated Minutes Model
In his nonduplicated model, Professor Crawford included as an additional variable the total number of nonduplicated minutes. Dr. Erdem noted that Professor Crawford explained that “[t]his new covariate plays the same role in the final econometric model that the number of distant signals plays in the initial econometric model.” Erdem WRT at 19 (quoting Crawford CWDT ¶ 165 n.57). However, Dr. Erdem discovered that in this nonduplicated model the number of distant signals was still present, together with the new variable, (i.e., the total number of nonduplicated minutes). Dr. Erdem determined that these two variables were almost perfectly correlated (a 0.998 correlation), rendering “the rationale for including that additional variable . . . less clear.” Erdem WRT at 19.[47]
To analyze this issue, Dr. Erdem performed a sensitivity analysis, or test,[48] rerunning the nonduplicated model without the total nonduplicated minutes variable. Dr. Erdem's “Model 5” presented regression results and estimated royalty shares from this analysis. See Erdem WRT Ex. R3. Compared to his Model 4, excluding the added variable decreased the Program Supplier share by approximately 0.2 percentage points, the JSC share by about 2 percentage points, the CTV share by about 2 percentage points, and the PTV share by about 0.3 percentage points. The Devotional and Canadian shares remained approximately the same. See Erdem WRT at 19, Ex. R3.
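The collinearity check and the sensitivity test described above can be sketched as follows (hypothetical data and column names; other category and control terms are omitted for brevity).

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subscriber_groups.csv")   # hypothetical data set

# Near-perfect correlation between two regressors signals that one is redundant.
corr = df["distant_signals"].corr(df["total_nondup_minutes"])
print(f"correlation = {corr:.3f}")

base = "royalties ~ min_sports + min_program_suppliers + distant_signals"

# Sensitivity test: estimate the model with and without the added variable and
# compare how far the coefficients (and hence the implied shares) move.
with_var = smf.ols(base + " + total_nondup_minutes", data=df).fit()
without_var = smf.ols(base, data=df).fit()
print(with_var.params, without_var.params, sep="\n")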
The Judges find that these modest percentage point differences would not diminish the value of Professor Crawford's nonduplicated minutes regression, in part because the regression approach is by design an estimate rather than a precise measure.[49] Moreover, Dr. Erdem's modest changes are derived from his alternative models that also incorporate his erroneous distant subscriber minutes approach, which Dr. Erdem acknowledged invalidated his adjustments to a number of his models, including Models 4 and 5. See 3/8/18 Tr. 2779-80 (Erdem).
g. The WGNA Indicator Variable
Dr. Erdem altered Professor Crawford's approach by including a dummy variable to indicate the presence (or absence) of WGNA. This alteration increased the Program Supplier share by approximately 2 percentage points, increased the CTV and PTV shares by approximately 1 percentage point each, and decreased the JSC share by about 4 percentage points. The shares of the Devotional and Canadian categories increased by 0.1 and 0.3 percentage points, respectively. Erdem WRT at 18-19.
However, Dr. Erdem did not expressly conclude that the absence of this WGNA indicator variable in Professor Crawford's regression analysis demonstrated that the latter's approach was inappropriate or less relevant. Indeed, Dr. Erdem ended this particular analysis by suggesting only that the use of an indicator variable regarding the presence (or absence) of WGNA among the distantly retransmitted stations could be suggestive of an outlier effect arising from the presence of WGNA, yet Dr. Erdem conceded that “Professor Crawford's model does not exhibit sensitivity to outliers.” Erdem WRT at 19 n.17.[50] Accordingly, Dr. Erdem's criticism in this regard does not diminish the value of Professor Crawford's regression analysis. And, once more, Dr. Erdem's estimate of the impact of this criticism was bundled together with, inter alia, his admittedly erroneous adjustment for distant subscriber minutes, thereby tainting the measure of this adjustment.
h. Geographical Effects
The SDC noted that a CTV economic expert witness, Dr. Christopher Bennett, found that “over 90% of the distant signals imported were within 150 miles of the community served, and over 95% were within 200 miles.” Corrected Written Direct Testimony of Christopher Bennett, Trial Ex. 2006, ¶ 31 & Fig. 6 (Bennett CWDT).[51] Accordingly, Dr. Erdem asserted that the positive coefficients in Professor Crawford's regression “could” have been driven by factors “like” geography, emphasizing Start Printed Page 3563the values and preferences of large urban areas and de-emphasizing the values and preferences of smaller rural areas. 3/8/18 Tr. 2688-91 (Erdem).
In response, CTV pointed out that Professor Crawford's regression contained variables that controlled for geographic effects. In particular, CTV noted that the SDC had in fact acknowledged that Professor Crawford's regression included “system-level fixed effects [that] introduce a form of geographic control . . . .” [52] SDC PFF ¶ 101 (citing 3/8/18 Tr. 2709-10 (Erdem)).[53] Moreover, CTV pointed out that Professor Crawford's regression also included as a control variable the number of local signals at the subgroup level, which also helped account for geographical market differences (including market and Designated Market Area (DMA) size) across subgroups within the systems. See Crawford CWDT App. B Fig. 22; see also Written Rebuttal Testimony of Ceril Shagrin, Trial Ex. 2009, ¶ 20 & Exs. A, B (Shagrin WRT) (number of local stations is prime indicator of market size).
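The mechanism CTV described, in which fixed effects absorb market-level differences, can be illustrated with a generic simulation; the variable names and magnitudes below are invented and are not the record data.

```python
# Hypothetical sketch of how system-level fixed effects control for
# unobserved, time-invariant geographic or market differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_systems, periods = 150, 8
market_size = rng.normal(0, 1, n_systems)              # unobserved market trait
sys_idx = np.repeat(np.arange(n_systems), periods)
# Sports minutes are higher in larger markets, so a pooled regression
# without fixed effects confounds geography with programming value.
sports_min = 150 + 50 * market_size[sys_idx] + rng.uniform(-50, 50, n_systems * periods)
royalties = 5 * sports_min + 5000 * market_size[sys_idx] + rng.normal(0, 500, n_systems * periods)
df = pd.DataFrame({"system": sys_idx, "sports_min": sports_min, "royalties": royalties})

pooled = smf.ols("royalties ~ sports_min", data=df).fit()
fixed = smf.ols("royalties ~ sports_min + C(system)", data=df).fit()
print(f"pooled coefficient:        {pooled.params['sports_min']:.2f}")  # biased upward
print(f"fixed-effects coefficient: {fixed.params['sports_min']:.2f}")   # near the true 5
```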
The Judges find that Professor Crawford's regression controlled for geographic effects. Dr. Erdem's criticism to the contrary appears to be based on a difference of opinion as to how to account for the geographic issue rather than any error in Professor Crawford's regression analysis. Additionally, the Judges do not find that a regression that weighs more heavily the value of programs retransmitted to more people is inherently suspect. Indeed, the opposite is the case. To use Dr. Erdem's example, population density is greater in and adjacent to urban areas where professional sports teams are based, and subscribers in those areas will demand more professional sports programming. See 3/8/18 Tr. 2689 (Erdem). This subscriber demand causes a CSO serving those subscribers to have a derived demand for the retransmission of stations with more JSC programming. More JSC programming leads to higher JSC royalties relative to whatever other programming is more popular in areas where, as Dr. Erdem testified, there exist “smaller systems with smaller number of subscribers and smaller fees . . . .” 3/8/18 Tr. 2690 (Erdem). In short, the Judges see this phenomenon as an attribute of Waldfogel-type regressions, including Professor Crawford's regression analysis.[54]
i. Ignoring Signals That CSOs Chose Not To Carry
The SDC also criticized Professor Crawford for not taking into account in his regression the impact on value of the stations that were “not retransmitted.” SDC PFF ¶ 81 (citing 2/28/18 Tr. 1494-5 (Crawford)) (emphasis added). The SDC noted that Professor Crawford had written a published article that indicated that an approach accounting for stations that were not retransmitted could have been applied to determine program category value in the present proceeding. SDC PFF ¶ 82 (citing 2/28/18 Tr. 1497-98 (Crawford)). However, nothing in the record suggested that the potential usefulness of such an alternative regression approach called into question the validity, reasonableness, or persuasiveness of the regression approach undertaken by Professor Crawford in the present proceeding, which approached the relative value analysis from a perspective that analyzed the programs and stations that were transmitted. Indeed, the SDC do not cite any expert witness in the present proceeding to support their conclusory assertions in proposed findings of fact that Professor Crawford's decision not to analyze non-transmitted stations and programs compromised his analysis in this proceeding. See SDC RPFF ¶¶ 81-82. Accordingly, the Judges find that this criticism does not diminish the value of Professor Crawford's regression analysis in this proceeding.
j. Number of Subscribers as Control Variable
The SDC noted that Professor Crawford used the log of fees paid as his dependent variable (expressing changes in fees paid in percentage terms), but he expressed changes in the number of subscribers—one of his control variables—in level form (i.e., linear, or non-log). SDC PFF ¶ 102 (citing 2/28/18 Tr. 1541, 1550 (Crawford)). The SDC's expert, Dr. Erdem, testified that Professor Crawford's use of the linear form for this control variable was improper, because it failed to correspond with the actual relationship between royalty fees and subscribers (i.e., a percentage change in the number of subscribers corresponds with an equal percentage change in royalty fees). 3/8/18 Tr. 2770-71 (Erdem). As a consequence, Dr. Erdem maintained, Professor Crawford had introduced statistical “bias”[55] into his regression. Id. at 2716-17 (Erdem).
To address this criticism, Dr. Erdem undertook a sensitivity test and transformed the control variable for the number of subscribers into log form. 3/8/18 Tr. 2767 (Erdem). He found that this linear-to-log transformation improved the fit of the regression, increasing the R2 metric from approximately .24 to .97. (A higher R2 indicates a tighter fit of the model to the data points; see supra note 41.)
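A generic sketch of this kind of linear-to-log sensitivity test appears below; the data are simulated and the resulting fit statistics are illustrative only (they do not reproduce the .24 and .97 figures from the record).

```python
# Hypothetical sketch of a linear-to-log sensitivity test on a subscriber
# control variable (simulated data; illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
subs = rng.lognormal(mean=9, sigma=1.2, size=n)          # subscriber counts
minutes = rng.uniform(0, 1000, n)                        # programming minutes
# Royalties are a percentage of receipts, so log(royalties) moves roughly
# one-for-one with log(subscribers) in this simulation.
log_royalties = 0.001 * minutes + 1.0 * np.log(subs) + rng.normal(0, 0.3, n)
df = pd.DataFrame({"log_roy": log_royalties, "subs": subs, "minutes": minutes})

linear = smf.ols("log_roy ~ minutes + subs", data=df).fit()
logged = smf.ols("log_roy ~ minutes + np.log(subs)", data=df).fit()
print(f"R2 with linear subscriber control: {linear.rsquared:.2f}")
print(f"R2 with log subscriber control:    {logged.rsquared:.2f}")
```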
In response, CTV and Professor Crawford argued that Dr. Erdem misapplied a principle that might be valid in a “prediction” regression. Professor Crawford maintained, though, that his own regression on behalf of CTV was an “effects” regression, Start Printed Page 3564seeking to explain the issue at hand, i.e., how different program categories correlate with the royalties paid. According to Professor Crawford, his regression analysis was not a “prediction” regression designed to identify the best predictors of royalties paid. Thus, he argued, it was important to use control variables that keep constant the effects on the dollar amount of royalties paid in order to determine the relative values among program categories, which was the purpose of the regression. 2/28/18 Tr. 1393-94, 1430, 1549-50 (Crawford).
Professor Crawford explained what he understood to be a fundamental mistake made by Dr. Erdem:
Dr. Erdem misunderstands the purpose of an econometric analysis in this proceeding. . . . For the goal of prediction, the focus is on finding the explanatory variables that best predict the outcome of interest . . . . [I]f the goal is to predict stock prices and the price of tea in China helps, then . . . include it in the model (and don't worry about the economic interpretation of its coefficient).
That is not the purpose in this proceeding, however. In this proceeding, experts are using econometric analyses to help the Judges determine . . . relative marketplace value . . . . The dependent variable in these regressions, the royalties cable operators pay for the carriage of the distant signals, are informative of this relationship . . . . The key explanatory variables in this relationship, the minutes of programming of the various types carried on distant signals, are informative as the impact they have on royalties reveals the relative market value of each programming type. Other explanatory variables are included in the model to control for other possible determinants of cable operator royalties. This helps improve the statistical fit of the regression (to “reduce its noise”), providing more precise estimates of the impact of programming minutes that are the focus of the analysis.
. . .
The goal here is to find the econometric model that can best reveal relative marketplace value. Doing so means crafting the econometric model to reflect the institutional and economic features of the environment that is generating the data being used. . . . The econometrician determines which explanatory variables to include not based exclusively on statistical criteria regarding the overall fit of the model, but also on whether there are good economic and/or institutional justifications for including that variable.
Crawford WRT ¶¶ 91-94 (footnotes omitted) (emphasis added). Accordingly, Professor Crawford testified that the R2 measure on which Dr. Erdem relied is not relevant to the task at hand, because that measure does not explain the relative values of the several program categories, but rather shows “how much of the variation in the dependent variable can be explained by the control or explanatory variables.” Crawford WRT ¶ 93.
Applying this distinction more particularly to the present dispute, Professor Crawford defended his use of a linear control variable for the number of subscribers as sufficient for its intended purpose—to avoid statistical bias and distortion. He contrasted his approach with Dr. Erdem's claim that a log control variable would be preferable, with Professor Crawford asserting that Dr. Erdem's proposed log transformation did not merely control for the royalty formula, but rather essentially replicated the formula for calculating royalties, thereby distorting the regression results. 2/28/18 Tr. 1429-30, 1552 (Crawford). That is, Dr. Erdem's log approach might well have been appropriate to predict a meaningful correlation between the percentage change in royalties and the percentage change in the number of subscribers, but that is not informative (and thus not relevant) as to the effect, if any, of the impact of the different program categories within the distantly retransmitted stations on the dollar amount of royalties that were paid.
The Judges find that Professor Crawford's regression is not compromised by his use of the linear form to express the number of subscribers in this control variable. If the Judges' statutory task were to identify and rank all the causes of a change in total royalties, the change in the number of subscribers might well appear to be the chief causal element because the statutory royalty fee is a percent of receipts. Changes in the dollar value of receipts, naturally, are directly related, on a percentage basis, to percentage changes in the number of subscribers. But the Judges' legal, regulatory, and economic task in this proceeding is to determine the relative market value of different categories of programming; thus, any correlation between the number of subscribers and royalties is not in furtherance of that objective. Rather, Professor Crawford's use of a linear form for the number of subscribers served to control for the size of the system without overriding the purpose of the regression, which was to measure the effects (if any) of different program categories on royalties paid.
The Judges not only find Professor Crawford's assertions in this regard persuasive, they note that his opinion has some support in the academic literature.[56] See G. Shmueli, To Explain or to Predict?, 25 Statistical Science 289, 290-91, 297 (2010) (“The criteria for choosing variables differ markedly in explanatory versus predictive contexts.”); see also F.M. Fisher, Multiple Regression in Legal Proceedings, 80 Colum. L. Rev. 702, 720 (1980) (The R2 measure “must be approached with a fair amount of caution, since R2 can be affected by otherwise trivial changes in the way in which the problem is set up.”).
The Waldfogel-type regression is an example of modeling utilized to explain the effects of different program categories on the relative payment of royalties—rather than an attempt to predict the level of royalties. Thus, as Professor Shmueli wrote, the choice of variables can reasonably be based on the “underlying theoretical model.” Id.; see also F.M. Fisher, Econometricians and Adversary Proceedings, 81 J. Am. Stat. Ass'n 277, 279 (1986) (“There is a natural view that models are supposed to do nothing other than predict . . .” resulting in the “danger” of ignoring “better models that do not fit or predict quite so well but are in fact informative about the phenomena being investigated.”) (emphasis added).[57]
Because the Judges find in this proceeding, as in past proceedings, that the theoretical model of a Waldfogel-type regression is reasonable and useful in this context, Dr. Erdem's criticism regarding Professor Crawford's use of a linear control variable for the number of subscribers does not diminish the value of his regression analysis in this proceeding.
k. Purportedly Incorrect Consideration of Network Programming
The SDC asserted that Professor Crawford failed to analyze correctly the impact of the number of distant signals and the total number of minutes in his nonduplicated minutes analysis, which caused his coefficients to be uninterpretable and certain coefficients to turn negative, falsely implying a negative value for such retransmitted distant programming. However, a substantial portion of this assertion grew out of Dr. Erdem's tardy and thus Start Printed Page 3565rejected proposed rebuttal testimony. See 3/8/18 Tr. 2704-05 (Erdem). Thus, Dr. Erdem's written testimony and the SDC's affirmative case at the hearing do not support the SDC's criticisms in this regard.
However, the SDC had some success in raising this issue on cross-examination of Professor Crawford, who appeared to acknowledge that nonduplicated network programming had positive value that he had not added back into his analysis. 2/28/18 Tr. 1572 (Crawford). Professor Crawford attempted to discount the import of this factor, asserting that adding in such values would have caused a “common level shift” in all the coefficients. 2/28/18 Tr. 1573 (Crawford). However, when confronted on cross-examination with the logarithmic (percentage) impact on the coefficients (and thus the relative values), Professor Crawford became uncertain as to whether he should have considered the logarithmic (percentage) impact of nonduplicated network programming. More particularly, having considered the issue on the witness stand, Professor Crawford was then asked by cross-examining counsel whether he was ready to agree that he “should have taken into account the value of the . . . coefficient that would be implied for the nonduplicated network programming”—to which he replied: “So I am not sure that I do [agree] [a]nd I am not sure that I don't.” 2/28/18 Tr. 1581 (Crawford).
Professor Crawford and CTV further responded to this nonduplicated network minutes argument by noting that the impact of the issue, if any, was indeterminate, because Professor Crawford had lumped nonduplicated network minutes with off-air programming as a single control variable, not as an input to determine the values of the coefficients of interest. 2/28/18 Tr. 1625-29 (Crawford). Additionally, Professor Crawford explained that, in any event, the purpose of the “total non-duplicate minutes” variable was to serve the same volume control function as the “number of distant signals” variable in the initial regression.
The Judges find that Professor Crawford's admitted uncertainty as to the impact of nonduplicated network programming minutes on the relative values of his coefficients somewhat diminishes the probative value of his non-duplicated model. Further, the fact that Professor Crawford's purpose in adding these minutes was to insert a control variable did not address whether this variable also affected the calculation of coefficients for the program categories at issue.[58] However, the absence of any hard evidence of the extent of this problem on the measurement of the coefficients makes this deficiency difficult to quantify. Accordingly, this criticism leads the Judges to consider the accuracy of the estimates in Professor Crawford's nonduplicated analysis to be less certain, and the Judges thus will look to Professor Crawford's duplicated-minutes regression results when incorporating his analysis and conclusions into their determination of the appropriate allocation of shares.
l. Overfitting
The SDC contended that Professor Crawford's regression methodology suffered from a problem known as “overfitting.” In econometrics, and in statistics more broadly, overfitting occurs when the regression attempts to “estimat[e] too large a model with too many parameters.” C. Brooks, Introductory Econometrics for Finance 690 (3d ed. 2014). See also T. Powell & P. Lewecki, Statistics: Methods and Applications 681 (2006) (“overfitting” is “[w]hen [a regression] produc[es] a curve . . . that fits the data points well, but does not model the underlying function well [because] its shape is being distorted by the noise inherent in the data.”).
On the other hand, when an econometrician attempts to avoid overfitting, he or she must be mindful not to eliminate potentially important data from the regression. Otherwise a different problem—underfitting—can arise. To wit:
There is actually a dual problem to overfitting, which is called underfitting. In [an] attempt to reduce overfitting, the [modeler] might actually begin to head to the other extreme and . . . start to ignore important features of [the] data set. This happens when [the modeler] choose[s] a model that is not complex enough to capture these important features . . . . [T]his incredibly important problem is known as the bias-variance dilemma[ [59] ] [and] is just as much an art as it is a science.
D. Geng and S. Shih, Machine Learning Crash Course: Part 4—The Bias-Variance Dilemma, ML@B, The Official Blog of Machine Learning @Berkeley (July 13, 2017), available at https://ml.berkeley.edu/blog/2017/07/13/tutorial-4/(last visited May 1, 2018) (emphasis added).
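The tradeoff described in the quoted passage can be illustrated with a generic example in which polynomial degree stands in for model complexity; the sketch below is illustrative only and has no connection to any regression in the record.

```python
# Generic illustration of underfitting versus overfitting: polynomial
# degree stands in for model complexity. Not related to the record models.
import numpy as np

rng = np.random.default_rng(3)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 30)
x_test = np.sort(rng.uniform(0, 1, 200))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, 200)

for degree in (1, 4, 12):
    coefs = np.polyfit(x_train, y_train, degree)
    mse_in = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    mse_out = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    # Degree 1 underfits (large error everywhere); degree 12 overfits
    # (small in-sample error, larger out-of-sample error).
    print(f"degree={degree:2d}  in-sample MSE={mse_in:.3f}  out-of-sample MSE={mse_out:.3f}")
```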
In the present case, the SDC argued that Professor Crawford's regressions suffered from overfitting for several reasons.
First, because he used “system-accounting period fixed effects [as distinguished from the subscriber group level], Professor Crawford's regression employs more than 7,300 variables [and] approximately 26,000 observations . . . only about 3.55 observations per variable.” SDC PFF ¶ 109 (citing Crawford CWDT at C-3; 2/28/18 Tr. 1646 (Crawford)). According to the SDC, Professor Crawford acknowledged that “[a]s a rule of thumb, fewer than ten observations per variable can yield a likelihood of overfitting.” SDC PFF ¶ 111 (citing 2/28/18 Tr. 1461 (Crawford)). Because Professor Crawford had fewer than ten observations per variable (3.55), the SDC argued that Professor Crawford's regression suffered from overfitting, calling into question the usefulness of the estimates Professor Crawford produced.
However, Professor Crawford denied that he endorsed this test, and the Judges agree with Professor Crawford, based on the following cross-examination colloquy:
SDC COUNSEL: [H]ave you ever heard of the One-in-Ten Rule? One-in-Ten?
PROFESSOR CRAWFORD: Not—if you could describe it, perhaps I have.
SDC COUNSEL: A rule of thumb—not saying it is precise—a rule of thumb that you should have at least ten observations per . . . per coefficient.
PROFESSOR CRAWFORD: I have not heard that specific rule, but I understand the idea behind it. And generally the idea behind that is if you don't have ten observations per one tends to get imprecise parameter estimates. . . . I don't subscribe to the One-in-Ten Rule.
2/28/18 Tr. 1461, 1463 (Crawford) (emphasis added). Nowhere in this testimony did Professor Crawford indicate a familiarity with the supposed “one-in-ten” rule in counsel's question, and Professor Crawford instead Start Printed Page 3566attempted merely to explain his understanding of this heuristic as the SDC's counsel had presented it.[60] Without a more developed record regarding the existence and applicability of this one-in-ten heuristic, the Judges cannot find that Professor Crawford's use of “only” 3.55 observations per variable would have a negative impact on his regression methodology. Moreover, because the SDC presented this principle as a heuristic rather than a rule, the underdeveloped nature of the record is of even greater importance. Finally, because the problem of overfitting versus underfitting (the bias/variance dilemma discussed supra) appears to be a judgment call for the econometric modeler, the Judges are loath to impose this heuristic as an invalidating principle in connection with Professor Crawford's regression.
Relatedly, Professor Crawford only acknowledged that overfitting would be a problem if there were a one-to-one ratio of variables to observations that would perfectly predict the variables, but with very wide confidence intervals. Professor Crawford testified that, in his opinion, his confidence intervals were not so wide as to diminish the value of his regression results. See 2/28/18 Tr. 1460-62 (Crawford). The Judges agree that Professor Crawford did not go further than acknowledging that an absolute identity in the number of variables and observations would create an overfitting problem.
As a more theoretical rejoinder, Professor Crawford asserted that concerns with regard to overfitting apply to “prediction” regressions—not “effects” regressions such as Professor Crawford's regressions and all the Waldfogel-type regressions introduced in this proceeding. Id. at 1460, 1463.[61] However, Professor Crawford did not provide a sufficient explanation as to the disparate impacts of overfitting in a “prediction” regression and an “effects” regression to allow the Judges to find that the relatively low number of observations per variable is less important in his “effects” regression.
Second, according to SDC, Professor Crawford's total observations were diminished, and his regressions compromised, because he “effectively discarded” approximately 15% of his observations by disregarding observations from systems with a single subscriber group, which totaled “approximately half of all systems in his data set”, by virtue of his reliance on “system-accounting period fixed effect.” SDC PFF ¶ 110 (citing 2/28/18 Tr. 1458 (Crawford); Crawford CWDT at 21, Fig. 10; 3/8/18 Tr. 2710-11 (Erdem)).
The Judges are troubled by CTV's failure to respond expressly to this criticism.[62] Similarly, the Judges are troubled that CTV neither cited nor addressed the SDC's criticism that Professor Crawford did not test his model for overfitting.
The final reason the SDC criticized Professor Crawford's analysis for overfitting was their claim that he essentially selected his regression model out of “more than one” model he had previously run. SDC PFF ¶ 118 (citing 3/1/18 Tr. 1888 (Bennett)). More particularly, the SDC contended that Professor Crawford and his team disregarded at least two regressions. First, Professor Crawford allegedly discarded a regression without the top-six multiple-system operator (MSO) interaction variables that were in his final model. 2/28/18 Tr. 1642-44 (Crawford). Second, the SDC asserted that Professor Crawford disregarded “a model run at the system level instead of the subscriber group level,” i.e., a model that would not have treated system-accounting period data as a fixed effect. 3/1/18 Tr. 1888 (Bennett). See SDC PFF ¶ 113 (relying on Crawford and Bennett testimony).
According to the SDC, Professor Crawford's rejection of several models before deciding on the one he presented in evidence in this proceeding indicated a potential likelihood of overfitting in the regression model in evidence through his consumption of “phantom degrees of freedom,” i.e., “variables that were tried and rejected”—rather than included in the regression model in evidence.[63] SDC PFF ¶ 113 (citing 3/8/18 Tr. 2711 (Erdem)).
The SDC claimed this issue is important in the context of its overfitting criticism because, as Professor Crawford's testimony indicated, it is not generally good econometric practice “to try a regression, to reject some variable or to reject a form, and then try another specification and find you get a statistically improved result.” SDC PFF ¶ 115 (citing 2/28/18 Tr. 2109 (Crawford)). According to Dr. Erdem, when such an approach is taken, “the reliability of the coefficients at the end of that model selection process is questionable.” 3/8/18 Tr. 2711 (Erdem).
In response, CTV noted that it had addressed the issue of the first supposed “discarded” regression without the top-six MSO interaction variables, in its opposition to a Motion to Strike filed by SDC. In that Opposition, CTV made particular note of Professor Crawford's written direct testimony in which he explained why his regression analysis did not originally treat the interaction of these top-six MSOs as a fixed effect. See Crawford CWDT ¶ 166 (“Dummy variables for each of the six largest MSOs—Comcast, Time Warner, AT&T, Verizon, Cox, and Charter—are included as covariates to capture potential differences in factors not included in the econometric model that could shift demand for bundles that include imported distant broadcast signals.”).
CTV further referred to the Judges' Order Denying SDC Motion to Strike Start Printed Page 3567Testimony of Gregory S. Crawford (Crawford Order), which credited CTV's position that Professor Crawford had not run such an alternative course of action by generating a regression and then discarding it, but rather had decided to add the top-six MSO effects as “fixed effects” in the course of developing his regression approach, in order better to isolate the correlation, if any, between the explanatory (independent) variables at issue in this proceeding—the different programming categories—and the dependent variable, i.e., total royalties. As the Judges explained in the Crawford Order:
Dr. Crawford's WDT . . . explained how he first described differences that were observed in the data among the six largest MSOs in terms of their average receipts per subscriber. CTV Opp'n at 10-11 and Ex. 2004, Figure 6. Dr. Crawford's WDT also explained that these differences may suggest other important differences among these large MSOs regarding their signal carriage strategies, pricing, and other relevant dimensions. CTV Opp'n at 11; Ex. 2004 ¶ 61. Dr. Crawford also described a regression without the six MSO Interaction variables. Ex. 2004 ¶ 61 (unobserved differences in average revenue per subscriber could bias estimates of relative value of different programming).
Crawford Order at 5.
The Judges find that the SDC's criticism of Professor Crawford's models for consuming “phantom degrees of freedom” is essentially a restatement of Dr. Erdem's general claim of overfitting. Accordingly, this argument does not add a new basis for reducing the weight the Judges place on Professor Crawford's regression analysis.[64]
On balance, the Judges find that there may be some degree of overfitting in Professor Crawford's regression analyses that he did not adequately explain. It further appears that this problem was the result of a tradeoff, arising from Professor Crawford's use of a subscriber group analysis and thus a reliance on system-accounting period fixed effects that, as the SDC noted, reduced the number of observations in Professor Crawford's data set. Although such potential overfitting may exist, there is nothing in the record to demonstrate sufficiently that this problem would support a decision to diminish the Judges' reliance on Professor Crawford's regression analysis.[65]
3. Program Suppliers' Criticisms of Dr. Crawford's Analysis
a. Assumption Regarding CSO Behavior
Sue Ann Hamilton, an industry expert, testified that Professor Crawford made a significant error (one that would apply to any Waldfogel-type regression) when he posited that CSOs make decisions regarding distant retransmission based on their intention to maximize profits by selecting those stations with an optimal bundle of programming. Corrected Written Rebuttal Testimony of Sue Ann Hamilton, Trial Ex. 6009, at 13-14 (Hamilton CWRT). Rather, Ms. Hamilton testified, a CSO's selection of stations for distant retransmission is marked by inertia, not by an affirmative analysis and weighing of alternative stations. Id. She identified two reasons for CSO inertia. First, distant retransmission costs represent a non-material expenditure for CSOs compared with their other more expensive programming and carriage decisions. Id. at 9. Second, she testified that CSOs are more concerned with losing existing subscribers if they drop certain stations and the associated programs than they are with whether or not any new retransmitted station and its associated programs might entice new subscribers.[66] Id. In industry jargon, CSOs are more concerned with “legacy distant signal carriage” than with adjusting the roster of distantly retransmitted stations. Id. at 15. Thus, Ms. Hamilton implied, any correlation between program categories and royalties is spurious, because it is “inconsistent with [her] understanding of how CSOs actually make distant signal carriage decisions.” Id.[67]
The Judges find that Ms. Hamilton was a knowledgeable and credible witness, particularly with regard to the de minimis impact of distantly retransmitted stations on CSOs and the importance of “legacy carriage.” Moreover, the Judges take note that CSO time and effort are themselves finite resources (opportunity costs), and, as Ms. Hamilton implied, it would behoove a rational CSO to expend more of those resources making carriage and programming decisions with a greater financial impact.[68]
However, the Judges do not find that the relative unimportance of distantly retransmitted stations to a CSO deprived the regression by Professor Crawford, or any of the regressions in evidence, of value in this proceeding. If the reasons articulated by Ms. Hamilton caused CSOs to emphasize legacy carriage over potential increases in value from adding or substituting different local stations for distant retransmission, then otherwise well-constructed regressions should capture the relative values of those legacy-based decisions. The Judges are mindful that regression analysis is of benefit because it looks for a correlation between economic actors' choices (the independent explanatory variables) and the dependent variables as potential circumstantial evidence of a causal relationship, but it does not purport to explain what lies behind such a potential causal relation. Thus, Ms. Hamilton has not so much criticized regression analyses as she has provided an answer to a different question.
Indeed, if legacy-based decision-making is prevalent, the Judges would expect to see relatively stable shares over the royalty years encompassed within and across the Allocation/Phase I proceedings. In fact, the record does reflect relative stability. See, e.g., Crawford CWDT ¶¶ 12, 15 (in his two regressions in this proceeding, “the estimated parameters underlying these marginal values are stable across years . . . .”), ¶ 39, Table V-3. It thus appears that past decision-making has to an extent generally locked in (through an emphasis on legacy carriage) decisions as to the carriage of distantly Start Printed Page 3568retransmitted stations for the 2010-2013 period.
In sum, therefore, Ms. Hamilton's testimony, while informative and credible, does not diminish the value of Professor Crawford's regression or, for that matter, any other Waldfogel-type regression.
b. Minimum Fee Issue
Dr. Jeffrey Gray criticized Professor Crawford's regression because the analysis included in the dependent variable royalties that are paid as part of the statutorily mandated minimum fees. Gray CWRT ¶¶ 17-18. Any Form 3 cable system must pay a system-wide minimum fee equal to 1.064% of its gross receipts into the royalty pool for distantly retransmitted stations; that fee covers the retransmission of up to one full DSE and is owed even if the system does not retransmit any stations to distant markets. 17 U.S.C. 111(d)(1)(B)(i) and (ii). Dr. Gray asserted that, consequently, the data used by Professor Crawford is not informative, because the minimum fee cost is decoupled from the marginal economic decision regarding the retransmission of the first DSE. Gray CWRT ¶¶ 20-22.
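For concreteness, the statutory arithmetic can be illustrated with a hypothetical figure (the gross receipts amount is hypothetical): a Form 3 system with gross receipts of $10,000,000 for an accounting period owes a minimum fee of

\[
0.01064 \times \$10{,}000{,}000 = \$106{,}400,
\]

payable whether the system retransmits no distant signal at all or up to one full DSE; only carriage beyond one DSE generates an additional, marginal royalty obligation.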
Dr. Gray noted that approximately 50% of CSOs did retransmit more than one DSE, and thus voluntarily paid a royalty greater than the minimum fee. Dr. Gray acknowledged that the data regarding this subgroup of CSOs was informative because these CSOs had made a discretionary choice to incur additional royalty charges in exchange for carriage of additional distantly retransmitted stations and their constituent programs. Accordingly, he ran what he described as Professor Crawford's regression using only the CSOs that paid more than the minimum fee, and his results were different from Professor Crawford's results. However, although Dr. Gray had characterized his work as a rerun of Professor Crawford's regression, at the hearing Dr. Gray confirmed that he had been “unable to replicate” Dr. Crawford's regression. 3/14/18 Tr. 3739 (Crawford).[69]
In any event, Dr. Gray's analysis resulted in the following allocations among program categories, presented in the table below alongside Professor Crawford's allocations (and Dr. Gray's viewership-based allocations discussed elsewhere in this Determination):
Table 4—Impact of Accounting for Minimum Fees Requirement on Crawford Royalty Shares, 2010-2013
Claimant category      (1) Crawford royalty shares (%)      (2) Crawford-modified royalty shares (%)      (3) Distant viewing royalty shares (%)
CCG                    3.51                                  5.46                                          3.70
CTV                    16.50                                 13.54                                         13.50
Devotionals            0.60                                  0.75                                          1.44
Program Suppliers      23.44                                 61.19                                         45.43
PTV                    17.72                                 19.06                                         33.04
JSC                    38.23                                 0.00                                          2.89

Gray CWRT ¶ 24, Table 3.
In response, Professor Crawford pointed out that, contrary to Dr. Gray's assertions, Dr. Crawford's regression did not ignore the impact of the minimum fee, because he included an indicator variable as a control, subsumed within his fixed effects variables, to reflect whether the minimum fee was paid at the system level. 2/28/18 Tr. 1422 (Crawford). Thus, Professor Crawford maintained that he had already accounted for the minimum fee effect. Accordingly, Professor Crawford argued that Dr. Gray's analysis merely attempted to account for minimum fee systems in a different way—by omitting those systems instead of replicating Professor Crawford's regression that used control variables and fixed effects to account for the minimum fee paying systems.[70]
Dr. Gray is correct with regard to his general principle that a CSO's decision to distantly retransmit any particular station, when that CSO is otherwise obligated to pay the minimum royalty fee, does not indicate a direct correlation between the decision to retransmit and the decision to incur a royalty obligation. By contrast, when a CSO decides to incur an increase in its marginal royalty costs by retransmitting more than one DSE, that decision reveals the CSO's preference to incur the royalty cost in exchange for the perceived value of the distantly retransmitted station and the programs in that station's lineup.
As Dr. Gray noted, the minimum royalty fee is somewhat akin to a “tax” that is paid regardless of whether the CSO decided to distantly retransmit a local station. 3/14/18 Tr. 3704 (Gray). Nonetheless, the CSO still has several choices to make, because it will receive something of potential value, i.e., distantly retransmitted stations, in exchange for the “tax.” The first choice is binary: should it retransmit any station or no station? As Dr. Gray noted, during the 2010-2013 period, on average 527 out of the 1,004 Form 3 CSOs analyzed (52.5%) chose to retransmit no more signals than the regulated fees permitted; 83 paid the minimum fee yet elected not to retransmit any local stations. Gray CWRT ¶ 17. Those decisions reveal that the CSO has concluded (whether by analysis or resort to a heuristic) that any of the marginal costs (physical or opportunity) associated with retransmission likely exceed the value to the CSO of such retransmission, even accounting for minimum royalties, which the CSO must pay in any event. Start Printed Page 3569
These statistics also reveal that many CSOs decided to retransmit stations when they were obligated to pay only the minimum royalty. Although there is no marginal royalty cost associated with this decision, the CSO's decision as to which stations to retransmit remains a function of choice, preference, and ranking.[71] Thus, the CSO in this context would still have the incentive to select distant local stations for retransmission that are more likely to maximize CSO profits, either by increasing subscribership or, as Ms. Hamilton emphasized, by avoiding the loss of subscribers through the preservation of “legacy carriage,” i.e., the non-analytical heuristic of maintaining the status quo.[72]
There are substantial economic bases for this finding. Because the “tax” of the minimum fee is paid regardless of whether distant retransmission occurs, that “tax” is also in the nature of a sunk cost. Fundamental economic analysis provides that a seller should ignore sunk costs when making marginal decisions (although it should try to recoup these costs if the buyers' willingness-to-pay allows it). Nonetheless, a CSO that decides to distantly retransmit a station when the marginal royalty cost is zero has revealed that the particular station contains programming that would increase marginal value to that CSO, over and above the next best alternative “retransmittable” local station and above any other marginal costs (e.g., physical retransmission costs or the opportunity cost of foregoing a different type of cable channel in the CSO's channel lineup).
Finally, Dr. Gray's emphasis on the CSOs that retransmit more than one DSE is misleading. Those other CSOs that pay only the minimum royalty fee and elect to distantly retransmit one station might have elected to pay a positive fee in the absence of the minimum fee. For example, assuming Program Suppliers' programs were more valuable to a CSO than the minimum fee and disproportionately more valuable than any other program category, that CSO would have retransmitted a station that disproportionately included Program Supplier content and willingly paid the minimum fee (or more). Dr. Gray's criticism fails to address this issue.
With regard to Dr. Gray's own regression, run for the first time in rebuttal, the Judges are not surprised that his different regression approach would yield different results. However, the Judges do not rely on methodological approaches proffered for the first time in rebuttal, except to the extent they appropriately demonstrate defects in another party's approach. Because Dr. Gray acknowledged that he could not replicate Professor Crawford's regression and because Dr. Gray therefore utilized a different approach, the Judges do not find that Dr. Gray's critique as it related to the minimum fee issue was sufficient to discredit Professor Crawford's approach.[73]
4. Conclusion Regarding Professor Crawford's Regression Analysis
Not only did Professor Crawford sufficiently respond to the criticisms of his regression analysis, but that analysis is also based on a number of other factors as to which no criticisms were leveled. First, he used the universe of all programming on all distant signals, rather than a sampling, thus avoiding any problems that may be associated with improper sampling or inadequately sized samples. 2/28/18 Tr. 1186 (Crawford). Second, by using data and royalties at the subscriber group level, his regression analysis related more specifically to programs and signals actually available to subscribers and provided more variation and observations than past regressions. 2/28/18 Tr. 1512, 1517-19, 1661 (Crawford). Third, his use of a fixed effects approach avoided the criticism that he had omitted key variables. Crawford CWDT ¶ 107; 2/28/18 Tr. 1398 (Crawford). Fourth, the confidence intervals for his proposed shares were relatively narrow at the 95% confidence level (i.e., at a .05 significance level). Crawford CWDT ¶¶ 117 and 176, Tables 23 & 24. Fifth, Professor Crawford acknowledged the potential problem that his fixed effects could lead to the “costs” of higher standard errors and wider confidence intervals (and, as Professor George noted, with specific reference to the minimum fee issue), but he was able to mitigate that effect with his rich data set, so that his parameters remained relatively precise. Crawford CWDT ¶ 123. Finally, unlike the other regressions, Professor Crawford's regression does not estimate negative values for any of the coefficients of interest in this proceeding, which makes his regression analysis (especially his duplicated analysis that also had no negative coefficients for network programming) more of a stand-alone estimate of relative value and less in need of reconciliation with the survey analysis. Thus, on balance, the Judges find Professor Crawford's regression analysis, especially his duplicated-minutes approach, to be highly useful in estimating relative values in this proceeding.
C. Dr. Israel's Regression Analysis
1. Introduction
On behalf of the Joint Sports Claimants, its economic expert, Dr. Mark Israel, conducted a regression also in the general form of a Waldfogel-type regression, but with minor modifications intended to improve the reliability of the methodology. Written Direct Testimony of Mark Israel, Trial Ex. 1003, ¶¶ 23, 25 (Israel WDT). Dr. Israel's primary purpose was to determine whether such a regression would corroborate the results of the 2004-05 and the 2010-13 Bortz Surveys. He concluded that the “observable marketplace behavior” he had analyzed did indeed corroborate the results of both Bortz Surveys. Id. ¶ 8. Dr. Israel further testified that, if the Judges Start Printed Page 3570were to find that the 2010-13 Bortz Survey did not support a finding of relative market value, his and Professor Crawford's respective regressions constituted the best alternative evidence of such value. 3/12/18 Tr. 3079 (Israel).[74]
2. Dr. Israel's Regression
Dr. Israel analyzed royalties CSOs paid over a three-year period, 2010-2012, rather than the full four-year period at issue in this proceeding, 2010-2013. Id. ¶ 7. Dr. Israel testified that he did not analyze the full 2010-2013 four-year period because he had begun his analysis when the proceeding was limited to the three-year 2010-2012 period. However, he testified that he was able to confirm the accuracy of his regression estimates against the results from the Bortz Survey that covered all four years. He also noted that his results corresponded closely to the results that Professor Crawford obtained in his regression, which spanned the full four-year period. 3/12/18 Tr. 2838-40 (Israel).
Dr. Israel, like Professor Crawford, utilized the royalty data from the “Form 3” CSOs, i.e., the larger CSOs, which paid the largest dollar amount of royalties for distantly retransmitted stations by virtue of the large amount of “gross receipts” they earned from their cable operations. Israel WDT ¶ 9.
Referring to the regulated nature of the cable market, Dr. Israel noted: “There is no market price for distant signal programming to use in assessing relative marketplace value.” Id. ¶ 16. Dr. Israel further noted that, applying the principles laid out in prior proceedings, “relative marketplace value” must be estimated by consideration of evidence as to what royalties would be paid for different categories of programming in a “hypothetical free market.” Id. To ascertain that value, and consistent with his understanding of prior determinations, Dr. Israel focused on the relative value of program categories to the buyers, i.e., CSOs. Id.[75]
To assemble the specifications of his regression model, Dr. Israel applied the essentials of a Waldfogel-type regression. That is, he tested to find a correlation between: (1) Royalties paid by CSOs (the dependent variable) and (2) minutes of programing in each category of programming as established in this proceeding (the independent/explanatory variable). He utilized control variables to hold constant other potential drivers of CSO royalty payments, itemized infra. Id. ¶ 22.
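In schematic terms (a simplified rendering supplied here for exposition, with royalties expressed in dollar levels, consistent with the dollars-per-minute interpretation of the coefficients discussed below), the relationship tested is:

\[
\text{Royalties}_{i} = \sum_{c} \beta_c \,\text{Minutes}_{c,i} + \gamma' \mathbf{X}_{i} + \varepsilon_{i},
\]

where i indexes CSO accounting-period observations, c indexes the program categories, X_i collects the control variables itemized below, and ε_i is the regression error.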
However, he altered his approach from the Waldfogel regression approach in the following important ways:
- To reflect the fact that not all subscriber groups among a CSO's total subscriber base received any given distant signal, Dr. Israel prorated each signal “based on the fraction of the number of subscribers who received it . . . by using the variable in the CDC data called `Prorated DSE' as a measure of the prorated distant signal equivalents that each distant signal represents for each CSO—Accounting Period.” Id. ¶ 26.[76]
- To account for the retransmission of non-compensable “Network Programming” minutes in the estimates, Dr. Israel included those minutes to “effectively act” as a control variable, thus excluding them from the calculation of shares of the royalty fund. That is, he included these minutes in his regression because they are in fact retransmitted and “therefore are part of the cost-benefit analysis that a [CSO] undertakes when deciding whether or not to carry [a] distant signal . . . [h]ence explaining total royalty payments [even though] they are not compensable minutes in this proceeding.” Id. ¶ 27.
- To improve the quality of his estimates, Dr. Israel utilized a larger sample than employed in the Waldfogel regression. Specifically, Dr. Israel used data from a random sample of 28 days in each six-month accounting period in his 2010-2012 analysis, a 33% increase in the number of sample days (21) utilized in the Waldfogel regression. Id. ¶ 30.[77]
Dr. Israel controlled for other independent variables in essentially the same manner as in the Waldfogel regression, by including the following control variables in his regression model:
- Number of CSO subscribers from the previous accounting period
- Number of activated channels for the CSO in the previous accounting period
- Count of broadcast channels for the CSO
- Indicator for whether a CSO pays the special 3.75 percent rate royalty fee
- Indicator for whether or not the CSO pays the minimum statutory payment
- Average household income for the CSO's Designated Market Area (DMA)
- Indicators for the accounting period of each observation
Id. ¶ 33.
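A schematic rendering of a specification along these lines is sketched below; the column names and the simulated data are invented placeholders, not Dr. Israel's data or code.

```python
# Hypothetical sketch of a regression specification using control variables
# like those itemized above. All names and data are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 800
df = pd.DataFrame({
    "sports_min": rng.uniform(0, 300, n),
    "ps_min": rng.uniform(0, 3000, n),
    "ctv_min": rng.uniform(0, 1500, n),
    "ptv_min": rng.uniform(0, 1000, n),
    "network_min": rng.uniform(0, 2000, n),      # control only, not compensable
    "subs_prev": rng.lognormal(9, 1, n),         # subscribers, prior period
    "channels_prev": rng.integers(50, 300, n),
    "broadcast_count": rng.integers(3, 15, n),
    "rate_375_flag": rng.integers(0, 2, n),
    "min_fee_flag": rng.integers(0, 2, n),
    "dma_income": rng.normal(55000, 10000, n),
    "acct_period": rng.integers(1, 7, n),        # accounting-period indicator
})
df["royalties"] = (5 * df.sports_min + 0.5 * df.ps_min + 1.0 * df.ctv_min
                   + 0.7 * df.ptv_min + 1.3 * df.subs_prev
                   + rng.normal(0, 5000, n))

model = smf.ols(
    "royalties ~ sports_min + ps_min + ctv_min + ptv_min + network_min"
    " + subs_prev + channels_prev + broadcast_count + rate_375_flag"
    " + min_fee_flag + dma_income + C(acct_period)",
    data=df,
).fit(cov_type="HC1")                            # robust standard errors
print(model.params[["sports_min", "ps_min", "ctv_min", "ptv_min"]])
```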
Through these specifications, Dr. Israel stated that he was able to answer what he characterized as the fundamental question: “How much do CSO royalty payments increase with each additional minute of each category of programming content?” Id. ¶ 34.
Applying his regression model, Dr. Israel made the following estimations:
Table 5—Israel Regression Model Results
Variables                                                      Regression model, all categories
Minutes of Sports Programming                                  ** 4.836 (2.466)
Minutes of Program Suppliers Programming                       *** 0.469 (0.104)
Minutes of Commercial TV Programming                           *** 1.010 (0.355)
Minutes of Public Broadcasting Programming                     ** 0.660 (0.306)
Start Printed Page 3571
Minutes of Canadian Programming                                *** −0.973 (0.212)
Minutes of Devotional Programming                              *** −0.701 (0.246)
Minutes of Network Programming                                 *** −0.985 (0.290)
Minutes of Other Programming                                   ** 0.916 (0.462)
Number of Subscribers (Previous Accounting Period)             *** 1.351 (0.0601)
Number of Activated Channels (Previous Accounting Period)      *** 141.8 (18.73)
Median Household Income in Designated Marketing Area           *** 1.339 (0.286)
Count of Broadcast Channels                                    −493.5 (326.5)
Indicator for Special 3.75% Royalty Rate                       *** 41,918 (4,711)
Minimum Payment Indicator                                      *** −16,501 (3,689)
Observations                                                   5,465
R-squared                                                      0.692

Source: TMS/Gracenote; Cable Data Corporation; Kantar media/SRDS.
Note: Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.[78]

Israel WDT ¶ 36 Table V-I (citations omitted).
Although Dr. Israel reported the standard errors generated by his regression (in the parentheticals in the table above, pursuant to conventional regression notation), he did not set forth the confidence intervals that result from these standard errors, either for his coefficients or for the resulting shares. He acknowledged that it would be difficult to calculate meaningful confidence intervals in this exercise because shares of any one category are dependent on the shares in other categories and the econometrician must “do something more than just a simple linear calculation.” 3/12/18 Tr. 2975 (Israel).
Nonetheless, Dr. Israel acknowledged that confidence intervals could be calculated from the standard errors in his regression. In cross-examination, and by way of example, he acknowledged that the confidence interval applicable to the JSC programming coefficient in his regression ranged from 0.003 to 9.669. 3/12/18 Tr. 2976 (Israel). Given this range, he agreed that the math would create a range for the value of JSC programming, with a 95% degree of confidence, between “a fraction of a penny and $9.67 per minute.” 3/12/18 Tr. 2977 (Israel). Similarly, Dr. Israel acknowledged that, given his standard error for CTV, he could state with 99% confidence that the value for a minute of CTV programming ranged between 31 cents and $1.71. 3/12/18 Tr. 2978 (Israel). In similar fashion, Dr. Israel acknowledged that his regression, and the standard errors he reported, generated the following confidence intervals for each minute of programming: for PTV, between $0.06 and $1.26; for Canadian Programming, between −$1.39 and −$0.56; and for SDC programming, between −$1.18 and −$0.22.
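For reference, the JSC figure quoted above follows from the conventional two-sided 95% interval applied to the Table 5 estimates (coefficient 4.836, standard error 2.466):

\[
4.836 \pm 1.96 \times 2.466 \approx [0.003,\ 9.669],
\]

that is, between a fraction of a cent and roughly $9.67 per minute.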
Dr. Israel further acknowledged that the coefficients he estimated in his regression all fell within the confidence intervals of each other, which suggested an overlapping that could undermine the usefulness of his results. However, he denied that such a consequence had statistical meaning detrimental to his opinion because “confidence intervals tell you something about the precision of those coefficients, but you can't step from a statement about statistical significance to a statement about magnitude of value.” 3/12/18 Tr. 3014 (Israel).
Nonetheless, Dr. Israel conceded that “the confidence intervals are . . . important if I have no other information to compare it to, so I am testing a hypothesis based on just the regression.” 3/12/18 Tr. 2981 (Israel). However, Dr. Israel further testified that he reached the opinion that the regression he ran generated meaningful coefficients because they corroborated the Bortz Survey, which was both the primary purpose of his regression analysis and a corroborative result that mitigated any uncertainty generated by the wide confidence intervals arising out of his regression. 3/12/18 Tr. 2981-82 (Israel).
Dr. Israel described the coefficients derived by his regression analysis as Start Printed Page 3572representing the “average value across all cable systems of an additional minute of that category of programming.” Israel WDT ¶ 37; 3/12/18 Tr. 2831 (Israel). Thus, it became a simple algebraic matter “to determine the relative value of each type of programming.” That is, as with any Waldfogel-type regression, Dr. Israel simply took the coefficient estimated by his regression for each program category and multiplied it by the number of minutes applicable to that category, and divided that product by the total value of all such products summed across all categories. He expressed the ratio for any program category X as:
\[
\text{Share}_X = \frac{\text{Coefficient}_X \times \text{Minutes}_X}{\sum_{j} \text{Coefficient}_j \times \text{Minutes}_j}
\]

Israel WDT ¶ 38. Applying this ratio to each of the six categories, Dr. Israel calculated the following estimated percentage shares per category, averaged over the 2010-2012 period for which he had data:
Table 6—Israel Regression: Estimated Percentage Shares
Category              2010-2012 average share (%)
JSC                   37.54
Program Suppliers     26.82
CTV                   22.16
PTV                   13.48
SDC                   0.00
CCG                   0.00

Id. Table V-2. However, Dr. Israel did not calculate share allocations for specific years, which is how the Judges are required by statute to make the allocations.[79]
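A minimal sketch of that share calculation follows; the coefficients and minute totals are invented placeholders, and the treatment of negative coefficient estimates as zero value is an illustrative assumption, not the record methodology.

```python
# Minimal sketch of converting per-minute coefficients into relative shares.
# All numbers are invented placeholders; negative coefficient estimates are
# treated as zero value here, which is an illustrative assumption.
coefficients = {"JSC": 4.8, "PS": 0.5, "CTV": 1.0, "PTV": 0.7, "SDC": 0.0, "CCG": 0.0}
minutes = {"JSC": 2.0e6, "PS": 1.4e7, "CTV": 5.5e6, "PTV": 5.0e6, "SDC": 1.0e6, "CCG": 8.0e5}

total_value = sum(coefficients[c] * minutes[c] for c in coefficients)
shares = {c: 100 * coefficients[c] * minutes[c] / total_value for c in coefficients}
for category, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {share:.1f}%")
```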
Dr. Israel further noted that these results were not only consistent with the results of the Waldfogel regression for the 2004-05 years, they were consistent with the results of the regression undertaken by Dr. Rosston, referenced supra, in an earlier proceeding covering 1998 and 1999. Specifically, Dr. Israel's regression implied the same rank order for the top four programming categories and a generally similar magnitude of royalty allocations for the top three categories as in Dr. Waldfogel's regression. Id. ¶ 39.
Further, with regard to his assigned task, Dr. Israel noted that his rank order for the top four program categories was consistent with—and thus corroborative of—the top four rank order determined by the Bortz Survey. Dr. Israel set forth and also depicted the consistency of his regression and the Bortz Survey as follows:
Table 7—Comparison of Bortz Survey Results to Israel Regression
Programming category     2010 (%)   2011 (%)   2012 (%)   2013 (%)   Bortz Survey average 2010-2013 (%)   Israel regression 2010-2012 (%)
Sports                   40.9       36.4       37.9       37.7       38.2                                 37.5
Program Suppliers        31.9       36.0       28.8       27.3       31.0                                 26.8
CTV                      18.7       18.3       22.8       22.7       20.6                                 22.2
PTV                      4.4        4.7        5.1        6.2        5.1                                  13.5
Devotional               4.0        4.5        4.8        5.0        4.6                                  0.0
Canadian                 0.1        0.2        0.6        1.2        0.5                                  0.0
Start Printed Page 3573Dr. Israel acknowledged that although his ranking of the top four categories (JSC, Program Suppliers, CTV and PTV) was consistent with the Bortz Survey ranking, that consistency did not extend to the bottom tier (PTV, SDC and Canadian programming). Id. ¶ 41. Rather, he acknowledged that his regression estimated no value for the SDC and Canadian programming. However, he noted that, when the three low-tier categories are viewed collectively, his regression estimated a total share of value (13.5%) for all three categories (in fact, attributable entirely to PTV), and the Bortz Survey provided what he understood to be a roughly equivalent total relative value, in a range between approximately 9% and 13%, for Public TV, Devotional, and Canadian programming. 3/12/18 Tr. 2880-81 (Israel).
To test the robustness of his findings, Dr. Israel conducted several sensitivity analyses. He concluded that each of his sensitivity analyses “confirm[ed] the relative ranking of the various categories, particularly of the top three categories relative to the bottom three.” Israel WRT ¶ 43. See also Id. App. C.
More particularly, Dr. Israel ran three sensitivity analyses to determine whether the following changes in his model would alter his results in any meaningful way. These analyses examined changes that would result from: (1) Isolating JSC minutes and comparing these minutes “to all other programming minutes combined . . . to test whether the value for [JSC] minutes is sensitive to splitting out the individual programming categories” (as in his regression), (2) controlling for any additional “market-specific traits of the CSO” (through application of a DMA “fixed effect”), and (3) controlling for any royalties “that [resulted from] the 3.75% fee [rather than] the base rate fee royalties.” In each sensitivity analysis, Dr. Israel found that the changes had “no effect on any of [his] conclusions.” Id.
3. Program Suppliers' Criticisms
Dr. Gray expressed a number of specific criticisms of Dr. Israel's regression, in addition to Dr. Gray's criticisms of Waldfogel-type regressions generally.
a. Alleged Sensitivity of Regression
First, Dr. Gray asserted that Dr. Israel's regression exhibits “remarkable sensitivity” because it implies a wide range of relative shares. For example, when Dr. Israel's standard errors are converted into confidence intervals, his regression indicates a range for the JSC share “from 0% to 63.29%” when assumptions are changed “regarding the choice of explanatory variables or the assumed functional relationship those variables have on royalty fees paid.” Gray CWRT ¶ 28.
Dr. Gray testified that he replicated Dr. Israel's results exactly and then calculated what Dr. Israel had omitted—95% confidence intervals around the estimates of the value of an additional minute of programming by category type. Gray WDT ¶ 29. Dr. Gray determined that at the 95% confidence level, the JSC share could have been as low as 0.05%, far less than the 37.5% share derived by Dr. Israel through his point estimate, but consistent with the 0% share for the JSC estimated by the SDC's economic expert, Dr. Erdem. Accordingly, Dr. Gray opined that Dr. Israel's regression is both “imprecise” and “unreliable.” Gray CWRT ¶ 29.
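For context on the mechanics of this criticism, the following sketch shows how a point estimate and its standard error translate into a 95% confidence interval and, in turn, into a range of implied shares. The figures are hypothetical illustrations only, not the estimates of either expert.

    # Hypothetical per-minute coefficient and standard error (not record figures).
    beta_hat = 12.0   # estimated value per distant JSC minute, in dollars
    se_beta = 6.0     # estimated standard error of that coefficient

    # 95% confidence interval using the normal critical value of 1.96.
    lower, upper = beta_hat - 1.96 * se_beta, beta_hat + 1.96 * se_beta
    print(f"95% CI for the coefficient: [{lower:.2f}, {upper:.2f}]")

    # A category share built from coefficient x minutes inherits that uncertainty:
    # a lower bound near zero implies a share near zero, even when the point
    # estimate implies a large share.
    jsc_minutes = 1_000.0      # hypothetical JSC minutes
    other_value = 20_000.0     # hypothetical combined value of the other categories
    share_point = beta_hat * jsc_minutes / (beta_hat * jsc_minutes + other_value)
    share_lower = max(lower, 0.0) * jsc_minutes / (max(lower, 0.0) * jsc_minutes + other_value)
    print(f"share at the point estimate: {share_point:.1%}; at the CI lower bound: {share_lower:.1%}")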
Dr. Israel rejected Dr. Gray's criticisms in this regard. Specifically, Dr. Israel maintained that it was uninformative that Dr. Gray's sensitivity analysis diminished the statistical significance of Dr. Israel's estimates because statistical significance is “a measure . . . [of] how certain we are that the estimate is different from zero.” 3/12/18 Tr. 2840 (Israel). Further, when a modeler or critic adds many additional variables, the regression will generate lower statistical significance. Thus, according to Dr. Israel, Dr. Gray's sensitivity analysis necessarily created the loss of statistical significance, by introducing too many new variables that were unrelated to the core variables (program categories) that must be isolated and measured in this proceeding.
Dr. Israel also defended this large interval with what the Judges see as a non sequitur—that he nonetheless still ranked the JSC first. See id. at 3011. When confronted with the additional fact that injecting the DMA effect into the regression resulted in a regression with the highest R² among his proffered and sensitivity regressions, Dr. Israel testified that when “you add a bunch of DMA fixed effects, you're going to get a higher R-squared. The notion of choosing a regression to maximize R-squared is given zero credit in economics.” Id. The Judges agree with Dr. Israel on this narrow point because, as discussed supra with regard to the Crawford regression analysis, goodness-of-fit as measured by the R² calculation is not dispositive when evaluating a regression intended to measure specific effects rather than to predict a result.
The Judges also agree with Dr. Israel that the replicated model created by Dr. Gray did not necessarily discredit Dr. Israel's analysis, given the addition of several variables in that replication.
However, the Judges agree with Dr. Gray that the large confidence intervals around Dr. Israel's estimated coefficients—and therefore around his shares—are troubling, especially when compared to the narrow confidence intervals and low standard errors in Professor Crawford's regression analysis. The Judges recognize, as in the 2004-05 Determination, that wide confidence intervals and large standard errors call into doubt the “precision of the results [and] caution against assigning ‘too much weight’ to their corroborative value.” See also ATA Airlines, 665 F.3d at 896 (confidence interval can be so wide that “there can be no reasonable confidence” sufficient for reliance by fact finder).[80]
b. Choice of Linear Functional Form and Inclusion of Minimum Fee CSOs
Dr. Gray took issue with Dr. Israel's use of a linear relationship between royalties paid and minutes of programming, rather than a log of royalties paid. By comparison, Dr. Gray found that Professor Crawford's use of a log-linear relation was “a more realistic economic function for the functional form of the relationship,” particularly as “between minutes and royalties,” because the logarithmic calculation revealed the percentage impact that retransmitted minutes have on royalties. Gray CWRT ¶ 30.[81]
In response to Dr. Gray's criticism of his use of a linear form, Dr. Israel testified that “taking the log is kind of a technical thing . . . .” 3/12/18 Tr. 2856 (Israel). Further, he did not utilize any econometric tests to determine whether the linear form was appropriate, particularly compared to the log-linear form.
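To illustrate the distinction at issue (and nothing more), the following sketch fits both functional forms to synthetic data with hypothetical variable names; the coefficient in the linear form is read as dollars of royalties per additional minute, while the coefficient in the log-linear form is read, approximately, as the proportional change in royalties per additional minute.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    minutes = rng.uniform(100, 5000, size=500)                         # hypothetical programming minutes
    royalties = 200 + 0.5 * minutes * rng.lognormal(0, 0.3, size=500)  # synthetic royalty fees

    X = sm.add_constant(minutes)

    # Linear form: royalties regressed on minutes.
    linear_fit = sm.OLS(royalties, X).fit()

    # Log-linear form: the log of royalties regressed on minutes.
    loglinear_fit = sm.OLS(np.log(royalties), X).fit()

    print(linear_fit.params)     # intercept and dollars per minute
    print(loglinear_fit.params)  # intercept and approximate proportional change per minute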
Dr. Gray combined his log transformation of Dr. Israel's linear approach with another of Dr. Gray's criticisms—the use of data from CSOs that only pay the minimum fee (as he also discussed in his criticism of Professor Crawford's regression). Adjusting for these two purported defects, Dr. Gray found that the reworked regression produced estimates radically different from those of Dr. Israel's unadjusted regression:
Table 8—Impact of Accounting for Minimum Fees Requirement on Israel Royalty Shares, 2010-2013
Claimant category        Israel royalty shares (%)    Israel-modified royalty shares (%)    Distant viewing royalty shares (%)
                                               (1)                                   (2)                                   (3)
CCG                                           0.00                                  4.15                                  3.70
CTV                                          22.16                                 27.20                                 13.50
Devotionals                                   0.00                                  0.64                                  1.44
Program Suppliers                            26.82                                 44.27                                 45.43
PTV                                          13.48                                 19.55                                 33.04
JSC                                          37.54                                  4.19                                  2.89

Gray CWRT ¶ 31 Table 4.
In response to Dr. Gray's criticism of Dr. Israel's use of data from CSOs paying only the minimum fee, Dr. Israel stated that such data should not simply be disregarded, because it provides useful information regarding the carriage decisions of those CSOs. He also noted that Dr. Waldfogel's regression, relied upon by the Judges in the most recent Allocation/Phase I proceeding, likewise applied the data from CSOs who paid only the minimum fee. 3/12/18 Tr. 2830 (Israel).
The Judges agree with Dr. Israel that the data regarding the carriage decisions of CSOs who pay only the minimum fee should not be disregarded, and adopt their findings relating to this issue in connection with Professor Crawford's regression. See section II.B.3.b, supra. To summarize, even when a CSO is obligated to pay the minimum royalty fee, it still has the incentive to select stations for distant retransmission that it believes will maximize the benefits (or, in economic terms, utility) to the CSO. However, because carriage decisions are not tied even indirectly to a contemporaneous discretionary decision to pay royalties (beyond the mandatory minimum 1.064% for the first DSE), they strike the Judges as potentially less informative than discretionary decisions by CSOs to incur an additional royalty expense in order to distantly retransmit particular stations. Nonetheless, as explained supra in the Judges' consideration of this issue in connection with Professor Crawford's regressions, the Judges find no basis in the record by which they could or should make a reasonable “relative value” adjustment based on whether a CSO did or did not pay only the minimum fee.
c. Negative Coefficients
Dr. Gray further attacked the usefulness of Dr. Israel's regression by criticizing as “nonsensical” the negative coefficients Dr. Israel estimated for Canadian and Devotional programming. According to Dr. Gray, negative coefficients are implausible because a program category cannot have a negative market value. Gray CWRT ¶ 35.
In response, Dr. Israel did not dispute that the coefficients themselves (whether positive or negative) should be understood as the value per minute, or, equivalently, as the “implied price” of a minute of programming. 3/12/18 Tr. 2832-36 (Israel). Dr. Israel understood the negative coefficients to indicate that the inclusion of such programming on a station lineup (i.e., a bundle) correlated with a lower station value compared to programming that generated a “positive coefficient” in the regression. 3/12/18 Tr. 2832-33 (Israel). However, Dr. Israel conceded that even programming with negative coefficients nonetheless has positive value when retransmitted, and he therefore declined to assign zero value to such categories.
However, the Judges find that Dr. Israel's concession proves too much. If programs could have positive economic value despite the negative value of the coefficient identified by the regression, then the coefficient does not reflect absolute market value per minute. Rather, the coefficient must represent something else. Dr. Israel identified that something else as the contribution of a program category to the value of the royalty pool relative to the value of other program categories.[82] Of course, this “something else” is something that the Judges must determine in this proceeding—the relative value of a program from a given category to a CSO when packaged in a station bundle, i.e., relative to the inclusion of a program in another category.
Accordingly, the Judges do not find the presence of negative coefficients to be “nonsensical.” However, because of Dr. Israel's explanation of the negative coefficients, the Judges disagree with his decision to reset those negative coefficients to zero.[83] And, because negative coefficients do not mean that the programs lacked any absolute value as contributors to the sum of royalties paid, any negative values for program categories derived from a regression would need to be adjusted to reflect the absolute value of such programming, given that it indeed was retransmitted on some cable systems.[84]
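To make the arithmetic concrete, the following sketch (hypothetical coefficients and minutes, not record data) shows how category shares follow from coefficient-times-minutes products, and how resetting a negative coefficient to zero differs from an adjustment that preserves some positive absolute value. The particular adjustment shown is arbitrary and is included only to show that any such adjustment would itself require record support.

    import numpy as np

    # Hypothetical per-minute coefficients (relative values) and retransmitted minutes.
    coefs = np.array([10.0, 6.0, 4.0, -1.0])    # the last category has a negative coefficient
    minutes = np.array([1000, 2000, 1500, 500])

    values = coefs * minutes

    # Resetting negatives to zero: the negative category receives a 0% share.
    reset = np.clip(values, 0, None)
    shares_reset = reset / reset.sum()

    # Shifting all coefficients by an arbitrary positive constant keeps every category
    # positive, but the resulting shares depend entirely on the chosen constant.
    shift = 2.0
    shifted = (coefs + shift) * minutes
    shares_shifted = shifted / shifted.sum()

    print(np.round(shares_reset, 3))
    print(np.round(shares_shifted, 3))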
d. Criticisms by Dr. Jeffrey Stec
Dr. Jeffrey Stec, another economic expert witness for Program Suppliers, leveled several criticisms at Dr. Israel's regression. First, he added to the chorus of witnesses who opined that the regulated nature of the market renders inapposite any purported statistical relationship between royalties and program categories. Amended Written Rebuttal Testimony of Jeffrey Stec, Trial Ex. 6016, at 15 (Stec AWRT). Nonetheless, the Judges find regression in such circumstances to be a useful tool to ascertain relative differences in value among program categories, notwithstanding the regulated nature of the marketplace.
Dr. Stec also criticized Dr. Israel's regression because it suggests that two different distantly retransmitted signals could be associated with the same royalty level despite transmitting different combinations of content. Stec AWRT at 25-27. The Judges do not find this to be a valid criticism. Dr. Israel's regression identifies values for each program category and multiplies those values by the number of minutes transmitted for each category. These categorical values certainly could be summed up for any given signal, as Dr. Stec's criticism assumes. However, there is no reason why different signals retransmitted on different cable systems to different subscriber groups (of various sizes) could not generate the same level of royalties notwithstanding that they contain different mixes of program categories. This criticism misapprehends that the purpose of a section 111 allocation proceeding is not to value the signals as a whole, but rather to value the constituent program categories across the signals.
4. The SDC's Criticisms
a. Criticisms by John Sanders
John Sanders, a media valuation expert who testified on behalf of the SDC, criticized Dr. Israel's regression from a non-statistical perspective. First, he opined that the concept of correlating royalty generation with program categories is “conceptually flawed.” Written Rebuttal Testimony of John Sanders, Trial Ex. 5006, at 6 (Sanders WRT). He opined that marketplace value, or fair market value, is identified by evaluating actual transactions that are “modulat[ed]” by price and quantity. Accordingly, he asserted that a higher market value could be associated with programming that represents a relatively small amount of airtime. Amended Direct Testimony of John Sanders, Trial Ex. 5001, at 21.
The Judges agree with Mr. Sanders regarding the potential for programming to possess a relative value greater than would be suggested by relatively low total viewership and airtime.[85] However, that is not a reasonable criticism of the regression by Dr. Israel in particular or of the Waldfogel-type regressions in general. Such regressions, for example, have assigned a relative value to the JSC programming that is greater than its total minutes of airtime would suggest. See, e.g., Gray CWRT ¶ 31 & Table 4 (Israel regression estimated a 37.5% JSC share whereas a viewing analysis provided only a 2.8% JSC share).
Mr. Sanders also found fault with Dr. Israel's regression because other evidence suggested that SDC programming had a positive value not captured by that regression. Specifically, Mr. Sanders noted that when WGNA removed certain programming from its retransmitted feed, it would frequently replace that local programming with SDC programming, suggesting that the latter has significant value. Sanders WRT at 13.[86] While this may be indicative, anecdotally, of the value of SDC programming as “programming inserts on WGNA,” it does not suggest to the Judges any defect in Dr. Israel's regression analysis.
Finally, Mr. Sanders noted that CSO program selection cannot be viewed as a voluntary market-related decision in all instances, because the record reflects that WGNA's parent company, Tribune Media Services (Tribune Co. in 2010), had a practice of requiring CSOs to agree to transmit multiple stations that it owned if a CSO wanted to transmit a particular Tribune station. See Direct Testimony of Sue Ann R. Hamilton, Trial Ex. 6008, at 7 (Hamilton WDT).[87] Thus, Mr. Sanders argued, Tribune's forced bundling diminished the assumption that a CSO's station-by-station retransmission decision was made by consideration of the programming categories within the station signal. Rather, he opined that in certain instances, CSOs may well have retransmitted WGNA and its mix of categorical programming because those CSOs wanted to include other Tribune stations in the channel lineup.
Dr. Israel did not address this issue in his Written Rebuttal Testimony. However, another JSC witness, Allan Singer, a Charter Communications executive from 2011 through 2016, testified that “during [2010-2013], an annual average of approximately 86 Charter Form 3 systems made the decision to carry WGNA on a distant basis each year, and on average approximately 69 of those systems did not carry any other Tribune station in addition to WGNA [and] approximately 11 Charter Form 3 systems carried Tribune-owned stations on a local basis, but did not carry WGNA.” Written Rebuttal Testimony of Allan Singer, Trial Ex. 1009, ¶¶ 1, 5. Likewise, another JSC witness, Daniel Hartman, a former satellite television programming executive, testified that industry data showed “that in 2010-13 . . . 169 Form 3 cable systems carried a Tribune signal other than WGN (on a local or distant basis) while not carrying WGN during the same period . . . and . . . 725 Form 3 cable systems carried WGN as a distant signal while not carrying another Tribune signal during the same period.” Written Rebuttal Testimony of Daniel Hartman, Trial Ex. 1011, ¶ 25 (Hartman WRT).
The Judges find that the record does not support Mr. Sanders' or Ms. Hamilton's claim that there were tying-based reasons for the distant transmission of WGNA that would have diminished the probative value of WGNA data as regression inputs. Additionally, to the extent any tying-based pressures may have existed, they were not quantified and thus this factor could not serve to alter the regression estimates.[88]
b. Criticisms by Dr. Erdem
Dr. Erdem, on behalf of the SDC, leveled several criticisms at Dr. Israel's regression. Dr. Erdem opined that Dr. Israel's regression was especially sensitive to: (1) The inclusion of additional variables, (2) changes in the regression model specifications, and (3) data points that Dr. Erdem identified as “influential observations” [89] that, in his opinion, were statistical outliers. Erdem WDT at 14-18.
i. Sensitivity to Additional Variables
Dr. Erdem testified that much of the variation within Dr. Israel's regression could be explained by introducing the number of distant subscribers as an independent (explanatory) variable rather than applying it in the regression as a control variable. When Dr. Erdem applied this subscriber count data in this manner, he claimed that “all of the implied royalty shares” in Dr. Israel's regression became zero percent, and that some coefficients turned from positive to negative. Erdem WDT at 15-16. Overall, he found that, with this one sensitivity adjustment, the coefficients for the program categories necessarily were no longer statistically significant. Id.
In rebuttal, Dr. Israel focused on a database issue, arguing that Dr. Erdem had misunderstood “the nature of the CDC data” he used to calculate distant subscribers, resulting in double-counted subscribers. Israel WRT ¶ 24 n.22. This is the same criticism made of Dr. Erdem's data analysis pertaining to the number of distant subscribers. As noted, Dr. Erdem acknowledged his error, and the Judges denied the SDC's out-of-time motion for leave to correct his testimony.
Accordingly, the Judges find that, given the acknowledged deficiency in Dr. Erdem's application of distant subscriber data, his criticism of Dr. Israel's regression for failure to utilize that data as an independent (explanatory) variable rather than a control variable cannot support Dr. Erdem's claims regarding the lack of statistical significance in Dr. Israel's coefficients.
ii. Specification of the Functional Form of the Regression
With regard to Dr. Erdem's second criticism, he hypothesized that “royalty payments may not have a linear relationship with several potential variables.” Erdem WDT at 16. Therefore, he transformed Dr. Israel's regression from linear form to non-linear form to test for further sensitivity. Specifically, Dr. Erdem made log transformations to: (1) The total number of subscribers, (2) the number of distant subscribers, (3) the number of activated channels, and (4) the number of broadcast channels. Id. These transformations indicated to him that the estimated coefficients for the program categories changed substantially. Id. at 17.
In response, Dr. Israel asserted that he found Dr. Erdem's log transformation/exponential versions of Dr. Israel's level variables to be something he had “never seen . . . before.” Israel WRT ¶ 24, n.22. Rather, Dr. Israel characterized this transformation as “simply ‘fishing’ for a specification that changes my result—throwing variables into a model until the result changes.” Id. Dr. Israel indicated that such additions to the variables and such transformations are “not informative” because they lack “economic justification.” Id.
At the hearing, Dr. Israel elaborated, flatly rejecting the contention that Dr. Erdem had merely tested for non-linearities. Rather, he testified that Dr. Erdem had “added an extra set of variables to the regression.” 3/12/18 Tr. 2993 (Israel). He further elucidated that the proper way for Dr. Erdem to have tested for another functional form, i.e., a non-linear function, would have been to use a log form on the right side (the explanatory variable side) of Dr. Israel's regression, not for Dr. Erdem to pile log variables on top of linear variables. Id. at 2994.
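As a rough illustration of the distinction Dr. Israel drew, the sketch below (synthetic data and hypothetical variable names only) contrasts re-specifying the regressors in logs with adding log-transformed regressors alongside their level counterparts within the same linear model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "subscribers": rng.uniform(1_000, 100_000, n),
        "channels": rng.integers(20, 200, n).astype(float),
    })
    df["royalties"] = 1_000 + 0.02 * df["subscribers"] + 5.0 * df["channels"] + rng.normal(0, 500, n)

    # Baseline linear model in levels.
    levels = smf.ols("royalties ~ subscribers + channels", data=df).fit()

    # Re-specifying the functional form: replace the level regressors with their logs.
    log_form = smf.ols("royalties ~ np.log(subscribers) + np.log(channels)", data=df).fit()

    # By contrast, piling log variables on top of the level variables keeps the same
    # linear model and adds highly collinear regressors, which tends to inflate
    # standard errors rather than test an alternative functional form.
    stacked = smf.ols(
        "royalties ~ subscribers + np.log(subscribers) + channels + np.log(channels)",
        data=df,
    ).fit()

    print(log_form.params)
    print(levels.bse)
    print(stacked.bse)   # standard errors on the level regressors typically grow once their logs are added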
Finally, Dr. Israel testified that he decided to use a linear function in order to be consistent with the previous Waldfogel regression. Id. at 2955-56. As with the Judges' discussion regarding Professor Crawford's regression analysis, the Judges do not find that Dr. Israel's use of a linear relationship between royalties paid and these additional variables diminished the value of his regression analysis. Additionally, as discussed in connection with Professor Crawford's regression, the Judges do not find it was necessary or appropriate for a modeler to treat the number of subscribers, distant or otherwise, as anything other than control variables because, in this proceeding, the economic and regulatory purpose is to estimate the relative effects of the different program categories on the level of royalties rather than to predict or explain all of the causes of, or correlations between, other independent (explanatory) variables and the level of royalties.
iii. “Influential Observations”
Dr. Erdem identified 200 observations, out of Dr. Israel's 5,465 observations, that he labeled as “influential observations.” However, Dr. Erdem did not propose that these influential observations constituted outliers that should have been removed from Dr. Israel's regression analysis. Quite the contrary, Dr. Erdem testified that these influential observations “shouldn't be excluded” for any economic reason, but rather demonstrate that, from an econometric perspective, Dr. Israel's “regression is sensitive to influential observations” and only that there “could be subsets of data . . . that may require additional investigation . . . .” 3/8/18 Tr. 2708 (Erdem). Dr. Erdem further posited that the influential observations might reflect a “geographic effect” that influenced Dr. Israel's coefficients, a problem that, Dr. Erdem further opined, was not present in Professor Crawford's regression analysis because he used “system accounting period fixed effects” that have “indirect geography implications.” 3/8/18 Tr. 2708-09 (Erdem). In fact, Dr. Erdem further contrasted Professor Crawford's approach with Dr. Israel's approach by noting that “Dr. Crawford's model does not exhibit sensitivity to outliers.” Erdem WRT at 20 n.17.
In response, Dr. Israel testified that Dr. Erdem was fundamentally wrong to suggest exclusion of what he characterized as “influential observations.” More particularly, Dr. Israel asserted that “[t]he purpose of this regression analysis is to study the relationship established by the full set of data, representing all Form 3 CSOs.” (emphasis added). Moreover, Dr. Israel pointed out that even the authors Dr. Erdem cited for this statistical practice, Israel WRT ¶ 24 n.22, themselves state that “influential data points, of course, are not necessarily bad data points; they may contain some of the most interesting sample information.” D. Belsley, E. Kuh, and R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity at 3 (1980). Dr. Israel noted that the data Dr. Erdem characterized as distorting influential observations, i.e., outliers, actually revealed an important influence, viz., the impact of the relatively large size of the CSOs and Prorated DSEs that were associated with these observations. More broadly, Dr. Israel noted that “every regression that has ever been run is going to be sensitive to the removal of influential observations,” indicating that the mere presence of such observations raises the question whether they provide valuable or anomalous data points. 3/12/18 Tr. at 2996 (Israel).
The Judges agree with Dr. Israel that it would be inappropriate on this record to disregard the 200 observations that Dr. Erdem labeled as influential observations/outliers. The Judges find that, from this record, absent any compelling explanation as to why the data from these 200 observations are not relevant, simply ignoring those data would not necessarily paint a more accurate picture of the population as a whole with respect to the relationship between royalties paid and program categories on local stations retransmitted by CSOs. The dueling positions taken by Drs. Israel and Erdem indicate that the difference between informative influential observations and uninformative outliers is a matter of degree, and deciding where an observation crosses from one type to the other is a matter of expert judgment. Dr. Erdem, who raised this issue, did not provide a sufficient argument to support his criticism that the impact of these data points should preclude or diminish reliance on Dr. Israel's regression analysis. In fact, on the present record, disregarding Dr. Israel's regression analysis because he failed to discard “influential” data seems to the Judges to be more likely to risk a cherry-picking of the data rather than an identification of demonstrable anomalies. The Judges note, however, that Professor Crawford's regression analysis is superior to Dr. Israel's in that the former is not subject even to potential distortion from influential observations.
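Influence diagnostics of the kind Dr. Erdem invoked are commonly computed with Cook's distance. The sketch below (synthetic data only) flags high-influence observations and re-fits the model without them; as Dr. Israel's testimony suggests, some coefficient movement from such an exercise is expected in virtually any regression and does not by itself show that the flagged observations are anomalous.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1000
    minutes = rng.uniform(0, 5000, n)                         # hypothetical programming minutes
    royalties = 100 + 3.0 * minutes + rng.normal(0, 500, n)   # synthetic royalty fees

    X = sm.add_constant(minutes)
    fit = sm.OLS(royalties, X).fit()

    # Cook's distance for each observation.
    cooks_d = fit.get_influence().cooks_distance[0]

    # Flag the 200 most influential observations and re-estimate without them.
    drop = np.argsort(cooks_d)[-200:]
    keep = np.setdiff1d(np.arange(n), drop)
    refit = sm.OLS(royalties[keep], X[keep]).fit()

    print(fit.params)
    print(refit.params)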
c. Limited Impact of Dr. Erdem's Adjustments
The Judges note that, notwithstanding the merits of Dr. Erdem's specific criticisms, there is not a wide gulf between Dr. Israel's estimated share values and the values Dr. Erdem identified after reworking Dr. Israel's regression to remove the alleged influential observations, as shown in the following comparison:
Table 9—Comparison of Israel Regression and Erdem's Adjusted Israel Regression
Claimant category           Israel regression (%)    Erdem's adjusted Israel regression (%)
Joint Sports Claimants                       37.5                                      45.0
Program Suppliers                            26.8                                      22.6
Commercial TV                                22.2                                      21.6
Public TV                                    13.5                                       7.0
Devotional                                   0.00                                       3.8
Canadian                                     0.00                                       0.0

Israel WDT ¶ 39 & Table V-3; Erdem WDT at 18, Ex. 13. As for the bottom two ranked program categories, Devotional and Canadian, Dr. Israel was unsurprised that his regression could be less accurate in estimating the shares for these categories. See 3/12/18 Tr. 2881, 2960 (Israel) (acknowledging “negative coefficients for Canadian [and] Devotional,” explaining that “in my experience, regressions of this type often struggle to match at the lower end.”).
Dr. Erdem acknowledged as well that his allocations set forth in the above table are “very broadly comparable to the results from both the Bortz and Horowitz surveys,” although he hastened to opine that “there are strong reasons to doubt that comparability of the results is much more than a coincidence . . . .” Id.[90]
5. Dr. Israel's Sensitivity Analyses
Dr. Israel is on shakier ground when it comes to defending the results of his own sensitivity analyses of his regression. Specifically, in the sensitivity analysis set forth in his Model 3 (in which Dr. Israel controlled for geography by including an indicator variable “by DMA”), Dr. Israel estimated coefficients for Program Suppliers and PTV that were approximately 50% higher for each category than in the regression on which he has asked the Judges to rely. 3/12/18 Tr. 3002-04 (Israel). When confronted on cross-examination with this quantitative change, Dr. Israel responded by saying that he did not view that quantitative difference “as changing the overall rankings of the corroboration [of the Bortz Survey].” 3/12/18 Tr. 3004 (Israel).
The Judges are troubled by Dr. Israel's fixation on “relative ranks” over the substantial “quantitative difference” in shares. The present proceeding is intended, by statute, precedent, and consensus, to allocate a dollar quantity of royalties. The rank ordering of those allocations is not an end in itself. Moreover, the fact that one could rank the claimant categories in that process is obvious—yet legally, economically, and practically of no importance.
A simple example is useful. Assume three program categories, A, B and C, seeking to split a $100 million royalty pool. A CSO survey might estimate the following allocation of royalties:
Category A: 60%, i.e., $60 million
Category B: 30%, i.e., $30 million
Category C: 10%, i.e., $10 million
By contrast, a regression might estimate the following allocation of this $100 million royalty pool:
Category A: 35%, i.e., $35 million
Category B: 33%, i.e., $33 million
Category C: 32%, i.e., $32 million
The rankings are identical in both the survey and the regression: A, B, and C in descending order. However, copyright owners in Category C certainly would not agree that the regression results “corroborate” the survey result, when the regression produces $22 million more in royalties for them than the survey. Similarly, copyright owners in Category A would be unlikely to find their $35 million payout under the regression to be “corroborative” of the $60 million payout they would otherwise receive pursuant to the survey. Even copyright owners in Category B would likely chafe at the notion that the survey results would take precedence over the regression results—resulting in a $3 million loss—based on the strained idea that a $33 million regression allocation corroborates a $30 million payout.[91]
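The point can be checked mechanically with the figures in the example above: the rank correlation between the two allocations is perfect even though the dollar allocations diverge sharply.

    from scipy.stats import spearmanr

    survey = {"A": 60, "B": 30, "C": 10}        # survey shares (%) from the example above
    regression = {"A": 35, "B": 33, "C": 32}    # regression shares (%) from the example above

    rho, _ = spearmanr(list(survey.values()), list(regression.values()))
    print(f"rank correlation: {rho:.1f}")       # 1.0 -- the rankings are identical

    pool = 100_000_000
    # Positive numbers mean the survey pays the category more than the regression would;
    # Category A would receive $25 million less under the regression, Category C $22 million more.
    dollar_gaps = {k: (survey[k] - regression[k]) / 100 * pool for k in survey}
    print(dollar_gaps)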
In fact, under questioning by Program Suppliers' counsel, Dr. Israel acknowledged that an over-reliance on the rankings established by a regression as opposed to the values estimated by the regression could be of limited use. See 3/12/18 Tr. 3101 (Israel) (“mere ranking” only “one indicator generated by his regression”). For the foregoing reasons, the Judges do not place much weight on the relative rankings of the program categories in Dr. Israel's regression as evidence of relative value, or as a basis to find his sensitivity analysis supported his regression results.
6. Conclusion Regarding Dr. Israel's Regression Analysis
The Judges give no weight to Dr. Israel's regression analysis, for a number of reasons. First, he did not break out his proposed allocations on an annual basis, making his average allocations inapplicable in the present proceeding. Second, he did not perform any analysis of data for the final year (2013) of the period at issue. Third, his regression analysis produced large standard errors, making his estimates less reliable than Professor Crawford's estimates and potentially unreliable. Fourth, and relatedly, Dr. Israel failed to produce the confidence intervals around his proposed coefficients which, when calculated, were shown to be extremely wide. Fifth, his regression analysis produced negative coefficients for several program categories, which he arbitrarily reset to zero. Finally, even Dr. Israel did not wholeheartedly advocate for the Judges' adoption of his regression results as independent proof of reasonable royalty shares; rather, he proposed that the Judges accept his results as corroboration of the Bortz Survey results. Perhaps no single one of these failings would have been sufficient to justify the Judges' decision to give no weight to Dr. Israel's regression analysis. However, in combination, and in comparison to Professor Crawford's better constructed regression analysis, the Judges find themselves unable to rely on Dr. Israel's regression analysis.
D. Professor George's Regression Analysis
The CCG proffered a valuation estimate based on the regression analysis of their economic expert, Professor Lisa George. As a general matter, Professor George testified that she believed the regression approach was superior to other attempts to measure relative value because it infers value from decisions actually made by market participants. George CWDT at 2. She noted further that inferring value from observed market decisions, known as the “revealed preference” method, has been an established feature of economic analysis. George CWDT at 3 n.1. Like Drs. Crawford and Israel, she undertook a Waldfogel-type regression. George CWDT at 6. However, she modified that approach in a manner that she believed better focused the analysis on Canadian programming. See id. at 5.
Professor George understood that her task was to estimate, via her regression approach, the relative value of the several program categories, in a hypothetical market in which no compulsory license existed. See id. at 6. She assumed that: (1) The supply side of the market was not relevant, because distant retransmission does not affect local carriage decisions; (2) the cable television market is imperfectly competitive; (3) CSOs focus on incremental revenue and cost, in the form of royalties, transmission costs, and the opportunity costs of transmitting (or retransmitting) any given program or signal rather than any other program or signal; (4) distantly retransmitted programs that are differentiated from other programs transmitted by the CSO will have greater value; and (5) the transactions by which the distant retransmissions would be agreed to would be between the CSO, as buyer, and the station (or groups of stations), as sellers. Id. at 7-9.
Professor George testified that in her regression the coefficients for the Canadian program category should be interpreted as a “value per unit” or, equivalently, as an “implicit price.” Id. at 10, 12.[92] With regard to the functional form, Professor George selected a linear model because the coefficient of interest, the value of the programming by category, is itself linear, i.e., it is measured in dollars per minute. See id. at 11.
Anticipating that past criticisms of Waldfogel-type regressions would be repeated in this proceeding, Professor George met those points head-on. First, she noted that the presence of price regulation not only does not diminish the usefulness of a regression, but in fact is the type of situation in which a regression approach to the estimation of value is appropriate. See id. at 18. She distinguished market prices from market decisions, noting that the latter are sufficient, standing alone, to estimate values through regression analysis. See id. at 13.[93] More particularly, she opined that the CSO must decide whether the revenues to be realized from retransmission are sufficient to warrant incurring the costs associated with retransmission (including royalties, transmission costs, and opportunity costs). With regard to the systems paying only the minimum fee, Professor George noted that their decision to carry any particular signal rather than another potential signal provides useful information regarding relative value. See id. at 16. From a technical point of view, Professor George explained that her regression “accounts for minimum fee systems by specifying a separate average (intercept) term [94] for systems carrying less than one distant signal equivalent and paying minimum fees,” which she further noted was similar to the procedure followed by Dr. Waldfogel in his 2004-2005 regression. George CWDT at 16.
Professor George explained that, although she followed the basic specifications of the Waldfogel-type regressions, she made two important changes. First, she estimated only the relative market value of Canadian programming compared with the combined value of all other program claimant categories. See id. at 23. Second, Professor George made her estimates only for the region in which Canadian signals may be retransmitted. See id. at 23. According to Professor George, applying these two modifications rendered her regression both more precise and less subject to downward bias. See id. at 25.
As in the other Waldfogel-type regressions, Professor George included control variables in her regression, in order “to isolate the role of the independent variables of interest holding all else equal.” Id. In particular, Professor George controlled for: (1) Average income, (2) population, (3) the number of local stations, (4) the number of subscribers, and (5) the number of active channels. See id. The model also included “indicator variables for binary system attributes such as for minimum fee systems carrying less than one distant signal equivalent.” Id.
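A schematic version of the specification described above might look as follows. The variable names and data are hypothetical stand-ins, the control set is abbreviated, and the sketch is offered only to show the structure of a linear model with a separate intercept term for minimum-fee systems, not to reproduce Professor George's model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for CSO-level data; column names are illustrative only.
    rng = np.random.default_rng(3)
    n = 300
    df = pd.DataFrame({
        "canadian_minutes": rng.uniform(0, 3_000, n),
        "other_minutes": rng.uniform(0, 20_000, n),   # all non-Canadian categories, collapsed
        "subscribers": rng.uniform(1_000, 100_000, n),
        "min_fee": rng.integers(0, 2, n),             # 1 if the system carries < 1 DSE and pays only the minimum fee
    })
    df["royalties"] = (
        500 + 0.8 * df["canadian_minutes"] + 0.3 * df["other_minutes"]
        + 0.01 * df["subscribers"] - 200 * df["min_fee"] + rng.normal(0, 300, n)
    )

    # Linear form with an indicator (separate intercept) for minimum-fee systems;
    # the full model described above also controls for income, population, local
    # stations, and active channels.
    model = smf.ols(
        "royalties ~ canadian_minutes + other_minutes + subscribers + min_fee",
        data=df,
    ).fit()
    print(model.params)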
Her regression estimated that, within its regulatory geographic region, Canadian programming's share of the royalties was 24.22%, 24.08%, 25.92%, and 27.4% for the years 2010 through 2013, respectively. Corrected Amended Written Direct Statement of Lisa George, Tr. Ex. 4006, at 6-7 (George CAWDT). Professor George then considered the proportion of total U.S. royalties that were generated within this narrow region, in order to estimate the Canadian Claimants' share of the total royalty pool across the 2010-2013 four-year period. When making this calculation, Professor George utilized revised data updating the compensable minutes contained in Professor Crawford's regression analysis.[95] She estimated the following shares for Canadian programming: 6.55% for 2010, 6.61% for 2011, 7.47% for 2012, and 7.85% for 2013. George CAWDT at 4, 7.
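The scaling step she describes is, at bottom, a proportion: the within-zone share is multiplied by the Canadian zone's share of total U.S. royalties. The sketch below uses only the 2010 figures reported above and a simple proportional reading of them; the actual calculation incorporates the compensable-minutes data noted in the text.

    # Within-zone Canadian share and overall share for 2010, as reported above.
    share_in_zone = 0.2422
    share_of_total = 0.0655

    # Under a simple proportional reading, the implied weight on the Canadian zone
    # (roughly, the zone's share of total U.S. royalties) is about 27%.
    implied_zone_weight = share_of_total / share_in_zone
    print(f"implied zone weight: {implied_zone_weight:.1%}")

    # Conversely, scaling the within-zone share by that weight recovers the total-pool share.
    print(f"total-pool share: {share_in_zone * implied_zone_weight:.2%}")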
Professor George noted that her regression produced a negative coefficient within the Canadian region for Program Suppliers' and the SDC's programs aired on Canadian signals. As noted supra, she explained that a negative coefficient in this context meant that the marginal presence of such programming “does not allow cable systems to charge higher prices for signal bundles, or to attract and retain subscribers,” relative to program categories with positive coefficients, such as Canadian programming on the Canadian distant signals. Id. at 32.
1. The JSC's Criticisms
a. Collapsing Non-Canadian Programming
The JSC's expert, Dr. Israel, took issue with Professor George's unique decision to collapse all other claimant categories into a single catch-all category to compare with the category of interest to her client: Canadian programming on Canadian signals in the Canadian zone. Israel WRT ¶ 12. He explained that when he altered her model to control for the categories individually, her point estimate for Canadian programming fell to 1.48% of the total royalty fund, which was more consistent with the Bortz Survey share of 0.5% for Canadian programming. See id. at A-2 to A-3.
Further, Dr. Israel opined that his alteration to control for other program categories individually was necessary because Professor George's collapsing of all other programming into a collective category distorted her results by subjecting her estimation of those collapsed minutes to “noise” for which she failed to account. That is, he claimed that Professor George's Canadian share result was “driven by many important variables on the number of minutes by each other category, thus subjecting her regression to omitted variable bias.” Israel WRT ¶ 75 (emphasis added).[96]
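The mechanics of the concern can be seen with synthetic data: when regressors with different true per-minute values are collapsed into one, the single estimated coefficient is a blend of the underlying values, and the estimate for the retained category can move if the composition of the collapsed minutes is correlated with it. The sketch below is illustrative only and does not use record data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 1000
    canadian = rng.uniform(0, 1_000, n)
    sports = rng.uniform(0, 5_000, n)
    movies = rng.uniform(0, 8_000, n)

    # Synthetic "true" model in which categories have different per-minute values.
    royalties = 2.0 * canadian + 5.0 * sports + 1.0 * movies + rng.normal(0, 500, n)

    # Separate categories: the coefficients recover the distinct per-minute values.
    X_full = sm.add_constant(np.column_stack([canadian, sports, movies]))
    full = sm.OLS(royalties, X_full).fit()

    # Collapsed specification: all non-Canadian minutes enter as one regressor, so its
    # coefficient is a weighted blend of the sports and movies values.
    X_collapsed = sm.add_constant(np.column_stack([canadian, sports + movies]))
    collapsed = sm.OLS(royalties, X_collapsed).fit()

    print(full.params)
    print(collapsed.params)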
At the hearing, Professor George explained that she chose to collapse all U.S. programming into one category because of the “limited data” available to her, precluding her from engaging in a “detailed breakdown of programming on U.S. distant signals.” 3/5/18 Tr. 2022 (George). However, she did not adequately respond to Dr. Israel's assertions regarding the impact of this decision on the statistical reliability of her regression. See 3/5/18 Tr. 2055 (George) (criticizing Dr. Israel's rerunning of her model for several reasons, but without sufficiently explaining why her collapsing of all U.S. programming into a single category would not be problematic). The Judges are troubled by the absence of an adequate response to this criticism, and find insufficient her testimony as to the limited nature of her data. Accordingly, the Judges find that this criticism serves to diminish the weight they give to Professor George's regression results.
b. Applying Negative Coefficients
Dr. Israel also claimed error in Professor George's treatment of the negative coefficient she estimated in her regression for Program Suppliers and the SDC. Whereas Professor George simply used the negative coefficient as an input for her calculation of relative values per minute, as noted supra, when Dr. Israel's own regression estimated negative coefficients, he reset them to zero, on the theory that a coefficient intended to measure the value of programming could not be negative. Thus, he opined that Professor George's application of the negative coefficients “distort[ed] the royalty shares for categories with positive coefficients.” Israel WRT ¶ 76.
In response, Professor George testified that her negative coefficient is “telling us that [Program Suppliers' programming] is effectively dragging down the value of the Canadian signals.” 3/5/18 Tr. 2031 (George). Alternately stated, she explained that, in her opinion, the negative coefficient indicates that “if we could replace the Program Supplier content on Canadian signals in a sort of hypothetical world . . . with Joint Sports or Canadian Claimant programming, the value of the signal would be higher. . . . So it's not surprising to me that more Program Supplier minutes on a Canadian signal reduces the value of the signal.” Id. at 2031-32 (George) (emphasis added). Thus, she opined that the negative coefficient does not reflect a negative monetary value for such programming, but rather reflects the opportunity cost arising from the inclusion of programming from such categories in the bundle of programs on the retransmitted signal compared with programs from other categories with positive coefficients. 3/5/18 Tr. 2117 (George).
Accordingly, because Professor George finds valuable information in the negative coefficient, she rejected Dr. Israel's criticism that she should have reset the negative coefficient to zero. See id. at 2043 (George) (“[My] . . . negative valuation, which is precisely estimated, so within standard confidence intervals . . . makes sense from theory. [I]t is completely arbitrary to replace a coefficient in a regression model with another . . . number. It is just bad econometric practice.”).
As discussed in connection with Dr. Israel's regression, the Judges find (as Professor George opined) that negative coefficients are reasonably well-explained by the fact that they reflect the relative impact on the value of the signal [97] of different categories of programming rather than the absolute value of programming-by-category. Again, though, this explanation of the negative coefficients underscores that the coefficients represent the relative value in a market for programs by categories as inputs to a bundle (the signal)—economically relevant to the task at hand (allocating the royalty pool by category) but not reflective of absolute market prices.
c. Weighting Results by the Number of Subscribers
Dr. Israel asserted that Professor George's regression is inconsistent with the specifications of the Waldfogel-type regression because she weighted her compensable minutes by the number of subscribers of each CSO, whereas Dr. Waldfogel estimated royalty payments per CSO, not royalty payments per subscriber. See Israel WRT ¶ 76. Moreover, Dr. Israel asserted that this deviation from Dr. Waldfogel's approach was improper because it was inconsistent with the functional form of her regression, which was otherwise of the Waldfogel-type. See id.
In response to Dr. Israel, Professor George acknowledged that her approach was “quite different,” yet she did not adequately explain how or why her modification made her results more precise or otherwise improved the quality of her regression. See 3/5/18 Tr. 2055 (George). The Judges find Professor George's vague statement to be an insufficient response to Dr. Israel's criticism.[98]
2. The SDC's Criticisms
a. The Regulated Nature of the Market
Dr. Erdem criticized Professor George's regression approach because, as she acknowledged, it did not reflect the prices that CSOs and stations would negotiate in an unregulated market. However, Dr. Erdem did note that her “observed data” revealed that distant retransmission occurred when “incremental benefits are higher than incremental costs” for the retransmitting CSOs. Erdem WRT at 20 (citing George CWDT at 8-9, 20). The Judges note that this criticism is a variant of the repeated refrain that the regulated nature of the market precluded the use of a Waldfogel-type regression. In the context of the present criticism as well, the Judges find that the relative preferences of CSOs for different categories of programs are revealed through such a regression and that Professor George's regression analysis is not subject to appropriate criticism in this regard.
b. Compensable Minutes
Dr. Erdem also criticized Professor George's approach for using actual compensable minutes for Canadian signals but only estimated compensable minutes for U.S. signals in the Canadian zone. Dr. Erdem suggested that such an approach “is likely less precise.” Erdem WRT at 21. Moreover, like Dr. Israel, Dr. Erdem criticized Professor George for using Professor Crawford's data, based on all U.S. distant signals, as a proxy for compensable minutes in the Canadian zone. Dr. Erdem asserted that there was no basis in the record for Professor George to make this assumption. See id.
Professor George did not offer a sufficient response to this criticism. Accordingly, the Judges find that Professor George's regression analysis is compromised by this unanswered criticism. However, there is no sufficient evidence in the record that reflects the dimensions of this assumption or the impact it may have on Professor George's proposed allocations. The Judges find, as noted supra, that Professor George's lack of disaggregated data across other program categories is insufficient to justify her less precise approach.
c. The Number of Broadcast Hours
Next, Dr. Erdem asserted that Professor George also assumed without substantiation that “all stations broadcast the same number of hours throughout the day,” which, according to Dr. Erdem, “seems to contradict the actual data . . . used in Professor George's analysis.” Erdem WRT at 21-22.
Once again, Professor George did not offer a sufficient substantive response to this criticism. Thus, the Judges find her assumption to be unsupported by the record, and her regression analysis is therefore compromised. However, there is no sufficient evidence in the record that reflects the dimensions of this assumption or the impact it may have on Professor George's proposed allocations.
d. Negative Coefficients
Dr. Erdem (like Dr. Israel) was troubled by the negative coefficient produced by Professor George's regression for Program Suppliers' minutes. However, his concern was not aimed at Professor George's defense of such a negative coefficient. In fact, he agreed with Professor George regarding a “likely” reason for the presence of the negative coefficient, i.e., that it “suggests that on Canadian signals, Program Supplier content is a close substitute for other cable system offerings from the standpoint of viewers [and] the presence of Program Supplier programming on Canadian distant signals does not allow cable systems to charge higher prices for signal bundles, or to attract or retain subscribers.” Erdem WRT at 22 (approvingly quoting Professor George). Rather, Dr. Erdem contended that the negative coefficient in the context of the Canadian signal “likely does not factor in the complex decision making process of U.S. cable operators, who are maximizing overall profits across all regions combined.” Id. However, this criticism was speculative, unsupported by a factual basis, and otherwise undeveloped, and the Judges do not find that it diminishes the value of Professor George's regression analysis.
e. Joinder of the Program Supplier and SDC Categories
Next, Dr. Erdem attempted a sensitivity analysis of Professor George's results. In particular, he separated the Program Supplier and SDC minutes and input this separated data into an updated model. He found meaningful changes in the resulting coefficients, including a “coefficient for [SDC] distant minutes [that was] positive and statistically significant.” Id. at 22.
In response, Professor George testified that she had combined these two program categories because the amount of SDC programming was so low that the data would not generate enough variation. Further, she asserted that when Dr. Erdem split apart the data for Program Suppliers and the SDC, he created “multicollinearity problems” because the variables for each program category are functions of each other. 3/5/18 Tr. 2042 (George). However, Professor George did not point to evidence that would indicate the presence of such multicollinearity. Moreover, she acknowledged she had combined the two categories to obtain sufficient variation in the SDC minutes across CSOs, variation that would be lacking if the SDC category were analyzed separately. That combination was itself artificial, because SDC programming is not Program Supplier programming.
Accordingly, the Judges find that the probative value of Professor George's regression analysis is compromised to an extent by her artificial joinder of the Program Supplier and SDC categories.
f. Subscriber-Weighted Compensable Minutes
Dr. Erdem, like Dr. Israel, criticized Professor George's decision to multiply the coefficients by “the subscriber weighted compensable distant minutes.” Erdem WRT at 23 (“Conceptually, weighting by subscribers may not be appropriate in Waldfogel-type regressions which model the decisions of cable operators (i.e., decision to carry a signal or signals with minutes of different types of content in return for royalty payments implied by the formula).”). Dr. Erdem replaced Professor George's weighted compensable distant minutes with unweighted compensable distant minutes and found that Professor George's use of the weighted minutes approach caused “[t]he share for the Canadian category [to] increase[ ] significantly.” Id.
In response, Professor George explained her reason for using subscriber-weighted compensable minutes: “[W]e are counting up the subscribers who have access to this programming to give us a better feel, because counting just systems doesn't give you really a full picture of how many people are exposed to programming.” 3/5/18 Tr. 2078 (George) (emphasis added).
The emphasized language above indicates that Professor George engaged in such weighting for the same reasons that Professor Crawford used minutes at the subscriber group level and Dr. Israel used prorated DSE data—to better identify which subscribers actually received the distantly retransmitted local signal. Accordingly, the Judges find Professor George's weighting to be an acceptable deviation from the Waldfogel approach in the same way that Professor Crawford's subscriber group approach and Dr. Israel's Prorated DSE approach represent appropriate adaptations of the Waldfogel-type regression to available and more granular data.
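For illustration, subscriber weighting of compensable minutes amounts to the following simple calculation (hypothetical figures only):

    import pandas as pd

    # Hypothetical compensable Canadian-claimant minutes and distant subscribers per system.
    df = pd.DataFrame({
        "system": ["CSO-1", "CSO-2", "CSO-3"],
        "canadian_minutes": [1200, 300, 2000],
        "distant_subscribers": [50_000, 5_000, 1_000],
    })

    unweighted_minutes = df["canadian_minutes"].sum()
    weighted_minutes = (df["canadian_minutes"] * df["distant_subscribers"]).sum()

    # Under weighting, a system with many subscribers contributes proportionally more,
    # reflecting how many households actually receive the programming.
    print(unweighted_minutes, weighted_minutes)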
3. Program Suppliers' Criticisms
a. Negative Coefficients
Dr. Gray criticized Professor George for failing to reset to zero her negative coefficient for the combined Program Supplier/SDC minutes, as Dr. Israel had done with his own negative coefficients. Dr. Gray asserted that these negative coefficients implied that these two program categories would be required to pay royalties to CSOs, clearly an absurd result. See Gray CWRT ¶ 35. However, as the Judges have explained, supra, these negative coefficients do not represent negative values for programs in the categories, but rather represent, on average, reductions in the value of a program bundle (i.e., a station) in comparison with other program categories.
b. The Minimum Fee Issue
Dr. Gray also criticized Professor George's regression for the same reason he criticized all the Waldfogel-type regressions in this proceeding—the failure to distinguish between CSOs paying only the minimum fee and those who intentionally incurred additional incremental costs by paying more than the minimum to distantly retransmit additional local stations. See id. ¶ 37. Dr. Gray reworked Professor George's regression using only the subset of CSOs paying more than the statutory minimum fee and found no statistically significant relationship between CCG programming minutes and royalty fees paid in the Canadian region, a result that would support an estimate of 0% for the Canadian share (presumably because the null hypothesis [99] was not rejected). See Gray CWRT App. D.
In response, Professor George testified that even the station retransmission choices by CSOs paying only the minimum fee provide relevant economic information. 3/5/18 Tr. 2038-39 (George). However, she acknowledged that incorporating the minimum-fee-paying CSOs in an integrated analysis does add some “uncertainty . . . to our estimates [and] we do lose some precision from having some minimum fee systems.” 3/5/18 Tr. 2039 (George). Further, Professor George did not contest the statistical correctness of Dr. Gray's 0% estimate of the relative value of Canadian programming arising from an analysis limited to those CSOs paying more than the minimum fee. 3/5/18 Tr. 2044-45 (George).
The Judges find, as noted supra, that an analysis of the CSOs paying only the minimum fee might provide some useful information. However, as also noted supra, the record does not provide an adequate basis to incorporate any “relative value” differences based on a distinction between CSOs that do and do not pay only the minimum fee.
4. Conclusion Regarding Professor George's Regression Analysis
In sum, the Judges find that Professor George's regression analysis is of limited value. Her collapsing of all non-Canadian programming into a single category was the consequence of the unavailability of data, not a choice intended to enhance the reliability of her estimates. Also, her negative coefficients for compensable programming categories within the Canadian zone rendered her analysis indeterminate and thus in need of adjustment.
III. CSO Surveys
Another analytical approach presented in this proceeding for determining relative value of the program types retransmitted by cable operators is analysis of data from surveys administered to CSOs, the entities that buy the compensable programming (bundled as distant signals). In essence, the surveys ask the CSOs to place a relative value on the types of programming they license for retransmission to their subscribers.
CSO survey results have long played a central role in assisting adjudicators in assessing the relative market value of cable programming. The JSC presented the first survey report, designed by the predecessor of Bortz Media & Sports Group, Inc. (Bortz), to establish the relative value of the various categories of programming at issue in 1983. See Bortz Survey,[100] Trial Ex. 1001 at A-2. Over the years, Bortz refined its survey design to address issues raised by the triers of fact. The goal of the surveys was to answer the question of relative value of the competing program categories as seen through the eyes of CSOs. Id. at A-3—A-4. In the present proceeding, the JSC and the SDC support an analysis based on the work of Bortz for the relevant royalty years. Program Suppliers offered an alternative survey [101] designed by Horowitz Research (Horowitz Survey) as a critique of the Bortz Survey results.[102] In addition, the CCG presented a third survey focused on Canadian signals (Ringold Survey). Other participants offered criticisms of the surveys.
All of the surveys the parties proffered in this proceeding were conducted by telephone and purported to inquire of the individual at the responding CSO who was responsible for signal carriage decisions. Each proponent constructed its survey as a constant sum survey; that is, respondents were asked to value each program category relative to the other categories and as a portion of 100%.
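The basic arithmetic of a constant-sum survey can be sketched as follows; the responses shown are hypothetical, and the actual surveys described below apply sampling, screening, and weighting procedures not reflected here.

    import numpy as np

    categories = ["Sports", "Program Suppliers", "CTV", "PTV", "Devotional", "Canadian"]

    # Hypothetical constant-sum responses: each row is one respondent's allocation of 100%.
    responses = np.array([
        [40, 30, 20, 5, 4, 1],
        [35, 35, 18, 6, 5, 1],
        [45, 25, 22, 4, 3, 1],
    ], dtype=float)

    # Normalize each row to sum to exactly 100 (guards against rounding in raw answers),
    # then take a simple average across respondents.
    normalized = responses / responses.sum(axis=1, keepdims=True) * 100
    average_shares = normalized.mean(axis=0)
    print(dict(zip(categories, np.round(average_shares, 1))))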
The JSC contended that the Bortz Survey responses are a sound measure of the relative value of programming, by category. See Bortz Survey, Trial Ex. 1001 at 7. Program Suppliers contended that CSO survey responses are
[d]one well, such a survey may illuminate the criterion (sic.) by which to allocate royalties. . . . [W]hatever the reasoned judgment of executives . . . , any cable operator survey should not be considered a substitute for behavioral data on viewing.
Corrected Written Direct Testimony of Howard Horowitz, Trial Ex. 6012 at 21-22 (Horowitz CWDT). The Ringold Survey focuses on CCG programming within the Canadian broadcast region. The CCG claimed the Ringold Survey provides a better measure of the relative value of compensable Canadian programs distantly retransmitted in the U.S.
A. Bortz Survey
As in the past, the JSC have engaged Bortz to develop and implement a methodology to ascertain relative market value of categories of distantly retransmitted television programming.[103] See Bortz Survey at A-1. Bortz made “refinements” to the present survey to address concerns expressed by the CRT, CARP, and more recently, the Judges. Specifically, Bortz refined the way in which it (1) assessed the level of pertinent knowledge of the individual survey respondent (i.e., the person “most responsible for programming decisions”), (2) conformed program category definitions to those adopted for royalty distribution proceedings, (3) selected cable systems to participate by excluding any that did not distantly retransmit eligible non-network programming, and (4) closed the time gap [104] between the royalty year at issue and the conduct of the survey relating to that year. Id. at A-5—A-12.
With regard to the survey contents, Bortz attempted to focus respondents on the actual distant signals at issue using information from the CSOs' Statements of Account filed with the Copyright Office. Id. at 12. To address a criticism regarding asking respondents to allocate “value,” Bortz asked them to think about relative value of the categories and subsequently to provide estimates for each. The interviewers then went through the list of program categories to give respondents an opportunity to reconsider the relative values the respondent placed on the categories. Id. at 13. Bortz also reported other refinements responsive to criticisms of the triers of fact and opposing parties in prior proceedings.[105]
The CARP determination regarding allocation of 1998-99 cable royalties noted that the Bortz Survey focused on the demand side of a typical market, i.e., what CSOs are willing to pay to broadcasters, which it concluded is more likely to reflect relative values of the programming categories. In essence, according to the CARP, in the relevant hypothetical market the supply of programming would be fixed and value would be determined only by the CSOs' demand as reflected in their willingness to pay. See 1998-99 Librarian Order, 69 FR at 3613-15. In any event, beginning with its 2009 survey, Bortz included a question asking respondents to rank the relative cost of the programming categories, which it alleged gave respondents a cue to consider the supply side of the valuation. Bortz Survey at A-14—A-15.
Bortz surveyed a stratified, random sample of “Form 3” cable systems,[106] but excluded systems that did not carry distant signals and those whose only distant signals were PTV or Canadian signals, or both. Id. at 13-14. Bortz made five adjustments for the 2010-13 survey questionnaires to address criticisms of their studies from earlier proceedings. Specifically, Bortz (1) identified compensable programming on WGNA, the most widely carried distant signal; (2) reduced the number of signals about which they inquired; (3) did not offer “sports” as a category in the constant sum question for CSOs that did not retransmit programming within the Sports Programming category established in this proceeding; (4) modified the “warm-up” questions; and (5) omitted reference to attracting and retaining subscribers to broaden the concept of value to CSOs. Id. at 2.
Initially, Bortz confirmed that the respondent self-identified as the individual responsible for signal carriage decisions for the cable system. Then Bortz identified the distant signals at issue and asked each respondent to rank by “importance” to the system the non-network programming on those distant signals by categories “intended to correspond” to the programming categories adopted in the present proceeding. Id. at 15-16. Bortz next asked respondents to estimate the cost to acquire programming within the identified categories if the cable system had been required to purchase the programming in the marketplace. Id. at 16. Respondents were then asked to assign relative values to the relevant programming; that is, to assign a share of 100% of value to each category.[107]
The influence of superstation WGN America (WGNA) was a major factor in valuing compensable programming during 2010 to 2013. Bortz concedes that survey respondents might have lacked information detailed enough to distinguish between compensable programming and content WGN substituted for contemporaneous broadcasts and transmitted to WGNA subscribers.[108] Bortz modified its prior survey questions to attempt to address the WGNA content issue. According to Bortz, for cable systems that only retransmit WGNA as a distant signal, survey questions regarding WGNA programming described only compensable programming, by agreed category as nearly as possible.[109] In this way, Bortz sought to address criticism that its prior survey results contained skewed values because Bortz' survey questions failed to distinguish between compensable and non-compensable WGNA retransmissions. Id. at 19.
Comparing the 2004-05 survey results (which formed the basis of the 2010-13 survey) to those for the time period relevant to the present proceeding, compensable programming retransmitted by WGNA decreased by about half, from approximately 30% of the signal to under 15%. JSC-, CTV-, and SDC-represented programming increased in relative value from the 2004-05 survey to the 2010-13 survey, while Program Suppliers' content declined in relative value. Bortz attributes these changes to a reduction in compensable retransmissions of Program Suppliers' programming. Id. at 29.
PTV [110] and the CCG [111] criticized the Bortz results because the survey excluded cable systems for which public television and/or Canadian programming were the systems' only distantly retransmitted signals. Bortz conceded that both PTV and CCG categories are likely undervalued because of the survey's exclusion of PTV-only and CCG-only systems and because of the relatively small number of Form 3 systems that retransmit PTV and CCG signals. Bortz Survey at 46-47. Respondents for multiple-signal systems that included PTV and Canadian programming valued public television programming at an average of between 7.8% and 10.3% and valued Canadian signals at an average of between 2.4% and 7.9% during the relevant period. Id. The Bortz Survey aggregate values for PTV and CCG during the period were substantially lower because of the exclusion of PTV-only and CCG-only systems.[112]
Notwithstanding the refinements Bortz implemented in its survey for 2010-13, Mr. Trautman still professed that the Judges should consider the value estimates for the Program Suppliers and Devotional Programming categories as a “ceiling” or upper bound for the allocation to those categories. Mr. Trautman reached this conclusion largely because he was not confident that even the modified survey accurately accounts for non-compensable programming on WGNA, most of which he asserted falls within those two program categories. Id. at 18.
Further, Mr. Trautman conceded that “some adjustment” upward of allocations to the PTV and CCG categories is appropriate. Id. 7-8; Trautman WRT ¶ 4.[113] Professors McLaughlin and Blackburn adjusted the 2010-13 Bortz Survey results to increase the share of value allocated to PTV and CCG programming, but Mr. Trautman argued that the McLaughlin/Blackburn adjustments should be considered a “ceiling” on the values of those two categories, because they relied in part on Horowitz Survey results. Mr. Trautman contended the Horowitz results were invalid because “most” of the respondents with PTV-only or CCG-only distant retransmissions valued the compensable programming at less than 100%. Trautman WRT ¶ 3.
The initial relative valuations from the 2010-13 Bortz Survey results are:
Table 10—Initial Bortz Survey Results
Category       2010 (%)   2011 (%)   2012 (%)   2013 (%)
CCG                0.10       0.20       0.60       1.20
CTV               18.70      18.30      28.80      22.70
Devotional         4.00       4.50       4.80       5.00
PS                31.90      36.00      28.80      27.30
PTV                4.40       4.70       5.10       6.20
Sports            40.90      36.40      37.90      37.70
(Columns might not add to 100% because of rounding.) See Bortz Survey at 3.
Referring to the calculations performed by Ms. McLaughlin and Dr. Blackburn, Mr. Trautman adjusted the allocations in the Bortz Survey to increase the relative values of PTV and CCG programming at the expense of the relative values of the remaining categories:
Table 11—McLaughlin/Blackburn Augmented Bortz Survey Results
Category       2010 (%)   2011 (%)   2012 (%)   2013 (%)
CCG                 1.6        1.8        1.2        2.1
CTV                17.8       17.2       22.3       21.7
Devotional          3.8        4.2        4.6        4.8
PS                 30.3       33.8       28.1       26.1
PTV                 7.5        8.7        6.9        9.1
Sports             39.0       34.2       37.0       36.1
(Columns might not add to 100% because of rounding.) See Table A-2, Trautman WRT, App. A at A-3.
After reviewing the McLaughlin/Blackburn analysis, Mr. Trautman adjusted the Bortz Survey results in two ways. First, he adjusted the Bortz Survey results using the McLaughlin/Blackburn augmented results, derived by adding PTV-only and Canadian-only distant signals and assuming CSOs would have set the relative value of the PTV and Canadian signals at 100%. Mr. Trautman then referred to the Horowitz Survey results, opining that it was error for McLaughlin/Blackburn to assume CSOs would assign 100% relative value to PTV programming on PTV-only signals.
B. Horowitz Survey
Program Suppliers retained Horowitz Research, Inc. to evaluate the Bortz Survey and to design a proprietary survey to improve on the Bortz Survey. Horowitz attempted to replicate and improve upon the methods and procedures of the Bortz Survey used in the “Phase I” or allocation phase of the 2004-05 cable royalty distribution proceeding.[114] See Horowitz WDT at 3. The Horowitz Survey sought to measure the relative value of programming categories in attracting and retaining subscribers. Id. In rebuttal, Horowitz evaluated the Bortz Survey covering royalty years 2010-13. See Written Rebuttal Testimony of Howard Horowitz, Trial Ex. 6013, at 2 (Horowitz WRT).
Horowitz also conducted its own survey, fashioned on the Bortz Survey, but with amendments Horowitz considered necessary. The Horowitz Survey, among other things, addressed the PTV and CCG programming the Bortz Survey omitted. The Horowitz Survey questionnaire provided category descriptions to assist respondents in allocating relative value, identified examples of programming that might fit the category description, and created a separate “Other Sports” category to clarify that the definition of “sports programming” for purposes of the valuation survey did not include all sports broadcasts, but only included those live college and professional team sports fitting the category definition operative in CRB royalty distribution proceedings. Horowitz WDT at 5-6. The 2010-13 Bortz Survey eliminated from the valuation questions references made in prior Bortz surveys to attraction and retention of subscribers. See Bortz Survey at 15.[115] Horowitz opined that omitting references to subscriber acquisition and retention “distracted survey respondents from the purpose of allocating a fixed budget . . . by leaving out all references to subscriber value . . . the `primary consideration' for allocating value.” Horowitz WRT at 2. According to Horowitz, between 79% and 85% of CSO survey respondents ranked programming popular with and important to current and potential subscribers as the most important factor in their carriage decisions. By contrast, only between 4% and 35% ranked importance to the cable system as the primary factor influencing carriage decisions.[116]
The Horowitz Survey results, weighted by Dr. Martin Frankel, indicate relative market values of the programming categories at issue [117] in this proceeding as:
Table 12—Horowitz Survey Results
Category           2010 (%)   2011 (%)   2012 (%)   2013 (%)
CCG                    0.01       1.00       0.87       0.35
CTV                   12.38      12.85      15.72       9.54
Devotional             3.78       5.92       5.74       3.48
PS                    37.43      28.99      28.11      28.65
PTV                    7.69      13.31      15.05      15.39
Sports                31.94      27.13      25.50      35.28
“Other Sports”         6.77      10.80       9.02       7.40
See Horowitz WDT at 16; Written Direct Testimony of Martin R. Frankel, Trial Ex. 6010 at 7 (Frankel WDT).
Mr. Horowitz's decisions to (1) rely on acquisition and retention of subscribers and (2) create a separate “Other Sports” category came under criticism, as did his methodological choice to provide examples of shows that might fall within the categories.
C. Ringold Survey
The CCG criticized both the Bortz and the Horowitz studies and presented its own limited survey (Ringold Survey). See Report of Gary T. Ford and Debra J. Ringold, Trial Ex. 4010 (Ringold WDT).[118] The Ringold Survey attempted to establish a value for eligible programs distantly retransmitted by cable systems in the United States, distinguishing Canadian-produced programs that comprise the CCG category from other programs included in the Devotional, Program Suppliers, and Sports categories.
Valuation of CCG programming is complicated by the legal prohibition on retransmission of Canadian programming outside a geographic zone lying along the U.S. northern border. 17 U.S.C. 111(c)(4). The CCG argued that the relative value of CCG programming inside its retransmission zone is necessarily diluted when measuring the relative value of other claimant groups' programming over the entirety of the United States. See Written Rebuttal Testimony of Lisa George, Trial Ex. 4007, p. 8 (George WRT). In addition, the CCG argued that its category is an “unnatural” category of programming, because the Canadian signals include programming compensable in other categories, viz., the JSC, Program Suppliers, and Devotional Programming categories.
The CCG commissioned a “double blind” [119] survey of cable systems sampled from the Form 3 systems that retransmit Canadian signals distantly. To further guard against response bias, Professors Ringold and Ford constructed the survey to include questions regarding the relative values of various categories of programming on retransmitted Canadian signals as well as retransmitted superstation and independent station signals.[120] The Ringold Survey was conducted by telephone and used a constant sum construct.
The Ringold Survey differed from both the Bortz and Horowitz surveys in two significant aspects. Unlike in the Bortz Survey, interviewers in the Ringold Survey asked respondents to assign relative values to program categories that included programming on Canadian signals. Unlike both the Bortz Survey and the Horowitz Survey, Ringold Survey interviewers asked each respondent to rank programming on only one retransmitted signal at a time.
The Ringold Survey measured the average relative value of CCG programming on retransmitted Canadian signals as:
Table 13—Ringold Survey Results: Relative Value of CCG Programming on Canadian Signals
Category                   2010 (%)   2011 (%)   2012 (%)   2013 (%)
CCG                           61.45      64.17      61.47      56.36
Program Suppliers (U.S.)      11.40      21.11      12.20      21.82
Sports (JSC)                  26.67      14.72      24.67      20.91
“Other”                        0.48       0.00       1.67       0.91
See Ringold WDT at 15, Table 1.[121] In other words, the Ringold Survey results indicated that Canadian-produced programming accounted for approximately 61%, 64%, 61%, and 56%, respectively, of the value of all programming shown on surveyed systems' Canadian signals for the years 2010-2013. Ringold WDT at 5, 11; 15, Table 1. Ringold found that live professional and college sports were generally valued higher on independent stations and superstations than on Canadian signals. Ringold WDT at 12; 16, Table 2; 17, Table 3; see Fig. 4. Ringold also found that movies and syndicated series were always valued higher on independent stations and superstations than on Canadian signals. Ringold WDT at 12; 16, Table 2; 17, Table 3; see Fig. 5.
Scaling the relative value of Canadian signals within the Canadian zone, CCG concluded Canadian signals should command the following portions of each annual fund.
Table 14—Ringold Survey Results: Relative Value of CCG Programming Overall
Year    Base rate fund (%)
2010                  5.59
2011                  5.36
2012                  5.95
2013                  6.18
Written Direct Statement of Canadian Claimants Group at 1.[122] CCG does not claim any portion of the overall royalty funds for programming on Canadian signals that is compensable in the Program Suppliers or Joint Sports Claimants groups. Id. At the hearing, CCG did not controvert testimony by SDC's witness, Mr. Sanders, that some Canadian programming is or should be compensable as Devotional Programming. See 3/6/18 Tr. at 2410 (Sanders).
D. Criticisms of the Survey Instruments
1. Survey Construct
The surveys the parties presented in this proceeding had some construct similarities. Each of the surveys was directed to CSO executives who self-identified as the person responsible for carriage decisions for the cable systems about which the surveyor inquired. All of the surveys were conducted by telephone [123] by experienced survey entities. Each survey inquired of a sample of potential respondents drawn from the universe of Form 3 cable systems.
a. Sampling
Professor Martin Frankel, who was retained by Program Suppliers, criticized Bortz for including in its sampling Form 3 cable systems that did not carry a distant signal and for not correcting for the overinclusion. See Amended Rebuttal Testimony of Martin Frankel, Trial Ex. 6011, at 3 (Frankel AWRT). In fact, Bortz sampled from all Form 3 systems but dropped, i.e., did not interview, systems in the sample with zero distant signals. See 2/15/18 Tr. at 247 (Trautman). In live testimony, Professor Frankel submitted that Bortz, while not “wrong,” conducted its survey on a “suboptimal” sample frame. See 3/6/18 Tr. at 2267, 2288 (Frankel). Professor Frankel also criticized the Bortz Survey for disadvantaging cable systems with only PTV, CCG, or PTV and CCG distant signals by excluding them and “affording them no value when producing . . . weighted results.” Frankel AWRT at 3.
In his amended rebuttal testimony, Professor Frankel corrected for the suboptimal sampling and for the exclusion of PTV and CCG signals in the Bortz Survey. Even so, Professor Frankel declined to endorse even the corrected Bortz results. Id. at 15. Professor Frankel advocated reliance on the Horowitz Survey, which used his improved sample frame and included distantly retransmitted PTV and CCG claimant programming. Id. at 16.
Professor Frederick Conrad, testifying on behalf of CCG, criticized both the Bortz Survey and the Horowitz Survey on the basis of their sampling.[124] See Written Rebuttal Testimony of Frederick Conrad, Trial Ex. 4003 passim (Conrad WRT). Because so few cable systems retransmit Canadian stations, the small sample size caused Professor Conrad to question the validity of the results as they relate to the CCG. Id. at 4. Further, Bortz excluded from its survey systems whose only distantly retransmitted signal was Canadian, Public Television, or some combination of those. Bortz then assigned a value of zero to CCG- and PTV-only systems, without accounting for the regulatory constraints limiting retransmission of Canadian signals to a geographic zone in the northern tier of states. Exclusion of the CCG and PTV programming from the Bortz Survey resulted in agreement among the parties that the Bortz results would need an unquantified adjustment to reflect the actual relative value of CCG and PTV programming.
Professor Conrad recognized that the Horowitz Survey corrected for this omission by Bortz. Id. at 6. Inclusion of the “missing” stations did not, however, address all of the issues troubling Professor Conrad. Notably, when Horowitz asked CSOs whose only distantly retransmitted signal was Canadian, for example, the CSO nevertheless stated the relative value of the Canadian programming at less than 100%. Id. at 7. According to Professor Conrad, this purported anomaly suggests a problem with the construct of the survey or a problem of communicating the task to either the interviewers or the respondents.[125] Given that Canadian signals include less than 100% Canadian content, the Judges reject this particular criticism.
b. Respondents
All three surveys sought to elicit responses from the individual at each cable system that had primary responsibility for signal carriage decisions. In the Bortz Survey, the questioners asked several questions at the outset to establish that they were speaking with the appropriate individual. See, e.g., Trautman WDT at 14-15.
Testimony at the hearing was in conflict regarding carriage decision-makers. Horowitz Research, Inc. employed a cable system executive to screen respondents to assure that they were the appropriate respondents, viz., the respondents responsible for making carriage decisions at the system level. See Horowitz WDT at 8. Fact witnesses disagreed about the level at which carriage decisions are made. Compare 2/21/18 Tr. at 930 (Burdick) (carriage decisions at Schurz Communications decentralized to local CSOs) with 2/22/18 Tr. (Singer) at 1082-84 (carriage decisions made at system level, not at corporate headquarters), 1144-45 (respondents intimately familiar with categories and signals they carry). Ms. Sue Ann Hamilton testified that cable programming decisions [126] are generally centralized at the corporate level in an increasingly consolidated cable industry. 3/19/18 Tr. at 4295 (Hamilton). She opined that respondents to the Bortz Survey were insufficiently “sophisticated . . . , programming-focused and experienced” to understand the categories at issue in this proceeding. Id. at 4311.
c. Constant Sum Methodology
All three surveys were structured as “constant sum” surveys; that is, respondents were asked to allocate value among the programming categories at issue, with the sum of those values to equal 100%. An increase in valuation of one category must result in a decrease in value in one or more other categories.
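By way of illustration only, the constant sum construct can be sketched in a few lines of Python. The category names and percentages below are hypothetical and are not drawn from any survey in the record; the sketch merely shows the defining constraint that allocations sum to 100%, so that raising one category necessarily lowers another.

```python
# Minimal sketch of a constant sum response (hypothetical values, not record data).

def is_valid_constant_sum(allocation: dict, tol: float = 0.01) -> bool:
    """A response is valid only if the category percentages sum to 100 (within rounding)."""
    return abs(sum(allocation.values()) - 100.0) <= tol

response = {
    "Sports": 40.0, "Program Suppliers": 30.0, "Commercial TV": 15.0,
    "Public TV": 7.0, "Devotional": 5.0, "Canadian": 3.0,
}
assert is_valid_constant_sum(response)

# Any increase in one category must be offset by a decrease elsewhere.
response["Sports"] += 5.0
response["Program Suppliers"] -= 5.0
assert is_valid_constant_sum(response)
```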
Among the many criticisms of the three surveys,[127] Professor Joel Steckel, a witness for Program Suppliers, criticized in general the use of the constant sum survey structure. See Written Direct Testimony of Joel Steckel, Trial Ex. 6014, at 34-35 (Steckel WDT). Professor Steckel criticized Professor Mathiowetz's touting of the suitability of a constant sum construct in this context. He noted that she cited prior testimony that relied on academic literature from the 1960s and 1970s. See Written Rebuttal Testimony of Joel Steckel, Trial Ex. 6015, at 21 (Steckel WRT). Countering the perceived endorsement of constant sum survey methodology by the CARP,[128] Professor Steckel cited recent academic studies that conclude that measurements based on paired comparisons, i.e., comparisons across only two categories, out-predict constant sum surveys by 22 percentage points. Id. at 36 (citations omitted).
On rebuttal, Professor Steckel reviewed the changes in the Bortz Survey between the 2004-05 proceeding and the present proceedings. While he conceded some improvement, he concluded that the changes were insufficient to bestow construct validity on the Bortz Survey. See Steckel WRT at 26. Viewing the Horowitz Survey as an augmented Bortz Survey, Professor Steckel also noted some improvements, but concluded that those improvements in form were insufficient to reorient the Horowitz Survey to the question of interest in this proceeding, viz., relative value of program categories.[129]
Professor Mathiowetz endorsed the constant sum survey method used by Bortz in the present proceeding. Professor Mathiowetz concluded, however, that the Horowitz Survey did not employ a valid constant sum construct because of the differences Horowitz introduced as alleged improvements to the Bortz Survey. See Mathiowetz WRT at 16. Professor Mathiowetz opined that the Horowitz changes in fact rendered the Horowitz Survey both unreliable and invalid. Id. at 26. For example, Professor Mathiowetz opined that Horowitz's inclusion of program examples and “such as” descriptions rendered the questions misleading. Id. Similarly, incorrect information in program category descriptions resulted in invalid valuations for the various program categories. Id. at 17-18. Professor Mathiowetz criticized Horowitz's creation of an “Other Sports” category when no such category is a part of this proceeding. She faulted Horowitz's failure clearly to identify noncompensable programming on WGNA. Id. at 19.
In the Bortz Survey, interviewers asked respondents about a maximum of eight distant signals even if their systems carried more. See Bortz Survey at 31. Professor Mathiowetz criticized the Horowitz decision to ask a single respondent to answer on behalf of all distantly retransmitted signals for the surveyed system, rather than limiting those to a manageable number. Respondents to the Horowitz Survey were asked to evaluate from one to “over fifty” discrete signals. See Mathiowetz WRT ¶ 48. According to Professor Mathiowetz, this inclusion of so many signals for valuation rendered the survey burdensome and invalid, as respondents would not or could not make fine distinctions between the distantly retransmitted program lineups at multiple systems. Id.
Dr. Jeffery Stec, an economic expert called by Program Suppliers, performed reliability analyses of the Bortz Survey results by comparing responses of CSOs for consistency over time. He concluded that the Bortz Survey responses were not reliable as they were not consistent over time, notwithstanding Mr. Trautman's assertions that the Bortz results were consistent over time. See Amended Written Rebuttal Testimony of Jeffery Stec, Trial Ex. 6016, at 30-34 (Stec AWRT).
2. Survey Content
a. Programming Categories
Surveyors inquired about programming on retransmitted distant signals using the category designations adopted in the present proceeding. CSOs, however, do not acquire categories of programs for retransmission; by law they must acquire entire signals, which often bundle together multiple categories of programming.[130]
Professor Steckel criticized the Bortz and Horowitz surveys for requiring CSOs, unaided and in the course of a brief telephone survey, to disaggregate signals and reconfigure the programming from each into compensable categories. See Steckel WDT at 29-30. Professor Steckel opined that, because of the perceived complexity of the survey construct, respondents were compelled to satisfice [131] with shortcuts and heuristics to create a defensible answer to the overly complicated questions. Id. at 31-32; 3/13/18 Tr. at 3298 (Steckel).
More than one witness downplayed Professor Steckel's complexity criticism, asserting that the survey respondents are experienced professionals thoroughly familiar with the programming categories copyright owners utilize in CRB distribution proceedings. See, e.g., 3/13/18 Tr. at 3176 (Hartman) (CSOs negotiate for linear channels, but channels fall into categories. “It's our day-to-day job to . . . know those, that type of programming.”); 2/22/18 Tr. at 1144-45 (Singer). Participants proffering survey results as a measure of relative value also asserted that cable system executives could accurately allocate program category values by reference to the “dominant impression” of each signal's content or the “signature programming” of a given signal. See 2/15/18 Tr. at 281, 334 (Trautman); 2/22/18 Tr. at 1001 (Singer).
Ms. Sue Ann Hamilton testified that the programming categories adopted in royalty distribution proceedings are unique and “quite different from the industry understanding of what programming typically falls in a particular programing genre.” Id. at 10; see 3/19/18 Tr. at 4309, 4312 (Hamilton); Hamilton WRT at 17-18. For example, she testified that “most cable operators” would not recognize that pre- and post-game interviews and highlight compilation telecasts would fall into the Program Suppliers category, or that locally produced high school team sports would fall into the Commercial Television category. Id. at 11. Other industry witnesses disagreed. See 2/22/18 Tr. at 1046-47 (Singer) (categories “straightforward”). Ms. Hamilton further opined that cable operators were not likely to differentiate between network and non-network sports telecasts and that migration of live team sports programming to regional cable networks further complicates the equation. See Hamilton WRT at 17-18; 3/19/18 Tr. at 4315 (Hamilton).
Dr. Stec gave weight to Ms. Hamilton's testimony. See Stec AWRT at 23-25. According to Dr. Stec, the Horowitz Survey results, gained after the surveyors provided category descriptions and program examples, demonstrate the fallacies of the Bortz Survey and its reliance on CSO executives' familiarity with the program categories. Id. at 27. The Horowitz category descriptions and examples were also roundly criticized, however.[132] Nothing in Dr. Stec's analysis, moreover, establishes a causal relationship between the differing category and program descriptions the interviewers used in the two major surveys and the validity of the resulting valuations; his conclusion that the Horowitz results are more valid than the Bortz results is therefore unsupported.
A related criticism from Professor Conrad was that the categories about which respondents were questioned were not comparable. Id. at 10-11. In other words, all programming categories other than CCG and PTV are characterized by homogeneity in types of program content. The CCG and PTV categories, on the other hand, are based on program origin and include programs that span the categories making them, in this context, “unnatural categories.” See 3/5/18 Tr. at 1965 (Conrad). Even though cable systems might retransmit PTV signals, all of which are compensable entirely from the PTV category, PTV stations might broadcast children's programming, nationally produced specials or series, or locally-produced programming. On the other hand, some of the CCG programs might be allocable to another category but some might not.[133]
b. Augmentation of Categories
Professor Mathiowetz criticized aspects that distinguish the Horowitz Survey from the Bortz Survey. Her two most significant criticisms related to Mr. Horowitz's use of program examples and the creation of an “Other Sports” category.[134]
Professor Mathiowetz asserted that a questioner's volunteering of examples tends to bias survey results. See 2/20/18 Tr. at 699 (Mathiowetz); but see 3/5/18 Tr. at 1967-68 (Conrad) (examples can hurt or help or have no effect on responses). According to Professor Mathiowetz, respondents assume a questioner has valid information or knows something that is important to the survey outcome. See 2/20/18 Tr. at 699 (Mathiowetz). Thus, even a knowledgeable respondent might be influenced by a questioner's prompting. As she noted, in a relative valuation, a shift in one category potentially affects the value of every other category. Id. at 727.
Furthermore, according to Professor Mathiowetz, some of the examples used in the Horowitz Survey were simply erroneous. 2/20/18 Tr. at 700 (Mathiowetz). Use of erroneous examples illustrated Professor Mathiowetz's criticism of Mr. Horowitz's creation of an “Other Sports” category. In an effort to differentiate live team college and professional sports, i.e., the programs to be compensated from JSC's share of the royalty funds, interviewers introduced “other sports programming.” For WGNA-only systems, the category description ended with “Examples include horse racing.” Id. at 27. According to Professor Mathiowetz, in 2013, WGNA carried only a single horse race. Accord Trautman WRT 20-21.[135] For WGNA and PTV systems, the interviewers prompted, “Examples include NASCAR auto races, professional wrestling, and figure skating broadcasts.” Horowitz WDT (App. A) at 26. WGNA retransmitted no programming fitting the description of the examples. 2/20/18 Tr. at 703 (Mathiowetz). Professor Mathiowetz also expressed doubt that non-JSC sports broadcasts accounted for sufficient distantly retransmitted airtime to warrant a separate category, even for survey inquiry purposes. Id. at 702. As she noted in another context, in a constant sum survey, variation in one category necessarily affects the relative value of other categories. See 2/20/18 Tr. at 727 (Mathiowetz).
Professor Conrad agreed with the criticism of enumerating examples of “other sports” or any program category. 3/5/18 Tr. at 1967 (Conrad). According to Professor Conrad, citing examples might cut either way. If the example is typical of the category, then citing it will have no effect. An atypical example might help a respondent “think outside the box” and trigger a broader, more accurate response. For other respondents, however, an atypical example might narrow focus to incidents closely related to the particular example and therefore confine the respondent's thinking too narrowly. Id. at 1968. Professor Conrad cautioned that a “rare example” will bias downward the counts for more typical choices. Id.
Mr. Horowitz assigned all “Other Sports” points to Program Suppliers. See Horowitz WDT at 3, 5. This allocation ignores the possibility that a portion of “other sports” might be attributable to CTV. Without evidence to support the assignment of all “other sports” value to Program Suppliers, the category becomes even more problematic.
c. Value Measurement
Dr. Jeffery Stec criticized the Bortz Survey on several grounds. See Stec AWRT at 11-12. His primary criticism is that the Bortz Survey measures, at best, only a CSO's willingness to pay. Id. at 17. Dr. Stec disputes the assertion by Mr. Trautman and Bortz that CSO respondents are familiar with the rates charged for programming and that their responses are, therefore, a reflection of the “supply side.” Id. at 18; see 3/13/18 Tr. at 3432-50 (Stec). Dr. Stec contends that a CSO's willingness to pay is also influenced by its own market factors, e.g., local market demand or competition from other CSOs. Id. at 19-20. According to Dr. Stec, relative willingness to pay is not the same as relative market value. Id. at 22.
An underlying assumption in each survey is that cost is the equivalent of value. Economists do not measure such a subjective trait as value. According to Professor Steckel, value, in an economic sense, can only be surmised by reference to external indicators of value. Steckel WDT at 36-40; but see Mathiowetz WRT ¶¶ 4, 11-12 (Steckel incorrect; CARP precedent accepted Bortz as measure of relative market value). Professor Steckel opined that resource allocation does not equate to value and that marketplace value is measured by a CSO's return on investment. Steckel WDT at 21. Because of the cable television market structure, i.e., program acquisition in a bundle, CSOs are unable to assess market returns by program category. Id. Professor Steckel proposed—as a possible alternative to surveying CSO executives' best guesses about supply-side relative values—a survey of demand-side program consumers. Steckel WDT at 40-41 (“customers are the best judges of what customers want, value, and will do.”). Alternatively, Professor Steckel recommended relying on viewership to establish relative values. See Steckel WRT at 4.
Mr. Horowitz also criticized Bortz for asking a cost question, opining that cost is not the equivalent of value. Horowitz WDT at 7. He testified that the Bortz Survey erroneously mixed the concepts of value and cost. 3/16/18 Tr. at 4146-47 (Horowitz). Mr. Horowitz contended that by asking about expense in a warmup question, Bortz conflated the concepts of cost and value.[136] Mr. Horowitz noted that the Bortz Survey did not define “relative value” and made no mention of subscriber attraction and retention.[137] Id. Further, Mr. Horowitz criticized the form of the budget allocation (constant sum) question as ambiguous. The question asked how much the respondent's system “would have spent” during the relevant year. See, e.g., Bortz Survey at B-5 (Question 4a.). Mr. Horowitz maintains this sentence structure is open to interpretation. Id.
d. PTV and Canadian Measures
Various parties criticized the treatment of PTV and CCG claimant groups in almost every relative value measure, including the surveys. As noted, Ms. McLaughlin and Dr. Blackburn criticized both the survey and regression methodologies, but applied their “changed circumstances” [138] analysis to estimate the relative value of PTV programming and PTV's relative claim to royalties deposited in the Basic Fund.[139] Professor Conrad opined that it was a “strange practice” to assign a value of zero to Canadian programming for respondents who did not retransmit any Canadian signals. See 3/5/18 Tr. at 1964-65 (Conrad). He testified that the better practice would have been to characterize Canadian programming for non-CCG signals as “missing data” and to impute values from data actually collected. Id. at 1965.
Mr. Trautman acknowledged a slight participation bias in the Bortz Survey, but testified that the number of PTV-only and CCG-only cable systems (approximately 60 systems in the aggregate) was insignificant and that including them would have made little difference in his results. See 2/15/18 Tr. at 507 (Trautman). The triers of fact for these royalty allocation proceedings have long recognized that the results of the survey methodology employed by Bortz exhibited a bias against PTV and Canadian claimants. The Judges in the 2004-05 proceeding acknowledged that the participation bias affecting results for both PTV and CCG was troubling, but that
[i]t would be inappropriate to overstate the impact of this problem. No one in this proceeding maintains that it substantially affects more than a small portion of the total royalty pool . . . . Nor has it been shown that the Bortz survey's remaining non-PTV-Canadian estimates were thrown outside the parameters of their respective confidence intervals solely because of this problem. That is, the PTV-Canadian problem does not substantially affect any of the remaining categories in some disproportionate way.
2004-05 Distribution Order, 75 FR at 57067. Nonetheless, on rebuttal, Mr. Trautman adjusted the Bortz Survey results based on the McLaughlin/Blackburn testimony that supported a greater valuation of the PTV and CCG claimant groups and by referring to the Horowitz Survey responses to further adjust the augmentation proposed by McLaughlin/Blackburn. See Trautman WRT at 47-48; 2/20/18 Tr. at 523-24 (Trautman).[140]
Further, in the present proceeding, the Judges have the advantage of competing surveys such as the Ringold Survey commissioned by the CCG that dealt with PTV and Canadian programming, and other methodologies that did not suffer from the participation bias that discounts the Bortz Survey results.
e. Impact of WGNA
Participants in the present proceeding wrangled with valuation of WGN programming distantly retransmitted on the WGN “Superstation,” WGN America (WGNA).[141] WGNA did not offer for retransmission a program lineup identical to the one broadcast locally on WGN. Only those programs carried simultaneously on WGN and WGNA are compensable under the section 111 license. WGNA substituted syndicated or devotional programming for elements of the WGN signal. In the 2004-05 proceeding, the Judges criticized the Bortz Survey for failing to measure and value accurately the compensable programs retransmitted on WGNA. In fact, Bortz acknowledged this failure to differentiate compensable from noncompensable programs on WGNA and conceded that the survey results for Program Suppliers (the category most frequently retransmitted on WGNA) and Devotional Programming should be considered the ceiling for those categories. See 75 FR at 57067. In the 2004-05 determination, the Judges cited repeatedly the lack of record evidence regarding the quantitative adjustment for over-valuing noncompensable programming retransmitted on WGNA. See, e.g., id.
In the present proceeding, Bortz employed a separate questionnaire form to survey cable systems that retransmitted only the WGNA signal. Bortz created a WGNA programming list that identified compensable programming and provided the list to survey respondents before continuing with the questions. See Bortz Survey at 30. Bortz continued to use its standard questionnaire for cable systems that carried WGNA along with other distant signals. See Bortz Survey at B-2 (“This Appendix provides examples of the survey instruments used to interview respondents at systems that carried distant signals in addition to or other than WGN during the relevant survey year.”) (emphasis added).
The Horowitz Survey's questions relating to WGNA directed respondents not to assign any value to noncompensable programming, describing noncompensable programs as “substituted for WGN's blacked out programming.” Mr. Trautman opined that the “blacked out” instruction in the Horowitz Survey was meaningless because respondents would “have no reason to be aware of which [programming is substituted].” See 2/20/18 Tr. at 535 (Horowitz).
WGNA was the most widely-retransmitted station in the U.S. during the period at issue in this proceeding.[142] In the 2010-2013 timeframe WGNA was retransmitted by approximately three-fourths of the cable systems retransmitting distant signals and reached over 41 million distant subscribers. See Wecker Report, ¶ 23; Bortz Survey at 25. Bortz attempted to improve on the measure of WGNA retransmissions criticized in the 2004-05 proceeding. Horowitz also addressed the issue from the 2004-05 Bortz survey, but with less specificity than Bortz achieved in its 2010-13 survey for WGNA-only cable systems.
E. Conclusions Regarding Surveys
Surveys of cable system programming executives provide insight into the value those executives assign to the categories of programs eligible to receive a portion of the retransmission royalties cable systems deposit with the Copyright Office. No participant in any television royalty proceeding has developed a method to measure the actual market value of a content creator's product as bundled into a broadcast signal. Indeed, the value of a content creator's product will vary depending on the nature of the bundle and the buyer of that bundle; every creator and every viewer is likely to place a different value on every product. As buyers of the broadcast signals, CSO executives' valuations reflect their conclusions regarding the extent to which the category of programming contributes to the return on that investment; i.e., helps the cable system attract and retain subscribers.[143]
Surveys of CSO executives admittedly measure only the demand side of a value calculation. Several witnesses in the present proceeding criticized the focus only on a demand-side valuation. See, e.g., 3/13/18 Tr. at 3433 (Stec). As noted in the discussion of relative value in allocation proceedings, the Judges accept that there are valid reasons for focusing on the demand side in this proceeding. See 1998-99 Librarian Order, 69 FR at 3615 (in relevant hypothetical marketplace, supply of broadcast programming is fixed and does not determine value). Indeed, in the present proceeding, both the regression and viewership methodologies also attempt to measure value from a demand-side perspective: Regressions by measuring various demand variables, such as subscribers, and the viewership study by measuring consumption of programming by viewers. In the current regulated market structure, CSOs' purchase of broadcast signals as bundles reflects a derived demand, one step removed from the supply and demand measured at the station acquisition level. CSOs deposit royalties based on distant signal equivalents (or a minimum fee), a basis that is divorced from the individual program content copyright owner. In this context, the buyers' demand, as measured primarily by revealed preferences, is the only equitable measure of compensation to copyright owners.
Bortz, Horowitz, and Ringold used a constant sum construct, asking respondents to value program categories by percentages and requiring that their allocations total 100%. The Bortz Survey muddled the concepts of cost and value by means of its warm-up question that asked survey respondents to rank program categories by how expensive it would have been for the CSO to acquire them. This may have injected some confusion into the respondents' estimation of relative value. The question of interest in this proceeding is not cost; rather, it is relative value. It is unclear how, if at all, the injection of a cost question furthers that inquiry.
Further, as in past surveys, Bortz did not survey cable systems that carried only PTV and/or CCG signals; those systems thus had no opportunity to allocate any of their hypothetical budgets to PTV or CCG programming. See id. The Horowitz Survey included PTV- and CCG-only systems, but threw a curve ball by including an “Other Sports” category when there may have been little to no “other sports” content, and assigning the entire value of that category to Program Suppliers. Horowitz also may have introduced bias by providing program examples for some of the program categories. The examples, at best, would have had no effect on the results; but at worst, could have skewed results unnecessarily.
For all of the reasons highlighted by critics of the survey valuation method, the Judges agree that surveys are not a perfect measure. Nonetheless, survey results have been cited in prior royalty distribution proceedings as a generally acceptable starting point to measure relative program category value. Previous allocation determinations have relied heavily and almost exclusively on Bortz surveys. That reliance serves as precedent for the current Judges.[144] Adoption of a methodological precedent does not, however, preclude the Judges' consideration of current evidence.[145] In the present proceeding, the Judges have three CSO surveys to consider. The methodological precedent thus gives rise to additional evidence to guide the Judges' treatment of the survey methodology. Notwithstanding the differences in approach, the results derived from the Bortz Survey and the Horowitz Survey are compatible. Further, the relative valuations of CSO executives do not vary wildly from the valuations derived from participants' regression analyses.
The Judges conclude that the allocation measures resulting from the Horowitz Survey, with adjustments, are the survey results that most closely reflect the relative value of the agreed categories of programming in the hypothetical, unregulated market. Regardless of proffered evidence to the contrary, the Judges find that the surveyed cable system executives were sufficiently familiar with the compensable content on the signals their respective systems retransmit.[146]
The doubly regulated nature of compensable Canadian programming complicates assignment of a value to that category. The clarity of the Ringold Survey, with its comparisons to superstations and independent stations, establishes the relative value of Canadian and non-Canadian programming on Canadian signals to cable systems retransmitting within the Canadian zone of the U.S. The Ringold Survey, however, captures only the relative values of Canadian programming on Canadian signals to the cable operators that retransmit them within the Canadian zone. The CCG did not provide any means of converting those results into a royalty share for the CCG category (or any other program category). The Ringold Survey is thus of minimal assistance to the Judges.
Horowitz did not exclude from its sample systems that distantly carried only PTV and/or Canadian signals. The Judges conclude that Horowitz's use of examples to “aid” respondents, while flawed, was not likely to skew results significantly in any of the established categories. Horowitz's inclusion of Other Sports, however, created a value where none, or next to none, existed and allocated all Other Sports value to Program Suppliers.
For all the reasons described above, particularly the acknowledged systematic bias against PTV and CCG programming, the Judges accord relatively less weight to the “Augmented” Bortz Survey. On balance, the Judges find the Horowitz Survey results to be more reflective of CSOs' actual valuations of the program categories defined by agreement and adopted in this proceeding. However, the Judges cannot accept allocation of 100% of the Other Sports relative value to Program Suppliers. For that reason, the Judges conclude that the most appropriate treatment of the Other Sports “points” is to reallocate them in proportion to the relative values established outside the Other Sports category. The Judges' calculations are illustrated in Table 15.[147]
Table 15—Horowitz Survey Results After Reallocating “Other Sports” to Remaining Categories
Category              2010 (%)   2011 (%)   2012 (%)   2013 (%)
CTV                      13.28      14.41      17.28      10.30
Program Suppliers        40.15      32.50      30.90      30.94
JSC                      34.26      30.41      28.03      38.10
SDC                       4.05       6.64       6.31       3.76
PTV                       8.25      14.92      16.54      16.62
CCG                       0.01       1.12       0.96       0.38
With regard to the ultimate question of interest in the present proceeding, the Judges conclude that survey results offer one acceptable measure of relative value, particularly for Sports, Program Suppliers, Commercial TV, and Devotional programming. With regard to PTV and Canadian programming, adjustments resulting from the McLaughlin/Blackburn evidence and the Ringold Survey assure a reasonable relative value of PTV and Canadian signals, respectively. Considering all of the evidence presented in this proceeding, the Judges conclude that the constant sum survey methodology, with adjustments, provides relevant information relating to the relative value for each of the six categories remaining at issue. Considering the more persuasive regression analyses, however, the Judges afford less evidentiary power to the values derived from these adjusted survey results. The Judges conclude that Dr. Crawford's first (duplicate minutes) regression analysis is a stronger base on which to make the category allocation determination.
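For illustration of the computation reflected in Table 15, the following Python sketch applies a pro rata reallocation to the 2010 Horowitz figures from Table 12 (where the Devotional and Sports rows correspond to the SDC and JSC rows of Table 15). The scaling shown is a straightforward proportional reallocation offered as an illustration, not a reproduction of the Judges' worksheet.

```python
# Sketch of reallocating the "Other Sports" points pro rata among the remaining
# categories, using the 2010 Horowitz Survey figures reported in Table 12.

horowitz_2010 = {
    "CCG": 0.01, "CTV": 12.38, "Devotional": 3.78,
    "PS": 37.43, "PTV": 7.69, "Sports": 31.94, "Other Sports": 6.77,
}

other_sports = horowitz_2010.pop("Other Sports")
scale = 100.0 / (100.0 - other_sports)          # scale factor = 100 / 93.23

reallocated = {category: round(share * scale, 2)
               for category, share in horowitz_2010.items()}

# reallocated is approximately {'CCG': 0.01, 'CTV': 13.28, 'Devotional': 4.05,
# 'PS': 40.15, 'PTV': 8.25, 'Sports': 34.26}, consistent with the 2010 column
# of Table 15 (SDC = Devotional, JSC = Sports).
```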
IV. Viewership Measurement
Program Suppliers, unique among all participants in this proceeding, proposed an allocation methodology based on the relative amount of aggregate viewing of the programs in each of the agreed program categories. They presented this methodology through the report and testimony of economist Dr. Jeffrey Gray.[148]
A. Viewership as a Measure of Value
Dr. Gray posited a hypothetical market structure divided into a primary market and a secondary market. In the primary market broadcasters would purchase from copyright owners the right to broadcast programs in their local market (as is currently the case) and would at the same time obtain the right to retransmit the programs into distant markets. In the secondary market the broadcasters would sell their entire signal to cable operators, most likely as part of retransmission consent negotiations. In the hypothetical primary market the broadcaster would pay the copyright owner both a royalty to broadcast the program in the local market and a surcharge for the right to retransmit each program into distant markets. The broadcaster would recoup that surcharge as part of its transaction with the cable operator in the secondary market. See 3/14/18 Tr. at 3682-84, 3779-81 (Gray); Hamilton WDT at 14.
Dr. Gray stated that “[i]t is axiomatic that consumers subscribe to a CSO to watch the programming made available via their subscriptions” and that “[t]he more programming a subscriber watches, the happier the subscriber is, and the more likely she will continue to subscribe, all else equal.” Gray CAWDT ¶ 13. He concluded, therefore, that “a measure of the happiness, or `utility,' an individual subscriber gets from a specific program is the number of minutes that subscriber spent viewing the program offered to him or her by the CSO” and “[a] measure of the utility all subscribers get, in total, from a specific program is the total level of subscriber viewing of the program.” Id.
Applying this economic principle to the hypothetical market, Dr. Gray opined that expected viewing in the distant market would determine the value of the programming in the distant market. See 3/14/18 Tr. at 3684-85, 3873-74. Program Suppliers assert that actual and projected subscriber viewing information would be critical to negotiations between cable operators and broadcasters for the right to retransmit broadcast signals in an unregulated market. See PS PFF ¶ 17; Hamilton WDT at 14; 3/19/18 Tr. at 4317-19 (Hamilton). Consequently, Program Suppliers argue that subscriber viewing information is the most reasonable metric for determining relative market value. See PS PFF ¶ 18; Hamilton WDT at 14-15; 3/19/18 Tr. at 4317-19 (Hamilton); 3/14/18 Tr. at 3822-23, 3873-74 (Gray).
B. Implementation of the Viewing Study
In the broadest sense, Dr. Gray's methodology for determining the relative value of programming in the various program categories was to assign all compensable distantly retransmitted programs on a sample of stations to appropriate program categories, aggregate the quarter hours of expected viewing for every program in each category, and divide the total number of expected quarter hours of viewing for each program category by the sum of expected quarter hours of viewing for all categories. See Gray CAWDT ¶ 22; 3/14/18 Tr. at 3684-85, 3689-90 (Gray).
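As an illustration of that aggregation step, the short Python sketch below sums hypothetical expected quarter hours of viewing by category and converts the totals to shares; the program records and values are invented for illustration and are not taken from Dr. Gray's data.

```python
# Sketch of the aggregation step described above: sum expected quarter hours of
# viewing per program category, then divide each category total by the grand total.
# The (category, quarter_hours) records below are hypothetical.

from collections import defaultdict

programs = [
    ("Program Suppliers", 1200.0), ("Program Suppliers", 800.0),
    ("Public Television", 950.0), ("Commercial Television", 400.0),
    ("JSC", 60.0), ("Devotional", 30.0), ("Canadian Claimants", 45.0),
]

category_totals = defaultdict(float)
for category, quarter_hours in programs:
    category_totals[category] += quarter_hours

grand_total = sum(category_totals.values())
viewing_shares = {cat: 100.0 * qh / grand_total for cat, qh in category_totals.items()}
# Each share is that category's percentage of all expected distant viewing.
```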
To accomplish this, Program Suppliers obtained, at Dr. Gray's direction, data on cable systems and retransmitted television signals from Cable Data Corporation (CDC),[149] television programming data from Gracenote,[150] program logs for Canadian television stations from the Canadian Radio-television and Telecommunications Commission (CRTC),[151] and viewing data from Nielsen's National People Meter (NPM) database.[152] See 3/14/18 Tr. at 3685-88 (Gray). Due to cost considerations, Dr. Gray created a sample of approximately 150 distantly retransmitted stations for each year and instructed Program Suppliers to obtain program and viewership data only for those stations included in his sample. See Gray CAWDT at 24 App. B; 3/14/18 Tr. at 3686-89 (Gray).
Dr. Gray did not calculate viewing shares directly from the Nielsen viewing data. Instead, he used the Nielsen data as inputs to a regression algorithm that permitted him to calculate expected distant viewing for each program in each quarter-hour throughout each year based on a number of independent variables including what Dr. Gray described as “a measure of local ratings.” See Gray CAWDT ¶¶ 36-38; 3/14/18 Tr. at 3692 (Gray).[153] Dr. Gray stated that he employed regression to compensate for the high incidence of non-recorded viewing in the Nielsen data, as well as instances where viewing data were missing. Id. at 3690-91. Regression analysis allowed Dr. Gray to estimate positive viewing even in instances where there was zero observed viewing in the Nielsen data, by increasing low estimates and decreasing high estimates. Dr. Gray described this as “data smoothing,” and opined that “[i]t's a desirable outcome in general when estimating based upon other estimates, in particular.” Id. at 3691. In addition, regression allowed Dr. Gray to “fill in the blanks” where Nielsen data was missing. Id.
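Dr. Gray's actual regression specification is not reproduced here; the following generic Python sketch only illustrates the kind of imputation he describes, fitting a simple least-squares relationship between a local-ratings measure and observed distant viewing and then predicting expected viewing for quarter hours with zero or missing Nielsen observations. All data values and the single-predictor form are assumptions for illustration.

```python
# Generic sketch (not Dr. Gray's specification) of regression-based imputation:
# estimate expected distant viewing from a local-ratings predictor, then use the
# fitted line to "fill in" quarter hours with zero or missing recorded viewing.

import numpy as np

# Hypothetical observations: local rating vs. recorded distant viewing minutes.
local_rating = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
observed_viewing = np.array([10.0, 22.0, 28.0, 41.0, 62.0, 79.0])

# Ordinary least squares fit: expected_viewing = b0 + b1 * local_rating.
X = np.column_stack([np.ones_like(local_rating), local_rating])
b0, b1 = np.linalg.lstsq(X, observed_viewing, rcond=None)[0]

# Predict ("smooth") expected viewing for quarter hours lacking usable Nielsen data.
missing_ratings = np.array([0.8, 2.5])
expected_viewing = b0 + b1 * missing_ratings
```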
Based on his regression analysis Dr. Gray derived the following viewing shares:
Table 16—Gray Viewing Shares
Claimant                    Royalty share
                        2010 (%)   2011 (%)   2012 (%)   2013 (%)
Canadian Claimants          1.96       3.93       3.58       5.16
Commercial Television      15.83      12.06      15.48      10.61
Devotionals                 1.18       2.44       1.07       1.10
Program Suppliers          50.94      49.92      36.17      45.09
Public Television          27.96      29.09      41.64      33.29
JSC                         2.13       2.57       2.06       4.76
Total                     100        100        100        100
Gray CAWDT ¶ 38, Table 2.
Program Suppliers propose that Dr. Gray's viewing shares serve as one end of a range of reasonable royalty allocations (the other end being determined by the Horowitz Survey). PS PFF ¶ 355.
C. Criticism of Dr. Gray's Viewing Study
Program Suppliers' proposed use of Dr. Gray's viewing analysis as a basis for allocating royalty shares was roundly criticized by nearly all other participants through their respective experts. The criticism ranged from general disagreement with the underlying premise that viewership is an appropriate measure of relative value, to specific critiques of how Dr. Gray executed his study.
1. Viewership Not an Appropriate Measure
Several economists testified that viewership is not an appropriate measure of relative value, at least when apportioning value among different program types.[154] See, e.g., Written Direct Testimony of Michelle Connolly, Trial Ex. 1005, ¶ 33, and citations to designated prior testimony therein (Connolly WDT); Israel WRT ¶ 42; see also 3/7/18 Tr. at 2474 (McLaughlin) (“We can look at viewing, which I don't see as a measure of value itself . . . .”). For example, Dr. Mark Israel, an economist testifying for the JSC, opined that Dr. Gray's viewing analysis “provides no reliable basis for determining the relative valuation” of the agreed categories of programs, primarily because “it treats all viewing minutes as the same and thus does not account for the fact that minutes of different types of programming have different values.” Israel WRT ¶ 42. Dr. Israel argues that it is not valid to treat all minutes of viewing equally without considering the number of minutes of each type of content that is available. “If the same number of minutes of all types of content were available, then the total amount of each that viewers choose to consume could indicate their relative value. But given the smaller number of available minutes of Sports programming, one cannot support such a conclusion.” Id.
Professor Crawford, an expert witness for CTV, sought to demonstrate the lack of a one-to-one correlation between viewing minutes and relative value by examining the affiliate fees cable operators pay in an unregulated market to carry cable channels with different types of content. His analysis showed that cable systems pay far more for sports content than non-sports content with the same level of viewership. See Written Rebuttal Testimony of Gregory S. Crawford, Ph.D., Trial Ex. 2005, ¶ 36 & Fig. 1 (Crawford WRT).
Dr. Israel posited that many viewers may choose to view a given category of programming only as a second choice because their first choice is not available. See Israel WRT ¶ 42. Stated differently, a raw viewing measurement conveys no information about the intensity of the viewers' preferences for particular types of programming. See Connolly WDT ¶ 29. In its pursuit of greater subscription revenues, “the perceived intensity of subscriber preferences” would be a key consideration for cable operators. Id. ¶¶ 29-30.
Several economists found Dr. Gray's focus on subscribers' viewing patterns to be misplaced because it is cable operators, not subscribers, who pay for programming to fill their channel lineups. See, e. g., Israel WRT ¶ 43; Written Rebuttal Testimony of Matthew Shum, Trial Ex. 4004, ¶ 7 (Shum WRT). “Naturally, the value of distant signals to CSOs derive [sic] in part from the value that existing and potential subscribers place on them. . . . Nevertheless, as a principle, the relative market values for distant signal programming depend on the CSOs' valuations of the programming, and not on subscribers' valuations.” Shum WRT ¶ 7. According to CCG expert Professor Shum, viewing is, at best, “a measure of subscribers' valuations” rather than CSOs'. Id. ¶ 8.
Dr. Gray's critics assert that viewership is not a primary consideration for cable operators. A cable operator's goal in selecting distant signals is to grow subscriber revenue by attracting new subscribers, retaining existing subscribers, and increasing subscription fees. See Connolly WDT ¶¶ 29, 31-32. Cable operators seek to increase profits by offering bundles of channels that will appeal to subscribers with varying tastes, including tastes for niche programming. See Shum WRT ¶¶ 10-11; Connolly WDT ¶¶ 31-32. According to JSC expert Professor Connolly, “the economics of bundling suggests that the most profitable addition to a cable system's programming is for content that is negatively correlated with content already offered by the cable system[,]” thus, “in the context of the economic value of individual programming within a bundle to a CSO, neither simple viewership data nor volume of programming is an appropriate metric for the relative market value of programming on distant signals.” Connolly WDT ¶¶ 32, 31; accord Crawford CWDT ¶ 7 (“channels that appeal to niche tastes are more likely to increase cable operator profitability due to the likelihood that household tastes for such programming are negatively correlated with tastes for other components of cable bundles”). As Professor Shum explained:
[N]iche programming, which may have small viewership numbers, may actually have higher incremental value for CSOs relative to mass appeal programs with larger viewerships. . . . While this may seem paradoxical, the reason is that many mass appeal programs (e.g., gameshows or sitcom reruns) are close substitutes for each other, and hence if many viewers watch a mass appeal program on a distant signal, that merely subtracts from, or “displaces,” the viewership of similar programs on non-distant signals. Thus adding a distant signal station with mass appeal programming merely shuffles existing viewers between the added stations and other stations already carried by the CSO and does not attract new viewers to the CSO's offerings. The rational CSO would have no value for such a distant signal. In contrast, the viewership of niche programs, no matter how small, represent “new eyeballs” for the CSOs, as those viewers would not find similar programs on other channels in the CSO's bundles. These viewers would be among the “new subscribers” who may otherwise not initiate service with the CSO if distant signal programming were not available.
Shum WRT ¶ 12 (footnotes omitted).
Parties critical of using viewing as a measure of value point to empirical evidence to corroborate arguments based on economic theory. Dr. Wecker and Mr. Harvey demonstrate (based on Dr. Gray's analysis) that paid programming (i.e., infomercials) had a higher viewing share than JSC programming in three of the four years covered by this proceeding. See Wecker Report ¶ 44 & Table 7. The JSC point out that, according to Dr. Gray's theory equating viewership with value, cable operators would place a higher value on paid programming than live sports broadcasts, even though Mr. Allan Singer, a former cable industry executive and JSC witness, testified that content such as infomercials actually detracts from the value of a signal. Singer WRT ¶ 7. Mr. Singer also testified that there is “clearly not” a “one-to-one correlation between audience viewing levels and value,” though it is a “component” of value. 2/22/18 Tr. at 1047-48 (Singer). Mr. Daniel Hartman, a media consultant and former DirecTV executive testifying for the JSC, stated that ratings were “definitely not a determinative factor” in a multi-channel video program distributor's (MVPD's) negotiations with suppliers of programming. 3/12/18 Tr. at 3155-56 (Hartman). Nor do ratings figure into the rates that MVPDs pay or the contractual terms and conditions they agree to when they negotiate with suppliers of programming. Id. at 3156-57. CTV argues that, while Program Suppliers' witness Sue Ann Hamilton testified to the importance to cable operators of prospective viewing by subscribers, she also stated that she did not obtain Nielsen data on viewing of distant signals. CTV PFF ¶¶ 147-148 (citing Hamilton WDT at 5-6; 3/19/18 Tr. at 4326 (Hamilton)).
Program Suppliers responded by holding to the position that viewership is the most direct measurement of relative value of programming for the reasons articulated supra,[155] relying primarily on Dr. Gray's and Ms. Hamilton's testimony in support of Dr. Gray's viewing study. See, e. g., PS Reply PFF ¶ 129.
2. Reliance on Incomplete Nielsen Data
On January 22, 2018, two weeks before the scheduled commencement of the allocation hearing in this proceeding,[156] Program Suppliers filed a “Third Errata” to Dr. Gray's written direct testimony. See Third Errata to Amended and Corrected Written Direct Statement and Second Errata to Written Rebuttal Statement Regarding Allocation Methodologies of Program Suppliers (Jan. 22, 2018) (Third Errata). The stated reason for this Third Errata was that Dr. Gray had discovered that the Nielsen viewing data he had been provided for his analysis did not include any data for distant viewing of WGNA. Id. at 1; see also 3/14/18 Tr. at 3518 (Lindstrom). WGNA, the national satellite feed for WGN-Chicago, was the most widely retransmitted distant signal in the U.S. during the years covered by this proceeding.
The SDC moved to exclude the Third Errata from evidence, arguing that Program Suppliers were seeking to introduce “substantial revisions to its proposed allocation methodology” and not “mere corrections of errors.” Settling Devotional Claimants' . . . Motion to Strike MPAA's Purported “Errata” to the Testimony of Dr. Jeffrey Gray at 9 (Jan. 25, 2018). The SDC argued that, in addition to using a Nielsen dataset that included WGNA viewing data, Dr. Gray proposed “an all-new regression in addition to the regression [he] previously proposed, and a new sample weighting methodology underlying all of its computations.” Id. The Judges granted the SDC's motion and excluded the Third Errata, reasoning that it was too late to introduce a new analysis. See 2/15/18 Tr. at 232 (Barnett, C.J.); accord Order Granting MPAA and SDC Motions to Strike IPG Amended Written Direct Statement and Denying SDC Motion for Entry of Distribution Order, Docket Nos. 2012-6 CRB CD 2004-09 (Phase II), 2012-7 CRB SD 1999-2009 (Phase 2), at 5 (Oct. 7, 2016) (striking Amended Written Direct Statement that was filed without leave and that introduced a substantially modified regression specification).
As a result of the Judges' exclusion of the Third Errata, the version of Dr. Gray's viewing analysis in the record is based on a Nielsen dataset that does not include viewing data for WGNA. While it is undisputed that the use of this incomplete dataset almost certainly affected Dr. Gray's computations, the record does not reveal the magnitude of the effect on each participant's viewing share.
Dr. Gray testified that, in spite of the missing WGNA data, his viewing analysis produced viewing shares that were within a “zone of reasonable consideration.” 3/14/18 Tr. at 3764 (Gray). He based his opinion on “a dramatic decline in compensable programming carried on WGNA and a dramatic decline in viewing of WGNA programming, such that it had become increasingly less important over time.” Id. at 3763; see also 3/14/18 Tr. at 3522 (Lindstrom) (“I haven't quantified it, but based on past experience, I would say that . . . there wasn't much that was, in fact, compensable programming that was on.”). In addition, Program Suppliers argue that Dr. Gray's computed viewing shares were based on accurate Nielsen data as to viewing on the remainder of the approximately 150 stations in his sample for each year and were reliable as to those stations. See PS PFF ¶ 109; 3/14/18 Tr. at 3525, 3537-38 (Lindstrom). Moreover, Dr. Gray testified that the Crawford and Israel fee-based regression analyses, as modified by Dr. Gray, support his estimated viewing shares as being within a zone of reasonableness. See 3/14/18 Tr. at 3744-45 (Gray).
Other participants dispute this. The JSC point to evidence that, while compensable Program Suppliers' programming declined in the 2010 to 2013 time frame (and as between that period and the 2004-05 period), the amount of compensable JSC programming remained stable. See Cable Operator Valuation of Distant Signal Non-Network Programming 2010-13, Trial Ex. 1001, at 28 Table III-2 (Bortz Report); see also Hartman WRT ¶ 14, Table III-1 (telecasts of JSC programming on WGNA remain relatively constant during 2010-13 and between 2010-13 and 2004-05). The JSC argue that the omission of the WGNA data thus disproportionately affected the JSC, as compared to Program Suppliers. JSC PFF ¶ 162.
The SDC, through the testimony of their economist Dr. Erdem, similarly argue that the absence of WGNA data is likely to disproportionately bias the results against claimant categories with smaller distant viewership. See Erdem WRT at 32.
Several experts testified that the imputed zero distant viewing values that Dr. Gray input into his regression for the missing WGNA data necessarily affected the predicted viewing that the regression produced. See Wecker Report ¶ 33 (“choosing to code zero distant viewing for large stations such as WGNA . . . created counterintuitive associations within the data where stations with extremely large distant subscribers are predicted to have low numbers of viewers”); 2/22/18 Tr. at 1299-1300 (Harvey). Dr. Gray appears to have conceded this point. See 3/15/18 Tr. at 4054-55 (Gray).
3. Reliance on Unweighted Nielsen NPM Data
The Nielsen data on which Dr. Gray relied was an extract from Nielsen's NPM database. See 3/14/18 Tr. at 3685-88 (Gray). The NPM data are derived from a geographically stratified sample of about 22,000 television households that is “designed in such a way so that every household in the United States has a probability of being selected” and represents approximately 110 million U.S. television households. Id. at 3507, 3539-40 (Lindstrom); 2/22/18 Tr. at 1179 (Harvey); National Reference Supplement 2010-2011, Trial Ex. 2021, at 1-1 (Nielsen Supplement). A subset of the NPM data, known as Local People Meter (LPM) data, is used for measuring viewership in the top 25 local markets. 3/14/18 Tr. at 3556 (Lindstrom); Sanders WRT ¶ 6.viii. Nielsen disproportionately oversamples the (mostly urban) LPM markets, with 600 to 1000 metered households in each. See Nielsen Supplement at 1-1; Erdem WRT at 27.
a. Use of Nielsen NPM Data
Several witnesses opined that the NPM database is the wrong tool for measuring local and distant viewing to individual television stations because the NPM data are not designed to measure viewership in local or regional markets. See Corrected Written Rebuttal Testimony of Susan Nathan, Trial Ex. 1090, at 3, 5-6 (Nathan CWRT); 2/22/18 Tr. at 1180-81, 1213 (Harvey); Written Rebuttal Testimony of Ceril Shagrin, Trial Ex. 2009, ¶ 24 (Shagrin WRT). Ms. Shagrin contended that an appropriate sample to measure distant viewing would need to oversample small markets, and the NPM does not oversample small markets. Consequently, the NPM data could not produce a proper measure of distant signal viewing. Shagrin WRT at ¶¶ 18, 22, 24; 3/1/18 Tr. at 1778 (Shagrin).
The CCG and SDC both argued that their program categories are underrepresented in the NPM sample design. See CCG PFF ¶ 200; SDC PFF ¶¶ 130-131. By statute, Canadian television stations may only be carried by cable systems within 150 miles of the U.S.-Canada border or north of the forty-second parallel. 17 U.S.C. 111(c)(4). Many communities within that “Canadian Zone” are not included in the NPM sample. 3/15/18 Tr. at 4071-73 (Gray); Sanders WRT, App. E; Boudreau CWDT at 87. Similarly, the SDC claim that many portions of the “Bible Belt” are not included in the NPM sample. See Sanders WRT, ¶ 6.xi, Apps. E-F.
More generally, some experts argued that Dr. Gray's use of the NPM data resulted in a high number of instances of zero recorded viewing in the data he fed into his regression algorithm. Viewing of distantly-retransmitted signals is a relatively small phenomenon, and in many regions the NPM had an insufficient number of metered households to measure that viewing. See Nathan CWRT at 5-6, 8; Wecker Report ¶¶ 21-22 & Table 4; 2/22/18 Tr. at 1180-81, 1183-84, 1252-54 (Harvey); Gray CAWDT ¶ 35. Ninety-four percent of the quarter hour observations in Dr. Gray's dataset showed zero recorded viewing, and only 0.96% of the observations reported two or more distant viewing households. See Wecker Report ¶¶ 18, 21-22 & Table 4; Shum WRT ¶ 17; see also Bennett WRT ¶ 49 & Fig. 16. Approximately 20% of the distantly-retransmitted stations in Dr. Gray's sample have no recorded local or distant viewing in the Nielsen data. See Shum WRT ¶ 18.
Dr. Gray, and Mr. Lindstrom of Nielsen,[157] defended the use of NPM data for measuring viewership of programs on distant signals. Dr. Gray testified that he consulted with Nielsen concerning his selection of data and the uses to which he intended to put it, and Nielsen found his approach to be reasonable. See 3/14/18 Tr. at 3932-33 (Gray); 3/15/18 Tr. at 3846 (Gray). He relied on his regression analysis to project distant viewership values to quarter hours on stations in his sample, including those stations in portions of the country that were not included in the Nielsen NPM sample. See id. at 4073. Mr. Lindstrom testified that Nielsen recommended the NPM database because “it is recognized that the meter is by far the best technology and best method for being able to measure television usage.” 3/14/18 Tr. at 3506 (Lindstrom). Mr. Lindstrom also testified that, while the NPM is a measurement of nationwide viewing, “all national viewing is inherently aggregations of local usage. . . . It's all based on viewing built up from a very localized level. . . . [I]f you believe in sampling—and I'm a big believer in sampling—and the core methodology behind it, that you are getting a very good measure of the viewing going on in those homes and that when looked at in aggregate, it is a very solid number.” Id. at 3508-10.
Regarding the “zero viewing” criticisms, Dr. Gray testified that instances of no recorded viewing are to be expected, and constitute “information regarding the level of viewing for the Nielsen sample.” 3/15/18 Tr. at 3973 (Gray); see Gray CAWDT ¶ 35; 3/14/18 Tr. at 3717 (Gray). Similarly, Mr. Lindstrom explained that, given Nielsen's sampling rates and the levels of distant viewing, one would expect a large number of individual quarter-hour observations to show no recorded viewing. He emphasized that it is necessary to aggregate and average the observations to get an accurate picture of viewing. See 3/14/18 Tr. at 3527-28 (Lindstrom). “[I]f you believe in sampling, then the aggregation is, in fact, going to give you solid results . . . . [I]f you're going to look at the individual pieces, then the individual pieces are highly subject to criticism because you're not supposed to look at individual pieces.” Id. at 3529.[158]
b. Application of Improper Sample Weights to the Nielsen Data
In order to project viewing data from sample households to the broader television audience, Nielsen employs sophisticated weighting schemes. “The weights measure the number of people in the population that are represented by each member of the sample. For example, if [a] sample member has a weight of 20,000 for a selected day, this means that on that day the sample member represents 20,000 in the population.” Nathan CWRT at 5 (quoting Nielsen online tutorial on weighting (internal quotations and footnote omitted)). Dr. Gray was supplied with Nielsen's national weights, but not with weights that would permit accurate projection to local or regional markets. See 3/14/18 Tr. at 3711, 3715-16 (Gray). He chose to use the unweighted Nielsen data, rather than weights that would project to a national audience. Dr. Gray testified that he was concerned that using the national weights would produce anomalous results, where numbers of projected viewers for a distant signal would, in some cases, exceed the number of cable households that receive the signal on a distant basis. See id. at 3715-16.
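As a purely arithmetic illustration of the weighting concept described above (the figures are hypothetical, not Nielsen's), projection multiplies each sample household's recorded viewing by the number of population households that sample household represents, whereas an unweighted tabulation simply counts sample households:

```python
# Hypothetical sample records: (household viewed the quarter hour?, projection weight)
sample = [
    (1, 20_000),   # this home represents 20,000 households on this day
    (1, 35_000),
    (0, 27_000),
]

unweighted_count = sum(viewed for viewed, _ in sample)           # 2 sample homes viewing
projected_households = sum(viewed * w for viewed, w in sample)   # 55,000 projected households

# Treating every metered home as equal (the unweighted approach) discards the
# differing number of households each home represents in Nielsen's design.
```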
Ms. Susan Nathan, a media research consultant, agreed that it would have been inappropriate for Dr. Gray to apply the NPM national weights to data concerning distant viewing. See Nathan CWRT, at 9. However, Ms. Nathan also found Dr. Gray's use of unweighted Nielsen data inappropriate:
In arriving at his distant viewing estimates, Dr. Gray treats each NPM sample household as equal—even though each NPM sample household is not equal in Nielsen's sample design. Rather, each household is representative of a different number of potential viewers. Simply estimating the number of sample participants that might view a given program is not an accurate means of estimating viewership. By ignoring the weighting and assuming one people meter household is the same as another, Gray also applies the unweighted data in a manner for which it was not intended.
Id. Mr. Gary Harvey, a statistician and applied mathematician, similarly criticized Dr. Gray's use of unweighted data: “[B]ecause Dr. Gray doesn't take into account any weighting . . . you don't know how important that household is . . . for your particular area.” 2/22/18 Tr. at 1182 (Harvey); see id. at 1201-02.
Dr. Gray responded that his decision to use the unweighted Nielsen data was the best of three options available to him. He could have used the sample weights in the NPM database, which project each quarter-hour observation out to the number of households in the NPM survey that particular Nielsen household represented on that particular day. Dr. Gray was concerned that this would produce anomalous results, where the predicted number of viewing households could exceed the number of distant subscribers with access to that distant signal. See 3/14/18 Tr. at 3714-15 (Gray). He could have used sample weights that project each observation to the particular distant viewing market, but those weights were not available from Nielsen, and would have been impracticable for him to develop. Id. at 3715-16. Or he could have taken the approach that he ultimately settled on and used the unweighted Nielsen data. See id. at 3716. Dr. Gray pointed out that Nielsen used unweighted data in a similar fashion in a previous proceeding and noted that, in any event, he was not interested in the absolute number of viewer quarter hours, but the relative level of viewing among the parties. See id. He concluded that performing a regression on the unweighted Nielsen viewing numbers was “a reliable methodology to do so.” Id.
4. Sample of Stations Biased Results
Dr. Gray selected his sample of stations using a statistical technique called stratified random sampling. He ranked the universe of distantly-retransmitted stations by numbers of distant subscribers, divided the stations into strata proportionate to the number of distant subscribers reached by the signal, and randomly selected stations from each stratum. 3/14/18 Tr. at 3686 (Gray). He selected stations from the stratum containing the stations with the most distant subscribers with 100% probability (i.e., he selected all of them). The probability of selecting any given station declined with each succeeding stratum, with the probability of selecting a given station in the final stratum ranging from approximately 2.4% (i.e., 19 in 792) to approximately 3.5% (i.e., 22 in 632). See Bennett WRT ¶ 28, Figs. 6-9. In order to account for the differing probabilities of selection between the different strata, Dr. Gray had to weight the viewing data. Data pertaining to the largest stations, which were selected with 100% probability, received a weight of 1. Data pertaining to stations with a lower probability of selection received a higher sample weight (the reciprocal of the probability of selection). See 3/15/18 Tr. at 3964-65 (Gray). The stations with the fewest distant subscribers, which had the lowest probability of being selected, received the highest sample weight, ranging from 28.73 to 41.68. See Bennett WRT ¶ 28, Figs. 6-9.
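A simplified sketch of this type of design follows; the strata sizes and sample counts, other than the 19-in-792 example noted above, are hypothetical, and the sketch is not a reconstruction of Dr. Gray's actual sample.

```python
import random

# Hypothetical strata of distantly retransmitted stations, ordered by number
# of distant subscribers.
strata = {
    "largest":  {"stations": [f"TOP{i}" for i in range(25)],  "n_sampled": 25},  # certainty stratum
    "middle":   {"stations": [f"MID{i}" for i in range(200)], "n_sampled": 20},
    "smallest": {"stations": [f"LOW{i}" for i in range(792)], "n_sampled": 19},  # ~2.4% selection
}

sample = []
for name, s in strata.items():
    selected = random.sample(s["stations"], s["n_sampled"])
    prob = s["n_sampled"] / len(s["stations"])     # probability of selection in this stratum
    weight = 1.0 / prob                            # sample weight = reciprocal of that probability
    sample.extend((station, weight) for station in selected)

# Certainty-stratum stations carry a weight of 1.0; the smallest stratum here
# carries a weight of 792/19, approximately 41.7.
```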
Use of a stratified random sample (with appropriate weighting) can allow oversampling of elements with a given characteristic (in this case stations with larger numbers of distant subscribers), while still permitting statistical inferences about the universe of elements as a whole. However, Dr. Bennett, an economist and econometrician who testified for CTV, criticized this approach, arguing that Dr. Gray's sampling design is prone to error and bias and that Dr. Gray made a number of errors implementing his sample. See generally Bennett WRT.
a. Sample Design Led to a Biased Sample
Dr. Bennett describes Dr. Gray's sample design as an example of “cluster sampling” because Dr. Gray sampled stations (which air multiple programs) rather than sampling programs directly. See Bennett WRT ¶¶ 15-16. Cluster sampling, according to Dr. Bennett, is “more prone to bias than simple random samples of equal size” because “individual clusters often contain a non-random and relatively homogenous set of units.” Id. ¶¶ 17, 18 & Fig. 1. In the context of television programming, Dr. Bennett observed that programs assigned to particular claimant categories are often concentrated by station type (i.e., Canadian, educational, network, independent, or low power). Over- or under-sampling of stations of a particular type could thus have a substantial impact on the volume and viewership share of the categories of programming that are disproportionately carried on those stations. Id. ¶ 18. If the sample of stations is not proportionately representative of the station types in the population, the program types will not be representative of the population of television programs.
Dr. Bennett argues that Dr. Gray's samples of stations were, in fact, not representative of the station types in the population. See id. ¶ 29. Dr. Bennett offers as evidence of unrepresentativeness the proportion of educational stations in Dr. Gray's samples in each year as compared to the proportion of educational stations in the population. He notes that Dr. Gray consistently under-sampled educational stations in 2010, 2011, and 2013, and oversampled educational stations in 2012. See id. ¶ 32 & Fig. 10. Conversely, he finds that Dr. Gray over-sampled independent stations in 2010, 2011, and 2013, and under-sampled them in 2012. See id. ¶ 34 & Fig. 11. Since independent stations carry a greater proportion of Program Suppliers' programs than other station categories, Dr. Bennett concludes that Dr. Gray's computations of volume and viewership overstate those values for Program Suppliers' programming. See id. ¶¶ 39-42. Dr. Bennett opines that Dr. Gray should have included station type as a stratification variable to avoid potential bias. See id. ¶ 19.
Dr. Gray acknowledged that it would have been possible, as Dr. Bennett suggested, to stratify with respect to program type. See 3/14/18 Tr. at 3771 (Gray). However, he argued that not performing that stratification did not render his sample biased. “I'm appealing to randomness. I think bias is a strong word.” Id. He also acknowledged that he could have done some “post-sampling weighting, which would have changed [the] estimate slightly,” but did not do so. Id.
b. Sampling Frame and Sampling Weights Were Incorrect
Dr. Bennett points out (and Dr. Gray confirms) that some duplicate stations were included in Dr. Gray's samples. See id. ¶¶ 21-25 & Fig. 3; 3/15/18 Tr. at 3859-63 (Gray). This occurred, for example, when the CDC data Dr. Gray received listed certain stations twice—once with a “DT” suffix after the call sign and once without (e.g., CBUT and CBUT-DT). See Bennett WRT ¶ 24 & Fig. 4.
As a result of these duplicates, Dr. Bennett found that Dr. Gray's sampling frame included more stations than were in his target population.[159] Bennett WRT ¶ 22. Dr. Bennett argues that the mismatch of Dr. Gray's sampling frame and the population of distantly-retransmitted stations rendered the sampling frame unsuitable to represent the target population. Id. ¶ 21. Dr. Bennett argues that “Dr. Gray's failure to remove duplicate stations . . . distorts his count of unique stations, his assignment of stations to individual strata, and the sampling weights that he calculates based on his incorrect station count,” which could affect Dr. Gray's analysis in several ways:
a. Double-counting some stations in the sampling frame, which changed the likelihood of selection for all stations outside the top stratum; and
b. Where both versions of the duplicative station were selected, such as for CBUT . . . 2010, overrepresentation of the duplicate station in the sample, and the exclusion of a non-duplicate station from the sample; and
c. Incorrect sampling weights being applied to sampled stations in strata with one or more of the duplicative stations.
Id. ¶ 25.
Dr. Bennett argued that “the errors in Dr. Gray's sampling weights are further compounded by the fact that Dr. Gray has dropped sampled stations that did not have coverage in the Gracenote Data.” Id. ¶ 26. Over the four years at issue in this proceeding, Dr. Gray had to drop between five and eight sampled stations per year (for a total of 24 of his 609 sampled stations) because Gracenote could not provide programming information for them. See id. ¶ 27. The omitted stations were distributed unevenly across the sample strata and subject to different sample weights. Dr. Bennett opines that Dr. Gray should have adjusted his weighting to account for the number of missing stations across the strata for each year. See id. ¶ 28. In addition, Dr. Bennett testified that Dr. Gray failed to apply his sample weights in performing his regression analysis, leading to biased results. See id. ¶¶ 58-59.
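The following sketch illustrates, with hypothetical call signs and counts, the two adjustments Dr. Bennett describes: collapsing duplicate call-sign variants before constructing the sampling frame, and recomputing the reciprocal-of-selection-probability weights after dropping sampled stations that lack programming data.

```python
# Illustrative only; station names and counts are hypothetical.
def dedupe_frame(stations):
    """Collapse call-sign variants such as 'CBUT' and 'CBUT-DT' into one entry."""
    return sorted({s.removesuffix("-DT") for s in stations})   # Python 3.9+

def adjusted_weight(stratum_size, n_sampled, n_dropped):
    """Recompute a stratum's sample weight after dropping sampled stations
    that had no programming (Gracenote) coverage."""
    effective_sample = n_sampled - n_dropped
    return stratum_size / effective_sample

frame = dedupe_frame(["CBUT", "CBUT-DT", "WXYZ", "KABC"])         # ['CBUT', 'KABC', 'WXYZ']
w = adjusted_weight(stratum_size=792, n_sampled=19, n_dropped=2)  # ~46.6, versus 792/19 ~ 41.7
```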
Dr. Gray acknowledged the existence of duplicate stations in his sample. See 3/15/18 Tr. at 3859 (Gray). He explained that at the time that he drew the sample there were a number of stations that had the same call signs with different suffixes, and, after consultation with CDC and Nielsen, he was unable to determine whether or not they were the same or different signals. See 3/14/18 Tr. at 3719-20. He opted to treat them as different stations because, if he had treated them as the same station and they proved to be different stations, he would have had to discard the sample and start over. Id. Having duplicate stations in the sample effectively resulted in a smaller sample and a higher margin of error. See id. at 3721; 3/15/18 Tr. at 3853-56 (Gray). Dr. Gray testified, however, that the existence of duplicate stations did not render his viewing estimates biased or incorrect. See 3/15/18 Tr. at 3859 (Gray).
Dr. Gray also acknowledged that the existence of duplicate stations resulted in the application of different sample weights to different subscriber groups that received the same signal. See id. at 3861-62. He maintained, however, that applying differing sample weights did not “make the estimated viewing biased or wrong.” Id. at 3861.
Regarding his sampling weights, Dr. Gray acknowledged that he should have recalculated them to reflect the removal of certain stations from the sample for which data were unavailable. See id. at 3867. He opined that the difference would be de minimis, “given the types of stations that did not have programming data.” Id. “[E]very . . . sensitivity analysis I ever did with respect to viewing had . . . almost de minimis impacts. . . . I would not expect it to impact the overall calculated shares.” Id. at 3867-68.
Contrary to Dr. Bennett's assertion, Dr. Gray testified that he applied his sample weights to the Nielsen data and maintained that “it's an unbiased measure of viewing.” Id. at 3861-62.
c. Erroneous Application of Random Sample to Geographic Stratified Sample
Dr. Erdem criticized Dr. Gray's sampling technique because it superimposed a random selection on a geographically-stratified sample.[160] He argued that the two sampling schemes are incompatible, because “[t]here is no guarantee that the stations in Dr. Gray's sample were broadcast or retransmitted in the . . . geographic areas sampled by Nielsen.” Erdem WRT at 26. As a result, “[l]ocal or distant viewership would be underreported or completely missing if geographies where a particular station is retransmitted are not sampled by Nielsen.” Id. Consequently, Dr. Erdem considered Dr. Gray's data source to be “practically unusable,” and concluded that “no reliable conclusions can be drawn on the basis of the sample that Dr. Gray uses.” Id. at 25.
Dr. Gray responded that Dr. Erdem's criticism “would have been a concern, had [he] not used regression analysis.” 3/14/18 Tr. at 3718 (Gray). He conceded that “Dr. Erdem has a legitimate point” and that it is not “ideal” to superimpose a random sample on top of a geographic sample. Id. He testified, however, that he had overcome that criticism by using regression analysis to predict viewing “even in those areas of underrepresentation by Nielsen.” Id. at 3718-19. As a consequence, he was not concerned about Dr. Erdem's criticism. Id. at 3719.
5. Other Methodological Errors
Experts for the other parties lodged a barrage of criticisms of a variety of methodological choices that Dr. Gray made in performing his analysis.
a. Imputation of Zeroes for Missing Nielsen Data
The NPM data that Nielsen provided to Dr. Gray included only observations of positive viewing. See 3/14/18 Tr. at 3712 (Gray). For several million station/quarter-hour pairings during the relevant period there was no record of positive viewing in the NPM data. See Wecker Report ¶ 21. Dr. Gray added zero-viewing records for these station/quarter-hour pairings and used these zero values as input in his regression analysis. See id.; Bennett WRT ¶ 53 & Fig. 17.
Dr. Bennett and Mr. Harvey both criticized this practice. Dr. Bennett argued that “Dr. Gray's practice of equating missing records with zero viewing lacks foundation and undermines the reliability of his regression analysis. . . . Dr. Gray offers no logical explanation for why zero might be the correct value to use in place of a missing record.” Bennett WRT ¶ 54. Dr. Bennett posited the existence of an apparent contradiction: “[E]ither the missing values truly correspond to zero viewing and the regressions serve no purpose—why estimate a known quantity—or the true values of the missing records potentially differ from zero, in which case Dr. Gray has imposed an incorrect assumption that biases the estimated relationship between distant and local viewing.” Id.
Mr. Harvey argued that Dr. Gray failed to demonstrate that a sufficient number of NPM households received a given distantly transmitted signal to conclude that the absence of viewership data indicated zero viewing. 2/22/18 Tr. at 1203-07 (Harvey). “[Y]ou might have zero people meters, in which case [a zero viewing observation] is useless data. . . .” Id. at 1335. In Mr. Harvey's view, “there is no possible way to come up with some metric . . . for these smaller samples without knowing the number of people meters. . . .” Id.
Dr. Gray explained that “[t]here was [sic] never any zeros in the Nielsen data. They only have recorded viewing and non-recorded viewing.” 3/14/18 Tr. at 3712 (Gray). The data that Nielsen provided to Dr. Gray were “all recorded viewing values.” Id. He testified that the absence of an entry for recorded viewing for a given quarter hour meant that “there was no Nielsen household in the sample viewing” that channel at that particular time. Id. In those cases he added an entry with a zero-household count. See id. at 3712-13. Dr. Gray distinguished between instances of zero local viewing and data that was “missing” because local viewing for that channel was not measured by Nielsen. See id. at 3895-97; 3/14/18 Tr. at 3717-18. In the latter instance, he imputed a local rating based on the average local rating for programs of the same type during that particular quarter hour. See id.; 3/15/18 Tr. at 3897-3900 (Gray).
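A minimal sketch of the mechanics Dr. Gray describes follows; the station names, quarter hours, and viewing counts are hypothetical. It adds a zero-household record for every station/quarter-hour pairing that does not appear in the positive-viewing extract.

```python
import pandas as pd

# Hypothetical extract of recorded (positive) viewing only.
positive = pd.DataFrame({
    "station":      ["WAAA", "WAAA", "WBBB"],
    "quarter_hour": [1, 3, 1],
    "distant_hh":   [1, 2, 1],
})

# Full grid of station/quarter-hour pairings for the period.
grid = pd.MultiIndex.from_product(
    [["WAAA", "WBBB"], range(1, 5)], names=["station", "quarter_hour"]
).to_frame(index=False)

# Pairings absent from the extract are coded as zero recorded viewing.
full = grid.merge(positive, how="left", on=["station", "quarter_hour"])
full["distant_hh"] = full["distant_hh"].fillna(0)

# Where local viewing was not measured at all, a local rating would instead be
# imputed from the average local rating for programs of the same type in that
# quarter hour, as described above.
```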
b. Incorrect Measure of Local Ratings
As an input for his regression analysis, Dr. Gray used a “measure of local ratings” that he constructed by dividing local viewing (as measured by Nielsen) by the size of the market—i.e., “the number of subscribers reached by the particular signal.” See 3/14/18 Tr. at 3693 (Gray). Dr. Bennett clarifies that, by number of subscribers, Dr. Gray refers to the total number of local and distant subscribers who receive the signal. See Bennett WRT ¶ 56.
Dr. Bennett faults Dr. Gray's inclusion of the number of distant subscribers in the denominator when calculating his measure of local ratings. “Dr. Gray's inclusion of distant subscribers in his `measure' of local viewing means that, all else equal, he will assign higher local viewing to a station with the fewest distant subscribers, and vice versa.” Id.
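A short numerical illustration of Dr. Bennett's point, using hypothetical subscriber and viewing figures, shows that two stations with identical local viewing and identical local subscriber counts receive different local ratings under this construction solely because of their distant subscriber counts:

```python
def gray_style_local_rating(local_viewing_hh, local_subs, distant_subs):
    # Local viewing divided by all subscribers (local plus distant) who receive the signal.
    return local_viewing_hh / (local_subs + distant_subs)

# Hypothetical stations with identical local viewing and local subscribers:
few_distant  = gray_style_local_rating(50_000, 1_000_000, 10_000)     # ~0.0495
many_distant = gray_style_local_rating(50_000, 1_000_000, 5_000_000)  # ~0.0083

# All else equal, the station reaching more distant subscribers is assigned the
# lower "local rating" -- the effect Dr. Bennett identifies.
```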
Dr. Gray maintained that, after consultation with Nielsen, he found his measure of local ratings to be reasonable. See 3/14/18 Tr. at 3932-33 (Gray).
c. Regression-Based Estimates in Lieu of Nielsen Observations of Positive Viewing
Dr. Gray computed his viewing shares based solely on the estimates he computed using his regression analysis. He used the observations of positive viewing in the Nielsen NPM data solely as an input into the regression analysis, not in the final computation of viewing shares. Dr. Bennett described this procedure as being “without . . . support” and argued that Dr. Gray's reliance on estimated viewing “further undermines the reliability of his viewing analysis.” Bennett WRT ¶¶ 50-51.
Specifically, Dr. Bennett argued that, as compared with the observations of positive viewing in the Nielsen NPM data, Dr. Gray's estimates are biased in favor of Program Suppliers and PTV programming, and biased against CTV and CCG programming. See id. ¶ 64 & Figs. 21-22; 3/1/18 Tr. at 1874-75 (Bennett). Professor Shum reiterates the same point with respect to CCG programming, arguing that Dr. Gray's analysis systematically lowered estimates of distant viewing of Canadian signals because (a) the regression undercounted local viewing by excluding local viewing in Canada; (b) Canadian stations were underrepresented in Dr. Gray's 2010 sample; and (c) Canadian signals cannot be carried outside the Canadian Zone. See Shum WRT ¶¶ 25-38. Professor Shum proposes adjustments to Dr. Gray's viewing shares to account for the first two purported defects, but he was unable to propose an adjustment to account for the third. See id. ¶¶ 29-30, 33-35, 38.
Dr. Gray maintained that basing his viewing shares on the predicted viewing he computed through his regression analysis was both reasonable and superior to using Nielsen's viewing estimates for that purpose. See 3/15/18 Tr. at 3940-41, 3943, 3948 (Gray). In particular, he argued that, while Nielsen's measurements were of “geographically-focused areas,” his regression analysis produces estimates of relative viewing “throughout the United States.” Id. at 3949. He acknowledged that his regression would not produce particularly good estimates of the level of distant viewing, but opined that his estimates were “more accurate on a relative basis for the United States.” Id.; see id. at 3946, 3948.
d. Miscategorized Programs
Dr. Bennett asserts that Dr. Gray incorrectly assigned thousands of programs to the wrong claimant categories. For example, he states that Dr. Gray's algorithm failed to consider Gracenote's title and program type fields when assigning programs to the CCG category and, as a result, incorrectly assigned JSC programming on Canadian signals to the CCG category. Bennett WRT ¶¶ 44-45; see also Wecker Report ¶ 12 (Dr. Gray included nearly all MLB, NHL, NBA, and NFL broadcasts on Canadian signals in the CCG category); 2/22/18 Tr. at 1169-70 (Harvey) (“Dr. Gray was very clear in his testimony that he intended to code Canadian broadcasts of Major League Baseball games and football games into the JSC Category, but he did not do that.”); Bennett WRT ¶ 18, n.11 (“obvious program categorization errors” in table showing 20 CTV programs on Canadian stations and 5 Devotional programs on Educational stations). In addition, Dr. Bennett states that Dr. Gray didn't consider whether a program coded as “religious” was syndicated before he assigned it to the Devotional category. Dr. Bennett asserts that nonsyndicated religious programming belongs in the CTV category. Id. ¶ 46.
Dr. Gray compared the category classification that he performed to Dr. Bennett's. He found that their respective algorithms assigned programs to the same category 93.5% of the time. See Gray CWRT ¶ 50. As to the programs where Dr. Gray's categorization differed from Dr. Bennett's, Dr. Gray was unable to determine which categorization was correct without undertaking a program-by-program review.[161] See id. Instead, Dr. Gray performed a sensitivity analysis to determine whether using Dr. Bennett's categorizations would have an impact on his (Dr. Gray's) share calculations. See id. ¶ 51. Using Dr. Bennett's program categorizations resulted in a modest increase in Program Suppliers' viewership share in each royalty year, “consistent with no bias in intent on the part of Dr. Bennett or me.” Id. ¶ 52.
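For illustration, the comparison and sensitivity analysis described above can be expressed as follows; the program-level data are hypothetical and the sketch is not Dr. Gray's actual computation.

```python
import pandas as pd

# Hypothetical program-level data: each analyst's category assignment and the
# predicted viewing attributed to the program.
programs = pd.DataFrame({
    "gray_category":     ["PS", "PS", "JSC", "CCG", "PTV"],
    "bennett_category":  ["PS", "CTV", "JSC", "JSC", "PTV"],
    "predicted_viewing": [120.0, 40.0, 75.0, 30.0, 90.0],
})

# Share of programs assigned to the same category by both algorithms.
agreement_rate = (programs["gray_category"] == programs["bennett_category"]).mean()

def viewing_shares(category_column):
    v = programs.groupby(category_column)["predicted_viewing"].sum()
    return v / v.sum()

# Sensitivity analysis: recompute claimant shares under each categorization.
gray_shares = viewing_shares("gray_category")
bennett_shares = viewing_shares("bennett_category")
```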
D. Analysis
1. Relevance and Impact of Prior Decisions
Program Suppliers' use of viewing data to propose allocations of cable royalties among program categories has a long, if not illustrious, history. MPAA (to use the Program Suppliers' contemporaneous designation) first offered a Nielsen study in the Copyright Royalty Tribunal's (CRT) adjudication of 1979 cable royalties. See 1979 Cable Royalty Distribution Determination, 47 FR 9879, 9880 (Mar. 8, 1982). At that time the CRT found Nielsen's viewership study to be the “single most important piece of evidence in [the] record.” Id. at 9892. Over time, however, reliance on Nielsen studies by decision makers (first the CRT, then the CARPs) waned. See 1998-99 CARP Report, supra note 144, at 33 (recounting history of use of Nielsen studies by CRT and CARPs). In 2003 a CARP, with the approval of the Librarian of Congress (Librarian), declined to use the Nielsen study as a direct measure of relative value of programming to cable operators:
[T]he Nielsen study does not directly address the criterion of relevance to the Panel. The value of distant signals to CSOs is in attracting and retaining subscribers, and not contributing to supplemental advertising revenue. Because the Nielsen study “fails to measure the value of the retransmitted programming in terms of its ability to attract and retain subscribers,” it can not be used to measure directly relative value to CSOs. The Nielsen study reveals what viewers actually watched but nothing about whether those programs motivated them to subscribe or remain subscribed to cable.
Id. at 38 (citations omitted). Or, as the Librarian summarized pithily, “[t]he Nielsen study was not useful because it measured the wrong thing.” 1998-99 Librarian Order, 69 FR at 3613.
More recently the Judges have relied upon evidence of viewership in a pair of “Phase II” distribution cases.[162] In the 2000-03 cable Phase II distribution case, the Judges concluded that “viewership, as measured after the airing of the retransmitted programs is a reasonable, though imperfect proxy for the viewership-based value of those programs.” Distribution of 2000, 2001, 2002 and 2003 Cable Royalty Funds, 78 FR 64984, 64995 (Oct. 30, 2013) (2000-03 Cable Phase II Decision) (footnote omitted). The Judges agreed with Program Suppliers' expert in that case [163] that “viewership can be a reasonable and directly measurable metric for calculating relative market value . . . . Indeed, the Judges conclude that viewership is the initial and predominant heuristic that a hypothetical CSO would consider in determining whether to acquire a bundle of programs for distant retransmission . . . .” Id. at 64996. Similarly, in the 1998-99 Phase II proceeding, the Judges found a viewership analysis to be an “acceptable `second-best' measure of value” for distributing funds allocated to the devotional programming category among claimants in that category. See Distribution of 1998 and 1999 Cable Royalty Funds, 80 FR 13423, 13432-33 (Mar. 13, 2015) (1998-99 Cable Phase II Decision).
The Copyright Act mandates that the Judges act
on the basis of a written record, prior determinations and interpretations of the Copyright Royalty Tribunal, Librarian of Congress, the Register of Copyrights, copyright arbitration royalty panels (to the extent those determinations are not inconsistent with a decision of the Librarian of Congress or the Register of Copyrights), and the Copyright Royalty Judges (to the extent those determinations are not inconsistent with a decision of the Register of Copyrights that was timely delivered to the Copyright Royalty Judges pursuant to section 802(f)(1)(A) or (B), or with a decision of the Register of Copyrights pursuant to section 802(f)(1)(D)), under this chapter, and decisions of the court of appeals. . . .
17 U.S.C. 803(a)(1). In interpreting a nearly identical provision under the CARP system,[164] the Librarian stated that “[w]hile the CARP must take account of Tribunal precedent, the Panel may deviate from it if the Panel provides a reasoned explanation of its decision to vary from precedent.” Distribution of 1990, 1991 and 1992 Cable Royalties, 61 FR 55653, 55659 (Oct. 28, 1996) (1990-92 Librarian Order) (citation omitted). In a subsequent decision, the Librarian observed that “prior decisions are not cast in stone and can be varied from when there are (1) changed circumstances from a prior proceeding or; (2) evidence on the record before it that requires prior conclusions to be modified regardless of whether there are changed circumstances.” 1998-99 Librarian Order, 69 FR at 3613-14.
As an initial matter, the Judges find that the 1998-99 CARP Report and the 1998-99 Librarian Order are relevant “precedent” [165] that the Judges must consider in connection with Dr. Gray's analysis of Nielsen viewing data; the 1998-99 Cable Phase II Decision and the 2000-03 Cable Phase II Decision are not. The task of distributing royalties among a reasonably homogeneous group of programs differs from that of allocating royalties among heterogeneous categories, and different considerations apply to each. See Indep. Producers Grp. v. Librarian of Congress, 792 F.3d 132, 142 (DC Cir. 2015) (IPG v. Librarian); Distribution of 1993, 1994, 1995, 1996 and 1997 Cable Royalty Funds, 66 FR 66433, 66453 (Dec. 6, 2001).
In considering Dr. Gray's viewing study, therefore, the Judges are mindful of the earlier decisions that found viewership studies unhelpful in allocating royalties among program categories. In particular, the Judges examine whether there is record evidence that would compel a different conclusion in the present case.[166]
2. Rejection of Viewership as a Measure of Relative Value
Although the record supports a conclusion that viewership is a measure of value, the weight of the evidence demonstrates that it is an incomplete measure of value.
The Judges agree in principle with Dr. Gray that the focus of the relative market value inquiry is on the hypothetical market in which copyright owners license programs to broadcasters for retransmission by cable operators. See 3/14/18 Tr. at 3683-84 (Gray). Experts from multiple parties agreed that, in the hypothetical market, cable operators would continue to acquire entire signals, rather than individual programs. See id. at 3683; 2/28/18 Tr. at 1377-78 (Crawford); 3/5/18 Tr. at 2157-58 (George). In this market structure copyright owners' compensation (the object of this proceeding) would flow from broadcasters to copyright owners, and would be recouped through the retransmission fee charged by the broadcaster to the cable operator. See 3/14/18 Tr. at 3682-84, 3779-81 (Gray).
That market does not exist in a world with a compulsory license, so there is no evidence of the surcharge that broadcasters would pay to copyright owners for the right to license distant retransmissions. Most parties have used the transaction in which a cable operator acquires the right to retransmit programming as a proxy. Program Suppliers, by contrast, focus on the consumer demand for programs as measured by viewership.
At bottom, Dr. Gray's study is premised on the truism that, ultimately, programming is acquired to be viewed. See Gray CAWDT ¶ 13. Consumers subscribe to cable in order to watch the programming carried on the various channels provided by the cable operator. Cable operators acquire broadcast and cable channels that carry programming their subscribers want to view. Broadcasters acquire programs that will attract viewers.[167] Viewing is the engine that drives the entire industry. It is an example of the economic concept of derived demand. The demand for programming at each step in the chain is derived from demand further along the chain, all the way to the television viewer. Program Suppliers corroborated Dr. Gray's economic insight with evidence that at least some MVPDs consider viewership metrics in making program acquisitions.
Consequently, based on the evidence presented in this proceeding, the Judges disagree with the Librarian's statement that viewership studies are not useful because they “measure [ ] the wrong thing.” 1998-99 Librarian Order, 69 FR at 3613. Viewership is no less relevant to the question of how a copyright owner would be compensated by a broadcaster in the hypothetical market than to the question of what a cable operator would be willing to pay to a broadcaster. Both are relevant because the copyright owner's compensation would be a function of downstream demand in the hypothetical market.
Even accepting that viewership is relevant to the question of value, however, does not end the inquiry. There is record evidence supporting the contention that, in the analogous market for cable channels, cable operators will pay substantially more for certain types of programming than for other programming with equal or higher viewership. See Crawford WRT ¶ 36 & Fig. 1.[168] These empirical data support economic arguments about the role of bundling and “niche” programming in cable operators' decision making. See Shum WRT ¶¶ 10-12; Connolly WDT ¶¶ 31-32; Crawford CWDT ¶ 7. It is clear to the Judges that relative levels of viewership do not adequately explain the premium that certain types of programming can demand in the marketplace. In short, viewing does not provide the whole picture.
The Judges conclude, therefore, that viewership, without any additional evidence to account for the premium that certain categories of programming fetch in an open market, is not an adequate basis for apportioning relative value among disparate program categories.
3. Rejection of Dr. Gray's Study due to Incomplete Data
The Judges also must reject Dr. Gray's study because he computed his predicted distant viewing on the basis of incomplete data. Specifically, the use of erroneous zero viewing observations for compensable WGNA programming rendered Dr. Gray's results unreliable.
WGNA was, by far, the most widely retransmitted signal in the U.S. during the period covered by this proceeding, reaching over 40 million distant subscribers. See Wecker Report, ¶ 23. That provided an opportunity for any compensable program retransmitted on WGNA to be viewed by a substantial number of households. Yet nearly none of those compensable programs were credited with any positive distant viewing on WGNA in Dr. Gray's regression. The Wecker Report, moreover, demonstrates that there were significant amounts of positive distant viewing in Nielsen's NPM database for programs carried on WGNA. See id. ¶ 26 & App. G. As Dr. Wecker and Mr. Harvey demonstrated, the numerous zeros for distant viewing on WGNA that were input into Dr. Gray's regression, combined with the use of the number of distant subscribers as a variable in the regression specification, created an erroneous negative correlation between distant subscribership and distant viewing. See id. ¶ 33; 2/22/18 Tr. at 1299-1300 (Harvey); see also 3/15/18 Tr. at 4054-55 (Gray) (appearing to concede point).
The aggregate effect of the missing WGNA data on Dr. Gray's predictions of distant viewing, and on the viewing shares he computed therefrom, cannot be determined with any certainty from the record. It was incumbent on Program Suppliers to demonstrate that the effect of the missing WGNA data did not have a substantial influence on Dr. Gray's results. They failed to do so. Program Suppliers' efforts to argue, essentially, that the omission of the WGNA data was harmless error are unavailing. The JSC rebutted Dr. Gray's assertion that compensable programming on WGNA had declined significantly, showing that JSC programming on WGNA remained stable during the 2010-2013 period. See Bortz Report, at 28 Table III-2. The Wecker Report rebutted Dr. Gray's assertion that his computed viewing shares were accurate as to the non-WGNA stations in his sample. See Wecker Report, ¶ 33. As for Dr. Gray's assertion that his viewing analysis produced viewing shares that were within a “zone of reasonable consideration,” 3/14/18 Tr. at 3764 (Gray), the “zone of reasonableness” is a legal construct that is solely within the purview of the Judges. Dr. Gray's views on what lies within or without a zone of reasonableness are immaterial.
4. Other Asserted Methodological Defects
As recounted above,[169] several experts identified what they found to be methodological errors in Dr. Gray's analysis, including his decision to use Nielsen NPM data and not to apply Nielsen's weighting to that data; his sample design and application of sampling weights; his program categorization; his imputation of zero viewing values to quarter hours not represented in the Nielsen data; and his substitution of regression-based predicted distant viewing values for the observed distant viewing in the Nielsen data. Because the Judges have found an adequate basis for rejecting Dr. Gray's viewing study based on its failure to provide a complete measurement of value, and its reliance on incomplete data, the Judges do not need to evaluate the remaining critiques.
E. Conclusion Concerning Viewing Study
Dr. Gray's viewing study provides an incomplete and therefore inadequate measure of relative market value of disparate categories of distantly-retransmitted programming. While viewing is relevant to value, it does not adequately measure the premium that cable operators are willing to pay for certain types of programming in the analogous market for cable channels.
Even if viewing were an adequate basis for apportioning value among program categories, Dr. Gray's study is fatally flawed by its reliance on Nielsen data that omitted distant viewing on WGNA—the most widely retransmitted station in the United States.
For the foregoing reasons, the Judges will not rely on Dr. Gray's viewing study for apportioning royalties among the program categories represented in this proceeding.
V. Cable Content Analysis
Dr. Israel also undertook an analysis that he characterized as a “Cable Content Analysis”—focusing on the dollar amount paid by CSOs to carry sports and other programming during the years 2010-13. More particularly, for the years 2010-13 he considered the amounts that cable networks spent per hour of programming televised in relation to total household viewing hours (HHVH). Israel WDT ¶ 45. As explained in more detail, infra, Dr. Israel concluded that CSOs place a high value per hour on live sports programming compared with other program categories. He further opined that his Cable Content Analysis presented results that were consistent with the share estimates determined by the Bortz Survey. Israel WDT ¶ 46.
More particularly, according to Dr. Israel, his Cable Content Analysis demonstrated that in each year of the 2010-13 period, cable networks paid significantly more per hour for JSC programming than for any other category of programming. Making this point in an alternative manner, Dr. Israel testified that the JSC's programming share of CSO expenditures was larger than the JSC programming share of CSO broadcast minutes or HHVH. Israel WDT ¶ 46.
Table V-5 of Dr. Israel's WDT, set forth below, compares total program hours, total HHVH, and total CSO expenditures for JSC programming with all other categories of programming on the top twenty-five cable networks:
Table 17—Cable Content Analysis 2010-2013, Summary of Top 25 Networks
Category         [A] Total           [B] Total          [C] Expenditures   [D] = [C]/[A]          [E] = [C]/[B]
                 programming hours   HHVH (000)         ($M)               Expenditures per       Expenditures per
                                                                           hour of programming    hour of viewing
JSC                      9,274.0       15,164,368.9          $12,524.7          $1,350,517.6               $0.826
Non-JSC                866,726.0      496,492,970.2           42,702.0              49,268.2                0.086
JSC / Non-JSC               0.01               0.03               0.29                 27.41                 9.60
JSC % of Total              1.06               2.96              22.68

Israel WDT ¶ 47 Table V-5.
As this table shows, for the top twenty-five cable networks, JSC programming represents approximately 1% of all programming in terms of hours transmitted and less than 3% of total HHVH. Nonetheless, these top twenty-five cable networks applied more than 22% of their programming budgets to acquire the rights to transmit JSC programming.
Dr. Israel further highlighted the importance of JSC programming to these cable networks, relative to other categories, by expressing the data on a per hour basis. Dividing total expenditures by total hours of programming per category, he showed that expenditures per hour of JSC programming were more than 27 times expenditures per hour of all other programming categories. Dr. Israel also calculated these expenditures per hour of household viewing and found that expenditures per hour of viewing of JSC programming were almost 10 times those for all other programming categories on the top twenty-five cable networks. Israel WDT ¶ 47; Table 17, supra.
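The arithmetic behind the derived columns of Table 17 can be reproduced directly from the figures reported there; the following sketch is illustrative only.

```python
# Figures from Table 17 (top 25 cable networks, 2010-13).
jsc_hours,  jsc_hhvh_000,  jsc_exp_m  = 9_274.0,   15_164_368.9,  12_524.7
non_hours,  non_hhvh_000,  non_exp_m  = 866_726.0, 496_492_970.2, 42_702.0

# Column [D]: expenditures per hour of programming.
jsc_per_prog_hour = jsc_exp_m * 1e6 / jsc_hours              # ~ $1,350,518
non_per_prog_hour = non_exp_m * 1e6 / non_hours              # ~ $49,268

# Column [E]: expenditures per household viewing hour.
jsc_per_view_hour = jsc_exp_m * 1e6 / (jsc_hhvh_000 * 1e3)   # ~ $0.826
non_per_view_hour = non_exp_m * 1e6 / (non_hhvh_000 * 1e3)   # ~ $0.086

ratio_per_prog_hour = jsc_per_prog_hour / non_per_prog_hour  # ~ 27.4
ratio_per_view_hour = jsc_per_view_hour / non_per_view_hour  # ~ 9.6
```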
Dr. Israel also looked more granularly at two cable networks, TBS and TNT, which he noted (without opposition) carried a mix of JSC and other program categories. His analysis showed patterns that were similar to what he had found with regard to the top twenty-five cable networks, viz., that JSC programming was far more valuable than all other program categories. Specifically, during the years 2010-13, JSC programming accounted for approximately 2% of the total programming hours transmitted by TBS, and about 3% of the total programming hours transmitted by TNT. In terms of viewership, the JSC generated roughly 5.5% of total HHVH on TBS during the four-year period and about 7.9% on TNT. In contrast to these relatively small percentages of programming and viewing hours, TBS spent 44.4% of its 2010-13 programming budget on JSC programming, and TNT quite similarly spent 45.5%. Once again, expressing these choices on an hourly basis, expenditures per hour of JSC programming were more than 40 times greater than expenditures per hour of all other programming on TBS, and expenditures per hour of JSC programming were almost 30 times greater than expenditures per hour of all other kinds of programming on TNT. In terms of expenditures per HHVH, TBS spent more than 13 times as much on JSC programming than on other program categories, and TNT spent almost 10 times as much compared with its spending on other program categories. Israel WDT ¶ 48 & Table V-6.
According to Dr. Israel, these absolute and relative differences are reflected in “the significantly higher license fees that cable systems and other MVPDs [Multichannel Video Programming Distributors] pay to carry these networks.” Israel WDT ¶ 51. Dr. Israel presented data to support this point, analyzing the 97 nationally and regionally distributed cable networks with a minimum of 50 million subscribers in 2013. Of these 97 networks, he found that 14 offered telecasts of JSC events and 83 did not. Over the full 2010-13 period, Dr. Israel found that the average license fee for the 14 cable networks that offered JSC programming (along with other programming) was $0.753 per subscriber per month, whereas for the 83 cable networks that did not offer JSC programming, the average license fee over the four-year period was much lower, $0.174 per subscriber per month. Israel WDT ¶ 51.Start Printed Page 3602
In opposition, Program Suppliers asserted that this analysis “is irrelevant to this proceeding.” PSPFF ¶ 354. In support of this argument they rely on Dr. Gray's assertion that “consistent with Professor Crawford's economic arguments, after negotiating programming deals with cable networks carrying live team sports programming, CSOs may then have a sufficient quantity of that type of programming to bundle for its current and potential subscribers [such that] live team sports programming would be less valuable to CSOs than other types of programming.” Gray CWRT ¶ 60.
In response to this opposition, the JSC asserted that Dr. Gray had misapplied Professor Crawford's explanation that CSOs have an incentive to add differentiated distant signal programming to their bundles “because it can help to attract and retain subscribers.” JSC RPFF ¶ 46 & n.174 (and record citations therein). More particularly, the JSC argued that Program Suppliers' argument regarding program-type saturation would not apply only to JSC programming. As they asserted: “[T]hat argument would apply equally to [Program Suppliers] (and others), whose content likewise is on cable networks in addition to local and distant signals; it provides no basis to ascribe a lower relative value to JSC.” JSC PFF ¶ 50 (and record citations therein).
The Judges understand Dr. Israel's Cable Content Analysis to be in the nature of an assertion that a similar market provides relevant and meaningful information regarding the relative values of distantly retransmitted local programs in a hypothetical market in which the statutory royalty structure did not exist. As such, Dr. Israel's approach is similar to the “benchmark” approach that is a hallmark of the sound recording and musical works rate proceedings within the Judges' jurisdiction. That is, parties in those proceedings regularly present economic evidence regarding royalty rates in other markets, urging the Judges to find sufficient comparability between the “benchmark” market and the hypothetical market at issue. When the Judges decide whether and how to weigh such benchmark evidence, they begin with the following foundational analysis that is equally applicable here:
In choosing a benchmark and determining how it should be adjusted, a rate court must determine [1] the degree of comparability of the negotiating parties to the parties contending in the rate proceeding, [2] the comparability of the rights in question, and [3] the similarity of the economic circumstances affecting the earlier negotiators and the current litigants, as well as [4] the degree to which the assertedly analogous market under examination reflects an adequate degree of competition to justify reliance on agreements that it has spawned.
In re Pandora Media, 6 F. Supp. 3d 317, 354 (S.D.N.Y. 2014), aff'd sub nom., Pandora Media, Inc. v. ASCAP, 785 F.3d 73 (2d Cir. 2015).
In the present case, Dr. Israel has not attempted to make such a structured analysis. Rather, the Judges understand his argument to be based on the assumption that the rights at issue are comparable (i.e., the programs can be categorized in a similar manner) and the buyers/licensees (the CSOs) are identical in both markets. However, in all other respects—regarding economic circumstances, competitive positions, and the nature of the seller/licensor—the relative similarities or differences are unexplored.
Accordingly, the Judges are reluctant to put much weight on Dr. Israel's Cable Content Analysis. At most, the Judges rely on his Cable Content Analysis as demonstrating that JSC programming enjoys a level of demand out of proportion to its broadcast minutes, not inconsistent with the results of his regression analysis and Dr. Crawford's regression analysis.
VI. Changed Circumstances
The Judges and their predecessors have looked at a “changed circumstances” analysis in prior proceedings. In the 1998-99 cable distribution proceeding, the CARP recommended allocation to the four largest categories strictly based on the Bortz survey results.[170] Because PTV and CCG were undervalued by the Bortz survey, the CARP recommended adjustment of allocations to those categories, giving “some weight” to the remarkable increases in relative fee generation and in “changed circumstances” as measured by an increase in subscriber instances.[171] See Final Order, Distribution of 1998 and 1999 Cable Royalty Funds, 69 FR 3606, 3617 (Jan. 26, 2004). Ultimately, however, the CARP concluded that changed circumstances, as measured by changes in subscriber instances alone, revealed a change in programming volume, which did not necessarily translate to a change in programming value. 1998-99 Librarian Order, 69 FR at 3616. In the 2000-03 distribution proceeding, the Judges salvaged consideration of changed circumstances by differentiating a fee generation methodology from a changed circumstances evidentiary consideration. See Distribution Order,[172] 75 FR 26798, 26805-07 (May 12, 2010) (2000-03 Distribution Order).
In the present proceeding, PTV retained Ms. Linda McLaughlin and Dr. David Blackburn, who filed joint written testimony. See Trial Ex. 3012. The McLaughlin/Blackburn report focused on the share of royalties that would reflect the relative value of PTV programming only. See 3/7/18 Tr. at 2446 (McLaughlin). McLaughlin and Blackburn began with the PTV share from the 2004-05 distribution proceeding, which was based largely on Bortz survey results. See Amended Testimony of McLaughlin and Blackburn, Trial Ex. 3007 at 7 (McLaughlin/Blackburn AWDT). Using primarily data from the Cable Data Corporation (CDC), they analyzed not just changes in subscriber instances, but external changes in various unit measures from 2005 to the relevant period, 2010-13, viz., distant subscriber instances, distant signal transmissions, and the balance of programming types distantly retransmitted. See id. at 7-8. Each of their unit measures indicated a basis for an increase in PTV's relative share for the period at issue in this proceeding. As Ms. McLaughlin testified, however, an increase in unit measures does not compel a conclusion that value also increased. 3/7/18 Tr. at 2648 (McLaughlin).
For valuation, McLaughlin and Blackburn analyzed survey results, regression analyses, and viewership studies. For survey analysis, they used the 2004-05 Bortz survey as a starting point. The Bortz Survey omitted respondents whose distantly retransmitted signals carried only PTV, only CCG, or only PTV and CCG together.[173] McLaughlin and Blackburn added those omitted systems to the Bortz Survey results, using the overall Bortz response rates by stratum, and by assuming, for example, that the PTV-only systems would assign a relative value to PTV of 100%.[174] They then Start Printed Page 3603recalculated the Bortz Survey relative value for PTV, by stratum, using the relative values they determined. McLaughlin and Blackburn noted that their augmentation of the Bortz Survey yielded a smaller PTV relative value (9.9%) than did the Horowitz Survey (15.8%), which included PTV- and CCG-only systems from the outset. They attributed this discrepancy to the participation bias evident in the Bortz data, i.e., that fewer eligible systems carrying PTV responded to the Bortz Survey than to the Horowitz Survey. See Rebuttal Testimony of McLaughlin and Blackburn, Trial Ex. 3002, at 4 (McLaughlin/Blackburn WRT).
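By way of illustration only, the sketch below applies the augmentation logic described above to a single hypothetical stratum: the omitted PTV-only systems are added back to the respondent base using an assumed response rate, each is assumed to assign a 100% relative value to PTV, and the stratum's PTV share is recomputed as a respondent-weighted average. The numbers, the response rate, and the function name are hypothetical; this is not the McLaughlin/Blackburn calculation or their data.

```python
# Hypothetical illustration of augmenting a single survey stratum with omitted
# PTV-only systems; all inputs are invented for the example.
def augmented_ptv_share(n_bortz_respondents: int,
                        bortz_ptv_share: float,
                        n_omitted_ptv_only: int,
                        response_rate: float) -> float:
    """Recompute a stratum's PTV relative value after adding omitted PTV-only systems."""
    # Estimate how many omitted PTV-only systems would have responded, applying
    # the stratum's overall response rate to the omitted universe.
    est_ptv_only_respondents = n_omitted_ptv_only * response_rate
    total = n_bortz_respondents + est_ptv_only_respondents
    # PTV-only systems are assumed to assign a 100% relative value to PTV.
    return (n_bortz_respondents * bortz_ptv_share
            + est_ptv_only_respondents * 1.0) / total

# Hypothetical stratum: 40 Bortz respondents giving PTV a 6% share, 20 omitted
# PTV-only systems in the universe, and a 50% response rate.
print(round(augmented_ptv_share(40, 0.06, 20, 0.50), 3))  # 0.248
```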
On rebuttal, McLaughlin and Blackburn noted that their own calculations augmenting the Bortz survey probably also underestimated the relative value of PTV, because they originated with the 2004-05 Bortz survey, which was tainted with participation bias. See id. at 4. McLaughlin and Blackburn asserted that participation bias also discounted the value of the 2010-13 Bortz Survey as an accurate measure of the relative value of PTV programming. Id. at 5.
McLaughlin and Blackburn looked at Professor Crawford's econometric study to confirm that the marginal value per minute of distantly retransmitted programs changed in a like manner to their unit measurements. Ms. McLaughlin noted increases in relative value from Dr. Waldfogel's 2004-05 regression analysis, on the one hand, to Professor Crawford's and Dr. Israel's regression analyses, on the other: 20.8% under Professor Crawford's analysis and 15% using Dr. Israel's analysis. 3/7/18 Tr. at 2472-73 (McLaughlin). As Ms. McLaughlin testified, the Crawford study establishes a price, from which value may be ascertained: “value is . . . a quantity times a price. . . . ” 3/7/18 Tr. at 2653 (McLaughlin).
Ms. McLaughlin opined that viewership is just another unit measure, not a valuation. Nonetheless, she contended that the results of Dr. Gray's viewership analysis were consistent with the survey and regression analyses, indicating a PTV relative market value of 12.6%. See McLaughlin/Blackburn WDT at 23.
The Judges find that quantifying changes in various unit measures, while not without corroborative value, is not a definitive approach to relative valuation, especially in comparison to other more probative approaches, such as regression analyses. Apparently, PTV ultimately made the same assessment. See PTV PFF ¶ 11 (“[Professor] Crawford's econometric framework is the best suited methodology to determine the claimants' shares in this proceeding for the years 2010 through 2013.”). Accordingly, the Judges consider PTV to have adopted Professor Crawford's regression analysis as the methodology on which it has relied in this proceeding.
VII. Nonparticipation Adjustment for PTV
In its proposed findings of fact and conclusions of law, PTV raised the issue of Basic Fund allocation adjustment to account for PTV not being a participant in the 3.75% Fund. See PTV PFF/PCL at ¶¶ 43-45. Although there was mention of the 3.75% Fund in the record of the proceeding, no party addressed the issue comprehensively. The Judges issued an order seeking additional briefing, including an inquiry about both the 3.75% Fund and the Syndex Fund. See Order Soliciting Further Briefing (Jun. 29, 2018) (June 29 Order). Specifically, the Judges asked
[w]hether the interrelationship between and among the Basic Fund, the 3.75% Fund, and the Syndex Fund affects the allocations within the Basic Fund, if at all, and, if so, how that affect should be calculated and quantified.
June 29 Order at 1. The Judges expressly asked for legal analysis of the issue. The Judges refused to allow introduction of any new evidence but agreed to accept affidavits, if appropriate, to clarify the record evidence of any witness. Id. at 2.
In their responses, the parties agreed that only Program Suppliers were entitled to any royalties in the Syndex Fund and that the size of the fund was so insignificant in context that the Judges should not make any adjustment to allocations in the Basic Fund to compensate for any party's exclusion from the Syndex Fund. See, e.g., SDC Brief at 1 n.1; SDC Responsive Brief at 5 (“given the minuscule amount of money in the Syndex Fund, any calculation to compensate for that fund would constitute nothing more than a rounding error to a second or third decimal place. . . .”). The parties offered analysis and argument regarding the 3.75% Fund.
The essence of the Judges' question is whether the record evidence was intended to propose an allocation of the royalties in all three funds, which might imply an adjustment to the Basic Fund allocations for parties that did not participate in the other two funds. Program Suppliers submitted affidavits from their witnesses asserting that their analysis focused on the Basic Fund only. Accordingly, Program Suppliers argued, the Judges should simply eliminate PTV from the calculation of allocation percentages for the 3.75% Fund and scale the remaining shares accordingly, without adjusting the Basic Fund allocations. See Program Suppliers' Responsive Brief at 6. PTV and the SDC both argued, to the contrary, that the Judges should scale the Basic Fund up for PTV, deriving their argument from prior allocation determinations. See PTV Brief at 5-7; SDC Brief at 1-5.
All parties agree that the PTV category is ineligible for an allocation of royalties assigned to the 3.75% Fund.[175] The Judges found, however, that the parties did not agree whether PTV's nonparticipation in the 3.75% Fund affects the allocations within the Basic Fund. Moreover, the Judges found that the arguments and evidence presented by the parties were insufficient for the Judges to resolve the issue. That problem was compounded by the fact that prior determinations regarding how the 3.75% Fund allocations might affect the Basic Fund allocations were themselves contradictory and did not address all the issues the Judges have concluded are relevant. That insufficiency prompted the June 29 Order soliciting further briefing, quoted above, which the Judges also refer to herein as the 3.75% Fund Order.[176] In Start Printed Page 3604accordance with the 3.75% Fund Order, the parties filed initial and responding briefs on these issues. Having weighed the parties' arguments, the Judges do not adjust PTV's share of the Basic Fund to reflect its nonparticipation in the 3.75% Fund or to reflect any alleged inconsistencies between the record evidence, on the one hand, and the separate allocations to the Basic Fund and the 3.75% Fund, on the other.
A. Arguments of the Parties
The parties disagree as to how, if at all, the scaling of the 3.75% Fund allocations might affect allocations in the Basic Fund. PTV argues that it is entitled to an “Evidentiary Adjustment,” [177] whereby its share of the Basic Fund is “bumped up” [178] to offset its nonparticipation in the 3.75% Fund. PTV Initial Brief at 1-2. PTV alleges that this increase is necessary because “[t]he surveys and econometric estimates of value to CSOs determine shares of the Combined Royalty Funds for each of the programming claimants” and that “[a]s a result, in order for PTV to receive the share of total value to CSOs estimated by the . . . experts, it must receive a larger share of the Basic Fund, since it will receive no share from the [3.75% Fund].” Id. at 7 (quoting McLaughlin/Blackburn WDT at 24-25). In addition, PTV maintains that it is entitled to this Evidentiary Adjustment regardless of whether the Judges allocate the Basic Fund shares based on survey evidence, regression evidence, or viewing evidence. PTV Responding Brief at 12-21. PTV also argues that this result is supported by precedent and by the record in this proceeding. PTV Initial Brief at 10-16.[179]
JSC, CTV, and the SDC agree that prior rulings support PTV's assertion that it is entitled to a bump up in its Basic Fund share, but only to the extent the Judges tie the Basic Fund allocations to the Bortz Survey results and no other allocation methodology.[180] Those parties maintain that the language in prior rulings supports such an adjustment only to that limited extent. See JSC Initial Brief at 7-8; CTV Initial Brief at 10; SDC Initial Brief at 9-10.
By contrast, CCG argues that, in light of the evidence presented, PTV's Basic Fund shares should be adjusted upward, regardless of the allocation methodology employed by the Judges, to account for PTV's non-participation in the 3.75% Fund. See CCG Initial Brief at 6.
At the other extreme, Program Suppliers oppose any increase in PTV's Basic Fund share, arguing that such an increase “effectively, albeit indirectly, compensates PTV for royalties to which it is not entitled.” Program Suppliers Initial Brief at 2. Further, Program Suppliers argue that relevant prior rulings that may have suggested PTV was entitled to this upward adjustment were based on incorrect reasoning and that none of them “rises to the level of controlling precedent.” Id. at 7; see Program Suppliers Responding Brief at 2. Finally, arguing in the alternative, Program Suppliers assert that, even under PTV's view of the relevant prior rulings, PTV would not be entitled to the Evidentiary Adjustment it seeks unless “PTV's Basic Fund share was derived solely from the Bortz Survey.” Program Suppliers Initial Brief at 7.
B. Analysis
1. Statutory Law and Regulations
Any upward adjustment of PTV's share of the Basic Fund to account for its nonparticipation in the 3.75% Fund would be inconsistent with the regulations that established the 3.75% Fund because CSOs are expressly exempted from paying into the 3.75% Fund for the distant retransmission of noncommercial educational stations. See 37 CFR 387.2(c)(2).[181]
More particularly, the CRT established the 3.75% Fund in 1982 to offset the negative economic effects on owners of copyrights on commercial programming arising from the FCC's elimination of its rule setting a ceiling on the number of distant commercial stations a CSO could retransmit. See Final Rule, Adj. of the Royalty Rate for Cable Sys., 47 FR 52146 (Nov. 19, 1982). The regulation implements Congressional policy as expressed in 17 U.S.C. 801(b)(2)(B), which provides that “[i]n the event that the . . . [FCC] . . . permit[s] the carriage by cable systems of additional television broadcast signals beyond the local service area . . . the royalty rates established by section 111(d)(1)(B) may be adjusted to ensure that the rates for the additional [DSEs] resulting from such carriage are reasonable in light of the changes effected by the [FCC] . . . . ”). See also Malrite T.V. of New York, Inc. v. FCC, 652 F.2d 1140, 1148 (2d Cir. 1981) (“The plain import of § 801 is that the FCC, in its development of communications policy, may increase the number of distant signals that cable systems can carry and may eliminate the syndicated exclusivity rules, in which event the [CRT] is free to respond with rate increases.”).[182]
Thus, any upward adjustment in the Basic Fund by the Judges to “compensate” PTV—i.e., non-commercial stations—would constitute an unlawful back-door attempt to modify this regulation and would be inconsistent with the statutory provision on which it is based. See generally 5 U.S.C. 706(2)(A) and (C) (agency action unlawful if “not in accordance with law” or “in excess of statutory jurisdiction, authority, or limitations, or short of statutory right.”).
2. Administrative Process
Even assuming arguendo that applicable statutory law permits the adjustment PTV seeks, any such adjustment would amount to an adjudicatory change to an economic policy that was created through a separate administrative rulemaking proceeding initiated for the express purpose of protecting only those copyright owners who, as a result of FCC action, lost the protection afforded by the ceiling on the number of a CSO's distant retransmissions of commercial broadcasts. See 47 FR 52146. The Judges Start Printed Page 3605will not shoehorn a de facto change in the regulations into this adjudicatory proceeding by permitting PTV to share in the royalty revenue collected by the levy of the “penalty rate” [183] of 3.75% of gross receipts.
3. Unauthorized Redistribution of Wealth and Income
Any adjustment upward to PTV's Basic Fund allocation to account for its nonparticipation in the 3.75% Fund would amount to a redistribution of wealth and income by the Judges that is not authorized by law or regulation. That is, any reduction in the Basic Fund royalties paid to owners of copyrights on programs distantly retransmitted on commercial stations to “compensate” PTV for its nonparticipation in the 3.75% Fund would impose an economic loss on the former and confer an economic windfall on the latter, in terms of the value of the program copyrights (a redistribution of wealth) and the flow of royalties realized from such ownership (a redistribution of income). The Judges find no basis in law to support such a transfer of wealth or income.
PTV argues, though, that “[n]othing could be further from the truth” than the characterization of its position as seeking to share in the 3.75% Fund. PTV Responding Brief at 5. In point of fact, PTV's argument is tantamount to an attempt to appropriate value from the 3.75% Fund. Although PTV does not seek a ruling that it is legally entitled to share in the 3.75% Fund, it seeks a ruling that it is economically entitled to appropriate value from the Basic Fund, as measured by its non-participation in the 3.75% Fund. The Judges are as concerned with the economic incidence of the application of the so-called Evidentiary Adjustment as they are with the legal incidence of PTV's attempt to appropriate wealth and income from a fund that, by law, belongs to other claimants.[184]
In the face of the foregoing points, PTV and all the other parties except Program Suppliers nonetheless argue that two factors—evidence and precedent—support the subsidy sought by PTV. The two arguments are considered below.
4. The Evidence-Based Argument
As an initial matter, the Judges note that the evidence-based argument asserted by PTV and other parties in support of the Evidentiary Adjustment cannot overcome the legal points, discussed above, that make it legally impermissible to bump up PTV's share of the Basic Fund.
Additionally, the Judges find the evidence-based argument made by and on behalf of PTV, standing alone, to be insufficient. Broadly, PTV and other parties assert that the Evidentiary Adjustment is necessitated by the purported nature of the survey evidence and the regression evidence.[185] The Judges reject this argument.
a. The Survey Evidence
With regard to the survey evidence, PTV notes that the survey questions did not explicitly ask the respondents to “differentiat[e] between the Basic, 3.75% and Syndex Rates,” and “their responses presumably were based on their past payments at all rates into the Combined Royalty Funds.” PTV Initial Brief at 10-11 (emphasis added); see also CTV Initial Brief at 6 (survey responses measure relative value of distant signals “without regard to the royalty rate paid for any particular signal”). According to this argument, the survey responses could not reflect the effects, if any, of the higher royalty rate of 3.75% of gross receipts paid by CSOs into the eponymous 3.75% Fund. Rather, according to this argument, the survey responses reflected relative value in the combined royalty funds. Therefore, PTV asserts that it is entitled to the Evidentiary Adjustment, bumping up its Basic Fund allocation to offset the economic effect of its nonparticipation in the 3.75% Fund.
The Judges find this argument to lack sufficient merit. The two surveys were designed to allow for the selection of respondents to the surveys who were the individuals most responsible for programming carriage decisions at the CSO. See Bortz Survey at 14-15 & App. B; Horowitz WDT at 9, 24; see also 2/15/18 Tr. 254 (Trautman); 3/16/18 Tr. 4109 (Horowitz). Neither survey was designed to question whether the individuals who self-reported in fact possessed this knowledge, or to test the extent or specific aspects of respondents' knowledge.
The Judges decline to presume, in the context of this 3.75% Fund dispute, that the survey respondents lacked knowledge as to the variable royalties paid for distantly retransmitted stations, when the accepted survey evidence upon which the Judges rely (the same type of survey evidence on which their predecessors have consistently relied) presumes the opposite, i.e., that the respondents are indeed knowledgeable regarding this sector of the cable industry.[186] Indeed, the argument that the Judges should presume that the survey respondents were ignorant of the impact on royalty costs of retransmitting a given number of distant local stations [187] also proves too much, because it would call into question any reliance on the survey evidence.
Moreover, the Bortz Survey includes a question—Question #3—in which the respondents are directed to consider the costs associated with the retransmission of categories of programs. Although the question is linked to the cost of program categories rather than the cost of retransmitting entire stations, the question was designed as a “warm-up” question that would encourage Start Printed Page 3606respondents to be cognizant of the costs associated with their decisions to distantly retransmit stations containing the categories represented in this proceeding. See Bortz Survey, App. at 15. Thus, the Bortz Survey evidence tends further to support the assumption that the respondents were cognizant of the costs, including the royalty costs, associated with retransmitting distant local stations.[188]
For these reasons, the Judges cannot adopt a presumption that the survey respondents, deemed knowledgeable in all other pertinent respects regarding distant retransmissions of local stations, were ignorant of the royalty costs associated with the number and type of local stations they carried. Thus, there is not a sufficient evidentiary predicate for the application of the Evidentiary Adjustment.[189]
b. The Regression Evidence
Turning to the Crawford and Israel regressions, PTV's arguments fare no better. As the SDC explained in its briefing: “Each regression includes an indicator for retransmission of a 3.75% signal [with] statistically significant coefficients for the indicator variables suggest[ing] that there is a systematic difference in the amount of royalties paid by systems and subscriber groups that retransmit 3.75% signals and those that do not.” SDC Initial Brief at 4. Thus, the Crawford and Israel regression analyses demonstrated a correlation between the amount of royalties paid by a CSO and its participation in the 3.75% Fund. This correlation is essentially tautological. CSOs who pay the higher 3.75% royalty rate for the distant retransmission of one or more additional commercial local stations (previously “non-permitted” under the since-repealed FCC “ceiling” regulation) will pay higher royalties than CSOs that pay no more than 1.064% to retransmit such stations. See id. (correlation is “not surprising, considering that retransmission of a 3.75% signal by definition carries a higher rate”). Moreover, Dr. Crawford confirmed that the coefficient for the 3.75 control variable in his regression analysis was both large and statistically significant. Crawford WDT at App. B Fig. 22.[190]
Likewise, Dr. Israel, “[s]imilar to Dr. Waldfogel,” included an indicator variable “for whether a CSO pays the special 3.75 percent fee,” and he held this factor “constant” in order to determine the extent of any correlation between royalty payments and additional minutes of programming category content. Israel WDT ¶¶ 33-34. In his regression model, Dr. Israel estimated a coefficient of 41,918 for his “Indicator for Special 3.75% Royalty Rate,” multiple times the coefficients he estimated for any other variable. Id. ¶ 36, Table V-1.
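For illustration only, the following is a stylized sketch of a regression of this general form: royalties paid are regressed on minutes of programming by category plus an indicator for payment of the 3.75% rate. All variable names and numbers are hypothetical, and the synthetic data are constructed so that the indicator's coefficient is, by design, large and statistically significant; this is not the Crawford, Israel, or Waldfogel model or their data.

```python
# Stylized sketch: royalties regressed on category minutes plus a 3.75%-rate indicator.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical number of system/accounting-period observations

df = pd.DataFrame({
    "jsc_minutes": rng.uniform(0, 5_000, n),
    "ps_minutes": rng.uniform(0, 20_000, n),
    "ptv_minutes": rng.uniform(0, 10_000, n),
    "pays_375_rate": rng.integers(0, 2, n),  # 1 if the CSO pays the 3.75% rate
})
# Hypothetical data-generating process: systems paying the 3.75% rate pay more
# royalties by construction, mirroring the "tautological" correlation noted above.
df["royalties"] = (
    40_000 * df["pays_375_rate"]
    + 3.0 * df["jsc_minutes"]
    + 0.5 * df["ps_minutes"]
    + 0.8 * df["ptv_minutes"]
    + rng.normal(0, 5_000, n)
)

X = sm.add_constant(df[["jsc_minutes", "ps_minutes", "ptv_minutes", "pays_375_rate"]])
model = sm.OLS(df["royalties"], X).fit()
print(model.summary())  # the coefficient on pays_375_rate is large and significant
```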
Thus, the regression evidence in the hearing records provides independent support for distinguishing the allocations in the 3.75% Fund from the allocations in the Basic Fund. Accordingly, the regression evidence provides substantial support for rejecting PTV's proposed bump-up in its Basic Fund allocation to offset its non-participation in the 3.75% Fund.[191]
5. The Effect of Prior Decisions
The second argument, raised by PTV and supported by several other parties, is that the Judges are bound by prior decisions of CARP panels, the Librarian, and the Judges, in which the Evidentiary Adjustment was either applied or found to be generally valid. PTV Initial Brief at 10-12; PTV Responding Brief at 9-12; JSC Initial Brief at 4-6; CTV Brief at 1-6; SDC Initial Brief at 1-7. That is, they argue that prior rulings, by the force of their reasoning or as controlling law, require the Judges to bump up PTV's share of the Basic Fund to account for its non-participation in the 3.75% Fund.
More particularly, PTV and other parties make this argument in several alternative forms, from broad to narrow. PTV and CCG argue that prior rulings support increasing PTV's share of the Basic Fund to reflect not only the survey-based allocations but also the regression-based allocations, whereas JSC, CTV, and the SDC assert that PTV's survey-based allocations should be bumped up only to the extent the Judges apply the survey share percentages in making their overall allocations.
The Judges conclude that there is neither controlling law nor any prior determination or other ruling that binds them on this issue. Further, the Judges do not agree with the explanations in two prior rulings that applied or legitimized the application of the Evidentiary Adjustment. To the extent those prior rulings might, arguendo, constitute controlling law or might, arguendo, have properly applied or legitimized the Evidentiary Adjustment on the record in those cases, the Judges find those rulings distinguishable, based on the particular facts of the present case.
a. The 1986 CRT Determination
In a 1986 determination regarding the distribution of 1983 royalties, the CRT ruled that public television (represented by PBS in that proceeding) was not entitled to participate in the 3.75% Start Printed Page 3607Fund because “non-commercial educational stations could be carried on an unlimited basis prior to FCC deregulation, and . . . no cable operator paid the 3.75% rate to carry any noncommercial stations.” 1983 Cable Royalty Distribution Proceeding, 51 FR 12792, 12813 (Apr. 15, 1986), aff'd sub nom. Nat'l Ass'n of Broadcasters v. CRT, 809 F.2d 172, 179 n.7 (2d Cir. 1986) (“because cable carriage of noncommercial educational stations was not limited by the old distant signal rules, PBS is not eligible for royalties at the new 3.75% rate”). Further, there was no argument by the parties, and no discussion in the 1986 determination, with regard to the issue at hand, viz. whether PTV should receive an upward adjustment to its Basic Fund allocation to account for its non-participation in the 3.75% Fund. See 51 FR 12792 et seq.
Accordingly, the Judges find no aspect of the 1986 determination to be on point with regard to whether PTV is entitled to an upward adjustment in its Basic Fund share to offset its non-participation in the 3.75% Fund. Indeed, the 1986 determination would be consistent with the rejection of such an adjustment.
b. The 1992 CRT Determination
The next CRT determination concerned distribution of cable television royalties for the 1989 year. 1989 Cable Royalty Distribution Proceeding, 57 FR 15286 (Apr. 27, 1992). PBS was again denied any share of the 3.75% Fund “because PBS stations are not paid for at the 3.75% rate . . . . ” 57 FR at 15303.
In this 1992 case, public television claimants, through PBS, requested the bump up in their adjustment to the Basic Fund that is at issue in the present proceeding, i.e., “to back out the 3.75% portion” from the Basic Fund. See 57 FR at 15300. The CRT rejected this proposed adjustment, relying on the testimony of Paul Bortz (president of the entity that administered the Bortz Survey), who stated that “there was nothing in his survey to suggest that respondents were considering their 1989 copyright payment as the fixed budget they were allocating.” Id.
The Judges find this rationale to be cryptic at best, because there is no obvious logical link between Mr. Bortz's description of the mindset of the CSO survey respondents and its impact on whether PBS's share of the Basic Fund should have been adjusted upward to reflect the survey evidence. In fact, Mr. Bortz's testimony could be construed as supportive of the upward adjustment in the public television claimants' share of the Basic Fund. Accordingly, the Judges do not find any controlling or persuasive authority in the 1992 determination that can serve as guidance in the present proceeding.
c. The 1990-92 CARP Report and the Librarian's Order
In the proceeding to allocate royalties for the 1990-1992 period, PTV argued on behalf of public television claimants for an Evidentiary Adjustment to its share of the Basic Fund, as that share was estimated by the CARP's reliance on the Bortz Survey.[192] The CARP ruled, with regard to the question of whether to adjust PTV's share of the Basic Fund:
PTV also contends that a further adjustment should be made in its award because its total share of the adjusted Bortz Survey must come entirely from the Basic Fund and the Bortz survey does not differentiate between the Basic fund and the 3.75 fund in which PTV does not participate.
. . .
PTV's proposed further adjustment to allow for its non-participation in the 3.75 fund is rejected for the same reason given by the [CRT] in the 1989 proceeding. Mr. Bortz specifically disavowed any intention or implication in his survey to have respondents answer based on their royalty payments.
1990-92 CARP Phase I Distribution Report 120, 124 (Jun. 3, 1996) (1990-92 CARP Report). The Judges find that the CARP's reliance on the prior reasoning of the CRT only serves to repeat the cryptic nature of that prior ruling, and does not offer any basis on which the Judges may rely to resolve the issue in this proceeding.
When Congress instituted the CARP process, it also charged the Librarian with the duty to accept or reject, in whole or in part, the decision of a CARP, and charged the Register with the duty to provide recommendations to the Librarian. 17 U.S.C. 802(f) (2003) (superseded). Discharging her duty in that 1990-92 proceeding, the Register made specific recommendations to the Librarian regarding the issues pertaining to the 3.75% Fund, all of which the Librarian adopted. The Register described, and the Librarian agreed, that the CARP's reasoning supporting its distribution of the 3.75% Fund was “at best, terse.” Distribution of 1990, 1991 and 1992 Cable Royalties, 61 FR 55653, 55662 (Oct. 28, 1996) (Librarian's Order).
In her recommendations, the Register more specifically addressed the issue at hand, rejecting PTV's request for the Evidentiary Adjustment.
The Panel did not act arbitrarily in rejecting PBS's [193] Bortz adjustment for the same reasons articulated by the [CRT] in 1989. . . . [T]he approach used in the Bortz survey itself remained unchanged. As in the 1989 proceeding, Bortz did not ask cable operators to base their program share allocation according to the royalties they actually paid. Thus, in awarding PBS programming a specific share, a [CSO] did not take into account that its stated share only applied to the Basic Fund and not the 3.75% fund. . . . The Bortz survey numbers therefore do not necessarily require the adjustment demanded by PBS. Thus, the Panel was reasonable in adopting the [CRT's] 1989 rationale because PBS's argument, and the design parameters of the Bortz survey, were fundamentally the same.
Id. at 55668. However, for the first time in a distribution proceeding, the door was opened to an argument that this Evidentiary Adjustment might be appropriate in certain contexts, as the Register further recommended:
The Panel did not state that it was using PBS's Bortz numbers as the sole means of determining its award. In fact, the Panel awarded PBS a share that is less than the unadjusted Bortz survey numbers. Had the Panel stated that it was attempting to award PBS its Bortz share, then PBS's argument might have some validity. However, since the Panel did not, it did not act arbitrarily in denying PBS's requested adjustment.
Id. (emphasis added).
d. The 2003 CARP Determination and the Librarian's Order
In 2003, for the first time, public television claimants, through PTV, were successful in obtaining a ruling that supported the application of the Evidentiary Adjustment. Specifically, a CARP adopted PTV's argument that it was entitled to the Evidentiary Adjustment, whereby its share of the Basic Fund was increased to offset the impact of its non-participation in the 3.75% Fund. The CARP Report was adopted by the Librarian, upon the recommendation of the Register. 1998-99 CARP Report, supra note 144, at 26, n.10, adopted by the Librarian, 69 FR 3606.
The 1998-99 CARP found that, based on the evidence, PTV's “raw Bortz figure” was 2.9% for both 1998 and 1999, prior to the application of the Evidentiary Adjustment. 1998-99 CARP Report at 26 n.10. The CARP then, over JSC's opposition, bumped up this “raw” percentage “to account for PTV's non-participation in the 3.75% . . . fund[ ].” Id. The CARP explained its rationale:
Start Printed Page 3608The Adjustment makes sense in the context of a CSO Survey where the respondents are allocating a fixed budget among the various claimant groups—unless JSC can demonstrate that the respondents already understood that PTV does not participate in the 3.75% Fund. JSC has made no such showing.
Id.
The CARP also sought to distinguish the prior rejections of this Evidentiary Adjustment by the CRT and the 1990-92 CARP panel.
The Panel is aware that the 1989 CRT rejected this Adjustment to Bortz and the 1990-1992 CARP adopted that rejection . . . . The Panel believes the 1989 CRT and 1990-92 CARP did not fully appreciate the logic supporting this Adjustment. It is precisely because the Bortz respondents did not answer based on their actual royalty payments and presumably did not know that PTV would not be eligible to receive part of their budget allocation that the Adjustment is warranted.
Id. (citation omitted) (boldface added). However, the 1998-99 CARP Report did not make an upward adjustment to PTV's overall Basic Fund allocation or to any measure of its relative share of the Basic Fund other than the Bortz Survey percentage, concluding:
[W]e disagree with PTV's assertion that it is entitled to such an Adjustment no matter which methodology is employed. . . . We view PTV's position that the adjustment should be made for any methodology merely as an attempt to circumvent mathematically the legal precedents established by the CRT, and PTV has presented no legal justification for reversing these precedents.
Id. Consistent with this limitation, the 1998-99 CARP did not apply the Evidentiary Adjustment to the regression approach utilized by Dr. Gregory Rosston, an economic expert who presented a regression analysis on behalf of another party. See 1998-99 CARP Report, supra note 144, at 45-51 (discussing Rosston regression approach). The CARP did not, however, explicitly state its rationale for declining to apply the Evidentiary Adjustment to the Rosston regression approach, other than to refer to the general discussion elsewhere in the same report. See id. at 48 n.21 & 59 n.29 (citing p. 26 n.10).
In the end, the CARP applied the Evidentiary Adjustment by increasing PTV's Basic Fund minimum allocation, or “floor,” as derived from the Bortz Survey, from 2.9% to 3.2%. 1998-99 CARP Report, supra note 144, at 25-26, & n.10. The final allocation to PTV though was based on additional evidence, which led the CARP to establish PTV's share above this floor, at 5.49125%, the same level as in the prior proceeding. Id. at 69; see 69 FR 3606, 3610, 3616 & n.32.
The Librarian, upon the recommendation of the Register, accepted the CARP Report in its entirety. 69 FR at 3606. However, neither the Register nor the Librarian made any specific recommendations or findings regarding the Evidentiary Adjustment applied by the CARP to increase PTV's allocation floor from 2.9% to 3.2%. See 69 FR at 3616-17.
In the present proceeding, Program Suppliers assert that, because the CARP set PTV's Basic Fund share above the 3.2% floor, it had not actually applied the Evidentiary Adjustment to the Bortz Survey results. Therefore, Program Suppliers argue that the CARP's analysis regarding the Evidentiary Adjustment was mere dicta, rather than a controlling endorsement of the Evidentiary Adjustment. Program Suppliers' Responding Brief at 3-4. The Judges disagree with Program Suppliers' characterization of that ruling. The fact that PTV's ultimate Basic Fund share exceeded the floor does not call into question the ruling by the CARP or the Librarian that the Evidentiary Adjustment, in their opinion, should be applied.[194]
e. The Judges' 2010 Determination
In 2010, the Judges determined the allocation of royalties for the 2004 and 2005 distribution years.[195] See 2004-05 Distribution Order. There, the Judges applied the Evidentiary Adjustment on behalf of PTV, as proposed by the “Settling Parties.” [196] Id. at 57070. However, the Judges did not engage in any analysis of the Evidentiary Adjustment (and indeed did not even describe that adjustment or identify it by name). Rather, they simply adopted as a “starting point” the augmented Bortz Survey “which includes appropriate adjustments to the PTV share” and then referred to paragraph 317 of the “Settling Parties” Proposed Findings of Fact. That paragraph stated: “Because PTV receives payments from only the Basic fund, an adjustment to the augmented survey results is needed to produce PTV's share of the Basic fund, as recognized by the CARP in the 1998-99 Proceeding.” Id.
In the present proceeding, PTV further notes that, in that 2010 proceeding, Professor Waldfogel asserted that his regression approach, like the Bortz survey approach, had not differentiated between the Basic Fund and the 3.75% Fund, thus purportedly supporting an application of the Evidentiary Adjustment to the regression allocations. PTV Initial Brief at 14-15. PTV further asserts that Professor Waldfogel's testimony was consistent with Dr. Rosston's testimony in the prior proceeding, supporting the application of the Evidentiary Adjustment to Basic Fund allocations based on regression analyses. Id. at 13-14. Notwithstanding that testimony, in neither of those cases did the CARP, the Librarian, or the Judges find that the Evidentiary Adjustment should be applied to the regression results. See JSC Responding Brief at 7, 9.
6. The Prior Decisions Are Not Binding
The Judges do not find the foregoing findings and conclusions sufficient to overcome the analysis they undertake in this proceeding. First, none of the prior cases considered the dispositive statutory or regulatory issues discussed herein. Second, the prior cases are factually distinguishable, because neither the survey evidence nor the regression evidence supports the application of the Evidentiary Adjustment to PTV's share of the Basic Fund. Third, as explained below, as a matter of law, the Judges are not duty bound to apply the Evidentiary Adjustment on behalf of PTV as it relates to the survey evidence, notwithstanding the conclusions in the two most recent distribution cases.
The Copyright Act does not equate relevant prior rulings with binding legal precedent. Rather, the Act provides only that the Judges shall “act on the basis . . . of prior determinations and interpretations . . . .” 17 U.S.C. 803(a)(1) (emphasis added). As the D.C. Circuit has explained, this provision does not mandate that the Judges abide by specific findings in prior rulings, provided the Judges set forth a “reasoned explanation” for a departure from those findings. See Program Suppliers v. Librarian of Congress, 409 F.3d 395, 402 (D.C. Cir. 2005). In the present determination, the Judges have explained the legal, administrative, policy, economic, and factual reasons why an application of the Evidentiary Adjustment on behalf of PTV is unwarranted. The two prior rulings that applied the Evidentiary Adjustment did not address these multiple factors, and Start Printed Page 3609certainly did not consider the issue at the depth warranted by the supplemental briefing required in this proceeding.
Further, the prior decisions reveal that the relevant tribunals went through an evolution, from prohibiting the application of the Evidentiary Adjustment, to acknowledging its potential application and, then, to supporting its application. Thus, the “controlling” aspect of those prior decisions, if any, appears to be the proposition that this thorny issue needs to be considered in detail, and that no prior decision should be extended if the successor tribunal, through reasoned explanation, finds good cause to render a decision different from the one that immediately preceded it.
7. The Waiver Argument
In its Responding Brief, PTV asserts, for the first time, that Program Suppliers, the SDC, and JSC each “waived” its right to contest the application of the Evidentiary Adjustment. PTV Responding Brief at 21-26.[197] PTV makes two basic arguments in support of its theory of waiver. First, it argues that Program Suppliers, the SDC, and JSC “knowingly and intentionally” did not “submit evidence or advance arguments” regarding the Evidentiary Adjustment, seeking to depart from or to distinguish the prior determinations that adopted PTV's construction of the Evidentiary Adjustment. Id. at 21. Second, PTV notes that none of these parties raised the issue of the application of the Evidentiary Adjustment in closing arguments. Id. at 22. PTV acknowledges that Program Suppliers did address the issue previously, but only in response to PTV's PCL addressing the Evidentiary Adjustment issue. See PTV Initial Brief at 9 (citing Program Suppliers' RPCL ¶ 12). Accordingly, PTV, relying on four decisions,[198] asserts that Program Suppliers, the SDC, and JSC waived their arguments against the Evidentiary Adjustment.
The Judges find PTV's waiver argument to be inapposite, given the procedural posture of the proceeding. The Judges found the hearing record and legal arguments to be incomplete with regard to the impact, if any, of allocations in the 3.75% Fund on the allocations in the Basic Fund. That deficiency extended to PTV's briefing as well as to the briefing of the other parties. In an attempt to cure the incompleteness, the Judges, sua sponte, entered the 3.75% Fund Order, which specifically noted the insufficiency of the facts (“exhibits [and] witness testimonies”) and the law (“legal arguments”), which could be remedied by supplemental “memoranda of law,” as well as new affidavits that “clarif[ied]” the extant record. Id. at 1. In sum, the deficiencies in the factual presentations and legal briefings of the parties were the bases for the Judges' ordering of supplemental briefing.[199] It would be anomalous for the Judges to now reverse course and find that the arguments relevant to this issue had been waived prior to the submission of supplemental filings, when those deficiencies had themselves engendered the 3.75% Fund Order.
The four cases PTV string-cites in its responding brief[200] are not on point and do not alter the Judges' analysis. U.S. v. Laslie,[201] American Wildlands v. Kempthorne,[202] and U.S. v. L.A. Tucker Truck Lines, Inc.,[203] all involved litigants who raised issues for the first time during judicial review of action by a trial court or administrative agency, and thus had engaged in an “intentional relinquishment of a known right,” which is the essence of an act of waiver. Laslie, 716 F.3d at 614. These cases are clearly distinguishable because: (1) the arguments raised with regard to the impact, if any, the 3.75% Fund has on allocation of the Basic Fund relate to an issue still before the tribunal hearing the matter; (2) the Judges have called for supplemental briefing on the very issue; and (3) the Judges have concluded that the issue can and should be decided as a matter of law.
The final case cited by PTV is Intercollegiate Broad. Sys., Inc. v. Copyright Royalty Bd., 574 F.3d 748 (D.C. Cir. 2009). There, the D.C. Circuit declined to consider an argument, raised by an appellant for the first time “[n]early a year after appealing the Judges' order, and almost three months after filing its opening brief. . . . ” Id. at 755. Although the D.C. Circuit accepted the supplemental briefing and permitted responsive briefing, the court expressly noted that it was allowing that briefing “without prejudice” as to whether it would consider the delinquent issue on appeal. Id. The D.C. Circuit ultimately ruled that it would not consider the Start Printed Page 3610issue, noting that, notwithstanding its discretionary “power” to consider the delinquently briefed issue, it chose not to exercise that discretion, in part because of the incomplete nature of the briefing and the far-reaching consequences of the delinquently raised issue. Id. at 755-56.
Intercollegiate is clearly not on point. To the extent the D.C. Circuit's procedure for weighing whether to consider a delinquently raised issue is analogous to the present case, the D.C. Circuit emphasized that it was a matter of discretion. Likewise, the Judges have the discretion, pursuant to 17 U.S.C. 801(c), to make procedural rulings in furtherance of their statutory duties. The fact that the D.C. Circuit chose in Intercollegiate to allow supplemental briefing—without prejudice to its ultimate ruling that the delinquently asserted issue would not be heard—in no way suggests that the Judges in this proceeding are barred (by an assertion of waiver, or otherwise) from exercising their statutory discretion by deciding the issue at hand, after ordering supplemental briefing.
C. Conclusion Regarding Nonparticipation Adjustment
For the foregoing reasons, the Judges do not apply an Evidentiary Adjustment to or otherwise adjust PTV's share of the Basic Fund to reflect PTV's nonparticipation in the 3.75% Fund.
VIII. Conclusions and Award
As many witnesses testified in this proceeding, no one methodology can be a perfect measure of relative market value of categories of television programs distantly retransmitted by cable television systems. That is inevitable, because the market value of distantly retransmitted programs cannot be measured directly: Cable systems do not buy retransmission rights from the program copyright owners and cable systems do not acquire retransmission rights to broadcast stations in marketplace transactions. In the applicable scheme, prices are set by statute. Neither the copyright owners' valuations nor the general laws of supply and demand apply in all their particulars in setting prices as they would in an unregulated market. Use of different methodologies can assist the Judges by illuminating different aspects of the buyers' valuation.
In this proceeding, the participants, through their respective expert witnesses, took a variety of approaches to estimate how cable systems value programming on distant signals. Some witnesses looked to survey evidence in which CSOs estimated relative value of programming by category. Cable system fact witnesses also considered whether the value of the distantly retransmitted programs is generated more by acquisition of new subscribers or by retention of niche viewers.
A broadcast station's valuation of programming is driven by each show's popularity among viewers: Viewership translates to advertising income for the broadcast station. Program Suppliers advocated looking at that viewership to determine relative value. While viewership is important for broadcasters, the Judges conclude, based on the evidence and arguments presented, that viewership, without more, is an inadequate measure of relative value of different categories of programming distantly retransmitted by cable systems. The Judges, consistent with the past several allocation decisions, give no weight to viewership evidence in allocating royalties among the various program categories.
Several participants' econometricians who testified in this proceeding analyzed value from the perspective of what CSOs actually had done in terms of deciding which distant signals to retransmit on their systems. The essence of their regression approaches was the same as the fundamental correlation in the Waldfogel regression analysis in the 2004-05 proceeding—the correlation between royalties paid and minutes of programming in each program category on each distant signal. As discussed, the Judges place primary reliance on Professor Crawford's regression analysis, and rely on his duplicate minutes approach, as to which he expressed no methodological reservations during his testimony.
After considering all the methodologies and supporting evidence presented by the copyright owner groups, the Judges are struck by the relative consistency of the results across the accepted methodologies.[204] In this proceeding, the Judges conclude that the Horowitz Survey responses and Professor Crawford's duplicate minutes regression analysis, adjusted to account for methodological limitations in these approaches, are the best available measures of relative value of the program categories.
The Bortz and Horowitz Surveys, together with the McLaughlin “Augmented Bortz” results and the Crawford and George regressions, taking into account the confidence intervals (when available) surrounding the point estimates, define the following ranges of reasonable allocations for each program category in each year:
Table 18—Ranges of Reasonable Allocations
Category              2010 Min. (%)  2010 Max. (%)  2011 Min. (%)  2011 Max. (%)  2012 Min. (%)  2012 Max. (%)  2013 Min. (%)  2013 Max. (%)
JSC                       26.73          41.85          24.82          39.42          28.03          43.81          30.12          45.88
CTV                       13.28          20.48          14.41          23.91          14.25          23.30          10.30          22.60
Program Suppliers         23.88          40.15          22.10          35.70          19.56          30.90          17.27          30.94
PTV                        6.70          17.46           7.90          21.21           6.10          21.61           8.30          29.39
SDC                        0.48           4.20           0.33           6.64           0.25           6.31           0.23           5.20
CCG                        0.01           6.55           1.12           6.61           0.70           7.47           0.38           7.85

Within these ranges, the Judges use Professor Crawford's point estimates as the starting point for most categories because the Judges find the Crawford (duplicate minutes) analysis to be the most persuasive methodology overall on this record. For two specific categories, however, the Judges deviate from the Crawford analysis based on other record Start Printed Page 3611evidence. Specifically, the Judges make a modest upward adjustment to Professor Crawford's allocation for the SDC category based on the Horowitz survey results and the Augmented Bortz survey results, together with testimony concerning the “niche” value of devotional programming. Similarly, the Judges make a modest upward adjustment to the CCG category based on Professor George's analysis and testimony that Professor Crawford's analysis (as well as the survey evidence) undervalues Canadian programming to a degree. The Judges adjust the Crawford-based allocations for the remaining categories to account for the increased allocations to the SDC and CCG categories, and to ensure that the percentages total 100% after rounding. The resulting allocations are:
Table 19—Basic Fund Allocations
Category              2010 (%)  2011 (%)  2012 (%)  2013 (%)
JSC                       32.9      30.2      33.9      36.1
CTV                       16.8      16.8      16.2      15.3
Program Suppliers         26.5      23.9      21.5      19.3
PTV                       14.8      18.6      17.9      19.5
SDC                        4.0       5.5       5.5       4.3
CCG                        5.0       5.0       5.0       5.5
Total                    100.0     100.0     100.0     100.0

As discussed in section VII, the Judges considered and rejected PTV's arguments that the allocations of Basic Fund royalties must be adjusted to account for PTV's non-participation in the 3.75% Fund. Consequently, the allocations for the Basic Fund set forth in Table 1 are identical to the allocations set forth in Table 19. To arrive at the allocations for the 3.75% Fund set forth in Table 1, the Judges have reallocated the PTV share from Table 19 proportionally among the categories that participate in that fund. In accordance with the consensus view of the parties, the Judges have allocated 100% of the funds remaining in the Syndex Fund (after distribution of the Music Claimants' share) to Program Suppliers.
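The proportional reallocation described above can be illustrated with a short sketch. Using the 2010 Basic Fund shares from Table 19, removing PTV and rescaling the remaining shares so that they sum to 100% reproduces, to rounding, the 2010 figures for the 3.75% Fund reported in Table 1. The sketch below is offered only as a worked example of that calculation, not as part of the record.

```python
# Worked example: derive 3.75% Fund shares by removing PTV from the Basic Fund
# allocations (Table 19, 2010) and rescaling the remaining categories to 100%.
basic_2010 = {"JSC": 32.9, "CTV": 16.8, "Program Suppliers": 26.5,
              "PTV": 14.8, "SDC": 4.0, "CCG": 5.0}

def reallocate_excluding(shares: dict, excluded: str) -> dict:
    """Scale the remaining shares so they sum to 100% after removing one category."""
    remaining = {k: v for k, v in shares.items() if k != excluded}
    scale = 100.0 / sum(remaining.values())
    return {k: round(v * scale, 1) for k, v in remaining.items()}

print(reallocate_excluding(basic_2010, "PTV"))
# {'JSC': 38.6, 'CTV': 19.7, 'Program Suppliers': 31.1, 'SDC': 4.7, 'CCG': 5.9}
```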
The allocations described in Table 1 at the outset of this Determination reflect the Judges' weighing of the evidence and their findings regarding allocation to each category of programming within the respective ranges of reasonable allocations.
The Register of Copyrights may review the Judges' Determination for legal error in resolving a material issue of substantive copyright law. The Librarian shall cause the Judges' Determination, and any correction thereto by the Register, to be published in the Federal Register no later than the conclusion of the 60-day review period.
October 18, 2018.
So ordered.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
David R. Strickler,
United States Copyright Royalty Judge.
Jesse M. Feder,
United States Copyright Royalty Judge.
The Register of Copyrights closed her review of this Determination on January 28, 2019, with no finding of legal error.
Start Signature
Dated: January 29, 2019.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
Approved by:
Carla B. Hayden,
Librarian of Congress.
Footnotes
1. The program categories at issue are as follows: Canadian Claimants Group: All programs broadcast on Canadian television stations, except (1) live telecasts of Major League Baseball, National Hockey League, and U.S. college team sports and (2) programs owned by U.S. Copyright owners; Joint Sports Claimants: Live telecasts of professional and college team sports broadcast by U.S. and Canadian television stations, except programming in the Canadian Claimants category; Commercial Television Claimants: Programs produced by or for a U.S. commercial television station and broadcast only by that station during the calendar year in question, except those listed in subpart (3) of the Program Suppliers category; Public Television Claimants: All programs broadcast on U.S. noncommercial educational television stations; Settling Devotional Claimants: Syndicated programs of a primarily religious theme, but not limited to programs produced by or for religious institutions; and Program Suppliers: Syndicated series, specials, and movies, except those included in the Devotional Claimants category. Syndicated series and specials are defined as including (1) programs licensed to and broadcast by at least one U.S. commercial television station during the calendar year in question, (2) programs produced by or for a broadcast station that are broadcast by two or more U.S. television stations during the calendar year in question, and (3) programs that are comprised predominantly of syndicated elements, such as music videos, cartoons, “PM Magazine,” and locally hosted movies. Public TV PFFCL at ¶ 4; Notice of Participant Groups, Commencement of Voluntary Negotiation Period (Allocation), and Scheduling Order, Docket No. 14-CRB-0010-CD, at Ex. A (Nov. 25, 2015). The categories are mutually exclusive and, in aggregate, comprehensive.
Back to Citation2. In reviewing responses to Program Suppliers' request for rehearing, the Judges became aware of an error in the Initial Determination. The Judges used an incorrect base figure in calculating the royalty shares for 2012 and 2013. The Judges detailed that correction in the Order on Rehearing. The corrected values appear in this Final Determination.
Back to Citation3. Prior to enactment of the Copyright Royalty and Distribution Reform Act of 2004, which established the Judges program, royalty allocation determinations under the Section 111 license were made by two other bodies. The first was the Copyright Royalty Tribunal, which made distributions beginning with the 1978 royalty year, the first year in which cable royalties were collected under the 1976 Copyright Act. Congress abolished the Tribunal in 1993 and replaced it with the Copyright Arbitration Royalty Panel (“CARP”) system. Under this regime, the Librarian of Congress appointed a CARP, consisting of three arbitrators, which recommended to the Librarian how the royalties should be allocated. Final distribution authority, however, rested with the Librarian. The CARP system ended in 2004. See Copyright Royalty and Distribution Reform Act of 2004, Public Law 108-419, 118 Stat. 2341 (Nov. 30, 2004).
Back to Citation4. The Judges last adjudicated an allocation (Phase I) determination for royalty years 2004-05. See Distribution of the 2004 and 2005 Cable Royalty Funds, Distribution Order, 75 FR 57063 (Sept. 17, 2010) (2004-05 Distribution Order). In the Phase I cable proceeding relating to royalties deposited between 2000 and 2003, the parties stipulated that the only unresolved issue would be the Phase I share awarded to the Canadian Claimants Group. The remaining balance would be awarded to the Settling Parties. See Distribution of the 2000-2003 Cable Royalty Funds, Distribution Order, 75 FR 26798-99 (May 12, 2010) (2000-03 Distribution Order). The Judges adopted the stipulation.
Back to Citation5. Second Reissued Order Granting In Part Allocation Phase Parties' Motion to Dismiss Multigroup Claimants and Denying Multigroup Claimants' Motion For Sanctions Against Allocation Phase Parties, Docket No. 14-CRB-0010-CD (2010-13) (Apr. 25, 2018). The Judges discontinued use of the terms Phase I and Phase II and use the terms Allocation Phase and Distribution Phase instead. Id. at n.4. This determination addresses the Allocation Phase of the proceeding.
Back to Citation6. “Form 3” cable systems are so named because they account to the Copyright Office for retransmissions and royalties on “Form 3,” the SA3 Long Form, which they must submit to the U.S. Copyright Office. The Form 3 filing is required because these systems have semiannual gross receipts in excess of $527,600. They are the only systems required to identify which of the stations they carry are distant signals. Royalty payments from Form 3 systems accounted for over 90% of the total royalties that cable systems paid during 2010-2013. Corrected Testimony of Christopher J. Bennett ¶ 10 n.2 (Bennett CWDT).
Back to Citation7. The cable license is premised on the Congressional judgment that large cable systems should only pay royalties for the distant broadcast station signals that they retransmit to their subscribers and not for the local broadcast station signals they provide. However, cable systems that carry only local stations are still required to submit a statement of account and pay a basic minimum fee. See 2000-03 Distribution Order, 75 FR at 26,798 n.2.
Back to Citation8. FCC regulation of the cable industry was impacted by passage of the 1976 Copyright Act, which created the compulsory license for cable retransmissions codified in section 111. See Report and Order, Docket Nos. 20988 & 21284, 79 F.C.C. 663 (1980), aff'd sub nom. Malrite T.V. v. FCC, 652 F.2d 1140, 1146 (2d Cir. 1981).
Back to Citation9. In 1989, in response to changes in the cable television industry and passage of the Satellite Home Viewer Act of 1988, the FCC reinstated syndicated exclusivity rules. The reinstated rules differed from the original syndex rules, giving rise to a petition to the CRT for adjustment or elimination of the syndex surcharge. See Final Rule, Adjustment of the Syndicated Exclusivity Surcharge, Docket No. 89-5-CRA, 55 FR 33604 (Aug. 16, 1990).
The CRT held that the syndicated exclusivity surcharge paid by Form 3 cable systems in the top 100 television markets is eliminated, except for those instances when a cable system is importing a distant commercial VHF station which places a predicted Grade B contour, as defined by FCC rules, over the cable system, and the station is not “significantly viewed” or otherwise exempt from the syndicated exclusivity rules in effect as of June 24, 1981. In such cases, the syndicated exclusivity surcharge shall continue to be paid at the same level as before. Id.
See Final Rule, 54 FR 12,913 (Mar. 29, 1989), aff'd sub nom. United Video, Inc. v. FCC, 890 F.2d 1173 (D.C. Cir. 1989); 47 CFR 73.658(m)(2) (1989); 47 CFR 76.156 (1989). The present proceeding deals only with allocation of those royalties among copyright owners in the various program categories.
Back to Citation10. The CRB last adjusted cable Basic, 3.75%, and Syndex rates in 2016, for the period January 1, 2015, through December 31, 2019. See Final Rule, Adjustment of Royalty Fees for Cable Compulsory License, Docket No. 15-CRB-0010-CA, 81 FR 62,812 (Sept. 13, 2016). This adjustment was pursuant to a negotiated agreement.
Back to Citation11. Public Law 111-175, 124 Stat. 1218 (May 27, 2010), reauthorized by Public Law 113-200, 128 Stat. 2059 (Dec. 4, 2014).
Back to Citation12. CSOs continue to be liable to pay a “minimum fee” for systems that do not retransmit distant signals. See 17 U.S.C. 111(d)(1)(B)(i). Calculation of royalties at subscriber group levels segregates minimum fee systems from systems that pay royalties based on retransmission of distant signals in excess of one DSE.
Back to Citation13. Docket Nos. 14-CRB-0007-CD (2010-12) and 14-CRB-0008-SD (2010-12), 79 FR 76396 (Dec. 22, 2014). The CRB received Petitions to Participate from: ASCAP/BMI (joint), Canadian Claimants, Major League Soccer, PBS for Public Television Claimants, Certain Devotional Claimants, a/k/a Settling Devotional Claimants (SDC), Joint Sports Claimants, MPAA for Program Suppliers, Multigroup Claimants, NAB for Commercial Television Claimants, NPR, SESAC, and Spanish Language Producers. Major League Soccer subsequently withdrew its petition to participate.
Back to Citation14. Docket Nos. 14-CRB-0010-CD (2013) and 14-CRB-0011-SD (2013), 80 FR 32182 (June 5, 2015).
Back to Citation15. The Judges received petitions from: ASCAP/BMI (joint), Canadian Claimants, SDC, Joint Sports Claimants, Major League Soccer, MPAA for Program Suppliers, Multigroup Claimants, NAB for Commercial Television Claimants, NPR, Professional Bull Riders, PBS for Public Television Claimants, SESAC, and Spanish Language Producers. Professional Bull Riders and Major League Soccer subsequently withdrew their Petitions to Participate. Major League Soccer withdrew its Petition to Participate in the Joint Sports Category for 2010-2013 but maintained its 2013 satellite and cable claims in the Program Suppliers category and indicated it would be represented by MPAA. Major League Soccer LLC Withdrawal of Certain Claims Relating to the Distribution of the 2010-2013 Cable and Satellite Royalty Funds (Sept. 21, 2016). Multigroup Claimants, which had sought to participate in the Allocation and Distribution phases of the proceeding, failed to file a written direct statement in the Allocation Phase and was dismissed from participating in that phase of the proceeding. [Second Reissued] Order Granting in Part Allocation Phase Parties' Motion to Dismiss Multigroup Claimants and Denying Multigroup Claimants' Motion for Sanctions Against Allocation Phase Parties (April 25, 2018).
Back to Citation16. The Judges also held a hearing on June 15, 2016, to address concerns the parties raised about changes to the historical bifurcation of proceedings into a first and a second phase.
Back to Citation17. In this proceeding, the Judges distinguish between “relative values” (to describe the allocation shares), and absolute “fair market values.” Because the royalties at issue in this proceeding are regulated and not derived from any actual market transactions, they do not correspond with absolute dollar royalties that would be generated in a market and thus would not reflect absolute “fair market value.”
Back to Citation18. Because the programs already exist, production costs have been “sunk,” and the copyright owners incur no marginal physical cost in the retransmission of their programs. Thus, the copyright owners would seek only to maximize marginal revenue (but would still consider marginal “opportunity cost” if applicable, e.g., if retransmission would cannibalize their profits from local broadcasting of the identical program or another program owned by the copyright owner). In a more dynamic long-run model, copyright owners might consider even the costs of production to be variable and would then also seek to recover an appropriate portion of production costs from retransmission royalties, thereby maximizing long-term profits (rather than only shorter-term revenue), with respect to retransmission royalties. However, because retransmissions of local broadcasts are “only a very small fraction of a typical CSO's programming budget,” it is unlikely that, in the hypothetical market, owners of copyrights to the retransmitted programs would have the market power to compel CSOs to contribute to the long-run program production costs. See Rebuttal Testimony of Sue Ann R. Hamilton, Trial Ex. 6009, at 14 (Hamilton WRT). Thus, the Judges agree with the pronouncement in prior determinations that the royalties that would be paid in the hypothetical market would essentially be a function only of the CSOs' demand and the copyright owners' costs, and their supply curves (if any) would not be important determinants of the market-based royalty. See, e. g., Distribution of 1998 and 1999 Cable Royalty Funds, Final Order, 69 FR 3606, 3608 (Jan. 26, 2004) (1998-99 Librarian Order).
Back to Citation19. Transaction costs are “pure reductions in the total amount of resources to be distributed that are necessary to achieve and maintain any given allocation.” Richard Watt, Copyright and Economic Theory at 15 (2000).
Back to Citation20. For example, in a hypothetical market, a copyright owner could refuse to grant distant retransmission rights to a local station unless the local station (and the retransmitting CSO) agreed to pay an additional royalty (to cover a share of sunk costs and/or additional profit). The ability of the copyright owner to obtain such value would be a function of his or her market and bargaining power. (Because the costs are sunk, the copyright owner would not rationally walk away from a retransmission agreement as long as some positive royalty would be paid.) Even at the level of the “collective,” a local station in the hypothetical market could use its market/bargaining power to maximize royalty payments, assuming it had the economic incentive to do so.
Back to Citation21. Actually, in the 2004-05 Determination, the Judges recognized that neither a survey approach nor a regression approach (both of which they nonetheless relied upon) identified all aspects of actual market values as opposed to relative values based on market forces. See 2004-05 Distribution Order, 75 FR at 57066, 57068 (noting that a CSO survey “is certainly not a fully equilibrating model of supply and demand in the relevant hypothetical market,” and a regression does not “necessarily identif[y]” all of “the determinants of distant signal prices in a hypothetical free market . . . .”).
Back to Citation22. American Bar Association, Econometrics 1-2 (2005) (ABA Econometrics).
Back to Citation23. In a multiple linear regression, the equation would be expanded, for example, as Y = a + bX + cZ + u, with Z an additional independent variable and c its coefficient.
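For illustration only, a minimal sketch (synthetic data and hypothetical coefficient values, not drawn from the record) of estimating such a multiple linear regression by ordinary least squares:

```python
# Illustrative sketch: estimating the coefficients of Y = a + bX + cZ + u
# by ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)
Z = rng.normal(size=n)
u = rng.normal(scale=0.5, size=n)          # the error term
Y = 1.0 + 2.0 * X + 3.0 * Z + u            # hypothetical "true" a=1, b=2, c=3

# Design matrix with a column of ones for the intercept a.
design = np.column_stack([np.ones(n), X, Z])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
a_hat, b_hat, c_hat = coef
print(a_hat, b_hat, c_hat)                 # estimates close to 1, 2, 3
```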
Back to Citation24. The Judges noted that “Dr. Waldfogel's specification was similar in its choice of independent variables to a regression model utilized by Dr. Gregory Rosston to corroborate the Bortz survey results in the 1998-99 CARP proceeding.” Id. See Report of the Copyright Arbitration Royalty Panel to the Librarian of Congress, Docket No. 2001-8 CARP CD 98-99 (1998-99 CARP Report) at 46 (Oct. 21, 2003).
Back to Citation25. The CARPs were governed by a statutory provision regarding precedent that was nearly identical to the current section 803(a)(1). See 17 U.S.C. 802(c) (2003) (repealed). Consequently, the 1998-99 Librarian Order remains relevant in spite of the intervening statutory amendments abolishing the CARP system and creating the Judges.
Back to Citation26. Legal precedents provide stare decisis effect to “legal issues . . . prescribing the norms that apply and consequences that attach to” facts presented at trial. See A. Larsen, Factual Precedents, 162 U. Pa. L. Rev. 59, 68 (2013).
Back to Citation27. Dr. Erdem referred to the Crawford, Israel, and George analyses as “Waldfogel-type” regressions because they “attempted to estimate the marginal effect of each minute of programming for claimant categories using regression analysis in which the dependent variable is the royalty fees paid by a system and independent variables include minutes of programming for each claimant category and other control variables.” Id.
Back to Citation28. Another SDC witness, Mr. John Sanders (a valuation expert rather than an economic expert), echoed this criticism, as discussed infra. A Program Supplier economic expert witness, Dr. Jeffrey Gray, criticized the regression approach to the extent it included minimum fee-paying CSOs in the analysis, as also discussed infra.
Back to Citation29. In this determination, when the use of a particular Waldfogel-type regression is challenged on one of these broad bases, the Judges address those specific challenges.
Back to Citation30. Professor Crawford does not hypothesize that in this ersatz market the CSO could replace advertising that was included in the local broadcast with advertising targeted to the distant market in which it has been retransmitted. Crawford CWDT ¶ 37. The Judges find this approach reasonable because they did not identify any evidence that would sufficiently support the hypothesis that CSOs would insert replacement advertising into distantly retransmitted stations.
Back to Citation31. Despite his advocacy for a regression approach, and for his particular regression, Professor Crawford acknowledged the possibility “for economists to apply alternative approaches to this problem.” Id.
Back to Citation32. The “natural log” (shorthand for logarithm) is “[a] mathematical function defined for a positive argument; its slope is always positive but with a diminishing slope tending to zero,” and it “is the inverse of the exponential function X = ln(e^X).” J. Stock & M. Watson, Introduction to Econometrics 821 (3d ed. 2015). For purposes of applied econometrics, using the logarithmic functional form, showing percentage changes in the variables, may be more practical.
Back to Citation33. A “control variable” is an independent (explanatory) variable that “is not the object of interest in the study; rather it is a regressor included to hold constant factors that, if neglected, could lead the estimated . . . effect of interest to suffer from omitted variable bias.” Stock & Watson, supra note 32, at 280.
Back to Citation34. By investigating the change (effect) in percentage terms on royalties (the dependent variable) from a change in the number of minutes per program category (the independent variable), Professor Crawford adopted what is known as a “log-level” (a/k/a “log-linear”) functional form. See, e.g., J. Wooldridge, Introductory Econometrics 865 (3d ed. 2006). This approach allowed Professor Crawford to compare the effect of a change in the number of program category minutes to the percent increase in subscriber group royalties of different sizes. For example, a 100-minute increase in Program Supplier minutes for a subscriber group in which 10,000 such minutes are retransmitted represents a 1% increase in such minutes, whereas the same 100-minute increase for a subscriber group in which only 1,000 such minutes are retransmitted would represent a 10% increase. See Crawford CWDT ¶¶ 113-114.
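For illustration only, a minimal sketch of the log-level interpretation described in this footnote; the coefficient value is hypothetical and is not an estimate from any party's regression:

```python
# Illustrative sketch of the log-level ("log-linear") interpretation described
# above: ln(royalties) = a + b * minutes + ...  The coefficient b is hypothetical.
import math

b = 0.0005            # hypothetical coefficient on a program category's minutes
delta_minutes = 100   # the 100-minute increase used in the example above

# Exact implied percentage change in royalties, and the usual 100*b-per-minute approximation.
exact_pct = (math.exp(b * delta_minutes) - 1) * 100
approx_pct = b * delta_minutes * 100
print(f"exact: {exact_pct:.2f}%  approx: {approx_pct:.2f}%")   # ~5.13% vs 5.00%
```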
Back to Citation35. The royalty data on which Dr. Crawford relied came from the Licensing Division of the Copyright Office via the Cable Data Corporation (CDC), and were provided to Dr. Christopher Bennett, another CTV economic witness, who directed the preparation of the data for Professor Crawford's regression analysis. Crawford CWDT ¶ 73. Dr. Bennett also obtained and compiled the data relating to the minutes of different programming types, using raw data obtained from FYI Television. Crawford CWDT ¶¶ 78-79.
Back to Citation36. A “parameter” is “[a] numerical characteristic of a population or a model,” whereas a “coefficient” is “an estimated regression parameter.” D. Rubinfeld, Reference Guide on Multiple Regression, reprinted in Reference Manual on Scientific Evidence 463, 466 (2011). The “true” value of the parameter is “unknown,” but can be estimated, and the coefficient is that estimate. See Peter Kennedy, A Guide to Econometrics 4 (5th ed. 2003).
Back to Citation37. The “standard error” is “[a]n estimate of the standard deviation of the regression error . . . calculated as an average of the squares of the residuals associated with a particular multiple regression analysis.” Rubinfeld, supra note 36, at 467. The standard error measures the probability distribution for the estimates of each parameter in the regression if “the expert continued to collect more and more samples and generated additional estimates . . . .” ABA Econometrics, supra note 22, at 404.
Back to Citation38. Professor Crawford assumed that duplicated programming, whether or not it was blacked out upon retransmission, had zero value because the programming was already available on a local station. Id. ¶¶ 86, 144-145. The Judges find this assumption reasonable because identical network programs that are broadcast locally and retransmitted distantly into the same local market are essentially perfect substitutes. Why are they only essentially perfect, rather than perfect, substitutes? Because they are on different channels, the search cost might differ for viewers. For example, a viewer might find a show on local channel 4, but the same show on a distantly retransmitted station might appear on channel 157, which is not included in the viewer's usual “channel surfing.”
Back to Citation39. He estimated no negative coefficients for the six program categories at issue in this proceeding.
Back to Citation40. Professor Crawford also estimated a negative coefficient for nonduplicated network minutes, but he testified that this was solely an artifact of the regulated rate structure, in which distantly retransmitted networks “only pay royalties of .25 DSE.” 2/28/18 Tr. 1605 (Crawford). The Canadian Claimant Group's expert, Professor George, understood the negative coefficients for a program category to reflect that programs in such a category would reduce the value of a station bundle compared with programs from other program categories. 3/5/18 Tr. 2117-18 (George); see id. at 2031 (“the negative coefficient here is telling us that this is effectively dragging down the value of the Canadian signals. . . . [I]f we could replace the Program Supplier content on Canadian signals in a sort of hypothetical world . . . with Joint Sports or Canadian Claimant programming, the value of the signal would be higher. And so this coefficient, the negative coefficient, isn't really surprising to me in this context . . . .”).
Back to Citation41. R² in a multiple regression model is “the proportion of the total sample variation in the dependent variable [royalties-by-category here] that is explained by the independent variable [here, the number of distant minutes by claimant group].” Wooldridge, supra note 34, at 868. In more practical terms, “R² provides a measure of the overall goodness-of-fit of the multiple regression equation [whose] value ranges from 0 to 1. An R² of 0 means the explanatory variables explain none of the variation of the dependent variable; an R² of 1 means that the explanatory variables explain all of the variation.” ABA Econometrics, supra note 22, at 409. “There is no clear-cut answer [as] to [w]hat level of R², if any, should lead to a conclusion that the model is satisfactory.” Id.
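For illustration only, a minimal sketch of the R² calculation described in this footnote (R² = 1 − residual sum of squares ÷ total sum of squares), using toy numbers rather than any figures from the record:

```python
# Illustrative sketch of the R-squared ("goodness of fit") calculation: toy numbers only.
import numpy as np

y = np.array([3.0, 5.0, 7.0, 10.0, 12.0])       # observed dependent variable
y_hat = np.array([3.5, 4.5, 7.5, 9.0, 12.5])    # fitted values from some regression

ss_res = np.sum((y - y_hat) ** 2)               # unexplained variation
ss_tot = np.sum((y - y.mean()) ** 2)            # total variation
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))                      # close to 1, i.e., most variation explained
```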
Back to Citation42. Professor Crawford calculated an R² of .247 for his duplicate analysis and an R² of .246 for his non-duplicate analysis. Crawford CWDT Appx. B at B-2.
Back to Citation43. In fact, as discussed infra, Dr. Erdem subsequently agreed with Professor Crawford's criticism in this regard, and the SDC moved for leave to correct Dr. Erdem's testimony, but the Judges entered an order denying that motion as out of time.
Back to Citation44. Dr. Erdem modeled several of his additional critiques, discussed infra, by combining the impact of those critiques with the impact of his admittedly erroneous measure of the number of “distant subscriber minutes.” The Judges separately consider those further critiques on their own merits, not only in the interest of completeness, but also to consider whether or not these other criticisms have qualitative value, notwithstanding that their impact cannot be quantified by resort to Dr. Erdem's modeling that bundled those critiques with the admittedly tainted measure of “distant subscriber minutes.”
Back to Citation45. An “indicator variable,” also known as a “dummy variable,” is “[a] variable that takes on only two values, usually 0 and 1, with one value indicating the presence of a characteristic, attribute or effect and the other value indicating absence.” Rubinfeld, supra note 36, at 464.
Back to Citation46. The Judges are also unconvinced that the number of zeros is as striking as Dr. Erdem suggested. For example, the high percent of zeros for Canadian claimants would be consistent with the inevitable absence of any retransmissions of Canadian stations outside the Canadian zone.
Back to Citation47. When two covariates are highly or perfectly correlated with each other, the regression can suffer from a “multicollinearity” problem, whereby the model does not reveal the separate effects of each of the two variables. See Rubinfeld, supra note 36, at 465 (“Multicollinearity [a]rises in multiple regression analysis when two or more variables are highly correlated.”).
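For illustration only, a minimal sketch (synthetic data, not any party's data) of how a high correlation between two regressors is reflected in the standard variance inflation factor, VIF = 1/(1 − r²), which grows as the regression loses the ability to separate the two variables' effects:

```python
# Illustrative sketch: nearly collinear regressors produce a very large VIF.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1

r = np.corrcoef(x1, x2)[0, 1]                # pairwise correlation between the regressors
vif = 1.0 / (1.0 - r ** 2)                   # variance inflation factor (two-regressor case)
print(round(r, 3), round(vif, 1))            # r close to 1, so VIF is very large
```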
Back to Citation48. A “sensitivity analysis” is “[t]he process of checking whether the estimated effects and statistical significance of key explanatory variables are sensitive to inclusion of other explanatory variables, functional form, dropping of potential out-lying observations, or different modes of estimating.” Wooldridge, supra note 34, at 869. The issue of robustness is related to the issue of sensitivity: “The issue of robustness [addresses] whether regression results are sensitive to slight modifications in assumptions.” Rubinfeld, supra note 36, at 43; see also Peter Kennedy, A Guide to Econometrics at 11 (5th ed. 2003) (defining the “robustness” of an estimator as “insensitivity to violations of the assumptions under which the estimator has desirable properties . . . .”). Importantly, because “[e]valuating the robustness of multiple regression results is a complex endeavor . . . there is no agreed-on set of tests for robustness which analysts should apply. In general, it is important to explore the reasons for unusual data points.” ABA Econometrics, supra note 22, at 24; accord Rubinfeld, supra note 36, at 437.
Back to Citation49. The Judges also do not find this to be a potential problem with regard to the use of Professor Crawford's regression to identify relative values, because these two covariates (the number of nonduplicated minutes and the number of distant signals) are control variables used to hold all other potential effects fixed while analyzing program category minutes as the independent variables—and the Judges do not identify in Dr. Erdem's testimony any impact of his claimed multicollinearity on the purported explanatory effect of program categories on royalties.
Back to Citation50. More particularly, Dr. Erdem acknowledged that because Professor Crawford had utilized a “larger sample,” Erdem WRT at 20, n.17, Professor Crawford's regression analysis was not subject to an outlier problem. In fact, Professor Crawford's data included programming minutes using the population of programs carried on all imported distant broadcast signals, rather than using estimates of programming minutes based on sampling the programs carried on distant broadcast signals. Crawford CWDT ¶ 72.
Back to Citation51. Dr. Bennett, who compiled data for Professor Crawford's regression analyses, excluded superstations such as “WGN, WPIX, WSBK, and WWOR, which historically were distributed nationwide by satellite [and] were excluded in distance analyses presented in previous copyright royalty distribution proceedings.” Bennett CWDT ¶ 30, n.15.
Back to Citation52. “Fixed effects” variables are potential effects on the dependent variable (here, categorical royalties) by other factors that are unobserved by the regression. Wooldridge, supra note 34, at 461. (To put the “fixed effects” variables in context, they differ from the “error term,” which reflects “idiosyncratic error,” id., and differ from a control variable in that, as noted supra, a control variable is one that is known and expected to impact the dependent variable (categorical royalties here), but “is not the object of interest in the study” and thus held constant by the econometrician. Stock & Watson, supra note 32, at 280.)
Back to Citation53. The SDC argue that this control caused a new geographic effect that Professor Crawford's regression ignored: “some” stations “could” be local as well as distant within some subscriber groups. SDC PFF ¶ 101 (and record citations therein). However, speculation as to the existence of this possibility and its possible extent is insufficient to invalidate or diminish the evidentiary value of the geographic controls used by Professor Crawford in his regression.
Back to Citation54. This point regarding geographic effects also relates to what Dr. Erdem asserted is an anomaly in a Waldfogel-type regression such as undertaken by Professor Crawford. Dr. Erdem claims that if a certain type of programming (Devotional, for example) were more popular on lower fee paying cable systems, the lower fee status of that system would cause Devotional programming to have a lower coefficient and a lower royalty share under the regression. However, if that cable system decided “this category of programming isn't doing it for us” and thus eliminated Devotional programming, that programming category elimination would anomalously cause the Devotional coefficient to increase, because it would no longer be associated with that lower fee paying cable system. 3/8/18 Tr. 2685-86 (Erdem). The flaw in that argument is two-fold. First, although the Devotional coefficient might increase, there would be fewer minutes of programming to multiply by that coefficient, which would reduce the relative share allocated to Devotional programming under a Waldfogel-type regression. Second, a cable system would distantly retransmit Devotional programming, even if it generated lower royalties relative to other CSOs in other regions, because the CSO is incentivized by increasing or retaining subscribers, not by maximizing royalties compared with other CSOs. Again, the Judges emphasize that the hypothetical buyer is the CSO, not the copyright owner, and the relative value of a program category is based on its economic contribution as part of a bundle to the CSO, not the royalty it might generate in any other context. The royalties flow from such carriage decisions and those decisions are made by each CSO with varying receipts (constrained by the WTP of its subscriber base), averaged through a Waldfogel-type regression.
Back to Citation55. “Bias” is “[a]ny effect . . . tending to produce results that depart systematically (either too high or too low) from the true values. A biased estimator of a parameter [e.g., a regression parameter] differs on average from the true parameter.” Rubinfeld, supra note 36, at 463-64. Somewhat more formally, “bias” reflects “[t]he difference between the expected value of an estimator and the population value that the estimator is supposed to be estimating.” Wooldridge, supra note 34, at 859.
Back to Citation56. Professor Crawford did not support his lengthy exposition (quoted in some detail in the text, supra) with any references to learned treatises or other authorities, nor did Dr. Erdem support his critique in such a manner. The experts for all parties were guilty of this omission throughout their respective testimonies, a problem the Judges find disturbing particularly in the present context, causing dueling esoteric econometric positions sometimes to devolve into ipse dixit disputes.
Back to Citation57. This econometric point regarding the appropriate use of different models is of a piece with the Judges' statement in Web IV that no one economic model is appropriate to explain all market activity. Determination of Royalty Rates and Terms for Ephemeral Recording and Webcasting Digital Performance of Sound Recordings (Web IV), 81 FR 26316, 26334-35 (May 2, 2016).
Back to Citation58. The Judges note that although the shares are not drastically different in the two models, the shares for CTV, which engaged Dr. Crawford, increased more substantially under his nonduplicated analysis, i.e., the approach as to which he expressed uncertainty under cross-examination, than for any other program category. Further, a number of categories saw either a decline or essentially no change in their shares in the nonduplicated model compared to the duplicated model. Compare Crawford CWDT Fig. 17 with Crawford CWDT Fig. 20 (both reproduced supra).
Back to Citation59. The “bias-variance dilemma” refers to the problem that a model that tends toward overfitting (too few observations per variable) will have a low bias in the regression coefficient (i.e., a regression line based on the data will tightly fit the data points) but will suffer from a relatively higher variance (i.e., a relatively higher expected distance of the estimate from its true value). See ABA Econometrics, supra note 22, at 275-76 nn.13 & 14 (“The higher the variance, the less precise is the estimate [i.e.,] the less the data say about the true value of the coefficient. . . . A biased estimate differs systemically from the true value, rather than departing from the true value only because of sampling error.”).
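For illustration only, a minimal sketch (synthetic data, not an analysis of any model in the record) of the overfitting side of this tradeoff: a model with many parameters relative to observations fits the sample tightly but typically predicts new data from the same process less well:

```python
# Illustrative sketch: a parsimonious fit vs. an overfitted fit on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 12)
y = 2 * x + rng.normal(scale=0.2, size=x.size)          # true relationship is linear

x_new = np.linspace(0, 1, 100)                          # fresh data from the same process
y_new = 2 * x_new + rng.normal(scale=0.2, size=x_new.size)

for degree in (1, 9):                                   # parsimonious vs. overfitted polynomial
    coefs = np.polyfit(x, y, degree)
    in_sample = np.mean((np.polyval(coefs, x) - y) ** 2)
    out_sample = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print(degree, round(in_sample, 4), round(out_sample, 4))
# The degree-9 fit necessarily has the smaller in-sample error ("tight fit");
# on new data it will typically do worse, reflecting its higher variance.
```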
Back to Citation60. Moreover, Professor Crawford's testimony was at odds with what the SDC's counsel actually meant by the “one in ten” rule as it relates to overfitting. In the immediately subsequent testimony, the SDC's counsel challenged Professor Crawford's opinion that “the idea behind that is if you don't have ten observations per coefficient, one tends to get imprecise parameter estimates.” Id. The SDC's counsel then disagreed with the expert witness, Professor Crawford, and asserted that “[a]n overfitted model will be able to estimate the parameters [a]nd you might not be able to project it to other data, but will be able to estimate the parameters with great precision.” Id. As the introductory discussion of overfitting (set forth supra) makes clear, the SDC's counsel was correct in his presentation of the overfitting problem, but that is unrelated to the fact that Professor Crawford's testimony demonstrated his unfamiliarity with both the “one-in-ten” heuristic and its alleged econometric importance. (The Judges are not suggesting that a “one-in-ten” heuristic is not utilized by econometricians, but rather note that the record does not establish its existence and its applicability in this proceeding.)
Back to Citation61. The Judges discussed the distinction between an “effects” regression and a “prediction” regression at length, supra, section 0.
Back to Citation62. In its Response to the SDC's PFF, CTV helpfully cited (and reproduced) each numbered paragraph of the SDC PFF, and conspicuously absent from that response is any reference to ¶ 110.
Back to Citation63. “Degrees of freedom” are defined “[i]n multiple regression analysis, [as] the number of observations minus the number of estimated parameters.” Wooldridge, supra note 34, at 837. Accordingly, statisticians understand “degrees of freedom” to be measures of how much can be learned from a regression, with the quality of knowledge improved by increasing the number of observations, reducing the number of estimated parameters, or by some combination of both that serves to widen the difference between the number of observations and parameters. See What are degrees of freedom?, https://support.minitab.com/en-us/minitab/18/help-and-how-to/statistics/basic-statistics/supporting-topics/tests-of-means/what-are-degrees-of-freedom/(last visited June 14, 2018). Dr. Erdem does not define a “phantom degree of freedom” except to describe it as an “economic concept . . . not a statistic.” 3/8/18 Tr. 2711 (Erdem). More particularly, a “phantom degree of freedom” can be generated when the modeler reduces the number of parameters by his or her rejection of other models that would have added a greater number of parameters—nothing more has really been learned but the explicit number of degrees of freedom appears larger, as an artifact (a “phantom”) arising from the econometrician's rejection of models containing additional parameters. See Minitab Blog Editor, Beware of Phantom Degrees of Freedom that Haunt Your Regression Models!, The Minitab Blog (Oct. 29, 2015), http://blog.minitab.com/blog.
Back to Citation64. Although the Judges denied the SDC's Motion to Strike, they indicated in the Crawford Order that they would consider whether the absence of that prior work diminished the weight they might otherwise give to the regression methodology that Professor Crawford presented at the hearing. After considering the entire record, the Judges do not reduce the weight they accord to Dr. Crawford's regression analysis based on this argument.
Back to Citation65. Also, Professor Crawford's use of data from the entire population of Form 3 CSOs provided him with a wealth of data that mitigated a potential problem with regard to potential overfitting arising from sampling that provided too little data relative to the number of parameters. Crawford CWDT ¶ 123.
Back to Citation66. Ms. Hamilton's assertion that CSOs are more interested in satisfying niche signal viewers than in attracting and retaining new subscribers is contrary to assumptions underlying much of the survey analysis of CSO attitudes and valuations. Survey analyses are described in Section III, infra.
Back to Citation67. Ms. Hamilton also criticized Professor Crawford for assuming duplicated network minutes had zero value, because: (1) Some people prefer to watch a program at times other than when aired by a local network affiliate and (2) all programming has a value greater than zero to a CSO. Id. at 13-14. However, Professor Crawford explained in his oral testimony that: (1) He only dropped duplicated network programming that was aired at the same time as the local network programming and (2) Ms. Hamilton's conclusory assertion that all programming has value to a CSO flies in the face of the economic principle that consumers value only one version of perfectly substitutable goods. 2/28/18 Tr. 1426 (Crawford).
Back to Citation68. Given the low value of retransmitted stations, a CSO might rationally emphasize the value of “legacy carriage” as a heuristic (without further analytical effort), assuming, as Ms. Hamilton implies, that eliminating a distantly retransmitted legacy station and its programs is more likely to cause a loss in subscribers than a change in station lineup is likely (without further and costly analytical effort) to increase the number of subscribers.
Back to Citation69. Not only was Dr. Gray unable to replicate Professor Crawford's work, Professor Crawford also challenged Dr. Gray's assertion that he otherwise faithfully reran Professor Crawford's regression. 2/28/18 Tr. 1422 (Crawford) (asserting that Dr. Gray changed a “key element of my regression analysis . . . the subscriber group variation [by] aggregat[ing] that subscriber group level information up to the level of the systems, which means . . . he cannot do fixed effects anymore . . . and he then adds additional variables.”).
Back to Citation70. Professor Crawford testified that after reviewing the rebuttal testimony, he did a “test” in which he claimed to have “dropped the minimum fee systems from the regression analysis and re-ran the regression,” which showed that the implied royalty shares were “very, very close” to his own original results. 2/28/18 Tr. 1424 (Crawford). However, Professor Crawford and CTV did not produce this regression because, as CTV's counsel acknowledged in response to a rebuttal, “this is not a new analysis [and] [w]e are not presenting any numbers here.” 2/28/18 Tr. 18 (John Stewart, CTV counsel).
Back to Citation71. In constructing a hypothetical market, the Judges assume CSO rationality or bounded rationality, at the least. “Bounded rationality” means that economic actors behave rationally (e.g., preferring potential profits to possible losses), but that rationality is inevitably limited by their lack of full information or the resources and ability to obtain full information necessary to make a completely (“unbounded”) rational decision. See C. Sunstein, Behavioral Law & Economics 14-15 (2000).
Back to Citation72. A more homespun analogy is perhaps instructive. Consider a child who has misbehaved and is thus punished by her parents who prohibited her from playing outside, as is her preference. Instead, she is sent by her parents to her room for the evening, where she is permitted to watch television (either the offense is not so great in this example as to warrant a suspension of TV privileges or the child has relatively permissive parents). The child has been compelled to pay a cost (confinement to her room) and precluded from her first choice (no confinement). If watching television is her only (or next best) option given confinement, she will rationally select the programs that provide her with the most utility. The fact that she was compelled to remain in her room would not provide her any incentive to abandon her order of preference as to the programs she would watch, even though she would not watch any of them but for the “tax” imposed by her parents (this analogy assumes that she would not refuse to watch television, as “cutting off her nose to spite her face” is assumed to be an irrational response). The CSO that is “confined” to a market in which the minimum royalty fee is imposed likewise rationally would make the best of a bad situation and retransmit stations based on the capacity of the station to increase CSO utility/profits, that is, assuming marginal non-royalty costs were not prohibitive.
Back to Citation73. An expert economic witness, Professor George, who otherwise approved of Professor Crawford's analysis, notes that the treatment of minimum fee only systems by Professor Crawford generally resulted in a tradeoff between accuracy and bias. Specifically, Professor George testified that lumping together CSOs paying only the minimum fee with other CSOs (as Professor Crawford did) “introduces some uncertainty [and] wider confidence intervals,” but, on the other hand, Dr. Gray introduces “bias” because he has “pull[ed] out systems . . . where their choices are very valid.” 3/5/18 Tr. 2045 (George). Because the Judges have found Professor Crawford's confidence intervals to be relatively narrow, Professor George's testimony in this regard does not affect the Judges' reliance on Professor Crawford's analysis.
Back to Citation74. In addition to performing a regression analysis, Dr. Israel also reviewed data relating to the economics of a different market—that in which large cable networks generally, and TNT and TBS specifically, bought sports and other programming. The Judges discuss that analysis infra.
Back to Citation75. Dr. Israel did not consider the relative value of program categories from the perspective of the hypothetical sellers, which he identified as the stations retransmitting the programs in a bundled signal. 3/12/18 Tr. 3064 (Israel).
Back to Citation76. Thus, Dr. Israel's regression differs from Professor Crawford's regression in that Professor Crawford analyzed the relationship between royalties and program categories at the subscriber group level, whereas Dr. Israel ran the regression at the CSO level, using CDC data that prorated the DSE to reflect the proportion of CSO subscribers who received the distant signal. Israel WDT ¶ 27.
Back to Citation77. Dr. Israel noted two other adjustments he made to his regression that caused it to differ from the Waldfogel regression. First, he eliminated a “Mexican Stations” category because no such category was identified in this proceeding. Israel WDT ¶ 29. Second, Dr. Israel grouped the programs from “low power” stations according to their appropriate program categories, rather than carving out a miscellaneous category for “low power” stations, as had been done in the Waldfogel regression. Israel WDT ¶ 31.
Back to Citation78. The “p-value” provides a measure of statistical significance. It represents “[t]he smallest significance level at which the null hypothesis can be rejected.” Wooldridge, supra note 34, at 867. A statistical significance level of .01, .05 and .1, as used in the table in the accompanying text, is “often referred to inversely as the . . . confidence level,” equivalent to 99%, 95% and 90%, respectively. ABA Econometrics, supra note 22, at 18. Although “[s]ignificance levels of five percent and one percent are generally used by statisticians in testing hypotheses . . . this does not mean that only results significant at the five percent level should be presented or considered [because] [l]ess significant results may be suggestive, even if not probative, and suggestive evidence is certainly worth something.” Fisher, 80 Colum. L. Rev., supra at 717-718. Thus, “[i]n multiple regressions, one should never eliminate a variable that there is a firm foundation for including, just because its estimated coefficient happens not to be significant in a particular sample.” Id. However, care must be taken not to confuse the “significance level” with the “preponderance of the evidence” standard, because “the significance level tells us only the probability of obtaining the measured coefficient if the true value is zero,” so one cannot “subtract[] the significance level from one hundred percent” to determine whether a hypothesis is more or less likely to be correct. Id. See also D. Rubinfeld, Econometrics in the Courtroom, 85 Colum. L. Rev. 1048, 1050 (1985) (“[I]f significance levels are to be used, it is inappropriate to set a fixed statistical standard irrespective of the substantive nature of the litigation.”); D. McCloskey & S. Ziliak, The Standard Error of Regressions, 34 J. Econ. Lit. 97, 98, 101 (1996) (“statistically significant” means neither “economically significant” nor “significant [in] everyday usage [where] ‘significant’ means ‘of practical importance’ . . . .”).
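For illustration only, a minimal sketch (hypothetical coefficient and standard error, and a normal approximation rather than any test used by the parties) of how a two-sided p-value is computed and how significance levels correspond to the confidence levels noted above:

```python
# Illustrative sketch: a two-sided p-value under a normal approximation,
# and the significance levels / confidence levels mentioned above.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

coef = 0.042          # hypothetical estimated coefficient
std_err = 0.017       # hypothetical standard error of that estimate

z = coef / std_err                            # test statistic for the null hypothesis coef = 0
p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
print(round(z, 2), round(p_value, 4))

for alpha in (0.01, 0.05, 0.10):              # significance levels and their confidence levels
    verdict = "reject at this level" if p_value < alpha else "do not reject"
    print(alpha, f"{(1 - alpha):.0%}", verdict)
```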
Back to Citation79. Dr. Israel testified that he did run a test to determine whether his regression results changed depending upon the time period evaluated and that he found that his results were stable over time. Israel WDT App. C-1. However, he did not link that result with any sufficient assertion explaining how or why the Judges might apply his findings for each year.
Back to Citation80. The Judges emphasize that Dr. Israel's confidence intervals are problematic especially because they are wide relative to those in Professor Crawford's regression. The Judges are not finding that wide confidence intervals, standing alone, automatically serve to discredit a regression analysis. See generally Fisher, 80 Colum. L. Rev., at 716 (even when the standard errors are relatively large and the confidence intervals relatively wide, that “does not mean that the true coefficient is likely to be any part of that range,” but rather “the estimated coefficient” remains “[t]he single most probable figure . . . .”) (emphasis added).
Back to Citation81. Dr. Gray stated that he used a “Box-Cox” test to confirm that a percentage-based relationship was a preferred specification over an assumed linear relation and better fit the data. However, Dr. Gray did not support that statement with a citation to his work or to literature that would be supportive. Gray WRT ¶ 30 n. 10. When a rebuttal expert purports to do a deeper dive into a model than the expert whose work he or she is criticizing, support for that deeper analysis should be provided in the written rebuttal testimony. However, Professor Crawford also undertook (and provided a succinct explanation of) a Box-Cox test for his regression analysis and found the results “strongly favoring the log-linear over the linear model.” Crawford CWDT ¶ 115.
Back to Citation82. For a simpler example, consider a restaurant patron offered a three-flavor ice cream dessert. Assume that, for that patron, chocolate adds a utility measure (“utils” in econo-speak) of 5, vanilla adds a util measure of 4, strawberry adds a util measure of 3, and kiwi adds a util measure of 2. A three-flavor combination of chocolate, vanilla and strawberry has a total util value of 12 (5 + 4 + 3). If kiwi is substituted for strawberry, the total util value is now only 11 (5 + 4 + 2). Thus, kiwi, relative to strawberry in this combination, has a value in utils of −1 (reducing the value of the dessert from 12 to 11)—even though its absolute value in utils is +2. This negative value reflects the opportunity cost or relative value of substituting kiwi for strawberry in the bundle, but not the absolute market value of kiwi as an unbundled ice cream flavor. Applying this example to a market, the coefficient represents the value in a market populated by such bundles, not a value in a market without bundles. Clearly, how the “hypothetical market” is understood in terms of bundled programs therefore determines whether the negative coefficients make sense and also affects the extent to which the coefficients are of assistance in allocating the royalties.
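For illustration only, the arithmetic of this example expressed as a short sketch (the util values are those assumed in the footnote itself):

```python
# Illustrative sketch of the ice-cream example above: the relative (incremental)
# value of a flavor within a three-flavor bundle can be negative even though its
# standalone value is positive.
utils = {"chocolate": 5, "vanilla": 4, "strawberry": 3, "kiwi": 2}

original_bundle = ("chocolate", "vanilla", "strawberry")
swapped_bundle = ("chocolate", "vanilla", "kiwi")

value_original = sum(utils[f] for f in original_bundle)   # 12
value_swapped = sum(utils[f] for f in swapped_bundle)     # 11

relative_value_of_kiwi = value_swapped - value_original   # -1, despite kiwi's +2 standalone value
print(value_original, value_swapped, relative_value_of_kiwi)
```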
Back to Citation83. Dr. Israel's explanation of the reason for a negative coefficient is substantively similar to Professor George's explanation of negative coefficients, discussed infra, as well as to Professor Crawford's explanation of negative coefficients for duplicative network programming, as discussed supra.
Back to Citation84. However, because the Judges find that only Dr. Crawford's regression is sufficiently credible and because it does not contain negative coefficients for the categories of interest, the conundrum of negative coefficients does not affect the Judges' reliance on regression analysis in this determination.
Back to Citation85. Royalty distribution parties have proposed fee generation valuation methodologies in the past, and the Judges and their predecessors have generally declined to accept them as appropriate measures of overall relative value. See, e.g., 2000-03 Distribution Order, 75 FR at 26800-01. In that order, the Judges noted that the CRT had criticized the fee generation approach, but then resorted to fee generation reasoning in excluding PTV from a distribution from the 3.75% Fund. Id. at 26803. The Judges later reaffirmed their rejection of fee generation valuation in the 2004-05 distribution proceeding, noting that the fees cable systems pay are statutorily determined and do not necessarily reflect relative value. See 2004-05 Distribution Order, 75 FR at 57072.
Back to Citation86. Though making a point about relative value, Mr. Sanders acknowledged that substituted programming inserts on the WGNA national feed are not compensable in this proceeding because they do not constitute retransmitted local programming. Sanders WRT at 13.
Back to Citation87. Ms. Hamilton did not have direct knowledge of the existence of this Tribune Co. policy after 2007 when she left her position with Charter, a CSO. Rather, she opined that such tying would have likely been a factor thereafter “primarily due to legacy carriage considerations.” Hamilton WDT at 7.
Back to Citation88. Of course, Ms. Hamilton's tying-based argument would be equally unavailing as against either the Crawford or George regression analyses.
Back to Citation89. An “influential observation,” also known as an “influential data point,” is defined as “[a] data point whose addition to a regression sample causes one or more estimated regression parameters to change substantially.” Rubinfeld, supra note 36, at 465. An “outlier,” by contrast, is “[a] data point that is more than some appropriate distance from a regression line that is estimated using all the other data points in the sample.” Id. at 466 (emphasis added). Although some authorities equate all “influential observations” with “outliers,” Dr. Rubinfeld's more careful distinction makes it clear that an “influential” observation or data point is not to be disregarded unless it is outside an “appropriate distance” from the regression line. The experts' dueling positions (with citations to other outside authority) on whether the “influential observations” identified by Dr. Erdem in Dr. Israel's regression are “outliers”—and thus must be ignored in the regression—are discussed infra.
Back to Citation90. The economic expert witness for the CCG, Professor Lisa George, weighed in with a defense of Dr. Israel's regression. She asserted that Dr. Erdem's argument that Dr. Israel's regression technique produced “unstable” results reflects a fundamental misunderstanding of the regression process. George WRT at 6-7 (“[V]ariables that do not affect royalty payments are not needed, since they typically will just worsen precision of the estimates. Changes to Dr. Israel's regression advocated by Settling Devotional Claimants run counter to the goals of causal inference, tending to increase bias and reduce precision.”).
Back to Citation91. Alternatively stated, this exercise is not analogous to Olympic competition, where the difference in rankings—gold, silver and bronze medals—makes all the difference. Here, copyright owners in any claimant category would prefer more gold (royalty money) to less. Therefore, any analysis that assumes that value attaches to being ranked more highly would be absurd.
Back to Citation92. In her regression, Professor George used signal carriage and royalty data provided by cable systems on Form 3 Statements of Account as provided by CDC. George CWDT at 51-54; Written Direct Testimony of Jonda Martin, Trial Ex. 4009, at 23 (Martin WDT). Professor George obtained program categorization information that was assembled by Danielle Boudreau from program content logs filed with the Canadian Radio-television and Telecommunications Commission (CRTC) by Canadian broadcasters. George CWDT at 53; Corrected Written Direct Testimony of Danielle Boudreau, Trial Ex. 4001, at 3 (Boudreau CWDT).
Back to Citation93. And, to state the obvious, if market prices were available, no analysis of any sort would be necessary.
Back to Citation94. The “intercept” is defined as “the value of the y variable when the x variable is zero,” and, accordingly, it is “the parameter in a multiple linear regression model that gives the expected value of the dependent variable when all the independent variables equal zero.” Wooldridge, supra note 34, at 864. The intercept parameter “is rarely central” to a regression analysis. See id. at 25.
Back to Citation95. Professor George had originally made her calculations for the entire 2010-2013 period without breaking down her estimates by year. After she reviewed data contained in Professor Crawford's CWDT, Professor George was able to update her estimates and express them on an annual basis. George CAWDT at 2.
Back to Citation96. “Omitted variable bias” can arise “when a relevant variable is omitted from the regression.” Wooldridge, supra note 34, at 866. More particularly, omitted variable bias arises “because a variable that is a determinant of Y [the dependent variable] and is correlated with a regressor [independent variable] has been omitted from the regression.” Stock & Watson, supra note 32, at 822. The cumulative effect of any excluded variables “shows up as a random error term in the regression model. . . . An important assumption in multiple regression analysis is that the error term and each of the explanatory variables are independent of each other.” ABA Econometrics, supra note 22, at 10 n.21. Thus, Dr. Israel's criticism is that the “noise” in Professor George's regression reflects a bias arising from her failure to include important data from each programming category. Id. at 160.
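For illustration only, a minimal sketch (synthetic data, not a re-analysis of any regression in the record) of the omitted variable bias described in this footnote: when a variable that affects the dependent variable and is correlated with an included regressor is left out, the included regressor's estimated coefficient absorbs part of the omitted effect:

```python
# Illustrative sketch: omitted variable bias on synthetic data. x2 affects y and is
# correlated with x1; omitting x2 pushes the estimated coefficient on x1 away from
# its true value of 2.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)           # correlated with x1
y = 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)

full = np.column_stack([np.ones(n), x1, x2])
short = np.column_stack([np.ones(n), x1])    # x2 omitted

b_full, *_ = np.linalg.lstsq(full, y, rcond=None)
b_short, *_ = np.linalg.lstsq(short, y, rcond=None)
print(round(b_full[1], 2))    # close to the true value, 2
print(round(b_short[1], 2))   # biased upward, roughly 2 + 1.5 * 0.6 = 2.9
```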
Back to Citation97. Indeed, Professor George twice referred to the value of the program categories in the context of the “value of the signal” containing a bundle of programs offered to a CSO. 3/5/18 Tr. 2031-32 (George).
Back to Citation98. However, this issue was also raised by Dr. Erdem and, in response, Professor George provided a more compelling defense, as discussed infra.
Back to Citation99. “An expert's expectation or contention that a particular independent variable does not have a correlation with a particular dependent variable is called a null hypothesis, because the expected outcome of the analysis would show the absence of a correlation. . . . Often, the null hypothesis is stated in terms of a particular regression coefficient equal to zero.” ABA Econometrics, supra note 22, at 17 (emphasis added). See also Rubinfeld, 85 Colum. L. Rev. at 1054 n.20 (“If the evidence is not sufficiently strong, the null hypothesis is sometimes presumed to be correct, but a more accurate description would simply say that the evidence was not sufficiently strong to allow for its rejection.”).
Back to Citation100. The full title of the Bortz Survey is “Cable Operator Valuation of Distant Signal Non-Network Programming: 2010-13.”
Back to Citation101. Program Suppliers also advocated using viewing statistics as the optimal measure of relative market value of the participating program category groups. See infra, section IV.
Back to Citation102. Notwithstanding his survey results, Mr. Horowitz opined that “the Horowitz Survey is not a substitute for behavioral data such as viewing.” Corrected Written Direct Testimony of Howard Horowitz, Trial Ex. 6012, at 3 (Horowitz CWDT).
Back to Citation103. Bortz retained THA Research to conduct the 2010-13 telephone surveys. Id. at 19. Criticisms of the Bortz Survey focused on construct and content; no party criticized the Bortz selection of THA Research.
Back to Citation104. To avoid any criticism that there was a delay in conducting an annual survey that could result in “recall bias,” Bortz conducted all but the 2010 survey beginning in the summer following the royalty year at issue. Bortz conducted the 2010 survey in December 2011. See Bortz Survey at A-11.
Back to Citation105. Other criticisms noted by the triers of fact and opposing parties included, e.g., breaking up the survey and completing it through multiple callbacks, and asking for critical conclusions in a short survey of approximately ten minutes' length.
Back to Citation106. Form 3 cable systems are the largest systems by gross receipts and account for over 98% of section 111 royalty deposits. Id. at 10.
Back to Citation107. The relative value question read: “Assume you [system] spent a fixed dollar amount in [year] to acquire all the non-network programming actually broadcast during [year] by the stations . . . listed. What percentage, if any, of the fixed dollar amount would your system have spent for each category of programming?” Id. at 18.
Back to Citation108. Only programming that airs simultaneously on WGN-Chicago (the local feed) and WGNA (the satellite feed) is compensable under the section 111 license.
Back to Citation109. Questioners offered to send respondents a guide to compensable WGNA programming and instructed them that they could call back if they needed more time to consider the compensable program list. Bortz Survey at 30.
Back to Citation110. McLaughlin and Blackburn augmented the 2004-05 Bortz survey results by inserting stations whose only distant signal was PTV, using the same response rates reported by Bortz. See 3/7/18 Tr. at 2457-59 (McLaughlin). They concluded that response bias depressed the PTV values claimed in the Bortz Survey. See Written Rebuttal Testimony of Linda McLaughlin and David Blackburn, Trial Ex. 3002, at 4 (McLaughlin/Blackburn WRT).
Back to Citation111. See, e.g., Corrected Written Rebuttal Testimony of Frederick Conrad, Trial Ex. 4003, at 7-8 (Conrad CWRT) (assuming stations with Canadian-only distant signals would assign 100% relative value to CCG programming creates response bias).
Back to Citation112. The Bortz Survey measured all programming on Canadian signals as one category. See Bortz Survey at 46-47. The CCG concedes that some of the programming on Canadian signals is compensable in other categories, such as Devotional or Program Suppliers.
Back to Citation113. Mr. Trautman criticized the Horowitz Survey results that valued Program Suppliers and Devotional programming higher than the Bortz Survey. He contended Horowitz failed to account for the amount of non-compensable programming on WGNA, i.e., the substituted syndicated or devotional programs WGNA adds to its lineup when it is not simultaneously retransmitting WGN programming. Trautman WRT ¶ 1. Mr. Trautman argued that Horowitz further inflated Program Suppliers, because it attributed all programming in the allegedly inflated “Other Sports” category to Program Suppliers. Id. ¶ 2.
Back to Citation114. Horowitz employed Global Marketing Research Services, Inc. to conduct the telephone surveys. Horowitz WDT at 8.
Back to Citation115. In the 2004-05 Bortz Survey, the warmup questions focused respondents on subscriber acquisition and retention by asking which categories were most “popular” with subscribers. See Bortz Survey at 39. Responding to an observation by the Judges that acquisition and retention of subscribers might be too narrow a notion of value, Bortz replaced the popularity question with one intended to establish distant signals' importance to the respondent's system.
Back to Citation116. See Horowitz WDT at 17. Horowitz surveyed a sample of 300 systems, inquiring about factors influencing carriage decisions. The response categories were (1) programming popular and important to current and potential subscribers, (2) programming important to the cable system, and (3) other. Respondents could choose multiple factors.
Back to Citation117. The numbers for Program Suppliers (PS) are derived by adding responses for syndicated series and movies. “Other Sports” is left as a separately valued type of programming because the Horowitz Survey did not and could not specify whether non-JSC sports programming should be categorized as Program Suppliers or CTV.
Back to Citation118. The report of results of the Canadian Survey included Emeritus Professor Gary Ford as an author, but only Professor Ringold signed the report; consequently, for simplicity, the Judges refer to the report as Ringold WDT. Professors Ford and Ringold had conducted similar surveys since 1996 and Professor Ringold presented a longitudinal study showing the results from 1996 through 2013. See Trial Ex. 4011. A longitudinal study analyzes data collected using the same methodology to ask the same population of respondents the same question(s) over time. Such studies can prove useful in evaluating the stability and/or robustness of an estimate. Ringold WDT at 4-5.
Back to Citation119. Ford and Ringold referred to their survey, conducted by Target Research Group, as “double blind” in that neither the interviewers nor the respondents were aware of the sponsor of the survey. Written Direct Testimony of Gary Ford and Debra Ringold, Trial Ex. 4010 at 7 (Ford/Ringold WDT).
Back to Citation120. Drs. Ringold and Ford used responses relating to superstations and independent stations both to disguise the survey sponsor and as comparators to substantiate their results.
Back to Citation121. The values for the CCG category are the aggregate of relative values CSOs assigned to Canadian-produced news, public affairs, religious, and documentary programs (both network and station-produced); Canadian-produced sports programming; Canadian-produced series, movies, arts and variety shows, and specials; and Canadian-produced children's programming.
Back to Citation122. The table recreated here omits the column headed “3.75% Fund.” The Judges consider the 3.75% Fund separately.
Back to Citation123. Professor Steckel criticized telephone questioning, contending that the issues were too complex for the respondents to weigh and analyze over the telephone. See Written Direct Testimony of Joel Steckel, Trial Ex. 6014, at 36-37 (Steckel WDT). Telephone surveys have been the norm for allocation proceedings.
Back to Citation124. Professor Conrad criticized the Bortz and Horowitz Surveys on four bases: Sample size, i.e., the number of participants that actually carry a distant Canadian signal; assigning a value of zero to Canadian programming for systems that do not have the option to carry Canadian signals; incompatibility of programming categories; and flaws in either survey design or execution. See Written Rebuttal Testimony of Frederick Conrad, Trial Ex. 4003, passim (Conrad WRT).
Back to Citation125. Professor Conrad criticized both surveys for lacking independent pre-testing to detect confusion or anomalies. 3/5/18 Tr. at 1969-70 (Conrad).
Back to Citation126. Ms. Hamilton also testified that distant signal programming was an insignificant consideration in cable systems' programming decisions. 3/19/18 Tr. at 4306.
Back to Citation127. Professor Steckel asserted two standards to which a survey must conform: Reliability, i.e., the ability to replicate the survey's results, and validity, i.e., the conclusion that the survey measures what it purports to measure. See 3/13/18 Tr. at 3269 (Steckel). He opined that neither the Bortz Survey nor the Horowitz Survey measures what it purports to measure or what the statute requires the Judges to determine. He concluded that both, therefore, lack construct validity. See Steckel WRT at 21.
Back to Citation128. Professor Mathiowetz did cite multiple royalty allocation decisions that relied on Bortz surveys. See Written Rebuttal Testimony of Nancy Mathiowetz, Trial Ex. 1007, at 5-6 (Mathiowetz WRT). She did not contend those decisions were an endorsement of the constant sum methodology; rather she cited those decisions as support for the conclusion that the Bortz Survey addresses the relevant question of interest in these proceedings. Id.
Back to Citation129. Given the task of choosing the lesser of two evils, Professor Steckel concluded that the Horowitz Survey was a slightly better instrument because, inter alia, it included PTV and CCG stations and programming, it broke out “other sports” categories from those represented by the JSC, and its interviewers did a better job of reminding respondents of the program categories and stations at issue. Steckel WDT at 38.
Back to Citation130. PTV and, to a lesser extent, CCG signals are exceptions to this bundling phenomenon.
Back to Citation131. Satisfice means “to choose or adopt the first satisfactory option that one comes across.” See www.dictionary.com, last visited 07/19/2018.
Back to Citation132. See discussion at section III.D.2.b.
Back to Citation133. For example, Mr. Trautman acknowledged that the Bortz Survey did not differentiate by category the programming transmitted on Canadian signals, even though some of the programs should be compensated not in the CCG group, but in other categories. 2/20/18 Tr. at 629 (Trautman).
Back to Citation134. Professor Mathiowetz also opined that the Horowitz Survey was not a valid constant sum survey because some of the Horowitz respondents, the PTV-only and CCG-only systems, could be asked about only one category of programming and thus were not asked to allocate a sum of percentages at all. 2/20/18 Tr. at 511 (Mathiowetz). While correct as to PTV-only systems, this opinion disregards the fact that Canadian stations transmit both CCG-compensable programs and, for example, Devotional programs compensable from the SDC royalty funds.
Back to Citation135. Mr. Trautman further argued that cable systems retransmit a “substantial amount” of other sports programming, most of which is non-compensable under the section 111 license. Trautman WRT at 16. He contended that, notwithstanding the examples of rare compensable sports broadcasts, CSO respondents likely confused the volume of non-compensable sports programs as belonging in the unfamiliar Other Sports category inserted by Mr. Horowitz. Id.
Back to Citation136. Question 3 of the Bortz Survey asked respondents as a warmup question to rank how “expensive” it would be to acquire the programming in each category if the system had to acquire the programming “in the marketplace.” See, e.g., Bortz Survey at B-4.
Back to Citation137. See supra note 110 and accompanying text.
Back to Citation138. See infra section VI. McLaughlin and Blackburn used the Judges' 2004-05 distribution determination as their starting point. See Testimony of Linda McLaughlin & David Blackburn, Trial Ex. 3012 at 9 (McLaughlin/Blackburn WDT).
Back to Citation139. PTV does not participate in the 3.75% Fund or the Syndex Fund. McLaughlin and Blackburn were careful, therefore, to relate their valuations to the Basic Fund. See McLaughlin/Blackburn WDT, passim.
Back to Citation140. Mr. Trautman made the further adjustment by reference to the actual Horowitz Survey responses from PTV-only cable systems. See 2/20/18 Tr. at 525-26 (Trautman).
Back to Citation141. According to the Bortz Survey, approximately three-fourths of cable systems retransmitting distant signals retransmitted WGNA. Bortz Survey at 25.
Back to Citation142. For purposes of the royalty years at issue in this proceeding, WGNA as a superstation cast a long shadow on valuation methodologies. Following the period at issue in the present proceeding, WGNA began the process of converting to a cable network, which would, in time, remove it from consideration in royalty allocation proceedings.
Back to Citation143. Subscribers are a major source of revenue for cable systems; consequently, CSOs focus on retention of subscribers. In some instances, a CSO might relicense a signal with less viewed, niche programming to avoid losing a subscriber to a competing system. See 3/19/18 Tr. at 4297-99 (Hamilton).
Back to Citation144. In the 1998-99 CARP determination, the Panel concluded that the Bortz Survey was the most “robust” and “powerfully and reliably predictive” model for determining relative value . . . for all categories except PTV, Canadian Programming, and Music Claimants. Report of the Copyright Arbitration Royalty Panel to the Librarian of Congress, Docket No. 2001-8 CARP CD 98-99, at 31 (Oct. 21, 2003) (1998-99 CARP Report); see also 1998-99 Librarian Order, 69 FR at 3609. For PTV, the Panel acknowledged the inherent bias against PTV in the Bortz Survey, but found the changed circumstances and fee-generation evidence proffered by PTV to be unpersuasive and declined to increase the PTV allocation percentage from the 1990-92 determination. Id. at 3616.
Back to Citation145. For Canadian Claimants, the CARP had no Bortz results so it used a fee-generation methodology. Id. at 3618. In the 2000-03 determination involving only the Canadian Claimants, the Judges distinguished the precedential mandate of a fee-generation methodology and applicable changed circumstances evidence. See 2000-03 Distribution Order, 75 FR at 26807.
Back to Citation146. Further, the categories endorsed by the Judges in the present proceeding have not changed for decades, giving CSOs time to acquaint themselves fully with the programming comprising each agreed category, whether or not they routinely agree with the programming characterizations at issue in these proceedings. The Judges do not gainsay that there have been changes in CSO personnel over the years, but it is nonetheless not unreasonable to think that even with changes in personnel, the CSOs have maintained an institutional memory of the requirements of these proceedings.
Back to Citation147. For example, for 2010, eliminating the relative value of Other Sports from the 100% constant sum leaves an allocation of 93.23% of the total assessed value. Recasting that 93.23% as the whole, the 3.78% relative value assigned to Devotional programming in 2010 would translate to 3.52% (3.78% of 93.23 = 3.52).
Back to Citation148. Dr. Gray also performed an analysis of the relative “volume” (i.e., total number of minutes) of the different categories of programming, which he described as “useful” but not “sufficient” information concerning the relative value of programming. See Corrected Amended Direct Testimony of Jeffrey S. Gray, Ph.D., Trial Ex. 6036, ¶¶ 17-18, 32-34 (Gray CAWDT); 3/14/18 Tr. at 3696-97 (Gray); 3/15/18 Tr. at 3834-36 (Gray). As Dr. Gray himself conceded that his volume analysis was an insufficient basis for determining relative value of programming, the Judges will not rely on it. See also Written Rebuttal Testimony of Dr. Mark A. Israel, Trial Ex. 1087, ¶ 38 (Israel WRT) (“measures of volume do not translate directly into value”). The Judges need not consider, therefore, criticisms concerning the accuracy of Dr. Gray's volume analysis. See Analysis of Written Direct Testimony of Jeffrey S. Gray, Ph.D., Trial Ex. 1089, at ¶¶ 11-17 (Wecker Report); 2/22/18 Tr. at 1169 (Harvey); Written Rebuttal Testimony of Christopher J. Bennett, Trial Ex. 2007, ¶¶ 36-43 (Bennett WRT); 3/1/18 Tr. at 1861-64 (Bennett).
Back to Citation149. CDC data is a compilation of information provided by cable systems to the Copyright Office on their semi-annual statements of account (SOAs). It includes information about the number of distant signals that each cable system carries, the number of subscribers receiving each distant signal, and the amount of royalties paid. See Gray CAWDT ¶ 28; Martin WDT at 5. From this information, CDC provided, inter alia, an analysis of which counties fall within a television station's local service area. See Martin WDT at 5-6.
Back to Citation150. Gracenote (formerly Tribune) provides a compilation of information about each television program airing throughout each day, including the station on which the program aired; whether the program was local, network or syndicated; the program and episode titles; and the type of program. See Gray CAWDT ¶ 27; 3/14/18 Tr. at 3686-87 (Gray).
Back to Citation151. The CRTC program logs include station call signs, program title, actual starting and ending time, and country of origin for each program broadcast on Canadian television stations. Dr. Gray used them to determine the country of origin of programs broadcast on Canadian stations, since U.S.-origin programs are excluded from the Canadian Claimant category. See Gray CAWDT ¶ 29.
Back to Citation152. A “people meter” is a device attached to a television set that passively detects the channel to which the television is tuned, and includes a means for each household member to identify him- or herself as the person watching the TV. The NPM database is derived from a national sample of households equipped with people meters and is used for measuring national broadcast and cable networks. See Direct Testimony of Paul B. Lindstrom, Trial Ex. 6017, at 4 (Lindstrom WDT); 3/14/18 Tr. at 3496-97, 3505-07 (Lindstrom).
Back to Citation153. The other independent variables include the time of day that the program aired and the program type. See 3/14/18 Tr. at 3692 (Gray).
Back to Citation154. Dr. Erdem, an economist testifying on behalf of the SDC, conceded that, in past proceedings, he had found viewership to be a reasonable basis for apportioning royalties among claimants within the same program category. See 3/8/18 Tr. at 2791-93 (Erdem); accord Amended Written Direct Testimony of John S. Sanders, Trial Ex. 5001, at 22.
Back to Citation155. See supra, section IV.A.
Back to Citation156. The hearing had been scheduled to begin on February 5. The Judges granted Program Suppliers' motion to delay the start of the hearing until February 14 for reasons unrelated to Dr. Gray's Third Errata. See Order Continuing Hearing and Permitting Amended Written Rebuttal Statements, Denying Other Motions, and Reserving Ruling on Other Requests (Jan. 26, 2018).
Back to Citation157. Mr. Lindstrom retired in June 2017 after nearly 40 years at Nielsen. See 3/14/18 Tr. at 3495-96 (Lindstrom). Prior to his retirement, Mr. Lindstrom was a Senior Vice President in charge of custom research and custom analysis for Nielsen's media business. See id. at 3496. He testified in this proceeding with Nielsen's “full cooperation and support.” Id. at 3495.
Back to Citation158. Program Suppliers also sought to cast doubt on the experience and expertise of the witnesses who criticized Dr. Gray's use of the NPM database for his viewing study. See, e.g., PS Reply PFF ¶ 66 (“Ms. Shagrin testified that she had never worked on custom analysis projects while at Nielsen, and that she did not understand how Dr. Gray used Nielsen's custom analysis in his methodology.”).
Back to Citation159. “A sampling frame is an enumeration of the items from which a sample is selected. Ideally, the sampling frame will be identical to—and therefore representative of—the target population that one seeks to study.” Bennett WRT at ¶ 21.
Back to Citation160. Nielsen's sample is a tiered sample of geographic areas, see Erdem WRT at 25; see also 3/14/18 Tr. at 3507, 3539-40 (Lindstrom), unlike Dr. Gray's sample, which was stratified by the number of distant subscribers. See 3/14/18 Tr. at 3686 (Gray).
Back to Citation161. Dr. Gray testified about a number of specific instances in which his categorization differed from Dr. Bennett's, and, on further review, he stood by his categorization. However, he did not perform a comprehensive review. See 3/14/18 Tr. at 3721-23 (Gray).
Back to Citation162. Prior to the cases to determine allocation and distribution of 2010-13 cable and satellite royalties, the Judges and their predecessors referred to the process of dividing royalties among program categories as “Phase I,” and the process of dividing royalties allocated to a program category among the claimants within that category as “Phase II.” When the Judges decided to conduct both processes simultaneously for the 2010-13 cable and satellite royalties, they chose to refer to them as the “allocation phase” and “distribution phase,” respectively, to avoid any expectation that the processes would be carried out sequentially.
Back to Citation163. Then, as now, the Program Suppliers' principal witness regarding the analysis of Nielsen viewership data was Dr. Gray.
Back to Citation164. The earlier provision, former section 802(c) of the Copyright Act, stated that CARPs “shall act on the basis of . . . prior decisions of the Copyright Royalty Tribunal, prior copyright arbitration panel determinations, and rulings of the Librarian . . . .”
Back to Citation165. The decision whether or not to accept a methodology for determining relative market value is factually-dependent, so it is a misnomer to describe a previous decision declining to rely on viewership as “precedent”—i.e., controlling under the principle of stare decisis. Nevertheless, it is a “prior determination” “on the basis of ” which Congress has directed the Judges to act (along with the written record and other items enumerated in the statute). See 17 U.S.C. 803(a)(1).
Back to Citation166. No party has alleged changed circumstances that would bear on the Judges' reliance, vel non, on viewing data.
Back to Citation167. Broadcasters' incentive to attract viewers is driven by advertising-revenue considerations rather than by subscriber acquisition and retention considerations.
Back to Citation168. See also discussion of Dr. Israel's “cable content analysis,” supra, section V.
Back to Citation169. See sections 0-0.
Back to Citation170. SDC did not challenge the relative share indicated by the Bortz results. 1998-99 Librarian Order, 69 FR at 3609 n.15.
Back to Citation171. A “subscriber instance” as used in these proceedings relating to distant signal retransmission means one subscriber having access to one distant signal.
Back to Citation172. The 2000-03 Distribution Order was a “Phase I” or category allocation determination.
Back to Citation173. Ms. McLaughlin estimated that the average number of omitted stations over the period 2010-13 was 16 per year. See 3/5/18 Tr. at 2457 (McLaughlin).
Back to Citation174. Ms. McLaughlin also assumed that CCG-only systems would assign a 100% relative value to CCG. 2/20/18 Tr. at 719-20 (Mathiowetz); 3/6/18 Tr. at 2291 (Frankel). In fact, not all Canadian programming falls within the CCG category for royalty purposes. CCG conceded that, for example, some programming broadcast on Canadian stations should rightfully be attributed to the SDC. 3/7/18 Tr. at 2675 (Erdem); Boudreau CWDT at 3-4, 10. The volume of mischaracterized programming is not great, but, as Professor Mathiowetz pointed out, a change in the relative allocation to any one category necessarily changes the allocation to other categories. 2/20/18 Tr. at 701 (Mathiowetz).
Back to Citation175. The five parties eligible to share the royalties allocated to the 3.75% Fund (CCG, CTV, JSC, Program Suppliers, and the SDC) agree that, to reflect PTV's nonparticipation in the 3.75% Fund, the Judges must adjust each eligible group's share of that fund in proportion to its respective share of the Basic Fund. See 2004-05 Distribution Order, 75 FR at 57071; Declaration of Howard Horowitz ¶ 4 (Jul. 13, 2018); Declaration of Jeffrey S. Gray ¶ 8 (Jul. 16, 2018); see also JSC Initial Brief at 3-4. The Judges apply this approach in allocating shares in the 3.75% Fund in the present proceeding.
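The arithmetic of that agreed adjustment can be sketched as follows; the shares in the sketch are hypothetical placeholders, not the shares determined in this proceeding.

```python
# Sketch of the agreed 3.75% Fund adjustment: each eligible group's
# 3.75% Fund share is its Basic Fund share rescaled over the Basic Fund
# shares of the eligible groups only (PTV does not participate).

def allocate_375_fund(basic_shares, ineligible=("PTV",)):
    """Rescale Basic Fund percentage shares over the eligible groups."""
    eligible = {g: s for g, s in basic_shares.items() if g not in ineligible}
    total = sum(eligible.values())
    return {g: 100.0 * s / total for g, s in eligible.items()}

# Hypothetical Basic Fund shares (percent), for illustration only.
basic = {"CCG": 5.0, "CTV": 17.0, "SDC": 4.0, "PS": 26.0, "JSC": 33.0, "PTV": 15.0}
print(allocate_375_fund(basic))
# Each eligible group's share rises in proportion to its Basic Fund
# share, and the rescaled shares again sum to 100.
```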
Back to Citation176. The parties agreed that Program Suppliers are entitled to receive 100% of the remaining royalties from the Syndex Fund. Further, the amount in that Fund, less than $10,000 per six-month accounting period, see JSC Initial Brief at 2 n.1, is so low that, even assuming arguendo allocations to the Syndex Fund would require an adjustment to the Basic Fund, such an adjustment would be “inconsequential.” CTV Initial Brief at 11 n.20; see also SDC Initial Brief at 1 n.1 (the Syndex Fund comprises “only about 0.01% of total royalties paid in 2010-2013.”). Accordingly, the discussion in this section is limited to the impact, if any, of the allocations to the 3.75% Fund on the allocations in the Basic Fund.
Back to Citation177. PTV broadly defines the phrase “Evidentiary Adjustment” as the process by which “the Judges must . . . convert the [evidentiary] studies' estimated shares based on the `Combined Royalty Funds' [i.e., estimated without explicit regard to an itemization among the three specific funds] to shares tailored to the particular funds from which the parties are entitled to recover.” Id. at 1. For the sake of clarity, the Judges utilize the phrase “Evidentiary Adjustment” more narrowly in this Determination, to mean only the potential bump up of PTV's share of the Basic Fund to account for its nonparticipation in the 3.75% Fund.
Back to Citation178. Of course, because the Basic Fund is finite, any bump up in PTV's share would necessitate a decrease in the percentage allocations to the other five claimant groups proportionate to their relative shares (inter se) of the Basic Fund.
Back to Citation179. The Judges discuss the relevant prior rulings, infra, section 0.
Back to Citation180. In prior rulings by the Judges and the Librarian (in the CARP era), the Bortz survey was the only survey of CSO representatives given any credence. In the present case, the Horowitz Survey also surveyed CSO representatives. The Judges find no basis to treat these two surveys differently in connection with the issue of whether PTV should receive an increase in its Basic Fund share to account for its nonparticipation in the 3.75% Fund.
Back to Citation181. The original regulatory text was located in 37 CFR, part 308. See 37 CFR 308.2(c)(2). In 2016, the Judges recodified this provision in Part 387, without changing the relevant language. See Adjustment of Cable Statutory License Royalty Rates, 81 FR 24523 (April 26, 2016); Adjustment of Cable Statutory License Royalty Rates, 81 FR 62812 (Sept. 13, 2016) (Note that the CFR version of Part 387 erroneously lists the second Federal Register page cite as page 62813.).
Back to Citation182. In economic terms, the new 3.75% Fund royalties substitute a tariff for a quota, in order to maintain some form of protection of the value of copyrights on local commercial programs in markets into which CSOs would now be able to retransmit an unlimited number of commercial stations from distant locales.
Back to Citation183. See, e.g., PTV Initial Brief at 4 (3.75% rate “sometimes called the `Penalty Rate' ” because it applies a higher royalty rate “to the retransmission of additional distant signals beyond the limited number that cable systems could carry under the [f]ormer FCC Rules.”).
Back to Citation184. The distinction between economic incidence and legal incidence is typically exemplified in the analysis of sales taxes. The seller bears the legal incidence by writing a check to the governmental unit assessing the tax, but the seller and the consumer share the economic incidence of the sales tax, the latter paying a portion of the tax in the form of a higher price for the taxed item, with the allocation of the economic incidence between merchant and consumer determined by the elasticity of demand for the taxed item. See R. Posner, Economic Analysis of Law at 491-495 (6th ed. 2003). Analogously, the economic incidence of PTV's argument is transparent; although the legal incidence of its argument—bumping up its Basic Fund share—is not expressly prohibited, 100% of the economic incidence of its argument is a shift of wealth and income to itself from the lawful participants in the 3.75% Fund.
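As a standard textbook expression (offered only as background and not drawn from the record), the split of the economic incidence of a small per-unit tax in a competitive market can be written in terms of the elasticities of supply and demand:

```latex
% Incidence of a small per-unit tax, with elasticity of supply e_S and
% elasticity of demand e_D (in absolute value): the more inelastic side
% of the market bears the larger economic share, regardless of which
% side bears the legal incidence (i.e., writes the check to the taxing
% authority).
\text{buyers' share} = \frac{\varepsilon_S}{\varepsilon_S + \varepsilon_D},
\qquad
\text{sellers' share} = \frac{\varepsilon_D}{\varepsilon_S + \varepsilon_D}
```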
Back to Citation185. Again, PTV makes the same argument with regard to the viewing evidence. However, that issue is moot, because, as explained supra, the Judges do not apply the viewing evidence in making allocations.
Back to Citation186. The Judges part company with the CARP determination (adopted by the Librarian), allocating royalties for 1998 and 1999, in which the CARP stated that the adjustment is warranted because “the Bortz respondents . . . presumably did not know that PTV would not be eligible to receive part of their budget allocation . . . . ” Distribution of 1998-1999 Cable Royalties, at 26 n.10 (Oct. 21, 2003), adopted by the Librarian 69 FR 3606 (Jan. 26, 2004). When the Judges have qualified and relied upon expert survey witnesses, the Judges cannot, without contrary evidence, inject a presumption inconsistent with their qualifications. The Judges consider that and other prior rulings infra.
Back to Citation187. The Judges find no reason to presume that survey respondents who were otherwise deemed by the survey experts, based on answers to introductory questions, to be knowledgeable about their programming and carriage decisions, would not also be aware that they could add an educational station without incurring the higher 3.75% royalty, whereas the addition of a commercial station in certain instances did trigger the 3.75% royalty. All parties accepted, and the Judges agreed, that the individuals responsible for making distant retransmission decisions for the cable systems understood that the CSO paid the minimum fee of 1.064%, regardless of whether they distantly retransmitted any local stations. It would be inconsistent to presume, on the one hand, that CSO executives were cognizant of a 1.064% minimum fee, but, on the other hand, were ignorant of the 3.75% rate—more than three times that minimum fee—when the responsible executives answered the surveys.
Back to Citation188. Although Question #3 referred to program categories, it is still relevant to the 3.75% Fund issue, because only the five other claimant categories (i.e., other than PTV) could have triggered the higher royalty cost. Thus, a knowledgeable survey respondent could not be presumed to lack knowledge of the different impact on value from adding an educational station rather than a commercial station.
Back to Citation189. In response to the Judges' 3.75% Fund Order, Program Suppliers submitted a Declaration by Howard Horowitz, who designed the Horowitz Survey, in which he stated that it is “appropriate” to apply the allocation of the Horowitz Survey shares “to any fund in which all parties participate.” Declaration of Howard Horowitz ¶ 4 (July 16, 2018). This statement would support the Judges' decision, but the Judges give no weight to this declaration, for two reasons. First, Mr. Horowitz did not offer any such testimony during the proceeding; therefore his declaration is impermissible new testimony (not clarifying testimony). Second, in the absence of persuasive hearing testimony, Mr. Horowitz cannot opine as to what would be the “appropriate” allocation of the Horowitz Survey shares. What is an appropriate allocation in this context is a question of law reserved to the Judges.
Back to Citation190. CTV, on whose behalf Dr. Crawford undertook his regression analysis, argues in its briefing that Dr. Crawford's 3.75% Fund coefficient “may already be accounted for to some degree” in his overall regression analysis. CTV Responding Brief at 7 (emphasis added). Not only is this statement highly conditional (as noted by the italicized language), but CTV also did not submit a supporting declaration from Dr. Crawford properly clarifying how his hearing testimony supported this assertion, despite the Judges' invitation in the 3.75% Fund Order to submit witness statements. Instead, CTV referred to Dr. Crawford's hearing testimony on an unrelated issue in which he stated, with regard to a different control variable, that its coefficient estimate should be included in a regression analysis when there are “good” economic and statistical reasons to do so. See 2/28/18 Tr. 1643 (Crawford). The Judges do not dispute this point, but it is not relevant to the task at hand. As an indicator (dummy) variable in a regression designed to generate estimates for relative value results among program categories, the 3.75% Fund variable was designed to control for the impact of the 3.75% Fund on those relative values. Dr. Crawford further testified that any control variable that would correlate significantly with the dependent variable should be included in the regression model so that it does not bias the coefficients of interest (the program categories' coefficients in the present case). Id. at 1644 (Crawford). Thus, the excerpt from Dr. Crawford's testimony, when considered in context, does not demonstrate that the impact of participation in the 3.75% Fund is already “accounted for” in his overall regression analysis in a manner relevant to the present issue.
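A stylized specification, offered only to illustrate the role of such an indicator control and not as a reproduction of Dr. Crawford's actual model, might take the following form:

```latex
% Stylized regression: x_{c,i} are program-category measures and D_i is
% an indicator (dummy) variable equal to 1 when observation i involves
% a distant signal subject to the 3.75% rate, and 0 otherwise.
% Including D_i controls for 3.75% Fund status so that the category
% coefficients \beta_c are not biased by its omission; the coefficient
% \gamma captures that status's own influence on the dependent variable.
y_i = \beta_0 + \sum_{c} \beta_c\, x_{c,i} + \gamma\, D_i + u_i
```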
Back to Citation191. The Judges emphasize a distinction between their consideration of the 3.75% Fund regression coefficients and their evaluation of the various coefficients relied on by Dr. Erdem to predict the level of royalty payments. The Judges discounted Dr. Erdem's emphasis on coefficients relating, for example, to the number of CSO subscribers, because such coefficients, as Dr. Crawford testified, simply re-created the royalty formula. However, now the Judges are called upon to distinguish and apply a separate royalty formula—the formula for the 3.75% Fund—from the formula for the Basic Fund. In this latter context, the coefficients related to the 3.75% Fund are indeed relevant. Accordingly, what constituted vice in the critique of the Crawford regressions with regard to allocations among the program categories is virtue in distinguishing between two different categories of rate formulas.
Back to Citation192. While this proceeding was pending, Congress abolished the CRT. The proceeding continued under the auspices of the CARP appointed to distribute the royalties.
Back to Citation193. The Librarian identified the public television claimants as the PBS claimants, rather than the PTV claimants as had the CARP.
Back to Citation194. However, as discussed infra, for other reasons, the Judges do not conclude that the decisions by the CARP and the Librarian to apply the Evidentiary Adjustment are dispositive in the present proceeding.
Back to Citation195. Congress replaced the CARP system with the Judges in 2004 (effective 2005). Copyright Royalty and Distribution Reform Act of 2004, Public Law 108-419, 118 Stat. 2341 (Nov. 30, 2004).
Back to Citation196. The “Settling Parties” comprised JSC, CTV, PTV, and Music Claimants. Id. at 57064.
Back to Citation197. There is an element of irony in PTV's assertion of waiver for the first time in its Responding Brief. By not making this legal argument of waiver in its July 16, 2018 Initial Brief, PTV prevented adverse parties from addressing the issue of waiver. See, e.g., U.S. v. Layeni, 90 F.3d 514, 522 (D.C. Cir. 1996); In re Brand Name Prescription Drugs Antitrust Litig., 186 F.3d 781, 790 (7th Cir. 1999). Although PTV might claim that it could not have been certain it had the right to assert the waiver argument until it had reviewed these parties' Initial Briefs, such a position would be belied by the fact that PTV's waiver argument is based on the alleged absence from the hearing record of adverse facts or arguments concerning the impact, if any, of the 3.75% Fund allocations on the allocations of the Basic Fund. Thus, PTV appears to have waived its waiver argument. Nonetheless, the Judges consider and reject PTV's waiver argument on the merits.
Back to Citation198. The cases are cited at PTV's Responding Brief at 22 n.85 and discussed below.
Back to Citation199. The Judges regularly exercise discretion to seek supplemental briefing in order to address an issue that had not been sufficiently addressed during the hearing. A judicial order directing the filing of supplemental papers is the preferred method by which judges should address issues they find to have been insufficiently considered. See United States Nat'l Bank of Oregon v. Indep. Ins. Agents of America, 508 U.S. 439 (1993) (affirming D.C. Circuit's sua sponte raising of unaddressed issue and ordering supplemental briefing). Moreover, supplemental briefing provides the parties a full and fair opportunity to address relevant issues that were insufficiently developed and argued. Trest v. Cain, 522 U.S. 87, 92 (1997) (“We do not say that a court must always ask for further briefing when it disposes of a case on a basis not previously argued . . . [but] often . . . that somewhat longer (and often fairer) way `round is the shortest way home.”) (dicta); see also R. Offenkrantz & A. Lichter, Sua Sponte Actions in the Appellate Courts: The “Gorilla Rule” Revisited, 17 J. App. Prac. 113, 120 (Spring 2016) (noting the Supreme Court's “preference for ordering supplemental briefing when a new issue is raised sua sponte . . . . ”); B. Miller, Sua Sponte Appellate Rulings: When Courts Deprive Litigants of an Opportunity to be Heard, 39 San Diego L. Rev. 1253, 1281-82, 1297-1300 (2002) (courts more likely to raise, sua sponte, “questions of law,” and “routinely ask the parties for supplemental briefs when deciding a new issue.”); R. Ginsburg, The Obligation to Reason Why, U. Fla. L. Rev. 205, 214-15 (1985) (in D.C. Circuit, if judges identify a potentially dispositive point not raised by the parties, they generally invite supplemental briefs).
In the present case, the Judges also have wide statutory discretion to cure deficiencies in the legal or factual record to mitigate the harm that might otherwise necessitate a finding of waiver. See 17 U.S.C. 801(c) (“The . . . Judges may make any necessary procedural . . . rulings in any proceeding under this chapter. . . . ”). The ordering of supplemental briefing is one example of the exercise of that discretion, and its invocation renders moot a claim that legal arguments had been waived.
The parties' supplemental briefing ultimately did not address all of the legal reasons in the full detail that the Judges now rely upon to conclude that they cannot bump up PTV's share of the Basic Fund to offset its non-participation in the 3.75% Fund. However, as Nat'l Bank of Oregon further holds, a court can rule sua sponte even if the parties fail to address in their supplemental briefing the issue on which the court sought such briefing. Id. at 447. Moreover, in that decision, the Supreme Court held that lower courts may reframe the legal issues posed by the parties, in order to ensure that the law is correctly applied, lest the parties force the court to misstate the law. Nat'l Bank of Oregon at 446-47. In the same vein, “[a] court should apply the right body of law even if the parties fail to cite their best cases.” Palmer v. Bd. of Educ., 46 F.3d 682, 684 (7th Cir. 1995) (Easterbrook, J.). Here, a fortiori, because PTV did not make its legal waiver argument until it filed its Responding Brief (the very tactic of which it accuses Program Suppliers regarding the substantive Evidentiary Adjustment issue), the adverse parties had no opportunity to cite any cases.
Back to Citation200. See PTV Responding Brief at 22 n.85.
Back to Citation201. 716 F.3d 612 (D.C. Cir. 2013).
Back to Citation202. 530 F.3d 991 (D.C. Cir. 2008).
Back to Citation203. 344 U.S. 33 (1952).
Back to Citation204. As noted, Dr. Israel's Cable Content Analysis, although not a methodology that the Judges adopted, provided information on JSC-related expenditures in a related market sufficient to lend some support for the award of a significant share to JSC (as indicated by the methodologies that the Judges have adopted), even though the shares are disproportionate to the number of programming hours retransmitted. Similarly, the McLaughlin/Blackburn “changed circumstances” adjustments bolster the results of methodologies valuing PTV programming above the lower bound set by regression analyses.
Back to Citation[FR Doc. 2019-01544 Filed 2-11-19; 8:45 am]
BILLING CODE 1410-72-P