The branch of economics that is commonly referred to as “industrial organization” has undergone several major revolutions over the past 100 or so years. At its heart, industrial organization is the study of firms and of the markets in which they operate. This simple idea, however, raises a host of complex questions.
What do we mean by a “firm”? The view that has emerged in answer to this question in the past few decades is that we should think of the firm as a “nexus of contracts”.1 This approach provides a useful organizing framework. Some of the contracts are internal to the firm, relating to the ways in which intra-firm relationships can be organized. An important question that arises with these contracts is how to formulate them so as to align the interests of the agents employed to act on the firm’s behalf with those of the principals who “own” the firm. Other contracts are between the firm and outside agents, such as suppliers or buyers. One important issue with these contracts relates to the appropriate form of contract – short-term, long-term or relational. Another relates to the choice of activities that should be left outside the firm and those that should be internalized, essentially asking the question first posed explicitly by Ronald Coase (1937): What is the “nature of the firm”?
Then, what do we mean by a “market” and how might we measure the structure of markets? The underlying principle is that markets of different types can be expected to evolve in very different ways. In addition, firms operating in different types of market face competitive pressures that vary in nature and degree and so have to deal with issues that depend on the specific market context. This variety in competitive environments introduces important strategic questions, for example on pricing policy, product design, or product and process innovation. It also introduces complex econometric questions. Theory is useful in providing us with testable hypotheses with respect to the behavior of firms and markets. But how are these hypotheses to be tested? What models are appropriate and what econometric techniques need to be applied?
The remainder of this introduction to the Dictionary of Industrial Organization builds on these questions and provides a rationale for the choices we have made with respect to the range of entries we have included.
Early Industrial Organization: Structure–Conduct–Performance
The “structure–conduct–performance” (SCP) paradigm provided some of the earliest empirically grounded analysis of industrial organization. This paradigm was first developed in the early work of Edward Mason (1939, 1957) and his colleagues at Harvard, and was further developed in the work of Joe Bain (1956). The SCP paradigm begins with the idea that basic economic conditions determine market structure, defined as a spectrum along which all real-world markets lie. At one extreme are heavily concentrated markets dominated by a small number of firms, close to the monopoly end of the spectrum. At the other are markets containing many firms, none with any significant market power and lying nearer to the perfectly competitive end of the spectrum. The analysis of an industry begins with determining where on this spectrum the industry lies, using factors such as the number and size of sellers, the degree of product differentiation, and barriers to entry.
Market structure, it was hypothesized, determines market conduct, as captured by pricing behavior, product and process innovation, and advertising. Finally, market structure and conduct together determine the market performance of the industry being studied, as measured by, for example, economic efficiency, consumer welfare and profitability.
Much of the analysis in the SCP paradigm was based on empirically rich case studies and large-scale cross-section econometric analysis, and exhibited a remarkable ability to interpret institutional details and identify stylized facts. This approach suffered, however, from two major defects. First, the assumption that structure determines conduct, which in turn determines performance, implicitly treats market structure as exogenous. Recent developments suggest, by contrast, that there may well be important feedback loops from conduct to structure, and from performance to conduct and structure, with the result that all three elements of the paradigm are endogenous. Second, the paradigm implicitly characterizes an industry in terms of the mix of perfect competition and monopoly it exhibits. The problem with this characterization is that it leaves no role for strategic decision-making. At the competitive end of the spectrum there are so many firms that no one firm can affect anything. At the other end there is only one firm, which need not worry about any other firms. As a result, the SCP paradigm suppressed any explicit consideration of firms thinking and behaving strategically.
Bain (1956) developed an analysis that was something of an exception, introducing the notion that industry and firm performance cannot be analyzed independently of a consideration of the ability of new firms to enter the market. This idea is important. Indeed, it subsequently re-emerged in the “contestability” theory developed by William Baumol et al. (1982). In Bain’s analysis, however, entry barriers were largely taken as existing independently of the strategic actions of firms, whereas more recent developments treat barriers to entry as being partly structural but also potentially strategic: see, for example, Steven Salop (1979a) and Harold Demsetz (1982).
The Theory of the Firm
Just as the SCP paradigm was developing, another strand in industrial organization was also emerging, based on the seminal 1937 article by Ronald Coase, “The Nature of the Firm”. The SCP paradigm takes “the firm” as a given – almost a black box – and asks us to look from the firm outwards to its markets. Coase took a very different view, asking that we look inside the firm. He asked a truly fundamental question: Why do firms exist? If we consider the firm as a set of transactions mediated by contracts, bringing a transaction inside a firm means that the firm is replacing the market as a mechanism for undertaking that transaction. If markets are perfect, this implies a loss of efficiency, so why do this?
Coase answered this question by arguing that traditional economic analysis ignores an important set of costs: the transaction costs of using markets. He went further. If the transaction costs of using markets are significant, why do we not see a small set of very large firms? There must also be transaction costs of bringing activities inside the firm. It follows that the boundaries of the firm – sometimes referred to as the resolution of the “make-or-buy” decision, or the degree of vertical integration – are the result of the firm balancing these two sets of transaction costs.
Subsequent development of this approach to the firm by, for example, Oliver Williamson (1985), Paul Milgrom and John Roberts (1992) and Oliver Hart and John Moore (1990) added much-needed additional detail to the concept of transaction costs.2 These authors suggested that transaction costs of using the market take two broad forms. First, there are the costs of search, negotiation, contract formation, monitoring and enforcement as firms seek to attract and retain customers and suppliers. Second, in some cases one, or the other, or both parties to a transaction has to make a relationship-specific investment to support the transaction. Making such an investment creates quasi-rents and opens the door to the possibility of ex post opportunistic renegotiation of the original contract in order to extract the quasi-rents. This is referred to as the “hold-up problem”, based on what Williamson (1979) referred to as “self-interest seeking with guile”: see, for example, Victor Goldberg (1976), Benjamin Klein et al. (1978) and Klein (1996).
If the contract in question perfectly delineated every possible contingency, there would be no scope for opportunism. However, contracts are inevitably incomplete as a result of bounded rationality and asymmetric information. The immediate implication is that we are more likely to see an activity conducted inside the firm when that activity requires significant investment in relationship-specific assets.
The question that remains is: What limits the degree of vertical integration? Why not replace the market completely? The answer to these questions rests on another set of transaction costs, those that arise within firms. These take two broad forms. First, there are agency costs:3 the costs of slack effort by employees in the firm and the costs associated with deterring and/or detecting slack effort. Second, there are influence costs:4 the costs of activities intended to affect the firm’s internal allocation of scarce resources (lobbying, internal politicking) and the costs of inefficient decisions that arise as a result of influence activities.
The argument is that agency and influence costs are proportionately more significant in large, vertically integrated firms. Monitoring employee performance in such firms is more difficult, less effective and more costly as a result of their very size. Similarly, aligning the interests of employees (agents) with those of the firm (or its principals) is more difficult in large, vertically integrated firms. In addition, such firms contain more operating divisions, each of which is competing for resources within the firm, making the integrated firm more vulnerable to costly influence activities.
So again we return to a deceptively simple proposition. The firm resolves the make-or-buy decision with respect to a particular activity by balancing the external transaction costs against the internal transaction costs associated with that activity.
While, as we noted above, monopoly lies at one extreme on the spectrum of market structures, the analysis of a monopolist’s decision-making does raise many interesting issues for industrial organization. Far from being insulated from strategic concerns, real-world monopolists have to worry about potential entrants, and entry deterrence is an explicit strategic act. Moreover, a monopolist may well in some circumstances be competing with itself, for example if the monopolist produces a durable good, and so again has to think strategically.
Suppose, for example, that the monopolist does indeed produce a durable good such as a software package, an automobile or capital equipment.5 Suppose further that the monopolist sells this good in a market that lasts for multiple periods, and that the monopolist sells to consumers who live for multiple periods. The result is that the monopolist actually faces competition – from its own product. This introduces the Coase conjecture (Coase 1972). Since the monopolist cannot credibly commit to withholding supply in later periods, goods sold in later periods effectively compete with goods sold in earlier periods. The durability of a good thus erodes the monopolist’s market power. Coase conjectured that in the limit, as the period between sales decreases, the monopolist will be constrained to price at marginal cost.
The underlying idea is simple enough (the analysis is much more complex). The monopolist need not charge the same price in every period and it is to be expected that price will fall over time; after all, the monopolist in later periods actually faces competition from sales by consumers who have purchased in earlier periods. As a result, consumers will tend to postpone their purchase decisions in order to take advantage of the lower future prices. This forces the monopolist to charge lower prices in the earlier periods, weakening the monopolist’s market power. While the generality of the Coase conjecture has been called into question,6 the basic point remains: the monopoly provider of a durable good faces competition from itself.
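The logic can be made concrete with a standard two-period sketch. The parameterization below – a unit mass of consumers with per-period valuations uniform on [0, 1], zero cost and no discounting – is illustrative and not taken from the text; the second-period price is found by backward induction, and consumers correctly anticipate it when deciding whether to wait:

```python
import numpy as np

# Two-period durable-goods monopoly (illustrative parameterization):
# consumers with per-period valuations v ~ U[0, 1]; the good delivers v
# in each period owned; zero cost, no discounting.

def second_period_price(v1):
    # Period-1 buyers are those with v >= v1, so residual period-2 demand
    # comes from v in [0, v1]; the monopolist optimally charges p2 = v1 / 2.
    return v1 / 2.0

def first_period_cutoff(p1):
    # The marginal consumer v1 is indifferent between buying now (2*v1 - p1)
    # and waiting (v1 - p2), with p2 = v1/2 correctly anticipated:
    # 2*v1 - p1 = v1 - v1/2  =>  v1 = 2 * p1 / 3.
    return 2.0 * p1 / 3.0

def total_profit(p1):
    v1 = first_period_cutoff(p1)
    p2 = second_period_price(v1)
    return p1 * (1.0 - v1) + p2 * (v1 - p2)

# Grid search for the subgame-perfect first-period price.
p1_star = max(np.linspace(0.0, 1.5, 1501), key=total_profit)
p2_star = second_period_price(first_period_cutoff(p1_star))
```

In this parameterization the price falls from roughly 0.9 to 0.3, and profit (0.45) is below the 0.5 available to a monopolist able to commit to a single once-for-all price – a numerical illustration of how durability erodes market power.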
The analysis of durable goods implies that the monopolist can charge different prices for goods that are differentiated by the time at which they are sold – intertemporal price discrimination. We can also consider situations in which the monopolist charges different prices for goods sold at the same time. Price discrimination in this context is defined by Louis Phlips as “implying that two varieties of a commodity are sold (by the same seller) to two buyers at different net prices, the net price being the price (paid by the buyer) corrected for the cost associated with the product differentiation” (Phlips 1983, p. 6). Thus the large difference between first-class and economy airline fares is probably an example of price discrimination, as is a firm that sells a product produced in London to consumers in Paris, New York and Singapore at the same price, despite the different costs of supplying each market.
Several interesting questions arise with price discrimination. First, what makes price discrimination feasible and profitable? This requires three conditions: (a) the monopolist knows that it is serving consumers of different types; (b) the monopolist either has an exogenous signal of a consumer’s true type or is able to design a pricing mechanism that encourages consumers to self-select into their true types; and (c) the monopolist can prevent arbitrage from consumers charged lower prices to consumers charged higher prices.
Second, what form of price discrimination should be applied? The language here is that developed by Arthur Pigou (1920): first-, second- or third-degree price discrimination, referred to more recently by Carl Shapiro and Hal Varian (1999) as personalized pricing, menu pricing and group pricing.
Third, what are the efficiency properties of price discrimination? Are there circumstances in which price discrimination increases total surplus, and if so, what are these? Standard analysis (see, for example, Lynne Pepall et al. 2008; Richard Schmalensee 1981) tells us that a necessary condition for third-degree price discrimination to increase social welfare is that it increases total output.7 It follows that one obvious way in which price discrimination will increase total surplus is if such discrimination leads to markets being served that would otherwise not be served. More complex examples rest on the convexity/concavity of demand functions in the different markets.
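The output condition can be checked numerically. The sketch below compares third-degree discriminatory and uniform pricing across two markets with hypothetical linear demands (the parameter values are illustrative, not from the text):

```python
import numpy as np

# Two markets with linear demands q_i = a_i - p and constant marginal
# cost mc; the monopolist's prices are found by grid search.

def demand(a, p):
    return max(a - p, 0.0)

def profit(p, a, mc):
    return (p - mc) * demand(a, p)

def consumer_surplus(a, p):
    q = demand(a, p)
    return 0.5 * q * q

a1, a2, mc = 10.0, 6.0, 2.0
grid = np.linspace(mc, a1, 801)  # price grid with step 0.01

# Third-degree discrimination: one price per market.
p1 = max(grid, key=lambda p: profit(p, a1, mc))
p2 = max(grid, key=lambda p: profit(p, a2, mc))

# Uniform pricing: a single price across both markets.
pu = max(grid, key=lambda p: profit(p, a1, mc) + profit(p, a2, mc))

q_disc = demand(a1, p1) + demand(a2, p2)
q_unif = demand(a1, pu) + demand(a2, pu)

w_disc = (profit(p1, a1, mc) + profit(p2, a2, mc)
          + consumer_surplus(a1, p1) + consumer_surplus(a2, p2))
w_unif = (profit(pu, a1, mc) + profit(pu, a2, mc)
          + consumer_surplus(a1, pu) + consumer_surplus(a2, pu))
```

With both markets served under both regimes, total output is unchanged (six units in this example) and discrimination lowers total surplus – consistent with the necessary condition that discrimination must raise output if it is to raise welfare.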
Just as a monopolist can discriminate in prices, the monopolist can also discriminate in product quality by designing goods or services some of which are of high quality and some of which are of lower quality.8 As with durable goods, offering goods of different qualities means that the monopolist is effectively competing with itself. The firm has to design and price its different products to satisfy an incentive compatibility constraint with respect to its consumers, such that consumers are able to choose the product quality they will buy. Thus in pricing the high-quality good, for example, the monopolist must recognize that the consumers for whom this product is designed – those who place a high value on quality – can always choose instead to purchase a lower-quality good.
Quality differentiation implies that the monopolist is offering more than one good; in this case, multiple goods differentiated by quality, or vertically differentiated. Monopolists often also offer multiple goods that are horizontally differentiated: goods of similar qualities but with different characteristics. This is particularly relevant as a strategy when the monopolist knows that its market contains consumers with similar willingness to pay for quality but with very different tastes for the characteristics of the goods being offered.
The question to be answered now is: How many product variants should the monopolist bring to market? In answering this question, the firm must balance three forces. First, having additional product variants implies that each variant is more closely aligned with a particular group of consumers’ tastes and so is more attractive to those consumers. Second, adding another product means that the monopolist is competing with its existing products, cannibalizing sales of the existing products. Third, introducing additional products can sacrifice economies of scale. There is yet another more strategic consideration. Once we recognize that a monopolist is not necessarily free from the threat of entry, the ability to offer multiple product variants to supply a market containing consumers with very diverse tastes introduces the strategic possibility that the monopolist will seek to “cover the market” in order to leave no “holes” that a potential entrant might exploit.9
Resolving these issues raises the question of whether the monopolist offers too much or too little product variety in a social welfare sense. There is no settled answer to this question, the conclusion being dependent upon the specific modeling assumptions. One set of analyses indicates that the monopolist offers excess variety, another that there is too little variety, and a third that variety is socially optimal. In other words, theory offers us little guidance on this question.
Strategic Oligopoly Pricing and Game Theory
We noted in our discussion of the SCP paradigm that it pays little or no attention to the strategic dimension of firms’ decisions when they are operating in markets that are imperfectly competitive. Explicitly recognizing the interdependence that characterizes decision-making in such markets is a distinctive feature of what is often called the new or modern industrial organization.10 Firms try to influence their market environment rather than take that environment as exogenously given.
The language now becomes that of game theory. The final outcome of any particular game will be dependent upon the strategy space, firms’ beliefs and information sets, the sequence of moves and the ability to make strategic commitments, the time horizon and players’ time preferences. The limitation of this approach is that we can no longer develop a general theory covering the full range of industrial behavior. However, the advantage of this approach is that we can address a wide range of important questions that were beyond the capabilities of theorists steeped in the SCP paradigm.
Game theory allows us to analyze the relationship between market power and pricing, using, for example, the workhorses of Cournot and Bertrand competition. We can build on the seminal Harold Hotelling (1929) analysis of spatial competition, addressing his hypothesis that competing oligopolists will offer horizontally differentiated products characterized by what he termed an “excessive sameness”. By contrast, firms offering vertically differentiated products are likely to offer highly differentiated products.
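The Cournot workhorse just mentioned can be sketched in a few lines. With the textbook linear inverse demand p = a − bQ and constant marginal cost c (functional forms chosen for illustration, not drawn from this text), the symmetric n-firm equilibrium shows market power, as measured by the Lerner index, falling toward zero as the number of firms grows:

```python
def cournot_symmetric(a, b, c, n):
    # Inverse demand p = a - b*Q, constant marginal cost c, n identical
    # firms. The first-order condition a - b*Q - b*q_i - c = 0 gives, at
    # the symmetric equilibrium Q = n*q:
    q = (a - c) / (b * (n + 1))   # per-firm quantity
    p = (a + n * c) / (n + 1)     # equilibrium price
    lerner = (p - c) / p          # Lerner index of market power
    return q, p, lerner
```

Setting n = 1 recovers the monopoly price (a + c)/2, while large n drives price toward marginal cost – the two ends of the SCP spectrum emerge as limiting cases of a single strategic model.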
Hotelling’s analysis of spatial competition has provided us with a theoretical approach that can be applied to a wide range of apparently non-spatial questions. The effects of deregulation of airline transport routes, the implications of introducing flexible manufacturing systems, and the choice of movie exhibition programming might seem to be unrelated questions, but all of them can be, and have been, addressed using the spatial model that Hotelling first introduced.
Returning to some of our comments in the previous section on monopoly, a game-theoretic approach allows us to analyze whether competition will lead to excess product variety. William Vickrey (1964) was the first to state what became known as the “excess entry theorem”: that in a circle model of horizontal product differentiation11 consumers will be offered too much product variety. More recent analyses can be found in, for example, Yiquan Gu and Tobias Wenzel (2009) and Toshihiro Matsumura and Makoto Okamura (2006). A contrasting result holds in models of vertical differentiation, referred to as the “finiteness property” (Jean Jaskold Gabszewicz and Jacques-François Thisse 1980; Avner Shaked and John Sutton 1983). Even if firms offering vertically differentiated products have zero fixed costs, the market will only support a finite number of firms with positive revenues.
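The flavor of the excess entry theorem can be conveyed with the standard circular-city calculation in the style of Salop (1979): consumers are spread around a unit circle, firms pay a fixed entry cost F and consumers bear a linear transport cost t, with marginal production cost normalized to zero. Free entry then delivers exactly twice the welfare-optimal number of firms:

```python
import math

def free_entry_firms(t, F):
    # Equilibrium price is c + t/n, so per-firm profit is t/n**2 - F;
    # the zero-profit condition t/n**2 = F gives n_e = sqrt(t / F).
    return math.sqrt(t / F)

def welfare_optimal_firms(t, F):
    # The planner minimizes total entry plus transport cost, n*F + t/(4*n);
    # the first-order condition F = t/(4*n**2) gives n* = sqrt(t / F) / 2.
    return 0.5 * math.sqrt(t / F)
```

The ratio of the two is two for any (t, F): the market always supports twice as many variants as a welfare-maximizing planner would choose, because entrants weigh business stolen from rivals, not the modest saving in consumers’ transport costs.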
An important distinction in game theory more generally is between static and repeated games. This distinction is particularly important when we apply game theory to questions such as price setting, location strategies and product line competition. Take an extreme example of Bertrand competition between firms offering essentially identical products. If this is posed as a static game, then the only price equilibrium with two or more firms in the market is marginal cost pricing. When we extend this to a two-stage entry-price game, the only subgame perfect equilibrium is monopoly. By contrast, if the game is indefinitely repeated, we can apply the “folk theorem” (James Friedman 1971) to conclude that any set of prices between the competitive and the monopoly price is sustainable for some discount factor sufficiently close to unity. In the two-stage entry game we can now see the potential for multiple-firm entry, even in the seemingly aggressive Bertrand context.
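The folk-theorem logic can be made concrete for grim-trigger strategies. If n identical Bertrand firms share the monopoly profit while cooperation lasts, and a deviator captures the whole monopoly profit for one period before the market reverts to marginal-cost pricing forever, collusion at the monopoly price is sustainable whenever the common discount factor d satisfies (pi_m/n)/(1 − d) ≥ pi_m, that is, d ≥ 1 − 1/n:

```python
def critical_discount_factor(n):
    # Grim trigger in symmetric Bertrand with n firms: sharing pi_m/n
    # forever beats a one-shot deviation worth pi_m followed by zero
    # profit whenever (pi_m/n) / (1 - d) >= pi_m, i.e. d >= 1 - 1/n.
    return 1.0 - 1.0 / n

def monopoly_price_sustainable(delta, n):
    return delta >= critical_discount_factor(n)
```

For a duopoly the critical discount factor is 1/2, and it rises toward one as the number of firms grows – more firms make tacit cooperation harder to sustain, consistent with the structural intuition of the older SCP literature.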
Repeated games typically support a non-competitive outcome by means of tacit cooperation based upon a trigger strategy. We should, however, be aware of the fact that firms in imperfectly competitive markets have an incentive to explicitly cooperate if they can possibly do so, forming cartels. This is far from a new idea. Indeed, one of its earliest statements is to be found in Adam Smith’s Wealth of Nations (1776) where he wrote, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices” (Book 1, Chapter X, Part II).
Some of the simpler analyses of cartels suggest that we need not be concerned about their impact on market efficiency: they will fall apart of their own volition as a result of the temptation to cheat on the cartel’s agreed prices or market quotas. At first sight this appears encouraging. However, it ignores the repeated game context in which cartels operate. The same trigger strategy that can sustain tacit cooperation is even more effective in sustaining explicit cartels. The question then is: Can we offer reasonably simple tests by which cartels can be discovered? According to Ronald Harstad and Phlips (1994) and Phlips (1996), we cannot. They formulated the indistinguishability theorem, stating that cartel members can exploit their information advantages with respect to the antitrust authorities to make the cartel behavior appear to be competitive.
One mechanism to ease cartel detection that the antitrust authorities have adopted in the United States, the European Union and Australia, among others, is to offer a simple amnesty program that can be paraphrased along the following lines: “If you are a member of a cartel and you provide us with information that leads to successful prosecution of the cartel, you go free. Everyone else faces heavy fines.”12 This would appear to present the cartel members with a classic prisoners’ dilemma. However, careful application of game theory indicates that matters are not quite so simple.13 Yes, once a cartel is suspected to exist it should be easier to prosecute, as at least one firm will be tempted to ask for amnesty in return for evidence of the workings of the cartel. On the other hand, the cost of being a member of a cartel that is detected and prosecuted is now much lower provided that you can defect to the antitrust authorities first.
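The dilemma can be sketched as a simple two-firm game. The payoff numbers below are hypothetical – a fine of 10, a detection probability of 0.3 absent an informant, full amnesty for a lone reporter and a coin-flip for amnesty if both report – and are chosen only to illustrate the structure, not taken from the text:

```python
from itertools import product

# Expected payoffs (row player, column player). A lone reporter pays
# nothing and the silent firm pays the full fine of 10; if both stay
# silent each expects 0.3 * 10 = 3; if both report, each gets amnesty
# with probability 1/2, an expected fine of 5.
PAYOFF = {
    ('silent', 'silent'): (-3.0, -3.0),
    ('silent', 'report'): (-10.0, 0.0),
    ('report', 'silent'): (0.0, -10.0),
    ('report', 'report'): (-5.0, -5.0),
}

def pure_nash():
    acts = ('silent', 'report')
    equilibria = []
    for a1, a2 in product(acts, acts):
        u1, u2 = PAYOFF[(a1, a2)]
        if (all(PAYOFF[(d, a2)][0] <= u1 for d in acts)
                and all(PAYOFF[(a1, d)][1] <= u2 for d in acts)):
            equilibria.append((a1, a2))
    return equilibria
```

With these payoffs, reporting is a dominant strategy: the unique equilibrium has both firms running to the authorities, even though mutual silence would leave each better off. The caveat in the text enters through the payoffs themselves: amnesty lowers the expected cost of joining a cartel in the first place.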
Further strategic considerations arise with respect to such issues as entry deterrence and innovation. We noted above that a monopolist may have an incentive to employ product proliferation as an entry-deterring device. The same strategy applies to oligopolists, for example, by installing excess capacity, or by offering exclusive contracts, or by adopting strategies such as advertising and creating consumer loyalty that raise the costs of potential entrants.
With respect to innovation, the techniques that we can draw from game theory allow us to answer questions such as whether a monopolist has a greater incentive to innovate than a competitive firm. The Schumpeterian hypothesis suggests that this is, indeed, the case (Joseph Schumpeter 1942). We can also consider the optimal design of a patent system. For example, under what circumstances should patents be “short and broad” rather than “long and narrow” (Richard Gilbert and Carl Shapiro 1990; Paul Klemperer 1990)? What are the efficiency implications of the patent system? On the one hand, it may encourage innovation. On the other hand, it creates at least local monopolies. Moreover, given the “first-past-the-post” nature of the patent system, where being second to file is of no benefit, the patent system may well encourage wasteful patent races, implying that encouraging research joint ventures might well be efficiency enhancing.14
Information, Agency and Contracts
Turning once again from the firm’s outside markets to the inside of the firm, modern developments in industrial organization research have given us important insights by applying the language and techniques of game theory. These developments are rooted in two fundamental propositions: first, that contracts are inevitably incomplete; and second, that parties to a transaction have asymmetric information. This opens up the possibility of opportunistic behavior by the parties to a contract and raises the question of what type of contract should actually be formulated.
Suppose that agents have private information, for example with respect to their true nature or the true quality of a good or service that they are offering for sale. Suppose further that agents’ actions can be observed only imperfectly. In other words, it is not clear whether or not an agent has abided by the terms of a contract. Two potential market failures arise, one pre-contract and the other post-contract.
Pre-contract, there is the potential for adverse selection. In insurance markets, for example, potential buyers are privately informed of their true risk characteristics, with the result that those who actually purchase insurance are a biased sample of the population of potential buyers. Moreover, raising insurance premiums drives out good risks, worsening the risk class that actually buys the insurance.
In markets that contain both new and pre-owned goods, similar biases emerge, as demonstrated by George Akerlof’s Nobel prize-winning work (Akerlof 1970). Consider the market for automobiles and suppose that there is a small, but finite, probability that a new automobile is a low-quality “lemon”. Now consider a typical buyer. Having bought a new car, the buyer is privately informed regarding whether or not he has a lemon. Suppose that he does, indeed, have such a lemon. Then the owner will be tempted to sell the lemon and replace it with a new car that is likely not to be a lemon. Suppose, by contrast, that the buyer finds that his new car is perfect. Then he will prefer to hold on to it since selling it and replacing it with a new car exposes him to the risk that he purchases a lemon. The implication is that the second-hand market will contain a much higher proportion of lemons than the new market.
Post-contract, if an agent’s actions cannot be perfectly observed, there is the potential for moral hazard. Having insurance can bias risk-taking behavior, as was seen rather spectacularly in the United States savings and loan scandal. When the true quality of the goods or services being offered is difficult to observe – for example advertising, management or financial consulting, or maintenance services – there is the potential for the agent providing the goods or services to renege on the actual quality offered as compared to the quality that is contracted.
Moral hazard is particularly problematic in principal–agent settings.15 Here a principal delegates the performance of an activity to an agent who is supposed to work on the principal’s behalf. Such settings are common: shareholders of publicly traded companies delegate the management of such companies to senior management; senior management delegates to middle management; clients delegate to consultants, and so on. The interests of the principal and those of the agent are not necessarily fully aligned, which would of itself not be a problem if the actions of the agent could be perfectly observed by the principal and so could be made the subject of an enforceable contract. However, the principal can only imperfectly observe the actions of the agent, opening up the possibility that the agent will act opportunistically post-contract.
The challenge in such principal–agent settings is finding mechanisms that can control, at least partly, the temptation of the agent to renege on the true spirit of the contract with the principal. One possibility is for the principal to offer the agent an incentive-based contract. Such a contract has to satisfy two constraints. The first is the participation constraint: the agent must be willing to accept the contract – a necessary condition for which is that it is better than the agent’s next-best contract. The second is the incentive compatibility constraint: the contract must make it in the self-interest of the agent to behave “well” – exert optimal effort, for example – rather than behave badly.
Incentive-based compensation is not always appropriate. First, note that an incentive contract results in an inefficient allocation of risk. Risk is moved from the principal, the firm, which is roughly risk neutral, to the agent, who is typically risk averse. As a result, an incentive contract must offer the agent a risk premium to compensate for the risk associated with the contract. The literature (see, for example, Paul Milgrom and John Roberts 1992, Chapter 7) suggests that such incentive-based compensation schemes are more likely to be effective when:
the value of output is sensitive to the agent’s effort;
the agent is not very risk averse;
the level of risk beyond the agent’s control is low;
the agent’s effort is sensitive to increased incentives;
the agent’s output can be measured at low cost.
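The conditions above can be read off a stylized linear-contract model in the Holmström–Milgrom tradition; the functional forms below are an illustrative sketch, not drawn from this text. Output is x = e + noise (variance sigma2), the wage is w = alpha + beta·x, the agent bears effort cost c·e²/2 and has constant absolute risk aversion r:

```python
# The agent chooses effort e = beta / c, so total certainty-equivalent
# surplus is S(beta) = beta/c - beta**2/(2*c) - (r/2) * beta**2 * sigma2.

def optimal_piece_rate(r, c, sigma2):
    # Maximizing S(beta) gives beta* = 1 / (1 + r * c * sigma2):
    # the optimal incentive intensity falls with risk aversion (r),
    # background risk (sigma2) and the cost of effort (c).
    return 1.0 / (1.0 + r * c * sigma2)
```

A piece rate of one, obtained when the agent is risk neutral or output is riskless, corresponds to “selling the activity” to the agent outright; each departure from that benchmark mirrors one of the conditions in the list.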
Another possibility is to place the contract in a repeated game setting or, equivalently, to tie the agent’s reputation to the agent’s measured performance. Now the agent recognizes that reneging on a contract gives a short-term gain but, if reneging is detected, results in a long-term loss. This requires, of course, that it is feasible and desirable to offer the possibility of a long-term contract. Major Japanese assembly companies such as Mitsubishi, Toyota and Hitachi, for example, traditionally offered “life-time employment” contracts to their employees. They continue to use negotiation with long-term suppliers rather than competitive tendering in the sourcing of critical inputs. In addition, for reputation to be effective in controlling moral hazard, there must be a finite, non-trivial probability that opportunism by the agent will be detected by the principal.
A third possibility is to give the agent “ownership” of the value the agent creates. Commission-based contracts that are typical in some retail settings and which are characteristic of contracts with sales teams have some of this property. Management buyouts of publicly traded companies change the status of managers from agents to principals. In many upscale hairdressing salons the top stylists are self-employed, renting space from the owner of the salon.
Franchising is a very common method by which a principal, the franchisor, gives ownership to an agent, the franchisee. In the typical franchise contract the franchisee gains the right to operate under the franchisor’s name in return for a fixed up-front fee and a royalty based on some measurable performance criterion such as sales or turnover. Such a contract makes the franchisee the residual claimant on the returns generated by the franchise, aligning the interests of the franchisee with those of the franchisor.16
Note, however, that not everything can be franchised. There remains the potential for moral hazard if the franchisee can increase their returns by reneging on the quality standards agreed with the franchisor. In other words, franchising is most likely to be found when the quality of the good or service being franchised can be easily defined and measured, and when the reputational costs to the franchisor of underperformance by a franchisee are relatively small.
The issues that we have been discussing in this section have direct implications for the ways in which organizations – of any type – should be structured: what is sometimes referred to as organizational architecture (James Brickley et al. 2009). An effective organizational architecture must balance three components:
The allocation of decision rights: Who has the authority to make what decisions?
The reward and incentive systems: How are individuals rewarded for exercising their decision rights?
Monitoring and performance evaluation: What key performance indicators are used to evaluate and monitor managers and employees?
Imbalance in any one of these “three legs of the stool”, as Brickley et al. term them, generates a dysfunctional organizational architecture, exposing the organization to the risk of the types of behavior that led to major losses at companies such as Enron, J.P. Morgan, Société Générale, Sumitomo and Barings Bank.
The empirical methodologies used in industrial organization research have paralleled the evolution of research questions over the years. Early studies within the structure–conduct–performance paradigm commonly employed inter-industry analysis, regressing measures of performance, such as profitability, on measures of structure, such as industry size. A limitation of such approaches was establishing causality: as we noted in the early part of this Introduction, while structure can influence performance, profitability can, in turn, affect market structure. This problem of reverse causality, a form of endogeneity, led to the adoption of more appropriate methodologies, including instrumental variables and structural equation estimation.
As theoretical research turned to identifying foundations for the theory of the firm and to delineating the nature of strategic interaction among firms, empirical research and methods shifted markedly towards models that reflect the optimizing behavior of individual decision-makers within specific industries or markets, such as consumers maximizing utility and firms maximizing profits.
Discrete choice models of demand consider the case of a typical consumer selecting one option from among two or more mutually exclusive choices.17 The econometric analyses, including multinomial probit and logit, nested logit and mixed logit estimation, use consumer demographics, price, and product or choice attributes to estimate demand. The multinomial logit model relies on the assumption of independence of irrelevant alternatives (IIA), which imposes the restriction that the relative probability the consumer chooses one option over another not be influenced by the presence of additional alternatives in the choice set. The IIA assumption can be problematic when the elements of the choice set include close substitutes. Daniel McFadden (1984) and Kenneth Train (2009) provide very useful guides to these models.
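The IIA restriction can be seen directly in the logit formula. The following minimal sketch uses illustrative utilities echoing the classic “red bus/blue bus” example; none of the numbers come from the text:

```python
import math

def logit_probs(utilities):
    """Multinomial logit choice probabilities: P_i = exp(v_i) / sum_j exp(v_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Two alternatives: car (v = 1.0) and blue bus (v = 0.5).
p_two = logit_probs([1.0, 0.5])

# Add a red bus, a near-perfect substitute for the blue bus (v = 0.5).
p_three = logit_probs([1.0, 0.5, 0.5])

# IIA: the car/blue-bus odds are unchanged by the new alternative, even
# though intuition says the red bus should draw share mainly from the
# blue bus rather than proportionately from both existing options.
assert abs(p_two[0] / p_two[1] - p_three[0] / p_three[1]) < 1e-12
```

This invariance of relative odds is exactly why the multinomial logit model is problematic when the choice set contains close substitutes, and why nested and mixed logit models relax it.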
In their seminal work, Steven Berry et al. (1995) introduced a demand (and cost) estimation methodology for the automobile industry and, more generally, for industries characterized by product differentiation and imperfect competition. Their methodology allowed the econometrician to use information on product characteristics and on aggregate consumer characteristics as the basis for estimation, rather than relying on consumer-level data. Further, their approach allowed for variation in consumer tastes, as well as for a range of consumer substitution patterns that the IIA assumption in the multinomial logit model precluded.
The empirical requirements of industrial organization research reach beyond supply and demand estimation. For example, difference-in-differences estimation allows the econometrician to measure the impact of a policy over time by comparing outcomes in the group targeted by the policy to outcomes in a group outside the scope of the policy. If one state adopts tax incentives for firms to invest in alternative energy research, while a neighboring state does not, the change in the rate of adoption of new energy technologies can be compared across the two states (the difference in differences) to determine how effective the tax incentives are relative to the baseline trend. Alternatively, duration analysis methods, which estimate probabilities of “survival”, can be used, for example, to estimate the likelihood of a firm continuing to sell one product within a multi-product line, given that the product has survived until time t. For an excellent review of recent developments in empirical industrial organization methodology, see Liran Einav and Jonathan Levin (2010).
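The tax-incentive example reduces to simple arithmetic. The adoption rates below are purely illustrative, not data from the text:

```python
# Hypothetical adoption rates of new energy technologies.
# State A adopts the tax incentive; state B does not.
before = {"A": 0.10, "B": 0.12}   # pre-policy adoption rates
after  = {"A": 0.25, "B": 0.17}   # post-policy adoption rates

change_treated = after["A"] - before["A"]   # 0.15: policy effect plus trend
change_control = after["B"] - before["B"]   # 0.05: the baseline trend alone

# Subtracting the control state's change removes the common trend:
did_estimate = change_treated - change_control   # 0.10
```

Under the usual parallel-trends assumption, the 10 percentage point remainder is attributed to the incentive; without the control-state subtraction, the naive before/after comparison would overstate the effect by the size of the baseline trend.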
Regulation and Antitrust
The early development of antitrust policy, at least in the United States, predates the formal modeling of imperfectly competitive markets that is the defining characteristic of industrial organization. It is, however, based at least in part on economists’ intuitive understanding that the exercise of monopoly power is unlikely to be benign. We noted above that Adam Smith was fully aware of the potential for collusion. He was also aware of the market impact that the exercise of monopoly power, once attained, implies: “The monopolists, by keeping the market constantly understocked, by never fully supplying the effectual demand, sell their commodities much above the natural price” (Smith 1776, Book I, Chapter 7).
The emergence of large firm trusts – such as Standard Oil and American Tobacco – and the ways in which these trusts were created and subsequently behaved led to the enactment of the first-ever US antitrust law, the Sherman Act of 1890. Section 1 prohibits contracts, combinations and conspiracies “in restraint of trade”. Section 2 makes any attempt to monopolize a market illegal.
Section 1 remains central to antitrust policy, being the essential statute under which cartels are prosecuted. Section 2 has had a much more checkered history, with the courts’ decisions being much less clear on which actions leading to monopolization should be considered illegal. Essentially, the courts developed a “rule of reason”, requiring that the antitrust authorities not only show that there was monopolization of the relevant market, but also that this was achieved through explicit intent or exploitation of monopoly power.
The ambiguity introduced by the rule of reason approach to Section 2 of the Sherman Act led to the passage of the Clayton Act of 1914. This Act and the subsequent Celler-Kefauver Act of 1950 were intended to stop monopolization of a market “in its incipiency” by inhibiting the use of rebates, tying practices and exclusive contracts and also by preventing mergers that were considered to be anticompetitive. The Federal Trade Commission Act of 1914 and subsequent amendments to the Clayton Act made illegal “unfair methods of competition” and “unfair and deceptive acts or practices”. At about the same time, adoption of the rule of reason approach resulted in the judgment in the US Steel case of 1920 that “the law does not make mere size an offence or the existence of unexerted power an offence – it does not compel competition nor require all that is possible”.
This decision can be seen as providing the initial intellectual stimulus that led to the emergence of industrial organization as an important field of study. It showed that economists lacked a coherent set of principles upon which the study of imperfectly competitive markets could be based and so lacked the ability to influence the formulation of antitrust policy.
The first step was to provide a consistent and practical way of determining the structure of a market. The second was to draw clear links from market structure to market conduct and market performance. As we saw above, this led to the development of the SCP paradigm and to something of a reversal of the US Steel case, with firm size, no matter how attained, becoming an important consideration.
The 1970s brought an important counter-revolution pioneered by the Chicago School of lawyers and economists.18 Their work reflected a growing sense that there were important failings in the SCP paradigm. Take, for example, the convincing empirical evidence that firms with large market shares earned higher profits. A “traditional” SCP theorist would take this as implying that market share leads to monopoly power and higher profits. On the other hand, it could be argued that the higher profits came from the firm being more efficient or more talented than its rivals, resulting in the firm gaining increased market share while at the same time providing real benefits to its consumers.
More generally, as we noted above, the SCP paradigm ignored important feedback loops from performance and conduct to market structure and paid little attention to strategic interaction. As Joe Bain (1956) noted, for example, firms in a highly concentrated industry might not be able to exploit their market power if they have to take into account the potential for new firms to break into their market. In other words, we cannot analyze firms’ conduct without also analyzing the barriers to entry confronting potential rivals. (Of course, we should also recognize that these barriers to entry might be endogenous, determined by the strategic actions of the incumbent firms.)
The Chicago School argued, more broadly, that many business practices viewed as harmful could, when considered as part of corporate strategy and tactics, actually improve economic efficiency and benefit consumers. Consider the vertical relationships between a firm and its suppliers or its distributors. A negative view of a vertical contract between two firms is that it introduces the possibility of market foreclosure. A more positive view is that it eliminates the inefficiency of double marginalization. On similar grounds, the Chicago School argued that contracts awarding exclusivity or controlling retail prices actually brought benefits to consumers, for example by encouraging retailers to provide support services that they would not otherwise offer.
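The double-marginalization point can be made concrete with a linear-demand sketch. The parameters are illustrative, not from the text: with inverse demand P = a - Q and constant marginal cost c, an integrated monopolist charges (a + c)/2, whereas successive upstream and downstream monopolists stack two markups and charge (3a + c)/4.

```python
a, c = 100.0, 20.0   # inverse demand P = a - Q, marginal cost c (illustrative)

# Integrated monopolist: max (P - c) * (a - P)  =>  P = (a + c) / 2.
p_integrated = (a + c) / 2
profit_integrated = (p_integrated - c) * (a - p_integrated)

# Double marginalization: the upstream firm sets a wholesale price w, and
# the downstream firm then marks up again, choosing P = (a + w) / 2.
# Anticipating this, the upstream firm's best wholesale price is w = (a + c) / 2.
w = (a + c) / 2
p_double = (a + w) / 2
q_double = a - p_double
profit_double = (w - c) * q_double + (p_double - w) * q_double

# Two stacked markups mean a higher retail price AND lower total profit:
assert p_double > p_integrated            # 80 > 60 with these numbers
assert profit_double < profit_integrated  # 1200 < 1600 with these numbers
```

Both firms and consumers lose relative to integration, which is why a vertical contract (or merger) that removes the second markup can be efficiency-enhancing, exactly the Chicago School's point.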
Similar reasoning weakened regulators’ ability to prevent horizontal mergers, in this case on the grounds that such mergers offered significant cost savings, or that the exercise of market power by the merged firm would be effectively constrained by potential new entrants.
The contributions of the Chicago School to the formulation and implementation of antitrust policy are significant and have been long-lasting. Their analysis was limited, however, by the lack of an effective framework with which analysts could model strategic interaction. This takes us back to the language of game theory. From the early 1980s, there was a rapid spread of the application of game theory to almost every aspect of imperfect competition. We are now seeing a post-Chicago School view based on the “new” industrial organization, in which antitrust cases are examined using explicitly game-theoretic tools and analyses.19 For example, the roots of the merger guidelines adopted by the Federal Trade Commission in the United States, and many of the simulation analyses of proposed mergers, can be found in the Cournot–Nash game-theoretic model.
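The Cournot–Nash logic behind simple merger simulations can be sketched as follows. This is a stylized symmetric model with illustrative parameters, not any agency's actual methodology: with linear demand and constant marginal cost, a merger that reduces the number of symmetric firms raises the equilibrium price unless it generates offsetting cost savings.

```python
def cournot_price(n, a=100.0, b=1.0, c=20.0):
    """Equilibrium price with n symmetric Cournot firms, linear inverse
    demand P = a - b*Q and constant marginal cost c. In equilibrium each
    firm produces q = (a - c) / (b * (n + 1))."""
    q = (a - c) / (b * (n + 1))
    return a - b * n * q

# A merger of two of four symmetric firms (with no cost savings) leaves three:
p_pre, p_post = cournot_price(4), cournot_price(3)
assert p_post > p_pre   # price rises from 36 to 40 in this example
```

Merger simulation in practice embeds this comparative-static logic in richer demand systems, but the driving force is the same: fewer independent quantity-setters means less output and a higher price, absent efficiencies.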
The simplest way to conclude this Introduction to the Dictionary of Industrial Organization is to note that we have come full circle. Antitrust policy provided the much-needed early stimulus and econometric analysis for the development of industrial organization. Subsequent theoretical developments in industrial organization and more recent advances in econometric techniques have substantially influenced the modern formulation and application of antitrust policy.
This concept was first suggested by Jensen and Meckling (1976).
An accessible review of these ideas is provided by Besanko et al. (2013). The central role of contracts as mediators of transactions is discussed in detail in Hart (1995) and Bolton and Dewatripont (2005).
Jensen and Meckling (op. cit.).
Milgrom and Roberts (1990).
The literature providing exceptions is too large to cite here. For a recent analysis, see McAfee and Wiseman (2008).
Pepall et al. (2008) show that the same condition applies to second-degree price discrimination. First-degree price discrimination always maximizes social welfare.
This model actually predates the Salop (1979b) analysis.
The precise conditions under which amnesty might be granted are given in a speech by the Assistant Attorney General at http://www.usdoj.gov/atr/public/speeches/2247.htm.
See Hart and Holmström (1987) and Laffont and Martimort (2001).
For a review of the economics of franchising, see Blair and Lafontaine (2005).
For a discussion of the theory of discrete choice in the context of industrial organization, see Anderson et al. (1992).
For a review of the Chicago School’s approach to antitrust policy see Posner (1979).
1994. “Informational Requirements of Predation Detection”, Mimeo, Department of Economics, European University Institute, Florence, reproduced in Phlips (1995).
Further Reading
There are numerous textbooks and references on the subject of industrial organization and related topics that can be recommended. Some of these have already been referenced but we feel that it is informative to present the list here.
In addition, there are several books in the “management” literature that are relevant to the study of industrial organization and that are well worth consulting to gain a more applied approach to many of the dictionary entries.
In addition to textbooks, there are several journals that specialize in the field of industrial organization.
International Journal of the Economics of Business
International Journal of Industrial Organization
Journal of Economics and Management Strategy
Journal of Industrial Economics
Journal of Law and Economics
Journal of Law, Economics and Organization
RAND Journal of Economics
Review of Industrial Organization