
Algorithms and competition: the latest theory and evidence

Ambroise Descamps, Timo Klein and Gareth Shier

Keywords: algorithms; competition; consumers; harm to competition

In the modern economy, algorithms influence many aspects of our lives, from how much we pay for groceries and what adverts we see, to the decisions taken by health professionals. As is true with all new technologies, algorithms bring new economic opportunities and make our lives easier, but they also bring new challenges. Indeed, many competition authorities have voiced their concerns that under certain circumstances algorithms may harm consumers, lead to exclusion of some competitors and may even enable firms (knowingly or otherwise) to avoid competitive pressure and collude. In this article, we explain how algorithms work and what potential benefits and harms they bring to competition.


1. Introduction

Algorithms influence many aspects of our economic lives, from the advertisements we see, to the movies we watch or the interest rate we pay on our mortgages. The COVID-19 pandemic increased our reliance on digital tools and one can only expect this trend to continue, if not to accelerate.

The use of automated decision-making tools, such as algorithms, is not new and existed long before the so-called ‘GAFAM’ did. 1 What has changed over the last two decades is the exponential rise of computing power, the collection of very large and detailed datasets and the democratization of internet use. These changes have led algorithms and artificial intelligence (AI) to play a central role in our online lives.

The adoption of new technologies is the main long-term driver of human development. 2 Among other things, AI has been used to fight crime by optimizing police patrols, to assist health professionals by improving breast cancer detection from mammograms, and to help sellers with price-setting on online platforms. 3 As is true with all technology, algorithms also come with some costs and risks. 4

In the context of online economic decisions, broadly speaking, algorithms influence two main aspects of behaviour:

  • Choice architecture, which refers to the way products are presented to consumers, such as the content and timing of the information received or the consumer journey. For instance, online shoppers often see products that are ranked and classified according to different criteria.
  • Prices, which can now be easily personalized (e.g. plane tickets), but also observed by many individuals in the market (e.g. on comparison websites).
According to competition authorities in many jurisdictions, this influence of algorithms on our behaviour may be used to harm competition.

For instance, the current version of the draft Digital Markets Act published by the European Commission (Commission) 5 proposes that so-called gatekeepers ‘refrain from treating more favourably in ranking services and products offered by the gatekeeper itself’. 6 This refers to the ability of certain firms that act as platforms where consumers and suppliers meet to influence choice architecture in a way that favours their own products and services. Similarly, the Competition and Markets Authority (CMA) has recently published a research paper on how algorithms can harm competition and proposes that: 7

[regulators] can also use their information gathering powers to identify and remedy harms on either a case-by-case basis or as part of an ex-ante regime overseen by a regulator of technology firms, such as the proposed Digital Markets Unit (DMU) in the UK.

In this article we analyse the rise of algorithms and its implications for competition policy. We aim to answer the following questions:
  • How do algorithms and AI, in their simple and complex forms, work in practice, and how does this affect businesses and competition?
  • How do algorithms benefit competition?
  • What are the main theories of harm relating to the use of algorithms?
  • How can current policy proposals help in solving these issues?
This article places particular emphasis on algorithmic collusion, but also discusses algorithms more broadly.

2. What are AI, algorithms, statistical modelling and machine learning?

Computer programs, statistical models and AI are designed to help humans classify information (e.g. to automatically classify emails as spam or, in the classic example used in the field, to separate images of cats from images of dogs), match preferences (e.g. in dating websites or song selection algorithms) or predict outcomes (e.g. in weather forecasting or assessing the likelihood that a convicted criminal will reoffend).

Many firms use so-called AI or other forms of automated decision-making tools with the aim of improving or accelerating decisions. The applications of AI are numerous and highly varied.

AI has been defined by Elaine Rich, author and world-leading academic in computer science, as ‘the study of how to make computers do things at which, at the moment, people do better’. 8 This broad definition carries an important nuance in that it compares machines and humans: AI is about a computer system acquiring human abilities – and perhaps ultimately improving upon them. A notion of temporality can usefully be added to this definition: today's machines are already much better than humans at doing a wide range of things using vast computing power – AI is about getting machines to do even more. AI also typically involves a degree of interaction between humans and machines.

To make AI a more manageable concept, it can be associated with programs that are not made up of a simple set of step-by-step rules for decision-making, but rather a complicated web of processes that combine to give a probabilistic result, as if the machine has developed an ‘intelligence’ of its own.

Firms and institutions today use programs and statistical models of varying complexity. Regardless of their sophistication, such tools share common features and risks when decision processes are automated.

2.1. Algorithms and statistical modelling

The simplest forms of such programming involve simple algorithms (i.e. human code-based decision-making processes of the type ‘if X is true, then Y, otherwise Z’), as well as statistical modelling (i.e. human-designed statistical modelling aimed at explaining a particular phenomenon using factors that influence it, such as classical regression analysis). These two forms of programming can be combined, but in their ‘simple’ form they are always designed by humans.
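To make this concrete, below is a minimal sketch of such a human-designed rule; every threshold and price in it is hypothetical, invented purely for illustration.

```python
def simple_pricing_rule(competitor_price: float, stock_level: int) -> float:
    """A hand-coded 'if X is true, then Y, otherwise Z' decision process.

    All thresholds and prices are hypothetical, chosen only to
    illustrate a fully human-designed algorithm.
    """
    if stock_level < 10:
        # Low inventory: hold the price high to ration the remaining units.
        return 12.99
    elif competitor_price < 10.00:
        # The competitor is pricing aggressively: match it, subject to a floor.
        return max(competitor_price, 8.50)
    else:
        # Default list price.
        return 10.99
```

Every branch of such a rule is written, and can be inspected, by a human – the defining feature of this ‘simple’ form of programming.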

2.2. Statistical learning methods

The next level of sophistication involves machines ‘learning’ how to reach the desired outcome (such as classifying information, matching individuals or predicting a result). While the objective is set by a human, the machine decides the best way to achieve it, through ‘machine learning’.

In order to learn, the machine generally relies on a training dataset, a test dataset, one or more techniques, and an objective (such as minimizing the prediction error). The process is described in Box 1 below. Machine learning relies on (sometimes complex) algorithms and may make use of statistical modelling. Yet, the form of these sub-programs is not necessarily imposed by humans. Data scientists nevertheless typically understand the precise mechanics of their models and are able to explain the process used by the machine to reach a decision.

Illustration of the concept of machine learning

A machine learning algorithm is fed a dataset (such as pictures of cats and dogs), at least one technique and one objective. The algorithm estimates a model using the training dataset. The complexity of such models can vary widely, from simple linear equations to complex neural networks. If the data is labelled, the machine automatically ‘knows’ whether a picture represents a cat or a dog, and therefore follows a supervised learning process. If the data is not labelled, the machine instead follows an unsupervised learning process.

In the test dataset, the quality of the initial model is assessed in light of the objective that was set (e.g. predicting whether a picture accurately represents a cat or a dog in a supervised learning process). If the objective is reached in the test dataset, the model can be used. If the results are not satisfactory, a new mathematical model is estimated on the training dataset until the objective specified in the algorithm is reached in the test dataset.

Figure 1: The machine learning process. Source: Oxera.
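As a hedged sketch of the loop described in Box 1, the code below fits a supervised classifier on a labelled training dataset and accepts it only if it meets an accuracy objective on the held-out test dataset. The synthetic data, the choice of a simple linear model and the 90% objective are all illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labelled data: each row is a feature vector (e.g. image features) and
# each label a class (e.g. 0 = cat, 1 = dog). Synthetic here for brevity.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split the data into a training dataset and a test dataset.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Estimate a model on the training dataset (a simple linear model here;
# more complex choices, such as neural networks, follow the same loop).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Assess the model against the objective on the test dataset.
accuracy = accuracy_score(y_test, model.predict(X_test))
if accuracy >= 0.90:  # hypothetical objective
    print(f"Objective met (accuracy {accuracy:.1%}); the model can be used.")
else:
    print(f"Objective missed (accuracy {accuracy:.1%}); re-estimate the model.")
```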

However, the machine may also use other methods, such as artificial neural networks or deep learning, 9 which typically become ‘black boxes’ even for data scientists (in the same way that we may be unable to explain how the human brain reaches a particular decision).

Machine learning is particularly valuable when one is ‘data rich and understanding poor’ – which means that there is a lot of data that can be used to train the algorithms and little prior understanding about how to take optimal decisions.

Within machine learning, there are different levels of ‘machine autonomy’:

  • ‘supervised’ – when the training dataset contains data labels identified by humans (e.g. when pictures of cats are identified and labelled as such by humans in the training dataset);
  • ‘unsupervised’ – when the training data is not labelled and the machine needs to find patterns by itself. For instance, the machine may use a survey to identify groups of users that share common characteristics; 10 and
  • ‘reinforcement learning’ – when the machine learns through independent trial-and-error exploration, and amends its future decision-making process to improve optimization.
Machine learning techniques are increasingly used for pricing purposes, as in the systems of Danish a2i Systems or UK-based Kalibrate. These companies offer AI pricing software to optimize dynamic petrol pricing. Initial training is based on historical data (such as past transactions, competitor prices and other market conditions), after which prices are set by taking into account ‘real-time’ information (such as competitors’ current prices, the weather and traffic conditions). The resulting transactions are then fed back into the system and used to re-optimize the algorithm. These programs therefore make use of supervised learning in combination with a reinforcement mechanism, as sketched below.
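The loop below is a highly simplified, hypothetical sketch of such a system: a demand model is first trained on historical data, prices are then chosen using ‘real-time’ inputs, and realized transactions are fed back to re-train the model. The feature set, the linear demand model and the price grid are assumptions made purely for illustration and do not describe any actual vendor's software.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

PRICE_GRID = np.linspace(1.20, 1.60, 41)  # candidate prices per litre (hypothetical)

def features(price: float, competitor_price: float, traffic: float) -> list:
    return [price, competitor_price, traffic]

# 1. Initial training on historical transactions (features -> litres sold).
X_hist = [[1.40, 1.42, 0.8], [1.50, 1.45, 0.5], [1.35, 1.38, 0.9]]  # toy data
y_hist = [950.0, 600.0, 1100.0]
demand_model = LinearRegression().fit(X_hist, y_hist)

def set_price(competitor_price: float, traffic: float, cost: float) -> float:
    # 2. Pick the grid price with the highest predicted margin, given
    #    current ('real-time') market conditions.
    def predicted_profit(p: float) -> float:
        demand = demand_model.predict([features(p, competitor_price, traffic)])[0]
        return (p - cost) * max(demand, 0.0)
    return max(PRICE_GRID, key=predicted_profit)

def observe_and_retrain(price, competitor_price, traffic, litres_sold):
    # 3. Feed the realized transaction back in and re-estimate the model.
    X_hist.append(features(price, competitor_price, traffic))
    y_hist.append(litres_sold)
    demand_model.fit(X_hist, y_hist)
```

Steps 2 and 3 then repeat indefinitely: each new transaction refines the demand model that sets the next price.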

The use of such petrol pricing algorithms can benefit both petrol stations and consumers, as it reduces the cost of managing prices and increases market efficiency. Petrol stations can pass these lower costs and faster price adjustments on to consumers, who pay lower prices as a result. However, competition authorities have voiced their concern that such pricing algorithms may be used as an instrument of collusion to sustain supra-competitive petrol prices. 11 We discuss in the rest of the article how problems such as this may emerge.

3. The pro-competitive and efficiency enhancing effects of algorithms

As with many more familiar business tools, algorithms offer pro-competitive and efficiency-enhancing effects alongside the potential risks. 12 There are at least five ways in which pricing algorithms can produce win-win outcomes for both firms and their consumers.

3.1. Reduced search costs

Matching algorithms identify for a consumer the products that best correspond to her preferences. This means that, rather than spending hours searching through countless shops or websites, a consumer is directly matched with the most relevant options for her.

3.2. Improvement of existing products

The collection of data, combined with the use of algorithms, allows firms to optimize the development of new and existing products, often to the benefit of consumers. 13 It allows firms to spot design flaws in existing products and to single out the most important characteristics of products and services, and therefore to better identify and meet consumer needs.

For instance, Google Maps relies on learning across users to improve traffic predictions. Traffic data is collected live and combined with historical data in order to yield accurate traffic predictions. This creates a virtuous circle: as traffic predictions improve, more drivers use Google Maps, which generates more data to improve traffic predictions, which in turn leads more drivers to adopt it, and so on.

3.3. Cost reductions

It can be difficult for a multi-product firm to identify the ‘right’ price for all of its products; this is particularly challenging for online retailers that sell hundreds or even thousands of different products in a fluctuating market with changing costs and inventories. Here, the use of automated decision rules or optimization algorithms when setting prices can lead to significant efficiency gains. These cost savings can then, in whole or in part, be passed on to consumers through lower prices.

3.4. Optimal price discovery

Well-functioning markets are powerful mechanisms for allocating scarce resources, so long as prices are set ‘just right’. If prices are too high, there will be too few consumers willing to buy; if prices are too low, there will be too few producers willing to sell.

Pricing algorithms can help competitive markets function better by improving this overall price discovery process. Using data analytics, pricing algorithms can enable firms to more quickly identify the optimal price – especially in rapidly changing market conditions. Not only will this help the market to find an equilibrium of buyers and sellers, but it will signal where entrepreneurs should focus their resources and efforts to provide the products most valued by consumers.
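In textbook terms, the ‘just right’ price that a profit-maximizing seller searches for satisfies the standard mark-up (Lerner) condition, where \(p^*\) is the optimal price, \(c\) the marginal cost and \(\varepsilon\) the own-price elasticity of demand:

\[ \frac{p^* - c}{p^*} = \frac{1}{|\varepsilon|}. \]

This is a standard result rather than anything specific to algorithms; the point is that the faster and more accurately an algorithm can estimate \(\varepsilon\) from data, the faster the market converges on this price.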

3.5. Reduced barriers to entry

Pricing algorithms may also help firms to enter new markets previously reserved for knowledgeable and experienced players. For example, the marketing and pricing of toys previously required good knowledge of what children like and the latest playground trends, typically built on years of experience. However, with the introduction of online pricing algorithms, manufacturers can now let the data do the work for them, automatically experimenting with different prices for different toys – starting with a small assortment and gradually expanding based on actual sales.

This ability to enter unknown markets and be guided by self-generated data analytics can help level the playing field between new firms and established incumbents. Similarly, existing retailers may find it easier to broaden their product offering and include products about which they may have less expertise.

4. What are the concerns with algorithms?

Despite the many potential pro-competitive justifications for the use of algorithms, there is a concern that algorithms may lead to anticompetitive market outcomes.

The CMA has found 14 that algorithms may harm competition by:

  1. directly harming consumers, by allowing firms to extract consumer surplus, thus also raising fairness issues;
  2. excluding competitors of dominant firms, by deterring competitors or new entrants from challenging their market position; and
  3. leading to price collusion, whereby algorithms allow – inadvertently or otherwise – firms to avoid price competition.
In this section, we discuss in turn each of the main theories of harm covered in these aspects. While the CMA has focussed on the first two types of harm, we focus below on algorithmic collusion.

4.1. Direct harm to consumers

In this section, we consider direct harms to consumers relating to personalized pricing and choice architecture theories of harm.

a. Personalized pricing harms

Personalized pricing is also known to economists as ‘price discrimination’. Price discrimination is the ability of a firm to charge different prices to different consumers. Price discrimination can occur when: (i) the firm has the ability to identify different groups of consumers with different willingness to pay, and (ii) these different consumers cannot resell the goods they purchase to each other.

The use of algorithms and the collection of detailed data on consumers allows firms to precisely identify the different groups of consumers, and how much they are willing to pay for different goods.

From an economic perspective, the effect of personalized prices is ambiguous. On the one hand, it allows firms to sell at lower prices to some consumers who would not be willing to buy their goods under uniform pricing.

On the other hand, some consumers will see a price increase with personalized pricing, which implies a surplus transfer from consumers to firms. This can be problematic if undertaken by a monopolist which will extract large surpluses without necessarily increasing sales. This practice can also be perceived as unfair by certain consumers. 15
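A stylized numerical example, with invented numbers, makes both sides of the ambiguity concrete. Suppose two consumers value a good at \(v_1 = 10\) and \(v_2 = 6\):

\[ c = 5:\quad \text{optimal uniform price } p = 10 \;(\text{profit } 5,\ \text{consumer 2 excluded});\qquad p_1 = 10,\ p_2 = 6 \;(\text{profit } 6,\ \text{both served}). \]
\[ c = 0:\quad \text{optimal uniform price } p = 6 \;(\text{profit } 12,\ \text{consumer 1 keeps surplus } 4);\qquad p_1 = 10,\ p_2 = 6 \;(\text{profit } 16,\ \text{that surplus transferred to the firm}). \]

With a high cost, personalization expands output and serves an otherwise excluded consumer; with a low cost, it mainly transfers surplus from consumers to the firm.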

b. Choice architecture harms

Algorithms allow firms not only to personalize prices but also user experience. As data is collected on each user's habits, firms can use algorithms that will optimize information display, thereby improving the user journey and, consequently, increasing profits. 16

One risk is that personalized rankings nudge users towards options that are more profitable for the firm, rather than more suitable for the user. Indeed, the behavioural economics literature has explained how human beings have a limited ability to process information, and, for instance, tend to select the default option that is presented to them. Therefore, the way information is presented to users will affect their final decisions.

Besides the exploitation of behavioural biases, there is also a risk that certain options are simply hidden from users.

4.2. Exclusionary conduct

Exclusionary conduct refers to actions undertaken by a dominant firm to deter new or existing competitors from challenging its market position. When considering exclusionary conduct in relation to algorithms, the main theory of harm considered is self-preferencing.

Self-preferencing can be defined as a situation where a firm treats its own services more favourably than those of competitors. This typically occurs when a firm operates at different levels of the supply chain, being a wholesaler and a retailer at the same time, and controls access to a critical input or consumers. For instance, this may happen when a supermarket also sells its own brand products and controls shelf display. Depending on the margins it can expect, the supermarket may have an incentive to display its own brand products on the most profitable part of the shelves.

On digital platforms, algorithms are used to control choice architecture and, in particular, the ranking in which different options are shown to consumers. A concern is that the design of algorithms can be biased in a way that favours products supplied by the platform, rather than those that would be the most relevant to the final consumers.
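As a stylized sketch of how such bias could enter a ranking algorithm, the snippet below adds a hypothetical ‘own-product’ bonus to an otherwise relevance-based score. The scoring function and weights are invented for illustration and do not describe any actual platform's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float      # match quality for the consumer's query (0-1)
    platform_owned: bool  # True if the product is the platform's own

OWN_PRODUCT_BONUS = 0.15  # hypothetical bias term

def ranking_score(item: Listing) -> float:
    # An unbiased ranker would return item.relevance alone; the bonus
    # systematically pushes the platform's own products up the list.
    return item.relevance + (OWN_PRODUCT_BONUS if item.platform_owned else 0.0)

listings = [
    Listing("Rival product A", relevance=0.90, platform_owned=False),
    Listing("Platform's own product", relevance=0.80, platform_owned=True),
]
for item in sorted(listings, key=ranking_score, reverse=True):
    print(item.name, round(ranking_score(item), 2))
# The platform's own product ranks first (0.95) despite being less relevant.
```

Real ranking systems are, of course, vastly more complex, which is precisely why such bias can be hard to detect from the outside.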

For instance, in the landmark Google Shopping case, the Commission found that Google had abused its dominant position in online search to give preferential treatment to its own comparison shopping service. 17 One of the Commission's findings was that Google's comparison service was not subject to the same ranking algorithms.

Other potential theories of harm that can lead to the exclusion of competitors include the gaming of algorithms by third party incumbents (i.e. other than the platforms) and personalized predatory pricing (using personalized pricing in order to identify and target customers most at risk of switching and to offer them a product at a price that prevents switching).

4.3. Algorithmic price collusion

In this section, we describe the main theories of harm around price collusion. At a high level, it is possible to identify at least four different ways in which pricing algorithms may lead to collusion. They are presented below in decreasing order of practical feasibility, but in increasing order of the legal concerns they would raise were they to occur.

a. Explicit algorithmic collusion

The Commission E-commerce Sector Inquiry found that a majority of online retailers used algorithms to monitor competitor prices, with approximately two-thirds using algorithms to automatically adjust prices in response. 18

The increasing ubiquity of automated pricing can, however, make it easier for competing managers with malicious intent to implement a price-fixing agreement. Rather than having to continuously discuss and calibrate joint pricing behaviour, they can now use simple algorithms instead.

The prominent example is the GB Eye/Trod case in the UK (known as the Topkins case in the US 19 ), in which competing online poster sellers were found to have used pre-programmed pricing algorithms to coordinate prices in a differentiated and unstable market. 20 This was, of course, just as illegal as conventional cartel arrangements contrived in smoke-filled rooms. The key difference, however, was that the algorithm made the implementation and monitoring of the agreement far more straightforward.

Moreover, recent research from operations research shows how algorithms can be designed to maximize joint profit conditional on mutual adoption of the algorithm (but maximize own profit otherwise). 21 Stable collusion then occurs even in the absence of communication or any other conventional form of ‘concerted practice’.

b. Algorithmic hub-and-spoke collusion

A second way in which pricing algorithms can undermine competition is through a ‘hub-and-spoke’ construction. Here, a common supplier (the ‘hub’) coordinates the prices of downstream competitors (the ‘spokes’), without the need for these downstream competitors to formulate a horizontal agreement among themselves.

While illegal, hub-and-spoke collusion is generally more difficult to build a solid case around than explicit horizontal collusion, as it requires proof that the competing downstream ‘spokes’ are aware of the likely collusive consequences of giving up their pricing autonomy. 22

The CMA has already voiced concerns of algorithmic hub-and-spoke collusion in the context of third-party pricing software providers – its concern being that a dominant pricing software provider in an industry may act upon its ability and incentive to deploy algorithms that take into account the pricing spillovers of competitors, effectively orchestrating collusion. 23

A specific allegation of digital hub-and-spoke collusion was voiced in a 2016 US class action against Uber, which alleged that Uber acted as a hub in a hub-and-spoke conspiracy by orchestrating the prices of its drivers through its common surge-pricing algorithm. 24 The class action against Uber was eventually dismissed on the grounds that Uber competes with transport more generally, including public transport.

It is important to note that the empirical evidence to date is based on data collected in controlled laboratory settings, and there is no real-world evidence that the use of third-party pricing software providers has led to collusion. Notwithstanding this, the increased use of vertical relations in algorithmic price-setting does raise clear concerns about the ability and incentive of firms to coordinate prices. 25 The fact that this coordination occurs via a vertical channel raises the concern that the line between an illegal explicit cartel and legal tacit collusion may become much more blurred.

Platform operators and pricing software firms that supply to competing firms are therefore likely to receive increased scrutiny for their role in the price-setting behaviour of competing businesses.

c. Tacit algorithmic collusion

Collusion may not always be explicit. Pricing algorithms may also enable firms to unilaterally implement strategies that have the effect of preventing aggressive pricing in the market – in effect, reaching a tacit collusive outcome that is nearly impossible to prosecute.

However, reaching a stable but silent understanding on high prices is not easy. Firms have different cost structures and inventories, new firms may enter the market, and demand may fluctuate – all factors that destabilize a tacit understanding to keep prices high.

At the same time, the practical feasibility of algorithms facilitating tacit collusion between human decision-makers should not be discounted.

For a competition authority, any ambition to ‘avoid a price war’ may sound like an attempt to collectively maintain high prices and is accordingly a red flag – even if it is achieved tacitly and via an automated process. 26

Moreover, pricing algorithms may be specified in ways that unwittingly lead to higher prices. For instance, recent academic research has shown that when competing algorithms fail to properly account for each other's prices, which is often the case, they may underestimate their own price elasticity – the downward response in demand for their own product(s) when they increase prices. 27 The net effect is that firms set prices higher than the level that would maximize their profits.
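One minimal way to formalize the point uses the mark-up rule from section 3.4: if the elasticity an algorithm estimates, \(\hat{\varepsilon}\), understates the true elasticity \(\varepsilon\) in absolute value, the price it sets satisfies

\[ \frac{p - c}{p} = \frac{1}{|\hat{\varepsilon}|} > \frac{1}{|\varepsilon|} = \frac{p^* - c}{p^*}, \]

implying \(p > p^*\): the algorithm prices above even the firm's own profit-maximizing level.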

Other academic research has shown that when competing algorithms have similar perceptions of what the optimal price points are, they may end up experimenting with equivalent prices. This, in turn, may cause them to see higher prices as optimal, not knowing that it is because they have managed to reach a supra-competitive coordinated outcome. 28

While such learning specifications might be regarded as irrational or suboptimal, and not technically collusion, their use may still be explained by current limitations in what pricing algorithms can or cannot do in practice.

d. Autonomous algorithmic collusion

The biggest concern may arise, however, when algorithms can learn to optimally form cartels all by themselves – not through instructions from their human masters (or some irrational behaviour), but through optimal autonomous learning (i.e. ‘self-learning’ algorithms). Such an outcome, were it to occur, may be very difficult to prosecute, as businesses deploying such algorithms may not even be aware of what strategy the algorithm has learned.

The big question, though, is how likely autonomous algorithmic collusion is in practice. Two recent academic papers have shown that such autonomous collusion is, in principle, feasible. 29 The research is based on computer simulation experiments in which competing firms learn to set optimal prices using reinforcement learning – i.e. where the algorithm learns through independent trial-and-error exploration. Both papers find that the firms indeed learn collusive strategies in which they keep prices high to match their competitors, and only undercut and compete if their competitors do so.
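The sketch below gives a flavour of such a simulation. It is a drastically simplified toy version, not a reproduction of the cited papers: the price grid, the stylized demand split and all learning parameters are illustrative assumptions.

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]   # discrete price grid (hypothetical)
COST = 1.0
ALPHA, GAMMA = 0.1, 0.95   # learning rate and discount factor (assumed)

def profits(p1, p2):
    # Toy demand: the cheaper firm captures the larger market share.
    if p1 < p2:
        q1, q2 = 0.7, 0.3
    elif p1 > p2:
        q1, q2 = 0.3, 0.7
    else:
        q1 = q2 = 0.5
    return (p1 - COST) * q1, (p2 - COST) * q2

# State = both firms' prices last period; action = own price this period.
Q = [defaultdict(float), defaultdict(float)]

def choose(firm, state, epsilon):
    if random.random() < epsilon:  # explore with probability epsilon
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: Q[firm][(state, a)])  # else exploit

state = (random.choice(PRICES), random.choice(PRICES))
for t in range(200_000):
    epsilon = max(0.01, 0.99998 ** t)  # slowly decaying exploration
    a1, a2 = choose(0, state, epsilon), choose(1, state, epsilon)
    r1, r2 = profits(a1, a2)
    next_state = (a1, a2)
    for firm, action, reward in ((0, a1, r1), (1, a2, r2)):
        best_next = max(Q[firm][(next_state, a)] for a in PRICES)
        key = (state, action)
        # Standard Q-learning update.
        Q[firm][key] += ALPHA * (reward + GAMMA * best_next - Q[firm][key])
    state = next_state

print("Prices at the end of the run:", state)
# In runs of this toy model, prices typically settle well above cost,
# without any communication between the two learning agents.
```

A toy model like this abstracts from demand fluctuations, entry and cost asymmetries; whether learning agents reach genuinely supra-competitive prices, and how robustly, is what the cited papers investigate in much richer environments.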

However, many practical limitations for such autonomous algorithmic collusion remain – such as the need for a long learning period in a stable market environment. Nevertheless, these papers show that autonomous algorithmic collusion is, at least in principle, possible. Moreover, advances in AI may be able to deal with these practical limitations sooner than we might expect.

5. Conclusion: anticipating regulatory vigilance

So what can businesses expect from competition and regulatory authorities?

First, machine learning tools can similarly be used by authorities to detect cases of collusion. 30 For instance, the French Competition Authority recently created a digital economy unit to develop these competencies (in the same way as several other authorities). 31 Similarly, the CMA has recently announced the creation of its own Digital Markets Unit. 32

Secondly, the use of algorithms by firms will be increasingly scrutinized or even audited. As John Moore, Etienne Pfister and Henri Piffaut (the last two of whom are the Chief Economist and the Vice-President of the French Competition Authority respectively) recently proposed: 33

[…] firms could be required […] first to test their algorithms prior to deployment in real market conditions (‘risk assessment’), then to monitor the consequences of deployment (‘harm identification’).

Moreover, the US Deputy Assistant Attorney General for Criminal Enforcement, Richard Powers, recently stated that:

Just as there's a role for corporate compliance programs in deterring price fixing that occurs in traditional smoke-filled rooms, there's a role for corporate compliance programs in preventing collusion effectuated by algorithms. 34

Algorithms have great potential for the promotion of competition – they can reduce costs, facilitate product development, increase market efficiency and promote market entry. These benefits can apply to markets as diverse as petrol pricing, airline tickets, e-commerce and financial market trading.

However, this does not mean that authorities have no need for concern and vigilance; there are legitimate competition concerns. In the German retail petrol market, a recent academic working paper shows that the rise of pricing algorithms has led to reduced competition and increased margins – by up to 28% in areas where two competing petrol stations both adopted algorithmic pricing. 35 The study highlights that it offers a strictly economic assessment and passes no legal judgment on whether there is anticompetitive behaviour; however, results like these will attract the attention of authorities and regulators.

Overall, the benefits that algorithms can provide to firms and their customers are desirable. When pursuing these benefits, businesses and other organizations using algorithms also need to reflect on the competition concerns involved, so that they can show that they are indeed earning their margins on the merits.