Page against the machine: the death of the author and the rise of the producer?
Johanna Gibson

Stephen King recently responded to the knowledge that his books, among many others,1 were being used to train the ‘artificial intelligence’ of Large Language Models (LLMs): ‘[T]hese programmers can dump thousands of books into state-of-the-art digital blenders. Including, it seems, mine.’2 King has previously advised that in order to become a writer, one must be a reader, and one must read a lot.3 But what is this massive project of machine training? Is it reading? And if it is reading, then, as King asks, ‘can a machine that reads learn to write?’4

Almost thirty years ago the philosopher Anthony O’Hear wrote an influential essay entitled ‘Art and Technology: An Old Tension’. And it is indeed an old tension, one that accompanied the introduction of photography in the first half of the nineteenth century,5 the advent of cinema in the late nineteenth century,6 and numerous technological flashpoints ever since,7 including artificial intelligence (AI). In his essay, O’Hear ponders what was then a probability but is now a reality: ‘If computers were able to generate copies of existing works of art, it might well be possible for them to produce formulaic works which drew on repertoires derived from existing works, but which were not copies of any complete works.’8 Long before the reassembly of vast swathes of copyright material became a daily news headline, O’Hear appeared to anticipate the very business model of LLMs. And his question is as pertinent today as it was then: ‘What, if anything, would follow for our concept of a work of art from this possibility?’9

What of the work, of the artist, of the author? What does it mean to want intellectual property for more than humans?10


King’s image of the digital blender, into which so many books are tipped as ingredients, highlights the most common concern in terms of copyright.11 But the other concern is a technical one. Will the machines simply run out of data to consume?12 This question arises because it is not just about feeding data into the machine; it is also about the machine beginning to learn from itself. At first this may sound resilient and remarkable, but in fact the model itself introduces one of the key technical concerns, namely ‘model collapse’.13 Model collapse occurs when LLMs, like ChatGPT, start to contribute their own language to the available data, causing ‘irreversible defects in the resulting models, where tails of the original content distribution disappear’.14 This reminds me of the photograph in Antonioni’s Blow-Up,15 which David Hemmings keeps enlarging and enlarging until he can see nothing at all. Was there ever anyone there at all?

The risk of model collapse will be exacerbated through so-called ‘poisoning attacks’, seen in various campaigns ‘to misguide social networks and search algorithms’, and used for both good and evil in recent political campaigns,16 for example. But as the authors note, with the vast scale of LLMs comes a similarly vast opportunity for poisoning, as well as a loss of diverse voices.17 In other words, without mechanisms to preserve and protect ‘the value of data collected about genuine human interactions’,18 the model effectively resolves the oppositions at the heart of creative renewal, ultimately adapting itself towards its own destruction.

If this is correct, then LLMs are not bulletproof. They are at all times striving for proximity, rather than looking for trouble and critical distance.19 And in looking for the popular and averaging the dominant, ‘over the generations learned behaviours start converging to a point estimate with very small variance’20 and the model collapses. Content generated by artificial intelligence will need to have a way to identify itself, so it can ignore itself.

Otherwise, pop will eat itself.
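The recursive dynamic described above can be illustrated with a toy simulation (a sketch of the general mechanism only, not the experiments reported by Shumailov et al.; the function name and parameters are illustrative): each new ‘model’ is a Gaussian fitted to a small sample drawn from the previous generation’s model, and over the generations the fitted spread drifts towards zero.

```python
import random
import statistics

def generational_fit(generations=1000, sample_size=10, seed=0):
    """Toy illustration of model collapse: each 'model' is a Gaussian
    fitted only to samples drawn from the previous generation's model.
    Sampling noise compounds, and the fitted spread drifts towards zero,
    the toy analogue of learned behaviours 'converging to a point
    estimate with very small variance'."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the 'real' data distribution
    history = [sigma]
    for _ in range(generations):
        # train the next model only on the previous model's output
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = generational_fit()
print(f"initial spread: {history[0]:.3f}, final spread: {history[-1]:.6g}")
```

With the fixed seed the run is reproducible; the fitted spread typically decays by orders of magnitude over the generations, mirroring the disappearing tails and shrinking variance described in the text.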


Indeed, perhaps this highlights the most important technical and social challenge. It is not even the mining, or scraping, or the copying that should be the primary concern, but the mixing, all to the authors’ discredit, as it were. And in baking this melange, who is the author?

LLMs are beset by uncertainties of authorship, from student essays to deceptive credits, but it seems that this structural character of the model may also be part of the risk of its collapse. Resolving both the socio-cultural question of acceptance and the operational jeopardy seems to require certain limits and constraints. Attention to constraint is suggested not as a draconian measure of regulatory control, but rather in the interests of creative processes and the functionality of the machine. To return to O’Hear’s concerns around the tensions between art and technology, the notion of constraint might be seen as a practical theme, as well as a social and cultural one, when examining the relationship between art and new technologies. O’Hear states:

If I have a complaint about what has happened in the arts this century, it is not that technology is constraining imagination, or making slaves of us. It is rather that it isn’t constraining it enough, that technology is removing those very constraints which made art a matter of craft, rather than an unfettered display of expression and imagination.21

While there may be a sense of nostalgia for the physical engagement of the hand here, there is also something quite crucial about this notion of constraint in the creative industries, long before the advent of LLMs. And a desire for constraint thus has some interesting and possibly facilitative implications for the way in which AI-generated products may be reconciled with both technical and popular concepts of authorship: resolutions to issues ranging from the technical problem of recursion, to the legal and commercial implications for copyright, through to fundamental critical and philosophical questions of art itself.

O’Hear despairs that ‘our technological advancement has led to a loss of technique’, and argues that ‘one of the unfortunate effects of the entry of technology into the world of art has been the downgrading of technique and the upgrading of the idea’.22 Perhaps this is an appeal to a somewhat romantic inimitability of the author, but perhaps not. More importantly, it suggests that the contrived distinction between craft and art is one that has been imposed by the advance of technology, rather than the pedestal of art. And thus, in reconciling technological advancements both within the art community and within the law itself, a stratification of creative social classes has been introduced. In 1948, at the Brussels Conference, this class system of applied arts and works of authorship was imported explicitly into the Berne Convention,23 after some protracted battles,24 thus introducing the (in)distinction of applied art. While the law in the United Kingdom admits applied art as artistic works,25 it nevertheless previously differentiated the duration of protection where those works had been industrially applied.26 This distinction was repealed following the Court of Justice decision in Flos,27 and indeed it is difficult to see how a legal distinction could be sustained between applied arts and all other authored works, as though this is meaningful and can continue to be meaningful throughout an age of mechanical reproduction and beyond. But more minutely, the remnants of this stratification perhaps persist in the very character of copyright in the United Kingdom, as it moves away from the test of skill, labour and judgement,28 which might be argued to harbour an affection for craft and the technical proficiency of a former age, and towards the author’s own intellectual creation29 and the upgrading of the idea over all else. While this might have seemed untenable for O’Hear, is there something more just about a system of ideas?


But who is the author? And does it matter if the consumer does not know? And what about dupes and misrepresentations of authorship? Do these concern consumers as a question of maintaining quality or as a matter of maintaining a reassuring relationality with an identifiable source? And what about accidents? What happens when we create copyright ‘by mistake’?

In the tension between technology and art, O’Hear suggests that this authentication and verification of authorships is important: ‘In so doing, the audience is confident that it is not indulging in the exercise of pathetic fallacy, that is, it is not imputing to unplanned and unintended phenomena characteristics which are properly attributed only to planned and intended things.’30 But what does it mean to be fooled by artificial intelligence? It would seem that, along with student essays,31 literary journal submissions,32 and travel guides,33 online bookstores are also being targeted with AI-generated content.34 The question is attracting some commercial urgency.

The intentional stance so carefully crafted for LLMs through the iterative conversations, apologies, and expressions of doubt provided by the machines is somewhat at odds with the glossing of the authorship question. And although the training presents as a copyright infringement, in fact a lot of the commercial and artistic controversies are really more to do with a kind of imposture. As well as the generation and sale of music under the names and voices of recording artists, such as the infamous episode of ‘Heart on My Sleeve’ using the AI-generated voices of Drake and The Weeknd,35 respectively the first and fourth most-streamed artists on Spotify,36 there is now a thriving industry of AI literary imposture. However, unlike the masterful forgery and fakery of past centuries, these products are largely unreflective disgorgements based upon the haphazard excavation of a particular author’s style.

The growing number of examples of AI-generated literary products sold under the apparently imitated authors’ names can be found across various platforms, leading to complex issues of naming, copying, and deception. One such recent example is the experience of the author, Jane Friedman, who discovered titles apparently generated by AI, but certainly not generated by Friedman, for sale on Amazon. In an account of the experience, Friedman notes that because of a history of blogging as well as recent ‘vanity prompting’ her works were likely an easy target: ‘As soon as I read the first pages of these fake books, it was like reading ChatGPT responses I had generated myself.’37 Friedman contacted Amazon to request that the titles under the name ‘Jane Friedman’ be removed from sale, and at that point discovered that there is considerable difficulty in particularizing the deception. Notably, the books were not appearing, and at no time did they appear, on Friedman’s Amazon author page. This detail may cause an issue for authors who appear to be impersonated by unassociated titles and want to establish that those titles are indeed being held out as their own. And this may also explain the reply Friedman received from Amazon. Upon reporting the offending titles to Amazon, the company replied: ‘Please provide us with any trademark registration numbers that relate to your claim.’38 When Friedman explained that she had no such protection for her name, she was told the case was resolved and the books would remain on the site for sale.39

This response may sound somewhat perverse, but of course the evidence that these books were being sold under the name of the same ‘Jane Friedman’ was more limited. It is conceivable that there are two ‘Jane Friedmans’, a point that Friedman indeed concedes. In this case, is the Amazon author page enough to address the risk of Amazon customers being deceived or misled? And once authors have an author page, does this stop being a problem that Amazon must address at all? Paradoxically, these objects are perhaps not as close as they appear, and suddenly Amazon’s question regarding trade mark registration begins to make sense. But importantly for Friedman’s dispute, the titles also began appearing on Friedman’s official Goodreads profile: ‘Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books.’40 This seems to be the critical and distinguishing fact about Friedman’s experience: once the titles appeared on that official profile, Amazon also took action.41


Friedman states, ‘I know my work gets pirated and frankly I don’t care’, and even titles her blog post, ‘I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)’.42 This may seem strange at first, especially given the effort Friedman went to in order to verify the works being sold under her name. But it is not strange at all, and in fact is completely consistent with the issues at stake in literary imposture and in Friedman’s dispute. In piracy, there is always attribution and that attribution remains intact; indeed, the piracy is worthless without that attribution. Further, it may even be argued that a form of piracy can enhance the attributive landscape, as with the advent of photography and the facilitation of wider dissemination of art works as reproductions (and thus also as phenomenal references to the original). But in the imposture of AI-generated works, what is at stake is not attribution, or even the absence of attribution, but rather, false attribution43 or passing off.44 Both actions, piracy and imposture, are trading on an author’s name, but one is at least ‘honest’ about it.

This raises another complex concept in this environment, which is plagiarism. In plagiarism there is both the copying of the work and the literal discrediting of the author. There are also arguments that forms of sanctioned plagiarism reinforce privileged positions, race and gender hierarchies, and status.45 This may seem unrelated to what is going on in the present discussion, but in fact there is a concern that the model underpinning LLMs may potentially allow for the appropriation of marginalized and diverse voices and their representation as outputs from those in privileged positions. And it may facilitate this appropriation on an unprecedented scale – an elaborate and untraceable cultural appropriation and ultimately, as in the model collapse, an erasure.46 As one commentator argues, following the ‘Heart on My Sleeve’ episode, ‘Given the rate at which Black culture is appropriated, it’s important to remain vigilant and call out blatant offenders.’47 But with less well-known voices, this may be an impossible task. Arguably, attribution is the fundamental social tool that not only contributes to sustaining the diversity in cultural outputs but also facilitates the fairness in copying. Indeed, for creative outputs more generally, with or without the contribution of AI, if there is no attention to attribution and the cultural stewardship that attends this practice, there is a ‘model collapse’ for the wider sociality of creative work.

Any sense of attribution in AI-generated works is to date merely in the form of disclosure of the tool.48 But is this adequate? This doubt as to adequacy is raised not because the works may be valuable, and not because the uncertainties in copyright must be resolved, but because the sociality of the system depends upon this ongoing relationality of authorships. Who, then, if anyone, is the author of an AI-generated work?

When is a pile of bricks a pile of bricks?49 And when is it a Carl Andre?


Perhaps one of the critical details in developing strategies in relation to AI-generated works is the distinction between the reproducibility and reproduction of a work (recalling the applied art debates) and works generated autonomously by machine. Both appear to question authorship, but from different perspectives. One concerns the argument as to whether mass-production somehow justifies a diminished protection (curious in the context of the mass-production of books, for example, where it all began …). The other concerns an unexplainable and untraceable process. O’Hear maintains, ‘Even if a work of art is reproducible, it cannot be machine-generated.’50 Translating this assertion into contemporary circumstances, art is an autographic activity, as it were … just in case a reminder of the lesson from Marcel Duchamp and the readymades is necessary.51 Duchamp’s famous Fountain (1917), the porcelain urinal set on a pedestal at 90 degrees, perhaps in order to defeat its original use, is possibly the most pronounced demonstration of this principle. In other words, a work of art may be reproducible, but it cannot be machine-generated … unless of course it can be signed. The signature imputes intention, in a curious and remarkable resonance with Amazon’s resort to the brand. And that brings the discussion back to Friedman.

Indeed, the fact that Amazon was refusing to remove the works because Friedman had not registered her name as a trade mark reinforces the emphasis on the work as product, and authorship as a brand; the same emphasis that is the momentum behind the ‘authorship’ and ‘inventorship’ debates around AI.52 Authorship in itself appears to warrant little regard. Copyright? Maybe. But from Amazon’s perspective, there is, on the face of it at least, no copyright infringement here, although the AI-generated works have likely arisen through training on Friedman’s works.53 And there is apparently no plagiarism either; at least none that readily meets the algorithm’s eye. This is a question of authorship and reputation. And those things appear to be of no concern to Amazon at all. Indeed, the author is a quaint remnant of a bygone era of physical artefacts and artistic purpose. What is at stake here is the commercial value of the ‘phenomenal’ authority of the product and brands.

Thus, when O’Hear maintains in his ‘externalist’ thesis that the obstacle to accepting ‘purely machine-produced art’ as art is that ‘it necessarily lacks those references, explicit and implicit, to human life and sensibility, which make art more than a purely formal exercise’,54 he really does seem to anticipate the fundamental problem with AI – its lack of explainability, its lack of references. Indeed, these models arguably cannot operate through the kind of logic of aesthetic and literary fragments that enliven the creative field and the very functionality of copyright, because they are based upon blending all of that diversity and sociality into an amorphous and untraceable melange. An AI-generated work cannot show its sources, and it cannot appreciate its workings. It is not only inept at the process but also structurally opposed to citation.

And this is the problem. Because, as the saying goes, you cannot make bricks without straw.


It is thus through the interest in the work, and its potential commercial value, that an author may be produced in circumstances that are otherwise anathematical to authorship, at least at first glance. O’Hear may have lamented a few decades ago, with the advent of computer-generated works, that ‘human input’ in producing the final result was ‘infinitesimal compared to what is required in painting even the most formulaic canvas’.55 Nevertheless, computer-generated works are now not only common but also, since 1988, have been protected as copyright works.56 However, they are protected not as works of authors, but as works of producers; or more accurately, works of producers as authors. Computer-generated works are protected for just 50 years from the end of the year of production,57 unlike other authored works. The disparity in protection may also raise issues as to the possible conflation of ownership of the technology (whether the computer or the algorithm) with authorship of the work, as there is also some uncertainty as to what is meant by the definition of the author as ‘the person by whom the arrangements necessary for the creation of the work are undertaken’. Were this to include something as simple as owning the algorithm itself, the emphasis would shift away from an authorial manipulation of the algorithm as tool to an entrepreneurial investment in the algorithm as service. But are computer-generated works really like any ‘recording’? It would seem that there is perhaps a legacy of the tension between technology and art here, just like the applied art of old, that continues in the provisions for computer-generated works.

Is this then the problem for the use of artificial intelligence in generating new works? Setting aside the aspirations to consciousness suggested by some commentators,58 and noting the somewhat grounding effect of contrasting arguments from others,59 perhaps this debate is not even about whether a machine can be an author. Indeed, it is certainly not about whether a machine can be conscious, at least that is what the advocates themselves claim.60 And consciousness has seemingly made no difference to the potential claims of nonhuman animal authorship thus far, so why should it be any different for machines? That the discourse around consciousness and sentience is more a popular distraction and tool of affect, than it is an argument for authorship, is seemingly borne out in the attempts to test such authorship legally, where efforts to claim authorship for artificial intelligence continue to skip the whole question of consciousness on the path to personhood,61 as though the work is enough – a phenomenal consciousness, indeed.

In a way this seems to be the belief behind efforts to impersonate authors through AI-generated works. While such feats of imposture have been around for many centuries and in many forms, to date they have arguably demanded some skill. But through the use of LLMs, this has become a remarkably efficient and automated production line, and any notion of skill is both unnecessary and seemingly tedious. The lessons of the history of altercations between art and technology are therefore very important when examining the current incursions and opportunities of LLMs. Would the resolution of authorship, together with the technical, attributive measures to combat recursion, actually restore some credibility and sociality to the system and facilitate a productive use of LLMs? Are LLMs authors or tools?

As in any discussion of art and technology, Walter Benjamin provides salient insight: ‘Rather than asking, “What is the attitude of a work to the relations of production of its time?” I would like to ask, “What is its position in them?”’62 In other words, how do these AI-generated works fit into current authorship practices and production? How do AI-generated works interact with other works? That is, how might AI-generated works fulfil the attributive sociality necessary to creative practice and communication between works, and potentially crucial to the ongoing productivity of LLMs? And how might the constraint and control of authorship structures enhance this cultural and technical functionality? Benjamin’s reference to the ‘relations of production’ is important. This relationality and sociality of the interactions between intellectual properties is fundamental.63 What is the AI-generated work’s position in them? As Benjamin explains, ‘This question directly concerns the function the work has within the literary relations of production of its time. It is concerned, in other words, directly with the literary technique of works.’64

In advocating the social and legal function of attribution, I am not appealing to a romanticized inimitability of the author and certainly not of the work; rather, this attribution goes to the inimitability of relations, acknowledged and preserved through the stewardship of citation. It is this sociality between works that necessarily creates authors; the author as produced. Is the AI-generated work a readymade ripe for an autograph? Quite possibly, just turn it 90 degrees.

It seems that reports of the author’s death are greatly exaggerated. Turns out we need that straw person after all.

August 2023

  • 1

    See further

    P Villalobos et al., ‘Will We Run Out of Data? An Analysis of the Limits of Scaling Datasets in Machine Learning’ (2022) arXiv:2211.04325. The authors estimate book totals in the millions, based on ebook availability on commercial platforms such as Amazon as well as digital libraries such as the Internet Archive, and a total of ‘between 620 [Billion] and 1.8 [Trillion] words’ provided through books to date.

  • 2

    S King, ‘My Books Were Used to Train AI’, The Atlantic, 23 August 2023

    (emphasis added).

  • 3

    S King, On Writing: A Memoir of the Craft (Simon & Schuster 2000 [2010]). King advises, ‘If you want to be a writer, you must do two things above all others: read a lot and write a lot’ (145).

  • 4

    King (n 2).

  • 5

    For example, note the famous review of photography offered by Charles Baudelaire in The Salon of 1859, published in four instalments between 10 June and 20 July in the Revue Française. The review has been translated and collected in C Baudelaire, The Mirror of Art: Critical Studies by Baudelaire, J Mayne (trans and ed) (Doubleday 1956), as well as in numerous anthologies in art and photography.

  • 6

    J Gibson, ‘The Man Behind the Curtain: Developing Film’s Double Exposure of Intellectual Property’, in PS Morris (ed), Intellectual Property and the Law of Nations, 1860–1920 (Brill 2022) 207–41.
  • 7

    See the useful discussion in

    P Kockelkoren, ‘Art and Technology Playing Leapfrog: A History and Philosophy of Technoèsis’, in H Harbers (ed), Inside the Politics of Technology: Agency and Normativity in the Co-Production of Technology and Society (Amsterdam University Press 2005) 147–67.
  • 8

    A O’Hear, ‘Art and Technology: An Old Tension’ (1995) 38 Royal Institute of Philosophy Supplements 143–58, 149.

  • 9

    ibid 149.

  • 10

    These questions of nonhuman authorship and art, in both nonhuman animals and artificial intelligence, are questions I address in much more detail in my forthcoming book, Wanted, More Than Human Intellectual Property: Animal Authors and Human Machines, which is the second in a series of three books in which I am developing a theory of ethological jurisprudence in property and intellectual property. The first book is Owned, An Ethological Jurisprudence of Property: From the Cave to the Commons (Routledge 2020), and the third book is Made, The Nature of Intellectual Property: An Ethological Jurisprudence of Objects (Routledge forthcoming).

  • 11

    P Samuelson, ‘Generative AI meets Copyright’ (2023) 381(6654) Science 158–61.

  • 12

    Villalobos et al. (n 1).

  • 13

    I Shumailov et al., ‘The Curse of Recursion: Training on Generated Data Makes Models Forget’ (2023) arXiv:2305.17493, 114.

  • 14

    ibid 1.

  • 15

    Blow-Up (1966) Michelangelo Antonioni (dir).

  • 16

    T Lorenz et al., ‘TikTok Teens and K-Pop Stans Say They Sank Trump Rally’, The New York Times, 21 June 2020

    ; HJ Parkinson, ‘Click and Elect: How Fake News Helped Donald Trump Win a Real Election’, The Guardian, 14 May 2016.

  • 17

    Shumailov et al. (n 13) 13.

  • 18

    ibid 1.

  • 19

    This concept is central to an understanding of authorship and the kind of evaluative imitation that sustains creative communities and is in fact enshrined within the copyright system through the concepts of parody, pastiche, and caricature in particular, as discussed in more detail in

    J Gibson, Wanted, More Than Human Intellectual Property: Animal Authors and Human Machines (Routledge forthcoming) Chapter 9.

  • 20

    Shumailov et al. (n 13) 2.

  • 21

    O’Hear (n 8) 143.

  • 22

    ibid 145.

  • 23

    Berne Convention for the Protection of Literary and Artistic Works, art 2(7).

  • 24

    S Ricketson and J Ginsburg, International Copyright and Neighbouring Rights: The Berne Convention and Beyond (Oxford University Press 2022) paras 8.60–8.68.
  • 25

    Copyright, Designs and Patents Act 1988, s 4.

  • 26

    Copyright, Designs and Patents Act 1988, s 52 (repealed 28 July 2016 by the Enterprise and Regulatory Reform Act 2013).

  • 27

    Flos SpA v Semeraro Casa e Famiglia SpA (C-168/09), EU:C:2011:29.

  • 28

    University of London Press v University Tutorial [1916] 2 Ch 601; Ladbroke (Football) Ltd v William Hill (Football) Ltd [1964] 1 WLR 273.

  • 29

    Infopaq (C-5/08), EU:C:2009:465.

  • 30

    O’Hear (n 8) 147.

  • 31

    L Sleator and M Hennessey, ‘Almost Half of Cambridge Students Admit They Have Used ChatGPT’, The Times, 21 April 2023.

  • 32

    C Silva, ‘How ChatGPT and AI are Affecting the Literary World’, Mashable, 3 March 2023.

  • 33

    S Kugel and S Hiltner, ‘A New Frontier for Travel Scammers: AI-Generated Guidebooks’, The New York Times, 5 August 2023.

  • 34

    S Rosenberg, ‘AI-generated Books Are Infiltrating Online Bookstores’, Axios, 16 August 2023.

  • 35

    C Willman, ‘AI-Generated Fake “Drake”/“Weeknd” Collaboration, “Heart on My Sleeve,” Delights Fans and Sets Off Industry Alarm Bells’, Variety, 17 April 2023.

  • 36

    As at 27 August 2023.

  • 37

    J Friedman, ‘I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)’, Blog Post, 7 August 2023, <>.

  • 38

    ibid.


  • 39

    ibid. They have now been removed from Amazon and Goodreads. See further Friedman’s advice to authors in

    J Friedman, ‘IMHO: What Remedies Do Authors Have When Fraudulent Work Appears on Amazon?’, The Hot Sheet, 16 August 2023.

  • 40

    Friedman (n 37).

  • 41

    Friedman also reports difficulties in correcting a Goodreads profile due to the labyrinthine structure of the voluntary network that maintains the site. ibid.

  • 42

    Friedman (n 37).

  • 43

    Copyright, Designs and Patents Act 1988, s 84.

  • 44

    Clark v Associated Newspapers Ltd [1998] 1 WLR 1558 (21 January 1998).

  • 45

    E Cussen, ‘When Plagiarism Does the Work of White Supremacy’, Medium, 11 July 2017.

  • 46

    HD Blackburn, ‘The Music Industry Has an AI Problem’, The Washington Post, 2 May 2023

    . Writing on the AI-generated, ‘Heart on My Sleeve’, featuring Drake and The Weeknd’s AI-generated voices, Blackburn describes ‘heightened concerns about whether those behind the music were maliciously targeting hip-hop and Black people’. For a fuller discussion of the complex and intersecting concerns in plagiarism, see Gibson (n 19).

  • 47

    Blackburn (n 46).

  • 48

    For example, some journals are accepting the use of ChatGPT and similar tools but are requiring disclosure of that use, rather than listing ChatGPT as an author: ‘Editorial’, Nature, 24 January 2023.

  • 49

    Lucasfilm v Ainsworth [2008] EWHC 1878 (Ch), para 118(viii).

  • 50

    O’Hear (n 8) 155.

  • 51

    For example, Duchamp’s Fountain (1917) on display in Tate Modern, signed and dated, ‘R. Mutt 1917’, is a 1964 replica after the original was lost (the replica is authenticated by Duchamp’s signature (‘Marcel Duchamp 1964’) on the back of the left flange).

  • 52

    E Creamer, ‘Amazon Removes Books “generated by AI” for Sale Under Author’s Name’, The Guardian, 9 August 2023.

  • 53

    Whether or not this learning amounts to reproduction is the topic of current disputes: for example, see the statement from Getty Images released upon commencement of legal proceedings in the High Court against Stability AI, alleging intellectual property infringement in the processing of images, available at <>.

  • 54

    O’Hear (n 8) 150.

  • 55

    ibid 145–6.

  • 56

    Copyright, Designs and Patents Act, s 9(3).

  • 57

    Copyright, Designs and Patents Act, s 12(7).

  • 58

    P Butlin et al., ‘Consciousness in Artificial Intelligence: Insights from the Science of Consciousness’ (2023) arXiv:2308.08708, 47.

  • 59

    For a useful review of approaches, see JM Bishop, ‘Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It’ (2021) Frontiers in Psychology 11:513464.

  • 60

    R Abbott, ‘Allow Patents on AI-generated Inventions – for the Good of Science’ (2023) 620 Nature 699.

  • 61

    Thaler v Perlmutter, Civ Action No 22-1564 (DDC, 18 August 2023).

  • 62

    W Benjamin, ‘The Author as Producer’, in Selected Writings: Volume 2, Part 2, 1931–1934 (Harvard University Press/Belknap Press 1999 [2005]) 768–82, 770 (emphasis in original).

  • 63

    On relational property and intellectual property, see Gibson, Owned, An Ethological Jurisprudence of Property (n 10) and on the more specific details in the context of authorship, see Gibson (n 19).

  • 64

    Benjamin (n 62) 770 (emphasis in original).