
Rachel Adams and Nóra Ní Loideáin

Virtual personal assistants (VPAs) are increasingly becoming a common aspect of everyday living. However, with female names, voices and characters, these devices appear to reproduce harmful gender stereotypes about the role of women in society and the type of work women perform. Designed to ‘assist’, VPAs – such as Apple's Siri and Amazon's Alexa – reproduce and reify the idea that women are subordinate to men, and exist to be ‘used’ by men. Despite their ubiquity, these aspects of their design have received little critical attention in scholarship, and the potential legal responses to this issue have yet to be fully canvassed. Accordingly, this article sets out to critique the reproduction of negative gender stereotypes in VPAs and explores the provisions and findings of international women's rights law to assess how this design constitutes indirect discrimination and to identify possible means of redress. In this regard, the article examines the obligation to protect women from discrimination at the hands of private actors under the Convention on the Elimination of All Forms of Discrimination Against Women, and the work of the Committee on the Elimination of Discrimination Against Women on gender stereotyping. With regard to corporate human rights responsibilities, the role of the United Nations Guiding Principles on Business and Human Rights is examined, as are domestic enforcement mechanisms for international human rights norms and standards, noting the limitations to date in enforcing human rights compliance by multinational private actors.


Paolo Cavaliere

The EU Code of Conduct on hate speech requires online platforms to set standards regulating the blocking or removal of undesirable content. The standards chosen can be analysed along four variables: the scope of protection, the form of speech, the nature of harm, and the likelihood of harm. Comparing the platforms' terms of use against existing legal standards for hate speech reveals that the scope of speech that may be removed increases significantly under the Code's mechanism. It is therefore legitimate to consider the platforms as substantive regulators of speech. However, the Code is only the latest example in a global trend of platforms' activities affecting both the substantive regulation of speech and its governance. Meanwhile, States' authority to set standards of acceptable speech wanes.


Edited by Eirini Kikarea and Maayan Menashe


Benedict Kingsbury

Physical, informational and now digital infrastructure features throughout Nation-State consolidation and imperial extension, in war preparedness and war logistics, in resource extraction and energy capture and transit, in each quantum step in economic globalisation, in mass migrations and religious missions, in the global scaling of finance and financialisation, in the global digital economy, in artificial intelligence (AI) and robots, in economic development strategies and in China's vast Belt and Road Initiative. International law has largely aligned with these enterprises, but has not effectively addressed massive anthropocenic degradation, AI, new biotech, or the human and planetary consequences of extractive capitalism. Science and technology studies, and work extending from Bruno Latour and Susan Leigh Star to governance-by-prototype and ‘new materialism’, have generated rich insights about infrastructure. These are being extended to ‘infrastructure as regulation’ (the infra-reg project). This paper explores the implications that follow from ‘thinking infrastructurally’ for reinvigorating deliberative, forward-planning international law projects to address technologically driven transformation.


Shannon Raj Singh

This article considers the application of international criminal law to the role of social media entities in fuelling atrocity crimes, and the legal theories that could be most valuable in fostering their accountability. While incitement of atrocity crimes is one way of framing social media's role in fomenting conflict, this paper argues that it may be more productive to conceptualise that role through the lens of complicity, drawing inspiration not from the media cases in international criminal law jurisprudence but from an evaluation of the use of social media as a weapon – one which, under certain circumstances, ought to face accountability under international criminal law.


M R Leiser

A historical analysis of the regulation of propaganda and of States' obligations to prevent its dissemination reveals competing origins of the protection (and suppression) of free expression in international law. The conflict between the ‘marketplace of ideas’ approach favoured by Western democracies and the Soviet Union's proposed direct control of media outlets has indirectly contributed to both the fake-news crisis and engineered polarisation via computational propaganda. From the troubled League of Nations to the Friendly Relations Declaration of 1970, several international agreements and resolutions limit States' use of propaganda to interfere, with ‘malicious intent’, in the affairs of another. Yet State and non-State actors continually use a variety of methods to disseminate deceptive content, sowing civil discord and damaging democracies in the process. In Europe, much of the discourse about the regulation of ‘fake news’ has revolved around the European Union's General Data Protection Regulation and the role of platforms in preventing ‘online manipulation’. There is also a common perception that human rights frameworks limit States' ability to constrain political speech; however, using the principle of subsidiarity as a mapping tool reveals a regulatory anomaly: a significant lack of regulatory oversight of the actors responsible for, and the flow of, computational propaganda disseminated as deceptive political advertising. The article examines whether a right to disseminate propaganda exists within our free expression rights, and focuses on the harms associated with the engineered polarisation that is often the objective of a computational propaganda campaign. It concludes with a discussion of the implications of maintaining the status quo and some suggestions for plugging the regulatory holes identified.


Enguerrand Marique and Yseult Marique

Against a background of extensive literature examining how digital platforms are regulated through ‘soft’ mechanisms, this paper analyses the ‘hard law’ techniques, such as sanctions, that are also very much in use on digital platforms to police undesirable behaviours. It illustrates the use of these sanctions, identifying three categories: sanctions that find their source in hard (international and domestic) law, sanctions that find their source in digital platforms' own normative production, and sanctions used in the course of disputes. Platform operators can wield considerable norm-setting and sanctioning power, which tends either to be concentrated within the operator itself or to be divided across different entities under unclear arrangements. This can deeply affect individual freedoms. The paper suggests that the ways in which the power to set, decide and enforce sanctions is exercised in the digital space transform the public–private divide: the allocation of roles between sovereign public bodies and free private actors is reshaped, becoming ‘hybrid’ when it comes to enforcing rules and monitoring compliance through a wide range of sanctions on digital platforms. The paper frames the legitimacy questions arising from such sanctions and suggests that the public–private divide may have to be bridged in order to locate a possible source of legitimacy. A future framework for assessing how platform operators set norms and ensure compliance through sanctions needs to start from individual users, to see how best to protect their freedom when checks and balances around platforms' powers and sanctions are developed. It is these individual users who suffer the economic, social and reputational consequences of sanctions, in both the digital and the physical world.


Louise Arimatsu

In this paper I explore some of the ways in which developments in new digital technologies reproduce, and often amplify, the patriarchal structures, practices and culture of contemporary life and, in doing so, operate to silence women through exclusion and through violence. I consider how international human rights law – most notably the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) – can be harnessed to counter both forms of silencing, each of which is rooted in gender-based discrimination. The digital gender divide and the rise in online violence against women evidence the failure of States Parties to commit fully to their legal obligations under CEDAW. Ensuring equality of access to, and use of, digital technologies is a precondition for ensuring that women can benefit from, contribute to, and influence the development of digital technologies in a meaningful manner. The digital realm may be a privatised public space that warrants a reconceptualisation of the scope and content of human rights law, but the fact that much of the digital infrastructure is owned and controlled by private actors does not absolve States of their human rights responsibilities.


Petra Molnar

Experiments with new technologies in migration management are increasing. From Big Data predictions about population movements in the Mediterranean, to Canada's use of automated decision-making in immigration and refugee applications, to artificial-intelligence lie detectors deployed at European borders, States are keen to explore the use of new technologies, yet often fail to take into account the profound human rights ramifications and real impacts on human lives. These technologies are largely unregulated, and are developed and deployed in opaque spaces with little oversight or accountability. This paper examines how technologies used in the management of migration impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate: States single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection, under the guise of national security or even under tropes of humanitarianism and development. The way technology operates offers a useful lens on State practices, democracy, notions of power, and accountability. Technology is not inherently democratic, and its human rights impacts are particularly important to consider in humanitarian and forced migration contexts. An international human rights law framework is particularly useful for codifying and recognising potential harms, because technology and its development are inherently global and transnational. More oversight and issue-specific accountability mechanisms are needed to safeguard migrants' fundamental rights, such as freedom from discrimination and privacy, as well as procedural justice safeguards such as the right to a fair decision-maker and the right of appeal.


Edited by Johanna Gibson