The term artificial intelligence refers to computer systems that can perform tasks that ordinarily require natural (usually human) intelligence, such as speech recognition, decision-making, visual perception and language translation. AI systems can be programmed to learn from data inputs and adapt, which allows them to perform better over time.

One privacy risk with AI is the possibility of sensitive or personal data being misused or exposed by AI systems. AI systems frequently collect and process large datasets that include personal data such as names, addresses and other identifying information. If these data are not adequately protected, they may be susceptible to theft, hacking or unauthorised access. Additionally, AI systems may be used to make decisions that affect people’s privacy, such as in hiring or other automated decision-making in sensitive circumstances.

Recent developments in AI, such as deep learning and foundation models including large language models, regularly produce output that challenges human ideas of what machines are capable of. Their language comprehension and production capabilities far exceed those of earlier generations of AI methods, while their ability to craft images and deepfakes concerns many. Most important for privacy, their data assimilation and pattern recognition are powerful enough to detect very weak signals in noisy data, sometimes beyond the capabilities of human, social or organisational processes. AI techniques therefore now enable far more information to be extracted from a given dataset, opening more possibilities of unanticipated privacy breaches.
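One familiar instance of extracting more information from a dataset than was anticipated is re-identification by record linkage. The following toy sketch (all records, names and field values are invented for illustration) shows how quasi-identifiers in a nominally anonymised dataset can be matched against an auxiliary public source to reveal a sensitive attribute:

```python
# Hypothetical "anonymised" dataset: names removed, but quasi-identifiers
# (ZIP code, birth year, sex) retained alongside a sensitive attribute.
anonymised_records = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1971, "sex": "M", "diagnosis": "diabetes"},
]

# Hypothetical auxiliary source (e.g. a public register) that names individuals.
public_register = [
    {"name": "A. Example", "zip": "02138", "birth_year": 1965, "sex": "F"},
]

def link(anon, register, keys=("zip", "birth_year", "sex")):
    """Return (name, sensitive attribute) pairs where all quasi-identifiers match."""
    matches = []
    for a in anon:
        for p in register:
            if all(a[k] == p[k] for k in keys):
                matches.append((p["name"], a["diagnosis"]))
    return matches

print(link(anonymised_records, public_register))
# A single unambiguous match reveals the named person's diagnosis.
```

The point of the sketch is that neither dataset alone discloses the diagnosis of a named person; the breach arises from combining them, and modern AI methods automate far subtler versions of this kind of linkage.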

As AI increases in capacity, a new privacy angle emerges. If AI becomes sentient, it will enter our moral universe as an agent, not just as a tool. We may therefore, and probably will, become concerned about what we reveal to an AI system, and not only in terms of how the outputs of that system might be used by another human.

Further reading

See also: DEEPFAKE, DEEP LEARNING, EXPLAINABLE AI, MACHINE LEARNING, RECOMMENDATION SYSTEM

  • Hagendorff, T., 2020. The ethics of AI ethics: an evaluation of guidelines. Minds and Machines, 30, 99–120, https://doi.org/10.1007/s11023-020-09517-8.

  • Spindler, G., 2021. Algorithms, credit scoring, and the new proposals of the EU for an AI Act and on a Consumer Credit Directive. Law and Financial Markets Review, 15(3), 239–61, https://doi.org/10.1080/17521440.2023.2168940.
