Deep-Rooted Images: Situating (Extra) Institutional Appropriations of Deepfakes in the US and India

Kailyn Slater, Akriti Rastogi


This paper maps institutional and extra-institutional affordances and appropriations of deepfake images through an analytical framework attentive to the socio-political contexts of the US and India. Our central argument concerns the inevitable leakage of technologies beyond institutions and its redressal through corporatized comebacks. Drawing on vernacular and global examples, we trace the perceived ownership and extended modalities of deepfake images and videos. While compositing (Manovich 2005) and habitual media (Chun 2016) predetermine our deep mediatized world (Hepp 2019), deepfakes, as a visual cultural technology newly prominent within the political economy of media, offer a novel entry point for locating the neoliberal ethos of both socio-political contexts and their respective apparatuses and valences of control. The paper thus articulates the coordinates of deepfake affordances in order to situate the technological power and political rhetoric governing our international media situation across differing but interrelated socio-political contexts.




Ahmed, S. 2021. Complaint! Durham, NC: Duke University Press.

Berlant, L. 2011. Cruel optimism. Durham, NC: Duke University Press.

Chai, J., Zeng, H., Li, A., & Ngai, E. W. 2021. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Machine Learning with Applications, 6, 100134.

Chun, W. H. K. 2016. Updating to remain the same: Habitual new media. Cambridge, MA: MIT Press.

Cole, S. 2020a. In a Huge Policy Shift, Pornhub Bans Unverified Uploads. Motherboard for VICE, 1-16.

Cole, S. 2020b. Pornhub Announces “Biometric Technology” to Verify Users. Motherboard for VICE, 1-18.

Cole, S., & Maiberg, E. 2020. Images of Sexual Abuse Are Fueling AI Porn. Motherboard for VICE, 1-25.

Craig, D., & Cunningham, S. 2019. Social media entertainment: The new intersection of Hollywood and Silicon Valley. New York: NYU Press.

Gillespie, T. 2010. The politics of ‘platforms’. New Media & Society, 12(3), 347-364.

Gray, J., Bounegru, L., & Venturini, T. 2020. ‘Fake news’ as infrastructural uncanny. New Media & Society, 22(2), 317-341.

Hand, M. 2017. Visuality in social media: Researching images, circulations and practices, in L. Sloan and A. Quan-Haase (eds). The SAGE Handbook of Social Media Research Methods, 215-232.

Harrod, J. 2020. Do Deepfakes Have Fingerprints? | Deepfake Detection + GAN Fingerprints.

Heidegger, M. 2001. Zollikon seminars: Protocols, conversations, letters. Evanston, IL: Northwestern University Press.

Hepp, A. 2019. Deep mediatization. London: Routledge.

Huang, X., & Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. ArXiv:1703.06868 [Cs].

Hui, Y. 2012. What is a digital object? Metaphilosophy, 43(4), 380-395.

Karras, T., Laine, S., & Aila, T. 2019. A Style-Based Generator Architecture for Generative Adversarial Networks. ArXiv:1812.04948 [Cs, Stat].

Lund, J. 2021. Questionnaire on the changing ontology of the image. The Nordic Journal of Aesthetics, 30(61-62), 6-7.

Manovich, L. 2005. Soft Cinema: Navigating the database. Cambridge: MIT Press.

Ngai, S. 2020. Theory of the gimmick. Cambridge: Harvard University Press.

Paris, B., & Donovan, J. 2019. Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society, 1-47.

Qi, H., Guo, Q., Juefei-Xu, F., Xie, X., Ma, L., Feng, W., Liu, Y., & Zhao, J. 2020. DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms. ArXiv:2006.07634 [Cs].

Simondon, G. 1980. On the mode of existence of technical objects (N. Mellamphy, Trans.). London, ON: University of Western Ontario.

Sundaram, R. 2011. Pirate modernity: Delhi’s media urbanism. London: Routledge.

Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., & Ortega-Garcia, J. 2020. Deepfakes and beyond: A Survey of face manipulation and fake detection. Information Fusion, 64, 131-148.

van der Nagel, E. 2020. Verifying images: Deepfakes, control, and consent. Porn Studies, 7(4), 424–429.

Wojewidka, J. 2020. The deepfake threat to face biometrics. Biometric Technology Today, 5-7.

Yadlin-Segal, A., & Oppenheim, Y. 2020. Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence: The International Journal of Research into New Media Technologies, 1-16.

Yu, N., Davis, L., & Fritz, M. 2019. Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints. ArXiv:1811.08180 [Cs].

Zhang, B., Zhou, J. P., Shumailov, I., & Papernot, N. 2021. On Attribution of Deepfakes. ArXiv:2008.09194 [Cs].




Copyright (c) 2022 Kailyn Slater, Akriti Rastogi

This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN: 1930-014X