Here is the recording of the event:
(The very active chat that took place during this event is presented in full below.)
How should the recent explosion of research, investment, and discourse around “artificial intelligence” be critically understood? Which methods, traditions, and narratives from the humanities and social sciences are fit to grasp these unusual technologies? What elements of today’s society are influencing tomorrow’s computing? This panel invited four researchers to present their emerging but distinct approaches to these questions, and to enter into a dialogue.
This event was the first sponsored by “Diagonal,” a network of scholars and artists trying to understand A.I.’s implications for aesthetics and politics, hosted by U of Toronto’s BMO Lab in Creative Research in the Arts, Performance, Emerging Technologies, and A.I., and the Centre for Drama, Theatre, and Performance Studies.
Panelists: Marie-Pier Boucher (U of T), Ranjodh Dhaliwal (UC-Davis), Peli Grietzer (New Center for Theory & Practice), and Karina Vold (U of T).
Moderated by Douglas Eacho (U of T).
Here is the chat that unfolded during the event:
14:05:18 From Roberta Buiani to All panelists : you should put some music on
14:20:19 From dan mcquillan : throw them all in the lake
14:24:08 From Nick Fox-Gieg : Don’t we need a control group
14:28:25 From dan mcquillan : neural networks aren’t doing poetry they’re doing paranoia
14:34:11 From dan mcquillan : referring to this as a gestalt implies a psychological grounding for what is a crude statistical operation
14:34:56 From dan mcquillan : i guess chat is disabled, right?
14:35:18 From Nicole Vella : I can see your messages.
14:35:36 From dan mcquillan : ah ok – quiet crowd 🙂
14:36:43 From Owen Lyons : I hear you Dan. I have questions about this too: i.e. is this statistical modelling actually comparable to the creation of “a face as such” as a concept
14:37:31 From Racelar Ho : I agree with u Dan
14:37:34 From Nicole Vella : My issue with this dataset is the same as with most others, in that it’s lacking the diversity that we see in the real world.
14:37:51 From Anna Munster : Let’s also not forget that the material reality of all these faces – auto encoded or not – are pixels
14:37:55 From Nicole Vella : And these NNs are trained with these biased datasets.
14:38:58 From dan mcquillan : an autoencoder definitely doesn’t have an imagination
14:39:10 From dan mcquillan : and fwiw alphago doesn’t have any intelligence
14:39:24 From dan mcquillan : not saying these aren’t powerful machines
14:39:44 From dan mcquillan : sorry to be disagreeable
14:39:58 From David Rokeby : Celeb-A was used in a lot of early neural nets. More recent nets (e.g. StyleGAN) tend to be trained on the more diverse FFHQ (Flickr face database). But FFHQ, while more diverse, is mostly self-iso, many smiles, little in the way of sad or brooding faces.
14:40:27 From dan mcquillan : but we should resist mystification if we are to tackle the social consequences of such systems
14:40:37 From Racelar Ho : then we need to answer the question- what is intelligent
14:40:56 From Xavier Snelgrove : Interesting the focus on autoencoders, when much of the “hype” has been more around GANs. Among other things an autoencoder is much more focused on a particular idea of “proximity” based on the individual pixels of the image, which is why the images come out so blurry, whereas a GAN cares a bit more about its “convincingness” as a face
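[The pixel-level “proximity” Snelgrove contrasts with a GAN’s “convincingness” can be illustrated with a toy calculation. This is an editorial sketch with made-up arrays, not any model or dataset from the panel: under a pixel-wise mean-squared-error loss, the average of two equally plausible outputs scores better than either sharp output, which is one intuition for why autoencoder reconstructions come out blurry.]

```python
import numpy as np

# Toy illustration: a pixel-wise (autoencoder-style) loss rewards
# averaging plausible outputs, which blurs them. The "faces" here are
# 1-D arrays invented for the example, not real training data.

rng = np.random.default_rng(0)
face_a = rng.random(16)          # two equally plausible "faces"
face_b = rng.random(16)

def pixel_mse(guess, target):
    """Mean squared error over pixels, a typical autoencoder loss."""
    return float(np.mean((guess - target) ** 2))

blur = (face_a + face_b) / 2     # the pixel-wise compromise

# Averaged over both targets, the blurry midpoint beats either sharp face:
loss_blur = (pixel_mse(blur, face_a) + pixel_mse(blur, face_b)) / 2
loss_sharp = (pixel_mse(face_a, face_a) + pixel_mse(face_a, face_b)) / 2
print(loss_blur < loss_sharp)    # the blurry average wins under MSE
```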
14:41:51 From dan mcquillan : @racelar i agree that’s an interesting question. i’m personally more interested in what work these systems are doing in the world
14:42:26 From Owen Lyons : @Xavier – yes, that is interesting
14:43:52 From Racelar Ho : I think we need to answer the question first with regard to the subject of human beings, ourselves. Then we can tell whether machines have intelligence and what the meaning of it is
14:45:06 From dan mcquillan : @david a dataset of faces from pandemic era zoom calls would have more sad and brooding faces 🙁
14:45:13 From Nicole Vella : Thanks for sharing that ^. It’s interesting how in these early networks you can literally see them whitening the skin of PoC
14:45:30 From Sarah Vollmer to All panelists : @dan ahh well we all know the work deepfakes are adding to the world….
14:48:11 From dan mcquillan : @nicole i’m told this is a good read ‘Artificial Whiteness – Politics and Ideology in Artificial Intelligence’ https://cup.columbia.edu/book/artificial-whiteness/9780231194914
14:49:07 From Racelar Ho : @dan thx for the link
14:49:39 From Nicole Vella : @dan thanks. Adding that to the top of my reading list
14:50:03 From dan mcquillan : @nicole you’re welcome. related points about agi & white supremacy (short but punchy read) https://davidgolumbia.medium.com/the-great-white-robot-god-bea8e23943da
14:50:41 From Anna Munster : @nicole that’s not simply the result of the training data set. All AI models deploy other algorithms (active shape models eg), which may fit data to particular statistical forms that favour certain kinds of ‘shapes’ eg the shapes of white faces that have been lit to define those features. Ramon Amaro doing great work on this! https://www.e-flux.com/architecture/becoming-digital/248073/as-if
14:50:42 From Nicole Vella : “Coded Bias” is a documentary that explores that exact topic. Great film too.
14:50:44 From Nicole Vella : https://www.youtube.com/watch?v=jZl55PsfZJQ
14:50:50 From Nicole Vella : ^^ link to the trailer
14:52:00 From Xavier Snelgrove : Seconded on Aniara it’s excellent, and I rarely see it mentioned
14:53:03 From Nicole Vella : @Anna yes, there are other algos implemented in those networks for sure. Thanks for that link.
14:55:54 From Nicole Vella : I can imagine the end of capitalism. And it’s a utopia.
14:56:08 From dan mcquillan : ai does not cleave to human intelligence but to bureaucracy
14:58:21 From Racelar Ho : @dan it depends who takes control of the discourse power of the mainstream algorithm
14:59:12 From Maev Beaty : applause
14:59:15 From Nicole Vella : That was great! Thank you Dr Boucher
14:59:42 From Beth Warrian : ^^^
15:00:25 From dan mcquillan : @racelar i’m thinking of, for example, the ways ai scales behaviourist thoughtlessness
15:02:29 From dan mcquillan : let’s remember where regression models come from (eugenics)
15:02:38 From Racelar Ho : @dan that’s a nice idea. Or, I’m thinking about the way to connect it with de Certeau’s theory of strategy versus tactic
15:03:05 From dan mcquillan : @racelar nice, interesting angle
15:03:36 From Marie-Pier Boucher to Nicole Vella and all panelists : thank you, Nicole!
15:04:37 From Anna Munster to All panelists : @ranjodh you mention your work on GPUs and supply chains. Is that published?
15:05:14 From dan mcquillan : game chips operationalise ai in the same historical moment that qanon gamifies conspiracy theory
15:07:27 From dan mcquillan : strategy of tension -> strategy of tensor
15:08:16 From Johannes Bruder to All panelists : Also at the same time when hack & flash crashes on stock markets emerge
15:08:22 From Racelar Ho : @dan if we consider the established rules in algorithms as the established rules determined by power -> strategy, then how can we, as the dominated subjects, employ the “wisdom of ruse” to maintain and satisfy our self-desire (freedom of thinking)?
15:09:06 From Racelar Ho : if we consider the digital world as another type of urbanism territory.
15:09:40 From dan mcquillan : @racelar got you. worth thinking through
15:09:40 From Michael Castelle : I think the relationship between the use of GPUs for rendering e.g. 3D shooter games in the early 2000s and the use of GPUs for convolutional neural networks in the 2010s is more indirect than suggested here…
15:10:28 From Prem Sylvester to All panelists : would employing the wisdom of ruse still rely on us buying into the same computational logics we are seeking to escape or subvert?
15:10:50 From dan mcquillan : @racelar also see glissant’s ideas about opacity discussed in a similar context
15:11:38 From Michael Castelle : (OK, I think Ranjodh is addressing my point above)
15:11:50 From dan mcquillan : @racelar personally i’m motivated by tactics of explicit refusal + restructuring (abolitionism, luddism)
15:12:33 From dan mcquillan : jeff dean’s legacy taken a different turn recently, right…
15:12:58 From Racelar Ho : @dan yeah, I read his book and it reminded me of things I had read when I was in art school
15:13:20 From Racelar Ho : and, for space by Massey
15:14:49 From Dot Tuer : Great presentations … applause.
15:14:50 From Maev Beaty : applause
15:14:55 From Vicki Zhang to All panelists : applause!
15:14:56 From David Han : 👏🏽👏🏽
15:15:01 From steve daniels : ^
15:15:03 From Prem Sylvester to All panelists : applause
15:15:06 From Ken Rodenwaldt to All panelists : +++++++
15:15:15 From Anna Munster : asks Ranjodh – excellent talk!
15:15:22 From Anna Munster : *Thanks
15:15:34 From Racelar Ho : @dan I’m more motivated by Foucault ‘s discipline and punish and the archaeology of knowledge
15:15:56 From Racelar Ho : thx for the presentation <3
15:19:08 From Maev Beaty : heeehee
15:19:14 From Owen Lyons : Great question
15:23:33 From Nicole Vella : *applause*!!
15:23:46 From Nicole Vella : Excellent answer @Karina
15:24:52 From David Rokeby : Yes… the distributed nature of the representations in the inner layers of a neural net, and their continuous rather than discrete nature are really important.
15:27:00 From dan mcquillan : c’mon dude. it’s not an umwelt it’s a carceral system
15:29:37 From Eamonn Bell : I understood Peli’s point to be that AI insofar as it goes leaves much more to be filled in our picture of what counts as intelligence. For one, precisely the sense of irony or “problem” in relation to the environment
15:29:43 From Anna Munster : The Q and A isn’t working
15:31:20 From Douglas Eacho to All panelists : Q&A seems to be working now!
15:31:32 From Douglas Eacho to All panelists : But drop a Q in the chat as well if not!
15:31:33 From Anna Munster : ok thanks seems to be working now!
15:31:39 From Tara to Douglas Eacho (Privately) : Q&A is on
15:32:18 From dan mcquillan : Q. so if ai can’t be ethically optimised, is discussing it as an aesthetic a form of futurism?
15:32:44 From John Enman-Beech to All panelists : ai is mostly made by paid professionals, working within capitalism for capitalist purposes, but why would this make us think that ai in general cannot contest capitalism
15:35:54 From dan mcquillan : thanks
15:37:17 From Sarah Vollmer to All panelists : […can people even be ethically optimised though ?]
15:37:23 From Owen Lyons : I think Dan’s question is closely connected to Ranjodh’s point about developments in the GPU industry. Are we complicit in this technological arms race, or is this just an inevitable side discourse?
15:37:25 From dan mcquillan : ok – next question:)
15:37:43 From Anna Munster : @Ranjodh: can you say a little more about the historical ‘narrative’ you have been working on between GPUs and supply chains? Does this ‘backfill’ some of the current science fictions about the capacities of AI?
15:38:54 From dan mcquillan : ” We will sing of the great crowds agitated by work, pleasure and revolt; the multi-colored and polyphonic surf of revolutions in modern capitals: the nocturnal vibration of the arsenals and the workshops beneath their violent electric moons: the gluttonous railway stations devouring smoking serpents; factories suspended from the clouds by the thread of their smoke; bridges with the leap of gymnasts flung across the diabolic cutlery of sunny rivers: adventurous steamers sniffing the horizon; great-breasted locomotives, puffing on the rails like enormous steel horses with long tubes for bridle, and the gliding flight of aeroplanes whose propeller sounds like the flapping of a flag and the applause of enthusiastic crowds. “
15:39:46 From Nicole Vella : AI networks are being built under a capitalist system and therefore have capitalist biases (market efficiencies, profits, etc). How can we expect it to get any better? “You can’t dismantle the master’s house with the master’s tools”
15:39:57 From Nicole Vella : …basically what Peli is saying/asking right now
15:40:31 From Nicole Vella : Who defines what “good” goals are?
15:41:49 From dan mcquillan : it’s worth thinking about the science of this because it applies the reflexive back on science per se
15:43:22 From dan mcquillan : from the threads of the jacquard loom to the threads in the tpus
15:43:31 From Owen Lyons : ha
15:43:37 From dan mcquillan : the luddites were right
15:44:00 From dan mcquillan : (because they were asking for more than broken machines…)
15:45:05 From Prem Sylvester to All panelists : I believe we need to consider how much networks are “built” vs how much they are described as self-organizing
15:45:39 From John Enman-Beech to All panelists : Nico… if you were responding to me, i don’t “expect” things to get “better”, but i hope they do, and i find the idea that ai is irremediably poisoned by technical rationality and inextricably capitalistic to the point that there can be no “better” ai to be implausible
15:48:02 From Anna Munster : @Ranjodh – really interesting set of connections between pipelines + supply chains! Really looking forward to reading that. Thanks
15:48:24 From Owen Lyons : Yes, that’s really interesting Ranjodh. Thanks! I look forward to this work.
15:48:27 From Johannes Bruder to All panelists : Seconded, thanks Ranjodh
15:52:09 From dan mcquillan : @john what aspects of ai (from statistical models through processors to the construction of the objective function) do you see as prior to technical rationality and colonial cultures?
15:53:05 From David Rokeby : What is interesting about the way neural networks are used for practical purposes is that the distributed continuous representations that the networks hold internally, which are full of ambiguities and nuance, tend to be collapsed at the output into discrete quasi-symbolic ‘answers’ at the end. So it matters what kinds of questions we ask or operations we demand from neural nets.
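[Rokeby’s point, that a network’s continuous, ambiguous internal state is collapsed into a discrete “answer” at the output, can be sketched in a few lines. This is an editorial illustration with invented numbers, not any panelist’s model: a softmax keeps the ambiguity visible as probabilities, while a final argmax discards it.]

```python
import numpy as np

# Toy sketch: continuous internal scores vs the discrete answer
# demanded at the output. The logits are invented for illustration.

def softmax(z):
    """Turn raw scores into probabilities (stable via max subtraction)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.1, 2.0, -1.5])   # nearly tied between two classes
probs = softmax(logits)               # continuous: ~0.52 vs ~0.47, nuance intact
answer = int(np.argmax(logits))       # discrete: class 0, ambiguity gone

print(probs.round(2), answer)
```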
15:53:34 From Nicole Vella : @John it wasn’t responding to you. However, I disagree that AI is not capitalistic. At least, in its modern use cases and how its being utilized by corporations today.
15:53:52 From John Enman-Beech : @dan of course i assert no such priority! time and logic are complicated
15:54:29 From dan mcquillan : @john sorry, prior was a careless word to use, i mean ‘unpoisoned’ or unshaped, perhaps
15:54:33 From John Enman-Beech : @nicole i don’t think that’s a disagreement.
15:55:32 From John Enman-Beech : we are all one-dimensional, not just ai. if you think there can be any change i would need to hear more to be convinced that ai is irremediably poisoned in a way that eg humans are not.
15:56:24 From John Enman-Beech : it strikes me as unlikely that weber has the last word on the future of ai
15:56:24 From Anna Munster : Great panel – thanks for organising. It’s been wonderful to find out more about a diverse range of ECR research! Brilliant new research from all of you. cheers
15:57:24 From dan mcquillan : good points from @ranjodh, syed mustafa ali also talks about hauntology in this context
15:58:36 From dan mcquillan : the deep diminishment of science (it had it coming)
16:00:07 From Nicole Vella : @John ^that’s just my point. We cannot expect AI to improve or remove any biases baked into it at a fundamental level, simply because those biases were held by its creators, us, humans. We are flawed. It’s inextricably capitalistic because it’s a product of a capitalist society. Regardless of its history, its modern use case is to improve efficiency. Efficiency in markets, in sending advertisements to more and more targeted people, in finding criminals through facial detection. And, to me, it seems that is the direction that AI is going to continue to go down.
16:01:19 From dan mcquillan : re: karina’s point; not easily replaced but quite easily surveilled https://twitter.com/evan_greer/status/1357122709403164680
16:01:20 From John Enman-Beech : i recognize we are playing out a classic leftist in-fight, but i tend to find this kind of totalist discourse makes it hard to look for things to do. ai is not one thing or one way
16:01:23 From David Rokeby : Thanks for the great presentations! Much to chew on and think over!
16:01:26 From steve daniels : REQUEST to organizers — is it possible to save the chat? — there are many threads and questions I would like to reflect upon after. Thanks to panellists for their insights and great discussion.
16:01:41 From dan mcquillan : thanks to the speakers and organisers
16:01:48 From David Rokeby : I have saved the chat!~!!
16:01:50 From Ken Rodenwaldt to All panelists : thanks very much
16:01:51 From Dot Tuer : Many thanks – it was fascinating. Is this being posted?
16:01:54 From Owen Lyons : Thanks to all the panelists and organizers!
16:01:54 From Nicole Vella : Thank you!!
16:01:54 From John Enman-Beech : right-click select all copy paste
16:01:58 From steve daniels : thanks all
16:01:58 From David Han : thanks!
16:01:58 From Sara Grimes : Thank you!
16:02:00 From Prem Sylvester to All panelists : thank you!
16:02:00 From Beth Warrian : Thanks all!
16:02:03 From Isadora Dannin to All panelists : thank you!