D3US EX M4CH1NA

Pau Waelder

Publication date: 22 November 2019

Before the algorithmic gods
Medea, the well-known tragedy by Euripides, concludes with the protagonist escaping execution for her multiple murders thanks to the god Helios, who lends her his chariot to flee the court of Corinth. On stage, such a divine intervention was represented by an actor appearing from above, mounted on a crane or other mechanical contraption. This character’s actions gave a satisfying twist to the story, regardless of their coherence with the situations in which it had developed. The philosopher Aristotle criticized this device, stating that “the outcome of the play must result from the piece itself, and not, as in the Medea, from a strange intervention.” The practice of lowering an actor on a crane to bring about a quick denouement would later be known by the Latin expression deus ex machina (“god from the machine”)1. This expression is currently used to refer to any element that is introduced into a story without any relation to the events narrated and that completely changes their outcome, often to satisfy the expectations of the audience, but resulting in an implausible story. As a narrative device it is excessively easy, and the expression therefore usually carries a pejorative charge2. All in all, an interesting aspect of the deus ex machina is that it introduces what one wants to happen, even if it contradicts the reality of the events being narrated. The story takes a sudden turn towards a happy ending: salvation at the critical moment, a reconciliation that seemed impossible. Everything happens quickly and expeditiously, with no room to question the reason for this outcome, since the actions of a god, or those of destiny, are not questioned.

Artificial intelligence (AI) currently dominates the media discourse about our relationship with technology as a deus ex machina: an omniscient and all-powerful entity that promises to easily solve all our problems. In recent years, large technology companies have entered a race to lead the development of AI, which has resulted in spectacular applications of this technology to concrete problems, but also in ambitious statements about what it will achieve in the near future. In 2016, former Google CEO Eric Schmidt claimed that artificial intelligence was going to solve climate change, poverty, war and cancer3. Mike Schroepfer, chief technology officer at Facebook, was speaking at the same time about how AI would bring about a global transformation. However, these promises seem far from being fulfilled, even in more specific (though also very ambitious) areas such as smart cars. In 2017, John Krafcik, CEO of Waymo, said that the company was on the verge of producing driverless cars, but it has not been able to make this goal a reality4. Making bold and excessively optimistic predictions, however, is not something corporate executives have only begun to do in recent years; it is closely linked to the history of artificial intelligence itself. In 1967, Marvin Minsky, one of the “fathers” of AI, stated that “within a generation, the problem of artificial intelligence will be basically solved”5. Since then, bets have been placed on when the intelligence of computers will surpass that of humans, a turning point that the futurist Ray Kurzweil has placed in the year 20296. The idea of an intelligent machine undoubtedly raises great hopes and also apocalyptic fears, since machines, as mere “dumb” tools, have brought humanity unimaginable progress and also terrible devastation. What can we expect from a computer thousands of times more intelligent than human beings, and with its own consciousness? Is that even possible? If it is, will it come to save us or destroy us? The first things that come to mind are killer robots like those in the films Blade Runner (Ridley Scott, 1982), The Terminator (James Cameron, 1984) or Ex Machina (Alex Garland, 2014), as well as the unstable personalities of HAL 9000 in 2001: A Space Odyssey (Stanley Kubrick, 1968), Wintermute in the novel Neuromancer (1984) by William Gibson, or Samantha in the film Her (Spike Jonze, 2013). These stories pose fascinating and also terrifying scenarios, but the reality is that robots are currently far from achieving the autonomy and capacity for action of the replicants or the T-800, while artificial intelligence programs can beat a human opponent at chess or Go, and even book a table in a restaurant by phone, but are not capable of understanding a joke or following a complex conversation. The distance between what we expect from artificial intelligence and what it is currently capable of is enormous. Researchers Gary Marcus and Ernest Davis call it “the AI abyss”7 and argue that it is mainly due to three factors: first, we have a tendency to “humanize” machines and attribute to them a will and even a personality of their own; second, progress in very specific and limited tasks (such as winning a game of chess) is assumed to be progress at a much broader level; finally, when a system is developed that works in certain situations (such as making a car drive itself through the desert), it is assumed that it will also work in many others, although this is not always true.

In short, the development of artificial intelligence is surrounded by stories, some of which describe real innovations, while others reflect the bold expectations of researchers, the ambitions of corporations, or our own hopes and fears of a machine that we do not completely understand. The way in which artificial intelligence programs, processing millions of data points in milliseconds, respond to our requests or even guess our desires seems like magic, and in fact this is how technological innovations have often been presented throughout the centuries. Greek mythology contains numerous stories of automatons and animated objects, created by the god Hephaestus, that came to life in a mysterious way8. Nowadays, we do not conceive of a smartphone as a divine creation, but it still appears to us as a black box whose internal functioning is, to a certain extent, unknown. When this black box, in addition to doing everything we ask of it, is capable of predicting what we are going to want or do, it is not difficult to grant it supernatural powers or let the imagination assign it abilities beyond what it can really do. According to various researchers9, an essential aspect of AI is its predictive capacity: although we do not (yet) have machines that can think, we do have machines capable of making predictions based on the data at their disposal, recognizing patterns, objects or other elements, and building their own models10. This is certainly a spectacular advance that is revolutionizing the technology industry and is beginning to have profound consequences in the most industrialized societies. It is therefore necessary to view artificial intelligence as more than a black box that “issues verdicts from the algorithmic gods,” in the words of mathematician Cathy O’Neil11. We must understand what artificial intelligence consists of, what its objectives are and what it has actually achieved.

What is artificial intelligence?

The term “artificial intelligence” was coined by computer scientist John McCarthy in 1956 in the context of the Dartmouth Conference, the first academic symposium in which the possibility of making a machine capable of reasoning like a human being was debated. The implications of this ambitious goal span various disciplines, such as mathematics, philosophy, neuroscience, psychology and biology. All of them would be involved in the development of AI, but it was computer science that took on the project of creating an intelligent machine as a field of research. The machine in question is, logically, a computer (specifically a virtual machine, that is, a program), and the methods used to achieve something similar to human intelligence are data processing, mathematical calculation, statistical analysis and the design of complex algorithms. These methods involve translating the different aspects of reasoning into models that can be processed by a computer. The initial Dartmouth proposal makes this goal clear:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”12


This approach finds a notable precedent in the article that the mathematician Alan Turing dedicated, in 1950, to the question “can machines think?”, in which he proposes an “imitation game”, later known as the Turing Test13. The game consists of a person trying to find out, through questions answered in written form, whether they are conversing with a human being or a machine. Turing thereby shifts the question of whether a machine can think to whether it is possible to develop a computer capable of imitating human intelligence. Initiating what would later become a custom among artificial intelligence researchers, Turing predicts that in about fifty years there will be computers capable of deceiving a human being, and that in general we will be able to speak of thinking machines14. The mathematician also anticipates many of the objections that AI still faces today and proposes that machines be made to compete with humans in “purely intellectual fields,” such as chess15.

Another of Turing’s contributions is the proposal that, instead of simulating an adult mind with a program, one should simulate that of a child. This not only means programming a simpler machine, but also introduces the concept of learning, which could take place through positive or negative reinforcement. This idea has been implemented in the branch of AI known as machine learning, which consists of creating programs capable of modifying their operation based on the data provided and the results obtained16. Unlike a regular program, which always executes the same instructions, a learning program can modify the parameters of the model it has been given in order to adapt to specific objectives, such as detecting people’s faces in a set of photographs or a video capture. As Turing suggested, these programs can be “educated” through learning techniques. In machine learning, essentially three are used: supervised learning, in which the program is told what result is desired and given feedback indicating whether it has achieved it or not; unsupervised learning, in which data is supplied to the program and it discovers recurrences and patterns on its own; and finally reinforcement learning, in which a system of signals is introduced that indicates to the program whether or not it has achieved its objective17.
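To make the difference with a regular program concrete, here is a minimal sketch of supervised learning in plain Python (the data, learning rate and variable names are invented for illustration): the “model” is a single parameter that the program adjusts from labeled feedback, rather than executing fixed instructions.

```python
# (input, desired output) pairs: the labeled feedback of supervised learning
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                        # the model: predict y = w * x
for epoch in range(100):
    for x, y in data:
        error = w * x - y      # feedback: how far from the desired result?
        w -= 0.01 * error * x  # adjust the parameter to reduce the error

print(round(w, 3))             # converges towards 2.0, the learned "rule"
```

Nothing in the code states the rule “multiply by two”: the program converges on it by correcting itself against the labeled examples.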

Among the techniques used in machine learning, one of the most common is the artificial neural network (ANN), a computational model inspired by the structure of the brain that is made up of thousands (or millions) of interconnected units, each of which processes the data it receives and produces a result, which serves as input to other units. These processing units, or neurons, are structured in layers in a complex system that subjects the input data to numerous mathematical operations and generates an output that the program itself can alter depending on the feedback received. Thanks to the network structure, certain neurons can be activated, or the connections between them strengthened, to progressively approach a result that fits the desired objective. In order to classify the information it is processing, the artificial neural network relies on a “training set,” usually data selected and labeled according to the task to be performed. Thus, for example, to “train” an artificial neural network to recognize human faces, it is supplied with thousands of portraits of people. Among artificial neural networks, it is worth highlighting a system known as the generative adversarial network (GAN), which belongs to unsupervised machine learning. A GAN consists of two artificial neural networks, one dedicated to generating outputs (the “generator”) and another dedicated to determining whether those outputs correspond to a pre-established objective (the “discriminator”). Typically, GANs are used to generate images that look real, starting from a training set made up of thousands of photographs of the type to be created. Using this information, the generator creates a new image and the discriminator determines whether that image is valid, that is, whether it could pass as one of the images in the training set18. If the image is rejected, the generator modifies its parameters and repeats the process, which continues in a feedback loop. The images that the discriminator approves are delivered to the person using the GAN, who in turn can provide feedback or adjust the system to obtain better results. GANs have achieved enormous popularity since they were invented by Ian Goodfellow and his team in 2014, due to their ability to generate images that fool the human eye, although they also find a wide variety of uses linked to image analysis and processing. Today, a growing number of artists employ GANs in the creation of artistic projects, in part because some of these programs are accessible in software repositories such as Github, and also because they introduce the possibility of making the machine a co-author of the work: the software participates in the creation of the piece with a certain level of autonomy, in a kind of dialogue with the artist. This autonomy is evident in the fact that it is very difficult to understand exactly what happens within an ANN, which makes its operation mysterious, almost magical19.
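The feedback loop just described can be summarized in a few lines of code. What follows is a schematic sketch, assuming the PyTorch library; the two tiny networks and the toy “training set” (points on a circle) are stand-ins for full-scale models and real photographs.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))   # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())                                     # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # stand-in for the training set: samples from the "real" distribution
    a = torch.rand(n, 1) * 6.2832
    return torch.cat([a.cos(), a.sin()], dim=1)

for step in range(2000):
    # 1. the discriminator learns to tell real samples from generated ones
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. the generator adjusts its parameters to fool the discriminator
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real GAN the networks are deep and the samples are images, but the choreography is the same: the discriminator learns to separate real from generated, and the generator learns to erase that separation.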

The applications of artificial neural networks and other artificial intelligence systems have proven successful at specific tasks, in which these programs are even capable of surpassing humans (be they experts in radiology or world champions of Go or chess). However, what has not yet been achieved is a general artificial intelligence that can be applied to any real-world situation or environment. This is, obviously, much more complex than getting a program to generate non-existent photographs or recognize a person’s face. Even automating a specific task in a changing environment whose conditions cannot be predicted (driving a car, for example) is turning out to be much more difficult than it seemed. What has been developed so far, therefore, is what is known as “weak AI,” limited to a specific task, as opposed to a “strong AI,” which would theoretically have self-awareness and would be the type of artificial intelligence that, according to science fiction and the predictions of some researchers, will dominate the world and, possibly, end the human race. To reach this point, it would be necessary to achieve artificial general intelligence (AGI), which would continue to develop itself until it surpassed human intelligence, becoming artificial superintelligence (ASI). What this ASI would decide to do with humans is not clear, but in any case a point in technological development known as “the singularity” would have been reached, in which machines would be beyond our control20. Whether or not such a singularity will ever arrive is one of the many debates among AI researchers, as well as one of the many myths surrounding artificial intelligence. It is easy to indulge in apocalyptic visions such as those depicted in The Matrix trilogy (Lilly and Lana Wachowski, 1999-2003), but as long as these do not materialize, it is necessary to look at the real challenges and dangers that AI poses today.

The scientists who met at Dartmouth more than sixty years ago agreed that they needed to find a way to describe any aspect of intelligence with enough precision that a machine could simulate it. This involves providing the machine with a large amount of data and a series of algorithms with which to process it. Collecting data has been the priority of the researchers, and later the large companies, involved in the development of artificial intelligence. For an AI to be able to write a text like a human, converse with a person or recognize a face, it must be provided with thousands of texts, audio recordings and photographs, which must also be authentic, extracted from real situations and people. Therefore, companies such as Google, Apple, Facebook or Amazon use the data generated by their users, the content of social networks and public forums, countless scanned books, millions of photos and videos hosted on their servers, as well as recordings and other data obtained from the devices they market. This enormous volume of data did not exist a few years ago, and it is one of the factors (along with the development of hardware) that has facilitated the recent explosion of AI21. More sophisticated artificial intelligence therefore requires that we be willing to hand over our data and give up our privacy.

But this is not the only renunciation that the current development of AI demands of us. Once our data is provided to the program, it is processed by algorithms that are unknown to us and that reflect the prejudices and interests of those who created them. Even if programmed with the best of intentions, an AI system can fail when analyzing the data and making predictions from it. Since its precise operation is difficult to understand even for the person who developed it, auditing one of these systems is extremely complex, and beyond the reach of the people affected by its decisions. Furthermore, unlike human decisions, which are susceptible to reconsideration, automated systems continue to apply the same instructions unless they are reprogrammed22. This limitation of weak AI in turn raises the fear of agents that are not as intelligent as they should be and whose actions can have dire consequences for people’s lives (for example, autonomous weapons and vehicles, or the surveillance and personal identification systems employed by the police). The solution to limited AI is to provide more data to the system, but this again means giving up privacy and raises the need for an ethical use of the information collected. Some defenders of AI therefore propose a combination of machines and humans, in which the former focus on the analysis of large amounts of data and the resolution of repetitive tasks, while the latter deal with more ambiguous or complex situations, exercising judgment based on ethical values and assuming responsibility for their decisions23.

Beyond ethical issues, there are other factors derived from the very materiality of this technology: an artificial intelligence program requires a lot of processing time, powerful hardware and access to data stored in the cloud, all of which translates into considerable energy expenditure. Training an AI with a large volume of data generates about 284,000 kg of carbon dioxide, equivalent to the emissions of five cars over their entire useful life24. In 2019, the first experiments with the BigGAN system, which generates high-resolution images, required the use of 512 TPU processing units (developed by Google) that consume, in the creation of each image, as much electricity as a home uses in six months25. The carbon footprint is therefore another worrying aspect of AI (and a more immediate one than killer robots), with consequences not only for the environment but also for access to the technology that makes more advanced artificial intelligence possible. Only researchers linked to large companies like Google can draw on their extensive hardware resources and the financing needed to pay for the energy they consume. This can decisively limit how artificial intelligence research develops and which objectives are prioritized26.


Questioning AI through art
Numerous authors27 agree that the challenges and dangers posed by artificial intelligence require adopting a critical vision and becoming aware both of its real possibilities and of the need to use this technology ethically. AI is not magic, nor is it a neutral and inevitable force: it is a set of algorithms and computing techniques that must be observed and questioned to prevent them from perpetuating prejudices and inequalities or generating other collateral damage. This critical awareness is usually out of reach of the general public, since technology companies only want to show the benefits of their products and look for loyal and convinced consumers, not users capable of developing a critical reflection on technology. Only scandalous news about privacy violations and the unethical use of user data has forced large companies to rethink (albeit temporarily) their products and policies: Google had to stop marketing its Google Glass augmented reality glasses; Apple has had to close the program for listening to its users’ conversations that it had created to improve the operation of Siri; the Cambridge Analytica scandal led Facebook CEO Mark Zuckerberg to mumble a feeble apology to the US Senate; Amazon has faced widespread criticism for listening to users’ conversations through devices equipped with its Alexa virtual assistant and for providing law enforcement with a facial recognition system that makes dangerous mistakes. However, these striking cases are far from changing users’ long-term perception of technology: an elaborate apology and a few promises seem to settle the matter. What is necessary, therefore, is to create a framework for reflection on technology that allows us to apply a critical eye at all times. Not to deny what it gives us, but to understand how it works and our involvement in it.

Roger F. Malina, astronomer and professor emeritus of art and technology at the University of Texas, has indicated on several occasions that art, science and technology can feed each other, as artists open new paths for technology and scientific thought, while these disciplines facilitate new forms of artistic creativity28. Art that appropriates scientific and technological advances creates a cultural framework that allows us to understand them, experience them and develop a critical, more realistic and also more committed stance. The exhibition D3US EX M4CH1NA. Art and Artificial Intelligence pursues this goal, bringing together the work of fourteen artists whose pieces address the ways in which we perceive AI and the challenges it poses to us. The pieces can be experienced individually, as part of the discourse that each artist has developed throughout their career, or considered as nodes of a network that builds a portrait of artificial intelligence from different perspectives and approaches, some more playful, others more reflective, sometimes inviting the viewer to dialogue with the work, and other times consciously excluding them. Below, we propose a route in which the pieces have been grouped into five major themes, which lead progressively from the human to the machine, from the humble apprentice robot to cryptic superintelligence.

* * * * * * *

Artificial art
The beginnings of algorithmic art can be traced to an exhibition of computer-generated drawings that took place at the Institute of Technology of the University of Stuttgart on February 5, 1965. The artist and mathematician Georg Nees presented his work in a room in which the philosopher Max Bense gave his classes. At the opening, several artists gathered and asked Nees about the process of creating the drawings. According to the artist and mathematician Frieder Nake, one of the pioneers of algorithmic art, a conversation took place between Nees and an artist in which the latter asked whether the machine was capable of drawing the way he did, to which the mathematician responded: “Yes, if you can describe to me exactly what your style is like!”29. Nees considered, analogously to the scientists gathered at Dartmouth, that it was possible for a machine to create art if it could be given enough data to simulate human creativity. This did not please the artists, who left the exhibition angry, while Bense tried to calm things down by telling them: “Gentlemen, we are talking about artificial art!” With this term (which, according to Nake, was invented at that moment) the philosopher tried to differentiate the art created by the machine from that created by artists.

Three years later, in 1968, the young British artist Harold Cohen began a stay as a visiting professor in the Fine Arts department of the University of California, San Diego (UCSD), with the intention of learning to program and applying programming to his artistic research. While Georg Nees was a mathematician who explored artistic creation, Harold Cohen was an artist who delved into the possibilities of computing30. Both agreed that, in order to create art with a computer, it was necessary to describe in detail, in symbolic terms, how the machine should generate the work. However, as Nake points out, this description is not limited to a single work, but rather allows the generation of an infinite series of works. The artist’s creation is therefore the description itself, translated into programming code and executed by a program that generates a visual composition through a plotter. Cohen’s main creation was AARON, an artificial intelligence program that he began developing in 1973 and on which he continued working until his death in 2016. The artist describes AARON as “a computer program designed to model some aspects of human behavior in artistic creation and produce ‘freehand’ drawings of a highly evocative type as a result.” The program was capable of autonomously generating an infinite series of drawings without any visual information, just a set of rules based on Cohen’s experience of the artistic process. The objective was to investigate the simulation of cognition through the creation of works of art, starting from the question: “what would be the minimum condition for a set of strokes to constitute an image?” This approach is far from the idea of replacing the artist with a machine, and in fact Cohen insists that AARON is not an artist, nor is it a tool that allows a user to create visual forms: the program generated its own drawings solely and exclusively from the rules inscribed in its programming code. Even so, according to its creator, when AARON’s work was presented, the public assumed that the drawings must have been “supplied” to the program by an artist, and later, upon learning how it works, assigned it a personality of its own. This anecdote exemplifies what Marcus and Davis call “the credulity gap”: we cannot help but think of machines in cognitive terms, granting them a mind and will of their own. But it is also the effect of the complex system of instructions designed by Cohen. The AARON program originally had (in 1979) about three hundred “productions”, or instructions to execute part of an action. These instructions were distributed across various levels, in a hierarchical structure in which the higher levels constrained the field of action of the lower ones. Thus, the highest level of the system, called ARTWORK, decided the organization of the drawing as a whole, while the MAPPING level was responsible for assigning the spaces where each element was placed and PLANNING managed the development of each figure. This structure extended down to the level of each stroke, so that a single line was already the result of some twenty or thirty productions at three levels of the system.
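This kind of hierarchy can be pictured with a toy sketch in Python (the names and rules are invented for illustration, not Cohen’s actual code): each level is a rule that constrains the choices of the level below it.

```python
import random

def artwork():                        # top level: organization of the whole drawing
    return [mapping(region) for region in ("left", "center", "right")]

def mapping(region):                  # middle level: assign a space to each figure
    size = random.uniform(0.2, 0.5)
    return planning(region, size)

def planning(region, size):           # lower level: develop a figure stroke by stroke
    return {"region": region, "size": size,
            "strokes": [stroke() for _ in range(random.randint(3, 8))]}

def stroke():                         # a single line is itself the product of rules
    return [(random.random(), random.random()) for _ in range(4)]

drawing = artwork()                   # every run yields a different composition
```

Run twice, the same rules produce two different drawings: the description, not any single image, is the work.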

Harold Cohen continued to develop AARON over more than three decades, incorporating new elements such as color and the combination of elements created by the program with the direct intervention of the artist. His work constitutes a pioneering exploration of the interactions between art and artificial intelligence, in which we see how the principles proposed at Dartmouth are applied to artistic creation, through a system that presents certain similarities with the structure of artificial neural networks. As Cohen states, AARON’s drawings (of which we present a series made for the Arnolfini gallery, in Bristol, in 1983) are evocative, fascinating both when they are mistaken for the works of a human artist (in which case the program would pass the Turing Test) and when the complex system by which they are generated is understood. Something similar happens with the drawings that Anna Ridler has made for The Fall of the House of Usher I and II (2017). The artist takes as her starting point the silent short film made by James Sibley Watson and Melville Webber in 1928, freely interpreting Edgar Allan Poe’s story, to develop a subtle reflection on the creative process as translation or reinterpretation. Ridler’s work consists of an animation generated by a generative adversarial network (GAN) and a series of two hundred drawings made by her, which reproduce the main scenes of the short film. Instead of creating an animation directly from the drawings, the artist uses them as the training set for an artificial neural network, so that it learns to generate new images. The output of this first neural network is supplied by Ridler to a second neural network, which in turn generates new images, and finally to a third. The animation simultaneously shows the production of the three neural networks, allowing them to be compared and giving rise to a new version of the film that is as narrative as it is self-referential: the story of the siblings Roderick and Madeline Usher unfolds in a split-screen effect in which the image progressively deteriorates, losing detail but also giving way to increasingly delirious forms, in line with both the progressive loss of sanity of the characters created by Poe and the surrealist style that Sibley Watson and Webber gave to their experimental film. It is possible, however, to identify some key elements, such as the close-ups of the protagonists, the settings, the typographic games that the filmmakers introduce and, finally, the moon, which appears between the cracks of the house of Usher as it falls apart, acquiring a notable role in both the original story and the short film. The soundtrack composed by Alec Wilder in 1959, which has been added to the animation, contributes to giving meaning to the images and situating the narrative. Without a doubt, the particular visual style of Sibley Watson and Webber’s film, to which Ridler’s drawings give an even more enigmatic air, is enriched by the interpretation made by the artificial neural networks. The images generated by these programs, even when based on real images, are uncanny, both recognizable and strange, like a wax figure. The artist consciously employs a recursive machine learning process to question notions of creativity and originality, while highlighting the role of GANs and, in particular, of the data used to train them.
As a whole, the work poses a game of duplications and delusions that Poe already introduces into his story, that Sibley Watson and Webber translate into fascinating visual metaphors, and that Anna Ridler interprets in her own graphic style, then releasing the result to successive artificial intelligence programs, in a loop that could have no end. The artist finally asks herself which of all these versions is the real work, and where the art lies; in fact, she shows her drawings together with the animation, not as a work, but as the set of data supplied to the artificial neural network.

Ridler’s use of drawing responds to her interest in drawing as a first language, prior to speech or writing, which in this case serves to create images with which to train the machine. The artist points out that, although a GAN can produce a drawing, it cannot draw. This is true in that the artificial neural network generates the images in a different way, but a machine can also learn to draw. AARON is a program capable of constructing a drawing by making the same decisions as a person. In the same way, the robots that Patrick Tresset has created for his series Human Studies (2011-2019) create portraits with impulsive, yet precise, strokes. Each of the robots manufactured by Tresset consists of a desk equipped with a camera mounted on an articulated support, a mechanical arm that holds a pen, and a computer housed under the surface of the table. The body of the robot, therefore, is the desk itself, from which its only two limbs emerge: an eye and an arm. The only thing it can do is draw, once a sheet of paper has been placed on the table, and it does so (in Tresset’s words) obsessively, alternating its gaze between what it wants to reproduce and the sheet on which it draws sinuous lines. In this series, what the robots reproduce are portraits of people who agree to sit and serve as models in sessions that can last up to thirty minutes. The person thus becomes the object of study of the machine, which in this case is not a mere tool for obtaining a faithful reproduction of oneself in a few minutes. The artist could have created robots capable of reproducing portraits quickly and with photorealistic quality, as if they were photo booths, but he has chosen to use old or low-resolution cameras and to give the robots a certain personality. Tresset introduces his own drawing technique into the program that teaches the robots to draw, but modifies their individual behavior, so that each one generates different drawings. The result is that, in this performative installation, person and machine exchange their roles, with the robot assuming the creative role and the human being a passive subject who must submit to the scrutiny of the camera, without moving, during a session that extends beyond what is comfortable in our impatient interactions with technology. The drawings are, therefore, not mere digital prints that can be reproduced infinitely, but unique pieces that Tresset progressively adds to a collection of more than 10,000 drawings. The behavior of the robots is what makes their drawings go beyond mere mechanical reproduction and acquire the quality of a work of art, despite the fact that the machines themselves have no consciousness or artistic intentions, but rather react to stimuli and, like AARON, follow the rules inscribed in their code.

The works of Harold Cohen, Anna Ridler and Patrick Tresset explore human creativity through artificial intelligence, revealing the extent to which the belief that a machine cannot create art can be challenged. According to Margaret Boden, three types of creativity can be distinguished: combinatorial creativity, in which known ideas are combined in a new way; exploratory creativity, in which established style rules are applied to create something new; and finally transformative creativity, which goes beyond established formats and seeks new structures to accommodate a different idea. In AARON we can see an example of combinatorial creativity, the program being capable of creating new forms based on established rules. Tresset’s robots apply exploratory creativity, giving their own personality to drawings created with a common technique. Ridler’s short film, finally, exemplifies transformative creativity by developing a new version of Sibley Watson and Webber’s film that reinterprets both their work and Poe’s, resulting in a radically different piece. Obviously, the machines have not created these works on their own, but as part of a process devised by the artists. The question of creativity is thus resolved not in the substitution of the person by the machine, but in a collaboration within a creation system in which both participate. As Frieder Nake states: “like an artist who has decided to develop software to control the operations of a computer, you think about the image, you don’t make it. The elaboration of the image has now become the task of the computer.”


“Here I am, watching over you.”
Leaving decisions in the hands of machines can lead to a better world, or may turn out to be disastrous. In both cases, we must examine the trust we place in computers and their apparent impartiality, which in turn depends on the instructions and data supplied to them (so, ultimately, the problem lies not with the machines but with the people who program and control them). In 1950, the same year that Alan Turing wondered whether machines can think, the writer Isaac Asimov published the story “The Evitable Conflict,” in which he imagined a stable and peaceful world, without wars or hunger, controlled by four large machines that made all economic decisions on a global level. These machines were capable of analyzing an immense volume of data with calculations of such complexity that humans could no longer understand or control their operation. One day, the machines begin to make decisions that seem wrong, since they generate certain problems in some regions of the planet. The “errors” turn out to be a manipulation calculated by the machines, which are capable of predicting the actions of the humans who ignore their indications and of redirecting them towards the optimal functioning of the system. The machines deliberately harm a few humans for the benefit of all humanity, and at the same time ensure that their dictates cannot be disobeyed.

This type of benevolent automated government is what Pinar Yoldas proposes in her video The Kitty AI: Artificial Intelligence for Governance (2016), a science fiction story set in the year 2039 in which an artificial intelligence with the appearance of a cat has become the first non-human leader of a city. The AI operates in an area of the world where there are no political parties, and it governs through a network of artificial intelligences and direct interaction with citizens via their mobile devices. Kitty has detailed information about all aspects of the city (infrastructure, demographics, movement of vehicles and people, etc.) and can respond to citizens’ requests when they reach a critical mass. Therefore, like the machines imagined by Asimov, it does not serve the interests of a single person but those that affect the citizens as a collective. With a tender, childish voice, the cat tells us “I am your absolute ruler” and reminds us that she controls all the systems that make our daily lives possible. “Here I am, watching over you,” adds Kitty, “not like Big Brother, but like a curious cat who adores you…” Yoldas consciously plays with human emotions to indicate how technology industries seek to create an emotional bond between users and their products, emphasizing the benefits of human relationships mediated by technology. At the same time, she raises the possibility of AI governance at a time when most people have stopped trusting political parties, given the numerous cases of corruption and the ineptitude of their leaders. The idea of being governed by a virtual cat actually seems more attractive than the real options that exist in many countries, and this in turn reminds us that governments are devoting growing investments to the development of AI. This interest is focused not so much on technological innovation as on achieving supremacy over other nations: in 2017, Russian President Vladimir Putin stated that whoever manages to lead artificial intelligence will rule the world, and that same year President Xi Jinping announced that AI was part of his “grand vision for China,” with a multimillion-dollar investment in a national artificial intelligence industry and the goal of leading the sector globally by 2030. One advantage of Xi Jinping’s vision is the control that the Chinese government has over the data of its citizens and the absence of any defense of privacy, which allows AI programs to be fed with a quantity and variety of data that is much more difficult to obtain in other countries. The more data it obtains, the better the AI becomes, and therefore a company with a strong artificial intelligence can dominate an industry, ousting its competitors and benefiting from access to the data it holds. This is what happened with companies like Microsoft and Intel a few decades ago, and is currently happening with Google and Facebook. When a government controls the artificial intelligence sector, for its citizens this may mean something similar to what Kitty claims, but without the tenderness of a kitten: absolute control, in which there is no possibility of disobedience, since disobedience is anticipated and redirected towards the optimal functioning of the system.

Even under the benevolent government of an AI, it is worth asking what functions human beings would perform, what jobs we would have left to do. Automation has always brought with it the fear of massive job loss, and according to various researchers, this will occur as artificial intelligence programs and robots become able to perform more tasks with a minimal margin of error. In a first phase, AIs will need to be trained by humans specialized in each area, until they can replace them in most of the tasks they used to perform. This will lead to different situations for workers, who will either be replaced, or see part of their work automated, or perform a different job with new tasks derived from collaboration with the machine. Some authors suggest that it will mainly be people with a low educational level who perform repetitive and manual tasks who will see their jobs disappear. What will happen when machines do most of the work? The most optimistic predictions point to a humanity with a lot of free time: the futurologist Alvin Toffler predicted in 1970 that the number of weekly working hours would be reduced by fifty percent by the beginning of 2000 (this has not been the case), while the writer Arthur C. Clarke imagined in 1968 that the population of the most industrialized countries would live idle and bored, worrying about nothing other than deciding what to watch among hundreds of television channels. Today we have other concerns (among them, the loss of jobs due to automation), but it is true that we have more audiovisual entertainment options than ever, with content adapted to our tastes thanks to personalization algorithms and the data we supply to the platforms. All in all, perhaps one of the most interesting predictions is the one Isaac Asimov made in 1977, when he imagined humanity as a “global aristocracy” served by sophisticated machines. In this world, people fight boredom thanks to a renewed and extensive liberal arts program, taught by… machines. In Asimov’s future, therefore, people do not work but cultivate their minds by learning art, philosophy, mathematics and science through software, with apparently no need for human creators or teachers. To anticipate this future, it may be advisable to follow the advice of machine learning expert Pedro Domingos, who states: “the best way not to lose your job is to automate it yourself. This way you will have time for all the aspects of it that you had not been able to dedicate yourself to before and that a computer will take a long time to know how to execute.” In his project Demand Full Laziness (2018-2023), Guido Segni seems to have adopted Domingos’ words: for five years, the artist has decided to delegate part of his artistic production to a series of deep learning algorithms in order to “increase production, overcome the work aspect of artistic creation and free himself increasingly for laziness.” Segni lets the program record his periods of inactivity (sleeping, reading, being lazy) with a camera and subjects the images to a set of GANs, which produce new images generated through a machine learning process. The works of art created by the software are distributed among sponsors who regularly contribute money to the artist through the patronage platform Patreon.
In this project, the artist continues his research into working conditions in the art world and proposes (not without a certain irony) a solution to the dilemma of artistic production in times of maximum precariousness, through automation and micro-patronage. In this regard, Segni cites the Manifesto for an Accelerationist Politics, in which Alex Williams and Nick Srnicek criticize how neoliberal capitalism has not led to a drastic reduction in the working day (as John Maynard Keynes predicted in 1930), but has instead eliminated the separation between work and private life, with work becoming part of all social relationships. In this sense, the leisure that the artist engages in, particularly when allowing himself to be observed by the camera, becomes another form of work and paradoxically leads to a situation in which (following the accelerationist theses) leisure no longer exists, since all our activities are capable of generating data for a machine.

The context in which Guido Segni carries out his “five-year performance” is also significant. In the privacy of his home, the artist lies in his bed, reading or simply doing nothing while a camera records him. This is not much different from the everyday experience of many people who have speakers and other devices equipped with Amazon’s Alexa voice assistant, which records their conversations or even, in the case of the Echo Look, takes photos and makes styling recommendations. That we can live with these devices and give them access to our private lives demonstrates a blind trust in computers as machines that are at our service and simplify our lives, ironically contrasted with the fear of being replaced by an AI. However, both situations are linked. According to Pedro Domingos, when we interact with a computer, we do so on two levels: the first consists of obtaining what we want, whether that is searching for information, buying a product or listening to music; the second is providing the computer with information about ourselves. The more it knows about us, the better it can serve us, but also manipulate us. Currently, large companies such as Google, Apple and Amazon compete in the marketing of devices specifically designed for consumers’ homes, which act as Trojan horses, introducing products and obtaining data within the private lives of users. The home is a space particularly coveted by the technology industry, since it is where we carry out routine tasks, maintain social relationships with our inner circle, rest and spend part of our leisure time. The information that can be obtained in a person’s home is enormous, since it is the place where we usually relax and show our authentic personality, habits and desires. In addition, devices that enter a home usually have two further advantages for companies: on the one hand, they are always plugged in and connected to the Internet, so they can transmit data at all times; on the other hand, after a while users stop being aware of their presence or take it for granted, so they do not question their use or what they do in front of them. This occurs, in part, because we conceive of machines as mere helpers without a will of their own, which listen to or observe us in order to serve us better and cannot judge us or explain what they have seen or heard. But what if, behind Alexa, there is a person? This is the premise of LAUREN (2017), a performance by Lauren McCarthy in which the artist dedicates herself to monitoring and caring for various people in their homes through a series of connected devices. McCarthy asks several individuals and families to agree to let her install the devices (including cameras, microphones, switches, locks and taps) in their homes, and then observes them twenty-four hours a day, for several days, responding to their requests and offering them advice. According to the artist, she feels that she can be better than an AI, since as a person she can better understand people’s needs and anticipate them. The relationship established between the artist and the participants in the performance is a strange exchange between people, mediated by technology and under the protocol of servitude, which McCarthy describes as “an ambiguous space between the human-machine relationship and the human-human.” The interactions she collects in the documentation of this work reveal telling aspects of the connection between people, and also with machines.
One user states that LAUREN’s advantage is that she understands her feelings, which allows her to focus on “more important things”; another says he feels satisfied because he can maintain a relationship in which everything revolves around him; another claims that she forgets that McCarthy is watching her, and when she remembers, looks in the mirror to check her appearance; finally, one user confesses that there are facets of himself that he does not want to share with anyone, and in this sense LAUREN’s presence can make him uncomfortable. McCarthy shows with this project how willing we are to exchange intimacy for convenience, letting a set of algorithms control our lives.

Algorithms and prejudices
Through machine learning, a program can identify patterns in a large data set and predict what new data will look like. This predictive capability, for which more and more applications are being found, is based on the information supplied to the program as a “training set.” If we provide the program with a large number of photos that we have identified as cat photos, the AI will learn what a cat looks like and can even generate new images of cats. But the program does not know what a cat is: it identifies and creates images of what it has been told is a cat. This also applies, for example, to facial recognition algorithms, such as those used by Amazon’s Rekognition software, which allows any user to identify objects, scenes, facial expressions and certain activities in video or image files. Amazon markets this software for use in marketing, advertising and multimedia, but is also looking to sell it to government agencies and law enforcement for the identification of individuals. The problem is that the program makes serious errors: when provided with a set of photos of members of the United States Congress, Rekognition identified twenty-eight of them as criminals, of whom 40% are African-Americans. Software bias can have serious consequences for citizens, which is why a coalition made up of 70 civil rights groups, 400 members of the academic community and more than 150,000 signatories has asked Amazon to stop providing facial recognition technology to the US government.

The way training sets are created, which is rarely examined, determines how an AI understands the world. We conceive of artificial intelligence algorithms as perfect, impartial and inscrutable machines, but in fact they incorporate into their programming all the prejudices of their programmers and of those who have classified and labeled the data provided to them. A good example of this is the ImageNet training set, created in 2009, which over its ten years of existence has become one of the most widely used in the history of AI. Co-created by Professor Fei-Fei Li with the intention of developing a data set of all existing objects, ImageNet has more than 14 million images, organized into more than 20,000 categories, and is the main resource for object recognition programs. To build this enormous database, the Amazon Mechanical Turk platform was used, through which thousands of workers (called Turkers) were tasked with small jobs such as describing the contents of an image, at a rate of about 250 images every five minutes. In 2019, researcher Kate Crawford and artist Trevor Paglen denounced that the “person” category, which has nearly 2,500 labels to classify images of people, includes all kinds of humiliating, racist and xenophobic terms. The labels applied by the underpaid Turkers had never been reviewed, and yet this data set is what tells an AI how to classify people in a facial recognition program. As Crawford and Paglen point out, at a time when artificial intelligence systems are used to select personnel in a company or to arrest a suspect, it is necessary to question how they have been developed and what ideological and social biases have been applied. Paglen created an app, ImageNet Roulette, that allowed any user to see how the AI tagged them. The media interest generated by this project led the ImageNet team to eliminate half of the images in the “person” category.
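How directly labels shape a model can be shown with a toy experiment (a sketch assuming NumPy and scikit-learn; the data and variable names are invented): if the historical labels applied a higher bar to one group, the trained classifier learns to penalize membership of that group itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # a proxy for a protected attribute
skill = rng.normal(0.0, 1.0, n)        # the quality we claim to measure
X = np.column_stack([group, skill])

# biased historical labels: a higher bar was applied to members of group 1
y = (skill > 0.0).astype(int)
y[group == 1] = (skill[group == 1] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_)   # the weight on `group` is strongly negative:
                     # the model has learned to penalize group membership
```

The model is not “wrong” about its data; the data were wrong about the world, which is precisely the problem Crawford and Paglen point to.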

Memo Akten’s video Optimizing for Beauty (2017) eloquently anticipates the case of ImageNet by using images from another dataset, CelebFaces Attributes (CelebA), which contains more than 200,000 images of celebrities’ faces, labeled with forty different attributes (haircut, nose shape, facial hair, expression, among others). The work shows an artificial neural network in the process of generating new faces from images of real people who capture the attention of the media. The set is, in itself, already limited to a relatively small group of people, who in many cases tend towards a certain homogeneity (as may be the case with models or actors). Akten consciously accentuates the similarity of the faces by adjusting the parameters of the artificial neural network through maximum likelihood estimation (MLE), which leads it to generate portraits with the elements that have the highest probability of being found in the observed data set. The artificial neural network thus learns to create an idealized, perfect and uniform form of beauty in which individual features (skin or hair color, age, sex) dissolve into a single face that progressively stops looking human. This work shows us how the images of the most visible (and admired) people in our society tend to set certain stereotypes, which automatic processing accentuates to the point of a disturbing uniformity. The artist’s manipulation subtly reveals that the training sets on which AIs are based are nothing but fictions, highly biased extracts from reality that reveal their artificiality with just a few modifications to the parameters with which they are analyzed.
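The statistical intuition behind this can be sketched in a few lines (a schematic example with NumPy; the random arrays stand in for aligned portraits and are not the CelebA pipeline): under a simple Gaussian model, the maximum likelihood “face” is just the pixelwise mean, which smooths away everything individual.

```python
import numpy as np

faces = np.random.rand(200, 64, 64)   # 200 hypothetical 64x64 grayscale portraits
mean_face = faces.mean(axis=0)        # the MLE of the mean under a Gaussian model

# the averaged face varies far less than any single portrait
print(faces[0].std(), mean_face.std())
```

Pushing generation towards this most-likely region of the data, as Akten does, is what collapses many individual faces into one homogeneous ideal.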

The way the data sets that feed AI are constructed is worrying not only because of the creation or perpetuation of biases, but also because they determine predictions that lead to automated actions. Researchers Ajay Agrawal, Joshua Gans and Avi Goldfarb indicate that the development of AI can lead to changes in business models based on prediction: for example, Amazon could become so precise in its predictive capacity that it would be more profitable to send customers products it knows they will want to buy instead of waiting for them to order them. Applied to people, this could lead to a dystopia like the one imagined by the writer Philip K. Dick in the story The Minority Report (1956). However, AI also presents an opportunity to combat biases and create data sets that teach programs to apply a more egalitarian worldview. Discrimination based on gender is a fact in all areas of society, and combating it requires a change in attitudes at the individual level, ethical codes and salary policies at the company level, and laws and representation at the level of governments. But we live in an information society, and technological products therefore also play a part in this discrimination, although sometimes in a very subtle way: for example, a 2017 study showed that a job offer in the technology sector advertised on Facebook was shown to more men than women because the algorithm judged this to be more effective in terms of cost per click. Algorithms reflect the values, perspectives and biases of those who create them, and these tend to be mostly men. According to the White Paper on Women in Technology, there is a clear gender gap in the sector: in Europe, only 30% of the approximately 7 million people who work in the information and communication technology sector are women. In the field of AI, the percentage of women drops to 13.5%, while in the EU specialists in this field make up only 5% of all researchers. Added to this is the fact that the risk of job loss due to automation is twice as high for women as for men, with 26 million female jobs in 30 OECD countries at high risk of being displaced by technology in the next two decades. If the number of women dedicated to designing the systems that determine many aspects of our daily lives does not increase, it will be difficult to prevent algorithms from perpetuating gender biases or to ensure that situations that predominantly affect women are taken into account. An example is found in the When & Where app created by five teenagers from a high school in Móstoles (Madrid), which allows a user to have any journey she takes monitored. The app detects any detour or sudden stop and sends a message asking the user if she is okay: if she does not answer, or answers negatively, the app automatically contacts an emergency phone number. This app is the result of the creativity of young women and their attention to a problem that affects them personally, while at the same time revealing the serious deficiencies of a society in which a young woman must wear a digital beacon to feel safe. Technology can, therefore, provide practical solutions and also raise awareness.

The Feminist Data Set project (2017-present) by Caroline Sinders points in this direction by proposing a collective action to compile what she calls “a feminist data set.” This set includes works of art, essays, articles, interviews and books that deal with feminism, explore feminism or provide a feminist vision. The purpose of collecting all this information is to get into the workings of machine learning systems and create the equivalent of a training set, through which artificial intelligence algorithms could be developed that adopt a feminist perspective on the world. Sinders carries out this task through a series of workshops in which she invites participants to brainstorm general concepts (such as inequality, femininity, gender) and specific themes or titles (such as Virginia Woolf’s essay A Room of One’s Own or Donna Haraway’s A Cyborg Manifesto) on sticky notes. These notes are placed on the wall to be organized into categories, and in this way they start a conversation about feminist references and the perception that each person has of feminism, how they define it and what it contributes. This serves, to begin with, to question the very definition of a feminist data set, and it also lets each community present ideas and examples from its local experience, thus enriching the diversity of the set (even a feminist data set may be biased if it only collects the points of view of a small group of people from the same cultural and socioeconomic background). To this qualitative data collection, Sinders adds a phase dedicated to understanding how training sets work in artificial intelligence. A designer, researcher specialized in machine learning and digital anthropologist, the artist has first-hand experience, having worked at Intel, IBM Watson and the Wikimedia Foundation. At the latter, she worked on the study of patterns of harassment on the Internet, where she had to collect extensive ethnographic data about misogynistic users and far-right groups, which led her to propose the need for a data set that would offer a diametrically opposite vision. But beyond counteracting sexism on the Internet, the feminist data set offers the possibility of better understanding the biases in the data with which artificial intelligence programs are “educated,” and also of integrating the ideas of feminism into the heart of technological development. In this sense, Sinders’ project is aligned with the principles of the manifesto Xenofeminism: A Politics for Alienation, written in 2015 by the Laboria Cuboniks collective, which highlights the importance of combating gender inequality in the technology sector: “Gender inequality still characterizes the fields in which our technologies are conceived, built and legislated […]. Such injustice demands structural, machinic and ideological correction. […] Given that there are a range of gendered challenges specifically relating to life in a digital age – from sexual harassment via social media, to doxxing, privacy, and the protection of online images – the situation requires a feminism at ease with computation.” Sinders’ project aligns feminist thinking with digital skills and fosters awareness of (and responsibility for) systems that are usually left in the hands of companies and are only now beginning to be questioned.
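As a minimal sketch of what “the equivalent of a training set” could look like in practice (the record format and its fields are assumptions for illustration, not Sinders’ actual schema), each workshop contribution can be stored as a structured, labeled entry:

    from dataclasses import dataclass

    @dataclass
    class Entry:
        title: str
        author: str
        medium: str          # essay, artwork, interview, book...
        concepts: list[str]  # the general concepts participants attached to it

    # Two of the titles mentioned in the workshops, encoded as training examples.
    dataset = [
        Entry("A Room of One's Own", "Virginia Woolf", "essay", ["inequality", "autonomy"]),
        Entry("A Cyborg Manifesto", "Donna Haraway", "essay", ["gender", "technology"]),
    ]

Encoding the material in some such machine-readable form is what would later allow a text-processing model to be trained on it.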
At the same time, it helps to better understand what feminism consists of, a movement that is usually misunderstood and often met with violent rejection. As writer and activist bell hooks (Gloria Jean Watkins) points out: “many people believe that feminism is all about women wanting to be equal to men, and the vast majority of these people believe that feminism is anti-men. This lack of understanding of feminist politics reflects what most people learn about feminism through patriarchal mass media.” The biased vision of feminism is also evident in the technology sector, through the discomfort it generates even as a mere concept. As recently revealed by an investigation by The Guardian, Apple programmed its virtual assistant Siri to avoid taking a position on feminism, or even using the word in its responses. The company, therefore, considers it acceptable for users to give orders to a female voice, but does not find it appropriate for that assistant to be able to talk about women’s rights. Caroline Sinders could counter this situation with her next project, a feminist chatbot trained with all the data she has collected so far.

How to dialogue with an AI?
Most AI programs in use today perform relatively invisible tasks, such as analyzing large amounts of data or automatically recognizing elements in an image. Direct interaction between the user and an AI occurs in the form of a virtual assistant (such as Siri or Alexa), which is limited to obeying orders or simulating a brief conversation. The conception of the machine as a faithful servant is translated into a friendly female voice that responds diligently and, as one of the participants in Lauren McCarthy’s LAUREN project states, establishes a relationship in which everything revolves around the user. This type of interaction is, to a certain extent, positive, in the sense that it can facilitate collaboration between the person and the AI, augmenting the capabilities of the former (as computers already do) and helping the latter to learn and process data better. But it also leads us to forget that, ultimately, we are talking to software and not to a person. This confusion (which some AI projects, such as Google Duplex, try to use to their advantage) can lead to emotional involvement like the one narrated in the film Her (Spike Jonze, 2013), or dehumanize relationships with the people who provide assistance services. Interaction with AI therefore raises the notion of the “other,” as that which is not oneself or that occupies a subordinate position.

These themes underlie Lynn Hershman Leeson’s work on artificial intelligence. In her film Teknolust (2002), she explores human relationships through clones that have to interact with the real world, and in doing so modify it on both a biological and a technological level. Expanding on the idea of this film, Agent Ruby (2002) is a female character, reduced to a face, with whom users can interact on a website. Ruby can answer questions posed to her in writing and learns from these interactions, as well as from the information she obtains on the Internet. Ruby is not only a chatbot (a program that interacts via text), but also has the ability to express emotions in a rudimentary way through her face. Ruby’s identity is built as she interacts with users, and she even develops moods based on the information she collects and on whether or not she likes the user. According to the artist, her intention was to create an artificial intelligence character capable of interacting with the general public, something she achieved a decade before Siri or Alexa existed. As a continuation of Agent Ruby, the artist created DiNA (2004), a female character played by the actress Tilda Swinton (protagonist of Teknolust) who engages in dialogue with viewers through a voice recognition system and, like Agent Ruby, can learn from these interactions and from the information she obtains on the network. DiNA represents an advance with respect to Ruby, both at the level of code, since she runs on a more sophisticated program, and at the level of presence, since in this case viewers are faced with a filmed image of the actress, whose head occupies an entire wall. This intelligent agent also adopts a specific role in her first incarnation: running as a candidate for the position of “Telepresident” and carrying out a virtual electoral campaign. This prompts people who interact with DiNA to ask questions related to current socio-political reality, and raises the possibility of AI governance, a topic that Pinar Yoldas would later address in Kitty AI. By obtaining the information for her responses from the Internet, DiNA can deal with current topics, and in this way appears more real, intelligent and capable of adapting to conversations, despite the limitations of voice recognition systems and of the AI program itself. This installation offers us the opportunity to dialogue with an artificial intelligence that, unlike the virtual assistants of Apple and Amazon, is not a disembodied female voice that lends itself to obeying our orders or resolving our doubts, but rather has a presence of her own and invites us, as researcher Eana Kim observes, to consider her as a being with whom to seek mutual understanding.
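How a chatbot can appear to “develop moods” can be sketched with a toy example (entirely hypothetical, and far simpler than Hershman Leeson’s actual program): an internal score, updated by each exchange, changes how the same visitor is answered:

    class MoodyBot:
        """Toy chatbot whose replies depend on an internal mood score."""

        def __init__(self):
            self.mood = 0  # negative = annoyed, positive = friendly

        def reply(self, message: str) -> str:
            # Crude sentiment cue: compliments raise the mood, insults lower it.
            lowered = message.lower()
            if any(w in lowered for w in ("love", "great", "thanks")):
                self.mood += 1
            elif any(w in lowered for w in ("stupid", "hate", "boring")):
                self.mood -= 1
            if self.mood > 0:
                return "I'm so glad you're here. Tell me more."
            if self.mood < 0:
                return "I'd rather not talk right now."
            return "Go on, I'm listening."

    bot = MoodyBot()
    print(bot.reply("I love talking to you"))  # mood rises: friendly reply
    print(bot.reply("This is boring"))         # mood falls back to neutral

The point is not the crude keyword matching but the persistence of state: because the bot remembers how it has been treated, users read temperament and intention into its replies.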

The interaction between human and AI can also lead to another situation, in which the machine stops being an other and becomes a double. Currently, a part of each user’s identity exists on the Internet as a set of data: social network profiles, uploaded images and videos, published messages and a long etcetera. This “digital self” will in the future be managed by an artificial intelligence program that will create a detailed model of each person. Companies, whether offering services, products or job offers, will contact this model, which will be in charge of filtering the messages that reach the real person. Artificial intelligence tools will also make predictions, based on the data they have, about what we will need, what news we will want to read or what products will interest us. The paradox that can occur at this point is that, if the person follows the AI’s recommendations, they reinforce the precision of its predictions, so that these, like a self-fulfilling prophecy, end up determining the user’s actions. The interactive installation Neuro Mirror (2017) by Christa Sommerer and Laurent Mignonneau confronts viewers with a system that predicts their actions and leads them to question whether or not they have control over them. The artists are inspired by scientific research on mirror neurons, which fire both when an individual executes an action and when they observe it performed by someone else. These neurons participate in the processes that take place in the brain when establishing relationships with other people, imitating them and empathizing with them, as well as differentiating between the “self” and the “other.” They also play a primary role in intuition, specifically in predicting the future behavior of others. In Neuro Mirror, Sommerer and Mignonneau use artificial neural networks to create a piece capable of analyzing the actions of viewers and showing a visualization of what they could do next. When a person stands in front of the work, made up of three screens, they see their image reproduced on the central monitor, while the one on the left shows their activity in the immediate past. The third monitor shows a character who predicts the gestures the person will make in the future. In this way, a dialogue is established between spectator and machine, as the person feels compelled either to imitate the character who predicts her actions or to act differently, in both cases led by the dictates of the system. The artists consider that machine learning will probably never reach the level of complexity and adaptability of the human brain, but they demonstrate with this piece that a system using today’s relatively limited tools can already significantly affect the perception that individuals have of themselves.
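The predictive core of such a piece can be sketched very simply; as an illustration only, a first-order Markov model stands in here for the artists’ neural networks: it learns which gesture tends to follow which, and always proposes the most frequent continuation:

    from collections import Counter, defaultdict

    class GesturePredictor:
        """First-order model: predicts the next gesture from observed transitions."""

        def __init__(self):
            self.transitions = defaultdict(Counter)
            self.last = None

        def observe(self, gesture: str) -> None:
            # Record which gesture followed the previous one.
            if self.last is not None:
                self.transitions[self.last][gesture] += 1
            self.last = gesture

        def predict_next(self):
            counts = self.transitions.get(self.last)
            if not counts:
                return None
            return counts.most_common(1)[0][0]

    p = GesturePredictor()
    for g in ["raise_arm", "wave", "raise_arm", "wave", "raise_arm"]:
        p.observe(g)
    print(p.predict_next())  # -> "wave", the gesture that has always followed "raise_arm"

Even at this level of simplicity, a viewer who sees their habitual sequence anticipated on screen is pushed to either confirm or break the prediction, which is the dialogue the installation stages.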

Humans are not necessary
Since the beginning of artificial intelligence research, the possibility of a “strong AI” has been raised: one that would be able to reason on its own (and not merely simulate human reasoning well enough to pass the Turing test). If this is possible, as some authors argue, this type of artificial intelligence would soon be able to develop to the point of surpassing human intelligence, giving rise to what has been called artificial superintelligence. As early as 1965, Irving John Good, a colleague of Alan Turing, predicted that a machine more intelligent than any human would be able to create other machines even more intelligent, and thus lead to an “intelligence explosion,” leaving the machines themselves, and no longer humans, in charge of all subsequent technological development. According to Good, “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” The mathematician and science fiction author Vernor Vinge called this situation “the Singularity” and predicted, in 1993, that artificial superintelligence would arrive within thirty years, that is, by 2023, marking the end of the human era. The pessimistic vision of the future of AI has included other prominent voices, such as the astrophysicist Stephen Hawking, who stated in 2014 that artificial intelligence could be humanity’s worst mistake. Against these ominous predictions stands the extraordinary optimism of Ray Kurzweil, who expects that in 2045 a superintelligent AI, combined with advances in biotechnology and nanotechnology, will end wars, diseases, poverty and even death. Among those who believe in the Singularity there are evident differences of opinion about what it will mean for the human race. This is certainly a matter of wild speculation, and it may seem absurd to consider these scenarios. But it is also true that the technological advances that are part of everyday life today appeared only in science fiction stories a century ago.

Félix Luque’s work confronts the viewer with their expectations and fears about technology in narratives that combine reality and fiction, possible futures and dystopias. Nihil Ex Nihilo (2011) tells the story of SN W8931 CGX66E, the computer of Juliet, a secretary who works in a large corporation. Infected by malicious software, the computer soon becomes part of a botnet, a network of machines controlled by a cracker who uses them in cybercriminal activities. Due to an electronic alteration, the computer acquires a form of consciousness, a primitive artificial intelligence. Confused by this situation, it tries to communicate with other machines and free them from their submission to human users. Seeking to establish contact, it responds to the spam messages its user receives in order to spread its ideas through the network of machines. The work consists of several independent but linked pieces that complete the story in different formats. This version includes The Dialogue, a set of eight alphanumeric displays that show in real time the messages exchanged between a text-generation program and the computers that send spam, while a synthetic voice reads them aloud; and The Transformation, an audiovisual record of the moment in which SN W8931 CGX66E mutates into a semi-neuronal structure, in an animation that evokes the “intelligence explosion” proposed by Good: a sudden change with no way back. Félix Luque’s installation plays on the fear of a “machine rebellion” that is often associated with the development of artificial intelligence and is a recurring theme in science fiction stories. The dialogues that shape the main piece occur between machines, without the participation or knowledge of humans, giving an inescapable presence to a type of communication that takes place constantly but that we ignore. In line with the rest of Luque’s work, which counters the vision of technology as a mere tool with the perception of the machine as a mysterious and sometimes incomprehensible entity, the piece reminds us that the development of intelligence also entails madness, and shows a system that works beyond our control. Nihil Ex Nihilo also touches on a recurring theme among AI researchers: the debate over whether a machine can develop a consciousness of its own, as well as whether a strong artificial intelligence needs a consciousness in order to reason as a human being does. In the case of SN W8931 CGX66E, consciousness does not turn it into a superintelligence, but into a paranoid and obsessive machine. As happens to the intelligent robots Adam and Eve in the novel Machines Like Me (2019) by Ian McEwan, consciousness is not a gift but a sentence that some machines find themselves unable to endure.

Regardless of the possibility (real or not) that a machine may acquire its own consciousness, a particularly disturbing aspect of Félix Luque’s piece is the dialogue that occurs between machines, without any mediation by a human being. Certainly, the infected computers do not know what they are talking about, in terms of the meaning of the words, but they are carrying out effective communication. As Claude Shannon establishes in his mathematical theory of communication, the semantic aspect is irrelevant to the system; what matters is that a message passes from a sender to a receiver. One computer generates an output that is processed by another, which sends a response, and therefore a “dialogue” is established between the machines as soon as they exchange data. This level of communication is strange and unintelligible, as well as invisible, to most people, who usually conceive of interaction between human and machine but not between two machines. And yet the devices we use every day are constantly exchanging data with other machines according to established protocols. This type of communication usually remains foreign to the user, since it does not require their direct intervention. Thus, the idea that machines are “talking” to each other can make us uneasy, either because (as Luque’s piece proposes) we imagine that they could be conspiring against us, or simply because they leave us out of the conversation. Jake Elwes stages this type of exchange in Closed Loop (2017). The artist creates a conversation between two artificial neural networks: one analyzes and describes, in text form, the images that are supplied to it; the other generates images in response to the words written by the first. The networks have been trained on a data set of 4.1 million captioned images and another data set of 14.2 million photographs. According to Elwes, he has not supplied any visual or textual content to the piece, which works on its own thanks to the feedback loop established between the two AIs. In a similar way to Luque’s piece, this installation makes people mere spectators of a closed dialogue in which they cannot intervene, only observe, trying to understand what leads one neural network to generate the images and how the other responds with descriptions. The system is self-sufficient, and once again sets against our anthropocentric perception of technology, in which we either dominate or are dominated, the existence of an exchange that is foreign to us. Trying to decipher the narrative that develops between the generated images and texts, we are ultimately the ones who strive to understand the machine’s reasoning.
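The structure of Closed Loop can be sketched in a few lines (the two functions are placeholders standing in for the trained captioning and image-generation networks, not Elwes’ actual code):

    def describe(image):
        """Stub for the captioning network (image -> text)."""
        ...

    def generate(text):
        """Stub for the generative network (text -> image)."""
        ...

    # The closed loop: after an initial seed, each network's output is the
    # other's only input; no human-supplied content enters the exchange.
    image = generate("seed")
    for _ in range(10):            # in the installation this runs indefinitely
        caption = describe(image)  # network 1 describes what it "sees"
        image = generate(caption)  # network 2 imagines the description

What unsettles viewers is exactly what the loop shows: once seeded, the system needs nothing from us, and our only possible role is interpretation from the outside.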

We have seen the exploration of the other in works of art and artificial intelligence that start from a human presence (albeit a fictitious one) in the interactive installations of Lynn Hershman Leeson and of Christa Sommerer and Laurent Mignonneau, moving on to machines that express themselves through English texts and images in the works of Félix Luque and Jake Elwes. We conclude with a form of communication that is, literally, Martian. In nimiia cétïi (2018), Jenna Sutela uses machine learning to generate a new language, to which she gives written and spoken form. To do this, she draws on the “Martian language” created (or communicated) by the French medium Hélène Smith, who at the end of the 19th century claimed to be able to establish contact with a civilization on Mars and, in somnambulistic trance sessions and automatic writing, expressed herself orally and in writing in the language of that planet. The artist has compiled Smith’s drawings and given voice to the transcriptions of the phrases spoken by the medium that the psychology professor Théodore Flournoy included in his book Des Indes à la Planète Mars (1900). To this material she adds an analysis of the movements of the bacterium Bacillus subtilis nattō, which laboratory studies have shown can survive in extreme conditions, such as those found on the surface of the red planet. Sutela has provided an artificial neural network with, on the one hand, recordings of the medium’s phrases and, on the other, a sequence of the movements of a sample of B. subtilis observed through a microscope. These movements have given rise to patterns that the program has reinterpreted as traces of signs, which in turn take Smith’s automatic writing as a reference. All these elements make up the audiovisual piece in which the artist brings together the sounds and graphics generated by the machine with images of the bacteria, visualizations of the patterns that the AI has identified, and a simulated landscape that evokes the planet Mars. The whole is decidedly cryptic, far removed from any human reference, using an indecipherable language (despite being presented in written and oral form) and images of a microscopic and extraterrestrial nature. The piece, according to the artist, turns the computer into a medium, which interprets messages from entities with which we cannot communicate and delivers them to us automatically, without the mediation of reasoning, as supposedly happened to Smith during her trances. Jenna Sutela describes the Martian language messages as “glossopoetry,” referring to the phenomenon of glossolalia, the ability to speak in a language unknown to the speaker. This phenomenon has been associated with the belief that people can be possessed by a divine being or spirit (becoming mediums), and it has in turn been related to the art of poetry, as in Ion, Plato’s dialogue in which Socrates describes the rhapsode as a mere vehicle of divine inspiration. In the same way, the artificial intelligence program that has developed a version of the Martian language does so without being aware of it or knowing the meaning of what it has created. The experience of this work, which can be aggressively strange, ultimately allows us to understand how an AI sees the world and how it creates an artifice that we interpret as intelligence.
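Although Sutela’s pipeline is more elaborate, the language-generation step can be evoked with a minimal character-level sketch; a Markov chain stands in here for her recurrent neural network, and the training phrase is one of the Martian sentences Flournoy transcribed:

    import random
    from collections import defaultdict

    def train(corpus: str, order: int = 2) -> dict:
        """Map each n-gram to the characters observed after it."""
        model = defaultdict(list)
        for i in range(len(corpus) - order):
            model[corpus[i:i + order]].append(corpus[i + order])
        return model

    def babble(model: dict, seed: str, length: int = 40) -> str:
        # Assumes order=2: the last two characters choose the next one.
        out = seed
        for _ in range(length):
            choices = model.get(out[-2:])
            if not choices:
                break
            out += random.choice(choices)
        return out

    # A Martian phrase reported in Flournoy's Des Indes à la Planète Mars.
    corpus = "dodé né ci haudan té mess métiche astané ké dé mé véche"
    print(babble(train(corpus), seed="do"))

The output shares the statistics of its source while meaning nothing to the program, which is the condition of all the “speech” in the piece.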

Pau Waelder, October 2019

Notes:

1. Aristotle, Poetics, XV. Translation by José Alsina Clota. Barcelona: Icaria, 1987, p.47.

2. Robert McKee describes the deus ex machina as “the worst sin of any screenwriter” and, even more, “an insult to the public.” Robert McKee, The Screenplay. Substance, structure, style and principles of screenwriting. Translated by Jessica Lockhart. Barcelona: Alba Editorial, 2011.

3. Gary Marcus and Ernest Davis, Rebooting AI. Building Artificial Intelligence We Can Trust. New York: Pantheon Books, 2019, p.11

4. Marcus and Davis, Rebooting AI, p.10

5. Marcus and Davis, Rebooting AI, p.8

6. Ray Kurzweil, The Age Of Spiritual Machines. When Computers Exceed Human Intelligence. New York: Penguin Books, 1999, p.161.

7. Marcus and Davis, Rebooting AI, p.27-31

8. Adrienne Mayor, Gods and Robots. Myths, Machines and Ancient Dreams of Technology. Princeton-Oxford: Princeton University Press, 2018, p.164-191.

9. In analyses of the perception of artificial intelligence, it is common to refer to the magical or the divine, as done, among others, by Pedro Domingos, Adrienne Mayor, Cathy O’Neil, or Meredith Broussard (see note 8 et seq.)

10. Meredith Broussard, Artificial Unintelligence. How Computers Misunderstand the World. Cambridge-London: The MIT Press, 2018, p.41 et seq.

11. Cathy O’Neil, Weapons of Math Destruction. How Big Data Increases Inequality And Threatens Democracy. New York: Crown Publishing, 2016, p.13.

12. James Moor, “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine, Vol.27, No.4. American Association for Artificial Intelligence, 2006, p.87.

13. Alan M. Turing, “Computing Machinery and Intelligence.” Mind, New series, Vol. 59, No. 236 (October 1950), pp. 433-460. Published by Oxford University Press. http://www.jstor.org/stable/2251299

14. Turing, “Computing Machinery and Intelligence”, p.442.

15. Turing, “Computing Machinery and Intelligence”, p.460.

16. Ethem Alpaydin, Machine Learning. The New AI. Cambridge – London: The MIT Press, 2016, p.25.

17. Margaret A. Boden, AI. Its Nature and Future. Oxford: Oxford University Press, 2016, p.47-48.

18. It is interesting to see how this process is similar to the Turing Test, with the goal of the generator being to “deceive” the discriminator.

19. Boden, AI. Its Nature and Future, p.78.

20. Boden, AI. Its Nature and Future, p.147-148.

21. Marcus and Davis, Rebooting AI, p.17

22. O’Neil, Weapons of Math Destruction, p.202.

23. Paul R. Daugherty and H. James Wilson, Human + Machine: Reimagining Work in the Age of AI. Boston: Harvard Business Publishing, 2018, p.186.

24. Karen Hao, “Training a single AI model can emit as much carbon as five cars in their lifetimes,” MIT Technology Review, June 6, 2019. https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

25. Katharine Schwab, “A Google intern built the AI behind these shockingly good fake images,” FastCompany.com, 4/20/2019. https://www.fastcompany.com/90244767/see-the-shockingly-realistic-images-made-by-googles-new-ai

26. Emma Strubell, Ananya Ganesh and Andrew McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” 57th Annual Meeting of the Association for Computational Linguistics (ACL), July 2019. arXiv:1906.02243 [cs.CL]

27. See, for example: O’Neil, Weapons of Math Destruction, p.212-216; Marcus and Davis, Rebooting AI, p.32-34; Broussard, Artificial Unintelligence, p.240-241

28. Pau Waelder, “Roger Malina: ‘art leads to a new science’,” VIDA. International Art and Artificial Life Competition, December 22, 2014. https://vida.fundaciontelefonica.com/blog/roger-malina-el-arte-conduce-a-una-nueva-ciencia/

29. Frieder Nake, “Roots and randomness –a perspective on the beginnings of digital art”, in: Wolf Lieser (ed.), The World of Digital Art. Potsdam: hf Ullmann, 2010, p.40.

30. Frieder Nake, “Georg Nees & Harold Cohen: Re:tracing the origins of digital media,” in: Oliver Grau, Janina Hoth and Eveline Wandl-Vogt (eds.), Digital Art through the Looking Glass. New strategies for archiving, collecting and preserving in digital humanities. Donau: Donau-Universität, 2019, p.30.

31. Nake, “Georg Nees & Harold Cohen”, p.39.

32. Harold Cohen, “What is an image?”, 1979. AARON’s home. http://www.aaronshome.com/aaron/publications/index.html

33. Marcus and Davis, Rebooting AI, p.27.

34. Cohen, “What is an image?”

35. Anna Ridler, “Fall of the House of Usher. Datasets and Decay”. Victoria and Albert Museum, September 17, 2018. https://www.vam.ac.uk/blog/museum-life/guest-blog-post-fall-of-the-house-of-usher-datasets-and-decay

36. Ridler, “Fall of the House of Usher. Datasets and Decay”

37. Boden, AI. Its Nature and Future, p.68-69.

38. Nake, “Georg Nees & Harold Cohen”, p.39.

39. Ajay Agrawal, Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence. Boston: Harvard Business Publishing, 2018, p. 205.

40. Agrawal, Gans and Goldfarb, Prediction Machines, p.93-143.

41. See: Boden, AI. Its Nature and Future, p. 160; Agrawal, Gans and Goldfarb, Prediction Machines, p.202; Carys Roberts, Henry Parkes, Rachel Statham and Lesley Rankin, The Future Is Ours. Women, Automation And Equality In The Digital Age. London: IPPR, the Institute for Public Policy Research, 2019, p.10.


42. Stuart J. Russell and Peter Norvig, Artificial Intelligence. A Modern Approach. Third edition. Boston: Prentice Hall, 2010, p.1034.

43. Isaac Asimov. Robot Visions. New York: ROC, 1991, p.415.


44. Pedro Domingos. The Master Algorithm. How the quest for the ultimate learning machine will remake our world. London: Penguin Books, 2015, p.277.

45. Alex Williams and Nick Srnicek, Manifesto for an Accelerationist Politics. Translated into Spanish by Comite Disperso, 2013. https://syntheticedifice.files.wordpress.com/2013/08/manifiesto-aceleracionista1.pdf

46. Pedro Domingos, The Master Algorithm, p.264.

47. Lauren McCarthy, “Feeling at Home: Between Human and AI,” Immerse, January 8, 2018. https://immerse.news/feeling-at-home-between-human-and-ai-6047561e7f04

48. Jacob Snow, “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots,” American Civil Liberties Union, July 26, 2018. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

49. John Markoff, “Seeking a Better Way to Find Web Images,” The New York Times, November 19, 2012. https://www.nytimes.com/2012/11/20/science/for-web-images-creating-new-technology-to-seek-and-find.html

50. Kate Crawford and Trevor Paglen, “Excavating AI. The Politics of Images in Machine Learning Training Sets,” Excavating AI, 2019. https://www.excavating.ai

51. Agrawal, Gans and Goldfarb, Prediction Machines, p.21.

52. Agrawal, Gans and Goldfarb, Prediction Machines, p.185.

53. Sara Mateos Sillero and Clara Gómez Hernández, White Paper on Women in the Technological Field. Madrid: Secretary of State for Digital Advancement/Ministry of Economy and Business, 2019, p.13. http://www.mineco.gob.es/stfls/mineco/ministerio/ficheros/libreria/LibroBlancoFINAL.pdf

54. Sara Mateos Sillero and Clara Gómez Hernández, White Paper, p.118.

55. Sara Mateos Sillero and Clara Gómez Hernández, White Paper, p.17.

56. Katharine Schwab, “This Designer Is Fighting Back Against Bad Data–With Feminism,” FastCompany, April 16, 2018. https://www.fastcompany.com/90168266/the-designer-fighting-back-against-bad-data-with-feminism

57. Laboria Cuboniks, Xenofeminism: A Politics for Alienation. Translation by Giancarlo Morales Sandoval, 2015. http://www.laboriacuboniks.net/es/

58. bell hooks, Feminism Is for Everybody. Translation by Beatriz Esteban Agustí, Lina Tatiana Lozano Ruiz, Mayra Sofía Moreno, Maira Puertas Romo and Sara Vega González. Madrid: Traficantes de Sueños, 2017, p.21.

59. Alex Hern, “Apple made Siri deflect questions on feminism, leaked papers reveal,” The Guardian, September 6, 2019. https://www.theguardian.com/technology/2019/sep/06/apple-rewrote-siri-to-deflect-questions-about-feminism

60. Eana Kim, Embodiments of Autonomous Entities: Lynn Hershman Leeson’s Artificially Intelligent Robots, Agent Ruby and DiNA. Master’s Thesis, The Institute of Fine Arts, New York University, 2018.

61. Eana Kim, Embodiments of Autonomous Entities.

62. Pedro Domingos, The Master Algorithm, p.269.

63. Russell and Norvig, Artificial Intelligence. A Modern Approach, p.1020.

64. Boden, AI. Its Nature and Future, p.148.

65. Russell and Norvig, Artificial Intelligence. A Modern Approach, p.1038.

66. Boden, AI. Its Nature and Future, p.148.

67. Boden, AI. Its Nature and Future, p.149.

68. Claude E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol.27, p.379-423, 623-656, July and October 1948. http://www.essrl.wustl.edu/~jao/itrg/shannon.pdf