
Computational Creativity: AI and the Art of Ingenuity

Today, there are robots that make art, move like dancers, tell stories, and even help human chefs devise unique recipes. But is there ingenuity in silico? Can computers be creative? A rare treat for the senses, this thought-provoking event brings together artists and computer scientists who are creating original works with the help of artificially intelligent machines. Joined by leading experts in psychology and neuroscience, they’ll explore the roots of creativity in humans and computers, what artificial creativity reveals about human imagination, and the future of hybrid systems that build on the capabilities of both.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.


WORLD SCIENCE FESTIVAL COMPUTATIONAL CREATIVITY

(Video Clip)

CREATIVITY: IT’S AT THE HEART OF WHO WE HUMANS ARE…

…WITH BRAINS EQUIPPED BY EVOLUTION TO INVENT, TO WRITE, TO MAKE ART AND MUSIC.

WE HUMANS ARE SPECIAL, RIGHT?

BUT WAIT.

WHAT IF A ROBOT COULD PAINT A MASTERPIECE?

WHAT IF ARTIFICIAL INTELLIGENCE COULD COMPOSE GREAT MUSIC?

DOES A ROBOT – A MACHINE WITH ARTIFICIAL INTELLIGENCE – DRIVEN BY CODE AND ALGORITHMS — FEEL AND EXPRESS ITSELF UNIQUELY AND ORIGINALLY?

Will Smith: You’re just a machine, an imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?

Robot: Can you?

OVER SOME 40,000 YEARS, HUMAN CREATIVITY HAS EXPLODED – FROM DRAWINGS ON CAVE WALLS THROUGH THE GREAT ART OF CENTURIES TO COME….

TO EVER MORE ADVANCED TECHNOLOGY AND DESIGN.

NOW, SCIENTISTS — AND ARTISTS — ARE ASKING: CAN A ROBOT TRULY IMAGINE AN ORIGINAL MASTERWORK? OR CAN IT ONLY COPY THE PATTERNS OF A HUMAN PAINTER… OR MUSICIAN?

DOES THIS COMPUTER VERSION OF VIVALDI SOUND LIKE THE REAL THING?

CAN ARTIFICIAL INTELLIGENCE FEEL EMOTION AND TURN IT INTO ART?

DO ALGORITHMS WITH YES-NO OPTIONS HAVE THE POTENTIAL FOR UNLIMITED CREATIVITY?

EVEN THE HUMAN BRAIN, WITH ITS 86 BILLION NEURONS, MAY HAVE CREATIVE LIMITS.

COMPUTATIONAL CREATIVITY IS LEADING US TO ASK NEW QUESTIONS ABOUT HUMAN CREATIVITY.

IS THIS ESSENTIAL HUMAN TRAIT TRULY UNIQUE? HOW FAR CAN IT TAKE US?

WILL ARTIFICIAL INTELLIGENCE BE A COMPETITOR?

OR CAN IT BE A COLLABORATOR, HELPING US TOWARD STILL UNIMAGINED CREATIONS?

JOHN SCHAEFER, RADIO HOST: Hello and welcome to Computational Creativity.

SCHAEFER: My first guest is a member of Google Brain’s Magenta team. He is currently working on neural network models of sound and music and recently produced a synthesizer that designed its own sounds. He’s also a jazz guitarist. Please welcome Jesse Engel.

SCHAEFER: Also with us is an assistant professor at the University of Illinois at Urbana-Champaign in the Department of Electrical and Computer Engineering. He focuses on several surprising creative domains, including the culinary arts and fashion, and on the theoretical foundations of creativity. Please welcome Lav Varshney.

SCHAEFER: Also with us is an associate professor of psychological and brain sciences at Dartmouth College. He’s interested in the neural basis of imagination and in the evolution of human creativity. Please welcome Peter Tse.

SCHAEFER: And finally, an artist in residence at Bell Labs exploring new forms of drawing with virtual reality, biometrics, machine learning and robotics. A former research fellow at MIT’s Media Lab and artist in residence at Google, please welcome Sougwen Chung.

SCHAEFER: Peter, it seems like there are many possible pros and cons for approaching computational creativity. One potential con would be we don’t even fully understand human creativity yet. As someone who has studied the evolution of human creativity, where does it come from, what do we know about it, what don’t we know about it?

[00:04:53] PETER TSE, COGNITIVE NEUROSCIENTIST: So we’re mammals that have descended from a very long lineage, and in the hominin lineage we diverged from the common ancestor with chimpanzees and bonobos about seven million years ago. And we went through a long period of having small brains, we were bipedal, the australopithecines, which you can see in the lower left there and the lower middle. But then around 2.2 million years ago, Homo habilis created stone tools, and these Oldowan stone tools were very simple, basically just broken rocks. But then about 1.8 to 2.0 million years ago, Homo erectus in the upper middle there created a kind of hand axe called the Acheulean hand axe, which hardly changed for a million and a half years. So what’s really striking is how un-innovative Homo erectus was…

SCHAEFER: Had one idea and stuck with it.

TSE: And then Heidelbergensis, which is in the upper right there, is from about six hundred to eight hundred thousand years ago. Again, very non-innovative, and even the Neanderthals, who had the so-called Mousterian tool set, it hardly changed for over one hundred fifty thousand years.

TSE: And then our species shows up, in the case of Europe kind of in the middle of Europe, probably came up the Danube and couldn’t go any further, and their artifacts are incredible, you see the first evidence of instruments, of art. And we can’t know for sure how this came about, because thoughts don’t leave fossils, neuro-circuits don’t leave fossils, all we have are bones and skulls and artifacts. But the artifacts that show up are just incredible; stone flutes… I mean bone flutes, and beautiful art. So something happened in our neural circuitry that afforded us the capacity to think of such things and then paint them or create them. And some of them involve the creation of objects that existed in their world, like that bison up there, but some of them involve the creation of objects that could not exist. And near Ulm, in what is now Germany, there’s a cave where they found this Löwenmensch, this lion-headed human. There was never a lion-headed human, somebody had to imagine that, thirty-five to forty thousand years ago, and then go and build it. I’m very interested in the neural circuitry that could have created that.

SCHAEFER: Just to be clear, this is not a result of some kind of physical evolution, the brain pan doesn’t get… we don’t see an increase in the size of the brain in the species.

TSE: No we do see an increase in the size of the brain from roughly the size of a chimpanzee, three hundred fifty cubic centimeters, up through homo habilis getting up to maybe six hundred CC, homo erectus gets bigger and bigger and the Neanderthals actually had brains that were bigger than ours, much bigger, well not much bigger maybe fourteen hundred, fourteen-fifty CC whereas ours is in the thirteen hundreds. And yet Neanderthals with their bigger brains were not very innovative, their tools hardly changed. But what really seems to change in our skulls and in our brains, because the skull molds to fit the brain… is this, our foreheads got really big. So something about our frontal cortex changed and I suspect this is central to our becoming more creative.

SCHAEFER: So when did this happen, I mean homo sapiens as a species was around for a while and it seems like what you’re saying, it sort of describes a J curve where there’s a sudden explosion of…

TSE: It’s a big puzzle right, so I can look at a skull from one hundred-thirty thousand years ago and it looks more or less like a modern skull. But the first evidence of sort of jewelry and art is maybe seventy, maybe eighty-thousand years ago in South Africa in the Blombos Cave. And in Europe, Cro-Magnon man our species shows up about forty-one thousand years ago give or take a couple thousand of years. And it really seemed to be a big bang, a cultural big bang. And what that was,       I think nobody knows. Was it a change in culture, a change in language? Nobody knows.

SCHAEFER: Lav you’ve studied some of the theoretical foundations of creativity. It is important to at least ask these questions. Because if we’re going to talk about computational creativity, we need to know what we’re starting from.

[00:09:50] LAV VARSHNEY, ENGINEERING THEORIST: Exactly, and so what I’ve been working on is a mathematical theory of creativity which again seems perhaps strange, how can you mathematize something like creativity? And the way I’ve been defining it is, we want things that are both novel as well as high quality in the domain. So if we’re thinking about food, we want it to be flavorful or if we want fashion, we want it to be visually appealing perhaps. But novelty seems to be fairly consistent across domains. Things that change your beliefs are perhaps the most surprising and the most novel.

SCHAEFER: The human brain, as neuroscientists know, has to balance the need for knowing what’s coming next, expectation, the satisfaction the brain gets when it knows the next digit in the sequence or the next word in the sentence, against novelty. Whether you can mathematically determine that sweet spot is still kind of an open question, I would imagine.

VARSHNEY: It is, yeah. How to balance novelty against familiarity, that’s really the question that artists face. You don’t want to deviate too much perhaps, because then it’s not understandable, but you definitely want to push the envelope.

SCHAEFER: So with food, does the computer put together things that no human chef would ever dream of putting together, like blueberries and pork?

VARSHNEY: Yeah, you can definitely come up with novel combinations that have never been done before so things like cocoa powder and saffron and almonds for example, that’s incredibly good.

SCHAEFER: Now…you’ve also been working with an algorithm that…if we’re going to talk about artificial intelligence learning. There are the different algorithms and before we get to Jesse and his musical tools. Talk us through this algorithm that helps learn a machine to learn to create music.

VARSHNEY: Right so what we wanted to do was essentially learn what are the laws of music theory? What are the principles of good music as it were?

VARSHNEY: So what this algorithm is doing is it has two parts. Essentially a teacher that knows Bach and a student who is trying to learn what music is. And they iterate between each other.

So we want to build this hierarchy of concepts in music, and further we want them to be human-interpretable. So that’s roughly what this algorithm tries to do, and then eventually we can use those principles to inform composition and the generative question of creativity.
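To make that teacher-student loop concrete, here is a minimal toy sketch, not the panel’s actual system: the melodies, the candidate rules, the scoring function, and the acceptance threshold are all invented for illustration. The "teacher" scores how much of a small corpus a proposed rule explains, and the "student" keeps the rules that survive, which is the flavor of iteration being described.

```python
# Illustrative sketch only: a toy teacher-student loop for extracting
# human-readable "rules" from a small corpus of melodies. This is NOT the
# speaker's actual algorithm; corpus, rule set, and threshold are invented.

# Toy corpus: melodies written as lists of scale degrees (stand-in for Bach chorales).
corpus = [
    [1, 2, 3, 4, 5, 4, 3, 2, 1],
    [5, 4, 3, 2, 1, 2, 3, 2, 1],
    [1, 3, 5, 4, 3, 2, 1, 7, 1],
]

def teacher_score(rule, melodies):
    """Teacher: fraction of adjacent note pairs the candidate rule explains."""
    hits = total = 0
    for m in melodies:
        for a, b in zip(m, m[1:]):
            total += 1
            if rule(a, b):
                hits += 1
    return hits / total

def student_propose():
    """Student: propose simple, interpretable candidate rules."""
    return {
        "step motion (move by at most 2 scale degrees)": lambda a, b: abs(a - b) <= 2,
        "no repeated notes": lambda a, b: a != b,
        "always ascend": lambda a, b: b > a,
    }

# Iterate: keep the rules the teacher says explain most of the corpus.
accepted = []
for name, rule in student_propose().items():
    score = teacher_score(rule, corpus)
    if score > 0.8:  # acceptance threshold, arbitrary here
        accepted.append((name, score))

for name, score in sorted(accepted, key=lambda x: -x[1]):
    print(f"{name}: explains {score:.0%} of transitions")
```

In a real system the accepted rules would feed back into the next round of proposals, gradually building up the hierarchy of concepts mentioned above.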

SCHAEFER: So is there an analogy to how we as humans would learn the theory of making music?

VARSHNEY: Yeah, in fact the rules that the algorithm generates are exactly or nearly the ones that are taught in music theory courses. They go step by step in roughly the same way, though there are a couple of rules that come out that some of our colleagues in music find kind of interesting as well, things that they hadn’t noticed.

SCHAEFER: Now Jesse, in the field of music we have seen some attempts made at, quote unquote, generated music. Brian Eno, for example, in England has been pursuing this for years. What’s the attraction in taking something so inherently human and trying to expand that into the world of artificial intelligence?

JESSE ENGEL, COMPUTER SCIENTIST, MUSICIAN: That’s a great question. So I think it’s interesting to think of things in the context that creativity and tools have always been co-evolving. You know, the evolution of well-tempered instruments allowed for playing in multiple keys. Algorithms, machine learning, and artificial intelligence are just sort of the next step in this process, where they’re creating new opportunities to interact with composition, new ways to interact with sonic texture, and just new tools to express ourselves creatively.

SCHAEFER: What are you and the Magenta team at Google, what is your kind of approach to the music creativity process?

ENGEL: What’s interesting about machine learning and what’s called AI is that it’s really a data driven way or at least a lot of the current methods are a data driven way of discovering some of these core components to a given set of data, whether it be audio from someone talking or the core components of a recipe or anything like that. And so once you’ve identified these core components, then you can start playing games about how can I mix and match them? The systems aren’t perfect, they fail in interesting ways because they’re trying to do the best they can, and they’re trying to do it according to the data we show them and the data that we show them comes from the natural world. So the mistakes they make are somewhat in line with the natural world. And so they have a kind of more humanistic character to them. Which is actually a great source of, it’s a different kind of creativity. This is creativity through trial and error you know, or mistakes.

[00:15:06] SCHAEFER: Well that sounds very human. Jamaican musicians in the early 60’s listening to offshore radio from the southern American states and thinking, I’ve got to do me some of that R&B stuff and getting it wrong and creating reggae. This is how almost every kind of development in our arts has happened.

ENGEL: Or like in the case of the 808 drum machine, it was originally invented to be a perfectly accurate drummer, it was supposed to be a replacement for a drummer. But it wasn’t, it sounds like a drum in a box. But the way that it was not perfect, with the kick drum and stuff, was actually the starting point for a lot of electronic music and hip-hop. So you have your creative intention and it goes a little bit off, but if you’re guiding your creative intention, if it’s grounded in the world, it can be a great creative source as well.

SCHAEFER: With these machines, is the learning experiential, can it be experiential? Or is it completely based on what we the people have taught it?

ENGEL: I definitely think it can be experiential, the standard way of teaching things is to give it a bunch of exemplars, and then it has to try to… it meaning the algorithm that you come up with has to try to determine some structure within that. But there’s an alternative way where you give a bunch of core components and an algorithm can try to combine these components in different ways. Evolutionary algorithms where you sort of take these two things and you say what if I combine them, I would have a child of them. There’s also reinforcement learning where you sort of give it feedback if it combines two things in a good way, then you say okay good job and it does more things like that. And those are some of the frontiers of these types of algorithms.
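A toy sketch of those last two ideas, the evolutionary "child of two things" and the reinforcement-style "good job" feedback, follows. It is not any production system: the components, the fitness function, and the crossover scheme are all invented for illustration.

```python
# Illustrative sketch only: a tiny evolutionary loop over "component"
# combinations with a reward standing in for reinforcement-style feedback.
import random

components = ["flute", "cat", "snare", "choir", "bass"]

def fitness(combo):
    """Stand-in for feedback: reward diverse combinations, penalize long ones."""
    return len(set(combo)) - 0.5 * max(0, len(combo) - 3)

def crossover(a, b):
    """Make a 'child' by splicing two parent combinations together."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

# Start from random parents and evolve for a few generations.
population = [random.sample(components, 3) for _ in range(8)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # keep the best combinations
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(4)]
    population = parents + children

print("best combination:", max(population, key=fitness))
```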

SCHAEFER: At the table there you have a kind of melody generating system that puts this theory into practice right?

ENGEL: Right, so the Google Magenta project is part of a machine learning research group. So we do fundamental research, but one of the key things I think often gets overlooked is that there’s always a human in the creative process. So it’s an open source project, and we try to get as much engagement as we can from creative coders and artists in the community. And we try to create tools that allow these artists to interact with these algorithms without having to do a bunch of programming on the command line and that type of stuff. And so I just brought along a few of them here to show.

SCHAEFER: Why don’t you demonstrate what one of these can do?

SCHAEFER: Sougwen, while Jesse’s walking over to the table, does this sound analogous to your sort of collaboration?

SOUGWEN CHUNG, ARTIST: I started a robotic drawing collaboration a few years ago with the intent of exploring how we can measure creativity and actually reflect that back as a type of performance. So as I’ve come to work with the robotic arm and with these new machine-learning algorithms, I’ve come to view it as more of a collaborator, because of its ability to reflect our own creative processes. And I think that’s a new process that’s only going to grow more in the future.

SCHAEFER: Now what are we looking at here?

CHUNG: So this is my project Drawing Operations Unit: Generation One. It’s a first, very initial foray into what it would be like to collaborate with a robotic agent. The first generation is really simple. I wanted to show a few things: I wanted to show a human hand and a robotic arm working in harmony, because I feel like that image, and that narrative, is underserved in our culture now, and also to think about how, relationally and behaviorally, the two organic and robotic units can work together to create something beautiful, or not, and kind of explore that side of ambiguity.

SCHAEFER: And Jesse was talking about, I think you were saying that for you getting the feedback from the machine affects your creative process, so it becomes a feedback loop.

CHUNG: Absolutely.

SCHAEFER: You did bring one other video for us to watch. What’s happening here?

CHUNG: Oh, this is just an older installation work of mine. I think it hits upon this sort of unique point in history that we’re in, where a lot of how we work with traditional mediums and how we engage with music and food is being shifted and transmuted by these different technological systems. So this is an installation project that I did a few years ago that combines handmade paper craft with computational graphics to create an environment that’s sort of a union of both.

[00:20:22] SCHAEFER: So it is a genuine collaborative process?

CHUNG: It’s a hybrid process.

SCHAEFER: Let’s take a look.

(Video clip)

SCHAEFER: So the idea that it’s a hybrid process it seemed like maybe you were hedging on whether it was a genuine collaboration yet, is that the goal?

CHUNG: I think definitely that’s the goal for me. I’d like to see more people engage with, you know, machine-learning behaviors as part of a process they adopt into their own work. I think we’re always inching closer and closer toward true collaboration, even with human actors. So there’s a real area to be explored there.

SCHAEFER: So what do you got over there?

ENGEL: Okay. So what I have here is a demonstration of a very common type of model in machine learning where, given a series of notes, it looks for structure within those notes and tries to keep continuing and predicting: what would I play next if I was the one who had played those notes? And there’s a lot of different ways you can use that within a creative process. We’ve taken tens of thousands of popular songs, just the notes from the songs, it’s called MIDI data, and we train these models to find the structure there and then to be able to generate new notes when conditioned on, when listening to, what someone is playing. So if sound works out here… just to keep me at all centered I’m going to put a metronome on… but I can do something where I might have a melody that I’m going to play, and you’ll hear the electric piano, that will be me, then I’m going to hear a response from the algorithm running on my computer here that will be trying to continue what I am playing, it will be a cello… might go something simple

(music)

ENGEL: It might respond

(music)

ENGEL: So there’s some continuity there with what I played before. I have it looping actually, so you can hear the response over and over again. And you can hear some similarities of what I played, but its not doing the job of trying to replicate what I play, it’s trying to continue it in a meaningful way.

SCHAEFER: It’s done a little inversion of some of the…

ENGEL: …yeah, a little inversion, but you see it’s sort of keeping to the same key and stuff, so if I then wanted to go and play chords behind it and sort of say, that sounds real good as a melody for my song, maybe I can build off that and be like…

(music)

ENGEL: …something like that, and all of a sudden I have the start of, of course, a very popular hit song. But that’s just one type of way that you can be using these tools.
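The melody continuation being demonstrated can be sketched in miniature. The snippet below is illustrative only, a toy first-order Markov model rather than the recurrent neural networks Magenta actually trains on MIDI; the training melodies and the sampling scheme are invented here.

```python
# Illustrative sketch only: a toy next-note model that "continues" a primer
# melody, standing in for a far more capable trained sequence model.
import random
from collections import defaultdict

# Tiny stand-in corpus of melodies, written as MIDI pitch numbers.
training_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 62, 60],
]

# Count pitch-to-pitch transitions in the corpus.
transitions = defaultdict(list)
for melody in training_melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def continue_melody(primer, length=8):
    """Sample a continuation: each next note follows the learned transitions."""
    out = list(primer)
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:               # unseen pitch: fall back to the primer's start
            choices = [primer[0]]
        out.append(random.choice(choices))
    return out

primer = [60, 62, 64]                 # what the human just played
print("response:", continue_melody(primer)[len(primer):])
```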

SCHAEFER: And another type of way is the sounds themselves, right? We know what instruments sound like because we made all of them. But there are potentially a gazillion other sounds out there.

ENGEL: So we take a whole bunch of individual instruments and we have it learn the structure across all those different instruments, and then provide that as a tool to a musician, so that you can interpolate between different instruments and create sounds that are novel and new. And yes, it’s sort of a similar idea but applied now to timbre and texture.

ENGEL: So like in this example, we can take the sound originally of a flute. Sounds like a flute. Here’s the model’s best attempts to replicate that sound.

(flute)

ENGEL: And you notice it sounds more or less like a flute, but it messes up in some ways, there’s this upper high-frequency content stuff. But actually, like I was saying, those are the kind of mistakes we enjoy. It’s sort of like back in the old days, in the ’60s, people would turn up their tube amplifiers and get this upper harmonic distortion. And so it’s sort of a natural type of distortion.

[00:25:10] ENGEL: But on the other side, for novelty’s sake, let’s take a cat.

(cat sound effect)

ENGEL: My wife loves cats, so she was like, you have to do a cat… so here’s the model, it was only trained on instruments like guitars and flutes, and this is its best attempt to recreate a cat.

(cat sound effect)

ENGEL: You know it sort of sounds like a cat, but it’s like if the cat was played by something else. So if you take the flute and the cat and you were just to add the numbers corresponding to those waves together it would just sound like the cat and the flute in the same room. You know, there’s two separate sound sources but instead, if we take those core components that the network learns and we mix those core components together then we get something that is a new sound that combines elements of the two sounds together. So it sort of sounds like a flute that meows.

(cat/flute sound effect)

ENGEL: And I can then turn that into an instrument that I can play, and of course it’s very fun as a novelty. But we also want to encourage more than just novelty, you know, the creation of new… not serious but personal expression. So we’ve also created these tools and integrated them into modern musical workflows as well.
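The key move in the cat/flute example, blending in a learned latent space rather than mixing raw waveforms, can be sketched as below. This is illustrative only: the encode and decode functions are hypothetical stand-ins for a trained NSynth-style autoencoder, replaced here with trivial placeholders so the script runs end to end.

```python
# Illustrative sketch only: interpolate two sounds in a learned latent space.
# encode/decode are placeholders, NOT a real trained model.
import numpy as np

def encode(audio):
    """Placeholder encoder: a fixed random projection stands in for a latent code."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    proj = rng.standard_normal((audio.size, 16))
    return audio @ proj

def decode(latent):
    """Placeholder decoder: a real model would synthesize audio from the code."""
    return latent

# Two "sounds" (think: a flute note and a cat meow) as raw sample arrays.
flute = np.sin(np.linspace(0, 2 * np.pi * 440, 16000))
cat = np.random.default_rng(0).standard_normal(16000)

z_flute, z_cat = encode(flute), encode(cat)

# The key move: interpolate the latent codes, not the waveforms.
alpha = 0.5
z_mix = alpha * z_flute + (1 - alpha) * z_cat
meowing_flute = decode(z_mix)
print("mixed latent code shape:", z_mix.shape)
```

Adding the raw waveforms would just put the cat and the flute in the same room, as noted above; interpolating the latent codes is what yields a single new sound.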

SCHAEFER: Lav is there anything similar to that in…

VARSHNEY: Yeah so once you have these components you can start playing around with them even in other domains say like culinary. You can say I want Asian flavored ceviche using dairy products, so then you’ll have an algorithm that’ll generate something pretty flavorful and pretty good. So you can set up… and I think this is probably true for you Jesse as well, once you start using these things you kind of get a sense for what input parameters are good as well so you learn how the tool works and you can make it do better things. And I think actually building the computational creativity system gives you some insight into that.

ENGEL: A great example: this was built with a data set that we created and released to the public. It was built off sample libraries, and the data set was really biased toward trumpets and synthesizers and stuff. So when I play a piano through it, it doesn’t get it just right, and it kind of makes it sound more like a trumpet. So even in the training of a model there’s creative input, in terms of what data you want it to represent, these types of things.

SCHAEFER: So Sougwen let’s talk about the visual arts, there has to be an analogy to be made there.

CHUNG: Well yeah, I was actually thinking about what’s fascinating about Lav and Jesse’s work and I think it’s that it takes place in the site of ambiguity. Like we know that when we train these sort of new models of thinking to do things that we already understand it’s kind of boring, but when it’s to invent a new recipe or if it’s to invent a new sound it becomes a lot more compelling.

SCHAEFER: You found the cat/flute compelling?

CHUNG: I mean well that’s the thing it’s ambiguous and it’s novel but I think there are ways to create our own new narratives using these tools. Like instead of a cat and flute, I can have a recording of my father’s voice and my favorite song, the melody. So suddenly I can hybridize those things it becomes a very meaningful way of engaging with each other and like the historical past. I think while there’s a lot of novelty… like I like a cat/flute, I’d love to play a cat/flute.

SCHAEFER: I knew you did.

CHUNG: But there’s so many different ways we can use these core components to create beautiful, new interpretations that helps us understand these sort of aspects of our histories a little more.

SCHAEFER: Is there a way that these machines, this kind of access that creativity has to those ambiguous places, is that something you’re all working towards?

ENGEL: If we develop these tools farther, if you can treat it more like a garden, you know rather than painting like the individual strokes on a canvas where you’re controlling it in the sense that you put a certain amount of water on a plant, you choose what you plant and how much fertilizer to give it or maybe you’re really compulsive and you do like bonsai, you know the cutting of all the leaves and stuff. But your control is more high level, and part of the interesting part is letting the system do whatever it is that it’s going to do. And you understand the system, you go out each day to your garden and you interact with it as the things grow. So I sort of think of AI gardens in that same sense. There’s various elements that you want control over and there’s various elements that you don’t.

[00:30:08] VARSHNEY: And another thing that these computational creativity algorithms give you is a sense of safety in a way. So you can explore parts of the creative space that you might be afraid of a little bit, because you have this kind of algorithm that’ll guarantee that it’ll work out. It’s a little bit like GPS, it allows you to explore parts of the physical world and you know you can get home. So that’s another thing that really helps me when I use these creative technologies, pushes me a little outside my normal boxes and puts me in different boxes but they’re kind of somewhere else and I know it’ll work out.

CHUNG: Actually, I have a remark about that. I feel the exact opposite about that. As I think about collaborating, in a very performative way, with this robotic arm, whether it’s trained on my own style, which Generation Two is, or it’s just mimicking my movements, there’s a real randomness and sense of unpredictability to it, and a lack of understanding involved, that is kind of exhilarating and exciting, but it is not safe, it doesn’t feel safe or comforting at all, it’s like staring into the void.

SCHAEFER: Well, you know, even among humans there’s the phenomenon of apophenia, where you have two disparate, unrelated bits of stimulus and the brain will try to make some connection where maybe there was no intention to begin with. But a lot of artists, Peter, make a lot of hay with that.

TSE: Sure, and central to our kind of creativity is taking known components and constructing new things from them. So I can say to you, for example: imagine an egg with wings that’s flying around the Statue of Liberty holding an ice cream cone instead of a torch. And you can do that. I could’ve just said anything. And so these mental operations seem to take place in this aspect of our consciousness that is this internal virtual reality that we may call imagination. So consciousness has this sort of dual role, telling us about the world and our bodies in it, but also this internal virtual reality. And this is really central to what we do, and in order to do that I think we have to have experience in the real world with real things. Actually, you know, in some of my work we’ve looked at the mental operations of, for example, rotating a physical object this way or this way. And then we can decode if someone’s imagining rotating it this way or this way. Probably because some similar brain circuits are involved in actually imagining it in your internal virtual reality, which you learned in the real world.

SCHAEFER: There’s a program in France actually Sony France, maybe Jesse you’ve heard this where they fed every Beatles song into this computer so that the computer could learn how the Beatles wrote their songs. And then they asked the computer to generate its own Beatles like songs and you know, they had human lyricists.

(AI Beatles song)

SCHAEFER: If you grew up on the Beatles it’s just like, “eh, alright, that’s kind of cute.” If you didn’t grow up on the Beatles, though, I don’t know, your response might be something like, “oh my gosh, I’ve never heard anything like this. This is brilliant.” So the idea… you’re absolutely right, it takes a conscious receptor to decide whether something… and so this gets us into the whole question of aesthetics and taste, whether something is in fact creative or not.

TSE: Whether something is creative in the good sense, it’s going to be a human consciousness that says, “oh, that tastes good actually.” And if my algorithm says, well, this should taste good, and I say, actually no, it comes down to my conscious experience. And computers, at least as currently configured, are not conscious. Our creativity is very much rooted in our own conscious experience of a world, and so based on my own conscious experience of the world, I form mental models, so I know what it is for you to feel pain because I’ve felt pain. And we’re constantly modeling things that are actually not in the input. I’m modeling what’s going on in your mind, based on what’s going on in my mind, that’s not visible. And so a lot of our creativity is rooted in our mental models, which in turn are rooted in our consciousness, and so I’ll voice a note of skepticism, because in the absence of mental models that are rooted in conscious experience, robots and AI systems I think will at best approach “as if” creativity.

[00:35:20] CHUNG: I actually think that’s one reason the arena of competitive games gets a lot of traction in machine learning: there is a set of constraints, there is a winner and a loser, and styles. So in that way it creates a platform for discussion that’s a little bit… it’s more stable than a conversation about aesthetics. Do you feel like the realm of competitive games, and creating these models that could be conscious, do you think that’s the right approach?

TSE: Well when IBM created Deep Blue it could beat Kasparov. But it was doing a brute force search through zillions of possibilities. Whereas Kasparov would instantly see a pattern so our brain is doing something very different and it could beat him, just like an airplane can fly faster than a bird but it’s doing something very different from a bird, in this case the human brain.

ENGEL: Well, I want to take, just for fun, the contrary point of view on this, because you used the Kasparov example, right, and there have recently been these great successes of the AlphaGo program beating the world champions in Go. The way they do it is not through brute force search, because that’s just computationally intractable, the game of Go is much more complex. They do it through the creation of these functions to represent sort of intuition, you know, to represent roughly how good do I feel this move is. And I can’t tell you exactly why I feel this good, but I have this function that I’ve been learning that says if these pieces are here and these pieces are here, then moving here is a good move. And it’s because I played a whole bunch of games and I usually win when I put my piece here and all those pieces are in those places. And similarly, a lot of the really cutting-edge machine learning and reinforcement learning algorithms explicitly have modeling of… you know, there are some numbers present that are saying, okay, this is my representation of the environment around me, this is my representation of what this other actor is doing, this is a representation of what I am doing as an actor in the system.
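A toy sketch of that idea, picking a move with a learned value function instead of exhaustive search, follows. It is illustrative only: the board, the candidate moves, and the "learned" score are stand-ins, a hand-written heuristic rather than anything trained.

```python
# Illustrative sketch only: choose a move by asking a value function how good
# each candidate position "feels", the rough intuition behind AlphaGo-style play.
def value_estimate(board, move):
    """Stand-in for a trained network's score; here: prefer central squares."""
    row, col = move
    return -((row - 4) ** 2 + (col - 4) ** 2)

def choose_move(board, legal_moves):
    """No exhaustive search: just rank candidate moves by the value function."""
    return max(legal_moves, key=lambda m: value_estimate(board, m))

board = [[0] * 9 for _ in range(9)]                 # empty 9x9 board
legal = [(r, c) for r in range(9) for c in range(9) if board[r][c] == 0]
print("chosen move:", choose_move(board, legal))    # -> (4, 4), the center
```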

ENGEL: So the concept that our creative critique or, you know, analysis is somehow fundamentally a human thing that requires consciousness, I just find it to be a little bit of a small definition, because it’s very easy to imagine agents that can do things similar in nature to what we might consider to be consciousness, but it’s not human consciousness. It’s just a broader definition of how a thing interacts with its environment. So you can have a computer critic and a computer artist that make art that’s optimized for what a computer likes, but it’s not necessarily optimized for what we want. And just because it’s a possibility doesn’t mean… I mean, I think we’re always going to tend toward creating systems that, you know, we enjoy interacting with, that benefit us as a society and benefit us in our own expression.

SCHAEFER: Does it seem like there are limits, Peter you should weigh in as well, to human creativity and is there a limit to what artificial creativity might be?

VARSHNEY: Yeah, so you can figure out what’s the maximum speed a bird can fly through a forest without crashing. So right now the bird is going pretty slow he can navigate sufficiently quickly and avoid all the obstacles and then land on the tree. But if he tries to go too fast he’ll crash into the tree which is what I think happens in the video. And that’s exactly the fundamental speed limit of a flight through forest. And there’s a similar fundamental speed limit for communication, and what we’ve been trying to find and what we seem to be finding is there’s a fundamental speed limit to creativity as well. There’s a fundamental tension, you can’t be too novel in a given domain and you can’t be too high quality in a domain. There’s a tension between the two. So it’s easy to be very novel and low quality just put random stuff together. It’s very easy to be high quality and low novelty just look it up in your favorite cookbook or wherever it is. But doing both simultaneously is very difficult and that’s where the limit is. And the larger the inspiration set, the known stuff, the harder it is to be creative. So if there are people in the audience who want to be creative, you should start a new field where there’s much more low hanging fruit actually.
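One schematic way to write down the tension being described, offered as an illustration rather than the speaker’s published formulation, is as a constrained optimization over candidate artifacts:

```latex
% Schematic only, not a quote of the speaker's results: creativity as a
% constrained optimization over artifacts x, given an inspiration set D of
% known works.
\[
  \max_{x} \; \underbrace{N(x \mid D)}_{\text{novelty relative to known works}}
  \quad \text{subject to} \quad
  \underbrace{Q(x)}_{\text{domain quality}} \ge q_{0}.
\]
% Raising the quality threshold q_0 shrinks the feasible set, and a larger
% inspiration set D makes high novelty harder to reach: the "speed limit" is
% the trade-off frontier between N and Q.
```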

[00:40:02] SCHAEFER: Are there limits do you think to human creativity as Lav was suggesting there are to computational creativity?

TSE: I don’t know, I mean humans have proven to be incredibly creative both in the good sense of making airplanes for flight or for murder. I think of our human imagination as the greatest tool and also the greatest weapon we have because we can imagine the most amazing things and also the most horrible things. Like the holocaust, that was an act of imagination actually. And my hope is that these tools will eventually foster human flourishing and not be used to, you know also create new ways of murdering people. And I think one question that I would have for everybody is; what is the goal of all this? Is it just simply to make new artifacts or is it… I mean when I was a kid there was the Jetsons and they had this robot and you would say “robot do the dishes.” And it would do the dishes. But is it to make a general-purpose intelligence that would effectively be a slave? My guess is in order to make a general-purpose robot like the Jetsons robot, it would have to be conscious so it would understand what I mean when I say change the tires. I have to have a whole model of a cars and everything. But if it’s conscious, then we have the problem of slavery again because it would suffer.

SCHAEFER: So consciousness implies self-awareness, awareness of self as opposed to another or others.

TSE: The having of experience like pain, like seeing.

SCHAEFER: Could a robot be conscious without feeling pain?

TSE: I suppose I mean there are actually humans that don’t feel pain and they can see so yeah.

CHUNG: Sociopaths…

TSE: Emotional pain, yeah.

SCHAEFER: Jesse what about at your end of things?

ENGEL: Yeah, I mean, I think maybe another way of thinking about it, rather than just… it’s really nice to reflect it back on our own human experience. As we’ve developed tools to better control mechanical systems, we really began to better understand how our bodies work. But what’s amazing about these new systems is they do math that’s not human, but it’s similar in a way that can help provide introspection: what are my mental processes? When I’m being self-critical, what are the dynamics of that, how can I interrupt that process? And how does that affect my creative output? You can get introspection in a quantitative way that can help give you words, because whenever I talk about this… I live in the Bay Area, whenever I talk about this I just sound like a hippie. It’s nice to have things that you can point to and say, hey, look, these are actual physical systems here. There’s something here that cuts across human experience that we can have a vocabulary to talk about.

CHUNG: I think, to that point too, it’s important to ask why we’re having this conversation now, and I think part of that, if I can flip the script a little bit, is that we’re generating all this data that’s being collected that’s very, very difficult to wrap our brains around. And these mental models help us understand the complexity and the scale of our own behavior as a collective world, and I think that’s another goal, potentially, of continuing to research these new areas of cognition.

SCHAEFER: So let me ask you to take one more trip to the table there. You gave us a demonstration before of some of the elemental, the basic gestures that this machine, this system is capable of. What if you tried to do something more complex?

ENGEL: So I think what I had here to show was more on the concept of creating tools that actual musicians can use to explore. Like we had that novelty sound of the cat/flute, for example, and so what I’ve done here on my computer is to create a… we want to get this into the hands of artists and musicians and to close the feedback loop, because creativity takes two to tango, there’s the tool and the person, and so we want to get that feedback loop closed. So we created this plug-in for this popular software called Ableton Live; if you’ve ever gone to see any electronic music, most likely it was made with Ableton Live. So here, what we can do is, instead of interpolating between just two instruments, we can create an entire grid of instruments and allow a musician to take sort of a random walk through this space and explore a whole variety of sounds. So what I can do, rather than just play this with my hand, is just play a clip here.

(audio plays)

[00:45:24] ENGEL: So right now the instrument is like an organ sound and you can see this is the software that we have going but you see we’ve created a manifold for people to play on where we can actually just drag through this space. And see the evolution of sound as it goes between different instruments and some of the spots… all of a sudden we’re in a dubstep thing. But you can see the spots where it’s in between the different sounds. And it creates a whole bunch of different textures… alright that might be the start to my next track, or something like that. It gives you ability to interact in this space that if you were trying to design by hand to be able to draw through those things it’d be very difficult to do. Sort of one of those things.

SCHAEFER: Sougwen, where do you see computational creativity headed in the visual arts? Are there sort of analogous tools available to you?

CHUNG: Yeah, definitely, there are a lot of style transfers and this sort of visual assist… style transfer is the neural network reading the visual pixel data, essentially, of an image, a popular painting for instance, and then extracting that style, like Starry Night or what have you, and then transferring that onto your next selfie or a picture of your dog or something. Yeah, it’s compelling. So I think there’s that as a very common approach to using these systems in the visual arts now. What I get really excited about is ways in which we can create new forms of collaboration through the processes that we’re training these neural nets on.
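The style transfer described here typically rests on two losses: one that preserves the content of your photo and one that matches the style statistics (feature correlations) of the painting. The sketch below is illustrative only, with random arrays standing in for the CNN feature maps a real system would extract from a pretrained network.

```python
# Illustrative sketch only: the core losses behind neural style transfer,
# computed on random stand-in "feature maps" rather than real CNN activations.
import numpy as np

def gram(features):
    """Style is captured by feature correlations: the Gram matrix."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

rng = np.random.default_rng(0)
content_feats = rng.standard_normal((64, 32, 32))  # e.g. features of your selfie
style_feats = rng.standard_normal((64, 32, 32))    # e.g. features of Starry Night
output_feats = rng.standard_normal((64, 32, 32))   # features of the image being optimized

content_loss = np.mean((output_feats - content_feats) ** 2)
style_loss = np.mean((gram(output_feats) - gram(style_feats)) ** 2)
total_loss = content_loss + 1e3 * style_loss       # style weight is arbitrary here
print(f"content {content_loss:.3f}  style {style_loss:.5f}  total {total_loss:.3f}")
```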

SCHAEFER: What about the visual arts as performative arts where normally we only see the sausage. We don’t see the grinding and all that but it seems like with what you’re doing you want us to see some of the process.

CHUNG: Very much so. I think for me the creative process is much more important than the outcome, and that’s why drawing as a collaboration, that’s why it is drawing and not dance. I think musical performance is a little more ambiguous, but actually being able to explore the drawn mark in a way that isn’t representational could be a reflection of consciousness, or of exploratory intentionality. And doing that alongside a, I hesitate to say artificial intelligence, but a machine learning system, approaches an understanding, or at least could create a data set that could help us approach an understanding of what the exploratory creative process is.

SCHAEFER: Well, Jesse, you know music throughout history really has had a kind of ambivalent reaction to new technologies: “Oh my gosh, digital music, all the musicians will be put out of work. Oh my gosh, the piano, all the harpsichordists will be put out of work.” And yet we still have real musicians, we still have harpsichordists even. So the idea that there’s room for a whole new generation of creative artists in this field, it’s kind of a new idea, but it seems like it has analogies to things that have gone before.

ENGEL: Right, in some ways it’s a very old idea. That’s where I was going, that this is kind of just part of a long coevolution that’s existed between technology and music, in this case. I think it really just takes a generation that really owns it. They don’t feel attached, partially, to the things that came before, because they weren’t alive when that was really happening, so why should they feel attached to it? Some of it is traditional, I play jazz, but I also like pushing it in new directions. So I think that will always happen, and there will always be people who find strength in the continuity of learning that harpsichord from the 1600s and those types of things.

[00:50:14] SCHAEFER: Alright, you mentioned the word ownership, and it gets at something I said before that I want to come back to, and that is agency. So Lav, you and your team help create this entity that comes up with new fashion ideas, new food recipes, new metal alloys. Who owns that?

VARSHNEY: So the current understanding of US trademark, copyright, and patent law is that it goes into the public domain, because there has to be a human inventor to claim ownership. Machines don’t have property rights, at least currently. And so that’s the difficulty; it creates this perverse incentive for people to pretend they were fundamentally involved in the creative process. But it also raises the natural question: how should we think about property rights in this area? And it’s actually important, because who will fund computational creativity research if you can’t actually patent using those tools eventually? So there’s no backwards flow of incentive, essentially. The nice thing about, let’s say, pencils is that they’re considered research tools. So it’s not the inventor of the pencil that gets property rights on things that are invented using pencils. But when you do invent using a pencil, you can claim property rights on that. So similarly, we need some incentives, perhaps a revision of this understanding, to encourage development of new computational creativity technologies.

SCHAEFER: Peter?

TSE: I also think we need wisdom in handling tools. Any tool can be a two-edged sword; a hammer could be used to murder somebody or build a house, even an atom bomb can deflect an asteroid. But these tools are so powerful that they can kind of take on a life of their own, and that I think is a danger. And I think there’s an example of that in everyone’s pocket, right. These cell phones are amazing. You have all the information available in the universe, with Wikipedia, right there. But it becomes very distracting, and many people… I was walking across the Dartmouth green and the sky was on fire with the red sunset, and a hundred people were doing this. And I almost wanted to say, people, look at this, but I didn’t have the courage to do it. And I noted in my mind that thirty years ago, walking across that Dartmouth green, I would see ten people holding hands, and I hardly see that anymore. And these devices meant to foster human connectivity have in many ways undermined it, because people are more distracted, less attentive, here instead of talking to people like when I was a kid. And AI is very powerful, but it runs the same risk; we have to own it. We have to become agentic ourselves and make sure it serves our ends, creative or otherwise, and that we don’t become slaves to it.

SCHAEFER: But if AI does reach a point Jesse, Sougwen, where it can compose a piece of music, create a genuine work of visual art, something that moves us that genuinely passes the Turing test, that we know where it came from and it uplifts us anyway. You don’t care.

SCHAEFER: Then does that AI entity have some sort of agency, does it have some sort of ownership as opposed to the humans who set it in motion maybe a generation earlier? (pause) We’re in William Gibson territory…science fiction

ENGEL: I mean… it’s all good stuff. I think what you were saying about cell phones and stuff, it’s really fun to think about these scenarios where you start thinking about personification and then constant evolution and stuff. But the fact of the matter is for technology to ever reach that point, we’ll have gone through so many transformative things in our society… like there’s lower hanging fruit than that that will change everything. And so I always think, like there’s lots of groups like Google, open AI, DeepMind that are really focused on more of the short term things of like how does this affect our society here and now and if we ever reach that point, the only way we’ll ever do that if we figure out all the stuff that came before it because otherwise it just won’t happen.

TSE: But can I add that in order to get to that point, we’ll have to have deep mental models of what it means to be a human being, rooted in what we have experienced as human beings. So, for example, you know that scene in Les Mis where Jean Valjean is in prison for nineteen years and he hates the system and he’s angry, and he stays with this priest. He then steals the silverware of the priest, and the police catch him and bring him in. And the priest, in an act of true Christian grace, says release him, he didn’t steal it, I gave it to him, and here, you forgot the platter. And we find that deeply moving and beautiful because we can understand having hit bottom. And then, by having hit bottom, he goes through a mental transformation and he realizes, I’m not this, I need to become a new kind of person. That can only be understood by us because we have human experience, and I’m skeptical that AI is going to be able to understand redemption like that. But then again, maybe if we get to that point they will be human, or very close to us.

[00:55:48] SCHAEFER: Sougwen… tough act to follow there.

CHUNG: No, I think part of how we even categorize what makes us human is how we create our own narratives. Like you explained this sort of profoundly human moment through the narrative of Les Mis. I think right now we’re in the process of creating narratives around machine learning, we’re calling it artificial intelligence, we’re calling it designed intelligence. This is definitely an interesting precursor to that question of agency. We form our own through creating narratives and now we’re able to co-create narratives with these machines. So I think that’s sort of the path forward.

SCHAEFER: The arc of human history has been to acknowledge agency for more and more people. Agency for many generations was restricted to a certain gender and a certain group of people, and we hope that’s a continually expanding definition of agency. Whether it can include things that are artificial, I guess that’s a question that will be answered down the road.



Moderator

John Schaefer, Radio Host

John Schaefer is the host and producer of WNYC’s long-running new music show New Sounds, founded in 1982, which Billboard magazine has called “the #1 radio show for the Global Village,” and of its innovative Soundcheck podcast, which features live performances and interviews with a variety of guests.


Participants

Sougwen Chung, Artist

Sougwen Chung (愫君) is a Chinese-born, Canadian-raised artist based in New York. Her work explores transitional edges — the mark-made-by-hand and the mark-made-by-machine — as an approach to understanding the interaction between humans and computers.

Peter Ulric Tse, Neuroscientist

Peter Ulric Tse is interested in understanding, first, how matter can become conscious, and second, how conscious and unconscious mental events can be causal in a universe where so many believe a solely physical account of causation should be sufficient.

Lav Varshney, Engineering Theorist

Lav Varshney is an assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science (by courtesy), and a research affiliate in the Beckman Institute and in the Neuroscience Program, all at the University of Illinois at Urbana-Champaign.

Jesse Engel, Computer Scientist, Musician

Jesse Engel is a research scientist with Google Brain’s Magenta team, working on the coevolution of artificial intelligence algorithms and their creative applications. He received his bachelor’s degree and Ph.D. from UC Berkeley, where he also completed postdoctoral work.


Transcription

WORLD SCIENCE FESTIVAL COMPUTATIONAL CREATIVITY

(Video Clip)

CREATIVITY: IT’S AT THE HEART OF WHO WE HUMANS ARE…

…WITH BRAINS EQUIPPED BY EVOLUTION TO INVENT, TO WRITE, TO MAKE ART AND MUSIC.

WE HUMANS ARE SPECIAL, RIGHT?

BUT WAIT.

WHAT IF A ROBOT COULD PAINT A MASTERPIECE?

WHAT IF ARTIFICIAL INTELLIGENCE COULD COMPOSE GREAT MUSIC?

DOES A ROBOT – A MACHINE WITH ARTIFICIAL INTELLIGENCE – DRIVEN BY CODE AND ALGORITHMS — FEEL AND EXPRESS ITSELF UNIQUELY AND ORIGINALLY?

Will Smith: You’re just a machine, an imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?

Robot: Can you?

OVER SOME 40,000 YEARS, HUMAN CREATIVITY HAS EXPLODED – FROM DRAWINGS ON CAVE WALLS THROUGH THE GREAT ART OF CENTURIES TO COME….

TO EVER MORE ADVANCED TECHNOLOGY AND DESIGN.

NOW, SCIENTISTS — AND ARTISTS –ARE ASKING CAN A ROBOT TRULY IMAGINE AN ORIGINAL MASTERWORK? OR CAN IT ONLY COPY THE PATTERNS OF A HUMAN PAINTER… OR MUSICIAN?

DOES THIS COMPUTER VERSION OF VIVALDI SOUND LIKE THE REAL THING?

CAN ARTIFICIAL INTELLIGENCE FEEL EMOTION AND TURN IT INTO ART?

DO ALGORITHMS WITH YES-NO OPTIONS HAVE THE POTENTIAL FOR UNLIMITED CREATIVITY?

EVEN THE HUMAN BRAIN, WITH ITS 86 BILLION NEURONS, MAY HAVE CREATIVE LIMITS.

COMPUTATIONAL CREATIVITY IS LEADING US TO ASK NEW QUESTIONS ABOUT HUMAN CREATIVITY.

IS THIS ESSENTIAL HUMAN TRAIT TRULY UNIQUE? HOW FAR CAN IT TAKE US?

WILL ARTIFICIAL INTELLIGENCE BE A COMPETITOR?

OR CAN IT BE A COLLABORATOR, HELPING US TOWARD STILL UNIMAGINED CREATIONS?

JOHN SCHAEFER, RADIO HOST: Hello and welcome to Computational Creativity.

SCHAEFER: My first guest is a member of Google Brain’s Magenta team. He is currently working on neural network models of sound and music and recently produced a synthesizer that designed its own sounds. He’s also a jazz guitarist. Please welcome Jesse Engel.

SCHAEFER: Also with us, is an Assistant professor at the University of Illinois at Urbana Champaign in the Dept. of Electrical and Computer Engineering. He focuses on several surprising creative domains including the culinary arts and fashion and the theoretical foundations of creativity. Please welcome Lav Varshney

SCHAEFER: Also with us is an Associate Professor of psychological and brain science at Dartmouth College. He’s interested in the neural basis of imagination and in the evolution of human creativity. Please welcome Peter Tse.

SCHAEFER: And finally, an artist in residence at Bell Labs exploring new forms of drawing with virtual reality, biometrics, machines learning and robotics. A former research fellow at MIT’s Media lab and artist in residence at Google, please welcome Sougwen Chung.

SCHAEFER: Peter, it seems like there are many possible pros and cons for approaching computational creativity. One potential con would be we don’t even fully understand human creativity yet. As someone who has studied the evolution of human creativity, where does it come from, what do we know about it, what don’t we know about it?

[00:04:53] PETER TSE, COGNITIVE NEUROSCIENTIST: So we’re mammals that have descended from a very long lineage and in the hominin lineage, we diverged from chimpanzees something like seven million years ago, from the common ancestors with chimpanzees and bonobos about seven million years ago. And we went through a long period of having small brains, we were bipedal, the australopithecines which you can see in the lower left there and the lower middle. But then around 2.2 million years ago, homo habilis created stone tools and these Olduvai stone tools were very simple, basically just broken rocks. But then about 1.8 million, 2.0 million years ago, homo erectus in the upper middle there created a kind of hand axe called the Acheulean hand axe which hardly changed for a million and a half years. So it’s really striking is how un-innovative homo erectus was…

SCHAEFER: Had one idea and stuck with it.

TSE: And then heidelbergensis, which is in the upper right there is about six hundred to eight hundred thousand years ago. Again, very non innovative and even the Neanderthals who had the so-called Mousterian tool set, it hardly changed for over one hundred fifty thousand years.

TSE: And then our species shows up, and in the case of Europe kind of in the middle of Europe probably came up the Danube and couldn’t go any further and their artifacts are incredible, you see the first evidence of instruments, of art and we can know for sure how this came about because thoughts don’t leave fossils, neuro-circuits don’t leave fossils, all we have are bones and skulls and artifacts. But the artifacts that show up are just incredible; stone flutes… I mean bone flutes and beautiful art. So something happened in our neural circuitry that afforded us the capacity to think of such things and then paint them or create them. And some of them involve the creation of objects that existed in their world like that bison up there, but some of them involve the creation of objects that could not exist. And near Un in what is now Germany, there’s a cave where they found this Löwenmensch, this lion headed human. So there was never a lion headed human, somebody had to imagine that, thirty-five to forty thousand years ago and then go and build it. I’m very interested in the neural circuitry that could have created that.

SCHAEFER: Just to be clear, this is not a result of some kind of physical evolution, the brain pan doesn’t get… we don’t see an increase in the size of the brain in the species.

TSE: No, we do see an increase in the size of the brain, from roughly the size of a chimpanzee's, three hundred fifty cubic centimeters, up through Homo habilis at maybe six hundred CC; Homo erectus gets bigger and bigger, and the Neanderthals actually had brains that were bigger than ours, maybe fourteen hundred or fourteen-fifty CC, whereas ours is in the thirteen hundreds. And yet the Neanderthals, with their bigger brains, were not very innovative; their tools hardly changed. But what really seems to change in our skulls and in our brains, because the skull molds to fit the brain, is this: our foreheads got really big. So something about our frontal cortex changed, and I suspect that this is central to our becoming more creative.

SCHAEFER: So when did this happen? I mean, Homo sapiens as a species was around for a while, and it seems like what you're describing is a J curve, where there's a sudden explosion of…

TSE: It's a big puzzle, right? I can look at a skull from one hundred thirty thousand years ago and it looks more or less like a modern skull. But the first evidence of jewelry and art is maybe seventy or eighty thousand years ago, in South Africa, in the Blombos Cave. And in Europe, Cro-Magnon man, our species, shows up about forty-one thousand years ago, give or take a couple thousand years. And it really seems to have been a big bang, a cultural big bang. What caused it, I think nobody knows. Was it a change in culture, a change in language? Nobody knows.

SCHAEFER: Lav, you've studied some of the theoretical foundations of creativity. It's important to at least ask these questions, because if we're going to talk about computational creativity, we need to know what we're starting from.

[00:09:50] LAV VARSHNEY, ENGINEERING THEORIST: Exactly. What I've been working on is a mathematical theory of creativity, which perhaps seems strange: how can you mathematize something like creativity? The way I've been defining it is that we want things that are both novel and high quality in the domain. So if we're thinking about food, we want it to be flavorful; if we're thinking about fashion, we want it to be visually appealing, perhaps. But novelty seems to be fairly consistent across domains: things that change your beliefs are perhaps the most surprising and the most novel.
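
One common way to make "things that change your beliefs" precise is Bayesian surprise: how much a belief distribution shifts when the new artifact is observed, paired with a separate quality score. The sketch below is a toy illustration in that spirit, not Varshney's actual formulation; the flavor-style categories, prior, and weighting are all invented for the example.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) in bits; p and q are discrete distributions over the same outcomes."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def bayesian_surprise(prior, likelihood):
    """Surprise of an observation = KL(posterior || prior) under a simple Bayes update."""
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return kl_divergence(posterior, prior)

# Toy example: belief over which flavor-pairing "style" a dish comes from (made up).
prior = np.array([0.70, 0.25, 0.05])            # classical, fusion, avant-garde
likelihood_safe  = np.array([0.80, 0.15, 0.05])  # a familiar pairing fits classical styles
likelihood_weird = np.array([0.05, 0.25, 0.70])  # an unexpected pairing fits avant-garde

def creativity_score(surprise, quality, weight=0.5):
    """A stand-in for 'novel AND high quality': a simple weighted combination."""
    return weight * surprise + (1 - weight) * quality

print(bayesian_surprise(prior, likelihood_safe))   # low surprise: beliefs barely move
print(bayesian_surprise(prior, likelihood_weird))  # high surprise: beliefs shift a lot
print(creativity_score(bayesian_surprise(prior, likelihood_weird), quality=0.8))
```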

SCHAEFER: The human brain, as neuroscientists know, has to balance the need to know what's coming next, the satisfaction the brain gets when it knows the next digit in the sequence or the next word in the sentence, against novelty. Whether you can mathematically determine that sweet spot is still kind of an open question, I would imagine.

VARSHNEY: It is, yeah. How to balance novelty against familiarity, that's really the question artists face. You don't want to deviate too much, perhaps, because then it's not understandable, but you definitely want to push the envelope.

SCHAEFER: So with food, does the computer put together things that no human chef would ever dream of putting together, like blueberries and pork?

VARSHNEY: Yeah, you can definitely come up with novel combinations that have never been done before, things like cocoa powder and saffron and almonds, for example. That's incredibly good.

SCHAEFER: Now… you've also been working with an algorithm that… if we're going to talk about artificial intelligence learning, there are different algorithms, and before we get to Jesse and his musical tools, talk us through this algorithm that helps a machine learn to create music.

VARSHNEY: Right, so what we wanted to do was essentially learn the laws of music theory. What are the principles of good music, as it were?

VARSHNEY: So this algorithm has two parts: essentially a teacher that knows Bach and a student that is trying to learn what music is. And they iterate between each other.

VARSHNEY: We want to build this hierarchy of concepts in music, and further, we want the concepts to be human-interpretable. That's roughly what this algorithm tries to do, and eventually we can use those principles to inform composition and the generative question of creativity.
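
Varshney's description stays at a high level, so here is one way such a teacher/student iteration could look in miniature. This is a toy sketch under invented assumptions (interval patterns mined from a tiny corpus, with a recurrence threshold standing in for the teacher), not the team's actual algorithm.

```python
from collections import Counter

# Toy corpus: melodies as lists of scale degrees (stand-ins for Bach chorales).
corpus = [
    [1, 2, 3, 4, 5, 4, 3, 2, 1],
    [1, 3, 5, 4, 3, 2, 1],
    [5, 4, 3, 2, 1, 2, 3],
]

def intervals(melody):
    """Melodic intervals between successive notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

def student_proposals(corpus, n):
    """Student: propose every length-n interval pattern that appears in the corpus."""
    counts = Counter()
    for melody in corpus:
        iv = intervals(melody)
        for i in range(len(iv) - n + 1):
            counts[tuple(iv[i:i + n])] += 1
    return counts

def teacher_filter(proposals, min_count=2):
    """Teacher: keep only patterns that recur, i.e. plausible 'rules' of the style."""
    return {pattern for pattern, count in proposals.items() if count >= min_count}

# Iterate over longer and longer patterns, building a small hierarchy of concepts.
hierarchy = {n: teacher_filter(student_proposals(corpus, n)) for n in range(1, 4)}
for n, rules in hierarchy.items():
    print(f"length-{n} interval patterns kept:", sorted(rules))
```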

SCHAEFER: So is there an analogy to how we as humans would learn the theory of making music?

VARSHNEY: Yeah, in fact the rules the algorithm generates are exactly, or nearly, the ones that are taught in music theory courses, and they come out step by step in roughly the same order. Though there are a couple of rules that come out that some of our colleagues in music find kind of interesting as well, things they hadn't noticed.

SCHAEFER: Now, Jesse, in the field of music we have seen some attempts at, quote unquote, generated music. Brian Eno, for example, in England has been pursuing this for years. What's the attraction in taking something so inherently human and trying to expand it into the world of artificial intelligence?

JESSE ENGEL, COMPUTER SCIENTIST, MUSICIAN: That's a great question. I think it's interesting to see this in the context that creativity and tools have always been co-evolving. The evolution of well-tempered instruments, for example, allowed for playing in multiple keys. Algorithms, machine learning, and artificial intelligence are just the next step in that process: they're creating new opportunities to interact with composition, new ways to interact with sonic texture, and new tools to express ourselves creatively.

SCHAEFER: What is the approach that you and the Magenta team at Google take to the music creativity process?

ENGEL: What's interesting about machine learning, and what's called AI, is that it's really a data-driven way, or at least a lot of the current methods are a data-driven way, of discovering the core components of a given set of data, whether that's audio of someone talking or the core components of a recipe or anything like that. And once you've identified those core components, you can start playing games with how you mix and match them. The systems aren't perfect; they fail in interesting ways, because they're trying to do the best they can according to the data we show them, and the data we show them comes from the natural world. So the mistakes they make are somewhat in line with the natural world, and they have a kind of more humanistic character to them. Which is actually a great source of… it's a different kind of creativity: creativity through trial and error, through mistakes.

[00:15:06] SCHAEFER: Well that sounds very human. Jamaican musicians in the early 60’s listening to offshore radio from the southern American states and thinking, I’ve got to do me some of that R&B stuff and getting it wrong and creating reggae. This is how almost every kind of development in our arts has happened.

ENGEL: Or, in the case of the 808 drum machine, it was originally invented to be a perfectly accurate drummer; it was supposed to be a replacement for a drummer. But it wasn't, it sounds like a drum in a box. And the way it was not perfect, with the kick drum and so on, was actually the starting point for a lot of electronic music and hip-hop. So your creative intention can go a little bit off, but if you're guiding it, if it's grounded in the world, that can be a great creative source as well.

SCHAEFER: With these machines, is the learning experiential, can it be experiential? Or is it completely based on what we the people have taught it?

ENGEL: I definitely think it can be experiential. The standard way of teaching these systems is to give them a bunch of exemplars, and then the algorithm has to try to determine some structure within those. But there's an alternative where you give it a set of core components and the algorithm tries to combine those components in different ways. There are evolutionary algorithms, where you take two things and say, what if I combined them into a child of the two. There's also reinforcement learning, where you give it feedback: if it combines two things in a good way, you say, good job, and it does more things like that. Those are some of the frontiers of these types of algorithms.
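
As a concrete illustration of the evolutionary idea Engel mentions (combine two parents, keep the children a scoring function likes), here is a minimal sketch. The melody representation and the fitness function are invented for the example; real evolutionary or reinforcement-learning setups are far richer.

```python
import random
random.seed(0)

# Toy "melodies": short lists of scale degrees. The fitness function is a stand-in
# for the kind of feedback Engel describes (reward smooth motion that ends on the tonic).
def fitness(melody):
    smooth = -sum(abs(b - a) for a, b in zip(melody, melody[1:]))
    ends_on_tonic = 5 if melody[-1] == 1 else 0
    return smooth + ends_on_tonic

def crossover(parent_a, parent_b):
    """Combine two parents into a child by splicing them at a random point."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(melody, rate=0.2):
    return [random.randint(1, 8) if random.random() < rate else note for note in melody]

population = [[random.randint(1, 8) for _ in range(8)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # keep the better half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

print("best melody found:", max(population, key=fitness))
```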

SCHAEFER: At the table there you have a kind of melody generating system that puts this theory into practice right?

ENGEL: Right. The Google Magenta project is part of a machine learning research group, so we do fundamental research, but one of the key things that I think often gets overlooked is that there's always a human in the creative process. It's an open-source project, and we try to get as much engagement as we can from creative coders and artists in the community. And we try to create tools that allow those artists to interact with these algorithms without having to do a bunch of programming on the command line and that type of stuff. So I brought a few of them along here to show.

SCHAEFER: Why don’t you demonstrate what one of these can do?

SCHAEFER: Sougwen, while Jesse’s walking over to the table, does this sound analogous to your sort of collaboration?

SOUGWEN CHUNG, ARTIST: I started a robotic drawing collaboration a few years ago with the intent of exploring how we can measure creativity and actually reflect that back as a type of performance. As I've come to work with the robotic arm and with these new machine-learning algorithms, I've come to view it more as a collaborator, because of its ability to reflect our own creative processes. And I think that's a practice that's only going to grow in the future.

SCHAEFER: Now what are we looking at here?

CHUNG: So this is my project Drawing Operations Unit: Generation One. It's a first, very initial foray into what it would be like to collaborate with a robotic agent. The first generation is really simple. I wanted to show a few things: a human hand and a robotic arm working in harmony, because I feel that image, and that narrative, is underserved in our culture right now; and also to think about how, relationally and behaviorally, two units, one organic and one robotic, can work together to create something beautiful, or not, and to explore that side of ambiguity.

SCHAEFER: And Jesse was talking about, I think you were saying that for you getting the feedback from the machine affects your creative process, so it becomes a feedback loop.

CHUNG: Absolutely.

SCHAEFER: You did bring one other video for us to watch. What’s happening here?

CHUNG: Oh, this is an older installation work of mine. I think it hits upon the unique point in history that we're in, where a lot of how we work with traditional mediums, and how we engage with music and food, is being shifted and transmuted by these different technological systems. So this is an installation project I did a few years ago that combines handmade paper craft with computational graphics to create an environment that's a union of both.

[00:20:22] SCHAEFER: So it is a genuine collaborative process?

CHUNG: It’s a hybrid process.

SCHAEFER: Let’s take a look.

(Video clip)

SCHAEFER: So the idea that it’s a hybrid process it seemed like maybe you were hedging on whether it was a genuine collaboration yet, is that the goal?

CHUNG: I think that's definitely the goal for me. I'd like to see more people engage with machine-learning behaviors as part of a process they adopt into their own work. I think we're always inching closer and closer toward true collaboration, even with human actors. So there's a real area to be explored there.

SCHAEFER: So what do you got over there?

ENGEL: Okay. So what I have here is a demonstration of a very common type of model in machine learning: given a series of notes, it looks for structure within those notes and tries to keep continuing them, predicting what it would play next if it were the one that had played those notes. There are a lot of different ways you can use that within a creative process. We've taken tens of thousands of popular songs, just the notes from the songs, it's called MIDI data, and we train these models to find the structure there and then to generate new notes conditioned on what someone is playing. So, if the sound works out here… just to keep me centered, I'm going to put a metronome on… I can play a melody in, and the electric piano you'll hear will be me, and then I'll get a response from the algorithm running on my computer here that will try to continue what I'm playing; it will be a cello. Might go something simple…

(music)

ENGEL: It might respond

(music)

ENGEL: So there's some continuity there with what I played before. I have it looping, actually, so you can hear the response over and over again. And you can hear some similarities to what I played, but it's not doing the job of trying to replicate what I play; it's trying to continue it in a meaningful way.

SCHAEFER: It’s done a little inversion of some of the…

ENGEL: …yeah, a little inversion, but you see it's sort of keeping to the same key and so on. So if I then wanted to go and play chords behind it, and say that sounds really good as a melody for my song, maybe I can build off that and be like…

(music)

ENGEL: …something like that, and all of a sudden I have the start of, of course, a very popular hit song. But that's just one way you can use these tools.
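
The model Engel is demonstrating is a neural sequence model trained on MIDI. As a rough stand-in for the core idea, look at what was just played, predict what comes next, and keep going, here is a tiny Markov-chain continuation sketch; it is a toy under its own assumptions, not Magenta's model.

```python
import random
from collections import defaultdict
random.seed(1)

# Training data: note sequences (MIDI pitch numbers), stand-ins for a large MIDI corpus.
songs = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [67, 65, 64, 62, 60, 62, 64],
]

# Learn first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for song in songs:
    for a, b in zip(song, song[1:]):
        transitions[a].append(b)

def continue_melody(seed, length=8):
    """Continue a seed melody by repeatedly sampling a plausible next note."""
    melody = list(seed)
    for _ in range(length):
        options = transitions.get(melody[-1])
        if not options:                 # unseen note: fall back to repeating it
            options = [melody[-1]]
        melody.append(random.choice(options))
    return melody

print(continue_melody([60, 62, 64]))    # the "response" to what was just played
```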

SCHAEFER: And another way is the sounds themselves, right? We know what instruments sound like because we made all of them. But there are potentially a gazillion other sounds out there.

ENGEL: So we take a whole bunch of individual instruments and have the model learn the structure across all those different instruments, and then provide that as a tool to a musician, so that you can interpolate between different instruments and create sounds that are novel and new. It's a similar idea, but applied now to timbre and texture.

ENGEL: So, in this example, we can take the sound of a flute. Sounds like a flute. Here's the model's best attempt to replicate that sound.

(flute)

ENGEL: And you notice it sounds more or less like a flute, but it messes up in some ways; there's this upper high-frequency content. But like I was saying, those are the kinds of mistakes we enjoy. It's a bit like back in the old days, in the 60s, when people would turn up their tube amplifiers and get this upper-harmonic distortion. So it's a natural type of distortion.

[00:25:10] ENGEL: But on the other side, for novelty's sake, let's take a cat.

(cat sound effect)

ENGEL: My wife loves cats, so she was like, you have to do a cat. So here's the model; it was only trained on instruments like guitars and flutes, and this is its best attempt to recreate a cat.

(cat sound effect)

ENGEL: You know, it sort of sounds like a cat, but as if the cat were played by something else. If you take the flute and the cat and you just add together the numbers corresponding to those waves, it sounds like a cat and a flute in the same room, two separate sound sources. But instead, if we take those core components that the network learns and mix those core components together, then we get a new sound that combines elements of the two. So it sort of sounds like a flute that meows.

(cat/flute sound effect)

ENGEL: And I can then turn that into an instrument that I can play, and of course that's very fun as a novelty. But we also want to encourage more than just novelty: the creation of new… not serious, but personal expression. So we've also created these tools and integrated them into modern musical workflows as well.
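
What Engel describes, mixing the learned "core components" rather than the raw waveforms, is essentially interpolation in a learned latent space. The sketch below contrasts the two operations with a toy "encoder" and "decoder" (a magnitude spectrum and an inverse FFT); a real system learns both ends from data, so treat every detail here as an assumption for illustration.

```python
import numpy as np

sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)

# Stand-ins for the two source sounds (a real system would use recorded audio).
flute = 0.5 * np.sin(2 * np.pi * 440 * t)
cat   = 0.5 * np.sin(2 * np.pi * 600 * t) * np.exp(-4 * t)   # a crude decaying "meow"

# Option 1: add the raw waveforms. You just hear both sources at once.
mixture = flute + cat

# Option 2 (the idea Engel sketches, in toy form): encode each sound to a compact
# code, interpolate the codes, then decode. Here the "encoder" is a magnitude
# spectrum and the "decoder" an inverse FFT with zero phase; a real model learns both.
def encode(x):
    return np.abs(np.fft.rfft(x))

def decode(z):
    return np.fft.irfft(z, n=len(t))

z_blend = 0.5 * encode(flute) + 0.5 * encode(cat)
flute_that_meows = decode(z_blend)

print(mixture.shape, flute_that_meows.shape)
```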

SCHAEFER: Lav is there anything similar to that in…

VARSHNEY: Yeah, so once you have these components you can start playing around with them, even in other domains, say culinary. You can say, I want an Asian-flavored ceviche using dairy products, and then you'll have an algorithm that'll generate something pretty flavorful and pretty good. And I think this is probably true for you as well, Jesse: once you start using these things you get a sense for which input parameters are good, so you learn how the tool works and you can make it do better things. I think actually building the computational creativity system gives you some insight into that.

ENGEL: A great example: this was built with a data set that we created and released to the public. It was built off sample libraries, and the data set was really biased toward trumpets, synthesizers, and things like that. So when I play a piano through it, it doesn't get it quite right; it kind of makes it sound more like a trumpet. So even in the training of a model there are creative inputs, in terms of what data you want it to represent, these types of things.

SCHAEFER: So Sougwen let’s talk about the visual arts, there has to be an analogy to be made there.

CHUNG: Well, yeah. I was actually thinking about what's fascinating about Lav's and Jesse's work, and I think it's that it takes place in a site of ambiguity. We know that when we train these new models of thinking to do things that we already understand, it's kind of boring, but when it's to invent a new recipe or to invent a new sound, it becomes a lot more compelling.

SCHAEFER: You found the cat/flute compelling?

CHUNG: I mean, well, that's the thing: it's ambiguous and it's novel, but I think there are ways to create our own new narratives using these tools. Instead of a cat and a flute, I could take a recording of my father's voice and the melody of my favorite song. Suddenly, being able to hybridize those things becomes a very meaningful way of engaging with each other and with the historical past. So I think, while there's a lot of novelty… I like a cat/flute, I'd love to play a cat/flute.

SCHAEFER: I knew you did.

CHUNG: But there are so many different ways we can use these core components to create beautiful new interpretations that help us understand these aspects of our histories a little more.

SCHAEFER: Is there a way that these machines, this kind of access that creativity has to those ambiguous places, is that something you’re all working towards?

ENGEL: If we develop these tools further, you can treat it more like a garden rather than painting the individual strokes on a canvas. You're controlling it in the sense that you put a certain amount of water on a plant, you choose what you plant and how much fertilizer to give it, or maybe you're really compulsive and you do bonsai, the cutting of all the leaves and so on. But your control is at a higher level, and part of the interesting part is letting the system do whatever it's going to do. You understand the system, you go out to your garden each day, and you interact with it as things grow. So I think of AI gardens in that same sense: there are elements you want control over and elements you don't.

[00:30:08] VARSHNEY: And another thing these computational creativity algorithms give you is a sense of safety, in a way. You can explore parts of the creative space that you might be a little afraid of, because you have this algorithm that will guarantee it works out. It's a little like GPS: it allows you to explore parts of the physical world because you know you can get home. So that's another thing that really helps me when I use these creative technologies; it pushes me a little outside my normal boxes and puts me in different boxes, somewhere else, but I know it'll work out.

CHUNG: Actually, I have a remark about that: I feel the exact opposite. As I think about collaborating, in a very performative way, with this robotic arm, whether it's trained on my own style, which Generation Two is, or it's just mimicking my movements, there's a real randomness and sense of unpredictability to it, and a lack of understanding involved, that's kind of exhilarating and exciting, but it is not safe. It doesn't feel safe or comforting at all; it's like staring into the void.

SCHAEFER: Well, you know, even among humans there's the phenomenon of apophenia, where you have two disparate, unrelated bits of stimulus and the brain will try to make some connection where maybe there was no intention to begin with. But a lot of artists, Peter, make a lot of hay with that.

TSE: Sure, and central to our kind of creativity is taking known components and constructing new things from them. So I can say to you, for example: imagine an egg with wings that's flying around the Statue of Liberty holding an ice cream cone instead of a torch. And you can do that. I could have said anything. These mental operations seem to take place in an aspect of our consciousness that is an internal virtual reality we may call imagination. So consciousness has this sort of dual role: telling us about the world and our bodies in it, but also providing this internal virtual reality. And this is really central to what we do, and in order to do that, I think we have to have experience in the real world with real things. Actually, in some of my work we've looked at the mental operations of, for example, rotating a physical object this way or that way, and then we can decode whether someone is imagining rotating it this way or that way, probably because similar brain circuits are involved in imagining it in your internal virtual reality, which you learned in the real world.

SCHAEFER: There's a program in France, actually at Sony in France, maybe, Jesse, you've heard this, where they fed every Beatles song into a computer so that the computer could learn how the Beatles wrote their songs. And then they asked the computer to generate its own Beatles-like songs, and, you know, they had human lyricists.

(AI Beatles song)

SCHAEFER: If you grew up on the Beatles, it's just like, "eh, alright, that's kind of cute." If you didn't grow up on the Beatles, though, your response might be something like, "oh my gosh, I've never heard anything like this. This is brilliant." So the idea… you're absolutely right, it takes a conscious receptor to decide whether something… and so this gets us into the whole question of aesthetics and taste, whether something is in fact creative or not.

TSE: Whether something is creative in the good sense is going to be decided by a human consciousness that says, "oh, that tastes good, actually." If my algorithm says this should taste good and I find that, actually, no, it doesn't, it comes down to my conscious experience. And computers, at least as currently configured, are not conscious. Our creativity is very much rooted in our own conscious experience of a world, and based on my own conscious experience of the world I form mental models: I know what it is for you to feel pain because I've felt pain. And we're constantly modeling things that are actually not in the input. I'm modeling what's going on in your mind, based on what's going on in my mind, and that's not visible. So a lot of our creativity is rooted in our mental models, which in turn are rooted in our consciousness. So I'll voice a note of skepticism: in the absence of mental models rooted in conscious experience, robots and AI systems will, I think, at best approach as-if creativity.

[00:35:20] CHUNG: I actually think that's one reason the arena of competitive games gets so much traction in machine learning: there is a set of constraints, there is a winner and a loser, and there are styles. In that way it creates a platform for discussion that's a little more stable than a conversation about aesthetics. Do you feel that the realm of competitive games, and creating these models that could be conscious, is the right approach?

TSE: Well, when IBM created Deep Blue, it could beat Kasparov, but it was doing a brute-force search through zillions of possibilities, whereas Kasparov would instantly see a pattern. So our brain is doing something very different. It could beat him, just like an airplane can fly faster than a bird, but it's doing something very different from a bird, in this case from the human brain.

ENGEL: Well, I want to take, just for fun, the contrary point of view on this. You used the Kasparov example, and recently there have been these great successes of the AlphaGo program beating the world champions at Go. The way they do it is not through brute-force search, because that's computationally intractable; the game of Go is much more complex. They do it through the creation of these functions that represent something like intuition: roughly, how good do I feel this move is? I can't tell you exactly why I feel it's good, but I have this function I've been learning that says, if these pieces are here and these pieces are here, then moving here is a good move, because I've played a whole bunch of games and I usually win when I put my piece here and all those pieces are in those places. And similarly, a lot of the really cutting-edge machine learning and reinforcement learning algorithms explicitly have modeling of… there are some numbers there that say, okay, this is my representation of the environment around me, this is my representation of what this other actor is doing, and this is my representation of what I am doing as an actor in the system.

ENGEL: So the concept that our creative critique, or analysis, is somehow fundamentally a human thing that requires consciousness, I just find that to be a bit of a small definition, because it's very easy to imagine agents that can do things similar in nature to what we might consider consciousness, just not human consciousness; it's a broader definition of how a thing interacts with its environment. So you can have a computer critic and a computer artist that make art that's optimized for what a computer likes, but that's not necessarily optimized for what we want. And just because it's a possibility doesn't mean… I think we're always going to lean toward creating systems that we enjoy interacting with, that benefit us as a society, and that benefit our own expression.
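
A minimal way to see the contrast Engel draws, exhaustive search versus a learned "how good does this position feel" function, is sketched below on a deliberately trivial game. The value function here is a hand-made stand-in, not anything an actual Go system uses.

```python
# Toy contrast: brute-force search vs. a value function guiding move choice.
# The "game" is trivial: a position is a list of numbers, a move appends one more,
# and the (made-up) goal is to keep the running sum close to 10 after four moves.

from itertools import product

MOVES = [1, 2, 3, 5]

def final_score(position):
    return -abs(sum(position) - 10)      # higher is better

def brute_force_best(position, depth):
    """Exhaustively enumerate every remaining line of play (Deep Blue style)."""
    best = None
    for line in product(MOVES, repeat=depth):
        score = final_score(position + list(line))
        if best is None or score > best[0]:
            best = (score, line[0])
    return best[1]

def value_estimate(position, moves_left):
    """A stand-in 'intuition': how promising the position feels, with no search at all."""
    expected_remaining = 2.75 * moves_left          # average move size, a rough prior
    return -abs(sum(position) + expected_remaining - 10)

def value_guided_best(position, moves_left):
    """Pick the move whose successor position the value function likes most."""
    return max(MOVES, key=lambda m: value_estimate(position + [m], moves_left - 1))

print(brute_force_best([], depth=4))        # searches 4**4 = 256 lines of play
print(value_guided_best([], moves_left=4))  # evaluates just 4 successor positions
```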

SCHAEFER: Does it seem like there are limits, Peter you should weigh in as well, to human creativity and is there a limit to what artificial creativity might be?

VARSHNEY: Yeah. So you can figure out the maximum speed at which a bird can fly through a forest without crashing. Right now the bird is going pretty slowly; it can navigate quickly enough to avoid all the obstacles and then land on the tree. But if it tries to go too fast, it'll crash into a tree, which is what I think happens in the video. And that's exactly the fundamental speed limit of flight through a forest. There's a similar fundamental limit for communication, and what we've been trying to find, and seem to be finding, is that there's a fundamental limit to creativity as well. There's a fundamental tension: you can't be maximally novel and maximally high quality in a given domain at the same time. It's easy to be very novel and low quality; just put random stuff together. It's easy to be high quality and low novelty; just look it up in your favorite cookbook or wherever. But doing both simultaneously is very difficult, and that's where the limit is. And the larger the inspiration set, the known stuff, the harder it is to be creative. So if there are people in the audience who want to be creative, you should start a new field, where there's much more low-hanging fruit.
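
One hedged way to write down the tension Varshney describes (this is an illustrative formulation, not his published theory) is as a frontier: the best quality achievable at a required level of novelty, relative to an inspiration set.

```latex
% Illustrative only: a novelty/quality frontier in the spirit of the discussion.
% S is the inspiration set (the known stuff), a ranges over candidate artifacts,
% n(a | S) is novelty relative to S, and q(a) is quality in the domain.
\[
  Q(t) \;=\; \max_{a}\ \bigl\{\, q(a) \;:\; n(a \mid S) \ge t \,\bigr\}
\]
% The claim on stage corresponds to Q(t) decreasing in t (more novelty costs quality),
% and to the frontier dropping as S grows, since being novel relative to a larger
% body of known work is harder.
```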

[00:40:02] SCHAEFER: Are there limits do you think to human creativity as Lav was suggesting there are to computational creativity?

TSE: I don't know. I mean, humans have proven to be incredibly creative, both in the good sense of making airplanes for flight and in the bad sense of making them for murder. I think of human imagination as the greatest tool and also the greatest weapon we have, because we can imagine the most amazing things and also the most horrible things. The Holocaust, that was an act of imagination, actually. And my hope is that these tools will eventually foster human flourishing and not also be used to create new ways of murdering people. And I think one question I would have for everybody is: what is the goal of all this? Is it simply to make new artifacts, or is it… I mean, when I was a kid there was the Jetsons, and they had this robot, and you would say, "robot, do the dishes," and it would do the dishes. But is it to make a general-purpose intelligence that would effectively be a slave? My guess is that in order to make a general-purpose robot like the Jetsons robot, it would have to be conscious, so it would understand what I mean when I say "change the tires"; it would have to have a whole model of cars and everything. But if it's conscious, then we have the problem of slavery again, because it would suffer.

SCHAEFER: So consciousness implies self-awareness, awareness of self as opposed to another, or others.

TSE: The having of experience like pain, like seeing.

SCHAEFER: Could a robot be conscious without feeling pain?

TSE: I suppose I mean there are actually humans that don’t feel pain and they can see so yeah.

CHUNG: Sociopaths…

TSE: Emotional pain, yeah.

SCHAEFER: Jesse what about at your end of things?

ENGEL: Yeah. I think maybe another way of thinking about it, rather than just… it's really nice to reflect it back on our own human experience. As we've developed tools to better control mechanical systems, we've really begun to better understand how our bodies work. And what's amazing about these new systems is that they do a math that's not human, but it's similar in a way that can help us get introspection into our own mental processes: when I'm being self-critical, what are the dynamics of that, how can I interrupt that process, and how does that affect my creative output? You can get introspection in a quantitative way that can help give you words, because whenever I talk about this, and I live in the Bay Area, I just sound like a hippie. It's nice to have things you can point to and say, hey, look, these are actual physical systems. There's something here that cuts across human experience that we can have a vocabulary to talk about.

CHUNG: I think, to that point, it's important to ask why we're having this conversation now, and part of that, if I can flip the script a little bit, is that we're generating all this data that's being collected, and that's very, very difficult to wrap our brains around. These mental models help us understand the complexity and the scale of our own behavior as a collective world, and I think that's another goal, potentially, of continuing to research these new areas of cognition.

SCHAEFER: So let me ask you to take one more trip to the table there. You gave us a demonstration before of some of the elemental, the basic gestures that this machine, this system is capable of. What if you tried to do something more complex?

ENGEL: So what I have here to show is more on the concept of creating tools that actual musicians can use to explore. We had that novelty sound of the cat/flute, for example, and what I've done here on my computer… we want to get this into the hands of artists and musicians and close the feedback loop, because creativity takes two to tango, there's the tool and the person, so we want to get that feedback loop closed. So we created a plug-in for this popular software called Ableton Live; if you've ever gone to see any electronic music, most likely it was made with Ableton Live. And here, instead of just interpolating between two instruments, we can create an entire grid of instruments and allow a musician to take sort of a random walk through this space and explore a whole variety of sounds. So rather than just playing this with my hand, I can play a clip here.

(audio plays)

[00:45:24] ENGEL: So right now the instrument is like an organ sound, and you can see this is the software we have going, but we've created a manifold for people to play on, where we can actually just drag through this space and see the evolution of the sound as it moves between different instruments. And in some of the spots… all of a sudden we're in a dubstep thing. But you can see the spots where it's in between the different sounds, and it creates a whole bunch of different textures… alright, that might be the start of my next track, or something like that. It gives you the ability to interact in this space in a way that would be very difficult if you were trying to design those sounds by hand and draw through them. Sort of one of those things.

SCHAEFER: Sougwen, where do you see computational creativity headed in the visual arts? Are there sort of analogous tools available to you?

CHUNG: Yeah, definitely, there's a lot of style transfer and this sort of visual assist… style transfer is where a neural network reads the pixel data of an image, a famous painting for instance, extracts its style, Starry Night or what have you, and then transfers that style onto your next selfie or a picture of your dog or something. Yeah, it's compelling. So I think that's a very common approach to using these systems in the visual arts now. What I get really excited about are ways in which we can create new forms of collaboration through the processes we're training these neural nets on.
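
For a more concrete picture of the style transfer Chung describes: the common neural approach summarizes "style" as correlations between feature maps (Gram matrices) and then optimizes an image toward one picture's content and another's style statistics. The sketch below computes only the style statistics, on random stand-in feature maps; a real system would take features from a pretrained convolutional network and run an optimization loop.

```python
import numpy as np
rng = np.random.default_rng(0)

def gram_matrix(features):
    """Style statistics: correlations between channels of a (channels, H, W) feature map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(features_a, features_b):
    """How different two images are in style, ignoring where things sit in the image."""
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))

# Stand-ins for CNN feature maps of a painting and a photo (random arrays here).
painting_features = rng.normal(size=(8, 32, 32))
photo_features = rng.normal(size=(8, 32, 32))

print(style_loss(painting_features, photo_features))
print(style_loss(painting_features, painting_features))   # zero: identical style
```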

SCHAEFER: What about the visual arts as performative arts? Normally we only see the sausage; we don't see the grinding and all that. But it seems like, with what you're doing, you want us to see some of the process.

CHUNG: Very much so. For me the creative process is much more important than the outcome, and that's why it's drawing as a collaboration, why it is drawing and not dance. I think a music performance is a little more ambiguous. But actually being able to explore the drawn mark in a way that isn't representational could be a reflection of consciousness, or of exploratory intentionality, and doing that alongside a, I hesitate to say artificial intelligence, but a machine-learning system, approaches an understanding, or at least could create a data set that could help us approach an understanding, of what the exploratory creative process is.

SCHAEFER: Well, Jesse, you know music throughout history has had a kind of ambivalent reaction to new technologies: "oh my gosh, digital music, all the musicians will be put out of work; oh my gosh, the piano, all the harpsichordists will be put out of work." And yet we still have real musicians; we still have harpsichordists, even. So the idea that there's room for a whole new generation of creative artists in this field is kind of a new idea, but it seems like it has analogies to things that have gone before.

ENGEL: Right, and in some ways it's a very old idea. That's where I was going with this being part of a long coevolution between technology and music, in this case. I think it really just takes a generation that owns it; they don't feel as attached to the things that came before, because they weren't alive when that was really happening, so why should they feel attached to it? Some are traditional; I play jazz, but I also like pushing it in new directions. So I think that will always happen, and there will always be people who find strength in the continuity of learning that harpsichord from the 1600s, and those types of things.

[00:50:14] SCHAEFER: Alright, you mentioned the word ownership, and it gets at something I said before that I want to come back to, and that is agency. So, Lav, you and your team help create this entity that comes up with new fashion ideas, new food recipes, new metal alloys. Who owns that?

VARSHNEY: So the current understanding of US trademark, copyright, and patent law is that it goes into the public domain, because there has to be a human inventor to claim ownership; machines don't have property rights, at least currently. And so that's the difficulty: it creates this perverse incentive for people to pretend they were fundamentally involved in the creative process. But it also raises the natural question, how should we think about property rights in this area? And it's actually important, because who will fund computational creativity research if you can't eventually patent things invented using those tools? There's no backward flow of incentive, essentially. The nice thing about, say, pencils is that they're treated as research tools: it's not the inventor of the pencil who gets property rights on things that are invented using pencils, but when you do invent something using a pencil, you can claim property rights on it. So, similarly, we need some incentives, perhaps a revision of this understanding, to encourage the development of new computational creativity technologies.

SCHAEFER: Peter?

TSE: I also think we need wisdom in handling tools. Any tool can be a two-edged sword: a hammer can be used to murder somebody or to build a house; even an atom bomb can deflect an asteroid. But these tools are so powerful that they can kind of take on a life of their own, and that, I think, is a danger. And I think there's an example of that in everyone's pocket. These cell phones are amazing; you have all the information available in the universe, with Wikipedia right there. But they become very distracting, and many people… I was walking across the Dartmouth green and the sky was on fire with a red sunset, and everybody, a hundred people, were doing this. And I almost wanted to say, people, look at this, but I didn't have the courage to do it. And I noted in my mind that thirty years ago, walking across that Dartmouth green, I would see ten people holding hands, and I hardly see that anymore. These devices, meant to foster human connectivity, have in many ways undermined it, because people are more distracted, less attentive, here instead of talking to people like when I was a kid. AI is very powerful, but it runs the same risk. We have to own it. We have to become agentic ourselves and make sure it serves our ends, creative or otherwise, and that we don't become slaves to it.

SCHAEFER: But if AI does reach a point Jesse, Sougwen, where it can compose a piece of music, create a genuine work of visual art, something that moves us that genuinely passes the Turing test, that we know where it came from and it uplifts us anyway. You don’t care.

SCHAEFER: Then does that AI entity have some sort of agency, does it have some sort of ownership as opposed to the humans who set it in motion maybe a generation earlier? (pause) We’re in William Gibson territory…science fiction

ENGEL: I mean… it's all good stuff. I think, with what you were saying about cell phones and so on, it's really fun to think about these scenarios where you start thinking about personification and constant evolution and so forth. But the fact of the matter is that for technology to ever reach that point, we'll have gone through so many transformative things in our society; there's lower-hanging fruit than that, and it will change everything. So I always think, there are lots of groups, like Google, OpenAI, and DeepMind, that are really focused on the shorter-term things, on how this affects our society here and now. And if we ever reach that point, the only way we'll do it is if we figure out all the stuff that comes before it, because otherwise it just won't happen.

TSE: But can I add that, in order to get to that point, we'll have to have deep mental models of what it means to be a human being, rooted in what we have experienced as human beings. For example, you know that scene in Les Misérables where Jean Valjean has been in prison for nineteen years, and he hates the system, and he's angry, and he stays with this priest. He then steals the priest's silverware, and the police catch him and bring him in. And the priest, in an act of true Christian grace, says, release him, he didn't steal it, I gave it to him, and here, you forgot the platter. And we find that deeply moving and beautiful because we can understand having hit bottom. And then, by having hit bottom, he goes through a mental transformation and realizes, I'm not this, I need to become a new kind of person. That can only be understood by us because we have human experience, and I'm skeptical that AI is going to be able to understand redemption like that. But then again, maybe if we get to that point they will be human, or very close to us.

[00:55:48] SCHAEFER: Sougwen… tough act to follow there.

CHUNG: No, I think part of how we even categorize what makes us human is how we create our own narratives. You explained that profoundly human moment through the narrative of Les Misérables. I think right now we're in the process of creating narratives around machine learning: we're calling it artificial intelligence, we're calling it designed intelligence. This is definitely an interesting precursor to that question of agency. We form our own agency through creating narratives, and now we're able to co-create narratives with these machines. So I think that's the path forward.

SCHAEFER: The arc of human history has been to acknowledge agency for more and more people. For many generations agency was restricted to a certain gender and a certain group of people, and we hope that definition of agency keeps expanding. Whether it can include things that are artificial, I guess that's a question that will be answered down the road.