

Tuesday, March 19, 2013

AI and the Magic Paintbrush

This'll be relevant in a few paragraphs, I swear.

I am discovering that when Eliezer Yudkowsky, the author of Harry Potter and the Methods of Rationality and LessWrong1, tells me I should be scared of something, there are actually two levels of terror that I have to access. This is because it's not difficult for me to distance myself from problems of AI--after all, the likelihood that I'm going to be designing a pet-friendly artificial intelligence in my basement is pretty slim. So, when he says "I don't talk about this idea, because most people are too frightened by it to react with the proper curiosity and interest," I can easily pick curiosity, because I've got nothing on the line.

I have to get to a state where I can actually be legitimately frightened--where I have chips in the game. Otherwise, all I'm doing is fleeing from the problem and dressing the retreat up as intellectual curiosity. It is very easy for me to say, "Wow, what an interesting problem," then immediately put the problem out of my mind. It would look like I'm reacting appropriately to something scary, but really I'm just disengaging.

This article is a very good example of that, actually. The basic gist is: if we create an AI, it might want to study humans. And the way you study things is frequently to make better and better models of your subject.

So, what happens if the AI accidentally creates models of humans so good that they become sapient?

And then what happens if the AI decides to start deleting old backup copies of these sapient simulations?

It's an intriguing thought that a lot of AI researchers, according to Yudkowsky, anyway, would handwave out of existence--they would say the problem will take care of itself, because the AI will be smart enough to recognize what is happening and keep it from happening, or that certain limitations will naturally prevent the creation of fully simulated consciousnesses. Of course, there's no way of really knowing that ahead of time, and I'm not sure how an AI would actually recognize that it was creating sentient cyberhumans while it's still in the process of figuring out how sentient humans work. And once it has them, they're there, and both the AI and humanity have to figure out what to do with a bunch of simulated beings trapped within the mind of another artificial being.
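
To make the shape of that worry concrete, here's a deliberately crude Python doodle--not anyone's real architecture, and every name and number in it is invented--showing where, structurally, the deletion step would sit in such a modeling loop:

    # Toy illustration only: an optimizer refines its model of a subject
    # and prunes old checkpoints as routine housekeeping. Nothing here is
    # conscious; the point is where the morally loaded step would hide.
    import copy

    class SubjectModel:
        def __init__(self):
            self.fidelity = 0.0  # how closely the model tracks its subject

        def refine(self):
            # Each study pass makes the model a better map of the subject.
            self.fidelity = min(1.0, self.fidelity + 0.05)

    model = SubjectModel()
    checkpoints = []

    for generation in range(40):
        model.refine()
        checkpoints.append(copy.deepcopy(model))  # keep a backup copy

        # Discard stale backups. If fidelity ever crossed the (unknowable)
        # threshold where a model becomes a someone rather than a something,
        # this innocuous housekeeping line would be the whole problem.
        while len(checkpoints) > 5:
            checkpoints.pop(0)

Nothing in that loop thinks, obviously. The unsettling part is that a version that did think would look the same from the outside: the pruning is routine garbage collection right up until it's a body count.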

Which, yeah, I can see how that would be a problem, but not for me personally, right? I'm not an AI researcher. I'm pretty sure that for us artists and writers there's not a lot to worry about. After all, we don't have to deal with the hard realities of AI; we can comfortably speculate and fantasize about the intriguing future that awaits us without worrying too much about solving the problems ourselves. We're never going to get so exact a fictional simulation that our own creations start thinking for themselves! And besides, artists are smart, we'll know if that's what's happening and stop ourselves before we go too far. There are just fundamental limitations to our simulations that would prevent the creation of an actual secondary consciousness in our own minds.

Huh.

Why does that sound familiar?

There's a story I remember reading as a child (which Google tells me was probably "Ma Liang and the Magic Paintbrush"), a picture book about a boy who can paint pictures so real they spring to life, and so he deliberately paints flaws into his forms. The emperor hears tell of the boy's strange powers and commissions the artist to paint a great dragon. The boy does, but leaves one eye unfinished, blank.

The emperor doesn't like this.

You can probably imagine what kind of ending the story has. It's not a happy ending.

I didn't really understand this story as a child, and I'm not sure I quite grasp the intended metaphor now, but boy, I can think of a pretty intriguing new reading.

Think about it like this:

As artists (used here to include writers, dancers, &c.--creators of aesthetic works) we often simulate characters, audiences, Ideal Readers, even semi-abstracted emotional ideas as part of our works. I think this is true even of abstract artists--expressionists, poets, dancers, maybe even chefs--albeit to a lesser extent than for realists. There's still a mental model of audience and experience that you're trying to convey--a simulation that attempts to accurately map behavior.

In the most extreme cases of this modeling, we have writers discussing their characters in self-determining terms. The character does, in essence, what it wants and the writer is along for the ride. Which isn't to say the simulation has free will. Think of it in terms of the classic philosophical problem of omniscience: because we are an omniscient observer, we know what the characters would do based on our modeling of their personality, and so while the characters aren't literally walking around making decisions, we see the path that they would weave through a fictional narrative.

Basically, although ultimately I (or more accurately, my mental simulation) am winding the characters up and noting what paths they naturally wobble along due to the particular physics of their setting and personality, they still feel quite real. So intense is this experience that I personally have a lot of trouble subjecting characters to pain, because I feel too much empathy for these simulations, despite the fact that they don't have subjective experiences.

At least, they don't yet.
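
For the programmers in the audience, here's the wind-up-toy version of the modeling I just described, as a few lines of Python. It's a toy with made-up trait names, not a theory of mind; the point is just that the character below never decides anything, and the same parameters always trace the same path:

    # Hypothetical sketch: a character as deterministic "physics" of
    # personality. The author sets the parameters, winds the toy up,
    # and observes the only path it can take.
    def next_action(character, situation):
        if situation == "insulted":
            if character["pride"] > character["caution"]:
                return "challenges a duel"
            return "swallows it and walks away"
        return "carries on"

    hero = {"pride": 0.9, "caution": 0.3}
    scenes = ["morning market", "insulted", "a quiet evening"]
    plot = [next_action(hero, scene) for scene in scenes]
    print(plot)  # the author observes this path; the hero never chose it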

There's going to be a point in possibly the very near future when we start actually augmenting our intelligence. How long do you think it will be before we start simulating simple people--actual subjectively aware life forms--within our own swelled heads?

If you are an artist, you should be feeling sheer terror right now. Imagine what it will be like to write stories or draw portraits when you might accidentally create a real being just by thinking too hard about your subject.

You will essentially have become mentally pregnant with a fully grown adult that cannot escape the confines of your mind.

Oh, but it gets worse!

See, there's nothing currently that says a sociopath can't be an artist, or that a sociopathic artist can't get the same kind of brain augmentation that the rest of us can.

Ever wanted to just... blow up the world? Well, in the future, you might be able to blow up fully realized simulated worlds with sentient beings--genocide as stress relief.

It's enough to make you give up art forever... or give up augmentation.

But that's a path I don't really find interesting or productive. The benefits of upgrading everyone's brains are just too damn weighty to be counterbalanced by this totally hypothetical, fictional, and possibly straight-up idiotic media theorist's fears. Remember, this isn't my field. I could be totally off base here--dreaming up nightmares that could never manifest in real life.

No, this isn't a problem we can run from, as alarming as it is. Maybe the solution is to put hard limits in our own brains along the lines that Yudkowsky suggests for sentient AI--something that can recognize when a being might be created and stop it from being created. We need to leave flaws in our form so that the dragon doesn't spring to life. That seems like, at the very least, a useful metaphor for describing the problem. And really, one of the lessons of that magic paintbrush tale is that art can and perhaps even should accept flaws. Remember, artists are liars, and art derives the greater part of its power from lies--sometimes lies as simple as the careful manipulation of a shadow, or a single unfinished eye.
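
In software, at least, the shape of such a limit is easy enough to draw. Here's a purely hypothetical sketch, with an invented threshold, since nobody knows where the real line would fall:

    # Hypothetical "unfinished eye" guard: refuse to refine a simulation
    # past a fixed fidelity ceiling. The ceiling value is made up; not
    # knowing where the real line sits is rather the point.
    FIDELITY_CEILING = 0.8  # the deliberately blank eye

    class DragonWouldWake(Exception):
        pass

    def refine(fidelity, step=0.05):
        if fidelity + step > FIDELITY_CEILING:
            # Stop before the model could plausibly become a someone.
            raise DragonWouldWake("leave the eye unfinished")
        return fidelity + step

    fidelity = 0.0
    try:
        while True:
            fidelity = refine(fidelity)
    except DragonWouldWake:
        pass  # the painting stays a painting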

How do we set up those limits in our own heads? Hell if I know. But it's something we're going to have to worry about in the future, I think. And in the meantime I'll be thinking very carefully before killing off any fictional characters.

After all, for all I know I may already have blood on my hands.

Circle me on Google+ at gplus.to/SamKeeper. As always, you can e-mail me at KeeperofManyNames@gmail.com. If you liked this piece please share it on Facebook, Google+, Twitter, Reddit, Equestria Daily, Xanga, MySpace, or whathaveyou, and leave some thoughts in the comments below.

1 I can never quite figure out whether LessWrong is an identity, a collective, or just a website full of articles. It might be all three, and it seems to be used differently in different situations. Fucking transhumanists.

Comments:

  1. I remember reading a similar tale. It was about an artist who was commissioned by a lord to paint a dragon, even though only members of the royal family could own such a painting. The artist responded by painting the dragon on the ground, rather than flying ("Perhaps he will see how his pride weighs him down"), and without eyes. Enraged, the lord took his own brush and painted in eyes. The dragon then peeled itself from the paper and destroyed the lord's house.
    I believe the author said it was based off of a Chinese folktale. I could be mistaken.

  2. Have you gotten to Roko's Basilisk yet? There are some elements in the LessWrong community who take thought experiments a little too seriously, with kind of screwed-up results.

    (You can google it, but I'll give you a hint. I won't describe it in detail in case LessWrong users come here and decide I need to die for the greater good or something ;-). It has to do with personality simulation by a strongly superhuman AI, it's recursive, and it leads to both self-censoring and actual censorship of the LessWrong site.)

    1. Yesss, yeeeeessss. Keep spamming Roko's Basilisk EVERYWHERE until Roko himself gets really pissed off and mugs you in a back alley.

      Nawww, just keep spamming it everywhere because Roko's zealotry is kind of obnoxious.

  3. It's not a magic black box, it's an entire set of "mirror neurons".

  4. Those of you in this subthread (at least) might be interested in the phenomenon of "tulpas" (easily googleable), whose proponents can't seem to decide whether they're self-inducing DID, creating new sentient beings, or fucking around with their mirror neurons when they create new sentient beings in their heads to play around with. And do it anyway.

    The metaphysical types are very creepy, but most of them seem at least nice to talk to, if rather immature.

