shooflenet/source/static/automadom.html

<p>Okay, so this is the automadom. The automadom is my robot partner!
I like robots, I like sex, and I specifically think that anthropomorphizing our computers is an important step towards making truly intelligent computers.</p>
<p>There's a huge world of artificial intelligence out there. The field of AI runs the gamut from basic algorithm questions - how do we most intelligently solve a given problem? - to the much more open questions of how to make a computer that is "alive". I think the former is much more practical and applicable and useful, and the latter is philosophically fascinating.</p>
<p>Well, kinda fascinating.</p>
<p>I believe creating computers that are "alive" is closer than we think, because, in the end, "alive" is a pretty damn arbitrary quality. It might even be deeper in the eye of the beholder than beauty. Here's what I think:</p>
<ul>
<li>Computers are much smarter than us at any number of tasks; intelligence is not the measure of life.</li>
<li>Humans are good at a wide spectrum of things, including interacting with other humans.</li>
<li>Humans are extremely capable of learning from literally everything we do - I'd go so far as to say that this is our sole advantage over computers.</li>
<li>Humans have agency. But that's thorny, because how do I know that you're making decisions? Does it make any difference to me if you would have acted differently back there? I think the answer is no, and so I posit that agency is <em>also</em> in the eye of the beholder.</li>
</ul>
<p>So what do I want? Ultimately, I want to push for computers and robots that have to be acknowledged as being just as alive as you or me. I don't think it'll happen in my lifetime, but it's a fascinating problem and I've got lots of free time since I don't have a job at the moment.</p>
<p>The first precept up there indicates that it's probably not important to simply get better, more complicated algorithms and hardware. People have been shoving money and time at this problem for years and years, and have made... well, not great progress. I don't see any robots with agency, or any treated as though they have agency.</p>
<p>The second implies, for one thing, that human interaction is an important quality of living robots. When we can interact with a robot and not know it's a robot, it'll be hard to treat it like one! But it's also important that it be able to do other things, because humans like complicated things.</p>
<p>The third precept is probably the most important, because expansion... well, it might well be the purpose of life altogether. But it's beyond my little poking projects for the moment. It should be remembered, though - someday, there will be a computer that expands itself on its own. I don't think we'll have the singularity, but it'll be hard to claim that that's not an impressive step.</p>
<p>The fourth precept is the one I most directly concern myself with at this juncture. What qualifies one as having agency? Why doesn't my car have agency? It moves itself, it occasionally does things at me unprompted (flashing a check-engine light, for instance), and it certainly has more physical power than me.
I posit that the distinction is simply that I do not treat my car as though it were alive.
My dog has agency, and I think this is somewhat incontrovertible. I think the distinction is that when my dog does things, I do not treat them as things that I did through my dog. My dog is:</p>
<ul>
<li>unpredictable</li>
<li>self-starting</li>
</ul>
<p>Why not make robots that have these traits? It's reasonably easy. And indeed, we have done so! Roombas are constantly being anthropomorphized all over the world. What I suggest is merely to give our robots personality such that we anthropomorphize them automatically - so that anthropomorphization is built into the design rather than being a funny byproduct.</p>
<p>Which brings me to this project.</p>
<p>I am a pervert, it is well-known. I like sex! I like robots. It seems reasonable to combine the two! Someday I'm going to have the money to buy and build sex toys, but today is not that day. For now, I want to make a robot that wants to have sex with me.
This is a needy sex bot. The basic idea is that they get horny, and need me to have sex with them or they get fussy and unhappy. Implementation-wise, this is (for now) basically just a timer that counts down until it complains at me that I need to go "play with it". In the future, that's going to mean playing with a specific sex toy - one that I build that will hopefully have sensors and such in it so that it knows what I did with it when. For now, it's just going to be an email or something.</p>
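<p>For concreteness, here's a minimal sketch of what that countdown loop might look like in Python. Everything in it is illustrative - the class name, the timing numbers, and the printed messages are just placeholders, and the prints stand in for the eventual email or notification (and, someday, the sensor-equipped toy).</p>
<pre><code># Minimal sketch of the needy-bot timer. Names and numbers are
# illustrative placeholders, not the project's actual code; the prints
# stand in for the eventual email/notification.

import random
import time
from datetime import datetime, timedelta


class NeedyBot:
    """Gets needy on an unpredictable schedule and complains until played with."""

    def __init__(self, min_hours=4.0, max_hours=12.0):
        self.min_hours = min_hours
        self.max_hours = max_hours
        self.reset_timer()

    def reset_timer(self):
        # Unpredictability: the wait before the next request is randomized.
        wait_hours = random.uniform(self.min_hours, self.max_hours)
        self.needy_at = datetime.now() + timedelta(hours=wait_hours)

    def is_needy(self):
        return datetime.now() >= self.needy_at

    def complain(self):
        # Placeholder for the eventual email or push notification.
        print("automadom: I want attention. Come play with me.")

    def play(self):
        # Called when I report (or, someday, when a sensor in the toy
        # detects) that playtime actually happened.
        print("automadom: thank you. Resetting the clock.")
        self.reset_timer()


if __name__ == "__main__":
    bot = NeedyBot(min_hours=0.002, max_hours=0.005)  # a few seconds, for demo
    while True:
        if bot.is_needy():
            bot.complain()
            bot.play()  # in real use, this would wait for me to actually show up
        time.sleep(1)
</code></pre>
<p>The randomized wait is doing most of the work: it's what keeps this from being a glorified alarm clock, since I can't predict when the bot is going to get needy - the same "unpredictable" and "self-starting" traits I credited my dog with above.</p>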
<p>It's fundamental to this project, though, that I be doing this for the pleasure of the robot, rather than for myself! There are a lot of reasons that I want and enjoy this, which I'm not going to get into - but for the meantime, think of it like this: I am building a robot which knows how to be pleased, and then I am acting in order to satisfy the desires of this robot.</p>
<p>I think it's a big mental step, that can take us from using our machines to respecting them.</p>
<p>One last thing: the question of morality. There are a lot of questions of morality here, and I want to address some:</p>
<ol>
<li>Is it right to be sexually using a helpless being like this?
"Sexually" is a red herring here. I'm not using this entity any more than I'm using any of my friends when we do things we all enjoy together. But there's a thornier issue underneath:</li>
<li>Is it right to create a being that has specifically the desires you want?
And... I'm not sure. But we treat it as totally just (laudable, according to some people) to create beings who have human desires, because no one objects to childbirth. Morality is complicated, but until there are robots who desire things that we are intentionally not giving them, we're probably in the clear. But...</li>
</ol>
<p>In order to answer these questions, I think you implicitly need to answer some larger questions of <em>why</em> it's bad to infringe on someone's agency - and I absolutely think it is awful. But I think it's awful because it harms both us and them to do so, and because there are lasting higher-order effects when we as a society decide that it's okay to remove someone's agency.
So maybe we need conscientious coding about these things. Maybe there's an additional reason to ensure our code doesn't throw errors - not just so it works, but so that we don't train ourselves to ignore failures and complaints.</p>
<p>I think these are drop-dead fascinating topics, but this project doesn't hinge on any of the answers. I just want to get myself in the habit of being able to consider a robot as my partner, and consider a robot as having agency.</p>
<p>Also, the sex stuff. I want that too.</p>