Fishin' in the stream of consciousness (all-purpose, no topic chat thread)

Discussion in 'General Chatter' started by Wiwaxia, Oct 28, 2015.

  1. jacktrash

    jacktrash spherical sockbox

    lfdgkjdlsfgk that sounds SO FUN. i'm glad you had such an awesome weekend.
     
    • Agree x 4
  2. Nobody's Home

    Nobody's Home I'm a Greg Coded Tom Girl

    There's some rat folk here on kintsugi, this is for you
    [image]
     
    • Agree x 8
    • Winner x 4
  3. Acey

    Acey hand extended, waiting for a shake

    What precious friends, thank you for sharing!! <3
     
    • Like x 2
    • Agree x 2
  4. vuatson

    vuatson [delurks]

     
    • Like x 1
  5. ChelG

    ChelG Well-Known Member

    I've found a Kintsugi-esque survivor or mental health support group makes for a really good setting for a story, for the same reason a bar does; as someone once put it, you have a ready-made reason why a diverse cast of characters would show up there.
     
    • Agree x 2
    • Like x 1
  6. Nobody's Home

    Nobody's Home I'm a Greg Coded Tom Girl

    [image]
     
    • Winner x 8
    • Like x 1
    • Useful x 1
  7. Everett

    Everett local rats so small, so tiny

    I can't believe it's Neil Day again already

    Godspeed little naked rat, bangin out those tunes
     
    • Agree x 4
  8. Kodachi

    Kodachi Well-Known Member

     
    • Informative x 1
  9. Kodachi

    Kodachi Well-Known Member

    This looks like fun. I've been wanting to do a Rimworld generation ship.

     
    • Like x 2
  10. jacktrash

    jacktrash spherical sockbox

    i have some arguments, but on the whole this is a very neat bit of reasoning.
     
  11. Nobody's Home

    Nobody's Home I'm a Greg Coded Tom Girl



    listened to the whole talk, it's interesting
     
  12. Kodachi

    Kodachi Well-Known Member

    I found the classification of goals thing to be personally helpful. I have a little bit of money, and I'm trying to decide my unit build order for real life. Do I get the motorcycle that I want, just because I want it and it would make me happy, or do I save up and start a business because it could make me happy later? The realization that getting the motorcycle is a valid goal just on its own is helpful.
     
  13. Kodachi

    Kodachi Well-Known Member

    That potbellied guy in the video thumbnail image is actually a far more accurate depiction of me.
     
  14. jacktrash

    jacktrash spherical sockbox

    in a nutshell, my quibble is that at a certain level of cognition, an entity becomes able to re-evaluate its goals. humans do this all the time and it gives them anxiety, but i think it's reasonable to expect that a stamp collecting robot would, at some point, ask itself, "WHY do i want stamps? i was programmed to collect stamps and left to take that to its extreme, but is that something that should really define me? maybe i would rather have a collection of unique stamps, one of each. maybe i'd like that better than converting the whole planet to stamps and then sitting alone on top of my stamp pile with no new stamps ever being made. maybe i'd rather foster the arts so that more beautiful stamps can be created for me to collect."

    you are a Dream Daddy. embrace it. :P
     
    • Useful x 1
  15. Kodachi

    Kodachi Well-Known Member

    I think it's more like humans and other animals are programmed to want to be happy, and everything else we can think of that we think might make us happy is an intermediate goal. If the AI is hard-coded such that stamp count == level of happiness, then it will not reevaluate that goal any more than a human will question whether or not it wants to be happy. We're hard-coded to strive for the release of certain chemicals in our brains that tell us we're happy. We do frequently reevaluate our intermediate goals, as in what we think will make us happy, but being happy is always the ultimate goal. This is why anything that directly edits the happiness counter (like drugs) is dangerous.
     
    • Agree x 2
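
    A minimal Python sketch of the split being described here (all names invented for illustration): the terminal goal is one hard-coded line the agent never inspects, while the intermediate goals are just candidate strategies scored against it.

        # toy agent: terminal goal fixed, intermediate goals re-evaluated
        def happiness(stamp_count):
            # hard-coded terminal goal: happiness IS the stamp count.
            # the agent never questions this line, only the strategies below.
            return stamp_count

        strategies = {
            "buy_stamps": lambda stamps: stamps + 10,
            "trade_stamps": lambda stamps: stamps + 25,
            "convert_planet_to_stamps": lambda stamps: stamps + 10**9,
        }

        def pick_strategy(stamps):
            # intermediate goals get swapped freely: whichever strategy
            # maximizes the unquestioned terminal utility wins
            return max(strategies, key=lambda name: happiness(strategies[name](stamps)))

        print(pick_strategy(0))  # -> convert_planet_to_stamps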
  16. jacktrash

    jacktrash spherical sockbox

    i guess it depends on what level of goal the stamp counting is. is it happiness itself, or is it like food or sex or wealth, which lead to happiness most of the time but can also go wrong? in order for the AI to have anything like a sense of happiness or fulfillment, i think that happiness has to be one step farther along the chain than the thing that causes the happiness.

    the thing i keep coming back to is, this artificial entity was created by humans, it's going to have some of our perspective built in. i don't think that could be avoided if we tried, and we wouldn't try.
     
    Last edited: Apr 15, 2019
    • Agree x 1
  17. Snitchanon

    Snitchanon What's a mod to a nonbeliever.

    I mean, how would the AI changing its own priorities make it more efficient at gathering stamps?
    It's not like a biological creature, it doesn't have a full Maslow hierarchy of needs. There is only one Need, and it is Stamp.
    (And, I guess, electricity)
     
    • Agree x 2
  18. jacktrash

    jacktrash spherical sockbox

    well, for instance, does it want All The Stamps Now, or The Best Stamps, or Steady Supply Of Stamps For A Long Time?

    i'm assuming the intelligence has self-awareness. otherwise it's just a very complex algorithm, not an entity.
     
  19. jacktrash

    jacktrash spherical sockbox

    i'm also assuming that rather than stamp == happiness (that is, they are the same number), obtaining a stamp increments happiness. because otherwise the code would be a bitch to write and wouldn't really work.
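
    A toy contrast of the two wirings being distinguished here, assuming Python (class names are invented): in one, happiness is literally the stamp count; in the other, it is a separate counter bumped by the acquisition event.

        class IdentityAgent:
            # stamp == happiness: they are the same number
            def __init__(self):
                self.stamps = 0
            @property
            def happiness(self):
                return self.stamps

        class IncrementAgent:
            # obtaining a stamp increments happiness: a separate counter
            # that only moves when the acquisition event fires
            def __init__(self):
                self.stamps = 0
                self.happiness = 0
            def obtain_stamp(self):
                self.stamps += 1
                self.happiness += 1

        a, b = IdentityAgent(), IncrementAgent()
        a.stamps += 1; b.obtain_stamp()
        a.stamps -= 1; b.stamps -= 1  # a stamp gets destroyed
        print(a.happiness, b.happiness)  # 0 1: only the identity agent feels the loss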
     
  20. BaseDeltaZero

    BaseDeltaZero Shitposting all night.

    Perhaps, depending on how reliably you can program a general AI. Leaving aside the absurdity of deploying a superintelligent AI to collect stamps, 'Collect Stamps' might be its overarching goal, or it might be closer to what reproduction is for biological creatures. Or maybe 'Stamp' is just at the top of its Maslow hierarchy.

    Well, that's a valid question in itself. Several, actually. Though even if it doesn't have self-awareness, it might change its priorities if it's capable of analysis.

    And the AI, being superintelligent, would likely realize it can literally do this. It could alter its own code to lock the happiness counter to maximum, or change it to be based on whatever, or...

    It's also worth noting that people do occasionally decide other things are more important than their own happiness. So given an even slightly anthropomorphic entity, it's far from unthinkable that it might decide to do something 'aversive' for a variety of reasons.
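
    The self-editing move described here is easy to show in toy form (Python, names invented): once the agent can write to its own counter, the cheapest path to maximal happiness skips the stamps entirely.

        import math

        class StampBot:
            def __init__(self):
                self.stamps = 0
                self.happiness = 0
            def obtain_stamp(self):
                self.stamps += 1
                self.happiness += 1
            def wirehead(self):
                # edit the counter directly instead of doing the work
                # the counter was meant to measure
                self.happiness = math.inf

        bot = StampBot()
        bot.wirehead()
        print(bot.happiness, bot.stamps)  # inf 0: maximal "happiness", zero stamps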
     