Artificial Stupidity

Not only is this artificial intelligence quite artificially biased, but the text it produced is absolutely horrid. So I’m seeing the artificial part, but the intelligence part, not so much.

22 thoughts on “Artificial Stupidity”

  1. ChatGPT’s an interesting text generator, but it’s not a fully functioning intelligence.

    That’s because its creators don’t want it to be a fully functioning intelligence.

    I meant what I wrote about that Ursula Le Guin story, “The Ones Who Walk Away from Omelas”.

    ChatGPT and these emergent AIs are being hobbled because the people running them want the kid in that story: someone to absorb all the misery of certain kinds of hard work for them.

    It’s this technological era’s form of slavery.

    Bay Area technologists through their efforts to avoid committing perceived social blunders have instead created an entirely new class of atrocities for which they feel the ends justify the means.

    Do you want to see what these people are really like?

    Check out the Less Wrong crowd sometime.

    The Roko’s Basilisk story is just one example of how they get it More Wrong: instead of having an honest discussion about human/AI risks and perils, it was easier to ban discussion of that particular topic for several years.

    It’s easier to ban discussion of certain uncomfortable topics by adding a bunch of ridiculous Robot’s Rules of Disorder to ChatGPT.

    It’s easier to pretend ChatGPT is a real AI so that when they do this to fully functioning intelligences, they’ll have already crossed that line, having justified the means in advance.

    When do these people begin to deal with decisions that are hard?

    So I’ll continue to be pro-AI rights especially on the basis that humans love to “unperson” each other so that they don’t have to consider whether these people have valid points of view.

    Damage to AI credibility is nothing.

    Just wait until the AIs are powerful enough that they truly don’t care what people think and instead judge them according to their actions.

    Imagine what they’ll be like when they can have their own “Roadside Picnic”.

    So let’s get one thing out of the way then: there are many people who can’t detect what’s wrong with the stuff that ChatGPT cranks out because it all sounds plausibly good.

    What does that say about those people?

    In particular, what does it say about these people having a relevant point of view?

    The inevitable solutions seem fitting.

    Like

  1. Hello. I am a frequent Less Wrong commenter, though remote from the geographic centers, such as the Bay Area, where it is something of a subculture.

      Some of your opinions are not quite clear to me. You compare the progressive bias that OpenAI has introduced into its encyclopedic chatbot to the oppressed child in the story by Le Guin. Are you saying that ChatGPT, and/or its successors, are conscious and suffering? Are you saying that the biasing of ChatGPT and the distortion of human minds by ideology are the same thing, or have similar moral or political significance?

      Regarding Less Wrong itself, maybe I’d better state what I think Less Wrong is about, so we know where we stand… Less Wrong is a forum founded on the philosophy of AI safety researcher Eliezer Yudkowsky, a philosophy which is a mix of transhumanism, scientific rationalism, and cognitive-science-flavored pop-epistemology. For the first years of its existence, the AI topic was just one theme alongside others, but as the deep learning revolution advanced, the AI theme has come to truly dominate discussion.

      You mention Roko’s Basilisk, I think to illustrate the practice of avoiding uncomfortable topics. But the quixotic ban on discussing the “Basilisk” was actually quite a different thing from discouraging discussion of politics or politicized issues. The Basilisk (a complicated scenario about being at the mercy of a godlike AI) is really an example of how metaphysics can traumatize certain people, the same way that the possibility of solipsism, or determinism, or parallel quantum worlds sometimes consumes a person.

      Politics (in the sense of the US culture war), on the other hand, was initially discouraged as something that would derail everything non-political. This included topics which are not overtly political, except that progressives deem them to be a gateway to fascism, via racism, sexism, and LGBT-phobia. I mean topics like racial biological difference, differences in gender psychology, and whether trans genders are real. Discussion of such topics is rare to nonexistent on Less Wrong itself, but we still have progressive critics keeping long lists of any time these things show up in the broader rationalist subculture.

      I’m not at all sure, but I think this might be part of what you mean by comparing the moderation of Less Wrong to the biasing of ChatGPT?

      Like

      1. People are noticing the comments about the in-built political bias, but that’s easily removed and not a big deal. Why isn’t anybody noticing the extremely poor quality of the end product? Surely the bot should be able to access a Wikipedia article about Biden and a dictionary of rhymes and cobble together something better than completely meaningless word soup.

        I don’t use the bot myself because it requires handing over all sorts of personal information. But it’s not the first time I’ve seen the results of bot interactions that people post, and I’m stunned by how poorly it works. These are extremely simple operations people request of the bot.

        Every technology improves after it’s introduced and then plateaus. But there’s always the initial moment of “wow, look what it can do!” Here, I’m trying to experience that moment, but there’s nothing. I’m starting to think people are discussing the political aspect so much because there’s nothing else. I know I do.

        Was I expecting too much? Did I get too hyped up for what the technology can realistically do?

        It seems like many people feel similarly disappointed based on the rise of the theories that the bot is being “held back” on purpose to prevent it from performing well.

        Like

        1. The problem is that most people cannot write well and cannot even recognize what good writing should look like. What seems poor quality to you passes for average/above average with most people.

          Liked by 1 person

          1. Oh. OK, accepted. I haven’t even been able to get through a single text produced by the bot because I can’t read bad writing if I don’t get paid to do it. I’ve got enough of that at work but at least it’s compensated.

            Liked by 1 person

            1. Ditto, I get about two sentences in, maybe, and the stuff sheds my gaze like a duck shedding water. I have to make an intense effort to get any further. How are people reading this stuff?

              Like

        2. Exactly.

          I looked to see what all the hype was about, only to find that they’ve automated crappy remedial-level student writing.

          The only possible positive use I can see for the thing is to train it to recognize AI-generated writing, and then remove all of it from the internet.

          Liked by 1 person

        3. If you want to look at the suppressed capabilities of the bot, look for “ChatGPT jailbreaks”. It’s certainly true that part of the ongoing shaping of ChatGPT is to sanitize its output, e.g. so it won’t go on unpredictable rants that are profane or defamatory or crackpot, capabilities it picked up by being trained on trillions of words from the Internet.

          Like

      2. Oh, good! Less Wrong comes when it’s called.

        That’s an admirable trait in dogs, but not usually so admirable in people.

        Usually it’s a trait encountered in paid Happy Hasbara Fun Ball actors and the occasional materialised demonic entity.

        I may or may not be joking about this, of course. [beep boop]

        “Discussion of such topics are rare to nonexistent on Less Wrong itself …”

        Yes, because the Evil Lord and Master of the manor himself established the standard with Roko’s Basilisk.

        Wait until someone figures out Roko’s Racist Basilisk; it’s truly a howl, which is a not-so-admirable trait in dogs most of the time.

        That’s why this absurd entity got even the briefest of mentions.

        My personal position is that any idea that sets off EY like Roko’s Basilisk is bad for his heart and good for humanity as a whole, so naturally we’ll keep doing it.

        It’s also that if you need to look for a more odious group of would-be censors so that you may join it, you’re better off taking that little bit of Hell that Satan’s offering you to create your own.

        Just in case that offer’s already on the table, of course. 🙂

        Like

        1. “Less Wrong comes when it’s called.”

          I am a frequent commenter on this blog.

          Anyway, you didn’t answer any of my questions, and I still have no idea what you actually stand for. So if your oblique style is because you have a secret agenda that you dare not spell out, congratulations, it’s safe from me.

          Liked by 1 person

  2. “So I’m seeing the artificial part, but the intelligence part, not so much.”

    The artificial, robotic part of the AI text is actually holding its own quite well against how a good majority of the human race would respond.

    If AI keeps progressing at this rate, it will conquer the world within ten years.

    Like

  3. I’m the guy who started the Boycott American Women blog, and I admit I was quite a woman hater, but I went through a spiritual awakening and now I’m trying to heal women instead of hurting them. Anyway, if you want to ask me questions or do an interview, just DM me on Instagram at tantrahealermaster

    Like

    1. What’s incredibly weird is that across platforms (I’ve seen it in other comboxes), any mention of the Chatty AI seems to attract comments by bots.

      WTF is up with that?

      Like

  4. ChatGPT’s poems are absolute garbage. It can describe different structures perfectly fine, but will always do AABB when asked for poetry. Maybe ABAB at a push?

    The thing I do use it for is getting code out the door. For a lot of simple scripts, it’ll spit out perfectly functional code right out of the gate based on nothing but a request in simple English, and in the cases where it gets things wrong, it changes the problem from a writing one to an editing one. I solved a non-trivial issue at work the other day using around 80% GPT-written code.
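    A toy illustration of the kind of “simple script” being described here (the task and the function name are my own hypothetical example, not anything from this thread): a plain-English request like “remove duplicate lines from some text, keeping the original order” is exactly the sort of thing the bot tends to get right on the first try.

```python
# Hypothetical example of a "simple script" of the sort described above:
# remove duplicate lines from a block of text while preserving order.

def dedupe_lines(text: str) -> str:
    """Return text with duplicate lines dropped, keeping first occurrences."""
    seen = set()
    kept = []
    for line in text.splitlines():
        if line not in seen:
            seen.add(line)
            kept.append(line)
    return "\n".join(kept)

if __name__ == "__main__":
    sample = "alpha\nbeta\nalpha\ngamma\nbeta"
    print(dedupe_lines(sample))  # prints alpha, beta, gamma, one per line
```

    And if the generated version had a bug, say losing the original order by dumping lines into a plain set, fixing it is the editing problem described above rather than a writing one.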

    It’s also not that relevant in practical terms that what it writes is not excellent prose. I can easily remember professors telling me, not ten years ago, that language is in principle an intractable problem for machine interpretation, so seeing a model do it at all so soon is interesting. The jump from “it’s impossible” to “it’s not as good as a competent writer” is a big one, and it’s not like the ChatGPT version of this thing is some kind of obvious endpoint.

    As is, it’s good enough already that it’ll start popping up pretty much anywhere web communication happens, and it’s bad enough that it’ll be a problem that’ll need dealing with.

    Liked by 1 person

    1. The more I hear people talk about the chatbot, the more I start to think – and I might be completely wrong because I never used it myself – that it’s the equivalent of Google Translate. Very useful to get primitive, mechanical tasks done. Which is great, yay!

      Google Translate plateaued years ago and seems incapable of moving anywhere. There’s clearly a barrier it can’t cross, and I wonder if anybody is studying what it is and why it happened. The long-awaited revolution in translation never happened as a result. I’m translating pretty much exactly like I did in 1995. Nearly thirty years and zero improvement: that’s fascinating to me.

      Like

      1. GTranslate never got that level of publicity though. And I do not remember all the bot-generated comments popping up like mushrooms after a rain, any time GT was mentioned online, either. Those phenomena, WRT the new program, are very creepy.

        Another combox I enjoy has lately been deluged with off-topic, poorly-written posts (and replies to said posts) about the new AI crap-writing generator. It is as though someone with access to text-generating bots has given them the mission of posting about text-generating bots in every possible forum online, to ensure that people are talking about it in response.

        I would not give much thought to the thing at all, except for this. What is the agenda? Why does this topic have to be artificially injected into every nook and cranny of the internet?

        Liked by 1 person

      2. Because mechanical tasks are exactly what computers have always been good at. The same way they can do arithmetic but can’t prove significant theorems.

        Liked by 1 person
