• 1 Post
  • 20 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Really, this has been a thing for centuries.

    There’s a reason why Christianity was so popular with monarchs in the middle ages, and it’s because Christian cosmology is arranged just like a monarchy.

    The Reformation tried to carve chunks of that monarchism out of the liturgy, but the whole “Jesus, king of kings” thing stuck around, and it has been moved more and more toward the forefront again by the evangelical movement, which is undoubtedly why the new American fascism has come cloaked in all the trappings of evangelical Christianity.

  • That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

    - You have a conversation with a model.

    - Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.

    - You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, the system searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
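    The steps above can be sketched in a few lines. This is a deliberately crude illustration, not any vendor's actual system: the summarizer, store, and keyword-overlap search are all stand-ins I invented, where real products use an LLM summarizer and embedding similarity. The principle is the same: nothing is retrained, snippets are just pasted back into the prompt.

```python
def summarize(conversation: list[str]) -> str:
    # Stand-in for an LLM summarizer: keep only the first sentence of
    # each turn, so details and context are lost along the way.
    return " ".join(turn.split(". ")[0] for turn in conversation)

class ConversationStore:
    def __init__(self):
        self.summaries: list[str] = []

    def save(self, conversation: list[str]) -> None:
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, top_k: int = 1) -> list[str]:
        # Rank stored summaries by crude word overlap with the new prompt.
        # Real systems use embedding similarity; the principle is the same.
        words = set(prompt.lower().split())
        scored = sorted(
            self.summaries,
            key=lambda s: len(words & set(s.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

def build_prompt(store: ConversationStore, user_prompt: str) -> str:
    # The model itself remembers nothing; retrieved snippets are pasted
    # into the prompt to create the illusion of memory.
    snippets = store.retrieve(user_prompt)
    return ("Relevant past conversations: " + " | ".join(snippets)
            + "\nUser: " + user_prompt)

store = ConversationStore()
store.save(["I adopted a cat named Miso. She is very loud.",
            "Miso likes tuna. She naps all day."])
store.save(["Help me fix a bug in my parser. It crashes on empty input."])

print(build_prompt(store, "What does my cat Miso like to eat?"))
```

    Note that the second half of each sentence never makes it into the store at all: that is the lossy summarization step, and it is why the “memory” people perceive is patchy.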


  • “Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.”

    That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).

    When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.


  • 260,930 kilograms of CO₂ monthly from ChatGPT alone

    ChatGPT has the most marketing, but it’s only part of the AI ecosystem… and honestly, I wouldn’t be surprised if other AI products are bigger now. Practically every time someone does a Google search, Gemini AI spits out a summary whether you wanted it or not — and Google processes more than 8 billion search queries per day. That’s a lot of slop.

    There are also more bespoke tools that are being pushed aggressively in enterprise. Microsoft’s Copilot is used extensively in tech for code generation and code reviews. Ditto for Claude Code. And believe me, tech companies are pushing this shit hard. I write code for a living, and the company I work for is so bullish on AI that they’ve mandated that us devs have to use it every day if we want to stay employed. They’re even tracking our usage to make sure we comply… and I know I’m not alone in my experience.

    All of that combined probably still doesn’t reach the same level of CO₂ emissions as global air travel, but there are a lot more fish in this proverbial pond than just OpenAI, and when you add them all up, the numbers get big. AI usage is also rising much, much faster than air travel, so it’s really only a matter of time before it crosses that threshold.
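    For scale, here is the back-of-envelope arithmetic using only the two figures quoted above (the monthly ChatGPT estimate and Google’s query volume); everything else is plain multiplication, not new data.

```python
# Annualize the quoted ChatGPT figure: 260,930 kg of CO2 per month.
chatgpt_monthly_kg = 260_930
chatgpt_yearly_tonnes = chatgpt_monthly_kg * 12 / 1000  # ~3,131 tonnes/year

# Google's quoted search volume: 8 billion queries per day, each now
# potentially triggering a Gemini summary.
google_queries_per_day = 8_000_000_000
google_queries_per_month = google_queries_per_day * 30  # 240 billion/month

print(f"ChatGPT alone: ~{chatgpt_yearly_tonnes:,.0f} tonnes CO2/year")
print(f"Google searches: ~{google_queries_per_month:,} per month")
```

    Even without knowing the per-query cost of a Gemini summary, 240 billion monthly invocations of anything is going to dwarf one chatbot’s footprint.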