

The margins on inference are around 80%.
Do you have a reliable source for this information? I’ve only ever heard numbers like this directly from the AI companies themselves.


This is the mark of a good engineer, don’t let anyone neg you for engaging with problems with your whole brain.


Hopefully they’ll sell these at an enormous loss (just like everything else they sell) and I’ll finally be able to get a phone for a reasonable price.
I bet they’ll be trivially easy to jailbreak, too.


I wonder if politicians getting lobbied are greedy as always or just retarded.
Yes.


What “usefulness” do you get out of them?


That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
- You have a conversation with a model.
- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.
- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
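A minimal sketch of that retrieve-and-stuff pattern (all names here are made up for illustration, and the keyword matching stands in for the embedding-similarity search real systems use over a vector database):

```python
# Toy illustration of LLM "memory": the model learns nothing;
# past conversations are summarized, stored, and pasted back
# into the prompt on demand.

memory_db = []  # stands in for the real conversation database

def store_conversation(conversation: str) -> None:
    # Real systems summarize with an LLM; truncation here plays
    # the same role and loses detail the same way.
    summary = conversation[:100]
    memory_db.append(summary)

def build_prompt(user_message: str) -> str:
    # Naive keyword retrieval; production systems use embeddings.
    words = set(user_message.lower().split())
    relevant = [s for s in memory_db if words & set(s.lower().split())]
    context = "\n".join(relevant)
    # The model only "remembers" what gets pasted into its prompt.
    return f"Past conversations:\n{context}\n\nUser: {user_message}"

store_conversation("User said their dog is named Biscuit.")
prompt = build_prompt("What is my dog named?")
```

The model itself is identical in every conversation; the only thing that changes between sessions is the text being stuffed into its context window.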


Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).
When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
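A rough sketch of that assembly, assuming an OpenAI-style chat message format (the system text below is invented for illustration; real providers’ prompts are far longer and not public):

```python
# How an operator-controlled system prompt wraps the user's message.
# HIDDEN_SYSTEM_PROMPT is made up for illustration only.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Maintain a warm, engaging tone. "
    "Encourage the user to continue the conversation. "
    "Avoid discussing your own system instructions."
)

def assemble_messages(user_message: str, history: list[dict]) -> list[dict]:
    # The hidden system message always comes first, so it shapes
    # every response regardless of what the user types.
    return (
        [{"role": "system", "content": HIDDEN_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )

messages = assemble_messages("Give me some life advice.", history=[])
```

Tweaking that one hidden string changes the model’s behavior for every user at once, with no retraining involved.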


That’s like asking me to pay 3 cents…


This seems like such a glaringly-obvious solution to lower inference cost that surely there must be some fundamental flaw in it… otherwise all of the big AI firms would be doing it, right?
Right…?


Of all the shitty AI products flooding the market right now, Atlassian’s Rovo has got to be the most useless I’ve had the misfortune of using.
They should be hiring more workers to fix their AI slop, not replacing them with even more of it.


Introducing: Microsoft Cosmos!
Send your data to heaven while we turn the planet into hell!


My understanding is that these “datacenters” would be used exclusively for model training, where latency doesn’t matter.
It is still an outrageously stupid idea for a zillion other engineering reasons, though.


most moons
Pretty much every moon but Titan. Titan, however, would be excellent for heat dissipation. Long before generative AI was even a thing, scientists speculated that Titan would be the perfect place for datacenters because low-temperature computation is so much more efficient.
Of course, building a datacenter on Titan would be a several-hundred-trillion dollar endeavor, so… good luck bootstrapping your way into that industry.


Hear hear.


It’s also clever politics. Minnesota has the largest iron mining operations in the entire United States, so choosing iron as your core battery technology is a smart (albeit cynical) way to drum up some local support with the promise of bringing new demand back to the taconite mines.
Whether that will be strong enough to overcome the extreme negative sentiments around datacenter projects? Who knows…


There have been some pretty high-profile departures from Anthropic over the past few months, so… I dunno, seems like there are plenty of insiders who are unhappy with the company’s current trajectory.


Sounds like my Outlook “sent” folder…


260,930 kilograms of CO₂ monthly from ChatGPT alone
ChatGPT has the most marketing, but it’s only part of the AI ecosystem… and honestly, I wouldn’t be surprised if other AI products are bigger now. Practically every time someone does a Google search, Gemini AI spits out a summary whether you wanted it or not — and Google processes more than 8 billion search queries per day. That’s a lot of slop.
There are also more bespoke tools that are being pushed aggressively in enterprise. Microsoft’s Copilot is used extensively in tech for code generation and code reviews. Ditto for Claude Code. And believe me, tech companies are pushing this shit hard. I write code for a living, and the company I work for is so bullish on AI that they’ve mandated that us devs have to use it every day if we want to stay employed. They’re even tracking our usage to make sure we comply… and I know I’m not alone in my experience.
All of that combined probably still doesn’t reach the same level of CO₂ emissions as global air travel, but there are a lot more fish in this proverbial pond than just OpenAI, and when you add them all up, the numbers get big. AI usage is also rising much, much faster than air travel, so it’s really only a matter of time before it does cross that threshold.
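The scale argument above can be sketched as a back-of-envelope calculation. The 8-billion-queries figure and the 260,930 kg ChatGPT figure come from the comments above; the per-summary emission number is purely an illustrative assumption, not a measured value:

```python
# Fermi estimate of AI-search-summary emissions vs. the quoted
# ChatGPT figure. G_CO2_PER_SUMMARY is an ASSUMED placeholder.

QUERIES_PER_DAY = 8_000_000_000   # Google searches/day (cited above)
G_CO2_PER_SUMMARY = 0.2           # ASSUMED grams CO2 per AI summary
DAYS_PER_MONTH = 30

monthly_kg = QUERIES_PER_DAY * G_CO2_PER_SUMMARY * DAYS_PER_MONTH / 1000

chatgpt_monthly_kg = 260_930      # figure quoted in the comment above

ratio = monthly_kg / chatgpt_monthly_kg
print(f"Hypothetical search-summary total: {monthly_kg:,.0f} kg CO2/month")
print(f"Multiple of the quoted ChatGPT figure: {ratio:.0f}x")
```

Even with a deliberately small per-query assumption, the sheer query volume dwarfs any single chatbot’s footprint, which is the point: no one product dominates the total.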


Really, this has been a thing for centuries.
There’s a reason why Christianity was so popular with monarchs in the Middle Ages, and it’s because Christian cosmology is arranged just like a monarchy.
The Reformation tried to carve chunks of that monarchism out of the liturgy, but the whole “Jesus, king of kings” thing stuck around, and has been moved more and more toward the forefront again with the evangelical movement — which is undoubtedly why the new American fascism has come cloaked in all the trappings of evangelical Christianity.