

I imagine it has plenty of use cases for blue team as well, just not as many for active threat response.


The risk of that is relatively low for kernel contributions, though. Most of the work being done is porting existing protocols/firmware into the latest Linux kernel, not creating novel features.
The larger risk is instability caused by bad, hallucinated code that was submitted under the assumption of human authorship. In both cases, the Linux team can apply further review if they understand where that code is coming from.
Banning AI does nothing, because there's no way of knowing who uses it without proper disclosure, which wouldn't happen if it were banned. To use an example from the article, it would be like banning code written with a specific brand of keyboard.
Better to have it properly disclosed than to make it illicit.


That would be true even if they didn’t use AI to reproduce it.
The problem being addressed by the Linux foundation isn’t the use of copyrighted work in developer contribution, it’s the assumption that the code was authored by them at all just because it’s submitted in their name and tagged as verified.
Does that make sense?


Even if this were true, it would only mean that the GNU license is unenforceable, not that the Linux kernel itself is infringing copyright


Yup
People want to pretend as if everything that flows downstream from the creation of LLMs is illegal, but that’s just not the reality.


The Linux Kernel is under a copyleft license - it isn't being copyrighted.
But the policy being discussed isn't about allowing the use of copyrighted code - they're simply requiring that any code submitted by AI be tagged as such, so that the human using the agent is ultimately responsible for any infringing code, instead of allowing that code to go undisclosed (and even 'certified' by the dev submitting it, even if they didn't write or review it themselves).
Submissions are still subject to copyright law - the law just doesn't function the way you or OP are suggesting.


Yup.
I would also just point out that this doesn't change the legal exposure of the Linux kernel to infringing submissions from before the advent of LLMs.


LLMs themselves being products of copyright infringement isn't the legal question at issue - it's the downstream use of that product.
If I use a copyright-infringing work as a part of a new creative work, does that new work infringe copyright by default? Or does the new work need to be judged itself as to the question of infringing a copyrighted work?
And if it is judged as infringing, who is responsible for the damage done? Can I pass the damages back to the original infringing work? Or should I be held responsible for not performing due diligence?


If you think "bad" is too vague, that isn't a new problem.
Linux has always had to reject 'bad' code submissions - what's new here is that the kernel team isn't willing to prejudge all AI code as "bad", even if that would be easier.


That’s not really how copyright law works.


I find myself wondering just how complicated TVs could actually get before it's no longer possible to hijack the signal that's fed to the display.
Unlike with cars, TVs seem simple enough that a sufficiently motivated novice could modify a cheap TV to circumvent these bullshit features. If they ever started requiring internet connections to start or use these, I think enough people would be bothered by it that there would probably be a secondary market of modified hardware.
As with most enshittifications, the question will ultimately be one of complacency of the average consumer.


I don't see how the second cert that goes to the site is useful if it isn't still associated with the first, but I also wouldn't trust the state to abide by an untraceable standard to begin with, because identifying individuals by their accounts is in their interest.
I get where the enthusiasm for cryptography is coming from, but I think it’s misplaced.


A state issuing a cert file has to be able to verify that it goes to the intended person. The state would have to know the ID of the person they’re issuing it to, otherwise it wouldn’t function as intended. Similar to blockchain wallets - they are anonymous all the way up to the point of fiat exchange, where most state actors can still end up ID’ing wallet owners.
Even if you try obscuring that information via encryption, it still gets signed by a ‘trusted’ authority at the end of the chain.
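To make the linkage concrete, here's a toy Python sketch of the naive scheme described above (all names are hypothetical, and this is not any real protocol - real proposals use blind signatures to try to break this link, but at issuance time the authority still verifies who you are):

```python
import hashlib
import hmac
import secrets

STATE_KEY = secrets.token_bytes(32)  # the authority's signing key

# Nothing technical prevents the issuer from keeping this record.
issuance_log = {}

def issue_cert(real_id: str) -> bytes:
    """The authority signs a credential. To verify eligibility, it must
    first check real_id against its records - so it knows exactly who
    received which credential."""
    cert = hmac.new(STATE_KEY, real_id.encode(), hashlib.sha256).digest()
    issuance_log[cert.hex()] = real_id  # the link is created at issuance
    return cert

def deanonymize(cert: bytes) -> str:
    """Any later sighting of the credential maps straight back to a person."""
    return issuance_log[cert.hex()]
```

The point isn't that every implementation would keep such a log - it's that the signing authority is always in a position to, and you have no way to verify it doesn't.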
Even in theory this is a shit idea.


The problem is that it’s stored outside of your control and accessible without your consent. This system addresses those issues.
Sorry, I just don’t agree with this, either. It isn’t just that it’s a third party, it’s that verification necessarily ties your device to your personal identity at all. No matter how you store the actual identity data, there needs to be an identifier associated with every device/account. I’d be fine if the OS just asked for my age and didn’t verify it with my state-issued ID - but if there’s any cross-checking involved that’s a dealbreaker.
If there were any possibility that a state actor had an interest in identifying the person behind this account, and there was a record that pointed to my name, SSN, or other unique personal identifiers, I'd be absolutely fucked. There are really good reasons not to want social media accounts tied to real, verifiable identities - even if you think social media should be limited to adults (I'm not willing to concede this, for what it's worth).
It doesn’t matter if the data is stored on your local device - if it’s being verified by a state authority at all, that’s a huge problem.


The problem isn't just that the third party can abuse their access to your information - it's that it is digitally stored and certifiable at all.
The most secure data providers in the world have all basically had data breaches by now - including the IRS and US government. There is no party that can guarantee data security, even if they themselves are benevolent.
And for what purpose are we willing to gut privacy online? So it’s marginally more difficult for minors to obtain porn?
GTFO. De-anonymization has always been the goal, not ‘protecting the children’.


I wish I could tell if ‘liberate’ was being used facetiously or not
Kinda, but they’re specifically saying that the AI agent cannot itself tag the contribution with the sign-off - like, someone using Claude Code to submit PRs on their behalf. The developer must add the tag themselves, indicating that they at least reviewed and submitted it themselves, and that it wasn’t just an agent going off-prompt or some other shit and submitting it without the developer’s knowledge. This is saying ‘the dog ate my homework’ is not a valid excuse.
The developer can use AI, but they must review the code themselves, and the agent can’t “sign-off” on the code for them.
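Roughly what that looks like in a patch (the Signed-off-by line is the kernel's standard Developer Certificate of Origin tag; the AI-disclosure line here is illustrative of the kind of tag being discussed, not necessarily the final wording):

```
drivers/foo: fix null-pointer dereference in probe path

<patch description>

Co-developed-by: <AI tool used>
Signed-off-by: Jane Developer <jane@example.com>
```

The Signed-off-by line only means something because a human adds it - it's their personal certification that they have the right to submit the code. An agent adding it on their behalf would make that certification meaningless.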
What does holding any individual on a development team responsible accomplish? The Linux project is still responsible for anything it puts out in the kernel, just like any other project, but individual developers can be removed from the contributing team if they break the rules and put it at risk.
The new rule simply makes the expectations clear.