

I guess prepare for potential kernel rot: https://www.neowin.net/news/linus-torvalds-declares-massive-ai-fueled-code-surges-as-the-new-normal-for-linux/
I code and do art things. Check https://private.horse64.org/u/ell1e for the person behind this content. For my projects, https://codeberg.org/ell1e has many of them.


It is important to understand that the core disagreement is not whether Fedora should support AI development
Sad. Even the kernel seems to be going all in now: https://www.neowin.net/news/linus-torvalds-declares-massive-ai-fueled-code-surges-as-the-new-normal-for-linux/ I do hope there’ll be a discussion one day; so far, no response yet: https://lore.kernel.org/lkml/e12330b9-c29e-45ca-9375-9e3d13426d85@horse64.org/T/


Where does Codeberg rule out commercial projects? I’ve never heard of that being banned over there. (Do you perhaps mean closed-source?)


I’ve moved to Codeberg. It works well enough for me.
Gitlab seems to be trying hard to ruin its software, so beware of that.


I feel like it’s been going downhill since 2019. Given that Microsoft acquired them in 2018, I’d say people just haven’t wanted to acknowledge the trajectory. (That included me.)
Every big feature since 2019 has been enterprise slop, in my opinion:
In 2019 they announced dependabot. What’s wrong with it?
It’s not configurable: rather than offering a universal mechanism so people could feed dependencies into it via some custom tool that e.g. generates a standardized listing, it only supports the popular package managers. That’s exactly what big enterprise wants, since they only care about their super old codebases and what those use, not about any upcoming stack.
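To illustrate the point: as far as I can tell, Dependabot is configured via a .github/dependabot.yml where package-ecosystem must be picked from a fixed list of supported values; there’s no generic “run my tool to produce a dependency listing” option for an unsupported stack. (A minimal sketch, using the standard npm example; the supported-value list is Github’s, not something you can extend.)

```yaml
# .github/dependabot.yml -- sketch of a minimal config.
# "package-ecosystem" must be one of a fixed set of supported
# values (npm, pip, cargo, gomod, ...); an unsupported or custom
# stack simply cannot be plugged in here.
version: 2
updates:
  - package-ecosystem: "npm"   # only the built-in ecosystems are accepted
    directory: "/"             # where the manifest (package.json) lives
    schedule:
      interval: "weekly"       # how often to check for updates
```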
In 2019, they also announced security advisories. What’s wrong with it?
That Github, to this day in 2026, hasn’t bothered to add the most basic feature that regular FOSS projects would need to handle security reports: confidential issues. Instead, the assumption seems to be that you’re either a big enterprise that already has a dedicated security team with its own email infrastructure, or Microsoft doesn’t care about you.
In 2020, they announced Github’s Codespaces. What’s wrong with it?
It makes the UI more complicated, and as far as I know it leaves buttons everywhere that can’t be turned off even if you don’t want it. And it’s an expensive vendor lock-in feature; the average small FOSS project will neither have the budget to use it nor likely care to.
Then of course there’s the entire AI slop spin since 2025-ish.
There’s probably more, but those are the big ones I noticed that made me suspicious of where this was going.


My condolences.



I heard it’s alright for games, and many apparently work. Sadly, FreeBSD simply doesn’t seem to have drivers for a lot of the hardware I’m using. And as far as I know, they don’t have an LLM policy yet (so they could still come out in favor of it).


I like that I can read this as you stating you use Atlassian yet hate Gitlab, and the statement still works either way 😅


That makes sense, since Gitlab seems to be trying to challenge Atlassian. In who manages to make worse software…


I’m saying that if their policy is to accept AI code, which the link seems to demonstrate it is, the rate of future hidden errors in the kernel code is likely going to go up. That’s what the studies are saying, including those involving competent coders.


Perhaps Forgejo will at some point change into a hard fork. That would be kind of nice…


Perhaps some higher up at the college wanted the speech to be pro AI, and that’s the only speaker they found… (I have no idea if that’s what happened, but it would be funny if that was why.)


The kernel policy seems to be what I think it is, since LLM slop patches have been merged. Edit: I call it “slop” since it’s LLM code, and I’m aware some use that word differently.
I find it slightly contradictory to delete code due to hidden bugs on the one hand, then insert LLM code on the other, rather than hand-crafting the code to better avoid hidden bugs.


I doubt the Linux kernel allowing slop patch submissions with potentially higher rate of hidden insidious bugs will help the LLM-pocalypse much…


any focused ideas on how this is triggered?
Be careful of what you say in front of our Smart TV, warns Samsung


The only relatively safe way to avoid it is to not use any app unless it’s from F-Droid or similar places, to use a degoogled phone, to use an adblocker on all websites, to use an end-to-end encrypted messenger for private conversations rather than social media, to use federated non-profit social media if you ever use any at all, to use a paid email provider that doesn’t make money off your data as its primary income, and to use an actually private web search (not Bing, not Google).
It’s a shame that it requires so much knowledge and effort for a bare minimum of privacy.


If I had to guess, too many government agencies are probably bought by big tech. Otherwise, they wouldn’t let that neglect of privacy fly at such a scale. I suppose the question is when people will get sick enough of it that it results in a change.


Gloria Caulfield, the Vice President of Strategic Alliances for real estate firm Tavistock Development
Yeah, and the brain rot, and that AI code is dumb and ruins projects.
So much more seems wrong than just the business model.