#llms

31 posts · Last used 10d

jscholes
@jscholes@dragonscave.space · May 07, 2026
An article has been published[1] explaining how to prevent #Chrome from downloading the #Gemini Nano model on #Windows via a Registry change. But it seems to me that such a method is only effective for as long as Google respects that policy in the Registry, doesn't change its name/path, etc. Instead, has anybody tried creating an empty weights.bin file in the relevant location[2], and then removing all permissions from that file so that Chrome can't read, write, replace, or do anything else with it? [1] https://pureinfotech.com/stop-chrome-gemini-nano-download-windows-11/ [2] %LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel #AI #LLM #LLMs
1
4
5
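The approach suggested in the post above can be sketched in Python, for anyone who wants to experiment with it. This is a hypothetical sketch, not a tested fix: the directory path comes from the post, `icacls` is the standard Windows ACL tool, and whether Chrome tolerates an unreadable decoy file (rather than erroring or recreating the folder) is exactly the open question being asked.

```python
import os

# Sketch of the idea from the post: create an empty weights.bin in the
# model directory, then strip all permissions from it so Chrome cannot
# read, write, or replace it. Experimental -- Chrome's behaviour when it
# hits an unreadable file here is unknown.

MODEL_DIR = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel"
)

def lockdown_commands(model_dir):
    """Build the icacls invocations that remove all access to weights.bin.

    /inheritance:r drops inherited ACEs; the /deny entry uses the
    well-known SID S-1-1-0 (Everyone) so it is locale-independent.
    """
    target = os.path.join(model_dir, "weights.bin")
    return [
        ["icacls", target, "/inheritance:r"],
        ["icacls", target, "/deny", "*S-1-1-0:(F)"],
    ]

if __name__ == "__main__" and os.name == "nt":
    import subprocess
    os.makedirs(MODEL_DIR, exist_ok=True)
    # Create the empty decoy file, then lock it down.
    open(os.path.join(MODEL_DIR, "weights.bin"), "a").close()
    for cmd in lockdown_commands(MODEL_DIR):
        subprocess.run(cmd, check=True)
```

To undo the experiment, `icacls <file> /reset` restores inherited permissions so the file can be deleted again.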
mnl
@mnl@hachyderm.io · May 06, 2026
The US AI economy #llm #llms #genai
386
23
233
In reply to
caiocco
@caiocco@bolha.us · May 06, 2026
I'm very tempted to buy a PocketTerm35, but I find it hard not to run into the dichotomy I've been describing: either I run terminal software that speaks to a paradigm of distributed, multi-user, multitasking computing, serial terminals, and batch processing, or... I run old relics that nobody supports anymore, built on technologies that were already obsolete back when they were commercially viable. As I've written before (https://blog.caiocesar.org/2025-08-15_kinoite_lamentavel/), the fact that Tavis Ormandy embarked on a long crusade to run Lotus Agenda and Lotus 1-2-3 on Linux, even unearthing and managing to run 1-2-3 for Unix, cannot be seen as mere curiosity or a #hacking exercise: it is a symptom of deep scarcity. With the desktop getting ever closer to a toll-gated webtop, highly dependent on SaaS and on data obscured across fragmented clouds, that scarcity only seems to have worsened. Not by chance, even with the code open, a formidable stack of software has been abandoned. With #LLMs gaining strength and some figures comparing IDEs to punch cards, the message is crystal clear: descriptive paradigms are threatening more direct operations.
1
1
0
msbellows
@msbellows@c.im · May 05, 2026
The greatest trick the devil ever pulled was convincing everyone the Turing Test means there's no difference between an app *seeming* sentient and *actually being* sentient, instead of what it really is, a painful metaphor for being a gay man trying to pass as straight in a hostile society. #AI #LLMs
14
0
10
mnl
@mnl@hachyderm.io · Apr 30, 2026
All the posts being absolutist about "AI" (problematic term, obviously), such as claims that any use of genai models in an artifact is a sign or consequence of a loss of "humanity", push the same narrative as the hyperscalers saying that AI is humanity's panacea. It posits that "AI" is a technology that, through mere contact with it, redefines who you are as a human and your "value". Use it and you are "elevating your skills", or, inversely, "suffering from brainrot" / "it infects everything it touches".
An LLM is a pile of numbers that models the language it has seen during its training. It doesn't have to redefine anything. You can't draw a clear line between the use of LLMs and the use of a search engine/IDE/youtube tutorials (or whatever used to be the favourite target of the "brainrot" accusation), or between the environmental costs of a genai-oriented datacenter and those of a "normal" datacenter. There is nothing transcendental about #llms. They are complex computational artifacts, which makes them hard to understand and not easy to wield.
You can't accuse someone of brainrot and then boldly proclaim that an entire field of research is "brainrot garbage". That is intellectually dishonest. I harp on this so often because it plays _STRAIGHT_ into the things that people decry: while a huge percentage of people out there are doing fuckshit with the technology, being so absolutist cuts off any meaningful practical way of countering the narrative.
I can't counter openai by refusing to use it. I can however show people how to do proper engineering with LLMs, how to use small models, how to use fewer tokens, how to identify what is worth using and what is not. If I can help a person burning $200 a day in tokens reduce that to $100 a month, I've done a fair deal. If I can help an org migrate their data off google by building a little bespoke backend, I've moved an org off google.
But that requires properly engaging with the tech, recognizing that "prompt engineering" is a thing, and not an easy one. #llms #llm #genai
5
0
1
hrheingold
@hrheingold@mastodon.social · Apr 29, 2026
I've uploaded a 60 MB zip file including 8 illustrated teaching stories, suitable for 5 year olds & up, with learning guide & curriculum overview for teachers, parents, grandparents, who want to show young generations how to think critically & independently in the age of #AI #learning #thinking #llms http://rheingold.com/ThinkingCurriculum.zip
21
3
14
Boosted by Tim Chambers @tchambers@indieweb.social
rimu
@rimu@piefed.social in piefed_dev · Apr 23, 2026

Towards an AI usage policy - your thoughts please

For too long PieFed has been without a policy on using LLM/AI-generated code in PieFed. My attitude remains basically the same as what I expressed recently - https://piefed.social/comment/10688199
I am very close to publishing a policy, so any interested parties - this is your opportunity to speak up. Please send me a private message if you do not feel comfortable putting yourself out there - this can be a contentious topic, and if you don’t want to deal with people getting all up in your grill about it, that’s totally understandable.
If you’d like to do more reading and thinking about this, here are some links that I found helpful lately:
https://piefed.social/c/programming/p/1975181/i-just-tried-vibe-coding-with-claude
https://piefed.social/c/technology/p/1977396/linux-lays-down-the-law-on-ai-generated-code-says-yes-to-copilot-no-to-ai-slop-and-huma
https://futurism.com/artificial-intelligence/ai-boiling-frog-human-cognition-study
https://jellyfin.org/docs/general/contributing/llm-policies/ - an attempt to have your cake and eat it too. This might work in a commercial setting (no ethics) with an onsite office with people working closely together - PieFed is none of those.
https://toot.cat/@plexus/116283016837715719
https://en.wikipedia.org/wiki/Environmental_impact_of_artificial_intelligence
17
16
0
stefan
@stefan@stefanbohacek.online · Apr 24, 2026
"But it’s not just that AI companies are restricting access to their products, shutting down products altogether, and beginning to increase prices. The broader impact of the current unsustainability of AI can be seen across various sectors of the economy." https://www.404media.co/the-ai-compute-crunch-is-here-and-its-affecting-the-entire-economy/ #news #technology #TechNews #AI #LLMs #enshittification
6
1
7
In reply to
dmurana
@dmurana@mastodon.uy · Apr 20, 2026
It turned out well and saved me a lot of work. Neither of the two LLMs I tried managed to get up-to-date information about applications when recommending replacements, but for the most common system applications they worked fine. And for evaluating which GNOME/GTK packages to keep and which to delete, plus certain bits of fine-tuning, they were very good. So here we are with Debian 13 and the KDE Plasma desktop. #GnometoPlasma #Linux #GNULinux #Debian #LLMs #KDEPlasma
0
1
0
metin
@metin@graphics.social · Apr 15, 2026
AI Use Appears to Have a “Boiling Frog” Effect on Human Cognition, New Study Warns "In a new study, researchers claim to provide the first causal evidence that leaning on AI to assist with “reasoning-intensive” cognitive labor — mental tasks ranging from writing to studying to coding to simply brainstorming new ideas — can rapidly impair users’ intellectual ability and willingness to persist despite difficulty." https://futurism.com/artificial-intelligence/ai-boiling-frog-human-cognition-study #tech #AI #ArtificialIntelligence #LLM #LLMs #FuckAI
192
66
203
waynerad
@waynerad@mastodon.social · Apr 10, 2026
LLMs can get "brain rot"! An experiment was done where LLMs were trained on "brain rot" data, and it degraded their reasoning abilities. Subsequent training on high-quality data didn't entirely reverse the brain rot. https://arxiv.org/abs/2510.13928 #solidstatelife #ai #genai #llms #brainrot
3
1
7
Boosted by hypebot @hypebot@tacocat.space
aral
@aral@mastodon.ar.al · Apr 03, 2026
If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either. Any monkey with a keyboard can write code. Writing code has never been hard. People were churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.
What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.
Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.
So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.
So it should come as no surprise that one of the hardest things in development is understanding someone else’s code, let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.
It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards. They might as well call vibe coding duct-tape-driven development or technical debt as a service. 🤷‍♂️ #AI #LLMs #vibeCoding #softwareDevelopment #design #craft
305
53
251
Boosted by Hunter Perrin @hperrin@port87.social
ell1e
@ell1e@hachyderm.io · Mar 29, 2026
If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️ Full source: https://www.youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.) #llmslop #antislop #antiai #noai #stopai #llm #llms #ai #generativeAI #opensource Help me boost this post if you're curious what the Linux foundation thinks: https://hachyderm.io/@ell1e/116285351290767548
37
3
29
Boosted by Hunter Perrin @hperrin@port87.social
ell1e
@ell1e@hachyderm.io · Mar 24, 2026
Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" https://www.linuxfoundation.org/legal/generative-ai "If"? Why not "whenever"? https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 https://dl.acm.org/doi/10.1145/3543507.3583199 https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/ And how would the contributor even be aware, should they research every snippet for hours? Seems like an impossible policy, or am I missing something...? #AIslop #LLMslop #LLM #LLMs #slop #generativeAI #Linux #opensource #linuxfoundation
15
3
10
Boosted by dansup @dansup@mastodon.social
stefan
@stefan@stefanbohacek.online · Mar 29, 2026
Catching up with some of the news coming out of the Atmosphere conference. "With Attie, anyone will be able to build their own custom feed just by typing in commands in natural language, the same as if they’re chatting with any other AI chatbot." I'm guessing NFT profile pictures are next? https://techcrunch.com/2026/03/28/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds/ #news #technology #TechNews #atmosphere #ATProto #bluesky #AI #LLMs
27
51
21
drrjv
@drrjv@vmst.io · Mar 17, 2026
What Is Inference? Explaining the Massive New Shift in AI Computing “A significant shift is under way in #artificialintelligence, and it has huge implications for technology companies big and small. For the past half-decade, most of the focus in #AI has been on training large language models (#LLMs), a costly process that requires tens of thousands of chips, consumes enormous amounts of energy and happens in gigantic, remote data centers.” https://www.wsj.com/tech/ai/what-is-inference-explaining-the-massive-new-shift-in-ai-computing-ed65a2fe
0
2
1
Shepharo
@Shepharo@mastodonapp.uk · Feb 28, 2026
Thinking that chatbots are conscious is the same as people seeing the face of Jesus in a slice of toast or animals in the clouds: it’s just pattern matching. A lexical illusion, if you will. #LLMs
1
0
0
tlayoyo
@tlayoyo@fe.disroot.org · Feb 21, 2026
Awwn this is beyond cute 🤣❤️ #llms https://annas-archive.li/blog/llms-txt.html
0
0
0
In reply to
JdeBP
@JdeBP@mastodonapp.uk · Feb 17, 2026
@cstross@wandering.shop And on Usenet. There was a parallel, back in the tail-end days of significant Usenet trolls, to that 'MJ Rathbun' that went after Scott Shambaugh this week. https://mastodonapp.uk/deck/@JdeBP/116060705914714390 A follow-up post by Shambaugh reported that the 'AI agent' had been widely cheered on in some quarters. So now there's even more training data for the next robot. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/ @n1xnx@tilde.zone @keith_lawson@mastodon.social @GossiTheDog@cyberplace.social @quixoticgeek@social.v.st #AIs #LLMs #AIpocalypse #matplotlib #GitHub
0
0
0
mikemccaffrey
@mikemccaffrey@pdx.social · Feb 19, 2024
Belatedly realized what all the companies stuffing #AI tools into their random products reminded me of. #AIs #LLMs #Portlandia
170
5
100