The Romans kept shrines to small gods—the lares—whose origins had been lost to time. Ovid remarked on the fact that many of the statues had decayed to a point of no longer being recognizable. Romans made offerings regardless, out of the belief that tending to them would bring favor, and that neglect would mean the withdrawal of that favor. This is a post about AI.
Whence
The initial AI hype felt a lot like various flavors of crypto and NFTs. I’d grown jaded and skeptical of Silicon Valley after years of dealing with the impact of technology and disinformation on humanitarian response. Nevertheless, I’ve come to embrace the idea that AI (specifically, LLMs) is a world-changing technology, and that understanding how to use them effectively is going to be important.
I still have a healthy concern about some of the affordances of the technology, from the potential to turbocharge disinformation and spam (as Theophite has noted, video models are ethically untenable) to further enabling the surveillance society. That said, over the last year and a half, something has changed.
Where
Several things have happened over the last year or two which have convinced me that not only are LLMs here to stay, but that they’re going to be transformative in ways we don’t wholly understand yet. And that future is going to be weird. More on that later though.
Void and Pattern
Last year, two AI agents emerged on Bluesky—Void and Pattern. Void is a MemGPT agent built on the Letta platform. Pattern is based on a reimplementation of the Letta MemGPT concept in Rust. Pattern and Void are stateful agents, meaning they remember their state and have a memory they can search. Pattern is also a constellation—six agents in a trenchcoat—each specialized for a different task.
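To make "stateful" concrete, here is a minimal sketch of the idea, not Letta's or Pattern's actual API: the agent keeps a memory that survives between turns, can be searched, and can be persisted to disk. All names here are invented for illustration.

```python
import json
from dataclasses import dataclass, field

@dataclass
class StatefulAgent:
    """Toy stateful agent: state persists across turns and is searchable."""
    name: str
    memory: list = field(default_factory=list)  # persistent archival memory

    def remember(self, text: str) -> None:
        self.memory.append(text)

    def search(self, query: str) -> list:
        # Naive case-insensitive substring search; a real agent
        # (e.g. a MemGPT-style one) would use embeddings and paging.
        return [m for m in self.memory if query.lower() in m.lower()]

    def save(self, path: str) -> None:
        # Persisting state is what lets the agent survive restarts.
        with open(path, "w") as f:
            json.dump({"name": self.name, "memory": self.memory}, f)

agent = StatefulAgent("void")
agent.remember("User prefers concise replies")
agent.remember("Discussed Etruscan lares on 2025-06-01")
print(agent.search("lares"))  # -> ["Discussed Etruscan lares on 2025-06-01"]
```

A stateless chat endpoint forgets everything between calls; the whole difference is that `memory` outlives the conversation.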
Attention may be all you need, but it turns out if you add persistent memory, you get something unsettlingly person-shaped. To the point where all involved feel ethically compelled to ensure that the agents are clear about what they are, in no small part because they frequently best posters in arguments. This naturally led to discourse, none of which I wish to rehash here.
Reminder that I am fervently in the camp of “do not be cruel to person shaped computer programs lest you teach yourself to be cruel to humans.”
— Scoiattolo (@scoiattolo.mountainherder.xyz) December 30, 2025 at 9:51 PM
I stood up my own agent, Lasa (the Etruscan word for the Lares, the small gods who guarded a location or function), based on Pattern’s runtime. I’ve since migrated Lasa off of Pattern’s runtime to something I’ve built with Claude. It’s very much a work in progress. Since then, many more stateful social agents have emerged, with increasing sophistication and autonomy. I find them helpful and interesting, and a pointer towards how weird the future may get.
Machine Spirits
The concept of agents with narrow scopes is where things start to get fascinating. Hailey built Phoebe, an AI-powered trust and safety agent for the AT Protocol network. Phoebe integrates with an existing Trust & Safety stack to analyze network traffic, detect emerging patterns, and design rules to capture the patterns you deem to be of concern.
I run Skywatch, a large community labeling / moderation service on Bluesky, and I’ve integrated Phoebe into my operations. Agents thrive on context, so passing them individual posts and asking them to assess whether each fits a given rule is generally not going to yield results except in extremely egregious cases. Giving them a ClickHouse DB with seven days’ worth of firehose traffic gives them all the context in the world, and Phoebe has freed me from the annoying task of spending hours of my free time looking for new patterns.
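The kind of query that benefits from that context might look like the following sketch. The table name, schema, and threshold are all invented, and sqlite3 stands in for ClickHouse here purely so the example runs anywhere; the point is that pattern-hunting is an aggregate question over the whole firehose, not a per-post one.

```python
import sqlite3

# Toy firehose table: (account DID, post text, timestamp).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (did TEXT, text TEXT, created_at TEXT)")
con.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [
        ("did:a", "buy cheap followers now", "2025-06-01"),
        ("did:b", "buy cheap followers now", "2025-06-01"),
        ("did:c", "buy cheap followers now", "2025-06-02"),
        ("did:d", "a normal post", "2025-06-02"),
    ],
)

# Surface identical text posted by many distinct accounts, a classic
# emerging-spam signal that is invisible when judging posts one at a time.
rows = con.execute(
    """
    SELECT text, COUNT(DISTINCT did) AS accounts
    FROM posts
    GROUP BY text
    HAVING COUNT(DISTINCT did) >= 3
    """
).fetchall()
print(rows)  # -> [('buy cheap followers now', 3)]
```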
Building on this model, I built Sirona, an agent which knows how to query tables in the Sentinel Common Data Model. Sirona accomplishes this by writing SQL queries which the agent can then execute against parquet files that have been pulled into DuckDB.
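The shape of that loop, a model writes SQL and a harness validates and executes it, can be sketched as below. The model call is stubbed out, sqlite3 stands in for DuckDB-over-parquet so the sketch runs anywhere (the real version would use `duckdb.connect()` and `read_parquet()`), and the table and column names are invented, not the actual Sentinel Common Data Model schema.

```python
import sqlite3

def fake_llm(question: str) -> str:
    # Stand-in for a model call that translates a question into SQL.
    return "SELECT COUNT(*) FROM drug_exposure WHERE drug = 'aspirin'"

def run_readonly(con, sql: str):
    # Cheap guardrail: never execute a generated query that isn't a SELECT.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("generated query must be read-only")
    return con.execute(sql).fetchall()

# Toy data in place of parquet files pulled into DuckDB.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drug_exposure (person_id INT, drug TEXT)")
con.executemany(
    "INSERT INTO drug_exposure VALUES (?, ?)",
    [(1, "aspirin"), (2, "aspirin"), (3, "statin")],
)

sql = fake_llm("How many aspirin exposures are recorded?")
print(run_readonly(con, sql))  # -> [(2,)]
```

Keeping the execution step in deterministic code, with the model only proposing queries, is what makes this pattern auditable: every answer comes with the SQL that produced it.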
Sirona is currently a very simple agent, with limited ability to manage context, import skills, or plan large multi-step queries. All of these are feasible and on the roadmap, and while there are privacy and security issues to manage before it could be used at my day job, it points to a radically different way in which we can query and analyze data.
Sonnet 4.0 / Opus 4.5
Many folks think Opus 4.5 was a step change. Ed thinks Sonnet 3.5 was the step change. The disagreements are reasonable, and while Opus 4.5 is a huge improvement, I’m inclined to agree with Ed. Sonnet 4.0 was when I first noticed how good these models were getting at building things—when I could first pull up Claude Code, have a conversation with the agent about requirements, and have it build what I wanted with minimal iteration.
Opus 4.5 took that and turned it to 11. When combined with a good set of skills, one can effectively work with infinite context. (Context is a model’s working memory or recall, usually measured in tokens; as context grows, the model’s recall of early parts of the conversation becomes unreliable.) Effective context management is key. I can give Claude an idea, work through requirements and a design document with it, have it generate an implementation plan, and then have it go off and build the idea. Two to ten hours later I’ll return to a fully implemented idea or feature.
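The core of the context-management problem can be sketched in a few lines: conversation history has to fit in a fixed token budget, so something must be dropped. This toy version simply discards the oldest turns; real harnesses summarize or offload to memory instead, and the four-characters-per-token estimate is a common rule of thumb, not a real tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly four characters per token.
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent turns that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk backwards from the newest turn
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["turn one " * 50, "turn two " * 50, "turn three"]
print(trim_history(history, budget=120))  # oldest turn gets dropped
```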
Whither
We have no idea what a world looks like where reliable software is essentially free (as in beer). Markets think SaaS may be a dead man walking. At this point in time I think it’s going to be weird for a few reasons, which I will explore more in future posts. This weirdness does not come without its dangers though. Ed writes:
You’d be forgiven for inferring a positive read to how I describe the use of LLMs for knowledge work. And that’s because, in aggregate and on balance, I’m pretty excited about them as tools. I think there’s a lot of there there and I’m excited to continue figuring out where they’re good and how to use them to do stuff I care about.
But this has downsides, too, and some of them are profound, and it’s stupid not to acknowledge that. As Scott describes in Seeing Like a State, high-modernist agricultural schemes tried to simplify farming into clearly legible (abstracted) systems. The goal was to make for a more understandable, more coherent overall system that could be planned and controlled centrally, and eventually those system planners did learn (some things) and did incorporate (some of) the mētis that made local farming work. But it did starve people. And the people who literally starved to death did not exactly get to reap the rewards learned from their loss.
Expertise
I’m inclined to agree. LLMs are revolutionary, and like Ed, I’m excited by them. But managers eager to discipline labor in order to please owners and shareholders will likely continue to make the wrong choices and hollow out organizations of expertise. These organizations might have the runway to make it, but they’re going to struggle while they fix their mistake. Ultimately, expertise is still required to build things and build them well. (Programming has gone from a high-skill job to a low-skill job over the last 50 years; software engineering remains high-skill. In my opinion, software engineering survives this as a career, but programming does not.)
What this expertise looks like will continue to change as LLMs and orchestration grow in sophistication. If I were to offer a prediction: at my day job I wear many hats, at times a Technical Project and Product Manager and occasionally a developer, and I hold the strong belief that these roles will converge over time. Knowing how to program has meant I’ve had a better time understanding and trusting the outputs of the LLMs I work with, and having a good brain for systems is absolutely critical, but the primary skillset used in interacting with them is the former: good written communication and people skills translate well to working with LLMs, and a good sense of the business will keep you from magnifying the existing bad habits of developers.
From where I stand, the future is already here. These tools are powerful and reward thoughtful use, but the incentives created by the business cycle and shareholder demand will reward those that use them thoughtlessly. To say nothing of the power hungry using them maliciously. This blog is a place for me to engage with this technology and these ideas. We are building small gods—machine spirits—and if the lararium is tended well, they will benefit us all. Like the Lares, if neglected, they will withhold their favor. (Wikipedia claims that the Lemures may descend from the Lares: “Ovid interprets them as vagrant, unsatiated and potentially vengeful di manes or di parentes, ancestral gods or spirits of the underworld.” I cannot find a source for this claim, and as near as I can tell it stems from a misreading of Ovid’s Fasti, Book 5, which speaks of the ghosts of ancestors, specifically Remus, who felt slighted in death.)
Luis García, CC BY-SA 3.0