looking at agentic programming tools
2026-04-21
It has been a crazy few years with the advent of LLMs: they have taken over everything from entire operating systems (Windows) and their stock applications (Edge, Paint, Notepad, Files) to seemingly every website you visit (GitHub, search engines, and so on).
Not even a year ago, I remember testing out an LLM as a pair programmer. It still was not very good. You could build some basic webpages or scripts with it, sure, but I wouldn't have used it in a large production codebase.
These days, things are changing. These "agents," as we are calling them, have gotten quite good, and many of them can be run locally. You don't need to be dependent on the cloud and OpenAI or Anthropic or whoever else anymore. If you have the right hardware, you can run local models in an agentic capacity and not have to touch any of the cloud APIs or big frontier models.
I have been messing around with some of the newer open-weights models through OpenCode, and I have to say, it has gotten quite good. Especially since I am the only real programmer on our team here (my wife contributes some HTML and CSS to a few of our projects), having one of these agents as a pair programmer is a genuine help. I can be working on one file and tell OpenCode to go work on another, and it doesn't take a single step without asking my permission first. I don't turn it loose on a codebase: it has to ask before making any edits, creating or deleting files, all that good stuff.
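For anyone curious how that permission-gating looks in practice, here is a sketch of an OpenCode config written from memory. The key names and accepted values ("ask" vs. "allow") are an assumption on my part and may differ between versions, so check the OpenCode docs before copying this:

```json
{
  "permission": {
    "edit": "ask",
    "bash": "ask"
  }
}
```

With something like this in place, the agent stops and asks before editing files or running shell commands, rather than acting on its own.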
Unlike the "vibe coders" of our current day, I am not trying to "oneshot" apps with agents. I use them to bounce ideas around and to write code side by side with me. I am not in a position to hire more people to join my team and build FOSS. I am, however, in a position to test out some of these local agents and use them to assist with our own projects, and I have been quite impressed as of late. I bring up OpenCode quite a bit while working on some of our bigger projects, and it has been a joy to work with.
Yeah, I know plenty of people have their issues with LLMs and "AI" in general. Some of them have good points; some are doomers repeating what they heard on social media about how AI is going to take over the world like something out of the Terminator films. The truth sits in a lot of grey area in the middle.
AI is certainly going to change a lot of things. It already has in the few years since ChatGPT exploded onto the scene. People sometimes ask me how I feel about AI, and my answer is always the same: my issue isn't with LLMs as a technology. In fact, I think it is a pretty cool technology with plenty of potential use-cases beyond the stupid things people use it for day to day.
No, my issue with most AI is the frontier models. I have serious doubts about the privacy of using these frontier models in the cloud: ChatGPT, Claude, all the offerings from the big providers. What I prefer to do instead is run local models through Ollama on my own machines. In fact, I have one machine in my office dedicated to running models, and I can SSH into it from other machines whenever I want to interact with them.
I have been running a customized instance of Gemma4 lately for conversations. For pair programming with the LLM, I like to use qwen3-coder, which has been a good model as well. Both of these are running on my own local hardware, and I can switch between them at any time, or even run them in parallel.
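If you want to script against a setup like this, Ollama exposes a small REST API on port 11434 by default, and an SSH tunnel makes the dedicated box look local. Here is a minimal Python sketch assuming that setup; "llm-box" is a hypothetical hostname, and the model names are just whatever tags you have pulled:

```python
import json
import urllib.request

# With the models on a dedicated machine, an SSH tunnel makes
# Ollama's API appear on localhost:
#   ssh -L 11434:localhost:11434 llm-box
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for Ollama's REST API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt and return the model's text response."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Switching models (or running them in parallel) is just a different
# "model" string per request, e.g.:
#   ask("qwen3-coder", "Explain what this shell pipeline does: ...")
```

Nothing fancy, but it is enough to swap between a conversation model and a coding model from the same script.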
LLMs might be the biggest breakthrough technology since the iPhone. I don't say that to be a hypeman; I'm not one. There is plenty about AI I don't like. I don't like that we are in the middle of an "AI arms race", or that we are in a bubble that will eventually have to pop. I don't like that some companies and people have latched onto "AI" as the new way to make money, just like blockchain and NFTs before it; if anything, that just shows you which companies and people in the space to avoid. None of that means, though, that there aren't real uses for this technology.
Are there some things being done with AI right now that shouldn't be? Sure. I think AI in art is pretty silly: AI music is bad, AI images are usually bad, the whole front is bad. That is why it is called "slop". But when you look past the silly and mundane things people use these models for, you can find some genuinely interesting use-cases.
When it comes to coding, I have finally found real assistance in these local models. OpenCode has been a lot of fun to keep open in one workspace on my window manager, whether I'm planning out a build of something with it or throwing around ideas about a project. I don't have many fellow programmers in my life, so it is nice to have something to "pair program" with: we throw ideas back and forth and I implement the good ones in the projects we're working on. It certainly speeds some things up, and if something needs to be done in a language I am not familiar with, I can have it come up with some ideas and then look them over to see whether they make sense.
I am very much in the middle of the AI debate. I am not in the "AI will save everyone" camp, and I am not in the "AI will kill everyone" camp either. LLMs have their uses; some of those uses are stupid, but some are quite good. Coding tools like OpenCode have gotten very good, and I enjoy running local models on my dedicated machine and tunneling into it to interface with them. I will certainly keep trying out new models, customizing them as we have been, and seeing how they perform against the "frontier models" out there on the internet that everyone else uses. It will be fun to watch how they continue to compare.
These are some of the main agentic tools I have been hearing about lately, though I only have experience with the first three (OpenCode, Qwen, Gemini). You can test them out if you want; for my part, I am interested in putting Hermes Agent on its own box sometime and seeing how it goes. I have a spare Raspberry Pi 5 with a good amount of memory, so it might be fun to deploy it there and see what it can do.
It might be news to some of my readers that I'm not completely against AI. My issues with AI are mainly privacy-related, and if everyone could run their own local models, that issue would largely be alleviated. I don't trust OpenAI or most of the other AI companies, to be sure, but I do trust things running on my own machine, just like normal software. Thankfully, we have plenty of open models, such as Gemma, Qwen, DeepSeek, and more, that we can build on top of and modify to suit whatever use-case we are working on.
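The "modify to suit whatever use-case" part is less exotic than it sounds. With Ollama, customizing an open model can be as simple as a Modelfile; here is a sketch, where the base tag and the persona text are placeholders for whatever you actually run:

```
# Modelfile: a customized assistant built on an open model.
# Substitute the FROM tag with whatever `ollama list` shows on your box.
FROM gemma3

# Bake in a system prompt so every session starts with the same persona.
SYSTEM """You are a concise assistant for a small FOSS team. Prefer short answers and plain language."""

# Lower the temperature for more predictable output.
PARAMETER temperature 0.6
```

Then `ollama create gemma-custom -f Modelfile` registers it, and `ollama run gemma-custom` talks to it like any other model.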
How do you feel about some of these local LLMs? Or if you use LLMs at all, do you simply use the frontier models such as Claude or ChatGPT? I am interested to see what you have to say! Let me know on the fediverse or over email, or hit me up on any of the platforms listed on my contact page. I know this is a hot topic, and everyone has different opinions on it, so I would love to know how you feel about it!