AI Agent with long-term memory and automatic skill learning - Shameless self-promotion

Hi everyone. I've tried several agents (opencode, claude, goose, bytebot), and by far I love hermes the most. So I forked it around April 9th and made it run natively on FreeBSD, with no Linux emulation. You just need rust and make (and optionally python-sqlite for the real benefit, long-term memory, plus xclip for clipboard access).

I'm growing it as two independent commands: hermes, which is just a hermes clone and should be script-compatible (if it isn't with the current hermes branch, please file a bug report!). If you're familiar with hermes, it should be the same out-of-the-box experience. And lycus, which is meant to be much more experimental. Right now it's pretty much the same as hermes, except it chooses a name for itself.

Advantages:
  • Runs natively on x86_64 and arm64, and should run on any POSIX system: any BSD, Linux, or macOS. It uses rust to create a venv through uv.
  • Runs fast and light: models run independently, either locally or through a provider. That's your freedom. You can run the agent on a single-board computer or tiny VM and the model elsewhere, sandboxing things.
  • Cron! Integrate smart tasks directly. Tell it "set up a schedule where you read the mail every 37 minutes and send me a report through telegram" and it will set up the cron job and do everything. If it did the job well, it saves a condensed version of what it did as a skill. Over time it grows a huge database of skills which far exceeds its context limit. When you ask for a similar task again, it pulls up that skill and already knows what you liked and what worked in that case.
  • Has really cool capabilities out of the box, like web search, integration with telegram messaging (send it images or messages, it replies), and so much more.
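To make the cron point concrete, here's roughly what a generated crontab entry could look like. This is purely illustrative: the lycus invocation, its arguments, and the log path are made up by me for the example, not the project's real CLI.

```
# every 37 minutes: check mail, condense, report via telegram
*/37 * * * * lycus run "check mail and send me a telegram report" >> /var/log/lycus-cron.log 2>&1
```

The interesting part is that the agent writes and maintains entries like this itself, then stores what worked as a skill.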
Caveats:
  • Tokens. This 🔥 burns tokens like crazy. Yes, you can connect to any provider, but I STRONGLY suggest you connect to a local one and serve yourself (lmstudio and ollama have been tested and work great). I'm pretty sure this would cost me about $1500/day/computer if I used claude's API, for example. If you have any computer where you already run an LLM, you can connect this agent to it. For example, if you have a system with an 8GB nvidia GPU, you could run qwen3.5-9b on it and your only cost would be electricity. The agent never has access to that system, just the model. I recommend lmstudio on the system you want to serve the model from, because it's really easy to set up. If you have a mac M1+ with a lot of RAM, you're in luck: keep your OS, just serve the model over the network to your rpi or server or VM with freebsd.
  • It's not the safest ⚠️⚠️⚠️, so it's best to run it in a VM or on a stand-alone computer (even a single-board computer; it's very light). It can recklessly try to delete files or run commands that break things. It will overwrite files that already exist and destroy configs. It depends on how smart your model is, but even the best models hallucinate and make mistakes from time to time. Setting up snapshots is a good idea.
  • It's LGPL 2.1, which I hope won't get me banned from this forum. I have my reasons, mainly that this is very dangerous tech if misrepresented. I really don't want to start a war over this; it is what it is for now. I'm saying it because I'm all about full disclosure.
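To put the token-burn caveat in numbers, here's a back-of-envelope sketch. The token counts and per-million prices below are illustrative assumptions I picked to show the shape of the math, not measured figures from Autolycus or any provider's current price list:

```python
# Back-of-envelope daily cost of an always-on agent.
# All concrete numbers below are assumptions for illustration only.
def daily_cost_usd(input_tokens: int, output_tokens: int,
                   in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of one day of traffic at per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Assume ~400M input / ~20M output tokens a day at hypothetical
# frontier-API prices of $3/M in and $15/M out:
cost = daily_cost_usd(400_000_000, 20_000_000, 3.0, 15.0)
print(f"${cost:,.0f}/day")  # → $1,500/day
```

An agent on cron schedules never sleeps, which is why a local model (where the marginal token is free) changes the economics completely.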
Reasons this might not be vaporware:
  • About a week ago I was 3700 commits behind (yes, hermes makes over 100 commits a day), and for the past 72 hours I've been consistently between 1 and 10 commits behind at any point, so I'm winning. That's why I'm confident enough to post this here: I think I can keep this up. I think I can stay up to date with a major functioning product and even improve it, while evolving a completely ambitious lycus branch. I'm confident that if I take a week off and come back 1200 commits behind upstream, I can catch up.
Links and stuff: the project's name is Autolycus, and it's available here: https://github.com/waym0reom3ga/autolycus-agent . Note that while the current branch should work (it even has a fancy auto-install script that requires bash for now), release v0.0.4 is the current "stable" release.
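If you take the local-model route from the caveats above, the serving side plus the snapshot safety net are only a few commands. This is a sketch of one possible setup using stock ollama and ZFS, not the project's documented procedure; the hostname, port, model tag, and dataset name are placeholders to adjust:

```
# On the GPU box: expose ollama on the LAN instead of localhost only,
# then pull a model that fits your VRAM (tag is an example).
OLLAMA_HOST=0.0.0.0:11434 ollama serve &
ollama pull qwen2.5:7b

# From the FreeBSD box running the agent: sanity-check the endpoint.
curl http://gpu-box.lan:11434/api/tags

# Before letting the agent loose, snapshot the dataset it will touch...
zfs snapshot zroot/usr/home@before-agent
# ...and roll back if it wrecks something:
# zfs rollback zroot/usr/home@before-agent
```

The point of the split: the agent box can be a throwaway VM, while the expensive GPU machine only ever sees model requests.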

I'm not looking for suggestions right now (I can barely keep up), but if you find a bug I'd love to know! File an issue on GitHub and I'll see if my system self-repairs it.

Final thoughts: I could go on and on forever. Feel free to ask me anything.
 