LocalAGI allows you to create “AI agents” that run completely on your hardware and can access your systems to perform tasks. It’s Open Source, allowing you to self-host, modify and audit it. You don’t need an internet connection to run it and none of your data leaves your hardware unless you want it to.
It comes with a web UI for configuring and chatting with AI agents. You can also configure connectors, for instance Slack, and then interact with agents there in much the same way as with a regular user. If you don’t trust Slack, you can use a locally hosted IRC server; and if there is another platform you’d prefer to communicate with, just let us know.
There are built-in actions for things like search, posting to social media and creating GitHub issues. So, for example, you can ask an agent on Slack to summarize discussions and create GitHub issues from them.
To give a more specific case: we had a long discussion on Slack about loop detection in agent planning. We asked an agent to create a GitHub issue from this discussion, and the result was a nicely formatted issue with acceptance criteria.
The agent can also enrich the ticket with background information from an internet search and break the work down into sub-tasks. The result isn’t always exactly what we’d want, but there is a kernel of something very useful here to build on.
To create the issue we didn’t need to manually browse to GitHub or integrate Slack with GitHub; the agent handles that, and the workflow can be transported to other chat and issue-tracking software, let’s say Matrix and GitLab. Once we have added the relevant “connectors”, for you it is just a case of entering the connection details.
The ability to search the internet is just one kind of research the agent can do. Another: it automatically checks for similar existing issues using semantic search and links to them. It could also search the code, ping people involved in previous similar discussions, and so on. We don’t quite have all of the pieces working in harmony yet, but the foundations are set.
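As a rough sketch of how that duplicate check might work, here is a toy semantic search over existing issues. The bag-of-words “embedding” below stands in for a real sentence-embedding model (which you would serve locally), and the issue data is invented:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding". A real setup would call a locally
    # hosted sentence-embedding model instead of counting words.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similar_issues(new_issue, existing, threshold=0.4):
    # Rank existing issues by similarity to the new one and keep
    # anything above the threshold as a candidate duplicate.
    q = embed(new_issue)
    scored = [(cosine(q, embed(body)), title) for title, body in existing]
    return [title for score, title in sorted(scored, reverse=True)
            if score >= threshold]

issues = [
    ("Loop detection in planner", "agent planning loops forever detect loop"),
    ("Add GitLab connector", "support gitlab as issue tracker connector"),
]
print(similar_issues("detect infinite loop in agent planning", issues))
# → ['Loop detection in planner']
```

With a proper embedding model the same shape of code finds paraphrased duplicates that keyword search would miss, which is the point of doing it semantically.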
As long as an agent has the tools and connector configurations necessary, it can include these in a workflow. The agents know what tools they have available to them and can create a plan to use the tools together to achieve a goal.
You can also provide agents with context-dependent instructions (dynamic prompts), so that when a particular situation arises the agent has a playbook to follow. So if someone on Slack asks it to create an issue, it is given the instructions for exactly that case. There is also a knowledge base from which agents can search and retrieve instructions.
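The idea can be sketched as a lookup from context to playbook. The structure and playbook text below are purely illustrative, not LocalAGI’s actual configuration format:

```python
# Hypothetical playbook table keyed by (connector, intent).
# Names and steps are illustrative only.
PLAYBOOKS = {
    ("slack", "create_issue"): (
        "1. Summarise the thread.\n"
        "2. Draft a title and acceptance criteria.\n"
        "3. Create the issue via the GitHub action.\n"
        "4. Reply in the thread with the issue link."
    ),
    ("slack", "default"): "Answer briefly and link to sources.",
}

def instructions_for(connector, intent):
    # Fall back to the connector's default playbook when there is no
    # intent-specific procedure; empty string means "no playbook".
    return (PLAYBOOKS.get((connector, intent))
            or PLAYBOOKS.get((connector, "default"), ""))
```

The same lookup could be backed by the knowledge base instead of a static table, so procedures can be edited without redeploying anything.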
Essentially you can create standard operating procedures for your AI, using natural language in a similar fashion to how you would with human agents. Creating an issue from a discussion saves a bit of time, but running a wide set of validation checks on every issue can save a lot of time. And while LLMs sometimes do the wrong thing, they don’t get bored of following procedures, so they can be given the least interesting work without fear of upsetting them.
It’s multi-modal, so you can give it pictures, which it will describe or use as a reference to generate new images. We use this to automatically create avatars for agents, but there are potentially more serious uses. For instance, let’s say a customer emails a picture of a faulty product: the agent can use image recognition to describe the state of the product, check whether the serial number is visible, and suggest a response.
You can create groups of agents with different personas and capabilities. For example you could have one agent which is configured to be good at reasoning and planning. Then another which is configured with the model and persona to create posts for X and another specifically for critiquing the posts.
The planning and reasoning agent can create a high level plan for converting some promotional material into a series of posts. It can then call on the creative agent a number of times to create the posts and the critique agent to check each result.
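The orchestration pattern above can be sketched with stub functions standing in for the three agents; in LocalAGI each role would be a separately configured agent rather than a Python function, and the stub logic here is invented:

```python
# Planner/creator/critic pipeline with stub agents (illustrative only).
def planner(material):
    # Break promotional material into post-sized topics (stubbed:
    # one topic per non-empty line).
    return [f"Post about: {line}" for line in material.splitlines() if line]

def creator(topic):
    # The "creative" agent drafts a post for the topic.
    return f"{topic} #LocalAGI"

def critic(post, max_len=280):
    # The critique agent accepts or rejects a draft,
    # e.g. enforcing the X character limit.
    return len(post) <= max_len

def campaign(material, retries=2):
    posts = []
    for topic in planner(material):
        for _ in range(retries + 1):
            draft = creator(topic)
            if critic(draft):
                posts.append(draft)
                break  # accepted; move to the next topic
    return posts
```

The loop structure is the interesting part: the planner decides *what* to produce, and each draft is only published once the critic signs off, with a bounded number of retries.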
This allows you to organise and think about your agents in a natural way: essentially as a team of humans, each with different capabilities, personalities and authorisations. So if you want to write some code, let’s say a custom action module for LocalAGI that lets agents get the weather forecast from the UK Met Office, you ask the agent which is configured to write code for LocalAGI custom actions.
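For illustration, here is roughly what such a custom action could look like. The interface is hypothetical (LocalAGI’s real custom-action API may differ), and the forecast data is canned rather than fetched from the Met Office:

```python
# Hypothetical custom action: the spec declares a tool the agent can
# plan with; the function is what runs when the agent invokes it.
import json

ACTION_SPEC = {
    "name": "uk_weather_forecast",
    "description": "Get the UK Met Office forecast for a location",
    "parameters": {"location": {"type": "string"}},
}

def weather_action(params):
    """Return a forecast for a UK location. Canned data here; a real
    action would call the Met Office's API with an API key."""
    location = params.get("location", "London")
    return json.dumps({"location": location,
                       "forecast": "light rain", "temp_c": 11})
```

The split between a declarative spec (so the LLM knows the tool exists and how to call it) and an executable function is the common shape for tool/function calling, whatever the exact plugin interface turns out to be.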
You don’t have to worry about your coding agent randomly posting to LinkedIn if it hasn’t been configured with that capability. Meanwhile, the agent which does have that ability could be set up as a gatekeeper: instructed not to produce content itself, but only to handle requests to post content. Before posting, it reviews each request against some criteria and may reject the post.

Local First
All of this can be done locally on hardware that at least small and medium businesses can afford. In fact, I run it on a $300 GPU from Intel, which is good enough for experimentation, and I suspect that with tuning and refinement it would be good enough for a number of production use cases.
This means you won’t have to hand over all of your data to the major AI service providers, nor will all of your systems stop when the internet goes down.
However, you can still use external LLM providers, and it is possible to mix local and remote providers, so that one agent uses a remote provider while others use local ones.
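One way to picture this is a per-agent routing table over OpenAI-compatible endpoints (a locally hosted server can expose the same API shape as a hosted provider). The agent names, models and URLs below are illustrative, not LocalAGI’s configuration schema:

```python
# Illustrative per-agent provider routing.
PROVIDERS = {
    "local":  {"base_url": "http://localhost:8080/v1",
               "model": "qwen2.5-14b"},
    "remote": {"base_url": "https://api.openai.com/v1",
               "model": "gpt-4o"},
}

AGENTS = {
    "planner": "remote",  # heavier reasoning on a hosted model
    "coder":   "local",   # code never leaves your own hardware
    "social":  "local",
}

def provider_for(agent):
    # Resolve which endpoint and model a given agent should talk to.
    return PROVIDERS[AGENTS[agent]]
```

Because both endpoints speak the same API, switching an agent between local and remote is a one-line config change rather than a code change.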
For the adventurous, LocalAGI is ready today; if you are not quite ready to dive in, please still head over to GitHub and star it. Follow Ettore Di Giacinto and myself for development updates. Between starting this article and releasing it, Ettore had already implemented a browser operator and a coding agent, so things are moving fast.