May 5th, 2024
Hiveminds: Modelling and Pondering
im doing something actually interesting this time
Hiveminds are a common cultural phenomenon seen primarily in sci-fi, and are conceptually pretty awesome. However, I've always wondered about which of the forms of collective intelligence are most effective at ruling. I've always said that "twitch chat is the worst type of hivemind," but I've never actually theoretically verified that statement. So what I'll do here is try to model hiveminds, and maybe give some commentary on the moral/political value of hiveminds at the end.
Formalizing
buzz buzz (earthbound reference???)
Mathematically, we can think about a hivemind as a network (or set) of state machines that each take input from some environment and from some arbitrary number of other state machines. These state machines each have an output goal g that they will try to reach (whether it be making it to a certain position or collecting a certain volume of water, or whatever combination of those that you wish). g should contain some "metadata" about how to achieve g, such as the energy required for achieving g.
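To make that a little more concrete, here's a tiny sketch of the framing (the class names, states, and goals are just for illustration, not any real framework):

```python
# Toy sketch of the "network of state machines" framing.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    description: str
    energy_required: float  # the "metadata" about how costly g is to achieve

@dataclass
class Agent:
    state: str
    goal: Goal
    neighbours: List["Agent"] = field(default_factory=list)  # other machines it listens to

    def transition(self, environment_input: str) -> None:
        # Each agent picks its next state from the environment plus
        # whatever its neighbours are currently doing.
        if environment_input == "threat":
            self.state = "flee"
        elif any(n.state == "flee" for n in self.neighbours):
            self.state = "flee"   # the reaction propagates through the network
        else:
            self.state = "work"

# Two agents wired together: when one flees, the other follows on its next tick.
a = Agent("work", Goal("collect water", energy_required=1.0))
b = Agent("work", Goal("collect water", energy_required=1.0), neighbours=[a])
a.transition("threat")
b.transition("calm")
print(a.state, b.state)  # flee flee
```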
Philosophically, we can think about a hivemind as any network of minds that have a connection between them and form a collective intelligence (that is, an agent that works to achieve a goal that results from the combined efforts of several agents).
Scientifically, we can think about a hivemind as an entity capable of creating some productivity measure P at the cost of some total amount of energy E. What's interesting is that, for some hiveminds, the resources collected that contribute towards P will also contribute towards E (the energy the hive has available to spend), thus creating a positive feedback loop.
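As a toy illustration of that loop (the constants are made up; the point is just that the hive grows whenever a unit of productivity returns more energy than it cost):

```python
# Toy positive-feedback loop between productivity P and energy E.
P_PER_ENERGY = 0.5   # productivity achievable per unit of energy spent
RETURN_RATE = 3.0    # energy recovered per unit of productivity
E = 10.0             # starting energy budget

for tick in range(5):
    P = E * P_PER_ENERGY   # more energy available => more productivity
    E = P * RETURN_RATE    # collected resources become next tick's energy
    print(f"tick {tick}: P={P:.2f}, E={E:.2f}")

# The hive grows whenever P_PER_ENERGY * RETURN_RATE > 1.
```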
The 3 types of hiveminds
legend of zorlda
- Swarms
- Democracies
- Overminds
Swarms
The quintessential example of a swarm hivemind is that of a... swarm. If we look at a boid swarm, for example, we see how collective action and a shared goal can emerge from just 3 simple rules (separation, alignment, and cohesion).
To see this for myself, I decided to use the previously linked tutorial as a guideline for making this: a simple boid simulation with a predator that tracks the center of mass of the boid swarm. The code is yours for the forking if you want to modify it/tune some constants. The result I wanted was for the overall swarm to collectively avoid the predator whenever any boid near the predator moved away from it. This works because each boid reacts to the movement of the boids around it, creating a collective reaction to the predator's encroachment. Because the agents in a boid swarm are so simple, they do not produce incredibly complex results. However, if we create an agent with more states and transitions between those states, we can create a collective that is capable of performing more complex tasks.
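Roughly, a single update step looks something like this (a minimal sketch, not my actual simulation; the constants and helper names are just for illustration):

```python
# Minimal boid-step sketch: the three classic rules plus a "flee the predator" term.
import numpy as np

NUM_BOIDS = 50
VISUAL_RANGE = 50.0      # how far a boid "sees" its neighbours
SEPARATION_DIST = 8.0    # minimum comfortable distance
COHESION_FACTOR = 0.005
ALIGNMENT_FACTOR = 0.05
SEPARATION_FACTOR = 0.05
PREDATOR_FACTOR = 0.1
MAX_SPEED = 4.0

def step(positions, velocities, predator_pos):
    """Advance every boid one tick."""
    new_velocities = velocities.copy()
    for i in range(NUM_BOIDS):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists < VISUAL_RANGE) & (dists > 0)

        if neighbours.any():
            # Cohesion: steer towards the local centre of mass.
            centre = positions[neighbours].mean(axis=0)
            new_velocities[i] += (centre - positions[i]) * COHESION_FACTOR
            # Alignment: match the average velocity of nearby boids.
            avg_vel = velocities[neighbours].mean(axis=0)
            new_velocities[i] += (avg_vel - velocities[i]) * ALIGNMENT_FACTOR

        # Separation: push away from boids that are too close.
        too_close = (dists < SEPARATION_DIST) & (dists > 0)
        if too_close.any():
            new_velocities[i] -= offsets[too_close].sum(axis=0) * SEPARATION_FACTOR

        # Predator avoidance: flee if the predator is within visual range.
        to_predator = predator_pos - positions[i]
        if np.linalg.norm(to_predator) < VISUAL_RANGE:
            new_velocities[i] -= to_predator * PREDATOR_FACTOR

        # Clamp speed so boids don't accelerate forever.
        speed = np.linalg.norm(new_velocities[i])
        if speed > MAX_SPEED:
            new_velocities[i] *= MAX_SPEED / speed

    return positions + new_velocities, new_velocities

# The predator just chases the swarm's centre of mass each tick, e.g.:
# predator_pos += (positions.mean(axis=0) - predator_pos) * 0.02
```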
It is my opinion that this type of hivemind is probably most efficient at large sizes. This will be discussed more in a later section, but long story short, communication between two agents is fastest in a swarm hivemind: information propagates at the speed of reaction, so it reaches any given location in roughly the shortest amount of time that information moving at that speed can.