ecs: major rethink & database-aligned design #157

Merged
emidoots merged 1 commit from sg/ecs-take-2 into main 2022-01-28 05:54:26 +00:00
emidoots commented 2022-01-27 05:27:09 +00:00 (Migrated from github.com)

I promised the next blog post in the series would be code, not verbal explanation like this, so I'll likely allude to all of this (and link to this explanation) for how my thinking has developed since the last article. Then it will focus on the code here.

Limitations of our ECS

Previously, we had thought about our ECS in terms of archetypes defined at compile time (effectively arrays of archetype structs, with comptime-defined fields as components). I believe this is likely the most efficient way one could ever represent entities. However, it comes with significant limitations, namely:

You have to define which components your entity will have at compile time: with our implementation, adding or removing components on an entity at runtime was not possible (although declaring components at comptime whose values were optional at runtime was). This conflicts with some goals that we have:

  • The ability to add/remove components at runtime:
    • In an editor for the game engine, e.g. adding a Physics component or similar to see how it behaves.
    • In a code file as part of Zig hot code swapping in the future, adding an arbitrary component to an entity while your game is running.
    • In more obscure cases: adding components at runtime as part of loading a config file, in response to network operations, etc.

Investigating sparse sets

To find the best way to solve this, I began investigating sparse sets, which I saw mentioned in various contexts around ECS implementations. My understanding is that many ECS implementations use sparse sets to store the relation between an entity ID and the dense arrays of components associated with it. Sparse sets often imply storing components as distinct dense arrays (e.g. an array of physics component values, an array of weapon component values, etc.) and then using the sparse set to map entity IDs -> indexes within those dense component arrays: weapon_components[weapons_sparse_set[entityID]] is effectively how you look up an entity's weapon component value, because not every entity is guaranteed to have the same components, and so weapon_components[entityID] is not possible.
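The double lookup can be sketched in Zig (names like SparseSet and Weapon are illustrative, not taken from any particular implementation):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

const SparseSet = struct {
    /// sparse[entity_id] is an index into the dense component array,
    /// or null if the entity has no such component.
    sparse: []const ?u32,

    fn denseIndex(self: SparseSet, entity_id: u32) ?u32 {
        return self.sparse[entity_id];
    }
};

// weapon_components[weapons_sparse_set[entityID]], spelled out:
fn lookupWeapon(set: SparseSet, weapons: []const Weapon, entity_id: u32) ?Weapon {
    const i = set.denseIndex(entity_id) orelse return null;
    return weapons[i];
}

test "two-array lookup" {
    const set = SparseSet{ .sparse = &.{ null, 0 } };
    const weapons = [_]Weapon{.{ .damage = 7 }};
    try std.testing.expect(lookupWeapon(set, &weapons, 0) == null);
    try std.testing.expectEqual(@as(u32, 7), lookupWeapon(set, &weapons, 1).?.damage);
}
```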

This of course introduces overhead: not only are two array lookups needed to reach a component's value, but you may now be accessing weapon_components values non-sequentially, which can easily introduce CPU cache misses. And so I began to think about how to reconcile the comptime-component-definition archetype approach I had written before with the sparse set approach that seems to be popular among other ECS implementations.

Thinking in terms of databases

What helped me was thinking about an ECS in terms of databases, where tables represent a rather arbitrary "type" of entity, rows represent entities (of that type) themselves, and the columns represent component values. This makes a lot of sense to me, and can be implemented at runtime easily to allow adding/removing "columns" (components) to an entity.

The drawback of this database model made the benefit of sparse sets obvious: If I have a table representing monster entities, and add a Weapon component to one monster - every monster must now pay the cost of storing such a component as we've introduced a column, whether they intend to store a value there or not. In this context, having a way to separately store components and associate them with an entity via a sparse set is nice: you pay a bit more to iterate over such components (because they are not stored as dense arrays), but you only pay the cost of storing them for entities that actually intend to use them. In fact, iteration could be faster due to not having to skip over "empty" column values.

So this was the approach I implemented here:

  • Entities is a database of tables.

    • It's a hashmap of table names (entity type names) to tables (EntityTypeStorage).
    • An "entity type" is some arbitrary type of entity likely to have the same components. It's optimized for that. But unlike an "archetype", adding/removing components does not change the type - it just adds/removes a column (array) of data.
    • You would use just one set of these for any entities that would pass through the same system. e.g. one of these for all 3D objects, one for all 2D objects, one for UI components. Or one for all three.
  • EntityTypeStorage is a table, whose rows are entities and columns are components.

    • It's a hashmap of component names -> ComponentStorage(T)
    • Adding/removing a component is as simple as adding/removing a hashmap entry.
  • ComponentStorage(T) is one of two things:

    • (default) a dense array of component values, making it quite optimal for iterating over.
    • (optional) a sparsely stored map of (row ID) -> (component value).
  • EntityID thus becomes a simple 32-bit row ID + a 16-bit table ID, and it's globally unique within a set of Entities.

    • Also enables O(1) entity ID lookups, effectively entities.tables[tableID].rows[rowID]
  • Note: When I say "hashmap" above I really mean a Zig array hashmap, which appears to be quite similar to a sparse set and mostly optimal for smaller hashmaps from what I have found.
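The structure above can be sketched at the type level; this is a hedged sketch matching the description, not the exact code in this PR (in particular, type erasure details are glossed over as ErasedColumn):

```zig
const std = @import("std");

/// Stand-in for how a ComponentStorage(T) would be held behind one map type.
const ErasedColumn = *anyopaque;

fn ComponentStorage(comptime T: type) type {
    return union(enum) {
        /// (default) dense array of values, one slot per row.
        dense: std.ArrayListUnmanaged(?T),
        /// (optional) sparse map of row ID -> value.
        sparse: std.AutoArrayHashMapUnmanaged(u32, T),
    };
}

/// A table: rows are entities, columns are components.
const EntityTypeStorage = struct {
    /// component name -> ComponentStorage(T), type-erased.
    columns: std.StringArrayHashMapUnmanaged(ErasedColumn) = .{},
    row_count: u32 = 0,
};

/// The database: table name (entity type name) -> table.
const Entities = struct {
    tables: std.StringArrayHashMapUnmanaged(EntityTypeStorage) = .{},
};

/// 16-bit table ID + 32-bit row ID; O(1) lookup is effectively
/// entities.tables[tableID].rows[rowID].
const EntityID = packed struct { table: u16, row: u32 };

test "EntityID is a compact 48-bit handle" {
    try std.testing.expect(@bitSizeOf(EntityID) == 48);
}
```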

Benefits

Faster "give me all entities with components (T, U, V)" queries

One nice thing about this approach compared to other ECS implementations, I think, is that to answer a query like "give me all entities with a 'weapon' component", we can reduce the search space dramatically right off the bat thanks to entity types: an EntityTypeStorage has fast access to the set of components all entities within it may have set. Now, not all of them will have such a component, but most of them will. We just "know" that without doing any computation; our data is structured to hint this to us. And this makes sense logically, because most entities are similar: buttons, ogre monsters, players, etc. are often minor variations of something, not a truly unique type of entity with 100% random components.
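A sketch of the pruning this enables, assuming a hypothetical Table type that exposes its component-name set:

```zig
const std = @import("std");

// Illustrative only: a table knows which components its entities *may*
// have, so whole tables can be skipped without touching any entity.
const Table = struct {
    name: []const u8,
    component_names: []const []const u8,

    fn mayHave(self: Table, component: []const u8) bool {
        for (self.component_names) |c| {
            if (std.mem.eql(u8, c, component)) return true;
        }
        return false;
    }
};

test "prune tables by component set" {
    const monsters = Table{ .name = "monster", .component_names = &.{ "position", "weapon" } };
    const buttons = Table{ .name = "button", .component_names = &.{ "position", "label" } };
    // The "monster" table is a candidate for a weapon query; "button" is
    // ruled out entirely, no per-entity work required.
    try std.testing.expect(monsters.mayHave("weapon"));
    try std.testing.expect(!buttons.mayHave("weapon"));
}
```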

Shared component values

In addition to having sparse storage for entity ID -> component value relations, we can also offer a third type of storage: shared storage. Because we allow the user to arbitrarily define entity types, we can offer to store components at the entity type (table) level: pay to store the component only once, not per-entity. This seems quite useful (and perhaps even unique to our ECS? I'd be curious to hear if others offer this!)

For example, if you want all entities of type "monster" to share the same Renderer component value, we simply elevate the storage of that component value to the EntityTypeStorage / the table itself, not a column or sparse relation. This is a mere component name -> component value map. There is no entity ID -> component value relationship involved here; we just "know" that every entity of the "monster" entity type has that component value.
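One plausible lookup order when both storages exist (an assumption for illustration, not necessarily how the PR resolves it): prefer a per-entity value if present, otherwise fall back to the table-level shared value:

```zig
const std = @import("std");

fn ColumnOrShared(comptime T: type) type {
    return struct {
        per_entity: []const ?T, // indexed by row ID; null = no per-entity value
        shared: ?T, // stored once for the whole table

        fn get(self: @This(), row: u32) ?T {
            if (self.per_entity[row]) |v| return v;
            return self.shared;
        }
    };
}

test "shared component value" {
    const renderers = ColumnOrShared(u8){ .per_entity = &.{ null, 5 }, .shared = 9 };
    try std.testing.expectEqual(@as(?u8, 9), renderers.get(0)); // falls back to shared
    try std.testing.expectEqual(@as(?u8, 5), renderers.get(1)); // per-entity wins
}
```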

Runtime/editor introspection

This is not a benefit of thinking in terms of databases, but this implementation opens the possibility for runtime (future editor) manipulation & introspection:

  • Adding/removing components to an entity at runtime
  • Iterating all entity types within a world
    • Iterating all entities of a given type
      • Iterating all possibly-stored components for entities of this type
      • Iterating all entities of this type
        • Iterating all components of this entity (future)
  • Converting from sparse -> dense storage at runtime
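The last point, converting sparse -> dense storage at runtime, could look roughly like this (hypothetical helper, not the PR's API): rebuild a full column from a row ID -> value map once a component becomes common enough to justify dense iteration.

```zig
const std = @import("std");

fn sparseToDense(
    gpa: std.mem.Allocator,
    sparse: *std.AutoArrayHashMapUnmanaged(u32, u16),
    row_count: u32,
) ![]?u16 {
    // Allocate one slot per row, defaulting to "no value".
    const dense = try gpa.alloc(?u16, row_count);
    @memset(dense, null);
    // Scatter the sparse entries into their row positions.
    var it = sparse.iterator();
    while (it.next()) |entry| {
        dense[entry.key_ptr.*] = entry.value_ptr.*;
    }
    return dense;
}

test "sparse to dense" {
    const gpa = std.testing.allocator;
    var sparse = std.AutoArrayHashMapUnmanaged(u32, u16){};
    defer sparse.deinit(gpa);
    try sparse.put(gpa, 2, 7);
    const dense = try sparseToDense(gpa, &sparse, 4);
    defer gpa.free(dense);
    try std.testing.expectEqual(@as(?u16, 7), dense[2]);
    try std.testing.expect(dense[0] == null);
}
```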

A note about Bevy/EnTT

After writing this, and the above commit message, I got curious how Bevy/EnTT handle this. Do they do something similar?

I found Bevy has hybrid component storage (pick between dense and sparse, https://bevyengine.org/news/bevy-0-5/#hybrid-component-storage-the-solution), which appears to be more clearly specified in this linked PR (https://github.com/bevyengine/bevy/pull/1525), which also indicates:

hecs, legion, flec, and Unity DOTS are all "archetypal ecs-es".
Shipyard and EnTT are "sparse set ecs-es".

Is our archetypal memory layout better than other ECS implementations?

One notable difference is that Bevy states about Archetypal ECS:

Comes at the cost of more expensive add/remove operations for an Entity's components, because all components need to be copied to the new archetype's "table"

Update: see https://github.com/hexops/mach/pull/157#issuecomment-1022916117

I've seen this stated elsewhere, outside of Bevy, too. I've had folks tell me that archetypal ECS implementations use an AoS memory layout in order to make iteration faster (where A, B, and C are component values):

ABCABCABCABC

I have no doubt a sparse set is worse for iteration, as it involves accessing non-sequentially into the underlying dense arrays of the sparse set (from what I understand.) However, I find the archetypal storage pattern most have settled on (AoS memory layout) to be a strange choice. The other choice is an SoA memory layout:

AAAA
BBBB
CCCC

My understanding from data-oriented design (primarily from Andrew Kelley's talk) is that due to struct padding and alignment, SoA is in fact better: it reduces the size of the data (by up to nearly half, IIRC), which ensures more of it actually ends up in CPU cache despite accessing distinct arrays (which apparently CPUs are quite efficient at).
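The padding effect is easy to demonstrate with @sizeOf; this sketch uses a made-up two-field component:

```zig
const std = @import("std");

// A small component mixing an 8-byte-aligned field with a 1-byte field.
const Component = struct {
    position: f64, // 8 bytes, 8-byte aligned
    alive: bool, // 1 byte, padded out to the struct's alignment
};

test "AoS pays padding per element, SoA does not" {
    // AoS: every element is padded to 16 bytes.
    try std.testing.expectEqual(@as(usize, 16), @sizeOf(Component));
    const aos_bytes = 1000 * @sizeOf(Component); // 16,000 bytes

    // SoA: one f64 array + one bool array; padding is not paid per entity.
    const soa_bytes = 1000 * @sizeOf(f64) + 1000 * @sizeOf(bool); // 9,000 bytes
    try std.testing.expect(soa_bytes < aos_bytes);
}
```

Zig's std.MultiArrayList automates exactly this SoA transform, which is what Andrew Kelley's talk describes.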

Obviously, I have no benchmarks, and so making such a claim is super naive. However, if true, it means that our memory layout is not just more CPU cache efficient but also largely eliminates the typically increased cost of adding/removing components with archetypal storage: others pay to copy every single entity when adding/removing a component, we don't. We only pay to allocate space for the new component. We don't pay to copy anything. Of course, in our case adding/removing a component to sparse storage is still cheaper: effectively a hashmap insert for affected entities only, rather than allocating an entire array of size len(entities).

An additional advantage of this is that even when iterating over every entity, your intent is often not to access every component. For example, a physics system may access multiple components but will not be interested in rendering/game-logic components; in an AoS layout those would "push" data we care about out of the limited cache space.

I'm poking Bevy ECS authors about this to see how they think about this: https://discord.com/channels/691052431525675048/742569353878437978/936125050095034418

Future

Major things still not implemented here include:

  • Multi-threading
  • Querying, iterating
  • "Indexes"
    • Graph relations index: e.g. parent-child entity relations for a DOM / UI / scene graph.
    • Spatial index: "give me all entities within 5 units distance from (x, y, z)"
    • Generic index: "give me all entities where arbitraryFunction(e) returns true"

Signed-off-by: Stephen Gutekanst stephen@hexops.com

  • By selecting this checkbox, I agree to license my contributions to this project under the license(s) described in the LICENSE file, and I have the right to do so or have received permission to do so by an employer or client I am producing work for whom has this right.
emidoots commented 2022-01-27 07:19:22 +00:00 (Migrated from github.com)

Is our archetypal memory layout better than other ECS implementations?

And the answer is: Bevy does NOT use an AoS memory layout. The difference between what they're doing and what I've done here is as follows:

  • Bevy: When you add a component C to an entity which currently has (A, B), that entity "moves" from the old (A, B) archetype table to the new archetype table (A, B, C). If you plan to add a component to 1,000 entities of the same archetype, that would involve copying 1,000 entities from the old to new table. Their table has Vector<A>, Vector<B>, and Vector<C> distinct vectors effectively (SoA).
  • What I did here:
    • We represent columns as distinct vectors Vector<?T> and currently pay the cost of that optional bit (but I had plans to remove this with a bitmask)
    • When you add a new component to an entity in ours, the entity doesn't move from one table to another. Instead, the table itself adds a new column (if it didn't already exist) in preparation of the other entities of this type also having that component.
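A minimal sketch of the add-a-column behavior described above (hypothetical Column type; the real code differs): existing entities stay put, and the table just gains one new array.

```zig
const std = @import("std");

// A column is a dense array of optional values, one slot per row.
fn Column(comptime T: type) type {
    return std.ArrayListUnmanaged(?T);
}

test "adding a column does not copy existing entities" {
    const gpa = std.testing.allocator;

    var healths = Column(u32){};
    defer healths.deinit(gpa);
    try healths.appendNTimes(gpa, 42, 1000); // 1000 existing entities

    // Add a Weapon column: allocate len(entities) null slots. No entity
    // moves between tables; no existing column is touched.
    var weapons = Column(u16){};
    defer weapons.deinit(gpa);
    try weapons.appendNTimes(gpa, null, healths.items.len);

    // One entity gets a weapon; the other 999 paid only the null slot.
    weapons.items[3] = 7;
    try std.testing.expectEqual(@as(usize, 1000), weapons.items.len);
    try std.testing.expect(weapons.items[4] == null);
}
```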

The tradeoff seems to mostly be about complexity (Bevy's approach seems simpler), the cost of adding a component to a few entities vs. thousands (memcpy vs. alloc), and the potential for entities of different kinds overlapping within the same archetype:

Does Bevy provide any optimization (pools or sorting like EnTT or something) for this case: say you want to query for entities with components (A, B, C), where C is a boolean which represents whether an entity is a monster or player. You only want to iterate player entities (e.g. to send network packets for them at a faster frequency than monster entities or something like that). Obviously you could iterate every entity of that archetype table (players+monsters) and check the boolean component to skip over monsters; curious if there's any additional query optimization here, or this isn't a common use case/problem?

which they intend to solve with indexes:

Not yet. This falls under the big umbrella of "indexes", which are probably two or three major ECS features away

seems reasonable.
