ecs: major rethink & database-aligned design #157
I promised the next blog post in the series would be code, not a verbal explanation like this one, so I'll likely allude to all of this (and link to this explanation) to show how my thinking has developed since the last article. The post will then focus on the code here.
Limitations of our ECS
Previously, we had thought about our ECS in terms of archetypes defined at compile time (effectively arrays of archetype structs, with comptime-defined fields as components). I believe that this is likely the most efficient way one could ever represent entities. However, it comes with many limitations, namely that:

You have to define which components your entity will have at compile time: with our implementation, adding/removing components on an entity at runtime was not possible (although declaring components at comptime whose values were optional at runtime was). This conflicts with some goals we have, such as runtime/editor manipulation of entities (discussed below).
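For concreteness, here is a minimal sketch of the old comptime-archetype idea (hypothetical and simplified, not the actual Mach API):

```zig
const std = @import("std");

// An archetype is a struct whose fields (components) are fixed at compile
// time; entities of this archetype are just elements of an array.
const Monster = struct {
    position: [3]f32,
    health: u32,
};

// Because `Monster` is defined at comptime, an individual monster can never
// gain or lose a component at runtime without recompiling.
var monsters: std.ArrayListUnmanaged(Monster) = .{};
```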
Investigating sparse sets
To find the best way to solve this, I began to investigate sparse sets, which I saw mentioned in various contexts around ECS implementations. My understanding is that many ECS implementations utilize sparse sets to store a relation between an entity ID and the dense arrays of components associated with it. Sparse sets often imply storing components as distinct dense arrays (e.g. an array of physics component values, an array of weapon component values, etc.) and then using the sparse set to map entity IDs -> indexes within those dense component arrays: `weapon_components[weapons_sparse_set[entityID]]` is effectively used to look up an entity's weapon component value, because not every entity is guaranteed to have the same components and so `weapon_components[entityID]` is not possible.

This of course introduces overhead, not only because two array accesses are needed to look up a component's value, but also because you may now be accessing `weapon_components` values non-sequentially, which can easily introduce CPU cache misses. And so I began to think about how to reconcile the comptime-component-definition archetype approach I had written before with this sparse set approach that seems to be popular among other ECS implementations.
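A minimal sparse-set-style sketch in Zig (hypothetical names, and assuming a hashmap in place of a classic paired-array sparse set):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

const WeaponStore = struct {
    // entity ID -> index into the dense array of component values.
    sparse: std.AutoArrayHashMapUnmanaged(u32, u32) = .{},
    // Tightly packed component values, one per entity that has a weapon.
    dense: std.ArrayListUnmanaged(Weapon) = .{},

    fn get(self: *WeaponStore, entity_id: u32) ?*Weapon {
        // Two lookups per access, and `dense` may be visited non-sequentially.
        const index = self.sparse.get(entity_id) orelse return null;
        return &self.dense.items[index];
    }
};
```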
Thinking in terms of databases

What helped me was thinking about an ECS in terms of databases, where tables represent a rather arbitrary "type" of entity, rows represent entities (of that type) themselves, and columns represent component values. This makes a lot of sense to me, and can easily be implemented at runtime to allow adding/removing "columns" (components) to an entity.
The drawback of this database model made the benefit of sparse sets obvious: if I have a table representing monster entities and add a `Weapon` component to one monster, every monster must now pay the cost of storing such a component, since we've introduced a column, whether they intend to store a value there or not. In this context, having a way to separately store components and associate them with an entity via a sparse set is nice: you pay a bit more to iterate over such components (because they are not stored as dense arrays), but you only pay the cost of storing them for entities that actually intend to use them. In fact, iteration could be faster due to not having to skip over "empty" column values.
So this was the approach I implemented here:
- `Entities` is a database of tables (each table is an `EntityTypeStorage`).
- `EntityTypeStorage` is a table, whose rows are entities and columns are components (each column is a `ComponentStorage(T)`).
- `ComponentStorage(T)` is one of two things: a dense array of component values (one per row), or a sparse hashmap of `entity ID -> component value`.
- `EntityID` thus becomes a simple 32-bit row ID + a 16-bit table ID, and it's globally unique within a set of `Entities`. Locating an entity is then effectively `entities.tables[tableID].rows[rowID]`.

Note: When I say "hashmap" above I really mean a Zig array hashmap, which appears to be quite similar to a sparse set and mostly optimal for smaller hashmaps from what I have found.
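A rough sketch of that shape (simplified; everything beyond the names mentioned above is illustrative, not Mach's actual code):

```zig
const std = @import("std");

// 32-bit row ID + 16-bit table ID, globally unique within a set of Entities.
const EntityID = packed struct {
    row: u32,
    table: u16,
};

// A column is either dense (one value per row) or sparse (only entities that
// actually have the component pay to store it).
fn ComponentStorage(comptime T: type) type {
    return union(enum) {
        dense: std.ArrayListUnmanaged(T),
        sparse: std.AutoArrayHashMapUnmanaged(u32, T), // entity row ID -> value
    };
}
```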
Benefits
Faster "give me all entities with components (T, U, V)" queries
One nice thing about this approach compared to other ECS, I think, is that to answer a query like "give me all entities with a 'weapon' component", we can reduce the search space dramatically right off the bat due to the entity types: an `EntityTypeStorage` has fast access to the set of components all entities within it may have set. Now, not all of them will have such a component, but most of them will. We just "know" that without doing any computations; our data is structured to hint this to us. And this makes sense logically, because most entities are similar: buttons, ogre monsters, players, etc. are often minor variations of something, not a truly unique type of entity with 100% random components.
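A hedged sketch of that pruning idea (names illustrative): check a table-level component set once, instead of checking every entity.

```zig
const std = @import("std");

const Table = struct {
    // Components that entities in this table *may* have set.
    component_names: []const []const u8,

    fn mayHave(self: Table, name: []const u8) bool {
        for (self.component_names) |n| {
            if (std.mem.eql(u8, n, name)) return true;
        }
        return false;
    }
};

fn forEachWithWeapon(tables: []const Table) void {
    for (tables) |table| {
        // One check prunes an entire table of entities from the search space.
        if (!table.mayHave("weapon")) continue;
        // ... iterate only this table's rows ...
    }
}
```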
Shared component values

In addition to having sparse storage for `entity ID -> component value` relations, we can also offer a third type of storage: shared storage. Because we allow the user to arbitrarily define entity types, we can offer to store components at the entity type (table) level: pay to store the component only once, not per-entity. This seems quite useful (and perhaps even unique to our ECS? I'd be curious to hear if others offer this!)

For example, if you want to have all entities of type "monster" share the same `Renderer` component value, we simply elevate the storage of that component value to the `EntityTypeStorage` / as part of the table itself, not as a column or sparse relation. This is a mere `component name -> component value` map. There is no `entity ID -> component value` relationship involved here; we just "know" that every entity of the "monster" entity type has that component value.
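A small sketch of shared storage (illustrative; the real design would use a type-erased `component name -> component value` map rather than a typed field):

```zig
const Renderer = struct { shader_id: u32 };

const MonsterTable = struct {
    // Stored once per table: every "monster" entity implicitly has this
    // Renderer value, with no entity ID -> value relation at all.
    shared_renderer: ?Renderer = null,
};
```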
Runtime/editor introspection

This is not a benefit of thinking in terms of databases per se, but this implementation opens the possibility for runtime (future editor) manipulation & introspection:
A note about Bevy/EnTT
After writing this, and the above commit message, I got curious how Bevy/EnTT handle this. Do they do something similar?
I found Bevy has hybrid component storage (pick between dense and sparse), which appears to be more clearly specified in this linked PR, which also indicates:
Is our archetypal memory layout better than those of other ECS implementations?
One notable difference is that Bevy states about Archetypal ECS:
Update: see https://github.com/hexops/mach/pull/157#issuecomment-1022916117
I've seen this stated elsewhere, outside of Bevy, too. I've had folks tell me that archetypal ECS implementations use an AoS memory layout in order to make iteration faster, where `A`, `B`, and `C` are component values interleaved per entity: `A B C A B C A B C ...`

I have no doubt a sparse set is worse for iteration, as it involves accessing non-sequentially into the underlying dense arrays of the sparse set (from what I understand). However, I find the archetypal storage pattern most have settled on (an AoS memory layout) to be a strange choice. The other choice is an SoA memory layout, with one array per component type: `A A A ...`, `B B B ...`, `C C C ...`
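In Zig terms, a hedged illustration of the two layouts (names illustrative):

```zig
const std = @import("std");

const Entity = struct { a: u8, b: u64, c: u16 };

// AoS: one array of structs; per-entity padding/alignment is paid on every
// element, and unused fields still travel through the cache.
var aos: std.ArrayListUnmanaged(Entity) = .{};

// SoA: std.MultiArrayList stores each field in its own array, eliminating
// per-entity padding and keeping each component column tightly packed.
var soa: std.MultiArrayList(Entity) = .{};
```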
My understanding from data oriented design (primarily from Andrew Kelley's talk) is that due to struct padding and alignment, SoA is in fact better: it reduces the size of the data (by up to nearly half, IIRC), which ensures more of it actually ends up in the CPU cache despite accessing distinct arrays (something CPUs are apparently quite efficient at).
Obviously, I have no benchmarks, and so making such a claim is super naive. However, if true, it means that our memory layout is not just more CPU cache efficient but also largely eliminates the typically increased cost of adding/removing components with archetypal storage: others pay to copy every single entity when adding/removing a component; we don't. We only pay to allocate space for the new component. We don't pay to copy anything. Of course, in our case adding/removing a component in sparse storage is still cheaper: effectively a hashmap insert for affected entities only, rather than allocating an entire array of size `len(entities)`.

An additional advantage is that even when iterating over every entity, your intent is often not to access every component. For example, a physics system may access multiple components but will not be interested in rendering/game-logic components; with an AoS layout, those would "push" the data we care about out of the limited cache space.
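A hedged sketch of that cost difference when adding a component to a table of entities (names illustrative):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

// Dense storage in our design: adding a component allocates one new column;
// no existing entity data is copied anywhere.
fn addDenseColumn(allocator: std.mem.Allocator, num_rows: usize) ![]?Weapon {
    const column = try allocator.alloc(?Weapon, num_rows);
    for (column) |*cell| cell.* = null; // no value set yet for any row
    return column;
}

// Sparse storage: only the affected entities pay, via a hashmap insert.
fn addSparse(
    map: *std.AutoArrayHashMapUnmanaged(u32, Weapon),
    allocator: std.mem.Allocator,
    entity_id: u32,
    value: Weapon,
) !void {
    try map.put(allocator, entity_id, value);
}
```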
I'm poking the Bevy ECS authors to see how they think about this: https://discord.com/channels/691052431525675048/742569353878437978/936125050095034418
Future
Major things still not implemented here include:
Signed-off-by: Stephen Gutekanst stephen@hexops.com
And the answer is: Bevy does NOT use an AoS memory layout. The difference between what they're doing and what I've done here is as follows:
- In Bevy, when you add a component `C` to an entity which currently has `(A, B)`, that entity "moves" from the old `(A, B)` archetype table to the new archetype table `(A, B, C)`. If you plan to add a component to 1,000 entities of the same archetype, that would involve copying 1,000 entities from the old to the new table. Their table effectively has distinct `Vector<A>`, `Vector<B>`, and `Vector<C>` vectors (SoA).
- In our design, the entity stays put and we add a new column to its existing table, effectively a `Vector<?T>`, and currently pay the cost of that optional bit (but I had plans to remove this with a bitmask; see the sketch at the end of this comment).

This tradeoff seems to mostly be complexity (Bevy seems simpler), the cost of adding a component to a few entities vs. thousands (memcpy vs. alloc), and the potential for overlapping entities within the same archetype:
which they intend to solve with indexes:
seems reasonable.
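For the `Vector<?T>` point above, a hedged sketch of the current optional-per-cell cost and the planned bitmask alternative (illustrative, not the actual implementation):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

// Current: every cell carries an optional tag (plus padding) to mark whether
// the row actually has the component.
var weapons_with_optional: std.ArrayListUnmanaged(?Weapon) = .{};

// Planned: plain values plus a separate bitmask, one bit per row. Row i has
// the component iff bit (i % 64) of word (i / 64) is set.
var weapons: std.ArrayListUnmanaged(Weapon) = .{};
var weapons_set_bits: std.ArrayListUnmanaged(u64) = .{};
```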