`src/2025h2/reflection-and-comptime.md` (+33 −6 lines)
@@ -15,9 +15,28 @@ Design, implement and experimentally land a reflection scheme based on `const fn

## Motivation & status quo

- Creating new general purpose crates (like serialization crates, log/tracing crates, game engine state inspection crates) that should work with almost all other data structures is nontrivial today. You either need to locally implement your traits for other crates, or the other crates need to depend on you and implement your traits. This often hinders rollout and will never reach everything.
+ Creating new general purpose crates (like serialization crates, log/tracing crates, game engine state inspection crates) that should work with almost all other data structures is nontrivial today.
+ You either need to locally implement your new traits for other (common) crates, or the other crates need to depend on you and implement your traits.
+ This often hinders rollout and will never reach every crate. Most crate maintainers do not want to depend on 2+ serialization crates and 3+ logging crates, so they will instead pick one,
+ causing everyone to either pick the large popular crates or be limited in what they can serialize/log. This is a hindrance to innovation and will (imo) long term cause the ecosystem to
+ stop evolving even when an objectively better solution to a problem is found.
- Reflection offers a way out of this dilemma, as you can write your logic for all types, by processing the type information at runtime (or even preprocess it at compile-time) without requiring trait bounds on your functions or trait impls anywhere.
+ Reflection offers a way out of this dilemma, as you can write your logic for all types.
+ You would be processing the type information at runtime (or even preprocessing it at compile-time) without requiring trait bounds on your functions or trait impls anywhere.
+ This means no one but consumers of your serialization/logging/game-engine crate will need to know about it.
+ They are immediately able to interoperate with tuples of any size, and with arbitrary structs and enums from arbitrary crates that neither depend on yours nor you depend on theirs.
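To make that concrete, here is a minimal sketch of a trait-bound-free consumer. Everything in it (`TypeInfo`, `FieldInfo`, `type_info`) is an invented placeholder for whatever the experiment eventually exposes, not an existing or proposed API.

```rust
// Hypothetical stand-ins for the type description the experiment would provide.
pub struct TypeInfo {
    pub name: &'static str,
    pub fields: &'static [FieldInfo],
}

pub struct FieldInfo {
    pub name: &'static str,
}

// Imagined entry point: in the experiment this would be a compiler/libstd
// provided `const fn`, not something a user crate can write today.
pub const fn type_info<T>() -> &'static TypeInfo {
    unimplemented!()
}

// The payoff: a consumer that works for any `T`, with no trait bound on `T`
// and no derive on the type being inspected.
pub fn field_names<T>() -> Vec<&'static str> {
    type_info::<T>().fields.iter().map(|f| f.name).collect()
}
```

In a derive-based design every inspected type would have to opt in; in this sketch the only task-specific code lives in the consumer crate.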
+
+ If this experiment is successful, crates like `bevy` will be able to "just work" with arbitrary types instead of requiring authors to `#[derive(Component)]`, `#[derive(Bundle)]`, or `#[derive(Resource)]` their types
+ just to get the `bevy_reflect` information built at compile-time.
+ New serialization crates (e.g. for things that are [non-goals of `serde`](https://github.com/serde-rs/serde/issues/2877)) could be used with any types without being limited to types from crates that know about the new serialization crate.
+
+ Furthermore, it opens up new possibilities for reflection-like behaviour, such as
+ * specializing serialization on specific formats (e.g. `serde` won't support changing serialization depending on the serializer),
+ * specializing trait impl method bodies to have more performant code paths for specific types, groups of types, or shapes of types (e.g. based on their layout).
+
+ I consider reflection orthogonal to derives, as they solve similar problems from different directions. Reflection lets you write the logic that processes your types in a way very similar to dynamic languages, by inspecting *values*, while derives generate the code that processes your types ahead of time. Proc macro derives have historically been fairly hard to debug and to bootstrap from scratch. While reflection can get similarly complex fast, it allows for a more dynamic approach where you can easily debug the state you are in, as you do not have to pair the derive logic with the consumer logic (e.g. a serializer) and are instead directly writing just the consumer logic.
+
+ Reflection is often not as efficient as derives, since derives can generate the ideal code ahead of time. But once a fully functioning reflection system has been written for a use case and performance becomes a problem, it should be significantly easier to write a derive for the performance-critical cases than it would have been to start with one.

### The next 6 months
@@ -30,7 +49,7 @@ Create basic building blocks that allow `facet`, `bevy-reflect` and `reflect` to

## Design axioms

- * Prefer procedural const-eval code over associated const based designs
+ * Prefer procedural const-eval code over associated const based designs (see also "why not uwuflection" in the FAQ).
* We picked general `const fn` evaluation over associated-const-based designs that are equally expressive but essentially form a DSL (the sketch below contrasts the two styles)
* Ensure privacy is upheld, modulo things like `size_of` exposing whether new private fields have been added
* Avoid new semver hazards and document any if unavoidable.
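To make the first two axioms concrete, here is a rough, hedged contrast of the two styles; the `FieldDesc` type, the cons-list encoding, and both computations are invented purely for illustration and are not part of any proposed design.

```rust
use std::mem::size_of;

// Preferred style: ordinary procedural `const fn` code over a (hypothetical)
// slice of field descriptions, using plain loops and arithmetic.
struct FieldDesc {
    size: usize,
}

const fn total_size(fields: &[FieldDesc]) -> usize {
    let mut sum = 0;
    let mut i = 0;
    while i < fields.len() {
        sum += fields[i].size;
        i += 1;
    }
    sum
}

// Associated-const style the axiom steers away from: the same kind of
// computation spread across trait impls over a type-level cons list, which
// quickly becomes its own little DSL.
trait TotalSize {
    const TOTAL: usize;
}

impl TotalSize for () {
    const TOTAL: usize = 0;
}

impl<Head, Tail: TotalSize> TotalSize for (Head, Tail) {
    const TOTAL: usize = size_of::<Head>() + Tail::TOTAL;
}
```

Both compute a size, but the first reads like normal Rust while the second has to encode its control flow in the trait system.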
@@ -62,9 +81,7 @@ Create basic building blocks that allow `facet`, `bevy-reflect` and `reflect` to
| Lang-team experiment |![Team][][lang], [libs]| Needs libstd data structures (lang items) to make the specialization data available |
- | Author RFC || Not at that stage in the next 6 months |
| Lang-team champion |![Team][][lang]| TBD |
- | RFC decision || Not at that stage in the next 6 months |

### Implement language feature
@@ -126,6 +143,16 @@ in order to use uwuflection in types in generic code you need to either write in

### Why not go full zig-style comptime?

+ Zig's approach to comptime, at a very high level, is effectively:
+
+ * generate the AST for all source files
+ * pick the `main` function, start compiling it, and look for whatever it needs in order to be compiled
+ * if a comptime function call is found, look only for the code that call needs, compile it, and produce the resulting code of the comptime function
+ * continue the main compilation, which may now invoke the generated code and start compiling that
+
+ We are not experimenting with this approach at this time, because the compiler is not set up in a way that permits proc macros to access type information from the current crate.
+ While there are ongoing refactorings that go in the direction of potentially allowing more of that in the future, that future is more than 5 years away at my best guess.
+
* the compiler is not set up to perform codegen while type information is already available. It possibly never will be, and it would be an immense amount of work to get there. I'm doing lots of refactorings that would need to be done for something like that anyway, even if the goal is just better incremental compilation and general compiler architecture.
* there are too many open language questions about it that we haven't even started to discuss
- * a hacky prototype that works for just tuples and that works with regular const eval exists right now, so pursueing the definitely possible implementation will pay off in a shorter term.
+ * a hacky comptime reflection prototype that works for just tuples and that works with regular const eval exists right now, so pursuing the definitely possible implementation will pay off in a shorter term.