glfw: Denormalize Errors #96
In the original draft of #92, I added an off-the-cuff change to denormalize error sets. As suggested by @slimsag, I'm opening an issue to discuss this change (and whether it's beneficial enough to consider adding).
Currently, we have normalized errors: every function shares one library-wide error set, reflecting GLFW's general stance that (almost) any function may return (or rather "set") any of the errors the library defines. It's important to note, though, that GLFW's documentation does specify a "possible" error set for each function, and according to the maintainer of GLFW, this documentation is "generally accurate" (paraphrased).
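For illustration, the normalized style looks roughly like this (a sketch, not mach-glfw's actual source; the wrapper shape and body are assumed):

```zig
// One library-wide error set, shared by every wrapper function.
pub const Error = error{
    InvalidEnum,
    InvalidValue,
    NoCurrentContext,
    APIUnavailable,
    VersionUnavailable,
    NoWindowContext,
    FormatUnavailable,
    OutOfMemory,
    PlatformError,
};

/// Possible errors include: PlatformError.
///
/// Only the doc comment above narrows the set; the signature still admits
/// every member of `Error`, so the compiler cannot help the caller.
pub fn pollEvents() Error!void {
    // (body elided: call glfwPollEvents, then translate glfwGetError)
}
```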
One consideration here is ABI stability: a normalized error set (almost universally) guarantees it, whereas with denormalized error sets it's essentially impossible to guarantee unless we were to freeze the GLFW version.
The other consideration is how "comfortable" the API is: a comment describing which errors you might encounter is workable, but it's a departure from Zig's error-handling philosophy and much more akin to C (which is what GLFW aims to cater to). Denormalized errors would slightly increase maintenance, but they would vastly improve readability: the errors the user has to handle appear in the function's signature, and the compiler emits an error if the user tries to catch an error the function can't produce.
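Concretely, a denormalized signature might look like this (a sketch; here the documented error set for `glfwPollEvents` is assumed to be just `PlatformError`):

```zig
/// Under denormalization, the signature itself carries GLFW's documented
/// error set for this function.
pub fn pollEvents() error{PlatformError}!void {
    // (body elided)
}

test "the compiler rejects impossible catches" {
    pollEvents() catch |err| switch (err) {
        error.PlatformError => {},
        // Uncommenting the next prong is a compile error, because
        // InvalidEnum is not in pollEvents' error set:
        // error.InvalidEnum => {},
    };
}
```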
As well, another positive of denormalized errors would be behaviour-documentation parity. If GLFW changes the behaviour of one of their functions and adds a possible error, then we'll `catch unreachable` on that prong, and be able to make a PR upstream to update their docs. Net positive.

I am in favour of denormalized errors.
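A sketch of how that surfaces in practice (error code values taken from glfw3.h; the conversion helper itself is hypothetical):

```zig
// Error codes from glfw3.h.
const GLFW_NO_ERROR: c_int = 0;
const GLFW_PLATFORM_ERROR: c_int = 0x00010008;

/// Hypothetical conversion helper for a function whose documented error
/// set is only PlatformError. If GLFW ever sets an undocumented error
/// here, the `else` prong panics in safe builds, telling us exactly where
/// GLFW's docs need an upstream fix.
fn convertError(code: c_int) error{PlatformError}!void {
    return switch (code) {
        GLFW_NO_ERROR => {},
        GLFW_PLATFORM_ERROR => error.PlatformError,
        else => unreachable, // undocumented per GLFW's docs: report upstream
    };
}
```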
Very interesting arguments.
The one risk I see with error de-normalization is that we don't catch such an issue until someone's application makes its way into production, and they begin to experience a slew of `unreachable` panics on rarer system configurations (e.g. Wayland, obscure Linux distros/GPU drivers, etc.)

Looking at the current `Error` set and trying to group them into logical sets, we have:

Function-specific errors:

- InvalidEnum
- InvalidValue
- NoCurrentContext
- APIUnavailable
- VersionUnavailable
- NoWindowContext
- FormatUnavailable

Generic errors:

- OutOfMemory
- PlatformError

I think that I would feel comfortable with denormalizing all errors except those listed under "Generic errors" above. I am worried that basically any GLFW function can generate those two, and we won't really know when/how unless we did a full audit of GLFW's sources.
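For concreteness, that compromise would look something like this (a sketch; `createWindow`'s specific errors are assumed from GLFW's docs, and the `Window` type is a stand-in):

```zig
/// The two errors GLFW says (almost) any function may set.
pub const GenericError = error{ OutOfMemory, PlatformError };

pub const Window = opaque {}; // stand-in for the real wrapper type

/// Each function merges the generic set with its documented specifics,
/// so the generic errors stay "normalized" across every signature.
pub fn createWindow(width: u32, height: u32, title: [*:0]const u8) (GenericError || error{
    InvalidEnum,
    InvalidValue,
    APIUnavailable,
    VersionUnavailable,
    FormatUnavailable,
})!*Window {
    _ = width;
    _ = height;
    _ = title;
    return error.PlatformError; // stub body for illustration
}
```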
But I can also see your argument here about just denormalizing everything and improving GLFW docs when we run into anything inconsistent. I think overall I am OK with either of these approaches. After reading what I wrote above, what do you think the best path forward is?
You make a strong point about denormalization being potentially harmful in production. I think it then comes down to a trade-off: take the risk, accepting some inconvenience and struggle, with the return being an overall net positive in the long run; or avoid the risk and continue with the status quo, where documented behaviour is, as you've described it, aspirational with respect to actual behaviour.
After thinking it through for a bit, I would say I am still of the opinion that error denormalization is worth the risk; whilst it does rock the boat, it's also a step in the same direction as the work that surfaced the undefined behaviour present in GLFW for 6+ years.
In the extreme case that this change causes too much havoc, we could re-normalize errors and end up back where we are now, but with additional information about the weaknesses in GLFW's error-handling model, so still a net positive.
Fair enough, I think you've convinced me. Denormalization of all errors as you described originally SGTM!