Niall Douglas wrote:
As you'll note, the first possible state (empty) tends to be chosen by the compiler as the most likely. That implies a 20-cycle branch misprediction cost for both the valued and errored states. So they are equally costly, which is intentional.
Wait a minute. Are you saying you consider it a feature that valued and errored are equally slow, rather than one of them being fast? How is that a good thing? Of course empty should be the least likely - it _is_ the least likely.
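To make concrete what I would expect instead, here is a minimal sketch (not Outcome's actual code; the type and the names are purely illustrative) of the check order a caller naturally writes, where valued is by far the most common state and therefore the branch worth predicting:

    // Illustrative three-state result, not Outcome's implementation.
    enum class state { empty, valued, errored };
    struct result { state s; int value; int error; };

    void consume(const result& r)
    {
        if (r.s == state::valued)            // overwhelmingly the common case:
            use_value(r.value);              //   this is the branch that should be predicted
        else if (r.s == state::errored)      // rare, but expected now and then
            handle_error(r.error);
        else                                 // empty: rarest of all
            std::abort();
    }

(use_value and handle_error stand in for whatever the caller does with the result.)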
As I've mentioned several times already in other threads, that was a deliberate and intentional design choice for outcome/result. Predictable latency throughout.
That's an intriguing statement. Now, I know you usually know what you're talking about, so perhaps you can provide a bit of further explanation. If we take a typical example - your AFIO function from earlier - it issues at least three syscalls, each of which has a different latency depending on whether it succeeds or fails; later ones are skipped when an earlier one fails; and to top it all off, the last syscall is an fsync, which does not inhabit the same galaxy as the words "predictable latency". So what use case are we targeting here, in which success and failure are equally costly and measured in cycles?
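For reference, this is roughly the shape of function I mean (illustrative only, not the actual AFIO code; error handling simplified to errno return values): several syscalls, each with its own success and failure latency, later ones skipped when an earlier one fails, and the fsync sitting on the success path:

    #include <fcntl.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstddef>

    // Returns 0 on success, an errno value on failure (stand-in for result<void>).
    int write_and_sync(const char* path, const void* data, std::size_t len)
    {
        int fd = ::open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return errno;                  // failure here: the later syscalls never run
        if (::write(fd, data, len) < 0) {
            int e = errno;
            ::close(fd);
            return e;                      // failure here: no fsync at all
        }
        if (::fsync(fd) < 0) {             // the success path pays for the fsync,
            int e = errno;                 //   whose latency varies by orders of magnitude
            ::close(fd);
            return e;
        }
        ::close(fd);
        return 0;
    }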