Hello,
I am continuing to work through my Protobuf parser's
Constant/Floating-Point tests. Double precision is rearing its ugly
head, and I am curious what the best way is to determine a healthy
epsilon given the magnitude of the expected parsed value. For the most
part the tests pass; the failures are all precision related.
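
To make the question concrete, this is the sort of magnitude-scaled
comparison I have in mind; almost_equal and its default ULP factor are
placeholder names of mine, and choosing that factor sensibly is exactly
what I am unsure about:

#include <algorithm>
#include <cmath>
#include <limits>

// Tolerance scaled by the magnitude of the operands, so a value such
// as 1.16624e+306 gets a proportionally large epsilon instead of a
// fixed absolute one.
bool almost_equal(double a, double b, double ulps = 4.0)
{
    const double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b)
        <= scale * std::numeric_limits<double>::epsilon() * ulps;
}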
Here is a representative failure, as reported by Catch:
REQUIRE( actual == expected )
with expansion:
struct my::protobuf::ast::floating_point_t { 'val': 1.16624e+306,
'opt_sign': null }
==
struct my::protobuf::ast::floating_point_t { 'val': 1.16624e+306,
'opt_sign': null }
with messages:
Source: syntax = 'proto2';option L = 1.16624e+306;
Delta was: 2.35309e+300
There is clearly a delta involved, even though both values print
identically at six significant figures. It works out to a relative
error of roughly 2e-6 (2.35309e+300 / 1.16624e+306), far larger than a
few ULPs, so it is most likely a precision issue encountered during the
parse. Without going into a great deal of depth, the relevant AST types
are:
namespace my { namespace protobuf { namespace ast {

using float_t = double;

enum num_sign_t {
    sign_none
    /// '-'
    , sign_minus
    /// '+'
    , sign_plus
};

// Which includes (members as they appear in the Catch expansion above;
// numeric_t elided here):
struct floating_point_t : numeric_t {
    float_t val;
    // Prints as 'null' in the expansion when the literal carries no
    // explicit sign.
    boost::optional<num_sign_t> opt_sign;
};

}}} // namespace my::protobuf::ast
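
On the Catch side, the direction I am leaning is to drop the
struct-level equality in the assertion and compare the members
individually, using Catch's Approx so the epsilon is relative rather
than absolute; the 1e-5 value below is only a guess of mine, loose
enough to absorb the ~2e-6 relative delta seen above, and whether that
looseness is acceptable is really my question:

REQUIRE( actual.opt_sign == expected.opt_sign );
// Approx(x).epsilon(e) accepts values that agree to within a relative
// error of e, which scales with the ~1e+306 magnitude involved here.
REQUIRE( actual.val == Approx(expected.val).epsilon(1e-5) );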