Dear Boosters,

Quite a bit has happened since I last reported about sqlpp11 in this forum [1,2]. I have incorporated a lot of the feedback you gave me, hopefully bringing the library closer to a reviewable state.

Source: https://github.com/rbock/sqlpp11
Docs: https://github.com/rbock/sqlpp11/wiki (not at all formal yet)

I am hoping for more feedback, both here and live at CppCon (http://sched.co/1r4lue3). Here are some noteworthy changes:

_Restructuring:_ The code has undergone a major restructuring, leading to a much simpler and thereby much more flexible way to define the structure of SQL statements. As a result, it would now be quite simple to add vendor-specific features to sqlpp11 connector libraries. Dominique Devienne mentioned array binding for inserts, Johan Baltié mentioned hierarchical searches (both features of Oracle). If you are interested in writing a connector and adding such features, please let me know, I'll walk you through it.

_NULL handling:_ Enabled by the restructured code and spurred by the library quince by Michael Shepanski, sqlpp11 can now calculate which result fields can or cannot be NULL. Speaking of which, handling NULL for result values has been discussed a lot. The library now has compile-time configurable behavior: you can choose between an std::optional-like interface and mapping NULL to the trivial value of the result type, e.g. 0 for numbers or "" for strings, see also https://github.com/rbock/sqlpp11/wiki/NULL

_Connectors:_ Matthijs wrote a new postgresql connector from scratch, see https://github.com/matthijs/sqlpp11-connector-postgresql
The sqlite3 connector received a bunch of corrections, thanks to dirkvdb.

_Compilers:_ Known to compile with clang >= 3.1 and gcc >= 4.8. I currently cannot test with MSVC or Intel or others, but if you happen to have the latest top-notch version of any of those, I'd be interested in what they have to say about sqlpp11.

Cheers,
Roland

[1]: http://lists.boost.org/Archives/boost/2013/11/208388.php
[2]: http://lists.boost.org/Archives/boost/2014/02/211206.php
Roland,

Get documentation into your package so we can include it in "Library Writers Workshop"

Robert Ramey
On 2014-08-18 18:56, Robert Ramey wrote:
Roland,
Get documentation into your package so we can include it in "Library Writers Workshop"
Robert Ramey
Robert,

I'll see what I can come up with by then. As written in another thread, I am not sure yet how to document the library in a more formal way. The statements' interfaces change while being used (not really, of course, but it kind of behaves that way). Here are a few examples (the first one will compile with the current development branch, the others will compile with any recent release):

auto x = select(sqlpp::value(7).as(sqlpp::alias::a));

x is a statement that has a bunch of member functions like from(), where(), having(), etc., but none of those are required to be called. You can let this be executed by the database as is.

auto y = select(all_of(t)); // t being a table

y is a statement that has the same member functions, but two of them are required to execute the statement: from() and where(). I.e.

db(y);                     // This will trigger two static asserts
db(y.from(t).where(true)); // This will compile

And

auto z = select(all_of(t)).from(t).where(t.name == "XX");

z is a statement that can be executed by a database. It does not have the methods from() and where() anymore. You can call having(), group_by(), etc., though.

This behaves really smoothly, but I don't know how to formally document it yet.

Best,
Roland
Hi Roland, Roland Bock wrote:
_NULL handling:_ Enabled by the restructured code and spurred by the library quince by Michael Shepanski, sqlpp11 can now calculate which result fields can or cannot be NULL.
Speaking of which, handling NULL for result values has been discussed a lot. The library now has compile-time configurable behavior, you can choose between an std::optional-like interface and mapping NULL to the trivial value of the result type, e.g. 0 for numbers or "" for strings, see also https://github.com/rbock/sqlpp11/wiki/NULL
You wrote in the docs:

<cite>
One often discussed alternative would be boost::optional or (in the future) std::optional. There is one drawback (correct me if I am wrong, please): optional cannot be used for binding result values because it is unclear whether there already is a value to bind to.
</cite>

What do you mean by that?

If I understand correctly, you have in mind returning boost::optional<> from a function. It's ok to do it, the value is stored in the optional and deep copies are done if needed.
http://www.boost.org/doc/libs/1_56_0/libs/optional/doc/html/boost_optional/t...
http://www.boost.org/doc/libs/1_56_0/libs/optional/doc/html/boost_optional/q...

Regards,
Adam
On 2014-08-18 19:58, Adam Wulkiewicz wrote:
Hi Roland,
Roland Bock wrote:
_NULL handling:_ Enabled by the restructured code and spurred by the library quince by Michael Shepanski, sqlpp11 can now calculate which result fields can or cannot be NULL.
Speaking of which, handling NULL for result values has been discussed a lot. The library now has compile-time configurable behavior, you can choose between an std::optional-like interface and mapping NULL to the trivial value of the result type, e.g. 0 for numbers or "" for strings, see also https://github.com/rbock/sqlpp11/wiki/NULL
You wrote in the docs:
<cite> One often discussed alternative would be boost::optional or (in the future) std::optional. There is one drawback (correct me if I am wrong, please): optional cannot be used for binding result values because it is unclear whether there already is a value to bind to. </cite>
What do you mean by that?
If I understand correctly, you have in mind returning boost::optional<> from a function. It's ok to do it, the value is stored in optional and deep copies are done if needed. http://www.boost.org/doc/libs/1_56_0/libs/optional/doc/html/boost_optional/t...
http://www.boost.org/doc/libs/1_56_0/libs/optional/doc/html/boost_optional/q...
Regards, Adam
Adam,

I was referring to what some SQL libraries do, e.g. MySQL's C interface: They take pointers to some memory and then write result fields to those memory blocks, see for instance http://dev.mysql.com/doc/refman/5.7/en/mysql-stmt-bind-result.html

I don't know if that would be legal to do with the value in an optional.

Regards,
Roland
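For illustration, here is a minimal sketch of that binding style using the MySQL C API directly (statement setup and the column index are assumed; this is not sqlpp11 code):

#include <mysql/mysql.h>
#include <cstring>

// Sketch: the driver writes each fetched value into memory that we own.
void fetch_first_column(MYSQL_STMT* stmt)
{
    long long value = 0;    // the driver writes the column value here
    my_bool is_null = 0;    // ... and the NULL flag here

    MYSQL_BIND bind;
    std::memset(&bind, 0, sizeof(bind));
    bind.buffer_type = MYSQL_TYPE_LONGLONG;
    bind.buffer = &value;
    bind.is_null = &is_null;

    mysql_stmt_bind_result(stmt, &bind);    // hand both addresses to the driver

    while (mysql_stmt_fetch(stmt) == 0)     // each fetch overwrites value/is_null
    {
        // use value / is_null for the current row
    }
}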
Roland Bock wrote:
I was referring to what some sql libraries do, e.g. Mysql's C interface: They take pointers to some memory and then write result fields to those memory blocks, see for instance http://dev.mysql.com/doc/refman/5.7/en/mysql-stmt-bind-result.html
I don't know if that would be legal to do with the value in optional.

If some object containing the value (corresponding to the data stored in the DB) is created at some point, it may be stored in an optional.
I'm guessing that the problem exists because select is lazily executed. C++ objects corresponding to the data aren't created (e.g. std::string, int, float, etc.). Instead some pointers to buffers are kept (in sqlpp::result_field_t?). And C++ objects are created and returned later from the value() method or conversion operators each time one of them is called. Is that right?

But AFAIU if the C++ objects were created along with the representation of a row, the results could be stored as optionals. Or am I missing something?

I saw that there are optional-like wrappers of values, e.g. sqlpp::text. They're close to optionals since they're storing a value and some flag. What are they used for?

Are you using tabs? The code doesn't seem to look right on GitHub.

Regards, Adam
On 2014-08-18 22:43, Adam Wulkiewicz wrote:
Roland Bock wrote:
I was referring to what some sql libraries do, e.g. Mysql's C interface: They take pointers to some memory and then write result fields to those memory blocks, see for instance http://dev.mysql.com/doc/refman/5.7/en/mysql-stmt-bind-result.html
I don't know if that would be legal to do with the value in optional. If some object containing value (corresponding to the data stored in DB) is created at some point, it may be stored in optional.
I'm guessing that the problem exists because select is lazily executed. C++ objects corresponding to the data aren't created (e.g. std::string, int, float, etc.). Instead some pointers to buffers are kept (in sqlpp::result_field_t?). And C++ objects are created and returned later from value() method or conversion operators each time one of them is called. Is that right?

Not quite. Let's look at integral.h and the partial specialization of

template <...>
struct result_field_t<...> {...};

Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).
But AFAIU if the C++ objects were created along with the representation of a row, the results could be stored as optionals. Or am I missing something? Yes the values are set when the row is fetched. The objects are re-used for each row. And the backend is given pointers to write the value into as explained above.
But while writing this, I just realized: what I could do is offer a conversion operator for std::optional<int64_t>, of course. Thus, given a row with a column `a` which can be NULL, you would then have

std::optional<int64_t> a1 = row.a; // OK
int64_t a2 = row.a;                // compile failure
I saw that there are optional-like wrappers of values, e.g. sqlpp::text. They're close to optionals since they're storing a value and some flag. What are they used for?
These are the result fields as described above. I guess the key for understanding their mechanics is the bind function. This gets called when a row is fetched: It hands over the address of the field's value to the database backend, where it will be used in methods like mysql_stmt_bind_result for instance. I am offering the optional-like interface since I cannot use optional to actually store the field's value, if I understand the semantics of optional correctly. But as mentioned above, I probably could offer a converter to optional.
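To make those mechanics a bit more tangible, here is a stripped-down sketch of such a result field (simplified; bind_integral_result is an invented connector function, not the actual sqlpp11 code):

#include <cstdint>
#include <cstddef>

// Sketch: a result field stores a plain value plus a NULL flag.
struct integral_result_field
{
    std::int64_t _value = 0;
    bool _is_null = true;

    // Called when fetching rows: the connector gets the addresses of both
    // members and overwrites them for every fetched row.
    // bind_integral_result is a made-up name for whatever the connector offers.
    template <typename Connector>
    void bind(Connector& connector, std::size_t index)
    {
        connector.bind_integral_result(index, &_value, &_is_null);
    }

    bool is_null() const { return _is_null; }
    std::int64_t value() const { return _value; } // NULL handling depends on the configured variant
};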
Are you using tabs? The code doesn't seem to look right on GitHub.
Yes, but actually, I think the bigger problem is that vim indents template code a bit weirdly. It's not so bad with a tab width of two, but I'll have to take care of that sometime soon :-)

Regards,
Roland
Roland Bock wrote:
On 2014-08-18 22:43, Adam Wulkiewicz wrote:
But AFAIU if the C++ objects were created along with the representation of a row, the results could be stored as optionals. Or am I missing something? Yes the values are set when the row is fetched. The objects are re-used for each row. And the backend is given pointers to write the value into as explained above.
But while writing this, I just realized: what I could do is offer a conversion operator for std::optional<int64_t>, of course. Thus, given a row with a column `a` which can be NULL, you would then have

std::optional<int64_t> a1 = row.a; // OK
int64_t a2 = row.a;                // compile failure
Is it the same way with bigger objects, e.g. std::strings? Is the pointer to a std::string passed to the backend (which means that some temporary buffer must be used), or must the pointer to the memory owned by the std::string be passed, after resizing the string?

AFAIU the reason for the problem with optionals is a limitation of the implementation. Couldn't a reference to the optional be passed to the backend and then the optional be filled with data? E.g. using some temporary buffer or whatever method is suitable to get the data, and then the optional re-created using this data.

Even temporary buffers could be omitted. Hmm, it probably even wouldn't be required to pass an optional to the backend. An optional could be created with a default-constructed value:

member = optional<int64_t>(int64_t());

or re-constructed in case there was no valid value stored, probably like this:

if ( !member ) member = optional<int64_t>(int64_t());

and then the address of the value already stored within the optional could be passed deeper:

if ( backend.fill(member.get_ptr()) == false ) member = optional<int64_t>(); // null

Or am I missing something?
On 2014-08-19 02:09, Adam Wulkiewicz wrote:
Roland Bock wrote:
On 2014-08-18 22:43, Adam Wulkiewicz wrote:
But AFAIU if the C++ objects were created along with the representation of a row, the results could be stored as optionals. Or am I missing something? Yes the values are set when the row is fetched. The objects are re-used for each row. And the backend is given pointers to write the value into as explained above.
But while writing this, I just realized: what I could do is offer a conversion operator for std::optional<int64_t>, of course. Thus, given a row with a column `a` which can be NULL, you would then have

std::optional<int64_t> a1 = row.a; // OK
int64_t a2 = row.a;                // compile failure

Is it the same way with bigger objects, e.g. std::strings?

Yes, but see below.

Is the pointer to a std::string passed to the backend (which means that some temporary buffer must be used) or must the pointer to the memory owned by the std::string be passed, after resizing the string?

Afaik, I must not pass the memory owned by a string to anybody for modification, since writing to the return values of both std::string::data() and std::string::c_str() results in undefined behavior.
Anyway, with text results, the situation is special since the memory is actually owned by the backend. All the backends I've seen so far work this way. [...]
Even temporary buffers could be omitted. Hmm, it probably even wouldn't be required to pass an optional to the backend. An optional could be created with a default-constructed value:

member = optional<int64_t>(int64_t());

or re-constructed in case there was no valid value stored, probably like this:

if ( !member ) member = optional<int64_t>(int64_t());

and then the address of the value already stored within the optional could be passed deeper:

if ( backend.fill(member.get_ptr()) == false ) member = optional<int64_t>(); // null

Or am I missing something?
Sure that's possible, but what's the benefit over using a plain value and a flag? For each row to be fetched, I would have to check and potentially reconstruct the optional value just to invalidate it again, if the value is NULL in the new row. In my eyes this just adds overhead for the internal representation.
Btw, I'm not saying you should change the interface. I was just surprised that it couldn't be done.
Got that :-) Regards, Roland
On 19/08/2014 09:55, Roland Bock wrote:
I'm guessing that the problem exists because select is lazily executed. C++ objects corresponding to the data aren't created (e.g. std::string, int, float, etc.). Instead some pointers to buffers are kept (in sqlpp::result_field_t?). And C++ objects are created and returned later from value() method or conversion operators each time one of them is called. Is that right?

Not quite. Let's look at integral.h and the partial specialization of

template <...>
struct result_field_t<...> {...};

Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).
Presumably it is the backend that knows whether the column/result is NULL or not. Therefore it should be the backend's responsibility to fill in the optional correctly -- ie. you should be passing an address/reference to the entire optional, not the internal integer. Otherwise how does the backend return a NULL value?
On 2014-08-19 04:17, Gavin Lambert wrote:
On 19/08/2014 09:55, Roland Bock wrote:
I'm guessing that the problem exists because select is lazily executed. C++ objects corresponding to the data aren't created (e.g. std::string, int, float, etc.). Instead some pointers to buffers are kept (in sqlpp::result_field_t?). And C++ objects are created and returned later from value() method or conversion operators each time one of them is called. Is that right?

Not quite. Let's look at integral.h and the partial specialization of

template <...>
struct result_field_t<...> {...};

Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).

Presumably it is the backend that knows whether the column/result is NULL or not.

Correct.

Therefore it should be the backend's responsibility to fill in the optional correctly

If I had an optional in the result field, yes.

-- ie. you should be passing an address/reference to the entire optional, not the internal integer.

Some backends have functions like this (simplified):

void get_int_field(int index, int* retval);

How would I interact with such an interface if I had an optional<int>? As suggested by Adam:

optional<int> member;
...
if (!member) member = 0;
get_int_field(17, member.get_ptr());
Otherwise how does the backend return a NULL value?
The backend is called with two parameters, one pointer for the value, the other for the is_null information. Regards, Roland
On 19/08/2014 19:21, Roland Bock wrote:
On 2014-08-19 04:17, Gavin Lambert wrote:
Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).
[...]
Therefore it should be the backend's responsibility to fill in the optional correctly

If I had an optional in the result field, yes.

-- ie. you should be passing an address/reference to the entire optional, not the internal integer. Some backends have functions like this (simplified):
void get_int_field(int index, int* retval);
How would I interact with such an interface if I had an optional<int>?
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.

There are several layers, I assume:

1. User code
2. sqlpp11 database-independent frontend
3. sqlpp11 database-specific connector
4. native database library

Between layers 3 & 4 obviously you have to use whatever the native library supports, which is unlikely to be boost::optional (but still possible in some cases). So you might have to provide a raw int64_t pointer to the database up front, and translate from an int64_t pointer and an "is this null" method call (or a bool*) to a boost::optional when it calls you back saying the complete row is ready. (I'm assuming this is asynchronous, otherwise it's easier.)

But between layers 1 & 2 and 2 & 3 you'd only have boost::optionals.
Otherwise how does the backend return a NULL value? The backend is called with two parameters, one pointer for the value, the other for the is_null information.
So once you know that those have been filled in, you can translate it into a boost::optional to be returned to the higher layer. It does mean the value has to be copied (unless boost::optional has had move assignment added since I last looked), but you'd be doing that anyway for std::string so this doesn't seem any worse than that.
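A small sketch of what that translation at the connector layer (3) could look like, once the native driver (4) has filled in the value and the NULL flag (names invented for illustration):

#include <boost/optional.hpp>
#include <cstdint>

// The driver wrote 'value' and 'is_null'; the connector turns the pair into
// an optional for the layers above.
boost::optional<std::int64_t> to_optional(std::int64_t value, bool is_null)
{
    if (is_null)
        return boost::none;
    return value;
}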
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code 2. sqlpp11 database-independent frontend 3. sqlpp11 database-specific connector 4. native database library
Nice definition of layers.

AFAIK 3. is named "bindings" in uBLAS where it binds to e.g. Atlas, Lapack. The Multiprecision lib also has support for interfacing with various 3rd party libs like GMP, but there I can't find a specific name.
On 2014-08-19 10:55, Thijs (M.A.) van den Berg wrote:
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code 2. sqlpp11 database-independent frontend 3. sqlpp11 database-specific connector 4. native database library
Nice definition of layers. +1
AFAIK 3. is named "bindings" in UBlas where it binds to eg Atlas, Lapack In layer 4, the term "binding" is already used by third parties, for instance "binding values to prepared statements". That's why I would not use here.
Cheers, Roland
On 2014-08-19 09:39, Gavin Lambert wrote:
On 19/08/2014 19:21, Roland Bock wrote:
On 2014-08-19 04:17, Gavin Lambert wrote:
Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).
[...]
Therefore it should be the backend's responsibility to fill in the optional correctly

If I had an optional in the result field, yes.

-- ie. you should be passing an address/reference to the entire optional, not the internal integer. Some backends have functions like this (simplified):
void get_int_field(int index, int* retval);
How would I interact with such an interface if I had an optional<int>?
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code
2. sqlpp11 database-independent frontend
3. sqlpp11 database-specific connector
4. native database library

Perfectly correct :-)
Between layers 3 & 4 obviously you have to use whatever the native library supports, which is unlikely to be boost::optional (but still possible in some cases). So you might have to provide a raw int64_t pointer to the database up front, and translate from a int64_t pointer and an "is this null" method call (or a bool*) to a boost::optional when it calls you back saying the complete row is ready. (I'm assuming this is asynchronous, otherwise it's easier.)
As of today, it is synchronous.
But between layers 1 & 2 and 2 & 3 you'd only have boost::optionals.
I can see the reasoning for 1&2. And I understand that it can be done with 2&3 of course, but I am not sure there is a benefit.
Otherwise how does the backend return a NULL value? The backend is called with two parameters, one pointer for the value, the other for the is_null information.
So once you know that those have been filled in, you can translate it into a boost::optional to be returned to the higher layer. It does mean the value has to be copied (unless boost::optional has had move assignment added since I last looked), but you'd be doing that anyway for std::string so this doesn't seem any worse than that.
I can see that for the 1&2 interface, but not necessarily for 2&3, since then I would have two copies of the string: once when obtaining the value from 4 and copying it into 2&3, and once when copying it from 2 to 1.

As of now, I create a temporary string when the value is requested in the 1&2 interface.

Regards,
Roland
Roland Bock wrote:
On 2014-08-19 09:39, Gavin Lambert wrote:
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code 2. sqlpp11 database-independent frontend 3. sqlpp11 database-specific connector 4. native database library
Perfectly correct :-)
Between layers 3 & 4 obviously you have to use whatever the native library supports, which is unlikely to be boost::optional (but still possible in some cases). So you might have to provide a raw int64_t pointer to the database up front, and translate from a int64_t pointer and an "is this null" method call (or a bool*) to a boost::optional when it calls you back saying the complete row is ready. (I'm assuming this is asynchronous, otherwise it's easier.) As of today, it is synchronous.
But between layers 1 & 2 and 2 & 3 you'd only have boost::optionals. I can see the reasoning for 1&2. And I understand that it can be done with 2&3 of course, but I am not sure there is a benefit.
As I see it, the benefit would be the use of the standard-approved way of handling values that could be invalid instead of using library-specific handling.

But I think optionals wouldn't be safe. Correct me if I'm wrong. If optionals were used without a check for validity and an unexpected NULL value was set in the DB (maybe by mistake), it could result in a segmentation fault. Of course, that assumes that the macro BOOST_ASSERT() wouldn't be expanded to some exception throw. This could lead to security vulnerabilities in apps using this library.
Otherwise how does the backend return a NULL value? The backend is called with two parameters, one pointer for the value, the other for the is_null information. So once you know that those have been filled in, you can translate it into a boost::optional to be returned to the higher layer. It does mean the value has to be copied (unless boost::optional has had move assignment added since I last looked), but you'd be doing that anyway for std::string so this doesn't seem any worse than that. I can see that for the 1&2 interface, but not necessarily for 2&3 since then I would have two copies of the string: once when obtaining the value from 4 and copying it into 2&3, one for copying it from 2->1. The above assuming that RVO can't be applied, rvalue refs aren't supported by the compiler, move semantics isn't supported by boost::optional and swap() isn't explicitly used. As of now, I create a temporary string when the value is requested in the 1&2 interface.
So there always must be some temporary object. You could directly create a boost::optional<std::string> using InPlaceFactory (http://www.boost.org/doc/libs/1_56_0/libs/optional/doc/html/boost_optional/i...) or, as I mentioned earlier, create a default-constructed string within the optional and assign to it (assuming it's a valid way of modifying an optional). But as I wrote above, I'm not sure if supporting "raw" optionals would be safe.

Regards,
Adam
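For reference, the in-place construction mentioned above looks roughly like this (assuming Boost.Optional and its in-place factory header):

#include <boost/optional.hpp>
#include <boost/utility/in_place_factory.hpp>
#include <string>

int main()
{
    // The string is constructed directly inside the optional,
    // without a separate temporary std::string.
    boost::optional<std::string> s(boost::in_place("hello"));
}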
On 2014-08-19 14:03, Adam Wulkiewicz wrote:
Roland Bock wrote:
On 2014-08-19 09:39, Gavin Lambert wrote:
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code 2. sqlpp11 database-independent frontend 3. sqlpp11 database-specific connector 4. native database library
Perfectly correct :-)
Between layers 3 & 4 obviously you have to use whatever the native library supports, which is unlikely to be boost::optional (but still possible in some cases). So you might have to provide a raw int64_t pointer to the database up front, and translate from a int64_t pointer and an "is this null" method call (or a bool*) to a boost::optional when it calls you back saying the complete row is ready. (I'm assuming this is asynchronous, otherwise it's easier.) As of today, it is synchronous.
But between layers 1 & 2 and 2 & 3 you'd only have boost::optionals. I can see the reasoning for 1&2. And I understand that it can be done with 2&3 of course, but I am not sure there is a benefit.
As I see it, the benefit would be the use of the standard-approved way of handling values that could be invalid instead of using a library-specific handling.
But I think optionals wouldn't be safe. Correct me if I'm wrong. If optionals were used without a check for validity and the unexpected NULL value was set in the DB (maybe by mistake), it could result in segmentation fault. Of course assuming that the macro BOOST_ASSERT() wouldn't be expanded to some exception throw. This could lead to some security vulnerabilities in apps using this library.

I thought the whole purpose of the optional-interface was to make people check for validity first?
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:

You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL:

Variant A: an exception is thrown
Variant B: the "trivial" value is returned, 0 for numbers.

The Variant is chosen at compile time.

[...]

Regards,
Roland

PS: I cut the rest since we are in agreement there and the number of copies, possible RVO etc. depend on implementation details which might have to be adjusted if optional were to be used :-)
Roland Bock wrote:
On 2014-08-19 14:03, Adam Wulkiewicz wrote:
But I think optionals wouldn't be safe. Correct me if I'm wrong. If optionals were used without a check for validity and the unexpected NULL value was set in the DB (maybe by mistake), it could result in segmentation fault. Of course assuming that the macro BOOST_ASSERT() wouldn't be expanded to some exception throw. This could lead to some security vulnerabilities in apps using this library. I thought the whole purpose of the optional-interface was to make people to check for validity first?
If I understand you correctly, you should be more happy with the current implementation
Yes, exceptions and default values seem to be better for handling NULL/unexpected values.

I started the discussion because you wrote in the Wiki that the problem with optionals is related to the binding of values, which IMO is just an implementation detail. Maybe it would be a good idea to explicitly state why you think that optionals shouldn't be used, i.e. define the rationale behind it.

Regards,
Adam
On 2014-08-19 16:01, Adam Wulkiewicz wrote:
Roland Bock wrote:
On 2014-08-19 14:03, Adam Wulkiewicz wrote:
But I think optionals wouldn't be safe. Correct me if I'm wrong. If optionals were used without a check for validity and the unexpected NULL value was set in the DB (maybe by mistake), it could result in segmentation fault. Of course assuming that the macro BOOST_ASSERT() wouldn't be expanded to some exception throw. This could lead to some security vulnerabilities in apps using this library. I thought the whole purpose of the optional-interface was to make people to check for validity first?
If I understand you correctly, you should be more happy with the current implementation
Yes, exceptions and default values seem to be better for handling NULL/unexpected values.
I started the discussion because you wrote in the Wiki that the problem with optionals is related to the binding of values, which IMO is just an implementation detail. Maybe it would be a good idea to explicitly state why you think that optionals shouldn't be used, i.e. define the rationale behind it.

Thanks for the discussion, it's been quite helpful :-)
Cheers, Roland
On 20/08/2014 00:49, Roland Bock wrote:
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:
You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL
Variant A: an exception is thrown Variant B: the "trivial" value is returned, 0 for numbers.
The Variant is chosen at compile time.
That seems like a reasonable justification for not using optional, then, as optional isn't configurable in this way. If you made your type implicitly convertible to optional and back, that ought to make everybody happy. :) (Well, except maybe the folks who hate implicit conversions, but they're never happy.)
On 2014-08-20 01:54, Gavin Lambert wrote:
On 20/08/2014 00:49, Roland Bock wrote:
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:
You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL
Variant A: an exception is thrown Variant B: the "trivial" value is returned, 0 for numbers.
The Variant is chosen at compile time.
That seems like a reasonable justification for not using optional, then, as optional isn't configurable in this way.
If you made your type implicitly convertible to optional and back, that ought to make everybody happy. :) That's possible of course :-)
(Well, except maybe the folks who hate implicit conversions, but they're never happy.)
rofl! Cheers, Roland
On 20/08/2014 9:54 AM, Gavin Lambert wrote:
If you made your type implicitly convertible to optional and back, that ought to make everybody happy. :) (Well, except maybe the folks who hate implicit conversions, but they're never happy.)
I love implicit conversions. I love implicitly converting const char* to std::string. I love implicitly converting std::string to boost::optional<std::string>. So imagine my heartache when they told me I couldn't implicitly convert const char* to boost::optional<std::string>. What, you mean I can't really use const char* /wherever/ I can use std::string?

I feel that I'm in for more tears if we have an implicit conversion from boost::optional to something else.

--- Michael
On 20/08/2014 18:30, Michael Shepanski wrote:
On 20/08/2014 9:54 AM, Gavin Lambert wrote:
If you made your type implicitly convertible to optional and back, that ought to make everybody happy. :) (Well, except maybe the folks who hate implicit conversions, but they're never happy.)
I love implicit conversions. I love implicitly converting const char* to std::string. I love implicitly converting std::string to boost::optional<std::string>. So imagine my heartache when they told me I couldn't implicitly convert const char* to boost::optional<std::string>. What, you mean I can't really use const char* /wherever/ I can use std::string?
I feel that I'm in for more tears if we have an implicit conversion from boost::optional to something else.
Two implicit conversions is double-jeopardy, partly because that would bring an infinite cycle into scope. Yes, it does mean that you have to be a little more explicit sometimes, but I'm unconvinced that would be a real problem in practice. Methods in the sqlpp interface would accept a some_other_optional<T>, which means that you could pass them one of those directly, or a boost::optional<T> (via some_other_optional<T> implicit constructor), or a T (via some_other_optional<T> implicit constructor). Note that the latter cannot implicitly be constructed as a boost::optional<T> first because that would require two implicit conversions. So everything should just naturally work. (And you can't use const char* wherever you use std::string anyway -- "a" + "b" is a very different thing from std::string("a") + "b".)
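A small, self-contained illustration of the single-user-defined-conversion rule discussed here (sqlpp_optional is a made-up stand-in for "some_other_optional", not an actual sqlpp11 type):

#include <boost/optional.hpp>
#include <string>

// Made-up wrapper standing in for the optional-like type in the sqlpp interface.
template <typename T>
struct sqlpp_optional
{
    sqlpp_optional(const T& v) : value(v), valid(true) {}   // implicit from T
    sqlpp_optional(const boost::optional<T>& o)              // implicit from boost::optional<T>
        : value(o ? *o : T()), valid(static_cast<bool>(o)) {}

    T value;
    bool valid;
};

void api(sqlpp_optional<std::string>) {}

int main()
{
    api(std::string("direct"));                             // T -> sqlpp_optional<T>: one conversion, OK
    api(boost::optional<std::string>(std::string("opt")));  // boost::optional<T> -> sqlpp_optional<T>: OK
    // api("literal"); // const char* -> std::string -> sqlpp_optional<std::string>:
    //                 // would need two user-defined conversions, so it does not compile.
}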
On 20/08/2014 9:54 AM, Gavin Lambert wrote:
If you made your type implicitly convertible to optional and back, that ought to make everybody happy. :) (Well, except maybe the folks who hate implicit conversions, but they're never happy.)
On 2014-08-20 08:30, Michael Shepanski wrote:

I love implicit conversions. I love implicitly converting const char* to std::string. I love implicitly converting std::string to boost::optional<std::string>. So imagine my heartache when they told me I couldn't implicitly convert const char* to boost::optional<std::string>. What, you mean I can't really use const char* /wherever/ I can use std::string?

Well, no, you can't, and I think that's good since multi-level implicit conversion could get very confusing and quite ambiguous: If there were several conversion paths, which one should the compiler choose? Adding an additional include file could then change the choice...

I feel that I'm in for more tears if we have an implicit conversion from boost::optional to something else.

Nah, at least not in the cases I have in mind. Another overload or partial specialization will do just fine.

Cheers,
Roland
On 08/19/2014 02:49 PM, Roland Bock wrote:
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:
You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL
Variant A: an exception is thrown Variant B: the "trivial" value is returned, 0 for numbers.
The Variant is chosen at compile time.
I find it a bit worrying that this behavior is changeable at compile-time, as it makes reviewing and reuse more challenging. With optional<T> you will get variant A. If you want variant B, then you use optional<T>::value_or(). This makes your intent clear in the code.
On 2014-08-21 12:32, Bjorn Reese wrote:
On 08/19/2014 02:49 PM, Roland Bock wrote:
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:
You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL
Variant A: an exception is thrown Variant B: the "trivial" value is returned, 0 for numbers.
The Variant is chosen at compile time.
I find it a bit worrying that this behavior is changeable at compile-time, as it makes reviewing and reuse more challenging. The decision is taken in the connector class or the respective column. If you change those, you will have some effect.
Also, it would be easy to make your code break at compile time if you switch between A and B. It already breaks if you use B in the typical way (see below) and switch to A. Thus, you would not get nasty surprises after compilation.
With optional<T> you will get variant A. If you want variant B, then you use optional<T>::value_or(). This makes your intent clear in the code.
_Variant A is (as of today):_

if (!row.alpha.is_null())
{
  int a = row.alpha.get_value();
}
else
{
  // something else
}

_Variant B is:_

int a = row.alpha;

The conversion operator to the respective value type is not available in Variant A. I am using Variant B all over the place. If I had to change all that into

int a = row.alpha.get_value_or(0);

I would have to add kilobytes of noise, lowering readability, IMO. Since I personally do not use Variant A, I am open to suggestions, of course :-)

Cheers,
Roland
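To illustrate how such a compile-time switch can be realized, here is a sketch of the technique with invented names (not the actual sqlpp11 code): the conversion operator only exists in Variant B.

#include <cstdint>
#include <stdexcept>
#include <type_traits>

template <bool NullIsTrivial>   // the compile-time configuration
struct int_field
{
  std::int64_t _value = 0;
  bool _is_null = true;

  bool is_null() const { return _is_null; }

  std::int64_t get_value() const // Variant A style access
  {
    if (_is_null)
      throw std::logic_error("NULL value accessed");
    return _value;
  }

  // Only available in Variant B: NULL silently maps to the trivial value 0.
  template <bool B = NullIsTrivial, typename std::enable_if<B, int>::type = 0>
  operator std::int64_t() const
  {
    return _is_null ? 0 : _value;
  }
};

With int_field<true>, `std::int64_t a = field;` compiles; with int_field<false> it does not, so code written for Variant B breaks loudly at compile time when switched to Variant A, as described above.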
On 2014-08-21 12:32, Bjorn Reese wrote:
On 08/19/2014 02:49 PM, Roland Bock wrote:
If I understand you correctly, you should be more happy with the current implementation then: As of today, depending on the compile-time configuration in the connector you have the following possible behaviors:
You can always check for NULL by calling is_null. But if you ignore the outcome and try to obtain the value of a field which happens to be NULL
Variant A: an exception is thrown Variant B: the "trivial" value is returned, 0 for numbers.
The Variant is chosen at compile time.
I find it a bit worrying that this behavior is changeable at compile-time, as it makes reviewing and reuse more challenging.
With optional<T> you will get variant A.
That is not the case, is it?

* If I call get_value() on a NULL field in variant A, an exception gets thrown.
* If I call get() on an uninitialized boost::optional, I run into a BOOST_ASSERT. By default, this is equivalent to a C assert (i.e. not an exception), but you can also turn it off completely at compile time. Then, you will have basically undefined behavior. Or you can configure BOOST_ASSERT at compile time to do something entirely different.

I fail to see how the optional way would be any better with respect to compile-time configuration and code re-use or code reviews. When get() is used, you have no idea what will happen tomorrow. You only know for sure when get_value_or() is used.

That's why I prefer Variant B anyway: No matter what, at least you get a defined value in case you run into a NULL. And if you really need to know that something is NULL, you have to check that regardless of whether it is an optional or not.

Best,
Roland
On 2014-08-22 12:29, Bjorn Reese wrote:
On 08/21/2014 05:32 PM, Roland Bock wrote:
* If I call get() on an uninitialized boost::optional, I run into a BOOST_ASSERT.

optional<T>::value() throws. Notice that get() is not part of the std::optional proposal.

Oh, missed that. Sorry. Thanks for the std::optional information :-)

Ok, so I could have

operator optional<int>() const; // only in variant A
operator int() const;           // only in variant B

In addition:

bool is_null() const;
int value() const; // throws if field is null

Best,
Roland
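For comparison, the boost::optional access functions discussed in this sub-thread behave roughly like this (value()/value_or() assuming Boost >= 1.56):

#include <boost/optional.hpp>

int main()
{
    boost::optional<int> o;  // empty
    // o.get()   -> BOOST_ASSERT fires (configurable, possibly compiled out entirely)
    // o.value() -> throws boost::bad_optional_access
    int a = o.value_or(0);   // always well-defined: yields 0 here
    (void)a;
}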
On 2014-08-19 09:39, Gavin Lambert wrote:
On 19/08/2014 19:21, Roland Bock wrote:
On 2014-08-19 04:17, Gavin Lambert wrote:
Now, this contains an int64_t value. The address of this value is given to the backend in the method bind() when fetching each result row (no laziness here). It seems to me that I cannot replace int64_t by boost::optional<int64_t>. For instance, I cannot call get() to obtain the address of the value if the optional is not initialized (I would run into an assert).
[...]
Therefore it should be the backend's responsibility to fill in the optional correctly

If I had an optional in the result field, yes.

-- ie. you should be passing an address/reference to the entire optional, not the internal integer. Some backends have functions like this (simplified):
void get_int_field(int index, int* retval);
How would I interact with such an interface if I had an optional<int>?
I think we've had a terminology clash. By "backend" I thought you meant "the sqlpp11 class that knows how to talk to the native driver", not the native driver itself. Of course the native driver probably won't know how to drive an optional, nor should it be expected to.
There are several layers, I assume:
1. User code 2. sqlpp11 database-independent frontend 3. sqlpp11 database-specific connector 4. native database library
BTW: I have proposed an open content session for CppCon to start writing a few more instances of layer 3. Apart from making the library usable for more people, this would also help to evaluate the pros and cons of the interface between 2&3.

Cheers,
Roland
Roland Bock wrote:
_NULL handling:_ Enabled by the restructured code and spurred by the library quince by Michael Shepanski, sqlpp11 can now calculate which result fields can or cannot be NULL.
Speaking of which, handling NULL for result values has been discussed a lot. The library now has compile-time configurable behavior, you can choose between an std::optional-like interface and mapping NULL to the trivial value of the result type, e.g. 0 for numbers or "" for strings, see also https://github.com/rbock/sqlpp11/wiki/NULL
The way NULL values are handled is similar to how missing options
are handled in ProgramOptions where if some parameter isn't passed it
may be set to some default value which may be set by the user.
Could it be possible to allow users to define their "trivial" values in
sqlpp?
I'm aware that there is a difference between the two - in sqlpp the
structure is defined in compile-time but ProgramOptions is much more
user friendly at the stage of defining possible options:
desc.add_options()
("help", "produce help message")
("optimization", po::value<int>(&opt)->default_value(10),
"optimization level")
("include-path,I", po::value< vector<string> >(),
"include path")
("input-file", po::value< vector<string> >(), "input file")
;
I'm wondering, would it be possible to implement a similar compile-time
interface in sqlpp, e.g. something similar to MPL?
I'm asking because what the sqlpp library requires is very complicated:
https://github.com/rbock/sqlpp11/blob/master/tests/Sample.h
I'm aware that its purpose is to create a type reflecting required
columns and to use them from the level of C++ but couldn't it be done
simpler?
The first step could be to simplify the process, e.g.:

template <typename T>
struct col_member1
  : sqlpp::column<...> {};

with sqlpp::column<> e.g. defined as:

template <
  template <typename> class Derived,
  const char* name,
  typename... Traits
>
struct column
{
  /*the rest*/
};
Or maybe separate the C++ members from the values' definitions and use them only as a source of C++ member names. Then define the table structure in an MPL-like way:

template <typename T>
struct col_member1 { T col1; };

template <typename T>
struct col_member2 { T col2; };

struct tab1
  : sqlpp::table<...> {};

with sqlpp::column<> e.g. defined as:

template <
  template <typename> class Base,
  const char* name,
  typename... Traits
>
struct column
  : Base<...>
{
  /*the rest*/
};

or something similar...
Regards,
Adam
Roland Bock wrote:
_NULL handling:_ Enabled by the restructured code and spurred by the library quince by Michael Shepanski, sqlpp11 can now calculate which result fields can or cannot be NULL.
Speaking of which, handling NULL for result values has been discussed a lot. The library now has compile-time configurable behavior, you can choose between an std::optional-like interface and mapping NULL to the trivial value of the result type, e.g. 0 for numbers or "" for strings, see also https://github.com/rbock/sqlpp11/wiki/NULL
On 2014-08-19 16:43, Adam Wulkiewicz wrote:

The way NULL values are handled is similar to how missing options are handled in ProgramOptions where if some parameter isn't passed it may be set to some default value which may be set by the user. Could it be possible to allow users to define their "trivial" values in sqlpp?

If result fields correspond to columns, then you could certainly define a per-column default value or even a function to be called which would then yield the value, or throw, or assert... In other cases, say select(t.a + t.b), you could define a per-type default value/method in the connector. That should be rather easy to add.
I'm aware that there is a difference between the two - in sqlpp the structure is defined in compile-time but ProgramOptions is much more user friendly at the stage of defining possible options:
[...]
I'm wondering, could it be possible to implement similar, compile-time interface in sqlpp, e.g. something similar to MPL? I'm asking because what the sqlpp library requires is very complicated: https://github.com/rbock/sqlpp11/blob/master/tests/Sample.h I'm aware that its purpose is to create a type reflecting required columns and to use them from the level of C++ but couldn't it be done simpler?
The crucial part is the _member_t template. I need that one. The other
stuff can be constructed whatever way you like. But if the name of the
column is `beta`, then I need a way to get this:
template<typename T>
struct _member_t
{
  T beta;
  T& operator()() { return beta; }
  const T& operator()() const { return beta; }
};
This is the magic ingredient that makes tables, result rows and parameter sets of prepared statements have members with a proper name.
The way it works is best to be observed in the table_t template, see
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table.h
template <...>
struct table_t: public table_base_t, public ColumnSpec::_name_t::template _member_t<...>...
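Since the mixin trick is easy to miss, here is a self-contained toy version of the technique (all names invented; not the actual sqlpp11 types):

#include <iostream>
#include <string>

// A "name spec": it knows how to mix in a member with its name.
struct beta_name
{
    template <typename T>
    struct _member_t
    {
        T beta;                                   // the member users see: row.beta
        T& operator()() { return beta; }          // uniform access for the library
        const T& operator()() const { return beta; }
    };
};

// A row/table-like type simply inherits one _member_t instantiation per spec.
template <typename... NameSpec>
struct row_t : NameSpec::template _member_t<std::string>...
{
};

int main()
{
    row_t<beta_name> row;
    row.beta = "hello";            // named member, no macros involved
    std::cout << row.beta << "\n";
}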
Roland Bock wrote:
On 2014-08-19 16:43, Adam Wulkiewicz wrote:
I'm wondering, could it be possible to implement similar, compile-time interface in sqlpp, e.g. something similar to MPL? I'm asking because what the sqlpp library requires is very complicated:https://github.com/rbock/sqlpp11/blob/master/tests/Sample.h I'm aware that its purpose is to create a type reflecting required columns and to use them from the level of C++ but couldn't it be done simpler? The crucial part is the _member_t template. I need that one. The other stuff can be constructed whatever way you like. But if the name of the
column is `beta`, then I need a way to get this:
template<typename T> struct _member_t { T beta; T& operator()() { return beta; } const T& operator()() const { return beta; } };
This is the magic ingredient that makes tables, result rows and parameter sets of prepared statements to have members with a proper name.
The way it works is best to be observed in the table_t template, see https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table.h
template <...>
struct table_t: public table_base_t, public ColumnSpec::_name_t::template _member_t<...>...
Ok AFAIU a struct like _member_t should define some convenient member
variable for the user and must define operator() for the library.
But the rest could be automatically generated, couldn't it?
Why not just pass a list of templates of classes adapted to MemberType
concept (defined operator()) into the table/column/etc.?
I'm thinking about something like the code below. I don't know exactly
what's required so this is just an example of a technique rather than a
solution ready-to-use in sqlpp.
namespace sqlpp {

template <...>
struct table_t {/*all that's required*/};

template <...>
struct column_t {/*all that's required*/};

template <...>
struct table
  : public Table< table_t<Table> >
  , public Members< column_t<...> >...

For each column, the table inherits from the column's _member_t which is
instantiated with the column itself. This adds a member of the column's
type and the column's name to the table.
The same as above, table is inherited from Table and all passed Members.
The thing is to allow the user to implement only the required part and
generate the rest automatically. I see no way to create such a template other than having it in the code.
You should of course not write it personally. You should use macros
(yuk) or code generators similar to the ddl2cpp script in the
repository. But you have to somehow create this code. Unless you know some brilliant TMP technique for this?
Personally I think that is a missing feature in C++. I'd call it named
member mixin. Or something like that. Hmm.
Or, in addition to names and values, we should be able to declare names
in templates. That would be awesome! Anyway, if the name thing can be solved without macros, I am all for a
terser notation :-) Maybe in one of the future standards... :)
Regards,
Adam

Roland Bock wrote:

On 2014-08-19 16:43, Adam Wulkiewicz wrote:

I'm wondering, could it be possible to implement similar, compile-time
interface in sqlpp, e.g. something similar to MPL?
I'm asking because what the sqlpp library requires is very
complicated:https://github.com/rbock/sqlpp11/blob/master/tests/Sample.h
I'm aware that its purpose is to create a type reflecting required
columns and to use them from the level of C++ but couldn't it be done
simpler?
The crucial part is the _member_t template. I need that one. The other
stuff can be constructed whatever way you like. But if the name of the column is `beta`, then I need a way to get this: template<typename T>
struct _member_t
{
T beta;
T& operator()() { return beta; }
const T& operator()() const { return beta; }
}; This is the magic ingredient that makes tables, result rows and
parameter sets of prepared statements to have members with a proper
name. The way it works is best to be observed in the table_t template, see
https://github.com/rbock/sqlpp11/blob/master/include/sqlpp11/table.h

Ok AFAIU a struct like _member_t should define some convenient member
variable for the user and must define operator() for the library.
But the rest could be automatically generated, couldn't it?
Why not just pass a list of templates of classes adapted to MemberType
concept (defined operator()) into the table/column/etc.?
I'm thinking about something like the code below. I don't know exactly
what's required so this is just an example of a technique rather than
a solution ready-to-use in sqlpp.
The idea is good. For the columns, you will have to add a few more
parameters, e.g. the value_type (mandatory), can_be_null,
must_not_insert, must_not_update, null_is_trivial, trivial_is_null,
maybe as per your suggestion a default value or a function for producing
it. That default stuff might be tough in such a design.
But that's manageable. And yes, the code would be shorter, although not that much, I suspect. The only problem I have with it is that now the column types are going to be about a hundred characters long. And users are going to operate on columns all the time. So error messages have to be short.

I would thus add a struct which inherits from the column template instance for each column, e.g.

struct alpha: public column<...> {};

I see no way to create such a template other than having it in the code.
You should of course not write it personally. You should use macros
(yuk) or code generators similar to the ddl2cpp script in the
repository. But you have to somehow create this code. Unless you know some brilliant TMP technique for this?
Personally I think that is a missing feature in C++. I'd call it named
member mixin. Or something like that. Hmm.
Or, in addition to names and values, we should be able to declare names
in templates. That would be awesome! Anyway, if the name thing can be solved without macros, I am all for a
terser notation :-) Maybe in one of the future standards... :) Yeah, I should start working on that. I am really quite fascinated by
the idea of having names as template parameters.
Cheers,
Roland

Roland Bock wrote:

Ok AFAIU a struct like _member_t should define some convenient member
variable for the user and must define operator() for the library.
But the rest could be automatically generated, couldn't it?
Why not just pass a list of templates of classes adapted to MemberType
concept (defined operator()) into the table/column/etc.?
I'm thinking about something like the code below. I don't know exactly
what's required so this is just an example of a technique rather than
a solution ready-to-use in sqlpp.
The idea is good. For the columns, you will have to add a few more
parameters, e.g. the value_type (mandatory), can_be_null,
must_not_insert, must_not_update, null_is_trivial, trivial_is_null,
maybe as per your suggestion a default value or a function for producing
it. That default stuff might be tough in such a design. The additional traits would be a list of variadic template parameters.
So if this list contained only one type, e.g. sqlpp::default_traits the
default traits could be generated e.g. by specializing sqlpp::make_traits<>.
Since it's impossible to define a default argument of a template
parameters pack (another missing language feature?) it could be
"simulated" with something like:
template <...>
struct column
{
  using traits = make_traits<...>;
};

But that's manageable. And yes, the code would be shorter, although not
that much, I suspect. The only problem I have with it is that now the
column types are going to be about a hundred characters long. And users
are going to operate on columns all the time. So error message have to
be short.
Do you have in mind the code of the library or user's code?
I expect that the user's code, even not using defaults, would be a lot
shorter.
But the most important thing is that the definition of a table would probably be clearer, in one place, etc.
Or am I wrong?

I would thus add a struct which inherits from the column template instance for each column, e.g. struct alpha: public column<...>

With variadic templates the construction of traits out of this would be straightforward.
An alternative would be to take additional parameters/traits list as the
3rd parameter as you wrote below.
Btw, why must a column be aware of a Table?
Can a table also have some traits specified?
I'm asking because then there would be 2 lists that should be passed -
Members and Traits. I tried something similar a while back but failed, which is mainly due
to lack of perseverance, I guess. Right now, I am happy with the current design because it is quite easy
to change things, like introducing that default value or a function for
handling attempts to read NULL. Sure, I'm not saying that you should change the design. I'm just sharing
my thoughts. If you want to put everything into that one list of template parameters,
it is much tougher, IMO. I mean how would you add a function for
handling access to NULL value? You would need another class, I think.
And you would have to group those tags into a tuple or type_set, because
otherwise it would be ugly to add another optional parameter... I'm guessing that the function or ... could be passed as yet another
trait like:
struct alpha: public column<...>

On 2014-08-20 00:34, Adam Wulkiewicz wrote:

Roland Bock wrote:

Ok AFAIU a struct like _member_t should define some convenient member
variable for the user and must define operator() for the library.
But the rest could be automatically generated, couldn't it?
Why not just pass a list of class templates adapted to a MemberType
concept (i.e. defining operator()) into the table/column/etc.?
I'm thinking about something like the code below. I don't know exactly
what's required so this is just an example of a technique rather than
a solution ready-to-use in sqlpp.
The idea is good. For the columns, you will have to add a few more On 2014-08-19 18:56, Adam Wulkiewicz wrote:
parameters, e.g. the value_type (mandatory), can_be_null,
must_not_insert, must_not_update, null_is_trivial, trivial_is_null,
maybe as per your suggestion a default value or a function for producing
it. That default stuff might be tough in such a design. The additional traits would be a list of variadic template parameters.
So if this list contained only one type, e.g. sqlpp::default_traits
the default traits could be generated e.g. by specializing
sqlpp::make_traits<>.
Since it's impossible to define a default argument for a template
parameter pack (another missing language feature?) it could be
"simulated" with something like: template
struct column
{
using traits = make_traits Or all traits could be passed as one list type like MPL sequence or
someting like that as you wrote below. But thats manageable. And yes, the code would be shorter, although not
that much, I suspect. The only problem I have with it is that now the
column types are going to be about a hundred characters long. And users
are going to operate on columns all the time. So error messages have to
be short.
Do you have in mind the code of the library or user's code? User's code:
* You have the member template which must be defined outside.
* You have the get_name method which should be defined outside the
member template since I have no instance of that template where I
need the name (you still can't use a string literal as a template
parameter directly, right? Like table_t<"sample">?)
* You need to group the member template and the get_name method since
they are always used in combination
* You need a struct or class to hold the default value or function
And if you don't want to have all this flying around as individual
pieces with individual names, then you will group it into a class. And
you're back to where you started. I expect that the user's code, even not using defaults, would be a lot
shorter.
But the most important is that the definition of a table would
probably be more clear, in one place, etc.
Or am I wrong? I think you're wrong, although I'd love to be wrong about that :-)
Based on my thoughts above you'd end up with
struct Alpha
{
struct _name_t
{
static constexpr const char* _get_name() { return "alpha"; }
template<typename T>
struct _member_t
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
};
struct _trivial_t
{
int64_t get_trivial_value() { return 42; }
}
};
struct alpha: public column_t I would thus add a struct which inherits from the column template
instance for each column, e.g. struct alpha: public column With variadic templates the construction of traits out of this would
be straightforward.
An alternative would be to take additional parameters/traits list as
the 3rd parameter as you wrote below. Btw, why must a column be aware of a Table? For three reasons at least:
1. representation: in most statement types, more than one table can be
involved, e.g. when using some kind of join. To avoid name clashes,
columns are represented as tablename.columnname when being serialized
2. consistency checking: sqlpp11 performs a lot of checks at compile
time that your statements are consistent, for instance, it detects
if you are selecting columns from tables which are not mentioned in
the from clause. It therefore has to know which tables the selected
columns belong to.
3. determining can_be_null for result fields: if you are using any of
the outer joins, then selected columns of those outer tables can be
null. In order to determine this at compile time, again the column
has to be associated with its table.
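A rough sketch of how such an association can drive the compile-time check (the types below are simplified placeholders, not sqlpp11's real internals):

    #include <type_traits>

    struct tab_person;  // hypothetical table type

    // each column carries the table it belongs to
    template <typename Table>
    struct name_column
    {
        using _table = Table;
        static constexpr const char* _name = "name";  // serialized as "tab_person.name"
    };

    // sketch of the consistency check: is the column's table among the from() tables?
    template <typename Column, typename... FromTables>
    struct is_covered_by_from : std::false_type {};

    template <typename Column, typename T, typename... Rest>
    struct is_covered_by_from<Column, T, Rest...>
        : std::conditional<std::is_same<typename Column::_table, T>::value,
                           std::true_type,
                           is_covered_by_from<Column, Rest...>>::type
    {
    };

    // e.g. static_assert(is_covered_by_from<name_column<tab_person>, tab_person>::value,
    //                    "selected column is not covered by the from clause");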
And this association has to be done not only for those pre-defined
tables and their columns, it also has to work with sub-selects which are
used as tables, of course :-) Can a table also have some traits specified? Not today, but that will probably change soon. Read-only would be a very
good trait for a table, for instance. I'm asking because then there would be 2 lists that should be passed -
Members and Traits. I tried something similar a while back but failed, which is mainly due
to lack of perseverance, I guess. Right now, I am happy with the current design because it is quite easy
to change things, like introducing that default value or a function for
handling attempts to read NULL. Sure, I'm not saying that you should change the design. I'm just
sharing my thoughts. And I really appreciate it :-) If you want to put everything into that one list of template parameters,
it is much tougher, IMO. I mean how would you add a function for
handling access to NULL value? You would need another class, I think.
And you would have to group those tags into a tuple or type_set, because
otherwise it would be ugly to add another optional parameter... I'm guessing that the function or ... could be passed as yet another
trait like: struct alpha: public column If not passed, a default trivial value would be used. The best would be to somehow pass a static value at compile time, but
only integral types could be handled this way. A reference to a
global external variable of non-integral type could also be passed as
a template parameter, but it would still have to be defined somewhere,
so it wouldn't be convenient. So some_generator could be a type of default-constructible function
object or a pointer to a function, etc. Or does someone know some trick that could be used here? Well, anonymous in-place class definitions would help to keep the
relevant information in one place. Another missing language feature, I
think. Something like
struct alpha : public column_t<
Table,
struct {
static constexpr const char* _get_name() { return "alpha"; }
template<typename T>
struct _member_t
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
},
struct { int64_t _get_trivial() const { return 42;}},
sqlpp::make_traits<sqlpp::integral>
> {};
But that's still not much shorter :-(
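As a side note, a small sketch of the generator idea from above (a default-constructible function object supplying the value that replaces NULL; result_field and answer_generator are invented names for illustration, not sqlpp11 API):

    #include <cstdint>

    struct answer_generator
    {
        int64_t operator()() const { return 42; }  // value used instead of NULL
    };

    // sketch of a result field that falls back to the generator on NULL
    template <typename ValueType, typename TrivialGenerator = answer_generator>
    struct result_field
    {
        bool is_null;
        ValueType raw;

        ValueType value() const
        {
            return is_null ? static_cast<ValueType>(TrivialGenerator()()) : raw;
        }
    };

    // usage sketch: result_field<int64_t> f{true, 0};  // f.value() == 42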
I believe that the key is the name stuff. If we could use names in the
same way as types and values, for instance as template parameters, this
would be much easier, both in user code and in the library code.
Cheers,
Roland Roland Bock wrote: On 2014-08-20 00:34, Adam Wulkiewicz wrote: Roland Bock wrote: But thats manageable. And yes, the code would be shorter, although not
that much, I suspect. The only problem I have with it is that now the
column types are going to be about a hundred characters long. And users
are going to operate on columns all the time. So error messages have to
be short.
Do you have in mind the code of the library or user's code?
User's code: * You have the member template which must be defined outside.
* You have the get_name method which should be defined outside the
member template since I have no instance of that template where I
need the name (you still can't use a string literal as a template
parameter directly, right? Like table_t<"sample">?)
* You need to group the member template and the get_name method since
they are always used in combination
* You need a struct or class to hold the default value or function And if you don't want to have all this flying around as individual
pieces with individual names, then you will group it into a class. And
you're back to where you started.
Yes, passing a string literal isn't possible unfortunately. So the member and a name must be bound together somehow but the rest
could still be automatically generated. In particular, IMHO the
specification of a default value should be optional (e.g. passed as yet
another trait). The library shouldn't require defining it each time,
even as some dummy function if a user wanted to use exceptions. Besides,
defining the default value generator as external to the member-"name"
binding would probably be preferable because the same generator could be
reused for many columns. However I don't expect that the generator would
do something complicated, rather just return a value. But for integral
members it could be predefined in the sqlpp and it could be passed as
just 1 additional type. I expect that the user's code, even not using defaults, would be a lot
shorter.
But the most important is that the definition of a table would
probably be more clear, in one place, etc.
Or am I wrong?
I think you're wrong, although I'd love to be wrong about that :-) Based on my thoughts above you'd end up with struct Alpha
{
struct _name_t
{
static constexpr const char* _get_name() { return "alpha"; }
template<typename T>
struct _member_t
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
};
struct _trivial_t
{
int64_t get_trivial_value() { return 42; }
}
}; struct alpha: public column_t (I need to be able to combine name and trivial value freely, for
instance when using an alias of a column, that's why those have to be
separated). I seem to be ending up with exactly the same number of lines in the user
code. Technically, I /could/ do without the name_t and move the get_name
function into the member template code, but that would also mean
inheriting multiple versions of the get_name method into tables and rows. Hmm, is the member template used in many places? AFAIU it must be used
at least 2 times, to define columns, tables, etc. and later to construct
a row. Well, it isn't that important.
If you write it this way:
struct Alpha
{
static constexpr const char* _get_name() { return "alpha"; }
template<typename T>
struct _member_t
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
};
struct alpha: public column_t If you want to put everything into that one list of template parameters,
it is much tougher, IMO. I mean how would you add a function for
handling access to NULL value? You would need another class, I think.
And you would have to group those tags into a tuple or type_set, because
otherwise it would be ugly to add another optional parameter...
I'm guessing that the function or ... could be passed as yet another
trait like: struct alpha: public column If not passed, a default trivial value would be used. The best would be to somehow pass a static value in compile-time but
only integral types could be handled this way. The reference to the
global external variable of non-integral type could also be passed as
a template parameter but still it would have to be defined somewhere
so it wouldn't be convenient. So some_generator could be a type of default-constructible function
object or a pointer to a function, etc. Or does someone know some trick that could be used here?
Well, anonymous in-place class definitions would help to keep the
relevant information in one place. Another missing language feature, I
think. Something like struct alpha : public column_t<
Table,
struct {
static constexpr const char* _get_name() { return "alpha"; }
template<typename T>
struct _member_t
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
},
struct { int64_t _get_trivial() const { return 42;}},
sqlpp::make_traits<sqlpp::integral>
> {}; But that's still not much shorter :-( I believe that the key is the name stuff. If we could use names in the
same way as types and values, for instance as template parameters, this
would be much easier, both in user code and in the library code. It could be convenient to generate a type of a non-parameter lambda
expression in unevaluated context, something like:
sqlpp::make_traits Roland Bock wrote: Dear Boosters, Quite a bit has happened since last I reported about sqlpp11 in this
forum [1,2]. I have incorporated a lot of the feedback you gave me,
hopefully bringing the library closer to a reviewable state. Source: https://github.com/rbock/sqlpp11
Doku: https://github.com/rbock/sqlpp11/wiki (not at all formal yet) I am hoping for more feedback both here and live at CppCon
(http://sched.co/1r4lue3) I didn't mention it earlier but your library looks great!
In my work I don't have to write code that uses such functionality, but as a C++
developer I appreciate that I could handle queries, errors, etc. at the
C++ level.
Which brings me to a question about the SQL extensions. In order to
support such extensions, e.g. SQL/MM or SQL/SFA [1][2] which specifies a
storage, access model, operations, etc. for handling of
geometrical/geographical data, a user would be forced to extend your
library with additional functions/methods/structures to e.g. perform a
query:
select(streets.name)
.from(streets)
.where( intersects(streets.geometry, some_polygon) )
or
select(streets.name)
.from(streets)
.where( streets.geometry.within(from_wkt("POLYGON((0 0,10 0,10 10,0 10,0 0))")) )
or
select(streets.name)
.from(streets)
.where( streets.geometry.distance(some_point) < 100 )
or something like that. For more info see: WKT [3], spatial relations [4].
How simple/complicated would it be (for the user) to add the support for
such extensions currently? Would the user be forced to implement it in
the library directly and e.g. always include it with the rest of the
library? Or would it be possible to implement it as a separate addon
that could be optionally included?
In addition to the above, would it be possible to map the same C++
functions/methods to different SQL functions for different database
servers? In various servers there are non-standard extensions which may
have various SQL functions names or different number of parameters, etc.
E.g. related to the above example, one server can support SQL/MM
defining operation ST_Intersects() and other one SQL/SFA defining
Intersects().
Assuming that various servers may support various functionalities on
which layer of sqlpp this support should be checked and the error
returned if necessary?
E.g. ST_CoveredBy() isn't defined in the SQL/MM standard but it can be
used in PostgreSQL/PostGIS, but currently not in MySQL (version 5.7).
Or should all servers support the same functionalities?
If such errors were reported at compile time then AFAIU a specific version
of the library (or just the lowest-level connector?) would be forced to
work with a specific version of a server?
References:
[1] http://www.opengeospatial.org/standards/sfa
[2] http://www.opengeospatial.org/standards/sfs
[3] http://en.wikipedia.org/wiki/Well-known_text
[4] http://en.wikipedia.org/wiki/DE-9IM
Regards,
Adam On 2014-08-21 18:45, Adam Wulkiewicz wrote: Roland Bock wrote: Dear Boosters, Quite a bit has happened since last I reported about sqlpp11 in this
forum [1,2]. I have incorporated a lot of the feedback you gave me,
hopefully bringing the library closer to a reviewable state. Source: https://github.com/rbock/sqlpp11
Doku: https://github.com/rbock/sqlpp11/wiki (not at all formal yet) I am hoping for more feedback both here and live at CppCon
(http://sched.co/1r4lue3) I didn't mention it earlier but your library looks great!
Thank you :-)
In my work I don't have to write code that uses such functionality, but as a
C++ developer I appreciate that I could handle queries, errors, etc.
at the C++ level.
:-) Which brings me to a question about the SQL extensions. In order to
support such extensions, e.g. SQL/MM or SQL/SFA [1][2] which specifies
a storage, access model, operations, etc. for handling of
geometrical/geographical data, a user would be forced to extend your
library with additional functions/methods/structures to e.g. perform a
query: select(streets.name)
.from(streets)
.where( intersects(streets.geometry, some_polygon) ) or select(streets.name)
.from(streets)
.where( streets.geometry.within(from_wkt("POLYGON((0 0,10 0,10
10,0 10,0 0))")) ) or select(streets.name)
.from(streets)
.where( streets.geometry.distance(some_point) < 100 ) or something like that. For more info see: WKT [3], spatial relations
[4]. How simple/complicated would it be (for the user) to add the support
for such extensions currently? This is actually all very simple (famous last words of a library
writer). But I invested quite some time to be able to say this without
hesitation :-)
For the things above, you would need
* A few more "sql" types like point, polygon, linestring.
o You need to write the respective classes for representing
values, parameters (if you want those in prepared statements)
and result fields, see for instance integral.h.
o The value classes also contain the specific member functions,
like the operators for integrals or the like() method for texts.
o That is simple. The worst part is to figure out the interface
you want these types to have.
o Your connector library requires a few more functions to bind
parameters and yield results of these types
o The interface is simple and I assume that the backend provides
everything necessary for the implementation to be simple, too.
* A few free functions like the intersect function in your first example.
* A few template classes to represent nodes in the expression tree,
for instance an intersect_t, which is the return value of the
intersect method and its parameters.
o Those are really simple :-)
* specializations of the serializer or interpreter for the nodes
o The serializer simply writes the node into the context
(typically an ostream).
o That should be simple if your backend expects a query in the
form of a string.
The last part becomes more complex, if your backend does not expect a
string representation. In that case, it is still conceptually simple:
You just walk the expression tree and transform it in any way you like.
For a compile-time transformation, see the interpreter at
https://github.com/rbock/sqlpp11-connector-stl Would the user be forced to implement it in the library directly and
e.g. always include it with the rest of the library? Or would it be
possible to implement it as a separate addon that could be optionally
included?
The latter. sqlpp11 uses value types and tags. If your class says it is
an sqlpp expression with a boolean value type, then it is welcome wherever
a boolean sql expression is required. struct sample
{
using _traits = sqlpp::make_traits In addition to the above, would it be possible to map the same C++
functions/methods to different SQL functions for different database
servers? In various servers there are non-standard extensions which
may have various SQL functions names or different number of
parameters, etc. E.g. related to the above example, one server can
support SQL/MM defining operation ST_Intersects() and other one
SQL/SFA defining Intersects(). Again, no problem at all.
You can use partial specialization in the serializer/interpreter to
create different ways of serialization or other transformation for
individual databases. See for instance the serializer of mysql at
https://github.com/rbock/sqlpp11-connector-mysql/blob/master/include/sqlpp11...
for different serialization than standard and the serializer of sqlite3
at
https://github.com/rbock/sqlpp11-connector-sqlite3/blob/master/include/sqlpp...
for compile-time disabled SQL features. Assuming that various servers may support various functionalities on
which layer of sqlpp this support should be checked and the error
returned if necessary? As explained above, this kind of adjustments could be done in the
serializer/interpreter. It would produce static_asserts at compile time,
typically (that's the sqlpp11 way), but it is up to you in the end.
Your database connector has a serialization context. You can give it any
kind of information or throw exceptions, up to you. I would use
static_asserts to indicate missing support, but see also below. E.g. ST_CoveredBy() isn't defined in the SQL/MM standard but it can be
used in PostgreSQL/PostGIS, but currently not in MySQL (version 5.7).
Or should all servers support the same functionalities? Nah, that would be quite annoying. You would constrain yourself to the
minimum set. I would use the partial specialization as described. If such errors were reported at compile-time then AFAIU a specific
version of the library (or just a lowest level connector?) would be
forced to work with specific version of a server? I am not sure I fully understand your question.
You could write an extension for sqlpp11, say sqlpp11-spatial. It would
probably live in its own namespace in sqlpp. This would be a vendor
neutral library, like sqlpp11. And you would write new connector
libraries or extend existing ones.
In your code you would then use sqlpp11, sqlpp11-spatial and one of
those connectors.
If you want to choose the database at runtime, you will have a shared
connector library and you will have to turn those static_asserts in the
serializer into exceptions for instance.
Hope this helps. Are you asking for a specific project? Or just out of
curiosity?
FYI: There is one more way to extend sqlpp11 queries: You can add
additional clauses or change the interface of clauses. For instance,
http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm
uses a CONNECT clause in SELECT.
That also isn't very hard, but I would not call it really simple either :-) References:
[1] http://www.opengeospatial.org/standards/sfa
[2] http://www.opengeospatial.org/standards/sfs
[3] http://en.wikipedia.org/wiki/Well-known_text
[4] http://en.wikipedia.org/wiki/DE-9IM Thanks for the links :-)
Regards,
Roland On 2014-08-21 18:45, Adam Wulkiewicz wrote: Which brings me to a question about the SQL extensions. In order to
support such extensions, e.g. SQL/MM or SQL/SFA [1][2] which specifies
a storage, access model, operations, etc. for handling of
geometrical/geographical data, a user would be forced to extend your
library with additional functions/methods/structures to e.g. perform a
query: select(streets.name)
.from(streets)
.where( intersects(streets.geometry, some_polygon) ) or select(streets.name)
.from(streets)
.where( streets.geometry.within(from_wkt("POLYGON((0 0,10 0,10
10,0 10,0 0))")) ) or select(streets.name)
.from(streets)
.where( streets.geometry.distance(some_point) < 100 ) or something like that. For more info see: WKT [3], spatial relations
[4]. How simple/complicated would it be (for the user) to add the support
for such extensions currently?
This is actually all very simple (famous last words of a library
writer). But I invested quite some time to be able to say this without
hesitation :-)
Great! For the things above, you would need * A few more "sql" types like point, polygon, linestring.
o You need to write the respective classes for representing
values, parameters (if you want those in prepared statements)
and result fields, see for instance integral.h.
o The value classes also contain the specific member functions,
like the operators for integrals or the like() method for texts.
o That is simple. The worst part is to figure out the interface
you want these types to have.
o Your connector library requires a few more functions to bind
parameters and yield results of these types
o The interface is simple and I assume that the backend provides
everything necessary for the implementation to be simple, too.
* A few free functions like the intersect function in your first example.
* A few template classes to represent nodes in the expression tree,
for instance an intersect_t, which is the return value of the
intersect method and its parameters.
o Those are really simple :-)
* specializations of the serializer or interpreter for the nodes
o The serializer simply writes the node into the context
(typically an ostream).
o That should be simple if your backend expects a query in the
form of a string. The last part becomes more complex, if your backend does not expect a
string representation. In that case, it is still conceptually simple:
You just walk the expression tree and transform it in any way you like.
For a compile-time transformation, see the interpreter at
https://github.com/rbock/sqlpp11-connector-stl
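To make the extension recipe above a bit more concrete, here is a rough illustration of the pattern (value type, expression node, free function, serializer). Every name in it, including geometry, intersects_t and the sqlpp_spatial namespace, is made up for illustration, and the comment about traits only hints at sqlpp11's real tag mechanism:

    namespace sqlpp_spatial  // hypothetical add-on namespace
    {
        struct geometry {};  // hypothetical value type for the extension

        // expression node returned by the free function intersects()
        template <typename Lhs, typename Rhs>
        struct intersects_t
        {
            // here the node would declare itself a boolean sqlpp expression,
            // e.g. via something like sqlpp::make_traits<sqlpp::boolean, ...>
            // (exact tags omitted, see the sqlpp11 sources for the real spelling)
            Lhs _lhs;
            Rhs _rhs;
        };

        template <typename Lhs, typename Rhs>
        intersects_t<Lhs, Rhs> intersects(Lhs lhs, Rhs rhs)
        {
            return intersects_t<Lhs, Rhs>{lhs, rhs};
        }

        // serializer sketch: write the node into an ostream-like context;
        // a MySQL- or PostGIS-flavoured specialization could emit
        // ST_Intersects(...) instead of Intersects(...)
        template <typename Context, typename Lhs, typename Rhs>
        Context& serialize(const intersects_t<Lhs, Rhs>& t, Context& context)
        {
            context << "Intersects(" << t._lhs << ", " << t._rhs << ")";  // operands
            return context;                                               // assumed streamable
        }
    }

    // usage sketch, as in the examples above:
    //   select(streets.name).from(streets)
    //     .where(sqlpp_spatial::intersects(streets.geometry, some_polygon));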
Yes, it is also possible to describe geometries using WKB (binary) Roland Bock wrote:
format. And AFAIK some databases use a slightly modified/extended version. Would the user be forced to implement it in the library directly and
e.g. always include it with the rest of the library? Or would it be
possible to implement it as a separate addon that could be optionally
included?
The latter. sqlpp11 uses value types and tags. If your class says it is
an sqlpp expression with a boolean value type, then it is welcome wherever
a boolean sql expression is required. struct sample
{
using _traits = sqlpp::make_traits This is a boolean expression as far as sqlpp11 is concerned.
Great, so the extension like this could be included if needed or even be
a standalone library. In addition to the above, would it be possible to map the same C++
functions/methods to different SQL functions for different database
servers? In various servers there are non-standard extensions which
may have various SQL functions names or different number of
parameters, etc. E.g. related to the above example, one server can
support SQL/MM defining operation ST_Intersects() and other one
SQL/SFA defining Intersects().
Again, no problem at all. You can use partial specialization in the serializer/interpreter to
create different ways of serialization or other transformation for
individual databases. See for instance the serializer of mysql at
https://github.com/rbock/sqlpp11-connector-mysql/blob/master/include/sqlpp11...
for different serialization than standard and the serializer of sqlite3
at
https://github.com/rbock/sqlpp11-connector-sqlite3/blob/master/include/sqlpp...
for compile-time disabled SQL features. Assuming that various servers may support various functionalities on
which layer of sqlpp this support should be checked and the error
returned if necessary?
As explained above, this kind of adjustments could be done in the
serializer/interpreter. It would produce static_asserts at compile time,
typically (that's the sqlpp11 way), but it is up to you in the end. Your database connector has a serialization context. You can give it any
kind of information or throw exceptions, up to you. I would use
static_asserts to indicate missing support, but see also below. E.g. ST_CoveredBy() isn't defined in the SQL/MM standard but it can be
used in PostgreSQL/PostGIS, but currently not in MySQL (version 5.7).
Or should all servers support the same functionalities?
Nah, that would be quite annoying. You would constrain yourself to the
minimum set. I would use the partial specialization as described. If such errors were reported at compile-time then AFAIU a specific
version of the library (or just a lowest level connector?) would be
forced to work with specific version of a server?
I am not sure I fully understand your question.
Let me clarify. Let's say that I supported this ST_CoveredBy() function
for both PostGIS and MySQL. Let's say that in version 5.9 MySQL will
support this function. So the library should work with some future
release. But I'm using sqlpp with MySQL 5.7. What will be the result of
performing this "unsupported" query? An exception? AFAIU ideally it
should be a compile-time error?
<snip>
Hope this helps. Are you asking for a specific project? Or just out of
curiosity?
I'm curious. I'm a contributor at Boost.Geometry and this is just a
problem from my domain.
I'm wondering what would be needed to use sqlpp in a
GIS/Geometry-related application. E.g. to load some geometrical objects
from some database into objects of types adapted to Boost.Geometry
concepts (polygon, linestring, etc.), do something with them and maybe
write the results back or display them, etc. Something like that. FYI: There is one more way to extend sqlpp11 queries: You can add
additional clauses or change the interface of clauses. For instance,
http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm
uses a CONNECT clause in SELECT. That also isn't very hard, but I would not call it really simple either :-) Thanks for the answers and tips!
Regards,
Adam On 2014-08-21 22:50, Adam Wulkiewicz wrote: Roland Bock wrote: On 2014-08-21 18:45, Adam Wulkiewicz wrote: Which brings me to a question about the SQL extensions. In order to
support such extensions, e.g. SQL/MM or SQL/SFA [1][2] which specifies
a storage, access model, operations, etc. for handling of
geometrical/geographical data, a user would be forced to extend your
library with additional functions/methods/structures to e.g. perform a
query: select(streets.name)
.from(streets)
.where( intersects(streets.geometry, some_polygon) ) or select(streets.name)
.from(streets)
.where( streets.geometry.within(from_wkt("POLYGON((0 0,10 0,10
10,0 10,0 0))")) )
<snip> Would the user be forced to implement it in the library directly and
e.g. always include it with the rest of the library? Or would it be
possible to implement it as a separate addon that could be optionally
included?
The latter. sqlpp11 uses value types and tags. If your class says it is
an sqlpp expression with a boolean value type, then it is welcome wherever
a boolean sql expression is required. struct sample
{
using _traits = sqlpp::make_traits This is a boolean expression as far as sqlpp11 is concerned.
Great, so the extension like this could be included if needed or even
be a standalone library.
Exactly :-) <snip> If such errors were reported at compile-time then AFAIU a specific
version of the library (or just a lowest level connector?) would be
forced to work with specific version of a server?
I am not sure I fully understand your question. Let me clarify. Let's say that I supported this ST_CoveredBy()
function for both PostGIS and MySQL. Let's say that in version 5.9
MySQL will support this function. So the library should work with some
future release. But I'm using sqlpp with MySQL 5.7. What will be the
result of performing this "unsupported" query? An exception? AFAIU
ideally it should be a compile-time error?
Ah, ok!
I think I would make the connector a template, giving it a major and
minor version number. You could then use that version information in a
static assert in the serializer for that function. That's my first idea.
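A minimal sketch of that first idea, purely for illustration (the connection and node names are placeholders, and the version parameters and serialize() signature are assumptions, not the actual connector interface):

    // hypothetical connector type carrying the targeted server version
    template <int VersionMajor, int VersionMinor>
    struct mysql_context
    {
        static constexpr int version_major = VersionMajor;
        static constexpr int version_minor = VersionMinor;
        // ... the usual serialization context members (e.g. an ostream) ...
    };

    struct st_covered_by_t {};  // hypothetical expression node for ST_CoveredBy()

    // serializer sketch: reject the function at compile time for servers
    // known not to support it
    template <int Major, int Minor>
    mysql_context<Major, Minor>& serialize(const st_covered_by_t&,
                                           mysql_context<Major, Minor>& context)
    {
        static_assert(Major > 5 || (Major == 5 && Minor >= 9),
                      "ST_CoveredBy() requires MySQL >= 5.9");
        // ... otherwise write "ST_CoveredBy(...)" into the context ...
        return context;
    }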
Alternatively, depending on the amount of changes, it might even make
sense to write a new connector. The connector libraries are rather small. <snip> Hope this helps. Are you asking for a specific project? Or just out of
curiosity?
I'm curious. I'm a contributor at Boost.Geometry and this is just a
problem from my domain.
I'm wondering what would be needed to use sqlpp in a
GIS/Geometry-related application. E.g. to load some geometrical
objects from some database into objects of types adapted to
Boost.Geometry concepts (polygon, linestring, etc.), do something with
them and maybe write the results back or display them, etc. Something
like that. OK, cool!
Let me know if it should become more concrete :-) FYI: There is one more way to extend sqlpp11 queries: You can add
additional clauses or change the interface of clauses. For instance,
http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm
uses a CONNECT clause in SELECT. That also isn't very hard, but I would not call it really simple
either :-) Thanks for the answers and tips! Thanks for the questions. I'll use some of that in my talk at CppCon, if
I may.
Cheers,
Roland On 2014-08-18 18:49, Roland Bock wrote: Dear Boosters, Quite a bit has happened since last I reported about sqlpp11 in this
forum [1,2]. I have incorporated a lot of the feedback you gave me,
hopefully bringing the library closer to a reviewable state. Source: https://github.com/rbock/sqlpp11
Doku: https://github.com/rbock/sqlpp11/wiki (not at all formal yet) I am hoping for more feedback both here and live at CppCon
(http://sched.co/1r4lue3)
FYI, I also just got the confirmation for my open sessions at CppCon: Mixins (Monday evening):
Based on pain in sqlpp11, a few language suggestions to make our life
easier plus subsequent discussion.
http://sched.co/1qh7PQa
The sqlpp11-connector experiment, Part 1 (Tuesday evening):
Write an sqlpp11 connector for your favorite database in under an hour.
It won't be complete, but I am sure we can execute the first few
queries by then.
http://sched.co/1qhngYK
The sqlpp11-connector experiment, Part 2 (Friday morning):
Continue Part 1. Depending on the status, we might even start stuff like
what Adam Wulkiewicz suggested (geometry/geography extensions).
http://sched.co/Wi8aWM
Cheers,
Roland
>...
{};
}
template <typename T>
struct alpha_member
{
T alpha;
T& operator()() { return alpha; }
const T& operator()() const { return alpha; }
};
template <typename T>
struct beta_member
{
T beta;
T& operator()() { return beta; }
const T& operator()() const { return beta; }
};
template <typename T>
struct tab_table
{
T tab;
T& operator()() { return tab; }
const T& operator()() const { return tab; }
};
struct my_tab
: sqlpp::table
On 2014-08-20 19:21, Adam Wulkiewicz wrote:
> Roland Bock wrote:
>> On 2014-08-20 00:34, Adam Wulkiewicz wrote:
>>> Roland Bock wrote:
>>>> But thats manageable. And yes, the code would be shorter, although not
>>>> that much, I suspect. The only problem I have with it is that now the
>>>> column types are going to be about a hundred characters long. And
>>>> users
>>>> are going to operate on columns all the time. So error message have to
>>>> be short.
>>> Do you have in mind the code of the library or user's code?
>> User's code:
>>
>> * You have the member template which must be defined outside.
>> * You have the get_name method which should be defined outside the
>> member template since I have no instance of that template where I
>> need the name (you still cant use a string literal as template
>> parameter directly, right? Like table_t<"sample">?)
>> * You need to group the member template and the get_name method since
>> they are always used in combination
>> * You need a struct or class to hold the default value or function
>>
>> And if you don't want to have all this flying around as individual
>> pieces with individual names, then you will group it into a class. And
>> you're back to where you started.
> Yes, passing a string literal isn't possible unfortunately.
>
> So the member and a name must be bound together somehow but the rest
> could still be automatically generated. In particular, IMHO the
> specification of a default value should be optional (e.g. passed as
> yet another trait). The library shouldn't require defining it each
> time, even as some dummy function if a user wanted to use exceptions.
> Besides, defining the default value generator as external to the
> member-"name" binding would probably be preferable because the same
> generator could be reused for many columns. However I don't expect
> that the generator would do something complicated, rather just return
> a value. But for integral members it could be predefined in the sqlpp
> and it could be passed as just 1 additional type.
Right.
>
>>
>>> I expect that the user's code, even not using defaults, would be a lot
>>> shorter.
>>> But the most important is that the definition of a table would
>>> probably be more clear, in one place, etc.
>>> Or am I wrong?
>> I think you're wrong, although I'd love to be wrong about that :-)
[...]
> Hmm, is the member template used in many places? AFAIU it must be used
> at least 2 times, to define columns, tables, etc. and later to
> construct a row.
- Tables,
- subqueries used as tables (basically tables again),
- result rows
- parameter sets for prepared queries
> Well, it isn't that important.
:-)
>
> If you write it this way:
>
> struct Alpha
> {
> static constexpr const char* _get_name() { return "alpha"; }
> template