On 1/30/2016 6:50 PM, Rob Stewart wrote:
On January 30, 2016 1:31:09 PM EST, Noah wrote:
On 1/29/2016 7:00 PM, Rob Stewart wrote:
Have a look at boost::intrusive_ptr.
If I understand boost::intrusive_ptr correctly, and I'm not totally sure that I do
You missed, and snipped, the context. You were discussing ways to inject your logic. intrusive_ptr uses free functions, found via ADL, to manage the reference count. That approach could work for you.
Oh yeah, sorry about that. I did get your point, and it was relevant. I don't think it would be hard at all to provide the analogous version of intrusive_ptr for registered_ptr. In fact, the "registered_intrusive_ptr" or whatever would probably very closely resemble the original registered_ptr, the only real difference being that instead of the "management" object (technically it's not exactly a "refcount" object) being a member of an object derived from the target, it would be a member of the target object itself. The "management" object could probably be reused unmodified. The target object would also need to change its "operator&" to return a smart pointer instead of a native one.

It just struck me that the point of intrusive_ptr was performance, and that it was using the same technique as registered_ptr (and make_shared) to get it. Namely, store the refcount/management object with the target object, eliminating the need for a separate allocation. In fact, doesn't make_shared obviate the point of intrusive_ptr? But of course registered_ptr takes it a step further and allows the entire allocation to occur on the stack.
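Roughly speaking, the layout I'm describing might look something like the sketch below. (Just an illustrative sketch to show the idea, not the actual registered_ptr code; all of the names, and the std::set-based bookkeeping, are made up for the example.)

    #include <cassert>
    #include <set>

    // Interface implemented by the smart pointer so the "management" object
    // can null it out when the target is destroyed.
    class i_nullable {
    public:
        virtual ~i_nullable() {}
        virtual void set_to_null() = 0;
    };

    // The "management" object: tracks every smart pointer currently targeting
    // its owner, and nulls them all when the owner goes away.
    class registry {
    public:
        ~registry() {
            for (i_nullable* p : m_ptrs) { p->set_to_null(); }
        }
        void add(i_nullable* p) { m_ptrs.insert(p); }
        void remove(i_nullable* p) { m_ptrs.erase(p); }
    private:
        std::set<i_nullable*> m_ptrs;
    };

    // The smart pointer that the target's operator& would hand out.
    template <class T>
    class registered_intrusive_ptr : public i_nullable {
    public:
        registered_intrusive_ptr() : m_p(nullptr), m_r(nullptr) {}
        registered_intrusive_ptr(T* p, registry* r) : m_p(p), m_r(r) {
            if (m_r) { m_r->add(this); }
        }
        registered_intrusive_ptr(const registered_intrusive_ptr& other)
            : m_p(other.m_p), m_r(other.m_r) {
            if (m_r) { m_r->add(this); }
        }
        registered_intrusive_ptr& operator=(const registered_intrusive_ptr& other) {
            if (m_r) { m_r->remove(this); }
            m_p = other.m_p; m_r = other.m_r;
            if (m_r) { m_r->add(this); }
            return *this;
        }
        ~registered_intrusive_ptr() { if (m_r) { m_r->remove(this); } }
        void set_to_null() override { m_p = nullptr; m_r = nullptr; }
        T* operator->() const { return m_p; }
        explicit operator bool() const { return m_p != nullptr; }
    private:
        T* m_p;
        registry* m_r;
    };

    // The "intrusive" part: the management object is a member of the target
    // itself (no separate allocation), and operator& returns the smart pointer.
    class widget {
    public:
        int value() const { return 42; }
        registered_intrusive_ptr<widget> operator&() {
            return registered_intrusive_ptr<widget>(this, &m_registry);
        }
    private:
        registry m_registry;
    };

    int main() {
        registered_intrusive_ptr<widget> wp;
        {
            widget w;              // can live on the stack
            wp = &w;               // operator& hands out the smart pointer
            assert(wp && wp->value() == 42);
        }
        assert(!wp);               // w's embedded registry nulled wp on destruction
    }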
Many times I don't want to initialize a variable because the branches in the subsequent code select the value. Do your wrappers provide a constructor that permits leaving the value uninitialized?
So first let me say that I'm not proposing a total ban on primitive types. When you need the performance, and primitive types give you the performance, use them. But that should be a small fraction of the world's total C++ code.
Okay, but I was asking whether you provide for that case.
At the moment my substitute classes do not. But they were not intended as a universal substitute for primitives. They were intended as a substitute for primitives in the cases when language safety is of higher priority than performance.
What is antiquated, in my opinion, is that primitive types are still the default. In terms of not wanting to initialize due to subsequent conditional assignment, I would say don't underestimate the compiler optimizer. When the optimizer can figure out that the default initialization is redundant, it will remove it for you, right?
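Just to illustrate the case I mean (made-up example): every branch assigns the variable, so if it were one of these wrapper types that zero-initializes in its default constructor, that zero would be overwritten on every path, and the optimizer can generally see that and drop the store.

    int classify(int raw) {
        int code;                        // deliberately left uninitialized today
        if (raw < 0)       { code = -1; }
        else if (raw == 0) { code =  0; }
        else               { code =  1; }
        return code;                     // every path has assigned 'code'
    }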
You also can't assume that the optimizer will recognize such things.
I would agree that if we were deprecating native types, it would not be appropriate for their replacements to have this built-in theoretical performance penalty. But we're not deprecating native types. I'm just hoping to replace them as the default. Anyway, I am not opposed to providing multiple versions of the primitive replacements that support different performance-safety tradeoffs. The question is what's the best way to keep the multiple versions compatible with each other? At the moment I'm thinking to publicly derive the one with default initialization from the one without. But what if we want to support more versions? Is the public inheritance mechanism general enough? Do these substitute classes really need to be templates? I'll have to think about it.
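For the sake of discussion, the public-inheritance layering I have in mind might look something like this (illustrative names only, nothing settled):

    // "Performance" flavor: default constructor leaves the value indeterminate.
    class CNDInt {
    public:
        CNDInt() {}                           // no default initialization
        CNDInt(int v) : m_value(v) {}
        CNDInt& operator=(int v) { m_value = v; return *this; }
        operator int() const { return m_value; }
    protected:
        int m_value;
    };

    // "Safety" flavor: publicly derived, so it is-a CNDInt, but it
    // zero-initializes by default.
    class CInt : public CNDInt {
    public:
        CInt() : CNDInt(0) {}
        CInt(int v) : CNDInt(v) {}
        using CNDInt::operator=;
    };

    // Code written against the base accepts either flavor:
    inline int twice(const CNDInt& n) { return 2 * int(n); }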
I should note though, that I found it difficult (or impossible) to fully mimic all the implicit conversion rules of primitive types, so there are going to be some cases where the substitute classes can't be used (without rewriting some of your code) for compatibility reasons.
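For a concrete (if contrived) example of what I mean, template argument deduction is one place where a substitute class behaves differently from the primitive it wraps, even when it converts implicitly:

    #include <algorithm>

    class safe_int {                          // illustrative wrapper only
    public:
        safe_int(int v = 0) : m_value(v) {}
        operator int() const { return m_value; }
    private:
        int m_value;
    };

    int main() {
        int      n = 3;
        safe_int s = 3;
        int a = std::max(n, 5);               // fine: both arguments deduce as int
        // int b = std::max(s, 5);            // error: deduces safe_int vs. int
        int b = std::max<int>(s, 5);          // caller has to spell out the type
        return a + b;
    }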
That could prove to be a stumbling block, but you can propose your ideas.
Yeah, so this turns out to be the key issue. Other people have asked why primitive types can't be used as base classes - http://stackoverflow.com/questions/2143020/why-cant-i-inherit-from-int-in-c. It turns out that really the only reason primitive types weren't made into full-fledged classes is that they inherit these "chaotic" conversion rules from C that can't be fully mimicked by C++ classes, and Bjarne thought it would be too ugly to try to make special-case classes that followed different conversion rules. That's it. That's the only reason. So if C had reasonably sane conversion rules for primitive types, then primitive types would already be full-fledged classes.

The problem is that we, the C++ community, are perpetuating our dependency on these crippled and dangerous primitive types by retaining them as the default, and consequently, inadvertently, writing code that depends on their inane conversion rules. So we need to stop doing that. Stop writing code that requires these legacy conversion rules to work.

So what needs to happen is that boost, or whoever, adopt an "official" set of primitive substitute classes so that people can, if they choose, write their code and libraries to be compatible with both the old primitives and the new substitute classes with more sane conversion rules. This would not require any extra work on boost's (or whoever's) part. They wouldn't have to make all their libraries support these new classes. They just need to designate a common interface that people can, if they choose, standardize on. An interface that can be implemented by classes (unlike the interface of primitive types). Once this happens, people will be free to re-implement the interface however they choose. This should solve the contention between the performance-obsessed and the safety-obsessed crowds.

Because of its legacy, C++ has already demonstrated its power as a language for developing high-performance applications. Some of that same power could be directed at making applications safer and more secure as well. So far it has not been. C++ has not demonstrated its power as a language for safe and secure applications. I believe the primary reason for this is the lack of even reasonably safe building blocks to work with. And the only reason we don't have them is those legacy conversion rules.

<hyperbolic exaggeration for effect> I mean, registered_ptr may be one of the fastest safe reference types in existence, and you're telling me that its show-stopper flaw is that it can't directly target a data type that was designed in the 1970s? That instead I should just be happy with native pointers? Probably the single most dangerous data type on the planet? Really? I mean, let's say I want to write an internet-facing application and I want to reduce as much as possible the likelihood of a "remote execution" vulnerability, with performance being a secondary (or tertiary) consideration. I guess the default answer is to use Java. But I get the feeling that C++ is now powerful enough that it should be better able to address the task than even Java. But I think C++ first has to demonstrate that it's now powerful enough that applications can, if desired, be practically implemented without using elements that can reference invalid memory or have values determined by random bits of uninitialized memory. Right? Is that too much to ask?
By default, an unsigned integer minus another unsigned integer should really return a signed integer, like my primitives do.
I understand what you're trying to do, but that's a narrowing conversion. The signed type may not be large enough to hold the difference.
I really should have said "size_t" instead of unsigned integer, because that's what I mean. Even though size_t is often implemented as an unsigned integer, it implies that it is being used as a count or quantity rather than a set of bits. With size_t, the wrap-around bug is a real-world problem. I've encountered it several times in real life. (And not just in my own code :) The overflow due to narrowing would rarely, if ever, occur in real life.
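To make the wrap-around concrete (illustrative example):

    #include <cstddef>
    #include <iostream>

    int main() {
        std::size_t len = 5;
        std::size_t pos = 7;

        std::size_t wrapped = len - pos;      // wraps to a huge positive value
        long long intended =
            static_cast<long long>(len) - static_cast<long long>(pos);

        std::cout << wrapped << "\n";         // 18446744073709551614 on a typical 64-bit build
        std::cout << intended << "\n";        // -2, the intended result

        // A size type whose operator- returned a signed value would make the
        // second result the default behavior.
        return 0;
    }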