There's another issue here as well. I've tried to shy away from presuming that all arithmetic is two's complement. When I think about the cases you cite, I have to evaluate them in that light. Going down this path leads me to a chamber with multiple exits - all alike. When I find myself there, I revert to the text of the standard - which is more restrictive. (Negative shift? For me, no problem - but the standard says it's undefined.)

The real question for me is whether I should a) insist that all code be standards-conforming and never engage in undefined behavior, or b) permit code which is in common usage (like shifting negative numbers) but is undefined by the standard.

I've tended toward the former for a couple of reasons. Undefined behavior is non-portable - even though it's likely portable on the 3 or 4 compilers which occupy the current market. And I've become aware that compiler writers are using undefined behavior to optimize out related code - apparently without letting us know. If this is true, we're laying a whole new minefield.

So I've been inclined to disallow behavior which makes sense, and to strictly follow the standard - because it seems that's the only way to make a guarantee of no unexpected behavior. It's also the only way I can avoid having to make "hidden" decisions inside the code, which create the kind of uncertainty that causes the real problems I'm trying to address. Of course it's not made easier by the fact that the standard itself changes in subtle ways in what it permits.

I want to be able to write code which does what it says it does - that's my goal. But it seems I'm frustrated by the good intentions of others. The road to hell IS paved with good intentions - at least in this case.

Robert Ramey