
c++.windows.16-bits - Compiler Insertions for Huge Pointers

reply Mark Evans <mevans zyvex.com> writes:
Walter,

I'm having some mysterious hang-ups which seem to disappear when I shrink my
huge pointer blocks to less than a segment in size.  This leads me to ask
about the code inserted by the compiler to handle huge pointers.  Could you
give me some feeling for the nature of this code?

I'm using a huge pointer block as a circular buffer.  When this buffer is < 1
segment, it runs indefinitely and without problems.  When the buffer is > 1
segment long, there is a repeatable hang-up.

The bug is probably mine, but if there is anything I can learn about the
compiler's behavior it might give me some clues.

Mark
Aug 08 2001
next sibling parent Mark Evans <mevans zyvex.com> writes:
(In particular, is there any possibility of memory manager invocations, or of
my block being moved around, and if so, how would I lock it?)





Aug 08 2001
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
The easiest way is to compile your huge pointer code with -gl, and run
OBJ2ASM on the output. You'll see just what code is generated for each line
of source.

-Walter

"Mark Evans" <mevans zyvex.com> wrote in message
news:1103_997292227 dphillips...
 Walter,

 I'm having some mysterious hang-ups which seem to disappear when I shrink

 blocks to less than a segment in size.  This leads me to ask about the

 the compiler to handle huge pointers.  Could you give me some feeling for

 this code?

 I'm using a huge pointer block as a circular buffer.  When this buffer is

 runs indefinitely and without problems.  When the buffer is > 1 segment

 repeatable hang-up which occurs.

 The bug is probably mine but if there is anything I can learn about the

 behavior it might give me some clues.

 Mark

Aug 08 2001
parent reply Mark Evans <mevans zyvex.com> writes:
Walter,

This is asking me to reverse-engineer something which you wrote.

All I need are a few philosophical tips about the design of your huge pointer
code.  Only then would doing what you suggest even be worthwhile; otherwise I
am reverse-engineering in the blind.  I'm not that much of a Win16 expert to
begin with, and not intimate with x86 assembly (much more Motorola / DSP
assembly experience than Intel x86).

I do wonder whether some DS == SS type issue could be causing problems at
critical points when the compiler insertions have to compute offsets.

Thanks,

Mark


Aug 08 2001
parent reply "Walter" <walter digitalmars.com> writes:
It's difficult to understand what's happening with huge pointers without
knowing what code is generated for them, at least that's the way it is for me
<g>.

But there is something else you need to be aware of with huge pointers. The
objects you point to with them must have a size that evenly divides into
64k. In other words, objects cannot straddle a 64k boundary, they must sit
wholly on one side or the other.

-Walter
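To make the rule concrete, here is a portable sketch in plain C (no __huge
keyword, helper names invented for illustration): an object straddles a 64K
boundary exactly when its first and last bytes fall in different 64K
segments, and no element of an array starting on a segment boundary can
straddle if the element size evenly divides 64K.

```c
#include <assert.h>

/* Illustrative check, not the compiler's actual huge-pointer code:
   an object of `size` bytes at linear offset `off` straddles a 64K
   boundary when its first and last bytes land in different 64K
   segments. */
static int straddles_64k(unsigned long off, unsigned long size)
{
    return (off >> 16) != ((off + size - 1) >> 16);
}

/* If the element size evenly divides 64K and the array starts on a
   segment boundary, no element of the array can straddle. */
static int size_divides_64k(unsigned long size)
{
    return size != 0 && (0x10000UL % size) == 0;
}
```

For example, a 3-byte struct fails the divisibility test, so some element of
a huge array of such structs would inevitably straddle a boundary.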


"Mark Evans" <mevans zyvex.com> wrote in message
news:1103_997300825 dphillips...
 Walter,

 This is asking me to reverse-engineer something which you wrote.

 All I need are a few philosophical tips about the design of your huge

Otherwise I am reverse engineering in the blind. I'm
 not that much of a Win16 expert to begin with, and not intimate with x86

 I do wonder whether some DS == SS type issue could be causing problems at

 Thanks,

 Mark


 On Wed, 8 Aug 2001 11:49:32 -0700, "Walter" <walter digitalmars.com>

 The easiest way is to compile your huge pointer code with -gl, and run
 OBJ2ASM on the output. You'll see just what code is generated for each


 of source.

 -Walter

 "Mark Evans" <mevans zyvex.com> wrote in message
 news:1103_997292227 dphillips...
 Walter,

 I'm having some mysterious hang-ups which seem to disappear when I



 my huge pointer
 blocks to less than a segment in size.  This leads me to ask about the

 the compiler to handle huge pointers.  Could you give me some feeling



 the nature of
 this code?

 I'm using a huge pointer block as a circular buffer.  When this buffer



 < 1 segment, it
 runs indefinitely and without problems.  When the buffer is > 1



 long, there is a
 repeatable hang-up which occurs.

 The bug is probably mine but if there is anything I can learn about



 compiler's
 behavior it might give me some clues.

 Mark



Aug 09 2001
parent reply Mark Evans <mevans zyvex.com> writes:
Ah, that is a critical piece of knowledge.  I have been using arbitrary sizes,
not segment multiples.

The runtime library (_halloc) should burp if the size requested is not a
segment multiple.  If not that, it should automatically increase the caller's
request to equal the next highest segment multiple.

In my case the "object" is just an array of chars, a giant string if you
like.  Maybe I'm OK then, because characters are not structures that can
straddle a boundary?  Or should I only allocate an exact segment multiple
for an array of char?

Thanks Walter!

Mark


Aug 09 2001
parent reply "Walter" <walter digitalmars.com> writes:
The rule applies to the entire size of the object, not the sizes of its
individual components. -Walter


Aug 10 2001
next sibling parent reply Mark Evans <mevans zyvex.com> writes:
Walter,

Thanks.

Is that a fundamental Win16 issue, or just a compiler issue that could be
improved?  It would be nice if huge pointers did not have this restriction.

As I understand what you are saying, the only valid huge memory blocks are N
times 64K in size (contiguous), up to the limit of 1 MB; and the behavior of
nonconforming huge blocks is undefined.

Mark
Aug 10 2001
parent reply "Walter" <walter digitalmars.com> writes:
No, the rule is if an array of objects is allocated, then 64k must be evenly
divisible by that object size. That is because offset arithmetic, as in
h->offset, cannot wrap. -Walter
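Since 64K is 2^16, the sizes that evenly divide it are exactly the powers of
two up to 65536.  One hedged sketch (the helper name is invented) of padding
an element size so that a huge array of it satisfies the rule:

```c
#include <assert.h>

/* 64K = 2^16, so its divisors are exactly the powers of two up to
   65536.  Round an element size up to the next power of two before
   allocating a huge array of it, so that no element straddles a 64K
   boundary. */
static unsigned long pad_to_power_of_two(unsigned long n)
{
    unsigned long p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

For instance, a 6-byte struct would be padded to 8 bytes, and 0x10000 % 8 == 0.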

Aug 10 2001
parent reply Mark Evans <mevans zyvex.com> writes:
Then I am back to square one.  All I have is an array of characters.  The
object size of a character object is 1.  Any number is evenly divisible by 1.
So I guess I don't have to worry?

Thanks,

Mark


Aug 10 2001
parent reply "Walter" <walter digitalmars.com> writes:
Not about that particular problem, no. Are you able to identify a particular
line of code where you're getting a segment wrap?

Aug 10 2001
parent reply Mark Evans <mevans zyvex.com> writes:
Not really but I could look harder.

Again, any design tips about how DM treats huge pointers would be useful.

I'm not too worried about this because the bug is probably mine and pertains
to some obscure, rare situation that only happens after several thousand
calls have been made.  I just wanted to pulse you before starting a
full-scale investigation.  Any tips about huge pointer behavior/design would
help.  Already I've learned some new things about them.

As far as I know right now, my circular buffer code is perfect and works
indefinitely (rolling over and over and over) when the size is < 1 segment.
The identical C code works fine for a long time when the buffer is > 1
segment, but not forever.  After several thousand calls something goes
wrong.  I will look into it.

Mark



Aug 10 2001
parent "Walter" <walter digitalmars.com> writes:
Huge pointer arithmetic should work or fail, not work 999 times and fail
once. It sounds like you have a program bug. Look for uninitialized
variables, dangling pointers, etc. -Walter

Aug 10 2001
prev sibling parent Mark Evans <mevans zyvex.com> writes:
Here is my candidate for a preprocessor macro to enforce the rule.

#ifndef ROUND_TO_NEXT_64K_MULTIPLE
/* Round a requested size up to a whole number of 64K segments;
   sizes already on a segment boundary are left unchanged. */
#define ROUND_TO_NEXT_64K_MULTIPLE( size ) \
    (((unsigned long int)(size) + 0xFFFFUL) & 0xFFFF0000UL)
#endif

Should this macro be included in the Digital Mars headers somewhere?

It could also be written as an inline function.
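One possible inline-function version (an illustrative sketch matching the
macro's intent, not an existing library routine; sizes already on a segment
boundary are left unchanged):

```c
#include <assert.h>

/* Inline-function equivalent of the macro: round an allocation
   request up to a whole number of 64K segments.  A size that is
   already an exact multiple of 64K is returned unchanged. */
static unsigned long round_to_next_64k_multiple(unsigned long size)
{
    return (size + 0xFFFFUL) & 0xFFFF0000UL;
}
```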

Mark
Aug 10 2001