Compiler Insertions for Huge Pointers
August 08, 2001
Walter,

I'm having some mysterious hang-ups which seem to disappear when I shrink my huge pointer blocks to less than a segment in size.  This leads me to ask about the code inserted by the compiler to handle huge pointers.  Could you give me some feeling for the nature of this code?

I'm using a huge pointer block as a circular buffer.  When this buffer is < 1 segment, it runs indefinitely and without problems.  When the buffer is > 1 segment long, there is a repeatable hang-up which occurs.

The bug is probably mine but if there is anything I can learn about the compiler's behavior it might give me some clues.

Mark



August 08, 2001
(In particular, is there any possibility of memory manager invocations or of my block being moved around, and if so how would I lock it.)




On Wed, 08 Aug 2001 17:37:07 GMT, Mark Evans <mevans@zyvex.com> wrote:
> Walter,
> 
> I'm having some mysterious hang-ups which seem to disappear when I shrink my huge pointer blocks to less than a segment in size.  This leads me to ask about the code inserted by the compiler to handle huge pointers.  Could you give me some feeling for the nature of this code?
> 
> I'm using a huge pointer block as a circular buffer.  When this buffer is < 1 segment, it runs indefinitely and without problems.  When the buffer is > 1 segment long, there is a repeatable hang-up which occurs.
> 
> The bug is probably mine but if there is anything I can learn about the compiler's behavior it might give me some clues.
> 
> Mark
> 
> 
> 


August 08, 2001
The easiest way is to compile your huge pointer code with -gl, and run OBJ2ASM on the output. You'll see just what code is generated for each line of source.

-Walter

"Mark Evans" <mevans@zyvex.com> wrote in message news:1103_997292227@dphillips...
> Walter,
>
> I'm having some mysterious hang-ups which seem to disappear when I shrink my huge pointer blocks to less than a segment in size.  This leads me to ask about the code inserted by the compiler to handle huge pointers.  Could you give me some feeling for the nature of this code?
>
> I'm using a huge pointer block as a circular buffer.  When this buffer is < 1 segment, it runs indefinitely and without problems.  When the buffer is > 1 segment long, there is a repeatable hang-up which occurs.
>
> The bug is probably mine but if there is anything I can learn about the compiler's behavior it might give me some clues.
>
> Mark


August 08, 2001
Walter,

This is asking me to reverse-engineer something which you wrote.

All I need are a few philosophical tips about the design of your huge pointer code.  Only then would doing what you suggest even be worthwhile.  Otherwise I am reverse engineering in the blind.  I'm not that much of a Win16 expert to begin with, and not intimate with x86 assembly (much more Motorola / DSP assembly experience than Intel x86).

I do wonder whether some DS == SS type issue could be causing problems at critical points when the compiler insertions have to compute offsets.

Thanks,

Mark


On Wed, 8 Aug 2001 11:49:32 -0700, "Walter" <walter@digitalmars.com> wrote:
> The easiest way is to compile your huge pointer code with -gl, and run OBJ2ASM on the output. You'll see just what code is generated for each line of source.
> 
> -Walter


August 09, 2001
It's difficult to understand what's happening with huge pointers without knowing what code is generated for them, at least that's the way it is for me <g>.

But there is something else you need to be aware of with huge pointers. The objects you point to with them must have a size that evenly divides into 64k. In other words, objects cannot straddle a 64k boundary, they must sit wholly on one side or the other.

-Walter


"Mark Evans" <mevans@zyvex.com> wrote in message news:1103_997300825@dphillips...
> Walter,
>
> This is asking me to reverse-engineer something which you wrote.
>
> All I need are a few philosophical tips about the design of your huge pointer code.  Only then would doing what you suggest even be worthwhile.  Otherwise I am reverse engineering in the blind.  I'm not that much of a Win16 expert to begin with, and not intimate with x86 assembly (much more Motorola / DSP assembly experience than Intel x86).
>
> I do wonder whether some DS == SS type issue could be causing problems at critical points when the compiler insertions have to compute offsets.
>
> Thanks,
>
> Mark


August 09, 2001
Ah, that is a critical piece of knowledge.  I have been using arbitrary sizes, not segment multiples.

The runtime library (_halloc) should burp if the size requested is not a segment multiple.  If not that, it should automatically increase the caller's request to equal the next highest segment multiple.

In my case the "object" is just an array of chars, a giant string if you like.  Maybe I'm OK then, because characters are not structures that can straddle a boundary?  Or should I only allocate an exact segment multiple for an array of char?

Thanks Walter!

Mark


On Thu, 9 Aug 2001 10:50:36 -0700, "Walter" <walter@digitalmars.com> wrote:
> It's difficult to understand what's happening with huge pointers without knowing what code is generated for them, at least that's the way it is for me <g>.
> 
> But there is something else you need to be aware of with huge pointers. The objects you point to with them must have a size that evenly divides into 64k. In other words, objects cannot straddle a 64k boundary, they must sit wholly on one side or the other.
> 
> -Walter
> 


August 10, 2001
The rule applies to the entire size of the object, not the sizes of its individual components. -Walter

Mark Evans wrote in message <1103_997386537@dphillips>...
>Ah, that is a critical piece of knowledge.  I have been using arbitrary sizes, not segment multiples.
>
>The runtime library (_halloc) should burp if the size requested is not a segment multiple.  If not that, it should automatically increase the caller's request to equal the next highest segment multiple.
>
>In my case the "object" is just an array of chars, a giant string if you like.  Maybe I'm OK then, because characters are not structures that can straddle a boundary?  Or should I only allocate an exact segment multiple for an array of char?
>
>Thanks Walter!
>
>Mark


August 10, 2001
Walter,

Thanks.

Is that a fundamental Win16 issue, or just a compiler issue that could be improved?  It would be nice if huge pointers did not have this restriction.

As I understand what you are saying, the only valid huge memory blocks are N times 64K in size (contiguous) up to the limit of 1 MB; and behavior of nonconforming huge blocks is undefined.

Mark


August 10, 2001
Here is my candidate for a preprocessor macro to enforce the rule.

#ifndef ROUND_TO_NEXT_64K_MULTIPLE
#define ROUND_TO_NEXT_64K_MULTIPLE( size ) \
(((unsigned long int)(size) + (unsigned long int)0xFFFFL) & ~(unsigned long int)0xFFFFL)
#endif

Should this macro be included in the Digital Mars headers somewhere?

It could also be written as an inline function.

Mark


August 10, 2001
No, the rule is that if an array of objects is allocated, then 64k must be evenly divisible by the object size. That is because the offset arithmetic in an expression like h->offset cannot wrap within a segment. -Walter

Mark Evans wrote in message <1103_997452984@dphillips>...
>Walter,
>
>Thanks.
>
>Is that a fundamental Win16 issue, or just a compiler issue that could be improved?  It would be nice if huge pointers did not have this restriction.
>
>As I understand what you are saying, the only valid huge memory blocks are N times 64K in size (contiguous) up to the limit of 1 MB; and behavior of nonconforming huge blocks is undefined.
>
>Mark

