huge    Non-Standard Type Modifier
A huge data item may be located anywhere in memory; it is not assumed
to reside in the current data segment, and it can exceed 64K bytes in
size. The huge modifier applies only to data, not to functions.
Huge pointers are similar to far pointers, in that both are 32-bit
pointers containing a 16-bit segment address and a 16-bit offset.
Huge pointers, however, are normalized, which means that as much of
their value as possible is kept in the segment portion. As a result,
the offset portion of a huge pointer is always in the range 0 to 15,
because segments begin on 16-byte boundaries (paragraphs).
As a result of the normalization of huge pointers, there is only one
huge pointer for each memory address. That's quite different from
the situation with far pointers, where many different segment:offset
pairs can refer to the same address. The following ramifications
arise from the fact that there can be only one huge pointer for each
address:
 - The == and != operators work correctly with huge pointers; they
   do not always work correctly with far pointers.
 - The <, <=, >, and >= operators yield correct results with huge
   pointers.
 - Most important, huge pointers do not "wrap around." Because of
   normalization, the segment portion of a huge pointer is adjusted
   every time the offset wraps from 15 back to 0. Since the segment
   portion is continually adjusted, you can use huge pointers to work
   with data structures greater than 64K in size.
Notes: The price of using huge pointers is the additional overhead
       of normalization. Huge pointer operations are noticeably
       slower than far or near pointer operations.
-------------------------------- Example ---------------------------------
   char huge table[70000];     /* array larger than 64K */
   char huge *hpc;             /* huge pointer to char */
   char huge **hppc;           /* pointer to a huge pointer to char */
   char huge * huge *hphpc;    /* huge pointer to a huge pointer to char */
   long int huge *check();     /* function returning a huge pointer to long */
   int i;
   i = sizeof(char huge *);    /* size of a huge pointer: 4 bytes */
This page last updated on Fri Nov 30 10:48:32 MSK 2001
Copyright © 1992-2001, Vitaly Filatov, Moscow, Russia