I did almost exactly the same thing to support my XML parser.
The parser allocates many little segments of storage, and to avoid the
problems of plain malloc/free and to be really fast, I keep all areas of the
same length in queues from which I can easily reuse them (sizes rounded up to
multiples of 32, giving 20 queues for the sizes 32 through 640; larger areas
are handled differently). The original malloc/free may be very slow, but since I only
use it to obtain large chunks of memory and then do my own management within
them, that doesn't really matter. And all the large chunks are simply freed
when the XML parser terminates.
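The size-class scheme described above might look roughly like the following sketch. It is simplified (it calls malloc per block instead of carving blocks out of large chunks, and the names `pool_alloc`/`pool_free` are illustrative, not the parser's real API), but it shows the core idea: round the request up to a multiple of 32, and reuse freed blocks via one queue per size class.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CLASS_SIZE   32
#define NUM_CLASSES  20                        /* classes 32, 64, ..., 640 */
#define MAX_POOLED   (CLASS_SIZE * NUM_CLASSES)

/* A freed block is reused as a queue node; 32 bytes is plenty for a pointer. */
typedef struct node { struct node *next; } node;

static node *queues[NUM_CLASSES];              /* one free queue per size class */

static void *pool_alloc(size_t n)
{
    if (n == 0 || n > MAX_POOLED)
        return malloc(n);                      /* large areas handled differently */
    size_t cls = (n + CLASS_SIZE - 1) / CLASS_SIZE - 1;   /* class index */
    if (queues[cls]) {                         /* fast path: reuse a freed block */
        node *p = queues[cls];
        queues[cls] = p->next;
        return p;
    }
    return malloc((cls + 1) * CLASS_SIZE);     /* fresh block, rounded-up size */
}

static void pool_free(void *p, size_t n)
{
    if (p == NULL)
        return;
    if (n == 0 || n > MAX_POOLED) {
        free(p);                               /* large areas go straight back */
        return;
    }
    size_t cls = (n + CLASS_SIZE - 1) / CLASS_SIZE - 1;
    node *q = p;                               /* push onto the class's queue */
    q->next = queues[cls];
    queues[cls] = q;
}
```

The point of the queues is that a free followed by an alloc of the same class is just a pointer push and pop, with no searching or coalescing.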
Also important: the original malloc may be replaced by a site-specific storage
management function (for example one specified by the installation, to be used
under IMS control) that is even slower than C's malloc/free (again, that
doesn't matter), because the XML parser can be configured to use such a storage
management function. It can even be configured to take all its storage from a
pre-allocated static area, if necessary.
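One common way to make the chunk allocator pluggable like that is a pair of function-pointer hooks that default to the C library but can be pointed at a site-specific manager (or at a bump allocator over a static area). This is only an assumed sketch of the technique; all names here are illustrative, not the parser's actual configuration interface.

```c
#include <stdlib.h>

/* Hooks through which the parser obtains its large chunks. */
typedef void *(*alloc_fn)(size_t size, void *ctx);
typedef void  (*free_fn)(void *p, void *ctx);

typedef struct {
    alloc_fn alloc;
    free_fn  release;
    void    *ctx;          /* e.g. the static area for a bump allocator */
} storage_mgr;

/* Defaults: plain C library malloc/free. */
static void *default_alloc(size_t n, void *ctx) { (void)ctx; return malloc(n); }
static void  default_free(void *p, void *ctx)   { (void)ctx; free(p); }

static storage_mgr mgr = { default_alloc, default_free, NULL };

/* Installation code calls this before parsing starts to install
   its own storage management function. */
void set_storage_manager(alloc_fn a, free_fn f, void *ctx)
{
    mgr.alloc   = a;
    mgr.release = f;
    mgr.ctx     = ctx;
}
```

Because only the large-chunk requests go through these hooks, even a slow site-specific function costs almost nothing: the per-segment traffic is handled by the queues.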
As a result of all these efforts, the parser is about three times faster than,
for example, Xerces. I didn't measure this myself; my customer did and told me.
On Tuesday, 26 February 2013 at 20:04, you wrote:
> R. -- that's an interesting idea. The compiler + RTL would have to
> rigorously enforce the boundaries of the AREA under repeated ALLOCs and
> FREEs, else extremely mysterious bugs would occur. That would require
> either AREA descriptors held by the RTL, or RTL function call parameters to
> define them.
> It reminds me of our home-brew substitute for C's malloc()/free(), which we
> did as a defensive measure in the face of badly implemented ones from
> compiler vendors. Basically it allocates very large blocks using malloc(),
> then retails them via our version of malloc()/free(). It's not as
> efficient as really good malloc()/free()s from vendors, but it's much
> better than some we've encountered. We can turn it on and tune its block
> sizes etc. with environment variables, when we find ourselves in a hostile
> environment. :) (We also cache frequently-used sizes for reuse, to cut down
> on the churn.)