Thank you all for this discussion; I find it very interesting to read all your
comments. But my own opinions follow those of Mr. Blair almost 100 percent.
I have been in the tuning business for years. But we NEVER achieved a significant
overall optimization of our applications by changing a heavily used low-level component
from HLL to ASSEMBLER. It was always other things, for example:
- eliminating unused logic entirely
- replacing sequential lookups in linked lists with faster algorithms (a small C sketch follows this list)
- caching information somewhere
- tuning bad components, no matter whether HLL or ASSEMBLER
- identifying and improving bad IBM components!!
and so on.
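
To illustrate the linked-list point: below is a tiny C sketch (not our real code;
all names are invented for illustration) that contrasts the O(n) sequential scan we
typically found with the kind of simple chained hash table that replaced it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical record type, invented for this example. */
struct rec {
    char key[16];
    int value;
    struct rec *next;   /* used both as list link and hash-bucket link */
};

/* The "before" case: O(n) sequential scan of a linked list. */
struct rec *list_find(struct rec *head, const char *key)
{
    for (struct rec *p = head; p != NULL; p = p->next)
        if (strcmp(p->key, key) == 0)
            return p;
    return NULL;
}

/* The "after" case: a small fixed-size hash table with chaining,
 * O(1) lookups on average. */
#define NBUCKET 257

static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % NBUCKET;
}

static struct rec *table[NBUCKET];

void table_insert(struct rec *r)
{
    unsigned h = hash(r->key);
    r->next = table[h];
    table[h] = r;
}

struct rec *table_find(const char *key)
{
    for (struct rec *p = table[hash(key)]; p != NULL; p = p->next)
        if (strcmp(p->key, key) == 0)
            return p;
    return NULL;
}

int main(void)
{
    struct rec *r = malloc(sizeof *r);
    strcpy(r->key, "ABC");
    r->value = 42;
    table_insert(r);

    struct rec *found = table_find("ABC");
    printf("%s -> %d\n", found ? found->key : "?", found ? found->value : 0);
    free(r);
    return 0;
}

In our cases the gain came from the change of algorithm, not from the language
the lookup happened to be written in.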
Regarding HLL or ASSEMBLER: if it is ASSEMBLER and it works, we leave it alone.
If it has to be maintained and the original author has retired, we look at the
code and decide whether it can easily be maintained by another person. If not, we
consider rewriting it in PL/1 or C. In that case it can be redesigned, and, of
course, the performance has to be measured.
This is our pragmatic approach, and it works.
Regarding optimization: Mr. G. asked who today would want a compiler that
does no optimization. In fact, we probably do need OPT(0), because we have a home-grown
debug tool that needs each PL/1 statement to remain a contiguous block of
machine code. So in some special cases a compiler that does no optimization
(that is, no code rearrangement) can make sense.
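
For readers more at home in C than PL/1, here is a small analogy (our tool works
on the PL/1 compiler's output; the gcc options below are only my assumption for
illustration):

#include <stdio.h>

/* With no optimization (e.g. gcc -O0), each statement below compiles to its
 * own contiguous block of instructions, so a tool that maps statement numbers
 * to address ranges still works.  With optimization (e.g. gcc -O2) the
 * compiler may turn the whole loop into the constant formula n*(n+1)/2,
 * drop 'tmp' completely, and reorder what is left, and then the simple
 * statement-to-code mapping is gone. */
int sum_first_n(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        int tmp = i;      /* likely optimized away at -O2 */
        sum += tmp;
    }
    return sum;
}

int main(void)
{
    printf("%d\n", sum_first_n(100));   /* prints 5050 */
    return 0;
}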
Kind regards
Bernd
W.B. wrote:
> (2) It is difficult for even the very best Assembler programmers
> to construct a problem solution in Assembler that performs
> better than one in a HLL provided there is no prior constraint
> on the data structures that must be used. With the ability to
> declare the data structures to be used in the implementation
> of the solution as they desire, high level language programmers
> using even old optimizing compilers can produce solutions that
> outperform the very best hand-coded, hand-optimized Assembler
> solutions.
>
>