A Tour of NTL: NTL Implementation and Portability
NTL is designed to be portable, fast, and relatively easy to use and extend.
To make NTL portable, no assembly code is used (well, almost none, see below). This is highly desirable, as architectures are constantly changing and evolving, and maintaining assembly code is quite costly. By avoiding assembly code, NTL should remain usable, with virtually no maintenance, for many years.
NTL makes very conservative requirements of the C++ compiler. The few additional assumptions NTL makes about the underlying platform are controlled by configuration flags, described below.
The configuration flag NTL_CLEAN_INT is currently off by default.
When this flag is off, NTL makes another requirement of its platform; namely, that conversions from unsigned long to long convert the bit pattern without change to the corresponding 2's complement signed integer. Note that the C++ standard defines the behavior of converting unsigned to signed values as implementation-defined when the value cannot be represented in the range of nonnegative signed values. Nevertheless, this behavior is essentially universal, and more importantly, it is not undefined behavior: implementation-defined behavior must be documented and respected by the compiler, while undefined behavior can be exploited by the compiler in some surprising ways.
Actually, with NTL_CLEAN_INT off, it is also assumed that right shifts of signed integers are consistent, in the sense that if right shift is sometimes an arithmetic shift, then it is always an arithmetic shift (the installation scripts check if right shift appears to be arithmetic, and if so, this assumption is made elsewhere). Arithmetic right shift is also implementation-defined behavior that is essentially universal.
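The following small sketch (not NTL code) illustrates both assumptions; note that C++20 now guarantees both behaviors outright, since it mandates a 2's complement representation:

   #include <cassert>

   int main()
   {
      // Assumption 1: unsigned -> signed conversion preserves the bit
      // pattern, yielding the corresponding 2's complement value
      // (implementation-defined before C++20, but essentially universal).
      unsigned long u = ~0UL;   // all bits set
      long s = (long) u;
      assert(s == -1);

      // Assumption 2: right shift of a negative signed integer is an
      // arithmetic (sign-extending) shift.
      long x = -8;
      assert((x >> 1) == -4);

      return 0;
   }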
It seems fairly unlikely that one would ever have to turn the NTL_CLEAN_INT flag on, but it seems a good idea to make this possible, and at the very least to identify and isolate the code that relies on this assumption. The only code affected by this flag is the traditional LIP long integer package (which, if you use GMP as the long integer package, is not involved), and the single-precision modular multiplication routines.
Note that prior to NTL 9.0, the default compilation mode required that in a few critical places, signed integer arithmetic quietly wraps around on overflow; however, signed integer overflow is undefined behavior, and it seems that in recent years compilers have been getting more aggressive in exploiting such undefined behavior in their optimizations. Moreover, recent versions of GCC now come with a "sanitizer" that checks for undefined behavior. So, both to avoid potentially dangerous optimizations and to allow NTL to pass such sanitizer checks, it seemed safer to move to this more conservative approach. There should, in fact, be zero performance penalty in doing so. Also note that I was never aware of any compiler that generated incorrect code under the pre-9.0 assumptions: this new approach is just to be on the safe side in the future.
The configuration flag NTL_CLEAN_PTR is currently off by default.
When this flag is off, NTL makes another requirement of its platform; namely, that the address space is "flat", and in particular, that one can test if an object pointed to by a pointer p is located in an array of objects v[0..n-1] by testing if p >= v and p < v + n. The C++ standard does not guarantee that such a test will work; the only way to perform this test in a standard-conforming way is to iteratively test if p == v, p == v+1, etc.
This assumption of a "flat" address space is essentially universally valid, and making this assumption leads to more efficient code. For this reason, the NTL_CLEAN_PTR flag is off by default, but one can always turn it on, and in fact, the overall performance penalty should be negligible for most applications.
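As a sketch (not NTL's actual code), the two versions of the membership test might look like this:

   #include <cstddef>

   // Fast test (NTL_CLEAN_PTR off): relies on a flat address space, so
   // that relational comparison of pointers into different objects
   // behaves sensibly. Not guaranteed by the C++ standard.
   bool in_array_fast(const int* p, const int* v, std::size_t n)
   {
      return p >= v && p < v + n;
   }

   // Conforming test (NTL_CLEAN_PTR on): only equality comparison of
   // arbitrary pointers is fully defined, so test each address in turn.
   bool in_array_clean(const int* p, const int* v, std::size_t n)
   {
      for (std::size_t i = 0; i < n; i++)
         if (p == v + i) return true;
      return false;
   }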
NTL uses floating point arithmetic in a number of places, including a number of exact computations, where one might not expect to see floating point. Relying on floating point may seem prone to errors, but with the guarantees provided by the IEEE standard, one can prove the correctness of the NTL code that uses floating point.
Briefly, the IEEE floating point standard says that basic arithmetic operations on doubles should work as if the operation were performed with infinite precision, and then rounded to p bits, where p is the precision (typically, p = 53).
Throughout most of NTL, correctness follows from weaker assumptions: namely, that the basic arithmetic operations and conversions from integral types produce results with small relative error (roughly 2^(-p)), that multiplication and division by powers of 2 are exact (barring overflow and underflow), and that arithmetic on integers represented as doubles is exact, so long as the results do not exceed 2^p in absolute value.
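For example (a small illustration, not NTL code), the exactness property for integer arithmetic in doubles can be seen as follows:

   #include <cassert>
   #include <cmath>

   int main()
   {
      // Integers of absolute value up to 2^53 are represented exactly as
      // doubles, and sums and products that stay below that bound are
      // computed exactly.
      double a = 3037000499.0;
      double b = 2968500123.0;
      assert(a + b == 6005500622.0);     // exact: well below 2^53

      // Once a result needs more than 53 bits, rounding kicks in:
      double big = std::ldexp(1.0, 53);  // 2^53
      assert(big + 1.0 == big);          // 2^53 + 1 rounds back to 2^53

      return 0;
   }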
It is also generally assumed that the compiler does not do too much "regrouping" of arithmetic expressions involving floating point. Most compilers respect the implied grouping of floating point computations, and NTL goes out of its way to make its intentions clear: instead of x = (a + b) + c, if the grouping is truly important, this is written as t = a + b; x = t + c. Current standards do not allow, and most implementations will not perform, any regrouping of this into, say, x = a + (b + c), since in floating point, addition and subtraction are not associative.
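A small example (not NTL code) of why such regrouping is unsafe:

   #include <cstdio>

   int main()
   {
      double a = 1.0e16, b = -1.0e16, c = 1.0;

      // Floating point addition is not associative, so the grouping
      // written in the source must be respected.
      double t = a + b;         // exactly 0.0
      double x1 = t + c;        // 1.0
      double x2 = a + (b + c);  // b + c rounds to -1.0e16, so x2 == 0.0

      std::printf("%g %g\n", x1, x2);   // prints: 1 0
      return 0;
   }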
Unfortunately, some compilers do not do this correctly, unless you tell them. With Intel's C compiler icc, for example, you should compile NTL with the flag -fp-model strict to enforce strict adherence to floating point standards. That said, some effort has been made to ensure that NTL works correctly even if the compiler does perform such regrouping, including replacement of x/y by x*(1/y).
Also, you should be wary of compiling using an optimization level higher than the default -O2 -- this may break some floating point assumptions (and maybe some other assumptions as well).
In any case, programs that compile against NTL header files should compile correctly, even under very aggressive optimizations.
One big problem with the IEEE standard is that it allows intermediate quantities to be computed in a higher precision than the standard double precision. Most platforms today implement the "strict" IEEE standard, with no excess precision. Up until recently, the Intel x86 machine with the GCC compiler was a notable exception to this: on older x86 machines, floating point was performed using the x87 FPU instructions, which operate on 80-bit, extended precision numbers; nowadays, most compilers use the SSE instructions, which operate on the standard, 64-bit numbers.
Historically, NTL went out of its way to ensure that its code is correct with both "strict" and "loose" IEEE floating point. This is achieved in a portable fashion throughout NTL, except for the quad_float module, where some desperate hacks, including assembly code, may be used to try to work around problems created by "loose" IEEE floating point [more details]. But note that even if the quad_float package does not work correctly because of these problems, the only other routines that are affected are the LLL_QP routines in the LLL module -- the rest of NTL should work fine. Hopefully, because of the newer SSE instructions, this whole strict/loose issue is a thing of the past.
Another problem is that some hardware (especially newer Intel chips) supports fused multiply-add (FMA) instructions. Again, this is only a problem for quad_float, and some care is taken to detect the problem and to work around it. The rest of NTL will work fine regardless.
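To illustrate the issue (this is a classic detection trick, not necessarily how NTL's installation scripts do it): if the compiler contracts x*y + z into a single FMA, the product is not rounded before the addition, and the difference is observable:

   #include <cstdio>
   #include <cmath>

   int main()
   {
      // x = 1 + 2^-30, so x*x = 1 + 2^-29 + 2^-60, which rounds to
      // 1 + 2^-29 in double precision.
      volatile double x = 1.0 + std::ldexp(1.0, -30);
      double x2 = x * x;          // rounded product
      double resid = x * x - x2;  // 0.0 if the product is rounded first;
                                  // 2^-60 if an FMA computes x*x - x2
      std::printf("residual = %g\n", resid);
      return 0;
   }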
Mostly, NTL does not require that the IEEE floating point special quantities "infinity" and "not a number" are implemented correctly. This is certainly the case for core code where floating point arithmetic is used for exact (but fast) computations, as the numbers involved never get too big (or small). However, the behavior of certain explicit floating point computations (e.g., the xdouble and quad_float classes, and the floating point versions of LLL) will be much more predictable and reliable if "infinity" and "not a number" are implemented correctly.
NTL makes fairly consistent use of asymptotically fast algorithms.
Long integer multiplication is implemented using the classical algorithm, crossing over to Karatsuba for very big numbers. Long integer division is currently only implemented using the classical algorithm -- unless you use NTL with GMP (version 3 or later), which employs an algorithm that is about twice as slow as multiplication for very large numbers.
Polynomial multiplication and division are carried out using a combination of the classical algorithm, Karatsuba, the FFT using small primes, and the FFT using the Schönhage-Strassen approach. The choice of algorithm depends on the coefficient domain.
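This dispatch is invisible to the caller. In the following sketch (the parameter choices here are arbitrary), mul selects an appropriate algorithm automatically based on the degrees and the modulus:

   #include <NTL/ZZ_pX.h>
   #include <iostream>

   using namespace NTL;

   int main()
   {
      // Arithmetic modulo a random 200-bit prime.
      ZZ_p::init(GenPrime_ZZ(200));

      ZZ_pX f, g, h;
      random(f, 1000);   // random polynomial of degree < 1000
      random(g, 1000);
      mul(h, f, g);      // NTL picks classical/Karatsuba/FFT internally

      std::cout << deg(h) << "\n";   // 1998, with overwhelming probability
      return 0;
   }

Compile and link in the usual way, e.g., g++ prog.cpp -lntl -lgmp -lm.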
Many algorithms employed throughout NTL are inventions of the author (Victor Shoup) and his colleagues Joachim von zur Gathen and Erich Kaltofen, as well as John Abbott and Paul Zimmermann.
As of v7.0, NTL is thread safe. That said, there are several things to be aware of: thread safety is not enabled by default, but must be switched on by configuring NTL with the NTL_THREADS flag turned on, and it requires a C++11 compiler with working thread_local support.
As of v9.5.0, NTL provides a thread boosting feature. With this feature, certain code within NTL will use available threads to speed up computations on a multicore machine. This feature is enabled by setting NTL_THREAD_BOOST=on during configuration. See BasicThreadPool.txt for more information.
This feature is a work in progress. Currently, basic ZZ_pX arithmetic has been thread boosted. More code will be boosted later.
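As a sketch (assuming NTL was configured with NTL_THREADS=on and NTL_THREAD_BOOST=on), an application enables boosting simply by requesting a thread pool:

   #include <NTL/BasicThreadPool.h>
   #include <NTL/ZZ_pX.h>

   using namespace NTL;

   int main()
   {
      SetNumThreads(4);   // make a pool of 4 threads available

      // Thread-boosted code, such as ZZ_pX multiplication, will now
      // use the pool automatically.
      ZZ_p::init(GenPrime_ZZ(1000));
      ZZ_pX f, g, h;
      random(f, 50000);
      random(g, 50000);
      mul(h, f, g);

      return 0;
   }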
As of v8.0, NTL provides error handling through exceptions. To enable exceptions, you have to configure NTL with the NTL_EXCEPTIONS flag turned on. By default, exceptions are not enabled, and NTL reverts to its old error handling method: abort with an error message.
If exceptions are enabled, then instead of aborting your program, an appropriate exception is thrown. More details on the programming interface of this feature are available here.
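For example (a small sketch, assuming NTL was configured with NTL_EXCEPTIONS=on), an error such as a failed modular inversion can be caught rather than aborting:

   #include <NTL/ZZ.h>
   #include <iostream>

   using namespace NTL;

   int main()
   {
      try {
         ZZ a = conv<ZZ>(4), n = conv<ZZ>(8), x;
         InvMod(x, a, n);   // 4 is not invertible mod 8 -> error raised
      }
      catch (const std::exception& e) {
         // NTL's exception classes derive from the standard ones.
         std::cerr << "caught: " << e.what() << "\n";
      }
      return 0;
   }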
If you enable exceptions, you must use a C++11 compiler. Specifically, your compiler will need support for lambdas (which are used to conveniently implement the "scope guard" idiom), and your compiler should implement the new default exception specification semantics (namely, that destructors are "noexcept" by default).
Implementation of this required a top-to-bottom scrub of NTL's code, replacing a lot of old-fashioned code with more modern, RAII-oriented code (RAII = "resource acquisition is initialization").