Hi! I have a new idea: why not distinguish between null and 0.0, with x * null = null? Here null is only a compile-time constant that can be used to remove unused parts of floating-point computations, i.e. all inputs to a floating-point computation that are not used can be set to null before compiling it. -Jochen
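To make the idea concrete, here is a minimal sketch in Python. The names `NULL`, `mul`, and `add` are hypothetical, chosen just for illustration: `NULL` stands for the proposed compile-time constant (distinct from 0.0), multiplication absorbs it, and addition drops the dead operand, so the pruning can happen before the computation is compiled.

```python
# Hypothetical compile-time marker, distinct from the float 0.0.
NULL = object()

def mul(a, b):
    # x * null is null, so the whole product subtree is pruned,
    # not turned into 0.0.
    if a is NULL or b is NULL:
        return NULL
    return a * b

def add(a, b):
    # An addition with a null operand keeps only the live operand.
    if a is NULL:
        return b
    if b is NULL:
        return a
    return a + b

# Example: y = a*x1 + b*x2. If the input x2 is unused,
# set it to NULL before "compiling" and the b*x2 branch vanishes.
a, x1, b = 2.0, 3.0, 5.0
x2 = NULL
y = add(mul(a, x1), mul(b, x2))
print(y)  # 6.0 -- only the live branch a*x1 survives
```

The key contrast with setting unused inputs to 0.0 is that 0.0 still forces the multiplication and addition to be evaluated (and can interact badly with NaN or infinity), whereas null removes the dead part of the expression entirely at compile time.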