ABSTRACTS as of 11 Sept. 2005
~~~~~~~~~
by Prof. W. Kahan
w k a h a n at e e c s dot b e r k e l e y dot e d u
Mathematics Dept. #3840, and (510) 642-5638
Elect. Eng. & Computer Science Dept. #1776
University of California
Berkeley CA 94720-1776
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
This file is intended to be extended ASCII printable on an IBM PC
in PC DOS. On a Mac use the MS LineDraw font; under MS Windows
2000 use the Terminal font.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
A Brief Tutorial on Gradual Underflow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prepared for ARITH 17, Tues. 28 June 2005,
and subsequently augmented
version dated 8 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/ARITH_17U.pdf
Contents: Page
$0: What are Gradual and Abrupt Underflow ? 2
Pictures of Gradual and Abrupt Underflow 4
$1: Why IEEE 754's Default is Gradual Underflow 5
Advantages of Gradual over Abrupt Underflow to Zero 6
$2: Implementations 7
IMPORTANT IMPLEMENTATION DETAIL: 9
All Floating-Point Traps can be Lightweight Traps 10
A Short Menu of Exception-Handling Options 11
$3: (Gradual) Underflow Avoidance in Applications 13
Examples of Programming Mistakes: 14
CAVEAT: 15
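The key property at stake in the tutorial can be sketched in a few
lines of Python (names come from Python's standard sys module; the
sketch is illustrative, not drawn from the tutorial itself):

```python
import sys

tiny = sys.float_info.min      # smallest positive NORMAL double, 2.0**-1022
x, y = 1.5 * tiny, tiny
d = x - y                      # half of tiny: a subnormal number,
                               # preserved by Gradual Underflow
assert d == 0.5 * tiny and d != 0.0
# The property Gradual Underflow preserves exactly:
assert (x - y == 0.0) == (x == y)
# Abrupt Underflow would flush d to zero and falsify that equivalence.
```

With Abrupt Underflow the test `x - y == 0` could succeed for unequal
x and y, which is one of the programming hazards the tutorial's
comparison of the two policies addresses.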
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Floating-Point Arithmetic Besieged by "Business Decisions"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A Keynote Address, prepared for the
IEEE-Sponsored ARITH 17 Symposium on Computer Arithmetic,
delivered on Mon. 27 June 2005 in Hyannis, Massachusetts
version dated 5 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/ARITH_17.pdf
Abstract:
Daunting technical impediments to productivity in scientific
and engineering computation have been exacerbated by a lack
of adequate support from most programming languages for
features of IEEE Standard 754 for Binary Floating-Point
Arithmetic. C99 and perhaps Fortran 2003 are exceptions.
Revisions to IEEE 754 are being contemplated to mitigate
some of the impediments. Among the innumerable issues being
addressed are ...
> Decimal Arithmetic merging commercial Fixed- into
Floating-Point.
> Flexible Floating-Point Exception Handling WITHOUT
requiring trap-handlers to be programmed by
applications programmers.
> Extended and Quadruple Precision to lift floating-point
roundoff analysis from the conscience of almost every
programmer.
> Aids to the localization of software modules perhaps
responsible for contaminating results suspected of
undue influence by roundoff.
But "Decision-Makers" ill-informed about our industry's
history underestimate how costly are consequences of short-
sighted "Business Decisions".
Contents: Page
Auxiliary Reading 2
The traditional treatment of floating-point
exceptions as errors is a short-sighted policy. 3
Exceptions become Errors only when mishandled. 4
Historical "Business Decisions" bad for
scientific and engineering computation 5
Values Excel 2000 Displays for Several Expressions 7
Uncertain Business Decisions 10
Why are roundoff-induced numerical anomalies
so hard to find and diagnose? 13
How do nearby singularities cause damage? 15
Hypothetical Case Study: Bits Lost in Space 20
Quadruple precision is an alternative to error analysis: 22
More Business Decisions 23
Epilogue 24
The essence of civilization is that we benefit from the
experience of others without having to relive it. 25
It is futile to single-step through floating-point
programs intended to run at gigaflops. 28
And nobody will be safe from them. 28
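The first bullet's point, that decimal floating-point behaves the way
commercial fixed-point users expect, can be sketched with Python's
decimal module (an illustration of the idea, not of the revised
standard's own types):

```python
from decimal import Decimal

# Binary doubles cannot represent 0.1 exactly, so commercial sums drift:
assert 0.1 + 0.2 != 0.3
# Decimal floating-point carries decimal fractions exactly:
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
# and it preserves significance the way fixed-point ledgers expect:
assert str(Decimal('1.30') + Decimal('1.20')) == '2.50'
```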
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
A Demonstration of Presubstitution for Infinity/Infinity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
version dated 5 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/Grail.pdf
Contents:
What is it? page 1
Alternatives 1
Ideal Language 2
Example: The Holy Grail 2
Here are the loops: 3
Presubstitution in the Loops 4
An Obvious Alternative 4
The Batched Forward Loop: 5
An Unobvious Alternative 6
Presubstitution as "Syntactic Sugar" 6
A Very Unobvious Alternative 7
How Presubstitution Should Work 8
Exceptions become Errors only when mishandled 8
Conclusion 10
CAVEAT: 10
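True presubstitution replaces an exceptional result inside the
arithmetic, with no test-and-branch in the user's code; ordinary
software can only mimic it after the fact. A hypothetical Python
mimicry of the idea (the helper's name is invented for illustration):

```python
import math

def presub_div(x, y, alt):
    """Divide, but deliver the presubstituted value alt in place of NaN.
    Software mimicry only: genuine presubstitution is declared in advance
    and applied by the arithmetic itself, not by a branch like this one."""
    q = x / y                  # in Python, inf/inf yields NaN;
                               # (finite)/0 raises instead, unlike IEEE 754
    return alt if math.isnan(q) else q

assert presub_div(math.inf, math.inf, 1.0) == 1.0   # presubstituted value
assert presub_div(6.0, 3.0, 1.0) == 2.0             # ordinary quotient
```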
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
How Futile are Mindless Assessments of Roundoff in
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Floating-Point Computation ? Why should we care?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What should we do?
~~~~~~~~~~~~~~~~~~
Presented on 23 May 2005 to the
Householder Symposium XVI in Seven Springs PA
version dated 5 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/Mind1ess.pdf
~
The purpose of this presentation is to persuade you to read
a much longer document, posted at
http://www.cs.berkeley.edu/~wkahan/Mindless.pdf ,
~
intended to persuade you to demand better support for the
diagnosis of numerical embarrassment, much of it due to
roundoff. Hardware conforming to IEEE Standard 754 for
Binary Floating-Point does support better diagnostic tools
than you are getting from programming languages (except
perhaps from a few implementations of C99) and program
development environments. That hardware support is
atrophying for lack of exercise. Use it or lose it.
Rather than try to persuade you that Rounding-Error Analysis
should be as important to you as it is to me, and therefore
deserves your generous support, I shall assume that you
would rather have nothing to do with it. Almost all students
of Mathematics and Computer Science incline this way.
I believe that, should a (presumably rare) numerical anomaly
embarrass you, you would prefer to determine as quickly and
quietly as possible (in case it's your own fault) whom to
blame. Rather than present a general assessment of ways to
diagnose and sometimes cure numerical embarrassments (you
can find that in my lengthy Mindless.pdf), I shall
titillate you with some examples drawn from that document.
Contents: Page
Errors Designed Not To Be Found 3
Values Excel 2000 Displays for Several Expressions 3
What's so special about 15 sig. dec. and 17 sig. dec. ? 6
Over-Zealous Compiler "Optimization" 7
Example of Pejoration by Over-Zealous "Optimization": 9
How can a programmer unaware of the "optimization"
debug it? 10
"Optimized" Register Spill 11
A Hypothetical Case Study: Bits Lost in Space 15
More on the web to read 17
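One of those examples turns on displayed precision. A minimal Python
sketch of the 15- versus 17-significant-decimal effect (the numbers
here are Python's, offered only to illustrate the phenomenon):

```python
x = 0.1
# To 15 significant decimals 0.1 looks exact, which is how software
# that displays at most 15 digits can hide binary roundoff entirely:
assert f"{x:.15g}" == "0.1"
# 17 significant decimals expose the double actually stored:
assert f"{x:.17g}" == "0.10000000000000001"
# and 17 digits always suffice to recover a double exactly:
assert float(f"{x:.17g}") == x
```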
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
The Baleful Influence of SPEC Benchmarks upon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Floating-Point Arithmetic
~~~~~~~~~~~~~~~~~~~~~~~~~
Prepared for presentation to SPEC 17 May 2005
version dated 5 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/SPECbnch.pdf
Three Challenges:
> How can SPEC benchmarks take Correctness and Robustness
into account as well as Speed?
> How can SPEC benchmarks inhibit petty "optimizations"
that turn into pejorations, which degrade the
correctness and mathematical integrity of numerical
software generally?
> How can SPEC benchmarks reward improved arithmetic
designs instead of eschewing them, thus penalizing
their designers?
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
How Futile are Mindless Assessments of Roundoff in
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Floating-Point Computation ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
version dated 5 July 2005
posted at http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
$0: Abstract
Redesigning computers costs less than retraining people, so
it behooves us to adapt computers to the way people have
evolved rather than try to adapt people to the way computers
have evolved. As the population of computer programmers has
grown, proficiency in rounding-error analysis has dwindled.
To compensate, better diagnostic aids should be
incorporated into hardware, into program development
environments, and into programming languages; but this is
not happening. Schemes to assist roundoff analysis are
beset by failure modes; no scheme is foolproof; only two
or three are worth trying. Alas, these few rely upon
hardware features built into IEEE Standard 754 for binary
floating-point but now atrophying for lack of adequate
linguistic support. Here extensive analyses of the genesis
of embarrassment due to roundoff, and of the failure modes
of all schemes devised so far to avert it, point clearly to
what needs doing next.
Contents : Page
$1: Introduction 1
$2: Errors Designed Not To Be Found 3
$3: Inscrutable Errors Generated by
Over-Zealous Compiler "Optimizations" 6
$4: Five Plausible Schemes 13
$5: J-M. Muller's Recurrence 14
$6: A Smooth Surprise 18
$7: Some More Spikes, and MATLAB's log2 21
$8: An "Old Hand" Accuses Division 27
$9: Repeated Randomized Rounding 34
$10: Cancellation is Not the Culprit 38
$11: A Case Study of Bits Lost in Space 42
$12: Mangled Angles 46
$13: Bloated Coffins 49
$14: Desperate Debugging 53
$15: Conclusion 55
At present, occasionally inaccurate floating-point software
of moderate complexity is difficult verging on impossible to
debug. If this state of affairs persists long enough to
become generally accepted as inevitable, the obligations of
Due Diligence will atrophy, and nobody will expect to be
held accountable for unobvious numerical malfunctions.
And nobody will be safe from them.
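The recurrence of $5 is the well-known example of J-M. Muller: in
exact arithmetic it converges to 6, yet every fixed-precision binary
arithmetic drives it to 100, so mindlessly rerunning it at other
precisions yields the same wrong answer. A sketch:

```python
# Muller's recurrence: x[n+1] = 111 - 1130/x[n] + 3000/(x[n]*x[n-1]).
# With x0 = 2, x1 = -4 the exact limit is 6; but 100 is the attracting
# fixed point, so any rounding error eventually drags iterates there.
x_prev, x = 2.0, -4.0
for _ in range(50):
    x_prev, x = x, 111.0 - 1130.0 / x + 3000.0 / (x * x_prev)
assert abs(x - 100.0) < 1e-3   # double precision lands on the wrong limit
```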
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"The Numerical Analyst as Computer Science Curmudgeon"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Presented in a 20-min. time-slot on Thurs. 5 Sept. 2002
to the E.E.&C.S. Dept. Research Fair acquainting grad.
students with the faculty and their research interests.
posted at http://www.cs.berkeley.edu/~wkahan/Curmudge.pdf
Numerical work by programmers inexperienced in numerical
computation is rendered unnecessarily hazardous (to the
programs' users) by archaic practices among programming
languages and compiler writers.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Simple Transistorized Ignition Retrofit for Old Cars
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
posted at http://www.cs.berkeley.edu/~wkahan/Transign.pdf
Used from 1969 to 1973 in a car originally fitted
with standard (non-electronic) ignition, this
circuit requires minimal modifications to the car's
existing ignition circuitry, and relieves the
breaker points of electric erosion so as to keep
ignition performance up to the level of a careful
tune-up without further adjustment for at least
50000 miles.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Abbreviated lecture notes on
Ellipsoidal Error Bounds for Trajectory Calculations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7 March 2002
This extract was originally prepared for presentation
in Oct. 1993
posted at http://www.cs.berkeley.edu/~wkahan/Math128/Ellipsoi.pdf
Abstract: A practical way is outlined to bound error
accrued during numerical calculations of trajectories.
The algorithm accommodates uncertainties in the governing
differential equation as well as error due to the numerical
process. The accrued error is constrained to lie within
an ellipsoid that can be proved to grow, as time t
increases to infinity, bigger than the worst possible
accrual by a factor no worse than 1 + b sqrt(t) for
some constant b , rather than exponentially bigger,
until nonlinearity in the differential equation forces a
singularity to manifest itself.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
About infinity, for schoolteachers.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7 March 2002
posted at http://www.cs.berkeley.edu/~wkahan/Infinity.pdf
This responds to a letter from a student whose schoolteacher
assured the class that blades of grass and pencils constituted
infinite sets because they were too numerous to be counted.
Alas, that mistake is all too common. This essay attempts
to explain infinity to schoolteachers so that, understanding
it better, they can explain it correctly to schoolchildren.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Names for Standardized Floating-Point Formats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
17 May 2001
posted at http://www.cs.berkeley.edu/~wkahan/ieee754status/Names.pdf
Abstract: I lack the courage to impose names upon
floating-point formats of diverse widths, precisions,
ranges and radices, fearing the damage that can be done
to one's progeny by saddling them with ill-chosen names.
Still, we need at least a list of the things we must name;
otherwise how can we discuss them without talking at cross-
purposes? These things include ...
Wordsize, Width, Precision, Range, Radix, etc.,
all of which must be both declarable by a computer programmer,
and ascertainable by a program through Environmental Inquiries.
And precision must be subject to truncation by Coercions.
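Such environmental inquiries already exist in some languages. For
instance, Python exposes them for its double-precision format (a
sketch of the idea, not a proposal for the names above):

```python
import sys

fi = sys.float_info
# Environmental inquiries for IEEE 754 double precision ("binary64"):
assert fi.radix == 2                              # the radix
assert fi.mant_dig == 53                          # precision in sig. bits
assert fi.max_exp == 1024 and fi.min_exp == -1021 # the exponent range
assert fi.dig == 15                               # sig. dec. always safe
```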
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
How Blabber-Mouth U-Boats got Sunk in World War II
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prepared for presentation to HKN,
the Elect. Eng. & Computer Sci. Student Society,
on Wed. 7 March 2001.
posted at http://www.cs.berkeley.edu/~wkahan/BlaUboat.pdf
Abstract: In 1945 when World War II ended,
I was a very lucky Canadian 12-year-old deeply
indebted to a generation who fought that war for
me. Afterwards some of them became my teachers,
professors, colleagues and friends whose
technological interests and activities overlapped
mine. And I became acquainted too with some who
had fought against us. Recounting their stories
before they fade from aging memories repays a
little towards a debt that can never be repaid
fully. Besides, some of their stories reveal
what were then close-kept secrets and are now
interesting arcana.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
MATHEMATICS WRITTEN IN SAND - the hp-15C, Intel 8087, etc.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
W. Kahan,
University of California @ Berkeley
Version of 22 Nov. 1983
posted at http://www.cs.berkeley.edu/~wkahan/MathSand.pdf
__________________________________________________________________
This paper was presented at the Joint Statistical Meeting of the
American Statistical Association with ENAR, WNAR, IMS and SSC ,
held in Toronto, Canada, August 15-18, 1983. Then the paper
appeared in pp. 12-26 of the 1983 Statistical Computing Section
of the Proceedings of the American Statistical Association. It had
been typeset on an IBM PC and printed on an EPSON FX-80 at draft
speed with an unreadable type-font of the author's devising, and
then photo-reduced. The paper is reproduced here unaltered but for
type fonts, pagination, and an appended Contents page.
__________________________________________________________________
CONTENTS:
Page Subject
1 ABSTRACT
2 INTRODUCTION
3 VisiCalc 1.10 anomaly
5 WHO'S TO BLAME
7 Degree 504 polynomial
9 Perverse subtraction
10 THE AREA OF A TRIANGLE
12 Theorem: Exact subtraction
13 Diff: a cure worse than the disease
14 FINANCIAL CALCULATORS
15 A Penny for your Thoughts
17 Yield from a Risky Investment
17 SOLVING EQUATIONS
18 Lemma & Theorem about Newton's Iteration
20 Secant Iteration Phenomenon
21 HP-15C [SOLVE] Key
22 THE [INTEGRATE] KEY
23 Example: the Error function
25 Can't be foolproof
26 THE CALCULATOR OWNER'S HANDBOOK
27 Counsel of too-near perfection
29 Inescapable anomaly
30 Tolerable noise
32 The Intimidation Factor
32 COMPLEX NUMBERS AND MATRICES
34 A Slab, a Strip, a Channel; Table 1
37 Robust formula for arccosh
38 Matrix examples
40 THE INTEL i8087
43 THE PROPOSED IEEE STANDARD
44 Floating-point exceptions
46 ACKNOWLEDGMENTS
46 REFERENCES
ABSTRACT: Simplicity is a Virtue; yet we
continue to cram ever more complicated circuits
ever more densely into silicon chips, hoping all
the while that their internal complexity will
promote simplicity of use. This paper exhibits
how well that hope has been fulfilled by several
inexpensive devices widely used nowadays for
numerical computation. One of them is the
Hewlett-Packard hp-15C programmable shirt-
pocket calculator ...
Mathematically dense circuitry is also found in
Intel's 8087 coprocessor chip ...
As sophisticated mathematical operations come
into use ever more widely, mathematical
proficiency appears to rise; in a sense it
actually declines. Computations formerly
reserved for experts lie now within reach of
whoever might benefit from them regardless of how
little mathematics he understands; and that
little is more likely to have been gleaned from
handbooks for calculators and personal computers
than from professors. This trend is pronounced
among users of financial calculators like the
hp-12C. Such trends ought to affect what and how
we teach, as well as how we use mathematics,
regardless of whether large fast computers,
hitherto dedicated mostly to speed, ever catch
up with some smaller machines' progress towards
mathematical robustness and convenience.
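The paper's triangle-area discussion is usually illustrated by the
numerically stable formula associated with the author, which sorts the
sides and keeps its parentheses intact. A Python sketch of that
formula (assuming this is the variant the paper presents):

```python
import math

def area(a, b, c):
    # Stable triangle area: sort so a >= b >= c, then evaluate with the
    # parentheses exactly as written; an "optimizer" that rearranges
    # them destroys the accuracy for needle-like triangles.
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))

assert area(3.0, 4.0, 5.0) == 6.0
assert abs(area(1.0, 1.0, 1.0) - math.sqrt(3.0) / 4.0) < 1e-16
```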
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"Marketing versus Mathematics and
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Other Ruminations on the Design of Floating-Point Arithmetic"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(... upon receiving the IEEE's Emanuel R. Piore Award, 15 Aug. 2000)
posted at http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf
Abstract: "Marketing is Bad; Mathematics is Good."
No; that is not my message. Rather, in so far as our technological
activities have become ever more mathematical as well as market-
oriented, mathematics and marketing have come to depend upon each
other, and therefore must appreciate each other's needs, more than
ever. This growing interdependence is illustrated in what follows by
the design and implementation of floating-point arithmetic.
Contents:
Mathematics for Marketing and vice-versa page 3
A Good Example: The Intimidation Factor 5
Good and Bad in Marketing and in Mathematics 7
A Good Example: HP-92, ..., HP-12C Financial Calculators 9
Market-motivated Mathematics is a Gamble 12
A Bad Example: QPRO 4.0 and QPRO for Windows 13
Why is Floating-Point almost all Binary nowadays? 18
Example: ... literal constant " 25.4 " ; what's its nationality? 19
What is worth learning from the story about " 25.4 ", ... ? 23
Besides ... size, what distinguishes today's market ... ? 24
Case study: Kernighan-Ritchie C vs. ANSI C & Java 26
Example: Find the nearest point ... 27
Old Kernighan-Ritchie C works better than ANSI C or Java! 28
The Intel 8087 Numeric Coprocessor's Marketing Vicissitudes 29
Floating-Point Arithmetic for a Mass Market ... What was Best? 30
How do Errors get Amplified? By Singularities Near the Data. ... 33
Testing floating-point software ... A Frightening Example 36
What else should we remember about the foregoing examples? 42
A Bad Example of a Programming Language Ostensibly for a Mass Market 43
Appendix:
How Directed Roundings May Help Locate Numerical Instability 45
Over/Underflow Undermines Complex Number Division in Java 46
Four Rules of Thumb ... Modern Floating-point Hardware 47
Conclusion & Acknowledgement 48
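The appendix item on complex division concerns premature over/underflow
in the textbook formula, whose denominator c*c + d*d can overflow even
when the quotient is unexceptional. The standard remedy sketched below
is R. Smith's 1962 scaling, named here as such; it is offered as an
illustration of the hazard, not as the paper's own algorithm:

```python
def cdiv(a, b, c, d):
    # (a + b i) / (c + d i) by Smith's scaling: never forms c*c + d*d,
    # so intermediate over/underflow is avoided for well-scaled quotients.
    if abs(c) >= abs(d):
        r = d / c
        t = 1.0 / (c + d * r)
        return ((a + b * r) * t, (b - a * r) * t)
    else:
        r = c / d
        t = 1.0 / (c * r + d)
        return ((a * r + b) * t, (b * r - a) * t)

assert cdiv(1.0, 0.0, 0.0, 1.0) == (0.0, -1.0)   # 1/i = -i
# Here the naive denominator c*c + d*d would overflow, yet the true
# quotient is just 0.5:
re, im = cdiv(1e300, 1e300, 2e300, 2e300)
assert abs(re - 0.5) < 1e-15 and im == 0.0
```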
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Is there a Small Skew Cayley Transform with Zero Diagonal?
version of 11 Sept. 2005
posted at http://www.cs.berkeley.edu/~wkahan/skcayley.pdf
Abstract
The eigenvectors of an Hermitian matrix H are the columns of
some complex unitary matrix Q . For any diagonal unitary matrix
Ω the columns of Q·Ω are eigenvectors too. Among all such Q·Ω
at least one has a skew-Hermitian Cayley transform
S := (I-Q·Ω)·(I+Q·Ω)^(-1)
with just zeros on its diagonal. Why? The proof is unobvious,
as is the further observation that Ω may also be so chosen that
no element of this S need exceed 1 in magnitude. Thus,
plausible constraints, easy to satisfy by perturbations of
complex eigenvectors when Hermitian matrix H is perturbed
infinitesimally, can be satisfied for discrete perturbations too.
But if H is real symmetric, Q real orthogonal and Ω confined
to diagonals of ±1s, then whether at least one real skew-
symmetric S must have no element bigger than 1 in magnitude is
not known yet.
Contents
Introduction Page 1
Notational Note 1
The Cayley Transform $(B) := (I-B)·(I+B)^(-1) 2
Lemma 2
£(Q) Gauges How "Near" a Unitary Q is to I . 4
Theorem 4
Examples 5
Why Minimizing £(Q·W) Makes $(Q·W) Small. 6
Corollary 7
Conclusion 7
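For the simplest real case, a plane rotation with the diagonal unitary
factor taken to be I, the Cayley transform can be computed directly:
its diagonal vanishes and its off-diagonal entries are tan(theta/2) in
magnitude, consistent with the bound discussed above. A 2-by-2 Python
sketch (matrices as nested tuples; illustration only):

```python
import math

def cayley_2x2(theta):
    """Cayley transform S = (I - Q)·(I + Q)^(-1) of the plane rotation
    Q(theta) = [[cos, -sin], [sin, cos]], with the diagonal factor = I."""
    c, s = math.cos(theta), math.sin(theta)
    a11, a12, a21, a22 = 1.0 - c, s, -s, 1.0 - c          # I - Q
    b11, b12, b21, b22 = 1.0 + c, -s, s, 1.0 + c          # I + Q
    det = b11 * b22 - b12 * b21                           # = 2 + 2 cos(theta)
    i11, i12, i21, i22 = b22 / det, -b12 / det, -b21 / det, b11 / det
    return ((a11 * i11 + a12 * i21, a11 * i12 + a12 * i22),
            (a21 * i11 + a22 * i21, a21 * i12 + a22 * i22))

S = cayley_2x2(math.pi / 3)
# Skew-symmetric, zero diagonal, entries of magnitude tan(theta/2) <= 1:
assert abs(S[0][0]) < 1e-12 and abs(S[1][1]) < 1e-12
assert abs(S[0][1] + S[1][0]) < 1e-12
assert abs(abs(S[0][1]) - math.tan(math.pi / 6)) < 1e-12
```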
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Matlab's Loss is Nobody's Gain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
23 Aug. 1998
posted at http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf
Abstract:
Matlab has become the software package most widely used by engineers
and scientists for their numerical computations. Part of its appeal
arose from its early availability on Intel x86-based IBM PCs and
their clones, the hardware most widely used by engineers and
scientists for their numerical computations nowadays. Almost all these
users are unaware of enhanced accuracy Matlab used to enjoy on PCs
but lost in its latest releases, versions 5.x, under Windows.
Offered here is a function mxmuleps that, when run on various
versions of Matlab on various computers, tells which ones have
enhanced accuracy, whose benefits are illustrated by examples. Also
addressed are pervasive misunderstandings about how much that lost
accuracy was worth.
Contents:
Introduction page 2
What eps means 3
The Fused Multiply-Accumulate ( FMAC ) Instruction 4
How mxmuleps works 5
The Program mxmuleps.m 7
Example 1 . The Fibonacci Numbers 8
The Program fibon.m 9
Two dilemmas 12
Example 2 . The Zeros of a Real Quadratic 13
Example 3. Iterative Refinement for Linear Systems 16
Ill-scaled well-conditioned 3-by-3 systems 17
Ill-conditioned n-by-n Pascal systems 19
Treacherous Mathematics 22
How Much is Accuracy Worth? 24
Graph showing Accuracy need not be Transitive 25
Pejorative regions in the space of all data 26
Better accuracy reduces risk 27
Example 4. A Simple Geometry Problem 28
A Proposal for Modest Modifications to Matlab 30
Ruminations and Conclusions 31
For Further Reading 32
Acknowledgments 32
Appendix: The Program divulge.m 33
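The enhanced accuracy at issue comes from keeping a product exact
before a single final rounding, which is what a fused multiply-add
(FMAC) does. In a sketch, Python's Fraction can stand in for the exact
intermediate (the operands below are mine, chosen so the effect is
total):

```python
from fractions import Fraction

a, b, c = 1.0 + 2.0**-30, 1.0 - 2.0**-30, -1.0
# Rounding a*b to a double first loses the tiny cross term entirely:
naive = a * b + c        # a*b = 1 - 2**-60 rounds to 1.0, so this is 0.0
# A fused multiply-add keeps a*b exact before one final rounding;
# Fraction plays the role of that exact intermediate here:
fused = float(Fraction(a) * Fraction(b) + Fraction(c))
assert naive == 0.0
assert fused == -2.0**-60            # the true value of a*b + c
```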
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
How Java's Floating-Point Hurts Everyone Everywhere
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prof. W. Kahan & Joseph D. Darcy
E.E. & Computer Science
Univ. of Calif. @ Berkeley
( Originally presented Sun. 1 March 1998
at Stanford University at the
ACM 1998 Workshop on
Java for High-Performance Network Computing )
posted at http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
Abstract:
~~~~~~~~~
Java's floating-point arithmetic is blighted by five gratuitous
mistakes:
1. Linguistically legislated exact reproducibility is at best
mere wishful thinking.
2. Of two traditional policies for mixed precision evaluation,
Java chose the worse.
3. Infinities and NaNs unleashed without the protection of
floating-point traps and flags mandated by IEEE Standards
754/854 belie Java's claim to robustness.
4. Every programmer's prospects for success are diminished by
Java's refusal to grant access to capabilities built into 95%
of today's floating-point hardware.
5. Java has rejected even mildly disciplined infix operator
overloading, without which extensions to arithmetic with
everyday mathematical types like complex numbers, matrices,
geometrical objects, intervals and arbitrarily high precision
become extremely inconvenient.
To leave these mistakes uncorrected would be a tragic sixth mistake.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This presentation was intended partially to rebut Dr. James Gosling's
preceding (Sat. 28 Feb.) keynote address "Extensions to Java for
Numerical Computation." See his
http://java.sun.com/people/jag/FP.html .
For a better idea of what is in store for us in the future unless we
can change it, see
http://www.sun.com/smi/Press/sunflash/9803/sunflash.980324.17.html
http://math.nist.gov/javanumerics/issues.html#LanguageFeatures
Experts in the development of portable numerical software learned long
ago how to get around the inadequacies in Java's floating-point since
these were widely to be found in the programming languages and hardware
of the 1960s and 1970s. Now hardware is much better, and so is our
understanding of the way computers ought to support floating-point
programming by scientists and engineers whose expertise lies elsewhere;
but Java reflects none of that understanding. Only if the community
of software users and producers becomes aware of what is now missing in
Java, and then lets the arbiters of taste and fashion in the world of
programming languages know what our community needs, can we expect
Java to get better instead of merely faster.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"An Interview with the Old Man of Floating-Point"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reminiscences elicited from William Kahan
by Prof. Charles R. Severance
11 Feb. 1998
posted at
http://www.cs.berkeley.edu/~wkahan/ieee754status/754story.html
Abstract:
This interview elicits reminiscences about how IEEE Standard 754 for
Binary Floating-Point Arithmetic came into existence. The interview
underlies an abbreviated version to appear in IEEE Computer in March
1998.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
The John von Neumann Lecture on ...
The Baleful Effect of Computer Languages and Benchmarks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
upon Applied Mathematics, Physics and Chemistry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
presented at the 45th Annual Meeting of S.I.A.M.
Stanford University, 15 July 1997
posted at http://www.cs.berkeley.edu/~wkahan/SIAMjvnl.pdf
Abstract:
An unhealthy preoccupation with floating-point arithmetic's Speed, as
if it were the same as Throughput, has distracted the computing
industry and its marketplace from other important qualities that
computers' arithmetic hardware and software should possess too,
qualities like
Accuracy, Reliability, Ease of Use, Adaptability, ...
These qualities, especially the ones that arise out of Intellectual
Economy, tend to be given little weight partly because they are so
hard to quantify and to compare meaningfully. Worse, technical and
political limitations built into current benchmarking practices
discourage innovations and accommodations of features necessary as well
as desirable for robust and reliable technical computation. Most
exasperating are computer languages like Java(R) that lack locutions
to access advantageous features in hardware that we consequently cannot
use though we have paid for them. That lack prevents benchmarks from
demonstrating the features' benefits, thus denying language
implementors any incentive to accommodate those features in their
compilers. It is a vicious circle that the communities concerned with
scientific and engineering computation must help the computing industry
break. But our communities are still entangled in misconceptions that
becloud floating-point issues, so we send the industry mixed and murky
signals.
" The trouble with people is not that they don't know
but that they know so much that ain't so."
Josh Billings' Encyclopedia of Wit and Wisdom (1874)
For more details see also .../Triangle.pdf , .../Cantilever.pdf .
For an earlier version see .../ieee754status/baleful.pdf below.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"The Baleful Effect of Computer Benchmarks upon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Applied Mathematics, Physics and Chemistry"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
( Updated to 11 June 1996, a version of transparencies first
presented in Park City UT, 4 Aug. 1995, at Prof. S. Smale's
workshop on the Interaction of Numerical Analysis and Nonlinear
Dynamical Systems.)
posted at http://www.cs.berkeley.edu/~wkahan/ieee754status/baleful.pdf
An unhealthy preoccupation with Speed, as if it were synonymous with
Throughput, has distracted the computing industry and its marketplace
from other important qualities that computer hardware and software
should possess too ---
Correctness, Accuracy, Dependability, Ease of Use, Flexibility, ... .
Worse, technical and political limitations peculiar to current
benchmarking practices discourage innovations and accommodations of
features necessary as well as desirable for robust and reliable
technical computation. Particularly exasperating are computer
languages that lack locutions to access advantageous features in
hardware that we consequently cannot use though we have paid for them.
That lack prevents benchmarks from demonstrating the features'
advantages, thus affording language implementors scant incentive to
accommodate those features in their compilers. It is a vicious circle
that the scientific and engineering community must help the computing
industry break.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"Approximate Trisection of an Angle"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
posted at http://www.cs.berkeley.edu/~wkahan/trisect.pdf
Seekers of fame and fortune should not waste time trisecting an angle,
but several do every year, and then they send their handiwork to a
mathematics department to get its seal of approval. This note may help
them distinguish between nearly exact trisection, which is easy, and
exact trisection, which was long ago proved impossible using only an
UNMARKED straightedge and a compass.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"A Test for SRT Division"
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Version dated March 1995
Text files posted at ...
http://www.cs.berkeley.edu/~wkahan/tests/srtest...
SRTEST is an astonishingly simple computer program devised to test the
correctness of quotient-prediction hardware for binary floating-point
division that uses any Non-Restoring (SRT) algorithm. Nowadays that
hardware often takes the form of a Programmed Logic Array (PLA). How
long SRTEST should run to "prove" a PLA correct remains an open
question; however, SRTEST is intended to uncover a flaw in a PLA
far sooner than random testing can. For example, when it ran on an
Intel Pentium with the notorious binary floating-point divide bug of
1994, SRTEST uncovered the flaw in a tiny fraction of a second after
performing its 356th trial division and a few thousand arithmetic
operations in all, although the flaw had earlier been found to afflict
one division in about 8,800,000,000 with independent random operands.
However, to use either that kind of finding or SRTEST's performance
to predict failure rates among diverse users of a defective divider is
an abuse of statistics. A study of SRTEST's workings sheds light on
that abuse and helps us avoid repeating it.
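For reference, the operand pair most often quoted for that 1994 flaw,
with the correct quotient (values as widely reported at the time):

```python
# The division most often cited for Intel's 1994 Pentium FDIV flaw.
# A correct divider gives 1.333820449136...; flawed chips returned
# about 1.333739068902..., wrong from the fifth significant digit on.
q = 4195835.0 / 3145727.0
assert abs(q - 1.333820449136241) < 1e-9
assert abs(q * 3145727.0 - 4195835.0) <= 1.0   # consistency check
```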
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
" A Test for Correctly Rounded SQRT "
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lecture note dated 31 May 1996
posted at http://www.cs.berkeley.edu/~wkahan/SQRTest.pdf
Ideally, the computed value of SQRT(x) should be the same as if it
had first been computed perfectly, correct to infinitely many
significant bits, and then rounded to the nearest approximation that
fits into the number of significant bits, say N sig. bits, in the
target's format. This ideal is achievable; it is mandatory under
IEEE Standard 754 for Binary Floating-Point Arithmetic, to which
almost all computers built nowadays attempt to conform. The tricky
part of the SQRT algorithm is getting the last rounding error right
so that the computed SQRT(x) will be in error by less than 1/2 ulp
( Unit in its Last Place ). If the trick is fumbled very slightly,
the error in SQRT can exceed 1/2 ulp so rarely that random testing
has practically no chance of exposing the flaw; such flaws have
occurred and have evaded detection by billions of random tests. The
tests discussed in this note are focussed sharply enough to reveal such
flaws almost immediately.
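The idea behind such a test can be illustrated crudely: compare the hardware's SQRT against a reference computed to many extra digits and then rounded to the nearest double. This sketch is only the brute-force version of the idea; the note's actual tests choose their arguments number-theoretically, near rounding boundaries, which is what makes them reveal flaws almost immediately where random arguments would not.

```python
from decimal import Decimal, getcontext
import math

def correctly_rounded_sqrt(x):
    """True if math.sqrt(x) equals the square root computed to many
    extra digits and then rounded to the nearest double."""
    getcontext().prec = 50            # far beyond double's ~16 digits
    exact = Decimal(x).sqrt()         # near-"infinitely precise" reference
    return math.sqrt(x) == float(exact)   # float() rounds to nearest double

# Spot-checks with arbitrary arguments; a flawed SQRT whose errors are
# rare would almost surely pass these and still be wrong elsewhere.
assert all(correctly_rounded_sqrt(x) for x in (2.0, 3.0, 0.1, 12345.678))
```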
For subsequent evolutions from this kind of test see articles by ...
M. Parks "Number-Theoretic Test Generation for Directed Roundings" in
pp. 241-8, Proc. IEEE 14th Symposium on Computer Arithmetic (1999)
and more posted on his web page.
Merav Aharoni et al. "Solving Constraints on the Invisible Bits of the
Intermediate Result for Floating-Point Verification" in pp. 76-83,
Proc. IEEE 17th Symposium on Computer Arithmetic (2005).

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"Beastly Numbers"
~~~~~~~~~~~~~~~~~
Version dated 13 Jan. 1996
posted at http://www.cs.berkeley.edu/~wkahan/tests/numbeast.pdf
with other files: .../tests/fistest ...
It seems unlikely that two computers, designed by different people
1800 miles apart, would be upset in the same way by the same two
floating-point numbers 65535.*** and 4294967295.*** , but it has
happened.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
" Computing Days between Dates, the Day of the Week, etc."
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lecture note dated 16 Sept. 1992
Text files: http://www.cs.berkeley.edu/~wkahan/daydate/ ...
The year 2000 will be a leap year but 2100 will not; that is how
the Gregorian calendar has been working since 1582 . Will 4000 be
a leap year? Why do Sept., Apr., June and Nov. have 30 days and
the rest 31 except for Feb.? These questions are discussed, and to
save programmers the bother of looking up their answers in tables a few
formulas are supplied. These have been embedded in MATLAB functions
ETIME, J2YMD and YMD2J for handling, respectively, elapsed time,
days between dates, and the Day of the Week. Incidentally, YMD2J
descends from an ancient self-checking program used in H-P financial
calculators since 1976; despite roundoff, it assesses the validity
of its output and input by running an inverse program J2YMD .
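The leap-year rule and day-of-the-week reckoning described above can be sketched in a few lines. This is an illustrative Python rendering of the Gregorian rule and of Zeller's congruence, not the MATLAB functions ETIME, J2YMD and YMD2J the abstract mentions.

```python
def is_leap(y):
    """Gregorian rule since 1582: every 4th year is a leap year,
    except centuries, unless divisible by 400 (so 2000 and 4000
    are leap years but 2100 is not)."""
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def day_of_week(y, m, d):
    """Zeller's congruence: returns 0=Saturday, 1=Sunday, ..., 6=Friday.
    Jan. and Feb. are counted as months 13 and 14 of the previous year."""
    if m < 3:
        m += 12
        y -= 1
    K, J = y % 100, y // 100          # year of century, zero-based century
    return (d + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

assert is_leap(2000) and not is_leap(2100) and is_leap(4000)
assert day_of_week(2000, 1, 1) == 0   # 1 Jan. 2000 was a Saturday
```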
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"Lecture Notes on the Status of
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IEEE Standard 754 for Binary Floating-Point Arithmetic"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Version dated 31 May 1996 (now obsolete)
posted at http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.pdf
Twenty years ago anarchy threatened floating-point arithmetic. Over a
dozen commercially significant arithmetics boasted diverse wordsizes,
precisions, rounding procedures and over/underflow behaviors, and
more were in the works. "Portable" software intended to reconcile
that numerical diversity had become unbearably costly to develop.
Eleven years ago, when IEEE 754 became official, major
microprocessor manufacturers had already adopted it despite the
challenge it posed to implementors. With unprecedented altruism,
hardware designers rose to its challenge in the belief that they would
ease and encourage a vast burgeoning of numerical software. They did
succeed to a considerable extent. Anyway, rounding anomalies that
preoccupied all of us in the 1970s afflict only CRAYs now.
Now atrophy threatens features of IEEE 754 caught in a vicious circle:
Those features lack support in programming languages and compilers,
so those features are mishandled and/or practically unusable,
so those features are little known and less in demand, and so
those features lack support in programming languages and compilers.
To help break that circle, those features are discussed in these notes
under the following headings:
Representable Numbers, Normal and Subnormal, Infinite and NaN
Encodings, Span and Precision
Multiply-Accumulate, a Mixed Blessing
Exceptions in General; Retrospective Diagnostics
Exception: Invalid Operation; NaNs
Exception: Divide by Zero; Infinities
Digression on Division by Zero; Two Examples
Exception: Overflow
Exception: Underflow
Digression on Gradual Underflow; an Example
Exception: Inexact
Directions of Rounding
Precisions of Rounding
The Baleful Influence of Benchmarks; a Proposed Benchmark
Exceptions in General, Reconsidered; a Suggested Scheme
Ruminations on Programming Languages
Annotated Bibliography
Insofar as this is a status report, it is subject to change and
supersedes versions with earlier dates. This version supersedes one
distributed at a panel discussion of "Floating-Point Past, Present
and Future" in a series of San Francisco Bay Area Computer History
Perspectives sponsored by Sun Microsystems Inc. in May 1995.
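One of the little-known features catalogued in these notes, gradual underflow via subnormal numbers, can be demonstrated from any language that exposes IEEE doubles. A minimal Python sketch (the specific values 3e-310 and 2e-310 are merely convenient subnormals chosen for illustration):

```python
import sys

# The smallest positive subnormal double, 2**-1074 (about 5e-324):
tiny = sys.float_info.min * sys.float_info.epsilon
assert tiny > 0.0 and tiny / 2 == 0.0    # half of it underflows to zero

# Gradual underflow preserves the guarantee that  x - y == 0  exactly
# when  x == y , even below the normal range (~2.2e-308).  Abrupt
# underflow to zero would forfeit it.
x, y = 3e-310, 2e-310                    # both subnormal
assert x != y and x - y != 0.0
```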
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
"The Improbability of PROBABILISTIC ERROR ANALYSES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
for Numerical Computations"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
( Expanded version of transparencies to be presented at the 3rd
ICIAM Congress, 3-7 July 1995 in Hamburg; revised for the
UCB Statistics Colloquium, 28 Feb. 1996, and subsequently.)
Posted at http://www.cs.berkeley.edu/~wkahan/improber.ps and pdf
" Everything I know nothing about is equally improbable "
exemplifies the illogic that plagues attempts, by statistical means,
to assess error and risk in floating-point computation.
Over a decade ago, the French Academy of Sciences awarded a prize
for a patented method that automated the estimation of roundoff's
effect upon any computation and dispensed with error analysis; two
decades earlier essentially the same method had been tried and
abandoned as altogether too risky.
Two years ago the manufacturer of a very popular microprocessor acceded
to a recall because its floating-point division was defective, although
the defect was believed to pose less of a hazard than cosmic rays do,
and was expected to inflict a perceptible error upon its user perhaps
once every millennium by the manufacturer's estimate, once every three
weeks by a competitor's. The division algorithm had been "proved"
correct, and had passed a few billion random tests; afterwards,
however, a short test program exhibited over 2000 failures in the
first million divisions tested.
Perhaps the hazards posed by numerical computation (and by earthquakes
and by carcinogens) would be misjudged less often if first courses on
statistics spent a little more time on the probabilistics of extremely
improbable events. Anyway, we need help from Statisticians to avoid
these abuses of Statistics.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Miscalculating Area and Angles of a Needle-like Triangle
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
( from Lecture Notes for Introductory Numerical Analysis Classes )
Prof. W. Kahan, 19 July 1999
Abstract:
This case study drawn from an elementary numerical analysis course is aimed at
computer language designers and implementors who took no competent course on the
subject or forgot it and consequently subscribe to principles of language design
inimical to the best interests of writers and users of software with a little
floating-point arithmetic in it. Though triangles rarely matter to computer
language designers and implementors, recollections of their high-school
trigonometry and calculus will enable them to follow the discussion, which is
intended primarily to expose and correct common misconceptions. The first of
these is that classical trigonometric formulas taught in schools and found in
handbooks and software must have passed the Test of Time. Actually they have
failed it; they are unnecessarily inaccurate, sometimes badly, for some
data, some of which is unexceptionable. Better formulas are supplied here. Even
if these are impeccably accurate and cost no more than the classical formulas
they should supplant, the expectation that the better formulas will supplant
the worse is based upon misconceptions too (see pp. 5, 9, 16 and 21).
Other misconceptions addressed herein ( on indicated pages ) are ...
= That subtractive cancellation always causes numerical inaccuracy, or is the
only cause of it. (3, 5)
= That a singularity always degrades accuracy when data approach it.
(4, 5, 7, 8, 10, 12-16)
= That algorithms known to be numerically unstable should never be used. (10)
= That arithmetic much more precise than the data it operates upon is pointless.
(7, 9-12, 14, 17-21)
= That modern "Backward Error-Analysis" explains everything, or excuses it.
(12, 14-17)
= That bad results are due to bad data or bad programmers, never to a
programming language. (9-12, 18-20)
Misconceptions like these hinder programming languages from conveying the
benefits of their hardware's superior floating-point semantics to the owners
of by far the majority of computers now on desk-tops. (9-11, 19-20)
Contents:
$1. Classical Formulas for Area and Angle C page 2
$2. How to compute Area 3
$3. How to compute C 3
$4. Why Cancellation Cannot Hurt 3
$5. Examples 4
Table 1: Area and Angle C 4
$6. Opposite Side c and Adjacent Angle B 5
$7. How to compute c 5
$8. How to compute B 5
$9. How Accurate is B ? 7
Table 2: Angle B , New Formula B(...) vs. Classical BS(...) 7
Table 3: Uncertainty dB due to Roundoff 9
$10. Must the Classical Formulas be Amended ? 9
Table 4: Formats of IEEE Std. 754 for Bin. Floating-Pt. Arith. 9
$11. What Extra Precision Does for B 11
$12. How Much Accuracy do Data Deserve ? 12
$13. A Triangle is an Object 13
Table 5: Side c , cb(a,A,b) vs. cB(a,A,B(a,A,b)) 13
$14. Three Caveats and a Picture 14
$15. Proper Precision Management 18
$16. Different Floating-Point Semantics 19
$17. Conclusion 21
$18. Acknowledgments and Bibliography 22
$19. Footnote about Examples 22
22 pp. Acrobat Reader .pdf file:
http://www.cs.berkeley.edu/~wkahan/Triangle.pdf
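Among the better formulas the paper supplies is Kahan's rearrangement of Heron's area formula, which stays accurate even for needle-like triangles. A sketch of both, for contrast; the parentheses in the stable version must be kept exactly as written, and an optimizing compiler must not be allowed to "simplify" them away.

```python
from math import sqrt

def heron_classical(a, b, c):
    """Textbook Heron's formula; can lose most of its accuracy
    when the triangle is needle-like."""
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

def area_kahan(a, b, c):
    """Numerically stable area: sort the sides so a >= b >= c first,
    then respect every parenthesis below."""
    a, b, c = sorted((a, b, c), reverse=True)
    if c - (a - b) < 0:
        raise ValueError("sides do not form a real triangle")
    return sqrt((a + (b + c)) * (c - (a - b))
              * (c + (a - b)) * (a + (b - c))) / 4

assert abs(area_kahan(3.0, 4.0, 5.0) - 6.0) < 1e-12
```

For well-conditioned triangles the two agree; the difference shows only on data the classical formula mishandles, of the kind tabulated in the paper.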
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Roundoff Degrades an Idealized Cantilever
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 July 1997
Prof. W. Kahan and Ms. Melody Y. Ivory
Abstract:
By far the majority of computers in use to-day are AMD/Cyrix/Intel-
based PCs, and a big fraction of the rest are old 680x0-based Apple
Macintoshes. Owners of these machines are mostly unaware that their
floating-point arithmetic hardware is capable of delivering routinely
better results than can be expected from the more prestigious and more
expensive workstations preferred by much of the academic Computer
Science community. This work attempts to awaken an awareness of the
difference in arithmetics by comparing results for an idealized problem
not entirely unrepresentative of industrial strength computation. The
problem is to compute the deflection under load of a discretized
approximation to a horizontally cantilevered steel spar.
Discretization generates N simultaneous linear equations that can be
solved in time proportional to N as it grows big, as it must to
ensure physical verisimilitude of the solution. Their solution is
programmed in MATLAB 4.2 which, like most computer languages
nowadays, lacks any way to mention those features that distinguish
better arithmetics from others. None the less this program yields
results on PCs and old Macs correct to at least 52 sig. bits for
all values N tried, up to N = 18827 on a Pentium. However the
other workstations yield roughly 52.3 - 4.67 log N correct sig. bits
from the same program even though it tries two styles of Iterative
Refinement; at N = 18827 only a half dozen bits are left.
This kind of experience raises troublesome questions about the coverage
of popular computer benchmarks, and about the prospects for a would-be
universal language like JAVA that purports to deliver identical
numerical results on all computers from one library of numerical
software.
The MATLAB 4.2 programs used to get the aforementioned results are
available by electronic mail from the authors: ivory@cs.berkeley.edu
and wkahan@cs.berkeley.edu . Alternatively they can be downloaded:
http://www.cs.berkeley.edu/~wkahan/refCant.tar.gz for UNIX;
then gunzip refCant.tar.gz ,
and tar -xf refCant.tar . Or else
http://www.cs.berkeley.edu/~wkahan/refCant.exe for IBM PCs etc.;
then refcant.exe -d .
For related work see also
http://www.cs.berkeley.edu/~wkahan/triangle.ps and pdf .
11 pp. PostScript file:
http://www.cs.berkeley.edu/~wkahan/Cantilever.ps and pdf
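The claim that the discretized equations can be solved in time proportional to N holds because the matrix is banded. A purely illustrative sketch of the simplest banded case, a tridiagonal solve by the Thomas algorithm, is below; the cantilever's matrix has a wider band, and this is not the authors' MATLAB program.

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system in O(N): forward elimination,
    then back-substitution (the Thomas algorithm).  No pivoting, so
    this assumes diagonal dominance, as discretizations often supply."""
    n = len(diag)
    d, b = diag[:], rhs[:]               # work on copies
    for i in range(1, n):                # forward elimination
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):       # back-substitution
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x

# The 3x3 system [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has x = [1,1,1]:
x = solve_tridiagonal([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [3.0, 4.0, 3.0])
assert all(abs(v - 1.0) < 1e-12 for v in x)
```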
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
LECTURE NOTES ON REAL ROOT-FINDING
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
for Math. 128 A/B
Prof. W. Kahan
Math. Dept.
4 March 1998
NOT YET READY FOR DISTRIBUTION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Abstract:
These course notes concern the solution of one real equation f(z) = 0
for one real root z , also called a real zero z of function f(x) .
They supplement, not supplant, textbooks and deal mathematically
with troublesome practical details not discussed in my article about a
calculator's [SOLVE] key in the Hewlett-Packard Journal 30 #12 (Dec.
1979) pp. 20-26. ( See http://www.cs.berkeley.edu/~wkahan/Math128/
SOLVEkey.pdf .) It should be read first; it offers easy-to-read
advice about real root-finding in general to anyone who wishes merely
to use a root-finder to solve an equation in hand. These course notes
are harder to read; intended for the would-be designer of a root-
finder, they exercise what undergraduates may learn about Real
Analysis from texts like Bartle's " Elements of Real Analysis " 2d
ed.(1976) Wiley, New York. Collected here are proofs, mostly short,
for mathematical phenomena, some little known, worth knowing during
the design of robust and rapid root-finders.
Almost all Numerical Analysis texts cover the solution of one real
equation f(z) = 0 for one real root z by a variety of iterative
algorithms, like x -> U(x) for some function U that has z = U(z)
as a fixed-point. The best known iteration is Newton's:
x -> x - f(x)/f'(x) .
Another is Secant iteration: Replace the pair of guesses {x, y} by
the pair {w, x} where w := x - f(x)(x-y)/( f(x) - f(y) ) . But no
text I know mentions some of the most interesting questions:
Is some simple Combinatorial (Homeomorphically invariant) condition
both Necessary and Sufficient for convergence of x -> U(x) ?
(Yes; $5)
Is that condition relevant to the design of root-finding software?
(Yes; $6)
Are there other iterations x -> U(x) besides Newton's ?
(Not really; $3)
Must there be a neighborhood of z within which Newton's iteration
converges if f'(x) and x - f(x)/f'(x) are continuous?
(Maybe Not; $7)
Do useful conditions less restrictive than Convexity suffice
Globally for the convergence of Newton's and Secant iteration?
(Yes; $8)
Why are these less restrictive conditions not Projective Invariants,
as are Convexity and the convergence of Newton's and Secant
iterations? ( I don't know; $A3)
Is slow convergence to a multiple root worth accelerating?
(Probably not; $7)
Can slow convergence from afar be accelerated with no risk of
overshooting and thus losing the desired root?
(In certain common cases, Yes; $10)
When should iteration be stopped?
( Not for the reasons usually cited; $6)
Which of Newton's and Secant iterations converges faster?
(Depends; $7)
Which of Newton's and Secant iterations converges from a wider
range of initial guesses at z ?
( Secant, unless z has even multiplicity; $9)
THEREFORE, WHY USE TANGENTS WHEN SECANTS WILL DO?
( Have ALL the foregoing answers been PROVED ? Yes. Most were
proved over twenty years ago, and influenced the design of the
[SOLVE] key on Hewlett-Packard Calculators.)
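The two iterations named above can be sketched side by side. This minimal illustration carries none of the safeguards, stopping criteria or acceleration that these notes are actually about; the step counts and tolerance are arbitrary.

```python
def newton(f, df, x, steps=20, tol=1e-12):
    """Newton's iteration:  x -> x - f(x)/f'(x)."""
    for _ in range(steps):
        fx = f(x)
        if abs(fx) <= tol:
            break
        x -= fx / df(x)
    return x

def secant(f, x, y, steps=40, tol=1e-12):
    """Secant iteration: replace the pair {x, y} by {w, x} where
    w = x - f(x)*(x - y)/(f(x) - f(y))."""
    for _ in range(steps):
        fx, fy = f(x), f(y)
        if abs(fx) <= tol or fx == fy:   # fx == fy would divide by zero
            break
        x, y = x - fx * (x - y) / (fx - fy), x
    return x

# Both find the zero of f(x) = x^2 - 2 near 1.414... :
f = lambda x: x * x - 2.0
assert abs(newton(f, lambda x: 2.0 * x, 1.0) - 2.0 ** 0.5) < 1e-9
assert abs(secant(f, 1.0, 2.0) - 2.0 ** 0.5) < 1e-9
```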
Contents:
$1. Overview
$2. Three Phases of a Search
$3. Models and Methods
$4. "Global" Convergence Theory from Textbooks
$5. Global Convergence Theory
$6. A One-Sided Contribution to Software Strategy
$7. Local Behavior of Newton's and Secant Iterations
$8. Sum-Topped Functions
$9. The Projective Connection between Newton's and Secant Iterations
$10. Accelerated Convergence to Clustered Zeros (incomplete)
$11. All Real Zeros of a Real Polynomial (to appear)
$12. Zeros of a Real Cubic (to appear; until then see .../Cubic.pdf)
$13. Error Bounds for Computed Roots (to appear)
$ccc. Conclusion
$A1. Appendix: Divided Differences Briefly
$A2. Appendix: Functions of Restrained Variation
$A3. Appendix: Projective Images
$A4. Appendix: Parabolas
$A5. Appendix: Running Error Bounds for Polynomial Evaluation (to appear)
$: Citations
NOT YET READY FOR DISTRIBUTION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Posted at http://www.cs.berkeley.edu/~wkahan/Math128/RealRoots.ps and .pdf
See also .../SOLVEkey.pdf , which should be read first.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Better Real Rootfinding by W. Kahan, Prof. Emeritus
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prepared for the Seminar on Scientific Computation etc.
2 April 2014, Univ. of Calif. @ Berkeley
Given a program to compute a real function f(x) of a real
scalar x , and given one or two guesses at z , firmware
like MATLAB's fzero and HP calculators' [SOLVE] key
seek a root z of the equation " f(z) = 0 ". Old software
like fzero requires initial guesses that straddle z ;
[SOLVE] accepts guesses that need not straddle a root; and
for three decades [SOLVE] has allowed searches for a root
to stray temporarily outside the domain of f . Evidently
the theory of real rootfinding has advanced beyond almost
all real rootfinding software. The theory will be surveyed.
For details and proofs see
and for a readable account of a rootfinder's challenges see
and
pp. 42-4 of .
A worthy project for a mathematically adept CS student is
to make well-engineered rootfinding software more available.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =