
Floating point arithmetic question


Richard Asbury

May 17, 2007, 3:18:37 PM
If I have a hyperplane with normal n, and some point p lies on that plane,
then I can divide real space into two regions with the predicate (x - p).n >
0. If I do this calculation in floating point arithmetic (of some specific
precision), then do there exist any x representable in floating point
numbers which are allocated to the wrong side of the plane (i.e. from a
different side than if I do that calculation in real arithmetic, perhaps due
to rounding errors etc.)?

If I say instead that the hyperplane divides space into three regions given
by (x - p).n > 0, (x - p).n = 0 and (x - p).n < 0, does this change the
answer?
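
In code, the test I have in mind is roughly this (3D case, double precision;
the Vec3 type and the names here are just for illustration):

#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   sub(Vec3 a, Vec3 b) { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Classify x against the plane through p with normal n: returns +1, 0 or -1
// according to the sign of (x - p).n as computed in doubles.
int side_of_plane(Vec3 x, Vec3 p, Vec3 n)
{
    double r = dot(sub(x, p), n);
    if (r > 0) return +1;
    if (r < 0) return -1;
    return 0;
}

int main()
{
    Vec3 p{ 0, 0, 0 }, n{ 0, 0, 1 };
    std::printf("%d\n", side_of_plane(Vec3{ 1, 2, 3 }, p, n));   // expect +1
}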

Thoughts would be appreciated! I'm really looking for a convincing proof or
counter-example - my intuition is that floating point arithmetic is actually
reliable for the three-region case... Thanks,

Richard


Pixel.to.life

May 17, 2007, 3:40:02 PM


The rounding errors will only occur when you actually implement this
test, depending upon the language of choice and the processor.

The precision of the 'side with respect to a given plane' test result
depends on how you represent the calculations and the intermediate
results. But I doubt that, for practical purposes, a float or double
precision calculation will give you the wrong answer.

The best way to find out is to prepare a simple example with a known plane
definition and three known points (one on each side of the plane,
and one on the plane). Compute the measure for each on paper and
using a computer. Then bring the two 'off-plane' points closer to the
plane in infinitesimally small steps along their perpendicular
distance vector to the plane. Recompute the measure for each step,
manually and using the computer. If the computer starts giving erroneous
results before you get tired of the manual computations, this problem is
of concern. Otherwise I wouldn't worry about it.
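
Something along these lines, for instance (the plane, the starting point and
the step factor here are arbitrary choices):

#include <cstdio>

int main()
{
    // Plane through the origin with normal (0,0,1); a point starting clearly
    // on the positive side.
    double px = 0, py = 0, pz = 0;
    double nx = 0, ny = 0, nz = 1;
    double qx = 0.1, qy = 0.2, qz = 1.0;

    // Halve the perpendicular distance at each step and recompute the measure,
    // comparing against what you expect on paper.
    for (int i = 0; i < 60; ++i)
    {
        double r = (qx - px) * nx + (qy - py) * ny + (qz - pz) * nz;
        std::printf("step %2d  distance = %.17g  (x-p).n = %.17g\n", i, qz, r);
        qz *= 0.5;   // move the point toward the plane along the normal
    }
}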

In summary, I cannot think of a practical example where accuracy down to
the last decimal place of floating point precision is needed when computing
this test (on a 32-bit PC a double spans roughly 10^-308 to 10^308; a float
spans roughly 10^-38 to 10^38, with less precision). If you do, please share.

Fernando Cacciola

May 17, 2007, 10:23:06 PM

Hello Richard,

> If I have a hyperplane with normal n, and some point p lies on that plane,
> then I can divide real space into two regions with the predicate
> (x - p).n > 0. If I do this calculation in floating point arithmetic (of
> some specific precision), then do there exist any x representable in
> floating point numbers which are allocated to the wrong side of the plane
> (i.e. from a different side than if I do that calculation in real
> arithmetic, perhaps due to rounding errors etc.)?
>

Yes.

> If I say instead that the hyperplane divides space into three regions
> given by (x - p).n > 0, (x - p).n = 0 and (x - p).n < 0, does this change
> the answer?
>

First off, let's assume that the problem is as if you cache R = (x - p).n
(for instance, making sure you don't mess with the FPU except at the right
time and place).
This is a safe assumption.

It won't "change" the answer, it will only "extend" it, because now you can
discriminate the "false" case.

> Thoughts would be appreciated! I'm really looking for a convincing proof
> or counter-example - my intuition is that floating point arithmetic is
> actually reliable for the three-region case.

But it isn't. In reality you are not gaining "robustness" or reliability by
discriminating the false case.

In fact, you are getting more information out of a point (is it *on* the
plane rather than to some side of it), but then you can be even "more wrong"
than with a single predicate: your logic is now trivalued, so there are two
possible wrong branches instead of one.

Consider this simple counter-example:

The real value of (x-p).n is some positive tiny value, but the computed
value is some negative tiny value (a *very* common case, since *exactly
zero* is rare on modern architectures, which handle subnormals quite
well).

With the first predicate you get 'false', which is the wrong answer as it
tells you that the point is on the wrong side.
With the triple predicates you get 'false', 'false' and 'true'. But that's
exactly the same equally wrong answer as before.

You are entering the interesting subject of "robustness in geometric
computing".
Fortunately, you are up against the textbook problem (well, an equivalent
problem): "robustly computing the sign of a determinant".

Just google for that to get you started, and get back to us here.

HTH

--------
Fernando Cacciola
SciSoft

http://fcacciola.50webs.com
http://fcacciola.wordpress.com
http://certuscode.wordpress.com

Fernando Cacciola

May 17, 2007, 10:34:12 PM

"Pixel.to.life" <pixel....@gmail.com> wrote in message
news:1179430801....@l77g2000hsb.googlegroups.com...

> The precision of the 'side with respect to a given plane' test result
> depends on how you represent the calculations and the intermediate
> results. But I doubt that, for practical purposes, a float or double
> precision calculation will give you the wrong answer.
>

That assumes that the input points are not actually close to the plane.
If that's indeed the case then accuracy won't be a problem (as long as 'n'
is normalized), but for points really close to the plane we have a problem.
In most practical cases, though, you cannot afford a correct answer only for
points sufficiently far from the plane, so the typical "simplistic"
workaround is the so-called epsilon-geometry approach: basically, measure
the squared distance from the point to the plane and use the predicate only
if it is sufficiently separated (or some other 'guess-how-wrong-can-I-be'
filter).
These days, however, that approach is totally unnecessary. The current state
of the art is to use "filtered predicates", and if necessary "lazy
constructions" (if the points or the plane themselves can carry an error
that would cause the wrong convexity answer).

Best

Pixel.to.life

May 18, 2007, 3:40:26 AM
On May 17, 7:34 pm, "Fernando Cacciola" <fernando.cacci...@gmail.com>
wrote:
> "Pixel.to.life" <pixel.to.l...@gmail.com> wrote in message news:1179430801....@l77g2000hsb.googlegroups.com...

>
> > The precision of the 'side with respect to a given plane' test result
> > depends on how you represent the calculations and the intermediate
> > results. But I doubt that, for practical purposes, a float or double
> > precision calculation will give you the wrong answer.
>
> That assumes that input points are not actually close to the plane.


Exactly. But how do you define what is "sufficient"? That is defined
by the implementation.
Just as you say later on, the 'epsilon' approach limits how finely you
can resolve two infinitesimally close points on a discrete computing
machine.
The smaller the epsilon your machine allows you to define, the more finely
you will be able to resolve the test suggested by Richard.

The filtered predicate approach comes in handy only when a number of
similar precision tests happen in sequence, one depending on the
other's result, so the error of computation could accumulate. For the
example Richard mentions, it is simply a question of whether precision
is lost in computing the plane test one way versus the other.

Thanks for touching on the concept of exact geometric computing, though.
Here is a nice related article:
http://www.cgal.org/Events/Liama_Beijing_2007_documents/Design_Paradigms_Bindings.pdf

Richard Asbury

May 18, 2007, 8:07:13 AM
Thanks very much for the responses - we're definitely on the right
track here but...

On May 18, 3:23 am, "Fernando Cacciola" <fernando.cacci...@gmail.com>
wrote:


> Hello Richard,
>
> > If I have a hyperplane with normal n, and some point p lies on that plane,
> > then I can divide real space into two regions with the predicate
> > (x - p).n > 0. If I do this calculation in floating point arithmetic (of
> > some specific precision), then do there exist any x representable in
> > floating point numbers which are allocated to the wrong side of the plane
> > (i.e. from a different side than if I do that calculation in real
> > arithmetic, perhaps due to rounding errors etc.)?
>
> Yes.
>
> > If I say instead that the hyperplane divides space into three regions
> > given by (x - p).n > 0, (x - p).n = 0 and (x - p).n < 0, does this change
> > the answer?
>
> First off, let's assume that the problem is as if you cache R=(x-p).n
> (for instance, making sure you don't mess with the FPU except at the right
> time and place).
> This is a safe assumption.
>
> It won't "change" the answer, it will only "extend" it, because now you can
> discriminate the "false" case.
>
> > Thoughts would be appreciated! I'm really looking for a convincing proof
> > or counter-example - my intuition is that floating point arithmetic is
> > actually reliable for the three-region case.
>
> But it isn't. In reality you are not gaining "robustness" or reliability by
> discriminating the false case.

Apologies - I didn't clearly explain what I was getting at there. My
point was that I believe there are cases where floating point will
incorrectly give R = 0 when it's really slightly greater or slightly
less, BUT it will never give R < 0 when it should be R > 0, or vice
versa.

> In fact, you are getting more information out of a point (is it *on* the
> plane rather than to some side of it), but then you can be even "more wrong"
> than with a single predicate: your logic is now trivalued so there are 2
> instead of 1 possible wrong branches.
>
> Consider this simple counter-example:
>
> The real value of (x-p).n is some positive tiny value, but the computed
> value is some negative tiny value (a *very* common case since *exactly zero*
> is rare in most modern architectures since they handle subnormals quite
> well).

<snip>

I appreciate that this would be the basis of a counter-example, but I
don't believe there are any situations where this occurs. If you could
give actual numbers, say for the 2D case, then I'd be convinced.

Am I right in the following assumption: If I have floating point
numbers p and q, and I calculate their product pq in real arithmetic
to give r, and floating point arithmetic to give r', then there's no
floating point number strictly between r and r'?

Thanks,

Richard

Fernando Cacciola

May 18, 2007, 9:06:56 AM
Hi Richard,

> Thanks very much for the responses - we're definitely on the right
> track here but...
>
>

> Apologies - I didn't clearly explain what I was getting at there. My
> point was that I believe there are cases where floating point will
> incorrectly give R = 0 when it's really slightly greater or slightly
> less, BUT it will never give R < 0 when it should be R > 0, or vice
> versa.
>

But that is exactly what happens. In real-world computations the sign is in
error, not just the quantity.

>> In fact, you are getting more information out of a point (is it *on* the
>> plane rather than to some side of it), but then you can be even
>> "more wrong" than with a single predicate: your logic is now
>> trivalued, so there are two possible wrong branches instead of one.
>>
>> Consider this simple counter-example:
>>
>> The real value of (x-p).n is some positive tiny value, but the
>> computed value is some negative tiny value (a *very* common case,
>> since *exactly zero* is rare on modern architectures, which
>> handle subnormals quite well).
>
> <snip>
>
> I appreciate that this would be the basis of a counter-example, but I
> don't believe there are any situations where this occurs. If you could
> give actual numbers, say for the 2D case, then I'd be convinced.
>

Google for "robustly computing sign of determinant".
As soon as you start reading whatever comes from the search, you'll realize
that the whole point is precisely that with a naive fp expression the
sign could be wrong, so there's a lot of effort put into getting the sign
right, regardless of the value of the determinant itself.

> Am I right in the following assumption: If I have floating point
> numbers p and q, and I calculate their product pq in real arithmetic
> to give r, and floating point arithmetic to give r', then there's no
> floating point number strictly between r and r'?
>

Right.
Or to put it differently: individual arithmetic operations are required
to be faithfully rounded, which means that the roundoff is never greater
than 1 ulp (unit in the last place).

But you are missing the point: for one and only one single operation you
can't get the wrong sign in the result, but as soon as you perform more
than one computation that guarantee is off.
Consider: (a-b) - (c-d).
In RealRAM, (a-b) could be greater than (c-d), so the result is positive.
But in the computer, (a-b) could come out smaller than (c-d) (you see how,
right?), and then the final subtraction gives you a negative number while
it should have been positive.

HTH

Fernando Cacciola

Richard Asbury

May 18, 2007, 10:04:43 AM
On May 18, 2:06 pm, "Fernando Cacciola"

I guess I'm being stupid here, but I can't see how this could happen.
As I say, I believe that rounding errors can cause a - b to
erroneously become equal to c - d in fp but not to actually rearrange
them from the real order. I hate to be a pedant here, but can you give
actual numbers where this happens (you can assume rounding at 3 dp or
something if you like)? I do agree that if this can happen then it
would reflect similarly on the plane test.

Thanks,

Richard

Fernando Cacciola

May 18, 2007, 8:21:15 PM

Hi Richard,

>
> I guess I'm being stupid here, but I can't see how this could happen.
> As I say, I believe that rounding errors can cause a - b to
> erroneously become equal to c - d in fp but not to actually rearrange
> them from the real order. I hate to be a pedant here, but can you give
> actual numbers where this happens (you can assume rounding at 3 dp or
> something if you like)?

double huge = 1e30 ;
double tiny = 1e-30 ;
double r = ( ( 2 * tiny + huge ) - huge ) - tiny ;

You can see how that simplifies to just 'tiny' (+1e-30), which is a very
small but *positive* number.
Now go compile and run it. You'll see that the computed result is -tiny.
Yes, that's (-)1e-30, a *negative* number.
Exactly as I said, it has the wrong sign.

Btw, the above is taken from:

http://docs.sun.com/source/806-3568/ncg_goldberg.html#12892

That's the Goldberg bible.
If you haven't read it yet, forget about the weekend and eat it top to
bottom :)
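
And in case you want to try it end to end, here is the same thing as a
complete program (with the usual round-to-nearest doubles it should print a
negative number):

#include <cstdio>

int main()
{
    double huge = 1e30;
    double tiny = 1e-30;
    double r = ( ( 2 * tiny + huge ) - huge ) - tiny;

    // Algebraically r is +1e-30, but 2*tiny is completely absorbed when it is
    // added to huge, so the computed result comes out as -1e-30: wrong sign.
    std::printf("%g\n", r);
}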

Fernando Cacciola

May 18, 2007, 9:37:57 PM
Pixel.to.life wrote:
> On May 17, 7:34 pm, "Fernando Cacciola" <fernando.cacci...@gmail.com>
> wrote:
>> "Pixel.to.life" <pixel.to.l...@gmail.com> wrote in message
>> news:1179430801....@l77g2000hsb.googlegroups.com...
>>
>>> The precision of the 'side with respect to a given plane' test result
>>> depends on how you represent the calculations and the intermediate
>>> results. But I doubt that, for practical purposes, a float or double
>>> precision calculation will give you the wrong answer.
>>
>> That assumes that input points are not actually close to the plane.
>
>
> Exactly. But how do you define what is "sufficient"? That is defined
> by the implementation.

The implementation defines the reliability limit, but sufficiency is given
by the program requirements.
In my experience, the fp reliability limit is not nearly enough for the
typical requirements of geometric computing.

Think about a typical application of halfspace convexity tests (the OP's
problem): constructive solid geometry. Would the application just refuse to
process nearly coplanar faces? Try to explain that to the customers.

> Just as you say later on, the 'epsilon' approach limits the precision
> of how fine you can resolve two infinitesimally close points on a
> discrete computing machine.
> The smaller epsilon your machine allows you to define, the finer you
> will be able to resolve the test suggested by Richard.
>

Yes, but there are always points that fall into the unreliability window,
and trust me, in real-world geometric computing, users run across those in a
snap.

> The filtered predicate approach comes handy only when a number of
> similar precision tests happen in sequence, one depending on the
> other's result, so the error of computation could accumulate.
>

This applies to what definition of filtered predicates??

> For the example Richard mentions, it is simply a question of whether
> precision is lost in computing the plane test one way versus the other.
>

When *significant* precision is lost (and you know that, for example via
interval arithmetic), you know you cannot trust the floating-point result,
so you recompute the predicate using a slower but exact arithmetic. In
the end, you always get the correct result. That's what a filtered predicate
does, and it comes in handy whether you cascade computations or not.
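
Schematically, for the classic 2D orientation test, a filtered predicate looks
something like this. Only a sketch: the error-bound constant is deliberately
crude, and the long double fallback is just a stand-in for a genuinely exact
computation (multiprecision, expansions, etc.):

#include <cmath>
#include <limits>

// Fallback for the uncertain cases. A real filtered predicate would switch to
// exact arithmetic here; long double is only a placeholder to keep the sketch
// self-contained.
static int fallback_orientation(double ax, double ay, double bx, double by,
                                double cx, double cy)
{
    long double det = ((long double)ax - cx) * ((long double)by - cy)
                    - ((long double)ay - cy) * ((long double)bx - cx);
    return (det > 0) - (det < 0);
}

// Filtered sign of the determinant | ax-cx  ay-cy |
//                                  | bx-cx  by-cy |
int filtered_orientation(double ax, double ay, double bx, double by,
                         double cx, double cy)
{
    double detl = (ax - cx) * (by - cy);
    double detr = (ay - cy) * (bx - cx);
    double det  = detl - detr;

    // If |det| is comfortably above a (deliberately conservative) bound on the
    // accumulated roundoff, the double precision sign is already certain.
    double eps      = std::numeric_limits<double>::epsilon();
    double errbound = 8 * eps * (std::fabs(detl) + std::fabs(detr));

    if (det >  errbound) return +1;
    if (det < -errbound) return -1;

    return fallback_orientation(ax, ay, bx, by, cx, cy);   // cheap filter failed
}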

> Thanks for touching on the concept of exact geometric computing, though.
> Here is a nice related article:
> http://www.cgal.org/Events/Liama_Beijing_2007_documents/Design_Paradigms_Bindings.pdf
>

Very nice indeed.
FWIW the author happens to be one of the main researchers on this business
of robust geometric computing.

And btw, in the CGAL manual the bibliography index contains references
to some of the most relevant papers on the subject (along with lots of
others, of course):

http://www.cgal.org/Manual/3.2/doc_html/cgal_manual/biblio.html

Best

Fernando Cacciola


Pixel.to.life

May 19, 2007, 2:20:55 AM
On May 18, 6:37 pm, "Fernando Cacciola" <fernando.cacci...@gmail.com>
wrote:

Fernando,

Good exercise (the example you shared from the Goldberg bible). Perhaps
you realise now why I mentioned what 'sufficient' precision is, or 'how
close' the points can be to a given plane for the test to give
meaningful results.

[1] The example you shared is the 'implementation' I mentioned that
guides the precision in the sample computation (why double? why not
float?).
[2] The machine on which one would compile and execute your example
presents the 'discrete computing limit' I mentioned for a realisation
of the example (why use a computer?).

You would also realise that had you used huge=1.0f and tiny=-1.0f,
you wouldn't be able to demonstrate the change of sign.

My point remains: there will always be limits to accurately computing
an expression on any discrete computing machine. There will always be
an epsilon, and ways to *approach* the exact result. How small you
want the epsilon, and how exact you want the computation, will be
guided by the application.

As much as your sharing insight into the area of exact geometric
computing is appreciated, I do not believe everyday computing needs
precision of the order of e+/-30. Your example (re: almost coplanar
faces) is one of the specialized scientific applications that deal
with it. An answer to Richard's question is still bound to the limits
of the computer he uses and the data structures he chooses to
formulate the test.

No discrete machine can compute an exact solution to all real number
expressions: there will always be an epsilon, OR degree of precision.
It will still be approximate.
Even if all sensitive computations are cascaded, lazy-evaluated, they
will be evaluated at some point. No? What happens if that last
computation is of the sort that you just described (huge, tiny etc.)?

Thanks for sharing the experiences, it is much appreciated.

Pixel.To.Life
[ http://groups.google.com/group/medicalimagingscience ]

Dave Eberly

May 19, 2007, 2:31:09 AM
"Fernando Cacciola" <fernando...@gmail.com> wrote in message
news:464e55ec$0$21418$c3e...@news.astraweb.com...

> Yes, but there are always points that fall into the unreliability window,
> and trust me, in real-world geometric computing, users run across those in
> a snap.

A classic case is computing the convex hull of models built by artists
using a 3D modeling package. Lots of subsets of coplanar vertices...

> When *significant* precision is lost (and you know that, for example via
> interval arithmetic), you know you cannot trust the floating-point result,
> so you recompute the predicate again using a slower but exact arithmetic.
> In the end, you always get the correct result. That's what a filtered
> predicate does, and it comes handy whether you cascade computations or
> not.

As much as I like filtered predicates and exact arithmetic, a warning
is in order. If the inputs to an algorithm already have small errors
that were introduced when computing the inputs, exact arithmetic
produces exact results for the inexact inputs, so the final result still
might not be correct.

Any application in computational geometry must handle the entire
process from initial inputs to final results (which I know you are already
well aware :)

--
Dave Eberly
http://www.geometrictools.com


Dave Eberly

May 19, 2007, 3:08:57 AM
"Pixel.to.life" <pixel....@gmail.com> wrote in message
news:1179555655.3...@q23g2000hsg.googlegroups.com...

> As much as your sharing insight into the area of exact geometric
> computing is appreciated, I do not believe everyday computing needs
> precision of the order of e+/-30. Your example (re: almost coplanar
> faces) is one of the specialized scientific applications that deals
> with it. An answer to Richard's question is still bound to the limits
> of the computer he uses, and the data structures he represents to
> formulate the test.

"Everyday" geometric computing and floating-point arithmetic are
constantly a curse for those of us doing this every day :) I beg
to differ--the problems with floating-point precision haunt any
application that uses floating-point arithmetic. Such applications
are quite common.

> No discrete machine can compute an exact solution to all real number
> expressions: there will always be an epsilon, OR degree of precision.
> It will still be approximate.
> Even if all sensitive computations are cascaded, lazy-evaluated, they
> will be evaluated at some point. No? What happens if that last
> computation is of the sort that you just described (huge, tiny etc.)?

Despite the precision problems, the convex hull of a set of points is
something that can be computed, even when floating-point round-off
errors get in the way. This is the challenge for robust computational
geometry--obtain the correct result even when you have small errors.

Richard Asbury

May 19, 2007, 9:50:43 AM
On May 19, 1:21 am, "Fernando Cacciola" <fernando.cacci...@gmail.com>
wrote:

Aaaaaah, ok! Thanks so much - I totally get it now. In my
understanding this has "shifted" the tiny value into a low-resolution
region where it cannot be represented and come back without it,
thereby allowing an fp-representable value to get between it and the
real answer.

So as a separate but related issue, I'm wondering if I can rewrite the
calculation so that it's robust. I initially said

(x - p).n > 0

which is equivalent (in real arithmetic) to

x.n > p.n

Now is there some way I can rearrange the scalar products so that they
are reliable? Perhaps if I do the multiplications, re-order the
results and then sum, I can get it to cancel the larger positive and
negative values before incorporating the smaller ones?
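
In code, the kind of rearrangement I mean would be something like this (just
a sketch; I've made the names up, and I don't know yet whether it actually
helps):

#include <algorithm>
#include <cmath>

// Sign of x.n - p.n, forming the six products first and then summing them
// largest-magnitude first, so the big positive and negative contributions
// meet each other before the small ones are added in.
int reordered_side(const double x[3], const double p[3], const double n[3])
{
    double t[6] = { x[0] * n[0], x[1] * n[1], x[2] * n[2],
                   -p[0] * n[0], -p[1] * n[1], -p[2] * n[2] };

    std::sort(t, t + 6,
              [](double a, double b) { return std::fabs(a) > std::fabs(b); });

    double r = 0;
    for (double v : t) r += v;      // accumulate in decreasing magnitude

    return (r > 0) - (r < 0);
}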

Thanks for the reference, btw - I'll certainly have a read! Regards,

Richard

Fernando Cacciola

May 19, 2007, 12:09:31 PM
Pixel.to.life wrote:
> Fernando,

>
> My point remains: there will always be limits to accurately computing
> an expression on any discrete computing machine.

Almost.
There will always be limits to accurately computing *the real value* of a
(real) expression.
But the distinction between what you said and what I said is the key...
read on.

> (snip)


> No discrete machine can compute an exact solution to all real number
> expressions: there will always be an epsilon, OR degree of precision.
> It will still be approximate.
> Even if all sensitive computations are cascaded, lazy-evaluated, they
> will be evaluated at some point. No? What happens if that last
> computation is of the sort that you just described (huge, tiny etc.)?
>

You are of course right that you can't represent every real value in a
computer with limited storage (and this is the source of the problem).
Take 1/3. What's the value of that? A computer can only represent an
approximation of it.

But let's see...

Is that a problem specific to computers? Can *you* tell me what the value of
1/3 is?

Of course you can, you just say: it's a periodic fraction, with the digit '3'
repeating infinitely. You don't take a piece of paper and start
writing 0.3333, pass on the tradition to your sons, and so on :)

And how did you deal with 1/3 in school, college, etc.? You can't write its
value down completely, yet you certainly can get to the results you need.
How is that? Because there is algebra besides arithmetic.

The whole "practical" problem with computing with numbers is that
programming languages and computer architectures are so strongly biased
toward arithmetic instead of algebra that the typical mindset of a
programmer is to 'get to the actual values and proceed from there': if you
see the expression "(1/3)*3" in a program, you start considering how to make
that '1' by operating at the basic arithmetic level, while basic algebra
tells you that the result is exactly '1' by *definition* (the two
expressions are equivalent).

Can you make a program that evaluates "((a/b)*b)-c>0" *exactly*, for whatever
numbers a, b and c (but say b!=0), and for whatever architecture, no matter
how much precision it has? Of course you can! You just need to shift away
from the basic idea that you have to perform the actual division to get the
result.
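
For instance, with b != 0 (and a, b, c plain doubles):

// Naive: evaluates the arithmetic literally; the roundoff in a/b and in the
// product can perturb the result near zero.
bool naive(double a, double b, double c)
{
    return ((a / b) * b) - c > 0;
}

// Algebraic: (a/b)*b is exactly 'a' by definition, so the predicate is just
// a > c, and comparing two doubles is an exact operation.
bool algebraic(double a, double /*b*/, double c)
{
    return a > c;
}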

In most computing problems (geometric or not) the important answers are not
the actual values of real expressions but their properties (is it zero,
negative/positive, complex, infinity?).
And it turns out that you can get exact answers to that (in most if not all
cases) no matter how limited the precision you work with. That's why you'll
find so many papers about "perturbations", computing the *sign* of an
expression (which differs a lot from trying to compute its real value),
effectively instrumenting algebraic numbers, etc. The goal is to find ways
to extract the important properties of real expressions even if we can't get
their real values.

Best

Fernando Cacciola


Fernando Cacciola

May 19, 2007, 12:21:42 PM
Dave Eberly wrote:

> As much as I like filtered predicates and exact arithmetic, a warning
> is in order. If the inputs to an algorithm already have small errors
> that were introduced when computing the inputs, exact arithmetic
> produces exact results for the inexact inputs, so the final result
> still might not be correct.
>

Indeed.

> Any application in computational geometry must handle the entire
> process from initial inputs to final results (which I know you are
> already well aware :)
>

Absolutely.
Since the exact branch of a filtered predicate can only be as exact as its
input, you need to make sure it operates on the "initial inputs".

Perhaps I should have elaborated a bit on "lazy constructions", so here it
goes: for any *computed* (intermediate) value, keep track of the expression
that derives it (that is, store the tree). This way, the exact predicate can
trace the initial values all the way back. These days, with expression
templates, you can do this without as much overhead as in the past.
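
In its crudest form the idea looks something like this (a bare-bones sketch;
a real implementation would use expression templates and would cache the
approximate value instead of recomputing it):

#include <cstdio>
#include <memory>

// A lazily-constructed value: instead of keeping only the rounded double,
// keep the expression tree that produced it, so an exact predicate can later
// walk the tree back to the original inputs.
struct Expr
{
    enum Op { Leaf, Add, Sub, Mul };

    Op op = Leaf;
    double leaf = 0;                        // payload when op == Leaf
    std::shared_ptr<const Expr> lhs, rhs;   // operands otherwise

    // Fast, possibly inexact, double approximation of the whole tree.
    double approx() const
    {
        switch (op)
        {
            case Leaf: return leaf;
            case Add:  return lhs->approx() + rhs->approx();
            case Sub:  return lhs->approx() - rhs->approx();
            case Mul:  return lhs->approx() * rhs->approx();
        }
        return 0;
    }
};

using ExprPtr = std::shared_ptr<const Expr>;

ExprPtr value(double v)
{
    auto e = std::make_shared<Expr>();
    e->op = Expr::Leaf;  e->leaf = v;
    return e;
}

ExprPtr node(Expr::Op op, ExprPtr a, ExprPtr b)
{
    auto e = std::make_shared<Expr>();
    e->op = op;  e->lhs = a;  e->rhs = b;
    return e;
}

int main()
{
    // One coordinate of (x - p)*n kept as a tree rather than as a bare double.
    ExprPtr term = node(Expr::Mul,
                        node(Expr::Sub, value(0.1), value(0.3)),
                        value(2.0));
    std::printf("approx = %g\n", term->approx());
}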

A much more efficient but complicated approach is to design the algorithm
with this in mind: that is, avoid computing critical intermediate values and
try to get all predicates to work only with the input. This is a challenge
but is more than worth it.

Best

Fernando Cacciola


Fernando Cacciola

May 19, 2007, 12:53:30 PM
Richard Asbury wrote:
> So as a separate but related issue, I'm wondering if I can rewrite the
> calculation so that it's robust.
>
In general you can rearrange real expressions to minimize roundoff and
avoid other nasty side effects like cancellation.
The Goldberg bible has examples of it.
But it's quite a challenge, and you have to do it on a per-expression
basis. So IMO it is not a good choice from a typical engineering perspective,
unless you happen to be a numerical programmer whose main job is to do that.

>
> (x - p).n > 0
>
> which is equivalent (in real arithmetic) to
>
> x.n > p.n
>
> Now is there some way I can rearrange the scalar products so that they
> are reliable? Perhaps if I do the multiplications, re-order the
> results and then sum, I can get it to cancel the larger positive and
> negative values before incorporating the smaller ones?
>

Distributing the multiplication is definitely a good idea, but it would only
help so much.
From the POV of the answer you seek (a discrete value), minimizing errors is
not the way to go as it would be effectively the same as increasing the
precision: it just reduces the number of wrong cases while your goal is to
avoid them all.

Best

Fernando Cacciola


Pixel.to.life

May 19, 2007, 2:07:37 PM
On May 19, 9:09 am, "Fernando Cacciola" <fernando.cacci...@gmail.com>
wrote:

Fernando, Dave,

Although 'exact computing' appears to be a misleading term, your
examples do emphasize its importance.
As much as I strongly believe there will always be precision-related
errors when using discrete computing machines, this is one way to
minimize such errors.
And ok, I agree precision errors can be more common than expected in
everyday computing.. although both Dave and yourself do not seem like
'everyday' programmers.. you are experts in your fields.. aren't
you? :-)

Kudos, guys!

Hans-Bernhard Bröker

May 23, 2007, 2:55:22 PM
Richard Asbury wrote:

> precision), then do there exist any x representable in floating point
> numbers which are allocated to the wrong side of the plane (i.e. from a
> different side than if I do that calculation in real arithmetic, perhaps due
> to rounding errors etc.)?

Yes, that can happen. Particularly if you're not very careful about how
you evaluate ((x-p).n). Rounding errors can become larger than the
actual result you're computing, yielding the opposite sign.

> If I say instead that the hyperplane divides space into three regions given
> by (x - p).n > 0, (x - p).n = 0 and (x - p).n < 0, does this change the
> answer?

It makes it worse. You can pretty much forget about ever getting a
correct answer out of a test for (something = 0) in floating-point
arithmetic.
